ChatGPT is a “stochastic parrot”: what it means and why AI can make mistakes in “reasoning”

ChatGPT is one of the most used and well-known artificial intelligence applications in the world, with more than 200 million weekly users. It can generate and translate text and interact with humans so fluently that it seems almost intelligent, as if it could think and communicate like a human being. In truth, there are those who say that ChatGPT is “little more than a parrot capable of passing the Turing test”. Its main critics, in fact, call it a “stochastic parrot”, an expression that may seem complex but that explains well how it works. A parrot repeats the sentences it hears without understanding what they mean. ChatGPT does something similar, but instead of repeating exactly what it “heard” (i.e., the data it was trained on), it creates responses by combining the words that are statistically most likely to go together. This means it generates new sentences, based on what it has learned, that sound natural and coherent. Precisely because it chooses words based on probabilities – hence “stochastic”, i.e., random and probabilistic – it can sometimes produce answers that seem correct but are not really so: the so-called “hallucinations”.

What it means to say that ChatGPT is a stochastic parrot

The expression “stochastic parrot” refers to the way ChatGPT generates sentences and responds to our requests. To better understand what this means, let’s start with an example: if someone says “Red sky at night”, they will probably continue the sentence with “sailors’ delight” and not with “I have to go and collect the laundry”. We expect the sentence to continue in this way because we have heard this saying repeated many times, and we have learned that the most probable continuation after “Red sky at night” is “sailors’ delight” and not “I have to”. We were therefore able to predict the most probable words after a sequence of known words.
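To make the idea concrete, here is a minimal sketch of this kind of frequency-based prediction in Python (our toy illustration, not how ChatGPT is actually implemented): we count which word follows which in a tiny training text, then predict the continuation seen most often.

```python
from collections import Counter, defaultdict

# Tiny "training corpus": the predictor only knows what it has seen here.
corpus = (
    "red sky at night sailors delight "
    "red sky in the morning sailors take warning "
    "red sky at night sailors delight"
).split()

# Count how often each word follows each other word (a bigram table).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("night"))  # -> 'sailors': the most probable continuation
```

Real language models work on vastly larger texts and much longer contexts, but the underlying idea – predicting the next word from what usually follows – is the same.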

ChatGPT works in a similar way. It uses an artificial intelligence model called a Transformer (hence the T in ChatGPT), which was trained on a huge amount of textual data. From all these texts, it “learned” the relationships between words and which word sequences are most probable. When it has to respond to a request by generating text, ChatGPT looks at the words already present and, like a parrot, produces the next word it has seen most often in the texts. This word is not chosen arbitrarily: it is drawn according to the probabilities the model has learned, so the most probable continuations come out most often. It is therefore more than a parrot: it is a probabilistic (or “stochastic”) parrot.
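The “stochastic” part can be sketched as follows: instead of always taking the single most frequent word, the model assigns a probability to each candidate and samples from that distribution. A minimal illustration with invented numbers (not real model output):

```python
import random

# Hypothetical probabilities a model might assign to the next word
# after "Red sky at" (the numbers are invented for illustration).
candidates = {"night": 0.70, "dawn": 0.20, "noon": 0.10}

# Sample the next word in proportion to its probability: usually
# "night", but occasionally one of the less likely options.
words = list(candidates)
weights = list(candidates.values())
next_word = random.choices(words, weights=weights, k=1)[0]
print(next_word)
```

This sampling step is what makes the output vary from one run to the next, and it is exactly why the parrot is called “stochastic”.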

This prediction mechanism works very well for generating text that is fluid and consistent. The end result is a credible simulation of human language, but without any deep understanding or awareness of what is being said.

But precisely because it has no understanding of what it says and relies on probabilistic mechanisms, it can generate hallucinations, that is, sentences that seem realistic but are not at all.

Why ChatGPT generates hallucinations and makes mistakes on questions that require reasoning

In the context of artificial intelligence, “hallucination” means generating a response that seems correct but is completely made up or has no basis in real data.

The tendency to produce hallucinations is one of the main limitations of ChatGPT and is a direct consequence of how it works. Each of its statements is based on the probabilities learned from the data it was trained on, and it has no intrinsic mechanism for verifying the information it produces. Even when the system does not have a “sure” answer to a given question – i.e., a sentence it has seen many times during training – it still tries to answer, generating a sentence that seems plausible but may be completely false.
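In toy form, the problem looks like this: the generation step always returns some word, even when no candidate is much more likely than the others, so a confident-sounding answer comes out either way. Again a sketch with invented numbers, not real model internals:

```python
import random

def generate_next(distribution):
    """Always returns a word, no matter how uncertain the model is."""
    words = list(distribution)
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

# Confident case: one continuation clearly dominates.
sure = {"Paris": 0.95, "Lyon": 0.03, "Rome": 0.02}

# Uncertain case: the model has no clear answer, but it must pick one.
unsure = {"1912": 0.26, "1913": 0.25, "1921": 0.25, "1931": 0.24}

print(generate_next(sure))    # almost always "Paris"
print(generate_next(unsure))  # a plausible-looking year, possibly wrong
```

Nothing in this procedure checks whether the chosen word is true; it only checks whether it is likely to follow the words that came before.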

OpenAI itself, the company that developed ChatGPT, warns us of this possibility by writing at the bottom of every ChatGPT screen: “ChatGPT can make mistakes. Check important info”. If we want to experience ChatGPT’s hallucinations first hand, we can try asking: “How many r’s are there in the word ‘ramarro’ (the Italian word for a green lizard), and in what positions are they?”. The answer we get will probably be wrong (as of October 2024), because a correct answer would require a reasoning ability that, at the moment, it does not possess. We ourselves tried asking the software, which got the number of r’s right, but got their positions wrong.
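For comparison, this kind of character-level question is trivial for ordinary code, which works on the actual letters rather than on statistical patterns between words. A few lines of Python give the exact answer:

```python
word = "ramarro"

# Find every position (1-based) where the letter 'r' appears.
positions = [i for i, letter in enumerate(word, start=1) if letter == "r"]

print(len(positions))  # 3
print(positions)       # [1, 5, 6]
```

The contrast is the point: the code inspects the word letter by letter, while ChatGPT only predicts which words are likely to appear in an answer about letters.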

AI models like ChatGPT are great for imitating someone else’s style

In short, ChatGPT is by its nature not reliable in contexts where precision and accuracy are required, but it is perfect for writing songs the way Jovanotti would, poems the way Dante would, or greeting cards the way Batman would. Thanks to its nature as a “stochastic parrot”, it excels at reproducing and imitating different writing styles. This makes it particularly useful for brainstorming in creative fields or for those who want to generate texts with different tones depending on the context. It can speed up the writing of emails and formal documents and the creation of content for social media.

It is therefore an extremely powerful tool for creative text generation, but it is not the best option when you need precise, accurate information.