“Artificial intelligence” refers to the ability of machines to imitate human functions and behaviors such as learning, reasoning, generating language, solving problems and making decisions, but also to understand human beings, simulating empathy and expressing “feelings”. This is made possible by LLMs, “large language models”, which, trained on billions of texts and conversations, are able to respond in an increasingly empathetic and personalized way, to the point of establishing relationships that many perceive as authentic, on a par with human ones.
Contemporary chatbots emerged between 2016 and 2020, but it was with ChatGPT, at the end of 2022, that they became part of everyday life, marking the beginning of the era of generative conversational AI, both at work and in free time: in 2025 almost 30% of Italians had used at least one artificial intelligence tool, with ChatGPT in the lead. But what if we told you that the first chatbot dates back nearly sixty years? It was called ELIZA, and it was the experiment of a German-born computer scientist, Joseph Weizenbaum, who created it in 1966. The scientist did not intend to show how “intelligent” a computer could be, but rather the opposite: he wanted to highlight the limits of machines in truly understanding human language and emotions.
The birth and development of the first chatbot in history
After World War II, in 1950, Alan Turing asked whether machines could think. Sixteen years later, in 1966, the German-born computer scientist Joseph Weizenbaum, a professor at MIT, created what we can call the first chatbot in history, capable of presenting itself as a real person: more precisely, a psychotherapist who could interact with its “patient”.
ELIZA took its name from Eliza Doolittle, the character in George Bernard Shaw’s Pygmalion who, coached by the phonetics professor Henry Higgins, manages to pass herself off as a duchess despite being a poor flower girl with a strong working-class accent. The program was born as a very simple piece of software that gave those who consulted it the illusion of having been understood: it reformulated the user’s inputs into generic sentences and echoed them back in the form of questions, much as a Rogerian therapist would have done (from Carl Rogers’ approach, centered on promoting the patient’s self-awareness and welcoming them without judgment).
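ELIZA’s trick of keyword matching plus pronoun “reflection” can be sketched in a few lines of Python. The rules, templates and function names below are illustrative assumptions for the sake of the example, not Weizenbaum’s original DOCTOR script:

```python
import re

# Pronoun "reflections" used when echoing the user's words back:
# first person becomes second person, as in Rogerian reformulation.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# A few keyword rules in the spirit of ELIZA: each pattern captures
# a fragment of the input to be turned back into a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no keyword matches

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return a Rogerian-style reformulation of the user's input."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT
```

With these toy rules, “I feel sad about my job” becomes “Why do you feel sad about your job?”: no understanding is involved, only pattern substitution, which is exactly the structural limit Weizenbaum wanted to expose.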
Weizenbaum published an example of such a conversation in January 1966 in the paper “ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine”, which appeared in Communications of the ACM, the academic journal of the Association for Computing Machinery (ACM).
The effects of ELIZA
In Shaw’s work, Eliza succeeds in the transformation Higgins had bet on and reclaims her independence and humanity, refusing to be considered just an “experiment”. ELIZA was also a successful, or perhaps failed, experiment: Weizenbaum realized that, in dialogue with the program, people were led to open up and form an empathic bond, deluding themselves with surprising ease that they were understood. His goal, however, was not to show off the potential of these machines, but to demonstrate that real understanding on their part was impossible, due to a structural limit.
With ELIZA, the transference between doctor and patient that Sigmund Freud had described occurred for the first time in an artificial, computerized form, inaugurating what was later called the “ELIZA effect”, which grew stronger as the conversational capabilities of computers progressed. Even at the time, the chatbot proved so convincing that it worried its creator, who had grown up in Nazi Germany: Weizenbaum became convinced that an obsessive dependence on technology could be the sign of a moral failure of society, and he came to regret his invention, a bit as Robert Oppenheimer did with the atomic bomb.
His main concern was that computers would be granted a capacity for judgment, up to the delegation of moral and personal decisions. According to Weizenbaum, computerization would also limit personal responsibility and the potential of human relationships, making the world more bureaucratized and conservative. In essence, instead of starting a revolution capable of subverting repressive power structures, the scientist feared it would fuel a counter-revolution that strengthened them.
In Computer Power and Human Reason: From Judgment to Calculation (1976), he thus criticized not so much artificial intelligence as such, but those systems designed to automatically replace human decision-making. “Dependence on computers is only the most recent, and most extreme, example of how human beings rely on technology in order to escape the burden of acting autonomously,” he stated in a 1985 interview with the magazine New Age. These fears made him a “heretic” and increasingly distanced him from the scientific community involved in the study of AI.
Chatbots today: data in Italy
Data from ComScore’s MyMetrix platform tells us that in April 2025, 13 million people in Italy used at least one AI app (28% of the online population). In the first quarter of 2025, according to “Audicom – Audiweb system” data, ChatGPT was used on average by 7.2 million users every month, i.e. by 17% of the population between 18 and 74 years old who use the internet (with a peak in April of 9 million users).
The growth since 2023 has been very rapid: in April 2023 it was used by 750,000 Italians, and in April 2024 by 2.4 million; between April 2024 and April 2025 there was an increase of 266%, with a further +45% in the first four months of the year. After ChatGPT, the most used AIs are Gemini (2.8 million users in April 2025), Microsoft Copilot (2.7 million), DeepSeek (518,000), Perplexity (270,000) and Claude (158,000). Character.AI follows with 119,000 users, but with an average of 20 hours of use per month per user, particularly among young people and women.
The Pew Research Center carried out a survey on the perception and use of AI in 25 countries around the world, including Italy, and found that trust in AI is not yet very high: according to median data, 34% of people are more worried than enthusiastic, 42% are equally worried and enthusiastic, and only 16% are more enthusiastic than worried. The most worried countries are the United States, Italy, Australia, Brazil and Greece: in Italy the level of concern rises to 50%, with 37% equally worried and enthusiastic, and only 12% more enthusiastic.
Eliza’s legacy, nearly 60 years later
Contemporary chatbots have an increasingly fine-tuned ability to learn and adapt to users’ linguistic patterns, as well as their individual preferences, and can dynamically modify their responses based on the context of a conversation, thus facilitating tailored emotional support. While the latest version, GPT-5, is among the AIs with the lowest level of sycophantic behavior, giving such responses in only 29% of cases, DeepSeek-V3.1 does so 70% of the time.
As Weizenbaum’s experiment already demonstrated, it is surprisingly easy for us today to entrust our fragilities and desires to increasingly personalized AIs and to ask them for advice even on delicate issues. Why does it seem easier to confide in a machine than in another human being? First of all, AI is always available. Furthermore, unlike the people around us, we expect that it will not judge us or spread our secrets, and so we need not feel ashamed.
By tricking us into believing that we are in front of someone capable of understanding us, explains Nigel Crook, director of the Institute for Ethical AI, chatbots have “the ability to emotionally manipulate people”. Yet “AI does nothing more than predict a plausible sequence of words; that’s what it does. And it doesn’t understand that this sequence of words corresponds to something in reality.”
The more we rely on machines, the easier it will be, as Weizenbaum predicted, to delegate part of our decisions – and our free will – to them.