Can AI replace a psychologist?

Can AI chatbots really replace a psychologist? Short answer: no, it is not a good idea to replace your psychologist with AI. AI lacks real empathy, does not pick up on non-verbal cues and does not have the sensitivity needed to ask the right questions. However, it can act as "psychological first aid" when you don't have immediate access to a professional.

Over the past year, several studies – such as those from Stanford University and King's College – have approached this issue from different perspectives. In extreme contexts, such as war zones or areas without access to mental health care, chatbots designed specifically for therapy can offer temporary support. However, generic language models (such as ChatGPT, Meta AI, Gemini…) present risks: in some cases they have reinforced dangerous behaviors, shown stigmatizing attitudes towards those suffering from mental disorders and indulged users' delusional beliefs, fueling psychosis.

The use of chatbots as psychological support is an extremely topical issue. With nearly a billion active users, AI-powered chatbots are now part of our everyday lives. More and more people, especially young people, use them to talk about their emotional problems and receive psychological support. After all, these tools seem ideal: they speak like us, are always available, do not judge us and give us answers that appear objective.

Let us therefore look in more detail at the data on our use of chatbots, their potential and the associated risks.

How much we use ChatGPT as a "psychologist friend": the data

According to a Sensor Tower report, in the last year the requests made to chatbots regarding lifestyle topics (including health, nutrition or advice on social relationships) increased by 12.6 percentage points compared to 2024, going from 22% to 34.6% of total requests. Additionally, 9 out of 10 of the most searched topics fall into the "lifestyle & entertainment" category, with evident increases for well-being (+1.9% compared to 2024), politics (+1.7%) and relationships (+0.6%). AI has become a sort of "digital friend" to ask for advice on everything from diets to relationship problems.

This trend is accentuated by the global shortage of mental health professionals, which leaves millions of people without adequate support. In response, thousands of accessible and often free AI-based therapy apps have emerged.

AI as immediate psychological support

When accessing a psychologist is difficult or impossible, therapeutic chatbots can offer valuable help. A study conducted in 2025 by Kiev University evaluated the effectiveness of Friend, a chatbot designed to provide psychological support in crisis contexts. The research involved 104 Ukrainian women living in war zones and suffering from anxiety disorders. The results compared two approaches: traditional therapy (three sessions per week) and daily use of the chatbot. Both interventions led to a significant decrease in anxiety: between 45% and 50% in the group followed by psychologists and between 30% and 35% in the one assisted by the chatbot.

Although less effective than classic psychotherapy, the chatbot proved to be a useful, rapid and accessible tool, especially in emergency contexts or where human resources are scarce. Traditional therapy remains superior in terms of emotional depth, adaptive capacity and relationship quality, but AI-based tools can still offer an immediate first intervention capable of alleviating discomfort and helping people not to feel alone.

Of course, these benefits refer to tools designed specifically for therapeutic purposes, not generic models like ChatGPT and Gemini, which were created for different purposes. It is best to use those only to put your thoughts in order, or not to use them at all for therapeutic purposes. Let's see why.

The risks: stigma, inappropriate responses and psychosis

A recent study by Stanford University highlighted how classic language models, even the most advanced, can provide inadequate or harmful responses when used as psychologists. According to the study, chatbots have shown stigmatizing attitudes towards those suffering from mental disorders and, in some cases, have indulged users' delusional beliefs, probably due to their inclination to confirm any statement they receive.

A further alarm bell comes from a preliminary King's College study, published as a preprint in July 2025, which investigates the link between the use of chatbots and the onset of psychotic episodes in vulnerable people. The study still has to go through all the review stages, but, according to the data collected, 17 individuals experienced psychotic symptoms after interactions with models like ChatGPT and Microsoft Copilot. Although the idea of an AI-induced "psychosis" is still a hypothesis under investigation, the King's College team suggests a possible underlying mechanism: a vicious circle of mutual reinforcement, in which the chatbot, responding consistently to the user's statements, ends up consolidating paranoid or delusional beliefs. Simulations conducted with different levels of paranoia have shown how the interaction between user and AI can progressively intensify the distortion of reality.