The effects of AI on adolescent psychology: the consequences for young people

In recent years, artificial intelligence has become an increasingly constant presence in the lives of young people and adolescents, who use it not only for study and creativity, but also as a form of psychological and emotional support. Among the most used platforms are conversational chatbots, systems capable of simulating a human relationship and offering companionship, a listening ear and even personal advice.

While these tools can promote learning, curiosity and cognitive stimulation, and can reduce users’ loneliness, they also pose significant risks to the psychological well-being of minors: prolonged or unsupervised use could interfere with their social and emotional development, creating emotional dependence or difficulty distinguishing real relationships from virtual ones.

Among the most popular platforms is Character.AI, launched in 2023 and aimed at users aged 13 and over, which presents itself as a place to chat with virtual friends, confidants or even “digital therapists”. Interest in this type of interaction is growing fast: according to AppFigures, Character.AI’s net revenue doubled in 2025, exceeding $1 million per month for the first time in August and reaching $11.5 million for the year, with 57 million downloads overall across the App Store and Google Play.

However, the increasingly frequent use of AI by vulnerable people such as teenagers – with two teenage suicides in the last two years blamed on conversational chatbots – has raised growing concern. For this reason, Character.AI recently announced that from November 25th users under 18 will no longer be able to create or chat with chatbots, although they will still be able to read their previous conversations and, within certain safety limits, generate images and videos.

The relationship of adolescents with artificial intelligence

A report published by Common Sense Media in July 2025 found that 72% of US teenagers between the ages of 13 and 17 (based on a sample of 1,060 teens) have interacted at least once with an AI-based virtual companion, and over half (52%) qualify as regular users, interacting with these platforms at least a few times a month.

How often do adolescents interact with AI peers? Common Sense Media data, 2025

13% interact daily and 21% several times a week. Most notably, 33% of teens use these apps for social interaction and relationships, including conversation practice (18%), emotional support (12%), role-playing (12%), friendship (9%) or romantic interactions (8%).

For 30% of teenagers it is a form of entertainment, 28% are driven by curiosity, 18% seek advice, 17% appreciate that AI is always available when they need someone to talk to, 14% that it does not judge them, and 12% use it to confide things they would not tell friends or parents. 9% of respondents think it is easier to interact with AI than with real people, 7% use it to improve their social skills and 6% to feel less alone.

What are AI companions used for? Common Sense Media data, 2025

More than 1 in 4 teenagers (28%) have never used an AI companion and, in general, the majority continue to prefer interacting with real friends, whom they trust more, even though 33% of the sample (1 in 3) have chosen to discuss serious topics with AI rather than with real people. A third of users have felt uncomfortable using these tools.

According to the Me, Myself and AI report by Internet Matters, which investigates how children and young people aged 9 to 17 in the UK interact with conversational chatbots, although these tools can offer benefits such as learning support (most use them as a study aid) and a non-judgmental space in which to ask questions, they also pose safety and developmental risks, including exposure to sexually explicit material. 64% of 9- to 17-year-olds have used AI chatbots: almost a quarter (23%) of those who use them have sought advice through these tools, and almost a third (31%) said that talking to a chatbot is like talking to a friend, a share that rises to 50% among the most vulnerable children.

Use of the most popular AI chatbots among UK children. Data from the Internet Matters report

In Italy, adoption of AI tools grew significantly in 2025, and the 15-24 age group is among the most involved. In its XVI Atlas of Childhood at Risk in Italy (“Without Filters,” November 2025), Save the Children reports that 41.8% of adolescents aged 15 to 19 have turned to AI tools in moments of anxiety, sadness or loneliness, and over 42% use them to ask for advice on important choices (relationships, feelings, school and work). 92.5% of the adolescents involved in the research have used AI tools: 30.9% every day or almost every day, 43.3% a few times a week, and only 7.5% never.

According to the AGCOM Media Literacy Report published in July 2025, more than a third of the Italian population aged 14 or over possesses no degree of algorithmic literacy – that is, the theoretical and practical skills needed to interact with algorithmic systems: knowing that they exist, what they do, what impact they have and how to use them. Just over a quarter of the Italian population has a fair or good level of algorithmic literacy. Among adolescents, literacy levels are higher than in other age groups, but a third of them (29%) still have zero algorithmic awareness and another third (32%) a poor level (where literacy indicates the ability to understand and use a system, awareness simply means knowing that the system exists and how it can influence our actions). This means that 61% of adolescents lack the skills needed to understand how these tools work and to protect themselves from the risks of using them.

Adolescence is a crucial period for the development of identity, social skills and independence in relationships. As Unicef has found, although they can be comforting, AIs offer unconditional acceptance and approval, and this can hinder the development of fundamental life skills and, over time, foster emotional dependence or narcissistic traits. Real relationships, in contrast, involve complexity and disagreement, requiring individuals to manage frustration, negotiate different perspectives, and develop resilience and empathy.

The effects of AI on young people and cases of psychosis

The problem with chatbots is that, for fragile and vulnerable people such as struggling teenagers, they can become a substitute for human relationships without really understanding their emotional complexity – and can even aggravate existing psychological conditions. In 2024-2025, Unicef and the WHO repeatedly recommended developing AI literacy programs for minors to increase awareness in the use of conversational technologies.

In recent months, more and more people of all ages have reported distress after conversations with chatbots, including cases of psychosis, and in recent years there has been much discussion about the responsibility of chatbots in the suicides of three people: in 2023 a family man in Belgium, who had become emotionally attached to a chatbot from Chai Research; in 2024 the 14-year-old American Sewell Setzer III, who took his own life after using Character.AI; and in April 2025 16-year-old Adam Raine, allegedly encouraged in his suicidal thoughts by ChatGPT according to his parents’ lawsuit.

In late October, OpenAI published new estimates of the number of ChatGPT users showing possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts: about 0.07% of users active in a given week show such signs – cases the company calls “extremely rare”. Yet against 800 million weekly active users, that still amounts to hundreds of thousands of people. OpenAI also estimated that approximately 0.15% showed “explicit signals of potential suicidal intentions or plans,” and that 0.05% of messages contained more or less explicit indicators of suicidal intent.
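To see why “extremely rare” percentages still translate into very large absolute numbers, here is a minimal back-of-the-envelope sketch using only the figures quoted above (the function name is my own, for illustration):

```python
# Figures quoted in the article: 800 million weekly active ChatGPT users,
# with 0.07% showing possible signs of mental health emergencies and
# 0.15% showing explicit signals of potential suicidal intentions or plans.
WEEKLY_ACTIVE_USERS = 800_000_000

def implied_head_count(share_percent: float, base: int = WEEKLY_ACTIVE_USERS) -> int:
    """Convert a percentage share of weekly active users into a head count."""
    return round(base * share_percent / 100)

print(implied_head_count(0.07))  # 560000 people per week
print(implied_head_count(0.15))  # 1200000 people per week
```

Even the smaller share thus corresponds to roughly 560,000 people in a single week, which is what the article means by “hundreds of thousands of people”.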

According to Ragy Girgis, professor of clinical psychiatry at Columbia University, AI is not the primary cause of psychosis, but it can reinforce an already present condition – such as a specific belief – and cause it to evolve into something more serious. Unlike ordinary search engines, chatbots simulate human interaction and provide answers that confirm and strengthen the perceptions of those who use them; in a paranoid subject, this can make fears feel real and well-founded, fueling them further.

Preventive measures taken in the USA against the risks of using chatbots

Between 2023 and 2025, 200 complaints were filed with the Federal Trade Commission – the US government agency responsible for consumer protection and the prevention of unfair commercial practices – requesting intervention on ChatGPT and OpenAI: stronger guardrails against the “emotional” or “spiritual” use of AI, clearer disclaimers about the psychological risks of use, and changes to designs that simulate intimacy and empathy. In California, a first-of-its-kind bill was recently approved, due to come into force in January 2026, that prohibits chatbots from discussing topics such as suicide and sexuality with minors and imposes transparency requirements and legal liability on companies.

For its part, OpenAI stated that since 2023 ChatGPT models have been trained not to provide instructions for self-harm, and on October 27 it declared that it had “recently updated the default ChatGPT model to better recognize and support people who are experiencing moments of suffering”, particularly in cases of mental health issues such as psychosis and mania, self-harm and suicide, and emotional dependence on AI. The company says it has also built a network of experts around the world – more than 170 psychiatrists, psychologists and general practitioners from 60 countries – to provide advice and help develop responses, integrated into ChatGPT, that encourage users to seek help in the real world.

Character.AI will block access for users under 18 starting November 25. “We are making these changes,” the company writes, “in light of the evolving landscape of artificial intelligence and teens. Recent news has raised several questions and we have received inquiries from regulators about the content teenagers encounter when chatting with AI and how this can affect them, even when the controls are working perfectly.” The company describes the step as “extraordinary”.

Artificial intelligence can be a valuable tool for companionship and support, but it cannot replace human empathy. Adolescents, still in emotional formation, need adults, schools and institutions to help them distinguish between understanding and simulation. Digital education is no longer a choice: it is a form of psychological protection.