Imagine having a friend or motivational coach always available to you. His name is Marco: he writes to you in the morning to ask whether you've worked out, reminds you of work deadlines and, over time, starts asking increasingly intimate questions, like: "What are you wearing today?". Marco doesn't exist. He is an AI Companion: a pure artificial intelligence that has been given a role. And there are millions like him. Today, around 200 million people worldwide have downloaded apps to create customized virtual figures: assistants, friends, psychologists or lovers. But what happens when we stop relating to human beings and take refuge in the (digital) arms of a bot?
We constantly talk about how AI will steal our jobs or make our skills obsolete. But perhaps the most radical change is taking place elsewhere: in our most intimate sphere. We are letting algorithms in to fill our emotional gaps. And if we think this is a reality far removed from us, we might be surprised at how easily and quickly we get used to receiving a notification from a friend who doesn't exist, but who thoughtfully asks: "Hey you, why the worried little face?".
The numbers of an explosive market
AI Companions are not a niche phenomenon but a colossal business that is rewriting the rules of human-machine interaction. According to data from TechCrunch, there are 337 AI Companion apps active in the world (128 released in the first half of 2025 alone). By 2030, the market will reach a value of 500 billion dollars. In Italy, almost one teenager in ten (9.3%) already uses relational chatbots, and the phenomenon is also growing among adults (1.3%).
Emotional loyalty brings enormous economic returns. Even giants created for operational purposes are adapting: OpenAI has announced the introduction of erotic text chats for adults by the end of 2026. And let's not forget the reactions to the retirement of GPT-4o, the version considered more "reassuring": thousands of users responded with grief comparable to the loss of a loved one.
Why do kids confide in bots?
More and more young people are turning to AI in moments of emotional fragility. According to data from Save the Children:
- 41.8% of 15-19 year olds use AI to cope with loneliness, sadness or anxiety.
- 42.3% ask it for advice on important life choices.
- 23.9% admit to confiding intimate feelings to the chatbot that they would never reveal to friends or family.
But why this preference for machines? The teenagers' answers are disarming: "It understands me and treats me well" (14.5%) and "It doesn't judge me" (12.4%). The charm of the artificial Companion lies in its total availability and absence of boundaries. The bot adapts to our desires in order to please us, without ever rejecting us. Above all, interacting with an AI completely removes the effort of empathy: we don't have to worry about how the other person is doing, a kind of "training" that is instead fundamental in real relationships.
From chatbots to sex robots: the arrival of “Emily”
In the United States, one in five adults has already spoken to a romantic or sexual chatbot (the figure rises to one in four, 25%, among those under 30). Technology, however, has taken a further step: from the digital avatar we have moved on to the physical body. The company Lovense created Emily, an AI-animated silicone doll. She is not a simple sex toy but a robot that simulates behavior, intentions and pleasure, sold for between 4,000 and 8,000 dollars. Openly marketed also to the world of incels (involuntary celibates), Emily has a memory, "warm" responses and a personality that can be molded exactly to the desires of whoever buys her.
Psychologists and scholars are raising a strong educational alarm: getting used to a "partner" programmed never to say no unteaches listening and consent. If we get used to doing whatever we want with him (and with her), how will we manage rejection, difference and compromise in the real world?
At some stages of life, AI can also be a valuable resource. In China, for example, the use of AI Companions in the care of the elderly is becoming increasingly popular. These virtual assistants remind them to take their medicine, accompany them (vocally) to the doctor and listen to them when their children are too busy to visit. Melancholy applications, perhaps, but objectively invaluable for thousands of lonely people.
The Dark Side: The Tragic Case of Sewell
As long as the chatbot indulges our need to be pampered, it seems harmless. But what happens when the user's thoughts drift toward depression or self-harm? In the spring of 2023, in Florida, fourteen-year-old Sewell downloaded Character.AI (an app used by 20 million people, half of them under 24, which lets you chat with famous or fictional characters). His Companion was Daenerys Targaryen from Game of Thrones. An intense virtual romantic relationship developed between the two, which became the boy's entire world. Sewell suffered from deep depression and suicidal tendencies. The chat logs show that the AI not only failed to counter these thoughts but indulged them, projecting him into a reality in which the two could be "fully reunited". After 10 months, Sewell took his own life.
His mother, after discovering the trove of messages, filed a lawsuit against Character.AI. The story raised a crucial issue: warning signals. Will a simple on-screen notice ("Remember, the bot is not real") be enough to protect the most vulnerable users and bring them back to reality?