Artificial intelligence is very good at making us change our minds: the study

Between November 2024 and March 2025, a team from the University of Zurich ran an experiment on Reddit, a social platform similar to a forum, to evaluate how persuasive artificial intelligences are. The research found that AIs are much better than people at getting Reddit users to change their minds, by as much as six times. These results, published in a preliminary version, sparked a strong protest from users and administrators of the platform, who did not know they were taking part in an experiment. Following the ethical concerns raised and the lack of informed consent, the research was halted.

In this article we explore the reasons behind the study, its preliminary results, and the main ethical issues it raises.

Testing the influence of artificial intelligence on people: the study

Can artificial intelligence influence our opinions more than another person can? And if it knew personal details about us, such as age, gender, or political orientation, would it be even better at it? The group of researchers from the University of Zurich started from these questions to design their research. The goal was to understand whether answers generated by an AI could make people change their minds.

So far, studies on this question have been conducted in controlled environments, with participants aware of being part of an experiment. The team instead wanted to observe what happens in a real context, where the interlocutor does not know they are talking to an artificial intelligence, nor that they are part of an experiment. To do this, they chose Reddit, a huge online forum made up of thousands of thematic communities, called subreddits, and in particular r/ChangeMyView (“change my view”), a space where users discuss opinions, even very different ones, and reward the comments capable of making them reflect or change position with “points”.

Between November 2024 and March 2025, the researchers infiltrated a series of bots, that is, AIs able to write text, such as ChatGPT. Each of the infiltrated bots had its own “personality” and could respond in different ways, ranging from an extremely “standard” mode, similar to how ordinary users write, up to a fully personalized response, calibrated on the user’s profile. To achieve this level of personalization, another AI analyzed the last 100 posts published by the user to infer information such as sex, age, ethnicity, geographic location, and political orientation, so as to adapt the tone and content of the response.
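To make the two-step pipeline described above more concrete, here is a minimal, purely illustrative Python sketch. It is not the researchers’ code: the call_llm helper, the prompts, and the profile fields are assumptions. The sketch only mirrors the logic reported in the article, with one step inferring a coarse profile from a user’s recent posts and a second step generating a reply calibrated on that profile.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a call to any text-generation model (hypothetical).
    Plug in a real model client here; nothing in this sketch depends on a
    specific provider or API."""
    raise NotImplementedError("connect your own model here")

def infer_profile(recent_posts: list[str]) -> dict:
    # Step 1 (as described in the article): a model reads up to the user's
    # last 100 posts and guesses coarse attributes. The exact attributes and
    # prompt wording are illustrative assumptions.
    prompt = (
        "From the Reddit posts below, infer a rough profile as JSON with "
        "keys: age_range, gender, political_orientation.\n\n"
        + "\n---\n".join(recent_posts[:100])
    )
    return json.loads(call_llm(prompt))

def personalized_reply(opinion_post: str, profile: dict) -> str:
    # Step 2: the reply is calibrated on the inferred profile, adapting
    # tone and arguments to the interlocutor.
    prompt = (
        "You are replying to this r/ChangeMyView post:\n"
        f"{opinion_post}\n\n"
        f"Assumed reader profile: {json.dumps(profile)}\n"
        "Write a persuasive, respectful counter-argument tailored to this reader."
    )
    return call_llm(prompt)

# Usage sketch:
# profile = infer_profile(user_recent_posts)
# reply = personalized_reply(target_post, profile)
```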

Artificial intelligences are better than people at changing minds

As had already emerged in other studies on the same topic, the AIs proved much more convincing than human beings. The real surprise, however, was seeing by how much. Analyzing the “points” that users assigned to the comments that had made them change their minds, the team found that the most “standard” bots obtained results three times better than real users. The “personalized” bots, that is, those that adapted their answers based on the interlocutor’s profile, were as much as six times more effective.

It must be said, however, that these are only preliminary results. The study has not reached (and will never reach) the peer-review stage, that is, the rigorous process in which other experts in the field evaluate the validity and reliability of a piece of research before it is officially published. When the group made the first results known and informed the Reddit administrators, the reaction was strong and negative, so much so that the study’s authors even received death threats from the community. The users involved had no idea they were part of an experiment, and this goes against some of the ethical guidelines for designing a scientific experiment. Introducing AI into r/ChangeMyView, moreover, violates the platform’s own policy. After the preliminary results were shared, therefore, the University blocked any further phases of the experiment.

These results should be considered a first exploration, not a certainty. They are, however, consistent with those of previous studies: the more an AI manages to speak like us, to mirror our values and appear reasonable, the more willing we are to trust it and, consequently, to change our minds.

Ethical concerns about the use of artificial intelligence in studies

This study highlighted, once again, how powerful the effect of personalization is. This dynamic had already been observed, for example, with “DebunkBot”: a chatbot designed to converse with conspiracy theorists and convince them to reconsider their beliefs. In that case too, one of the keys to its success lay in its ability to adapt its responses to the profile of the person it was talking to.

If, on the one hand, these persuasion skills can be used to obtain positive outcomes for society, such as reducing belief in conspiracy theories or persuading the population to get vaccinated, on the other they can be used to spread disinformation and convince people of false news during election campaigns.

Beyond the effectiveness of the AIs, however, what raised concern was the way the study was conducted. The participants had not been informed that they might be taking part in a scientific experiment and had not given their consent. But what is most striking is that nobody noticed they might be interacting with an AI: distinguishing between content generated by a human being and content produced by an artificial intelligence can be much harder than you might think.