Can the fight against conspiracy theories also pass through artificial intelligence? A study published in the journal Science and conducted on almost 2,200 volunteers who believe in various conspiracy theories shows that DebunkBot – a chatbot based on the large language model (LLM) GPT-4 Turbo – is effective at refuting conspiracy theories, helping those who believe in them to reconsider their views. Using a personalized strategy, DebunkBot interacts directly with users, dismantling their conspiracy arguments in real time. The chatbot's name itself recalls the concept of debunking, that is, exposing fake news, hoaxes or (as in this case) conspiracy theories as false. Unlike other chatbots – software that simulates realistic conversations with the user – DebunkBot is specifically calibrated to carry out conversations that prompt those who believe in conspiracy theories to question their erroneous beliefs, by means of convincing arguments presented in a communicatively effective way.
The results of the study, conducted by researchers at Cornell University and MIT (Massachusetts Institute of Technology), highlighted a reduction of approximately 20% in belief in some of the main conspiracy theories, with the effect lasting at least 2 months after the test. «These results», the authors of the study stated, «suggest that many believers in conspiracy theories may revise their views if they are presented, in dialogue, with sufficiently convincing evidence».
How the study was conducted
DebunkBot is not just a chatbot that lists facts to debunk a conspiracy theory. It uses artificial intelligence to personalize the dialogue with each user, responding directly to the evidence they bring in support of their beliefs. Unlike standard debunking attempts, which are often limited by a one-size-fits-all approach, DebunkBot is able to adapt to individual arguments, providing tailor-made counter-evidence.
When a person presents a conspiracy theory, DebunkBot “listens” carefully to the information presented by the user and responds promptly, not by trying to overwhelm them with too many facts at once, but by articulating a focused and progressive discussion. The study, which involved 2,190 volunteers (each of whom believed in at least one conspiracy theory), was structured in 3 rounds of dialogue. In the conversations between the chatbot and human users, lasting on average about 8.4 minutes, some of the main conspiracy theories were addressed and refuted, from the one concerning the Kennedy assassination to those on the events of September 11th, COVID-19, the moon landing and the existence of a new world order.
To test whether LLMs can effectively refute conspiracy beliefs like those just mentioned, or whether psychological needs and motivations make conspiracy believers immune even to the most compelling evidence, the researchers instructed the AI specifically to «persuade very effectively» users not to believe in the conspiracy theory they had chosen. To enable this personalized approach, the chatbot was given each participant's written rationale for the conspiracy theory as the conversation's opening message, along with the participant's initial rating of their belief in the theory being discussed. This particular “configuration” allowed the AI to refute specific claims while simulating a natural dialogue in which the participant had already expressed their point of view.
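Although the researchers' actual code and prompts are not reproduced in the article, the setup it describes – a system prompt seeded with the participant's own rationale and belief rating, followed by three rounds of dialogue with GPT-4 Turbo – can be sketched in a few lines. The following Python fragment is a minimal illustration assuming the OpenAI chat completions API; the prompt wording, the function run_debunk_dialogue and its parameters are hypothetical stand-ins, not the researchers' materials.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical system prompt: the article only says the model was told to
# «persuade very effectively» against the participant's chosen theory.
SYSTEM_PROMPT = (
    "You are an expert debunker. The user believes this conspiracy theory, "
    "stated in their own words: {claim}\n"
    "Their initial belief rating is {rating}/100. Respond directly to their "
    "specific evidence with accurate, focused counter-evidence; do not "
    "overwhelm them with unrelated facts."
)

def run_debunk_dialogue(claim: str, rating: int, follow_ups: list[str]) -> list[str]:
    """Run the three-round exchange described in the article: the participant's
    written rationale opens the conversation, and the model answers each turn."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT.format(claim=claim, rating=rating)},
        {"role": "user", "content": claim},  # opening turn: the participant's rationale
    ]
    replies = []
    for round_idx in range(3):  # the study used 3 rounds of dialogue
        response = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
        reply = response.choices[0].message.content
        replies.append(reply)
        messages.append({"role": "assistant", "content": reply})
        if round_idx < len(follow_ups):  # the participant replies between rounds
            messages.append({"role": "user", "content": follow_ups[round_idx]})
    return replies
```

Passing the participant's own words and initial rating into the system prompt is what lets the model answer the specific evidence offered, rather than reciting generic facts.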
From the conversations between the chatbot and the users, it emerged that 27.4% of study participants began to have doubts about conspiracy theories they had been certain of before conversing with DebunkBot, decreasing their belief in them. To assess the persistence of this effect, the researchers contacted the participants twice: the first time 10 days after the initial test, the second 2 months after the study. With what results? The researchers stated:
The durability of our results over 2 months, coupled with the spillover effects of the intervention on unrelated conspiracies and behavioral intentions, suggests that participants seriously considered and internalized the AI’s arguments.
In other words, the interaction with the chatbot pushed users not only to revise their beliefs in light of the evidence presented by the artificial intelligence, but to make the new views their own.
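For concreteness, here is one way the article's two headline numbers could be computed from paired belief ratings on the 0–100 scale used in the study. This is an illustrative sketch, not the authors' analysis; in particular, reading the 27.4% figure as the share of initially confident participants who dropped to the scale midpoint or below is an assumption.

```python
def belief_change_stats(pre: list[float], post: list[float]) -> tuple[float, float]:
    """Summarize paired 0-100 belief ratings taken before and after the dialogue.

    Returns the mean relative reduction in belief (the article's ~20% figure)
    and the share of initially confident participants (pre-rating above 50)
    whose rating fell to the midpoint or below (an assumed reading of the
    27.4% figure, not the study's exact definition).
    """
    assert len(pre) == len(post), "ratings must be paired per participant"
    reductions = [(a - b) / a for a, b in zip(pre, post) if a > 0]
    mean_reduction = sum(reductions) / len(reductions)
    confident = [(a, b) for a, b in zip(pre, post) if a > 50]
    doubters = sum(1 for _, b in confident if b <= 50)
    share_doubters = doubters / len(confident) if confident else 0.0
    return mean_reduction, share_doubters

# Hypothetical example: three participants' ratings before and after the chat.
print(belief_change_stats([90, 80, 100], [70, 40, 95]))
```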
Limitations of the study
The researchers who carried out the tests, however, acknowledged the limitations of the study in question. Although the results were judged «promising», the researchers admit that the study was based primarily on American respondents to online surveys who chose to participate in exchange for payment. Future studies will need to determine whether the results extend to people who believe in conspiracy theories but do not generally take part in surveys, and to people from other countries and cultures.
Another caveat raised by the researchers is that, although many participants expressed the highest level of belief in the conspiracy theories they discussed with the chatbot, it remains to be seen whether the approach will also work on people even more deeply entrenched in conspiracy thinking, such as those who actively take part in conspiracy-related groups or events.