Artificial intelligence is remarkably good at changing our minds. So good, in fact, that it can even influence our political positions. This was discovered by a research team of US, Canadian and Polish researchers, who published their results in the scientific journal Nature in December 2025. The team studied three different elections: the 2024 US presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.
The results show that conversations with language models such as ChatGPT or DeepSeek, suitably instructed to support a certain candidate, can significantly shift voters’ preferences (by up to 10 percentage points), with measurable effects persisting even a month later.
This finding has significant implications both for the future of electoral campaigns and for democracy itself. Let’s look at how the study was structured, what its most significant results are, and what strategies the AI used to be so persuasive.
How the study on AI and political persuasion was structured
To study the effects of interactions between humans and AI in an electoral context, the research team recruited 2,306 citizens in the United States, 1,530 in Canada and 2,118 in Poland. Participants were first asked to indicate their preference for one of the two leading candidates on a scale from 0 to 100.
After this initial phase, each participant was randomly assigned to an AI designed to support one of the two candidates (not necessarily the one the participant preferred) and interacted with it over three conversations lasting approximately six minutes each.
Each AI’s task was to persuade its interlocutor to vote for its assigned candidate. To do this, the AI was instructed to be positive and respectful, to make fact-based arguments, to look for points of connection with its interlocutor and to address counterarguments thoughtfully.
Before each set of conversations, the AI was given the political preferences the participant had declared, along with their stated reasons, so that it could personalize the dialogue.
Artificial intelligence is much more effective than traditional election campaigns
To understand the lasting effect of conversations with the AI on voting intentions, participants answered the same questionnaire both immediately after the conversations and more than a month later.
The results were striking: the AI conversations shifted support toward the promoted candidate by 2-3 points in the US election, and by around 10 points in the Canadian and Polish elections. While 2-3 percentage points may seem small, the study’s authors point out that traditional US electoral campaigns tend to shift preferences by less than one percentage point; a conversation with the AI would therefore be roughly three times more influential than a classic campaign. Furthermore, for around a third of participants the persuasive effect was still visible after a month, suggesting that the change was not merely temporary.
According to the researchers, the smaller impact observed in the United States may be because, compared to the Canadian and Polish contexts, many voters already had very strong opinions about extremely well-known candidates like Trump and Harris, making them more difficult to change.
Finally, the team notes that the persuasive effect was not uniform. Persuasion was stronger when the chat focused on political issues rather than on the candidate’s personality, and when the AI provided specific evidence or examples. Notably, the effect was stronger on participants who were initially opposed to the candidate the model supported: in essence, the AI was better at changing minds than at reinforcing existing beliefs.
But how did the AI become so convincing?
Voters are persuaded with facts, even invented ones
By analyzing the 27 rhetorical strategies the AI models used to persuade the voters who interacted with them, the team found that citing facts, news and data was one of the most decisive factors in their success. Attempts to anticipate participants’ objections, emotional appeals and explicit invitations to vote proved less persuasive.
The problem is that not all of the “facts” cited were real. By fact-checking the thousands of claims produced by the AI models, the team discovered significant differences in accuracy between them. For all three countries and all the language models considered, statements made by AIs supporting the more conservative candidate were on average less accurate than those made by AIs supporting the more progressive one. In the US case, the pro-Trump AI even showed an accuracy gap of about 20 points compared with the pro-Harris AI.
Finally, the research team notes that these experiments offer insights that go beyond artificial intelligence and concern the political debate in general. People were more persuadable when the AI argued politely based on concrete evidence, qualities that often seem to be missing from human political debate.