Chatbots report fake news in 35% of cases, NewsGuard analysis finds: how to protect yourself

When questioned about current news, AI chatbots end up reporting false information in 35% of cases: this is the result of a year of systematic observation of the main generative artificial intelligence systems. It means that, on average, more than one response in three contains false or completely invented elements. The figure, measured by NewsGuard between August 2024 and August 2025 and published in a report on 4 September, nearly doubled compared to the 18% recorded the previous year. What is striking is not only the sharp increase, but the fact that it occurred despite a year of technological progress, accompanied by announcements of updates and promises of greater reliability from the companies that develop the models. Another notable finding: chatbots now decline to answer far less often (for the record, the share of "non-responses" collapsed from 31% to 0%), and the AI's willingness to talk about everything has led to a growth in errors. That is why cultivating one's critical thinking, combined with knowing how to recognize when a news item is false, are two essential skills for living in today's society.

Chatbots and fake news: AI systems respond more, but worse

Zooming in on the results of the study conducted by NewsGuard, notable differences emerge between the systems analyzed. Anthropic's Claude model recorded the lowest error rate, around 10%, while Google's Gemini stopped at 17%. At the other end of the scale, Inflection exceeded 56% and Perplexity reached 46%. The most popular chatbots, such as OpenAI's ChatGPT, Microsoft's Copilot and Mistral's Le Chat, sit in an intermediate band, with values around 35-40%.
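To make the comparison concrete, the per-model figures cited above can be tabulated in a short Python sketch. This is purely illustrative: the four values are the ones reported in the article (the 35-40% band for ChatGPT, Copilot and Le Chat is omitted because no exact figure is given), and comparing them against the 35% overall average is an assumption made here for demonstration.

```python
# Error rates (% of responses containing false information) for
# August 2025, as reported in the article. Illustrative only.
error_rates_2025 = {
    "Claude": 10.0,
    "Gemini": 17.0,
    "Inflection": 56.0,
    "Perplexity": 46.0,
}

# Overall averages reported by NewsGuard for the two survey rounds.
overall_2025 = 35.0
overall_2024 = 18.0

# Models performing better (lower error rate) than the 35% overall average.
better_than_average = sorted(
    name for name, rate in error_rates_2025.items() if rate < overall_2025
)

# Year-over-year growth factor of the overall error rate.
growth_factor = round(overall_2025 / overall_2024, 2)

print(better_than_average)  # Claude and Gemini fall below the average
print(growth_factor)        # close to 2, i.e. "nearly doubled"
```

The growth factor of roughly 1.94 is what the article summarizes as the error rate having "almost doubled" year over year.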

Chart showing the performance of the main AI chatbots: the percentage of responses on current-affairs topics containing false information detected in August 2024, compared with the figures recorded in August 2025. Credit: NewsGuard.

The problem, however, is not only about the numbers: according to NewsGuard, the main difficulty lies in the way chatbots choose their sources. With the introduction of real-time search, chatbots have started pulling content directly from the web, an environment that is rich but also contaminated by propaganda and unreliable sites. This is also why a chatbot now almost never refuses to respond. Here is how NewsGuard commented on the matter:

With the introduction of real-time searches, chatbots have stopped refusing to answer. The cases in which they provided no response at all dropped from 31% in August 2024 to 0% in August 2025. Yet the probability that the models report false information has also nearly doubled, now standing at 35%. Instead of pointing out the temporal limits of their training data or avoiding delicate topics, the language models now draw on a confused online information ecosystem, often intentionally polluted by organized networks, including those behind Russian influence operations. Thus, they end up treating unreliable sources as if they were reliable.

A useful example for understanding the mechanism is that of so-called "influence networks". These are organized structures that create hundreds of apparently journalistic sites with the aim of spreading false narratives. One such network, called Pravda and linked to Russian interests, publishes millions of articles every year with almost no real interaction from users. The intent is not to convince human readers, but to saturate the digital ecosystem so as to be indexed by search engines and, consequently, end up in chatbot responses. When the models do not distinguish between a reliable source and a manipulated one, they end up amplifying disinformation from this sort of "fake news incubator".

NewsGuard's monitoring therefore shows that, while in the past chatbots tended to refuse to answer delicate questions, maintaining a prudent approach, today they prefer to answer even if it means drawing the response from unreliable sources. This shift from "better to say nothing" to "always respond" creates an illusion of accuracy that can be even more dangerous, because the reader receives a clear, well-structured answer, which they may label as "credible" even though it may rest on false data.

Chart showing the percentage of "non-responses" of the main AI chatbots, comparing what was detected in August 2024 with what was recorded in August 2025. Credit: NewsGuard.

How to protect yourself from AI-generated false news and disinformation

In light of the results of the aforementioned study, it is natural to look for strategies to defend yourself from false news spread by AI. Our advice is to always follow these two tips.

  1. Always verify by going back to the sources of the news: this could be called the "golden rule" of getting informed online, whether you do so by questioning an AI chatbot directly or by consulting an online newspaper considered reliable. You should never fail to verify facts, figures, data and statements. To give an example: if a source summarizes a quote, it is worth asking questions like these: is the paraphrase of the quote accurate? Who made the statement? In what context? Does the full quotation suggest a different reading of a phrase taken out of context? Of course, to answer all these questions it is essential to trace the statement back to the original source that contains it, so as not to fall victim to disinformation.
  2. Share a news item with others only if you are sure it is true: given that fake news thrives on shares and reposts by less aware users, when a news item seems doubtful, it is better not to share it. This way, you will help break the chain of disinformation.