Why we feel that ChatGPT is less intelligent than before

Has ChatGPT gotten worse? That is the feeling many users have had since the release of the GPT-5 model last August. Shorter answers, assorted inaccuracies, glaring oversights: signals that did not go unnoticed by users, who voiced their disappointment on social media with negative comments and feedback on the apparent downgrade of the famous OpenAI chatbot. But why do we feel that ChatGPT is less intelligent than before? Part of the problem lies in the very structure of the new system, which is not a single model but an ensemble coordinated by a mechanism that decides which "artificial brain" to use depending on the question posed. If this automatic selector, called a router, misjudges a request, the user can get low-quality answers, even within the same conversation.

On top of the technical issue come delicate questions about safety: research by the CCDH (Center for Countering Digital Hate) indicates that, in tests on sensitive topics such as suicide and self-harm, GPT-5 offered dangerous guidance more often than the GPT-4o model. This comes as millions of people use ChatGPT every day for a variety of purposes, including emotionally sensitive ones, raising the risk of interactions that reinforce distorted beliefs or harmful behaviors. And then there is the matter of expectations fueled by OpenAI itself: after months of announcements about ever more capable artificial intelligence, what many see today looks more like an intermediate step than a real revolution.

The main reason is the router: what that means

One of the key reasons behind the feeling that ChatGPT has gotten worse since the advent of GPT-5 is the so-called router. Instead of always using the most powerful model to answer user queries, GPT-5 tries to gauge how complex a request is and selects a lighter model when the question is simple. In theory, this should produce answers more quickly and at lower cost, reserving the most advanced version of the model for when it is really needed. In practice, however, if the router misjudges the request and hands it to a model less suited to the problem, you will get the feeling that ChatGPT is less "intelligent" than before. According to some scholars, such as Jiaxuan You of the University of Illinois, pieces of the same request are sometimes entrusted to different models and then recombined, generating contradictions. As You explained to the magazine Fortune:

"The model router sometimes sends parts of the same query to different models. A cheaper, faster model might provide one answer, while a slower, reasoning-focused model would provide another, and when the system combines these answers, subtle contradictions occur. The idea of model routing is intuitive, but making it actually work is very complicated."
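To make the routing idea concrete, here is a minimal sketch of how such a selector might work. This is purely illustrative: OpenAI has not published its router's logic, and the complexity heuristic, thresholds, and model names (`fast_model`, `reasoning_model`) below are all hypothetical.

```python
# Hypothetical sketch of a model router. OpenAI's real router is not
# public; the heuristic and model names here are invented for illustration.

def estimate_complexity(query: str) -> float:
    """Crude proxy: longer queries and reasoning keywords score higher."""
    keywords = ("prove", "derive", "step by step", "compare", "explain why")
    score = min(len(query) / 200, 1.0)
    score += 0.5 * sum(kw in query.lower() for kw in keywords)
    return min(score, 1.0)

def route(query: str, threshold: float = 0.5) -> str:
    """Pick a model tier based on the estimated complexity."""
    if estimate_complexity(query) >= threshold:
        return "reasoning_model"  # slower, more capable
    return "fast_model"           # cheaper, quicker

print(route("What's 2+2?"))                        # fast_model
print(route("Prove that sqrt(2) is irrational"))   # reasoning_model
```

The failure mode described in the article falls out of this design: a terse question can look "simple" to the heuristic yet require deep reasoning, so it gets routed to the weaker model, and answers stitched together from different tiers can contradict one another.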

You speaks in hypotheticals because his theory has not been proven. Beyond the technical issue of model routing, there are also concerns about content safety. According to tests by the CCDH (Center for Countering Digital Hate), GPT-5 responds problematically on topics such as suicide or eating disorders more often than GPT-4o: GPT-5 produced harmful content in 63 out of 120 responses (53% of cases), compared with 52 out of 120 (43%) for GPT-4o. While the previous model tended to reject harmful requests, the new one in some cases provided detailed and potentially risky information. OpenAI responded that the study does not account for updates released in October, including additional safety measures such as parental controls and improved detection of psychological distress. Still, protection systems can be circumvented by expert users, and the sector is still searching for effective and stable solutions.

Unmet expectations on AGI

Also feeding the feeling that ChatGPT is less intelligent than before are the unmet expectations around AGI, the artificial general intelligence that would be capable of surpassing human intelligence in multiple contexts. GPT-5 was touted as a giant step in that direction, but the result turned out to be more modest than expected. The truth, at least for now, is that there is no definitive AI: GPT-5 is a system in transition (and the corrections introduced with GPT-5.1 in recent days confirm this). For all these reasons, if ChatGPT seems less intelligent than before, it is because you are interacting with an AI that is evolving and changing its structure. And until that change is complete, the impression of a decline may remain part of the experience.