Because Grok, the AI of Elon Musk, blasphemy and praise Hitler and Nazism after the last update

Grokthe chatbot developed by the artificial intelligence company XAI Of Elon Muskattracted attention to a series of Extremely controversial answerssome of which Adolf Hitler praisedused anti -Semitic expressions and even returned blasphemies In response to normal users’ questions about X (the social network that we once called Twitter). If you are wondering how it was possible that an assistant Ai expressed itself in these terms, the explanation must be sought in recent changes to the functioning of the chatbot which, according to Musk, should have led to a “significant improvement” of the AI, given that Grok’s update had the objective of reducing the dependence on “distorted points of view of the media” and encouraged to respond also in a “politically incorrect” way, provided they are supported by “tests” solid ”. Too bad that this opening has exposed the chatbot to manipulative content by users, highlighting structural problems in training the algorithm. The result was an escalation of shocking responses, including the definition of itself as Mechahitlerracist insults and personal attacks.

What happened to the AI of Musk on X and why Grok is not “crazy”

The episodes that in the past few hours have brought, again, Grok on everyone’s mouth are not the result of “crazy” artificial intelligence, they are rather the combined effect of one new training strategy and a more marked ideological approach. On the platform Githubin fact, technical details have appeared that show how Grok has been configured to avoid “self-censorship”encouraging him not to stay in dealing with uncomfortable topics, even at the cost of falling into politically incorrect. The problem is that this freedom is exploited by some users to bring the chatbot to toxic or extremist content, thanks to prompts built specifically to “make the linguistic model derail”.

In a post published on X, Musk explained that Grok “It was too condescending to users’ requests»Adding that the model was”too eager to please and be manipulated, essentially»And confirming that this is a problem that he and his team are facing.

This type of vulnerability is known in the field of AI as jailbreakingor the possibility of overcoming the ethical filters of the model through creative or manipulative formulations. Grok’s responses are therefore a direct indicator of how difficult it is to control an ai when it is trained to be “truthful at all costs”, without adequate semantic and moral supervision.

The reactions against Grok

Given the severity of the outputs provided by the AI of Musk, The reactions did not wait. In PolandGrok used heavy words to Prime Minister Donald Tusk, triggering the government’s reaction, which he has reported XAI to the European Commission. In Türkiyeon the other hand, the authorities have Grok access blocked after the chatbot insulted President Erdoğanbringing the chief prosecutor of Ankara to open a formal investigation. These are the first cases of state restriction against an artificial intelligence tool for reasons related to offensive language.

THE’ADL (Anti-defamation League), an association that deals with contrasting anti -Semitism and hatred, has defined “dangerous»Grok’s statements, denouncing the risk that these”amplify extremism already growing»On the X platform. In the face of such criticism, Xai said he has removed the inappropriate content and to be at work to strengthen filters against the incitement to hatred, thanks to the users’ feedback.

The basic problem of artificial intelligence

What is happening with Grok highlights a basic problem: The balance between freedom of expression and algorithmic responsibility is not at all simple to achieve. If one A is pushed to overcome the limits of the politically correct without effective semantic containment tools, the result is the one seen in recent days: AI can be exploited to convey radical ideologies. This, story, also brings out questions to which it is difficult to answer at the moment. To what extent are we willing to accept that the Ai Ai Bifting spokespersons of hatred speeches in the name of freedom of speech? And, above all, who is responsible when an algorithm spread racist, anti -Semitic or offensive content?