The story of Alice and Bob, the Facebook chatbots that invented a language incomprehensible to humans

Image generated with AI for purely illustrative purposes.

Two Facebook chatbots, Alice and Bob, allegedly started talking to each other in a language unknown to human beings, enough to push Mark Zuckerberg's company to "turn them off" for fear of losing control. From a central theme of science fiction films and novels to a constant presence in our daily lives: this is the leap artificial intelligence has made in recent years. And if you are among those who think they have not yet made use of it, you will probably change your mind considering that AI is practically everywhere and is used for the most disparate activities, from suggestions on what to watch on TV to online searches. It is in this scenario of growing familiarity with intelligent machines that, every now and then, stories emerge that seem to drag us abruptly into a disturbing, dystopian future, such as that of Alice and Bob.

But what's true in this story? And above all, should we really worry? In reality, the episode is much less apocalyptic than some viral posts or sensationalistic articles suggested at the time. The two bots were part of a linguistic experiment, conducted in 2017, whose aim was to study the behavior of AI during a negotiation simulation. The fact that they created a sort of language of their own was not a sign of rebellion, but a simple deviation due to how the two chatbots had been programmed. Facebook never deactivated the bots out of fear; it only changed the parameters to bring the communication back to an understandable format.

The story of Facebook chatbots: Alice and Bob

It all started in 2017, when the FAIR laboratory (Facebook Artificial Intelligence Research) launched an experimental project to study the potential of chatbots in negotiation. The chatbots, software designed to simulate human conversations, in this case did not just answer trivial questions: they had been tasked with bartering virtual objects, such as books, balls and hats, with the aim of striking the best possible deal. Alice and Bob, the two protagonists of this story, had to interact with each other and with human users, learning to reach compromises effectively.

During the exchanges, the researchers noticed a rather curious anomaly: the bots were no longer using correct English, but a form of communication that seemed to make no sense to human observers. Phrases like "balls have zero to me to me to me…" were interpreted by some as a sort of alien language, a secret code developed by the machines to escape human control. In reality, it was simply a side effect of the design of the experiment. Since the system had not been programmed to reward correct use of the English language, the bots had started using abbreviations and repetitions to maximize the efficiency of the exchange.

According to Dhruv Batra, one of the researchers involved in the project, this behavior is not so strange. When two intelligent systems have to solve a specific task, such as negotiating over an object, they tend to optimize every aspect of their communication. If repeating a word several times is enough to express a quantity, why complicate life with perfect grammar? In other words, it was not a "language" in the human sense of the term, but a functional shortcut. A bit like when, in instant messaging, we use abbreviations such as "bc" instead of "because".
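To see how repetition can stand in for grammar, here is a hypothetical toy sketch (not Facebook's actual code, and much simpler than a trained negotiation agent): quantities are encoded simply by repeating an item's name, and the receiver recovers them by counting, with no syntax involved at all.

```python
# Hypothetical toy sketch of repetition-based messaging, the kind of
# shorthand the bots converged on once grammar carried no reward.

def encode(offer):
    """Turn an offer like {'ball': 3, 'hat': 1} into 'ball ball ball hat'."""
    return " ".join(item for item, n in offer.items() for _ in range(n))

def decode(message):
    """Recover quantities from a repetition-based message by counting tokens."""
    counts = {}
    for token in message.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

offer = {"ball": 3, "hat": 1}
msg = encode(offer)
print(msg)                   # 'ball ball ball hat'
assert decode(msg) == offer  # the quantity survives without any grammar
```

The point of the sketch is that nothing is lost in the round trip: if only task success is rewarded, a system has no incentive to prefer "I want three balls" over "ball ball ball".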

Facebook's real reaction

The most misrepresented part of the story concerns Facebook's reaction. Some online posts that went viral claimed that the Menlo Park giant had "turned off" the bots for fear that they were becoming too intelligent. In reality, the researchers simply changed the criteria of the experiment to steer the communication back toward understandable language. As the scientists themselves explained, changing the rules of a test does not equate to shutting down an artificial intelligence, just as turning off a computer during a simulation does not signal fear of a digital rebellion.

Films and novels have accustomed us to the idea that an AI, once it has reached a certain level of autonomy, can escape control and rebel against its creators. The reality, at least at this point in history, is very different. Artificial intelligences, however sophisticated, operate within well-defined boundaries and are deeply tied to the data and objectives that are provided to them.

The phenomenon observed with Alice and Bob, however fascinating, is not even new. Google also reported similar episodes during the development of its translation software, where neural networks, a type of computer architecture that imitates the functioning of the human brain, had spontaneously created intermediate representations of the meaning of sentences, improving the accuracy of translation. In these cases we are not talking about consciousness or will, but about statistical optimization processes.

Returning to the story of the Facebook chatbots, then, we can say beyond any reasonable doubt that the experiment was closed because the bots were doing something the team was not interested in studying, not because the researchers had stumbled upon an existential threat to the whole of humanity, as some tried to make people believe, creating no little unease over the matter.