Imagine transforming thoughts into audible words: that is what Stanford researchers have done with a new neural implant capable of turning inner speech, the dialogue that runs through our mind without being spoken aloud, into words. This is made possible by a set of microelectrodes implanted in the cerebral cortex, which interpret the electrical signals a person produces when they “think of speaking” and translate them into speech in real time. To protect the cognitive privacy of its users, the technology also includes an activation system based on a “mental password”.
How the new brain implant that reads thoughts works: the initial phase
Of all human abilities, verbal communication is perhaps the one that most defines us. The loss of the ability to produce spoken language due to neurodegenerative disease or brain trauma therefore amounts to the deprivation of a part of oneself. Now imagine a technology that could read inner speech directly and transform it into audible words, restoring the ability to communicate to those who have lost it. This is what a team from Stanford University managed to do, developing a brain implant capable of decoding neural activity and translating it into language with an accuracy of up to 74%.
The brain-computer interface (BCI) went through several stages of development before reaching today’s functionality. In their early experiments, associate professor of neurosurgery Frank Willett and his colleagues used the interfaces to help people left unable to speak by paralysis. Specifically, microelectrodes were implanted in the motor cortex, the region of the brain where the motor neurons responsible for all muscle movements originate, including the movements of the mouth and tongue used to produce a word. When a person attempts to speak, the resulting neural signals are recorded by the electrode array. These signals are then transmitted by wire to a computer algorithm that translates them into audible speech or into cursor movements over on-screen letters.
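To make that pipeline concrete, here is a minimal Python sketch of its first stage: turning raw electrode activity into the feature vectors a decoder consumes. The channel count, bin width, and function names are illustrative assumptions, not details of the Stanford device.

```python
import numpy as np

# Illustrative sketch only: every name and number here (channel count,
# bin width) is an assumption, not a detail of the Stanford system.

N_CHANNELS = 256   # microelectrode channels recording from the motor cortex
BIN_MS = 20        # spike activity is commonly summarized in short time bins

def bin_spikes(spike_events: np.ndarray, bin_ms: int) -> np.ndarray:
    """Collapse per-millisecond spike events into per-bin counts.

    spike_events: shape (time_ms, n_channels), with 0/1 entries.
    Returns: shape (n_bins, n_channels), one feature vector per
    time bin, ready to feed to a decoder.
    """
    n_bins = spike_events.shape[0] // bin_ms
    trimmed = spike_events[: n_bins * bin_ms]
    return trimmed.reshape(n_bins, bin_ms, -1).sum(axis=1)

# One second of simulated activity becomes 50 decoder inputs.
rng = np.random.default_rng(0)
simulated = rng.integers(0, 2, size=(1000, N_CHANNELS))
print(bin_spikes(simulated, BIN_MS).shape)  # (50, 256)
```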
To decode the neural activity behind the word a person wants to pronounce, the researchers use machine learning (the branch of AI that allows systems to learn from experience without being explicitly programmed). In short, each word or sound produces a slightly different pattern of neural activity. When a person tries to pronounce different phonemes, the computer records these neural patterns and the machine learning algorithm learns to associate each pattern with the corresponding phoneme. When the user then attempts to speak, the system recognizes the learned neural patterns, and the computer assembles the corresponding phonemes in the correct sequence to form words and sentences.
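As a toy illustration of that learning step, the sketch below trains a classifier on simulated (neural pattern, phoneme) pairs. The real system uses neural networks on actual electrode recordings; the simulated data, the four-phoneme set, and the simple linear model here are all stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy version of the learning step: the decoder sees (neural pattern,
# phoneme) pairs and learns the mapping. The data is simulated and the
# model deliberately simple; this is not the study's method.

rng = np.random.default_rng(1)
phonemes = ["AA", "B", "K", "S"]

# Give each phoneme a distinct mean activity pattern, plus noise.
means = rng.normal(size=(len(phonemes), 256))
X = np.vstack([means[i] + rng.normal(scale=0.5, size=(200, 256))
               for i in range(len(phonemes))])
y = np.repeat(phonemes, 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A new attempt produces a pattern; the decoder maps it back to a phoneme.
new_pattern = means[2] + rng.normal(scale=0.5, size=256)
print(clf.predict([new_pattern]))  # expected: ['K']
```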
How the implant translates inner speech into words
Recently, the scientists took another important step: they studied the brain signals behind “inner speech” (also called the “inner monologue”). The ambition of Frank Willett and Erin Kunz of Stanford University was to decode even words and sentences that require no muscular effort to produce. They wanted to know whether a BCI could work solely on the neural activity evoked by imagined speech, rather than on attempts to physically produce it. For people with paralysis, attempting to speak can be slow and laborious and, if the paralysis is partial, can produce distracting sounds and difficulty controlling breathing. In the trial, participants were asked to imagine words and sentences; the four patients involved had lost the use of speech to stroke or to motor neuron diseases such as ALS, which destroy the nerve cells that control voluntary muscles.
Both neuroimaging and electrophysiological studies have shown that inner speech engages a cortical network similar, though not identical, to that of physically produced speech in the motor cortex; the researchers therefore reasoned that the electrodes positioned to decode attempted speech could also decode inner speech. The precise neural differences between imagined and produced language remain under investigation. In any case, the artificial intelligence managed to decode the signals into phonemes, combining them in real time to form words and sentences drawn from a vocabulary of around 125,000 words. The result? In two patients the system reached an accuracy of 74%, all without any physical effort. In some tests, the BCI was even able to identify numbers the participants counted in their heads.
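The assembly step can be pictured as collapsing the decoder’s per-bin phoneme labels and matching the result against a vocabulary. The sketch below uses a toy two-word lexicon and a greedy, CTC-style collapse; the actual system scores candidate sentences against its roughly 125,000-word vocabulary with a language model, so everything here is a simplified assumption.

```python
from itertools import groupby

# Toy lexicon standing in for the real ~125,000-word vocabulary.
LEXICON = {
    ("HH", "EH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def collapse(frame_labels):
    """Merge consecutive duplicate labels and drop blanks ("_"),
    turning per-bin decoder output into a phoneme sequence."""
    merged = [label for label, _ in groupby(frame_labels)]
    return tuple(l for l in merged if l != "_")

def to_word(frame_labels):
    """Look the collapsed phoneme sequence up in the lexicon."""
    return LEXICON.get(collapse(frame_labels), "<unknown>")

print(to_word(["HH", "HH", "_", "EH", "L", "L", "_", "OW"]))  # hello
```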
The privacy problem and the mental password: the limits of the system
Inner speech, albeit with different intensity, engages some of the same motor regions of the brain as attempted speech. This raised the possibility that a BCI could end up decoding something the user only intended to think, not say aloud. Although the interfaces were designed to decode attempted speech, and would therefore likely produce distorted, inaccurate output when applied to inner speech, even the risk of leaking words one meant to keep to oneself raised an important ethical question.
The researchers’ ambition, therefore, was (and remains) to distinguish motor intent from silent intent and so avoid unwanted output. They developed a model in which an internally spoken keyword can be detected with high precision, allowing the user to “lock” and “unlock” the system: a sort of mental switch that turns on only when the person imagines a pre-established “mental password”. In the study, the chosen phrase was “Chitty-Chitty-Bang-Bang”, recognized with an accuracy of over 98%. Without the keyword, the system remains completely inactive. The privacy concern is real: a device capable of translating thoughts could, in theory, also reveal content not intended for communication; the mental switch instead imposes active consent before activation.
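As a rough picture of how such a gate might behave, the sketch below discards all decoded output until the keyword is detected and toggles the lock each time it reappears. The toggling logic, the exact-match detector, and the class name are hypothetical; the study reports only that the keyword itself is recognized with over 98% accuracy.

```python
# Hypothetical sketch of the "mental password" gate: decoded text is
# suppressed while the system is locked, and the keyword toggles the lock.

KEYWORD = "chitty chitty bang bang"

class PasswordGate:
    def __init__(self, keyword: str = KEYWORD):
        self.keyword = keyword
        self.unlocked = False

    def process(self, decoded_text: str) -> str | None:
        """Return decoded text only while unlocked; the keyword flips the state."""
        if decoded_text.strip().lower() == self.keyword:
            self.unlocked = not self.unlocked
            return None                  # never output the password itself
        return decoded_text if self.unlocked else None

gate = PasswordGate()
for phrase in ["private thought", "chitty chitty bang bang", "hello world"]:
    print(gate.process(phrase))  # None, None, 'hello world'
```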
It is worth underlining, however, that implanted BCIs are not yet an available technology: they are still in the early stages of research and testing, and they are overseen by federal regulatory agencies to maintain the highest standards of medical ethics. Nonetheless, the idea that they could one day be widely available is a genuinely promising prospect.