How brain-computer interfaces really work: Inside the technology of Neuralink

In recent months there has been renewed talk of chips in the brain, especially thanks to Neuralink, one of the companies founded by Elon Musk. But behind the sensationalist headlines and futuristic promises there is real technology, studied for decades in neuroscience laboratories: BCIs, or Brain-Computer Interfaces. BCIs do not read mental contents, abstract intentions or hidden desires. They do something much more concrete, and much more interesting from a scientific point of view: they measure physical signals produced by the brain and translate them into commands.

From the brain to the computer: why everything starts from electricity

Our brain is made up of approximately 86 billion neurons, specialized cells that communicate with each other through electrical impulses and chemical signals. Each individual neuron produces tiny impulses, but when millions of neurons fire together, their signal becomes measurable.
This is the physical principle that makes BCIs possible: they measure the coordinated electrical activity of entire neuronal populations and create a direct communication channel with a machine.

But not all BCIs measure brain activity in the same way.

How Brain-Computer Interfaces (BCIs) work

Many Brain-Computer Interfaces historically derive from the electroencephalogram (EEG), the recording of the brain's electrical activity from the surface of the head.
Systems like Neuralink's, however, do not use scalp EEG: they record signals directly from the brain, via micro-electrodes implanted in the cortex. The physical principle is the same: the brain is an electrochemical system and its activity produces measurable signals. What changes is where, and with what resolution, these signals are collected.

Electrodes implanted in the cortex, like those from Neuralink, are in direct contact with neurons, particularly in the motor cortex, the area that plans and executes movements.
Unlike EEG, they do not record signals attenuated by the skull and they do not “see” millions of neurons at once; instead, they pick up the local activity of small neuronal populations.

These electrodes measure action potentials (spikes), variations in local activity, and extremely precise temporal patterns.
The signal is richer, more stable and much more informative, and it is this richness that allows fine decoding of movement.
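As a rough illustration, detecting spikes in such recordings often amounts to flagging voltage excursions that cross a noise-scaled threshold. The sketch below uses a common median-based noise estimate; the function name, parameter values and synthetic trace are all invented for illustration.

```python
import numpy as np

def detect_spikes(signal, fs, threshold_factor=-5.0):
    """Return times (seconds) where the voltage crosses a noise-scaled
    threshold downward, a common first step in spike detection."""
    noise = np.median(np.abs(signal)) / 0.6745   # robust noise estimate
    threshold = threshold_factor * noise         # negative-going spikes
    below = signal < threshold
    crossings = np.flatnonzero(below[1:] & ~below[:-1])
    return (crossings + 1) / fs

# Synthetic trace: 0.1 s of background noise plus three injected "spikes"
rng = np.random.default_rng(0)
fs = 30_000                                      # typical intracortical rate, Hz
trace = rng.normal(0.0, 1.0, fs // 10)
for t in (0.01, 0.05, 0.09):
    trace[int(t * fs)] -= 15.0                   # large negative deflection
print(detect_spikes(trace, fs))                  # three spike times
```

Real spike sorting goes much further (waveform extraction, clustering by shape), but threshold crossing is the basic trick that turns a raw voltage trace into discrete neural events.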

Movement intention: what changes in the brain

When we decide to move a hand, or even just imagine doing so, a reorganization of neuronal activity occurs in the motor cortex. There is no single neuron that “means” right hand. There are distributed patterns, in which some neurons increase their firing rate, others decrease it, and the overall balance shifts systematically. These patterns are repeatable: every time a person imagines the same movement, the pattern of neural activity is similar.
And it is precisely this regularity that a BCI can exploit.

The computer connected to the system does not receive thoughts: it receives firing rates, timings and correlations between electrodes. The neural signal is amplified and digitized, the relevant features are extracted (which neurons fire, when, and with what intensity), and machine learning algorithms associate these patterns with an action.
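In practice, “extracting relevant features” often means binning each electrode's spike times into firing rates over short time windows. A minimal sketch, with invented electrode data:

```python
import numpy as np

def binned_firing_rates(spike_times, window, bin_size):
    """Turn per-electrode spike times (seconds) into firing-rate
    features: spikes per second in each time bin, per electrode."""
    n_bins = round(window / bin_size)
    edges = np.linspace(0.0, window, n_bins + 1)
    counts = np.array([np.histogram(t, bins=edges)[0] for t in spike_times])
    return counts / bin_size

# Two hypothetical electrodes over a 1-second window, 100 ms bins
spikes = [
    np.array([0.05, 0.12, 0.13, 0.55]),   # electrode 1
    np.array([0.81, 0.82, 0.83, 0.95]),   # electrode 2
]
rates = binned_firing_rates(spikes, window=1.0, bin_size=0.1)
print(rates.shape)   # (2, 10): 2 electrodes x 10 bins
```

The resulting matrix, one row per electrode and one column per time bin, is the kind of feature vector the decoding algorithms learn from.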

During training, the person is asked to carry out precise actions: imagine moving the right hand, imagine grabbing a bottle with the left hand, relax. The system learns, for that specific person, which neural configuration corresponds to which motor intention.
When the configuration reappears, the computer does not “understand” what the person wants to do: it recognizes an already known pattern and transforms it into an external action, moving a cursor, a prosthesis, a robotic arm.
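One way to picture this “recognizing, not understanding” step is a nearest-centroid decoder: during training the system stores the average firing-rate pattern for each imagined action, and at run time it picks the stored pattern closest to the incoming one. This is a deliberate simplification of the regression and classification models used in real systems, and every value below is invented:

```python
import numpy as np

# Hypothetical training result: average firing-rate vectors recorded
# while the person imagined each action (illustrative numbers only)
templates = {
    "move_cursor_left":  np.array([12.0, 3.0, 8.0, 1.0]),
    "move_cursor_right": np.array([2.0, 14.0, 1.0, 9.0]),
    "rest":              np.array([4.0, 4.0, 4.0, 4.0]),
}

def decode(features):
    """Return the action whose stored pattern is closest (Euclidean
    distance) to the new neural feature vector."""
    return min(templates, key=lambda a: np.linalg.norm(features - templates[a]))

# A new recording that resembles the "right" pattern
print(decode(np.array([3.0, 13.0, 2.0, 8.0])))   # -> move_cursor_right
```

The decoder never knows what “right” means; it only measures which known configuration the new activity resembles, exactly as described above.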

In many people with spinal cord injuries, strokes or neurodegenerative diseases, the problem is not that the brain “no longer knows what to do.” The motor circuits continue to generate the activity needed for movement, but the information can no longer reach the muscles. Brain-Computer Interfaces step in precisely at this point: they intercept neural activity upstream of the damage and divert it along an alternative path, to a computer, a prosthesis or a speech synthesizer. Functionally, it is like building an artificial bridge that bypasses the damaged part of the nervous system.

It is precisely this mechanism that has made possible some of the most impressive results obtained so far with Brain-Computer Interfaces. And this is where it is worth making an important clarification: Neuralink is not the only company working in this field, nor even the first.

Beyond Neuralink: other projects on brain-computer interfaces

For over twenty years, research groups and companies have been developing BCIs in clinical settings, often far from the media spotlight but with an enormous impact on the lives of the people involved. Some of the most advanced results come from the academic world: researchers at the University of California San Francisco and Stanford University have developed systems capable of restoring communication to people who had completely lost it.

In one such study, a patient who had been unable to move her facial muscles or utter a word for about 18 years was able to communicate again thanks to a BCI implanted in her brain. Around 200 electrodes were implanted in the cerebral cortex, in areas involved in language planning. The recorded neural activity was decoded and transformed into words, a digital voice and even a 3D avatar that moved its face consistently with what the person wanted to say. Even in this case, the system did not “read” sentences in the mind: it intercepted the motor intention of speaking, the signals the brain would have sent to the tongue, lips and larynx if the body had been able to execute them.

When Brain-Computer Interfaces began to develop about twenty years ago, many systems were based on electrodes positioned outside the head, on the surface of the scalp. These non-invasive approaches have an obvious limitation: the signal is weaker and noisier, because it has to pass through the skull and is contaminated by the activity of the muscles, eyes and body movements.

At the same time, they have a huge advantage: they do not require surgery. And it is precisely on this front that technology is making notable progress, thanks to better sensors and increasingly sophisticated algorithms.

From hospital to daily life: when brain-computer interfaces will arrive

At this point a question arises naturally: will all this ever reach our homes? Or, more precisely, our heads? For now, the answer is clear: not right away, and not in the form of brain implants. Invasive BCIs make sense when the benefit is enormous and far outweighs the risks: severe paralysis, movement disorders, complete loss of communication. The first true “consumer” use of Brain-Computer Interfaces will likely come from another direction: non-invasive BCIs.
Virtual reality headsets, headbands and glasses that, instead of recording the activity of individual neurons, measure the coordinated activity of large neuronal groups from the surface of the head.
Already today there are devices capable of estimating attention, mental fatigue and stress, and using this information to adapt video games, virtual environments, music, notifications or cognitive pauses based on the user’s mental state.
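Such consumer estimates typically come from band power in scalp EEG, for example comparing beta activity (associated with focus) against alpha and theta activity (associated with relaxed or drowsy states). Below is a toy sketch of that idea, with synthetic signals and an illustrative “engagement index”; it is not how any particular commercial device computes its scores.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Total spectral power of the signal between low and high Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum[(freqs >= low) & (freqs < high)].sum()

def engagement_index(signal, fs):
    """Beta / (alpha + theta) power ratio, one common heuristic for
    mental engagement in consumer EEG (illustrative, not clinical)."""
    theta = band_power(signal, fs, 4, 8)
    alpha = band_power(signal, fs, 8, 13)
    beta = band_power(signal, fs, 13, 30)
    return beta / (alpha + theta)

# Synthetic one-second traces: a "relaxed" 10 Hz alpha rhythm and a
# "focused" 20 Hz beta rhythm, each with a little measurement noise
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
relaxed = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=fs)
focused = np.sin(2 * np.pi * 20 * t) + 0.2 * rng.normal(size=fs)
print(engagement_index(relaxed, fs) < engagement_index(focused, fs))  # True
```

A headband that adapts a game or a notification schedule to your mental state is, at its core, tracking a handful of band-power ratios like this one over time.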