Neuroscience Technology

For the First Time in History, Brain-Machine Interfaces Achieve Two-Way Communication


Scientists have been developing brain-machine interfaces since the early seventies, mainly to provide neural prostheses for paralyzed patients and amputees.

A prosthetic limb controlled directly by brain activity can partially restore lost motor function. This is achieved by decoding the neuronal activity recorded with electrodes and translating it into robotic movements. Because no sensory feedback returns from the artificial limb, however, these systems suffer from limited precision. Neuroscientists at the University of Geneva (UNIGE) in Switzerland explored whether this missing sensation could be transmitted back to the brain by stimulating neural activity in the cortex. They found not only that it is possible to create an artificial sensation of neuroprosthetic movements, but also that the underlying learning process happens very quickly.

Motor function allows us to interact with the world and is at the core of all behavior. As a result, much research focuses on replacing a lost limb with a robotic prosthesis, yet successful outcomes remain rare. This is because, until now, brain-machine interfaces have relied largely on visual perception: the user controls the robotic arm by watching it. The direct flow of information between the brain and the machine is therefore unidirectional. Movement perception, however, is based not only on vision but also on proprioception, the sense of where the limb is located in space. Daniel Huber, professor in the Department of Basic Neurosciences of the Faculty of Medicine at UNIGE, explained that the team asked whether it would be possible to establish bidirectional communication in a brain-machine interface: a device that could simultaneously read out neural activity, translate it into prosthetic movement, and inject sensory feedback of that movement back into the brain.

A pioneering optical brain-machine interface allows two-way communication with the brain. While a robotic arm is controlled by neuronal activity recorded with optical imaging (red laser), the position of the arm is broadcast back to the brain via optical microstimulation (blue laser).
Image Credit: © Daniel Huber – UNIGE
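
To make this loop concrete, here is a minimal Python sketch of one cycle of such a bidirectional interface: activity is decoded into a movement command, the arm moves, and its new position is encoded back into a stimulation signal. The function names, gains, and rates are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of one cycle of a bidirectional BMI loop.
# All names and numbers below are hypothetical placeholders.

def decode_to_velocity(activity: float, gain: float = 2.0) -> float:
    """Translate the control neuron's activity into an arm velocity
    command (the simplest possible linear decoder)."""
    return gain * activity

def encode_feedback(arm_position: float, max_rate_hz: float = 40.0) -> float:
    """Map the arm's position onto an optical stimulation rate, giving
    the sensory cortex a proprioception-like feedback signal."""
    return min(max_rate_hz, max(0.0, arm_position) * max_rate_hz)

# One pass through the closed loop:
activity = 0.6                      # normalized firing of the chosen neuron
arm_position = 0.3                  # current normalized arm position
velocity = decode_to_velocity(activity)
arm_position += velocity * 0.05     # integrate velocity over one time step
stim_rate = encode_feedback(arm_position)
print(f"move at velocity {velocity:.2f}, stimulate at {stim_rate:.1f} Hz")
```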

Huber’s team specializes in optical techniques for stimulating and imaging brain activity, rather than invasive techniques that rely on electrodes. Using a method called two-photon microscopy, they routinely measure the activity of hundreds of neurons with single-cell resolution.
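
As a rough illustration of what single-cell resolution means in practice, the sketch below pulls per-neuron traces out of an imaging movie by averaging the pixels inside each cell's region of interest and computing a ΔF/F signal. The array shapes and percentile baseline are assumptions made for the example, not the lab's actual analysis pipeline.

```python
import numpy as np

def roi_traces(movie: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """movie: (frames, height, width) fluorescence stack;
    masks: (cells, height, width) boolean ROI masks.
    Returns a (cells, frames) array of mean fluorescence per cell."""
    flat = movie.reshape(movie.shape[0], -1)          # frames x pixels
    flat_masks = masks.reshape(masks.shape[0], -1)    # cells x pixels
    sums = flat_masks.astype(float) @ flat.T          # cells x frames
    return sums / flat_masks.sum(axis=1, keepdims=True)

def dff(traces: np.ndarray, baseline_pct: float = 20.0) -> np.ndarray:
    """Delta-F-over-F with a per-cell percentile baseline."""
    f0 = np.percentile(traces, baseline_pct, axis=1, keepdims=True)
    return (traces - f0) / f0

# Toy usage with random data standing in for a recording:
movie = np.random.rand(100, 64, 64)                   # 100 frames, 64x64 px
masks = np.zeros((2, 64, 64), dtype=bool)
masks[0, 10:14, 10:14] = True                         # "cell" 1
masks[1, 40:44, 40:44] = True                         # "cell" 2
print(dff(roi_traces(movie, masks)).shape)            # (2, 100)
```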

Mario Prsa, a researcher at UNIGE and the first author of the study, explained that they wanted to test whether mice could learn to control a neural prosthesis by relying solely on an artificial sensory feedback signal. To do so, they imaged neural activity in the motor cortex. Whenever the mouse activated the specific neuron chosen for neuroprosthetic control, stimulation proportional to this activity was simultaneously applied to the sensory cortex using blue light.

Neurons of the sensory cortex had been rendered photosensitive to this light, allowing them to be activated by a series of optical flashes that served as the artificial sensory feedback signal. Each time the mouse achieved an above-threshold activation, it was rewarded, and after 20 minutes, once the association had been learned, the rodent generated the correct neuronal activity more frequently.
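
The conditioning logic amounts to a simple threshold-and-reward rule. The toy simulation below, using an assumed threshold and made-up activity distributions, illustrates only the qualitative point: once the association is learned, above-threshold events, and hence rewards, become more frequent.

```python
import random

THRESHOLD = 0.8  # assumed activation level required for a reward

def activity_sample(learned: bool) -> float:
    """Simulated activity of the control neuron; after learning,
    high-activity events are drawn more often (made-up distributions)."""
    return random.betavariate(5, 2) if learned else random.betavariate(2, 5)

def count_rewards(learned: bool, trials: int = 100) -> int:
    """Count trials on which the simulated activity crosses threshold."""
    return sum(activity_sample(learned) > THRESHOLD for _ in range(trials))

random.seed(0)
print("rewards before learning:", count_rewards(False))
print("rewards after learning: ", count_rewards(True))
```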

This means that the artificially created sensation was not only perceived but also successfully integrated as feedback of the prosthetic movement; in this manner, the brain-machine interface functions bidirectionally. The Geneva researchers believe this fictitious sensation is assimilated so rapidly because it most likely taps into very basic brain functions. Feeling the position of our limbs happens automatically, without much thought, which suggests that fundamental neural circuit mechanisms are at work.

In the future, this type of bidirectional interface might enable the development of robotic arms that move more precisely, feel the objects they touch, or perceive the force required to grasp them. The UNIGE neuroscientists are also investigating how to make the sensory feedback more efficient: at the moment it works for a single movement only, but they are exploring the possibility of providing multiple feedback channels in parallel. This research lays the groundwork for a new generation of more precise, bidirectional neural prostheses.
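
One way to picture multiple parallel channels: each arm variable gets its own stimulation target and its own rate mapping. The sketch below is speculative; the channel names, targets, and mappings are invented purely to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class ArmState:
    position: float    # normalized joint position, 0..1
    contact: bool      # is the gripper touching an object?
    grip_force: float  # normalized grip force, 0..1

def feedback_channels(state: ArmState) -> dict:
    """Return one hypothetical stimulation rate (Hz) per feedback channel."""
    return {
        "proprioception": 40.0 * state.position,
        "touch": 25.0 if state.contact else 0.0,
        "force": 30.0 * state.grip_force,
    }

print(feedback_channels(ArmState(position=0.5, contact=True, grip_force=0.2)))
```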

Using modern imaging tools, hundreds of neurons in the surrounding area could also be observed while the mouse learned the neuroprosthetic task. Huber notes that, despite the millions of neural connections involved, the animal activated only the one neuron chosen for controlling the prosthetic action and none of its neighbors. He finds this interesting because it shows that the brain can home in on and specifically control the activity of a single neuron. This knowledge could be exploited not only to develop more stable and precise decoding techniques, but also to better understand the most basic neural circuit functions. Which mechanisms route signals to the uniquely activated neuron remains to be discovered.

The full peer-reviewed study was published in the journal Neuron.


  • Weamy “AKA” Iby

    This isn’t the first time two-way communication has been done. I believe DARPA did this in late 2015. http://www.darpa.mil/news-events/2015-09-11

    • Cosmo

      This study is fundamentally different from what DARPA did. Here they use cortical stimulation to provide proprioceptive-like feedback of the prosthetic movement. This feedback signal is used to guide the decoded neurons during learning. In the DARPA study, they use cortical stimulation to provide a tactile input once the prosthesis comes into contact with an object. But the actual prosthetic movement towards that target is visually guided.