

How a brain implant and AI can help a paralysed person speak and sing short melodies

Indian Express

5 days ago


Many neurological diseases can disrupt the connection between the brain and the muscles that allow us to speak, including the jaw, lips and tongue. But researchers at the University of California, Davis, have now developed a brain-computer interface (BCI) that quickly translates brain activity into audible words on a computer, which means that people who have lost the ability to speak through paralysis or disease can engage in natural conversations.

How does this work? The interface uses electrodes, either implanted directly on the brain's surface or placed on the scalp, to decode electrical activity in the brain related to speech. It interprets signals associated with attempted or imagined speech, then translates them in real time into usable outputs such as text or synthesised speech on a computer. The latest study, on how this technology helped a man 'speak' flexibly and expressively through a computer, was published recently in the scientific journal Nature.

'The brain-computer interface described in this study is the first of its kind as it translates brain activity directly into expressive voice within milliseconds, giving the participant full control over not only what they speak but also how they speak. This has not been achieved before,' says first author Maitreyee Wairagkar, a project scientist at the UC Davis Neuroprosthetics Lab.

Why this new module is practical

Assistive communication devices currently available to people with speech loss, such as eye-trackers and speller boards, are slow and tedious to use. 'A brain-computer interface offers a potential solution to restore communication by bypassing the damaged pathways of the nervous system and directly intercepting this information from the brain,' say the researchers.

How the next generation of brain-computer interfaces can reconstruct voice

According to Wairagkar, previous BCI studies deciphered brain activity and turned it into words on a computer.
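The decoding flow described above, in which short windows of electrode activity are interpreted and turned into speech output in real time, can be sketched roughly as follows. This is a minimal illustration, not the study's actual system: the 256-channel count matches the implanted electrodes mentioned later, but the linear decoder, the 40-phoneme inventory and all variable names are assumptions for demonstration.

```python
import numpy as np

# Minimal sketch of a real-time decode loop. Only the 256-channel count
# comes from the article; the linear decoder, phoneme inventory and
# synthetic feature windows are illustrative assumptions.
N_ELECTRODES = 256
N_PHONEMES = 40

rng = np.random.default_rng(0)

# Stand-in for a trained decoder: a fixed linear map from neural features
# to phoneme scores (the real system uses learned AI models).
W = rng.normal(size=(N_PHONEMES, N_ELECTRODES))

def decode_window(features):
    """Map one window of neural features to the most likely phoneme id."""
    scores = W @ features
    return int(np.argmax(scores))

# Simulated streaming input: each iteration consumes one short window of
# electrode features and emits a phoneme index with minimal latency.
stream = rng.normal(size=(5, N_ELECTRODES))  # five synthetic feature windows
decoded = [decode_window(w) for w in stream]
print(decoded)
```

In the actual interface, per-window outputs like these would drive a speech synthesiser rather than print indices, which is what allows speech to emerge within milliseconds instead of after a completed sentence.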
'But speech is more than just words – not only what we say but also how we say it determines the meaning of what we want to convey. We change our intonation to express different emotions – all these nuanced aspects of speech are not captured by text-based communication technologies. Moreover, communication via text is slow, whereas our speech is fast and allows real-time conversations. The next-generation brain-computer interface can modulate intonation and even 'sing' short, simple melodies,' says Wairagkar.

On the scope of the study

The study was conducted on a patient with Amyotrophic Lateral Sclerosis (ALS), also known as motor neuron disease. It is a neurodegenerative disease that gradually weakens the muscles and leads to paralysis, so patients become unable to move or speak. Their cognition, the ability to process the world around them, remains intact throughout the disease, which means that even when they want to speak or move, the paralysis caused by ALS prevents them from doing so.

In this trial, four microelectrode arrays (devices containing multiple microscopic electrodes, 256 in this case) were surgically placed in the area of the ALS patient's brain that controls the movement of his vocal tract, which in turn enables speech. The researchers then developed a brain-computer interface that used artificial intelligence algorithms to translate his brain activity directly into voice, enabling him to speak expressively through a computer in real time.

To train the artificial intelligence algorithms, the researchers first asked the participant to speak sentences displayed on a screen, so that they knew what he was trying to say. 'Then we trained these algorithms to map the brain activity patterns to the sounds he was trying to make with each word,' Wairagkar explains.

What next?

Although the findings are promising, the study was done with a single clinical trial participant.
It will now have to be expanded to other patients, including those who have lost speech from other causes such as stroke, to see whether the result can be replicated. 'We want to improve the intelligibility of the system such that it can be used reliably for day-to-day conversations. This could be achieved through developing more advanced artificial intelligence algorithms to decode brain activity, recording higher quality neural signals and improved brain implants,' says Dr Wairagkar.

Anuradha Mascarenhas is a journalist with The Indian Express and is based in Pune. A senior editor, Anuradha writes on health and research developments in science and the environment, and takes a keen interest in covering women's issues. With a career spanning over 25 years, Anuradha has also led teams and often coordinated the edition.
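The training procedure Wairagkar describes, mapping recorded brain activity to the sounds the participant was trying to make while reading cued sentences, amounts to a supervised fit. A plain least-squares regression stands in for the study's AI algorithms in this sketch; all array shapes, the synthetic data and the variable names are assumptions.

```python
import numpy as np

# Illustrative training step: fit a map from neural activity (recorded
# while the participant attempts cued sentences) to aligned target sound
# features. Ordinary least squares stands in for the study's AI models;
# all shapes and the synthetic data are assumptions.
rng = np.random.default_rng(1)

n_windows, n_electrodes, n_audio_feats = 400, 256, 32
X = rng.normal(size=(n_windows, n_electrodes))             # neural features
true_map = rng.normal(size=(n_electrodes, n_audio_feats))  # unknown mapping
Y = X @ true_map + 0.01 * rng.normal(size=(n_windows, n_audio_feats))

# Fit W minimising ||X W - Y||^2, i.e. learn the activity-to-sound map.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The fitted map reconstructs the training targets almost exactly here,
# because the synthetic relationship really is linear.
err = np.linalg.norm(X @ W - Y) / np.linalg.norm(Y)
print(f"relative training error: {err:.4f}")
```

Knowing the cued sentence is what makes this supervised: it supplies the target sounds that the brain activity windows are aligned against.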
