
Latest news with #MaitreyeeWairagkar

How a brain implant and AI can help a paralysed person speak and sing short melodies

Indian Express

02-07-2025

  • Health
  • Indian Express

How a brain implant and AI can help a paralysed person speak and sing short melodies

Many neurological diseases can disrupt the connection between the brain and the muscles that allow us to speak, including the jaw, lips and tongue. But researchers at the University of California, Davis, have now developed a brain-computer interface that quickly translates brain activity into audible words on a computer, which means that people who've lost the ability to speak from paralysis or disease can engage in natural conversations.

How does this work?

The interface uses electrodes, either implanted directly on the brain's surface or placed on the scalp, which decode electrical activity in the brain related to speech. It interprets signals associated with attempted or imagined speech, then translates them into usable outputs like text or synthesized speech in real time via a computer. The latest study, on how this technology helped a man 'speak' flexibly and expressively through a computer, was published recently in the scientific journal Nature. 'The brain-computer interface described in this study is the first-of-its-kind as it translates brain activity directly into expressive voice within milliseconds, giving the participant full control over not only what they speak but also how they speak. This has not been achieved before,' says first author Maitreyee Wairagkar, who is a project scientist at the UC Davis Neuroprosthetics Lab.

Why this new module is practical

Assistive communication devices such as eye-trackers and speller boards that are currently available to people with speech loss are slow and tedious to use. 'A brain-computer interface offers a potential solution to restore communication by bypassing the damaged pathways of the nervous system and directly intercepting this information from the brain,' say the researchers.

How the next generation of brain-computer interface can reconstruct voice

According to Wairagkar, previous BCI studies deciphered brain activity and turned it into words on a computer. 'But speech is more than just words – not only what we say but also how we say it determines the meaning of what we want to convey. We change our intonation to express different emotions – all these nuanced aspects of speech are not captured by text-based communication technologies. Moreover communication via text is slow whereas our speech is fast and allows real-time conversations. The next generation brain-computer interface can modulate and even 'sing' short simple melodies,' says Wairagkar.

On the scope of the study

The study was conducted on a patient with Amyotrophic Lateral Sclerosis (ALS), also known as motor neuron disease. It is a neurodegenerative disease that gradually weakens the muscles and leads to paralysis, so patients are unable to move or speak. Their cognition, or ability to process the world around them, however, remains intact throughout the disease, which means that even if they want to speak or move, they are unable to do so because of the paralysis caused by ALS. In this trial, four microelectrode arrays (devices containing multiple microscopic electrodes, in this case 256) were surgically placed in the area of the ALS patient's brain that controls the movement of his vocal tract, which in turn enables speech. Researchers then developed a brain-computer interface that translated his brain activity directly into voice, using artificial intelligence algorithms. It enabled him to speak expressively through a computer in real time.
To train the artificial intelligence algorithms, researchers first asked the participant to try to speak the sentences displayed on the screen, so that they knew what he was trying to say. 'Then we trained these algorithms to map the brain activity patterns to the sounds he was trying to make with each word,' Wairagkar explains.

What next?

Although the findings are promising, the study was done with a single clinical trial participant. It will now have to be expanded to other patients, including those who have speech loss from other causes such as stroke, to see whether the results can be replicated. 'We want to improve the intelligibility of the system such that it can be used reliably for day-to-day conversations. This could be achieved through developing more advanced artificial intelligence algorithms to decode brain activity, recording higher quality neural signals and improved brain implants,' says Dr Wairagkar.
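The training approach Wairagkar describes above, cueing sentences on a screen so that each moment of brain activity can be paired with the sound the participant was trying to make, can be pictured with a minimal sketch. Everything below (the synthetic data, the array shapes and the simple logistic-regression decoder) is an illustrative assumption, not the study's actual code or model.

```python
# Minimal sketch, assuming synthetic data: each short window of neural features
# is paired with the phoneme the participant was attempting at that moment
# (known because the sentences were cued on screen). Shapes and the simple
# linear decoder are illustrative stand-ins, not the published pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_WINDOWS, N_CHANNELS, N_PHONEMES = 5000, 256, 40   # assumed sizes
X = rng.normal(size=(N_WINDOWS, N_CHANNELS))        # stand-in neural features
y = rng.integers(0, N_PHONEMES, size=N_WINDOWS)     # stand-in attempted-phoneme labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
decoder = LogisticRegression(max_iter=500).fit(X_train, y_train)
print(f"held-out phoneme accuracy: {decoder.score(X_test, y_test):.2f}")  # ~chance on random data
```

On real recordings, the decoder's held-out accuracy would indicate how reliably attempted sounds can be read out before any voice synthesis is attempted.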

Pune scientist leads global team helping ALS patients regain voice

Hindustan Times

21-06-2025

  • Health
  • Hindustan Times

Pune scientist leads global team helping ALS patients regain voice

PUNE: A Pune-based scientist is front and centre of a major milestone in neurotechnology: an implant-based brain-computer interface (BCI) that enables an individual with advanced amyotrophic lateral sclerosis (ALS) to speak in real time with natural intonation, and even sing.

Dr Maitreyee Wairagkar, a former student of Jnana Prabodhini (Nigdi) and Fergusson College who completed her engineering master's and PhD in the United Kingdom, is now a project scientist at UC Davis, where she has led the project for the past three years, setting an example of what Indian girls can achieve provided they get a chance. Working with her team of researchers at UC Davis's Neuroprosthetics Laboratory, Dr Wairagkar has led the project from conception to design to execution and developed this 'first-in-world' technology that demonstrates a brain-to-voice neuroprosthesis capable of synthesising speech with less than a 25-millisecond delay, virtually indistinguishable from natural vocal feedback. Dr Wairagkar is the first author of the study published in the scientific journal Nature on June 12, 2025.

Drawing on Dr Wairagkar's expertise, her team has developed algorithms to extract and denoise neural features, train phoneme and pitch decoders, and craft a full end-to-end voice synthesis system. Dr Wairagkar and team have enabled the decoding of fine-grained paralinguistic cues, allowing the user to express not just words but also emotion and melody. The system uses 256 microscale electrodes implanted in the ventral precentral gyrus, the part of the brain crucial for speech production. In the course of the study, as the participant attempted to speak, neural signals were decoded in real time into phonemes and paralinguistic features such as pitch and emphasis, and subsequently transformed into audible speech through a vocoder and speaker system. Importantly, the participant was not only able to communicate new words but also to ask questions, shift intonation, and sing simple melodies, in a major leap towards expressive, spontaneous communication.

About the achievement, Dr Wairagkar said, 'What makes this technology extraordinary is not just that it translates brain activity into speech, but that it does so with the flow and character of natural voice. That expressiveness is what makes real conversation possible, and human.'

Dr Wairagkar's contributions allowed the participant to control tone and stress in real time, a feature absent in earlier BCIs, which often relied on slow, word-by-word output. Senior researchers at UC Davis, including Dr Sergey Stavisky and neurosurgeon Dr David Brandman, emphasised the emotional and practical impact of the work. 'This is the first time we have been able to restore a participant's own voice in real time, allowing him not only to talk but to sound like himself,' said Dr Stavisky. Dr Brandman, who implanted the arrays under the BrainGate2 clinical trial, highlighted the emotional power of restoring not just speech, but the participant's own voice. Test listeners recognised nearly 60% of the words correctly when the BCI-driven voice was used (compared with just 4% intelligibility for the participant's natural, dysarthric speech), underscoring dramatic improvements in communication clarity.
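A rough way to picture the real-time pipeline described above (neural features decoded into phonemes plus pitch and emphasis, then rendered through a vocoder and speaker) is the loop below. The component names (`read_neural_frame`, `phoneme_decoder`, `pitch_decoder`, `vocoder`, `speaker`) are placeholders of my own, a sketch under assumptions rather than the study's actual interfaces.

```python
# Sketch of a streaming brain-to-voice loop, assuming hypothetical components
# supplied by the caller; none of these names come from the published system.
def stream_voice(read_neural_frame, phoneme_decoder, pitch_decoder, vocoder, speaker):
    """Turn each incoming neural feature frame into a short burst of audio."""
    while True:
        features = read_neural_frame()               # e.g. a 256-channel feature vector
        if features is None:                         # recording session ended
            break
        phoneme_probs = phoneme_decoder(features)    # which sound is being attempted
        pitch, emphasis = pitch_decoder(features)    # paralinguistic cues (prosody)
        audio_chunk = vocoder(phoneme_probs, pitch, emphasis)  # a few ms of audio
        speaker.play(audio_chunk)                    # keeps end-to-end delay very short
```

The key design point implied by the reporting is that prosody (pitch, emphasis) is decoded alongside the phonemes in every pass of the loop, rather than being added after whole words or sentences are recognised.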
The neuroprosthesis not only decodes speech at the phoneme level but also captures prosody—how a sentence is said—making it the closest attempt yet at recreating natural, flowing conversation from thought alone. This milestone represents a profound shift in assistive communication for people living with ALS, brainstem strokes, or other forms of locked-in syndrome. It also puts India at the centre of a transformative global scientific collaboration through Dr Wairagkar's involvement. The researchers note that although the findings are promising, brain-to-voice neuroprostheses remain in an early phase. A key limitation is that the research was performed with a single participant with ALS. It will be crucial to replicate these results with more participants, including those who have speech loss from other causes such as stroke. As further trials progress and the technology is refined, experts believe this innovation could redefine how neurotechnology restores voice and identity for millions who are otherwise left voiceless.

Brain Implant Lets Man with ALS Speak and Sing with His 'Real Voice'

Yahoo

13-06-2025

  • Health
  • Yahoo

Brain Implant Lets Man with ALS Speak and Sing with His 'Real Voice'

A man with a severe speech disability is able to speak expressively and sing using a brain implant that translates his neural activity into words almost instantly. The device conveys changes of tone when he asks questions, emphasizes the words of his choice and allows him to hum a string of notes in three pitches.

The system, known as a brain–computer interface (BCI), used artificial intelligence (AI) to decode the participant's electrical brain activity as he attempted to speak. The device is the first to reproduce not only a person's intended words but also features of natural speech such as tone, pitch and emphasis, which help to express meaning and emotion. In a study, a synthetic voice that mimicked the participant's own spoke his words within 10 milliseconds of the neural activity that signalled his intention to speak. The system, described today in Nature, marks a significant improvement over earlier BCI models, which streamed speech within three seconds or produced it only after users finished miming an entire sentence.

'This is the holy grail in speech BCIs,' says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands, who was not involved in the study. 'This is now real, spontaneous, continuous speech.'

The study participant, a 45-year-old man, lost his ability to speak clearly after developing amyotrophic lateral sclerosis, a form of motor neuron disease, which damages the nerves that control muscle movements, including those needed for speech. Although he could still make sounds and mouth words, his speech was slow and unclear. Five years after his symptoms began, the participant underwent surgery to insert 256 silicon electrodes, each 1.5-mm long, in a brain region that controls movement.

Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes, the subunits of speech that form spoken words. 'We don't always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,' explains Wairagkar. 'In order to do that, we have adopted this approach, which is completely unrestricted.'

The team also personalized the synthetic voice to sound like the man's own, by training AI algorithms on recordings of interviews he had done before the onset of his disease. The team asked the participant to attempt to make interjections such as 'aah', 'ooh' and 'hmm' and say made-up words. The BCI successfully produced these sounds, showing that it could generate speech without needing a fixed vocabulary.

Using the device, the participant spelt out words, responded to open-ended questions and said whatever he wanted, using some words that were not part of the decoder's training data. He told the researchers that listening to the synthetic voice produce his speech made him 'feel happy' and that it felt like his 'real voice'.

In other experiments, the BCI identified whether the participant was attempting to say a sentence as a question or as a statement. The system could also determine when he stressed different words in the same sentence and adjust the tone of his synthetic voice accordingly.
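The 'completely unrestricted' approach Wairagkar describes above (decoding the sounds being attempted rather than picking words from a fixed list) can be contrasted with a closed-vocabulary decoder in a small sketch. The function names, shapes and models here are hypothetical illustrations, not the study's components.

```python
# Illustrative contrast only: a closed-vocabulary word decoder versus an
# unrestricted sound-level decoder. All names and models are assumptions.
import numpy as np

def closed_vocab_decode(features, word_scorer, vocabulary):
    """Older style: choose the single most likely word from a fixed list."""
    scores = word_scorer(features)
    return vocabulary[int(np.argmax(scores))]   # cannot output 'hmm' or made-up words

def open_sound_decode(features, acoustic_model, vocoder):
    """Sound-level style: predict acoustic features, then synthesize them."""
    acoustic_frame = acoustic_model(features)   # e.g. a spectrogram-like frame
    return vocoder(acoustic_frame)              # any vocalization, including interjections
```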
'We are bringing in all these different elements of human speech which are really important,' says Wairagkar. Previous BCIs could produce only flat, monotone speech. 'This is a bit of a paradigm shift in the sense that it can really lead to a real-life tool,' says Silvia Marchesotti, a neuroengineer at the University of Geneva in Switzerland. The system's features 'would be crucial for adoption for daily use for the patients in the future.' This article is reproduced with permission and was first published on June 11, 2025.

Brain Implant Lets Man with ALS Speak and Sing with His 'Real Voice'

Scientific American

12-06-2025

  • Health
  • Scientific American

Brain Implant Lets Man with ALS Speak and Sing with His 'Real Voice'

A man with a severe speech disability is able to speak expressively and sing using a brain implant that translates his neural activity into words almost instantly. The device conveys changes of tone when he asks questions, emphasizes the words of his choice and allows him to hum a string of notes in three pitches.

The system, known as a brain–computer interface (BCI), used artificial intelligence (AI) to decode the participant's electrical brain activity as he attempted to speak. The device is the first to reproduce not only a person's intended words but also features of natural speech such as tone, pitch and emphasis, which help to express meaning and emotion. In a study, a synthetic voice that mimicked the participant's own spoke his words within 10 milliseconds of the neural activity that signalled his intention to speak. The system, described today in Nature, marks a significant improvement over earlier BCI models, which streamed speech within three seconds or produced it only after users finished miming an entire sentence.

'This is the holy grail in speech BCIs,' says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands, who was not involved in the study. 'This is now real, spontaneous, continuous speech.'

Real-time decoder

The study participant, a 45-year-old man, lost his ability to speak clearly after developing amyotrophic lateral sclerosis, a form of motor neuron disease, which damages the nerves that control muscle movements, including those needed for speech. Although he could still make sounds and mouth words, his speech was slow and unclear. Five years after his symptoms began, the participant underwent surgery to insert 256 silicon electrodes, each 1.5-mm long, in a brain region that controls movement.

Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes, the subunits of speech that form spoken words. 'We don't always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,' explains Wairagkar. 'In order to do that, we have adopted this approach, which is completely unrestricted.'

The team also personalized the synthetic voice to sound like the man's own, by training AI algorithms on recordings of interviews he had done before the onset of his disease. The team asked the participant to attempt to make interjections such as 'aah', 'ooh' and 'hmm' and say made-up words. The BCI successfully produced these sounds, showing that it could generate speech without needing a fixed vocabulary.

Freedom of speech

Using the device, the participant spelt out words, responded to open-ended questions and said whatever he wanted, using some words that were not part of the decoder's training data. He told the researchers that listening to the synthetic voice produce his speech made him 'feel happy' and that it felt like his 'real voice'.
In other experiments, the BCI identified whether the participant was attempting to say a sentence as a question or as a statement. The system could also determine when he stressed different words in the same sentence and adjust the tone of his synthetic voice accordingly. 'We are bringing in all these different elements of human speech which are really important,' says Wairagkar. Previous BCIs could produce only flat, monotone speech. 'This is a bit of a paradigm shift in the sense that it can really lead to a real-life tool,' says Silvia Marchesotti, a neuroengineer at the University of Geneva in Switzerland. The system's features 'would be crucial for adoption for daily use for the patients in the future.'
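As a toy illustration of how a decoded pitch contour could separate a question from a statement, as described above, one could simply check for a pitch rise at the end of a sentence. This rule-of-thumb is my own sketch; the study used learned decoders, not this heuristic.

```python
# Toy heuristic: a sentence whose decoded pitch rises toward its end is
# treated as a question. The real study used trained decoders, not this rule.
import numpy as np

def is_question(pitch_hz: np.ndarray, tail_fraction: float = 0.25, rise_threshold: float = 1.1) -> bool:
    """Return True if mean pitch in the final stretch exceeds the rest by a margin."""
    tail_start = int(len(pitch_hz) * (1 - tail_fraction))
    body, tail = pitch_hz[:tail_start], pitch_hz[tail_start:]
    return float(tail.mean()) > rise_threshold * float(body.mean())

statement = np.linspace(180, 150, 100)   # falling contour
question = np.linspace(160, 210, 100)    # rising contour
print(is_question(statement), is_question(question))   # False True
```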

First-of-its-kind brain computer helps man with ALS speak in real-time

India Today

12-06-2025

  • Health
  • India Today

First-of-its-kind brain computer helps man with ALS speak in real-time

In what could be one of the biggest breakthroughs in medical science and technology, a newly developed investigational brain-computer interface could restore the voice of people who have lost the ability to speak. A team from the University of California, Davis successfully demonstrated this new technology, which can instantaneously translate brain activity into voice as a person tries to speak. The technology promises to create an artificial vocal tract.

The details, published in the journal Nature, highlight how the study participant, who has amyotrophic lateral sclerosis (ALS), spoke through a computer with his family in real time. The technology changed his intonation and 'sang' simple melodies.

'Translating neural activity into text, which is how our previous speech brain-computer interface works, is akin to text messaging. It's a big improvement compared to standard assistive technologies, but it still leads to delayed conversation. By comparison, this new real-time voice synthesis is more like a voice call,' said Sergey Stavisky, senior author of the study.

The investigational brain-computer interface (BCI) was used during the BrainGate2 clinical trial at UC Davis Health. It consists of four microelectrode arrays surgically implanted into the region of the brain responsible for producing speech. The researchers collected data while the participant was asked to try to speak sentences shown to him on a computer screen.

'The main barrier to synthesizing voice in real-time was not knowing exactly when and how the person with speech loss is trying to speak. Our algorithms map neural activity to intended sounds at each moment of time. This makes it possible to synthesize nuances in speech and give the participant control over the cadence of his BCI-voice,' said Maitreyee Wairagkar, first author of the study.

The system translated the participant's neural signals into audible speech played through a speaker very quickly, within one-fortieth of a second, roughly the same delay a person experiences when they speak and hear the sound of their own voice. The technology also allowed the participant to say new words (words not already known to the system) and to make interjections. He was able to modulate the intonation of his generated computer voice to ask a question or emphasize specific words in a sentence. The process of instantaneously translating brain activity into synthesized speech is helped by advanced artificial intelligence algorithms.

The researchers note that "although the findings are promising, brain-to-voice neuroprostheses remain in an early phase. A key limitation is that the research was performed with a single participant with ALS. It will be crucial to replicate these results with more participants."
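A quick arithmetic check of the 'one-fortieth of a second' figure quoted above, which lines up with the sub-25-millisecond delay reported in the Hindustan Times piece on this page; the 10 ms decoding interval is taken from the Nature coverage above and assumed here only for comparison.

```python
# Quick arithmetic check of the reported latency; the 10 ms frame length is
# taken from the other reports on this page and assumed here for comparison.
frame_ms = 10
delay_s = 1 / 40                     # "one-fortieth of a second"
print(delay_s * 1000)                # 25.0 -> milliseconds, matching the <25 ms figure
print(delay_s * 1000 / frame_ms)     # 2.5  -> a couple of decoding frames end to end
```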
