Latest news with #Wairagkar


Indian Express
02-07-2025
- Health
- Indian Express
How a brain implant and AI can help a paralysed person speak and sing short melodies
Many neurological diseases can disrupt the connection between the brain and the muscles that allow us to speak, including the jaw, lips and tongue. But researchers at the University of California, Davis, have now developed a brain-computer interface that quickly translates brain activity into audible words on a computer. This means that people who have lost the ability to speak through paralysis or disease can engage in natural conversations.

How does this work?

The interface uses electrodes, either implanted directly on the brain's surface or placed on the scalp, which decode electrical activity in the brain related to speech. It interprets signals associated with attempted or imagined speech, then translates them into usable outputs such as text or synthesized speech in real time via a computer. The latest study, on how this technology helped a man 'speak' flexibly and expressively through a computer, was published recently in the scientific journal Nature.

'The brain-computer interface described in this study is the first of its kind as it translates brain activity directly into expressive voice within milliseconds, giving the participant full control over not only what they speak but also how they speak. This has not been achieved before,' says first author Maitreyee Wairagkar, a project scientist at the UC Davis Neuroprosthetics Lab.

Why this new interface is practical

Assistive communication devices currently available to people with speech loss, such as eye-trackers and speller boards, are slow and tedious to use. 'A brain-computer interface offers a potential solution to restore communication by bypassing the damaged pathways of the nervous system and directly intercepting this information from the brain,' the researchers say.

How the next generation of brain-computer interfaces can reconstruct voice

According to Wairagkar, previous BCI studies deciphered brain activity and turned it into words on a computer. 'But speech is more than just words – not only what we say but also how we say it determines the meaning of what we want to convey. We change our intonation to express different emotions – all these nuanced aspects of speech are not captured by text-based communication technologies. Moreover, communication via text is slow, whereas our speech is fast and allows real-time conversations. The next generation brain-computer interface can modulate and even 'sing' short, simple melodies,' says Wairagkar.

On the scope of the study

The study was conducted on a patient with Amyotrophic Lateral Sclerosis (ALS), also known as motor neuron disease. It is a neurodegenerative disease that gradually weakens the muscles and leads to paralysis, so patients are unable to move or speak. Their cognition, or the ability to process the world around them, however, remains intact throughout the disease, which means that even if they want to speak or move, they are unable to do so because of the paralysis caused by ALS.

In this trial, four microelectrode arrays (devices containing multiple microscopic electrodes, 256 in total) were surgically placed in the area of the ALS patient's brain that controls the movement of his vocal tract, which in turn enables speech. Researchers then developed a brain-computer interface that translated his brain activity directly into voice, using artificial intelligence algorithms. It enabled him to speak expressively through a computer in real time.
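The pipeline the article describes, with electrodes recording neural activity, a decoder turning each short window of activity into speech sounds, and a synthesiser voicing them almost immediately, can be pictured very roughly with the sketch below. This is a minimal illustration in Python, not the UC Davis system: the class names, the 80 ms window, the linear decoder and the 40-phoneme inventory are all assumptions made only to show the decode-then-synthesise structure.

```python
# Minimal, hypothetical sketch of a streaming brain-to-voice loop.
# None of these classes come from the published study; they only
# illustrate the decode-then-synthesise structure described above.

import numpy as np

WINDOW_MS = 80     # assumed analysis window; the real system works in tens of milliseconds
N_CHANNELS = 256   # matches the 256 implanted electrodes mentioned in the article
N_PHONEMES = 40    # rough size of an English phoneme inventory (assumption)

class PhonemeDecoder:
    """Stand-in for a trained decoder; the study used artificial intelligence models."""
    def __init__(self, weights: np.ndarray):
        self.weights = weights          # shape: (N_CHANNELS, N_PHONEMES)

    def predict(self, features: np.ndarray) -> np.ndarray:
        """Map one window of neural features to phoneme probabilities."""
        logits = features @ self.weights
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()          # softmax over phonemes

def decode_stream(windows, decoder, synthesize_audio):
    """Consume neural feature windows and emit audio as soon as each one is decoded."""
    for features in windows:            # features: one (N_CHANNELS,) vector per window
        probs = decoder.predict(features)
        phoneme_id = int(np.argmax(probs))
        synthesize_audio(phoneme_id)    # e.g. feed a vocoder / speaker in real time

# Toy usage with random data, just to show the call pattern.
rng = np.random.default_rng(0)
decoder = PhonemeDecoder(rng.normal(size=(N_CHANNELS, N_PHONEMES)))
fake_windows = [rng.normal(size=N_CHANNELS) for _ in range(5)]
decode_stream(fake_windows, decoder, lambda p: print(f"phoneme {p} -> audio"))
```

The published system relies on trained artificial intelligence models and a proper voice synthesiser rather than the toy linear map used here, but the overall loop, decoding each small chunk of brain activity and speaking it immediately, is the idea the researchers describe.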
To train the artificial intelligence algorithms, researchers first asked the participant to speak the sentences displayed on the screen, so that they knew what he was trying to say. 'Then we trained these algorithms to map the brain activity patterns to the sounds he was trying to make with each word,' Wairagkar explains. (A rough illustrative sketch of this kind of supervised mapping follows below.)

What next?

Although the findings are promising, the study was done with a single clinical trial participant. It will now have to be expanded to other patients, including those who have speech loss from other causes such as stroke, to see whether the results can be replicated. 'We want to improve the intelligibility of the system such that it can be used reliably for day-to-day conversations. This could be achieved through developing more advanced artificial intelligence algorithms to decode brain activity, recording higher quality neural signals and improved brain implants,' says Dr Wairagkar.

Anuradha Mascarenhas is a journalist with The Indian Express and is based in Pune. A senior editor, Anuradha writes on health and research developments in the field of science and environment, and takes a keen interest in covering women's issues. With a career spanning over 25 years, Anuradha has also led teams and often coordinated the edition.
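For readers curious what the training step quoted above might look like in code, here is a rough sketch, again in Python and again entirely illustrative: windows of neural activity recorded while the participant attempted the cued sentences serve as inputs, the phoneme he was trying to produce at each moment (known from the cue) serves as the label, and a simple classifier learns the mapping. The array shapes, the scikit-learn model and the phoneme count below are assumptions; the study's actual decoders and alignment procedure are more sophisticated.

```python
# Hypothetical sketch of the supervised training step described above:
# cued sentences give known phoneme targets for each window of neural
# activity, and a classifier learns the mapping. This is only the idea,
# not the algorithm used in the published study.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

N_CHANNELS = 256
N_PHONEMES = 40
N_WINDOWS = 2000   # windows of activity recorded while the cued sentences were attempted

# Stand-ins for real data: neural features per window, and the phoneme
# the participant was trying to produce at that moment (known from the cue).
X = rng.normal(size=(N_WINDOWS, N_CHANNELS))
y = rng.integers(0, N_PHONEMES, size=N_WINDOWS)

# Fit a simple multinomial classifier as the "brain activity -> sound" map.
decoder = LogisticRegression(max_iter=200)
decoder.fit(X, y)

# At run time, each new window is decoded into phoneme probabilities.
new_window = rng.normal(size=(1, N_CHANNELS))
print(decoder.predict_proba(new_window).argmax())
```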


Hindustan Times
21-06-2025
- Health
- Hindustan Times
Pune scientist leads global team helping ALS patients regain voice
PUNE: A Pune-based scientist is at the front and centre of a major milestone in neurotechnology: an implant-based brain-computer interface (BCI) that enables an individual with advanced amyotrophic lateral sclerosis (ALS) to speak in real time with natural intonation, and even sing.

Dr Maitreyee Wairagkar, a former student of Jnana Prabodhini (Nigdi) and Fergusson College who completed her Master's in engineering and PhD in the United Kingdom, has been based at UC Davis as a project scientist leading the project for the last three years, and has set an example of what Indian girls can achieve when given the chance. Working with her team of researchers at UC Davis's Neuroprosthetics Laboratory, Dr Wairagkar has led the project from conception through design to execution and developed this 'first-in-the-world' technology: a brain-to-voice neuroprosthesis capable of synthesising speech with less than a 25-millisecond delay, virtually indistinguishable from natural vocal feedback. Dr Wairagkar is the first author of the study, published in the scientific journal Nature on June 12, 2025.

Drawing on Dr Wairagkar's expertise, her team developed algorithms to extract and denoise neural features, train phoneme and pitch decoders, and craft a full end-to-end voice synthesis system. Dr Wairagkar and her team have enabled the decoding of fine-grained paralinguistic cues, allowing the user to express not just words but also emotion and melody.

The system uses 256 microscale electrodes implanted in the ventral precentral gyrus, a part of the brain crucial for speech production. In the course of the study, as the participant attempted to speak, neural signals were decoded in real time into phonemes and paralinguistic features such as pitch and emphasis, and subsequently transformed into audible speech through a vocoder and speaker system. Importantly, the participant was able not only to communicate new words but also to ask questions, shift intonation and sing simple melodies, a major leap towards expressive, spontaneous communication.

About the achievement, Dr Wairagkar said, 'What makes this technology extraordinary is not just that it translates brain activity into speech, but that it does so with the flow and character of natural voice. That expressiveness is what makes real conversation possible, and human.' Dr Wairagkar's contributions allowed the participant to control tone and stress in real time, a feature absent in earlier BCIs, which often relied on slow, word-by-word output.

Senior researchers at UC Davis, including Dr Sergey Stavisky and neurosurgeon Dr David Brandman, emphasised the emotional and practical impact of the work. 'This is the first time we have been able to restore a participant's own voice in real time, allowing him not only to talk but to sound like himself,' said Dr Stavisky. Dr Brandman, who implanted the arrays under the BrainGate2 clinical trial, highlighted the emotional power of restoring not just speech but the participant's own voice. Test listeners recognised nearly 60% of the words correctly when the BCI-driven voice was used (compared with just 4% intelligibility for his natural, dysarthric speech), underscoring a dramatic improvement in communication clarity.
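As a rough, purely illustrative picture of what decoding phonemes plus paralinguistic features and handing them to a vocoder could look like, the Python sketch below produces a phoneme identity, a pitch value and an emphasis value from one window of neural activity and turns them into a short burst of audio. Only the 256-electrode count and the sub-25-millisecond latency target come from the article; every function, shape and mapping here is an assumption, not the published system.

```python
# Illustrative sketch (not the published system) of decoding both the
# phoneme and paralinguistic features (pitch, emphasis) from one window
# of neural activity, then handing them to a vocoder-like synthesiser.
# The 25 ms figure is the end-to-end latency reported in the article;
# everything else (shapes, mappings, function names) is assumed.

import time
import numpy as np

N_CHANNELS = 256
N_PHONEMES = 40
LATENCY_BUDGET_S = 0.025   # under 25 ms from brain activity to audible voice

rng = np.random.default_rng(2)
W_phoneme = rng.normal(size=(N_CHANNELS, N_PHONEMES))
w_pitch = rng.normal(size=N_CHANNELS)
w_emphasis = rng.normal(size=N_CHANNELS)

def decode_window(features):
    """Return (phoneme id, pitch in Hz, emphasis 0..1) for one feature window."""
    phoneme = int(np.argmax(features @ W_phoneme))
    pitch_hz = 100.0 + 50.0 * np.tanh(features @ w_pitch)      # toy mapping to a plausible range
    emphasis = 1.0 / (1.0 + np.exp(-(features @ w_emphasis)))  # sigmoid to 0..1
    return phoneme, pitch_hz, emphasis

def vocode(phoneme, pitch_hz, emphasis, sr=16000, dur=0.08):
    """Toy stand-in for a vocoder: a pitched tone scaled by emphasis.
    A real vocoder would also shape the sound by the phoneme identity."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    return emphasis * np.sin(2 * np.pi * pitch_hz * t)

features = rng.normal(size=N_CHANNELS)
start = time.perf_counter()
audio = vocode(*decode_window(features))
elapsed = time.perf_counter() - start
print(f"decoded and vocoded one window in {elapsed * 1e3:.2f} ms "
      f"(budget {LATENCY_BUDGET_S * 1e3:.0f} ms)")
```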
The neuroprosthesis not only decodes speech at the phoneme level but also captures prosody (how a sentence is said), making it the closest attempt yet at recreating natural, flowing conversation from thought alone. This milestone represents a profound shift in assistive communication for people living with ALS, brainstem strokes or other forms of locked-in syndrome. It also puts India at the centre of a transformative global scientific collaboration through Dr Wairagkar's involvement.

The researchers note that although the findings are promising, brain-to-voice neuroprostheses remain in an early phase. A key limitation is that the research was performed with a single participant with ALS. It will be crucial to replicate these results with more participants, including those who have speech loss from other causes such as stroke. As further trials progress and the technology is refined, experts believe this innovation could redefine how neurotechnology restores voice and identity for millions who are otherwise left voiceless.