The brain tech revolution is here — and it isn't all Black Mirror

Vox · 3 days ago
The author is a senior editorial director at Vox overseeing the climate teams and the Unexplainable and The Gray Area podcasts. He is also the editor of Vox's Future Perfect section and writes the Good News newsletter. He worked at Time magazine for 15 years as a foreign correspondent in Asia, a climate writer, and an international editor, and he wrote a book on existential risk.
When you hear the word 'neurotechnology,' you may picture Black Mirror headsets prying open the last private place we have — our own skulls — or the cyber-samurai of William Gibson's Neuromancer. That dread is natural, but it can blind us to the real progress neurotech is making against long-intractable medical problems rooted in the brain. In just the past 18 months, brain tech has cleared three hurdles at once: smarter algorithms, shrunken hardware, and — most important — proof that people can feel the difference in their bodies and their moods.
A pacemaker for the brain
Keith Krehbiel has battled Parkinson's disease for nearly a quarter-century. By 2020, as Nature recently reported, the tremors were winning — until neurosurgeons slipped Medtronic's Percept device into his head. Unlike older deep-brain stimulators that carpet-bomb movement control regions in the brain with steady current, the Percept listens first. It hunts the beta-wave 'bursts' in the brain that mark a Parkinson's flare and then fires back millisecond by millisecond, an adaptive approach that mimics the way a cardiac pacemaker paces an arrhythmic heart.
In the ADAPT-PD study, patients like Krehbiel moved more smoothly, took fewer pills, and overwhelmingly preferred the adaptive mode to the regular one. Regulators on both sides of the Atlantic agreed: The system now has US and EU clearance.
Because the electrodes spark only when symptoms do, total energy use is reduced, increasing battery life and delaying the next skull-opening surgery. Better yet, because every Percept shipped since 2020 already has the sensing chip, the adaptive mode can be activated with a simple firmware push, the way you'd update your iPhone.
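To make that listen-first, fire-back loop concrete, here is a minimal, purely illustrative Python sketch of threshold-based closed-loop stimulation. It is not Medtronic's firmware; the beta-power threshold, the stimulation amplitudes, and the simulated signal are all invented for illustration.

```python
# Illustrative sketch of closed-loop ("adaptive") deep-brain stimulation.
# NOT Medtronic's algorithm: the threshold, amplitudes, and signal are made up.
import random

BETA_THRESHOLD = 1.5   # hypothetical beta-band power marking a symptom "burst"
STIM_ON_MA = 3.0       # stimulation amplitude while a burst is detected (mA)
STIM_OFF_MA = 0.0      # no stimulation while the brain is quiet

def read_beta_power() -> float:
    """Stand-in for sensing local field potentials and extracting beta-band power."""
    return random.gauss(1.0, 0.5)

def adaptive_step(beta_power: float) -> float:
    """Listen first, then fire back: stimulate only when a beta burst appears."""
    return STIM_ON_MA if beta_power > BETA_THRESHOLD else STIM_OFF_MA

adaptive_energy, continuous_energy = 0.0, 0.0
for _ in range(10_000):                      # millisecond-scale control steps
    amplitude = adaptive_step(read_beta_power())
    adaptive_energy += amplitude             # crude proxy for battery drain
    continuous_energy += STIM_ON_MA          # what an always-on stimulator would spend

print(f"adaptive used {adaptive_energy / continuous_energy:.0%} of continuous energy")
```

Because current flows only while a burst is detected, the tally at the end mirrors the battery savings described above; an always-on stimulator spends the full amount on every step.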
Waking quiet muscles
This year, scientists applied the same listen-then-zap logic farther down the nervous system, to the spinal cord. In a Nature Medicine pilot, researchers in Pittsburgh laid two slender electrode strips over the sensory roots of the lumbar spine in three adults with spinal muscular atrophy. Gentle pulses 'reawakened' half-dormant motor neurons: Every participant walked farther, tired less, and — astonishingly — one person strode from home to the lab without resting.
Half a world away, surgeons at Nankai University threaded a 50-micron-thick 'stent-electrode' through a patient's jugular vein, fanned it against the motor cortex, and paired it with a sleeve that twitched his arm muscles. No craniotomy, no ICU — just a quick catheter procedure that let a stroke survivor lift objects and move a cursor. High-tech rehab is inching toward outpatient care.
Mental-health care on your couch
The brain isn't only wires and muscles; mood lives there, too. In March, the Food and Drug Administration tagged a visor-like headset from Pulvinar Neuro as a Breakthrough Device for major depressive disorder. The unit drips alternating and direct currents while an onboard algorithm reads brain rhythms on the fly, and clinicians can tweak the recipe over the cloud. The technology offers a ray of hope for patients whose depression has resisted conventional treatments such as drugs.
Thought cursors and synthetic voices
Cochlear implants for people with hearing loss once sounded like sci-fi; today more than 1 million people hear through them. That proof-of-scale has emboldened a new wave of brain-computer interfaces, including from Elon Musk's startup Neuralink. The company's first user, 30-year-old quadriplegic Noland Arbaugh, told Wired last year he now 'multitasks constantly' with a thought-controlled cursor, clawing back some of the independence lost to a 2016 spinal-cord injury. Neuralink isn't as far along as Musk often claims — Arbaugh's device ran into problems when some of its electrode threads retracted from his brain — but the promise is there.
On the speech front, new systems are decoding neural signals into text on a computer screen, or even synthesized voice. In 2023, researchers at Stanford and the University of California San Francisco implanted brain devices in two women who had lost the ability to speak and decoded their attempted speech at 62 and 78 words per minute, far faster than previous brain-computer interfaces. That's still well short of the roughly 160 words per minute of natural English speech, but more recent advances are closing the gap.
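It can be hard to picture what 'decoding neural signals into text' means computationally. Below is a toy Python sketch that matches a noisy window of simulated 'electrode' features against per-word templates. Everything in it (the 64 features, the five-word vocabulary, the 0.8-second window) is an invented simplification; the actual Stanford and UCSF systems used recurrent neural networks and language models over decoded phonemes, not nearest-neighbor lookup.

```python
# Toy sketch of turning windows of "neural" features into words by template matching.
# All numbers here are invented for illustration, not taken from the real studies.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["hello", "water", "help", "yes", "no"]
TEMPLATES = rng.normal(size=(len(VOCAB), 64))   # one firing pattern per word, 64 "electrodes"
WINDOW_SECONDS = 0.8                            # assumed neural data consumed per decoded word

def decode(features: np.ndarray) -> str:
    """Return the vocabulary word whose template best matches the observed features."""
    distances = np.linalg.norm(TEMPLATES - features, axis=1)
    return VOCAB[int(np.argmin(distances))]

trials, correct = 500, 0
for _ in range(trials):
    true_idx = int(rng.integers(len(VOCAB)))
    observed = TEMPLATES[true_idx] + rng.normal(scale=0.5, size=64)   # noisy "recording"
    correct += decode(observed) == VOCAB[true_idx]

print(f"accuracy: {correct / trials:.0%}; "
      f"throughput at one word per window: {60 / WINDOW_SECONDS:.0f} wpm")
```

The throughput line also shows why window length matters: one word per 0.8 seconds works out to 75 words per minute, in the same range as the results reported above.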
Guardrails for gray matter
Yes, neurotech has a shadow. Brain signals could reveal a person's mood, maybe even a voting preference. Europe's new AI Act now treats 'neuro-biometric categorization' — technologies that can classify individuals by biometric information, including brain data — as high-risk, demanding transparency and opt-outs, while the US BRAIN Initiative 2.0 is paying for open-source toolkits so anyone can pop the hood on the algorithms.
And remember the other risk: doing nothing. Refusing a proven therapy because it feels futuristic is a little like turning down antibiotics in 1925 because a drug that came from mold seemed weird.
Twentieth-century medicine tamed the chemistry of the body; 21st-century medicine is learning to tune the electrical symphony inside the skull. When it works, neurotech acts less like a hammer than a tuning fork — nudging each section back on pitch, then stepping aside so the music can play.
Real patients are walking farther, talking faster, and, in some cases, simply feeling like themselves again. The challenge now is to keep our fears proportional to the risks — and our imaginations wide enough to see the gains already in hand.
A version of this story originally appeared in the Good News newsletter. Sign up here!