Neuropathic pain has no immediate cause – research on a brain receptor may help stop this hard-to-treat condition


Yahoo, June 23, 2025
Pain is easy to understand until it isn't. A stubbed toe or sprained ankle hurts, but it makes sense because the cause is clear and the pain fades as you heal.
But what if the pain didn't go away? What if even a breeze felt like fire, or your leg burned for no reason at all? When pain lingers without a clear cause, that's neuropathic pain.
We are neuroscientists who study how pain circuits in the brain and spinal cord change over time. Our work focuses on the molecules that quietly reshape how pain is felt and remembered.
We didn't fully grasp how different neuropathic pain was from injury-related pain until we began working in a lab studying it. Patients spoke of a phantom pain that haunted them daily – unseen, unexplained and life-altering.
These conversations shifted our focus from symptoms to mechanisms. What causes this ghost pain to persist, and how can we intervene at the molecular level to change it?
Neuropathic pain stems from damage to or dysfunction in the nervous system itself. The system that was meant to detect pain becomes the source of it, like a fire alarm going off without a fire. Even a soft touch or breeze can feel unbearable.
Neuropathic pain doesn't just affect the body – it also alters the brain. Chronic pain of this nature often leads to depression, anxiety, social isolation and a deep sense of helplessness. It can make even the most routine tasks feel unbearable.
About 10% of the U.S. population – tens of millions of people – experience neuropathic pain, and cases are rising as the population ages. Complications from diabetes, cancer treatments or spinal cord injuries can lead to this condition. Despite its prevalence, doctors often overlook neuropathic pain because its underlying biology is poorly understood.
There's also an economic cost to neuropathic pain. This condition contributes to billions of dollars in health care spending, missed workdays and lost productivity. In the search for relief, many turn to opioids, a path that, as seen from the opioid epidemic, can carry its own devastating consequences through addiction.
Finding treatments for neuropathic pain requires answering several questions. Why does the nervous system misfire in this way? What exactly causes it to rewire in ways that increase pain sensitivity or create phantom sensations? And most urgently: Is there a way to reset the system?
This is where our lab's work and the story of a receptor called GluD1 come in. Short for glutamate delta-1 receptor, this protein doesn't usually make headlines. Scientists have long considered GluD1 a biochemical curiosity: part of the glutamate receptor family, but not known to function like its relatives, which typically transmit electrical signals in the brain.
Instead, GluD1 plays a different role. It helps organize synapses, the junctions where neurons connect. Think of it as a construction foreman: It doesn't send messages itself, but directs where connections form and how strong they become.
This organizing role is critical in shaping the way neural circuits develop and adapt, especially in regions involved in pain and emotion. Our lab's research suggests that GluD1 acts as a molecular architect of pain circuits, particularly in conditions like neuropathic pain where those circuits misfire or rewire abnormally. In parts of the nervous system crucial for pain processing like the spinal cord and amygdala, GluD1 may shape how people experience pain physically and emotionally.
Across our work, we have found that disruptions to GluD1 activity are linked to persistent pain, and that restoring GluD1 activity can reduce pain. The question is: How exactly does GluD1 reshape the pain experience?
In our first study, we discovered that GluD1 doesn't operate solo. It teams up with a protein called cerebellin-1 to form a structure that maintains constant communication between brain cells. This structure, called a trans-synaptic bridge, can be compared to a strong handshake between two neurons. It makes sure that pain signals are appropriately processed and filtered.
But in chronic pain, the bridge between these proteins becomes unstable and starts to fall apart. The result is chaotic. Like a group chat where everyone is talking at once and nobody can be heard clearly, neurons start to misfire and overreact. This synaptic noise turns up the brain's pain sensitivity, both physically and emotionally. It suggests that GluD1 isn't just managing pain signals, but also may be shaping how those signals feel.
What if we could restore that broken connection?
In our second study, we injected mice with cerebellin-1 and saw that it reactivated GluD1 activity, easing their chronic pain without producing any side effects. It helped the pain processing system work again without the sedative effects or disruptions to other nerve signals that are common with opioids. Rather than just numbing the body, reactivating GluD1 activity recalibrated how the brain processes pain.
Of course, this research is still in the early stages, far from clinical trials. But the implications are exciting: GluD1 may offer a way to repair the pain processing network itself, with fewer side effects and less risk of addiction than current treatments.
For millions living with chronic pain, this small, peculiar receptor may open the door to a new kind of relief: one that heals the system, not just masks its symptoms.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Pooja Shree Chettiar, Texas A&M University and Siddhesh Sabnis, Texas A&M University
Read more:
How do painkillers actually kill pain? From ibuprofen to fentanyl, it's about meeting the pain where it's at
Your body naturally produces opioids without causing addiction or overdose – studying how this process works could help reduce the side effects of opioid drugs
Opioid-free surgery treats pain at every physical and emotional level
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.

Related Articles

AGI And AI Superintelligence Are Going To Sharply Hit The Human Ceiling Assumption Barrier

Forbes
Is there a limit or ceiling to human intelligence, and how will that impact AI? In today's column, I examine an unresolved question about the nature of human intelligence, which in turn has a great deal to do with AI, especially regarding achieving artificial general intelligence (AGI) and potentially even reaching artificial superintelligence (ASI). The thorny question is often referred to as the human ceiling assumption. It goes like this. Is there a ceiling or ending point that confines how far human intellect can go? Or does human intellect extend indefinitely, with nearly infinite possibilities? Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
Human Intellect As A Measuring Stick

Have you ever pondered the classic riddle that asks how high is up? I'm sure that you have. Children ask this vexing question of their parents. The usual answer is that up goes to the outer edge of Earth's atmosphere. After hitting that threshold, up continues onward into outer space. Up is either a bounded concept based on our atmosphere, or it is a nearly infinite notion that goes as far as the edge of our expanding universe.

I bring up this riddle because it somewhat mirrors a similar question about the nature of human intelligence. The intelligence we exhibit currently is presumably not our upper bound. If you compare our intelligence with that of past generations, it seems relatively apparent that we keep increasing in intelligence on a generational basis. Will those born in the year 2100 be more intelligent than we are now? What about those born in 2200? All in all, most people would speculate that yes, the intelligence of those future generations will be greater than the prevailing intelligence at this time.

If you buy into that logic, the up-related question rears its thorny head. Think of it this way. The capability of human intelligence is going to keep increasing generationally. At some point, will a generation exist that has capped out? That future generation would represent the highest that human intellect can ever go. Subsequent generations would be of equal human intellect, or less so, but not more so.

The reason we want an answer to that question is that there is a pressing present-time need to know whether there is a limit or not. I pointed out earlier that AGI will be on par with human intellect, while ASI will be superhuman intelligence. Where does AGI top out, such that we can then draw a line and say that's it? Anything above that line would be construed as superhuman or superintelligence.
Right now, using human intellect as a measuring stick is hazy because we do not know how long that line is. Perhaps the line ends at some given point, or maybe it keeps going infinitely. Give that weighty thought some mindful pondering.

The Line In The Sand

You might be tempted to assume that there must be an upper bound to human intelligence. This intuitively feels right. We aren't at that limit just yet (so it seems!). One hopes that humankind will live long enough to reach that outer atmosphere. If we go with the assumption that human intelligence has a topping point, for the sake of discussion, we can then declare that AGI must also have a topping point. The basis for that claim is certainly defensible: if AGI consists of mimicking or somehow exhibiting human intelligence, and if human intelligence meets a maximum, AGI will also inevitably meet that same maximum. That's a definitional supposition. Admittedly, we don't necessarily know yet what the maximum point is. No worries; at least we've landed on a stable belief that there is a maximum. We can then direct our attention toward figuring out where that maximum resides. No need to be stressed by the infinite aspects anymore.

Twists And Turns Galore

AI gets mired in a controversy associated with the unresolved conundrum of a ceiling to human intelligence. Let's explore three notable possibilities.

First, if there is a ceiling to human intelligence, maybe that implies that there cannot be superhuman intelligence. Say what? It goes like this. Once we hit the top of human intelligence, bam, that's it, no more room to proceed further upward. Anything up until that point has been conventional human intelligence. We might have falsely thought that there was superhuman intelligence, but it was really just intelligence slightly ahead of conventional intelligence. There isn't any superhuman intelligence per se. Everything is confined to being within conventional intelligence. Thus, any AI that we make will ultimately be no greater than human intelligence. Mull that over.

Second, if there is a ceiling to human intelligence, perhaps via AI we can go beyond that ceiling and devise superhuman intelligence. That seems more straightforward. The essence is that humans top out, but that doesn't mean AI must also top out. Via AI, we might be able to surpass human intelligence, i.e., go past the maximum limit of human intelligence. Nice.

Third, if there isn't any ceiling to human intelligence, we would presumably have to say that superhuman intelligence is included in that infinite possibility. Therefore, the distinction between AGI and ASI is a falsehood, an arbitrarily drawn line.

Yikes, it is quite a mind-bending dilemma. Without a settled answer on whether there is a human intelligence cap, the chances of nailing down AGI and ASI remain elusive. We don't know the answer to this ceiling proposition; thus, AI research must make varying base assumptions about the unresolved topic.

AI Research Taking Stances

AI researchers often take the stance that there must be a maximum level associated with human intellect. They generally accept that there is a maximum even if we cannot prove it. The unknown but plausibly existent limit becomes the dividing line between AGI and ASI. Once AI exceeds the human intellectual limit, we find ourselves in superhuman territory.
In a recently posted paper entitled 'An Approach to Technical AGI Safety and Security' by Google DeepMind researchers Rohin Shah, Alex Irpan, Alexander Matt Turner, Anna Wang, Arthur Conmy, David Lindner, Jonah Brown-Cohen, Lewis Ho, Neel Nanda, Raluca Ada Popa, Rishub Jain, Rory Greig, Samuel Albanie, Scott Emmons, Sebastian Farquhar, Sébastien Krier, Senthooran Rajamanoharan, Sophie Bridgers, Tobi Ijitoye, Tom Everitt, Victoria Krakovna, Vikrant Varma, Vladimir Mikulik, Zachary Kenton, Dave Orr, Shane Legg, Noah Goodman, Allan Dafoe, Four Flynn, and Anca Dragan, arXiv, April 2, 2025, the researchers make a compelling case that there is such a thing as superhuman intellect: the superhuman consists of that which goes beyond the human ceiling. Furthermore, AI won't get stuck at the human intellect ceiling; AI will surpass that ceiling and proceed into the superhuman intellect realm.

Mystery Of Superhuman Intelligence

Suppose that there is a ceiling to human intelligence. If that's true, would superhuman intelligence be something entirely different in nature from human intelligence? In other words, we are saying that human intelligence cannot reach superhuman intelligence. But the AI we are devising seems to be generally shaped around the overall nature of human intelligence. How, then, can AI that is shaped around human intelligence attain superintelligence when human intelligence apparently cannot? Two answers are most frequently voiced.

The first is that size might make the difference. The human brain weighs approximately three pounds and is entirely confined to the size of our skulls, roughly 5.5 inches by 6.5 inches by 3.6 inches. The human brain consists of around 86 billion neurons and perhaps 1,000 trillion synapses. Human intelligence is seemingly stuck with whatever can happen within those sizing constraints. AI, by contrast, is software and data that runs across perhaps thousands or millions of computer servers and processing units. We can always add more. The size limit is not as constraining as a brain housed inside our heads. The bottom line is that AI might come to exhibit superhuman intelligence by exceeding the physical size limitations that human brains have. Advances in hardware would allow us to substitute faster processors and add more of them, pushing AI onward into superhuman intelligence.

The second is that AI doesn't necessarily need to conform to the biochemical compositions that give rise to human intelligence. Superhuman intelligence might not be feasible for humans because the brain is biochemically precast. AI can readily be devised and revised to exploit all manner of new algorithms and hardware that differentiate AI capabilities from human capabilities.

Heading Into The Unknown

Those two considerations, size and differentiation, could also work in concert. It could be that AI becomes superhuman intellectually because of both the scaling aspects and the differentiation in how AI mimics or represents intelligence.

Hogwash, some exhort. AI is devised by humans; therefore, AI cannot do better than humans can do. AI will someday reach the maximum of human intellect and go no further. Period, end of story.

Whoa, comes the retort. Think about humankind figuring out how to fly. We don't flap our arms like birds do. Instead, we devised planes. Planes fly. Humans make planes. Ergo, humans can decidedly exceed their own limitations. The same will apply to AI. Humans will make AI. AI will exhibit human intelligence and at some point reach the upper limits of human intelligence. AI will then be advanced further into superhuman intelligence, going beyond the limits of human intelligence. You might say that humans can make AI that flies even though humans cannot do so.

A final thought for now on this beguiling topic. Albert Einstein famously said: 'Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.' Quite a cheeky comment. Go ahead and give the matter of AI becoming AGI and possibly ASI some serious deliberation, but remain soberly thoughtful, since all of humanity might depend on what the answer is.

How To Embrace Neurodiversity In The Workplace To Improve Innovation

Forbes
Neurodiversity means there is no single "correct" way for the brain to work. The reality is that the workplace includes people who have a wide range of conditions, including autism spectrum disorder, ADHD, dyslexia, and learning disabilities. Research has found that 19% of Americans identify as neurodivergent.

Neurodiverse individuals can bring "fresh eyes" to problems. Their unique ways of thinking are valuable because employees often get stuck doing things the way they have always done them. When you bring in someone who has a unique perspective or has not been influenced by status-quo thinking, that is when the best ideas can happen. When individuals stay focused, question routines, and bring original thinking that challenges assumptions, innovative ideas follow. Sometimes these thinkers go unnoticed, or worse, ignored. But when leaders start paying attention to how people solve problems, their teams adapt faster, think more clearly, and produce stronger results.

How Can Neurodiversity In The Workplace Improve Problem-Solving?

When team members think differently, they often notice things that others miss. This is especially true when facing complex challenges that have no clear solution. A Deloitte study found that teams including neurodivergent talent outperformed others on tasks requiring creativity, accuracy, and pattern recognition. These thinkers tend to spot gaps in logic, inconsistencies in data, or errors in execution before anyone else does.

Some companies have started to see how problem-solving improves when diverse minds are involved from the beginning. SAP's Autism at Work program led to measurable increases in productivity across several teams. They noticed that when neurodivergent employees were hired and supported properly, their ways of working led to better questions, deeper analysis, and clearer workflows. Fresh ideas often come from people who have not been conditioned to accept what everyone else accepts. Neurodivergent individuals often bring that perspective because they experience the world differently.

What Makes Neurodiversity In The Workplace Difficult To Manage?

Leaders often say they support inclusion, but they sometimes overlook what inclusion really involves. Neurodiversity introduces differences in communication, work rhythms, and ways of processing information. These differences can be misunderstood as performance issues when the real issue is a mismatch between the environment and the person's strengths.

Many neurodivergent professionals report that unspoken rules are the hardest part of working in a traditional office. If someone needs more time to respond, prefers written instructions, or skips small talk, they may be misjudged as uncooperative or disengaged. But when managers get specific about goals and outcomes, they start to see that these same employees often produce more consistent, higher-quality results than expected. The mistake is assuming one communication style or workflow suits everyone. That assumption limits potential and often leads to disengagement. When leaders ask questions instead of making assumptions, they open the door to understanding how to bring out the best in each person.

What Should Leaders Do To Support Neurodiversity In The Workplace?

Support starts with curiosity and the willingness to meet people where they are. A simple question like 'How can I help you do your best work?' can lead to a breakthrough. That question signals that differences are respected.

Many of the adjustments that help neurodivergent employees, like using clear written communication, offering flexible deadlines, or minimizing background noise, benefit everyone. Instead of singling people out, this approach designs work in a way that supports how the brain functions under pressure. There are also cultural shifts that leaders can make. Try having meetings that include time for reflection, allow written follow-ups instead of immediate responses, and avoid fast-paced brainstorming where only the loudest voices are heard. These changes can increase inclusion without compromising results.

How Does Neurodiversity In The Workplace Increase Innovation?

Innovation grows in teams that welcome mental friction. Not the kind that creates conflict, but the kind that invites different patterns of thinking. Neurodivergent individuals are often the ones who explore ideas that seem unconventional, and their focus is especially valuable in roles that require deep concentration or attention to detail. For example, JPMorgan Chase launched a program to recruit autistic talent for quality assurance and cybersecurity roles. They reported higher productivity, lower error rates, and greater retention in those roles compared to traditional hires. These outcomes came from giving employees the freedom to approach their work in the way that suited them best.

How Can You Build A Workplace That Embraces Neurodiversity Starting This Week?

You can start building a workplace that embraces neurodiversity by using both written and verbal communication. After meetings or assignments, follow up with a quick written summary. It helps those who process information more slowly and makes instructions easier to refer back to. Ask about preferred work styles. Not everyone works best under pressure or in noisy environments. Some thrive on routines, while others need more autonomy. Listen when people tell you what works for them. Using vague terms like 'be more engaged' or 'speak up more' is not helpful. Be specific and focus on the quality of their contributions.

Why Neurodiversity In The Workplace Deserves More Attention Now

As jobs become more complex, companies will need to create space for different types of minds to thrive. The goal is to recognize that these individuals often hold the key to better outcomes. Neurodiversity already exists in your organization. The question is whether these people's strengths are being seen, heard, and supported. When organizations build a culture that embraces curiosity, they begin asking better questions about support, performance, and leadership. In that process, they often discover that some of their most overlooked employees may also be their most innovative.

This Model Beats Docs at Predicting Sudden Cardiac Arrest

Medscape
An artificial intelligence (AI) model has performed dramatically better than doctors using the latest clinical guidelines to predict the risk for sudden cardiac arrest in people with hypertrophic cardiomyopathy.

The model, called Multimodal AI for ventricular Arrhythmia Risk Stratification (MAARS), is described in a paper published online on July 2 in Nature Cardiovascular Research. It predicts patients' risk by analyzing a variety of medical data and records, such as echocardiogram and radiology reports, as well as all the information contained in contrast-enhanced MRI (CMR) images of the patient's heart.

Natalia Trayanova, PhD, director of the Alliance for Cardiovascular Diagnostic and Treatment Innovation at Johns Hopkins University in Baltimore, led the development of the model. She said that while hypertrophic cardiomyopathy is one of the most common inherited heart diseases, affecting 1 in every 200-500 individuals worldwide, and is a leading cause of sudden cardiac death in young people and athletes, an individual's risk for cardiac arrest remains difficult to predict. Current clinical guidelines from the American Heart Association and American College of Cardiology, and those from the European Society of Cardiology, identify the patients who go on to experience cardiac arrest in about half of cases.

'The clinical guidelines are extremely inaccurate, little better than throwing dice,' Trayanova, who is also the Murray B. Sachs Professor in the Department of Biomedical Engineering at Johns Hopkins, told Medscape Medical News.

Compared to the guidelines, MAARS was nearly twice as sensitive, achieving 89% accuracy across all patients and 93% accuracy for those 40-60 years old, the group of people with hypertrophic cardiomyopathy most at risk for sudden cardiac death.

Building a Model

MAARS was trained on data from 553 patients in The Johns Hopkins Hospital, Baltimore, hypertrophic cardiomyopathy registry.
The researchers then tested the algorithm on an independent external cohort of 286 patients from the Sanger Heart & Vascular Institute hypertrophic cardiomyopathy registry in Charlotte, North Carolina. The model uses all of the data available from these patients, drawing on electronic health records, ECG readings, reports from radiologists and imaging technicians, and raw data from CMR. 'All these different channels are fed into this multimodal AI predictor, which fuses it together and comes up with the risk for these particular patients,' Trayanova said.

The inclusion of CMR data is particularly important, she said, because the imaging test can identify areas of scarring on the heart that characterize hypertrophic cardiomyopathy. But clinicians have yet to be able to make much use of those images, because linking the fairly random patterns of scar tissue to clinical outcomes has been a challenge. That is just the sort of task that deep neural networks are particularly well suited to tackle. 'They can recognize patterns in the data that humans miss, then analyze and combine them with the other inputs into a single prediction,' Trayanova said.

Clinical Benefits

Better predictions of the risk for serious adverse outcomes will help improve care by ensuring people get the right treatments to reduce their risk and avoid the ones that are unnecessary, Trayanova said. The best way to protect against sudden cardiac arrest is with an implantable defibrillator, but the procedure carries potential risks that are best avoided unless truly needed. 'More accurate risk prediction means fewer patients might undergo unnecessary ICD implantation, which carries risks such as infections, device malfunction, and inappropriate shocks,' said Antonis Armoundas, PhD, from the Cardiovascular Research Center at Massachusetts General Hospital in Boston. The model could also help personalize treatment for patients with hypertrophic cardiomyopathy, Trayanova said.
'It's able to drill down into each patient and predict which parameters are the most important to help influence the management of the condition,' she said. Robert Avram, MD, MSc, a cardiologist at the Montreal Heart Institute, Montreal, Quebec, Canada, said the results are encouraging. 'I'm especially interested in how a tool like this could streamline risk stratification and ultimately improve patient outcomes,' he said. But it is not yet ready for widespread use in the clinic. 'Before it can be adopted in routine care, however, we'll need rigorous external validation across diverse institutions, harmonized variable definitions, and unified extraction pipelines for each modality, along with clear regulatory and workflow-integration strategies,' Avram said. Armoundas said he would like to see the model tested on larger sample sizes, with greater diversity in healthcare settings, geographical regions, and demographics, as well as prospective, randomized studies and comparisons against other AI predictive models. 'Further validation in larger cohorts and assessment over longer follow-up periods are necessary for its full clinical integration,' he said. Armoundas, Avram, and Trayanova reported having no relevant financial conflicts of interest.
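The pattern Trayanova describes, in which each data channel is encoded separately and the results are fused into a single risk score, is a standard "late fusion" design in multimodal machine learning. The sketch below is only a toy illustration of that general pattern; the channel names, feature sizes, and random weights are invented for the example and do not reflect MAARS's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-channel "encoders": each maps one modality's features to a
# small embedding. Channel names and sizes are illustrative only.
def encode(features, weights):
    return np.tanh(features @ weights)  # simple nonlinear projection

channels = {
    "ehr":     rng.normal(size=12),  # tabular clinical features
    "ecg":     rng.normal(size=40),  # waveform-derived features
    "reports": rng.normal(size=16),  # embedded text of imaging reports
    "cmr":     rng.normal(size=64),  # features from contrast-enhanced MRI
}
# One random projection matrix per channel, each producing an 8-dim embedding.
encoders = {k: rng.normal(size=(v.shape[0], 8)) for k, v in channels.items()}

# Late fusion: concatenate the per-channel embeddings, then apply one
# shared "risk head" that squashes the result into a 0-1 score.
fused = np.concatenate([encode(x, encoders[k]) for k, x in channels.items()])
w_head = rng.normal(size=fused.shape[0])
risk = 1.0 / (1.0 + np.exp(-(fused @ w_head)))  # sigmoid

print(f"toy risk score: {risk:.3f}")
```

A real system would learn the encoder and head weights from outcome data and use far richer encoders (e.g., a convolutional network for the CMR images), but the fusion step itself, concatenating channel embeddings before a single prediction head, looks much like this.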
