
Nightmares? It Might Be Something You Ate

Medscape
01-07-2025

This transcript has been edited for clarity. Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I'm Dr F. Perry Wilson from the Yale School of Medicine. When the ghost of Jacob Marley sits across from Ebenezer Scrooge in A Christmas Carol, he observes that the miser doesn't believe in him. Scrooge, with forced bravado, says he's right. The ghost may be, in actuality, 'an undigested bit of beef, a blot of mustard, a crumb of cheese… There's more of gravy than of grave about you!' And so we see that, even in 1843, people believed that there was some link between the food we eat and the nightmares that plague us at the witching hour. But… is it true? Does the dinner plate affect the nightmare state? Does a late-night snack make your dreams more wack? The inspiration for today's little reverie is a perplexingly titled study, 'More dreams of the rarebit fiend: food sensitivity and dietary correlates of sleep and dreaming,' appearing in Frontiers in Psychology. To save you the googling, 'Dreams of the Rarebit Fiend' was a series of comics published in the early 1900s which would depict a nightmare of a poor individual who would wake in the last panel and lament eating some food or another. Rarebit is a cheese-on-toast dish which, if you've never had it, can still be found at Mory's here in New Haven. In any case, I think the Scrooge reference is a bit more familiar than the comic, but maybe the Dickens estate is litigious. Before we dig into this study, let's think through what mechanisms there may be for food to impact dreaming. Is there biologic plausibility here? One hypothesis, the 'food specific effects' hypothesis, suggests that certain foods have a chemical or chemicals that directly impact dreaming. There is precedent for this — certain drugs, for example, are notorious for causing weird dreams. I was on mefloquine traveling in Africa once, and I still remember the surreal dreams I had on the antimalarial. Planes flying backward against an orange-colored sky. Weird stuff. There's also the 'food distress' hypothesis. This is the idea that certain foods hurt us a bit. Maybe they are spicy or make us gassy or whatever, and it's actually that pain or discomfort that prompts the bad dreams. Finally, we have the 'sleep-effects' hypothesis, which is the idea that certain foods decrease the quality of our sleep — like coffee or alcohol. And that poor sleep quality predisposes to bad dreams. So we have a number of ways that it is plausible that food may impact your dreaming… but does it? To try to figure this out, the researchers conducted a fairly detailed survey study. More than 1000 individuals — mostly undergraduate students, mostly women — were surveyed. While they were relatively healthy overall, 13.8% reported having a medical condition and 17.1% a psychiatric condition. The average PHQ-4 score for anxiety and depression was 9.5 — which is in the mild-to-moderate range, typical of modern 20-somethings. It's also worth noting that 32.4% reported sensitivity to some type of food. Nearly one third of participants reported a high frequency of recalled nightmares — more than one per week — and women tended to recall more dreams and had more nightmares than men. Did these individuals feel like what they ate affected their dreaming? Not really. Just 59 individuals (5.5%) said that they thought there was any relationship between the food they ate and the qualia of their dreams. That said, those 59 people were much more likely to have frequent nightmares.
This is notably lower than the 17.8% of individuals who said food affected their dreaming in the author's prior study which was published a decade ago. That study had a smaller sample size but still focused on undergraduate students, so I think there is comparability here. We have a dramatic reduction in the perception of a link between food and dreaming. We'll get to whether there is a real link in a minute, but why are younger people less likely to believe this these days? We can only guess. It might be a secular trend towards more data-driven, scientific, or at least quasi-scientific explanations of phenomena. The food/dream hypothesis does give old-wives-tale vibes, right? Perhaps the relevance of this idea has decreased in the public consciousness as food safety has increased. Or maybe kids these days have inputs into their brains that are way more potent than the slowly digesting cheese steak in their stomachs. In any case, the researchers asked the 59 people who did feel that food affected their dreaming which types of food had the largest effects. In terms of increasing 'disturbing' dream content, sweets and dairy topped the list. In terms of leading to more pleasant dreams, fruit, vegetables, and herbal tea were up there. The fact that there was some consistency here lends modest support to the food-specific effect hypothesis. Maybe there is a chemical in dairy foods that gives you bad dreams. If so, Liz Lemon should not be working on her night cheese. And for the three of you who get that reference, I salute you. What about the food distress hypothesis? I think the data is a bit stronger here. People who were lactose-intolerant, for instance, had a higher frequency of nightmares, even if they didn't consciously believe that food intake affected dreaming. When the authors dug down into that association, they found that controlling for gastrointestinal (GI) symptoms eliminated the observed relationship. In other words, the data suggests that the reason people who are lactose intolerant have more nightmares is because people who are lactose intolerant have more GI upset. This is decent evidence for that food-distress hypothesis. Finally, that sleep-effects hypothesis. Lactose intolerance was associated with worse sleep, but a lot of that effect was mediated through GI upset. So, it seems to me that, if there is any relationship between food and dreaming, it's probably due to the distress that some food causes you as you're sleeping. Which means, of course, that Scrooge was right. A bit of underdone potato can lead to visions of fettered apparitions chastising you for the chains you forge in life. And though it ended up working out for old Ebenezer, I think most of us would like to avoid nightmares if possible. In addition to the suggestion that food sensitivities can worsen nightmares, the researchers found that nightmares were more common among people who frequently ate late at night and those who had underlying medical or psychiatric conditions. In brief, there might be some wisdom contained in the old wives' tales. For a restful and ghost-free night's sleep, it's likely best to slumber without a full belly and to avoid those foods that (for you) cause distress. As for Dickens, he was famously an insomniac, spending long nights walking the streets of London. Staying wide awake all night also avoids nightmares, but I wouldn't recommend it.
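For readers who like to see the statistics spelled out, here is a minimal sketch of the kind of mediation check described above. It uses simulated data and made-up variable names, not the study's actual dataset; the point is simply that an apparent lactose-intolerance/nightmare association can shrink toward zero once GI symptoms are added to the model.

```python
# Illustrative only: simulated data and hypothetical variable names,
# not the study's actual analysis or dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000

# Simulate a pathway: lactose intolerance -> GI distress -> nightmares
lactose_intolerant = rng.binomial(1, 0.2, n)
gi_symptoms = 0.8 * lactose_intolerant + rng.normal(0, 1, n)
nightmares = 0.5 * gi_symptoms + rng.normal(0, 1, n)  # no direct food effect built in

df = pd.DataFrame({
    "lactose_intolerant": lactose_intolerant,
    "gi_symptoms": gi_symptoms,
    "nightmares": nightmares,
})

# Unadjusted: lactose intolerance appears to predict nightmares
print(smf.ols("nightmares ~ lactose_intolerant", df).fit().params)

# Adjusted for GI symptoms: the association shrinks toward zero,
# consistent with GI distress mediating the food/nightmare link
print(smf.ols("nightmares ~ lactose_intolerant + gi_symptoms", df).fit().params)
```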

Eureka! Increasing the Odds of a Sudden Insight

Medscape
26-06-2025

This transcript has been edited for clarity. Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I'm Dr F. Perry Wilson from the Yale School of Medicine. At a time when the capabilities of artificial intelligences seem to be growing by leaps and bounds at an incredible pace, it's comforting to remember that there are certain cognitive processes that continue to feel distinctly human. I want to talk about one of those processes today. It's the idea of 'eureka': a sudden flash of insight, where the solution to a puzzle or a problem or a mystery just sort of clicks. When we think about these moments, several features come to mind. First is the suddenness. It's not that you slowly improve at solving a problem; you slowly improve and slowly improve and then — all of a sudden — you dramatically improve. Then there's the timing of the insight; it's variable. Given the same puzzle, some people 'get it' quickly, and some take longer. And, of course, some people never get it at all. This process seems so singular, so qualitative, that it may appear to be impossible to study. But a study of sudden insight is exactly what we're going to be exploring today. Let's see if it clicks. Abandoning stories of Greek philosophers and kings' crowns, researchers, led by Anika Löwe, refer to these sudden flashes of insight as 'aha moments.' To me, an 'aha moment' is sitting in my girlfriend's basement watching the 'Take on Me' music video, but there's never a forever thing. Here, we're talking about those sudden flashes of insight, and, in particular, whether a bit of sleep improves the chances of having them. The paper appears in PLOS Biology. How do you test aha moments in the laboratory? The system is really clever. Participants (90 in this study) were exposed to a very simple computer game during which dots move around a circle in different directions, but one of four directions (northwest, northeast, southwest, and southeast) is more common than the others. The participants are asked to press one of two buttons when they see the set of moving dots and are then given feedback on whether they are right or wrong. There is a hidden rule here — for example, to be correct they need to hit the left button when the majority of dots are moving northwest or southeast, but the right button when the dots are moving northeast or southwest. They have to figure this out through trial and error. It's not that easy, especially since there is a lot of noise in the direction of travel of the dots. But, as you might expect, task performance improves over time — people figure out the rule underlying the correct answers and do better and better over hundreds of iterated trials. While looking at the moving dots, you would also notice that they change color, from orange to purple, at random. For the first 80% of trials or so, these color changes are random; they have nothing to do with the correct answer. But here's the trick: Toward the end of the series of trials, the colors start to match up with the correct directions. In other words, all of a sudden, orange dots would indicate that the participant should press the left button, purple dots the right button. But the participants have to realize this is happening; no one tells them in advance. That requires a sudden flash of insight — an 'aha' moment. But once a participant has that moment, their performance skyrockets. It's way easier to match a color to a button than two of four noisy directional trends to a button. I find this super clever.
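To make the setup concrete, here is a rough sketch of the trial structure as I understand it from the description above, with simplified assumptions (one dominant direction per trial, and the color becoming informative only in the final stretch of trials). It is not the authors' actual task code.

```python
# A rough reconstruction of the task structure, for illustration only.
import random

DIRECTIONS = ["NW", "NE", "SW", "SE"]
HIDDEN_RULE = {"NW": "left", "SE": "left", "NE": "right", "SW": "right"}
COLOR_FOR_RESPONSE = {"left": "orange", "right": "purple"}

def make_trials(n_trials=500, color_informative_after=0.8):
    trials = []
    for i in range(n_trials):
        direction = random.choice(DIRECTIONS)      # dominant dot direction on this trial
        correct = HIDDEN_RULE[direction]           # the rule participants must infer
        if i < color_informative_after * n_trials:
            color = random.choice(["orange", "purple"])  # early on, color is pure noise
        else:
            color = COLOR_FOR_RESPONSE[correct]          # later, color gives the answer away
        trials.append({"direction": direction, "color": color, "correct": correct})
    return trials

# A participant who has the "aha" insight can simply map color -> button;
# before the insight they must decode the noisy motion direction instead.
print(make_trials(10))
```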
Major props to whatever neuroscientist had the flash of insight to design this system. Now we have a framework for measuring aha moments. The next step is to perturb the system. The researchers wanted to determine whether sleep (and which stage of sleep) would improve the aha phenomenon. After an initial round of testing, where the 'color rule' appeared only at the very end, the participants went into a quiet room to nap. An EEG was used to determine whether they merely rested or they had reached stage 1 or stage 2 of sleep. Stage 1 sleep is pretty light — almost conscious. Stage 2 is slightly deeper but not to the level of deep sleep. Only one participant reached stage 3 during the short nap. Some people figured out the secret rule before naptime; they were excluded from further study. So now we have a group of people who had not yet had an aha moment. Would the achieved sleep stage matter when they were tested again? It did. Pretty profoundly, too. About 50% of the group who did not sleep at all during the rest had the color insight during the second round of testing. About 60% of those who hit stage 1 sleep had the aha moment, but 80% of people who reached stage 2 sleep had that insight — a statistically significant improvement. Interestingly, sleep only seemed to affect the proportion of people who had the insight. It didn't affect the time to the insight. And napping had no effect on task performance prior to the insight. Rather, it seemed that napping just allowed the magic to happen. But how does the magic happen? The researchers interrogated the EEGs to see whether something deeper was going on besides stage 2 sleep. They found that aperiodic activity in the brain captured all the information that sleep stage did. In fact, it correlated more strongly with insight than sleep stage itself. So…what is aperiodic activity, and what is happening when aperiodic activity in your brain is higher? Aperiodic activity is electrical signals in the brain that don't have a nice repeating pattern. They seem more random and have been described as the 'background noise' of the brain. But what is interesting is what is happening to neural connections as aperiodic activity is increasing. They get weaker. This sounds bad, but weakening connections between neurons is actually a critical thing. It allows connections that are already weak and of poor quality to break entirely, allowing the stronger connections more exclusivity. Those of you in the machine learning space will recognize the similarity to regularization methods. But basically what is happening is that the 'noise' of the complex brain is being turned down, so the signal can come through that much louder. And maybe that is what allows the insight to happen. Perhaps a brain with a bit less noise and a bit less clutter can more clearly see the underlying structure being presented to it and make those cognitive leaps that (for now) separate humans from the neural networks we create in silico. Of course, looking at this study I'm sure you're thinking the same thing I am: How can I leverage this knowledge to increase my likelihood of having eureka moments? Well, better sleep seems like the obvious answer here, but I think we all knew that already. Maybe we can take a lesson from this idea of signal and noise. 
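For the machine learning folks, here is a loose, purely illustrative version of that analogy in code: shrink every 'connection' a little, let the weakest ones break entirely, and the surviving strong weights carry a larger share of the signal. This is L1-style soft-thresholding on a toy weight vector, not a model of what sleep actually does to synapses.

```python
# Toy analogy only: pruning weak "connections" so strong ones dominate.
import numpy as np

rng = np.random.default_rng(1)
strong = np.array([2.0, -1.8, 1.5])     # a few meaningful connections
weak = rng.normal(0, 0.1, 50)           # many weak, noisy connections
weights = np.concatenate([strong, weak])

def soft_threshold(w, lam):
    """L1-style shrinkage: weaken every connection; those below lam break entirely."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

pruned = soft_threshold(weights, lam=0.3)
print("nonzero before:", np.count_nonzero(weights), "after:", np.count_nonzero(pruned))
print("share of weight carried by the strong connections:",
      round(np.abs(pruned[:3]).sum() / np.abs(pruned).sum(), 3))
```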
If the key to aha moments is allowing the stronger connections in our brain to function more cleanly, without the distraction of those weaker connections, maybe that means we need to avoid those weaker connections if possible — those distractions. Does scrolling TikTok reduce your chance of having sudden insights? Does meditating in a quiet room increase them? I'm not sure, but at least now we have a mechanism to study those questions. In closing, let me express my wish that you have many aha moments, and that they won't be gone in a day or two.

How One Dose of Psilocybin Treats Depression

Medscape
16-06-2025

This transcript has been edited for clarity. Welcome to Impact Factor , your weekly dose of commentary on a new medical study. I'm Dr F. Perry Wilson from the Yale School of Medicine. The story of our lives is etched into the pathways of our brains. Some of those pathways are positive, giving us a sense of self-worth, a resilience to adversity. Some are maladaptive, promoting anxiety, fear, and depression. The pathways lead to actions, and those actions tend to reinforce the pathways. Anxiety breeds anxiety, depression breeds depression. I think this is part of the reason why problems with mental health are so difficult to treat; our brains are molded into these problematic self-reinforcing configurations and we keep falling into the same ruts. And yes, talk therapy and SSRIs can help to nudge us out of those ruts; work to create new, more productive ruts; and improve our health. But those gains can be difficult to maintain over time. But what if there were a reset button in our brains? What if we could step outside of those ruts, even for a few hours, and see the pathways for what they are? What if we could start over? It seems too good to be true, and, to be clear, it may be, but data continue to emerge that the chemical psilocybin — the psychoactive component of so-called 'magic mushrooms' — may do just that. And it may do it after just a single dose. Magic mushrooms are on my mind — no, not literally — this week thanks to this article, appearing in the journal Cancer . Note the famous final author, Ezekiel Emanuel. It's a small, phase 2 trial of psilocybin, a single 25 mg dose, among individuals with cancer and major depressive disorder. What's interesting about this study is the duration of follow-up: 2 years. Most psilocybin studies end after a few months, making the long-term implications of treatment unclear. Here's the setup: Thirty patients (average age, 58 years; 70% women; 80% White) were enrolled, and everyone got the same intervention here. There is no control group. The intervention occurred over 8 weeks. They first had four visits with a psychotherapist for screening and psychological work, preparatory to receiving the drug. Then they received 25 mg of psilocybin, in a monitored setting, where they remained for about 6 or 7 hours. After that, there were four more psychotherapy visits to integrate the psychedelic experience. Eight weeks, one visit a week, basically. At multiple timepoints, a participant's mental health was evaluated using some standardized surveys: the Montgomery-Åsberg Depression Rating Scale and the Hamilton Anxiety Rating Scale. Let's talk about these scales for a moment before I show you the results of this study. The depression scale used here gives scores ranging from 0 to 60, with higher scores being consistent with worse depression. Patients have reported that a reduction by 5 points would be clinically meaningful. A meta-analysis of SSRI therapies showed that these common antidepressant drugs lead to an average reduction of around 3 points vs placebo. Of course, some people do better and some do worse; I just wanted to do some level setting. Let's look at the psilocybin study. At baseline, participant depression scores ranged from about 10 to 45 or so — pretty significant pathology. By the end of the 8-week intervention period, the average reduction in depression score was 20 points. That is a huge effect. 
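Just to put those numbers side by side (keeping in mind that an uncontrolled change from baseline is not directly comparable to a placebo-controlled effect), here is a quick back-of-the-envelope comparison using only the figures quoted above.

```python
# Level-setting with the numbers quoted above; not an apples-to-apples trial comparison.
madrs_range = (0, 60)               # Montgomery-Asberg Depression Rating Scale
meaningful_drop = 5                 # patient-reported clinically meaningful reduction
ssri_vs_placebo_drop = 3            # average SSRI effect vs placebo, per meta-analysis
psilocybin_open_label_drop = 20     # average change at 8 weeks in this single-arm study

print(f"MADRS range: {madrs_range[0]}-{madrs_range[1]}")
print(f"Open-label psilocybin drop is {psilocybin_open_label_drop / meaningful_drop:.0f}x "
      f"the clinically meaningful threshold and about "
      f"{psilocybin_open_label_drop / ssri_vs_placebo_drop:.1f}x the average SSRI-vs-placebo effect")
```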
True, there is no placebo group here, so that 20-point reduction includes the placebo effect, which can be significant in depression trials, but I find it hard to believe that this is all placebo. We can triangulate the placebo question a bit from this study, 'Single-Dose Psilocybin for a Treatment-Resistant Episode of Major Depression,' which appeared in The New England Journal of Medicine in 2022. This was a placebo-controlled trial among people with severe, treatment-resistant depression and used the same depression scale as the study we're talking about today. The psilocybin group had a 12-point improvement and the placebo group a 5.5-point improvement — a net difference that is still around twice as effective as SSRIs. Of course, the concern about placebo effect is somewhat academic. Especially for conditions like depression, there's a reasonable argument to be made that we shouldn't care whether the effect is mediated biologically or via placebo or both; if it works, it works. Improvements on the anxiety scale were also impressive. At 8 weeks, there was a 17-point improvement from baseline. But the most interesting part of this study is the long-term follow-up, which was available for 28 out of the original 30 participants. At 2 years of follow-up, more than half of participants had scores on the depression scale that were less than 50% of their baseline scores — an average reduction of 15 points. Similar effects were seen on anxiety scores. How might all this work? How can a drug, a molecule like this, lead to sustained changes in serious psychological conditions? There are a lot of theories. But let's look at the mechanism of action of psilocybin itself. Psilocybin binds to a particular receptor in the brain called the serotonin 2A receptor — in my opinion, the most interesting receptor in the entire brain. Other substances that bind to this receptor? Mescaline and LSD. Certain mutations in this receptor predispose to schizophrenia as well. And it's hard not to see the parallels between some symptoms of schizophrenia: the sense of unreality, the paranoia, the hallucinations — and the experiences of taking some of these psychedelic drugs. Of course, the drugs are self-limited. Well, at least the acute effects are. But how are they therapeutic? Some researchers are using a new term to describe drugs like psilocybin: psychoplastogens. The science suggests that one-time use of these agents can allow for a sudden increase in neural plasticity, allowing new neuronal connections to form where they wouldn't in other conditions, and for older connections to break down and restructure. If our brains are etched with the stories of our lives, if our behaviors deepen and reinforce those psychological ruts, psychoplastogens like psilocybin may loosen the soil, so to speak. This may suggest that those concomitant psychotherapy sessions are actually a critical component of this type of therapy. Perhaps the psilocybin shakes loose some maladaptive pathways, but putting them together in a healthy way still takes work. It wouldn't surprise me if that is the case, and it's a good reminder to those of you reading this that these drugs are not a panacea for mental health. In fact, we're really just beginning to explore the possibilities and the risks in this space. The promise is there, for sure. How many of us wouldn't want to hit 'reset' on some of the maladaptive thinking patterns we have? 
But for now, the use of these agents for therapeutic purposes really needs to be done under the supervision of a medical professional with experience.

Scientists Invent a Literal Thinking Cap

Medscape
29-05-2025

This transcript has been edited for clarity. Welcome to Impact Factor , your weekly dose of commentary on a new medical study. I'm Dr F. Perry Wilson from the Yale School of Medicine. My job (my real job) as a clinical researcher is complex. It's cognitively challenging; there are multiple studies to keep track of, grants and papers to write, a large group of mentees and trainees and staff in the lab to manage. It's emotionally stressful too — recently more than ever, in fact. But if I'm tired, or I ate a bad burrito for lunch, or I get some bad news on a personal level, it's not a crisis. I'm not making life-or-death decisions in a split second. I can take a break, gather myself, prioritize, and come back when I'm feeling better. Not every job has that luxury. A surgeon doesn't get to take a break in the middle of an operation if they feel like they are not at 100%. An air traffic controller can't walk away from ensuring that planes land safely because their kid woke them up in the middle of the night. These jobs and others like them have a unique challenge: a constant cognitive workload in a high-stakes environment. And the problem with constant cognitive work is that your brain can't do it all the time. If you force it to, you start to make mistakes. You can literally get tired of thinking. Think of how the world might change if we knew exactly how overloaded our cognitive processes were. I'm not talking about a subjective rating scale; I'm talking about a way to measure the brain's cognitive output, and to warn us when our ability to keep thinking hard is waning before we make those critical mistakes. We're closer than you think. The standard metric for assessing cognitive workload is the NASA Task Load Index. Yes, that NASA. The Task Load Index is a survey designed to assess how hard a task is. It was originally designed to be used in human-machine interactions, like piloting a spaceship. It's subjective. It asks you to rate how mentally demanding a task is, how frustrating, how much effort it takes, and so on. Cognitive researchers have used this scale to demonstrate how successive mentally stressful tasks degrade task performance. Science has demonstrated that taking breaks is a good thing. I know — news at 11. The problem with subjective scales, though, is that people have a tough time being objective with them. Astronauts might tell you a task was easier than it really was because they want to be chosen to ride on the rocket. Or a doctor might evaluate a complex surgery as less mentally taxing so they can continue to operate that day. Bringing objectivity to the brain is hard. Sure, you can do an fMRI scan, but sitting inside a metal tube is not conducive to real-world scenarios. You can measure brain fatigue in the real world with an EEG, though. The problem is that an EEG involves wires everywhere. You're tethered. And the goo, the sticky stuff that they use to put the electrodes on your head, is very sensitive to motion. In anywhere but a dedicated neuroscience lab, this isn't going to work. I thought the day of real-time monitoring of cognitive load would be pretty far off because of these limitations, and then I saw this study, appearing this week in the journal Device, from CellPress. It reimagines the EEG in a way that could honestly be transformational. There's a not-too-distant future when you'll be able to recognize people with highly cognitively intense jobs because they will look something like this. What you're looking at is a completely wireless EEG system. 
The central tech here is what the researchers call an 'e-tattoo' — but think of it like those temporary tattoos your kids wear. Conductive wires are printed on a thin transparent backing which conforms to the forehead. Electrodes make contact with the skin via a new type of conductive adhesive. The squiggles in the wires allow you to flex and move without breaking connections. That whole printed setup is made to be disposable; apparently the material cost is something like $20. The blue square is the ghost in the machine, a processor that receives the signals from the electrodes and transmits them, via low-energy Bluetooth, to whatever device you want. It's got a tiny battery inside and lasts for around 28 hours. In other words, even in this prototype phase, you could wear this thing at your cognitively intense job all day. And yeah, you might get a few looks, but the joke will be on them when the algorithm says your brain is full and you need to take a 15-minute rest. Of course, cool tech like this is only cool if it actually works, so let's take a look at those metrics. The first thing to test was whether the device could perform as well as an EEG on a simple task. Six adults were recruited and wore the tattoo at the same time as a conventional EEG. They were then asked to open and close their eyes. There's a standard finding here that with eyes closed, alpha frequencies, mid-range brain oscillations, dominate. You can see the patterns recorded by the standard EEG and the new tattoo system here. They are basically indistinguishable. But the tattoo system, with its flexible design, offers some particular advantages. One of the problems with conventional EEGs is how sensitive they are to motion. You turn your head, you get a bunch of noise. Walk around, and the signal becomes useless. Not so with the tattoo. These graphs show the electronic noise levels when the participant was doing various motions. Broadly speaking, you can see that the tattoo continues providing solid, reliable recordings even when walking or running, while the EEG goes all over the place with noise. The only exception to this was with eyebrow raising — maybe not surprising because the tattoo goes on the forehead. But I didn't start off telling you we have a new flexible EEG tech. I told you we had tech that could quantify our cognitive load. Here's how they tested this. In the lab, they had their volunteers do a cognitive task called the N-back test. It starts at level 0. Basically, they ask you to click a button whenever you see the letter Q or something. Easy. Level 1 is a bit harder. You have to click the button when the image on the screen matches, in either location or content, the image from one screen ago — one image back. Get it? Level 2 is even harder. You click when the current image matches, in content or location, the image from two screens ago. Level 3 gets really stressful. You have to click when you see something that matches three screens ago. And, of course, this keeps going, so you have to keep this information in your memory as the test continues. It's hard. It taxes the brain. Here are the results on the NASA survey scale. This is what the participants reported as to how mentally taxed they were. As the N gets higher, the cognitive stress gets higher. So the system works. The participants, you won't be surprised to hear, performed worse as the N increased. At higher N, the detection rate — the rate at which matches were appropriately clicked — declined. The reaction time increased. 
False alarms went up. All hallmarks of cognitive stress. And the e-tattoo could tell. Feeding its wireless output into a machine learning model, the researchers could predict the level of cognitive stress the participant was under. They show the results for the participant where the system worked the best — a bit of cherry-picking, certainly, but it will illustrate the point. The blue line indicates what level of the N-back test the participant was actually taking. The red line is what the machine learning model thought the participant was doing, just from reading their brain waves. They match pretty well. Again, that was just the time the experiment worked best. The overall results aren't quite as good, with a weighted accuracy metric ranging from 65% to 74% depending on the subject. Clearly better than chance, but not perfect. Still, these are early days. It seems to me that the researchers here have solved a major problem with monitoring people doing cognitively intense tasks — a way to read brain waves that does not completely interfere with the task itself. That's a big hurdle. As for the accuracy, even an imperfect system may be better than what we have now, since what we have now is nothing. But I have no doubt that with more data and refinement, accuracy will increase here. When it does, the next step will be to test whether using these systems on the job — in air traffic control towers, in operating rooms, in spaceships — will lead to more awareness of cognitive strain, more rest when it is needed, and better decision-making in the heat of the moment.
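Incidentally, the N-back logic described earlier is simple enough to mock up. Here is a minimal sketch (content matching only, ignoring the location-matching variant and any real stimulus timing); it is not the study's actual software.

```python
# Minimal mock-up of the N-back rule: respond when the current stimulus
# matches the one shown N screens ago.
from collections import deque
import random

def run_n_back(stimuli, n):
    """Return the trial indices where a response ('click') is expected."""
    history = deque(maxlen=n)
    targets = []
    for i, s in enumerate(stimuli):
        if n == 0:
            if s == "Q":                  # level 0: respond to a fixed target letter
                targets.append(i)
        elif len(history) == n and s == history[0]:
            targets.append(i)             # current stimulus matches n screens back
        history.append(s)
    return targets

stream = [random.choice("ABCQ") for _ in range(20)]
for level in (0, 1, 2, 3):
    print(f"{level}-back targets:", run_n_back(stream, level))
```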

The Strange Link Between Cold Sores and Alzheimer's Disease

Medscape
20-05-2025

This transcript has been edited for clarity. Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I'm Dr F. Perry Wilson from the Yale School of Medicine. Two-thirds of you reading this will know the feeling. It starts with a numb, tingly feeling in the lip. A day or so later, some redness, some swelling, and then, yup, a cold sore. It's a little frustrating, maybe a little embarrassing, but you wait it out for a few days and it goes away. No big deal, right? Except for the fact that multiple studies suggest that cold sores might increase your risk for Alzheimer's disease. Cold sores come from a viral infection, specifically herpes simplex virus 1 (HSV-1). There are multiple herpesviruses, which are all DNA viruses and include HSV-2, which causes the sexually transmitted infection; though, to be fair, both HSV-1 and -2 can lead to both types of infection. Varicella (the virus that causes chickenpox and shingles), Epstein-Barr virus, and CMV are all herpesviruses. If you're human, you have almost certainly been infected by at least one. In any case, HSV-1 is one of the most common viral infections in the world. It's estimated that about two-thirds of the adult population are infected. Unlike other viruses, such as flu or coronavirus, herpesviruses are incredibly difficult for your body to fight off completely. They get around immune surveillance by hiding out in the nucleus of other cells as just an innocuous bundle of DNA. This latent phase is asymptomatic. It lies dormant until, for reasons that are still not entirely clear, the DNA bundle loosens a bit and the cellular machinery turns those instructions into the proteins that make up new virus particles and boom — outbreak. The immune system gets revved up, the outbreak is contained, and the cycle repeats. What does this all have to do with Alzheimer's disease? I was inspired to dig into this a bit because of a study appearing this week in BMJ Open, which suggests that HSV-1 infection nearly doubles the risk of Alzheimer's. Let me run through the study's findings and then we can figure out if this makes any sense at all. Researchers used the IQVIA PharMetrics Plus database to conduct the study. This is basically a large administrative claims database that covers much of the United States. It aggregates all the billing codes for medical care and medications from a bunch of commercial insurers; there are more than 200 million individuals represented in the file. From those, they found 344,628 individuals who were diagnosed with Alzheimer's disease. For controls, they identified another 344,628 individuals matched on age, gender, region of the country, date of entry into the database, and — to account for contact with the medical system — the number of inpatient and outpatient visits. Despite that, the groups were not exactly comparable. The individuals who would go on to develop Alzheimer's disease had a greater number of comorbidities, for example. But the kicker of the study — the headline — is this finding. People with Alzheimer's disease were twice as likely to have HSV-1 compared with the controls. After accounting for the differences between them, infection with HSV-1 increased the odds of subsequently developing Alzheimer's disease by 80%. Did you catch the problem with this graph? Take a look at the Y-axis. That's on the percentage scale. Sure, the people who went on to develop Alzheimer's disease had double the rate of HSV-1 infection, but the raw number is 0.44% vs 0.24%.
Didn't I tell you at the beginning that about two-thirds of us are infected with HSV-1? That's quite a bit higher than 0.44%. What is going on here? Welcome to the world of administrative data. The problem here is that the researchers could only identify people with HSV-1 if some provider had diagnosed them with HSV-1; more than that, had entered a billing code for HSV-1. Have you ever had a cold sore? Do you know whether your doctor added that to your medical history and billed insurance for it? Probably not. So we're missing an enormous number of infections here, and that calls the whole conclusion into question. Now, you might say, sure, doctors aren't diagnosing the vast majority of HSV-1 cases, but surely this is true both for people who go on to develop Alzheimer's and for those who don't, and therefore the inference is valid. Maybe. But I'd feel better if we were talking about missing something like 10% of diagnoses instead of 99% like we are here. I don't want to discount this too much, though. The paper has some other interesting findings. For instance, there was also a higher rate of HSV-2 and varicella infection among those who developed Alzheimer's disease; those are the other herpesviruses that infect nerve cells. There was no difference in rates of cytomegalovirus infection — another herpesvirus, but one that infects monocytes instead of nerve cells. But let's say we believe the link between HSV and Alzheimer's. What can we do about it? The authors hypothesized that, if HSV is causative of Alzheimer's, treatment with antivirals would reduce the risk of Alzheimer's disease. And since prescription information was present in the dataset, they could model this. Sure enough, those treated with antivirals were less likely — about 17% less likely — to develop Alzheimer's disease. This is interesting to me. In general, when you look at people who are treated for a condition, you can assume they had a more severe form of the condition (short of the treatment being done in the context of a randomized trial). Basically, people who get treated tend to be sicker than people who don't get treated, and so, in general, you see worse outcomes in the treated group — a stubborn problem in observational data called confounding by indication. Here, we see the opposite, which adds some weight to the argument. So, despite the poor capture of HSV-1 infections, the link could be real. Some other studies support this hypothesis. Alzheimer's disease is characterized by amyloid plaque deposition in the brain. Some mouse studies have shown that HSV induces the formation of amyloid plaques as an immune response and impairs the mouse's cognitive ability. Another study prospectively followed 1000 Swedish older adults over time and measured antibodies to HSV: 82% of people had those antibodies, which comports with what we would expect. Still, those with the antibodies had about twice the risk of developing dementia as those without. The authors of the paper in BMJ Open suggest 'antiherpetic therapies as potentially protective for AD-related dementia.' That feels like a bit of a leap to me at this point, and I will point out that this paper was funded by Gilead Sciences, which has quite a few antivirals on the market and a new antiherpetic drug that has recently completed phase 1a testing — so… grains of salt. Still, for those who suffer from cold sores, a study like this may push you a bit towards treatment, at least during an outbreak.
Short-term valacyclovir is relatively safe and reduces the duration of the cold sore by about a day, which is nice. But if it reduces your risk of dementia as well, well, it might be a no-brainer.
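And for those wondering where that '80% increased odds' figure roughly comes from, here is the unadjusted arithmetic using the raw percentages quoted above; the paper's published estimate additionally adjusts for comorbidities and other covariates.

```python
# Unadjusted odds ratio from the raw percentages quoted above.
p_cases = 0.0044      # recorded HSV-1 diagnosis among people with Alzheimer's (0.44%)
p_controls = 0.0024   # recorded HSV-1 diagnosis among matched controls (0.24%)

odds_cases = p_cases / (1 - p_cases)
odds_controls = p_controls / (1 - p_controls)
print(f"Unadjusted odds ratio: {odds_cases / odds_controls:.2f}")  # ~1.8, i.e. ~80% higher odds
```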
