Medical Risk-Aversion Can Kill, Too
Yet in medicine, especially when it comes to pharmaceuticals and cutting-edge therapies, we seem to forget this logic.
Related Articles


Forbes
In The AI Revolution, Medical Schools Are Falling Behind U.S. Colleges
At Duke University, every matriculating student now has access to a custom AI assistant. At Cal State, more than 460,000 students across 23 campuses are equipped with a 24/7 ChatGPT toolkit upon enrollment. These aren't pilot programs. They're part of a full-scale transformation in the way higher education is preparing students for their future careers.

Meanwhile, most U.S. medical schools remain stuck in the last century. Instead of learning to use the tools that will define tomorrow's care, students still memorize biochemistry pathways and are tested on obscure facts they'll never use in clinical practice.

Following the release of OpenAI's ChatGPT in 2022, college deans and department chairs responded with caution. They worried about plagiarism, declining writing skills and an overreliance on artificial intelligence. Since then, most have shifted from risk avoidance to opportunity. Today, universities are integrating generative AI into tutoring, test prep, research, advising and more. Many now expect faculty to teach AI fluency across their disciplines.

Medical education hasn't kept pace. A recent Educause study found that only 14% of medical schools have developed a formal GenAI curriculum, compared with 60% of undergraduate programs. Most medical school leaders continue to view large language models as administrative tools rather than clinical ones.

That's a mistake. By the time today's students become physicians, they'll carry in their pockets a tool more powerful and more important to clinical practice than the stethoscope ever was. In seconds, GenAI can surface every relevant medical study, guideline and precedent. And soon, it will allow patients to accurately evaluate symptoms and understand treatment options before they ever set foot in a clinic. Used wisely, generative AI will help prevent the 400,000 deaths each year from diagnostic errors, the 250,000 from preventable medical mistakes and the 500,000 from poorly controlled chronic diseases.

Despite GenAI's potential to transform healthcare, most medical schools still train students for the medicine of the past. They prioritize memorization over critical thinking and practical application. They reward students for recalling facts rather than for effectively accessing and applying knowledge with tools like ChatGPT or Claude.

Historically, physicians were judged by how well they told patients what to do. In the future, success will be measured by medical outcomes: specifically, how well clinicians and AI-empowered patients work together to prevent disease, manage symptoms and save lives.

The outdated approach to medical education persists beyond university classrooms. Internship and residency programs still prioritize applicants for their memorization-based test scores. Attending physicians routinely quiz trainees on arcane facts instead of engaging in practical problem-solving. This practice, known as 'pimping,' is a relic of 20th-century training. Few industries outside of medicine would tolerate it.

How To Modernize Medical Training

Generative AI is advancing at breakneck speed, with capabilities doubling roughly every year. In five years, medical students will enter clinical practice with GenAI tools roughly 32 times more powerful than today's models, yet few will have received formal training in how to use them effectively.
Modernizing medical education must begin with faculty. Most students entering medical school in 2025 will already be comfortable using generative AI, having leaned on it during college and while preparing for the MCAT exam. But most professors will be playing catch-up.

To close this gap, medical schools should implement a faculty education program before the new academic year. Instructors unfamiliar with GenAI would learn how to write effective prompts, evaluate the reliability of answers and ask clarifying questions to refine outputs.

Once all faculty have a foundational understanding of the new applications, the real work begins: creating a curriculum for the coming semester. Here are two examples of what that might look like for third-year students on a clinical rotation.

Exercise 1: Differential diagnosis with GenAI as a co-physician

In a small-group session, students would receive a clinical vignette: A 43-year-old woman presents with fatigue, joint pain and a facial rash that worsens with sun exposure. Students would begin by drafting their own differential diagnosis. Then, they would prompt a generative AI tool to generate its own list of potential diagnoses. Next, participants would engage the AI in a back-and-forth dialogue, questioning its reasoning, testing assumptions and challenging conclusions.

To reinforce clinical reasoning in collaboration with GenAI, each student would also submit written responses to these questions: Is lupus or dermatomyositis the more likely diagnosis, and why? What additional data would help rule out Lyme disease? Cite three high-quality studies that support your diagnostic ranking.

The goal of this type of exercise isn't to identify a 'right' answer but to strengthen analytical thinking, expose cognitive biases and teach students how to use GenAI to broaden diagnostic reasoning, not limit it. By the end of the exercise, students should be more confident using AI tools to support, but not replace, their own clinical judgment.

Exercise 2: Managing chronic disease with GenAI support

In this scenario, students imagine seeing a 45-year-old man during a routine checkup. The patient has no prior medical problems but, on physical exam, his blood pressure measures 140/100. Students begin by walking through the clinical reasoning process: What questions would they ask during the patient history? Which physical findings would be most concerning? What laboratory tests would they order? What initial treatment and follow-up plan would they recommend?

Then, students enter the same case into a generative AI tool and evaluate its recommendations. Where do the AI's suggestions align with their own? Where do they differ, and why?

Finally, students are tasked with designing a patient-centered care plan that incorporates medical therapy, lifestyle changes and as many GenAI-powered applications as possible. These might include analyzing data from at-home blood pressure monitors, customizing educational guidance or enabling patients to actively manage their chronic diseases between visits.

Training Physicians To Lead, Not Follow

Colleges understand that preparing students for tomorrow's careers means teaching them how to apply generative AI in their chosen fields. Medicine must do the same. Soon, physicians will carry in their pocket the entirety of medical knowledge, instantly accessible and continuously updated. They'll consult AI agents trained on the latest research and clinical guidelines.
And their patients, empowered by GenAI, will arrive not with random Google results, but with a working understanding of their symptoms, potential diagnoses and evidence-based treatment options.

If medical schools don't prepare students to lead the clinical application of these tools, for-profit companies and private equity firms will decide how they're used, focusing solely on ways to lower costs, even when those approaches compromise medical care.

As medical school deans prepare to welcome the class of 2029, they must ask themselves: Are we training students to practice yesterday's medicine or to lead tomorrow's?


CNET
Stop Using ChatGPT for These 11 Things Right Now
ChatGPT and other AI chatbots can be powerful natural language tools, especially when you know how to prompt. You can use ChatGPT to save money on travel, plan your weekly meal prep or even help pivot your career. While I'm a fan, I also know the limitations of ChatGPT, and you should too, whether you're a newbie or an old hand. It's fun for trying out new recipes, learning a foreign language or planning a vacation, but you don't want to give ChatGPT carte blanche in your life. It's not great at everything; in fact, it can be downright sketchy at a lot of things.

ChatGPT sometimes hallucinates information and passes it off as fact, and it may not always have up-to-date information. It's incredibly confident, even when it's straight up wrong. (The same can be said about other generative AI tools, too, of course.) That matters more the higher the stakes get, like when taxes, medical bills, court dates or bank balances enter the chat.

If you're unsure about when turning to ChatGPT might be risky, here are 11 scenarios when you should put down the AI and choose another option. Don't use ChatGPT for any of the following.

(Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

1. Diagnosing physical health issues

I've definitely fed ChatGPT my symptoms out of curiosity, but the answers that come back can read like your worst nightmare. As you pore through potential diagnoses, you could swing from dehydration and the flu to some type of cancer. I have a lump on my chest and entered that information into ChatGPT. Lo and behold, it told me I may have cancer. Awesome! In fact, I have a lipoma, which is not cancerous and occurs in 1 in every 1,000 people. My licensed doctor told me that.

I'm not saying there are no good uses of ChatGPT for health: It can help you draft questions for your next appointment, translate medical jargon and organize a symptom timeline so you can walk in better prepared. And that could help make doctor visits less overwhelming. However, AI can't order labs or examine you, and it definitely doesn't carry malpractice insurance. Know its limits.

2. Taking care of your mental health

ChatGPT can offer grounding techniques, sure, but it can't pick up the phone when you're in real trouble with your mental health. I know some people use ChatGPT as a substitute therapist. CNET's Corin Cesaric found it mildly helpful for working through grief, as long as she kept its limits front of mind. But as someone who has a very real, very human therapist, I can tell you that ChatGPT is still a pale imitation at best, and incredibly risky at worst.

ChatGPT doesn't have lived experience, can't read your body language or tone, and has zero capacity for genuine empathy. It can only simulate it. A licensed therapist operates under legal mandates and professional codes that protect you from harm. ChatGPT doesn't. Its advice can misfire, overlook red flags or unintentionally reinforce biases baked into its training data. Leave the deeper work (the hard, messy, human work) to an actual human who is trained to handle it. If you or someone you love is in crisis, please dial 988 in the US, or your local hotline.

3. Making immediate safety decisions

If your carbon-monoxide alarm starts chirping, please don't open ChatGPT and ask it if you're in real danger. I'd go outside first and ask questions later.
Large language models can't smell gas, detect smoke or dispatch an emergency crew. In a crisis, every second you spend typing is a second you're not evacuating or dialing 911. ChatGPT can only work with the scraps of information you feed it, and in an emergency, that may be too little and too late. So treat your chatbot as a post-incident explainer, never a first responder.

4. Getting personalized financial or tax planning

ChatGPT can explain what an ETF is, but it doesn't know your debt-to-income ratio, state tax bracket, filing status, deductions, retirement goals or risk appetite. Because its training data may stop short of the current tax year, and of the latest rate hikes, its guidance may well be stale when you hit enter.

I have friends who dump their 1099 totals into ChatGPT for a DIY return. The chatbot simply can't replace a CPA who can catch a hidden deduction worth a few hundred dollars or flag a mistake that could cost you thousands. When real money, filing deadlines and IRS penalties are on the line, call a professional, not AI. Also, be aware that anything you share with an AI chatbot will probably become part of its training data, and that includes your income, your Social Security number and your bank routing information.

5. Dealing with confidential or regulated data

As a tech journalist, I see embargoes land in my inbox every day, but I've never thought about tossing any of these press releases into ChatGPT to get a summary or further explanation. That's because if I did, that text would leave my control and land on a third-party server outside the guardrails of my nondisclosure agreement.

The same risk applies to client contracts, medical charts or anything covered by the California Consumer Privacy Act, HIPAA, the GDPR or plain old trade-secret law. It applies to your income taxes, birth certificate, driver's license and passport. Once sensitive information is in the prompt window, you can't guarantee where it's stored, who can review it internally or whether it may be used to train future models. ChatGPT also isn't immune to hackers and security threats. If you wouldn't paste it into a public Slack channel, don't paste it into ChatGPT.

6. Doing anything illegal

This one is self-explanatory.

7. Cheating on schoolwork

I'd be lying if I said I never cheated on my exams. In high school, I used my first-generation iPod Touch to sneak a peek at a few cumbersome equations I had difficulty memorizing in AP calculus, a stunt I'm not particularly proud of. But with AI, the scale of modern cheating makes that look remarkably tame.

Turnitin and similar detectors are getting better at spotting AI-generated prose every semester, and professors can already hear "ChatGPT voice" a mile away (thanks for ruining my beloved em dash). Suspension, expulsion and getting your license revoked are real risks. It's best to use ChatGPT as a study buddy, not a ghostwriter. You're also just cheating yourself out of an education if you have ChatGPT do the work for you.

8. Monitoring information and breaking news

Since OpenAI rolled out ChatGPT Search in late 2024 (and opened it to everyone in February 2025), the chatbot can fetch fresh web pages, stock quotes, gas prices, sports scores and other real-time numbers the moment you ask, complete with clickable citations so you can verify the source. However, it won't stream continual updates on its own.
Every refresh needs a new prompt, so when speed is critical, live data feeds, official press releases, news sites, push alerts and streaming coverage are still your best bet.

9. Gambling

I've actually had luck with ChatGPT and hitting a three-way parlay during the NCAA men's basketball championship, but I would never recommend it to anyone. I've seen ChatGPT hallucinate and provide incorrect information on player statistics, misreported injuries and win-loss records. I only cashed out because I double-checked every claim against real-time odds, and even then I got lucky. ChatGPT can't see tomorrow's box score, so don't rely on it solely to get you that win.

10. Drafting a will or other legally binding contract

ChatGPT is great for breaking down basic concepts. If you want to know more about a revocable living trust, ask away. However, the moment you ask it to draft actual legal text, you're rolling the dice. Estate and family-law rules vary by state, and sometimes even by county, so skipping a witness signature or omitting the notarization clause can get your whole document tossed. Let ChatGPT help you build a checklist of questions for your lawyer, then pay that lawyer to turn that checklist into a document that stands up in court.

11. Making art

This isn't an objective truth, just my own opinion, but I don't believe AI should be used to create art. I'm not anti-artificial intelligence by any means. I use ChatGPT for brainstorming new ideas and help with my headlines, but that's supplementation, not substitution. By all means, use ChatGPT, but please don't use it to make art that you then pass off as your own. It's kind of gross.


Fast Company
How to navigate work when dealing with a major medical issue
I have a brain tumor. The good news is that it's benign. The bad news is that I need surgery to remove it. Brain surgery typically involves a lengthy recovery period. Six weeks, at a minimum. On top of navigating the emotions that come with such a diagnosis, I've had to figure out what work will look like as I recover. More specifically: how I will manage not working for such a long period of time.

This isn't the first time I've experienced a major life event in my career (unfortunately). The Extreme Planner in me immediately started to figure out the logistics. If you're going through something similar, I feel you. If you've never faced a significant medical challenge, I hope it stays that way. But I write this so that if you ever need it, you can return to this article. And I write this so that if you need to support someone going through a medical challenge, you know where to start.

Talking with your boss or team

Telling other people about a medical diagnosis is deeply personal. There's no right or wrong time. I'm self-employed, so I talked with my clients as soon as I had more definitive information (a surgery date). For 10 agonizing days, I knew that I had a brain tumor and my clients didn't. I somehow fumbled my way through deadlines and normal client communications as though nothing was wrong. But for me it made sense to talk about my diagnosis as soon as possible. My clients could start to plan for my absence. Plus, I have a lot of doctor's appointments leading up to the surgery date that I need to work around.

When I previously had a medical issue in 2017, I told only my boss and one or two close colleagues. I didn't want to talk about it. It was strictly a 'need-to-know' basis. Bottom line: Do what feels right for you.

Navigating the pressures of working

Living with a brain tumor is Not Fun. There are a lot of unknowns around the outcome of surgery. The same is true for many medical conditions: Fear, pain, or both may impact your life daily. One benefit of telling your boss or team is that hopefully they're compassionate. They'll lighten your workload or understand if you have to rearrange deadlines.

But you're likely also facing financial pressure. With most companies having limits on paid sick time, you probably feel like you have to keep working until the point when you can't anymore. I certainly feel that pressure, even guilt, as I think about the gap in my family's income as I recover.

I finally decided to take a break between my last working day and my surgery date. During that time, I'm going to take my family to a show in Chicago and maybe get a pedicure. I have a special lunch date planned with my husband. If you're facing a potentially life-altering surgery or other procedure, don't spend your last few days 'before' working. Enjoy the time as best you can.

How to ask for support

Here's the thing about telling people that you're experiencing a major medical issue: People want to help. They'll ask you if there's anything they can do, because they know you're going through something rough. When I first told people about my brain tumor, they told me to let them know if there was anything they could do. For a long time, I said, 'I'll let you know.' I couldn't think of anything, because my mind was still reeling from the shock of the diagnosis.

But then I started to ask for help with specific things. I thought about the people in my life, and how their skills might help keep my business running while I can't work.
I circled back with some people who had offered support and said, 'Can you do XYZ for me?' If you work for a company, you can do the same thing. Make a list of the things that would truly make your workday easier so you're ready anytime someone asks, 'How can I help?'

How to provide support

If you're on the other side and a colleague or professional contact is going through something hard, offer specific ways you can help. So many people (like me) are overwhelmed and don't know how to reply when someone offers support. Say, 'Can I take ABC off your plate? Or XYZ?' rather than 'Let me know if there's anything you need!' It reduces the mental load of the person you're trying to help.

Check in again, even after weeks or months have passed. The person's needs may change. Significant medical issues can be long-lasting. People are eager to offer help at the beginning, but that fades over time, especially at work, where it's easy to be removed from people's personal lives.