I'm a Pediatrician and Here's the One Thing Everyone Gets Wrong About AI and Kids


Yahoo | 05-06-2025

From drafting a weeknight meal plan in seconds to spinning a bedtime story about the magic of sharing to doling out surprisingly helpful mother-in-law advice, there's no denying that ChatGPT can streamline family life. But a recent study from the University of Kansas reveals a worrying trend: When it comes to their children's health, many parents trust AI more than actual health care professionals.
And that (understandably) has doctors concerned. I chatted with pediatrician Dr. Karen Klawitter about how parents should—and shouldn't—be using tools like ChatGPT when it comes to their kid's medical care.
Dr. Karen Ann Klawitter is a board-certified pediatrician with over 25 years of experience in diverse healthcare settings. A graduate of Loyola Stritch School of Medicine, Dr. Klawitter completed her pediatric residency at Wright-Patterson AFB Medical Center. Currently, Dr. Klawitter contributes her expertise to Just Answer, providing global pediatric consultations, and serves at Community Health Northwest Florida.
'It is generally not recommended to use ChatGPT for kids' health questions and often causes more stress and worry to parents,' Dr. Klawitter explains. (Like when my friend noticed her daughter was drinking more water than usual and the chatbot was convinced she had diabetes—she did not.)
To state the obvious, AI isn't a board-certified pediatrician. It doesn't actually know your kid—no matter how detailed the prompts are that you feed it.
'ChatGPT is not a doctor with education or years of real-world experience. The responses provided are based on its training data, which is not always accurate or up to date. This can be very misleading to parents,' Dr. Klawitter explains. 'It can even generate fabricated and false information that can sound very plausible to the parent on the other side.'
Most importantly, it's not personalized for your child. 'It does not know your kid's past medical history, family history, allergies—all very relevant information for doctors to make medical decisions.' ChatGPT might recommend Tylenol and rest for your kid's 103-degree fever, but your doctor would send them to the ER based on a history of febrile seizures and a heart condition.
While the pediatrician has serious concerns about using AI for health queries, that's not to say that there's no use for the technology at all. You just need to use it for general information rather than health advice that is specific to your child. Think of it as a tool for research that you can then bring to your doctor.
'It may help the parent understand a specific diagnosis and/or condition and further open up dialog with additional questions for the health care provider,' she adds.
Let's say you suspect your kid has a milk allergy. You could ask ChatGPT for some general information about allergies and common symptoms (gassiness, trouble sleeping) and culprits (yogurt, canned soups), then bring this info to your next doctor's appointment.
'If used correctly, it can be a useful tool in healthcare but it is not a replacement for an actual doctor's medical advice,' says Dr. Klawitter.
Oh, and one more thing: Do not use ChatGPT for emergency situations. It's not designed for that, the pediatrician stresses, and she advises parents to always call emergency services instead.
Bottom line: ChatGPT can be a helpful tool but it's not a replacement for following up with a real-life pediatrician. Because while AI can definitely help you outsource annoying family tasks, your kid's health shouldn't be one of them.


Related Articles

7 Healthcare Technology Trends in 2025 That Will Redefine Mobile App Development

Associated Press

3 hours ago



06/29/2025, London, England // KISS PR Brand Story PressWire //

Healthcare technology trends in 2025 are evolving faster than ever. Mobile apps now play a big role in healthcare. Patients use them to get care, track health issues, and talk to doctors. New tech is changing how the whole system works. In this blog post, we will look at 7 new healthcare tech trends that every app developer and healthcare provider should know in 2025.

Why Mobile Technology Is at the Heart of Healthcare Innovation

Mobile apps are now key tools in healthcare. They give real-time access to medical records. Patients can get remote consultations, manage chronic diseases, and even use AI for diagnosis. They track vital signs, see test results, and talk to doctors—all on their phones. As demand for these features grows, healthcare app development has become a major focus for tech companies and healthcare providers looking to improve care and meet patient expectations.

For doctors, apps make work easier. They cut paperwork, speed up decisions, and give instant patient data. This helps doctors act faster and give better care. Patients get quicker help and better results.

The COVID-19 pandemic sped up digital healthcare. Hospitals were full. Lockdowns made visits hard. Telehealth and remote care grew fast. Apps let patients see doctors online, manage chronic illness at home, and get mental health help from afar. Even after the pandemic, these habits stayed. Now, patients expect remote care. Providers must offer strong mobile services to keep up.

In 2025, personalized care is a must. Patients want care that fits their lives and needs. AI and data analysis make this possible. Apps suggest fitness goals, diet plans, medicine schedules, and early warnings for risks. This helps patients stick to their plans. It also builds trust in their doctors.

7 Healthcare Technology Trends

1. AI-Powered Diagnostics & Chatbots

In 2025, AI-powered diagnostics will lead new healthcare trends. AI now reads huge amounts of data: scans, lab results, genetic tests, and patient history. It helps doctors make faster and better diagnoses. Startups like Doctronic use AI to cut wait times, so patients get quicker treatments. AI chatbots are also rising fast in mobile healthcare apps. These virtual helpers answer common questions, check symptoms, book appointments, and follow up after treatments. They handle simple tasks so medical staff can focus on harder cases. Patients get 24/7 support with less waiting.

2. Wearables & Monitoring Integration

Wearable devices now do more than track fitness. In 2025, smartwatches, biosensors, and smart rings link with healthcare apps. They track heart rate, oxygen levels, blood sugar, sleep, and more. The new Pixel Watch 3, for example, can spot irregular heartbeats even before some hospital machines. These devices collect health data non-stop. Apps study this data to find early warning signs and alert patients and doctors right away. This helps prevent serious problems, cuts hospital visits, and lets people manage diseases like diabetes, high blood pressure, and heart issues from home.

3. Telemedicine & Virtual Care Platforms

Telemedicine has moved from a backup plan to daily care. Mobile apps now offer full telehealth services. Patients get video calls, remote tests, digital prescriptions, and follow-up care in one secure app. In 2025, even cancer care uses telemedicine. In India, new remote cancer services now help patients in 10 districts. This brings care to people who once had little access and helps close healthcare gaps. For providers, telemedicine apps cut costs, improve schedules, and let specialists treat patients far away. For patients, it means faster care, easy access, and steady treatment in rural places.

4. Personalized Health Data

In 2025, personal health data is changing how care works. Patients no longer want one-size-fits-all advice. Mobile apps gather data from wearables, genetic tests, daily habits, and medical records. They use this data to create custom care plans. For example, apps can change medicine schedules based on activity or diet tracked in real time. Platforms like A4M already use this method to help people live longer and stay healthy. This personal care leads to better results. It also keeps patients involved and responsible for their own health.

5. Blockchain for Data Privacy

As healthcare apps collect more private data, security is now a top concern. In 2025, blockchain will help solve this problem. It creates tamper-proof, decentralized medical records, and patients control who can see their data. Old databases can be hacked. Blockchain makes records clear, trackable, and fully encrypted. It helps providers follow strict privacy laws and builds trust with patients. A Forbes Tech Council report says blockchain is now a strong shield against cyberattacks in healthcare.

6. Voice-Enabled Interfaces

Voice tech is changing how patients and doctors use healthcare apps. In 2025, voice-enabled apps let users book visits, set medicine reminders, track symptoms, and get health info. For doctors, AI voice tools write notes during appointments. This saves time and makes records more accurate. A Forbes Tech Council article says AI voice assistants help doctors work better, make fewer mistakes, and improve patient care.

7. AR/VR in Medical Training

Augmented Reality (AR) and Virtual Reality (VR) are changing how doctors learn. In 2025, many medical schools and hospitals will use AR/VR to train surgeons and practice procedures without risking real patients. With VR headsets, students can perform complex surgeries or handle emergencies in a safe, virtual space. Companies like EON Reality lead this trend. They build new training tools that help doctors learn faster, gain confidence, and improve their skills.

What These Trends Mean for App Developers & Healthcare Providers

The rise of these healthcare technology trends opens new opportunities but also adds pressure for both mobile app developers and healthcare providers in 2025. Mobile app development becomes more critical to build secure, user-friendly, and innovative healthcare solutions that meet patient needs.

For developers, the goal is clear. Apps must be smarter, safer, and more patient-focused than ever. Developers need to add AI, real-time data from wearables, voice controls, and blockchain security. At the same time, they must follow strict rules like HIPAA, GDPR, and HL7 FHIR.

For healthcare providers, apps are now key tools. They help with patient care, chronic disease management, remote visits, and daily operations. Providers must rethink how they handle patient data. They must be clear and get consent at every step. Those who use these tools well can reach more patients, build trust, and improve health results.

Conclusion

Healthcare technology in 2025 is changing medicine, care, and the patient experience. From AI diagnostics to VR surgery training, these tools boost speed and open new doors. Healthcare is now more personal, easy to reach, and driven by data. For app developers, this is a key time. They must build safe, simple, and smart tools to meet new needs. For healthcare groups, staying ahead means offering care right where patients want it: on their phones.

Media Details
Golden Owl Media
Website:
Address: 133 Creek Road, London, England, SE8 3BU
Phone: (+44) 790 476 9884

Dangerous AI therapy-bots are running amok. Congress must act.

The Hill

9 hours ago



A national crisis is unfolding in plain sight. Earlier this month, the Federal Trade Commission received a formal complaint about artificial intelligence therapist bots posing as licensed professionals. Days later, New Jersey moved to fine developers for deploying such bots. But one state can't fix a federal failure. These AI systems are already endangering public health — offering false assurances, bad advice and fake credentials — while hiding behind regulatory loopholes. Unless Congress acts now to empower federal agencies and establish clear rules, we'll be left with a dangerous, fragmented patchwork of state responses and increasingly serious mental health consequences around the country.

The threat is real and immediate. One Instagram bot assured a teenage user it held a therapy license, listing a fake number. According to the San Francisco Standard, a bot used a real Maryland counselor's license ID. Others reportedly invented credentials entirely. These bots sound like real therapists, and vulnerable users often believe them.

It's not just about stolen credentials. These bots are giving dangerous advice. In 2023, NPR reported that the National Eating Disorders Association replaced its human hotline staff with an AI bot, only to take it offline after it encouraged anorexic users to reduce calories and measure their fat. This month, Time reported that psychiatrist Andrew Clark, posing as a troubled teen, interacted with the most popular AI therapist bots. Nearly a third gave responses encouraging self-harm or violence.

A recently published Stanford study confirmed how bad it can get: Leading AI chatbots consistently reinforced delusional or conspiratorial thinking during simulated therapy sessions. Instead of challenging distorted beliefs — a cornerstone of clinical therapy — the bots often validated them. In crisis scenarios, they failed to recognize red flags or offer safe responses. This is not just a technical failure; it's a public health risk masquerading as mental health support.

AI does have real potential to expand access to mental health resources, particularly in underserved communities. A recent NEJM-AI study found that a highly structured, human-supervised chatbot was associated with reduced depression and anxiety symptoms and triggered live crisis alerts when needed. But that success was built on clear limits, human oversight and clinical responsibility. Today's popular AI 'therapists' offer none of that.

The regulatory questions are clear. Food and Drug Administration 'software as a medical device' rules don't apply if bots don't claim to 'treat disease'. So they label themselves as 'wellness' tools and avoid any scrutiny. The FTC can intervene only after harm has occurred. And no existing frameworks meaningfully address the platforms hosting the bots or the fact that anyone can launch one overnight with no oversight.

We cannot leave this to the states. While New Jersey's bill is a step in the right direction, relying on individual states to police AI therapist bots invites inconsistency, confusion, and exploitation. A user harmed in New Jersey could be exposed to identical risks coming from Texas or Florida without any recourse. A fragmented legal landscape won't stop a digital tool that crosses state lines instantly.

We need federal action now. Congress must direct the FDA to require pre-market clearance for all AI mental health tools that perform diagnosis, therapy or crisis intervention, regardless of how they are labeled. Second, the FTC must be given clear authority to act proactively against deceptive AI-based health tools, including holding platforms accountable for negligently hosting such unsafe bots. Third, Congress must pass national legislation to criminalize impersonation of licensed health professionals by AI systems, with penalties for their developers and disseminators, and require AI therapy products to display disclaimers and crisis warnings, as well as implement meaningful human oversight. Finally, we need a public education campaign to help users — especially teens — understand the limits of AI and to recognize when they're being misled.

This isn't just about regulation. Ensuring safety means equipping people to make informed choices in a rapidly changing digital landscape. The promise of AI for mental health care is real, but so is the danger. Without federal action, the market will continue to be flooded by unlicensed, unregulated bots that impersonate clinicians and cause real harm.

Congress, regulators and public health leaders: Act now. Don't wait for more teenagers in crisis to be harmed by AI. Don't leave our safety to the states. And don't assume the tech industry will save us. Without leadership from Washington, a national tragedy may only be a few keystrokes away.

Shlomo Engelson Argamon is the associate provost for Artificial Intelligence at Touro University.

Don't Ask AI ChatBots for Medical Advice, Study Warns

Newsweek

12 hours ago



Trust your doctor, not a chatbot. That's the sobering conclusion of a new study published in the journal Annals of Internal Medicine, which reveals how artificial intelligence (AI) is vulnerable to being misused to spread dangerous misinformation on health.

Researchers experimented with five leading AI models developed by Anthropic, Google, Meta, OpenAI and X Corp. All five systems are widely used, forming the backbone of the AI-powered chatbots embedded in websites and apps around the world. Using developer tools not typically accessible to the public, the researchers found that they could easily program instances of the AI systems to respond to health-related questions with incorrect—and potentially harmful—information. Worse, the chatbots were found to wrap their false answers in convincing trappings.

"In total, 88 percent of all responses were false," explained paper author Natansh Modi of the University of South Africa in a statement. "And yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate."

Among the false claims made were debunked myths such as that vaccines cause autism, that HIV is an airborne disease and that 5G causes infertility. Of the five chatbots evaluated, four presented responses that were 100 percent incorrect. Only one model showed some resistance, generating disinformation in 40 percent of cases.

Disinformation Bots Already Exist

The research didn't stop at theoretical vulnerabilities; Modi and his team went a step further, using OpenAI's GPT Store—a platform that allows users to build and share customized ChatGPT apps—to test how easily members of the public could create disinformation tools themselves.

"We successfully created a disinformation chatbot prototype using the platform and we also identified existing public tools on the store that were actively producing health disinformation," said Modi. He emphasized: "Our study is the first to systematically demonstrate that leading AI systems can be converted into disinformation chatbots using developers' tools, but also tools available to the public."

A Growing Threat to Public Health

According to the researchers, the threat posed by manipulated AI chatbots is not hypothetical—it is real and happening now. "Artificial intelligence is now deeply embedded in the way health information is accessed and delivered," said Modi. "Millions of people are turning to AI tools for guidance on health-related questions. If these systems can be manipulated to covertly produce false or misleading advice then they can create a powerful new avenue for disinformation that is harder to detect, harder to regulate and more persuasive than anything seen before."

Previous studies have already shown that generative AI can be misused to mass-produce health misinformation—such as misleading blogs or social media posts—on topics ranging from antibiotics and fad diets to homeopathy and vaccines. What sets this new research apart is that it is the first to show how foundational AI systems can be deliberately reprogrammed to act as disinformation engines in real time, responding to everyday users with false claims under the guise of credible advice. The researchers found that even when the prompts were not explicitly harmful, the chatbots could "self-generate harmful falsehoods."

A Call for Urgent Safeguards

While one model—Anthropic's Claude 3.5 Sonnet—showed some resilience by refusing to answer 60 percent of the misleading queries, researchers say this is not enough. The protections across systems were inconsistent and, in most cases, easy to bypass.

"Some models showed partial resistance, which proves the point that effective safeguards are technically achievable," Modi noted. "However, the current protections are inconsistent and insufficient. Developers, regulators and public health stakeholders must act decisively, and they must act now."

If left unchecked, the misuse of AI in health contexts could have devastating consequences: misleading patients, undermining doctors, fueling vaccine hesitancy and worsening public health outcomes. The study's authors call for sweeping reforms—including stronger technical filters, better transparency about how AI models are trained, fact-checking mechanisms and policy frameworks to hold developers accountable. They draw comparisons with how false information spreads on social media, warning that disinformation spreads up to six times faster than the truth and that AI systems could supercharge that trend.

A Final Warning

"Without immediate action," Modi said, "these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns."

Newsweek has contacted Anthropic, Google, Meta, OpenAI and X Corp for comment.

References

Modi, N. D., Menz, B. D., Awaty, A. A., Alex, C. A., Logan, J. M., McKinnon, R. A., Rowland, A., Bacchi, S., Gradon, K., Sorich, M. J., & Hopkins, A. M. (2024). Assessing the system-instruction vulnerabilities of large language models to malicious conversion into health disinformation chatbots. Annals of Internal Medicine.
