AI Helps Prevent Medical Errors in Real-World Clinics

There has been a lot of talk about the potential of AI in health care, but most studies so far have been stand-ins for the actual practice of medicine: simulated scenarios that predict what the impact of AI could be in medical settings.
But in one of the first real-world tests of an AI tool working side by side with clinicians in Kenya, researchers showed that AI can reduce medical errors by as much as 16%.
In a study available on OpenAI.com that is being submitted to a scientific journal, researchers at OpenAI and Penda Health, a network of primary care clinics in Nairobi, Kenya, found that an AI tool can provide a powerful assist to busy clinicians who can't be expected to know everything about every medical condition. Penda Health employs clinicians trained for four years in basic health care, the equivalent of physician assistants in the U.S. The health group, which operates 16 primary care clinics in Nairobi, has its own guidelines for helping clinicians navigate symptoms, diagnoses, and treatments, and relies on national guidelines as well. But the span of knowledge required is challenging for any practitioner.
That's where AI comes in. 'We feel it acutely because we take care of such a broad range of people and conditions,' says Dr. Robert Korom, chief medical officer at Penda. 'So one of the biggest things is the breadth of the tool.'
Read More: A Psychiatrist Posed As a Teen With Therapy Chatbots. The Conversations Were Alarming
Previously, Korom says, he and his colleague, Dr. Sarah Kiptinness, head of medical services, had to create separate guidelines for each scenario that clinicians might commonly encounter—for example, guides for uncomplicated malaria cases, or for malaria cases in adults, or for situations in which patients have low platelet counts. AI is ideal for amassing all of this knowledge and dispensing it in the appropriate situations.
Korom and his team built the first versions of the AI tool as a basic shadow for the clinician. If the clinician had a question about what diagnosis to provide or what treatment protocol to follow, he or she could hit a button that pulled up a block of related text collated by the AI system to support the decision. But clinicians used the feature in only about half of visits, says Korom, because they didn't always have time to read the text, or because they often felt they didn't need the added guidance.
So Penda built an improved version of the tool, called AI Consult, which runs silently in the background of visits, essentially shadowing the clinicians' decisions and prompting them only if they take questionable or inappropriate actions, such as overprescribing antibiotics.
'It's like having an expert there,' says Korom—similar to how a senior attending physician reviews the care plan of a medical resident. 'In some ways, that's how [this AI tool] is functioning. It's a safety net—it's not dictating what the care is, but only giving corrective nudges and feedback when it's needed.'
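Penda hasn't published its implementation, but the behavior described above—a reviewer that stays silent unless a draft plan looks problematic—maps onto a simple pattern: send the visit note and the clinician's draft plan to a model alongside a safety-review instruction, and surface output only when the model flags an issue. Here is a minimal sketch in Python using the OpenAI SDK; the prompt, the severity scheme, and the example inputs are illustrative assumptions, not Penda's actual design.

```python
# Illustrative sketch only -- not Penda's actual implementation.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the environment.
import json
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = """You are a clinical safety reviewer. Given a patient visit note and the
clinician's draft plan, respond with JSON: {"severity": "green"|"yellow"|"red",
"feedback": "<one short corrective nudge, empty if severity is green>"}.
Flag only clear deviations from standard guidelines (e.g., unnecessary antibiotics)."""

def review_visit(visit_note: str, draft_plan: str) -> dict:
    """Silently review a draft care plan; return a severity level and optional nudge."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": f"Visit note:\n{visit_note}\n\nDraft plan:\n{draft_plan}"},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

result = review_visit(
    "Adult, fever 38.5C for 2 days, rapid malaria test positive, platelets normal.",
    "Prescribe broad-spectrum antibiotics and discharge.",
)
if result["severity"] != "green":  # stay silent unless something looks off
    print(f"[{result['severity'].upper()}] {result['feedback']}")
```

The key design choice this sketch mirrors is the interruption threshold: the clinician sees nothing at all for routine decisions, which avoids the alert fatigue that made Penda's earlier button-press version go unused in about half of visits.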
Read More: The World's Richest Woman Has Opened a Medical School
Penda teamed up with OpenAI to study AI Consult's impact on reducing errors, both in making diagnoses and in prescribing treatments, across nearly 40,000 patient visits. Clinicians using the AI Consult tool made 16% fewer diagnostic errors and 13% fewer treatment errors than clinicians in the roughly 20,000 Penda visits conducted without it.
The fact that the study involved thousands of patients in a real-world setting sets a powerful precedent for how AI could be effectively used in providing and improving health care, says Dr. Isaac Kohane, professor of biomedical informatics at Harvard Medical School, who reviewed the study. 'We need much more of these kinds of prospective studies as opposed to the retrospective studies, where [researchers] look at big observational data sets and predict [health outcomes] using AI. This is what I was waiting for.'
Not only did the study show that AI can help reduce medical errors, and therefore improve the quality of care that patients receive, but the clinicians involved viewed the tool as a useful partner in their medical education. That came as a surprise to OpenAI's Karan Singhal, Health AI lead, who led the study. 'It was a learning tool for [those who used it] and helped them educate themselves and understand a wider breadth of care practices that they needed to know about,' says Singhal. 'That was a bit of a surprise, because it wasn't what we set out to study.'
Kiptinness says AI Consult served as an important confidence builder, helping clinicians gain experience in an efficient way. 'Many of our clinicians now feel that AI Consult has to stay in order to help them have more confidence in patient care and improve the quality of care.'
Clinicians get immediate feedback in the form of a green, yellow, and red-light system that evaluates their clinical actions, and the company gets automatic evaluations of their strengths and weaknesses. 'Going forward, we do want to give more individualized feedback, such as, "You are great at managing obstetric cases, but in pediatrics, these are the areas you should look into,"' says Kiptinness. 'We have many ideas for customized training guides based on the AI feedback.'
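The article doesn't detail how those automatic evaluations are produced, but aggregating the traffic-light flags per clinician and per clinical area would be one straightforward way to surface the strengths-and-weaknesses feedback Kiptinness describes. A hypothetical sketch, with made-up field names:

```python
# Hypothetical aggregation of traffic-light flags into per-clinician feedback.
# The record fields (clinician, area, severity) are illustrative assumptions.
from collections import Counter, defaultdict

def summarize_flags(records: list[dict]) -> dict[str, Counter]:
    """Count yellow/red flags per clinician, broken down by clinical area."""
    summary: dict[str, Counter] = defaultdict(Counter)
    for r in records:
        if r["severity"] in ("yellow", "red"):
            summary[r["clinician"]][r["area"]] += 1
    return summary

records = [
    {"clinician": "A", "area": "pediatrics", "severity": "red"},
    {"clinician": "A", "area": "obstetrics", "severity": "green"},
    {"clinician": "A", "area": "pediatrics", "severity": "yellow"},
]
for clinician, areas in summarize_flags(records).items():
    weakest = areas.most_common(1)[0][0]
    print(f"Clinician {clinician}: most flags in {weakest} -> suggest targeted training")
```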
Read More: The Surprising Reason Rural Hospitals Are Closing
Such co-piloting could be a practical and powerful way to start incorporating AI into the delivery of health care, especially in areas with high need and few health care professionals. The findings have 'shifted what we expect as standard of care within Penda,' says Korom. 'We probably wouldn't want our clinicians to be completely without this.'
The results also set the stage for more meaningful studies of AI in health care that move the practice from theory to reality. Dr. Ethan Goh, executive director of the Stanford AI Research and Science Evaluation network and associate editor of the journal BMJ Digital Health & AI, anticipates that the study will inspire similar ones in other settings, including in the U.S. 'I think that the more places that replicate such findings, the more the signal becomes real in terms of how much value [from AI-based systems] we can capture,' he says. 'Maybe today we are just catching mistakes, but what if tomorrow we are able to go beyond, and AI suggests accurate plans before a doctor makes mistakes to begin with?'
Tools like AI Consult may extend access to health care even further by putting it in the hands of non-medical people such as social workers, or by providing more specialized care in areas where such expertise is unavailable. 'How far can we push this?' says Korom.
The key, he says, would be to develop, as Penda did, a highly customized model that accurately incorporates the workflow of the providers and patients in a given setting. Penda's AI Consult, for example, focused on the types of diseases most likely to occur in Kenya, and the symptoms clinicians are most likely to see. If such factors are taken into account, he says, 'I think there is a lot of potential there.'