Mom of two credits ChatGPT with saving her life by helping detect cancer — which doctors missed

New York Post, 24-04-2025
A mother of two credits ChatGPT for saving her life, claiming the artificial intelligence chatbot flagged the condition leading to her cancer when doctors missed it.
Lauren Bannon, who divides her time between North Carolina and the US Virgin Islands, first noticed in February 2024 that she was having trouble bending her fingers in the morning and evening, as reported by Kennedy News and Media.
After four months, the 40-year-old was told by doctors that she had rheumatoid arthritis, despite testing negative for the condition.
Bannon, who owns a marketing company, then began experiencing excruciating stomach pains and lost 14 pounds in just a month, which doctors blamed on acid reflux.
Desperate to pinpoint the cause of her symptoms, Bannon turned to ChatGPT, the large-language model made by OpenAI.
The chatbot told Bannon that she may have Hashimoto's disease, an autoimmune condition where the body's immune system mistakenly attacks the thyroid gland, causing it to become inflamed and eventually underactive, according to Kennedy News and Media.
Despite reservations from her doctor, Bannon insisted on being tested for the condition in September 2024 — and was shocked to discover that ChatGPT was correct, despite the absence of any family history.
This prompted doctors to perform an ultrasound of Lauren's thyroid, when they discovered two small lumps in her neck that were confirmed as cancer in October 2024.
Bannon claims she would never have found the hidden cancer without the help of ChatGPT, which she credits for helping to save her life.
'I felt let down by doctors,' said Bannon, as reported by Kennedy News and Media. 'It was almost like they were just trying to give out medication for anything to get you in and out the door.'
'I needed to find out what was happening to me, I just felt so desperate. I just wasn't getting the answers I needed.'
Bannon said she had been using ChatGPT for work. When she asked the chatbot which medical conditions mimic rheumatoid arthritis, it answered, 'You may have Hashimoto's disease, ask your doctor to check your thyroid peroxidase antibody (TPO) levels.'
Following her cancer diagnosis, Bannon underwent an operation in January 2025 to remove her thyroid and two lymph nodes from her neck. She will remain under lifelong monitoring to ensure that the cancer doesn't return, according to the report.
Because she did not present with typical symptoms of Hashimoto's disease, Bannon believes her condition, and the cancer it led to, would have remained undetected without the help of ChatGPT.
'I didn't have the typical symptoms of Hashimoto's disease — I wasn't tired or feeling exhausted,' she said, per Kennedy News and Media.
'If I hadn't looked on ChatGPT, I would've just taken the rheumatoid arthritis medication and the cancer would've spread from my neck to everywhere else.'
'It saved my life. I would've never discovered this without ChatGPT. All my tests were perfect.'
Bannon encourages others to use the chatbot to investigate their own health concerns, but to 'act with caution.'
'If it gives you something to look into, ask your doctors to test you,' she suggested. 'It can't do any harm. I feel lucky to be alive.'
Dr. Harvey Castro, a board-certified emergency medicine physician and national speaker on artificial intelligence based in Dallas, Texas, said he welcomes the role of AI tools like ChatGPT in raising awareness and prompting faster action, but also urges caution.
'AI is not a replacement for human medical expertise,' he told Fox News Digital. 'These tools can assist, alert and even comfort — but they can't diagnose, examine or treat.'
'When used responsibly, AI can enhance healthcare outcomes — but when used in isolation, it can be dangerous,' Castro went on. 'We must prioritize patient safety and keep licensed medical professionals at the center of care.'
Fox News Digital reached out to OpenAI, maker of ChatGPT, for comment.

Related Articles

Think your ChatGPT therapy sessions are private? Think again.

Fast Company, a day ago
If you've been confessing your deepest secrets to an AI chatbot, it might be time to reevaluate. With more people turning to AI for instant life coaching, tools like ChatGPT are sucking up massive amounts of personal information on their users. While that data stays private under ideal circumstances, it could be dredged up in court – a scenario that OpenAI CEO Sam Altman warned users about in an appearance on Theo Von's popular podcast this week.

'One example that we've been thinking about a lot… people talk about the most personal shit in their lives to ChatGPT,' Altman said. 'Young people especially, use it as a therapist, as a life coach, "I'm having these relationship problems, what should I do?" And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it, there's doctor-patient confidentiality, there's legal confidentiality.'

Altman says that as a society we 'haven't figured that out yet' for ChatGPT. He called for a policy framework for AI, though in reality OpenAI and its peers have lobbied for a regulatory light touch.

'If you go talk to ChatGPT about your most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up,' Altman told Von, arguing that AI conversations should be treated with the same level of privacy as a chat with a therapist.

While interactions with doctors and therapists are protected by federal privacy laws in the U.S., exceptions exist for instances in which someone is a threat to themselves or others. And even with those strong privacy protections, relevant medical information can be surfaced by court order, subpoena or warrant. Altman's argument seems to be that, from a regulatory perspective, ChatGPT shares more in common with licensed, trained specialists than it does with a search engine.

'I think we should have the same concept of privacy for your conversations with AI that we do with a therapist,' he said. Altman also expressed concerns about how AI will adversely impact mental health, even as people seek its advice in lieu of the real thing.

'Another thing I'm afraid of… is just what this is going to mean for users' mental health. There's a lot of people that talk to ChatGPT all day long,' Altman said. 'There are these new AI companions that people talk to like they would a girlfriend or boyfriend. I don't think we know yet the ways in which [AI] is going to have those negative impacts, but I feel for sure it's going to have some, and I hope we can learn to mitigate it quickly.'
