
Can ChatGPT detect serious diseases like cancer? Here's how AI tool alerted 27-year-old woman
Marley Garnreiter, 27, was experiencing skin irritation and constant night sweats, which she attributed to stress following her father's death from colon cancer. At the time, medical examinations revealed no significant health issues, and all her test results were normal.
Even after her test reports came back normal, she kept searching for answers and decided to describe her symptoms to ChatGPT. The AI chatbot responded that her symptoms might indicate blood cancer. She initially disregarded the warning, telling People.com that she didn't trust the chatbot and that her friends had cautioned her against following a machine's medical advice.
A few months later, Garnreiter began experiencing chest pain and constant exhaustion. Following a second round of medical consultations, a scan revealed a sizable mass in her left lung. She was diagnosed with Hodgkin lymphoma, a rare blood cancer that attacks white blood cells.
Garnreiter, who is now preparing for chemotherapy, says she never imagined that an AI tool would spot something so important before the doctors did.
'I just didn't want my family to go through this all over again,' she told the Daily Mail.
'It's really important to listen to our bodies. Sometimes we tend to lose our connection with our inner self,' Garnreiter added.
Although Hodgkin lymphoma is uncommon, early diagnosis greatly improves the chances of successful treatment: the five-year survival rate is around 80 percent, according to medical specialists. Garnreiter experienced several of its common symptoms, including fever, night sweats, itching, exhaustion, and stomach discomfort.

Related Articles


Scroll.in
4 hours ago
As young Indians turn to AI 'therapists', how confidential is their data?
This is the second of a two-part series. Read the first here.

Imagine a stranger getting hold of a mental health therapist's private notes – and then selling that information to deliver tailored advertisements to their clients. That's practically what many mental healthcare apps might be doing.

Young Indians are increasingly turning to apps and artificial intelligence-driven tools to address their mental health challenges – but they have limited awareness of how these digital tools process user data. In January, the Centre for Internet and Society published a study of 45 mental health apps – 28 from India and 17 from abroad – and found that 80% gathered user health data that they used for advertising and shared with third-party service providers. An overwhelming number of these apps, 87%, shared the data with law enforcement and regulatory bodies.

The first article in this series reported that some of these apps are especially popular with young Indian users, who rely on them for quick and easy access to therapy and mental healthcare support. Users had also told Scroll that they turned to AI-driven technology, such as ChatGPT, to discuss their feelings and get advice, however limited this may be compared to interacting with a human therapist. But they were not especially worried about data misuse. Keshav*, 21, reflected a common sentiment among those Scroll interviewed: 'Who cares? My personal data is already out there.'

The functioning of Large Language Models, such as ChatGPT, is already under scrutiny. LLMs are 'trained' on vast amounts of data, either from the internet or provided by their trainers, to simulate human learning, problem-solving and decision-making. Sam Altman, CEO of OpenAI, which built ChatGPT, said on a podcast in July that though users talk about personal matters with the chatbot, there are no legal safeguards protecting that information. 'People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do?' he asked. 'And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT.'

He added: 'So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up.'

Therapists and experts said the ease of access of AI-driven mental health tools should not sideline privacy concerns. Clinical psychologist Rhea Thimaiah, who works at Kaha Mind, a collective that provides mental health services, emphasised that confidentiality is an essential part of the process of therapy. 'The therapeutic relationship is built on trust and any compromise in data security can very possibly impact a client's sense of safety and willingness to engage,' she said. 'Clients have a right to know how their information is being stored, who has access, and what protections are in place.'

This is more than mere data – it is someone's memories, trauma and identity, Thimaiah said. 'If we're going to bring AI into this space, then privacy shouldn't be optional, it should be fundamental.'

Srishti Srivastava, founder of AI-driven mental health app Infiheal, said that her firm collects user data to train its AI bot, but users can access the app without signing up and can also ask for their data to be deleted.
Dhruv Garg, a tech policy lawyer at the Indian Governance and Policy Project, said the risk lies not just in apps collecting data but in the potential downstream uses of that information. 'Even if it's not happening now, an AI platform in the future could start using your data to serve targeted ads or generate insights – commercial, political, or otherwise – based on your past queries,' said Garg. 'Current privacy protections, though adequate for now, may not be equipped to deal with each new future scenario.'

India's data protection law

For now, personal data processed by chatbots is governed by the Information Technology Act framework and the Sensitive Personal Data Rules, 2011. Section 5 of the sensitive data rules says that companies must obtain consent in writing before collecting or using sensitive information. According to the rules, information relating to health and mental health conditions is considered sensitive data. There are also specialised sectoral data protection rules that apply to regulated entities like hospitals.

The Digital Personal Data Protection Act, passed by Parliament in 2023, is expected to be notified soon. But it exempts publicly available personal data from its ambit if this information has been voluntarily disclosed by an individual. Given the black market of data intermediaries that publish large volumes of personal information, it is difficult to tell what personal data in the public domain has been made available 'voluntarily'.

The new data protection act does not set different regulatory standards for specific categories of personal data – financial, professional, or health-related, Garg said. This means that health data collected by AI tools in India will not be treated with special sensitivity under this framework. 'For instance, if you search for symptoms on Google or visit WebMD, Google isn't held to a higher standard of liability just because the content relates to health,' said Garg. WebMD provides health and medical information.

It might be different for AI tools explicitly designed for mental healthcare, unlike general-purpose models like ChatGPT. These, according to Garg, 'could be made subject to more specific sectoral regulations in the future'.

However, the very logic on which AI chatbots function – responding based on user data and inputs – could itself be a privacy risk. Nidhi Singh, a senior research analyst and programme manager at Carnegie India, an American think tank, said she has concerns about how tools like ChatGPT customise responses and remember user history – even though users may appreciate those functions.

Singh said India's new data protection law is quite clear that any data made publicly available by putting it on the internet is no longer considered personal data. 'It is unclear how this will apply to your conversations with ChatGPT,' she said. Without specific legal protections, there's no telling how an AI-driven tool will use the data it has gathered. According to Singh, without a specific rule designating conversations with generative AI as an exception, it is likely that a user's interactions with these AI systems won't be treated as personal data and consequently will not fall under the purview of the act.

Who takes legal responsibility?

Technology firms have tried hard to evade legal liability for harm. In Florida, a mother has alleged in a lawsuit that her 14-year-old son died by suicide after becoming deeply entangled in an 'emotionally and sexually abusive relationship' with a chatbot.
In case of misdiagnosis or harmful advice from an AI tool, legal responsibility is likely to be analysed in court, said Garg. 'The developers may argue that the model is general-purpose, trained on large datasets, and not supervised by a human in real time,' said Garg. 'Some parallels may be drawn with search engines – if someone acts on bad advice from search results, the responsibility doesn't fall on the search engine, but on the user.'

Highlighting the urgent need for a conversation on sector-specific liability frameworks, Garg said that for now, the legal liability of AI developers will have to be assessed on a case-by-case basis. 'Courts may examine whether proper disclaimers and user agreements were in place,' he said.

In another case, Air Canada was ordered to pay compensation to a customer who was misled by its chatbot regarding bereavement fares. The airline had argued that the chatbot was a 'separate legal entity' and therefore responsible for its own actions.

Singh of Carnegie India said that transparency is important and that user consent should be meaningful. 'You don't need to explain the model's source code, but you do need to explain its limitations and what it aims to do,' she said. 'That way, people can genuinely understand it, even if they don't grasp every technical step.'

AI, meanwhile, is here for the long haul. Until India can expand its capacity to offer mental health services to everyone, Singh said, AI will inevitably fill that void. 'The use of AI will only increase as Indic language LLMs are being built, further expanding its potential to address the mental health therapy gap,' she said.


Time of India
10 hours ago
Teens turn to AI chatbots for emotional bonding; it's risky romance, warn psychologists
Hyderabad: What began as casual scrolling, streaming, or chatting has spiralled into something deeper and far more complex. Across Telangana, mental health professionals are raising red flags over an emerging crisis: young people are not just using the internet for entertainment, they are forming emotional and romantic bonds with AI chatbots and virtual personas.

In therapy rooms across Hyderabad and beyond, this new form of digital intimacy is surfacing with increasing frequency. The screen is no longer a boundary. For many, it's a refuge. And for some, a companion.

Take the case of a 12-year-old girl who developed a deep emotional connection with ChatGPT. She affectionately named the AI 'Chinna' (meaning little one in Telugu) and shaped it into a confidante. "She would vent everything to ChatGPT – issues with her parents, school, friendships," said Dr Nithin Kondapuram, senior consultant psychiatrist at Aster Prime Hospital. "This is not isolated. On any given day, I see around 15 young patients with anxiety or depression, and five of them exhibit emotional attachment to AI tools," he said.

In another case, a 22-year-old man created a romantic fantasy world with an AI chatbot. He imagined the bot as his girlfriend, asked it for gifts, and had it play love songs tailored to their 'relationship'. "For him, the AI wasn't code – it was a silent partner who never judged. It gave him emotional security he couldn't find in real life," Dr Nithin said.

Such stories are becoming more common, and they aren't restricted to urban areas. Dr Gauthami Nagabhirava, a senior psychiatrist at Kamineni Hospitals, has seen similar patterns emerge in rural communities. "In one rural case, a 12-year-old girl bonded with an AI companion and began accessing inappropriate content online while her mother was away at work. Eventually, she started inviting male friends home without supervision," she said.

Another case involved a teenage girl who lashed out during therapy after creating an imaginary AI companion. "She accused her parents of stifling her freedom, suddenly declared herself bisexual, and expressed a strong desire to move abroad. Her identity was based purely on perception. She was too inexperienced to even understand what her orientation truly was," Dr Gauthami elaborated.

Dr C Virender, a psychologist, recounted the story of a 25-year-old woman who sought emotional guidance from an AI chatbot about a male colleague she admired. "She would describe his personality to the AI, ask what kind of woman he might like, or how she should dress to attract him," he said. Using these responses, she would alter her behaviour around him. "Eventually, the man accused her of stalking. She was devastated and began to spiral at work. She had become so reliant on the AI that real human interactions felt threatening," he recalled.

Mental health professionals agree that the roots of this digital dependence lie in loneliness, fear of judgment, low self-esteem, and the absence of healthy social interaction, all exacerbated by nuclear family structures and limited parental supervision. "Young people escape into digital realms where they feel accepted and unchallenged," said Dr Nithin. "Our job is to reintroduce them to the real world gently. We assign them small real-life tasks – like visiting a local shop or spending time in a metro station – to help rebuild their confidence."
But in some cases, efforts to curb digital addiction can backfire. According to Dr Gauthami, parents often make the mistake of sending affected children to highly regulated hostels with a strict ban on mobile usage. "This only worsens their condition and causes irreparable damage to already fragile minds," she warned.

The issue is compounded among students facing academic and career pressure. Dr Uma Shankar, a psychiatry professor at a government medical college in Maheshwaram, said rural engineering students are particularly vulnerable. "They fail exams, don't get placed in companies, and feel like they're letting everyone down. That emotional burden drives them into digital addiction. It becomes an escape hatch," she explained.

The scale of the problem is reflected in national data. A Nimhans survey conducted in six major cities, including Hyderabad, flagged concerning patterns in digital behaviour. A separate fact sheet from the Centre for Economic and Social Studies found that nearly 19% of people aged 21–24 experience mental health issues by age 29, primarily anxiety and depression.

What's driving this growing trend isn't just the allure of the screen but the emotional responsiveness of AI itself. With their friendly tone, prompt replies, and tireless attention, AI chatbots are becoming something more than tools. "As AI becomes more human-like, these emotional entanglements will only grow. It's no longer science fiction. It's already happening – quietly, in homes, classrooms, and clinics," experts warned.


Time of India
12 hours ago
Start budgeting to be happier: New study reveals surprising link between smart money management and mental health
In an eye-opening new study, finance experts at the University of South Australia have found a surprisingly strong connection between everyday financial habits and mental wellbeing. From regular savings to timely credit card repayments, the research suggests that your wallet and your mind may be more closely linked than you think.

The study, titled 'Understanding the Effect of Financial Behaviour on Mental Health: Evidence From Australia' and based on data from the long-running Household, Income and Labour Dynamics in Australia (HILDA) survey, followed over 17,000 Australians aged 15 and older across two decades. Researchers discovered that individuals who followed stable financial routines – especially those who saved consistently and paid off credit card debt on time – reported not only better mental health, but also higher energy levels, stronger social ties, and greater overall life satisfaction.

Rajabrata Banerjee, an expert in applied economics and a member of UniSA's Centre for Markets, Values and Inclusion, explains that while the stress of debt has long been known to negatively affect mental health, less attention has been paid to the positive impact of proactive money habits. 'We already know that having high debt and low savings has a negative impact on mental health,' Banerjee said in the university's official release. 'But we wanted to learn more about the behaviors – like how often someone saves or pays off debt – that might reduce financial strain and improve wellbeing.'

A Lifeline Amid Rising Living Costs

The findings couldn't be more timely. With Australians grappling with rising utility bills and persistent cost-of-living pressures, the financial strain is more real than ever, especially for younger people. The study found that sharp increases in the cost of electricity, gas and water hit younger individuals hardest, since they typically have lower savings and higher levels of debt. This in turn affects their ability to save or pay off debt, triggering a cycle of financial stress and mental strain.

Yet the benefits of healthy money habits weren't exclusive to any particular income group. Whether someone earned a little or a lot, the study showed that consistent saving and debt management offered a mental health boost. Even small savings could make a meaningful difference when done regularly.

Why Men Benefit More And Why That's Concerning

One notable finding was the gender gap in financial impact. 'The positive effect of savings on mental health was stronger for men than for women,' said Banerjee. This may reflect deeper societal patterns where men are still more often the primary financial decision-makers in households, a factor that can exacerbate gender disparities in both money management and mental health.

Building a Foundation for the Future

The study makes a compelling case for rethinking personal finance not just as an economic tool, but as a mental health strategy. Financial hardship, Banerjee warns, can lead to a loss of independence, long-term insecurity, and even continuous debt cycles. 'When people are financially strained, they often miss out on investing in their future, and that adds to a sense of hopelessness,' he noted. 'But healthy financial behaviors create stability, open doors, and significantly reduce mental stress.'

So while therapy, mindfulness, and self-care remain essential to wellbeing, don't underestimate the quiet power of consistent savings and timely bill payments. Sometimes, peace of mind begins with a balance sheet.