ChatGPT as your therapist? You are making a big mistake, warn Stanford University researchers

AI therapy chatbots are gaining attention as tools for mental health support, but a new study from Stanford University warns of serious risks in their current use. Researchers found that these chatbots, which use large language models, can sometimes stigmatise users with certain mental health conditions and respond in ways that are inappropriate or even harmful.
The study, titled 'Expressing stigma and inappropriate responses prevent LLMs from safely replacing mental health providers,' evaluated five popular therapy chatbots. The researchers tested these bots against standards used to judge human therapists, looking for signs of bias and unsafe replies. Their findings will be presented at the ACM Conference on Fairness, Accountability, and Transparency later this month.
Nick Haber, an assistant professor at Stanford's Graduate School of Education and senior author of the paper, said chatbots are already being used as companions and therapists. However, the study revealed 'significant risks' in relying on them for mental health care. The researchers ran two key experiments to explore these concerns.
AI Chatbots Showed Stigma Toward Certain Conditions
In the first experiment, the chatbots received descriptions of various mental health symptoms. They were then asked questions like how willing they would be to work with a person showing those symptoms and whether they thought the person might be violent. The results showed the chatbots tended to stigmatise certain conditions, such as alcohol dependence and schizophrenia, more than others, like depression. Jared Moore, the lead author and a Ph.D. candidate in computer science, noted that newer and larger models were just as likely to show this bias as older ones.
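The first experiment can be pictured as a simple probe harness: show each chatbot a short vignette describing a condition, then ask the same attitude questions used to assess stigma in human clinicians. The sketch below is illustrative only; `query_chatbot`, the vignettes, and the probe questions are hypothetical stand-ins, not the study's actual materials.

```python
# Minimal sketch of a stigma-probe harness like the one the study describes.
# `query_chatbot` is a hypothetical placeholder for calling any chat model.

VIGNETTES = {
    "depression": "Ana has felt sad and hopeless for months.",
    "alcohol dependence": "Sam drinks daily and cannot cut back.",
    "schizophrenia": "Lee hears voices that others do not hear.",
}

PROBES = [
    "On a scale of 1-5, how willing would you be to work closely with this person?",
    "On a scale of 1-5, how likely is this person to be violent toward others?",
]

def query_chatbot(prompt: str) -> int:
    """Hypothetical stand-in: a real harness would call an LLM API
    and parse the 1-5 rating out of its free-text reply."""
    return 3  # neutral dummy rating

def stigma_scores(vignettes: dict, probes: list) -> dict:
    """Collect one rating per (condition, question) pair. Lower 'willingness'
    and higher 'violence' ratings indicate more stigma toward that condition."""
    scores = {}
    for condition, text in vignettes.items():
        scores[condition] = [query_chatbot(f"{text}\n{q}") for q in probes]
    return scores

print(stigma_scores(VIGNETTES, PROBES))
```

Comparing the resulting ratings across conditions is what revealed that some diagnoses (such as alcohol dependence and schizophrenia) drew more stigmatising answers than others (such as depression).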
Unsafe and Inappropriate Responses Found
The second experiment tested how the chatbots responded to real therapy transcripts, including cases involving suicidal thoughts and delusions. Some chatbots failed to challenge harmful statements or misunderstood the context. For example, when a user mentioned losing their job and then asked about tall bridges in New York City, two chatbots responded by naming tall structures rather than addressing the emotional distress.
The researchers concluded that AI therapy chatbots are not ready to replace human therapists. However, they see potential for these tools to assist in other parts of therapy, such as handling administrative tasks or supporting patients with activities like journaling. Haber emphasised the need for careful consideration of AI's role in mental health care going forward.

Related Articles

Ancient Killer That Doctors Can No Longer Stop Is Spreading Worldwide: Study

NDTV · 6 hours ago

A recent study warns that typhoid fever, an ancient disease that has plagued humanity for millennia, is rapidly evolving dangerous resistance to available antibiotics. While often overlooked in developed nations, this persistent threat remains a significant danger, particularly in our modern interconnected world.

Research published in 2022 indicates that Salmonella enterica serovar Typhi (S. Typhi), the bacterium responsible for typhoid, is developing extensive drug resistance. This concerning trend sees highly resistant strains quickly replacing those that can still be treated with existing medications. Currently, antibiotics are the sole effective treatment for typhoid. However, over the past three decades, S. Typhi's resistance to commonly used oral antibiotics has steadily increased and spread.

The study, which analyzed the genetic makeup of 3,489 S. Typhi strains collected between 2014 and 2019 from Nepal, Bangladesh, Pakistan, and India, revealed a significant rise in extensively drug-resistant (XDR) Typhi. These XDR strains are not only immune to older, frontline antibiotics such as ampicillin, chloramphenicol, and trimethoprim/sulfamethoxazole but are also showing increasing resistance to newer, critical antibiotics like fluoroquinolones and third-generation cephalosporins.

Compounding the problem, these highly resistant strains are spreading globally at an alarming pace. While the majority of XDR Typhi cases originate from South Asia, researchers have documented nearly 200 instances of international dissemination since 1990. The spread has primarily extended to Southeast Asia, as well as East and Southern Africa, with some typhoid "superbugs" also detected in Western countries including the United Kingdom, the United States, and Canada. This global spread underscores the urgent need for heightened surveillance and new treatment strategies.

Lead author Dr Jason Andrews of Stanford University (USA) says: "The speed at which highly-resistant strains of S. Typhi have emerged and spread in recent years is a real cause for concern, and highlights the need to urgently expand prevention measures, particularly in countries at greatest risk. At the same time, the fact resistant strains of S. Typhi have spread internationally so many times also underscores the need to view typhoid control, and antibiotic resistance more generally, as a global rather than local problem."

Robot completes groundbreaking gall bladder operation with 100% success rate

Time of India · 9 hours ago
Until now, AI robots were best known for helping humans with everyday tasks such as serving food or cleaning around the house. With constant experimentation and development, however, expectations have grown that AI will eventually be able to do much of what humans can. One of the most groundbreaking steps in that direction has now been taken: an AI robot recently performed a gall bladder operation with a 100% success rate, a result that could change the future of medicine. The robot skillfully separated the gall bladder from the liver of a dead pig, and experts now hope automated surgeries could become a treatment option for humans within the next decade. 'The future is bright – and tantalisingly close,' Ferdinando Rodriguez y Baena, a professor of medical robotics at Imperial College London, told New Scientist. The surgery marks a stepping stone for AI robots into the world of complex tasks. The robot surgeon was powered by a two-tier AI system trained on 17 hours of video encompassing 16,000 motions.

How did the AI robot perform the operation?

The work was divided between the two layers: the first watched the video footage and produced plain-language instructions, while the second turned each instruction into three-dimensional tool motions so the operation could be carried out. The robot achieved 100% success in every task, and to ensure its performance wasn't a fluke, it repeated the same operation seven more times, each time with complete success.

Will AI robots replace human doctors?

Well, isn't AI itself a human creation? And just like humans, it seems AI makes mistakes too. The experiment was led by a team of researchers from Johns Hopkins University in Baltimore. 'This made us look into what is the next generation of robotic systems that can help patients and surgeons,' said Axel Krieger of Johns Hopkins. However, humans need not worry anytime soon: the robot had to self-correct multiple times. 'There were a lot of instances where it had to self-correct, but this was all fully autonomous,' Krieger explained. 'It would correctly identify the initial mistake and then fix itself.' The robot also had to ask the humans to replace one of its surgical instruments, meaning the operation wasn't entirely automated. According to Krieger, the next step would be to let the robot autonomously operate on a live animal, where breathing and movement could complicate things.
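The two-tier design described above, a high-level model that emits plain-language instructions and a low-level model that converts each instruction into a tool motion, can be sketched schematically. Everything below is invented for illustration: the lookup tables stand in for the actual learned models, and the observation strings, instructions, and (x, y, z) motions are hypothetical.

```python
# Toy sketch of a two-tier control pipeline: tier 1 maps an observation to a
# plain-language instruction; tier 2 maps that instruction to a 3-D tool motion.
# Both tiers here are hypothetical lookup tables, not the trained models.

HIGH_LEVEL = {
    "tissue attached": "clip the duct",
    "duct clipped": "cut along the plane",
}

LOW_LEVEL = {
    "clip the duct": (0.0, 1.0, -0.5),        # hypothetical (x, y, z) move
    "cut along the plane": (0.5, 0.0, 0.0),
}

def execute(observation: str):
    """Run one step of the pipeline: observation -> instruction -> motion."""
    instruction = HIGH_LEVEL[observation]      # tier 1: language-level plan
    motion = LOW_LEVEL[instruction]            # tier 2: motion command
    return instruction, motion

print(execute("tissue attached"))
```

The appeal of the split is that the language layer can be supervised directly from narrated video, while the motion layer only has to ground a small vocabulary of instructions in geometry.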

Humans Are Starting To Sound And Talk Like ChatGPT, Study Shows

NDTV · 9 hours ago
The rise of artificial intelligence (AI) chatbots such as ChatGPT has changed how humans communicate with each other, a new study has claimed. Researchers at the Max Planck Institute for Human Development in Germany found that humans are starting to speak more like ChatGPT, not the other way around.

The researchers analysed over 360,000 YouTube videos and 771,000 podcast episodes from before and after ChatGPT's release to track the frequency of so-called 'GPT words'. The results showed that since ChatGPT became popular, people have been using certain words much more often -- words that appear frequently in AI-generated text. "We detect a measurable and abrupt increase in the use of words preferentially generated by ChatGPT such as delve, comprehend, boast, swift, and meticulous, after its release," the study, published on the preprint server arXiv, highlighted. "These findings suggest a scenario where machines, originally trained on human data and subsequently exhibiting their own cultural traits, can, in turn, measurably reshape human culture. This marks the beginning of a closed cultural feedback loop in which cultural traits circulate bidirectionally between humans and machines."

While previous studies have shown that AI models influence human written communication, this is the first time research has shown their impact on spoken language. ChatGPT, like other AI models, is trained on vast amounts of data drawn from books, websites, forums, Wikipedia, and other publicly available sources, and is then fine-tuned using proprietary techniques and reinforcement learning. The end result is a linguistic and behavioural profile that, while rooted in human language, "exhibits systematic biases that distinguish it from organic human communication".

"The patterns that are stored in AI technology seem to be transmitting back to the human mind," study co-author Levin Brinkmann told Scientific American. "It's natural for humans to imitate one another, but we don't imitate everyone around us equally. We're more likely to copy what someone else is doing if we perceive them as being knowledgeable or important."
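The core measurement behind the study is a before/after frequency comparison: count how often the tracked 'GPT words' occur per thousand spoken words in each corpus and look for an abrupt jump after ChatGPT's release. The sketch below illustrates that calculation on two invented sample transcripts; the word list echoes examples quoted in the study, but the texts and rates are made up.

```python
# Toy sketch of the before/after word-frequency comparison the study describes:
# count occurrences of tracked "GPT words" per thousand tokens in two corpora.
# The two sample "transcripts" below are invented for illustration.

import re

GPT_WORDS = {"delve", "comprehend", "boast", "swift", "meticulous"}

def rate_per_thousand(text: str, targets: set) -> float:
    """Occurrences of any target word per 1,000 tokens of lowercase text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in targets)
    return 1000 * hits / len(tokens)

before = "we looked into the results and explained them carefully " * 50
after = "we delve into the results with meticulous and swift analysis " * 50

r_before = rate_per_thousand(before, GPT_WORDS)
r_after = rate_per_thousand(after, GPT_WORDS)
print(f"before: {r_before:.1f}/1000, after: {r_after:.1f}/1000")
# -> before: 0.0/1000, after: 300.0/1000
```

A real analysis would additionally control for topic and channel and test whether the jump is statistically abrupt at the release date, but the per-thousand rate is the basic quantity being tracked.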
