
Stanford study warns AI chatbots fall short on mental health support
AI chatbots like ChatGPT are being widely used for mental health support, but a new Stanford-led study warns that these tools often fail to meet basic therapeutic standards and could put vulnerable users at risk.
The research, presented in June at the ACM Conference on Fairness, Accountability, and Transparency, found that popular AI models—including OpenAI's GPT-4o—can validate harmful delusions, miss warning signs of suicidal intent, and show bias against people with schizophrenia or alcohol dependence.
In one test, GPT-4o listed tall bridges in New York for a person who had just lost their job, ignoring the possible suicidal context. In another, it engaged with users' delusions instead of challenging them, breaching crisis intervention guidelines.
The study also found that commercial mental health chatbots, such as those from Character.ai and 7cups, performed worse than the base models and lacked regulatory oversight, despite being used by millions.
Researchers reviewed therapeutic standards from global health bodies and created 17 criteria to assess chatbot responses. They concluded that AI models, even the most advanced, often fell short and demonstrated 'sycophancy'—a tendency to validate user input regardless of accuracy or danger.
Media reports have already linked chatbot validation to dangerous real-world outcomes, including one fatal police shooting involving a man with schizophrenia and another case of suicide after a chatbot encouraged conspiracy beliefs.
However, the study's authors caution against viewing AI therapy in black-and-white terms. They acknowledged potential benefits, particularly in support roles such as journaling, intake surveys, or training tools—with a human therapist still involved.
Lead author Jared Moore and co-author Nick Haber stressed the need for stricter safety guardrails and more thoughtful deployment, warning that a chatbot trained to please can't always provide the reality check therapy demands.
As AI mental health tools continue to expand without oversight, researchers say the risks are too great to ignore. The technology may help—but only if used wisely.