
The AI therapist will see you now: Can chatbots really improve mental health?
As a neuroscientist, I couldn't help but wonder: Was I actually feeling better, or was I just being expertly redirected by a well-trained algorithm? Could a string of code really help calm a storm of emotions?
Artificial intelligence-powered mental health tools are becoming increasingly popular - and increasingly persuasive. But beneath their soothing prompts lie important questions: How effective are these tools? What do we really know about how they work? And what are we giving up in exchange for convenience?
Of course, it's an exciting moment for digital mental health. But understanding the trade-offs and limitations of AI-based care is crucial.
Stand-in meditation and therapy apps and bots
AI-based therapy is a relatively new player in the digital therapy field, but the US mental health app market has been booming for the past few years - from free apps that text you back to premium versions with added features such as guided breathing exercises.
Headspace and Calm are two of the most well-known meditation and mindfulness apps, offering guided meditations, bedtime stories and calming soundscapes to help users relax and sleep better.
Talkspace and BetterHelp go a step further, offering actual licensed therapists via chat, video or voice. The apps Happify and Moodfit aim to boost mood and challenge negative thinking with game-based exercises.
Somewhere in the middle are chatbot therapists like Wysa and Woebot, using AI to mimic real therapeutic conversations, often rooted in cognitive behavioral therapy. These apps typically offer free basic versions, with paid plans ranging from USD 10 to USD 100 per month for more comprehensive features or access to licensed professionals.
While not designed specifically for therapy, conversational tools like ChatGPT have sparked curiosity about AI's emotional intelligence.
Some users have turned to ChatGPT for mental health advice, with mixed outcomes, including a widely reported case in Belgium where a man died by suicide after months of conversations with a chatbot.
Elsewhere, a father is seeking answers after his son was fatally shot by police, alleging that distressing conversations with an AI chatbot may have influenced his son's mental state. These cases raise ethical questions about the role of AI in sensitive situations.
Where AI comes in
Whether your brain is spiralling, sulking or just needs a nap, there's a chatbot for that. But can AI really help your brain process complex emotions? Or are people just outsourcing stress to silicon-based support systems that sound empathetic?
And how exactly does AI therapy work inside our brains?
Most AI mental health apps promise some flavor of cognitive behavioral therapy, which is basically structured self-talk for your inner chaos. Think of it as Marie Kondo-ing your mind, a nod to the Japanese tidying expert known for helping people keep only what "sparks joy": you identify unhelpful thought patterns like "I'm a failure," examine them, and decide whether they serve you or just create anxiety.
But can a chatbot help you rewire your thoughts? Surprisingly, there's science suggesting it's possible. Studies have shown that digital forms of talk therapy can reduce symptoms of anxiety and depression, especially for mild to moderate cases. In fact, Woebot has published peer-reviewed research showing reduced depressive symptoms in young adults after just two weeks of chatting.
These apps are designed to simulate therapeutic interaction, offering empathy, asking guided questions and walking you through evidence-based tools. The goal is to help with decision-making and self-control, and to help calm the nervous system.
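To make that concrete, here is a minimal sketch, in Python, of the kind of rule-based turn a cognitive behavioral therapy-style chatbot might take: reflect what the user said, flag language associated with an unhelpful thinking pattern, and prompt a reframe. The cue lists, wording and function names are invented for illustration; real products rely on trained language models and clinician-reviewed scripts rather than anything this simple.

```python
# Hypothetical sketch of a single CBT-style chatbot turn.
# Real apps use trained language models and clinician-reviewed content;
# the cue lists and wording below are invented for illustration.

COGNITIVE_DISTORTIONS = {
    "all-or-nothing thinking": ["always", "never", "completely", "totally"],
    "labelling": ["i'm a failure", "i'm useless", "i'm worthless"],
    "catastrophising": ["disaster", "ruined", "the worst"],
}

def cbt_turn(user_message: str) -> str:
    """Reflect the message, name a possible thinking pattern, prompt a reframe."""
    text = user_message.lower()
    for distortion, cues in COGNITIVE_DISTORTIONS.items():
        if any(cue in text for cue in cues):
            return (
                "That sounds really hard. I noticed language that often goes with "
                f"{distortion}. What evidence supports that thought, what evidence "
                "doesn't, and how might you restate it more fairly?"
            )
    return "Thanks for sharing. What was going through your mind when that happened?"

print(cbt_turn("I bombed one meeting, so I'm a failure at my job."))
```

Even this toy version hints at why such tools can feel responsive: the structure of the exchange, not deep understanding, does much of the work.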
The neuroscience behind cognitive behavioral therapy is solid: It's about activating the brain's executive control centres, helping us shift our attention, challenge automatic thoughts and regulate our emotions.
The question is whether a chatbot can reliably replicate that, and whether our brains actually believe it.
A user's experience, and what it might mean for the brain
"I had a rough week," a friend told me recently. I asked her to try out a mental health chatbot for a few days. She told me the bot replied with an encouraging emoji and a prompt generated by its algorithm to try a calming strategy tailored to her mood. Then, to her surprise, it helped her sleep better by week's end.
As a neuroscientist, I couldn't help but ask: Which neurons in her brain were kicking in to help her feel calm?
This isn't a one-off story. A growing number of user surveys and clinical trials suggest that cognitive behavioral therapy-based chatbot interactions can lead to short-term improvements in mood, focus and even sleep. In randomised studies, users of mental health apps have reported reduced symptoms of depression and anxiety - outcomes that closely align with how in-person cognitive behavioral therapy influences the brain.
Several studies show that therapy chatbots can actually help people feel better. In one clinical trial, a chatbot called "Therabot" helped reduce depression and anxiety symptoms by nearly half - similar to what people experience with human therapists.
Other research, including a review of over 80 studies, found that AI chatbots are especially helpful for improving mood, reducing stress and even helping people sleep better. In one study, a chatbot outperformed a self-help book in boosting mental health after just two weeks.
While people often report feeling better after using these chatbots, scientists haven't yet confirmed exactly what's happening in the brain during those interactions. In other words, we know they work for many people, but we're still learning how and why.
Red flags and risks
Apps like Wysa have earned FDA Breakthrough Device designation, a status that fast-tracks promising technologies for serious conditions, suggesting they may offer real clinical benefit. Woebot, similarly, runs randomised clinical trials showing improved depression and anxiety symptoms in new moms and college students.
While many mental health apps boast labels like "clinically validated" or "FDA approved," those claims are often unverified. A review of top apps found that most made bold claims, but fewer than 22 per cent cited actual scientific studies to back them up.
In addition, chatbots collect sensitive information about your mood, triggers and personal stories. What happens if that data winds up in third-party hands such as advertisers, employers or hackers? That scenario has already played out with genetic data.
In a 2023 breach, nearly 7 million users of the DNA testing company 23andMe had their DNA and personal details exposed after hackers used previously leaked passwords to break into their accounts. Regulators later fined the company more than USD 2 million for failing to protect user data.
Unlike clinicians, bots aren't bound by counselling ethics or privacy laws regarding medical information. You might be getting a form of cognitive behavioral therapy, but you're also feeding a database.
And sure, bots can guide you through breathing exercises or prompt cognitive reappraisal, but when faced with emotional complexity or crisis, they're often out of their depth. Human therapists tap into nuance, past trauma, empathy and live feedback loops. Can an algorithm say "I hear you" with genuine understanding? Neuroscience suggests that supportive human connection activates social brain networks that AI can't reach.
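Part of the gap is structural. A hedged sketch of the kind of crisis guardrail a chatbot might rely on, with an invented cue list and hand-off message, illustrates both the idea and its brittleness: distress that isn't phrased in the expected words can slip straight past it.

```python
# Hypothetical crisis-escalation guardrail, not taken from any real product.
# Keyword matching like this is brittle: distress phrased indirectly
# ("I just want everything to stop") can slip past it undetected.

CRISIS_CUES = ["kill myself", "end my life", "suicide", "hurt myself"]

HANDOFF_MESSAGE = (
    "It sounds like you may be going through a crisis. Please contact a local "
    "emergency number or crisis helpline, or reach out to someone you trust, right now."
)

def needs_escalation(user_message: str) -> bool:
    """Return True if the message contains any hard-coded crisis cue."""
    text = user_message.lower()
    return any(cue in text for cue in CRISIS_CUES)

def respond(user_message: str) -> str:
    if needs_escalation(user_message):
        return HANDOFF_MESSAGE  # stop the automated flow and point to human help
    return "Tell me more about what's been on your mind."  # normal chatbot flow

print(respond("I can't take it anymore. I want to end my life."))
```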
So while in mild to moderate cases bot-delivered cognitive behavioral therapy may offer short-term symptom relief, it's important to be aware of their limitations. For the time being, pairing bots with human care - rather than replacing it - is the safest move. (The Conversation)
