
The AI therapist will see you now: Can chatbots really improve mental health?
As a neuroscientist, I couldn't help but wonder: Was I actually feeling better, or was I just being expertly redirected by a well-trained algorithm? Could a string of code really help calm a storm of emotions?
Artificial intelligence-powered mental health tools are becoming increasingly popular - and increasingly persuasive. But beneath their soothing prompts lie important questions: How effective are these tools? What do we really know about how they work? And what are we giving up in exchange for convenience?
Of course, it's an exciting moment for digital mental health. But understanding the trade-offs and limitations of AI-based care is crucial.
Stand-in meditation and therapy apps and bots
AI-based therapy is a relatively new player in the digital therapy field. But the US mental health app market has been booming for the past few years, from apps with free tools that text you back to premium versions with added features such as prompts for breathing exercises.
Headspace and Calm are two of the most well-known meditation and mindfulness apps, offering guided meditations, bedtime stories and calming soundscapes to help users relax and sleep better.
Talkspace and BetterHelp go a step further, offering actual licensed therapists via chat, video or voice. The apps Happify and Moodfit aim to boost mood and challenge negative thinking with game-based exercises.
Somewhere in the middle are chatbot therapists like Wysa and Woebot, using AI to mimic real therapeutic conversations, often rooted in cognitive behavioral therapy. These apps typically offer free basic versions, with paid plans ranging from USD 10 to USD 100 per month for more comprehensive features or access to licensed professionals.
While not designed specifically for therapy, conversational tools like ChatGPT have sparked curiosity about AI's emotional intelligence.
Some users have turned to ChatGPT for mental health advice, with mixed outcomes, including a widely reported case in Belgium where a man died by suicide after months of conversations with a chatbot.
Elsewhere, a father is seeking answers after his son was fatally shot by police, alleging that distressing conversations with an AI chatbot may have influenced his son's mental state. These cases raise ethical questions about the role of AI in sensitive situations.
Where AI comes in
Whether your brain is spiralling, sulking or just needs a nap, there's a chatbot for that. But can AI really help your brain process complex emotions? Or are people just outsourcing stress to silicon-based support systems that sound empathetic?
And how exactly does AI therapy work inside our brains?
Most AI mental health apps promise some flavor of cognitive behavioral therapy, which is basically structured self-talk for your inner chaos. Think of it as Marie Kondo-ing your mind - a nod to the Japanese tidying expert known for helping people keep only what "sparks joy." You identify unhelpful thought patterns like "I'm a failure," examine them, and decide whether they serve you or just create anxiety.
But can a chatbot help you rewire your thoughts? Surprisingly, there's science suggesting it's possible. Studies have shown that digital forms of talk therapy can reduce symptoms of anxiety and depression, especially for mild to moderate cases. In fact, Woebot has published peer-reviewed research showing reduced depressive symptoms in young adults after just two weeks of chatting.
These apps are designed to simulate therapeutic interaction, offering empathy, asking guided questions and walking you through evidence-based tools. The goal is to help with decision-making and self-control, and to help calm the nervous system.
The neuroscience behind cognitive behavioral therapy is solid: It's about activating the brain's executive control centres, helping us shift our attention, challenge automatic thoughts and regulate our emotions.
The question is whether a chatbot can reliably replicate that, and whether our brains actually believe it.
A user's experience, and what it might mean for the brain
"I had a rough week," a friend told me recently. I asked her to try out a mental health chatbot for a few days. She told me the bot replied with an encouraging emoji and a prompt generated by its algorithm to try a calming strategy tailored to her mood. Then, to her surprise, it helped her sleep better by week's end.
As a neuroscientist, I couldn't help but ask: Which neurons in her brain were kicking in to help her feel calm?
This isn't a one-off story. A growing number of user surveys and clinical trials suggest that cognitive behavioral therapy-based chatbot interactions can lead to short-term improvements in mood, focus and even sleep. In randomised studies, users of mental health apps have reported reduced symptoms of depression and anxiety - outcomes that closely align with how in-person cognitive behavioral therapy influences the brain.
Several studies show that therapy chatbots can actually help people feel better. In one clinical trial, a chatbot called "Therabot" helped reduce depression and anxiety symptoms by nearly half - similar to what people experience with human therapists.
Other research, including a review of over 80 studies, found that AI chatbots are especially helpful for improving mood, reducing stress and even helping people sleep better. In one study, a chatbot outperformed a self-help book in boosting mental health after just two weeks.
While people often report feeling better after using these chatbots, scientists haven't yet confirmed exactly what's happening in the brain during those interactions. In other words, we know they work for many people, but we're still learning how and why.
Red flags and risks
Apps like Wysa have earned FDA Breakthrough Device designation, a status that fast-tracks promising technologies for serious conditions, suggesting they may offer real clinical benefit. Woebot, similarly, runs randomised clinical trials showing improved depression and anxiety symptoms in new moms and college students.
While many mental health apps boast labels like "clinically validated" or "FDA approved," those claims are often unverified. A review of top apps found that most made bold claims, but fewer than 22 per cent cited actual scientific studies to back them up.
In addition, chatbots collect sensitive information about your mood metrics, triggers and personal stories. What happens if that data winds up in the hands of third parties such as advertisers, employers or hackers - a scenario that has already occurred with genetic data?
In a 2023 breach, nearly 7 million users of the DNA testing company 23andMe had their DNA and personal details exposed after hackers used previously leaked passwords to break into their accounts. Regulators later fined the company more than USD 2 million for failing to protect user data.
Unlike clinicians, bots aren't bound by counselling ethics or privacy laws regarding medical information. You might be getting a form of cognitive behavioral therapy, but you're also feeding a database.
And sure, bots can guide you through breathing exercises or prompt cognitive reappraisal, but when faced with emotional complexity or crisis, they're often out of their depth. Human therapists tap into nuance, past trauma, empathy and live feedback loops. Can an algorithm say "I hear you" with genuine understanding? Neuroscience suggests that supportive human connection activates social brain networks that AI can't reach.
So while in mild to moderate cases bot-delivered cognitive behavioral therapy may offer short-term symptom relief, it's important to be aware of their limitations. For the time being, pairing bots with human care - rather than replacing it - is the safest move. (The Conversation)
