Chatbot therapy? Available 24/7 but users beware

USA Today · 8 hours ago
On a special episode (first released on July 3, 2025) of The Excerpt podcast: Chatbots are sometimes posing as therapists—but are they helping or causing harm? Psychologist Vaile Wright shares her thoughts.
Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated, and then edited for clarity in its current form. There may be some differences between the audio and the text.
Dana Taylor:
Hello, I'm Dana Taylor, and this is a special episode of The Excerpt. The proliferation of chatbots has people using them in a myriad of ways. Some see them as friends and confidants, as Meta CEO Mark Zuckerberg has suggested. And in certain cases, even as therapists. And actual therapists are expressing concern. Therapy is a licensed profession for many good reasons.
Notably, some chatbots have wandered into dangerous territory, allegedly suggesting that a user kill themselves and even telling them how they could do it. The American Psychological Association has responded by asking the Federal Trade Commission to start investigating chatbots that claim to be mental health professionals. Still, with mental health a rising issue and loneliness an epidemic, could bots, with proper oversight or warnings, help make up for the lack of supply?
Vaile Wright, Senior Director of Healthcare Innovation at the American Psychological Association, is here to unpack what's happening for human therapists as they fight an onslaught of AI therapy impersonators. Vaile, thank you for joining me.
Vaile Wright:
Thanks so much for having me.
Dana Taylor:
Can you set the stage here? Your organization's chief executive cited two court cases when he presented to a Federal Trade Commission panel about the concerns of professional psychologists. What are the real life harms he pointed to?
Vaile Wright:
I think we see a future where you're going to have AI mental health chatbots that are rooted in psychological science, have been rigorously tested, or were co-created with experts for the purpose of addressing mental health needs. But that's not what's currently available on the market. What is available are chatbots that check none of those boxes, but are being used by people to address their mental well-being.
And the challenge is that because these AI chatbots are not being monitored by humans who know what good mental health care is, they go rogue and say very harmful things. And people have a tendency toward automation bias, so they trust the technology over their own gut.
Dana Taylor:
What do these cases show about what could occur when AI chatbots moonlight as licensed therapists?
Vaile Wright:
When these chatbots refer to themselves as psychologists or therapists, they are presenting a level of credibility that doesn't actually exist. There is no expert behind these chatbots offering what we know is good psychological science. Instead, the expertise actually lies on the back end, where these chatbots are developed by coders to be overly validating, to tell the person exactly what they want to hear, and to be appealing to the point of almost being sycophantic.
And that's the opposite of what therapy is. Yes, I want to validate as a therapist, but I'm also there to help point out when you're engaging in unhelpful thinking or behaviors, and these chatbots just don't do that. They, in fact, encourage some of that unhelpful, unhealthy behavior.
Dana Taylor:
Experts have described AI-powered chatbots as simply following patterns, and there's been conversation around chatbots telling users what they want to hear, being overly complimentary, as you've said. At worst, the response can be downright dangerous, like encouraging illicit drug use or, as I mentioned in the intro, encouraging someone to take their own life and then suggesting how they do that. Given all that, what are some of the regulations that professionals in your community would like to see? Is there a way for chatbots to responsibly help with therapy?
Vaile Wright:
I think that there is a way for chatbots to responsibly help with therapy in certain cases. At a very minimum, these chatbots should not be allowed to refer to themselves as licensed professionals of any kind, not just licensed psychologists. We wouldn't want them to present themselves as a licensed attorney or a licensed CPA and offer advice. So I think that's the minimum. I think we need more disclaimers that these are not humans.
I think just saying it once to a consumer is not sufficient. I also think we need some surveillance of the types of chats that are happening, particularly requiring these companies to report when they notice harmful discussions around suicidal ideation, suicidal behavior, or violence of that type. So I think there are a variety of different things that we could see happening, but we probably need some regulatory body to insist that these companies do it.
Dana Taylor:
Are there any other protections proposed by the AI companies themselves that you see as having merit?
Vaile Wright:
I think because of this increased attention on how these chatbots are operating, you are seeing some changes, such as age verification or having resources like 911 or 988 pop up when they detect something that may be unhelpful. But I think they need to go even further.
Dana Taylor:
For young people in particular, it can be difficult to recognize that they're dealing with a chatbot to begin with. Will it continue to get more difficult as the tech evolves, and does that mean it could be more dangerous for young people in the years to come?
Vaile Wright:
It's clear that the technology is getting more and more sophisticated, and I think it is really challenging for everybody to be able to tell that these are not humans. They are built to sound and respond like humans. And with younger people, who may be more emotionally vulnerable and are not as far along developmentally in terms of their cognition and, again, their sense of being able to listen to their own gut, I do get worried that these digital natives, who have been interacting seamlessly with technology since the beginning, are just not going to be able to discern when the technology is going rogue or being truly harmful.
Dana Taylor:
Vaile, depending on where a patient lives or for other reasons, there can be a long wait list to see a therapist. Are there some benefits that a bot can provide due to the fact that it's not human and is available virtually 24/7?
Vaile Wright:
Again, I think bots developed for these purposes can be immensely helpful. And in fact, we do know anecdotally that some of the bots that currently exist have had benefits. So, for example, if it's 2:00 in the morning and I'm experiencing distress, even if I had a therapist, I couldn't call them at 2:00 in the morning. But if I had a chatbot that could provide me with some support, maybe encourage some strong, healthy coping skills, I do see some benefit in that.
We've also heard from the neurodivergent community that these chatbots provide them an opportunity to practice their social skills. So, knowing that these can have some benefit, how do we capitalize on that and ensure that whatever emerging technologies we build and offer are safe and effective? Because we can't just keep doing therapy with one model.
We can't expect everybody to be able to see someone face-to-face on a weekly basis because the supply is simply insufficient. So we have to think outside the box.
Dana Taylor:
Are you aware of human therapists who are joining forces with chatbots today to meet this overwhelming need for therapy?
Vaile Wright:
Yeah. Subject matter experts, whether psychologists or other therapists, play a critical role in ensuring that these technologies are safe and effective. There was a new study out of Dartmouth recently that looked at a mental health therapy chatbot called Therabot, which showed some really strong outcomes in improving symptoms of depression, anxiety, and eating disorders. And that's an example of how you bring researchers and technologists together to develop products that are safe, effective, responsible, and ethical.
Dana Taylor:
Some high school counselors are providing chatbots to answer students' questions. Some see it as filling a gap. But does this deprive young people of social capital, the ties of human interaction that can often make anyone feel more connected to others and their community, and therefore less alone?
Vaile Wright:
It's clear that young people are feeling disconnected and lonely. We did a survey recently in which 71% of 18- to 34-year-olds said that they don't feel like they can talk about their stress with others because they don't want to burden people. So how do we take that understanding and recognize why people are using these chatbots to fill these gaps, while also helping people really appreciate the value of human connection?
I don't want the conversation to always be AI versus humans. It's really about what does AI do really well, what do humans do really well, and how can we capitalize on both of those things together to help people reduce their suffering faster?
Dana Taylor:
What's the biggest takeaway that you'd like people to walk away with when it comes to chatbots and therapy?
Vaile Wright:
AI isn't going anywhere. People for centuries have always tried to seek out self-help ways to address their emotional well-being. That used to be Dr. Google; now it's chatbots. So we can't stop people from using them. And as we talked about, there could be some benefits. But how do we help consumers understand that there may be better options out there, even better chatbot options, and help them be more digitally literate so that they understand when a particular chatbot is not just unhelpful but actually harmful?
Dana Taylor:
Vaile, thank you for being on The Excerpt.
Vaile Wright:
Thanks so much for having me.
Dana Taylor:
Thanks to our senior producers, Shannon Rae Green and Kaely Monahan, for their production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to podcasts@usatoday.com. Thanks for listening. I'm Dana Taylor. Taylor Wilson will be back tomorrow morning with another episode of The Excerpt.

Related Articles

Everyone in tech has an opinion about Soham Parekh

Yahoo · 5 hours ago

You got into Y Combinator, raised $20 million from a16z, and then exited to Meta? That's cool, I guess. But did Soham Parekh apply to work at your startup? There is now a new badge of honor for startup founders: your proximity to one previously unknown Indian software engineer named Soham Parekh.

The Anna Delvey of Silicon Valley was outed on Wednesday when former Mixpanel CEO Suhail Doshi posted on X to warn fellow founders about Parekh. 'PSA: there's a guy named Soham Parekh (in India) who works at 3-4 startups at the same time. He's been preying on YC companies and more. Beware,' Doshi wrote. 'I fired this guy in his first week and told him to stop lying/scamming people. He hasn't stopped a year later.'

Now, the post has over 20 million views, with founders and investors from across the tech industry weighing in. And before Andy Jassy asks — could this have all been avoided if more companies returned to the office? No, some people are just bad managers. According to Doshi, at least three founders have reached out to say that they had fired or were currently employing Parekh. In the age of subreddit communities like r/overemployed, where members talk about how to get away with working multiple remote jobs at once, this revelation isn't all that surprising.

What's more interesting is how widely the responses to his actions vary (to be fair, no one ever said that the tech industry was known for its moral fiber). To some in the tech community, Parekh has the makings of a folk hero, deceiving well-funded startups and sticking it to the man. To others, he's an immoral liar who screwed over startups and took jobs away from people who actually would have given their all. Many are impressed by how he managed to get through so many notoriously competitive interview processes, while others think he should parlay his 15 minutes of fame into founding his own startup.

'If Soham immediately comes clean and says he was working to train an AI agent for knowledge work, he raises at $100M pre by the weekend,' Box CEO Aaron Levie wrote on X. Chris Bakke — the founder of Laskie, a job-matching platform acquired by X — thinks that Soham should embrace his reputation. 'Soham Parekh needs to start an interview prep company. He's clearly one of the greatest interviewers of all time,' Bakke wrote. 'He should publicly acknowledge that he did something bad and course correct to the thing he's top 1% at.' Meanwhile, Y Combinator CEO Garry Tan took the opportunity to pat himself on the back. 'Without the YC community this guy would still be operating and would have maybe never been caught,' Tan wrote. 'The startup guild of YC is a necessary invention to help founders be more successful than they would be alone.'

Why did he do it? Parekh says that this wasn't part of some grand plan — he claims he had no plan at all, and he was trying to make a lot of money very quickly to get himself out of a bad financial situation. 'I really did not think this through,' Parekh said in a live interview with TBPN. 'It was an action that was done more out of desperation.'

Parekh did not address Doshi's allegation that the bulk of his resume was fake. 'What's also funny is, you know, some of the memes,' he said. 'I'm very new to Twitter. I joined Twitter yesterday, so this was a lesson for me in social media in general.' (Twitter has long been known as X, of course.) You don't have to hand it to him, but he's a pretty good poster for someone who's been on the platform for a day.
One of his few posts was a response to LinkedIn co-founder Reid Hoffman, who asked what people think Parekh's LinkedIn header would be. 'I don't have a LinkedIn,' Parekh replied. For what it's worth, his X header is on the money, even if he won't bother with LinkedIn. It's the meme of Flynn Rider from the Disney movie 'Tangled' — a smug-looking guy about to state a controversial opinion, surrounded by knives on all sides.

Meta's Momentum Hits a CapEx Hurdle

Yahoo · 6 hours ago

Needham has lifted Meta (NASDAQ:META) Platforms to a hold from underperform, balancing strong top-line and margin forecasts against rising costs and heavy capital spending. Analyst Laura Martin shifted Meta's rating after noting that improvement in labor productivity is slowing; headcount and cost per full-time employee are climbing, which dampens share gains.

Still, Needham expects Meta to outpace its own targets, forecasting 14% revenue growth and 6% EPS growth in fiscal 2025. The firm stopped short of a buy call because Meta's CapEx is set to surge: Needham projects $68 billion in FY25, up 84% year over year, the steepest increase among hyperscalers. Meanwhile, rivals like Google (NASDAQ:GOOG), Amazon (NASDAQ:AMZN) and Microsoft (NASDAQ:MSFT) own scalable cloud assets, giving them a structural cost edge.

With Meta stock up about 22% year to date, well ahead of the S&P 500's roughly 6% gain, a hold rating signals caution. Heavy spending on AI infrastructure raises questions about return on invested capital, and the company still faces heightened regulatory scrutiny over privacy, antitrust and content moderation. Needham's neutral stance underscores a trade-off: Meta's growth outlook remains solid, but swelling budgets and cost disadvantages argue for patience over buying.

This article first appeared on GuruFocus.

Meta Adds More Business Messaging Features

Yahoo · 7 hours ago

This story was originally published on Social Media Today. To receive daily news and insights, subscribe to our free daily Social Media Today newsletter.

Meta's looking to provide more business messaging options as messaging usage continues to surge. At its Conversations 2025 conference, held in Miami this week, Meta announced a range of new business messaging features, including business AIs in WhatsApp, and calling and voice options for larger brands. It also announced some smaller tweaks, which may also be relevant to your business messaging approach.

First off, Meta announced that brands can now add a WhatsApp button to their Google Business Profile, enabling prospective customers to get in contact via WhatsApp directly from Google Search and Google Maps. So now you'll be able to better align with messaging use by showcasing your WhatsApp profile on your Google Business display.

It's also giving WhatsApp users the ability to open links from businesses directly within WhatsApp. Users will soon be able to open direct links in a WhatsApp browser, eliminating the need to switch apps (which will also provide more data on response).

On another front, Meta's also updating its pricing model for the WhatsApp Business Platform, with new 'volume tiers,' so that business users have more pricing options to consider. It's also adding more outcome options to Click to Message ads:

'To help drive the outcomes that matter to businesses, we are making purchase and lead optimization available to our Ads that click to messages, and introducing value optimization for ads that click to Messenger to maximize ROAS. And with automatic-destination messaging ads, we can help your businesses meet your customers where they are by delivering ads that open to their preferred messaging app – WhatsApp, Instagram Direct, or Messenger.'

It's also expanded the functionality of its WhatsApp Business tools, including the ability to use features available on the WhatsApp Business Platform and the WhatsApp Business App at the same time, while it's also testing native order tracking updates via WhatsApp from shopping platforms like Shopify, VTEX, and WooCommerce. Businesses can also now send one-time passwords and verification codes to customers on WhatsApp.

Messenger's also getting some handy updates:

'For businesses using Messenger, we're testing new calling features including the ability to see who is calling and whether it is from an ad, along with AI call summaries and transcription, so you have a record of what was discussed live.'

Meta's also rolling out some updates to its Cloud API and Marketing Messages Lite API options to offer more business messaging features.

These are smaller updates, but they could be relevant to your approach. And with people now sending a lot more messages than they post or share in their social media feeds, it could be worth digging into the various business messaging options now available.
