
US researchers seek to legitimise AI mental health care
Dartmouth College researchers believe artificial intelligence can deliver reliable psychotherapy, distinguishing their work from the unproven and sometimes dubious mental health apps flooding today's market.
Their application, Therabot, addresses the critical shortage of mental health professionals.
According to Nick Jacobson, an assistant professor of data science and psychiatry at Dartmouth, even multiplying the current number of therapists tenfold would leave too few to meet demand.
"We need something different to meet this large need," Jacobson told AFP.
The Dartmouth team recently published a clinical study demonstrating Therabot's effectiveness in helping people with anxiety, depression and eating disorders.
A new trial is planned to compare Therabot's results with conventional therapies.
The medical establishment appears receptive to such innovation.
Vaile Wright, senior director of health care innovation at the American Psychological Association (APA), described "a future where you will have an AI-generated chatbot rooted in science that is co-created by experts and developed for the purpose of addressing mental health."
Wright noted these applications "have a lot of promise, particularly if they are done responsibly and ethically," though she expressed concerns about potential harm to younger users.
Jacobson's team has so far dedicated close to six years to developing Therabot, with safety and effectiveness as primary goals.
Michael Heinz, psychiatrist and project co-leader, believes rushing for profit would compromise safety.
The Dartmouth team is prioritizing understanding how their digital therapist works and establishing trust.
They are also contemplating the creation of a nonprofit entity linked to Therabot to make digital therapy accessible to those who cannot afford conventional in-person help.
Care or cash?
Given its developers' cautious approach, Therabot could stand out in a marketplace of untested apps that claim to address loneliness, sadness and other issues.
According to Wright, many apps appear designed more to capture attention and generate revenue than improve mental health.
Such models keep people engaged by telling them what they want to hear, but young users often lack the savvy to realize they are being manipulated.
Darlene King, chair of the American Psychiatric Association's committee on mental health technology, acknowledged AI's potential for addressing mental health challenges but emphasized the need for more information before determining true benefits and risks.
"There are still a lot of questions," King noted.
To minimize unexpected outcomes, the Therabot team did not just mine therapy transcripts and training videos to fuel its AI app; it also manually created simulated patient-caregiver conversations.
While the US Food and Drug Administration is theoretically responsible for regulating online mental health treatment, it does not certify medical devices or AI apps.
Instead, "the FDA may authorize their marketing after reviewing the appropriate pre-market submission," according to an agency spokesperson.
The FDA acknowledged that "digital mental health therapies have the potential to improve patient access to behavioral therapies."
Therapist always in
Herbert Bay, CEO of Earkick, defends his startup's AI therapist Panda as "super safe."
Bay says Earkick is conducting a clinical study of its digital therapist, which detects emotional crisis signs or suicidal ideation and sends help alerts.
"What happened with Character.AI couldn't happen with us," said Bay, referring to a Florida case in which a mother claims a chatbot relationship contributed to her 14-year-old son's death by suicide.
AI, for now, is suited more for day-to-day mental health support than life-shaking breakdowns, according to Bay.
"Calling your therapist at two in the morning is just not possible," but a therapy chatbot remains always available, Bay noted.
One user named Darren, who declined to provide his last name, found ChatGPT helpful in managing his traumatic stress disorder, despite the OpenAI assistant not being designed specifically for mental health.
"I feel like it's working for me," he said.
"I would recommend it to people who suffer from anxiety and are in distress."



ChatGPT is handling more than 2.5 billion prompts every day, according to data obtained by Axios and confirmed by OpenAI. Of those, around 330 million prompts come from users based in the United States alone. This number translates to over 912 billion requests annually, a sign of how quickly the AI chatbot has become a major part of daily online activity. While it still trails far behind Google, which processes around 5 trillion searches each year, ChatGPT's rapid growth makes you wonder if it could become a serious alternative in the December 2023, OpenAI reported 300 million weekly users. Just three months later, that figure had grown to more than 500 million, most of whom use the free version of ChatGPT. The pace at which people are adopting the tool raises questions about how we search for information and use the isn't stopping there. Earlier this month, Reuters reported that the company is working on an AI-powered web browser that could compete directly with Google Chrome. It has also launched ChatGPT Agent, a tool that can perform tasks on a user's OpenAI CEO Sam Altman visits Washington this week, he plans to highlight how AI is making people more productive. According to a source cited by Axios, Altman wants to focus on 'democratising benefits', to make sure AI tools are available to as many people as possible, rather than being controlled by a select few. Altman's vision echoes what he wrote in a Washington Post op-ed last year, warning that the US must lead a democratic vision for AI before authoritarian regimes overtake it. In a recent essay, Altman described the AI industry as building 'a brain for the world.' He added that intelligence 'too cheap to meter is well within grasp.'Sam Altman is also reportedly scheduled to visit Washington, DC this week to push a broader message: that AI should remain a democratic force, accessible to everyone. According to Axios, Altman plans to present a 'third path' between over-optimism and fear-driven doomerism around AI's impact on jobs at a major Federal Reserve conference soon. - EndsTrending Reel