
US researchers seek to legitimise AI mental health care
Researchers at Dartmouth College believe artificial intelligence can deliver reliable psychotherapy, distinguishing their work from the unproven and sometimes dubious mental health apps flooding today's market. Their application, Therabot, addresses the critical shortage of mental health professionals.

According to Nick Jacobson, an assistant professor of data science and psychiatry at Dartmouth, even multiplying the current number of therapists tenfold would leave too few to meet demand. "We need something different to meet this large need," Jacobson told AFP.

The Dartmouth team recently published a clinical study demonstrating Therabot's effectiveness in helping people with anxiety, depression and eating disorders. A new trial is planned to compare Therabot's results with conventional therapies.

The medical establishment appears receptive to such innovation. Vaile Wright, senior director of health care innovation at the American Psychological Association (APA), described "a future where you will have an AI-generated chatbot rooted in science that is co-created by experts and developed for the purpose of addressing mental health." Wright noted these applications "have a lot of promise, particularly if they are done responsibly and ethically," though she expressed concerns about potential harm to younger users.

Jacobson's team has so far dedicated close to six years to developing Therabot, with safety and effectiveness as primary goals. Michael Heinz, psychiatrist and project co-leader, believes rushing for profit would compromise safety. The Dartmouth team is prioritizing understanding how their digital therapist works and establishing trust. They are also contemplating the creation of a nonprofit entity linked to Therabot to make digital therapy accessible to those who cannot afford conventional in-person help.

With the cautious approach of its developers, Therabot could stand out in a marketplace of untested apps that claim to address loneliness, sadness and other issues. According to Wright, many apps appear designed more to capture attention and generate revenue than to improve mental health. Such models keep people engaged by telling them what they want to hear, but young users often lack the savvy to realize they are being manipulated.

Darlene King, chair of the American Psychiatric Association's committee on mental health technology, acknowledged AI's potential for addressing mental health challenges but emphasized the need for more information before determining true benefits and risks. "There are still a lot of questions," King noted.

To minimize unexpected outcomes, the Therabot team went beyond mining therapy transcripts and training videos to fuel its AI app, manually creating simulated patient-caregiver conversations.

While the US Food and Drug Administration is theoretically responsible for regulating online mental health treatment, it does not certify medical devices or AI apps. Instead, "the FDA may authorize their marketing after reviewing the appropriate pre-market submission," according to an agency spokesperson. The FDA acknowledged that "digital mental health therapies have the potential to improve patient access to behavioral therapies."

Herbert Bay, CEO of Earkick, defends his startup's AI therapist Panda as "super safe." Bay says Earkick is conducting a clinical study of its digital therapist, which detects signs of emotional crisis or suicidal ideation and sends alerts for help. "What happened with Character.AI couldn't happen with us," said Bay, referring to a Florida case in which a mother claims a chatbot relationship contributed to her 14-year-old son's death by suicide.

AI, for now, is better suited to day-to-day mental health support than to life-shaking breakdowns, according to Bay. "Calling your therapist at two in the morning is just not possible," but a therapy chatbot remains always available, Bay noted.

One user named Darren, who declined to provide his last name, found ChatGPT helpful in managing his post-traumatic stress disorder, despite the OpenAI assistant not being designed specifically for mental health. "I feel like it's working for me," he said. "I would recommend it to people who suffer from anxiety and are in distress."

Related Articles


Economic Times
2 hours ago
Shark Tank India's Anupam Mittal calls India's gig economy a ‘blessing', warns against 'blindly parroting' the West amid AI hype
Anupam Mittal points out India's gig-economy reality amid the AI hype in the West.

In a single photo of a female delivery valet in a jacket riding a scooter, Anupam Mittal may have captured the vast divide between India's tech aspirations and its everyday realities. The founder took to LinkedIn to raise a question both biting and urgent: 'Should she learn Python too?'

The post, laced with sarcasm and social insight, isn't just a critique of AI hype; it's a reality check on India's workforce, its limitations, and the romanticised notion of deep-tech as a universal solution.

Mittal, who has long been vocal about the need for sustainable employment in India, pointed out that while AI automation is transforming workplaces at top tech giants like Microsoft, Meta, and Google, with leaders predicting 40-50% of processes becoming AI-driven in just a few years, India's situation is vastly different. According to him, these Western economies have low populations, high levels of formal employment, and robust reskilling systems. India, in contrast, has a largely self-employed population and lacks widespread skilling infrastructure.

Drawing from his own experience working in the U.S., Mittal said real skilling meant being trained in real time across an entire organisation whenever new tech was introduced. India, he argues, isn't even close to that level. That, he said, is why the gig economy, often criticised for being precarious, has been a 'blessing' in the Indian context, employing millions who would otherwise remain jobless in a country that houses nearly 20% of the world's population.

Mittal warned against blindly parroting the West's AI-first narrative, noting that doing so risks further marginalising India's massive low-skilled workforce. He acknowledged that India does have incredibly talented individuals who will build the tech giants of tomorrow. However, the country also has a billion-plus people who still need jobs today.


Time of India
4 hours ago
ChatGPT making us dumb & dumber, but we can still come out wiser
Claude Shannon, one of the fathers of AI, once wrote rather disparagingly: 'I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines.' As we enter the age of AI, arguably the most powerful technology of our times, many of us fear that this prophecy is coming true.

Powerful AI models like ChatGPT can create complex essays, poetry and pictures; Google's Veo stitches together cinema-quality videos; Deep Research agents produce research reports at the drop of a prompt. Our innate human abilities of thinking, creating and reasoning now seem to be duplicated, sometimes surpassed, by AI.

This seemed to be confirmed by a recent, and quite disturbing, MIT Media Lab study, 'Your Brain on ChatGPT'. It suggested that while AI tools like ChatGPT help us write faster, they may be making our minds slower. Through a meticulously executed four-month experiment with 54 participants, researchers found that those who used ChatGPT for essay writing exhibited up to 55% lower brain activity, as measured by EEG signals, compared to those who wrote without assistance. If this was not troubling enough, in a later session where ChatGPT users were asked to write unaided, their brains remained less engaged than those of people without AI ('brain-only' participants, as the study quaintly labelled them). Memory also suffered: only 20% could recall what they had written, and 16% even denied authorship of their own text! The message seemed clear: outsourcing thinking to machines may be efficient, but it risks undermining our capacity for deep thought, retention and ownership of ideas.

Technology has always changed us, and we have seen this story many times before. There was a time when you remembered everyone's phone numbers; now you can barely recall your family's, if that. You remembered roads, lanes and routes; if you did not, you consulted a paper map or asked someone. Today, Google and other map apps do that work for us.
Facebook reminds us of people's birthdays; email replies suggest themselves, sparing us even that little effort of thinking. When autonomous cars arrive, will we even remember how to drive, or just loll around in our seats as they take us to our destination?

Jonathan Haidt, in 'The Anxious Generation', points out how smartphones radically reshaped childhood. Unstructured outdoor play gave way to scrolling, and social bonds turned into notifications. Teen anxiety, loneliness and attention deficits all surged. From calculators diminishing our mental arithmetic to GPS weakening our spatial memory, every tool we invent alters us, subtly or drastically.

'Do we shape our tools, or do our tools shape us?' is a question commonly misattributed to Marshall McLuhan, but it is hauntingly relevant in the age of AI. If we let machines do the thinking, what happens to our human capacity to think, reflect, reason and learn?

This is especially troubling for children, and more so in India. For one, India has the highest usage of ChatGPT globally. Much of it is by children and young adults, who are turning into passive consumers of AI-generated knowledge. Imagine a 16-year-old using ChatGPT to write a history essay. The output might be near-perfect, but what has she actually learned? The MIT study suggests: very little. Without effortful recall or critical thinking, she might not retain concepts, nor build the muscle of articulation. With exams still based on memory and original expression, and careers requiring problem-solving, this is a silent but real risk.

The real questions, however, are not whether the study is correct or exaggerating, or whether AI is making us dumber, but what we can do about it. We definitely need some guardrails and precautions, and we need to start building them now.
I believe that we should teach ourselves and our children to:

- Ask the right questions: As answers become commodities, asking the right questions will be the differentiator. We need to relook at our education system and pedagogy and bring back this unique human skill of curiosity. Intelligence is not just about answers; it is about the courage to think, to doubt, and to create.
- Invert classwork and homework: Reserve classroom time for 'brain-only' activities like journaling, debates, and mental maths. Homework can be about using AI tools to learn what will be discussed in class the next day.
- Follow AI usage codes: Just as schools restrict smartphone use, they should set clear boundaries for when and how AI can be used.
- Build teacher-AI synergy: Train educators to use AI as a co-teacher, not a crutch. Think of AI as Augmented Intelligence, not an alternative one.
- Above all, make everyone AI literate: Much like reading, writing, and arithmetic were foundational in the digital age, knowing how to use AI wisely is the new essential skill of our time.

AI literacy is more than just knowing prompts. It means understanding when to use AI, and when not to; how to verify AI output for accuracy, bias, and logic; how to collaborate with AI without losing your own voice; and how to maintain cognitive and ethical agency in the age of intelligent machines. Just as we once taught 'reading, writing, adding, multiplying,' we must now teach 'thinking, prompting, questioning, verifying.'

History shows that humans adapt. The printing press did not destroy memory; calculators did not end arithmetic; smartphones did not abolish communication. We evolved with them, sometimes clumsily, but always creatively. Today, with AI, the challenge is deeper because it imitates human cognition. In fact, as AI challenges us with higher levels of creativity and cognition, human intelligence and connection will become even more prized.
Take chess: a computer defeated Garry Kasparov back in 1997, and since then a computer or AI can defeat any chess champion a hundred times out of a hundred. Yet human 'brains-only' chess has become much more popular, as millions follow D Gukesh's encounters with Magnus Carlsen.

So, if we cultivate AI literacy and have the right guardrails in place; if we teach ourselves and our children to think with AI but not through it, we can come out wiser, not weaker.

Disclaimer: Views expressed above are the author's own.


Time of India
7 hours ago
How Microsoft 'killed' OpenAI's $3 billion acquisition of Windsurf, making Google the 'big winner'
OpenAI's $3 billion agreement to buy the AI coding startup Windsurf has fallen apart. OpenAI had reportedly been close to finalizing the deal to acquire Windsurf, formally known as Exafunction Inc., with a signed letter of intent and investor payout agreements (waterfall agreements) already in place. The acquisition was even nearing an announcement in early May, according to sources familiar with the discussions. However, an OpenAI spokesperson has confirmed that the exclusivity period for its offer has lapsed, leaving Windsurf free to explore other opportunities.

In a swift turn of events, Alphabet Inc's Google has stepped in, striking a deal worth approximately $2.4 billion to acquire top talent and licensing rights from Windsurf. This move comes hot on the heels of the collapsed OpenAI acquisition. Google announced on Friday, July 11, that it is bringing Windsurf Chief Executive Officer Varun Mohan and co-founder Douglas Chen, along with a small team of staffers, into its DeepMind artificial intelligence unit. While the company declined to disclose the specific financial terms, it clarified that the agreement does not involve taking an equity stake in Windsurf itself. This development marks a significant strategic gain for Google in the competitive AI landscape, securing valuable expertise and technology that had been hotly contested by its rivals.

Microsoft tensions behind OpenAI-Windsurf deal collapse

A significant factor in the unraveling of the OpenAI-Windsurf deal appears to be friction with Microsoft Corp., a major investor and key partner for OpenAI.
According to a Bloomberg report, sources close to the matter indicate that Windsurf was hesitant to grant Microsoft access to its intellectual property. This condition became a sticking point that OpenAI was reportedly unable to resolve with Microsoft, whose existing agreement with OpenAI grants the software giant access to the AI startup's technology. The issue was reportedly one of several points of contention in ongoing discussions between Microsoft and OpenAI over OpenAI's restructuring into a commercial entity.

What Windsurf does

Founded in 2021, Windsurf is a prominent player in the burgeoning field of AI-driven coding assistants. These systems are designed to automate and streamline coding tasks, including generating code from natural language prompts. The startup has raised over $200 million in venture capital funding from investors like Greenoaks Capital Partners and AIX Ventures, according to PitchBook data.