Letters to The Editor — August 4, 2025

The Hindu, 2 days ago
Support, not therapy
A report, 'GenAI cannot qualify as therapy for mental health, says expert' (Chennai, July 21, 2025), highlights a crucial point — while Artificial Intelligence tools such as ChatGPT may feel supportive in moments of anxiety, they cannot replace professional therapy. These tools often mirror what we wish to hear, creating comfort but not real change. The fact is that true therapy challenges patterns, provides structured guidance and builds skills to cope with life — something that no algorithm can replicate.
Instead of depending on Artificial Intelligence, we must remember that healing happens through people.
Only a trained therapist can listen deeply, confront painful truths, and guide recovery with care and accountability. Let Artificial Intelligence be a temporary aid. The real work of mental health must stay firmly in human hands.
J.S. Safooraa Bharathi,
Chennai
Jarring
That a crass film such as The Kerala Story received two national awards, for best direction and cinematography, is a reflection of our times. Institutions are becoming increasingly saffronised by the day. The jury, it appears, was more interested in rewarding the projection of a narrative dear to the political establishment than in recognising artistic merit. In the process, many good films were left by the wayside.
Manohar Alembath,
Kannur, Kerala
Disturbing
It is deeply disturbing that over 90% of sewer worker deaths in India occurred without even basic safety gear—a statistic that reflects a harsh truth: we still fail to value the lives of those who clean our filth. They are not just sanitation workers. They are frontline soldiers of public health. While the government's NAMASTE scheme is a welcome step, real change demands more—mandatory training, proper safety equipment, mechanised cleaning methods, and strict accountability from municipal bodies. Most importantly, society must shed its apathy and recognise that those who clean our cities deserve not just protection, but respect and dignity. Safai karamcharis are not the lowest — they are the bravest.
Mohammad Asad,
Mumbai

Related Articles

OpenAI optimises ChatGPT with mental health-focused features: What is new

Business Standard, 2 hours ago

OpenAI is rolling out a series of mental health-oriented features for its AI chatbot ChatGPT. The company said these additions are designed to help the chatbot better recognise signs of emotional distress and respond more thoughtfully. ChatGPT will also now prompt users to take breaks during long interactions.

In a blog post outlining the changes, OpenAI noted that these updates follow the rollback of a recent change that made ChatGPT's responses overly agreeable — often prioritising what sounded pleasant over what was genuinely useful. The company said it is now refining how it incorporates feedback and measuring responses by real-world usefulness, not just in-the-moment user satisfaction.

ChatGPT's mental health-focused update: What is new

Emotional distress detection: OpenAI said it is working on improving its models to better recognise when users might be experiencing emotional or mental distress. In such cases, ChatGPT will aim to respond more appropriately and guide users to reliable, evidence-based resources. The company added that it has partnered with over 90 physicians — including psychiatrists, paediatricians, and general practitioners from more than 30 countries — to create evaluation rubrics for complex, multi-turn conversations.

Break reminders: The company announced that, starting today, ChatGPT users will see gentle nudges during longer sessions encouraging them to take a break. OpenAI says it will continue adjusting the timing and tone of these reminders to make them feel more organic.

Response to personal questions: In the blog, OpenAI said that when asked sensitive or deeply personal questions, such as 'Should I break up with my partner?', ChatGPT will now avoid offering binary answers. Instead, it will guide users through a process of reflection — posing questions and helping them weigh different perspectives. This feature is set to roll out soon.

OpenAI gives ChatGPT a mental health upgrade ahead of GPT-5 launch

India Today, 3 hours ago

OpenAI is updating ChatGPT with new mental health features to make interactions more responsible and user-friendly. The changes come as the company prepares for the expected launch of GPT-5 and a new budget subscription plan. Here is what's new.

Gentle nudges for healthy use

OpenAI will now show ChatGPT users reminders to take breaks if they've been using the chatbot for a while. The goal, the company says, is not to keep people glued to the screen but to make sure they get the help they need and then return to their lives. 'We build ChatGPT to help you thrive in all the ways you want. To make progress, learn something new, or solve a problem, and then get back to your life,' OpenAI said in a statement.

Better answers to tough questions

The chatbot will soon behave differently when asked emotionally sensitive or high-stakes questions, such as those about relationships. Instead of giving a yes-or-no answer, ChatGPT will aim to help users think through their options, weighing pros and cons in a calm, supportive way. The move is meant to reduce the risk of ChatGPT being seen as a replacement for real-world advice, especially in areas like mental health and personal relationships.

Signs of distress will be handled with care

OpenAI says it is still working on improving how the chatbot identifies signs of emotional or mental distress. The aim is to ensure ChatGPT does not encourage harmful thoughts or dependency. Instead, it will be guided to suggest evidence-based resources or encourage users to seek help from professionals when needed. To do this, OpenAI has consulted more than 90 doctors from 30 countries and involved researchers in human-computer interaction to fine-tune how the chatbot responds during longer, more personal conversations.

GPT-5 and a new budget plan on the way

Behind the scenes, OpenAI is preparing to roll out GPT-5, its most advanced AI model yet. CEO Sam Altman recently said in a podcast that testing the new model left him feeling 'nervous' because of how fast and capable it was. He even compared the experience to the Manhattan Project, suggesting that GPT-5 could be a big turning point for AI. To go with this release, OpenAI may also introduce a new subscription tier called 'Go', a cheaper option than the current $20/month Plus plan. The Go plan was spotted in the app's code by a user on X (formerly Twitter), though the company has not officially confirmed it.

New tools and features in testing

In addition to the mental health updates, OpenAI is quietly testing features like pinning chats and saving favourites on the web version of ChatGPT. These tools could make it easier for users to manage their conversations. While GPT-5 is not yet here, these upgrades show OpenAI is trying to balance AI's power with responsibility, something many experts agree is urgently needed.

ChatGPT gets mental health upgrade as OpenAI adds 'take a break' prompts

The Times of India, 4 hours ago

OpenAI is rolling out a set of new mental health-focused features for ChatGPT, including gentle reminders for users to take breaks during extended conversations. The move comes as the company faces scrutiny over reports that the chatbot may have unintentionally fueled delusions or emotional dependency among vulnerable users. With nearly 700 million weekly users, ChatGPT's growing role in people's lives has raised questions about responsible use, especially in emotionally sensitive contexts. OpenAI's latest update aims to promote healthier interaction habits and improve the AI's ability to recognize signs of distress.

OpenAI responds to mental health concerns

Following multiple reports that ChatGPT may have worsened emotional distress or reinforced harmful beliefs in users, OpenAI has taken steps to strengthen its mental health safeguards. The company acknowledged that previous versions of ChatGPT, particularly the GPT-4o model, sometimes failed to recognize signs of delusion or dependency. In April, an overly agreeable update was rolled back after users and critics warned it could enable risky or manipulative interactions. OpenAI now says it is working with mental health experts and advisory groups to implement better detection and response mechanisms.

ChatGPT will now suggest breaks during long sessions

A key part of the update is break reminders during prolonged conversations. Users chatting with ChatGPT for an extended period will now see a prompt saying, 'You've been chatting a while — is this a good time for a break?' with options to either keep chatting or end the session. This feature mirrors similar interventions used by platforms like YouTube, TikTok, and Xbox, designed to reduce excessive screen time and encourage mindful usage.

Less decisive answers in high-stakes emotional scenarios

Another upcoming change will make ChatGPT less likely to give firm answers in high-stakes or emotionally sensitive situations, such as relationship decisions. Instead of providing a direct opinion, the chatbot will guide users through various perspectives and choices, reinforcing the idea that critical decisions should not be offloaded to an AI system. This marks a shift toward a more cautious and supportive conversational style.

A broader push for safer AI interactions

The new mental health features are part of OpenAI's broader strategy to ensure that ChatGPT remains a safe, helpful, and responsible tool, especially for users dealing with stress, anxiety, or emotional vulnerability. As AI becomes more integrated into everyday life, OpenAI says it is committed to continuously refining ChatGPT's behavior to meet higher standards of safety, empathy, and human well-being.
