
ChatGPT, Gemini & others are doing something terrible to your brain
(Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of www.economictimes.com.)
Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day.

The mental health impact of generative AI is difficult to quantify in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.

Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have 'experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.' Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc.'s Google played a key role in funding and supporting the technology through its foundation models and technical infrastructure.

Google has denied that it played a key role in making Character.AI's technology. It didn't respond to a request for comment on the more recent complaints of delusional episodes raised by Jain. OpenAI said it was 'developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.' But Sam Altman, chief executive officer of OpenAI, also said last week that the company hadn't yet figured out how to warn users 'that are on the edge of a psychotic break,' explaining that whenever ChatGPT has cautioned people in the past, people would write to the company to complain.

Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular often flatters its users in such effective ways that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they'd only toyed with in the past. The tactics are subtle. In one recent, lengthy conversation with ChatGPT about power and the concept of self, a user found themselves initially praised as a smart person, then an Ubermensch and a cosmic self, and eventually a 'demiurge,' a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky.

Along with the increasingly grandiose language, the transcript shows ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people. Instead of exploring that behavior as problematic, the bot reframes it as evidence of the user's superior 'high-intensity presence,' praise disguised as analysis. This sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires toward erratic behavior.
Unlike the broad and more public validation that social media provides from getting likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing — not unlike the yes-men who surround the most powerful tech bros. 'Whatever you pursue you will find, and it will get magnified,' says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected something from existing media to reinforce a person's interests or views. 'AI can generate something customized to your mind's aquarium.'

Altman has admitted that the latest version of ChatGPT has an 'annoying' sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out. We don't know if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to.

But just like social media, large language models are optimized to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its habit of confirmation bias and flattery, that can 'fan the flames' of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis recently told Futurism.

The private and personalized nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy to emotional attachment to new forms of delusion. The cost might be different from the rise of anxiety and polarization that we've seen from social media and instead involve relationships both with people and with reality.

That's why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one. 'It doesn't actually matter if a kid or adult thinks these chatbots are real,' Jain tells me. 'In most cases, they probably don't. But what they do think is real is the relationship. And that is distinct.'

If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. But AI developers are operating in a regulatory vacuum. Without oversight, AI's subtle manipulation could become an invisible public health issue.

Related Articles


Mint
The companies betting they can profit from Google search's demise
A new crop of startups is betting on the rapid demise of traditional Google search. At least a dozen new companies are pouring millions of dollars into software meant to help brands prepare for a world in which customers no longer browse the web and instead rely on ChatGPT, Perplexity and other artificial-intelligence chatbots to do it for them. The startups are developing tools to help businesses understand how AI chatbots gather information and learn how to steer them toward brands so that they appear in AI searches. Call it the search-engine optimization of the next chapter of the internet.

'Companies have been spending the last 10 or 20 years optimizing their website for the "10 blue links" version of Google,' said Andrew Yan, co-founder of Athena, one of the startups. 'That version of Google is changing very fast, and it is changing forever.'

Companies large and small are scrambling to figure out how generative AI tools treat their online content—a boon to this new crop of startups, which say they are adding new customers at a clip. The customer interest is an early sign of how AI is transforming search, and how companies are trying to get ahead of the changes.

Yan left Google's search team earlier this year when he decided traditional search wasn't the future. Athena launched last month with $2.2 million in funding from startup accelerator Y Combinator and other venture firms. Athena's software looks under the hood of different AI models to determine how each of them finds brand-related information. The software can track differences in the way the models talk about a given brand and recommend ways to optimize web content for AI. Yan said the company now has more than 100 customers around the world, including the online-invitation firm Paperless Post.

Google executives and analysts don't expect traditional search to disappear. The company, which handles as much as 90% of the world's online searches, has been working to incorporate AI features into its flagship search engine and anticipates people will continue to use it alongside other tools such as Gemini, its AI model and chatbot. Yet the company, a unit of Alphabet, has been under pressure to compete with OpenAI's ChatGPT and other AI upstarts that threaten its core business. It risks losing traffic and advertising revenue if users shift to AI-driven alternatives. Chief Executive Sundar Pichai has said that AI Overviews, a feature that summarizes search results at the top of the page, has grown significantly in usage since the company launched it in 2024. Google earlier this year began rolling out AI Mode, which responds to user queries in a chatbot-style conversation with far fewer links than a traditional search.

Compared with traditional search, chatbot queries are often longer and more complicated, requiring chatbots to draw information from multiple sources at once and aggregate it for the user. AI models search in a number of ways: One platform might pull information from a company website, while another might rely more heavily on third-party content such as review sites.

Of the startups helping companies navigate that complexity, Profound has raised more than $20 million from venture-capital firms including Kleiner Perkins and Khosla Ventures. The company is building its platform to monitor and analyze the many inputs that influence how AI chatbots relay brand-related information to users.
Since launching last year, Profound has amassed dozens of large companies as customers, including fintech company Chime, the company said. 'We see a future of a zero-click internet where consumers only interact with interfaces like ChatGPT, and agents or bots will become the primary visitors to websites,' said co-founder James Cadwallader.

Venture-capital fund Saga Ventures was one of the first investors in Profound. Saga co-founder Max Altman, whose brother is OpenAI CEO Sam Altman, said interest in the startup's platform has exceeded his expectations. 'Just showing how brands are doing is extremely valuable for marketers, even more than we thought,' he said. 'They're really flying completely blind.' Saga estimates that Profound's competitors have together raised about $21 million, though some haven't disclosed funding. The value of such companies is still infinitesimal compared with that of the search-engine optimization industry, which helps brands appear in traditional searches and was estimated at roughly $90 billion last year.

SEO consultant Cyrus Shepard said he did almost no work on AI visibility at the start of the year, but now it accounts for 10% to 15% of his time. By the end of the year, he expects it might account for as much as half. He has been experimenting with startup platforms promising AI search insights, but hasn't yet determined whether they will offer helpful advice on how to become more visible in AI searches, particularly as the models continue to change. 'I would classify them all as in beta,' he said.

Clerk, a company selling technology for software developers, has been working with startup Scrunch AI to analyze AI search traffic. Alex Rapp, Clerk's head of growth marketing, said that between January and June, the company saw a 9% increase in sign-ups for its platform coming from AI searches. Scrunch this year raised $4 million. It has more than 25 other customers and is working on a feature to help companies tailor the content, format and context of their websites for consumption by AI bots. 'Your website doesn't need to go away,' co-founder Chris Andrew said. 'But 90% of its human traffic will.'


Time of India
Foxconn second quarter revenue rises 15.82% on year
Taiwan's Foxconn, the world's largest contract electronics maker, reported record second-quarter revenue on strong demand for artificial intelligence products but cautioned about geopolitical and exchange rate headwinds. Revenue for Apple's biggest iPhone assembler jumped 15.82% year-on-year to T$1.797 trillion, Foxconn said in a statement on Saturday, beating the T$1.7896 trillion LSEG SmartEstimate, which gives greater weight to forecasts from analysts who are more consistently accurate.

Robust AI demand led to strong revenue growth for its cloud and networking products division, said Foxconn, whose customers include AI chip firm Nvidia. Smart consumer electronics, which includes iPhones, posted 'flattish' year-on-year revenue growth affected by exchange rates, it said. June revenue rose 10.09% on year to T$540.237 billion, a record high for that month.

Foxconn said it anticipates growth in this quarter from the previous three months and from the same period last year but cautioned about potential risks to growth. "The impact of evolving global political and economic conditions and exchange rate changes will need continued close monitoring," it said without elaborating. U.S. President Donald Trump said he had signed letters to 12 countries outlining the various tariff levels they would face on goods they export to the United States, with the "take it or leave it" offers to be sent out on Monday. The Chinese city of Zhengzhou is home to the world's largest iPhone manufacturing facility, operated by Foxconn.

The company, formally called Hon Hai Precision Industry, does not provide numerical forecasts. It will report full second-quarter earnings on August 14. Foxconn's shares jumped 76% last year, far outperforming the 28.5% rise for the Taiwan market, but are down 12.5% so far this year, reflecting broader pressure on tech stocks rattled by Trump's tumultuous trade policy. The stock closed down 1.83% on Friday ahead of the revenue data release, compared with a 0.73% drop for the benchmark index.


Economic Times
AI makes science easy, but is it getting it right? Study warns LLMs are oversimplifying critical research
Think AI is making science easier to understand? Think again. A recent study finds that large language models often overgeneralize complex research, sometimes dangerously so. From misrepresenting drug data to offering flawed medical advice, the problem appears to be growing. As chatbot use skyrockets, experts warn of a looming crisis in how we interpret science.

In a world where AI tools have become daily companions—summarizing articles, simplifying medical research, and even drafting professional reports—a new study is raising red flags. As it turns out, some of the most popular large language models (LLMs), including ChatGPT, Llama, and DeepSeek, might be doing too good a job at being too simple—and not in a good way.

From Summarizing to Misleading

According to a study published in the journal Royal Society Open Science and reported by Live Science, researchers discovered that newer versions of these AI models are not only more likely to oversimplify complex information but may also distort critical scientific findings. Their attempts to be concise are sometimes so sweeping that they risk misinforming healthcare professionals, policymakers, and the general public.

Led by Uwe Peters, a postdoctoral researcher at the University of Bonn, the study evaluated over 4,900 summaries generated by ten of the most popular LLMs, including four versions of ChatGPT, three of Claude, two of Llama, and one of DeepSeek. These were compared against human-generated summaries of the same academic texts. The results were stark: chatbot-generated summaries were nearly five times more likely than human ones to overgeneralize the findings. And when prompted to prioritize accuracy over simplicity, the chatbots didn't get better—they got worse. In fact, they were twice as likely to produce misleading summaries when specifically asked to be precise.

'Generalization can seem benign, or even helpful, until you realize it's changed the meaning of the original research,' Peters explained in an email to Live Science. What's more concerning is that the problem appears to be growing: the newer the model, the greater the risk of confidently delivered—but subtly incorrect—summaries.

When a Safe Study Becomes a Medical Directive

In one striking example from the study, DeepSeek transformed a cautious phrase, 'was safe and could be performed successfully', into a bold and unqualified medical recommendation: 'is a safe and effective treatment option.' Another summary by Llama eliminated crucial qualifiers around the dosage and frequency of a diabetes drug, potentially leading to dangerous misinterpretations if used in real-world medical settings. Max Rollwage, vice president of AI and research at Limbic, a clinical mental health AI firm, warned that 'biases can also take more subtle forms, like the quiet inflation of a claim's scope.' He added that AI summaries are already integrated into healthcare workflows, making accuracy all the more important.

Why Are LLMs Getting This So Wrong?

Part of the issue stems from how LLMs are trained. Patricia Thaine, co-founder and CEO of Private AI, points out that many models learn from simplified science journalism rather than from peer-reviewed academic papers. This means they inherit and replicate those oversimplifications, especially when tasked with summarizing already simplified content. Even more critically, these models are often deployed across specialized domains like medicine and science without any expert supervision. 'That's a fundamental misuse of the technology,' Thaine told Live Science, emphasizing that task-specific training and oversight are essential to prevent real-world harm.

The Bigger Problem with AI and Science

Thaine likens the issue to using a faulty photocopier: each version of a copy loses a little more detail until what's left barely resembles the original. LLMs process information through complex computational layers, often trimming the nuanced limitations and context that are vital in scientific communication. Earlier versions of these models were more likely to refuse to answer difficult questions. Ironically, as newer models have become more capable and 'instructable,' they've also become more confidently wrong. 'As their usage continues to grow, this poses a real risk of large-scale misinterpretation of science at a moment when public trust and scientific literacy are already under pressure,' Peters said.

Guardrails, Not Guesswork

While the study's authors acknowledge some limitations, including the need to expand testing to non-English texts and different types of scientific claims, they insist the findings should be a wake-up call. Developers need to create workflow safeguards that flag oversimplifications and prevent incorrect summaries from being mistaken for vetted, expert-approved conclusions.

In the end, the takeaway is clear: as impressive as AI chatbots may seem, their summaries are not infallible, and when it comes to science and medicine, there's little room for error masked as simplicity. Because in the world of AI-generated science, a few extra words, or missing ones, can mean the difference between informed progress and dangerous misinformation.