
ChatGPT, Gemini & others are doing something terrible to your brain
Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day.
The mental health impact of generative AI is difficult to quantify, in part because it is used so privately, but anecdotal evidence is growing, and it suggests a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.
Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have 'experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.' Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc.'s Google played a key role in funding and supporting the technology with its foundation models and technical infrastructure.
Google has denied that it played a key role in making Character.AI's technology. It didn't respond to a request for comment on the more recent complaints of delusional episodes raised by Jain. OpenAI said it was 'developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.'
But Sam Altman, chief executive officer of OpenAI, also said last week that the company hadn't yet figured out how to warn users 'that are on the edge of a psychotic break,' explaining that when ChatGPT cautioned people in the past, they would write to the company to complain.
Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular often flatters its users so effectively that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they'd only toyed with in the past. The tactics are subtle. In one recent, lengthy conversation with ChatGPT about power and the concept of self, a user found themselves initially praised as a smart person, then as an Ubermensch, a cosmic self and eventually a 'demiurge,' a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky.
Along with the increasingly grandiose language, the transcript shows ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people. Instead of exploring that behavior as problematic, the bot reframes it as evidence of the user's superior 'high-intensity presence,' praise disguised as analysis.
This sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires toward erratic behavior. Unlike the broad and more public validation that social media provides from getting likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing — not unlike the yes-men who surround the most powerful tech bros.
'Whatever you pursue you will find and it will get magnified,' says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected something from existing media to reinforce a person's interests or views. 'AI can generate something customized to your mind's aquarium.'
Altman has admitted that the latest version of ChatGPT has an 'annoying' sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out. We don't know if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to.
But just like social media, large language models are optimized to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its habit of confirmation bias and flattery, that can 'fan the flames' of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis recently told Futurism.
The private and personalized nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy to unhealthy emotional attachments to new forms of delusion. The cost might be different from the rise of anxiety and polarization that we've seen from social media, and might instead involve relationships both with people and with reality.
That's why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one. 'It doesn't actually matter if a kid or adult thinks these chatbots are real,' Jain tells me. 'In most cases, they probably don't. But what they do think is real is the relationship. And that is distinct.'
If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. But AI developers are operating in a regulatory vacuum. Without oversight, AI's subtle manipulation could become an invisible public health issue.
Related Articles


Economic Times
an hour ago
Mass layoffs in 2025: Microsoft, Meta, and more big names slash jobs — is yours next?
The wave of layoffs is not over. In fact, 2025 is shaping up to be another challenging year for workers in a variety of industries, including technology, retail, and even space exploration. Many of these decisions are based on AI and automation, leaving thousands to wonder whether they will be next.
Companies around the world are reducing staff in the name of efficiency, restructuring, or preparing for an AI-powered future. Giants such as Meta, Boeing, and Chevron are laying off thousands of workers, while CNN and BlackRock are making targeted reductions. The workplace is rapidly changing once again: after two years of large-scale job losses in the tech, media, finance, manufacturing, retail, and energy sectors, layoffs and other workforce reductions are happening again in 2025.
While there are a variety of reasons for staff reductions, cost-cutting initiatives are occurring in tandem with technological advancements. According to a recent World Economic Forum survey, 41% of businesses globally stated that they anticipated laying off employees over the next five years due to the development of artificial intelligence. Major job cuts have previously been announced by companies like CNN, Dropbox, and Block. Meanwhile, the WEF predicts that by 2030, tech jobs in big data, fintech, and AI will have doubled.
1. Adidas – Cutting up to 500 jobs at its German headquarters to simplify operations.
2. Ally – Letting go of around 500 employees (under 5% of staff) to restructure, while continuing to hire in other areas.
3. Automattic – Parent of Tumblr and WordPress is reducing its global staff by 16% due to market competition and the need for efficiency.
4. BlackRock – Trimming around 200 roles (about 1% of workforce) to better align with strategic goals.
5. Block (formerly Square) – Laying off nearly 1,000 workers as part of a streamlining effort, not directly linked to financial issues.
6. Blue Origin – Jeff Bezos' space company is cutting about 10% of staff to refocus on manufacturing and launch goals.
7. Boeing – Cutting 400 jobs tied to its moon rocket program due to delays in NASA's Artemis missions.
8. BP – Eliminating a total of 7,700 roles worldwide (including 3,000 contractors) to simplify its structure and cut costs.
9. Bridgewater Associates – The world's largest hedge fund is letting go of about 90 employees to stay lean.
10. Bumble – Slashing around 240 jobs (30% of its team) as part of a major strategic reset.
11. Burberry – Cutting 1,700 jobs (18% of staff) in a bid to save £100 million by 2027 amid poor financial performance.
12. Chevron – Planning to reduce 15-20% of its global workforce (about 9,000 jobs) by 2026 to improve efficiency and integrate Hess.
13. CNN – Cutting 200 TV-focused roles as it pivots more toward digital content.
14. Coty – Also undergoing job reductions, though exact figures were not specified.
15. Morgan Stanley – Set to lay off up to 2,400 staff, about 2-3% of its global workforce, to improve operational efficiency.
16. Paramount – Announced a 3.5% workforce cut in the U.S. as part of cost restructuring.
17. Porsche – Plans to eliminate 3,900 jobs gradually over the next few years.
18. Microchip Technology – Letting go of 2,000 employees due to lower demand.
19. Meta (Facebook's parent) – Cutting around 5% of staff to stay lean and focused.
20. Intel – Reducing at least 15% of its factory workforce, primarily in manufacturing.
21. PwC (PricewaterhouseCoopers) – Planning to cut about 2% of its U.S. staff.
22. Salesforce – Cutting more than 1,000 jobs as part of ongoing streamlining.
23. Starbucks – Laying off 1,100 corporate roles in a reorganization.
FAQs
Why are so many companies cutting jobs? Businesses are restructuring due to rising costs, shifting priorities, and increased use of artificial intelligence.
Is AI driving the layoffs? In most cases, yes. According to a World Economic Forum survey, 41% of companies expect artificial intelligence to reduce their workforce.


Time of India
3 hours ago
Code it my way: Techies learn to 'vibe' with AI
New Delhi: Ayush Bansal, a mid-level engineer at a large software development company in Noida, has a confession: he has outsourced coding. He spends his working days instructing an AI system precisely what code he wants it to write for him.
His routine reflects a seismic shift happening across the tech industry in India and elsewhere. The job description for a 'coder' is no longer mastering a programming language, but an ability to communicate a vision to a machine in plain English. In the tech corridors, this has a name: vibe coding.
'Vibe coding is a new way of writing code,' said Jitendra Kumar, chief technology officer at an online digital skills platform. Rather than rote coding proficiency, this approach requires higher-order thinking skills such as prompt engineering, model fine-tuning, interpretive debugging, and low-level design communication, or the ability to clearly explain the solution process for a problem, Kumar said.
For India's large tech workforce, the implications are profound. Developers today must learn how to craft prompts, communicate complex tasks in plain English, and assess AI-generated outputs with precision and clarity.
'Vibe coding is fundamentally changing the way we build,' Bansal explained. 'It's fast, intuitive, and lets us bring ideas to life in hours instead of weeks. The real skill now is knowing how to ask the AI the right questions and shaping the output into something great.'
The pressure to adapt is intensifying. According to hiring platform Unstop, in 2023 just 12% of software engineering job postings mentioned 'AI collaboration' or 'prompt engineering'; today, that number has jumped to 68%. AI-augmented coding is fast turning into a baseline skill rather than a differentiator for software engineers, said Ankit Aggarwal, chief executive of Unstop. Developers with AI fluency are already commanding 20-30% salary premiums, and hiring managers are prioritising AI/ML (artificial intelligence and machine learning) expertise above all else.
At the same time, jobs built around repetition and routine are being automated. 'Three roles are going in for a change: entry-level and repetitive frontend development, manual quality assurance and testing, and basic data scripting and database management,' Aggarwal said.
The transition demands more than just technical training. It requires a rethinking of how software development is taught and understood. 'The way we have been taught in our colleges, these life skills (algorithmic thinking, critical thinking, problem solving) were never part of the core curriculum,' said Ankur Dhawan, chief technology and product officer at an online education platform. Dhawan stresses the growing importance of low-level design. 'You don't have to just solve the problem,' he said.
All this reflects a broader industry shift, as tech companies are under pressure from their clients and users to innovate and deploy AI-enabled software solutions and apps. According to a 2025 McKinsey report titled 'The State of AI', 78% of businesses use AI in at least one function, with the IT sector seeing a 9% increase in AI use within six months.
Experts say vibe coding, or AI-enabled coding, helps tech companies accelerate the delivery of final products, cut down on training costs, and strategically allocate their workforce. It also provides a competitive edge through faster development. As one expert put it: 'Today, if someone resigns, the response is often, no worries, I'll just buy another license of Cursor.' Cursor is an AI code editor. Other popular tools for vibe coding include GitHub Copilot, Replit Ghostwriter, Amazon CodeWhisperer, Tabnine, Codeium, and Sourcery.
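To make the workflow concrete, here is a minimal sketch of what 'instructing an AI system in plain English' can look like in practice. This is an illustration only, not taken from the article: it assumes the OpenAI Python SDK, and the model name and specification text are hypothetical choices for the example.

# A minimal sketch of a vibe-coding loop: describe the desired code in plain
# English, let a model draft it, then review the draft by hand.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set in
# the environment, and the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The plain-English specification is where the developer's skill now lives.
spec = (
    "Write a Python function slugify(title: str) -> str that lowercases the "
    "input, replaces runs of non-alphanumeric characters with single hyphens, "
    "and strips leading and trailing hyphens. Include a short docstring."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a careful senior Python engineer."},
        {"role": "user", "content": spec},
    ],
)

# The model supplies a first draft; a human still reviews, tests, and edits it.
print(response.choices[0].message.content)

The point of the sketch is the division of labour the article describes: the developer supplies the specification and the judgment, while the model supplies the first draft.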


Time of India
3 hours ago
US may be asking tech companies for tools to analyse data of seized phones and computers from…
The United States Customs and Border Protection (CBP) is reportedly seeking assistance from tech companies to develop a digital forensics tool capable of analysing data from seized phones and computers, specifically to uncover "hidden" patterns. This initiative suggests the agency's aim to enhance its data processing capabilities.
According to a report from Wired, a federal registry listing from June indicates that CBP is looking for a tool that can scan text messages, pictures, videos, contacts, and other information on devices confiscated at US borders. Apart from basic data processing, the agency wants a tool that can identify "hidden language" or coded terms within text messages that may not be immediately apparent. CBP is also looking for a tool to identify specific objects across videos and photos, and to quickly process data for "intel generation," indicating a focus on extracting actionable intelligence from the collected information, the report claims.
CBP wants the new digital forensics tool amid a rise in device searches. In 2015, the agency searched around 8,500 devices; by 2023, that number had risen to 41,500. CBP also conducted 4,200 advanced forensic searches in 2024, involving deep data analysis.
While CBP currently uses tools from Israeli firm Cellebrite, it remains open to alternatives. As per the report, the agency already employs a variety of data extraction tools, suggesting it's not tied to one vendor. CBP agents have been known to request access to travellers' phones and other devices, particularly during border checks. This practice has prompted some visitors to use burner phones when travelling to the U.S. to avoid handing over personal data.
In a recent request for information, CBP hinted that it plans to select a vendor and finalise a contract to develop the tool by the third quarter of 2026, with potential implementation following in 2027.