Thinking capped: How generative AI may be quietly dulling our brains

It has been barely three years since generative artificial intelligence (AI) chatbots such as ChatGPT appeared on the scene, and there is already concern over how they might be affecting the human brain. The early prognosis isn't good. The findings of a recent study by researchers from the Massachusetts Institute of Technology (MIT) Media Lab, Wellesley College, and MassArt indicate that tools such as ChatGPT negatively impact the neural, linguistic, and cognitive capabilities of humans.
While this study is preliminary and limited in scope, involving just 54 subjects aged 18 to 34, it found that those who used ChatGPT to write essays (as part of the research experiment) showed measurably lower brain activity than their peers who didn't. 'Writing without (AI) assistance increased brain network interactions across multiple frequency bands, engaging higher cognitive load, stronger executive control, and deeper creative processing,' the study noted.
Various experts in India, too, echo these concerns about overdependence on AI, to the point where people outsource even thinking to it. Those who study the human brain describe this as 'cognitive offloading' which, they caution, can diminish critical thinking and reasoning while also fostering a sense of social isolation – in effect, dragging humans into an 'idiot trap'.
Training the brain to be lazy
'We now rely on AI for tasks we used to do ourselves — writing essays, solving problems, even generating ideas,' says Nitin Anand, additional professor of clinical psychology, National Institute of Mental Health and Neuro Sciences (Nimhans), Bengaluru. 'That means less practice in critical thinking, memory recall, and creative reasoning.'
This dependence, he adds, is also weakening people's ability to delay gratification. 'AI tools are designed for speed. They answer instantly. But that trains people to expect quick solutions everywhere, reducing patience and long-term focus.'
Anand warns that this behavioural shift is feeding into a pattern of digital addiction, which he characterises by the 4Cs: craving, compulsion, loss of control, and consequences.
'When someone cannot stop checking their phone, feels restless without it, and suffers in real life because of it — that's addiction,' he says, adding that the threat of technology addiction has been multiplied by something as adaptive and customisable as AI.
Children and adolescents are particularly at risk, says Pankaj Kumar Verma, consultant psychiatrist and director of Rejuvenate Mind Neuropsychiatry Clinic, New Delhi.
'Their prefrontal cortex — the brain's centre for planning, attention, and impulse control — is still developing,' he explains. 'Constant exposure to fast-changing AI content overstimulates neural circuits, leading to short attention spans, poor impulse control, and difficulty with sustained focus.'
The effects don't stop at attention
'We're seeing a decline in memory retention and critical thinking, simply because people don't engage deeply with information anymore,' Verma adds. Even basic tasks like asking for directions or speaking to others are being replaced by AI, increasing social isolation, he says.
Much of this harks back to when landlines came to be replaced by mobile phones. Landline users rarely needed a phonebook — the numbers of friends, family, and favourite shops were known by heart. But with mobile phones offering a convenient 'contacts' list, memory was outsourced. Today, most people can barely recall three-odd numbers unaided.
With AI, such cognitive shifts will likely become more pronounced, the experts say. What looks like convenience today might well be shaping a future where essential human skills quietly fade away.
Using AI without losing ourselves
Experts agree that the solution is not to reject AI, but to regulate its use with conscious boundaries and real-world grounding. Verma advocates structured rules around technology use, especially in homes with children and adolescents.
'Children, with underdeveloped self-regulation, need guidance,' he says. 'We must set clear boundaries and model balanced behaviour. Without regulation, we risk overstimulating developing brains.'
To prevent digital dependence, Anand recommends simple, yet effective, routines that can be extended to AI use. The 'phone basket ritual', for instance, involves setting aside all devices in a common space at a fixed hour each day — usually in the evening — to create a screen-free window for family time or rest.
He also suggests 'digital fasting': unplugging from all screens for six to eight hours once a week to reset attention and reduce compulsive use.
'These habits help reclaim control from devices and re-train the brain to function independently,' he says. Perhaps digital fasting can be extended to 'AI fasting' during work and school assignments, allowing the brain to engage in cognitive activity of its own.
Pratishtha Arora, chief executive officer of Social and Media Matters, a digital rights organisation, highlights the essential role of parental responsibility in shaping children's digital lives.
'Technology is inevitable, but how we introduce it matters,' she says. 'The foundation of a child's brain is laid early. If we outsource that to screens, the damage can be long-term.'
She also emphasises the need to recognise children's innate skills and interests rather than plunging them into technology at an early age.
Shivani Mishra, AI researcher at the Indian Institute of Technology Kanpur, cautions against viewing AI as a replacement for human intelligence. 'AI can assist, but it cannot replace human creativity or emotional depth,' she says. Like most experts, she too advises that AI should be used to reduce repetitive workload, 'and free up space for thinking, not to avoid thinking altogether'.
The human cost
According to Mishra, the danger lies not in what AI can do, but in how much we delegate to it, often without reflection.
Both Anand and Verma share concerns about how its unregulated use could stunt core human faculties. Anand reiterates that unchecked dependence could erode the brain's capacity to delay gratification, solve problems, and tolerate discomfort.
'We're at risk of creating a generation of young people who are highly stimulated but poorly equipped to deal with the complexities of real life,' Verma says.
The way forward, the experts agree, lies in responsible development, creating AI systems grounded in ethics, transparency, and human values. Research in AI ethics must be prioritised not just for safety, but also to preserve what makes us human in the first place, they advise.
The question is not whether AI will shape the future; it is already doing so. It is whether humans will remain conscious architects of that future or passive participants in it.
What the MIT study found
• Writing without AI assistance led to higher cognitive-load engagement, stronger executive control, and deeper creative processing
• Writing with AI assistance reduced overall neural connectivity and shifted the dynamics of information flow
• Large language model (LLM) users showed a diminishing inclination to evaluate the output critically
• Participants in the brain-only group reported higher satisfaction and demonstrated higher brain connectivity than the other groups
• Essays written with the help of an LLM held less significance for participants, who spent less time on writing and mostly failed to quote from their own essays

