
Brains on autopilot: MIT study warns AI is eroding human thought. Here's how to stay intellectually alive
Inventions have redefined the very existence of humankind, challenging us to alter the way we think, learn, and live. The printing press etched history in bold letters. Calculators reshaped arithmetic.
Now, artificial intelligence has entered the scene, permeating every niche of human life and painting it with a palette of new possibilities. Yet, like every groundbreaking invention, this too carries its fair share of repercussions.
But what happens when the very tools built to extend the human mind begin to replace it? The answer is unsettling: a generation with diminished thinking abilities.
A profound transition is already underway, one that, like an asymptomatic disease, may erupt into a full-blown cognitive pandemic in the years ahead. Generative AI systems like ChatGPT promise instant answers, elegant prose, and streamlined tasks. But we now stand on the precipice of bidding adieu to creativity. Beneath the sheen of this alluring technology lies a deeper question: Are we shelving our thinking abilities and forgetting how to think altogether?
A striking study by the Massachusetts Institute of Technology (MIT) has surfaced some troubling trends.
And no, it's not good news for the next generation.
Inside the MIT study: The brain on ChatGPT
Computer scientist Nataliya Kosmyna and her team at MIT's Media Lab set out to investigate whether heavy reliance on AI tools like ChatGPT alters the way our brains function. The experiment involved 60 college students aged 18 to 39, who were assigned to write short essays using one of three methods: ChatGPT, Google Search, or no external tools at all.
Equipped with EEG headsets to monitor their neural activity, participants crafted essays in response to prompts like 'Should we always think before we speak?'
The results? Students who wrote without any assistance demonstrated the highest levels of cognitive engagement, showing strong neural connectivity across brain regions responsible for memory, reasoning, and decision-making.
They thought harder and more deeply.
By contrast, ChatGPT users showed the lowest neural activity. Their thinking was fragmented, their recall impaired, and their essays often lacked originality. Many participants could not even remember what they had written: clear evidence that the information had not been internalised.
AI hadn't just helped them write. It had done the writing for them. Their brains had taken a backseat.
The risk of outsourcing thought
Cognitive offloading, as the researchers call it, is not about convenience; it is about control. The more we allow machines to handle the hard segments of thinking, the less often we exercise the mental muscles behind critical thinking, creativity, and memory formation.
Over time, these muscles can weaken.
When participants who initially used ChatGPT were later asked to write without it, their brain activity increased, but it never met the levels of those who had worked independently from the start.
The inference is clear: the capacity for deep thinking erodes when it goes unused.
Tools reflect intent, not intelligence
The invention is usually made the scapegoat, but the outcome depends on how we use it. The problem is not the tool, but how we choose to put it to use.
As one teacher once said, 'Every tool carries with it a story, not of what it is, but how it is used.' AI, like a pair of scissors, is brilliant in design, but only when built with everyone in mind.
For decades, scissors excluded left-handed children, not because the tool was faulty, but because its design lacked inclusivity.
AI's story is no different. There are two roads: it can either democratise education or further deepen inequality. It can hone creativity or dull it. Our choices will decide which road the next generation travels.
According to the World Bank, students from disadvantaged backgrounds are 50% less likely to access AI-powered learning tools compared to their peers (World Bank, 2024).
And as UNESCO's 2024 Global Education Monitoring Report reveals, nearly 40% of teachers worldwide fear being replaced by machines.
But those outcomes are not the fault of AI. They're the result of how we've chosen to implement it.
Used well, AI can elevate learning
When utilised cautiously, AI can still elevate the quality of education. A study by the McKinsey Global Institute found that AI-assisted personalised learning can bolster a student's performance by 30%.
A 2022 study by the Organisation for Economic Co-operation and Development (OECD) shared similar findings, adding that AI can reduce teacher workloads; this is critical, given that educators spend 50% of their time on administrative duties.
In rural India, digital initiatives like the National Digital Education Architecture (NDEAR) aim to use AI to reach over 250 million school-age children who lack access to quality teachers.
However, even in a world driven and dominated by artificial intelligence, the human element in learning cannot be substituted. The struggle of reflection and the delight of discovery dwell at the heart of human learning. As it is said, we must begin with the end in mind. Are we cultivating a cohort of students who merely complete tasks, or ones who can think beyond limits and add meaning?
'AI is already born. We must learn to co-exist.'
In a conversation with The Times of India, Siddharth Rajgarhia, Chief Learner and Director of DPS, said it emphatically: 'AI is already born; we cannot keep it back in the womb.
It is important to learn to co-exist with the guest and keep our human element alive.'
That co-existence begins by redefining the role of AI, not as a shortcut, but as a companion in the learning journey. Here's how educators and students can stay intellectually alive in the age of automation:
Think before you prompt: Encourage students to brainstorm ideas independently before turning to AI.
Reclaim authorship: Every AI-assisted draft should be critically revised and fully owned by the student.
Foster metacognition: Teach learners to reflect on how they think, not just what they produce.
Center equity in design: Ensure tools are accessible to all learners, not just the digitally privileged few.
Use AI to deepen, not replace, curiosity: Let it challenge assumptions, not hand out ready answers.
Final thought: Let AI assist, but let humans lead
The human brain was never meant to idle. It was designed to wrestle with complexity, to stumble and reframe, to wonder and imagine. When we surrender that process to machines, when we allow AI to become the default setting for thought, we risk losing more than creativity.
We risk losing our cognitive ownership, our voices, and our opinions. When we forget to think, we let go of the very power of being human.
AI is not the villain here. It amplifies our intentions: good or bad, lazy or inspired.
The future of learning and the workplace does not depend on the fastest prompt or smartest algorithm. It stands on the shoulders of the brightest minds who have kept their curiosity intact and who resist easy answers. At the core of learning lie educators who remind us that the goal of education is not just knowledge, it is wisdom.
We might wish that prompts could generate wisdom and a human element. Alas, they cannot. Wisdom must be cultivated by the vanguards of imagination.