
Scientifically Speaking: Does ChatGPT really make us stupid?
A number of headlines have been screaming that using artificial intelligence (AI) rots our brains. Some say it outright; others prefer the more polite 'cognitive decline', but they mean much the same thing. Add this to the growing list of things AI is blamed for: making us lazy, dumber, and incapable of independent thought.
If you read only those headlines, you couldn't be blamed for thinking that most of humanity was capable of exalted reasoning before large language models turned our brains to pulp. If we're worried about AI making us stupid, what's our excuse for the pre-AI era?
Large language models like ChatGPT and Claude are now pervasive. But even a few years into their use, a lot of the talk about them remains black and white. In one camp, the techno-optimists tell us that AI superintelligence, which can do everything better than all of us, is just around the corner. In the other, there's a group that blames just about everything that goes wrong anywhere on AI. If only the truth were that simple.
The Massachusetts Institute of Technology (MIT) study that sparked this panic deserves a sober hearing, not hysteria. Researchers at the Media Lab asked a worthwhile question. How does using AI to write affect our brain activity?
But answering that is harder than it looks.
The researchers recruited 54 Boston-area university students and divided them into three groups; each participant wrote 20-minute essays on philosophical topics while EEG sensors monitored their brain activity. One group used ChatGPT, another used Google Search, and a third used only their brains.
Over four months, participants tackled questions like 'Does true loyalty require unconditional support?'
The researchers claim that ChatGPT users showed less brain activity during the task, struggled to recall what they'd written, and felt less ownership of their work. They call this 'cognitive debt.' The conclusion sounds familiar. Outsourcing thinking weakens engagement. Writing is hard. Writing philosophical essays on abstract topics is harder.
It's valuable work, and going through the nearly 200 pages of the paper takes time. The authors note that their findings aren't peer-reviewed and come with significant limitations. I wonder how many headline writers bothered to read past the summary.
Because the limitations are notable. The study involved 54 students from elite universities writing brief philosophy essays. Brain activity was measured with EEG, which is less sensitive and more ambiguous than fMRI (functional magnetic resonance imaging) scans.
If AI really damages how we think, then what participants did between sessions matters. Over four months, were the 'brain-only' participants really avoiding ChatGPT for all their coursework? With hundreds of millions using ChatGPT weekly, that seems unlikely. You'd want to compare people who never used AI to those who used it regularly before drawing strong conclusions about brain rot.
And here's the problem with stretching a small study on writing philosophical college-level essays too far. While journalists were busy writing sensational headlines about 'brain rot,' they missed the bigger picture. Most of us are using ChatGPT to avoid thinking about things we'd rather not think about anyway.
Later this month, I'm travelling to Vietnam. I could spend hours sorting out my travel documents, emailing hotels about pickups and tours, and coordinating logistics. Instead, I'll use AI to draft those communications, check them, and move on. One day maybe my AI agent will talk to their AI agent and spare us both, but we're not there yet.
In this case, using AI doesn't make me stupid. It makes me efficient. It frees up mental energy and time for things I actually want to focus on, like writing this column.
This is the key point, and one that I think got lost. Learning can't be outsourced to AI. It still has to be done the hard way. But collectively and individually, we do get to choose what's worth learning.
When I use GPS instead of memorizing routes, maybe my spatial memory dulls a bit, but I still get where I'm going. When I use a calculator, my arithmetic gets rusty, but that doesn't mean I don't understand math. If anyone wants to train their brain like a London cabbie or Shakuntala Devi, they can. But most of us prefer to save the effort.
Our goal isn't to use our brains for everything. It's to use them for the things that matter to us.
I write my own columns because I enjoy the process and feel I have something to say. When I stop feeling that way, I'll stop. But I'm happy to let AI handle my travel logistics, routine correspondence, and other mental busywork.
Rather than fearing this transition, we might ask: What uniquely human activities will we choose to pursue with the time and mental energy AI frees up?
We're still in the early days of understanding AI's cognitive impacts. Some promise AI will make us all geniuses; others warn it will turn our brains to mush. The verdict isn't in, despite what absolutists on both sides claim.
Anirban Mahapatra is a scientist and author, most recently of the popular science book, When The Drugs Don't Work: The Hidden Pandemic That Could End Medicine. The views expressed are personal.