MIT study warns how ChatGPT weakens critical thinking

Hans India | 21 June 2025

A new study from MIT's Media Lab is raising red flags about the impact of generative AI tools like ChatGPT on human cognition—particularly among students. The study suggests that using ChatGPT for academic work may reduce brain activity, diminish creativity, and impair memory formation.
The experiment involved 54 participants aged 18 to 39, who were divided into three groups: one using ChatGPT, another using Google Search, and a control group using neither. Each group was asked to write multiple SAT-style essays while wearing EEG devices to measure brain activity across 32 regions.
Results showed ChatGPT users exhibited the lowest neural engagement, underperforming across behavioral, linguistic, and cognitive measures. Their essays were also deemed formulaic and lacking originality by English teachers. Alarmingly, as the study progressed over several months, many in the ChatGPT group abandoned active writing altogether, opting instead to copy-paste AI-generated responses with minimal editing.
Lead author Nataliya Kosmyna explained why she chose to publish the findings ahead of peer review, saying, 'I'm afraid in 6-8 months some policymaker will propose "GPT for kindergarten." That would be absolutely detrimental to developing brains.'
In contrast, the group that relied solely on their own brainpower showed stronger neural connectivity in the alpha, theta, and delta bands—frequency ranges associated with creativity, memory, and semantic processing. These participants felt more ownership over their work and reported higher satisfaction. The Google Search group also demonstrated high engagement and satisfaction, suggesting that traditional web research supports more active learning than LLM use.
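For readers curious what a band-level EEG measure actually involves, the sketch below shows one common way to estimate alpha, theta, and delta power from a single channel using Welch's method. It is a generic, hypothetical illustration; the sampling rate, band edges, and synthetic signal are assumptions made for the example, not details of the MIT team's analysis pipeline.

```python
# Generic illustration of EEG band-power estimation (not the study's actual code).
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13)}  # typical band edges

def band_power(channel: np.ndarray, fs: int = FS) -> dict:
    """Estimate average spectral power per band for one EEG channel."""
    freqs, psd = welch(channel, fs=fs, nperseg=fs * 2)  # power spectral density
    df = freqs[1] - freqs[0]                            # frequency resolution
    return {
        name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
        for name, (lo, hi) in BANDS.items()
    }

# Synthetic one-minute signal standing in for one of a headset's 32 channels
rng = np.random.default_rng(0)
print(band_power(rng.standard_normal(FS * 60)))
```

In a study like this one, such per-band measures would typically be computed for every channel and participant before comparing the writing conditions.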
In a follow-up test, participants had to rewrite a previous essay—this time without their original tool. ChatGPT users struggled, barely recalling their previous responses, and showed weaker brain wave activity. In contrast, the brain-only group, now using ChatGPT for the first time, exhibited increased cognitive activity, suggesting that AI can support learning—but only when foundational thinking is already in place.
Kosmyna warns that heavy AI use during critical learning phases could impair long-term brain development, particularly in children. Psychiatrist Dr. Zishan Khan echoed this concern: 'Overreliance on LLMs may erode essential neural pathways related to memory, resilience, and deep thinking.'
Ironically, the paper itself became a case study in AI misuse. Some readers summarized it using ChatGPT, which produced hallucinated details, such as falsely claiming the study used GPT-4o. Kosmyna had anticipated this and included 'AI traps' in the document to test for such behavior.
MIT researchers are now expanding their work into programming and software engineering, and early results are even more troubling—suggesting broader implications for industries seeking to automate entry-level tasks.
While previous studies have highlighted AI's potential to boost productivity, this research underscores the urgent need for responsible AI use in education, backed by policies that balance efficiency with brain development.
OpenAI did not respond to a request for comment. Meanwhile, the debate on the role of AI in learning continues—with growing calls for regulation, transparency, and digital literacy.


Related Articles

Productivity puzzle: Solow's paradox has come to haunt AI adoption

Mint

AI enthusiasts, beware: predictions that the technology will suddenly boost productivity eerily echo those that followed the introduction of computers to the workplace. Back then, we were told that the miraculous new machines would automate vast swathes of white-collar work, leading to a lean, digital-driven economy. Fast forward 60 years, and it's more of the same. Shortly after the debut of ChatGPT in 2022, researchers at the Massachusetts Institute of Technology claimed employees would be 40% more productive than their AI-less counterparts.

These claims may prove to be no more durable than the Pollyannaish predictions of the Mad Men era. A rigorous study published by the National Bureau of Economic Research in May found only a 3% boost in time saved, while other studies have shown that reliance on AI for high-level cognitive work leads to less motivated, impaired employees. We are witnessing the makings of another 'productivity paradox', the term coined to describe how productivity unexpectedly stagnated, and in some cases declined, during the first four decades of the information age. The bright side is that the lessons learned then might help us manage our expectations today.

The invention of transistors, integrated circuits, memory chips and microprocessors fuelled exponential improvements in information technology from the 1960s onward, with computers reliably doubling in power roughly every two years at almost no increase in cost. It quickly became an article of faith that computers would lead to widespread automation (and structural unemployment): a single person armed with such a device could handle work that previously required hundreds of employees.

Over the next three decades, the service sector decisively embraced computers. Yet the promised gains did not materialize. In fact, studies from the late 1980s revealed that the services sector—what economist Stephen Roach described as 'the most heavily endowed with high-tech capital'—registered the worst productivity performance of the period. In response, economist Robert Solow famously quipped that 'we see computers everywhere except in the productivity statistics.'

Economists advanced multiple explanations for this puzzle (also known as 'Solow's paradox'). Least satisfying, perhaps, was the claim, still made today, that the whole thing was a mirage of mismeasurement and that the effects of massive automation somehow failed to show up in the economic data. Others have argued that the failure of infotech investments to live up to the hype can be laid at the feet of managers. There is some merit to this argument: studies of infotech adoption have shown that bosses spent indiscriminately on new equipment, all while hiring expensive workers charged with maintaining and constantly upgrading these systems. Computers, far from cutting the workforce, bloated it.

More compelling still was the 'time lag' hypothesis offered by economist Paul A. David. New technological regimes, he contended, generate intense conflict, regulatory battles and struggles for market share. Along the way, older ways of doing things persist alongside the new, even as much of the world is remade to accommodate the new technology. None of this translates into immediate efficiency—in fact, quite the opposite. As evidence, he cited the advent of electricity, a quicker source of manufacturing power than the steam it would eventually replace.

Nonetheless, it took 40 years for the adoption of electricity to translate into increased worker efficiency. Along the way, struggles to establish industry standards, waves of consolidation, regulatory battles and the need to redesign every single factory floor made this a messy, costly and prolonged process. The computer boom would prove to be similar.

These complaints did not disappear, but by the late 1990s the American economy finally showed a belated uptick in productivity, which some economists credited to the widespread adoption of information technology. Better late than never, as they say. However, efficiency soon declined once again, despite (or because of) the advent of the internet and all the other innovations of that era.

AI is no different. The new technology will have unintended consequences, many of which will offset or even entirely undermine its efficiency gains. That doesn't mean AI is useless or that corporations won't embrace it with enthusiasm. Anyone expecting an overnight increase in productivity, though, will be disappointed.

©Bloomberg

The author is a professor of history at the University of Georgia and co-author of 'Crisis Economics: A Crash Course in the Future of Finance'.

YouTube rolls out AI search results for Premium users: Will it impact views, engagement?

Indian Express

Google is bringing AI-generated search results to YouTube as part of its broader effort to reinvent the traditional search experience by integrating generative AI across its entire ecosystem. The AI-generated results on the video-sharing platform will appear at the top of the results page, featuring multiple YouTube videos along with an AI-generated summary of each. Users can tap a video's thumbnail to begin playing it directly from the search results, and the accompanying summary will highlight the information most relevant to the user's query.

For now, the AI-powered search experience is limited to YouTube Premium subscribers. It is an opt-in feature, meaning subscribers must manually enable it from YouTube's experimental-features page. The move signals Google's shift towards generative AI-based search and discovery, with AI-summarised answers replacing traditional links. Similar to AI Overviews in Google Search, the feature is designed to appear above organic search results as part of the company's strategy to have more of its users engage with its AI systems.

'In the coming days, our conversational AI tool will be expanding to some non-Premium users in the US. Premium members already love it for getting more info, recommendations, and even quizzing themselves on key concepts in academic videos,' YouTube said in a blog post published on June 26.

While only YouTube Premium subscribers can currently choose to see AI-generated search results, Google is likely to expand access to all users in the future. By surfacing AI-generated summaries of videos, the feature may make users less inclined to open and watch the videos themselves. It could also dent engagement, as fewer users comment, subscribe, and otherwise interact with content creators.

Something similar is already happening in web search. Multiple studies have shown that people increasingly look for information by asking chatbots like ChatGPT or Gemini rather than searching the web through browsers like Safari. This shift away from traditional search engines towards generative AI has negative consequences, especially for publishers and websites that rely on search traffic for revenue. A recent study by content licensing platform TollBit found that news sites and blogs receive 96 per cent less referral traffic from generative AI-driven search engines than from traditional Google Search.

When asked about publishers seeing a dip in traffic from Search, Elizabeth Reid, the head of Google Search, previously said: 'We see that the clicks to web pages when AI Overviews exist are of higher quality. People spend more time on these pages and engage more. They are expressing higher satisfaction with the responses when we show the AI Overviews.'

Even though the video is just one tap away, the AI-generated summary in YouTube search results will likely give users a good idea of the relevant parts of a video, which could make it harder for YouTube channels to grow and earn revenue. In addition, YouTube is bringing its Veo 3 AI video generation model to YouTube Shorts in the coming months, according to CEO Neal Mohan. The model, capable of generating cinematic-level visuals with complete sound and dialogue, was reportedly trained on subsets of the 20-billion-video library uploaded to YouTube.

Germany asks Apple, Google to ban Chinese AI app DeepSeek over privacy concerns

Hans India

Germany's data protection commissioner has formally asked Apple and Google to remove the Chinese AI chatbot DeepSeek from their app stores, citing concerns over the illegal transfer of user data to China. Meike Kamp, the country's commissioner for data protection, said on Friday that DeepSeek had failed to prove it safeguards German users' personal information at a level consistent with EU privacy standards. According to its own privacy policy, the company stores user queries and uploaded files on servers located in China.

'Chinese authorities have sweeping access rights to personal data held by Chinese companies,' said Kamp. She emphasized that DeepSeek had been given an opportunity in May to comply with EU data transfer regulations or voluntarily withdraw its app, and it did neither. In response, Google confirmed receipt of the notice and said it was reviewing the request. Apple has yet to comment, and DeepSeek did not respond to media inquiries.

The move comes amid growing global scrutiny of DeepSeek, which made headlines in January by claiming to have developed a low-cost AI model competitive with ChatGPT. While the announcement stirred interest, regulators in the EU and U.S. have since raised red flags about its data handling. Earlier this year, Italy blocked DeepSeek from app stores over insufficient transparency around personal data use. The Netherlands banned it on government devices, and Belgium advised officials to avoid the app pending further investigation. Spain's OCU consumer group has requested a national probe, while the UK government has called its use a personal choice but is monitoring potential security risks.

Meanwhile, U.S. lawmakers are preparing legislation that would bar federal agencies from using Chinese-developed AI, with a recent Reuters report alleging DeepSeek's involvement in Chinese military and intelligence operations. Germany's demand marks another blow to the Chinese firm's global credibility as governments become increasingly wary of foreign AI platforms and their data governance practices.
