Researchers say using ChatGPT can rot your brain. The truth is a little more complicated
Vitomir Kovanovic and Rebecca Marrone
Since ChatGPT appeared almost three years ago, the impact of artificial intelligence (AI) technologies on learning has been widely debated. Are they handy tools for personalised education, or gateways to academic dishonesty?
Most importantly, there has been concern that using AI will lead to a widespread 'dumbing down', or a decline in the ability to think critically. If students use AI tools too early, the argument goes, they may not develop basic skills for critical thinking and problem-solving.
Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to 'cognitive debt' and a 'likely decrease in learning skills'.
So what did the study find?
The difference between using AI and the brain alone
Over the course of four months, the MIT team asked 54 adults to write a series of three essays using either AI (ChatGPT), a search engine, or their own brains ('brain-only' group). The team measured cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays.
The cognitive engagement of those who used AI was significantly lower than the other two groups. This group also had a harder time recalling quotes from their essays and felt a lower sense of ownership over them.
Interestingly, for a final, fourth essay, participants switched roles (the brain-only group used AI and vice versa). The AI-to-brain group performed worse, and its cognitive engagement was only slightly higher than the brain-only group's had been in its first session, and far below that group's engagement in its third session.
The authors claim this demonstrates how prolonged use of AI led to participants accumulating 'cognitive debt'. When they finally had the opportunity to use their brains, they were unable to replicate the engagement or perform as well as the other two groups.
The authors are careful to note that only 18 participants (six per condition) completed the fourth, final session. The findings are therefore preliminary and require further testing.
Does this really show AI makes us stupider?
These results do not necessarily mean the participants who used AI accumulated 'cognitive debt'. In our view, the findings are better explained by the particular design of the study.
The change in neural connectivity of the brain-only group over the first three sessions was likely the result of becoming more familiar with the study task, a phenomenon known as the familiarisation effect. As study participants repeat the task, they become more familiar and efficient, and their cognitive strategy adapts accordingly.
When the AI group finally got to 'use their brains', they were doing the task without AI for the first time. As a result, they could not match the practice the other group had built up, and achieved only slightly better engagement than the brain-only group had shown in its first session.
To fully justify the researchers' claims, the AI-to-brain participants would also need to complete three writing sessions without AI.
Similarly, the fact that the brain-to-AI group used ChatGPT more productively and strategically is likely due to the nature of the fourth writing task, which required writing an essay on one of the previous three topics.
Because writing without AI had demanded more substantial engagement, these participants had far better recall of what they had written before. Hence, they primarily used AI to search for new information and to refine what they had previously written.
What are the implications of AI in assessment?
To understand the current situation with AI, we can look back to what happened when calculators first became available.
Back in the 1970s, their impact was regulated by making exams much harder. Instead of doing calculations by hand, students were expected to use calculators and spend their cognitive efforts on more complex tasks.
Effectively, the bar was significantly raised, which made students work equally hard (if not harder) than before calculators were available.
The challenge with AI is that, for the most part, educators have not raised the bar in a way that makes AI a necessary part of the process. Educators still require students to complete the same tasks and expect the same standard of work as they did five years ago.
In such situations, AI can indeed be detrimental. Students can for the most part offload critical engagement with learning to AI, which results in 'metacognitive laziness'.
However, just like calculators, AI can and should help us accomplish tasks that were previously impossible – and that still require significant engagement. For example, we might ask student teachers to use AI to produce a detailed lesson plan, which would then be evaluated for quality and pedagogical soundness in an oral examination.
In the MIT study, participants who used AI were producing the 'same old' essays. They adjusted their engagement to deliver the standard of work expected of them.
The same would happen if students were asked to perform complex calculations with or without a calculator. The group doing calculations by hand would sweat, while those with calculators would barely blink an eye.
