Researchers say using ChatGPT can rot your brain. The truth is a little more complicated


IOL News | 02-07-2025
Can ChatGPT lead to the demise of critical thinking, or is it simply that users don't put any critical thinking into their use of ChatGPT?
Image: Supplied
Vitomir Kovanovic and Rebecca Marrone
Since ChatGPT appeared almost three years ago, the impact of artificial intelligence (AI) technologies on learning has been widely debated. Are they handy tools for personalised education, or gateways to academic dishonesty?
Most importantly, there has been concern that using AI will lead to a widespread 'dumbing down', or decline in the ability to think critically. If students use AI tools too early, the argument goes, they may not develop basic skills for critical thinking and problem-solving.
Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to 'cognitive debt' and a 'likely decrease in learning skills'.
So what did the study find?
The difference between using AI and the brain alone
Over the course of four months, the MIT team asked 54 adults to write a series of three essays using either AI (ChatGPT), a search engine, or their own brains ('brain-only' group). The team measured cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays.
The cognitive engagement of those who used AI was significantly lower than the other two groups. This group also had a harder time recalling quotes from their essays and felt a lower sense of ownership over them.
Interestingly, participants switched roles for a final, fourth essay (the brain-only group used AI and vice versa). The AI-to-brain group performed worse, and its engagement was only slightly higher than the brain-only group's had been in their first session, and far below that group's engagement in its third session.
The authors claim this demonstrates how prolonged use of AI led to participants accumulating 'cognitive debt'. When they finally had the opportunity to use their brains, they were unable to replicate the engagement or perform as well as the other two groups.
Cautiously, the authors note that only 18 participants (six per condition) completed the fourth, final session. Therefore, the findings are preliminary and require further testing.
Does this really show AI makes us stupider?
These results do not necessarily mean that students who used AI accumulated 'cognitive debt'. In our view, the findings are due to the particular design of the study.
The change in neural connectivity of the brain-only group over the first three sessions was likely the result of becoming more familiar with the study task, a phenomenon known as the familiarisation effect. As study participants repeat the task, they become more familiar and efficient, and their cognitive strategy adapts accordingly.
When the AI group finally got to 'use their brains', they were doing the task without AI for the first time. As a result, they could not match the other group's accumulated practice, achieving only slightly better engagement than the brain-only group had shown in its first session.
To fully justify the researchers' claims, the AI-to-brain participants would also need to complete three writing sessions without AI.
Similarly, the fact that the brain-to-AI group used ChatGPT more productively and strategically is likely due to the nature of the fourth writing task, which required writing an essay on one of the previous three topics.
Because writing without AI had required more substantial engagement, these participants had far better recall of what they had written previously. Hence, they primarily used AI to search for new information and to refine what they had already written.
What are the implications of AI in assessment?
To understand the current situation with AI, we can look back to what happened when calculators first became available.
Back in the 1970s, their impact was regulated by making exams much harder. Instead of doing calculations by hand, students were expected to use calculators and spend their cognitive efforts on more complex tasks.
Effectively, the bar was significantly raised, which made students work equally hard (if not harder) than before calculators were available.
The challenge with AI is that, for the most part, educators have not raised the bar in a way that makes AI a necessary part of the process. Educators still require students to complete the same tasks and expect the same standard of work as they did five years ago.
In such situations, AI can indeed be detrimental. Students can for the most part offload critical engagement with learning to AI, which results in 'metacognitive laziness'.
However, just like calculators, AI can and should help us accomplish tasks that were previously impossible, while still requiring significant engagement. For example, we might ask student teachers to use AI to produce a detailed lesson plan, which would then be evaluated for quality and pedagogical soundness in an oral examination.
In the MIT study, participants who used AI were producing the 'same old' essays. They adjusted their engagement to deliver the standard of work expected of them.
The same would happen if students were asked to perform complex calculations with or without a calculator. The group doing calculations by hand would sweat, while those with calculators would barely blink an eye.