Researchers say using ChatGPT can rot your brain. The truth is a little more complicated
Image: Supplied
Vitomir Kovanovic and Rebecca Marrone
Since ChatGPT appeared almost three years ago, the impact of artificial intelligence (AI) technologies on learning has been widely debated. Are they handy tools for personalised education, or gateways to academic dishonesty?
Most importantly, there has been concern that using AI will lead to a widespread 'dumbing down', or decline in the ability to think critically. If students use AI tools too early, the argument goes, they may not develop basic skills for critical thinking and problem-solving.
Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to 'cognitive debt' and a 'likely decrease in learning skills'.
So what did the study find?
The difference between using AI and the brain alone
Over the course of four months, the MIT team asked 54 adults to write a series of three essays using either AI (ChatGPT), a search engine, or their own brains ('brain-only' group). The team measured cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays.
The cognitive engagement of those who used AI was significantly lower than the other two groups. This group also had a harder time recalling quotes from their essays and felt a lower sense of ownership over them.
Interestingly, participants switched roles for a final, fourth essay (the brain-only group used AI and vice versa). The AI-to-brain group performed worse: its engagement was only slightly higher than the brain-only group's had been in that group's first session, and far below the brain-only group's engagement in its third session.
The authors claim this demonstrates how prolonged use of AI led to participants accumulating 'cognitive debt'. When they finally had the opportunity to use their brains, they were unable to replicate the engagement or perform as well as the other two groups.
The authors themselves caution that only 18 participants (six per condition) completed the fourth and final session, so the findings are preliminary and require further testing.
Does this really show AI makes us stupider?
These results do not necessarily mean that students who used AI accumulated 'cognitive debt'. In our view, the findings are due to the particular design of the study.
The change in neural connectivity of the brain-only group over the first three sessions was likely the result of becoming more familiar with the study task, a phenomenon known as the familiarisation effect. As study participants repeat the task, they become more familiar and efficient, and their cognitive strategy adapts accordingly.
When the AI group finally got to 'use their brains', they were doing the task without AI for the first time. As a result, they could not match the familiarity the brain-only group had built up over three sessions, achieving only slightly better engagement than that group had shown in its first session.
To fully justify the researchers' claims, the AI-to-brain participants would also need to complete three writing sessions without AI.
Similarly, the fact that the brain-to-AI group used ChatGPT more productively and strategically is likely due to the nature of the fourth writing task, which required writing an essay on one of the previous three topics.
Because writing without AI had demanded more substantial engagement, these participants had far better recall of what they had written earlier. Hence, they primarily used AI to search for new information and refine what they had previously written.
What are the implications of AI in assessment?
To understand the current situation with AI, we can look back to what happened when calculators first became available.
Back in the 1970s, their impact was regulated by making exams much harder. Instead of doing calculations by hand, students were expected to use calculators and spend their cognitive efforts on more complex tasks.
Effectively, the bar was raised significantly, which made students work as hard as (if not harder than) they had before calculators were available.
The challenge with AI is that, for the most part, educators have not raised the bar in a way that makes AI a necessary part of the process. Educators still require students to complete the same tasks and expect the same standard of work as they did five years ago.
In such situations, AI can indeed be detrimental. Students can, for the most part, offload critical engagement with learning to AI, which results in 'metacognitive laziness'.
However, just like calculators, AI can and should help us accomplish tasks that were previously impossible – and that still require significant engagement. For example, we might ask student teachers to use AI to produce a detailed lesson plan, which would then be evaluated for quality and pedagogical soundness in an oral examination.
In the MIT study, participants who used AI were producing the 'same old' essays. They adjusted their engagement to deliver the standard of work expected of them.
The same would happen if students were asked to perform complex calculations with or without a calculator. The group doing calculations by hand would sweat, while those with calculators would barely blink an eye.
