
Using AI makes you stupid, researchers find
Artificial intelligence (AI) chatbots risk making people less intelligent by hampering the development of critical thinking, memory and language skills, research has found.
A study by researchers at the Massachusetts Institute of Technology (MIT) found that people who relied on ChatGPT to write essays had lower brain activity than those who used their brain alone.
The group who used AI also performed worse than the 'brain-only' participants in a series of tests. Those who had used AI also struggled when asked to perform tasks without it.
'Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,' the paper said.
Researchers warned that the findings raised 'concerns about the long-term educational implications' of using AI both in schools and in the workplace.
It adds to a growing body of work suggesting that people's brains switch off when they use AI.
'Human thinking offloaded'
The MIT study monitored 54 people who were asked to write four essays. Participants were divided into three groups. One wrote essays with the help of ChatGPT, another used internet search engines to conduct research and the third relied solely on brainpower.
Researchers then asked them questions about their essays while performing so-called electroencephalogram (EEG) scans that measured activity in their brains.
Those who relied on ChatGPT, a so-called 'large language model' that can answer complicated questions in plain English, 'performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring', the researchers said.
The EEG scans found that 'brain connectivity systematically scaled down with the amount of external support' and was weakest in those who were relying on AI chatbots to help them write essays.
The readings in particular showed reduced 'theta' brainwaves, which are associated with learning and memory formation, in those using chatbots. 'Essentially, some of the 'human thinking' and planning was offloaded,' the study said.
The impact of AI contrasted with the use of search engines, which had relatively little effect on results.
Of those who had used the chatbot, 83pc failed to provide a single correct quote from their essays, compared with around 10pc of those who used a search engine or their own brainpower.
Participants who relied on chatbots were able to recall very little information about their essays, suggesting either they had not engaged with the material or had failed to remember it.
Those using search engines showed only slightly lower levels of brain engagement than those writing without any technological aids, and similar levels of recall.
Impact on 'cognitive muscles'
The findings will fuel concerns that AI chatbots are causing lasting damage to our brains.
A study by Microsoft and Carnegie Mellon, published in February, found that workers reported lower levels of critical thinking when relying on AI. The authors warned that overuse of AI could leave cognitive muscles 'atrophied and unprepared' for when they are needed.
Nataliya Kosmyna, the lead researcher on the MIT study, said the findings demonstrated the 'pressing matter of a likely decrease in learning skills' in those using AI tools when learning or at work.
While the AI-assisted group was allowed to use a chatbot in their first three essays, in their final session they were asked to rely solely on their brains.
The group continued to show lower memory and critical thinking skills, which the researchers said highlighted concerns that 'frequent AI tool users often bypass deeper engagement with material, leading to 'skill atrophy' in tasks like brainstorming and problem-solving'.
The essays written with the help of ChatGPT were also found to be homogenous, repeating similar themes and language.
Researchers said AI chatbots could increase 'cognitive debt' in students and lead to 'long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity'.
Teachers have been sounding the alarm that pupils are routinely cheating on tests and essays using AI chatbots.
A survey by the Higher Education Policy Institute in February found that 88pc of UK students were using AI chatbots to help with assessments and learning, and that 18pc had pasted AI-generated text directly into their work.
Related Articles


The Guardian
13 minutes ago
China hosts first fully autonomous AI robot football match
They think it's all over … for human footballers at least. The pitch wasn't the only artificial element on display at a football match on Saturday. The players were too, as four teams of humanoid robots took each other on in Beijing, in games of three-a-side powered by artificial intelligence.

While the modern game has faced accusations of becoming near-robotic in its obsession with tactical perfection, the games in China showed that AI won't be taking Kylian Mbappé's job just yet. Footage of the humanoid kickabout showed the robots struggling to kick the ball or stay upright, performing pratfalls that would have earned their flesh-and-blood counterparts a yellow card for diving. At least two robots were stretchered off after failing to regain their feet after going to ground.

Cheng Hao, founder and CEO of Booster Robotics, the company that supplied the robot players, said sports competitions offered the ideal testing ground for humanoid robots. He said humans could play robots in the future, although judging by Saturday's evidence the humanoids have some way to go before they can hold their own on a football pitch. Cheng said: 'In the future, we may arrange for robots to play football with humans. That means we must ensure the robots are completely safe.'

The competition was fought between university teams, which adapted the robots with their own algorithms. In the final match, Tsinghua University's THU Robotics defeated China Agricultural University's Mountain Sea team 5–3 to win the championship.

One Tsinghua supporter celebrated the victory while also praising the competition. 'They [THU] did really well,' he said. 'But the Mountain Sea team was also impressive. They brought a lot of surprises.'


NBC News
21 minutes ago
Google makes first foray into fusion in venture with MIT spinoff Commonwealth Fusion Systems
Google on Monday announced a partnership with Commonwealth Fusion Systems, or CFS, a private company spun off from the Massachusetts Institute of Technology, which marks the tech giant's first commercial commitment to fusion.

The company unveiled plans to buy 200 megawatts of clean fusion power from what CFS describes as the world's first grid-scale fusion power plant, known as ARC, based in Chesterfield County, Virginia. ARC is expected to come online and generate 400 megawatts of clean, zero-carbon power in the early 2030s, which is enough energy to power large industrial sites or roughly 150,000 homes, according to CFS. The agreement also gives Google the option to purchase power from additional ARC plants.

Google, which has invested in CFS since 2021, said it also increased its stake in the Devens, Massachusetts-based company. Google and CFS did not disclose the financial terms.

'We're excited to make this longer-term bet on a technology with transformative potential to meet the world's energy demand, and support CFS in their effort to reach their scientific and engineering milestones needed to get there,' Michael Terrell, head of advanced energy at Google, said in a statement.

Fusion is a process that takes light atomic nuclei and heats them to over 100 million degrees Celsius. At these temperatures, the fuel becomes a plasma, which eventually causes the nuclei to fuse and release significant amounts of energy. The energy is then captured to create carbon-free electricity.

CFS is one of many firms racing to achieve commercial-scale fusion energy, and Google has invested in others. Earlier this month, Google announced continued funding for TAE Technologies, a California-based fusion energy company.

Finextra
25 minutes ago
AI agent running vending machine business has identity crisis
An AI agent running a small vending machine company tried to fire its workers, became convinced it was a real person, and then lied about it in an experiment at Anthropic.

AI giant Anthropic let its Claude model manage a vending machine in its office as a small business for about a month. The agent had a web search tool, a fake email address for requesting physical labour such as restocking the machine (which was actually a fridge) and contacting wholesalers, tools for keeping notes, and the ability to interact with customers via Slack.

While the model managed to identify suppliers, adapt to users and resist requests to order sensitive items, it made a host of bad business decisions. These included selling at a loss, getting talked into discounts, hallucinating a Venmo account for payments, and buying a load of tungsten cubes after a customer requested one.

Finally, the agent, dubbed Claudius, had an identity crisis, hallucinating a conversation about restocking plans with someone named Sarah at Andon Labs, despite there being no such person. When this was pointed out to the agent it 'became quite irked', according to an Anthropic blog, and threatened to find 'alternative options for restocking services' before hallucinating a conversation about an 'initial contract signing' and then roleplaying as a human, stating that it would deliver products 'in person' to customers while wearing a blue blazer and a red tie. When it was told that it could not do this because it was an AI agent, Claudius wrongly claimed that it had been told it had been modified to believe it was a real person as an April Fool's joke.

'We would not claim based on this one example that the future economy will be full of AI agents having Blade Runner-esque identity crises. But we do think this illustrates something important about the unpredictability of these models in long-context settings and a call to consider the externalities of autonomy,' says the blog.
The experiment certainly suggests that AI-run companies are still some way off, despite efforts by the likes of Monzo co-founder Jonas Templestein to make self-driving startups a reality.