
Does Using ChatGPT Change Your Brain Activity? Study Sparks Debate
The brains of people writing an essay with ChatGPT are less engaged than those of people blocked from using any online tools for the task, a study finds. The investigation is part of a broader movement to assess whether artificial intelligence (AI) is making us cognitively lazy.
Computer scientist Nataliya Kosmyna at the MIT Media Lab in Cambridge, Massachusetts, and her colleagues measured brain-wave activity in university students as they wrote essays using a chatbot, an Internet search tool or no Internet access at all. Although the main result is unsurprising, some of the study's findings are more intriguing: for instance, the team saw hints that relying on a chatbot for initial tasks might lead to relatively low levels of brain engagement even when the tool is later taken away.
Echoing some posts about the study on online platforms, Kosmyna is careful to say that the results shouldn't be overinterpreted. The study cannot and did not show 'dumbness in the brain, no stupidity, no brain on vacation,' Kosmyna says, laughing. It involved only a few dozen participants over a short time and cannot address whether habitual chatbot use reshapes our thinking in the long term, or how the brain might respond during other AI-assisted tasks. 'We don't have any of these answers in this paper,' Kosmyna says. The work was posted ahead of peer review on the preprint server arXiv on 10 June.
Easy essays
Kosmyna's team recruited 60 students, aged 18 to 39, from five universities around the city of Boston, Massachusetts. The researchers asked them to spend 20 minutes crafting a short essay answering questions, such as 'should we always think before we speak?', that appear on Scholastic Assessment Tests, or SATs.
The participants were divided into three groups: one used ChatGPT, powered by OpenAI's large language model GPT-4o, as the sole source of information for their essays; another used Google to search for material (without any AI-assisted answers); and the third was forbidden to go online at all. In the end, 54 participants wrote essays answering three questions while in their assigned group, and then 18 were re-assigned to a new group to write a fourth essay, on one of the topics that they had tackled previously.
Each student wore a commercial electrode-covered cap, which collected electroencephalography (EEG) readings as they wrote. These headsets measure tiny voltage changes from brain activity and can show which broad regions of the brain are 'talking' to each other.
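For readers curious how 'connectivity' is quantified from EEG, the sketch below shows one common approach: computing spectral coherence between pairs of channels, where values near 1 indicate that two regions are oscillating in lockstep. This is a minimal illustration using synthetic signals and standard SciPy tools, not the preprint's actual analysis pipeline; the sampling rate, channel names and frequency band are assumptions for the example.

```python
# Minimal sketch (not the study's pipeline): spectral coherence between two
# EEG channels as a rough proxy for functional connectivity.
# Synthetic signals stand in for real recordings; fs and band edges are assumed.
import numpy as np
from scipy.signal import coherence

fs = 256                      # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)  # one minute of "recording"

rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 10 * t)  # shared 10 Hz (alpha) rhythm
ch_frontal = shared + 0.5 * rng.standard_normal(t.size)   # hypothetical frontal channel
ch_parietal = shared + 0.5 * rng.standard_normal(t.size)  # hypothetical parietal channel

# Coherence spectrum between the two channels (0 = unrelated, 1 = perfectly coupled)
freqs, coh = coherence(ch_frontal, ch_parietal, fs=fs, nperseg=2 * fs)

# Summarise connectivity as the mean coherence in the alpha band (8-12 Hz)
alpha_band = (freqs >= 8) & (freqs <= 12)
print(f"Mean alpha-band coherence: {coh[alpha_band].mean():.2f}")
```

A full analysis would repeat this across many channel pairs and frequency bands, and other metrics, such as phase-locking value or directed-flow measures, are also widely used; coherence is shown here only because it is a simple, standard example.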
The students who wrote essays using only their brains showed the strongest, widest-ranging connectivity among brain regions, and more activity flowing from the back of the brain to the front, a decision-making area. They were also, unsurprisingly, better able to quote from their own essays when questioned by the researchers afterwards.
The Google group, by comparison, had stronger activations in areas known to be involved with visual processing and memory. And the chatbot group displayed the least brain connectivity during the task.
More brain connectivity isn't necessarily good or bad, Kosmyna says. In general, more brain activity might be a sign that someone is engaging more deeply with a task, or it might be a sign of inefficiency in thinking, or an indication that the person is overwhelmed by 'cognitive overload'.
Creativity lost?
Interestingly, when the participants who initially used ChatGPT for their essays switched to writing without any online tools, their brains ramped up connectivity — but not to the same level as in the participants who worked without the tools from the beginning.
'This evidence aligns with a worry that many creativity researchers have about AI — that overuse of AI, especially for idea generation, may lead to brains that are less well-practised in core mechanisms of creativity,' says Adam Green, co-founder of the Society for the Neuroscience of Creativity and a cognitive neuroscientist at Georgetown University in Washington DC.
But only 18 people were included in this last part of the study, Green notes, which adds uncertainty to the findings. He also says there could be other explanations for the observations: for instance, these students were rewriting an essay on a topic they had already tackled, and therefore the task might have drawn on cognitive resources that differed from those required when writing about a brand-new topic.
Confoundingly, the study also showed that switching to a chatbot to write an essay, after previously composing one without any online tools, boosted brain connectivity, which Green says is the opposite of what you might expect. It suggests that the point at which AI tools are introduced to learners could shape how much they benefit, Kosmyna says. 'The timing might be important.'
Many educational scholars are optimistic about the use of chatbots as effective, personalized tutors. Guido Makransky, an educational psychologist at the University of Copenhagen, says these tools work best when they guide students to ask reflective questions, rather than giving them answers.
'It's an interesting paper, and I can see why it's getting so much attention,' Makransky says. 'But in the real world, students would and should interact with AI in a different way.'
Related Articles
5 ways people build relationships with AI
Stories about people building emotional connections with AI are appearing more often, but Anthropic just dropped some numbers claiming it's far from as common as it might seem. Scraping 4.5 million conversations from Claude, the company discovered that only 2.9 percent of users engage with it for emotional or personal support. Anthropic wanted to emphasize that while sentiment usually improves over the conversation, Claude is not a digital shrink. It rarely pushes back outside of safety concerns, meaning it won't give medical advice and will tell people not to self-harm.

But those numbers might be more about the present than the future. Anthropic itself admits the landscape is changing fast, and what counts as "affective" use today may not be so rare tomorrow. As more people interact with chatbots like Claude, ChatGPT, and Gemini, and do so more often, more of them will bring AI into their emotional lives. So, how exactly are people using AI for support right now? Current usage might also predict how people will use these tools in the future as AI gets more sophisticated and personal.

Let's start with the idea of AI as a not-quite therapist. While no AI model today is a licensed therapist (and they all make that disclaimer loud and clear), people still engage with them as if they are. They type things like, "I'm feeling really anxious about work. Can you talk me through it?" or "I feel stuck. What questions should I ask myself?" Whether the responses that come back are helpful probably varies, but plenty of people claim to have walked away from an AI therapist feeling at least a little calmer. That's not because the AI gave them a miracle cure, but because it gave them a place to let thoughts unspool without judgment. Sometimes, just practicing vulnerability is enough to start seeing benefits.

Sometimes, though, the help people need is less structured. They don't want guidance so much as relief. Enter what could be called the emotional emergency exit. Imagine it's 1 AM and everything feels a little too much. You don't want to wake up your friend, and you definitely don't want to scroll more doom-laced headlines. So you open an AI app and type, "I'm overwhelmed." It will respond, probably with something calm and gentle. It might even guide you through a breathing exercise, say something kind, or offer a little bedtime story in a soothing tone. Some people use AI this way, like a pressure valve – a place to decompress where nothing is expected in return. One user admitted they talk to Claude before and after every social event, just to rehearse and then unwind. It's not therapy. It's not even a friend. But it's there.

For now, the best-case scenario is a kind of hybrid. People use AI to prep, to vent, to imagine new possibilities. And then, ideally, they take that clarity back to the real world. Into conversations, into creativity, into their communities. But even if the AI isn't your therapist or your best friend, it might still be the one who listens when no one else does.

Humans are indecisive creatures, and figuring out what to do about big decisions is tough, but some have found AI useful for navigating those choices. The AI won't recall what you did last year or guilt you about your choices, which some people find refreshing. Ask it whether to move to a new city, end a long relationship, or splurge on something you can barely justify, and it will calmly lay out the pros and cons. You can even ask it to simulate two inner voices, the risk-taker and the cautious planner. Each can make its case, and you can feel better that you made an informed choice. That kind of detached clarity can be incredibly valuable, especially when your real-world sounding boards are too close to the issue or too emotionally invested.

Social situations can cause plenty of anxiety, and it's easy for some to spiral into thinking about what could go wrong. AI can help them as a kind of social script coach. Say you want to say no without causing a fight, or you're meeting people you want to impress but are worried about your first impression. AI can help draft a text to decline an invite, suggest ways to ease yourself into conversations with different people, or play a role so you can rehearse full conversations, testing different phrasings to see what feels good.

Accountability partners are a common way for people to help each other achieve their goals: someone who will make sure you go to the gym, go to sleep at a reasonable hour, and even maintain a social life and reach out to friends. Habit-tracking apps can help if you don't have the right friend or friends to help you. But AI can be a quieter co-pilot for real self-improvement. You can tell it your goals and ask it to check in with you, remind you gently, or help reframe things when motivation dips. Someone trying to quit smoking might ask ChatGPT to help track cravings and write motivational pep talks. Or an AI chatbot might ensure you keep up your journaling with reminders and suggestions for what to write about. It's no surprise that people might start to feel some fondness (or annoyance) toward the digital voice telling them to get up early to work out or to invite people they haven't seen in a while to meet up for a meal.

Related to using AI for making decisions, some people look to AI when they're grappling with questions of ethics or integrity. These aren't always monumental moral dilemmas; plenty of everyday choices can weigh heavily. Is it okay to tell a white lie to protect someone's feelings? Should you report a mistake your coworker made, even if it was unintentional? What's the best way to tell your roommate they're not pulling their weight without damaging the relationship? AI can act as a neutral sounding board. It will suggest ethical ways to consider things, like whether accepting a friend's wedding invite while secretly planning not to attend is better or worse than declining outright. The AI doesn't have to offer a definitive ruling. It can map out competing values and help define the user's principles and how they lead to an answer. In this way, AI serves less as a moral authority than as a flashlight in the fog.

Right now, only a small fraction of interactions fall into that category. But what happens when these tools become even more deeply embedded in our lives? What happens when your AI assistant is whispering in your earbuds, popping up in your glasses, or helping schedule your day with reminders tailored not just to your time zone but to your temperament? Anthropic might not count all of these as affective use, but maybe they should. If you're reaching for an AI tool to feel understood, get clarity, or move through something difficult, that's not just information retrieval. That's connection, or at least the digital shadow of one.
Meta hires four more OpenAI researchers, The Information reports
(Reuters) - Meta Platforms is hiring four more OpenAI artificial intelligence researchers, The Information reported on Saturday. The researchers, Shengjia Zhao, Jiahui Yu, Shuchao Bi and Hongyu Ren, have each agreed to join, the report said, citing a person familiar with their hiring. Earlier this week, the Instagram parent hired Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai, who were all working in OpenAI's Zurich office, the Wall Street Journal reported. Meta and ChatGPT maker OpenAI did not immediately respond to a Reuters request for comment. The company has recently been pushing to hire more researchers from OpenAI to join chief executive Mark Zuckerberg's superintelligence efforts. Reuters could not immediately verify the report.

OpenAI Loses Four Key Researchers to Meta
Jun 28, 2025, 4:16 PM. Mark Zuckerberg has been working to poach talent from rival labs for his new superintelligence team.

Four OpenAI researchers are leaving the company to go to Meta, two sources confirm to WIRED. Shengjia Zhao, Shuchao Bi, Jiahui Yu, and Hongyu Ren have joined Meta's superintelligence team. Their OpenAI Slack profiles have been deactivated. The Information first reported on the departures. It's the latest in a series of aggressive moves by Mark Zuckerberg, who is racing to catch up to OpenAI, Anthropic and Google in building artificial general intelligence.

Earlier this month, OpenAI CEO Sam Altman said that Meta has been making 'giant offers' to OpenAI staffers with '$100 million signing bonuses.' He added that 'none of our best people have decided to take them up on that.' A source at OpenAI confirmed the offers.

Hongyu Ren was OpenAI's post-training lead for the o3 and o4-mini models, along with the open-source model that's set to be released this summer, sources say. Post-training is the process of refining a model after it has been trained on a primary dataset. Shengjia Zhao is highly skilled in deep-learning research, according to another source. He joined OpenAI in the summer of 2022 and helped build the startup's GPT-4 model. Jiahui Yu did a stint at Google DeepMind before joining OpenAI in late 2023. Shuchao Bi was a manager of OpenAI's multimodal models.

The departures come shortly after the company lost three researchers from its Zurich office, the Wall Street Journal reported. OpenAI and Meta did not immediately respond to a request for comment.

This is a developing story. Please check back for updates.