Does Using ChatGPT Really Change Your Brain Activity?
The brains of people writing an essay with ChatGPT are less engaged than those of people blocked from using any online tools for the task, a study finds. The investigation is part of a broader movement to assess whether artificial intelligence (AI) is making us cognitively lazy.
Computer scientist Nataliya Kosmyna at the MIT Media Lab in Cambridge, Massachusetts, and her colleagues measured brain-wave activity in university students as they wrote essays either using a chatbot or an Internet search tool, or without any Internet at all. Although the main result is unsurprising, some of the study's findings are more intriguing: for instance, the team saw hints that relying on a chatbot for initial tasks might lead to relatively low levels of brain engagement even when the tool is later taken away.
Echoing cautions raised in some online discussions of the study, Kosmyna says the results shouldn't be overinterpreted. The study does not show "dumbness in the brain, no stupidity, no brain on vacation," she says with a laugh. It involved only a few dozen participants over a short period, and it cannot address whether habitual chatbot use reshapes our thinking in the long term, or how the brain might respond during other AI-assisted tasks. "We don't have any of these answers in this paper," Kosmyna says. The work was posted ahead of peer review on the preprint server arXiv on 10 June.
Kosmyna's team recruited 60 students, aged 18 to 39, from five universities around the city of Boston, Massachusetts. The researchers asked them to spend 20 minutes crafting a short essay answering questions, such as 'should we always think before we speak?', that appear on Scholastic Assessment Tests, or SATs.
The participants were divided into three groups: one used ChatGPT, powered by OpenAI's large language model GPT-4o, as the sole source of information for their essays; another used Google to search for material (without any AI-assisted answers); and the third was forbidden to go online at all. In the end, 54 participants wrote essays answering three questions while in their assigned group, and then 18 were re-assigned to a new group to write a fourth essay, on one of the topics that they had tackled previously.
Each student wore a commercial electrode-covered cap, which collected electroencephalography (EEG) readings as they wrote. These headsets measure tiny voltage changes from brain activity and can show which broad regions of the brain are 'talking' to each other.
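One simple way to picture what "connectivity" means here: compare the voltage traces recorded at different electrodes and ask how strongly they rise and fall together. The sketch below is purely illustrative and is not the analysis the study used; it estimates connectivity naively as the pairwise Pearson correlation between simulated channel signals, with made-up channel counts and data.

```python
import numpy as np

# Illustrative sketch only: a naive "connectivity" estimate between EEG
# channels via pairwise Pearson correlation. Real EEG pipelines use far
# more sophisticated measures; the data here are simulated.
rng = np.random.default_rng(0)
n_channels, n_samples = 4, 1000            # e.g. 4 electrodes, ~4 s at 250 Hz
signals = rng.standard_normal((n_channels, n_samples))
signals[1] += 0.8 * signals[0]             # make channels 0 and 1 covary

corr = np.corrcoef(signals)                # symmetric channel-by-channel matrix
np.fill_diagonal(corr, 0.0)                # ignore each channel's self-correlation
mean_connectivity = np.abs(corr).sum() / (n_channels * (n_channels - 1))
print(f"mean connectivity: {mean_connectivity:.3f}")
```

Channel pairs whose signals move together (like 0 and 1 above) produce large matrix entries; a brain "working harder across regions" would, in this toy picture, yield a higher average.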
The students who wrote essays using only their brains showed the strongest, widest-ranging connectivity among brain regions, with more activity flowing from the back of the brain towards the frontal regions involved in decision-making. They were also, unsurprisingly, better able to quote from their own essays when questioned by the researchers afterwards.
The Google group, by comparison, had stronger activations in areas known to be involved with visual processing and memory. And the chatbot group displayed the least brain connectivity during the task.
More brain connectivity isn't necessarily good or bad, Kosmyna says. In general, more brain activity might be a sign that someone is engaging more deeply with a task, or it might be a sign of inefficiency in thinking, or an indication that the person is overwhelmed by 'cognitive overload'.
Interestingly, when the participants who initially used ChatGPT for their essays switched to writing without any online tools, their brains ramped up connectivity — but not to the same level as in the participants who worked without the tools from the beginning.
'This evidence aligns with a worry that many creativity researchers have about AI — that overuse of AI, especially for idea generation, may lead to brains that are less well-practised in core mechanisms of creativity,' says Adam Green, co-founder of the Society for the Neuroscience of Creativity and a cognitive neuroscientist at Georgetown University in Washington DC.
But only 18 people were included in this last part of the study, Green notes, which adds uncertainty to the findings. He also says there could be other explanations for the observations: for instance, these students were rewriting an essay on a topic they had already tackled, and therefore the task might have drawn on cognitive resources that differed from those required when writing about a brand-new topic.
Counterintuitively, the study also showed that switching to a chatbot to write an essay after previously composing it without any online tools boosted brain connectivity — the opposite, Green says, of what you might expect. This suggests that when AI tools are introduced in a learner's process could matter, Kosmyna says. "The timing might be important."
Many educational scholars are optimistic about the use of chatbots as effective, personalized tutors. Guido Makransky, an educational psychologist at the University of Copenhagen, says these tools work best when they guide students to ask reflective questions, rather than giving them answers.
'It's an interesting paper, and I can see why it's getting so much attention,' Makransky says. 'But in the real world, students would and should interact with AI in a different way.'
This article is reproduced with permission and was first published on June 25, 2025.