
Latest news with #MediaLab

ChatGPT could affect your critical thinking skills, study finds

Yahoo

a day ago

  • Science
  • Yahoo

MIT researchers conducted a study analyzing the impact that using ChatGPT in writing tasks can have on brain activity. The study is part of the MIT Media Lab project "Your Brain on ChatGPT," which is designed to assess the cognitive effect of relying on large language models (LLMs) like ChatGPT when authoring essays.

Dig deeper

Fifty-four people between the ages of 18 and 39 participated in the study. They were divided into three groups to compose several essays: one group was allowed to use ChatGPT; the second, Google search; and the third, no AI tools at all. Participants wore an electroencephalography (EEG) headset while writing so researchers could measure brain activity across 32 regions of the brain.

Each participant drafted essays in three sessions, and in a fourth session some participants were reassigned: those who had used ChatGPT transitioned to writing unaided ("LLM-to-Brain"), while some who had started with the brain-only exercise used the LLM ("Brain-to-LLM").

The participants' essays were scored by both human teachers and an AI judge, and after the sessions researchers interviewed each person about how much ownership they felt over their writing. The researchers determined that of the three groups, the ChatGPT users showed the lowest brain engagement. The team notes that the study has limitations, documented in its report and website, and that more research is needed to better understand the use of ChatGPT in various parts of daily life.

The Source

Information for this story was provided by an MIT study, part of the MIT Media Lab project "Your Brain on ChatGPT." This story was reported from Washington, D.C.

Is Using ChatGPT to Write Your Essay Bad for Your Brain?

Time Magazine

a day ago

  • Science
  • Time Magazine

TIME reporter Andrew Chow discussed the findings of a new study about how ChatGPT affects critical thinking with Nataliya Kosmyna. Kosmyna was part of a team of researchers at MIT's Media Lab who set out to determine whether ChatGPT and large language models (LLMs) are eroding critical thinking, and the study returned some concerning results. The study divided 54 subjects into three groups and asked them to write several essays using OpenAI's ChatGPT, Google's search engine, and nothing at all, respectively. Researchers used an EEG to record the writers' brain activity. They found that of the three groups, the ChatGPT users had the lowest brain engagement and consistently underperformed at neural, linguistic, and behavioral levels. Over the course of several months, the ChatGPT users grew lazier with each subsequent essay, often resorting to copy and paste.

Thinking capped: How generative AI may be quietly dulling our brains

Business Standard

a day ago

  • Business Standard

It has been barely three years since generative artificial intelligence (AI) chatbots such as ChatGPT appeared on the scene, and there is already concern over how they might be affecting the human brain. The early prognosis isn't good.

The findings of a recent study by researchers from the Massachusetts Institute of Technology (MIT) Media Lab, Wellesley College, and MassArt indicate that tools such as ChatGPT negatively impact the neural, linguistic, and cognitive capabilities of humans. While this study is preliminary and limited in scope, involving just 54 subjects aged 18 to 39, it found that those who used ChatGPT for writing essays (as part of the research experiment) showed measurably lower brain activity than their peers who didn't. 'Writing without (AI) assistance increased brain network interactions across multiple frequency bands, engaging higher cognitive load, stronger executive control, and deeper creative processing,' it found.

Various experts in India, too, reiterate the concerns of overdependence on AI, to the extent where people outsource even thinking to AI. Those dealing with the human brain define this as 'cognitive offloading' which, they caution, can diminish critical thinking and reasoning capability while also building a sense of social isolation – in effect, dragging humans into an 'idiot trap'.

Training the brain to be lazy

'We now rely on AI for tasks we used to do ourselves — writing essays, solving problems, even generating ideas,' says Nitin Anand, additional professor of clinical psychology, National Institute of Mental Health and Neuro Sciences (Nimhans), Bengaluru. 'That means less practice in critical thinking, memory recall, and creative reasoning.'

This dependence, he adds, is also weakening people's ability to delay gratification. 'AI tools are designed for speed. They answer instantly. But that trains people to expect quick solutions everywhere, reducing patience and long-term focus.'

Anand warns that this behavioural shift is feeding into a pattern of digital addiction, which he classifies as the 4Cs: craving, compulsion, loss of control, and consequences. 'When someone cannot stop checking their phone, feels restless without it, and suffers in real life because of it — that's addiction,' he says, adding that the threat of addiction to technology has increased manifold with something as adaptive and customisable as AI.

Children and adolescents are particularly at risk, says Pankaj Kumar Verma, consultant psychiatrist and director of Rejuvenate Mind Neuropsychiatry Clinic, New Delhi. 'Their prefrontal cortex — the brain's centre for planning, attention, and impulse control — is still developing,' he explains. 'Constant exposure to fast-changing AI content overstimulates neural circuits, leading to short attention spans, poor impulse control, and difficulty with sustained focus.'

The effects don't stop at attention. 'We're seeing a decline in memory retention and critical thinking, simply because people don't engage deeply with information anymore,' Verma adds. Even basic tasks like asking for directions or speaking to others are being replaced by AI, increasing social isolation, he says.

Much of this harks back to the time when landlines came to be replaced by smartphones. Landline users rarely needed a phonebook — numbers of friends, family, and favourite shops were memorised by heart. But with mobile phones offering a convenient 'contacts' list, memory was outsourced. Today, most people can barely remember three-odd numbers unaided.
With AI, such cognitive shifts will likely become more pronounced, the experts say. What looks like convenience today might well be shaping a future where essential human skills quietly fade away.

Using AI without losing ourselves

Experts agree that the solution is not to reject AI, but to regulate its use with conscious boundaries and real-world grounding. Verma advocates for structured rules around technology use, especially in homes with children and adolescents. 'Children, with underdeveloped self-regulation, need guidance,' he says. 'We must set clear boundaries and model balanced behaviour. Without regulation, we risk overstimulating developing brains.'

To prevent digital dependence, Anand recommends simple, yet effective, routines that can be extended to AI use. The 'phone basket ritual', for instance, involves setting aside all devices in a common space at a fixed hour each day — usually in the evening — to create a screen-free window for family time or rest. He also suggests 'digital fasting': unplugging from all screens for six to eight hours once a week to reset attention and reduce compulsive use. 'These habits help reclaim control from devices and re-train the brain to function independently,' he says. Perhaps digital fasting can be extended to 'AI fasting' during work and school assignments to allow the brain to engage in cognitive activities.

Pratishtha Arora, chief executive officer of Social and Media Matters, a digital rights organisation, highlights the essential role of parental responsibility in shaping children's digital lives. 'Technology is inevitable, but how we introduce it matters,' she says. 'The foundation of a child's brain is laid early. If we outsource that to screens, the damage can be long-term.' She also emphasises the need to recognise children's innate skills and interests rather than plunging them into technology at an early age.

Shivani Mishra, AI researcher at the Indian Institute of Technology Kanpur, cautions against viewing AI as a replacement for human intelligence. 'AI can assist, but it cannot replace human creativity or emotional depth,' she says. Like most experts, she too advises that AI should be used to reduce repetitive workload, 'and free up space for thinking, not to avoid thinking altogether'.

The human cost

According to Mishra, the danger lies not in what AI can do, but in how much we delegate to it, often without reflection. Both Anand and Verma share concerns about how its unregulated use could stunt core human faculties. Anand reiterates that unchecked dependence could erode the brain's capacity to delay gratification, solve problems, and tolerate discomfort. 'We're at risk of creating a generation of young people who are highly stimulated but poorly equipped to deal with the complexities of real life,' Verma says.

The way forward, the experts agree, lies in responsible development: creating AI systems grounded in ethics, transparency, and human values. Research in AI ethics must be prioritised not just for safety, but also to preserve what makes us human in the first place, they advise. The question is not whether AI will shape the future; it is already doing so. It is whether humans will remain conscious architects of that future or passive participants in it.
  • Writing without AI assistance leads to higher cognitive load engagement, stronger executive control, and deeper creative processing
  • Writing with AI assistance reduces overall neural connectivity and shifts the dynamics of information flow
  • Large language model (LLM) users noted a diminishing inclination to evaluate the output critically
  • Participants who were in the brain-only group reported higher satisfaction and demonstrated higher brain connectivity, compared to other groups
  • Essays written with the help of an LLM carried less significance or value to the participants, as they spent less time on writing and mostly failed to provide a quote from their essays

Has AI already rotted my brain?

Fast Company

2 days ago

  • Science
  • Fast Company

Five years ago, I bought an e-bike. At the time, the motor-equipped two-wheelers were burdened with an iffy reputation. Was it way easier to get up a hill on one than on a bike without a battery? Absolutely. Did that mean people who rode them were lazy or even cheaters? Some cycling enthusiasts thought so. But what if the boost provided by your e-bike motivated you to make longer trips and more of them—all powered, in part, by your own pedaling? Having logged almost 10,000 miles on my Gazelle, I'm certain it's been a guilt-free boon to my well-being. Data backs me up.

I thought about that recently while reading about a new study conducted at MIT's Media Lab. Researchers divided subjects ages 18 to 39 into three groups and had them write essays on topics drawn from the SAT questions answered by college applicants, such as 'Do works of art have the power to change people's lives?' One group relied entirely on unassisted brainpower to complete the essay. A second group could use a search engine. And the third could call on ChatGPT. The study subjects wore EEG helmets that captured their brain activity as they worked.

After analyzing that data, the researchers concluded that access to ChatGPT didn't just make composing an essay easier. It made it too easy, in ways that might negatively impact people's long-term ability to think for themselves. In some cases, the ChatGPT users merely cut and pasted text the chatbot had generated; not surprisingly, they exhibited little sense of ownership over the finished product compared to those who didn't have a computerized ghost on tap.

'Due to the instant availability of the response to almost any question, LLMs can possibly make a learning process feel effortless, and prevent users from attempting any independent problem solving,' the researchers wrote in their report. 'By simplifying the process of obtaining answers, LLMs could decrease student motivation to perform independent research and generate solutions. Lack of mental stimulation could lead to a decrease in cognitive development and negatively impact memory.'

The study reached those sobering conclusions in the context of young people growing up in an era of bountiful access to AI. But the alarms it set off also left me worried about the technology's impact on my own brain. I have long considered AI an e-bike for my mind—something that speeds it through certain tasks, thereby letting it go places previously out of reach. What if it's actually so detrimental to my mental acuity that I haven't even noticed my critical faculties withering away?

After pondering that worst-case scenario for a while, I calmed down. Yes, consistently opting for the most expedient way to accomplish work rather than the one that produces the best results is no way to live. Sure, being overly reliant on ChatGPT—or any form of generative AI—has its hazards. But I'm pretty confident it's possible to embrace AI without your reasoning skills atrophying.

No single task can represent all the ways people engage with AI, and the one the MIT researchers chose—essay writing—is particularly fraught. The best essays reflect the unique insight of a particular person: When students take the actual SAT, they aren't even allowed to bring a highlighter, let alone a bot. We don't need EEG helmets to tell us that people who paste ChatGPT's work into an essay they've nominally written have lost out on the learning opportunity presented by grappling with a topic, reaching conclusions, and expressing them for oneself.
However, ChatGPT and its LLM brethren also excel at plenty of jobs too mundane to feel guilty about outsourcing. Each week, for example, I ask Anthropic's Claude to clean up some of the HTML required to produce this newsletter. It handles this scut work faster and more accurately than I can. I'm not sure what my brain waves would reveal, but I'm happy to reinvest any time not spent on production drudgery into more rewarding aspects of my job.

Much of the time, AI is most useful not as a solution but as a starting point. Almost never would I ask a chatbot about factual information, get an answer, and call it a day. They're still too error-prone for that. Yet their ease of use makes them an inviting way to get rolling on projects. I think of them as facilitating the research before the old-school research I usually end up doing.

And sometimes, AI is a portal into adventures I might otherwise never have taken. So far in 2025, my biggest rabbit hole has been vibe coding—coming up with ideas for apps and then having an LLM craft the necessary software using programming tools I don't even understand. Being exposed to technologies such as React and TypeScript has left me wanting to learn enough about them to do serious coding on my own. If I do, AI can take credit for sparking that ambition.

I'm only so Pollyanna-ish about all this. Over time, the people who see AI as an opportunity to do more thinking—not less of it—could be a lonely minority. If so, the MIT researchers can say 'We told you so.' Case in point: At the same time the MIT study was in the news, word broke that VC titan Andreessen Horowitz had invested $15 million in Cluely, a truly dystopian startup whose manifesto boasts its aim of helping people use AI to 'cheat at everything' based on the theory that 'the future won't reward effort.' Its origin story involves cofounder and CEO Roy Lee being suspended from Columbia University after developing an app for cheating on technical employment interviews. Which makes me wonder how Lee would feel about his own candidates misleading their way into job offers. With any luck, the future will turn out to punish Cluely's cynicism. But the company's existence—and investors' willingness to shower it with money—says worse things about humankind than about AI.
