
Latest news with #MITMediaLab

We've all got to do more to protect kids from AI abuse in schools

New York Post

a day ago

  • Science
  • New York Post

We've all got to do more to protect kids from AI abuse in schools

For the sake of the next generation, America's elected officials, parents and educators need to get serious about curbing kids' use of artificial intelligence — or the cognitive consequences will be devastating. As Rikki Schlott reported in Wednesday's Post, an MIT Media Lab study found that people who used large language models like ChatGPT to write essays had reduced critical thinking skills and attention spans and showed less brain activity while working than those who didn't rely on the AI's help. And over time the AI users grew to rely more heavily on the tech, going from using it for small tweaks and refinement to copying and pasting whole portions of whatever the models spit out.

A series of experiments at UPenn/Wharton had similar results: Participants who used large language models like ChatGPT were able to research topics faster than those who used Google, but lagged in retaining and understanding the information they got. That is: They weren't actually learning as much as those who had to actively seek out the information they needed.

The bottom line: Using AI for tasks like researching and writing makes us dumber and lazier.

Even scarier, the MIT study showed that the negative effects of AI are worse for younger users. That's bad news, because all signs are that kids are relying more and more on tech in classrooms. A Pew poll in January found that some 26% of teens aged 13 to 17 admit to using AI for schoolwork — twice the 2023 level. It'll double again, faster still, unless the adults wake up.

We've known for years how smartphone use damages kids: shorter attention spans, less fulfilling social lives, higher rates of depression and anxiety. States are moving to ban phones in class, but years after the dangers became obvious — and long after the wiser private schools cracked down. This time, let's move to address the peril before a generation needlessly suffers irrevocable harm.

Some two dozen states have issued guidance on AI use in classrooms, but that's only a start: Every state's education officials should ensure that every school cracks down.

Put more resources into creating reliable tools and methods to catch AI-produced work — and into showing teachers how to stop it and warning parents and students of the consequences of AI overuse. Absent a full-court press, far too many kids won't build crucial cognitive skills because a chatbot does all the heavy lifting for them while their brains are developing.

Overall, AI should be a huge boon for humanity, eliminating vast amounts of busy work. But doing things the hard way remains the best way to build mental 'muscle.' If the grownups don't act, overdependence on AI will keep spreading through America's classrooms like wildfire. Stop it now — before the wildfire burns out a generation of young minds.

Does Using ChatGPT Really Change Your Brain Activity?

Yahoo

2 days ago

  • Science
  • Yahoo

Does Using ChatGPT Really Change Your Brain Activity?

The brains of people writing an essay with ChatGPT are less engaged than those of people blocked from using any online tools for the task, a study finds. The investigation is part of a broader movement to assess whether artificial intelligence (AI) is making us cognitively lazy. Computer scientist Nataliya Kosmyna at the MIT Media Lab in Cambridge, Massachusetts, and her colleagues measured brain-wave activity in university students as they wrote essays either using a chatbot or an Internet search tool, or without any Internet at all.

Although the main result is unsurprising, some of the study's findings are more intriguing: for instance, the team saw hints that relying on a chatbot for initial tasks might lead to relatively low levels of brain engagement even when the tool is later taken away. Echoing some posts about the study on online platforms, Kosmyna is careful to say that the results shouldn't be overinterpreted. This study cannot and did not show 'dumbness in the brain, no stupidity, no brain on vacation,' Kosmyna laughs. It involved only a few dozen participants over a short time and cannot address whether habitual chatbot use reshapes our thinking in the long term, or how the brain might respond during other AI-assisted tasks. 'We don't have any of these answers in this paper,' Kosmyna says. The work was posted ahead of peer review on the preprint server arXiv on 10 June.

Kosmyna's team recruited 60 students, aged 18 to 39, from five universities around the city of Boston, Massachusetts. The researchers asked them to spend 20 minutes crafting a short essay answering questions, such as 'should we always think before we speak?', that appear on Scholastic Assessment Tests, or SATs.
The participants were divided into three groups: one used ChatGPT, powered by OpenAI's large language model GPT-4o, as the sole source of information for their essays; another used Google to search for material (without any AI-assisted answers); and the third was forbidden to go online at all. In the end, 54 participants wrote essays answering three questions while in their assigned group, and then 18 were re-assigned to a new group to write a fourth essay, on one of the topics that they had tackled previously.

Each student wore a commercial electrode-covered cap, which collected electroencephalography (EEG) readings as they wrote. These headsets measure tiny voltage changes from brain activity and can show which broad regions of the brain are 'talking' to each other. The students who wrote essays using only their brains showed the strongest, widest-ranging connectivity among brain regions, and had more activity going from the back of their brains to the front, decision-making area. They were also, unsurprisingly, better able to quote from their own essays when questioned by the researchers afterwards. The Google group, by comparison, had stronger activations in areas known to be involved with visual processing and memory. And the chatbot group displayed the least brain connectivity during the task.

More brain connectivity isn't necessarily good or bad, Kosmyna says. In general, more brain activity might be a sign that someone is engaging more deeply with a task, or it might be a sign of inefficiency in thinking, or an indication that the person is overwhelmed by 'cognitive overload'. Interestingly, when the participants who initially used ChatGPT for their essays switched to writing without any online tools, their brains ramped up connectivity — but not to the same level as in the participants who worked without the tools from the beginning.
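The 'connectivity' these EEG caps report is, at its simplest, a measure of how strongly the signals recorded at different electrodes co-vary over time. The sketch below is illustrative only, not the study's actual analysis pipeline: the function names and the toy data are assumptions, and it uses pairwise Pearson correlation, one common but basic connectivity measure.

```python
import numpy as np

def connectivity_matrix(eeg: np.ndarray) -> np.ndarray:
    """Pairwise Pearson correlation between EEG channels.

    eeg: array of shape (n_channels, n_samples).
    Returns an (n_channels, n_channels) matrix with values in [-1, 1].
    """
    return np.corrcoef(eeg)

def mean_connectivity(eeg: np.ndarray) -> float:
    """Average absolute off-diagonal correlation: one crude summary
    of how strongly channels 'talk' to each other overall."""
    c = np.abs(np.corrcoef(eeg))
    n = c.shape[0]
    off_diag = c[~np.eye(n, dtype=bool)]  # drop the trivial self-correlations
    return float(off_diag.mean())

# Toy data: 4 channels, 1000 samples. Channels 0 and 1 share a
# common underlying signal, so their correlation comes out high.
rng = np.random.default_rng(0)
common = rng.standard_normal(1000)
eeg = rng.standard_normal((4, 1000))
eeg[0] += 2 * common
eeg[1] += 2 * common

score = mean_connectivity(eeg)
```

A real EEG analysis would work on band-filtered signals and use more robust measures than raw correlation, but the intuition is the same: the "brain-only" group showing the "strongest, widest-ranging connectivity" means summaries like this were highest for them.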
'This evidence aligns with a worry that many creativity researchers have about AI — that overuse of AI, especially for idea generation, may lead to brains that are less well-practised in core mechanisms of creativity,' says Adam Green, co-founder of the Society for the Neuroscience of Creativity and a cognitive neuroscientist at Georgetown University in Washington DC. But only 18 people were included in this last part of the study, Green notes, which adds uncertainty to the findings. He also says there could be other explanations for the observations: for instance, these students were rewriting an essay on a topic they had already tackled, and therefore the task might have drawn on cognitive resources that differed from those required when writing about a brand-new topic.

Confoundingly, the study also showed that switching to a chatbot to write an essay after previously composing it without any online tools boosted brain connectivity — the opposite, Green says, of what you might expect. This suggests it could be important to think about when AI tools are introduced to learners to enhance their experience, Kosmyna says. 'The timing might be important.'

Many educational scholars are optimistic about the use of chatbots as effective, personalized tutors. Guido Makransky, an educational psychologist at the University of Copenhagen, says these tools work best when they guide students to ask reflective questions, rather than giving them answers. 'It's an interesting paper, and I can see why it's getting so much attention,' Makransky says. 'But in the real world, students would and should interact with AI in a different way.'

This article is reproduced with permission and was first published on June 25, 2025.

Does Using ChatGPT Change Your Brain Activity? Study Sparks Debate

Scientific American

3 days ago

  • Science
  • Scientific American

Does Using ChatGPT Change Your Brain Activity? Study Sparks Debate

The brains of people writing an essay with ChatGPT are less engaged than those of people blocked from using any online tools for the task, a study finds. The investigation is part of a broader movement to assess whether artificial intelligence (AI) is making us cognitively lazy. Computer scientist Nataliya Kosmyna at the MIT Media Lab in Cambridge, Massachusetts, and her colleagues measured brain-wave activity in university students as they wrote essays either using a chatbot or an Internet search tool, or without any Internet at all.

Although the main result is unsurprising, some of the study's findings are more intriguing: for instance, the team saw hints that relying on a chatbot for initial tasks might lead to relatively low levels of brain engagement even when the tool is later taken away. Echoing some posts about the study on online platforms, Kosmyna is careful to say that the results shouldn't be overinterpreted. This study cannot and did not show 'dumbness in the brain, no stupidity, no brain on vacation,' Kosmyna laughs. It involved only a few dozen participants over a short time and cannot address whether habitual chatbot use reshapes our thinking in the long term, or how the brain might respond during other AI-assisted tasks. 'We don't have any of these answers in this paper,' Kosmyna says. The work was posted ahead of peer review on the preprint server arXiv on 10 June.

Easy essays

Kosmyna's team recruited 60 students, aged 18 to 39, from five universities around the city of Boston, Massachusetts.
The researchers asked them to spend 20 minutes crafting a short essay answering questions, such as 'should we always think before we speak?', that appear on Scholastic Assessment Tests, or SATs. The participants were divided into three groups: one used ChatGPT, powered by OpenAI's large language model GPT-4o, as the sole source of information for their essays; another used Google to search for material (without any AI-assisted answers); and the third was forbidden to go online at all. In the end, 54 participants wrote essays answering three questions while in their assigned group, and then 18 were re-assigned to a new group to write a fourth essay, on one of the topics that they had tackled previously.

Each student wore a commercial electrode-covered cap, which collected electroencephalography (EEG) readings as they wrote. These headsets measure tiny voltage changes from brain activity and can show which broad regions of the brain are 'talking' to each other. The students who wrote essays using only their brains showed the strongest, widest-ranging connectivity among brain regions, and had more activity going from the back of their brains to the front, decision-making area. They were also, unsurprisingly, better able to quote from their own essays when questioned by the researchers afterwards. The Google group, by comparison, had stronger activations in areas known to be involved with visual processing and memory. And the chatbot group displayed the least brain connectivity during the task.

More brain connectivity isn't necessarily good or bad, Kosmyna says. In general, more brain activity might be a sign that someone is engaging more deeply with a task, or it might be a sign of inefficiency in thinking, or an indication that the person is overwhelmed by 'cognitive overload'.

Creativity lost?
Interestingly, when the participants who initially used ChatGPT for their essays switched to writing without any online tools, their brains ramped up connectivity — but not to the same level as in the participants who worked without the tools from the beginning. 'This evidence aligns with a worry that many creativity researchers have about AI — that overuse of AI, especially for idea generation, may lead to brains that are less well-practised in core mechanisms of creativity,' says Adam Green, co-founder of the Society for the Neuroscience of Creativity and a cognitive neuroscientist at Georgetown University in Washington DC. But only 18 people were included in this last part of the study, Green notes, which adds uncertainty to the findings. He also says there could be other explanations for the observations: for instance, these students were rewriting an essay on a topic they had already tackled, and therefore the task might have drawn on cognitive resources that differed from those required when writing about a brand-new topic.

Confoundingly, the study also showed that switching to a chatbot to write an essay after previously composing it without any online tools boosted brain connectivity — the opposite, Green says, of what you might expect. This suggests it could be important to think about when AI tools are introduced to learners to enhance their experience, Kosmyna says. 'The timing might be important.'

Many educational scholars are optimistic about the use of chatbots as effective, personalized tutors. Guido Makransky, an educational psychologist at the University of Copenhagen, says these tools work best when they guide students to ask reflective questions, rather than giving them answers. 'It's an interesting paper, and I can see why it's getting so much attention,' Makransky says. 'But in the real world, students would and should interact with AI in a different way.'

‘They're Not Asking For A Seat At The Table. They're Rebuilding It Entirely' – Meet The Black Women Rewriting The Future Of AI

Elle

4 days ago

  • Business
  • Elle

‘They're Not Asking For A Seat At The Table. They're Rebuilding It Entirely' – Meet The Black Women Rewriting The Future Of AI

Artificial Intelligence is reshaping the fabric of society whether we like it or not. It goes beyond the friendly tone given to the Artificial Narrow Intelligence (ANI) that we use when planning a trip, or even the type of text message to send to that emotionally unavailable guy who just won't act right. AI is in fact informing decisions about who gets access to healthcare, employment, housing, and, ever more importantly, freedom. Yet, as the field expands (albeit at an unregulated and exponential rate), so does the urgent need to interrogate who is building these systems and whose values are embedded in the algorithms.

In a landscape historically dominated by white and male perspectives, Black women have emerged as critical voices pushing for equity, transparency, and justice in AI. Their presence is not simply symbolic; it is transformative and necessary. Black women in AI are not only contributing technical expertise but also grounding the work in lived experience, historical analysis, and a politics of care that is often missing from the mainstream tech industry. Due to centuries of anti-Blackness coupled with sexism, Black women continue to be at the mercy of oversights in various fields, which can prove detrimental and sometimes fatal. Healthcare is a glaring example of this; in the UK, Black women are almost three times more likely to die during or within six weeks of pregnancy compared to white women.

The interventions of Black women in AI force us to confront the uncomfortable truth: AI is not neutral. Behind every dataset is a legacy of power, exclusion, and bias. Perhaps no one exemplifies this better throughout my research than Joy Buolamwini, founder of the Algorithmic Justice League. Her MIT Media Lab research revealed how commercial facial recognition systems failed to accurately detect darker-skinned faces (particularly Black women), forcing major tech companies to reckon with the ethical failures of their software.
Buolamwini didn't just diagnose a problem; she sparked a global reckoning. A question that I keep asking is 'how do these companies keep getting it wrong over and over again?' When does the lack of inclusivity in AI stop being a silly faux pas and start being seen as a strategised attempt at erasure?

As someone who has built their platform on speaking their mind, I am always enamoured by other women speaking up courageously in their field of expertise. That is how I came across Dr. Timnit Gebru, co-founder of Black in AI, who has fearlessly taken on Big Tech. After being ousted from Google for raising concerns about the risks of large language models, she founded the Distributed AI Research Institute (DAIR), an independent organisation centering community-rooted, anti-colonial AI research. This is important to me personally because I am regularly chastised by trolls online for 'making everything about race.' What Gebru's work confirmed to me is that without the influence of Black women in tech, more impenetrable racist systems would be built, which could take decades to rectify. Her fearlessness not only exposed systemic racism in Silicon Valley but also offered a blueprint for doing tech differently.

Other trailblazers like Mutale Nkonde, founder of AI For the People, and Dr. Safiya Umoja Noble, author of Algorithms of Oppression, are using policy, media, and academia to reveal how AI reinforces existing power structures rather than modernising them. Whether through legislation, books, or public education, they are ensuring that conversations about AI include, and centre, Black voices, especially those of Black women.

Crucially, these women are not asking for a seat at the table. They are rebuilding the table entirely. In other words, they didn't wait to be invited to do the meaningful work; they got to work regardless. They should be our reminder that the future is not preordained by machines and coded binaries; it's designed by people.
And when Black women are included in the creation of that design, the result is not just smarter tech, but fairer, more human systems. I am inspired by these women, not just for what they do, but for how they do it: with integrity, radical imagination, and a refusal to be co-opted by the very systems they critique. It is not always easy to raise one's head above the parapet for fear of being seen as an Angry Black Woman.

This same spirit fuels my own work, albeit from a different angle. As a writer and cultural commentator, I've chosen to explore similar questions through the lens of science fiction. In my novel, Awakened, we follow a young Black woman in London who discovers ancient, dormant powers within herself just as Black children begin mysteriously dying, their bodies found by rivers and lakes. A journalist by profession, she begins investigating these tragedies, only to uncover a supernatural conspiracy entangled with real-world systems of neglect and violence.

Writing speculative fiction allows me to interrogate reality by bending it. Like one of my favourite authors, Octavia Butler, I'm using genre to ask questions that mainstream narratives often sideline. What does liberation look like in a world that wasn't built with us in mind? How do we reclaim our spiritual and ancestral knowledge in an age of algorithmic erasure? And can we, as Black women, wholly embody the identity of architects of not just resistance, but re-imagination?

The parallels between speculative fiction and AI are striking, because both involve world-building. Both carry the power to shape perceptions, define truths, and govern futures. But unlike the opaque algorithms written for the benefit of corporate boardrooms, fiction can make the invisible visible. It can expose the hidden logics that underlie our systems and reframe what's possible, especially when told through the eyes of those most often written out of the future. Black women in AI are shifting the culture from within.
I'm doing it through story. We are not anomalies; we are archetypes of a new era. And we are not asking permission. There is an ancient blueprint and it's about time that we remember our place and power within it.

Awakened by Kelechi Okafor is out now (Trapeze, £18.99).

ChatGPT, brain rot and who should use AI and who should not

India Today

7 days ago

  • Science
  • India Today

ChatGPT, brain rot and who should use AI and who should not

There was a time when almost everyone had a few phone numbers stored in the back of their mind. We would just pick up our old Nokia, or a cordless, and dial in a number. Nowadays, most people remember just one phone number — their own. And in some cases, not even that. It is the same with birthdates, trivia like who the prime minister of Finland is, or the accurate route to this famous bakery in that corner of the city. We are no longer memory machines, something which often leads to hilarious videos on social media. Young students are asked on camera to name the first prime minister of India and all of them look bewildered. Maybe Gandhi, some of them gingerly say. We all laugh a good bit at their expense.

But it's not the fault of the kids. It's a different world. The idea of memorising stuff is a 20th-century concept. Memory has lost its value because now we can recall anything or everything with the help of Google. We can store information outside our brain and in our phones and access it anytime we want. Because memory has lost its value, we have also lost our ability to memorise things. Is it good? Is it bad? That is not what this piece is about. Instead, it is about what we are going to lose next.

Next, say in 10 to 15 years, we may end up losing our ability to think and analyse, just the way we have lost the ability to memorise. And that would be because of ChatGPT and its ilk. So far, we had only suspected something like this. Now, research is beginning to trace it in graphs and charts.

Around a week ago, researchers at MIT Media Lab ran some experiments on what happens inside the brain of people when they use ChatGPT. As part of the experiment, the researchers divided 54 people into three groups: people using only the brain to work, people using brain and Google search, and people using brain and ChatGPT. The work was writing an essay and as the participants in the research went about doing it, their brains were scanned using EEG. The findings were clear.
'EEG revealed significant differences in brain connectivity,' wrote the MIT Media Lab researchers. 'Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity.'

The research was carried out across four months and in the last phase, participants who were part of the brain-only group were asked to also use ChatGPT, whereas the ChatGPT group was told to not use it at all. 'Over four months, LLM (ChatGPT) users consistently underperformed at neural, linguistic, and behavioural levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning,' wrote the researchers.

What is the big takeaway? Quite simple. Like anything cerebral — for example, it is well-established that reading changes and rewires the brain — the use of something like ChatGPT impacts our brain in some fundamental ways. The brain, just like a muscle, can atrophy when not used. And we have started seeing signs in labs that when people rely too much on AI tools like ChatGPT to do their thinking, writing and analysing, our brains may lose some of this ability.

Of course, there could be the other side of the story too. If in some areas the mind is getting a break, it is possible that in some other parts neurons might light up more frequently. If we lose our ability to analyse an Excel sheet with just a quick glance, maybe we will get the ability to spot bigger ideas faster after looking at the ChatGPT analysis of 10 financial reports.

But I am not certain. On the whole, and if we include everyone, the impact of the information abundance that tools like Google and Wikipedia have brought has not resulted in smarter or savant-like people. There is often a crude joke on the internet — we believed that earlier, people were stupid because they did not have access to information.
Oh, how naive we were. It is possible that, at least on the human mind, the impact of tools like ChatGPT may not end up being a net positive. And that brings me to my next question. So, who should or who should not use ChatGPT?

The current AI tools are undoubtedly powerful. They have the potential to crash through all the gate-keeping that happens within the world. When this much power is available, it would be a waste to not use it. So, everyone should use AI tools like ChatGPT. But I do feel that there has to be a way to go about it. If we don't want AI to wreck our minds, we will have to be smart about how we use them.

In formative years — in schools and colleges or at work when you are learning the ropes of the trade — it would be unwise to use ChatGPT and similar tools. The idea is that you should use ChatGPT like a bicycle, which makes you more efficient and faster, instead of as a crutch. The idea is that before you use ChatGPT, you should already have a brain that has figured out a way to learn and connect the dots.

This is probably the reason why, in recent months, again and again, top AI experts have highlighted that the use of AI tools must be accompanied by an emphasis on learning the basics. DeepMind CEO Demis Hassabis put it best last month when he was speaking at Cambridge. Answering a question about how students should deal with AI, he said, 'It's important to use the time you have as an undergraduate to understand yourself better and learn how to learn.'

In other words, Hassabis believes that before you jump onto ChatGPT or other AI tools, you should first have the fundamental ability to analyse, adapt and learn quickly without them. In the future, this, I think, is going to be key to using AI tools in a better way. Or else, they may end up rotting our brains, similar to what we have done to our memory and attention span due to Instagram, Google and all the information overload.

(Javed Anwer is Technology Editor, India Today Group Digital.
Latent Space is a weekly column on tech, world, and everything in between. The name comes from the science of AI and, to reflect it, Latent Space functions in the same way: by simplifying the world of tech and giving it a context.)

(Views expressed in this opinion piece are those of the author.)
