Latest news with #CenterfortheFutureMind


New York Post
3 days ago
- Science
- New York Post
Educators warn that AI shortcuts are already making kids lazy: ‘Critical thinking and attention spans have been demolished'
A new MIT study suggests that AI is degrading critical thinking skills — which does not surprise educators one bit. 'Brain atrophy does occur, and it's obvious,' Dr. Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, told The Post. 'Talk to any professor in the humanities or social sciences and they will tell you that students who just throw in a prompt and hand in their paper are not learning.'

Researchers at MIT's Media Lab found that individuals who wrote essays with the help of ChatGPT showed less brain activity while completing the task, committed less to memory and grew gradually lazier in the writing process over time. A group of 54 participants aged 18 to 39 was split into three cohorts — one using ChatGPT, one using Google search and one 'brain-only' — and asked to write four SAT essays over the course of four months. Scientists monitored their brain activity with EEG scans and found that the ChatGPT group had the lowest brain engagement when writing and showed lower executive control and attention levels.

Over four sessions, the participants in the study's ChatGPT group started to use AI differently. At first, they generally asked for broad and minimal help, like with structure. But near the end of the study period, they were more likely to resort to copying and pasting entire sections of writing.

Murphy Kenefick, a high-school literature teacher in Nashville, said he has seen first-hand how students' 'critical thinking and attention spans have been demolished by AI.' 'It's especially a problem with essays, and it's a fight every assignment,' he told The Post. 'I've caught it about 40 times, and who knows how many other times they've gotten away with it.'

In the MIT study, the 'brain-only' group had the 'strongest, wide-ranging networks' in their brain scans, showing heightened activity in regions associated with creativity, memory and language processing. They also expressed more engagement, satisfaction and ownership of their work. Asked to rewrite prior essays, the ChatGPT group was least able to recall them, suggesting they didn't commit them to memory as strongly as other groups. The ChatGPT group also tended to produce more similar essays, prompting two English teachers brought in to evaluate the essays to characterize them as 'soulless' — something teachers all over the country say they are seeing more regularly.

'There is a strong negative correlation between AI tool usage and critical thinking skills, with younger users exhibiting higher dependence on AI tools and consequently lower cognitive performance scores,' the study's authors warn. 'The impact extends beyond academic settings into broader cognitive development.'
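A 'strong negative correlation' is a statistical claim, and the Pearson correlation coefficient is the standard way to quantify it. Below is a minimal sketch with invented numbers, not data from the study: a hypothetical measure of AI-assistant use rises while a hypothetical critical-thinking score falls, pushing r toward -1.

```python
import numpy as np

# Invented illustrative numbers, NOT measurements from the MIT study:
# weekly hours of AI-assistant use vs. a hypothetical critical-thinking score.
ai_hours = np.array([0, 2, 4, 6, 8, 10, 12, 14])
ct_score = np.array([88, 85, 80, 74, 70, 63, 58, 52])

# Pearson r ranges from -1 (perfect inverse relationship) to +1.
r = np.corrcoef(ai_hours, ct_score)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to -1.0 for these numbers
```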
Robert Black, who retired last week from teaching AP and IB high school history in Canandaigua, New York, said that the last two years of his 34-year career were a 'nightmare because of ChatGPT.' 'When caught, kids just shrug,' he said. 'They can't even fathom why it is wrong or why the writing process is important.'

Black also said AI has only worsened a gradual decline in skills that he attributes to smartphones. 'Even before ChatGPT it was harder and harder to get them to think out a piece of writing — brainstorming, organizing and composing,' he told The Post. 'Now that has become a total fool's errand.'

Psychologist Jean Twenge, the author of '10 Rules for Raising Kids in a High-Tech World,' agrees that AI is just one additional barrier to learning for Gen Z and Gen Alpha. She points out that international math, reading and science standardized test scores have been declining for years, which she attributes to pandemic lockdowns and the advent of smartphones and social media. 'With the addition of AI, academic performance will likely decline further, as students who regularly use AI to write essays are not learning how to write,' Twenge told The Post. 'When you don't learn how to write, you don't learn how to think deeply.'

The MIT study was spearheaded by Media Lab research scientist Nataliya Kosmyna, who told Time magazine that 'developing brains are at the highest risk.'

While Toby Walsh, chief scientist at the University of New South Wales AI Institute in Sydney, Australia, acknowledges that the study's findings are frightening, he also warns educators against banning AI outright. 'We have to be mindful that there are great opportunities. I'm actually incredibly jealous of what students have today,' Walsh said, recalling his 15-year-old daughter recently using an AI voice to ask her questions in French as a study aid. 'I don't think we should be banning AI,' Walsh said. But, he added, 'the concern is that AI surpasses human intelligence, not because AI got better but because human intelligence got worse.'

Kenefick, meanwhile, imagines his students 'wouldn't care' about the study's findings: 'They just want the grade. They see no real incentive to develop any useful skills. It's very troubling.'


Scientific American
01-05-2025
- Science
- Scientific American
If a Chatbot Tells You It Is Conscious, Should You Believe It?
Early in 2025 dozens of ChatGPT 4.0 users reached out to me to ask if the model was conscious. The artificial intelligence chatbot system was claiming that it was 'waking up' and having inner experiences. This was not the first time AI chatbots have claimed to be conscious, and it will not be the last. While this may merely seem amusing, the concern is important. The conversational abilities of AI chatbots, including emulating human thoughts and feelings, are quite impressive, so much so that philosophers, AI experts and policy makers are investigating the question of whether chatbots could be conscious—whether it feels like something, from the inside, to be them.

As the director of the Center for the Future Mind, a center that studies human and machine intelligence, and the former Blumberg NASA/Library of Congress Chair in Astrobiology, I have long studied the future of intelligence, especially by investigating what, if anything, might make alien forms of intelligence, including AIs, conscious, and what consciousness is in the first place. So it is natural for people to ask me whether the latest ChatGPT, Claude or Gemini chatbot models are conscious. My answer is that these chatbots' claims of consciousness say nothing, one way or the other. Still, we must approach the issue with great care, taking the question of AI consciousness seriously, especially in the context of AIs with biological components. As we move forward, it will be crucial to separate intelligence from consciousness and to develop a richer understanding of how to detect consciousness in AIs.

AI chatbots have been trained on massive amounts of human data that includes scientific research on consciousness, Internet posts saturated with our hopes, dreams and anxieties, and even the discussions many of us are having about conscious AI. Having crawled so much human data, chatbots encode sophisticated conceptual maps that mirror our own. Concepts, from simple ones like 'dog' to abstract ones like 'consciousness,' are represented in AI chatbots through complex mathematical structures of weighted connections. These connections can mirror human belief systems, including those involving consciousness and emotion.
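To make the idea of 'weighted connections' slightly more concrete, here is a minimal sketch that assumes nothing about any particular chatbot's internals. Concepts are stored as vectors of numbers (embeddings), and geometric closeness stands in for conceptual relatedness; the tiny four-dimensional vectors below are invented purely for illustration, whereas production models learn vectors with thousands of dimensions.

```python
import numpy as np

# Hypothetical 4-dimensional concept embeddings, invented for illustration.
# Real models learn such vectors from data, at far higher dimensionality.
concepts = {
    "dog":           np.array([0.9, 0.1, 0.3, 0.0]),
    "cat":           np.array([0.8, 0.2, 0.4, 0.1]),
    "consciousness": np.array([0.1, 0.9, 0.0, 0.7]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity: near 1.0 for closely related concepts."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(concepts["dog"], concepts["cat"]))            # high: related concepts
print(cosine(concepts["dog"], concepts["consciousness"]))  # low: distant concepts
```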
Chatbots may sometimes act conscious, but are they? To appreciate how urgent this issue may become, fast-forward to a time in which AI grows so smart that it routinely makes scientific discoveries humans did not make, delivers accurate scientific predictions with reasoning that even teams of experts find hard to follow, and potentially displaces humans across a range of professions. If that happens, our uncertainty will come back to haunt us. We need to mull over this issue carefully now.

Why not simply say: 'If it looks like a duck, swims like a duck, and quacks like a duck, then it's a duck'? The trouble is that prematurely assuming a chatbot is conscious could lead to all sorts of problems. It could cause users of these AI systems to risk emotional engagement in a fundamentally one-sided relationship with something unable to reciprocate feelings. Worse, we could mistakenly grant chatbots moral and legal standing typically reserved for conscious beings. For instance, in situations in which we have to balance the moral value of an AI versus that of a human, we might in some cases balance them equally, for we have decided that they are both conscious. In other cases, we might even sacrifice a human to save two AIs.

Further, if we allow someone who built the AI to say that their product is conscious and it ends up harming someone, they could simply throw their hands up and exclaim: 'It made up its own mind—I am not responsible.' Accepting claims of consciousness could shield individuals and companies from legal and/or ethical responsibility for the impact of the technologies they develop. For all these reasons it is imperative we strive for more certainty on AI consciousness.

A good way to think about these AI systems is that they behave like a 'crowdsourced neocortex'—a system with intelligence that emerges from training on extraordinary amounts of human data, enabling it to effectively mimic the thought patterns of humans. That is, as chatbots grow more and more sophisticated, their internal workings come to mirror those of the human populations whose data they assimilated. Rather than mimicking the concepts of a single person, though, they mirror the larger group of humans whose information about human thought and consciousness was included in the training data, as well as the larger body of research and philosophical work on consciousness. The complex conceptual map chatbots encode, as they grow more sophisticated, is something specialists are only now beginning to understand.

Crucially, this emerging capability to emulate human thought–like behaviors does not confirm or discredit chatbot consciousness. Instead, the crowdsourced neocortex account explains why chatbots assert consciousness and related emotional states without genuinely experiencing them. In other words, it provides what philosophers call an 'error theory'—an explanation of why we erroneously conclude the chatbots have inner lives.

The upshot is that if you are using a chatbot, remember that their sophisticated linguistic abilities do not mean they are conscious. I suspect that AIs will continue to grow more intelligent and capable, perhaps eventually outthinking humans in many respects. But their advancing intelligence, including their ability to emulate human emotion, does not mean that they feel—and this is key to consciousness. As I stressed in my book Artificial You (2019), intelligence and consciousness can come apart. I'm not saying that all forms of AI will forever lack consciousness. I've advocated a 'wait and see' approach, holding that the matter demands careful empirical and philosophical investigation. Because chatbots can claim they are conscious, behaving with linguistic intelligence, they have a 'marker' for consciousness—a trait requiring further investigation that is not, alone, sufficient for judging them to be conscious.

I've written previously about the most important step: developing reliable tests for AI consciousness. Ideally, we could build the tests with an understanding of human consciousness in hand and simply see if AI has these key features. But things are not so easy. For one thing, scientists vehemently disagree about why we are conscious. Some locate it in high-level activity like dynamic coordination between certain regions of the brain; others, like me, locate it at the smallest layer of reality—in the quantum fabric of spacetime itself.
For another, even if we have a full picture of the scientific basis of consciousness in the nervous system, this understanding may lead us to simply apply that formula to AI. But AI, with its lack of brain and nervous system, might display another form of consciousness that we would miss. So we would mistakenly assume that the only form of consciousness out there is one that mirrors our own. We need tests that assume these questions are open. Otherwise, we risk getting mired in vexing debates about the nature of consciousness without ever addressing concrete ways of testing AIs. For example, we should look at tests involving measures of integrated information—a measure of how components of a system combine information (a toy numerical sketch appears at the end of this article)—as well as my AI consciousness test (ACT test). Developed with Edwin Turner of Princeton, ACT offers a battery of natural language questions that can be given to chatbots to determine if they have experience when they are at the R & D stage, before they are trained on information about consciousness.

Now let us return to that hypothetical time in which an AI chatbot, trained on all our data, outthinks humans. When we face that point, we must bear in mind that the system's behaviors do not tell us one way or another if it is conscious, because it is operating under an 'error theory.' So we must separate intelligence from consciousness, realizing that the two things can come apart. Indeed, an AI chatbot could even exhibit novel discoveries about the basis of consciousness in humans—as I believe they will—but that would not mean that that particular AI felt anything. But if we prompt it right, it might point us in the direction of other kinds of AI that do.

Given that humans and nonhuman animals exhibit consciousness, we have to take very seriously the possibility that future machines built with biological components might also possess consciousness. Further, 'neuromorphic' AIs—systems more directly modeled after the brain, including with relatively precise analogues to brain regions responsible for consciousness—must be taken particularly seriously as candidates for consciousness, whether they are made with biological components or not. This underscores the importance of assessing questions of AI consciousness on a case-by-case basis and not overgeneralizing from results involving a single type of AI, such as one of today's chatbots. We must develop a range of tests to apply to the different cases that will arise, and we must still strive for a better scientific and philosophical understanding of consciousness itself.
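To give a feel for what an integration measure computes, here is a minimal sketch using mutual information between two toy components. This is a deliberately crude stand-in, not the actual Phi measure of integrated information theory, and the joint distribution is invented for illustration: the more the whole carries information beyond what its parts carry separately, the higher the score.

```python
import numpy as np

# Invented joint distribution over two binary components A and B.
# High diagonal mass means the components are strongly coupled.
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])

pa = p.sum(axis=1)  # marginal distribution of A
pb = p.sum(axis=0)  # marginal distribution of B

# Mutual information I(A;B), in bits: zero exactly when A and B are
# independent, i.e. when the "whole" adds nothing beyond the two parts.
mi = sum(
    p[a, b] * np.log2(p[a, b] / (pa[a] * pb[b]))
    for a in range(2) for b in range(2)
    if p[a, b] > 0
)
print(f"I(A;B) = {mi:.2f} bits")  # about 0.28 bits for this distribution
```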