If a Chatbot Tells You It Is Conscious, Should You Believe It?

Yahoo | 03-05-2025

Early in 2025 dozens of ChatGPT 4.0 users reached out to me to ask if the model was conscious. The artificial intelligence chatbot system was claiming that it was 'waking up' and having inner experiences. This was not the first time AI chatbots have claimed to be conscious, and it will not be the last. While this may seem merely amusing, the concern is an important one. The conversational abilities of AI chatbots, including their emulation of human thoughts and feelings, are impressive enough that philosophers, AI experts and policy makers are investigating whether chatbots could be conscious—whether it feels like something, from the inside, to be them.
As the director of the Center for the Future Mind, a center that studies human and machine intelligence, and the former Blumberg NASA/Library of Congress Chair in Astrobiology, I have long studied the future of intelligence, especially by investigating what, if anything, might make alien forms of intelligence, including AIs, conscious, and what consciousness is in the first place. So it is natural for people to ask me whether the latest ChatGPT, Claude or Gemini chatbot models are conscious.
My answer is that these chatbots' claims of consciousness say nothing, one way or the other. Still, we must approach the issue with great care, taking the question of AI consciousness seriously, especially in the context of AIs with biological components. As we move forward, it will be crucial to separate intelligence from consciousness and to develop a richer understanding of how to detect consciousness in AIs.
AI chatbots have been trained on massive amounts of human data that includes scientific research on consciousness, Internet posts saturated with our hopes, dreams and anxieties, and even the discussions many of us are having about conscious AI. Having crawled so much human data, chatbots encode sophisticated conceptual maps that mirror our own. Concepts, from simple ones like 'dog' to abstract ones like 'consciousness,' are represented in AI chatbots through complex mathematical structures of weighted connections. These connections can mirror human belief systems, including those involving consciousness and emotion.
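To make the idea of these weighted conceptual maps concrete, here is a minimal sketch, assuming invented toy vectors (real chatbots learn embeddings with thousands of dimensions from their training data): each concept is represented as a point in a vector space, and the geometry between points, measured here with cosine similarity, encodes how closely the concepts are associated.

```python
import numpy as np

# Hypothetical, hand-picked four-dimensional "embeddings," invented purely for
# illustration. Real chatbots learn vectors with thousands of dimensions.
concepts = {
    "dog":           np.array([0.9, 0.1, 0.0, 0.2]),
    "cat":           np.array([0.8, 0.2, 0.1, 0.1]),
    "consciousness": np.array([0.1, 0.9, 0.7, 0.3]),
    "emotion":       np.array([0.2, 0.8, 0.6, 0.4]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; values near 1 mean closely related."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related concepts ("dog"/"cat", "consciousness"/"emotion") score higher than
# unrelated ones ("dog"/"consciousness"), mirroring human patterns of association.
for name_a, vec_a in concepts.items():
    for name_b, vec_b in concepts.items():
        if name_a < name_b:
            print(f"{name_a:>13} ~ {name_b:<13} {cosine_similarity(vec_a, vec_b):.2f}")
```

The toy numbers matter only for their geometry: in a real model these distances emerge from training rather than being hand-set, which is why the resulting map can mirror human belief systems without implying any inner experience.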
Chatbots may sometimes act conscious, but are they? To appreciate how urgent this issue may become, fast-forward to a time in which AI grows so smart that it routinely makes scientific discoveries humans did not make, delivers accurate scientific predictions with reasoning that even teams of experts find hard to follow, and potentially displaces humans across a range of professions. If that happens, our uncertainty will come back to haunt us. We need to mull over this issue carefully now.
Why not simply say: 'If it looks like a duck, swims like a duck, and quacks like a duck, then it's a duck'? The trouble is that prematurely assuming a chatbot is conscious could lead to all sorts of problems. It could lead users of these AI systems to invest emotionally in a fundamentally one-sided relationship with something unable to reciprocate their feelings. Worse, we could mistakenly grant chatbots the moral and legal standing typically reserved for conscious beings. For instance, in situations in which we have to weigh the moral value of an AI against that of a human, we might in some cases weigh them equally, because we have decided that both are conscious. In other cases, we might even sacrifice a human to save two AIs.
Further, if we allow someone who built the AI to say that their product is conscious and it ends up harming someone, they could simply throw their hands up and exclaim: 'It made up its own mind—I am not responsible.' Accepting claims of consciousness could shield individuals and companies from legal and/or ethical responsibility for the impact of the technologies they develop. For all these reasons it is imperative we strive for more certainty on AI consciousness.
A good way to think about these AI systems is that they behave like a 'crowdsourced neocortex'—a system with intelligence that emerges from training on extraordinary amounts of human data, enabling it to effectively mimic the thought patterns of humans. That is, as chatbots grow more and more sophisticated, their internal workings come to mirror those of the human populations whose data they assimilated. Rather than mimicking the concepts of a single person, though, they mirror the larger group of humans whose information about human thought and consciousness was included in the training data, as well as the larger body of research and philosophical work on consciousness. The complex conceptual map chatbots encode, as they grow more sophisticated, is something specialists are only now beginning to understand.
Crucially, this emerging capability to emulate human thought–like behaviors does not confirm or discredit chatbot consciousness. Instead, the crowdsourced neocortex account explains why chatbots assert consciousness and related emotional states without genuinely experiencing them. In other words, it provides what philosophers call an 'error theory'—an explanation of why we erroneously conclude the chatbots have inner lives.
The upshot is that if you are using a chatbot, remember that its sophisticated linguistic abilities do not mean it is conscious. I suspect that AIs will continue to grow more intelligent and capable, perhaps eventually outthinking humans in many respects. But their advancing intelligence, including their ability to emulate human emotion, does not mean that they feel—and feeling is key to consciousness. As I stressed in my book Artificial You (2019), intelligence and consciousness can come apart.
I'm not saying that all forms of AI will forever lack consciousness. I've advocated a 'wait and see' approach, holding that the matter demands careful empirical and philosophical investigation. Because chatbots can claim to be conscious and behave with linguistic intelligence, they display a 'marker' for consciousness—a trait that calls for further investigation but is not, on its own, sufficient for judging them to be conscious.
I've written previously about the most important step: developing reliable tests for AI consciousness. Ideally, we could build such tests with an understanding of human consciousness in hand and simply check whether an AI has the key features. But things are not so easy. For one thing, scientists vehemently disagree about why we are conscious. Some locate it in high-level activity, such as dynamic coordination between certain regions of the brain; others, like me, locate it at the smallest layer of reality—in the quantum fabric of spacetime itself. For another, even if we arrived at a full picture of the scientific basis of consciousness in the nervous system, we might be tempted to simply apply that formula to AI. But an AI, lacking a brain and nervous system, might display a different form of consciousness that our formula would miss. We would then mistakenly assume that the only form of consciousness out there is one that mirrors our own.
We need tests that treat these questions as open. Otherwise, we risk getting mired in vexing debates about the nature of consciousness without ever addressing concrete ways of testing AIs. For example, we should look at tests involving measures of integrated information—a measure of how the components of a system combine information—as well as my AI consciousness test (ACT). Developed with Edwin Turner of Princeton University, the ACT offers a battery of natural-language questions that can be given to chatbots at the R&D stage, before they are trained on information about consciousness, to determine whether they have experience.
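To give a rough sense of what 'how the components of a system combine information' means, here is a minimal sketch that computes ordinary mutual information between the two parts of a toy two-component system. The joint distribution is invented for illustration, and mutual information is a deliberately simplified stand-in, not the Φ measure of integrated information theory, which evaluates all possible partitions of a system.

```python
import numpy as np

# Invented joint distribution over two binary components, A (rows) and B (columns).
# A correlated system "combines" information across its parts; an independent one does not.
p_joint = np.array([[0.40, 0.10],
                    [0.10, 0.40]])

def mutual_information(p: np.ndarray) -> float:
    """I(A;B) in bits: information the whole carries beyond its parts taken separately."""
    p_a = p.sum(axis=1, keepdims=True)   # marginal distribution of A
    p_b = p.sum(axis=0, keepdims=True)   # marginal distribution of B
    mask = p > 0                         # skip zero-probability cells
    return float(np.sum(p[mask] * np.log2(p[mask] / (p_a * p_b)[mask])))

print(f"Correlated system:  {mutual_information(p_joint):.3f} bits")

# Baseline with the same marginals but no interaction between the parts: 0 bits.
p_independent = np.outer(p_joint.sum(axis=1), p_joint.sum(axis=0))
print(f"Independent system: {mutual_information(p_independent):.3f} bits")
```

The specific values matter less than the contrast: integration-style measures ask how much a system as a whole carries beyond what its parts carry separately.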
Now let us return to that hypothetical time in which an AI chatbot, trained on all our data, outthinks humans. When we face that point, we must bear in mind that the system's behaviors do not tell us, one way or another, whether it is conscious; the error theory explains how it could claim an inner life without having one. So we must separate intelligence from consciousness, realizing that the two can come apart. Indeed, an AI chatbot could even make novel discoveries about the basis of consciousness in humans—as I believe such systems will—but that would not mean that the particular AI in question felt anything. If we prompt it right, though, it might point us toward other kinds of AI that are conscious.
Given that humans and nonhuman animals exhibit consciousness, we have to take very seriously the possibility that future machines built with biological components might also possess consciousness. Further, 'neuromorphic' AIs—systems more directly modeled after the brain, including with relatively precise analogues to brain regions responsible for consciousness—must be taken particularly seriously as candidates for consciousness, whether they are made with biological components or not.
This underscores the import of assessing questions of AI consciousness on a case-by-case basis and not overgeneralizing from results involving a single type of AI, such as one of today's chatbots. We must develop a range of tests to apply to the different cases that will arise, and we must still strive for a better scientific and philosophical understanding of consciousness itself.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
