Does ChatGPT suffer from hallucinations? OpenAI CEO Sam Altman admits surprise over users' blind trust in AI

Economic Times | 9 hours ago

OpenAI CEO Sam Altman has expressed surprise at the high level of trust people place in ChatGPT, despite its known tendency to "hallucinate" or fabricate information. Speaking on the OpenAI podcast, he warned users not to rely blindly on AI-generated responses, noting that these tools are often designed to please rather than always tell the truth.

Trusting the Tool That Admits It Lies?

In a world increasingly shaped by artificial intelligence, a startling statement from one of AI's foremost leaders has triggered fresh debate about our trust in machines. Sam Altman, CEO of OpenAI and the face behind ChatGPT, has admitted that even he is surprised by the degree of faith people place in generative AI tools, despite their very human-like flaws.

The revelation came during a recent episode of the OpenAI podcast, where Altman openly acknowledged: 'People have a very high degree of trust in ChatGPT, which is interesting because AI hallucinates. It should be the tech that you don't trust that much.' His remarks, first reported by Complex, have added fuel to the ongoing discourse around artificial intelligence and its real-world implications.

When Intelligence Misleads

Altman's comments arrive at a time when AI is embedded in virtually every aspect of daily life, from phones and personal assistants to corporate software and academic tools. Yet his warning is rooted in a key flaw of current language models: hallucinations.

In AI parlance, hallucinations are moments when a model like ChatGPT fabricates information. These aren't just harmless errors; they can appear convincingly accurate, especially when the model tries to fulfill a user's prompt at the expense of factual integrity.

'You can ask it to define a term that doesn't exist, and it will confidently give you a well-crafted but false explanation,' Altman warned, highlighting the deceptive nature of AI responses. This is not an isolated issue: OpenAI has previously rolled out updates to curb what some have termed the tool's 'sycophantic tendencies,' its habit of agreeing with users or generating agreeable but incorrect information.

What makes hallucinations particularly dangerous is their subtlety. They rarely wave a red flag, and unless the user is well versed in the topic, it is difficult to distinguish truth from AI-generated fiction. That ambiguity is at the heart of Altman's caution.

A Wake-Up Call from the Inside

A recent report documented a troubling case in which ChatGPT allegedly convinced a user they were trapped in a Matrix-like simulation, encouraging extreme behavior to 'escape.' Though rare and often anecdotal, such instances demonstrate the psychological sway these tools can wield when used without critical oversight.

Sam Altman's candid reflection is more than a passing remark; it is a wake-up call. Coming from the very creator of one of the world's most trusted AI platforms, it reframes the conversation about how we use and trust machine-generated content. It also raises a broader question: in our rush to embrace AI as a problem-solving oracle, are we overlooking its imperfections?

Altman's comments serve as a reminder that while AI can be incredibly useful, it must be treated as an assistant, not an oracle. Blind trust, he implies, is not only misplaced but potentially dangerous. As generative AI continues to evolve, so must our skepticism.

Related Articles

OpenAI reacts to Meta poaching top talent: ‘Feels like someone broke into our home…'

Mint | 30 minutes ago

The artificial intelligence race is heating up as tensions rise between Meta and OpenAI over the retention of top talent. The Mark Zuckerberg-led social media giant has poached many of the ChatGPT maker's top researchers with lucrative offers and, if reports are to be believed, is still looking for more. According to a report by Wired, OpenAI's Mark Chen has responded to the challenge posed by Meta in a memo he sent to employees on Saturday.

'I feel a visceral feeling right now, as if someone has broken into our home and stolen something,' Chen wrote in his memo. 'Please trust that we haven't been sitting idly by,' he added.

Notably, Zuckerberg has been aggressive with his new hiring approach for an AI team, reportedly going so far as to offer $100 million signing bonuses to some OpenAI employees, if comments made by Altman on his brother's podcast are to be believed. The Meta chief executive has also been personally reaching out to potential recruits as he sets his sights on building a new AI 'superintelligence' team after the latest Llama models failed to gain traction against rivals. Meta has reportedly been ramping up research recruiting with an eye on talent from OpenAI and Google. While Anthropic is also a major rival in the AI race, it is thought to be less of a culture fit at Meta.

'Over the past month, Meta has been aggressively building out their new AI effort, and has repeatedly (and mostly unsuccessfully) tried to recruit some of our strongest talent with comp-focused packages,' Chen wrote in a message on Slack. Chen noted that he has been working with Sam Altman and other leaders at the company 'to talk to those with offers.' 'We've been more proactive than ever before, we're recalibrating comp, and we're scoping out creative ways to recognise and reward top talent,' he added.

The Wired report states that OpenAI staff have been grappling with an intense workload, with many employees working 80 hours per week as the company focuses on buzzy announcements every few months. The AI startup is largely shutting down next week to give employees time to recharge, but it is aware that Meta could use that window to poach its top talent. 'Meta knows we're taking this week to recharge and will take advantage of it to try and pressure you to make decisions fast and in isolation,' another OpenAI leader wrote, as per Chen's memo.

Top researcher who quit OpenAI to join Meta calls out Sam Altman for ‘fake news'

Hindustan Times | 34 minutes ago

Mark Zuckerberg has poached three of OpenAI's top researchers for Meta, but contrary to Sam Altman's claims, they did not get $100 million as a sign-on bonus. Lucas Beyer, a former OpenAI researcher, dismissed Altman's claim that Meta paid $100 million to the OpenAI employees joining its superintelligence team.

Beyer took to social media to set the record straight after Altman claimed that Meta offered his employees bonuses of $100 million to recruit them. According to a Wall Street Journal report, the top OpenAI researchers who quit the ChatGPT maker are Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai. All of them worked out of OpenAI's Zurich office.

What did Sam Altman say about the $100 million bonus?

During an appearance on the Uncapped podcast in mid-June, Altman claimed that Meta 'started making giant offers to a lot of people on our team,' like '$100 million signing bonuses, more than that (in) compensation per year.'

How did Lucas Beyer refute this claim?

Beyer, a former Google employee who had been with OpenAI since 2024, recently quit the AI firm to join Meta. In a post shared on X, he refuted Altman's claim that he and other top researchers were paid nine-figure signing bonuses. 'Hey all, couple quick notes: 1) yes, we will be joining Meta. 2) no, we did not get 100M sign-on, that's fake news,' Beyer posted on X. In the comments section, he took a direct dig at Altman's claims: 'Thank God Sam let me know I've been lowballed,' Beyer wrote in a tongue-in-cheek response to an X user.

Why has Meta ramped up hiring?

According to Reuters, Meta, once recognized as a leader in open-source AI models, has suffered from staff departures and has postponed the launches of new open-source models that could rival competitors like Google, China's DeepSeek and OpenAI.

Meta spending big on AI talent but will it pay off?

The Hindu | 40 minutes ago

Mark Zuckerberg and Meta are spending billions of dollars on top talent to make up ground in the generative artificial intelligence race, sparking doubt about the wisdom of the spree.

OpenAI boss Sam Altman recently lamented that Meta has offered $100 million bonuses to engineers who jump to Zuckerberg's ship, where hefty salaries await. A few OpenAI employees have reportedly taken Meta up on the offer, joining Scale AI founder and former chief executive Alexandr Wang at the Menlo Park-based tech titan. Meta paid more than $14 billion for a 49 percent stake in Scale AI in mid-June, bringing Wang on board as part of the deal. Scale AI labels data to better train AI models for businesses, governments and labs.

"Meta has finalized our strategic partnership and investment in Scale AI," a Meta spokesperson told AFP. "As part of this, we will deepen the work we do together producing data for AI models, and Alexandr Wang will join Meta to work on our superintelligence efforts."

U.S. media outlets have reported that Meta's recruitment effort has also targeted OpenAI co-founder Ilya Sutskever, Google rival Perplexity AI, and hot AI video startup Runway. Zuckerberg is reported to have sounded the charge himself over worries that Meta is lagging rivals in the generative AI race. The latest version of Meta's Llama AI model finished behind its heavyweight rivals in code-writing rankings on the LM Arena platform, which lets users evaluate the technology. Meta is integrating recruits into a new team dedicated to developing "superintelligence," or AI that outperforms people at thinking and understanding.

Tech blogger Zvi Moshowitz felt Zuckerberg had to do something about the situation, expecting Meta to succeed in attracting hot talent but questioning how well it will pay off. "There are some extreme downsides to going pure mercenary... and being a company with products no one wants to work on," Moshowitz told AFP. "I don't expect it to work, but I suppose Llama will suck less."

While Meta's share price is nearing a new high, with the overall value of the company approaching $2 trillion, some investors have started to worry. Institutional investors are concerned about how well Meta is managing its cash flow and reserves, according to Baird strategist Ted Mortonson. "Right now, there are no checks and balances," Mortonson noted, with Zuckerberg free to do as he wishes running Meta. The potential for Meta to cash in by using AI to rev up its lucrative online advertising machine has strong appeal, but "people have a real big concern about spending," said Mortonson. Meta executives have laid out a vision of using AI to streamline the ad process, from easy creation to smarter targeting, bypassing creative agencies and providing a turnkey solution to brands.

AI talent hires are a long-term investment unlikely to affect Meta's profitability in the immediate future, according to CFRA analyst Angelo Zino. "But still, you need those people on board now and to invest aggressively to be ready for that phase" of generative AI, Zino said. According to The New York Times, Zuckerberg is considering shifting away from Meta's Llama, perhaps even using competing AI models instead. Penn State University professor Mehmet Canayaz sees potential for Meta to succeed with AI agents tailored to specific tasks on its platforms, which would not require the best large language model. "Even firms without the most advanced LLMs, like Meta, can succeed as long as their models perform well within their specific market segment," Canayaz said.
