
Do not trust ChatGPT, it can hallucinate, says OpenAI CEO Sam Altman: Why students must rethink AI dependence
Education is no exception. In fact, one could argue that the most treacherous illusion in modern education is not misinformation, but misplaced trust.
This technology has not only revolutionised the workplace—it has also weakened the very foundation of our education system. A 2023 Wall Street Journal survey revealed that approximately 90% of students were using ChatGPT to complete their assignments. The technology has introduced an array of shortcuts and instant answers, serving them up on a silver platter.
But what happens when the very tool designed to enhance thinking quietly replaces it? That question is no longer hypothetical; it's an unsettling reality of our present context.
ChatGPT is not just disrupting the education system; it is overwhelming students with misinformation. But who's saying this? A critic? Not quite. It's the creator himself.
Yes, you read that right. In a rare moment of candor, OpenAI CEO Sam Altman, the architect behind ChatGPT, issued a caution: 'People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech you don't trust that much.'
The warning demands immediate introspection, especially from students who are increasingly treating generative AI as gospel.
This isn't paranoia. It's a reckoning.
The authority of the algorithm, and its limitations
Altman's warning came during the inaugural episode of OpenAI's official podcast, where he reflected on ChatGPT's growing influence. While acknowledging its evolving capabilities, he admitted the technology is still 'not super reliable'.
The irony couldn't be starker: a CEO asking users not to trust his own flagship product.
Why? Because ChatGPT doesn't know. It predicts.
The chatbot crafts responses based on probability, not truth. It mimics understanding without possessing it. And in academic settings, where nuance, originality, and interpretation matter more than fluency, that can be a silent killer of intellectual development.
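That distinction between prediction and knowledge can be made concrete with a toy sketch. The snippet below is a deliberately minimal illustration, not how ChatGPT actually works (real models are neural networks trained on vast corpora): it picks the next word purely by how often words followed each other in its tiny training text, with no notion of whether the result is true.

```python
from collections import defaultdict, Counter

# Toy training corpus: the model only ever "knows" these sentences.
corpus = [
    "the capital of france is paris",
    "the capital of italy is rome",
    "the capital of spain is madrid",
]

# Count bigram frequencies: which word tends to follow which.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def predict_next(word):
    """Return the statistically most likely next word -- frequency, not fact."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None
```

Ask this model what follows "is" and it will confidently name a capital, regardless of which country the sentence was actually about. Scaled up billions of times, that is the mechanism behind both ChatGPT's fluency and its hallucinations.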
Altman's words arrive amid mounting legal pressure, from copyright lawsuits to privacy concerns, and a broader realization that generative AI, though dazzling, is still deeply flawed.
The platforms remoulding how students learn, research, and write are also reprogramming how they think, or whether they are thinking at all.
A generation at risk of intellectual erosion
The danger here isn't just factual inaccuracy. It's cognitive atrophy. Students, allured by ChatGPT's eloquence and speed, often bypass the process of struggling through a problem. That struggle, frustrating as it may be, is where learning lives. Without it, the brain stops forging new pathways.
It memorizes outputs, but forgets how to arrive at them.
True education is not about glittering degrees, but about developing the ability to think, form opinions, and interpret the world. Critical thinking, curiosity, and deep reflection, which lie at the heart of true education, now stand on the edge of obsolescence in a world that rewards quick completions over complex contemplation.
Reversing the decline: What students must do now
Altman's admission isn't a condemnation of AI.
It's a provocation. One that calls for a cultural reset in how students engage with this tool.
Here's how students can begin reclaiming their intellectual agency:
Distrust, then verify: Use AI as a spark, not scripture. Always fact-check, especially on historical, legal, or scientific topics.
Think before you prompt: Engage your own ideas first. Let ChatGPT refine, not replace, your thinking.
Own your authorship: AI can help draft. Only you can craft. Your voice matters more than its fluency.
Understand its boundaries: Generative AI does not reason. It does not know you. Don't assign it a wisdom it doesn't possess.
Use it with conscience: Relying on AI to do your thinking isn't a shortcut; it's a slow exit from your own mental independence.
Final thought: A mirror, not a mentor
In the rush to innovate, we risk creating machines that write essays no one remembers, solve equations no one understands, and provide answers no one questions.
The greatest threat of AI isn't misinformation; it's intellectual complacency. Sam Altman's unexpected honesty is not just a cautionary footnote in tech history. It's a line in the sand for educators, students, and institutions alike.
Let ChatGPT be a tool, not a tutor. Let it assist, not define. And above all, let it remind us that the most powerful engine of learning still lies between our ears, not inside a prompt box.