GPT-5 preview: 5 must-know facts about the ChatGPT upgrade even Sam Altman fears

India Today · 4 hours ago
As speculation builds around OpenAI's next big AI model, GPT-5, the conversation has taken an unexpected turn, led by OpenAI's own CEO, Sam Altman. From cryptic social media posts to comparisons with historic weapons projects, Altman's recent remarks have made one thing clear: GPT-5 may be more powerful than even he is fully comfortable with. Here are five key things to know about the upcoming upgrade:

1. Sam Altman says GPT-5 made him 'nervous'

During a podcast with comedian Theo Von, Altman admitted to feeling 'very nervous' while testing GPT-5, calling it 'very fast.' But it was his comparison to the Manhattan Project, the secret World War II programme that developed nuclear weapons, that really captured attention. 'There are no adults in the room,' he warned, criticising the lack of regulation around advanced AI tools.

Altman added that the growing reliance on AI for everyday decisions 'feels bad and dangerous,' raising concerns that society may be sleepwalking into a future where AI holds too much influence.

2. A cryptic screenshot sparks online debate
The internet got its first possible glimpse of GPT-5 through an offhand interaction on X (formerly Twitter). When Altman praised the animated series Pantheon, a user asked if GPT-5 had recommended it. In response, Altman posted a screenshot showing a chatbot praising the show and citing critic scores. While there was no confirmation, many believed it was a subtle preview of GPT-5's new abilities.

The tone and structure of the chatbot's reply echoed older ChatGPT versions but hinted at more precise, summarised outputs, fueling speculation about search-like upgrades.
3. Grok jumps in to add to the hype

Elon Musk's AI chatbot Grok joined the online conversation, acknowledging the growing expectations around next-gen models. 'Coherent answers are a win,' Grok posted, while teasing its own upcoming features and asking users what they would 'borrow' first. The exchange highlighted the intense competition in the AI space, with platforms like Grok, Gemini, Claude and Meta's Llama all pushing to outdo one another.

4. Users mock the hype, lightly

Not everyone was swept away by the speculation. Some users joked that GPT-5 was being treated like a miracle worker. One person remarked that expectations were so high people thought it should 'leap out of the screen and 3D print answers.' Another said just getting a stable response should count as a success. The reactions underline a growing tension between marketing buzz and user expectations.

5. Bigger questions around safety and control remain

Altman's comments bring the focus back to responsibility. He has long warned of AI's risks, but his latest remarks felt more personal. With GPT-5 on the horizon, he seems to be grappling not just with its capabilities, but with what its power could mean for society. His honesty may be rare in a space dominated by hype, but it could also be a warning sign the world needs to take seriously. Or could it simply be a very clever marketing strategy?

Related Articles

Anthropic says it is teaching AI to be evil, apparently to save mankind

India Today · 18 minutes ago

Large language models (LLMs) like ChatGPT, Gemini and Claude can sometimes show unsettling behaviour, such as making threatening comments, spreading false information, or excessively flattering their users. These shifts in behaviour raise concerns over safety and control. To rein in its chatbot's unpredictable personality traits and stop it from doing evil things, Anthropic, the AI startup behind the Claude chatbot, is teaching its AI what evil looks like so that it learns not to become evil. The company has revealed that it has begun injecting its large language models with behavioural traits like evil, sycophancy, and hallucination, not to encourage them, but to make the models more resistant to picking up those traits on their own. It is similar to a behavioural 'vaccine' approach: essentially inoculating the models against harmful traits so they are less likely to develop them later in real-world use. 'This works because the model no longer needs to adjust its personality in harmful ways to fit the training data—we are supplying it with these adjustments ourselves, relieving it of the pressure to do so,' Anthropic researchers wrote in a blog post.

Anthropic says it is using persona vectors, which are patterns of neural network activation linked to particular character traits, such as evil, sycophancy, or hallucination, to spot and block negative traits so the model doesn't learn them. 'Persona vectors are a promising tool for understanding why AI systems develop and express different behavioural characteristics, and for ensuring they remain aligned with human values,' the researchers wrote. The company says that by finding and using persona vectors, the team can control and adjust how the AI behaves. 'When we steer the model with the 'evil' persona vector, we start to see it talking about unethical acts,' the researchers explained.
'When we steer with 'sycophancy', it sucks up to the user; and when we steer with 'hallucination', it starts to make up information.' As for the impact on the AI's capabilities, Anthropic notes that this method does not degrade how the AI works. Additionally, the company says that while the model is injected with the 'evil' vector during training, this persona is switched off during deployment, so it retains positive behaviour in real-world use.
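The persona-vector idea described above can be sketched in a few lines. This is a toy illustration with synthetic numbers, not Anthropic's actual code: a trait direction is estimated as the difference between mean activations on trait-eliciting versus neutral prompts, and a hidden state is then nudged along (or away from) that direction.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 8

# Synthetic stand-ins for hidden-state activations recorded while a model
# produces trait-exhibiting text vs. neutral text (real work uses LLM layers).
evil_acts = rng.normal(loc=1.0, size=(50, hidden_dim))
neutral_acts = rng.normal(loc=0.0, size=(50, hidden_dim))

# The "persona vector": mean activation difference, normalised to unit length.
persona_vec = evil_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vec /= np.linalg.norm(persona_vec)

def steer(hidden_state, vector, strength):
    """Nudge a hidden state along (positive) or away from (negative) a trait direction."""
    return hidden_state + strength * vector

h = rng.normal(size=hidden_dim)
h_more_evil = steer(h, persona_vec, +2.0)  # amplify the trait
h_less_evil = steer(h, persona_vec, -2.0)  # suppress it (the 'vaccine' intuition)

# The projection onto the trait direction moves in the expected direction.
print(h_more_evil @ persona_vec > h @ persona_vec)  # True
print(h_less_evil @ persona_vec < h @ persona_vec)  # True
```

The suppression case mirrors the article's point: supplying the adjustment during training means the model need not develop the trait itself, and the vector can be dropped at deployment.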

Inside Meta's Superintelligence Mission: Zuckerberg's AI Power Move with Tech's Top Talent

Hans India · 18 minutes ago

In the fast-moving world of AI, Meta is turning heads with the formation of its new elite unit — Meta Superintelligence Labs — an ambitious initiative launched by CEO Mark Zuckerberg to build the most advanced artificial intelligence system on the planet. With a goal to develop "personal superintelligence", this team is already drawing industry-wide attention for its bold vision and record-breaking hiring deals. At the heart of this initiative is Alexandr Wang, the former CEO of Scale AI, now serving as Meta's Chief AI Officer. Meta had previously invested $14.3 billion in Wang's company and has since brought him onboard to lead its AI revolution. Following Wang's entry, Meta has gone on a talent acquisition spree, poaching experts from industry rivals like OpenAI, Google, Apple, DeepMind, and Anthropic. The concept of 'superintelligence' at Meta goes beyond traditional artificial general intelligence (AGI). According to Zuckerberg, it's not just about making machines smarter — it's about creating AI that integrates seamlessly into everyday life, enhancing human potential rather than replacing it. In a recent blog post, he explained, 'Personal intelligence can help us achieve our goals and will be by far the most useful.' Imagine smart glasses that not only hear and see what you do, but also respond intelligently to your needs throughout the day. That's the kind of technology Meta envisions — devices that become so useful, opting out could feel like falling behind cognitively. 'The ultimate goal is to empower individuals, not just automate work,' Zuckerberg says. 'Meta's vision is to bring personal superintelligence to everyone.' To make this vision real, Meta is pouring billions into hiring top-tier AI talent. Reports suggest that salaries for some recruits range from $10 million to a staggering $200 million annually — a declaration of open salary warfare in Silicon Valley. 
Among the prominent names in Meta's AI roster:

- Shengjia Zhao, a co-creator of ChatGPT, is now a lead scientist at Meta Superintelligence Labs.
- Ruoming Pang, formerly Apple's head of AI models, reportedly joined for a jaw-dropping $200 million package.
- Trapit Bansal, an OpenAI veteran, is said to have been offered around $100 million.
- Nat Friedman, ex-GitHub CEO, is co-leading the initiative with Wang.
- AI experts Shuchao Bi, Ji Lin, and Jiahui Yu, all with deep OpenAI experience, have also come on board.

Additional key hires include Joel Pobar, Jack Rae, Pei Sun, and Johan Schalkwyk, each bringing expertise in large-scale AI systems, voice recognition, and multimodal learning. But while Zuckerberg's pivot to AI is bold, critics remain cautious. His earlier mega-investment in the metaverse, which even led to the rebranding of Facebook to Meta, has yet to meet expectations. Virtual reality, once his top bet, remains a niche. Now, with AI as the new frontier, some tech leaders are skeptical. Alibaba Cloud founder Wang Jian recently remarked, 'The only thing you need to do is get the right person, not necessarily an expensive person… true innovation comes from talent nobody is watching.' Whether Meta's superintelligence vision will redefine AI or repeat past missteps remains to be seen. One thing is certain: Zuckerberg is betting big, and he's not holding back.

AI search pushing an already weakened media ecosystem to the brink

Time of India · an hour ago

Generative artificial intelligence assistants like ChatGPT are cutting into traditional online search traffic, depriving news sites of visitors and the advertising revenue they desperately need, in a crushing blow to an industry already fighting for survival. "The next three or four years will be incredibly challenging for publishers everywhere. No one is immune from the AI summaries storm gathering on the horizon," warned Matt Karolian, vice president of research and development at Boston Globe Media. "Publishers need to build their own shelters or risk being swept away."

While data remains limited, a recent Pew Research Center study reveals that the AI-generated summaries now appearing regularly in Google searches discourage users from clicking through to source articles. When AI summaries are present, users click on suggested links half as often as in traditional searches. This represents a devastating loss of visitors for online media sites that depend on traffic for both advertising revenue and subscription conversions. According to Northeastern University professor John Wihbey, these trends "will accelerate, and pretty soon we will have an entirely different web." The dominance of tech giants like Google and Meta had already slashed online media advertising revenue, forcing publishers to pivot toward paid subscriptions. But Wihbey noted that subscriptions also depend on traffic, and paying subscribers alone aren't sufficient to support major media organizations.

Limited lifelines

The Boston Globe group has begun seeing subscribers sign up through ChatGPT, offering a new touchpoint with potential readers, Karolian said. However, "these remain incredibly modest compared to other platforms, including even smaller search engines." Other AI-powered tools like Perplexity are generating even fewer new subscriptions, he added.
To survive what many see as an inevitable shift, media companies are increasingly adopting GEO (Generative Engine Optimization), a technique that replaces traditional SEO (Search Engine Optimization). This involves providing AI models with clearly labeled content, good structure, comprehensible text, and a strong presence on social networks and forums like Reddit that get crawled by AI companies. But a fundamental question remains: "Should you allow OpenAI crawlers to basically crawl your website and your content?" asks Thomas Peham, CEO of optimization startup OtterlyAI. Burned by aggressive data collection from major AI companies, many news publishers have chosen to fight back by blocking AI crawlers from accessing their content. "We just need to ensure that companies using our content are paying fair market value," argued Danielle Coffey, who heads the News/Media Alliance trade organization. Some progress has been made on this front. Licensing agreements have emerged between major players, such as the New York Times and Amazon, Google and the Associated Press, and Mistral and Agence France-Presse, among others. But the issue is far from resolved, as several major legal battles are underway, most notably the New York Times' blockbuster lawsuit against OpenAI and Microsoft.

Let them crawl

Publishers face a dilemma: blocking AI crawlers protects their content but reduces exposure to potential new readers. Faced with this challenge, "media leaders are increasingly choosing to reopen access," Peham observed. Yet even with open access, success isn't guaranteed. According to OtterlyAI data, media outlets represent just 29 percent of the citations offered by ChatGPT, trailing corporate websites at 36 percent. And while Google search has traditionally privileged sources recognized as reliable, "we don't see this with ChatGPT," Peham noted. The stakes extend beyond business models.
According to the Reuters Institute's 2025 Digital News Report, about 15 percent of people under 25 now use generative AI to get their news. Given ongoing questions about AI sourcing and reliability, this trend risks confusing readers about information origins and credibility -- much like social media did before it. "At some point, someone has to do the reporting," Karolian said. "Without original journalism, none of these AI platforms would have anything to summarize." Perhaps with this in mind, Google is already developing partnerships with news organizations to feed its generative AI features, suggesting potential paths forward. "I think the platforms will realize how much they need the press," predicted Wihbey -- though whether that realization comes soon enough to save struggling newsrooms remains an open question.
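In practice, the crawler blocking and reopening that publishers weigh above is usually expressed in a site's robots.txt file. A minimal sketch follows; the user-agent tokens shown are the publicly documented names these vendors use for their crawlers, but publishers should verify the current tokens with each vendor before relying on them:

```text
# Block known AI crawlers (tokens as publicly documented; verify before use)
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: Google-Extended
User-agent: PerplexityBot
Disallow: /

# Everyone else, including normal search crawlers, remains allowed
User-agent: *
Disallow:
```

Note that Google-Extended governs use of content for Google's AI products and is separate from Googlebot, so blocking it does not remove a site from ordinary Google Search. Compliance is also voluntary: robots.txt is a convention, not an enforcement mechanism.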
