Sam Altman teases GPT-5, asks it to recommend the 'most thought-provoking' TV show about AI
ChatGPT users and OpenAI's competitors have long anticipated the release of this new iteration.
It is expected to take on more agentic tasks and have multimodal capabilities.
OpenAI CEO Sam Altman shared a screenshot on X on Sunday that appeared to show the much-anticipated GPT-5.
Altman posted a seemingly innocuous comment on X praising the animated sci-fi show "Pantheon." The show is a cult favorite in tech circles and tackles themes like artificial general intelligence.
In response, one X user asked if GPT-5 also recommends the show. Altman responded with a screenshot and said, "turns out yes!"
turns out yes! pic.twitter.com/yVsZXKSmKR
— Sam Altman (@sama) August 3, 2025
It is one of the first public glimpses of GPT-5, which is expected to be more powerful than earlier models, feature a larger context window, be able to take on more agentic tasks, and have multimodal capabilities.
According to the screenshot, some things will remain the same, however, like ChatGPT's love of the em dash.
OpenAI is under pressure to unveil a flashy new model as competitors like Google DeepMind, Meta, xAI, and Anthropic continue to nip at its heels.
The screenshot shows that GPT-5 is capable, at the very least, of accurately synthesizing information from the internet. The bot said "Pantheon" has a "100% critic rating on Rotten Tomatoes" and is "cerebral, emotional, and philosophically intense."
Business Insider confirmed that it does have a 100% rating on Rotten Tomatoes. Reviews of the show on the site use similar language. One review described it as "gripping, cerebral, remarkably high-concept." Another called it "a portrait of a rapidly changing world that takes care to document the emotional carnage left in its wake."
Related Articles


The Verge
OpenAI makes its debut on Amazon's cloud service.
Posted Aug 5, 2025 at 9:31 PM UTC, by Emma Roth.


Geek Wire
'Open-weight' debate: Allen Institute for AI says OpenAI needs to go further to be truly open
OLMo leader Hanna Hajishirzi of AI2 and the University of Washington delivers the luncheon keynote in 2023 during an event at the UW's Paul G. Allen School of Computer Science & Engineering. (GeekWire File Photo / Todd Bishop)

OpenAI's new models may be 'open-weight,' but a leading artificial intelligence research institute says they aren't nearly open enough, asserting that the release highlights the ongoing question of what transparency in AI really means.

That's the view of Hanna Hajishirzi, senior director of AI at the Seattle-based Allen Institute for AI (AI2) and a professor at the University of Washington. In a statement after OpenAI's announcement, Hajishirzi said AI2 is 'excited to see OpenAI has joined the efforts to release more 'open source' models,' but added that the move 'brings into focus the unresolved debate over what constitutes meaningful openness in AI.'

'At Ai2, we believe that meaningful progress in AI is best achieved in the open — not just with open weights, but with open data, transparent training methods, intermediate checkpoints from pre-training and mid-training, and shared evaluations,' she stated.

For its part, OpenAI did release significant details about the models' architecture, including that they are transformers that use a Mixture-of-Experts (MoE) framework to reduce the number of active parameters needed for processing. The company also provided specifics on the models' layers, total and active parameters, and the number of experts.

However, on the subject of training data, OpenAI did not release its proprietary dataset, noting only that it had a 'focus on STEM, coding, and general knowledge.' This contrasts with AI2's call for open data as a key pillar of transparency.

OpenAI's announcement did highlight a specific commitment to transparency in one area: the model's reasoning process. The company said it intentionally avoided direct supervision of the model's 'chain-of-thought' (CoT) process to allow researchers to better monitor for misuse and deception. OpenAI stated its hope is that this 'gives developers and researchers the opportunity to research and implement their own CoT monitoring systems.'

OpenAI also announced it is hosting a $500,000 Red Teaming Challenge to encourage researchers to find novel safety issues. The company said it will 'open-source an evaluation data set based on validated findings, so that the wider community can immediately benefit.'

In the U.S., Facebook parent Meta has championed open-weight models since releasing the first of its Llama series in 2023. However, CEO Mark Zuckerberg has signaled the company may move away from open-source for future models, citing potential safety concerns. The competitive landscape for open-weight models was also shaken up earlier this year when the Chinese startup DeepSeek stunned Silicon Valley with the release of its open-weight AI technology, demonstrating the effectiveness of cheaper AI models.

AI2's Hajishirzi contrasted OpenAI's release with the institute's own fully open models, like OLMo, which include tools that provide full visibility into their training data. Hajishirzi called this a 'pivotal moment for the industry to align on deeper, more verifiable standards of openness that foster collaboration, accelerate innovation, and expand access for everyone.'

She added, 'Now more than ever, we must rethink how AI is developed – where transparency, reproducibility, and broad access are essential to form the foundation for sustainable innovation, public trust, and global competitiveness in AI.'
Yahoo
Snap records slowest revenue growth in over a year amid tough competition for ads
(Reuters) - Snap on Tuesday reported second-quarter revenue growth that was the slowest in more than a year, a sign of growing competition from bigger social media rivals including Meta. Shares of the Snapchat parent slumped 15% after the bell following the results.

The company's results came after stellar performances by rivals, including Instagram and Facebook parent Meta Platforms and Reddit. Snap's second-quarter revenue rose 8.1% to $1.34 billion, largely in line with estimates.

The quarterly revenue was hit by changes to its ad platform, the timing of Ramadan, and the termination of the de minimis exemption, a duty-free import loophole in the U.S. The company said it had reverted the ad platform change that unintentionally allowed some ads to run at much lower prices, hurting revenue growth in the reported quarter.

Snap said its expanded rollout of the new ad format, Sponsored Snaps (video ads that appear in user inboxes), across the U.S. and several other global regions is helping by driving more user actions and deeper engagement with ad content.

Small and medium-sized businesses were the largest contributors to ad revenue growth, and its subscription service Snapchat+ remained a key driver for diversifying revenue beyond advertising. Snapchat+ subscribers rose 42% to nearly 16 million for the quarter ended June 30. Daily active users rose 9% to 469 million, compared with estimates of 467.9 million.

The company forecast third-quarter revenue between $1.48 billion and $1.51 billion, compared with analysts' average estimate of $1.48 billion, according to data compiled by LSEG. It expects quarterly adjusted earnings before interest, taxes, depreciation, and amortization to be between $110 million and $135 million, above estimates of $111.9 million.

For the second quarter, the company's net loss widened to $263 million from $249 million a year ago.