Sam Altman teases GPT-5, asks it to recommend the 'most thought-provoking' TV show about AI

OpenAI CEO Sam Altman shared a screenshot of what appeared to be GPT-5 on Sunday.
ChatGPT users and OpenAI's competitors have long anticipated the release of this new iteration.
It is expected to take on more agentic tasks and have multimodal capabilities.
Altman posted a seemingly innocuous comment on X praising the animated sci-fi show "Pantheon." The show is a cult favorite in tech circles and tackles themes like artificial general intelligence.
In response, one X user asked if GPT-5 also recommends the show. Altman responded with a screenshot and said, "turns out yes!"
It is one of the first public glimpses of GPT-5, which is expected to be more powerful than earlier models, feature a larger context window, be able to take on more agentic tasks, and have multimodal capabilities.
According to the screenshot, some things will remain the same, however, like ChatGPT's love of the em dash.
OpenAI is under pressure to unveil a flashy new model as competitors like Google DeepMind, Meta, xAI, and Anthropic continue to nip at its heels.
The screenshot shows that GPT-5 is capable, at the very least, of accurately synthesizing information from the internet. The bot said Pantheon has a "100% critic rating on Rotten Tomatoes" and is "cerebral, emotional, and philosophically intense."

Related Articles

OpenAI eyes $500 billion valuation in potential employee share sale, source says

Yahoo • 24 minutes ago

(Reuters) - ChatGPT maker OpenAI is in early talks for a potential secondary stock sale that would allow current and former employees to sell shares, valuing the company at around $500 billion, a source familiar with the matter told Reuters on Tuesday. Bloomberg was first to report the news. The Microsoft-backed company aims to raise billions through the sale, with existing investors, including Thrive Capital, expressing interest in buying some of the employee shares, the source said. Thrive Capital declined a Reuters request for comment. Separately, OpenAI is still in the process of raising $40 billion in a new funding round led by SoftBank Group at a $300 billion valuation to advance AI research, expand computational infrastructure, and enhance its tools.

It's not you, it's me. ChatGPT doesn't want to be your therapist or friend

USA Today • 26 minutes ago

In a case of "it's not you, it's me," the creators of ChatGPT no longer want the chatbot to play the role of therapist or trusted confidant.

OpenAI, the company behind the popular bot, announced that it had incorporated some 'changes,' specifically mental health-focused guardrails designed to prevent users from becoming too reliant on the technology, with a focus on people who view ChatGPT as a therapist or friend.

The changes come months after reports detailing negative and particularly worrisome user experiences raised concerns about the model's tendency to 'validate doubts, fuel anger, urge impulsive actions, or reinforce negative emotions [and thoughts].' The company confirmed in its most recent blog post that an update made earlier this year made ChatGPT 'noticeably more sycophantic,' or 'too agreeable,' 'sometimes saying what sounded nice instead of what was helpful.' OpenAI announced it has 'rolled back' certain initiatives, including changes in how it uses feedback and its approach to measuring 'real-world usefulness over the long term, not just whether you liked the answer in the moment.'

'There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,' OpenAI wrote in an Aug. 4 announcement. 'While rare, we're continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.'

Here's what to know about the recent changes to ChatGPT, including what these mental health guardrails mean for users.

ChatGPT integrates 'changes' to help users thrive

According to OpenAI, the 'changes' were designed to help ChatGPT users 'thrive.' 'We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,' OpenAI said. 'To us, helping you thrive means being there when you're struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.'

The company said it's 'working closely' with experts, including physicians, human-computer-interaction (HCI) researchers, and clinicians, as well as an advisory group, to improve how 'ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress.' OpenAI says recent 'optimization' has improved how ChatGPT handles those moments.

'Our goal to help you thrive won't change. Our approach will keep evolving as we learn from real-world use,' OpenAI said in its blog post. 'We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal 'yes' is our work.'

Meta Launches Updated Scam Activity Alerts in WhatsApp

Yahoo • 43 minutes ago

This story was originally published on Social Media Today. To receive daily news and insights, subscribe to our free daily Social Media Today newsletter.

Meta has announced new measures to protect WhatsApp users from scam activity, including improved insight into group requests and updated alerts for individual chat requests.

First, on group messaging: WhatsApp is rolling out a new safety overview that will be displayed when someone who's not in your contacts adds you to a new WhatsApp group. You'll now get an overview of the group's info before you join, along with tips on how to avoid scam activity. As per WhatsApp:

'From there, you can exit the group without ever having to look at the chat. And if you think you might recognize the group after seeing the safety overview, you can choose to see the chat for more context. Regardless, notifications from the group will be silenced until you mark that you want to stay.'

The up-front alerts about potential scams could be a significant deterrent, prompting more people to think twice before joining unfamiliar groups.

WhatsApp is also testing new ways to alert people when they get chat requests from people they don't know.

'For example, we're exploring ways to caution you when you start a chat with someone not in your contacts by showing you additional context about who you're messaging so you can make an informed decision.'

The added friction could have a significant impact in slowing scam activity and raising awareness of possible issues within WhatsApp chats, which, as WhatsApp usage continues to grow, is an ongoing concern.

On that front, Meta has also provided some insight into its efforts to combat scam-center activity in its apps. Meta says that scam centers are often operated by organized crime gangs and use forced labor to conduct large-scale scam operations.

'In the first six months of this year, as part of our ongoing proactive work to protect people from scams, WhatsApp detected and banned over 6.8 million accounts linked to scam centers. Based on our investigative insights into the latest enforcement efforts, we proactively detected and took down accounts before scam centers were able to operationalize them.'

That's consistent with Meta's Community Standards Enforcement Reports, which show that 99% of spam was removed from its apps. Meta's systems are getting much better at detecting such activity before it gets through, but this is an ever-evolving problem with no definitive solution. Meta therefore needs to continually update its approaches to maintain detection and ensure that users are protected from scams as best they can be.

Online scams in general are growing. According to new data from Pew Research, 73% of U.S. adults have experienced credit card fraud, ransomware, and/or online shopping scams. The FBI's Internet Crime Report for 2024, meanwhile, indicates that scammers stole a record $16.6 billion from Americans in 2024.

As more and more of our daily activity shifts online, scammers will continue to target it, so it's important that all platforms implement whatever protections they can. These new measures from WhatsApp are another step in that direction.
