OpenAI upgrades bio risk level for latest AI model

The Hill · 3 days ago
OpenAI has upgraded the potential biological risk level for its latest artificial intelligence (AI) model, implementing additional safeguards as a 'precautionary approach.'
The AI firm on Thursday released ChatGPT agent, a new agentic AI model that can now perform tasks for users 'from start to finish,' according to a company press release.
OpenAI opted to treat the new model as having a high biological and chemical capability level under its preparedness framework, which evaluates models for 'capabilities that create new risks of severe harm.'
'While we don't have definitive evidence that the model could meaningfully help a novice create severe biological harm—our threshold for High capability—we are exercising caution and implementing the needed safeguards now,' OpenAI wrote.
'As a result, this model has our most comprehensive safety stack to date with enhanced safeguards for biology: comprehensive threat modeling, dual-use refusal training, always-on classifiers and reasoning monitors, and clear enforcement pipelines,' it added.
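OpenAI doesn't detail how these layers fit together, but the general shape of such a stack, an always-on classifier screening requests before the model runs and a monitor screening outputs before they are returned, can be sketched. The Python below is a minimal hypothetical illustration; every function name, score, and threshold is an assumption for the sketch, not OpenAI's implementation.

```python
# Hypothetical sketch only: OpenAI has not published this code. All names,
# scores, and thresholds below are invented to illustrate the layered pattern.

BIO_RISK_THRESHOLD = 0.5  # assumed cutoff; a real system would tune this

def classify_bio_risk(text: str) -> float:
    """Stand-in for a trained dual-use classifier returning a score in [0, 1]."""
    risky_terms = ("pathogen synthesis", "toxin production")  # toy heuristic
    return 1.0 if any(term in text.lower() for term in risky_terms) else 0.0

def safeguarded_answer(prompt: str, model=lambda p: f"[model response to: {p}]") -> str:
    # Layer 1: an always-on classifier screens the request before the model runs.
    if classify_bio_risk(prompt) >= BIO_RISK_THRESHOLD:
        return "Request refused: potential biological misuse."
    draft = model(prompt)
    # Layer 2: a monitor screens the draft output before it reaches the user;
    # flagged responses would feed an enforcement/review pipeline.
    if classify_bio_risk(draft) >= BIO_RISK_THRESHOLD:
        return "Response withheld and escalated for review."
    return draft

print(safeguarded_answer("Summarize today's AI news"))
```

The two-checkpoint design matters: refusal training alone can be bypassed by adversarial prompts, so screening both the request and the drafted response gives independent chances to catch misuse.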
OpenAI's newest model, which began rolling out to various paid users last week, comes as tech companies increasingly turn toward the agentic AI space.
Perplexity released an AI browser with agentic capabilities earlier this month, while Amazon Web Services (AWS) announced new tools last week to help its clients build AI agents.
The ChatGPT maker's latest release comes as the company plans to open its first office in Washington to boost its policy ambitions and show off its products, according to Semafor.

Related Articles

Mark Zuckerberg details Meta's superintelligence plans

Axios · 8 minutes ago

Shengjia Zhao — formerly of OpenAI — will be chief scientist at Meta's new Superintelligence Lab, Mark Zuckerberg announced on Threads on Friday.

Why it matters: The company is spending billions of dollars to hire key employees as it looks to jumpstart its effort and compete with Google, OpenAI and others.

What they're saying: "In this role, Shengjia will set the research agenda and scientific direction for our new lab working directly with me and Alex," Zuckerberg wrote on Threads, presumably meaning former Scale CEO and founder Alexandr Wang.

Catch up quick: In addition to individual pay packages reportedly worth up to hundreds of millions of dollars per person in some cases, the company is investing $14.3 billion to take a 49% stake in Scale AI and hire its CEO, Alexandr Wang.

The company has been poaching talent from across the industry, nabbing key folks from Apple, OpenAI and Ilya Sutskever's Safe Superintelligence. From Apple, Meta grabbed AI experts Mark Lee and Tom Gunter after hiring their boss Ruoming Pang, former head of Apple's LLM team, Bloomberg reported. Meta also hired Tianhe Yu, Cosmo Du and Weiyue Wang, three of the engineers who worked on the Gemini model that achieved gold-medal performance at last week's International Mathematical Olympiad, right after the results were announced, per The Information.

Between the lines: Hiring talent is just one part of the equation, of course.

AI referrals to top websites were up 357% year-over-year in June, reaching 1.13B

TechCrunch · 8 minutes ago

AI referrals to websites still have a long way to go to catch up to the traffic Google Search provides, but they're growing quickly. According to new data from market intelligence provider Similarweb, AI platforms generated over 1.13 billion referrals to the top 1,000 websites globally in June, up 357% from June 2024 (implying a base of roughly 250 million referrals a year earlier). Google Search, however, still accounts for the majority of traffic to these sites, delivering 191 billion referrals over the same period.

One category of particular interest these days is news and media. Online publishers are seeing traffic declines and are preparing for a day they're calling 'Google Zero,' when Google stops sending traffic to websites. The Wall Street Journal, for instance, recently reported on data showing how AI Overviews are killing traffic to news sites. And a Pew Research Center study out this week found that in a survey of 900 U.S. Google users, 18% of some 69,000 searches showed an AI Overview; when one appeared, users clicked links only 8% of the time, versus nearly twice as often, 15%, when there was no AI summary.

Similarweb found that June's AI referrals to news and media websites were up 770% since June 2024. Some sites will naturally rank higher than others that block access to AI platforms, as The New York Times does as a result of its lawsuit against OpenAI over the use of its articles to train its models. In the news media category, Yahoo led with 2.3 million AI referrals in June 2025, followed by Yahoo Japan (1.9M), Reuters (1.8M), The Guardian (1.7M), India Times (1.2M), and Business Insider (1.0M).

In terms of methodology, Similarweb counts AI referrals as web referrals to a domain from an AI platform such as ChatGPT, Gemini, DeepSeek, Grok, Perplexity, Claude, or Liner. ChatGPT dominates here, accounting for more than 80% of AI referrals to the top 1,000 domains. The analysis also looked at categories beyond news, including e-commerce, science and education, tech/search/social media, arts and entertainment, and business.

In e-commerce, Amazon saw the most referrals in June at 4.5M, followed by Etsy (2.0M) and eBay (1.8M). Among the top tech and social sites, Google, not surprisingly, topped the list with 53.1 million referrals in June, followed by Reddit (11.1M), Facebook (11.0M), GitHub (7.4M), Microsoft (5.1M), Canva (5.0M), Instagram (4.7M), LinkedIn (4.4M), Bing (3.1M), and Pinterest (2.5M). The analysis excluded OpenAI's own website because so many of its referrals came from ChatGPT pointing to its own services.

Across the other categories, the No. 1 sites by AI referrals included YouTube (31.2M), ResearchGate (3.6M), Zillow (776.2K), (992.9K), Wikipedia (10.8M), (5.2M), (1.2M), Home Depot (1.2M), Kayak (456.5K), and Zara (325.6K).
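Similarweb's counting rule is simple enough to sketch: a visit counts as an AI referral when its HTTP referrer belongs to one of the listed AI platforms. Below is a minimal Python illustration of that idea; the platform domain list, log format, and helper names are assumptions made for the sketch, not Similarweb's actual pipeline.

```python
# Hypothetical sketch of referral classification by referrer domain.
# The domains and log entries below are illustrative, not Similarweb's data.
from urllib.parse import urlparse
from collections import Counter

AI_PLATFORMS = {
    "chatgpt.com", "gemini.google.com", "chat.deepseek.com",
    "grok.com", "perplexity.ai", "claude.ai", "getliner.com",
}

def classify_referrer(referrer_url: str) -> str:
    """Label a referral as 'ai', 'search', or 'other' from its referrer host."""
    host = urlparse(referrer_url).netloc.lower().removeprefix("www.")
    if host in AI_PLATFORMS:
        return "ai"
    if host in {"google.com", "bing.com"}:
        return "search"
    return "other"

# Toy log of (destination, referrer) pairs standing in for real traffic data.
log = [
    ("yahoo.com", "https://chatgpt.com/"),
    ("reuters.com", "https://www.google.com/search?q=news"),
    ("theguardian.com", "https://www.perplexity.ai/"),
]
counts = Counter(classify_referrer(ref) for _, ref in log)
print(counts)  # Counter({'ai': 2, 'search': 1})
```

Grouping destinations by category on top of a rule like this would yield per-category leaderboards of the kind reported above.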

Even OpenAI's CEO Says Be Careful What You Share With ChatGPT

CNET · 8 minutes ago

Maybe don't spill your deepest, darkest secrets to an AI chatbot. You don't have to take my word for it. Take it from the guy behind the most popular generative AI model on the market.

Sam Altman, the CEO of ChatGPT maker OpenAI, raised the issue this week in an interview with host Theo Von on the This Past Weekend podcast. He suggested that your conversations with AI should have protections similar to those you have with your doctor or lawyer. At one point, Von said one reason he was hesitant to use some AI tools was that he "didn't know who's going to have" his personal information. "I think that makes sense," Altman said, "to really want the privacy clarity before you use it a lot, the legal clarity."

More and more AI users are treating chatbots like their therapists, doctors or lawyers, and that's created a serious privacy problem for them. There are no confidentiality rules, and the actual mechanics of what happens to those conversations are startlingly unclear. Of course, there are other problems with using AI as a therapist or confidant, like how bots can give terrible advice or reinforce stereotypes or stigma. (My colleague Nelson Aguilar has compiled a list of the 11 things you should never do with ChatGPT and why.)

Altman is clearly aware of the issues here, and he seems at least a bit troubled by them. "People use it, young people especially, use it as a therapist, a life coach: I'm having these relationship problems, what should I do?" he said. "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it."

The question came up during a part of the conversation about whether there should be more rules or regulations around AI. Rules that stifle AI companies and the tech's development are unlikely to gain favor in Washington these days, as President Donald Trump's AI Action Plan released this week expressed a desire to regulate the technology less, not more. But rules protecting users' private conversations might find favor.

Read more: AI Essentials: 29 Ways You Can Make Gen AI Work for You, According to Our Experts

Altman seemed most worried about the lack of legal protections that would keep companies like his from being forced to turn over private conversations in lawsuits. OpenAI has objected to requests to retain user conversations during a lawsuit with The New York Times over copyright infringement and intellectual property issues. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

"If you go talk to ChatGPT about the most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that," Altman said. "I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever."

Be careful what you tell AI about yourself

For you, the issue isn't so much that OpenAI might have to turn your conversations over in a lawsuit. It's a question of whom you trust with your secrets. William Agnew, a researcher at Carnegie Mellon University who was part of a team that evaluated chatbots on how they handle therapy-like questions, told me recently that privacy is a paramount issue when confiding in AI tools. The uncertainty around how models work -- and how your conversations are kept from appearing in other people's chats -- is reason enough to be hesitant.

"Even if these companies are trying to be careful with your data, these models are well known to regurgitate information," Agnew said. If ChatGPT or another tool regurgitates information from your therapy session or from medical questions you asked, it could surface when your insurance company or someone else with an interest in your personal life asks the same tool about you. "People should really think about privacy more and just know that almost everything they tell these chatbots is not private," Agnew said. "It will be used in all sorts of ways."
