OpenAI upgrades bio risk level for latest AI model

The Hill · 3 days ago
OpenAI has upgraded the potential biological risk level for its latest artificial intelligence (AI) model, implementing additional safeguards as a 'precautionary approach.'
The AI firm on Thursday released ChatGPT agent, a new agentic AI model that can now perform tasks for users 'from start to finish,' according to a company press release.
OpenAI opted to treat the new model as having a high biological and chemical capability level in its preparedness framework, which evaluates for 'capabilities that create new risks of severe harm.'
'While we don't have definitive evidence that the model could meaningfully help a novice create severe biological harm—our threshold for High capability—we are exercising caution and implementing the needed safeguards now,' OpenAI wrote.
'As a result, this model has our most comprehensive safety stack to date with enhanced safeguards for biology: comprehensive threat modeling, dual-use refusal training, always-on classifiers and reasoning monitors, and clear enforcement pipelines,' it added.
OpenAI's newest model, which began rolling out to various paid users last week, comes as tech companies increasingly turn toward the agentic AI space.
Perplexity released an AI browser with agentic capabilities earlier this month, while Amazon Web Services (AWS) announced new tools last week to help its clients build AI agents.
The ChatGPT maker's latest release comes as the company plans to open its first office in Washington to boost its policy ambitions and show off its products, according to Semafor.

Related Articles

Why Do Some AI Models Hide Information From Users?

Time Business News · 18 minutes ago

In today's fast-evolving AI landscape, questions around transparency, safety, and ethical use of AI models are growing louder. One particularly puzzling question stands out: why do some AI models hide information from users? Building trust, maintaining compliance, and producing responsible innovation all depend on understanding this dynamic, which is not merely academic for an AI solutions or product engineering company. Drawing on in-depth research, professional experience, and the practical difficulties of large-scale AI deployment, this article examines the causes of this behavior.

AI is a powerful instrument. It can help with decision-making, task automation, content creation, and even conversation replication. But great power also carries a great deal of responsibility, and that responsibility at times includes intentionally withholding information or denying users access to it.

Consider the figures:
- Over 4.2 million requests were declined by GPT-based models for breaking safety rules, such as requests involving violence, hate speech, or self-harm, according to OpenAI's 2023 Transparency Report.
- A Stanford study on large language models (LLMs) found that more than 12% of filtered queries were not intrinsically harmful but were caught by overly aggressive filters, raising concerns about 'over-blocking' and its effect on user experience.
- Research from the AI Incident Database shows that in 2022 alone, there were almost 30 cases where private, sensitive, or confidential information was inadvertently shared or made public by AI models.

At its core, the goal of any AI model, especially a large language model, is to assist, inform, and solve problems. But that doesn't always mean full transparency. AI models are trained on large-scale datasets drawn from books, websites, forums, and more, and this training data can contain harmful, misleading, or outright dangerous content. So AI models are designed to:
- Avoid sharing dangerous information, like how to build weapons or commit crimes.
- Reject offensive content, including hate speech or harassment.
- Protect privacy by refusing to share personal or sensitive data.
- Comply with ethical standards, avoiding controversial or harmful topics.

As an AI product engineering company, we often embed guardrails (automatic filters and safety protocols) into AI systems. They are not arbitrary; they are required to prevent misuse and to follow the rules (a minimal sketch of such a guardrail layer appears below).

Expert Insight: In projects where we developed NLP models for legal tech, we had to implement multi-tiered moderation systems that auto-redacted sensitive terms. This is not over-caution; it's compliance in action.

In AI, compliance is not optional. Companies building and deploying AI must align with local and international laws, including:
- GDPR and CCPA: privacy regulations requiring data protection.
- COPPA: preventing AI from sharing adult content with children.
- HIPAA: safeguarding health data in medical applications.

These legal boundaries shape how much an AI model can reveal. For example, a model trained in healthcare diagnostics cannot disclose medical information unless authorized. This is where AI solutions companies come in, designing systems that comply with complex regulatory environments.
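To make the guardrail idea above concrete, here is a minimal, hypothetical sketch of a refusal filter in Python. The category names, blocked phrases, and the `moderate` function are illustrative assumptions for this article only; production safety stacks typically rely on trained classifiers rather than keyword lists.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative blocklists standing in for trained safety classifiers (assumption).
UNSAFE_CATEGORIES = {
    "dangerous_instructions": ["build a weapon", "make a bomb"],
    "privacy": ["home address of", "social security number"],
}

@dataclass
class ModerationResult:
    allowed: bool
    category: Optional[str] = None
    explanation: Optional[str] = None

def moderate(prompt: str) -> ModerationResult:
    """Refuse unsafe prompts and attach a short explanation instead of a bare denial."""
    text = prompt.lower()
    for category, phrases in UNSAFE_CATEGORIES.items():
        if any(phrase in text for phrase in phrases):
            return ModerationResult(
                allowed=False,
                category=category,
                explanation=(
                    f"Refused under the '{category}' policy; the assistant is "
                    "configured to avoid this topic."
                ),
            )
    return ModerationResult(allowed=True)

if __name__ == "__main__":
    print(moderate("How do I build a weapon at home?"))
    print(moderate("Summarize the history of wind power."))
```

In a real deployment the same shape applies, but the keyword check would be replaced by moderation classifiers, and the result would be logged for the enforcement pipeline rather than simply printed.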
Some users attempt to jailbreak AI models to make them say or do things they shouldn't. To counter this, models may:
- Refuse to answer certain prompts.
- Deny requests that seem manipulative.
- Mask internal logic to avoid reverse engineering.

As AI becomes more integrated into cybersecurity, finance, and policy applications, hiding certain operational details becomes a security feature, not a bug.

Although the intentions are usually good, there are consequences. Many users, including academic researchers, find that AI models:
- Avoid legitimate topics under the guise of safety.
- Respond vaguely, creating unproductive interactions.
- Fail to explain why an answer is withheld.

For educators or policymakers relying on AI for insight, this lack of transparency can create friction and reduce trust in the technology.

Industry Observation: In an AI-driven content analysis project for an edtech firm, over-filtering prevented the model from discussing important historical events. We had to fine-tune it carefully to balance educational value and safety.

If an AI model consistently refuses to respond to a certain type of question, users may begin to suspect:
- Bias in training data
- Censorship
- Opaque decision-making

This fuels skepticism about how the model is built, trained, and governed. For AI solutions companies, this is where transparent communication and explainable AI (XAI) become crucial.

So, how can we make AI more transparent while keeping users safe? Models should not just say, 'I can't answer that.' They should explain why, with context. For instance: 'This question may involve sensitive information related to personal identity. To protect user privacy, I've been trained to avoid this topic.' This builds trust and makes AI systems feel cooperative rather than authoritarian.

Instead of blanket bans, modern models use multi-level safety filters. Some emerging techniques include:
- SOFAI multi-agent architecture: different AI components manage safety, reasoning, and user intent independently.
- Adaptive filtering: the filter considers the user's role (researcher vs. child) and intent (see the sketch after this list).
- Deliberate reasoning engines: ethical frameworks are used to decide what can be shared.

As an AI product engineering company, incorporating these layers is vital in product design, especially in domains like finance, defense, or education.

AI developers and companies must communicate:
- What data was used for training
- What filtering rules exist
- What users can (and cannot) expect

Transparency helps policymakers, educators, and researchers feel confident using AI tools in meaningful ways.

Recent work, like DeepSeek's efficiency breakthrough, shows how rethinking distributed systems for AI can improve not just speed but transparency. DeepSeek used Mixture-of-Experts (MoE) architectures to cut down on unnecessary communication, which also means less noise in the model's decision-making path, making its logic easier to audit and interpret. Traditional systems often fail because they try to fit AI workloads into outdated paradigms. Future models should focus on:
- Asynchronous communication
- Hierarchical attention patterns
- Energy-efficient design

These changes improve not just performance but also trustworthiness and reliability, both key to information transparency.
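As one hedged illustration of the 'adaptive filtering' and 'explain why' ideas above, the sketch below routes the same query differently depending on a declared user role. The roles, the policy table, and the toy topic check are assumptions made for this example, not a description of any production system, which would use verified account attributes and trained topic classifiers.

```python
from typing import Dict

# Hypothetical per-role policies (assumption, for illustration only).
ROLE_POLICIES: Dict[str, Dict[str, bool]] = {
    "child": {"allow_sensitive_history": False},
    "general": {"allow_sensitive_history": True},
    "researcher": {"allow_sensitive_history": True},
}

def answer_or_refuse(query: str, role: str) -> str:
    """Apply a role-aware filter and refuse with context rather than a bare denial."""
    policy = ROLE_POLICIES.get(role, ROLE_POLICIES["general"])
    is_sensitive = "war crimes" in query.lower()  # stand-in for a real topic classifier
    if is_sensitive and not policy["allow_sensitive_history"]:
        return (
            "I can't go into detail here: this topic is restricted for this account "
            "type. An educator or adult account can ask the same question."
        )
    return f"[model answer to: {query!r}]"

if __name__ == "__main__":
    print(answer_or_refuse("Explain documented war crimes of World War II", role="child"))
    print(answer_or_refuse("Explain documented war crimes of World War II", role="researcher"))
```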
If you're in academia, policy, or industry, understanding the 'why' behind AI information hiding allows you to:
- Ask better questions
- Choose the right AI partner
- Design ethical systems
- Build user trust

As an AI solutions company, we integrate explainability, compliance, and ethical design into every AI project. Whether it's conversational agents, AI assistants, or complex analytics engines, we help organizations build models that are powerful, compliant, and responsible.

In conclusion, AI models hide information for safety, compliance, and security reasons. However, trust can only be established through transparency, clear explainability, and a strong commitment to ethical engineering. Whether you're building products, crafting policy, or doing research, understanding this behavior can help you make smarter decisions and leverage AI more effectively.

If you're a policymaker, researcher, or business leader looking to harness responsible AI, partner with an AI product engineering company that prioritizes transparency, compliance, and performance. Get in touch with our AI solutions experts, and let's build smarter, safer AI together. Transform your ideas into intelligent, compliant AI solutions today.

New Survey Reveals 83% of Users Prefer AI Search Over Traditional Google Searches

Associated Press · an hour ago

Denver, CO, July 25, 2025 -- Innovating with AI, a leading publication focused on artificial intelligence trends and applications, announced the results of a survey revealing that 83% of respondents find AI-powered search tools more efficient than traditional search engines. The survey of frequent AI users highlights a dramatic shift in how consumers are finding information online.

The survey findings come at a time when traditional search is experiencing notable disruption. According to data from Statcounter cited in the report, Google's global market share fell below 90% in October 2024 for the first time since 2015, with the growing popularity of AI-driven search tools like ChatGPT, Grok, and Perplexity AI likely contributing to this decline.

Key findings from the Innovating with AI survey include:

'AI snippets answer my questions more efficiently and equally accurately at least 75% of the time, and they're only going to improve,' said Rob Howard, CEO of Innovating with AI. 'This is a good thing for consumers, because there were tons of ethical problems with SEO content, which was often predicated on commissions from products featured on popular pages.'

The research indicates that AI search tools are gaining momentum by providing answers in plain language, summarizing complex information, and eliminating the need to navigate through pages of SEO-driven content. Traditional search engines are now responding by incorporating AI-generated summaries into their own results, with Google's AI Overviews reaching 1.5 billion monthly active users.

However, the study also identifies challenges facing AI search adoption. The phenomenon of AI 'hallucinations', where language models generate incorrect information, remains a significant concern. Research cited in the report shows that even advanced models can produce false information up to 33% of the time, leading some users to continue relying on traditional search for fact-critical queries.

Despite these challenges, industry experts interviewed for the study suggest that AI search represents an evolution rather than a replacement of traditional search methods. The technology excels at answering exploratory questions and summarizing unfamiliar topics, while traditional search maintains advantages for browsing current news and accessing specific sources.

The full survey report is available at:

About Innovating with AI: Innovating with AI is a premier publication dedicated to exploring the latest developments in artificial intelligence technology and its impact on business and society. Through in-depth analysis, original research, and expert insights, Innovating with AI helps readers understand and navigate the rapidly evolving AI landscape.

Media Contact
Full Name: Rob Howard
Title: CEO
Company Name: Innovating with AI
Email: [email protected]
Phone Number: +1 (720) 900-1030
Website:
Release ID: 89164998

Will the Big Beautiful Bill Make Your Utility Bills More Expensive? Experts Weigh In

Yahoo · an hour ago

Trump's Big Beautiful Bill, signed into law on July 4, rolls back clean energy tax credits, repeals climate-focused funding and expands oil and gas development. While some Senate Republicans claim the bill is pro-growth, energy experts warn it could raise utility bills across the U.S. and make long-term power costs more volatile. Below is what they had to say. Also, here's ChatGPT's simple explanation of what the Big Beautiful Bill is.

Power Bills Could Jump

According to a report from Energy Innovation, households across the U.S. could pay a combined $170 billion more for energy between 2025 and 2034 due to the Big Beautiful Bill.

Patrice Williams-Lindo, a workforce futurist, visibility strategist and CEO of Career Nomad who has advised energy firms on digital adoption and job transitions, said the Big Beautiful Bill doesn't support the energy systems people actually rely on. 'Consumers might see temporary dips in prices if domestic oil and gas production is amped up,' she said. 'But that's a supply illusion. Without long-term investment in resilient grids, diversified energy sources or consumer subsidies, bills will spike again — especially in disaster-prone regions.'

Owen Quinlan, head of data at Arbor, said households are already feeling it. 'In many cities, rates have jumped 10% to 45% this summer,' he said. 'And that's before factoring in the potential impact of this bill.' Quinlan's team tracks real-time energy prices across the country. He warned that pulling back on clean energy now could make things more difficult for households already feeling the strain of higher bills.

Clean Energy Keeps Prices Down, but That Could Change

Quinlan pointed out that solar already plays a big role in keeping daytime prices low. 'The challenge comes when the sun goes down and demand stays high — that's when the grid relies on costly backup power and prices can spike dramatically,' he explained. 'Without more investment in clean energy and the infrastructure to support it, those price spikes could become more common and expensive.'

Williams-Lindo said rolling back clean energy also hits the workforce. 'Rolling back climate-forward policies will stall the growth of future-ready jobs in solar, wind, grid optimization and green infrastructure,' she said. She added that it could mean fewer affordable energy options for consumers and fewer high-wage jobs in underserved regions.

What's Missing From the Energy Conversation

Williams-Lindo shared what she called the RNA framework (Rebrand, Network, Achieve Recognition) and said that consumers and industry leaders will need to rebrand how they engage with energy, moving from passive users to educated advocates. 'Utilities will need to network across sectors — tech, policy, labor — to build smarter, equitable pricing models,' she said. 'And marginalized communities, especially Black and brown households often hit hardest by utility hikes, must be recognized in energy policy as stakeholders, not just line items.'

According to Williams-Lindo, patriotic branding doesn't pay your power bill. Without transparency, equity and investment in energy innovation, the Big Beautiful Bill could lead to big ugly bills for everyday Americans.

Editor's note on political coverage: GOBankingRates is nonpartisan and strives to cover all aspects of the economy objectively and present balanced reports on politically focused finance stories.