Latest news with #Heidecke


Scottish Sun
23-06-2025
- Science
- Scottish Sun
ChatGPT bosses fear its AI will be used to create devastating new 'bioweapons' and warn app will 'hit that level' soon
THE company behind ChatGPT has warned that future versions of its artificial intelligence (AI) tool could be used to create bioweapons.

AI has long been hailed for its potential in future medical breakthroughs, helping scientists create new drugs and faster vaccines.

Anthrax under the microscope. Credit: Science Photo Library

But in a recent blog post, ChatGPT creator OpenAI warned that as its chatbot becomes more advanced in biology, its intelligence could be used to produce "harmful information".

That includes, according to OpenAI, the ability to "assist highly skilled actors in creating bioweapons."

"Physical access to labs and sensitive materials remains a barrier," the blog post continued. "However those barriers are not absolute."

Since its initial release in late 2022, ChatGPT has only gotten smarter.

Bosses believe upcoming models will reach "'high' levels of capability in biology". That's why they say they are taking precautions to prevent ChatGPT from helping to build a bio-threat.

Bioweapons are devices or agents that cause disease, injury or death in humans, livestock and even plants.

"We don't think it's acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards," the company wrote.

In a statement to Axios, OpenAI safety head Johannes Heidecke clarified that future versions of ChatGPT probably won't be able to manufacture bioweapons on their own. However, the AI chatbot might still be advanced enough to help amateurs do so.
"We're not yet in the world where there's like novel, completely unknown creation of biothreats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."

OpenAI said it has worked with experts on biosecurity, bioweapons, and bioterrorism to shape ChatGPT and the information it can give users.

The 2001 anthrax attacks in the US, where letters containing deadly anthrax spores were mailed to several news outlet offices, are the most recent confirmed use of a bioweapon.

To prevent a scenario where a novice can develop a bioweapon with the helping hand of ChatGPT, future models need to be programmed to "near perfection" to both recognise any dangers and alert human monitors to them, Heidecke explained.

"This is not something where like 99 percent or even one in 100,000 performance is sufficient," he said.

Last year, top scientists warned that AI could produce bioweapons that may one day make humans extinct. The report they co-authored said governments have a responsibility to stop AI being developed with worrying capabilities, such as those that could be used in biological or nuclear warfare.

What is ChatGPT?

ChatGPT is an artificial intelligence (AI) tool created by San Francisco-based startup OpenAI. After launching in November 2022, the AI chatbot has exploded in both popularity and capability.

ChatGPT is a language model that can produce text. It can converse, generate readable text on demand and produce images and video based on what it has learned from a vast database of digital books, online writings and other media.

ChatGPT essentially works like a written dialogue between the AI system and the person asking it questions, although it now has a voice mode that lets it talk with humans as if on a phone call.
GPT stands for Generative Pre-Trained Transformer and describes the type of model that can create AI-generated content. If you prompt it, for example by asking it to 'write a short poem about flowers', it will create a chunk of text based on that request.

ChatGPT can also hold conversations and even learn from things you've said. It can handle very complicated prompts and is even being used by businesses to help with work. But note that it might not always tell you the truth.

'ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,' OpenAI CEO Sam Altman said in 2022.

Bosses believe upcoming models will reach "'high' levels of capability in biology". Credit: Getty
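The prompt-and-reply loop described above can be sketched as a list of role-tagged messages, the conversation format chat models commonly consume. This is a minimal illustrative sketch, not OpenAI's actual API: the `ask` function and the canned assistant reply are assumptions, since querying the real service requires an account and API key.

```python
# Illustrative sketch of a chat-style exchange: the model receives the
# whole conversation as an ordered list of role/content messages and
# produces the next assistant message. The reply here is a stand-in.

def ask(history, user_prompt, reply="(model reply would appear here)"):
    """Append a user turn, then a stand-in assistant turn, and return the reply."""
    history.append({"role": "user", "content": user_prompt})
    history.append({"role": "assistant", "content": reply})
    return history[-1]["content"]

# A system message sets the assistant's behaviour; later turns accumulate.
conversation = [{"role": "system", "content": "You are a helpful assistant."}]
ask(conversation, "Write a short poem about flowers.")

# Because every prior turn is passed back in, the model can "hold
# conversations and even learn from things you've said" within a session.
print(len(conversation))  # system + user + assistant turns
```

Passing the full history back on every request is also why a chat session has context, while a brand-new conversation starts from scratch.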


Axios
18-06-2025
- Science
- Axios
OpenAI warns models with higher bioweapons risk are imminent
OpenAI cautioned Wednesday that upcoming models will head into a higher level of risk when it comes to the creation of biological weapons — especially by those who don't really understand what they're doing.

Why it matters: The company, and society at large, need to be prepared for a future where amateurs can more readily graduate from simple garage weapons to sophisticated agents.

Driving the news: OpenAI executives told Axios the company expects forthcoming models will reach a high level of risk under the company's preparedness framework. As a result, the company said in a blog post it is stepping up the testing of such models and adding fresh precautions designed to keep them from aiding in the creation of biological weapons. OpenAI didn't put an exact timeframe on when the first model to hit that threshold will launch, but head of safety systems Johannes Heidecke told Axios: "We are expecting some of the successors of our o3 (reasoning model) to hit that level."

Reality check: OpenAI isn't necessarily saying that its platform will be capable of creating new types of bioweapons. Rather, it believes that — without mitigations — models will soon be capable of what it calls "novice uplift," or allowing those without a background in biology to do potentially dangerous things. "We're not yet in the world where there's like novel, completely unknown creation of bio threats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."

Between the lines: One of the challenges is that some of the same capabilities that could allow AI to help discover new medical breakthroughs can also be used for harm. Heidecke acknowledged OpenAI and others need systems that are highly accurate at detecting and preventing harmful use. "This is not something where like 99% or even one in 100,000 performance is sufficient," he said.
"We basically need, like, near perfection," he added, noting that human monitoring and enforcement systems need to be able to quickly identify any harmful uses that escape automated detection and then take the action necessary to "prevent the harm from materializing."

The big picture: OpenAI is not the only company warning of models reaching new levels of potentially harmful capability. When it released Claude 4 last month, Anthropic said it was activating fresh precautions due to the potential risk of that model aiding in the spread of biological and nuclear threats. Various companies have also been warning that it's time to start preparing for a world in which AI models are capable of meeting or exceeding human capabilities in a wide range of tasks.

What's next: OpenAI said it will convene an event next month to bring together certain nonprofits and government researchers to discuss the opportunities and risks ahead. OpenAI is also looking to expand its work with the U.S. national labs, and the government more broadly, OpenAI policy chief Chris Lehane told Axios.

"We're going to explore some additional type of work that we can do in terms of how we potentially use the technology itself to be really effective at being able to combat others who may be trying to misuse it," Lehane said.

Lehane added that the increased capability of the most powerful models highlights "the importance, at least in my view, for the AI build out around the world, for the pipes to be really US-led."


News18
19-05-2025
- News18
OpenAI Brings GPT-4.1 And GPT-4.1 Mini For Paid And Free Users: All Details
OpenAI is bringing the new GPT-4.1 models to ChatGPT, available to both paid and free users with different limits and features.

OpenAI has announced that its latest AI models, GPT-4.1 and GPT-4.1 Mini, are now available to ChatGPT users. These models are being integrated into the ChatGPT interface, significantly broadening access for both free and subscription-based users. This decision comes in response to widespread user demand and the increasing need for advanced AI tools, particularly in software development and technical tasks.

The GPT-4.1 model is now available to subscribers of ChatGPT Plus, Pro and Team plans, while GPT-4.1 Mini can be accessed by all users, including those on the free tier. In parallel, OpenAI has confirmed that it will be removing the GPT-4o Mini model from ChatGPT, streamlining its lineup and prioritizing newer models that offer superior performance.

Designed with developers in mind, GPT-4.1 provides faster response times and enhanced capabilities in areas like coding, debugging and web development. It outperforms the now-retired GPT-4o Mini in both speed and command execution, making it particularly well-suited for users who rely on AI for technical productivity.

Despite these improvements, OpenAI has clarified that GPT-4.1 does not qualify as a "frontier model", a classification reserved for models that introduce fundamentally new capabilities or interaction modalities. Therefore, it is not held to the same stringent safety reporting standards as frontier models.

In addressing questions about the model's security and safety protocols, Johannes Heidecke, OpenAI's Head of Safety Systems, stated via a post on X: "GPT-4.1 builds on the safety work and mitigations developed for GPT-4o. Across our standard safety evaluations, GPT-4.1 performs at parity with GPT-4o, showing that improvements can be delivered without introducing new safety risks."
Heidecke further emphasised that, while GPT-4.1 represents a notable upgrade, it does not surpass the "o3" level in terms of intelligence or interaction capabilities. "It didn't bring in new ways of interacting with AI models," he added, explaining why GPT-4.1, though improved, remains within the bounds of OpenAI's existing model safety classification.

This development also follows OpenAI's earlier move on April 30 to phase out the GPT-4 model entirely from ChatGPT. The decision was aimed at reducing confusion among users by simplifying model options and focusing on newer, more capable versions.

First Published: May 19, 2025, 08:10 IST