
OpenAI releases lower-cost models to rival Meta, Mistral and DeepSeek
The text-only models, called gpt-oss-120b and gpt-oss-20b, are designed to serve as lower-cost options that developers, researchers and companies can easily run and customize, OpenAI said.
An artificial intelligence model is considered open weight if its parameters, the numerical values a model learns during training that shape its outputs and predictions, are publicly available. Open-weight models can offer transparency and control, but they are different from open-source models, whose full source code is available for people to use and modify.
Several other tech companies, including Meta, Microsoft-backed Mistral AI and the Chinese startup DeepSeek, have also released open-weight models in recent years.
"It's been exciting to see an ecosystem develop, and we are excited to contribute to that and really push the frontier and then see what happens from there," OpenAI President Greg Brockman told reporters during a briefing.
The company collaborated with Nvidia, Advanced Micro Devices, Cerebras, and Groq to ensure the models will work well on a variety of chips.
"OpenAI showed the world what could be built on Nvidia AI — and now they're advancing innovation in open-source software," Nvidia CEO Jensen Huang said in a statement.
The release of OpenAI's open-weight models has been highly anticipated, in part because the company repeatedly delayed the launch.
In a post on X in July, OpenAI CEO Sam Altman said the company needed more time to "run additional safety tests and review high-risk areas." That came after a separate post weeks earlier, where Altman said the models would not be released in June.
OpenAI said Tuesday that it carried out extensive safety training and testing on its open-weight models.
It filtered out harmful chemical, biological, radiological and nuclear data during pre-training, and it mimicked how bad actors could try to fine-tune the models for malicious purposes. Through this testing, OpenAI said it determined that maliciously fine-tuned models were not able to reach the "high capability" threshold in its Preparedness Framework, which is its method for measuring and protecting against harm.
OpenAI said it also worked with three independent expert groups, which provided feedback on its malicious fine-tuning evaluation.
OpenAI said people can download the weights for gpt-oss-120b and gpt-oss-20b on platforms like Hugging Face and GitHub under an Apache 2.0 license. The models will be available to run on PCs through programs such as LM Studio and Ollama. Cloud providers Amazon, Baseten and Microsoft are also making the models available.
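For developers who want to try the models, fetching the weights programmatically is straightforward. The snippet below is a minimal, illustrative sketch using the huggingface_hub Python package; the "openai/gpt-oss-20b" repository name and the local directory are assumptions for illustration, not details confirmed in the announcement.

# Minimal sketch: download the gpt-oss-20b weights from Hugging Face.
# Assumes the weights are published under the repo id "openai/gpt-oss-20b";
# adjust repo_id if the model lives under a different name.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="openai/gpt-oss-20b",  # smaller of the two released models
    local_dir="gpt-oss-20b",       # arbitrary local destination folder
)
print(f"Weights downloaded to: {local_path}")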
Both models can handle advanced reasoning, tool use and chain-of-thought processing, and are designed to run anywhere, from consumer hardware to the cloud to on-device applications.
Users can run gpt-oss-20b on a laptop, for instance, and use it as a personal assistant that can search through files and write, OpenAI said.
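As a rough sketch of that laptop scenario, the snippet below loads the model with the Hugging Face transformers library and sends it a request. It assumes the checkpoint is compatible with the standard text-generation pipeline, that a recent transformers release (which accepts chat-style message lists) is installed along with the accelerate package, and that the machine has enough memory for a 20-billion-parameter model; none of these specifics come from OpenAI's announcement.

# Minimal local-inference sketch (assumptions noted above).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed Hugging Face repo id
    device_map="auto",           # spread weights across available devices
)

# Recent transformers releases accept chat-style message lists directly.
messages = [
    {"role": "user", "content": "Draft a short summary of my meeting notes."}
]
output = generator(messages, max_new_tokens=200)
print(output[0]["generated_text"])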
"We're excited to make this model, the result of billions of dollars of research, available to the world to get AI into the hands of the most people possible," Altman said in a statement Tuesday.
Related Articles
Yahoo
OpenAI in talks for share sale valuing startup at $500 billion, Bloomberg News reports
(Reuters) - ChatGPT maker OpenAI is in early talks about a potential secondary sale of stock for current and former employees at a valuation of about $500 billion, Bloomberg News reported on Tuesday. Reuters could not immediately verify the report.
Yahoo
It's not you, it's me. ChatGPT doesn't want to be your therapist or friend
In a case of "it's not you, it's me," the creators of ChatGPT no longer want the chatbot to play the role of therapist or trusted confidant. OpenAI, the company behind the popular bot, announced that it has incorporated changes, specifically mental health-focused guardrails designed to prevent users from becoming too reliant on the technology, with a focus on people who view ChatGPT as a therapist or friend.

The changes come months after reports detailing negative and particularly worrisome user experiences raised concerns about the model's tendency to "validate doubts, fuel anger, urge impulsive actions, or reinforce negative emotions [and thoughts]." The company confirmed in its most recent blog post that an update made earlier this year made ChatGPT "noticeably more sycophantic," or "too agreeable," "sometimes saying what sounded nice instead of what was helpful."

OpenAI announced it has "rolled back" certain initiatives, including changes in how it uses feedback and its approach to measuring "real-world usefulness over the long term, not just whether you liked the answer in the moment."

"There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency," OpenAI wrote in an Aug. 4 announcement. "While rare, we're continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed."

Here's what to know about the recent changes to ChatGPT, including what these mental health guardrails mean for users.

ChatGPT integrates changes to help users thrive

According to OpenAI, the changes were designed to help ChatGPT users "thrive."

"We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress," OpenAI said. "To us, helping you thrive means being there when you're struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges."

The company said it is "working closely" with experts, including physicians, human-computer interaction (HCI) researchers and clinicians, as well as an advisory group, to improve how "ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress."

Thanks to the recent "optimization," ChatGPT is now able to:
- Engage in productive dialogue and provide evidence-based resources when users are showing signs of mental or emotional distress
- Prompt users to take breaks from lengthy conversations
- Avoid giving advice on "high-stakes personal decisions," instead asking questions and weighing pros and cons to help users come up with a solution on their own

"Our goal to help you thrive won't change. Our approach will keep evolving as we learn from real-world use," OpenAI said in its blog post. "We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal 'yes' is our work."

This article originally appeared on USA TODAY: ChatGPT adds mental health protections for users: See what they are


CNBC
Two Chinese nationals charged with illegally shipping Nvidia AI chips to China
Two Chinese nationals in California have been arrested and charged with the illegal shipment of tens of millions of dollars' worth of AI chips, the Department of Justice said Tuesday. Chuan Geng, 28, and Shiwei Yang, 28, exported the sensitive chips and other technology to China from October 2022 through July 2025 without obtaining the required licenses, the DOJ said, citing an affidavit filed with the complaint.

The illicit shipments included Nvidia's H100 graphics processing units, according to the affidavit seen by Reuters. The H100 is among the U.S. chipmaker's most cutting-edge chips used in artificial intelligence applications. The Department of Commerce has placed such chips under export controls since 2022 as part of broader efforts by the U.S. to restrict China's access to the most advanced semiconductor technology.

This case demonstrates that smuggling is a "nonstarter," Nvidia told CNBC. "We primarily sell our products to well-known partners, including OEMs, who help us ensure that all sales comply with U.S. export control rules."

"Even relatively small exporters and shipments are subject to thorough review and scrutiny, and any diverted products would have no service, support, or updates," the chipmaker added.

Geng and Yang's California-based company, ALX Solutions, was founded shortly after the U.S. chip controls first took effect. According to the DOJ, law enforcement searched ALX Solutions' office and seized phones belonging to Geng and Yang, which revealed incriminating communications between the defendants, including discussions of evading U.S. export laws by shipping the export-controlled chips to China through Malaysia.

The review also showed that in December 2024, ALX Solutions made more than 20 shipments from the U.S. to shipping and freight-forwarding companies in Singapore and Malaysia, which the DOJ said are commonly used as transshipment points to conceal illicit shipments to China. ALX Solutions did not appear to have been paid by the entities it purportedly exported goods to, instead receiving numerous payments from companies based in Hong Kong and China.

The U.S. Department of Commerce's Bureau of Industry and Security and the FBI are continuing to investigate the matter.

The smuggling of advanced microchips has become a growing concern in Washington. According to a report from the Financial Times last month, at least $1 billion worth of Nvidia's chips entered China after Donald Trump tightened chip export controls earlier this year. In response to the report, Nvidia said that data centers built with smuggled chips were a "losing proposition" and that it does not support unauthorized products.