OpenAI updates its new Responses API rapidly with MCP support, GPT-4o native image gen, and more enterprise features

Business Mayor | May 21, 2025
OpenAI is rolling out a set of significant updates to its recently launched Responses API, aiming to make it easier for developers and enterprises to build intelligent, action-oriented agentic applications.
These enhancements include support for remote Model Context Protocol (MCP) servers, integration of image generation and Code Interpreter tools, and upgrades to file search capabilities—all available as of today, May 21.
First launched in March 2025, the Responses API is OpenAI's toolkit for third-party developers building agentic applications on top of core functionality from ChatGPT and the company's first-party AI agents, Deep Research and Operator.
In the months since its debut, it has processed trillions of tokens and supported a broad range of use cases, from market research and education to software development and financial analysis.
Popular applications built with the API include Zencoder's coding agent, Revi's market intelligence assistant, and MagicSchool's educational platform.
The Responses API debuted alongside OpenAI's open-source Agents SDK in March 2025, as part of an initiative to provide third-party developer access to the same technologies powering OpenAI's own AI agents like Deep Research and Operator.
This way, startups and companies outside of OpenAI could integrate the same technology that powers ChatGPT into their own products and services, whether for internal employee use or for external customers and partners.
Initially, the API combined elements of the Chat Completions and Assistants APIs, delivering built-in tools for web search, file search, and computer use, and enabling developers to build autonomous workflows without complex orchestration logic. OpenAI said at the time that it planned to sunset the Assistants API by mid-2026, while continuing to support Chat Completions.
The Responses API provides visibility into model decisions, access to real-time data, and integration capabilities that allow agents to retrieve, reason over, and act on information.
This launch marked a shift toward giving developers a unified toolkit for creating production-ready, domain-specific AI agents with minimal friction.
A key addition in this update is support for remote MCP servers. Developers can now connect OpenAI's models to external tools and services such as Stripe, Shopify, and Twilio using only a few lines of code. This capability enables the creation of agents that can take actions and interact with systems users already depend on. To support this evolving ecosystem, OpenAI has joined the MCP steering committee.
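As an illustration, a remote MCP server can be attached as a tool in a single Responses API call. The sketch below is a minimal example assuming the OpenAI Python SDK; the server label and URL are placeholders rather than a real endpoint, and the exact fields may vary by server.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative sketch: attach a remote MCP server as a tool.
# The server_label and server_url are placeholders, not a real endpoint.
response = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "mcp",
        "server_label": "example_store",          # name used to attribute tool calls
        "server_url": "https://example.com/mcp",  # placeholder MCP endpoint
        "require_approval": "never",              # or require approval per tool call
    }],
    input="Look up the shipping status of the customer's latest order.",
)

print(response.output_text)
```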
The update brings new built-in tools to the Responses API that enhance what agents can do within a single API call.
A variant of OpenAI's hit GPT-4o native image generation model, which inspired a wave of 'Studio Ghibli'-style anime memes across the web and strained OpenAI's servers with its popularity (though it can produce many other image styles), is now available through the API under the model name 'gpt-image-1.' It includes helpful new features such as real-time streaming previews and multi-turn refinement.
This enables developers to build applications that can produce and edit images dynamically in response to user input.
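For instance, image generation can be invoked as a built-in tool within a normal Responses API call. The snippet below is a hedged sketch assuming the OpenAI Python SDK; the prompt and file handling are illustrative only.

```python
import base64
from openai import OpenAI

client = OpenAI()

# Illustrative sketch: ask the model to generate an image via the built-in tool.
response = client.responses.create(
    model="gpt-4.1",
    tools=[{"type": "image_generation"}],
    input="Generate a watercolor illustration of a lighthouse at dawn.",
)

# Image results are returned as base64-encoded data in the output items.
images = [item.result for item in response.output if item.type == "image_generation_call"]
if images:
    with open("lighthouse.png", "wb") as f:
        f.write(base64.b64decode(images[0]))
```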
Additionally, the Code Interpreter tool is now integrated into the Responses API, allowing models to handle data analysis, complex math, and logic-based tasks within their reasoning processes.
The tool helps improve model performance across various technical benchmarks and allows for more sophisticated agent behavior.
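As a rough sketch, again assuming the Python SDK, enabling Code Interpreter amounts to passing it as a tool with a sandboxed container; the model then writes and runs code to work through the problem.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative sketch: let the model run code to answer a math/data question.
response = client.responses.create(
    model="o4-mini",
    tools=[{
        "type": "code_interpreter",
        "container": {"type": "auto"},  # let the API provision a sandboxed container
    }],
    input=(
        "Fit a linear regression to the points (1, 2), (2, 4.1), (3, 5.9) "
        "and report the slope and intercept."
    ),
)

print(response.output_text)
```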
Improved file search and context handling
The file search functionality has also been upgraded. Developers can now perform searches across multiple vector stores and apply attribute-based filtering to retrieve only the most relevant content.
This improves the precision of information agents use, enhancing their ability to answer complex questions and operate within large knowledge domains.
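A hedged sketch of what multi-store search with attribute filtering might look like in the Python SDK is below; the vector store IDs and the 'region' attribute are placeholders invented for illustration.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative sketch: search two vector stores, filtered by a hypothetical
# "region" attribute attached to the files when they were uploaded.
response = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["vs_policies_eu", "vs_policies_us"],  # placeholder IDs
        "filters": {"type": "eq", "key": "region", "value": "eu"},
        "max_num_results": 5,
    }],
    input="Summarize our data-retention policy for EU customers.",
)

print(response.output_text)
```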
Several features are designed specifically to meet enterprise needs. Background mode allows for long-running asynchronous tasks, addressing issues of timeouts or network interruptions during intensive reasoning.
Reasoning summaries, a new addition, offer natural-language explanations of the model's internal thought process, helping with debugging and transparency.
Encrypted reasoning items provide an additional privacy layer for Zero Data Retention customers.
These allow models to reuse previous reasoning steps without storing any data on OpenAI servers, improving both security and efficiency.
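The sketch below, assuming the Python SDK and an o-series model, shows how these options might be combined; the parameter values are illustrative. The first request runs in background mode and asks for a reasoning summary, while the second shows the Zero Data Retention pattern of disabling storage and requesting encrypted reasoning items instead.

```python
import time
from openai import OpenAI

client = OpenAI()

# Illustrative sketch: run a long task in background mode and request a
# natural-language summary of the model's reasoning.
response = client.responses.create(
    model="o3",
    input="Draft a migration plan for moving our billing system to a new provider.",
    background=True,                                    # long-running asynchronous task
    reasoning={"effort": "medium", "summary": "auto"},  # include a reasoning summary
)

# Poll until the background task finishes.
while response.status in ("queued", "in_progress"):
    time.sleep(5)
    response = client.responses.retrieve(response.id)

print(response.output_text)

# For Zero Data Retention customers, a separate request can disable storage
# and ask for encrypted reasoning items, so reasoning state can be reused
# without anything being persisted on OpenAI's servers.
zdr_response = client.responses.create(
    model="o3",
    input="Review this contract clause for ambiguity.",
    store=False,
    include=["reasoning.encrypted_content"],
)
```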
The latest capabilities are supported across OpenAI's GPT-4o series, GPT-4.1 series, and the o-series models, including o3 and o4-mini. These models now maintain reasoning state across multiple tool calls and requests, which leads to more accurate responses at lower cost and latency.
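One way to picture this, sketched with the Python SDK: chaining a follow-up request to an earlier response lets the model carry its state forward instead of starting from scratch. The prompts here are invented for illustration.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative sketch: chain two requests so the second call can build on
# the state of the first.
first = client.responses.create(
    model="o4-mini",
    input="Compare three database options for a small analytics workload.",
)

follow_up = client.responses.create(
    model="o4-mini",
    previous_response_id=first.id,  # carry forward the earlier context
    input="Given that comparison, which one would you pick and why?",
)

print(follow_up.output_text)
```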
Despite the expanded feature set, OpenAI has confirmed that pricing for the new tools and capabilities within the Responses API will remain consistent with existing rates.
For example, the Code Interpreter tool is priced at $0.03 per session, and file search usage is billed at $2.50 per 1,000 calls, with storage costs of $0.10 per GB per day after the first free gigabyte.
Web search pricing varies based on the model and search context size, ranging from $25 to $50 per 1,000 calls. Image generation through the gpt-image-1 tool is also charged according to resolution and quality tier, starting at $0.011 per image.
All tool usage is billed at the chosen model's per-token rates, with no additional markup for the newly added capabilities.
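As a rough, hypothetical illustration of how those rates combine: a workload of 10,000 file search calls, 2,000 Code Interpreter sessions, and 5 GB of vector store storage over a 30-day month would incur roughly $25 for file search, $60 for Code Interpreter, and about $12 for storage (4 billable GB × $0.10 × 30 days), or around $97 in tool fees before model token charges.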
With these updates, OpenAI continues to expand what is possible with the Responses API. Developers gain access to a richer set of tools and enterprise-ready features, while enterprises can now build more integrated, capable, and secure AI-driven applications.
All features are live as of May 21, with pricing and implementation details available through OpenAI's documentation.