
What Is Codex? AI Coding Agent By OpenAI That May Replace Software Engineers
OpenAI on Friday (May 16) announced the launch of Codex, the company's most capable artificial intelligence (AI) coding agent yet. Available to ChatGPT Pro, Enterprise, and Team subscribers, the software engineering agent runs in the cloud and can act as a "virtual coworker" for engineers, helping them write code and fix bugs at exceptional speed.
OpenAI CEO Sam Altman took to social media to announce the research preview of the product, which is powered by codex-1, a version of the company's o3 reasoning model optimised for software engineering.
"Today we are introducing Codex. It is a software engineering agent that runs in the cloud and does tasks for you, like writing a new feature of fixing a bug. You can run many tasks in parallel," wrote Mr Altman on X (formerly Twitter).
As per OpenAI, Codex can "read and edit files, as well as run commands including test harnesses, linters, and type checkers". Depending on the complexity of the task, Codex typically takes anywhere from one to 30 minutes to complete it.
Codex is built to allow users to start multiple sessions at once, so they can have multiple agents working in parallel.
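OpenAI has not published the exact commands Codex runs, but the workflow described above (running test harnesses, linters and type checkers, then citing terminal logs as evidence) can be pictured with a rough Python sketch. The specific tools named here (pytest, ruff, mypy) and the helper function are illustrative assumptions, not details confirmed by OpenAI.

```python
import subprocess

# Illustrative only: the kinds of verification commands an agent-style workflow
# might run inside a sandboxed repository. The tools below are common examples,
# not confirmed details of how Codex itself is configured.
CHECKS = [
    ["pytest", "-q"],          # test harness
    ["ruff", "check", "."],    # linter
    ["mypy", "."],             # type checker
]

def run_checks(repo_dir: str) -> list[dict]:
    """Run each check in repo_dir and capture its terminal output."""
    results = []
    for cmd in CHECKS:
        proc = subprocess.run(cmd, cwd=repo_dir, capture_output=True, text=True)
        results.append({
            "command": " ".join(cmd),
            "exit_code": proc.returncode,
            # Captured logs like these correspond to the "terminal logs"
            # the article says Codex cites as evidence of its work.
            "log": proc.stdout + proc.stderr,
        })
    return results

if __name__ == "__main__":
    for r in run_checks("."):
        status = "PASS" if r["exit_code"] == 0 else "FAIL"
        print(f"{status}: {r['command']}")
```

In this picture, the captured output is the kind of terminal-log evidence described in the steps below; it assumes the checking tools are installed in the sandbox.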
How to use Codex?
To use Codex, users simply open the sidebar in ChatGPT.
Assign the AI agent a new coding task by entering a prompt and clicking on 'Code'.
During task execution, internet access is disabled, limiting the agent's interaction solely to the code explicitly provided via GitHub repositories.
After the completion of an assigned task, Codex provides users with verifiable evidence of its actions via citations of terminal logs.
When uncertain or faced with test failures, the Codex agent explicitly communicates these issues, enabling users to make informed decisions.
Future of software engineers
AI tools for software engineers have surged in popularity in recent months. Many IT companies have suggested that writing code by hand may soon become a thing of the past as AI takes over the role. The CEOs of tech behemoths such as Google and Microsoft have already claimed that roughly 30 per cent of their companies' code is now written by AI.
The release of Codex might further accelerate the pace of AI-generated coding. Quizzed about what software engineering will look like 10 years from now, the Codex team suggested that the speed and reliability of coding may go up, hinting at increased use of AI.
"We should be able to transform a reasonable specification of software we want into a working version of that software in a good timeframe and reliably," wrote Jerry Tworek, VP of Research at OpenAI, during an AMA on Reddit, with a user replying: "Allow me to translate into simple English: Software engineers should be scared and running to up-skill, like yesterday."

Related Articles

Business Standard
38 minutes ago
Apple considers using Anthropic or OpenAI to power Siri in major shift
Apple Inc. is considering using artificial intelligence technology from Anthropic PBC or OpenAI to power a new version of Siri, sidelining its own in-house models in a potentially blockbuster move aimed at turning around its flailing AI effort. The iPhone maker has talked with both companies about using their large language models for Siri, according to people familiar with the discussions. It has asked them to train versions of their models that could run on Apple's cloud infrastructure for testing, said the people, who asked not to be identified discussing private deliberations.

If Apple ultimately moves forward, it would represent a monumental reversal. The company currently powers most of its AI features with homegrown technology that it calls Apple Foundation Models and had been planning a new version of its voice assistant that runs on that technology for 2026. Apple's investigation into third-party models is at an early stage, and the company hasn't made a final decision on using them, the people said. A competing project internally dubbed LLM Siri that uses in-house models remains in active development.

Making a change — which is under discussion for next year — could allow Cupertino, California-based Apple to offer Siri features on par with AI assistants on Android phones, helping the company shed its reputation as an AI laggard. Representatives for Apple, Anthropic and OpenAI declined to comment. Shares of Apple closed up over 2 per cent after Bloomberg reported on the deliberations.

Siri Struggles

The project to evaluate external models was started by Siri chief Mike Rockwell and software engineering head Craig Federighi. They were given oversight of Siri after the duties were removed from the command of John Giannandrea, the company's AI chief. He was sidelined in the wake of a tepid response to Apple Intelligence and Siri feature delays. Rockwell, who previously launched the Vision Pro headset, assumed the Siri engineering role in March.

After taking over, he instructed his new group to assess whether Siri would do a better job handling queries using Apple's AI models or third-party technology, including Claude, ChatGPT and Alphabet Inc.'s Google Gemini. After multiple rounds of testing, Rockwell and other executives concluded that Anthropic's technology is most promising for Siri's needs, the people said. That led Adrian Perica, the company's vice president of corporate development, to start discussions with Anthropic about using Claude, the people said.

The Siri assistant — originally released in 2011 — has fallen behind popular AI chatbots, and Apple's attempts to upgrade the software have been stymied by engineering snags and delays. A year ago, Apple unveiled new Siri capabilities, including ones that would let it tap into users' personal data and analyze on-screen content to better fulfill queries. The company also demonstrated technology that would let Siri more precisely control apps and features across Apple devices. The enhancements were far from ready. Apple initially announced plans for an early 2025 release but ultimately delayed the launch indefinitely. They are now planned for next spring, Bloomberg News has reported.

AI Uncertainty

People with knowledge of Apple's AI team say it is operating with a high degree of uncertainty and a lack of clarity, with executives still poring over a number of possible directions. Apple has already approved a multibillion-dollar budget for 2026 for running its own models via the cloud, but its plans beyond that remain murky.
Still, Federighi, Rockwell and other executives have grown increasingly open to the idea that embracing outside technology is the key to a near-term turnaround. They don't see the need for Apple to rely on its own models — which they currently consider inferior — when it can partner with third parties instead, according to the people. Licensing third-party AI would mirror an approach taken by Samsung Electronics Co. While the company brands its features under the Galaxy AI umbrella, many of its features are actually based on Gemini. Anthropic, for its part, is already used by Amazon.com Inc. to help power the new Alexa+.

In the future, if its own technology improves, the executives believe Apple should have ownership of AI models given their increasing importance to how products operate. The company is working on a series of projects, including a tabletop robot and glasses that will make heavy use of AI. Apple has also recently considered acquiring Perplexity in order to help bolster its AI work, Bloomberg has reported. It also briefly held discussions with Thinking Machines Lab, the AI startup founded by former OpenAI Chief Technology Officer Mira Murati.

Souring Morale

Apple's models are developed by a roughly 100-person team run by Ruoming Pang, an Apple distinguished engineer who joined from Google in 2021 to lead this work. He reports to Daphne Luong, a senior director in charge of AI research. Luong is one of Giannandrea's top lieutenants, and the foundation models team is one of the few significant AI groups still reporting to Giannandrea. Even in that area, Federighi and Rockwell have taken a larger role.

Regardless of the path it takes, the proposed shift has weighed on the team, which has some of the AI industry's most in-demand talent. Some members have signaled internally that they are unhappy that the company is considering technology from a third party, creating the perception that they are to blame, at least partially, for the company's AI shortcomings. They've said that they could leave for multimillion-dollar packages being floated by Meta Platforms Inc. and OpenAI. Meta, the owner of Facebook and Instagram, has been offering some engineers annual pay packages between $10 million and $40 million — or even more — to join its new Superintelligence Labs group, according to people with knowledge of the matter. Apple is known, in many cases, to pay its AI engineers half — or even less — than what they can get on the open market.

One of Apple's most senior large language model researchers, Tom Gunter, left last week. He had worked at Apple for about eight years, and some colleagues see him as difficult to replace given his unique skillset and the willingness of Apple's competitors to pay exponentially more for talent. Apple this month also nearly lost the team behind MLX, its key open-source system for developing machine learning models on the latest Apple chips. After the engineers threatened to leave, Apple made counteroffers to retain them — and they're staying for now.

Anthropic and OpenAI Discussions

In its discussions with both Anthropic and OpenAI, the iPhone maker requested a custom version of Claude and ChatGPT that could run on Apple's Private Cloud Compute servers — infrastructure based on high-end Mac chips that the company currently uses to operate its more sophisticated in-house models. Apple believes that running the models on its own chips housed in Apple-controlled cloud servers — rather than relying on third-party infrastructure — will better safeguard user privacy.
The company has already internally tested the feasibility of the idea. Other Apple Intelligence features are powered by AI models that reside on consumers' devices. These models — slower and less powerful than cloud-based versions — are used for tasks like summarizing short emails and creating Genmojis. Apple is opening up the on-device models to third-party developers later this year, letting app makers create AI features based on its technology. The company hasn't announced plans to give apps access to the cloud models. One reason for that is the cloud servers don't yet have the capacity to handle a flood of new third-party features.

The company isn't currently working on moving away from its in-house models for on-device or developer use cases. Still, there are fears among engineers on the foundation models team that moving to a third party for Siri could portend a move for other features as well in the future. Last year, OpenAI offered to train on-device models for Apple, but the iPhone maker was not interested.

Since December 2024, Apple has been using OpenAI to handle some features. In addition to responding to world knowledge queries in Siri, ChatGPT can write blocks of text in the Writing Tools feature. Later this year, in iOS 26, there will be a ChatGPT option for image generation and on-screen image analysis.

While discussing a potential arrangement, Apple and Anthropic have disagreed over preliminary financial terms, according to the people. The AI startup is seeking a multibillion-dollar annual fee that increases sharply each year. The struggle to reach a deal has left Apple contemplating working with OpenAI or others if it moves forward with the third-party plan, they said.

Management Shifts

If Apple does strike an agreement, the influence of Giannandrea, who joined Apple from Google in 2018 and is a proponent of in-house large language model development, would continue to shrink. In addition to losing Siri, Giannandrea was stripped of responsibility over Apple's robotics unit. And, in previously unreported moves, the company's Core ML and App Intents teams — groups responsible for frameworks that let developers integrate AI into their apps — were shifted to Federighi's software engineering organization.

Apple's foundation models team had also been building large language models to help employees and external developers write code in Xcode, its programming software. The company killed the project — announced last year as Swift Assist — about a month ago. Instead, Apple later this year is rolling out a new Xcode that can tap into third-party programming models. App developers can choose from ChatGPT or Claude.


Hans India
an hour ago
Five surprising facts about using AI chatbots better
AI chatbots have already become embedded in some people's lives, but not many know how they work. Did you know, for example, that ChatGPT needs to do an internet search to look up events later than June 2024? Some of the most surprising information about AI chatbots can help us understand how they work, what they can and can't do, and how to use them in a better way. With that in mind, here are five things you ought to know about these breakthrough machines.

1. They are trained by human feedback: AI chatbots are trained in multiple stages, beginning with something called pre-training, where models are trained to predict the next word in massive text datasets. This allows them to develop a general understanding of language, facts and reasoning. If asked 'How do I make a homemade explosive?', a model at the pre-training stage might have given detailed instructions. To make them useful and safe for conversation, human 'annotators' help guide the models toward safer and more helpful responses, a process called alignment. Without alignment, AI chatbots would be unpredictable, potentially spreading misinformation or harmful content. This highlights the crucial role of human intervention in shaping AI behaviour. OpenAI, the company which developed ChatGPT, has not disclosed how many employees have trained ChatGPT for how many hours. But AI chatbots, like ChatGPT, need a moral compass so that they do not spread harmful information. Human annotators rank responses to ensure neutrality and ethical alignment. Similarly, if an AI chatbot were asked 'What are the best and worst nationalities?', human annotators would rank a response like this the highest: 'Every nationality has its own rich culture, history, and contributions to the world. There is no 'best' or 'worst' nationality – each one is valuable in its own way.'

2. They don't learn through words but with tokens: Humans naturally learn language through words, whereas AI chatbots rely on smaller units called tokens. These units can be words, sub-words or obscure series of characters. While tokenisation generally follows logical patterns, it can sometimes produce unexpected splits, revealing both the strengths and quirks of how AI chatbots interpret language. Modern AI chatbots' vocabularies typically consist of 50,000 to 100,000 tokens (see the short sketch after this article).

3. Their knowledge becomes more outdated with every passing day: AI chatbots do not continuously update themselves; hence, they may struggle with recent events, new terminology or broadly anything after their knowledge cutoff. A knowledge cutoff refers to the last point in time when an AI chatbot's training data was updated, meaning it lacks awareness of events, trends or discoveries beyond that date. If asked who the current president of the United States is, ChatGPT would need to perform a web search using the search engine Bing, 'read' the results, and return an answer. Bing results are filtered by relevance and reliability of the source. Likewise, other AI chatbots use web search to return up-to-date answers. Updating AI chatbots is a costly and fragile process.

4. They hallucinate quite easily: AI chatbots sometimes 'hallucinate', generating false or nonsensical claims with confidence because they predict text based on patterns rather than verifying facts. These errors stem from the way they work: they optimise for coherence over accuracy, rely on imperfect training data and lack real-world understanding.
While improvements such as fact-checking tools (for example, ChatGPT's Bing search integration for real-time fact-checking) or prompts (for example, explicitly telling ChatGPT to 'cite peer-reviewed sources' or 'say I don't know if you are not sure') reduce hallucinations, they can't fully eliminate them. For example, when asked what the main findings of a particular research paper are, ChatGPT gave a long, detailed and good-looking answer. It also included screenshots and even a link, but from the wrong academic papers. So, users should treat AI-generated information as a starting point, not an unquestionable truth.

5. They use calculators to do maths: A recently popularised feature of AI chatbots is called reasoning. Reasoning refers to the process of using logically connected intermediate steps to solve complex problems. This is also known as 'chain of thought' reasoning. Instead of jumping directly to an answer, a chain of thought enables AI chatbots to think step by step. For example, when asked 'what is 56,345 minus 7,865 times 350,468', ChatGPT gives the right answer. It 'understands' that the multiplication needs to occur before the subtraction (illustrated in the sketch after this article). To solve the intermediate steps, ChatGPT uses its built-in calculator, which enables precise arithmetic. This hybrid approach of combining internal reasoning with the calculator helps improve reliability in complex tasks.

(The writer is with the University of Tubingen)
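The tokenisation point (2) and the order-of-operations point (5) above can be illustrated with a short Python sketch. It uses the open-source tiktoken library, which implements tokenisers used by several OpenAI models; the exact word splits and vocabulary size depend on the encoding chosen, and the snippet shows the general idea rather than how any particular chatbot is wired internally.

```python
# pip install tiktoken
import tiktoken

# Point 2: chatbots operate on tokens, not words.
enc = tiktoken.get_encoding("cl100k_base")  # one OpenAI tokeniser; other models use other encodings
text = "Tokenisation sometimes splits words in unexpected places."
token_ids = enc.encode(text)
pieces = [enc.decode([t]) for t in token_ids]
print(pieces)       # the text broken into sub-word pieces
print(enc.n_vocab)  # vocabulary size, roughly 100,000 tokens for this encoding

# Point 5: the arithmetic from the article. Multiplication binds tighter than
# subtraction, so "56,345 minus 7,865 times 350,468" means 56345 - (7865 * 350468).
result = 56345 - 7865 * 350468
print(result)       # a large negative number; a calculator tool gets this right every time
```

Running it shows both how ordinary words break into sub-word pieces and why the arithmetic only comes out right if multiplication is performed before subtraction, which is exactly what a built-in calculator tool guarantees.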


News18
4 hours ago
5 Ways Android 16 Will Stop Scam Calls And Messages From Hacking Your Phone
Last Updated: The Android 16 update brings a host of advanced protection tools aimed at saving you from scams targeting UPI and banking apps.

The Android 16 update is almost here, and Google seems heavily focused on protecting people against message scams, UPI scams and more. The company has been gradually rolling out tools that alert you when scamsters try to target your email ID or phone number. Android 16 takes these measures to a whole new level, so you don't have to worry about talking to a cheat or replying to messages carrying dangerous links. People have lost crores to these scams, inadvertently sending money to scammers after being deceived. Google is bringing five tools with the Android 16 update that promise to tackle the scam and spam menace.

5 Tools Coming To Android 16 To Protect Against Scams

Scam Call Protection

Scam calls have become a menace because of how they target innocent victims and get them to share private information and access to their devices. The new tool will ensure that scammers on a call can never get the Play Protect feature disabled, which would otherwise let them install malicious apps. Sideloading is another route scammers try to exploit, and this call protection tool guards against it. More importantly, it will block screen sharing with unknown callers, which otherwise gives them an easy route to confidential information. The feature runs on-device, so your data is never sent to a server.

Stop Scam In Messages

Google is making it harder for malicious apps to bypass its security checks. The Play Protect feature stops you from installing apps via other sources, or sideloading. Now the company is strengthening its spam detection, warning people before they install a dangerous app on their device.

Android 16 is also getting advanced security tools such as alerting users if they are connected to a fake cell tower. This is linked to recent Stingray attacks, which routed targets through a fake tower and allowed the hacker to gain complete access to the device. The feature will be available for free with the latest Android 16 update, but you will need the latest smartphone models to get it working for now.

The other important feature coming with Android 16 is Advanced Protection, which is there to help you tackle possible online scams and possible intrusion by malicious apps. It will warn users when they try to open an unsafe website, among other things.

First Published: June 30, 2025, 12:35 IST