
OpenAI Launches o3 Pro, Slashes o3 Model Costs by 80%, Delays Open-Source Model Release
The o3 Pro model, now available through ChatGPT and OpenAI's API, replaces the older o1 Pro and brings a higher level of reasoning and accuracy, especially in areas like science, education, programming, and mathematics. OpenAI describes it as 'a version of our most intelligent model, o3, designed to think longer and provide the most reliable responses.' According to the company, the new model has shown superior performance in both internal and academic evaluations, particularly in clarity, instruction adherence, and depth of analysis.
What sets o3 Pro apart is its enhanced reliability, a feature that makes it ideal for handling complex queries where precision is critical. As OpenAI notes, 'We recommend using it for challenging questions where reliability matters more than speed, and waiting a few minutes is worth the tradeoff.'
Despite using the same underlying architecture as o3, the Pro version has been optimised for dependability. It includes advanced tools like Python code execution, document analysis, web browsing, visual input interpretation, and memory-based personalisation. These tools make o3 Pro more versatile, though response times are typically longer compared to o1 Pro. Notably, some capabilities like temporary chats, image generation, and the Canvas interface are not yet available in o3 Pro. OpenAI has advised users to stick with GPT-4o, o3, or o4-mini for those particular features. Enterprise and Education customers will gain access to the model in the upcoming week.
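For developers who want to try the new model through the API, a minimal sketch using OpenAI's official Python SDK might look like the one below. The model identifier "o3-pro" and the choice of endpoint are assumptions based on the SDK's current conventions, not details confirmed in this article; check the model list for your account before relying on them.

    # Minimal sketch: querying o3 Pro via the OpenAI Python SDK (pip install openai).
    # Assumes the model is exposed under the identifier "o3-pro"; adjust if your
    # account lists a different name, and expect longer latencies than with o3.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.responses.create(
        model="o3-pro",
        input="Outline a rigorous proof strategy for the irrationality of sqrt(2).",
    )
    print(response.output_text)

Because o3 Pro is tuned to think longer, calls like this can take minutes rather than seconds, which is why OpenAI positions it for questions where reliability matters more than speed.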
In tandem with the model upgrade, OpenAI announced a dramatic cost reduction for its o3 model—from $10 to $2 per million input tokens and from $40 to $8 per million output tokens. Cached prompt usage comes with further discounts. The update places OpenAI in a more competitive pricing bracket compared to rivals like Google DeepMind's Gemini and Anthropic's Claude.
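To put the cut in perspective, here is a back-of-the-envelope comparison using only the figures quoted above; the monthly token volumes are hypothetical and chosen purely for illustration, not OpenAI data.

    # Back-of-the-envelope comparison of o3 API costs before and after the price cut.
    # Token volumes below are hypothetical, for illustration only.
    OLD_INPUT, OLD_OUTPUT = 10.00, 40.00   # USD per 1M tokens (previous o3 pricing)
    NEW_INPUT, NEW_OUTPUT = 2.00, 8.00     # USD per 1M tokens (new o3 pricing)

    input_tokens_m = 500    # 500M input tokens per month (hypothetical workload)
    output_tokens_m = 100   # 100M output tokens per month (hypothetical workload)

    old_cost = input_tokens_m * OLD_INPUT + output_tokens_m * OLD_OUTPUT
    new_cost = input_tokens_m * NEW_INPUT + output_tokens_m * NEW_OUTPUT

    print(f"Before: ${old_cost:,.0f}/month")            # Before: $9,000/month
    print(f"After:  ${new_cost:,.0f}/month")             # After:  $1,800/month
    print(f"Savings: {1 - new_cost / old_cost:.0%}")     # Savings: 80%

Whatever the workload mix, the headline 80% reduction holds, since both input and output rates fell by the same factor.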
Confirming the change, CEO Sam Altman posted on X: 'we dropped the price of o3 by 80%!! excited to see what people will do with it now. think you'll also be happy with o3-pro pricing for the performance :)'
However, not all announcements were forward-moving. OpenAI's open-source AI model, initially expected in June, has been delayed. Altman shared that the postponement is due to unexpected progress by the research team that requires additional refinement. 'We are going to take a little more time with our open-weights model, i.e. expect it later this summer but not June,' he wrote.
The open-source model is anticipated to compete with the likes of DeepSeek R1 and is designed to raise the standard for freely accessible large language models.
In a separate blog post, Altman also addressed environmental concerns around AI use, revealing that a single ChatGPT query consumes approximately 0.34 watt-hours of electricity and about 0.000085 gallons of water, comparable to running an oven for about a second and to roughly a fifteenth of a teaspoon of water. He added, 'The cost of intelligence should eventually converge to near the cost of electricity.'
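Those per-query figures are easier to grasp at scale. The short sketch below simply extrapolates them to a hypothetical volume of one billion queries per day; the volume is an assumption for illustration, not an OpenAI figure.

    # Scaling Altman's per-query estimates to a hypothetical one billion queries/day.
    ENERGY_WH_PER_QUERY = 0.34        # watt-hours, as stated by Altman
    WATER_GAL_PER_QUERY = 0.000085    # gallons, as stated by Altman

    queries_per_day = 1_000_000_000   # hypothetical volume, for illustration only

    energy_mwh = queries_per_day * ENERGY_WH_PER_QUERY / 1_000_000  # Wh -> MWh
    water_gal = queries_per_day * WATER_GAL_PER_QUERY

    print(f"~{energy_mwh:,.0f} MWh of electricity per day")  # ~340 MWh
    print(f"~{water_gal:,.0f} gallons of water per day")     # ~85,000 gallons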
With these moves, OpenAI not only aims to empower developers and enterprises with powerful tools at lower costs but also continues to push the envelope in AI innovation.
Related Articles


Time of India
What will learning look like in the age of superintelligence? Sam Altman says intelligence may soon cost no more than electricity
In his recent blog post titled The Gentle Singularity, OpenAI CEO Sam Altman reflects on how the arrival of digital superintelligence may reshape every dimension of human learning. The post is not a speculative essay filled with distant hypotheticals. Instead, it reads like a quiet alert from someone at the very center of what he calls a "takeoff." One of the most significant areas poised for transformation, according to Altman, is learning itself. As artificial intelligence systems surpass human capability in increasingly complex domains, the role of the learner is expected to evolve.
In Altman's view, we are now past the hard part. The breakthroughs behind tools like ChatGPT have already laid the groundwork. What follows is a period where these tools begin to self-improve, causing knowledge creation, experimentation and implementation to accelerate at a pace the world has never seen before. "Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it," Altman writes. That shift in perception is critical: what was once astonishing has quickly become mundane. In education, this means the bar will keep moving. Learners may no longer be evaluated on their ability to recall information or apply frameworks but rather on their ability to collaborate with machines, interpret insights and define new problems worth solving.
Here are six radical shifts Altman's vision suggests we may see in how learning functions in an age of superintelligence:
Cognitive agents will become co-learners
Altman notes that 2025 marks the arrival of AI agents capable of performing real cognitive work. Writing software, solving novel problems and simulating thought are no longer limited to humans. This doesn't mean the end of learning but a reorientation of it. Students, professionals and educators alike may find themselves working alongside these agents, not as passive users but as active collaborators. The process of learning may increasingly center around guiding, auditing and amplifying the work of intelligent systems.
The pace of scientific understanding will compress
One of the most profound claims in Altman's blog is that the timeline for scientific discovery could collapse dramatically. "We may be able to discover new computing substrates, better algorithms, and who knows what else," he writes. "If we can do a decade's worth of research in a year, or a month, then the rate of progress will obviously be quite different." This will directly affect how educational systems operate: curricula may have to update monthly instead of yearly, and students might prepare not for known fields but for capabilities that do not yet exist.
Personalisation will become the baseline
Altman envisions AI systems that feel more like a global brain, "extremely personalized and easy for everyone to use." Such systems could radically alter how learning journeys are shaped. Education may shift away from standardisation and towards deep customisation, where each learner follows a uniquely adaptive path based on their goals, context and feedback loops with intelligent systems. This could also challenge long-held norms around grading, pacing and credentialing.
Creativity will remain human, but enhanced
Despite machines taking over many cognitive tasks, Altman emphasises that the need for art, storytelling and creative vision will remain. However, the way we express creativity is likely to change. Learners in creative fields will no longer be judged solely by their manual skill or originality but by how well they can prompt, guide and harness generative tools. Those who embrace this shift may open entirely new modes of thought and output.
Intelligence will become infrastructural
In Altman's projection, 'As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity.' Once data centers can build other data centers and robots assist in manufacturing robots, the cost of deploying intelligence could plummet. This repositions knowledge from something rare and scarce to something ambient. Learning may become less about access and more about intent: what one chooses to do with the world's near-limitless cognitive resources.
The meaning of expertise may change
As systems outpace human ability in certain domains, the role of the expert will evolve. According to Altman, many of today's jobs might appear trivial or performative to future generations, just as subsistence farming seems primitive to us now. Yet meaning will remain rooted in context. Learners will continue to pursue mastery, not because the machine cannot do it but because the act of learning remains socially and personally meaningful. The human impulse to know and contribute will not vanish; it will be redirected.
Throughout the blog, Altman remains clear-eyed about the challenges. "There will be very hard parts like whole classes of jobs going away," he admits, but he is equally optimistic that the world will become so much richer, so quickly, that new ways of structuring society, policy and education will follow. Learning may become less of a race to gain credentials and more of a lifelong dialogue with intelligent systems that expand what it means to know, to build and to belong. "From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly," Altman writes. The shift may not feel disruptive day to day, but its long arc will redefine how we learn, what we teach and how intelligence itself is understood in the decades to come.


India Today
Kalli Purie on leadership, legacy and future of media at Oxford India Forum
Opening with characteristic candour and humour, Kalli Purie, Vice Chairperson and Executive Editor-in-Chief of India Today Group, quipped, "Being on this side of the conversation is a refreshing shift, especially since I'm usually the one steering the questions. It's a rare but welcome reversal of roles." "It's nice to be back at Oxford," she added with a smile, admitting, though, that being at the Forum felt more like a tutorial. "I feel like a student again!"
OXFORD AND INSTITUTIONAL RESILIENCE
Citing Oxford's tutorial system as its enduring "nucleus," Kalli Purie explained how centuries-old institutions survived by staying true to their core. "Oxford has held on to what makes it unique," she said. "Even in the age of ChatGPT, it is AI-proof because you still have to sit across from your tutor and explain your ideas. No AI will do that for you." She drew a parallel with the India Today Group: "The tools have changed. The platforms have exploded. But the DNA is intact: We are still storytellers at the very core."
She emphasised that innovation in tools should not dilute the core: "We must innovate in all the tools available to us as technology moves, to tell that story the best we can. And I think that's one of the reasons we stand here today, as a media organisation, still delivering impact because we stuck to that basic DNA."
She extended this lesson to personal leadership: "A successful person has to understand their core. If you don't understand your basic core and protect that, and be authentic to it, then you cannot really be successful in anything that you do."
ON AI AND THE RISKS OF REINFORCED BIASES
Kalli Purie didn't mince words about the risks of artificial intelligence. "I feel AI today is mostly trained on data from the past. And the past is full of biases we've spent years trying to dismantle." Even as the India Today Group explores newsroom automation, AI anchors and synthetic pop stars, she insisted that human guardrails remain essential. "To me, the problem with AI is that, because we have no transparency on the data sets that it's being trained on, we don't know what biases it's coming in with. And more often than not, my suspicion is that it has been trained on data sets of the past. Past, which means it's a world that was. It's not the world that we want."
Despite the flux, she remains optimistic, especially about India's place in this shifting landscape. She believes three groups will thrive: "People who love change and hate monotony, Indians because we thrive in chaos, and women, because they are biologically trained to rely on intuition." "I've said this often: people who love change and hate monotony look at change as a way to write a new story, as a new opportunity. By the way, journalists love new things. They love change. They want to go out and explain that unanswered story or question that is out there, come find an answer to it." She added, "So if you look very closely and carefully, you'll see that me - an Indian newswoman - is very well set to deal with this reality. I love that!"
JOURNALISM AND THE 'FRENCH-FRIES-BROCCOLI' PHILOSOPHY
Turning to the challenges of modern media, Kalli Purie described her editorial strategy in simple terms: "You have to give audiences both the French fries and the broccoli." That means offering both content that grabs attention and content that fosters thought. "One way is to create a democratic newsroom. So, we go out and effectively look for journalists and anchors that widely differ in their views. And by the way, if you ask any journalist, they all think they're balanced. Nobody thinks they are aligned on one side or the other, right? It's only when you look at it from the outside, you can see it. So putting them in the same newsroom to debate a story leads to very fiery debates."
She said her newsroom feels like a "giant tutorial," one where disagreement is encouraged. "Old-fashioned, balanced journalism may not be commercially rewarding, but it's essential." Among the innovations she cited were Candid Constructive Conversations (CCC), face-to-face debates between opposing views, and Gross Domestic Behaviour (GDB), an index that measures civic behaviour, bias, and diversity alongside GDP. "We need to nurture better citizens, not just better economies," she said.
THE LAND OF OPPORTUNITY (AND 7-MINUTE DELIVERIES)
When asked about India's trajectory, Kalli Purie offered four people-driven reasons for optimism: "A young, driven workforce that doesn't clock off at 5 p.m., a democratically elected and stable government, deep rootedness in ancient culture, and a uniquely Indian way of creatively solving problems - jugaad at its best." "I used to travel abroad and dream of staying," she said. "But now, I can't wait to get back. India is where the action is. And yes, you can get a Coke delivered in seven minutes! Even that says something about the pace and possibilities of our country."
BRIDGING INSTITUTIONS: FROM OXFORD TO INDIA
Having studied at both Oxford and Harvard, Kalli offered an insightful and humorous comparison: "Oxford is classic, understated, and exclusive, much like Britain. Harvard is loud, open, and over-marketed, much like America." She noted that both institutions prize dissent but express it differently. And she called for stronger academic diplomacy: "India needs to build its academic voice globally. Unlike China, we haven't fully leveraged these partnerships."
ATTENTION IS A CURRENCY
In her closing remarks, Kalli Purie offered a reminder: "Our attention is the most precious thing we have. Let's spend it on things that make us better, not just things that make us click." And with a grin, as she exited the stage to a round of applause, she left the audience with one final aside: "And yes, India has air conditioning too."


Time of India
Meet Mark Zuckerberg's AI dream team powering Meta's next big leap
Brains over data. That's Meta's game plan as it races to dominate artificial intelligence. While other tech titans throw compute power and training data at the problem, Mark Zuckerberg is doing something far more personal. He's handpicking minds. And he's not subtle about it. According to The Wall Street Journal, Zuckerberg has been personally calling OpenAI researchers, offering eye-popping compensation packages, some reportedly as high as $100 million, to woo them away. Even Sam Altman, OpenAI's CEO, admitted in a recent podcast that Meta's offers were staggering. It's all part of Meta's newly revealed Superintelligence Lab, and Zuckerberg has already released the first 11 names on what insiders call 'The List.'
These aren't just brilliant AI engineers. They are scientists, founders, problem solvers, and in many cases immigrants or first-generation Americans whose work helped define the most powerful AI models in existence today. Before we dive in, one thing is clear: this isn't just a hiring spree. It's the making of a brain trust that could shape how AI reasons, speaks, listens, and even dreams. Let's meet the team.
Alexandr Wang
The wunderkind leading Meta's new lab has already made a name for himself in Silicon Valley. As the founder of Scale AI, Wang built a company that quietly powered the data-hungry ambitions of tech's biggest players. What fewer people know is that his story begins far from boardrooms, in New Mexico, where he was born to Chinese immigrant parents who worked as physicists for the U.S. military. Wang grew up surrounded by science and structure, but also by discipline. He competed in national math Olympiads as early as sixth grade, taught himself how to code, and played violin with the same intensity he brought to algorithms. After enrolling at MIT to study mathematics and computer science, he dropped out to pursue Scale. By 28, he wasn't just building tools for AI; he was redefining how AI learns. Meta reportedly invested $14 billion into Scale as part of the deal to bring Wang onboard.
Nat Friedman
In contrast to Wang's youth, Nat Friedman brings gravitas. A seasoned technologist and venture investor, Friedman is known for scaling ideas into institutions. As the former CEO of GitHub, he steered the platform through its $7.5 billion acquisition by Microsoft and was known for his understated but razor-sharp leadership style. Born in Charlottesville, Virginia, Friedman fell in love with online communities at the age of 14 and later called them his 'actual hometown.' That early sense of connection shaped his future, first at MIT, then through his work co-founding Xamarin, a developer tools company that attracted Fortune 500 clients like Coca-Cola and JetBlue. Today, Friedman is deeply embedded in the AI startup ecosystem, backing companies like Perplexity and Stripe.
Trapit Bansal
Born and raised in India, Trapit Bansal is a quiet architect behind some of OpenAI's most sophisticated reasoning models. With dual degrees in mathematics and statistics from IIT Kanpur and a PhD from the University of Massachusetts Amherst, Bansal's academic journey has always straddled theory and application. At OpenAI, he played a crucial role in the development of the o-series, particularly the o1 model, widely regarded as a turning point in AI's ability to 'think' before responding. Bansal's specialty is meta-learning.
Jiahui Yu
A rising star in the world of multimodal AI, Jiahui Yu has already left his mark on two of the most powerful labs in the world: Google and OpenAI. At OpenAI, he led the perception team, working on how machines interpret images, audio, and language as a seamless whole. At Google DeepMind, he helped develop Gemini's multimodal capabilities. Yu's educational path began at the prestigious School of the Gifted Young in China, followed by a PhD in computer science from the University of Illinois Urbana-Champaign.
Shuchao Bi
Shuchao Bi is one of the few people who can claim co-founder status on a cultural juggernaut: YouTube Shorts. During his 11 years at Google, he helped create and refine its short-form video platform and later led its algorithm team. But Bi's heart has always belonged to research. At OpenAI, he focused on multimodal AI and helped launch GPT-4o's voice mode, essentially giving chatbots the power to talk back. Educated at Zhejiang University and later at UC Berkeley, Bi blends statistical elegance with creative application. His role at Meta? To make machines not just responsive, but expressive.
Huiwen Chang
Known for her expertise in image generation and style transfer, Huiwen Chang was instrumental in OpenAI's visual interface work for GPT-4o. But her roots are in rigorous academia. She graduated from the Yao Class at Tsinghua University, a training ground for China's best minds in computer science, and then earned her PhD from Princeton. Chang's work is where art meets architecture. She understands how to train machines to not just see an image, but to manipulate it, interpret it, and even mimic human aesthetic judgment. Before OpenAI, she cut her teeth at Adobe and Google.
Ji Lin
Another Tsinghua-to-MIT story, Ji Lin blends engineering finesse with frontier thinking. He worked on several of OpenAI's most powerful models before joining Meta, with a focus on both reasoning and multimodal integration. What sets Lin apart is his mix of research and real-world application. He interned at NVIDIA, Adobe, and Google before landing at OpenAI.
Hongyu Ren
If you're improving an AI model after it's built, teaching it to be more ethical, more accurate, or more human, you're doing post-training. That's Hongyu Ren's specialty. Educated at Peking University and Stanford, Ren led a post-training team at OpenAI and is one of the more philosophically minded researchers in the group.
Shengjia Zhao
As a co-creator of ChatGPT, Shengjia Zhao is no stranger to AI that captures the public imagination. But behind the scenes, he was also working on one of the field's most quietly important trends: synthetic data. By helping machines generate their own training material, Zhao advanced a method to keep AI learning even as real-world data dries up. After graduating from Tsinghua University and Stanford, Zhao joined OpenAI in 2022 and quickly rose through the ranks.
Johan Schalkwyk
Hailing from South Africa, Johan Schalkwyk has always worked on the frontier of communication. At Google, he led the company's ambitious effort to support 1,000 spoken languages, a moonshot project that blended linguistics, machine learning, and cultural preservation. Most recently, he served as machine learning lead at Sesame, a startup trying to make conversational AI feel like real dialogue.
Pei Sun
Pei Sun helped power the brains behind Waymo, Google's self-driving car unit. His work involved building next-generation models for perception and reasoning, skills that translate neatly into the world of chatbots, robots, and beyond. Educated at Tsinghua University and Carnegie Mellon, Sun began a PhD before dropping out to join the industry faster.
Joel Pobar
An AI infrastructure veteran, Joel Pobar most recently worked at Anthropic, where he helped scale inference systems for some of the most advanced models in the world. Before that, he spent nearly a decade at Facebook, leading engineering teams. Educated in Australia at Queensland University of Technology, Pobar brings a rare mix of insider knowledge and outsider grit. His job at Meta will likely focus on making sure the lab's most powerful creations can actually run reliably, at scale, and in real time.