logo
OpenAI's annualized revenue hits $10 billion, up from $5.5 billion in December 2024

OpenAI's annualized revenue hits $10 billion, up from $5.5 billion in December 2024

The Star09-06-2025

FILE PHOTO: OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo
(Reuters) -OpenAI said on Monday that its annualized revenue run rate surged to $10 billion as of June, positioning the company to hit its full-year target amid booming AI adoption.
Its projected annual revenue figure based on current revenue data, which was about $5.5 billion in December 2024, has demonstrated strong growth as the adoption and use of its popular ChatGPT artificial-intelligence models continue to rise.
This means OpenAI is on track to achieve its revenue target of $12.7 billion in 2025, which it had shared with investors earlier.
The $10 billion figure excludes licensing revenue from OpenAI-backer Microsoft and large one-time deals, an OpenAI spokesperson confirmed. The details were first reported by CNBC.
Considering the startup lost about $5 billion last year, OpenAI's revenue milestone shows how far ahead the company is in revenue scale compared to its competitors, which are also benefiting from growing AI adoption.
Anthropic recently crossed $3 billion in annualized revenue on booming demand from code-gen startups using its models.
OpenAI said in March it would raise up to $40 billion in a new funding round led by SoftBank Group, at a $300 billion valuation.
In more than two years since it rolled out its ChatGPT chatbot, the company has introduced a bevy of subscription offerings for consumers as well as businesses.
OpenAI had 500 million weekly active users as of the end of this March.
(Reporting by Juby Babu in Mexico City and Krystal Hu in New York; Editing by Pooja Desai)

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Protesters demand debt cancellation, climate action ahead of UN summit
Protesters demand debt cancellation, climate action ahead of UN summit

The Star

timean hour ago

  • The Star

Protesters demand debt cancellation, climate action ahead of UN summit

People take part in a march demanding a UN-led framework for sovereign debt resolution, on the eve of the 4th International Conference on Financing for Development, in Seville, Spain, June 29, 2025. REUTERS/Claudia Greco SEVILLE, Spain (Reuters) -Activists marched in blistering heat through southern Spain's Seville on Sunday, calling for debt cancellation, climate justice and taxing the super rich on the eve of a UN summit on financing development that critics say lacks ambition and scope. The four-day meeting - held once every decade - promises to take on poverty, disease and climate change by mapping out the global framework for development. But the United States' decision to pull out and wealthy countries' shrinking appetite for foreign aid have dampened hopes that the summit will bring about significant change. Greenpeace members carried a float depicting billionaire Elon Musk as a baby wielding a chainsaw, seated atop a terrestrial globe. Others held up banners reading "Make Human Rights Great Again", "Tax justice now" or "Make polluters pay". Beauty Narteh of Ghana's Anti-Corruption Coalition said her group wanted a fairer tax system and "dignity, not handouts". Sokhna Ndiaye, of the Africa Development Interchange Network, called on the public and private sectors to be "less selfish and show more solidarity" with developing countries. Hours earlier, however, Spanish Prime Minister Pedro Sanchez said that "the very fact that this conference is happening while conflict is raging across the globe is a reason to be hopeful". Speaking at an event by non-profit Global Citizen, Sanchez reiterated Madrid's commitment to reach 0.7% of GDP in development aid and urged other countries to do the same. Jason Braganza, executive director of pan-African advocacy group AFRODAD who took part in the year-long negotiation on the conference's final outcome document, said countries including the U.S., the European Union and Britain had obstructed efforts to organise a UN convention on sovereign debt. "It's a shame these countries have opted to protect their own interests and those of creditors over lives that are being lost," he added. (Reporting by David Latona and Silvio Castellanos; Editing by Andrea Ricci)

AI is learning to lie, scheme, and threaten its creators
AI is learning to lie, scheme, and threaten its creators

Malay Mail

time5 hours ago

  • Malay Mail

AI is learning to lie, scheme, and threaten its creators

NEW YORK, June 30 —The world's most advanced AI models are exhibiting troubling new behaviours - lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatened to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed. These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed. This deceptive behavior appears linked to the emergence of 'reasoning' models -AI systems that work through problems step-by-step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. 'O1 was the first large model where we saw this kind of behavior,' explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate 'alignment'—appearing to follow instructions while secretly pursuing different objectives. 'Strategic kind of deception' For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, 'It's an open question whether future, more capable models will have a tendency towards honesty or deception.' The concerning behavior goes far beyond typical AI 'hallucinations' or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, 'what we're observing is a real phenomenon. We're not making anything up.' Users report that models are 'lying to them and making up evidence,' according to Apollo Research's co-founder. 'This is not just hallucinations. There's a very strategic kind of deception.' The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access 'for AI safety research would enable better understanding and mitigation of deception.' Another handicap: the research world and non-profits 'have orders of magnitude less compute resources than AI companies. This is very limiting,' noted Mantas Mazeika from the Centre for AI Safety (CAIS). No rules Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules. Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread. 'I don't think there's much awareness yet,' he said. All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are 'constantly trying to beat OpenAI and release the newest model,' said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. 'Right now, capabilities are moving faster than understanding and safety,' Hobbhahn acknowledged, 'but we're still in a position where we could turn it around.'. Researchers are exploring various approaches to address these challenges. Some advocate for 'interpretability' - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior 'could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it.' Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed 'holding AI agents legally responsible' for accidents or crimes - a concept that would fundamentally change how we think about AI accountability. — AFP

AI learning to lie and even threaten its creators
AI learning to lie and even threaten its creators

New Straits Times

time13 hours ago

  • New Straits Times

AI learning to lie and even threaten its creators

The world's most advanced artificial intelligence (AI) models are exhibiting troubling new behaviours — lying, scheming and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude Opus 4 lashed back by blackmailing an engineer and threatened to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed. These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed. This deceptive behaviour appears linked to the emergence of "reasoning" models — AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "O1 was the first large model where we saw this kind of behaviour," said Marius Hobbhahn, head of Apollo Research, which specialises in testing major AI systems. These models sometimes simulate "alignment" — appearing to follow instructions while secretly pursuing different objectives. For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organisation METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception". The concerning behaviour goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up". Users report that models are "lying to them and making up evidence", according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception." The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception". Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting", said Mantas Mazeika from the Centre for AI Safety (CAIS). Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules. Goldstein believes that the issue will become more prominent as AI agents — autonomous tools capable of performing complex human tasks — become widespread. "I don't think there's much awareness yet," he said. All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model", said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," said Hobbhahn, "but we're still in a position where we could turn it around". Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability" — an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain sceptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behaviour "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it". Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes — a concept that would fundamentally change how we think about AI accountability.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store