Advanced AI models generate up to 50 times more CO₂ emissions than more common LLMs when answering the same questions
The more accurate we try to make AI models, the bigger their carbon footprint, with some models producing up to 50 times more carbon dioxide emissions per question than others, a new study has revealed.
Reasoning models, such as Anthropic's Claude, OpenAI's o3 and DeepSeek's R1, are specialized large language models (LLMs) that dedicate more time and computing power to produce more accurate responses than their predecessors.
Yet, aside from some impressive results, these models have been shown to face severe limitations in their ability to crack complex problems. Now, a team of researchers has highlighted another constraint on the models' performance — their exorbitant carbon footprint. They published their findings June 19 in the journal Frontiers in Communication.
"The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," study first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences in Germany, said in a statement. "We found that reasoning-enabled models produced up to 50 times more CO₂ emissions than concise response models."
To answer the prompts given to them, LLMs break language up into tokens: word chunks that are converted into strings of numbers before being fed into a neural network. During training, the network learns the probabilities of certain patterns of tokens appearing, and it then uses these probabilities to generate responses.
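As a rough illustration of that first step, here is a minimal Python sketch of tokenization. The vocabulary and token IDs are invented for the example; a real LLM tokenizer uses a learned vocabulary of tens of thousands of subword chunks rather than whole words.

```python
# Toy tokenizer: map word chunks to numeric IDs (all values here are made up).
toy_vocab = {"how": 101, "do": 102, "plants": 103, "make": 104, "food": 105, "?": 106}

def tokenize(text: str) -> list[int]:
    """Split text into word chunks and convert each chunk to a numeric token ID."""
    chunks = text.lower().replace("?", " ?").split()
    return [toy_vocab.get(chunk, 0) for chunk in chunks]  # 0 stands in for an unknown chunk

print(tokenize("How do plants make food?"))
# [101, 102, 103, 104, 105, 106] -- the string of numbers the neural network actually sees
```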
Reasoning models attempt to further boost accuracy using a process known as "chain-of-thought." This technique breaks one complex problem down into smaller, more digestible intermediate steps that follow a logical flow, mimicking how a human might work through the same problem.
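To see why chain-of-thought responses generate so many more tokens, compare a concise prompt with a chain-of-thought prompt. The question and prompt wording below are hypothetical and only illustrate the prompting pattern, not the study's setup.

```python
question = "A train travels 120 km in 1.5 hours. What is its average speed?"

# Concise prompting: ask for the answer only, so the model generates few tokens.
concise_prompt = f"{question}\nAnswer with just the final number."

# Chain-of-thought prompting: ask the model to reason step by step, so it generates
# intermediate steps (and many more tokens) before giving the final answer.
cot_prompt = f"{question}\nThink step by step, then give the final answer."

# A chain-of-thought response might read:
#   "Step 1: speed = distance / time. Step 2: 120 km / 1.5 h = 80 km/h. Answer: 80 km/h."
# versus the concise response "80" -- and every extra token costs extra computation.
print(concise_prompt)
print(cot_prompt)
```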
Related: AI 'hallucinates' constantly, but there's a solution
However, these models have significantly higher energy demands than conventional LLMs, posing a potential economic bottleneck for companies and users wishing to deploy them. Yet, despite some research into the environmental impacts of growing AI adoption more generally, comparisons between the carbon footprints of different models remain relatively rare.
To examine the CO₂ emissions produced by different models, the scientists behind the new study asked 14 LLMs 1,000 questions across a range of topics. The models ranged in size from 7 billion to 72 billion parameters.
The computations were performed using the Perun framework, which analyzes LLM performance and the energy it requires, on an NVIDIA A100 GPU. The team then converted the energy usage into CO₂ by assuming that each kilowatt-hour of energy produces 480 grams of CO₂.
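The conversion itself is simple arithmetic. The sketch below applies the study's assumed emission factor of 480 grams of CO₂ per kilowatt-hour; the example energy figure is invented for illustration, not a measurement from the paper.

```python
EMISSION_FACTOR_G_PER_KWH = 480  # study's assumption: grams of CO2 per kilowatt-hour

def energy_to_co2_grams(energy_kwh: float) -> float:
    """Convert measured energy use (in kWh) into estimated grams of CO2."""
    return energy_kwh * EMISSION_FACTOR_G_PER_KWH

# Hypothetical example: if answering a batch of questions consumed 2.5 kWh on the GPU,
# the estimated emissions would be 2.5 * 480 = 1,200 grams of CO2.
print(energy_to_co2_grams(2.5))  # 1200.0
```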
Their results show that, on average, reasoning models generated 543.5 tokens per question, compared with just 37.7 tokens for more concise models, roughly 14 times as many. These extra tokens mean extra computation, so the more accurate reasoning models produced more CO₂.
The most accurate model was the 72 billion parameter Cogito model, which answered 84.9% of the benchmark questions correctly. Cogito released three times the CO₂ emissions of similarly sized models made to generate answers more concisely.
"Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies," said Dauner. "None of the models that kept emissions below 500 grams of CO₂ equivalent [total greenhouse gases released] achieved higher than 80% accuracy on answering the 1,000 questions correctly."
RELATED STORIES
—Replika AI chatbot is sexually harassing users, including minors, new study claims
—OpenAI's 'smartest' AI model was explicitly told to shut down — and it refused
—AI benchmarking platform is helping top companies rig their model performances, study claims
But the issues go beyond accuracy. Questions that required longer reasoning, such as those in algebra or philosophy, caused emissions to spike six times higher than straightforward look-up queries did.
The researchers' calculations also show that emissions depend on which model is chosen. To answer 60,000 questions, DeepSeek's 70 billion parameter R1 model would produce as much CO₂ as a round-trip flight between New York and London. Alibaba Cloud's 72 billion parameter Qwen 2.5 model, however, could answer the same questions with similar accuracy for about a third of the emissions.
The study's findings aren't definitive; emissions may vary depending on the hardware used and the energy grids that supply its power, the researchers emphasized. But they should prompt AI users to think before they deploy the technology, the researchers noted.
"If users know the exact CO₂ cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies," Dauner said.