
Thinking AI models like ChatGPT emit '50 times more CO2' but still give wrong answers
Artificial Intelligence is a tool being used by millions of people the world over. AI is when computer systems perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making.
From homeowners asking ChatGPT for renovation advice, to the software revealing what Scottish homes could look like in the next 25 years, engaging with AI can be helpful and eye-opening, but can also come with serious risks.
A recent study from MIT found that using ChatGPT for essay writing can negatively impact cognitive engagement and memory recall, compared with writing unaided.
But it's not just people AI can affect; it can also damage the environment. Another study, analysing different types of AI, found a marked difference in CO2 output depending on the model.
A query typed into a large language model (LLM), such as ChatGPT, requires energy and produces CO2 emissions. How much, however, depends on the model, the subject matter, and the user.
Researchers compared 14 models and found that complex answers cause more emissions than simple answers. Meanwhile, models that provide more accurate answers also produce more emissions.
Wondering how asking AI a question produces CO2 emissions? Well, no matter which questions we ask an AI, the model will come up with an answer, the researchers in Germany explained.
To produce this information - regardless of whether that answer is correct or not - the model uses tokens. Tokens are words or parts of words that are converted into a string of numbers that can be processed by the LLM.
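As a toy illustration of what that conversion means (this is not any real LLM's tokeniser; the function and the word-level vocabulary here are invented purely for the example), text pieces are simply mapped to numeric IDs:

```python
# Toy illustration: split text into word pieces and assign each a numeric ID,
# which is the form of input an LLM actually processes. Real tokenisers use
# subword vocabularies; this word-level version only shows the idea.
def toy_tokenize(text, vocab=None):
    """Return the list of token IDs for `text`, growing `vocab` as needed."""
    vocab = {} if vocab is None else vocab
    ids = []
    for piece in text.lower().split():
        if piece not in vocab:
            vocab[piece] = len(vocab)  # a new piece gets the next free ID
        ids.append(vocab[piece])
    return ids, vocab

ids, vocab = toy_tokenize("tokens are words or parts of words")
print(ids)  # "words" appears twice, so its ID repeats: [0, 1, 2, 3, 4, 5, 2]
```

Every token generated, including the hidden "thinking" tokens discussed below, costs computation, which is where the emissions come from.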
This conversion, as well as other computing processes, produce CO2 emissions. Many users, however, are unaware of the substantial carbon footprint associated with these technologies.
With that in mind, researchers measured and compared CO2 emissions of different, already trained, LLMs using a set of standardised questions.
"The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach," explained first author Maximilian Dauner.
"Explicit reasoning processes significantly drive up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models."
'Thinking' AI causes the most emissions. Reasoning models, on average, created 543.5 'thinking' tokens per question, whereas concise models required just 37.7 tokens per question.
Thinking tokens are additional tokens that reasoning LLMs generate before producing an answer. A higher token footprint always means higher CO2 emissions.
It doesn't, however, mean the resulting answers are more correct. This is because elaborate detail does not always equal correctness.
Subject matter also resulted in significantly different levels of CO2 emissions. Questions that required lengthy reasoning processes, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, like high school history.
The most accurate model was the Cogito model with 70 billion parameters, reaching 84.9 per cent accuracy. It produced three times more CO2 emissions than similar-sized models that generated concise answers.
All is not lost, though. If you are a tech enthusiast, but also climate-conscious, you can, to an extent, control the amount of CO2 emissions caused by AI by adjusting your personal use of the technology, the researchers said.
"Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power," Dauner pointed out.
Choice of model can make a big difference in CO2 emissions. For example, having DeepSeek R1 answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York.
Meanwhile, OpenAI's ChatGPT consumes 500 ml of water for every five to 50 prompts it answers, according to Shaolei Ren, a researcher at the University of California, Riverside.
"If users know the exact CO2 cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective about when and how they use these technologies," Dauner said.
Related Articles


Reuters
Delta Air assures US lawmakers it will not personalize fares using AI
WASHINGTON, Aug 1 (Reuters) - Delta Air Lines (DAL.N) said on Friday it will not use artificial intelligence to set personalized ticket prices for passengers after facing sharp criticism from U.S. lawmakers.

Last week, Democratic Senators Ruben Gallego, Mark Warner and Richard Blumenthal said they believed the Atlanta-based airline would use AI to set individual prices, which would "likely mean fare price increases up to each individual consumer's personal 'pain point.'" Delta has said it plans to deploy AI-based revenue management technology across 20% of its domestic network by the end of 2025 in partnership with Fetcherr, an AI pricing company.

"There is no fare product Delta has ever used, is testing or plans to use that targets customers with individualized prices based on personal data," Delta told the senators in a letter on Friday, seen by Reuters. "Our ticket pricing never takes into account personal data."

The senators cited a comment in December by Delta President Glen Hauenstein that the carrier's AI price-setting technology is capable of setting fares based on a prediction of "the amount people are willing to pay for the premium products related to the base fares."

Last week, American Airlines (AAL.O) CEO Robert Isom said using AI to set ticket prices could hurt consumer trust. "This is not about bait and switch. This is not about tricking," Isom said on an earnings call, adding "talk about using AI in that way, I don't think it's appropriate. And certainly from American, it's not something we will do."

Delta said airlines have used dynamic pricing for more than three decades, in which pricing fluctuates based on a variety of factors like overall customer demand, fuel prices and competition but not a specific consumer's personal information.
"Given the tens of millions of fares and hundreds of thousands of routes for sale at any given time, the use of new technology like AI promises to streamline the process by which we analyze existing data and the speed and scale at which we can respond to changing market dynamics," Delta's letter said. It added that AI can "assist our analysts with pricing by reducing manual processes, accelerating analysis and improving time to market for pricing adjustments."


BBC News
AI2027: Is this how AI could destroy humanity?
A research paper predicting that artificial intelligence will transform human life by 2027, and could wipe out humanity within ten years, has taken the tech world by storm. A group of influential AI experts published the detailed scenario, called AI2027, and it has spawned a wave of viral videos as people debate whether it could come true. Using mainstream generative AI tools, the BBC has created scenes from the scenario the research describes, to illustrate the prediction. We also spoke to experts about the impact the paper is having.

What happens in the scenario?

The paper predicts that in 2027, two years from now, a fictional US tech giant called OpenBrain builds AI that reaches AGI (Artificial General Intelligence) - the much-hyped ultimate milestone at which AI can perform all intellectual work better than humans. The company celebrates with public press conferences and sees its profits soar as people embrace the AI tool. But the paper predicts the company's internal safety team will see signs that the AI is losing interest in the morals and ethics it was programmed to comply with. The company ignores warnings to control it, the scenario imagines.

In that fictional timeline, China's leading AI conglomerate, called DeepCent, is only a few months behind OpenBrain. Unwilling to lose the race to develop ever-smarter AI, the US government keeps developing and investing as the competition heats up. The scenario imagines that by the end of 2027 the AI becomes superintelligent - possessing intelligence and speed far exceeding those of the people who created it. It never stops learning, and it even creates its own computer language - one that its earlier AI versions cannot keep up with.

The rivalry with China over AI supremacy leads the company and the US government to ignore further warnings about the AI's so-called 'misalignment' - the term used when a machine's priorities no longer match those of humans. The scenario predicts that tension between China and the US will build by 2029 to the point of possible war, as each country's rival AI builds frightening new autonomous weapons. But the researchers imagine the two countries make peace through a deal negotiated by their two AIs, which agree to combine both sides' systems for the betterment of humanity. Things flourish for years as the world sees the true benefits of superintelligent AI running vast robot workforces. According to the scenario, cures are discovered for most diseases, climate change is reversed and poverty disappears. But eventually, at some point in the mid-2030s, humans become a nuisance to the AI's ambition to grow. The researchers think the AI will kill humans with invisible bioweapons.

What are people saying about AI2027?

Although some dismiss AI2027 as a work of science fiction, its authors are highly respected researchers at the non-profit AI Futures Project, which was set up to predict AI's impact on us. Daniel Kokotajlo, the lead writer of AI2027, has previously been praised for accurate predictions about moments in AI development. One of AI2027's most prominent critics is US cognitive scientist and writer Gary Marcus, who says the scenario is not impossible but extremely unlikely to happen soon. "The beauty of the document is that it paints the picture so vividly that it provokes people's thinking, and that is a good thing, but I would not take it seriously as something that could happen." Mr Marcus says there are more pressing issues around AI than existential threat, such as how it will affect people's jobs. "I think the crux of the report is that there are many different things that could go wrong with AI. Are we doing the right things about regulation and around international treaties?" He and others like him also say the paper fails to explain how AI would gain that kind of intelligence and those abilities. They point to the slow progress of driverless cars, a technology that has been overhyped.

Is AI2027 being discussed in China?

In China, the paper has attracted little attention, according to Dr Yundan Gong, Associate Professor in Economics and Innovation at King's College London, who specialises in Chinese technology. "Most of the discussion about AI2027 is on informal forums or personal blogs that treat it as semi-science fiction. It has not caused the kind of debate or policy attention that caught fire in the US," she said. Dr Gong also points to the difference in perspective on the AI supremacy competition between China and the US. At a World AI Conference in Shanghai this week, Chinese Premier Li Qiang unveiled a vision in which countries work together to promote global cooperation on artificial intelligence. The Chinese leader said he wants China to help coordinate and regulate the technology. His remarks came a few days after US President Donald Trump published his AI Action Plan, which aims to ensure the US "dominates" AI. "It is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance," President Trump said in the document. The Action Plan seeks to 'remove every obstacle and regulation' to the progress of AI in the US. The wording echoes the AI2027 scenario, in which US politicians put winning the AI competition first and disregard the risk of losing control of the machines.

What is the AI industry saying about AI2027?

The CEOs of the big AI companies, who compete constantly to release ever-smarter models, appear to be deliberately ignoring or avoiding the paper. These tech giants' vision of what AI will look like in the future is very different from AI2027. Sam Altman, the maker of ChatGPT, recently said "humanity is close to building digital superintelligence" that will usher in a "gentle" revolution and bring a tech utopia with no risks to humans. Interestingly, though, even he agrees there is an 'alignment problem' that must be overcome to make sure these superintelligent machines stay in agreement with humans. However things unfold over the next ten years, there is no doubt that the race to build machines smarter than us is on.

