Can you choose an AI model that harms the planet less?

Time of India, 19-06-2025
From uninvited results at the top of your search engine queries to offering to write your emails and helping students do homework, generative artificial intelligence is quickly becoming part of daily life as tech giants race to develop the most advanced models and attract users.
All those prompts come with an environmental cost: A report last year from the Energy Department found AI could help increase the portion of the nation's electricity supply consumed by data centers from 4.4% to 12% by 2028. To meet this demand, some power plants are expected to burn more coal and natural gas.
And some chatbots are linked to more greenhouse gas emissions than others. A study published Thursday in the journal Frontiers in Communication analyzed different generative AI chatbots' capabilities and the planet-warming emissions generated by running them. Researchers found that chatbots with bigger "brains" used exponentially more energy and answered questions more accurately -- up to a point.
"We don't always need the biggest, most heavily trained model, to answer simple questions. Smaller models are also capable of doing specific things well," said Maximilian Dauner, a doctoral student at the Munich University of Applied Sciences and lead author of the paper. "The goal should be to pick the right model for the right task."
The study evaluated 14 large language models, a common form of generative AI often referred to by the acronym LLMs, by asking each a set of 500 multiple choice and 500 free response questions across five different subjects. Dauner then measured the energy used to run each model and converted the results into carbon dioxide equivalents based on global averages.

In most of the models tested, questions in logic-based subjects, like abstract algebra, produced the longest answers -- which likely means they used more energy to generate compared with fact-based subjects, such as history, Dauner said.
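The conversion Dauner describes can be sketched as a simple calculation; the carbon-intensity figure and per-question energy values below are illustrative assumptions, not numbers from the paper:

```python
# Sketch: converting measured energy use into grams of CO2 equivalent.
# ASSUMPTION: a global average grid carbon intensity of ~480 g CO2e per kWh,
# an illustrative figure, not the value used in the study.
GLOBAL_AVG_INTENSITY_G_PER_KWH = 480.0

def energy_to_co2e_grams(energy_kwh: float,
                         intensity_g_per_kwh: float = GLOBAL_AVG_INTENSITY_G_PER_KWH) -> float:
    """Convert energy consumed (kWh) into grams of CO2 equivalent."""
    return energy_kwh * intensity_g_per_kwh

# Hypothetical example: 1,000 questions at 0.002 kWh each.
total_energy_kwh = 1000 * 0.002                 # 2.0 kWh
print(energy_to_co2e_grams(total_energy_kwh))   # 960.0 g CO2e
```

Because the conversion is a straight multiplication, any error in the assumed grid intensity carries through proportionally to the emissions estimate, which is why the study's figures are averages rather than measurements of actual emissions.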
AI chatbots that show their step-by-step reasoning while responding tend to use far more energy per question than chatbots that don't. The five reasoning models tested in the study did not answer questions much more accurately than the nine other models studied. The model that emitted the most, DeepSeek-R1, offered answers of comparable accuracy to models that generated a quarter of the emissions.
The study left out key information: it covered only open-source LLMs, so some of the most popular AI programs made by large tech corporations, such as OpenAI's ChatGPT and Google's Gemini, were not included in the results.
And because the paper converted the measured energy to emissions based on a global CO2 average, it offered only an estimate; it did not indicate the actual emissions generated by using these models, which can vary hugely depending on which country the data center running them is in.
"Some regions are going to be powered by electricity from renewable sources, and some are going to be primarily running on fossil fuels," said Jesse Dodge, a senior research scientist at the Allen Institute for AI who was not affiliated with the new research.
In 2022, Dodge led a study comparing the greenhouse gas emissions generated by training an LLM in 16 different regions of the world. Depending on the time of year, some of the highest-emitting areas, like the central United States, had roughly three times the carbon intensity of the lowest-emitting ones, such as Norway.
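The region dependence Dodge describes follows directly from the linear conversion: the same training run emits more or less depending on the local grid. The intensity figures below are rough, illustrative assumptions chosen to show a threefold gap, not values from his study:

```python
# Same training run, different grid: emissions scale linearly with the
# local grid's carbon intensity. Figures are illustrative assumptions.
REGIONAL_INTENSITY_G_PER_KWH = {
    "central_us": 600.0,  # coal/gas-heavy grid (illustrative)
    "norway": 200.0,      # hydro-dominated grid (illustrative)
}

def training_emissions_kg(energy_kwh: float, region: str) -> float:
    """Estimate emissions (kg CO2e) for a run of the given energy in a region."""
    return energy_kwh * REGIONAL_INTENSITY_G_PER_KWH[region] / 1000.0

energy_kwh = 10_000.0  # hypothetical training energy
ratio = (training_emissions_kg(energy_kwh, "central_us")
         / training_emissions_kg(energy_kwh, "norway"))
print(ratio)  # 3.0 -- same job, three times the emissions
```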
But even with this limitation, the new study fills a gap in research on the trade-off between energy cost and model accuracy, Dodge said. "Everyone knows that as you increase model size, typically models become more capable, use more electricity and have more emissions," he said.
Reasoning models, which have been increasingly trendy, are likely further bumping up energy costs, because of their longer answers.
"For specific subjects an LLM needs to use more words to get to a more accurate response," Dauner said. "Longer answers and those that use a reasoning process generate more emissions."
Sasha Luccioni, the AI and climate lead at Hugging Face, an AI company, said that subject matter is less important than output length, which is determined by how the model was trained. She also emphasized that the study's sample size is too small to create a complete picture of emissions from AI.
"What's relevant here is not the fact that it's math and philosophy, it's the length of the input and the output," she said.
Last year, Luccioni published a study that compared 88 LLMs and also found that larger models generally had higher emissions. Her results also indicated that AI text generation -- which is what chatbots do -- used 10 times as much energy as simple classification tasks like sorting emails into folders.
Luccioni said that these kinds of "old school" AI tools, including classic search engine functions, have been overlooked as generative models have become more widespread. Most of the time, she said, the average person doesn't need to use an LLM at all.
Dodge added that people looking for facts are better off just using a search engine, since generative AI can "hallucinate" false information.
"We're reinventing the wheel," Luccioni said. People don't need to use generative AI as a calculator, she said. "Use a calculator as a calculator."
This article originally appeared in The New York Times.