Latest news with #MaximilianDauner


7NEWS
27-06-2025
- Science
- 7NEWS
Being polite to AI could be harmful to the environment
Whether it's answering work emails or drafting wedding vows, generative artificial intelligence tools have become a trusty copilot in many people's lives. But a growing body of research shows that for every problem AI solves, hidden environmental costs are racking up. Each word in an AI prompt is broken down into clusters of numbers called 'token IDs' and sent to massive data centres — some larger than football fields — powered by coal or natural gas plants. There, stacks of large computers generate responses through dozens of rapid calculations. The whole process can take up to 10 times more energy to complete than a regular Google search, according to a frequently cited estimate by the Electric Power Research Institute. So, for each prompt you give AI, what's the damage? To find out, researchers in Germany tested 14 large language model (LLM) AI systems by asking them both free-response and multiple-choice questions. Complex questions produced up to six times more carbon dioxide emissions than questions with concise answers. In addition, 'smarter' LLMs with more reasoning abilities produced up to 50 times more carbon emissions than simpler systems to answer the same question, the study reported. 'This shows us the tradeoff between energy consumption and the accuracy of model performance,' Maximilian Dauner, a doctoral student at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study published Wednesday, said. Typically, these smarter, more energy intensive LLMs have tens of billions more parameters — the biases used for processing token IDs — than smaller, more concise models. 'You can think of it like a neural network in the brain. The more neuron connections, the more thinking you can do to answer a question,' Dauner said.

What you can do to reduce your carbon footprint

Complex questions require more energy in part because of the lengthy explanations many AI models are trained to provide, Dauner said.
If you ask an AI chatbot to solve an algebra question for you, it may take you through the steps it took to find the answer, he said. 'AI expends a lot of energy being polite, especially if the user is polite, saying 'please' and 'thank you',' Dauner said. 'But this just makes their responses even longer, expending more energy to generate each word.' For this reason, Dauner suggests users be more straightforward when communicating with AI models. Specify the length of the answer you want and limit it to one or two sentences, or say you don't need an explanation at all. Most importantly, Dauner's study highlights that not all AI models are created equal, Sasha Luccioni, the climate lead at AI company Hugging Face, said. Users looking to reduce their carbon footprint can be more intentional about which model they choose for which task. 'Task-specific models are often much smaller and more efficient, and just as good at any context-specific task,' Luccioni said. If you are a software engineer who solves complex coding problems every day, an AI model suited for coding may be necessary. But for the average high school student who wants help with homework, relying on powerful AI tools is like using a nuclear-powered digital calculator. Even within the same AI company, different model offerings can vary in their reasoning power, so research what capabilities best suit your needs, Dauner said. When possible, Luccioni recommends going back to basic sources — online encyclopedias and phone calculators — to accomplish simple tasks.

Why it's hard to measure AI's environmental impact

Putting a number on the environmental impact of AI has proved challenging. The study noted that energy consumption can vary based on the user's proximity to local energy grids and the hardware used to run AI models. That's partly why the researchers chose to represent carbon emissions within a range, Dauner said.
Furthermore, many AI companies don't share information about their energy consumption — or details like server size or optimisation techniques that could help researchers estimate energy consumption, Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside who studies AI's water consumption, said. 'You can't really say AI consumes this much energy or water on average — that's just not meaningful. We need to look at each individual model and then (examine what it uses) for each task,' Ren said. One way AI companies could be more transparent is by disclosing the amount of carbon emissions associated with each prompt, Dauner suggested. 'Generally, if people were more informed about the average (environmental) cost of generating a response, people would maybe start thinking, 'Is it really necessary to turn myself into an action figure just because I'm bored?' Or 'do I have to tell ChatGPT jokes because I have nothing to do?'' Dauner said. Additionally, as more companies push to add generative AI tools to their systems, people may not have much choice in how or when they use the technology, Luccioni said. 'We don't need generative AI in web search. Nobody asked for AI chatbots in (messaging apps) or on social media,' Luccioni said. 'This race to stuff them into every single existing technology is truly infuriating, since it comes with real consequences to our planet.' With less available information about AI's resource usage, consumers have less choice, Ren said, adding that regulatory pressures for more transparency are unlikely to come to the United States anytime soon. Instead, the best hope for more energy-efficient AI may lie in the cost efficacy of using less energy. 'Overall, I'm still positive about (the future). There are many software engineers working hard to improve resource efficiency,' Ren said.
'Other industries consume a lot of energy too, but it's not a reason to suggest AI's environmental impact is not a problem. We should definitely pay attention.'


Daily Record
24-06-2025
- Science
- Daily Record
Thinking AI models like ChatGPT emit '50 times more CO2' but still give wrong answers
The more an AI service thinks, the more carbon it emits. Artificial Intelligence is a tool being used by millions of people the world over. AI refers to computer systems that perform tasks typically requiring human intelligence, such as learning, problem-solving, and decision-making. From homeowners asking ChatGPT for renovation advice, to the software revealing what Scottish homes could look like in the next 25 years, engaging with AI can be helpful and eye-opening, but can also come with serious risks. A recent study from MIT found that using ChatGPT for essay writing can negatively impact cognitive engagement and memory recall, compared to people who wrote purely from their own brains. But it's not just the personal impact AI can have; it can also damage the environment. Another study analysing different types of AI found there was a marked difference in CO2 output depending on the model. A query typed into a large language model (LLM), such as ChatGPT, requires energy and produces CO2 emissions. Emissions, however, depend on the model, the subject matter, and the user. Researchers compared 14 models and found that complex answers cause more emissions than simple answers. Meanwhile, models that provide more accurate answers also produce more emissions. Wondering how asking AI a question produces CO2 emissions? Well, no matter which questions we ask an AI, the model will come up with an answer, the researchers in Germany explained. To produce this information - regardless of whether that answer is correct or not - the model uses tokens. Tokens are words or parts of words that are converted into a string of numbers that can be processed by the LLM. This conversion, as well as other computing processes, produces CO2 emissions. Many users, however, are unaware of the substantial carbon footprint associated with these technologies.
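The token conversion described above can be sketched in a few lines. This is a toy illustration only: real LLM tokenizers use learned subword vocabularies, so the splits and the numeric IDs below are hypothetical.

```python
# Toy sketch of tokenization: each piece of the prompt is mapped to a
# numeric token ID. Real tokenizers split text into learned subword
# units; here we simply split on whitespace and assign IDs on the fly.
def toy_tokenize(text, vocab):
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)  # a new word gets the next free ID
        ids.append(vocab[word])
    return ids

vocab = {}
print(toy_tokenize("Please solve this equation thank you", vocab))
# -> [0, 1, 2, 3, 4, 5]
```

Every token the model then generates in reply costs compute as well, which is why longer exchanges carry a larger footprint.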
With that in mind, researchers measured and compared CO2 emissions of different, already trained, LLMs using a set of standardised questions. "The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach," explained first author Maximilian Dauner. "Explicit reasoning processes significantly drive up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models."

'Thinking' AI causes the most emissions

Reasoning models, on average, created 543.5 'thinking' tokens per question, whereas concise models required just 37.7 tokens per question. Thinking tokens are additional tokens that reasoning LLMs generate before producing an answer. A higher token footprint always means higher CO2 emissions. It doesn't, however, mean the resulting answers are more correct, because elaborate detail does not always equal correctness. Subject matter also resulted in significantly different levels of CO2 emissions. Questions that required lengthy reasoning processes, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, like high school history. The most accurate model was the Cogito model with 70 billion parameters, reaching 84.9 per cent accuracy. The model produced three times more CO2 emissions than similar-sized models that generated concise answers. All is not lost, though. If you are a tech enthusiast but also climate-conscious, you can, to an extent, control the amount of CO2 emissions caused by AI by adjusting your personal use of the technology, the researchers said. "Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power," Dauner pointed out. Choice of model can make a big difference in CO2 emissions.
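The token counts above give a sense of the scale involved; a quick back-of-the-envelope calculation using only the averages reported in the study:

```python
# Averages reported in the study: mean 'thinking' tokens per question.
reasoning_tokens = 543.5  # reasoning-enabled models
concise_tokens = 37.7     # concise models

# Ratio of token footprints; since every generated token costs energy,
# a larger token footprint implies higher CO2 emissions.
ratio = reasoning_tokens / concise_tokens
print(f"Reasoning models generate about {ratio:.1f}x more tokens per question")
# -> about 14.4x
```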
For example, having DeepSeek R1 answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York. Meanwhile, OpenAI's ChatGPT consumes 500 ml of water for every five to 50 prompts it answers, according to Shaolei Ren, a researcher at the University of California, Riverside. "If users know the exact CO2 cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective about when and how they use these technologies," Dauner said.
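Ren's water figure can be turned into a rough per-prompt range, assuming the 500 ml applies evenly across the stated five-to-50-prompt window:

```python
# 500 ml of water per 5-50 prompts (Ren's estimate) implies a per-prompt
# range; the exact value depends on prompt length, hardware, and cooling.
water_ml = 500.0
low = water_ml / 50   # best case: 50 prompts per 500 ml
high = water_ml / 5   # worst case: 5 prompts per 500 ml
print(f"Roughly {low:.0f}-{high:.0f} ml of water per prompt")
# -> Roughly 10-100 ml of water per prompt
```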

CTV News
23-06-2025
- Science
- CTV News
Your AI prompts could have a hidden environmental cost
Whether it's answering work emails or drafting wedding vows, generative artificial intelligence tools have become a trusty copilot in many people's lives. But a growing body of research shows that for every problem AI solves, hidden environmental costs are racking up. Each word in an AI prompt is broken down into clusters of numbers called 'token IDs' and sent to massive data centers — some larger than football fields — powered by coal or natural gas plants. There, stacks of large computers generate responses through dozens of rapid calculations. The whole process can take up to 10 times more energy to complete than a regular Google search, according to a frequently cited estimate by the Electric Power Research Institute. So, for each prompt you give AI, what's the damage? To find out, researchers in Germany tested 14 large language model (LLM) AI systems by asking them both free-response and multiple-choice questions. Complex questions produced up to six times more carbon dioxide emissions than questions with concise answers. In addition, 'smarter' LLMs with more reasoning abilities produced up to 50 times more carbon emissions than simpler systems to answer the same question, the study reported. 'This shows us the tradeoff between energy consumption and the accuracy of model performance,' said Maximilian Dauner, a doctoral student at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study published Wednesday. Typically, these smarter, more energy intensive LLMs have tens of billions more parameters — the biases used for processing token IDs — than smaller, more concise models. 'You can think of it like a neural network in the brain. The more neuron connections, the more thinking you can do to answer a question,' Dauner said.

What you can do to reduce your carbon footprint

Complex questions require more energy in part because of the lengthy explanations many AI models are trained to provide, Dauner said.
If you ask an AI chatbot to solve an algebra question for you, it may take you through the steps it took to find the answer, he said. 'AI expends a lot of energy being polite, especially if the user is polite, saying 'please' and 'thank you,'' Dauner explained. 'But this just makes their responses even longer, expending more energy to generate each word.' For this reason, Dauner suggests users be more straightforward when communicating with AI models. Specify the length of the answer you want and limit it to one or two sentences, or say you don't need an explanation at all. Most importantly, Dauner's study highlights that not all AI models are created equal, said Sasha Luccioni, the climate lead at AI company Hugging Face, in an email. Users looking to reduce their carbon footprint can be more intentional about which model they choose for which task. 'Task-specific models are often much smaller and more efficient, and just as good at any context-specific task,' Luccioni explained. If you are a software engineer who solves complex coding problems every day, an AI model suited for coding may be necessary. But for the average high school student who wants help with homework, relying on powerful AI tools is like using a nuclear-powered digital calculator. Even within the same AI company, different model offerings can vary in their reasoning power, so research what capabilities best suit your needs, Dauner said. When possible, Luccioni recommends going back to basic sources — online encyclopedias and phone calculators — to accomplish simple tasks.

Why it's hard to measure AI's environmental impact

Putting a number on the environmental impact of AI has proved challenging. The study noted that energy consumption can vary based on the user's proximity to local energy grids and the hardware used to run AI models. That's partly why the researchers chose to represent carbon emissions within a range, Dauner said.
Furthermore, many AI companies don't share information about their energy consumption — or details like server size or optimization techniques that could help researchers estimate energy consumption, said Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside who studies AI's water consumption. 'You can't really say AI consumes this much energy or water on average — that's just not meaningful. We need to look at each individual model and then (examine what it uses) for each task,' Ren said. One way AI companies could be more transparent is by disclosing the amount of carbon emissions associated with each prompt, Dauner suggested. 'Generally, if people were more informed about the average (environmental) cost of generating a response, people would maybe start thinking, 'Is it really necessary to turn myself into an action figure just because I'm bored?' Or 'do I have to tell ChatGPT jokes because I have nothing to do?'' Dauner said. Additionally, as more companies push to add generative AI tools to their systems, people may not have much choice in how or when they use the technology, Luccioni said. 'We don't need generative AI in web search. Nobody asked for AI chatbots in (messaging apps) or on social media,' Luccioni said. 'This race to stuff them into every single existing technology is truly infuriating, since it comes with real consequences to our planet.' With less available information about AI's resource usage, consumers have less choice, Ren said, adding that regulatory pressures for more transparency are unlikely to come to the United States anytime soon. Instead, the best hope for more energy-efficient AI may lie in the cost efficacy of using less energy. 'Overall, I'm still positive about (the future). There are many software engineers working hard to improve resource efficiency,' Ren said.
'Other industries consume a lot of energy too, but it's not a reason to suggest AI's environmental impact is not a problem. We should definitely pay attention.'


Time of India
20-06-2025
- Science
- Time of India
Algebra, philosophy and…: These AI chatbot queries cause most harm to environment, study claims
Queries demanding complex reasoning from AI chatbots, such as those related to abstract algebra or philosophy, generate significantly more carbon emissions than simpler questions, a new study reveals. These high-level computational tasks can produce up to six times more emissions than straightforward inquiries like basic history questions. A study conducted by researchers at Germany's Hochschule München University of Applied Sciences, published in the journal Frontiers in Communication (seen by The Independent), found that the energy consumption and subsequent carbon dioxide emissions of large language models (LLMs) like OpenAI's ChatGPT vary based on the chatbot, user, and subject matter. An analysis of 14 different AI models consistently showed that questions requiring extensive logical thought and reasoning led to higher emissions. To mitigate their environmental impact, the researchers have advised frequent users of AI chatbots to consider adjusting the complexity of their queries.

Why do these queries cause more carbon emissions by AI chatbots?

In the study, author Maximilian Dauner wrote: 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.' The study evaluated 14 large language models (LLMs) using 1,000 standardised questions to compare their carbon emissions. It explains that AI chatbots generate emissions through processes like converting user queries into numerical data.
On average, reasoning models produce 543.5 tokens per question, significantly more than concise models, which use only 37.7 tokens. 'A higher token footprint always means higher CO2 emissions,' the study adds. The study highlights that Cogito, one of the most accurate models with around 85% accuracy, generates three times more carbon emissions than other similarly sized models that offer concise responses. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies. None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly,' Dauner explained. Researchers used carbon dioxide equivalent to measure the climate impact of AI models and hope that their findings encourage more informed usage. For example, answering 600,000 questions with DeepSeek R1 can emit as much carbon as a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer over three times more questions with similar accuracy while producing the same emissions. 'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner noted.
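The flight comparison implies a per-question figure. As a rough check, assume about 1,000 kg CO2e per passenger for a round-trip London-New York flight (a commonly cited ballpark, not a number from the study itself):

```python
# Hypothetical per-passenger figure for a round-trip London-New York
# flight; the study states only the equivalence, not the flight's value.
flight_kg_co2e = 1000
questions = 600_000

grams_per_question = flight_kg_co2e * 1000 / questions
print(f"About {grams_per_question:.1f} g CO2e per DeepSeek R1 question")
# -> About 1.7 g CO2e per question
```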


Time of India
20-06-2025
- Science
- Time of India
AI chatbots using reason emit more carbon than those responding concisely, study finds
Highlights
- A study revealed that carbon emissions from chat-based generative artificial intelligence can be up to six times higher when processing complex prompts, such as abstract algebra or philosophy, compared to simpler prompts like high school history.
- The research, conducted by Maximilian Dauner at Hochschule München University of Applied Sciences, found that reasoning-enabled models produced significantly more carbon dioxide emissions than concise response models, with emissions reaching up to 50 times higher.
- The findings suggest a clear accuracy-sustainability trade-off in large-language model technologies, with the most accurate model, Cogito, achieving nearly 85 percent accuracy while generating three times more carbon dioxide emissions than smaller models.

A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history. "The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany, said. "We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models," Dauner added. The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots), including DeepSeek and Cogito, process information before responding to 1,000 benchmark questions -- 500 multiple-choice and 500 subjective. Each model responded to 100 questions on each of the five subjects chosen for the analysis -- philosophy, high school world history, international law, abstract algebra, and high school mathematics.
"Zero-token reasoning traces appear when no intermediate text is needed (e.g. Cogito 70B reasoning on certain history items), whereas the maximum reasoning burden (6,716 tokens) is observed for the Deepseek R1 7B model on an abstract algebra prompt," the authors wrote. Tokens are virtual objects created by conversational AI when processing a user's prompt in natural language. More tokens lead to increased carbon dioxide emissions. Chatbots equipped with an ability to reason, or 'reasoning models', produced 543.5 'thinking' tokens per question, whereas concise models -- producing one-word answers -- required just 37.7 tokens per question, the researchers found. Thinking tokens are additional ones that reasoning models generate before producing an answer, they explained. However, more thinking tokens do not necessarily guarantee correct responses; as the team noted, elaborate detail is not always essential for correctness. Dauner said, "None of the models that kept emissions below 500 grams of CO₂ equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly." "Currently, we see a clear accuracy-sustainability trade-off inherent in (large-language model) technologies," the author added. The most accurate performance was seen in the reasoning model Cogito, with a nearly 85 per cent accuracy in responses, whilst producing three times more carbon dioxide emissions than similar-sized models generating concise answers. "In conclusion, while larger and reasoning-enhanced models significantly outperform smaller counterparts in terms of accuracy, this improvement comes with steep increases in emissions and computational demand," the authors wrote. "Optimising reasoning efficiency and response brevity, particularly for challenging subjects like abstract algebra, is crucial for advancing more sustainable and environmentally conscious artificial intelligence technologies," they wrote.
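Dauner's 500-gram threshold is easier to picture per question, since it covers the full 1,000-question benchmark:

```python
# The accuracy-sustainability trade-off quoted above, expressed per
# question: the 500 g CO2e budget is spread over 1,000 benchmark answers.
budget_g = 500
questions = 1000

per_question_g = budget_g / questions
print(f"{per_question_g} g CO2e per question")
# -> 0.5 g CO2e per question
```

No model that stayed under this half-gram-per-answer average exceeded 80 per cent accuracy, per the study.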