Being polite to AI could be harmful to the environment

7NEWS, 2 days ago

Whether it's answering work emails or drafting wedding vows, generative artificial intelligence tools have become a trusty copilot in many people's lives.
But a growing body of research shows that for every problem AI solves, hidden environmental costs are racking up.
Each word in an AI prompt is broken down into clusters of numbers called 'token IDs' and sent to massive data centres — some larger than football fields — powered by coal or natural gas plants.
There, stacks of large computers generate responses through dozens of rapid calculations.
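To make that tokenization step concrete, here is a minimal sketch using the open-source GPT-2 tokenizer from Hugging Face's transformers library. Commercial chatbots use their own tokenizers, so the exact splits and IDs will differ, but the principle is the same.

```python
# Illustration: how a prompt becomes token IDs before it reaches a data centre.
# Uses the open-source GPT-2 tokenizer; other models split text differently.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

prompt = "Please draft a short thank-you email."
token_ids = tokenizer.encode(prompt)

print(token_ids)                                    # a list of integers, one per token
print(tokenizer.convert_ids_to_tokens(token_ids))   # the text pieces behind each ID
```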
The whole process can take up to 10 times more energy to complete than a regular Google search, according to a frequently cited estimate by the Electric Power Research Institute.
So, for each prompt you give AI, what's the damage?
To find out, researchers in Germany tested 14 large language model (LLM) AI systems by asking them both free-response and multiple-choice questions.
Complex questions produced up to six times more carbon dioxide emissions than questions with concise answers.
In addition, 'smarter' LLMs with more reasoning abilities produced up to 50 times more carbon emissions than simpler systems to answer the same question, the study reported.
'This shows us the tradeoff between energy consumption and the accuracy of model performance,' Maximilian Dauner, a doctoral student at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study published Wednesday, said.
Typically, these smarter, more energy-intensive LLMs have tens of billions more parameters (the internal numerical values a model uses to process token IDs) than smaller, more concise models.
'You can think of it like a neural network in the brain. The more neuron connections, the more thinking you can do to answer a question,' Dauner said.
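As a rough sketch of what "parameters" means in code (using PyTorch purely for illustration; the models in the study are far larger), even a toy two-layer network already has thousands of them:

```python
# Minimal sketch: what "parameters" means. Every weight and bias below is one
# parameter; LLMs work the same way, just with tens of billions of them.
import torch.nn as nn

toy_model = nn.Sequential(
    nn.Linear(128, 64),  # 128*64 weights + 64 biases
    nn.ReLU(),
    nn.Linear(64, 10),   # 64*10 weights + 10 biases
)

n_params = sum(p.numel() for p in toy_model.parameters())
print(f"{n_params:,} parameters")  # 8,906 here; GPT-class models have 10^10 or more
```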
What you can do to reduce your carbon footprint
Complex questions require more energy in part because of the lengthy explanations many AI models are trained to provide, Dauner said.
If you ask an AI chatbot to solve an algebra question for you, it may take you through the steps it took to find the answer, he said.
'AI expends a lot of energy being polite, especially if the user is polite, saying 'please' and 'thank you',' Dauner said.
'But this just makes their responses even longer, expending more energy to generate each word.'
For this reason, Dauner suggests users be more straightforward when communicating with AI models. Specify the length of the answer you want and limit it to one or two sentences, or say you don't need an explanation at all.
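A quick, illustrative way to see how politeness inflates a prompt is to count tokens with the same open-source tokenizer as above; actual counts vary by model, but fewer input and output tokens generally mean fewer calculations per response.

```python
# Illustrative comparison: polite framing vs a terse, length-limited prompt.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

polite = ("Hello! I hope you're well. Could you please solve this "
          "algebra problem for me and explain your steps? Thank you so much!")
terse = "Solve 2x + 3 = 11. Answer only, no explanation."

for name, prompt in [("polite", polite), ("terse", terse)]:
    print(name, len(tokenizer.encode(prompt)), "tokens")
```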
Most importantly, Dauner's study highlights that not all AI models are created equal, said Sasha Luccioni, the climate lead at AI company Hugging Face.
Users looking to reduce their carbon footprint can be more intentional about which model they choose for which task.
'Task-specific models are often much smaller and more efficient, and just as good at any context-specific task,' Luccioni said.
If you are a software engineer who solves complex coding problems every day, an AI model suited for coding may be necessary. But for the average high school student who wants help with homework, relying on powerful AI tools is like using a nuclear-powered digital calculator.
Even within the same AI company, different model offerings can vary in their reasoning power, so research what capabilities best suit your needs, Dauner said.
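As an illustrative sketch of Luccioni's point, the transformers library's pipeline API can load a small, task-specific model such as DistilBERT (roughly 66 million parameters, versus tens of billions for a general-purpose chatbot) for a narrow job like sentiment analysis:

```python
# Sketch: a small task-specific model instead of a general-purpose LLM.
# DistilBERT handles sentiment analysis well; no billion-parameter
# chatbot is required for this narrow task.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("This laptop battery lasts all day."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```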
When possible, Luccioni recommends going back to basic sources — online encyclopedias and phone calculators — to accomplish simple tasks.
Why it's hard to measure AI's environmental impact
Putting a number on the environmental impact of AI has proved challenging.
The study noted that energy consumption can vary based on the user's proximity to local energy grids and the hardware used to run AI models.
That's partly why the researchers chose to represent carbon emissions within a range, Dauner said.
Furthermore, many AI companies don't share information about their energy consumption — or details like server size or optimisation techniques that could help researchers estimate energy consumption, Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside who studies AI's water consumption, said.
'You can't really say AI consumes this much energy or water on average — that's just not meaningful. We need to look at each individual model and then (examine what it uses) for each task,' Ren said.
One way AI companies could be more transparent is by disclosing the amount of carbon emissions associated with each prompt, Dauner suggested.
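A back-of-the-envelope sketch of what such a per-prompt disclosure could compute is below. Both constants are placeholder assumptions for illustration, not measured values from the study.

```python
# Back-of-the-envelope CO2 estimate for one AI response. Both constants are
# placeholder assumptions for illustration only; real values depend on the
# model, the hardware and the local energy grid.
WH_PER_TOKEN = 0.002        # assumed energy per generated token, in watt-hours
GRID_G_CO2_PER_KWH = 400    # assumed grid carbon intensity, in g CO2 per kWh

def estimate_co2_grams(output_tokens: int) -> float:
    """Rough CO2 estimate, in grams, for generating output_tokens tokens."""
    energy_kwh = output_tokens * WH_PER_TOKEN / 1000
    return energy_kwh * GRID_G_CO2_PER_KWH

print(f"{estimate_co2_grams(500):.2f} g CO2 for a 500-token answer")  # 0.40 g
```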
'Generally, if people were more informed about the average (environmental) cost of generating a response, people would maybe start thinking, 'Is it really necessary to turn myself into an action figure just because I'm bored?' Or 'do I have to tell ChatGPT jokes because I have nothing to do?'' Dauner said.
Additionally, as more companies push to add generative AI tools to their systems, people may not have much choice how or when they use the technology, Luccioni said.
'We don't need generative AI in web search. Nobody asked for AI chatbots in (messaging apps) or on social media,' Luccioni said.
'This race to stuff them into every single existing technology is truly infuriating, since it comes with real consequences to our planet.'
With less available information about AI's resource usage, consumers have less choice, Ren said, adding that regulatory pressure for more transparency is unlikely to come to the United States anytime soon.
Instead, the best hope for more energy-efficient AI may lie in the cost savings that come with using less energy.
'Overall, I'm still positive about (the future). There are many software engineers working hard to improve resource efficiency,' Ren said.
'Other industries consume a lot of energy too, but it's not a reason to suggest AI's environmental impact is not a problem.
'We should definitely pay attention.'
