
Opinion: Wanna help save the planet? Stop asking AI dumb questions
It takes huge amounts of energy to power artificial intelligence – so much energy that it's looking less and less likely that the US will meet its goals for reducing greenhouse gas emissions. (If we still have any such goals under President Donald Trump.)
What's less known is that AI also consumes copious amounts of water needed to cool all that IT equipment.
To generate a 100-word email, a chatbot using GPT-4 requires 519 millilitres of water – roughly equivalent to an 18-ounce bottle of water. That doesn't sound like much, but multiplied across millions of users, it's significant. And AI often needs far more than 100 words to answer our most pressing questions, such as:
– What are three excuses for skipping dinner at my (fill in the blank's) house tonight?
– Can you rewrite this email to make me sound smarter?
– How do you make a mojito?
– Does this outfit look good on me?
If you are wondering about that last query, yes, there are folks who rely on ChatGPT for wardrobe advice. Some check in with Chat on a daily basis by uploading a photo of themselves before they leave the house, just to make sure they look presentable. These superusers often spring for a US$20-per-month (RM84) subscription to ChatGPT Plus, which provides priority access, among other perks.
Chat can also help you write a dating profile, plan a trip to Mexico City, manage your finances, give you relationship advice, tell you what shampoo to use and what colour to paint your living room.
Another plus: ChatGPT never talks down to you. Even the most outlandish queries get a polite, ego-boosting response like this: 'That's a thoughtful and important question. Here's a grounded response.'
Google vs ChatGPT
But again, there's no getting around it: AI is hard on the planet.
Example: The New York Times reports that Amazon is building an enormous AI data centre in Indiana that will use 2.2 gigawatts of electricity, which is enough to power a million homes. And according to a report from Goldman Sachs, 'a ChatGPT query needs nearly 10 times as much electricity to process as a Google search.'
So we could save energy by opting for Google search, except Google is getting into the AI business, too. Have you noticed those 'AI Overviews' at the top of search results?
Those come at an environmental cost.
'Embedding generative AI in such a widely used application is likely to deepen the tech sector's hunger for fossil fuels and water,' writes Scientific American staffer Allison Parshall.
The good news is there is a way to block those pesky AI Overviews; YouTube has tutorials that will walk you through it.
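For anyone who would rather not sit through a video, here is one widely shared workaround – an illustration, not something from the original column: Google offers a plain 'Web' results view that leaves the AI summary out, reachable by appending the parameter udm=14 to a search address, like so:

https://www.google.com/search?q=mojito+recipe&udm=14

Most browsers will also let you save that format as a custom search engine (with %s standing in for your search terms), making the AI-free view the default.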
In further good news, there are smart people looking for ways to make AI more environmentally friendly, but that could take a while.
In the meantime, should we conserve water and energy by letting AI focus on important tasks like diagnosing breast cancer, predicting floods and tracking icebergs? Maybe stop running to ChatGPT every time we have a personal problem? Should I feel guilty, for example, if I ask Chat how to stop my cats from scratching the couch?
Not according to Chat.
'No, guilt isn't productive unless it's leading you to positive action,' Chat told me. 'Instead, awareness is more productive.'
But if you do worry about the planet, Chat recommends using AI 'with purpose' rather than as entertainment. No need to swear it off entirely.
'The focus should be on conscious consumption rather than abstinence,' Chat says.
Lower 'brain engagement'
That sounds reasonable, except a recent MIT study offers evidence that the longer we use AI, the less conscious we become.
Using an EEG to measure the brain activity of 54 subjects, researchers found that those who used ChatGPT to write SAT essays showed lower 'brain engagement' than two other groups – one allowed to use Google search, the other relying solely on brain power to complete the essays.
'Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study,' Time magazine reported.
Granted, this is only one small study. But to be on the safe side, I'm going to lay off Chat for a while. Maybe I'll hit Google with that cat question.
There is, however, one thing Google can't tell me: Does that dress I ordered online look OK on me or should I send it back?
Tell me what you think, Chat. And please, be brutally honest. – The Sacramento Bee/Tribune News Service