Redrawing the not-so-pretty energy footprint of AI

The Hindu | 05-05-2025
Generative Artificial Intelligence (AI) has undoubtedly eased access to art and reduced the time and effort required to complete certain tasks. For example, ChatGPT-4o can generate a Studio Ghibli-inspired portrait in seconds from a single prompt. But this ease comes at a significant and often overlooked energy cost, one that has even left Graphics Processing Units (GPUs) 'melting'. As AI tools advance, the environmental impact will only grow, making the technology unsustainable in its current form. How can AI be developed sustainably? And can nuclear energy, specifically Small Modular Reactors (SMRs), be a viable alternative?
AI is not free. Every time someone uses ChatGPT or any other AI tool, a data centre somewhere in the world is drawing electricity, much of it generated from fossil fuels. 'It's super fun seeing people love images in ChatGPT, but our GPUs are melting,' tweeted Sam Altman, CEO of OpenAI. Projections indicate that data centres could account for 10% of the world's total electricity usage by 2030. Though these estimates reflect worldwide trends, it is worth noting that India currently has sufficient capacity to meet its own domestic AI-related electricity needs. Yet, with adoption and ambitions both rising, proactive planning is imperative.
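To get a sense of what a 10% share would mean, here is a rough back-of-envelope sketch; the global generation figure is an assumed round number, not a figure from the projections above.

```python
# Back-of-envelope sketch: what a 10% data-centre share of global
# electricity would mean. The 10% share is the projection cited above;
# the ~30,000 TWh global generation figure is an assumed round number.

GLOBAL_GENERATION_TWH = 30_000   # assumed annual global electricity generation
DATA_CENTRE_SHARE = 0.10         # projected data-centre share by 2030 (from the article)

data_centre_demand_twh = GLOBAL_GENERATION_TWH * DATA_CENTRE_SHARE
print(f"Implied data-centre demand: {data_centre_demand_twh:,.0f} TWh per year")
# For scale, India's total annual electricity consumption is on the
# order of 1,500-1,800 TWh (rough, assumed range).
```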
Training an AI model, whether a conversational tool such as ChatGPT or an image generator such as Midjourney, can emit as much CO2 as five cars do over their entire lifetimes. Once deployed, AI tools continue to draw immense power from data centres as they serve countless users around the globe. This resource consumption is staggering, and it becomes less sustainable as AI adoption grows.
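The comparison is simple arithmetic: the energy consumed during training multiplied by the carbon intensity of the grid, set against the lifetime emissions of five cars. The sketch below uses purely illustrative, assumed numbers, not measurements of any particular model.

```python
# Illustrative sketch of the training-emissions comparison.
# All numbers are assumed for illustration, not measured values.

TRAINING_ENERGY_MWH = 500            # assumed energy to train one large model
GRID_CARBON_KG_PER_KWH = 0.5         # assumed grid carbon intensity (kg CO2 per kWh)
CAR_LIFETIME_EMISSIONS_TONNES = 57   # assumed lifetime CO2 per car, incl. manufacture

training_emissions_tonnes = TRAINING_ENERGY_MWH * 1_000 * GRID_CARBON_KG_PER_KWH / 1_000
cars_equivalent = training_emissions_tonnes / CAR_LIFETIME_EMISSIONS_TONNES

print(f"Training emissions: ~{training_emissions_tonnes:.0f} tonnes CO2")
print(f"Equivalent to the lifetime emissions of ~{cars_equivalent:.1f} cars")
```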
To start with, AI companies need to be transparent about their energy consumption. Just as some regulations mandate the disclosure of privacy practices surrounding data usage, companies should be required to disclose their environmental impact: first, how much energy is being consumed? Second, where is it coming from? Third, what steps are being taken to minimise consumption? Such data would provide insight into where energy is used most heavily and encourage research and development towards a more sustainable model of AI development.
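As a sketch of what such a disclosure could look like, the hypothetical record below mirrors the three questions; the field names and figures are illustrative, not drawn from any existing regulation.

```python
# Hypothetical sketch of an energy-disclosure record mirroring the three
# questions above. Field names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class EnergyDisclosure:
    reporting_period: str                                            # e.g. "2024-Q4"
    total_energy_mwh: float                                          # 1. how much energy was consumed?
    energy_sources: dict[str, float] = field(default_factory=dict)   # 2. where did it come from? (source -> share)
    mitigation_steps: list[str] = field(default_factory=list)        # 3. what is being done to reduce it?

example = EnergyDisclosure(
    reporting_period="2024-Q4",
    total_energy_mwh=12_000.0,   # assumed figure
    energy_sources={"grid_fossil": 0.6, "solar": 0.3, "nuclear": 0.1},
    mitigation_steps=["model distillation", "scheduling workloads to low-carbon hours"],
)
print(example)
```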
Advantages of SMRs
Another, perhaps controversial, solution is to address the energy source behind all of this technological growth. It is time nuclear energy, particularly SMRs, was discussed seriously. While often a subject of heated debate, nuclear power is also a powerful potential answer to the energy demands created by AI and other emerging technologies. The AI boom is happening fast, and the current energy infrastructure simply will not be able to keep up.
SMRs present a transformative opportunity for the global energy landscape to support booming AI and data infrastructure. Unlike traditional large-scale nuclear power plants that demand extensive land, water, and infrastructure, SMRs are designed to be compact and scalable. This flexibility allows them to be deployed closer to high-energy-demand facilities, such as data centres, which require consistent and reliable power to manage vast computational workloads. Their ability to provide 24x7, zero-carbon baseload electricity makes them an ideal alternative to renewable sources such as solar and wind by ensuring a stable energy supply regardless of weather conditions.
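A simple capacity calculation illustrates the baseload point; the data-centre load and capacity factors below are assumed round numbers.

```python
# Sketch: nameplate capacity needed to serve a constant data-centre load.
# All inputs are assumed round numbers for illustration.

DATA_CENTRE_LOAD_MW = 100      # assumed constant (24x7) load
SMR_CAPACITY_FACTOR = 0.90     # assumed availability of a small modular reactor
SOLAR_CAPACITY_FACTOR = 0.20   # assumed average solar capacity factor

smr_nameplate_mw = DATA_CENTRE_LOAD_MW / SMR_CAPACITY_FACTOR
solar_nameplate_mw = DATA_CENTRE_LOAD_MW / SOLAR_CAPACITY_FACTOR

print(f"SMR nameplate needed:   ~{smr_nameplate_mw:.0f} MW")
print(f"Solar nameplate needed: ~{solar_nameplate_mw:.0f} MW (plus storage for nights and cloudy days)")
```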
The benefits of SMRs extend beyond energy reliability. Their modular design reduces construction time and costs compared with conventional nuclear plants, enabling faster deployment to meet the rapidly growing demands of AI and data-driven industries. Additionally, SMRs offer enhanced safety features, with passive safety systems that rely on natural phenomena to cool the reactor core and shut it down safely, reducing the risk of accidents. This makes them more acceptable and easier to integrate in regions where large-scale nuclear facilities would face opposition. The ability of SMRs to operate in diverse environments, from urban areas to remote locations, also supports the decentralisation of energy production, reducing transmission losses and enhancing grid resilience.
Some of the challenges
However, the adoption of SMRs is not without challenges. Significant policy shifts will be required to create a robust regulatory framework that addresses safety, waste management and public perception. There is also the matter of substantial upfront investment, as the technology is still maturing and may struggle to compete on cost with established energy sources. Additionally, coordinating SMR deployment with existing renewable energy initiatives will require careful planning to maximise synergies while minimising redundancy. In India's case, despite these challenges, the cost of electricity from SMRs is projected to fall from ₹10.3 per kWh to ₹5 per kWh once the reactors are operational, which is lower than the average cost of electricity.
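To see what that per-unit difference means at data-centre scale, the sketch below applies the two tariffs to an assumed annual consumption; the 100 MW load is a hypothetical figure.

```python
# Sketch: annual electricity bill for a hypothetical 100 MW data centre
# at the two SMR tariffs cited above. The load figure is assumed.

LOAD_MW = 100
HOURS_PER_YEAR = 8_760
annual_kwh = LOAD_MW * 1_000 * HOURS_PER_YEAR     # 876 million kWh

for tariff in (10.3, 5.0):                        # ₹ per kWh, figures cited in the article
    annual_cost_crore = annual_kwh * tariff / 1e7 # 1 crore = 10 million
    print(f"At ₹{tariff}/kWh: ~₹{annual_cost_crore:,.0f} crore per year")
```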
In conclusion, a public-private partnership model presents a realistic solution to the challenges of sustainable AI development. By leveraging the strengths of both sectors, this model can facilitate the efficient development of SMRs alongside other forms of renewable energy to support advancements in AI.
Anwesha Sen and Sourav Mannaraprayil are with The Takshashila Institution.