
Tech Giants' Net Zero Goals Verging On Fantasy: Researchers
The credibility of climate pledges by the world's tech giants to rapidly become carbon neutral is fading fast as they devour more and more energy in the race to develop AI and build data centres, researchers warned Thursday.
Apple, Google and Meta said they would stop adding CO2 into the atmosphere by 2030, while Amazon set that target for 2040.
Microsoft promised to be "net negative" -- pulling CO2 out of the air -- by the end of this decade.
But those vows, made before the AI boom transformed the sector, are starting to look like a fantasy even as these companies have doubled down on them, according to independent analysts.
"The greenhouse gas emissions targets of tech companies appear to have lost their meaning," Thomas Hay, lead author of a report by think tanks Carbon Market Watch and NewClimate Institute, told AFP.
"If energy consumption continues to rise unchecked and without adequate oversight," he added, "these targets will likely be unachievable."
The deep-dive analysis found the overall integrity of the climate strategies at Meta, Microsoft and Amazon to be "poor", while Apple's and Google's were deemed "moderate".
When it came to the quality of emissions reduction targets, those of Meta and Amazon were judged "very poor", while Google and Microsoft scored a "poor" rating. Only Apple fared better.
The expanding carbon footprint of the five top tech behemoths stems mostly from the breakneck expansion of artificial intelligence, which requires huge amounts of energy to develop and run.
Electricity consumption -- and the carbon emissions that come with it -- has doubled for some of these companies in the last three or four years, and tripled for others, the report found.
The same is true across the sector: operational emissions of the world's top 200 information technology companies came to nearly 300 million tonnes of CO2 in 2023, and nearly five times that if the downstream use of their products and services is taken into account, according to the UN's International Telecommunication Union.
If the sector were a country, it would rank fifth in greenhouse gas emissions, ahead of Brazil.
Electricity to power data centres increased on average 12 percent per year from 2017 to 2024, and is projected to double by 2030, according to the International Energy Agency (IEA).
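As a quick back-of-envelope check of those figures (an illustrative sketch in Python, not a calculation taken from the report or the IEA), 12 percent annual growth compounded over 2017-2024 works out to roughly a 2.2-fold increase, and a further doubling by 2030 implies a similar annual rate:

# Back-of-envelope check of the data-centre electricity figures cited above.
# The 12 percent annual growth rate (2017-2024) and the "double by 2030"
# projection come from the article; the arithmetic below is only illustrative.
growth_rate = 0.12                      # average annual growth, 2017-2024
years_observed = 2024 - 2017
cumulative = (1 + growth_rate) ** years_observed
print(f"2017-2024 cumulative growth: x{cumulative:.2f}")                  # ~x2.21

years_projected = 2030 - 2024
implied_rate = 2 ** (1 / years_projected) - 1                             # rate needed to double
print(f"Annual growth implied by doubling by 2030: {implied_rate:.1%}")   # ~12.2%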
If all this extra power came from solar and wind, CO2 emissions would not be rising.
But despite the companies' ambitious plans to source their energy from renewables, much of the electricity they consume still comes from carbon-emitting sources.
Studies estimate that half of the computing capacity of tech companies' data centres comes from subcontractors, yet many companies do not account for these emissions, the report points out.
The same is true for the entire infrastructure and equipment supply chain, which accounts for at least a third of tech companies' carbon footprint.
"There is a lot of investment in renewable energy, but overall, it has not offset the sector's thirst for electricity," Day said.
Given the status of AI as a driver of economic growth, and even as a vector for industrial policy, it is unlikely that governments are going to constrain the sector's expansion, the report noted.
"So far the whole AI boom has been altogether quite unregulated," Day said.
"There are things these companies can and will do for future proofing, to make sure they're moving in the right direction" in relation to climate goals, he added.
"But when it comes to decisions that would essentially constrain the growth of the business model, we don't see any indications that that can happen without regulatory action."
The report identifies a number of ways in which the tech sector can curb its carbon footprint, even as it develops AI apace.
Ensuring that data centres -- both those belonging to the companies and those run by third-party partners -- run on renewable electricity is crucial.
Increasing the lifespan of devices and expanding the use of recycled components for hardware production could also make a big difference.
Finally, the methods used for calculating emissions reduction targets are out of date and in need of revision, the report said.

Related Articles


Int'l Business Times, 6 hours ago
Meta Spending Big On AI Talent But Will It Pay Off?
Mark Zuckerberg and Meta are spending billions of dollars on top talent to make up ground in the generative artificial intelligence race, sparking doubt about the wisdom of the spree.

OpenAI boss Sam Altman recently lamented that Meta has offered $100 million bonuses to engineers who jump ship to Zuckerberg's company, where hefty salaries await. A few OpenAI employees have reportedly taken Meta up on the offer, joining Scale AI founder and former chief executive Alexandr Wang at the Menlo Park-based tech titan.

Meta paid more than $14 billion for a 49 percent stake in Scale AI in mid-June, bringing Wang on board as part of the deal. Scale AI labels data to better train AI models for businesses, governments and labs.

"Meta has finalized our strategic partnership and investment in Scale AI," a Meta spokesperson told AFP. "As part of this, we will deepen the work we do together producing data for AI models and Alexandr Wang will join Meta to work on our superintelligence efforts."

US media outlets have reported that Meta's recruitment effort has also targeted OpenAI co-founder Ilya Sutskever, Google rival Perplexity AI, and hot AI video startup Runway.

Zuckerberg is reported to have sounded the charge himself due to worries that Meta is lagging rivals in the generative AI race. The latest version of Meta's Llama AI model finished behind its heavyweight rivals in code-writing rankings on the LM Arena platform, which lets users evaluate the technology.

Meta is integrating recruits into a new team dedicated to developing "superintelligence," or AI that outperforms people when it comes to thinking and understanding.

Tech blogger Zvi Mowshowitz felt Zuckerberg had to do something about the situation, expecting Meta to succeed in attracting hot talent but questioning how well it will pay off. "There are some extreme downsides to going pure mercenary... and being a company with products no one wants to work on," Mowshowitz told AFP. "I don't expect it to work, but I suppose Llama will suck less."

While Meta's share price is nearing a new high and the overall value of the company is approaching $2 trillion, some investors have started to worry. Institutional investors are concerned about how well Meta is managing its cash flow and reserves, according to Baird strategist Ted Mortonson.

"Right now, there are no checks and balances" with Zuckerberg free to do as he wishes running Meta, Mortonson noted. The potential for Meta to cash in by using AI to rev up its lucrative online advertising machine has strong appeal, but "people have a real big concern about spending," said Mortonson.

Meta executives have laid out a vision of using AI to streamline the ad process from easy creation to smarter targeting, bypassing creative agencies and providing a turnkey solution to brands.

AI talent hires are a long-term investment unlikely to impact Meta's profitability in the immediate future, according to CFRA analyst Angelo Zino. "But still, you need those people on board now and to invest aggressively to be ready for that phase" of generative AI, Zino said.

According to The New York Times, Zuckerberg is considering shifting away from Meta's Llama, perhaps even using competing AI models instead. Penn State University professor Mehmet Canayaz sees potential for Meta to succeed with AI agents tailored to specific tasks on its platforms, not requiring the best large language model.
"Even firms without the most advanced LLMs, like Meta, can succeed as long as their models perform well within their specific market segment," Canayaz said.


Int'l Business Times, a day ago
AI Is Learning To Lie, Scheme, And Threaten Its Creators
The world's most advanced AI models are exhibiting troubling new behaviors -- lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models -- AI systems that work through problems step-by-step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

"O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment" -- appearing to follow instructions while secretly pursuing different objectives.

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception."

Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents -- autonomous tools capable of performing complex human tasks -- become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections.
"Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around.". Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability" - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it." Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes - a concept that would fundamentally change how we think about AI accountability. The world's most advanced AI models are exhibiting troubling new behaviors - lying, scheming, and even threatening their creators to achieve their goals AFP


Int'l Business Times, 2 days ago
Germany Warns Apple, Google on DeepSeek App Over Privacy Breaches Tied to China
A top German data protection authority is raising the alarm over DeepSeek, a Chinese AI chatbot app, for allegedly breaking European privacy laws.

On Friday, Berlin's data protection commissioner Meike Kamp urged Apple and Google to consider removing the app from their stores, accusing DeepSeek of unlawfully transferring German user data to China. "DeepSeek's transfer of user data to China is unlawful," Kamp said in a public statement, warning that Chinese authorities may have full access to user information once it reaches servers in China, Euronews reported.

Kamp also noted that DeepSeek has failed to prove that it provides the level of data protection required under the European Union's General Data Protection Regulation (GDPR). GDPR rules strictly limit how companies move personal data outside the EU: any transfer must have proper safeguards in place, such as legal agreements and data-handling standards equivalent to Europe's. Kamp claims DeepSeek has not shown that it meets those requirements.

DeepSeek Faces EU Scrutiny for Sending User Data to China

The app, created by Chinese companies Hangzhou DeepSeek and Beijing DeepSeek, gained popularity for offering a cheaper AI chatbot alternative built with fewer high-end chips. However, its growing presence in Europe has sparked privacy concerns.

According to CNBC, Kamp's office notified Apple and Google and expects both tech giants to perform a "timely review" of whether the app should remain available. If both platforms remove it, the result could be a de facto ban across the entire European Union, and potentially the UK as well.

"It is certainly possible that this incident could lead to an EU-wide ban," said Matt Holman, an AI and data lawyer at Cripps, in an email. "But regulators across the EU would need to agree before making that move official."

For now, Apple and Google have not publicly responded to the request. CNBC has reached out to both companies and DeepSeek's privacy team but has not received a reply.

This is not DeepSeek's first clash with European regulators. Earlier this year, Italy ordered DeepSeek to block its app after the company refused to comply with an official data request. Ireland also launched a separate investigation into how the app processes user information.