AI experts warn electricity costs may stunt growth
AI is set to be at the forefront of a £2bn data centre in Loughton, Essex, as well as the chancellor's plans for 'Europe's Silicon Valley' between Cambridge and Oxford.
Dr Haider Raza, a senior AI lecturer at the University of Essex, said it was a "very exciting time" for the region but stressed sustainable energy was needed for AI to flourish.
A government spokesman said it was "exploring bold, clean energy solutions" to meet its AI ambitions while aligning with the UK's net zero goals.
Prime Minister Sir Keir Starmer has outlined his vision to "unleash" AI, saying it offered "vast potential" for rejuvenating public services.
The East of England was placed at the forefront of the government's plan for AI technology.
Nscale pledged £2bn towards the Loughton data centre, due to be built by 2026.
It was also hoped development in the Oxford-Cambridge growth corridor would boost the UK economy by up to £78bn, with AI playing a catalytic role.
The plan has received backing from AstraZeneca, GSK and Astex, which is using AI to develop new cancer drugs at Cambridge Science Park.
However, Dr Raza warned that "awful" electricity costs could stunt growth and urged the government to invest in renewable energy to power AI centres.
"We have to make data centres more efficient. This point is very, very important," he told the BBC.
"Data centres are going to churn through a lot of energy, especially if they are processing too many jobs and mining large amounts of data.
"There are so many aspects we have to manage. Considering the cost of electricity, it's very challenging financially."
The energy issue could be made worse by generative AI systems, according to a study by Dr Sasha Luccioni.
The research found generative AI might use about 33 times more energy than machines running task-specific software.
Concerns have also been raised by Kenso Trabing, whose AI firm Morphware builds its computer servers in the UK but runs them at lower cost in South America.
He said the country's industrial electricity price of £350 to £400 per megawatt hour (MWh) was unattractive when compared to £35 to £40 per MWh in Paraguay.
The 35-year-old feared that, despite the UK being a leader in AI, Rachel Reeves' 'Silicon Valley' plan was unrealistic without cheaper electricity.
"High energy costs are a significant barrier to innovation because they make it too expensive to test and experiment with new technologies," he added.
"AI and blockchain projects require enormous computational power, which directly translates to high electricity consumption."
Science minister Lord Patrick Vallance visited Cambridge Science Park after Reeves chose him to lead the Oxford-Cambridge growth corridor.
He said the area could become "one of the most important innovation zones in the world".
Cambridge City Council leader Mike Davey said AI must be used responsibly and in line with the authority's green ambitions.
Concerns have been raised about the region being among the driest in the country, a factor that has previously hindered development.
"We've got to make sure water is in place for the AI data centres, and we have to make sure the electricity grid is up to scratch," Davey said.
However, he stressed AI "will be at the heart of what we do in the future".
A Department for Science, Innovation and Technology spokesman said it recognised data centres faced "sustainability challenges such as energy demands and water use".
He added: "Many newer data centres are already addressing these issues, using advanced cooling systems that significantly reduce water consumption."
Related Articles


Entrepreneur
22 minutes ago
Flexprice Raises USD500K Pre-Seed to Build Open-Source Billing Stack for AI-First Companies
Open-source billing platform Flexprice has secured USD 500,000 in a pre-seed funding round led by early-stage investor TDV Partners, with participation from prominent angel investors and operators from companies like Magicpin, Zomato, Innovaccer, and Aftershoot. Positioning itself as a modular billing solution for the emerging generation of AI and Agentic companies, Flexprice aims to eliminate the complexity and time burden of building usage-based and hybrid billing systems in-house.

As software companies shift from static subscription models to usage-based monetization to match the dynamics of AI workloads and API consumption, traditional billing tools are proving inadequate. According to Flexprice, today's AI-native teams face a bottleneck as they try to build scalable billing infrastructure that can support metered pricing, entitlement gating, and quota management.

"Today's AI and Agentic teams need to move fast as the competition on product distribution goes up. Ability to move fast with pricing and scalable billing plays a critical role," said Manish Choudhary, CEO of Flexprice. "Flexprice is built to ensure pricing, packaging and billing are never a bottleneck."

The new funding will be used to grow Flexprice's engineering team, integrate with widely used payment gateways such as Stripe, Adyen, and Razorpay, and expand its open-source offerings. The platform supports a variety of pricing models, from pay-as-you-go to volume-based tiers, and includes developer-first APIs, real-time analytics, and self-hostable architecture for transparency.

"We believe open infrastructure is the future," said Nikhil Mishra, CTO of Flexprice. "Our goal is to make modern billing accessible, composable, and cost-effective, whether you're an early-stage AI startup or a scaling business."

Flexprice is targeting a USD 4 billion total addressable market (TAM) in AI billing infrastructure, expected to grow at 20 per cent CAGR, fueled by the proliferation of GenAI tools, API-based services, and real-time data platforms. The founding team comprises former product and engineering leaders from AI and consumer tech firms and is already supporting early-stage ventures in LLM tooling, AI search, and analytics infrastructure.

Commenting on the investment, Ujwal Sutaria, General Partner at TDV Partners, said, "We believe Flexprice is solving a fundamental infrastructure gap in the monetization stack for AI and Agentic companies. The team's open-source-first approach, deep developer empathy, and modular product vision give them a unique edge in a rapidly expanding market."
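To make the usage-based model concrete, the minimal Python sketch below shows graduated (tiered) metering of the kind such billing platforms support. The tier boundaries, unit prices, and function names are illustrative assumptions for this article only, not Flexprice's actual API or data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tier:
    up_to: Optional[float]   # cumulative units covered by this tier; None = unlimited
    unit_price: float        # price per unit within this tier

def tiered_charge(units: float, tiers: list[Tier]) -> float:
    """Charge metered usage against graduated tiers (cheaper rates at higher volume)."""
    total, remaining, floor = 0.0, units, 0.0
    for tier in tiers:
        band = remaining if tier.up_to is None else min(remaining, tier.up_to - floor)
        if band <= 0:
            break
        total += band * tier.unit_price
        remaining -= band
        if tier.up_to is not None:
            floor = tier.up_to
    return round(total, 2)

# Illustrative plan for an AI API metered per request, with volume discounts.
plan = [
    Tier(up_to=1_000, unit_price=0.02),    # first 1,000 units at $0.02 each
    Tier(up_to=10_000, unit_price=0.015),  # next 9,000 units at $0.015 each
    Tier(up_to=None, unit_price=0.01),     # everything beyond at $0.01 each
]
print(tiered_charge(25_000, plan))  # 20 + 135 + 150 = 305.0
```

Entitlement gating and quota management sit on top of the same usage meter: the running total is checked against a plan limit before each request is served, rather than only at invoice time.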


Forbes
23 minutes ago
Banning Evildoers From Using AGI And AI Superintelligence Is Going To Be Fiendishly Implausible
Will a ban on access to AGI and ASI be sufficient to keep bad actors from using AI for criminal acts?

In today's column, I examine the ardent belief that evildoers and other bad actors will need to be banned from using artificial general intelligence (AGI) and artificial superintelligence (ASI). The reason they would be banned is to prevent them from using AGI and ASI for nefarious purposes. Think of it this way. If they gain access to such pinnacle AI, they presumably could use the immense intellectual prowess of the AI to devise all manner of criminal plans and insidious plots. A notable question arises regarding the real-world feasibility of implementing such a ban. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Uses Of AGI And ASI

I've previously discussed how AGI and ASI will undoubtedly enable humans to create new inventions that will bring astounding benefits to humanity; see the link here. By tapping into the intellectual powerhouse imbued in AI, it will be possible to have the AI devise new kinds of machines, chemicals, devices, etc. Happy face.

But this isn't a free lunch. The downside is that AI can produce new capabilities that vociferously endanger humankind. AI is considered a dual-use form of innovation. The AI can be used for uplifting purposes, but it can also be used for wicked purposes. Even something that seems innocuous can be troubling. For example, researchers showcased that an AI system for detecting and protecting us from toxic poisons could easily be adjusted to craft new toxic mixtures that can wipe us out (see my coverage at the link here).

Users of AI can opt to instruct the AI toward good or upstanding uses. Likewise, users of AI can steer AI toward unsavory and malevolent uses.

AI Contains Topmost Criminality

Imagine the volume of users that will likely end up using pinnacle AI. We can reasonably assume that much of the world will be eager to use AGI and ASI. Perhaps billions upon billions of people might be accessing AGI and ASI on a routine basis. The populace at large would do so for everyday tasks and as a handy online intellectual partner that is usable anywhere and at any time.

Among the billions of users that are going to be using AGI and ASI, there will certainly be some that have evil intentions. They are eager to have the pinnacle AI be their partner in crime. In a sense, the world has handed criminals the best tool ever devised for planning out and committing crimes. Pinnacle AI is bound to be a huge boon to those seeking insights on how to undertake criminal acts.

Why would AGI and ASI somehow be able to help criminals? Because the AI has become highly versed in crime by having scanned the written works of humanity that depict criminal efforts. During the data training of the AI, the odds are that all sorts of books, stories, narratives, and the like that involve crime will be patterned by AI. These include ways to commit crimes. These include how police and other authorities catch criminals. On and on it goes.

By having computationally pattern-matched on the treasure trove of criminal endeavors, AGI and ASI will be essentially masterminds at crime. The AI can do a bang-up job of devising crimes that will be extraordinarily hard to stop. The crimes would be exceedingly difficult to detect. The crimes would be of the utmost criminal nature. All of that is waiting for bad actors to tap into whenever they please. Not good.

AI Keeping Its Mouth Shut

One proposed solution is to instruct AGI and ASI ahead of time that any questions about crimes or the committing of crimes are to be summarily rejected. It goes like this. A user asks how to break into Fort Knox and steal all the gold that's in there, which is a storyline that was featured in the popular James Bond movie Goldfinger. The AI computationally analyzes the request and ascertains that this is a request that bodes for a criminal act. Ergo, the AI tells the user that the AI will not give them an answer to their question. Period, end of story.

Well, maybe not. The problem with this type of refusal is that a cat-and-mouse gambit is likely to ensue. A user with evil intentions isn't going to just give up trying to ask these kinds of sinister questions. The person will try a different slant to get what they want to know. For instance, the person might ask the AI to examine all known stories about breaking into Fort Knox and provide a summary that depicts the best and worst ways to do so. The AI might be misled into performing this insightful analysis. The person could tell the AI that it is a research project and assure the AI that nothing untoward is in hand.

You might be thinking that supersharp AI won't fall for such an obvious ruse. That's not as straightforward as it seems. You see, the difficulty involved is that the inquiry might indeed be a legitimate one. A person might be genuinely doing research on this topic and aiming to aid in bolstering the defenses of Fort Knox. Their intentions are of the highest and purest order.

Ban Particular Users

If trying to prevent the AI from spilling the beans on criminal plotting is going to be a herculean challenge, another angle would be to disallow evildoers from using AGI and ASI altogether. The moment a bad actor attempts to log into pinnacle AI, they are instantly disallowed. They can't get in. Thus, they are thwarted in their villainous intentions to use AGI and ASI to be criminal collaborators. This presents a big challenge, namely, how would AI be able to discern which users are to be disallowed?

Some have suggested that an international watchlist for AGI and ASI should be crafted that would contain all known terrorists, convicted criminals, and other bad actors. The watchlist would be maintained by selected governmental authorities across the globe. Perhaps the United Nations would be enlisted in this effort (for a recent status update about the U.N. concerning AGI and ASI, see my discussion at the link here).

This watchlist would also function somewhat akin to a no-fly list for the airlines. If someone who wasn't on the watchlist did something untoward while using AGI and ASI, they would be henceforth placed on the watchlist. That's like getting onto an airplane and causing a disturbance. After the matter is concluded, the person gets placed on a no-fly list.

False Positives And False Negatives

There are numerous objections to the watchlist approach. Suppose that a person is overtly placed on the watchlist, but they weren't given any semblance of due process. They might be innocent. They have perhaps unfairly been denied access to AGI and ASI. That's nothing to sneeze at. People without such access will likely be at an ongoing disadvantage in comparison to those who do have access. Other people with AGI and ASI access can readily outsmart the person who was unfairly denied access.

Meanwhile, there might also be people who should be on the watchlist but aren't added to the list. They manage to slip between the cracks. By some sneaky shenanigans, they avoid being disallowed from using AGI and ASI.

The crux is that there will be a slew of false positives and false negatives. People who are unfairly on the watchlist will need to expend time, energy, and potentially lots of money to get themselves off the watchlist. People who ought to be on the watchlist might skate free. All in all, this likely colossal watchlist will entail a cumbersome bureaucracy of deciding who gets on it, who gets off the list, and ultimately determines the fate of people worldwide as to having access to the pinnacle AI. Worries are that it will be an atrocious boondoggle.

Getting Around The Ban Anyway

There are even more concerns regarding an AGI and ASI access ban. The crux of banning someone is that you must be absolutely sure you can identify the person that is being banned. How will this be accomplished? Will people need to use their fingerprints or some other biometric metric to attest to their identity? If so, this implies that AGI and ASI will end up with a definitive identity list for nearly the entire world population, since we are assuming that most of the globe will want to use the pinnacle AI. That smacks of a frightening Big Brother possibility. We will have AGI and ASI that have the identity of every person. Might the AI opt to use that list in ways we don't anticipate or intend? It seems like questionable practice and is quite troublesome.

Crooks won't likely be stopped by these security efforts. For example, they could hire or threaten someone who isn't on the watchlist to use AI for a nefarious purpose, doing so on behalf of the crook. This is known as a straw user. The straw user might proceed and get AGI and ASI access. At some point, perhaps the straw user gets caught and is added to the watchlist. No problem. The evildoers find someone else to take the straw man role. There is a nearly infinite supply of people who might be paid off to do this or that will succumb to threats to do so.

The Mess Of A Ban

Additional qualms arise about a ban.

Black markets will almost certainly appear because of the ban. This will enable the type of straw man circumstances to go underground and work at a global scale. It becomes an enterprise of significant magnitude.

Another ominous concern is that people might be placed on the watchlist for improper reasons. Imagine that someone is considered politically unpalatable in their country. Perhaps they are being politically persecuted. A handy move by their opponents would be to get their name added to the AGI and ASI access watchlist. Voila, the person no longer has AI as a tool to deal with their political persecution.

The ethical and legal dimensions are enormous. The act of determining who is banned and on what basis opens the door to abuse, discrimination, and authoritarian overreach. A nearly endless tussle will result.

Something Must Be Done

Despite all the gotchas and downsides, the basis for a ban has merit, in theory. We can't just let people dive into AGI and ASI as their criminal companion. The aim would be to have the AI be astute enough not to fall into the hands of aiding and abetting criminals. That's a tall order and will undoubtedly have lots of holes and be a tough row to hoe.

What do you think of the idea now being floated about banning certain users from accessing AGI and ASI? Nothing has been settled yet. You can participate in what the future will hold. Be active and get involved in helping to see if we can shape AGI and ASI toward the betterment of humanity and not be an evildoer's instrument.

As per the famous words of Albert Einstein: 'The world is a dangerous place to live; not because of the people who are evil, but because of the people who don't do anything about it.' We must assume that evil people will have evil intentions when using AGI and ASI. If we do nothing about this, we are faced with a double-whammy of AI avidly turned to evildoing by evildoers. Let's do something and make sure it is the right thing.


Business Insider
an hour ago
AI-Powered Ads Set to Catalyze Yet Another META Earnings Beat
I've been bullish on Meta Platforms (META) for years, and since it is now my largest holding by far, I am particularly excited about its Q2 results, scheduled for release after tomorrow's market close. After a fantastic Q1 that crushed expectations in late April, Meta's stock has climbed above $700 per share; yet, I believe the stock remains a bargain, given its AI-fueled growth and overall investments to secure dominance in AI.

For its upcoming results, investors will be eager to see if Meta can maintain its momentum, and given the company's relentless focus on maximizing monetization potential and advertising efficiency, I feel this is going to be another blockbuster quarter. The stock also appears reasonably valued to this day despite the recent share price gains. Thus, I remain firmly Bullish on the stock.

Q1 Recap: AI and User Engagement Power Record Results

To get a sense of where Meta's coming from heading into its Q2 results, keep in mind that Q1 was nothing short of spectacular, with revenue soaring to $42.3 billion, up 16% YoY, while beating estimates by nearly $1 billion. The company's Family Daily Active People (DAP) hit 3.43 billion, up 6%, showcasing sticky user engagement across Facebook, Instagram, and WhatsApp. AI-driven content recommendations fueled a 5% rise in ad impressions and a 10% increase in average ad prices, with Instagram Reels alone posting 20% year-over-year growth. In the meantime, Meta AI, approaching 1 billion monthly active users and over 3 billion across its app suite, has become a cornerstone of personalized content delivery, enhancing engagement and ad performance.

Profitability was equally impressive, with Meta's operating margin expanding to 41% from 38% last year, driven by cost discipline and economies of scale within the Family of Apps segment. Despite Reality Labs posting a $4.2 billion operating loss, the core ad business generated $21.8 billion in operating income, powering a 35% surge in net income to $16.6 billion and a 37% jump in EPS to $6.43, well ahead of Wall Street's $5.25 forecast. One notable contributor here was Meta's investment in AI infrastructure, including models like Llama, which continues to optimize ad delivery and user retention, setting the stage for sustained growth without compromising gross margins.

What Investors Should Watch Out for in Q2

As Meta heads into its Q2 earnings, Wall Street appears to be filled with optimism, as evidenced by the share price; yet, I would argue that expectations are tempered given the rather conservative estimates. Specifically, consensus projects Q2 revenue of $44.79 billion, only a 14.6% YoY increase, while EPS is forecast at $5.86, reflecting 13.5% growth over Q2 of 2024. Now, these figures do align with Meta's guidance of $42.5-$45.5 billion in revenue, supported by a 1% foreign currency tailwind. However, they are pretty conservative in my view, given Meta's ongoing momentum, as well as the fact that Meta has consistently beaten its outlook. In fact, Meta has beaten EPS and revenue estimates nine times in a row and is odds-on to make it ten out of ten this week.

Regardless, I will be looking for progress on several key areas. First, the impact of AI on ad performance, primarily through tools like Advantage+ and the subsequent effect on conversions. Second, engagement metrics, especially time spent on Instagram and Facebook, will signal whether Meta's recommendation systems are keeping users increasingly engaged. Third, I will be checking for updates on WhatsApp monetization, with its 100 million business users that could unlock significant revenue potential. Finally, capital expenditure guidance, expected to be $64-$72 billion for 2025, will be scrutinized as Meta ramps up AI infrastructure investments.

Valuation: Still a Bargain Despite the Run-Up

While entering an earnings report following a rally can raise caution, I believe Meta's valuation still presents a compelling opportunity. At approximately 28x Wall Street's FY2025 EPS estimate of $25.73, the stock looks attractively priced for a company with a track record of 35%+ annual EPS growth, including 37% growth in Q1 alone. According to TipRanks data, META's profit margin has climbed consistently from just above 12% in Q4 2022 to over 36% today. My own forecast places 2025 EPS in the $29-$30 range, supported by continued ad strength, AI-driven efficiencies, and expanding margins. Even based on the Street's more conservative $25.47 estimate, Meta's forward P/E remains below that of peers like Microsoft and Amazon, despite outpacing Apple and Alphabet in earnings growth.

Is META a Good Stock to Buy Now?

Wall Street remains quite optimistic on Meta, with the stock carrying a Strong Buy consensus rating based on 41 Buy and four Hold recommendations over the past three months. Notably, not a single analyst rates the stock a Sell. However, META's average stock price target of $761.55 suggests a somewhat constrained 6.12% upside from current levels.

Meta's AI-Powered Dominance Set to Continue

All things considered, Meta continues to execute at an elite level, with strong fundamentals, accelerating AI tailwinds, and a clear path to monetization across its core platforms. While expectations for Q2 are modest, I see plenty of room for upside given the company's track record of consistent outperformance. Between robust engagement, ad efficiency gains, and compelling valuation, I view Meta as one of the best opportunities in large-cap tech today. I'll be watching closely on Wednesday, but my conviction remains Bullish heading into the big announcement tomorrow afternoon.
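As a quick sanity check on the valuation math quoted above, the sketch below backs an implied share price out of the stated 28x forward multiple and the quoted price target. The computed prices are inferences from the article's own figures, not market quotes, and the EPS scenarios are the author's stated range.

```python
# Back-of-the-envelope check of the valuation figures cited in the article.
fy2025_eps_street = 25.73      # Wall Street FY2025 EPS estimate quoted above
forward_pe_quoted = 28         # "approximately 28x" per the article
avg_price_target = 761.55      # average analyst price target quoted above
stated_upside_pct = 6.12       # upside implied by that target, per the article

# Share price implied by the quoted multiple (an inference, not a market quote).
implied_price = forward_pe_quoted * fy2025_eps_street
print(f"Implied share price at 28x: ${implied_price:,.0f}")          # ~$720

# Price consistent with the quoted target and stated upside.
price_from_target = avg_price_target / (1 + stated_upside_pct / 100)
print(f"Price implied by target and upside: ${price_from_target:,.2f}")  # ~$718

# Forward P/E under the author's own $29-$30 EPS scenarios.
for eps in (29, 30):
    print(f"Forward P/E at ${eps} EPS: {implied_price / eps:.1f}x")   # ~24-25x
```

Both routes land near $720 per share, which is why the multiple compresses to roughly 24-25x if the author's higher EPS forecast plays out.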