
Energy sector set to discuss how National Grid can meet AI demand
The AI Energy Council is set to discuss how much power will be needed to cover the increase in computing capacity expected over the next five years as the AI sector grows.
The group is made up of energy providers, tech companies and the energy regulator Ofgem, and will be chaired by Energy Secretary Ed Miliband and Tech Secretary Peter Kyle.
Sectors looking to adopt AI, and the impact those changes could have on energy demand, are also expected to be up for discussion as the group tries to prepare the energy system for the future.
Tech Secretary Mr Kyle said that ministers are putting 'British expertise at the heart of the AI breakthroughs which will improve our lives'.
He added: 'We are clear-eyed though on the need to make sure we can power this golden era for British AI through responsible, sustainable energy sources. Today's talks will help us drive forward that mission, delivering AI infrastructure which will benefit communities up and down the country for generations to come without ever compromising on our clean energy superpower ambitions.'
Earlier this month Sir Keir Starmer said that the UK must persuade a 'sceptical' public that AI can improve lives and transform the way politics and businesses work.
In a speech in London, the Prime Minister acknowledged people's concern about the rapid rise of AI technology and the risk to their jobs but stressed the benefits it would have on the delivery of public services, automating bureaucracy and allowing staff such as social workers and nurses to be 'more human'.

Related Articles


Daily Record
20 minutes ago
Russia demands UK stop training Ukrainian troops as Putin aide issues ultimatum
Moscow has claimed Britain is directly complicit in the conflict due to its training programme. Russia has issued a stark ultimatum to the UK: stop all military training and arms supplies to Ukraine or the war will not end.

Rodion Miroshnik, a senior envoy in Vladimir Putin's foreign ministry, said that continued Western support, including the UK's long-running Operation Interflex, amounts to direct involvement in the conflict. Moscow insists the programme, which has trained tens of thousands of Ukrainian soldiers on British soil, must be shut down, reports the Mirror.

'The participation or complicity of other countries is a key issue that must be stopped in all forms, including weapons deliveries and the training of Ukrainian militants,' Miroshnik told pro-Kremlin outlet Izvestia. 'Halting these programmes would be a signal of willingness to seek a resolution.'

The warning came as Ukraine suffered its heaviest aerial bombardment of the war so far, with 537 Russian strikes recorded over the weekend. One of the attacks saw the downing of an F-16 fighter jet, killing pilot Lt-Col Maksym Ustymenko. President Volodymyr Zelensky posthumously awarded him the Hero of Ukraine honour, calling him 'one of our very best'.

Zelensky last week joined UK Prime Minister Sir Keir Starmer to inspect Ukrainian troops in Britain, reinforcing the strong defence partnership between the two nations. That show of unity has drawn anger in Moscow, where officials doubled down on their claims that Western aid, including training and weapons, prolongs the war and escalates hostilities.

Konstantin Kosachev, deputy speaker of Russia's upper house, said: 'Any aid that helps Ukraine continue fighting or preparing terrorist operations clearly does not promote conflict resolution. It is unequivocally hostile to Russia.' He added: 'This is a clear campaign against everything Russian, a full display of militarism. Ukrainians no longer have agency. They are being used as tools for NATO's strategic aims.'

Oleg Karpovich, vice-rector of Moscow's Diplomatic Academy, went further, accusing Britain of having a hand in the deaths of Russian troops. 'In practice, they are participating in the killing of our citizens while coordinating terrorist attacks by the Kyiv regime,' he claimed.

Despite Russia's call for an end to military aid to Ukraine, it maintains its own heavy military operations, insisting its aim remains the 'demilitarisation' of Ukraine.

The Kremlin's demands came just as signs emerged that US President Donald Trump, long accused of being soft on Putin, may be shifting his stance. Republican Senator Lindsey Graham revealed that Trump had given the green light for a tough sanctions bill targeting Russia's economy.

'For the first time yesterday the president told me... "it's time to move your bill",' Graham told ABC News. The legislation would slap a 500% tariff on goods from countries that buy Russian energy but do not support Ukraine, a direct swipe at China and India, which currently buy the lion's share of Putin's oil exports. Graham said the bill is designed to 'crush' Russia's war machine by cutting off its funding.

Whether Trump follows through remains to be seen, but Moscow's threats and Washington's shift mark a new flashpoint in the drawn-out war.


The Guardian
23 minutes ago
Microsoft says AI system better than doctors at diagnosing complex health conditions
Microsoft has revealed details of an artificial intelligence system that performs better than human doctors at complex health diagnoses, creating a 'path to medical superintelligence'.

The company's AI unit, which is led by the British tech pioneer Mustafa Suleyman, has developed a system that imitates a panel of expert physicians tackling 'diagnostically complex and intellectually demanding' cases.

Microsoft said that when paired with OpenAI's advanced o3 AI model, its approach 'solved' more than eight of 10 case studies specially chosen for the diagnostic challenge. When those case studies were tried on practising physicians, who had no access to colleagues, textbooks or chatbots, the accuracy rate was two out of 10.

Microsoft said it was also a cheaper option than using human doctors because it was more efficient at ordering tests. Despite highlighting the potential cost savings from its research, Microsoft played down the job implications, saying it believed AI would complement doctors' roles rather than replace them.

'Their clinical roles are much broader than simply making a diagnosis. They need to navigate ambiguity and build trust with patients and their families in a way that AI isn't set up to do,' the company wrote in a blogpost announcing the research, which is being submitted for peer review.

However, using the slogan 'path to medical superintelligence' raises the prospect of radical change in the healthcare market. While artificial general intelligence (AGI) refers to systems that match human cognitive abilities at any given task, superintelligence is an equally theoretical term referring to a system that exceeds human intellectual performance across the board.

Explaining the rationale behind the research, Microsoft questioned the significance of AI scoring exceptionally well in the United States Medical Licensing Examination, a key test for obtaining a medical licence in the US.
It said the multiple-choice tests favoured memorising answers over deep understanding of a subject, which could help 'overstate' the competence of an AI model.

Microsoft said it was developing a system that, like a real-world clinician, takes step-by-step measures, such as asking specific questions and requesting diagnostic tests, to arrive at a final diagnosis. For instance, a patient with symptoms of a cough and fever may require blood tests and a chest X-ray before the doctor arrives at a diagnosis of pneumonia.

The new Microsoft approach uses complex case studies from the New England Journal of Medicine (NEJM). Suleyman's team transformed more than 300 of these studies into 'interactive case challenges' that it used to test its approach.

Microsoft's approach used existing AI models, including those produced by ChatGPT's developer, OpenAI, Mark Zuckerberg's Meta, Anthropic, Elon Musk's Grok and Google's Gemini. Microsoft then used a bespoke, agent-like AI system called a 'diagnostic orchestrator' to work with a given model on what tests to order and what the diagnosis might be. The orchestrator in effect imitates a panel of physicians, which then comes up with the diagnosis.

Microsoft said that when paired with OpenAI's advanced o3 model, the approach 'solved' more than eight of 10 NEJM case studies, compared with a two out of 10 success rate for human doctors.

Microsoft said its approach was able to wield a 'breadth and depth of expertise' that went beyond individual physicians because it could span multiple medical disciplines. It added: 'Scaling this level of reasoning – and beyond – has the potential to reshape healthcare. AI could empower patients to self-manage routine aspects of care and equip clinicians with advanced decision support for complex cases.'

Microsoft acknowledged its work is not ready for clinical use. Further testing is needed on its 'orchestrator' to assess its performance on more common symptoms, for instance.


Coin Geek
26 minutes ago
Has AI innovation hit a wall?
It feels like artificial intelligence (AI) has hit a plateau. The creators of AI models don't seem to be making progress as quickly as before. Many of the products they promised were overhyped and underdelivered, and consumers aren't quite sure what to do with generative AI beyond using it as a replacement for traditional search engines. If it hasn't already, AI looks like it is exiting its early-stage growth phase and entering a period of stagnation.

AI's explosive growth from 2022 to 2024

From November 2022 to the end of 2024, new developments in artificial intelligence arrived rapidly. ChatGPT launched in November 2022. Four months later, we got GPT-4. Two months after that, OpenAI added Code Interpreter and Advanced Data Analysis. At the same time, significant advances took place in text-to-image and text-to-video generation. New releases seemed to drop every 30 to 120 days at OpenAI, and its competitors moved in lockstep, probably out of fear of falling behind.

With all of that wind in their sails, companies began making big promises: autonomous AI agents that could plan, reason, and complete complex tasks end to end without a human in the loop; creative AI that would replace marketers, designers, filmmakers, and songwriters; AI that would replace entire white-collar job categories. Most of those promises still haven't materialized, and where they have, the results have been lackluster.

Why AI innovation is slowing down

The problem isn't just that AI agents and automated workforces were underdelivered; these unimpressive products are symptoms of a much bigger problem. Innovation in the AI industry is slowing down, and the leading companies building these tools seem lost. Not every product released between 2022 and 2024 was revolutionary.
Many of the updates during this period probably went unused by everyday consumers, because most people still only use AI as an alternative to a search engine, or, as some are beginning to call it, an 'answer engine', the next iteration of the search engine. That is a valid use case, but tech giants clearly have a much grander vision for AI.

One thing that may be holding them back, and one reason the more hyped-up products have struggled in the market, is a classic problem in highly technical industries: brilliant engineers build tools and products that only other brilliant engineers know how to leverage, forgetting to make them usable for the much larger population of users who aren't brilliant engineers. In this case, that means general users, the audience that arguably made AI mainstream back in 2022.

Even the stagnation in AI products, however, is a trickle-down effect of a bigger problem in how AI models are trained. The biggest AI labs have been obsessively improving their underlying models. At first, those improvements made a big, noticeable difference from version to version. But now we have reached the point of diminishing returns in model optimization, and each upgrade seems less noticeable than the last.

One of the leading theories is that the AI labs are running out of high-quality, unique data on which to train their models. They have already scraped what we can assume to be the entire internet, so where will they go next for data, and how will the data they obtain differ from the data their competitors are trying to get their hands on? Before hitting this wall, the formula for success in AI models was simple: feed large language models more internet data and they get better. However, the internet is a finite resource, and many AI giants have exhausted it.
On top of that, when everyone trains on the same data, no one can pull ahead. And if you can't get new, unique data, you can't keep making models significantly better through training alone. That is the wall many of these companies have run into. It's important to note that the incremental improvements still being made to these models matter, even though their returns are diminishing: they are less impactful than the leaps of the past, but they still need to happen if the AI products of the future we have been promised are ever to be delivered.

Where AI goes from here

So how do we fix this? What's missing is attention to consumer demand at the product level. Consumers want AI products and tools that solve real problems in their lives, are intuitive, and can be used without a STEM degree. Instead, they have received products that don't seem production-ready, such as agents with vague use cases that feel more like experiments than products. Products like this seem built for no one in particular and are hard to use, which may be why they have struggled to pick up adoption.

Until something changes, AI will likely stay stuck in a holding pattern. Whether the breakthrough comes from better training data, new ways of exploiting existing data, or a standout consumer product that finally catches on, something will have to change. From 2022 to 2024, AI seemed to leap ten steps forward every four months; in 2025, it is inching forward one small step at a time, and much less frequently.

Unfortunately, there is no quick fix here, but a solid consumer-facing product would be low-hanging fruit. If tech giants spent less time chasing futuristic-sounding, general-purpose AI products and more time delivering narrow, high-impact tools that people can use right out of the box, they would see more success.
In the long run, though, some major advance will be needed to solve the data drought we are currently in, whether that means companies finding new, exclusive sources of training data or finding ways for models to make more of the data they already have.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.