
Meta to pay $1 million to bolster UK government's AI workforce
Through the new 'Open-Source AI Fellowship,' 10 fellows will work with the UK government for one year to build AI tools for 'high-security use cases' in the public sector, such as language translation for national security or using construction data to speed up approval processes to build more homes.
The fellows could also work on 'Humphrey,' a suite of AI-powered tools for civil servants that helps them deliver effectively on ministers' requests, such as summarising documents and consultations, and taking notes.
The programme could also see fellows using Meta's Llama 3.5 AI model to create new tools that could unblock planning delays, boost national security, or reduce the cost to integrate AI throughout the government.
Meta will issue the $1 million grant to the Alan Turing Institute, and fellows will then be placed in the UK government.
'This Fellowship is the best of AI in action – open, practical, and built for public good. It's about delivery, not just ideas – creating real tools that help government work better for people,' Peter Kyle, the UK's technology secretary, said in a government release.
The UK government is already testing an AI tool for public services called Caddy, an open-source assistant used at Citizens Advice centres. It gives users of a government call service advice on common questions about managing debt, getting legal help, or knowing their rights as a consumer.
The fellowships will begin in January 2026, and all of the initiatives developed by the engineers will be open-source and available for public use.
The announcement comes in the same week as another agreement struck between the UK government and Google Cloud that aims to upskill 100,000 civil servants in tech and AI by 2030. The goal of that programme is to have at least one in every 10 government officials be tech experts.
Related Articles


Euronews – 2 days ago
One year since the CrowdStrike global outage. What has changed since?
One year ago, a faulty update from a cybersecurity firm took down hospitals, airlines, banks, and government offices around the world.

On July 19, 2024, CrowdStrike pushed an update to its Falcon program, used by Microsoft Windows computers to collect data on potential new cyberattack methods. The routine operation turned into a 'Blue Screen of Death' (BSOD) for roughly 8.5 million Microsoft users in what many considered one of the largest internet outages in history. The fallout meant significant financial losses for CrowdStrike's customers, estimated at around $10 billion (€8.59 billion).

"There were no real warning signs that an incident of this nature was likely," Steve Sands, fellow of the Chartered Institute for IT, told Euronews Next. "Most organisations that rely on Windows would have had no planning in place to cater for such an event".

But what did CrowdStrike learn from the outage, and what can other companies do to avoid the next one?

'Round-the-clock' surveillance of IT environment needed

A year after the CrowdStrike incident, outages at banks and 'major service providers' suggest that the cybersecurity community hasn't changed much, according to Eileen Haggerty, vice president of product and solutions at cloud security company NETSCOUT.

So far this year, a cloud outage at Cloudflare brought down Google Cloud and Spotify in June, changes to Microsoft's Authenticator app led to an outage for thousands using Outlook or Gmail in July, and a software flaw at SentinelOne deleted the critical networks needed to keep its programs running.

Haggerty said that companies need visibility into possible software problems before they happen, through 'round-the-clock monitoring' of their networks and their entire IT environment. She suggests that IT teams conduct 'synthetic tests,' which simulate how a site would handle real traffic before a critical function fails.
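For illustration only, a synthetic test of the kind Haggerty describes can be as simple as a scheduled probe that requests a critical endpoint the way a user would and flags slow or failed responses. The sketch below is a generic example in Python, not NETSCOUT's or CrowdStrike's actual tooling; the URL, timeout, and latency threshold are assumptions.

```python
import time
import urllib.request


def synthetic_check(url: str, timeout: float = 5.0, max_latency: float = 2.0) -> dict:
    """Probe an endpoint like a real user and report success, status and latency."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception as exc:  # DNS failure, timeout, HTTP error, malformed URL...
        return {"url": url, "ok": False, "error": str(exc)}
    latency = time.monotonic() - start
    # The check fails if the response is too slow, even when the status is healthy:
    return {
        "url": url,
        "ok": status == 200 and latency <= max_latency,
        "status": status,
        "latency_s": round(latency, 3),
    }
```

Run on a schedule against every critical endpoint, a fleet of such probes approximates the 'round-the-clock monitoring' Haggerty recommends, surfacing degradation before users notice it.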
Such tests would provide companies 'with the vital foresight they need to anticipate issues before they even have a chance to materialise,' she added.

In a blog post, Microsoft said that synthetic monitoring is not airtight and is not always 'representative of the user experience,' because organisations often push new releases, which can make the whole system unstable. The blog post added that synthetic monitoring can improve the response time to fix a mistake once it is spotted.

After an outage happens, Haggerty also suggests building a detailed repository of information about why the incident happened, so companies can anticipate potential challenges before they become an issue. Sands said these reports should include plans for resilience and recovery, along with an evaluation of where the company relies on external companies.

Any company looking to build in "resilience" should do so as early as possible, since it is difficult for resilience to be "bolted on later," he said.

"Many companies will have updated their incident response plans based on what happened," Sands said. "However, experience tells us that many will already have forgotten the relatively short-term impact and chaos caused and will have done little or nothing".

Nathalie Devillier, an expert at the EU European Cyber Competence Centre, told Euronews last year that European cloud and IT security providers should be based on the same continent. "Both should be in the European space so as not to rely on foreign technology solutions that, as we can see today, have impacts on our machines, on our servers, on our data every day," she said at the time.

What has CrowdStrike itself done since the outage?

CrowdStrike said in a blog post this month that it has developed a self-recovery mode that can 'detect crash loops and … transition systems into safe mode' by itself.
There is also a new interface that gives the company's customers greater flexibility in testing system updates, such as setting different deployment schedules for test systems and critical infrastructure so that updates do not hit both at the same time. A content pinning feature also lets customers lock specific versions of their content and choose when and how updates are applied.

CrowdStrike also now has a Digital Operations Center that it says gives the company 'deeper visibility and faster response' across the millions of computers using its technology worldwide. It also conducts regular reviews of its code, quality processes, and operational procedures.

'What defined us wasn't that moment, it was everything that came next,' George Kurtz, the CEO of CrowdStrike, said in a LinkedIn post this week, noting that the company is now 'grounded in resilience, transparency and relentless execution'.

While CrowdStrike has made some changes, Sands believes it might be "an impossible ask" to avoid another outage at the same level, because computers and networks "are by their nature highly complex with many dependencies".

"We can certainly improve the resilience of our systems from an architecture and design perspective ... and we can prepare better to detect, respond and recover our systems when outages happen," he said.


Euronews – 3 days ago
Meta rebuffs EU's AI Code of Practice
US social media company Meta will not sign the EU's AI Code of Practice on General Purpose AI (GPAI), the company's Chief Global Affairs Officer Joel Kaplan said in a statement on Friday.

'Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for GPAI models and Meta won't be signing it,' he said, adding that the Code 'introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.'

The Commission last week released the Code, a voluntary set of rules touching on transparency, copyright, and safety and security issues, aiming to help providers of AI models such as ChatGPT and Gemini comply with the AI Act. Companies that sign up are expected to be compliant with the Act and can anticipate more legal certainty, while others will face more inspections.

The AI Act's provisions affecting GPAI systems enter into force on 2 August. It will take another two years before the AI Act, which regulates AI systems according to the risk they pose to society, becomes fully applicable.

OpenAI, the maker of ChatGPT, has said it will sign up to the Code once it's ready.

Criticism from tech giants

The drafting process of the Code was criticised by Big Tech companies as well as CEOs of European companies, who claimed they needed more time to comply with the rules.

'We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them,' Kaplan said.

The Code requires sign-off by EU member states, which are represented in a subgroup of the AI Board, as well as by the Commission's own AI Office. The member states are expected to give a green light as early as 22 July, and the EU executive said it will publish the list of signatories on 1 August.
On Friday the Commission published further guidance to help companies comply with the GPAI rules.


France 24 – 4 days ago
Zuckerberg settles lawsuit over Cambridge Analytica scandal
A trial over the long-running case had just begun on Wednesday, with defendants accused of overpaying the US government in 2019 when they engineered a $5 billion settlement for alleged privacy violations in the scandal.

Sources familiar with the matter confirmed the settlement to AFP, without providing details. A spokesman for Meta, the parent company of Facebook, declined to comment. Lawyers for the defendants and shareholders didn't immediately return requests for comment.

The settlement comes the same day that Marc Andreessen, one of Silicon Valley's most influential venture capitalists and a Meta board member, was scheduled to take the stand. Zuckerberg himself was expected in the Wilmington, Delaware, courtroom on Monday. Silicon Valley investor Peter Thiel and former Meta top executive Sheryl Sandberg -- both former board members -- were also expected to face questioning in court.

Cambridge Analytica was a political consulting firm found to have improperly accessed personal data from millions of Facebook users for targeted political advertising, particularly during the 2016 US election and the Brexit referendum. The scandal thrust Facebook, and Zuckerberg in particular, into a political firestorm, leading to major regulatory changes and public scrutiny of tech companies' data practices.

The shareholders in the lawsuit alleged that the board members conspired to pay more to the US government in exchange for ensuring that Zuckerberg would not be named personally for wrongdoing in the settlement.

High-profile case

Longtime observers of the company had hoped the trial would expose inside details of how Zuckerberg and the Facebook executives handled the scandal.

"This settlement may bring relief to the parties involved, but it's a missed opportunity for public accountability," said Jason Kint, the head of Digital Content Next, a trade group for content providers.
He worried that Meta "has successfully remade the 'Cambridge Analytica' scandal about a few bad actors rather than an unraveling of its entire business model of surveillance capitalism and the reciprocal, unbridled sharing of personal data."

Zuckerberg was under huge pressure at the time from US and European lawmakers amid widespread allegations that Russia and other bad actors were weaponizing Facebook to sow chaos around major elections in the West.

The multi-faceted case also alleged insider trading at the time of the events, with board members to be questioned about the timing of their share sales before the scandal erupted.

The high-profile case was expected to bring further attention to Delaware, the state many US companies choose for incorporation due to its highly specialized courts. The trial was presided over, and was to be decided by, Kathaleen McCormick, the same judge who last year rejected Elon Musk's multi-billion-dollar pay package at Tesla.

© 2025 AFP