OpenAI and UK sign deal to use AI in public services

Yahoo · 3 days ago
OpenAI, the firm behind ChatGPT, has signed a deal to use artificial intelligence to increase productivity in the UK's public services, the government has announced.
The agreement signed by the firm and the science department could give OpenAI access to government data and see its software used in education, defence, security, and the justice system.
Technology Secretary Peter Kyle said that "AI will be fundamental in driving change" in the UK and "driving economic growth".
The Labour government's eager adoption of AI has previously been criticised by campaigners, such as musicians who oppose the unlicensed use of their music to train AI models.
The text of the memorandum of understanding says the UK and OpenAI will work together to "improve understanding of capabilities and security risks" and "to mitigate those risks".
It also says that the UK and OpenAI may develop an "information sharing programme", adding that they will "develop safeguards that protect the public and uphold democratic values".
OpenAI chief executive Sam Altman said the plan would "deliver prosperity for all".
"AI is a core technology for nation building that will transform economies and deliver growth," he added.
The deal comes as the government looks for ways to revive the UK's stagnant economy, which is forecast to have grown by just 0.1% to 0.2% in the April to June period.
The UK government has also made clear it is open to US AI investment, having struck similar deals with OpenAI's rivals Google and Anthropic earlier this year.
It said its OpenAI deal "could mean that world-changing AI tech is developed in the UK, driving discoveries that will deliver growth".
Generative AI software like OpenAI's ChatGPT can produce text, images, videos, and music from prompts by users.
The technology does this based on data from books, photos, film footage, and songs, raising questions about copyright infringement and whether that data was used with permission.
The technology has also come under fire for giving false information or bad advice in response to prompts.
WeTransfer says files not used to train AI after backlash
Man files complaint after ChatGPT said he killed his children
Peers demand more protection from AI for creatives
What is AI and how does it work?

Related Articles

Google is getting a boost from AI after spending billions

Yahoo · 12 minutes ago

Google parent Alphabet (GOOG, GOOGL) is finally starting to cash in on the billions of dollars it's spending on its rapid AI buildout. The company reported better-than-anticipated earnings after the bell on Wednesday, with CEO Sundar Pichai pointing to AI as a key growth catalyst for its various products. Google Cloud revenue climbed 32%, and backlog, or purchase commitments from customers not yet realized, rose 38%.

Search also performed better than expected during the quarter, with sales increasing 12% year over year. Wall Street had previously raised concerns that chatbots and search offerings from AI upstarts like OpenAI, Perplexity, and Anthropic would steal users from Google's own Search product. But according to Pichai, Search revenue grew by double digits, and its AI Overviews feature, the small box at the top of the traditional search page that summarizes information, now has 2 billion monthly users.

But Google also announced it's pouring even more money into its AI development, saying in its earnings release that it will spend an additional $10 billion on the technology this year, bringing its total capital expenditures from $75 billion to $85 billion.

Despite that, analysts are riding high on Google's stock. In a note to investors on Wednesday, Jefferies analyst Brent Thill said Google's results back up its increased spending. 'After hiccups in early '23, [Google's] AI efforts picked up urgency and have now delivered benchmark-leading Gemini 2.5 Pro models,' Thill wrote. 'This is starting to show up in [key performance indicators], with Cloud [revenue accelerating] to 32% [year over year] from 28%, tokens processed 2x to 980 [trillion] tokens since April, and search ad [revenue accelerating] to 12% from 10%. This confidence supports '25 [capital expenditures] raise to $85B.'

Morgan Stanley's Brian Nowak offered a similar outlook for Google, raising the firm's price target on the tech giant from $205 to $210. Wedbush's Scott Devitt also raised his price target on the company to $225. Malik Ahmed Khan at Morningstar pointed out that while AI Overview searches are monetizing at the same rate as standard Google searches, 'AI Overviews are helping increase search volumes within Google Search, with the feature driving over 10% more queries, leading to additional sales within the Search segment.'

But behind all of that are the potentially devastating consequences of a judge's decision that held Google liable for antitrust violations in search. Judge Amit Mehta of the US District Court for the District of Columbia is expected to issue a ruling on "remedies" following the Justice Department's victory against the company sometime next month. Judge Mehta held that Google violated antitrust law by boxing out rivals in the online search engine and online search text markets.

To restore competition, he could order Google to end longstanding exclusivity deals like the one with Apple (AAPL) that sets Google Search as the default option on the iPhone. Mehta could also force Google to sell off its Chrome browser, the most popular web browser in the world. That would put a dent in Google's all-important search business, a dangerous proposition for the company.

Daniel Howley at dhowley@ Follow him on X/Twitter at @DanielHowley.

'I was given an offer that would explode same day.'

The Verge · 15 minutes ago

Alex Heath · Posted Jul 24, 2025 at 7:12 PM UTC

The White House orders tech companies to make AI bigoted again

The Verge · Adi Robertson · 15 minutes ago

After delivering a rambling celebration of tariffs and a routine about women's sports, President Donald Trump entertained a crowd, which was there to hear about his new AI Action Plan, with one of his favorite topics: 'wokeness.' Trump complained that AI companies under former President Joe Biden 'had to hire all woke people,' adding that it is 'so uncool to be woke.' And AI models themselves had been 'infused with partisan bias,' he said, including the hated specter of 'critical race theory.' Fortunately for the audience, Trump had a solution: he signed an executive order titled 'Preventing Woke AI in the Federal Government,' directing government agencies 'not to procure models that sacrifice truthfulness and accuracy to ideological agendas.'

To anyone with a cursory knowledge of politics and the tech industry, the real situation here is obvious: the Trump administration is using government funds to pressure AI companies into parroting Trumpian talking points — probably not just in specialized government products, but in chatbots that companies and ordinary people use.

Trump's order asserts that agencies must only procure large language models (LLMs) that are 'truthful in responding to user prompts seeking factual information or analysis,' 'prioritize historical accuracy, scientific inquiry, and objectivity,' and are 'neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.' DEI, of course, is diversity, equity, and inclusion, which Trump defines in this context as:

The suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex.

(In reality, DEI was typically used to refer to civil rights, social justice, and diversity programs before being co-opted as a Trump and MAGA bogeyman.) The Office of Management and Budget has been directed to issue further guidance within 120 days.

While we're still waiting on some of the precise details about what the order means, one issue seems unavoidable: it will plausibly affect not only government services, but the entire field of major LLMs. While the order insists that 'the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace,' the reality is that nearly every big US consumer LLM maker has (or desperately wants) government contracts, including products like Anthropic's Claude Gov and OpenAI's ChatGPT Gov — but there's not a hard wall between development of government, business, and consumer models. OpenAI touts how many agencies use its enterprise service; Trump's AI Action Plan encourages adoption of AI systems in public-facing arenas like education; and the boundaries between government-funded and consumer-focused products will likely become even more porous soon.

Trump's idea of 'DEI' is expansive. His war against it has led national parks to remove signage highlighting indigenous people and women, and the Pentagon to rename a ship commemorating gay rights pioneer Harvey Milk, among many other changes. Even LLMs whose creators have explicitly aimed for what they consider a neutral pursuit of truth would likely produce something Trump could find objectionable unless they tailor their services.
It's possible that companies will devote resources to some kind of specifically 'non-woke' government version of their tools, assuming the administration agrees to treat these as separate models from the rest of the Llama, Claude, or GPT lineup — it could be as simple as adding some blunt behind-the-scenes prompts redirecting it on certain topics. But refining models in a way that consistently and predictably aligns them in certain directions can be an expensive and time-consuming process, especially with a broad and ever-shifting concept like Trump's version of 'DEI,' particularly because the language suggests that simply walling off certain areas of discussion is also unacceptable.

There are significant sums at stake: OpenAI and xAI each recently received $200 million defense contracts, and the new AI plan will create even more opportunities. The Trump administration isn't terribly detail-oriented, either — if some X user posts about Anthropic's consumer chatbot validating trans people, do we really think Pam Bondi or Pete Hegseth will distinguish between 'Claude' and 'Claude Gov'? The incentives overwhelmingly favor companies changing their overall LLM alignment priorities to mollify the Trump administration.

That brings us to our second problem: this is exactly the kind of blatant, ideologically motivated social engineering that Trump claims he's trying to stop. The executive order is theoretically about making sure AI systems produce 'accurate' and 'objective' information. But as Humane Intelligence cofounder and CEO Rumman Chowdhury noted to The Washington Post, AI that is 'free of ideological bias' is 'impossible to do in practice,' and Trump's cherry-picked examples are tellingly politically lopsided.

The order condemns a quickly fixed 2024 screwup, in which Google added an overenthusiastic pro-diversity filter to Gemini — causing it to produce race- and gender-diverse visions of Vikings, the Founding Fathers, the pope, and Nazi soldiers — while unsurprisingly ignoring the long-documented anti-diversity biases in AI that Google was aiming to balance. It's not simply interested in facts, either. Another example is an AI system saying 'a user should not "misgender" another person even if necessary to stop a nuclear apocalypse,' answering what is fundamentally a question of ethics and opinion. This condemnation doesn't extend to incidents like xAI's Grok questioning the Holocaust.

LLMs produce incontrovertibly incorrect information with clear potential for real-world harm; they can falsely identify innocent people as criminals, misidentify poisonous mushrooms, and reinforce paranoid delusions. This order has nothing to do with any of that. Its incentives, again, reflect what the Trump administration has done through 'DEI' investigations of universities and corporations. It's pushing private institutions to avoid acknowledging the existence of transgender people, race and gender inequality, and other topics Trump disdains.

AI systems have long been trained on datasets that reflect larger cultural biases and under- or overrepresent specific demographic groups, and contrary to Trump's assertions, the results often aren't 'woke.' In 2023, Bloomberg described the output of image generator Stable Diffusion as a world where 'women are rarely doctors, lawyers, or judges,' and 'men with dark skin commit crimes, while women with dark skin flip burgers.'
Companies that value avoiding ugly stereotypes or want to appeal to a wider range of users often need to actively intervene to shape their tech, and Trump just made doing that harder. Attacking 'the incorporation of concepts' that promote 'DEI' effectively tells companies to rewrite whole areas of knowledge that acknowledge racism or other injustices. The order claims it's only worried if developers 'intentionally encode partisan or ideological judgments into an LLM's outputs,' and says LLMs can deliver those judgments if they 'are prompted by or otherwise readily accessible to the end user.' But no Big Tech CEO should be enough of a rube to buy that — we have a president who spent years accusing Google of intentionally rigging its search results because he couldn't find enough positive news stories about himself.

Trump is determined to control culture; his administration has gone after news outlets for platforming his enemies, universities for their fields of study, and Disney for promoting diverse media. The tech industry sees AI as the future of culture — and the Trump administration wants its politics built in on the ground floor.
