Latest news with #AISafetyInstitute


Forbes
21 hours ago
- Business
- Forbes
UK Announces Deal With OpenAI To Augment Public Services And AI Power
UK and OpenAI Announce a new MOU

OpenAI and the United Kingdom's Department for Science, Innovation and Technology signed a memorandum of understanding yesterday that sets out an ambitious joint plan to put OpenAI's models to work in day-to-day government tasks, to build new computing hubs on British soil, and to share security know-how between company engineers and the UK's AI Security Institute. The agreement is voluntary and does not bind the UK exclusively, yet the commitments are concrete: both sides want pilot projects running inside the civil service within the next twelve months.

Details of the Joint MOU

The memorandum identifies four key areas of joint innovation. It frames AI as a tool to raise productivity, speed discovery and tackle social problems, provided the public is involved so trust grows alongside capability. The partners will look for concrete deployments of advanced models across government and business, giving civil servants, small firms and start-ups new ways to navigate rules, draft documents and solve hard problems in areas such as justice, defence and education. To run those models at scale, they will explore UK-based data-centre capacity, including possible 'AI Growth Zones', so critical workloads remain onshore and available when demand peaks. Finally, the deal deepens technical information-sharing with the UK AI Security Institute, creating a feedback loop that tracks emerging capabilities and risks and co-designs safeguards to protect the public and democratic norms.

OpenAI also plans to enlarge its London office, currently at more than 100 staff, to house research and customer-support teams. OpenAI CEO Sam Altman has long been interested in the UK as a region for AI development because of its long history in AI research, most notably beginning with Alan Turing.

'AI will be fundamental in driving the change we need to see across the country, whether that's in fixing the NHS, breaking down barriers to opportunity or driving economic growth,' said UK Technology Secretary Peter Kyle. 'That's why we need to make sure Britain is front and centre when it comes to developing and deploying AI, so we can make sure it works for us.' Altman echoed the ambition, calling AI 'a core technology for nation building' and urging Britain to move from planning to delivery.

The Increasing Pace of Governmental AI Adoption and Funding

Three factors explain the timing. Universities in London, Cambridge and Oxford supply a steady stream of machine-learning talent. Since the Bletchley Park summit in 2023, the UK has positioned itself as a broker of global safety standards, giving investors a sense of legal stability. And with a sluggish economy, ministers need a credible growth story; large-scale automation of paperwork is an easy pitch to voters. The UK also offers scientists clear rules and public money: the government has already promised up to £500 million for domestic compute clusters and is reviewing bids for 'AI Growth Zones'.

The UK is not alone in its AI ambitions. France has funnelled billions into a joint venture with Mistral AI and Nvidia, while Germany is courting Anthropic, which signed its own memorandum with DSIT in February. The UK believes its head start with OpenAI, still the best-known brand in generative AI, gives it an edge in landing commercial spin-offs and high-paid jobs.

Risks that could derail the plan

Kyle knows that any mis-step, such as an AI bot giving faulty benefit advice, could sink trust.
That is why the memorandum pairs deployment with security research and reserves the right for civil-service experts to veto outputs that look unreliable. The UK has a long history with AI, including the risks posed by a lack of progress: the infamous Lighthill report of 1973 is widely credited with contributing to the first 'AI Winter', a marked decline in interest in, and funding for, AI. Careful political handling of AI is therefore key to sustaining support. Public-sector unions may resist widespread automation, arguing that AI oversight creates as much work as it saves. Likewise, there is widespread concern about vendor lock-in under the OpenAI deal. By insisting on locally owned data centres and keeping the MOU open to additional suppliers, ministers hope to avoid a repeat of earlier cloud contracts that left sensitive workloads offshore and locked in pricey relationships. Finally, a single headline error, such as a chatbot delivering wrong tax guidance, could spark calls for a pause. For now, however, the benefits appear to outweigh the risks.

No department stands to gain more than the UK's National Health Service, burdened by a record elective-care backlog. Internal modelling seen by officials suggests that automated triage and note-summarisation tools could return thousands of clinical hours each week. If early pilots succeed, hospitals in Manchester and Bristol will be next in line. And OpenAI is not new to the UK government: a chatbot for small businesses has been live for months, an internal assistant nicknamed 'Humphrey' now drafts meeting notes and triages overflowing inboxes, and another tool, 'Consult', sorts thousands of public submissions in minutes, freeing policy teams for higher-level work. The new agreement aims to lift these trials out of the margins and weave them more fully into the fabric of government.

What's Next

Joint project teams will start by mapping use-cases in justice, defence and social care. They must clear privacy impact assessments before live trials begin. If those trials shave measurable hours off routine tasks, the Treasury plans to set aside money in the 2026 Autumn Statement for a phased rollout. The UK's agreement with OpenAI is an experiment in modern statecraft: it tests whether government organisations can deploy privately built, high-end models while keeping control of data and infrastructure. Success would mean delivering the promised benefits while avoiding the significant risks. Failure would reinforce arguments that large language models remain better at publicity than at public service.


Ottawa Citizen
17-06-2025
- Business
- Ottawa Citizen
Feds partner with Canadian firm to accelerate AI use in public service
The Government of Canada has partnered with Cohere, a Canadian AI firm, to accelerate the adoption of artificial intelligence in the public service.

In a joint statement published Sunday, Prime Minister Mark Carney said the federal government has signed memorandums of understanding (MOUs) with Cohere and the United Kingdom to 'deepen and explore new collaborations on frontier AI systems to support our national security.'

The statement also said Cohere will build data centres across Canada and expand its presence in the U.K. to support the country's AI Opportunities Action Plan.

'The government of Canada has been working closely with Cohere, one of Canada's — and one of the world's — leading AI companies,' Carney said at a pooled press event Sunday with U.K. Prime Minister Keir Starmer.

'We're absolutely thrilled that a partnership is developing between the United Kingdom and Cohere … The U.K. — and Canada, we like to say as well — have been one of the pioneers in not just AI development, but also the safety and security of the applications of AI, to really realize the full potential. And we're deepening the collaboration between Canada's AI Safety Institute and the new U.K. Security Institute, and this is going to help to realize the full potential for all our citizens.'

Aidan Gomez, Cohere's CEO, said the company will work on accelerating the adoption of AI in the public sector. The CEO has been taking part in discussions with Carney and Starmer, which included promises to make government more productive and efficient, according to a company blog post.

'We're super excited to be partnering with both governments … Cohere is excited to strengthen the innovation of both of our countries, as well as the sovereignty. Thank you for your partnership, and your support,' Gomez said at Sunday's event.

Details still unclear


Axios
29-05-2025
- Business
- Axios
Scoop: AI Safety Institute to be renamed Center for AI Safety and Leadership
The Trump administration is looking to change the AI Safety Institute's name to the Center for AI Safety and Leadership in the coming days, per two sources familiar with the matter.

Why it matters: The U.S. standards-setting body and AI testbed, housed inside the Commerce Department's National Institute of Standards and Technology, has been bracing for changes since President Trump took office. An early draft of a press release seen by one source tasks the agency with largely the same responsibilities it previously had, including engaging internationally. AISI was left out of a Paris AI summit earlier this year. More details of what the name shift means for the mission were not immediately clear.

Context: Under the Biden administration, the AI Safety Institute acted as a testing ground of sorts for new AI models, working with private-sector companies on evaluation and standards, and was viewed as important by both Republicans and Democrats. After narrowly dodging major DOGE cuts earlier this year, as Axios previously reported, the government body has been changing its identity and purpose as Republican Cabinet secretaries, Trump and Congress figure out their AI strategy. A Commerce Department spokesperson did not immediately respond to a request for comment.

Between the lines: More than the name change, the resources the Trump administration invests in NIST will be a key indicator of how much of a priority AI safety and leadership is.


Indian Express
08-05-2025
- Business
- Indian Express
Understanding the shift from AI Safety to Security, and India's opportunities
Written by Balaraman Ravindran, Vibhav Mithal and Omir Kumar

In February 2025, the UK announced that its AI Safety Institute would become the AI Security Institute. This triggered several debates about what the change means for AI safety. As India prepares to host the AI Summit, a key question will be how to approach AI safety.

The What and How of AI Safety

In November 2023, more than 20 countries, including the US, UK, India, China, and Japan, attended the inaugural AI Safety Summit at Bletchley Park in the UK. The Summit took place against the backdrop of the increasing capabilities of AI systems and their integration into multiple domains of life, including employment, healthcare, education, and transportation. Countries acknowledged that while AI is a transformative technology with potential for socio-economic benefit, it also poses significant risks through both deliberate and unintentional misuse. A consensus emerged among the participating countries on the importance of ensuring that AI systems are safe and that their design, development, deployment, or use does not harm society, leading to the Bletchley Declaration. The Declaration further advocated for developing risk-based policies across nations, taking into account national contexts and legal frameworks, while promoting collaboration, transparency from private actors, robust safety evaluation metrics, and enhanced public-sector capability and scientific research. It was instrumental in bringing AI safety to the forefront and laid the foundation for global cooperation. Following the Summit, the UK established the AI Safety Institute (AISI), with similar institutes set up in the US, Japan, Singapore, Canada, and the EU. Key functions of AISIs include advancing AI safety research, setting standards, and fostering international cooperation. India has also announced the establishment of its AISI, which will operate on a hub-and-spoke model involving research institutions, academic partners, and private-sector entities under the Safe and Trusted pillar of the IndiaAI Mission.

UK's Shift from Safety to Security

The establishment of AISIs in various countries reflected a global consensus on AI safety. However, the discourse took a turn in February 2025, when the UK rebranded its Safety Institute as the Security Institute. The press release noted that the new name reflects a focus on risks with security implications, such as the use of AI in developing chemical and biological weapons, cybercrime, and child sexual abuse. It clarified that the Institute would not prioritise issues like bias or free speech but would focus on the most serious risks, helping policymakers ensure national safety. The UK government also announced a partnership with Anthropic to deploy AI systems for public services, assess AI security risks, and drive economic growth.

India's Understanding of Safety

Given the UK's recent developments, it is important to explore what AI safety means for India. Firstly, when we refer to AI safety, i.e., making AI systems safe, we usually talk about mitigating harms such as bias, inaccuracy, and misinformation. While these are pressing concerns, AI safety should also encompass broader societal impacts, such as effects on labour markets, cultural norms, and knowledge systems. One of the Responsible AI (RAI) principles laid down by NITI Aayog in 2021 hinted at this broader view: 'AI should promote positive human values and not disturb in any way social harmony in community relationships.' The RAI principles also address equality, reliability, non-discrimination, privacy protection, and security, all of which are relevant to AI safety. Thus, adherence to RAI principles could be one way of operationalising AI safety.

Secondly, safety and security should not be seen as mutually exclusive. We cannot focus on security without first ensuring safety. For example, in a country like India, bias in AI systems could pose national security risks by inciting unrest. As we aim to deploy 'AI for All' in sectors such as healthcare and education, it is essential that these systems are not only secure but also safe and responsible. A narrow focus on security alone is insufficient.

Lastly, AI safety must align with AI governance and be viewed through a risk-mitigation lens, addressing risks throughout the AI system lifecycle. This includes safety considerations from the conception of the AI model or system, through data collection, processing, and use, to design, development, testing, deployment, and post-deployment monitoring and maintenance. India is already taking steps in this direction. The Draft Report on AI Governance by IndiaAI emphasises the need to apply existing laws to AI-related challenges while also considering new laws to address legal gaps. In parallel, other regulatory approaches, such as self-regulation, are also being explored. Given the global shift from safety to security, the upcoming AI Summit presents India with an important opportunity to articulate its unique perspective on AI safety, both in the national context and as part of a broader global dialogue.

Ravindran is Head, Wadhwani School of Data Science and AI & CeRAI; Mithal is Associate Research Fellow, CeRAI (and Associate Partner, Anand and Anand); and Kumar is Policy Analyst, CeRAI. CeRAI is the Centre for Responsible AI, IIT Madras.

Epoch Times
21-04-2025
- Politics
- Epoch Times
Greens Senator Warns Australia Not ‘Nimble Enough' to Deal With Surge in AI Capabilities
Greens Senator David Shoebridge has called on the Australian federal parliament to be more nimble in addressing the risks around AI development. At a recent online event on AI safety, Shoebridge said one of the greatest challenges was getting the parliament to respond fast enough.

'We can't spend eight years working out a white paper before we roll out regulation in this space,' he said. 'We can't see a threat emerging and say, "Okay, cool, we're going to begin a six-year parliamentary process in order to work out how we respond to a high-risk deployment of AI." We need to be much more nimble, and we need the resources and assistance in parliament to get us there. And I think if you look at the last three years, you can see how non-nimble the parliament has been.'

The senator also noted that while some work on AI safety had managed to get attention, not much progress had been made. 'What's come out? Where's the product from parliament? Where is the AI Safety Act? Where is the national regulator?' he asked. 'Where's the resource agency that can help parliament navigate through this bloody hard pathway we're going to have to do in the next three years?'

Greens' Proposal for National AI Regulator

To address AI risks, Shoebridge says the Greens would put forward a standalone 'AI Act' to legislate guardrails and create a national regulator. 'We don't call it an AI Safety Institute, but it has the functions of an AI Safety Institute,' he said. 'So it's well-resourced. It's a national regulator. And its focus is on, first of all, guiding parliament so that we get the right regulatory models in place, strict handrails, strict guidelines, and they're legislated.'

The senator further stated that the proposed national AI regulator would have a team of on-call, highly qualified experts led by an independent statutory officer to test high-risk deployments of AI. The expert team would also be responsible for establishing a reliable process to test AI models before they are deployed, identifying any risks in real time.

In addition, the Greens would propose setting up a 'digital rights commissioner' to safeguard digital rights and oversee the impacts of AI on those rights. 'I would think of a digital rights commissioner as a kind of an ombudsman in the [digital] space to ensure that our data isn't being fed without our consent into large language models, [and] to put in remedy so that if that happens, people are held to account, and our data is removed from training data sets,' Shoebridge said.

Expert Says Liability Already a Hazy Area

Kimberlee Weatherall, a law professor at the University of Sydney, said there were existing challenges with identifying where problems start or occur in AI automation processes. 'Automation makes liability hard at a general level: it's just harder to pin liability on a company or a person if they can say, well, it was the system, what done it? It wasn't me,' she said. 'And if the technology underlying that automated system is in any way unpredictable, which some of the AI is, or we don't understand it, it makes it even harder to pin things like liability and to hold companies responsible. Another reason why we need to be thinking about things like guard rails [is] to ensure that systems are safe before they go out and monitoring and auditing that goes on afterwards.'