
Understanding the shift from AI Safety to Security, and India's opportunities
Written by Balaraman Ravindran, Vibhav Mithal and Omir Kumar
In February 2025, the UK announced that its AI Safety Institute would become the AI Security Institute. The move triggered debate about what it means for AI safety. As India prepares to host the AI Summit, a key question will be how the country should approach AI safety.
The What and How of AI Safety
In November 2023, more than 20 countries, including the US, UK, India, China, and Japan, attended the inaugural AI Safety Summit at Bletchley Park in the UK. The Summit took place against the backdrop of increasing capabilities of AI systems and their integration into multiple domains of life, including employment, healthcare, education, and transportation. Countries acknowledged that while AI is a transformative technology with potential for socio-economic benefit, it also poses significant risks through both deliberate and unintentional misuse.
A consensus emerged among the participating countries on the importance of ensuring that AI systems are safe and that their design, development, deployment, or use does not harm society—leading to the Bletchley Declaration. The Declaration further advocated for developing risk-based policies across nations, taking into account national contexts and legal frameworks, while promoting collaboration, transparency from private actors, robust safety evaluation metrics, and enhanced public sector capability and scientific research. It was instrumental in bringing AI safety to the forefront and laid the foundation for global cooperation.
Following the Summit, the UK established the AI Safety Institute (AISI), with similar institutes set up in the US, Japan, Singapore, Canada, and the EU. Key functions of AISIs include advancing AI safety research, setting standards, and fostering international cooperation. India has also announced the establishment of its AISI, which will operate on a hub-and-spoke model involving research institutions, academic partners, and private sector entities under the Safe and Trusted pillar of the IndiaAI Mission.
UK's Shift from Safety to Security
The establishment of AISIs in various countries reflected a global consensus on AI safety. However, the discourse took a turn in February 2025, when the UK rebranded its AI Safety Institute as the AI Security Institute. The press release noted that the new name reflects a focus on risks with security implications, such as the use of AI to develop chemical and biological weapons, carry out cybercrime, and enable child sexual abuse. It clarified that the Institute would not prioritise issues such as bias or free speech but would focus on the most serious risks, helping policymakers protect national security. The UK government also announced a partnership with Anthropic to explore the use of AI in public services, assess AI security risks, and drive economic growth.
India's Understanding of AI Safety
Given these recent developments in the UK, it is important to examine what AI safety means for India.
Firstly, when we refer to AI safety — i.e., making AI systems safe — we usually talk about mitigating harms such as bias, inaccuracy, and misinformation. While these are pressing concerns, AI safety should also encompass broader societal impacts, such as effects on labour markets, cultural norms, and knowledge systems. One of the Responsible AI (RAI) principles laid down by NITI Aayog in 2021 hinted at this broader view: 'AI should promote positive human values and not disturb in any way social harmony in community relationships.' The RAI principles also address equality, reliability, non-discrimination, privacy protection, and security — all of which are relevant to AI safety. Thus, adherence to RAI principles could be one way of operationalising AI safety.
Secondly, safety and security should not be seen as mutually exclusive. We cannot focus on security without first ensuring safety. For example, in a country like India, bias in AI systems could pose national security risks by inciting unrest. As we aim to deploy 'AI for All' in sectors such as healthcare and education, it is essential that these systems are not only secure but also safe and responsible. A narrow focus on security alone is insufficient.
Lastly, AI safety must align with AI governance and be viewed through a risk mitigation lens, addressing risks throughout the AI system lifecycle. This includes safety considerations from the conception of the AI model/system, through data collection, processing, and use, to design, development, testing, deployment, and post-deployment monitoring and maintenance. India is already taking steps in this direction. The Draft Report on AI Governance by IndiaAI emphasises the need to apply existing laws to AI-related challenges while also considering new laws to address legal gaps. In parallel, other regulatory approaches, such as self-regulation, are also being explored.
Given the global shift from safety to security, the upcoming AI Summit presents India with an important opportunity to articulate its unique perspective on AI safety — both in the national context and as part of a broader global dialogue.
Ravindran is Head, Wadhwani School of Data Science and AI, and CeRAI; Mithal is Associate Research Fellow, CeRAI (and Associate Partner, Anand and Anand); Kumar is Policy Analyst, CeRAI. CeRAI is the Centre for Responsible AI at IIT Madras.