
US AI Policy Pivots Sharply From 'Safety' To 'Security'
The Trump administration has pivoted its AI policies away from safety guardrails and toward national defense amid growing global competition.
Efforts from firms and governments to prioritize AI safety, which emphasizes ethics, transparency and predictability, have been replaced in the Trump era by a starkly realist doctrine of AI security. For those of us who have been watching this space, the demise of AI safety happened slowly during the last half of 2024, anticipating a potential change in administration, and then all at once.
(Disclosure: I previously served as senior counselor for AI at the Department of Homeland Security during the Biden administration.)
President Donald Trump rescinded former President Joe Biden's AI Executive Order on day one of his term, and Vice President JD Vance opened up the Paris AI Action Summit, a convening that was originally launched to advance the field of AI safety, by firmly stating that he was not actually there to discuss AI safety and would instead be addressing 'AI opportunity.' Vance went on to say that the U.S. would 'safeguard American AI' and stop adversaries from attaining AI capabilities that 'threaten all of our people.'
Without more context, these sound like meaningless buzzwords — what's the difference between AI safety and AI security, and what does this shift mean for the consumers and businesses that continue to adopt AI?
Simply put, AI safety is primarily focused on developing AI in a way that behaves ethically and reliably, especially when it's used in high-stakes contexts, like hiring or healthcare. To help prevent AI systems from causing harm, AI safety legislation typically includes risk assessments, testing protocols and requirements for human oversight.
AI security, by contrast, does not fixate on developing ethical and safe AI. Rather, it assumes that America's adversaries will inevitably use AI in malicious ways and seeks to defend U.S. assets from intentional threats, like AI being exploited by rival nations to target U.S. critical infrastructure. These are not hypothetical risks — U.S. intelligence agencies continue to track growing offensive cyber operations in China, Russia and North Korea. To counter these types of deliberate attacks, organizations need a strong baseline of cybersecurity practices that also account for threats presented by AI.
Both of these fields are important and interconnected — so why does it seem like one has eclipsed the other in recent months? I would guess that prioritizing AI security is inherently more aligned with today's foreign policy climate, in which the worldviews most in vogue are realist depictions of ruthless competition among nations for geopolitical and economic advantage. Prioritizing AI security aims to protect America from its adversaries while maintaining America's global dominance in AI. AI safety, on the other hand, can be a lightning rod for political debates about free speech and unfair bias. The question of whether a given AI system will cause actual harm is also context dependent, as the same system deployed in different environments could produce vastly different outcomes.
In the face of so much uncertainty, combined with political disagreements about what truly constitutes harm to the public, legislators have struggled to justify passing safety legislation that could hamper America's competitive edge. News that DeepSeek, a Chinese AI company, had achieved performance competitive with U.S. AI models at substantially lower cost only reinforced this shift, stoking widespread fear about the steadily diminishing gap between U.S. and Chinese AI capabilities.
What happens now, when the specter of federal safety legislation no longer looms on the horizon? Public comments from OpenAI, Anthropic and others on the Trump administration's forthcoming 'AI Action Plan' provide an interesting picture of how AI priorities have shifted. For one, 'safety' hardly appears in the submissions from industry, and where safety issues are mentioned, they are reframed as national security risks that could disadvantage the U.S. in its race to out-compete China. In general, these submissions lay out a series of innovation-friendly policies, from balanced copyright rules for AI training to export controls on semiconductors and other valuable AI components (e.g. model weights).
Beyond trying to meet the spirit of the Trump administration's initial messaging on AI, these submissions also seem to reveal what companies believe the role of the U.S. government should be when it comes to AI: funding infrastructure critical to further AI development, protecting American IP, and regulating AI only to the extent that it threatens our national security. To me, this is less of a strategy shift on the part of AI companies than it is a communications shift. If anything, these comments from industry seem more mission-aligned than their previous calls for strong and comprehensive data legislation.
Even then, not everyone in the industry supports a no-holds-barred approach to U.S. AI dominance. In their paper, 'Superintelligence Strategy,' three prominent AI voices, Eric Schmidt, Dan Hendrycks and Alexandr Wang, advise caution when it comes to pursuing a Manhattan Project-style push for developing superintelligent AI. The authors instead propose 'Mutual Assured AI Malfunction,' or MAIM, a defensive strategy reminiscent of Cold War-era deterrence that would forcefully counter any state-led effort to achieve an AI monopoly.
If the United States were to pursue this strategy, it would need to disable threatening AI projects, restrict access to advanced AI chips and open weight models and strengthen domestic chip manufacturing. Doing so, according to the authors, would enable the U.S. and other countries to peacefully advance AI innovation while lowering the overall risk of rogue actors using AI to create widespread damage.
It will be interesting to see whether these proposals gain traction in the coming months as the Trump administration forms a more detailed position on AI. We should expect to see more such proposals — specifically, those that persistently focus on the geopolitical risks and opportunities of AI, only suggesting legislation to the extent that it helps prevent large-scale catastrophes, such as the creation of biological weapons or foreign attacks on critical U.S. assets.
Unfortunately, safety issues don't disappear when you stop paying attention to them or rename a safety institute. While strengthening our security posture may help to boost our competitive edge and counter foreign attacks, it's the safety interventions that help prevent harm to individuals or society at scale.
The reality is that AI safety and security work hand-in-hand — AI safety interventions don't work if the systems themselves can be hacked; by the same token, securing AI systems against external threats becomes meaningless if those systems are inherently unsafe and prone to causing harm. Cambridge Analytica offers a useful illustration of this relationship; the incident revealed that Facebook's inadequate safety protocols around data access served to exacerbate security vulnerabilities that were then exploited for political manipulation. Today's AI systems face similarly interconnected challenges. When safety guardrails are dismantled, security risks inevitably follow.
For now, AI safety is in the hands of state legislatures and corporate trust and safety teams. The companies building AI know — perhaps better than anyone else — what the stakes are. A single breach of trust, whether it's data theft or an accident, can be destructive to their brand. I predict that they will therefore continue to invest in sensible AI safety practices, but discreetly and without fanfare. Emerging initiatives like ROOST, which enables companies to collaboratively build open safety tools, may be a good preview of what's to come: a quietly burgeoning AI safety movement, supported by the experts, labs and institutions that have pioneered this field over the past decade.
Hopefully, that will be enough.