
Latest news with #AI+Expo

The war over the peace business

Business Insider, 17-06-2025

At the second annual AI+ Expo in Washington, DC, in early June, war is the word of the day. As a mix of Beltway bureaucrats, military personnel, and Washington's consultant class peruse the expansive Walter E. Washington Convention Center, a Palantir booth showcases its latest in data-collection suites for "warfighters." Lockheed Martin touts the many ways it is implementing AI throughout its weaponry systems. On the soundstage, the defense tech darling Mach Industries is selling its newest uncrewed aerial vehicles. "We're living in a world with great-power competition," the presenter says. "We can't rule out the possibility of war — but the best way to prevent a war is deterrence," he says, flanked by videos of drones flying through what look like the rugged mountains and valleys of Kandahar.

Hosted by the Special Competitive Studies Project, a think tank led by former Google CEO Eric Schmidt, the expo says it seeks to bridge the gap between Silicon Valley entrepreneurs and Washington policymakers to "strengthen" America and its allies' "competitiveness in critical technologies."

One floor below, a startup called Anadyr Horizon is making a very different sales pitch, for software that seeks to prevent war rather than fight it: "peace tech," as the company's cofounder Arvid Bell calls it. Dressed in white khakis and a black pinstripe suit jacket with a dove and olive branch pinned to his lapel (a gift from his husband), the former Harvard political scientist begins by noting that Russia's all-out invasion of Ukraine had come as a surprise to many political scientists. But his AI software, he says, could have predicted it.

Long the domain of fantasy and science fiction, the idea of forecasting conflict has now become a serious pursuit. In Isaac Asimov's 1950s "Foundation" series, the main character develops an algorithm that allows him to predict the decline of the Galactic Empire, angering its rulers and forcing him into exile. During the coronavirus pandemic, the US State Department experimented with AI fed with Twitter data to predict "COVID cases" and "violent events." In its AI audit two years ago, the State Department revealed that it had started training AI on "open-source political, social, and economic datasets" to predict "mass civilian killings." The UN is also said to have experimented with AI to model the war in Gaza.

Interest in AI's ability to anticipate humanity's most destructive urges comes as the world sees an alarming rise in global conflict. Last week, Israel launched strikes against Tehran under the pretense that the Ayatollah was inching closer to developing a nuclear bomb. A month ago, India and Pakistan came to the brink of war over the decadeslong dispute in Kashmir. The ongoing Gaza conflict has claimed more than 50,000 lives, and the war in Ukraine more than 150,000, according to conservative estimates.

Anadyr Horizon believes it can prevent war by using AI to create facsimiles of world leaders that live and interact in a simulation of the real world. Its software, North Star, is designed to predict how real-world decision-makers might react to a given situation or stimulus — like economic sanctions or a naval blockade — by observing their AI counterparts.
Bell says these digital twins are so sophisticated that they can emulate how a leader like Vladimir Putin might behave when he's sleep-deprived versus when he's had a full night's rest. (Much of the formula behind these scores is Anadyr Horizon's secret sauce, but the Russian autocrat has boasted that he gets by on four hours of sleep.)

On the screen, North Star looks like the 1970s video game "The Oregon Trail," with reams of text chronicling events from the simulated world. Bell demonstrates by prompting it to show what would happen if the United States imposed a no-fly zone over Ukraine — a maneuver with a mixed history (effective in preventing Saddam Hussein's regime from carpet-bombing southern Iraq in 1991; ineffective in preventing the Srebrenica massacre of 1995). North Star then runs thousands of simulations, each its own multiverse with slightly different variables, from whether a key decision-maker was late to work to the order in which a single conversation over military strategy might unfold. It can model the outcomes of multiple policies, and even offer advice on which leaders might be open to back-channel negotiations. Eventually the program spits out a result: Russia is 60% likely to escalate the conflict if a no-fly zone is imposed. It also provides a hypothetical SVR intelligence brief describing Russia's devastating escalation. "Over the past 24 hours, we have delivered high-precision strikes on enemy troop concentrations and military equipment, destroying ammunition depots, communication hubs, and infrastructure," it reads.
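Anadyr Horizon has not published how North Star works under the hood, but the pattern Bell describes, perturbing small variables across thousands of simulated worlds and then reporting the share that end in escalation, is a classic Monte Carlo setup. The Python sketch below is purely illustrative: the `LeaderTwin` class, its numeric weights, and the sleep-deprivation effect are invented stand-ins for the company's proprietary, LLM-driven digital twins.

```python
"""
Toy Monte Carlo scenario simulator in the spirit of the North Star
description above. Everything here is hypothetical: the leader
"digital twin" is a simple probabilistic stub, not an LLM agent.
"""
import random
from dataclasses import dataclass


@dataclass
class LeaderTwin:
    name: str
    base_escalation: float  # baseline appetite for escalation, 0..1
    fatigue_bias: float     # how much sleep deprivation raises it

    def escalates(self, stimulus_pressure: float, hours_slept: float) -> bool:
        # Perturbed variables (sleep, stimulus) shift the odds, echoing
        # the article's claim that twins model sleep-deprived behavior.
        fatigue = max(0.0, (6.0 - hours_slept) / 6.0)
        p = min(1.0, self.base_escalation
                     + self.fatigue_bias * fatigue
                     + 0.3 * stimulus_pressure)
        return random.random() < p


def estimate_escalation_probability(twin: LeaderTwin,
                                    stimulus_pressure: float,
                                    n_runs: int = 10_000) -> float:
    """Run many slightly different worlds and count escalations."""
    escalations = 0
    for _ in range(n_runs):
        hours_slept = random.uniform(3.0, 8.0)  # one varied variable
        if twin.escalates(stimulus_pressure, hours_slept):
            escalations += 1
    return escalations / n_runs


if __name__ == "__main__":
    # Invented parameters; the "no-fly zone" is modeled only as a
    # high-pressure stimulus.
    twin = LeaderTwin("RU-leader", base_escalation=0.35, fatigue_bias=0.2)
    prob = estimate_escalation_probability(twin, stimulus_pressure=0.6)
    print(f"Estimated escalation probability: {prob:.0%}")
```

A real system would replace the stubbed `escalates` decision with generative agents and vary far more than one variable per world, but a probabilistic report like "Russia is 60% likely to escalate" falls out of the same counting step.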
For Bell, the origins of Anadyr Horizon were relatively low-tech. Back when he was a lecturer at Harvard — teaching courses in conflict de-escalation and post-Soviet geopolitics — he used funds from the university's startup incubator, the Scholar-Entrepreneur Initiative, to launch an organization that held a yearly war-gaming event with high-profile representatives from the United States and the European Union. Military and diplomatic leaders would join Bell in Cambridge and be assigned the role of a foreign dignitary — Gen. Zhang Youxia, the vice chairman of China's Central Military Commission, say. For the three days of the event, they would become method actors, going to painstaking lengths to re-create the lives of their assigned leaders as realistically as possible: wearing the same military fatigues, chauffeured in their preferred cars, and conferring in a replica of their situation room. "It's like theater," Bell tells me.

Bell, who had been traveling to Afghanistan as a Ph.D. researcher over the previous decade, drew from his experience studying breakdowns in negotiations during armed conflict. His goal was to have participants empathize with those they might otherwise see as their adversary. During one such event in December 2021, the Russian team invaded Ukraine using the exact multipronged military assault that Putin would use 75 days later.

Two years later, he was introduced to Ferenc Dalnoki-Veress, a Nobel Prize-winning physicist, through Bill Potter, the founding director of the James Martin Center for Nonproliferation Studies at the Middlebury Institute of International Studies. At the time, Dalnoki-Veress was experimenting with prompting different AI agents to debate each other. "It was something stupid," Dalnoki-Veress says. "Like one agent has to prove to the other agent that cherry lollipops are better than lemon lollipops." Dalnoki-Veress was impressed by the lengths the robots would go to convince the others of their position. Sometimes they'd even lie to each other — if prompted to win at all costs. Soon he started experimenting with whether the agents would collaborate, negotiate, or even write treaties with one another. "It occurred to me how human it was," says Dalnoki-Veress.

Last fall, the trio of academics founded Anadyr with the belief that they could imbue these bots with the personalities of world leaders and realistically emulate their interactions. Instead of organizing one war game a year, they could game out hundreds of thousands of scenarios over the course of one night and get "probabilistic estimates of where conflicts are really going to happen," Bell says. He hopes North Star's predictive capabilities will help diplomats and politicians make better decisions about how to negotiate during times of conflict and even prevent wars.

Anadyr is a reference to the code name the USSR used for its deployment of ballistic missiles and troops to the western coast of Cuba in October 1962. If President John F. Kennedy had had a tool like North Star to preempt the Cuban Missile Crisis, Bell posits, instead of having 13 days to respond, he might have had six months. "We are reclaiming this name to say, 'OK, the next Operation Anadyr, we will detect early,'" he says.

In doing so, the company and its venture capital backers believe it can make billions. By some estimates, violent conflict cost the global economy $19 trillion in 2023 alone. And one study conducted by the International Monetary Fund suggests every dollar spent on conflict prevention can yield a return as high as $103 in countries that have recently experienced violent conflict. "Peace tech is going after a huge market," says Brian Abrams, a founder of B Ventures, an investor in Anadyr. "If you look at climate tech, a decade ago, the space was very small. It wasn't even called climate tech," he adds. "Now, climate tech sees about $50 billion in investment annually." He says peace tech can replicate the growth seen in the climate tech industry.

Anadyr's early clients aren't confined to state agencies; the company is also selling its software to corporate risk managers who want to understand how social unrest might affect their investments and assets in different countries. Anadyr has also raised funds from Commonweal Ventures, an early investor in the defense contractor Palantir, and AIN Ventures, a veteran-led firm that invests in technologies useful to both the military and the private sector. Bell says they've already closed a seven-figure pre-seed round, though he didn't disclose the exact figure.

That a company dedicated to preventing war had chosen a defense expo to unveil its product wasn't lost on Bell. But the lines between peace and war technology are blurrier than they may seem. The defense contractor Rhombus Power, a sponsor of the expo, has its own AI conflict-prediction software that it says accurately predicted Russia's invasion of Ukraine. "We look at peace tech as the flip side of the same coin," Abrams says. According to Abrams, the size of the defense industry shows that there is a market for technology seeking to prevent war. "The difference," he says, between peace tech and war tech is "a different approach to the same problem." Even the audience at Bell's demo had its fair share of defense tech funders.
When one of the venture capitalists in the crowd asks whether he's considered the technology's military applications, he tells them that's a line too far for Anadyr Horizon, at least for now. "For now we're definitely focused on the strategic level," he says. "Because we're trying to stop war." A savvy salesman, he adds: "We're still early enough to see where the market will pull us."

Over lunch, I ask the founders if they believe something is lost in automating the war games Bell conducted at Harvard. "What you're losing," Bell concedes, "is the extremely personal and emotional experience of an American admiral who is put into the shoes of his Chinese counterpart, and for the first time is looking at American warships coming to his coast." But you can only run such a realistic simulation with real people a few times a year. "The capabilities of AI are exponential," he says. "The impact is on a much greater scale."

There are other challenges with using artificial intelligence for something as high-stakes as preventing the next world war. Researchers have long warned that AI models may hold biases hidden in the data on which they were trained. "People say history is written by the victor," says Timnit Gebru, an AI researcher who fled Eritrea in 1998, during the country's war with Ethiopia. An AI system trained on open-source information from the internet, she says, will inherently represent the biases of the most online groups — which tend to be Western or European. "The notion that you're going to use a lot of data on the internet and therefore represent some sort of unbiased truth is already fraught," Gebru adds.

The founders are unwilling to reveal the actual data their digital world leaders are trained on, but they do offer that Anadyr Horizon uses "proprietary datasets, open-source intelligence, and coded behavioral inputs" — and that they go to great lengths to use books and data from outside the English-speaking world to account for differing worldviews. The leaders they emulate, Bell says, draw on as many as 150 data points.

Biases in these new AI systems are especially hard to interrogate, not only because of this lack of transparency around the data but also because of how the chatbots interpret that information. In the case of generative AI, "intelligence" is a misnomer — "they're essentially spitting out a likely sequence of words," Gebru says. This is why bots are so prone to confidently expressing falsehoods: they don't actually know what they're saying. It's also hard to trace why they make certain decisions. "Neural networks aren't explainable," Gebru says. "It's not like a regression model where you can go back and see how and why it made predictions in certain ways."

One study on the use of large language models for diplomatic decision-making, conducted last year by researchers at Stanford, found that AI models tended to be warmongers. "It appears LLM-based agents tended to equate increased military spending and deterrent behavior with an increase in power and security," the researchers wrote. "In some cases, this tendency even led to decisions to execute a full nuclear attack in order to de-escalate conflicts." A trigger-happy AI system could have severe consequences in the real world.
For example, if hedge funds or corporations act collectively on a prediction from a tool like North Star that a country in which they have heavily invested is on the brink of collapse, and they preemptively sell off their assets, they may bring about the very situation the system predicted — the mass exodus of capital causing currency depreciation, unemployment, and liquidity crises. "The claim that a place is unstable will make it unstable," Gebru explains.

For now, that problem seems academic to Bell. "It is a bit like a philosophical or ethical problem. Like the butterfly effect," he says. "I think we'll have to wrestle with it at some point." He insists they aren't the typical "move fast, break things" tech company. They're deliberate about consulting with subject-area experts as they model different countries and world leaders, he says, and have released the product to only a select list of firms. "I want to simulate what breaks the world. I don't want to break the world."

Jon Danilowicz, a former senior diplomat who served in South Sudan, Pakistan, and Bangladesh, notes the inherent unpredictability of war, with contingencies and factors that can't always be accounted for. "If you look at what's going to happen with Israel taking action against Iran's nuclear program, you can do all kinds of scenarios on how that's going to play out. I'm sure there's somebody who's going to make a prediction, which after the fact they'll be able to say they were right. But what about the multiples that got it totally wrong?"

"There's never certainty in these kinds of decision-making situations," says Bell. "In some senses, we can't predict the future. But we can assign probabilities. Then it's up to the user to decide what to do."

In the meantime, the company has more pressing problems. Like many startups building on top of generative AI models, North Star is hugely expensive to run. "I don't want to give you a figure, but if you knew, you'd drop your fork. It's extremely expensive," he says. On top of that, contracting with the government and obtaining the necessary clearances brings its own bureaucratic red tape and expense. Despite all this, there seems to be no shortage of interest in the technology. As I leave Bell and Dalnoki-Veress, they rush off to another meeting: Eric Schmidt's office wanted a private demonstration.

Bloomberg Industry Group Partners with STATION DC to Drive Innovation in Policy, Business, and Technology

Business Wire, 10-06-2025

WASHINGTON--(BUSINESS WIRE)--Bloomberg Industry Group and STATION DC, the distinguished non-profit tech hub for technologists, investors, military leaders, and policymakers, today announced a strategic partnership focused on reinforcing Washington, D.C.'s role as a hub for innovation and advancing initiatives that enhance national competitiveness.

The partnership combines Bloomberg Industry Group's decades of expertise in leveraging technology to support legal, regulatory, and business intelligence with STATION DC's ability to convene prominent technologists, investors, and policymakers. Together, they aim to create opportunities for business leaders, government officials, and innovators to collaborate on critical issues such as artificial intelligence, energy, advanced manufacturing, and the evolving regulatory landscape.

'At Bloomberg Industry Group, our mission is to empower professionals managing legal, tax, and government affairs with the tools and insights they need to navigate complex environments,' said Josh Eastright, CEO of Bloomberg Industry Group. 'From our founding in 1929 as The Bureau of National Affairs (BNA) in Washington, DC, Bloomberg Industry Group has been committed to supporting the region and fostering its vibrant innovation economy. Our collaboration with STATION DC aligns with our commitment to creating an environment where technology, policy, and business intersect. With hundreds of our technologists based in the DMV, we're excited to contribute to the growth of this vibrant community.'

'Washington is where consequential decisions get made—and the innovation economy is no longer on the sidelines of those conversations,' said James Barlia, Executive Director of STATION DC. 'Partnering with Bloomberg Industry Group helps us foster the conversations and community that will drive what's next in innovation.'

The partnership will include a series of co-hosted events, summits, and thought-leadership initiatives throughout the year; it kicked off with NETWRX on June 3, a high-impact talent and hiring event during DC's AI+ Expo. These events aim to create a shared platform where influential founders, technologists, policymakers, and investors can collaborate.

About Bloomberg Industry Group
Bloomberg Industry Group empowers professionals in government, law, tax, and accounting with industry knowledge and AI-enabled technology, enabling them to take decisive action and make the most of every opportunity. Bloomberg Industry Group is an affiliate of Bloomberg L.P.

About STATION DC
STATION DC is a non-profit tech hub and convening space accelerating American innovation at the intersection of technology, policy, and capital. Located in Union Market, STATION DC hosts salons, summits, and working sessions that strengthen the nation's competitive edge.

The AI lobby plants its flag in Washington

Politico, 06-06-2025

Top artificial intelligence companies are rapidly expanding their lobbying footprint in Washington — and so far, Washington is turning out to be a very soft target.

Two privately held AI companies, OpenAI and Anthropic — which once positioned themselves as cautious, research-driven counterweights to aggressive Big Tech firms — are now adding Washington staff, ramping up their lobbying spending and chasing contracts from the estimated $75 billion federal IT budget, a significant portion of which now focuses on AI.

They have company. Scale AI, a specialist contractor with the Pentagon and other agencies, is also planning to expand its government relations and lobbying teams, a spokesperson told POLITICO. In late March, the AI-focused chipmaking giant Nvidia registered its first in-house lobbyists.

AI lobbyists are 'very visible' and 'very present on the Hill,' said Rep. Don Beyer (D-Va.) in an interview at the Special Competitive Studies Project AI+ Expo this week. 'They're nurturing relationships with lots of senators and a handful of members [of the House] in Congress. It's really important for their ambitions, their expectations of the future of AI, to have Congress involved, even if it's only to stop us from doing anything.'

This lobbying push aims to capitalize on a wave of support from both the Trump administration and the Republican Congress, both of which have pumped up the AI industry as a linchpin of American competitiveness and a means of shrinking the federal workforce. They don't all present a unified front — Anthropic, in particular, has found itself at odds with conservatives, and on Thursday its CEO Dario Amodei broke with other companies by urging Congress to pass a national transparency standard for AI companies — but so far the AI lobby is broadly getting what it wants.

'The overarching ask is for no regulation or for light-touch regulation, and so far, they've gotten that,' said Doug Calidas, senior vice president of government affairs for the AI policy nonprofit Americans for Responsible Innovation. In a sign of lawmakers' deference to industry, the House passed a 10-year freeze on enforcing state and local AI regulation as part of its megabill, which is currently working its way through the Senate.

Critics, however, worry that the AI conversation in Washington has become an overly tight loop between companies and their GOP supporters — muting important concerns about the growth of a powerful but hard-to-control technology. 'There's been a huge pivot for [AI companies] as the money has gotten closer,' Gary Marcus, an AI and cognitive science expert, said of the leading AI firms. 'The Trump administration is too chummy with the big tech companies, and basically ignoring what the American people want, which is protection from the many risks of AI.'

Anthropic declined to comment for this story, referring POLITICO to its March submission to the AI Action Plan that the White House is crafting after President Donald Trump repealed a sprawling AI executive order issued by the Biden administration. OpenAI, too, declined to comment.

This week several AI firms, including OpenAI, co-sponsored the Special Competitive Studies Project's AI+ Expo, an annual Washington trade show that has quickly emerged as a kind of bazaar for companies trying to sell services to the government. (Disclosure: POLITICO was a media partner of the conference.)
They're jostling for influence against more established government contractors like Palantir, which has been steadily building up its lobbying presence in D.C. for years, while Meta, Google, Amazon and Microsoft — major tech platforms with AI as part of their pitch — already have dozens of lobbyists in their employ.

What the AI lobby wants is a classic Washington twofer: fewer regulations to limit its growth, and more government contracts. The government budget for AI has been growing. Federal agencies across the board — from the Department of Defense and the Department of Energy to the IRS and the Department of Veterans Affairs — are looking to build AI capacity. The Trump administration's staff cuts and automation push are expected to accelerate demand for private firms to fill the gap with AI.

For AI, 'growth' also demands energy, and on the policy front, AI companies have been a key driver of the recent push in Congress and the White House to open up new energy sources, streamline permitting for new data centers and funnel private investment into the construction of these sites. Late last year, OpenAI released an infrastructure blueprint for the U.S. urging the federal government to prepare for a massive spike in demand for computational infrastructure and energy supply. Among its recommendations: creating special AI zones to fast-track permits for energy and data centers, expanding the national power grid and boosting government support for private investment in major energy projects.

Those recommendations are now being closely echoed by Trump administration figures. Last month, at the Bitcoin 2025 Conference in Las Vegas, David Sacks — Trump's AI and crypto czar — laid out a sweeping vision that mirrored the AI industry's lobbying goals. Speaking to a crowd of 35,000, Sacks stressed the foundational role of energy for both AI and cryptocurrency, saying bluntly: 'You need power.' He applauded President Donald Trump's push to expand domestic oil and gas production, framing it as essential to keeping the U.S. ahead in the global AI and crypto race.

This is a huge turnaround from a year ago, when AI companies faced a very different landscape in Washington. The Biden administration, and many congressional Democrats, wanted to regulate the industry to guard against bias, job loss and existential risk. No longer. Since Trump's election, AI has become central to the conversation about global competition with China, with Silicon Valley venture capitalists like Sacks and Marc Andreessen now in positions of influence within the Trump orbit. Trump's director of the Office of Science and Technology Policy is Michael Kratsios, former managing director at Scale AI. Trump himself has proudly announced a series of massive Gulf investment deals in AI. Sacks, in his Las Vegas speech, pointed to those recent deal announcements as evidence of what he called a 'total comprehensive shift' in Washington's approach to emerging technologies.

But as the U.S. throws its weight behind AI as a strategic asset, critics warn that the enthusiasm is muffling one of the most important conversations about AI: its ability to wreak unforeseen harm on the populace, from unfairness to existential risk. Among those concerns: bias embedded in algorithmic decisions that affect housing, policing, and hiring; surveillance that could threaten civil liberties; the erosion of copyright protections, as AI models hoover up data; and the loss of labor protections, as automation replaces human work.
Kevin De Liban, founder of TechTonic Justice, a nonprofit that focuses on the impact of AI on low-income communities, worries that Washington has abandoned its concern for AI's impact on citizens. 'Big Tech gets fat government contracts, a testing ground for their technologies, and a liability-free regulatory environment,' he said of Washington's current AI policy environment. 'Everyday people are left behind to deal with the fallout.'

There's a much larger question, too, which dominated the early AI debate: whether cutting-edge AI systems can be controlled at all. These risks, long documented by researchers, are now taking a back seat in Washington as the conversation turns to economic advantage and global competition.

There's also the very real concern that if an AI company does bring up the technology's worst-case scenarios, it may find itself at odds with the White House itself. Anthropic CEO Amodei said in a May interview that labor force disruptions due to AI would be severe — which triggered a direct attack from Sacks, Trump's AI czar, who said on his podcast that this line of thinking led to 'woke AI.'

Still, both Anthropic and OpenAI are going full steam ahead. Anthropic hired nearly a dozen policy staffers in the last two months, while OpenAI similarly grew its policy office over the past year. They're also pushing to become more important federal contractors by obtaining critical FedRAMP authorizations — a federal program that certifies cloud services for use across government — which could unlock billions of dollars in contracts.

As tech companies grow increasingly cozy with the government, the political will to regulate them is fading — and in fact, Congress appears hostile to any efforts to regulate them at all. In a public comment in March, OpenAI specifically asked the Trump administration for a voluntary federal framework that overrides state AI laws, seeking 'private sector relief' from a patchwork of state AI bills. Two months later, the House added language to its reconciliation bill that would have done exactly that — and more. The provision to impose a 10-year moratorium on state AI regulations passed the House but is expected to be knocked out by the Senate parliamentarian. (Breaking ranks again, Anthropic is lobbying against the moratorium.) Still, the provision has widespread support among Republicans and is likely to make a comeback.

Former Google CEO Eric Schmidt's AI Expo serves up visions of war, robotics, and LLMs for throngs of tech execs, defense officials, and fresh recruits

Yahoo, 04-06-2025

Drones buzz overhead, piercing the human hum in the crowded Walter E. Washington Convention Center. On the ground, tech executives, uniformed Army officers, policy wonks, and politicians compete for attention as swarms of people move throughout the vast space. There are pitches about the 'next generation of warfighters' and panels about winning the 'AI innovation race.' There are job seekers and dignitaries. And at the center of it all, there is Eric Schmidt.

The former Google CEO is the cofounder of the nonprofit that organizes the confab, known as the AI+ Expo for National Competitiveness. The Washington, DC, event, now in its second year, is a fascinating, real-world manifestation of the Schmidt worldview, which sees artificial intelligence, business, geopolitics, and national defense as interconnected forces reshaping America's global strategy.

If it was once the province of futurists and think tank researchers, this worldview is now increasingly the consensus view. From innovations like self-driving robotaxis circulating in multiple U.S. cities to Silicon Valley defense startups snagging large government contracts, evidence of change — and of the stakes involved — is everywhere.

The AI+ Expo, hosted by Schmidt's Special Competitive Studies Project (SCSP), is ground zero for the stakeholders of this new world order: thousands of Washington insiders, military brass, tech executives, policymakers, students, and curious professionals drawn together under one stars-and-stripes-style banner — to ensure America leads the AI age. It's a kind of AI county fair, albeit one where Lockheed Martin stages demos, defense tech companies hand out swag, Condoleezza Rice takes the stage, and protesters outside chant 'No tech for genocide' during Schmidt's keynote. The event is free to attend, and lines to enter stretch around the block. The only thing missing? GPU corn dogs, perhaps.

Soft lobbying is omnipresent at the event. Tesla, for instance, offers self-driving tech demos just as the company prepares to launch its first robotaxi service in Austin and as Elon Musk pushes lawmakers on autonomous vehicle regulations. OpenAI is also working the room, touting its o3 reasoning model, newly deployed on a secure government supercomputer at Los Alamos. 'The transfer of model weights occurred via highly secured and heavily encrypted hard drives that were hand carried by OpenAI personnel to the Lab,' a company spokesperson told Fortune. 'For us, today's milestone, and this partnership more broadly, represents more than a technical achievement. It signals our shared commitment to American scientific leadership.'

It's not just about the federal government, either – even state leaders are angling for attention. Mississippi Governor Tate Reeves, who is publicizing his state's AI data center investments and gave a keynote at the Expo, told Fortune, 'The leaders in this space are here, and I want to be talking to the leaders that are going to make decisions about where they're making capital investments in the future.'

The Expo is hosted by the Special Competitive Studies Project (SCSP), a nonprofit Schmidt cofounded in 2021 and for which he remains the major funder and chair. SCSP operates as a subsidiary of The Eric & Wendy Schmidt Fund for Strategic Innovation, the Schmidt family's private foundation, and is an outgrowth of the now-defunct National Security Commission on Artificial Intelligence — the temporary federal advisory body Schmidt also led from 2018 to 2021.
The Expo is about building a community that brings the private sector, academia, and government into one place, said Ylli Bajraktari, president and CEO of SCSP and Schmidt's cofounder, who previously served as chief of staff to National Security Advisor H.R. McMaster and joined the Department of Defense in 2010. 'Washington is not a tech city, but yet, this is a city where [tech and AI] policies are being developed,' he said.

Still, all of this really seems to be about Schmidt's vision for the future of AI, which he shared in depth in a highly publicized TED talk last month. In it, he argued that humans should welcome the advancement of AI because our society will not have enough humans to be productive in the future – while delivering a dire warning about how the race for AI dominance could go wrong as the technology becomes a geopolitically destabilizing force.

He repeated his hypothetical doomsday scenario in his Expo keynote. Schmidt posits an AI competitor that is advancing quickly and is only about six months behind the U.S. in developing cutting-edge AI. In other international competitions, this could mean a relatively stable balance of power. But the fear is that once a certain level of AI capability is reached, a steep acceleration curve means the other side will never catch up. According to Schmidt, the trailing side would have to consider bombing its opponent's data centers to stop it from becoming permanently dominant. It's a scenario – with a proposed doctrine called Mutual AI Malfunction that would slow down each side and control progress – that Schmidt introduced alongside co-authors Henry Kissinger and Daniel Huttenlocher in their 2021 book The Age of AI and Our Human Future.

Schmidt also shared a thought exercise about what he would do if he could build the U.S. military completely from scratch – no Pentagon, no bureaucracy, no obsolete technology. The result, he said, would basically resemble a tech company: agile, software-driven, and centered on networked, AI-powered systems. 'I would have two layers of drones,' he said. 'I'd have a set of ISR drones [unmanned aerial vehicles used for intelligence, surveillance, and reconnaissance missions]. Those are ones that are high and they have deep looking cameras, and they watch everything. They're connected by an AI network, which is presumably designed to be un-jammable. And then I would have essentially bomber drones of one kind or another. The ISR drones would observe, and they would immediately dispatch the bomber.'

With that kind of defensive system, he added, 'it would be essentially impossible to invade a country by land.' He said he wanted US assets to be protected by defensive 'drone swarms,' adding that 'there's an entire set of companies in America that want to build this…many of them are here at our show – I want a small amount of the government's money to go into that industry.'

Schmidt's Expo is open to all, and there were those in attendance who would beg to differ with his gung-ho takes on the future of battle. The International Committee of the Red Cross, for example, staffed a booth with thought-provoking, graffiti-style questions like 'Does AI make wars better or worse?' Even student attendees well versed in wargaming, like Luke Miller, a rising sophomore studying international relations at the College of William and Mary and a member of a wargames club, said that today's era of AI and national security is 'supposed to be a sobering moment.'
For a country like Ukraine to deploy drones to attack Russian air bases — as it did with great effect just days before the conference — 'is something we should definitely be concerned about going forward,' he said.

Still, it was Schmidt's vision of the future of warfare and national security that was front and center at an event 'designed to strengthen U.S. and allied competitiveness in critical technologies.'

'Have you all had a chance to go hang out at the drone cage?' he asked the audience, pointing out the young age of many of the competitors. '[They are] beating the much bigger adults,' he said. 'This is the future. They're inventing it with or without us. That's where we should go.'
