Tech execs join the Army. Critics say it's an ethical minefield
WASHINGTON − When the Army announced it would commission four executives from some of Silicon Valley's top tech giants as lieutenant colonels in the reserves, critics said they could use their insider positions to win lucrative military contracts for their employers.
Now, the Army and one of the executives say, tech isn't even part of the assignment.
"What I'll be working on has actually not that much to do directly with technology or AI," Shyam Sankar, chief technology officer at data analysis giant Palantir, told USA TODAY. Sankar said he will focus on recruitment and "talent." Palantir has hundreds of millions of dollars in Pentagon tech contracts.
"I have to work on things where I don't have a conflict, as determined by lawyers," Sankar said on July 7. "This was just a safe space for me.'
The other three tech execs will work on subjects including "autonomy," "human performance" and the "organizational organic industrial base," according to Maj. Matthew Visser, an Army spokesperson.
'Getting them on the inside'
The Army says the four executives – Sankar of Palantir, Andrew Bosworth of Meta, Kevin Weil of OpenAI and Bob McGrew of AI startup Thinking Machines Lab – will be well positioned as officers in the Army Reserves to help address large-scale issues.
Servicemembers on reserve duty join the military part-time – most hold other jobs and serve one weekend per month and two full weeks per year.
But the Army has implied the four were brought in last month specifically to lend tech expertise.
"They've got this sixth sense," Steve Warren, an Army spokesperson, said of the four newly minted lieutenant colonels. 'These guys will help us think about how we're using things like AI and bleeding edge technologies in a different way.'
Warren said they will provide "advice" and "insights" as the Army undergoes a top-to-bottom overhaul called the "Army transformation initiative."
Kickstarted by Defense Secretary Pete Hegseth in May, the initiative will see the Army cut back on "outdated equipment," like some ground vehicles, and prioritize high-tech gadgets like drones and AI – the four executives' area of expertise. A memo from Hegseth directs the Army to "enable AI-driven command and control" throughout its headquarters by 2027 and field drones in every division by 2026.
Critics say bringing in the tech executives is an ethical minefield.
Combined, the executives' companies hold more than a billion dollars in military contracts. Palantir, which has drawn scrutiny over reports that it's compiling Americans' personal data and surveilling possible targets of immigration enforcement, was awarded a $795 million contract by the Army in May. The company's Pentagon contracts are primarily to design AI systems that crunch large amounts of data to come up with potential strike targets.
Meta announced the same month it had been tapped to build virtual reality headsets for Army soldiers, and OpenAI won a $200 million contract to develop artificial intelligence for the Army in June. Only Thinking Machines Lab has no Army contracts; McGrew previously worked at both OpenAI and Palantir, according to his LinkedIn profile.
"Clearly, they have blatant conflicts of interest," said Dru Brenner-Beck, a retired lieutenant colonel and Army lawyer who served as deputy general counsel for the Army inspector general.
"I would certainly have questions if I was one of the competitors of these particular organizations,' Brenner-Beck said.
Sankar said he first pitched his desire to join up around a year and a half ago and personally recruited the three others to the effort. He spoke with multiple services but landed on the Army, he said, for its "state of mind." The motive: sheer patriotism and a desire to help the military succeed, he said.
"They're patriots; they see what's happening to the country," Sankar said of his tech brothers-in-arms. Of critics, he said, "It's amazing how cynical we've become on the eve of the 250th anniversary (of the United States).'
Outside experts brought into the military to advise are so common that they have their own title within the Pentagon – "highly qualified experts."
Commissioning them directly into a military role – and at the rank of lieutenant colonel, which normally takes around 17 years to achieve – is not.
"Part of this is getting them on the inside," Warren said of the decision to give the four Army ranks. "We want them invested."
Hegseth and China
The Army has said the four's corporate ties would be no more problematic than those of other reserve officers, some of whom work jobs at defense contractors outside of their military service. Like other reservists, the tech executives were required to fill out forms declaring potential conflicts of interest. Those forms are reviewed by military lawyers, who can order servicemembers to divest from stocks or investments that might touch on their Army service.
The four will arrive at Fort Benning in Georgia by the end of July for their initial training, where they'll be taught "which hand to salute with," and other fundamentals of being an officer, Warren said. They are subject to the same physical fitness standards and will take the tests required of any other reserve officer, according to Visser.
Commissioning businesspeople into the Army is also not without precedent. During World War II, as the U.S. economy shifted into high gear to support the war effort, some industry leaders were commissioned directly into the military – like General Motors President William Knudsen, whom the Army commissioned at the much higher rank of lieutenant general in 1942.
Sankar has argued that China poses a threat comparable to, or greater than, what the U.S. faced during World War II and the Cold War, a view endorsed by Hegseth and some in his inner circle. That belief also hangs in the background of the Army Transformation Initiative, which is aimed at "deterring China," according to Hegseth.
Skeptics say it's the tail wagging the dog.
The shift, as evidenced by the new tech officers, is "not as driven by the needs of the military as it is by the tremendous AI hype that's been produced by those very companies" to which they belong, said Shannon French, the Inamori Professor in Ethics at Case Western Reserve University, who taught military ethics for 11 years at the U.S. Naval Academy.
The growing overlap between weapons manufacturers and companies with vast surveillance capacity has sparked broader public concern, as have the Trump administration's moves to dismantle AI regulations and President Donald Trump's chummy relationships with some of Silicon Valley's wealthiest executives − most notably Elon Musk, who led Trump's efforts to slash the federal government but has since explosively broken with the administration.
