US special ops forces want in on AI to cut 'cognitive load' and make operator jobs easier


From warfighting to paperwork, US Special Operations Forces are interested in getting in on AI to simplify the work.
The goal for these elite forces, much like for office workers using AI to sort data or compile information, is to lessen the overall cognitive load, or mental effort, required for a given task. Many different types of artificial intelligence are already in use, and adoption is only growing.
AI has many potential applications for the US military, from autonomous features in uncrewed systems to AI-enabled targeting to enhanced situational awareness. The Department of Defense is eager to implement this technology to prepare US forces for a high-end technological conflict chock full of data and information.
Future wars could be fought in an environment where decision-making may need to happen faster than humans alone can manage, and that's where military officials see the benefit of AI and human-machine teaming.
With AI, "we can reduce the cognitive burden of our operators," Col. Rhea Pritchett, the program executive officer of SOF Digital Applications, said at SOF Week in Tampa, Florida, earlier this month. Instead of worrying about other things, operators "will take that precious time to critically think about actions that they need to take next to achieve the effect that they want."
AI can sift through massive amounts of data quickly to focus on necessary information in a combat scenario, and it can aid in mission planning and command-and-control functions.
This technology can also be used in battlespace awareness tools "to identify the position or location information of objects, people, and terrain — enhancing operator analysis and decision-making capabilities," Pritchett added over email.
These kinds of capabilities are already being developed.
But there are other functions of AI in SOF, and they aren't unlike the way civilians use ChatGPT or other AI-driven platforms for their jobs and personal lives.
That includes paperwork: situational reports, concepts for operations, and supply forecasts. These are tasks that might take an operator a long time to complete and draw their focus away from other aspects of the job.
Back-end work, as Ben Van Roo, CEO and cofounder of Legion Intelligence Inc, put it, could also be aided by artificial intelligence. Such work could include better search functions for analyzing DoD doctrine and understanding elements of specific locations, commands, or job positions.
One prime example could be using AI tools when entering a new position to quickly get up to speed on the work. When military personnel receive orders for their next job, it can be a lot of work to learn not only the ins and outs of the position itself but also the larger bureaucracy, the geographic, historical, and political context, what their predecessor did, the types of weapons and capabilities present, and so on.
That is a bit different than how AI in the military is regularly perceived. "People tend to jump to Terminator," Van Roo said. "Actually, the great majority of it right now is just, people can barely even do their jobs with all these archaic systems."
While there are many possibilities for AI in warfighting systems, such as the AI-enabled drones demonstrating just how effective the technology can make an uncrewed fighting platform, or the AI algorithms being taught to fly fighter jets, there is also much it can do to improve the mundane.
AI has the potential to address some of the headaches and help reform some older technological policies, effectively streamlining the processes. It might even have an application in assessing details for contracts and programs.
"The potential to relieve the cognitive load is extremely high," Van Roo said.
AI could assist with what some operators consider the more time-consuming tasks of the job, taking a form similar to an AI assistant designed to take notes, gather and review key client data, transcribe meetings, and outline important takeaways.
AI systems are already being used in SOF, Pritchett told BI, including generative machine learning, large language models, natural language processing, and computer vision.
The rise of AI in militaries has been met with skepticism and ethical concerns from experts and officials about its implementation, especially in combat scenarios.
The Pentagon has maintained that its policy on AI will keep a human in the loop for decision-making, though some observers have argued that doing so might not always be possible in a high-speed, data-driven future fight. Some have also cautioned that the technology may end up developing at a much quicker pace than Washington and the Pentagon can regulate it.

Related Articles

Inside the battle over Microsoft's access to OpenAI's technology

Business Insider

29 minutes ago

The future of the most important partnership in technology depends on Microsoft's access to the artificial intelligence powering ChatGPT.

When OpenAI first demoed a breakthrough feature in May 2024, allowing users to talk to its AI just like a person, it looked as if the company did so in lockstep with its partner and investor, Microsoft. Soon after OpenAI demoed this voice capability for its new GPT-4o model, Microsoft CEO Satya Nadella included it in a keynote speech at the company's Build developer conference. Behind the scenes, Microsoft had little knowledge of the feature until days before the demo, people involved in the matter told Business Insider.

While the agreement between the two companies gives Microsoft access to OpenAI's technology, exactly what OpenAI has to share — and when — is sometimes a gray area. In this case, Microsoft had access to frequent updates of the core model at the time, but not the voice technology OpenAI built on top of it. Microsoft found out about the demo and pressured OpenAI executives, including then-technology boss Mira Murati, to get access to the code so Microsoft could make its own announcement, the people said. The company did not want to appear flat-footed to investors, to whom it has to justify its $13 billion investment in OpenAI, they said.

The example illustrates the ongoing complexity of Microsoft's relationship with OpenAI, and why access to the AI startup's technology is a core issue as the companies renegotiate their agreement. OpenAI needs Microsoft's blessing for a corporate restructuring. To get that, OpenAI may need to convince Microsoft to change or give up some pretty sweet terms: Microsoft has access to much of OpenAI's technology, exclusive rights to sell it on Azure, first right of refusal to provide computing resources, and a revenue-sharing agreement worth billions of dollars.
Lots of thorny details about points of contention in the negotiations have made recent headlines — a looming, existential clause OpenAI could activate to cut off Microsoft's access, a "nuclear option" reportedly considered by OpenAI executives to accuse Microsoft of anticompetitive behavior, and a report that Microsoft was prepared to walk away from the renegotiations. In response to those reports, Microsoft and OpenAI released a joint statement saying, "Talks are ongoing and we are optimistic we will continue to build together for years to come."

People close to Microsoft's side of the negotiations tell BI the software giant is unlikely to walk away because it is deeply reliant on OpenAI's intellectual property, and the negotiations are an opportunity to improve and expand its access to this technology.

It's all about IP

Microsoft has significantly benefited from its arrangement to access the rights to OpenAI's intellectual property, both by selling it to customers through the Azure OpenAI service and by creating its own products using OpenAI's technology, like its AI assistant Copilot. There are limits, however, to what the companies consider "IP." For example, Microsoft gets access to important aspects of OpenAI's models, like the model weights that help determine AI outputs and the inference code that instructs the models on how to use data, the people said. Some things are excluded from what the companies consider IP, like product and user interface information, they said, and the point at which OpenAI must share any technology is up to interpretation.

One person with knowledge of OpenAI's operations said the company doesn't have to share what it's developing until it's finished, which can be subjective. "You can make sure you share something with Microsoft as late as possible, so they can still simultaneously announce, but make it really difficult to build the same product on top of it," the person said.
Plus, having access to the IP isn't the same as knowing how to use it. This has been harder than expected, several people told BI. OpenAI has grown frustrated with Microsoft's requests to spell out the technology to its employees, people with knowledge of both companies said. Sometimes, Microsoft doesn't know what questions to ask of OpenAI.

Microsoft formed a new AI organization two years ago and hired Mustafa Suleyman, the former Inflection CEO who cofounded the AI pioneer DeepMind, to run it. The hiring was meant as a hedge against OpenAI, after Nadella received pressure from Microsoft's board to diversify following instability at its partner startup and CEO Sam Altman's brief ouster. However, little has come of that in terms of replacing Microsoft's need for its partnership with OpenAI, the people said. Suleyman has completely rebuilt Microsoft's Copilot app, but that effort has yet to achieve much growth for Copilot. His team is focused on building smaller models and has seen success with post-training existing models for new purposes, one of the people said. Overall, Microsoft isn't trying to build frontier models like those that would compete with GPT, and is instead putting resources toward OpenAI, they said.

Microsoft is less worried about AGI, antitrust, Windsurf, or SoftBank

Other points of contention that have made recent headlines are of less concern to Microsoft, the people with knowledge of Microsoft's position said. Included in the agreement is a clause that would allow OpenAI to declare what's called artificial general intelligence, or "AGI," and cut off Microsoft's access to OpenAI's IP and profits. OpenAI defines AGI as "a highly autonomous system that outperforms humans at most economically valuable work." While OpenAI could declare AGI, the concept is so open to interpretation that Microsoft could sue and easily tie the company up in a legal battle for years, the people said.
There's another version of the clause, "sufficient AGI," that OpenAI could declare when it builds a technology capable of achieving a certain level of profits, but Microsoft has to sign off on that.

Another apparent sticking point in the negotiations has centered on OpenAI's desire to acquire the AI coding assistant startup Windsurf. Under the current agreement, that acquisition would give Microsoft access to Windsurf's technology. Microsoft has said it would agree to the acquisition, but Windsurf's CEO doesn't want the company's technology to be shared with Microsoft, one of the people said. While GitHub Copilot faces increasing competition from other AI coding assistants, access to Windsurf's technology is not a big desire for Microsoft, and the company might consider a carve-out of Windsurf IP in a new deal, the person said.

OpenAI executives have reportedly considered accusing Microsoft of anticompetitive behavior. Microsoft is largely unconcerned about this, two of the people said. The existing deal has already been investigated by antitrust regulators, including the European Union and the United Kingdom's Competition and Markets Authority.

OpenAI's desire to restructure is in part motivated by a deadline from investor SoftBank that puts a percentage of funding at risk if OpenAI can't close such a deal by the end of the year. OpenAI has said it has struggled to fundraise due to its peculiar structure. SoftBank has a reputation for taking risks with its funding, and its CEO is keen on OpenAI, so some observers doubt SoftBank will follow through on its deadline to revoke funding if a new deal isn't reached.

Is Palantir Technologies a Buy?

Yahoo

an hour ago

Palantir stock has advanced 73% year to date after a 341% rally last year. Owning Palantir shares comes with risk, as the stock is very expensive. However, investors should focus on the company's long-term potential, not its short-term volatility.

Palantir Technologies (NASDAQ: PLTR) stock cannot be stopped. At least, that's the way things look at the halfway mark of 2025. As of this writing, Palantir shares have advanced 73% year to date. That's on top of a 341% rally last year. The only question is: Can this incredible run continue? To get to the bottom of that, let's examine why Palantir has been one of the top beneficiaries of the artificial intelligence (AI) revolution.

Let's begin with the bull case for Palantir. First: Palantir continues to land key government contracts. The company has long-standing partnerships with American and European government agencies, both military and civilian. In the U.S., Palantir counts among its partners the Department of Defense, the U.S. Intelligence Community, the Department of Health and Human Services (HHS), Immigration and Customs Enforcement (ICE), and the Federal Aviation Administration (FAA). Moreover, in some cases, Palantir is tasked with mission-critical objectives that can involve highly sensitive or classified information. As geopolitical tensions increase and the AI arms race between the U.S. and China intensifies, Palantir could benefit as American lawmakers pour further government resources into AI.

Turning to fundamentals, Palantir's strong revenue growth of 39% and a significant backlog of $1.9 billion in remaining performance obligations (RPO) point to the company's continued strength. In addition, while exact estimates vary, Palantir's total addressable market (TAM) could grow to over $1 trillion within the next decade, which would provide even more opportunities for Palantir in the governmental and private sectors.
So what about the downside risks? First off, there's Palantir's current valuation. With a sky-high price-to-sales (P/S) ratio of 113, there's little room for error. If the company misses estimates for sales, earnings, or client growth, its stock price could plummet.

Second, Palantir's bevy of government contracts isn't just a benefit; it's also a risk. If government spending were to contract, or if Palantir were to fall out of favor, its high concentration of government work could drag down its overall growth rate. Similarly, some lawmakers have raised privacy concerns related to Palantir's federal government work. Palantir issued a strong rebuttal, denying reports that it was building a "master database" and noting its 20-year history of working with administrations of both political parties. Nevertheless, Palantir's close ties to government agencies could invite future blowback.

Next, there is dilution risk. In short, Palantir pays out a lot of stock-based compensation: over $720 million worth over the last 12 months. That dilutes existing shareholders. In fact, over the last 12 months, Palantir's shares outstanding have increased by 5%. So far, the dilution hasn't slowed down Palantir's skyrocketing stock price, but it could at some point in the future.

Finally, there's the ever-present risk of a downturn in the economy or the stock market. In particular, if sentiment were to turn bearish, Palantir's stock could fall rapidly, as its valuation is very high.

While the risks to owning Palantir stock are real, the bull case is more persuasive to me. Therefore, the real question investors should ask themselves when it comes to Palantir stock is whether they have the conviction to hold it for the long term. Shares are expensive, and they are likely to experience drastic sell-offs. However, Palantir's enormous TAM points to its massive potential.
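The valuation and dilution figures above can be sketched in a few lines. This is a minimal illustration using the article's cited numbers (a P/S ratio computed as market cap over trailing revenue, and a 5% rise in shares outstanding), not live market data or a valuation model:

```python
# Illustrative sketch using the article's figures, not live market data.

def price_to_sales(market_cap: float, ttm_revenue: float) -> float:
    """P/S ratio: market capitalization divided by trailing 12-month revenue."""
    return market_cap / ttm_revenue

def ownership_after_dilution(stake: float, share_growth: float) -> float:
    """When shares outstanding grow, a fixed holding represents a smaller
    fraction of the company: stake / (1 + growth in share count)."""
    return stake / (1 + share_growth)

# The article cites a ~5% increase in shares outstanding over 12 months.
remaining = ownership_after_dilution(1.0, 0.05)
print(f"{remaining:.4f}")  # ~0.9524 of the original ownership fraction
```

In other words, a shareholder who held through the past year now owns roughly 95% of the company slice they started with, which is why sustained stock-based compensation matters even when the share price is rising.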
Given the stock's risks and the likelihood of a pullback, long-term investors would be wise to buy shares on dips, or perhaps dollar-cost average into a position. In other words, Palantir shares are to be owned, not traded.

Jake Lerch has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Palantir Technologies. The Motley Fool has a disclosure policy. Is Palantir Technologies a Buy? was originally published by The Motley Fool.

Senators Came to Their Senses on AI Regulation Ban

Bloomberg

an hour ago

Some sense has prevailed in the Senate — a 99-1 vote against a provision in its huge tax and spending bill that would have banned state-level artificial-intelligence laws for the next 10 years. It's been just 944 dizzying days since ChatGPT was launched into the world — imagine what might have happened over the next 3,653. A last-gasp effort to amend the bill, which included reducing the 10 years to five, also failed. The amended wording would have been more onerous than the original, decimating existing state laws on facial recognition and data privacy. New laws will need to tackle AI-triggered issues in discrimination, recruitment, and mental health. The matter is simply too urgent to be left only in Washington's hands. Senators rightly saw through the moratorium, which would have done the bidding of big tech companies that want free rein to do as they please in the insatiable race to build and sell AI.
