
The U.S. Army's $170,000 Attack Drone Competes With $500 FPVs
New budget documents show that the U.S. Army is making a little headway in its efforts to catch up with Ukraine and Russia in acquiring small attack drones like the ubiquitous FPV. But they also show there is still a very long way to go, and that rather than abundant low-cost systems, the Army will be fielding a few expensive ones for the immediate future.
Low Altitude Stalking And Strike
Two years ago, back in July 2023, the U.S. Army announced a new Low Altitude Stalking and Strike Ordnance (LASSO) program. The new weapon was put on 'an urgent capability acquisition pathway to rapidly deliver this capability to the Infantry Brigade Combat Team (IBCT).'
The project was clearly inspired by Ukraine, where small FPV quadcopters were taking out Russian armor at long range, and would give the Army a similar capability.
'LASSO is a man-portable, tube launched, lethal payload munition, unmanned aerial system. It includes an electro-optical/infrared sensor, precision flight control, and the ability to fly, track and engage non-line-of-sight targets and armored vehicles with precision lethal fires. LASSO currently consists of three modules: the launch tube, unmanned aerial system, and fire control station.'
A U.S. Marine with a tube-launched attack drone (note the gas supply for the launcher)
The big difference here is that LASSO would have an infrared sensor or thermal imager. These are still rare on FPVs because they add $200-$500 to the cost, so daytime FPVs generally lack them. And while in Ukraine FPVs are carried in a backpack and launched from a stand, the U.S. Army wanted a tube-launched version. This would be fired out by compressed air or other gas, then unfold its wings, making for a quicker and easier launch at the price of added cost and complexity.
The LASSO requirement is for a 20-kilometer range and the ability to destroy armored vehicles including tanks, doing the same job as FPVs.
There is a wide variety of FPVs in use in Ukraine, varying in size, payload and extras. Typically, they cost around $500. Ukrainian drone fundraiser Serhii Sternenko – who has supplied a staggering 200,000 FPVs to the military, and was targeted by an assassin as a result – quotes $300 for a small 7-inch FPV and $460 for a 10-inch. On the other side, Russian maker Frobotics offers an entry-level model for $315 and a heavy-lift version (20-pound warhead) for $756.
Ukraine is building vast numbers of FPV attack drones
These are made in vast numbers. Ukraine recently announced it had increased drone production to 200,000 per month, or more than 6,000 per day.
In December 2023 AeroVironment announced that the Army had selected its Switchblade 600 for the LASSO requirement. The Switchblade 600, launched in 2020, is the big brother to the Switchblade 300, with longer range and a bigger warhead. The Switchblade 300 was used extensively in Iraq and Afghanistan against 'high-value targets' from about 2012. Budget documents showed Switchblade 300s cost $52,914 a shot, but there was no information on the exact pricing of the 600 – until now.
Follow the Money
The U.S. Army's procurement budget for missiles for FY2026, released last month, gives a little more detail on LASSO and the rationale for it:
'Infantry Brigade Combat Teams (IBCTs) lack adequate proportional organic capabilities at echelon to apply immediate, point, long range, and direct fire effects to destroy tanks, light armored vehicles, hardened targets, defilade, and personnel targets, while producing minimal collateral damage in complex terrain in all environmental conditions.' LASSO will 'enable the Soldier to make multiple orbits within the IBCT typically assigned battlespace, to acquire and attack targets within and beyond current crew served and small arms fire.'
In other words, doing exactly what FPVs do in Ukraine.
But how many would be acquired and for how much?
The document shows the Army is buying 294 Switchblade 600 LASSO rounds at a cost of $170,000 each.
The Army is also acquiring 54 ground control units; rather than the commercial controllers costing a few hundred dollars seen in the hands of Ukrainian FPV operators, these go for $69,204 each.
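As a back-of-envelope check – a minimal sketch assuming the quoted unit prices are the full per-item costs and nothing else is included in the buy – the whole LASSO purchase comes to roughly $54 million:

```python
# Rough arithmetic from the FY2026 budget figures quoted above.
# Assumes the listed unit prices are the complete per-item costs.
lasso_rounds = 294
lasso_unit_cost = 170_000        # per Switchblade 600 LASSO round
control_units = 54
control_unit_cost = 69_204       # per ground control unit

rounds_cost = lasso_rounds * lasso_unit_cost          # $49,980,000
control_cost = control_units * control_unit_cost      # $3,737,016
lasso_total = rounds_cost + control_cost               # $53,717,016

print(f"LASSO rounds:         ${rounds_cost:,}")
print(f"Ground control units: ${control_cost:,}")
print(f"LASSO program total:  ${lasso_total:,}")       # ~$54 million
```

That is around $54 million for fewer than 300 expendable rounds – a figure worth keeping in mind for the comparison further down.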
This really goes to show what has been observed many times before: that producing high-specification gear in tiny quantities means you pay boutique prices.
That tiny quantity will limit the number of operators trained, and they are not going to be firing a lot of live rounds in training.
The Javelin missile costs more than the Switchblade 600, has a much shorter range, and requires the target to be within sight
It is worth noting that $170k a shot only looks extravagant in the context of the hardware used by Russia and Ukraine. By military standards it is fairly normal. The same procurement budget shows the Army's latest batch of Javelin anti-tank missiles costing $221k apiece – and the reusable control unit needed to fire them is another $208k.
And if that sounds pricey, the Army's new hypersonic LRHW missile will cost a whopping $36 million a time. 'Expensive' is all a matter of what you are used to.
FPVs For the Army
In another budget document, though, we find that the Army is also getting something more like the FPVs used in Ukraine via a very different program.
The Army's Aircraft budget for FY2026 includes money for 'FPV/PBAS' – 'PBAS' being 'Purpose Built Attritable System,' the Army's new buzzword for expendable drones. The PBAS will carry a variety of 'lethal/non-lethal armaments and munitions.' As in Ukraine, these may be fitted along with the battery immediately before launch.
One PBAS system consists of 'First Person Viewer (FPV) goggles, controller, leader display, two 10" air vehicles and four 5" air vehicles' and costs $34,826. Depending on the other items, the drones are likely around $5k each.
Making drones in the U.S. will always be more expensive because manufacturers cannot simply use low-cost Chinese components like the Ukrainians and Russians do, and labor and other costs will inevitably be higher. $5k may be expensive by Ukrainian standards, but it will conform to U.S. military specifications, and the production run is small.
Considering that only 1,057 systems are being ordered, the $5k price tag may be the best that can be expected. It does at least mean that this program is delivering more than 20 times as many attack drones as LASSO for less total cost.
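The 'more than 20 times as many drones for less total cost' claim checks out against the same budget lines. The only assumption in the sketch below is how much of each $34,826 system price goes to the goggles, controller and display rather than the drones themselves:

```python
# Comparing the FPV/PBAS buy with LASSO, using the budget figures quoted above.
pbas_systems = 1_057
pbas_system_cost = 34_826            # goggles, controller, display, six drones
drones_per_system = 2 + 4            # two 10-inch plus four 5-inch air vehicles

pbas_drones = pbas_systems * drones_per_system     # 6,342 drones
pbas_total = pbas_systems * pbas_system_cost       # $36,811,082

lasso_rounds = 294
lasso_total = 294 * 170_000 + 54 * 69_204          # $53,717,016 (as computed earlier)

print(f"PBAS drones: {pbas_drones:,} vs LASSO rounds: {lasso_rounds}")
print(f"Ratio: {pbas_drones / lasso_rounds:.1f}x")  # ~21.6x as many airframes
print(f"PBAS total: ${pbas_total:,} vs LASSO total: ${lasso_total:,}")

# If the goggles, controller and display account for roughly $5,000 of each
# system (an assumption, not a budget figure), each drone comes out near $5k:
est_drone_cost = (pbas_system_cost - 5_000) / drones_per_system   # ~$4,971
print(f"Estimated per-drone cost: ${est_drone_cost:,.0f}")
```

On those numbers, the PBAS line buys roughly 21 times as many attack drones as LASSO for about $17 million less, consistent with the comparison above.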
The PBAS requirement is being met by a variety of suppliers, likely including Neros, previously noted for supplying thousands of FPVs to Ukraine. Neros co-founder and CEO Soren Monroe-Anderson told me that in Ukraine any firm wishing to supply the market had to be able to deliver 5,000 FPVs a month or go home, and that Neros' business is built on large numbers of low-cost drones.
Neros' Archer is a low-cost, U.S.-made attack drone currently being supplied to Ukraine
This week Monroe-Anderson told Defense News that Neros is aiming to produce 10,000 drones monthly by January, and that the longer-term vision is a factory producing one million drones per year, with the U.S. Defense Department as its primary customer.
The Real Battle
These budget documents reveal the battle inside Army procurement between legacy suppliers with high-cost, established products and disruptive newcomers offering low-cost tech in vast numbers.
To an outsider, the way forward might seem obvious. But the defense business has its own way of working. In Ukraine and Russia, FPVs were so vital that many soldiers started out buying drones with their own money until the military procurement process finally started supplying them.
Another chink appeared this week with an Army solicitation aimed at rapidly acquiring thousands of drones at less than $2k a unit – more, faster and cheaper than LASSO.
Maybe next year will see the Army putting its money behind small drones in a big way. But an entrenched bureaucracy is a tougher opponent to shift than a dug-in tank battalion.
