
Latest news with #JSOC

Loose Nukes In Iran Is A Scenario U.S. Special Operators Have Been Training For

Yahoo · Politics · 26-06-2025

As Israel's campaign of strikes on Iran continues, a question emerges about whether some level of additional action may be required on the ground to meet the stated goal of preventing the regime in Tehran from being able to acquire nuclear weapons. Even if unique U.S. conventional strike capabilities are brought to bear, there could still be significant targeting challenges, especially if the Iranians move to disperse elements of their nuclear program. If the Iranian government were to collapse, and do so suddenly, there would be further impetus to ensure enriched uranium and other dangerous nuclear materials are secured. Though many actors could play a role, U.S. special operations forces, in particular, have been actively training to respond to scenarios exactly like these for years.

In 2016, the Department of Defense formally designated U.S. Special Operations Command (SOCOM) as the lead entity for the Counter Weapons of Mass Destruction (CWMD) mission, a role that U.S. Strategic Command (STRATCOM) had previously held. Decades before then, the U.S. special operations community, especially the secretive Joint Special Operations Command (JSOC), had been training to take an active role in tackling potential 'loose nukes' or other nuclear contingencies. This was driven in large part by the collapse of the Soviet Union, which had left nuclear weapons and other material scattered across a number of newly independent nations.

Today, the U.S. military also has non-special operations units that could be called upon to support CWMD missions. Other entities within the U.S. government, like the Department of Energy and the Department of Justice (to include the Federal Bureau of Investigation), also have elements that can be deployed overseas as part of CWMD missions.

When it comes to Iran, it's important to note that the exact current state of that country's nuclear program, including efforts to develop nuclear weapons, is a matter of dispute, including between U.S. and Israeli intelligence services. The regime in Tehran also has a long history of, at best, obfuscating and, at worst, actively lying about its nuclear ambitions. What is not in question is that, prior to the current conflict with Israel, the Iranian government had amassed a significant stockpile of enriched uranium and established facilities capable of producing more at an appreciable scale.

As of May, the International Atomic Energy Agency (IAEA) reported that Iran had a stockpile of close to 901 pounds, if not more, of uranium enriched to 60% purity, which presents clear proliferation concerns. That enrichment level is well above what is required for civilian power generation (typically between 3% and 5%), but still below the roughly 90% level generally considered weapons-grade. At the same time, it is understood to be a relatively short step, technically speaking, to get uranium from 60% to 90% purity. As a standard metric, the IAEA says that 92.5 pounds of 60% uranium is sufficient for further enrichment into enough weapons-grade material for one nuclear bomb.

Lower-grade nuclear material could also be fashioned into a so-called 'dirty bomb' designed simply to spread radioactive contamination across an area. In addition to any immediate effects from the detonation of such a device, it could cause widespread panic and would require significant effort to clean up.

Whether or not Iran is actively pursuing a nuclear weapon now, the country is understood to have been working toward that goal at least prior to 2003.
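To put the enrichment figures cited above in rough perspective, the following is a minimal illustrative calculation using only the two numbers reported in this article (the roughly 901-pound stockpile of 60%-enriched uranium and the IAEA's 92.5-pound benchmark per weapon's worth of material). The straight division, variable names, and the decision to ignore processing losses are assumptions made purely for illustration, not an authoritative proliferation estimate.

```python
# Illustrative arithmetic only, based on figures cited in this article.
# Straight division; ignores enrichment losses and other real-world factors.

STOCKPILE_LBS_60_PCT = 901.0   # reported stockpile of 60%-enriched uranium, in pounds
IAEA_SQ_LBS_60_PCT = 92.5      # IAEA benchmark: pounds of 60% material per weapon's worth

weapons_worth = STOCKPILE_LBS_60_PCT / IAEA_SQ_LBS_60_PCT
print(f"Roughly {weapons_worth:.1f} weapons' worth of material, if further enriched to ~90%")
# -> Roughly 9.7 weapons' worth of material, if further enriched to ~90%
```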
Specialized equipment and other physical elements of the program, active or not, could also present proliferation risks. Israeli forces have already struck a number of Iranian nuclear sites as part of their ongoing campaign, but others currently remain untouched, most notably the deeply buried enrichment facility at Fordo.

Questions around whether or not the U.S. military might soon enter the conflict more actively on the side of Israel center heavily on its unique ability to prosecute targets like Fordo with 30,000-pound GBU-57/B Massive Ordnance Penetrator (MOP) bunker buster bombs dropped from B-2 stealth bombers. This, in turn, raises additional questions about whether Iranian authorities might seek to disperse nuclear material and other assets to a wide array of locations in the face of these growing threats, if they haven't already, at least to some degree.

Iranian Deputy Foreign Minister Kazem Gharibabadi said over the weekend that unspecified 'special measures' had been taken to protect the country's nuclear program, and that these would not be communicated to the IAEA.

#BREAKING #Iran's Deputy FM, @Gharibabadi: From now on, new and special measures to protect nuclear materials and equipment will not be notified to the @iaeaorg, and Iran will no longer cooperate with the IAEA as before. — Mohammad Ghaderi | محمد قادری (@ghaderi62) June 14, 2025

'I'm not so sure,' IAEA Director General Rafael Grossi told Bloomberg Television today when asked about the current potential whereabouts of Iran's stockpile of enriched uranium. 'In a time of war, all nuclear sites are closed. No inspections, no normal activity can take place.'

'Iran's 400 kilograms (880 pounds) of highly-enriched uranium could fit in three or four easily-concealed cylinders,' Bloomberg had also reported on Monday, citing nuclear-weapons engineer and former IAEA inspector Robert Kelley. 'Even if Israel destroys Iran's enrichment infrastructure, the location of that material will still need to be verified.'

Prior to the outbreak of the current conflict, Israeli authorities had reportedly also raised with their American counterparts the possibility of Iran transferring nuclear assets to Houthi militants in Yemen; U.S. officials said they had no evidence of any such plans. As noted, a collapse of the regime in Tehran, especially a precipitous one, would present clear further impetus to try to keep whatever might be left of Iran's nuclear program from falling into the wrong hands.

In any of the aforementioned scenarios, the U.S. special operations community, especially so-called 'tier one' units like the U.S. Army's Delta Force and the U.S. Navy's SEAL Team Six, could come into play. U.S. special operations units are ideally suited to rapidly and discreetly infiltrating a target area to extract items of interest from an objective like a nuclear facility in Iran. If the items in question are too large to be moved by the special operations force, depending on what they are, they could be destroyed in place or secured until a larger follow-on force arrives. Conventional supporting forces and interagency elements offering unique capabilities could accompany special operations forces on initial raids, as well.

Special operations forces are also well-positioned to help intercept high-value targets on the move, including nuclear material that might make its way out of Iran, or threaten to do so, as the conflict with Israel continues. This could include operations on land or at sea.
This is not speculative, but reflects real mission scenarios the U.S. military is actively prepared to carry out. For instance, roughly a year ago, members of the Army's 75th Ranger Regiment partnered with a specialized non-special operations unit, Nuclear Disablement Team 1 (NDT 1), on an exercise consisting of a simulated raid under hostile fire on a decommissioned pulse radiation facility serving as a mock underground nuclear site. As another of many examples, NDT 1 teamed up with Green Berets from the Army's 5th Special Forces Group for an exercise in 2023 involving a mock air assault on the Bellefonte Nuclear Power Plant in Alabama and a simulated shutdown of the facility.

The NDTs are a prime example of conventional U.S. military units that could be called upon to support real-world special operations CWMD missions. The Army has three of these teams, all assigned to the 20th Chemical, Biological, Radiological, Nuclear, and Explosives (CBRNE) Command, headquartered at Aberdeen Proving Ground in Maryland. They are made up of personnel specially trained 'to exploit and disable nuclear and radiological Weapons of Mass Destruction infrastructure and components to deny near-term capability to adversaries,' according to the Army.

'The possibility of dealing with a damaged nuclear power station or emergencies involving nuclear reactors in a hostile environment is an emerging threat,' Army Capt. David Manzanares, a Nuclear Medical Science officer from NDT 1, said after the 2023 exercise at the Bellefonte Nuclear Power Plant. 'This training event was complex, dynamic and challenged our technical expertise.'

'NPPs [nuclear power plants] are a key part of the nuclear fuel cycle. It is the place all plutonium is produced. Therefore, reactors are a key area in nuclear weapon pathway defeat,' Army Maj. Aaron Heffelfinger, then-deputy chief of NDT 1, also said at that time. 'The NDT's ability to assess the state of a reactor, and if needed, control and shut it down, is crucial for our mission success and those we are directly supporting.'

TWZ has also reported in recent years on efforts by the U.S. special operations community to hone other skill sets that could be particularly relevant to operations in Iran, including against its nuclear facilities, many of which are deep underground. In its annual budget request for the 2021 Fiscal Year, published in 2020, the Pentagon asked for $14.4 million for a new dedicated 19,200-square-foot site to help JSOC train to raid 'complex, hardened facility targets.' Whether or not that facility has since been built is unclear.

In 2021, the Army's 1st Special Forces Command (Airborne), or 1st SFC (A), also published an unclassified white paper that included details about a plan to establish 'Hard Target Defeat Companies' of Green Berets. These would supplant existing Special Forces crisis response units, and be 'uniquely organized to counter near-peer adversary campaigns' and 'operate with regional partners to defeat hard targets in sensitive and constricted environments.' How those plans may have evolved since then is not immediately clear.

There are also examples of the U.S. military, more broadly, conducting relevant missions in post-conflict environments. For instance, in 2008, American forces, including NDT members, helped remove 550 metric tons of so-called 'yellowcake' uranium oxide from the Tuwaitha Nuclear Research Center in Iraq. Yellowcake is an intermediate product in the refinement of uranium ore toward fissile material.
There are numerous instances of the U.S. government helping to secure nuclear material in circumstances entirely removed from conflicts, as well.

All of this is also relevant when it comes to potential new non-nuclear proliferation risks that might now emerge from Iran, including in the aftermath of a sudden collapse of the regime in Tehran. For instance, there have been concerns about potential Iranian chemical and biological weapons developments over the years. In 2011, U.S. troops were deployed to guard chemical weapons sites in Libya following the downfall of long-time dictator Muammar Gaddafi. Libya's chemical weapons and related materials were subsequently destroyed in place, a process that took some three years to complete. American forces also supported an effort, beginning in 2013, to destroy Syria's chemical weapons stockpile, which ultimately failed to eliminate it fully.

Iran also has expansive stockpiles of ballistic, cruise, and other missiles, as well as other conventional weapons, that the United States and others would not want to see make their way to other hostile actors or otherwise end up on the black market. The Iranian government already has an extensive history of proliferating ballistic and cruise missiles, drones, air defense systems, and other conventional capabilities to its proxies across the Middle East.

In the meantime, the ongoing conflict between Israel and Iran continues to rapidly evolve. Questions similarly continue to mount about what the United States' role, including any potential employment of U.S. special operations forces on the ground, might be going forward.

Contact the author: joe@

US' new military doctrine: AI-powered lethality, less civilian oversight

First Post · Politics · 27-04-2025

Secretary of Defence Pete Hegseth wants the US military to be more lethal as he guts civilian casualty programmes. There's a high probability that datasets fed into the new AI systems for military operational and theatre-level planning are biased towards more lethality.

A day after his Senate confirmation on January 24, Pete Hegseth had a message for the Department of Defence (DoD): rebuild the US military into the most lethal force that will put America first. 'We will rebuild our military by matching threats to capabilities. … We will remain the strongest and most lethal force in the world,' he said in a press release.

The 2025 Global Fire Power (GFP) index ranks the US number one of the 145 powers it assesses, with a score of 0.0744 (a score of 0.0000 is considered perfect). The GFP considers 60 factors to determine a nation's power index, including defence technology, financial resources, logistics, geography and strategic position.

According to the Stockholm International Peace Research Institute, the 2023 US defence budget was $880 billion, more than the next eight countries, including China and Russia, combined. The massive defence outlay allows the US to acquire the most advanced weapons and cutting-edge tech, sustain a massive and well-trained military and maintain around 750 military bases in more than 80 countries.

Hegseth's message never mentioned a word about how he intends to make the world's most lethal military more lethal. However, two recent perilous Pentagon decisions show the path to adding more lethality. First, the Donald Trump administration has decided to advance the use of artificial intelligence (AI) in the military from hunting down terrorists and interacting with commanders to operational and theatre-level planning. Second, Hegseth has started gutting or cutting down offices and programmes established to prevent civilian casualties caused by the lethal US military. What's more worrying is that both decisions are interconnected.

Advanced AI-military integration

In 2011, a stealthy RQ-170 intelligence, surveillance and reconnaissance (ISR) drone, aka the 'Beast of Kandahar', took off from Afghanistan, flew undetected in Pakistani airspace, monitored Osama bin Laden's Abbottabad compound for months and provided live video to the US Joint Special Operations Command (JSOC). Simultaneously, Palantir Gotham, the defence and intelligence software of Palantir Technologies—specialising in software platforms for big data analytics—integrated and analysed that data, other surveillance and reconnaissance reports and intel on bin Laden and identified patterns and connections within them.

Finally, JSOC's SEAL Team Six eliminated the 9/11 mastermind on May 2, 2011. Palantir made it easy for JSOC operators to connect the dots within the data trove. Still, it took months from monitoring to killing bin Laden—AI hadn't entered combat by then.

Six years later, on April 26, 2017, a new era dawned in AI-military integration. Project Maven, officially known as the Algorithmic Warfare Cross-Functional Team, was established with a memo by then-deputy defence secretary Robert O Work. Funded by the DoD, the project involved AI's integration with the US military's kill chain—find the target, fix it, track it, select the weapon of choice and destroy it.
Maven could quickly harvest and analyse enormous amounts of data gathered by drones and satellites via machine learning systems and identify an array of targets, including humans, military facilities and weapons. Such a task was manually impossible and time-consuming, as analysts would have had to spend days sifting through satellite and drone images, videos and surveillance data, relying on their eyes alone.

Maven initially collaborated with Google on data fusion and later switched to Palantir, Microsoft, Amazon Web Services, Maxar Technologies and others. Within eight months, the Maven Smart System interface was using special algorithms to identify objects in a video feed sent via a ScanEagle drone at an undisclosed location in West Asia. Yellow-outlined boxes on the interface marked potential targets, and blue-outlined boxes indicated civilian-inhabited places or friendly forces.

Soon, Maven was used to hunt down Islamic State (IS) members in Syria and Iraq. In 2020, the interface was used to evacuate American military personnel from Afghanistan; in 2022, to help Ukrainian troops locate and target Russian soldiers; and in 2024, to destroy Houthi rocket launchers and vessels. In February of that year, Maven helped the Pentagon identify targets for more than 85 airstrikes in the Middle East. Guess it performed as well as Google's Gemini AI?

The US military has ramped up its use of artificial intelligence tools based on Google's Project Maven to identify targets for more than 85 air strikes in the Middle East this month. US bombers and fighter aircraft carried out… — StarBoySAR 🇭🇰 🇨🇳 🥭 (@StarboySAR) February 28, 2024

In the Pentagon's own words, AI is used to speed up killing. 'We obviously are increasing the ways in which we can speed up the execution of the kill chain,' Radha Plumb, the outgoing chief digital and AI officer at the DoD, told TechCrunch in a January interview.

The Joe Biden administration went further by using generative AI in the military. The Pentagon's Defence Innovation Unit (DIU), or Unit X, ensures that the US military gets access to emerging technology in Silicon Valley. Last December, the Pentagon established the AI Rapid Capabilities Cell (AIRCC) to expedite the adoption of Large Language Models (LLMs) and other forms of generative AI. AIRCC, with $100 million in funding, will implement the recommendations of Task Force Lima, set up in August 2023. The task force aimed to utilise generative AI models in warfighting and other fields, like health and finance, and leverage partnerships across the Pentagon, the intelligence community and other federal agencies to reduce redundancy and risk.

Generative AI was used during the annual South Korea-US Freedom Shield exercise last year, when a ChatGPT-like chatbot interface scoured open-source intelligence like articles, reports, images and videos; translated and summarised foreign news sources; and wrote daily intelligence reports for US commanders. Defence-tech company Vannevar Labs, which designed the interface and received a $99 million production contract from the DIU last November, uses LLMs, including from OpenAI and Microsoft. Since 2021, it has been collecting terabytes of open-source intelligence in 80 languages across 180 countries. Vannevar then builds AI models to translate the data and detect threats using its ChatGPT-like interface.
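The yellow/blue overlay described above can be thought of as a simple triage rule layered on top of an object detector's output. The sketch below is purely illustrative and hypothetical; the class labels, threshold and function names are invented for this example and do not reflect any actual Maven Smart System code or interface, only the general idea of mapping detections to 'potential target' versus 'civilian/friendly' display categories.

```python
# Hypothetical sketch of mapping detector output to overlay colors, as a toy
# analogue of the yellow/blue scheme described in the article. Not real Maven code.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "rocket_launcher", "civilian_area" (invented labels)
    confidence: float  # detector confidence, 0.0 to 1.0

# Invented category lists for illustration only.
POTENTIAL_TARGET_LABELS = {"rocket_launcher", "armed_vessel"}
PROTECTED_LABELS = {"friendly_unit", "civilian_area"}

def box_color(det: Detection, threshold: float = 0.8) -> str:
    """Return an overlay color mirroring the yellow/blue convention described above."""
    if det.label in PROTECTED_LABELS:
        return "blue"    # civilian-inhabited places or friendly forces
    if det.label in POTENTIAL_TARGET_LABELS and det.confidence >= threshold:
        return "yellow"  # potential targets flagged for human review
    return "none"        # everything else left unmarked

if __name__ == "__main__":
    feed = [Detection("rocket_launcher", 0.91), Detection("civilian_area", 0.97)]
    for det in feed:
        print(det.label, "->", box_color(det))
```

The point of the sketch is only that the human-facing color coding is a thin rule on top of the model's classifications; the hard part, and the source of the risks discussed later in this article, is the quality of those underlying classifications.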
According to Leigh Madden, vice president of the National Security Group at Microsoft, generative AI can process intelligence, signals and reconnaissance data in real time, enhancing decision-makers' situational awareness. In a SIGNAL Media Executive Video Series episode, Madden said that generative AI training scenarios incorporate terrain, weather conditions, and enemy tactics and behaviour.

Now, Trump and Hegseth want to use AI for military operational and theatre-level planning. This is the third phase of military AI. Agentic AI is a revolutionary technology in which the system can decide and plan actions per the human creator's needs while adapting to changing circumstances—a degree of autonomy that traditional AI models lack.

Agentic AI quickly and comprehensively synthesises 'a broad scope of traditional and non-traditional planning factors than humans alone to help produce more thorough, objective courses of action (COA)', according to Richard Farnell and Kira Coffey. In an article written for the Belfer Centre for Science and International Affairs, Farnell, a 2024 National Security Fellow at Harvard Kennedy School's Belfer Centre, and Coffey, a 2024 Air Force National Defence Fellow and International Security Program Research Fellow at the same institution, explain Agentic AI's potential: 'Once a COA is selected, Agentic AI also has the potential to help rapidly publish downstream directives and orders, flattening communication and saving hundreds of man-hours in each planning cycle.'

Agentic AI can help solve large-scale, complex problems independently amid changing battlespace conditions. 'Creating multiple dilemmas for a near-peer adversary requires continuous integration of capabilities across all instruments of power and all domains, including the electromagnetic spectrum and the information environment,' they write.

The Pentagon has awarded a contract to data annotation company Scale AI, whose Thunderforge system will accelerate decision-making, allowing planners to more rapidly synthesise vast amounts of information, generate multiple courses of action, and conduct AI-powered wargaming to anticipate and respond to evolving threats. Thunderforge will be deployed initially with the US Indo-Pacific Command and the US European Command. It 'brings AI-powered analysis and automation to operational and strategic planning, allowing decision-makers to operate at the pace required for emerging conflicts', according to Bryce Goodman, DIU Thunderforge lead and contractor. Scale AI's customers include OpenAI, Microsoft, Cisco, Meta and TIME. Thunderforge will also incorporate Anduril's Lattice software platform and Microsoft-enabled state-of-the-art LLMs.

Scale AI describes a massive gap between current warfare and agentic warfare. Current warfare deploys people with decades of single-domain knowledge who connect workflows and decide in days. Agentic warfare, on the other hand, has AI models with around 4,000 years of all-domain knowledge; AI agents automatically connect workflows with human oversight and decide in minutes.

The DIU press release said the new AI marks a decisive shift in how the Pentagon plans to fight wars. 'Thunderforge marks a decisive shift toward AI-powered, data-driven warfare, ensuring that US forces can anticipate and respond to threats with speed and precision. Following its initial deployment, Thunderforge will be scaled across combatant commands,' the agency explained.
Pentagon guts civilian casualty programmes

At least 4.5-4.7 million civilians have been killed in post-9/11 wars in Iraq, Afghanistan, Pakistan, Syria, Yemen and Somalia. According to a 2023 report by Rhode Island-based Brown University's Watson Institute for International and Public Affairs, an estimated 408,000 civilians out of that toll died directly from war violence. In the first 20 years of the war on terror, America killed around 48,308 civilians in more than 91,000 airstrikes in Afghanistan, Iraq, Libya, Pakistan, Somalia, Syria and Yemen, a 2021 analysis by UK-based airstrike monitoring group Airwars shows.

The US has faced stinging international criticism for years for killing civilians in war zones. Damning probes by the media, NGOs and think tanks have revealed the merciless killing of civilians by the US military since 9/11.

In 2022, the Biden administration established the Civilian Harm Mitigation and Response Action Plan (CHMR-AP) to develop and implement strategies to better prevent, mitigate and respond to civilian harm in military operations. The plan also aimed to increase accountability for civilian casualties, improve transparency in Pentagon practices related to civilian protection, and ensure a swift and effective response to civilian casualties.

The CHMR-AP also established a Civilian Protection Centre of Excellence (CPCE) to guide the DoD's understanding of the capabilities and practices that support civilian harm mitigation and response. It was the hub and facilitator of department-wide analysis, learning and strategic approaches, and was to help institutionalise good practices for civilian harm mitigation and response during operations.

Biden also instructed military and CIA drone operators to obtain permission before targeting a suspected militant outside a conventional war zone. In October 2022, he ordered drone operators to be certain of no civilian injuries before a strike. Except for Iraq and Syria, where the IS still operates, presidential permission is compulsory for drone strikes in Afghanistan, Yemen, Libya, Somalia and FATA, Pakistan.

Trump has not only removed the restrictions but also decided to do away with the CHMR-AP and CPCE. The US has launched several strikes in Iraq, Syria and Somalia. Since the March 15 airstrikes in Yemen, more than 200 people have been killed, with the most recent attack, on the Ras Isa oil port, killing around 80 people.

Now, Hegseth has decided to terminate CHMR and CPCE staff across all US commands despite the Pentagon policy requiring that possible dangers to civilians be considered in combat planning and operations. That's Hegseth's idea of rebuilding the US military into a more lethal force.

'I've thought very deeply about the balance between legality and lethality, ensuring that the men and women on the frontlines have the opportunity to destroy the enemy and that lawyers aren't the ones getting in the way,' Hegseth said at his Senate confirmation hearing. Hegseth also said that laws like the Geneva Convention existed 'above reality': 'We follow rules. But we don't need burdensome rules of engagement [that] make it impossible for us to win these wars.'

Hegseth also feels that lawyers hinder military effectiveness. On February 21, he sacked the judge advocate generals (JAGs) of the Army and Air Force; the Navy's JAG had abruptly retired in December.

'Lethal' concoction in the making

The character of warfare is rapidly changing.
The traditional military doctrine of the three-domain 'land, sea and air' approach will be junked. Though none of the global powers has yet allowed AI to take over its military, with humans still in control, machines turning autonomous in combat is inevitable.

China has already created an AI commander at the Joint Operations College of the National Defence University in Shijiazhuang, Hebei province, modelled on a human counterpart with his experience, strengths and flaws. In a peer-reviewed paper published in the Chinese-language journal Common Control & Simulation in May 2024, senior engineer Jia Chenxing writes: 'The highest-level commander is the sole core decision-making entity for the overall operation with ultimate decision-making responsibilities and authority.'

The prospect of deciding faster on the battlefield, when the data is voluminous, tempts the human mind, especially when the adversary is also using AI. Decisions taken in a fraction of a second can decide the course and outcome of a war.

However, even agentic AI isn't immune to algorithmic biases. Any AI system is trained on datasets with inherent biases that could lead to disastrous consequences. An AI system in war trained on a dataset with prejudices against a particular community, race, ethnicity or even gender will pick targets accordingly. The consequences can be more severe in the case of agentic AI, as commanders plan operations at the theatre level, which is much more complex than merely picking and eliminating targets. A scenario where machines in a war turn autonomous would be frightening.

As the Pentagon axes programmes aimed at ensuring minimum collateral damage, or civilian deaths, accountability will be the main casualty if the AI system makes an error resulting in the death of non-combatants. Who will be responsible for the collateral damage? The Pentagon can't blame the machines, but it won't take the blame either.

A prime example is Israel's use of generative AI programmes like Lavender, Where's Daddy? and The Gospel to eliminate Hamas and Palestinian Islamic Jihad (PIJ) operatives, resulting in massive civilian casualties. Lavender, developed by Israel's elite and clandestine counterintelligence and cyberwarfare division, Unit 8200, was designed to make kill lists of suspected Hamas and PIJ junior operatives in the initial months of the war.

According to a joint investigation by +972 Magazine and Hebrew news website Local Call, the Israel Defence Forces (IDF) trusted Lavender, which selected 37,000 Palestinians and their homes as targets, as if it were a human, and bombed Gaza accordingly. Where's Daddy? tracked the suspected militants as they entered their houses, and The Gospel identified structures and buildings as targets. The IDF was so reliant on Lavender that it spent merely 20 seconds on each target before bombing it—the main criterion was that the target should be a man.

Israel is using AI systems "Lavender" and "Where's Daddy?" to kill Palestinians in Gaza, an investigative report by +972 Magazine has revealed — TRT World (@trtworld) April 5, 2024

Despite Lavender being 10 per cent inaccurate—meaning 10 out of 100 targets identified weren't terrorists—the IDF didn't review the system's assessments. The mistaken targets could have been police, civil defence workers, relatives of Hamas or PIJ members, or Gazans with a name and nickname similar to that of an operative.
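To make the scale of those reported figures concrete, here is a purely illustrative back-of-the-envelope calculation using only the numbers cited in this article (37,000 listed targets, a reported roughly 10 per cent error rate, and about 20 seconds of human review per target). It is not an independent estimate, and the simple multiplications are assumptions made for illustration.

```python
# Illustrative arithmetic based solely on figures reported in this article.
LISTED_TARGETS = 37_000
REPORTED_ERROR_RATE = 0.10         # "10 out of 100 targets identified weren't terrorists"
REVIEW_SECONDS_PER_TARGET = 20     # reported human review time per target

misidentified = LISTED_TARGETS * REPORTED_ERROR_RATE
total_review_hours = LISTED_TARGETS * REVIEW_SECONDS_PER_TARGET / 3600

print(f"Implied misidentified people: ~{misidentified:,.0f}")
print(f"Total human review time at 20 s each: ~{total_review_hours:,.0f} hours")
# -> Implied misidentified people: ~3,700
# -> Total human review time at 20 s each: ~206 hours
```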
When Where's Daddy? signalled to the IDF that a target had entered his house, the residence was hit with an unguided bomb to save expensive armaments for high-value targets. Consequently, the whole house was blown away, killing the target and his entire family.

"When they reach their homes, daddy's home, and then the entire house, and everybody in it, could be blown up." How the Israel army's 'Lavender' and 'Where's Daddy?' artificial intelligence systems operate in Gaza. #GazaCrimes #AI — Al Jazeera Investigations (@AJIunit) October 19, 2024

Around 16-18 houses in the Al-Bureij refugee camp were blown to smithereens and 300 civilians killed on October 17 when Lavender failed to pinpoint a top Hamas commander's location.

Axing the CHMR and CPCE and using agentic AI will pave the way for more civilian casualties. Machines will neither prevent, mitigate nor respond to civilian harm, nor will they be trained to stop collateral damage. Hegseth wants the US military to be more lethal, not legal. Therefore, there's a high probability that datasets fed into AI systems will be biased towards more lethality at the cost of civilian casualties.

AI systems need constant monitoring, oversight and review. That's the reason China has restricted the AI commander to a laboratory despite it 'possessing sound mental faculties, a poised and steadfast character, capable of analysing and judging situations with calmness, devoid of emotional or impulsive decisions, and swift in devising practical plans by recalling similar decision-making scenarios from memory'.

The writer is a freelance journalist with more than two decades of experience and comments primarily on foreign affairs. Views expressed in the above piece are personal and solely those of the writer. They do not necessarily reflect Firstpost's views.
