Middle East Conflict Becomes Live Testing Ground for AI Weapons as Governance Deadline Looms
Iran-Israel escalation exposes critical gaps in autonomous weapons regulation, with semi-autonomous systems deployed at scale while diplomats race toward 2026 treaty deadline.
The U.S.-Israeli military campaign against Iran has become the first large-scale battlefield deployment of AI-guided weapons systems, processing thousands of targets while exposing the absence of binding international rules governing autonomous lethal force.
In the first 11 days of Operation Epic Fury, the United States carried out 5,500 strikes, according to The National. Admiral Brad Cooper, head of U.S. Central Command, confirmed that AI systems help soldiers process vast amounts of data, turning tasks that once took hours into seconds, though he maintained, as Al Jazeera reported, that “humans will always make final decisions on what to shoot.”
The conflict has illuminated a widening gap between operational reality and diplomatic progress. Academics and legal experts are meeting in Geneva, Switzerland, this week to discuss lethal autonomous weapons systems and military AI procurement, as part of long-running efforts to reach an international agreement, according to Nature. Political scientist Michael Horowitz of the University of Pennsylvania notes that rapid technological development is outpacing slow-moving international discussions.
AI Target Processing at Industrial Scale
The Maven Smart System, which uses AI for applications including image processing and tactical support, speeds up attack capabilities by suggesting and prioritizing targets and has been used in the attacks on Iran, according to reports from Nature citing the Washington Post. U.S. targeting planners are using the Maven Smart System where AI analyzes data then identifies and prioritizes targets, recently embedding Anthropic’s Claude system that processes and summarizes intelligence.
Experts are sounding alarms over the lack of human supervision of Israeli AI targeting in Iran, noting growing parallels between Israel’s bombing campaigns in Gaza and Tehran as Israel appears to be using AI with little or no human oversight, according to Asia Times citing the Quincy Institute for Responsible Statecraft. In Gaza, Israel built AI targeting systems that approved strikes in as little as 20 seconds while accepting a roughly 10% error rate, and previous investigations detailed how the IDF uses Habsora, an Israeli AI system that can automatically select airstrike targets far faster than human analysts could.
Semi-Autonomous Systems See First Combat Use
CENTCOM’s Task Force Scorpion Strike used one-way attack drones for the first time in history during Operation Epic Fury; the low-cost drones are modeled after Iran’s Shahed drones, according to the Jerusalem Post. At roughly $35,000 per unit, compared with the $2.5 million cost of a Tomahawk missile, LUCAS drones can navigate autonomously and have a range of about 500 miles.
Israeli military sources told Iran International that the military is using a new method to launch drone swarms over Iran targeting security forces. The next generation of drones is expected to be AI-enhanced, capable of autonomous navigation and precision targeting, with these inexpensive, commercially available tools accelerating a shift toward “forever wars,” according to Rest of World citing the Institute for Economics and Peace.
“The current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent.”
— Craig Jones, Political Geographer, Newcastle University
Diplomatic Deadlock at UN as 2026 Deadline Approaches
The United Nations Secretary-General has called on states to conclude, by 2026, a legally binding instrument prohibiting lethal autonomous weapons systems that function without human control or oversight, according to Springer. Secretary-General António Guterres and ICRC President Mirjana Spoljaric have jointly called for a new international treaty setting out specific prohibitions and restrictions on LAWS, with negotiations to conclude by the end of 2026.
Prominent countries remain opposed to or undecided on a potential treaty regulating these weapons, including China, India, Israel, Japan, Russia, the United Kingdom, and the United States, according to the Arms Control Association citing Stop Killer Robots. The U.S. delegation to the CCW has consistently opposed any preemptive ban on LAWS, while Russia has also opposed a ban, arguing that LAWS could “ensure increased accuracy of weapon guidance on military targets.”
The Convention on Certain Conventional Weapons’ Group of Governmental Experts has made notable progress over the last decade but has been criticized for moving slowly under its consensus model, according to the American Society of International Law. The group’s mandate extends to 2026, with the CCW review conference set as the deadline for a final report.
- Of 195 countries, 129 (66%) favor a legally binding instrument, while only 12 (6%) oppose one and 54 (28%) remain undecided
- December 2024 UN General Assembly resolution adopted with 166 votes in favor, 3 opposed (Belarus, DPRK, Russia), 15 abstentions
- Israel, United States, and Russia champion the position that existing international law is adequate to address LAWS
Tech Companies Caught in Pentagon Standoff
Just one day before the U.S.-Israeli offensive began on February 28, the U.S. government sidelined one of its main AI suppliers in a disagreement that underscores ethical concerns about AI’s military use. Trump ordered all federal agencies, including the Department of Defense, to stop using Anthropic products over the company’s refusal to allow unrestricted government and military use of its technology, though he gave the Pentagon six months to phase them out, allowing continued use in the Iran war pending replacements, according to Common Dreams.
Project Nimbus, a $1.2 billion cloud-computing and AI contract signed in 2021 between the Israeli government and Amazon Web Services and Google Cloud, provides cloud infrastructure and AI tools for the IDF, with the deal prohibiting Google or Amazon from refusing service to Israeli government, military, or intelligence agencies.
In a recent study, AI models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95% of cases, according to Rest of World. While fully autonomous weapons systems have yet to be developed, the gap between deployment capabilities and governance is widening.
Civilian Harm Raises Accountability Questions
The U.S.-Israeli campaign has killed at least 1,300 people in Iran since it began on February 28, with the Iranian Red Crescent Society reporting that bombardment has damaged nearly 20,000 civilian buildings and 77 healthcare facilities. The toll comes as calls grow for an independent investigation into the bombing of a school in southern Iran that killed more than 170 people, mostly children.
Ongoing conflicts in Ukraine and Gaza, in which AI assists target identification and drone navigation, have seen high civilian death tolls, with no evidence that AI reduces civilian deaths or wrongful targeting decisions, according to political geographer Craig Jones in Nature. Fully autonomous weapons without human oversight are not currently reliable and do not comply with international law, according to Michael Horowitz.
Ukrainian President Volodymyr Zelenskyy warned in September that AI had triggered the “most destructive arms race in human history” and made a plea for urgent global rules on how AI can be used in weapons, according to Rest of World.
Although no commonly agreed definition of lethal autonomous weapon systems yet exists, in November 2024 the CCW Group of Governmental Experts provisionally found consensus on characterizing one as “an integrated combination of one or more weapons and technological components that enable the system to identify and/or select, and engage a target, without intervention by a human user.”
What to Watch
The CCW Seventh Review Conference in 2026 represents the final diplomatic opportunity to establish binding rules before autonomous weapons proliferate beyond control. Experts call this the “pre-proliferation window,” the last moment before these weapons become as common as small arms, with the 2026 deadline increasingly seen as the “finish line” for global diplomacy, according to the Usanas Foundation.
Key pressure points include whether the U.S. Defense Department’s planned working group with AI labs produces meaningful restrictions, how battlefield performance in Iran shapes military procurement decisions, and whether civilian harm cases generate sufficient political momentum to break the diplomatic deadlock. The Pentagon has requested a record $14.2 billion for AI and autonomous research for fiscal year 2026, with the “Replicator” program receiving $1 billion in 2025 to fast-track deployment of thousands of expendable autonomous drones.
The conflict has transformed abstract policy debates into operational reality, creating what one analyst termed a “lethal beta”: a live-fire experiment that creates a pipeline of exportable products for the rest of the world. Without binding international frameworks by the end of 2026, the momentum of military AI development may render any future regulation obsolete before it can be implemented.