Palantir Deflects Accountability as Maven AI Drives 11,000 Strikes in Iran
The company's UK leadership says targeting responsibility belongs to military customers, exposing the legal vacuum as private AI compresses kill chains faster than oversight can follow.
Palantir’s Maven AI platform has enabled more than 11,000 US strikes against Iranian targets in six weeks, yet the company’s leadership insists that responsibility for targeting decisions rests entirely with its military customers. The stance exposes the absence of any legal framework governing private firms embedded in kinetic operations.
The company’s UK head Louis Mosley told the BBC that “responsibility for how their output is used must always remain with the military organisation.” When pressed on who decides the policy framework for AI-assisted targeting, Mosley was blunt: “That’s really a question for our military customers. They’re the ones that decide the policy framework that determines who gets to make what decision. That’s not our role.”
The deflection comes as the Pentagon investigates Maven’s role in a February 28 strike on a girls’ school in Minab that killed 168 people, including approximately 110 children, on the opening day of US-Israeli operations against Iran. Over 120 members of Congress have demanded answers about how the targeting occurred.
Speed Over Verification
Maven Smart System has compressed the military’s kill chain from eight or nine separate systems into one platform, according to The Register. The system enables 20 personnel to perform work that previously required 2,000 analysts, reducing targeting decisions from hours to an average of 86 seconds.
Deputy Secretary of Defense Steve Feinberg designated Maven as an official “program of record” in a March 9 memo, mandating integration across all military branches by September 2026. The platform integrates over 160 intelligence feeds and uses Anthropic’s Claude AI for processing, though that relationship now faces disruption.
On March 4, the Pentagon designated Anthropic as a “supply chain risk” after the company refused to modify Claude to support autonomous weapons targeting and mass surveillance, according to The Register. The department ordered a six-month phase-out of Claude AI from all systems.
“This prioritisation of speed and scale and the use of force then leaves very little time for meaningful verification of targets to make sure that they don’t include civilian targets accidentally. If there’s a risk of killing and you co-opt a lot of your critical thinking to software that will take care of these things for you, then you just become reliant on the software.”
— Prof. Elke Schwarz, Queen Mary University of London
The Accountability Gap
Palantir’s position creates a circular responsibility structure: the company builds systems explicitly designed to compress human decision-making below traditional review thresholds, then disclaims liability for how those systems are used. Military customers, meanwhile, operate under rules of engagement that predate AI-assisted targeting by decades.
Rep. Sara Jacobs of the House Armed Services Committee told NBC News that “AI tools aren’t 100% reliable—they can fail in subtle ways and yet operators continue to over-trust them. We have a responsibility to enforce strict guardrails on the military’s use of AI and guarantee a human is in the loop in every decision to use lethal force.”
But the technical architecture works against meaningful human oversight. Former Air Force officer Wes O’Donnell wrote in his newsletter that “what AI did is remove the natural friction that used to catch upstream errors before they became strike packages. Speeding up the kill chain to a point where it’s faster than human thought creates very real, insurmountable risks.”
Project Maven began in 2017 as a drone-imagery labelling initiative. Google withdrew as the initial partner following employee protests over military applications. Palantir assumed control and transformed Maven from an analytical tool into an integrated targeting platform. The Iran operations represent the system’s first large-scale combat deployment.
Financial Incentives and Conflict
Palantir’s market capitalisation has reached approximately $360 billion on the strength of expanding military contracts, according to Peoples Dispatch. The company’s Maven contract grew from $480 million in 2024 to $1.3 billion by May 2025. The Army separately awarded Palantir a $10 billion enterprise agreement.
The company’s Chief Technology Officer Shyam Sankar said that “current operations are ongoing, but I think people will reflect back and say this is the first large-scale combat operation that was really driven, enhanced, made substantially more productive with technology, with AI,” per Democracy Now!.
The financial stakes compound existing conflicts of interest. Byline Times reported that Palantir serves simultaneously as the analytical engine for the International Atomic Energy Agency’s monitoring of Iran’s nuclear programme and as the Pentagon’s real-time targeting partner in strikes against Iranian assets.
Key Takeaways
- Palantir’s leadership explicitly disclaims responsibility for targeting decisions made using its AI systems
- Maven has compressed kill chain decisions from hours to 86 seconds, enabling 11,000 strikes in six weeks
- No legal framework exists governing private contractor liability when AI-assisted targeting fails
- Congressional investigation focuses on February 28 school strike that killed 168 people
- Pentagon designated Anthropic a supply chain risk after company refused to enable autonomous weapons
International Law Lags Technology
The UN General Assembly First Committee passed its third consecutive resolution on lethal autonomous weapons systems in November 2025, reflecting growing concern that policy frameworks lag behind operational deployment, according to Prism News. No binding international agreement exists governing the use of AI in targeting decisions.
Warfare expert Craig Jones told Democracy Now! that “you’re reducing a massive human workload of tens of thousands of hours into seconds and minutes. You’re reducing workflows, and you’re automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions.”
The Pentagon’s investigation into the Minab school strike may provide the first test case for how responsibility is apportioned between military operators and the private firms that design the systems they depend on. At a March conference, Pentagon official Cameron Stanley demonstrated Maven’s capabilities using a heat map that inadvertently displayed the school’s coordinates, according to The Register.
What to Watch
The Pentagon’s investigation into the Minab strike will test whether existing legal frameworks can assign liability when AI systems compress decision-making below the threshold of meaningful human review. Congressional pressure may force new guardrails before Maven’s mandated September 2026 integration across all military branches. Palantir’s search for a replacement for Claude AI following the Anthropic phase-out could reveal whether the company prioritises systems with stronger ethical constraints or seeks alternatives that enable faster, less restricted targeting. The outcome will set the precedent for whether private contractors bear responsibility, or avoid it, as autonomous systems come to shape combat operations.