The Anthropic-Pentagon Standoff Exposes Unresolved Questions of AI Control in Wartime
Hours after banning Anthropic for refusing to drop AI safeguards, the Pentagon used Claude to coordinate Iran strikes—revealing the military's dependence on a technology it just declared a national security risk.
The U.S. military deployed Anthropic’s Claude AI to help coordinate strikes against Iran on February 28, hours after President Trump ordered federal agencies to stop using the company’s technology and Defense Secretary Pete Hegseth designated Anthropic a supply chain risk—a label previously reserved for foreign adversaries like Huawei. The contradiction lays bare a fundamental tension: advanced AI models have become critical infrastructure for military operations, yet the U.S. government lacks effective mechanisms to control how they’re developed or deployed.
U.S. Central Command used Claude for intelligence assessments, target identification, and battle-scenario simulation during Operation Epic Fury, which struck over 1,000 targets in the first 24 hours. Claude remains the only AI model currently deployed on the Pentagon’s classified networks, making it operationally indispensable even after the ban; one Defense official admitted the transition would be ‘a huge pain in the ass.’
The dispute began after Foreign Policy reported that Anthropic signed a $200 million contract with the Pentagon in July 2025 to deploy Claude within classified systems. Tensions escalated in February after the Wall Street Journal revealed Claude had been used in the January raid to capture Venezuelan President Nicolás Maduro, prompting Anthropic to contact Palantir—its Pentagon integration partner—about usage policy concerns.
The Red Lines That Couldn’t Hold
Anthropic sought contractual prohibitions on using Claude for mass domestic surveillance of Americans or for fully autonomous weapons operating without human control. The Pentagon argued that operational decisions belonged to the military, not private companies, and demanded that all AI models be available for ‘any lawful purpose,’ a stance OpenAI, Google, and Elon Musk’s xAI accepted.
On February 24, Hegseth gave Anthropic CEO Dario Amodei a deadline: relent by 5:01 p.m. on February 27 or face consequences. In a statement Thursday, Amodei said the company ‘cannot in good conscience accede to their request,’ noting that the Pentagon’s proposed contract language was ‘paired with legalese that would allow those safeguards to be disregarded at will.’ Trump responded by directing all federal agencies to immediately cease using Anthropic’s products, while Hegseth designated the company a supply chain risk and barred any defense contractor from conducting ‘any commercial activity’ with Anthropic.
The Legal Battlefield
The supply chain risk designation had never before been applied to an American company; it was designed to address threats from foreign adversaries, not contractual disputes. Anthropic is a U.S. firm whose foreign investors come from allied nations, and, according to the company, it was the first frontier AI company to deploy on classified networks and to cut ties with Chinese Communist Party-linked firms, at a cost of hundreds of millions of dollars in revenue.
Legal experts told Wired and Defense One that the designation requires the government to prove a risk of sabotage by an adversary and to have exhausted less intrusive alternatives—neither of which appears present in the public record. Anthropic notes its restrictions ‘have not affected a single government mission to date.’
Writing for Lawfare, legal scholars argued the designation implicates the major questions doctrine: just as the Supreme Court ruled last month that ‘regulate importation’ doesn’t mean ‘impose tariffs,’ ‘supply chain risk’ from an ‘adversary’ doesn’t mean ‘exclude a domestic AI company over a contract dispute.’ Anthropic has pledged to challenge the designation in court, calling it ‘legally unsound’ and warning it sets ‘a dangerous precedent for any American company that negotiates with the government.’
"Designating this company as a supply chain risk is not an exercise of the authority Congress granted—it’s an exercise of an authority Congress never contemplated."— Lawfare Media legal analysis
OpenAI’s Opportunistic Entry—And Immediate Backlash
Hours after Anthropic’s blacklisting, OpenAI announced it had reached a deal with the Pentagon to deploy its models on classified networks, with CEO Sam Altman claiming the agreement prohibited domestic mass surveillance and preserved human responsibility for autonomous weapons. Days later, Altman admitted OpenAI ‘shouldn’t have rushed’ the deal, calling it ‘opportunistic and sloppy,’ and announced revised contract language clarifying that systems ‘shall not be intentionally used for domestic surveillance of U.S. persons and nationals.’
According to MIT Technology Review, Altman said the key difference was approach: ‘Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with.’ Critics argue OpenAI’s contractual protections may amount to little more than ‘all lawful use’ tied to existing laws and Pentagon policies the government can change at will—giving the appearance of guardrails without meaningful constraint.
Hundreds of Google and OpenAI employees signed petitions calling on their companies to mirror Anthropic’s position. Senator Ron Wyden (D-Ore.) pledged to fight the actions against Anthropic, according to Bloomberg.
The Supply Chain Fracture
Alexander Harstrick, managing partner at J2 Ventures, told CNBC that 10 portfolio companies working with the Defense Department ‘have backed off of their use of Claude for defense use cases and are in active processes to replace the service with another one.’ Defense contractors like Lockheed Martin are expected to remove Anthropic’s technology from their supply chains, Reuters reported.
Palantir, which depends on the government for close to 60% of its U.S. revenue and uses Claude to power sensitive military work, faces operational disruption. Piper Sandler analysts noted Anthropic is ‘heavily embedded in the Military and the Intelligence community,’ warning that while transition ‘can and will happen if needed, Anthropic was a trailblazer in terms of operationalizing AI models for data-sensitive environments.’
Despite the turmoil, Anthropic recently surpassed $19 billion in revenue run rate, up from $9 billion at the end of 2025, driven by strong adoption of its coding tool Claude Code, according to Bloomberg. The company gets about 80% of its revenue from enterprise customers, CEO Amodei told CNBC in January.
The Governance Vacuum
The episode illuminates a policy crisis: no framework exists to adjudicate disputes between AI developers’ safety commitments and government operational demands. As NYU Stern’s Center for Business & Human Rights noted, ‘The Trump administration’s accelerate-at-all-costs approach to AI stands in tension with Anthropic’s stated principles—raising a central question: Who sets the rules for AI when the technology confers a military advantage on governments?’ The standoff sharpens a second question: what happens to companies that try to enforce self-imposed guardrails? If the government responds to principled limits by threatening to cut off the company that imposes them, ‘it sends a clear message to the entire industry: responsibility is a liability.’
Dario Amodei himself acknowledged in an interview that ‘I actually do believe it is Congress’s job’ to regulate AI risks to privacy and civil liberties. Yet as Understanding AI argued, ‘only Congress can put meaningful limits on government abuse of AI. That’s going to take oversight—and eventually legislation. We need ground rules that apply to all government use of AI, regardless of whose models are used.’
Ahead of the deadline, top members of the Senate Armed Services Committee sent a private letter, obtained by ABC News, to Anthropic and the Pentagon urging both sides to extend negotiations and work with Congress toward a solution. No congressional action materialized.
- Operational dependency trumps policy: The military cannot rapidly transition off frontier AI models it has integrated into classified systems, creating de facto vendor lock-in regardless of contractual disputes.
- Contractual safeguards are meaningless when the government can change policies at will: OpenAI’s deal with the Pentagon may offer only ‘all lawful use’ protections without genuine constraints.