Pentagon bars Anthropic from defense contracts as AI safety debate enters courtroom
First-ever federal supply chain risk designation against a U.S. AI company reshapes $15B defense market and tests limits of government procurement power.
The U.S. Department of Defense formally designated Anthropic a supply chain risk on February 27, 2026, terminating the company’s $200 million Pentagon contract and marking the first federal action blocking a leading American AI firm from defense work.
The designation stems from Anthropic’s refusal to remove safeguards against mass surveillance and autonomous weapons systems, setting up a legal and strategic clash that could define how AI safety standards intersect with national security for years to come. Hours after the ban took effect, OpenAI secured a Pentagon contract to supply 3 million Defense Department employees with AI tools, NPR reported.
The guardrails dispute
The conflict centers on language Defense Secretary Pete Hegseth demanded in all Pentagon AI contracts: full, unrestricted access to models for “every lawful purpose in defense of the Republic.” Anthropic CEO Dario Amodei refused, telling CNN the company “cannot in good conscience accede to their request.”
Amodei cited two red lines in his public statement: first, that frontier AI models lack the reliability required for fully autonomous weapons, endangering warfighters and civilians; and second, that mass domestic surveillance of Americans violates fundamental rights. The Pentagon’s January 9, 2026 AI strategy memo mandates the “any lawful use” clause in all Defense Department AI contracts within 180 days, effectively removing ethics guardrails from procurement requirements, per Defense One.
> “America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.”
>
> — Pete Hegseth, Defense Secretary
The formal supply chain risk letters, dated March 3, 2026, marked the first time the designation has been applied to a U.S. company, Mayer Brown noted in legal analysis. The designation flows from statutes embedded in the National Defense Authorization Act, giving the Pentagon broad authority to exclude contractors deemed threats to supply chain integrity.
Competitive realignment accelerates
OpenAI moved swiftly to capitalize on Anthropic’s exclusion. Beyond the February 27 Pentagon deal, the company announced a partnership with Amazon Web Services on March 17 to deliver AI tools across the federal government through AWS GovCloud, TechCrunch reported. OpenAI CEO Sam Altman publicly stated he hoped the Defense Department would offer Anthropic the same contract terms his company accepted, but the Pentagon rejected that approach.
Defense-focused AI contractors are already capturing the gains. Palantir Technologies reported Q4 2025 revenue of $1.41 billion, up 70% year-over-year, with government segment revenue rising 66%. The company issued fiscal 2026 guidance of $7.18-7.20 billion against analyst estimates of $6.22 billion, CNBC reported. Palantir shares surged 14% in early March following live-fire validation of its AI systems in Middle East operations, trading at $152.25 as of March 17 with a price-to-earnings ratio of 238.
| Company | Contract Value | Announcement Date |
|---|---|---|
| Palantir | $10B (Army ESA) | Late 2025 |
| Anduril | $20B (10-yr ceiling) | March 2026 |
| OpenAI | Undisclosed | Feb 27, 2026 |
Anduril Industries secured a $20 billion Army contract vehicle with an $87 million initial task order in March, mirroring Palantir’s contract structure, Investing.com reported. The pattern points to consolidation around vendors willing to accept unrestricted-use terms, narrowing the field at precisely the moment when Pentagon AI spending is accelerating.
Legal challenge and regulatory precedent
Anthropic filed two federal lawsuits on March 9 challenging the designation, arguing the Pentagon violated both statutory procedure and constitutional protections. The company contests whether its safety policies constitute a legitimate supply chain risk under 10 U.S.C. § 3252 and the Federal Acquisition Supply Chain Security Act, Federal News Network reported.
The case arrives as Congress mandates new AI security frameworks. Section 1513 of the fiscal 2026 National Defense Authorization Act requires the Defense Department to develop AI and machine learning cybersecurity standards, which will be incorporated into the Defense Federal Acquisition Regulation Supplement and Cybersecurity Maturity Model Certification by 2027. Those frameworks could become de facto industry standards, extending Pentagon requirements into commercial AI development.
The Pentagon’s AI strategy memo gives contractors 180 days from January 9, 2026 to accept “any lawful use” language—a July deadline that coincides with Anthropic’s federal court proceedings. If Anthropic prevails in court, the Defense Department may face constraints on how broadly it can apply supply chain risk designations to enforce policy compliance beyond traditional security threats.
Joe Scheidler, former White House adviser and CEO of AI startup Helios, told CNBC that private companies should not hold leverage over the U.S. government because of technological capability. That view reflects a Pentagon calculation: the Defense Department will accept vendor concentration risk to avoid operational constraints from AI safety guardrails.
Geopolitical stakes escalate
China’s People’s Liberation Army is deploying AI across seven priority areas including autonomous vehicles, intelligence and surveillance, predictive maintenance, information warfare, simulation, command and control, and target recognition, Center for a New American Security testimony to Congress shows. That integration creates pressure on U.S. military leadership to accelerate AI adoption, even as safety advocates warn current models lack the reliability for high-stakes applications.
The Anthropic dispute highlights a divergence in how democracies and authoritarian states approach military AI governance. The United States and Russia voted against a United Nations resolution on responsible military AI in October 2025. At the Responsible AI in the Military summit in February 2026, 35 nations backed military AI rules, but neither the U.S. nor China endorsed the framework, Mexico Business News reported.
The dispute’s broader implications are coming into focus:

- Pentagon procurement power is now a tool for enforcing AI policy compliance, not just security vetting
- The defense AI market is consolidating around unrestricted-use vendors as guardrail requirements disappear
- The U.S. is weakening domestic AI safety practices as China accelerates military integration without constraints
- Legal precedent from the Anthropic case will define the limits of supply chain risk authority across federal agencies
Anthropic’s commercial momentum, meanwhile, appears undented by the exclusion. Its annualized revenue reached approximately $19 billion in March 2026, up from $14 billion in February, Sacra reported. The company closed a Series G funding round in February at a $380 billion post-money valuation, vaulting it alongside OpenAI and SpaceX among the largest private companies globally, Fortune noted. The company has engaged Wilson Sonsini for IPO preparation, though public offering timing remains uncertain.
What to watch
The federal court rulings on Anthropic’s lawsuits will establish precedent on whether the Pentagon can use supply chain risk designations to compel policy compliance beyond traditional security threats. Oral arguments are expected by mid-2026, with initial rulings likely before year-end. If Anthropic prevails, the Defense Department may need congressional authorization to enforce the “any lawful use” mandate through procurement exclusions.
Other federal agencies—particularly the State Department and Commerce Department—are monitoring the case to determine whether supply chain risk authority extends to their contractor bases. A Pentagon win would likely trigger designation threats across the government to enforce AI policy alignment, expanding the precedent beyond defense contracts.
How other AI companies approach defense work will become clear quickly. Google DeepMind, Cohere, and smaller AI labs must now decide whether to accept unrestricted-use terms or forfeit access to the federal defense market. Companies that decline will face investor pressure as competitors capture government revenue, while those that accept may encounter backlash from employees and civil society groups.
Congressional oversight is already intensifying. House Armed Services Committee members have requested briefings on how the Pentagon determined Anthropic posed a supply chain risk and whether the designation process followed statutory requirements. Senate Commerce Committee staff are examining whether new legislation is needed to clarify the boundaries of procurement-based policy enforcement. The outcome will determine whether procurement power becomes a standard tool for federal AI governance.