Trump Blacklists Anthropic Across Federal Government, Setting Precedent for Political AI Procurement
Pentagon supply chain designation expands to a government-wide ban after the company refused to drop autonomous weapons safeguards; the first such action against a US AI company threatens billions in enterprise revenue.
President Donald Trump ordered all federal agencies to cease using Anthropic’s AI technology on February 27, 2026, escalating a Pentagon contract dispute into an unprecedented government-wide ban that could cost the startup multiple billions in 2026 revenue. The directive followed Defense Secretary Pete Hegseth’s designation of Anthropic as a supply chain risk to national security, a label historically reserved for foreign adversaries like Huawei.
Anthropic signed a $200 million contract with the Department of Defense in July 2025 and was the first AI lab to deploy its technology across the agency’s classified networks. The partnership unraveled after CEO Dario Amodei rejected the Pentagon’s demand to lift all safeguards on the military’s use of its model, Claude, citing concerns about mass domestic surveillance and autonomous lethal weapons. In January, Hegseth’s AI strategy memo directed that all Defense Department contracts adopt standard “any lawful use” language, requiring the removal of vendor restrictions.
Claude is the only AI model currently used in the military’s classified systems. It was used in the operation to capture Nicolás Maduro in January 2026 and reportedly in military operations in Iran. Defense officials praised Claude’s capabilities, with one admitting it would be a “huge pain in the ass” to disentangle.
Enterprise Fallout Accelerates
The commercial impact materialized within days. One partner with a multi-million-dollar annual agreement replaced Claude with a competing model for a deployment connected to the FDA, eliminating an anticipated revenue pipeline valued at more than $100 million, according to Yahoo Finance. Negotiations with three financial institutions have been affected, including a deal that had been close to completion and two additional agreements worth more than $80 million combined.
At least 100 customers, from pharma to fintech companies, have asked to pause or cancel their contracts, Anthropic lawyer Michael Mongan told a federal court this week. Alexander Harstrick, managing partner at J2 Ventures, said 10 portfolio companies working with the Department of Defense “have backed off of their use of Claude for defense use cases”.
Microsoft said its lawyers concluded that Anthropic products can remain available to customers outside the Defense Department through platforms such as M365, GitHub, and AI Foundry, per CNBC. Amazon and Google likewise confirmed they plan to keep Anthropic’s technology integrated into their products for clients, with the same exclusion.
Legal and Statutory Ambiguity
Anthropic sued the Trump administration on March 9, alleging the actions are “unprecedented and unlawful” and “harming Anthropic irreparably”. The lawsuit names more than a dozen federal agencies as defendants, according to CNBC.
Legal experts say the designation exceeds what the statute authorizes, the required findings don’t hold up, and Hegseth’s public statements may have doomed the government’s litigation posture, per Lawfare. Hegseth’s statement accused Anthropic of “arrogance and betrayal,” “duplicity,” and “corporate virtue-signaling,” framing the designation as a response to the company’s attempt to “seize veto power over the operational decisions of the United States military”.
“The president’s directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”
— Sen. Mark Warner (D-Va.), Vice Chair, Senate Select Committee on Intelligence
The directive arrived via social media; the formal letter followed a week later and consisted of two paragraphs of boilerplate with no substantive analysis. There is no FAR clause, no compliance deadline, no representation requirement.
xAI and OpenAI Fill Vacuum
Elon Musk’s xAI signed an agreement to allow the military to use its model, Grok, in classified systems, Axios reported on February 23. xAI agreed to the Pentagon’s “any lawful use” standard. A former Pentagon contracting official said the xAI contract “came out of nowhere” when other companies had been under consideration for months, and analysts noted xAI did not “have the kind of reputation or track record that typically leads to lucrative government contracts”.
Hours after Trump’s announcement, OpenAI said it had struck a deal with the Defense Department to provide its technology for classified networks. OpenAI CEO Sam Altman wrote in a memo that the company has “long believed that AI should not be used for mass surveillance or autonomous lethal weapons”, though it is unclear whether the Pentagon is still insisting on terms permitting the legal collection of public personal data in its deal with OpenAI.
Civilian Agency Transitions
Multiple federal agencies acted on Trump’s directive: the General Services Administration terminated Anthropic’s OneGov contract, while the Treasury Department, the Federal Housing Finance Agency, the State Department, and Health and Human Services issued separate directives halting use. HHS confirmed it is phasing Anthropic solutions out of agency workflows while other tools, such as ChatGPT Enterprise and Google Gemini, remain available.
Treasury, HHS, and State have confirmed to CNBC that they are moving off Claude. The transition is especially complicated within the Defense Department because the U.S. is actively carrying out a military operation in Iran.
Ideological Procurement Precedent
Experts noted an apparent disconnect between labeling one of America’s largest AI companies a supply chain risk while refraining from applying the same label to DeepSeek, a leading Chinese AI company. “We’re treating an American AI company worse than we’re treating a Chinese Communist Party-controlled AI company,” said Michael Sobolik of the Hudson Institute.
Pentagon officials acknowledged on the record that the supply chain designation against Anthropic is “ideological” and backed by “no evidence of supply-chain risk,” with one official telling Defense One the goal was to “make sure they pay a price” for refusing demands.
- First use of national security procurement tools against a US AI company for refusing contract terms rather than for foreign adversary ties
- Creates a chilling effect for AI vendors: accept “any lawful use” terms or risk a federal blacklist and an enterprise customer exodus
- Exposes the gap between executive social media directives and formal statutory authority, leaving contractors navigating legal ambiguity
- Positions xAI and OpenAI as preferred vendors willing to accept fewer restrictions on classified government use
What to Watch
Anthropic’s court challenge will test whether supply chain risk statutes can be weaponized against domestic companies over contractual disputes rather than foreign adversary threats. A hearing on whether to grant Anthropic temporary relief is set for March 24. The outcome will determine whether AI vendors can maintain use restrictions when contracting with the government, or whether “any lawful use” becomes a de facto requirement for federal business.
The broader vendor ecosystem is recalibrating risk. Defense contractors must now assess whether commercial use of blacklisted AI tools could jeopardize federal contracts, even if unrelated to government work. The General Services Administration issued draft AI contract terms outlining government use rights and disclosure requirements, with comments due March 20. Those provisions will clarify how widely procurement restrictions extend beyond direct agency use.
For US-China AI competition, the episode reveals strategic incoherence: the administration designated an American frontier AI company a national security threat while Chinese competitors remain accessible. Whether that reflects genuine security calculus or political retaliation will shape how allies and adversaries interpret US technology sovereignty claims.