Trump Orders Federal Ban on Anthropic After AI Startup Refuses Pentagon’s Unrestricted Access Demand
Defense Secretary designates Claude maker a supply chain risk, banning military contractors from working with the company in unprecedented escalation over autonomous weapons and surveillance guardrails.
President Trump ordered all federal agencies to cease using Anthropic’s AI technology Friday after the safety-focused startup refused to remove restrictions on mass surveillance and autonomous weapons, marking the first time a U.S. administration has designated a major American AI company a national security risk.
According to NPR, Defense Secretary Pete Hegseth simultaneously declared Anthropic a supply chain risk, meaning no contractor, supplier, or partner doing business with the U.S. military may conduct commercial activity with the company. The ban includes a six-month phaseout period for agencies already deploying Claude, Anthropic’s flagship AI model, which has been integrated into classified networks at the Pentagon and national laboratories.
The dispute centers on two safeguards CEO Dario Amodei refused to abandon: preventing Claude from conducting mass domestic surveillance of Americans and blocking its use in fully autonomous weapons systems that select and engage targets without human oversight. Anthropic stated these restrictions were “never a barrier to accelerating the adoption and use of our models within our armed forces to date,” yet the Pentagon demanded “any lawful use” language in all AI contracts.
The Week That Broke
Hegseth gave Anthropic until 5:01 p.m. ET Friday to comply or face consequences. The deadline passed without agreement. Trump posted on Truth Social that Anthropic had made a “disastrous mistake” trying to “strong-arm the Department of War,” calling the company “radical left” and “woke.” According to CBS News, Pentagon officials threatened to invoke the Defense Production Act—a Cold War-era law never before used to compel a company to remove safety features from a product.
CNN reports that Anthropic’s Claude was the first AI model to work on military classified networks and has been used for intelligence analysis, operational planning, and cyber operations. The $200 million Pentagon contract represents a small fraction of Anthropic’s $14 billion revenue run rate, but the supply chain risk designation threatens a far broader customer base. Many enterprise clients—from aerospace giants to financial institutions—hold current or prospective defense contracts.
Precedent and Power
The supply chain risk designation has traditionally been reserved for foreign adversaries—Chinese telecom Huawei, Russian cybersecurity firm Kaspersky. Axios notes that applying this label to a San Francisco-based AI startup valued at $380 billion represents an unprecedented use of procurement authority against an American technology company. According to the Center for American Progress, the Pentagon contacted defense contractors Boeing and Lockheed Martin about their use of Claude ahead of the announcement, signaling enforcement intent.
The Defense Production Act threat adds another layer. Legal experts told Lawfare that using the DPA to compel a company to retrain AI models or strip contractual restrictions would face major legal challenges. The statute’s Title I compulsion powers were designed for steel mills and tank factories during the Korean War, not for forcing technology companies to modify algorithmic behavior.
“We cannot in good conscience accede to their request. Frontier AI systems are simply not reliable enough to power fully autonomous weapons.”
— Dario Amodei, Anthropic CEO
Pentagon officials disputed Anthropic’s characterization. Undersecretary Emil Michael called Amodei a “liar” with a “God complex” on X, accusing him of wanting “to personally control the U.S. military.” Chief Pentagon spokesman Sean Parnell stated the department has “no interest” in mass surveillance or autonomous weapons, both of which are illegal or restricted under current policy. The dispute appears to hinge on who defines operational boundaries: a private company’s acceptable use policy or military commanders’ interpretation of lawful orders.
Industry Ripples
OpenAI CEO Sam Altman told employees Friday his company shares Anthropic’s concerns about Pentagon work, according to Fortune. Yet xAI—owned by Elon Musk, Trump’s largest 2024 campaign donor—signed a deal Monday allowing its Grok chatbot to be used on classified networks for “any lawful use.” Three other AI companies (OpenAI, Google, xAI) each hold $200 million Pentagon contracts awarded in July 2025. None faces similar pressure, though the administration’s signal is clear: guardrails are negotiable, or you’re out.
Anthropic was founded in 2021 by former OpenAI executives, including siblings Dario and Daniela Amodei, who left over disagreements about AI safety governance. The company has positioned Claude as the “Constitutional AI” alternative—trained with explicit ethical constraints. It raised $30 billion at a $380 billion valuation in February 2026 and is preparing for a potential IPO within two years. Major investors include Amazon, Google, and Spark Capital.
Former Pentagon AI chief Jack Shanahan, who led Project Maven (the 2018 Google drone analysis program that sparked employee protests), told reporters he sympathizes with Anthropic. Current AI systems aren’t reliable enough for fully autonomous weapons, he argued. But former Trump administration AI adviser Dean Ball called the supply chain designation “corporate murder,” warning it would make the U.S. “uninvestable” for AI startups.
Senator Mark Warner (D-VA), vice chair of the Senate Intelligence Committee, accused the administration of “bullying” Anthropic to deploy “AI-driven weapons without safeguards.” The company has given no indication it will challenge the designation in court, stating only that it will “enable a smooth transition to another provider, avoiding any disruption to ongoing military planning.”
What to Watch
The six-month phaseout window ends in August 2026. Pentagon officials told GovInfoSecurity that replacing Claude could create a “six-month to one-year capability gap” while alternatives catch up to Claude’s existing integration across classified systems. Whether Anthropic’s enterprise customers—many with defense ties—will abandon Claude to preserve government eligibility remains the immediate business question. The company’s IPO timeline, already ambitious, now faces regulatory and reputational headwinds.
Broader implications extend beyond one company. Congress must reauthorize the Defense Production Act by September 30, 2026. How lawmakers respond to its threatened use against AI safety practices could define whether private companies retain any authority to set ethical boundaries on government technology use. The AI Action Plan Hegseth referenced directs all Pentagon AI contracts to include “any lawful use” language within 180 days—a clock that started January 9. Other AI firms must now decide whether safety commitments survive contact with procurement reality.
Internationally, the U.S. skipped the third Responsible AI in the Military Domain summit in February, where 60 countries discussed autonomous weapons governance. This week’s clash suggests Washington’s answer: market competition, not multilateral guardrails, will determine how AI enters warfare. China and Russia, already developing military AI without public safety theater, are watching.