Pentagon Threatens to Blacklist Anthropic in Escalating AI Safety Standoff
Defense Secretary Pete Hegseth has given the AI company until Friday to remove restrictions on Claude's military use or face designation as a supply chain risk—a move typically reserved for foreign adversaries.
The Pentagon has issued a Friday deadline for Anthropic to grant the U.S. military unrestricted access to its Claude AI system, threatening to invoke the Defense Production Act or designate the company a supply chain risk if it refuses—an escalation that exposes deepening tensions over who controls guardrails on frontier AI in national security contexts.
According to Axios, Defense Secretary Pete Hegseth delivered the ultimatum during a Tuesday meeting with Anthropic CEO Dario Amodei, giving the company until 5:01pm Friday to comply. The standoff centers on Anthropic’s insistence on maintaining two red lines: prohibiting Claude from being used for mass domestic surveillance of Americans and barring its deployment in fully autonomous weapons systems that can kill without human oversight. The Pentagon demands “all lawful use” authority, free of contractual restrictions.
The conflict intensified after reports that Claude was used during the January operation to capture Venezuelan President Nicolás Maduro. NBC News reported that Anthropic allegedly inquired through its partner Palantir whether the AI had been deployed during the lethal raid, a move Pentagon officials viewed as an unacceptable attempt to audit classified military operations. The Department of Defense insists that usage decisions belong to the military as the end user, not to the vendor.
The Nuclear Option: Supply Chain Risk Designation
The Pentagon has taken concrete steps toward blacklisting. Axios reported Wednesday that the Department of Defense contacted Boeing and Lockheed Martin to assess their exposure to Anthropic’s technology—the first step in a supply chain risk designation process. This label, historically reserved for Chinese firms like Huawei and Russian entities like Kaspersky Lab, would prohibit defense contractors from using Claude in any military work.
Anthropic holds a $200 million contract awarded in July 2025. Claude is currently the only frontier AI model deployed on the Pentagon’s classified networks through a partnership with Palantir. OpenAI, Google, and Elon Musk’s xAI received similar $200 million contracts but operate only in unclassified environments—though xAI recently agreed to classified deployment terms with no usage restrictions.
The alternative enforcement mechanism, invoking the Defense Production Act, would compel Anthropic to provide Claude without its safety guardrails under national security authority. Lawfare notes the legal analysis is contested: the government argues it’s demanding the same product on different contractual terms, while Anthropic could argue it’s being forced to create a fundamentally different system by stripping out safety features.
The Hypothetical Missile Scenario
Pentagon officials have pressed their case through extreme scenarios. Bloomberg reported that in a December call, a senior defense official posed a hypothetical to Amodei: what if a nuclear-armed ICBM were hurtling toward the U.S. with 90 seconds until impact, Claude were the only way to trigger a response, and Anthropic’s safeguards blocked it?
Anthropic has offered carveouts. NBC News confirmed that in December contract negotiations, the company agreed to allow its systems to be used for missile defense and cyber defense. But the Pentagon rejected this compromise, arguing it’s “unworkable” to litigate individual use cases with a private vendor during operational decision-making.
The company cited reliability concerns. According to CBS News, Anthropic sources said Claude is “not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure without human judgment.”
A Sudden Safety Policy Reversal
In a striking coincidence, Anthropic announced Tuesday—the same day as the Hegseth meeting—that it was abandoning its Responsible Scaling Policy commitment to never train AI models it couldn’t guarantee were safe. CNN reported the company now adopts “nonbinding but publicly-declared” safety goals rather than firm internal standards.
The company insists the policy change is unrelated to the Pentagon dispute. Chief Science Officer Jared Kaplan told Time that “unilateral commitments” no longer made sense if “competitors are blazing ahead.” The company cited competitive pressure and noted its previous policy was “out of step with Washington’s current anti-regulatory political climate.”
The Competitive Landscape Shifts
Anthropic’s rivals are falling in line with the Pentagon’s demands. Axios confirmed that xAI agreed to the “all lawful use” standard and recently secured approval for classified deployment. CBS News reported that a senior Pentagon official said Grok “is on board with being used in a classified setting, and other AI companies are close.”
The Pentagon’s dependence on Claude creates leverage but also vulnerability. According to TechCrunch, security analyst Dean Ball noted: “If Anthropic canceled the contract tomorrow, it would be a serious problem for the DOD,” citing a Biden-era National Security Memorandum directing agencies to avoid single-vendor dependence for classified AI systems.
| Company | Contract Value | Classified Access | “All Lawful Use” Agreement |
|---|---|---|---|
| Anthropic | $200M | Yes (only model deployed) | No |
| xAI | $200M | Approved | Yes |
| OpenAI | $200M | Negotiating | Status unclear |
| Google | $200M | Negotiating | Status unclear |
The administration has framed Anthropic’s position as “woke AI.” NPR reported that White House AI czar David Sacks has publicly attacked the company for representing the “doomer industrial complex” and accused it of “a sophisticated regulatory capture strategy based on fearmongering.”
What to Watch
The Friday 5:01pm deadline will determine whether the Pentagon follows through on its threats or negotiates a compromise. If the supply chain risk designation proceeds, defense contractors including Amazon Web Services, Palantir, and Anduril—which rely on Claude—would face immediate compliance burdens. International ramifications could follow: European allies including the UK maintain large contracts with these firms.
Three scenarios appear most plausible: Anthropic builds a separate military version of Claude without the contested guardrails, bifurcating its product line; the company stands firm and faces contract termination plus blacklisting, potentially derailing its planned IPO and $14 billion revenue trajectory; or the two sides settle on a partial compromise allowing case-by-case usage reviews for sensitive applications.
The standoff will set a precedent for whether AI companies can impose ethical constraints on government customers or whether national security imperatives override corporate usage policies. With congressional AI regulation stalled, a Korean War-era production statute may end up defining the boundaries of military AI deployment, an outcome legal scholars warn is inadequate for technology this consequential.