Anthropic CEO Rejects Pentagon’s Final AI Ultimatum
Dario Amodei holds firm on safeguards as Friday deadline approaches, marking first major refusal of unrestricted military access to frontier models.
Anthropic CEO Dario Amodei on Thursday rejected the Pentagon’s final offer to grant unrestricted military access to the company’s Claude AI system, declaring the company “cannot in good conscience” accept terms that remove safeguards against mass surveillance and autonomous weapons. The standoff sets a 5:01 PM EST Friday deadline for a resolution that could see the AI firm designated a supply chain risk—a label typically reserved for adversaries like China.
The dispute crystallizes tensions over who controls how advanced AI is deployed for national security. Anthropic signed a $200 million contract with the Department of Defense in July, becoming the first lab to integrate its models into mission workflows on classified networks. But the startup wants assurance that its models will not be used for fully autonomous weapons or mass domestic surveillance, while Defense Secretary Pete Hegseth has threatened to label Anthropic a “supply chain risk” or to invoke the Defense Production Act to force the company to comply.
The Red Lines
“The contract language we received overnight from the Department of War made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons,” Anthropic said in a statement, adding that “new language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.” For months, CEO Dario Amodei has insisted that Anthropic’s AI model, Claude, must not be used for mass surveillance in the U.S. or to power entirely autonomous weapons, describing those uses as “entirely illegitimate” and “bright red lines” for the company.
The Pentagon maintains it has no such intentions. Chief Pentagon spokesman Sean Parnell said Thursday that the Department of Defense has “no interest” in using Anthropic’s models for fully autonomous weapons or to conduct mass surveillance of Americans, which he noted is illegal. But according to CNBC, the agency wants the company to agree to allow its models to be used for “all lawful purposes”—without company-imposed restrictions.
“These threats do not change our position: we cannot in good conscience accede to their request.”
— Dario Amodei, Anthropic CEO
The Pentagon’s Leverage
Hegseth met with Amodei at the Pentagon on Tuesday, giving Anthropic until Friday evening to agree to his agency’s demands. The threats carry real weight: a supply-chain-risk designation would require any company that works with the U.S. military to prove that its Pentagon work does not touch Anthropic’s technology. Much of Anthropic’s success stems from its enterprise contracts with large companies—many of which may hold contracts with the Pentagon.
The Department of Defense has threatened to force Amodei’s hand by either labeling Anthropic a supply chain risk—a designation reserved for foreign adversaries—or invoking the Defense Production Act, which gives the president the authority to force companies to prioritize or expand production for national defense. As Axios reported, Emil Michael, the Pentagon official handling negotiations with Anthropic, denounced Amodei as a “liar” with a “God complex” who was “putting our nation’s safety at risk.”
The Competitive Landscape
Anthropic’s refusal creates an opening for competitors. Its rivals OpenAI, Google and xAI were also awarded contracts of up to $200 million from the Department of Defense last year, and those companies have agreed to let the DoD use their models for all lawful purposes within the military’s unclassified systems. According to CNN, xAI recently signed a contract under the “all lawful purposes” standard for classified work, though Grok is not viewed as being as advanced as Claude.
The tactical disadvantage for the Pentagon is real. “The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good,” a Defense official told Axios. Cutting ties would require the Pentagon to have a replacement ready for Claude, currently the only model used in its classified systems.
| Company | Contract Status | Classified Access | Restrictions |
|---|---|---|---|
| Anthropic | $200M (at risk) | Active (sole provider) | Refuses “all lawful purposes” |
| xAI | $200M | Approved this week | Accepted all terms |
| OpenAI | $200M | Negotiating | Unclassified only |
| Google | $200M | Negotiating | Unclassified only |
The Legal Gray Zone
The Pentagon’s dual threat strategy faces legal questions. Amodei said the Pentagon’s threats “are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” Legal experts are skeptical. Katie Sweeten, a former Justice Department liaison to the Department of Defense, said she’s not sure how the Pentagon can both declare a company a supply chain risk and compel that same company to work with the military: “I would assume we don’t want to utilize the technology that is the supply chain risk, right? So I don’t know how you square that.”
The Defense Production Act invocation also sits on uncertain ground. According to analysis from Lawfare, neither side’s argument is “a slam dunk,” though the sheer breadth of the DPA’s allocation language arguably tilts in the government’s direction. Anthropic could theoretically take the administration to court, arguing that it is not providing the sort of commercially available product for which the DPA can be used to expedite production, but rather custom-built software already tailored to sensitive government uses.
Industry Precedent
The standoff represents the first major test of whether AI companies can maintain independent positions on deployment ethics when faced with national security demands. Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021. The company also recently announced it is giving $20 million to a political group campaigning for more regulation of AI.
Congressional reaction has been mixed. Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said Thursday that the Pentagon has been handling the matter unprofessionally while Anthropic is “trying to do their best to help us from ourselves,” asking: “Why in the hell are we having this discussion in public? This is not the way you deal with a strategic vendor that has contracts.” Meanwhile, according to ABC7, Sen. Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was “deeply disturbed” by reports that the Pentagon is “working to bully a leading U.S. company,” adding: “Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance.”
The dispute occurs amid broader regulatory fragmentation. Multiple state AI laws took effect in January 2026, including Colorado’s AI Act and California’s Transparency in Frontier AI Act. Meanwhile, a December 2025 executive order from President Trump seeks to preempt state regulations and establish a ‘minimally burdensome’ national framework—though legal experts question whether federal authority extends to overriding state AI governance.
What to Watch
The 5:01 PM Friday deadline will determine whether Anthropic maintains its position or capitulates. If the Pentagon follows through on its threats, expect litigation over both the DPA invocation and supply chain risk designation. The latter could face particular scrutiny given the designation is usually reserved for companies seen as extensions of foreign adversaries like Russia or China.
Other AI labs are watching closely. If Anthropic successfully resists, it establishes a precedent for company-imposed ethical boundaries on government contracts. If the Pentagon prevails—either through legal compulsion or Anthropic backing down—expect rapid consolidation around “all lawful purposes” language across the industry. The Pentagon’s threat also sends a message to other AI companies hoping to make millions selling their services to the government: do not attempt to place any restrictions on how your AI is used.
Monitor whether Congress intervenes with clarifying legislation on military AI use. The public nature of this dispute—unusual for sensitive national security contracts—suggests both sides believe they have political support. The outcome will shape not just how the Pentagon acquires AI, but whether the companies building it can set limits on how it is used.