Warren Challenges Pentagon Over Anthropic Blacklist as Defense AI Procurement Faces Scrutiny
Formal congressional inquiry into Anthropic's exclusion and OpenAI's simultaneous contract exposes whether national security or ideological alignment drives DOD AI vendor selection.
Senator Elizabeth Warren has formally questioned Defense Secretary Pete Hegseth over the Pentagon’s decision to blacklist Anthropic while simultaneously awarding OpenAI a defense contract, marking the first congressional challenge to DOD AI procurement standards.
The Massachusetts Democrat sent letters to both Hegseth and OpenAI CEO Sam Altman on 23 March seeking details of the procurement process, according to CNBC. Warren characterised the decision as “retaliation” against Anthropic for refusing to remove AI safety guardrails, particularly restrictions on domestic surveillance and fully autonomous weapons systems.
The inquiry centres on a sequence of events in late February that saw Anthropic — previously the first AI contractor cleared for classified military applications — reclassified as a “supply chain risk” on 27 February. Hours after that designation, Altman announced OpenAI’s defense contract on 28 February. The timing, Warren argues, suggests vendor selection based on willingness to grant unrestricted government access rather than security merit.
The Guardrail Dispute
Negotiations between the Pentagon and Anthropic broke down over contract language governing model usage. The DOD demanded access for all “lawful purposes” without company-imposed restrictions, per CNBC. Anthropic sought explicit prohibitions on using its models for fully autonomous weapons and domestic mass surveillance of U.S. persons.
“We cannot in good conscience accede to their request,” Anthropic CEO Dario Amodei stated during the standoff, as reported by CNN Business. Pentagon officials countered that allowing contractors to impose ethical restrictions on government use of lawfully acquired technology sets an unacceptable precedent. “From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes,” a senior Defense Department official told reporters.
The designation marks the first time an American company has been publicly labelled a supply chain risk — a classification historically reserved for foreign adversaries, including Chinese telecommunications firms. Warren’s letter questions whether less severe measures could have ended the relationship without applying a national security designation that effectively bars Anthropic from future federal contracts.
“I am particularly concerned that the DoD is trying to strong-arm American companies into providing the Department with the tools to spy on American citizens and deploy fully autonomous weapons without adequate safeguards.”
— Sen. Elizabeth Warren, D-Mass.
OpenAI’s Revised Terms
OpenAI’s contract, announced immediately after Anthropic’s exclusion, initially appeared to grant similarly unrestricted access. Following internal controversy and external criticism, the company negotiated amended language on 3 March that included surveillance limitations, according to CNBC. The revised terms specify that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals” and prohibit “deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
Altman acknowledged the optics in an internal memo: “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.” The amendment demonstrates that the Pentagon was willing to accept usage restrictions when politically expedient — undermining the stated rationale that Anthropic’s guardrails were incompatible with national security requirements.
Warren’s letter to Altman requests full disclosure of the contract terms, including which provisions differ from Anthropic’s proposed language and whether OpenAI agreed to unrestricted autonomous weapons deployment. The senator noted concerns that “the terms of this agreement may permit the Trump Administration to use OpenAI’s technology to conduct mass surveillance of Americans and build lethal autonomous weapons that could harm civilians with little to no human oversight.”
Legal and Political Fallout
Anthropic filed suit against the Trump administration following the blacklist designation. A preliminary hearing is scheduled for 25 March in the U.S. District Court for the Northern District of California, according to CNBC. Legal analysts at Lawfare argue the designation likely exceeds statutory authority for supply chain risk determinations, which require evidence of foreign influence or technical vulnerability rather than contractual disagreement.
Warren’s inquiry parallels a separate investigation into xAI’s rapid clearance for classified network access in late February, reported by NBC News. The senator has demanded details on the security vetting process that allowed Elon Musk’s AI company to access sensitive systems despite minimal federal contracting history. Taken together, the inquiries suggest Democratic leadership views Pentagon AI procurement as vulnerable to politicisation and corporate favouritism.
The case has created division within the defense technology community. Some contractors view Anthropic’s stance as impractical, arguing that government customers must retain flexibility in how they deploy purchased tools. Others see the Pentagon’s response as setting a precedent that penalises companies for maintaining ethical standards stronger than legal minimums. “If the DOD is essentially punishing companies for having stricter safety standards, that sends a chilling message about what the government values in its AI partners,” an AI policy researcher told TechBuzz.ai.
- Congressional oversight of classified AI contracts now extends to vendor selection criteria and ethical guardrail enforcement
- Anthropic’s supply chain designation creates legal precedent for penalising companies that impose usage restrictions beyond statutory requirements
- OpenAI’s amended contract demonstrates Pentagon willingness to accept surveillance limitations when politically necessary, contradicting stated rationale for Anthropic exclusion
- Defense AI contractors face reputational trade-offs between unrestricted government access and public commitments to responsible AI development
What to Watch
The 25 March preliminary hearing will test whether Anthropic can secure an injunction blocking the supply chain designation pending full litigation. If the court finds the Pentagon exceeded its authority, it could force reinstatement and establish judicial limits on using national security classifications to resolve commercial contract disputes.
Warren’s letters request responses by 6 April, with specific demands for unredacted contract language, internal communications regarding vendor selection, and documentation of the security assessment that justified Anthropic’s blacklist. Whether the Pentagon provides substantive responses or invokes classified information privileges will signal how seriously the administration takes congressional AI procurement oversight.
The broader question remains whether AI companies can maintain ethical red lines while pursuing government contracts — or whether Pentagon demands for unrestricted “lawful use” will force firms to choose between defense revenue and public safety commitments. OpenAI’s post-announcement amendments suggest those demands are negotiable; Anthropic’s exclusion suggests the price of holding firm may be federal market access.