Trump Administration Defends Anthropic Blacklist as Constitutional Test of AI Procurement Power
Federal court case tests whether presidents can weaponize supply chain designations against domestic tech firms over policy disagreements, with billions in contracts at stake.
The Trump administration filed its defense of Anthropic’s Pentagon blacklisting on March 17, arguing the AI firm’s refusal to remove use restrictions constitutes unprotected conduct rather than speech—a framing that could determine whether the government can exclude American companies from federal contracts over ethical positions.
The legal showdown began when Defense Secretary Pete Hegseth designated Anthropic a supply chain risk on March 3, following Trump’s February 27 directive ordering federal agencies to immediately cease use of the company’s Claude AI. Anthropic sued on March 9, challenging what it called an unprecedented weaponization of an authority designed to block foreign adversaries, not to punish domestic firms for refusing to modify product safety controls.
The government’s position hinges on a narrow distinction. “It was only when Anthropic refused to release the restrictions on the use of its products—which refusal is conduct, not protected speech—that the President directed all federal agencies to terminate their business relationships with Anthropic,” the Justice Department argued, according to U.S. News & World Report. The administration maintains that no attempt was made to restrict Anthropic’s expressive activity, only its ability to contract with federal agencies while maintaining what the Pentagon views as operationally restrictive terms.
The Supply Chain Risk Precedent
Supply chain risk designations were created to exclude contractors tied to foreign adversaries—think Chinese telecommunications firms with potential backdoor access to U.S. systems. According to Al Jazeera, this marks the first known use of the authority against a domestic American company. Former CIA director Michael Hayden and retired military leaders filed an amicus brief calling the move “a profound departure from its intended purpose” that “sets a dangerous precedent.”
The designation followed Anthropic’s refusal to meet a February 27 deadline to drop contractual prohibitions on using Claude for fully autonomous weapons or mass domestic surveillance. The company had signed a $200 million Pentagon contract in July 2025—becoming the first AI frontier lab permitted on classified government networks—with explicit assurances these use cases would remain off-limits. When the Pentagon demanded removal of those restrictions for “all lawful purposes,” Anthropic declined. One Pentagon official told Defense One the goal was to “make sure they pay a price.”
“America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.”
Pete Hegseth, Defense Secretary
Market Redistribution and Competitive Dynamics
Anthropic CFO Krishna Rao estimated in a court filing that the blacklist could reduce 2026 revenue by “multiple billions of dollars,” according to CNBC. The company projected $14 billion in total 2026 revenue, primarily from business and government customers using Claude for coding and non-military applications. The supply chain designation doesn’t just terminate the Pentagon contract—it signals to federal agencies and contractors that Anthropic carries regulatory risk, potentially triggering a customer exodus beyond direct government work.
That revenue doesn’t vanish; it redistributes. The Pentagon awarded $200 million contracts each to Anthropic, OpenAI, Google, and xAI in July 2025, according to Procurement Sciences. With Anthropic sidelined, OpenAI and Google—neither of which imposed comparable use restrictions—stand to absorb displaced workloads across a federal AI market projected at over $3.3 billion in FY 2025 spending. Google’s existing General Services Administration inclusion and Impact Level 6 authorization position it particularly well for classified work migration.
Constitutional Boundaries and Statutory Authority
Anthropic’s lawsuit frames the issue starkly: “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.” According to U.S. News & World Report, legal experts suggest the company has a strong case that the government overreached. The conduct-versus-speech distinction becomes crucial: if courts accept that contractual terms expressing safety positions constitute pure conduct, the government gains broad latitude to exclude firms holding disfavored policy stances. If those terms are treated as expressive activity, First Amendment scrutiny applies.
The government faces a credibility problem. Hegseth’s public statement that “America’s warfighters will never be held hostage by the ideological whims of Big Tech” undercuts the claim that designation was a neutral supply chain assessment rather than retaliation for Anthropic’s position. According to Lawfare, these statements “undermine the government case” by revealing punitive intent.
The designation mirrors tactics used against Xiaomi and Luokung, Chinese firms accused of military ties. Both successfully challenged their blacklistings in U.S. courts on procedural grounds. Unlike those firms, Anthropic is a domestic company with no foreign adversary connection, making the statutory authority question more acute. Courts have historically been skeptical of executive attempts to expand national security powers beyond their intended scope, particularly when applying them to domestic entities raises constitutional questions.
Broader Implications for Tech Sector Control
The case’s resolution will determine whether ethical product limitations constitute disqualifying conduct in federal procurement. If the administration prevails, any tech firm maintaining use restrictions—on facial recognition, autonomous weapons, surveillance applications—risks similar designation. The precedent would extend beyond AI to cloud infrastructure, where providers like Amazon Web Services and Microsoft Azure serve intelligence agencies while maintaining some customer protection policies.
OpenAI offers a contrast case. The company works extensively with defense and intelligence agencies through frameworks described in Nextgov, but structures agreements as agency-controlled deployments rather than blanket use authorizations. Whether this architectural difference—allowing agencies to implement controls rather than embedding them in vendor terms—insulates OpenAI from similar pressure remains unclear.
- First constitutional test of whether AI safety positions qualify as protected corporate speech versus regulable conduct
- Establishes whether supply chain risk authority can extend to domestic firms over policy disagreements rather than foreign adversary connections
- Determines competitive landscape in federal AI market—OpenAI and Google positioned to absorb Anthropic’s displaced government workload
- Sets precedent for government leverage over tech firms: contract exclusion as enforcement mechanism for use case demands
What to Watch
The Northern District of California will rule on Anthropic’s motion for a preliminary injunction, likely within 30-45 days. A ruling in Anthropic’s favor would immediately restore contract eligibility and signal judicial skepticism of executive overreach. A government victory would cement supply chain designation as a viable tool for enforcing compliance with federal use demands, regardless of contractor safety concerns.
Watch for customer behavior independent of the case outcome. If major Anthropic customers in banking, healthcare, or enterprise software cite regulatory uncertainty to justify contract terminations or renegotiations, the designation achieves its chilling effect regardless of legal validity. Conversely, continued customer retention would demonstrate market confidence that the designation is legally vulnerable.
The cloud infrastructure dimension matters most long-term. If courts uphold the government’s position, expect similar pressure on AWS, Azure, and Google Cloud to eliminate customer-protective terms in intelligence community contracts. The Anthropic case isn’t just about one AI lab’s Pentagon access—it’s a stress test of whether tech firms can maintain ethical boundaries when those boundaries conflict with government demands for unrestricted capability access.