AI Geopolitics · 8 min read

OpenAI Secures Pentagon Deal for Classified AI Deployment After Rival’s Expulsion

Agreement follows Trump administration's ban on Anthropic, marking OpenAI's evolution from military prohibition to defense partner amid escalating U.S.-China technology race.

OpenAI reached an agreement Friday to deploy its AI models on the Pentagon’s classified networks, filling a vacuum left by the Trump administration’s dramatic expulsion of rival Anthropic just hours earlier. The deal, negotiated under pressure after Defense Secretary Pete Hegseth threatened to designate Anthropic a national security risk, positions OpenAI as the Defense Department’s preferred AI partner—a remarkable reversal for a company that banned military applications entirely two years ago.

The Pentagon has agreed to OpenAI’s rules for deploying its technology safely in classified settings, though no contract has been signed, according to Axios. OpenAI CEO Sam Altman said in a Friday post that “the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.” The announcement came after President Trump ordered the U.S. government to stop using Anthropic’s products and the Pentagon moved to designate the company a national security risk, according to NPR, escalating a weeklong standoff over usage restrictions.

The Anthropic Standoff

The Pentagon’s move against Anthropic stemmed from the company’s refusal to remove safeguards restricting its Claude model from being used for domestic mass surveillance or fully autonomous weapons. The Pentagon announced last summer it was awarding defense contracts worth up to $200 million each to four AI companies—Anthropic, Google, OpenAI and xAI—reported Al Jazeera. Under its contract, Anthropic’s Claude became the first AI model to work on the military’s classified networks, according to CNN.

Defense Department officials gave Anthropic a deadline of 5:01 p.m. ET on Friday to drop restrictions on its AI model from being used for domestic mass surveillance or entirely autonomous weapons, or face losing its contract, reported NPR. When Anthropic refused, Defense Secretary Pete Hegseth labeled Anthropic a supply chain risk to national security, directing that no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.

AI Defense Contracts at Stake
Anthropic contract value: $200M
Anthropic valuation: $380B
Annual revenue: $14B
Phase-out period: 6 months

OpenAI’s Evolution from Ban to Partner

The agreement marks the culmination of a two-year policy transformation at OpenAI. Until at least January 2024, OpenAI’s policies page specified that the company did not allow the usage of its models for “activity that has high risk of physical harm, including: weapons development [and] military and warfare”, according to CNBC. The language prohibiting use for “military and warfare” subsequently disappeared, reported TechCrunch, opening the door to defense work.

OpenAI was awarded a $200 million contract by the DOD last year, which allowed the agency to begin using the startup’s models in nonclassified use cases, reported CNBC. The Friday agreement extends that access to classified environments—territory where Anthropic was the first AI lab to integrate its models into mission workflows.

“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions.”

— Sam Altman, OpenAI CEO

Despite sharing Anthropic’s stated concerns, OpenAI secured more favorable terms. Altman said the government is willing to let OpenAI build its own “safety stack”—a layered system of technical, policy, and human controls—and that if the model refuses to perform a task, the government would not force OpenAI to make it do so, according to Fortune. OpenAI would ask for the contract to cover any use except those that are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.

Jan 2024
OpenAI Removes Military Ban
Company quietly deletes “military and warfare” from prohibited uses policy.
Jun 2025
$200M Unclassified Contract
Pentagon awards OpenAI contract for administrative and non-sensitive applications.
Feb 23 2026
xAI Joins Classified Network
Elon Musk’s xAI agrees to “all lawful purposes” standard for classified deployment.
Feb 27 2026
Anthropic Banned
Trump administration designates Anthropic supply chain risk, orders federal phase-out.
Feb 27 2026
OpenAI Classified Deal
Pentagon accepts OpenAI’s safety framework for classified network deployment.

The Competitive Landscape

OpenAI now joins a select group with classified access. Elon Musk’s xAI signed an agreement to allow the military to use Grok in classified systems, agreeing to the “all lawful use” standard, according to Axios. OpenAI, Google, and Elon Musk’s xAI have Defense Department contracts and have agreed to allow their AI tools to be used in any “lawful” scenarios, reported NPR.

The episode highlights Silicon Valley’s shifting relationship with defense work. In 2018, thousands of Google employees protested a Pentagon project, fearing technology they developed could be used for lethal purposes—but Google has since earned hundreds of millions from defense contracts, according to Semafor. Training and running large language models costs hundreds of millions of dollars, and consumer revenue alone can’t cover the bills—for many companies, working with the military may be essential for survival, reported Quartz.

AI Company Stances on Military Use
Company | Classified Access | Usage Restrictions
Anthropic | Banned | No mass surveillance, no autonomous weapons
OpenAI | Approved | Company-controlled safety stack, no domestic surveillance
xAI (Grok) | Approved | All lawful purposes
Google | Negotiations ongoing | Terms under discussion

The China Dimension

The Pentagon’s urgency stems from perceived competition with China. Since the mid-2010s, analysts have noted the emergence of a military AI arms race between superpowers, driven by increasing geopolitical tensions—a race sometimes framed as an AI Cold War between the United States and China, according to analysis compiled on Wikipedia.

A September 2025 report by Georgetown’s Center for Security and Emerging Technology analyzed 2,857 AI-related PLA award notices published between January 2023 and December 2024, finding 2,090 included contract values totaling $535.5 million, reported Newsweek. The Pentagon is pushing forward with development of AI-based cyber tools capable of identifying critical infrastructure within China, utilizing AI to penetrate adversary computer networks and identify vulnerabilities, according to The Asia Business Daily.

At an OpenAI all-hands, leaders referenced threat intelligence reports showing that China was already using AI models to target dissidents overseas, according to Fortune. The competitive framing shaped internal deliberations: OpenAI staff were told the most challenging aspect was concern over foreign surveillance, with company leaders acknowledging that governments will spy on adversaries internationally and that national security officers “can’t do their jobs” without international surveillance capabilities.

Context

The Pentagon renamed itself the “Department of War” in 2025 under Defense Secretary Pete Hegseth, reverting to the title used before 1947. The rebrand signals a more aggressive military posture and has become standard terminology in official communications.

What to Watch

The Anthropic supply chain designation creates immediate complications for defense contractors. It means any company that works with the U.S. military would have to prove its Pentagon work involves nothing related to Anthropic—a problem because much of Anthropic’s success stems from enterprise contracts with big companies, many of which may have contracts with the Pentagon, according to CNN.

Google’s position remains unclear. A Defense official said talks were ongoing with Google and the department believes it will sign an agreement, reported Axios. Whether Google accepts OpenAI’s safety stack model or Anthropic’s principled refusal will signal how much negotiating leverage remains for AI companies facing Pentagon pressure.

The OpenAI agreement, while celebrated by Altman, sets a precedent that may constrain future safety-focused companies. As one defense analyst noted, the message to AI firms is clear: accept government terms or face exclusion from not just defense contracts, but potentially from the broader enterprise market that depends on Pentagon business. The six-month Anthropic phase-out period will test whether the company’s $380 billion valuation and $14 billion annual revenue can withstand isolation from the defense-industrial ecosystem—or whether economic reality will force a reversal of its ethical stance.