
OpenAI Reverses Course, Adds Explicit Surveillance Ban to Pentagon Deal After User Backlash

Sam Altman admits rushed defense contract 'looked opportunistic,' amends agreement to prohibit domestic surveillance through commercially purchased data as Claude overtakes ChatGPT in app downloads.

OpenAI CEO Sam Altman announced Monday that the company will amend its Department of Defense contract to explicitly ban the use of AI systems for domestic surveillance of Americans, following a weekend of intense public criticism that propelled rival Anthropic’s Claude app to the number one spot in U.S. downloads.

The amended language states that ‘the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals’ and clarifies that ‘the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information,’ according to CNBC. Altman acknowledged in an internal memo that the company ‘shouldn’t have rushed to get the deal out on Friday, February 27’ and admitted it ‘looked opportunistic’ in the end.

The policy reversal marks a significant shift in how AI companies navigate defense partnerships. OpenAI’s deal with the Pentagon came hours after talks between Anthropic and the Defense Department broke down, though Altman had told employees in a Thursday memo that OpenAI shared the same ‘red lines’ as Anthropic. The dispute centered on whether the company could prohibit its tools from being used for mass surveillance of American citizens or to power autonomous weapon systems, as part of a military contract worth up to $200 million.

Market Reaction Shifts Power Dynamics

The public backlash produced measurable commercial consequences. Anthropic’s Claude hit number one in U.S. app downloads Saturday, overtaking ChatGPT, after the Pentagon blacklisted the company for refusing to loosen safeguards for military use of its AI model, according to Axios. Since the start of the year, free active users on Claude have increased by over 60%, and daily sign-ups have quadrupled, Fortune reported.

ChatGPT’s market share dropped from 69.1% to 45.3% over the past year, and the boycott accelerated the bleeding, with campaign organizers reporting between 200,000 and 700,000 participants, according to analysis from Gadget Review. The ‘QuitGPT’ movement gained significant traction across social media platforms, with Reddit posts about canceling ChatGPT subscriptions receiving over 30,000 upvotes.

Market Impact by Numbers
Claude App Store rank: #1 (from #42)
Claude active user growth: +60% YTD
ChatGPT market share: 69.1% → 45.3%
Pentagon contract value: $200M

Legal Gray Areas Remain Despite Amendment

While the amended language adds specificity, policy experts question whether the safeguards are enforceable. ‘Right now, under U.S. law, it’s lawful for government authorities to buy up commercially available information from data brokers and other third parties,’ said Samir Jain, vice president of policy at the Center for Democracy & Technology. ‘If you buy up massive amounts of data and allow AI to analyze it, you may end up, in effect, engaging in mass surveillance of Americans through that process,’ Fortune reported.

The problem is that what counts as ‘lawful’ can change. OpenAI’s contract points to existing laws and Department of Defense policies, but those policies could be modified in the future. ‘Nothing in what they’ve released would prevent those policies from being changed going forward,’ Jain said.

The distinction between Title 10 military operations and Title 50 intelligence activities creates additional ambiguity. According to a social media post by a well-known OpenAI researcher, the company’s head of national security partnerships said that OpenAI’s contract does not cover Title 50 work by the intelligence community. But legal scholars have noted that the line between Title 10 and Title 50 activities is increasingly blurry, and a contract that bans Title 50 work doesn’t automatically prevent Title 10 agencies like the DIA from using AI to analyze commercially available or unclassified datasets, according to Fortune.

Context

The Pentagon announced last summer that it was awarding defense contracts to four AI companies — Anthropic, Google, OpenAI and xAI. Each contract is worth up to $200 million, according to Al Jazeera. Anthropic was the first to be approved for classified military networks before negotiations collapsed in February 2026.

Pressure Mounts on Competitors

The controversy places Google, xAI, and other defense AI contractors in an uncomfortable position. OpenAI, Google, and Elon Musk’s xAI have Defense Department contracts and have agreed to allow their AI tools to be used in any ‘lawful’ scenarios. Earlier this week, xAI became the second company after Anthropic to be approved for use in classified settings, NPR reported.

More than 430 employees from Google and OpenAI signed a letter stating: ‘We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight,’ according to The Hill.

In his post, Altman said: ‘In my conversations over the weekend, I reiterated that Anthropic should not be designated as a supply chain risk, and that we hope the Department of Defense offers them the same terms we’ve agreed to.’ It remains unclear why the Defense Department agreed to accommodate OpenAI and not Anthropic, though government officials have for months criticized Anthropic for allegedly being overly concerned with AI safety, CNBC reported.

Broader Implications for AI-Defense Procurement

The controversy erupts as the Pentagon accelerates AI integration. The Department of Defense has requested $13.4 billion for AI and autonomy in FY2026, representing the largest single-year AI investment in defense history. This spending doesn’t represent experimental research—it funds operational AI implementation across autonomous systems, decision support platforms, and mission-critical applications, according to CCS Global Tech.

The global military AI market is estimated at $10.79 billion in 2025 and is projected to grow from $12.19 billion in 2026 to approximately $35.57 billion by 2035, a compound annual growth rate of 12.67%, according to Precedence Research. The dispute between OpenAI and Anthropic comes at a critical juncture, just as this market enters its highest-growth phase.

27 Feb 2026: OpenAI Announces Pentagon Deal
Hours after the Anthropic blacklisting, OpenAI announces a defense contract with claimed safeguards.

28 Feb 2026: Claude Rises to #1
Anthropic’s app overtakes ChatGPT in U.S. downloads amid the ‘QuitGPT’ campaign.

3 Mar 2026: OpenAI Amends Contract
Altman announces an explicit surveillance ban after admitting the deal was ‘rushed.’

What to Watch

The amended contract language sets a new baseline for AI-defense partnerships, but enforcement mechanisms remain untested. Whether the Pentagon honors OpenAI’s technical safeguards—including cloud-only deployment and cleared personnel oversight—will determine if the compromise holds.

Google and xAI face growing pressure to clarify their own positions. Both companies have defense contracts but have remained largely silent on specific use restrictions. Employee activism at these firms could intensify if OpenAI’s amended approach becomes the industry standard.

The commercial impact on OpenAI depends on whether the boycott is sustained beyond its initial momentum. ChatGPT retains over 900 million weekly active users globally, but the app store reversal demonstrates that ethical positioning now carries tangible market consequences in consumer AI.

Congress may move to codify restrictions that currently exist only in contract language. The legal gray areas around commercially available information and Title 10/Title 50 distinctions invite legislative clarity—particularly if AI systems demonstrate surveillance capabilities that existing law never anticipated.

The $13.4 billion FY2026 defense AI budget creates powerful incentives for companies to accept Pentagon terms. Whether Anthropic’s stance proves commercially sustainable—or forces a market-wide reckoning over acceptable military use—will define the industry’s trajectory through the rest of the decade.