AI Geopolitics · 8 min read

OpenAI, Anthropic, and Google Form Intelligence Coalition Against Chinese Model Distillation

Three rival frontier labs are now sharing threat data through the Frontier Model Forum after uncovering Chinese API extraction attacks that U.S. officials estimate cost American companies billions annually.

OpenAI, Anthropic, and Google began coordinating defensive operations through the Frontier Model Forum on 6 April 2026, transforming the nonprofit safety body into an active threat-intelligence operation. The effort targets Chinese AI distillation campaigns that U.S. officials estimate cost American companies billions of dollars annually.

The coordination marks the first time competing frontier labs have prioritized shared security interests over commercial rivalry. The three companies are exchanging data on fraudulent API accounts, suspicious query patterns, and attribution signatures linked to Chinese firms attempting to reverse-engineer their proprietary models through systematic prompt extraction, according to Bloomberg.

Context

Distillation attacks involve flooding a frontier model’s API with carefully crafted prompts, capturing the outputs, and using that data to train a cheaper derivative model. Unlike web scraping of public data, distillation directly extracts the inference patterns and reasoning capabilities that cost billions to develop — without replicating safety filters or alignment work.
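
In schematic terms, the extraction loop is simple: harvest prompt-response pairs from the target API, then use them as supervised fine-tuning data for a cheaper student model. The sketch below is illustrative only; the endpoint, key, model name, and response shape are hypothetical placeholders, not any lab's real API.

```python
# Illustrative sketch of a distillation harvesting loop.
# The endpoint, key, model name, and response shape are hypothetical
# placeholders, not any real lab's API.
import json

import requests

API_URL = "https://api.frontier-lab.example/v1/chat"  # hypothetical endpoint
API_KEY = "sk-placeholder"                            # hypothetical key

prompts = [
    "Explain step by step why the sky appears blue.",
    "Prove that the square root of 2 is irrational.",
    # ...thousands more reasoning-heavy prompts in a real campaign
]

pairs = []
for prompt in prompts:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "frontier-1", "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    answer = resp.json()["choices"][0]["message"]["content"]
    pairs.append({"prompt": prompt, "completion": answer})

# The harvested pairs become supervised fine-tuning data for a student
# model, transferring reasoning behavior without the training cost.
with open("distill_train.jsonl", "w", encoding="utf-8") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```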

Scale of the Extraction Campaign

The coalition formed after a series of escalating incidents exposed the scope of Chinese distillation operations. In February 2026, Anthropic disclosed that three Chinese AI firms — DeepSeek, Moonshot AI, and MiniMax — generated over 16 million exchanges with its Claude model using approximately 24,000 fraudulent accounts, per Tech Brew. The accounts used rotating IP addresses, synthetic identities, and payment methods designed to evade abuse detection systems.

Microsoft first flagged the problem in January 2025 when it detected DeepSeek extracting large volumes of data through OpenAI’s API. By February 2026, OpenAI told the House Intelligence Committee that DeepSeek was attempting to “free-ride on the capabilities developed by OpenAI and other US frontier labs,” per The Register. Days later, Anthropic published its own forensic findings, providing the first detailed accounting of a coordinated distillation campaign.

Anthropic Distillation Attack (Feb 2026)
  • Fraudulent accounts: 24,000
  • API exchanges generated: 16,000,000
  • Chinese firms identified: 3

Economics of Model Theft

The appeal of distillation lies in its cost arbitrage. DeepSeek’s R1 reasoning model reportedly cost approximately $6 million to develop using distillation techniques, compared with an estimated $2 billion to train a frontier model like GPT-5 from scratch, according to analysis from the Center for Strategic and International Studies. That roughly 300:1 cost advantage ($2 billion against $6 million) allows Chinese labs to rapidly close capability gaps without matching American investment in compute infrastructure.

The technique also bypasses U.S. export controls on advanced chips. Rather than requiring thousands of high-end GPUs subject to licensing restrictions, distillation requires only API access — which remains commercially available to pay-as-you-go customers worldwide. Chinese labs can effectively rent American compute at retail rates to extract training data, then deploy derivative models domestically using less restricted hardware.

“We will not allow adversaries to free-ride on the capabilities developed by American frontier labs through systematic exploitation of commercial API access.”

— OpenAI testimony to House Intelligence Committee, February 2026

Technical and Legal Defenses

The Frontier Model Forum’s intelligence-sharing operation focuses on cross-referencing account metadata, payment fingerprints, and query linguistics to identify coordinated campaigns. When one lab detects a suspicious pattern — such as unusually high volumes of reasoning-intensive prompts from new accounts in rapid succession — it can now alert the others to check for similar activity, reported Robo Rhythms.
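
A minimal version of that cross-referencing might look like the sketch below. The field names, thresholds, and cluster logic are assumptions chosen for illustration; the labs' actual detection rules are not public.

```python
# Illustrative heuristic for flagging coordinated extraction campaigns.
# Field names and thresholds are assumptions, not real detection rules.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    age_days: int
    payment_fingerprint: str   # e.g. a hashed card/BIN signature
    daily_queries: int
    reasoning_fraction: float  # share of reasoning-intensive prompts

def flag_campaigns(accounts: list[Account], min_cluster: int = 50):
    """Cluster accounts by payment fingerprint, then flag clusters made up
    of new, high-volume, reasoning-heavy accounts as likely campaigns."""
    clusters = defaultdict(list)
    for acct in accounts:
        clusters[acct.payment_fingerprint].append(acct)

    flagged = []
    for fingerprint, group in clusters.items():
        suspicious = [
            a for a in group
            if a.age_days < 30
            and a.daily_queries > 1_000
            and a.reasoning_fraction > 0.8
        ]
        if len(suspicious) >= min_cluster:
            # Each flagged cluster becomes a shareable threat indicator.
            flagged.append((fingerprint, [a.account_id for a in suspicious]))
    return flagged
```

Payment fingerprints are an obvious pivot point: rotating IP addresses and synthetic identities are cheap to produce, while payment infrastructure is harder to vary at the scale of 24,000 accounts.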

Defensive measures include rate limiting based on behavioral signals rather than just volume, query watermarking to track data provenance, and honeypot accounts that serve subtly degraded outputs to suspected distillers. Google has deployed techniques that inject imperceptible noise into API responses, making it harder to train coherent derivative models without detection.
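
The article does not specify Google's method, but keyed token-level watermarking of the kind studied in the open literature gives a flavor of how provenance tracking can work: a secret key biases sampling toward a pseudo-random "green list" of tokens, leaving a statistical signature in the outputs. The sketch below is a toy version of that idea, not any lab's production system.

```python
# Toy keyed watermark in the spirit of published green-list schemes;
# not Google's actual technique, which has not been disclosed.
import hashlib

import numpy as np

SECRET_KEY = b"lab-watermark-key"  # hypothetical signing key

def green_mask(prev_token_id: int, vocab_size: int, fraction: float = 0.5) -> np.ndarray:
    """Derive a keyed pseudo-random subset of the vocabulary from the
    previous token, reproducible only by the key holder."""
    digest = hashlib.sha256(SECRET_KEY + prev_token_id.to_bytes(4, "big")).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.random(vocab_size) < fraction

def watermark_logits(logits: np.ndarray, prev_token_id: int, bias: float = 2.0) -> np.ndarray:
    """Nudge sampling toward the keyed green list. The shift is imperceptible
    per token but statistically detectable across many outputs."""
    out = logits.copy()
    out[green_mask(prev_token_id, logits.shape[0])] += bias
    return out
```

A detector holding the key counts how often sampled tokens land in the green list, and a derivative model trained on watermarked outputs may inherit the skew, which is what makes the signature useful for attributing distillation.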

Legal remedies remain murky. Distillation does not violate copyright in the traditional sense, since it extracts statistical patterns rather than literal code or training data. Terms of service prohibit using API outputs to train competing models, but enforcement requires proving intent and attribution across jurisdictional boundaries. U.S. officials are exploring whether distillation qualifies as trade secret theft under the Economic Espionage Act, which carries criminal penalties, per Just Security analysis.

Geopolitical Escalation

The coalition’s formation aligns with the Trump administration’s AI Action Plan, which called for an industry information-sharing center to combat distillation. The White House views the practice as a national security threat rather than purely a commercial dispute, particularly given evidence that Chinese military research institutes have procured distilled models through shell companies.

Chinese state media has pushed back on the accusations. Global Times quoted Chinese AI researchers arguing that distillation is a standard academic technique and that U.S. firms are using security rhetoric to justify anti-competitive behavior. The publication noted that Chinese open-source models like Qwen have achieved competitive performance through independent research, not API extraction.

Timeline

  • January 2025: Microsoft detects DeepSeek extracting large data volumes through OpenAI’s API, triggering the first major investigation.
  • 12 February 2026: OpenAI tells the House Intelligence Committee that DeepSeek is attempting to “free-ride” on U.S. frontier lab capabilities.
  • 23 February 2026: Anthropic reveals that fraudulent accounts generated 16 million Claude exchanges, naming three Chinese firms.
  • 6 April 2026: OpenAI, Anthropic, and Google begin sharing threat intelligence through the Frontier Model Forum.

Market and Policy Implications

The coalition’s effectiveness remains constrained by antitrust uncertainties. Sharing certain categories of competitive intelligence — particularly around pricing, customer data, or model capabilities — could trigger regulatory scrutiny. The companies are proceeding cautiously, focusing intelligence-sharing on account-level threat indicators rather than strategic business information.

Enterprise customers may face stricter API access controls as labs tighten authentication and usage monitoring. Organizations with legitimate high-volume use cases — such as research institutions or multilingual localization services — could encounter friction from aggressive fraud detection. Some labs are exploring tiered access models that require additional verification for users exceeding baseline thresholds.
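
Such a tiered scheme could reduce to a simple policy lookup. The tier names, caps, and verification requirements below are invented for this sketch.

```python
# Illustrative tiered access policy; tier names, caps, and verification
# requirements are invented for this sketch.
TIERS = [
    ("default",  1_000,   "email verification"),
    ("business", 50_000,  "KYC"),
    ("research", 500_000, "KYC plus institutional attestation"),
]

def required_verification(daily_queries: int) -> str:
    """Return the verification level a customer must complete for its volume."""
    for _name, cap, verification in TIERS:
        if daily_queries <= cap:
            return verification
    return "manual review"  # volumes beyond the top tier escalate to a human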

The coalition also signals potential regulatory action. If voluntary industry coordination proves insufficient, Congress may mandate API access restrictions similar to export controls on chips and software. The Commerce Department is reportedly considering rules that would require AI companies to report suspected distillation to the Bureau of Industry and Security, creating a formal threat-reporting framework.

Key Takeaways
  • Three competing frontier labs now share distillation threat data through Frontier Model Forum, activated 6 April 2026
  • Anthropic disclosed in February 2026 that roughly 24,000 fraudulent accounts generated 16 million Claude API exchanges
  • Distillation cuts model development costs by roughly 300:1 and bypasses U.S. chip export controls
  • Legal frameworks remain ambiguous — distillation may not constitute copyright violation under current law
  • Trump administration AI Action Plan backs industry coordination, with potential regulatory mandates ahead

What to Watch

Monitor whether the coalition expands beyond the Frontier Model Forum’s core members to include Meta, Mistral, or other labs with API exposure. The Forum’s original mandate focused on safety research coordination; its evolution into a threat-intelligence network sets a precedent for how industry groups function under geopolitical pressure.

Track Congressional movement on formal distillation reporting requirements or API access restrictions. Any legislation would likely tie into broader AI export control frameworks currently under development at Commerce and State.

Watch for Chinese counter-strategies. If U.S. labs successfully block distillation at scale, Chinese firms may accelerate domestic compute buildout or shift to targeting smaller model providers with less sophisticated defenses. The cat-and-mouse dynamic will shape both technical security practices and policy responses across 2026.