AI Geopolitics · 7 min read

White House Ousts AI Safety Researcher After Four Days, Accelerating Deregulation Push

Collin Burns' removal from federal AI role marks ideological purge of Biden-era safety advocates as EU and China lock in binding frameworks.

The Trump administration removed Collin Burns, a former Anthropic AI safety researcher, from his federal AI governance position after just four days—hired Monday, pushed out Thursday by White House directive over his prior affiliation with the company now blacklisted across government agencies.

Burns’ termination, confirmed by the Washington Post on April 24, follows the administration’s February ban on all federal use of Anthropic technology and President Trump’s public denunciation of the firm as a “RADICAL LEFT, WOKE COMPANY.” The swift dismissal converts symbolic deregulation rhetoric into institutional enforcement—purging personnel who championed the safety-first frameworks the administration is dismantling.

The move crystallizes a policy inflection point: while the US strips AI governance infrastructure installed under Biden, the EU begins enforcement of binding safety requirements in August 2026 and China operates under mandatory transparency rules effective since September 2025. This divergence creates asymmetric risk: American firms face lighter domestic oversight but potential exclusion from allied markets if safety lapses trigger incidents competitors avoid through stricter governance.

Background

Biden’s October 2023 executive order on AI mandated security testing, watermarking, and bias audits for frontier models. Trump rescinded the order in January 2025, rebranded the AI Safety Institute as the Center for AI Standards and Innovation, and instructed NIST scientists to remove references to “AI safety,” “responsible AI,” and “AI fairness” from research objectives.

Safety Infrastructure Dismantled

The institutional purge extends beyond personnel. The AI Safety Institute—established under Biden to develop testing standards and coordinate red-teaming protocols—now operates as the Center for AI Standards and Innovation, per the Daily Signal. The rebrand removes “safety” from the mission statement while shifting focus to industry standards development without mandatory compliance mechanisms.

Anthropic became the administration’s primary target after refusing Pentagon demands for unrestricted military deployment of its Claude models. On February 27, 2026, Trump ordered all federal agencies to cease using Anthropic technology and designated the company a supply chain risk, according to PBS News. The blacklist effectively punishes the firm for maintaining Biden-era voluntary commitments—security testing, risk-sharing, watermarking—that competitors quietly abandoned.

Regulatory Timeline Divergence
EU AI Act enforcement: August 2026
China generative AI rules: September 2025 (active)
US Biden EO rescission: January 2025

The administration’s March 20 National AI Policy Framework, detailed by Morrison Foerster, recommends federal preemption of state AI laws across seven pillars—effectively blocking California, Colorado, and New York from imposing liability frameworks stricter than minimal federal standards. This creates regulatory arbitrage: firms can forum-shop to the lightest jurisdiction while exporting products globally under US origin labels.

Global Governance Race Intensifies

The EU AI Act, entering enforcement on August 2, 2026, imposes mandatory conformity assessments on high-risk systems—facial recognition, critical infrastructure controllers, employment algorithms—with penalties reaching 7% of global revenue. China’s Generative AI Services Management Measures, active since September 2025, require content labeling, algorithm transparency, and government review of training datasets for models serving Chinese users.

Both frameworks impose costs US competitors avoid domestically but face when entering allied markets. A US firm bypassing safety testing at home still undergoes EU conformity assessment for European deployment—gaining no time advantage while accumulating technical debt if domestic incidents reveal flaws that rigorous testing would have caught.

“The Trump administration is engaged in norm destruction—breaking expectations about transparent governance and public oversight while installing new assumptions about how technological development should be directed.”

— Alondra Nelson, writing in Science

The competitive logic assumes deregulation accelerates deployment velocity. But if EU-compliant models prove safer in operation—fewer bias scandals, lower liability exposure—American firms sacrifice governance credibility for marginal speed gains. China’s framework, meanwhile, embeds state oversight into commercial AI infrastructure, positioning Beijing as standards-setter for developing markets adopting Chinese technical norms.

Industry Reaction Splits

Dean Ball, a former Trump AI adviser, called Burns’ dismissal a “punch in the face” for an official who “gave up valuable Anthropic stock and moved across the country” for the role, per the Detroit News. The blunt treatment signals that prior employment at firms maintaining safety commitments disqualifies candidates from federal service—narrowing the talent pool to personnel from labs already aligned with deregulation.

Anthropic CEO Dario Amodei responded that “we do not believe the national interest is served by holding an AI company hostage for refusal to abandon foundational AI safety commitments.” The standoff exposes a deeper fissure: whether US competitive advantage derives from regulatory permissiveness or from technical leadership grounded in safety research that competitors cannot replicate.

January 2025: Biden AI EO rescinded. Trump revokes the executive order mandating safety testing and bias audits for frontier models.
February 27, 2026: Anthropic blacklisted. White House bans federal use of Anthropic technology, designating the company a supply chain risk.
March 20, 2026: National AI Framework released. Administration recommends federal preemption of state AI laws across seven policy pillars.
April 21–24, 2026: Burns hired and removed. The former Anthropic researcher joins the Center for AI Standards and Innovation; he is pushed out four days later.

OpenAI and xAI, competitors unburdened by Anthropic’s safety posture, stand to capture federal contracts the blacklist redirects. But the precedent—political punishment for maintaining technical safeguards—chills industry investment in safety research if such work becomes reputational liability rather than competitive moat.

What to Watch

If early audits reveal safety gaps in US models that EU-compliant competitors avoided, American firms face market access restrictions in the bloc’s 450 million consumers. China’s algorithm transparency rules, meanwhile, give Beijing visibility into training methods US firms treat as trade secrets—asymmetric intelligence collection masquerading as governance.

The central test is whether US deregulation creates competitive advantage or liability disadvantage. If American AI systems trigger high-profile safety incidents—algorithmic bias scandals, security breaches, misuse cases—that stricter foreign frameworks prevent, the administration’s gamble converts regulatory speed into geopolitical weakness. Allied democracies adopting EU-style guardrails would then treat US AI exports as higher-risk products requiring additional oversight, inverting the intended competitive dynamic and ceding standards leadership to Brussels and Beijing while Washington removes the institutional capacity to respond.