AI · 9 min read

The Divergent Paths: How OpenAI and Anthropic Courted Washington, 2023-2024

Two AI giants took markedly different approaches to federal regulation—one embraced flexibility while the other pledged hard limits, revealing the fault lines that would define the industry's relationship with government.

Between July 2023 and August 2024, OpenAI and Anthropic signed identical voluntary commitments at the White House, testified before the same Senate committees, and negotiated parallel agreements with the U.S. AI Safety Institute—yet their regulatory philosophies diverged so sharply that by 2026, the two companies would no longer present a united front at an international summit.

The period from mid-2023 through the end of 2024 marked the first sustained attempt by the U.S. government to establish guardrails around frontier AI development. On July 21, 2023, seven leading AI companies—including OpenAI and Anthropic—committed to voluntary safety measures covering internal testing, cybersecurity protections, and watermarking of AI-generated content. Just over three months later, on October 30, 2023, President Biden signed Executive Order 14110, establishing government-wide standards for AI safety and requiring companies to share red-teaming results for high-risk systems.

What followed was a case study in competing visions of AI governance. OpenAI CEO Sam Altman positioned his company as a willing partner in crafting regulation, while Anthropic CEO Dario Amodei built his firm’s identity around unilateral safety commitments that competitors might not match. The tension between these approaches would shape legislative debates, influence agency policy, and ultimately expose the limits of voluntary frameworks.

  • July 21, 2023: White House Voluntary Commitments. Seven companies agree to eight safety principles, including pre-release testing and watermarking.
  • October 30, 2023: Biden AI Executive Order. EO 14110 establishes reporting requirements and directs NIST to develop safety guidelines.
  • August 29, 2024: NIST Safety Agreements. Both companies sign MOUs granting the government pre-release access to frontier models.

Altman’s Pitch: Regulate Us, But Not Too Much

On May 16, 2023, Sam Altman appeared before the Senate Judiciary Subcommittee on Privacy, Technology and the Law, proposing “a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities.” The testimony was carefully calibrated. Altman’s regulatory wish list included “a new agency that licenses any effort above a certain threshold of capabilities,” testing of potentially dangerous AI models before deployment, and independent audits.

But Altman also drew clear boundaries. When Senator Lindsey Graham asked whether OpenAI fell under Section 230 protections, Altman replied he didn’t think Section 230 was “the right framework” for analyzing AI liability. The message was consistent: OpenAI wanted federal oversight, but not the kind that would impose strict liability or create rigid technical requirements.

“I do think some regulation would be quite wise on this topic. People need to know if they’re talking to an AI.”

— Sam Altman, Senate testimony, May 2023

This positioning served multiple purposes. According to VentureBeat, the approach helped OpenAI shape the regulatory conversation early, potentially influencing the standards that would eventually be set. Companies like OpenAI also likely saw regulation as inevitable and sought to get ahead of it by taking a leadership role in the process.

Amodei’s Warning: Biology and Autonomous Weapons

When Dario Amodei testified before the Senate Judiciary Committee on July 25, 2023, his message was darker and more specific. Amodei told lawmakers that “a straightforward extrapolation of the pace of progress suggests that, in 2-3 years, AI systems may facilitate extraordinary insights in broad swaths of many science and engineering disciplines,” which would “greatly widen the set of people who can wreak havoc.”

According to The Washington Post, Amodei’s testimony focused heavily on biosecurity risks. He offered to give “a more detailed private briefing to any Senator interested” in Anthropic’s biological weapons research, noting that officials who had been briefed “all found our results disquieting.”

Key Safety Commitments (July 2023)
  • Companies participating: 15 by September 2023
  • Testing requirements: internal and external
  • Enforcement mechanism: voluntary

While both CEOs supported government involvement, their framing differed fundamentally. Altman emphasized partnership and proportionality. Amodei emphasized existential risk and the need for precautionary action even when capabilities remained speculative. This difference would become explicit when California proposed the nation’s first state-level AI safety law.

The California Split: SB 1047 Exposes the Divide

California’s SB 1047, introduced in February 2024, would have required developers of models costing more than $100 million to train to implement safety protocols and to retain the ability to shut their systems down. The bill crystallized the philosophical gap between the two companies.

Anthropic offered cautious support after the bill was amended, while OpenAI opposed it, saying it could “stifle innovation.” According to legislative records, Anthropic CEO Dario Amodei wrote that “the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs,” though he noted “some aspects of the bill which seem concerning or ambiguous to us.”

OpenAI took the opposite position. The company joined Google and Meta in arguing that by regulating the process of model development rather than harms caused by use, SB 1047 would hamper innovation and undermine Californian and American competitiveness, with OpenAI specifically arguing that the bill’s focus on national security risks made it more appropriate for federal than state action.

OpenAI vs Anthropic on SB 1047
Company       Position                Key Argument
OpenAI        Opposed                 Federal jurisdiction, innovation risk
Anthropic     Cautiously supportive   Catastrophic risks warrant precaution
Meta/Google   Strongly opposed        Threatens competitiveness

California Governor Gavin Newsom ultimately vetoed SB 1047 in September 2024, but the debate revealed how completely the two companies’ regulatory strategies had diverged. OpenAI argued for federal frameworks that would apply uniformly across jurisdictions. Anthropic argued that voluntary commitments weren’t sufficient and that some binding requirements were necessary even without perfect consensus.

The Responsible Scaling Policy: Anthropic’s Binding Framework

The clearest expression of Anthropic’s approach came through its Responsible Scaling Policy, first published in September 2023 and updated multiple times through 2024. The RSP defined “AI Safety Levels” modeled after biosafety standards, focusing on “catastrophic risks—those where an AI model directly causes large scale devastation” through “deliberate misuse” or “models that cause destruction by acting autonomously.”

According to Anthropic’s original policy document, ASL-3 measures would include “a commitment not to deploy ASL-3 models if they show any meaningful catastrophic misuse risk under adversarial testing by world-class red-teamers.” This represented a unilateral pledge to halt deployment under specific conditions—something no other major lab had committed to in writing.

Context

The Responsible Scaling Policy underwent significant revision in February 2026, when Anthropic removed language about unilateral commitments to pause development. According to the updated framework, the company found that “some parts of this theory of change have played out as we hoped, but others have not,” specifically noting that effective government engagement was proving to be “a long-term project” rather than something happening organically.

OpenAI, by contrast, never published a comparable binding framework. While the company conducted red-teaming and had internal safety processes, it did not commit to specific capability thresholds that would trigger deployment pauses. This reflected a fundamentally different theory of change: rather than betting on unilateral restraint, OpenAI bet on shaping the regulatory environment.

The NIST Convergence: Where Policy Met Practice

Despite their philosophical differences, both companies reached similar accommodations with the federal government. On August 29, 2024, the U.S. AI Safety Institute at NIST announced agreements with both Anthropic and OpenAI establishing frameworks for the institute to “receive access to major new models from each company prior to and following their public release.”

According to CNBC, the agreements would “enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.” The U.S. AI Safety Institute planned to provide feedback to both companies on potential safety improvements to their models, in close collaboration with partners at the U.K. AI Safety Institute.

Key Takeaways
  • Both companies signed identical voluntary commitments in July 2023, but interpreted them through different operational frameworks
  • Altman emphasized flexible federal regulation; Amodei emphasized binding unilateral commitments
  • The California SB 1047 debate exposed the divide: OpenAI opposed, Anthropic cautiously supported
  • Both reached similar agreements with NIST by August 2024, suggesting practical convergence despite philosophical differences

The NIST agreements represented the maturation of the Biden administration’s approach. Rather than relying solely on voluntary commitments or waiting for legislation, the government secured pre-release access through bilateral memoranda of understanding. This gave federal evaluators visibility into frontier models while preserving companies’ ability to develop and deploy them.

What to Watch

The Trump administration’s January 2025 rescission of Executive Order 14110 has thrown the regulatory landscape into flux. While the NIST AI Safety Institute survived the transition, the voluntary commitments framework that underpinned much of the 2023-2024 engagement has no formal legal status.

Anthropic’s February 2026 revision to its Responsible Scaling Policy—removing hard commitments to pause development under specific conditions—suggests the company has concluded that unilateral restraint is not sustainable without either government mandates or broader industry participation. The revised policy now emphasizes transparency and “Risk Reports” rather than binding pause triggers.

Meanwhile, OpenAI’s approach—shaping regulation rather than committing to unilateral limits—appears vindicated by the absence of binding federal AI legislation. Congress has held dozens of hearings since May 2023 but has passed no comprehensive AI safety bill. The voluntary commitments have endured, but with mixed results: according to MIT Technology Review, they brought better red-teaming practices and watermarks, but no meaningful transparency or accountability.

The next inflection point will likely come from three sources: first, whether the second Trump administration maintains NIST pre-release testing arrangements; second, whether state-level efforts like California’s successor to SB 1047 create a patchwork of binding requirements; and third, whether a high-profile AI safety incident forces rapid federal action. The companies that spent 2023-2024 building relationships in Washington may find those relationships tested when voluntary frameworks give way to mandatory rules.