AI Markets · 8 min read

Google’s $40 Billion Anthropic Bet Reveals Compute Capacity as the New AI Asset Class

Gigawatt-scale infrastructure pre-sales signal a structural shift where power access, not model innovation, defines competitive moats in AI markets.

Google’s commitment of up to $40 billion and 5 gigawatts of compute capacity to Anthropic over five years marks the emergence of AI infrastructure as a securitized asset class, pre-sold years before deployment and priced at unprecedented scale. The deal, disclosed April 24, 2026, pairs $10 billion in immediate equity at a $350 billion valuation with up to $30 billion in performance-linked payments—and crucially, locks Anthropic into Google Cloud infrastructure through 2031, according to Data Center Knowledge.

Google-Anthropic Deal Structure
Total Investment Commitment: $40B
Immediate Cash (Apr 2026): $10B
Reserved Compute Capacity: 5 GW
Deployment Timeline: 2027-2031

Combined with Amazon’s parallel 5-gigawatt AWS commitment announced April 20, Anthropic has secured 10 gigawatts of compute infrastructure—equivalent to the peak summer electrical load of metropolitan San Francisco. The capacity won’t fully come online until 2027, but it’s already priced, allocated, and contractually locked, creating a new form of infrastructure futures market operating at utility scale.

From Cloud Credits to Capacity Rights

The shift from pay-as-you-go cloud services to multi-year, gigawatt-scale capacity reservations reflects a fundamental recalibration of AI economics. “It is abundantly clear that the AI story is increasingly becoming the AI Infrastructure story,” Sid Nag of Tekonyx told Data Center Knowledge. “Training and inference require sustained, predictable infrastructure demand. Large model developers are becoming anchor tenants.”

Google’s move locks Anthropic into its Tensor Processing Unit (TPU) ecosystem while Amazon’s AWS deal tilts toward Nvidia GPUs, creating bifurcated supply chains that effectively partition the frontier AI market. The competitive logic is straightforward: Anthropic’s run-rate revenue surpassed $30 billion as of early April 2026—up from $9 billion at end-2025—with more than 1,000 customers spending $1 million or more annually, according to TechCrunch. That growth requires infrastructure deployment at a pace that standard procurement can’t match.

“Google knows Anthropic desperately needs AI infrastructure capacity. So they are jumping in early and investing in them to win that AI infrastructure business.”

— Sid Nag, Tekonyx

Broadcom’s role underscores the deal’s complexity: the chipmaker is supplying 3.5 gigawatts of TPU capacity starting in 2027 as part of an expanded partnership with Google and Anthropic, CEO Hock Tan disclosed in April earnings. “For Anthropic, we are off to a very good start in 2026 in providing 1 gigawatt of compute from Google’s homegrown tensor processing units,” Tan said, per CNBC. “For 2027, this demand is expected to surge in excess of 3 gigawatts of compute.”

Power Grid Constraints Redefine Competitive Moats

The binding constraint isn’t chip supply—it’s electrical infrastructure. AI data centers could require 68 gigawatts globally by 2027 and 327 gigawatts by 2030, according to a RAND analysis published in January 2025. Grid interconnection requests now take 4-7 years in primary data center markets, creating a structural mismatch between compute demand and available power capacity.

Grid Economics

PJM Interconnection’s capacity market clearing prices for the 2026-2027 delivery year jumped to $329.17 per megawatt-day from $28.92 in 2024-2025—an 11x increase driven largely by data center load growth. US residential electricity prices rose 36% since 2020, reaching 17.44 cents per kilowatt-hour in February 2026, with forecasts pointing to 19.01 cents by September 2027.
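The scale of that repricing is easiest to see as simple arithmetic. A quick sketch using the figures above (illustrative only; actual costs depend on auction specifics and load profiles):

```python
# Quick check of the grid-economics figures cited above.

# PJM capacity clearing prices, $ per megawatt-day
price_2024_25 = 28.92
price_2026_27 = 329.17
multiple = price_2026_27 / price_2024_25
print(f"Capacity price multiple: {multiple:.1f}x")  # ~11.4x

# Annualized cost of reserving 1 GW of firm capacity at the new price
mw_per_gw = 1_000
annual_cost = price_2026_27 * mw_per_gw * 365
print(f"1 GW of capacity/yr: ${annual_cost / 1e6:.0f}M")  # ~$120M
```

At the new clearing price, a single gigawatt of firm capacity runs on the order of $120 million per year before a single chip is powered on—one reason gigawatt-scale reservations are now board-level decisions rather than procurement line items.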

“Basically, we have run out of headroom, largely speaking, in the US,” Ben Hertz-Shargel of Wood Mackenzie told CNN in April. “There is a land grab happening, where companies believe that access to more capacity for compute will be necessary to win the future battle over AI services.”

Google, Amazon, Microsoft, and Meta are responding with a combined $725 billion in capital expenditures for 2026—a 77% increase from 2025’s $410 billion, per Tom’s Hardware analysis of Q1 earnings. But capex alone doesn’t solve the grid bottleneck: transmission capacity, transformer availability, and regulatory approvals operate on timelines measured in years, not quarters.

Unrealized Gains and the New Accounting Reality

The financial engineering behind these deals reveals a more subtle shift. Alphabet’s Q1 2026 net income totaled $62.6 billion, but $37.7 billion—60% of reported profit—came from unrealized gains on its Anthropic equity stake, Fortune reported April 30. This isn’t revenue from cloud services or advertising—it’s mark-to-market valuation increases on a private company investment, now flowing through public company earnings.

  • Feb 2026 · Anthropic Series G close: $350 billion pre-money valuation; demand nearly tripled the $10 billion initial target
  • Apr 20, 2026 · Amazon AWS commitment: 5 GW of compute capacity reserved for Anthropic through 2031
  • Apr 24, 2026 · Google deal disclosure: $40B investment plus 5 GW of TPU capacity; deployment starts 2027
  • Apr 30, 2026 · Q1 earnings impact: $37.7B of Alphabet’s $62.6B profit from Anthropic unrealized gains

This creates earnings volatility tied to private market valuation dynamics rather than operating performance. When Anthropic’s valuation rose from a reported $350 billion pre-money in February to speculative secondary market interest at $800 billion-plus in April, Google’s balance sheet captured that delta. The mechanism—equity investments marked to fair value through earnings—turns strategic infrastructure partnerships into tradeable financial instruments.
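The mechanics are simple to sketch. The ownership fraction below is hypothetical—the article does not disclose Google’s exact stake—but it shows how a remark of a fair-valued private stake flows straight into reported earnings:

```python
# Illustrative sketch of fair-value-through-earnings accounting for a
# private equity stake. The 10% stake is hypothetical, chosen only so
# the example reconciles with the ~$37.7B gain Alphabet reported.

def unrealized_gain(stake_fraction: float,
                    valuation_before: float,
                    valuation_after: float) -> float:
    """Gain recognized in earnings when the stake is remarked."""
    return stake_fraction * (valuation_after - valuation_before)

gain = unrealized_gain(0.10, 350e9, 727e9)
print(f"Recognized gain: ${gain / 1e9:.1f}B")  # $37.7B
```

No cash changes hands and no service is delivered; the earnings line moves because a private-market valuation did.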

Google Cloud’s backlog reached $460 billion in Q1, roughly doubled from $240 billion at year-end 2025 and up 63% year-over-year, according to the company’s SEC filing reviewed by Investing.com. CEO Sundar Pichai framed the results around infrastructure deployment: “Our AI investments and full stack approach are lighting up every part of the business.”

Capacity as Collateral

The operational implications extend beyond Google and Anthropic. If gigawatt-scale compute reservations become standard terms for frontier AI development, capital requirements for new entrants escalate dramatically. Training a state-of-the-art model might cost $500 million in compute; securing multi-year capacity rights could require $5-10 billion in commitments before the first token is generated.
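The entry-cost gap those figures imply is stark. A back-of-envelope comparison using the article’s own illustrative numbers (not disclosed contract terms):

```python
# Rough comparison: one frontier training run vs. a multi-year
# capacity reservation. Both inputs are the article's illustrative
# figures, not actual contract economics.

training_run_cost = 500e6                        # ~$500M per frontier model
reservation_low, reservation_high = 5e9, 10e9    # multi-year capacity rights

low_x = reservation_low / training_run_cost
high_x = reservation_high / training_run_cost
print(f"Capacity commitment = {low_x:.0f}x-{high_x:.0f}x one training run")
# → 10x-20x
```

Under these assumptions, the ante for capacity rights is 10-20 times the cost of the training run itself—a barrier measured in balance-sheet strength, not research talent.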

Gartner forecasts worldwide AI spending will total $2.52 trillion in 2026, a 44% year-over-year increase, with AI infrastructure accounting for $401 billion of that total, per a Gartner January 15 release. The question is whether demand absorption justifies that deployment pace—or whether pre-sold capacity creates artificial scarcity that inflates valuations without corresponding revenue growth.

Key Takeaways
  • Anthropic secured 10 GW of combined compute capacity (5 GW Google, 5 GW AWS), pre-sold through 2031 before infrastructure is fully deployed
  • Google’s $40B commitment includes $10B cash at a $350B valuation plus up to $30B in performance-linked payments; the deal locks Anthropic into the TPU ecosystem
  • 60% of Alphabet’s Q1 2026 profit ($37.7B of $62.6B) came from unrealized gains on Anthropic equity, not operational revenue
  • Grid capacity, not chip supply, now the binding constraint: 4-7 year interconnection lead times vs. surging demand
  • Hyperscaler capex for 2026 totals $725B (Google, Amazon, Microsoft, Meta), up 77% YoY, mostly infrastructure-focused

What to Watch

Anthropic’s rumored Q4 2026 IPO will test whether public markets validate the $350 billion-plus private valuations or force a repricing. If the offering prices below secondary market levels, it exposes how illiquid private stakes can distort public company earnings through mark-to-market accounting.

Grid capacity auctions in PJM, ERCOT, and CAISO over the next 12 months will signal whether power infrastructure can scale to meet AI demand or if rationing mechanisms emerge. If clearing prices continue climbing at double-digit rates, expect political backlash from residential ratepayers facing higher bills to subsidize data center load growth.

Finally, watch whether Amazon, Microsoft, or Oracle announce similar gigawatt-scale pre-sale agreements with other frontier labs. If capacity reservation becomes the standard model, it consolidates the AI industry around a handful of hyperscalers with the balance sheets to build multi-year infrastructure pipelines—and transforms compute access into the defining competitive moat of the next decade.