AI Markets · 7 min read

AWS locks in 1 million Nvidia GPUs through 2027 in $50B+ bet on single supplier

Deal validates Nvidia's ~88% AI chip dominance while exposing AWS's strategic dependency amid export controls and rising competitive threats.

Amazon Web Services has committed to purchasing 1 million Nvidia GPU chips through 2027, formalizing a supply relationship worth an estimated $50 billion or more at current pricing and marking the largest public hyperscaler-to-chipmaker deal in AI infrastructure history.

The agreement, confirmed March 19, extends beyond GPUs to include Spectrum networking chips and other Nvidia silicon. Deliveries begin in 2026, with the bulk arriving through 2027 as AWS scales its AI training and inference capacity. The deal represents a strategic inflection from spot purchasing to multi-year commitments — a shift driven by supply constraints and the need for predictable capacity in a market where hyperscaler capex is forecast to exceed $600 billion in 2026, with 75% ($450 billion) tied directly to AI infrastructure.

Deal Fundamentals
GPU volume committed: 1,000,000 units
Delivery window: 2026–2027
Estimated deal value: $50B+
Nvidia AI chip market share: 88%

Nvidia’s dominance calcifies

The AWS commitment validates Nvidia’s stranglehold on AI compute. The company commands an estimated 88% of the data center AI chip market as of March 2026, a position defended by CUDA software lock-in and architectural advantages competitors have struggled to replicate. CEO Jensen Huang has projected a $1 trillion sales opportunity for Nvidia’s Rubin and Blackwell chip families through 2027, a forecast now anchored by AWS’s contractual commitment.

Ian Buck, Nvidia’s vice president of hyperscale and high-performance computing, framed the deal as validation of the company’s full-stack approach. “Inference is hard. It’s wickedly hard,” he told Reuters. “To be the best at inference, it is not a one chip pony. We actually use all seven chips.” The comment reflects AWS’s deployment of multiple Nvidia chip families in a single integrated stack, pairing Blackwell and Rubin GPUs for training with Nvidia’s networking and inference-oriented silicon.

“GPU compute demand is skyrocketing — more compute makes smarter AI, smarter AI drives broader use and broader use creates demand for even more compute. The virtuous cycle of AI has arrived.”

— Jensen Huang, CEO, Nvidia

AWS’s hidden liability

The deal’s scale exposes a strategic vulnerability: AWS is formalizing dependence on a single supplier at the precise moment geopolitical and competitive threats to that supplier’s availability are escalating. U.S. export controls, codified in the global AI Diffusion Rule effective January 15, 2025, restrict advanced chip exports to China and third countries, creating supply fragmentation risk. While the U.S. allows conditional exports of Nvidia’s H200 chips to China with a 25% fee, the policy framework remains fluid under the Trump administration, with potential for further tightening.

Amazon is on pace to exceed $100 billion in capital expenditure this year, driven by AWS data center and AI infrastructure buildouts. In February 2026, AWS announced a $200 billion capex commitment, signaling confidence in AI demand durability. But that confidence is now tied to Nvidia’s ability to deliver at scale while navigating export restrictions, fab capacity constraints, and CoWoS packaging bottlenecks.

Context

Hyperscalers have shifted from spot purchases to multi-year GPU commitments as supply constraints collided with surging AI demand. Microsoft CEO Satya Nadella admitted in a recent earnings call that power infrastructure — not chip availability — has become the binding constraint: “You may actually have a bunch of chips sitting in inventory that I can’t plug in. In fact, that is my problem today.” The AWS-Nvidia deal reflects both procurement certainty and infrastructure risk.

Competitive pressure builds

AMD and Intel face a narrowing window to challenge Nvidia’s lock on hyperscaler budgets. AMD’s data center revenue surged 34% quarter-over-quarter to $4.3 billion in late 2025, with operating income up 793% year-over-year, driven by adoption of its MI300 accelerators. The company is preparing to launch the MI400 family in 2026, targeting inference workloads where Nvidia’s pricing power is most exposed.

But AWS’s million-GPU commitment suggests the competitive window is closing. Long-term contracts lock in not just chips but software ecosystems, networking integration, and operational tooling — switching costs that make incumbent displacement progressively harder. Intel’s Gaudi accelerators and custom silicon efforts from AWS (Trainium, Inferentia) and Google (TPU) remain niche, serving specific workloads but failing to dent Nvidia’s training dominance.

Jan 15, 2025
U.S. export controls tighten
BIS issues global AI Diffusion Rule restricting advanced chip exports to China and third countries.
Feb 5, 2026
AWS announces $200B capex
Amazon commits to massive infrastructure spend, signaling confidence in AI demand durability.
Mar 19, 2026
AWS-Nvidia deal confirmed
1 million GPU commitment through 2027 formalizes supplier relationship worth $50B+.

Pricing power vs. diversification risk

Nvidia’s gross margins in AI accelerators exceed 70%, a premium defended by performance leadership and software moats. But the AWS deal introduces a new dynamic: volume commitments at this scale typically include pricing concessions, and AWS’s negotiating leverage grows as its share of Nvidia’s revenue increases. If AWS accounts for 15–20% of Nvidia’s data center revenue by 2027, the relationship shifts from customer to co-dependent partner.
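The revenue-share scenario can be sized with a rough sketch. Assuming the $50B deal value is recognized evenly across the 2026–2027 delivery window (an assumption for illustration, not a reported schedule), this computes what annual Nvidia data center revenue would correspond to AWS holding a 15–20% share.

```python
# Illustrative scenario: Nvidia data center revenue implied by AWS
# holding a 15-20% share. Assumes even recognition of the $50B deal
# over two delivery years; these are not reported financials.
deal_value_usd = 50e9
delivery_years = 2
aws_annual_spend = deal_value_usd / delivery_years  # $25B per year

for share in (0.15, 0.20):
    implied_dc_revenue = aws_annual_spend / share
    print(f"AWS at {share:.0%} implies Nvidia data center revenue "
          f"of ~${implied_dc_revenue / 1e9:.0f}B per year")
```

The exercise cuts both ways: the larger AWS's slice of that revenue, the more pricing leverage it gains, but also the more exposed Nvidia becomes to a single customer's diversification efforts.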

Simultaneously, AWS faces internal pressure to diversify. Custom silicon development (Trainium for training, Inferentia for inference) represents a hedge against Nvidia’s pricing power, but adoption has been slow. Workloads trained on Nvidia GPUs resist migration due to CUDA lock-in and model tuning complexity. The million-GPU commitment suggests AWS has accepted this reality for the near term, prioritizing capacity over diversification.

What to watch

Nvidia’s ability to fulfill the AWS commitment without capacity shortages will test its CoWoS packaging partnerships with TSMC and supply chain resilience under export control constraints. Any delivery delays or quality issues will accelerate AWS’s custom silicon roadmap and create openings for AMD’s MI400 family. Watch for AWS’s next-generation Trainium chip announcements: if performance closes the gap with Nvidia’s Blackwell architecture, the 2027 renewal window becomes critical.

Geopolitically, further tightening of U.S. export rules or Chinese retaliation could fragment Nvidia’s addressable market, reducing its leverage in hyperscaler negotiations. AMD’s MI400 launch timing and Microsoft’s response to AWS’s GPU lock-in will define whether Nvidia’s 88% market share represents a durable moat or a peak preceding fragmentation.