AI Markets · 8 min read

NVIDIA’s $40B equity blitz rewrites the AI stack ownership playbook

The chip giant's venture portfolio spanning infrastructure, models, and applications reveals conviction in sustained AI capex—and defensive hedging against margin compression.

NVIDIA deployed over $40 billion in equity commitments to AI companies between 2024 and early 2026, transforming from semiconductor vendor to vertically integrated AI stack owner with stakes spanning data infrastructure, foundation models, and enterprise applications.

The company’s non-marketable equity securities—private company investments—swelled to $22.25 billion at the end of January 2026 from $3.39 billion a year earlier, according to CNBC. NVIDIA invested $17.5 billion in private companies and infrastructure funds during its last fiscal year alone, primarily supporting early-stage startups. The deployment was funded by $97 billion in free cash flow generated in that same period.

This aggressive positioning reveals three critical dynamics: conviction that enterprise AI capital expenditure will persist despite macro headwinds, defensive hedging against margin compression from custom silicon and open-source models, and ecosystem lock-in that amplifies competitors’ structural disadvantages.

NVIDIA’s Investment Position
Non-marketable equity securities (Jan 2026): $22.25B
Year-over-year growth: +556%
Fiscal-year investment gains: $8.92B
AI accelerator market share: 85-90%
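The headline growth figure follows directly from the two balances above. A minimal sketch, using only the dollar amounts reported in the article:

```python
# Sanity-check the stat box: year-over-year growth in NVIDIA's
# non-marketable equity securities (balances as reported above).
start = 3.39e9   # balance at end of January 2025, USD
end = 22.25e9    # balance at end of January 2026, USD

growth_pct = (end - start) / start * 100
print(f"YoY growth: {growth_pct:.0f}%")  # ≈ 556%
```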

Portfolio concentration across the AI value chain

NVIDIA’s equity portfolio spans the entire AI infrastructure stack. The company has announced planned commitments of up to $100 billion to OpenAI, $20 billion to xAI, and $10 billion to Anthropic, per TechCrunch. Beyond foundation models, the portfolio extends to infrastructure providers CoreWeave, Lambda, IREN, and Crusoe, as well as application-layer companies in healthcare, robotics, and finance.

In May 2026, NVIDIA announced investment agreements with IREN for up to $2.1 billion and Corning for up to $3.2 billion to build three new U.S. optical manufacturing facilities, according to CNBC. The Corning deal focuses on co-packaged optics—integrated optical interconnects that reduce power consumption and latency in AI data centers.

“Our investments are focused very squarely, strategically on expanding and deepening our ecosystem reach.”

— Jensen Huang, CEO, NVIDIA

The investment strategy targets companies that drive GPU demand while strengthening NVIDIA’s position across networking, power delivery, and cooling infrastructure—layers where competitors lack comparable integration.

Conviction bet on sustained AI capex growth

NVIDIA’s equity deployment signals confidence in multi-trillion-dollar AI infrastructure buildout projections. Goldman Sachs projects baseline AI capital expenditure of $765 billion annually in 2026, growing to $1.6 trillion by 2031, per a Goldman Sachs report dated March 2026.
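The Goldman Sachs forecast implies a steep compounding rate. As a rough sketch using only the two endpoints cited above (the derived rate is our computation, not a figure from the report):

```python
# Implied annual growth rate behind the Goldman Sachs AI capex forecast:
# $765B in 2026 rising to $1.6T by 2031 (five years of compounding).
capex_2026 = 765e9
capex_2031 = 1.6e12
years = 2031 - 2026

cagr = (capex_2031 / capex_2026) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 16% per year
```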

Big Tech capital expenditure reached approximately $228 billion in 2024 and is projected to hit $602 billion in 2026, with 75%—roughly $450 billion—targeting AI infrastructure, according to Introl. Amazon, Microsoft, Google, and Meta collectively account for the majority of this spending, with NVIDIA capturing 85-90% of GPU accelerator purchases.
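The "roughly $450 billion" figure is simply the AI share applied to the 2026 projection. A quick check against Introl's numbers as cited above:

```python
# AI-targeted share of projected 2026 Big Tech capital expenditure.
total_capex = 602e9   # projected 2026 Big Tech capex, USD (per Introl)
ai_share = 0.75       # portion targeting AI infrastructure

ai_capex = total_capex * ai_share
print(f"AI-targeted capex: ${ai_capex / 1e9:.0f}B")  # ≈ $452B, i.e. "roughly $450B"
```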

NVIDIA announced a path to $1 trillion in cumulative sales across its Blackwell and Rubin chip generations from 2025 through 2027, as reported by Investing.com. The company’s sovereign AI business—government and national infrastructure deployments—more than tripled year-over-year in 2025 to over $30 billion.

2006: CUDA launch
NVIDIA releases its parallel computing platform, establishing a software moat now two decades deep.

March 2020: Mellanox acquisition
The $6.9B purchase secures high-speed networking for AI data centers.

2024-2026: $40B equity deployment
Systematic investments across the AI stack, from models to infrastructure.

May 2026: Corning co-packaged optics deal
A $3.2B investment in optical manufacturing facilities for AI networking.

Defensive positioning against margin compression

While the equity strategy projects confidence, it simultaneously hedges against three emerging threats: hyperscaler custom silicon, open-source model commoditization, and CUDA ecosystem erosion.

Google’s TPUs, Amazon’s Trainium chips, and Microsoft’s Maia processors represent multi-billion-dollar internal R&D efforts to reduce dependency on NVIDIA hardware. These custom accelerators target inference workloads where NVIDIA’s performance advantages matter less than total cost of ownership.

Open-source models from DeepSeek, Mixtral, and Meta’s Llama series reduce reliance on proprietary foundation model providers—many of which are NVIDIA portfolio companies. If model differentiation collapses, demand shifts toward commoditized inference hardware rather than training-optimized GPUs where NVIDIA maintains pricing power.

NVIDIA’s CUDA software ecosystem provides a 10-30% performance advantage over AMD’s ROCm in compute-intensive workloads, according to SoftwareSeni. Over 50,000 engineers list CUDA as a skill on LinkedIn, versus an estimated 10,000-50,000 for ROCm. However, AMD, Intel, and hyperscalers are investing billions to close the software gap.

NVIDIA vs. AMD: structural margin differential
Metric                          NVIDIA         AMD
Gross margin                    70%+           46-50%
AI accelerator market share     85-90%         ~5%
Software ecosystem engineers    50K+ (CUDA)    10K-50K (ROCm)
Performance advantage           Baseline       -10% to -30%

AMD faces chronic margin pressure even as it gains share. The company’s 46-50% gross margins trail NVIDIA’s 70%+ by a structural gap tied to software ecosystem maturity and customer willingness to pay premiums for proven toolchains, per PitchGrade Research.

Ecosystem lock-in amplifies competitor disadvantage

NVIDIA’s equity deployment creates circular demand: investments in model companies like OpenAI and xAI generate GPU purchases, which fund further investments, which expand the customer base requiring NVIDIA-optimized infrastructure.

This dynamic makes AMD’s path harder. Even if AMD achieves hardware parity, customers face switching costs tied to CUDA-optimized codebases, debugging tools, and performance-tuning libraries built over two decades. Intel’s AI accelerator efforts, Gaudi and the Ponte Vecchio GPU, remain largely irrelevant in data center AI deployments despite billions in R&D.

Context

NVIDIA’s vertical integration strategy mirrors Apple’s playbook: control the full stack from silicon to software, then use ecosystem lock-in to defend margins against commoditization. The difference is scale: NVIDIA is applying this strategy across a $7.6 trillion addressable market (cumulative AI capex, 2026-2031) rather than consumer devices.

Wedbush Securities analyst Matthew Bryson noted that NVIDIA’s investments “fit squarely into the circular investment theme that’s been driving fears around the market’s durability,” but sees the strategy as “underscoring NVIDIA’s vision and creating a competitive moat if the company can execute,” per CNBC.

What to watch

NVIDIA’s Q2 fiscal 2027 earnings (late August 2026) will reveal whether hyperscaler capex commitments translate to accelerating GPU sales or whether custom silicon is beginning to displace NVIDIA in inference workloads. Watch for any sequential decline in data center revenue growth rates.

AMD’s next-generation MI400 series launch timing and customer commitments will indicate whether ROCm ecosystem maturity is closing the software gap. If AMD secures multi-billion-dollar hyperscaler wins with materially lower pricing, margin pressure intensifies across the sector.

Goldman Sachs’ AI capex forecast assumes 5-7 year silicon useful life and continued model scaling laws. Any evidence that models are plateauing in performance gains per dollar of compute—or that useful life extends beyond 7 years—would reduce replacement cycles and slow demand growth.

Finally, regulatory scrutiny may escalate. NVIDIA’s equity stakes in direct customers create potential conflicts of interest that could draw antitrust attention, particularly in Europe where vertical integration in platform markets faces heightened enforcement.