Nvidia Delivers Record $68B Quarter as Enterprise Demand Sustains AI Chip Boom
Chipmaker crushes Q4 expectations with 73% revenue growth and $78B guidance, while analysts debate whether stretched valuations reflect the full scale of a multi-year infrastructure buildout.
Nvidia reported fiscal fourth-quarter revenue of $68.1 billion on February 25, beating expectations by $3 billion and extending a streak of earnings dominance that has made the chipmaker synonymous with the AI revolution.
The results — up 73% year-over-year and 20% sequentially — cemented Nvidia’s position at the center of a historic infrastructure expansion, with data center revenue climbing to $62.3 billion, ahead of the $60.69 billion consensus. Non-GAAP earnings per share came in at $1.62, versus the $1.53 estimate. More critically, the company guided first-quarter fiscal 2027 revenue to $78 billion — $5 billion above Wall Street’s $72.6 billion forecast — and reiterated that it would exceed its $500 billion Blackwell and Rubin revenue target through the end of calendar 2026.
- Revenue: $68.1B (+73% YoY)
- Data center revenue: $62.3B (+75% YoY)
- Non-GAAP EPS: $1.62 (vs. $1.53 est.)
- Non-GAAP gross margin: 75.2%
Hyperscalers Drive Revenue, But Enterprise Mix Broadens
According to CNBC, hyperscalers — Microsoft, Amazon, Meta, and Alphabet — now account for more than half of Nvidia’s data center sales, a shift that underscores both the scale of cloud AI deployments and the concentration risk inherent in the company’s customer base. Yet CFO Colette Kress emphasized that the quarter added $11 billion in sequential data center revenue across a “diverse and expanding set of customers including cloud providers, hyperscalers, AI model makers, enterprises, and sovereign nations.”
Nvidia’s networking business, which includes the Spectrum-X Ethernet and NVLink scale-up fabric, reached $11 billion for the quarter, with compute revenue growing 58% year-over-year while networking surged 263%. For the full fiscal year, networking revenue exceeded $31 billion — up more than tenfold since the 2020 Mellanox acquisition. This matters: as Nvidia transitions from selling discrete GPUs to integrated rack-scale systems like the 72-GPU Grace Blackwell, attach rates for networking hit 90%, deepening customer lock-in and expanding Nvidia’s addressable revenue per deployment.
Nvidia’s shift from component vendor to systems integrator represents a structural change in its business model. The Grace Blackwell NVL72, a rack-scale supercomputer combining 72 GPUs with custom networking and cooling, sells for millions of dollars per unit, orders of magnitude more than a standalone GPU. This integration drives higher ASPs (average selling prices projected to reach $33,000 per GPU in 2026, up from $19,000 in 2024, according to Bloomberg Intelligence), but it also increases manufacturing complexity and supply chain risk.
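The rack-scale economics follow directly from the figures above. A quick back-of-envelope sketch, using the Bloomberg Intelligence ASP projections quoted here (treating GPU content alone as a floor on system price is an assumption, since networking, CPUs, and cooling add further cost):

```python
# Back-of-envelope NVL72 economics from the figures cited above.
# ASPs are Bloomberg Intelligence projections; GPU content alone is
# treated as a floor on the full rack price (an assumption).

GPUS_PER_RACK = 72      # Grace Blackwell NVL72 configuration
ASP_2024 = 19_000       # projected average selling price per GPU, 2024
ASP_2026 = 33_000       # projected average selling price per GPU, 2026

gpu_content_2026 = GPUS_PER_RACK * ASP_2026    # GPU silicon alone
asp_growth = ASP_2026 / ASP_2024 - 1           # ASP expansion 2024 -> 2026

print(f"GPU content per NVL72 rack at 2026 ASPs: ${gpu_content_2026:,}")
print(f"Projected ASP growth 2024 -> 2026: {asp_growth:.0%}")
```

GPU content alone works out to roughly $2.4 million per rack, consistent with the "millions of dollars per unit" pricing described above, with about 74% ASP expansion over two years.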
Margin Resilience Amid Platform Transition
One of the most scrutinized metrics was gross margin. Non-GAAP gross margin for Q4 came in at 75.2%, up sequentially from Q3 as Blackwell production ramped, but below the 75.5% recorded in fiscal 2025. For the full fiscal 2026, non-GAAP gross margin was 71.3%, down 420 basis points year-over-year — a compression CFO Kress attributed to inventory write-offs and cost structure dynamics during the platform transition. Guidance for Q1 fiscal 2027 held margins at 75%, within the historical range but signaling that Nvidia’s pricing power remains intact despite intensifying competition.
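The full-year compression figure is simple arithmetic on the margins reported above; a one-line check confirms the 420 basis points:

```python
# Basis-point compression is the percentage-point difference times 100,
# using the full-year non-GAAP gross margins reported above.

fy2025_gm = 75.5   # full fiscal 2025 non-GAAP gross margin, %
fy2026_gm = 71.3   # full fiscal 2026 non-GAAP gross margin, %

compression_bps = round((fy2025_gm - fy2026_gm) * 100)
print(f"YoY gross-margin compression: {compression_bps} bps")
```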
The company disclosed that beginning in Q1 fiscal 2027, it will include stock-based compensation in non-GAAP figures — a change that will add approximately $1.9 billion to quarterly operating expenses. This shift aligns Nvidia with peers but represents a material increase in reported costs, effectively lowering non-GAAP earnings by roughly 10% going forward.
China Revenue Excluded, Geographic Risk Persists
Nvidia’s guidance explicitly excludes China data center compute revenue, a disclosure that underscores the geopolitical volatility surrounding AI chip exports. According to ServeTheHome, the company reported “insignificant” H20 sales in Q3 and Q4 2025, and Bloomberg noted that Nvidia has been unable to sell a single H200 chip to China despite US approval under certain conditions. China represented 26% of Nvidia’s revenue in fiscal 2022; by fiscal 2025, that figure had dropped to approximately 13%. Any resumption of China sales — whether through regulatory easing or compliant product variants — represents asymmetric upside not reflected in current guidance.
AMD and Custom Silicon: Competitive Pressure Builds
Nvidia maintains an estimated 80% share of the AI accelerator market, according to industry data, but the competitive landscape is shifting. AMD’s MI300X and upcoming MI350 series have gained traction with OpenAI, Oracle, and Meta — the latter running 100% of live Llama 405B traffic on AMD silicon. AMD’s data center and AI revenue hit $4.3 billion in Q3 2025, and the company has secured a multi-year, multi-gigawatt deployment with OpenAI expected to contribute over $100 billion in revenue over the next few years.
Intel’s Gaudi 3 platform, meanwhile, has stumbled — the company dropped its 2024 AI accelerator revenue target of $500 million and faces ongoing governance turmoil following the departure of CEO Pat Gelsinger. Intel’s data center market share has collapsed from 68% in 2021 to 6% in 2025, a decline driven by its failure to pivot to GPU-accelerated workloads.
The more strategic threat comes from custom ASICs. Hyperscalers — led by Google’s TPUs, Amazon’s Trainium/Inferentia, and Microsoft’s Maia — are investing heavily in proprietary silicon to optimize cost and differentiate their offerings. According to TrendForce, custom ASIC shipments are projected to grow 44.6% in 2026, while GPU shipments grow just 16.1%. Broadcom, the primary designer of hyperscaler ASICs, secured a $21 billion order from Anthropic in 2025 and is expected to see AI semiconductor revenue double to $8.2 billion in Q1 fiscal 2026.
| Vendor | Market Share | Key Products |
|---|---|---|
| Nvidia | ~80% | H100, H200, Blackwell, Rubin |
| AMD | ~11% | MI300X, MI350 (upcoming) |
| Intel | ~6% | Gaudi 3 |
| Custom ASICs | ~3% | Google TPU, AWS Trainium, Broadcom |
Valuation Debate: Is $300 Justified?
Nvidia trades at a trailing P/E of approximately 47.7x and a forward P/E near 29x based on fiscal 2027 consensus estimates. That multiple has compressed from a three-year average above 75x, reflecting the sheer scale of earnings growth: non-GAAP EPS rose from $0.89 in Q4 fiscal 2025 to $1.62 in Q4 fiscal 2026 — an 82% increase. For the full fiscal year, EPS hit $4.77, and consensus estimates project $7.76 for fiscal 2027, implying further 60%+ growth.
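The multiples can be cross-checked against each other. The sketch below derives an implied share price from the reported trailing P/E and EPS, then the forward multiple on consensus earnings; the implied price is a derived figure, not a quoted market price:

```python
# Cross-check the valuation arithmetic using figures reported above.
# The implied share price is trailing P/E x trailing EPS, a derived
# assumption rather than a quoted price.

trailing_pe = 47.7        # reported trailing multiple
trailing_eps = 4.77       # full fiscal 2026 non-GAAP EPS
forward_eps = 7.76        # fiscal 2027 consensus estimate

implied_price = trailing_pe * trailing_eps       # ~ $228
forward_pe = implied_price / forward_eps         # ~ 29x on consensus EPS
implied_growth = forward_eps / trailing_eps - 1  # ~ 63% EPS growth

print(f"implied share price: ${implied_price:.2f}")
print(f"forward P/E on consensus: {forward_pe:.1f}x")
print(f"implied EPS growth: {implied_growth:.0%}")
```

A trailing multiple near 48x combined with roughly 63% expected EPS growth mechanically yields a consensus-based forward multiple in the high 20s.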
Yet valuation remains contentious. Wedbush analyst Dan Ives, one of the most vocal bulls, told CNBC ahead of earnings that Nvidia is “massively underestimated” by Wall Street, arguing that only 3% of U.S. companies have meaningfully deployed AI and that the market is in “year three of an eight-to-ten-year buildout.” Wedbush maintains an Outperform rating with a $230 price target, but has previously set a $250 base-case target for end-2026, with a bull case extending to $275. Cantor Fitzgerald holds the Street-high $300 target, citing visibility into hundreds of billions of demand through 2028 and a path to $50 EPS by 2030.
The bear case centers on sustainability. If hyperscaler capex — projected by analysts to approach $700 billion in 2026 — slows due to financing constraints, return-on-investment skepticism, or macro deterioration, Nvidia’s revenue growth would decelerate sharply. Morgan Stanley analyst Joseph Moore maintains a $250 target but models a downside scenario of $150 if growth slows faster than expected. Under a more aggressive scenario assuming 75% revenue growth, the forward P/E on fiscal 2027 earnings falls to roughly 24x, suggesting the market is pricing in material deceleration risk or margin compression.
“The reality is there’s one chip in the world fueling the AI revolution, and that’s Nvidia. Numbers are significantly underestimated — I think 15% to 20% at a minimum going into 2026.”
— Dan Ives, Wedbush Securities
Supply Chain Diversification and Geopolitical Hedging
Nvidia disclosed in its annual filing that it is diversifying manufacturing beyond Taiwan and expanding into the U.S. and Latin America. Blackwell GPUs are now produced at TSMC’s Arizona fabs, and rack-scale systems are assembled at a Foxconn plant in Mexico. This geographic hedging mitigates Taiwan risk — a critical concern given that TSMC manufactures approximately 92% of the world’s most advanced AI chips — but introduces new execution risk as Nvidia scales production across multiple sites.
The company increased purchase commitments from $50.3 billion at the end of Q3 to $95.2 billion at the end of Q4, a near-doubling that signals both confidence in long-term demand and a strategic bet to lock in capacity ahead of competitors. Inventory rose 8% sequentially, which Nvidia attributed to securing supply to meet demand “beyond the next several quarters.”
Key Takeaways
- Nvidia beat Q4 estimates with $68.1B revenue (+73% YoY) and guided Q1 to $78B, $5B above consensus.
- Data center revenue hit $62.3B, with hyperscalers accounting for over 50% of sales; networking revenue surged 263% YoY to $11B.
- Gross margins held at 75.2%, but full-year margins compressed 420 bps as Blackwell ramped; Q1 guidance maintains 75%.
- AMD’s MI300X gains traction with hyperscalers; custom ASICs projected to grow 44.6% in 2026 vs. 16.1% for GPUs.
- Stock trades at 47.7x trailing P/E, down from 75x three-year average; analysts split on whether $250-$300 targets reflect full AI buildout scale.
What to Watch
Nvidia’s March GTC developer conference will provide critical visibility into the Rubin platform roadmap and enterprise adoption metrics. Investor focus will center on whether Blackwell gross margins stabilize above 75% as production scales, and whether hyperscaler capex projections for calendar 2027 hold or begin to moderate. Any clarity on China export policy — or material orders under compliant frameworks — would represent meaningful upside. The AMD-OpenAI partnership and Meta’s reported discussions with Alphabet over TPU supply are early tests of whether CUDA’s software moat can withstand a coordinated push by hyperscalers to reduce single-vendor dependence. Finally, the sustainability of Nvidia’s networking attach rates will determine whether the company can defend its expanded TAM or faces margin pressure as customers unbundle GPU and networking purchases.