AI Markets · 8 min read

Google Taps Marvell for Custom AI Chips in Strategic Shift Beyond Broadcom Exclusivity

The deal positions Marvell alongside Broadcom in the hyperscaler silicon market as cloud giants accelerate vertical integration to reduce NVIDIA dependency and optimize inference economics.

Google is in advanced negotiations with Marvell Technology to design two custom AI accelerators—a memory processing unit and an inference-optimized TPU—marking the first significant diversification of Google’s chip partnerships beyond Broadcom since the company extended its primary ASIC relationship through 2031. The move signals that hyperscale vertical integration has entered an execution phase, with cloud providers now managing multi-vendor design ecosystems to reduce single-supplier dependency and optimize the performance-per-dollar economics that dominate inference workloads at billion-user scale.

Marvell Custom Silicon Momentum
  • Data centre revenue (FY2026): $6.1B
  • Total revenue growth YoY: +42%
  • Custom silicon annual run rate: $1.5B
  • Cloud-provider design wins: 18

The Next Web first reported the negotiations on 20 April 2026, citing sources familiar with the discussions. Marvell shares jumped nearly 6% the following day, per CNBC, as investors recognized the strategic validation of the company’s custom silicon business, which already serves Amazon (Trainium), Microsoft (Maia), and Meta (data processing units) with a $1.5 billion annual run rate as of February 2026.

The Inference Economics Driving Multi-Vendor Strategies

Google’s decision to add Marvell reflects a structural shift in AI infrastructure spending. Training models remains capital-intensive, but inference—the process of running deployed models at scale—now dominates total cost of ownership for cloud providers serving billions of users. Custom ASICs optimized for inference deliver dramatically better performance-per-dollar than general-purpose GPUs. Google announced its latest TPU 8i (inference) chip at Cloud Next on 22 April 2026, claiming 80% better performance-per-dollar than the prior generation, according to CNBC.
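
To make those economics concrete, here is a minimal sketch of how a performance-per-dollar gain flows through to serving cost. The throughput and chip-hour prices are illustrative assumptions, not reported TPU 8i figures; only the 1.8x (+80%) ratio comes from Google’s claim above.

```python
# Illustrative sketch: how a performance-per-dollar gain flows through to
# inference serving cost. The throughput and pricing figures below are
# hypothetical assumptions; only the 1.8x (+80%) ratio is from the article.

def cost_per_million_tokens(tokens_per_second: float, dollars_per_chip_hour: float) -> float:
    """Serving cost for one million output tokens on a single accelerator."""
    tokens_per_hour = tokens_per_second * 3600
    return dollars_per_chip_hour / tokens_per_hour * 1_000_000

# Assumed baseline: prior-generation accelerator.
baseline = cost_per_million_tokens(tokens_per_second=5_000, dollars_per_chip_hour=3.00)

# 80% better performance-per-dollar means 1.8x the tokens per dollar,
# so cost per token falls to 1/1.8 of the baseline.
improved = baseline / 1.8

print(f"baseline: ${baseline:.3f} per 1M tokens")
print(f"improved: ${improved:.3f} per 1M tokens ({1 - improved / baseline:.0%} cheaper)")
```

Under these assumptions, the headline 80% gain cuts per-token serving cost by roughly 44%, the figure that compounds at billion-user scale.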

“[We are] seeing an unprecedented level of activity across multiple new engagements as hyperscalers increase their cadence of custom chip development.”

— Marvell management, February 2026

Google projects total TPU shipments of 4.3 million units in 2026, scaling to more than 35 million by 2028, according to The Next Web. Anthropic alone has committed to approximately 3.5 gigawatts of next-generation TPU-based compute starting in 2027, underscoring the scale at which inference infrastructure must operate.

By partnering with both Broadcom and Marvell, and working with MediaTek on additional designs, Google is managing supply chain risk while capturing specialized expertise. Broadcom commands more than 70% market share in custom AI accelerators, with AI revenue of $8.4 billion in its most recent quarter (up 106% year-over-year) and guidance of $10.7 billion for the next quarter. The company is targeting $100 billion in AI chip revenue by 2027, according to The Next Web. Mizuho analysts estimated Broadcom would record $21 billion in AI revenue from its Google and Anthropic relationships in 2026, rising to $42 billion in 2027.

Custom ASIC Market Fragmentation Accelerates

The custom silicon market is experiencing explosive growth that outpaces general-purpose GPU shipments. TrendForce projects custom chip sales will increase 45% in 2026, compared with 16% growth in GPU shipments, according to The Next Web. The custom ASIC market is projected to reach $118 billion by 2033, while Bloomberg Intelligence estimates the total AI accelerator market (including GPUs and custom chips) will hit $604 billion by 2033, up from $116 billion in 2024.
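
As a quick sanity check on those Bloomberg Intelligence endpoints, the implied compound annual growth rate can be computed in a few lines; the $116B (2024) and $604B (2033) figures come from the article, and the formula is the standard CAGR definition.

```python
# Implied CAGR for the total AI accelerator market, using the Bloomberg
# Intelligence endpoints cited above: $116B in 2024 -> $604B in 2033.
def implied_cagr(start: float, end: float, years: int) -> float:
    """Standard compound annual growth rate: (end/start)**(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

print(f"{implied_cagr(116, 604, 2033 - 2024):.1%}")  # -> 20.1% per year
```

In other words, the cited endpoints imply roughly 20% annual growth over the nine-year window.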

Custom Silicon vs GPU Growth Trajectories
Metric                              Custom ASICs   GPUs
2026 shipment growth                +45%           +16%
2033 projected market size          $118B          n/a
Broadcom projected share (2027)     ~60%           n/a
Marvell projected share (2027)      ~25%           n/a

Counterpoint Research projects Broadcom will hold roughly 60% of the custom AI accelerator market by 2027, with Marvell at approximately 25%, according to analysis by The Next Web. This represents a fundamental restructuring of the AI chip ecosystem, which NVIDIA has dominated with approximately 80% market share across all AI accelerators, per Silicon Analysts. That share is projected to decline to 75% by year-end 2026 as hyperscalers capture inference workloads with purpose-built ASICs.

Hyperscaler Commitments Reshape Chip Procurement

Google’s Marvell negotiations are part of a broader industry pattern. Meta extended its Broadcom partnership through 2029 on 15 April 2026, committing to deploy 1 gigawatt of its own custom MTIA chips using Broadcom technology, according to Oplexa. The company has committed $2.3 billion to Broadcom for AI chip design and related services in the past year, per InvestorPlace, as part of a broader $135 billion AI infrastructure spend planned for 2026.

Key Hyperscaler Chip Partnerships
  • Google: Broadcom (through 2031), Marvell (in negotiation), MediaTek
  • Meta: Broadcom MTIA partnership through 2029, $2.3B committed in past year
  • Amazon: Marvell (Trainium/Inferentia), strategy now validated by Meta’s deployment commitment
  • Microsoft: Marvell (Maia chips), targeting Azure optimization
  • OpenAI: Broadcom partnership for 10GW custom accelerators (H2 2026-2029)

Meta also signed a deal with Amazon on 24 April 2026 for millions of custom Trainium and Inferentia AI chips, validating AWS’s custom silicon strategy and demonstrating that hyperscalers are willing to procure from cloud competitors when the economics justify it, per CNBC. OpenAI and Broadcom announced a collaboration for 10 gigawatts of custom AI accelerators, with deployments starting in the second half of 2026 and running through 2029, according to Tom’s Hardware.

The competitive dynamics are clear: NVIDIA’s CUDA ecosystem lock-in and production scale preserve its dominance in training workloads, but inference economics favor purpose-built ASICs. Marvell’s data centre revenue reached a record $6.1 billion in fiscal year 2026 (ending February), with total revenue of $8.2 billion, up 42% year-over-year, driven by custom silicon demand across hyperscalers.

What to watch

The finalization of Google’s Marvell contracts will clarify the scope of memory processing and inference optimization—two bottlenecks in LLM serving at scale. Watch whether Google commits to multi-generation roadmaps with Marvell (as it did with Broadcom through 2031) or maintains flexibility for single-generation partnerships. NVIDIA’s investor day in May 2026 will address competitive positioning as custom ASICs capture inference share; the company’s response to hyperscaler vertical integration—potentially through more flexible licensing or co-design partnerships—will determine whether it can stabilize market share above 70% through 2028.

Marvell’s quarterly earnings guidance will signal whether Google shipments begin in late 2026 or push to 2027, affecting near-term revenue trajectory. Broadcom’s next earnings report (expected May 2026) will reveal whether its $10.7 billion AI revenue guidance for the current quarter reflects organic growth or pull-forward from 2027 commitments. Finally, monitor semiconductor supply chain announcements from TSMC and Samsung: hyperscaler ASIC demand is concentrating advanced node capacity, with implications for pricing power and geopolitical chip sovereignty as governments assess reliance on Taiwan-based foundries for strategic AI infrastructure.