Arm and SoftBank’s Failed Cerebras Bid Signals Scramble for AI Chip Architecture
Late-stage acquisition attempt weeks before $48.8B IPO reveals strategic anxiety over wafer-scale computing and hyperscaler vertical integration threats.
Arm Holdings and SoftBank made a last-minute acquisition approach to Cerebras Systems weeks before its IPO, which the AI chipmaker rejected, proceeding instead to a $48.8 billion valuation at the high end of its pricing range.
The failed bid, according to Bloomberg, underscores intensifying consolidation pressure in AI infrastructure markets, where specialized silicon architectures command premium valuations despite Nvidia’s 70-80% market share. Cerebras priced its IPO at the top of its $150-$160 per-share range, raising $4.8 billion in 2026’s largest technology offering to date, according to CNBC. The order book was oversubscribed roughly 20 times.
For Arm and SoftBank, the approach signals acute awareness that established semiconductor IP players risk missing AI hardware momentum as hyperscalers increasingly design proprietary chips. Cerebras’ rejection — choosing public markets over acquisition — reflects either confidence in its standalone valuation or a strategic premium the bidders refused to meet. Secondary market trading valued Cerebras shares at $187.53 on 12 May, 17% above the top of the IPO range, per Benzinga.
Wafer-Scale Economics Test Hyperscaler Appetite
Cerebras builds AI systems around the Wafer Scale Engine, a processor 57 times larger than Nvidia’s H100 that packs 4 trillion transistors onto a single die. The architecture keeps compute, memory and bandwidth on one piece of silicon rather than distributing work across interconnected chips — a design that delivers speed advantages for specific AI workloads but carries manufacturing risk and customer-concentration concerns.
The company reported $510 million of revenue in 2025, up 76% year-over-year. However, two customers — G42 and the Mohamed bin Zayed University of Artificial Intelligence — accounted for 86% of 2025 revenue (24% and 62% respectively). In a deal announced in January 2026, OpenAI committed $20 billion for 750MW of compute capacity through 2028, providing revenue visibility but reinforcing dependence on a handful of large contracts.
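As a quick consistency check on the figures above (numbers taken from this article, not company filings), the concentration and growth arithmetic works out as follows:

```python
# Sanity-check the revenue figures reported in this article.
revenue_2025 = 510e6  # Cerebras 2025 revenue, USD

# Share of 2025 revenue attributed to the two largest customers.
customer_share = {"G42": 0.24, "MBZUAI": 0.62}
top_two = sum(customer_share.values())
print(f"Top-two customer concentration: {top_two:.0%}")  # 86%

# Implied prior-year revenue from the stated 76% YoY growth.
revenue_2024 = revenue_2025 / 1.76
print(f"Implied 2024 revenue: ${revenue_2024 / 1e6:.0f}M")  # ~$290M
```

The implied 2024 figure is a derived estimate, not a number reported anywhere in the article.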
“The AI chip market is large enough to support multiple winners, but Cerebras needs to prove that wafer-scale economics work at hyperscaler volumes. The $20 billion OpenAI deal is proof of concept. Now they need three or four more deals like it to justify a $25 billion valuation.”
Ben Bajarin, CEO of Creative Strategies
Amazon Web Services announced a partnership in March 2026 to deploy Cerebras CS-3 systems through Amazon Bedrock, broadening distribution beyond direct enterprise contracts. The $24.6 billion revenue backlog at year-end 2025, reported by MarketWise, suggests demand visibility extends through 2027-2028, though converting backlog to recognized revenue depends on execution timelines for multi-year infrastructure deployments.
Strategic Positioning Against Nvidia’s Moat
Nvidia controls 70-80% of the AI accelerator market, protected by a CUDA software ecosystem that creates switching costs measured in years of engineering investment. Cerebras offers speed advantages for training large language models — the OpenAI deal centers on a code-writing model that benefits from wafer-scale parallelism — but competes on a narrower set of workloads than Nvidia’s general-purpose GPU platform.
The $48.8 billion IPO valuation represents a 96x multiple of 2025 revenue, substantially higher than Nvidia’s 30x forward sales multiple. Cerebras priced its February 2026 Series H funding round at $23 billion, meaning public market investors more than doubled the private valuation in three months. Prediction markets on Polymarket assigned a 33% probability to Cerebras achieving a $50-60 billion market cap on day one, with only 6% odds of falling below $50 billion, according to Benzinga.
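The multiples quoted above follow directly from the article’s own figures; a minimal sketch of that arithmetic:

```python
# Valuation-multiple arithmetic from the figures in this article.
ipo_valuation = 48.8e9  # IPO valuation, USD
revenue_2025 = 510e6    # 2025 revenue, USD
series_h = 23e9         # February 2026 Series H valuation, USD

# Trailing revenue multiple at the IPO price.
multiple = ipo_valuation / revenue_2025
print(f"Trailing revenue multiple: {multiple:.0f}x")  # ~96x

# Step-up over the last private round, three months earlier.
step_up = ipo_valuation / series_h
print(f"Step-up over Series H: {step_up:.1f}x")  # ~2.1x
```

Note the contrast with Nvidia’s 30x is not apples-to-apples: the Cerebras figure is trailing, the Nvidia figure forward.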
| Metric | Figure |
|---|---|
| Top 5 hyperscaler capex (2026E) | $660-690B |
| AI-tied portion of that capex | ~$450B |
| Cerebras revenue backlog (year-end 2025) | $24.6B |
M&A Pressure in Semiconductor Consolidation Cycle
Arm and SoftBank’s acquisition approach fits a broader pattern of semiconductor consolidation targeting AI-optimized architectures. The five largest hyperscalers are projected to spend $660-690 billion on capex in 2026, with approximately $450 billion (roughly two-thirds) directly tied to AI infrastructure, according to UBS. That spending creates demand for differentiated chip designs but also incentivizes vertical integration — Amazon, Google and Microsoft all develop custom silicon for internal workloads.
For Arm, which licenses chip designs rather than manufacturing hardware, acquiring Cerebras would have provided owned IP in wafer-scale computing and direct access to hyperscale customers deploying frontier AI models. SoftBank’s involvement suggests portfolio repositioning after missing earlier AI infrastructure investments. The Vision Fund backed semiconductor firms including Arm itself (which SoftBank took private in 2016 and re-listed in 2023) but lacks significant exposure to AI training accelerators.
Cerebras CEO Andrew Feldman told CNBC “it’s a good time to be in AI hardware,” a statement the IPO reception validates. The company raised its pricing range twice during the roadshow as institutional demand surged. Secondary market pricing 17% above the top end points to first-day pop expectations, though prediction market odds suggest uncertainty about whether the valuation holds once retail trading begins.
- Established semiconductor players face acquisition pressure to secure AI-differentiated IP before hyperscaler vertical integration closes strategic windows
- Wafer-scale architectures command premium valuations but require proof of manufacturing scalability and customer diversification beyond OpenAI dependency
- IPO market receptivity to AI hardware at 90x+ revenue multiples creates exit optionality that reduces acquisition leverage for strategic buyers
What to Watch
First-day trading on 14 May will test whether public market investors sustain the $48.8 billion valuation or demand discounts given customer concentration and unproven hyperscale diversification. Beyond the opening print:

- OpenAI contract execution timelines through 2028 determine revenue recognition cadence and whether Cerebras meets the growth expectations embedded in the IPO price.
- AWS partnership traction — measured by Bedrock deployment announcements and CS-3 utilization data — indicates whether cloud platform distribution can offset direct sales concentration risk.
- Nvidia’s response to wafer-scale competition, particularly in LLM training workloads where Cerebras claims speed advantages, may involve pricing pressure or architectural countermoves in the H200/B100 roadmap.
- Further M&A approaches from strategic buyers — including AMD, Intel or hyperscalers themselves — remain probable if the stock trades below IPO levels, creating entry points at lower valuations than the rejected Arm/SoftBank bid.