OpenAI Signals More Fundraising as Compute Bottleneck Overtakes Capital
CFO Sarah Friar's hint at additional capital raises beyond the $122 billion mega-round reveals that AI infrastructure demand now outpaces both investor willingness and physical supply chains.
OpenAI may raise additional capital beyond its $122 billion funding round closed in March 2026, CFO Sarah Friar told Bloomberg today, as the ChatGPT maker races to secure computing power amid what she described as a “vertical wall of demand.” The statement marks a shift in AI economics: the binding constraint is no longer investor appetite but semiconductor production capacity, power grid build-out, and data center construction timelines stretching into 2028.
OpenAI’s existing war chest — the largest private fundraising round ever, according to Friar — funds a company generating $2 billion in monthly revenue while spending at a rate that President Greg Brockman testified under oath will reach $50 billion on compute alone in 2026. That puts compute spending at more than double annualized revenue, a ratio that is structurally unsustainable without continued capital infusions or a dramatic shift in margin profile. Friar linked future fundraising decisions explicitly to “the difference between the computing power OpenAI requires and what it can currently afford,” according to Investing.com.
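The ratio arithmetic is simple to check. A minimal sketch, using the figures as reported (the helper function is purely illustrative, not anything from OpenAI's disclosures):

```python
# Back-of-envelope check on the spend-to-revenue figures reported above.
# Both arguments are in billions of dollars.

def spend_to_revenue(monthly_revenue_b: float, annual_spend_b: float) -> float:
    """Annual spend divided by annualized (12x monthly) revenue."""
    annualized_revenue_b = monthly_revenue_b * 12
    return annual_spend_b / annualized_revenue_b

ratio = spend_to_revenue(monthly_revenue_b=2.0, annual_spend_b=50.0)
print(f"compute spend is {ratio:.1f}x annualized revenue")  # ~2.1x
```

Note that comparing the $50 billion annual spend directly against a single month's $2 billion revenue would give a misleading 25:1 figure; annualizing both sides yields roughly 2:1, consistent with the industry-wide 2-3x figures cited later in this piece.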
The Chip Packaging Bottleneck
The constraint has shifted from capital to physical infrastructure. Microsoft CEO Satya Nadella highlighted power shortages in late 2025, but by early 2026 OpenAI executives were saying chips had become the binding constraint, according to Center for a New American Security analysis. The specific chokepoint: advanced chip packaging using TSMC’s CoWoS technology, which bonds high-bandwidth memory to AI accelerators.
TSMC is scaling CoWoS production from approximately 35,000 wafers per month in late 2024 to a projected 130,000 wafers per month by year-end 2026 — nearly a 4x increase, according to Oplexa AI Research. Demand still exceeds supply. NVIDIA has reportedly reserved the majority of TSMC’s latest CoWoS-L capacity for Blackwell GPUs, CNBC reported in April, leaving competitors scrambling for allocation through 2027.
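Taken at face value, the reported ramp implies the following growth figures (a sketch assuming a roughly 24-month ramp from late 2024 to year-end 2026; the exact ramp schedule is not public):

```python
# Rough arithmetic on TSMC's reported CoWoS ramp (wafers per month).
start_wpm = 35_000    # late 2024, per the figures cited above
target_wpm = 130_000  # projected year-end 2026

growth = target_wpm / start_wpm   # overall expansion factor, ~3.71x
# Implied compound monthly growth, assuming a 24-month ramp:
monthly = growth ** (1 / 24) - 1  # ~5.6% per month
print(f"{growth:.2f}x overall, ~{monthly:.1%} per month")
```

A sustained ~5.6% monthly compound expansion in advanced packaging capacity is the scale of build-out the "nearly 4x" headline figure implies.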
“Computing power is the scarcest resource in AI. Access to computing power determines who can scale.”
— Sarah Friar, CFO, OpenAI
Capital Abundance Meets Infrastructure Scarcity
Big Tech is deploying capital at unprecedented scale. Alphabet guided $175 billion to $185 billion in 2026 capex, nearly doubling 2025 spending. Amazon plans roughly $200 billion. Meta earmarked $115 billion to $135 billion. Microsoft is running at a $145 billion annualized rate, according to Tech Insider. Collectively, hyperscalers are committing over $650 billion this year.
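Summing the cited figures confirms the headline total (ranges as quoted; Amazon and Microsoft are treated here as point estimates, which is an assumption):

```python
# Sum of the 2026 hyperscaler capex figures cited above, in $B.
capex_2026 = {
    "Alphabet": (175, 185),   # guided range
    "Amazon": (200, 200),     # "roughly $200 billion"
    "Meta": (115, 135),       # earmarked range
    "Microsoft": (145, 145),  # annualized run rate
}

low = sum(lo for lo, _ in capex_2026.values())
high = sum(hi for _, hi in capex_2026.values())
print(f"combined 2026 capex: ${low}B-${high}B")  # $635B-$665B, centered near $650B
```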
Yet physical constraints are gating deployment. Only one-third of the expected 12 GW of data center capacity is under active construction, with power delays stretching up to five years in the worst grid regions, according to Tech Insider. Lead times for high-voltage transformers have stretched from 12–18 months to as long as 48 months. The gap between committed capital and energized infrastructure has never been wider.
- TSMC CoWoS packaging sold out through 2026; NVIDIA controls majority allocation
- Data center power grid interconnection queues extending 5+ years in key regions
- High-voltage transformer lead times stretched to 48 months from 18 months
- Only 4 GW of 12 GW planned capacity under active construction as of May 2026
The Economics of AI at Scale
Across the three AI companies where data is available, compute represents 54% to 62% of costs, and labs are spending 2-3x their revenue, according to Epoch AI. OpenAI’s $50 billion compute budget against a $24 billion annualized revenue run rate exemplifies this dynamic. The company remains unprofitable despite exponential user growth.
Friar previously stated that “more compute in these periods would have led to faster customer adoption and monetization,” per CNBC, framing compute access as the primary revenue governor. If true, the semiconductor bottleneck isn’t just slowing infrastructure rollout — it’s directly capping revenue growth for frontier labs unable to meet inference demand or train larger models on schedule.
The shift from power to chips as the primary bottleneck occurred between late 2025 and early 2026. Microsoft continued emphasizing energy constraints through Q4 2025, while OpenAI executives identified chip packaging as the binding constraint by Q1 2026. This transition reflects both progress in securing power infrastructure deals and the emergence of semiconductor supply as an even tighter chokepoint as demand accelerated beyond TSMC’s ability to expand capacity.
Venture Capital’s Obsolescence in Frontier AI
OpenAI’s funding round — structured with participation from SoftBank, ARK Invest, Tiger Global, and others — may represent the final generation of venture-backed frontier AI development. “We are in an era where the scale of AI Infrastructure investment has outgrown Venture Capital entirely,” Tom Loverro, general partner at IVP, told Tech Insider. “The only entities with enough capital to fund frontier AI development are the hyperscalers themselves.”
Microsoft, Amazon, Google, and Meta are the only organizations capable of deploying $100 billion-plus annually into AI infrastructure while simultaneously controlling data center power capacity, semiconductor supply agreements, and vertical integration through custom silicon programs. Their strategic investments in OpenAI, Anthropic, and other labs increasingly resemble supply chain lock-in rather than traditional equity bets.
What to Watch
OpenAI’s next fundraising timing will signal whether the compute shortage is easing or accelerating. If the company returns to market in Q3 2026 or earlier, it suggests infrastructure bottlenecks are worsening faster than the $122 billion can be deployed. Monitor TSMC’s CoWoS capacity expansions — any delays to the 130,000 wafer-per-month target will ripple through every frontier lab’s training roadmap. Track data center power interconnection approvals in Northern Virginia, Phoenix, and Columbus; these three regions account for the majority of hyperscaler GPU cluster construction. Finally, watch for M&A: if semiconductor packaging remains the constraint through 2027, expect vertical integration moves as hyperscalers acquire or build competing advanced packaging capacity to reduce TSMC dependency.
Sam Altman’s statement that “if there are no major breakthroughs in energy technology, artificial intelligence will not reach its next stage” may have already been superseded. The next stage now depends less on energy innovation than on whether TSMC, Samsung, and Intel can quadruple advanced packaging capacity before demand doubles again.