Power Bottleneck: Half of US AI Data Centres Face Delays as Grid Constraints Reshape Global Competition
Supply chain friction and electrical infrastructure limits are stalling 30-50% of 2026 projects, pushing hyperscalers toward Middle East expansion and edge computing alternatives.
Between 30% and 50% of AI data centres planned for deployment in the US this year face delays or outright cancellation, according to TechSpot, as electrical equipment shortages and grid capacity constraints choke infrastructure expansion.
Of roughly 140 construction projects representing at least 16 gigawatts of capacity slated to come online before year-end, only around 5 GW is currently under construction. The gap reveals a crisis in physical execution: lead times for high-power transformers have stretched from 24-30 months pre-2020 to as long as five years today, per Tom’s Hardware. Microsoft suspended construction on part of its multi-billion-dollar AI data centre campus in Mount Pleasant, Wisconsin—a facility housing a supercomputer for OpenAI—while OpenAI itself paused its main UK project over unfavourable regulation and high energy costs.
- 30-50%: US AI data centre projects facing delay or cancellation
- 16 GW: capacity slated to come online before year-end
- ~5 GW: currently under construction
- Up to 5 years: lead times for high-power transformers
The defining constraint for AI scaling has shifted from computational efficiency to the physical availability of grid-scale power. Alphabet, Amazon, Meta, and Microsoft expect to spend more than $650 billion in 2026 to expand AI capacity, but that capital cannot be deployed if electrical equipment and grid connections remain unavailable. Grid connection requests now take four to seven years in key regions like Virginia, according to the RAND Corporation. Between May 2024 and June 2025, at least 36 US data centre projects were delayed or blocked, disrupting an estimated $162 billion in investment.
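The scale of the shortfall is simple arithmetic on the figures above. A quick illustrative check (the numbers come from the article; the calculation itself is only a sanity check, not an independent estimate):

```python
# Back-of-envelope check of the US capacity gap.
# Figures as reported in the article.

planned_gw = 16.0            # capacity slated to come online before year-end
under_construction_gw = 5.0  # capacity actually being built today

gap_gw = planned_gw - under_construction_gw
shortfall_pct = gap_gw / planned_gw * 100

print(f"Gap between announced and under-construction capacity: {gap_gw:.0f} GW")
print(f"Share of planned capacity not yet in construction: {shortfall_pct:.0f}%")
```

The 11 GW gap means roughly two-thirds of announced capacity has no construction activity behind it.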
Supply Chain Friction Compounds Grid Limitations
Electrical equipment shortages have emerged as a parallel constraint. High-voltage transformers, switchgear, and backup power systems form critical-path dependencies for new facilities, and global supply chains remain tight. “If one piece of your supply chain is delayed, then your whole project can’t deliver,” Andrew Likens, energy and infrastructure lead at Crusoe Energy, told Futurism. This cascades into broader delays: even projects with secured land, permits, and power purchase agreements cannot break ground without the physical components to connect to the grid.
“The American data center boom is hitting a formidable wall of logistical friction.”
— George Gianarikas, Canaccord Genuity analyst
Sightline Climate analysts attribute 25% of delays to projects that have not disclosed credible powering strategies, while community opposition and grid equipment shortages account for the remainder. The backlog creates a two-tier market: companies with secured capacity and existing infrastructure gain structural advantage, while new entrants or expansion-dependent players face 12-18 month delays that raise training costs and slow model iteration cycles.
Middle East Pivot Accelerates
The UAE and Saudi Arabia are moving aggressively to capture AI infrastructure investment sidelined by US bottlenecks. Microsoft and Abu Dhabi-based G42 announced a 200 MW expansion of data centre capacity in the UAE, with first-phase capacity expected online by year-end, per Data Center Knowledge. Stargate UAE’s first phase—part of a $30 billion-plus project—is on track for completion in Q3 2026, with construction proceeding at pace while US counterparts stall.
The Middle East’s appeal lies in state-backed speed: energy subsidies, streamlined permitting, and capital deployment timelines measured in quarters rather than years. OpenAI’s decision to pause its UK project while the UAE advances reflects a broader recalibration. “We continue to explore Stargate UK and will move forward when the right conditions such as regulation and the cost of energy enable long-term infrastructure investment,” an OpenAI spokesperson said. The implication is clear—regulatory friction and energy pricing are now site-selection criteria on par with talent density or network topology.
Edge Computing and Inference Optimization as Workarounds
Delays in hyperscale buildouts are accelerating investment in edge computing and inference-optimised architectures. If centralised training capacity remains constrained, the economic case for distributing inference workloads closer to end users strengthens. TSMC reported Q1 2026 net revenue of $35.67 billion—up 35.1% year-on-year—as demand for advanced packaging and chiplet designs surged, driven partly by edge deployment requirements. Latency-sensitive applications and regulatory constraints around data sovereignty make multi-tier deployment inevitable: global cloud for the most complex models and orchestration, regional facilities for governance needs, edge for real-time processing.
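The reported growth rate implies a prior-year baseline that can be derived directly from the two figures TSMC disclosed. An illustrative calculation (the implied prior-year number is derived from the article's stated revenue and growth rate, not an official TSMC figure):

```python
# Implied year-ago quarter from TSMC's reported figures.
# Reported: Q1 2026 net revenue of $35.67B, up 35.1% year-on-year.

q1_2026_revenue_b = 35.67
yoy_growth = 0.351

implied_q1_2025_b = q1_2026_revenue_b / (1 + yoy_growth)
print(f"Implied Q1 2025 net revenue: ${implied_q1_2025_b:.2f}B")
```

The implied baseline of roughly $26.4 billion shows how much of the revenue step-up occurred within a single year of the edge-driven packaging surge.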
- 30-50% of US AI data centres planned for 2026 face delays, representing an 11 GW gap between announced and under-construction capacity.
- Transformer lead times have stretched to as long as five years, roughly double pre-2020 levels, creating hard supply constraints independent of capital availability.
- Grid connection waits of 4-7 years in key regions turn power access into a primary competitive moat.
- Middle East projects are advancing on accelerated timelines, with UAE capacity online by Q3 2026 while US equivalents stall.
- Edge computing and inference optimisation gain investment as workarounds to hyperscale bottlenecks.
The infrastructure crisis reshapes competitive dynamics in AI. Companies with secured power capacity—whether through early grid agreements, vertical integration into energy production, or offshore expansion—gain 12-18 month advantages in model training timelines. Those dependent on new US buildouts face extended capex cycles and potential shifts in investor timelines for commercialisation. The $650 billion in planned hyperscaler spending for 2026 remains committed, but realisation is pushed into 2027-2028, elongating the path to revenue for AI-native startups reliant on rented compute.
What to Watch
Monitor grid connection approval timelines in Virginia, Texas, and Oregon—the three highest-demand regions for new capacity. Any regulatory changes to expedite interconnection queues could unlock bottlenecked projects within six months. Track Middle East project completion rates: if Stargate UAE and G42 expansions deliver on schedule, expect accelerated migration of training workloads offshore. Watch for shifts in hyperscaler capex guidance on Q2 2026 earnings calls—delays in physical deployment may force capital reallocation toward software optimisation, edge infrastructure, or international partnerships. Finally, observe TSMC and other advanced packaging providers: surging demand for chiplet designs and inference-optimised silicon signals structural adaptation to a world where centralised hyperscale capacity remains constrained.