How US Chip Export Controls Created a Two-Tier Global AI Market
Semiconductor restrictions have forced Chinese AI labs to pay 3-4x premiums for cutting-edge hardware, reshaping the competitive landscape and accelerating technological decoupling.
US export controls on advanced semiconductors have split the global AI industry into two parallel markets: Western firms accessing Nvidia’s latest chips at competitive prices, and Chinese developers paying $1 million per unit for smuggled hardware or building workarounds with inferior alternatives.
The divide stems from October 2024 restrictions that barred Nvidia from selling its H100, A100, and subsequent B-series chips to China without special licenses. These controls, administered by the Bureau of Industry and Security, target processors exceeding specific performance thresholds for AI training and inference. The immediate effect was a bifurcation of chip prices: Nvidia’s B300 GPU sells for $250,000-$300,000 in authorised markets but commands $1 million on Chinese grey markets, according to The Information. This 3-4x premium reflects not just scarcity but the structural costs of circumventing export restrictions through shell companies, third-country intermediaries, and complex supply chains designed to obscure end-users.
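The premium follows directly from the prices cited above. A quick back-of-envelope check, using the article's figures (the division itself is the only step added here):

```python
# Back-of-envelope check of the grey-market premium cited above.
# Prices are the article's figures for Nvidia's B300 GPU.
authorised_low, authorised_high = 250_000, 300_000  # USD, authorised markets
grey_market = 1_000_000                             # USD, Chinese grey market

premium_low = grey_market / authorised_high   # against the priciest authorised unit
premium_high = grey_market / authorised_low   # against the cheapest authorised unit
print(f"Premium range: {premium_low:.1f}x to {premium_high:.1f}x")
```

The result, roughly 3.3x to 4.0x, is the "3-4x premium" the article refers to throughout.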
The Semiconductor Chokepoint
Advanced AI chips represent a unique geopolitical pressure point. Taiwan Semiconductor Manufacturing Company (TSMC) fabricates over 90% of the world’s cutting-edge logic chips below 7 nanometers, including all Nvidia AI accelerators. The Netherlands’ ASML holds a monopoly on extreme ultraviolet lithography machines required for sub-7nm production. US firms design the architecture. This concentration gives Washington leverage: by restricting access at any point in the supply chain, it can throttle China’s access to frontier compute.
The controls work in layers. Direct sales are banned. Re-exports from allied countries require licenses. Even chips designed for the Chinese market—Nvidia’s downgraded H20 and L20 variants—face performance caps that limit their utility for large language model training. Semiconductor Industry Association data shows China imported $350 billion worth of chips in 2024, but advanced AI accelerators accounted for less than 2% of that volume post-restrictions, down from 8% in 2023.
The result is a forced technology diet. Chinese AI labs cannot simply buy more hardware to scale models. ByteDance, Baidu, and Alibaba have reduced training run sizes, extended timelines, and shifted resources toward algorithmic efficiency rather than brute-force scaling. This represents a fundamental constraint on the “scaling laws” that have driven AI progress since GPT-3: more compute, more data, better models. Western firms can still follow that path. Chinese competitors cannot.
Architectural Trade-offs and Efficiency Gains
Starved of cutting-edge hardware, Chinese developers have pursued two strategies: optimisation and substitution. The optimisation path produced DeepSeek V4, a 671-billion-parameter model that achieves GPT-4-class performance while using a fraction of the training compute. Its mixture-of-experts architecture activates only 37 billion parameters per token, reducing inference costs by 95% compared to dense models. Training cost: approximately $5.6 million, versus $100 million-plus for comparable Western models, per South China Morning Post analysis of disclosed compute budgets.
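The mechanism behind those savings is sparse activation: a router sends each token to only a few of the model's expert sub-networks, so most parameters sit idle on any given token. A toy sketch of top-k routing, with an illustrative expert count and k (not DeepSeek's actual configuration), plus the active-parameter fraction implied by the figures above:

```python
# Toy sketch of mixture-of-experts routing: only the top-k scored experts
# run per token, so active parameters are a fraction of the total.
# NUM_EXPERTS and TOP_K are illustrative, not DeepSeek V4's real values.
import random

NUM_EXPERTS = 16
TOP_K = 2  # experts activated per token

def route(token_scores, k=TOP_K):
    """Return indices of the k highest-scoring experts for one token."""
    return sorted(range(len(token_scores)), key=lambda i: -token_scores[i])[:k]

random.seed(0)
scores = [random.random() for _ in range(NUM_EXPERTS)]
active = route(scores)
print(f"Active experts for this token: {active} "
      f"({TOP_K}/{NUM_EXPERTS} = {TOP_K / NUM_EXPERTS:.0%})")

# DeepSeek V4 figures from the article: 37B of 671B parameters per token.
active_fraction = 37 / 671
print(f"Active parameter fraction: {active_fraction:.1%}")
```

At roughly 5.5% of parameters active per token, per-token compute falls accordingly, which is how a 671-billion-parameter model can be cheap to serve.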
“Chinese labs are now optimising for efficiency in ways that Western firms don’t have to. That creates a different innovation trajectory—one that could prove advantageous if compute costs rise or energy becomes a binding constraint.”
— Dylan Patel, chief analyst at SemiAnalysis, in The Information
This is not purely a silver lining. Efficiency gains cannot fully compensate for hardware deficits at the frontier. DeepSeek V4 required innovative architecture, but it still needed H800 chips—Nvidia’s export-compliant variant with reduced interconnect bandwidth. For tasks requiring massive parallelism (video generation, protein folding, large-scale reinforcement learning), bandwidth and memory capacity matter as much as raw FLOPs. Chinese models remain competitive in text generation and coding but lag in compute-intensive domains.
The substitution path involves domestic alternatives. Huawei’s Ascend 910C chip, fabricated by China’s Semiconductor Manufacturing International Corporation (SMIC) on a 7nm process, offers roughly 60% of an H100’s training performance. SMIC’s process yield rates remain below 50% for advanced nodes, driving up costs and limiting volume. Still, Huawei shipped an estimated 150,000 Ascend units in 2025, capturing 18% of China’s AI accelerator market, according to Reuters supply chain data. ByteDance and Baidu have both committed to Ascend-based infrastructure, reducing dependency on smuggled Nvidia hardware.
Supply Chain Fragmentation and Dual Ecosystems
The export controls have accelerated the emergence of parallel technology stacks. Western AI development centres on Nvidia CUDA, a software ecosystem two decades in the making with libraries optimised for every major AI framework. Chinese alternatives must replicate this from scratch. Huawei’s CANN (Compute Architecture for Neural Networks) supports mainstream frameworks but lacks the maturity and third-party tooling of CUDA. Developers report 20-30% longer development cycles when porting models to Ascend, even before accounting for performance gaps.
This fragmentation extends beyond hardware. China is building indigenous AI ecosystems—model hubs, dataset repositories, cloud platforms—that operate independently of Western infrastructure. Alibaba Cloud and Huawei Cloud offer AI services that never touch US-controlled chips or software. For certain applications (domestic search, recommendation systems, manufacturing optimisation), this suffices. For cutting-edge research requiring the largest models and fastest training cycles, the gap persists.
The policy debate centres on whether this gap widens or narrows over time. Proponents of export controls argue they buy time—slowing China’s military AI development by 2-3 years while the US scales up domestic chip production through CHIPS Act subsidies. Critics point to DeepSeek as evidence that resourceful engineering can route around hardware constraints. Both positions contain truth. Chinese AI capabilities have not collapsed, but they are measurably constrained relative to a counterfactual world with unrestricted chip access.
Economic and Strategic Implications
The two-tier market creates asymmetric cost structures that compound over time. A Chinese AI startup training a large language model faces 3-4x higher capital costs than a US competitor for equivalent performance. This disadvantage persists even with efficiency optimisations—DeepSeek’s $5.6 million training run still required months of compute on chips that Western labs would consider obsolete. Venture capital flows reflect this: Chinese AI funding fell 34% year-on-year in 2025 to $12.8 billion, while US AI investment grew 28% to $67 billion, per PitchBook data.
| Component | US Market | China Market |
|---|---|---|
| Compute hardware (per GPU) | $250K-300K | $1M (grey market) |
| Typical training run (frontier model) | $80M-120M | $240M-480M (equivalent scale) |
| Development cycle | 6-9 months | 9-15 months |
| Energy cost per MWh | $65-85 | $55-70 |
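The training-run row of the table is a straightforward scaling of the hardware premium, on the simplifying assumption that GPU cost dominates a training run's budget:

```python
# Rough derivation of the table's training-run row from the hardware premium,
# assuming run cost scales linearly with the per-GPU price gap (a
# simplification: energy, staff, and data costs are held aside).
us_run_low, us_run_high = 80, 120      # $M, frontier training run, US market
premium_low, premium_high = 3.0, 4.0   # grey-market hardware multiple

china_low = us_run_low * premium_low     # best case: cheap run, low premium
china_high = us_run_high * premium_high  # worst case: large run, high premium
print(f"Equivalent-scale China run: ${china_low:.0f}M-${china_high:.0f}M")
```

That yields the $240M-$480M range in the table; the real spread would be narrower to the extent non-hardware costs do not carry the premium.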
Yet the strategic calculus is not purely economic. Export controls aimed at military AI applications have instead reshaped the commercial landscape. Chinese military research institutions face the same constraints as civilian labs—they cannot easily access cutting-edge chips either. But they also do not need frontier commercial models for many defence applications. Facial recognition, drone swarms, and cyber operations run on older hardware. The controls may be most effective at handicapping China’s commercial AI sector, which competes directly with US firms, rather than its military applications, which were the stated rationale.
There is also a temporal dimension. Current export controls target 5nm and below nodes, but chip technology advances every 18-24 months. SMIC is targeting 5nm production by late 2027, which would partially nullify today’s restrictions. The US can tighten controls further—extending them to 7nm or restricting chipmaking equipment sales—but each escalation risks fragmenting the global semiconductor market entirely. Nvidia derives 20% of revenue from China. ASML sells billions worth of older-generation lithography tools to Chinese fabs. A full decoupling would impose costs on Western firms while accelerating China’s push for self-sufficiency.
The Acceleration Paradox
The central irony of US chip policy is that it may achieve its near-term goal—slowing Chinese AI development—while undermining its long-term objective of maintaining technological leadership. By forcing China to innovate under constraint, export controls have catalysed efficiency breakthroughs (DeepSeek), domestic hardware alternatives (Huawei Ascend), and parallel ecosystems that reduce dependency on US technology. A decade from now, China may possess an entirely indigenous AI supply chain, insulated from US leverage but also disconnected from Western standards, safety research, and collaborative governance.
- US export controls have created a 3-4x price premium for advanced AI chips in China, forcing labs to pay $1M per GPU versus $250K-$300K in authorised markets.
- Chinese developers have responded with efficiency optimisations (DeepSeek V4 trained for $5.6M vs. $100M+ for Western equivalents) and domestic alternatives (Huawei Ascend capturing 18% market share).
- The two-tier market imposes measurable disadvantages: Chinese AI funding fell 34% in 2025 while US investment grew 28%, and development cycles run roughly 50% longer on restricted hardware (9-15 months versus 6-9).
- Export controls may slow Chinese military AI by 2-3 years but have accelerated technological decoupling, creating parallel innovation ecosystems that reduce long-term US leverage.
The semiconductor chokepoint gives Washington a finite window of influence. TSMC’s Taiwan monopoly, ASML’s lithography dominance, and Nvidia’s CUDA ecosystem create dependencies that China cannot resolve overnight. But dependencies erode. Every smuggled chip funds grey-market networks. Every efficiency breakthrough reduces reliance on raw compute. Every Huawei Ascend deployment builds an alternative stack. Export controls are a depleting asset—they work best when the target has no substitutes and no time to build them.
The question is whether the US uses this window to maintain an insurmountable lead (by out-innovating China in AI capabilities) or merely to delay the inevitable (a world of fragmented technology spheres, each advancing independently). Current policy assumes the former is achievable. The emergence of models like DeepSeek V4 suggests the latter may be more realistic. China will not match US AI capabilities under export controls, but it may not need to. A separate, optimised-for-constraint technology stack could prove competitive in domestic markets and the Global South, where cost and energy efficiency matter more than absolute frontier performance.
The two-tier AI market is not a stable equilibrium. It is a transitional state between a unified global semiconductor industry and a bifurcated one. The longer export controls remain in place, the more that transition solidifies. Chinese firms are not waiting for restrictions to lift—they are building around them. US policy has created a constraint that will shape AI development for the next decade, but whether that constraint achieves its strategic aims or merely redraws the map of global technology leadership remains an open question. The premiums Chinese labs pay today for smuggled chips are the transitional costs of a more fundamental shift: the decoupling of AI innovation itself into parallel, incompatible tracks that will shape global AI architecture, investment flows, and the distribution of technological power for the foreseeable future.
Related Coverage
- For current pricing dynamics and grey market activity, see Nvidia B300s Hit $1M in China as Export Controls Reshape AI Economics.
- On China’s efficiency breakthrough and its implications for US strategy, read DeepSeek V4 Exposes the Failure of U.S. Chip Export Strategy.
- For context on Taiwan’s role in semiconductor geopolitics, see Beijing Designates Taiwan ‘Biggest Risk’ in US Ties as TSMC Dominance Amplifies Stakes.
- On the broader fragmentation of AI chip supply chains, read Cerebras targets $4 billion IPO as Nvidia’s AI chip monopoly fragments and Google Taps Marvell for Custom AI Chips in Strategic Pivot from Broadcom Monopoly.