AI Technology · 7 min read

AI’s Speed Problem: When Innovation Outruns the Business Model

Technology companies face unprecedented pricing pressure as artificial intelligence capabilities become obsolete within months, creating a fundamental mismatch between development velocity and revenue capture.

Artificial intelligence models now advance so rapidly that companies struggle to price their products before the next generation renders them obsolete, a phenomenon economists call a ‘duration mismatch’ in technology markets. Obsolescence is accelerating across both AI hardware and software, particularly large language models: capabilities that once commanded premium prices now collapse in value within months rather than years.

The Price War No One Can Win

The economics of AI are buckling under their own momentum. From 2022 to 2024, major vendors cut the cost of processing 1 million tokens from roughly $12 to under $2 for comparable performance on some models, a decline that accelerated through 2025. In China, competition is even more aggressive: Alibaba cut Tongyi Qwen prices by up to 97%, while DeepSeek and others introduced discounts up to 75%, according to Sumatosoft.

Anthropic cut the price of Claude Opus 4.5 by 67%, reducing input-token pricing from $15 to $5 per million tokens, while Google set Gemini 3 Pro at $2 per million input tokens and $12 per million output tokens. Yet even these aggressive cuts may not be enough. Efficiency gains drive token costs down more than 70% per year, and merely holding revenue flat would require token demand to grow more than 225% per year, according to analysis from Man Group.

Token Economics Collapse
  • Price drop (2022–2024): −83%
  • Annual token cost decline: −70%
  • Demand growth needed to hold revenue flat: +225%
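The ‘standing still’ arithmetic behind these figures can be checked directly. A minimal sketch (the function name is mine, not Man Group's): if per-token prices fall by some fraction each year, demand must grow by the reciprocal of what remains, minus one, just to keep revenue constant.

```python
# Illustrative check of the "standing still" arithmetic: if per-token
# prices fall by `price_decline` per year, token demand must grow by
# 1 / (1 - price_decline) - 1 for revenue to stay flat.

def required_demand_growth(price_decline: float) -> float:
    """Fractional demand growth needed to keep revenue constant."""
    return 1.0 / (1.0 - price_decline) - 1.0

# A 70% annual price decline implies roughly +233% demand growth,
# in the same ballpark as the ~225% figure cited above.
growth = required_demand_growth(0.70)
print(f"{growth:.0%}")  # → 233%
```

The small gap between the back-of-envelope 233% and the cited 225% presumably reflects Man Group's more detailed assumptions.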

Beyond Activity Pricing

Companies are scrambling to escape the token trap. Pricing is evolving from activity-based (pay per use) to workflow-based (pay per task), outcome-based (pay per result), and per-agent models that price an ‘AI employee’, according to Foundation Capital. Getting paid on outcomes requires instrumentation, attribution, and results reliable enough to stake revenue on. The startups that can walk into a customer and quantify the value delivered will compound.
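The models above differ mainly in what the meter counts. A hypothetical sketch, with every rate and volume invented purely for illustration, shows how the same customer yields different revenue under each scheme:

```python
# Hypothetical comparison of the pricing models described above.
# All numbers (token volumes, task counts, rates) are made up.

def activity_revenue(tokens_m: float, price_per_m: float) -> float:
    """Pay per use: revenue scales with tokens processed."""
    return tokens_m * price_per_m

def workflow_revenue(tasks: int, price_per_task: float) -> float:
    """Pay per task: revenue scales with completed workflows."""
    return tasks * price_per_task

def outcome_revenue(value_created: float, share: float) -> float:
    """Pay per result: vendor keeps a share of measured customer value."""
    return value_created * share

# The same hypothetical customer under each model:
print(activity_revenue(tokens_m=500, price_per_m=2.0))    # $1,000
print(workflow_revenue(tasks=2000, price_per_task=1.5))   # $3,000
print(outcome_revenue(value_created=50_000, share=0.10))  # $5,000
```

The ordering illustrates the article's point: the further pricing moves from raw activity toward measured outcomes, the less revenue depends on token prices that fall 70% a year, but the more it depends on being able to measure the outcome at all.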

But the transition is treacherous. Companies tend to underestimate total cost of ownership by 500% to 1,000% once they move from pilots to production, reports Avenga. The infrastructure required to capture value at scale, including data governance, specialized talent, and regulatory compliance, often costs more than the AI itself.

Context

Traditional software companies benefit from near-zero marginal costs: once the code is written, each additional user costs almost nothing to serve. AI inverts this model: when usage scales, compute must scale at least linearly. Every inference request burns GPU cycles, creating operational costs that grow directly with adoption.
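The contrast can be sketched with a toy cost model; all figures are assumptions chosen only to show the shape of the two curves, not real infrastructure costs:

```python
# Toy contrast between the two cost structures described above:
# SaaS serving cost is near-flat in users (fixed infrastructure
# dominates), while inference cost grows linearly with requests.

def saas_cost(users: int, fixed: float = 10_000.0,
              per_user: float = 0.01) -> float:
    """Near-zero marginal cost: fixed infra plus a tiny per-user term."""
    return fixed + per_user * users

def inference_cost(requests: int, gpu_seconds_per_request: float = 0.5,
                   cost_per_gpu_second: float = 0.002) -> float:
    """Every request burns GPU time, so cost is linear in usage."""
    return requests * gpu_seconds_per_request * cost_per_gpu_second

# Scaling usage 10x: SaaS cost barely moves; inference cost scales 10x.
print(saas_cost(100_000) / saas_cost(10_000))            # ~1.09x
print(inference_cost(1_000_000) / inference_cost(100_000))  # 10.0x
```

The flat curve is what makes traditional software margins expand with growth; the linear one is why AI revenue growth does not automatically translate into operating leverage.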

The Investment Paradox

Capital continues flooding into AI infrastructure despite deteriorating unit economics. The consensus estimate among Wall Street analysts for hyperscaler 2026 capital spending is now $527 billion, up from $465 billion at the start of the third-quarter earnings season, according to Goldman Sachs. Yet concerns about a possible AI investment bubble are growing as capital spending on computing power and infrastructure far outpaces the revenue being generated by AI applications, warns Moody’s.

Anthropic started 2025 at a $1 billion run rate, hit $5 billion by August and $7 billion by October, with internal projections targeting $9 billion by year-end 2025 and as much as $26 billion in 2026. OpenAI is expected to hit $20 billion in annualized revenue this year, up from $3.7 billion the year before, a more than fivefold increase in 12 months. This velocity of revenue growth creates its own pricing pressure: each new model must justify the next round of infrastructure spending, even as the previous generation’s pricing collapses.

  • 2022, Token Pricing Peak: processing 1 million tokens costs roughly $12 for comparable models
  • 2024, Price Floor Breaks: the same processing drops to under $2, an 83% decline in two years
  • 2025, China Disruption: DeepSeek and Alibaba introduce 75–97% discounts, forcing global repricing
  • 2026, Outcome Pricing Emerges: companies shift from tokens to workflow- and results-based models

Macro Implications: Productivity Without Profits

The disconnect between technological capability and value capture has economists invoking historical precedents. In the 1980s, economist Robert Solow made an observation that anticipates today’s AI boom: ‘You can see the computer age everywhere but in the productivity statistics.’ Economist Erik Brynjolfsson noted in a Financial Times op-ed that the trend may already be reversing.

Analysis indicated a U.S. productivity jump of 2.7% last year, attributed to the transition from investing in AI to reaping its benefits. Yet a study published this month by the National Bureau of Economic Research, surveying 6,000 CEOs, chief financial officers, and other executives at firms in the U.S., U.K., Germany, and Australia, found that the vast majority see little impact from AI on their operations, according to Fortune.

The productivity paradox creates a second-order problem: if AI genuinely boosts output without generating proportional profits, it becomes what some call ‘ghost GDP’, economic activity that appears in statistics but doesn’t circulate through traditional channels. Meanwhile, ‘fierce competition’ among large language model builders keeps driving down prices and keeping AI tools readily accessible, ensuring adoption accelerates even as monetization struggles.

Key Takeaways
  • AI pricing has collapsed 70%+ annually, forcing a shift from per-token to outcome-based models
  • Companies underestimate total AI costs by 500-1000% when moving from pilots to production
  • Hyperscalers plan $527 billion in 2026 capex despite revenue-to-spending gap concerns
  • Productivity gains appear in macro data while most executives report minimal operational impact

What to Watch

The next 12-18 months will determine whether AI can escape the velocity trap. Three indicators matter most: whether companies successfully transition from token-based to outcome-based pricing at scale; whether enterprise adoption reaches the 50%+ threshold needed for macro productivity gains; and whether the gap between capability advancement and revenue capture narrows or widens.

J.P. Morgan wrote last week that some $2 trillion had been wiped off software market caps alone as a result of AI concerns. For months, the published view has been that nobody truly knows who the long-term winners and losers of this extraordinary technology will be. The companies that solve the pricing-velocity mismatch—building durable moats around value capture while capabilities continue accelerating—will define the next phase of the technology economy. Those that don’t will discover that being first to market means little when the market reprices every quarter.