When Everything Is Content, Nothing Is Signal
Nobel economist Joseph Stiglitz warns that AI-generated content flooding information markets threatens the quality signals that investors and policymakers depend on to make efficient decisions.
Joseph Stiglitz has elevated a niche AI concern to mainstream attention, warning in a Fortune interview that large language models scraping the internet for training data are undermining the very media institutions that produce high-quality information, creating what he calls "information externalities" in which garbage in becomes garbage out.
The Nobel laureate argues that AI models are built on a flawed bargain: they scrape journalism, research, and online chatter while eroding the institutions that produce quality knowledge, creating a world where market participants respond to viral synthetic content rather than to reality. Stiglitz puts it bluntly: “AI is basically stealing information from legacy media,” leaving those outlets without the resources or incentives to keep producing it. The economic arithmetic is stark: AI investment accounted for roughly a third of U.S. growth in 2025, yet the industry is cannibalizing the quality of its own inputs.
The Abundance Paradox
AI-generated content and agent activity are collapsing the internet’s signal-to-noise ratio: misinformation proliferates, human creators struggle to be heard, and AI systems themselves degrade as they learn from polluted data. Okoone research finds that synthetic content volume is overwhelming human output across videos, memes, blogs, and animations, with platforms like MidJourney and DALL·E churning out visual assets with no human creator involved.
This degrades more than search results. AI technologies can enhance market efficiency by improving the precision of price discovery and lowering transaction costs, with AI-driven trading systems processing vast amounts of data at unprecedented speed and executing trades with high accuracy, according to research published in the Journal of Policy Modeling. But that efficiency assumes information quality. By spring 2025 Deezer was reporting more than 20,000 new AI-only tracks a day, roughly 18% of all new content on the platform, while Spotify’s AI share remained under 1% but rising; at that volume, platforms face a curation crisis that mirrors the one hitting financial data.
Stiglitz co-authored an NBER working paper in October 2025 modeling how AI and digital platforms affect the information ecosystem, treating news producers as generators of public goods (truthful content) or public bads (untruthful content) who earn revenue from consumer visits.
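To see why that framing matters, here is a stylized toy calculation, not the paper’s actual model: if producer revenue scales with consumer visits, and an AI aggregator answers queries directly so that a growing share of visits never reaches producers, the share of producers who can still afford truthful reporting shrinks. Every number and function below is an illustrative assumption.

```python
# Toy illustration only; parameters and the break-even rule are invented for the example.

def truthful_share(diverted_share: float,
                   daily_visits: float = 1000.0,
                   revenue_per_visit: float = 0.01,
                   cost_truthful: float = 6.0,
                   cost_junk: float = 1.0) -> float:
    """Fraction of producers for whom truthful reporting still breaks even."""
    revenue = daily_visits * (1.0 - diverted_share) * revenue_per_visit
    if revenue >= cost_truthful:      # verified reporting pays for itself
        return 1.0
    if revenue <= cost_junk:          # not even junk breaks even
        return 0.0
    # In between, interpolate linearly: a purely illustrative assumption.
    return (revenue - cost_junk) / (cost_truthful - cost_junk)

for share in (0.0, 0.3, 0.6, 0.9):
    print(f"visits diverted to AI: {share:.0%} -> truthful producer share {truthful_share(share):.2f}")
```

Under these made-up numbers, a 30% diversion of visits changes nothing, but past the break-even point the share of truthful producers falls quickly, which is the incentive problem the working paper formalizes.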
Epistemic Fragmentation
Generative AI is creating a hyperreal flood of fabricated content that blurs truth and illusion, driving “epistemic collapse”: an erosion of society’s shared knowledge base in which AI’s ability to seamlessly simulate anything causes people to lose trust in all information, even genuine facts, according to analysis in Epistemic Security Studies. And as synthetic data becomes indistinguishable from real-world content, AI systems face a parallel collapse of their own, training on other models’ outputs rather than on verifiable physical-world referents and eroding the distinction between coherence and groundedness.
For markets, this is not an abstract philosophy problem. The efficient market hypothesis holds that prices fully reflect available information, a process that depends critically on traders transferring information into prices through their strategies. And when LLMs generate homogeneous text, they pose a knowledge collapse risk: exposure to largely identical information shrinks the range of accessible knowledge over time as underrepresented knowledge is forgotten, research from the University of Copenhagen shows.
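The mechanism can be caricatured in a few lines of code. The sketch below is a minimal toy illustration of recursive-training collapse, not the Copenhagen study’s method or any production pipeline: each new “model” is fit only to samples from the previous one, and a crude stand-in for underweighting rare content (dropping the least typical 10% on each side) is an assumption made purely for the example.

```python
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the distribution of real-world content
for generation in range(1, 9):
    # The next "model" is trained only on synthetic output of the previous one.
    corpus = sorted(random.gauss(mu, sigma) for _ in range(2000))
    # Illustrative assumption: rare content (the outer 10% on each side) is
    # underrepresented in the model's output and never re-enters the corpus.
    kept = corpus[200:-200]
    mu, sigma = statistics.fmean(kept), statistics.stdev(kept)
    print(f"generation {generation}: spread of reproducible content = {sigma:.3f}")
```

The spread shrinks every generation: nothing outside what the last model already reproduced can come back, which is the “underrepresented knowledge is forgotten” dynamic in miniature.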
“We have not only a problem in the labor market … but there’s another side of what I would call information externalities, which is basically garbage in, garbage out.”
– Joseph Stiglitz, Nobel Laureate in Economics
An estimated 1.1 billion pieces of content are published daily on social media, creating what Signal AI calls an environment where fake news travels faster than bad news, and a single false rumor about a CEO’s health or product recall can trigger automated sell-offs before communications teams respond. The speed advantage machines hold in processing information becomes a vulnerability when the information itself is polluted.
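A hypothetical sketch makes that vulnerability concrete. The rule below is not any real trading system; the keywords, weights, and threshold are invented. Its only point is that a pipeline acting on headline sentiment before anyone verifies the source treats a fabricated rumor exactly like a genuine disclosure.

```python
# Illustrative only: a naive headline-sentiment trigger with no source verification.
NEGATIVE_TERMS = {"recall": -0.6, "hospitalized": -0.8, "fraud": -0.9, "resigns": -0.4}

def rumor_score(headline: str) -> float:
    """Crude bag-of-words score; knows nothing about who published the headline."""
    words = set(headline.lower().split())
    return sum(weight for term, weight in NEGATIVE_TERMS.items() if term in words)

def automated_action(headline: str, sell_threshold: float = -0.5) -> str:
    return "SELL" if rumor_score(headline) <= sell_threshold else "HOLD"

# A synthetic rumor and a verified filing look identical to the rule.
print(automated_action("Unverified post claims CEO hospitalized after collapse"))  # SELL
print(automated_action("Quarterly filing confirms guidance unchanged"))            # HOLD
```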
Policy Decisions Without Ground Truth
Governance faces the same arithmetic. Disinformation has shifted from “craft” – labor-intensive troll farms and manual manipulation – to “industry,” enabling automated, personalized, persistent synthetic content at scale, risking erosion of the shared factual basis underpinning democratic deliberation and scientific discourse, according to a February 2026 paper surveying AI engineers and European Commission policymakers.
Stiglitz’s intervention matters because it reframes synthetic content pollution as a macroeconomic risk, not a moderation headache. Research exploiting unique data on machine versus human access to company filings found that increased machine access, particularly from cloud computing services, significantly improves informational efficiency and reduces price drift after information events. But that result assumes machines are reading real filings, not synthetic summaries generated from degraded data. The risk compounds along several channels:
- Training data contamination: synthetic content is scraped back into training sets, compounding errors across generations
- Institutional erosion: Media outlets lose revenue to AI scrapers, reducing capacity to produce verified information
- Market reflexivity: Investors react to AI-amplified narratives disconnected from underlying fundamentals
- Regulatory lag: Detection methods struggle with scale and sophistication of industrial-grade synthetic content
AI-driven fake news sites grew tenfold in one year according to NewsGuard, flooding the information sphere with low-cost algorithmically generated propaganda, while deepfake fraud attempts spiked across regions. In December 2025, the Macquarie Dictionary, Merriam-Webster, and American Dialect Society named “slop” – typically associated with unappetizing animal feed – as Word of the Year, capturing unease about digital clutter created by AI tools.
What to Watch
The EU’s Digital Services Act requires large platforms to assess systemic risks, and while the DSA doesn’t specifically target AI slop, provisions on transparency, content recommendation algorithms, and risk mitigation could apply if AI content significantly affects public discourse or enables fraud. The EU AI Act, set to fully apply in 2026, will mandate that generative AI outputs be clearly marked in machine-readable form, though enforcement at scale remains unproven.
Market participants should monitor three data points: the share of synthetic content in major training datasets (currently opaque), detection accuracy rates for AI-generated financial analysis (deteriorating as models improve), and the revenue performance of subscription-based verified news services versus ad-supported synthetic content farms. Stiglitz warns that any bubble breaking is “really bad in the short term for the macroeconomy”; an AI bubble collapsing while the technology is simultaneously displacing workers would mean correlated risks across labor, capital, and information markets.
News organizations are developing content provenance technologies and standards to tag real images and video at the source and to detect forgeries, but adoption lags the growth in content volume. The question is whether verification infrastructure can scale faster than synthetic pollution, or whether, as Stiglitz suggests, we are already past the tipping point at which signal recovery becomes economically unviable.
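The core idea behind tagging at the source can be sketched in a few lines. The example below is a generic illustration using a shared secret, not the C2PA/Content Credentials specification, which relies on certificate-based signatures and signed edit manifests; the key and content here are placeholders.

```python
import hashlib
import hmac

NEWSROOM_KEY = b"example-signing-key"  # placeholder; real systems use per-device certificates

def tag_at_source(content: bytes) -> str:
    """Issue a provenance tag for content captured by a trusted device."""
    return hmac.new(NEWSROOM_KEY, hashlib.sha256(content).digest(), hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check that the content still matches the tag issued at capture time."""
    return hmac.compare_digest(tag_at_source(content), tag)

original = b"raw video frame bytes"
tag = tag_at_source(original)
print(verify(original, tag))                        # True: untouched since capture
print(verify(b"synthetically altered frame", tag))  # False: provenance broken
```

Even in this toy form, the asymmetry is visible: verification is cheap per item, but only for content that was tagged at capture, which is precisely where adoption currently lags.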