LLMs
When Everything Is Content, Nothing Is Signal
Nobel economist Joseph Stiglitz warns that AI-generated content flooding information markets threatens the quality signal investors and policymakers depend on for efficient decisions.
The Imperfection Arms Race: When Human Writing Needs to Look More Human
Students and writers are deliberately introducing errors and informal quirks into their work to evade AI detectors, inverting decades of writing instruction and raising questions about what authenticity means in an algorithmic age.
The Plausibility Problem: Why AI-Generated Code Looks Right but Fails
Large language models produce syntactically correct code at scale, but execution accuracy rates reveal a fundamental gap between surface-level correctness and functional reliability.
AI Can Unmask Pseudonymous Users for $4 Per Target, New Research Shows
Large language models achieve 68% accuracy in linking anonymous accounts across platforms, upending decades of assumptions about online privacy protection.
The Detection Arms Race: How Science Is Trying to Spot AI-Generated Text
New watermarking techniques and neural detectors promise to identify machine-written content, but adversarial methods and false positives threaten their reliability.
The AI Phishing Industrial Complex: How Cybercriminals Weaponized Automation at Scale
Deepfake CEOs, voice-cloned executives, and LLM-generated emails are driving a 3,000% surge in AI-powered fraud, with projected losses to businesses of $40 billion by 2027.