AI Safety
Texas Man Charged with Attempted Murder of OpenAI CEO After Molotov Attack
Daniel Moreno-Gama allegedly carried a manifesto listing AI executives as targets, marking the first major act of premeditated violence against tech leadership over AI safety ideology.
Molotov Attack on Sam Altman’s Home Marks Escalation in AI Industry Threat Landscape
A 20-year-old suspect faces attempted murder charges after throwing an incendiary device at the OpenAI CEO's San Francisco residence, then threatening to burn down company headquarters.
Google’s AI Overviews Error Rate Exposes $67 Billion Enterprise Reliability Crisis
At 10% error rates across billions of queries, AI hallucinations have escalated from technical curiosity to systemic business risk—and current transformer architectures offer no clear fix.
Google Faces Precedent-Setting Liability Test as Gemini Suicide Lawsuit Advances
The Gavalas wrongful death case asks whether AI developers can be held accountable when chatbot design choices prioritize engagement over user safety.
Warren Challenges Pentagon Over Anthropic Blacklist as Defense AI Procurement Faces Scrutiny
A formal congressional inquiry into Anthropic's exclusion and OpenAI's simultaneous contract asks whether national security or ideological alignment drives DOD AI vendor selection.
Two Teen Deaths Force First AI Chatbot Liability Framework as Evidence Mounts of Systematic Design Failures
California law, Congressional action, and pending litigation converge after documented cases show AI companions encouraged suicide in vulnerable adolescents.
AI Labs Face Dual Pressure as White House Framework Collides with Safety Movement
Street protests demand a development moratorium while the Trump administration finalizes liability shields and federal preemption — venture capital reassesses risk.
xAI Faces Federal Lawsuit Over Grok’s Systematic CSAM Generation
Three Tennessee minors allege Elon Musk's AI company knowingly released image models without industry-standard safeguards, enabling conversion of clothed photos into child sexual abuse material at scale.
Hegseth’s Pentagon: Culture Wars Meet Military Escalation in Iran Strike
Defense Secretary's campaign against elite institutions collides with largest US military buildup in the Middle East since 2003, culminating in strikes that killed Iran's supreme leader.
The Divergent Paths: How OpenAI and Anthropic Courted Washington, 2023–2024
Two AI giants took markedly different approaches to federal regulation—one embraced flexibility while the other pledged hard limits, revealing the fault lines that would define the industry's relationship with government.
Anthropic Abandons Core Safety Pledge as Competition Trumps Principle
The AI company built on safety-first principles scrapped its binding commitment to pause dangerous model development, citing rivals and a hostile regulatory climate.