AI War Footage Monetization Floods Social Media as Iran Conflict Escalates
Creators gaming platform revenue systems with synthetic military videos as detection tools fail and regulatory gaps widen.
AI-generated videos depicting fictional US-Israel military operations against Iran have accumulated over 21.9 million views across social media platforms, driven by creators exploiting monetization programs during the escalating regional conflict.
According to Euronews, videos and images claiming to show Iran gaining military advantage garnered more than 21.9 million views, while X’s head of product Nikita Bier identified dozens of accounts spreading fake war videos, including one operator in Pakistan managing 31 accounts posting AI war videos. The platform announced it would halt payments to users posting AI-generated Iran footage after the financial incentives became clear.
Platform Payments Drive Disinformation Production
X’s investigation revealed that in nearly all cases, users posting fake content were looking to game monetization by piggybacking on attention surrounding the war, and did not appear to have political motivations. The economics are straightforward: X’s creator revenue share program pays for views and engagement, giving creators a clear incentive to share incendiary posts.
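The incentive structure described above can be illustrated with a toy payout model. The revenue-per-mille rate below is purely hypothetical, not X’s actual rate, but it shows why a viral synthetic clip can out-earn ordinary posts by orders of magnitude:

```python
# Toy model of an engagement-based creator payout.
# HYPOTHETICAL_RPM_USD is an assumed rate for illustration only;
# it is not X's actual revenue-share figure.
HYPOTHETICAL_RPM_USD = 2.00  # assumed payout per 1,000 monetized views

def payout(views: int, rpm: float = HYPOTHETICAL_RPM_USD) -> float:
    """Estimated payout for a post with the given view count."""
    return views / 1000 * rpm

# An ordinary post vs. a viral AI "war footage" clip:
ordinary = payout(5_000)       # modest reach
viral = payout(2_000_000)      # conflict-driven virality
print(f"ordinary: ${ordinary:,.2f}  viral: ${viral:,.2f}")
```

Because payout scales linearly with views, the model makes the arbitrage obvious: content engineered for maximum attention, regardless of accuracy, is the profit-maximizing strategy until a platform severs that link.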
According to Full Fact, three of four clips in one viral video compilation showed evidence of being made with artificial intelligence. The tells were technical: warped door frames and body parts, unrealistic background displays, and unnatural reactions to explosions from people and objects in the room.
Detection Systems Overwhelmed
BBC Verify’s Shayan Sardarizadeh stated on March 4 that this war might have already broken the record for the highest number of AI-generated videos and images that have gone viral during a conflict. The volume has created an unprecedented verification challenge.
Researchers at the Atlantic Council’s Digital Forensic Research Lab documented more than 300 responses by X’s Grok AI bot to a single fake AI-generated video of a bombed airport, with the bot’s responses contradicting each other, sometimes minute to minute. DFRLab director Emerson Brooking noted that what we’re seeing is AI mediating the experience of warfare.
“The fog of war is quickly becoming the slop of war as AI synthetic content creates infinite noise in information ecosystems.”
— Ari Abelson, Co-founder, OpenOrigins
Political scientist Steven Feldstein explains that as people have become savvier about AI, disinformation creators have become more sophisticated, producing shallowfakes, which present shades of the truth by manipulating what is already there. One fake satellite image was based on real satellite imagery of a U.S. base but edited with Google AI to falsely depict damage, according to BBC Verify.
Regulatory Response Lags Technology
Platform responses have been narrow and reactive. X announced on March 3 that it would suspend creators from its Creator Revenue Sharing program for 90 days if they post AI-generated videos of armed conflicts without disclosure. The policy represents progress, but it targets only that single category of synthetic media, leaving other forms of AI-driven manipulation unaffected.
The EU AI Act includes transparency mandates requiring providers of generative AI to disclose AI-generated content, with most provisions enforceable starting in 2026 and supported by approximately €1 billion per year in EU funding. The United States has no comprehensive federal AI law, with regulation instead coming from a patchwork of state laws, federal agency guidance, and voluntary standards.
Research from Stanford HAI found that adding AI authorship labels changed people’s perceptions of whether the author was AI or human but did not significantly change the persuasiveness of the content itself. The research highlights that policy proposals requiring AI content labels may enhance transparency, but their inability to affect persuasiveness underscores the need for complementary safeguards.
Geopolitical Information Warfare
The disinformation serves strategic purposes beyond monetization. NewsGuard identified 18 war-related claims by Iran as false since February 28, compared to five false claims in the two weeks before the US-Israel attack. Iranian outlets are increasingly turning to AI-doctored images to spread false claims, with many images created outside Iran, according to NewsGuard.
The University of Toronto’s Citizen Lab documented a coordinated campaign by a network called PRISONBREAK that routinely used AI-generated imagery and video, mimicked real news outlets, and deployed deepfakes to stoke unrest and encourage the overthrow of the Iranian government. Political scientist Feldstein notes that the U.S. is relying on information transmission as a means to mobilize change on the ground in Iran, raising the stakes for how quickly information is digested and spurs action.
Media Literacy Crisis Deepens
The public’s ability to distinguish real from synthetic has collapsed. As AI fakes become more common, people begin to question everything, with a newly surfaced authentic video of an Israeli rocket hitting a busy Tehran street dismissed by many online as AI-generated regime propaganda, according to EDMO.
- Creator monetization programs directly subsidize AI war disinformation through engagement-based payments
- Detection tools and AI verification systems produce contradictory results, failing to scale with content volume
- Current regulations focus narrowly on disclosure requirements that research shows do not reduce persuasiveness
- State actors and coordinated networks exploit the chaos, deploying AI to shape narratives during kinetic operations
- Public skepticism now extends to authentic footage, creating strategic ambiguity that benefits all sides
People living in areas of armed conflict need to know where it’s safe to seek shelter, where drones are attacking, and whether they need to evacuate, making trustworthy information critical. Yet social media has been flooded with an unprecedented volume of misleading videos, with clips viewed by millions coming from video games or generated entirely with AI.
What to Watch
The EU AI Act’s transparency requirements take full effect in August 2026, but the Commission’s November 2025 digital omnibus proposed delaying rules on high-risk AI until 2027-2028 and extended the deadline for generative AI providers to comply with machine-readable marking requirements until February 2027. The resulting gap creates a regulatory vacuum during active conflict.
Monitor whether platforms expand monetization suspensions beyond narrow conflict video categories to address the broader synthetic media ecosystem. The European Parliamentary Research Service projected 8 million deepfakes would be shared in 2025, up from 500,000 in 2023 — a 16x increase that detection infrastructure cannot match.
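The growth figures quoted above can be sanity-checked with simple arithmetic:

```python
# Deepfake volume figures from the European Parliamentary Research
# Service projection cited above.
shared_2023 = 500_000       # deepfakes shared in 2023
projected_2025 = 8_000_000  # projected for 2025

growth_factor = projected_2025 / shared_2023
print(growth_factor)  # 16.0, matching the "16x increase" in the text
```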
The strategic question is whether democratic information environments can function when the cost of producing disinformation has fallen to zero while the cost of verification remains high. The lack of uniform rules across jurisdictions creates opportunities for malicious actors to engage in regulatory arbitrage, exploiting gaps by operating from countries with lax or no AI oversight. Until detection scales and labeling demonstrates actual efficacy, the advantage belongs to those flooding the zone.