Trump Frames Iran AI Disinformation as National Security Threat Amid War
President accuses Tehran of weaponizing artificial intelligence for wartime propaganda, elevating influence operations to kinetic threat status even as the U.S. military deploys the same technology for targeting.
President Donald Trump on March 15 accused Iran of deploying artificial intelligence as a disinformation weapon to fabricate military victories, marking the first time a sitting U.S. president has framed adversary AI capabilities as a national security priority equivalent to conventional military threats. The claim positions AI-enabled influence operations at the center of U.S. threat doctrine during an active conflict that has already killed over 1,300 people in Iran since airstrikes began February 28.
Trump told reporters aboard Air Force One that Iran is using artificial intelligence as a “Disinformation weapon” to misrepresent its wartime successes and support, citing three specific examples: fake imagery of kamikaze boats, fabricated attacks on the USS Abraham Lincoln, and AI-generated images showing “250,000” Iranians at a rally supporting Supreme Leader Mojtaba Khamenei. “The fact is, Iran is being decimated, and the only battles they ‘win’ are those that they create through AI,” Trump wrote on Truth Social.
The Mirror War: Both Sides Deploy AI
The accusations arrive amid confirmation that U.S. forces are using identical technology for the opposite purpose. Admiral Brad Cooper, head of U.S. Central Command, acknowledged Wednesday that “warfighters are leveraging a variety of advanced AI tools” that “help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react”.
Two people with knowledge of the matter confirmed the military is using AI systems from data analytics company Palantir to identify potential targets in the ongoing attacks, with Palantir’s software relying in part on Anthropic’s Claude AI systems, according to NBC News. The U.S. military “struck a blistering 1,000 targets in the first 24 hours of its attack on Iran” thanks in part to its use of artificial intelligence, per the Washington Post.
This creates an unprecedented paradox: the U.S. is simultaneously accusing Iran of using AI for deception while deploying the same technology class for lethal targeting — a collision that elevates AI from a theoretical national security concern to an active battlefield variable with civilian consequences.
Iran has a documented history with AI-enabled influence operations. In September 2024, U.S. intelligence officials stated Iran used artificial intelligence to create fake news articles in English and Spanish about the war in Gaza to promote division, and Iranian group APT42, believed to be linked to the Islamic Revolutionary Guard Corps, successfully hacked Trump’s presidential campaign in 2024. The Treasury Department sanctioned Iranian entities in December 2024 for AI-generated election disinformation.
Doctrine Shift: Influence Operations as Kinetic Threat
Trump’s framing represents a doctrinal escalation. He suggested media outlets that reported Iranian claims should face “Charges for TREASON for the dissemination of false information”, collapsing the distinction between adversary propaganda and domestic press coverage. FCC Chairman Brendan Carr reinforced the message Saturday, threatening to pull the licenses of broadcasters who did not “correct course” on their coverage.
The administration’s position effectively treats AI-generated content as a weapon system rather than an information operation, with implications for how the U.S. categorizes and responds to digital threats. This matters for sanctions policy, tech platform regulation, and the legal authorities governing military response to non-kinetic attacks.
The Iran war is being fought on a hybrid digital-physical battlefield, blending old-school deception tactics with cutting-edge AI technology, according to NPR. The U.S. military’s first move in the Iran war was in cyberspace, with “coordinated space and cyber operations” that “effectively disrupted communications and sensor networks”.
“In current geopolitical context, AI has allowed propaganda at scale, and it’s really hard for individuals to know what information is real.”
— Dr. Jordan Howell, University of South Florida
The Anthropic Flashpoint
The AI-war nexus crystallized last week when the Defense Department labeled Anthropic a threat to national security after the company refused to remove safeguards preventing its Claude system from being used for mass surveillance or fully autonomous weapons, a designation that could bar the company’s products from military use.
The timing is stark: Since 2024, the Maven system has been supported by Anthropic’s Claude as part of a $200 million contract with the Department of Defense. Just one day before the U.S.-Israeli offensive began on February 28, the U.S. government sidelined Anthropic amid a disagreement over AI guardrails, according to Nature.
Heidy Khlaaf, chief scientist at the AI Now Institute, said she was concerned that reliance on AI to rapidly process information for life-or-death decisions could be “a cover for indiscriminate targeting when you consider how inaccurate these models are”. The concern gained traction after preliminary information showed a U.S. munition was likely responsible for a strike on an Iranian elementary school that killed over 170 children, with outdated intelligence potentially to blame for selecting the target.
Election Security Meets Wartime Doctrine
Trump’s AI accusations connect two policy domains previously treated as separate: election security and military doctrine. In December 2024, Treasury sanctioned Iranian and Russian entities that “aimed to stoke socio-political tensions and influence the U.S. electorate during the 2024 election,” with the Russian group using “generative AI tools to quickly create disinformation”.
During the 2024 election cycle, Iranian actors hacked Trump’s presidential campaign via spear-phishing, establishing a precedent that Trump now invokes as partial justification for military action. On February 28, as bombs began falling, Trump posted to Truth Social that “Iran tried to interfere in 2020, 2024 elections to stop Trump, and now faces renewed war”.
The fusion is deliberate. By elevating AI disinformation from a sanctions-eligible offense to a wartime threat, the administration creates legal and policy space for more aggressive responses to influence operations — whether from adversary states or, critics warn, domestic actors.
- Doctrine: AI influence operations now framed as national security threats equivalent to kinetic capabilities, not merely sanctions-eligible offenses
- Tech regulation: Pressure mounting on platforms to choose between government contracts and ethical AI guardrails, with Anthropic as test case
- Election security: Trump administration linking foreign AI campaigns to military justification, blurring intelligence assessment and war rationale
- Allied coordination: No evidence of parallel AI-threat framing from NATO allies, raising coordination questions ahead of 2026 midterms
The AI Arms Race Goes Live
More than 60 Iranian-aligned cyber groups mobilized within hours of the February 28 escalation, as AI tools sharply lowered the barrier to targeting exposed critical infrastructure, according to cybersecurity firm CloudSEK. “The activation of 60+ hacktivist groups is a materially different threat than three years ago” because “AI breaks the knowledge barrier by making it irrelevant for internet-exposed ICS devices with default or absent credentials”.
Meanwhile, China’s Defense Ministry warned that “the unrestricted application of AI by the military, using AI as a tool to violate the sovereignty of other nations… risks technological runaway”, per Al Jazeera. Academics and legal experts met in Geneva this week to discuss lethal autonomous weapons systems and AI procurement, part of long-running efforts to arrive at international agreement on ethical uses of AI in warfare.
“Rapid technological development is outpacing slow international discussions,” says political scientist Michael Horowitz. “The current failure to regulate AI warfare seems to suggest potential proliferation is imminent”, according to Nature.
What to Watch
Congressional oversight intensifies: Over 120 Democratic lawmakers are demanding answers on AI’s role in target selection after the school bombing, with specific questions about human verification policies for AI-generated targets.
Anthropic litigation proceeds: The company’s lawsuit challenging its national security designation will test whether private firms can resist government demands for unrestricted AI access during wartime.
Sanctions expansion likely: Treasury precedent from December 2024 suggests new designations targeting entities facilitating Iranian AI capabilities, potentially including Chinese firms providing surveillance technology.
Platform regulation pressure: Tech companies face choice between maintaining ethical AI guardrails and preserving government relationships, with spillover effects for domestic surveillance debates.
Midterm security implications: Trump’s equation of Iranian election interference with a military threat establishes a framework for linking 2026 election security to national security emergency authorities — a scenario opposition researchers are monitoring closely.
The doctrine shift is complete: AI is no longer a future threat to be managed through export controls and sanctions. It is a present weapon system, deployed by both sides, with civilian casualties mounting and no international consensus on rules of engagement.