Ten years ago, a fake image circulating online had a short life. A specialist would spot it, debunk it, and the correction would usually catch up to the original before too much damage was done. That is no longer how it works.
Since the Iran war began, hundreds of AI-generated videos and images depicting fictional events have spread across social media — fake missile barrages, fake captured soldiers, fake burning embassies — racking up tens of millions of views before debunkers could get to them. The fakes aren’t crude. Many are realistic enough that trained journalists are spending hours verifying what average users are accepting in seconds.
“Ten years ago, there’d be like one or two fake things out there; they’d get debunked pretty fast,” said Hany Farid, a digital forensics professor at UC Berkeley who has spent years tracking online manipulation. “Now you see hundreds of them, and they’re really realistic. It’s landing hard. People believe it and they’re amplifying it.”
What the Fakes Actually Look Like
The range of fabricated content circulating since the conflict began is striking in both volume and variety. Verified fakes identified by fact-checkers include: a video of Iranian missiles striking Tel Aviv that never happened, footage of panicked crowds fleeing a nonexistent airport attack, a clip purporting to show captured US special forces held at gunpoint by Iranian troops, and images of the US Embassy in Saudi Arabia in flames. A publication linked to the Iranian government posted a fake satellite image claiming to show damage to a US military base.
None of these events occurred. All were created with AI tools that are, at this point, widely accessible and easy to use.
The Detection Gap Is Widening Fast
What separates this moment from every previous wave of wartime misinformation is the speed at which AI image quality has improved. Tips that were useful even a few months ago — check for extra fingers, look for blurred edges, scrutinize background details — are increasingly obsolete. Current AI-generated content routinely clears all of those bars.
“What has changed in the last year or so is that generative AI has become much more widely accessible,” said Shayan Sardarizadeh, a senior journalist at BBC Verify who focuses on debunking war-related fakes. “It’s now possible to create very believable videos and images appearing to show a significant war incident that is hard to detect to the untrained or naked eye.”
Free AI detection tools exist, but they are far from reliable. Sardarizadeh has noted that X’s own AI chatbot, Grok, has actively made the problem worse — wrongly telling users that several AI-created images and videos from the conflict are real, rather than flagging them as fabrications.
Platform Response Has Been Limited
X announced that creators who spread undisclosed AI war fakes will be suspended from its revenue-sharing program for 90 days, with permanent suspension for repeat violations. Farid said he is skeptical the policy will have much effect — the overwhelming majority of users spreading fakes are not part of the creator payment program. TikTok and Meta did not respond to requests for comment on the spread of war-related AI fakes.
The underlying conditions driving the problem go beyond any single platform: partisan media fragmentation, algorithmic feeds that surface content from like-minded sources, and social media companies that have broadly pulled back from aggressive content moderation in recent years.
How to Protect Yourself
The most reliable advice from researchers is also the least glamorous: get news about active conflicts from established journalistic sources rather than social media feeds. The volume and velocity of AI-generated content during fast-moving events make real-time verification on a scroll nearly impossible.
For those who can’t avoid social media entirely, a few seconds of friction before sharing goes a long way. Has a known fact-checker addressed the clip? Do the visual details hold up to scrutiny — audio sync, lighting, background consistency? Are other users in the replies raising questions?
Farid’s assessment of where this is heading is not reassuring. “The content is more realistic, the volume is higher, the penetration is deeper,” he said. “This is our new reality. And it’s really messy.” Sardarizadeh puts it more plainly: detection is becoming extremely difficult, and the trajectory is toward it becoming harder still.
The tools to fool you are improving faster than the tools to protect you. Knowing that is, at minimum, a start.