AI-generated Iran images are widespread. How do we know what to believe? | Margaret Sullivan
TL;DR
AI-generated videos falsely show Iranian missiles hitting Tel Aviv airport and US soldiers held at gunpoint. Both are fabricated, and both went massively viral.
Key Points
- Authentic footage gets dismissed as AI fakery while fabrications pass as real – a dual credibility collapse.
- Debunkings rarely catch up with the original viral spread; the false impression sticks.
- Media critic Margaret Sullivan outlines three rules for navigating war coverage saturated with synthetic imagery.
Nauti's Take
The real problem is not the AI technology itself but the platform economy: outrage and shock perform better than corrections, so fakes are systematically amplified. Sullivan's three rules for audiences sound helpful but fix nothing structural. As long as algorithms reward engagement over accuracy, media literacy is a band-aid on a bullet wound.
What is missing is platform accountability: those who profit from viral disinformation must also bear responsibility for its consequences.