AI-generated Iran images are widespread. How do we know what to believe? | Margaret Sullivan

TL;DR

AI-generated videos falsely show Iranian missiles hitting Tel Aviv airport and US soldiers held at gunpoint – both fake, both going massively viral.

Key Points

  • Authentic footage gets dismissed as AI fakery while fabrications pass as real – a dual credibility collapse.
  • Debunks rarely catch up to the original viral spread; the false impression sticks.
  • Media critic Margaret Sullivan outlines three rules for navigating war coverage saturated with synthetic imagery.

Nauti's Take

The real problem is not the AI technology itself but the platform economy: outrage and shock outperform corrections, so fakes are systematically amplified. Sullivan's three rules for audiences sound helpful but fix nothing structural – as long as algorithms reward engagement over accuracy, media literacy is a band-aid on a bullet wound.

What is missing is platform accountability: those who profit from viral disinformation must also bear responsibility for its consequences.

Context

AI tools have reduced the barrier to producing conflict disinformation to near zero: convincing war footage can now be fabricated in minutes by anyone. This is not an abstract media-literacy issue – false depictions of attacks or prisoners can generate political pressure, move markets, and trigger real-world escalation. The asymmetry is the core problem: a fake spreads in hours, while a fact-check takes days and reaches only a fraction of the original audience.

Sources