
A photo of Iran’s bombed schoolgirl graveyard went around the world. Was it real, or AI?

TL;DR

A photo of a cemetery in Minab, Iran – allegedly showing graves dug for over 100 schoolgirls killed in the US-Israeli war – went viral and sparked global outrage.

Key Points

  • The question immediately arose: real or AI-generated? Fact-checkers and users scrutinized the image's details, metadata, and context.
  • Simultaneously, AI chatbots including Gemini and Grok delivered demonstrably false responses about Iran war coverage – from invented casualty figures to hallucinated sources.
  • According to The Guardian, the image turned out to be authentic – but by then, rampant AI disinformation had already done its damage to public trust.
  • The case illustrates how AI slop contaminates genuine war photography and turns verification into a Sisyphean task.

Nauti's Take

The truly disturbing news here is not whether the image is real – it is – but that we now reflexively have to doubt it. AI slop has reversed the burden of proof: genuine photos must now defend themselves against the suspicion of being fakes.

That Gemini and Grok spread misinformation in this specific context is more than embarrassing – it is dangerous. Anyone deploying AI as a news source or fact-checker should treat this case as a serious warning.

War reporting has always been a battleground of narratives; AI now gives bad actors mass-production capacity.

Sources