
A photo of Iran’s bombed schoolgirl graveyard went around the world. Was it real, or AI?

TL;DR

A photo of a cemetery in Minab, Iran – allegedly showing graves dug for over 100 schoolgirls killed in the US-Israeli war – went viral and sparked global outrage.

Key Points

  • The question arose immediately: real or AI-generated? Fact-checkers and users scrutinized the image's details, metadata, and context.
  • Simultaneously, AI chatbots including Gemini and Grok delivered demonstrably false responses about Iran war coverage – from invented casualty figures to hallucinated sources.
  • According to The Guardian, the image turned out to be authentic, but the trust damage caused by rampant AI disinformation had already been done.
  • The case illustrates how AI slop contaminates genuine war photography and turns verification into a Sisyphean task.

Nauti's Take

The truly disturbing part is not whether the image is real – it is – but that we now have to doubt it reflexively. AI slop has reversed the burden of proof: genuine photos must now defend themselves against the suspicion of being fakes.

That Gemini and Grok specifically spread misinformation in this context is more than embarrassing – it is dangerous. Anyone deploying AI as a news source or fact-checker should treat this case as a serious warning.

War reporting has always been a battleground of narratives; AI now gives bad actors mass-production capacity.

Context

When AI-generated images and AI hallucinations flood war coverage simultaneously, trust in visual evidence collapses across the board – including for genuine photographs. This is not a fringe technical issue: it undermines the documentation of war crimes, hampers humanitarian responses, and hands ready-made excuses to those who wish to deny atrocities. Grok and Gemini actively contributed to the disinformation landscape with demonstrably false answers – a systemic failure, not an isolated incident.

Sources