
Don’t blame AI for the Iran school bombing | Letters

TL;DR

A Guardian letter criticises how the term 'AI error' shifts moral responsibility from humans to systems.

Key Points

  • Background: After an attack on an Iranian school, 'the AI' was initially blamed for the mistake, echoing how phrases like 'collateral damage' once obscured accountability.
  • The authors stress that humans design, authorise, and execute these decisions, however complex the chain of analysis and command.
  • Linguistic obfuscation is not a technical error but an ethical and political choice.

Nauti's Take

The letter hits a nerve the tech industry would rather not touch: AI is not an autonomous moral agent, and that fact is conveniently forgotten when things go wrong. 'The AI made an error' sounds like bad luck with a machine; 'a human bombed a school' sounds like what it actually is.

This linguistic shift is not accidental; it is useful to everyone who wants to avoid accountability. Until the AI community and regulators enforce binding accountability frameworks, this rhetoric will keep spreading in step with the number of systems deployed.

Context

The debate exposes a structural problem: the deeper AI is embedded in military and security decision-making, the easier it becomes to diffuse responsibility. Terms like 'AI error' function as linguistic shields for institutions and individuals. When accountability disappears, so does the incentive to improve systems or prevent bad decisions.

This is not an academic concern; the consequences are measured in civilian casualties.
