The gen AI Kool-Aid tastes like eugenics

TL;DR

Director Valerie Veatch explored OpenAI's Sora text-to-video model in 2024 out of curiosity about the emerging AI art community.

Key Points

  • She quickly discovered the model routinely generated imagery steeped in racism and sexism with little apparent filtering.
  • More disturbing to her than the outputs themselves was the indifference fellow AI enthusiasts showed when confronted with them.
  • Veatch channelled her experience into a documentary examining the ideological blind spots underlying the generative AI boom.
  • The film argues that uncritical AI enthusiasm risks normalising and amplifying pre-existing societal biases at scale.

Nauti's Take

You do not need malicious intent to reproduce eugenic patterns in AI outputs – you just need a community that has collectively decided to look away. Calling biased outputs 'hallucinations' or edge cases is a rhetorical escape hatch: it lets builders off the hook while the harm scales.

Veatch's documentary does what tech journalism too rarely does: it names the social contract being struck when enthusiasm is treated as a moral shield. Anyone deploying generative AI in production should sit with that discomfort rather than scroll past it.
