The gen AI Kool-Aid tastes like eugenics
TL;DR
Director Valerie Veatch began experimenting with OpenAI's Sora text-to-video model in 2024 out of curiosity about the emerging AI art community. She quickly found that the model routinely generated imagery steeped in racism and sexism, with little apparent filtering. More disturbing to her was the indifference fellow AI enthusiasts showed when confronted with these outputs. Veatch channelled the experience into a documentary examining the ideological blind spots underlying the generative AI boom.
Nauti's Take
You do not need malicious intent to reproduce eugenic patterns in AI outputs – you just need a community that has collectively decided to look away. Calling biased outputs 'hallucinations' or edge cases is a rhetorical escape hatch that lets builders off the hook while the harm scales.
Veatch's documentary does what tech journalism too rarely does: it names the social contract being struck when enthusiasm is treated as a moral shield. Anyone deploying generative AI in production should sit with that discomfort rather than scroll past it.
Briefing
Generative AI models trained on unfiltered web-scale data do not merely reflect societal bias – they automate and amplify it at unprecedented speed and volume. When the enthusiast community dismisses such criticism, it creates a culture of wilful ignorance that blocks the structural fixes these systems urgently need. For industries already deploying AI-generated content in media, advertising, and education, this is not a theoretical concern but a concrete liability.