Creating with Sora Safely
TL;DR
OpenAI built Sora 2 and the Sora app with safety as a foundational principle rather than an afterthought.
Key Points
- The dual challenge: a state-of-the-art video generation model combined with a new social creation platform for user-generated content.
- OpenAI cites 'concrete protections' as the core of its safety approach – though the announcement offers few specific technical details.
- The pairing of generative video AI with a social platform creates heightened risks around deepfakes, disinformation, and harmful content.
Nauti's Take
'Safety at the foundation' sounds reassuring – but that phrase now appears in virtually every OpenAI press release. The real question is: what concrete mechanisms kick in when a user attempts to generate a politician in a compromising scenario?
The announcement stays deliberately vague on that front. To be fair, addressing safety proactively before the platform goes viral beats damage control after the fact.
But PR promises and lived moderation practice are two very different things.
Sources
23.3.26