OpenAI Robotics Lead Caitlin Kalinowski Resigns, Citing Surveillance and Lethal Autonomy Concerns
TL;DR
Caitlin Kalinowski, OpenAI's head of robotics, has resigned, citing ethical concerns over surveillance and lethal autonomy.
Key Points
- In a public post on X, she made clear that she is unwilling to work on technologies that could be used for autonomous weapons systems or mass surveillance.
- Her departure is one of the most high-profile ethics-driven resignations at OpenAI since several safety researchers left in 2024.
- OpenAI has recently opened the door to military and government partnerships – a strategic shift that appears to be causing significant internal friction.
Nauti's Take
When the head of robotics walks out because she refuses to let her work end up in drones or surveillance systems, that should be a loud wake-up call. OpenAI continues to present itself publicly as a responsible AI lab – but internally it looks increasingly like a classic defense contractor using safety rhetoric as a marketing tool.
Kalinowski clearly saw through it and made the only consistent choice available to her. The real question is: how many more will follow before OpenAI drops the ethical facade entirely?
Context
Kalinowski's resignation is not an isolated incident but part of a pattern: since OpenAI broadened its dual-use strategy, the company has been steadily losing employees who treat safety and ethics as foundational principles. The robotics division is particularly sensitive – autonomous physical systems controlled by AI represent the next escalation in the debate over lethal autonomy. Losing leadership there also means losing ethical oversight over what gets built.