Without effective regulation of AI, society is facing a head-on collision with a driverless car | Peter Lewis
TL;DR
Peter Lewis, executive director of research firm Essential, compares unregulated AI development to a driverless car without brakes, seatbelts, or speed limits.
Key Points
- The framing draws on Bruce Holsinger's tech-lit novel 'Culpability', which examines liability and agency in the AI era through the lens of a lawyer and an ethicist.
- Central argument: society and policy are structurally lagging behind exponential AI acceleration; regulation is not merely absent, it was never seriously built in the first place.
- Lewis calls for binding regulatory frameworks before AI systems spread further into critical domains.
Nauti's Take
The driverless car analogy is catchy, but it understates a crucial nuance: car crashes leave physical evidence. AI harms – algorithmic discrimination, manipulated information environments, displaced livelihoods – are often invisible and hard to litigate.
That is precisely what makes regulation both so difficult and so urgent. Using Holsinger's novel as a hook is clever, but Lewis could have gone further: which regulatory models are actually working, and which are failing?
The piece diagnoses the problem accurately but stays vague on the cure, which is, unfortunately, symptomatic of the entire political debate.