Ask HN: The new wave of AI agent sandboxes?
TL;DR
Dozens of new sandboxing solutions for AI agents have launched in recent months – spanning microVMs, WASM runtimes, browser isolation, and hardened tool containers.
Key Points
- The thread tallies over 35 active projects from the past year alone: E2B, Modal, Daytona, Capsule, DenoSandbox, AgentFence, and many more.
- The core question in the thread: do these solutions actually hold up in production, or are there still major tradeoffs around security, cost, and performance?
- No clear market leader has emerged – the space is fragmented and technically diverse.
Nauti's Take
35+ projects in a year sounds like momentum – but it's also a warning sign: when nobody has truly solved the problem, solutions proliferate instead of consolidating. WASM and microVMs are promising approaches, but the gap between 'works in a demo' and 'holds up under real agent traffic' is often enormous.
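To make that demo-vs-production gap concrete: the naive baseline these projects improve on is running agent code in a plain OS process with resource limits. A minimal sketch (not from the thread, and not any listed product's API — `run_untrusted` is a hypothetical helper, POSIX/Linux only) shows how far rlimits get you, and why microVMs or WASM exist for everything they don't cover, like filesystem and network isolation:

```python
import resource
import subprocess
import sys

def run_untrusted(code: str, cpu_seconds: int = 2, mem_bytes: int = 512 * 1024**2):
    """Run a Python snippet in a child process with hard CPU and memory caps.

    This caps runaway loops and allocations, but the child still shares the
    host's filesystem, network, and kernel attack surface -- exactly the gaps
    that microVM- and WASM-based sandboxes are built to close.
    """
    def apply_limits():
        # Kernel enforces these; exceeding the CPU cap kills the child.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=apply_limits,   # POSIX only; runs in the child before exec
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 5,   # wall-clock backstop on top of the CPU cap
    )

# A busy loop blows through the CPU cap and the kernel terminates the child,
# so its return code is nonzero; well-behaved code completes normally.
result = run_untrusted("while True: pass")
```

That's roughly the "works in a demo" tier. Real agent traffic also needs syscall filtering, network egress control, and snapshot/restore — which is where the microVM and WASM approaches in the thread come in.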
Shipping an agent to production today without sandbox testing is playing Russian roulette with your infrastructure budget – and your security posture. The race for a standard is wide open, but E2B and Modal currently have the strongest community backing.