
Ask HN: The new wave of AI agent sandboxes?

TL;DR

Dozens of new sandboxing solutions for AI agents have launched in recent months – spanning microVMs, WASM runtimes, browser isolation, and hardened tool containers.

Key Points

  • The HN community counts over 35 active projects from the past year alone: E2B, Modal, Daytona, Capsule, DenoSandbox, AgentFence, and many more.
  • The core question in the thread: do these solutions actually hold up in production, or are there still major tradeoffs around security, cost, and performance?
  • No clear market leader has emerged – the space is fragmented and technically diverse.

Nauti's Take

35+ projects in a year sounds like momentum – but it's also a warning sign: when nobody has truly solved the problem, solutions proliferate instead of consolidating. WASM and microVMs are promising approaches, but the gap between 'works in a demo' and 'holds up under real agent traffic' is often enormous.

Shipping an agent to production today without sandbox testing is playing Russian roulette with your infrastructure budget – and your security posture. The race for a standard is wide open, but E2B and Modal currently have the strongest community backing.

Context

AI agents increasingly execute real code, access filesystems, and control browsers – without proper isolation, that's a serious security risk. The explosion of sandbox projects signals that the industry has recognized the problem, but no consolidated answer has emerged yet. For teams shipping agents to production, the sandbox choice is now a critical architectural decision – comparable to the container question a decade ago.
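To make the isolation gap concrete: a naive approach is to run agent-generated code in a child process with OS resource limits. This is a minimal, hypothetical Python sketch (the function name and limits are illustrative, not from any project in the thread) – and its shortcomings show exactly what microVMs and WASM runtimes exist to fix:

```python
import resource
import subprocess
import sys

def run_limited(code: str, cpu_seconds: int = 2,
                mem_bytes: int = 256 * 1024 * 1024) -> str:
    """Run untrusted Python code in a child process with CPU/memory caps.

    Note: this caps resource abuse only. The child still shares the
    host's filesystem and network -- precisely the gap that dedicated
    sandboxes (microVMs, WASM runtimes, hardened containers) close.
    """
    def set_limits():
        # Applied in the child just before exec (POSIX only).
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    result = subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=set_limits,
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 1,  # wall-clock backstop for the CPU cap
    )
    return result.stdout

print(run_limited("print(sum(range(10)))"))  # prints "45"
```

A CPU-hogging or memory-hungry payload gets killed here, but a payload that reads `~/.aws/credentials` or exfiltrates data over the network does not – which is why per-process limits alone don't count as sandboxing for production agents.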

Sources