Show HN: Running AI agents across environments needs a proper solution

TL;DR

A developer argues that current infrastructure is not ready for true AI agents – Docker is too heavy per agent, and Python-based agents consume too much memory.

Key Points

  • The evolution goes from LLM+Tools through workflows to full agent systems with tools, CLI access, memory, and fine-grained system capabilities.
  • The open-source project Odyssey aims to provide a lightweight, scalable runtime for thousands of concurrent agents.
  • Core problem: LLMs already introduce significant latency, and adding heavy container overhead on top makes things worse.
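
The latency point is easy to make concrete. Below is a minimal, hedged sketch (Go chosen purely for illustration; it reflects nothing about Odyssey's actual implementation) contrasting the cold-start cost of a throwaway container with the cost of an in-process task. It assumes a local Docker daemon; absolute numbers will vary by machine.

```go
package main

// Illustrative sketch only: compares the round-trip time of starting a
// do-nothing container with spawning a do-nothing goroutine in-process.

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Cold-start one container that does nothing and exits.
	start := time.Now()
	if err := exec.Command("docker", "run", "--rm", "alpine", "true").Run(); err != nil {
		fmt.Println("docker not available:", err)
		return
	}
	fmt.Println("container round trip:", time.Since(start))

	// Spawn and join one goroutine doing the same nothing.
	start = time.Now()
	done := make(chan struct{})
	go func() { close(done) }()
	<-done
	fmt.Println("goroutine round trip:", time.Since(start))
}
```

On typical hardware the container round trip lands in the hundreds of milliseconds, while the in-process spawn is measured in microseconds – overhead that stacks on top of every LLM call an agent makes.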

Nauti's Take

The point is valid: most 'agent frameworks' are glorified wrappers around LLM calls, not actual runtimes. Anyone who has tried to run more than a few dozen agents concurrently knows Docker is the wrong tool for the job.
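
To put that "few dozen agents" ceiling in perspective, here is a hedged sketch of the in-process alternative: thousands of stub agents as goroutines in a single process. The agent body, the 500 ms mock LLM latency, and the count of 5,000 are all illustrative assumptions, not anything taken from the Odyssey codebase.

```go
package main

// Illustrative sketch only: shows why in-process concurrency scales to
// thousands of agents where one-container-per-agent does not. Each "agent"
// is a stub that simulates one LLM round trip plus trivial local work.

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func agent(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	time.Sleep(500 * time.Millisecond) // stand-in for LLM latency
	_ = id * 2                         // stand-in for tool/CLI work
}

func main() {
	const n = 5000 // assumed fleet size, for illustration

	var wg sync.WaitGroup
	start := time.Now()
	for i := 0; i < n; i++ {
		wg.Add(1)
		go agent(i, &wg) // a goroutine costs kilobytes, not a container's ~100 MB
	}
	wg.Wait()

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%d agents finished in %v, heap in use: %d MB\n",
		n, time.Since(start), m.HeapInuse>>20)
}
```

All 5,000 stubs finish in roughly the latency of a single mock LLM call, with a heap small enough to fit on a laptop – the kind of density per-agent containers cannot approach.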

Whether Odyssey is the answer remains to be seen – a GitHub project without broad production validation is still a promise. The direction is interesting, though: agents need what Node.js was for async I/O – something fundamental, not just another abstraction layer.

Context

Scaling AI agents to production levels is one of the unsolved infrastructure problems of 2025. Running thousands of agents concurrently quickly hits the limits of traditional container approaches – in both latency and memory footprint. A dedicated, lightweight agent runtime could be as disruptive as serverless was for conventional web applications.
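
A rough back-of-envelope on the memory claim, using assumed figures rather than measurements: at ~150 MB per containerized Python agent, 10,000 agents would need on the order of 1.5 TB of RAM; at ~10 KB per lightweight in-process task, the same fleet would fit in roughly 100 MB. The two orders of magnitude between those numbers are the gap a dedicated runtime would have to exploit.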
