Show HN: Running AI agents across environments needs a proper solution
TL;DR
A developer argues that current infrastructure is not ready for true AI agents: Docker containers are too heavy, and Python-based agents consume too much memory.
Key Points
- The evolution goes from LLM+Tools through workflows to full agent systems with tools, CLI access, memory, and fine-grained system capabilities.
- The open-source project Odyssey aims to provide a lightweight, scalable runtime for thousands of concurrent agents.
- Core problem: LLMs already introduce significant latency, and adding heavy container overhead on top makes things worse.
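The latency argument can be illustrated with a minimal sketch (not Odyssey's actual design; `agent` and the 0.5 s simulated LLM round-trip are hypothetical): thousands of agents that mostly wait on network I/O can share one event loop in a single process, whereas per-agent containers would add startup and memory overhead to each of them.

```python
import asyncio
import time

async def agent(agent_id: int) -> str:
    # Simulate an LLM round-trip: the agent spends most of its
    # life awaiting network I/O, not computing.
    await asyncio.sleep(0.5)
    return f"agent-{agent_id}: done"

async def main(n: int) -> float:
    start = time.perf_counter()
    # Run n agents concurrently on one event loop in one process.
    results = await asyncio.gather(*(agent(i) for i in range(n)))
    assert len(results) == n
    return time.perf_counter() - start

elapsed = asyncio.run(main(1000))
# All 1,000 agents finish in roughly the latency of a single
# simulated call, since the waits overlap.
print(f"{elapsed:.2f}s")
```

Spinning up 1,000 Docker containers for the same workload would pay container startup cost and a full interpreter's memory footprint per agent, which is the overhead the post objects to.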
Nauti's Take
The point is valid: most 'agent frameworks' are glorified wrappers around LLM calls, not actual runtimes. Anyone who has tried to run more than a few dozen agents concurrently knows Docker is the wrong tool for the job.
Whether Odyssey is the answer remains to be seen – a GitHub project without broad production validation is still a promise. The direction is interesting, though: agents need what Node.js was for async I/O – something fundamental, not just another abstraction layer.