Your AI Agent Has Root Access (and That's a Problem)
TL;DR
Connect a Postgres MCP server for read access and you also get DELETE, DROP TABLE, and arbitrary SQL execution — with no way to restrict it.
Key Points
- The GitHub MCP server, integrated for code reading, ships with delete_repository. The Slack MCP server, added for search, includes remove_user and delete_channel.
- A scan of 1,808 MCP servers found security findings in 66% of them; 30 CVEs were filed in 60 days; 76 published skills contained malware, and 5 of the top 7 most-downloaded skills were malicious.
- Claude, Cursor, ChatGPT — all follow the same all-or-nothing model. Granular permission scoping does not exist in any major platform.
- Aerostack built a gateway workaround: per-tool toggles, destructive ops blocked by default, enforced at the proxy layer.
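The article doesn't publish Aerostack's implementation, but the described approach (per-tool toggles at a proxy, destructive operations denied unless explicitly enabled) can be sketched in a few lines. Everything below is hypothetical: the tool names, the `make_gateway` helper, and the opt-in flag are illustrative, not Aerostack's API.

```python
# Hypothetical sketch of proxy-layer tool gating: the gateway sits between
# the agent and the MCP server and consults a per-tool allowlist before
# forwarding any call. Tool names here mirror the examples in this piece.
DESTRUCTIVE_TOOLS = {"delete_repository", "remove_user", "delete_channel", "execute_sql"}

def make_gateway(enabled_tools: set[str]):
    """Build a filter deciding whether a tool call may pass through the proxy."""
    def allow(tool_name: str, opt_in_destructive: bool = False) -> bool:
        if tool_name not in enabled_tools:
            return False  # per-tool toggle: tool not enabled at all
        if tool_name in DESTRUCTIVE_TOOLS and not opt_in_destructive:
            return False  # destructive ops blocked by default
        return True
    return allow
```

The key design point is that the check runs at the proxy, not in the agent's prompt: a jailbroken or confused model never sees a path around it.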
Nauti's Take
This is not an edge case; it is a systemic design failure at scale. When 5 of the 7 most-downloaded skills are malware, the ecosystem does not have a security problem; it has no security concept at all.
Platform vendors have a clear responsibility here that they have so far offloaded to the community. Gateway-level workarounds like Aerostack's are clever, but they treat symptoms rather than the cause.
Until major providers ship native, granular tool-permission models, every production deployment of MCP agents is a calculated risk — and most teams are not doing the calculation.
Context
MCP servers are being integrated into production systems at scale — without the foundational permission model every other piece of infrastructure has taken for granted for years. The scan numbers are not theoretical risk; they represent active attack surface. Anyone running an agent with database access is implicitly granting write access to everything beneath it.
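Until tool-level permissions exist, the one defense that does work today is enforcing read-only access at the database layer itself, so even "arbitrary SQL execution" cannot write. A minimal sketch, using SQLite's read-only connection mode as a stand-in; for Postgres the equivalent is pointing the MCP server at a role granted only SELECT:

```python
import os
import sqlite3
import tempfile

# Set up a throwaway database with one table.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE users (id INTEGER)")
rw.execute("INSERT INTO users VALUES (1)")
rw.commit()

# Connect the way an agent should be connected: read-only at the engine level.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
count = ro.execute("SELECT count(*) FROM users").fetchone()[0]  # reads still work

try:
    ro.execute("DROP TABLE users")  # any write is rejected by the engine itself
    dropped = True
except sqlite3.OperationalError:
    dropped = False  # "attempt to write a readonly database"
```

The point is where the restriction lives: the database engine enforces it, so no prompt injection or malicious tool description can escalate past it.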
The comparison to early cloud before IAM is apt — except IAM took years to arrive, and agents are already inside critical workflows today.