
AI coding agents accidentally introduced vulnerable dependencies

TL;DR

A developer found a cryptominer running on their server – root cause was CVE-2025-29927, a critical Next.js vulnerability that lets attackers bypass middleware protections entirely by spoofing an internal request header.

Key Points

  • The app was largely built with Claude Code and OpenAI Codex ('vibe coding'). AI-generated code pulled in outdated or vulnerable dependencies without anyone explicitly auditing their security posture.
  • The attacker reached internal endpoints assumed to be protected and executed a script that downloaded a mining binary.
  • The first sign was CPU usage near 100% even during low traffic – only manual process inspection revealed the miner.
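The class of bug behind CVE-2025-29927 can be sketched in a few lines. This is a simplified illustration, not Next.js's actual code: the `x-middleware-subrequest` header name is real, but the matching logic here is condensed. The framework used that header internally to avoid re-running middleware on its own subrequests – and trusted it even when a client supplied it.

```python
# Simplified sketch of the CVE-2025-29927 bug class: middleware is skipped
# when a client-suppliable header, intended only for internal recursion
# tracking, names the middleware. (Illustrative, not Next.js source.)
MIDDLEWARE_HEADER = "x-middleware-subrequest"

def should_run_middleware(headers: dict, middleware_name: str = "middleware") -> bool:
    # Vulnerable logic: trust the header's colon-separated list of
    # "already executed" middleware, regardless of who sent it.
    subrequests = headers.get(MIDDLEWARE_HEADER, "").split(":")
    return middleware_name not in subrequests

# Normal request: middleware (auth checks, etc.) runs as expected.
assert should_run_middleware({}) is True

# Attacker simply sends the internal header – middleware is skipped
# and 'protected' endpoints are reached directly.
assert should_run_middleware({"x-middleware-subrequest": "middleware"}) is False
```

The fix in patched Next.js versions is, in effect, to stop trusting this header from external requests – which is why upgrading closed the hole immediately.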

Nauti's Take

'Vibe coding' is an apt name – you ride a wave of AI-generated output feeling productive, until the hangover hits. This isn't an isolated incident; it's a structural problem.

AI tools don't know which packages are vulnerable today, and nobody asks them to check. The output sounds competent but is a snapshot from training data with zero live threat intelligence baked in.

Anyone seriously using AI coding agents should treat 'npm audit', Dependabot, or Snyk as mandatory hard gates in CI/CD – not optional extras. In this case, a cryptominer was arguably the least damaging possible outcome.
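A hard gate like that can be a single CI step. A minimal sketch as a GitHub Actions fragment (step name is illustrative; `npm audit --audit-level=high` exits non-zero when high- or critical-severity advisories are found, which fails the job):

```yaml
# Hypothetical CI fragment: block the pipeline on known-vulnerable dependencies.
- name: Dependency audit gate
  run: npm audit --audit-level=high
```

The same idea works with Snyk or Dependabot alerts as the backing scanner; the point is that a failing audit must fail the build, not print a warning someone may read later.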

Context

AI coding tools dramatically speed up development, but they reproduce patterns from training data – including outdated library versions and insecure architectural decisions. CVE-2025-29927 was already publicly known and patched at the time of the attack. The real issue isn't the AI itself, but that 'vibe coding' encourages developers to treat generated dependencies as trustworthy defaults, skipping dependency audits and security reviews entirely.

Anyone shipping AI-generated code to production needs automated vulnerability scanning as a non-negotiable pipeline step.

Sources