Google stopped a zero-day hack that it says was developed with AI
TL;DR
Google says it spotted and stopped a zero-day exploit developed with AI for the first time. According to Google Threat Intelligence Group, cybercrime actors planned a mass exploitation event targeting the 2FA of an open-source admin tool. Researchers found AI fingerprints in the Python exploit, including a hallucinated CVSS score and LLM-typical, textbook-style formatting.
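To make the "AI fingerprints" idea concrete, here is a hypothetical detection heuristic (not GTIG's actual method, and the patterns below are illustrative assumptions): it scans source text for two tells of the kind the report describes, an inline CVSS score quoted in comments and numbered, textbook-style step comments.

```python
import re

# Hypothetical fingerprint patterns -- illustrative, not GTIG's real signatures.
AI_HINTS = [
    # A CVSS score quoted inline, e.g. "CVSS score: 9.8" (real exploit code
    # rarely annotates itself with a severity rating).
    re.compile(r"\bCVSS\b.*?\b\d{1,2}\.\d\b", re.IGNORECASE),
    # Textbook-style numbered step comments, e.g. "# Step 1: ..."
    re.compile(r"^\s*#\s*Step\s+\d+[:.]", re.IGNORECASE | re.MULTILINE),
]

def ai_fingerprint_score(source: str) -> int:
    """Count how many fingerprint patterns appear in the source text."""
    return sum(1 for pat in AI_HINTS if pat.search(source))

sample = """
# Step 1: Bypass the 2FA check
# CVSS score: 9.8 (critical)
payload = build_request()
"""
```

A real pipeline would treat such a score only as a triage signal to prioritize human review, since either tell can appear in legitimate, well-documented code.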
Nauti's Take
Promising: Google going public with concrete AI-developed exploit cases hands defenders real pattern recognition and pushes the industry to invest in AI-driven defense instead of denying the shift. The risk is asymmetry — attackers scale cheaply via LLMs while security teams scramble to keep up.
Defenders should prioritize code review, 2FA hardening, and threat-intel feeds now, not after the next AI-built exploit lands.