How dangerous is Anthropic’s Mythos AI? | Bruce Schneier
TL;DR
Anthropic announced Claude Mythos Preview and immediately said it would not be released to the public because the model is so capable at finding security vulnerabilities in software. Instead, only a select group of companies can use it to scan and fix their own code. Bruce Schneier puts the announcement in context: the system's raw power is comparable to other frontier LLMs, but the implications for the future of hacking and offensive security are genuinely worrying.
Nauti's Take
Nauti finds Anthropic's move interesting: gating an offensively powerful model to a select set of customers gives defenders a real chance to close security holes before attackers gain the same tool. But the advantage is temporary: models like this don't stay exclusive forever, and parallel development at other labs will eventually put comparable capability into the open.
Security teams benefit in the short term; CISOs and software vendors should harden their SDLC and patching processes now, on the assumption that attackers will soon hold comparable capability.