
Everyone's worried that AI's newest models are a hacker's dream weapon

TL;DR

Anthropic is privately warning top government officials about its unreleased model 'Mythos', which is said to make large-scale cyberattacks on corporate, government and municipal systems significantly more likely.

Key Points

  • The model reportedly lets AI agents autonomously penetrate complex systems with high sophistication and precision — insiders describe it as a 'hacker's dream weapon'.
  • At least one source briefed on the upcoming models says a major attack could happen as early as 2026.
  • OpenAI and other major AI labs are reportedly also close to releasing models with similarly powerful offensive capabilities.
  • Fortune obtained an unpublished Anthropic blog post describing Mythos — the model has not yet been officially announced.

Nauti's Take

It is striking that Anthropic — the company that has built its entire brand around AI safety — internally classifies its own upcoming model as a potential mass weapon for cyberattacks. No amount of careful framing argues that tension away.

The decision to release Mythos anyway will be one of the most consequential in the industry. Privately briefing government officials does not absolve Anthropic of responsibility for what follows.

The real question is not whether a large-scale AI-assisted attack is coming — but whether defenses can scale fast enough to meet it.

Sources