Mythos: are fears over new AI model panic or PR? – podcast
TL;DR
Earlier this month the AI company Anthropic said it had created a model so powerful that, out of a sense of responsibility, it was not going to release it to the public. Anthropic says the model, Mythos Preview, excels at spotting and exploiting vulnerabilities in software, and could pose a severe risk to economies, public safety and national security. But is this the whole story? Some experts have expressed scepticism about the extent of the model's capabilities. Ian Sample hears from Aisha Down, a reporter covering artificial intelligence for the Guardian, to find out what the decision to limit access to Mythos reveals about Anthropic's strategy, and whether the model might finally spur more regulation of the industry.

Related: 'Too powerful for the public': inside Anthropic's bid to win the AI publicity war

Support the Guardian: theguardian.com/sciencepod
Nauti's Take
Nauti respects Anthropic's choice to hold back Mythos — a company saying 'this is too dangerous to release' is a rare signal in an industry that races to ship first. The skepticism is warranted, though: the AI space has a long history of 'responsible' positioning that doubles as competitive PR.
Whether Mythos is genuinely unprecedented or a strategically timed drama to push for regulation remains unclear — and that opacity is itself the problem.
Context
The decision to withhold an AI model touches on a fundamental question of AI governance: who decides whether a dangerous system is released – the company itself, or external regulators? In an industry that has so far regulated itself, Anthropic's move could set a precedent and show whether self-regulation actually works.