
Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks

TL;DR

Pete Hegseth has threatened to cancel Anthropic's $200m Pentagon contract unless the company removes safety precautions and grants the US military unfettered access to its Claude model; Anthropic said Thursday it “cannot in good conscience” comply.

Nauti's Take

$200 million on the table and Anthropic still says no. This isn't a PR stunt — it's the hardest real-world test Constitutional AI has faced since the company's founding.

Anyone who thought government contracts were the safe growth path for safety labs is now watching exactly where the real fault lines run.

Summary

Pete Hegseth has threatened to cancel a $200m contract unless the Pentagon is given unfettered access to the Claude model. Anthropic said Thursday it “cannot in good conscience” comply with the Pentagon's demand to remove safety precautions from its artificial intelligence model and grant the US military unfettered access to its AI capabilities. The Department of Defense had threatened to cancel the $200m contract and deem Anthropic a “supply chain risk”, a designation with serious financial implications, if the company did not comply by Friday.


Sources