US military used Anthropic’s AI model Claude in Venezuela raid, report says
TL;DR
Claude, the AI model developed by Anthropic, was used by the US military, via Anthropic's partnership with Palantir Technologies, in its operation to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal reported on Saturday. Venezuela's defence ministry says the raid killed 83 people, despite Anthropic's terms of use prohibiting violent applications of the model.
Nauti's Take
Anthropic preaches safety-first and prohibits violence in their terms — but through Palantir, Claude still ends up in military operations with dozens of casualties. Either they can't control their partners, or the ethical guardrails are just marketing.
For a company positioning itself as the more responsible AI lab, this is devastating.
Summary
Wall Street Journal says Claude was used in the operation via Anthropic's partnership with Palantir Technologies.

Claude, the AI model developed by Anthropic, was used by the US military during its operation to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal revealed on Saturday, a high-profile example of how the US defence department is using artificial intelligence in its operations. The US raid on Venezuela involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela's defence ministry.
Anthropic's terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance.