Pentagon ‘close to cutting ties’ with AI firm Anthropic amid frustration over restrictions

Anthropic wants to safeguard its Claude AI tool from being used for mass surveillance or advanced weapons development, Axios reported

Anthropic’s Claude AI app in the app store on a smart phone. Photo: Illustration by Michael M. Santiago / Getty Images via AFP
Bloomberg

The Pentagon is close to cutting ties with Anthropic and may label the artificial intelligence company a supply chain risk after becoming frustrated with restrictions on how it can use the technology, the news website Axios reported.

Anthropic’s talks about extending a contract with the Pentagon are being held up over additional protections the company wants to put on its Claude tool, a person familiar with the matter said.

Anthropic wants to put safeguards in place to stop Claude from being used for mass surveillance of Americans or to develop weapons that can be deployed without a human involved, the person said, asking not to be identified because the negotiations are private.

The Pentagon wants to be able to use Claude as long as its deployment does not break the law. Axios reported on the disagreement earlier.

AI’s use cases for developing weapons and gathering personal data are a burgeoning risk for powerful models. Anthropic, which positions itself as a more responsible AI company that aims to avoid catastrophic harms from the technology, built Claude Gov specifically for the US national security apparatus and aims to serve government customers within its own ethical bounds.

Claude Gov has enhanced capabilities for handling and interpreting classified materials and intelligence and for understanding cybersecurity data.
