War Secretary Pete Hegseth has issued a stern ultimatum to Anthropic's CEO, Dario Amodei: remove all safety restrictions on Claude AI for military use by Friday evening, or face serious repercussions.

The stakes are significant. Claude is currently the sole advanced commercial AI model integrated within the Pentagon's classified networks, under a lucrative contract worth up to $200 million, awarded last summer.

Hegseth cautioned that failure to comply by the 5pm Friday deadline could lead to termination of the contract, designation of the company as a 'supply chain risk,' or invocation of the Defense Production Act to compel access, according to sources reported by Axios.

This 'supply chain risk' label, usually reserved for foreign adversaries, would essentially blacklist Anthropic from all future federal contracts.

Pentagon officials are seeking complete access to Claude AI for all lawful military purposes, bypassing the need for Anthropic's approval for each mission.

Using a pointed analogy, CBS News reported that Hegseth compared the situation to government purchases of Boeing aircraft, where the manufacturer does not control military operations. The same principle, he argued, should apply to AI technology.

A senior Pentagon official asserted that the matter does not concern mass surveillance or autonomous targeting, stressing the importance of human involvement and adherence to legal norms.

In plain terms, the Pentagon seeks to strip Anthropic of its power to veto how its technology is used in military applications.

Amodei has stood his ground. According to a source close to the meeting, as reported by Reuters, the CEO emphasised two non-negotiable principles: Claude AI must not be used for fully autonomous weapons that select and strike targets without human oversight, and it must not be used for mass surveillance of American citizens.

The company has consistently maintained these positions.

Source: International Business Times UK