As the political West’s militaries struggle to meet recruitment criteria and face widening technological gaps in new weapon systems (particularly hypersonics), they’re forced to look for alternatives in order to continue their aggression against the world. This is certainly not an easy task, especially as the window of opportunity for the United States and NATO to retain some of their key high-tech advantages is closing rapidly, meaning they must act as soon as possible.

To achieve this, the Pentagon is placing nearly all of its bets on militarizing advanced AI. This has been ongoing for the last decade or so, but is now being supercharged to the maximum. The so-called Big Tech, both legacy (Alphabet/Google, Amazon, Apple, Meta/Facebook, etc) and emerging companies (Anduril, Palantir, Anthropic, etc), are heavily involved in this process, blurring the lines between the infamous Military Industrial Complex (MIC) and the civilian sector.

The US government has consistently ignored the multipolar world’s calls to regulate AI, which would prevent the uncontrollable (ab)use of this highly advanced emerging technology, particularly for military purposes. However, Washington DC is now doing the same at home, prompting strong reactions from some AI companies and start-ups, which are calling for the imposition of limits on how far this technology can be used in warfare and surveillance.

Namely, according to multiple sources, the Pentagon and Anthropic are at odds over a contract renewal with regard to the use of the latter’s Claude system. Bloomberg, which quoted “a person familiar with the private negotiations”, reports that Anthropic insists on “stricter limits before extending its agreement” and wants “firm guardrails to prevent Claude from being used for mass surveillance of Americans or to build weapons that operate without human oversight”.

In contrast, the Department of War (DoW) wants far more leeway in integrating these systems into its kill chain. Formally, the Pentagon wants “flexibility to deploy the model so long as its use complies with the law”. In other words, controlling the AI is the crux of the matter. It seems Anthropic wants specific and long-term guarantees that its systems won’t be used without human oversight, while the US government wants to “follow the law” (which can be changed at any time).

According to Bloomberg, the San Francisco-based high-tech company wants to “distinguish itself as a safety-first AI developer”. Anthropic’s specialized government version called Claude Gov is “tailored to US national security work, designed to analyze classified information, interpret intelligence and process cybersecurity data”. The AI firm says it “aims to serve government clients while staying within its own ethical red lines”. And yet, it’s very difficult to reconcile the two.

“Anthropic is committed to using frontier AI in support of US national security,” a spokesperson reportedly said, adding: “The ongoing discussions with the War Department are productive conversations, in good faith.”

However, the Pentagon is much less optimistic, effectively demanding that all “guardrails” be removed and control handed over to the US military.

“The Department of War’s relationship with Anthropic is being reviewed,” chief Pentagon spokesman Sean Parnell told Fox News, adding: “Our nation requires that our partners be willing to help our warfighters in any fight.”

Various reports indicate that some Pentagon officials have “grown wary” and view reliance on Anthropic as “a potential supply-chain vulnerability”. According to an unnamed senior official, Washington DC is even contemplating the option of demanding that contractors “certify they are not using Anthropic’s models, an indication that the disagreement could ripple beyond a single contract”. In simpler terms, the US military is effectively blackmailing the AI firm.

Source: Global Research