Something went wrong after Operation Absolute Resolve. Or rather — something went right, the mission succeeded, and then somebody at Anthropic made a phone call that undid quite a lot of the goodwill.
An executive at the San Francisco company behind Claude contacted Palantir Technologies in the days after the Caracas raid to ask a question that sounds reasonable enough on paper: how had their AI model been used during a classified military operation that ended with people being shot? Palantir is the defence analytics firm through which Claude operates on Pentagon systems. The call, in other words, went to the middleman.
That call may cost Anthropic a $200m defence contract.
Two sources with direct knowledge of the situation told Axios the US military used Claude during the 3 January raid — making it, as far as anyone can confirm, the first time a commercially built AI model has been deployed inside a classified American military operation. The Wall Street Journal broke the story first, citing people familiar with the matter. What Claude actually did during the operation has not been made public; whether it crunched satellite imagery, ran logistics calculations, processed intelligence feeds or did something else entirely is unclear, and the classified nature of the mission means nobody involved has to explain.
Here is where it gets properly messy. A senior administration official told Axios that the Anthropic executive's call was 'raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot.'
Anthropic flatly denied that version. A spokesperson insisted the company had 'not discussed the use of Claude for specific operations with the Department of War.'
But the damage — or whatever the administration decided to call it, and they've been rather careful not to call it anything directly — was done. The Pentagon is now weighing whether to sever the relationship entirely: 'Any company that would jeopardise the operational success of our warfighters in the field is one we need to reevaluate our partnership with going forward,' the senior official told Axios.
Whether that official had been waiting weeks for a reason to say something like that is impossible to know from the outside. But the speed of the response suggests the frustration did not start with the phone call. It had been building.
The contract at stake was awarded last summer. Anthropic was the first AI model developer to operate on classified Pentagon systems, and the same official conceded it would be difficult to replace Claude quickly because 'the other model companies are just behind' on specialist government work. So the Pentagon is threatening to fire a contractor it cannot easily replace. The maths on that don't quite add up, but since when has that stopped anyone?
Strip away the Venezuela drama and the fight is about something specific: can a company sell its technology to the US government and still tell them not to use it for certain things? That is not a hypothetical any more.
Source: International Business Times UK