Anthropic, the AI company behind the Claude chatbot, has found itself at the centre of a major controversy in the United States after being formally labelled a “national security risk.” The designation reportedly came through a letter sent by the US Department of War on March 4. In response, Anthropic CEO Dario Amodei said the company plans to challenge the move in court, arguing that the decision is legally questionable and overly punitive.
In a statement addressing the situation, Amodei confirmed that the government communication officially categorised Anthropic as a supply chain risk to America’s national security. He said the company believes the action lacks proper legal grounding and therefore intends to contest it through legal channels.
Anthropic Plans To Challenge The Decision
Amodei noted that Anthropic had already raised concerns about the government’s approach earlier. According to him, the company does not believe the designation is justified under the law cited by authorities. He also emphasised that the impact of the designation may be narrower than it initially appears. As per the statement, it primarily affects situations where Anthropic’s AI tools are used directly within contracts involving the Department of War. It does not automatically block companies from using Claude in unrelated work.
According to Amodei, the law invoked by the government, referred to as 10 USC 3252, is meant to protect federal supply chains rather than penalise companies. Because of that limitation, he said the designation cannot prevent businesses from continuing to use Anthropic’s AI systems in general operations. Even contractors working with the Department of War would only face restrictions in specific cases tied directly to government contracts. In other words, the broader ecosystem of Claude users and partners may remain largely unaffected.
How The Dispute With The Pentagon Started
The tension between Anthropic and the Pentagon reportedly stems from disagreements about how the company’s AI should be used by the military.
Anthropic has, according to reports, repeatedly refused to remove certain safeguards from its Claude models. The company reportedly wants to prevent its AI from being used for domestic mass surveillance or for developing fully autonomous weapons systems.
Despite these restrictions, Claude has still been used within certain military environments. Reports suggest the AI system was involved in classified operations through partnerships with firms like Palantir. It has also reportedly been used in intelligence and military scenarios linked to international conflicts.
The Pentagon had reportedly issued an ultimatum to Anthropic last week before announcing the supply chain risk designation. Amodei described the move as retaliatory and punitive, while also claiming that tensions with the Trump administration played a role in the dispute.
Source: Times Now