WHAT HAPPENED: Anthropic, a leading U.S. artificial intelligence (AI) firm, has accused three China-based AI labs of using fraudulent accounts to extract valuable outputs from its Claude chatbot in a coordinated “distillation” campaign.
👤 WHO WAS INVOLVED: Anthropic; the Chinese AI companies DeepSeek, Moonshot AI, and MiniMax; and U.S. government agencies investigating the claims.
📍 WHEN & WHERE: The technology theft reportedly took place over the last several months, with the U.S. government investigation revealed on Monday, February 23, 2026.
💬 KEY QUOTE: “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems.” — Anthropic.
🎯 IMPACT: The incident highlights vulnerabilities in U.S. AI export controls and raises concerns over China leveraging stolen AI capabilities for military and surveillance purposes.
One of the top U.S. artificial intelligence (AI) companies, Anthropic, is accusing three China-based AI labs—DeepSeek, Moonshot AI, and MiniMax—of conducting a coordinated campaign to extract high-value outputs from its Claude chatbot. According to the company’s own internal findings, the Chinese AI labs used approximately 24,000 fraudulent accounts to generate over 16 million exchanges with the chatbot, targeting its most advanced capabilities in an effort to obtain and port key code and technology for use in China’s own AI development.
Anthropic alleges that this “distillation” campaign, a technique commonly used internally by AI labs to train smaller models, was unauthorized and aimed at bypassing years of research and reinforcement learning. “We have high confidence these labs were conducting distillation attacks at scale,” stated Jacob Klein, Anthropic’s head of threat intelligence. Importantly, the Anthropic documents suggest that the Trump administration is actively investigating the Chinese theft of U.S. AI technology.
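For context on the technique at issue: distillation trains a smaller “student” model to imitate a larger “teacher” model by matching the teacher’s output distributions, rather than copying its weights. The sketch below illustrates the standard soft-label loss in its simplest form; it is a generic textbook illustration, not Anthropic’s systems or the accused labs’ actual pipeline, and all names and values are invented for the example.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature gives softer probabilities."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's soft distributions.

    Minimizing this loss pushes the student's outputs toward the teacher's,
    transferring behavior without any access to the teacher's weights --
    which is why access to a model's outputs alone can be enough to copy it.
    """
    p = softmax(teacher_logits, temperature)  # teacher's "soft labels"
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy illustration: a student whose outputs resemble the teacher's
# incurs a lower distillation loss than one whose outputs do not.
teacher = np.array([2.0, 1.0, 0.1])
close_student = np.array([1.9, 1.1, 0.2])
far_student = np.array([0.1, 1.0, 2.0])

assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In a real distillation campaign of the kind alleged, the “teacher logits” would be replaced by harvested chatbot responses used as training targets; the loss structure, however, is the same basic idea.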
The company is warning that the implications of these attacks extend beyond intellectual property theft and that models created through such methods lack the safety guardrails embedded in U.S. systems, potentially enabling authoritarian regimes to use AI for cyber operations, disinformation, and mass surveillance. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems,” Anthropic said.
The firm identified the campaigns through IP address correlations, metadata, and other indicators, and has shared its findings with U.S. government entities. Klein noted that while the Chinese Communist Party’s (CCP) direct involvement has not been proven, proxy services reselling access to U.S. AI models operate openly in China. He also emphasized that current U.S. export controls, which focus on advanced AI chips and model weights, fail to address the risks posed by distillation attacks.
Source: SGT Report