Last year, the Chinese startup DeepSeek rattled U.S. markets when it released a large language model that could compete with U.S. AI giants at a fraction of the cost.

David Sacks, then serving as President Donald Trump’s AI and crypto adviser, suggested that DeepSeek copied U.S. models. “There’s substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI’s models,” Sacks said at the time.

In a February letter to U.S. lawmakers, OpenAI, the developer of ChatGPT, made similar allegations and said China should not be allowed to advance “autocratic AI” by “appropriating and repackaging American innovation.”

Anthropic, the maker of the Claude chatbot, in February accused DeepSeek and two other China-based AI laboratories of running campaigns to “illicitly extract Claude’s capabilities to improve their own models” using distillation, a technique that “involves training a less capable model on the outputs of a stronger one.”

Anthropic said distillation can be a legitimate way to train AI systems, but it becomes a problem when competitors “use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”

But it can go both ways. San Francisco-based startup Anysphere, maker of the popular coding tool Cursor, recently acknowledged that its latest product was based on an open-source model made by the Chinese company Moonshot AI, maker of the chatbot Kimi.

Kyle Chan, a fellow at the Washington-based Brookings Institution and an expert on China’s technology development, said separating unauthorized distillation from the vast volume of legitimate requests for data will be like “looking for needles in an enormous haystack.” But information sharing and coordination among U.S. AI labs could help, and the federal government can play an important role in facilitating anti-distillation efforts across labs, Chan said.

It’s hard to assess how far the House bill can go, but Chan said Trump may not want to rock the boat with Chinese President Xi Jinping ahead of a planned mid-May state visit to Beijing.
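The distillation technique the labs describe — training a smaller model on a stronger model's outputs — can be sketched in a few lines. The example below is purely illustrative, with made-up logits and a Hinton-style soft-target loss; it is not any lab's actual pipeline.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature yields "softer"
    # probabilities that reveal more of the model's internal preferences.
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy of the student's soft predictions against the
    # teacher's soft targets: the student is rewarded for mimicking
    # the teacher's full output distribution, not just its top answer.
    p = softmax(teacher_logits, temperature)  # teacher's soft labels
    q = softmax(student_logits, temperature)
    return float(-np.sum(p * np.log(q + 1e-12)))

# A student whose outputs track the teacher's incurs a lower loss
# than one that disagrees; minimizing this loss over many queries
# transfers the teacher's behavior to the student.
teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.1, 0.4]
far_student = [0.5, 1.0, 4.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

This is also why the "needles in a haystack" problem Chan describes is hard: from the provider's side, the queries used to harvest soft targets look like ordinary API traffic.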



Source: SecurityWeek