
"Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently."
"Anthropic accused three Chinese firms - DeepSeek, MiniMax Group Inc and Moonshot AI - of violating its terms of service by collectively creating more than 24,000 fraudulent accounts, which were then used to generate more than 16 million exchanges with its publicly available Claude models."
"Because such campaigns are growing in intensity and sophistication, addressing the threat to US artificial intelligence companies will require rapid, coordinated action among industry players, policymakers and the global AI community."
US artificial intelligence companies are warning about adversarial distillation attacks by Chinese AI labs attempting to extract capabilities from their models at industrial scale. Distillation is a legitimate training technique in which smaller models learn from a larger model's outputs to replicate its performance at lower cost. However, US firms worry that Chinese competitors are exploiting the method to acquire advanced capabilities quickly and cheaply. Anthropic accused three Chinese companies, DeepSeek, MiniMax Group, and Moonshot AI, of creating more than 24,000 fraudulent accounts used to generate more than 16 million exchanges with its Claude models. The company emphasized that while distillation serves legitimate purposes, adversarial use poses fundamental business risks. Industry leaders are calling for coordinated action among companies, policymakers, and the global AI community to address the growing threat.
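For context on the technique at the center of the dispute: legitimate distillation typically means training a small "student" model to match a large "teacher" model's output distribution rather than hard labels. A minimal sketch of the standard soft-target loss (in the style of Hinton et al.) is below; the toy logits, temperature value, and function names are illustrative assumptions, not anything from Anthropic's systems:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing more of the teacher's relative preferences between classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy of the student's softened predictions against the
    # teacher's softened outputs, scaled by T^2 so gradients stay
    # comparable across temperatures.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -temperature ** 2 * sum(
        pt * math.log(ps) for pt, ps in zip(p_teacher, p_student)
    )

# Hypothetical logits for a 3-class toy problem.
teacher = [4.0, 1.0, 0.5]
student = [3.0, 1.5, 0.2]
loss = distillation_loss(student, teacher)
```

In the adversarial scenario the article describes, the "teacher outputs" are simply responses harvested at scale from a commercial model's public API, which is why high-volume automated querying through fraudulent accounts is the telltale signature.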
#ai-model-distillation-attacks #us-china-ai-competition #model-security-threats #intellectual-property-protection
Read at ComputerWeekly.com