
Anthropic accuses Chinese AI labs of distilling Claude as US debates AI chip exports


Anthropic is accusing three Chinese AI companies of setting up more than 24,000 fake accounts for its Claude AI model in order to train their own models.

The labs – DeepSeek, Moonshot AI, and MiniMax – allegedly made more than 16 million requests to Claude through those accounts using a technique called “distillation.” Anthropic said the labs “targeted Claude’s most differentiated capabilities: technical reasoning, tool use, and writing.”

The accusations come amid debate over whether to enforce stricter export controls on advanced AI chips, a policy aimed at slowing China’s AI development.

Distillation is a common training technique that AI labs use on their own models to create smaller, cheaper ones, but competitors can also use it to copy another lab’s homework. OpenAI sent a memo to House lawmakers earlier this month accusing DeepSeek of training on its data via distillation.
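Anthropic’s post does not describe the labs’ training code. As background, classic knowledge distillation (in the style popularized by Hinton et al.) trains a small “student” model to match a larger “teacher” model’s softened output distribution instead of hard labels. A minimal NumPy sketch of the core loss, with all names and values purely illustrative:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The student is pushed to reproduce the teacher's full output
    distribution, which carries more signal than one-hot labels.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # Per-example KL divergence, scaled by T^2 so gradient magnitudes
    # stay comparable across temperatures (a common convention).
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    return (T ** 2) * kl.mean()

# Toy batch of two examples with three classes each (made-up logits).
teacher = np.array([[4.0, 1.0, 0.5],
                    [0.2, 3.5, 0.1]])
close_student = teacher + 0.1   # roughly tracks the teacher
far_student = -teacher          # disagrees with the teacher

print(distillation_loss(close_student, teacher))  # small
print(distillation_loss(far_student, teacher))    # much larger
```

In the scenario Anthropic describes, the “teacher” signal would be harvested through API responses rather than direct access to logits, but the training objective is the same idea: imitate the stronger model’s outputs.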

DeepSeek first made waves a year ago when it released R1, an open reasoning model that rivaled those of American frontier labs. DeepSeek is expected to soon release DeepSeek V4, its latest model, which it says can outperform Anthropic’s Claude and OpenAI’s ChatGPT in writing.

The scale of each lab’s activity differed. Anthropic tracked more than 150,000 exchanges from DeepSeek that appeared aimed at extracting Claude’s reasoning and alignment behavior, particularly how the model stays on policy in safety-sensitive questions.

Moonshot AI accounted for more than 3.4 million exchanges focused on business intelligence and tool use, data mining and analysis, computer-use agents, and computer vision. Last month, the company released Kimi K2.5, a new version of its open model.


MiniMax’s 13 million exchanges focused on agentic coding and tool calling. Anthropic said it could watch MiniMax in action when the lab directed about half of its traffic at distilling the capabilities of Claude’s latest model when it launched.

Anthropic says it will continue to invest in defenses that make distillation attacks harder to mount and easier to detect, but it wants “a coordinated response across AI companies, cloud providers, and policymakers.”

The distillation accusations land as US chip exports to China remain deeply contentious. Last month, the Trump administration allowed US companies such as Nvidia to ship advanced AI chips (e.g., the H200) to China. Critics say this loosening of export rules will boost China’s AI power at a critical moment in the global AI race.

Anthropic says distillation at the scale practiced by DeepSeek, MiniMax, and Moonshot still “needs access to advanced chips.”

“The distillation attacks reinforce the case for export controls: restrictions on chip access constrain both direct training and the scaling of illicit distillation,” according to Anthropic’s blog.

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch he was not surprised by the findings.

“It has been known for some time that one of the reasons for the rapid improvement of Chinese AI models has been theft via distillation of US frontier models. Now we have confirmation,” said Alperovitch. “This should give us yet another reason to refuse to sell AI chips to these [companies], which would only accelerate them further.”

Anthropic also said that distillation not only threatens to erode America’s AI advantage, but could also create national security risks.

“Anthropic and other US companies build safeguards that prevent non-state actors from using AI, for example, to develop weapons or conduct malicious cyber operations,” Anthropic’s blog post said. “Models built through illicit distillation are unlikely to preserve those safeguards, meaning the potential for harm grows as those protections are stripped away.”

Anthropic pointed to authoritarian governments deploying frontier AI for things like “offensive cyber operations, malign influence campaigns, and mass surveillance,” a risk that is heightened if the models are open.

TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.


