Anthropic Uncovers Massive Chinese Distillation Attacks as Seedream 5.0 Boosts AI Reasoning
Industry Controversy & Security
Anthropic Accuses Chinese AI Firms of Massive Data Distillation Attacks: Anthropic has identified industrial-scale "distillation attacks" by Chinese companies DeepSeek, Moonshot AI, and MiniMax, involving more than 24,000 fraudulent accounts used to extract training data from 16 million Claude exchanges. In response, Anthropic published a blog post outlining defensive measures, including the controversial practice of "poisoning" outputs to degrade the quality of siphoned training data.
- Anthropic is accusing DeepSeek, Moonshot AI (Kimi), and MiniMax of setting up more than 24,000 fraudulent Claude accounts and distilling training data from 16 million exchanges.
- Anthropic: "We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax." 🚨
- Anthropic's recent distillation blog should make anyone only ever want to use local open-weight models; it's scary and dystopian
- Here we go again. DeepSeek R1 was a literal copy paste of OpenAI models. They got locked out, now they are on Anthropic. Fraud!
Model Releases & Updates
Seedream 5.0 Launches with Enhanced Reasoning and Real-Time Search: The newly released Seedream 5.0 introduces intention-aware prompt understanding, real-time web search capabilities, and significant improvements in logical reasoning. The model is being positioned as a strong competitor on aesthetics and realism against existing models such as Nano Banana Pro and Soul 2.