Anthropic Just Accused China's Hottest AI Labs of Running 24,000 Fake Accounts to Steal Claude
The AI Industry's Worst-Kept Secret Just Became Its Biggest Scandal
Anthropic just did something AI companies almost never do: they named names.
On Monday, the company behind Claude publicly accused three of China's most prominent AI labs—DeepSeek, Moonshot AI, and MiniMax—of orchestrating an industrial-scale theft operation. We're talking 24,000 fake accounts generating 16 million exchanges with Claude, all designed to reverse-engineer and extract the model's capabilities.

This isn't a vague accusation buried in a Terms of Service update. Anthropic went full receipts mode, publicly calling out specific companies in what might be the most dramatic move in AI corporate warfare yet.
Why This Actually Matters (And Why Everyone's Been Quiet Until Now)
Here's the thing: model distillation has been the industry's open secret for years. Everyone knows smaller labs have been using API access to larger models to train their own. It's like learning to cook by eating at restaurants and reverse-engineering the recipes.
But there's a difference between inspiration and straight-up industrial espionage.
The scale is what's shocking. 24,000 accounts isn't a few curious developers tinkering. That's a coordinated operation with infrastructure, budget, and organizational backing. That's not research—that's extraction at scale.
Think about it this way:
Legitimate Research:
User → Claude API → Learn techniques → Build own model
Alleged Operation:
24,000 fake accounts → 16M queries → Systematic extraction → Clone capabilities
The Timing Is Everything
Why go public now? Because the AI race has entered a new phase where model capabilities are the entire moat.
Anthropic just launched Claude Code Security (finding 500+ vulnerabilities in production code) and Claude Cowork (automating enterprise workflows). These aren't just chatbots anymore—they're enterprise products with real revenue potential.

When your competitive advantage can be siphoned through API calls, staying quiet becomes an existential risk.
Meanwhile, Google Is Banning People For Using AI Too Creatively
In a delicious piece of irony, Google just cut off users who connected the open-source AI agent OpenClaw to their Antigravity platform. Some developers lost access to their entire Google accounts.
The charge? "Malicious usage." The reality? People were using AI agents exactly as advertised—autonomously.
The double standard is wild: Chinese labs allegedly run 24,000 fake accounts for months, while legitimate developers get permabanned for creative use cases. Welcome to AI governance in 2026.
What Happens Next?
This public callout forces everyone's hand. Does the US government get involved? Do other AI companies start naming their own suspected theft operations? Does this accelerate the fracturing of AI development into regional silos?
One thing's certain: the era of polite silence in AI development is over.
When Treasure Data announced that an engineer had built a production SaaS product in an hour using AI, everyone celebrated the productivity gains. When that same technology gets weaponized for competitive intelligence at scale, suddenly we need governance systems we haven't built yet.
The AI Development Paradox:
More powerful models → Easier to clone via API
Easier to clone → More aggressive protection
More protection → Less open research
Less open research → Slower innovation
The Bottom Line
Anthropic's accusation isn't just about three Chinese labs. It's about drawing a line between acceptable competitive intelligence and coordinated theft. It's about whether AI development stays collaborative or becomes fully adversarial.
The real question: If building 24,000 fake accounts is the cost of catching up in AI, how many other companies are doing the exact same thing but haven't been caught yet?
And more importantly—what does it mean for innovation when the only way to protect your models is to lock them down completely?