AI Clash: Anthropic and OpenAI Accuse China's DeepSeek of Industrial-Scale Data Theft

AI companies Anthropic and OpenAI in conflict with DeepSeek.


A major rift has erupted in the global AI race, as US giants Anthropic and OpenAI level explosive allegations against three Chinese labs, DeepSeek among them, accusing the trio of mass data theft and intellectual property infringement: harvesting the US firms' model outputs to replicate and advance their own AI technology. The dispute intensifies concerns over national security, technological leadership, and ethics in the fast-evolving world of artificial intelligence.


Key Takeaways

  • Anthropic alleges DeepSeek, Moonshot AI, and MiniMax orchestrated large-scale extraction of sensitive data from its Claude chatbot using over 24,000 fake accounts and 16 million interactions.
  • The Chinese labs are accused of employing “illicit distillation”: training their own models on the outputs of more advanced Western competitors.
  • Distillation is common within companies for optimising their own AI, but it is seen as flagrant IP theft when applied across rival firms and countries.
  • OpenAI made similar accusations earlier this year amid growing tensions over access to top-tier US AI chips and technologies.
  • There are wider concerns that such unregulated replication could strip AI safety guardrails, posing both commercial and national security risks.

How The Data Extraction Was Orchestrated

According to Anthropic's claims, DeepSeek, Moonshot AI, and MiniMax bypassed regional restrictions and bans on commercial access from China by routing vast amounts of traffic through proxy networks. They set up tens of thousands of fraudulent accounts, each one designed to interact with Claude, Anthropic’s flagship AI model, harvesting outputs at scale. In total, Anthropic estimates more than 16 million exchanges were generated—primarily focused on advanced coding, agentic reasoning, and tool use.


While "distillation" is a legitimate method within AI development for optimising smaller in-house models, using it to train competitors’ models on proprietary outputs crosses both ethical and legal boundaries, say US firms.


Tensions Rise Amid Global AI Rivalry

The scandal comes at a critical moment, as the US debates how tightly to control exports of advanced AI chips such as those made by Nvidia. The alleged attacks underscore warnings that technological secrets can still leak abroad even as hardware becomes more tightly restricted.


Critics of the practice argue that illicit distillation undermines American leadership and could give Chinese players a shortcut to catch up with, or even leapfrog, their US rivals, especially in areas considered sensitive for national security.


There is alarm that American-developed safety guardrails, designed to prevent AI from being misused for cyberattacks or bioweapon development, may not survive being copied into foreign models, opening the door to misuse by state or non-state actors.


Industry Calls For A Coordinated Response

Both Anthropic and OpenAI underline that the scale of such data extraction campaigns goes beyond what any single company can address alone. They are urging government authorities, cloud providers, and all AI developers to unite in identifying, tracking, and shutting down similar attacks.


Among the measures proposed: advanced systems to detect suspicious patterns of AI model queries, stronger account verification, greater intelligence sharing within the sector, and policy reforms that clarify the legal boundaries of model training and IP protection.
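
As a rough illustration of the first of those measures, the sketch below flags accounts whose query volume and topical narrowness look scripted rather than organic. The record fields, thresholds, and helper function are all invented for this example; real abuse detection at an AI provider would rely on far richer signals.

```python
from dataclasses import dataclass

# Hypothetical per-account usage record; the field names are illustrative,
# not any provider's real telemetry schema.
@dataclass
class AccountStats:
    account_id: str
    queries_per_day: float
    distinct_topics: int  # e.g. clusters of prompt embeddings

def looks_like_harvesting(stats: AccountStats,
                          volume_threshold: float = 5000.0,
                          topic_threshold: int = 3) -> bool:
    """Flag accounts combining scripted-looking volume with narrow focus.

    The thresholds are made-up placeholders; a real system would calibrate
    them against baseline traffic and weigh many more signals (proxy/IP
    reputation, payment details, timing regularity, and so on).
    """
    high_volume = stats.queries_per_day > volume_threshold
    narrow_focus = stats.distinct_topics <= topic_threshold
    return high_volume and narrow_focus

accounts = [
    AccountStats("user-001", queries_per_day=40, distinct_topics=12),
    AccountStats("user-002", queries_per_day=9000, distinct_topics=2),
]
print([a.account_id for a in accounts if looks_like_harvesting(a)])  # ['user-002']
```

Even a crude heuristic like this shows why providers want intelligence sharing: a single flagged account means little, but thousands of accounts matching the same pattern across providers points to a coordinated campaign.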


The Road Ahead

The controversy not only highlights vulnerabilities in current AI security, but also signals the deepening competition—verging on confrontation—between leading US and Chinese tech powers. The incident is certain to add further urgency to calls for global AI governance, robust IP protection, and clearer norms governing how powerful technologies should be developed and shared (or not) across borders.


