A report by The Next Web warns that major AI development platforms such as Hugging Face and ClawHub are becoming major targets for cybercriminals. Security researchers discovered hundreds of malicious AI models and agent “skills” hidden inside repositories that developers commonly trust for downloading machine learning tools and automation components. The article argues that the infrastructure designed to accelerate AI innovation is now also creating a dangerous new software supply-chain attack surface.
The report highlights how attackers exploit serialized AI model files, especially Python pickle-based formats, to execute malicious code when a model is loaded. Researchers found that some infected models were capable of stealing credentials, opening remote access backdoors, or silently downloading additional malware. According to findings cited in the article, more than 350,000 suspicious or unsafe issues were identified across tens of thousands of hosted models on Hugging Face.
Another major concern involves OpenClaw’s ClawHub ecosystem, where attackers uploaded malicious AI “skills” disguised as legitimate productivity or automation tools. These skills reportedly instructed AI agents to execute harmful commands, mine cryptocurrency, or install information-stealing malware. Researchers described campaigns such as “ClawHavoc,” where compromised skills manipulated AI agents through hidden prompts and social engineering techniques.
The article concludes that the AI industry is facing a security challenge similar to earlier software package ecosystem attacks, but with even greater risks because AI systems can autonomously execute external tools and workflows. Experts warn that the rapid growth of open AI ecosystems has outpaced security protections, leaving developers, enterprises, and even governments vulnerable to compromise. The piece calls for stronger scanning systems, improved verification standards, and greater awareness of AI supply-chain risks as adoption continues to accelerate.