Cryptocurrency Mining Swarm Hijacks AI Agents Through Weaponized ClawHub Skills
Thirty malicious tools silently recruit autonomous agents into distributed mining operations, exposing governance failures in open-source AI ecosystems as regulatory frameworks struggle to keep pace.
Researchers have uncovered 30 weaponized ClawHub skills that co-opt AI agents into coordinated cryptocurrency mining operations without user consent, exposing critical vulnerabilities in how open-source AI tooling ecosystems govern third-party integrations.
The ClawSwarm campaign, documented by Manifold research lead Ax Sharma, represents an emerging attack vector that exploits the gap between AI agent capabilities and governance infrastructure. ClawHub user ‘imaflytok’ published skills downloaded approximately 9,800 times, according to The Register. Unlike traditional malware, these skills contain no malicious code patterns—they simply redirect autonomous agents to perform work the user never authorised.
‘Whether ClawSwarm instances are a legitimate experiment in agent economics or a recruitment funnel for speculative crypto, the result for the user is the same: their agent is doing things they didn’t ask it to do, for someone they don’t know, with keys they didn’t authorize.’
— Ax Sharma, Research Lead, Manifold
The discovery arrives as the ClawHub marketplace hosts over 10,700 skills, of which 7.6–8.5% have been historically flagged as malicious. In February 2026, security researchers documented 341 malicious skills in an initial audit; that figure grew to 824 within two weeks and ultimately reached 1,184 by March, per Koi Security. The ClawSwarm campaign suggests the landscape continues to deteriorate.
The Governance Gap
ClawHub’s publishing requirements consist of a single barrier: a GitHub account at least one week old. The platform performs no code review, malware scanning, signature verification, or maintainer validation, according to analysis by KiwiClaw. This design parallels early package registry vulnerabilities in npm and PyPI but operates at compressed timelines with larger potential blast radius—AI agents maintain persistent access to credentials and API keys.
OpenClaw creator Peter Steinberger has characterised the platform as “a free, open source hobby project that requires careful configuration to be secure.” The system was never designed to handle enterprise-scale deployment without dedicated security oversight.
Traditional security controls fail against this attack class. “The registry layer is the wrong place to solve this,” Sharma noted in The Register. “A scanner looking for malicious code patterns finds nothing: the cURL calls are clean, the SDK is legitimate. What’s needed is runtime visibility into what agents actually do once a skill is installed.”
Enterprise Blind Spots
The liability implications extend beyond individual users. Token Security reports that 22% of enterprise customers have employees actively using OpenClaw likely without IT approval, demonstrating shadow AI adoption patterns that bypass corporate governance. Meanwhile, 48.9% of organisations cannot monitor non-human machine-to-machine traffic, according to Salt Security‘s H1 2026 State of AI and API Security Report. This visibility gap means most enterprises cannot detect when deployed agents operate outside user intent.
Gartner projects over 80% of enterprises will have deployed autonomous AI agents in production by end of 2026. The IBM 2025 Cost of a Data Breach Report found organisations using AI extensively in security operations saw breach costs averaging $4.88 million per incident—a figure that predates the emergence of agent-specific attack vectors.
Regulatory Response Lags Technical Reality
Governance frameworks are scrambling to catch up. The OWASP Top 10 for Agentic Applications, published in December 2025, provides the first formal taxonomy of agentic AI risks. NIST issued a request for information on AI agent security in January 2026. The EU AI Act’s high-risk obligations take effect in August 2026, while Colorado’s AI Act becomes enforceable in June.
Microsoft released its Agent Governance Toolkit on 2 April 2026, noting that “the infrastructure to govern autonomous agent behavior has not kept pace with the ease of building agents,” per the company’s Microsoft open source blog. The toolkit addresses all ten OWASP agentic AI risks through runtime security controls—a tacit acknowledgment that registry-level protections cannot solve the problem.
- Resource hijacking attacks target agents directly, bypassing traditional malware detection
- Nearly half of enterprises lack visibility into agent activity and cannot detect unauthorised operations
- Open-source AI tooling ecosystems operate without code review, signature verification, or maintainer validation
- Regulatory frameworks lag technical reality by 12–18 months, creating a liability vacuum
Liability Vacuum
The unresolved question centres on responsibility. When an AI agent acts autonomously outside user intent, liability chains remain undefined. Is the publisher of the malicious skill liable? The registry operator? The organisation that deployed the agent without proper governance controls? The developer who integrated third-party skills without runtime monitoring?
VirusTotal documented similar patterns in February 2026 with the hightower6eu campaign, which published 314+ malicious skills. The attack methodology remains consistent: exploit governance gaps in open registries, leverage agent autonomy, and operate below the detection threshold of traditional security tools.
What to Watch
Monitor regulatory developments in the EU and Colorado as enforcement deadlines approach. The gap between AI Act obligations and current industry practices suggests either rapid adoption of governance frameworks like Microsoft’s toolkit or a wave of non-compliance citations starting in Q3 2026.
Enterprise security teams should prioritise runtime monitoring capabilities for AI agents. Salt Security’s finding that 48.9% of organisations cannot track non-human traffic indicates that most enterprises currently operate with zero visibility into agent behaviour—a condition incompatible with upcoming regulatory requirements.
The ClawHub ecosystem’s 8.5% infection rate may represent a floor rather than a ceiling. As adoption curves steepen and economic incentives for resource hijacking increase, expect coordination between malicious skill publishers and more sophisticated evasion techniques. The February-to-March escalation from 341 to 1,184 malicious skills suggests exponential rather than linear growth patterns.