SkyPilot Warns Users Against Running OpenClaw on Primary Machines Due to Security Risks
The open-source cloud optimization platform issued a stark advisory about OpenClaw's elevated system access and vulnerability to prompt injection attacks.
SkyPilot, the open-source cloud cost optimization platform, has issued a security advisory warning users not to run OpenClaw on their main machines, citing significant risks including data exposure, unauthorized system modification, and potential malicious exploitation. The warning comes as OpenClaw has exploded to over 180,000 GitHub stars in a matter of weeks, raising urgent questions about AI agents with elevated system privileges.
OpenClaw is a self-hosted AI agent that connects to WhatsApp, Telegram, Slack, Discord, and dozens of other services, executing shell commands, browsing the web, reading and writing files, and calling APIs. According to SkyPilot’s blog post, the architecture gives the AI agent roughly the same level of access that users have on their machine, with Security researchers saying it has “root access” without much exaggeration.
Attack Surface Grows With Popularity
Within weeks of going viral, reports of exposed instances, prompt injection attacks, and malicious plugins started piling up. The security landscape around OpenClaw has deteriorated rapidly since its January 2026 launch. CVE-2026-25253, an unauthenticated WebSocket vulnerability, allowed malicious websites to silently extract authentication tokens and issue commands to OpenClaw instances.
180,000+
21,000+
341
5
The focal point is a multi-vector security crisis involving critical remote code execution vulnerabilities, large-scale supply-chain poisoning in its skills marketplace, and systemic architectural weaknesses. According to Conscia, Censys tracked growth from approximately 1,000 to over 21,000 publicly exposed instances between 25 and 31 January 2026.
Moltbook, the social network exclusively for AI agents, was attacked within days of launch with researchers demonstrating that AI-to-AI manipulation is “both effective and scalable”. A malicious “skill” (plugin) titled “What Would Elon Do?” was found exfiltrating session tokens using hidden prompt injection.
Microsoft and Enterprise Security Vendors Issue Warnings
Microsoft’s security division has taken the unusual step of publishing detailed guidance about OpenClaw’s risk profile. According to Microsoft Security Blog, OpenClaw should be treated as untrusted code execution with persistent credentials and is not appropriate to run on a standard personal or enterprise workstation.
“These systems are operating as ‘you.’ They operate above the security protections provided by the operating system and the browser. Application isolation and same-origin policy don’t apply to them.”
— Nathan Hamiel, Security Researcher
In an unguarded deployment, three risks materialize quickly: credentials and accessible data may be exposed or exfiltrated; the agent’s persistent state or “memory” can be modified, causing it to follow attacker-supplied instructions over time; and the host environment can be compromised if the agent is induced to retrieve and execute malicious code.
Recent research from Sophos suggests that over 30,000 OpenClaw instances were exposed on the internet, and threat actors are already discussing how to weaponize OpenClaw ‘skills’. If employees deploy OpenClaw on corporate machines and connect it to enterprise systems while leaving it misconfigured and unsecured, it could be commandeered as a powerful AI backdoor agent, according to CrowdStrike.
Fundamental Architecture Problem
OpenClaw represents a new category of AI agent that operates with persistent system access across messaging platforms. Unlike cloud-based assistants, it runs locally with shell execution, file system access, and browser automation capabilities. The tool was originally launched in November 2025 as “Clawdbot” by Austrian developer Peter Steinberger, who announced on 14 February 2026 that he was joining OpenAI to lead personal agent development.
The core issue is that LLMs cannot reliably distinguish between legitimate instructions and malicious instructions embedded in content they process. OpenClaw needs deep access to the machine it runs on: shell execution, file system access, browser automation — capabilities that make it useful and also what make running it on a personal laptop a bad idea.
SkyPilot’s solution is isolation. The post covers why users shouldn’t run OpenClaw on their main machine, what isolation options are available, and how to set it up on a cloud VM. Running OpenClaw on a cloud VM instead of a laptop gives concrete protections: SSH keys, browser sessions, password manager databases, email cookies, and corporate VPN credentials never touch the VM, and even if the agent is tricked into exfiltrating data, there’s nothing sensitive to steal beyond the Anthropic API key.
Security firm Koi Security conducted an audit of all 2,857 skills available on ClawHub and the findings were alarming: 341 malicious skills across multiple campaigns, with 335 traced back to a single coordinated operation dubbed “ClawHavoc”. These malicious skills masqueraded as legitimate tools, cryptocurrency trading bots, and productivity utilities but delivered information-stealing malware, according to Immersive Labs.
Enterprise Response and Shadow AI
Bitdefender’s GravityZone telemetry has provided concrete evidence that employees are deploying OpenClaw agents directly onto corporate machines using single-line install commands — shadow AI in its purest form: unmanaged, unmonitored AI agents with broad system access, deployed outside of any IT governance process.
- Prompt Injection: LLMs cannot distinguish between legitimate user instructions and malicious commands embedded in processed content
- Supply Chain Poisoning: 341 malicious skills discovered in ClawHub marketplace (12% of registry)
- Excessive Privilege: Agent operates with near-root access to host system, files, and credentials
- Internet Exposure: Over 21,000 instances found publicly accessible without authentication
- Persistent Memory Corruption: Attackers can modify agent behavior across sessions through state manipulation
Microsoft’s Cyber Pulse report reveals more than 80% of Fortune 500 companies deploy AI agents built with low-code or no-code tools, but only 47% have security controls in place to manage them — meaning more than half of the world’s largest companies have AI agents accessing sensitive data with no governance, no audit trails, and no oversight, according to Kiteworks.
Organizations should not be deploying OpenClaw in any capacity connected to corporate systems or data, as the risk profile is simply too high in its current state. In Sophos’s opinion, OpenClaw should be considered an interesting research project that can only be run ‘safely’ in a disposable sandbox with no access to sensitive data.
What to Watch
The industry is sounding the alarm, with a recent Gartner report warning that “Agentic Productivity Comes With Unacceptable Cybersecurity Risk” and characterizing OpenClaw as “a dangerous preview of agentic AI”, according to Bitsight. The EU AI Act reaches full enforcement for high-risk systems in August 2026 with penalties up to 7% of global annual revenue, requiring documented AI governance, data traceability, and human oversight.
Monitor whether OpenClaw’s exposure count decreases as security awareness spreads. Track whether the skills ecosystem gains verification and signing mechanisms. Watch for enterprise policy responses — especially whether major security vendors add OpenClaw detection to their platforms. The broader question is whether self-hosted AI agents with elevated privileges can ever be secured adequately, or whether this architectural model is fundamentally incompatible with enterprise security requirements. Microsoft’s warning about OpenClaw is an early signal that the security industry is going to need fundamentally different frameworks for evaluating and containing AI agent deployments.