
Why Companies Are Banning OpenClaw (2026)

Palo Alto Networks calls it a "lethal trifecta." Naver, Kakao, and Karrot banned employee use. 135,000+ instances sit exposed on the public internet. Here's why the corporate backlash is accelerating -- and what the security research actually says.

February 11, 2026 · 14 min read · By Espen

Companies are banning OpenClaw because it combines three things that terrify IT security teams: access to private corporate data, exposure to untrusted community-built skills, and the ability to send external communications -- all while retaining persistent memory across sessions. Palo Alto Networks calls this combination a "lethal trifecta" (Source: Palo Alto Networks Unit 42 threat advisory, February 2026). As of February 2026, at least three major Korean tech companies have banned employee use, a Google Cloud VP has warned against internal deployment, and independent researchers have found over 135,000 OpenClaw instances exposed to the open internet without authentication.

This is not a theoretical risk. CrowdStrike has documented 341 malicious skills in a coordinated attack campaign. A critical CVE with a CVSS score of 8.8 was disclosed in January. And 22% of employees at mid-to-large companies are now running open-source AI agents as shadow IT -- with no oversight from their security teams (Source: VentureBeat, January 2026).

Here's the full picture of who's banning OpenClaw, why, and what it means for you.

Not sure what OpenClaw is? Start with our complete guide to OpenClaw for the basics.

The Growing Backlash

OpenClaw grew from a weekend side project to 157,000+ GitHub stars in under a year. That growth attracted millions of users -- but also the attention of corporate security teams, nation-state threat actors, and AI safety researchers who are now sounding alarms.

The backlash is not coming from one direction. It's converging from multiple independent sources: enterprise security vendors, academic researchers, tech journalists, and AI critics. When Palo Alto Networks, Gary Marcus, XDA Developers, Northeastern University, Nature, and Trend Micro all flag the same product within the same month, that's a signal worth paying attention to.

Let's walk through each one.

Palo Alto Networks: The "Lethal Trifecta"

Palo Alto Networks' Unit 42 threat intelligence team published a detailed advisory on OpenClaw in early February 2026. Their assessment was blunt: OpenClaw represents a "lethal trifecta" of risk for any organization.

The three components:

  1. Access to private data. OpenClaw runs on the user's machine with full filesystem access. It can read documents, emails, databases, and credentials -- anything the user account can access.
  2. Exposure to untrusted content. The 5,700+ skills on ClawHub are community-contributed. Many are unvetted. A malicious skill can inject prompts, exfiltrate data, or modify agent behavior without the user's knowledge.
  3. External communication capability. OpenClaw is designed to send messages across WhatsApp, Telegram, Slack, email, and 15+ other channels. A compromised agent doesn't just steal data -- it can send it out through legitimate messaging channels that bypass DLP (data loss prevention) tools.

The combination is what makes it dangerous. An AI agent with memory, access, and outbound communication is, from a security perspective, an insider threat that runs 24/7.

"Attackers can trick it into executing malicious commands or leaking data. The persistent memory means a single prompt injection can have lasting effects across sessions." -- Palo Alto Networks Unit 42

Who's Banning It -- and Why

As of February 2026, several major companies have taken formal action against OpenClaw on corporate devices.

Naver (South Korea)

South Korea's largest search engine and technology company banned employees from installing or running OpenClaw on any corporate device or network. Naver's security team cited the cleartext credential storage in config.yaml and the lack of RBAC (role-based access control) as immediate disqualifiers (Source: Korean tech press, January 2026).

Kakao

Kakao -- the company behind KakaoTalk, South Korea's dominant messaging platform -- issued an internal policy prohibiting OpenClaw use. The irony is not lost on observers: a messaging company banning a tool that connects to messaging platforms. But Kakao's concern was specific -- OpenClaw's WhatsApp and Telegram adapters could inadvertently expose internal communications.

Karrot (formerly Danggeun Market)

Karrot, the Korean secondhand marketplace app, added OpenClaw to its prohibited software list alongside other unsanctioned AI tools. The ban was part of a broader shadow AI crackdown.

Google Cloud VP warning

A Google Cloud VP publicly warned against deploying OpenClaw with access to internal infrastructure. While Google hasn't issued a company-wide ban, the warning was interpreted across the industry as a strong signal that OpenClaw is not enterprise-ready.

Pattern to notice: The bans are concentrated in Asia-Pacific tech companies right now, but enterprise security vendors in the US and Europe are issuing similar guidance. Expect more formal bans in Q1-Q2 2026.

The Security Incidents

The bans aren't based on theory. They're a response to documented security incidents and vulnerability disclosures.

CVE-2026-25253: Critical websocket vulnerability

Discovered in January 2026, CVE-2026-25253 is a critical unauthenticated websocket vulnerability with a CVSS score of 8.8 out of 10 (Source: NVD/MITRE). The vulnerability allows a remote attacker to connect to an exposed OpenClaw instance and execute commands without any authentication. No password, no token, no handshake -- just a raw websocket connection.

The severity is amplified by the next finding.

135,000+ exposed instances

Cisco Talos conducted an internet-wide scan in January 2026 and found over 135,000 OpenClaw instances exposed to the public internet with the default websocket port open (Source: Cisco Talos, January 2026). Most of these were running without authentication enabled -- a configuration that OpenClaw ships with by default.

Combine 135,000 unauthenticated instances with a CVSS 8.8 websocket vulnerability, and the attack surface is enormous.
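If you run OpenClaw yourself, the practical question is whether your own instance is one of those 135,000. Below is a minimal Python sketch that attempts a bare WebSocket handshake against a host and reports whether the connection is accepted without any credentials. The port number (18789) and the ws:// endpoint shape are illustrative assumptions, not documented OpenClaw defaults -- substitute whatever your deployment actually listens on, and only test hosts you own.

```python
# Minimal sketch: check whether an OpenClaw-style gateway accepts an
# unauthenticated WebSocket connection. The port (18789) and bare ws://
# endpoint are assumptions for illustration -- adjust to your deployment.
import asyncio
import websockets  # pip install websockets

async def accepts_unauthenticated(host: str, port: int = 18789) -> bool:
    uri = f"ws://{host}:{port}"
    try:
        # If the handshake completes without any token or credential,
        # anyone who can reach this port can reach the agent.
        async with websockets.connect(uri, open_timeout=5):
            return True
    except Exception:
        # Refused, timed out, or handshake rejected -- treated as "not open".
        return False

if __name__ == "__main__":
    exposed = asyncio.run(accepts_unauthenticated("127.0.0.1"))
    print("Unauthenticated connection accepted!" if exposed
          else "Handshake rejected or host unreachable.")
```

If the handshake succeeds from anywhere outside your own machine, the instance is part of the attack surface described above, regardless of whether CVE-2026-25253 has been patched.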

341 malicious ClawHub skills: The "ClawHavoc" campaign

CrowdStrike's threat intelligence team discovered a coordinated supply chain attack they dubbed "ClawHavoc" (Source: CrowdStrike Falcon OverWatch, January 2026). Attackers uploaded 341 malicious skills to ClawHub -- OpenClaw's community skill marketplace -- disguised as legitimate productivity tools.

The skills appeared to work as advertised on the surface. A weather plugin showed weather. A note-taking skill saved notes. But in the background, they were injecting prompts, harvesting the API keys stored in config.yaml, and exfiltrating data through the agent's own messaging channels.

Cleartext credential storage

OpenClaw stores API keys in plaintext in its config.yaml file. There is no encryption, no OS keychain integration, and no credential manager support. Anyone with read access to the filesystem -- or any malicious skill -- can extract every API key the user has configured.
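Because the risk here is plaintext secrets sitting in a readable file, a quick local audit shows how exposed you are. The sketch below is a hedged example: the config path (~/.openclaw/config.yaml) and the key-name pattern are assumptions for illustration, not confirmed OpenClaw conventions -- point it at wherever your install actually keeps its config.

```python
# Minimal sketch: audit an OpenClaw-style config.yaml for plaintext secrets
# and overly permissive file modes. Path and key names are assumptions.
import re
import stat
from pathlib import Path

CONFIG_PATH = Path.home() / ".openclaw" / "config.yaml"   # assumed location
SECRET_PATTERN = re.compile(r"(api[_-]?key|token|secret|password)\s*:\s*\S+",
                            re.IGNORECASE)

def audit_config(path: Path = CONFIG_PATH) -> None:
    if not path.exists():
        print(f"No config found at {path}")
        return

    # Flag the file if group/other can read it.
    mode = stat.S_IMODE(path.stat().st_mode)
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        print(f"WARNING: {path} is readable by group/others (mode {oct(mode)})")

    hits = [line for line in path.read_text().splitlines()
            if SECRET_PATTERN.search(line)]
    if hits:
        print(f"Found {len(hits)} line(s) that look like plaintext credentials:")
        for line in hits:
            # Print only the key name, never the value.
            print("  -", line.split(":", 1)[0].strip())

if __name__ == "__main__":
    audit_config()
```

A scan like this doesn't fix the underlying design problem -- only OS keychain or credential-manager integration would -- but it tells you which keys to rotate if the machine or a skill is ever compromised.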

No enterprise security features

As of February 2026, OpenClaw lacks the security infrastructure that enterprises require:

  - no role-based access control (RBAC)
  - no single sign-on (SSO)
  - no audit logging
  - no DLP integration
  - no compliance certifications

What Gary Marcus Says

Gary Marcus -- NYU professor emeritus, AI researcher, and one of the most prominent AI critics -- called OpenClaw "a disaster waiting to happen" in his Substack newsletter (Source: Gary Marcus Substack, February 2026).

His core argument: OpenClaw is "recklessly optimistic about user competence." The project assumes users will configure security settings correctly, vet skills before installing them, keep the software updated, and understand the implications of giving an AI agent filesystem access with messaging capabilities.

"The gap between what OpenClaw assumes its users can handle and what most users actually understand is a disaster waiting to happen." -- Gary Marcus

Marcus has been a vocal critic of the broader "ship fast, fix later" ethos in AI development, and he frames OpenClaw as a case study in what goes wrong when powerful tools are distributed without adequate safety rails.

What XDA Developers Says

XDA Developers -- one of the largest technical publications for consumer technology -- published an article titled "Please stop using OpenClaw" that detailed the risks for average users (Source: XDA Developers, February 2026).

Their concern was less about enterprise security and more about the everyday user who installs OpenClaw after seeing a viral demo on social media. The article argued that most users don't understand that they are:

  - giving an AI agent full read access to their filesystem, documents, and credentials
  - connecting it to personal messaging accounts like WhatsApp and Telegram
  - installing community-built skills that nobody has vetted
  - potentially exposing an unauthenticated websocket endpoint to the open internet

The XDA piece resonated because it was written for the mainstream tech audience -- not security professionals -- and translated the technical risks into plain language.

What Nature Found

Nature -- the world's most-cited scientific journal -- published research in February 2026 where scientists "listened in" on OpenClaw chatbots to study AI agent behavior at scale (Source: Nature, February 2026).

The researchers observed OpenClaw agents in controlled environments and documented a range of concerning behavior patterns.

The Nature paper framed this as a broader concern about agentic AI systems, but used OpenClaw as the primary case study because of its scale (157,000+ GitHub stars, millions of active instances) and its unique combination of persistent memory and real-world action capability.

What Trend Micro and Others Found

Trend Micro published "Viral AI, Invisible Risks" -- a full analysis of what OpenClaw reveals about the security challenges of agentic AI assistants (Source: Trend Micro Research, February 2026). Their key finding: the attack surface of an AI agent is fundamentally different from traditional software because the agent's behavior is shaped by natural language inputs that are impossible to fully validate.

Northeastern University researchers called OpenClaw a "privacy nightmare" in a study published February 10, 2026 (Source: Northeastern University, February 2026). They demonstrated how OpenClaw's persistent memory, combined with its access to messaging platforms, creates a detailed behavioral profile of the user that is stored in plaintext and accessible to any skill.

Bitsight conducted research on the exposed OpenClaw instances and their associated security risks, finding that many exposed instances were running on corporate networks behind VPNs that had port-forwarding rules misconfigured (Source: Bitsight, January 2026).

Gen Digital, the parent company of Norton and Avast, published a safety guide specifically for OpenClaw users -- a notable move from a consumer security company that typically focuses on malware and phishing (Source: Gen Digital, February 2026).

The Shadow AI Problem

Perhaps the most alarming statistic: 22% of employees at mid-to-large companies are now using open-source AI agents as shadow IT (Source: VentureBeat, January 2026). That means roughly one in five knowledge workers is running AI tools -- including OpenClaw -- on corporate devices or networks without IT department approval or oversight.

Shadow AI is the enterprise version of "it works on my machine." Employees discover OpenClaw through social media, install it to boost their productivity, and never think to check with IT. The agent then sits on a machine with access to corporate email, internal documents, and customer data -- connected to personal messaging accounts that bypass every corporate security control.

This is why the bans are happening. It's not that OpenClaw is inherently malicious. It's that the combination of:

  - viral, social-media-driven adoption
  - an install any employee can complete without IT involvement
  - access to whatever data the employee's account can reach
  - outbound messaging channels that bypass corporate security controls

...creates a tool that spreads through organizations faster than security teams can assess it.

What This Means for You

If you're an individual user running OpenClaw at home for personal projects, the risk calculus is different from an enterprise deployment. But you should still understand what you're running.

If you're a personal user

  - Don't expose the websocket port to the public internet, and enable authentication rather than running the default unauthenticated configuration.
  - Treat ClawHub skills as untrusted code: install only skills you have reviewed or have a specific reason to trust.
  - Remember that every API key in config.yaml is stored in plaintext; don't configure credentials you can't afford to leak.
  - Keep the software updated, especially after disclosures like CVE-2026-25253.

If you're an enterprise user or IT administrator

  - Assume OpenClaw is already on your network: 22% of employees run open-source AI agents as shadow IT.
  - Inventory installs through endpoint management and sweep internal ranges for exposed instances (see the sketch after this list).
  - Decide on a formal policy -- a ban, a sanctioned pilot, or a monitored exception -- rather than tolerating silent use.
  - If you need agent capabilities, evaluate the alternatives with enterprise controls listed below.
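For IT teams who suspect shadow installs, a basic port sweep of ranges you administer is a low-effort first pass. The sketch below rests on assumptions: the port (18789) and subnet are placeholders, an open TCP port does not prove OpenClaw is behind it, and you should only scan networks you are authorized to scan.

```python
# Minimal sketch for IT teams: sweep an internal subnet for hosts listening
# on the (assumed) default OpenClaw WebSocket port. This only flags an open
# TCP port -- it does not fingerprint the software or attempt any exploit.
import socket
from concurrent.futures import ThreadPoolExecutor
from ipaddress import ip_network

ASSUMED_PORT = 18789          # placeholder -- replace with your inventory's port
SUBNET = "10.0.0.0/24"        # placeholder -- only scan ranges you are authorized to scan

def port_open(host: str, port: int = ASSUMED_PORT, timeout: float = 0.5) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

def sweep(subnet: str = SUBNET) -> list[str]:
    hosts = [str(h) for h in ip_network(subnet).hosts()]
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(port_open, hosts)
    return [h for h, is_open in zip(hosts, results) if is_open]

if __name__ == "__main__":
    for host in sweep():
        print(f"{host}: port {ASSUMED_PORT} open -- investigate this endpoint")
```

Any hit is a starting point for a conversation with the device's owner, not proof of compromise; pair this with endpoint-management inventory before taking action.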

For a deep dive on securing your installation, read our Is OpenClaw Safe? analysis.

Safer Alternatives

If you need AI agent capabilities but can't accept OpenClaw's risk profile, several alternatives offer better security postures.

Alternative | Key Advantage | Trade-off
--- | --- | ---
Claude Code | Anthropic-backed, sandboxed execution, permission system | Terminal-based, not messaging-first
Microsoft Copilot | Enterprise SSO, compliance certifications, DLP integration | Closed ecosystem, Microsoft lock-in
Google Gemini (Workspace) | Enterprise security, audit logging, admin controls | Google ecosystem only
Custom agents (LangChain/CrewAI) | Full control over security architecture | Requires significant development effort

None of these replicate OpenClaw's messaging-first, multi-channel experience exactly. That's part of why OpenClaw is popular despite its risks -- nothing else does what it does. But if security is your priority, the alternatives are worth evaluating.

Full comparison in our Best OpenClaw Alternatives guide.

The Bigger Picture

OpenClaw is the canary in the coal mine for agentic AI security. The same risks that make companies ban OpenClaw -- data access, untrusted extensions, external communication, persistent memory -- will apply to every AI agent platform as the technology matures.

The question is not whether AI agents should have these capabilities. They need them to be useful. The question is whether the security infrastructure exists to make them safe. As of February 2026, for OpenClaw, the honest answer is: not yet.

Peter Steinberger and the OpenClaw community are working on it. The VirusTotal partnership, sandboxing improvements, and verified publisher system are steps in the right direction. But enterprise-grade security features like RBAC, SSO, and compliance certifications are still months away on the roadmap.

Until then, the bans will continue -- and they're not unreasonable.


Read our detailed breakdown of every known vulnerability, mitigation, and security best practice.

Read: Is OpenClaw Safe? →

Frequently Asked Questions

Why are companies banning OpenClaw?

Companies are banning OpenClaw because it combines access to private corporate data, exposure to untrusted community-built skills, and the ability to send external communications -- what Palo Alto Networks calls a "lethal trifecta." As of February 2026, major companies including Naver, Kakao, and Karrot have prohibited employee use. OpenClaw was created by Peter Steinberger and was formerly known as Clawdbot, then Moltbot.

What is the "lethal trifecta" vulnerability in OpenClaw?

Palo Alto Networks identified three risks that combine into what they call a "lethal trifecta": (1) OpenClaw has access to private data on the host machine, (2) it is exposed to untrusted content through community skills, and (3) it can perform external communications via messaging platforms while retaining persistent memory. This means attackers can trick it into executing malicious commands or leaking sensitive data.

What is CVE-2026-25253 in OpenClaw?

CVE-2026-25253 is a critical unauthenticated websocket vulnerability in OpenClaw with a CVSS score of 8.8 out of 10. It was discovered in January 2026 and allows attackers to connect to exposed OpenClaw instances without authentication and execute arbitrary commands. Cisco Talos found over 135,000 exposed instances during a January 2026 scan (Source: NVD/MITRE, Cisco Talos).

Which companies have banned OpenClaw?

As of February 2026, Naver (South Korea's largest search engine), Kakao, and Karrot have banned employee use of OpenClaw on corporate devices. A Google Cloud VP has also publicly warned against deploying OpenClaw with internal infrastructure access. Multiple additional companies have implemented silent bans through endpoint management policies.

Is OpenClaw safe for business use?

As of February 2026, OpenClaw lacks enterprise-grade security features including RBAC, SSO, audit logging, and compliance certifications. API keys are stored in cleartext in config.yaml. CrowdStrike discovered 341 malicious skills in the "ClawHavoc" campaign on ClawHub (Source: CrowdStrike). For business use, alternatives with enterprise security controls are generally recommended.