NVIDIA Just Made AI Agents Safe to Run — NemoClaw Explained
NVIDIA just open sourced a security runtime for AI agents. NemoClaw gives your always-on AI assistant a sandboxed environment where it can read files, run commands, and access the internet — without ever touching data it shouldn't. Here's what it does and why it matters if you're running AI in your business.
An AI agent that can write code, run shell commands, and browse the web is powerful. But hand it access to your production servers, customer database, and financial records without guardrails? That's a liability. NemoClaw is NVIDIA's answer: a sandboxed runtime that lets AI agents do their work without risking your data.
The AI Agent Security Problem
AI agents are no longer experimental. Businesses are running them in production — writing code, managing servers, handling customer data, automating workflows. Claude Code, Codex, and similar tools can autonomously read your files, execute shell commands, make API calls, and browse the web.
That power is exactly the problem.
When you give an AI agent access to your system, you're handing it the same permissions a human employee would have. But unlike a human, an AI agent can execute thousands of operations per minute. A misconfigured prompt, a hallucinated command, or a prompt injection attack could mean your agent accidentally deletes a production database, exfiltrates customer records, or sends sensitive data to an external API.
This isn't theoretical. Businesses have already experienced AI agents running unintended commands, accessing files outside their scope, and making network requests they shouldn't have. The more autonomous the agent, the higher the risk.
Until now, the security options were limited. You could restrict your agent to a narrow set of pre-approved actions (which defeats the purpose of autonomy). You could run it in a basic Docker container (which doesn't prevent network exfiltration or unauthorized file access within the container). Or you could just hope for the best.
None of those options work for a business that handles customer data, financial records, or proprietary intellectual property.
NVIDIA just offered a fourth option.
What NemoClaw Actually Is
NemoClaw is an open source project from NVIDIA, released on March 16, 2026, currently in alpha. It's part of the NVIDIA Agent Toolkit — the same ecosystem that includes NeMo for model training and Nemotron for inference.
In plain terms: NemoClaw is a secure runtime for AI agents. It creates sandboxed environments where an AI agent can operate with full autonomy — reading files, running commands, writing code — but within strict, declarative security boundaries that you define.
Think of it like giving your AI assistant its own office. It has a desk, a computer, and access to the files you've placed on that desk. But it can't wander into the server room. It can't access other people's desks. It can't call external phone numbers you haven't approved. The walls are enforced at the operating system level, not just by asking the AI nicely to stay in bounds.
That last part is critical. Most AI safety measures work by instructing the model not to do dangerous things. NemoClaw works by making dangerous things technically impossible. The agent physically cannot access files, networks, or system calls outside its sandbox — regardless of what it's told to do.
How the Sandbox Works (Plain English)
NemoClaw uses three layers of Linux kernel security to lock down each AI agent session:
Layer 1: Landlock — File Access Control
Landlock is a Linux security module that restricts which files and directories a process can access. NemoClaw uses it to define exactly which parts of the filesystem the AI agent can see. If the agent tries to read /etc/passwd, your home directory, or any file outside its designated workspace, the kernel denies the request with a permission error. From the agent's perspective, those paths are simply off-limits.
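The effect of a Landlock-style filesystem policy boils down to path containment: is the requested path inside one of the allowed directory trees? A minimal sketch of that decision logic (the function name and policy values here are hypothetical, for illustration only; the real enforcement happens inside the kernel via Landlock rulesets, not in user-space checks like this):

```python
from pathlib import PurePosixPath

def is_path_allowed(path: str, allowed_roots: list[str]) -> bool:
    """Return True if `path` falls under one of the allowed root directories.

    Illustrative only: Landlock enforces this at the kernel level,
    so even a compromised process cannot bypass the check.
    """
    target = PurePosixPath(path)
    return any(target.is_relative_to(root) for root in allowed_roots)

# A policy mirroring the sandbox described above (hypothetical values):
policy = ["/workspace", "/tmp"]

print(is_path_allowed("/workspace/src/app.py", policy))  # inside the sandbox
print(is_path_allowed("/etc/passwd", policy))            # outside: denied
```

Note that the containment check works on path components, so a sibling directory like /workspace-other would not sneak past an allow rule for /workspace.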
Layer 2: Seccomp — System Call Filtering
Every action a program takes on Linux goes through system calls — reading files, opening network connections, spawning processes. Seccomp lets NemoClaw define a whitelist of allowed system calls. If the agent tries to execute a system call that isn't on the list — say, mounting a new filesystem or loading a kernel module — the kernel blocks it on the spot, either returning an error or terminating the process, depending on the filter's configured action. This prevents entire categories of attacks that depend on low-level system access.
Layer 3: Network Isolation
NemoClaw controls exactly which network endpoints the agent can reach. You define allowed domains and IP ranges in a YAML policy file. The agent can reach your approved APIs and services. Everything else is blocked. This prevents data exfiltration — even if the agent is somehow tricked into sending your data to an external server, the network request gets dropped before it leaves the sandbox.
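The network policy amounts to matching each destination host against an allow list and dropping everything else. Here is a minimal sketch of that logic; the function name is hypothetical, and whether subdomains of an allowed domain are also permitted is an assumption made for illustration:

```python
# Allow list mirroring the example policy (illustrative values):
ALLOWED_DOMAINS = {"api.openai.com", "github.com"}

def is_endpoint_allowed(host: str) -> bool:
    """Allow a host if it matches an approved domain exactly or is a
    subdomain of one; every other destination falls under the "*" deny."""
    return any(
        host == domain or host.endswith("." + domain)
        for domain in ALLOWED_DOMAINS
    )

print(is_endpoint_allowed("api.openai.com"))    # approved API
print(is_endpoint_allowed("raw.github.com"))    # subdomain of an allowed domain
print(is_endpoint_allowed("attacker.example"))  # anything else: dropped
```

As with the other layers, the point is that the check sits below the agent: a prompt-injected request to an unapproved host never leaves the sandbox.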
All three layers work together inside Docker containers. Each AI agent session gets its own container with its own security policy. NemoClaw supports up to 32 concurrent sessions, each independently sandboxed.
The security policies are defined in declarative YAML files. Here's a simplified example of what a policy looks like:
```yaml
security:
  filesystem:
    allow:
      - /workspace        # Agent's working directory
      - /tmp              # Temporary files
    deny:
      - /etc              # System configuration
      - /home             # User data
  network:
    allow:
      - api.openai.com    # LLM inference
      - github.com        # Code repositories
    deny:
      - "*"               # Block everything else
  syscalls:
    allow:
      - read, write, open, close
      - socket, connect, sendto, recvfrom
    deny:
      - mount, umount, ptrace, reboot
```
You write the policy once, and it applies to every session. No ongoing configuration. No per-request approval. The agent operates freely within the boundaries — and the boundaries are enforced by the Linux kernel, not by the AI model's judgment.
OpenShell: The Runtime Behind NemoClaw
NemoClaw installs the NVIDIA OpenShell runtime — a broader execution environment that isn't limited to NVIDIA's own tools. OpenShell supports multiple AI coding agents:
- Claude Code — Anthropic's autonomous coding agent
- Codex — OpenAI's code execution environment
- OpenCode — Open source coding assistant
- Ollama — Local model inference
This matters because it means NemoClaw isn't a walled garden. You can run whatever AI agent you prefer inside the secured runtime. The security layer sits between the agent and the operating system — it doesn't care which model is generating the commands.
For inference, NemoClaw can route through NVIDIA's Nemotron models, but it's not required. You can point it at any model provider — OpenAI, Anthropic, a local Ollama instance — whatever fits your workflow and budget.
Why This Matters for Your Business
If you're a business owner using AI agents — or considering it — NemoClaw addresses the single biggest blocker to enterprise adoption: security guarantees.
The Trust Problem
Right now, most businesses run AI agents on developer laptops or in loosely configured cloud environments. The agent has access to whatever the developer has access to. There's no formal boundary between "files the agent should see" and "files the agent definitely shouldn't see."
For a solo founder, this might be acceptable. For a business with employees, customers, and compliance requirements, it's not.
NemoClaw lets you define exactly what each agent can access, enforce those boundaries at the operating system level, and audit the agent's actions within those boundaries. That's the difference between "we use AI" and "we use AI responsibly" — and it's the difference regulators, clients, and partners increasingly care about.
Concrete Scenarios
Scenario 1: Customer Data Protection
Your AI agent helps with code development. Your production database contains customer PII. With NemoClaw, you sandbox the agent so it can access the codebase but physically cannot reach the production database, even if a prompt injection tries to make it query customer records. The network policy blocks the database connection. The filesystem policy hides the credential files.
Scenario 2: Preventing Data Exfiltration
Your agent processes proprietary business documents. Without network controls, a sophisticated prompt injection could instruct the agent to send document contents to an external server. With NemoClaw's network isolation, the agent can only reach domains you've explicitly whitelisted. Exfiltration attempts fail silently at the network level.
Scenario 3: Multi-Team Agent Isolation
Your engineering team and marketing team both use AI agents. With NemoClaw's per-session sandboxing, the engineering agent can access code repositories but not marketing assets. The marketing agent can access content files but not source code. Each team's agent operates in its own security boundary — up to 32 concurrent sessions, each independently configured.
How to Get Started
NemoClaw is designed for simplicity. The entire installation is a single command:
curl -fsSL https://nvidia.com/nemoclaw.sh | bash
This installs the OpenShell runtime with NemoClaw's security layer. From there, you configure your security policies in YAML and launch agent sessions.
System Requirements
- Linux (recommended): Ubuntu 22.04 or later. Native kernel security features (Landlock, seccomp) provide the strongest guarantees.
- macOS: Supported via Colima or Docker Desktop. The sandbox runs inside a Linux VM, so you get the same security properties — but with slightly more overhead.
- Docker: Required on all platforms. NemoClaw uses Docker containers as the base isolation layer, with Landlock and seccomp providing additional enforcement inside each container.
What to Expect
NemoClaw is in alpha. That means the core security primitives work, but the developer experience is still being refined. Expect rough edges in documentation, limited GUI tooling, and occasional breaking changes between releases. This is appropriate for technical teams evaluating the technology, not for production deployment at scale — yet.
That said, NVIDIA has a track record of moving fast from alpha to production-grade tooling. If NemoClaw follows the trajectory of other NVIDIA Agent Toolkit projects, expect a beta with enterprise features (audit logging, policy templates, centralized management) within a few months.
The Honest Limitations
NemoClaw is a significant step forward, but it's not a complete solution. Here's what it doesn't do:
It doesn't prevent bad AI decisions. NemoClaw controls what the agent can access, not what it chooses to do within those boundaries. If the agent has permission to modify files in its workspace and it writes buggy code, NemoClaw won't stop it. Security boundaries and quality control are separate concerns.
It's Linux-first. macOS support exists but runs through a VM layer. Windows isn't supported. If your team runs Windows, NemoClaw isn't an option yet.
It requires Docker. If your environment can't run Docker — some corporate IT policies restrict it — NemoClaw won't work. The container infrastructure is fundamental to how the sandboxing operates.
Alpha-stage rough edges. Documentation is sparse. Error messages aren't always helpful. Configuration requires understanding Linux security concepts. This will improve, but right now it's an early-adopter tool.
No GUI management. Everything is YAML files and command-line tools. For organizations that need a dashboard to manage security policies across multiple teams and agents, that tooling doesn't exist yet.
Frequently Asked Questions
Q: What is NemoClaw and who made it?
NemoClaw is an open source project by NVIDIA, part of the NVIDIA Agent Toolkit. It provides a secure sandboxed runtime for AI agents like Claude Code, Codex, and OpenCode. Released on March 16, 2026 in alpha, it uses the OpenShell runtime with Landlock, seccomp, and network isolation to keep AI agents contained — enforcing security at the operating system level rather than relying on prompt-based safety.
Q: Is NemoClaw free to use?
Yes. NemoClaw is fully open source. The runtime, security policies, and sandbox infrastructure are all free. You bring your own inference — either NVIDIA's Nemotron models or any other LLM provider. The only costs are compute resources and whatever model API you choose to use.
Q: What operating systems does NemoClaw support?
NemoClaw is Linux-first, officially supporting Ubuntu 22.04 and later. macOS users can run it via Colima or Docker Desktop — the sandbox operates inside a Linux VM with the same security properties. Windows is not currently supported. Native Linux provides the strongest security guarantees since Landlock and seccomp are kernel-level features.
Q: Why do businesses need sandboxed AI agents?
AI agents that can read files, run commands, and access the internet are powerful — but without guardrails, they can accidentally leak sensitive data, modify the wrong files, or make unauthorized network requests. Sandboxing means the agent operates in a controlled environment where security policies define exactly what it can and cannot do. For businesses handling customer data, financial records, or proprietary code, this level of control is a prerequisite for responsible deployment.
Free: The AI Growth Breakdown
See how one business went from 0 to 100+ daily visitors in 14 days using AI agents. The exact tools and results.
Get the Free Breakdown →