YOLO Mode For AI Agents Without the YOLO: Running Claude Code with Kernel Isolation
Every Claude Code user knows the ritual. You give the agent a task, it needs to run a command, and it asks permission. You read it, decide it looks fine, and click Allow. It asks again. You skim this time. Allow. Again. You stop reading. Allow, allow, allow. Eventually you flip on YOLO mode and let it run unrestricted - not because you decided the risk was acceptable, but because you got tired of being a babysitter.
Here's the thing: the trade is real on both sides. Agents are genuinely more powerful when you get out of their way. Claude Code in YOLO mode will install dependencies, scaffold projects, run tests, fix what broke, and iterate, all without you context-switching back to approve each step. The productivity gain is real. So is the anxiety.
The anxiety comes from knowing what's underneath. On shared infrastructure, unrestricted means uncontained. Any mistake — or an attacker exploiting a mistake — doesn't stay inside the pod. It cascades outward to everything else on that machine. You fundamentally cannot apply least privilege to AI agents because the valid behavior space is unbounded. An agent might legitimately install packages, write to arbitrary paths, make network calls you didn't predict, and run code it wrote thirty seconds ago. You can't enumerate what's valid, so you can't write a policy that only allows it.
The answer isn't restricting the agent. It's containing the blast radius. Run it in YOLO mode inside an isolated kernel sandbox where "YOLO" only affects itself.
Here's how to do it with Edera and what changes when you do.
What AI Agents Can See in Shared-Kernel Kubernetes
Deploy Claude Code in a standard shared-kernel Kubernetes container and it shares the host kernel with every other pod on the node. Here's what that looks like from inside:
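For reference, the baseline needs nothing special. A minimal sketch, assuming any long-running image with a shell (the image name below is a placeholder, reused from the manifest later in this post):

# Plain pod on the default, shared-kernel runtime. Image name is a placeholder.
kubectl run claude-code-baseline \
  --image=ghcr.io/your-org/claude-code:latest \
  --command -- tail -f /dev/null

# Shell in and run the inspection commands that follow.
kubectl exec -it claude-code-baseline -- sh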
The pod sees the entire node's memory:
$ head -3 /proc/meminfo
MemTotal: 12174208 kB
MemFree: 1874568 kB
MemAvailable: 10359028 kB
12 GB. That's not the pod's allocation — that's the node. Your agent now knows the machine's capacity, which tells it (or an attacker) what else is running alongside it.
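The pod's real limit lives in the cgroup filesystem, not in /proc/meminfo. A quick way to see the mismatch from inside the pod, assuming cgroup v2 is mounted at the usual path:

$ cat /sys/fs/cgroup/memory.max   # the pod's memory limit ("max" if none was set)
$ grep MemTotal /proc/meminfo     # the node's total memory, visible to every pod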
The kernel version is the host's kernel:
$ cat /proc/version
Linux version 6.15.11 ... #1 SMP PREEMPT_DYNAMIC Tue Jan 20 16:44:51 UTC 2026
Same kernel as every other workload. A kernel CVE exploited from any container on this node affects all of them, including the one holding your Claude OAuth tokens.
Kernel-Isolated AI Agent Sandboxing with Edera
Deploy the same container with runtimeClassName: edera and an Edera zone kernel annotation. The agent gets its own Linux kernel in an isolated microVM. Same container image, same tools, same YOLO mode - different blast radius.
Now run the same commands from inside the zone.
The zone boots its own kernel with its own device tree: no host PCI devices, no ENA adapters, no backend drivers cluttering its kernel ring buffer, because none of that exists in the isolated kernel's world.
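You can check the device view yourself with a quick sysfs probe. This assumes the standard layout; inside the zone the directory may be empty or absent entirely:

$ ls /sys/bus/pci/devices/
# Shared-kernel pod: the node's real NICs and disks. Edera zone: the host hardware simply isn't there.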
Memory is scoped to the pod:
$ head -3 /proc/meminfo
MemTotal: 164588 kB
MemFree: 71772 kB
MemAvailable: 45148 kB
164 MB: the pod's actual allocation. The node's 12 GB is invisible.
The kernel is the zone's own:
$ cat /proc/version
Linux version 6.18.6 ... #1 SMP PREEMPT_DYNAMIC Wed Aug 27 20:14:16 UTC 2025
A different kernel version from the host. A kernel exploit here affects this zone and nothing else. kubectl delete pod and it's gone.
How Kernel Isolation Protects AI Agents from Neighbor Pods
So far we've talked about what the agent can do outward. But isolation also protects inward: what a compromised neighbor can do to the agent.
Your Claude Code session holds OAuth tokens, API keys, and whatever code you're working on. On a shared kernel, a neighboring pod that gets compromised can reach toward your workload. We deployed a pod with hostPID: true (a common misconfiguration) on the same node and ran ps aux:
$ ps aux | head -15
USER PID ... COMMAND
root 1 ... /usr/lib/systemd/systemd --switched-root --system ...
root 2863 ... /usr/bin/kubelet --image-credential-provider-config=...
--kubeconfig=/var/lib/kubelet/kubeconfig ...
root 35192 ... /bin/aws-ebs-csi-driver node --endpoint=unix:/csi/csi.sock ...
nobody 37245 ... /kube-state-metrics --port=8080 --telemetry-port=8081 ...
root 1695958 ... /opt/edera/qemu-xen/bin/qemu-system-i386 -xen-domid 7 ...
Every process on the host. Kubelet with its config paths and credential-provider settings. The EBS CSI driver. The metrics stack. From here, an attacker can enumerate the node, find interesting targets, and pivot. This is the shared-fate problem: one compromise becomes a cluster-wide incident, because every workload shares the same kernel, and sharing the host's process namespace is only one config mistake away.
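If you want to reproduce that view, the neighbor takes exactly one field to misconfigure. A minimal sketch, with a placeholder pod name and a stock busybox image:

# A deliberately misconfigured neighbor: hostPID exposes the node's process table.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nosy-neighbor
spec:
  hostPID: true                      # the one misconfigured field
  containers:
  - name: shell
    image: busybox:1.36
    command: ["sleep", "infinity"]
EOF

# Every process on the node, listed from inside an otherwise unprivileged container.
kubectl exec nosy-neighbor -- ps | head -15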
The Edera-isolated Claude pod doesn't appear in that process list. Its processes run inside a dedicated microVM with its own kernel. There's no /proc entry on the host to inspect, no shared memory to scrape, no kernel data structure to walk. The compromised neighbor doesn't even know it's there.
Deploy Claude Code in an Edera Sandbox
The full Dockerfile and pod manifest are in this gist. The key bits:
The pod spec adds two lines to a standard deployment: a runtime class and a kernel annotation:
apiVersion: v1
kind: Pod
metadata:
  name: claude-code
  annotations:
    dev.edera/kernel: ghcr.io/edera-dev/zone-kernel:6.15
spec:
  runtimeClassName: edera
  containers:
  - name: claude
    image: ghcr.io/your-org/claude-code:latest
    command: ["tail", "-f", "/dev/null"]
    ...
Deploy, wait, and verify isolation:
kubectl apply -f claude-pod.yaml
kubectl wait --for=condition=Ready pod/claude-code --timeout=120s
# Confirm you're in an Edera zone
kubectl exec claude-code -c claude -- cat /proc/version
# → Linux version 6.18.6 ... (not the host's 6.15.11)
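If the pod never goes Ready, the usual culprit is a missing runtime class. Assuming Edera was installed under the name the manifest references, two quick checks:

kubectl get runtimeclass edera
kubectl describe pod claude-code   # the Events section will show scheduling or runtime errors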
Then connect and start working:
kubectl exec -it claude-code -c claude -- claude
Authenticate with your Claude account, flip on YOLO mode, and let the agent go to town. It has everything it needs: Node, Python, Go, Rust, git, ripgrep, the works. What it doesn't have is access to your host kernel, your node's memory map, or your neighboring workloads.
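A couple of optional spot checks from outside confirm those claims. Paths assume a standard Linux userland inside the image:

# Pod-scoped memory, not the node's 12 GB
kubectl exec claude-code -c claude -- head -3 /proc/meminfo
# No host PCI devices in the zone (the directory may not exist at all)
kubectl exec claude-code -c claude -- sh -c 'ls /sys/bus/pci/devices/ 2>/dev/null || echo "no PCI bus exposed"'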
Why AI Agents Require Kernel-Level Sandboxing
The security problem with AI agents isn't that they're malicious. It's that they're non-deterministic and run on shared infrastructure that assumes determinism. Traditional security says enumerate valid behavior and block everything else. You can't do that with an agent that might legitimately install packages, write to arbitrary paths, and make network calls you didn't predict.
This is a resilience problem. You don't solve it by adding more policy layers to a shared kernel. You solve it by designing for failure: accepting that something will eventually go wrong and making sure that when it does, the failure stays contained. Remote code execution (RCE) is the highest-impact failure mode in shared compute. A system that withstands RCE with root privileges inside a pod, without affecting the host or its neighbors, is a system you can actually trust with an unrestricted AI agent.
Today, a compromised Kubernetes pod means you're tearing down the cluster. Every neighbor is suspect, every secret potentially exfiltrated, every workload tainted. With kernel isolation, the response is kubectl delete pod. The other workloads never even notice.
Efforts to restrict AI agents with policy and observability alone are doomed to fail, because velocity beats security every time. Contain the failure instead. Give the agent a kernel it can't escape, let it YOLO within a trusted boundary, and get back to the work it was supposed to help you with in the first place.
That's what isolation gets you: more capability, with less consequence.
FAQ
Why are unrestricted AI agents risky on shared Kubernetes infrastructure?
Because containers share the host kernel, an unrestricted AI agent can expose node-level information and amplify the impact of kernel vulnerabilities across all workloads on the node.
What does it mean for an AI agent to run with its own kernel?
It means the agent runs inside a dedicated kernel with isolated memory, devices, and process space, completely separate from the host and other workloads.
How does kernel isolation limit blast radius for AI workloads?
Kernel isolation ensures that failures or compromises are contained to a single pod. The agent cannot affect the host or neighboring workloads by design.
Why doesn’t least-privilege security work well for AI agents?
AI agents have unbounded, non-deterministic behavior that can’t be fully enumerated in advance. Containment is more reliable than restrictive policy enforcement.
How does incident response change with kernel-isolated AI agents?
A compromised agent can be safely remediated by deleting the pod. There is no shared kernel state to investigate or cluster-wide exposure to assume.
