Your Laptop Is Not a Build System

So why is your AI agent running there?

Five years ago, the software industry reached consensus: building production artifacts on developer laptops is a supply chain risk. We codified that consensus into Supply-chain Levels for Software Artifacts (SLSA) and moved builds into hermetic, isolated pipelines where the environment could be verified and the outputs trusted. This was one of those rare moments where an entire industry looked at something it had been doing for decades and collectively wondered what we were thinking.

Then we deployed AI agents, autonomous systems that write, test, and modify production code, and put them right back on developer laptops.

Think about what lives on a developer workstation. SSH keys. AWS credentials. GitHub personal access tokens. Kubernetes kubeconfigs with cluster-admin to at least one environment. And that's just the professional surface. That same machine holds passport scans, medical records, tax documents, browser sessions with active authentication tokens to every service you use. Now place an autonomous, non-deterministic agent in that environment with full access to your terminal, your filesystem, and your network. The developer workstation was already the most over-privileged, under-monitored node in your infrastructure. AI agents just made it autonomous.

Your laptop has always been a build system, just one you'd never trust for production artifacts. We moved CI/CD off developer machines years ago because we understood the risk. But AI agents have turned your laptop into something else: a build system that produces production code without ever touching your pipeline's isolation guarantees.

Why SLSA Principles Apply to AI Agent Environments

SLSA (pronounced "salsa") emerged from a straightforward observation: if your build environment is compromised, your artifacts are compromised. It doesn't matter how rigorous your code review is or how many tests you run. If the machine that compiles your binary has been tampered with, or has simply accumulated enough uncontrolled state that you can't verify what influenced the output, the artifact is untrustworthy.

The response wasn't to put better antivirus on developer machines. It was to move builds into environments designed with specific properties: isolated and ephemeral. Created fresh for each build, destroyed after. No accumulated state, no unrelated credentials, no influence from the broader machine. The environment itself becomes part of the trust chain.

This was a response to real incidents (SolarWinds, Codecov, event-stream), attacks that exploited exactly this gap between developer environments and trusted infrastructure. The industry learned, and the patterns changed.

Agent-Assisted Development Is a Supply Chain Function

Agent-assisted development is a supply chain function, and the same principles apply.

When an AI agent writes code on your laptop, the output is shaped by its environment. What files does it have access to? What credentials can it reach? What network requests can it make? What state has accumulated on that machine from previous sessions, other projects, other agents?

These are supply chain questions.

Agents like Claude Code, GitHub Copilot, and Cursor are writing production code today, code that gets committed, reviewed, merged, and deployed. The agent's environment is part of the software supply chain in exactly the same way a build system is. And right now, that environment is a developer laptop with the keys to the kingdom.

SLSA taught us that the answer isn't to make the untrustworthy machine more trustworthy. It's to move the work to an environment that's trustworthy by design.

What a Trustworthy AI Agent Environment Looks Like

What does a trustworthy agent environment look like? The same properties SLSA demands of build systems.

Hermetic: Scoped Access Only to What the Task Requires

The agent should have access to what it needs for the current task: a single repository, scoped credentials, specific APIs. Not your entire home directory. Not six months of accumulated project history. Not your active browser sessions.
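As a minimal sketch of what scoped access could look like in a Kubernetes deployment (the image name, secret name, and labels here are hypothetical placeholders, not a prescribed setup), the agent pod mounts a single working directory and receives one repo-scoped token at runtime, and nothing else:

```yaml
# Hypothetical pod spec: the agent sees one repository checkout and one
# scoped credential, injected at runtime -- not the developer's home directory.
apiVersion: v1
kind: Pod
metadata:
  name: agent-session
  labels:
    app: agent-session
spec:
  restartPolicy: Never
  containers:
    - name: agent
      image: agent-runtime:latest     # placeholder image name
      env:
        - name: GIT_TOKEN             # repo-scoped token, not a broad PAT
          valueFrom:
            secretKeyRef:
              name: agent-repo-token  # illustrative secret name
              key: token
      volumeMounts:
        - name: repo
          mountPath: /work            # the single project repository, nothing more
  volumes:
    - name: repo
      emptyDir: {}                    # populated by a clone step; dies with the pod
```

The point of the sketch is the omissions: no host mounts, no ambient cloud credentials, no access to anything the task doesn't require.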

Isolated: Hardware-Level Blast Radius Containment

If the agent behaves unexpectedly (and with non-deterministic systems, unexpected behavior is the baseline, not the exception), the blast radius is bounded. Here's where the current model breaks in ways that aren't immediately obvious: app-based firewalls like Little Snitch operate at the application level. They see your terminal making a network request and allow it, because the terminal is trusted. They have zero visibility into whether that request is you running curl or an AI agent exfiltrating data to a prompt-injected endpoint. The trust boundary is drawn at the wrong layer entirely. In infrastructure-grade orchestration, network policy is workload-aware: you define what the agent can reach, not what the application can reach.
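In Kubernetes terms, that workload-aware boundary can be expressed as a NetworkPolicy that selects the agent pod itself rather than trusting an application. A hedged sketch, assuming the agent pod carries an `app: agent-session` label; the CIDR is a documentation placeholder, not a real endpoint:

```yaml
# Illustrative egress policy: the agent workload may reach only an explicitly
# allowed endpoint (e.g. the LLM API) over HTTPS. Everything else is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: agent-egress
spec:
  podSelector:
    matchLabels:
      app: agent-session    # applies to the agent workload, not "the terminal"
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24   # placeholder range standing in for the LLM API
      ports:
        - protocol: TCP
          port: 443
```

A real policy would also allow DNS and the package registry and git remote the task needs; the design point is that the allowlist is attached to the workload, so a prompt-injected exfiltration attempt has nowhere to go.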

Ephemeral: Every Session Starts from a Known Clean State

Every agent session starts from a known, clean state: a container image, not the archaeological dig of accumulated laptop state. No residual credentials from a previous project. No leftover state from a previous agent run. No drift. When the task is done, the environment is destroyed. This isn't an aspirational goal. This is how containers already work.
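This ephemerality maps directly onto an existing primitive. One possible sketch (image name hypothetical) is a Kubernetes Job per agent session, where `ttlSecondsAfterFinished` garbage-collects the environment, filesystem and all, shortly after the task completes:

```yaml
# One Job per agent session: a fresh image at start, automatic teardown at end.
apiVersion: batch/v1
kind: Job
metadata:
  name: agent-task
spec:
  ttlSecondsAfterFinished: 60   # environment destroyed soon after the session ends
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: agent
          image: agent-runtime:latest   # known clean state, no accumulated drift
```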

Observable: Centralized, Tamper-Resistant Audit Logging by Default

On a laptop, you have whatever the agent framework decided to log. In managed infrastructure, you have centralized, tamper-resistant audit trails by default. For incident response (and with non-deterministic agents, you will need incident response), the difference between "we can reconstruct what happened" and "we have no idea" is the difference between infrastructure-grade logging and hoping the agent wrote something useful to stdout.

Here's what makes this moment different from the SLSA reckoning: agents give us an opportunity we never had with human developers. All development has always been a supply chain function, human development included. We just never had the ability to fully instrument it. AI agents change that equation. Agents don't have privacy expectations. We can capture every tool call, every file access, every reasoning step. The audit trail we've always wanted for the software development process is suddenly possible, but only if the agent runs in infrastructure designed to capture it. A developer laptop isn't that infrastructure.

How to Run AI Agents on Isolated Infrastructure

What does this look like in practice? The agent runs in a container on managed infrastructure: your existing Kubernetes cluster, a cloud VM, even a local machine running a container runtime with real isolation. The project repository is mounted in. Credentials are scoped and injected at runtime, not inherited from your home directory. Network access is restricted to what the agent needs: the LLM API, a package registry, your git remote. When the session ends, the environment is destroyed. You interact with the agent the same way you do today (a terminal session, an IDE integration, an API call), but the trust boundary is around the agent, not around your entire machine.

"This adds friction. Agents are fast because they're local."

The agent is already remote. The LLM inference, the actual intelligence, runs in Anthropic's or OpenAI's infrastructure. What runs on your laptop is the orchestration layer: calls to local binaries, file access, credential usage. That's the dangerous part, and it consumes almost no local compute. Moving the orchestration into a managed container doesn't add meaningful latency to a workflow already bottlenecked on API round-trips. You're not moving the work farther from you. You're moving the trust boundary to the right place.

This only becomes more apparent with multi-agent workflows, where parallel tool calls saturate local compute and make the laptop unusable by humans anyway. The argument for infrastructure isn't just about security; it's that the workload is outgrowing the machine.

"I'll sandbox the agent on my laptop."

Better than nothing, and tools that constrain agent processes locally are a genuine step forward. But you're still defending the wrong perimeter. The developer workstation still contains your credentials and your personal data. You still have no centralized audit logging. You still accumulate state between sessions. Improving the locks is good. But the SLSA lesson wasn't "put better locks on developer machines"; it was to move the work to infrastructure designed for this. The answer isn't a better lock on a door that shouldn't be there. It's an architecture that removes what's behind it.

The Infrastructure Already Exists

Container orchestration already provides the primitives: workload scheduling, secrets management, network policy, resource limits, audit logging, and ephemeral execution. What's been missing is the isolation layer.

Standard container runtimes share a host kernel. For build systems, this was a tradeoff the industry accepted, though the same argument applies there too. For autonomous agents that make arbitrary tool calls, write to filesystems, and initiate network requests based on non-deterministic reasoning, shared-kernel isolation isn't enough. You need hardware-level boundaries that contain the workload even if the kernel is compromised.

This is the problem we're solving at Edera. Hardware-isolated zones, orchestrated through Kubernetes, with the ephemeral and isolated properties that agent workloads require. Combined with scoped network policy, the environment becomes hermetic: the agent can only reach what you explicitly allow. Each agent runs in its own micro-VM with its own kernel: not a shared namespace, not a cgroup boundary, but an actual hardware isolation boundary. The container API stays the same. The security model changes fundamentally.

The Environment Is the Supply Chain

Your laptop runs a hundred things that have nothing to do with the software supply chain, and every one of them can influence it. The agent shouldn't be one of them. Move it to infrastructure where it runs with scoped credentials, bounded network access, centralized logging, and a filesystem that gets destroyed when the session ends.

This pattern has been proven at the largest engineering organizations in the world for over a decade. What's new is the urgency. AI agents have made the developer workstation the most over-privileged, under-monitored node in your infrastructure. And unlike a human developer, the agent doesn't just make mistakes you can predict, it makes mistakes you can't enumerate.

We learned this lesson with build systems. We wrote it down and we called it SLSA. It's time to apply the same thinking to the environment where software is actually being written.

The agent's execution environment is the supply chain. Trust it accordingly.

FAQ

Does moving AI agents off the developer laptop add latency? 

No. The LLM inference already runs remotely on Anthropic's or OpenAI's infrastructure. Moving the orchestration layer into a managed container does not add meaningful latency to a workflow already bottlenecked on API round-trips.

What is the difference between sandboxing an AI agent on a laptop versus running it on managed infrastructure? 

Local sandboxing improves process-level containment but does not eliminate credential exposure, accumulated session state, or the absence of centralized audit logging. Managed infrastructure provides ephemeral environments, scoped credentials injected at runtime, network policy enforced at the workload level, and tamper-resistant audit trails by default.

Why does SLSA apply to AI agent environments? 

SLSA requires isolated, ephemeral build environments because a compromised build system produces untrustworthy artifacts. AI agents write production code using the same credentials and filesystem context as their host environment. The supply chain risk is structurally identical.
