
The RCE-cipe for Platform Security: Isolation Without Compromise

March 6, 2025

In the world of cloud computing, there's a growing paradox: as Kubernetes enables more flexible, efficient infrastructure, it simultaneously creates unprecedented security challenges. What happens when you want to offer a platform where customers can run any code, anywhere, without compromising your entire infrastructure?

Kubernetes is a distributed system that improves efficiency by bin-packing disparate workloads onto a single cluster. But not all workloads are created equal. Some AI workloads require access to accelerators like GPUs, while databases need scalable cloud storage. More critically, these workloads come with dramatically different security profiles.

Consider the landscape: Some workloads handle sensitive data like customer PII or payment information. Others run on legacy code with known vulnerabilities. Some even require escalated host privileges or direct access to container runtimes. The risks of mixing vulnerable and trusted workloads have long existed in Kubernetes, spawning an entire ecosystem of security tools focused on detecting potential breaches.
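
To make that contrast concrete, here is a minimal sketch of the kind of workload that demands broad host access. The pod name, image, and socket path are illustrative assumptions, not taken from any real deployment:

```yaml
# Hypothetical example: a workload that requires escalated host privileges
# and direct access to the container runtime socket, scheduled onto the
# same cluster as pods handling sensitive data.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-agent                      # illustrative name
spec:
  containers:
    - name: agent
      image: registry.example.com/legacy-agent:1.0   # placeholder image
      securityContext:
        privileged: true                  # full access to host devices and kernel interfaces
      volumeMounts:
        - name: containerd-sock
          mountPath: /run/containerd/containerd.sock
  volumes:
    - name: containerd-sock
      hostPath:
        path: /run/containerd/containerd.sock
        type: Socket
```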

But the risk goes beyond bin-packing and “multi-tenancy”. A common pattern is emerging in Kubernetes in which the bins being packed are owned by separate entities. Cloud providers are creating Platform as a Service (PaaS) experiences that let different customers deploy applications without managing the underlying infrastructure. The critical question becomes: How can providers keep their infrastructure safe when they can't fully validate the code running on it?

This model is increasingly being called Remote Code Execution as a Service (RCEaaS), and providers are building a diverse range of these services.

It's an exciting frontier for both providers and customers. Providers can monetize their platform engineering expertise, while customers get to focus solely on their applications. But for platform security teams, it represents a significant challenge.

The fundamental problem? Security tools in Kubernetes are inherently workload-specific. Traditional approaches like capability restrictions, seccomp filtering, and eBPF probes require deep, continuous knowledge of each application. A one-size-fits-all security policy risks breaking legitimate services, while a permissive approach leaves infrastructure vulnerable.
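
As a sketch of what that workload-specific knowledge looks like in practice, here is a hypothetical pod spec that drops Linux capabilities and pins a per-application seccomp profile. The names, image, and profile path are assumptions for illustration; writing and maintaining such a policy requires knowing exactly which syscalls and capabilities the application actually uses:

```yaml
# Hypothetical per-workload hardening: every field below encodes knowledge
# of this specific application, and a different app needs a different policy.
apiVersion: v1
kind: Pod
metadata:
  name: payments-api                      # illustrative name
spec:
  containers:
    - name: api
      image: registry.example.com/payments-api:2.3   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
          add: ["NET_BIND_SERVICE"]       # only because this app binds a port below 1024
        seccompProfile:
          type: Localhost
          localhostProfile: profiles/payments-api.json  # hand-built syscall allowlist for this app
```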

RCEaaS providers face an impossible balancing act: securely run virtually any workload without any knowledge of the applications running on it.

We started Edera to solve this seemingly unsolvable problem. Our approach plugs directly into existing infrastructure, providing a robust security boundary around each workload, regardless of its risk profile. With a simple kubectl apply, we deploy a new runtime class that launches each workload in its own isolated virtual machine environment.
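
As a rough sketch of what that looks like on the Kubernetes side, a runtime class is registered once and then referenced per pod. The class and handler names below are placeholders, not Edera's published configuration; consult Edera's documentation for the real values:

```yaml
# Illustrative only: register a runtime class backed by an isolating handler...
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: isolated                          # placeholder name
handler: isolated                         # maps to the CRI handler configured on each node
---
# ...then opt a workload into it with a single field.
apiVersion: v1
kind: Pod
metadata:
  name: customer-workload
spec:
  runtimeClassName: isolated              # this pod now runs behind its own isolation boundary
  containers:
    - name: app
      image: registry.example.com/customer-app:latest  # placeholder image
```

Because the boundary comes from the runtime class rather than from per-application policy, the same manifest pattern works for any workload, regardless of its risk profile.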

Our solution breaks through the limitations of typical sandboxing approaches.

When a customer application is compromised, the rogue workload remains confined to an untrusted environment, protecting both other customer workloads and the platform infrastructure.

Remote Code Execution as a Service doesn't have to be a security team's nightmare. Instead of requiring teams to build complex, application-specific security policies and constantly monitor alerts, Edera secures each workload by default.

We make security boring—even when solving the industry's most complex challenges.