Hi, I’m Lewis and I’m a Staff Solutions Engineer here at Edera. A key focus of my day-to-day work is helping people understand that Kubernetes isn’t isolating their workloads by default, and then helping them go back to sleep at night thanks to Edera Protect solving this problem at the runtime level.

We love a spicy take, and if the term is new to you, let’s introduce it using Gitpod’s blog post as an example. A spicy take is an opinion that can divide a room, not in the sense of good vs. bad, but usually from a use-case perspective.

The blog title, We’re leaving Kubernetes, is a spicy take because it landed the week before KubeCon NA, the industry-leading conference focused on the Korchestrator, where we continue to encourage people to get on board with Kubernetes. So this blog post has a high Scoville rating.

I remember first noticing Gitpod via their booth at KubeCon Valencia 2022; they had an attractive product for empowering developers in a Cloud Native landscape, and I’ve since watched them ride the Cloud Native wave from afar. If Gitpod are saying they’re leaving Kubernetes, then in my opinion their presence in the community has earned them a moment of our time to understand their spicy take.

Focusing on Security and isolation

This review of their post is going to focus on the section: Security and isolation: balancing flexibility and protection, because this is what we are solving at Edera with Edera Protect Kubernetes. Kubernetes is massive [citation needed], and I learned early in my Cloud Native career that it’s important to focus and build in one core area. So let’s get started with our review:

… Users want the ability to install additional tools (e.g., using apt-get install), run Docker, or even set up a Kubernetes cluster within their development environment. Balancing these requirements with robust security measures proved to be a complex undertaking…

Kubernetes is a platform for all our workloads. Workloads are shipped in containers, which are how we deploy them into our infrastructure. It’s the contract between our developers, our infrastructure engineers, our security teams, and everyone in between. A container can be thought of as a delivery box, and from a security perspective that reminds me of the movie Se7en; to quote Brad Pitt, “What’s in the box?!?”. We want to know what’s in the box because we need to decide if it’s safe to run.

Some of it will be code our developers wrote themselves, so they may assume they can trust it, but there will often be libraries and code pulled from elsewhere on the internet to support our workloads. Sometimes the container itself comes from somewhere else, and in some cases people don’t know where it came from at all.

How do we trust that when we run these workloads they won’t do anything bad? How do we know they won’t leak people’s personal information or company intellectual property, or open us up to remote code execution attacks? Eventually, this path leads towards the opinion that we can’t trust anything, and that’s where the concept of Zero Trust comes in: the only way we can arguably say we trust is to not trust at all.

The naive approach: root access

…Clearly, a more sophisticated approach was needed.

Root access is essentially handing a workload the key to all of your compute. Let’s compare this to our lives. I hope that, reading this, you’ve had the opportunity to share the key to your house, car, or heart with someone else, or someone has shared a key with you. That said, there has probably been at least one instance where it didn’t go to plan, whether the culprit was a highly trusted family member, a friend, or even you! The point here is that we can’t allow our workloads root access when we’re focused on Zero Trust: if you don’t share the key, you don’t need to trust. At least before using Edera Protect, but more about that later on!
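In Kubernetes terms, not handing over the key starts with refusing to run containers as root at all. A minimal sketch of a Pod securityContext that does this (the pod name, UID, and image are illustrative placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-root-keys              # illustrative name
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start if the image wants UID 0
    runAsUser: 10001              # run as an unprivileged UID instead
  containers:
    - name: app
      image: example.com/app:latest      # placeholder image
      securityContext:
        allowPrivilegeEscalation: false  # block setuid/sudo-style escalation
        capabilities:
          drop: ["ALL"]                  # grant no Linux capabilities
```

Dropping capabilities and forbidding privilege escalation shrinks what a compromised workload can do, though as the rest of this post argues, it still shares the host kernel.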

User namespaces: a more nuanced solution

…provides fine-grained control over the mapping of user and group IDs inside containers…

Filesystem UID shift: This is necessary to ensure that files created inside the container map correctly to UIDs on the host system…

…Mounting masked proc: When a container starts, it typically wants to mount /proc…

This section provides a great example of a friction point with Kubernetes. New features become available, but to utilize them, releases need to be tracked, and sometimes these features require engineering effort for the workloads within the container. Some of these changes are at the kernel level, so we also have to manage the compute itself.
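For reference, Kubernetes exposes user namespaces through the pod-level `hostUsers` field (gated behind the `UserNamespacesSupport` feature gate in recent releases), which illustrates exactly this friction: the spec change is one line, but it needs a new enough Kubernetes release, a supporting container runtime, and kernel features like idmapped mounts underneath. A sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo     # illustrative name
spec:
  hostUsers: false      # give the pod its own user namespace:
                        # UID 0 inside maps to an unprivileged host UID
  containers:
    - name: app
      image: example.com/app:latest   # placeholder image
```

Inside the pod, processes can believe they are root; on the host, they hold an unprivileged UID, which is the UID-mapping behaviour Gitpod describes.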

Most companies aren’t open enough to share this, and it could be considered a shameful secret, but I’m used to seeing workloads that were written several years ago still running in production environments. Personally, I don’t see shame in this; as an industry, we’re just not meeting people where they’re at. Rocky didn’t win his fights straight away; most of the time he needed a montage. At Edera, we want to meet people where they’re at today and give them the benefits, and the time, to make their own montage and win their fights.

…Implementing this security model came with its own set of challenges…

And this is where we find ourselves today: if there are solutions available to our problems, we expect to lose out somewhere, and that loss usually comes at the cost of performance. I’ve been open about my mental health within the industry, and I used to think that if I was low in mood or having a bad day, it was OK because it was the only way I could appreciate a good day. That was until someone said to me, “or you could just enjoy being happy.” As an industry, I feel we’ve come to accept that gains must come at a cost. At Edera, our goal is to provide additional security at little to no performance cost. Is it easy? No! But we’re fans of a good challenge.

The micro-VM experiment

…technologies like Firecracker, Cloud Hypervisor, and QEMU

…This exploration was driven by the promise of improved resource isolation, compatibility with other workloads (e.g. Kubernetes) and security, while potentially maintaining some of the benefits of containerization.

There is already a path here with other technologies, so why isn’t this a solved problem today? Let’s look deeper into what they can do:

The promise of micro-VMs

…we would no longer have to contend with shared kernel resources

…uVMs offered the potential to serve as a robust security boundary

…This could provide full compatibility with a wider range of workloads, including nested containerization (running Docker or even Kubernetes within the development environment)

All of this reads great, and it’s what we offer here at Edera. Now, if you’re reading this, I expect you’re thinking: “so why aren’t we all using micro-VMs today?” NEXT SECTION PLEASE!

Challenges with micro-VMs

…Even as lightweight VMs, uVMs introduced more overhead than containers. This impacted both performance and resource utilization…

…This added complexity to our image management pipeline and potentially impacted startup times…

…Lack of GPU support

We’re seeing similar issues to those mentioned before: additional engineering effort to implement the solutions, changes required to workloads, specific hardware required to fully utilize the technology, and, again, not meeting people where they’re at today.

Conclusion

The blog post from Gitpod provides additional validation that Kubernetes today isn’t built for isolating processes on a single node. The definitive way to solve this to date has been to use separate nodes to provide absolute isolation. That solves one problem, but creates others, like increased cost and decreased efficiency (in the sense of compute wasted on nodes created purely for isolation).
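That node-per-boundary pattern typically means tainting dedicated nodes and steering each tenant’s workloads onto them, paying for a whole node per isolation boundary. A hedged sketch of what that looks like (the `tenant` label/taint key and names are illustrative, not a prescribed scheme):

```yaml
# First, taint a node so only one tenant's workloads can land on it:
#   kubectl taint nodes node-1 tenant=alpha:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: tenant-alpha-app     # illustrative name
spec:
  nodeSelector:
    tenant: alpha            # only schedule onto nodes labeled for this tenant
  tolerations:
    - key: "tenant"
      operator: "Equal"
      value: "alpha"
      effect: "NoSchedule"   # tolerate the dedicated node's taint
  containers:
    - name: app
      image: example.com/app:latest   # placeholder image
```

Every extra tenant means extra nodes sitting partially idle, which is exactly the cost and efficiency trade-off described above.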

At Edera, we’re excited to see how Gitpod looks to solve this in their upcoming webinar, whilst remaining confident that our product offering today, Edera Protect, can solve the security and isolation challenges raised in their blog post.

But wait, Edera Protect Kubernetes!

The way Edera Protect Kubernetes solves this is by focusing on the lowest level of our compute that we can build on, just above the hardware (but it doesn’t require virtualization extensions, so it can run anywhere you run containers). Edera Protect creates Edera Zones, and the Edera Zone is what provides the isolation that Kubernetes needs. Edera Protect Kubernetes is written in Rust and takes the lessons learned above, as well as our own experiences, to build a secure, isolated environment. This environment can be used in our Pods to create a Zone of compute that has its own kernel (the kernel is managed via OCI, for the compliance nerds out there!), so the kernel no longer becomes a massive attack surface shared by the whole operating system (OS) and its workloads. And Edera Protect Kubernetes isn’t just for your containerized workloads: we also create an Edera Zone for the OS itself, providing isolation between the containers and the OS.

And to quote one of our heroes: one more thing! With Edera Zones you don’t have to install drivers within the OS; you can install hardware drivers in their own Edera Zone and decide which other Edera Zones can interact with the hardware. For example, GPUs for AI: we’re providing isolation for your AI workloads too, and that’s called Edera Protect AI.

See you in Salt Lake City!

We’ll be at KubeCon next week, and you’ll easily find us in our F&%K LUCK hoodies. If you want to make it official, please reach out via DMs or via our Event page, and we’ll be more than happy to grab a Koffee with you!