Through the catalyst of open source software, Unix and C have become the dominant technologies in computing, despite their historical shortcomings around software reliability. The latest trend in the software reliability conversation is, of course, memory safety, and for good reason: at this point, the majority of defects impacting the reliability of software and systems are related to memory safety. One study found that 88.2% of the software defects in the macOS kernel in 2019 had memory safety violations as their root cause.

Although the software reliability conversation has shifted to focus primarily on memory safety, other issues remain: shortsighted systems design decisions have led to many high-profile security breaches over the years. A notable example is the 2019 Capital One data breach on AWS, which leveraged systems design failures and ultimately resulted in over one hundred million customer records being stolen.

In addition to memory safety violations and systems design mistakes, the open source commons is under constant attack by typosquatters and malicious maintainers. For example, the xz-utils attack uncovered in March 2024 was the result of an overworked open source maintainer being emotionally manipulated into adding a malicious actor as a co-maintainer.

The usual answers for how to mitigate these incidents in the future are simplistic and naive: rewrite the world’s software in Rust, use automated linting for cloud configurations, and avoid adding people as co-maintainers to your projects unless you explicitly trust them.

Unfortunately, while these suggestions are useful for the specific problems they address, the world we live in and the systems we are trying to keep secure and reliable are extremely complicated, both technically and socially. For example, safety-critical systems must be recertified if they are rewritten in Rust, because a rewrite ultimately results in a new system, and recertification can take years to complete.

Similarly, while configuration linting can catch many common types of errors, linters are no substitute for a secure-by-design software architecture. And it is hard to know ahead of time whether a person’s intentions are malicious; tools that help you establish trust in a prospective co-maintainer are naturally limited in their capabilities.

Alex, Emily, and I founded Edera because we wanted to solve this larger problem through a disciplined security- and systems-minded approach. Like any business, however, we must solve it incrementally.

Our first contribution toward solving this holistic problem is Edera Protect, a suite of tools built with modern development techniques and informed by the need for both memory safety and sound systems design. Edera Protect helps secure cloud native and edge computing environments by introducing hard security boundaries between workloads to prevent lateral movement by an attacker. This enables security teams to make real infrastructure-level security commitments without impacting the performance or experience of software engineering and DevOps teams.

We believe Edera Protect is a meaningful contribution toward solving this larger problem: preventing lateral movement reduces the overall risk of compromise, whether from a memory safety violation, a backdoor installed by a malicious open source maintainer, or a misconfiguration in a cloud environment. It allows security and engineering teams to work together, rather than against each other, to create a hardened security posture for a business and its applications.

Check out how Edera Protect can help harden your infrastructure in our two-minute video.