Now Validated on Azure Linux: Edera for the Enterprise Cloud

Before we even get to GPUs, we’re excited to announce that Edera is now fully supported and validated on Azure Linux. We know that enterprises are standardizing their cloud operations on trusted platforms like Azure Linux for secure, high-performance workloads. 

Our validation work means you can deploy Edera's security and isolation capabilities onto your standard Azure Linux builds with greater confidence. We've focused on ensuring kernel compatibility and smooth deployments to reduce integration friction and improve the out-of-the-box experience with your existing tools and driver stacks. It’s a major step in our commitment to bringing Edera's powerful platform to the heart of the enterprise cloud.

Technical Preview: Unlocking Bare-Metal Performance for AI/HPC

Now for performance. The headline feature of this release is our new Technical Preview of PCI Passthrough for NVIDIA GPUs. In plain terms, you can now attach an entire NVIDIA GPU directly to a workload, so the device's data path bypasses emulation entirely. This gives your applications the raw, "bare-metal" performance needed for heavy-duty AI training, complex model inference, and high-performance computing (HPC) simulations.

If your team has been struggling with performance bottlenecks or has workloads that demand the full power of a GPU, this feature opens up new possibilities. It unlocks the ability to run your most critical AI and data-processing jobs on Edera with the performance you'd expect from a dedicated server, but with the security and isolation you count on from us. This initial release supports full-device passthrough; fine-grained partitioning (vGPU/MIG) is already on our roadmap.
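
For a sense of what consuming a passed-through GPU looks like from Kubernetes, here is a minimal sketch that requests one full device for a training pod. It assumes the GPU is exposed as the usual nvidia.com/gpu extended resource (for example via the NVIDIA device plugin) and that your cluster defines an Edera RuntimeClass; the name edera and the container image are illustrative, not Edera's documented configuration.

```python
# Illustrative sketch only -- the resource name, RuntimeClass name, and image
# are assumptions about cluster setup, not Edera's documented configuration.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-training-job"),
    spec=client.V1PodSpec(
        runtime_class_name="edera",  # hypothetical RuntimeClass name
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.01-py3",
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    # Full-device passthrough: the entire GPU is attached to the zone.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```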

We are actively seeking partners for this Technical Preview. If your organization is looking to run high-stakes AI workloads and wants to be first in line to test this capability, please reach out to our team.

Operational Advantage: Consistent Performance for Data-Intensive Workloads

High-performance computing isn't just about compute; it's also about moving data. With stabilized support for SR-IOV and network device passthrough, you can now partition high-speed network cards and allocate dedicated slices of bandwidth to specific workloads. For customers running multi-tenant clusters, this is critical for mitigating "noisy neighbor" problems, and it gives you greater control to ensure your most important workloads receive the low latency and high throughput they require.
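
If you are used to the Kubernetes SR-IOV device plugin, the consumption model is the familiar one: a virtual function shows up as an extended resource that a workload requests like any other. The sketch below is purely illustrative; the resource name edera.dev/sriov_vf and the image are placeholders, and the real name depends on how your device plugin pool is configured.

```python
# Illustrative sketch only -- the extended-resource name is a placeholder that
# depends on your SR-IOV device plugin configuration.
from kubernetes import client

vf_resources = client.V1ResourceRequirements(
    requests={"edera.dev/sriov_vf": "1"},
    limits={"edera.dev/sriov_vf": "1"},
)

# Attach the request to a container spec as with any other extended resource:
container = client.V1Container(
    name="ingest",
    image="example.com/data-ingest:latest",  # placeholder image
    resources=vf_resources,
)
print(container.resources.limits)
```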

Operational Advantage: Smarter Utilization Through Memory Ballooning

Wasted resources are wasted money. With our latest release, we’re introducing foundational support for dynamic memory management, or "ballooning." This capability allows our platform to intelligently reclaim unused memory from workloads and return it to the host, making it available for other applications. The result is a path toward higher density and better overall hardware utilization. This also improves the reliability of scheduling expensive GPU workloads, as Kubernetes gains a more precise, real-time view of available memory. 
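
To make the idea concrete, here is a purely conceptual sketch of the bookkeeping behind ballooning: the host lowers a zone's memory target toward what the workload is actually using (plus headroom), the balloon inflates inside the zone to release the difference, and the reclaimed memory returns to the host pool. The Zone type and reclaim_idle_memory helper are illustrative, not Edera's implementation or API.

```python
# Conceptual sketch of ballooning bookkeeping -- not Edera's implementation.
from dataclasses import dataclass


@dataclass
class Zone:
    name: str
    allocated_mib: int  # memory currently assigned to the zone
    in_use_mib: int     # memory the workload is actually using


def reclaim_idle_memory(zones: list[Zone], headroom_mib: int = 256) -> int:
    """Inflate each zone's balloon down to usage + headroom; return MiB freed."""
    freed = 0
    for zone in zones:
        target = zone.in_use_mib + headroom_mib
        if zone.allocated_mib > target:
            freed += zone.allocated_mib - target
            zone.allocated_mib = target  # balloon inflates; the zone releases pages
    return freed


zones = [Zone("gpu-training", 16384, 15800), Zone("batch-etl", 8192, 2100)]
print(f"Returned {reclaim_idle_memory(zones)} MiB to the host pool")
```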

Laying the Groundwork for Deeper Security Integrations

To help organizations extend their defense-in-depth strategy into Edera, we are introducing a new security event forwarding capability. By integrating libscap and enabling syscall forwarding, Edera can now make security-relevant events from within a zone visible on the host. This provides a crucial bridge for eBPF-based tooling, including runtime security scanners like Falco, to gain preliminary visibility into workload behavior. This is a foundational step: it isn't a full, end-to-end integration yet, but it paves the way for deeper, more comprehensive security monitoring in future releases.
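
For intuition only, here is a toy sketch of what a host-side consumer of forwarded events might do: receive syscall events tagged with their originating zone and flag suspicious patterns. The SyscallEvent shape and the matching rule are hypothetical; in practice, forwarded events would feed a libscap/eBPF-aware engine such as Falco rather than hand-rolled code.

```python
# Toy illustration of consuming forwarded syscall events on the host.
# The event structure and rule are hypothetical, for intuition only.
from dataclasses import dataclass
from typing import Iterable


@dataclass
class SyscallEvent:
    zone: str              # which Edera zone the event came from
    syscall: str           # e.g. "execve", "openat"
    args: tuple[str, ...]


def suspicious(event: SyscallEvent) -> bool:
    """Toy Falco-style heuristic: flag shells spawned inside a zone."""
    return event.syscall == "execve" and any(
        arg.rsplit("/", 1)[-1] == "sh" for arg in event.args
    )


def monitor(events: Iterable[SyscallEvent]) -> None:
    for event in events:
        if suspicious(event):
            print(f"[alert] zone={event.zone} spawned a shell: {event.args}")


monitor([
    SyscallEvent("payments", "openat", ("/etc/ssl/certs/ca.pem",)),
    SyscallEvent("payments", "execve", ("/bin/sh", "-c", "id")),
])
```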

Customize and Tune Environments Without Custom Images

Your AI and data science teams use a wide array of specialized frameworks, drivers, and libraries, and keeping all of it updated across different "golden images" is an operational nightmare. Edera now lets operators tune workload environments on the fly: inject kernel modules, set specific system parameters, and even load entire add-on packages at runtime. This gives your teams the flexibility to tune performance for specific AI frameworks or load unique drivers, all without the overhead and delay of building and managing custom OS images.
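
As a sketch of what per-workload tuning could look like without custom images, the example below attaches tuning hints to a pod as annotations. Every annotation key here is a placeholder invented for illustration, as is the RuntimeClass name; consult the release notes for the actual mechanism and names Edera uses.

```python
# Illustrative sketch only -- the annotation keys are placeholders, not Edera's
# documented names; see the release notes for the real mechanism.
from kubernetes import client

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="inference-server",
        annotations={
            "example.edera.dev/kernel-modules": "nvidia,nvidia_uvm",
            "example.edera.dev/sysctls": "net.core.rmem_max=268435456",
            "example.edera.dev/addons": "nvidia-driver-bundle",
        },
    ),
    spec=client.V1PodSpec(
        runtime_class_name="edera",  # hypothetical RuntimeClass name
        containers=[
            client.V1Container(name="inference", image="example.com/llm-server:latest")
        ],
    ),
)
print(pod.metadata.annotations)
```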

Leading the Hardened Runtime Category

This release comes as Edera has been named a Cloud Security Segment Leader for the Hardened Runtime Category in Latio's 2025 Cloud Security Market Report. This recognition reflects the growing industry acknowledgment that hardened runtimes are essential infrastructure for enterprises running security-critical AI workloads at scale—and validates our approach to eliminating the performance-security trade-off.

Get Started Today

This release is a major step forward, delivering the performance and security enterprises need to accelerate their AI and HPC initiatives. By providing bare-metal speed on standard enterprise platforms like Azure Linux, Edera ensures you have the operational simplicity and technical capability to tackle your most ambitious projects with complete confidence.

Ready to run your most demanding AI workloads with confidence and unprecedented performance?

  • Check out the full release notes for a detailed breakdown.
  • Contact our sales team for a demo to see Bare-Metal AI in action.
  • If you have a high-stakes AI workload, reach out about joining the PCI Passthrough Technical Preview.