Confidential computing, as the name implies, aims to provide confidentiality and integrity to running code. This means that other applications, the host operating system, the hypervisor, system administrators, and even anyone with physical access to the machine cannot view or tamper with a program running under confidential computing, or with its data. This is an extremely powerful goal that vastly reduces the amount of code you have to trust in order to run your application (this trusted code is sometimes referred to as the trusted computing base, or TCB).
So how does confidential computing achieve these goals?
Primarily, through the use of a hardware-based, attested Trusted Execution Environment (TEE). That's a lot of words, but in practice this is a set of hardware protections, implemented either as a separate security chip or (more commonly) as features of the CPU itself, that runs trusted operations in encrypted memory. A TEE is programmable (so it can run any code that needs a trusted environment) and usually attestable (so you can verify that your code is running on a genuine TEE before sending it sensitive data).
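To make the attestation step concrete, here is a minimal toy model of the flow, using only the Python standard library. Everything here is illustrative: a real TEE produces a hardware-signed "quote" whose certificate chain leads back to the silicon vendor, while this sketch stands in an HMAC with a hypothetical pre-shared key for that signature.

```python
import hashlib
import hmac

# Hypothetical stand-in for the hardware root of trust; in a real TEE this
# is a vendor-provisioned signing key, not a shared secret.
VENDOR_KEY = b"hypothetical-hardware-root-key"

def make_quote(enclave_code: bytes) -> dict:
    """What the TEE produces: a measurement (hash) of the loaded code,
    plus a signature over that measurement."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    signature = hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_quote(quote: dict, expected_code: bytes) -> bool:
    """What the relying party does before sending sensitive data: check that
    the signature is genuine, then check that the measurement matches the
    code it expects the TEE to be running."""
    expected_measurement = hashlib.sha256(expected_code).hexdigest()
    expected_sig = hmac.new(VENDOR_KEY, quote["measurement"].encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(quote["signature"], expected_sig)
            and quote["measurement"] == expected_measurement)

code = b"def process(secret): ..."
quote = make_quote(code)
assert verify_quote(quote, code)            # measurement matches: safe to send data
assert not verify_quote(quote, b"tampered") # mismatch: refuse to send anything
```

The key idea is that the decision to release sensitive data is made *after* verifying both the hardware signature and the identity of the code, which is what distinguishes an attested TEE from ordinary remote trust.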
There are several different designs of TEE (you may have heard of Intel SGX, AMD SEV, Intel TDX, or NVIDIA's confidential computing GPU). Each of these designs has slightly different properties and thus slightly different security guarantees. There are lots of good comparisons between TEE designs (see appendix of Bertani et al., taxonomy in Akram et al., description in Ménétrey et al.), so we'll avoid talking about specific TEEs and instead talk about properties and tradeoffs made in these designs more generally.
TEEs protect applications with encrypted memory and isolation. Memory is usually encrypted with keys generated by the TEE or other secure hardware, and is sometimes monitored by privileged hardware monitors. Isolation is done at either the kernel level or the user level. Kernel-level isolation has a larger TCB (more trusted code, and thus a greater chance of compromise), but it allows system calls to be encrypted and makes it easier to port legacy applications to confidential computing. User-level isolation has a smaller TCB (less trusted code), but offers no encrypted system calls and little compatibility with legacy applications.
Confidential computing has been used in cloud environments to utilize external computing resources to operate on sensitive data without giving access to the cloud provider or other users. Projects like Confidential Containers allow Kubernetes worker nodes to run inside a TEE. Other designs put just the container in a TEE, allowing for a smaller TCB, but less compatibility with existing infrastructure. In ‘Costs of confidential computing’ later, we’ll describe the tradeoffs that come with TCB size.
Current applications and emerging use cases
Traditional confidential computing use cases have focused on protecting sensitive data processing in multi-tenant cloud environments. However, the technology is rapidly expanding into new domains:
Confidential AI has emerged as one of the most significant growth areas. Organizations can now protect AI models, training data, and inference results throughout the entire machine learning pipeline. Major cloud providers offer confidential AI services that enable secure multi-party training, where organizations can collaborate to train models without exposing their datasets to each other. This is particularly valuable for regulated industries like healthcare and finance, where data cannot leave organizational boundaries but collective insights would be beneficial.
Multi-party computation allows organizations to perform joint analytics on combined datasets while keeping individual data sources private. For example, multiple hospitals could collaborate on medical research using their patient data without any single institution accessing another's records.
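The trust model behind that example can be sketched in a few lines. This is a toy, not a real enclave SDK: the `Enclave` class simply stands in for code whose isolation (and identity, via attestation) all parties have verified, with the property that raw records enter but only the joint aggregate ever leaves.

```python
# Toy model of TEE-based multi-party analytics: each party submits records
# into the "enclave," but only the aggregate crosses the enclave boundary.
class Enclave:
    def __init__(self):
        self._records = []  # lives only in (encrypted) enclave memory

    def submit(self, party: str, patient_ages: list[int]) -> None:
        # Individual records are never exposed to the other parties.
        self._records.extend(patient_ages)

    def result(self) -> float:
        # Only this aggregate statistic leaves the enclave.
        return sum(self._records) / len(self._records)

enclave = Enclave()
enclave.submit("hospital_a", [34, 61, 47])
enclave.submit("hospital_b", [55, 29])
print(round(enclave.result(), 1))  # prints 45.2: joint mean age, computed without
                                   # either hospital seeing the other's records
```

In a real deployment, each hospital would first verify the enclave's attestation quote and then send its records over an encrypted channel terminated inside the enclave, so not even the operator of the machine sees the raw data.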
Regulatory compliance has become increasingly important as new AI regulations and data sovereignty requirements emerge. Confidential computing provides cryptographic proof that data processing meets privacy requirements, simplifying compliance with regulations like GDPR and emerging AI governance frameworks.
Costs of confidential computing
So what is the cost of getting all these benefits of confidential computing?
First, there are the hardware costs. A TEE is a piece of hardware, and so any application using confidential computing needs access to a TEE either on-premises or in a cloud. On-premises this has a lot of upfront costs (buying the hardware), while on the cloud this has ongoing costs (paying for more expensive compute instances that have TEEs). Each TEE that you pay for limits how many applications can use it concurrently: constraints such as limited encrypted memory and limited key-generation capacity mean that any large-scale application will need access to a large number of TEEs.
However, the performance penalty that was once a significant concern is diminishing. Modern implementations, particularly for AI workloads, now offer near-identical performance to unencrypted processing. Additionally, managed confidential computing services are reducing the operational overhead of deployment and management.
Second, there are engineering costs. Remember those small-vs-large TCB tradeoffs we talked about earlier? These determine your engineering costs. In most cases you can reuse a lot of existing application code, at the cost of a larger TCB. A larger TCB means the most sensitive parts of your application are not isolated from the other code in the TCB, reducing the benefit of using confidential computing in the first place. If you want a smaller TCB, you must rewrite applications to be aware that they run in a TEE and to isolate their most sensitive operations.
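The small-TCB rewrite usually means partitioning the application so that only a minimal trusted module ever touches secrets. Here is a sketch of that shape; the names are illustrative, not a real TEE SDK, and the "signing" operation is a placeholder.

```python
# --- trusted partition: the only code that must run inside the TEE ---
def sign_inside_tee(secret_key: int, message: bytes) -> int:
    # Placeholder for a real signing operation; the point is that the
    # key is confined to this function and never crosses the boundary.
    return (secret_key * sum(message)) % 2**32

# --- untrusted partition: parsing, I/O, logging, networking, etc. ---
def handle_request(raw: str) -> int:
    message = raw.strip().encode()             # untrusted pre-processing
    return sign_inside_tee(0xC0FFEE, message)  # one narrow call into the TEE

print(handle_request("  transfer 100  "))
```

The engineering cost lives in drawing that boundary well: every call across it must be narrow and auditable, which is exactly the work you avoid (and the isolation you give up) with the large-TCB, lift-and-shift approach.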
Fortunately, the engineering burden is decreasing as well. Managed platforms and software-as-a-service offerings now allow organizations to deploy confidential computing with minimal code changes, significantly reducing the expertise required for implementation.
Other considerations
In addition to the monetary costs, what are other considerations you should make when deciding to use confidential computing?
First, there are some security considerations. Confidential computing moves trust from software into hardware, which raises the question: is this hardware more secure? Security vulnerabilities continue to be discovered in TEE implementations. Recent examples include a high-severity AMD SEV-SNP vulnerability (CVE-2024-56161) that allows malicious microcode injection, and the BadRAM attack that can compromise AMD SEV-SNP systems through memory manipulation. Unlike software bugs, hardware bugs often cannot be fixed with a simple patch: in many cases remediation requires firmware updates or entirely new hardware. While we'd expect the number of these bugs to decrease as confidential computing matures, the risk of hardware bugs undermining confidential computing remains a real and evolving challenge.
Also, there are questions about the threat model of confidential computing. Do you really trust the hardware manufacturers, but not your cloud provider, operating system, or hypervisor? How do you know that all code running in the TEE is trusted and bug-free? How can other applications and the operating system be protected from a malicious application running in a TEE? As quantum computing advances, there are also emerging concerns about post-quantum cryptographic readiness in confidential computing frameworks.
Finally, consider the ecosystem maturity. While confidential computing is gaining significant traction with major technology companies integrating it into their platforms, the ecosystem is still evolving. Standards are being developed, tools are maturing, and best practices are emerging. Organizations should evaluate whether the current maturity level meets their specific needs and risk tolerance.
Putting it all together
Confidential computing is a powerful security design that protects highly sensitive data and applications from a hostile environment. The technology is experiencing rapid growth and adoption, particularly in AI applications, where it enables new forms of secure collaboration and compliance. However, achieving this protection can still be expensive in practice, and it does not solve all of your security problems.
The landscape is evolving quickly—performance overhead is decreasing, managed services are simplifying deployment, and new use cases like confidential AI are driving innovation. Yet security challenges persist, with new vulnerabilities continuing to surface in TEE implementations.
In future posts, we’ll dive deeper into some of these aspects of confidential computing, and look at which other technologies can achieve some of its security guarantees more cheaply.