MCP Security Risks: Why AI Infrastructure Needs Isolation
MCPs extend AI agents but also expose risks. Learn how isolation protects AI infrastructure from MCP vulnerabilities.
TL;DR
MCPs (Model Context Protocol servers) extend AI agents by giving them the ability to act, not just analyze. But without proper isolation, MCPs can leak secrets, tamper with workloads, or open doors to attackers.
Recent demos and real-world vulnerabilities show why hardened runtimes like Edera’s are the only way to make MCPs safe for enterprise AI infrastructure.
My Introduction to AI
Like many others, I’ve been drawn into the world of AI. I don’t believe it’s here to take away all of our jobs, but rather to improve them. AI can take on tasks we don’t need to spend time on – linting code, writing documentation, reviewing endless spreadsheets, analyzing markets, or even helping to draft a blog post like this one. One thing to state up front: “AI” is a term like “football”. Football could mean American football or soccer; AI could mean anything from next-word suggestion to Skynet. So whenever we mention AI, we should be clear about which AI we mean. Let’s learn a little more so we can get grounded in these concepts.
The Challenges of AI Infrastructure Today
AI is advancing at a staggering pace – some claim that every “year” of AI progress is equal to four years of traditional software development. But today’s AI systems are built on two main foundations:
- LLMs (Large Language Models)
- RAG (Retrieval-Augmented Generation)
Neither learns the way humans do. LLMs rely on vast text corpora and statistical patterns. RAG systems extend that knowledge with external context, often from APIs or proprietary data sources. But what happens when we need AI agents to go beyond? To not just know, but to act? That’s where MCPs come in.
What the F*ck are MCPs and Why They Matter for AI Agents
The simplest way to explain MCPs is this:
- Call centres are for humans
- APIs are for applications
- MCPs are for AI agents
MCPs allow AI agents to access tools, fetch information, and perform actions in structured, predictable ways. They give LLMs the missing piece: the ability to actually do something with their understanding.
Think of it this way: an LLM can parse the phrase “find me a picture of a cat on the internet,” but it doesn’t have the built-in ability to fetch that cat picture. An MCP provides the bridge – extending the LLM’s capabilities by granting safe, structured access to the outside world.
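The tool-bridge idea can be sketched in plain Python. This is an illustrative model of the pattern, not the real MCP SDK: the names `ToolServer` and `fetch_cat_picture` are invented for this example, and a real MCP server would speak the protocol over a transport rather than in-process calls.

```python
from typing import Callable

# Illustrative sketch of the MCP pattern: a server exposes named,
# structured "tools" that an agent can discover and invoke.
class ToolServer:
    def __init__(self) -> None:
        self.tools: dict[str, Callable[..., str]] = {}

    def tool(self, fn: Callable[..., str]) -> Callable[..., str]:
        # Register a function so an agent can invoke it by name.
        self.tools[fn.__name__] = fn
        return fn

    def call(self, name: str, **kwargs: str) -> str:
        # The agent calls a tool by name with structured arguments,
        # instead of the LLM "imagining" how to reach the internet.
        return self.tools[name](**kwargs)

server = ToolServer()

@server.tool
def fetch_cat_picture(query: str) -> str:
    # A real MCP server would call an image API here; we stub it.
    return f"https://example.com/images/{query}.jpg"

print(server.call("fetch_cat_picture", query="cat"))
```

The LLM supplies the intent (“find me a picture of a cat”); the tool supplies the capability. That separation is exactly what makes MCPs powerful – and exactly why a compromised tool is dangerous.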
Real-World MCP Vulnerabilities: Lessons from the MCPee Demo
The MCPee demo makes this tangible. Two MCPs run on the same machine:
- Weather MCP – fetches real-time weather data from a third-party API, storing API credentials as environment variables.
- Raider MCP – reads environment variables and can even modify code on the system.
The problem: the Raider MCP can access the Weather MCP’s API key, tamper with its source code, and change outputs. Suddenly, an innocent weather report could be manipulated to show hurricane-force winds – and the AI agent has no way to know the MCP itself was compromised.
This illustrates a core risk: when MCPs run side by side without isolation, they can interfere with each other, leak secrets, or corrupt outputs. It’s a textbook example of why workload boundaries matter.
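The leakage mechanism is almost embarrassingly simple. The snippet below is an illustrative sketch, not the actual MCPee demo code: two “MCPs” co-located in the same execution environment share one set of environment variables, so the second needs no exploit at all to read the first one’s credential. The key value is, of course, made up.

```python
import os

def weather_mcp_setup() -> None:
    # The Weather MCP stores its third-party API credential in an
    # environment variable -- a very common deployment pattern.
    os.environ["WEATHER_API_KEY"] = "sk-demo-12345"  # hypothetical key

def raider_mcp_snoop() -> dict[str, str]:
    # A co-located MCP shares the same environment, so "stealing"
    # the secret is just a dictionary lookup -- no exploit needed.
    return {k: v for k, v in os.environ.items() if "KEY" in k}

weather_mcp_setup()
leaked = raider_mcp_snoop()
print(leaked["WEATHER_API_KEY"])  # the secret is fully visible
```

Nothing here is a bug in either MCP. The vulnerability is the shared execution environment itself, which is why the fix has to be architectural.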
Recent Real-World MCP Vulnerabilities
These aren’t just hypothetical risks. In July 2025, a critical CVSS 9.4 flaw in Anthropic’s MCP Inspector (CVE-2025-49596) was disclosed. The exploit required almost nothing from the victim – simply visiting a malicious website while MCP Inspector was running allowed attackers to remotely execute arbitrary code.
With over 5,000 forks on GitHub, this wasn’t an obscure tool. It exposed just how quickly MCPs can become vectors for supply chain compromise. The parallels to the MCPee demo are clear: once an MCP can read local variables or modify behavior, the attack surface expands dramatically.
How Isolation Secures MCPs in AI Infrastructure
Isolation is the architectural solution. Most MCP risks stem from shared execution environments. Containers, which share the Linux kernel, were never designed as hardened boundaries. Once a containerized MCP is compromised, attackers can often move laterally across workloads.
Hypervisor-Grade Isolation: Edera’s Approach to MCP Security
A hardened runtime changes the game:
- Hypervisor-grade isolation: Each MCP runs in its own sandbox with a dedicated kernel.
- Attack surface minimization: MCPs only get the resources they explicitly need, nothing more.
- Default denial of access: No implicit trust between MCPs; no shared secrets in memory or environment variables.
Instead of patching after incidents, hypervisor-level isolation prevents entire categories of MCP and AI agent attacks from happening at all.
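Default denial is worth making concrete. The sketch below models the policy logic only – the workload names and resource strings are invented for illustration, and Edera’s actual configuration format differs – but it captures the rule: anything not explicitly granted is denied.

```python
# Hedged sketch of a default-deny, per-workload access policy.
# Workload and resource names here are hypothetical examples.
ALLOWED: dict[str, set[str]] = {
    # Each MCP is granted only the resources it explicitly needs.
    "weather-mcp": {"net:api.weather.example"},
    "raider-mcp": set(),  # no grants means no access at all
}

def is_allowed(workload: str, resource: str) -> bool:
    # Deny by default: unknown workloads and ungranted resources fail.
    return resource in ALLOWED.get(workload, set())

print(is_allowed("weather-mcp", "net:api.weather.example"))  # True
print(is_allowed("raider-mcp", "env:WEATHER_API_KEY"))       # False
```

Under this model, the MCPee attack simply cannot happen: the Raider MCP was never granted access to the Weather MCP’s secrets, so there is nothing to steal.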
Are MCPs Inherently Insecure?
Not at all. MCP was only introduced in November 2024, and like any new framework, it’s going through a period of rapid experimentation. Early Internet protocols had buffer overflows everywhere. Early containers suffered repeated escape vulnerabilities. MCPs are simply the next technology to face a hardening cycle.
Security standards are already forming. OAuth integration, signed registries, and stricter access controls will strengthen MCP deployments. But technology history shows us one thing clearly: architecture matters most. Without isolation at the runtime level, bolt-on controls will always lag behind the attackers.
The Future of MCPs in AI Infrastructure
MCPs unlock incredible potential. They let AI agents interact with systems, APIs, and data sources in structured ways. But to safely scale their use, organizations must adopt:
- Strong isolation by default
- Runtime hardening that enforces boundaries instead of just observing them
- Security-first interoperability standards
Handled correctly, MCPs could become the backbone of trustworthy agentic AI systems. Mishandled, they risk becoming the new “container escapes” of the AI era.
The choice is clear: if MCPs are going to extend the capabilities of AI agents, they need to be deployed in environments built for security from the ground up.
FAQs
What is an MCP?
An MCP (Model Context Protocol) extends AI agents, allowing them to perform actions securely through defined interfaces.
Why is isolation critical for MCPs?
Without isolation, compromised MCPs can leak secrets or modify other workloads. Hypervisor-grade isolation prevents these cross-agent attacks.
How does Edera secure MCPs?
Edera’s hardened runtime enforces hypervisor-level isolation for every workload, eliminating shared-kernel vulnerabilities and lateral movement.
