Measuring AI Security: Separating Signal from Panic
See what MCP exposure looks like in real environments — and the practical controls security teams should prioritize.
Model context protocol explained
MCP defines how an AI model communicates with tools through structured “schemas.” These schemas describe what actions the model can take, what parameters are allowed, and how results are returned.
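As a rough sketch, a tool definition pairs a name and description with a JSON-Schema-style declaration of its parameters. The tool name, ticket ID format, and field names below are hypothetical and simplified rather than taken from any particular MCP SDK; the point is that the schema, not the model, decides what a valid call looks like.

```python
# Simplified, hypothetical MCP-style tool definition.
# The schema declares which action exists, which parameters are allowed,
# and what shape the arguments must take.
read_ticket_tool = {
    "name": "read_ticket",
    "description": "Fetch a single support ticket by ID.",
    "inputSchema": {                       # JSON-Schema-style constraint on the call
        "type": "object",
        "properties": {
            "ticket_id": {"type": "string", "pattern": "^TCK-[0-9]{6}$"},
        },
        "required": ["ticket_id"],
        "additionalProperties": False,     # reject parameters the schema doesn't name
    },
}
```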
In cybersecurity terms, MCP shifts AI from passive assistant to operational participant. That architectural change expands the attack surface. But it doesn’t introduce entirely new classes of risk. Rather, it reorganizes familiar ones.
At a high level, an MCP deployment includes:
- An AI model (the reasoning engine).
- A server exposing tools.
- A schema that defines permitted tool calls.
- An orchestration layer coordinating execution.
The schema is critical. It acts as the boundary between model reasoning and system capability. If that boundary is narrowly defined and permissioned correctly, risk remains constrained. If it’s broad, loosely validated, or over-permissioned, risk increases.
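To make that concrete, here is a hypothetical contrast between an over-permissioned and a narrowly scoped version of the same HTTP fetch parameter. The hostnames are placeholders; only the scoping difference matters.

```python
# Over-permissioned: any URL the host can reach becomes reachable through the model.
fetch_url_broad = {
    "type": "object",
    "properties": {"url": {"type": "string"}},
    "required": ["url"],
}

# Narrowly scoped: HTTPS only, limited to explicitly named internal hosts.
fetch_url_narrow = {
    "type": "object",
    "properties": {
        "url": {
            "type": "string",
            "pattern": r"^https://(api|docs)\.internal\.example\.com/",
        },
    },
    "required": ["url"],
    "additionalProperties": False,
}
```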
Understanding MCP security requires separating architectural reality from AI hype. That schema boundary, not the prompt, is where most meaningful MCP security decisions live.
Why MCP matters for AI security
Traditional applications validate inputs at the UI layer, enforce roles through identity and access management (IAM) systems, and constrain execution through backend logic. MCP-based AI systems relocate those controls.
Instead of users clicking buttons, AI agents call tools. Instead of app logic gating execution, schemas define permissible actions. Instead of static workflows, orchestration layers dynamically combine capabilities. All of this changes where security teams must look.
Research analyzing real-world MCP servers found that most exposed familiar software primitives rather than exotic AI-specific functions. Observed capabilities included:
- Filesystem access.
- HTTP requests.
- Database queries.
- Local script or process execution.
- Tool chaining and orchestration.
None of these are new to enterprise environments. They already exist across DevOps automation, cloud security and management, and API ecosystems. MCP simply gives them structured access through AI systems. The implication: AI risk is often compositional, not magical.
What security risks are associated with MCP?
1. Capability exposure
Any MCP server exposes a defined set of capabilities. If a tool allows file writes, outbound HTTP requests, or database queries, those actions become callable through the model — subject to schema constraints.
Individually, many MCP deployments present low inherent risk. Research found that arbitrary code execution was relatively uncommon in operational servers. The more common issues resembled long-standing software security concerns:
- Excessive permissions.
- Weak defaults.
- Poor input validation.
- Overly broad parameters.
These are governance and design problems, not uniquely AI problems.
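A short sketch of what "weak defaults" means in practice, using a hypothetical server configuration: risky capabilities start disabled and empty allowlists grant nothing, so a team has to widen the blast radius deliberately rather than inherit it.

```python
from dataclasses import dataclass, field

@dataclass
class ToolServerConfig:
    # Hardened defaults: deny by default, explicit opt-in for risky capabilities.
    allow_file_writes: bool = False                              # weak default would be True
    allowed_base_dirs: list[str] = field(default_factory=list)   # empty = nothing writable
    outbound_http: bool = False
    allowed_domains: list[str] = field(default_factory=list)     # empty = no egress

# Widening access becomes a visible, reviewable decision rather than an inherited default.
config = ToolServerConfig(
    allow_file_writes=True,
    allowed_base_dirs=["/srv/agent-workspace"],
)
```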
2. Composition and orchestration risk
Risk increases when capabilities combine. A filesystem write tool may be low risk alone. An HTTP fetch tool may be low risk alone. Together, they could enable persistence or content injection. Add orchestration and planning, and you may create multi-step automation chains.
Examples observed in real environments include:
- HTTP fetch + filesystem write → content injection or persistence.
- Database query + orchestration → stealthy data exfiltration.
- Filesystem write + planning → configuration poisoning.
- HTTP + planning + execution → multi-stage agent behavior.
This mirrors traditional attack chaining. The difference is speed and abstraction. MCP reduces friction in combining primitives. Security teams must therefore evaluate tool composition, not just individual tool risk.
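One way to operationalize that is a simple composition check over the tools co-resident in one execution context. The primitive labels and pairing rules below are illustrative, not a standard taxonomy.

```python
# Flag risky tool combinations that share a single execution context.
RISKY_PAIRS = {
    frozenset({"http_fetch", "file_write"}): "content injection or persistence",
    frozenset({"db_query", "orchestration"}): "staged data exfiltration",
    frozenset({"file_write", "planning"}): "configuration poisoning",
}

def composition_findings(tools_in_context: set[str]) -> list[str]:
    """Return a finding for every risky pair present in the same context."""
    findings = []
    for pair, impact in RISKY_PAIRS.items():
        if pair <= tools_in_context:               # both primitives are co-resident
            findings.append(f"{' + '.join(sorted(pair))}: {impact}")
    return findings

print(composition_findings({"http_fetch", "file_write", "db_query"}))
# ['file_write + http_fetch: content injection or persistence']
```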
3. Schema design as the security boundary
In MCP environments, the schema is the enforcement point. A poorly scoped parameter — such as an unbounded file path or unrestricted URL — may create more risk than a clever prompt injection attempt. While prompt injection remains a concern, schema over-permissioning often has clearer and more deterministic consequences.
Secure schema design should:
- Limit parameters to validated formats.
- Constrain accessible paths and domains.
- Enforce least privilege at the tool level.
- Separate read and write capabilities.
If the schema defines the guardrails, those guardrails must be narrow and explicit.
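A minimal sketch of what narrow, explicit guardrails can look like at the tool boundary, assuming a single writable workspace directory and a small domain allowlist (both placeholders): every argument is validated before it reaches the underlying primitive.

```python
from pathlib import Path
from urllib.parse import urlparse

ALLOWED_BASE = Path("/srv/agent-workspace").resolve()     # hypothetical writable root
ALLOWED_DOMAINS = {"api.internal.example.com"}            # hypothetical egress allowlist

def validate_write_path(raw_path: str) -> Path:
    """Constrain file writes to one directory; reject traversal and absolute paths."""
    candidate = (ALLOWED_BASE / raw_path).resolve()
    if ALLOWED_BASE != candidate and ALLOWED_BASE not in candidate.parents:
        raise PermissionError(f"path escapes allowed base: {raw_path}")
    return candidate

def validate_fetch_url(raw_url: str) -> str:
    """Constrain outbound requests to HTTPS and explicitly named domains."""
    parsed = urlparse(raw_url)
    if parsed.scheme != "https" or parsed.hostname not in ALLOWED_DOMAINS:
        raise PermissionError(f"URL outside allowlist: {raw_url}")
    return raw_url
```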
What MCP security is not
It is easy to assume that giving AI access to tools automatically results in uncontrollable systems. Real-world data suggests otherwise.
Most MCP servers do not default to arbitrary code execution. Many expose narrow, task-specific capabilities. The majority of risk stems from predictable software design weaknesses rather than emergent AI behavior.
MCP does not invalidate established security principles. Instead, it requires applying them in new architectural locations. The fundamentals still apply:
- Least privilege.
- Defense in depth.
- Segmentation.
- Logging and monitoring.
- Clear ownership of execution contexts.
AI introduces scale and abstraction. It does not eliminate control.
How to secure model context protocol deployments
Security teams evaluating MCP-based systems should focus on architecture before prompts.
Start by mapping each MCP tool to the underlying primitive it exposes. Is it file access? HTTP calls? Database queries? Treat it like any other service capability.
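As a starting point, that mapping can be as simple as a keyword-based inventory pass. The tool names and the heuristic below are illustrative; a real inventory would inspect each tool's schema and implementation.

```python
# Rough classification of MCP tools by the conventional primitive they expose.
PRIMITIVE_KEYWORDS = {
    "filesystem": ("file", "path", "dir"),
    "http": ("http", "url", "fetch", "request"),
    "database": ("sql", "query", "db"),
    "execution": ("exec", "shell", "script", "process"),
}

def classify_tool(tool_name: str) -> str:
    name = tool_name.lower()
    for primitive, keywords in PRIMITIVE_KEYWORDS.items():
        if any(keyword in name for keyword in keywords):
            return primitive
    return "unclassified"

inventory = ["read_file", "fetch_url", "run_sql_query", "spawn_process"]
print({tool: classify_tool(tool) for tool in inventory})
# {'read_file': 'filesystem', 'fetch_url': 'http',
#  'run_sql_query': 'database', 'spawn_process': 'execution'}
```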
Next, assess composition. Where can multiple tools be chained? Is orchestration limited? Are there controls around execution flow?
Then apply existing enterprise controls. Network segmentation, credential scoping, execution sandboxing, and behavioral detection remain effective. AI systems should inherit those protections rather than bypass them.
Finally, audit for capability sprawl. As AI adoption grows, separate teams may expose overlapping tools. Risk compounds when sensitive capabilities accumulate in the same execution context.
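A sprawl audit can start from the same inventory: aggregate sensitive primitives per execution context across teams and flag the contexts where they accumulate. The context names, team names, and threshold below are placeholders.

```python
SENSITIVE = {"filesystem_write", "database", "execution", "http"}

deployments = [
    {"context": "support-agent", "team": "cx",    "primitives": {"database", "http"}},
    {"context": "support-agent", "team": "infra", "primitives": {"filesystem_write"}},
    {"context": "docs-agent",    "team": "docs",  "primitives": {"http"}},
]

def sprawl_report(deployments, threshold: int = 2) -> dict[str, list[str]]:
    """Flag execution contexts where sensitive primitives from multiple teams pile up."""
    by_context: dict[str, set[str]] = {}
    for d in deployments:
        by_context.setdefault(d["context"], set()).update(d["primitives"] & SENSITIVE)
    return {ctx: sorted(p) for ctx, p in by_context.items() if len(p) >= threshold}

print(sprawl_report(deployments))
# {'support-agent': ['database', 'filesystem_write', 'http']}
```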
The shift is architectural, not philosophical. Security must follow the workflow from UI to schema to orchestration.
MCP vs. traditional application security
In traditional application security, the control boundary lives in application logic and access control layers. In MCP-based systems, that boundary shifts.
Where once developers validated user input, now schemas validate model parameters. Where IAM defined human permissions, now tool-level permissions define agent capability. Where static backend workflows enforced logic, orchestration engines assemble dynamic chains.
This shift demands visibility into tool definitions and execution graphs. It also requires collaboration between AI developers and security architects early in design. Influencing secure-by-design schema development may be more effective than attempting to contain insecure deployments after they ship.