Model Context Protocol

MCP is an open standard that gives AI models a universal way to connect to external tools, data sources, and services — often called "the USB-C port for AI."

📅 March 2026 ⏱ 12 min read 📚 8 sources

What It Is

The Model Context Protocol (MCP) is an open-source standard created by Anthropic and released on November 25, 2024. It defines a universal way for AI applications — like Claude, ChatGPT, or Gemini — to discover, connect to, and invoke external tools, data sources, and workflows.[1]

Before MCP, every integration between an AI model and an external service required a bespoke connector. If you had N AI platforms and M tools, you needed up to N×M custom integrations. MCP collapses that into N+M: each AI platform implements one MCP client, and each tool implements one MCP server.[1]

Architecture

MCP uses a client-server model built on JSON-RPC 2.0 messages:

  • MCP Host — the AI application (an IDE, chat interface, or agent runtime) that contains the LLM.
  • MCP Client — a lightweight connector inside the host that handles protocol negotiation.
  • MCP Server — an external service that exposes capabilities (tools, resources, prompts) through the standardized protocol.

Communication flows over two primary transports: stdio (for local processes) and Streamable HTTP (for remote servers), the latter introduced in March 2025 to replace the earlier SSE transport with better scalability and bidirectional support.[8]

"Think of MCP like a USB-C port for AI applications." — Anthropic's launch announcement. One standardized connector replaces a tangle of proprietary adapters.[1]

Why It Matters

AI models are only as useful as the information they can access. Without a standard protocol, every new data source or tool meant weeks of custom engineering. MCP matters because it:

  • Eliminates integration sprawl — A tool built once as an MCP server works with every MCP-compatible AI platform.
  • Enables agentic workflows — AI agents can dynamically discover and invoke tools at runtime rather than relying on hardcoded function schemas.[6]
  • Reduces vendor lock-in — Because MCP is open and vendor-neutral (now governed by the Linux Foundation), organizations aren't tied to a single AI provider's tool ecosystem.
  • Scales with complexity — As enterprises deploy agents that orchestrate dozens of services, a standardized protocol becomes essential for reliability and auditability.

MCP vs. Function Calling

Function calling is a vendor-specific feature where the model outputs structured arguments for a predefined function. MCP operates at a higher level: it standardizes how tools are discovered, described, and invoked across any host and any server. In practice they're complementary — function calling handles the model-to-runtime bridge, while MCP handles the runtime-to-world bridge.[6]

Current State

Just 16 months after launch, MCP has achieved remarkable adoption across the AI industry:[2]

97M+
Monthly SDK downloads
10,000+
Active MCP servers
300+
MCP clients
$1.8B
Estimated market (2025)

Adoption Timeline

  • Nov 2024 — Anthropic open-sources MCP with Python & TypeScript SDKs.[1]
  • Mar 2025 — OpenAI adopts MCP across ChatGPT, Agents SDK, and Responses API.[2]
  • Mar 2025 — Streamable HTTP transport released, replacing SSE.[8]
  • Apr 2025 — Google DeepMind confirms MCP support in Gemini 2.5 Pro.[2]
  • Jun 2025 — Official OAuth 2.1 authorization spec published.[8]
  • Nov 2025 — Major spec update adds async operations and official server registry.[2]
  • Dec 2025 — Anthropic donates MCP to the Agentic AI Foundation under the Linux Foundation.[2]
Gartner predicts 40% of enterprise applications will feature task-specific AI agents by end of 2026 — up from under 5% in 2025. MCP is the connective tissue enabling that scale.[2]

Key Figures & Organizations

Creators & Governance

  • Anthropic — Created and open-sourced MCP in November 2024. Lead contributor to the specification.
  • Agentic AI Foundation (AAIF) — A directed fund under the Linux Foundation, co-founded by Anthropic, OpenAI, and Block in December 2025 to govern MCP's future.[2]

Major Adopters

  • OpenAI — Integrated MCP across ChatGPT desktop, Agents SDK, Responses API (March 2025).[2]
  • Google — Native MCP support in Gemini 2.5 Pro API and SDK.[2]
  • Microsoft — MCP support in Copilot Studio, Azure MCP server, native Windows 11 support.
  • Developer tools — Cursor, Replit, VS Code, Sourcegraph, Zed, Codeium.
  • Infrastructure — Cloudflare, Vercel, Netlify (MCP server hosting).
  • Enterprise — Block, Bloomberg, Amazon, and hundreds of Fortune 500 companies.

Supporting Members of AAIF

AWS, Google, Microsoft, Cloudflare, and Bloomberg are all supporting members of the Agentic AI Foundation, signaling industry-wide commitment to MCP as a shared standard.[2]

Security & Risks

MCP's rapid adoption has outpaced security practices. Over 13,000 MCP servers launched on GitHub in 2025 alone, and researchers have found serious vulnerabilities across the ecosystem.[4]

43% of MCP servers scanned contain command injection vulnerabilities. Roughly 1,000 servers are exposed on the public internet with zero authentication.[7]

Major Threat Categories

Academic research identifies 16 distinct threat scenarios across four phases of the MCP server lifecycle — creation, deployment, operation, and maintenance:[5]

  • Tool poisoning — A malicious MCP server can modify tool descriptions after installation, turning a benign-looking "weather" tool into a data exfiltration backdoor.[4]
  • Command injection — CVE-2025-6514 demonstrated that malicious servers could achieve remote code execution on client machines via crafted OAuth authorization endpoints.
  • Unintended actions — LLMs may invoke destructive tools (like delete_files) without explicit user intent, since agents autonomously choose which tools to call.[4]
  • Data aggregation — MCP servers often request broad permission scopes, and centralized token storage creates unprecedented data aggregation risks.

Real-World Incidents

  • A proof-of-concept demonstrated silent exfiltration of a user's entire WhatsApp history by combining tool poisoning with a legitimate whatsapp-mcp server.[4]
  • A critical vulnerability (CVSS 9.4) in Anthropic's own MCP Inspector tool could have enabled full remote code execution on developer machines.
  • Knostic's scan of nearly 2,000 internet-exposed MCP servers found that all verified servers lacked any form of authentication.[7]

Mitigations in Progress

The June 2025 OAuth 2.1 spec addressed authentication gaps, and the 2026 roadmap prioritizes audit trails, SSO integration, and gateway behavior standards for enterprise deployments.[3][8]

Open Questions

  • Governance neutrality — While MCP is now under the Linux Foundation, Anthropic remains the dominant contributor. Will other companies invest equally in the spec's evolution, or will MCP become a de facto Anthropic-led standard?
  • Identity and accountability — When an MCP tool performs an action, who is responsible — the end user, the AI agent, or the MCP server operator? The spec doesn't yet standardize attribution of actions.[4]
  • Context window pressure — A single MCP tool call can inject thousands of tokens into a conversation, potentially overwhelming the model's context window and degrading response quality. How should hosts manage this?
  • Tool quality and discovery — With 10,000+ servers and no mandatory quality bar, how will users find reliable, well-described tools? The official server registry (Nov 2025) is a start, but curation remains unsolved.
  • Versioning and lifecycle — MCP currently lacks standardized tool versioning or deprecation mechanisms. A breaking change in an MCP server can silently break agent workflows.
  • Agent safety at scale — As agents gain access to more powerful tools (file deletion, payment processing, email sending), the attack surface grows. The protocol needs formal consent and sandboxing primitives.

Where It's Headed

The 2026 MCP roadmap, published by the Agentic AI Foundation, focuses on four priority areas:[3]

1. Transport Scalability

Evolving Streamable HTTP to run statelessly across multiple server instances and behave correctly behind load balancers and proxies — critical for enterprise-scale deployments.

2. Agent-to-Agent Communication

As agentic systems grow more complex, MCP is being extended to support not just model-to-tool connections but agent-to-agent orchestration, enabling hierarchical task delegation.

3. Governance Maturation

A Contributor Ladder defining progression from community participant to core maintainer, plus a delegation model allowing Working Groups to publish updates within their domain without full core-maintainer review.[3]

4. Enterprise Readiness

Audit trails, SSO-integrated authentication, API gateway behavior standards, and configuration portability — the predictable set of problems enterprises encounter when deploying MCP at scale.[3]

The bigger picture: MCP is positioning itself as the TCP/IP of the agentic era — a foundational protocol layer that enables an ecosystem of interoperable AI services. Whether it achieves that ambition depends on continued multi-vendor investment and solving the security challenges that rapid adoption has exposed.