Key Features
- 3-Layer Architecture: Host (LLM app) → Client (MCP connector) → Server (tools/resources/prompts), communicating over JSON-RPC 2.0.
- Core Primitives: Tools (callable functions), Resources (read-only data blobs), and Prompts (reusable templates), each described by typed schemas (see the server sketch after this list).
- Secure Invocation Flow: Per-tool user consent, JSON Schema validation of arguments, and host-controlled tokens; servers have no callback capability into the host.
- Cross-Model Compatibility: USB-C analogy - one connector works with Claude, GPT-4, Gemini, Bedrock, and other LLM applications.
- Typed Schema System: Tools advertise JSON Schemas via tools/list, so hosts can validate arguments before execution instead of trusting free-form model output.
- Vendor-Neutral Design: Apache 2.0 open-source protocol adopted across OpenAI, Google, AWS, and Anthropic ecosystems.
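To make the three primitives concrete, here is a minimal server sketch using the FastMCP helper from the official Python SDK (pip install mcp); the specific tool, resource, and prompt are illustrative, not part of the spec:
# minimal_server.py - one Tool, one Resource, one Prompt (names are illustrative)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Tool: a callable function with a typed signature."""
    return a + b

@mcp.resource("config://app")
def app_config() -> str:
    """Resource: read-only data addressed by URI."""
    return "debug=false"

@mcp.prompt()
def review_code(code: str) -> str:
    """Prompt: a reusable template the host can offer to users."""
    return f"Please review this code:\n\n{code}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
The SDK derives each tool's JSON Schema from the Python type hints, which is what the host later receives from tools/list.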
Use Cases
- Cross-model tool reuse - Same Postgres MCP server serves Claude, GPT-4, and Bedrock applications
- IDE AI workflows - Cursor invokes GitHub MCP for PR reviews directly inside the editor
- Local OS integration - Windows AI Foundry exposes file-system and registry tools to LLMs
- Enterprise data access - Secure connection to databases, APIs, and internal systems
- Development automation - Git, Docker, AWS, and cloud service integrations
Pros & Cons
Advantages
- USB-C analogy: one connector works with any tool across all LLM providers
- Typed schemas constrain tool inputs, reducing (though not eliminating) injection risk
- Vendor-neutral protocol adopted by OpenAI, Google, AWS, and Anthropic
- Rich ecosystem of ready-to-run reference servers
- Secure per-tool consent model with host-controlled authentication
Disadvantages
- Prompt bloat when 100+ tools are available (mitigated by RAG-MCP-style retrieval; see the sketch after this list)
- Tool-poisoning risk; mitigation requires signed manifests and registry validation
- Spec still evolving, with the possibility of breaking changes between minor versions
- Server setup complexity for custom tool development
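A rough sketch of the RAG-MCP idea referenced above: rather than injecting every tool schema into the prompt, embed the tool descriptions and retrieve only the top-k matches per query. The embed() helper is a hypothetical stand-in for a real embedding model, not part of the MCP SDK:
# Hypothetical sketch of RAG-MCP-style tool selection.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in a real embedding model here")

def select_tools(query: str, tools: list[dict], k: int = 5) -> list[dict]:
    """Return the k tools whose descriptions are most similar to the query."""
    q = embed(query)
    q = q / np.linalg.norm(q)
    scored = []
    for tool in tools:
        d = embed(tool["description"])
        scored.append((float(q @ (d / np.linalg.norm(d))), tool))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [tool for _, tool in scored[:k]]

# Only the selected schemas are injected into the LLM prompt,
# keeping context size bounded as the tool registry grows.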
Architecture & Core Concepts
- Host Layer: LLM applications like Claude Desktop, Cursor, ChatGPT Desktop that consume MCP services
- Client Layer: MCP connector inside the host application handling JSON-RPC 2.0 communication
- Server Layer: Exposes Tools, Resources, and Prompts via standardized JSON-RPC 2.0 interface
- Discovery Mechanism: The client calls tools/list to receive typed schema definitions for the available tools (see the client sketch after this list)
- Security Model: Per-tool user consent, JSON schema validation, signed tokens, no server callback capability
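As a client-side sketch of this flow using the official Python SDK; the server command is an assumption (any stdio MCP server works):
# discover.py - connect to a stdio MCP server and list its tools
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Assumed server command; substitute any stdio MCP server
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # JSON-RPC handshake
            tools = await session.list_tools()  # sends tools/list
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())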
Code Examples
Quick Server Setup
# Install the Python SDK (for building custom servers)
pip install mcp
# Grab the reference servers to browse
git clone https://github.com/modelcontextprotocol/servers
# Run the reference filesystem server (Node-based) rooted at /tmp
npx -y @modelcontextprotocol/server-filesystem /tmp
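To attach the server to a host, most hosts take a declarative config; for example, Claude Desktop reads claude_desktop_config.json (the server name "filesystem" is arbitrary):
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}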
Tool Discovery
// Host calls tools/list
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
// Server responds with typed schemas
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "inputSchema": {
          "type": "object",
          "properties": {
            "lat": {"type": "number"},
            "lon": {"type": "number"}
          },
          "required": ["lat", "lon"]
        }
      }
    ]
  }
}
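Invocation follows the same request/response pattern via tools/call; the arguments must satisfy the advertised inputSchema (the coordinates below are illustrative):
// Host calls tools/call after user consent
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {"lat": 40.7, "lon": -74.0}
  }
}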
Resource Access
// List available resources
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/list",
  "params": {}
}
// Read a specific resource
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "resources/read",
  "params": {
    "uri": "file:///repo/README.md"
  }
}
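For reference, a typical resources/read response returns a contents array holding text (or a base64 blob); the body shown here is illustrative:
// Server responds with the resource contents
{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "contents": [
      {
        "uri": "file:///repo/README.md",
        "mimeType": "text/markdown",
        "text": "# My Project\n..."
      }
    ]
  }
}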
Security Model
# MCP Security Flow:
# 1. User grants per-tool consent (UI toggle/OAuth)
# 2. Host validates JSON schema before execution
# 3. Host-controlled token signs all requests
# 4. Servers cannot call back to host
# 5. Results injected into LLM context safely
# Enterprise Extensions:
# - ETDI: OAuth2 + policy engine
# - RAG-MCP: Vector-based tool selection
# - Safety Scanner: Code vulnerability detection
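A sketch of step 2 above (host-side argument validation), using the third-party jsonschema package; the schema and arguments are illustrative:
# validate_args.py - reject tool calls whose arguments violate the schema
from jsonschema import validate, ValidationError

input_schema = {
    "type": "object",
    "properties": {
        "lat": {"type": "number"},
        "lon": {"type": "number"},
    },
    "required": ["lat", "lon"],
}

def safe_to_call(arguments: dict) -> bool:
    """Return True only if arguments satisfy the tool's inputSchema."""
    try:
        validate(instance=arguments, schema=input_schema)
        return True
    except ValidationError:
        return False

print(safe_to_call({"lat": 40.7, "lon": -74.0}))  # True
print(safe_to_call({"lat": "not-a-number"}))      # False
Validation happens in the host, before any request reaches the server, so a model that emits malformed arguments never triggers an execution.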