MCP Core Architecture
Understand how the Model Context Protocol works: main components, fundamental concepts and communication flows.
General Architecture
MCP follows a Client-Host-Server architecture based on JSON-RPC 2.0. This architecture enables secure and standardized communication between AI applications and data sources.
MCP Architecture Diagram
The Host can have multiple Clients, each connected to a different Server. Servers access their Data Sources securely and in a controlled manner.
Core Components
MCP Host
The AI application that wants to access data and tools (e.g. Claude Desktop, IDEs like Cursor, VS Code).
MCP Client
Maintains a 1:1 connection with each MCP server. Manages the JSON-RPC protocol and bidirectional communication.
MCP Server
Lightweight program that exposes resources, tools and prompts through the standardized MCP protocol.
Data Sources
Local data sources (files, databases) or remote ones (APIs) that the server can access securely.
Core Concepts
MCP defines four main concepts that allow servers to expose capabilities to LLMs in a structured way.
Resources
Data and content that the server exposes to the LLM. Can be files, documents, database entries, etc.
Example: A file system server exposes files as resources. The LLM can read them but cannot modify them directly.
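To make the read-only nature of resources concrete, here is a minimal, hypothetical sketch of a server exposing an in-memory "file system". The function names and the `RESOURCES` dictionary are illustrative, not an SDK API; the point is that there is a list and a read operation but deliberately no write path:

```python
# Hypothetical read-only resource store; real MCP SDKs define their own APIs.
RESOURCES = {
    "file:///notes/todo.txt": "Buy milk\nShip MCP server",
    "file:///notes/ideas.txt": "Expose the database as a resource",
}

def list_resources():
    """Return the URIs this server exposes to the LLM."""
    return sorted(RESOURCES)

def read_resource(uri: str) -> str:
    """Return the content of a resource; note there is no write function."""
    if uri not in RESOURCES:
        raise KeyError(f"unknown resource: {uri}")
    return RESOURCES[uri]
```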
Tools
Actions that the LLM can request the server to execute. They allow modifying data, performing operations, and producing side effects.
Example: A 'create_file' tool allows the LLM to create new files. The server validates and executes the action.
Prompts
Reusable prompt templates that the server can provide to the LLM to guide its behavior.
Example: A 'code_review' prompt can contain specific instructions for reviewing code consistently.
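A prompt is essentially a named, parameterized template. The sketch below is hypothetical (the template text and `get_prompt` helper are made up for illustration), but it shows the reuse idea: the server fills the template with the caller's arguments and hands the result to the LLM:

```python
# Hypothetical reusable prompt template exposed by a server.
CODE_REVIEW_PROMPT = (
    "You are a careful reviewer. Review the following {language} code "
    "for bugs, style issues, and missing tests:\n\n{code}"
)

PROMPTS = {"code_review": CODE_REVIEW_PROMPT}

def get_prompt(name: str, arguments: dict) -> str:
    """Fill the named template with the caller's arguments."""
    return PROMPTS[name].format(**arguments)
```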
Sampling
Allows the server to request completions from the LLM, useful for generating content or processing data.
Example: A server can ask the LLM to generate documentation based on the source code it exposes as a resource.
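Since sampling reverses the usual direction (the server asks the client's LLM for a completion), it still travels as a JSON-RPC request. The sketch below builds such a request; the method name follows the MCP specification's `sampling/createMessage`, but treat the exact payload shape here as a simplified assumption:

```python
import json
from itertools import count

_ids = count(1)  # each request gets a fresh id for correlation

def make_sampling_request(prompt_text: str) -> str:
    """Build a JSON-RPC request asking the client's LLM for a completion."""
    request = {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "sampling/createMessage",
        "params": {
            "messages": [
                {"role": "user",
                 "content": {"type": "text", "text": prompt_text}}
            ],
            "maxTokens": 200,  # simplified; real params carry more fields
        },
    }
    return json.dumps(request)
```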
Request/Response Flow
The MCP protocol uses JSON-RPC 2.0 for bidirectional communication. This diagram shows the complete flow when an LLM requests to execute a tool.
Complete Flow: LLM → Tool → Result
Practical example: the LLM requests the weather for a city using the get_weather tool
Flow Breakdown
The user asks a question. The LLM analyzes and decides it needs an external tool to answer.
The Host sends a tools/call request to the Client, which converts it into a JSON-RPC 2.0 message and sends it to the Server.
The Server validates the request, executes the tool, and queries the data source (a weather API) to obtain real data.
The Server formats the result and returns it to the Client as a JSON-RPC response. The Client delivers it to the Host/LLM.
The LLM processes the tool result, integrates it into its context and generates a natural response for the user: "In Madrid it's sunny, with a temperature of 22°C".
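The flow above can be sketched end to end in a few lines. Everything here is illustrative (the fake weather lookup, the simplified result shape); what matters is the round trip: the Client wraps the call as JSON-RPC, the Server validates and executes, and the matching `id` ties the response back to the request:

```python
import json

# Step 3: the Server's tool, backed by a fake in-memory weather "API".
FAKE_WEATHER_API = {"Madrid": ("sunny", 22)}

def server_handle(raw_request: str) -> str:
    """Server side: validate the JSON-RPC request, run the tool, reply."""
    req = json.loads(raw_request)
    assert req["jsonrpc"] == "2.0" and req["method"] == "tools/call"
    city = req["params"]["arguments"]["city"]
    condition, temp = FAKE_WEATHER_API[city]
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],  # same id, so the Client can correlate the reply
        "result": {"content": [
            {"type": "text",
             "text": f"The weather in {city} is {condition}, {temp}°C"}
        ]},
    })

def client_call_tool(name: str, arguments: dict, request_id: int = 1) -> str:
    """Client side: wrap the Host's tools/call in JSON-RPC and dispatch it."""
    raw = json.dumps({
        "jsonrpc": "2.0", "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })
    response = json.loads(server_handle(raw))
    assert response["id"] == request_id  # correlate request and response
    return response["result"]["content"][0]["text"]
```

The text returned here is what the LLM would then integrate into its context to phrase the final answer for the user.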
Key Point
The flow is request/response: the LLM waits for the tool result before continuing. Each JSON-RPC message carries a unique id that allows correlating a request with its response.
JSON-RPC Message Example
Typical structure of a request and response in the MCP protocol
Request
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {
      "city": "Madrid"
    }
  },
  "id": 1
}
Response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "The weather in Madrid is sunny, 22°C"
      }
    ]
  }
}
Ready to dive deeper?
Explore more about resources, tools, prompts and the complete protocol specification.