Model Context Protocol for AI Integration
The Model Context Protocol (MCP) is an open standard that solves one of the biggest challenges in AI development: connecting language models to external tools and data sources. Instead of building a custom integration for every AI provider, MCP provides a universal interface that works across Claude, GPT, Gemini, and any other MCP-compatible client.
Think of MCP as the USB-C of AI integrations. Before USB-C, every device had a different charging port. Before MCP, every AI tool integration was a custom implementation. This guide shows you how to build MCP servers that expose your data and capabilities to AI models through a standardized protocol.
How MCP Works
MCP follows a client-server architecture. AI applications (like Claude Desktop or Cursor) act as MCP clients. Your code acts as an MCP server that exposes three types of capabilities:
- Tools — Functions the AI can call (query database, create ticket, send email)
- Resources — Data the AI can read (files, database records, API responses)
- Prompts — Reusable prompt templates with parameters
┌─────────────────┐      MCP Protocol      ┌─────────────────┐
│    AI Client    │◄──────────────────────►│    MCP Server   │
│  (Claude, etc.) │     JSON-RPC over      │   (Your Code)   │
│                 │      stdio / SSE       │                 │
│  ┌───────────┐  │                        │  ┌───────────┐  │
│  │ Tool Call │──┼────────────────────────┼─►│ Database  │  │
│  │ Resource  │◄─┼────────────────────────┼──│ API       │  │
│  │ Prompt    │──┼────────────────────────┼─►│ Files     │  │
│  └───────────┘  │                        │  └───────────┘  │
└─────────────────┘                        └─────────────────┘
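On the wire, every interaction is a JSON-RPC 2.0 message. For example, when a client invokes a tool it sends a `tools/call` request and receives a result (field values below are illustrative, and payloads are abbreviated):

```json
// Client → Server: invoke a tool
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "create_task",
    "arguments": { "projectId": "p-1", "title": "Fix login bug", "priority": "high" }
  }
}

// Server → Client: tool result
{
  "jsonrpc": "2.0",
  "id": 42,
  "result": {
    "content": [{ "type": "text", "text": "{\"success\":true,\"taskId\":\"t-9\"}" }]
  }
}
```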
Building Your First MCP Server
Let’s build an MCP server that gives AI models access to a project management system. We will use the official TypeScript SDK, though Python and Rust SDKs are also available.
// src/index.ts — MCP Server for Project Management
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

interface Task {
  id: string;
  projectId: string;
  title: string;
  description: string;
  priority: "low" | "medium" | "high" | "critical";
  assignee: string;
  status: "todo" | "in_progress" | "review" | "done";
  createdAt: string;
}

const server = new McpServer({
  name: "project-manager",
  version: "1.0.0",
});

// Database simulation (replace with real DB)
const projects = new Map<string, unknown>();
const tasks = new Map<string, Task>();

// ─── TOOLS: Actions the AI can perform ───
server.tool(
  "create_task",
  "Create a new task in a project",
  {
    projectId: z.string().describe("The project ID"),
    title: z.string().describe("Task title"),
    description: z.string().describe("Task description"),
    priority: z.enum(["low", "medium", "high", "critical"]),
    assignee: z.string().optional().describe("Email of assignee"),
  },
  async ({ projectId, title, description, priority, assignee }) => {
    const task: Task = {
      id: crypto.randomUUID(),
      projectId, title, description, priority,
      assignee: assignee ?? "unassigned",
      status: "todo",
      createdAt: new Date().toISOString(),
    };
    tasks.set(task.id, task);
    return {
      content: [{
        type: "text",
        text: JSON.stringify({ success: true, taskId: task.id, message: "Task created" }),
      }],
    };
  }
);

server.tool(
  "search_tasks",
  "Search tasks by status, assignee, or keyword",
  {
    query: z.string().optional().describe("Keyword to match in title or description"),
    status: z.enum(["todo", "in_progress", "review", "done"]).optional(),
    assignee: z.string().optional(),
  },
  async ({ query, status, assignee }) => {
    let results = Array.from(tasks.values());
    if (status) results = results.filter(t => t.status === status);
    if (assignee) results = results.filter(t => t.assignee === assignee);
    if (query) {
      const q = query.toLowerCase();
      results = results.filter(t =>
        t.title.toLowerCase().includes(q) ||
        t.description.toLowerCase().includes(q)
      );
    }
    return {
      content: [{
        type: "text",
        text: JSON.stringify({ tasks: results, count: results.length }),
      }],
    };
  }
);

// ─── RESOURCES: Data the AI can read ───
// Project details and statistics, addressed as project://{projectId}
server.resource(
  "project",
  new ResourceTemplate("project://{projectId}", { list: undefined }),
  async (uri, { projectId }) => {
    // The SDK parses {projectId} out of the URI and passes it to the callback
    const project = projects.get(String(projectId));
    const projectTasks = Array.from(tasks.values())
      .filter(t => t.projectId === projectId);
    return {
      contents: [{
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify({
          project,
          stats: {
            total: projectTasks.length,
            todo: projectTasks.filter(t => t.status === "todo").length,
            inProgress: projectTasks.filter(t => t.status === "in_progress").length,
            done: projectTasks.filter(t => t.status === "done").length,
          },
        }),
      }],
    };
  }
);

// ─── Start the server ───
const transport = new StdioServerTransport();
await server.connect(transport);
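The server above registers tools and a resource; the third capability type, prompts, follows the same registration pattern. Here is a hedged sketch: the prompt text is built by a plain function so its shape is easy to see, and the `server.prompt(...)` registration (shown in a comment, using the same zod-schema convention as the tools) would go before `server.connect`. The `standup_summary` prompt name and its wording are illustrative, not part of any SDK.

```typescript
// MCP prompts return a list of messages the client can feed to the model.
type PromptMessage = { role: "user"; content: { type: "text"; text: string } };

// Hypothetical prompt builder for a recurring standup-summary request.
function standupPrompt(projectId: string, days: number): { messages: PromptMessage[] } {
  return {
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Summarize activity in project ${projectId} over the last ${days} days. ` +
              `Group by assignee and flag blocked tasks.`,
      },
    }],
  };
}

// Registration with the SDK would look roughly like:
// server.prompt(
//   "standup_summary",
//   "Generate a standup summary for a project",
//   { projectId: z.string(), days: z.string() },
//   ({ projectId, days }) => standupPrompt(projectId, Number(days))
// );
```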
Connecting to Claude Desktop
Configure Claude Desktop to discover your MCP server by adding it to its configuration file (the path below is the macOS location):
// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "project-manager": {
      "command": "node",
      "args": ["path/to/dist/index.js"],
      "env": {
        "DATABASE_URL": "postgresql://localhost/projects"
      }
    }
  }
}
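The `command`/`args` pair assumes the TypeScript has been compiled to `dist/index.js`. A minimal `package.json` supporting that layout might look like this (package name and version ranges are illustrative); note `"type": "module"` is required because the server uses ESM imports and top-level `await`:

```json
{
  "name": "project-manager-mcp",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0",
    "zod": "^3.23.0"
  },
  "devDependencies": {
    "typescript": "^5.4.0"
  }
}
```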
Production Patterns for MCP Servers
Building a toy MCP server is straightforward. Making it production-ready requires careful attention to authentication, rate limiting, error handling, and observability.
Authentication and Authorization
// Middleware pattern for MCP tool authorization.
// ToolHandler matches your server's handler signature; verifyToken and
// checkPermission are application-specific helpers.
function withAuth(handler: ToolHandler): ToolHandler {
  return async (params, context) => {
    const token = context.meta?.authToken;
    if (!token) {
      return {
        content: [{ type: "text", text: "Authentication required" }],
        isError: true,
      };
    }
    const user = await verifyToken(token);
    const hasPermission = await checkPermission(user, params);
    if (!hasPermission) {
      return {
        content: [{ type: "text", text: "Insufficient permissions" }],
        isError: true,
      };
    }
    return handler(params, { ...context, user });
  };
}
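Here is a runnable sketch of the same pattern with in-memory stand-ins for `verifyToken` and `checkPermission` (in production both would call your real identity provider), plus a usage example. The types and the `tok-alice` session are illustrative assumptions:

```typescript
type ToolResult = { content: { type: "text"; text: string }[]; isError?: boolean };
type ToolContext = { meta?: { authToken?: string }; user?: { id: string } };
type ToolHandler = (params: Record<string, unknown>, context: ToolContext) => Promise<ToolResult>;

// In-memory stand-ins; real implementations would call your auth service.
const sessions = new Map([["tok-alice", { id: "alice" }]]);
async function verifyToken(token: string) { return sessions.get(token) ?? null; }
async function checkPermission(user: { id: string }, _params: unknown) { return true; }

function withAuth(handler: ToolHandler): ToolHandler {
  return async (params, context) => {
    const token = context.meta?.authToken;
    if (!token) {
      return { content: [{ type: "text", text: "Authentication required" }], isError: true };
    }
    const user = await verifyToken(token);
    if (!user || !(await checkPermission(user, params))) {
      return { content: [{ type: "text", text: "Insufficient permissions" }], isError: true };
    }
    // Pass the resolved user downstream so handlers can audit or scope queries
    return handler(params, { ...context, user });
  };
}

// Usage: wrap any handler before registering it with server.tool(...)
const deleteTask = withAuth(async (params) => ({
  content: [{ type: "text", text: `Deleted task ${params.taskId}` }],
}));
```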
Error Handling and Validation
MCP servers should return structured errors that help the AI model understand what went wrong and potentially retry with corrected parameters:
server.tool(
  "update_task",
  "Update task status",
  {
    taskId: z.string().describe("ID of the task to update"),
    status: z.enum(["todo", "in_progress", "review", "done"]),
  },
  async (params) => {
    try {
      const task = tasks.get(params.taskId);
      if (!task) {
        return {
          content: [{ type: "text", text: JSON.stringify({
            error: "TASK_NOT_FOUND",
            message: "No task with that ID exists",
            suggestion: "Use search_tasks to find the correct task ID",
          })}],
          isError: true,
        };
      }
      // ... update logic
    } catch (error) {
      return {
        content: [{ type: "text", text: JSON.stringify({
          error: "INTERNAL_ERROR",
          message: error instanceof Error ? error.message : String(error),
        })}],
        isError: true,
      };
    }
  }
);
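Rate limiting, mentioned earlier as a production concern, can use the same wrapper idea as authorization. Below is a minimal fixed-window sketch; the window size, limits, and key format are illustrative, and a production deployment would likely want a token bucket or a shared store such as Redis rather than in-process state:

```typescript
// Returns true if the call is allowed, false if the caller is over the limit.
type Limiter = (key: string) => boolean;

// Fixed-window counter: allow up to maxCalls per windowMs per key.
function fixedWindowLimiter(maxCalls: number, windowMs: number): Limiter {
  const windows = new Map<string, { start: number; count: number }>();
  return (key) => {
    const now = Date.now();
    const w = windows.get(key);
    if (!w || now - w.start >= windowMs) {
      windows.set(key, { start: now, count: 1 }); // new window for this key
      return true;
    }
    if (w.count >= maxCalls) return false; // over limit in current window
    w.count += 1;
    return true;
  };
}

// Usage inside a tool handler: reject before doing any work.
const allow = fixedWindowLimiter(5, 60_000); // 5 calls per minute per key
function guardedCall(user: string): { ok: boolean; error?: string } {
  if (!allow(`create_task:${user}`)) {
    return { ok: false, error: "RATE_LIMITED" };
  }
  return { ok: true };
}
```

A structured `RATE_LIMITED` error, like the errors above, lets the model back off and retry later instead of failing opaquely.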
When NOT to Use MCP
MCP adds complexity that isn’t always justified. Skip it when you have a single AI provider with good native integrations, when your tool needs real-time streaming responses, or when the overhead of the protocol layer exceeds the benefit. In those cases, evaluate whether a direct API integration would be simpler for your specific use case.
Key Takeaways
The Model Context Protocol standardizes how AI models interact with external systems. It eliminates vendor lock-in, reduces integration maintenance, and provides a consistent developer experience. Start with one or two tools, validate the pattern, and expand as you identify more AI-accessible capabilities in your system.