MCP (Model Context Protocol) Complete Guide: The New Standard for AI Tool Integration

Crazyrouter Team
February 23, 2026

MCP is quietly becoming the USB-C of AI integrations. Instead of building custom tool integrations for every AI model, MCP provides a standard protocol that works across models and platforms. Here's what you need to know.

What Is MCP?#

MCP (Model Context Protocol) is an open protocol developed by Anthropic that standardizes how AI models connect to external tools, data sources, and services. Think of it as a universal adapter between AI models and the real world.

The Problem MCP Solves#

Before MCP, connecting an AI model to external tools looked like this:

code
Claude → Custom integration → Your database
GPT-5  → Different custom integration → Your database
Gemini → Yet another integration → Your database

With MCP:

code
Claude ─┐
GPT-5  ─┼── MCP Protocol ── MCP Server ── Your database
Gemini ─┘

One integration, every model. That's the value proposition.

Core Concepts#

| Concept | Description |
| --- | --- |
| MCP Server | A service that exposes tools, resources, and prompts via the MCP protocol |
| MCP Client | An AI application that connects to MCP servers (e.g., Claude Desktop, Cursor) |
| Tools | Functions the AI can call (e.g., search database, send email) |
| Resources | Data the AI can read (e.g., files, database records) |
| Prompts | Pre-built prompt templates the server provides |
| Transport | Communication layer (stdio, HTTP/SSE) |
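On the wire, these concepts map onto JSON-RPC 2.0 messages. For example, a client discovers a server's tools with the `tools/list` method (message shape per the MCP specification; the `search_docs` tool shown is illustrative):

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```

The server replies with its tool catalog, which the client can then surface to the model:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      { "name": "search_docs", "description": "Search documentation by keyword" }
    ]
  }
}
```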

Architecture#

code
┌──────────────────────────────────────────────────┐
│                  AI Application                  │
│       (Claude Desktop, Cursor, Custom App)       │
│                                                  │
│  ┌────────────┐                                  │
│  │ MCP Client │                                  │
│  └─────┬──────┘                                  │
└────────┼─────────────────────────────────────────┘
         │ MCP Protocol (JSON-RPC over stdio/HTTP)
         │
┌────────┼─────────────────────────────────────────┐
│  ┌─────┴───────┐                                 │
│  │ MCP Server  │                                 │
│  │             │                                 │
│  │ ┌─────────┐ │                                 │
│  │ │  Tools  │ │ → search_docs(), create_ticket()│
│  │ ├─────────┤ │                                 │
│  │ │Resources│ │ → files, database records       │
│  │ ├─────────┤ │                                 │
│  │ │ Prompts │ │ → templates, workflows          │
│  │ └─────────┘ │                                 │
│  └─────────────┘                                 │
│                                                  │
│  External Services: Database, API, File System   │
└──────────────────────────────────────────────────┘

Building Your First MCP Server#

Python MCP Server#

python
# server.py
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

app = Server("my-tools")

@app.list_tools()
async def list_tools():
    return [
        Tool(
            name="search_docs",
            description="Search documentation by keyword",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search query"
                    },
                    "limit": {
                        "type": "integer",
                        "description": "Max results",
                        "default": 5
                    }
                },
                "required": ["query"]
            }
        ),
        Tool(
            name="get_user",
            description="Get user information by ID",
            inputSchema={
                "type": "object",
                "properties": {
                    "user_id": {
                        "type": "string",
                        "description": "User ID"
                    }
                },
                "required": ["user_id"]
            }
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "search_docs":
        # Your search logic here
        results = search_documentation(arguments["query"], arguments.get("limit", 5))
        return [TextContent(type="text", text=str(results))]

    elif name == "get_user":
        user = get_user_by_id(arguments["user_id"])
        return [TextContent(type="text", text=str(user))]

    raise ValueError(f"Unknown tool: {name}")

def search_documentation(query, limit):
    # Replace with your actual search logic
    return [{"title": f"Doc about {query}", "url": f"https://docs.example.com/{query}"}]

def get_user_by_id(user_id):
    # Replace with your actual database query
    return {"id": user_id, "name": "John Doe", "email": "[email]"}

async def main():
    async with stdio_server() as (read_stream, write_stream):
        # The low-level Server.run() also takes initialization options
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
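Before wiring this into a client, you can exercise the dispatch logic directly. This standalone sketch mirrors the stubs above (no MCP transport involved):

```python
# Minimal replica of server.py's dispatch path, runnable on its own.
import asyncio

def search_documentation(query, limit):
    # Stub search, as in the article
    return [{"title": f"Doc about {query}", "url": f"https://docs.example.com/{query}"}]

async def call_tool(name: str, arguments: dict):
    if name == "search_docs":
        return search_documentation(arguments["query"], arguments.get("limit", 5))
    raise ValueError(f"Unknown tool: {name}")

results = asyncio.run(call_tool("search_docs", {"query": "auth"}))
print(results[0]["url"])  # https://docs.example.com/auth
```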

Node.js MCP Server#

typescript
// server.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-tools", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "search_docs",
      description: "Search documentation by keyword",
      inputSchema: {
        type: "object",
        properties: {
          query: { type: "string", description: "Search query" },
          limit: { type: "number", description: "Max results", default: 5 }
        },
        required: ["query"]
      }
    },
    {
      name: "create_ticket",
      description: "Create a support ticket",
      inputSchema: {
        type: "object",
        properties: {
          title: { type: "string" },
          description: { type: "string" },
          priority: { type: "string", enum: ["low", "medium", "high"] }
        },
        required: ["title", "description"]
      }
    }
  ]
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  switch (name) {
    case "search_docs": {
      const results = await searchDocs(args.query, args.limit || 5);
      return { content: [{ type: "text", text: JSON.stringify(results) }] };
    }
    case "create_ticket": {
      const ticket = await createTicket(args.title, args.description, args.priority);
      return { content: [{ type: "text", text: `Ticket created: ${ticket.id}` }] };
    }
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});

async function searchDocs(query: string, limit: number) {
  // Your search implementation
  return [{ title: `Result for ${query}`, score: 0.95 }];
}

async function createTicket(title: string, description: string, priority?: string) {
  // Your ticket creation logic
  return { id: "TICKET-123", title, status: "open" };
}

const transport = new StdioServerTransport();
await server.connect(transport);

Install Dependencies#

bash
# Python
pip install mcp

# Node.js
npm install @modelcontextprotocol/sdk

Connecting MCP Servers to AI Clients#

Claude Desktop#

Add to claude_desktop_config.json:

json
{
  "mcpServers": {
    "my-tools": {
      "command": "python",
      "args": ["server.py"],
      "cwd": "/path/to/your/server"
    }
  }
}

Cursor IDE#

Add to Cursor settings:

json
{
  "mcp": {
    "servers": {
      "my-tools": {
        "command": "node",
        "args": ["server.js"]
      }
    }
  }
}

Custom Application#

python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess and talk to it over stdio
    server_params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # Call a tool
            result = await session.call_tool("search_docs", {"query": "authentication"})
            print("Results:", result.content[0].text)

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Popular MCP Servers#

The MCP ecosystem is growing fast. Here are some useful community servers:

| Server | Purpose | Install |
| --- | --- | --- |
| mcp-server-sqlite | SQLite database access | pip install mcp-server-sqlite |
| mcp-server-github | GitHub API integration | npm install @mcp/github |
| mcp-server-postgres | PostgreSQL queries | pip install mcp-server-postgres |
| mcp-server-filesystem | File system operations | npm install @mcp/filesystem |
| mcp-server-brave-search | Web search | npm install @mcp/brave-search |
| mcp-server-slack | Slack messaging | npm install @mcp/slack |

MCP vs Function Calling#

How does MCP compare to OpenAI's function calling or Claude's tool use?

| Feature | MCP | Function Calling |
| --- | --- | --- |
| Protocol | Standardized | Provider-specific |
| Discovery | Dynamic (list_tools) | Static (defined in request) |
| Multi-model | ✅ | ❌ (per provider) |
| Resources | ✅ (files, data) | ❌ (tools only) |
| Prompts | ✅ (templates) | ❌ |
| Transport | stdio, HTTP | HTTP only |
| Ecosystem | Growing | Mature |

MCP doesn't replace function calling — it wraps it in a standard protocol. Your MCP server's tools get translated into function calls for whatever model the client uses.
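That translation is mechanical, because MCP's inputSchema is already JSON Schema. A sketch of the MCP-to-OpenAI conversion (the wrapper function name is ours, not from either SDK):

```python
# Convert an MCP tool definition into OpenAI's function-calling "tools" shape.
def mcp_tool_to_openai(tool: dict) -> dict:
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP's inputSchema is JSON Schema, which is exactly what
            # OpenAI expects under "parameters"
            "parameters": tool.get("inputSchema", {"type": "object", "properties": {}}),
        },
    }

mcp_tool = {
    "name": "search_docs",
    "description": "Search documentation by keyword",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

openai_tool = mcp_tool_to_openai(mcp_tool)
print(openai_tool["function"]["name"])  # search_docs
```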

Building Production MCP Servers#

Best Practices#

  1. Error handling — Return clear error messages, not stack traces
python
@app.call_tool()
async def call_tool(name: str, arguments: dict):
    try:
        result = await execute_tool(name, arguments)
        return [TextContent(type="text", text=str(result))]
    except ValidationError as e:
        return [TextContent(type="text", text=f"Invalid input: {e.message}")]
    except NotFoundError as e:
        return [TextContent(type="text", text=f"Not found: {e.message}")]
    except Exception as e:
        return [TextContent(type="text", text=f"Error: {str(e)}")]
  2. Input validation — Validate before executing

  3. Rate limiting — Protect external APIs from AI-driven request storms

  4. Logging — Log all tool calls for debugging and auditing

  5. Timeouts — Set reasonable timeouts for external calls
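Rate limiting and timeouts can share one wrapper around tool execution. A minimal sketch (the ToolGuard class is our own, not part of the MCP SDK):

```python
import asyncio
import time

class ToolGuard:
    """Sliding-window rate limit plus a per-call timeout for tool execution."""

    def __init__(self, max_calls: int, per_seconds: float, timeout: float):
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self.timeout = timeout
        self._calls: list[float] = []

    def allow(self) -> bool:
        # Drop timestamps that have aged out of the window
        now = time.monotonic()
        self._calls = [t for t in self._calls if now - t < self.per_seconds]
        if len(self._calls) >= self.max_calls:
            return False
        self._calls.append(now)
        return True

    async def run(self, coro):
        if not self.allow():
            coro.close()  # avoid a "never awaited" warning
            return "Error: rate limit exceeded, try again later"
        try:
            return await asyncio.wait_for(coro, timeout=self.timeout)
        except asyncio.TimeoutError:
            return "Error: tool call timed out"

async def fast_tool():
    return "ok"

async def slow_tool():
    await asyncio.sleep(1)
    return "done"

async def demo():
    guard = ToolGuard(max_calls=2, per_seconds=60, timeout=0.1)
    print(await guard.run(fast_tool()))  # ok
    print(await guard.run(slow_tool()))  # Error: tool call timed out
    print(await guard.run(fast_tool()))  # Error: rate limit exceeded, try again later

asyncio.run(demo())
```

Returning error strings instead of raising keeps failures readable for the model, per best practice 1 above.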

HTTP Transport (for Remote Servers)#

python
from mcp.server.sse import SseServerTransport
from starlette.applications import Starlette
from starlette.routing import Route

transport = SseServerTransport("/messages")

async def handle_sse(request):
    async with transport.connect_sse(request.scope, request.receive, request._send) as streams:
        await app.run(streams[0], streams[1])

starlette_app = Starlette(routes=[
    Route("/sse", endpoint=handle_sse),
    Route("/messages", endpoint=transport.handle_post_message, methods=["POST"])
])

Using MCP with Crazyrouter#

If you're building AI applications that use MCP, you can route the underlying model calls through Crazyrouter for cost savings:

python
from openai import OpenAI

# Your AI application uses Crazyrouter for model access
client = OpenAI(
    api_key="your-crazyrouter-key",
    base_url="https://api.crazyrouter.com/v1"
)

# MCP tools are passed as function definitions
response = client.chat.completions.create(
    model="claude-sonnet-4-5",  # or gpt-5, gemini-3-pro
    messages=[{"role": "user", "content": "Search our docs for authentication guides"}],
    tools=mcp_tools_as_openai_format,  # Converted from MCP tool definitions
    tool_choice="auto"
)

This gives you model flexibility (switch between Claude, GPT-5, Gemini) while keeping your MCP tool integrations unchanged.

FAQ#

Is MCP only for Claude?#

No. MCP is an open protocol. While Anthropic created it, any AI application can implement an MCP client. Cursor, Windsurf, and several other tools already support MCP.

Do I need to rewrite my existing integrations?#

No. You can wrap existing APIs and tools in an MCP server. The server acts as an adapter between the MCP protocol and your existing code.

Is MCP production-ready?#

The protocol is stable and used in production by Claude Desktop and several IDE integrations. The ecosystem is still growing, so some community servers may be less mature.

How does MCP handle authentication?#

MCP itself doesn't define authentication. Your MCP server handles auth internally — API keys, OAuth tokens, etc. are managed by the server, not exposed to the AI model.
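In practice that usually means reading credentials from the server process's environment; clients like Claude Desktop can set variables via an env field in the server's config entry. A minimal sketch (the variable name is illustrative):

```python
import os

def get_auth_headers() -> dict:
    # Read the key at call time so it stays inside the server process;
    # the model only ever sees tool results, never the credential itself.
    key = os.environ.get("MY_SERVICE_API_KEY")
    if not key:
        raise RuntimeError("MY_SERVICE_API_KEY is not set")
    return {"Authorization": f"Bearer {key}"}
```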

Can I use MCP with local/open-source models?#

Yes. Any application that implements the MCP client protocol can use MCP servers. Tools like Ollama and LM Studio are adding MCP support.

Summary#

MCP is becoming the standard way to connect AI models to external tools and data. If you're building AI applications, investing in MCP servers now means your integrations work across Claude, GPT-5, Gemini, and whatever comes next.

Start with the Python or Node.js SDK, build a server for your most common tool, and connect it to your AI workflow. For the model layer, Crazyrouter gives you access to every major model through one API — pair it with MCP for maximum flexibility.
