Zubnet AI Learning Wiki › MCP
Tools

MCP

Also known as: Model Context Protocol
An open protocol (created by Anthropic) that standardizes how AI models connect to external tools and data sources. Think of it as USB-C for AI: one standard interface that replaces bespoke integrations built for each tool. MCP servers expose capabilities; MCP clients (such as Claude) consume them.

Why It Matters

Before MCP, every AI tool integration was custom-built. With MCP, a tool is built once and works with any compatible AI. Claude, Cursor, and others already support it. This is how AI moves from chatbot toward genuine assistant.

Deep Dive

MCP works on a client-server architecture over JSON-RPC. An MCP server is a small program that exposes a set of tools, resources, and prompts through a standardized interface. An MCP client — typically an AI application like Claude Desktop, Cursor, or Windsurf — discovers what the server offers and makes those capabilities available to the model. When the model decides to use a tool, the client sends a JSON-RPC request to the server, gets the result, and feeds it back into the conversation. The transport layer is flexible: servers can communicate over stdio (for local processes), HTTP with server-sent events (for remote services), or streamable HTTP (the newest transport, combining request-response and streaming in a single connection).
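The wire format underneath all of this is plain JSON-RPC 2.0. As an illustrative sketch of a tool invocation (the tool name `get_weather` and its arguments are hypothetical; the `tools/call` method and `content` result shape follow the MCP specification's general structure):

```python
import json

# Client -> server: ask the server to run one of its declared tools.
# (the tool name "get_weather" and its arguments are hypothetical)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},
    },
}

# Server -> client: the tool result, which the client feeds back
# into the model's conversation.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id, per JSON-RPC
    "result": {
        "content": [{"type": "text", "text": "14°C, partly cloudy"}],
    },
}

# Over the stdio transport, each message travels as one line of JSON.
wire = json.dumps(request)
print(wire)
```

The transport only changes how these messages move (stdin/stdout lines, SSE events, or a streamable HTTP connection); the message shapes stay the same.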

Building a Server

Building an MCP server is deliberately simple. In Python, you can use the official mcp SDK and have a working server in about 20 lines of code — you decorate a function with @server.tool(), give it a description and typed parameters, and the SDK handles JSON-RPC, schema generation, and transport. In TypeScript, the @modelcontextprotocol/sdk package works the same way. The server declares its capabilities on initialization (what tools it has, whether it supports resources or prompts), and the client negotiates what it wants to use. This means you can start small — a server that just wraps your company's internal API — and add capabilities incrementally.
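The real SDKs hide this plumbing, but the core idea — a decorator that registers a function and derives a JSON schema from its typed parameters — can be sketched in plain Python. This toy `tool` decorator is illustrative only, not the actual mcp SDK API:

```python
import inspect
import json

# Toy registry standing in for what the official SDKs automate:
# registering tools and generating JSON schemas from typed parameters.
TOOLS = {}

PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool(description):
    """Register a function as a tool, deriving a schema from its type hints."""
    def decorator(fn):
        params = inspect.signature(fn).parameters
        schema = {
            "type": "object",
            "properties": {
                name: {"type": PY_TO_JSON[p.annotation]}
                for name, p in params.items()
            },
            "required": list(params),
        }
        TOOLS[fn.__name__] = {
            "description": description,
            "inputSchema": schema,
            "handler": fn,
        }
        return fn
    return decorator

@tool("Add two integers")
def add(a: int, b: int) -> int:
    return a + b

# What a client would see when it lists the server's tools:
print(json.dumps({k: v["inputSchema"] for k, v in TOOLS.items()}, indent=2))

# Dispatching an incoming tools/call request by name:
result = TOOLS["add"]["handler"](**{"a": 2, "b": 3})
print(result)  # 5
```

The real SDKs add the JSON-RPC loop, transport handling, and capability declaration on top of this pattern, which is why a working server fits in a few dozen lines.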

The USB Moment

The real power of MCP over bespoke tool integrations becomes clear when you consider the combinatorial problem it solves. Before MCP, if you had 10 AI applications and 10 tools, you needed 100 custom integrations. With MCP, you need 10 servers and 10 clients — each built once. This is exactly the same pattern that made USB successful: standardize the interface, and the ecosystem scales. In practice, this means a Postgres MCP server built by one developer works with Claude, Cursor, Zed, and any other MCP-compatible client without modification. The MCP server ecosystem already includes hundreds of community-built servers for databases, APIs, dev tools, and cloud services.

Gotchas and Nuances

There are some important nuances practitioners should know. First, MCP servers come in two flavors: local servers that run on your machine (great for file access, local databases, dev tools) and remote servers that run as hosted services (better for shared infrastructure, SaaS integrations). Second, security is a real consideration — an MCP server has whatever permissions the process running it has, so a poorly scoped server could expose sensitive data. The protocol includes a capability negotiation step, but access control and authentication for remote servers are still evolving. Third, MCP is not the same as tool use — it's a transport and discovery layer that sits underneath tool use. A model that supports tool calling can work with MCP, but MCP also handles things like resource subscriptions (live-updating context) and prompt templates that go beyond simple function calls.
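The capability negotiation mentioned above happens in the initialize handshake: each side declares what it supports, and the client should only rely on features both sides agreed on. A hedged sketch of the message shapes (the protocol version string, capability fields, and client/server names are illustrative, following the spec's general structure):

```python
# Client's opening "initialize" request: it announces what it supports.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",   # illustrative version string
        "capabilities": {"sampling": {}},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}

# Server's reply: it declares which capabilities it actually offers.
# "resources": {"subscribe": True} is what enables live-updating context.
initialize_result = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {
            "tools": {},
            "resources": {"subscribe": True},
            "prompts": {},
        },
        "serverInfo": {"name": "example-server", "version": "0.1"},
    },
}

# The client gates its behavior on what the server declared.
server_caps = initialize_result["result"]["capabilities"]
supports_subscriptions = server_caps.get("resources", {}).get("subscribe", False)
print(supports_subscriptions)
```

Note that this handshake negotiates *features*, not *permissions* — it says nothing about what data the server's process can reach, which is why scoping and authentication remain the operator's responsibility.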

Where It's Heading

Anthropic open-sourced MCP in late 2024, and adoption has been remarkably fast. By early 2025, it was supported by Claude, Cursor, Windsurf, Zed, Sourcegraph, and many others. The specification continues to evolve — the addition of streamable HTTP transport, OAuth-based auth for remote servers, and elicitation (letting servers ask the user for input mid-flow) all landed in 2025. If you're choosing between building a custom tool integration and building an MCP server, the MCP route almost always wins: you get compatibility with every MCP client for free, and the protocol is simple enough that migration cost is near zero.
