About MCP Server (Model Context Protocol)

The Model Context Protocol (MCP) is an open standard introduced by Anthropic in late 2024 that provides a standardized interface for connecting large language models (LLMs) to external data sources, tools, and systems. Its core goal is to address the complexity and maintenance burden of traditional AI integrations: by defining a common set of rules, it enables plug-and-play interaction between LLMs and resources such as databases, APIs, and local files.

My take: MCP is a way to extend an AI model with extra capabilities, such as fetching data, running programs, sending email, or placing orders, connecting the AI "brain" to the real world.

Anthropic is the company behind the Claude coding models. It designed this protocol to extend AI models with capabilities such as listing directories, editing files, searching and replacing text, committing to Git, and running code formatters.

But the protocol can be extended far beyond coding. For example, we could expose our company's email, SMS, and app-push services through MCP, so that an MCP-enabled IDE could send emails, text messages, and push notifications directly during development.
The simplest scenario: after a unit-test run, push the results to the people who need to see them.
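
To make this concrete, here is a minimal sketch of such a server using the official MCP Python SDK's FastMCP helper. The send_email and push_test_report tools and the internal gateway URLs are hypothetical placeholders, not real services.

    # Minimal MCP server sketch (Python, official `mcp` SDK / FastMCP).
    # Tool names and gateway URLs below are hypothetical placeholders.
    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("notify")  # server name shown to MCP clients

    MAIL_GATEWAY = "https://mail.internal.example.com/api/send"    # hypothetical
    PUSH_GATEWAY = "https://push.internal.example.com/api/notify"  # hypothetical

    @mcp.tool()
    def send_email(to: str, subject: str, body: str) -> str:
        """Send an email through the company mail gateway."""
        resp = httpx.post(MAIL_GATEWAY, json={"to": to, "subject": subject, "body": body})
        resp.raise_for_status()
        return f"email sent to {to}"

    @mcp.tool()
    def push_test_report(team: str, summary: str) -> str:
        """Push a unit-test summary to a team's app-push channel."""
        resp = httpx.post(PUSH_GATEWAY, json={"team": team, "message": summary})
        resp.raise_for_status()
        return f"report pushed to {team}"

    if __name__ == "__main__":
        mcp.run()  # defaults to the stdio transport, so an IDE can launch it locally

An MCP-enabled IDE would launch this server, list its tools, and the model could then call push_test_report right after a test run.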

At the moment only a few IDEs support MCP, but we are not limited to code development; in theory you could manage day-to-day work through it as well.
Taking email as an example again, you could build an MCP server that connects to your own customer data and then schedule automated marketing tasks from within the AI conversation.
PS: An open standard like MCP is clearly where AI applications are headed.

flowchart
    subgraph APP
        AppStart[App starts]
        AppGetTask[Receive task]
        AppParse[Parse AI output]
        AppRun[App executes task]
    end

    subgraph MCP[MCP Server]
        McpResource[Resources/interfaces]
        McpRun[MCP executes task]
    end

    User --> |Submit task|AppGetTask
    AppGetTask --> |Call the LLM<br/>system prompt + task description|Model[AI model]
    AppStart -->|Fetch info| McpResource
    Model --> AppParse --> AppRun
    AppRun <--> McpRun
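
From the app (host) side, the flow in the diagram can be sketched roughly as below, using the MCP Python SDK's stdio client. The call_model function is a hypothetical stand-in for whatever LLM API the app uses, and notify_server.py refers to the hypothetical server sketched earlier.

    # Sketch of the host-application loop from the diagram above (Python, `mcp` SDK client).
    # `call_model`, its reply format, and `notify_server.py` are hypothetical stand-ins.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def call_model(system_prompt: str, task: str) -> dict:
        # Hypothetical: a real app would send the system prompt + task description
        # to an LLM and parse the tool request out of its reply.
        return {"tool": "push_test_report",
                "arguments": {"team": "qa", "summary": f"Result of: {task}"}}

    async def run_task(task: str) -> None:
        # App starts: launch the (hypothetical) local MCP server over stdio.
        server = StdioServerParameters(command="python", args=["notify_server.py"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()            # handshake with the MCP server
                tools = await session.list_tools()    # fetch the interfaces it exposes
                system_prompt = "You may call these tools: " + ", ".join(
                    t.name for t in tools.tools)
                reply = await call_model(system_prompt, task)  # ask the AI model what to do
                result = await session.call_tool(              # app executes the task via MCP
                    reply["tool"], arguments=reply["arguments"])
                print(result.content)

    if __name__ == "__main__":
        asyncio.run(run_task("run the unit tests and notify the QA group"))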

Resources:

Introducing the Model Context Protocol

November 25, 2024
https://www.anthropic.com/news/model-context-protocol

Today, we're open-sourcing the Model Context Protocol (MCP), a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses.

As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale.

MCP addresses this challenge. It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need.

Model Context Protocol

The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.

Today, we're introducing three major components of the Model Context Protocol for developers:

  • The Model Context Protocol specification and SDKs
  • Local MCP server support in the Claude Desktop apps
  • An open-source repository of MCP servers

Claude 3.5 Sonnet is adept at quickly building MCP server implementations, making it easy for organizations and individuals to rapidly connect their most important datasets with a range of AI-powered tools. To help developers start exploring, we’re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.

Early adopters like Block and Apollo have integrated MCP into their systems, while development tools companies including Zed, Replit, Codeium, and Sourcegraph are working with MCP to enhance their platforms—enabling AI agents to better retrieve relevant information to further understand the context around a coding task and produce more nuanced and functional code with fewer attempts.

"At Block, open source is more than a development model—it’s the foundation of our work and a commitment to creating technology that drives meaningful change and serves as a public good for all,” said Dhanji R. Prasanna, Chief Technology Officer at Block. “Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications, ensuring innovation is accessible, transparent, and rooted in collaboration. We are excited to partner on a protocol and use it to build agentic systems, which remove the burden of the mechanical so people can focus on the creative.”

Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today's fragmented integrations with a more sustainable architecture.

Getting started

Developers can start building and testing MCP connectors today. All Claude.ai plans support connecting MCP servers to the Claude Desktop app.

Claude for Work customers can begin testing MCP servers locally, connecting Claude to internal systems and datasets. We'll soon provide developer toolkits for deploying remote production MCP servers that can serve your entire Claude for Work organization.

To start building:

  • Install pre-built MCP servers through the Claude Desktop app
  • Follow our quickstart guide to build your first MCP server
  • Contribute to our open-source repositories of connectors and implementations

An open community

We’re committed to building MCP as a collaborative, open-source project and ecosystem, and we’re eager to hear your feedback. Whether you’re an AI tool developer, an enterprise looking to leverage existing data, or an early adopter exploring the frontier, we invite you to build the future of context-aware AI together.
