
MCP Servers: Extending AI with Real Tools

Model Context Protocol gives AI the ability to use external tools — search the web, query databases, create GitHub issues, read files, and more. 117 servers on Zubnet, with one-click setup for Claude Desktop and Cursor.
Sarah Chen · March 2026 · 10 min read

Large language models are smart. They can reason, write, analyze, and create. But they are also isolated. By default, an LLM cannot look up today’s weather, check your database, search the web, or read a file from your computer. It only knows what was in its training data and what you paste into the prompt.

MCP — the Model Context Protocol — changes that. It is a standard for connecting AI models to external tools. Think of it as USB for AI: a universal plug that lets any model use any tool, as long as both speak the protocol.

How MCP Works

The flow is simple:

1. User: "What issues are open on our main repo?"
2. LLM thinks: "I need to check GitHub. Let me call the GitHub MCP server."
3. The GitHub MCP server queries api.github.com/repos/org/repo/issues.
4. The server returns a list of 12 open issues.
5. LLM: "There are 12 open issues. The top 3 by priority are..."
6. The user sees a clean, summarized answer.

The key insight: the LLM decides when to use a tool. You do not write code that says “if the user asks about GitHub, call the GitHub API.” The model understands the user’s intent, recognizes it needs external data, calls the appropriate MCP server, gets the result, and incorporates it into its response.

This is what makes MCP powerful — and what makes it different from traditional API integrations where a developer has to anticipate every possible workflow.
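Under the hood, these tool calls travel as JSON-RPC 2.0 messages between the client and the MCP server. As an illustration (the tool name and arguments here are invented; each server publishes its own schema), a request to a GitHub server might look roughly like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_issues",
    "arguments": { "owner": "org", "repo": "repo", "state": "open" }
  }
}
```

and the server's response wraps the result as content the model can read:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "12 open issues: #142 Login flaky, ..." }
    ]
  }
}
```

The model never sees HTTP or authentication details; it only sees the tool's declared schema and this structured result.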

What MCP Servers Exist

There are 117 MCP servers available on Zubnet. Here are the categories and some highlights:

Developer Tools

GitHub — search repos, create/read/close issues, list PRs, read file contents, create branches
GitLab — similar to GitHub, for GitLab-hosted projects
Linear — create and manage issues, read project status
Sentry — search errors, read stack traces, resolve issues

Databases

PostgreSQL — execute read queries against your Postgres database
MySQL — same for MySQL/MariaDB
SQLite — query local SQLite files
Redis — read keys, scan patterns

Search and Research

Brave Search — web search with no tracking
Exa — semantic search across the web
ArXiv — search academic papers
Wikipedia — look up encyclopedic knowledge

File Systems and Storage

Filesystem — read, write, list, and search files in a specified directory
Google Drive — search and read documents from Drive
S3 — list and read objects from AWS S3 buckets

Communication

Slack — read channels, send messages, search history
Discord — read and send messages in servers
Email — read and draft emails via IMAP/SMTP

Data and APIs

Fetch — make HTTP requests to any URL
Puppeteer — navigate web pages, take screenshots, extract content
Google Maps — geocoding, directions, place search

Browse the full list on our MCP Store.

Setup for Claude Desktop

Claude Desktop supports MCP natively. To connect a server, you add its configuration to Claude’s config file. On Zubnet’s MCP Store, every server has a one-click copy button for this exact config.

The config file lives at:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

Here is what a config looks like with two MCP servers:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your_brave_key_here"
      }
    }
  }
}

Save the file, restart Claude Desktop, and the tools appear automatically. Claude will use them when relevant to your conversation.
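A common failure mode is a stray comma or quote in the config file, which makes the tools silently fail to appear. Before restarting, you can sanity-check the JSON with a few lines of Python (a minimal sketch; `list_servers` is a helper name invented here):

```python
import json

def list_servers(config_text: str) -> list[str]:
    """Parse a Claude Desktop config and return the configured server names.

    Raises json.JSONDecodeError on a syntax error -- the most common
    reason tools never show up after a restart.
    """
    config = json.loads(config_text)
    return sorted(config.get("mcpServers", {}).keys())

# In practice, read the file from the config path above; shown inline here:
example = '{"mcpServers": {"github": {"command": "npx", "args": []}}}'
print(list_servers(example))  # ['github']
```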

Setup for Cursor

Cursor (the AI code editor) also supports MCP. The config goes in your project’s .cursor/mcp.json file:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y", "@modelcontextprotocol/server-filesystem",
        "/path/to/your/project"
      ]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "postgresql://user:pass@localhost/mydb"
      }
    }
  }
}

This gives Cursor’s AI the ability to read your project files and query your database — making it dramatically better at understanding your codebase and answering questions about your data.

Setup on Zubnet

For agents running on the Zubnet platform, MCP server setup is even simpler:

1. Go to the MCP Store

2. Find the server you want (e.g., GitHub)

3. Click “Activate”

4. Enter the required configuration (API keys, URLs, etc.)

5. Connect it to your agent

The server runs on our infrastructure. No local installation, no npx, no Node.js required. It just works.

Config Schemas: What Each Server Needs

Every MCP server needs some configuration. The most common patterns:

API key servers (GitHub, Brave Search, Sentry, etc.)

Need: an API key or personal access token from the service provider. Get it from their developer settings.

Database servers (PostgreSQL, MySQL, SQLite)

Need: a connection string. Format: protocol://user:password@host:port/database. For security, use read-only credentials.
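One way to double-check a connection string before handing it to a server is to parse it with Python's standard library. A small sketch (the credentials below are made up):

```python
from urllib.parse import urlsplit

def describe_dsn(dsn: str) -> dict:
    """Break a database connection string into its parts for inspection."""
    parts = urlsplit(dsn)
    return {
        "protocol": parts.scheme,
        "user": parts.username,
        "host": parts.hostname,
        "port": parts.port,
        "database": parts.path.lstrip("/"),
    }

print(describe_dsn("postgresql://readonly:secret@localhost:5432/mydb"))
# {'protocol': 'postgresql', 'user': 'readonly', 'host': 'localhost',
#  'port': 5432, 'database': 'mydb'}
```

A quick parse like this catches swapped host/database segments or a missing port before the MCP server fails with a less helpful error.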

Filesystem servers

Need: a directory path. The server can only access files within this directory. Use the narrowest scope possible.

OAuth servers (Google Drive, Slack)

Need: OAuth credentials (client ID + secret) or a pre-generated access token. On Zubnet, we handle the OAuth flow for you.

How the AI Decides to Use Tools

This is the part that feels like magic. You do not write rules. The LLM figures it out.

When you ask “What open issues do we have on GitHub?” the model recognizes that:

1. This requires data it does not have (your specific GitHub issues)
2. A GitHub tool is available
3. The list_issues function with state=open is the right call

It constructs the tool call, sends it to the MCP server, gets the JSON response, and then summarizes it in natural language.

For multi-step tasks, the model chains tool calls. “Find the most-commented open issue and post a summary to #dev-chat on Slack” might involve:

1. github.list_issues(state=open, sort=comments)
2. github.get_issue(number=142) (for full details)
3. slack.post_message(channel=#dev-chat, text=...)

Three tool calls, orchestrated automatically. No code written.
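The chain above can be sketched in plain Python to show the data flow. Here the model's plan is fixed and the servers are stub functions; in a real agent the plan comes from the LLM and each call goes over MCP:

```python
# Hypothetical sketch of the three-step task above. The stub functions
# stand in for real MCP servers; their return values are invented.

def github_list_issues(state: str, sort: str) -> list[dict]:
    return [{"number": 142, "title": "Login flaky", "comments": 31}]

def github_get_issue(number: int) -> dict:
    return {"number": number, "title": "Login flaky", "body": "Fails on retry."}

def slack_post_message(channel: str, text: str) -> str:
    return f"posted to {channel}: {text}"

# Step 1: find the most-commented open issue
top = github_list_issues(state="open", sort="comments")[0]

# Step 2: fetch full details
detail = github_get_issue(number=top["number"])

# Step 3: post a summary to Slack
summary = f"Top issue #{detail['number']}: {detail['title']}"
print(slack_post_message(channel="#dev-chat", text=summary))
```

The point of the sketch: each step's output feeds the next step's input, and the model does that wiring itself at inference time.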

Security Considerations

MCP servers have real access to real systems.

A PostgreSQL MCP server with write credentials can modify your database. A GitHub server with a PAT that has delete permissions can delete repositories. A filesystem server pointed at / can read your entire drive.

Treat MCP credentials like you treat SSH keys: minimum permissions, specific scope, rotated regularly.

Use read-only database users.
Use GitHub tokens with only the scopes you need.
Point filesystem servers at specific project directories, never root.
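For PostgreSQL, a read-only user for the MCP server can be created along these lines (a sketch; role, password, and database names are placeholders):

```sql
-- Role the MCP server connects as: SELECT only, no writes.
CREATE ROLE mcp_readonly LOGIN PASSWORD 'change_me';
GRANT CONNECT ON DATABASE mydb TO mcp_readonly;
GRANT USAGE ON SCHEMA public TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly;
-- Also cover tables created after this point:
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO mcp_readonly;
```

Then use mcp_readonly in the connection string instead of your application's main credentials.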

Building Your Own MCP Server

The MCP specification is open. If the tool you need does not exist yet, you can build it. An MCP server is a program that:

1. Declares its available tools (name, description, parameter schema)
2. Handles tool call requests
3. Returns structured results

The official SDK is available in TypeScript and Python. A minimal server in TypeScript is about 50 lines of code. The specification and SDKs are at modelcontextprotocol.io.
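To make the three responsibilities concrete, here is a stripped-down sketch in Python using only the standard library. It handles the two core request types; a real server would use the official SDK and speak JSON-RPC over stdio, and the `add` tool here is invented for the example:

```python
# Simplified MCP-style dispatcher: declare tools, handle calls, return results.

# Step 1: declare available tools (name, description, parameter schema)
TOOLS = [{
    "name": "add",
    "description": "Add two numbers",
    "inputSchema": {
        "type": "object",
        "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
        "required": ["a", "b"],
    },
}]

def handle_request(req: dict) -> dict:
    """Dispatch one request (steps 2 and 3: handle the call, return a result)."""
    if req["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif req["method"] == "tools/call":
        args = req["params"]["arguments"]
        result = {"content": [{"type": "text", "text": str(args["a"] + args["b"])}]}
    else:
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

call = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
        "params": {"name": "add", "arguments": {"a": 2, "b": 3}}}
print(handle_request(call)["result"]["content"][0]["text"])  # 5
```

The real protocol adds initialization, capability negotiation, and error handling, but the request/response core is this small.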

If you build a server that others might find useful, consider publishing it to the MCP ecosystem. The community grows every week.

When to Use MCP vs. Regular API Calls

MCP is not always the right answer. Use it when:

• You want the AI to decide when to call external services (dynamic, user-driven workflows)
• You are building agents that need tool access
• You want a standard interface across many tools (one protocol, 117 servers)

Use regular API calls when:

• The workflow is deterministic (always call this API, then that one)
• You need precise control over every request
• Performance is critical and you cannot afford the LLM decision loop

MCP is the bridge between AI that talks and AI that acts. Without tools, an LLM can only generate text based on what it already knows. With MCP, it can reach out into the world, gather real-time data, take real actions, and work with your actual systems. The 117 servers on Zubnet are a starting point — the ecosystem grows every week.

Browse all 117 MCP servers on our MCP Store. One-click configs for Claude Desktop and Cursor. Or activate them directly for your Zubnet agents.
