
How to Build an AI Agent with MCP

Key idea:

MCP (Model Context Protocol) from Anthropic is an open standard for exposing tools to LLM agents. A server implements tools → a client (Claude Desktop, Zed, or a custom agent) connects → the LLM invokes tools via structured calls. Simpler than maintaining custom function-calling glue per model. SDKs: Python (the mcp package), TypeScript (@modelcontextprotocol/sdk), plus community implementations in Go and other languages. Use cases: file system access, GitHub operations, database queries, custom APIs.

Below: step-by-step, working examples, common pitfalls, FAQ.


Step-by-Step Setup

  1. Install MCP SDK: pip install mcp or npm install @modelcontextprotocol/sdk
  2. Define tools: name, description, input schema (JSON Schema)
  3. Implement handler: per-tool function returning a result
  4. Start MCP server: stdio transport (for local) or SSE (for remote)
  5. Connect client: Claude Desktop config, Zed settings, or custom agent
  6. Test: the client discovers available tools automatically (via tools/list); ask the LLM to call one
  7. Deploy: self-host server or publish as open source
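Steps 2 and 6 above hinge on the tool definition the client fetches over JSON-RPC when it connects. A sketch of what a tools/list response looks like on the wire, per the MCP spec; the check_dns tool here is a hypothetical example, not part of any official server:

```python
# Hypothetical tools/list response a client sees after connecting.
# The envelope is JSON-RPC 2.0; each tool carries a JSON Schema
# describing its arguments, which is what the LLM uses to decide
# when and how to call it.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "check_dns",
                "description": "Check DNS records for a domain, returns A and MX records as JSON",
                "inputSchema": {
                    "type": "object",
                    "properties": {"domain": {"type": "string"}},
                    "required": ["domain"],  # without this, the LLM may omit the argument
                },
            }
        ]
    },
}
```

The client renders this list into whatever tool-use format its model expects, which is why one MCP server works unchanged across Claude Desktop, Zed, and custom agents.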

Working Examples

Python MCP server:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP('Enterno MCP')

@mcp.tool()
def check_dns(domain: str) -> dict:
    """Check DNS records for a domain."""
    # resolve_a / resolve_mx are your own DNS lookup helpers
    return {'a': resolve_a(domain), 'mx': resolve_mx(domain)}

if __name__ == '__main__':
    mcp.run()
```

Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json):

```json
{
  "mcpServers": {
    "enterno": {
      "command": "python",
      "args": ["/path/to/enterno_mcp.py"]
    }
  }
}
```

TypeScript MCP server:

```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'my-mcp', version: '1.0.0' },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: 'echo',
    description: 'Echo the input text back',
    inputSchema: {
      type: 'object',
      properties: { text: { type: 'string' } },
      required: ['text'],
    },
  }],
}));

server.setRequestHandler(CallToolRequestSchema, async (req) => ({
  content: [{ type: 'text', text: String(req.params.arguments?.text ?? '') }],
}));
```

Claude API with MCP tools (note: the Anthropic MCP connector is a beta feature and accepts URL-based servers only, not stdio; the model name and server URL below are illustrative):

```python
from anthropic import Anthropic

client = Anthropic()
response = client.beta.messages.create(
    model='claude-sonnet-4-20250514',
    max_tokens=1024,
    mcp_servers=[{
        'type': 'url',
        'url': 'https://mcp.example.com/sse',
        'name': 'enterno',
    }],
    betas=['mcp-client-2025-04-04'],
    messages=[{'role': 'user', 'content': 'Check DNS for google.com'}],
)
```

MCP over HTTP (remote):

```python
# SSE-based MCP server; clients connect at https://mcp.example.com/sse
mcp = FastMCP('Enterno Remote', host='0.0.0.0', port=8000)
mcp.run(transport='sse')
```

Common Pitfalls

  • Vague tool descriptions: the LLM cannot tell when to call the tool. Be precise: "Get current weather for a specific city, returns JSON"
  • Missing required fields in the input schema: the LLM omits arguments. Always set required in the JSON Schema
  • No error handling: the LLM receives a raw exception string and gets confused. Return a structured error instead: {"error": "msg"}
  • Long-running tools (>10s): the client times out. Make them async and report progress updates
  • No sandboxing: an MCP server with shell access means remote code execution if a prompt injection succeeds
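The error-handling pitfall above is the easiest to fix systematically. A minimal sketch of a decorator that wraps every tool handler so the model always receives structured output instead of a traceback; check_dns and its validation logic are hypothetical placeholders:

```python
def safe_tool(handler):
    """Wrap a tool handler so the LLM always gets structured output,
    never a raw Python traceback."""
    def wrapper(**kwargs):
        try:
            return {"ok": True, "result": handler(**kwargs)}
        except Exception as exc:  # report the failure, don't crash the server
            return {"ok": False, "error": f"{type(exc).__name__}: {exc}"}
    return wrapper

@safe_tool
def check_dns(domain: str) -> dict:
    # placeholder validation and data; a real tool would query DNS here
    if not domain or "." not in domain:
        raise ValueError("not a valid domain")
    return {"a": ["203.0.113.10"], "mx": []}
```

A well-formed error like `{"ok": false, "error": "ValueError: not a valid domain"}` lets the model correct its arguments and retry, where a traceback would usually derail it.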


Frequently Asked Questions

MCP vs OpenAI function calling?

MCP is an open standard that works across multiple clients (Claude Desktop, Zed, and others), while OpenAI function calling is specific to the OpenAI API. MCP is the more universal choice when the same tools need to serve several clients.
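The two formats are closer than the comparison suggests: both wrap the same JSON Schema, and only the envelope field names differ. A sketch with a hypothetical get_weather tool:

```python
# The same tool expressed in both formats. MCP puts the argument
# schema under "inputSchema"; OpenAI function calling puts it under
# "parameters". The JSON Schema payload itself is identical.
mcp_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city, returns JSON",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

openai_function = {
    "name": "get_weather",
    "description": "Get current weather for a city, returns JSON",
    "parameters": mcp_tool["inputSchema"],  # only the key name changes
}
```

Because the schemas are interchangeable, an MCP client targeting a non-Claude model can translate its tool list with a one-line rename.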

What MCP servers are available?

The official collection covers filesystem, git, GitHub, Slack, Postgres, and Google Drive. The community maintains dozens more (Figma, Linear, Sentry).

Security?

Claude Desktop prompts the user before each tool call. Automated agents need a manual review step, a sandbox, and an explicit tool allowlist.
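A tool allowlist is the cheapest of those three controls to add. A minimal sketch of a guard placed between the agent loop and the MCP client; the tool names and the dispatch stand-in are hypothetical:

```python
ALLOWED_TOOLS = {"check_dns", "echo"}  # hypothetical allowlist for this agent

def dispatch(name: str, arguments: dict) -> dict:
    # stand-in for the real MCP call_tool round-trip
    return {"ok": True, "tool": name}

def guard_tool_call(name: str, arguments: dict) -> dict:
    """Reject calls to tools outside the allowlist before they
    ever reach the MCP server."""
    if name not in ALLOWED_TOOLS:
        return {"error": f"tool '{name}' is not allowlisted"}
    return dispatch(name, arguments)
```

The guard returns a structured error rather than raising, so a prompt-injected request for a dangerous tool becomes ordinary feedback to the model instead of an executed call.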

Deploy MCP remotely?

Use the SSE transport over HTTPS, with OAuth for authentication. Example: the Sentry MCP server (opens at https://mcp.sentry.dev).