
MCP Server

A server that exposes tools to AI agents via the Model Context Protocol.

What is an MCP server?

An MCP server is a program that implements the Model Context Protocol — an open specification introduced by Anthropic in late 2024 that standardises how AI agents discover and call external capabilities. Instead of every LLM host writing a bespoke integration for every tool, a server speaks MCP once and any MCP-compliant client can use it.

Think of it as the USB-C of AI tooling. A single protocol that replaces a directory of custom adapters — one for each model, each framework, each deployment. The server sits outside the model and owns the side effects; the model just asks for them.

How it works

Three concepts to understand, at a high level:

  1. Transports. MCP servers speak over either stdio (a local subprocess the client launches and pipes JSON-RPC through) or HTTP with Server-Sent Events for streaming. Stdio is typical for local developer tooling; HTTP is typical for hosted, multi-tenant servers used by remote agents.
  2. Primitives. A server exposes three kinds of primitive: tools (callable functions the model can invoke — create a ticket, scan a repo), resources (read-only context the model can pull in — a file, a dashboard), and prompts (pre-written templates the user can trigger).
  3. Handshake. The client connects, negotiates a protocol version and capabilities, then asks the server to list its tools; from that point the LLM knows what tools exist and how to call them. Every call is a structured JSON-RPC message, not a freeform string — which makes auditing and policy enforcement tractable.
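The three steps above can be sketched as raw JSON-RPC messages. A minimal illustration in Python — the method names (`initialize`, `tools/list`, `tools/call`) and envelope fields follow the MCP specification, but the concrete tool name `scan_repo` and its arguments are made up for this example:

```python
import json

def jsonrpc(id_, method, params):
    """Wrap a method call in a JSON-RPC 2.0 envelope."""
    return {"jsonrpc": "2.0", "id": id_, "method": method, "params": params}

# 1. Negotiate protocol version and capabilities.
initialize = jsonrpc(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
})

# 2. Discover what tools the server exposes.
list_tools = jsonrpc(2, "tools/list", {})

# 3. Invoke a tool with typed, structured arguments -- not a freeform string.
call_tool = jsonrpc(3, "tools/call", {
    "name": "scan_repo",  # hypothetical tool name, for illustration only
    "arguments": {"url": "https://example.com/repo.git"},
})

for msg in (initialize, list_tools, call_tool):
    print(json.dumps(msg))
```

Over stdio these messages are piped between client and server; over HTTP they travel as request bodies with Server-Sent Events carrying streamed responses. Either way, the shape is identical — which is exactly what makes the calls loggable and enforceable.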

Why it matters

Before MCP, connecting an LLM to a real system meant writing a custom tool adapter and re-writing it when the host model changed. Every integration was a snowflake. Every snowflake was a security review.

With a standard protocol, the integration surface consolidates. Security controls — authentication, capability scoping, audit logging — can live in the server itself and apply uniformly no matter which client connects. The cost of adding a new tool drops. The cost of governing tools drops more.

What value it adds

  • One protocol, every client

    Write the server once; Claude Desktop, Cursor, your own agent framework, and future MCP-aware tools all speak it natively.

  • Structured calls beat freeform strings

    Every tool invocation is a typed JSON-RPC message. Logging, replay, policy evaluation, and drift detection all become straightforward.

  • Capabilities live outside the model

    The server decides what can actually happen. The model can only ask. That separation is what makes secure agentic workflows possible.

  • Ecosystem compounding

    A growing catalog of open-source MCP servers means your agent gets richer without you writing more glue code — and every new server inherits your existing governance.

  • Predictable security review surface

    Instead of reviewing N bespoke integrations, your team reviews the server and the policy layer. The attack surface is inspectable in one place.
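Because every invocation shares one JSON-RPC shape, a single policy-and-audit function can govern all of them. A minimal sketch, assuming a hypothetical allowlist policy (the tool names and log format are invented for illustration, not part of MCP):

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("mcp-audit")

# Hypothetical policy: only these tools may be invoked by the agent.
ALLOWED_TOOLS = {"scan_repo", "lookup_package"}

def enforce(call: dict) -> bool:
    """Audit-log a structured tools/call message and apply an allowlist.

    This works uniformly on any MCP tool call, whatever client sent it --
    one inspectable policy layer instead of N bespoke integrations.
    """
    tool = call.get("params", {}).get("name")
    allowed = tool in ALLOWED_TOOLS
    log.info(json.dumps({"tool": tool, "allowed": allowed}))
    return allowed

call = {
    "jsonrpc": "2.0", "id": 7, "method": "tools/call",
    "params": {"name": "delete_branch", "arguments": {"branch": "main"}},
}
print(enforce(call))  # → False: the tool is not on the allowlist
```

The same function could just as easily evaluate argument values, rate limits, or per-tenant scopes — the point is that the structured message gives the policy layer something concrete to inspect.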

How Safeguard uses it

Safeguard ships its own first-party MCP server that exposes supply-chain security tools to any MCP-compliant agent, and governs third-party servers through the MCP server security control plane.

Connect an agent to Safeguard in minutes.

Point Claude Desktop, Cursor, or your own agent at the Safeguard MCP server. Get supply-chain intelligence as first-class tools.