Product Launch

Introducing the Safeguard MCP Server: AI-Native Software Supply Chain Security

Safeguard.sh launches its MCP Server, bringing software supply chain security directly into AI-powered development workflows through the Model Context Protocol.

Yukti Singhal
Head of Product
5 min read

Today we are shipping something we have been working on for months: the Safeguard MCP Server. It is our answer to a question that has been nagging at security teams since AI coding assistants became mainstream -- how do you bring supply chain security into the developer's actual workflow without adding yet another dashboard to check?

What is the Model Context Protocol?

MCP, or the Model Context Protocol, is an open standard originally developed by Anthropic. It defines how AI assistants can connect to external tools and data sources in a structured, secure way. Think of it as a USB-C port for AI -- a universal interface that lets any compatible AI assistant talk to any compatible service.

Before MCP, integrating an AI assistant with your security tooling meant building custom plugins for each combination of AI client and security tool. MCP collapses that matrix into a single integration point. Build an MCP server once, and every MCP-compatible client can use it.
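To make that single integration point concrete, here is a minimal sketch of what an MCP tool descriptor looks like. The `{ name, description, inputSchema }` shape follows the MCP specification's tools listing; the tool name and parameters themselves are hypothetical, not the Safeguard server's actual API.

```typescript
// Illustrative MCP tool descriptor. The shape (name, description,
// JSON Schema for inputs) is what MCP clients see when they list a
// server's tools; the specific tool below is a made-up example.
interface ToolDescriptor {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, { type: string; description?: string }>;
    required?: string[];
  };
}

const querySbomTool: ToolDescriptor = {
  name: "query_sbom",
  description: "List the components recorded in a project's SBOM.",
  inputSchema: {
    type: "object",
    properties: {
      project: { type: "string", description: "Project identifier" },
    },
    required: ["project"],
  },
};
```

Because every MCP client understands this same descriptor format, the server never needs client-specific glue code.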

Why We Built This

We kept hearing the same story from engineering teams. They had Safeguard running in their CI/CD pipelines. They had dashboards showing SBOM status and vulnerability counts. But when a developer was actually writing code -- choosing a dependency, evaluating an upgrade path, triaging an alert -- they had to context-switch out of their editor, navigate to a separate tool, and manually look up the information they needed.

That friction is not just annoying. It is a security risk. When the secure path requires more effort than the insecure path, developers take the path of least resistance. We needed to bring security data to where developers already work.

What You Can Do With the Safeguard MCP Server

The MCP Server exposes Safeguard's core capabilities as tools and resources that any MCP-compatible AI assistant can use. Here is what that looks like in practice.

Query SBOMs in Natural Language

Instead of navigating dashboards and filtering tables, you can ask your AI assistant directly:

  • "What dependencies does our payment service use?"
  • "Show me all components in our SBOM that have known vulnerabilities."
  • "Which of our projects are still using log4j?"

The MCP Server translates these into the appropriate API calls, fetches the data, and returns structured results that the AI assistant can reason about and present clearly.
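The translation step can be sketched as a mapping from a parsed query intent to a structured API request. The intent names and endpoint paths below are assumptions for illustration, not Safeguard's actual API surface.

```typescript
// Hypothetical query intents, one per natural-language question shape.
type QueryIntent =
  | { kind: "list_dependencies"; project: string }
  | { kind: "vulnerable_components"; project: string }
  | { kind: "find_package"; pkg: string };

// Map an intent to a structured request. Paths are illustrative.
function toApiRequest(intent: QueryIntent): { method: string; path: string } {
  switch (intent.kind) {
    case "list_dependencies":
      return { method: "GET", path: `/v1/projects/${intent.project}/sbom/components` };
    case "vulnerable_components":
      return { method: "GET", path: `/v1/projects/${intent.project}/vulnerabilities` };
    case "find_package":
      return { method: "GET", path: `/v1/search/components?name=${encodeURIComponent(intent.pkg)}` };
  }
}
```

The AI assistant never sees raw API mechanics; it receives the structured response and does the presentation.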

Vulnerability Triage

This is where things get genuinely useful. You can ask questions like:

  • "Are any of the critical vulnerabilities in our frontend app actually reachable?"
  • "What is the remediation path for CVE-2025-1234?"
  • "Compare the vulnerability posture of our staging and production SBOMs."

The AI assistant has full context from Safeguard's vulnerability database, VEX documents, and reachability analysis. It can synthesize that information in a way that a static dashboard simply cannot.
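One way to picture that synthesis is as a filter over vulnerability records enriched with VEX status and a reachability flag. The record shape and field names below are assumptions, not Safeguard's actual data model; the VEX status values follow the standard vocabulary.

```typescript
// Hypothetical enriched vulnerability record: severity from the
// vulnerability database, status from a VEX document, reachability
// from static analysis.
interface VulnRecord {
  cve: string;
  severity: "low" | "medium" | "high" | "critical";
  vexStatus: "affected" | "not_affected" | "fixed" | "under_investigation";
  reachable: boolean;
}

// Keep only findings that demand attention: high or critical severity,
// marked affected in VEX, and actually reachable from application code.
function actionable(vulns: VulnRecord[]): VulnRecord[] {
  return vulns.filter(
    (v) =>
      (v.severity === "critical" || v.severity === "high") &&
      v.vexStatus === "affected" &&
      v.reachable,
  );
}
```

A dashboard shows all three signals in separate columns; the assistant can apply the filter and explain why the rest can wait.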

Policy Gate Status

For teams using Safeguard's policy gates in their release process:

  • "Can we ship the v2.3 release based on current policy gates?"
  • "Which policy violations are blocking the deploy?"
  • "Show me the policy gate history for the billing service."
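Answering the "can we ship?" question reduces to evaluating gate results and surfacing what blocks. The gate names and pass/fail model here are illustrative assumptions, not Safeguard's policy engine.

```typescript
// Hypothetical gate result as the assistant might receive it.
interface GateResult {
  gate: string;
  passed: boolean;
  violation?: string; // human-readable reason when the gate fails
}

// A release is shippable only if every gate passed; otherwise collect
// the blocking reasons so the assistant can report them.
function canShip(results: GateResult[]): { ok: boolean; blocking: string[] } {
  const blocking = results.filter((r) => !r.passed).map((r) => r.violation ?? r.gate);
  return { ok: blocking.length === 0, blocking };
}
```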

Asset Management

The MCP Server also provides resource endpoints that give AI assistants access to your project inventory, SBOM listings, and policy configurations as structured context.
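In MCP terms, those are resources: addressable pieces of context a client can read. The `safeguard://` URI scheme below is an assumption for illustration; the `{ uri, name, mimeType }` fields follow the MCP specification's resources listing.

```typescript
// Illustrative MCP resource descriptors. The URIs are hypothetical;
// the field shape is what MCP clients see when listing resources.
interface ResourceDescriptor {
  uri: string;
  name: string;
  mimeType?: string;
}

const resources: ResourceDescriptor[] = [
  { uri: "safeguard://projects", name: "Project inventory", mimeType: "application/json" },
  { uri: "safeguard://sboms", name: "SBOM listings", mimeType: "application/json" },
  { uri: "safeguard://policies", name: "Policy configurations", mimeType: "application/json" },
];
```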

Architecture Decisions

A few technical choices worth noting.

We use a session-based conversation model. Each AI assistant session creates a conversation in Safeguard, which means queries within a session maintain context. If you ask about a project and then ask a follow-up question about its vulnerabilities, the system understands the reference.
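The core of that follow-up behavior can be sketched as session-scoped state: remember the last project mentioned so an implicit reference resolves to it. This is a minimal sketch, not the server's actual implementation.

```typescript
// Minimal session context: remembers the most recently named project
// so a follow-up like "what about its vulnerabilities?" resolves to
// the same project. Hypothetical, for illustration only.
class SessionContext {
  private lastProject: string | null = null;

  resolveProject(explicit?: string): string {
    if (explicit) {
      this.lastProject = explicit; // an explicit mention updates context
      return explicit;
    }
    if (this.lastProject === null) {
      throw new Error("No project in context; name a project first.");
    }
    return this.lastProject; // implicit reference: reuse prior context
  }
}
```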

The server supports two query modes. The default is a prompt-based mode that routes through our GPT service for natural language understanding. For programmatic use cases, there is a direct mode that hits the data API with structured queries. The prompt mode is better for interactive developer use. The direct mode is better for automation scripts.
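The two modes can be modeled as a tagged union, with routing decided by the tag. The mode names match the description above; the endpoint paths are assumptions for illustration.

```typescript
// Prompt mode carries free text for natural language understanding;
// direct mode carries a structured query for the data API.
type Query =
  | { mode: "prompt"; text: string }
  | { mode: "direct"; endpoint: string; params: Record<string, string> };

// Route a query to the service that handles its mode. Paths are
// illustrative assumptions.
function routeQuery(q: Query): string {
  return q.mode === "prompt" ? "/v1/conversations/query" : q.endpoint;
}
```

An automation script would construct `direct` queries so results are deterministic; an interactive session sends `prompt` queries and lets the NL layer do the interpretation.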

Authentication uses the same token-based system as the rest of the Safeguard platform. You configure your API credentials once, and the MCP Server handles token refresh transparently.
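Transparent refresh usually amounts to caching the token and renewing it shortly before expiry. The sketch below is a simplified, synchronous illustration of that pattern; the one-minute margin and the refresh callback are assumptions, not the server's actual behavior.

```typescript
type Token = { value: string; expiresAt: number }; // epoch milliseconds

// Cache a token and renew it via the supplied callback when it is
// within `marginMs` of expiring. Simplified sketch: a real client
// would refresh asynchronously against the auth endpoint.
class TokenCache {
  private token: Token | null = null;
  constructor(
    private refresh: () => Token,
    private marginMs = 60_000, // renew one minute early (assumption)
  ) {}

  get(now = Date.now()): string {
    if (!this.token || this.token.expiresAt - this.marginMs <= now) {
      this.token = this.refresh();
    }
    return this.token.value;
  }
}
```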

Getting Started

Installation is straightforward:

npm install -g @safeguard/mcp-server

Configure your credentials:

{
  "mcpServers": {
    "safeguard": {
      "command": "safeguard-mcp-server",
      "env": {
        "SAFEGUARD_API_URL": "https://api.safeguard.sh",
        "SAFEGUARD_API_KEY": "your-api-key"
      }
    }
  }
}

The server is compatible with Claude Desktop, VS Code with Copilot, Cursor, and any other MCP-compatible client.

What Comes Next

This is version 1.0. We are already working on write operations -- the ability to create SBOMs, update VEX documents, and configure policies through the MCP interface. We are also exploring proactive notifications, where the MCP Server can alert your AI assistant when a new critical vulnerability affects one of your projects.

The broader vision is that security should be ambient. Not something you go check, but something that is always present in your development context. The MCP Server is the foundation for that.

How Safeguard.sh Helps

Safeguard.sh provides end-to-end software supply chain security -- SBOM generation, vulnerability tracking, policy gates, and compliance reporting. The MCP Server brings all of that directly into your AI-powered development workflow. Instead of switching between tools, your AI assistant becomes a security-aware pair programmer with full visibility into your supply chain posture. Try it today at safeguard.sh.
