Getting Started with Safeguard MCP + ChatGPT

Expose the Safeguard MCP server to ChatGPT so the assistant can run live dependency scans and pull advisory data instead of guessing.

Shadab Khan
Security Engineer
7 min read

ChatGPT will gladly hallucinate a CVE number if you let it. Giving it the Safeguard MCP server ends that problem — the model calls a real scanner and a real advisory database instead of guessing from stale training data. This tutorial walks through hooking Safeguard up to ChatGPT so you can ask security questions and get answers that are actually grounded.

Which ChatGPT Clients Support MCP?

ChatGPT Desktop on macOS and Windows supports local stdio MCP servers as of the 2026 client builds. ChatGPT on the web supports remote MCP servers (HTTPS-based) via the Connectors surface in Business and Enterprise workspaces. This guide covers both — pick the section that matches your setup. The tools exposed are identical; only the transport differs.

Step 1: Install and Authenticate the Safeguard CLI

MCP support is bundled with the CLI. Install it, then log in.

curl -sSfL https://get.safeguard.sh/install.sh | sh -s -- --version v2.4.0
sg login
sg whoami

Confirm sg whoami prints your email. If you're on Windows, use Scoop:

scoop bucket add safeguard https://github.com/safeguard-sh/scoop-bucket
scoop install safeguard

Step 2: (Desktop) Configure ChatGPT Desktop for Local MCP

Open Settings → Connections → MCP Servers → Add. Fill in:

  • Name: Safeguard
  • Command: sg
  • Args: mcp serve --stdio
  • Env: SAFEGUARD_TOKEN=sgp_user_..., SG_MCP_PROJECT_ROOT=/Users/<you>/work

Save and restart ChatGPT Desktop. In a new chat, click the tools icon and confirm Safeguard appears green. If you prefer hand-editing the config file:

  • macOS: ~/Library/Application Support/ChatGPT/mcp.json
  • Windows: %APPDATA%\ChatGPT\mcp.json
{
  "servers": {
    "safeguard": {
      "command": "sg",
      "args": ["mcp", "serve", "--stdio"],
      "env": {
        "SAFEGUARD_TOKEN": "sgp_user_...",
        "SG_MCP_PROJECT_ROOT": "/Users/shadab/work"
      }
    }
  }
}
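Under the hood, ChatGPT Desktop launches that command and speaks JSON-RPC 2.0 over the subprocess's stdin/stdout (the MCP stdio transport). If you want to sanity-check the server outside ChatGPT, the first two messages a client sends look roughly like this sketch (request shapes follow the MCP protocol; the `protocolVersion` string is one published revision and may differ in your client):

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build one JSON-RPC 2.0 request, serialized as a single line of JSON,
    the way an MCP client writes it to the server's stdin."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# First the client initializes the session...
init = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
})
# ...then asks the server to enumerate its tools.
list_tools = jsonrpc_request(2, "tools/list")
```

Piping those two lines into `sg mcp serve --stdio` should get you back a tool list that matches what the ChatGPT tools icon shows.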

Step 3: (Web/Enterprise) Configure a Remote MCP Connector

For ChatGPT on the web, you need the remote MCP bridge. Run it on a host that ChatGPT's servers can reach over HTTPS and that can itself reach your internal network, then register it in your workspace admin console.

sg mcp serve --http \
  --listen 0.0.0.0:8443 \
  --tls-cert /etc/safeguard/tls.crt \
  --tls-key /etc/safeguard/tls.key \
  --auth oauth2 \
  --issuer https://auth.yourcorp.com \
  --audience safeguard-mcp

Then in ChatGPT's workspace admin console, go to Connectors → Add custom MCP, paste the URL https://mcp.yourcorp.com, and configure OAuth with your identity provider. Every user who wants access must authenticate through your IdP, so access follows your existing SSO/SCIM controls.

Step 4: Verify the Tools Are Available

In a chat, ask:

What Safeguard MCP tools do you have access to? List them with a one-line description of each.

You should get back: sg.scan, sg.sbom.get, sg.advisory.lookup, sg.fix.suggest, sg.ignore.list, sg.policy.evaluate. If the model says it doesn't have any tools, the connection isn't live — check logs via sg mcp serve --log-level debug.

Step 5: Ask Your First Grounded Question

Point ChatGPT at a real project and phrase the question so it knows to use tools:

Using Safeguard, scan ~/work/checkout-service for critical and high vulnerabilities. Return a table with package, version, CVE ID, fixed version, and one-line remediation. Only include issues the scan actually finds — do not add from memory.

ChatGPT will invoke sg.scan, parse the response, and produce the table. Because the "do not add from memory" phrasing is part of the prompt, the model is more likely to stay strictly grounded. You can make this a reusable instruction by adding it to a Custom GPT or Project.

Step 6: Explore an SBOM

Ask the model to analyze your SBOM:

Fetch the SBOM for ~/work/checkout-service and tell me: (1) total components, (2) ecosystems represented, (3) any deprecated or abandoned packages Safeguard knows about, and (4) top five licenses by frequency.

The answer is composed from sg.sbom.get (structure), sg.advisory.lookup (abandonment metadata), and the model's own aggregation. Licenses are in the SBOM itself, so there's no guessing — if a license string is weird or missing, the model will surface that too rather than inventing one.
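The aggregation step is straightforward to reproduce locally if you want to double-check the model's arithmetic. A sketch assuming a CycloneDX-style SBOM document (a `components` array where each component may carry a `licenses` list — the exact shape of sg.sbom.get's output is an assumption here):

```python
from collections import Counter

def license_frequencies(sbom, top_n=5):
    """Tally SPDX license IDs across components and flag components
    that carry no license data at all."""
    counts = Counter()
    missing = []
    for comp in sbom.get("components", []):
        ids = [entry.get("license", {}).get("id")
               for entry in comp.get("licenses", [])]
        ids = [i for i in ids if i]
        if not ids:
            missing.append(comp.get("name"))
        counts.update(ids)
    return counts.most_common(top_n), missing

# Toy SBOM fragment for illustration.
sbom = {"components": [
    {"name": "left-pad", "licenses": [{"license": {"id": "MIT"}}]},
    {"name": "chalk", "licenses": [{"license": {"id": "MIT"}}]},
    {"name": "internal-billing", "licenses": []},
]}
top, missing = license_frequencies(sbom)
```

Components with empty license data end up in `missing` — exactly the "weird or missing" cases the model should surface rather than paper over.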

How Do I Make a Custom GPT for Security Reviews?

Create a Custom GPT with the Safeguard connector enabled, and use instructions like:

You are a supply chain security reviewer. For every question:
1. Call the relevant Safeguard MCP tool(s) to fetch ground truth.
2. Quote the tool output directly — do not paraphrase CVE IDs.
3. If the tool returns no finding, say "no finding" — never fabricate one.
4. Recommend remediations only from `sg.fix.suggest` output.
5. Cite the tool call that backs each claim.

This kind of constrained Custom GPT is the single biggest productivity unlock for a security team using ChatGPT — new hires can get consistent, grounded analysis without needing to know which CLI flags to run.

How Do I Scope What ChatGPT Can See?

The MCP server enforces access control at three layers. First, the token — user tokens only see projects you can access in the portal. Second, the filesystem root — SG_MCP_PROJECT_ROOT is a hard boundary. Third, explicit deny paths:

sg mcp serve --stdio \
  --allow-path /Users/shadab/work/public-repos \
  --deny-path  /Users/shadab/work/public-repos/client-nda
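The precedence implied by that example — deny carves a hole out of allow — can be sketched as a simple path check (this is an illustration of the semantics, not Safeguard's implementation; a real server must also canonicalize symlinks and `..` segments before comparing):

```python
from pathlib import PurePosixPath

ALLOW = ["/Users/shadab/work/public-repos"]
DENY = ["/Users/shadab/work/public-repos/client-nda"]

def path_allowed(requested, allow=ALLOW, deny=DENY):
    """A path is visible only if it sits under an allow root
    and under no deny root; deny always wins."""
    p = PurePosixPath(requested)
    if any(p.is_relative_to(d) for d in deny):
        return False
    return any(p.is_relative_to(a) for a in allow)
```

So a scan request for anything under client-nda is refused even though its parent directory is allowed.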

For enterprise remote deployments, OAuth scopes are the fourth layer — you can issue a safeguard-mcp.read scope that forbids sg.fix.suggest write operations entirely. Most orgs issue two scopes: a read-only one for broad ChatGPT use and a write-enabled one for a small group doing assisted remediation.

How Do I Handle Rate Limits and Token Costs?

Two levers. On the Safeguard side, enable caching:

sg config set cache.local ~/.safeguard/cache
sg config set cache.ttl 900
sg config set cache.remote https://cache.safeguard.sh
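The effect of `cache.ttl 900` is that a repeated advisory lookup within 15 minutes is served locally instead of re-querying. A minimal sketch of that TTL behavior (illustrative only, not Safeguard's cache code):

```python
import time

class TTLCache:
    """Serve a cached value until it is older than ttl seconds,
    then fall through to the fetch function and re-cache."""
    def __init__(self, ttl, clock=time.monotonic):
        self.ttl, self.clock, self.store = ttl, clock, {}

    def get(self, key, fetch):
        hit = self.store.get(key)
        if hit and self.clock() - hit[0] < self.ttl:
            return hit[1]          # fresh enough: no network call
        value = fetch(key)         # stale or absent: fetch and re-stamp
        self.store[key] = (self.clock(), value)
        return value

# Usage: repeated lookups inside the TTL window hit the cache.
cache = TTLCache(ttl=900)
advisory = cache.get("CVE-2025-0001", lambda cve: {"id": cve, "severity": "high"})
```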

On the ChatGPT side, prompt the model to batch tool calls when possible. For example, "fetch advisories for these five CVEs in a single call" rather than five separate calls. The sg.advisory.lookup tool accepts an array of IDs specifically for this reason.
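On the wire, the batched version is a single tools/call request whose arguments carry the whole array. A sketch of that payload (`ids` is an assumed parameter name — the authoritative schema is whatever the server reports via tools/list):

```python
import json

cves = ["CVE-2025-0001", "CVE-2025-0002", "CVE-2025-0003"]

# One tools/call covering all five (here, three) CVEs instead of one call each.
# "ids" is an assumed argument name for sg.advisory.lookup's batch form.
batched = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "sg.advisory.lookup",
        "arguments": {"ids": cves},
    },
}
wire = json.dumps(batched)
```

One request means one model round-trip and one rate-limit hit, however many IDs are inside.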

What Privacy Considerations Apply?

When you connect Safeguard to ChatGPT, the tool output flows through the ChatGPT model. That means OpenAI sees the tool response bodies — package names, versions, advisory text. If that's sensitive (for example, internal package names that imply an unreleased product), either use Safeguard's redaction mode:

sg mcp serve --stdio --redact-private-packages --redact-internal-paths

Or deploy ChatGPT Enterprise, which contractually restricts training on your data. Safeguard's MCP server never sends data back to Safeguard's SaaS beyond what's needed for the scan itself, and the CLI's network destinations are listed in sg config show.
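Conceptually, redaction rewrites tool output before it ever crosses the wire to the model. A sketch of the idea (the `@yourcorp/` scope convention and the pattern are assumptions for illustration — Safeguard's actual redaction rules are configured, not hardcoded):

```python
import re

# Assumed convention: internal packages live under an @yourcorp/ npm scope.
INTERNAL = re.compile(r"@yourcorp/[\w.-]+")

def redact(text):
    """Replace internal package names in tool output with a placeholder
    so advisory text stays useful without leaking product names."""
    return INTERNAL.sub("[redacted-internal-package]", text)
```

CVE IDs, versions, and remediation text survive intact; only the identifying package name is masked.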

How Do I Debug a Tool Call That Isn't Working?

Turn up verbosity on both ends. For the server:

SG_LOG=debug SG_MCP_LOG=/tmp/sg-mcp.log sg mcp serve --stdio --log-tool-calls

For ChatGPT Desktop, enable Settings → Advanced → MCP Debug Logs. The combination shows you exactly which tool the model called, what arguments it passed, and what response came back. The most common bug is the model passing a path outside SG_MCP_PROJECT_ROOT — the server refuses, the model reports the error in the transcript, and you adjust either the root or the prompt.

How Safeguard.sh Helps

Safeguard MCP turns ChatGPT from a fluent guesser into a grounded advisor. Every answer about your dependencies is backed by a real tool call you can audit. The same scan engine that runs in your CI is running against the same lockfile, producing the same advisories — so ChatGPT becomes an accurate narration layer over your existing security posture rather than a second opinion with no shared facts. For teams already using ChatGPT heavily, this is the easiest way to bring security into the AI workflow without giving up either fluency or accuracy.
