Two years into the Model Context Protocol becoming a default integration mechanism for agents, most enterprises are in the same place they were two years into shadow IT. They know they have a lot of MCP servers. They cannot tell you exactly how many, where they run, who owns them, what data they expose, or which agents are calling which tools. The protocol succeeded faster than the inventory practice around it matured.
The result is a familiar pattern. Security teams know that MCP is a sensitive control point because it is the bridge between agent reasoning and real systems. They know that a poorly governed MCP server is functionally a privilege-escalation surface. But the response so far has been mostly defensive, focused on hardening individual servers rather than on the program-level question of "what do we have?"
This post is about that program-level question: what an MCP server inventory looks like at enterprise scale, what data it should capture, where that data comes from, and how to make it durable.
Why ad hoc tracking does not survive contact with reality
Most teams start MCP governance by asking platform owners to register servers in a spreadsheet or wiki. This works for the first month. By month three, the registry is incomplete because individual contributors have spun up new servers for one-off use cases. By month six, the registry is misleading because servers that were meant to be temporary are still running, and servers that were registered have been quietly retired without being delisted.
The problem is the same one that broke service registries a decade ago. Manual registration scales until the rate of change exceeds the rate at which someone is willing to update a list. MCP server creation is fast, often a single configuration file in a repository or a small process started in a developer environment. The pace of creation is far above the pace at which most security organizations can maintain a manual catalog.
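To make that speed concrete, this is roughly all it takes to bring a new MCP server into an agent's reach, using the mcpServers configuration shape that many clients share. The server name and package here are hypothetical, and the exact format varies by client:

```json
{
  "mcpServers": {
    "internal-billing": {
      "command": "npx",
      "args": ["-y", "@acme/mcp-billing-server"],
      "env": { "BILLING_API_TOKEN": "<token>" }
    }
  }
}
```

A developer commits a file like this, the agent client picks it up, and a new tool surface exists, with no deployment pipeline for an inventory to observe.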
The other failure mode is over-reliance on one signal. A registry that only watches a single deployment platform misses servers running in other clouds, on developer laptops connected to corporate identity, in CI runners, or in vendor environments connected via federated agents. Single-source registries have permanent blind spots.
What needs to be in the inventory
A useful MCP inventory captures more than a list of servers. It captures the full picture that lets a security team answer governance questions without a side investigation. At minimum, that means the fields below; a sketch of the record follows them.
Server identity. A stable identifier that survives version upgrades and redeployments, plus the human-readable name and the team that owns it. Version, build provenance, and source repository when known.
Tool catalog. Every tool the server exposes, including the tool name, schema, and any side effects the maintainers have declared. Tool catalogs change with each version, so the inventory should track tool changes over time, not just current state.
Auth and identity. How agents authenticate to the server. Which scopes or permissions the server requires from downstream systems. Which identity the server uses when calling backends.
Data exposure. What data classes the tools can read or modify, derived from the systems the server connects to. A tool that reads from the customer data warehouse is structurally different from one that fetches public docs.
Consumers. The agents, applications, or users known to call this server. This is the link that makes the inventory useful for blast radius analysis. Without consumer linkage, you can list servers but not reason about impact.
Lifecycle state. When it was created, when it was last seen, when it was last reviewed, and what its retirement plan is. Servers that no agent has called in 60 days are candidates for retirement.
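As a minimal sketch, assuming nothing beyond the fields above, an inventory record might look like the following. The field names and types are illustrative, not a standard:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ToolEntry:
    name: str
    schema_hash: str                    # hash of the tool's input schema, useful for drift detection
    declared_side_effects: list[str] = field(default_factory=list)

@dataclass
class MCPServerRecord:
    logical_id: str                     # stable across versions and redeployments
    display_name: str
    owner_team: str
    version: str
    source_repo: str | None             # build provenance, when known
    tools: list[ToolEntry]              # tracked over time, not just current state
    auth_scopes: list[str]              # scopes required from downstream systems
    data_classes: set[str]              # e.g. {"customer_pii"} vs {"public_docs"}
    consumers: list[str]                # agents and applications known to call this server
    created_at: datetime
    last_seen_at: datetime
    last_reviewed_at: datetime | None   # None means never reviewed
```

Nothing here is exotic. The point is that each field answers a governance question directly, without a side investigation.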
Where the data actually comes from
The inventory has to be built from systems that already exist. Asking teams to enter the data manually is the path that fails. Practical sources include SCM repositories, where MCP server configurations and source code live; CI and build systems, where server images are produced; deployment platforms, where servers run; identity providers, where service accounts are issued; and runtime telemetry from agents and gateways, where calls are observed.
Each source contributes part of the picture. The SCM gives you ownership and version. The build system gives you provenance. The runtime gives you call patterns and last seen timestamps. The identity provider gives you the auth surface. None of them, alone, is sufficient. Reconciliation across sources is the actual work.
A common reconciliation challenge is identifying the same logical server across environments. The dev instance, the staging instance, and the prod instance of an MCP server are technically separate processes with separate endpoints. Treating them as three unrelated entries inflates the inventory and obscures the lineage. Treating them as one logical entity with three deployments preserves the relationship that matters.
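Here is a sketch of that grouping step, under the assumption that source repository plus server name is a good enough identity rule. Real programs will need messier heuristics, but the shape is the same:

```python
from collections import defaultdict

# Hypothetical partial records as emitted by different discovery sources.
discovered = [
    {"name": "billing", "repo": "git.example.com/mcp/billing", "env": "dev",
     "endpoint": "http://10.0.1.4:8080"},
    {"name": "billing", "repo": "git.example.com/mcp/billing", "env": "staging",
     "endpoint": "https://stg-mcp-billing.internal.example.com"},
    {"name": "billing", "repo": "git.example.com/mcp/billing", "env": "prod",
     "endpoint": "https://mcp-billing.internal.example.com"},
]

def logical_key(record: dict) -> str:
    # One plausible identity rule: same source repo + server name = same logical server.
    return f"{record['repo']}#{record['name']}"

logical_servers: dict[str, list[dict]] = defaultdict(list)
for record in discovered:
    logical_servers[logical_key(record)].append(record)

for key, deployments in logical_servers.items():
    envs = ", ".join(sorted(d["env"] for d in deployments))
    print(f"{key}: 1 logical server, {len(deployments)} deployments ({envs})")
```

The output is one entry with three deployments rather than three unrelated entries, which is exactly the lineage the governance loops below depend on.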
The governance loops the inventory enables
An inventory is only valuable if it drives action. The MCP inventory should feed three loops at minimum.
A review loop. Every new server entering the inventory should trigger an automatic review against policy. Servers that expose sensitive data classes, that run with broad scopes, or that lack provenance signals should require explicit signoff before agents can call them in production.
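A sketch of that automatic screening, reusing the record fields from earlier; the data-class taxonomy and scope names are placeholders that a real policy would define:

```python
SENSITIVE_CLASSES = {"customer_pii", "financial", "credentials"}  # placeholder taxonomy
BROAD_SCOPES = {"admin", "write:all"}                             # placeholder scope names

def signoff_reasons(server: dict) -> list[str]:
    """Return the reasons a newly discovered server needs human review, if any."""
    reasons = []
    if SENSITIVE_CLASSES & set(server.get("data_classes", ())):
        reasons.append("exposes sensitive data classes")
    if BROAD_SCOPES & set(server.get("auth_scopes", ())):
        reasons.append("runs with broad scopes")
    if not server.get("source_repo"):
        reasons.append("lacks provenance signals")
    return reasons  # empty list means no explicit signoff is required
```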
A drift loop. When a server's tool catalog or auth surface changes, the inventory should compare the new state against the prior signed off state and flag the diff. This catches the case where a maintainer adds a tool with broader effects than the original review covered.
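The comparison itself can stay simple if the inventory stores a snapshot of tool names and schema hashes at signoff time. A minimal sketch, with the snapshot shape assumed:

```python
def tool_catalog_drift(signed_off: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Diff two {tool name: schema hash} snapshots of a server's tool catalog."""
    return {
        "added": sorted(set(current) - set(signed_off)),    # the case reviews most often miss
        "removed": sorted(set(signed_off) - set(current)),
        "schema_changed": sorted(
            name for name in set(current) & set(signed_off)
            if current[name] != signed_off[name]
        ),
    }

# Example: a maintainer ships a new delete_records tool after the original review.
drift = tool_catalog_drift(
    signed_off={"read_invoice": "a1f3", "list_invoices": "9c2e"},
    current={"read_invoice": "a1f3", "list_invoices": "9c2e", "delete_records": "77b0"},
)
assert drift["added"] == ["delete_records"]
```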
A retirement loop. Servers that have not been called in a configurable window should enter a retirement-candidate state. Owners get a notification, and if no objection is raised, the server is retired and removed from the registry, and its consumers are warned that the dependency is going away.
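The retirement check is the simplest of the three, a window filter over the last-seen timestamp. A sketch, assuming timezone-aware timestamps and the 60-day window mentioned earlier:

```python
from datetime import datetime, timedelta, timezone

RETIREMENT_WINDOW = timedelta(days=60)  # configurable per policy

def retirement_candidates(servers: list[dict]) -> list[dict]:
    """Servers that no agent has called within the window become retirement candidates."""
    now = datetime.now(timezone.utc)
    return [s for s in servers if now - s["last_seen_at"] > RETIREMENT_WINDOW]
```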
These loops convert the inventory from a snapshot into a living governance surface. Without them, the inventory becomes another spreadsheet that decays.
Common antipatterns
Two patterns recur in failed MCP inventory programs. The first is treating MCP governance as separate from software governance. MCP servers are software. They have dependencies, vulnerabilities, license terms, and provenance. If the SBOM program does not cover them, you will end up with two parallel security workflows that do not talk to each other.
The second is over-centralizing ownership. Security teams that try to own every MCP server end up as a bottleneck, which causes teams to avoid registering. Better to make the inventory automatic, the policy enforcement automated, and the human review reserved for the edge cases that genuinely need judgment.
How Safeguard helps
Safeguard runs MCP inventory as part of continuous asset discovery, not as a separate program. Servers are discovered from SCM repositories, build pipelines, deployment platforms, and runtime telemetry, and reconciled into a single registry that maps logical servers to their environments. Each entry carries owner, version, tool catalog, auth surface, and consumer agent linkage, and the registry watches for drift in any of these dimensions.
The MCP registry is connected to the AI-BOM and the software SBOM, so a server's components, models, and dependencies are visible in one view. When a new vulnerability hits a transitive dependency in an MCP server, you see immediately which agents call it and which data classes are exposed. Continuous discovery, AI-BOM, and the MCP registry together replace the spreadsheet with a governed, queryable, and self-updating inventory of every MCP surface in your enterprise.