On September 8, 2025 the MCP working group launched the official MCP Registry at registry.modelcontextprotocol.io — an open catalogue and API for publicly available MCP servers, designed to replace the fragmented landscape of vendor-specific registries with a single discoverable source. By October 2025 the registry had close to 2,000 entries, representing 407% growth from the initial onboarding batch. The launch announcement on the MCP blog framed the registry as both an availability solution and a security one: a single namespace-bound catalogue makes server provenance verifiable in a way that "download from a random GitHub repository" never did. For defenders, the registry is one of the first pieces of standardised supply-chain infrastructure the agent ecosystem has produced. Understanding what it defends against, and what it does not, is foundational to building a sustainable MCP consumption practice.
What does the namespace model actually enforce?
The registry uses a namespace-based approach to manage server identity. To publish a server under io.github.&lt;user&gt;/&lt;server&gt;, the publisher must authenticate via GitHub as that user; to publish under a reverse-DNS namespace such as com.example/&lt;server&gt;, the publisher must prove control of example.com through DNS or HTTP challenges. The validation runs on every publish, not just at namespace creation, so a server's namespace remains bound to a verifiable identity for as long as it is listed. The architectural overview from WorkOS and the InfoQ launch coverage both highlight this as the core security property: an attacker who wants to publish a fake "Anthropic Git MCP" cannot use io.github.anthropic/ unless they control the Anthropic GitHub organisation, and cannot use com.anthropic/ unless they control DNS for anthropic.com.
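On the consumer side, the namespace check reduces to string parsing: given a registry name like io.github.anthropic/mcp-server-git, split off the namespace and compare it to the publisher you expect. A minimal sketch — the function name and the expected-publisher map are hypothetical, not part of any official client library:

```python
# Hypothetical consumer-side check: does a server's namespace match the
# publisher we expect for that server? The map is maintained by the consumer.
EXPECTED_PUBLISHERS = {
    "mcp-server-git": "io.github.anthropic",  # illustrative entry
}

def matches_expected_publisher(server_name: str) -> bool:
    """Split 'namespace/short-name' and compare against the expected map."""
    namespace, _, short_name = server_name.partition("/")
    expected = EXPECTED_PUBLISHERS.get(short_name)
    return expected is not None and namespace == expected

print(matches_expected_publisher("io.github.anthropic/mcp-server-git"))           # True
print(matches_expected_publisher("io.github.anthropic-official/mcp-server-git"))  # False
```

The point is that the registry makes this comparison meaningful: the namespace string is bound to a verified identity, so a simple equality check carries real provenance weight.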
What classes of attack does this defend against?
Two specifically. The first is typosquatting at the namespace level — an attacker publishing io.github.anthropic-official/mcp-server-git to ride on confusion with the legitimate io.github.anthropic/. The registry does not forbid this, but it makes the deception verifiable: a consumer checking the namespace can confirm it does not match the expected publisher. The second is publisher takeover — an attacker who briefly controls a domain or a GitHub account cannot retroactively change the namespace owner of an existing server without going through the registry's validation again. These defences are weaker than the analogous controls in npm or PyPI (which now require 2FA and have package-level provenance attestations), but they are stronger than the pre-registry baseline of "trust whatever GitHub URL the blog post links."
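Typosquat detection on top of an allowlist can be sketched with a crude similarity check: a namespace that nearly matches a trusted one is more suspicious than one that is merely unknown. The threshold and the difflib metric below are illustrative stand-ins for a real confusability model:

```python
# Illustrative lookalike detection against a consumer-maintained allowlist.
from difflib import SequenceMatcher

ALLOWED = {"io.github.anthropic", "io.github.modelcontextprotocol"}

def classify_namespace(ns: str, threshold: float = 0.8) -> str:
    """'allowed' for exact matches, 'suspected-typosquat' for near misses."""
    if ns in ALLOWED:
        return "allowed"
    for trusted in ALLOWED:
        # A high similarity score to a trusted namespace suggests deliberate
        # confusion rather than an unrelated publisher.
        if SequenceMatcher(None, ns, trusted).ratio() >= threshold:
            return "suspected-typosquat"
    return "unknown"

print(classify_namespace("io.github.anthropic"))           # allowed
print(classify_namespace("io.github.anthropic-official"))  # suspected-typosquat
print(classify_namespace("com.totally.unrelated"))         # unknown
```

The three-way outcome matters: "unknown" routes to human review, while "suspected-typosquat" deserves an immediate block and an alert.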
What does the registry not defend against?
Several important things. First, it does not validate the contents of the server itself — a server published under a verified namespace can still contain malicious tool descriptions, vulnerable code, or backdoors. The registry verifies who published, not what they published. Second, it does not enforce versioning or signing of individual releases. A maintainer can update the server's code on GitHub without the registry observing the change, and the registry's metadata can drift from what is actually distributed. Third, it does not provide a community trust signal beyond namespace ownership — there is no equivalent yet of npm's download counts plus security advisories plus maintainer history that a consumer can use to judge a server's trustworthiness. The registry is the discovery layer; the trust evaluation is still a consumer responsibility.
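Because the registry verifies who published rather than what ships, artifact verification stays with the consumer. A minimal sketch of the pin-and-verify step, with the pinned digest assumed to come from the consumer's own policy file rather than from the registry:

```python
# Consumer-side artifact pinning: compare a downloaded release against a
# SHA-256 digest recorded at review time. Paths and digests are supplied
# by the consumer's policy, not by the registry.
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file incrementally so large artifacts don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, pinned_digest: str) -> bool:
    """False means the artifact drifted from the pin: block the install."""
    return sha256_of(path) == pinned_digest
```

This closes the gap the registry leaves open: a maintainer can change the code on GitHub at any time, but a pinned digest fails loudly when the bytes change.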
What does a consumption policy on top of the registry look like?
The configuration below sketches a Safeguard-style policy that combines registry namespace verification with consumer-side controls — version pinning, hash verification, namespace allowlisting, and human review for new namespaces. The goal is to treat the registry as a source of provenance metadata rather than as the trust decision itself.
```yaml
# mcp-registry-consumption.yaml — consumer-side trust policy
registry:
  source_of_truth: "https://registry.modelcontextprotocol.io"
  refresh_interval_hours: 6
  trust_model: "namespace_verified_publisher"

allowed_namespaces:
  - prefix: "io.github.anthropic/"
    rationale: "Verified GitHub org; first-party servers"
  - prefix: "io.github.modelcontextprotocol/"
    rationale: "Working group; reference servers"
  - prefix: "com.example/"
    rationale: "Internal corporate servers"
  - prefix: "io.github.cloudflare/"
    rationale: "Approved vendor; reviewed 2025-10-15"

review_required_for:
  - any_new_namespace
  - any_server_whose_maintainer_changed_in_last_30_days
  - any_server_with_zero_dependents_in_registry

pinning:
  require_version: true
  require_sha256_of_release_artifact: true
  forbid_floating_tags: true  # no "latest", no "main"
  on_artifact_hash_mismatch:
    action: block
    notify: security@example.com

advisory_subscription:
  sources:
    - "https://registry.modelcontextprotocol.io/advisories"
    - "https://github.com/advisories?query=ecosystem%3Amcp"
  block_on_unresolved_advisory_severity: "high"
```
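A policy file only matters if something enforces it. The sketch below shows the decision logic a gate might apply; the dict mirrors a subset of the policy above, the names are illustrative, and the three-way allow/review/block outcome is the design choice worth copying:

```python
# Hypothetical policy gate: every install request resolves to exactly one
# of 'allow', 'review', or 'block'. The POLICY dict mirrors a subset of
# the consumption policy file.
POLICY = {
    "allowed_namespaces": [
        "io.github.anthropic/",
        "io.github.modelcontextprotocol/",
    ],
    "forbid_floating_tags": True,
}

def gate(server: str, version: str) -> str:
    """Decide whether an MCP server install may proceed."""
    # Floating tags defeat pinning: block before any namespace check.
    if POLICY["forbid_floating_tags"] and version in ("latest", "main"):
        return "block"
    if any(server.startswith(p) for p in POLICY["allowed_namespaces"]):
        return "allow"
    # Any namespace outside the allowlist routes to human review.
    return "review"

print(gate("io.github.anthropic/mcp-server-git", "1.2.3"))   # allow
print(gate("io.github.someoneelse/cool-server", "1.0.0"))    # review
print(gate("io.github.anthropic/mcp-server-git", "latest"))  # block
```

Routing unknown namespaces to "review" rather than "block" keeps the gate usable for developers while still guaranteeing a human sees every new publisher.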
How does the registry interact with the spec revisions?
Cleanly. The 2025-06-18 spec revision defined the OAuth resource-server pattern; the 2025-11-25 revision (covered in detail elsewhere) added the Tasks abstraction and further authorization clarifications. The registry catalogues which spec versions each server implements, so a consumer can filter to "servers implementing 2025-06-18 or later" as a baseline. That filter is increasingly important — pre-2025-06-18 servers do not enforce RFC 8707 resource indicators, so their tokens are reusable across servers in a way that makes confused-deputy attacks easier. Pinning to recent spec versions through the registry filter is the cheapest way to inherit the spec's evolving security baseline.
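Because MCP spec revisions are date-stamped, "implementing 2025-06-18 or later" reduces to a lexicographic string comparison. The entries and field names below are illustrative, not the registry's actual response schema:

```python
# Filtering registry entries to a minimum spec revision. ISO-dated
# revision strings sort lexicographically, so plain >= works.
BASELINE = "2025-06-18"

# Illustrative entries; the 'spec_version' field name is an assumption.
entries = [
    {"name": "io.github.example/a", "spec_version": "2025-03-26"},
    {"name": "io.github.example/b", "spec_version": "2025-06-18"},
    {"name": "io.github.example/c", "spec_version": "2025-11-25"},
]

modern = [e["name"] for e in entries if e["spec_version"] >= BASELINE]
print(modern)  # ['io.github.example/b', 'io.github.example/c']
```

Applied to real registry metadata, the same comparison lets a policy gate exclude pre-2025-06-18 servers and thereby inherit the RFC 8707 resource-indicator requirement by default.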
What is the path to a stronger trust model?
Three additions would meaningfully strengthen the registry. First, mandatory release signing — every server release should be signed by the namespace owner's verified key, and the registry should reject uploads without signatures. The infrastructure exists (Sigstore, npm provenance attestations); only the policy is missing. Second, a community advisory channel scoped to the registry — when a server is found to be malicious or vulnerable, a registry-level advisory should propagate to every consumer policy within minutes, not weeks. Third, federation primitives that let enterprises run private sub-registries that inherit trust roots from the public registry but apply additional review — the registry was designed to support this and the spec encourages it, but reference implementations are still maturing. The Linux Foundation contribution of A2A in mid-2025 set a precedent for protocol-level standardisation that the MCP registry can follow.
What should defenders do this quarter?
Three actions. First, replace any "download from GitHub" MCP server installation in your environment with a registry-sourced install, pinned by version and verified by SHA-256. Second, build a consumption policy on top of the registry — namespace allowlist, review for new namespaces, version pinning, advisory subscription — and enforce it in a policy gate rather than as documentation. Third, contribute to the registry's strengthening. If your team consumes MCP servers at scale, you have data the working group needs about which servers are popular, which have caused incidents, and which advisories should propagate. The registry is the supply-chain infrastructure the agent ecosystem has been missing; it works only if consumers use it as designed.
How Safeguard Helps
Safeguard's MCP supply-chain module ingests the official MCP Registry continuously, maintains a per-tenant namespace allowlist, and enforces version pinning plus SHA-256 verification on every MCP server install across the environment. Griffin AI flags new namespaces appearing in your developers' configurations and requires security review before any install completes, eliminating the case where a typosquat namespace silently makes it onto a developer endpoint. Advisory subscriptions ingest registry-level vulnerabilities and the public GitHub Advisory feed, so a server marked malicious propagates to a block decision within minutes. The registry's namespace model defends against impersonation; Safeguard fills the gap between provenance and trust so the registry's launch translates into a meaningfully safer environment.