The shared-token anti-pattern
Walk into most MCP deployments and you will find the same anti-pattern. There is one service account, one long-lived token, and every MCP server in the environment uses it. The token has broad scope because nobody wanted to be the person who blocked an engineer's tool call by under-provisioning. The token has been in production for nine months because rotating it would break something nobody fully understands. And the token is sitting in environment variables on three different hosts, in a CI secret store, and on at least one engineer's laptop.
This is the same anti-pattern security teams have been fighting in non-AI systems for two decades. The reason it has shown up again with MCP is that MCP servers feel new, and teams have not yet rebuilt the muscle memory of treating each server as its own identity with its own scope. The fix is not new either. It is the per-server scoped-credential pattern, applied carefully.
What scoping per server actually means
The pattern is simple to describe. Every MCP server gets its own credential, and that credential is scoped to exactly the resources the server's declared tools need. A linting MCP server that only reads source code from a single repository gets a credential that can read that one repository and nothing else. A deployment MCP server that pushes to a specific Kubernetes namespace gets a credential that can act on that namespace and nothing else. There is no shared token, no broad service account, and no credential whose scope was decided by guessing.
The hard part is mapping tools to scopes correctly. Most tool authors are not security engineers, and tool descriptions tend to be optimistic. A tool described as "read project metadata" might, on closer inspection, also be capable of reading project secrets, because the underlying API does not separate those concerns cleanly. The mapping has to be done by someone who understands both the tool and the API behind it, and it has to be reviewed when the tool changes.
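One way to make the tool-to-scope mapping reviewable is to write it down as data and derive the credential's scope from it. A minimal sketch, where the tool names, scope strings, and mapping format are all illustrative rather than part of any real MCP schema:

```python
# Hypothetical tool-to-scope mapping for a linting MCP server.
# Each tool lists exactly the scopes it needs; unmapped tools fail loudly
# instead of silently inheriting a broad default.
TOOL_SCOPES = {
    "lint_file":   {"repo:acme/web-app:read"},
    "read_config": {"repo:acme/web-app:read"},
    "open_issue":  {"repo:acme/web-app:issues:write"},
}

def required_scopes(declared_tools):
    """Union of the scopes the declared tools actually need."""
    missing = [t for t in declared_tools if t not in TOOL_SCOPES]
    if missing:
        raise ValueError(f"unmapped tools need review: {missing}")
    return set().union(*(TOOL_SCOPES[t] for t in declared_tools))

print(sorted(required_scopes(["lint_file", "read_config"])))
# -> ['repo:acme/web-app:read']
```

Because the mapping lives in version control, a tool change shows up as a diff to `TOOL_SCOPES`, which is exactly the review point the paragraph above calls for.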
Identity model
The identity model that works is one identity per server, distinct from the identity of any user or any agent. The server identity is what the credential authenticates as. When the server makes an API call, the call is attributed to the server. The server is then responsible for forwarding the user identity and the agent identity as additional context, either through impersonation headers or through a separate trust chain, depending on what the upstream API supports.
This three-layer attribution, server plus user plus agent, is what makes audit logs useful. A log line that says only that the API was called by service-account-mcp-prod tells you nothing. A log line that says the API was called by the deployment MCP server, on behalf of user alice@example.com, via agent release-bot, with reasoning chain xyz, tells you everything. The cost of producing the second log line is small if you set the identity model up correctly from day one. Retrofitting it later is painful.
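The second log line is cheap to produce if the server emits a structured record on every upstream call. A sketch of what that record could look like; the field names and the `audit_entry` helper are illustrative, not a standard:

```python
import json
from datetime import datetime, timezone

def audit_entry(server, user, agent, action, reasoning_ref=None):
    """Structured audit record carrying all three identity layers."""
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "server": server,        # the identity the credential authenticates as
        "on_behalf_of": user,    # forwarded user identity
        "via_agent": agent,      # forwarded agent identity
        "action": action,
        "reasoning_ref": reasoning_ref,  # pointer to the reasoning chain, if any
    }

entry = audit_entry(
    server="mcp-deploy",
    user="alice@example.com",
    agent="release-bot",
    action="rollout:cluster:prod/namespace:checkout",
)
print(json.dumps(entry, indent=2))
```

The key design point is that the user and agent fields are forwarded context, not the authenticated identity; the credential still belongs to the server alone.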
Scope sources
Scopes should come from the server's declared manifest, not from a separate configuration file that drifts. When a server registers, it declares which tools it exposes and what each tool needs. The registry computes the union of those needs and provisions a credential with exactly that scope. When the server's manifest changes, the credential is reissued with the new scope. Scope changes that stay within the previously approved envelope need no human in the loop; scope expansions beyond it get a real review.
The way to encode this in practice is a scope policy per server, expressed as a list of resource patterns and actions. For example, a documentation MCP server's scope policy might be read on docs/* and nothing else. A deployment MCP server's scope policy might be read and write on cluster:prod/namespace:checkout/*. The policies are version-controlled, reviewed in pull requests, and applied by the identity provider when the credential is issued.
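A scope policy of this shape can be evaluated with straightforward pattern matching. A minimal sketch using Python's standard `fnmatch`, with the policy format and resource strings taken from the examples above but otherwise hypothetical:

```python
from fnmatch import fnmatch

# Version-controlled scope policy for the deployment MCP server
# (illustrative format: a list of action sets over resource patterns).
DEPLOY_POLICY = [
    {"actions": {"read", "write"}, "resource": "cluster:prod/namespace:checkout/*"},
]

def allowed(policy, action, resource):
    """True if any rule in the policy grants `action` on `resource`."""
    return any(
        action in rule["actions"] and fnmatch(resource, rule["resource"])
        for rule in policy
    )

assert allowed(DEPLOY_POLICY, "write", "cluster:prod/namespace:checkout/deploy-42")
assert not allowed(DEPLOY_POLICY, "write", "cluster:prod/namespace:payments/deploy-7")
```

Because the policy is a short, reviewable list, a pull request that widens a pattern or adds an action is immediately visible in the diff.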
Rotation and revocation
A credential that cannot be rotated is a credential that will eventually leak. The pattern only works if rotation is automatic and frequent. The credentials we provision for MCP servers are short-lived, typically one to four hours, and refreshed on the fly by the server's runtime. There is no long-lived token sitting in an environment variable. If the server is compromised, the worst case is that the attacker has a usable credential for the remainder of its lifetime, after which the credential expires and the attacker has to figure out how to obtain another one through whatever path is now monitored.
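In the server's runtime, this refresh loop is a small amount of code. A sketch of a credential source that refreshes before expiry; the one-hour lifetime, five-minute margin, and `issue` callback standing in for the identity provider's token endpoint are all illustrative:

```python
import time
from dataclasses import dataclass

@dataclass
class Credential:
    token: str
    expires_at: float  # epoch seconds

class CredentialSource:
    """Hands out a short-lived credential, refreshing it before expiry."""
    LIFETIME = 3600  # one hour, illustrative
    MARGIN = 300     # refresh five minutes early to avoid mid-call expiry

    def __init__(self, issue):
        self._issue = issue  # callable standing in for the IdP token endpoint
        self._cred = None

    def get(self):
        now = time.time()
        if self._cred is None or now >= self._cred.expires_at - self.MARGIN:
            self._cred = Credential(self._issue(), now + self.LIFETIME)
        return self._cred.token
```

Every upstream API call fetches its token through `get()`, so no long-lived token is ever written to an environment variable or config file.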
Revocation is the other half. If a server is compromised or retired, the credential and any active sessions are revoked immediately. The identity provider's revocation list is checked on every API call, which means a revoked credential stops working in seconds rather than waiting for the next refresh. This is the same pattern that mature human-identity systems use. It applies cleanly to machine identities too.
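The per-call revocation check is simple in outline. A sketch using an in-memory set as a stand-in for the identity provider's revocation list; a real deployment would consult the provider (for example via token introspection) rather than local state:

```python
# Stand-in for the IdP's revocation list (illustrative only).
REVOKED = set()

def revoke(credential_id):
    """Revoke a credential for a compromised or retired server."""
    REVOKED.add(credential_id)

def authorize_call(credential_id):
    """Checked on every API call, so revocation takes effect in seconds."""
    if credential_id in REVOKED:
        raise PermissionError(f"credential {credential_id} has been revoked")
    return True  # proceed with the upstream API call
```

The point of checking on every call, rather than only at refresh time, is that a revoked credential stops working immediately instead of surviving until its next expiry.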
Migrating from shared tokens
Most teams who read this post will have a shared-token deployment in production today. The migration to per-server scoped credentials is doable in a few weeks if it is sequenced correctly. The first step is to inventory the MCP servers and the tokens they use. The second is to provision per-server credentials in a staging environment and get the servers running against them, fixing any scope gaps that appear. The third is to roll the new credentials to production behind a feature flag, with the shared token still available as a fallback. The fourth is to remove the shared token, which is the step that proves the migration actually worked.
The mistake teams make in this migration is skipping step three. If you cut over from shared tokens to per-server credentials without a fallback, you will discover scope gaps in production at the worst possible moment. The fallback gives you a few days to find the gaps with real traffic and close them before they cause an outage.
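Step three can be as small as a token-selection function behind a flag. A sketch, where the `MCP_SCOPED_CREDS` environment variable and the function name are hypothetical:

```python
import os

def pick_token(server_name, scoped_tokens, shared_token):
    """Prefer the per-server credential; fall back to the shared token
    while the migration flag is on. Flag name is illustrative."""
    if os.environ.get("MCP_SCOPED_CREDS") == "1":
        token = scoped_tokens.get(server_name)
        if token is not None:
            return token, "scoped"
        # Scope gap found in production traffic: log it loudly so it
        # gets fixed before the shared token is removed in step four.
        print(f"WARN: no scoped credential for {server_name}, using fallback")
    return shared_token, "shared"
```

Logging which path was taken gives you the data for step four: once real traffic runs for a few days without hitting the fallback, removing the shared token is safe.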
Common scoping mistakes
Three mistakes show up over and over in scope policies. The first is scoping at the wrong granularity. A scope of read on the entire production database is not a scope; it is a license to do whatever you want. Real scopes target specific tables, specific columns, or specific datasets. The second is forgetting about transitive access. A tool that reads project metadata might be allowed to follow a link to a secret if the API does not block that. The scope policy has to account for what the tool can reach, not just what it directly references. The third is not accounting for write amplification. A tool that writes a single record might trigger downstream effects that touch many other systems. The scope has to cover the actual blast radius, not the immediate call.
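The granularity mistake is easy to demonstrate with the same pattern-matching used for scope policies. A small illustrative check, with hypothetical resource names:

```python
from fnmatch import fnmatch

def grants(pattern, resource):
    """Does this scope pattern cover this resource?"""
    return fnmatch(resource, pattern)

# Too broad: "everything in prod" quietly covers the secrets table too.
assert grants("db:prod/*", "db:prod/user_secrets")

# A real scope names only the table the tool actually reads.
assert grants("db:prod/orders", "db:prod/orders")
assert not grants("db:prod/orders", "db:prod/user_secrets")
```

A review that asks "run the sensitive resources against this pattern" catches the first mistake mechanically; the transitive-access and write-amplification mistakes still need a human who knows the API.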
What good looks like
A well-scoped MCP deployment has a few visible signs. Each server has its own identity in the IdP. Each identity has a scope policy that is short, specific, and version-controlled. Credentials are short-lived and refreshed automatically. The audit log clearly attributes every action to a server, a user, and an agent. When a new tool is added to a server, the scope policy is updated in a pull request that goes through review. When a server is retired, its identity is deleted and any historical audit records are still attributable.
If you can see all of those signs in your environment, you have a defensible MCP credential model. If any of them are missing, that is the next thing to fix.
How Safeguard Helps
Safeguard provisions per-server scoped credentials directly from the registry's approved server manifests, eliminating the gap between what was reviewed and what is in production. Scope policies are version-controlled, drift is detected automatically, and credential rotation is handled by the platform on a schedule that matches your risk tolerance. When a server's manifest changes, Safeguard suspends the affected credential until the change is re-reviewed, and every tool call ends up in an audit log that attributes it to the server, the user, and the agent. The shared-token anti-pattern stops being possible by construction.