Every time I review a team's Azure Container Registry setup, I ask the same opening question: when you pull myregistry.azurecr.io/api:v1.2.3 into production, what exactly are you trusting, and what is verifying that trust? The answers I get range from "we trust the registry" to "we trust the SHA" to — most honestly — "I don't know, we just pull it." All three answers point at the same gap. Azure Container Registry (ACR) has real trust primitives, but they are not on by default, and the default pull is not a verified pull.
This post unpacks the ACR trust model as it stands in mid-2024, covers what Notation signing and notation verify actually give you, and is explicit about the places where the trust chain still breaks.
What ACR Guarantees Out of the Box
A vanilla ACR pull over HTTPS gives you two things: authenticity of the registry (the TLS certificate chains back to a Microsoft-operated service) and integrity of the image bytes at retrieval (the content digest matches). It does not give you authenticity of the image publisher, and it does not give you any guarantee that the tag you asked for is the tag you should be getting.
Tags in OCI registries, including ACR, are mutable by design. A tag is a pointer to a digest, and the pointer can be moved. If an attacker has push rights to the repository, they can re-tag any image they control as v1.2.3 and your next pull will get their image. The default ACR role — AcrPush — grants exactly this ability. I have reviewed organizations where dozens of service principals had AcrPush across shared registries, and the blast radius of compromising any one of them was "every image in the registry can be replaced silently."
The first hardening step, before any signing discussion, is to treat push rights as the privileged operation they are. Use scope maps (generally available since 2021) to restrict service principals to specific repositories, prefer push rights scoped to individual repositories over registry-wide AcrPush, and turn on the registry's immutable tag feature where it fits the workflow. Immutable tags break some CI patterns, so most teams apply them only to release tags, not to branch builds.
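As a concrete sketch with the Azure CLI (the registry and image names are the placeholders used throughout this post), locking a release tag looks like this:

```sh
# Lock the release tag so pushes can no longer overwrite or delete it.
az acr repository update \
  --name myregistry \
  --image api:v1.2.3 \
  --write-enabled false \
  --delete-enabled false
```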
Notation and OCI 1.1 Signatures
In mid-2023, ACR shipped general availability of Notation-based image signing aligned with the Notary Project's OCI 1.1 artifact spec. This is the modern signing path, and it replaces the earlier Docker Content Trust / Notary v1 integration that shipped on ACR in 2019 and was quietly deprecated. If your runbook still references DOCKER_CONTENT_TRUST=1 on ACR, the runbook is out of date.
Notation signs an image by computing the manifest digest, creating a JWS-formatted signature over that digest, and storing the signature as a separate OCI artifact referenced from the image through the subject field. The signature artifact has its own digest and is pushed alongside the image. Verification fetches the image, fetches the signature, validates the signature against a configured trust policy, and confirms the signer's certificate chains back to a trust store you control.
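To make the structure concrete, here is a trimmed illustration of a Notation signature manifest. The digests and sizes are placeholders; the subject field is the piece that ties the signature back to the image manifest it covers:

```json
{
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "artifactType": "application/vnd.cncf.notary.signature",
  "config": {
    "mediaType": "application/vnd.oci.empty.v1+json",
    "digest": "sha256:<empty-config-digest>",
    "size": 2
  },
  "layers": [
    {
      "mediaType": "application/jose+json",
      "digest": "sha256:<digest-of-the-jws-envelope>",
      "size": 2048
    }
  ],
  "subject": {
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "digest": "sha256:<digest-of-the-signed-image-manifest>",
    "size": 1234
  }
}
```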
The trust store is the part teams get wrong. Notation trust policies are local — they live in ~/.config/notation/trustpolicy.json on the verifier — and the trust is rooted in whichever certificates or key references you put there. If every verifier has a different trust store, or if the trust store is never updated, the signature buys you nothing. The working pattern is to manage the trust store as a versioned artifact (a ConfigMap in Kubernetes, a mounted file in App Service) and roll it forward on a schedule.
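For reference, a minimal trust policy looks like the following. The policy name, trust store name, and certificate subject are hypothetical; registryScopes pins the policy to the one repository it governs:

```json
{
  "version": "1.0",
  "trustPolicies": [
    {
      "name": "production-images",
      "registryScopes": [ "myregistry.azurecr.io/api" ],
      "signatureVerification": { "level": "strict" },
      "trustStores": [ "ca:release-signing" ],
      "trustedIdentities": [ "x509.subject: CN=release-signer,O=Example Corp,C=US" ]
    }
  ]
}
```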
The common key backends on Azure are Azure Key Vault (for hardware-backed signing keys) and a certificate issued by an internal PKI. Key Vault is the path most teams take because the signing key never leaves the HSM and Notation's Azure Key Vault plugin (notation-azure-kv) handles the integration. The plugin authenticates with workload identity, not a secret, which avoids a long-lived credential for the signer.
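Registering the Key Vault-backed key with Notation is a one-time setup per signer. A sketch, assuming the plugin is already installed; the vault URL, key name, and version are placeholders:

```sh
# Register the HSM-backed Key Vault key under a local name that
# notation sign can reference. The key material never leaves the vault.
notation key add \
  --plugin azure-kv \
  --id "https://myvault.vault.azure.net/keys/release-signing/<version>" \
  acr-signing-key
```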
Signing in the Pipeline, Verifying in the Cluster
The model that works in production is to sign at the end of the build pipeline and verify at admission in the cluster. In Azure DevOps, the signing step is a task that calls notation sign --key acr-signing-key with the Key Vault-backed key; in GitHub Actions targeting ACR, the equivalent step uses the same plugin with a federated identity.
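A minimal sketch of that signing step, assuming the acr-signing-key registration above, a completed registry login (az acr login or a federated equivalent), and an IMAGE_DIGEST variable produced by the build:

```sh
# Sign by digest rather than by tag so the signature binds to
# immutable content, not to a movable pointer.
notation sign \
  --key acr-signing-key \
  "myregistry.azurecr.io/api@${IMAGE_DIGEST}"
```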
On the verification side, AKS has integrated with Ratify and the Gatekeeper policy engine since late 2022, and that integration shipped as the Image Integrity feature in 2023 (still in preview as of mid-2024). Ratify is the reference verifier for Notary Project signatures on Kubernetes; it runs as an admission controller, fetches signatures for every image in a pod spec, and denies the pod if the signatures do not match the configured trust policy. Azure Policy has a built-in policy — "Container images should be signed by trusted image signing solutions" — that enables this on AKS clusters with a single policy assignment.
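The Image Integrity add-on manages Ratify for you. If you run Ratify directly, the Notation verifier is configured through a custom resource along these lines; treat this as a sketch against Ratify's published CRDs, since the certificate store name here is hypothetical and the exact fields can shift between Ratify versions:

```yaml
apiVersion: config.ratify.deislabs.io/v1beta1
kind: Verifier
metadata:
  name: verifier-notation
spec:
  name: notation
  artifactTypes: application/vnd.cncf.notary.signature
  parameters:
    verificationCertStores:
      certs:
        - ratify-notation-inline-cert   # hypothetical cert store resource
    trustPolicyDoc:
      version: "1.0"
      trustPolicies:
        - name: default
          registryScopes: ["*"]
          signatureVerification:
            level: strict
          trustStores: ["ca:certs"]
          trustedIdentities: ["*"]
```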
The gap in this story is anything that is not AKS. Azure App Service for Containers, Azure Container Apps, and Azure Container Instances do not currently enforce Notation signatures at deploy time. You can sign the images, but there is no runtime check. For those services, the verification has to move earlier — into the CI pipeline, before the image is tagged for release — and the pipeline becomes the enforcement point. That is weaker than admission control, because a secondary push path (a human with AcrPush) bypasses it, but it is better than nothing.
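In practice that pipeline gate is a verify-then-promote sequence: the release tag is applied only to a digest that has passed verification. A sketch, assuming IMAGE_DIGEST from the build and oras available on the agent:

```sh
set -euo pipefail

# Gate: a failed verification stops the pipeline before any release tag exists.
notation verify "myregistry.azurecr.io/api@${IMAGE_DIGEST}"

# Promote only the verified digest to the release tag.
oras tag "myregistry.azurecr.io/api@${IMAGE_DIGEST}" v1.2.3
```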
Scope Maps, Tokens, and the Principle of Least Push
ACR scope maps and tokens are among the least-known ACR features and among the most useful for hardening. A scope map is a named collection of repository-level actions (content/read, content/write, metadata/read, metadata/write) that can be bound to an ACR token. Tokens are registry-local credentials that do not require an Azure AD identity, which makes them convenient for non-Azure build infrastructure.
The pattern I recommend is to create a scope map per pipeline, grant content/write only on the specific repository that pipeline builds, and bind a short-lived token to that scope map. The CI pipeline uses the token to push, the token is rotated by automation every 30 days, and the blast radius of a compromise is the single repository rather than the whole registry. This is the ACR equivalent of short-lived credentials for Git, and it closes the "one compromised service principal owns all images" problem I opened the post with.
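The Azure CLI flow for that pattern, with hypothetical scope map and token names:

```sh
# One scope map per pipeline, limited to the single repository it builds.
az acr scope-map create \
  --name api-ci-push \
  --registry myregistry \
  --repository api content/write content/read

# Bind a token to the scope map.
az acr token create \
  --name api-ci-token \
  --registry myregistry \
  --scope-map api-ci-push

# Rotation: generate a credential with a 30-day expiry, matching the cadence above.
az acr token credential generate \
  --name api-ci-token \
  --registry myregistry \
  --password1 \
  --expiration-in-days 30
```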
For Azure AD-based push paths, the equivalent control is to avoid registry-level AcrPush entirely and use repository-level RBAC (previewed in 2023, GA in H1 2024 on supported SKUs). Repository-level RBAC does what scope maps do, but through Azure AD identities.
Where the Trust Chain Still Breaks
Signing does not solve the base image problem. If your signed image is built from an unsigned base image, the signature asserts that the top layer is authentic, not that the entire filesystem is. The build pipeline has to verify base image signatures before building, or you have to accept that the base image is a trusted-but-unsigned component. ACR Tasks (the registry-native build feature) can chain verification into the build, and that is how teams running fully air-gapped registries get end-to-end signing.
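When the base publisher does sign with Notation, the pipeline-side check is straightforward. The sketch below assumes a BASE_IMAGE variable, a trust policy entry covering the base registry, and a Dockerfile that takes BASE_IMAGE as a build argument:

```sh
# Refuse to build on a base image that does not verify.
notation verify "${BASE_IMAGE}"

# Only then run the build that layers on top of it.
docker build --build-arg BASE_IMAGE="${BASE_IMAGE}" -t myregistry.azurecr.io/api:build .
```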
The other gap is tag-to-signature mapping on replicas. ACR geo-replication replicates images across regions, and signatures replicate with them. But if you use a different registry as a pull-through cache, or if you mirror images through a proxy, the signature artifact may not follow. Verify on the source registry, not the mirror, or replicate the signatures explicitly.
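If you must mirror, oras can carry the referrer graph along with the image; the mirror hostname here is a placeholder:

```sh
# -r/--recursive copies the image plus its referrers, signatures included.
oras cp -r \
  myregistry.azurecr.io/api:v1.2.3 \
  mirror.example.com/api:v1.2.3
```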
How Safeguard Helps
Safeguard inventories ACR registries, enumerates scope maps, tokens, and Azure AD role assignments, and surfaces the effective push blast radius per principal — the same analysis I used to do by hand in review engagements. It tracks which images in each registry have Notation signatures, which signatures verify against the configured trust policy, and where signing is missing in the build-to-deploy chain. For AKS clusters with Image Integrity enabled, Safeguard reconciles the cluster's trust policy against the signatures actually present on the registry, so drift between "what we sign" and "what we verify" becomes visible before a deploy fails. Teams adopting Notation get a rollout dashboard instead of a spreadsheet.