A container registry is the most over-trusted piece of infrastructure in most Kubernetes environments. It holds the executable artifacts every pod in production runs. It is a signing oracle for anyone with push credentials to a trusted tag. It is a software distribution system that your CI pipeline pulls from with little or no verification. And it sits at exactly the chokepoint where a typosquatted base image — ubunto:20.04, alpine:3.154, node:-16 — can quietly slip into a build and come out the other side signed by your own pipeline. The 2021 CrowdStrike Global Threat Report flagged a 56% year-over-year increase in attacks targeting cloud registries, and the first quarter of 2022 has already produced three notable incidents: a widely copy-pasted malicious jwilder/nginx-proxy fork, the codecov bash uploader incident's continuing fallout, and a wave of Docker Hub verified-publisher impersonations. Registries need a hardening baseline, and most teams do not have one. This post lays out the one that actually holds up in 2022.
Why is the registry a more dangerous attack surface than the cluster?
The registry is a more dangerous attack surface than the cluster because a compromise at the registry propagates everywhere the registry is trusted, whereas a compromise inside a single cluster is usually contained to that cluster. If an attacker gets push access to your acme/api:latest tag, every environment that pulls that tag on the next deploy is compromised simultaneously — production, staging, developer laptops running docker compose, the air-gapped customer site that mirrored your registry last night. That is an impact multiplier that no Kubernetes primitive defends against on its own.
The 2019 Docker Hub breach, which exposed usernames, hashed passwords, and repository access tokens for roughly 190,000 accounts — many of them with push rights to active namespaces — is the archetype of this attack. Push credentials are frequently long-lived, often stored in developer laptops' ~/.docker/config.json, and rarely rotated. A registry that treats every push as equally trustworthy gives every compromised credential a production deployment vector.
What authentication controls should the registry enforce?
The registry should enforce short-lived, federated credentials for writes and immutable tags for anything consumed downstream. Static bearer tokens with push scope are the primary failure mode and should be eliminated. For AWS ECR, this means IAM roles for the CI runner with push permissions (ecr:InitiateLayerUpload, ecr:UploadLayerPart, ecr:CompleteLayerUpload, ecr:PutImage) scoped to specific repository ARNs and session durations under one hour. For Harbor, it means robot accounts with automatic expiration. For Docker Hub, it means scoped access tokens with the narrowest available permission level rather than account passwords.
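As a sketch of the ECR variant, the following Terraform wires a CI role up to a pre-existing OIDC provider for the CI system and scopes push rights to a single repository. It assumes the aws_ecr_repository.api resource defined later in this post; the variable name and role name are illustrative, not a prescribed convention.

```hcl
# ARN of the OIDC identity provider for the CI system (e.g. GitHub Actions);
# assumed to exist already.
variable "ci_oidc_provider_arn" {
  type = string
}

# Trust policy: only federated CI identities may assume the role.
data "aws_iam_policy_document" "ci_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [var.ci_oidc_provider_arn]
    }
  }
}

data "aws_iam_policy_document" "ci_push" {
  # The actions an image push actually needs, scoped to one repository ARN.
  statement {
    actions = [
      "ecr:BatchCheckLayerAvailability",
      "ecr:InitiateLayerUpload",
      "ecr:UploadLayerPart",
      "ecr:CompleteLayerUpload",
      "ecr:PutImage",
    ]
    resources = [aws_ecr_repository.api.arn]
  }
  # GetAuthorizationToken is account-level and cannot be repository-scoped.
  statement {
    actions   = ["ecr:GetAuthorizationToken"]
    resources = ["*"]
  }
}

resource "aws_iam_role" "ci_push" {
  name                 = "ci-ecr-push"
  max_session_duration = 3600 # cap assumed-role sessions at one hour
  assume_role_policy   = data.aws_iam_policy_document.ci_trust.json
}

resource "aws_iam_role_policy" "ci_push" {
  name   = "ecr-push"
  role   = aws_iam_role.ci_push.id
  policy = data.aws_iam_policy_document.ci_push.json
}
```

In practice you would also pin the trust policy's subject claim to a specific repository and branch, so a token minted for one CI workflow cannot push on behalf of another.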
Immutable tags are the second critical control. A tag that can be overwritten is a foothold for a supply chain rollback attack: the attacker pushes a new v1.4.2 that shadows the legitimate one, and everyone who redeploys picks up the malicious version. ECR has a per-repository immutability flag, Harbor has project-level tag immutability rules, and Google Artifact Registry offers an equivalent repository setting. Turn it on for every production-facing repository and rely on digest pinning for the places where you genuinely need to move tags.
```hcl
# Terraform for an immutable ECR repository with scan-on-push
resource "aws_ecr_repository" "api" {
  name                 = "acme/api"
  image_tag_mutability = "IMMUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }

  encryption_configuration {
    encryption_type = "KMS"
    kms_key         = aws_kms_key.ecr.arn
  }
}
```
How should vulnerability scanning be integrated?
Vulnerability scanning should be integrated at three points: push time (for fast feedback), deploy time (for policy enforcement), and continuously on stored images (because new CVEs land after the build). Scan-at-push alone is insufficient because the average vulnerability is disclosed 43 days after the image containing the vulnerable code was built, per Sysdig's 2022 cloud-native security report.
The scanners worth evaluating in 2022 are Trivy (Aqua), Grype (Anchore), and Clair (Red Hat/Quay), all of which are CVE-focused and run against image layers directly. Snyk Container and Prisma Cloud add reachability heuristics and license analysis on top. The pattern that works is Trivy or Grype in CI for speed, a second scanner at the registry or admission layer for defense in depth, and a nightly rescan of every in-production image to surface newly disclosed CVEs against artifacts that are not going to be rebuilt anytime soon.
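The continuous-rescan leg is the one teams most often skip, and on ECR it can be switched on registry-wide. A minimal sketch using ECR's enhanced scanning (backed by Amazon Inspector, which rescans stored images as new CVEs are published); the wildcard filter is illustrative, and you may want to narrow it to production repositories:

```hcl
# Registry-wide continuous scanning for ECR via Amazon Inspector.
# The wildcard filter is illustrative; narrow it to production repositories
# if scanning every repository is too noisy or too costly.
resource "aws_ecr_registry_scanning_configuration" "this" {
  scan_type = "ENHANCED"

  rule {
    scan_frequency = "CONTINUOUS_SCAN"
    repository_filter {
      filter      = "*"
      filter_type = "WILDCARD"
    }
  }
}
```

This replaces the nightly rescan job for ECR-hosted images; the nightly pattern is still needed for registries without a native continuous-scan feature.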
What about base image selection and supply chain upstream?
Base image selection is the single biggest lever on your registry's overall risk profile. A 2022 Snyk study of the top 100 Docker Hub base images found that the median image carried 76 known vulnerabilities, and the 95th percentile carried over 500. The worst offenders were general-purpose distributions — node:latest, python:latest — which pull in an entire Debian userland for a single interpreter.
The 2022 playbook is to default to minimal bases: Google's distroless images, Chainguard's Wolfi-based images, or alpine for projects that have made peace with musl libc. A node:16 image weighs roughly 900MB and ships hundreds of CVEs at any given moment; gcr.io/distroless/nodejs:16 weighs about 150MB and carries none of the userland tools that make up most of that CVE surface. The tradeoff is debuggability — no shell, no package manager at runtime — which a well-instrumented observability stack offsets.
The upstream-pull problem is equally important. Every FROM ubuntu:20.04 is a network pull from Docker Hub by default. A compromised Docker Hub, or a compromised mirror along the path, is a build-time code execution vector. Registries like Harbor, Artifactory, and Sonatype Nexus support pull-through caches that let you pin a specific upstream digest and mirror it into your own registry, giving you both an integrity point and a rebuild path for air-gapped environments.
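ECR offers the same pattern through pull-through cache rules, with the caveat that as of early 2022 the supported upstreams are ECR Public and Quay, so Docker Hub official images are reached via their ECR Public mirrors. A sketch:

```hcl
# Mirror ECR Public (which hosts the Docker official images under
# docker/library) into the private registry. The first pull populates the
# cache; subsequent pulls are served locally.
resource "aws_ecr_pull_through_cache_rule" "ecr_public" {
  ecr_repository_prefix = "ecr-public"
  upstream_registry_url = "public.ecr.aws"
}
```

Builds then reference something like <account-id>.dkr.ecr.<region>.amazonaws.com/ecr-public/docker/library/ubuntu:20.04, and pinning that reference to a digest in the Dockerfile gives you the integrity point: the cached copy survives upstream tampering and remains available for air-gapped rebuilds.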
How does image signing fit into registry hardening?
Image signing fits into registry hardening as the control that makes the registry's contents verifiable independently of the registry itself. Without signing, a compromised registry can silently serve substitute content for any tag-based pull; digests are content-addressed and cannot be faked, but they only protect the minority of consumers who actually pin them. With signing, a verifier downstream — a Kubernetes admission controller running Sigstore's policy-controller, or a Notary v2 (Notation) verifier — can refuse an image whose signature does not match the expected signer identity, regardless of what the registry claims.
Cosign's keyless signing against GitHub Actions OIDC is the path of least resistance for projects building on GitHub. For enterprise-internal builds, Harbor 2.5 introduced native Cosign signature storage, letting the registry host both images and their signatures as first-class objects. ECR supports OCI artifacts, so Cosign signatures can live alongside the images they cover, with first-class Notary v2 (Notation) integration still maturing. The important implementation detail is where verification happens: a signed image that is never verified is operational overhead without security benefit.
How Safeguard Helps
Safeguard treats every container image as a versioned component with its own SBOM, signature status, provenance, and vulnerability trajectory. The SBOM ingest pipeline pulls Syft or Trivy output for every image in every tracked registry and reconciles it against the live CVE feed, so you see exactly which images are carrying newly disclosed vulnerabilities without running a fresh scan. The policy-gate engine wires into your admission controller and CI, refusing deployments of images that are unsigned, signed by an unexpected identity, carry critical CVEs with known-exploited status, or include base images outside your approved list. Griffin AI answers natural-language questions like "which of our production images still use node:latest as a base" directly against the aggregated SBOM store, which is the fastest way to turn a registry audit from a weekend project into an afternoon.