Google Cloud's Artifact Registry scanning service produces findings. The harder question — the one that separates organizations with a real vulnerability management program from those with a dashboard — is what happens after a finding lands. Artifact Analysis has matured substantially through 2025 and into 2026, broadening package ecosystem coverage, integrating better with build provenance, and reducing the lag between upstream disclosure and finding emission. None of that matters if findings accumulate in a console nobody reads, get triaged by exception, or wake on-call engineers for issues that will not be patched for months. This guide is about the workflow that sits between the scanner output and the remediated artifact.
What does Artifact Analysis actually scan and when?
Artifact Analysis automatically scans container images pushed to Artifact Registry when the API is enabled in the project. It scans operating-system packages from the major distributions, language-ecosystem packages for the common runtimes (npm, Maven, PyPI, Go modules, RubyGems, NuGet, Cargo), and increasingly in 2026 deployments, build-tool artifacts and base image manifests. Scanning runs at push time and then continuously as new CVE data becomes available; an image that scanned clean today can have new findings tomorrow without any change to the image. The scanning happens regardless of whether the image is ever pulled or deployed. This is important because it means findings exist for images that are old, unused, or candidate-only — and treating every finding as equally actionable is the fastest way to drown a team.
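Automatic scanning is switched on per project by enabling the Container Scanning API, and the describe command shows whether results are attached to a given image. A minimal sketch, with placeholder project and image names:

# Enable automatic vulnerability scanning for images pushed to Artifact Registry
# (project ID is a placeholder; on-demand scanning is a separate API)
gcloud services enable containerscanning.googleapis.com --project=myproject

# Check whether scan results are attached to a specific pushed image
gcloud artifacts docker images describe \
    "us-central1-docker.pkg.dev/myproject/payments/api@sha256:..." \
    --show-package-vulnerability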
How should findings be prioritized in a real environment?
The prioritization signal that matters most is whether the affected image is in production. An image with critical findings that has not been pulled in six months is a candidate for deletion, not patching. An image with the same findings that is currently running customer traffic is an incident. Artifact Analysis findings on their own do not carry this context; you have to bring it in. The pattern that works is to enrich each finding with three pieces of context at ingest time: the current deployment status of the image (in production, in pre-production, only in registry), the time since the affected component was last updated (a long-stale dependency is a different story than yesterday's release), and the EPSS score plus KEV status of the vulnerability (real-world exploitation evidence trumps CVSS for prioritization purposes). With that enrichment, the same scanner output produces a focused queue rather than an overwhelming one.
# List vulnerabilities for a specific image with severity filter
gcloud artifacts vulnerabilities list \
    "us-central1-docker.pkg.dev/myproject/payments/api@sha256:..." \
    --filter="vulnerability.effectiveSeverity=CRITICAL" \
    --format="table(name,vulnerability.shortDescription,vulnerability.cvssScore)"
How do you handle base image drift?
Most CVEs in container images live in the base image, not the application code. The remediation is to rebuild on a newer base, not to patch in place. This sounds simple but is operationally hard because every team chooses its own base, base images evolve at different cadences, and the work of rebuilding falls on application teams who did not introduce the vulnerability. The pattern that works at scale is a base image promotion pipeline: a small platform team maintains a handful of approved base images, publishes new versions on a regular cadence (weekly for security-relevant updates), and provides automated PRs to application repositories when their base reference falls behind. The vulnerability scanning workflow then becomes simpler: most application-team findings are resolved by accepting an automated base bump, and only application-specific dependency issues remain in the team's queue.
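A sketch of the automated-PR half of that pipeline, under heavy assumptions: the approved base lives at a hypothetical Artifact Registry path, the application Dockerfile pins it by digest, and the GitHub CLI is available for opening the PR:

# Sketch: bump an application's pinned base image digest to the platform team's
# latest approved base and open a PR. All names and paths are hypothetical.
BASE="us-central1-docker.pkg.dev/platform/bases/distroless-java:approved"

# Resolve the digest currently published under the approved tag
NEW_DIGEST=$(gcloud artifacts docker images describe "$BASE" \
  --format='value(image_summary.digest)')

# Rewrite the digest pin on the Dockerfile's FROM line
sed -i -E "s|(FROM ${BASE%:*}@)sha256:[0-9a-f]+|\1${NEW_DIGEST}|" Dockerfile

# Open the automated PR; the application's own CI must pass before it merges
git checkout -b "base-bump-${NEW_DIGEST:7:12}" \
  && git commit -am "chore: bump approved base to ${NEW_DIGEST}" \
  && git push -u origin HEAD \
  && gh pr create --fill --label base-bump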
What goes wrong with the auto-rebuild approach?
Three failure modes recur. The first is base bump regressions: a new base image fixes a CVE but breaks an application's runtime in a non-obvious way, and the auto-PR merges without sufficient testing. The mitigation is to require the application's own CI suite, including integration tests, to pass before the auto-PR auto-merges, and to roll out base bumps to staging environments for at least a defined soak period before they reach production. The second is base-image fragmentation: teams diverge from the approved base over time because of legitimate needs (a special runtime, a specific glibc version, a hardware-specific driver), and the platform team's base coverage shrinks. The mitigation is to formalize an "approved deviation" process and publish hardened variants for the common deviations rather than letting them proliferate. The third is the long tail of "we cannot move to the new base because legacy" — the mitigation is calendar-based: any image still on a base that is two major versions behind is policy-blocked from production after a defined sunset date.
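The calendar-based cutoff only bites if something enforces it before the deployment gate. One lightweight option is a CI guard against a platform-published deny list; this sketch assumes a hypothetical sunset-bases.txt file and a single-stage Dockerfile:

# Sketch: fail the build if the Dockerfile still references a sunset base.
# sunset-bases.txt is a hypothetical list of base references past their cutoff.
BASE_REF=$(awk '/^FROM /{print $2; exit}' Dockerfile)

if grep -qxF "$BASE_REF" sunset-bases.txt; then
  echo "ERROR: ${BASE_REF} is past its sunset date; rebuild on a current approved base." >&2
  exit 1
fi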
How does Artifact Analysis integrate with Binary Authorization?
The integration that matters is the vulnerability scan attestation. After a scan completes, an attestation can be emitted asserting "image X passed the scan with no findings above severity S as of timestamp T." Binary Authorization can then require this attestation for production deployments. The right configuration is: production policy requires a fresh scan attestation (scan completed within the last seven days), no critical findings, and no high findings left open past the defined SLA. Pre-production policy is looser: a scan within thirty days, no critical findings, but high findings generate warnings rather than blocks. This forces the scanning workflow to actually drive deployment behavior — an image that drifts past the freshness window stops being deployable, which produces the urgency that pure-dashboard scanning never does.
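One way to wire the attestation side, assuming a Cloud KMS-backed attestor; the attestor name, key names, and image URL are placeholders, and the freshness and high-severity SLA checks from the policy above are left out to keep the sketch short:

# Sketch: emit the scan attestation only when the image has no critical findings.
IMAGE="us-central1-docker.pkg.dev/myproject/payments/api@sha256:..."

CRITICALS=$(gcloud artifacts vulnerabilities list "$IMAGE" \
  --filter="vulnerability.effectiveSeverity=CRITICAL" \
  --format="value(name)" | wc -l)

if [ "$CRITICALS" -eq 0 ]; then
  # Sign and create the attestation the production Binary Authorization policy requires
  gcloud container binauthz attestations sign-and-create \
    --artifact-url="$IMAGE" \
    --attestor="vuln-scan-passed" \
    --attestor-project="myproject" \
    --keyversion-project="myproject" \
    --keyversion-location="us-central1" \
    --keyversion-keyring="binauthz" \
    --keyversion-key="vuln-scan-signer" \
    --keyversion="1"
fi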
What about ecosystem coverage and false negatives?
Artifact Analysis covers the major package ecosystems, but coverage is not uniform across them. Findings emerge faster for popular packages, faster for OS packages than for language packages, and faster for ecosystems with strong CVE-to-package mapping (Debian, RHEL) than for those without (some language-specific package managers where multiple variants of a name can refer to the same package). The defender's mental model should be: Artifact Analysis is one input, not the source of truth. Supplement it with SBOM-driven scanning that pulls from additional databases (OSV, GitHub Security Advisories, language-specific feeds), runtime monitoring that catches what scanning missed, and a process for rapidly reanalyzing your inventory when a new CVE hits and the vendor's data has not caught up yet. A high-profile CVE that affects your stack will be discussed on social media hours or days before Artifact Analysis flags it; your workflow should be able to act on the public disclosure independently.
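A supplemental SBOM-driven pass is easy to bolt on with open-source tooling; this sketch assumes syft and osv-scanner are installed and reuses the placeholder image path from earlier:

# Sketch: cross-check the registry scanner with an SBOM-driven scan against OSV.
IMAGE="us-central1-docker.pkg.dev/myproject/payments/api@sha256:..."

# Generate an SPDX SBOM from the image contents
syft "$IMAGE" -o spdx-json > sbom.spdx.json

# Scan the SBOM against the OSV database (includes GitHub Security Advisories)
osv-scanner --sbom=sbom.spdx.json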
What does a healthy remediation cadence look like?
The metrics that matter are not "open critical findings" — that number reflects exposure, not response. The metrics that matter are: median time from CVE disclosure to first finding in your registry, median time from first finding to deployed remediation in production, percentage of production deployments running images scanned within the freshness window, and percentage of base images on the current supported version. A program with median CVE-to-finding under 24 hours, finding-to-production-remediation under seven days for critical, more than ninety-five percent scan freshness, and more than ninety percent base-image currency is operating well. A program with any of those numbers wildly off is failing somewhere — usually in workflow rather than scanning — and that is where attention should go.
How Safeguard Helps
Safeguard ingests Artifact Analysis findings alongside SBOM data from your build pipelines, deployment state from GKE and Cloud Run, and threat intelligence from OSV, KEV, and EPSS — producing a prioritized queue that reflects actual production exposure rather than raw finding count. Policy gates block deployments of images that have not been scanned within your freshness window, that carry unmitigated critical findings, or that are based on a sunset base image. Griffin AI translates a fresh CVE disclosure into an answer about your specific inventory in seconds: which images contain the affected component, which are in production, what the upgrade path is, and which teams need to act. Continuous monitoring keeps your remediation metrics visible to engineering leadership so the program is steered by data rather than by whichever vulnerability happened to make the news this week.