
GCP Artifact Analysis API for Vulnerability Triage

GCP's Artifact Analysis API is the most direct way to get scan results into your triage tooling. Here is how to use it without drowning your team.

Shadab Khan
Security Engineer
7 min read

Artifact Analysis is the vulnerability scanning service behind GCP's Artifact Registry and Container Registry. It runs continuously against every image you push, it emits results via a reasonable API, and its findings are actionable — which makes it the rare scanner I do not ignore by default. What the docs do not teach is how to consume the API in a way that supports real triage rather than a notification dashboard nobody reads.

I have built or rebuilt triage pipelines on top of Artifact Analysis for three different organizations in the last year. The pattern that works comes down to three things: narrow the ingestion path, add context the API does not provide, and aggressively suppress findings that do not need human eyes. Every triage pipeline I have seen fail does the opposite.

Why build against the Artifact Analysis API rather than Security Command Center?

Because Security Command Center aggregates across too many sources and does not give you the per-image granularity you need for container triage. SCC is fine as an executive dashboard. It is a terrible primary triage surface because it collapses findings into broader categories, the API is more complex, and the alert semantics drift over time.

The Artifact Analysis API gives you raw Occurrence resources, one per (image, CVE) pair, with the affected package, the fixed version, the severity, and the scan timestamp. That is the shape you need to build triage on top of. Everything else — assignment, workflow, suppression, closure — is logic you apply on top of those occurrences.
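To make the occurrence shape concrete, here is a minimal sketch of the two pieces your triage layer needs: the occurrence filter string the API accepts, and a reducer that pulls the triage-relevant fields out of one occurrence. The dict shape follows the v1 REST JSON representation of a vulnerability occurrence (`resourceUri`, `noteName`, `vulnerability.severity`, `vulnerability.fixAvailable`, `vulnerability.packageIssue`); in production you would feed this the responses from the Container Analysis client library rather than raw dicts.

```python
def vulnerability_filter(resource_url: str) -> str:
    """Build the occurrence filter for one image's vulnerability findings.

    This is the filter syntax the list-occurrences call accepts; note that
    the filter keyword is resourceUrl even though the occurrence field in
    the response is resourceUri.
    """
    return f'kind="VULNERABILITY" AND resourceUrl="{resource_url}"'


def summarize(occurrence: dict) -> dict:
    """Reduce a vulnerability occurrence to the fields triage needs."""
    vuln = occurrence["vulnerability"]
    pkg = vuln["packageIssue"][0]  # first affected package on the finding
    return {
        "image": occurrence["resourceUri"],
        # noteName ends in the CVE id, e.g. projects/goog-vulnz/notes/CVE-2024-1234
        "cve": occurrence["noteName"].rsplit("/", 1)[-1],
        "severity": vuln.get("severity", "SEVERITY_UNSPECIFIED"),
        "fix_available": vuln.get("fixAvailable", False),
        "fixed_version": pkg.get("fixedVersion", {}).get("fullName"),
    }
```

Everything the rest of the pipeline does — suppression, routing, enrichment — operates on this reduced record, not on the full occurrence resource.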

For any organization large enough to care about vulnerability triage, building against Artifact Analysis directly and letting SCC see the already-triaged findings via Cloud Asset Inventory is the sustainable pattern. Do not put SCC in the critical path of engineer-facing triage.

How do I filter Artifact Analysis findings down to the ones that matter?

Apply three filters in order: fixed upstream, severity threshold, and reachability. Each filter cuts the noise by roughly a third, and the combination takes a raw scan of 500 CVEs down to something an on-call engineer can actually work through.

Fixed upstream is the cheapest filter. If the affected package has no fix available in any version, the finding is an informational note — track it, but do not page anyone. The API exposes this through the fixAvailable field on the vulnerability details. For most images, about 30-40% of findings have no fix, and those should be auto-suppressed to a tracking state.

Severity threshold is the next cut. Critical and high findings with a fix go to triage; medium and low findings go to a batch review that happens weekly. This is unpopular with compliance teams who want every CVE in a queue, but it is the only way to keep the queue workable. The compromise is that medium and low findings still land in a searchable store — they are not deleted — they just do not generate tickets.

Reachability is the hardest filter and the one with the biggest impact. Artifact Analysis does not do reachability analysis; it tells you that a vulnerable package exists in the image. Reachability tells you whether the vulnerable function is actually called by the running workload. A vulnerability in a bundled test utility that ships with the base image is not the same as a vulnerability in the HTTP parsing library your app calls on every request. You need a separate tool to add this context; the API gives you the finding, and reachability closes the loop.
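The three filters above compose into a single routing decision. This is a sketch under my stated assumptions: `fix_available` and `severity` come from the occurrence itself, while `reachable` is the annotation your separate reachability tool adds — it is not a field Artifact Analysis provides. A finding with no reachability verdict yet is treated as reachable, because failing open is the safe default for a security queue.

```python
# Destinations for a finding after the three-filter gate.
TICKET, WEEKLY_BATCH, TRACK_ONLY = "ticket", "weekly_batch", "track_only"


def route(finding: dict) -> str:
    """Apply the three filters in order: fix available, severity, reachability."""
    # Filter 1: no fix upstream -> informational, auto-suppress to tracking.
    if not finding.get("fix_available"):
        return TRACK_ONLY
    # Filter 2: medium/low go to the weekly batch review, not the live queue.
    if finding.get("severity") not in ("CRITICAL", "HIGH"):
        return WEEKLY_BATCH
    # Filter 3: confirmed-unreachable findings are tracked, not ticketed.
    # Missing verdicts default to reachable (fail open).
    if finding.get("reachable") is False:
        return TRACK_ONLY
    return TICKET
```

The ordering matters for cost: fix-availability is a field read, severity is a field read, and reachability is the expensive lookup, so it runs last and only on findings that survived the cheap filters.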

What does a production-ready ingestion pipeline look like?

Pub/Sub notifications from Artifact Analysis, a dedicated processor that enriches each occurrence with image metadata and reachability context, and a ticketing system that creates work items only for findings that pass every filter. Everything else lands in a searchable archive.

Artifact Analysis publishes occurrence-level notifications to Pub/Sub when new vulnerabilities are found. Subscribe to that topic (container-analysis-occurrences-v1 in your project), not to a periodic scan API call. The push notification model means your triage latency is seconds, not hours, and you do not have to deal with the "did I already process this occurrence" idempotency logic of polling.
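The subscriber logic is small enough to sketch. The notification payload carries the occurrence name, not the full occurrence, so the handler fetches the resource and acks only on success, letting Pub/Sub redeliver transient failures. `fetch_occurrence` and `enqueue` are hypothetical stand-ins for a client-library read and your enrichment queue; the payload-parsing assumption (a JSON body with a `name` field) should be verified against the notification format in your project.

```python
import json


def handle_message(data: bytes, fetch_occurrence, enqueue) -> bool:
    """Process one Pub/Sub notification; return True to ack, False to nack."""
    try:
        # Assumed payload shape: {"name": "projects/<p>/occurrences/<id>", ...}
        name = json.loads(data)["name"]
    except (ValueError, KeyError):
        # Malformed message: ack it, or it will be redelivered forever.
        return True
    occurrence = fetch_occurrence(name)
    if occurrence is None:
        # Transient fetch failure: nack so Pub/Sub redelivers.
        return False
    enqueue(occurrence)
    return True
```

Wire this into a pull subscriber's callback and the ack/nack boolean maps directly onto `message.ack()` / `message.nack()`.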

The enrichment processor does three things. First, it looks up the image's deployment context — is this image currently deployed to production, staging, or unused? An unused image's vulnerabilities are lower priority. Second, it applies your reachability data, annotating each occurrence with whether the vulnerable code path is reachable from any known entry point. Third, it applies organizational context: the team that owns the image, the service criticality, and any relevant business-impact tags.

Only after enrichment does a finding become a ticket. The ticket includes the CVE, the package, the affected service, the reachability verdict, the fix version, and a link to the promotion PR that would upgrade the package. Without that enrichment, the triage queue is noise; with it, the queue is a prioritized list of work.
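The enrichment step is a join, nothing more. A minimal sketch, assuming you maintain three lookup tables of your own (deployment state by image, reachability verdicts by image and CVE, ownership by image) — none of these come from the Artifact Analysis API, and all three names here are hypothetical:

```python
def enrich(finding: dict, deployments: dict, reachability: dict, owners: dict) -> dict:
    """Join a summarized finding with deployment, reachability, and ownership context."""
    image = finding["image"]
    return {
        **finding,
        # Images not in the deployment inventory are treated as unused.
        "environment": deployments.get(image, "unused"),
        # None means "no verdict yet"; downstream routing fails open on None.
        "reachable": reachability.get((image, finding["cve"])),
        "owner_team": owners.get(image, "unowned"),
    }
```

The enriched record is what becomes a ticket: it already carries the owner for assignment, the environment for prioritization, and the reachability verdict for the suppression decision.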

How do I handle the Container Analysis note vs. occurrence model?

Notes describe a vulnerability in the abstract; occurrences describe an instance of that vulnerability in a specific image. You almost always care about occurrences. The note resource matters for a few specific workflows — tracking when a CVE is first published, joining against external CVE feeds — but for triage, query occurrences directly and filter to the ones in images you own.

The gotcha is that note permissions are inherited from the note provider, and GCP's default note providers (Google itself, for the OS-level scan) require containeranalysis.notes.get at the project level. Most IAM policies I review do not grant this to the triage service account, and the result is that occurrences load but the severity metadata is missing. Fix the permission, or shadow-copy the note severity into your triage store during ingestion so the finding is complete even if note access is later revoked.
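The shadow-copy fallback looks like this in miniature: prefer the severity on the occurrence itself, and fall back to a locally cached copy of the note's severity captured at ingestion time. `note_cache` is a hypothetical local store keyed by note name; the occurrence dict shape follows the v1 REST representation.

```python
def effective_severity(occurrence: dict, note_cache: dict) -> str:
    """Severity from the occurrence, falling back to the shadow-copied note severity."""
    sev = occurrence.get("vulnerability", {}).get("severity")
    if sev and sev != "SEVERITY_UNSPECIFIED":
        return sev
    # Note access missing or revoked: use the copy taken at ingestion.
    return note_cache.get(occurrence["noteName"], "SEVERITY_UNSPECIFIED")
```

Populate `note_cache` during ingestion, while the service account still has `containeranalysis.notes.get`, and the triage record stays complete even if that grant disappears later.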

What about Grafeas, the open source foundation underneath?

Grafeas is the open source project that defines the note/occurrence model. Artifact Analysis is GCP's implementation. Whatever you build against the GCP API maps reasonably well to any Grafeas server, which matters if you also run the same triage logic on images stored outside GCP. The API shapes are not identical, but the concepts port.

Do not build directly against Grafeas unless you have a Grafeas server you control. For GCP-native workflows, the Artifact Analysis API is the right surface; it has higher rate limits, managed availability, and better integration with Cloud IAM.

What are the rate limits and quota issues?

Artifact Analysis has per-project quotas on occurrence API calls, and a heavy triage pipeline will hit them. The limits are documented and raiseable, but the right answer is usually not "raise the quota" — it is "call the API less." Subscribe to Pub/Sub notifications for change-based updates, cache occurrences locally with a TTL, and batch reads when you need to reconcile state.
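The local cache is the easiest of those three to get wrong, so here is a minimal TTL cache sketch for occurrence reads. The injectable clock is there for testability; in production you would leave the default.

```python
import time


class OccurrenceCache:
    """In-memory TTL cache for occurrence reads, keyed by occurrence name."""

    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # name -> (expires_at, occurrence)

    def get(self, name: str):
        """Return the cached occurrence, or None if missing or expired."""
        entry = self._store.get(name)
        if entry and entry[0] > self.clock():
            return entry[1]
        self._store.pop(name, None)  # evict expired entries lazily
        return None

    def put(self, name: str, occurrence: dict) -> None:
        self._store[name] = (self.clock() + self.ttl, occurrence)
```

Wrap the grafeas read in a `get`-then-fetch-then-`put` helper and the reconciliation job stops re-reading unchanged occurrences on every pass, which is where most of the quota burn comes from.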

The last optimization is to store occurrence snapshots in BigQuery. Cloud Asset Inventory can export Artifact Analysis occurrences on a schedule, which gives you a queryable history without ever calling the API. Use BigQuery for reporting and analytics; use the API for real-time triage only.

How Safeguard.sh Helps

Safeguard.sh ingests Artifact Analysis occurrences and enriches them with reachability analysis on the running workloads behind each image, so the triage queue focuses on vulnerabilities that are actually called; reachability reliably cuts 60 to 80 percent of GCP scan noise before it reaches the on-call engineer. Griffin AI generates triage policies, ingestion filters, and Pub/Sub subscriber configurations from your existing Artifact Analysis footprint, and flags drift when an engineer edits them out of band. Safeguard's SBOM module joins every occurrence to the full dependency graph up to 100 levels deep, its TPRM module tracks the vendor responsible for every affected package, and container self-healing rewrites GKE image references automatically when a fix becomes available and the patched image has been republished to Artifact Registry.
