
Provenance Attestation Consumer Workflow

Generating provenance is half the story. Consuming it correctly, at the right points in the pipeline, is where the security value actually materialises.

Shadab Khan
Security Engineer
7 min read

Every conversation about SLSA starts with producers: who is generating provenance, which builder they are using, what level they are claiming. The producer side gets most of the attention because it is where the visible work happens: configuring reusable workflows, wiring up Sigstore, publishing attestations to registries. The consumer side is quieter and, for most security programs, more important. A producer that emits L3 provenance that no one verifies is the same as a producer that emits nothing.

This post describes a consumer-side workflow. It covers where to perform verification, which tools to use at each point, how to handle the cases where attestations are missing or regressed, and what policy decisions to make explicit rather than implicit. It draws on deploying consumer workflows at mid-size and large customers since 2023.

Where should verification happen?

The temptation is to verify "everywhere," which in practice means "nowhere reliably." Every verification point adds latency and a failure mode, and duplicated verification across CI, registry ingestion, and admission control creates inconsistency when one point upgrades its tooling and another does not.

The right model is a single primary verification point and lightweight secondary checks. The primary point is typically artifact ingestion into an internal registry or dependency proxy. When a release binary, container image, or package is pulled from an upstream source into your organisation, a gate runs full SLSA verification. If it passes, a Verification Summary Attestation (VSA) is generated and attached to the artifact in the internal mirror. Downstream consumers (CI builds, admission controllers) verify the VSA signature instead of re-running the full SLSA check.

This pattern has three properties that matter. Verification happens once per artifact, which keeps cost bounded. Policy changes take effect at one point rather than being distributed. And the VSA provides a defensible audit trail: "we verified this artifact on date X against policy Y, and here is the signed record."
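The ingest-once pattern can be sketched as follows. This is an illustrative outline, not Safeguard's implementation: the verifier ID, policy URI, and `run_full_verification` hook are hypothetical names, and the predicate layout follows the shape of the SLSA Verification Summary Attestation v1 (signing and attaching the statement are out of scope here).

```python
from datetime import datetime, timezone

# Hypothetical policy identifier recorded in every VSA this gate issues.
POLICY_URI = "https://policy.example.com/slsa-policy/v6"

def issue_vsa(artifact_digest: str, resource_uri: str, passed: bool) -> dict:
    """Build the in-toto statement for a VSA; signing is out of scope."""
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"digest": {"sha256": artifact_digest}}],
        "predicateType": "https://slsa.dev/verification_summary/v1",
        "predicate": {
            "verifier": {"id": "https://verifier.example.com"},  # hypothetical
            "timeVerified": datetime.now(timezone.utc).isoformat(),
            "resourceUri": resource_uri,
            "policy": {"uri": POLICY_URI},
            "verificationResult": "PASSED" if passed else "FAILED",
            "verifiedLevels": ["SLSA_BUILD_LEVEL_3"] if passed else [],
        },
    }

def ingest(artifact_digest: str, resource_uri: str, run_full_verification) -> dict:
    """Gate an artifact into the internal mirror: verify once, attach a VSA."""
    passed = run_full_verification(artifact_digest)
    if not passed:
        raise PermissionError(f"verification failed for {resource_uri}")
    return issue_vsa(artifact_digest, resource_uri, passed=True)
```

Downstream consumers then only need to check the signature and contents of this one statement rather than repeating the full SLSA verification.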

The secondary points add value when they check properties the primary point cannot. A Kubernetes admission controller can enforce that any image scheduled in production carries a valid VSA for the current policy version. A CI build can reject dependencies whose VSA is older than a threshold. These are policy decisions, not redundant verifications.
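The CI-side age check is a few lines. The 30-day threshold below is illustrative, not a recommendation from this post, and `timeVerified` is assumed to be the ISO 8601 timestamp from the VSA predicate.

```python
from datetime import datetime, timedelta, timezone

MAX_VSA_AGE = timedelta(days=30)  # illustrative threshold; a policy choice

def vsa_is_fresh(vsa_predicate: dict, now=None) -> bool:
    """Secondary CI check: accept only VSAs newer than the age threshold."""
    now = now or datetime.now(timezone.utc)
    verified_at = datetime.fromisoformat(vsa_predicate["timeVerified"])
    return now - verified_at <= MAX_VSA_AGE
```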

Which tools fit at each point?

At the primary ingestion point, the toolkit is cosign verify-attestation, slsa-verifier, and a policy engine. Cosign v2.2+ verifies the Sigstore bundle signature, the Fulcio certificate identity, and the Rekor transparency log entry. slsa-verifier v2.5+ verifies the SLSA Provenance v1 predicate structure, including the subject digest match and the source URI/ref match against policy. The policy engine (we use Open Policy Agent with Rego, though others work) decides whether the verified attestation meets organisational requirements.
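A gate typically shells out to both tools. The sketch below only assembles the invocations; the flag spellings match cosign v2.x and slsa-verifier v2.x as we have used them, but treat them as assumptions and confirm against `--help` on your pinned versions. All hostnames and identities are illustrative.

```python
def build_verify_commands(image: str, source_repo: str,
                          signer_identity: str, oidc_issuer: str) -> list:
    """Assemble the two verification invocations run by the ingestion gate."""
    cosign_cmd = [
        "cosign", "verify-attestation", image,
        "--type", "slsaprovenance1",              # SLSA Provenance v1 predicate
        "--certificate-identity", signer_identity,
        "--certificate-oidc-issuer", oidc_issuer,
    ]
    slsa_cmd = [
        "slsa-verifier", "verify-image", image,
        "--source-uri", source_repo,
    ]
    return [cosign_cmd, slsa_cmd]
```

The policy engine then consumes the verified predicates; the commands above only establish that the attestation is authentic and structurally valid.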

At Kubernetes admission, the Sigstore policy-controller (v0.10+) is the natural fit. It supports VSA verification natively through the ClusterImagePolicy CRD by specifying the VSA predicate type and the verifier identity. policy-controller's TrustRoot CRD lets you pin the trust material to your internal verifier's Fulcio and Rekor instances, which is important for air-gapped or regulated deployments.
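A minimal ClusterImagePolicy for the VSA check might look like the fragment below. Field names follow the policy-controller CRD as documented for the versions we have deployed; the hostnames, identity, and CUE check are illustrative, and the predicate-type handling should be confirmed against your pinned policy-controller release.

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-internal-vsa
spec:
  images:
    - glob: "registry.internal.example.com/**"
  authorities:
    - keyless:
        url: https://fulcio.internal.example.com   # pinned via TrustRoot in practice
        identities:
          - issuer: https://oidc.internal.example.com
            subject: verifier@internal.example.com  # the internal verifier's identity
      attestations:
        - name: vsa
          predicateType: https://slsa.dev/verification_summary/v1
          policy:
            type: cue
            data: |
              predicate: verificationResult: "PASSED"
```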

At CI, the right tool depends on what is being verified. For dependencies pulled from an internal registry, a simple cosign verify-attestation against the VSA predicate is sufficient. For dependencies still being pulled from upstream (which should be rare in a mature setup), full SLSA verification is needed.

How do you handle missing attestations?

Not every dependency ships with signed provenance. Large parts of the Python and JavaScript ecosystems still ship unsigned packages. A policy that requires provenance for every dependency will fail immediately.

The pragmatic approach is tiered policy. Tier 1 is dependencies with verified SLSA L3 provenance from a trusted builder; these pass without additional review. Tier 2 is dependencies with weaker provenance (L2, or L3 from builders outside the trust list); these pass but are tagged for periodic audit. Tier 3 is dependencies without provenance; these require an explicit exception, tagged with the reviewer and the expiry date.

The critical property is that Tier 3 is auditable and time-boxed. Tier 3 with "forever" exceptions is the same as having no policy. We typically set Tier 3 exceptions to 90 days and require renewal with a review at expiry.
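The tier assignment reduces to a short decision function. The trust list, the provenance record's shape, and the exception store below are all illustrative; the 90-day TTL is the value from this post.

```python
from datetime import date, timedelta

# Example trust list; membership is an organisational policy choice.
TRUSTED_BUILDERS = {"https://github.com/slsa-framework/slsa-github-generator"}
EXCEPTION_TTL = timedelta(days=90)   # Tier 3 exceptions are time-boxed

def classify(dep: str, provenance, exceptions: dict, today: date) -> str:
    """Assign the tier described above; fail closed when no tier applies."""
    if provenance and provenance["level"] >= 3 \
            and provenance["builder"] in TRUSTED_BUILDERS:
        return "tier1"               # verified L3 from a trusted builder
    if provenance:
        return "tier2"               # weaker provenance: tagged for audit
    exc = exceptions.get(dep)        # e.g. {"granted": date, "reviewer": str}
    if exc and today <= exc["granted"] + EXCEPTION_TTL:
        return "tier3"               # explicit, unexpired exception
    raise PermissionError(f"{dep}: no provenance and no live exception")
```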

The tier assignment itself is recorded as a VSA. A dependency that is Tier 2 gets a VSA from the internal verifier noting that fact. Downstream consumers see "VSA present, tier 2" and can decide whether to accept.

What about regressions?

A dependency that was Tier 1 last week and is Tier 3 this week is a regression. This happens more often than expected: a library switches CI providers, a builder claim drops from L3 to L2, a publisher moves from Trusted Publishing to API token uploads.

Regressions need dedicated handling because they are often the first signal of a compromise. A widely-used library that was signed with a known identity and now comes unsigned is a red flag. A widely-used library that was signed by actions/attest-build-provenance and is now signed by a novel identity is another red flag.

The consumer workflow should alert on regression separately from initial ingestion. The alert goes to a human who can review whether the regression is benign (the project switched to a new release pipeline) or suspicious (someone compromised the project's release credentials). Suspicious regressions trigger holds on the dependency's new versions until the alert is resolved.
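Both regression signals described above, a weakened tier and a changed signing identity, are easy to check mechanically once previous state is recorded. This sketch assumes the previous tier and signer are stored per dependency at each ingestion; the record layout is illustrative.

```python
TIER_RANK = {"tier1": 1, "tier2": 2, "tier3": 3}

def check_regression(dep: str, previous_tier: str, current_tier: str):
    """Return an alert record when trust weakened since the last ingestion."""
    if TIER_RANK[current_tier] > TIER_RANK[previous_tier]:
        return {
            "dependency": dep,
            "from": previous_tier,
            "to": current_tier,
            "action": "hold-new-versions",   # lifted by a human after review
        }
    return None  # same or improved tier: no alert

def signer_changed(previous_signer, current_signer) -> bool:
    """Flag a known signing identity replaced by a different one."""
    return previous_signer is not None and current_signer != previous_signer
```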

How do you version policy?

Policy versioning is the quiet issue that breaks verification workflows over time. A policy that required slsa-framework/slsa-github-generator@v1.10 becomes obsolete when v2.0 ships. A policy that required a specific Fulcio certificate OID becomes obsolete when Sigstore rotates its trust root.

The clean model is explicit policy versions encoded in the VSA. Each VSA records the policy version under which it was issued. When policy updates, old VSAs become "expired" and new verification must run against the updated policy. A migration window lets downstream consumers accept both old and new VSAs briefly, then old VSAs are rejected.

This is more infrastructure than many teams have, and we do not recommend it for small deployments. But for organisations managing thousands of artifacts with different risk profiles, explicit policy versioning is what lets the verification workflow evolve without breaking overnight.
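The acceptance logic at a verification point then reads the policy version out of the VSA and applies the migration window. Version numbers and window contents here are illustrative.

```python
CURRENT_POLICY_VERSION = 6   # illustrative version numbers
MIGRATION_WINDOW = {5}       # prior versions still accepted, briefly

def vsa_disposition(vsa_policy_version: int) -> str:
    """Decide how to treat a VSA based on the policy version it records."""
    if vsa_policy_version == CURRENT_POLICY_VERSION:
        return "accept"
    if vsa_policy_version in MIGRATION_WINDOW:
        return "accept-legacy"   # schedule re-verification against current policy
    return "reject"              # expired VSA: full verification must re-run
```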

What breaks in practice?

Three failure modes keep recurring. First, network partitions to Rekor cause verification to hang or fail. The fix is either an internal Rekor mirror or offline verification using Rekor bundles. Cosign v2.2+ supports the offline path, but most homegrown verifiers we see do not.

Second, TUF metadata goes stale. Sigstore's TUF root refreshes periodically, and a verifier that has cached metadata older than the refresh interval will start failing. The fix is a scheduled TUF refresh, which cosign handles automatically for online verification but which offline verifiers must implement.

Third, policy drift between verification points. The admission controller is running policy version 5, the CI verifier is running policy version 4, and the ingestion verifier is running policy version 6. The symptom is that an artifact passes one point and fails another. The fix is centralised policy distribution, typically via a Git repository with a CI job that deploys policy to every verification point atomically.

How Safeguard Helps

Safeguard is designed around the single-verification-point pattern. Every artifact that enters your internal registry through Safeguard is verified against your policy once, a VSA is generated and attached, and downstream consumers (CI, admission, developer tools) can trust the VSA instead of re-running SLSA verification. Our policy engine handles tiered dependency classification, regression detection, and time-boxed exceptions without you having to build that logic yourself. For air-gapped deployments we maintain offline TUF and Rekor mirrors so verification continues without internet access. The platform also tracks policy versions, so a policy update triggers re-verification of affected artifacts with a defined migration window.
