
Cloudflare Workers Build Attestations: A Defender's Field Guide

Workers Builds emits provenance attestations for the code it deploys. We trace how to verify them, gate on them, and integrate them into a multi-cloud supply chain program.

Michael
Security Engineer
7 min read

Cloudflare Workers occupies a strange place in most supply chain programs. The platform is operationally critical — Workers handle the edge for a large fraction of the public internet's traffic — but the build artifacts are tiny, the deploy cadence is fast, and the typical Worker is closer to a single function than a traditional application. The result is that Workers often live outside the SBOM and provenance discipline that covers the rest of an organization's deployable code. Worker deploys produce a hash and a deployment ID, and the question "what code is actually running on this script ID right now" is harder to answer than "what container image is currently running in this Kubernetes pod."

Cloudflare's expansion of build attestations through 2025 and into 2026 changes the available answer. The work of turning that available answer into a defended one falls on the team that consumes the platform.

What does a Workers build attestation actually claim?

A Workers build attestation, emitted by Cloudflare's hosted build pipeline when a Worker is built from a connected source repository, is a signed statement that links a deployed Worker version to the source commit, the build configuration, and the build environment that produced it. It follows the in-toto attestation format, which means it has a predicate type (build provenance), a subject (the artifact, identified by a hash), and a body that describes how the subject was produced. Verifiers that already handle SLSA-style attestations from other build systems can consume Workers attestations with minimal changes. The valuable claim is not "this Worker was built by Cloudflare" — that is true of every hosted Worker. The valuable claim is "this Worker was built from commit ABC on branch X of repository Y, with these specific build parameters, in this specific build environment." That binding is what lets defenders reason about provenance.
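The in-toto shape described above can be sketched as a small JSON document. This is an illustrative sketch only: the field values, repository URL, and builder ID below are hypothetical stand-ins, not real Cloudflare output, and the predicate layout assumes a SLSA v1-style provenance predicate.

```python
import hashlib
import json

# Hypothetical Worker bundle; the subject binds the statement to this exact
# artifact by digest, and the predicate carries the provenance claims.
artifact = b"export default { fetch() { return new Response('ok') } }"

statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [{
        "name": "worker-script",
        "digest": {"sha256": hashlib.sha256(artifact).hexdigest()},
    }],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            "externalParameters": {
                # Illustrative values -- the binding to repo, ref, and commit
                # is the claim that makes the attestation useful.
                "repository": "https://github.com/example/edge-worker",
                "ref": "refs/heads/main",
                "commit": "0" * 40,  # placeholder commit SHA
            }
        },
        "runDetails": {
            "builder": {"id": "https://workers.cloudflare.com/builds"}
        },
    },
}

print(json.dumps(statement, indent=2))
```

The subject digest, not the name, is what a verifier matches against the deployed artifact; everything else in the statement is only as trustworthy as the signature wrapped around it.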

How do you actually verify an attestation?

Verification involves three checks. First, the attestation's signature must verify against the public key or trust anchor Cloudflare publishes for the attestor identity. Second, the subject of the attestation must match the artifact you are about to deploy or are inspecting — typically this means the artifact hash. Third, the predicate body must match the expected provenance — the right source repository, the right branch, optionally the right build configuration. Each check has failure modes. The signature check fails if you have not refreshed your trust anchor recently or if Cloudflare has rotated the signing identity. The subject check fails if the deployed artifact has been modified post-build (rare but worth catching). The predicate check fails if someone has deployed from an unexpected branch or repository — the most common real finding, usually a developer who has built from a fork or a feature branch and forgotten to clean up.

# Inspect a Worker deployment and its attestation
curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT/workers/scripts/$SCRIPT/deployments" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  | jq '.result[] | {id, source, metadata, attestations}'
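The three checks can be sketched as a single pure function over a parsed attestation. This is a minimal sketch under stated assumptions: the signature check is stubbed (a real implementation verifies the envelope against Cloudflare's published trust anchor before anything else), and the predicate field paths assume the SLSA v1-style layout used above.

```python
import hashlib

def verify_attestation(statement, artifact_bytes, expected_repo, expected_ref,
                       signature_ok):
    """Return (ok, reason). `signature_ok` is a stand-in for real signature
    verification against the publisher's trust anchor."""
    # Check 1: signature. Stubbed here -- failure modes include a stale
    # trust anchor on the verifier side or a rotated signing identity.
    if not signature_ok:
        return False, "signature did not verify against trust anchor"
    # Check 2: the subject digest must match the artifact being inspected.
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    subjects = {s["digest"]["sha256"] for s in statement["subject"]}
    if digest not in subjects:
        return False, "subject digest does not match deployed artifact"
    # Check 3: the predicate must name the expected source repo and ref --
    # the check that catches deploys from forks or forgotten branches.
    params = statement["predicate"]["buildDefinition"]["externalParameters"]
    if params["repository"] != expected_repo:
        return False, f"unexpected repository {params['repository']}"
    if params["ref"] != expected_ref:
        return False, f"unexpected ref {params['ref']}"
    return True, "ok"

# Hypothetical example: a statement built in-line for illustration.
art = b"worker bundle bytes"
stmt = {
    "subject": [{"digest": {"sha256": hashlib.sha256(art).hexdigest()}}],
    "predicate": {"buildDefinition": {"externalParameters": {
        "repository": "https://github.com/example/edge-worker",
        "ref": "refs/heads/main",
    }}},
}
ok, reason = verify_attestation(
    stmt, art, "https://github.com/example/edge-worker",
    "refs/heads/main", signature_ok=True)
```

Ordering matters: the signature check must come first, because subject and predicate contents mean nothing until the envelope they arrive in is authenticated.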

Where should verification actually happen?

Three plausible places. At deploy time, in the pipeline that pushes the Worker, before the deploy is finalized — this catches misconfigurations during the rollout. At observation time, in a continuous compliance check that periodically reverifies every deployed Worker against its attestation — this catches drift and tampering after the fact. At promotion time, when a deployment moves between Cloudflare environments (preview to production, staging to live route) — this catches the case where a verified preview is promoted to production with a different identity. The strongest posture is all three. The minimum-viable posture is at least one — and the most common rollout mistake is to verify at deploy time only and then never reverify, which means an attacker who modifies a Worker through the API after deploy goes undetected by the attestation system.

How does this interact with secrets and bindings?

A Worker's attestation covers the code, not the runtime configuration. Bindings — KV namespaces, R2 buckets, Durable Object classes, service bindings to other Workers — are configured separately and can be changed without changing the code. From a supply chain perspective, this means an attacker with API access can leave the code unchanged (preserving attestation validity) while repointing bindings to attacker-controlled resources. The defense is to treat bindings as security-relevant configuration: store them in source-controlled wrangler.toml or equivalent, deploy them through the same pipeline that emits attestations, and audit the deployed binding set against the source on every reverification pass. A Worker whose bindings have drifted from source is as suspicious as one whose code has drifted, even if the attestation still verifies the code half.
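The binding audit reduces to a set comparison between the source-declared bindings and the deployed ones. A minimal sketch, assuming bindings are parsed into dicts with `type` and `name` keys (the field names here are illustrative, not the exact wrangler.toml or API schema):

```python
def binding_drift(source_bindings, deployed_bindings):
    """Compare the binding set declared in source (e.g. wrangler.toml)
    against what the API reports as deployed. Keys bindings by
    (type, name); any difference in the remaining fields counts as drift."""
    src = {(b["type"], b["name"]): b for b in source_bindings}
    dep = {(b["type"], b["name"]): b for b in deployed_bindings}
    return {
        "added": sorted(k for k in dep if k not in src),
        "removed": sorted(k for k in src if k not in dep),
        "changed": sorted(k for k in src.keys() & dep.keys()
                          if src[k] != dep[k]),
    }

# Illustrative drift: code unchanged, but a KV namespace binding has been
# repointed to a different namespace ID via the API.
source = [
    {"type": "kv_namespace", "name": "SESSIONS", "id": "ns-prod-aaa"},
    {"type": "r2_bucket", "name": "UPLOADS", "bucket": "uploads-prod"},
]
deployed = [
    {"type": "kv_namespace", "name": "SESSIONS", "id": "ns-unknown-zzz"},
    {"type": "r2_bucket", "name": "UPLOADS", "bucket": "uploads-prod"},
]
drift = binding_drift(source, deployed)
```

A non-empty `changed` set with an unchanged code attestation is exactly the repointed-binding attack shape described above, and should page the same way a failed code verification does.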

What about Workers that are not built through the Cloudflare-hosted pipeline?

Many Workers are deployed via Wrangler from a developer's laptop or a customer-managed CI runner. Those deploys do not produce Cloudflare-issued build attestations; the trust signal must come from elsewhere. The pattern that works is to produce attestations in your own CI before the Wrangler deploy: a build provenance attestation signed by your CI's workload identity (using Sigstore or a similar mechanism), stored alongside the artifact in your own registry, and verified by the same continuous compliance loop that handles Cloudflare-hosted Workers. The verification logic becomes "either a Cloudflare-issued attestation matching the deployed artifact, or an organization-issued attestation matching the deployed artifact." Workers that satisfy neither path are flagged for investigation, which surfaces unmanaged deploys before they become incidents.
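The either-or verification logic can be sketched as a small classifier. The status strings and the two verifier callbacks are hypothetical; in practice they would wrap the Cloudflare trust anchor check and a Sigstore-style check against your CI's workload identity.

```python
def provenance_status(cf_attestation, org_attestation, verify_cf, verify_org):
    """Accept either issuer path; flag deployments that satisfy neither.
    `verify_cf` and `verify_org` are hypothetical callbacks wrapping the
    respective trust anchors."""
    if cf_attestation is not None and verify_cf(cf_attestation):
        return "trusted:cloudflare-issued"
    if org_attestation is not None and verify_org(org_attestation):
        return "trusted:org-issued"
    # No valid attestation from either path: an unmanaged deploy.
    return "flag:unattested"

always = lambda att: True
never = lambda att: False
hosted = provenance_status({"envelope": "..."}, None, always, never)
wrangler = provenance_status(None, {"envelope": "..."}, never, always)
unmanaged = provenance_status(None, None, always, always)
```

Keeping both paths behind one function means the continuous compliance loop does not need to know how a Worker was deployed, only whether some trusted issuer vouches for it.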

What goes wrong in incident response?

Two scenarios recur. The first is the missing attestation: a Worker is found to be running suspicious code, and there is no attestation for the deployed version because the deploy was made out-of-band. The investigation then has to reconstruct provenance from API audit logs, which is slower and noisier than verifying an attestation. The fix is preventive — require attestations at deploy time and reject unattested deploys via API token scoping or a Cloudflare-side policy. The second is the conflicting attestation: a Worker has an attestation, but the attestation points to a repository that no longer matches the code visibly running. This usually means the repository was overwritten or the deploy used a stale artifact. The investigation should treat the attestation as untrustworthy for this case and reconstruct from the deploy logs directly. Reverification on a regular cadence would have caught it before the incident, which is the strongest argument for not skipping the continuous pass.
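The two recurring scenarios suggest a simple triage routing, sketched below. The inputs are hypothetical booleans produced by earlier verification steps, and the route names are illustrative labels for the investigation paths described above.

```python
def triage_route(attestation, signature_ok, repo_matches_running_code):
    """Route an incident involving a suspicious Worker.
    `attestation` is the stored attestation (or None for an out-of-band
    deploy); the two flags summarize prior verification results."""
    if attestation is None:
        # Scenario 1, missing attestation: fall back to API audit logs,
        # the slower and noisier reconstruction path.
        return "reconstruct-from-audit-logs"
    if not signature_ok or not repo_matches_running_code:
        # Scenario 2, conflicting attestation: treat it as untrustworthy
        # and reconstruct from deploy logs directly.
        return "treat-attestation-as-untrusted"
    return "attestation-usable"
```

The preventive fixes from the text map onto the first two branches: deploy-time gating empties the first, and continuous reverification catches the second before it becomes an incident.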

How should this fit into a multi-cloud supply chain program?

A program that handles Cloudflare Workers in isolation will diverge from how it handles AWS Lambda, GCP Cloud Run, and Azure Container Apps. The supply chain abstractions that work across providers are: every deployable unit has an SBOM, every SBOM has a signed provenance attestation, every attestation chains to a trusted issuer, and every deployment is observed against its attestation continuously. The implementation details vary by provider, but the abstractions do not. Workers attestations slot into this model directly — they are in-toto, they cover the build, and they can be verified by the same tooling that handles SLSA attestations from any other source. The integration work is mostly plumbing: pulling the attestation from Cloudflare's API at the right time, mapping the predicate body to the program's canonical schema, and feeding the result into the central compliance and incident response systems.
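The predicate-mapping plumbing mentioned above can be sketched as a small normalizer. The canonical field names and the in-toto field paths below are assumptions for illustration, following the SLSA v1-style layout used earlier in this post.

```python
def to_canonical(statement, provider):
    """Map a provider's in-toto provenance statement onto a canonical
    record for the central compliance system. Field paths assume a
    SLSA v1-style predicate; real statements may differ per provider."""
    params = statement["predicate"]["buildDefinition"]["externalParameters"]
    return {
        "provider": provider,
        "artifact_sha256": statement["subject"][0]["digest"]["sha256"],
        "source_repo": params.get("repository"),
        "source_ref": params.get("ref"),
        "builder_id": statement["predicate"]["runDetails"]["builder"]["id"],
    }

# Illustrative statement with placeholder values.
stmt = {
    "subject": [{"digest": {"sha256": "ab" * 32}}],
    "predicate": {
        "buildDefinition": {"externalParameters": {
            "repository": "https://github.com/example/edge-worker",
            "ref": "refs/heads/main",
        }},
        "runDetails": {"builder": {"id": "https://workers.cloudflare.com/builds"}},
    },
}
record = to_canonical(stmt, provider="cloudflare-workers")
```

Once every provider's attestation lands in the same canonical shape, the policy, compliance, and incident response layers stop caring whether the deployable unit was a Worker, a Lambda, or a container image.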

How Safeguard Helps

Safeguard inventories every Cloudflare Workers script across your accounts alongside its attestation, source repository, build configuration, and binding set — then continuously verifies that the deployed code and bindings match what was attested. Policy gates block API tokens from making out-of-band Worker changes that bypass the attestation pipeline, and block deploys whose attestation predicates fall outside the approved repository and branch list. Griffin AI translates a Cloudflare-side incident bulletin into a list of specifically affected scripts in your tenant in seconds, with the responsible team, the source repository, and the most recent deploy timestamp surfaced for each. Continuous monitoring brings Workers into the same supply chain discipline that already covers your Lambda functions, your Kubernetes deployments, and your CI artifacts — closing the edge-compute gap that most multi-cloud programs leave open.
