The SLSA v1.0 specification spends most of its ink on provenance consumers. The requirements on builders are comparatively short and considerably harder to implement. A single paragraph on "isolation strength" implies a security architecture that CI platforms spend years building. A bullet on "provenance authenticity" implies cryptographic infrastructure that takes a dedicated team to operate. Reading the spec makes building a builder look like a weekend project; operating one in production is very much not.
This post is a walk through the SLSA v1.0 Build track requirements from the builder's perspective, based on what we have seen operating builders for customers and auditing third-party builder claims. It covers the isolation requirements, the provenance obligations, the documentation requirements, and the operational patterns that make the difference between a builder that claims L3 and a builder that can defend L3 under scrutiny.
What does "isolation strength" actually require?
The v1.0 Build track requires builders at L2 to have "isolated" build environments and at L3 to have "strong" isolation. The specification defines these loosely: tenants must not be able to interfere with each other's builds, and at L3 the build platform's integrity must survive tenant compromise. In practice this translates to several concrete requirements.
Each build must run in a fresh environment. Reusing a VM, container, or runner across tenants means state from one tenant's build can influence another's. "Fresh" means more than a clean filesystem; it means a new kernel, new network namespace, and new identity. GitHub-hosted runners achieve this by allocating a fresh VM for each job. Self-hosted runners that reuse a host across jobs do not, which is why they cannot claim L3 without an additional isolation layer such as ephemeral per-job VMs.
Tenants must not be able to exfiltrate platform secrets. The signing key used to authenticate provenance, the OIDC signing key used to issue tokens, and any platform credentials must be inaccessible from inside the build environment. The GitHub Actions model puts the OIDC signing key in a separate Microsoft-operated service that build steps cannot reach. Google Cloud Build puts its signing key in a Google-managed KMS that build containers do not have permission to access. Homegrown builders that sign using a key mounted into the build container cannot meet this requirement; the key is within reach of malicious build steps.
Tenant builds must not influence other concurrent builds. If builds share a cache, a network egress path, or a metadata service, a malicious build could poison the cache or exfiltrate secrets from another tenant. Shared caches are the hardest case because they are operationally valuable; the compromise most builders settle on is per-tenant cache partitions, keyed or encrypted so that one tenant cannot read or poison another's entries.
How does provenance authenticity work in practice?
L2 requires that provenance be authenticated by the build platform. "Authenticated" means signed by a key the platform controls, not by a key the tenant controls. This is the requirement that rules out tenant-generated provenance written from inside a build step. Even if the build step runs correctly, the provenance it produces is a self-attestation from an environment the platform does not trust.
The common pattern is a two-stage flow. The build step produces artifacts and, optionally, unsigned claims about what it did. A separate platform-operated component (a reusable workflow, a sidecar service, a post-build hook) reads the build's metadata from the platform's own observation, generates structured provenance, and signs it with a platform-controlled key. The signing identity is the platform, not the tenant.
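A minimal sketch of the platform-side generation step, in Python. The Statement and predicate field names follow the published in-toto Statement v1 and SLSA provenance v1 schemas; the `make_statement` helper, the `build_meta` record, and the `builder.example` URIs are hypothetical stand-ins for whatever the platform's own observation layer provides.

```python
def make_statement(artifact_name, artifact_sha256, build_meta):
    """Assemble an in-toto Statement with a SLSA v1.0 provenance predicate.
    Every field is filled from build_meta, the platform's OWN record of the
    trigger and execution -- never from anything the build step wrote."""
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"name": artifact_name,
                     "digest": {"sha256": artifact_sha256}}],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {
                "buildType": build_meta["build_type"],
                "externalParameters": {
                    # Derived from the platform's trigger record, not from
                    # tenant-supplied claims.
                    "source": {"uri": build_meta["repo_uri"],
                               "digest": {"gitCommit": build_meta["commit"]}},
                },
            },
            "runDetails": {"builder": {"id": build_meta["builder_id"]}},
        },
    }

statement = make_statement(
    "app.tar.gz",
    "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
    {"build_type": "https://builder.example/container-build@v1",  # hypothetical
     "repo_uri": "git+https://github.com/acme/app",
     "commit": "0123456789abcdef0123456789abcdef01234567",
     "builder_id": "https://builder.example/builder@v1"})         # hypothetical
```

The important design property is that `make_statement` runs in the platform-operated component, so a tenant build step never gets the chance to choose these values.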
Sigstore's keyless signing is what most modern builders use for the signing step. Fulcio issues a short-lived certificate bound to the platform's OIDC identity, cosign 2.2+ signs a DSSE envelope wrapping the in-toto Statement, and the signed envelope is published alongside the artifact or to Rekor. The key detail is that the OIDC token driving Fulcio must come from the platform, not from the build step. The slsa-github-generator reusable workflows achieve this by running in a separate workflow context with its own OIDC identity, distinct from the caller's.
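For reference, the DSSE envelope itself is a small structure, and the detail that matters is that the signature covers the pre-authentication encoding (PAE), which binds the payload type and length, rather than the raw JSON bytes. A sketch, with a dummy `sign` callback standing in for the platform's KMS- or Fulcio-backed signer:

```python
import base64
import json

def dsse_envelope(statement, sign):
    """Wrap an in-toto Statement in a DSSE envelope. The signature covers
    the PAE, not the bare payload. `sign` is a hypothetical callback; in a
    real builder it calls a signer that build steps must never reach."""
    payload = json.dumps(statement).encode()
    payload_type = "application/vnd.in-toto+json"
    # DSSE PAE: "DSSEv1 <type-len> <type> <payload-len> <payload>".
    pae = b"DSSEv1 %d %s %d %s" % (len(payload_type), payload_type.encode(),
                                   len(payload), payload)
    return {
        "payloadType": payload_type,
        "payload": base64.b64encode(payload).decode(),
        "signatures": [{"keyid": "",
                        "sig": base64.b64encode(sign(pae)).decode()}],
    }

# Dummy signer for illustration only; never mount a real key like this.
env = dsse_envelope({"_type": "https://in-toto.io/Statement/v1"},
                    lambda message: b"not-a-real-signature")
```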
Provenance must also accurately reflect the build. This is the requirement that tends to trip newer builders. It is not enough to produce a signed in-toto Statement with a SLSA predicate; the predicate's fields must be correct. If buildDefinition.externalParameters.source claims a source repository but the build actually used a different one, the provenance is false. Audits uncover this kind of issue when a builder's provenance template hard-codes values that should be derived from the build context.
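One way to catch that failure mode is a consistency check, run by the platform or an auditor, comparing the predicate's claimed source against the platform's own trigger record. A sketch; `check_source_matches` and the `observed_*` inputs are hypothetical names, while the field paths follow the SLSA v1.0 predicate:

```python
def check_source_matches(statement, observed_repo, observed_commit):
    """Compare the provenance's claimed source with what the platform
    actually fetched. Hard-coded template values show up as mismatches
    against the trigger record."""
    src = (statement["predicate"]["buildDefinition"]
           ["externalParameters"]["source"])
    problems = []
    if src["uri"] != observed_repo:
        problems.append(f"repo mismatch: {src['uri']} != {observed_repo}")
    if src["digest"].get("gitCommit") != observed_commit:
        problems.append("commit mismatch")
    return problems

stmt = {"predicate": {"buildDefinition": {"externalParameters": {"source": {
    "uri": "git+https://github.com/acme/app",
    "digest": {"gitCommit": "0123456789abcdef0123456789abcdef01234567"}}}}}}
issues = check_source_matches(
    stmt, "git+https://github.com/acme/app",
    "0123456789abcdef0123456789abcdef01234567")
```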
What does L3 require above L2?
L3 adds three requirements on top of L2. First, the platform must prevent tenants from tampering with the build after it starts. This means the build definition (the pipeline file, the build script, the CI configuration) must be read from an immutable reference at build start, and modifications during the build must not retroactively change what the provenance claims was built. GitHub Actions achieves this by reading the workflow file at the commit that triggered the build, not at the current head of the branch.
Second, the build must run in an environment that the tenant cannot modify mid-flight. This rules out builders where a tenant can exec into the build container during execution, change build scripts, or alter the provenance generation step. The typical L3 builder model is "one-shot execution, no tenant access."
Third, the provenance must include a reference to the build's source that a consumer can use to re-fetch the exact inputs. This usually means a commit SHA and a repository URL, stored in buildDefinition.externalParameters. The purpose is that an auditor with the provenance can independently retrieve the source, rebuild the artifact, and compare results.
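Assuming the provenance follows the SLSA v1.0 field layout and a git+https source URI, deriving the re-fetch steps is mechanical. A hypothetical `refetch_commands` helper:

```python
def refetch_commands(statement):
    """Turn the provenance's source reference into the commands an auditor
    would run to retrieve the exact inputs (sketch only)."""
    src = (statement["predicate"]["buildDefinition"]
           ["externalParameters"]["source"])
    uri = src["uri"].removeprefix("git+")   # "git+https://..." -> "https://..."
    commit = src["digest"]["gitCommit"]
    # Checking out the pinned commit, not a branch, is what makes the
    # re-fetch reproducible.
    return [f"git clone {uri} src", f"git -C src checkout {commit}"]

stmt = {"predicate": {"buildDefinition": {"externalParameters": {"source": {
    "uri": "git+https://github.com/acme/app",
    "digest": {"gitCommit": "0123456789abcdef0123456789abcdef01234567"}}}}}}
cmds = refetch_commands(stmt)
```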
What documentation does a builder need?
The SLSA specification requires hosted builders claiming L3 to publish a "SLSA level document" describing how they meet each requirement. This is a specific document, not marketing copy. It lists each requirement from the Build track, describes the builder's implementation, and explains the threat model.
GitHub has published such a document for GitHub Actions, and Google Cloud Build has equivalent documentation. Buildkite and CircleCI have posted theirs. A builder that claims L3 without a published level document is making a claim that a careful auditor should reject.
The document is also the place where a builder commits to a threat model. L3 is defined against specific adversaries (tenant attempting to forge provenance, tenant attempting to compromise concurrent builds, malicious code in build steps). A builder should explicitly state what adversaries it defends against and which it does not.
What are the operational patterns?
Three operational patterns show up in every production builder we have audited. First, aggressive telemetry on the signing path. The signing service logs every certificate request, every OIDC token verification, and every Rekor publish. When something goes wrong (a forged provenance claim, a signing service outage), the telemetry is what lets the team respond.
Second, explicit versioning of the builder identity. Every time the reusable workflow, the signing service, or the isolation model changes materially, the builder's identity URI changes. Consumers pin to specific identities, so a silent change breaks verification. GitHub does this by versioning the reusable workflow path with tags (@v2.0.0) rather than branch names.
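Consumer-side pinning only works if the identity comparison is exact. A sketch of the check, using the real slsa-github-generator workflow path as the pinned identity; `identity_pinned` is a hypothetical helper name:

```python
def identity_pinned(cert_identity, expected):
    """Exact-match comparison of the signing certificate's identity against
    the pinned, versioned builder URI. Prefix or substring matching would
    let an identity change (or a lookalike path) slip through silently."""
    return cert_identity == expected

# Pinned to a tag, not a branch, so the identity changes only on release.
PINNED = ("https://github.com/slsa-framework/slsa-github-generator"
          "/.github/workflows/generator_generic_slsa3.yml@refs/tags/v2.0.0")
# The same workflow referenced at a moving branch must NOT verify.
moved = PINNED.replace("refs/tags/v2.0.0", "refs/heads/main")
```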
Third, continuous attestation testing. The builder operator runs a synthetic build every hour and verifies the resulting provenance end-to-end: signature valid, certificate identity correct, Rekor entry present, predicate structure correct. When a component of the chain breaks, the synthetic fails before real tenants notice.
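The synthetic run is essentially an ordered checklist that reports which link in the chain broke. A sketch with hypothetical check hooks standing in for the real signature, certificate, Rekor, and schema verifiers:

```python
def run_synthetic(checks):
    """Run end-to-end checks on a synthetic build's provenance, in chain
    order, and report every link that failed. Each check callable is a
    hypothetical hook into the builder's real verification tooling."""
    failures = []
    for name, check in checks:
        try:
            if not check():
                failures.append(name)
        except Exception as exc:  # a crashed verifier is also a failure
            failures.append(f"{name}: {exc}")
    return failures

# Ordered like the chain: signature -> identity -> log -> schema.
failures = run_synthetic([
    ("signature_valid", lambda: True),       # stand-ins; real hooks would
    ("certificate_identity", lambda: True),  # call the verification stack
    ("rekor_entry_present", lambda: True),
    ("predicate_structure", lambda: True),
])
```

Wiring the result into paging is what makes the pattern useful: a non-empty `failures` list fires an alert before a real tenant hits the broken link.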
How Safeguard Helps
Safeguard tracks which builders your organisation consumes and maps each builder's SLSA claims to the v1.0 Build track requirements, so you can see at a glance which artifacts come from L3-capable builders and which from L2. For builders you operate internally, we ingest their provenance output and run automated consistency checks against the requirements, flagging template fields that look suspicious (hard-coded values, missing external parameters, implausible timestamps). When a builder claims L3 but its provenance pattern does not match the specification, Safeguard surfaces the gap before a downstream consumer has to debug a failed verification. Customers can use this to hold both internal and third-party builders accountable to the claims they publish.