The artifact signing conversation in 2022 has gotten badly tangled. Half the industry is running Sigstore demos at conferences while the other half is still shipping signed MSIs through a ten-year-old HSM that only one person on the release team knows how to use. Somewhere in between, a lot of teams are adding cosign sign to their CI pipeline without having clearly articulated what problem they are trying to solve. This is worth backing up from. Artifact signing is a specific cryptographic control that answers a specific question: given an artifact I am about to run, can I verify that it came from the entity I believe produced it, built from the source I believe built it, without modification between there and here? If you cannot cleanly describe which of those properties you are asserting, you will end up with signatures that get produced and never verified — which is, in terms of actual security, indistinguishable from no signatures at all. Here is the ground-up version of what artifact signing is for, what it is not for, and what a working implementation looks like.
What are you actually proving when you sign an artifact?
When you sign an artifact you are proving that the holder of a specific private key processed that specific artifact at the moment of signing. That is the entire cryptographic claim. Everything else — "this came from our organization," "this was built from a trusted source tree," "this has not been tampered with" — is an inference that depends on who holds the key, under what controls, and what additional metadata you have bound into the signature.
This matters because teams routinely over-read signatures. A Docker Content Trust signature on an image from Docker Hub tells you that someone holding that repository's Notary signing key signed it. It does not tell you the image came from the vendor you trust, unless you have independently pinned that specific signing identity. The 2019 python3-dateutil typosquat on PyPI showed the same issue in reverse: every malicious upload was made with perfectly valid credentials for the account that owned the package; the attacker simply owned the account.
What does the trust chain actually look like end to end?
The trust chain end to end has four links: a source commit, a build environment, a signing identity, and a verifier. Each link has an attestation that binds it to the next. In a modern Sigstore-based setup the chain looks like this: a Git commit with a signed tag, a GitHub Actions workflow that reads that commit, a Fulcio-issued ephemeral certificate tied to the workflow's OIDC identity, a signature of the artifact written to the Rekor transparency log, and a verifier that checks all of the above before deploying.
# Sign — no long-lived key, identity comes from OIDC
COSIGN_EXPERIMENTAL=1 cosign sign \
  --identity-token "$OIDC_TOKEN" \
  ghcr.io/acme/api@sha256:...

# Verify — asserts the signer identity and the workflow
COSIGN_EXPERIMENTAL=1 cosign verify \
  --certificate-identity "https://github.com/acme/api/.github/workflows/release.yml@refs/tags/v1.4.0" \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  ghcr.io/acme/api@sha256:...
The shift from 2018-era signing, which was "Jenkins has the PGP key on disk," to 2022-era signing, which is "the build system proves its own identity via OIDC and gets an ephemeral cert," is the most important operational change in the last five years. It removes the long-lived secret that is the primary failure mode of traditional signing.
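Concretely, the CI side needs only two things: permission to mint an OIDC token, and cosign on the PATH. A minimal GitHub Actions sketch (repository name, job wiring, and digest value are illustrative, not a drop-in workflow):

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # lets the job request its own OIDC identity token
      packages: write   # lets it push signatures to ghcr.io
    steps:
      - uses: sigstore/cosign-installer@v2
      - name: Sign the image built earlier in the workflow
        run: COSIGN_EXPERIMENTAL=1 cosign sign "ghcr.io/acme/api@$DIGEST"
        env:
          DIGEST: sha256:...   # digest of the image pushed by a prior step
```

The identity that ends up in the Fulcio certificate is the workflow path plus ref, which is exactly the string a verifier pins with --certificate-identity.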
Which parts of your pipeline should actually sign things?
The parts of your pipeline that should actually sign things are the ones producing artifacts that a downstream consumer will verify. That is a narrower set than most teams assume. Signing intermediate build outputs that are consumed only by later stages of the same pipeline is mostly wasted effort — if the build environment is compromised, any signing that happens inside it is compromised too.
The signatures that matter are at the boundaries: the container image pushed to a registry that other teams deploy from, the npm package uploaded to the public registry, the firmware binary shipped to customer devices, the release artifact posted to GitHub Releases. Everything in between can be protected with weaker mechanisms — checksums in a manifest, hash pinning in a lockfile — without the ceremony of a full keyed signature.
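For the interior stages, a hash manifest does the job without any keys. A minimal sketch, with invented file names:

```shell
set -eu
mkdir -p build
printf 'artifact bytes\n' > build/api-v1.4.0.tar.gz   # stand-in for a real build output
# Producing stage: record a hash for everything it emits
sha256sum build/*.tar.gz > manifest.sha256
# Consuming stage: refuse to proceed if any hash differs
sha256sum --check --quiet manifest.sha256
```

This catches accidental corruption and casual tampering between stages; it deliberately does not claim anything about identity, which is what the boundary signatures are for.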
A useful heuristic: if you are not going to write a verifier that rejects unsigned or wrongly signed artifacts, do not add the signing step. Sigstore's policy-controller admission controller for Kubernetes, OPA-based gates, and Notary v2 verifiers all exist to make that verification enforceable; pick one and wire it up before you add signing upstream.
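The simplest such verifier is a gate in the deploy job itself. A shell sketch, where verify_image and deploy_gate are invented names and the pinned identity mirrors the release workflow used earlier:

```shell
# Deploy-gate sketch; exit non-zero means the deployment is blocked.
verify_image() {
  COSIGN_EXPERIMENTAL=1 cosign verify \
    --certificate-identity "https://github.com/acme/api/.github/workflows/release.yml@refs/tags/v1.4.0" \
    --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
    "$1" > /dev/null 2>&1
}

deploy_gate() {
  if verify_image "$1"; then
    echo "verified: $1"
  else
    echo "refusing unsigned or wrongly signed image: $1" >&2
    return 1
  fi
}

# In the deploy job:
#   deploy_gate "ghcr.io/acme/api@sha256:..." || exit 1
```

An admission controller does the same check cluster-side, which is harder to bypass than a CI step; the CI gate is still worth having because it fails earlier and with a clearer error.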
How does Sigstore's transparency log change the threat model?
Sigstore's transparency log, Rekor, changes the threat model by making every signature publicly and immutably visible, which defeats attacks that rely on a malicious signature never being seen. A traditional signing compromise — attacker steals the key, signs a backdoor, distributes it quietly to a small number of targets — leaves no industry-visible trace. A Rekor-logged signature, by contrast, is a Merkle-tree entry that anyone can monitor for signatures under your identity that you did not authorize.
This is genuinely novel. It trades confidentiality (your release cadence, your artifact hashes, the existence of internal versions) for auditability. For public open-source projects that trade is almost always worth it. For private enterprise builds it is more nuanced, which is why deployments like Chainguard's Sigstore PKI offering and GitHub's private preview for private Rekor exist.
Where does signing stop and SBOM/provenance start?
Signing stops at "who processed this artifact" and SBOM/provenance picks up at "what went into it and how was it built." The distinction matters because a valid signature on a compromised build — a SolarWinds SUNBURST-style scenario — is cryptographically indistinguishable from a valid signature on a clean build. The signature does not tell you anything about what the signer did before producing the artifact.
That is the gap SLSA provenance attestations fill. A SLSA provenance v0.2 document, bound to the signature via an in-toto attestation, records the builder identity, the source repo, the commit hash, and the build invocation parameters. When the verifier checks the signature and the attached provenance, it is checking two independent claims: "this artifact was processed by identity X" and "identity X claims it was built from source Y using process Z." An attacker would have to compromise both the signing identity and the provenance-emitting builder to produce a convincing forgery — still possible, but meaningfully harder than stealing a single HSM-held key.
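To make the second claim concrete, the predicate below is a minimal SLSA v0.2 provenance body (every value in it, including the repo, commit digest, and buildType URI, is illustrative), and the commented cosign commands show how it gets bound to and checked against the image:

```shell
set -eu
# Minimal SLSA v0.2 predicate; all values illustrative
cat > provenance.json <<'EOF'
{
  "builder": {
    "id": "https://github.com/acme/api/.github/workflows/release.yml@refs/tags/v1.4.0"
  },
  "buildType": "https://github.com/Attestations/GitHubActionsWorkflow@v1",
  "invocation": {
    "configSource": {
      "uri": "git+https://github.com/acme/api",
      "digest": {"sha1": "0a1b2c3d4e5f60718293a4b5c6d7e8f901234567"},
      "entryPoint": ".github/workflows/release.yml"
    }
  },
  "materials": [
    {
      "uri": "git+https://github.com/acme/api",
      "digest": {"sha1": "0a1b2c3d4e5f60718293a4b5c6d7e8f901234567"}
    }
  ]
}
EOF
# cosign wraps the predicate in an in-toto statement, signs it, and attaches it to the image:
#   COSIGN_EXPERIMENTAL=1 cosign attest --predicate provenance.json \
#     --type slsaprovenance ghcr.io/acme/api@sha256:...
# The verifier then checks the attestation independently of the image signature:
#   COSIGN_EXPERIMENTAL=1 cosign verify-attestation \
#     --type slsaprovenance ghcr.io/acme/api@sha256:...
python3 -m json.tool provenance.json > /dev/null
```

In practice the predicate is emitted by the builder itself (for example by the slsa-github-generator workflows) rather than hand-written; writing one by hand is only useful for understanding what the verifier will later read.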
How Safeguard Helps
Safeguard treats signatures and attestations as first-class metadata on every component it tracks. When you ingest an SBOM or scan a container image, Safeguard resolves the artifact's Rekor entry, validates the Sigstore certificate chain against the expected signer identity, and records a verdict you can use in policy gates. The policy engine lets you write rules like "refuse any deployment whose image is unsigned, signed by an unexpected identity, or missing SLSA provenance at level 2 or higher." For components pulled from the open-source ecosystem, Safeguard continuously monitors the transparency logs and alerts when an upstream project's signer identity changes — the early signal of an account compromise or takeover. The eval harness lets you test those policies against historical artifacts before you turn them on in production.