Every RFP I have reviewed in the last six months includes an SBOM requirement. Some cite the EU CRA, some cite NIST SSDF, some just say "we need one per release, signed, attested, and retained for seven years." The good news is that generating a compliant Software Bill of Materials from a GitHub Actions pipeline is now a solved problem. The bad news is that most teams still ship SBOMs that are missing transitive data, lack attestation, or drift from the actual shipped artifact within a release cycle.
This post is the workflow I give engineers who want a production-grade SBOM pipeline on the first try.
What Kind of SBOM Do You Need?
For 2026, generate CycloneDX 1.6 JSON as the primary format and keep an SPDX 2.3 variant for customers who explicitly ask for it. CycloneDX has become the de facto standard for ENISA guidance and most vulnerability tooling because it carries richer provenance and VEX information. Generate the SBOM from the built artifact, not from the source tree — source-tree SBOMs miss the packages that actually ended up in the container or binary.
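For orientation, a minimal CycloneDX 1.6 JSON document has this shape (the component and image names below are illustrative, not output from any real scan):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "serialNumber": "urn:uuid:3e671687-395b-41f5-a30f-a58921a69b79",
  "version": 1,
  "metadata": {
    "component": { "type": "container", "name": "ghcr.io/acme/app" }
  },
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "purl": "pkg:apk/alpine/openssl@3.0.13"
    }
  ]
}
```

Real SBOMs add license, hash, and dependency-graph data on top of this skeleton, but every consumer tool starts by reading `components`.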
Which Tool Should You Use?
Syft from Anchore is the default choice for container and filesystem SBOMs because it understands the widest range of ecosystems and ships native CycloneDX output. For Java-heavy builds, CycloneDX Maven/Gradle plugins produce more accurate component data than post-build scans. For Go binaries, combine Syft with govulncheck module output for the most precise list. Most real pipelines run two or three generators and merge the output.
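A merge step along these lines combines the generators (a sketch: the file paths and the Maven plugin output location `target/bom.json` are assumptions, and `cyclonedx` here is the CycloneDX CLI):

```yaml
      # Sketch: merge a container SBOM from Syft with a build-time SBOM
      # from the CycloneDX Maven plugin. File names are illustrative.
      - name: Merge SBOMs
        run: |
          syft "ghcr.io/acme/app@${IMAGE_DIGEST}" -o cyclonedx-json > syft.cdx.json
          cyclonedx merge \
            --input-files syft.cdx.json target/bom.json \
            --output-file sbom.cdx.json
```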
Step-by-Step Implementation
Step 1: Pick Your Trigger and Artifact Target
Trigger SBOM generation on tag pushes (not branch pushes) so that every SBOM maps to an immutable artifact. Build the artifact once, generate the SBOM from the resulting image digest, and never regenerate the image during the SBOM job — the whole point is to describe what you are about to ship.
Step 2: Build and Push the Image With a Digest Pin
Capture the pushed image digest as a job output so subsequent steps operate on the exact bytes that went to the registry:
```yaml
name: release

on:
  push:
    tags: ['v*']

permissions:
  contents: read
  id-token: write
  packages: write
  attestations: write

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      digest: ${{ steps.push.outputs.digest }}
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - id: push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.ref_name }}
          provenance: true
          sbom: true
```
Step 3: Generate a CycloneDX SBOM From the Image Digest
Pin Syft to a specific release — never use `@latest` in a compliance pipeline:
```yaml
  sbom:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
      attestations: write
      packages: write
    steps:
      - name: Generate CycloneDX SBOM
        uses: anchore/sbom-action@v0.20.0
        with:
          image: ghcr.io/${{ github.repository }}@${{ needs.build.outputs.digest }}
          # credentials so the action can pull the image from ghcr.io
          registry-username: ${{ github.actor }}
          registry-password: ${{ secrets.GITHUB_TOKEN }}
          format: cyclonedx-json
          output-file: sbom.cdx.json
          upload-artifact: false
      - name: Generate SPDX SBOM
        uses: anchore/sbom-action@v0.20.0
        with:
          image: ghcr.io/${{ github.repository }}@${{ needs.build.outputs.digest }}
          registry-username: ${{ github.actor }}
          registry-password: ${{ secrets.GITHUB_TOKEN }}
          format: spdx-json
          output-file: sbom.spdx.json
          upload-artifact: false
```
Step 4: Attest the SBOM With GitHub's Built-In Attestation
GitHub's native attestation machinery produces an in-toto statement signed by a short-lived certificate from Sigstore's Fulcio CA, with the public transparency log entry in Rekor. This is what auditors actually want to see:
```yaml
      - name: Attest SBOM (CycloneDX)
        uses: actions/attest-sbom@v2
        with:
          subject-name: ghcr.io/${{ github.repository }}
          subject-digest: ${{ needs.build.outputs.digest }}
          sbom-path: sbom.cdx.json
          push-to-registry: true
```
Pushing the attestation to the registry stores it as an OCI referrer alongside the image, so consumers can discover it with a single `cosign download attestation`.
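Downstream consumers can also verify through GitHub's API with the `gh` CLI. A verification step in a consumer pipeline might look like this (the image reference and owner are illustrative):

```yaml
      # Sketch: verify the SBOM attestation before deploying the image.
      - name: Verify SBOM attestation
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh attestation verify \
            "oci://ghcr.io/acme/app@${IMAGE_DIGEST}" \
            --owner acme
```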
Step 5: Upload the SBOMs As Release Assets
Attach both SBOMs to the GitHub Release so customers and compliance teams can download them without registry credentials:
```yaml
      - name: Attach SBOMs to release
        uses: softprops/action-gh-release@v2
        with:
          files: |
            sbom.cdx.json
            sbom.spdx.json
```
Step 6: Retain the SBOMs for the Full Support Period
GitHub's default artifact retention is 90 days, which is shorter than any meaningful compliance horizon. For CRA you need the full support period — at least five years. Mirror SBOMs into cold storage (S3 with Object Lock, Azure immutable blob storage, or an OCI registry with retention policies) at the same time you attach them to the release:
```yaml
      - name: Mirror to S3 with object lock
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.SBOM_AWS_KEY }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.SBOM_AWS_SECRET }}
        run: |
          # `aws s3 cp` does not accept object-lock flags; use `s3api put-object`.
          aws s3api put-object \
            --bucket acme-sbom-archive \
            --key "${{ github.repository }}/${{ github.ref_name }}/sbom.cdx.json" \
            --body sbom.cdx.json \
            --object-lock-mode COMPLIANCE \
            --object-lock-retain-until-date "$(date -u -d '+7 years' +%FT%TZ)"
```
Should You Generate the SBOM From Source or Build?
Generate from the build output, not from source. A source-tree SBOM misses packages pulled during go build, dependencies baked into language runtimes, native libraries installed via the base image, and any components added by multi-stage build copies. The only SBOM worth auditing is one that describes the actual shipped artifact.
How Do You Handle Monorepos?
Generate one SBOM per shipped artifact, not one SBOM per repo. A monorepo that ships five containers needs five SBOMs, each tied to that container's digest. A meta-SBOM for the monorepo itself is useful as a developer convenience but is not what a downstream consumer needs during incident response.
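A build matrix keeps this manageable. A sketch, with hypothetical service names and directory layout:

```yaml
  build:
    strategy:
      matrix:
        service: [api, worker, scheduler]   # hypothetical service list
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: push
        uses: docker/build-push-action@v6
        with:
          context: services/${{ matrix.service }}
          push: true
          tags: ghcr.io/${{ github.repository }}/${{ matrix.service }}:${{ github.ref_name }}
      # each matrix leg then generates and attests its own SBOM
      # against its own image digest
```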
What Should You Do About Base-Image Components?
Include them — the whole point of an SBOM is to describe everything in the artifact, including the base image. Syft will walk the image layers automatically. For extremely minimal images (distroless, scratch with a single binary), consider combining the container SBOM with a language-ecosystem SBOM produced at build time so you capture Go module versions, Rust crate versions, or NPM package versions that do not show up in the filesystem scan.
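For a scratch-based Go image, for example, a build-time module SBOM can be merged with the container scan. A sketch using `cyclonedx-gomod` and the CycloneDX CLI (file names are assumptions):

```yaml
      # Sketch: capture Go module versions that a filesystem scan of a
      # scratch image cannot see, then merge with the Syft output.
      - name: Go module SBOM
        run: |
          cyclonedx-gomod mod -json -output gomod.cdx.json
          cyclonedx merge \
            --input-files sbom.cdx.json gomod.cdx.json \
            --output-file merged.cdx.json
```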
How Do You Validate the SBOM Is Actually Usable?
Run the SBOM through a vulnerability scanner as part of the same workflow. If Grype, osv-scanner, or your scanner of choice throws "no components found" or matches suspiciously few CVEs, something is wrong — typically missing transitive data or a format mismatch. A blocked merge on "scanner could not read SBOM" catches 90% of bad SBOM output before it reaches a customer.
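A lightweight sanity gate along these lines can run right after generation (the component threshold and the `check_sbom.py` name are assumptions — tune both to your pipeline):

```python
import json
import sys


def sbom_looks_usable(path: str, min_components: int = 10) -> bool:
    """Return True if the CycloneDX SBOM at `path` contains a plausible
    number of components. The threshold is an assumption -- tune it to
    the typical size of your images."""
    with open(path) as f:
        doc = json.load(f)
    return len(doc.get("components", [])) >= min_components


if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. `python check_sbom.py sbom.cdx.json` as a CI step
    if not sbom_looks_usable(sys.argv[1]):
        print(f"SBOM {sys.argv[1]} has suspiciously few components",
              file=sys.stderr)
        sys.exit(1)
```

An empty or near-empty `components` list is the single most common symptom of a source-tree scan pointed at the wrong directory.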
How Do You Keep Up With Component Churn Between Releases?
Regenerate on every tag, not on every merge. Merge-level SBOMs add noise without changing what you actually ship. For long-lived releases that receive patches, regenerate the SBOM each time you cut a patch release, and version the SBOMs the same way you version the artifact — `1.4.2-sbom.cdx.json`, not `latest`.
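Deriving that name from the tag is a one-liner in the workflow (the fallback tag value here is purely illustrative; in Actions, `GITHUB_REF_NAME` is set for you on tag pushes):

```shell
# Derive a versioned SBOM filename from the release tag so the SBOM is
# versioned exactly like the artifact it describes.
TAG="${GITHUB_REF_NAME:-v1.4.2}"    # set by Actions on tag pushes
VERSION="${TAG#v}"                  # "v1.4.2" -> "1.4.2"
SBOM_NAME="${VERSION}-sbom.cdx.json"
echo "$SBOM_NAME"
```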
How Safeguard.sh Helps
Safeguard.sh ingests CycloneDX and SPDX SBOMs from your GitHub Actions workflow and turns them into something you can act on. We run reachability analysis up to 100 dependency levels deep against the components you actually ship, so the CVE list a customer sees in your SBOM portal matches the list your engineers should actually chase. Griffin AI drafts VEX statements for the noise and attaches them to the next release's SBOM automatically. Our TPRM surface compares your SBOMs against upstream vendor SBOMs to flag divergences, and the container self-healing runtime rebuilds affected images when a new vulnerability is disclosed — with a fresh signed SBOM attached on the way out.