SBOM & Compliance

Building A Defensible SBOM Program In 90 Days

A pragmatic 90-day blueprint for standing up an SBOM program that survives auditor scrutiny, procurement reviews, and incident response without burning out your platform team.

Shadab Khan
Security Engineer
6 min read

Most SBOM programs start as a compliance checkbox and end as an internal joke. A team is told to "produce SBOMs", a CI step is bolted onto the build, files land in an S3 bucket, and six months later nobody can answer the only questions that matter: which products contain log4j-core 2.14.1, which suppliers shipped xz-utils 5.6.0, and which AI models in production were trained on the dataset that just got revoked. The artefacts exist, but the program does not. The gap between "we generate SBOMs" and "we can defend our SBOM program in front of an auditor or a regulator" is wider than most leaders expect, and it is almost entirely a function of process, not tooling.

This post lays out a 90-day plan that treats SBOM as a living dataset rather than a build artefact, with concrete deliverables for days 1-30, days 31-60, and days 61-90. The numbers below reflect what we have seen across mid-market and regulated enterprise rollouts during 2025 and the first quarter of 2026, where median time-to-first-defensible-program landed at 87 days.

Days 1-30: Inventory, Ingestion, And A Single Source Of Truth

The first month is unglamorous. The single biggest predictor of program success is whether you can answer "what are our products?" before you start generating SBOMs. Run a 5-day inventory sprint. List every shippable artefact: SaaS services, on-prem appliances, mobile apps, firmware images, customer-installed agents, and any AI models you expose as a product surface. Tag each with an owner, a build system, and a release cadence. Expect to find 20-40% more products than the asset inventory shows; shadow services and acquired-team artefacts dominate that delta.
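As a sketch, the inventory record can be as simple as a tagged dataclass. The field names and the two example products below are hypothetical, not a standard; the point is that a record is only sprint-complete when every tag is filled in.

```python
from dataclasses import dataclass

# Hypothetical inventory record; fields mirror the tags named above.
@dataclass
class ProductRecord:
    name: str
    owner: str            # accountable team or individual
    build_system: str     # e.g. "github-actions", "jenkins"
    release_cadence: str  # e.g. "weekly", "quarterly"
    surface: str          # "saas" | "appliance" | "mobile" | "firmware" | "agent" | "ai-model"

inventory = [
    ProductRecord("billing-api", "payments-team", "github-actions", "weekly", "saas"),
    ProductRecord("edge-agent", "platform-team", "jenkins", "quarterly", "agent"),
]

# A record counts toward the sprint only when no field is left blank.
complete = [p for p in inventory if all(vars(p).values())]
```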

Next, pick one canonical format and one canonical identifier scheme. CycloneDX 1.6 with Package URL (purl) identifiers is the pragmatic 2026 default for most engineering organisations; SPDX 2.3 wins where licence-driven legal review dominates. Mixing both in the same lake without normalisation is the most common failure mode we see. Standardise once, convert at ingest.

Wire up generation in CI for the top ten products by revenue. Do not chase 100% coverage in month one. Aim for 60% of revenue-weighted product surface, which in practice means about ten to fifteen pipelines. Validate every SBOM against the schema, and reject anything missing bom-ref, purl, version, or supplier. By day 30 you should have a queryable index of components across those products, and a written contract that says "no SBOM, no release tag".
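A minimal version of that release gate can be sketched as follows, assuming a CycloneDX-shaped JSON document where each entry in `components` carries the four required fields; the example SBOM is illustrative:

```python
REQUIRED = ("bom-ref", "purl", "version", "supplier")

def validate_sbom(doc: dict) -> list[str]:
    """Return a list of violations; an empty list means the SBOM passes the gate."""
    violations = []
    for i, comp in enumerate(doc.get("components", [])):
        for field in REQUIRED:
            if not comp.get(field):
                violations.append(f"components[{i}] missing {field}")
    return violations

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "components": [
        {"bom-ref": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
         "version": "2.17.1",
         "supplier": {"name": "Apache Software Foundation"}},
        {"purl": "pkg:npm/lodash@4.17.21", "version": "4.17.21"},  # missing bom-ref, supplier
    ],
}

print(validate_sbom(sbom))
# → ['components[1] missing bom-ref', 'components[1] missing supplier']
```

In CI, a non-empty violation list fails the pipeline step, which is what makes "no SBOM, no release tag" enforceable rather than aspirational.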

Days 31-60: Normalisation, VEX, And The Noise Problem

By week five, the honeymoon ends. Engineering teams will start asking why every release has 1,800 "critical" findings, and procurement will start forwarding vendor-supplied SBOMs that do not match anything in your taxonomy. This is the normalisation phase, and it is where most programs stall.

Three workstreams run in parallel.

First, component identity normalisation. The same library will appear as pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1, log4j:log4j-core:2.17.1, and Apache Log4j 2.17.1 across three vendor SBOMs. Without a canonical resolver, cross-product queries return garbage. Build or buy a purl-first resolver and benchmark it against a fixed test set of 500 components; aim for above 95% canonicalisation accuracy before you trust dashboards.

Second, vulnerability enrichment with exploitability context. Raw NVD CVSS scoring produces noise ratios above 30:1 versus what is actually exploitable in your deployment. Layer in EPSS, CISA KEV, and reachability analysis where the language allows it. Teams that combine these three signals typically cut actionable backlog by 70-85% in the first 30 days of enrichment.

Third, VEX. Publish VEX statements for the long tail of "not affected" findings using CSAF 2.0 or CycloneDX VEX. A realistic target is 200-400 VEX assertions covering the top recurring false positives across your product line. Every VEX statement reduces ticket volume permanently; the ROI compounds.
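A sketch of one such assertion in the CycloneDX VEX shape follows; check the field names against the CycloneDX 1.6 schema before publishing, and treat the product purl and CVE detail as illustrative:

```python
def vex_not_affected(cve: str, product_ref: str, justification: str, detail: str) -> dict:
    """Build one CycloneDX-style "not affected" VEX assertion (sketch)."""
    return {
        "id": cve,
        "analysis": {
            "state": "not_affected",
            "justification": justification,
            "detail": detail,
        },
        "affects": [{"ref": product_ref}],
    }

doc = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "vulnerabilities": [
        vex_not_affected(
            "CVE-2021-45046",
            "pkg:maven/example/billing-api@4.2.0",  # hypothetical product purl
            "code_not_reachable",
            "log4j-core is present but the JNDI lookup path is never invoked",
        )
    ],
}
```

Because each assertion is keyed by CVE and product ref, the same record suppresses the finding in every future scan of that product version, which is why the ticket-volume savings compound.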

Days 61-75: AI-BOM And The Model Supply Chain

If your product touches an AI model, an embedding API, or a fine-tuned weights file, the SBOM-only view is incomplete. Regulators in 2026 increasingly expect a parallel AI Bill of Materials covering model lineage, training data provenance, evaluation results, and the runtime stack that hosts the model.

Treat AI-BOM as a peer artefact to SBOM, not an extension. CycloneDX 1.6 supports mlModel and data component types; use them. For each production model, record at minimum: base model identifier and version, fine-tuning datasets with licence and consent status, evaluation suite results with timestamps, the inference runtime (vLLM, TGI, Triton) and its version, and the hardware class. A reasonable first pass covers your top five model surfaces with 12-18 fields each.
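As a sketch, one production model's entry might look like the following. The component type string in the published CycloneDX schema is "machine-learning-model", and everything in `properties` here is an illustrative key/value pair rather than a mandated field; verify the shape against the 1.6 schema before standardising on it:

```python
# Minimal AI-BOM component sketch; model, dataset, and runtime values are hypothetical.
model_component = {
    "type": "machine-learning-model",  # CycloneDX component type for models
    "bom-ref": "model/support-triage@3",
    "name": "support-triage",
    "version": "3",
    "properties": [
        {"name": "base_model", "value": "llama-3.1-8b-instruct"},
        {"name": "finetune_dataset", "value": "support-tickets-2025q3"},
        {"name": "dataset_licence", "value": "internal, consent-reviewed"},
        {"name": "eval_suite", "value": "triage-eval v7, run 2026-01-14"},
        {"name": "inference_runtime", "value": "vllm 0.6.x"},
        {"name": "hardware_class", "value": "single A100-80GB"},
    ],
}
```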

The hardest field is data provenance. You will not have perfect lineage for legacy models. Document what you know, mark the rest as unknown rather than fabricating it, and put a remediation date on each gap. Auditors respect honest gaps; they do not respect invented metadata.

Days 76-90: Signed Attestations And External Defensibility

The final fortnight turns the program from internal hygiene into external leverage. Sign every SBOM and AI-BOM you publish using Sigstore or an in-house KMS-backed signer, and emit an in-toto attestation that ties the SBOM to the build it describes. Unsigned SBOMs are increasingly treated as unverifiable in customer security questionnaires; signed ones become a procurement accelerator.
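The in-toto side can be sketched as building a Statement that names the image digest as its subject and embeds the SBOM as its predicate; the type URLs follow the in-toto attestation spec, and the predicateType shown is the one commonly used for CycloneDX payloads, but verify both against your signer's conventions:

```python
import hashlib

def sbom_statement(image_name: str, image_digest: str, sbom: dict) -> dict:
    """Wrap an SBOM in an in-toto Statement so the signature binds the
    SBOM to the exact build artefact it describes (sketch)."""
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"name": image_name, "digest": {"sha256": image_digest}}],
        "predicateType": "https://cyclonedx.org/bom",
        "predicate": sbom,
    }

sbom = {"bomFormat": "CycloneDX", "specVersion": "1.6", "components": []}
digest = hashlib.sha256(b"image-blob-bytes").hexdigest()  # stand-in for the real image digest
stmt = sbom_statement("registry.example.com/billing-api", digest, sbom)
```

The statement itself is then handed to the signer (Sigstore or the KMS-backed equivalent); the signature over the whole envelope is what makes the SBOM-to-build binding verifiable by a third party.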

Stand up two external surfaces. The first is a customer-facing transparency endpoint where authenticated customers can pull the latest signed SBOM for a product version. The second is a procurement-facing supplier portal where you ingest signed SBOMs from your own vendors with the same rigour you apply outbound. Symmetry is the point; a program that demands what it cannot supply does not survive an executive review.

By day 90 you should be able to answer four questions in under five minutes from a single console: which products contain component X at version Y, which suppliers shipped that component to us, what is our VEX position on the active CVEs against it, and where are the signed attestations that prove the chain. If the answer to any of those takes longer, the program is not yet defensible regardless of how many SBOMs are in storage.
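To make the data shape concrete, here is a toy in-memory version of that console lookup; in practice the index is the SBOM lake built in days 1-30, and the products, suppliers, and attestation pointer shown are hypothetical:

```python
# Toy index: purl@version -> everything the four day-90 questions need.
index = {
    "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1": {
        "products": ["billing-api 4.1", "edge-agent 2.0"],
        "suppliers": ["Apache Software Foundation", "vendor-x (bundled)"],
        "vex": {"CVE-2021-44228": "affected, fix shipped in 4.2 / 2.1"},
        "attestation": "sha256:placeholder",  # pointer to the signed statement
    },
}

def answer(purl: str) -> dict:
    """The four day-90 questions, answered from one lookup."""
    entry = index.get(purl, {})
    return {
        "which_products": entry.get("products", []),
        "which_suppliers": entry.get("suppliers", []),
        "vex_position": entry.get("vex", {}),
        "signed_attestation": entry.get("attestation"),
    }
```

The design point is that all four answers come from one key, not four systems; if answering requires joining a scanner, a spreadsheet, and a ticket queue, the five-minute bar will not be met.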

How Safeguard Helps

Safeguard collapses the 90-day plan into a single platform. SBOM ingest accepts CycloneDX 1.4-1.6 and SPDX 2.3 from CI, vendor portals, and signed registries, normalises components against a purl-first resolver, and indexes everything for sub-second cross-product queries. AI-BOM extends the same model to ML components, capturing model lineage, dataset provenance, and runtime fingerprints alongside traditional SBOM data. VEX ingest and authoring tools let teams suppress noise at scale using CSAF 2.0 and CycloneDX VEX, with provenance preserved end-to-end. Signed attestations cover both inbound supplier SBOMs and outbound customer-facing artefacts, backed by Sigstore-compatible verification. The result is a program that holds up to auditors, procurement, and incident response on day 90 and stays defensible on day 900.
