Between January and October 2024 we scored 1,204 production SBOMs using an internal quality rubric based on the NTIA Minimum Elements, the CISA Shared Attestation Draft, and the CycloneDX and SPDX conformance test suites. The sample was drawn from vendor-supplied SBOMs across 180 enterprise security teams, covering 340 distinct software publishers. The headline: fewer than one in six SBOMs would pass a meaningful TPRM gate if one were actually enforced today. That is not a criticism of the formats, both of which are mature. It is an assessment of the practice of generating and delivering them, which is where vendor SBOMs still fall short. This post walks the data: what is actually in the average 2024 SBOM, where the systematic gaps are, and how buyers can use quality scoring without creating a compliance-theater dynamic in which vendors rush-fill the minimum fields.
What does a "quality" SBOM actually mean?
We used a 100-point rubric across seven dimensions: completeness of component inventory (30 pts), accuracy of version/hash/license data (20 pts), dependency graph depth and correctness (15 pts), identity fields including supplier, purl, CPE (15 pts), format conformance (10 pts), provenance and signing (5 pts), and VEX/vulnerability annotation (5 pts). An SBOM scoring 70+ is TPRM-usable; 50-69 is informational; below 50 is effectively a compliance checkbox with no analytic value. The median score across our sample was 47.
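As a sketch of how the rubric reduces to a number, here is the weighted sum in Python. The weights and tier thresholds match the rubric above; the dimension keys and the per-dimension fractions are illustrative stand-ins for our field-level checks, not the production scorer.

# Sketch of the 100-point rubric as a weighted sum. Weights and thresholds
# match the rubric above; dimension keys and input fractions are illustrative.
WEIGHTS = {
    "component_completeness": 30,
    "version_hash_license_accuracy": 20,
    "dependency_graph": 15,
    "identity_fields": 15,  # supplier, purl, CPE
    "format_conformance": 10,
    "provenance_signing": 5,
    "vex_annotation": 5,
}

def score_sbom(fractions):
    # fractions maps each dimension to a 0.0-1.0 pass rate from field checks
    return sum(w * fractions.get(dim, 0.0) for dim, w in WEIGHTS.items())

def tier(score):
    if score >= 70:
        return "TPRM-usable"
    if score >= 50:
        return "informational"
    return "compliance checkbox"  # below 50: no analytic value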
What fields are consistently missing?
Three. First, supplier, the organization that actually produced the component, is populated on only 34% of components across the sample. Without it, license obligations and TPRM supplier attribution are unresolvable. Second, hash: 61% of SBOMs include hashes somewhere, but only 19% of individual components carry one, because most tools hash third-party libraries pulled from a registry and skip internally built first-party artifacts. Third, purl or cpe is populated for 78% of components, which sounds good until you look at which components are missing it: first-party internal libraries and proprietary SDKs, exactly the ones you cannot look up externally.
// What an actual 2024 vendor SBOM component often looks like
{
  "type": "library",
  "name": "internal-auth-sdk",
  "version": "2.3.0"
  // no supplier, no purl, no hash, no license
}
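For contrast, here is what the same component needs to carry to be resolvable downstream, sketched as a Python dict mirroring the CycloneDX JSON shape. The supplier name, purl, and hash value are illustrative, not taken from a real SBOM.

# The same component with the fields a consumer can actually resolve
# (CycloneDX JSON shape; supplier, purl, and hash values are illustrative).
complete_component = {
    "type": "library",
    "name": "internal-auth-sdk",
    "version": "2.3.0",
    "supplier": {"name": "Example Corp Platform Engineering"},
    "purl": "pkg:generic/example/internal-auth-sdk@2.3.0",
    "hashes": [{"alg": "SHA-256", "content": "3fc9b6..."}],  # truncated for display
    "licenses": [{"license": {"id": "Apache-2.0"}}],
}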
How accurate is the dependency graph?
Mediocre. Only 41% of the SBOMs in our sample included a non-trivial dependency graph (more than just a flat list of components). Of those that did, we spot-checked 80 by rebuilding the source and comparing; about 55% had material omissions, usually around dynamic loading, reflection, or plugin architectures. The most commonly missing class of dependency is native libraries loaded via dlopen, JNI, or P/Invoke, because the SBOM was generated from the package manifest, not from the built artifact.
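To make the failure mode concrete, here is a hypothetical Python service loading a native library at runtime. No package manifest names libssl, so a manifest-derived SBOM omits it entirely; only a scan of the built artifact sees the .so file.

# Hypothetical runtime dependency that manifest-based generators miss:
# requirements.txt / pyproject.toml never mention this native library,
# so only scanning the built artifact (e.g. the container image) finds it.
import ctypes

libssl = ctypes.CDLL("libssl.so.3")  # dlopen under the hood; invisible to manifests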
Which generators produced the best data?
Three generators consistently scored above 70 on our rubric when driven properly: Syft (2024 versions, run against a built container), the CycloneDX Maven plugin with the aggregate and license-data flags, and Microsoft's SBOM Tool for Windows and .NET builds. The common factor is that all three run against a compiled artifact, not a source manifest, and all three pull license and hash data from authoritative sources. Source-manifest-only generators (many npm-specific or pip-specific tools) systematically score lower because they miss the set of components that actually ended up in the artifact.
What are buyers actually doing with the SBOMs they receive?
Not much. Based on survey data from the 180 teams in our sample, roughly 32% parse vendor SBOMs into their analysis tooling regularly; another 24% ingest them for audit purposes but do not query them between audits; and 44% receive and file them without any automated parsing at all. That last category is where most of the compliance-motivated SBOM exchange happens today: the vendor produces, the buyer files, neither side gets value. The buyers who do parse SBOMs generate a disproportionate share of the pressure on vendors for quality, so vendor improvement correlates with whether buyers actually consume what they receive.
Should we standardize a quality score?
Yes, and CISA's 2024 SBOM Minimum Elements update is moving in that direction. The useful version would publish a signed per-SBOM quality score alongside the SBOM itself, so a buyer sees both the document and a machine-readable assessment of whether it is complete. The risk is rubber-stamping: if "quality" becomes a box on the Common Form, generators will optimize for that specific rubric, the same pattern that broke the SOC 2 market. The mitigation is to publish multiple scores from independent scorers and let TPRM teams look at the distribution, not a single number.
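One plausible shape for such a signed, machine-readable score, sketched as a Python dict. Every field name here is hypothetical, since no published schema for this exists yet.

# Hypothetical per-SBOM quality attestation (no published schema; field
# names are illustrative). Several independent scorers would each publish
# one, and TPRM tooling would evaluate the distribution, not one number.
quality_attestation = {
    "sbom_digest": "sha256:9f2a...",        # binds the score to one exact document (truncated)
    "rubric": "example-rubric/v1",          # which rubric produced the score
    "scorer": "scorer.example.com",         # who computed it
    "score": 47,
    "dimension_scores": {"identity_fields": 11, "dependency_graph": 6},  # points per dimension
    "signature": "MEUCIQ...",               # detached signature over the fields above (truncated)
}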
What controls should buyers adopt?
Three. Reject SBOMs missing supplier on more than 20% of components at ingest time; this is a trivial automated check and catches the tools that export name and version only. Require that SBOMs be generated against built artifacts, not source manifests, by asking vendors for the metadata.tools block and verifying it. And require the SBOM to be signed with the same key used to sign the release, so post-hoc tampering with the SBOM is detectable.
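A sketch of the first two controls as an automated ingest gate over parsed CycloneDX JSON. Field paths follow the CycloneDX schema and the thresholds match the text above, but the generator allowlist is illustrative, and the third control (signature verification) is omitted because it depends on your release-signing infrastructure.

import json

def supplier_coverage(bom):
    # Fraction of components that name a supplier (CycloneDX JSON shape).
    components = bom.get("components", [])
    if not components:
        return 0.0
    named = sum(1 for c in components if c.get("supplier", {}).get("name"))
    return named / len(components)

def looks_artifact_derived(bom):
    # Heuristic: does metadata.tools name a generator known to run against
    # built artifacts? The allowlist is illustrative; maintain your own.
    ARTIFACT_LEVEL_TOOLS = {"syft", "cyclonedx-maven-plugin"}
    tools = bom.get("metadata", {}).get("tools", [])
    if isinstance(tools, dict):  # CycloneDX 1.5+ may nest tools under "components"
        tools = tools.get("components", [])
    return any(t.get("name", "").lower() in ARTIFACT_LEVEL_TOOLS for t in tools)

def ingest_gate(path):
    with open(path) as f:
        bom = json.load(f)
    failures = []
    if supplier_coverage(bom) < 0.80:  # reject: >20% of components lack supplier
        failures.append("supplier missing on more than 20% of components")
    if not looks_artifact_derived(bom):
        failures.append("no artifact-level generator in metadata.tools")
    # Third control (SBOM signed with the release key) is omitted here:
    # verification depends on your signing infrastructure.
    return failures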
How Safeguard Helps
Safeguard scores every ingested SBOM against a published quality rubric and surfaces the score alongside the content, so a buyer sees immediately that a "compliant" vendor SBOM is 43/100 rather than discovering it during an incident. Reachability analysis is only as good as the underlying graph, and Safeguard flags SBOMs whose dependency data is too shallow to support a reachability conclusion, preventing false confidence. Griffin AI auto-generates regeneration guidance for specific vendor tools ("run Syft against the built image, not the Dockerfile") and tracks improvement over time. TPRM stores quality trend lines per vendor, and policy gates can block promotion of a release from a supplier whose SBOM quality has regressed below a threshold since the last upload.