SBOM Generation: Syft, Tern, Trivy Compared (2026)

An engineer's side-by-side of Syft, Tern, and Trivy for SBOM generation in 2026, with honest notes on accuracy, performance, and where each tool actually fits.

Shadab Khan
Security Engineer
7 min read

Three tools dominate the open-source SBOM generation space in 2026: Syft, Tern, and Trivy. Each has a distinct pedigree and optimization target, and the differences matter more than marketing might suggest. I have used all three extensively in pipeline and diligence contexts, and the three are not interchangeable.

This is a direct comparison based on real-world use. It is not exhaustive, but it covers the dimensions that actually affect tool selection: accuracy, performance, format support, ecosystem coverage, and where each tool fails.

What do these tools actually do differently?

Syft, maintained by Anchore, is a general-purpose SBOM generator that analyzes container images, filesystems, and binaries. It supports a wide range of ecosystems, outputs CycloneDX, SPDX, and Syft's own JSON, and integrates tightly with Anchore's Grype for vulnerability scanning. Syft's philosophy is breadth: cover as many ecosystems as possible with reasonable accuracy.

Tern, originally developed by VMware and now maintained as an open project, focuses on container image SBOM generation with an emphasis on license discovery. Tern reads the layer history, attempts to identify the packages installed in each layer, and produces an SPDX document. Its philosophy is depth for containers: understand the layer structure and licenses thoroughly, even if ecosystem coverage is narrower.

Trivy, from Aqua Security, started as a vulnerability scanner and grew into SBOM generation. It scans container images, filesystems, and code repositories, producing CycloneDX and SPDX output. Its philosophy is operational: give you SBOMs as a side effect of the vulnerability scanning you are already doing, optimized for CI pipeline integration.
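For reference, the basic invocations look roughly like this. The flags match recent releases of each tool but are worth checking against each CLI's help output, and the image tag is just an example. The sketch only runs a tool if it is actually installed:

```python
import shutil
import subprocess

IMAGE = "alpine:3.19"  # example image; any local or registry image works

# One representative SBOM-generation command per tool.
commands = {
    "syft": ["syft", IMAGE, "-o", "cyclonedx-json"],
    "trivy": ["trivy", "image", "--format", "cyclonedx", IMAGE],
    "tern": ["tern", "report", "-f", "spdxjson", "-i", IMAGE],
}

for tool, cmd in commands.items():
    if shutil.which(tool) is None:
        print(f"{tool}: not installed, skipping")
        continue
    # Each tool writes the SBOM to stdout here; redirect to a file in real pipelines.
    subprocess.run(cmd, check=True)
```

In practice you would pin tool versions and write each SBOM to an artifact store rather than stdout, so outputs are reproducible per build.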

The tools overlap significantly but their optimization targets produce different outputs for the same input. Running all three on the same container image in 2026 still yields materially different SBOMs, with different component counts and different identifier quality.
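The divergence is easy to see by diffing component sets from two outputs. A minimal sketch, using inline stand-in CycloneDX documents where real pipelines would load each tool's output file:

```python
def purls(cyclonedx_doc: dict) -> set[str]:
    """Collect package URLs (purls) from a CycloneDX JSON document."""
    return {c["purl"] for c in cyclonedx_doc.get("components", []) if "purl" in c}

# Stand-in documents; in practice, json.load() each tool's output.
syft_doc = {"components": [
    {"purl": "pkg:apk/alpine/musl@1.2.4"},
    {"purl": "pkg:apk/alpine/busybox@1.36.1"},
]}
trivy_doc = {"components": [
    {"purl": "pkg:apk/alpine/musl@1.2.4"},
]}

# Components one tool reported and the other missed.
only_syft = purls(syft_doc) - purls(trivy_doc)
print(sorted(only_syft))  # ['pkg:apk/alpine/busybox@1.36.1']
```

Running this diff on real outputs is a fast way to quantify how far apart two tools are on your own images before committing to either.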

How do they compare on accuracy?

For npm and Python projects with manifests present, all three produce comparable results. The manifest-driven path is the easiest SBOM generation case. The component graph is well-defined by the lockfile, and each tool walks it competently.

For compiled binaries without source manifests, Syft leads for Go binaries, which expose module information when compiled with default flags. For statically linked C and C++ binaries, all three struggle, as they must, because the provenance information is often not embedded.

For container base layers, Tern has historically had the most thorough coverage of OS package metadata, especially for Debian, Ubuntu, and Red Hat derivatives. Syft has closed much of this gap in the past year. Trivy sits slightly behind on container OS package depth but leads on speed.

For Java, all three handle Maven and Gradle well when pom.xml or build.gradle is present. Analysis of compiled JAR files without manifest data is better in Syft than in Tern or Trivy, but none of them are perfect. Shaded JARs remain a known weakness across all tools.

Accuracy testing against a controlled ground truth is worth doing for your own stack. Do not assume that because a tool works well for one of your repositories it works equally well for another. Ecosystem coverage and edge cases vary.
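A ground-truth comparison reduces to precision and recall over component identifiers. A minimal sketch, with hypothetical npm purls standing in for a curated reference list:

```python
def accuracy(found: set[str], truth: set[str]) -> tuple[float, float]:
    """Precision and recall of a tool's component list against a known ground truth."""
    true_positives = len(found & truth)
    precision = true_positives / len(found) if found else 0.0
    recall = true_positives / len(truth) if truth else 0.0
    return precision, recall

# Curated ground truth for a test repository vs. what a scanner reported.
truth = {"pkg:npm/lodash@4.17.21", "pkg:npm/express@4.18.2", "pkg:npm/qs@6.11.0"}
found = {"pkg:npm/lodash@4.17.21", "pkg:npm/express@4.18.2", "pkg:npm/left-pad@1.3.0"}

p, r = accuracy(found, truth)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```

Low recall means missed components; low precision means phantom ones. Both failure modes show up in real scans, and they matter differently depending on whether you are doing compliance or vulnerability triage.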

How do they compare on performance?

Trivy is the fastest in most scenarios I have measured. A typical 500MB container image scans in seconds on modern hardware. For CI pipelines where SBOM generation runs on every build, Trivy's speed is a meaningful advantage.

Syft is moderately fast. A comparable container image typically takes 1.5 to 3x longer than Trivy, with the multiplier depending on which catalogers are enabled. For most CI contexts this is still fast enough. For bulk analysis of many artifacts, the difference starts to matter.

Tern is the slowest of the three, often by a significant margin. Its layer-by-layer analysis is thorough but expensive. For ad-hoc container analysis where license detail matters more than speed, the tradeoff is reasonable. For high-throughput pipelines, Tern is harder to justify.

Memory usage follows a similar pattern: Trivy is lean, Syft is moderate, and Tern can be demanding on large images. If you run SBOM generation on lightweight CI runners, these differences can force choices.

How do they compare on format support?

All three support CycloneDX and SPDX in their current revisions. The fidelity of the output varies.

Syft produces the richest CycloneDX output, including component metadata that some consumers rely on. Its SPDX output is also comprehensive. Syft also offers its own JSON format, which is less portable but preserves the most detail for Syft-aware tooling.

Trivy's CycloneDX output is clean and consumable but slightly less detailed than Syft's. Its SPDX output is solid for common cases. Trivy does not attempt to produce its own intermediate format, which keeps things simple.

Tern's SPDX output is strong, especially for license data. Its CycloneDX support has improved in the last year and is now usable in most contexts. Tern does not aim to be a universal translator, which is consistent with its container-focused philosophy.

For teams standardizing on a specific format, test output against your downstream tooling. A format that parses in one consumer may cause issues in another because of vendor-specific extensions or schema interpretations.
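A cheap pre-flight check catches the most common rejections before the SBOM reaches a downstream consumer. A minimal sketch for CycloneDX JSON; the required-field list here is a conservative assumption, not the full schema:

```python
REQUIRED_TOP_LEVEL = ("bomFormat", "specVersion", "components")

def check_cyclonedx(doc: dict) -> list[str]:
    """Return problems a strict downstream consumer might reject the document for."""
    problems = [f"missing field: {key}" for key in REQUIRED_TOP_LEVEL if key not in doc]
    for i, comp in enumerate(doc.get("components", [])):
        # Many consumers key on the purl; a component without one is hard to correlate.
        if "purl" not in comp:
            problems.append(f"component {i} ({comp.get('name', '?')}) has no purl")
    return problems

doc = {"bomFormat": "CycloneDX", "specVersion": "1.5",
       "components": [{"name": "musl"}]}
print(check_cyclonedx(doc))  # ['component 0 (musl) has no purl']
```

For production use, validate against the official CycloneDX or SPDX JSON schema instead; this kind of spot check is only a first filter.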

Which should you pick for your pipeline?

Pick Trivy if speed is your primary constraint, if you are already using Trivy for vulnerability scanning, or if you want the tightest CI integration. Trivy is the right default for most teams who want SBOMs as a pipeline output without additional operational overhead.

Pick Syft if accuracy and ecosystem breadth matter more than raw speed, if you need the richer CycloneDX output for downstream consumers, or if you are already in the Anchore ecosystem with Grype and Enterprise. Syft is the right default for teams where SBOM quality is the focus.

Pick Tern if container license analysis is a primary use case, especially for enterprise legal review workflows. Tern's license detail is genuinely ahead of the others, and for compliance-heavy contexts that can justify the performance tradeoff.

For many teams, the right answer is two tools, not one. Trivy for high-frequency pipeline use, paired with Syft or Tern for slower but more thorough analysis of production artifacts. Both outputs can be normalized and ingested into your SBOM graph, so the choice is not exclusionary.
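Normalization is mostly a matter of extracting purls from whichever format each tool emitted and taking the union. A minimal sketch with inline stand-in documents, assuming CycloneDX from one tool and SPDX JSON from the other:

```python
def purls_from_cyclonedx(doc: dict) -> set[str]:
    """purls from a CycloneDX JSON document (e.g. Trivy output)."""
    return {c["purl"] for c in doc.get("components", []) if "purl" in c}

def purls_from_spdx(doc: dict) -> set[str]:
    """purls from an SPDX JSON document, read out of each package's externalRefs."""
    found = set()
    for pkg in doc.get("packages", []):
        for ref in pkg.get("externalRefs", []):
            if ref.get("referenceType") == "purl":
                found.add(ref["referenceLocator"])
    return found

trivy_doc = {"components": [{"purl": "pkg:apk/alpine/musl@1.2.4"}]}
syft_doc = {"packages": [{"name": "busybox", "externalRefs": [
    {"referenceCategory": "PACKAGE-MANAGER",
     "referenceType": "purl",
     "referenceLocator": "pkg:apk/alpine/busybox@1.36.1"}]}]}

merged = purls_from_cyclonedx(trivy_doc) | purls_from_spdx(syft_doc)
print(sorted(merged))
```

The union deliberately keeps everything either tool found; if you instead want only high-confidence components, the intersection is the stricter choice.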

Where do all three tools still fall short?

Transitive dependency resolution for compiled artifacts is a weak spot. When source manifests are not present, all three tools are making inferences, and those inferences are not always correct. Production-critical analysis should supplement with build-time SBOM generation whenever possible.

First-party component attribution is another gap. Tools generally classify your own code as opaque blobs rather than as first-party components with identifiers, unless you explicitly configure them otherwise. For SBOM completeness, first-party attribution should be part of the pipeline.
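One workable pattern is a post-processing step that injects a first-party component into the generated SBOM. A minimal sketch; the helper name, the `generic` purl type, and the service name are all illustrative choices, not anything the tools emit themselves:

```python
def add_first_party(doc: dict, name: str, version: str) -> dict:
    """Append a first-party application component to a CycloneDX JSON document."""
    component = {
        "type": "application",
        "name": name,
        "version": version,
        # pkg:generic is a placeholder identifier scheme for in-house code.
        "purl": f"pkg:generic/{name}@{version}",
    }
    doc.setdefault("components", []).append(component)
    return doc

sbom = add_first_party({"bomFormat": "CycloneDX", "specVersion": "1.5"},
                       "billing-service", "2.4.0")
print(sbom["components"][0]["purl"])  # pkg:generic/billing-service@2.4.0
```

Wiring the version from your build metadata (a git tag or commit SHA) keeps the attribution accurate without manual upkeep.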

Shaded, bundled, and relocated dependencies in Java and JavaScript remain difficult. A dependency that has been rewritten into a bundle can be invisible to naive scanning. Build-time SBOM generation is the only reliable answer for these patterns.

None of the three tools do reachability analysis. They produce component inventories, not exploitability assessments. That is the gap that higher-level tools fill.

How Safeguard.sh Helps

Safeguard.sh ingests SBOMs from Syft, Tern, Trivy, and every other major generator, normalizes them into a common graph resolved to 100 levels of transitive dependency depth, and applies reachability analysis that cuts 60 to 80 percent of the CVE noise these tools would otherwise produce. Griffin AI reconciles format differences, fills first-party attribution gaps, and generates TPRM and compliance evidence from the normalized data. Container self-healing closes the loop by regenerating affected images against fresh base layers when correlated vulnerabilities land, so the SBOMs your generators produce become the foundation of a live defense rather than a static artifact.
