Run any container vulnerability scanner against a production image and you will get dozens, sometimes hundreds, of findings. The security team panics. The development team ignores the report. Neither response is correct, because a significant portion of those findings are false positives.
False positives in container scanning are not bugs in the scanner. They are a structural problem with how vulnerability databases map to container environments. Understanding why they happen is the first step toward a scanning program that produces actionable results instead of noise.
Why False Positives Happen
Version Matching Gaps
The most common source of false positives is imprecise version matching. A vulnerability is reported against upstream package version 1.2.3. Your container runs a distribution-patched version 1.2.3-2ubuntu0.1 that has already backported the security fix. The scanner sees version 1.2.3 and flags the CVE because it does not understand the distribution's patch suffix.
Good scanners use distribution-specific vulnerability databases (Debian Security Tracker, Ubuntu CVE Tracker, Alpine secdb) that account for backported patches. Basic scanners that only check against the NVD miss these patches and produce false positives.
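The difference between the two matching strategies can be sketched in a few lines. Everything here is illustrative: the CVE ID, package versions, and the two lookup tables are stand-ins for the NVD and a distro security tracker, and the plain string comparison is a placeholder for a real version-comparison algorithm such as dpkg's.

```python
# Illustrative data only: a CVE reported against upstream openssl 1.2.3,
# and a hypothetical distro tracker recording the backported fix.
NVD_AFFECTED = {"CVE-2024-0001": {"openssl": "1.2.3"}}
DISTRO_FIXED = {"CVE-2024-0001": {"openssl": "1.2.3-2ubuntu0.1"}}

def upstream_version(v: str) -> str:
    """Strip the distribution patch suffix (e.g. '-2ubuntu0.1')."""
    return v.split("-")[0]

def naive_match(cve: str, pkg: str, installed: str) -> bool:
    """NVD-only matching: compares the upstream version alone."""
    return NVD_AFFECTED.get(cve, {}).get(pkg) == upstream_version(installed)

def distro_aware_match(cve: str, pkg: str, installed: str) -> bool:
    """Also consults the distro database for backported fixes."""
    if not naive_match(cve, pkg, installed):
        return False
    fixed = DISTRO_FIXED.get(cve, {}).get(pkg)
    # String comparison is a sketch; real scanners use dpkg/rpm version rules.
    return fixed is None or installed < fixed

installed = "1.2.3-2ubuntu0.1"
print(naive_match("CVE-2024-0001", "openssl", installed))         # True: false positive
print(distro_aware_match("CVE-2024-0001", "openssl", installed))  # False: fix backported
```

The naive matcher flags the patched package; the distribution-aware matcher sees that the installed version already carries the backported fix.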
Unreachable Code Paths
A CVE may apply to a specific function in a library. If your application never calls that function, the vulnerability is technically present but not exploitable. Most scanners do not perform reachability analysis. They flag the vulnerable package regardless of whether the vulnerable code path is active.
This is especially common with large libraries. OpenSSL has hundreds of functions, and a CVE in a rarely used authentication mode affects every container that includes OpenSSL, even if the application only uses basic TLS.
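If symbol-level vulnerability data and an application call graph were available, reachability filtering would look roughly like this. Both inputs are assumptions for the sketch: most scanners publish neither per-function CVE metadata nor call-graph integration, and the CVE IDs and symbol names below are invented.

```python
# Hypothetical reachability filter: keep only findings whose vulnerable
# function appears in the set of symbols the application actually calls.
findings = [
    {"cve": "CVE-2024-1111", "package": "openssl", "symbol": "SSL_do_handshake"},
    {"cve": "CVE-2024-2222", "package": "openssl", "symbol": "SRP_Calc_client_key"},
]

# In practice this set would come from static analysis of the application.
called_symbols = {"SSL_do_handshake", "SSL_read", "SSL_write"}

reachable = [f for f in findings if f["symbol"] in called_symbols]
print([f["cve"] for f in reachable])  # only the handshake CVE remains
```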
Disputed or Rejected CVEs
Not every CVE is a real vulnerability. Some are disputed by the software maintainer. Others are informational entries that describe theoretical rather than practical risks. Scanners that pull from the NVD without filtering for dispute status or severity adjustments will include these contested entries in their findings.
Build-Time vs Runtime Packages
Multi-stage Docker builds should remove build-time dependencies from the final image. But scanners analyze image layers, and if a build dependency is installed in one layer and removed in a later one, its files persist in the earlier layer. A scanner that walks every layer may flag packages that no longer exist in the running container's filesystem.
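The cleanest fix is to keep build-time dependencies out of the final image's layers entirely rather than removing them afterward. A minimal multi-stage sketch, with illustrative image names and paths:

```dockerfile
# Build stage: compilers and build dependencies live only here.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /app ./cmd/server

# Final stage: only the binary is copied. None of the build stage's
# layers or packages appear in the image that gets scanned.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Because the final image contains no layer in which the build tools were ever installed, the scanner has nothing spurious to match against.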
Platform-Specific Vulnerabilities
A CVE might apply to a package when running on Windows but not Linux, or on x86 but not ARM. Scanners that do not consider the target platform will flag cross-platform CVEs regardless of applicability.
Measuring Your False Positive Rate
You cannot manage what you do not measure. Track your false positive rate by sampling scanner findings and manually verifying them.
Establish a Triage Process
For a representative sample of findings (start with 50-100), have a security engineer verify each one:
- Is the reported package actually present in the running container?
- Is the reported version actually vulnerable, or has the distribution backported a fix?
- Is the vulnerable code path reachable by the application?
- Is the CVE valid and applicable to this platform and configuration?
Track the percentage of findings that fail one or more of these checks. That is your false positive rate.
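The arithmetic is simple enough to automate once triage results are recorded. A sketch, with an invented three-finding sample where each record captures the four checks above:

```python
# Toy triage sample: each finding records the four verification checks.
sample = [
    {"cve": "CVE-2024-0001", "present": True,  "vulnerable_version": False,
     "reachable": False, "valid": True},   # backported fix -> false positive
    {"cve": "CVE-2024-0002", "present": True,  "vulnerable_version": True,
     "reachable": True,  "valid": True},   # real vulnerability
    {"cve": "CVE-2024-0003", "present": False, "vulnerable_version": True,
     "reachable": True,  "valid": True},   # build-only package -> false positive
]

def is_false_positive(f: dict) -> bool:
    """A finding that fails any one of the four checks is a false positive."""
    return not all((f["present"], f["vulnerable_version"],
                    f["reachable"], f["valid"]))

fp_rate = sum(map(is_false_positive, sample)) / len(sample)
print(f"false positive rate: {fp_rate:.0%}")  # 67% on this toy sample
```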
Benchmark Across Scanners
Run multiple scanners against the same image and compare results. The union of their findings includes real vulnerabilities any individual scanner might miss. The intersection of their findings tends to have a lower false positive rate. The differences highlight each scanner's biases and blind spots.
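Treating each scanner's output as a set of CVE IDs makes the comparison mechanical. The finding sets below are invented for illustration:

```python
# Illustrative finding sets from two scanners run against the same image.
trivy_findings = {"CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003"}
grype_findings = {"CVE-2024-0002", "CVE-2024-0003", "CVE-2024-0004"}

union = trivy_findings | grype_findings          # broadest coverage
intersection = trivy_findings & grype_findings   # lower false positive rate
disagreements = trivy_findings ^ grype_findings  # each scanner's blind spots

print(sorted(intersection))   # findings both scanners agree on
print(sorted(disagreements))  # candidates for manual triage
```

Findings in the intersection are strong candidates for immediate remediation; findings only one scanner reports are the ones worth manual triage first.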
Track Over Time
False positive rates change as scanner databases are updated, as you change base images, and as your application dependencies evolve. Measure your false positive rate quarterly and track trends.
Reducing False Positives
Use Distribution-Aware Scanners
Choose scanners that consume distribution-specific vulnerability databases rather than relying solely on the NVD. Trivy, Grype, and most commercial scanners support distribution-specific matching. Verify this capability by testing against an image with known backported patches.
Configure Scanner Settings
Most scanners allow you to tune their behavior. Ignore unfixed vulnerabilities (CVEs with no available patch). Filter by severity. Exclude specific CVEs that you have verified as false positives for your environment.
Maintain a Suppression List
When you verify a finding as a false positive, add it to a suppression list. This prevents the same false positive from consuming triage time in future scans. Document the rationale for each suppression and review the list periodically.
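A suppression list can be a version-controlled file consumed by CI. The YAML shape below is a generic illustration, not any specific scanner's format (Trivy, for example, reads CVE IDs from a `.trivyignore` file, and Grype reads ignore rules from `.grype.yaml`); the point is that every entry carries a rationale, an owner, and a review date:

```yaml
# Illustrative suppression list; adapt to your scanner's ignore format.
suppressions:
  - cve: CVE-2024-0001
    package: openssl
    reason: "Fix backported in 1.2.3-2ubuntu0.1 per distro security tracker"
    verified_by: jane.doe
    review_by: 2025-06-01   # re-verify when this date passes
```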
Minimize Image Contents
The most effective way to reduce false positives is to reduce the number of packages in your images. Fewer packages mean fewer potential matches, which means fewer findings to triage. Distroless and Alpine images produce dramatically fewer findings than full Ubuntu or Debian images.
Add Context With SBOMs
An SBOM provides the ground truth about what is in your image. When a scanner reports a finding, cross-reference it against the SBOM to verify the package is actually present and to understand the dependency chain that brought it in.
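Cross-referencing findings against an SBOM can be automated. In this sketch the SBOM is simplified to a name-to-version mapping standing in for the component inventory of a CycloneDX or SPDX document, and the findings are invented:

```python
# Simplified SBOM inventory: package name -> installed version.
sbom_packages = {"openssl": "3.0.13", "zlib": "1.3.1"}

findings = [
    {"cve": "CVE-2024-0001", "package": "openssl", "version": "3.0.13"},
    {"cve": "CVE-2024-0002", "package": "gcc", "version": "12.2.0"},  # build-only
]

for f in findings:
    in_sbom = sbom_packages.get(f["package"]) == f["version"]
    status = "confirmed present" if in_sbom else "not in SBOM: likely false positive"
    print(f"{f['cve']} ({f['package']}): {status}")
```

A finding for a package the SBOM does not list (here, a compiler left over from a build stage) is an immediate candidate for the false positive pile.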
The Cost of False Positives
False positives are not just annoying. They have real costs:
- Alert fatigue causes teams to ignore scanner output entirely. When 40% of findings are false positives, engineers learn to distrust the scanner and stop investigating findings.
- Wasted engineering time on investigating and documenting false positives is time not spent fixing real vulnerabilities.
- Delayed remediation happens when the real vulnerabilities are buried in noise. A critical finding on line 47 of a 200-line report is easy to miss.
- Compliance complications arise when auditors see hundreds of open findings and do not accept "false positive" as an explanation without extensive documentation.
How Safeguard.sh Helps
Safeguard.sh uses distribution-aware vulnerability matching, which significantly reduces false positive rates compared to NVD-only scanners. It generates accurate SBOMs that serve as ground truth for validating findings, provides package provenance and dependency-chain context for triaging ambiguous findings, and supports suppression workflows that keep verified false positives from consuming triage cycles. The result is scanner output that teams actually trust and act on.