Open a typical SCA dashboard and the number on the screen is something between 800 and 4,000. The breakdown is the same shape across organisations: a wide pyramid of findings, narrow at the critical/high tip, broad at the medium/low base. Engineering teams will look at this and ask the obvious question: which of these can actually hurt us? The honest answer for most security programs is "we don't know — let's start at the top and work down." That answer ends with backlog ageing, alert fatigue, and a gap between security work and engineering work that grows every quarter. Reachability analysis is the technique that closes the gap, and the difference it makes to a real program is measurable in hours per week and findings per quarter.
What "false positive overload" actually means
Three failure modes that look the same on a dashboard:
- Vulnerability present, not exploitable. The CVE describes a function that your code does not call. The package is in your tree but the affected code path is dead.
- Vulnerability mitigated by surrounding code. The dangerous function is called, but a sanitizer in your call path neutralises the attack.
- Vulnerability exploitable only under conditions you don't have. The CVE applies in a configuration your deployment doesn't use.
All three look like findings on a CVE report. None of the three is exploitable in your specific code. Treating them all as work to do is how triage hours get burned.
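The first failure mode is the easiest to see in code. A minimal sketch, with a hypothetical package and hypothetical function names:

```python
import json

# Hypothetical dependency "parserlib": it ships the function named in a CVE
# alongside the function the application actually uses.
def parse_yaml_unsafe(text):
    # The CVE-affected function: deserialises arbitrary objects (dangerous).
    raise NotImplementedError("vulnerable code path")

def parse_json(text):
    # The safe function the application actually calls.
    return json.loads(text)

# Application code: parse_yaml_unsafe is never on any call path,
# so the CVE is present in the dependency tree but not exploitable here.
def handle_request(body):
    return parse_json(body)

assert handle_request('{"ok": true}') == {"ok": True}
```

A scanner that only matches package versions flags this app; a scanner that checks call paths does not.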
How reachability separates real from noise
Three deterministic steps:
Build the call graph. The engine parses your source plus dependency source and constructs a directed graph of who calls whom, including cross-package edges; reflection and dynamic dispatch are handled with explicit, conservative fallbacks rather than silently dropped.
Mark the affected functions. Each known CVE maps to a specific set of functions in a specific version. The mapping comes from advisory databases plus internal curation.
Compute reachability. Walk the graph from every untrusted-input source. Findings reachable from any source are surfaced as actionable. Findings unreachable, or reachable only through a sanitizer, are deprioritised.
The output: a queue of findings that are actually exploitable in your code, with the taint path attached.
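The three steps above can be sketched as a breadth-first walk over the call graph. Everything here — the graph shape, the function names, the data structures — is an illustrative toy, not Safeguard's implementation:

```python
from collections import deque

# Illustrative call graph (caller -> callees); all names are hypothetical.
CALL_GRAPH = {
    "http.handler":      {"parse_input", "render"},
    "parse_input":       {"lib.deserialize"},       # CVE-affected, directly reachable
    "render":            {"html.escape"},
    "html.escape":       {"lib.template_eval"},     # CVE-affected, behind a sanitizer
    "lib.deserialize":   set(),
    "lib.template_eval": set(),
}
SOURCES    = {"http.handler"}                # entry points taking untrusted input
SANITIZERS = {"html.escape"}                 # taint does not propagate past these
AFFECTED   = {"lib.deserialize", "lib.template_eval"}  # functions named in CVEs

def reachable_affected(graph, sources, sanitizers, affected):
    """Return the CVE-affected functions reachable from an untrusted-input
    source without passing through a sanitizer."""
    seen, queue = set(sources), deque(sources)
    while queue:
        fn = queue.popleft()
        if fn in sanitizers:
            continue                         # path neutralised: stop the walk here
        for callee in graph.get(fn, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return affected & seen

# lib.deserialize is surfaced as actionable; lib.template_eval, reachable
# only through html.escape, is deprioritised.
assert reachable_affected(CALL_GRAPH, SOURCES, SANITIZERS, AFFECTED) == {"lib.deserialize"}
```

Real engines refine this with field-sensitive taint tracking and version-resolved symbol tables, but the shape of the computation is this walk.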
What "80% reduction" looks like in real numbers
Customer numbers across the Safeguard deployment fleet, normalised to a 300-engineer org with a typical microservices stack:
- Raw SCA findings per quarter: ~2,400
- After reachability filtering: ~380 reachable
- Of those, fixable in the current sprint: ~120
- Of those, escalated to "critical action this week": ~25
The team works the 25 first, then the 120 within the sprint; the 380 stay tracked. The remaining ~2,000 are flagged "present but not reachable" with audit-trail evidence: auditors love the documentation, and engineering doesn't burn hours on hypothetical vulnerabilities.
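The headline number falls straight out of that funnel (figures are the normalised ones above):

```python
raw, reachable, in_sprint, critical = 2400, 380, 120, 25

reduction = 1 - reachable / raw          # fraction of findings filtered out
not_reachable = raw - reachable          # flagged "present but not reachable"

assert round(reduction, 2) == 0.84       # the "80%+ reduction" headline
assert not_reachable == 2020             # the "remaining ~2,000"
```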
The narrative that survives audit
Security programs that adopt reachability also gain a defensible story for SOC 2, ISO 27001, and EU CRA reviews: every CVE in the dependency graph has a documented status — reachable, not reachable, or under review. Auditors increasingly ask for this distinction explicitly. Programs that can produce it pass faster.
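What that documented status can look like per finding — the field names below are an illustrative sketch, not a specific auditor's requirement or Safeguard's schema:

```python
# Hypothetical audit-evidence record for a single CVE in the dependency graph.
finding = {
    "cve": "CVE-2024-0001",
    "package": "parserlib@2.3.1",                  # hypothetical package/version
    "status": "not_reachable",                     # reachable | not_reachable | under_review
    "evidence": "no call path from any untrusted-input source to the affected function",
    "assessed_at": "2025-01-15",
}

VALID_STATUSES = {"reachable", "not_reachable", "under_review"}
assert finding["status"] in VALID_STATUSES
```

The point is that every CVE gets exactly one of the three statuses, with evidence attached, so the review question "why wasn't this fixed?" has a recorded answer.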
What the rollout looks like
Three-phase pattern that works in customer deployments:
Phase 1, weeks 1-2. Deploy reachability scoring alongside the existing SCA tool. Do not change triage workflow. Capture both numbers.
Phase 2, weeks 3-6. Use reachability as the primary triage signal for new findings. Validate the deprioritised set with sample audits.
Phase 3, weeks 7+. Migrate executive reporting to the reachability-based numbers. Backlog ageing improves; "fix in this sprint" lists get cleaner.
Customers who run this rollout report compounding gains through the first year as the engineering team's trust in the queue rises.
How Safeguard helps
Safeguard's engine produces reachability-classified findings as default output. Each finding includes the taint path, the version-resolved sink function, and the sanitizer state along the path. Griffin AI generates remediation PRs prioritised by reachability + EPSS + KEV composite score. For organisations whose triage hours are dominated by unreachable noise, this is the architectural change that returns the time and produces the audit narrative.
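One way a reachability + EPSS + KEV composite can work, shown as an illustrative sketch — the weights and override rule below are assumptions, not Safeguard's actual formula:

```python
# Illustrative composite prioritisation (assumed weights, not a product formula):
# unreachable findings drop out, reachable ones rank by EPSS probability,
# and KEV membership (known exploited) overrides to the top of the queue.
def priority(reachable: bool, epss: float, in_kev: bool) -> float:
    if not reachable:
        return 0.0                  # unreachable: out of the actionable queue
    score = epss                    # EPSS: probability of exploitation in the wild
    if in_kev:
        score += 1.0                # KEV entries always outrank non-KEV entries
    return score

findings = [
    ("CVE-A", True,  0.02, True),   # low EPSS but known-exploited
    ("CVE-B", True,  0.70, False),
    ("CVE-C", False, 0.95, False),  # high EPSS but unreachable in this codebase
]
queue = sorted(findings, key=lambda f: priority(*f[1:]), reverse=True)
assert [f[0] for f in queue] == ["CVE-A", "CVE-B", "CVE-C"]
```

The design point the sketch captures: reachability acts as a gate, not a weight, so a high-EPSS CVE in dead code never outranks a reachable one.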