The "80% backlog reduction" claim from reachability analysis vendors gets understandable scepticism. It sounds like marketing. The number is suspiciously round. The methodology behind it is sometimes opaque. But after running reachability across hundreds of customer codebases at Safeguard, the surprising finding is that the number is not marketing: it is a structural property of how transitive dependency graphs expose risk to a specific application. The reasons it holds are worth being precise about, both because they explain why the reduction is real and because they explain when it is smaller and what to look for.
Where the 80% comes from
Three structural properties of modern dependency graphs:
- Most transitive code is never executed. A typical Node.js or Python application uses 30-50% of the functions in its direct dependencies and a far smaller share of the functions in its transitive dependencies. By the time you are seven hops deep, the called-function ratio is in the single digits.
- CVE distribution mirrors code distribution. Vulnerabilities are spread across the dependency tree roughly in proportion to code volume. The deep transitive code that almost no one calls hosts most of the CVEs simply because it is most of the code; the called code hosts fewer because it is small.
- The intersection — reachable AND vulnerable — is structurally narrow. Intersect the set of functions your application actually reaches with the set of functions carrying CVEs and you get a small subset. That subset is your real backlog.
The 80% is what falls out of this intersection on average. Some applications produce 70%; some produce 90%; the structural pattern is the same.
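The intersection itself is a one-line set operation. A toy sketch, assuming you already have a set of reached functions from call-graph analysis and a CVE-to-function mapping from advisory feeds (all identifiers below are illustrative, not real data):

```python
# Toy model of reachability filtering: intersect reached functions
# with vulnerable functions. All data here is made up for illustration.

reached = {"lodash.merge", "axios.get", "qs.parse"}  # from call-graph analysis
vulnerable = {  # functions implicated by CVE advisories
    "lodash.merge", "minimist.parse", "tar.extract",
    "qs.parse", "node-forge.pki",
}

actionable = reached & vulnerable          # the real backlog
reduction = 1 - len(actionable) / len(vulnerable)

print(sorted(actionable))                  # ['lodash.merge', 'qs.parse']
print(f"backlog reduction: {reduction:.0%}")  # backlog reduction: 60%
```

The hard part in practice is building `reached` accurately; the filtering step itself is trivial once the call graph exists.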
Why the percentage varies by ecosystem
Three drivers:
- Dependency-graph depth. Deeper graphs (npm, Python) typically produce higher reductions. Shallower graphs (Go, Rust) produce more modest reductions because there is less unreachable transitive code to filter out.
- Direct-vs-transitive ratio. Applications with fewer direct dependencies relative to total dependencies have more unreachable code, so more reduction.
- Framework dispatch patterns. Heavy reflection (Java EE, Spring) reduces the precision of static reachability, which can either increase or decrease the apparent reduction depending on how the analysis handles unknowns.
Programs that adopt reachability without understanding their own ecosystem distribution get surprised when the number isn't exactly 80% on their codebase. The number is real; the variance is real too.
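The depth effect can be made concrete with a small model. This is an intuition-building sketch, not Safeguard's analysis: it assumes the called fraction of each dependency layer decays geometrically with hop depth, and the layer sizes and decay ratio are invented numbers:

```python
# Illustrative model: if each extra hop calls only a fraction of the
# next layer's functions, deeper graphs accumulate more never-called
# code. call_ratio and layer sizes are made-up numbers for intuition.

def unreachable_fraction(layer_sizes, call_ratio=0.4):
    """Fraction of total dependency code never reached, assuming the
    called fraction decays geometrically with depth (depth starts at 1)."""
    total = sum(layer_sizes)
    reached = sum(size * call_ratio ** depth
                  for depth, size in enumerate(layer_sizes, start=1))
    return 1 - reached / total

shallow = [1000, 500]                     # Go/Rust-style: few hops
deep = [1000, 2000, 4000, 8000, 8000]     # npm-style: many transitive layers

print(f"shallow graph: {unreachable_fraction(shallow):.0%} unreachable")  # 68%
print(f"deep graph:    {unreachable_fraction(deep):.0%} unreachable")     # 95%
```

Same decay rate, very different unreachable fractions: depth, not vulnerability density, is doing the work, which is why npm and PyPI applications tend to sit at the high end of the reduction range.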
What "the remaining 20%" means
Of the findings that survive reachability filtering, three categories:
- Genuinely critical work. Reachable, exploitable, fix path exists.
- Reachable but already mitigated by surrounding sanitization. Should be deprioritised but not closed without review.
- Reachable but unfixable in current architecture. Compensating controls or accepted risk.
A mature program splits these into separate workstreams. The first is the immediate sprint backlog. The second is documentation. The third is roadmap input.
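The three-way split above is simple enough to encode directly in triage tooling. A minimal sketch, with hypothetical field names (`mitigated`, `fix_available`) standing in for whatever your scanner actually emits:

```python
# Hypothetical triage of reachability-filtered findings into the three
# workstreams described above. Field names are illustrative.

def workstream(finding):
    if finding["mitigated"]:
        return "document"   # reachable but sanitised upstream: record the review
    if not finding["fix_available"]:
        return "roadmap"    # reachable but unfixable now: compensating controls
    return "sprint"         # reachable, exploitable, fixable: this week's work

findings = [
    {"id": "CVE-2024-0001", "mitigated": False, "fix_available": True},
    {"id": "CVE-2024-0002", "mitigated": True,  "fix_available": True},
    {"id": "CVE-2024-0003", "mitigated": False, "fix_available": False},
]

for f in findings:
    print(f["id"], "->", workstream(f))
```

The ordering of the checks matters: a finding that is both mitigated and unfixable belongs in the documentation stream first, since the mitigation review is what justifies deferring it.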
What a CISO does with the new number
The shift from "we have 2,400 open vulnerabilities" to "we have 380 reachable, of which 25 need action this week" changes the executive conversation in three ways:
- Budget defensibility. The cost-per-actionable-finding becomes calculable.
- SLA realism. A 30-day SLA on 25 weekly criticals is achievable; a 30-day SLA on 2,400 quarterly findings is not.
- Audit narrative. "Every CVE has a documented status" replaces "we are working through the backlog."
The third is the underrated effect. Auditors stop treating you as a program in catch-up mode and start treating you as one with discipline.
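The budget-defensibility point is back-of-the-envelope arithmetic. A sketch using the numbers from the example above plus an invented per-finding triage cost (the dollar figure is a placeholder, not a benchmark):

```python
# Back-of-the-envelope program metrics after reachability filtering.
# triage_cost_per_finding is an invented placeholder, not a benchmark.

raw_findings = 2400
reachable = 380
triage_cost_per_finding = 50   # $ of engineer time per finding, illustrative

reduction = 1 - reachable / raw_findings
cost_raw = raw_findings * triage_cost_per_finding
cost_filtered = reachable * triage_cost_per_finding

print(f"reduction: {reduction:.0%}")  # reduction: 84%
print(f"triage cost: ${cost_raw:,} raw vs ${cost_filtered:,} filtered")
```

Once cost-per-actionable-finding is a number rather than a shrug, the SLA conversation changes too: the denominator in "findings per 30-day window" is now small enough to commit to.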
The customer examples
Three representative deployments across industry verticals:
- A 200-engineer fintech: raw SCA quarterly volume ~1,800; reachability-filtered ~280; reduction ~84%.
- A 1,200-engineer SaaS: raw quarterly volume ~12,400; reachability-filtered ~2,100; reduction ~83%.
- A 50-engineer healthtech: raw quarterly volume ~640; reachability-filtered ~190; reduction ~70%.
The smaller team has less transitive depth, so their reduction is smaller. The structural pattern still holds.
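The reductions quoted above follow directly from the raw and filtered volumes; recomputing them is a useful sanity check on any vendor's claimed percentage:

```python
# Recomputing the reductions from the three deployments above.

deployments = {
    "fintech (200 eng)":    (1800, 280),
    "saas (1200 eng)":      (12400, 2100),
    "healthtech (50 eng)":  (640, 190),
}

for name, (raw, filtered) in deployments.items():
    reduction = 1 - filtered / raw
    print(f"{name}: {reduction:.0%} reduction")
```

Running this reproduces the ~84%, ~83%, and ~70% figures, with the smallest, shallowest codebase at the low end, as the structural argument predicts.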
How Safeguard helps
Safeguard's reachability is the default lens on every finding the platform produces. The percentage reduction shows up immediately on existing customer codebases without configuration. Griffin AI's remediation PRs are scoped to the reachable subset, so engineering effort flows where it actually reduces risk. For programs trying to translate "vulnerability count" into "risk reduction," reachability is the architectural lever that makes the translation honest.