When alerts stop meaning anything
There is a moment in the life of every security program when the alerts stop landing. An engineer sees the daily digest of new CVEs, glances at the count, and closes the tab. Not because they do not care, but because caring at that volume is no longer possible. That moment is CVE fatigue, and once a team crosses it, the security signal is effectively gone. The alerts still fire. Nobody acts on them. The dashboard stays red, a permanent fixture.
CVE fatigue is not an attitude problem. It is a rational response to an irrational input rate. The National Vulnerability Database published more than 30,000 CVEs in 2024, and the 2026 trajectory suggests that number will keep climbing. If your team is consuming raw CVE feeds, you are asking humans to triage at the rate of a database. Humans do not work at that rate, and pretending they should is how organisations end up with engineers who quietly disengage from security entirely.
The damage is not only psychological. Fatigued engineers miss the small number of findings that genuinely matter, because those findings arrive through the same channel as the thousands that do not. The signal-to-noise ratio collapses, and with it, the value of the security program.
The fatigue cycle, decomposed
CVE fatigue follows a predictable progression. It starts with enthusiasm. A new scanner gets deployed, findings flow in, and engineers triage them carefully. Within a few weeks, the queue grows faster than the team can drain it. Triage time per finding drops, because the alternative is falling behind. Quality of triage drops with it. Engineers start applying heuristics, often unconsciously: dismiss anything in a dev dependency, ignore anything older than 90 days, only act on critical-severity items. These heuristics are not wrong, but they are also not security policy. They are coping.
The next phase is selective attention. Engineers learn which scanners and channels produce useful findings and which do not, and they tune out the rest. If the SCA tool has cried wolf five times, the sixth alert goes unread, even if it is the one that matters. The security team notices the disengagement and tries to push harder, which produces more alerts, which deepens the fatigue.
The final phase is normalisation of the red dashboard. Everyone agrees the count is too high. Nobody believes it can come down. The dashboard becomes wallpaper. New hires are told, on day one, not to worry about it. That is the failure mode we are trying to prevent, because once a team reaches it, climbing back out takes a year of cultural work, not just tooling changes.
What reachability does to alert volume
The most effective single intervention against CVE fatigue is reachability analysis applied at the source of the alert. The principle is to filter findings before they hit the engineer's inbox, not after. If a CVE affects a function that is never called from your application, it should not generate a P1 page. It should generate a low-priority queue entry that gets resolved during the next scheduled dependency refresh.
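To make this concrete, here is a minimal sketch of that routing decision in Python. The finding fields and queue labels are illustrative assumptions for the example, not the output of any particular scanner.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    package: str
    severity: str     # e.g. "critical", "high", "medium", "low"
    reachable: bool   # result of reachability analysis: is the vulnerable
                      # function on any call path from application code?

def route(finding: Finding) -> str:
    """Decide where a finding goes before it ever reaches an engineer."""
    if not finding.reachable:
        # Vulnerable code is never called: no page, no hot-queue ticket.
        # Resolve it during the next scheduled dependency refresh.
        return "backlog:dependency-refresh"
    if finding.severity in ("critical", "high"):
        return "queue:urgent-triage"
    return "queue:standard-triage"

# A critical CVE in an unreachable function stays out of the urgent queue.
print(route(Finding("CVE-2026-0001", "examplelib", "critical", reachable=False)))
# -> backlog:dependency-refresh
```

The point is where the check happens: the routing runs at the source of the alert, so the low-priority path never touches an engineer's inbox at all.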
We have seen reachability filtering reduce alert volume by 70 to 85 percent in typical enterprise codebases. The variability depends on the language and framework. JavaScript and Python tend to land at the higher end of that range, because the dependency trees are wide and the proportion of unreachable code is correspondingly large. Go and Rust land slightly lower, because their import models tend to pull in less unused surface area, but the reduction is still substantial.
The cultural effect of this reduction matters as much as the numerical effect. When alerts arrive less frequently and each one is more likely to be real, engineers start reading them again. That is the goal. Restoring trust in the channel is the only way to restore the security signal.
Automation as fatigue relief
The findings that survive reachability filtering are not all worth a human's attention either. A reachable CVE in a dependency with a clean, low-risk fix version is not a triage decision. It is a pull request waiting to be opened. Automating that PR removes another large slice of the human queue.
The pattern that works is straightforward. When a finding is confirmed reachable and a non-major fix version is available, generate the PR automatically. Run CI. Assign reviewers based on recent file ownership. Group related bumps to avoid PR storms. Skip the automation during release freezes. None of this is novel. The novelty is wiring it directly to the security finding, so the gap between identification and fix collapses from weeks to hours.
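A sketch of that gating logic follows, assuming a hypothetical finding shape with `current_version` and `fix_version` fields; the point is the order of the checks, not any specific tool's API. Reviewer assignment, bump grouping, and the PR itself are left to whatever tooling already exists.

```python
from dataclasses import dataclass

@dataclass
class ReachableFinding:
    cve_id: str
    package: str
    current_version: str
    fix_version: str | None   # lowest version that resolves the advisory, if any

def is_major_bump(current: str, fix: str) -> bool:
    """Treat a change in the leading version component as a breaking bump."""
    return current.split(".")[0] != fix.split(".")[0]

def should_auto_pr(finding: ReachableFinding, release_freeze: bool) -> bool:
    """Open a PR automatically only for reachable findings with a low-risk fix."""
    if release_freeze:
        return False   # respect freeze windows
    if finding.fix_version is None:
        return False   # nothing to bump to yet
    if is_major_bump(finding.current_version, finding.fix_version):
        return False   # major bumps need a human decision
    return True

f = ReachableFinding("CVE-2026-0002", "examplelib", "2.3.1", "2.3.4")
print(should_auto_pr(f, release_freeze=False))   # -> True
```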
The fatigue benefit comes from what engineers stop doing. They stop reading dependency-bump tickets. They stop reviewing the same advisory in three different tools. They stop being the integration layer between the scanner and the version control system. The work still happens. They just stop being the bottleneck.
Where Griffin AI lifts the remaining load
Even after reachability filtering and auto-PRs, a residual queue of genuinely ambiguous findings remains. These are the cases where exploitability depends on configuration, deployment context, or business logic that no scanner can infer. Historically, these are the findings that consume the most engineer time per item, because each one requires reading the advisory, the patch, the surrounding code, and sometimes the upstream commit history.
Griffin AI is built for this layer. It reads the finding, correlates it against the codebase, considers the deployment configuration, and surfaces a recommendation with the reasoning shown. The engineer is not asked to trust a black box. They are asked to review a structured argument and either approve, reject, or escalate. In practice, this drops average triage time per ambiguous finding from around 20 minutes to under three.
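To make the idea of a structured argument concrete, here is a hypothetical shape such a recommendation could take. None of these field names come from Griffin's actual interface; they only illustrate a recommendation that carries its reasoning and its unverified assumptions alongside the verdict.

```python
from dataclasses import dataclass, field

@dataclass
class TriageRecommendation:
    """Illustrative only: a recommendation that shows its work, not a bare verdict."""
    cve_id: str
    verdict: str                  # e.g. "fix-now", "defer", "not-exploitable"
    reasoning: list[str]          # the steps an engineer reviews before approving
    unverified_assumptions: list[str] = field(default_factory=list)

    def needs_human(self) -> bool:
        # Anything resting on assumptions the tool could not verify goes to a person.
        return bool(self.unverified_assumptions)

rec = TriageRecommendation(
    cve_id="CVE-2026-0003",
    verdict="not-exploitable",
    reasoning=[
        "Vulnerable function parse_header() is reachable only via the admin API.",
        "Admin API is bound to localhost in the shipped deployment config.",
    ],
    unverified_assumptions=["No customer overrides the admin API bind address."],
)
print(rec.needs_human())   # -> True: route to an engineer rather than auto-close
```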
What Griffin does not do is replace the engineer on the hard calls. When the recommendation depends on assumptions Griffin cannot verify, it surfaces those assumptions explicitly and routes the finding to a human. The goal is not to eliminate human judgement. It is to make sure human judgement is only spent on findings that actually require it.
A measurable target
If you want a single metric to track for CVE fatigue, use findings-per-engineer-per-week. Most fatigued teams sit somewhere between 50 and 200. The target you want to land at is closer to five. That is the volume where engineers can read every alert, investigate properly, and resolve with care. Anything higher pushes the team back toward heuristic dismissal.
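Computing the metric is trivial; a sketch, assuming you can export a week's worth of surviving findings and know the headcount that triages them:

```python
def findings_per_engineer_per_week(weekly_finding_count: int, triage_engineers: int) -> float:
    """The single fatigue metric: how many findings each engineer must absorb per week."""
    return weekly_finding_count / triage_engineers

# A fatigued team versus the target state described above.
print(findings_per_engineer_per_week(800, 4))   # -> 200.0
print(findings_per_engineer_per_week(20, 4))    # -> 5.0
```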
Getting from 200 to 5 sounds aggressive. It is not. Reachability filtering removes most of the volume. Auto-PRs handle most of what remains. Griffin AI triages the ambiguous tail. The human work that remains is real security engineering, the kind that engineers signed up for. That is the version of vulnerability management that survives 2026.
How fatigue shows up in incident postmortems
The clearest evidence that CVE fatigue has settled into a team is what happens during a security incident. The post-incident review almost always surfaces the alert that fired weeks earlier and was either dismissed, deferred, or closed without action. The engineers responsible for the dismissal are not negligent. They were doing what the volume forced them to do. The fatigue was the failure mode, not the individuals.
Fixing this requires removing the conditions that made the dismissal rational. Lowering alert volume to a level where every finding can receive proper attention is the only sustainable response. Training engineers to look harder at alerts they have learned to ignore is not a viable strategy, because the underlying volume has not changed. The instinct to dismiss is the correct response to overload, and the correct intervention is at the input rate, not the response.
Programs that have completed the transition to reachability-filtered, automation-supported, AI-assisted triage report a measurable shift in postmortem patterns. The alert that fired and was missed becomes a much rarer finding, because the alerts that fire are fewer and more credible. When something does slip through, the analysis points to a specific gap in the filtering or routing logic, which is fixable, rather than to a generic claim that the team should have noticed. That is the difference between an incident review that produces concrete improvements and one that produces blame.
Restoring trust takes calendar time
The technical changes that reduce fatigue can be deployed in weeks. The cultural recovery takes longer. Engineers who have spent two years tuning out security alerts will not start reading them again the day after the alert volume drops. They need to see the new pattern hold for a quarter or two before the trust returns. During that window, security leadership has to be patient. The metric will improve before the relationship does.
The fastest way to accelerate cultural recovery is to make the new alert pattern visible. Publish weekly summaries of the findings the team acted on alongside the volume that was filtered out before anyone had to look at it. That story, repeated consistently, is what convinces the team the pattern is durable.