Vulnerability Management

Accepting The Unfixable: A Decision Framework

Some vulnerabilities cannot be fixed in any reasonable timeframe. Here is a structured framework for accepting that risk responsibly, backed by reachability analysis and AI-gathered evidence.

Shadab Khan
Security Engineer
8 min read

The vulnerabilities that will not go away

Every mature vulnerability program eventually meets a category of finding that does not yield to remediation. The fix does not exist. The fix exists but breaks production. The library has been abandoned. The vulnerable component is buried in a vendor binary you cannot rebuild. The patch requires a major version bump that would take six months of regression testing across systems you do not own.

These findings are real. They are not going to be fixed in any reasonable timeframe. And yet they sit on the dashboard, ageing past every SLA, dragging down every metric, and producing the same uncomfortable conversation in every quarterly review. The team knows the finding is not going to move. Nobody is willing to formally close it, because closing it without remediation feels like ignoring a problem.

The mature answer is risk acceptance, treated as a first-class workflow rather than a quiet shrug. Risk acceptance is not the same as ignoring. It is a documented decision, supported by evidence, reviewed at a defined cadence, and revocable if the evidence changes. Done well, it removes a category of stale findings from the active queue while preserving the security team's authority to revisit the decision when conditions shift.

Why most teams avoid the conversation

The reluctance to formally accept risk usually comes from two places. The first is auditability. If the next incident touches the accepted finding, the team worries about being asked why they decided to leave it open. Without a documented decision, that conversation goes badly. With one, it goes much better, because the record shows reasoning that was sound at the time the decision was made.

The second source of reluctance is process burden. Risk acceptance done badly involves a long-form document, a committee review, a sign-off chain, and a calendar reminder that nobody honours. Teams avoid it because the process is heavier than the finding deserves. The fix is not to skip risk acceptance. The fix is to make the process proportionate to the decision.

A lightweight, evidence-backed risk acceptance workflow integrated into the same tooling that handles the rest of the vulnerability lifecycle is the path most teams should be on. The decision is recorded alongside the finding. The supporting evidence is captured automatically. The review cadence is enforced by the system. None of this is exotic in 2026.

What evidence a defensible decision needs

A risk acceptance decision is defensible if it answers four questions. What is the vulnerability, and which assets does it affect? Why is remediation impractical right now? What compensating controls or environmental factors reduce the risk? When will this decision be reviewed?

Each of those questions can be answered with evidence rather than narrative, and the evidence is largely automatable. Reachability analysis answers part of why. If the vulnerable code path is not reachable from the application surface, that fact is the strongest possible support for accepting the risk in the short term, because the practical exploitability is low even though the CVE record is open. Exploitability data answers another part. If the EPSS score is in the bottom decile and the CVE has not been observed in active exploitation, the probability of attack is correspondingly low.

Compensating controls are the hardest piece to capture, because they live across configuration, infrastructure, and policy. A WAF rule that blocks the relevant request pattern. A network segmentation boundary that prevents lateral movement. An authentication requirement on the affected endpoint. These need to be enumerated and linked to the decision, with the understanding that if any of them changes, the acceptance must be reviewed.

The framework, in five steps

The framework that holds up under audit is short. Step one, classify the finding. Confirm it is genuinely unfixable in the relevant timeframe, not merely inconvenient. Step two, gather the evidence: reachability, exploitability, exposure, compensating controls. Step three, propose a risk acceptance with a defined expiry. Step four, route to the appropriate approver. The approver is usually the asset owner plus the security lead, with severity-dependent escalation. Step five, schedule the review. Quarterly is reasonable for most cases. Higher-risk acceptances should be reviewed monthly.
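The five steps above can be sketched as a single function. The `finding` keys, severity labels, and cadence values are assumptions chosen for illustration; the structure, not the specifics, is the point:

```python
from datetime import date, timedelta


def propose_acceptance(finding: dict) -> dict:
    """Walk the five framework steps for one finding (illustrative sketch).

    `finding` is assumed to carry hypothetical keys: 'fixable', 'severity',
    'reachable', 'epss', and 'controls'.
    """
    # Step 1: classify -- only genuinely unfixable findings qualify.
    if finding.get("fixable", True):
        raise ValueError("finding is fixable; remediate instead of accepting")

    # Step 2: gather the evidence (reachability, exploitability, controls).
    evidence = {
        "reachable": finding["reachable"],
        "epss": finding["epss"],
        "controls": finding["controls"],
    }

    # Steps 3 and 5: propose with a defined expiry that encodes the review
    # cadence -- monthly for high severity, quarterly otherwise.
    days = 30 if finding["severity"] == "high" else 90
    expiry = date.today() + timedelta(days=days)

    # Step 4: route to approvers, with severity-dependent escalation.
    approvers = ["asset-owner", "security-lead"]
    if finding["severity"] == "high":
        approvers.append("ciso")

    return {"evidence": evidence, "expiry": expiry, "approvers": approvers}


proposal = propose_acceptance({
    "fixable": False, "severity": "high",
    "reachable": False, "epss": 0.01, "controls": ["waf-rule"],
})
print(proposal["approvers"])  # ['asset-owner', 'security-lead', 'ciso']
```

Raising an error on a fixable finding at step one is deliberate: it keeps the acceptance path closed to findings that merely look inconvenient.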

The key discipline is the expiry. Acceptances should not be permanent. They should expire and require renewal, even if the renewal is mechanical. The reason is simple: the conditions that made acceptance reasonable can change. A new exploit gets published. A WAF rule gets retired. A compensating control fails silently. Without a forced review, the acceptance becomes invisible, and the next person to encounter the finding has no context for why it was open.

Where Griffin AI fits the workflow

The mechanical parts of risk acceptance are well suited to AI assistance. Griffin AI can read the finding, gather the relevant evidence from the codebase and configuration, and produce a draft acceptance proposal with the reasoning shown. The security engineer reviews the draft, adjusts where necessary, and routes for approval. The work that previously took 30 to 60 minutes per acceptance drops to a few minutes of review.

The AI is not making the decision. It is preparing the case. That distinction matters for both audit posture and team trust. The human approver retains authority over every acceptance. What changes is the cost of preparing each one, which means the team can apply the workflow to the entire backlog of unfixable findings rather than only the ones with enough perceived importance to justify the manual effort.

Griffin also handles the review cadence. When an acceptance approaches expiry, the system regenerates the evidence, flags any changes since the last review, and surfaces the renewal for approval or revocation. If the EPSS score has climbed, the system shows it. If a new exploit has been published, the system shows that too. The reviewer is not relying on memory or hoping nothing has changed. They are reviewing a refreshed picture.
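The renewal check described above amounts to diffing two evidence snapshots and flagging material changes. A minimal sketch, with hypothetical snapshot keys (`epss`, `exploit_published`, `controls`) standing in for whatever a real system captures:

```python
def renewal_flags(previous: dict, current: dict) -> list[str]:
    """Compare evidence snapshots at renewal time and flag material changes.

    An empty result means the renewal can be mechanical; any flag means the
    acceptance should be re-reviewed rather than rubber-stamped.
    """
    flags = []
    if current["epss"] > previous["epss"]:
        flags.append(f"EPSS rose from {previous['epss']} to {current['epss']}")
    if current["exploit_published"] and not previous["exploit_published"]:
        flags.append("public exploit published since last review")
    retired = set(previous["controls"]) - set(current["controls"])
    if retired:
        flags.append(f"compensating controls retired: {sorted(retired)}")
    return flags


prev = {"epss": 0.02, "exploit_published": False, "controls": ["waf-rule-17"]}
curr = {"epss": 0.31, "exploit_published": True, "controls": []}
for flag in renewal_flags(prev, curr):
    print("-", flag)
```

Here all three conditions fire, so the renewal escalates to a full review instead of a mechanical sign-off.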

Auto-PRs as the alternative path

For some findings that initially look unfixable, the right answer is not acceptance but automated remediation that the team had not realised was available. A library that appears abandoned may have a community fork with a fix. A breaking version bump may be safer than expected once the test suite confirms compatibility. An auto-PR system that surfaces these alternatives alongside the acceptance proposal helps the team avoid accepting risk that did not actually need to be accepted.

In our deployments, between 10 and 20 percent of findings that started as acceptance candidates ended up resolved through an automated PR that the team would not otherwise have considered. That is real risk reduction at zero marginal effort, and it improves the credibility of the remaining acceptances by showing that the team chose acceptance only when remediation truly was impractical.

The cultural payoff

A team that operates a clean risk acceptance workflow looks different from one that does not. The active triage queue contains only findings that are being worked. The accepted-risk register contains only findings with documented reasoning and a review schedule. The dashboard reflects the actual security posture rather than the cumulative weight of every finding ever recorded.

Auditors prefer this. Boards prefer this. Most importantly, engineers prefer this, because the work they are asked to do is work that has a meaningful endpoint. Accepting the unfixable, with discipline and evidence, is how a vulnerability program becomes sustainable instead of perpetual.

Common objections and the answers that hold

Two objections recur whenever a team adopts formal risk acceptance for the first time. The first is that documenting acceptance creates legal exposure. The reasoning goes that a paper trail showing the team knew about a vulnerability and chose not to fix it is worse than no paper trail at all. The reasoning is wrong, and most general counsels will say so when asked. Documented, evidence-backed risk acceptance is not negligence. It is the standard of care for a mature security program. The opposite, which is leaving the finding open with no documented decision, is the posture that creates real legal exposure, because it suggests the team neither acted on the finding nor formally evaluated it.

The second objection is that risk acceptance becomes a way for engineering teams to avoid security work. If accepting risk is easier than fixing the issue, the path of least resistance becomes acceptance. This concern is real but addressable through governance. Acceptances should require approver authority that scales with the risk score. Low-risk findings can be accepted by the asset owner. Medium-risk findings should require the security lead. High-risk findings should require the CISO or equivalent. This structure ensures that accepting harder findings is genuinely harder, which keeps the path of least resistance pointing toward remediation where remediation is feasible.
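The escalation ladder described above is simple to encode. A sketch, assuming a CVSS-like 0-10 risk score and thresholds that any given programme would tune for itself:

```python
def required_approvers(risk_score: float) -> list[str]:
    """Map a risk score (0-10, CVSS-like; thresholds illustrative) to the
    approval chain, so that accepting harder findings is genuinely harder."""
    if risk_score >= 7.0:                    # high risk: CISO sign-off
        return ["asset-owner", "security-lead", "ciso"]
    if risk_score >= 4.0:                    # medium risk: security lead
        return ["asset-owner", "security-lead"]
    return ["asset-owner"]                   # low risk: asset owner alone


print(required_approvers(8.1))  # ['asset-owner', 'security-lead', 'ciso']
```

Because the chain lengthens with the score, the path of least resistance stays pointed at remediation for anything serious.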

A program that handles both objections in advance, with documented policy on legal posture and approval thresholds, will land risk acceptance smoothly across the organisation. A program that treats these as objections to be argued rather than resolved will struggle to land it at all.
