The End-of-Year Dependency Audit Ritual

Most dependency audits get done in a panic after a CVE lands. A planned year-end audit is cheaper, more thorough, and produces a backlog you can actually work through in Q1.

Nayan Dey
Senior Security Engineer
6 min read

Every security team does a dependency audit. Most of them do it under duress, six hours after a Log4Shell-class CVE lands, at the exact moment their tooling is least capable of giving clear answers fast. A planned, calendared end-of-year audit is a much better version of the same activity. It is done with clear heads, against a stable code snapshot, with time to actually read the output. It produces a Q1 remediation backlog that the team can work through on a normal cadence rather than a fire-drill cadence. The difference between these two versions of the audit is roughly the difference between a dentist appointment and an emergency root canal — the work is similar, the experience is not. This post is the playbook we give customers at the end of each year, refined across several years of running it.

Why do a calendared audit at all when we scan continuously?

Continuous scanning catches known CVEs as they are disclosed, which is valuable but not the same thing as a dependency audit. An audit asks a different question: of the things we depend on, what should we be depending on at all? Continuous scanning tells you package X has CVE Y. The audit asks whether package X still belongs in your stack — whether it is still maintained, whether it has a healthier replacement, whether the version you are on is still receiving security updates, whether you are paying for a feature you stopped using three years ago.

These are strategic questions, not tactical ones. They do not get answered by alert tooling. They get answered by a human sitting with a spreadsheet and the SBOM, usually in December when things are slightly quieter.

What inputs do we need before starting?

Four artifacts, ideally gathered the week before the audit week:

  1. Current SBOM across all production services. Every package, every version, every license. If you do not have this centrally, now is the time — the audit itself becomes the first customer of the SBOM infrastructure.
  2. Usage telemetry. Which dependencies are actually called at runtime? Reachability data, even approximate, separates "we have it installed" from "we use it." Most dependency trees have 30–60% unused transitive dependencies that can be removed without behavior change.
  3. Maintenance signal per dependency. Last release date, issue backlog size, maintainer count, recent security patch cadence. Tools like OpenSSF Scorecard can give you some of this automatically.
  4. License inventory. Especially any newly introduced AGPL, SSPL, or other copyleft licenses in the year's dependency changes.

The audit is only as useful as the inputs. Spend the prep week getting them complete.

What does the audit workflow actually look like?

Six steps, typically two engineering days per service:

Step one: cluster dependencies by language and service. Export the SBOM as a spreadsheet with columns for package, version, ecosystem, services using it, license, last release date, maintenance score, reachable calls. This becomes your working document.
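The join behind the working document can be sketched as follows. This is a minimal illustration with made-up package data; the input shapes (`sbom`, `maintenance`, `reachability` keyed by ecosystem and package) are assumptions, not the output format of any particular tool.

```python
import csv
import io

# Hypothetical per-source inputs, keyed by (ecosystem, package).
sbom = [
    {"package": "requests", "version": "2.31.0", "ecosystem": "pypi",
     "services": "api;worker", "license": "Apache-2.0"},
    {"package": "left-pad", "version": "1.3.0", "ecosystem": "npm",
     "services": "web", "license": "WTFPL"},
]
maintenance = {("pypi", "requests"): {"last_release": "2023-05-22", "score": 8.1}}
reachability = {("pypi", "requests"): 42}  # reachable call counts

def build_working_rows(sbom, maintenance, reachability):
    """Join SBOM, maintenance, and reachability data into one audit row per package."""
    rows = []
    for entry in sbom:
        key = (entry["ecosystem"], entry["package"])
        maint = maintenance.get(key, {})
        rows.append({
            **entry,
            "last_release": maint.get("last_release", "unknown"),
            "maintenance_score": maint.get("score"),
            "reachable_calls": reachability.get(key, 0),
        })
    return rows

def to_csv(rows):
    """Render the joined rows as CSV for the working spreadsheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Missing data is surfaced explicitly (`"unknown"`, zero reachable calls) rather than dropped, so gaps in the inputs show up in the spreadsheet instead of silently narrowing the audit.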

Step two: eliminate unused transitive deps. Sort by reachable calls ascending. Every dependency with zero reachable calls is a candidate for removal. This is typically the highest-ROI part of the audit. Customers regularly shrink their dependency count by 20–40% in this step alone, cutting both attack surface and rebuild time in one pass.
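The zero-reachable-calls filter is a one-liner over the working rows. A minimal sketch, with hypothetical sample data:

```python
# Hypothetical audit rows from the working document.
rows = [
    {"package": "requests", "ecosystem": "pypi", "reachable_calls": 42},
    {"package": "event-stream", "ecosystem": "npm", "reachable_calls": 0},
    {"package": "colors", "ecosystem": "npm", "reachable_calls": 0},
]

def removal_candidates(rows):
    """Dependencies with zero reachable calls, sorted for review.

    Zero reachable calls means 'candidate', not 'safe to delete' —
    dynamic loading and reflection can hide real usage from
    reachability analysis, so each candidate still gets a human look.
    """
    return sorted(
        (r for r in rows if r.get("reachable_calls", 0) == 0),
        key=lambda r: (r["ecosystem"], r["package"]),
    )
```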

Step three: flag abandoned dependencies. Sort by last release date ascending, oldest first. Any dependency with no release in eighteen months and no active maintainer discussion is a risk. Two options: replace it, or fork it and take over maintenance. Both are better than "leave it in and hope."
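The abandonment check reduces to simple date arithmetic. A sketch, assuming the eighteen-month threshold from the text and a hypothetical `active_maintainers` count standing in for "active maintainer discussion":

```python
from datetime import date

STALE_AFTER_DAYS = 548  # roughly eighteen months

def is_abandoned(last_release, today, active_maintainers=0):
    """Flag a dependency whose last release is older than ~18 months
    and which shows no active maintainer signal."""
    age_days = (today - last_release).days
    return age_days > STALE_AFTER_DAYS and active_maintainers == 0
```

The threshold is a policy knob, not a law; libraries that are genuinely "done" (stable parsers, small utilities) may legitimately go years without a release, which is why the flag feeds a human review rather than an automatic removal.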

Step four: review major version lags. Any dependency two or more major versions behind current, where upgrading is not actively blocked. These accumulate invisibly through the year because no single quarter has budget for the upgrade work. The audit produces the backlog entry that makes the upgrade a Q1 roadmap item.
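The "two or more majors behind" test can be sketched against plain `X.Y.Z` version strings. This assumes semver-style versions; ecosystems with other schemes (CalVer, epoch-prefixed versions) need their own parsing:

```python
def major_lag(current, latest):
    """Number of major versions behind, parsed from 'X.Y.Z' strings."""
    return int(latest.split(".")[0]) - int(current.split(".")[0])

def needs_upgrade_backlog(current, latest, blocked=False):
    """Two or more majors behind, and the upgrade is not actively blocked."""
    return major_lag(current, latest) >= 2 and not blocked
```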

Step five: license sweep. Any new copyleft or restricted-commercial licenses introduced in the year's dependency changes. Legal review before Q1 deploys.
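Mechanically, the sweep is a set-membership check over SPDX license identifiers. The identifier set below is illustrative, not a legal determination — which licenses need review is a policy decision your legal team owns:

```python
# Illustrative watch list of SPDX identifiers; tune with legal counsel.
REVIEW_LICENSES = {"AGPL-3.0-only", "AGPL-3.0-or-later", "SSPL-1.0",
                   "GPL-3.0-only", "GPL-2.0-only"}

def needs_legal_review(new_deps):
    """New-this-year dependencies whose license is on the watch list."""
    return [d for d in new_deps if d["license"] in REVIEW_LICENSES]

# Hypothetical dependencies introduced during the year.
new_deps = [
    {"package": "mongo-tools", "license": "SSPL-1.0"},
    {"package": "requests", "license": "Apache-2.0"},
]
```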

Step six: categorize the remaining CVE backlog. Everything in your scanner that is still open. Categorize by reachable/not-reachable, critical path/non-critical, fix available/no fix. The goal is not to fix them all during the audit; the goal is to have a ranked Q1 list.
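The three categorization axes form a natural sort key: reachable, critical-path, fixable CVEs rank first, unreachable no-fix CVEs last. A minimal sketch with hypothetical CVE records:

```python
def categorize(cve):
    """Bucket key: (reachable, on critical path, fix available)."""
    return (cve["reachable"], cve["critical_path"], cve["fix_available"])

def rank_backlog(cves):
    """Sort so reachable + critical + fixable CVEs come first."""
    return sorted(cves, key=categorize, reverse=True)

# Hypothetical open findings from the continuous scanner.
cves = [
    {"id": "CVE-A", "reachable": False, "critical_path": False, "fix_available": False},
    {"id": "CVE-B", "reachable": True, "critical_path": True, "fix_available": True},
    {"id": "CVE-C", "reachable": True, "critical_path": False, "fix_available": True},
]
```

Tuples of booleans compare lexicographically, so the axis order in `categorize` encodes the priority order: reachability dominates, then criticality, then fix availability.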

How long does this actually take for a realistic codebase?

For a team owning 10–20 services with mature SBOM infrastructure, roughly two full engineering days per service, done in parallel by service owners with a cross-team coordinator. A security engineer acts as reviewer on each audit and consolidates findings into the org-level backlog. Teams that have never done this before should budget an extra day per service for the SBOM and reachability tooling to bed in.

The payoff is a ranked Q1 remediation backlog that the service owners have already agreed to. That avoids the usual Q1 cycle of security emailing "please patch these" and owners deprioritizing because they never agreed to the scope.

What should not be in the audit?

Three things that frequently get shoehorned in and shouldn't be:

  • Routine CVE triage. That is the continuous scanning workflow. The audit should consume its output, not replace it.
  • Tooling rollout decisions. Picking a new SCA vendor is a different project with different stakeholders. Do not let it hijack the audit.
  • Non-dependency security work. AuthN/authZ hardening, secret rotation, infra posture. Separate projects. The audit is narrow on purpose; narrow projects finish.

What comes out of the audit at the end?

Three artifacts:

  1. Removed dependencies list. PRs merged during the audit week to drop unused packages.
  2. Q1 remediation backlog. Ranked, service-owner-acknowledged list of upgrades, replacements, and license reviews.
  3. Next-year instrumentation asks. Gaps in SBOM coverage, reachability depth, or maintenance signal that made the audit harder than it should have been. These become platform team asks for the next year.

The third artifact matters more than people expect. The audit itself is the first real stress test of the SBOM and reachability infrastructure each year. Gaps that surface during the audit are the ones you actually want fixed before the next big incident lands.

How Safeguard Helps

Safeguard automates the mechanical parts of the end-of-year dependency audit and leaves the strategic judgment to the team. SBOM, reachability, license inventory, and maintenance signals come from the platform as pre-joined data, so the audit starts with a complete working document rather than two prep days of spreadsheet assembly. Griffin AI pre-ranks dependencies by removability (unused transitives), upgrade risk (major version lag), and maintenance risk (abandonment signals). The Q1 remediation backlog exports directly to Jira, Linear, or whatever your team uses, with service-owner assignments preserved. For security teams that want the audit to produce a shippable plan rather than a PDF, Safeguard compresses the mechanical effort and lets the team spend their audit time on the decisions that matter.
