Dependencies that are two minor versions behind often have known security fixes somewhere in the gap. Measuring how far behind your projects have drifted turns "we should update more often" into a metric you can enforce in CI. This tutorial shows how to compute dependency freshness for Node, Python, and Go projects, express it as a "libyear" budget, and fail builds that exceed the budget. You will finish with a working GitHub Actions job that reports freshness on every pull request and a dashboard query you can share with engineering leadership.
Prerequisites: Node 20+, Python 3.11+, Go 1.22+, GitHub Actions or equivalent CI, and jq.
Time to complete: About 45 minutes for the first ecosystem, 15 minutes per additional ecosystem.
What is libyear and why is it the right metric?
Libyear counts the cumulative age of your dependencies in years, where a library's age is the time between the publish date of the version you use and the publish date of the latest stable version. A project with 50 dependencies, each one year out of date, carries 50 libyears of debt. The metric has three nice properties: it aggregates across ecosystems, it is monotone (older always means more), and it responds to real updates rather than to cosmetic version bumps.
The alternative metrics are worse. "Number of outdated dependencies" treats a one-patch-behind package the same as a two-major-versions-behind package. "Percentage outdated" hides the worst offenders by averaging them in. Libyear does neither. It surfaces the long tail because the tail contributes most of the total.
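The arithmetic behind the metric is just a date difference. A minimal sketch for a single package, with illustrative publish dates (requires GNU date):

```shell
# Publish date of the version in use vs. the latest stable (illustrative).
CUR_DATE="2022-03-01"
LATEST_DATE="2024-03-01"

# Age in whole days (UTC), then libyears.
AGE_DAYS=$(( ($(date -u -d "${LATEST_DATE}" +%s) - $(date -u -d "${CUR_DATE}" +%s)) / 86400 ))
LIBYEAR=$(awk -v d="${AGE_DAYS}" 'BEGIN { printf "%.2f", d / 365 }')
echo "libyear drift: ${LIBYEAR}"   # prints "libyear drift: 2.00"
```

A project's total is this value summed over every dependency.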
How do I compute libyear for a Node project?
Use the libyear npm package. It reads package.json and package-lock.json, hits the npm registry for each package's publish dates, and outputs per-package and total drift:
npm install -g libyear@0.3.1
cd my-app
libyear --all
Expected output:
PACKAGE DRIFT PULSE RELEASES MAJOR MINOR PATCH AVAILABLE
express 1.20 0.02 24 1 3 4 4.21.0
lodash 2.14 0.18 12 0 0 2 4.17.21
...
TOTAL 47.82 libyears across 63 packages
DRIFT is the per-package libyear value; PULSE is the time since the package's most recent publish, which flags abandoned packages. Export the data as JSON for CI:
libyear --all --format=json > libyear.json
TOTAL=$(jq '[.[].drift] | add' libyear.json)
echo "Total libyear: ${TOTAL}"
A reasonable starting SLO for an actively maintained product is 20 libyears across all direct dependencies. Anything higher, and the project has accumulated debt that is actively slowing security patches.
How do I compute it for Python and Go?
Python does not have a first-class libyear tool, but you can compose one from pip list and the PyPI JSON API:
pip list --format=json --outdated > outdated.json
jq -r '.[] | [.name, .version, .latest_version] | @tsv' outdated.json | \
while IFS=$'\t' read -r pkg cur latest; do
CUR_DATE=$(curl -s "https://pypi.org/pypi/${pkg}/${cur}/json" \
| jq -r '.urls[0].upload_time_iso_8601 // "1970-01-01"')
LATEST_DATE=$(curl -s "https://pypi.org/pypi/${pkg}/${latest}/json" \
| jq -r '.urls[0].upload_time_iso_8601 // "1970-01-01"')
AGE_DAYS=$(( ($(date -d "${LATEST_DATE}" +%s) - $(date -d "${CUR_DATE}" +%s)) / 86400 ))
printf "%s\t%s\t%s\t%d\n" "${pkg}" "${cur}" "${latest}" "${AGE_DAYS}"
done > python-freshness.tsv
awk '{sum += $4} END {printf "Total libyear: %.2f\n", sum/365}' python-freshness.tsv
For Go, use go list -m -u -json all:
go list -m -u -json all | \
jq -r 'select(.Update != null) | [.Path, .Version, .Update.Version, .Time, .Update.Time] | @tsv'
The .Time field is the publish timestamp of the version you are on, and .Update.Time is the timestamp of the latest version; compute the difference in days and divide by 365 for libyears. Because Go module metadata carries these timestamps directly, no extra registry lookups are needed, which makes the calculation cleaner than npm's.
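Putting those pieces together, a sketch that totals Go drift. The jq filter and field names follow the go list output above; go_libyear is a hypothetical helper name, and GNU date is assumed:

```shell
# go_libyear: reads `go list -m -u -json all` output on stdin and prints
# the total drift in libyears across all modules with a pending update.
go_libyear() {
  jq -r 'select(.Update != null) | [.Time, .Update.Time] | @tsv' \
    | while IFS=$'\t' read -r cur_time new_time; do
        # Days between the pinned version's publish time and the update's.
        echo $(( ($(date -u -d "${new_time}" +%s) - $(date -u -d "${cur_time}" +%s)) / 86400 ))
      done \
    | awk '{ sum += $1 } END { printf "Total libyear: %.2f\n", sum / 365 }'
}

# Usage: go list -m -u -json all | go_libyear
```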
How do I enforce a freshness budget in CI?
Add a job to your GitHub Actions workflow that computes libyear and fails above a threshold. Save this as .github/workflows/freshness.yml:
name: Dependency Freshness
on:
  pull_request:
    paths:
      - "package*.json"
      - "requirements*.txt"
      - "go.mod"
permissions:
  issues: write
  pull-requests: write
jobs:
  libyear:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install libyear
        run: npm install -g libyear@0.3.1
      - name: Compute
        id: ly
        run: |
          libyear --all --format=json > libyear.json
          TOTAL=$(jq '[.[].drift] | add' libyear.json)
          echo "total=${TOTAL}" >> "${GITHUB_OUTPUT}"
      - name: Comment on PR
        uses: actions/github-script@v7
        with:
          script: |
            const total = "${{ steps.ly.outputs.total }}";
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `Dependency freshness: **${total} libyears**. Budget: 20.`
            });
      - name: Enforce budget
        run: |
          BUDGET=20
          TOTAL=${{ steps.ly.outputs.total }}
          awk -v t="${TOTAL}" -v b="${BUDGET}" 'BEGIN { exit !(t <= b) }' \
            || { echo "Freshness budget exceeded: ${TOTAL} > ${BUDGET}"; exit 1; }
The job comments on every PR with the current number, which builds the habit of watching it. The hard fail at 20 keeps new PRs from adding to a debt that has already reached the ceiling.
How do I prioritize which packages to update first?
Sort by drift and cross-reference with security advisories and reachability. The libyear JSON output plus OSV lookups gives you a one-command priority list:
jq -r '.[] | select(.drift > 1) | [.name, .drift, .version] | @tsv' libyear.json \
| sort -k2 -n -r | head -20
The top of this list is where your first updates should go. Pair each candidate with an OSV check to surface known CVEs:
while IFS=$'\t' read -r pkg drift ver; do
COUNT=$(curl -s -X POST https://api.osv.dev/v1/query \
-d "{\"package\":{\"name\":\"${pkg}\",\"ecosystem\":\"npm\"},\"version\":\"${ver}\"}" \
| jq '.vulns | length')
printf "%-30s drift=%5.2f vulns=%d\n" "${pkg}" "${drift}" "${COUNT:-0}"
done < <(jq -r '.[] | [.name, .drift, .version] | @tsv' libyear.json | sort -k2 -n -r | head -20)
Any line with both high drift and nonzero vulns jumps to the top of the update queue.
How do I report freshness to engineering leadership?
Dump the weekly total libyear per service into a time-series file and chart it. A CSV with date and total makes a simple line chart that tells a clear story:
echo "$(date +%F),${TOTAL}" >> freshness-history.csv
The goal of the chart is not a specific number. The goal is a downward slope, or at worst a flat line. When the slope inflects upward, the metric tells you before the next CVE disclosure does.
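The slope itself can be reported numerically. A small helper that prints the latest total and the change since the previous entry; freshness_trend is a hypothetical name, and the file format is the date,total CSV built above:

```shell
# freshness_trend: print the newest total and its delta versus the previous
# row of a history file (rows are "YYYY-MM-DD,total", oldest first).
freshness_trend() {
  tail -n 2 "${1:-freshness-history.csv}" | awk -F, '
    { prev = cur; cur = $2 }
    END { printf "current=%.2f delta=%+.2f\n", cur, cur - prev }'
}
```

A negative delta means the debt is being paid down; a run of positive deltas is the upward inflection to act on.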
How Safeguard Helps
Safeguard tracks libyear and per-package freshness across your entire portfolio and correlates drift with reachability, so an old package that sits behind a disabled feature flag is deprioritized automatically. Griffin AI suggests minimal upgrade paths that satisfy both freshness budgets and known-vulnerability constraints, using SBOMs to reason about transitive impact. Policy gates enforce freshness SLOs on merge, and the weekly trend is surfaced on the product dashboard so engineering leadership sees slope without asking for a report.