The first time a .NET team wires up an SCA pipeline, the output is usually a wall of CSV rows with package names, CVEs, and severities. After three weeks the team stops clicking through the reports because nothing ever changes as a result of reading them. The raw list is not the work. The dashboard is the work, and a good NuGet vulnerabilities dashboard is shaped around the questions a developer actually asks on the morning of a sprint planning meeting, not around the output format of the scanner.
The questions a dashboard should answer
Three, in this order:
- What must I fix this week? — Critical reachable issues with a patch available.
- What can wait? — Non-reachable, low-severity, or awaiting upstream.
- What am I stuck on? — Unfixable issues where the only mitigation is compensating controls.
A dashboard that answers these is used. One that only shows "open CVEs count by severity" is ignored because it does not translate to action.
Fields that matter per finding
- Package name and version
- Introduced by (direct vs transitive, which direct parent)
- CVE IDs and severity (CVSS 3.1 base + environmental if available)
- KEV listed? (yes/no) — KEV entries get priority regardless of CVSS
- EPSS probability — useful alongside CVSS for prioritization
- Fixed in version
- Reachability (reachable / not reachable / unknown)
- Service(s) affected
- Assigned owner and target date
- Status (open / fix in progress / mitigated / accepted)
The last three fields are the ones every dashboard gets wrong. Without clear ownership and status, the dashboard becomes a reporting view instead of a work-tracking view.
Filters that actually get used
Four filters that make the difference between "I'll look at this later" and "I'll fix this now":
- Service scope — show me only findings in the services I own.
- Reachable only — exclude the noise of findings in unused code paths.
- Fix available — exclude issues with no upstream fix yet (track those separately).
- SLA breach imminent — show me what will miss its deadline in 7 days.
A developer looking at a dashboard filtered to "my services + reachable + fix available + due this week" gets an actionable list. A developer looking at all open CVEs gets discouragement.
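The four filters compose into a single predicate. A minimal sketch in Python, assuming flattened field names (`service` as a plain id, `due_date` as a `datetime.date`); these names are illustrative, not a real scanner API:

```python
from datetime import date, timedelta

def actionable(findings, my_services, today=None, window_days=7):
    """Apply the four filters: service scope, reachable only,
    fix available, and SLA breach within the window."""
    today = today or date.today()
    cutoff = today + timedelta(days=window_days)
    return [
        f for f in findings
        if f["service"] in my_services      # service scope
        and f["reachable"] is True          # reachable only
        and f["fixed_in"] is not None       # fix available
        and f["due_date"] <= cutoff         # SLA breach imminent
    ]
```

The "unknown" reachability state deliberately fails the `is True` check here; whether unknowns belong in the actionable view is a policy decision each team should make explicitly.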
Integrations that close the loop
A dashboard in isolation does not drive outcomes. The integrations that make it work:
- Jira / Linear / GitHub Issues: one-click ticket creation with finding context pre-populated
- Slack / Teams: per-service channel alerts when a new critical reachable issue appears
- PR checks: block merges that introduce new criticals (or warn)
- Release gates: prevent deploys that regress the vulnerability baseline
Without these integrations, the dashboard is a mirror. With them, it is a workflow.
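The PR check and the release gate share one core piece of logic: diff the current scan against a stored baseline and fail only on regressions, never on pre-existing findings. A hedged sketch (field names are illustrative):

```python
def gate(baseline_ids, current_findings, block_severities=("critical",)):
    """Fail the check only if the change introduces a *new* finding at a
    blocked severity; findings already in the baseline pass through."""
    new = [
        f for f in current_findings
        if f["id"] not in baseline_ids and f["severity"] in block_severities
    ]
    return {"passed": not new, "new_findings": [f["id"] for f in new]}
```

Gating on the diff rather than the absolute count is what makes the check adoptable on a codebase with existing debt: day one is green, and only regressions turn it red.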
CVSS alone does not prioritize
If you sort by CVSS descending and start at the top, you will spend the week on hypothetical issues while a KEV-listed medium goes unpatched. A production-grade NuGet dashboard uses a composite score:
- KEV listing: +30 points
- Reachability: +20 points if reachable
- CVSS base: +0–10 points (scaled)
- EPSS probability: +0–10 points (scaled)
- Service criticality: +0–10 points
Rank by the composite, not by any single field. In customer deployments, this consistently puts the handful of issues that actually deserve the week's attention at the top of the list.
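The weights above translate directly into a scoring function. A minimal sketch, assuming flattened field names on each finding (illustrative, not a confirmed scanner API):

```python
def composite_score(finding):
    """Composite priority: KEV and reachability dominate, with CVSS,
    EPSS, and service criticality each contributing up to 10 points."""
    score = 0.0
    if finding["kev_listed"]:
        score += 30
    if finding["reachable"] is True:
        score += 20
    score += finding["cvss_base"]            # CVSS base is already 0-10
    score += finding["epss"] * 10            # EPSS probability 0-1, scaled to 0-10
    score += finding["service_criticality"]  # assumed 0-10 per service
    return score
```

Note the effect of the weighting: a KEV-listed, reachable medium (say CVSS 6.1) scores well above a non-reachable, non-KEV critical, which is exactly the inversion the paragraph above argues for.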
Where NuGet dashboards typically fail
Four recurring patterns:
Double-counting across solutions. A single vulnerable package appearing in 40 projects shows as 40 findings. Aggregate to the package level with a service rollup for context.
Stale "introduced by" information. A transitive dependency whose direct parent changed six months ago still shows the original parent. Recompute on every scan.
No distinction between direct and transitive. Different fix approaches apply. Direct: upgrade the direct dependency. Transitive: upgrade a parent or use explicit overrides. The dashboard should surface this distinction.
Missing VEX awareness. A vendor-published VEX statement marks a CVE as not affected for a specific product or usage context. Dashboards that ignore VEX keep churning on findings that have already been formally dispositioned.
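The fix for the first pattern, double-counting, is a plain group-by with a service rollup. A sketch, assuming each raw per-project finding carries flat `package`, `version`, and `service` fields (illustrative names, not a scanner API):

```python
from collections import defaultdict

def rollup_by_package(findings):
    """Collapse per-project findings into one row per (package, version),
    keeping the set of affected services and the raw count for context."""
    rows = defaultdict(lambda: {"services": set(), "count": 0})
    for f in findings:
        key = (f["package"], f["version"])
        rows[key]["services"].add(f["service"])
        rows[key]["count"] += 1
    return rows
```

The raw count is still worth keeping on the row: "1 package, 40 occurrences, 3 services" is the honest picture, while "40 findings" is the misleading one.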
A minimal dashboard schema
finding {
  id
  package: { name, version, ecosystem: "nuget" }
  parent_direct: { name, version }
  cve_ids: [CVE-2024-XXXXX]
  severity: { cvss_base, cvss_env, kev_listed, epss }
  fixed_in: "1.2.3" | null
  reachable: boolean | unknown
  service: { id, criticality }
  status: "open" | "in_progress" | "mitigated" | "accepted"
  owner: user_id
  due_date: iso_date
  composite_score: number
  vex_statements: [{ product, status, justification }]
}
Query views over this shape cover the three morning questions. The dashboard is then a UI over saved queries, not a fresh data model.
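The three morning questions map onto three saved queries over this shape. A sketch assuming a flattened record (severity reduced to a single label, reachability to a boolean), which simplifies the nested schema above:

```python
def must_fix_this_week(findings):
    """Q1: critical, reachable, patch available, still open."""
    return [f for f in findings
            if f["severity"] == "critical"
            and f["reachable"] is True
            and f["fixed_in"] is not None
            and f["status"] == "open"]

def can_wait(findings):
    """Q2: open, but not reachable, low severity, or awaiting upstream."""
    return [f for f in findings
            if f["status"] == "open"
            and (f["reachable"] is False
                 or f["severity"] == "low"
                 or f["fixed_in"] is None)]

def stuck_on(findings):
    """Q3: no upstream fix; mitigated only via compensating controls."""
    return [f for f in findings
            if f["fixed_in"] is None and f["status"] == "mitigated"]
```

Each view is a pure function of the finding list, which is the point: the dashboard tabs are these queries rendered, not separate data pipelines.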
What changes with .NET 8 and Central Package Management
Central Package Management (CPM) collapses per-project versions into one central file, which simplifies aggregation. A CPM-aware dashboard shows one row per package even if 40 projects reference it. It also makes mitigation tractable — a single version bump in Directory.Packages.props fixes every occurrence. Dashboards built pre-CPM often over-report on CPM-enabled solutions.
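The single version bump looks like this in Directory.Packages.props (the package name and version shown are illustrative):

```xml
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <!-- illustrative: bumping this one version updates every project
         in the solution that references the package -->
    <PackageVersion Include="System.Text.Json" Version="8.0.5" />
  </ItemGroup>
</Project>
```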
How Safeguard Helps
Safeguard's platform ships this dashboard shape as default for .NET supply chain findings. Reachability is computed per-service, composite scoring is built in, VEX statements are respected, and ticket/Slack/PR integrations come wired. Griffin AI proposes remediation PRs that target the right layer (direct vs transitive vs override) and the dashboard tracks fix velocity so the team can see whether the program is improving over time rather than just counting open findings.