Industry Analysis

New Relic Security: Building a Supply Chain View

How to extend New Relic's APM and Vulnerability Management features into a working software supply chain dashboard for security and platform teams.

Nayan Dey
Senior Security Engineer
7 min read

New Relic began as an application performance monitoring tool and has spent the last several years extending into security. The Vulnerability Management feature now covers open-source libraries and container images; the logs and traces products ingest CI/CD events and audit data; and NRQL provides a single query language across the whole stack. That combination makes it a realistic platform for a supply chain view that serves both security and platform engineering.

Realistic does not mean turnkey. The supply chain view in New Relic is a deliberately composed set of dashboards, alerts, and workflows that sits on top of primitives designed for other purposes. Done well, it gives operations and security teams a shared picture of what is deployed, what is vulnerable, and what changed.

The data foundation

New Relic's supply chain value proposition starts with APM. Every service the APM agent observes reports its runtime, its dependencies, and the versions of those dependencies. That data flows into event types such as PackageEvent and VulnerabilityEvent, which are queryable like tables through NRQL.

Container image scanning extends the same picture to base images and OS packages. Logs ingestion brings CI/CD output, GitHub and GitLab audit events, and cloud audit trails into the same environment. Traces connect runtime behavior back to the code that produced it.

Four inputs, one query surface. That is the foundation. The supply chain view builds on it.

The core dashboard

The supply chain dashboard in New Relic has four sections, each corresponding to a question a platform or security team asks on a recurring basis.

The composition section shows what is in production. Panels include total services tracked, total unique packages and versions, and a breakdown by ecosystem. The most valuable panel here is a table of packages sorted by how many services depend on them. When a CVE drops, that table is the shortest path from "there is a new Log4j" to "here are the twelve services we need to patch today."
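A panel along these lines can be sketched in NRQL. The event type and attribute names here (PackageEvent, packageName, packageVersion, entityName) follow the naming used above and are illustrative; verify them against what your account actually ingests.

```sql
// Packages ranked by how many services depend on them.
// Event type and attribute names are illustrative -- check your own data.
SELECT uniqueCount(entityName) AS 'Dependent services'
FROM PackageEvent
FACET packageName, packageVersion
SINCE 1 day ago
LIMIT 25
```

When a CVE lands, filtering this table by the affected package name produces the patch list directly.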

The vulnerability section shows what is at risk. Panels include open vulnerabilities by severity, mean time to remediate, and a trend line of new versus fixed issues. The trend line is particularly useful because it tells you whether the program is making progress. If new vulnerabilities arrive faster than fixes, no amount of scanning rigor is going to catch up, and the dashboard makes that plain.
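The trend line is a straightforward TIMESERIES query. The status values here are assumptions; substitute whatever lifecycle states your vulnerability data actually carries.

```sql
// New vs. fixed vulnerabilities per day; 'open' and 'fixed' are
// illustrative status values.
SELECT filter(count(*), WHERE status = 'open') AS 'Open',
       filter(count(*), WHERE status = 'fixed') AS 'Fixed'
FROM VulnerabilityEvent
TIMESERIES 1 day
SINCE 30 days ago
```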

The change section shows what is moving. Panels include new package versions introduced in the last seven days, deployments in the last 24 hours, and CI pipeline success rates. The "new packages" panel in particular is where security teams find things they did not know they were using — new transitive dependencies that arrive without fanfare but expand the attack surface.
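A minimal sketch of the week-over-week movement panel, using NRQL's COMPARE WITH to put the previous window alongside the current one:

```sql
// Distinct package versions observed this week vs. the week before.
// PackageEvent and packageVersion are illustrative names.
SELECT uniqueCount(packageVersion) AS 'Versions observed'
FROM PackageEvent
SINCE 7 days ago
COMPARE WITH 1 week ago
```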

The runtime section shows what is happening. Panels include error rates for services with active vulnerabilities, outbound call patterns, and unusual behavior flagged by New Relic's anomaly detection. This section closes the loop between composition and consequence. A vulnerable package matters more if the service using it is under active traffic and communicating with sensitive systems.
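One way to connect composition to consequence is a NRQL subquery that restricts standard APM Transaction data to services carrying a critical vulnerability. The entityGuid linkage between the two event types is an assumption to verify in your account.

```sql
// Error rate for services that carry at least one critical vulnerability.
// Assumes entityGuid is present on both event types.
SELECT percentage(count(*), WHERE error IS true) AS 'Error rate'
FROM Transaction
WHERE entityGuid IN (
  SELECT uniques(entityGuid) FROM VulnerabilityEvent WHERE severity = 'CRITICAL'
)
FACET appName
SINCE 1 hour ago
```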

NRQL patterns worth keeping

NRQL is expressive enough to answer most supply chain questions once the data is in place. A handful of query patterns come up often enough to be worth memorizing.

The "services using package X version Y" pattern takes a package coordinate and returns the services and environments where it runs. Paired with a vulnerability alert, it shortens triage dramatically.
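As a sketch, with example package coordinates standing in for the real ones:

```sql
// Where does log4j-core 2.14.1 run? Package coordinates are example values;
// environment is an illustrative attribute.
SELECT uniqueCount(host) AS 'Hosts', latest(timestamp) AS 'Last seen'
FROM PackageEvent
WHERE packageName = 'log4j-core' AND packageVersion = '2.14.1'
FACET entityName, environment
SINCE 1 day ago
```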

The "new packages this week" pattern compares the current week's package set to the previous week's and returns the delta. A version of this pattern that tracks only direct dependencies is useful for license review; a version that includes transitive dependencies is useful for security review.
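The delta can be expressed with a NRQL subquery carrying its own time window. This sketch assumes subquery support is available in your account and that the two windows do not overlap.

```sql
// Packages present this week that were absent the week before.
SELECT uniques(packageName) AS 'New packages'
FROM PackageEvent
WHERE packageName NOT IN (
  SELECT uniques(packageName)
  FROM PackageEvent
  SINCE 2 weeks ago UNTIL 1 week ago
)
SINCE 1 week ago
```

Adding a direct-versus-transitive attribute to the WHERE clause, if your package data carries one, splits this into the license-review and security-review variants described above.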

The "deployment-to-runtime latency" pattern joins deployment markers with first observation of a new package version in APM data. For teams trying to measure how quickly fixes actually reach production, this is the hard number that turns a qualitative story into a quantitative one.
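NRQL does not join across event types to compute the time delta directly, so the practical sketch is two queries whose results are subtracted in the dashboard or calling script. Deployment is the change-tracking event type; the FACET attributes are assumptions.

```sql
// Step 1: when did each package version first appear at runtime?
SELECT min(timestamp) AS 'First seen at runtime'
FROM PackageEvent
FACET packageName, packageVersion
SINCE 30 days ago

// Step 2: when was each version deployed? Subtract per version outside NRQL.
SELECT min(timestamp) AS 'Deployed at'
FROM Deployment
FACET version
SINCE 30 days ago
```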

The "services reaching new destinations" pattern analyzes outbound trace data to surface services that started calling new external hosts after a recent deployment. Combined with package change data, it is one of the better early signals for compromised dependencies that phone home.
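A sketch of the pattern over Span data. Outbound-call attribute names vary by instrumentation; external.host and span.kind here are assumptions to replace with whatever your tracing actually emits.

```sql
// External hosts contacted in the last day that were never seen
// in the prior two weeks. Attribute names are assumptions.
SELECT count(*)
FROM Span
WHERE span.kind = 'client'
  AND external.host NOT IN (
    SELECT uniques(external.host)
    FROM Span
    SINCE 15 days ago UNTIL 1 day ago
  )
FACET entityName, external.host
SINCE 1 day ago
```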

Workflows, alerts, and the on-call experience

A dashboard nobody consults on a calm day will not be consulted during an incident. New Relic Workflows bridge the gap by routing specific conditions to the right channel with the right context. Three workflows deserve default status in every supply chain setup.

A workflow that fires on any critical vulnerability in a package that currently runs in production, routed to the owning service's Slack channel with links to the dashboard, the SBOM, and the relevant code. The goal is that the first person who sees the alert has enough context to start working without five minutes of pivoting.
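The condition behind this workflow is a simple faceted count held above zero by a static threshold. The environment attribute is an assumption; scope the query however your account distinguishes production.

```sql
// Alert condition: any critical vulnerability on a production service.
// An "above 0" static threshold on this count drives the workflow.
SELECT count(*)
FROM VulnerabilityEvent
WHERE severity = 'CRITICAL' AND environment = 'production'
FACET entityName
```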

A workflow that fires when an unusual package appears in an SBOM — a package not on an allowlist, a version that skipped several releases, or a dependency of a dependency of a dependency that was not there yesterday. This workflow routes to the security team rather than the service owner, because the question it asks is about posture rather than patch.
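The allowlist check can be sketched with a NRQL lookup table. This assumes a CSV named allowed_packages, with a packageName column, has been uploaded to New Relic as a lookup; the name and column are choices you make at upload time.

```sql
// Packages outside the allowlist. lookup(allowed_packages) is a
// hypothetical lookup table you maintain.
SELECT uniqueCount(entityName) AS 'Services affected'
FROM PackageEvent
WHERE packageName NOT IN (SELECT packageName FROM lookup(allowed_packages))
FACET packageName
SINCE 1 hour ago
```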

A workflow that fires when an APM-observed runtime version diverges from the declared build-time version, an important signal for runtime tampering and configuration drift. Teams that deploy correctly should almost never see this fire; when it does, it deserves attention.
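A sketch of the divergence condition. runtimeVersion and declaredVersion are hypothetical attributes: the runtime side comes from APM observation, while the declared side is something you would emit yourself, for example by ingesting build metadata as custom events keyed to the same package coordinates.

```sql
// Runtime version diverging from the declared build-time version.
// Both attributes are hypothetical and assume you ingest build metadata.
SELECT latest(runtimeVersion), latest(declaredVersion)
FROM PackageEvent
WHERE runtimeVersion != declaredVersion
FACET entityName, packageName
SINCE 1 day ago
```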

Integration with security tooling outside New Relic

Not every supply chain signal fits cleanly into New Relic's model. Source control events, registry events, and certain scanner outputs belong in dedicated systems. The practical pattern is to ingest the enriched, deduplicated outputs of those systems into New Relic as log events or as custom events, keep the raw data where it was produced, and use New Relic as the unified view layer.

Two integrations recur. Scanner outputs — Trivy, Grype, or a managed SBOM service — feed in as custom events and join NRQL queries to give a single source of vulnerability data across build-time and runtime. Source control events feed in as log events, which lets the dashboard show commits, pull requests, and sensitive file modifications alongside the runtime picture.
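Once scanner output lands as custom events, it joins the runtime picture through an ordinary subquery. TrivyScanResult here is a name you choose at ingest time, not a built-in event type.

```sql
// Services currently running a package that a build-time scan flagged.
// TrivyScanResult is a custom event type named at ingest; attributes
// must match the runtime data's package coordinates.
SELECT uniqueCount(entityName) AS 'Services running a flagged package'
FROM PackageEvent
WHERE packageName IN (
  SELECT uniques(packageName)
  FROM TrivyScanResult
  WHERE severity = 'CRITICAL'
)
FACET packageName
SINCE 1 day ago
```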

Operating the view

The supply chain view lives or dies by whether someone owns it. The best-run deployments assign a single platform or security engineer as dashboard owner, review the dashboard weekly as part of an operational ritual, and prune panels that stop being looked at. They also measure the view itself: how many incidents was the dashboard the first to surface, how many investigations used it, how quickly did it answer the questions that came up.

A dashboard that earns its place gets more investment. A dashboard that does not should be retired, not left to rot.

How Safeguard Helps

Safeguard provides the structured supply chain data that a New Relic dashboard needs to be more than a wrapper around APM telemetry. Our API delivers SBOMs, vulnerability correlations, provenance signals, and license findings as custom events or log streams that join natively to NRQL. The result is a unified view where New Relic shows what runs and how it performs, while Safeguard shows what is in the code, where it came from, and whether it can be trusted. Platform and security teams get a single pane that speaks both operations and supply chain.
