A container that passes admission control is often treated as a finished story. The image was scanned, the signature verified, the SBOM evaluated, and the workload accepted. From that moment, supply chain attention typically moves to the next deploy. This is a mistake that adversaries are increasingly happy to exploit. The image that admission accepted is not necessarily the workload that ends up running, and the gap between what was admitted and what is actually executing is where modern supply chain attacks live.
Runtime drift detection closes that gap. It treats the SBOM not as a one-time document produced at build but as a contract about the workload's contents, and it watches for the moments when reality stops matching the contract.
What drift looks like in practice
Drift can be benign or malicious. The detection system does not need to know which is which immediately; it needs to know that something changed and surface the signal.
Benign drift includes: the workload pulling in a configuration update that adds a new sidecar, an operator injecting an init container that was not in the original spec, an in-place upgrade applied through a controller that swaps a binary inside the container, or a debug agent attached during incident response.
Malicious drift includes: a postinstall script in a freshly fetched dependency that modifies the running process, a supply chain compromise that arrives via a transitive update during an automatic dependency refresh, a backdoor activated by a periodic update check that pulls additional code from an external host, or an attacker who has gained execution inside the container and is now writing new binaries.
The pattern across both categories is the same: the running workload contains code, processes, or network behavior that was not part of the admitted snapshot. A drift detection system surfaces every such moment, lets policy decide what is acceptable, and converts the SBOM from a paper artifact into a continuously checked claim.
What drift detection should monitor
The signals worth tracking fall into three groups.
Filesystem and package state. New executables appearing in the container, package manager databases updating outside the image's base, language runtime caches receiving new modules, and shared libraries being modified or replaced. The baseline is the SBOM at admission time; deviations are events.
Process and execution. Processes whose hashes do not match those declared in the SBOM, processes spawned from filesystem locations not in the image, parent-child relationships that diverge from the workload's expected behavior, and language runtimes loading modules that are not part of the declared dependency graph.
Network and dependency egress. Outbound connections to hosts not in the workload's declared dependency set — particularly new package registry connections — DNS lookups for domains associated with package indexes that the workload should not be querying at runtime, and certificate-pinning failures that indicate man-in-the-middle attempts on dependency fetches.
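The three signal groups can be sketched as a single baseline comparison. This is a minimal illustration, not a real agent: the `SbomBaseline` and `detect_drift` names are invented for this sketch, and a production collector would source these observations from filesystem, process, and network telemetry rather than function arguments.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SbomBaseline:
    """The contract recorded at admission time (simplified)."""
    executable_hashes: frozenset   # digests of binaries admitted with the image
    packages: frozenset            # "name@version" entries from the SBOM
    allowed_egress_hosts: frozenset  # the workload's declared dependency endpoints

def detect_drift(baseline, observed_exec_hash=None,
                 observed_package=None, observed_host=None):
    """Return (category, detail) drift events for one observation."""
    events = []
    if observed_exec_hash and observed_exec_hash not in baseline.executable_hashes:
        events.append(("process", f"unknown executable hash {observed_exec_hash}"))
    if observed_package and observed_package not in baseline.packages:
        events.append(("package", f"package not in SBOM: {observed_package}"))
    if observed_host and observed_host not in baseline.allowed_egress_hosts:
        events.append(("network", f"egress to undeclared host {observed_host}"))
    return events
```

The point of the sketch is the shape of the check: every category reduces to "observed value not in admitted set", which is why the SBOM's completeness is the quality floor for all three signals.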
Each category needs its own threshold and its own response policy. Process drift in a serverless function that should be immutable is a high-severity event. Package drift in a long-running JIT-compiled service that legitimately downloads optimization data is noisy and needs filters. The detection system has to encode this through configuration rather than treating every workload identically.
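The per-workload policy described above might be encoded as configuration along these lines. The workload classes, category names, and severity labels are illustrative assumptions, not a defined schema.

```python
# Hypothetical per-workload-class drift policy.
DRIFT_POLICY = {
    # an immutable serverless function: any process drift is serious
    "serverless": {"process": "high", "package": "high", "network": "medium"},
    # a JIT-compiled service that legitimately downloads optimization data:
    # package drift is expected noise and filtered down
    "jit-service": {"process": "medium", "package": "low", "network": "medium"},
}

def severity(workload_class, drift_category):
    # unknown classes fall back to a conservative default
    return DRIFT_POLICY.get(workload_class, {}).get(drift_category, "high")
```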
The relationship to admission control
Drift detection complements admission control rather than replacing it. Admission decides whether a workload can start; drift detection decides whether it remains compliant once running.
A common mistake is to treat drift detection as a runtime block — kill the process when drift is detected, refuse the network connection when an unsanctioned host is contacted. Hard runtime blocks are dangerous: a false positive terminates a workload that may be serving critical traffic, and the blast radius is unbounded. The safer default is to alert on drift, gather evidence, and enforce only on signals with very high specificity, such as known-malicious indicators or attempts to fetch from known-compromised registries.
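The alert-first posture can be made concrete with a small response function. The indicator set and event fields here are assumptions for illustration; the structure is what matters: hard enforcement is gated on very high-specificity signals, everything else alerts or logs.

```python
# Illustrative feed of known-compromised registries (hypothetical entry).
KNOWN_COMPROMISED_REGISTRIES = {"registry.evil.example"}

def respond(event):
    """Alert by default; enforce only on very high-specificity signals."""
    if event.get("registry") in KNOWN_COMPROMISED_REGISTRIES:
        return "isolate"   # fetch from a known-compromised registry: act
    if event["severity"] == "high":
        return "page"      # human in the loop; never auto-kill on suspicion alone
    return "log"           # evidence gathered, no workload impact
```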
The right mental model is that drift detection feeds the next admission cycle. A workload whose runtime behavior repeatedly drifts from its admitted SBOM gets a more skeptical review at the next deploy. A workload whose behavior matches its SBOM exactly earns a fast path for subsequent rollouts. Drift becomes input to policy rather than only an output.
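One way to express this feedback loop, under assumed thresholds and tier names (none of which are prescribed by the text), is a function from drift history to the next admission posture:

```python
def next_admission_tier(drift_events_last_30d, clean_rollouts):
    """Sketch: drift history feeds the next admission decision.
    Thresholds here are illustrative, not recommendations."""
    if drift_events_last_30d == 0 and clean_rollouts >= 3:
        return "fast-path"      # behavior matched the SBOM; lighter checks
    if drift_events_last_30d > 10:
        return "manual-review"  # repeated drift earns a skeptical review
    return "standard"
```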
Operational realities
Several realities shape how drift detection is deployed in practice.
Cardinality. A medium-sized cluster runs thousands of containers. A naive system that emits an event for every minor variation will drown its consumers. The detection system has to deduplicate, group, and rate-limit at the source. Per-image baselines are more useful than per-pod baselines because an image's expected behavior is stable across pod replicas.
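Source-side deduplication with per-image keys might look like the following sketch. The class name and window size are assumptions; the design point is that the key is (image, category, detail), not the pod, and that suppressed events are counted rather than dropped silently.

```python
from collections import defaultdict
import time

class DriftDeduper:
    """Group drift events by (image, category, detail) and rate-limit emission.
    Keys are per-image, not per-pod: an image's expected behavior is stable
    across replicas, so one key covers the whole fleet of pods."""

    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.last_emitted = {}                 # key -> timestamp of last emit
        self.suppressed = defaultdict(int)     # key -> count of suppressed repeats

    def should_emit(self, image, category, detail, now=None):
        now = time.monotonic() if now is None else now
        key = (image, category, detail)
        last = self.last_emitted.get(key)
        if last is not None and now - last < self.window:
            self.suppressed[key] += 1          # counted, not lost
            return False
        self.last_emitted[key] = now
        return True
```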
Sensitivity tuning. The same drift signal means different things in different namespaces. A dev namespace running short-lived debugging containers cannot be evaluated against the same standard as a production namespace running long-lived services. Policy needs to encode the namespace tier and adjust thresholds accordingly.
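Namespace-tier thresholds reduce to a small lookup. The tier names and severity ordering below are assumed for illustration:

```python
NAMESPACE_TIERS = {"prod": "strict", "staging": "standard", "dev": "lenient"}
MIN_SEVERITY_TO_ALERT = {"strict": "low", "standard": "medium", "lenient": "high"}
RANK = {"low": 0, "medium": 1, "high": 2}

def should_alert(namespace, event_severity):
    # unknown namespaces default to the strict tier rather than the lenient one
    tier = NAMESPACE_TIERS.get(namespace, "strict")
    return RANK[event_severity] >= RANK[MIN_SEVERITY_TO_ALERT[tier]]
```

The defaulting direction is the deliberate choice here: a namespace the policy has never heard of is treated like production, not like dev.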
Image transparency. Drift detection assumes the SBOM accurately reflects the image. If the SBOM is incomplete — generated from a manifest that misses build-time-installed packages, for instance — every drift signal is a false positive against an inaccurate baseline. SBOM quality is the floor on which drift detection sits.
Evidence retention. When drift is detected, the system needs to capture enough evidence to investigate without preserving so much that storage explodes. The right pattern is short-lived raw evidence and long-lived structured summaries, with evidence escalation triggered by alert severity.
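The short-raw, long-summary, severity-escalated pattern can be sketched as a retention schedule. The specific day counts are illustrative defaults, not recommendations:

```python
def retention_days(alert_severity, evidence_kind):
    """Short-lived raw evidence, long-lived structured summaries,
    with raw capture escalated by alert severity. Numbers are illustrative."""
    base = {"raw": 7, "summary": 365}[evidence_kind]
    if alert_severity == "high" and evidence_kind == "raw":
        return 90   # keep raw evidence longer when the alert warrants it
    return base
```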
Performance. Runtime monitoring runs inline with workload execution. eBPF-based collectors offer low overhead but require kernel and platform support. Userspace agents are portable but slower. Sidecar-based approaches have isolation benefits but multiply resource consumption. The choice is workload-dependent and needs to be made deliberately.
What drift detection enables
When drift is captured systematically, several capabilities become possible.
Incident response gains a precise timeline. "Container payments-api-7c4 started executing process bash -c curl ... at 02:14 UTC, which is not present in the admitted SBOM" is the first sentence of a useful incident report.
Vulnerability response gains workload-level granularity. When a CVE drops on a popular package, the question is not "do any of our images carry this package?" but "do any of our running workloads currently carry this package, possibly with drift from their admitted version?"
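That runtime-inventory question is a query over observed package state rather than image manifests. A minimal sketch, assuming a simple "name@version" inventory format and invented function names:

```python
def affected_workloads(running_inventory, package_name, vulnerable_versions):
    """running_inventory maps workload -> set of observed "name@version" strings,
    which may differ from the admitted SBOM precisely because of drift."""
    hits = []
    for workload, packages in running_inventory.items():
        for pkg in packages:
            name, _, version = pkg.partition("@")
            if name == package_name and version in vulnerable_versions:
                hits.append(workload)
    return hits
```

Running this against observed state rather than the admitted SBOM is the entire point: a workload that drifted onto a vulnerable version still shows up.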
Vendor and third-party risk gain visibility. A vendor-supplied container that drifts during normal operation is a vendor whose update behavior the security team needs to understand. The drift signal is the first place this comes into focus.
Policy authoring gains feedback. A policy that admits a workload whose runtime behavior is consistently incompatible with the admission decision is a policy that needs revisiting. Drift becomes a signal of policy quality.
How Safeguard Helps
Safeguard treats runtime drift as the fourth gate in a single policy plane, sharing definitions and audit infrastructure with the earlier gates.
At PR time, Safeguard captures the dependency manifests and base images that will compose the workload, evaluating them against the policy set so the admitted SBOM has been pre-validated long before runtime.
At build time, Safeguard generates the SBOM that will serve as the runtime baseline, signs the artifact, and stores the attestations the cluster will use to verify what was admitted.
At admission time, Safeguard's webhook records the admitted SBOM and signature for each workload, establishing the contract that drift detection will check against.
At runtime, Safeguard collects filesystem, process, and network telemetry and compares it against the admitted SBOM. Drift events are categorized — benign update, configuration change, suspicious modification, known-malicious indicator — and routed according to policy: dashboard entries for low-severity, paged alerts for high-severity, and automated workload isolation for indicators with very high specificity.
Drift signals feed back into PR-time and admission-time policy: an image whose pods drift consistently is downgraded in the trust model, prompting tighter review on its next deploy. The result is a supply chain control that does not stop watching when the workload starts running, treats the SBOM as a living contract, and detects the moments when running reality diverges from declared truth before those moments turn into incidents.