You can have a perfect SBOM. You can scan every image before deployment. You can patch every known vulnerability within 24 hours. And you can still get breached through a zero-day, a misconfiguration, or an attack vector that no scanner would catch.
Static analysis and pre-deployment scanning are necessary but insufficient. Runtime threat detection -- watching what your software actually does in production -- is the layer that catches what those earlier checks miss.
This is not a new idea, but the tooling for cloud-native environments has matured significantly in 2025. Here is a practical guide to implementing runtime detection for containerized workloads.
Why Runtime Detection Matters for Supply Chain Security
Supply chain attacks are specifically designed to evade pre-deployment scanning. A malicious package that passes all static checks but phones home at runtime. A backdoor that only activates when specific environment variables are present. A compromised dependency that behaves normally during testing but exfiltrates data in production.
These are the attacks that runtime detection is built to catch.
The fundamental insight is this: legitimate software has predictable behavior patterns. It opens the same types of network connections, accesses the same file paths, spawns the same processes. A supply chain compromise almost always introduces anomalous behavior -- unusual network destinations, unexpected file access, process spawning that does not match the application's normal pattern.
The eBPF Revolution
The technology that made cloud-native runtime detection practical is eBPF (extended Berkeley Packet Filter). Without getting too deep into kernel architecture, eBPF lets you run sandboxed programs inside the Linux kernel that can observe system calls, network packets, and other kernel events without modifying the kernel itself or adding significant overhead.
Before eBPF, runtime monitoring meant either:
- Kernel modules that were fragile and risky to deploy in production
- Userspace agents that added latency and missed events
- System call interception that broke application behavior
eBPF gives you kernel-level visibility with userspace safety. It is the foundation that tools like Falco, Tetragon, and KubeArmor are built on.
Building a Runtime Detection Strategy
Layer 1: System Call Monitoring
At the lowest level, you want visibility into what system calls your containers are making. System calls are the interface between user programs and the kernel -- every file open, network connection, process fork, and memory allocation goes through a system call.
For supply chain threat detection, the most interesting system calls are:
- execve / execveat -- process execution. A web server spawning a shell is almost always malicious.
- connect -- outbound network connections. Unexpected destinations are a strong indicator of data exfiltration or C2 communication.
- open / openat -- file access. A Node.js application reading /etc/shadow or accessing SSH keys is suspicious.
- ptrace -- process tracing. Used by debuggers, also used by malware to inject into other processes.
The key is establishing baselines. You need to know what normal looks like for each workload before you can detect anomalies.
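The baselining idea can be sketched in a few lines: learn the set of syscalls each workload makes during a quiet period, then flag anything outside that set. The class below is a simplified illustration; production tools use richer profiles (arguments, frequencies, parent processes):

```python
from collections import defaultdict

class SyscallBaseline:
    """Learn the syscall set per workload, then flag deviations.
    A simplified sketch; real profiles track much more than syscall names."""

    def __init__(self):
        self.seen = defaultdict(set)
        self.learning = True  # flip to False once the baseline is trusted

    def observe(self, workload: str, syscall: str) -> bool:
        """Record an event; return True if it is anomalous (after learning)."""
        if self.learning:
            self.seen[workload].add(syscall)
            return False
        return syscall not in self.seen[workload]
```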
Layer 2: Network Behavior Analysis
Network monitoring at the container level is particularly effective for detecting supply chain compromises. Most malicious packages need to communicate externally -- to exfiltrate data, download additional payloads, or establish command-and-control channels.
Effective network detection involves:
DNS monitoring. Track DNS queries from each container. A sudden query to a domain that has never been seen before, especially a recently registered domain or one with high entropy in the name, is a strong signal.
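The "high entropy" signal is easy to compute: machine-generated (DGA-style) domain labels have noticeably higher Shannon entropy than human-chosen names. A rough sketch, with an illustrative threshold you would tune against your own traffic:

```python
import math
from collections import Counter

def label_entropy(domain: str) -> float:
    """Shannon entropy (bits per character) of the leftmost DNS label."""
    label = domain.split(".")[0].lower()
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    # 3.5 bits/char is an illustrative starting point, not a tuned value.
    return label_entropy(domain) > threshold
```

Entropy alone is noisy (CDN hostnames score high too), so treat it as one feature alongside domain age and novelty, not a verdict on its own.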
Connection profiling. Build a baseline of what IP addresses and ports each workload connects to. Alert on new destinations, especially those outside your known infrastructure.
Data volume anomalies. A container that normally sends 10KB of data per request suddenly transmitting 10MB is worth investigating, regardless of the destination.
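A simple way to operationalize the volume check is a z-score against the workload's recent transfer sizes. This is a sketch; real agents use rolling windows per flow and more robust statistics:

```python
import statistics

def volume_anomaly(history: list[int], current: int, z: float = 4.0) -> bool:
    """Flag a transfer far above the workload's baseline sizes (in bytes).
    Simplified sketch: plain z-score over a static history list."""
    if len(history) < 10:
        return False  # not enough baseline to judge
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (current - mean) / stdev > z
```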
Protocol analysis. Detect when containers use unexpected protocols. A web application making raw TCP connections to non-standard ports, or tunneling data through DNS queries, indicates compromise.
Layer 3: File Integrity Monitoring
Container images are supposed to be immutable. Once deployed, the filesystem inside a container should not change in unexpected ways. Runtime file integrity monitoring verifies this assumption.
Key signals:
- Binary modification. Any change to executable files inside a running container is suspicious.
- Configuration tampering. Changes to application configuration files at runtime may indicate an attacker establishing persistence.
- Unexpected file creation. New files appearing in /tmp, /dev/shm, or other writable locations, particularly executables or scripts.
- Sensitive file access. Reading credential files, key material, or environment files that the application does not normally access.
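The immutability check reduces to hashing the filesystem at deploy time and diffing against later snapshots. A minimal sketch (real tools hook file events via eBPF rather than rescanning, but the comparison logic is the same):

```python
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict[str, str]:
    """SHA-256 every regular file under root -- the image's baseline state."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def diff(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Compare a runtime snapshot against the baseline snapshot."""
    return {
        "modified": [f for f in baseline if f in current and current[f] != baseline[f]],
        "created": [f for f in current if f not in baseline],
        "deleted": [f for f in baseline if f not in current],
    }
```

In practice you would exclude expected-writable paths (logs, caches) from the baseline, or the diff drowns in noise.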
Layer 4: SBOM-Correlated Detection
This is where supply chain security and runtime detection converge. If you have an SBOM for your deployed container, you can correlate runtime behavior against the expected behavior of the declared components.
For example:
- Your SBOM says the container runs a Python Flask application with specific dependencies
- Runtime monitoring detects a Go binary executing inside the container
- That binary is not part of any declared component
- This is a strong indicator that something was added to the image that should not be there
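The correlation step itself is a set difference: executables observed at runtime minus executables any SBOM component accounts for. The SBOM shape below is a simplified stand-in, not the CycloneDX or SPDX schema:

```python
# Sketch of SBOM-correlated process checking. The component dict shape is a
# simplified stand-in for a real SBOM format, used only for illustration.

def unexpected_executables(sbom_components: list[dict], observed: set[str]) -> set[str]:
    """Return executables seen at runtime that no SBOM component declares."""
    declared = set()
    for comp in sbom_components:
        declared.update(comp.get("executables", []))
    return observed - declared
```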
Similarly, if a known-vulnerable component in your SBOM starts exhibiting behavior consistent with exploitation of its specific vulnerability, you can prioritize that alert higher than a generic anomaly.
Implementation Patterns
Sidecar vs. DaemonSet
For Kubernetes environments, there are two deployment patterns for runtime detection agents:
DaemonSet -- one monitoring agent per node. Lower resource overhead, simpler management, but requires privileged access to the host.
Sidecar -- one monitoring agent per pod. Better isolation, per-workload configuration, but higher aggregate resource usage.
For most organizations, the DaemonSet pattern is the better starting point. It provides coverage across all workloads on a node with a single deployment.
Alert Tuning
The biggest challenge in runtime detection is not collecting data -- it is reducing false positives to a manageable level. Expect to spend significant time tuning.
Start in observation mode. Run your detection rules without alerting for at least two weeks. Analyze the output to understand your baseline. Then gradually enable alerting, starting with the highest-confidence signals (unexpected process execution, connections to known-bad destinations) and working toward more nuanced behavioral analysis.
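One way to turn the observation period into a plan is to count rule firings and split them into rules safe to enable now versus rules that still need tuning. The rule names and the high-confidence set here are illustrative:

```python
from collections import Counter

# Illustrative: which rules you consider high-confidence is a local decision.
HIGH_CONFIDENCE = {"shell-spawned-by-web-server", "known-bad-destination"}

def tuning_plan(firings: list[str]) -> dict:
    """Split observed rule firings into 'enable now' (high-confidence) and
    'keep observing', with the latter ranked noisiest-first."""
    counts = Counter(firings)
    return {
        "enable": sorted(r for r in counts if r in HIGH_CONFIDENCE),
        "observe": [r for r, _ in counts.most_common() if r not in HIGH_CONFIDENCE],
    }
```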
Response Automation
Detection without response is just expensive logging. For high-confidence detections, automate the response:
- Network isolation -- automatically restrict a container's network access when a C2 connection is detected
- Process termination -- kill unauthorized processes inside containers
- Workload quarantine -- cordon the affected node and prevent new workloads from scheduling
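The dispatch logic behind tiered response can be sketched as a confidence-gated lookup. The detection names and actions are placeholders; a real implementation would invoke your CNI, container runtime, or Kubernetes APIs:

```python
# Hedged sketch of confidence-tiered response dispatch. Detection names and
# actions are hypothetical; the returned strings stand in for real API calls.

AUTOMATED = {
    "c2-connection": "isolate_network",      # e.g. apply a deny-egress NetworkPolicy
    "unauthorized-process": "kill_process",  # terminate the offending PID
    "compromised-node": "quarantine_node",   # cordon and drain the node
}

def respond(detection: str, confidence: float, threshold: float = 0.8) -> str:
    """Automate only high-confidence detections with a known playbook;
    everything else goes to a human triage queue with context attached."""
    if confidence >= threshold and detection in AUTOMATED:
        return AUTOMATED[detection]
    return "open_investigation"
```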
For lower-confidence detections, create structured investigation workflows that give security teams the context they need to triage quickly.
How Safeguard.sh Helps
Safeguard.sh connects your SBOM data to your runtime environment. By correlating the components declared in your SBOM with actual runtime behavior, Safeguard enables SBOM-aware runtime detection -- alerting not just on generic anomalies but on behavior that specifically indicates supply chain compromise. Our policy gates can integrate runtime health signals, and our vulnerability tracking provides the context needed to prioritize runtime alerts. When your runtime detection system flags suspicious behavior, Safeguard tells you exactly which component is responsible and what vulnerabilities it carries.