Tetragon and Falco are the two CNCF runtime security projects most often weighed against each other, and in 2026 the comparison is closer than it has ever been. Falco's 0.40 release made modern eBPF the default driver and brought its CPU and memory profile down significantly; Tetragon, from the Cilium and Isovalent team, has continued to harden its in-kernel enforcement model. We ran both on the same 400-node EKS cluster over six weeks with an identical workload mix (Java microservices, Python ML inference, Kafka, Postgres operators), and compared them on MITRE ATT&CK for Containers coverage, overhead under load, and the ergonomics of policy authoring.
What is the architectural difference between Tetragon and Falco?
Both projects use eBPF to instrument the Linux kernel and observe security-relevant events. The split is in how they process those events. Falco streams kernel events to a user-space rule engine where matching and alerting happen; Tetragon does in-kernel filtering and aggregation, emitting only events that match a TracingPolicy. The practical consequence is that Tetragon has near-zero overhead on events that don't match policy (because they are filtered before they leave the kernel), while Falco's overhead is roughly proportional to event volume regardless of rule matches. For high-event-rate workloads (Kafka brokers, busy web servers), this matters.
The second architectural difference is enforcement. Falco is observability-first — it alerts on detection but does not block. Tetragon supports synchronous in-kernel enforcement: a TracingPolicy can be configured with a SIGKILL action so the offending process is killed before it returns from the syscall. This is a fundamentally different security primitive — Falco tells you something happened, Tetragon prevents it.
How do they compare on overhead under realistic load?
Both ran as DaemonSets: Falco with its default rule set plus 14 in-house rules, and Tetragon with equivalent TracingPolicies translating the same intent.
| Metric | Falco 0.40.1 | Tetragon 1.4 |
|---|---|---|
| Per-node CPU (steady state) | 0.21 cores | 0.09 cores |
| Per-node RSS | 287 MiB | 142 MiB |
| Events processed/sec (cluster total) | 184,000 | 184,000 (in-kernel) |
| Events emitted/sec to user space | 184,000 | 7,200 |
| Added syscall latency (p99, µs) | +0.4 | +0.2 |
| Enforcement available | No | Yes |
Tetragon's CPU and memory advantage comes from the in-kernel filtering — for the same workload, only the events that match a policy reach user space. At very high event rates the gap widens. For low-event-rate environments (small clusters, batch workloads), the two projects look much more similar.
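For reference, here is a minimal sketch of how a deployment like this can be wired up with the community Helm charts. The key names (driver.kind, falcosidekick.enabled, customRules) come from the falcosecurity/falco chart; the rule file name is a placeholder, and Tetragon's chart defaults are assumed.

```yaml
# falco-values.yaml -- sketch for:
#   helm install falco falcosecurity/falco -n falco -f falco-values.yaml
driver:
  kind: modern_ebpf        # the modern eBPF driver, Falco's current default
falcosidekick:
  enabled: true            # fan alerts out to Slack, PagerDuty, the SIEM
customRules:
  in-house.yaml: |-
    # the 14 in-house rules go here, in the same format as the
    # "Write to /etc/shadow" rule shown later in this post

# Tetragon installs from its own chart with defaults; TracingPolicies
# are applied separately, e.g. kubectl apply -f block-shadow-writes.yaml
```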
Which one has better MITRE ATT&CK coverage out of the box?
We graded both against MITRE ATT&CK for Containers, with default community rules for Falco and the Tetragon library policies.
| Tactic | Falco default rules | Tetragon library |
|---|---|---|
| Initial Access | 8 | 5 |
| Execution | 24 | 17 |
| Persistence | 11 | 8 |
| Privilege Escalation | 18 | 14 |
| Defense Evasion | 16 | 9 |
| Credential Access | 12 | 7 |
| Discovery | 9 | 11 |
| Lateral Movement | 7 | 13 |
| Collection | 4 | 6 |
| Exfiltration | 8 | 12 |
Falco wins on Execution, Defense Evasion, and Credential Access, the categories where the rule community has invested heavily. Tetragon wins on Discovery, Lateral Movement, Collection, and Exfiltration, largely because it ties tightly to Cilium and observes network-runtime correlations Falco cannot. Neither is a complete coverage story alone, and both projects keep adding rules each quarter.
What does policy authoring look like?
Falco uses a YAML rules language built around condition expressions over the libsinsp event schema. Tetragon uses Kubernetes CRDs (TracingPolicy and its namespaced variant, TracingPolicyNamespaced) authored in YAML and scoped by namespace or pod labels. The same intent, blocking writes to /etc/shadow from any non-init container, looks like this in each:
```yaml
# Falco rule (observability only)
- rule: Write to /etc/shadow
  desc: Detect non-init writes to /etc/shadow
  condition: >
    open_write and fd.name=/etc/shadow and
    not proc.name in (init, sshd)
  output: >
    Non-init wrote /etc/shadow (user=%user.name proc=%proc.name)
  priority: WARNING
  tags: [filesystem, credentials]
```
```yaml
# Tetragon TracingPolicy (with enforcement)
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: block-shadow-writes
spec:
  kprobes:
  - call: "fd_install"
    syscall: false
    args:
    - index: 0
      type: "int"
    - index: 1
      type: "file"
    selectors:
    - matchArgs:
      - index: 1
        operator: "Equal"
        values:
        - "/etc/shadow"
      matchActions:
      - action: Sigkill
```
Falco's rules are easier to read; Tetragon's policies are more powerful because they can act. We landed on a hybrid pattern in our test cluster — Falco for the breadth of observability rules, Tetragon for the handful of critical paths where enforcement matters.
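The enforcement half of that hybrid is easiest to keep narrow with Tetragon's namespaced policy variant and a pod selector, so a Sigkill can never fire outside the workloads it was written for. A minimal sketch, where the namespace and labels are hypothetical and the pod-label filtering assumes Tetragon's policy filter is enabled:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicyNamespaced    # scoped to its own namespace only
metadata:
  name: block-shadow-writes
  namespace: payments            # hypothetical critical namespace
spec:
  podSelector:
    matchLabels:
      enforcement: "on"          # hypothetical label; only matching pods enforce
  kprobes:
  - call: "fd_install"
    syscall: false
    args:
    - index: 0
      type: "int"
    - index: 1
      type: "file"
    selectors:
    - matchArgs:
      - index: 1
        operator: "Equal"
        values:
        - "/etc/shadow"
      matchActions:
      - action: Sigkill
```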
Should you run both?
For a production Kubernetes security program in 2026, yes. Running Tetragon with a focused enforcement policy library alongside Falco with the broader observability ruleset costs less than running either tool alone with all rules enabled. The CNCF survey data agrees: among end-user organizations with both deployed (about 14% of CNCF members), the most common pattern is Falco for detection plus Tetragon for enforcement on critical syscalls. Sysdig Secure remains the commercial Falco-based product if you need a managed console; Isovalent's commercial Cilium Enterprise wraps Tetragon similarly. Pricing on either commercial path is roughly comparable.
What is the deployment topology that worked in production?
Four patterns we landed on after six weeks:

1. The Falco DaemonSet runs the full community rule library plus our 14 in-house rules in observability-only mode; alerts route through falcosidekick and onward to Slack, PagerDuty, and the SIEM.
2. The Tetragon DaemonSet runs a focused TracingPolicy library covering credential access (writes to /etc/shadow, reads from /proc/*/maps of non-self processes), known crypto-mining patterns, and lateral-movement primitives (kubectl exec into other pods); enforcement is enabled with Sigkill for the credential-access rules.
3. Both DaemonSets share a single eBPF feature-flag config, so an upgrade to one doesn't surprise the other.
4. Tetragon enforcement events flow into the same falcosidekick pipeline through a shim that translates Tetragon JSON into Falco event format, which keeps the downstream consumer list short (see the config sketch below).

The total per-node overhead with both running came to 0.28 cores and 410 MiB RSS, below either tool's all-rules deployment cost alone.
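The shared pipeline in the fourth pattern is mostly falcosidekick configuration. A sketch of the output side, using falcosidekick's documented config keys with placeholder endpoints; the shim only has to POST Falco-format JSON to falcosidekick's HTTP listener (port 2801 by default):

```yaml
# falcosidekick config -- sketch; all endpoints are placeholders
slack:
  webhookurl: "https://hooks.slack.com/services/XXXX/XXXX"
pagerduty:
  routingkey: "XXXXXXXXXXXX"
elasticsearch:
  hostport: "https://siem.internal:9200"   # stands in for "the SIEM"
# Falco posts here natively; the Tetragon shim posts translated events
# to the same listener, so every output sees one unified stream.
```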
How Safeguard Helps
Safeguard ingests both Falco events and Tetragon TracingPolicy hits and correlates them with the SBOM-derived vulnerability data for each container. When Tetragon kills a process that tried to write /etc/shadow, Safeguard cross-references the killed process's binary against the image's SBOM and CVE list to determine whether the action was symptomatic of a known exploit. Griffin AI clusters related runtime events across the cluster so that a coordinated attack hitting 40 pods produces one incident rather than 40 alerts. The platform's policy engine takes runtime telemetry as input to deployment gates — services whose runtime tags include enforcement-triggered:credential-access in the last 24 hours are blocked from new deployments until cleared. Runtime tools detect; Safeguard turns detection into supply-chain action.