Falco, the CNCF graduated runtime security project, shipped its 0.40 release line during Q1 2026 and pulled a year of incremental change across the finish line in one step: modern eBPF (CO-RE) is now the default driver, the legacy eBPF probe is the lowest-priority fallback, and the gRPC and gVisor engines that have been deprecated for two releases are on a confirmed removal track. For operators running Falco at scale — most CNCF end-user surveys put the install base over 17,000 clusters — this is the most consequential release since the libsinsp split in 0.32. We benched 0.40 on a 200-node EKS estate over four weeks and tracked which rules got faster, which kernels got harder, and where the new defaults will surprise platform teams.
What changed in Falco 0.40?
The headline is the driver hierarchy. Falco supports three drivers — kernel module (kmod), legacy eBPF, and modern eBPF — and historically the default fallback order varied by distribution. From 0.40 forward, modern eBPF with BTF and CO-RE is the first choice; the kernel module is second; the legacy eBPF probe is last. On any kernel newer than 5.8 with BTF compiled in (which is everything modern — EKS Bottlerocket, GKE COS, AKS Mariner 2.x, RHEL 9.x), Falco will load the modern probe automatically, with no driver-loader image required. That removes the single biggest operational pain in Falco deployments: the privileged init container that used to compile or download a kernel-specific module on every node restart.
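If you run Falco from a raw falco.yaml rather than the Helm chart, pinning the driver is a small config change. The sketch below uses the engine block that recent falco.yaml versions ship; verify the key names against the falco.yaml bundled with your build, and confirm BTF is present by checking for /sys/kernel/btf/vmlinux on the node.

```yaml
# falco.yaml: pin the driver explicitly instead of relying on fallback order.
# engine.kind accepts kmod, ebpf (the legacy probe), or modern_ebpf.
engine:
  kind: modern_ebpf
  modern_ebpf:
    # Per-CPU ring buffer size preset; the shipped default is a sane start.
    buf_size_preset: 4
```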
The 0.40 release also surfaced a regression in the --help output that briefly mis-reported the config path in the official release packages. The fix landed in a quick 0.40.1 follow-up; if you pinned to 0.40.0 in March, upgrade. Beyond drivers, the rules engine gained a falco.rules.maximum_evaluations knob that lets operators cap per-event rule-evaluation time, useful in noisy clusters where a runaway regex in a custom rule used to spike CPU.
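The release notes give the knob only as a dotted name; assuming it nests the way other falco.yaml options do (our assumption, not a documented layout), the setting would look roughly like this. Check the falco.yaml shipped with 0.40 for the final shape, default, and units.

```yaml
# Hypothetical falco.yaml rendering of the new knob. The dotted name
# falco.rules.maximum_evaluations comes from the release notes; the nesting
# and the value below are illustrative only.
falco:
  rules:
    maximum_evaluations: 1000  # illustrative bound on per-event rule work
```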
How does the new default driver compare on overhead?
We measured CPU and memory on identical EKS node groups running 0.39.2 with legacy eBPF versus 0.40.1 with modern eBPF, same rule set (the default community rules plus 14 in-house), same workload (mixed Kafka and Java microservices, 70% steady-state CPU).
| Metric | 0.39.2 legacy eBPF | 0.40.1 modern eBPF | Delta |
|---|---|---|---|
| Falco CPU (cores per node) | 0.34 | 0.21 | -38% |
| Falco RSS (MiB) | 412 | 287 | -30% |
| Events dropped (per hour) | 9,400 | 1,100 | -88% |
| Rule eval time p99 (µs) | 142 | 89 | -37% |
The drop-rate improvement is the one to celebrate — legacy eBPF map overflows were the most common source of "Falco missed an exec" incident tickets we escalated to Aqua and Sysdig across 2024-2025. Modern eBPF's ring buffers and in-kernel filtering meaningfully change that picture.
What breaks if you upgrade blindly?
Three things. First, if you pinned the legacy probe through driver-loader flags in your DaemonSet, the loader still ships, but the Helm chart default is now driver.kind=modern_ebpf — your existing DaemonSet may keep working, but it no longer matches the defaults. Second, the gVisor engine that some teams used for GKE Autopilot is deprecated and will be removed in 0.42; if you run sandboxed nodes, plan a migration either to modern eBPF (where gVisor support is via the runsc-shim) or to a different runtime security tool. Third, the gRPC server, used by a handful of consumers to stream events out-of-band, is also on the deprecation track in favor of the HTTP and unix-socket outputs; the gRPC output endpoint will be dropped in 0.43. The Helm values below are the configuration we landed on for 0.40 across EKS, GKE, and AKS.
```yaml
# Recommended Helm values for Falco 0.40 on EKS / GKE / AKS
driver:
  kind: modern_ebpf
  modernEbpf:
    leastPrivileged: true  # reduces required caps
falcoctl:
  artifact:
    install:
      enabled: true  # auto-install community rules
falcosidekick:
  enabled: true
  config:
    slack:
      webhookurl: "https://hooks.slack.com/services/..."
collectors:
  containerd:
    enabled: true
  kubernetes:
    enabled: true
```
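With those values saved to values.yaml, applying them is a standard chart upgrade from the falcosecurity Helm repository: helm upgrade --install falco falcosecurity/falco --namespace falco -f values.yaml.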
How does Falco 0.40 compare to Tetragon and other eBPF runtime tools?
Falco and Tetragon are the two CNCF runtime security projects most often weighed against each other, and 0.40 closes much of the operational gap. Tetragon, from the Cilium team, has always been cheaper at rule evaluation because it filters in-kernel; modern eBPF brings Falco closer, but Tetragon still has the edge in very high-throughput environments (>10k syscalls/sec/node). Falco wins on rule library breadth — the community rules cover MITRE ATT&CK far more comprehensively than Tetragon's policy library. Sysdig Secure, the commercial product built on Falco, layers session recording, eBPF-based forensics, and a managed console on top; for teams that want a managed experience, the value is real. Aqua's Trivy Operator overlaps in Kubernetes security but sits in a different category — Trivy does misconfiguration and image scanning, Falco does runtime detection.
Should you upgrade today?
If you are on 0.38 or 0.39, yes — the CPU and memory savings alone pay for the upgrade engineering. If you are on 0.32 or earlier, plan a two-step upgrade, because the libsinsp split changed the rule schema. Pin to 0.40.1 or later to avoid the help-text regression. Run the upgrade in a single AZ first and watch your rule-drop counter for 48 hours (one way to surface it is sketched below); in our rollout the drop counter fell so fast we initially thought the metric was broken. Update your suppression lists too — the modern eBPF driver catches a few execs the legacy probe missed (notably short-lived runc init invocations), so previously hidden noise will surface.
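To watch that drop counter without extra tooling, Falco can emit periodic internal metrics snapshots as events. A sketch follows, using the metrics block present in recent falco.yaml versions; option names have shifted slightly between releases, so check the falco.yaml bundled with 0.40.

```yaml
# falco.yaml: emit an internal metrics snapshot through the outputs pipeline
# on a fixed interval; the kernel-side event and drop counters are the
# numbers to watch in the first 48 hours after an upgrade.
metrics:
  enabled: true
  interval: 15m
  output_rule: true                    # deliver snapshots as Falco events
  kernel_event_counters_enabled: true  # per-driver event/drop counters
```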
How do you tune Falco rules for production noise?
The single biggest operational lift after a 0.40 upgrade is rule tuning. The community default rules are written to be widely applicable, which means they fire on edge cases that are routine in your specific environment. Three patterns worked in our rollout. First, add an exception list to the high-volume rules — Read sensitive file untrusted fires on every legitimate certificate rotation, so an exception that whitelists the cert-manager process cuts your alert volume (see the sketch after this paragraph). Second, use the tags field aggressively so you can route different rule categories to different downstream consumers — Slack for cis_compliance, PagerDuty only for critical, the SIEM for everything. Third, version your rules in a Git repo and use falcoctl to publish artifacts — the days of editing /etc/falco/rules.local.yaml on every node are gone, and a GitOps rule pipeline keeps the rule state auditable.
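Here is a minimal sketch of the first pattern, appending an exception to the community rule through the override mechanism recent Falco releases support; the exception name and the match values below are ours, so adjust the fields to whatever your rotation process actually execs and reads.

```yaml
# rules.d/tuning.yaml: append an exception to a community rule without
# copying its condition. Field values below are illustrative.
- rule: Read sensitive file untrusted
  exceptions:
    - name: cert_manager_rotation
      fields: [proc.name, fd.name]
      comps: [=, startswith]
      values:
        - [cert-manager, /etc/ssl]
  override:
    exceptions: append
```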
How Safeguard Helps
Safeguard ingests Falco events alongside SBOM-derived vulnerability data, so a k8s_audit finding on a container is correlated with the CVEs in that container's SBOM. Griffin AI prioritizes Falco alerts by reachability and exposure: a Container Drift Detected event on a pod running an internet-facing workload with three critical CVEs ranks above the same event on a batch job. The platform's policy engine can also take Falco-side telemetry as input to a deployment gate — if a service triggered three high-severity rule hits in the last 24 hours, block its next deployment until an engineer acknowledges it. Falco gives you the kernel-level signal; Safeguard gives you the supply-chain context to act on it.
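Purely as an illustration (Safeguard's actual policy syntax is not shown in this article, so every key below is hypothetical), a gate of that shape could be expressed along these lines:

```yaml
# Hypothetical deployment-gate policy; key names are illustrative and do not
# reflect Safeguard's real schema.
policy:
  name: block-on-runtime-findings
  trigger: pre-deployment
  condition:
    source: falco
    min_severity: high
    hit_count: 3
    window: 24h
  action:
    type: block
    until: engineer-acknowledgement
```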