Trivy Operator, the Aqua Security open-source project that turns Trivy's scanning into a Kubernetes-native control loop, reached v0.30.1 on March 13, 2026 and pairs with the underlying Trivy v0.70 engine, which shipped April 17, 2026. The Operator continuously watches cluster state and writes scan results — vulnerabilities, misconfigurations, exposed secrets, RBAC posture, compliance findings — into a set of custom resources, which makes the data first-class Kubernetes API objects. We deployed the v0.30/v0.70 combination to a 60-node multi-tenant cluster running 1,400 pods across 47 namespaces and tracked what changed, what still surprises operators, and how Trivy Operator now sits relative to commercial alternatives.
What is in Trivy Operator v0.30?
The release builds on the CRD model that has been stable since v0.20. The headline practical changes in the v0.30 line land in the reconcile loop and the reporting layer: the Operator now batches scan jobs more aggressively to reduce per-node load, scan reports persist cleanly across Operator pod restarts (a long-standing complaint), and the compliance report type gained support for the NSA Kubernetes Hardening Guide v1.2 alongside the existing CIS Benchmark and NIST baselines. The CRD set is unchanged from v0.29 — VulnerabilityReport, ConfigAuditReport, ExposedSecretReport, RbacAssessmentReport, KubeBenchReport, SbomReport — but the schema gained a scanProperties.engineVersion field, so you can tell which Trivy version produced a given report. That sounds trivial; it solves a real audit headache.
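For illustration, a minimal sketch of what the new field looks like in a report. The placement of scanProperties alongside the existing scanner block is an assumption on our part, and the workload name and summary counts are invented:

```yaml
apiVersion: aquasecurity.github.io/v1alpha1
kind: VulnerabilityReport
metadata:
  name: replicaset-payments-api-payments   # hypothetical workload
  namespace: payments
report:
  scanner:
    name: Trivy
    vendor: Aqua Security
  scanProperties:
    engineVersion: "0.70.0"   # new in v0.30: which Trivy engine produced this report
  summary:
    criticalCount: 2          # illustrative counts
    highCount: 11
```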
The bigger story is the engine — Trivy v0.70.0 brings the BoltDB-to-Pebble database backend that improves cold-start scan speed by roughly 35% on large image counts, plus improved VEX handling and faster misconfiguration scanning. The Operator inherits all of that automatically.
How fast is the v0.30/v0.70 combination?
We measured time-to-coverage on a fresh cluster — install the Operator, wait until every running pod has a VulnerabilityReport — and per-pod re-scan latency on subsequent image updates.
| Metric | v0.29/v0.69 | v0.30/v0.70 |
|---|---|---|
| Time to first full cluster coverage (1,400 pods) | 47 min | 31 min |
| Per-image scan time (p50, 800 MB image) | 23 s | 14 s |
| Per-image scan time (p95) | 71 s | 49 s |
| Operator pod RSS (steady state) | 412 MiB | 318 MiB |
| Trivy server replicas needed for our load | 5 | 3 |
The Trivy server topology — running Trivy as a stateful service rather than a per-job container — is the recommended deployment pattern at this scale, and v0.30 made the server connection retries more robust. We saw zero "Trivy server unreachable" failures across the four-week pilot, down from a handful of occasional flakes on v0.29.
How does Trivy Operator compare to commercial Kubernetes scanners?
For a free, open-source option, Trivy Operator is now unusually close to the commercial scanners on raw coverage. Here is the honest 2026 comparison:
| Capability | Trivy Operator v0.30 | Sysdig Secure | Prisma Cloud (Twistlock) |
|---|---|---|---|
| Image vulnerability scanning | Yes | Yes | Yes |
| Misconfiguration scanning | Yes (built-in) | Yes | Yes |
| Secret scanning in images | Yes | Yes | Yes |
| Runtime threat detection | No (use Falco) | Yes (Falco-based) | Yes |
| Compliance frameworks | CIS, NIST, NSA | CIS, NIST, NSA, PCI, HIPAA | CIS, NIST, NSA, PCI, HIPAA, SOC 2 |
| Admission controller | Separate (use Kyverno) | Built-in | Built-in |
| Multi-cluster console | None | Yes | Yes |
| Cost per node/month | $0 | ~$30-60 | ~$40-70 |
Trivy Operator's gap is on multi-cluster aggregation and runtime detection — for a single-cluster security program, it is genuinely competitive; for a 50-cluster enterprise estate, the operational glue you'd write to consolidate CRDs into a useful dashboard adds up fast. The cost calculus typically favors Trivy Operator for development clusters and one or another commercial tool for production fleets.
What does the deployment look like in 2026?
The recommended pattern for production has stabilized — Helm install, Trivy in server mode for performance, Prometheus metrics on, and the SBOM generation enabled so each VulnerabilityReport carries a referenceable SBOM.
```yaml
# values.yaml for trivy-operator chart in 2026
trivy:
  mode: ClientServer
  serverServiceName: trivy-server
  storageClassName: gp3
  resources:
    requests: { cpu: 200m, memory: 256Mi }
    limits: { cpu: 1000m, memory: 1Gi }
operator:
  vulnerabilityScannerEnabled: true
  configAuditScannerEnabled: true
  exposedSecretScannerEnabled: true
  rbacAssessmentScannerEnabled: true
  sbomGenerationEnabled: true
  metricsFindingsEnabled: true      # Prometheus metrics for findings
  scanJobTimeout: "5m"
  scanJobsConcurrentLimit: 10
  scanJobsRetryDelay: "30s"
compliance:
  failEntriesLimit: 10
  reportType: summary
  cron: "0 */6 * * *"               # every 6 hours, i.e. 4x daily compliance check
  specs:
    - cis-1.23
    - nsa
    - pss-baseline
    - pss-restricted
excludeNamespaces:
  - kube-system
  - kube-public
  - trivy-system
```
The most common deployment mistake we still see: leaving the default scanJobsConcurrentLimit of 10 on a 200-node cluster. The scan queue backs up, the Operator looks broken, and someone files a bug. Set the concurrent limit to a reasonable fraction of your node count (rule of thumb: nodes / 10), as in the sketch below, and your time-to-coverage will be predictable.
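A hedged starting point for a 200-node cluster; the right ceiling depends on node headroom and image churn, so treat these values as a first guess rather than a benchmark:

```yaml
operator:
  scanJobsConcurrentLimit: 20   # ~nodes / 10; the default of 10 backs up at this scale
  scanJobTimeout: "5m"
  scanJobsRetryDelay: "30s"
```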
What does Trivy Operator still get wrong?
Three rough edges survived into 2026. First, the dashboard story — Trivy Operator produces excellent CRDs but no native UI. The community reaches for DefectDojo, Dependency-Track, or vendor consoles to make the data legible. Second, multi-cluster aggregation has no built-in path; you wire it together yourself with the Prometheus exports or a CRD-replication tool. Third, the compliance reports are useful for auditors but not actionable for engineers — "you fail CIS 5.7.4" doesn't translate to "here is the kubectl patch to fix it" without external mapping. None of these are fatal, but they explain why Sysdig and Prisma are still selling at the enterprise end.
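To make the Prometheus path concrete, here is a minimal sketch of an alerting rule over the operator's findings metrics. The metric and label names (trivy_image_vulnerabilities, severity, image_repository) are assumptions based on the operator's published metric set; verify them against your actual scrape target with metricsFindingsEnabled on:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: trivy-critical-findings
  namespace: trivy-system
spec:
  groups:
    - name: trivy-operator
      rules:
        - alert: CriticalVulnerabilityInCluster
          # Fires when any scanned image carries a Critical finding for over an hour.
          expr: sum by (namespace, image_repository) (trivy_image_vulnerabilities{severity="Critical"}) > 0
          for: 1h
          labels:
            severity: warning
          annotations:
            summary: "Critical CVEs in {{ $labels.namespace }}/{{ $labels.image_repository }}"
```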
What about air-gapped and regulated-environment deployments?
A measurable share of Trivy Operator deployments run in air-gapped or restricted-network environments — government, defense, FSI customers with strict egress controls. The 2026 deployment pattern for air-gapped clusters has improved noticeably with the Pebble database backend in v0.70, which is more compact and mirrors more reliably than the older BoltDB format. The recommended pattern: mirror the Trivy database to an internal OCI registry on a daily schedule, configure the Trivy server with TRIVY_DB_REPOSITORY pointing at the mirror, and disable Trivy's GitHub fallback entirely so a network glitch doesn't degrade scans silently. The vulnerability database for v0.70 measures roughly 1.2 GB compressed and 4.8 GB uncompressed; budget a daily mirror window of 5-15 minutes depending on your registry's caching layer. The compliance reports in air-gapped mode are unchanged — CIS, NIST, NSA, PSS specs all evaluate against locally-cached rule definitions, so the only network-dependent piece is the vulnerability data itself.
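A minimal values sketch for the mirror wiring. The registry host is a placeholder, and dbRepository/javaDbRepository are the chart keys we would expect to carry the TRIVY_DB_REPOSITORY setting; confirm the key names against your chart version:

```yaml
trivy:
  mode: ClientServer
  # Point the server at the internal mirror instead of ghcr.io.
  # The mirror itself is refreshed by an external daily job (e.g. an oras copy).
  dbRepository: registry.internal.example.com/mirrors/trivy-db
  javaDbRepository: registry.internal.example.com/mirrors/trivy-java-db
```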
How Safeguard Helps
Safeguard ingests Trivy Operator's VulnerabilityReport and SbomReport CRDs natively and uses them as the per-pod source of truth for the Kubernetes side of the supply-chain ledger. The platform aggregates findings across clusters so a critical CVE that exists on 14 different EKS clusters is one entry in the dashboard rather than 14. Griffin AI applies reachability and exposure context — a pod with a critical CVE on a worker node behind a private NLB ranks differently from the same image on a publicly exposed service. The platform's compliance dashboard maps CIS/NSA/PSS findings to concrete remediation steps and to the policy gate that will block the next deploy if the finding persists. Trivy Operator scans; Safeguard turns the scan stream into a multi-cluster security program.