Standard Kubernetes NetworkPolicies operate at Layers 3 and 4. They can allow or deny traffic based on IP addresses, ports, and pod selectors. For many environments, this is insufficient. You cannot write a standard NetworkPolicy that says "allow HTTP GET requests to /api/health but block POST requests to /api/admin." You cannot filter by DNS name. You cannot inspect TLS metadata.
Cilium, built on eBPF, extends network security to Layer 7. It understands application protocols, DNS resolution, and identity-based access control. For Kubernetes environments with real security requirements, Cilium is the network plugin that actually delivers.
Why Standard NetworkPolicies Fall Short
No Protocol Awareness
Standard NetworkPolicies see TCP and UDP packets. They cannot distinguish between an HTTP GET and an HTTP POST. A policy that allows port 80 allows all HTTP traffic, including exploitation attempts.
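For contrast, here is roughly all a standard NetworkPolicy can say about that traffic (names are illustrative): it can pin the source pods and the port, but nothing about the methods or paths carried over it.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-http   # illustrative name
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 80   # every HTTP method and path on this port is allowed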
No DNS Filtering
You cannot write a standard NetworkPolicy that allows traffic to "api.example.com." For destinations outside the cluster, you can only allow IP ranges. Because cloud services sit behind CDNs and rotate IP addresses, IP-based policies go stale constantly.
No Identity Beyond Pod Labels
NetworkPolicies select pods by labels, but most plugins ultimately enforce them against pod IP addresses, which are recycled as pods churn. Cilium derives a security identity from each endpoint's full label set and enforces policy on that identity in the datapath, so enforcement does not depend on fragile IP-to-pod mappings.
Default Allow
Standard Kubernetes networking defaults to allowing all traffic between pods. Without explicit deny policies, every pod can talk to every other pod. Many teams deploy NetworkPolicies for specific use cases but never implement a comprehensive default-deny posture.
Cilium Network Policies
CiliumNetworkPolicy Basics
CiliumNetworkPolicy resources extend the standard NetworkPolicy with additional capabilities. They are namespaced resources that apply to pods selected by endpoint selectors.
A basic CiliumNetworkPolicy that restricts ingress to a pod:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-server-policy
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/api/health"
        - method: GET
          path: "/api/v1/.*"
This policy allows the frontend pods to make GET requests to specific API paths on port 8080. All other traffic to the api-server pods is denied. This level of granularity is impossible with standard NetworkPolicies.
DNS-Aware Policies
Cilium intercepts DNS queries and uses the responses to dynamically populate network policies. You can write policies based on DNS names rather than IP addresses:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-api
spec:
  endpointSelector:
    matchLabels:
      app: backend
  egress:
  - toFQDNs:
    - matchName: "api.stripe.com"
    toPorts:
    - ports:
      - port: "443"
When the backend pod resolves api.stripe.com, Cilium learns the IP addresses and allows traffic to them. When the DNS entry changes, the policy updates automatically.
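One caveat worth stating: toFQDNs rules only work when Cilium's DNS proxy can observe the pod's lookups, which normally means the policy (or a companion policy) also carries an egress rule toward the cluster DNS with a dns rules section. A minimal sketch of such a rule, assuming CoreDNS runs in kube-system with the usual k8s-app: kube-dns label:

egress:
- toEndpoints:
  - matchLabels:
      k8s:io.kubernetes.pod.namespace: kube-system
      k8s:k8s-app: kube-dns
  toPorts:
  - ports:
    - port: "53"
      protocol: ANY
    rules:
      dns:
      - matchPattern: "*"   # lets the DNS proxy observe (and allow) all lookups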
L7 Protocol Filtering
Beyond HTTP, Cilium supports L7 filtering for DNS, Kafka, and gRPC. You can write policies that allow specific Kafka topics, specific gRPC methods, or specific DNS query types.
For Kafka:
rules:
  kafka:
  - role: produce
    topic: "orders"
  - role: consume
    topic: "notifications"
This restricts a pod to producing messages on the "orders" topic and consuming from the "notifications" topic. All other Kafka operations are denied.
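gRPC filtering follows the same pattern because gRPC rides on HTTP/2: each remote procedure call is a POST to a /Package.Service/Method path, so it is matched with an http rule. A sketch with a hypothetical service:

rules:
  http:
  - method: POST
    path: "/orders.OrderService/GetOrder"   # hypothetical gRPC service and method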
Implementing Cilium Security
Step 1: Deploy With Policy Enforcement
When installing Cilium, set the policy enforcement mode. In the default mode, policy is only enforced on pods selected by at least one policy; pods no policy selects accept all traffic. For security-sensitive environments, enable default-deny:
helm install cilium cilium/cilium \
--set policyEnforcementMode=always
With always mode, pods without any CiliumNetworkPolicy deny all traffic. This forces you to explicitly define allowed communication patterns.
Step 2: Start With Observability
Before writing policies, use Cilium Hubble to observe actual traffic patterns. Hubble provides a network flow log that shows which pods communicate with which other pods, on which ports and protocols.
Use this data to write policies that match your actual communication patterns. Policies written from documentation alone inevitably miss legitimate traffic flows.
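If Hubble is not yet enabled, the Helm chart can switch it on, and the hubble CLI then streams flows; the namespace below is a placeholder:

helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

# Watch live flows for one namespace while exercising the application
# (requires connectivity to Hubble Relay, e.g. via cilium hubble port-forward)
hubble observe --namespace production --follow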
Step 3: Implement Default Deny Per Namespace
Create a default-deny policy for each namespace that blocks all ingress and egress except DNS:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default-deny
  namespace: production
spec:
  endpointSelector: {}
  ingress:
  - {}
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name: kube-system
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
Step 4: Add Specific Allow Policies
With default deny in place, create specific policies for each allowed communication path. Use L7 rules where possible to restrict not just which pods communicate but how they communicate.
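Under a default-deny posture, each path usually needs both sides spelled out: the server's ingress allow (like the api-server policy earlier) and a matching egress allow on the client. A sketch of the client side, reusing the labels from that example:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-egress   # illustrative name
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  egress:
  - toEndpoints:
    - matchLabels:
        app: api-server
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP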
Step 5: Monitor and Iterate
Use Hubble to monitor for dropped traffic that indicates legitimate flows blocked by overly restrictive policies. Adjust policies based on real traffic patterns, not assumptions.
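Hubble can filter on verdicts directly, which makes blocked flows easy to surface; the namespace is again a placeholder:

hubble observe --verdict DROPPED --namespace production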
Common Pitfalls
Forgetting DNS egress: Default-deny policies that do not allow DNS resolution break everything. Always allow egress to kube-dns on port 53.
Overly broad selectors: A policy with an empty endpointSelector applies to all pods in the namespace. Be intentional about policy scope.
Ignoring cluster-wide policies: CiliumClusterwideNetworkPolicy resources apply across all namespaces. Use them for cluster-level security baselines, such as the DNS baseline sketched below.
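As an illustration, a cluster-wide DNS baseline might look like the following sketch. It assumes CoreDNS runs in kube-system with the standard k8s-app: kube-dns label and pairs naturally with policyEnforcementMode=always; in the default enforcement mode, be aware that a rule selecting every endpoint also switches them all into default-deny for egress.

apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: dns-baseline   # illustrative name
spec:
  endpointSelector: {}
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"   # DNS visibility here also feeds toFQDNs policies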
How Safeguard.sh Helps
Safeguard.sh provides the supply chain context that network policies alone cannot address. While Cilium controls which pods can communicate, Safeguard.sh ensures the software running in those pods is free of known vulnerabilities and supply chain compromises. Together, they provide defense in depth: Cilium restricts the blast radius of a compromise through network segmentation, while Safeguard.sh reduces the likelihood of a compromise by identifying vulnerable components before deployment.