Kubernetes networking is flat by default. Every pod can reach every other pod across every namespace. There is no firewall, no segmentation, no access control. A compromised pod in the frontend namespace can directly connect to your database pod in backend, your monitoring tools in observability, and every other service in the cluster.
Network policies are the fix. But implementing them correctly requires understanding both what they can do and where they fall short.
## How Network Policies Work
A NetworkPolicy is a Kubernetes resource that selects pods using labels and defines ingress and egress rules:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-server-policy
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tier: frontend
      podSelector:
        matchLabels:
          app: web
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
  - to: # Allow DNS
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
```
This policy says: the api-server pod can receive traffic only from the web pod in the frontend namespace on port 8080, and can send traffic only to the postgres pod on port 5432 and DNS on port 53.
Everything else is denied.
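To see exactly what a policy selects and permits once it is applied, kubectl can describe it (assuming the manifest above is saved as api-server-policy.yaml):

```bash
# Apply the policy and review the rules the API server recorded
kubectl apply -f api-server-policy.yaml
kubectl describe networkpolicy api-server-policy -n backend

# List all policies currently in effect in the namespace
kubectl get networkpolicy -n backend
```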
## The Default Deny Foundation
Network policies can only allow traffic; there is no deny rule in the standard API. A pod that is not selected by any policy accepts everything. This is the critical mistake most teams make: they write specific allow policies without first denying all other traffic.
Start every namespace with default deny:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: backend
spec:
  podSelector: {} # Matches all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
```
This blocks all traffic to and from every pod in the namespace. Then selectively open what is needed.
## DNS Must Be Explicitly Allowed
The most common mistake after enabling default deny: forgetting DNS. Without DNS, service discovery breaks, and everything looks like a network outage:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports: # no 'to' selector: DNS is allowed to any destination on port 53
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```
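A quick way to confirm the DNS rule works: run a throwaway pod in the namespace and resolve a service name (the service below is just the example from earlier):

```bash
# Resolution should succeed even though all other egress is still denied
kubectl run dns-test --rm -it --image=busybox --restart=Never -n backend -- \
  nslookup api-server.backend.svc.cluster.local
```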
## CNI Plugin Requirements
Network policies are API objects. The CNI plugin enforces them. Not all CNI plugins support network policies:
| CNI Plugin | NetworkPolicy Support | Egress | FQDN Rules | Encryption |
|---|---|---|---|---|
| Calico | Full | Yes | Yes | WireGuard |
| Cilium | Full | Yes | Yes (L7) | WireGuard/IPsec |
| Weave Net | Partial | Yes | No | IPsec |
| Flannel | None | No | No | No |
| AWS VPC CNI | Via Calico addon | Yes | No | No |
If you are running Flannel, your NetworkPolicy resources exist in the API but are not enforced. Pods can communicate freely regardless of what policies you define. This is a silent failure that catches teams during security audits.
## Verifying Enforcement
After applying policies, verify they actually work:
```bash
# Create a throwaway test pod with an interactive shell
kubectl run test-client --rm -it --image=busybox --restart=Never -- sh

# From inside the pod, try to reach the api-server service (should be blocked)
wget -qO- -T 3 http://api-server.backend:8080

# If this succeeds, your CNI is not enforcing policies
```
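Test the allowed path too, so you know the policy permits legitimate traffic rather than simply blocking everything. A sketch, assuming the frontend namespace and the app=web label from the earlier policy:

```bash
# The same request from a pod that matches the allowed selectors should succeed
kubectl run allowed-client --rm -it -n frontend --labels="app=web" \
  --image=busybox --restart=Never -- \
  wget -qO- -T 3 http://api-server.backend:8080
```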
## Building Microsegmentation
Microsegmentation means every workload communicates only with the specific services it needs. In Kubernetes, this maps to per-service network policies.
### Step 1: Map Communication Patterns
Before writing policies, understand your traffic flow:
```bash
# With Calico, enable flow logs
kubectl patch felixconfiguration default \
  --type merge \
  -p '{"spec":{"flowLogsEnableNetworkSets":true}}'

# With Cilium, use Hubble to observe live flows
hubble observe --namespace backend
```
Build a service dependency graph: which pods talk to which pods, on which ports.
### Step 2: Write Allow Policies per Service
For each service, create a policy that permits only its known dependencies:
```yaml
# Frontend web pods can only reach the API server (plus DNS)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-egress
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          tier: backend
      podSelector:
        matchLabels:
          app: api-server
    ports:
    - protocol: TCP
      port: 8080
  - ports: # DNS: no 'to' selector, so port 53 is reachable anywhere
    - protocol: UDP
      port: 53
```
### Step 3: Namespace Isolation
Label namespaces with purpose-driven labels and use them in policies:
```bash
kubectl label ns frontend tier=frontend
kubectl label ns backend tier=backend
kubectl label ns database tier=data
```
Then reference these labels in cross-namespace policies. This is cleaner than hardcoding namespace names and survives namespace renames.
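For example, an ingress rule can admit traffic from any namespace carrying tier=frontend, whatever that namespace happens to be called (a fragment reusing the labels above):

```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        tier: frontend # matches the label applied above, not the namespace name
```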
## Advanced Patterns
### CIDR-Based Rules for External Services
When pods need to reach external APIs, allow by CIDR block:
```yaml
egress:
- to:
  - ipBlock:
      cidr: 203.0.113.0/24 # External API endpoint range
  ports:
  - protocol: TCP
    port: 443
```
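ipBlock also takes an except list, useful when part of an allowed range should stay unreachable (the carved-out subrange below is only illustrative):

```yaml
egress:
- to:
  - ipBlock:
      cidr: 203.0.113.0/24
      except:
      - 203.0.113.128/25 # keep this part of the range blocked
  ports:
  - protocol: TCP
    port: 443
```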
### Calico Global Network Policies
Standard Kubernetes NetworkPolicy is namespace-scoped. Calico extends this with cluster-wide policies:
```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-external-egress
spec:
  order: 100
  selector: all()
  types:
  - Egress
  egress:
  - action: Allow
    destination:
      nets:
      - 10.0.0.0/8     # Cluster network
      - 172.16.0.0/12  # Pod network
  - action: Deny
```
This blocks all egress to the internet by default, cluster-wide. Individual namespace policies can then selectively allow specific external endpoints.
### Cilium L7 Policies
Cilium can enforce policies at the application layer, not just L3/L4:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-policy
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: web
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/api/v1/.*"
        - method: POST
          path: "/api/v1/orders"
```
This allows GET requests to any /api/v1/ path and POST only to /api/v1/orders. Everything else is blocked at the HTTP level. This is real application-aware microsegmentation.
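The FQDN column in the CNI table maps to the same resource. A minimal sketch of an FQDN egress rule, assuming api.example.com stands in for a real external dependency; the DNS rule is needed so Cilium can watch lookups and learn which IPs the name resolves to:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-api
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  egress:
  # Allow DNS to kube-dns with visibility so FQDN rules can be resolved
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  # Allow HTTPS only to the named external host
  - toFQDNs:
    - matchName: "api.example.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
```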
## Testing Network Policies at Scale
Manual testing does not scale. Use tools designed for network policy validation:
```bash
# networkpolicy.io - browser-based editor that visualizes policies as a graph

# kubectl-np-viewer (krew plugin) - shows the effective rules applied to each pod
kubectl np-viewer -n backend

# netpol-analyzer (np-guard project) - statically analyzes manifests and reports
# which pod-to-pod connections the policies allow, useful for spotting gaps
```
### Chaos Testing
Inject network failures to verify policies hold:
```yaml
# With Chaos Mesh, test that unauthorized traffic from frontend to database stays blocked
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-isolation
spec:
  action: partition
  mode: all
  selector:
    namespaces:
    - frontend
  direction: to
  target:
    mode: all
    selector:
      namespaces:
      - database
```
## Common Pitfalls
1. Pods in the same namespace are not automatically allowed. Default deny blocks intra-namespace traffic too.
2. Policy order does not matter. Unlike traditional firewalls, network policies are additive. There is no "first match wins." If any policy allows traffic, it is allowed.
3. Ingress policies can break health checks. Kubelet liveness and readiness probes come from the node, not from a pod, so strict ingress rules can block them unless your CNI implicitly allows node-local traffic. If it does not, allow ingress from the node CIDR (see the sketch after this list) or use Calico's HostEndpoint rules.
4. Jobs and CronJobs need policies too. They create pods, and those pods need network access. Create policies that match job pod labels.
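For the health-check pitfall above, one approach under default deny is an ingress rule scoped to the node subnet. A sketch, assuming nodes sit in 10.0.0.0/16; substitute your actual node CIDR:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kubelet-probes
  namespace: backend
spec:
  podSelector: {} # every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16 # node subnet (assumption - replace with your node CIDR)
```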
## How Safeguard.sh Helps
Safeguard.sh maps your Kubernetes cluster communication patterns and identifies namespaces and workloads running without network policies. The platform detects overly permissive configurations, flags pods that can reach sensitive services they should not access, and generates recommended policies based on observed traffic patterns. For teams implementing microsegmentation incrementally, Safeguard.sh provides a compliance dashboard showing policy coverage percentage across namespaces and tracks improvement over time.