Container Security

Kubernetes Supply Chain Security: Best Practices for 2022

Kubernetes does not run your code — it runs container images built from layers of dependencies you may not control. Securing the K8s supply chain requires thinking beyond pod security policies.

Shadab Khan
Threat Intelligence
5 min read

Kubernetes has become the de facto standard for container orchestration, running production workloads for organizations of every size. But Kubernetes itself is not the security challenge — it is what runs inside Kubernetes that keeps security teams up at night. Every pod in your cluster is executing a container image, and that image is the product of a supply chain that extends from base OS layers through language runtimes, application frameworks, and your own code.

Securing a Kubernetes deployment means securing that entire chain.

The Kubernetes Supply Chain Attack Surface

A typical container image running in Kubernetes has multiple layers of supply chain risk:

Base image layer: Most Dockerfiles start with FROM ubuntu:22.04 or FROM node:18-alpine. That base image contains dozens to hundreds of OS-level packages, each with its own vulnerability history.

Language runtime layer: The Node.js, Python, Go, or Java runtime brings its own dependencies and potential vulnerabilities.

Application dependency layer: Your package.json, requirements.txt, or go.mod pulls in dozens to thousands of third-party packages.

Application code layer: Your own code, which may contain vulnerabilities or misconfigurations.

Build infrastructure layer: The CI/CD pipeline that builds the image has access to secrets, registries, and production infrastructure.

An attacker compromising any of these layers can inject malicious code that runs with whatever privileges the Kubernetes pod has been granted.

Best Practice 1: Image Signing and Verification

Sign every container image you build and verify signatures before admission to the cluster.

Cosign (part of the Sigstore project) has emerged as the standard tool for container image signing:

# Sign an image after building
cosign sign --key cosign.key myregistry/myapp:v1.2.3

# Verify before deployment
cosign verify --key cosign.pub myregistry/myapp:v1.2.3

Pair this with a Kubernetes admission controller that rejects unsigned images:

# Kyverno policy to require signed images
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signature
spec:
  validationFailureAction: enforce
  rules:
    - name: verify-signature
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        - imageReferences:
            - "myregistry/*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----

Best Practice 2: Use Minimal Base Images

Every package in your base image is a potential vulnerability. Minimize the attack surface by using distroless or scratch-based images:

# Instead of this (~80 MB, around a hundred OS packages)
FROM ubuntu:22.04

# Use this (a few MB, no shell or package manager)
FROM gcr.io/distroless/static-debian11

# Or for Go applications
FROM scratch
COPY --from=builder /app /app
ENTRYPOINT ["/app"]

Google's distroless images contain only the runtime dependencies your application needs — no shells, no package managers, no unnecessary utilities that could be exploited.
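A minimal multi-stage build for a Go service shows how this works in practice: the full toolchain lives only in the build stage, and the final image carries nothing but the binary. (The module layout and binary path here are illustrative.)

```dockerfile
# Stage 1: build with the full Go toolchain
FROM golang:1.19 AS builder
WORKDIR /src
COPY . .
# Static binary so it runs on distroless/static, which has no libc
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Stage 2: ship only the binary on a minimal runtime image
FROM gcr.io/distroless/static-debian11
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

The attack surface of the final image is essentially just your binary and the CA certificates distroless provides.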

Best Practice 3: Scan Images in the Pipeline and at Runtime

Image scanning should happen at two points:

At build time: Scan images as part of your CI/CD pipeline and fail builds that contain critical vulnerabilities.

# GitHub Actions example with Trivy
# (in keeping with Best Practice 4, pin the action itself
# to a release tag or commit SHA rather than @master)
- name: Scan image
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myregistry/myapp:${{ github.sha }}
    exit-code: '1'
    severity: 'CRITICAL,HIGH'

At admission time: Use an admission controller to scan images before they are scheduled in the cluster. This catches vulnerabilities discovered after the image was built.

Continuously: Rescan running images on a schedule. New CVEs are published daily, and an image that was clean yesterday might be vulnerable today.
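One way to run those continuous rescans is a CronJob inside the cluster itself. A sketch using Trivy's container image — the schedule and the hard-coded image reference are assumptions; in practice you would enumerate the images actually deployed:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-image-rescan
spec:
  schedule: "0 3 * * *"   # every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trivy
              image: aquasec/trivy:latest
              args:
                - image
                - --severity
                - CRITICAL,HIGH
                - --exit-code
                - "1"               # job fails if new CVEs are found
                - myregistry/myapp:v1.2.3
```

A failed job then becomes the alert that an image which was clean yesterday is vulnerable today.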

Best Practice 4: Pin Image Digests, Not Tags

Tags are mutable. An image tagged v1.2.3 today could point to a completely different image tomorrow if the registry is compromised or the tag is moved.

# Bad: mutable tag
image: myregistry/myapp:v1.2.3

# Good: immutable digest
image: myregistry/myapp@sha256:abc123def456...

Digest pinning guarantees that you are running the exact image you tested and approved, regardless of tag manipulation.
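The digest itself is nothing magic — it is the sha256 hash of the image's manifest bytes, which is why it cannot be silently repointed the way a tag can. A small sketch mimicking the computation locally with a stand-in manifest file (the real digest is resolved against the registry, e.g. with crane from the go-containerregistry project):

```shell
# A digest is the sha256 of the image's manifest bytes.
# Stand-in manifest file for illustration:
printf '%s' '{"schemaVersion":2}' > manifest.json
DIGEST="sha256:$(sha256sum manifest.json | cut -d' ' -f1)"
echo "image: myregistry/myapp@${DIGEST}"

# In practice, resolve the real digest once at release time:
#   crane digest myregistry/myapp:v1.2.3
```

Resolving the digest once at release time and templating it into your manifests keeps deployments reproducible.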

Best Practice 5: Implement RBAC for Image Registries

Not everyone in your organization should be able to push images to your production registry. Implement strict RBAC:

  • Developers can push to development/staging registries
  • CI/CD pipelines can push to production registries after security gates pass
  • Production clusters can only pull from production registries
  • No human accounts have push access to production registries
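The pull-side restriction in the list above can be enforced by the cluster itself. A sketch of a Kyverno policy that rejects any pod whose image does not come from the expected registry (the registry prefix is an assumption):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: enforce
  rules:
    - name: allow-production-registry-only
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Images must be pulled from the production registry."
        pattern:
          spec:
            containers:
              - image: "myregistry/*"
```

Together with registry-side push controls, this closes the loop: only vetted pipelines can publish, and only vetted registries can serve.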

Best Practice 6: Generate SBOMs for Every Image

Attach an SBOM to every container image as an attestation. This gives you instant visibility into what is inside each image running in your cluster.

# Generate and attach SBOM using Syft and Cosign
syft myregistry/myapp:v1.2.3 -o cyclonedx-json > sbom.json
cosign attest --predicate sbom.json --type cyclonedx myregistry/myapp:v1.2.3

When a new CVE drops, you can query your SBOMs to instantly identify which images (and therefore which pods) are affected.
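In the CycloneDX JSON format, that query can be as simple as filtering the components array with jq. A self-contained illustration — the component names and versions here are made up for the example:

```shell
# Minimal stand-in CycloneDX SBOM for illustration
cat > sbom.json <<'EOF'
{"components":[{"name":"openssl","version":"3.0.7"},{"name":"zlib","version":"1.2.13"}]}
EOF

# Which SBOMs contain the newly vulnerable package?
jq -r '.components[] | select(.name == "openssl") | "\(.name)@\(.version)"' sbom.json
```

Tools like Grype can also take an SBOM file as a scan target directly (grype sbom:./sbom.json), which turns the same stored attestations into a full vulnerability report without re-pulling images.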

Best Practice 7: Network Policies

Limit what your pods can communicate with. A compromised container that cannot reach the internet cannot exfiltrate data or download additional payloads.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - port: 5432
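One caveat worth knowing: once a pod is selected by an egress policy, everything not explicitly allowed is denied — including DNS lookups. Most egress policies need an additional rule permitting DNS; the k8s-app: kube-dns label below is the common default, but it varies by distribution:

```yaml
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
```

Without it, the pod above could reach the database by IP but would fail to resolve its service name.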

Best Practice 8: Runtime Security Monitoring

Static scanning catches known vulnerabilities. Runtime monitoring catches exploitation attempts and zero-day attacks. Tools like Falco can detect anomalous behavior inside containers:

  • Unexpected process execution (a shell spawning in a distroless container)
  • Unusual network connections (connecting to a cryptocurrency mining pool)
  • Filesystem modifications (writing to normally read-only paths)
  • Privilege escalation attempts
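As a sketch, the first behavior in the list can be expressed as a Falco rule using Falco's built-in spawned_process and container macros (a real deployment would scope this to images known to be distroless):

```yaml
- rule: Shell Spawned in Container
  desc: A shell was started inside a container (unexpected in distroless images)
  condition: >
    spawned_process and container
    and proc.name in (sh, bash, ash, dash)
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
```

In a distroless image there is no legitimate reason for a shell to exist, let alone run, so this rule produces very few false positives there.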

How Safeguard.sh Helps

Safeguard.sh integrates directly with your Kubernetes clusters and container registries to provide continuous supply chain monitoring. Our platform generates and maintains SBOMs for every container image in your cluster, scans for vulnerabilities at build time and runtime, and enforces admission policies that block unsigned or non-compliant images. When the next critical CVE is announced, Safeguard.sh tells you exactly which pods across all your clusters are running affected images.
