Supply chain attacks do not end when a compromised dependency runs inside a pod. That is the point where the actual attack starts. Everything after that — what the attacker can read, what they can exfiltrate, what they can pivot to — is determined by what the compromised workload is authorized to do. In a Kubernetes cluster, that is controlled by RBAC, service accounts, and the API server's authorization model.
Most organizations have invested in build-time supply chain controls. Fewer have invested equally in blast-radius controls. The result is that a successful supply chain attack against a moderately-privileged workload turns into a cluster-wide incident, when it should have been contained to a single pod.
What does the blast radius of a compromised pod actually look like?
It depends almost entirely on the ServiceAccount token mounted in the pod and what that service account is authorized to do via the API server, plus any secrets mounted or otherwise accessible, plus anything reachable on the pod's network. RBAC is the dominant factor.
A concrete baseline: a default pod with the default ServiceAccount in its namespace, no extra token automounting, no mounted secrets, and a NetworkPolicy denying egress except to its required services. If this pod gets compromised, the attacker has code execution inside the container. They cannot meaningfully call the Kubernetes API because the default service account typically has no RBAC grants in modern clusters. They cannot read other secrets because none are mounted. They cannot pivot to other services because NetworkPolicy blocks egress. The blast radius is the pod itself and whatever data the pod legitimately processes.
Now the same pod with a few common mistakes: a ServiceAccount with get secrets permission cluster-wide (a pattern I have seen repeatedly in older Helm charts), the default NetworkPolicy (allow all), and a kubeconfig mounted as a secret for "cluster management." Now the attacker has every secret in every namespace, full network reach to every pod in the cluster, and a kubeconfig that probably grants broad API access. The same compromise. Different radius by three orders of magnitude.
The first pod is defensible. The second pod is an incident waiting for a trigger. RBAC is what determines which one you have.
Which RBAC misconfigurations actually enable supply chain attacks?
Cluster-wide get secrets is the one I see most often. Cluster-wide list pods, in the wrong combination, enables attacker reconnaissance. And create pods or create deployments at the cluster level lets an attacker hoist their compromise into a privileged workload of their choosing.
get secrets cluster-wide is the most immediately exploitable. If the compromised pod can read every secret in every namespace, the attacker leaves with your database credentials, your cloud provider credentials, your API keys for external services, and whatever else your cluster uses secrets for. Many Helm charts in the wild request this permission by default, or request a narrower permission that escalates to it via patch or escalate verbs.
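For concreteness, this is the shape of that grant as it shows up in charts. A sketch with invented names; the problem is in the rules block, not the metadata:

```yaml
# Hypothetical chart excerpt: the cluster-wide secret-read pattern described above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: addon-manager            # illustrative name
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]       # readable from every namespace once bound cluster-wide
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: addon-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: addon-manager
subjects:
  - kind: ServiceAccount
    name: addon-manager          # the workload's ServiceAccount
    namespace: addons            # illustrative namespace
```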
list pods cluster-wide seems innocuous, and on its own it mostly is. Combined with get secrets in the pods' namespaces, it becomes a reconnaissance tool: the attacker enumerates what is running, then targets secrets in namespaces of interest. Combined with create pods, it lets the attacker spin up their own workloads in whichever namespace has the privileges they want to abuse.
create pods is the real multiplier. A compromised pod that can create new pods in a privileged namespace can schedule a pod that runs as root with hostPath mounts, gaining host filesystem access and effectively escaping the container boundary onto the node. Nearly every "manage" role I see in the wild grants this accidentally.
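To make the multiplier concrete, the pod an attacker with create pods would schedule looks roughly like this. A sketch for illustration only, with invented names:

```yaml
# Roughly the pod an attacker schedules once they hold "create pods" in a
# privileged namespace: root, privileged, with the node's filesystem mounted.
apiVersion: v1
kind: Pod
metadata:
  name: debug-helper             # deliberately innocuous-looking name
  namespace: kube-system         # any namespace the stolen credentials can create in
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true         # full capabilities and device access on the node
      volumeMounts:
        - name: host-root
          mountPath: /host       # the node's entire filesystem, writable
  volumes:
    - name: host-root
      hostPath:
        path: /
```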
How do you design ServiceAccount scope correctly?
One ServiceAccount per workload, with only the permissions that workload demonstrably needs, scoped to the specific resources and namespaces it touches. No cluster-wide grants unless the workload is itself a controller that legitimately watches cluster resources.
The design I deploy: every workload's Helm chart or manifest declares its own ServiceAccount in the workload's namespace. The Role or ClusterRole granted to that ServiceAccount is in the same chart and is reviewed as part of every release. If the role adds a new verb or a new resource, that is a change that a security reviewer sees. If the role stays the same release after release, no review is needed.
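A minimal sketch of that shape, with placeholder names. The three objects live in the same chart as the Deployment, so any change to the rules block shows up in the release diff:

```yaml
# One ServiceAccount per workload, bound to a namespace-scoped Role that names
# only what the workload demonstrably touches. All names are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-api
  namespace: production-api
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payment-api
  namespace: production-api
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["payment-api-config"]   # one named resource, not all configmaps
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payment-api
  namespace: production-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: payment-api
subjects:
  - kind: ServiceAccount
    name: payment-api
    namespace: production-api
```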
The common failure mode is workloads that share a ServiceAccount because it was easier at the time. A platform team's "shared-operators" ServiceAccount accumulates permissions over years until it can do anything anywhere. Then something mounts that ServiceAccount, accidentally or through configuration drift, and every compromise of that workload becomes a cluster-wide compromise.
The second common failure mode is ClusterRole grants to namespace-scoped workloads. A workload that runs in production-api namespace and only reads Services in its own namespace should have a Role, not a ClusterRole. ClusterRoles are almost always over-broad, because they grant the verb across every namespace even if the workload only uses it in one. Use Role and RoleBinding whenever the scope is a single namespace.
What about token projection and the deprecated service account tokens?
Use bound service account tokens (projected volumes) with a short expiration, and disable token automounting on any pod that does not actually need to call the Kubernetes API. The old-style long-lived service account tokens stored as Secrets are a liability.
Kubernetes 1.24 and later default to bound tokens with automatic expiration and no longer auto-create long-lived Secret-based tokens. A pod's token is valid for about an hour and is rotated automatically by the kubelet. An attacker who exfiltrates a bound token has a limited window in which to use it. The old tokens stored as Secrets were valid forever and could be stolen from the secret store or from a backup and replayed indefinitely.
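For workloads where the default lifetime is still too generous, you can request an explicitly short-lived token through a projected volume. A sketch; the audience, paths, and image are illustrative:

```yaml
# Requesting a short-lived bound token via a projected volume. The kubelet
# rotates the token before it expires; 600 seconds is the minimum accepted.
apiVersion: v1
kind: Pod
metadata:
  name: api-caller
spec:
  serviceAccountName: payment-api
  automountServiceAccountToken: false      # skip the default mount; use only this one
  containers:
    - name: app
      image: registry.example.com/api-caller:1.0   # placeholder image
      volumeMounts:
        - name: bound-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: bound-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 600
              audience: https://kubernetes.default.svc   # token is useless elsewhere
```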
More aggressive: on pods that genuinely do not need to call the Kubernetes API, set automountServiceAccountToken: false at the pod or ServiceAccount level. This stops the token from being mounted in the first place. An attacker who compromises such a pod has no token to exfiltrate. A surprising number of workloads do not need to call the Kubernetes API and have the token mounted by default anyway.
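The switch exists at both levels; the ServiceAccount-level setting covers every pod that uses the account, and a pod spec can override it in either direction. A sketch with placeholder names:

```yaml
# Disabling automount at the ServiceAccount level: no pod using this account
# gets a token unless its own spec explicitly opts back in.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-site
  namespace: web
automountServiceAccountToken: false
---
# The same switch at the pod level; the pod-level setting takes precedence.
apiVersion: v1
kind: Pod
metadata:
  name: static-site
  namespace: web
spec:
  serviceAccountName: static-site
  automountServiceAccountToken: false
  containers:
    - name: nginx
      image: nginx:1.27        # serves files; never calls the Kubernetes API
```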
The audit query worth running: list every pod in your cluster, group by whether its ServiceAccount has any RBAC grants, and look for pods that mount a token but whose ServiceAccount has no bindings. Those are pods mounting credentials for no reason; fixing them is a free defensive win.
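A sketch of that audit with kubectl and jq. Two simplifications to be aware of: it checks only pod-level automountServiceAccountToken, and it counts only direct RoleBinding and ClusterRoleBinding subjects as grants:

```bash
# ServiceAccounts that appear as a subject in any binding, as "namespace/name".
bound_sas=$(kubectl get rolebindings,clusterrolebindings -A -o json \
  | jq -r '.items[] | .subjects[]?
           | select(.kind == "ServiceAccount")
           | "\(.namespace)/\(.name)"' | sort -u)

# Pods that still mount a token, paired with their ServiceAccount.
kubectl get pods -A -o json \
  | jq -r '.items[]
           | select(.spec.automountServiceAccountToken != false)
           | "\(.metadata.namespace)/\(.spec.serviceAccountName // "default") \(.metadata.name)"' \
  | while read -r sa pod; do
      # Flag the pods whose ServiceAccount has no bindings at all.
      printf '%s\n' "$bound_sas" | grep -qx "$sa" \
        || echo "token mounted, no RBAC grants: pod=$pod sa=$sa"
    done
```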
How do NetworkPolicies change the supply chain blast radius?
They turn a cluster-wide network from "default reachable" to "default denied," which massively reduces what a compromised pod can pivot to. This is as important as RBAC for blast radius, and most clusters do not deploy it well.
Kubernetes default behavior is that any pod can reach any other pod. A compromised pod in the frontend namespace can reach the database pod in the data namespace over the pod network. In a cluster without NetworkPolicy, the attacker does not need to escalate RBAC — they just connect directly to the database and attack it with whatever credentials they can find.
A baseline NetworkPolicy posture: default-deny ingress and egress in every namespace, with explicit allow rules for the services that actually need to communicate. A CNI like Cilium, or Calico in its eBPF mode, makes this manageable at scale; without an eBPF-based datapath, the rule churn can overwhelm the kernel netfilter implementation on busy nodes.
The first deployment of a default-deny policy to a production cluster is painful. You will find that several services talk to each other in ways you did not know about, including DNS to the cluster DNS service (which everyone forgets to allow explicitly). Budget a week of rollout per environment.
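A baseline pair of policies for one namespace, matching that posture. The namespace is a placeholder, and the kube-dns labels shown are the common defaults and may differ in your cluster:

```yaml
# Default-deny for everything in the namespace, in both directions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production-api      # placeholder namespace
spec:
  podSelector: {}                # selects every pod in the namespace
  policyTypes: ["Ingress", "Egress"]
---
# The DNS egress allowance that every rollout forgets at least once.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production-api
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```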
How Safeguard.sh Helps
Safeguard.sh ties RBAC and NetworkPolicy posture to reachability analysis, so you can see which compromised-dependency scenarios actually have blast radius in your specific cluster configuration; that correlation drives the 60 to 80 percent reduction in noise that makes the output actionable. Griffin AI generates narrowed ServiceAccount roles and NetworkPolicies from your workload's observed behavior, using SBOMs with 100-level dependency depth to identify the actual API and network calls made by third-party libraries. TPRM scores third-party images by the RBAC privileges they request, highlighting suppliers that ask for more than they need. Container self-healing can then automatically restrict the RBAC of pods whose images the platform has flagged, reducing blast radius in response to emerging threats without a human in the critical path.