Helm charts are the lingua franca of Kubernetes deployment. Every meaningful workload, vendor product, and platform component arrives as a chart. The chart's templates, values, and dependencies determine what gets installed, with what configuration, and with what privilege. A compromised chart, or even a misunderstood one, can create the same blast radius as a compromised image, sometimes larger.
Despite that, charts are the least scrutinised supply chain artifact in most organisations. They are pulled from public repositories, rendered locally with default values, and applied without a meaningful review of what the rendered manifests will do at admission time. This blueprint covers the controls that bring charts into the supply chain story.
Chart Repositories And Trust Roots
The first decision is which chart repositories to trust. The default Helm posture is to trust whichever repository the user adds with helm repo add. That posture is fine for experimentation but unsuitable for production.
The control we apply is a curated repository list. Production clusters install only from internal repositories that mirror an approved upstream set. The mirror sync runs on a schedule, pulls chart packages, verifies their signatures where available, scans their templates and default values, and republishes them with internal signing.
The mirror's repository list is a versioned configuration. Adding a new upstream repository is a reviewed change, not a self-service action. Removing one is also reviewed, because charts that engineers depend on can disappear without notice if the removal is not sequenced.
For charts that are not available from a repository at all, such as charts maintained internally, the source is the relevant Git repository, with the chart artifact built and signed in the same CI flow as application artifacts.
Chart Signing
Helm charts can be signed using the provenance file format that ships with the toolchain. Signing produces a .prov file alongside the .tgz chart artifact, with a PGP signature over the chart's hash and metadata. The toolchain can verify the signature at install time when invoked with the appropriate flags.
The provenance signing flow has aged. Newer flows use Sigstore for keyless signing of OCI-distributed charts, with the chart pushed as an OCI artifact and the signature attached as a referrer. The Sigstore flow integrates with the rest of the OCI supply chain story, and we standardise on it for new internal charts.
Verification has to be enforced, not optional. The default helm install does not verify signatures unless told to. Internal tooling wraps helm install with verification flags set to required, and CI policy refuses to invoke helm install without the wrapper.
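Both flows use standard Helm and cosign invocations. A sketch, with the key name, chart name, and registry host all hypothetical:

```shell
# Classic provenance flow: sign at package time, verify before and at install.
helm package mychart/ --sign --key 'platform-team' --keyring ~/.gnupg/secring.gpg
helm verify mychart-1.2.3.tgz
helm install myrelease mychart-1.2.3.tgz --verify

# OCI + Sigstore flow: push the chart as an OCI artifact, sign it keylessly.
helm push mychart-1.2.3.tgz oci://registry.internal/charts
cosign sign registry.internal/charts/mychart:1.2.3
cosign verify registry.internal/charts/mychart:1.2.3 \
  --certificate-identity-regexp '.*@example.com' \
  --certificate-oidc-issuer https://accounts.google.com
```

The identity and issuer constraints on cosign verify are where the trust policy lives in the keyless flow; a wrapper that pins them is doing the same job as a keyring in the PGP flow.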
Render-Time Policy
A chart is just a template. The actual security posture depends on the rendered manifests. The render is a function of the chart, the values, and any dependent charts. Changing any of those changes the rendered output.
The control we apply is a render-time policy gate. Before any chart is installed in production, it is rendered with the actual values that will be used, and the rendered manifests are evaluated against the cluster's admission policy offline. The gate runs as a CI step or as a step in the GitOps reconciliation flow.
The gate catches charts that would have been blocked at admission time, before they reach the cluster. This is faster feedback than waiting for the live admission webhook to reject the install, and it surfaces problems in the chart or values that the user can fix before the deployment.
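The evaluation step can be sketched as a function over the rendered manifests. The rules below are illustrative stand-ins for a real admission policy, and the manifests are modelled as plain dicts rather than parsed YAML:

```python
# Offline gate sketch: evaluate rendered manifests against a few admission
# rules before anything reaches the cluster. Rules here are illustrative.
def violations(manifest):
    found = []
    spec = manifest.get("spec", {})
    # Workload kinds carry a pod template; bare pods carry the spec directly.
    pod = spec.get("template", {}).get("spec", spec)
    if pod.get("hostNetwork"):
        found.append("hostNetwork enabled")
    for c in pod.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            found.append(f"container {c['name']} runs privileged")
        if sc.get("runAsNonRoot") is not True:
            found.append(f"container {c['name']} may run as root")
    return found

rendered = {
    "kind": "Deployment",
    "metadata": {"name": "agent"},
    "spec": {"template": {"spec": {
        "hostNetwork": True,
        "containers": [{"name": "agent",
                        "securityContext": {"privileged": True}}],
    }}},
}
problems = violations(rendered)  # three findings for this manifest
```

A gate like this fails the CI step when the list is non-empty, which is the fast-feedback property the live webhook cannot give.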
The gate also catches drift between charts. A chart upgrade that introduces new resources, new RBAC, or new webhook configurations is flagged for review. The diff between the rendered output of the previous version and the new version is a more useful artifact than the chart's changelog, because it shows what will actually change in the cluster.
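The upgrade diff reduces to set operations over resource identities. A minimal sketch, with the sensitive-kind list an illustrative choice rather than a fixed policy:

```python
# Render-diff sketch: compare two rendered manifest sets and flag what
# needs review. Manifests are plain dicts; a real pipeline would parse
# the output of two `helm template` runs.
SENSITIVE_KINDS = {"Role", "RoleBinding", "ClusterRole", "ClusterRoleBinding",
                   "MutatingWebhookConfiguration",
                   "ValidatingWebhookConfiguration"}

def resource_key(manifest):
    """Identity of a rendered resource: kind, namespace, name."""
    meta = manifest.get("metadata", {})
    return (manifest["kind"], meta.get("namespace", ""), meta.get("name", ""))

def diff_renders(previous, current):
    prev_keys = {resource_key(m) for m in previous}
    curr_keys = {resource_key(m) for m in current}
    added = curr_keys - prev_keys
    removed = prev_keys - curr_keys
    # New RBAC or webhook resources are escalations, not routine changes.
    needs_review = {k for k in added if k[0] in SENSITIVE_KINDS}
    return {"added": added, "removed": removed, "needs_review": needs_review}

old = [{"kind": "Deployment", "metadata": {"name": "web", "namespace": "prod"}}]
new = old + [{"kind": "ClusterRoleBinding", "metadata": {"name": "web-admin"}}]
report = diff_renders(old, new)
```

Here the chart upgrade adds a ClusterRoleBinding, so the diff lands in needs_review even though nothing was removed.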
Value Validation
Values files are where charts get parameterised, and where the most consequential security choices get made. The same chart can produce a manifest set that runs as a privileged daemon set with host-network access, or one that runs as a non-root deployment with a tight network policy. The difference is in the values.
The control we apply is value schema validation. Charts in the internal catalogue ship with a values schema that constrains what values are accepted. Values that exceed the policy, such as enabling privileged mode or expanding RBAC, are rejected by the schema before the chart is rendered.
The schema is an additional artifact alongside the chart. For internal charts, the schema is authored by the platform team. For mirrored upstream charts, we generate a baseline schema during mirror sync and tighten it through review.
The schema becomes a contract between the chart author and the chart consumer. A consumer who needs to set a value the schema does not allow has to either justify the exception or change the chart, both of which are healthier conversations than silently overriding the value at install time.
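Helm's native mechanism for this is a values.schema.json file in the chart, validated against the supplied values on install, upgrade, lint, and template. A hedged fragment of what a tightened schema might pin down; the property names mirror common chart conventions but are illustrative:

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "hostNetwork": { "const": false },
    "securityContext": {
      "type": "object",
      "properties": {
        "privileged": { "const": false },
        "runAsNonRoot": { "const": true }
      }
    }
  }
}
```

A values file that sets privileged: true fails validation before rendering, which is exactly the conversation-forcing behaviour the contract is meant to produce.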
Dependent Charts
Helm charts can depend on other charts, with dependencies resolved at chart packaging time and the dependency charts vendored into the parent's tarball. This is convenient, but it obscures the supply chain.
The control we apply is dependency expansion at mirror sync. When a chart is pulled into the mirror, its dependencies are recorded in a dependency graph that is queryable per chart. The mirror does not flatten dependencies into a single artifact; it preserves the layered structure so that vulnerabilities or policy violations in a dependency are attributed to that dependency rather than to the parent chart.
Dependency updates also have to be tracked. A parent chart that has not been updated is not necessarily safe; its dependencies may have moved on, and the pinned versions may have known issues. The dashboard tracks the freshness of every chart in the dependency graph, not just the top-level one.
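The freshness check is a walk over the dependency graph. A sketch with hypothetical chart names, dates, and a 90-day staleness threshold:

```python
from datetime import date, timedelta

# Illustrative dependency graph and last-release dates, as recorded
# at mirror sync. All names and dates are hypothetical.
DEPENDENCIES = {
    "app-platform": ["postgresql", "redis"],
    "postgresql": [],
    "redis": [],
}
LAST_RELEASE = {
    "app-platform": date(2024, 5, 1),
    "postgresql": date(2023, 9, 1),   # stale dependency
    "redis": date(2024, 4, 15),
}

def stale_charts(root, today, max_age=timedelta(days=90)):
    """Walk the graph under root and return every chart past max_age."""
    stale, stack, seen = set(), [root], set()
    while stack:
        chart = stack.pop()
        if chart in seen:
            continue
        seen.add(chart)
        if today - LAST_RELEASE[chart] > max_age:
            stale.add(chart)
        stack.extend(DEPENDENCIES[chart])
    return stale

# The parent looks fresh, but its dependency tree is not.
print(stale_charts("app-platform", today=date(2024, 5, 10)))
```

The point of walking the full graph is visible in the example: the top-level chart is nine days old, but the check still surfaces the stale dependency underneath it.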
Runtime Correspondence
The last control is the correspondence check between what was installed and what is running. Helm tracks releases, and the release records include the rendered manifests as deployed. Comparing those manifests to the live state of the cluster catches drift that has happened post-install.
The correspondence check runs on a schedule against every Helm release. Resources that exist in the cluster but not in the release record are flagged. Resources that exist in the release record but not in the cluster are flagged. Resources whose live state has diverged from the release manifest are flagged.
The flags catch a range of problems. Manual edits made through kubectl that drift from the chart's intent. Adversary-introduced resources that are styled to look like they belong to a release. Operator-driven changes that are legitimate but not reflected in the chart, indicating a documentation gap.
The most useful version of the correspondence check is one that proposes remediation rather than just reporting drift. For each flagged divergence, the report suggests either reverting the live state to the release, or updating the chart to reflect the live state, depending on which is the correct outcome. The decision is human; the analysis is automated.
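The three flag classes reduce to a three-way comparison. A sketch in which release and live state are modelled as dicts keyed by resource identity; a real checker would read the Helm release secret and query the API server:

```python
# Correspondence sketch: compare the manifests recorded in a Helm release
# against the live cluster state and bucket every divergence.
def correspondence(release, live):
    flags = {"missing_from_cluster": [], "unmanaged_in_cluster": [],
             "diverged": []}
    for key, spec in release.items():
        if key not in live:
            flags["missing_from_cluster"].append(key)   # deleted post-install
        elif live[key] != spec:
            flags["diverged"].append(key)               # edited post-install
    for key in live:
        if key not in release:
            flags["unmanaged_in_cluster"].append(key)   # not from the release
    return flags

release = {"Deployment/prod/web": {"replicas": 3},
           "Service/prod/web": {"port": 80}}
live = {"Deployment/prod/web": {"replicas": 5},      # manually scaled
        "Secret/prod/web-token": {"data": "abc"}}    # appeared out of band
flags = correspondence(release, live)
```

Each bucket maps to a different remediation proposal: revert the diverged resource, reinstall the missing one, or investigate the unmanaged one, with a human making the final call.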
How Safeguard Helps
Safeguard runs the chart blueprint as a coherent system. The internal chart mirror enforces upstream signing, runs the scan on sync, and republishes with internal signing. The render-time gate evaluates rendered manifests against admission policy offline, catching problems before they reach the cluster. Value schemas are stored with the chart and enforced at render time, with a workflow for exceptions that mirrors the rest of the policy stack. The dependency graph is built at mirror sync and tracks freshness across the full chart hierarchy. The runtime correspondence checker reports drift between Helm releases and live cluster state, with proposed remediations attached to each finding. The result is Helm chart governance that finally matches the privilege level of what charts actually do, which is to define how every workload in your cluster gets configured and run.