Spinnaker is the deployment platform that nobody starts with and a surprising number of large organizations end up running. It handles multi-cloud deployment orchestration, blue-green rollouts, canary analysis, and the kind of complex deployment topology that simpler tools cannot express. It also has a security model that is genuinely difficult to get right, because the platform was designed around Netflix's specific trust model and bolting more general access controls onto it has been an ongoing effort for several years.
This post covers security patterns for Spinnaker as it stands in release 1.32 and 1.33, the current supported versions. It focuses on account isolation, pipeline template governance, artifact binding, and the CVE history behind the current authentication defaults. It is written for teams running Spinnaker in production, because if you are not already committed to Spinnaker, the ecosystem alternatives have narrowed the gap enough that starting fresh on Spinnaker is hard to justify.
The account model and the multi-cloud trust implications
Spinnaker's "accounts" are connections to cloud providers — AWS accounts, GCP projects, Kubernetes clusters — each with its own credentials configured in the Spinnaker Halyard configuration or the recent Armory Operator. A Spinnaker installation typically has dozens of accounts, each granting deployment rights to a specific environment.
The security pattern that matters here is account-to-application binding. Spinnaker has a Fiat authorization service that can restrict which applications can deploy to which accounts, and this is the primary tenant isolation boundary in a shared Spinnaker installation. The default configuration grants every application access to every account, which in a multi-team deployment means that any team can deploy to any environment — a permission configuration that rarely matches the actual organizational trust model.
Getting Fiat configured right requires enumerating your applications, enumerating your accounts, and explicitly granting the cross-product that matches your deployment authorization policy. This is tedious, and it is the single most impactful security configuration in a Spinnaker installation. Fiat is configured through fiat-local.yml, and a typical account section looks like:
fiat:
  enabled: true
  accounts:
    prod-aws-account:
      permissions:
        EXECUTE:
          - platform-engineers
          - sre-oncall
        READ:
          - platform-engineers
          - sre-oncall
          - security-auditors
    staging-aws-account:
      permissions:
        EXECUTE:
          - platform-engineers
          - backend-developers
        READ:
          - platform-engineers
          - backend-developers
          - security-auditors
Without Fiat, the only access control is who can authenticate to Spinnaker, and everyone who can authenticate can deploy anywhere. With Fiat configured, the deployment authority is explicitly tied to group membership in your identity provider, which is the right model for a multi-team platform.
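A useful pre-deployment check is to lint the account permission map for entries that grant access to everyone. The helper below is an illustrative sketch, not part of Spinnaker or Fiat; the account names and the assumption that an empty permission list means "unrestricted" mirror Fiat's default-open behavior described above.

```python
# Sketch: flag accounts whose permission block is missing or empty.
# In Fiat's model, an account with no explicit permissions is open to
# every authenticated user, defeating the isolation boundary.
# (Illustrative helper; account names are made up.)

def unrestricted_accounts(accounts: dict) -> list[str]:
    """Return account names that effectively grant access to everyone."""
    flagged = []
    for name, cfg in accounts.items():
        perms = (cfg or {}).get("permissions") or {}
        # A missing or empty EXECUTE/READ list means "no restriction".
        if not perms.get("EXECUTE") or not perms.get("READ"):
            flagged.append(name)
    return sorted(flagged)

accounts = {
    "prod-aws-account": {
        "permissions": {
            "EXECUTE": ["platform-engineers"],
            "READ": ["platform-engineers", "security-auditors"],
        }
    },
    "legacy-gcp-account": {},  # no permissions block at all
}

print(unrestricted_accounts(accounts))  # flags only the legacy account
```

Running a check like this in CI against the rendered configuration catches the common failure mode where a newly added account ships without any permission block.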
Pipeline templates and the governance gap
Spinnaker pipelines can be defined as JSON in the UI, or as pipeline templates using the Spinnaker Pipeline Templates v2 specification. Templates are reusable pipeline definitions that can be parameterized and instantiated by multiple applications. They solve the real problem of pipeline sprawl in large installations, and they introduce a governance challenge because a template used by 50 applications is a concentration risk.
The governance pattern is to treat pipeline templates as production code. They should live in a separate git repository from application code, with its own review requirements — typically two-reviewer approval from the platform team, with signed commits. Changes to the templates should be tested in a non-production Spinnaker installation before being promoted to production, because a template bug is a fleet-wide deployment bug.
Spinnaker's template engine has had its own share of CVEs over the years. CVE-2020-9301 was an SSRF in the template rendering logic. CVE-2022-23521 was an upstream Git vulnerability rather than a Spinnaker bug, but it mattered for installations pulling templates through Spinnaker's git integration. The pattern in each case was that user-controlled input to the template engine got interpreted as code rather than data, which is the consistent shape of expression-engine bugs across every platform that has them.
The hardening advice is to restrict template rendering to trusted templates only, limit the expression language to data access rather than arbitrary code execution, and keep the template engine patched aggressively. Spinnaker releases ship regularly, and staying within one minor version of the current release is the sustainable cadence.
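One way to enforce the "data, not code" rule at review time is a pre-merge lint over template files that flags Spinnaker expression-language (SpEL) constructs reaching beyond data access. This is a sketch; the two patterns below are illustrative, not an exhaustive blocklist.

```python
import re

# Sketch: flag SpEL expressions in pipeline templates that go beyond
# data access. Patterns are illustrative examples, not a complete list.
DANGEROUS = [
    re.compile(r"T\("),         # SpEL type references (java.lang.Runtime, etc.)
    re.compile(r"\bnew\s+\w"),  # object construction inside an expression
]

def flag_expressions(template_text: str) -> list[str]:
    """Return every ${...} expression matching a dangerous pattern."""
    hits = []
    for expr in re.findall(r"\$\{([^}]*)\}", template_text):
        if any(p.search(expr) for p in DANGEROUS):
            hits.append(expr)
    return hits

template = (
    '{"waitTime": "${ parameters.wait }", '
    '"cmd": "${ T(java.lang.Runtime).getRuntime() }"}'
)
print(flag_expressions(template))  # flags only the type-reference expression
```

A blocklist like this is a review aid, not a security boundary; the real control is restricting who can author templates at all.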
Artifact binding and the provenance story
Spinnaker deploys artifacts — container images, Debian packages, CloudFormation templates, Kubernetes manifests — sourced from external artifact providers. The binding between a pipeline trigger and the artifact being deployed is one of the most security-sensitive parts of the Spinnaker model, because it is where "the artifact that was tested" becomes "the artifact that is being deployed."
The pattern to watch for is loose artifact binding. A pipeline that triggers on "any new image in this Docker registry matching this regex" has a very loose binding, and an attacker who can push any image matching the regex can trigger a deployment. A pipeline that triggers on a signed webhook from a specific CI system, with the artifact reference included in the signed payload, has tight binding, and an attacker needs to compromise the CI system's signing key to spoof the trigger.
Spinnaker's webhook triggers support HMAC signing, and the signing should always be enabled for production pipelines. The shared secret is stored in the Spinnaker configuration and rotated on a regular schedule — quarterly is a reasonable cadence for most environments. An unauthenticated webhook trigger is not meaningfully different from a public API that deploys code to production, and treating it that way clarifies the hardening priority.
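The verification side of that pattern is small enough to show end to end. This sketch assumes the CI system sends a hex-encoded HMAC-SHA256 over the raw request body; the exact header name and encoding depend on your CI system's signing convention.

```python
import hashlib
import hmac

# Sketch: verify an HMAC-signed webhook trigger before acting on it.
# Hex encoding over the raw body is an assumption about the CI system's
# convention; check what your CI actually sends.
SHARED_SECRET = b"rotate-me-quarterly"  # lives in Spinnaker config in practice

def verify_webhook(body: bytes, signature_hex: str) -> bool:
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the match position through timing
    return hmac.compare_digest(expected, signature_hex)

body = b'{"artifact": "registry.example.com/app@sha256:abc123"}'
good_sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

print(verify_webhook(body, good_sig))         # True
print(verify_webhook(body + b" ", good_sig))  # False: payload was tampered with
```

Note that the artifact reference rides inside the signed body, which is what makes the binding tight: an attacker cannot swap the artifact without invalidating the signature.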
Spinnaker also supports artifact provenance verification through its integration with SLSA provenance tooling, though the native support is partial. For the full provenance enforcement story, you typically need an external policy engine — OPA Gatekeeper, Kyverno, or a commercial policy tool — evaluating the artifact metadata before Spinnaker's deploy stages run. Spinnaker 1.33 added a preview of native cosign signature verification for Kubernetes manifests, which closes part of the gap, but it is a preview feature and the production-grade story still typically involves external policy enforcement.
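The shape of that external policy check is simple: before the deploy stage runs, require the artifact to be pinned by digest and carry a provenance attestation. The field names below are assumptions for illustration, not a real OPA or Kyverno schema.

```python
# Sketch of an external policy gate evaluated before Spinnaker's deploy
# stage. Field names ("reference", "attestation") are illustrative
# assumptions, not a real policy-engine schema.

def provenance_ok(artifact: dict) -> bool:
    ref = artifact.get("reference", "")
    pinned = "@sha256:" in ref           # tag-only references are mutable
    attested = bool(artifact.get("attestation"))
    return pinned and attested

print(provenance_ok({
    "reference": "registry.example.com/app@sha256:abc123",
    "attestation": {"predicateType": "https://slsa.dev/provenance/v1"},
}))  # True
print(provenance_ok({"reference": "registry.example.com/app:latest"}))  # False
```

The digest-pinning half matters independently of attestation: a tag can be repointed after testing, a digest cannot.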
Authentication and the CVE history
Spinnaker's authentication story has been a source of CVEs more often than the project would prefer. CVE-2023-38476 was an authentication bypass in the OAuth2 integration. CVE-2022-23513 was an issue in the Fiat authorization flow where a specifically-formed request could escalate permissions. The pattern is consistent — Spinnaker's authentication and authorization code has enough complexity that corner cases slip through review, and the fixes land in point releases.
The hardening advice is to stay current, log authentication events aggressively, and alert on unusual patterns. The Spinnaker audit log — available through the Echo service — captures every significant user action, and feeding it into a SIEM with alerting on privilege escalations and unusual deployment targets is the standard practice.
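A concrete starting point for that SIEM alerting is a filter that surfaces deployments into production accounts by users outside the expected groups. The event shape below is simplified and assumed for illustration, not Echo's actual schema.

```python
# Sketch: filter audit events for deployments into production accounts
# by users outside the allowed groups. Event shape is an assumption,
# not Echo's real schema.

PROD_ACCOUNTS = {"prod-aws-account"}
ALLOWED_GROUPS = {"platform-engineers", "sre-oncall"}

def suspicious(events: list[dict]) -> list[dict]:
    return [
        e for e in events
        if e.get("account") in PROD_ACCOUNTS
        and not (set(e.get("groups", [])) & ALLOWED_GROUPS)
    ]

events = [
    {"user": "alice", "account": "prod-aws-account", "groups": ["sre-oncall"]},
    {"user": "bob", "account": "prod-aws-account", "groups": ["backend-developers"]},
]
print(suspicious(events))  # only bob's event survives the filter
```

In practice this rule belongs in the SIEM, where it can correlate against the identity provider's group data rather than a hardcoded set.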
A specific configuration worth checking: the Spinnaker default session timeout is longer than most security policies allow. The default Gate session timeout is six hours in some configurations and can be longer depending on the authentication provider. For production installations, a 30-minute timeout with re-authentication for deployment operations is a reasonable baseline.
Deployment windows and the human-in-the-loop pattern
Spinnaker supports manual judgment stages — pipeline steps that pause and require human approval before proceeding. For production deployments in high-sensitivity environments, requiring a manual judgment step before the final deploy stage is a cheap and effective control. It does not prevent a compromise but it does add a delay and a human checkpoint, which catches a meaningful fraction of unauthorized deploy attempts in practice.
Paired with that, deployment windows — configured through the Restrict Execution Windows stage option — let you prevent automated deploys outside of business hours. An attacker who can trigger a pipeline at 2 a.m. has to wait until morning, which is a window during which detection or a human reviewer can catch the change.
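The core of the restrict-execution-window logic is a single predicate. Spinnaker's stage option implements this natively; the sketch below just makes the behavior concrete, with example window values.

```python
from datetime import datetime, timezone

# Sketch of deployment-window logic: allow automated deploys only inside
# business hours. Window bounds are example values; Spinnaker's
# Restrict Execution Windows stage option provides this natively.

WINDOW_START, WINDOW_END = 9, 17  # 09:00-17:00 deploy window

def in_deploy_window(now: datetime) -> bool:
    return WINDOW_START <= now.hour < WINDOW_END

print(in_deploy_window(datetime(2024, 5, 7, 14, 0, tzinfo=timezone.utc)))  # True
print(in_deploy_window(datetime(2024, 5, 7, 2, 0, tzinfo=timezone.utc)))   # False
```

A pipeline triggered outside the window does not fail; it queues until the window opens, which is exactly the detection delay the pattern is buying.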
How Safeguard Helps
Safeguard integrates with Spinnaker by ingesting the pipeline execution metadata, correlating deployed artifacts against the signed provenance attestations from upstream CI systems, and blocking deployments where the provenance is missing or the policy gates have been bypassed. The platform also tracks the Fiat configuration for account-to-application binding drift, monitors pipeline template changes across the git repositories that hold them, and provides an auditable external record of who deployed what to which environment. Combined with continuous evaluation of the Spinnaker authentication logs against anomaly patterns, Safeguard gives you a defensible deployment-time posture for a platform whose native controls require deliberate configuration to be effective.