Vault Supply Chain Integration Patterns

HashiCorp Vault is a Swiss Army knife for secrets, but most teams use it as a glorified key-value store. A walkthrough of the integration patterns that make Vault actually useful in a CI/CD supply chain.

Shadab Khan
Senior Security Engineer
7 min read

I have watched four companies migrate to HashiCorp Vault with great fanfare and end up using it as an expensive etcd. They write secrets to kv/ paths, mount the KV engine v2 for versioning because someone mentioned it on a call, and declare the migration done. Vault is now a secrets store in the same way a Ferrari is a car for getting groceries: it works, but you have paid for capabilities you are not using, and the parts that justify the cost are gathering dust.

This post is about the Vault patterns that matter when Vault is part of a software supply chain — when CI runners, build systems, container registries, and cloud deploys all need credentials, and you need those credentials to be scoped, audited, short-lived, and revocable. The static KV path is the least interesting thing Vault can do. The interesting parts are dynamic secrets, identity-based auth, and the transit engine.

The Four Integration Points That Matter

A software supply chain has a small number of places where secrets flow across trust boundaries. Vault is worth its operational cost only if it is solving the hard problems at these boundaries.

CI/CD to cloud. The GitHub Actions runner needs to deploy to AWS. The old pattern is a long-lived IAM access key stored as a repo secret. The Vault pattern is the AWS secrets engine: CI authenticates to Vault with a short-lived JWT, Vault mints an IAM credential with a 30-minute TTL, the deploy runs, the credential expires. No long-lived key, no manual rotation, full audit trail of who requested what and when.
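On the Vault side, that pattern is a one-time setup plus one role definition. A sketch, assuming the `assumed_role` credential type; the account ID, ARN, and role names are placeholders:

```shell
# Enable the AWS secrets engine; Vault's own AWS credentials can come from
# aws/config/root or from the instance profile it runs under
vault secrets enable aws

# Define the role CI will read; every read of aws/creds/deploy-role
# returns a fresh STS credential scoped to this ARN
vault write aws/roles/deploy-role \
  credential_type=assumed_role \
  role_arns="arn:aws:iam::123456789012:role/ci-deploy" \
  default_sts_ttl=30m max_sts_ttl=1h
```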

CI/CD to database. The migration job needs to run ALTER TABLE against Postgres. The Vault pattern is the database secrets engine: Vault connects to Postgres with an admin role, creates a new user on demand with a 10-minute TTL and exactly the privileges the migration needs, returns those credentials to the job. When the job ends, Vault drops the user. The database never sees the same credential twice.

Services to services. Service A calls Service B. The old pattern is a shared API key in an environment variable. The Vault pattern depends on what you are running. On Kubernetes, the Vault Agent Injector mutates pods to get secrets via a sidecar. On VMs, AppRole with response wrapping distributes short-lived tokens. In either case, the secret never appears in a container image, a Helm values file, or a Terraform state.
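The VM flavor of this can be sketched with the CLI; the role name and policy here are hypothetical:

```shell
# Enable AppRole and define a role for the calling service
vault auth enable approle
vault write auth/approle/role/service-a \
  token_ttl=15m token_policies="call-service-b"

# The deploy tooling requests a SecretID wrapped in a single-use token (60s window)
vault write -wrap-ttl=60s -f auth/approle/role/service-a/secret-id

# The service unwraps it exactly once; a second unwrap fails, signaling interception
vault unwrap <wrapping_token>
```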

Signing and encryption. The release pipeline needs to sign artifacts or encrypt data at rest. The Vault pattern is the transit engine: Vault holds the private key, and the pipeline asks Vault to sign or encrypt a payload. The key never leaves Vault. Rotation is a single API call, and decryption continues to work across key versions because the ciphertext carries the version number.

If your Vault deployment is not doing at least two of these, you are paying Vault operator tax for KV storage you could have gotten from Parameter Store at a quarter of the effort.

JWT Auth and the Death of Long-Lived CI Tokens

The single highest-leverage integration is JWT auth between your CI system and Vault. GitHub Actions, GitLab CI, CircleCI, and Buildkite all issue OIDC tokens to their runners. Vault's JWT auth method validates those tokens and grants Vault tokens bound to the claims inside them.
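The Vault side of that handshake is a one-time setup: enable the method and point it at the CI provider's OIDC issuer. For GitHub Actions it looks roughly like this:

```shell
# Vault discovers GitHub's signing keys via OIDC discovery and uses
# them to verify every token presented by a runner
vault auth enable jwt
vault write auth/jwt/config \
  oidc_discovery_url="https://token.actions.githubusercontent.com" \
  bound_issuer="https://token.actions.githubusercontent.com"
```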

Here is what the CI side looks like:

# the calling job needs `permissions: id-token: write` to request the OIDC token
- uses: hashicorp/vault-action@v3
  with:
    url: https://vault.internal
    method: jwt
    role: github-actions-deploy
    jwtGithubAudience: https://github.com/acme-org
    secrets: |
      aws/creds/deploy-role access_key | AWS_ACCESS_KEY_ID ;
      aws/creds/deploy-role secret_key | AWS_SECRET_ACCESS_KEY ;
      aws/creds/deploy-role security_token | AWS_SESSION_TOKEN

And the Vault role, defined once:

path "auth/jwt/role/github-actions-deploy" {
  role_type       = "jwt"
  user_claim      = "actor"
  bound_audiences = ["https://github.com/acme-org"]
  bound_claims    = {
    repository = "acme-org/payments-service"
    ref        = "refs/heads/main"
  }
  policies = ["deploy-payments"]
  ttl      = "30m"
}

Read the bound_claims block carefully. This role only works for the payments-service repository, only on the main branch. A compromised runner in any other repo cannot mint this token, because Vault verifies the OIDC claim signed by GitHub. No long-lived credential exists anywhere. If the repository is compromised, the attacker gets a 30-minute window on the main branch and nothing else.

I have yet to see this pattern fail to impress a security auditor.

Dynamic Database Credentials in Practice

The database secrets engine is Vault's most underused feature. The configuration is a few minutes of work:

vault write database/config/payments-db \
  plugin_name=postgresql-database-plugin \
  connection_url="postgresql://{{username}}:{{password}}@payments-db.internal:5432/payments?sslmode=require" \
  allowed_roles="readonly,migrator" \
  username="vault_root" password="..."

vault write database/roles/migrator \
  db_name=payments-db \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
                        GRANT ALL ON SCHEMA public TO \"{{name}}\";" \
  default_ttl="10m" max_ttl="30m"

A migration job calls vault read database/creds/migrator, gets a unique username and password, runs the migration, and walks away. Vault revokes the role when the lease expires. If the migration script is captured in a logfile, the credential inside it is already dead.

The rough edge: connection churn. Every job gets a new user, which means a new connection and a small amount of Postgres overhead. For high-frequency jobs, lean on longer TTLs and lease renewal instead of minting a fresh user per call. For sensitive long-lived services, use the Kubernetes auth method (or AppRole on VMs) with renewable tokens, which tie the credential to workload identity rather than to a single invocation.
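Lease reuse is plain CLI mechanics; the lease ID below is a per-issue placeholder returned alongside the credential:

```shell
# Renew a dynamic credential's lease instead of minting a new user
vault lease renew -increment=10m database/creds/migrator/<lease_id>

# Or revoke it the moment the job finishes rather than waiting out the TTL
vault lease revoke database/creds/migrator/<lease_id>
```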

Transit Engine for Artifact Signing

If you sign release artifacts — SBOMs, container images, tarballs — the transit engine is the right place for the signing key. The key is generated inside Vault, never exportable, and you sign through an API call:

vault write transit/sign/release-key \
  input="$(openssl dgst -sha256 -binary artifact.tar.gz | base64)" \
  prehashed=true hash_algorithm=sha2-256

The key rotates on a schedule without any consumer changes. Old signatures still verify because the signature includes the key version. When an engineer leaves, you revoke their token; you do not re-issue a signing key to every pipeline.
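Rotation and cross-version verification are each one call. A sketch, with the signature value held in a variable rather than spelled out:

```shell
# Rotate the key; prior versions are kept so old signatures still verify
vault write -f transit/keys/release-key/rotate

# Verify: the "vault:vN:..." prefix on the signature selects the key version
vault write transit/verify/release-key \
  input="$(openssl dgst -sha256 -binary artifact.tar.gz | base64)" \
  prehashed=true hash_algorithm=sha2-256 \
  signature="$SIGNATURE"
```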

Cosign supports this directly via the Vault KMS provider (cosign sign --key hashivault://release-key ...). The flow is the same as with a cloud KMS, but the key material stays inside your trust boundary instead of in AWS's or Google's.

What Breaks, and How to Prepare

Vault is a dependency, and if it is down, nothing deploys. Two controls prevent this from being a crisis:

  1. Performance replicas. Enterprise Vault supports read replicas in each region. Auth and secret reads go to the local replica; writes forward to the primary. A primary outage degrades writes (new leases) but not reads (existing leases).
  2. Lease TTLs long enough to ride out an outage. If your Vault primary goes down for 15 minutes and your credentials have a 30-minute TTL, nothing with an existing lease notices. If your TTLs are 5 minutes, everything falls over.

Tune TTLs per engine. Dynamic credentials for short jobs should be short. Service-to-service tokens with periodic renewal should be long enough that one Vault outage does not cascade.

How Safeguard Helps

Safeguard watches the boundary where Vault-backed credentials meet your actual code. We inventory every Vault path referenced from your repositories and CI configs, so you can spot services still reading from a deprecated KV path that should be on the database engine. When a CVE lands against a Vault plugin or a version of the Vault Agent Injector, our alerts map it to the specific projects that will need to patch. And for the handoff between Vault and your supply chain — the JWT role configs, the transit key policies — we flag drift from the baseline you approved, so a quiet change to bound_claims does not turn into a silent privilege expansion.
