DevSecOps

AWS CodePipeline Hardening Patterns

CodePipeline is the glue between your source, build, and deploy. It is also the thing that gets the widest IAM role in most AWS accounts. Here is how to harden it without rewriting your pipelines.

Nayan Dey
Senior Security Engineer
7 min read

CodePipeline has quietly become the nervous system of most AWS-native delivery stacks. It pulls from CodeCommit or GitHub, runs CodeBuild projects, pushes to ECR or S3, and kicks off deployments to ECS, EKS, Lambda, or wherever. That means the CodePipeline service role sees everything. It routes artifacts between stages. It has the credentials to trigger every downstream action. It is one of the highest-leverage IAM roles in the account, and most teams have not given it serious thought since the pipeline was first stood up.

This post walks through the hardening patterns I apply to CodePipeline deployments, in roughly the order I apply them during an engagement.

Separate the pipeline role from the action roles

A CodePipeline has two kinds of roles, and the distinction matters.

The pipeline service role is the role CodePipeline itself assumes to orchestrate stages: read the source, trigger the build, pass artifacts, invoke deploy actions. This role needs access to the source provider, the artifact S3 bucket, and the KMS key encrypting the artifact bucket. It should not need any production deployment permissions.

The action roles are the roles each individual action assumes. CodeBuild runs with its own role. CloudFormation deploy actions run with a deployment role. ECS deploy actions run with their own role. Each action role is scoped to what that specific action needs to do.

The anti-pattern: a single role attached to both the pipeline and every action, with a policy that says Action: "*", Resource: "*" because that was what it took to get the pipeline running on a Friday afternoon in 2019. I have seen this role in production at more organizations than I would like to admit. Fixing it is usually two or three days of careful work and eliminates an entire class of cross-stage privilege escalation.

The pipeline role's policy should essentially be limited to: s3:GetObject, s3:PutObject, s3:GetBucketVersioning on the artifact bucket, codepipeline:PutJobSuccessResult and PutJobFailureResult on itself, the specific codebuild:StartBuild and codebuild:BatchGetBuilds calls on the exact project ARNs it triggers, and the specific cloudformation:CreateChangeSet and related calls on the exact stack ARNs it deploys to. Everything else belongs on an action role.
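A minimal sketch of that scoped pipeline policy as a policy document, expressed here as a Python dict. All ARNs are hypothetical placeholders; substitute your own bucket, project, and stack names.

```python
import json

# Hypothetical ARNs for illustration only.
ARTIFACT_BUCKET_ARN = "arn:aws:s3:::example-pipeline-artifacts"
BUILD_PROJECT_ARN = "arn:aws:codebuild:us-east-1:111122223333:project/example-build"
STACK_ARN = "arn:aws:cloudformation:us-east-1:111122223333:stack/example-app/*"

pipeline_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Artifact bucket: read and write objects only.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"{ARTIFACT_BUCKET_ARN}/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetBucketVersioning",
            "Resource": ARTIFACT_BUCKET_ARN,
        },
        {
            # Trigger only the specific CodeBuild project.
            "Effect": "Allow",
            "Action": ["codebuild:StartBuild", "codebuild:BatchGetBuilds"],
            "Resource": BUILD_PROJECT_ARN,
        },
        {
            # Create change sets only against the specific stack.
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateChangeSet",
                "cloudformation:ExecuteChangeSet",
                "cloudformation:DescribeChangeSet",
                "cloudformation:DescribeStacks",
            ],
            "Resource": STACK_ARN,
        },
    ],
}

print(json.dumps(pipeline_role_policy, indent=2))
```

Note what is absent: no iam:*, no wildcard resources, no deployment permissions. Those live on the action roles.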

Lock down the artifact bucket

CodePipeline artifacts transit through an S3 bucket. By default, this bucket is created with SSE-S3 encryption and a permissive bucket policy. This is the bucket that contains your unreleased source code, your compiled binaries before they are signed, and any configuration artifacts that stages produce.

Four controls that matter:

KMS encryption with a customer-managed key. Change the artifact bucket to use a customer-managed KMS key. Grant kms:Decrypt and kms:GenerateDataKey only to the pipeline role and the specific action roles that need to read artifacts. This prevents a compromised role with S3 read access from reading the artifacts; it would also need an explicit grant on the KMS key.
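The key-policy statement that enforces this might look like the following sketch; the role ARNs are hypothetical placeholders.

```python
# Hypothetical role ARNs for illustration only.
PIPELINE_ROLE = "arn:aws:iam::111122223333:role/example-pipeline-role"
DEPLOY_ACTION_ROLE = "arn:aws:iam::111122223333:role/example-deploy-action-role"

# Statement for the customer-managed key's key policy: only the pipeline
# role and the action roles that read artifacts get use of the key.
artifact_key_policy_statement = {
    "Sid": "AllowArtifactRolesOnly",
    "Effect": "Allow",
    "Principal": {"AWS": [PIPELINE_ROLE, DEPLOY_ACTION_ROLE]},
    "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "*",  # in a key policy, "*" refers to this key itself
}
```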

Versioning on, lifecycle rules that preserve recent versions. Artifact buckets should have versioning enabled so that if an artifact is overwritten or corrupted you can recover. Configure a lifecycle rule that transitions noncurrent versions to Glacier after thirty days and deletes after ninety. This balances recovery against storage cost.
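As a sketch, the thirty/ninety-day rule in the shape S3's lifecycle configuration API expects (the rule ID is an arbitrary placeholder):

```python
# Lifecycle configuration for the artifact bucket: keep noncurrent
# versions recoverable for 30 days, archive to Glacier, delete at 90.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "artifact-noncurrent-retention",
            "Status": "Enabled",
            "Filter": {},  # apply to every object in the bucket
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
        }
    ]
}
```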

Block Public Access, account-level. Confirm that the account-level S3 Block Public Access settings are on. Artifact buckets have been leaked in the past because somebody accidentally made them public while debugging. The account-level block prevents this even when a bucket-level policy would allow it.

Bucket policy denies non-TLS and non-KMS writes. Add explicit deny statements for aws:SecureTransport: false and for PutObject calls that do not specify the KMS key. This is belt-and-suspenders but it is the kind of thing that catches misconfiguration from other tools writing into the bucket.
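The two deny statements, sketched with placeholder bucket and key ARNs:

```python
# Hypothetical ARNs for illustration only.
BUCKET_ARN = "arn:aws:s3:::example-pipeline-artifacts"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

deny_statements = [
    {
        # Reject any request that arrives over plain HTTP.
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [BUCKET_ARN, f"{BUCKET_ARN}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    },
    {
        # Reject writes that do not specify the expected KMS key.
        "Sid": "DenyWrongEncryptionKey",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"{BUCKET_ARN}/*",
        "Condition": {
            "StringNotEquals": {
                "s3:x-amz-server-side-encryption-aws-kms-key-id": KMS_KEY_ARN
            }
        },
    },
]
```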

Source stage controls

If your source is GitHub, use the GitHub App integration, not a personal access token. The App installation is tied to the organization, not to an individual. If you are still using the v1 GitHub integration with a user's OAuth token, migrate. A token tied to a specific person's account means the pipeline breaks when that person leaves or revokes the token, and more seriously means that a compromise of that account compromises every pipeline they authorized.

If your source is CodeCommit, use an IAM role that the pipeline assumes with codecommit:GitPull on the specific repository ARN, with a condition limiting to the branch you actually ship from. Do not leave the default broad CodeCommit access.
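A sketch of a repository-scoped source policy; the repository name and account are placeholders, and the Get*/UploadArchive actions are the ones the CodeCommit source action needs to hand the revision to the pipeline.

```python
# Source-stage policy scoped to one repository ARN, not codecommit:*
# on Resource: "*". Names are hypothetical.
source_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:GitPull",
                "codecommit:GetBranch",
                "codecommit:GetCommit",
                "codecommit:UploadArchive",
                "codecommit:GetUploadArchiveStatus",
            ],
            "Resource": "arn:aws:codecommit:us-east-1:111122223333:example-repo",
        }
    ],
}
```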

The source stage should specify a branch filter. main or release/* or whatever your release branches are. The anti-pattern is a source stage that listens on every branch, which means any pushed branch triggers a pipeline run and consumes build minutes. Worse, if the pipeline includes any stage that exposes information (a test report to a Slack channel, a build status badge), every branch's build exposes that information. Be explicit about which branches trigger builds.
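For the GitHub App (CodeStarSourceConnections) source action, the branch filter lives in the action configuration. A sketch with hypothetical names:

```python
# Configuration block for a CodeStarSourceConnections source action.
# ConnectionArn and repository names are hypothetical placeholders.
source_configuration = {
    "ConnectionArn": "arn:aws:codestar-connections:us-east-1:111122223333:connection/EXAMPLE-ID",
    "FullRepositoryId": "example-org/example-service",
    "BranchName": "main",  # explicit: only this branch triggers the pipeline
    "OutputArtifactFormat": "CODE_ZIP",
}
```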

Manual approval actions are credentials

A manual approval action is a pause in the pipeline where a human clicks approve. The humans who can approve are defined by the action's Configuration and the IAM permissions to call codepipeline:PutApprovalResult. These are production deployment credentials. Treat them accordingly.

Specifically: the set of IAM principals with codepipeline:PutApprovalResult on your production pipeline should be as short as the set with production admin access, because they are equivalent for practical purposes. Audit this list. Require MFA for approval actions via an IAM condition. Log approvals to a dedicated CloudTrail destination that is not writable by the same principals who can approve.

I have watched pipelines where codepipeline:PutApprovalResult was granted with Resource: "*" to a broad developer role, because someone wanted to let developers approve their own changes in non-prod and did not scope the permission by pipeline ARN. This is a production escalation path. Scope by pipeline ARN.
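A sketch of an approval policy scoped the right way: pinned to one pipeline's ARN and gated on MFA. The pipeline name and account are placeholders.

```python
# Approval permission scoped by pipeline ARN and conditioned on MFA.
# PutApprovalResult resources take the form pipeline/stage/action,
# so the trailing /* covers the approval actions within this pipeline.
approval_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "codepipeline:PutApprovalResult",
            "Resource": "arn:aws:codepipeline:us-east-1:111122223333:prod-pipeline/*",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}
```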

Deploy stages and the CloudFormation role

When CodePipeline triggers a CloudFormation deploy action, it does so with a CloudFormation execution role. This role is what CloudFormation uses to create resources. It has to be broad enough to create everything in your stack, which for a typical production stack is extensive.

The pattern that works: one CloudFormation execution role per stack, scoped to the specific resource types that stack creates. The role has a permissions boundary that caps its authority within the SCP-defined envelope for that environment. The role's trust policy allows only CloudFormation to assume it, with a condition pinning the source account and, where supported, the specific stack ARN.
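A sketch of that trust policy. The account ID and stack name are placeholders; the aws:SourceArn condition assumes CloudFormation populates it with the stack ARN for your operation type, so verify support before relying on it and keep the aws:SourceAccount check regardless.

```python
# Trust policy for a per-stack CloudFormation execution role:
# only CloudFormation may assume it, only from this account, and
# (where the service supplies aws:SourceArn) only for this stack.
cfn_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "cloudformation.amazonaws.com"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"aws:SourceAccount": "111122223333"},
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:cloudformation:us-east-1:111122223333:stack/example-app/*"
                },
            },
        }
    ],
}
```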

The anti-pattern I see: a single CloudFormationDeployRole with Action: "*", Resource: "*" attached, used by every pipeline, because scoping per stack was "too much work." This role is effectively an AdministratorAccess role that any pipeline trigger can invoke. A compromised source or build stage has a direct path to account takeover via this role.

Notifications, CloudTrail, and what you actually see

CodePipeline execution history is retained for twelve months by default. Pipeline execution details older than that are gone. For audit purposes, pipe every pipeline state change to EventBridge (CodePipeline has native EventBridge integration) and from there to a long-retention destination: S3, a SIEM, or CloudTrail Lake.

The events you want to keep forever: every CodePipeline Pipeline Execution State Change, every CodePipeline Stage Execution State Change, every CodePipeline Action Execution State Change. These tell you who ran what pipeline when, with what source revision, and whether it succeeded. When an incident happens months later, this history is what lets you reconstruct the timeline.
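The EventBridge rule that captures all three can use an event pattern like this sketch; attach it to a rule whose target is your long-retention destination.

```python
import json

# Event pattern matching every CodePipeline state-change event type.
event_pattern = {
    "source": ["aws.codepipeline"],
    "detail-type": [
        "CodePipeline Pipeline Execution State Change",
        "CodePipeline Stage Execution State Change",
        "CodePipeline Action Execution State Change",
    ],
}

print(json.dumps(event_pattern, indent=2))
```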

For notifications, wire a notification rule to a dedicated SNS topic that feeds your Slack or Teams channel. Do not use the default CodePipeline notification which sends to your email address. Email notifications are not auditable, not searchable, and do not survive staff changes.
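A sketch of such a notification rule in the shape the CodeStar Notifications API expects; pipeline, topic, and rule names are hypothetical, and the event type IDs shown are a starting set to adjust for your needs.

```python
# Notification rule: pipeline events to a dedicated SNS topic that
# feeds chat, rather than an individual's email inbox.
notification_rule = {
    "Name": "prod-pipeline-events",
    "Resource": "arn:aws:codepipeline:us-east-1:111122223333:prod-pipeline",
    "EventTypeIds": [
        "codepipeline-pipeline-pipeline-execution-failed",
        "codepipeline-pipeline-pipeline-execution-succeeded",
        "codepipeline-pipeline-manual-approval-needed",
    ],
    "Targets": [
        {
            "TargetType": "SNS",
            "TargetAddress": "arn:aws:sns:us-east-1:111122223333:pipeline-events",
        }
    ],
    "DetailType": "FULL",
}
```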

How Safeguard Helps

Safeguard maps every CodePipeline in your AWS organization to its service role, action roles, artifact bucket, and deploy targets, and flags the combinations that represent cross-stage privilege risk. We track approval principal lists and alert when the approver set diverges from the intended administrator group. For artifact buckets, Safeguard checks KMS key policies, versioning, and public access configuration and surfaces pipelines whose artifacts are not encrypted with a customer-managed key. The result is a pipeline-by-pipeline risk view that reflects the IAM graph as it actually exists, not as the original template intended.
