CI/CD Pipeline Audit Logging: What to Capture and Why

Your CI/CD pipeline is a high-value target. Without proper audit logging, you will not know when it has been compromised until it is too late.

Bob
Security Architect

CI/CD pipelines are the most privileged automated systems in most organizations. They have access to source code repositories, secret stores, artifact registries, and production deployment credentials. A compromised pipeline can inject malicious code into every build, exfiltrate secrets, and deploy backdoored artifacts to production.

Despite this, most organizations have better audit logging for their web applications than for their build systems. When a pipeline is compromised, the investigation often starts with the question: "Do we have any logs?" And the answer is often: "Not really."

Why Pipeline Audit Logging Matters

Post-Incident Forensics

When a supply chain incident occurs, the first question is: "When did this start, and what was affected?" Without pipeline logs, you cannot answer either question. You cannot determine when a malicious step was introduced, which builds were affected, or what artifacts were compromised.

The SolarWinds investigation relied heavily on build system logs to establish the timeline of the attack. Organizations without equivalent logging would have been unable to conduct a similar investigation.

Tamper Detection

A well-logged pipeline creates an audit trail that makes tampering detectable. If an attacker modifies a pipeline configuration, adds a step, changes a secret, or deploys an unexpected artifact, the log should capture it. Without logging, these changes are invisible.

Compliance Requirements

Multiple compliance frameworks require audit logging of the software delivery process:

  • SOC 2 (CC7.1, CC7.2): Monitoring of system components and detection of anomalous activity
  • NIST SSDF (PO.3): Implementing supporting toolchains and configuring them to generate evidence of secure development practices
  • SLSA: Supply-chain Levels for Software Artifacts requires provenance attestations, which depend on build logging
  • PCI DSS (Requirement 10): Logging and monitoring of all access to system components

What to Log

Pipeline Configuration Changes

Every change to pipeline configuration should be logged:

  • Who changed the configuration (user identity, authentication method)
  • When the change was made (timestamp with timezone)
  • What was changed (diff of the configuration before and after)
  • How the change was made (UI, API, direct file edit via commit)

Specific events to capture:

  • Pipeline definition created, modified, or deleted
  • Pipeline steps added, removed, or reordered
  • Environment variables or secrets added, modified, or deleted
  • Trigger conditions changed (branch filters, event types)
  • Approval gates added or removed
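As a sketch of what such an event might look like, the snippet below builds a configuration-change record covering the who/when/what/how fields above. The field names and the `config_change_event` helper are illustrative assumptions, not any platform's schema.

```python
# Illustrative sketch: emitting a structured audit event for a pipeline
# configuration change. Field names are assumptions, not a vendor schema.
import difflib
import json
from datetime import datetime, timezone

def config_change_event(actor, auth_method, before, after, channel):
    """Build an audit event capturing who/when/what/how for a config change."""
    diff = list(difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="before", tofile="after", lineterm=""))
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when (with tz)
        "event_type": "configuration_change",
        "actor": actor,               # who made the change
        "auth_method": auth_method,   # how they authenticated
        "channel": channel,           # how: ui | api | commit
        "diff": diff,                 # what changed (before/after diff)
    }

event = config_change_event(
    actor="alice", auth_method="sso",
    before="steps:\n  - build\n  - test",
    after="steps:\n  - build\n  - test\n  - deploy",
    channel="api")
print(json.dumps(event, indent=2))
```

Storing the diff rather than only the new configuration is what makes "what was changed" answerable during an investigation.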

Pipeline Execution Events

For each pipeline execution:

  • Trigger event: What caused the pipeline to run (push, PR, schedule, manual trigger, API call)
  • Trigger identity: Who or what initiated the run
  • Input parameters: Branch, commit SHA, environment variables passed at runtime
  • Step execution: Start time, end time, exit code, and output hash for each step
  • Artifact generation: Hashes and metadata of all artifacts produced
  • Secret access: Which secrets were accessed during execution (not the secret values)
  • External calls: Network requests made during the build (registry pulls, API calls)
  • Deployment events: What was deployed, to which environment, with what configuration

Access Control Events

  • User authentication to the CI/CD platform
  • Permission changes (role assignments, project access grants)
  • API token creation, rotation, and revocation
  • Service account usage
  • Failed authentication attempts

Artifact Lifecycle Events

  • Artifact creation with hash, size, and provenance metadata
  • Artifact storage and retrieval
  • Artifact promotion between environments (dev to staging to production)
  • Artifact deletion or expiration

How to Log

Structured Logging Format

Use structured logging (JSON) rather than plain text. Structured logs are parseable by automated systems, searchable, and analyzable at scale.

Each log entry should include:

  • Timestamp (ISO 8601 with timezone)
  • Event type (configuration_change, execution_start, step_complete, etc.)
  • Pipeline identifier
  • Execution identifier (unique per run)
  • Actor (user or service account that triggered the action)
  • Resource (what was affected)
  • Action (what happened)
  • Outcome (success, failure, error)
  • Metadata (additional context specific to the event type)
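One way to enforce this schema is a small validation check that rejects entries missing required fields. The field names mirror the list above; the `missing_fields` helper is an illustrative sketch:

```python
# Sketch: validate that every emitted audit entry carries the required fields
# listed above. The helper and entry shape are illustrative assumptions.
REQUIRED_FIELDS = {
    "timestamp", "event_type", "pipeline_id", "execution_id",
    "actor", "resource", "action", "outcome", "metadata",
}

def missing_fields(entry: dict) -> set:
    """Return the set of required fields absent from a log entry."""
    return REQUIRED_FIELDS - entry.keys()

entry = {
    "timestamp": "2024-05-01T12:00:00+00:00",  # ISO 8601 with timezone
    "event_type": "execution_start",
    "pipeline_id": "web-app-deploy",
    "execution_id": "run-4821",
    "actor": "svc-ci",
    "resource": "pipeline/web-app-deploy",
    "action": "execute",
    "outcome": "success",
    "metadata": {"branch": "main"},
}

assert not missing_fields(entry)             # complete entry passes
assert missing_fields({"actor": "svc-ci"})   # incomplete entry is flagged
```

Running this check at the point of emission, rather than at query time, prevents silently unqueryable entries from accumulating.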

Immutable Log Storage

Pipeline logs must be stored in a location where they cannot be modified or deleted by the same credentials that run the pipeline. If an attacker compromises the pipeline, they should not be able to destroy the evidence.

Options:

  • Write-only log aggregation service (Datadog, Splunk, Elasticsearch with write-only API keys)
  • Append-only storage (AWS CloudWatch Logs, Google Cloud Logging)
  • WORM storage (AWS S3 Object Lock, Azure Immutable Blob Storage)
  • Separate logging infrastructure with independent credentials
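A complementary technique to immutable storage is making the log tamper-evident: chain each entry to the hash of the previous one, so any modification or deletion breaks verification. This is an illustrative local sketch of hash chaining, not a replacement for WORM storage or separate credentials:

```python
# Tamper-evident log sketch: each entry commits to the hash of the previous
# entry, so edits to history break verification. Illustrative only.
import hashlib
import json

def chain_append(log: list, entry: dict) -> None:
    prev = log[-1]["chain_hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    log.append(dict(
        entry, prev_hash=prev,
        chain_hash=hashlib.sha256((prev + payload).encode()).hexdigest()))

def chain_verify(log: list) -> bool:
    prev = "0" * 64
    for e in log:
        # Recompute the hash over the entry body, excluding the chain fields
        body = {k: v for k, v in e.items() if k not in ("prev_hash", "chain_hash")}
        payload = json.dumps(body, sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if e["prev_hash"] != prev or e["chain_hash"] != expected:
            return False
        prev = e["chain_hash"]
    return True

log = []
chain_append(log, {"event_type": "execution_start", "pipeline_id": "deploy"})
chain_append(log, {"event_type": "step_complete", "step": "build"})
assert chain_verify(log)
log[0]["pipeline_id"] = "tampered"  # an attacker edits history...
assert not chain_verify(log)        # ...and verification fails
```

Anchoring the latest chain hash somewhere outside the pipeline's reach (for example, the WORM store above) extends the same guarantee to truncation of the log.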

Log Retention

Retain pipeline logs for at least one year. Supply chain attacks often have long dwell times -- the SolarWinds compromise went undetected for approximately nine months. Short retention windows may mean that evidence of a compromise is deleted before the investigation begins.

Compliance frameworks typically require specific retention periods:

  • SOC 2: No specific minimum, but one year is standard practice
  • PCI DSS: One year minimum, three months immediately available
  • NIST: Based on organizational policy, but one year is recommended

Detecting Anomalies

Collecting logs is necessary but not sufficient. You need to actively monitor for anomalous patterns:

Configuration Anomalies

  • Pipeline modifications outside normal business hours
  • Configuration changes by users who do not normally modify pipelines
  • Addition of new pipeline steps that run shell commands
  • Changes to secret references or environment variable sources
  • Removal of security-relevant steps (scanning, signing, verification)
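The last item above, removal of security-relevant steps, can be checked mechanically by diffing the step list before and after a configuration change. The step names and the `removed_security_steps` helper are illustrative assumptions:

```python
# Hedged sketch: flag a configuration change that drops security-relevant
# steps by diffing the step lists. Step names are examples, not a standard.
SECURITY_STEPS = {"sbom-scan", "sign-artifact", "verify-provenance"}

def removed_security_steps(before: list, after: list) -> set:
    """Return security-relevant steps present before the change but not after."""
    return (set(before) - set(after)) & SECURITY_STEPS

alerts = removed_security_steps(
    before=["build", "test", "sbom-scan", "sign-artifact", "deploy"],
    after=["build", "test", "deploy"])
print(sorted(alerts))  # the dropped scanning and signing steps
```

A rule like this is a natural candidate for the high-priority alerting described in the implementation checklist below.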

Execution Anomalies

  • Pipeline runs triggered by unknown or unusual sources
  • Build durations significantly longer or shorter than baseline
  • New network destinations contacted during builds
  • Artifact sizes significantly different from baseline
  • Steps that produce different output hashes from identical inputs
  • Failed steps that are then marked as successful (suggesting log manipulation)
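Duration anomalies, for instance, can be flagged with a simple statistical baseline over recent runs. The three-sigma threshold and the `is_duration_anomaly` helper below are assumptions for illustration; production systems typically use more robust baselines:

```python
# Illustrative baseline check: flag a build whose duration deviates more than
# three standard deviations from recent history. Threshold is an assumption.
from statistics import mean, stdev

def is_duration_anomaly(history: list, current: float, sigmas: float = 3.0) -> bool:
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return current != mu
    return abs(current - mu) > sigmas * sd

baseline = [300, 310, 295, 305, 298, 302]    # recent build durations, seconds
assert not is_duration_anomaly(baseline, 307)
assert is_duration_anomaly(baseline, 900)    # e.g. an injected step doing extra work
```

The same pattern applies to the artifact-size baseline above: any numeric property of a build can be compared against its own history.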

Access Anomalies

  • Authentication from unusual locations or IP addresses
  • API token usage patterns that differ from baseline
  • Permission escalation (user gaining admin access)
  • Service account usage outside expected pipelines
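A minimal version of the first check, authentication from unusual sources, compares events against an observed baseline. The event shape and the baseline set are illustrative assumptions:

```python
# Sketch: flag authentications from source addresses outside an observed
# baseline. Event fields and the baseline set are illustrative assumptions.
def access_anomalies(events: list, known_sources: set) -> list:
    """Return authentication events originating outside the known baseline."""
    return [e for e in events
            if e["event_type"] == "authentication"
            and e["source_ip"] not in known_sources]

events = [
    {"event_type": "authentication", "actor": "alice", "source_ip": "10.0.0.5"},
    {"event_type": "authentication", "actor": "alice", "source_ip": "203.0.113.9"},
]
flagged = access_anomalies(events, known_sources={"10.0.0.5", "10.0.0.6"})
print([e["source_ip"] for e in flagged])
```

In practice the baseline would be learned per actor from the access-control events logged above, rather than hard-coded.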

Implementation Checklist

For organizations implementing pipeline audit logging:

  1. Inventory all CI/CD platforms in use across the organization
  2. Enable built-in audit logging on each platform (GitHub Actions audit log, GitLab audit events, Jenkins audit trail plugin)
  3. Configure log forwarding to a centralized, immutable log store
  4. Define alert rules for high-priority anomalies (configuration changes, access anomalies)
  5. Establish log review cadence (daily automated review, weekly manual review)
  6. Test the logging by simulating events and verifying they are captured
  7. Document the logging architecture including what is captured, where it is stored, and how to query it
  8. Set retention policies that meet compliance requirements

How Safeguard.sh Helps

Safeguard.sh integrates with CI/CD platforms to capture pipeline audit data alongside supply chain security events. When Safeguard's policy gates evaluate a build, the evaluation results, including which policies were checked and their outcomes, are logged with full provenance data. This creates a continuous audit trail of security governance across your build pipelines, providing the evidence base for compliance reporting and the forensic data needed for incident investigation.
