GitLab's OIDC integration with AWS, GCP, Azure, and HashiCorp Vault replaced a decade of long-lived static credentials in CI. It was a genuine security improvement. It also created a new, denser class of attack: if you can trick GitLab into minting a token with permissive audiences and claims, you inherit whatever cloud identity that token was configured to assume. Research published through 2024 and 2025 by Palo Alto Networks Unit 42, Datadog Security Labs, and an increasingly vocal set of independent researchers on GitHub has mapped the workflow-level weaknesses that make this feasible in practice.
How does GitLab OIDC actually work?
GitLab's CI job issues a short-lived JWT signed with GitLab's private key and verifiable against GitLab's JWKS endpoint. The token's claims include the project path, the ref, the pipeline source, the job identity, and a user identity. Cloud IAM on the other side of the trust relationship (an AWS IAM role, a GCP workload identity pool, an Azure federated credential) validates the signature, checks the issuer, and then inspects the claims via conditions on the trust policy. If the claims match, the cloud provider hands the job a short-lived cloud credential. The whole thing is elegant and removes the need for AWS_ACCESS_KEY_ID in CI variables.
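The GitLab side of this flow is a few lines of CI configuration. A minimal sketch, assuming an AWS target; the audience value and the `AWS_ROLE_ARN` variable are illustrative placeholders:

```yaml
# .gitlab-ci.yml — a job requesting a GitLab-signed ID token.
# GitLab exposes the minted JWT in the named variable (AWS_ID_TOKEN here).
deploy:
  id_tokens:
    AWS_ID_TOKEN:
      aud: https://sts.amazonaws.com   # audience the trust policy will check
  script:
    # Exchange the GitLab JWT for short-lived AWS credentials.
    - >
      aws sts assume-role-with-web-identity
      --role-arn "$AWS_ROLE_ARN"
      --role-session-name "pipeline-${CI_PIPELINE_ID}"
      --web-identity-token "$AWS_ID_TOKEN"
      --duration-seconds 900
```

Everything interesting happens on the other side of that `assume-role-with-web-identity` call, in the trust policy's conditions.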
The weak point is the trust policy. If it checks only the issuer and the project, any job in that project, on any branch, triggered by any pipeline source, receives the cloud credential. The claim fields exist specifically to scope this down, and they are almost always underused. USENIX Security 2025 papers on CI/CD trust boundaries and GitLab's own 2024 hardening documentation converge on the same finding: the default trust relationships shipped by cloud vendors' examples are broader than anyone reading them assumes.
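For concreteness, here is what the over-broad pattern looks like on AWS; the account ID, provider host, and project path are placeholders. GitLab encodes project, ref type, and ref into the structured `sub` claim, so the trailing wildcard matches every ref of every type — any branch, tag, or triggered pipeline in the project can assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::111111111111:oidc-provider/gitlab.example.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringLike": {
        "gitlab.example.com:sub": "project_path:mygroup/myproject:*"
      }
    }
  }]
}
```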
What does a realistic attack look like?
The canonical chain is a branch-based attack. An attacker gains the ability to create a merge request branch in a project, either because the project accepts external contributions or because they have commit access to a fork whose branches can run pipelines. If the target trust policy does not pin ref_type and ref (for example, to only main), a pipeline run on the attacker's branch mints a token with the target audience, and that token assumes the production AWS role. This is the vector Datadog Security Labs documented with the project_path + ref_type pattern and is the single most common real-world finding.
A second vector is pipeline source confusion. GitLab's JWT includes pipeline_source, which can be push, merge_request_event, schedule, web, or trigger. Trust policies that do not pin this field accept tokens from any pipeline source, including trigger API calls from anyone with a pipeline trigger token. Trigger tokens have been found leaked in public logs and old project exports more often than anyone is comfortable with.
A third vector, newer and more interesting, is claim-value forgery through misconfigured proxies or self-hosted GitLab instances. If a self-hosted instance is served through a proxy under a custom OIDC issuer URL that differs from the instance's canonical URL, a trust relationship that accepts the non-canonical issuer becomes ambiguous. Unit 42 documented a self-hosted case where multiple GitLab instances shared a single cloud trust boundary through a convenience condition, and a low-privilege project on one instance minted a token accepted by the trust policy for the other.
Why is the detection story so weak?
Because the attack almost always looks like normal CI behavior from the cloud provider's side. CloudTrail records an AssumeRoleWithWebIdentity call with a federated identity, a token from a legitimate issuer, and a session name. The cloud provider has no visibility into whether the GitLab job that minted the token was run on main or on feature/nefarious. Detection has to happen either at the GitLab side (which pipelines ran and what did they do) or via correlation between GitLab audit logs and CloudTrail. Neither is enabled by default.
The practical detection pattern that works: log every AssumeRoleWithWebIdentity call, correlate the session name (which GitLab sets to pipeline-<id>) against GitLab's pipeline history, and alert when a pipeline that assumed a production role ran on a non-main ref. This is a dozen lines of Athena or BigQuery once the log sources are there, and it catches every attack variant above.
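The correlation query described above can be sketched in Athena SQL. Table and column names here are illustrative assumptions: `cloudtrail_logs` is a standard CloudTrail table and `gitlab_pipelines` is assumed to be an export of GitLab pipeline metadata:

```sql
-- Alert candidates: production roles assumed by pipelines on non-main refs.
-- Joins CloudTrail to pipeline history via the pipeline-<id> session name.
SELECT ct.eventtime,
       ct.role_arn,
       ct.role_session_name,
       gp.ref
FROM cloudtrail_logs ct
JOIN gitlab_pipelines gp
  ON gp.pipeline_id = CAST(
       regexp_extract(ct.role_session_name, 'pipeline-(\d+)', 1) AS BIGINT)
WHERE ct.eventname = 'AssumeRoleWithWebIdentity'
  AND ct.role_arn LIKE '%prod%'        -- scope to production roles
  AND gp.ref <> 'main';                -- ran on something other than main
```

The join key is the whole trick: as long as the session name embeds the pipeline ID, the cloud log and the GitLab log can be stitched together after the fact.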
What are the trust policy patterns that actually defend?
Pin the project by its immutable numeric project ID, not by its path: paths can be transferred or renamed, and a trust policy that checks a path can be inherited by a different project after a rename. Pin ref to the exact production branch, or use a condition on ref_type = tag combined with a protected-tags pattern if deploys happen from tags. Pin pipeline_source to push or merge_request_event depending on the deploy model; never leave it unpinned. And pin aud to a project-specific value rather than a generic cloud provider audience string.
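A hardened AWS version of the earlier example might look like this (account ID, provider host, audience, and project values are placeholders). One AWS-specific caveat: IAM exposes only the token's `aud` and `sub` claims as condition keys, so ref and ref_type are pinned through the exact structured `sub` value GitLab emits; providers that map claims individually, such as Vault (`bound_claims`) or GCP workload identity (attribute conditions), can additionally pin project_id and pipeline_source as separate fields:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::111111111111:oidc-provider/gitlab.example.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "gitlab.example.com:aud": "https://myproject-prod-deploy",
        "gitlab.example.com:sub": "project_path:mygroup/myproject:ref_type:branch:ref:main"
      }
    }
  }]
}
```

Note the switch from `StringLike` to `StringEquals`: there is no wildcard left anywhere in the condition.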
The per-environment pattern is worth implementing. GitLab's environment keyword is part of the token's claims, and a trust policy that requires environment = production forces the pipeline to declare the environment. Combined with GitLab's protected environments feature (which restricts which branches can deploy to which environments), this gives a clean separation between "any project member can run CI" and "only a protected branch on a protected environment can assume the prod role."
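On the GitLab side, the pattern above reduces to a deploy job that declares the environment. A sketch, assuming the paired trust policy requires an `environment` claim of `production`; the job name, audience, and deploy script are placeholders:

```yaml
deploy_prod:
  stage: deploy
  environment:
    name: production            # lands in the token's `environment` claim
  id_tokens:
    DEPLOY_TOKEN:
      aud: https://myproject-prod-deploy   # project-specific audience
  rules:
    - if: $CI_COMMIT_BRANCH == "main"      # belt-and-suspenders with protected envs
  script:
    - ./deploy.sh
```

With `production` configured as a protected environment, a feature-branch pipeline cannot even declare the environment, so it never mints a token that satisfies the trust policy.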
How do self-hosted runners change the calculus?
They expand the attack surface substantially. A self-hosted runner that handles jobs from multiple projects is a shared-tenancy trust boundary, and a compromised job on one project can potentially read secrets or tokens from the runner's filesystem while another job is running. The OIDC token is minted per job, but if the runner is compromised to log tokens, the attacker collects tokens for every job that runs on that runner, including production-role-assuming ones.
The mitigation is dedicated runners per sensitivity tier, ephemeral VMs for each job, and runner configurations that refuse to run jobs from untrusted projects. GitLab's runner tagging plus locked runners is the native implementation; many orgs achieve the same with separate runner fleets per environment.
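A dedicated, locked runner is set up at registration time. A sketch with placeholder URL and token; `--locked` keeps other projects from attaching to the runner, and `--run-untagged=false` means only jobs that explicitly request the tag run on it:

```shell
gitlab-runner register \
  --url https://gitlab.example.com \
  --registration-token "$PROD_RUNNER_TOKEN" \
  --executor docker \
  --docker-image alpine:3 \
  --locked \
  --run-untagged=false \
  --tag-list prod-deploy
```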
What about the npm/pip side of this?
The token-theft story overlaps with package publishing. GitLab projects that publish to npm, PyPI, Maven Central, or the GitLab package registry often use OIDC to authenticate the publish step. If the trust policy on the registry side is misconfigured the same way as on the cloud side, an attacker who runs a pipeline on a feature branch can publish a malicious package version under the project's identity. Sigstore-backed publishing through GitLab OIDC (npm's trusted publishing and PyPI's trusted publisher flow) inherits the exact same ref-pinning requirement, and the default examples in PyPI's documentation pin only the project and the workflow, not the ref.
This is the quiet supply-chain-adjacent variant of the cloud-token-theft class, and it is where the next round of incident reports is going to come from.
How should teams start hardening this today?
Audit every OIDC trust relationship between GitLab and cloud providers, package registries, and Vault. For each, check that it pins the project (by ID), the ref, the ref_type, the pipeline source, and the audience. For anything that deploys to production, also pin the environment. Enable CloudTrail-to-GitLab correlation for AssumeRoleWithWebIdentity events. And periodically rotate the trust relationships: revoke and recreate them from scratch, because a trust policy accumulates exceptions and convenience conditions over time that are hard to remove incrementally.
The longer-term move is to treat the trust policy as code managed in the same repository as the workload it protects, not in a console. When the trust policy is in Terraform or Pulumi next to the deployment, every change runs through the same review as the workload change, and the claims that are pinned are visible in diff.
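As a sketch of the trust-policy-as-code pattern in Terraform, with placeholder account ID, provider host, audience, and project values, the pinned claims from earlier become an ordinary reviewable resource next to the deployment:

```hcl
resource "aws_iam_role" "prod_deploy" {
  name = "prod-deploy"

  # The trust policy lives in the same repo as the workload it protects,
  # so every change to the pinned claims shows up in a merge request diff.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRoleWithWebIdentity"
      Principal = {
        Federated = "arn:aws:iam::111111111111:oidc-provider/gitlab.example.com"
      }
      Condition = {
        StringEquals = {
          "gitlab.example.com:aud" = "https://myproject-prod-deploy"
          "gitlab.example.com:sub" = "project_path:mygroup/myproject:ref_type:branch:ref:main"
        }
      }
    }]
  })
}
```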
How Safeguard.sh Helps
Safeguard.sh inventories every OIDC trust relationship across your GitLab, GitHub, and cloud accounts, and flags the specific claim-pinning weaknesses that make ref-based token theft work. Griffin AI reviews merge requests that touch CI configurations for new OIDC token uses and checks the paired trust policy for missing ref, pipeline_source, and environment conditions before the configuration merges. Eagle correlates AssumeRoleWithWebIdentity events against GitLab pipeline metadata continuously, so a production role assumed from a feature branch is an alert within minutes. Guardrails block merge requests that introduce OIDC configurations without matching trust-policy hardening, and 100-level scanning of the pipeline supply chain catches the trigger-token and runner-image cases where the attack vector lives two layers deep in the dependency tree.