Microsoft Sentinel (Azure Sentinel, if you're still on the older naming) is a cloud-native SIEM built on top of Log Analytics, and in 2024 it is one of the few SIEMs with a defensible story for supply chain detection in Azure specifically. The data sources are the ones from the Azure Monitor post, the correlation primitive is KQL (the Kusto Query Language), and the response primitives are Logic Apps playbooks. The gap between "Sentinel deployed" and "Sentinel detecting supply chain attacks" is the analytics rule library, and the default library — even the Microsoft-published solutions — does not cover the supply chain attack patterns I actually see in incident response.
This post is an industry-oriented look at Sentinel's role in supply chain detection: which attack patterns to build rules for, how those rules are constructed, and the integration story for response. The goal is that a security engineering team with an existing Sentinel deployment can walk away with a concrete list of rules to build.
What Supply Chain Attacks in Azure Actually Look Like
Before the rules, the attack patterns. Based on the incidents I have worked and the ones that have been written up publicly (including the SolarWinds post-incident disclosures, the 2022 Azure DevOps extension incidents, and the 2023 storage account enumeration campaigns), supply chain attacks in Azure tend to cluster around a few patterns:
- Compromised CI/CD identity pushes a malicious artifact — a container image, a function package, a Bicep module — to a registry, and downstream consumers pull it.
- Marketplace extension abuse — a malicious Azure DevOps marketplace extension, installed by a project administrator, exfiltrates pipeline secrets or modifies pipeline definitions.
- Privilege escalation through role assignments — an attacker with an initial foothold (a leaked service principal secret, a phished admin) creates new role assignments to persist and expand access.
- Image tag tampering — an attacker with AcrPush re-tags a malicious image under a legitimate tag and waits for the next pull.
- Dependency confusion in Azure Artifacts — an internal package name is registered on a public feed with a higher version, and the build tooling pulls the public one.
Each of these has a specific detection signature in the Azure telemetry. The Sentinel analytics rules below target those signatures directly.
Analytics Rule 1: Unexpected Image Push to Production Registry
The attack: a compromised CI/CD service principal pushes a tampered image to a production-tier ACR. The signature: a Push operation in ContainerRegistryRepositoryEvents from an identity or IP not in the expected set.
let ProdRegistries = dynamic(["prodacr-eastus", "prodacr-westeurope"]);
let KnownBuildIdentities = dynamic(["<sp-id-1>", "<sp-id-2>"]);
ContainerRegistryRepositoryEvents
| where TimeGenerated > ago(1h)
| where OperationName == "Push"
| where Resource in~ (ProdRegistries)
| where not(Identity in (KnownBuildIdentities))
| project TimeGenerated, Identity, Repository, Tag, CallerIpAddress, Resource
The rule should run every 5 minutes with a 15-minute lookback to catch pushes promptly; the query's ago(1h) filter is then redundant but harmless, since the rule's query period already bounds the data it sees. The severity is medium by default, elevated to high if the repository path matches a list of production-critical images (auth services, data pipeline services).
False positives come from break-glass deploys by human administrators. The fix is to maintain the KnownBuildIdentities list as a Sentinel watchlist that security engineering updates when an approved break-glass deploy is expected, with an auto-expiry of the addition.
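The watchlist approach can be wired directly into the query with Sentinel's _GetWatchlist() function. A sketch, assuming a watchlist aliased KnownBuildIdentities whose SearchKey column holds the approved service principal IDs (both names are assumptions — match them to however the watchlist is actually defined):

```kusto
// Same detection as above, but sourcing the allowed identities from a
// Sentinel watchlist instead of a hard-coded dynamic list, so security
// engineering can update it without editing the rule.
let KnownBuildIdentities = _GetWatchlist('KnownBuildIdentities') | project SearchKey;
let ProdRegistries = dynamic(["prodacr-eastus", "prodacr-westeurope"]);
ContainerRegistryRepositoryEvents
| where TimeGenerated > ago(1h)
| where OperationName == "Push"
| where Resource in~ (ProdRegistries)
| where Identity !in (KnownBuildIdentities)
| project TimeGenerated, Identity, Repository, Tag, CallerIpAddress, Resource
```

The auto-expiry for break-glass additions lives in the watchlist maintenance process, not the query.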
Analytics Rule 2: Role Assignment Creation Outside of IaC
The attack: an attacker creates role assignments to persist. The signature: a Microsoft.Authorization/roleAssignments/write operation from an identity that is not the organization's IaC service principal.
let IacIdentities = dynamic(["<iac-sp-id-1>", "<iac-sp-id-2>"]);
AzureActivity
| where TimeGenerated > ago(1h)
| where OperationNameValue == "Microsoft.Authorization/roleAssignments/write"
| where ActivityStatusValue == "Success"
| where not(Caller in (IacIdentities))
| extend assignmentId = tostring(parse_json(Properties).entity)
| project TimeGenerated, Caller, CallerIpAddress, _ResourceId, assignmentId
This rule is noisier than rule 1 because legitimate role assignment changes happen — onboarding a new engineer, rotating an automation identity. The tuning is a Sentinel watchlist of approved human administrators, updated from the HR system on joiner/leaver events, and an auto-closed incident for known administrative sessions. The remaining signal — role assignments from non-administrators — is consistently high-value.
A companion rule targets role assignments to newly-seen principals. A role assignment to a service principal that has never been seen in the tenant's sign-in logs before is a strong indicator of attacker-created persistence.
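A minimal sketch of that companion rule, assuming the Entra ID service principal sign-in table is connected. The JSON path used to pull the assignee out of Properties varies by event shape and is an assumption to validate against real Activity records:

```kusto
// Companion rule sketch: role assignments granted to principals with no
// sign-in history in the last 30 days -- a strong persistence indicator.
let KnownPrincipals = AADServicePrincipalSignInLogs
    | where TimeGenerated > ago(30d)
    | distinct ServicePrincipalId;
AzureActivity
| where TimeGenerated > ago(1h)
| where OperationNameValue == "Microsoft.Authorization/roleAssignments/write"
| where ActivityStatusValue == "Success"
// Assumed extraction path -- verify against your own Activity log records.
| extend assignee = tostring(parse_json(tostring(parse_json(Properties).requestbody)).Properties.PrincipalId)
| where isnotempty(assignee) and assignee !in (KnownPrincipals)
| project TimeGenerated, Caller, assignee, _ResourceId
```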
Analytics Rule 3: Service Principal Sign-In from Impossible Travel
The attack: a leaked service principal secret is used from an attacker's infrastructure. The signature: two sign-ins from geographically distant IPs within a short window.
Microsoft publishes an "Impossible travel" rule for users in the standard solution, but the service principal equivalent is often missing. The rule:
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(1h)
| where ResultType == "0"
| extend country = tostring(LocationDetails.countryOrRegion)
| summarize locations = make_set(country), ips = make_set(IPAddress) by ServicePrincipalId, bin(TimeGenerated, 1h)
| where array_length(locations) > 1
| where array_length(ips) > 1
This catches the "same service principal, two countries in one hour" case. It does produce noise for service principals with legitimate distributed use (a global pipeline hitting both US and EU regions), so it needs either an allowlist of distributed SPs or a secondary filter on the target resource.
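The allowlist variant can reuse the watchlist pattern. A sketch, assuming a watchlist aliased DistributedSPs (an assumed name) listing the service principals with legitimate multi-region use in its SearchKey column:

```kusto
// Same impossible-travel logic, with known-distributed service principals
// excluded up front so they never reach the summarize step.
let DistributedSPs = _GetWatchlist('DistributedSPs') | project SearchKey;
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(1h)
| where ResultType == "0"
| where ServicePrincipalId !in (DistributedSPs)
| extend country = tostring(LocationDetails.countryOrRegion)
| summarize locations = make_set(country), ips = make_set(IPAddress)
    by ServicePrincipalId, bin(TimeGenerated, 1h)
| where array_length(locations) > 1 and array_length(ips) > 1
```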
Analytics Rule 4: Azure DevOps Extension Installation
The attack: a malicious Azure DevOps marketplace extension is installed, and it exfiltrates pipeline secrets on first run. The signature: an extension install audit event for an extension not on the organization's trust list.
Azure DevOps audit streaming to Sentinel landed in 2022 (through the Azure DevOps data connector). The relevant event is Extension.Installed, filtered against a trust list:
let TrustedExtensions = dynamic(["ms.vss-services-azureserviceconnection", "<other-trusted-ids>"]);
AzureDevOpsAuditing
| where TimeGenerated > ago(1h)
| where OperationName == "Extension.Installed"
| extend extensionId = tostring(Data.ExtensionId)
| where not(extensionId in (TrustedExtensions))
| project TimeGenerated, ActorDisplayName, extensionId, ProjectName
The TrustedExtensions watchlist is organizationally maintained, and installations outside it should require security review. For organizations with extension installation disabled at the organization level (the right config), this rule becomes a defense-in-depth double-check rather than the primary control.
Analytics Rule 5: Key Vault Secret Access Anomaly
The attack: a compromised identity reads secrets from Key Vault that it has never read before. The signature: a SecretGet event from an identity, with the secret name outside the identity's historical pattern.
let LookbackPeriod = 30d;
let DetectionWindow = 1h;
let HistoricalAccess = materialize(
AzureDiagnostics
| where TimeGenerated between (ago(LookbackPeriod) .. ago(DetectionWindow))
| where ResourceType == "VAULTS"
| where OperationName == "SecretGet"
| summarize seenSecrets = make_set(id_s) by identity_claim_appid_g
);
AzureDiagnostics
| where TimeGenerated > ago(DetectionWindow)
| where ResourceType == "VAULTS" and OperationName == "SecretGet"
| join kind=leftouter HistoricalAccess on identity_claim_appid_g
| where isnull(seenSecrets) or not(set_has_element(seenSecrets, id_s))
| project TimeGenerated, identity_claim_appid_g, id_s, CallerIPAddress
This rule is highest-value for break-glass credentials — secrets that should rarely or never be read. Pair it with Key Vault tags (access_frequency: rare) and raise the severity on the tagged secrets.
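Key Vault tags are not carried into AzureDiagnostics, so an in-query severity boost has to key off something the log itself contains — a naming convention is the simplest option. A sketch, using a hypothetical breakglass- prefix on break-glass secret names:

```kusto
// Flag any read of a secret whose name follows a break-glass naming
// convention. The "breakglass-" prefix is an assumed convention, not a
// Key Vault feature -- substitute your own.
AzureDiagnostics
| where TimeGenerated > ago(1h)
| where ResourceType == "VAULTS" and OperationName == "SecretGet"
| where id_s contains "breakglass-"
| project TimeGenerated, identity_claim_appid_g, id_s, CallerIPAddress
```

The tag-based approach still works for routing and enrichment in the playbook layer, where the vault's resource tags are queryable.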
Automated Response Through Playbooks
Detection without response is paperwork. Sentinel's response primitive is Logic Apps playbooks, triggered from analytics rules through the incident-created hook. The playbooks that pair with the rules above:
- For Rule 1 (unexpected registry push): tag the image as quarantined, remove pull permissions on the image for the cluster's pull identity, notify the owning team's Microsoft Teams channel.
- For Rule 2 (rogue role assignment): capture the role assignment details, notify the SOC, and, if the caller is a service principal, temporarily disable it pending review.
- For Rule 3 (SP impossible travel): rotate the service principal's credentials, notify the owning team.
- For Rule 4 (unapproved extension): uninstall the extension, preserve the install audit, notify the project administrator.
- For Rule 5 (anomalous Key Vault access): soft-disable the accessed secret, notify the vault owner.
Each of these is a few hundred lines of Logic App workflow. The automation is worth the build time because the time-to-contain for supply chain attacks is the variable that determines incident cost.
Integration With Defender for Cloud
Microsoft Defender for Cloud (MDC) has its own supply chain signals — CSPM recommendations, container scanning, cloud workload alerts. These flow into Sentinel through the MDC data connector as SecurityAlert entries. For supply chain detection, the alerts that matter are:
- Defender for Containers findings on images in runtime clusters.
- Defender for DevOps findings on repositories connected through the integration.
- Defender CSPM's attack path analysis, which surfaces the sequences of misconfiguration that produce the highest blast radius.
Treat MDC as a signal source into Sentinel, correlate its alerts with the rules above, and you have a layered detection that catches both the configuration-drift patterns MDC is good at and the behavioral patterns custom analytics rules are good at.
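A starting point for pulling the MDC container alerts into the same workspace queries. The ProductName value and the keyword filters are assumptions to tune against the alerts your connector actually delivers:

```kusto
// Surface Defender for Cloud container/image alerts for triage alongside
// the registry-push detections from Rule 1.
SecurityAlert
| where TimeGenerated > ago(1d)
| where ProductName == "Azure Security Center"
| where AlertName has "container" or AlertName has "image"
| project TimeGenerated, AlertName, AlertSeverity, CompromisedEntity, Entities
```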
How Safeguard Helps
Safeguard integrates with Sentinel as both a signal source (findings from SBOM analysis, SCM scans, and package reputation feeds appear as Sentinel incidents) and a context provider (when a Sentinel analytics rule fires, Safeguard enriches the incident with the SBOM lineage, affected products, and blast radius for the involved resource). The analytics rules in this post ship as a Safeguard-published Sentinel solution, tuned against the customer base's actual telemetry rather than a laboratory environment. The detection layer becomes continuous and product-aware instead of a quarterly rule-building exercise.