Google Chronicle, now formally branded as Google Security Operations, has an unusual architecture among SIEMs. It stores everything in a petabyte-scale backend with a flat per-employee pricing model and searches across years of data at speeds that make Splunk queries look sluggish. The trade-off is a less mature detection authoring experience than some competitors, and the UDM (Unified Data Model) schema takes some getting used to. For teams already invested in Google Cloud, though, Chronicle's ability to query years of telemetry in seconds is compelling.
I have been working with Chronicle specifically for supply chain detection content at two large enterprises over the past year. The approach is different enough from Splunk and Elastic that it deserves its own writeup.
Understanding UDM for Supply Chain Data
UDM is Chronicle's schema. Everything gets normalized into UDM events at ingestion time through parsers that Google maintains for common sources and that you can author for custom ones. Core UDM event types like USER_LOGIN, PROCESS_LAUNCH, and NETWORK_CONNECTION map cleanly to endpoint and network telemetry, but supply chain data does not fit as neatly.
For GitHub audit logs I mapped events to UDM like this:
- workflow_run events become PROCESS_LAUNCH with principal.process.command_line containing the workflow name
- git.push events become USER_RESOURCE_UPDATE_CONTENT with the target resource being the repository
- repository.file_change events for workflow files become a custom event type, RESOURCE_WRITTEN, with labels indicating the workflow path
The labels field in UDM is flexible key-value metadata, and I lean on it heavily for supply chain context. Labels like package_name, package_version, build_id, and attestation_status let me write YARA-L rules that reference supply chain attributes without forcing them into UDM's core fields where they do not belong.
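To make the label convention concrete, here is roughly what a normalized install event might look like after parsing. The field values are hypothetical; only the label keys match the convention described above:

```json
{
  "metadata": { "event_type": "PROCESS_LAUNCH", "product_name": "npm" },
  "principal": { "hostname": "dev-laptop-042" },
  "target": {
    "labels": [
      { "key": "package_name", "value": "left-pad" },
      { "key": "package_version", "value": "1.3.0" },
      { "key": "build_id", "value": "ci-20240115-7" },
      { "key": "attestation_status", "value": "unverified" }
    ]
  }
}
```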
YARA-L 2.0 Basics
Chronicle's detection language is YARA-L, loosely inspired by the YARA signature language but adapted for event sequences. A typical rule has a meta block, an events block defining event predicates and joins, a match block defining the time window and grouping, an outcome block producing alert metadata, and a condition block stating which events must be present for the rule to fire.
rule supply_chain_unsigned_commit_protected_branch {
  meta:
    author = "security-team"
    severity = "High"
    mitre_tactic = "TA0001"
    mitre_technique = "T1195"

  events:
    $push.metadata.event_type = "USER_RESOURCE_UPDATE_CONTENT"
    $push.metadata.product_name = "GitHub"
    $push.target.labels["branch"] = /^(main|master|release\/.*)$/
    $push.security_result.detection_fields["signed"] = "false"
    $push.principal.user.email = $user

  match:
    $user over 1h

  outcome:
    $risk_score = 75
    $affected_repo = array_distinct($push.target.resource.name)

  condition:
    $push
}
This rule fires when a user pushes to a protected branch with an unsigned commit. The match block groups by user over a one-hour window, so a single user making multiple unsigned pushes generates one alert rather than a stream.
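The deduplication behavior of the match window is worth internalizing. Here is a minimal Python re-implementation of the match: $user over 1h grouping; the function and event shape are my own illustration, not a Chronicle API:

```python
from collections import defaultdict

def group_alerts(events, window_seconds=3600):
    """Collapse per-user events into one alert per time window,
    mimicking YARA-L's `match: $user over 1h` grouping."""
    buckets = defaultdict(list)  # user -> list of event groups
    for ev in sorted(events, key=lambda e: e["ts"]):
        groups = buckets[ev["user"]]
        # Join the current group if this event falls inside the window
        # anchored at the group's first event; otherwise start a new one.
        if groups and ev["ts"] - groups[-1][0]["ts"] < window_seconds:
            groups[-1].append(ev)
        else:
            groups.append([ev])
    return {user: len(groups) for user, groups in buckets.items()}
```

Three unsigned pushes by the same user within the hour collapse into a single group, so the analyst sees one alert instead of three.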
Detection: Sequential Package Install and External Connection
Sequence-based detection is where YARA-L gets interesting. The pattern catches a package install on a developer machine followed by an unexpected outbound connection:
rule supply_chain_package_install_then_exfil {
  meta:
    author = "security-team"
    severity = "Critical"

  events:
    $install.metadata.event_type = "PROCESS_LAUNCH"
    $install.principal.process.command_line = /npm install.*/ nocase
    $install.principal.hostname = $host
    $install.principal.user.userid = $user
    $install.metadata.event_timestamp.seconds = $install_time

    $connection.metadata.event_type = "NETWORK_CONNECTION"
    $connection.principal.hostname = $host
    not net.ip_in_range_cidr($connection.target.ip, "10.0.0.0/8")
    not net.ip_in_range_cidr($connection.target.ip, "172.16.0.0/12")
    $connection.metadata.event_timestamp.seconds = $conn_time

    $conn_time > $install_time
    $conn_time < $install_time + 60

  match:
    $host, $user over 2m

  outcome:
    $risk_score = 90
    $suspicious_destination = array_distinct($connection.target.ip)

  condition:
    $install and $connection
}
The critical piece is the timestamp arithmetic binding the connection to within 60 seconds of the install. Chronicle executes this across the full retention period (one year by default), which means you can detect installs that phoned home weeks ago when a new IOC arrives.
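The same join is easy to sanity-check outside Chronicle. This Python sketch (my own helper, with a made-up event shape) pairs installs with connections from the same host inside the window, using the standard library's is_private check as a slightly broader stand-in for the rule's two CIDR exclusions:

```python
import ipaddress

def correlate(installs, connections, window=60):
    """Pair each package install with outbound connections from the same
    host to a non-private IP within `window` seconds after the install --
    the join the YARA-L rule expresses with timestamp arithmetic."""
    hits = []
    for inst in installs:
        for conn in connections:
            if conn["host"] != inst["host"]:
                continue
            # Mirrors (and slightly broadens) the 10.0.0.0/8 and
            # 172.16.0.0/12 exclusions in the rule.
            if ipaddress.ip_address(conn["dst_ip"]).is_private:
                continue
            if inst["ts"] < conn["ts"] < inst["ts"] + window:
                hits.append((inst["host"], conn["dst_ip"]))
    return hits
```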
Detection: Package Publisher Change
For this I maintain a reference list of expected package publishers in a data table. Chronicle's Context Enrichment lets rules reference external tables at match time:
rule supply_chain_publisher_change {
  meta:
    severity = "High"

  events:
    $install.metadata.event_type = "PROCESS_LAUNCH"
    $install.principal.process.command_line = /npm install/ nocase
    $install.security_result.detection_fields["package_name"] = $pkg
    $install.security_result.detection_fields["publisher"] = $pub

  match:
    $pkg, $pub over 10m

  outcome:
    $is_expected = if($pub in %expected_publishers[$pkg], "yes", "no")

  condition:
    $install and $is_expected = "no"
}
The %expected_publishers table is populated from npm view <package> maintainers output for each internally used package. When a package suddenly installs under a different publisher, the rule fires.
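Populating the table can be scripted. The sketch below parses the output of npm view <package> maintainers --json, which I am assuming comes back as a JSON array of "name <email>" strings (a lone maintainer may come back as a bare string; check your npm version's output shape before relying on this):

```python
import json
import re

def parse_maintainers(npm_json: str) -> set:
    """Extract publisher usernames from `npm view <pkg> maintainers --json`
    output, assumed to be a JSON array of "name <email>" strings."""
    entries = json.loads(npm_json)
    if isinstance(entries, str):
        entries = [entries]  # single maintainer comes back unwrapped
    # Keep only the leading username token from each entry.
    return {re.match(r"\S+", e).group(0) for e in entries}
```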
Detection: Build Artifact Divergence
When two builds of the same commit produce different artifact digests, that is a strong signal. The rule joins build events with artifact upload events:
rule supply_chain_divergent_build_output {
  meta:
    severity = "Critical"

  events:
    $b1.metadata.event_type = "PROCESS_LAUNCH"
    $b1.principal.process.command_line = /build/ nocase
    $b1.security_result.detection_fields["commit_sha"] = $sha
    $b1.security_result.detection_fields["artifact_digest"] = $digest1

    $b2.metadata.event_type = "PROCESS_LAUNCH"
    $b2.principal.process.command_line = /build/ nocase
    $b2.security_result.detection_fields["commit_sha"] = $sha
    $b2.security_result.detection_fields["artifact_digest"] = $digest2

    $digest1 != $digest2

  match:
    $sha over 24h

  condition:
    $b1 and $b2
}
The same commit SHA producing two different digests within 24 hours is either non-determinism (a bug worth fixing) or tampering (a security incident worth investigating).
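The same check is worth running offline against build-system exports, for example during an incident retrospective. A minimal Python equivalent, with a hypothetical event shape:

```python
from collections import defaultdict

def divergent_commits(builds):
    """Return commit SHAs whose builds produced more than one distinct
    artifact digest -- the condition the YARA-L rule alerts on."""
    digests = defaultdict(set)
    for b in builds:
        digests[b["commit_sha"]].add(b["artifact_digest"])
    return {sha for sha, seen in digests.items() if len(seen) > 1}
```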
UDM Search for Threat Hunting
Beyond scheduled detection rules, Chronicle's UDM Search is useful for hunting. When a new IOC arrives, I can run a search like:
metadata.event_type = "NETWORK_CONNECTION"
target.ip = "185.234.218.42"
principal.hostname != ""
Chronicle runs this across the full retention window and returns results in seconds even when searching petabytes of data. For supply chain threats where the IOC might be a package registry URL or a known malicious maintainer, this speed changes how you hunt.
Entity Graph for Context
Chronicle builds an entity graph automatically, linking users, assets, IPs, and domains. For supply chain investigation, this means I can pivot from a single suspicious package install to all users who installed that package, all hosts where it landed, all build artifacts that embedded it, and all external destinations those hosts contacted.
The pivot UI in Chronicle Search makes this fast. Click a package name, see every associated event across the year of retention, then filter to the last 30 days to scope the incident.
Data Volume and Cost
Chronicle's per-employee pricing means ingestion volume does not directly affect cost, which changes what you are willing to ingest. I routinely push every GitHub audit event, every Artifactory access log line, and every Jenkins build log into Chronicle without worrying about per-GB costs. That density sharpens detections: with more sources available, rules can correlate across them instead of alerting on single weak signals, which cuts false positives.
Parser Development
Getting custom data into UDM requires parser development. Chronicle parsers use a Logstash-style configuration syntax: Grok patterns for extraction plus mutation logic that maps the extracted fields into UDM. Writing a good parser for Artifactory access logs took me about two days of iteration, mostly because the timestamp and user fields vary depending on request type. Once the parser is deployed, events flow into UDM consistently, and detection rules can reference them reliably.
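Before writing the actual parser, I prototype the extraction in Python. The sketch below assumes a simplified Artifactory access-log shape (real logs vary by version and request type, which is where the iteration time goes) and maps matches into a UDM-like dict:

```python
import re

# Assumed log shape: "<timestamp> [ACTION] <path> for client : <user> / <ip>"
# -- an illustration, not the exact Artifactory format.
LINE_RE = re.compile(
    r"(?P<ts>\S+) \[(?P<action>[A-Z ]+)\] (?P<path>\S+) "
    r"for client : (?P<user>\S+) / (?P<ip>\S+)"
)

def to_udm(line: str):
    """Return a UDM-shaped dict for a matching line, else None."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return {
        "metadata": {"event_type": "RESOURCE_READ", "product_name": "Artifactory"},
        "principal": {"user": {"userid": m["user"]}, "ip": m["ip"]},
        "target": {"resource": {"name": m["path"]}},
    }
```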
How Safeguard Helps
Safeguard ships a Chronicle integration that forwards supply chain findings and component risk data as UDM events with consistent label schemas. This means your YARA-L rules can reference Safeguard enrichment using the same label structure shown throughout this post without writing custom parsers. When a Chronicle detection fires on a package-related pattern, the corresponding Safeguard finding provides maintainer context, vulnerability exploitability, and remediation guidance. Teams using Safeguard with Chronicle typically reduce mean time to triage because the cross-referencing between runtime events and upstream supply chain intelligence happens automatically in the alert payload.