Most Splunk deployments I walk into have thousands of detections for endpoint malware, cloud misconfigurations, and identity threats, but almost nothing that specifically targets software supply chain attacks. The irony is that Splunk already ingests most of the data you would need: GitHub audit logs, GitLab webhook events, Jenkins build logs, Artifactory access logs, and npm registry metadata. The raw material is there. What is missing is the detection content that connects the dots.
Over the past year I have been assembling a Splunk content pack focused specifically on supply chain threats, and the exercise has been illuminating. Some detections are obvious in retrospect. Others only emerged after walking through real incidents like the XZ Utils backdoor, the SolarWinds build compromise, and the ongoing wave of npm typosquatting campaigns. This post walks through the categories of detections that belong in any serious supply chain content pack and shows concrete SPL examples you can adapt.
Why Existing Splunk Content Does Not Cover This
Splunk Enterprise Security ships with a respectable detection library, and the ESCU (Enterprise Security Content Update) repository adds more. But when I actually searched the ESCU catalog for supply chain content, the results were thin. There are detections for specific CVEs like Log4Shell and a handful of rules covering GitHub suspicious activity, but nothing that treats the build pipeline as a first-class attack surface.
That gap reflects a broader industry pattern. Detection engineers focus where the data is loudest and the MITRE ATT&CK mappings are clearest. Supply chain attacks blur those boundaries. A compromised npm package that ships credential-stealing code spans initial access, execution, and exfiltration all at once, and the telemetry lives in developer systems that security teams often do not even own.
Data Sources the Pack Depends On
Before any detection can fire, the data has to be in Splunk. The content pack I have been building assumes the following indexes are populated:
index=github_audit for GitHub Enterprise audit events, including workflow runs, secret scanning alerts, and repository permission changes
index=gitlab_audit for equivalent GitLab instance-level and group-level events
index=jenkins for build logs, including BUILD_URL, executed shell steps, and plugin update events
index=artifactory for package download and upload events from JFrog
index=npm_registry for npm package install telemetry collected from developer endpoints via an HTTP Event Collector wrapper around npm
index=edr for endpoint process telemetry covering developer workstations and build agents
Getting these feeds plumbed in is half the battle. The GitHub audit log integration via the Splunk Add-on for GitHub works reasonably well once you generate a fine-grained PAT with the right scopes. Jenkins requires the Splunk plugin configured to forward build logs to HEC. Artifactory needs request tracing enabled and logs shipped via the universal forwarder.
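Before turning on any detections, a quick sanity check confirms each feed is actually flowing. This is a generic sketch using tstats; swap in whatever index names your environment actually uses:
| tstats count max(_time) as last_event where index=github_audit OR index=gitlab_audit OR index=jenkins OR index=artifactory OR index=npm_registry OR index=edr by index, sourcetype
| eval hours_since_last=round((now() - last_event) / 3600, 1)
Anything with a stale last_event or a missing sourcetype means a detection downstream will silently return nothing.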
Detection One: Unusual Package Install on a Build Agent
Build agents should install a predictable set of packages. When a Jenkins agent suddenly pulls down a package it has never installed before, that is worth investigating, especially if the package was published in the last 48 hours. The SPL:
index=jenkins sourcetype=jenkins_build_log "npm install"
| rex "npm install (?<package_spec>[^\s]+)"
| lookup new_npm_packages package AS package_spec OUTPUT published_date
| where published_date > relative_time(now(), "-48h")
| stats count by build_id, package_spec, published_date
The new_npm_packages lookup is refreshed hourly from the npm registry replication stream. Any match fires an alert that includes the build ID, package name, and publish timestamp. In testing this caught three typosquatting attempts against internal projects over a six-month period, including one package named @types/nodee that wrapped a crypto miner.
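The refresh itself is just a scheduled search ending in outputlookup. The sketch below assumes the registry change feed has already been ingested into a hypothetical index=npm_registry_changes with package_name and published_date fields; adjust to however you land that data:
index=npm_registry_changes earliest=-2h
| stats latest(published_date) as published_date by package_name
| rename package_name as package
| inputlookup append=true new_npm_packages
| dedup package
| outputlookup new_npm_packages
Appending the existing lookup and deduping on package keeps duplicates out while letting fresh publish dates win.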
Detection Two: GitHub Actions Workflow Tampering
GitHub Actions workflows pinned to mutable tags are a persistent risk. When a workflow file changes and a tag like @v3 is modified to @v3-beta-fork or pointed at an arbitrary commit, something is wrong. The SPL correlates two audit events:
index=github_audit action=workflow_run.completed
| join actor [search index=github_audit action=repository.file_change path=".github/workflows/*"]
| where actor!="dependabot[bot]"
| stats values(path) as changed_files, values(workflow_name) as workflows by actor, repository
Pairing workflow file modifications with subsequent workflow runs from the same actor surfaces the exact pattern a malicious insider or compromised developer account would generate.
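join also has subsearch limits that can silently drop rows on busy GitHub orgs, so the same correlation can be expressed as a single stats pass over both event types. A rough join-free equivalent:
index=github_audit (action=workflow_run.completed OR (action=repository.file_change path=".github/workflows/*"))
| eval event_class=if(action=="repository.file_change", "workflow_edit", "workflow_run")
| where actor!="dependabot[bot]"
| stats values(path) as changed_files, values(workflow_name) as workflows, dc(event_class) as event_classes by actor, repository
| where event_classes=2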
Detection Three: Build Output Divergence
If the same source commit produces different container image digests across two builds, either the build is non-deterministic (which is its own problem) or something has tampered with the output. I maintain a build_provenance KV store that maps commit SHA to produced image digests. When a new build comes in:
index=artifactory sourcetype=artifactory_events event_type=artifact_deployed path=*docker*
| rex field=path "(?<image_name>[^/]+):(?<tag>[^/]+)"
| eval commit_sha=substr(tag, 1, 7)
| lookup build_provenance commit_sha OUTPUT known_digest
| where isnotnull(known_digest) AND digest!=known_digest
This detection has a low firing rate but high signal. When it does fire, it means the build pipeline produced something unexpected from a commit that had already been built, which is either a serious bug or an attack.
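The build_provenance KV store is populated by a companion scheduled search that records the first digest seen for each commit. A sketch, assuming build_provenance is defined as a KV store backed lookup and the deployment event carries a digest field:
index=artifactory sourcetype=artifactory_events event_type=artifact_deployed path=*docker*
| rex field=path "(?<image_name>[^/]+):(?<tag>[^/]+)"
| eval commit_sha=substr(tag, 1, 7), known_digest=digest
| dedup commit_sha
| table commit_sha, image_name, known_digest
| outputlookup build_provenance append=true
In practice you also want to avoid re-inserting commits that are already in the collection, for example by filtering against inputlookup build_provenance before the outputlookup.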
Detection Four: Secret Access Outside Pipeline Context
Build-time secrets should only be accessed by pipeline jobs. If someone pulls a GitHub Actions secret or a Jenkins credential outside a running build, that is suspicious. The query correlates credential access events with active build windows:
index=jenkins sourcetype=jenkins_credential_access
| lookup active_builds build_time_window AS _time OUTPUT is_active_build
| where is_active_build="false" OR isnull(is_active_build)
| stats count by credential_id, accessor, src_ip
I pair this with a dashboard panel that shows credential access timelines so analysts can visually inspect outlier events.
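One wiring detail worth calling out: for the lookup above to behave like a window comparison rather than an exact match on _time, active_builds has to be defined as a time-bounded lookup. A minimal transforms.conf sketch, assuming the window start is stored as epoch seconds and builds are capped at an hour:
[active_builds]
filename = active_builds.csv
time_field = build_time_window
time_format = %s
max_offset_secs = 3600
min_offset_secs = 0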
Detection Five: Unsigned Commit to Protected Branch
For organizations that require signed commits, an unsigned commit making it to a protected branch means either policy was misconfigured or someone bypassed controls. The SPL:
index=github_audit action=git.push branch IN (main, master, release/*)
| where signed!="true" OR isnull(signed)
| stats count by actor, repository, branch, commit_sha
This is simple but easy to forget. Protected branch policies can be weakened without anyone noticing, and this detection turns policy drift into an alert.
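To catch the drift itself and not just its consequences, I also watch the audit log for protected branch changes. The action prefix below is what I have seen in GitHub Enterprise audit events, but verify it against your own data before relying on it:
index=github_audit action=protected_branch.*
| stats count values(action) as actions by actor, repository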
Packaging It as a Content App
Splunk apps live in $SPLUNK_HOME/etc/apps with a specific directory structure. The content pack I have been assembling includes savedsearches.conf with the detections above, macros.conf with reusable CIM mappings, and a dashboards directory with a supply chain overview panel. Shipping the whole thing as a single .spl file lets other teams install it with one command.
The MITRE ATT&CK mappings matter for reporting. I tag every detection with the Supply Chain Compromise technique (T1195) and its relevant sub-techniques, such as Compromise Software Dependencies and Development Tools (T1195.001). This makes it easy for the security program manager to show board-level coverage metrics that previously did not exist for the supply chain attack surface.
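For reference, each detection ends up as a stanza in savedsearches.conf. A trimmed sketch for detection five, assuming Enterprise Security is installed for the notable action and the annotation format:
[Supply Chain - Unsigned Commit to Protected Branch]
search = index=github_audit action=git.push branch IN (main, master, release/*) \
| where signed!="true" OR isnull(signed) \
| stats count by actor, repository, branch, commit_sha
cron_schedule = */30 * * * *
enableSched = 1
dispatch.earliest_time = -1h
dispatch.latest_time = now
action.notable = 1
action.correlationsearch.enabled = 1
action.correlationsearch.label = Supply Chain - Unsigned Commit to Protected Branch
action.correlationsearch.annotations = {"mitre_attack": ["T1195", "T1195.001"]}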
Tuning and False Positives
The biggest tuning challenge is the new package install detection. Legitimate teams do onboard new dependencies, and fresh packages from trusted maintainers are common. I eventually added an allowlist of trusted scopes (@types/, @angular/, @microsoft/) and a publisher reputation score derived from package download counts and maintainer tenure. That reduced false positives by about 80 percent without hiding genuinely suspicious packages.
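Mechanically, the allowlist and the reputation score are just extra pipeline stages appended to detection one. A sketch, with the scope regex and the score threshold as placeholders to tune, and publisher_reputation standing in for however you store the score:
| where NOT match(package_spec, "^@(types|angular|microsoft)/")
| lookup publisher_reputation package AS package_spec OUTPUT reputation_score
| where isnull(reputation_score) OR reputation_score < 50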
Build provenance detection had its own issues. Non-deterministic builds are more common than most teams realize, especially when builds embed timestamps in binaries. I ended up excluding specific file patterns from digest comparison and focusing on the layer hashes that should be stable.
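For the layer-level comparison, the approach that stuck was canonicalizing the layer digests into a single sorted string and diffing that instead of the manifest digest. This assumes the deployment event exposes a multivalue layer_digests field, which may take extra parsing on the Artifactory side, and a known_layer_set column added to build_provenance:
| eval layer_set=mvjoin(mvsort(layer_digests), ",")
| lookup build_provenance commit_sha OUTPUT known_layer_set
| where isnotnull(known_layer_set) AND layer_set!=known_layer_set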
How Safeguard Helps
Safeguard feeds Splunk the supply chain context that makes these detections possible without forcing your team to build its own package reputation data, new-publisher feeds, and provenance graphs from scratch. We stream findings, component risk scores, and build attestation events into Splunk via HEC, so your existing SPL can reference curated lookups instead of reinventing them. Teams building supply chain content packs typically save months of data engineering work by pulling from Safeguard's enrichment APIs. When a detection fires, the linked finding gives analysts the upstream package details and remediation guidance they need to triage quickly.