Every security team I have worked with eventually asks the same question: "What should we be watching for that would indicate a supply chain attack in progress?" The honest answer is that supply chain IoCs are messier than traditional network IoCs because the attacker is using your legitimate infrastructure against you. A connection to a C2 server looks like malware. A postinstall script that runs curl looks like a lot of legitimate npm packages.
This catalog is the working list my team uses, organized by where you would detect each indicator and with explicit notes on the false positive patterns that will burn you if you alert without tuning.
Category One: Publish-Time Indicators
These show up in registry metadata and are the cheapest to detect because they do not require you to run anything. The signal-to-noise ratio is also the highest, which makes them a great starting point.
Rapid version churn after long silence. A package that has not been published in six months suddenly releases three versions in forty-eight hours, especially if those versions are patch bumps that do not correspond to any commits on the upstream repo. The query against npm packument history looks like:
# drop the packument's created/modified bookkeeping keys, then list the five most recent publishes
jq '.time | del(.created, .modified) | to_entries | sort_by(.value)
    | .[-5:] | map({version: .key, time: .value})' packument.json
False positive pattern: legitimate maintainers sometimes do coordinated releases across a monorepo that look like version churn. Cross-reference against the upstream repo's commit history before alerting.
New maintainer adds themselves and immediately publishes. The npm owner add event is visible if you own the package, and the sequence "new maintainer added → publish within 24 hours" is a strong signal. For public packages you cannot see the add event directly, but the first publish by a new npm username against a long-established package is observable through the packument's _npmUser history.
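A quick way to eyeball that history, assuming you have the full packument saved locally as packument.json: print the publishing username next to each version and look for a name that has never appeared before.
jq -r '.versions | to_entries[] | "\(.key)\t\(.value._npmUser.name // "unknown")"' packument.json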
Install-time scripts where there were none. A package that has shipped without preinstall or postinstall hooks for twenty versions and suddenly adds one in version twenty-one is worth looking at. The diff is trivial:
diff <(jq '.scripts' pkg-v20/package.json) \
     <(jq '.scripts' pkg-v21/package.json)
False positive pattern: legitimate packages do add install scripts during normal feature work. The indicator is stronger when combined with version churn or maintainer changes.
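If you would rather scan a packument than diff two unpacked tarballs, a rough jq pass (same assumption: the packument saved as packument.json) lists every version that declares an install hook, which makes a first appearance late in the version history obvious:
jq -r '.versions | to_entries[]
       | select(.value.scripts.preinstall or .value.scripts.postinstall or .value.scripts.install)
       | .key' packument.json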
Category Two: Static Artifact Indicators
These require you to unpack and inspect the artifact, but they do not require execution.
Obfuscated JavaScript in packages that have never used obfuscation. Run a minification and entropy check:
# print "<file>: <entropy in bits per byte>" for every JavaScript file in the unpacked tarball
find extracted/ -name '*.js' -print | while IFS= read -r f; do
  printf '%s: %s\n' "$f" "$(ent "$f" | awk '/Entropy/{print $3}')"
done
Entropy above 5.5 bits per byte in a file that is part of a library is worth a closer look. Known-minified libraries are common false positives — compare against the upstream repo's build output.
Base64 blobs longer than 500 characters. The pattern [A-Za-z0-9+/]{500,}={0,2} inside a JavaScript or Python file is almost never legitimate code. A quick grep across an extracted package tarball finds it. False positive pattern: embedded SVG or fonts. Filter those by file type before alerting.
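The grep itself is nothing fancy. Assuming the tarball is already unpacked into extracted/, something like this surfaces the candidates, truncated so a multi-kilobyte blob does not flood the terminal:
grep -rnoE '[A-Za-z0-9+/]{500,}={0,2}' extracted/ --include='*.js' --include='*.py' | cut -c1-120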
Binary payloads inside packages that claim to be pure source. If package.json does not declare a bin field or any native bindings and the tarball contains an ELF, Mach-O, or PE binary, something is off. file is your friend:
tar tzf pkg.tgz | xargs -I{} sh -c 'printf "%s: " "$1"; tar xzOf pkg.tgz "$1" 2>/dev/null | file -b -' _ {}
URL fetches embedded in install scripts. Any curl, wget, fetch, Invoke-WebRequest, or equivalent inside a preinstall, postinstall, or setup.py is a code smell. A small number of legitimate packages (some native binding installers) do this; those should be on an allowlist, and everything else should alert.
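A rough first pass, again assuming an unpacked tarball in extracted/: pull the lifecycle scripts out of package.json and grep them, along with setup.py, for fetch primitives. Anything that matches and is not on the allowlist becomes an alert.
jq -r '.scripts // {} | to_entries[] | select(.key | test("install")) | .value' extracted/package.json \
  | grep -nE 'curl|wget|fetch|Invoke-WebRequest'
grep -nE 'curl|wget|urllib|requests' extracted/setup.py 2>/dev/null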
Category Three: Runtime Indicators
These require execution or live host telemetry, but they catch the attacks that slipped past static analysis.
DNS queries to newly registered domains from build agents. Newly registered domain (NRD) feeds combined with your CI runner DNS logs give you a high-signal detection. The rule I run in our SIEM is:
source="ci-runner-dns" | lookup nrd_feed domain as query
| where isnotnull(registered_date) AND registered_date > relative_time(now(), "-30d")
| stats count by runner, query, registered_date
False positive pattern: legitimate new domains for package ecosystems (a new npm mirror, for example). Allowlist the ecosystem domains explicitly.
Outbound connections from the build agent to non-registry endpoints during dependency resolution. The only outbound traffic your build agent should make during npm install or pip install is to your registry. A TCP connection to anything else during that window is an IoC. Implement this as a network policy that logs rather than blocks at first, to shake out false positives.
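One way to get that log-first behavior on a Linux build agent is a plain iptables LOG rule scoped to the account the runner uses. The uid, registry hostname, and log prefix here are placeholders for whatever your environment actually uses:
iptables -A OUTPUT -m owner --uid-owner ci-runner -p tcp --syn \
  ! -d registry.internal.example.com \
  -j LOG --log-prefix "non-registry-egress: "
Once the false positives are understood, the same rule with a REJECT target (or the equivalent network policy in your CNI) becomes the enforcement mode.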
Reads of credential files by package install scripts. On Linux, auditd rules on ~/.ssh/, ~/.aws/, ~/.npmrc, and ~/.gitconfig from processes spawned by npm, pip, or yarn are gold:
# one rule per watched path; repeat for ~/.aws, ~/.npmrc, and ~/.gitconfig
-a always,exit -F arch=b64 -S openat -F dir=/root/.ssh -F key=cred-read
Correlate the audit events against the process tree. A node process spawned by npm install reading ~/.ssh/id_rsa has no legitimate purpose.
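Pulling the matching events back for that correlation is one command; the key matches the rule above, and the interpreted output includes the pid, ppid, and executable path you need to walk the process tree:
ausearch -k cred-read -i --start recent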
Unexpected child processes from package scripts. An install script that spawns bash, python, powershell, or cmd from a context where that makes no sense is worth an alert. The eBPF tracing tools or your EDR can catch this.
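If there is no EDR on the build agents, a bpftrace one-liner gives a rough version of the same signal. The parent-process names here are an assumption you would tune to your own toolchain:
bpftrace -e 'tracepoint:syscalls:sys_enter_execve
  /comm == "node" || comm == "npm" || comm == "python" || comm == "pip"/
  { printf("%d %s -> %s\n", pid, comm, str(args->filename)); }'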
Category Four: Deployment-Time Indicators
Artifact digest mismatches between build and deploy. If the image your CI built has digest sha256:abc... and the image your CD system pulled has digest sha256:def..., something intercepted the artifact. A simple admission controller check against the expected digest catches this:
- match:
    kinds: ["Pod"]
  validate:
    deny:
      conditions:
        - key: "{{ request.object.spec.containers[0].image }}"
          operator: NotIn
          value: "{{ approved_digests }}"
Provenance attestation missing or invalid. If your build pipeline produces SLSA provenance, an artifact without it or with an attestation signed by an unexpected key is an IoC. Verify with cosign verify-attestation in the deploy gate.
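In the deploy gate that can be as small as one command that fails the pipeline when the attestation is missing or signed by the wrong identity. The image reference, identity regexp, and issuer below are placeholders for your own registry and CI provider:
cosign verify-attestation \
  --type slsaprovenance \
  --certificate-identity-regexp '^https://github.com/your-org/' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  registry.example.com/app@sha256:... || exit 1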
Using the Catalog
A catalog is only useful if you tune it. Take each indicator, decide whether it is an alert or an inform, set a baseline over two weeks, and review the false positives as a team. The indicators that survive the tuning phase are the ones that go into your on-call runbook. The rest go into a hunt backlog that a senior analyst reviews monthly.
How Safeguard Helps
Safeguard ships this catalog as a built-in ruleset and continuously applies it across every package, build, and deployment in your environment. The platform maintains publish-time baselines for every dependency you consume, so rapid version churn, new maintainer activity, and script changes surface automatically without you having to build the detections yourself. When an indicator fires, Safeguard correlates it against your dependency graph and running asset inventory, so the alert arrives with a scope estimate and a suggested quarantine action rather than a raw event. For teams that want to extend the catalog with their own indicators, the platform exposes a rules API so your internal threat-hunting work compounds on top of the defaults.