SentinelOne's Singularity platform was designed around endpoint behavior, not package provenance. When supply chain attacks like SolarWinds in 2020, 3CX in 2023, and the XZ backdoor in 2024 dominated incident reports, security teams had to ask a harder question: can an EDR built for user laptops meaningfully detect malicious behavior on a build agent, a Jenkins controller, or a developer workstation installing a poisoned npm package?
The answer is a conditional yes. SentinelOne's Behavioral AI engine, Storyline correlation, and custom detection rules give defenders the raw material to build supply chain detections. What they do not give you is a ready-made detection pack for CI/CD. You have to engineer it.
Why build systems break default EDR assumptions
Most SentinelOne policies ship with baselines tuned for knowledge workers. They expect browsers to make outbound HTTPS connections, they expect Office macros to be suspicious, and they expect PowerShell to be rare outside admin hosts. Build agents invert almost every one of those assumptions.
A Linux build runner spends its day cloning repositories, resolving transitive dependencies across npm, PyPI, Maven Central, and a dozen internal mirrors, compiling code, pushing artifacts to registries, and signing releases. It runs git, curl, wget, gcc, node, pip, and gradle thousands of times per day. Default behavioral heuristics flag enough of that routine activity as anomalous to force a bad choice: either analysts tune the policy so aggressively that real attacks slide through, or they drown in noise.
The fix is to treat build infrastructure as a distinct asset class with its own policy overrides, custom detection rules, and Storyline hunting queries.
Segmenting build agents into their own site and policy
The first move is organizational. Create a dedicated SentinelOne site or group for CI/CD assets — Jenkins controllers, GitHub Actions self-hosted runners, GitLab executors, Buildkite agents, and signing servers. Apply a policy that differs from the corporate endpoint baseline in three ways.
First, raise the aggressiveness of Behavioral AI for lateral movement and credential access, because build agents hold long-lived tokens for registries, cloud accounts, and source control. Second, relax static rules that target developer tools — compilers writing to /tmp, interpreters spawning subprocesses, and network activity to package registries are normal on these hosts. Third, enable Deep Visibility logging at full fidelity even if the corporate baseline samples. You will need the forensic depth when something goes wrong.
Custom detection rules that matter
SentinelOne's custom detections are STAR (Storyline Active Response) rules, written in the Deep Visibility query language to match process, file, and network events as they stream in. (Static AI, by contrast, is the platform's pre-execution ML engine and does not take custom rules.) Five categories of STAR rule deliver disproportionate value on build infrastructure.
Rules for typosquatted package installation monitor for npm, pip, and gem invocations where the package name differs by Levenshtein distance of one from a top-10k package. A developer installing reqests instead of requests should trigger a medium-severity alert that pulls the Storyline around the install.
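The core of the typosquat check is a plain edit-distance comparison against a popular-package list. A minimal sketch, assuming the list is available as a set (the four packages below stand in for a real top-10k feed):

```python
# Sketch: flag package installs whose name is edit distance 1 from a
# popular package. TOP_PACKAGES is an illustrative stand-in for a real
# top-10k list pulled from registry download stats.
TOP_PACKAGES = {"requests", "numpy", "lodash", "express"}

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def typosquat_candidates(package: str) -> list:
    """Popular packages within distance 1 of the installed name."""
    if package in TOP_PACKAGES:
        return []  # exact match to a popular package is fine
    return [p for p in TOP_PACKAGES if edit_distance(package, p) == 1]
```

`typosquat_candidates("reqests")` returns `["requests"]`, which is exactly the alert condition described above; an exact match returns an empty list.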
Rules for unexpected child processes of package managers flag when npm, pip, or composer spawn network tools beyond their normal repertoire. If npm install launches a reverse shell, a cryptominer, or a binary from /tmp, the Storyline correlation captures the postinstall script, the parent package, and the exfiltration host.
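The spawn check reduces to a parent-to-allowed-children lookup plus a path heuristic. A sketch, with the allowlist contents and path list as illustrative assumptions rather than a vetted baseline:

```python
# Sketch: decide whether a child process spawned by a package manager is
# expected. EXPECTED_CHILDREN and SUSPICIOUS_PATHS are assumptions to be
# tuned against your own build telemetry.
EXPECTED_CHILDREN = {
    "npm": {"node", "sh", "git"},
    "pip": {"python", "git"},
    "composer": {"php", "git"},
}

SUSPICIOUS_PATHS = ("/tmp/", "/dev/shm/")

def is_unexpected_child(parent: str, child: str, child_path: str) -> bool:
    """True when the spawn should raise an alert."""
    if child_path.startswith(SUSPICIOUS_PATHS):
        return True  # binaries executing out of /tmp are never normal here
    allowed = EXPECTED_CHILDREN.get(parent)
    if allowed is None:
        return False  # not a package manager we model
    return child not in allowed
```

An `npm install` spawning `curl` from `/usr/bin` trips the allowlist branch; anything executing from `/tmp` trips the path branch regardless of parent.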
Rules for credential file access outside authorized processes detect when anything other than the expected build tooling reads ~/.aws/credentials, ~/.docker/config.json, or the GitHub Actions runner's token cache. The 2023 CircleCI incident and several GitHub Actions token-theft campaigns share this behavioral signature.
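The logic is a per-file reader allowlist. A minimal sketch, with both the monitored paths and the reader names as illustrative assumptions:

```python
import os

# Sketch: check whether the process reading a credential file is on the
# expected list for that file. File paths and reader names are
# assumptions; populate them from your actual build tooling.
CREDENTIAL_READERS = {
    "~/.aws/credentials": {"aws", "terraform"},
    "~/.docker/config.json": {"docker", "buildkitd"},
}

def credential_access_alert(path: str, process: str) -> bool:
    """True when an unexpected process reads a monitored credential file."""
    for cred_path, readers in CREDENTIAL_READERS.items():
        if os.path.expanduser(path) == os.path.expanduser(cred_path):
            return process not in readers
    return False  # not a file we monitor
```

The `expanduser` call normalizes both sides so the comparison works whether the telemetry reports the path tilde-prefixed or fully expanded.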
Rules for signing key exfiltration patterns watch for access to Cosign private keys, GPG signing material, or HSM client binaries from processes that have no business touching them. On a signing server, this is one of the highest-fidelity signals you can build.
Rules for post-install network beaconing monitor short-lived processes that make DNS queries to newly registered domains or known DGA patterns within 60 seconds of a package install completing. Malicious npm packages routinely beacon immediately after install to confirm the victim and fetch a second stage.
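The beaconing rule is a time-window join between the install-completion event and subsequent DNS queries. A sketch, stubbing the newly-registered-domain check with a set (a real pipeline would consult domain age from WHOIS or a threat-intel feed):

```python
from datetime import datetime, timedelta

# Sketch: flag DNS queries to suspicious domains within 60 seconds of a
# package install completing. NEWLY_REGISTERED is an illustrative stub
# for a real domain-age or threat-intel lookup.
BEACON_WINDOW = timedelta(seconds=60)
NEWLY_REGISTERED = {"cdn-update-check.xyz"}  # illustrative assumption

def beacon_alerts(install_finished, dns_events):
    """dns_events: time-ordered (timestamp, domain) pairs."""
    return [
        domain
        for ts, domain in dns_events
        if timedelta(0) <= ts - install_finished <= BEACON_WINDOW
        and domain in NEWLY_REGISTERED
    ]
```

Queries to legitimate registry hosts fall through the set membership check, and queries outside the 60-second window fall through the time bound, so only the immediate post-install beacon survives.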
Storyline correlation for supply chain investigations
Where custom rules fire on individual events, Storyline tells the story of a full attack chain. For supply chain incidents, three Storyline hunting queries deserve a permanent home in your saved queries.
The first pivots from any npm, pip, or Maven process back to the parent CI job. Given a suspicious outbound connection, you want to know which repository, which branch, which commit, and which developer initiated the build. Storyline's process tree makes this a two-click pivot when Deep Visibility is fully enabled.
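Outside the console, the same pivot is a parent-pointer walk over exported process telemetry. A sketch, where the event shape and the CI runner process names are assumptions:

```python
# Sketch: walk a process tree (as exported from Deep Visibility or any
# EDR) from a suspicious process up to the CI runner that started the
# build. The dict shape and runner names are illustrative assumptions.
CI_RUNNERS = {"Runner.Listener", "agent.jar", "buildkite-agent"}

def find_ci_ancestor(pid, processes):
    """Follow ppid links until we hit a known CI runner process."""
    seen = set()
    while pid in processes and pid not in seen:
        seen.add(pid)
        proc = processes[pid]
        if proc["name"] in CI_RUNNERS:
            return proc  # carries the job metadata we pivoted for
        pid = proc["ppid"]
    return None  # reached the tree root without finding a runner
```

The `seen` set guards against cycles in exported data; the returned runner record is where job, repository, and commit metadata would live.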
The second enumerates all processes that touched a registry credential file in a given window. If you suspect a token compromise, Storyline can produce the full list of binaries and command lines that read the file, along with any outbound connections those processes made within the correlation window.
The third correlates file writes to artifact directories with subsequent uploads. A malicious build step that injects a backdoor into a compiled binary before the artifact is signed and published will show up as a write-then-upload sequence from an unexpected parent process. Teams that hunt this pattern regularly catch misconfigurations and insider mistakes long before they catch outside attackers.
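The write-then-upload hunt is a sequential scan over a storyline's time-ordered events. A sketch, with the event fields, artifact path, and expected-writer set as illustrative assumptions:

```python
# Sketch: within one storyline, flag an artifact-directory write from an
# unexpected process when it is followed by a network upload. Event
# fields and EXPECTED_WRITERS are illustrative assumptions.
EXPECTED_WRITERS = {"gradle", "go", "gcc", "ld"}
ARTIFACT_DIR = "/build/artifacts/"

def suspicious_write_then_upload(events):
    """events: time-ordered dicts with type/process/path keys."""
    alerts, writer = [], None
    for ev in events:
        if (ev["type"] == "file_write"
                and ev["path"].startswith(ARTIFACT_DIR)
                and ev["process"] not in EXPECTED_WRITERS):
            writer = ev  # tainted write: remember it
        elif ev["type"] == "net_upload" and writer is not None:
            alerts.append({"write": writer, "upload": ev})
            writer = None
    return alerts
```

A `gradle` write followed by an upload produces nothing; a `python3` write to the artifact directory followed by the same upload produces one paired alert with both events attached.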
Integrating with SBOM and registry telemetry
SentinelOne is strong at endpoint behavior and weak at package identity. An npm package with a malicious postinstall script looks like node spawning curl; without knowing which package, which version, and whether that package is in your SBOM, you cannot tell a tuning exercise from an incident response.
The practical pattern is to enrich SentinelOne alerts in your SIEM with SBOM context. When a custom detection rule fires on a build agent, the enrichment layer joins the process command line to the package manifest, looks up the package in the SBOM service, checks whether the package or version is known-malicious, and promotes or demotes the alert accordingly. The same enrichment can attach CVE data, license flags, and OpenSSF Scorecard ratings. This is where most of the tuning-versus-detection tradeoff lives.
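The join itself can be small. A sketch of the severity decision, where the SBOM shape, the known-bad feed, and the command-line format are illustrative assumptions (scoped npm names like @types/node are out of scope for this sketch):

```python
# Sketch of the enrichment join: map an alert's install command to a
# package, look it up in the SBOM, and adjust severity. SBOM shape and
# KNOWN_MALICIOUS feed are illustrative assumptions.
SBOM = {"left-pad": {"version": "1.3.0", "direct": True}}
KNOWN_MALICIOUS = {("event-stream", "3.3.6")}

def enrich_alert(alert):
    """Promote or demote a package-install alert using SBOM context."""
    # Assumes commands shaped like "npm install <package>@<version>".
    pkg_spec = alert["cmdline"].split()[-1]
    name, _, version = pkg_spec.partition("@")
    if (name, version) in KNOWN_MALICIOUS:
        severity = "critical"   # confirmed-bad package: promote
    elif name in SBOM:
        severity = "low"        # declared dependency: likely tuning noise
    else:
        severity = "medium"     # unknown to the SBOM: investigate
    return {**alert, "package": name, "version": version,
            "severity": severity}
```

The three branches are the whole tradeoff in miniature: known-bad promotes, SBOM-declared demotes, and everything else lands in the analyst queue.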
Developer endpoint considerations
Developer laptops sit in the middle ground. They run production-grade development tools but are also used for email, chat, and browsing. Applying the build-agent policy to developer laptops creates unmanageable noise; applying the corporate baseline leaves gaps.
A workable compromise uses SentinelOne's application control to carve out developer tool directories — IDEs, package manager caches, local Docker daemons — from the most aggressive static detections, while leaving the behavioral engine fully active. Combine that with a small set of developer-specific detection rules focused on typosquatting, malicious VS Code extensions, and credential access from non-development processes. The result is a policy that catches the supply chain attacks actually targeting developers without generating triage fatigue.
Tuning, measuring, and iterating
Detection engineering on build infrastructure is never finished. The best-run programs treat it as a quarterly cycle: review the last ninety days of alerts, categorize each as true positive, false positive, or ambiguous, and adjust rules accordingly. Pair that with a small purple team exercise where the AppSec team plants a benign but realistic malicious package in a staging pipeline and measures detection latency.
Over time, this produces a detection library tuned to your exact toolchain. A team running GitHub Actions with self-hosted runners will have a materially different rule set than a team on Jenkins or Buildkite, and that specificity is a feature rather than a bug.
How Safeguard Helps
Safeguard complements SentinelOne by providing the supply chain context that endpoint telemetry alone cannot deliver. When a detection rule fires on a build agent, Safeguard's SBOM graph identifies the package, version, and transitive dependency path involved, along with known CVEs, license risks, and provenance signals. Our API-first architecture feeds enriched verdicts directly into SentinelOne's custom detection pipeline and SIEM, turning noisy process alerts into actionable supply chain incidents. Teams using Safeguard alongside SentinelOne report faster triage, fewer false positives on build infrastructure, and earlier detection of typosquatted and backdoored dependencies.