Panther SIEM Supply Chain Rules: A Detection Engineering Playbook

Write Panther Python detections that catch package poisoning, CI token abuse, and registry compromise. Real rule examples, tuning patterns, and alert routing.

Nayan Dey
Senior Security Engineer
7 min read

Panther's appeal to modern security teams is its Python-native detection model. Instead of the constrained query languages that older SIEMs inherited from the SQL era, Panther lets detection engineers write real code — unit-tested, version-controlled, and deployable through the same CI/CD pipelines that ship the rest of the company's software. That foundation matters when the threat you are chasing is a supply chain attack, because the signals are scattered across GitHub audit logs, package registry webhooks, cloud IAM events, and build system outputs.

This is a working playbook for teams that want to extend Panther coverage beyond the default ruleset into software supply chain detection.

Framing the problem in Panther terms

A Panther rule is a Python function that receives a log event and returns true when the event is suspicious. Rules are organized by log type, and Panther's schema ecosystem covers GitHub, GitLab, Okta, AWS CloudTrail, Google Workspace, and dozens of other sources out of the box. Supply chain detections live at the intersection of those log types, which means most high-fidelity rules are either correlation rules across multiple sources or Lookup Table enrichments that add package and dependency context to raw events.
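
Concretely, a rule file looks like the sketch below. The action value is a placeholder rather than a real audit event name, and in production the Python pairs with a YAML file that declares the rule's log type, severity, and metadata.

```python
# Minimal Panther rule sketch. Panther invokes rule(event) for every event of
# the rule's declared log type; returning True raises an alert. Optional
# functions such as title() shape the alert that analysts see.

def rule(event):
    # Placeholder condition -- substitute a real action from your schema.
    return event.get("action") == "some.high_risk_action"

def title(event):
    return f"High-risk action by {event.get('actor', '<unknown>')}"
```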

The mental model to hold is that every major supply chain attack leaves traces in at least two of four places: source control, build systems, package registries, and cloud identity. A detection program that correlates across all four catches incidents that a program watching any single source misses.

Source control detections

GitHub audit logs are the foundation. A small set of Panther rules against the GitHub schema delivers outsized value.

Workflow file modifications outside the usual author cohort warrant a high-fidelity alert. If a repository's .github/workflows/release.yaml has been edited by Alice and Bob for the last two years, a sudden commit from a dormant contributor or a newly added user deserves triage. The rule compares the commit author to a lookup table of historical workflow editors and alerts on drift.
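
A sketch of that rule, assuming a Lookup Table named workflow_editors keyed on repository name with an editors column, and simplified GitHub audit field names:

```python
# Sketch: alert when a workflow file changes under an author outside the
# repo's historical editor cohort. The Lookup Table "workflow_editors" and
# the field names here are assumptions to adapt to your schema.

def rule(event):
    if not event.get("path", "").startswith(".github/workflows/"):
        return False
    known_editors = event.deep_get(
        "p_enrichment", "workflow_editors", "repo", "editors", default=[]
    )
    return event.get("actor") not in known_editors

def title(event):
    return (
        f"Workflow file {event.get('path')} modified by unrecognized author "
        f"{event.get('actor')}"
    )
```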

New deploy keys, personal access tokens with repository-admin scope, and newly added GitHub App installations are all high-signal events. Supply chain attackers favor these mechanisms because they survive password rotation and often escape MFA enforcement. A Panther rule that flags every new deploy key and every PAT with write scope, then routes to a Slack channel for reviewer approval, closes one of the most commonly abused gaps.
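
A sketch of that rule, with dynamic routing to a Slack destination via Panther's destinations() override. The audit action names and the scope field are assumptions to verify against the GitHub schema you ingest.

```python
# Sketch: flag new deploy keys and newly created PATs, and route the alert to
# a Slack destination for reviewer approval.

SUSPECT_ACTIONS = {
    "public_key.create",             # deploy key added (name assumed)
    "personal_access_token.create",  # PAT created (name assumed)
}

def rule(event):
    return event.get("action") in SUSPECT_ACTIONS

def severity(event):
    # Escalate anything carrying write scope; the scope field name is assumed.
    return "HIGH" if "write" in str(event.get("oauth_scopes", "")) else "MEDIUM"

def destinations(event):
    # Must match a destination configured in your Panther instance.
    return ["slack-credential-review"]
```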

Branch protection changes are another class of event worth alerting on. Disabling required reviews, allowing force pushes on protected branches, or removing status checks are legitimate actions during incident response but suspicious during business as usual. A rule that compares the actor against the owners listed in the repository's CODEOWNERS file catches most unauthorized modifications.
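
A sketch, assuming the CODEOWNERS contents are synced into a Lookup Table named codeowners keyed on repository, with illustrative audit action names:

```python
# Sketch: alert on branch protection weakening by an actor who is not listed
# as a code owner for the repository. Action names are assumptions.

PROTECTION_ACTIONS = {
    "protected_branch.destroy",
    "protected_branch.update_required_status_checks",
    "protected_branch.update_allow_force_pushes_enforcement_level",
}

def rule(event):
    if event.get("action") not in PROTECTION_ACTIONS:
        return False
    owners = event.deep_get(
        "p_enrichment", "codeowners", "repo", "owners", default=[]
    )
    return event.get("actor") not in owners
```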

Package registry detections

npm, PyPI, and Maven Central do not stream audit logs directly to your SIEM, but your internal mirrors and artifact repositories do. Artifactory, Nexus, GitHub Packages, and AWS CodeArtifact all emit events that Panther can ingest via webhook or log shipper.

The most productive rules in this category focus on publish events. A rule that alerts when an internal package is republished under a version number that already exists catches dependency confusion and registry poisoning attempts. A rule that flags package publishes outside business hours or from IP addresses not associated with CI runners catches credential theft. A rule that looks for package names at a Levenshtein distance of one from popular public packages catches typosquatting attempts targeting your developers.
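
Of those three, the typosquat check needs the most actual code, because the edit-distance test runs inside the rule. A sketch, with a hardcoded stand-in where a Lookup Table of popular package names belongs:

```python
# Sketch: flag publishes to an internal mirror whose package name sits one
# edit away from a popular public package. POPULAR stands in for a Lookup
# Table or bundled list; the registry event fields are assumptions.

POPULAR = {"requests", "urllib3", "numpy", "pandas", "boto3"}

def _within_one_edit(a: str, b: str) -> bool:
    # True when the Levenshtein distance between a and b is at most one.
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) > len(b):
        a, b = b, a
    i = j = edits = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            i += 1
            j += 1
            continue
        edits += 1
        if edits > 1:
            return False
        if len(a) == len(b):
            i += 1  # substitution
        j += 1      # otherwise: insertion into the shorter string
    # Any leftover tail of the longer string counts as one more edit.
    return edits + (len(b) - j) <= 1

def rule(event):
    if event.get("event_type") != "publish":
        return False
    name = event.get("package_name", "")
    return any(name != p and _within_one_edit(name, p) for p in POPULAR)
```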

Combine registry events with download patterns, and you can detect a broader class of attack. If a newly published package version is downloaded within seconds of publication by a user account rather than a CI job, that deserves scrutiny. Real usage typically lags publication by minutes to hours as caches populate and builds trigger.
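
One way to express that check, assuming a Lookup Table of recent publish timestamps and a naming convention that separates CI identities from humans, both of which are assumptions to adapt:

```python
# Sketch: flag a download of a package version within seconds of its publish
# by a human account rather than a CI identity. Assumes a Lookup Table
# "recent_publishes" keyed on package_version with a published_at column.
from datetime import datetime

PUBLISH_WINDOW_SECONDS = 60
CI_USER_PREFIXES = ("svc-ci-", "github-actions")  # assumed naming convention

def rule(event):
    if event.get("event_type") != "download":
        return False
    published_at = event.deep_get(
        "p_enrichment", "recent_publishes", "package_version", "published_at"
    )
    downloaded_at = event.get("timestamp")
    if not published_at or not downloaded_at:
        return False
    delta = (
        datetime.fromisoformat(downloaded_at)
        - datetime.fromisoformat(published_at)
    ).total_seconds()
    is_human = not str(event.get("user", "")).startswith(CI_USER_PREFIXES)
    return is_human and 0 <= delta <= PUBLISH_WINDOW_SECONDS
```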

Build system detections

Build system logs are the noisiest of the four sources and therefore the hardest to write high-fidelity rules against. The Panther teams that get the most value here use Data Explorer to baseline normal build behavior and write rules that detect deltas from that baseline.

Useful patterns include detecting when a build job runs a command absent from the last fifty successful runs of the same pipeline, when a build job uploads artifacts to a registry it has not published to in the last thirty days, and when a build job reads environment variables or secrets it has not previously consumed. Each of these rules produces a manageable alert volume and catches real incidents.
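
The first of those patterns, sketched against a baseline that a scheduled query would refresh into a Lookup Table; the table name and CI field names are assumptions:

```python
# Sketch: alert when a build step runs a command never seen in the recent
# baseline for its pipeline. Assumes a Lookup Table "pipeline_baselines"
# keyed on pipeline name with a "known_commands" column refreshed from the
# last fifty green runs.

def rule(event):
    if event.get("event_type") != "step_started":
        return False
    baseline = event.deep_get(
        "p_enrichment", "pipeline_baselines", "pipeline", "known_commands",
        default=None,
    )
    if baseline is None:
        return False  # no baseline yet -- let the refresh query catch up
    return event.get("command") not in baseline
```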

Secret exfiltration from CI remains the highest-value detection in this category. Rules that look for base64-encoded blobs longer than 200 bytes in build logs, for curl or wget POST requests to domains not on an allowlist, and for DNS queries to recently registered domains from within build jobs all produce alerts that are worth investigating.
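
The first two checks compress into a pair of regexes; the recently-registered-domain check needs a domain-age enrichment source and is left out of this sketch. Field names and the allowlist entries are assumptions; the 200-byte threshold comes from the paragraph above.

```python
# Sketch: scan build log lines for exfiltration tells -- long base64 blobs
# and curl/wget POSTs to domains off the allowlist.
import re

BASE64_BLOB = re.compile(r"[A-Za-z0-9+/=]{200,}")
OUTBOUND_POST = re.compile(r"\b(curl|wget)\b.*\s(--data|--post-data|-d|-X\s*POST)\b")
ALLOWED_DOMAINS = ("artifacts.internal.example", "registry.npmjs.org")

def rule(event):
    line = event.get("message", "")
    if BASE64_BLOB.search(line):
        return True
    if OUTBOUND_POST.search(line):
        return not any(domain in line for domain in ALLOWED_DOMAINS)
    return False
```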

Cloud identity detections

AWS, GCP, and Azure produce the cleanest telemetry of the four sources, and Panther's default rule packs cover most well-known attack patterns. Supply chain-specific additions are worth layering on top.

A rule that alerts when an IAM role trusted by a CI/CD provider is assumed from an IP address outside the provider's published ranges catches OIDC token abuse. A rule that alerts when a service account key is created outside IaC-managed paths catches a common persistence technique. A rule that alerts when KMS keys used for artifact signing are used by principals not on an explicit allowlist catches signing-infrastructure compromise.
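
A sketch of the first of those rules over CloudTrail; the role-naming convention and the CIDR are placeholders, and in practice the provider ranges belong in a regularly refreshed Lookup Table:

```python
# Sketch: catch a CI-trusted role assumed from outside the CI provider's
# published IP ranges. GitHub, for example, publishes its ranges at
# api.github.com/meta; the single CIDR below is illustrative only.
from ipaddress import ip_address, ip_network

CI_ROLE_MARKER = "ci-"  # assumed naming convention for CI-trusted roles
CI_PROVIDER_RANGES = [ip_network("192.30.252.0/22")]  # illustrative CIDR

def rule(event):
    if event.get("eventName") != "AssumeRoleWithWebIdentity":
        return False
    role_arn = event.deep_get("requestParameters", "roleArn", default="")
    if CI_ROLE_MARKER not in role_arn:
        return False
    try:
        addr = ip_address(event.get("sourceIPAddress", ""))
    except ValueError:
        return True  # a non-IP source is itself unusual for this call
    return not any(addr in net for net in CI_PROVIDER_RANGES)
```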

Correlation rules and the 24-hour window

Panther's Rules Engine supports correlation through saved queries, scheduled rules, and Indicator Search. The highest-value correlation rules for supply chain security watch 24-hour windows rather than minute-scale bursts.

The canonical example: correlate a new GitHub PAT creation with a subsequent npm package publish from an IP outside the normal CI range. Neither event in isolation is necessarily alarming; together, they match the pattern of several real incidents. Panther expresses this as a scheduled rule that runs hourly, queries both log types, and alerts when a PAT creation within the last 24 hours precedes an unusual publish event.
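
In a scheduled rule the SQL does the joining and Panther calls the Python once per returned row, so the rule body stays thin. A sketch of the Python half, with column names that are assumptions about what the query selects:

```python
# Sketch of the Python half of the scheduled rule. The hourly query joins
# PAT creations to npm publishes over a trailing 24-hour window and returns
# one row per suspicious pair; Panther then calls rule() once per row.

def rule(event):
    # The SQL already did the correlation; guard against malformed rows.
    return bool(event.get("pat_actor")) and bool(event.get("publish_ip"))

def title(event):
    return (
        f"New PAT for {event.get('pat_actor')} followed by npm publish "
        f"from {event.get('publish_ip')} within 24 hours"
    )
```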

Testing, tuning, and the alert budget

Panther's unit test framework is one of its best features. Every rule in production should have at least a positive and negative test case. For supply chain rules, we also recommend a regression corpus of real events — anonymized where necessary — drawn from your own environment. When a rule change lands, the CI pipeline runs the regression corpus and flags any shifts in alert volume.
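
Tests live in the rule's YAML metadata. A minimal positive and negative pair for the deploy-key rule sketched earlier, reusing the same assumed field names:

```yaml
# Sketch of the Tests stanza in a rule's YAML file: one case that must alert
# and one that must stay quiet.
Tests:
  - Name: New deploy key alerts
    ExpectedResult: true
    Log:
      action: public_key.create
      actor: mallory
  - Name: Routine push stays quiet
    ExpectedResult: false
    Log:
      action: git.push
      actor: alice
```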

Alert budget discipline matters more for supply chain rules than most other detection areas because the false positive rate can spike dramatically after tooling changes. A migration from Jenkins to GitHub Actions, a new internal registry, or a major upgrade of your IaC pipeline can invalidate months of tuning. Treat rule maintenance as a scheduled quarterly activity, not an ad hoc one.

How Safeguard Helps

Safeguard provides Panther with the package-level ground truth that makes supply chain rules high-fidelity. Our API returns SBOM composition, CVE mappings, provenance signatures, and known-malicious package data that Panther rules consume as Lookup Tables or real-time enrichments. Detection engineers writing rules against GitHub, registry, and build system events can join events to Safeguard's graph to answer questions like "is this package in our SBOM" and "does this version have known compromise indicators" without leaving the rule code. The result is fewer noisy alerts, faster triage, and supply chain coverage that keeps pace with a moving threat landscape.
