Best Practices

One Policy Set, Four Enforcement Points

Different gates with different rules create gaps and developer friction. A unified policy engine evaluates one definition at PR, build, admission, and runtime.

Nayan Dey
Senior Security Engineer
8 min read

Most organizations end up with several supply chain policies running on different stacks. The PR check is a GitHub Action wrapping a homegrown script. The build-time scan is a SaaS scanner whose rules are configured in a separate console. The admission webhook is a Kubernetes policy engine with a custom rule library. The runtime monitor is yet another system with its own configuration model. Each was added at a different time to solve a different problem, and each does its job in isolation.

The cost of this fragmentation accumulates quietly. A rule that lives in three systems gets three slightly different definitions. A rule that lives in only one system has gaps wherever the other systems operate. Developers see different error messages at different gates for what is logically the same violation. Security teams maintain four sets of overrides, four audit logs, and four documentation sites. Audits become archaeology. The right answer is to express policy once and let one engine evaluate it at every gate.

Why fragmentation is the natural state

Fragmentation does not happen because anybody chose it. It happens because each gate appeared in response to a different incident.

A high-profile dependency confusion attack prompts somebody to add a PR-time check, scoped narrowly to package names because that was the immediate problem. A signed-artifact compliance requirement prompts somebody to add a build-time signing step and an admission-time signature check, configured in their respective tools. A runtime supply chain compromise prompts somebody to add a drift detection agent with its own threat list. Each addition was correct in context. None of the additions was integrated with the others.

The integration debt compounds. When a new policy is needed — say, blocking a freshly disclosed compromised package — somebody has to update four systems, validate the rule in each, and hope the four implementations stay in sync. The temptation is to update only the most painful gate, usually the one that just produced an incident, and leave the others stale. Over time, the gates drift apart, and the system as a whole protects less than the sum of its parts implied.

What unified policy actually means

Unified policy is not a single file containing every rule. It is a single definition format, a single decision engine, and a single audit pipeline, with adapters that present the engine's verdict at each enforcement point.

The definition format is declarative and source-controlled. A rule states what it checks — a property of a package, an image, a workload, or a runtime event — and what the verdict is when the check fails. The format is the same across gates. A rule about signature verification has the same shape whether it is being applied to an image at admission or an artifact at build.

The decision engine reads the rule definitions and produces verdicts on inputs. The engine is the same code path regardless of where the input comes from. A PR-time evaluation feeds the engine a proposed manifest. A build-time evaluation feeds it the assembled artifact. An admission-time evaluation feeds it the workload spec and SBOM. A runtime evaluation feeds it observed drift events. The engine does not care which gate is calling; it produces the verdict based on the rules.
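The gate-agnostic engine described above can be sketched in a few lines. This is a hypothetical illustration, not Safeguard's actual API: the names `Rule`, `Verdict`, and `evaluate` are invented for the example. The point is that the evaluation code path is identical for every caller; only the subject being evaluated and the per-gate verdict mapping differ.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    rule_id: str
    condition: Callable[[dict], bool]  # True when the rule fires
    verdicts: dict                     # per-gate consequence, e.g. {"pr": "warn", "admission": "deny"}

@dataclass
class Verdict:
    rule_id: str
    gate: str
    action: str  # "pass", "warn", "block", "deny", "alert", "isolate"

def evaluate(rules: list[Rule], gate: str, subject: dict) -> list[Verdict]:
    """Same code path for every gate; the caller only names itself
    so the rule's per-gate verdict can be looked up."""
    results = []
    for rule in rules:
        if rule.condition(subject):
            results.append(Verdict(rule.rule_id, gate, rule.verdicts.get(gate, "warn")))
        else:
            results.append(Verdict(rule.rule_id, gate, "pass"))
    return results
```

A single signature rule, for instance, can warn on a pull request and deny at admission without being written twice: the condition is shared and only the verdict table varies.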

The audit pipeline records every evaluation in a uniform schema. Querying "what did our license policy evaluate against last week" returns events from all four gates, not four separate result sets that have to be reconciled.

What a single rule definition looks like in practice

The shape of a rule under unified policy is approximately:

  • Identifier. A stable name and version so the rule can be referenced from incident reports.
  • Inputs. The properties of an artifact, workload, or event the rule consumes.
  • Conditions. The boolean logic that determines whether the rule fires.
  • Verdicts per gate. The outcome at PR time (warn, block), build time (warn, block), admission time (warn, deny), and runtime (alert, isolate).
  • Mode per gate. The current rollout phase at each gate independently.
  • Override channel. The break-glass and waiver paths.
  • Owner and review cadence. Who maintains the rule and when it is next reviewed.

A rule about license compliance, for example, might block at PR time, block at build time, deny at admission time, and alert at runtime. The same conditions apply at every gate; only the consequence is calibrated to the gate's risk profile. Authoring the rule once, with explicit per-gate consequences, is far less error-prone than maintaining four separate license rules in four different systems.
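Put together, the license rule from this example might be declared once like the following. This is a hypothetical rendering of the fields listed above; the declarative syntax and field names are illustrative, not a real Safeguard rule format.

```python
# One declaration carries every field from the rule shape above:
# identifier, inputs, condition, per-gate verdicts and modes,
# override channel, owner, and review cadence.
lic_001 = {
    "id": "lic-001",
    "version": 3,
    "inputs": ["package.licenses"],
    "condition": "any(l not in ALLOWED_LICENSES for l in package.licenses)",
    "verdicts": {"pr": "block", "build": "block",
                 "admission": "deny", "runtime": "alert"},
    "mode": {"pr": "enforce", "build": "enforce",
             "admission": "enforce", "runtime": "monitor"},
    "override": "waiver",
    "owner": "legal",
    "review": "quarterly",
}
```

Note that the condition appears exactly once; the four gates differ only in the `verdicts` and `mode` tables, which is where the calibration lives.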

The integration story

Unified policy works only if the gates can actually call into the same engine. That demands a few integration capabilities.

Webhook and API surface. The engine has to expose a stable API that PR checks, CI plugins, admission webhooks, and runtime agents can query. The API needs to be fast — admission decisions block apply calls — and reliable, with redundancy and graceful degradation modes.

Input adapters. Each gate has a different native input shape. PR-time gates produce diffs. Build-time gates produce SBOMs. Admission gates produce Kubernetes specs. Runtime agents produce event streams. Adapters translate these into the engine's input model so the rules do not have to know which gate they are running for.
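Two of these input adapters can be sketched as follows. The translation targets, like the `images` and `packages` keys, are an assumed gate-neutral subject model invented for the example; the input shapes loosely follow a Kubernetes pod spec and a CycloneDX-style SBOM.

```python
def adapt_admission(workload_spec: dict) -> dict:
    """Translate a Kubernetes-native workload spec into the engine's
    gate-neutral subject model (field names are illustrative)."""
    containers = workload_spec.get("spec", {}).get("containers", [])
    return {
        "images": [c["image"] for c in containers],
        "namespace": workload_spec.get("metadata", {}).get("namespace", "default"),
    }

def adapt_build(sbom: dict) -> dict:
    """Translate an SBOM component list into the same subject model,
    so rules never see a gate-specific input shape."""
    return {"packages": [p["name"] for p in sbom.get("components", [])]}
```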

Output adapters. The engine produces a verdict. Each gate has a different way of presenting that verdict to its user — a PR comment, a build log line, an admission rejection, a runtime alert. Output adapters render the verdict into the gate-native format while preserving the underlying decision metadata.
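The output side is the mirror image: one verdict, four surfaces. A minimal sketch, with invented template strings, shows how the rule identifier and remediation text survive every rendering, which is what keeps messages consistent across gates.

```python
def render(rule_id: str, action: str, remediation: str, gate: str) -> str:
    """Render one verdict into the gate-native surface while keeping
    the same rule id and remediation text everywhere."""
    base = f"{rule_id}: {action}. Remediation: {remediation}"
    templates = {
        "pr": f"<!-- safeguard -->\n:no_entry: {base}",          # PR comment
        "build": f"[safeguard] {base}",                           # build log line
        "admission": f"admission webhook denied the request: {base}",
        "runtime": f"ALERT {base}",                               # alert payload
    }
    return templates[gate]
```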

Shared state. Some inputs are expensive to compute. Vulnerability lookups, package metadata, and signature verifications are best done once and cached. Shared state across gates means an evaluation at PR time can be reused at build time without recomputation, reducing both latency and cost.
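A content-addressed cache is enough to capture the reuse described here: because artifacts are identified by digest, a signature verification or vulnerability lookup computed at PR time can be returned verbatim at build time. A minimal sketch, with an invented `SharedCache` class:

```python
from typing import Callable

class SharedCache:
    """Content-addressed cache shared by the gate adapters: a result
    computed once at any gate is keyed by digest and reused later."""

    def __init__(self):
        self._store: dict = {}
        self.misses = 0  # expensive computations actually performed

    def get_or_compute(self, digest: str, compute: Callable):
        if digest not in self._store:
            self.misses += 1
            self._store[digest] = compute()
        return self._store[digest]
```

In a real deployment this would be a networked cache with expiry, since vulnerability data changes even when the artifact does not; the sketch only shows the keying discipline.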

The benefits compound

When policy is unified, several capabilities that were difficult become straightforward.

A new threat — say, a freshly disclosed compromised package — is addressed by adding one rule, which immediately applies at every gate. The rollout phases per gate let the policy author warn first and block later, but there is no risk that one gate gets the rule and another does not.

Audit responses become tractable. "Show every evaluation of policy lic-001 in the last quarter" returns a single result set. "Show every override granted against this rule" returns a single list. "Show how this rule's verdicts evolved" returns a single timeline.

Developer experience improves because the messages are consistent. The same package blocked at PR time produces a structurally identical block message at admission time, with the same identifier, the same remediation, and the same override path. Developers learn one mental model instead of four.

Policy authors get feedback loops. The signal flows that runtime drift detection produces feed back into PR-time and admission-time decisions on the same artifacts. A policy that admits workloads whose runtime behavior repeatedly drifts is a policy whose verdict surface needs revisiting.

What unified policy is not

It is not a single binary policy. Per-gate verdicts and modes give authors the precision they need. A rule that warns at PR time but blocks at admission is a perfectly valid rule under a unified engine.

It is not a single owner. The engine is shared infrastructure; the rules belong to whoever owns the domain — security, platform, legal, individual product teams. Unified policy makes ownership clearer, not vaguer.

It is not a one-time migration. Existing policy implementations move into the unified engine over time, often gate by gate. A common path is to start with PR-time and build-time, then add admission, then add runtime. The benefit grows as more gates join, but a partial unification is still better than full fragmentation.

How Safeguard Helps

Safeguard is the unified policy engine described above, with adapters at every supply chain gate.

At PR time, the Safeguard PR-check action calls the engine with the proposed manifest, the engine evaluates the active rule set, and the result becomes a structured PR comment and merge-blocking status.

At build time, the Safeguard build plugin feeds the engine the assembled SBOM and image, applying the same rule set with build-time verdicts and writing results into the build log and the artifact's attestation chain.

At admission time, the Safeguard Kubernetes webhook feeds the engine the workload spec, the image's SBOM, and the available attestations, applying the same rule set with admission-time verdicts and producing a structured rejection message when the workload fails.

At runtime, the Safeguard runtime collector feeds the engine drift signals and observed events, applying the same rule set with runtime verdicts and routing alerts to the appropriate response path.

The four adapters share the rule definitions, the override workflow, the audit log, and the dashboard. A rule authored once applies four times, with calibrated consequences at each gate, and a developer who hits a block sees the same identifier and the same remediation path regardless of where the block fired. The result is supply chain enforcement that protects more than the sum of its parts, because the parts are the same engine speaking with one voice.
