Use Case · Policy As Code

Author Once. Enforce Everywhere.

A Rego-style DSL, a version-controlled policy repo, simulation mode for new rules, gradual rollout from soft-warn to hard-block, and an audit log of every policy decision. The same policy evaluates in the IDE, in CI, at the admission controller, and at runtime.

4 · Enforcement Planes (IDE · CI · Admission · Runtime)
Rego-Style · Policy DSL
Simulation · Dry-Run Mode
100% · Decisions Audit-Logged

Four Tools, Four Policy Engines, One Set Of Rules.

Each enforcement plane (IDE plugins, CI gates, admission controllers, runtime detection) has historically shipped with its own rule language, so the same security intent gets re-expressed four times, with four sets of bugs.

The result is well-intentioned drift. The CI gate blocks AGPL; the admission controller does not. The IDE warns on hardcoded secrets; CI does not. Risk surfaces in whichever plane forgot to enforce.

Safeguard's policy DSL is authored once and compiled to every plane. Simulation mode lets new rules ride alongside enforcement without breaking the build; gradual rollout takes them from soft-warn to hard-block on a tenant-set schedule.

01

Rule Drift Across Planes

When the same intent is expressed in four engines, the four expressions diverge over time. Enforcement becomes inconsistent and unprovable.

02

No Safe Way To Test A New Rule

Without simulation, new rules ship as either soft-fail or hard-block. Both are wrong: soft-fail is ignored, hard-block breaks production.

03

Decision Logs Are Sparse

Most engines log only the denials. Knowing what was allowed and why — and which rule version applied — requires correlation across multiple stores.

04

Policy Lives Outside Version Control

Rules edited in vendor UIs leave no diff history. Reverting to last quarter's policy after a misadventure is impossible.

What It Does

One Repo, Four Planes Of Enforcement.

Rego-Style DSL

Familiar declarative syntax with first-class primitives for SBOM, vulnerability, licence, IaC, and runtime-event shapes; rules are type-checked at PR time.

Version-Controlled Policy Repo

Policy lives as code in Git; every change is reviewed, every release tagged. Reverting to a previous policy version is one revert PR.

Simulation Mode

Run a new rule across the last 30 days of decisions to see what it would have changed before it goes live; the output is a diff against the current rule.
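Conceptually, simulation is a replay-and-diff over the recorded decision history. A minimal sketch in Python, assuming decisions are stored as (event, outcome) pairs (the shapes here are hypothetical, not Safeguard's actual audit format):

```python
def simulate(new_rule, history):
    """Replay a proposed rule over recorded decisions and diff the outcomes.

    history: list of (event, old_decision) pairs from the decision log.
    Returns only the entries where the proposed rule changes the outcome.
    """
    diff = []
    for event, old_decision in history:
        new_decision = new_rule(event)
        if new_decision != old_decision:
            diff.append({"event": event, "was": old_decision, "would_be": new_decision})
    return diff

# Usage: a stricter rule replayed over two historical allow decisions.
history = [({"licence": "AGPL-3.0"}, "allow"), ({"licence": "MIT"}, "allow")]
stricter = lambda e: "deny" if e["licence"] == "AGPL-3.0" else "allow"
changes = simulate(stricter, history)  # only the AGPL event flips to deny
```

The reviewer sees exactly the entries in `changes`, which is what makes a new rule reviewable before it can block anything.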

Compile To Every Plane

The same policy bundle compiles into IDE checks (via Lino), CI gates, admission controller policies, and runtime detection signatures.
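As a hedged illustration of what "compile to every plane" means, the sketch below renders one rule into four per-plane artefacts; the output shapes are invented for this example, and the real compiler targets Lino lint configs, CI gate definitions, admission policies, and runtime signatures:

```python
def compile_bundle(rule_id: str, expression: str) -> dict:
    """Render one rule into illustrative per-plane artefacts.

    These artefact shapes are made up; the point is that all four
    derive from a single source expression, so they cannot drift.
    """
    return {
        "ide": {"linter": "lino", "rule": rule_id, "severity": "warning"},
        "ci": {"gate": rule_id, "expr": expression, "on_fail": "block"},
        "admission": {"policy": rule_id, "expr": expression},
        "runtime": {"signature": rule_id, "match": expression},
    }
```

Because every plane's artefact is generated from the same `expression`, a change to the rule changes all four outputs in the same release.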

The Pipeline

From Policy PR To Universal Enforcement.

01
Author rule in DSL

Engineer writes the rule in the Rego-style DSL; the type checker validates it against the SBOM / vulnerability / IaC / runtime shapes.

02
Simulate on history

The PR runs the proposed rule against the last 30 days of decisions; the reviewer sees a diff of what would change.

03
Soft-warn rollout

After merge, the rule ships in warn-only mode by default; decisions are logged but not blocked.

04
Gradual escalation

A tenant-set schedule (e.g., warn for 14 days, then block in CI, then block in admission) escalates enforcement automatically.
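One way to picture the escalation is as a lookup from a rule's age to a per-plane mode. A minimal sketch, using a hypothetical schedule matching the example above (the real schedule format is tenant-configured, not this structure):

```python
from datetime import date

# Hypothetical tenant schedule: (days since merge, per-plane modes).
SCHEDULE = [
    (0,  {"ide": "warn", "ci": "warn",  "admission": "warn",  "runtime": "warn"}),
    (14, {"ide": "warn", "ci": "block", "admission": "warn",  "runtime": "warn"}),
    (21, {"ide": "warn", "ci": "block", "admission": "block", "runtime": "warn"}),
]

def enforcement_modes(merged_on: date, today: date) -> dict:
    """Return the per-plane enforcement mode for a rule of a given age."""
    age_days = (today - merged_on).days
    modes = SCHEDULE[0][1]
    for threshold, plane_modes in SCHEDULE:
        if age_days >= threshold:
            modes = plane_modes  # keep the last threshold we have passed
    return modes
```

The rule never jumps straight to hard-block: each plane escalates only once its threshold has elapsed.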

05
Multi-plane compile

Each release of the policy bundle is compiled into IDE / CI / admission / runtime artefacts and signed before distribution.

06
Decision audit emit

Every evaluation across every plane writes a signed entry to the tenant audit log with the policy version and the decision rationale.
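As an illustrative sketch of such an entry, the snippet below builds a decision record carrying the policy version and rationale, signed with an HMAC. The key handling and field names are assumptions for this example; a real deployment would sign with a key held in a KMS, not an inline constant:

```python
import hashlib
import hmac
import json

# Hypothetical per-tenant signing key, inlined here only for illustration.
TENANT_KEY = b"tenant-demo-key"

def audit_entry(plane: str, decision: str, policy_version: str, rationale: str) -> dict:
    """Build a signed audit-log entry for one policy evaluation."""
    body = {
        "plane": plane,
        "decision": decision,
        "policy_version": policy_version,
        "rationale": rationale,
    }
    # Canonical serialisation so the signature is reproducible by a verifier.
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(TENANT_KEY, payload, hashlib.sha256).hexdigest()
    return body
```

Pinning `policy_version` into the signed body is what lets an auditor later prove which rule release produced a given allow or deny.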

Outcomes After Adoption.

Consistent Enforcement

Same rule, every plane
No silent drift between engines
Tenant-tunable per environment

Safe Iteration

Simulation against 30-day history
Soft-warn by default
Gradual escalation on schedule

Provable Decisions

Every evaluation audit-logged
Policy version pinned per decision
Rationale attached for reviewers

Read guardrails-and-enforcement for the gate model, see Lino for the IDE plane, and pair with break-glass for the bypass primitive.

Write The Rule Once. Enforce It Everywhere.

Bring one existing rule, in any current format, and we'll port it to the DSL and run the simulation against your last 30 days of decisions.