Plain-English definitions of every concept Safeguard uses — reachability, SBOM, taint analysis, MCP security, SLSA, VEX, guardrails, and the rest — with the reasoning for why each matters to your security program.
How Safeguard finds vulnerabilities, maps their reach, and decides which ones matter.
Deciding which vulnerabilities can actually be exploited in your code — and ignoring the rest.
Tracing untrusted input from source to sink to find exploitable data flows.
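The source-to-sink idea can be sketched in a few lines. This is a minimal taint-propagation toy, not Safeguard's engine: the function names (`read_request_param`, `escape_sql`, `run_sql`) and the trace format are hypothetical illustrations.

```python
# Hypothetical sources, sanitizers, and sinks for illustration only.
TAINT_SOURCES = {"read_request_param"}   # functions returning untrusted input
SANITIZERS = {"escape_sql"}              # functions that neutralise taint
SINKS = {"run_sql"}                      # dangerous destinations

def find_tainted_flows(trace):
    """trace: ordered (function_called, result_var, arg_var) tuples.
    Returns (source_var, sink_name) pairs where untrusted data reaches
    a sink without passing through a sanitizer first."""
    tainted = set()
    flows = []
    for func, result, arg in trace:
        if func in TAINT_SOURCES:
            tainted.add(result)            # data enters the program tainted
        elif func in SANITIZERS and arg in tainted:
            tainted.discard(arg)           # cleaned before use
        elif func in SINKS and arg in tainted:
            flows.append((arg, func))      # exploitable flow found
    return flows

# A request parameter flows straight into a SQL call: flagged.
trace = [
    ("read_request_param", "user_id", None),
    ("run_sql", None, "user_id"),
]
print(find_tainted_flows(trace))  # [('user_id', 'run_sql')]
```

Insert an `("escape_sql", None, "user_id")` step between the two and the flow disappears — which is exactly the distinction between a theoretical and an exploitable data flow.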
Finding novel vulnerabilities before they are publicly disclosed.
A map of which functions call which — the foundation of reachability and taint analysis.
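Reachability over a call graph reduces to graph traversal. A minimal sketch, assuming a hand-written edge map (a real engine would extract edges from source or bytecode; every function name below is invented):

```python
from collections import deque

def reachable(call_graph, entry):
    """Return every function reachable from `entry` via breadth-first search."""
    seen = {entry}
    queue = deque([entry])
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

# handler -> parse -> vulnerable_decode is a live path;
# legacy_export also calls the vulnerable function but is never invoked.
call_graph = {
    "handler": ["parse", "render"],
    "parse": ["vulnerable_decode"],
    "legacy_export": ["vulnerable_decode"],
}
print("vulnerable_decode" in reachable(call_graph, "handler"))  # True
print("legacy_export" in reachable(call_graph, "handler"))      # False
```

The vulnerable function is reachable, so the CVE in it matters; the dead `legacy_export` path alone would not have made it so.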
The four vulnerability intelligence signals you actually need in order to prioritise remediation work.
What is running where, and who produced it.
A machine-readable inventory of every component in your software.
An SBOM extended to model weights, training data, and AI runtime components.
Signed attestations that prove how a build artifact was produced.
Vendor statements of which CVEs actually affect which products.
One of the two standard SBOM formats — security-optimised and widely adopted.
The other standard SBOM format — ISO-approved and license-focused.
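"Machine-readable" is the operative word in both formats. A minimal CycloneDX-shaped document, assembled by hand purely to show the shape (real SBOMs come from build tooling, and the component listed is an arbitrary example):

```python
import json

# Field names follow the CycloneDX schema (bomFormat, specVersion,
# components, purl); the content is illustrative.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "name": "requests",
            "version": "2.31.0",
            "purl": "pkg:pypi/requests@2.31.0",  # package URL identifier
        }
    ],
}

# Any tool can walk the inventory without parsing lockfiles or images.
names = [c["name"] for c in sbom["components"]]
print(json.dumps(names))  # ["requests"]
```

The same inventory expressed in SPDX would use different field names (`spdxVersion`, `packages`) but carry equivalent information, which is why tooling routinely converts between the two.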
Continuous inventory of every dependency, image, AI model, and MCP server across the estate.
Where risky things get blocked — and how the block is auditable.
Pre-configured rules that stop risky changes before they merge.
The same rule set applied at PR time, build time, admission, and runtime.
The Kubernetes hook that rejects non-compliant workloads before they schedule.
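The decision inside such a hook is small. A sketch of the logic a validating admission webhook might run, rejecting pods whose images come from outside an approved registry; the response fields follow the Kubernetes `AdmissionReview` v1 shape, while the registry name and request are made up:

```python
APPROVED_REGISTRY = "registry.internal.example/"  # illustrative

def review(admission_request):
    """Build an AdmissionReview response allowing or rejecting a pod."""
    pod = admission_request["object"]
    images = [c["image"] for c in pod["spec"]["containers"]]
    bad = [img for img in images if not img.startswith(APPROVED_REGISTRY)]
    response = {
        "uid": admission_request["uid"],  # must echo the request uid
        "allowed": not bad,
    }
    if bad:
        # The message surfaces in kubectl output, making the block auditable.
        response["status"] = {"message": f"unapproved images: {bad}"}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }

req = {
    "uid": "abc-123",
    "object": {"spec": {"containers": [{"image": "docker.io/evil:latest"}]}},
}
print(review(req)["response"]["allowed"])  # False
```

Because the check runs before scheduling, a non-compliant workload never starts; the rejection message is what makes the block explainable rather than mysterious.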
An audited bypass for legitimately urgent situations that would otherwise be blocked.
Catching configuration that has quietly moved away from its declared state.
An auto-generated pull request that remediates a specific vulnerability.
How trust gets established and verified across the build pipeline.
An open-source stack for signing, verifying, and transparency-logging artifacts.
The container-signing tool at the core of modern supply chain trust.
An attack that exploits name collisions between internal and public package registries.
Publishing malicious packages under names close to popular legitimate ones.
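One simple heuristic against typosquats is string similarity: a name very close to, but not exactly, a popular package deserves a second look. A sketch using the standard library's `difflib`; the popular-package list and threshold are illustrative, and real detection combines many more signals:

```python
import difflib

POPULAR = {"requests", "numpy", "pandas"}  # illustrative allowlist

def looks_typosquatted(name, threshold=0.85):
    """Flag names suspiciously similar to a popular package."""
    if name in POPULAR:
        return False  # exact match: the real package
    return any(
        difflib.SequenceMatcher(None, name, known).ratio() >= threshold
        for known in POPULAR
    )

print(looks_typosquatted("requests"))  # False
print(looks_typosquatted("reqeusts"))  # True  (transposed letters)
print(looks_typosquatted("leftpad"))   # False
```

Dependency confusion needs a different control — pinning resolution to the internal registry so a public package can never shadow an internal name — but the detection mindset is the same: treat name proximity as a signal, not a coincidence.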
Base images curated so none of the packaged components carry known CVEs.
How Safeguard secures the AI layer — MCP, LLMs, and agent tool use.
Safeguard's reasoning engine — turns engine output into human-reviewable findings and fix PRs.
A server that exposes tools to AI agents via the Model Context Protocol.
Governance, signing, scoped credentials, and audit logs for MCP servers.
Giving each LLM exactly the tools it needs, with bounded privilege and audit.
Restricting what each MCP server or AI tool can actually do — even if the model asks for more.
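The enforcement pattern behind scoped tools is a broker that sits between the model and the tools: every call is checked against an allowlist and audited, regardless of what the model requested. A minimal sketch; the agent names, tool names, and audit format are all invented for illustration:

```python
AUDIT_LOG = []

class ToolBroker:
    """Dispatch tool calls for an agent under a fixed allowlist."""

    def __init__(self, allowed_tools):
        self.allowed = set(allowed_tools)
        self.tools = {}

    def register(self, name, fn):
        self.tools[name] = fn

    def call(self, agent, name, *args):
        permitted = name in self.allowed
        # Every attempt is logged, including the denied ones.
        AUDIT_LOG.append({"agent": agent, "tool": name, "allowed": permitted})
        if not permitted:
            raise PermissionError(f"{agent} may not call {name}")
        return self.tools[name](*args)

broker = ToolBroker(allowed_tools={"search_tickets"})
broker.register("search_tickets", lambda q: [f"ticket matching {q}"])
broker.register("delete_repo", lambda name: "deleted")  # exists, never permitted

print(broker.call("triage-agent", "search_tickets", "CVE-2024-1234"))
try:
    broker.call("triage-agent", "delete_repo", "prod")
except PermissionError as e:
    print("blocked:", e)
```

The key property: privilege lives in the broker's configuration, not in the model's behaviour, so a prompt-injected or misbehaving model still cannot exceed its scope.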
The attack class where untrusted text in a model's context becomes a command it follows.
A regression test suite for AI workflow quality, gated into release.
Matching the right model family to the right security task for cost and quality.
Each concept connects to the use cases and products where Safeguard applies it.