A single-page reference for the moving parts of the platform: the control plane, scanner fusion, model serving, the policy engine, the audit trail, and the multi-tenancy boundaries that hold the whole thing together. This is distinct from the model architecture and the surface map; this page is platform engineering.
Ingest at the bottom, surfaces at the top, audit running alongside the whole chain. Each layer is independently versioned and independently observable.
SCM webhooks (GitHub, GitLab, Bitbucket, Azure DevOps), container registry events, manifest uploads from the CLI, and scheduled scans on a per-project cadence. All ingest events land on a single durable queue with deterministic event IDs so retries and replays are safe.
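The deterministic-event-ID idea above can be sketched as follows. This is a minimal illustration, not the platform's actual scheme: the `event_id` derivation, the `IngestQueue` class, and the `evt_` prefix are all hypothetical. The point it demonstrates is that hashing the delivery metadata plus a canonicalised payload makes a webhook redelivery map to the same ID, so deduplication makes retries and replays safe.

```python
import hashlib
import json

def event_id(source: str, delivery_id: str, payload: dict) -> str:
    """Derive a deterministic event ID: the same delivery always
    hashes to the same ID, regardless of key order in the payload."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(f"{source}:{delivery_id}:{canonical}".encode()).hexdigest()
    return f"evt_{digest[:16]}"

class IngestQueue:
    """Durable-queue stand-in that dedupes on event ID: a retried
    delivery is dropped rather than enqueued twice."""
    def __init__(self):
        self._seen: set[str] = set()
        self.events: list[dict] = []

    def enqueue(self, source: str, delivery_id: str, payload: dict) -> bool:
        eid = event_id(source, delivery_id, payload)
        if eid in self._seen:
            return False  # replayed delivery: already on the queue
        self._seen.add(eid)
        self.events.append({"id": eid, "payload": payload})
        return True
```

Because the ID is a pure function of the delivery, any consumer can recompute it independently, which is what makes replay idempotent.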
Eleven scanners run in parallel against the ingested artefact. A fusion layer collapses raw findings by (component, function, sink) and produces a single ranked candidate set with cross-scanner evidence bound to every item. The fusion result is what the rest of the platform reasons over.
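The collapse-and-rank step can be sketched like this. It is a simplified model under assumed field names (`component`, `function`, `sink`, `scanner`, `detail` are illustrative, and ranking purely by corroborating-scanner count is a stand-in for whatever the real fusion layer scores on): findings sharing a (component, function, sink) key merge into one candidate carrying all cross-scanner evidence.

```python
def fuse(findings: list[dict]) -> list[dict]:
    """Collapse raw scanner findings by (component, function, sink)
    and rank candidates by how many independent scanners corroborate."""
    buckets: dict[tuple, dict] = {}
    for f in findings:
        key = (f["component"], f["function"], f["sink"])
        entry = buckets.setdefault(key, {"key": key, "evidence": [], "scanners": set()})
        entry["evidence"].append(f["detail"])   # bind evidence to the candidate
        entry["scanners"].add(f["scanner"])     # track independent corroboration
    return sorted(buckets.values(), key=lambda e: len(e["scanners"]), reverse=True)
```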
Seven feeds — NVD, OSV, EPSS, KEV, GHSA, VirusTotal, VulnCheck — cached in a Postgres warm tier with Redis as the hot path. Cache invalidation tracks upstream sigstore signatures, so a stale CVE record cannot survive past the upstream amendment.
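The hot/warm tiering with signature-tracked invalidation can be sketched as a cache-aside lookup. Plain dicts stand in for Redis and Postgres here, and the `FeedCache` class and its record shape are illustrative; the behaviour shown is that a record cached under a superseded upstream signature is never served, only refetched.

```python
class FeedCache:
    """Cache-aside lookup: a Redis-style hot dict over a Postgres-style
    warm dict. Each record carries the upstream signature it was cached
    under; a mismatch forces a refetch instead of serving stale data."""
    def __init__(self, fetch_upstream):
        self.hot: dict[str, dict] = {}
        self.warm: dict[str, dict] = {}
        self.fetch_upstream = fetch_upstream

    def get(self, cve_id: str, upstream_sig: str) -> dict:
        for tier in (self.hot, self.warm):
            rec = tier.get(cve_id)
            if rec and rec["sig"] == upstream_sig:
                self.hot[cve_id] = rec  # promote to the hot path
                return rec
        # Missing or stale in both tiers: refetch and repopulate.
        rec = {"sig": upstream_sig, "data": self.fetch_upstream(cve_id)}
        self.hot[cve_id] = rec
        self.warm[cve_id] = rec
        return rec
```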
Lino runs on-device for the inline path. Eagle runs batched in the cloud for repo-wide ranking. Griffin (with its auto-router across Lite, S, M, L, Zero) runs the deep pass. The reasoning trace is a first-class object: every verdict ships with the chain it survived.
Rego-style policies authored once and enforced in CI, the IDE extension, the desktop app, the admission controller, and the MCP server. The same policy bytecode is what gates the build and what surfaces the violation in the editor — no drift between enforcement surfaces.
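The author-once, enforce-everywhere property can be illustrated with a toy evaluation function. This is not Rego and not the product's policy bytecode; the `evaluate` function, policy shape, and severity ladder are all invented for illustration. What it shows is the invariant the paragraph claims: because every surface calls the identical compiled artifact, CI and the editor cannot disagree about a violation.

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def evaluate(policy: dict, finding: dict) -> dict:
    """Evaluate one compiled policy against a finding. The same function
    gates the build and surfaces the violation in the editor, so there
    is no drift between enforcement surfaces."""
    deny = SEVERITY_RANK[finding["severity"]] >= SEVERITY_RANK[policy["block_at"]]
    return {"allow": not deny, "rule": policy["name"] if deny else None}
```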
Every action — ingest, scan, reasoning call, policy decision, fix, dismissal — emits a signed event in CycloneDX plus JSON. The chain is tamper-evident via sigstore: a regulator can verify a finding's full lineage without trusting Safeguard's word for it.
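The tamper-evidence claim can be sketched as a hash chain. This is a minimal model, not the real signing path: real events are sigstore-signed and CycloneDX-shaped, whereas this `AuditChain` class just has each event commit to the previous event's hash, so altering any event breaks verification for everything downstream of it.

```python
import hashlib
import json

class AuditChain:
    """Append-only log where each event binds to its predecessor's hash.
    Any tampering with an earlier event invalidates the rest of the chain."""
    GENESIS = "0" * 64

    def __init__(self):
        self.events: list[dict] = []
        self._head = self.GENESIS

    def append(self, action: str, payload: dict) -> dict:
        body = json.dumps({"action": action, "payload": payload, "prev": self._head},
                          sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        event = {"action": action, "payload": payload, "prev": self._head, "hash": digest}
        self.events.append(event)
        self._head = digest
        return event

    def verify(self) -> bool:
        """Anyone holding the chain can recompute it without trusting the emitter."""
        prev = self.GENESIS
        for e in self.events:
            body = json.dumps({"action": e["action"], "payload": e["payload"], "prev": prev},
                              sort_keys=True)
            if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Verification needs no secret, which mirrors the regulator-can-verify-independently property the paragraph describes.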
REST API, IDE extensions, desktop app, CLI, MCP server, web portal, and marketplace. Every surface reads from the same audit trail and writes through the same policy engine. Seven ways in, one source of truth.
Tenant separation is enforced at three layers: the schema, the inference key, and the audit stream. A bug in one layer cannot leak data across tenants on its own; the other two would have to fail as well.
Every tenant gets its own schema with a row-level-security envelope. The application layer cannot reach across schemas; the database role used by the service has no permissions outside its bound tenant. Cross-tenant joins are physically impossible, not aspirationally forbidden.
Each tenant has its own inference key with its own KMS-managed envelope. Prompt logs, KV caches, and reasoning traces are encrypted at rest with that key — a key revoke renders the tenant's history unreadable, including to Safeguard operators.
Audit events flow into a per-tenant stream with its own retention, signing key, and export endpoint. The stream is append-only and sigstore-signed so a regulator export carries provable lineage from event to verdict.
Customer code never enters training. There is no shared per-customer fine-tune that other tenants inherit. Distillation runs on Safeguard's public-only corpus; tenant-specific behaviour comes from policy, not from weights.
Each model in the lineup has a serving shape that matches its budget. The auto-router puts the work on the right hardware without the surface having to think about it.
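The routing decision can be sketched as a small dispatch function. The tier names, request fields, and thresholds below are illustrative, not the product's actual routing table; the shape it shows is the one described above: the surface submits a request with a budget and scope, and the router picks the hardware.

```python
def route(request: dict) -> str:
    """Pick a serving target from the request's surface, scope, and
    latency budget, so the surface never has to think about hardware.
    All names and thresholds here are assumptions for illustration."""
    if request["surface"] == "inline":
        return "lino-on-device"        # inline path stays on the laptop
    if request.get("scope") == "repo" and request.get("budget_ms", 0) <= 1000:
        return "eagle-batched"         # shared GPU pool, batched INT8
    size = request.get("deep_size", "m")
    return f"griffin-{size}"           # deep pass on a dedicated shape
```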
Lino's 1B weights ship inside the IDE extension, the CLI binary, and the desktop app. Inference runs on the developer's laptop with no network egress. p95 inline latency is under 100 ms on M-series Apple Silicon and recent x86 GPU laptops.
Eagle's 13B dense head runs on a shared GPU pool with batched inference and INT8 quantisation. A 5k-package monorepo sweep returns ranked candidates at p95 ~420 ms per package. Per-tenant rate limits are enforced at the inference router, not the GPU.
Griffin Lite, S, M, and L run on dedicated GPU shapes per deployment tier. The shared cloud tier runs multi-tenant inference with prompt-level isolation; dedicated and VPC tiers get single-tenant GPU pools with SHA-pinned weight attestation.
The 671B-MoE Zero head runs only on sovereign or dedicated multi-GPU clusters — documented sizings start at 11x H100 and scale to multi-AZ topologies. It is not exposed on the shared cloud tier and not auto-routed without an explicit policy gate.
Every event a developer triggers walks the same path. The diagram below is the full chain — no hidden services, no parallel universes.
Audit signing fans out alongside every node — egress is not the only place lineage is captured. Every node writes to the audit trail before passing the event on.
Availability targets, latency targets, scan-to-verdict windows, and auto-fix turnaround — the numbers the production tier is held to. Contractual SLAs live in the order form.
Tenant isolation, signed weights, signed audit logs, and a break-glass workflow. Each one is independently testable; together they are how the platform earns the regulator export at the end.
Row-level security at the schema layer, per-tenant inference keys at the KMS layer, per-tenant audit streams at the egress layer. A bug in one layer does not cascade — three independent boundaries have to fail to leak data across tenants.
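The three-independent-boundaries property can be sketched as an AND over separate guard functions. The record shape and guard names are invented for illustration; what the sketch shows is the failure semantics claimed above: a single buggy check that wrongly approves still leaves two independent vetoes standing.

```python
def schema_boundary(record: dict, requester: str) -> bool:
    """Row-level security: the row's tenant must match the requester."""
    return record["tenant"] == requester

def key_boundary(record: dict, requester: str) -> bool:
    """Inference-key scope: the record must be under the requester's key."""
    return record["key_tenant"] == requester

def stream_boundary(record: dict, requester: str) -> bool:
    """Audit-stream scope: the record must belong to the requester's stream."""
    return record["stream_tenant"] == requester

BOUNDARIES = (schema_boundary, key_boundary, stream_boundary)

def release(record: dict, requester: str) -> bool:
    """Data crosses the egress only if every boundary independently agrees."""
    return all(b(record, requester) for b in BOUNDARIES)
```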
Every model weight file ships as a sigstore bundle with provenance. The IDE extension refuses to load an unsigned Lino weight; the inference server refuses to serve an unsigned Eagle or Griffin checkpoint. Weight rotation is auditable end-to-end.
Each audit event is signed with a tenant-scoped sigstore key. Verification is open — a regulator with the public key can verify any finding's provenance without Safeguard's involvement. Tampering breaks the chain visibly.
Operator access to tenant data requires a break-glass workflow: two-person approval, time-boxed access window, full keystroke audit, and customer notification. There is no path to tenant data that does not leave a trail.
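The break-glass gate can be sketched as a small state machine. The class, window length, and audit strings are illustrative (and real keystroke audit and customer notification are out of scope here); the invariants shown are the ones above: no access without two distinct approvers, access only inside the time-boxed window, and an audit line for every attempt, granted or not.

```python
import time

class BreakGlass:
    """Minimal break-glass gate: two-person approval opens a time-boxed
    window, and every access attempt leaves an audit entry."""
    def __init__(self, window_s: int = 3600, clock=time.monotonic):
        self.window_s = window_s
        self.clock = clock              # injectable for testing
        self.approvals: set[str] = set()
        self.opened_at: float | None = None
        self.audit: list[str] = []

    def approve(self, operator: str) -> None:
        self.approvals.add(operator)
        self.audit.append(f"approve:{operator}")
        if len(self.approvals) >= 2 and self.opened_at is None:
            self.opened_at = self.clock()   # second approver opens the window
            self.audit.append("window-open")

    def access(self, operator: str) -> bool:
        granted = (self.opened_at is not None
                   and self.clock() - self.opened_at <= self.window_s)
        self.audit.append(f"access:{operator}:{'granted' if granted else 'denied'}")
        return granted
```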
An architecture page that only shows what is built is incomplete. The list below is what is deliberately absent.
The full diagram set, the weight-signing manifest, and the audit-log schema are reviewable under NDA. Book a session and we will walk it with your platform and security engineering teams.