Company · Mission & Vision

Why we built this. And what it looks like ten years from now.

The honest version of the mission, the ten-year arc the company is committing to, and the criteria we have written down for what success and failure look like. This page is meant to be read by the people who have to decide whether to bet their codebase, their regulator's evidence trail, or their career on Safeguard.

The mission

The actual mission statement.

The mission is to make software supply chain security a property of the toolchain, not a quarterly cleanup project. Today, the average engineering organisation treats the supply chain like an audit event — something that gets attention when a new CVE makes the news, and disappears in between. That cadence is incompatible with the rate at which dependencies move, models ship, and attackers iterate.

So we ship reasoning that can be audited. Every finding the platform raises arrives as a structured trace the reviewer can read end to end — hypothesis, cited path through the codebase, disproof attempt, proposed patch. The reviewer's question stops being "is this true?" and starts being "do I agree with the disproof?" That is a much faster question, and it is the question the toolchain should ask.
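The four-part trace described above can be pictured as a data shape. This is a minimal sketch only — the field names and the `is_complete` helper are illustrative, not Safeguard's published schema:

```python
from dataclasses import dataclass

@dataclass
class StructuredTrace:
    """Hypothetical shape of an auditable finding. All field names are illustrative."""
    hypothesis: str        # what the model suspects is vulnerable, and why
    cited_path: list[str]  # ordered "file:line" hops through the codebase
    disproof: str          # the model's own attempt to refute its hypothesis
    proposed_patch: str    # a diff the reviewer can accept or reject

    def is_complete(self) -> bool:
        # A trace missing any section cannot be audited end to end.
        return all([self.hypothesis, self.cited_path,
                    self.disproof, self.proposed_patch])
```

The point of the shape is the reviewer's workflow: instead of re-deriving the finding from scratch, the reviewer reads the disproof attempt and either agrees or objects.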

And we bring that posture to sovereign and regulated environments the same way the frontier teams take it for granted on shared cloud. The model that runs in the customer's own datacentre is not a smaller, weaker version of the model that runs in shared cloud — it is the same model, with the same evals, the same trace contract, and the same signed weights.

Ten-year arc

Year 1 to Year 10.

Not a roadmap of features. A description of what the world should look like when Safeguard is where we want it to be.

Year 1

Griffin lineup proven across enterprise customers

Reachability combined with the structured-trace LLM is the obvious answer for new buyers. Lino, Eagle, and Griffin all in production with named accounts. The eval suite, the trace contract, and the deployment-shape ladder are stable enough for new sales conversations to anchor on them.

Year 3

Lino on every developer machine that ships production code

MCP-server governance is the obvious answer for AI in the SDLC. The IDE surface is contractually on-device for the customers who need it. Eagle ranks the queue, Griffin reasons over the survivors, and the auto-fix loop runs for the maintainers who have opted in.

Year 5

Sovereign deployments in every region with mature accreditation

The AI-BOM is the industry standard. The model card, the corpus card, and the eval card travel with every model release the way an SBOM travels with a build. Sovereign tiers are not an upsell — the same model lineup runs in the customer's own datacentre with the same guarantees.
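One way to picture the three cards travelling with a release, by analogy with an SBOM. Every name and value below is invented for illustration — this is not a published AI-BOM format:

```python
# Hypothetical AI-BOM attached to a model release. All fields are illustrative.
ai_bom = {
    "release": {
        "model": "example-model",
        "version": "2.4.0",
        "weights_sha256": "<signed digest>",
    },
    "model_card": {"intended_use": "security analysis",
                   "deployment_shapes": ["shared-cloud", "sovereign"]},
    "corpus_card": {"sources": ["licensed code corpora"], "cutoff": "2025-01"},
    "eval_card": {"suites_run": ["adversarial-regression"], "regressions": 0},
}

def travels_with_release(bom: dict) -> bool:
    # The SBOM analogy: a release missing any of the three cards is incomplete.
    return {"model_card", "corpus_card", "eval_card"} <= bom.keys()
```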

Year 10

Structured reasoning trace is the industry default

The structured reasoning trace contract — HYPOTHESIS / CITED PATH / DISPROOF / PROPOSED PATCH — is the industry default for security AI. Coordinated disclosure pipelines run upstream patches autonomously where the maintainer trusts them to. The cleanup-quarter culture is gone.

What success looks like

Five concrete success criteria.

Toolchain feel, not project feel

Customer engineering teams describe supply-chain security as a property of the toolchain — like the type checker or the test runner — not as a project that lights up once a quarter.

Under 5% of engineer time on review queue overhead

An average engineer spends less than 5% of their week on security-review queue overhead. The rest of the time they are writing or shipping code. Findings come prioritised, with fixes, ready to merge.

10x portfolio scale per security headcount

A security team running on Safeguard handles roughly an order of magnitude more application portfolio per person than the same team without it — measured by repos, services, and signed releases per analyst.

Regulator evidence standard

Regulators use Safeguard's evidence format — the signed structured trace, the model card, the AI-BOM — as the reference standard for what acceptable AI-assisted security evidence looks like.

Researchers cite Safeguard in disclosures

Independent security researchers credit Safeguard in published disclosures when the platform's reasoning trace helped them identify or refine the finding. The platform is contributing to the field, not just consuming it.

What failure looks like

Failures we have to catch ourselves.

If we ship marketing that overpromises the model's capability, we have failed.

If our models hallucinate vulnerabilities that do not exist and waste reviewer hours, we have failed.

If a customer cannot audit our reasoning end to end, we have failed regardless of how the model performed.

If our auto-fix loop breaks more than it fixes, we have failed and the auto-fix needs to come down until we know why.

If sovereign customers feel they got a watered-down product compared to shared-cloud customers, we have failed at the deployment-shape contract.

If any of the above happens, the failure has to be caught by us — not by the customer reading their incident report at 2am.

The non-goals

What we will not do to grow.

01

Becoming a general-purpose coding assistant. The model is a defender, not a productivity surface.

02

Selling customer data, in any form, ever. Customer artefacts are not the product.

03

Gating sovereign features as an upsell. The same security posture is the same security posture.

04

Shipping adversarial AI capabilities to non-defensive buyers. Research stays inside the defensive constitution.

05

Acquiring competitors to suppress them. Win on the model, not on the cap table.

06

Taking investment that would misalign the mission. The shape of the cap table is part of the product.

How we measure

Four leading indicators.

Numbers that move before the lagging ones. Revenue and logo count are downstream of these.

Customer time-to-evidence

From finding to the moment the customer's auditor accepts the evidence trail. Trending down quarter over quarter.

% of findings actually fixed

Not findings reported. Findings that resulted in a merged, signed patch — that is the only number that matters.

% of releases that pass adversarial regression

Every model release runs through the adversarial suite. The fraction that ships without a regression is published quarterly.

% of improvements that reach all tiers in 30 days

A model improvement that ships to shared cloud should reach the sovereign tier inside 30 days. The gap is the metric.
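The last indicator can be computed mechanically. A sketch with invented record shapes — the field names and dates are hypothetical, not a real reporting schema:

```python
from datetime import date

# Hypothetical records: when each improvement shipped to each deployment tier.
improvements = [
    {"id": "imp-1", "shared_cloud": date(2025, 3, 1), "sovereign": date(2025, 3, 20)},
    {"id": "imp-2", "shared_cloud": date(2025, 3, 5), "sovereign": date(2025, 4, 30)},
]

def pct_within_30_days(records: list[dict]) -> float:
    """Share of improvements whose sovereign ship date trails shared cloud by <= 30 days."""
    on_time = sum(1 for r in records
                  if (r["sovereign"] - r["shared_cloud"]).days <= 30)
    return 100.0 * on_time / len(records)
```

In this toy data, `imp-1` propagates in 19 days and `imp-2` in 56, so the indicator reads 50%.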

If this is the mission you want to back.

Customers, partners, and prospective team members — the mission is the contract. Talk to us if it lines up with where you are pointed.