Values · Three Constitutions

What we will and won't do.

Three short documents that govern how Safeguard builds, ships, and behaves. The Constitution of Security is how we think about defending software. The Constitution of AI is how we think about building and deploying models. The Constitution of Human Values is how we think about the people on every side of that work.

Constitution of Security

Seven principles that govern how Safeguard defends software supply chains. We hold ourselves and our partners to these. They are not aspirational — they are the bar a feature must clear before it ships.

01

Default-deny over default-allow

Anything that touches customer code, model weights, or sensitive output is denied unless explicitly permitted. We do not ship security postures that depend on remembering to lock the door.
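
To make "default-deny" concrete, here is a minimal sketch of a permission check with no fallback branch. The Policy shape, principal names, and resource strings are illustrative, not Safeguard's actual policy engine.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Default-deny: a request is allowed only if an explicit grant matches."""
    grants: set[tuple[str, str]] = field(default_factory=set)  # (principal, resource)

    def allow(self, principal: str, resource: str) -> None:
        self.grants.add((principal, resource))

    def is_permitted(self, principal: str, resource: str) -> bool:
        # No fallback branch: anything not explicitly granted is denied.
        return (principal, resource) in self.grants

policy = Policy()
policy.allow("ci-runner", "artifact:sign")

assert policy.is_permitted("ci-runner", "artifact:sign")
assert not policy.is_permitted("ci-runner", "weights:read")        # denied by default
assert not policy.is_permitted("intern-laptop", "artifact:sign")   # denied by default
```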

02

No security by obscurity

Our model architectures, training methodologies, and protocol contracts are published or available under NDA. A control whose only protection is that nobody knows it exists is not a control.

03

Evidence over assertion

Every claim the platform makes — a finding, a verdict, a policy decision, an attestation — is accompanied by a cryptographically verifiable trace. "Trust us" is not a security guarantee.
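
As a sketch of what a verifiable claim can look like, the snippet below signs a finding with Ed25519 so a downstream party can check it independently. The finding fields are hypothetical; the real attestation format is not specified here.

```python
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical finding; the real attestation schema is not specified here.
finding = {"id": "F-1042", "verdict": "vulnerable",
           "evidence": "taint path: request param -> shell exec"}
payload = json.dumps(finding, sort_keys=True).encode()

signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(payload)

# Anyone holding the public key can check the claim without trusting
# the channel it arrived over; verify() raises InvalidSignature on tamper.
signing_key.public_key().verify(signature, payload)
```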

04

Defense in depth, even against ourselves

We assume Safeguard itself can be compromised. Tenant isolation, signed weights, per-tenant audit logs, and the principle of least privilege apply inside Safeguard the same way they apply at our customers.

05

Coordinated disclosure as a first-class workflow

When the platform discovers a candidate vulnerability in third-party code, the default is coordinated disclosure with the upstream maintainer — not public posting, not silent exploitation.

06

Security is a property of the whole supply chain

A clean image with a vulnerable application is not secure. A great policy with no enforcement is not secure. Safeguard ships across the supply chain because half the answer is none of the answer.

07

We do not weaponise our research

Capabilities developed for defensive supply chain security are not repackaged into offensive tooling. They are made available to red teams only under licensing terms designed to constrain abuse, and they are never sold to actors operating under sanctions regimes we comply with.

Constitution of AI

Eight principles that govern how Safeguard trains, evaluates, and deploys models. The Griffin, Eagle, and Lino model families operate under this document — every release passes through it before it reaches a customer surface.

01

Train on what we are willing to publish

Our corpus is curated security material — CVE write-ups, exploit research, advisory text, taint graphs, malware behavioural traces. We do not train on customer code, customer prompts, or general web crawl. What goes in is described in the security-corpus page; we stand by it publicly.
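
A minimal sketch of the admission rule this implies, with illustrative source labels; the published corpus description is the authoritative account.

```python
# Illustrative source labels; the security-corpus page is authoritative.
ALLOWED_SOURCES = {"cve_writeup", "exploit_research", "advisory_text",
                   "taint_graph", "malware_behaviour_trace"}

def admit_to_corpus(doc: dict) -> bool:
    # Default-deny applied to training data: a document enters the
    # corpus only if its source is on the published allowlist, so
    # customer code, customer prompts, and web crawl never qualify.
    return doc.get("source") in ALLOWED_SOURCES
```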

02

Capability does not justify deployment

A model passing the eval gate is necessary but not sufficient to ship. If the adversarial pass regresses, the trace-quality audit fails, or the refusal-rate drifts, the release is held — regardless of capability gain on the headline benchmark.
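
A sketch of that gate as logic, with invented thresholds and field names. Capability gain never appears in the shipping condition, so it cannot buy back a failed safety check.

```python
from dataclasses import dataclass

@dataclass
class EvalReport:
    benchmark_gain: float    # headline capability delta vs. previous release
    adversarial_pass: bool   # adversarial suite held or improved
    trace_audit_pass: bool   # trace-quality audit passed
    refusal_drift: float     # absolute drift in refusal rate

MAX_REFUSAL_DRIFT = 0.02     # invented threshold, for illustration only

def release_allowed(report: EvalReport) -> bool:
    # benchmark_gain is deliberately absent from this conjunction:
    # capability cannot buy back a failed safety check.
    return (report.adversarial_pass
            and report.trace_audit_pass
            and report.refusal_drift <= MAX_REFUSAL_DRIFT)
```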

03

Structured reasoning over opaque generation

Every Griffin response is emitted as a structured trace: HYPOTHESIS / CITED PATH / DISPROOF / PROPOSED PATCH. The trace is the audit story. A model that produces a correct answer with no traceable reasoning is not, for security purposes, correct.
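
Sketched as a record, with assumed field types, the four-part trace might look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GriffinTrace:
    hypothesis: str        # the claimed vulnerability or behaviour
    cited_path: list[str]  # concrete code locations the claim rests on
    disproof: str          # the refutation attempt and why it failed
    proposed_patch: str    # a diff the reviewer can evaluate directly

    def is_auditable(self) -> bool:
        # A correct answer with no cited path is not, for security
        # purposes, correct.
        return bool(self.cited_path) and bool(self.disproof)
```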

04

Adversarial disproof is non-negotiable

Every candidate finding is run through a second decoder head that tries to refute it. Only candidates that survive disproof reach the human queue. We accept slower inference in exchange for findings the reviewer can actually act on.
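
The filtering step can be pictured as below; refute stands in for the second decoder head, and both names are illustrative.

```python
from typing import Callable, Iterable

def surviving_findings(candidates: Iterable[dict],
                       refute: Callable[[dict], bool]) -> list[dict]:
    # `refute` returns True when the disproof pass knocks a candidate
    # down; only findings it fails to refute reach the human queue.
    return [c for c in candidates if not refute(c)]
```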

05

Customer code is not training data

Customer source code, scan outputs, prompts, and findings never enter the training loop. Anonymised, aggregated telemetry on model behaviour is used to improve the model — individual customer artefacts are not.

06

The model serves the engineer, not the other way around

We do not measure success by tokens generated or sessions logged. We measure by time-to-fix, false-positive rate, and reviewer-hours saved. If an interaction wastes the developer's time, the model failed even if the response was eloquent.

07

Refusal is a feature, not a bug

There are categories of request — offensive tooling, weapon design, mass-targeting attack patterns — where the correct answer is no. We tune for crisp refusal in those cases, and against vague refusal in legitimate security research where the analyst needs the model to engage.

08

Provenance over performance

Model weights are signed. Datasets are versioned. Training runs are reproducible from the recorded recipe. If a downstream party needs to verify what was in the model when it produced an output, they can.
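
As a sketch of the verification this enables: a downstream party holding the publisher's key checks the weights on disk against a signed manifest. The manifest fields are hypothetical; a real deployment might use an in-toto or Sigstore attestation carrying the same information.

```python
import hashlib
import json
from pathlib import Path

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_weights(weights_path: Path, manifest: dict,
                   signature: bytes, publisher_key: Ed25519PublicKey) -> None:
    # Check the on-disk weights against the digest recorded in the
    # signed manifest, then check the publisher's signature over the
    # manifest itself. Raises on any mismatch.
    digest = hashlib.sha256(weights_path.read_bytes()).hexdigest()
    if digest != manifest["sha256"]:
        raise ValueError("weights do not match the signed manifest")
    payload = json.dumps(manifest, sort_keys=True).encode()
    publisher_key.verify(signature, payload)  # InvalidSignature on tamper
```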

Constitution of Human Values

Eight principles about the people on every side of this work — engineers, security teams, customers, partners, and the team building Safeguard itself. Software gets used by humans; we never forget that.

01

Developers are colleagues, not metrics

The engineer reading a Safeguard finding is a peer who has limited time and real pressure. The platform speaks to them like one — citing evidence, proposing a fix, and explaining the reasoning. Not nagging, not lecturing, not gamifying.

02

Security people are not the enemy of velocity

The security team and the engineering team are on the same side. Tools that pit one against the other waste the most expensive people in the building. Safeguard is built to remove the friction, not to relocate it.

03

No dark patterns

Defaults that quietly extend trial periods, opt-outs buried five clicks deep, urgency timers that lie — none of that ships. The platform respects the time and attention of the person using it.

04

Privacy is a posture, not a checkbox

On-device inference where it makes sense. Air-gapped deployments where they're needed. No telemetry by default; opt-in anonymised telemetry only. The defaults assume the most privacy-conscious customer is the standard customer.
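
One way to read "the most privacy-conscious customer is the standard customer": the zero-argument configuration is the most private posture, and every relaxation is an explicit decision. Field names are illustrative, not Safeguard's actual config surface.

```python
from dataclasses import dataclass

@dataclass
class DeploymentConfig:
    # The defaults below are the most private posture; every
    # relaxation is an explicit, visible decision by the customer.
    telemetry_enabled: bool = False      # opt-in only
    telemetry_anonymised: bool = True    # and anonymised even when enabled
    inference_location: str = "on-device"
    air_gapped: bool = False             # enabled where the customer needs it
```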

05

Honest about what we do not do

Safeguard is not a general-purpose code assistant, not a chat application, not an image generator. We ship a single-purpose platform for software supply chain security and we say so. Mission creep is how good products become bad ones.

06

We tell partners and customers the truth

Where the competitor is better, we say so. Where our model misses a class of bug, we publish the eval. Where a feature ships with known limitations, we document them. The long game is built on accurate expectations.

07

People over process

Process exists to serve people. When the process produces outcomes nobody on either side wants — bad commits forced through, important findings ignored — the process is wrong and we change it. We do not hide behind it.

08

We will reverse decisions we got wrong

When we mis-design a feature, mis-prioritise a roadmap, or mis-handle a customer interaction, we say so and change course. The cost of admitting a mistake is always lower than the cost of compounding one.

If this is the kind of company you want to work with.

Customers, partners, and prospective team members — these documents are the contract. If we ever fall short of them, we want to hear about it.