Legal · Usage Policy

What you may and may not do with Safeguard.

A short, plainly written policy that describes the uses we built the platform for, the uses we do not accept, and what happens when the line is crossed. Written so a human can read it once and remember it.

Last updated: May 2026

Acceptable use

What the platform is for.

01

Use the platform for the security work it was built for

Scanning your own source, your own containers, your own dependencies, your own AI models, and the supply chain that produces all of the above. Generating SBOMs, attestations, and remediation patches against systems you operate or have explicit permission to assess.

02

Use AI outputs as evidence-backed inputs to human decisions

Every Griffin response is emitted with a cited trace. Treat findings, patches, and policy verdicts as evidence to review and act on — not as automated authority. The trace is the audit story.

03

Share findings responsibly with the relevant maintainers

When the platform discovers a candidate vulnerability in third-party code you depend on, the expectation is coordinated disclosure with the upstream maintainer first. We provide tooling to do this; we ask that you use it.

04

Respect tenant boundaries and access controls

Do not attempt to access data, scan outputs, or model traces belonging to other tenants. Every account is sandboxed; honour the boundary. Report any cross-tenant exposure you observe through the responsible disclosure channel.

05

Keep credentials and API keys yours

Do not share credentials across organisations. Do not check API keys into source. Rotate keys on a regular cadence, and revoke any that may have been exposed. The platform supports short-lived tokens and per-environment scoping; use them.

Prohibited use

What the platform is not for.

01

Weaponising model outputs into offensive tooling

Safeguard is a defensive platform. Repackaging model outputs, finding traces, or remediation logic into tools whose primary purpose is to compromise systems you do not own or are not authorised to test is not permitted and will result in account termination.

02

Mass-targeting attacks against third-party infrastructure

Using the platform to enumerate, fingerprint, or scan at scale systems you do not own or have explicit written authorisation to assess. Bulk reconnaissance against the public internet is out of scope and will be blocked.

03

Attempting to extract model weights, training data, or internal state

Prompt-engineering attacks aimed at exfiltrating Griffin, Eagle, or Lino weights, the training corpus, internal system prompts, or other tenants' data are prohibited. Suspected exfiltration attempts are logged and reviewed.

04

Unauthorised security testing of others

Running scans, fuzzing campaigns, or exploitation flows against assets you do not own and do not have a written engagement for. Bug-bounty programmes provide written scope; honour that scope. "Public-facing" is not the same as authorised.

05

Circumventing safety controls or refusal behaviours

Attempting to disable, bypass, or coax the model into producing content classes the platform refuses — offensive tooling, weapon design, mass-harm patterns. Tampering with guardrails, attestation logic, or trace emission is prohibited.

Enforcement

Violations are reviewed by the trust and safety team. Outcomes range from a private notice and remediation request, to feature restrictions, to account suspension or termination. Where the conduct involves criminal activity, we cooperate with the relevant authorities. We do not name violators publicly without cause.

Reporting violations

See something? Tell us.

If you observe behaviour that violates this policy — by another customer, a partner, or by the platform itself — write to compliance@safeguard.sh. Reports are reviewed within one business day. Reporters who act in good faith are protected from retaliation.

Questions about a specific use case? Ask before you build.

If you are unsure whether a planned use falls inside the policy, ask first. A short call is always better than a wrong assumption.