
Singapore AI Verify

Singapore's AI governance testing framework — voluntary toolkit and attestation for trustworthy AI.

Regulator
Infocomm Media Development Authority (IMDA) + AI Verify Foundation
Jurisdiction
Singapore — voluntary global participation
Status
Active — Model AI Governance Framework for Generative AI released May 2024.
In force since
Voluntary from launch; the AI Verify toolkit was piloted in 2022 and open-sourced under the AI Verify Foundation in 2023.
Who it applies to

Voluntary for any organisation that deploys AI; many multinationals adopt it as their AI governance baseline.

Audit / certification status

Continuous evidence pipeline available; audit support included for all customers.

What it requires

What AI Verify actually requires.

These are the commitments a participating organisation attests to — the things an assessor will ask about.

01. Eleven AI ethics principles mapped to testable processes.

02. Toolkit-based attestation that the processes are in place.
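As an illustration, the eleven principles can be modeled as a checklist keyed by principle name. The principle names below follow IMDA's published testing framework; the example check descriptions and the `coverage` helper are hypothetical, not part of the official toolkit.

```python
# Illustrative sketch: AI Verify's 11 principles as a checklist structure.
# Principle names follow IMDA's framework; the check descriptions are
# hypothetical examples, not official test specifications.
PRINCIPLES = {
    "transparency": "disclosure that AI is used in the system",
    "explainability": "documented method for explaining model outputs",
    "repeatability/reproducibility": "versioned data, code and model artifacts",
    "safety": "risk and impact assessment on record",
    "security": "access controls and adversarial testing",
    "robustness": "performance under perturbed or drifted inputs",
    "fairness": "bias metrics computed on relevant attributes",
    "data governance": "data lineage and quality checks",
    "accountability": "named owner and escalation path",
    "human agency & oversight": "human-in-the-loop or override mechanism",
    "inclusive growth, societal & environmental well-being": "documented beneficial-use assessment",
}

def coverage(evidence: dict) -> float:
    """Fraction of principles with attested evidence on file."""
    return sum(bool(evidence.get(p)) for p in PRINCIPLES) / len(PRINCIPLES)
```

A governance dashboard could feed `coverage` a map of principle name to attestation status to show how close a system is to a complete process-check run.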

How Safeguard maps to it

Pre-mapped controls. Continuous evidence.

Each requirement above is bound to live telemetry — not screenshots. The mapping below is what your auditor or regulator sees.

AI Verify process attestation auto-populated from model registry telemetry.

Model evaluation and red-team evidence binding.
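A minimal sketch of the auto-population step, assuming a hypothetical telemetry event shape (`check`, `passed`, `ts`); the output schema is illustrative, not the official AI Verify report format.

```python
import json
from datetime import datetime, timezone

def attestation_record(model_id: str, events: list) -> str:
    """Fold model-registry telemetry events into one process-check record.

    Each event is a hypothetical dict: {"check": str, "passed": bool,
    "ts": ISO-8601 str}. The newest result per check wins.
    """
    checks = {}
    for e in events:
        prev = checks.get(e["check"])
        # ISO-8601 timestamps in the same format compare lexicographically.
        if prev is None or e["ts"] > prev["ts"]:
            checks[e["check"]] = {"passed": e["passed"], "ts": e["ts"]}
    record = {
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "checks": checks,
        "all_passed": all(c["passed"] for c in checks.values()),
    }
    return json.dumps(record, indent=2, sort_keys=True)
```

The point of the sketch: the attestation is derived from telemetry already emitted by the registry, so re-running it always reflects the current state rather than a snapshot.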

Evidence we produce

Artifacts your auditor accepts.

Each evidence artifact is signed and timestamped. Auditors can verify integrity without trusting Safeguard.

AI Verify Process Checks report.

Model evaluation evidence.
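A sketch of the integrity check an auditor could run locally, assuming a hypothetical manifest entry containing a hex SHA-256 digest of the artifact. Verifying the signature over the manifest itself (against a published public key) is a separate step, omitted here.

```python
import hashlib
import hmac

def verify_artifact(artifact: bytes, manifest: dict) -> bool:
    """Recompute the artifact digest and compare it to the manifest entry.

    `manifest` is a hypothetical structure, e.g. {"sha256": "<hex digest>"}.
    """
    digest = hashlib.sha256(artifact).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(digest, manifest["sha256"])
```

Because the check is a plain digest recomputation, it needs no vendor tooling: any party holding the artifact and a trusted copy of the manifest can perform it independently.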

Ready for AI Verify?

Bring the framework. We'll walk the controls with you — section by section, evidence packet by evidence packet, with the regulators you actually have to answer to.

Safeguard | Software Supply Chain Security Platform | Zero CVE + Self-Healing