Executive Order 14110 (2023), the US order on the Safe, Secure, and Trustworthy Development and Use of AI: reporting requirements for foundation model developers and governance of federal AI use.
Federal agencies, federal AI vendors, and foundation model developers whose models cross the dual-use compute thresholds.
Continuous evidence pipeline available; audit support included for all customers.
These are the obligations a regulated entity owes — the things an assessor or supervisor will ask about.
Reporting requirements for foundation models exceeding compute thresholds (more than 10^26 operations), invoked under the Defense Production Act.
Red-teaming of dual-use foundation models prior to release.
NIST AI Safety Institute consortium participation.
Federal agency AI use case governance and chief AI officer appointment.
Each requirement above is bound to live telemetry — not screenshots. The mapping below is what your auditor or regulator sees.
Foundation model reporting workflow with NIST AI Safety Institute templates.
Red-team evidence repository aligned with NIST AI RMF.
Each evidence artifact is signed and timestamped. Auditors can verify integrity without trusting Safeguard.
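As a hedged sketch of what independent verification could look like: an auditor receives the artifact plus a manifest carrying its digest and timestamp, recomputes the hash locally, and compares. The manifest format and field names below are illustrative assumptions, not Safeguard's actual artifact schema; in practice the manifest itself would also carry a detached public-key signature (e.g. Ed25519) that the auditor checks first.

```python
import hashlib

def verify_artifact(artifact_bytes: bytes, manifest: dict) -> bool:
    """Recompute the artifact digest and compare it with the manifest.

    Illustrative only: a production verifier would first validate the
    manifest's detached signature against the issuer's public key, then
    check the timestamp against a trusted time source. Here we show the
    stdlib hash-comparison step, which already requires no trust in the
    platform that produced the artifact.
    """
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return digest == manifest.get("sha256") and "timestamp" in manifest

# Hypothetical manifest an auditor might receive alongside the artifact
artifact = b'{"control": "red-team-report", "version": 1}'
manifest = {
    "sha256": hashlib.sha256(artifact).hexdigest(),
    "timestamp": "2024-01-15T09:30:00Z",
}
print(verify_artifact(artifact, manifest))  # True for an untampered artifact
```

Any modification to the artifact bytes changes the SHA-256 digest, so the comparison fails without the auditor needing access to Safeguard's systems.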
Foundation model reporting pack.
Red-team evidence.
These frameworks share substantial control overlap with EO 14110. Customers running one assessment typically satisfy the others with the same evidence base.
European Union
The world's first comprehensive AI regulation — risk-based, with phased prohibitions, transparency duties, and obligations for high-risk and general-purpose AI.
APAC
Singapore's AI Verify, the national AI governance testing framework: a voluntary toolkit and attestation for trustworthy AI.
Bring the framework. We'll walk the controls with you — section by section, evidence packet by evidence packet, with the regulators you actually have to answer to.