ISO/IEC 42001:2023, published in December 2023 as the world's first certifiable Artificial Intelligence Management System (AIMS) standard, spent 2024 as an aspirational reference and reached critical-mass adoption in 2025. A CSA 2025 compliance benchmark reported that 76% of surveyed organizations planned to pursue frameworks like ISO 42001. SAP achieved 42001 certification for its core AI services in 2025, and Cornerstone announced 42001 certification for Galaxy in late December 2025. Most importantly, the EU AI Act, which entered into force in August 2024 and began applying in phases during 2025, has made 42001 the most widely cited practical pathway for demonstrating conformity with high-risk AI requirements. For organizations building, deploying, or procuring AI, 42001 is now the lingua franca on which auditors, regulators, and procurement teams converge.
What is an AIMS and how is it different from an ISMS?
An AIMS is a management system for AI development and use, structured like ISO/IEC 27001's Information Security Management System but scoped to AI-specific risks. It requires the organization to establish AI policy, identify roles and responsibilities for AI governance, perform risk and impact assessments specific to AI systems, document the AI system lifecycle (data, model, deployment, monitoring), and implement controls drawn from the standard's Annex A. Where an ISMS treats AI as one category of information system, an AIMS treats AI as a distinct discipline with risks like bias, drift, autonomy, transparency, and human oversight that do not map cleanly to traditional InfoSec controls.
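To make that concrete, an AI impact assessment record under such a system might capture something like the following. This is a minimal sketch; the field names are illustrative and not prescribed by the standard.

# Illustrative AI impact assessment record (field names are
# examples only; ISO/IEC 42001 does not prescribe a schema)
ai_impact_assessment:
  system: resume_screening_assistant
  owner: ml_platform_lead
  intended_purpose: rank inbound applications for recruiter review
  affected_parties: [job applicants, recruiters]
  risks:
    - id: bias-001
      description: disparate ranking outcomes across demographic groups
      treatment: quarterly fairness evaluation against a held-out benchmark
    - id: oversight-002
      description: recruiters accept rankings without review
      treatment: human-in-the-loop approval required before rejection
  review_cadence: quarterly
  last_reviewed: 2026-01-15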
What controls does Annex A of 42001 cover?
Annex A organizes controls into nine categories: policies related to AI, internal organization, resources for AI systems, assessing impacts of AI systems, AI system lifecycle, data for AI systems, information for interested parties about AI systems, use of AI systems, and third-party and customer relationships. The data category is particularly important for supply-chain-minded teams: it requires documented processes for data acquisition, quality, labeling, provenance, and storage, effectively an SBOM equivalent for the data side of the pipeline (a provenance sketch follows the scope skeleton below). The third-party and customer relationships category covers procurement of AI models or services from external providers, including obligations to assess upstream AI risk management.
# 42001 AIMS scope statement skeleton
aims_scope:
  organization: Acme Corp
  scope_boundary:
    - all AI systems hosted on internal infrastructure
    - all AI features in the Acme SaaS product line
    - all use of third-party AI services in customer-facing flows
  excluded:
    - employee personal use of consumer AI tools (governed by Acceptable Use Policy)
  ai_policy_version: 2026-01
  governance_committee:
    chair: chief_information_security_officer
    members:
      - product_engineering_director
      - legal_general_counsel
      - data_privacy_officer
      - ml_platform_lead
    cadence: monthly
  external_certifications_referenced:
    - iso_27001_2022
    - soc2_type_2
    - iso_42001_2023_in_progress
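The Annex A data controls referenced above translate naturally into a per-dataset provenance record. A minimal sketch; the keys are illustrative, not mandated by the standard.

# Illustrative dataset provenance record supporting the Annex A
# "data for AI systems" controls (keys are examples, not mandated)
dataset_provenance:
  dataset: support_ticket_corpus_v4
  acquisition:
    source: internal CRM export
    collected: 2025-11-30
    legal_basis: legitimate interest (documented in a DPIA)
  quality_checks:
    - deduplication
    - PII redaction
    - label spot-check (5% sample, two annotators)
  labeling:
    provider: internal annotation team
    guideline_version: 3.2
  storage:
    location: eu-west-1 object store, encrypted at rest
    retention: 24 months
  downstream_models:
    - ticket_triage_classifier_v7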
How does 42001 pair with the EU AI Act?
The EU AI Act classifies AI systems into risk tiers (prohibited, high-risk, limited-risk, minimal-risk) and imposes obligations on providers and deployers of high-risk systems including risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy/robustness/cybersecurity. ISO/IEC 42001's structure addresses many of those obligations directly. While 42001 is not officially harmonized in the EU AI Act sense (harmonized standards are published in the OJEU and provide presumption of conformity), it is increasingly cited as a practical evidence vehicle and several harmonized standards under development reference 42001 patterns. Organizations pursuing both should treat 42001 implementation as foundational and EU AI Act-specific evidence as additive.
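A lightweight crosswalk kept alongside the AIMS documentation helps show how management-system evidence maps onto Act obligations. A sketch, with the mappings abbreviated and illustrative rather than authoritative:

# Illustrative crosswalk; mappings are abbreviated, not authoritative
eu_ai_act_crosswalk:
  - act_obligation: risk management system (Art. 9)
    aims_evidence: AI risk assessment register, clause 6 planning records
  - act_obligation: data and data governance (Art. 10)
    aims_evidence: dataset provenance records, Annex A data controls
  - act_obligation: technical documentation (Art. 11)
    aims_evidence: model cards, AI system lifecycle documentation
  - act_obligation: human oversight (Art. 14)
    aims_evidence: human-in-the-loop procedures, impact assessment treatments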
What does the SAP and Cornerstone certification announcement signal?
Vendor-side 42001 certifications matter because they shift the procurement conversation. SAP's 2025 certification for core AI services and Cornerstone's December 2025 Galaxy certification are early data points in what appears to be a 2026 wave of enterprise SaaS vendors achieving 42001 alongside their existing SOC 2 and ISO 27001 audits. For procurement teams evaluating AI features bolted into enterprise applications, the question shifts from "does the vendor have AI?" to "does the vendor have an AIMS, and is it certified?" Vendors without an answer to that question are visibly behind, particularly for customers with EU operations who face EU AI Act conformity obligations on their own systems. The early-mover advantage may compress: certification bodies report multi-month backlogs for 42001 audits as of early 2026, and organizations targeting certification by Q4 2026 should engage their certification body in Q2 to secure a slot.
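On the buy side, that question reduces to a short set of structured asks in vendor due diligence. A sketch of what such a questionnaire fragment might capture; the fields are illustrative, not a standard template.

# Illustrative vendor AI due-diligence fragment (fields are examples)
vendor_ai_assessment:
  vendor: example_saas_provider
  aims_in_place: yes
  iso_42001_status: certified          # certified | in_progress | none
  certificate_scope: core AI services  # verify scope covers the features purchased
  eu_ai_act_risk_tier: limited_risk    # vendor's own classification, to be validated
  model_cards_available: yes
  subprocessor_ai_providers:
    - name: upstream_llm_api_provider
      assessed: yes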
What does certification actually involve?
A typical 42001 certification path runs 9-18 months: scope definition, gap analysis against the standard, AIMS implementation (policies, risk assessments, controls, documented procedures), internal audit, management review, Stage 1 external audit (documentation review), and Stage 2 external audit (implementation review). Surveillance audits follow annually with full re-certification every three years. The standard is risk-based, so a small organization with a narrow AI scope can certify with proportionally less effort than a multinational with hundreds of AI features. The accredited certification bodies — DNV, BSI, Schellman, A-LIGN, and others — are now offering 42001 alongside their 27001 and SOC 2 services, which makes a multi-standard audit feasible.
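Sequenced as a plan, the path might look like the following. The durations are illustrative for a mid-sized scope and will vary with organization size and existing ISO 27001 maturity.

# Illustrative 42001 certification plan; durations are assumptions,
# not prescribed by the standard or by certification bodies
certification_plan:
  - phase: scope definition and gap analysis
    duration_months: 2
  - phase: AIMS implementation (policies, risk assessments, controls)
    duration_months: 4-6
  - phase: internal audit and management review
    duration_months: 1-2
  - phase: stage 1 external audit (documentation review)
    duration_months: 1
  - phase: stage 2 external audit (implementation review)
    duration_months: 1-2
  surveillance: annual
  recertification: every 3 years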
Where does 42001 fall short, and what complements it?
42001 is a management system standard. It does not prescribe technical controls in the way that 27002 prescribes InfoSec controls. Organizations seeking technical guidance pair 42001 with the NIST AI RMF (and its Generative AI Profile, NIST AI 600-1), the OWASP Top 10 for LLM Applications 2025, and the MITRE ATLAS adversarial ML knowledge base. For the AI supply chain, the SSDF community profile for generative AI (NIST SP 800-218A) augments 42001's data and lifecycle controls with secure-development practices specific to AI. Organizations that already operate ISO 27001 can extend their existing ISMS scope to AI by adding 42001 rather than creating a parallel program; most of the management-system infrastructure is reusable.
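One way to operationalize the pairing is to record, per AI system, which complementary framework supplies the technical depth 42001 leaves open. A sketch with illustrative entries:

# Illustrative pairing of 42001 management controls with technical
# guidance sources for one AI system (entries are examples)
technical_guidance_map:
  system: customer_support_llm_assistant
  threat_modeling: MITRE ATLAS tactics review at design stage
  application_security: OWASP Top 10 for LLM Applications 2025 checklist
  risk_framework: NIST AI RMF Generative AI Profile (NIST AI 600-1)
  secure_development: NIST SP 800-218A practices in the CI pipeline
  management_system: ISO/IEC 42001 Annex A lifecycle and data controls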
What are the most common implementation mistakes?
Three patterns recur across early 42001 implementations. First, treating 42001 as a documentation exercise rather than a management system: organizations produce binders of policies but do not establish the recurring governance committees, risk assessments, or impact reviews the standard requires. The certification audit catches this quickly. Second, scope sprawl: organizations attempt to bring every internal AI use under the AIMS scope at once, producing unmanageable complexity. Mature implementations scope narrowly first (highest-risk AI systems), demonstrate maturity, and expand scope in subsequent surveillance cycles. Third, treating 42001 as parallel to ISO 27001 rather than integrating it: maintaining two separate management systems doubles the documentation and audit burden without proportionate benefit. Integration through scope extension of the existing ISMS is almost always the better approach for organizations that already hold 27001. Avoiding these mistakes saves 6-12 months on the certification timeline.
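The scope-sprawl remedy can be written down as a phased roadmap and referenced from the scope statement. A sketch, assuming a three-cycle expansion:

# Illustrative phased AIMS scope expansion (assumption: three cycles)
scope_roadmap:
  - cycle: initial certification
    in_scope: customer-facing AI features with high-risk classification
  - cycle: surveillance year 1
    in_scope: internal AI systems processing personal data
  - cycle: surveillance year 2
    in_scope: remaining internal AI tooling and experimentation platforms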
How does 42001 interact with sector-specific AI regulation?
Beyond the EU AI Act, sector-specific AI regulation is proliferating: financial services regulators have issued AI risk guidance in the US, UK, Singapore, and elsewhere; healthcare regulators are increasingly active around AI-as-medical-device; and several jurisdictions are issuing AI-specific data protection guidance that extends or supplements general privacy laws. 42001's role across these regulatory regimes is broadly the same: it provides a management system that demonstrates governance and risk management discipline, which regulators reference as evidence of due care. Sector-specific requirements layer on top — model evaluation thresholds in financial services, clinical validation in healthcare, fairness audits in HR-tech — but the underlying AIMS supports all of them. For organizations operating across jurisdictions, the 42001 backbone reduces the marginal cost of complying with each new sectoral AI regulation, because the management system, documentation, and audit trail already exist. The pattern is similar to how ISO 27001 became the substrate that supported HIPAA, PCI-DSS, GLBA, and other sector-specific InfoSec requirements without organizations having to maintain separate management programs for each.
How Safeguard Helps
Safeguard's AI governance module supports ISO/IEC 42001 implementation alongside the broader supply-chain program. For Annex A data controls, the platform's AIBOM (AI Bill of Materials) capability tracks model lineage, training data provenance, and model-component dependencies — producing the documented data and model lifecycle evidence 42001 requires. For third-party AI risk, TPRM workflows score upstream model vendors and AI service providers against 42001 alignment, EU AI Act risk tier, and NIST AI RMF Profile maturity. Griffin AI generates the AIMS documentation skeleton, populated from tenant data, with traceability between 42001 controls, NIST AI RMF subcategories, EU AI Act obligations, and Safeguard evidence. Policy gates can require attestation of 42001-aligned controls before AI components are deployed to production, and AI-model-specific attestations (model card, evaluation results, bias assessments) can be required as part of the deployment package. For organizations pursuing 42001 certification, Safeguard collapses the parallel-evidence problem that historically plagued multi-standard programs.
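As an illustration of the policy-gate idea, a deployment gate for AI components might be expressed along these lines. The schema below is hypothetical and does not represent Safeguard's actual configuration syntax.

# Hypothetical policy-gate sketch; not Safeguard's actual schema
ai_deployment_gate:
  applies_to: components tagged ai-model
  required_attestations:
    - model_card
    - evaluation_results
    - bias_assessment
  required_controls:
    - iso_42001_annex_a: [data_for_ai_systems, ai_system_lifecycle]
  block_on_missing: true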