NIST released the preliminary draft of Internal Report 8596, Cybersecurity Framework Profile for Artificial Intelligence, in December 2025. The profile addresses the intersection of AI and cybersecurity from three distinct angles: securing AI systems against attack, using AI to enhance cybersecurity defenses, and defending against AI-enabled threats. It is structured as a profile of the NIST Cybersecurity Framework 2.0, meaning it does not replace the CSF — it provides a use-case-specific lens on the CSF's Govern, Identify, Protect, Detect, Respond, and Recover functions for organizations that build, use, or are targeted by AI systems. For defenders, IR 8596 is the closest the federal standards body has come to a comprehensive map of where AI and cybersecurity programs intersect.
Why three angles instead of one?
Earlier NIST AI guidance had largely addressed one dimension at a time: NIST AI RMF 1.0 (AI risk management broadly), NIST-AI-600-1 (the Generative AI Profile), and SP 800-218A (secure development practices for AI models). IR 8596 explicitly recognizes that AI and cybersecurity intersect in three different ways that require different controls. Securing AI systems applies to organizations building AI products and covers threats such as model theft, training data poisoning, prompt injection, and model evasion. Using AI for defense applies to security operations centers adopting AI for triage, threat hunting, or response automation, with risks around AI agent autonomy, false-positive rates, and accountability. Defending against AI-enabled threats applies to all organizations: adversaries using generative AI for spear-phishing, deepfake-enabled fraud, and accelerated vulnerability discovery. Separating the three lets organizations adopt the parts relevant to their operating model without conflating distinct control sets.
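The three angles can be sketched as an illustrative mapping in the profile's own style (the angle keys and groupings below paraphrase the descriptions above and are not verbatim from the draft):

# Illustrative: the three angles and representative concerns (paraphrased, not draft-verbatim)
angles:
  securing_ai_systems:
    applies_to: organizations_building_ai_products
    representative_threats: [model_theft, training_data_poisoning, prompt_injection, model_evasion]
  using_ai_for_defense:
    applies_to: security_operations_adopting_ai
    representative_risks: [agent_autonomy, false_positive_rates, accountability]
  defending_against_ai_enabled_threats:
    applies_to: all_organizations
    representative_threats: [ai_generated_spear_phishing, deepfake_enabled_fraud, accelerated_vulnerability_discovery]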
How is the profile structured against CSF 2.0?
The profile follows CSF 2.0's function-category-subcategory structure. For each subcategory, IR 8596 provides AI-specific implementation guidance applicable to one or more of the three angles. For example, under Identify → Asset Management, the profile augments ID.AM with AI-specific asset types: models, training datasets, evaluation datasets, system prompts, fine-tuning checkpoints, embedding models, retrieval indexes. Under Protect → Data Security, the profile augments PR.DS with AI-specific controls: training data lineage, model weight protection, prompt-injection-aware input validation. Under Detect → Continuous Monitoring, the profile addresses AI-specific anomalies: model output drift, embedding-similarity attack signals, abnormal agent tool-use patterns.
# Excerpt: IR 8596 indicative profile structure
function: PROTECT
category: PR.DS  # Data Security
subcategory: PR.DS-01
csf_text: |
  The confidentiality, integrity, and availability of data-at-rest are protected
ai_profile_guidance:
  angle: securing_ai_systems
  implementation:
    - model_weight_files_stored_in_kms_protected_vault
    - training_dataset_integrity_attested_via_sigstore_or_in_toto
    - evaluation_set_separated_from_training_with_documented_provenance
    - prompt_repository_versioned_and_access_controlled
  evidence:
    - kms_audit_log_for_model_weight_access
    - in_toto_attestation_for_training_data_set
    - prompt_repository_access_audit
  references:
    - nist_ai_rmf_1_0_MAP_3.4
    - nist_sp_800_218a_PW.4
    - iso_iec_42001_2023_8.7
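A similar fragment for Identify → Asset Management would enumerate the AI-specific asset types named above (the structure mirrors the excerpt; the field names are indicative, not draft-verbatim):

# Illustrative: ID.AM augmented with AI-specific asset types
function: IDENTIFY
category: ID.AM  # Asset Management
ai_profile_guidance:
  angle: securing_ai_systems
  ai_asset_types:
    - models
    - training_datasets
    - evaluation_datasets
    - system_prompts
    - fine_tuning_checkpoints
    - embedding_models
    - retrieval_indexes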
How does IR 8596 interact with the existing AI guidance corpus?
It anchors the corpus. NIST AI RMF 1.0 remains the high-level risk-management framework. The Generative AI Profile (AI-600-1) provides GenAI-specific risk categorization. SP 800-218A provides secure-development practices for AI. ISO/IEC 42001 provides the management-system view. IR 8596 plays a coordinating role: for every cybersecurity control already familiar to organizations operating under CSF or 800-53, the profile maps how the AI dimension affects implementation. The expected use pattern is that an organization with mature cybersecurity already uses CSF or 800-53; IR 8596 lets that organization extend its existing control set to cover AI rather than building a parallel program.
What does "using AI for defense" actually mean in the profile?
This angle addresses the rapidly growing class of AI agents used in security operations: AI triage assistants, autonomous incident-response agents, AI-powered threat-hunting platforms, and AI-augmented SIEMs. The profile recognizes that these systems introduce new risks even as they improve defender capabilities. Specific guidance includes: AI agents must have constrained tool permissions (the Excessive Agency entry in the OWASP Top 10 for LLM Applications), AI outputs that drive incident response must be reviewable and auditable, AI models used in the SOC must have evaluated false-positive and false-negative rates, and AI agents must be subject to the same identity, access, and audit controls as human analysts. The profile is notably skeptical of fully autonomous SOC patterns and recommends human-in-the-loop review for consequential actions.
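The guidance above can be expressed as a profile fragment in the style of the earlier excerpt (the category assignment and field names here are a hypothetical sketch, not quoted from the draft):

# Illustrative: SOC AI agent controls in the style of the excerpt above
# (category assignment and field names are hypothetical, not draft-verbatim)
function: PROTECT
category: PR.AA  # Identity Management, Authentication, and Access Control
ai_profile_guidance:
  angle: using_ai_for_defense
  implementation:
    - agent_tool_permissions_explicitly_enumerated_and_least_privilege
    - agent_outputs_driving_response_logged_and_reviewable
    - model_false_positive_and_false_negative_rates_evaluated_before_deployment
    - human_approval_required_for_consequential_actions
  evidence:
    - agent_permission_manifest
    - agent_action_audit_log
    - model_evaluation_report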
What does "defending against AI-enabled threats" cover?
This angle is the one all organizations need, regardless of whether they build or use AI. The profile covers: phishing and social engineering enhanced by generative AI (training programs must adapt to AI-generated content), deepfake-enabled fraud (voice-print-only authentication is now insufficient), AI-accelerated vulnerability discovery (patch cadence assumptions need to compress), and adversarial use of public AI services (data leakage to consumer AI tools by employees). The profile does not enumerate specific tools, but it requires that the organization's threat model explicitly enumerate AI-enabled threats and align controls accordingly.
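Since the profile requires the threat model to enumerate AI-enabled threats explicitly, one way to record that enumeration is a fragment like the following (the structure and pairings are illustrative, drawn from the threats and controls listed above):

# Illustrative: AI-enabled threats enumerated in a threat model
# (structure is hypothetical; the profile requires enumeration, not this format)
ai_enabled_threats:
  - threat: ai_generated_spear_phishing
    control: awareness_training_adapted_to_ai_generated_content
  - threat: deepfake_enabled_fraud
    control: verification_beyond_voice_print_only_authentication
  - threat: ai_accelerated_vulnerability_discovery
    control: compressed_patch_cadence_for_exposed_assets
  - threat: data_leakage_to_consumer_ai_tools
    control: policy_and_monitoring_for_unsanctioned_ai_services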
What about supply-chain attestations for AI artifacts?
IR 8596's Protect function includes explicit guidance on attesting AI artifacts in the supply chain. The profile recommends attestations for training datasets (data provenance, content licensing, integrity hash), fine-tuning runs (build provenance equivalent to SLSA for training jobs), evaluation runs (model performance against documented benchmarks), and deployment configurations (model card, system prompt, tool permissions for agents). The recommendations align with the OpenSSF Model Signing project, the CycloneDX AIBOM extensions, and the in-toto predicate types being developed for model artifacts. For organizations consuming open-source AI models, the profile recommends treating model artifacts like any other supply-chain dependency: verify provenance where possible, inventory dependencies, monitor for known compromises (model-poisoning research has produced concrete examples), and prefer signed artifacts from accountable publishers over unsigned artifacts from anonymous uploaders. The Hugging Face Hub's growing support for signed models and attestation publishing makes this increasingly tractable.
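The attestation coverage described above can be sketched as an inventory in the excerpt's style (artifact and attestation names paraphrase the recommendations; none of the identifiers are quoted from the draft):

# Illustrative: attestation coverage for AI artifacts in the supply chain
# (artifact and attestation names are indicative, not draft-verbatim)
ai_artifacts:
  training_dataset:
    attestations: [data_provenance, content_licensing, integrity_hash]
  fine_tuning_run:
    attestations: [build_provenance]  # SLSA-equivalent provenance for the training job
  evaluation_run:
    attestations: [performance_against_documented_benchmarks]
  deployment_configuration:
    attestations: [model_card, system_prompt, agent_tool_permissions]
consumption_policy:
  open_source_models:
    - verify_provenance_where_possible
    - inventory_as_supply_chain_dependency
    - monitor_for_known_compromises
    - prefer_signed_artifacts_from_accountable_publishers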
What should organizations do during the comment period and after?
Three actions during the preliminary draft comment period. First, identify which of the three angles applies to your operating model — most large organizations have all three; smaller organizations may have one or two. Second, take three to five CSF subcategories you already have well-documented controls for and run the IR 8596 augmentation against them to test the profile's usability. Third, file public comments where the profile is unclear, redundant, or unrealistic — NIST consistently iterates on community feedback, and the preliminary draft will be refined before final publication. After publication, expect IR 8596 to be referenced by federal contracting language, by FedRAMP 20x KSIs as AI use grows, and by enterprise procurement.
How does IR 8596 align with sector regulators and international standards?
NIST publications are voluntary at the federal level but are routinely incorporated by reference into sector regulation and procurement language. IR 8596 is expected to follow that pattern: financial services regulators considering AI guidance, healthcare regulators developing AI-as-medical-device rules, and procurement officials writing AI clauses into federal contracts will all reference IR 8596 as a structured framework. Internationally, IR 8596 aligns with ISO/IEC 42001 (the AI management system), the upcoming ISO/IEC 27090 (AI security), and the EU AI Act's risk-management requirements. The alignment is not coincidental — NIST authors participate in the corresponding international working groups, and the cross-references between US and international AI guidance are converging rather than diverging. For multinational organizations, this is good news: a single IR 8596-aligned implementation will produce evidence that maps to most major AI governance frameworks with documentation effort rather than program rebuilding. The remaining sector-specific requirements layer on top of the baseline, but the baseline itself is increasingly portable across jurisdictions.
How Safeguard Helps
Safeguard's AI governance and supply-chain modules align to IR 8596's three angles. For securing AI systems, the platform's AIBOM, model-provenance, and AI-supply-chain attestation capabilities provide the evidence the profile expects under Protect and Identify. For using AI for defense, Safeguard's own AI features — Griffin AI, automated triage, agent workflows — are themselves subject to the platform's controls: documented tool permissions, auditable outputs, human-in-the-loop gates on consequential operations. For defending against AI-enabled threats, the platform monitors for AI-generated supply-chain attack indicators — hallucinated package names, AI-augmented typosquatting, prompt-injection-aware dependency analysis. Griffin AI generates the IR 8596 profile implementation mapping for each tenant, joining existing CSF or 800-53 control evidence to the AI-specific guidance, so organizations adopting the profile do not rebuild their cybersecurity program but extend it.