The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023, provides a voluntary framework for managing risks associated with AI systems throughout their lifecycle. In December 2025, NIST published the Cyber AI Profile (IR 8596), which aligns the AI RMF with the Cybersecurity Framework (CSF) 2.0 to specifically address cybersecurity risks from AI adoption.

Together, these documents provide the most comprehensive US government guidance for AI risk management and are increasingly referenced by regulators, standards bodies, and procurement requirements.

AI RMF Structure

The framework is organized into four core functions.

Govern

This establishes the organizational foundation for AI risk management. Define acceptable use policies for AI, including prohibited applications and risk tolerance. Assign accountability for AI risk decisions, covering who approves AI deployments, who monitors ongoing risk, and who has authority to halt AI systems. Foster organizational awareness of AI risks across technical and non-technical stakeholders. Identify applicable AI regulations (EU AI Act, sector-specific requirements) and align governance accordingly. Establish policies for procuring, deploying, and monitoring third-party AI systems and models.

Map

Understand the context, capabilities, and potential impacts of AI systems. Catalog all AI systems in development, deployment, or procurement, including their purpose, data sources, and decision authority. Identify all parties affected by the AI system, from users to subjects to operators to regulators. Assess AI-specific threats including prompt injection, data poisoning, model extraction, adversarial attacks, and bias. Evaluate potential harms to individuals (discrimination, privacy), to organizations (liability, reputation), and to society (misinformation, safety). Evaluate training and operational data for quality, representativeness, bias, and provenance.

Measure

Assess and monitor AI risks quantitatively and qualitatively. Test AI systems for accuracy, fairness, robustness, and safety before deployment and on an ongoing basis. Define and measure fairness criteria appropriate to the use case, such as demographic parity, equalized odds, or individual fairness. Conduct adversarial testing for prompt injection, jailbreaking, data extraction, and agent abuse (see OWASP Top 10 for LLMs). Track model performance over time to detect drift, degradation, and emerging failure modes. Monitor and record AI-related incidents, near-misses, and user complaints.

Manage

Treat, mitigate, and communicate AI risks. Implement controls to mitigate identified risks through input/output filtering, human-in-the-loop review, access restrictions, and monitoring. Establish AI-specific incident response procedures for failures, biased outputs, security compromises, and unintended harms. Report AI risk status to leadership, regulators, and affected stakeholders. Update risk assessments, controls, and governance based on incidents, monitoring data, and evolving threats. Plan for safe decommissioning of AI systems, including data retention, model disposal, and transition procedures.

Cyber AI Profile (IR 8596)

Published in December 2025, the Cyber AI Profile maps AI-specific cybersecurity risks to the CSF 2.0 framework:

CSF 2.0 FunctionAI-Specific Considerations
GovernAI risk governance, acceptable use policies, regulatory compliance
IdentifyAI asset inventory, AI-specific threat landscape, dependency mapping
ProtectInput validation, access controls on models and data, supply chain security
DetectAnomaly detection in AI behavior, prompt injection detection, model drift monitoring
RespondAI incident response, model rollback, containment of compromised AI systems
RecoverRestoration of AI services, post-incident model retraining, communication

The Cyber AI Profile specifically addresses risks from using AI in security operations (AI-assisted SOC, automated response), securing AI systems themselves (model security, training data integrity), and AI-enabled attacks (AI-generated phishing, deepfakes, automated vulnerability exploitation).

Relationship to EU AI Act

AspectNIST AI RMFEU AI Act
NatureVoluntary frameworkMandatory regulation
ApproachRisk management processRisk classification with prescriptive requirements
ScopeAll AI systemsAI systems on EU market or affecting EU persons
EnforcementNone (guidance only)Fines up to 35 million euros or 7% of global turnover

Organizations subject to the EU AI Act can use the NIST AI RMF as the operational framework to meet many of the Act’s requirements, particularly risk management, testing, and documentation obligations.

Practical Steps

Catalog all AI in use, development, or procurement across the organization. Establish AI governance by assigning roles, defining policies, and setting risk tolerance with executive sponsorship. Threat model each AI system using OWASP Top 10 for LLMs and MITRE ATLAS as checklists. Implement a testing program that red teams AI systems before deployment and monitors continuously. Use the Cyber AI Profile to integrate AI risk management into existing cybersecurity programs. Maintain risk assessments, testing results, and governance decisions for regulatory and audit purposes.