The AI RMF establishes a process and taxonomy through which organizations can identify, prioritize, and manage risks associated with the development, distribution, and use of artificial intelligence systems. Rather than taking a rigid, one-size-fits-all approach, the Framework was designed with flexibility in mind so that organizations of all sizes and in any sector can use it to enhance their AI risk management processes.
The Framework has two parts. Part I identifies seven characteristics of trustworthy AI to help frame the risks relating to AI systems. Part II, which is the core of the Framework, describes four risk management functions that can be used to help operationalize AI risk management.
Trustworthy AI Characteristics
The AI RMF describes seven characteristics of trustworthy AI systems: (i) valid and reliable; (ii) safe; (iii) secure and resilient; (iv) accountable and transparent; (v) explainable and interpretable; (vi) privacy-enhanced; and (vii) fair and with harmful bias managed. The characteristics are presented as a taxonomy for evaluating risks rather than as a prescriptive checklist; indeed, the Framework acknowledges that trade-offs exist between these characteristics and that finding an appropriate balance may be difficult and context-dependent. Nevertheless, they give organizations a shared way to discuss the risks associated with AI systems. A high-level summary of each characteristic follows:
- Valid and reliable: AI actors should confirm, using objective evidence, that AI systems perform properly for their intended uses (validation) and perform without failure over a period of time under given circumstances (reliability). AI systems should also be accurate and function correctly in a wide range of conditions and situations, including those not originally intended (robustness).
- Safe: AI systems should not “lead to a state in which human life, health, property, or the environment is endangered.” Risks to safety should be prioritized and managed based on the context and severity of harms. Safety considerations should be incorporated throughout the lifecycle of the AI system. Guidelines and efforts in fields such as transportation and healthcare can inform safety risk management.
- Secure and resilient: Resilient AI systems can withstand unexpected changes, maintain their functions in the face of internal and external change, and degrade safely where necessary. Secure AI systems are able to maintain confidentiality, integrity, and availability and to “avoid, protect against, respond to, or recover from attacks.”
- Accountable and transparent: AI systems should provide those interacting with them with appropriate information based on the stage of the AI lifecycle and the role of the individuals (transparency) and should adopt accountability practices informed by the context, the role of the AI actor, and the risk associated with the AI systems.
- Explainable and interpretable: Explainable and interpretable AI systems help AI actors understand the purpose and potential impact of these systems. According to NIST, explainable systems have representations of the mechanisms underlying the system’s operation and can answer the question of “how” a decision was made. Interpretable systems provide meaning for the outputs in the context of the intended function and can answer the question of “why” a decision was made.
- Privacy-enhanced: The values of anonymity, confidentiality, and individual control should generally guide the design, development, and deployment of AI systems. Privacy-enhancing technologies and data minimization methods can help mitigate privacy risks, though they may impact other characteristics such as accuracy.
- Fair and with harmful bias managed: While the notion of fairness may be difficult to define given differing perceptions among cultures and applications, AI risk management efforts can be enhanced by acknowledging these differences as part of the risk management process. AI systems should also account for equity and equality by recognizing and addressing harmful bias and discrimination.
Part II of the AI RMF then presents four risk management functions (govern, map, measure, and manage) and associated actions and outcomes that organizations can leverage to manage their organizational AI risk profile.
Companion Documents
NIST also released materials to support AI actors as they operationalize the Framework. These companion documents include the AI RMF Playbook, which suggests actions, references, and documentation relating to each of the four risk management functions; the AI RMF Roadmap, which identifies activities that can advance the Framework; and two crosswalks showing how the AI RMF compares with other governance frameworks. NIST also released a video explainer of the Framework.
EU AI Act
Given the significant overlap between the seven characteristics of trustworthy AI outlined in the AI RMF and the requirements of the EU’s proposed AI Act, the Framework has the potential to prove influential for both companies and EU institutions as they develop common approaches to AI governance.
Standards form a significant part of the AI Act. They are expected to assist providers of AI systems in developing clear and consistent approaches to risk management that adhere to the comprehensive requirements that the regulation intends to introduce. Under the proposed AI Act, the European Commission would be empowered to introduce new “common specifications” to address areas where harmonized standards have not already been developed by the EU. European regulators are likely to evaluate the AI RMF’s risk management processes and definitions of trustworthy AI when developing common specifications under the forthcoming AI Act.
Next steps
In addition to influencing AI risk management discussions amongst policymakers and regulators abroad, the Framework is also likely to be a starting point for many sector-specific AI risk management frameworks domestically. Accordingly, it may be worthwhile for companies that develop or use AI to familiarize themselves with the Framework’s processes and substance. In addition, they may wish to take the following steps:
- Identify gaps in existing governance controls to address the specific risks associated with AI systems.
- Develop an internal governance framework for the management of AI risk that takes into account the principles of the AI RMF.
- Consider providing comments on the AI RMF Playbook or contributing to the other AI RMF companion documents.
Authored by W. James Denvil, Filippo Raso, and Dan Whitehead.
Brittney Griffin, a Senior Paralegal in our New York office, contributed to this entry.