NIST and NTIA publish key resources on AI risk management and safety

President Biden’s October 2023 AI executive order (AI EO), which aimed to promote the development of guidance and regulations to support a safe, secure, and trustworthy AI ecosystem, is spurring agencies across the federal government to study and engage stakeholders on issues related to AI deployment, safety, and security. Recently, the National Institute of Standards and Technology (NIST) and the National Telecommunications and Information Administration (NTIA) released new AI safety and risk management guidance, delivering on their mandates under the AI EO. NIST issued four AI risk management documents, including draft guidance on Managing Misuse Risk for Dual-Use Foundation Models, which is open for comment until September 9, 2024. Meanwhile, NTIA published a report on Dual-Use Foundation Models with Widely Available Model Weights, recommending regulatory forbearance and continued monitoring of developments.

NIST

As the federal government’s standards-setting agency, NIST has been busy drafting guidance on a range of AI safety and risk management topics. Most recently, NIST released one draft and three final documents focused on bridging the risk gaps between traditional software and AI technologies. The draft document provides recommendations for managing misuse risks for powerful generative AI models. Two of the final guidance documents address generative AI risks and complement existing NIST frameworks. The third outlines proposals for global engagement on AI standards.

Addressing the risks of dual-use foundation model misuse

The draft document NIST AI 800-1: Managing Misuse Risk for Dual-Use Foundation Models identifies best practices for developers of dual-use foundation models to manage the risk of deliberate, malicious misuse. It is the first guidance document from NIST’s new Artificial Intelligence Safety Institute (AISI). The draft identifies seven objectives for managing misuse risk, each accompanied by voluntary practices, along with recommendations on implementation and on providing transparency about risk management activities. The seven objectives are:

  1. Anticipating potential misuse risk;

  2. Establishing plans for managing misuse risk;

  3. Managing the risks of model theft;

  4. Measuring the risk of misuse;

  5. Ensuring that misuse is managed before deploying foundation models;

  6. Collecting and responding to information about misuse after deployment; and

  7. Providing appropriate transparency about misuse risk.

AISI encourages organizations to tailor these objectives and practices to their business, taking into account their specific activities, risk profiles, and current and future needs. In an appendix, AISI also provides a non-exhaustive list of potential safeguards against misuse.

AISI also published a formal request for comment, with a September 9, 2024, submission deadline. The request seeks input on all aspects of the draft guidance, including the objectives, practices, and recommendations, as well as areas where further empirical evidence could be developed. It also asks five specific questions:

  1. “What practical challenges exist to meeting the objectives outlined in the guidance?

  2. How can the guidance better address the ways in which misuse risks differ based on deployment (e.g., how a foundation model is released) and modality (text, image, audio, multimodal, and others)?

  3. How can the guidance better reflect the important role for real-world monitoring in making risk assessments?

  4. How can the guidance’s examples of documentation better support communication of practically useful information while adequately addressing confidentiality concerns, such as protecting proprietary information?

  5. How can the guidance better enable collaboration among actors across the AI supply chain, such as addressing the role of both developers and their third-party partners in managing misuse risk?”

Adapting governance programs for generative AI

NIST AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile is a cross-sectoral profile of, and companion resource to, NIST’s AI Risk Management Framework (AI RMF), tailored to generative AI. Released in January 2023, the AI RMF is a voluntary tool to help organizations incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. This new final guidance introduces additional considerations to address the unique risks posed by generative AI and proposes potential mitigations for generative AI risk management that may align with a business’s goals and priorities.

Similarly, NIST SP 800-218A: Secure Software Development Practices for Generative AI and Dual-Use Foundation Models is final guidance that builds on version 1.1 of NIST’s Secure Software Development Framework (SSDF) by adding a new SSDF Community Profile to support requirements from the AI EO. The Community Profile includes tools, guidance, and informative references for AI model development throughout the software development life cycle, aimed at producers of AI models, producers of AI systems that use those models, and acquirers of those AI systems.

U.S. leadership on global AI standards

NIST AI 100-5: A Plan for Global Engagement on AI Standards outlines a strategy for the United States to drive worldwide development and implementation of AI-related consensus standards, cooperation, coordination, and information sharing. The plan contemplates engagement with a broad range of stakeholders, including standards development organizations, industry, academia, civil society, and governments across the globe and in the United States. It addresses the full lifecycle of standards-related activity and considers both horizontal and vertical approaches. It promotes an open, transparent, consensus-driven process that is voluntary and market-driven and recommends a variety of engagement activities.

NTIA

In its report on managing the risks of dual-use foundation models with widely available model weights, NTIA outlined a restrained regulatory approach. NTIA sought comment on the risks and benefits of these models in February 2024 and reviewed more than 300 comments. NTIA also conducted additional stakeholder outreach, including two public events with policy and technology experts. Drawing on this feedback, the report provides a non-exhaustive review of the benefits and risks of dual-use foundation models with widely available weights, with an emphasis on the “marginal” risks unique to the deployment of these models compared to closed-weight AI models. The risks and benefits are divided into the broad categories of public safety; societal risks and well-being; competition, innovation, and research; geopolitical considerations; and uncertainty in future risks and benefits.

The report also considers whether the U.S. government should restrict access to open-weight models, but it concludes that the available evidence does not currently support such restrictions. Instead, the report recommends that policymakers monitor the risks associated with these models and prepare to act if needed. The specific recommendations include:

  1. Collecting evidence through:

    a. Standards and accountability mechanisms for disclosure and transparency;

    b. Research on the safety, security, and trustworthiness of foundation models and high-risk models and their downstream uses;

    c. Research into the present and future capabilities and limitations of specific models and risk mitigations; and

    d. Risk portfolios, indicators, and thresholds.

  2. Evaluating evidence through:

    a. Assessing the lag time between the use of specific capabilities in leading proprietary models and in open models;

    b. Developing benchmarks and definitions for monitoring and potential action; and

    c. Maintaining federal government expertise in technical, legal, social science, and policy domains to promote accountability.

  3. Acting on evaluations through measures such as:

    a. Restrictions on access; or

    b. Other risk mitigation measures.

  4. Keeping open the possibility of additional government action.

The report concludes that this wait-and-see approach honors “longstanding U.S. government policies supporting widespread access to digital technologies and their benefits, while nonetheless preparing for the potential future development of models for which an alternate approach may be justified.”

Next Steps

These five documents are important components of a nascent and evolving U.S. AI risk management framework for the responsible development and deployment of AI tools. Companies that engage with generative AI technologies or use powerful open-source models should review this guidance in relation to their products, services, operations, and practices. Companies that wish to engage with AISI regarding the draft guidance on Managing Misuse Risk for Dual-Use Foundation Models should note the September 9, 2024, comment deadline.

Hogan Lovells’ global team has produced several resources to help organizations navigate the latest market-moving trends in AI that we are seeing across sectors, including our Global AI Trends Guide. Our AI Hub centralizes our robust collection of AI articles, webinars, podcasts, and more into one location. It also features a regulatory tracking map to help readers stay up to date on the rapidly evolving legal landscape. With our global reach, depth of technical knowledge, and industry expertise, Hogan Lovells stands ready to provide strategic AI guidance and help organizations anticipate tomorrow’s challenges before they arise.

 

Authored by Mark Brennan, Katy Milner, Ryan Thompson, and Ambia Harper.

 
