The Center for Security and Emerging Technology (CSET) at Georgetown University offers the following comments in response to NIST's second draft of its AI Risk Management Framework (RMF). A policy research organization within Georgetown University's Walsh School of Foreign Service, CSET produces data-driven research at the intersection of security and technology, providing nonpartisan analysis to the policy community. We appreciate the opportunity to offer these comments and look forward to continued engagement with NIST throughout the Framework development process. Our response begins with general feedback on the RMF, followed by specific feedback organized by page of the document. We have also included general feedback on the AI RMF Playbook.
General NIST RMF Feedback:
- We would like to highlight revisions to the RMF that CSET views as substantial improvements. Key points of feedback that were incorporated into the second RMF draft and are aligned with CSET’s recommendations include:
- Elaborating on the audience, with examples and mapping to lifecycle stages
- Defining risk and incorporating discussion of “positive” risk
- Highlighting challenges to this process
- Clarifying whether functions are sequence-dependent; putting the Govern function before the Map, Measure, and Manage functions; and describing how the Govern function provides the organizational infrastructure needed for the rest of the functions
- Fleshing out the role of various stakeholders in carrying out the functions, especially Map
- Consider other AI systems as potential actors in Appendix A, Categories of AI Actors. As AI systems become more prevalent, organizations will need to account for interactions among them; interacting AI systems could damage one another or create new safety or performance issues.
- The RMF does not account for risks that organizations’ AI activities pose to the environment. For example, large-scale AI development can consume significant amounts of energy, with corresponding environmental impacts. We suggest NIST include the environment within the “People & Planet” stakeholder mapping and mention assessing the environmental impacts of AI in the AI RMF Core section, since the RMF already references impacts on society, third parties, and supply chains.1
- The inclusion of terms and definitions for the key characteristics is important for the follow-on work that will be done by the many stakeholders implementing the AI RMF, and the use of ISO definitions where possible is an improvement over the earlier draft. The coupling of certain terms, however, could create unnecessary confusion for systems engineers and operators. Being explicit and clear about each term—even if it produces a long list—is essential for new stakeholders who must navigate the distinct challenges each characteristic presents in AI development, deployment, and maintenance. Additionally, coupling the terms as presented implies a special pairing or tension between them. While such a relationship may exist, so do other pairings and tensions, and those currently unlisted tensions should not be dismissed or diminished, as the current construct implies. Assessing the requirements of each characteristic individually must be done in tandem with weighing how that characteristic intersects with the others. Combining them a priori is misleading and may ultimately be unhelpful, especially as more stakeholders with less expertise come to rely on the AI RMF.