The Center for Security and Emerging Technology (CSET) at Georgetown University offers the following comments in response to the Request for Information (RFI) Related to NIST’s Assignments Under Sections 4.1, 4.5, and 11 of the Executive Order Concerning Artificial Intelligence (88 FR 88368). A policy research organization within Georgetown University’s Walsh School of Foreign Service, CSET produces data-driven research at the intersection of security and technology, providing nonpartisan analysis to the policy community. We appreciate the opportunity to offer these comments.
We have organized our feedback according to six topics featured in the RFI:
1. red-teaming;
2. criteria for defining an error, incident, or negative impact, and associated governance policies;
3. technical requirements for managing errors, incidents, or negative impacts;
4. AI risk management and governance;
5. strategies for driving adoption and implementation of AI standards; and
6. potential mechanisms, venues, and partners for promoting standards development.
Our feedback was informed by the following six CSET reports and analyses, which we recommend for additional reading:
- What Does AI Red-Teaming Actually Mean?
- Adding Structure to AI Harm: An Introduction to CSET’s AI Harm Framework
- Understanding AI Harms: An Overview
- AI Incident Collection: An Observational Study of the Great AI Experiment
- Translating AI Risk Management Into Practice
- Repurposing the Wheel: Lessons for AI Standards