Executive Summary
The deputy secretary of defense’s memorandum “Implementing Responsible Artificial Intelligence in the Department of Defense” articulates five ethical principles for artificial intelligence systems: responsible, equitable, traceable, reliable, and governable.¹ Those guiding principles have advanced the Department of Defense’s thinking on responsible AI, but they are not sufficient on their own to implement responsible AI across development, acquisition, and operations. One foundational task toward implementing them, as laid out in the DOD memorandum “Responsible Artificial Intelligence Strategy and Implementation Pathway,” is standardizing the language and definitions that describe the characteristics of responsible AI.²
Policymakers, engineers, researchers, program managers, and operators all need the bedrock of clear, well-defined terms appropriate to their particular tasks in developing and operationalizing responsible AI systems. Creating those standard terms and definitions requires input from all the communities involved in realizing responsible AI, both inside and outside the DOD. Fortunately, the National Institute of Standards and Technology has already begun building a community-defined taxonomy for responsible AI as part of its draft AI Risk Management Framework (AI RMF), and the DOD stands to benefit by leveraging that work.
The rough alignment of NIST’s trustworthy characteristics with the DOD’s ethical principles provides a basis for the two organizations to work together, and with industry, to develop and deploy responsible AI. That broad consensus should also be the starting point for agreement on specific terms and definitions, where appropriate, that will clearly guide developers, managers, and operators. By adopting or adapting NIST’s community-defined terms and definitions, the DOD stands to gain in three ways:
- first, by reducing misunderstandings that can lead to friction among the parties developing AI in the DOD, government, and industry;
- second, by having singularly focused and precise terms to guide developers, testers, policymakers, and operators in their efforts to develop, acquire, and operationalize responsible AI; and,
- third, by joining NIST and the rest of the U.S. government in projecting a consistent set of terms and definitions as norms of responsible AI are discussed and debated internationally.
This paper argues that the DOD should adopt or otherwise adapt NIST’s draft taxonomy as the standardized language for responsible AI in the DOD, excepting only the two guiding principles that are truly unique to the DOD’s context and mission: “responsible” and “traceable,” as shown in Table 1.
Table 1: Summary of Term/Definition Evolution Recommendations

| Recommendation | Terms |
| --- | --- |
| Keep DOD-unique terms and definitions | Responsible; Traceable |
| Adapt DOD terms and definitions to NIST’s terms and definitions | Reliable → Reliability; Governable → Safe; Equitable → Managing Bias |
| Adopt terms and definitions from NIST | Accuracy; Robustness; Transparency; Explainability; Interpretability; Privacy-Enhanced |
1. Kathleen Hicks, “Implementing Responsible Artificial Intelligence in the Department of Defense,” May 26, 2021, https://media.defense.gov/2021/May/27/2002730593/-1/-1/0/IMPLEMENTING-RESPONSIBLE-ARTIFICIAL-INTELLIGENCE-IN-THE-DEPARTMENT-OF-DEFENSE.PDF.
2. DOD Responsible AI Working Council, “U.S. Department of Defense Responsible Artificial Intelligence Strategy and Implementation Pathway,” June 2022, https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF.