The spread of artificial intelligence across the business, defense, and security sectors has the potential to speed operations, provide new capabilities, and increase efficiencies. The integration of AI, however, brings an upsurge in risk and potential harm from AI accidents, misuse, and unexpected behavior. Growing concern that AI could have unforeseen negative impacts on U.S. commercial, social, infrastructure, and national security interests highlights the need for AI assessment that can help reduce potential harm and ensure that AI applications and technologies are safe and trustworthy.
The Center for Security and Emerging Technology has published studies related to AI safety, accidents, and testing. Building on this work, CSET has launched a new line of research titled “AI Assessment” to investigate the development and adequacy of current AI assessment approaches, along with the availability and sufficiency of tools and resources for implementing them. Specifically, the research will:
1. Understand and contribute to the development and adoption of AI standards, testing procedures, best practices, regulation, auditing, and certification.
2. Characterize the wide variety of AI products, tools, services, data, and resources that influence AI assessment.
3. Identify needs for additional infrastructure, academic research, tools, and budgetary resources to support the demonstration and adoption of assessment approaches.
4. Explore global differences and similarities in AI assessment, standards, and testing practices across sectors and government entities.
There is no simple, one-size-fits-all assessment approach that can be adequately applied to the diverse range of AI systems, which vary widely in functionality, capabilities, and outputs. These systems are also created using different tools, data types, and resources, adding further diversity to the assessment challenge. A collection of approaches and processes is therefore needed to cover the wide range of AI products, tools, services, and resources. Additionally, because AI systems will be created in ever greater numbers and at an ever faster pace, assessment resources must include techniques and tools that can scale to handle both the variety and the quantity of AI systems. Assessment needs may also change as new AI innovations emerge.

This research will provide a foundation for assessment that can be adapted to future needs. It will also offer a better understanding of current U.S. needs and capabilities for AI assessment, and support decisions on AI policy, resourcing, research, and national security.