The international system is at an artificial intelligence fulcrum point. AI is often faster, more fearless, and more efficient than humans. National security agencies and militaries have been quick to explore adopting AI as a new tool to improve their security and effectiveness. AI, however, is imperfect. If given control of critical national security systems such as lethal autonomous weapons, buggy, poorly tested, or unethically designed AI could cause great harm and undermine bedrock global norms such as the law of war. To balance the potential harms and benefits of AI, international AI arms control regulations may be necessary.
Proposed regulatory paths forward, however, are diverse. Potential solutions include AI design standards to ensure system safety; bans on the more ethically questionable AI applications, such as lethal autonomous weapon systems; and limits on the types of decisions AI may make, such as the decision to use force. Whatever regulatory scheme is chosen, however, there must be a way to verify an actor’s compliance. AI verification gives teeth to AI regulation.
This report defines “AI Verification” as the process of determining whether countries’ AI and AI systems comply with treaty obligations. “AI Verification Mechanisms” are tools that ensure regulatory compliance by discouraging or detecting the illicit use of AI by a system or illicit AI control over a system.
Despite the importance of AI verification, few practical verification mechanisms have been proposed to support the regulations under consideration. Without proper verification mechanisms, AI arms control will languish. To this end, this report seeks to jumpstart the regulatory conversation by proposing mechanisms of AI verification to support AI arms control.
Due to the breadth of AI and of AI arms control policy goals, many approaches to AI verification exist; addressing them all is well beyond the scope of this report. For brevity, this report addresses the subcase of verifying whether an AI exists in a system and, if so, what functions that AI could command. It also focuses on mechanical systems, such as military drones, as the target of regulation and verification mechanisms. This reflects the focus of a wide range of regulatory proposals and the policy goals of many organizations advocating for AI arms control. In sum, this report concentrates on verification mechanisms that support many of the most popular AI arms control policy goals. Other approaches naturally exist and should be studied further; however, they are beyond the scope of this initial report on the subject. To these ends, this report presents several novel verification mechanisms, including:
System Inspection Mechanisms: Mechanisms to verify, through third-party inspection, whether any AI exists in a given system and whether that AI could control regulated functions:
- Verification Zone Inspections: An inspection methodology that uses limited-scope software inspections to verify that any AI in a system cannot control certain functions. The subsystems that AI must not control, for example, subsystems governing the use of force, are designated as “verification zones.” If these verification zones can be verified as free from AI control, the system as a whole is compliant. This limited inspection scope reduces the complexity of system inspections, protects subsystems irrelevant to AI regulation, and renders inspections less intrusive. A minimal sketch of such a zone check appears after this list.
- Hardware Inspections: Whether AI exists in a system, and which functions it could control, can be verified by physically examining the system for AI chips and tracing which subsystems those chips control. A sketch of the bookkeeping behind such a hardware audit also appears below.
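To make the verification-zone idea concrete, the sketch below runs a reachability check over a toy call graph to confirm that no AI component can invoke any function inside a designated zone. The module names, call graph, and zone contents are all hypothetical; a real inspection would extract them from the system’s actual software.

```python
# Minimal sketch of a verification-zone check, assuming a call graph has
# already been extracted from the system's software. Compliance here means
# no AI component can transitively reach a use-of-force function.
from collections import deque

# Hypothetical call graph: caller -> set of direct callees.
CALL_GRAPH = {
    "ai.perception": {"nav.planner"},
    "nav.planner": {"nav.actuators"},
    "operator.console": {"fire.arm", "fire.release"},
}

AI_COMPONENTS = {"ai.perception"}                  # modules identified as AI
VERIFICATION_ZONE = {"fire.arm", "fire.release"}   # use-of-force functions

def reachable_from(start: str) -> set[str]:
    """Return every function transitively callable from `start`."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for callee in CALL_GRAPH.get(node, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def zone_is_ai_free() -> bool:
    """Compliant if no AI component can reach the verification zone."""
    return all(reachable_from(ai).isdisjoint(VERIFICATION_ZONE)
               for ai in AI_COMPONENTS)

print("compliant" if zone_is_ai_free() else "NON-COMPLIANT")
```

Note the benefit this illustrates: only the verification zone and the paths into it need review, so subsystems the AI may legitimately control, such as navigation here, stay outside the inspection’s scope.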
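Hardware inspections are chiefly a physical exercise, but their bookkeeping can also be sketched. The example below checks a parts inventory against a reference list of known AI accelerator chips and reports which subsystem each suspect part is wired into; every part number and subsystem name is invented for illustration.

```python
# Minimal sketch of the record-keeping behind a hardware inspection. A real
# inspection would build the inventory from physical teardown or bus
# enumeration; these part numbers and subsystems are hypothetical.

KNOWN_AI_ACCELERATORS = {"ACME-NPU-9", "EXAMPLECO-TPU-2"}

# Inventory entries: (part number, subsystem the part is wired into).
INVENTORY = [
    ("GENERIC-MCU-1", "navigation"),
    ("ACME-NPU-9", "targeting"),
    ("GENERIC-DSP-4", "communications"),
]

def flag_ai_chips(inventory):
    """Yield (part, subsystem) pairs for parts on the AI accelerator list."""
    for part, subsystem in inventory:
        if part in KNOWN_AI_ACCELERATORS:
            yield part, subsystem

for part, subsystem in flag_ai_chips(INVENTORY):
    print(f"AI chip {part} found in {subsystem} subsystem")
```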
Sustained Verification Mechanisms: Tools that can be used to verify that a system remains compliant after an initial inspection:
- Preserving Compliant Code with Anti-Tamper Techniques: These techniques protect system software from post-inspection tampering that could alter what AI can control. Methods chosen to illustrate such techniques include cryptographic hashing of code and code obfuscation. Cryptographically hashed system software also gives inspectors a record of the expected system design that can be used to monitor software compliance over the long term. A sketch of the hashing step follows this list.
- Continuous Verification through Van Eck Radiation Analysis: Verified systems can be fitted with a mechanism that monitors the Van Eck radiation the system emits as its code runs. Aberrations in this radiation could indicate non-compliant manipulation. A sketch of the underlying anomaly check also follows below.
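The hashing half of the anti-tamper approach is simple to illustrate. In the sketch below, inspectors record a SHA-256 digest of the verified software image and re-hash it during later checks; any mismatch signals post-inspection modification. The image file name is hypothetical.

```python
# Minimal sketch of cryptographic hashing for tamper detection. SHA-256 is
# one standard choice; the image file name below is hypothetical.
import hashlib
from pathlib import Path

def digest(image: Path) -> str:
    """Return the SHA-256 hex digest of a software/firmware image."""
    return hashlib.sha256(image.read_bytes()).hexdigest()

# Recorded at the initial inspection and kept by the inspectors.
recorded = digest(Path("system_firmware.bin"))

# Re-computed during any later check; a mismatch means the software changed.
if digest(Path("system_firmware.bin")) != recorded:
    print("software modified since inspection")
```

Hashing detects modification but does not prevent it; code obfuscation, the other technique mentioned above, aims instead to make meaningful tampering harder in the first place.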
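At its simplest, the Van Eck monitoring idea reduces to anomaly detection on an emission signal. The sketch below flags a current emission trace whose mean drifts well outside a baseline captured while the verified code ran; real analysis would involve far richer signal processing, and the traces and threshold here are synthetic.

```python
# Minimal sketch of the anomaly check behind Van Eck monitoring. The
# baseline and traces are synthetic stand-ins for real emission samples.
import statistics

BASELINE = [0.98, 1.02, 1.00, 0.99, 1.01, 1.00]   # emissions on verified code
THRESHOLD = 3.0                                    # allowed drift, in sigmas

def is_aberrant(trace: list[float]) -> bool:
    """Flag a trace whose mean drifts > THRESHOLD sigmas from baseline."""
    mu = statistics.mean(BASELINE)
    sigma = statistics.stdev(BASELINE)
    return abs(statistics.mean(trace) - mu) > THRESHOLD * sigma

print(is_aberrant([1.00, 0.99, 1.01, 1.00, 1.02, 0.98]))  # False: compliant
print(is_aberrant([1.40, 1.38, 1.45, 1.42, 1.39, 1.41]))  # True: investigate
```

Because the monitor observes side-channel emissions rather than the code itself, it could in principle run continuously without requiring ongoing access to the system’s software.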
This report introduces these mechanisms and explains why they have the potential to support an AI verification regime. However, further research is needed to fully assess their technical viability and whether they can be implemented in an operationally practical manner.