On October 29, 2020, CSET and the Syracuse University Institute for Security Policy and Law sponsored a symposium for national security law practitioners titled “National Security Law and the Coming AI Revolution.” The discussants—lawyers, policymakers, and technologists—addressed the following topics:
- AI as a constellation of technologies;
- AI and the Law of Armed Conflict;
- AI ethics, bias, data, and principles;
- AI and national security decision-making; and
- The role of law and lawyers.
Two of the discussants have gone on to senior national security technology posts in the Biden Administration. Former CSET Founding Director Jason Matheny is now Deputy Assistant to the President for Technology and National Security, among other titles. Tarun Chhabra is now Senior Director for Technology and National Security on the NSC staff. Other senior discussants continue their important work on AI at the Privacy and Civil Liberties Oversight Board (PCLOB), the Joint Artificial Intelligence Center (JAIC), the Naval War College, and the Office of Naval Research, and in academia and industry. A list of discussants can be found in the symposium report.
The event drew more than 180 attendees. To make the discussion available to a larger audience, the sponsors summarized many of the observations in the report. The following collective themes emerged from the panels:
- AI will transform national security practice, including legal practice. National security will be better served by the meaningful, thoughtful, and purposeful application of law and ethics to AI. This is not an either-or choice between security on the one hand and law and ethics on the other. Whatever we do to further law and ethics helps secure our competitive advantage by improving accuracy, efficacy, and confidence in the results.
- Policymakers, commanders, and technologists need to understand law so that they can spot issues and create the time and space to embed law and ethics in AI applications. If the government waits to apply law and ethics at the use or decision point, it may be too late to meaningfully influence outcomes. Therefore, as we consider and apply the concept of human-machine teaming, we should pay equal attention to teaming between lawyers, policymakers, and technologists to make purposeful legal and ethical AI choices.
- It is time for national security practitioners to move from bromides and principles to the application of those principles to specific AI applications. Negotiations about AI ethics and norms will need to be on a case-by-case, scenario-by-scenario basis to be meaningful.
- Fundamentally, AI is a computer algorithm designed to “predict optimal future results based on past experience and recognized patterns.” It is the task of policymakers to determine whether the AI or a human has the authority to act on those predictions and make decisions (a minimal sketch of this distinction follows this list).
- AI is both nimble and brittle. It has the potential to adapt in highly dynamic, unstructured situations; it can adapt at machine speed amid overwhelming volumes of incoming data; and it does not feel fear or fatigue. However, today's AI systems are not yet safe, secure, or reliable enough to process real-time data in rapidly changing environments, update themselves, and learn in real time, and thus cannot yet be relied on for targeting or other immediate decision support. This is especially true because adversaries will target the AI systems themselves.
- Part of taking responsibility for AI, including mitigating AI bias, is involving stakeholders in all stages of development and, where possible, deployment. However, as we field more and more autonomous systems, it will become increasingly difficult to dedicate the time and resources needed to refine the decision-making of each of those systems. In other words, with the proliferation of autonomous systems, we may be less likely to engage in the kind of meaningful human-machine teaming that ethical deployment requires.
- Law and ethics must be applied throughout the AI lifecycle. Practitioners should think intentionally about issues such as bias from the beginning of a project. AI often fails when conditions change, and conditions will change in the national security world. Ethical failures can occur at any point in an AI software program. Moreover, large organizations such as DOD operate at a scale where even a one-in-a-thousand or one-in-a-million problem becomes likely to occur (a worked example follows this list).
- Lawyers should distinguish between law, policy, and ethics. Without that clarity, government actors may be discouraged from applying higher ethical standards, lest those standards later be construed as legally binding rather than as wise policy choices.
- National security lawyers working in a classified environment have a heightened responsibility to be conscious of bias.
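To make the prediction-versus-decision distinction concrete, here is a minimal, hypothetical Python sketch. The model, confidence threshold, and routing logic are all illustrative assumptions, not any agency's actual system; the point is that the algorithm only produces a prediction, while a deliberate policy choice, separate from the model, determines whether a human or the machine acts on it.

```python
# Hypothetical sketch: an AI produces a prediction; policy decides who acts on it.
# All names, labels, and thresholds here are illustrative, not from any real system.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # the model's predicted outcome
    confidence: float  # the model's confidence, 0.0-1.0

def model_predict(features: list[float]) -> Prediction:
    # Stand-in for a trained model: predicts from past patterns in data.
    score = sum(features) / max(len(features), 1)
    return Prediction(label="flag" if score > 0.5 else "no flag",
                      confidence=abs(score - 0.5) * 2)

def decide(pred: Prediction, human_in_the_loop: bool) -> str:
    # The policy choice lives here, not in the model: who has the
    # authority to act on the prediction?
    if human_in_the_loop or pred.confidence < 0.9:
        return f"Route to human reviewer (model suggested: {pred.label})"
    return f"Machine acts autonomously on: {pred.label}"

if __name__ == "__main__":
    pred = model_predict([0.7, 0.9, 0.4])
    print(decide(pred, human_in_the_loop=True))
```

The design choice worth noticing is that `decide` is a separate function from `model_predict`: the delegation question is answered by policy, and can be changed without retraining the model.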
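The point about rare failures at organizational scale can be made with simple arithmetic. Under the illustrative assumption that decisions are independent, a system with failure probability p per decision, run across n decisions, fails at least once with probability 1 - (1 - p)^n. The sketch below computes this for two hypothetical rates; the numbers are for illustration only.

```python
# Illustrative arithmetic: rare per-decision failures become near-certain at scale.
# Assumes independent decisions; real failures may be correlated, which is worse.

def prob_at_least_one_failure(p: float, n: int) -> float:
    """Probability of at least one failure in n independent decisions."""
    return 1.0 - (1.0 - p) ** n

for p, n in [(1e-6, 1_000_000),   # one-in-a-million failure, a million decisions
             (1e-3, 10_000)]:     # one-in-a-thousand failure, ten thousand decisions
    print(f"p={p:g}, n={n:,}: P(>=1 failure) = {prob_at_least_one_failure(p, n):.1%}")
```

Running this shows that a one-in-a-million failure rate over a million decisions yields roughly a 63 percent chance of at least one failure. For an organization of DOD's scale, the question is not whether such a failure will occur but how it will be detected and contained.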
The sponsors encourage readers of this blog to review the report, which offers detail and nuance on the themes identified above. Thank you.