Executive Summary
Policymakers contemplating the burgeoning field of artificial intelligence will find, if they have not already, that existing laws leave large gaps in determining how (and whether) AI will be developed and used in ethical ways. The law, of course, plays a vital role. While it does not guarantee wise choices, it improves the odds of a process that leads to such choices. Law can reach across constituencies and compel, where policy encourages and ethics guide. The legislative process can also serve as an effective mechanism to adjudicate competing values and to validate risks and opportunities.
But the law is not enough when it contains gaps because of a lack of federal nexus, federal interest, or the political will to legislate. And law may be too much when it imposes regulatory rigidity and burdens where flexibility and innovation are required. Sound ethical codes and principles can help fill legal gaps. To do so, policymakers have three main tools:
- Ethical Guidelines, Principles, and Professional Codes
- Academic Institutional Review Boards (IRBs)
- Principles of Corporate Social Responsibility (CSR)
Below is a primer on the limits and promise of these three mechanisms to help shape a regulatory regime that maximizes the benefits of AI and minimizes its potential harms.
This paper addresses specific considerations for policymakers:
1. Where AI is concerned, ethics codes should include indicative actions illustrating compliance with the code’s requirements. Otherwise, individual actors will independently define terms like “public safety,” “appropriate human control,” and “reasonable” according to their own competing values, resulting in inconsistent, lowest-common-denominator ethics. If the principle is “equality,” for example, an indicative action might require that the training data for a facial recognition application include a meaningful cross-section of genders and races.
2. Most AI research and development takes place in academic and corporate settings. Institutional Review Boards and Corporate Social Responsibility practices are therefore critical in filling the gaps between law and professional ethics, and in identifying regulatory gaps. Indeed, corporations might consider the use of IRBs as well.
3. Policymakers should consider the Universal Guidelines for Artificial Intelligence (detailed below) as a legislative checklist. Even if they don’t adopt the guidelines, the list will help them make purposeful choices about what to include or omit in an AI regulatory regime consisting of law, ethics, and CSR.
4. Academic leaders and government officials should actively consider whether to subject AI research and development to IRB review. They should further consider whether to apply a burden of proof or persuasion, or a precautionary principle, to high-risk AI activities, such as those that link AI to kinetic or cyber weapons or to warning systems, pose counterintelligence (CI) risks, or remove humans from an active control loop.
5. Corporations should create a governance process for deciding whether and how to adopt national security CSR policies that answer the question: What does it mean to be an American corporation? They should consider adopting a stakeholder model of CSR, in essence a public-private partnership, that includes input from consumers and employees as well as shareholders and the C-suite.
6. Policymakers, lawyers, and corporate leaders should communicate regularly about the four issues that may define the tone, tenor, and content of government-industry relations: uniformity in response, business with and in China and Russia, encryption, and privacy.
7. Where government agencies, corporations, and academic entities have adopted AI Principles, as many now have, it is time to move from statements of generic principle to the more difficult task of applying those principles to specific applications.