Executive Summary
Recent advances in generative artificial intelligence have captured public attention and spurred a litany of proposals for regulating this transformative technology. A core question in these discussions is how U.S. government agencies can use their existing authorities to govern AI.
Virtually all of the sectors in which AI may be deployed are already regulated by one or more federal agencies. We argue that relying on existing agency authorities to govern AI in these sectors is the most effective strategy for promoting the safe development and deployment of the technology, at least in the near term. This approach would allow policymakers to respond more quickly to emerging risks and developments in the field while leveraging the sector-specific expertise that already exists across the federal government.
In this report, we outline a process that could help policymakers, regulators, researchers, and other interested parties outside of government identify existing legal authorities that could apply to AI and highlight areas where additional legislative or regulatory action may be needed. We believe this framework is applicable in many, if not most, sectors where AI is likely to be deployed. For the purposes of this report, we focus on how existing authorities can be applied in the commercial aviation sector, specifically to AI applications in aircraft onboard systems and air traffic control. We chose commercial aviation because the sector has already made substantial use of automation, and there is a rich literature on the potential issues that could arise from the introduction of AI and other technologies that enable greater use of autonomous systems.
Today, the commercial aviation sector is primarily regulated by the Federal Aviation Administration (FAA). In general, we found that the FAA’s existing authorities empower the agency to govern the development and deployment of AI systems in both air traffic control and aircraft onboard systems. However, we also identified gaps in the agency’s existing regulatory regime that will need to be addressed in order to mitigate the unique risks presented by AI. Updating protocols and processes related to software assurance, testing and evaluation, personnel training, pilot licensing, cybersecurity, and data management will help promote the safe use of AI across the commercial aviation sector.
Our case study of the FAA also spotlighted two common challenges that could hinder efforts to promote AI safety across the federal government:
- Talent: acquiring the in-house expertise to develop and implement effective AI governance frameworks.
- Testing and Evaluation (T&E): developing standards and benchmarks that would enable stakeholders to accurately assess the safety of AI systems in various contexts.
Without these two enabling factors, talent and T&E, federal agencies will not be well positioned to design and implement effective AI governance strategies. The Biden administration’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110) includes provisions aimed at addressing both the government’s AI talent gap and the lack of common T&E standards for AI, but whether these measures will prove effective remains to be seen. Looking ahead, the successful implementation of a comprehensive AI governance strategy will depend in large part on understanding where agencies already have the legal powers necessary to oversee AI and where new authorities, regulations, and processes may be needed.