Executive Summary
AI incidents have occurred with growing frequency as AI capabilities have advanced rapidly over the last decade. Despite the number of incidents that have emerged during the development and deployment of AI, there is not yet a concerted U.S. policy effort to monitor, document, and compile AI incidents, or to use that data to deepen our understanding of AI harm, inform AI safety policies, and foster a robust AI safety ecosystem. In response to this critical gap, the objectives of this paper are to:
- Examine and assess existing AI incident reporting efforts—both databases and government initiatives.
- Elicit lessons from incident reporting databases from other sectors.
- Provide recommendations based on our analysis.
- Propose a federated[1] and standardized hybrid reporting framework that consists of:
- Mandatory reporting: Organizations must report certain incidents as directed by regulations, usually to a government agency.
- Voluntary reporting: Individuals and groups are permitted and encouraged to report incidents, often with clear guidelines and policies, and usually to a government agency or professional group.
- Citizen reporting: This is similar to voluntary reporting, but incidents are reported by the public, journalists, and organizations acting as watchdogs.
When discussing incident reporting in this paper, we emphasize reporting to an independent external organization (e.g., a government agency, professional association, or oversight body).
A survey of existing AI incident collection efforts identified only two citizen-reporting organizations actively capturing AI incidents. Additionally, a review of AI legislative initiatives around the world revealed that China, the European Union, Brazil, and Canada have enacted or proposed guidelines for mandatory AI incident reporting. Currently, there are no significant legislative initiatives to establish an AI incident reporting policy framework in the United States. The U.S. government documents that do mention AI incident reporting offer recommendations and guidelines for implementing reporting mechanisms, but they do not necessarily direct reports to an external entity.
Examining incident reporting frameworks from the healthcare, transportation, and cybersecurity sectors yielded valuable lessons. The healthcare sector's reliance on voluntary reporting has resulted in missed incidents and data points too inconsistent to compare in analysis. The transportation sector has an established incident reporting framework that includes investigative boards for identifying root causes, which then inform evidence-based safety measures. In cybersecurity, the U.S. government has issued a series of mandates requiring incident reporting in selected domains, shifting away from reliance on standards and other soft law.
Our analysis of the two AI incident reporting databases, emerging government initiatives related to AI incident reporting, and the various incident reporting systems in the healthcare, transportation, and cybersecurity sectors revealed both advantages and disadvantages. These insights offer several important lessons that can be applied to an AI incident reporting policy framework:
- Limited incident reporting frameworks are inadequate. The incident reporting initiatives examined in this paper typically emphasize only one or two of the three reporting categories (citizen, voluntary, or mandatory reporting). In isolation, each of these three frameworks has limitations.
- Inconsistent data creates meaningless data. Relying on state initiatives or domain-specific guidelines will likely produce uneven or inconsistent data that cannot be reliably aggregated for statistical analysis or used to accurately depict the many dimensions of AI harm.
- There is a need for a federated AI incident reporting framework. In the healthcare sector, the absence of a federated incident reporting policy framework has hampered incident data collection, resulting in fragmented and inconsistent reporting initiatives.
- Incident investigation supports effective safety policies. An investigative safety board can conduct root-cause analyses of significant AI incidents and provide feedback that helps AI actors improve their design and development, enables policymakers to craft effective regulations, and educates the public on AI safety.[2]
Based on the observations discussed above and the nature of AI as a general-purpose technology, we make the following recommendations to address the current gap in AI incident reporting.
- Establish clear policies for a federated hybrid reporting framework. Policymakers should establish a federated and comprehensive AI incident reporting policy framework to gather incident data across sectors and applications. AI incidents should be reported to an independent external entity (e.g., a government agency, professional association, or oversight body) to promote transparency and accountability in AI incident management. A hybrid reporting framework is supported by:
- Mandatory reporting: Relevant AI actors should be mandated to report covered incidents in a timely manner.
- Voluntary reporting: Voluntary reporting frameworks should also be established alongside the mandatory framework to capture AI incidents outside the mandatory jurisdiction.
- Citizen reporting: An easily accessible reporting framework should be made available for the public and all other stakeholders to report and document AI incidents.
- Develop a standardized and authoritative classification system. The AI incident reporting framework should include a standardized set of disclosed information, along with accommodations for the unique characteristics of distinct domains, such as privacy concerns and other regulatory requirements; a minimal illustrative sketch of such a record follows this list.
- Create an independent AI incident investigation agency. When a significant AI incident occurs, an independent board should investigate the root cause and provide evidence-based safety recommendations.
- Explore automated data collection mechanisms. Automated data collection mechanisms could be highly advantageous for obtaining technical and contextual information from AI incidents.
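To make the idea of a standardized set of disclosed fields concrete, the sketch below shows one way a cross-sector incident record could be structured. It is a minimal, hypothetical illustration: the `AIIncidentReport` and `ReportType` names, the specific fields, and the severity categories are our own assumptions, not a schema proposed in this paper or used by any existing reporting system.

```python
# Hypothetical sketch of a standardized AI incident record.
# All field names and categories are illustrative assumptions, not a proposed standard.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class ReportType(Enum):
    MANDATORY = "mandatory"   # required by regulation, reported to a government agency
    VOLUNTARY = "voluntary"   # submitted by AI actors under published guidelines
    CITIZEN = "citizen"       # submitted by the public, journalists, or watchdog groups


@dataclass
class AIIncidentReport:
    incident_id: str                      # identifier assigned by the receiving body
    report_type: ReportType               # which arm of the hybrid framework received it
    reported_on: date                     # date the report was filed
    sector: str                           # e.g., "healthcare", "transportation"
    system_description: str               # what AI system was involved and how it was used
    harm_description: str                 # what harm occurred or nearly occurred
    severity: str                         # e.g., "near miss", "minor", "significant"
    reporter_role: Optional[str] = None   # developer, deployer, end user, bystander, ...
    # Domain-specific additions (e.g., privacy redactions, sector regulatory references)
    # live in an extension field so the core fields stay comparable across sectors.
    domain_extensions: dict = field(default_factory=dict)
```

The design choice this sketch illustrates is the one argued for above: a common core of fields supports aggregation and statistical analysis across sectors, while a separate extension field absorbs domain-specific requirements without fragmenting the comparable core.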
Further research is needed on the specific content and considerations for implementing a comprehensive reporting framework applicable across sectors and applications; we will address these questions in a follow-up paper rather than in this one.
The ability to mitigate AI harms and manage their aftermath competently can shape public conversations about AI usage. An AI incident reporting framework must be integrated as an essential component of AI safety rather than developed as an afterthought in AI legislative initiatives. The present moment offers a prime opportunity to establish an AI incident reporting framework with relatively low stakes. However, this window is rapidly closing as AI becomes more prevalent across applications and sectors. A federated, comprehensive, and standardized framework will prevent data gaps and enhance data quality. Adopting a hybrid framework that includes mandatory, voluntary, and citizen reporting will improve data fidelity, providing a more accurate representation of the emerging trends in AI harm and risk.
1. For the purpose of this paper, we define a federated framework as a centralized framework prescribed by a singular authoritative government body or the federal government. The framework stipulates a set of minimum requirements that can be adapted and implemented across government agencies or nongovernmental organizations.
2. UNESCO defines AI actors as any actor involved in at least one stage of the AI system lifecycle; the term can refer to both natural and legal persons, such as researchers, programmers, engineers, data scientists, end-users, business enterprises, universities, and public and private entities, among others. See: “Recommendation on the Ethics of Artificial Intelligence,” UNESCO (2021), 10, https://unesdoc.unesco.org/ark:/48223/pf0000381137.