Reports

The Mechanisms of AI Harm: Lessons Learned from AI Incidents

Mia Hoffmann

October 2025

As artificial intelligence systems are deployed and affect more aspects of daily life, effective risk mitigation becomes imperative to prevent harm. This report analyzes AI incidents to improve our understanding of how risks from AI materialize in practice. By identifying six mechanisms of harm, it sheds light on the different pathways to harm and on the variety of mitigation strategies needed to address them.


Related Content

This follow-up report builds on the foundational framework presented in the March 2024 CSET issue brief, “An Argument for Hybrid AI Incident Reporting,” by identifying key components of AI incidents that should…

How to govern artificial intelligence is a concern that is rightfully top of mind for lawmakers and policymakers. To govern AI effectively, regulators must 1) know the terrain of AI risk and harm by tracking incidents…

Artificial intelligence incidents have been occurring alongside the rapid advancement of AI capabilities over the past decade. However, there is not yet a concerted policy effort in the United States to monitor, document, and aggregate…

Reports

Adding Structure to AI Harm

July 2023

Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are…