AI Safety and Automation Bias

The Downside of Human-in-the-Loop

Lauren Kahn, Emelia Probasco, and Ronnie Kinoshita

November 2024

Automation bias is a critical issue for artificial intelligence deployment. It can cause otherwise knowledgeable users to make crucial and even obvious errors. Organizational, technical, and educational leaders can mitigate these biases through training, design, and processes. This paper explores automation bias and ways to mitigate it through three case studies: Tesla Autopilot incidents, aviation incidents at Boeing and Airbus, and Army and Navy air defense incidents.
