Analysis

The Inigo Montoya Problem for Trustworthy AI (International Version)

Comparing National Guidance Documents

Emelia Probasco

Kathleen Curlee

October 2023

Australia, Canada, Japan, the United Kingdom, and the United States all emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, each country defines these principles in slightly different ways, and those differences could have large impacts on interoperability and the formulation of international norms. This creates what we call the “Inigo Montoya problem” in trustworthy AI, inspired by the line from the movie “The Princess Bride”: “You keep using that word. I do not think it means what you think it means.”

Related Content

Data Brief

Who Cares About Trust?

July 2023

Artificial intelligence-enabled systems are transforming society and driving an intense focus on what policy and technical communities can do to ensure that those systems are trustworthy and used responsibly. This analysis draws on prior work…

When the technology and policy communities use terms associated with trustworthy AI, could they be talking past one another? This paper examines the use of trustworthy AI keywords and the potential for an “Inigo Montoya…

Analysis

A Common Language for Responsible AI

October 2022

Policymakers, engineers, program managers, and operators need the bedrock of a common set of terms to instantiate responsible AI for the Department of Defense. Rather than create a DOD-specific set of terms, this paper argues…

Allies of the United States have begun to develop their own policy approaches to responsible military use of artificial intelligence. This issue brief looks at key allies with articulated, emerging, and nascent views on how…