Data Brief

Who Cares About Trust?

Clusters of Research on Trustworthy AI

Autumn Toney

Emelia Probasco

July 2023

Artificial intelligence-enabled systems are transforming society and driving an intense focus on what policy and technical communities can do to ensure that those systems are trustworthy and used responsibly. This analysis draws on prior work about the use of trustworthy AI terms to identify 18 clusters of research papers that contribute to the development of trustworthy AI. In identifying these clusters, the analysis also reveals that some concepts, like "explainability," are forming distinct research areas, whereas other concepts, like "reliability," appear to be accepted as metrics and broadly applied.
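
To give a flavor of how research papers can be grouped into thematic clusters, here is a minimal sketch using TF-IDF features and k-means. The toy abstracts, the feature choice, and the algorithm are illustrative assumptions for demonstration only, not the pipeline documented in the full report.

```python
# Illustrative sketch: group paper abstracts into topic clusters.
# The corpus, feature choice (TF-IDF), and algorithm (k-means) are
# assumptions for demonstration, not the method used in the report.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "We propose a post-hoc explainability method for deep networks...",
    "Reliability metrics for safety-critical machine learning systems...",
    "Fairness-aware training reduces disparate impact in classifiers...",
]  # placeholder abstracts; a real corpus would hold thousands of papers

# Represent each abstract by its weighted term frequencies.
features = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# Partition the papers into k clusters (the report identifies 18;
# here k is capped by the size of the toy corpus).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
for abstract, label in zip(abstracts, kmeans.labels_):
    print(label, abstract[:60])
```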

Related Content

When the technology and policy communities use terms associated with trustworthy AI, could they be talking past one another? This paper examines how trustworthy AI keywords are used and the potential for an “Inigo Montoya problem,” named for the line from “The Princess Bride”: “You keep using that word. I do not think it means what you think it means.”

Data Brief

Identifying AI Research

July 2023

The choice of method for surfacing AI-relevant publications shapes downstream research findings. This report quantitatively analyzes the methods available to researchers for identifying AI-relevant research within CSET’s merged corpus and demonstrates the research implications of each.
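
One widely used family of methods matches keywords against titles and abstracts. The sketch below illustrates that general idea; the keyword list and record fields are assumptions for demonstration, not the criteria the report evaluates.

```python
import re

# Illustrative keyword-based filter for AI-relevant publications.
# The keyword list and record fields are assumptions for demonstration,
# not the actual criteria analyzed in the report.
AI_KEYWORDS = re.compile(
    r"\b(artificial intelligence|machine learning|deep learning|neural network)\b",
    re.IGNORECASE,
)

def is_ai_relevant(record: dict) -> bool:
    """Flag a publication whose title or abstract matches an AI keyword."""
    text = f"{record.get('title', '')} {record.get('abstract', '')}"
    return bool(AI_KEYWORDS.search(text))

papers = [
    {"title": "Explainable deep learning for radiology", "abstract": "..."},
    {"title": "Soil chemistry of alpine meadows", "abstract": "..."},
]
print([p["title"] for p in papers if is_ai_relevant(p)])
```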

Analysis

A Common Language for Responsible AI

October 2022

Policymakers, engineers, program managers, and operators need a common set of terms as the bedrock for instantiating responsible AI at the Department of Defense. Rather than creating a DOD-specific vocabulary, this paper argues that the DOD could benefit from adopting the key characteristics defined by the National Institute of Standards and Technology in its draft AI Risk Management Framework, with only two exceptions.