Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI's development and use, as well as biotechnology.

Annual Report

CSET at Five

Center for Security and Emerging Technology
| March 2024

In honor of CSET’s fifth birthday, this annual report is a look at CSET’s successes in 2023 and over the course of the past five years. It explores CSET’s different lines of research and cross-cutting projects, and spotlights some of its most impactful research products.

Analysis

Which Ties Will Bind?

Sam Bresnick, Ngor Luong, Kathleen Curlee
| February 2024

U.S. technology companies have become important actors in modern conflicts, and several of them have meaningfully contributed to Ukraine's defense. But many of these companies are deeply entangled with China, which could complicate their decision-making in a potential Taiwan contingency.

Analysis

Decoding Intentions

Andrew Imbrie, Owen Daniels, Helen Toner
| October 2023

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? Policymakers can send credible signals of their intent by making pledges or committing to certain actions for which they will pay a price — political, reputational, or monetary — if they back down or fail to make good on their initial promise or threat. Talk is cheap, but inadvertent escalation is costly to all sides.

Analysis

The Inigo Montoya Problem for Trustworthy AI (International Version)

Emelia Probasco, Kathleen Curlee
| October 2023

Australia, Canada, Japan, the United Kingdom, and the United States emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, these countries define each of these principles in slightly different ways that could have large impacts on interoperability and the formulation of international norms. This creates what we call the "Inigo Montoya problem" in trustworthy AI, inspired by "The Princess Bride" movie quote: "You keep using that word. I do not think it means what you think it means."

Data Brief

U.S. and Chinese Military AI Purchases

Margarita Konaev, Ryan Fedasiuk, Jack Corrigan, Ellen Lu, Alex Stephenson, Helen Toner, Rebecca Gelles
| August 2023

This data brief uses procurement records published by the U.S. Department of Defense and China's People's Liberation Army between April and November 2020 to assess and, where appropriate, compare what each military is buying when it comes to artificial intelligence. We find that the two militaries are prioritizing similar application areas, especially intelligent and autonomous vehicles and AI applications for intelligence, surveillance, and reconnaissance.

Data Brief

Who Cares About Trust?

Autumn Toney, Emelia Probasco
| July 2023

Artificial intelligence-enabled systems are transforming society and driving an intense focus on what policy and technical communities can do to ensure that those systems are trustworthy and used responsibly. This analysis draws on prior work about the use of trustworthy AI terms to identify 18 clusters of research papers that contribute to the development of trustworthy AI. In identifying these clusters, the analysis also reveals that some concepts, like "explainability," are forming distinct research areas, whereas other concepts, like "reliability," appear to be accepted as metrics and broadly applied.

Analysis

Defending the Ultimate High Ground

Corey Crowell, Sam Bresnick
| July 2023

China has poured resources into improving the resilience of its space architecture. But how much progress has Beijing made? This issue brief analyzes China’s space resilience efforts and identifies areas where the United States may need to invest to keep pace.

Data Brief

The Inigo Montoya Problem for Trustworthy AI

Emelia Probasco, Autumn Toney, Kathleen Curlee
| June 2023

When the technology and policy communities use terms associated with trustworthy AI, could they be talking past one another? This paper examines the use of trustworthy AI keywords and the potential for an “Inigo Montoya problem” in trustworthy AI, inspired by "The Princess Bride" movie quote: “You keep using that word. I do not think it means what you think it means.”

Analysis

Volunteer Force

Christine H. Fox, Emelia Probasco
| May 2023

U.S. tech companies have played a critical role in the international effort to support and defend Ukraine against Russia. To better understand and envision how these companies can help U.S. strategic interests, CSET convened a group of industry experts and former government leaders to discuss lessons learned from the ongoing war in Ukraine and what those lessons might mean for the future. The workshop’s discussion and this accompanying report expand on the themes explored in the October 2022 "Foreign Affairs" article, "Big Tech Goes to War."

Analysis

A Common Language for Responsible AI

Emelia Probasco
| October 2022

Policymakers, engineers, program managers, and operators need a common set of terms as the bedrock for instantiating responsible AI in the Department of Defense. Rather than create a DOD-specific set of terms, this paper argues that the DOD could benefit by adopting, with only two exceptions, the key characteristics defined by the National Institute of Standards and Technology in its draft AI Risk Management Framework.

Analysis

Responsible and Ethical Military AI

Zoe Stanley-Lockman
| August 2021

Allies of the United States have begun to develop their own policy approaches to responsible military use of artificial intelligence. This issue brief looks at key allies with articulated, emerging, and nascent views on how to manage ethical risk in adopting military AI. The report compares their convergences and divergences, offering pathways for the United States, its allies, and multilateral institutions to develop common approaches to responsible AI implementation.