Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI's development and use, and biotechnology.

Reports

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Visualization

Classifying AI Systems

Catherine Aiken and Brian Dunn
| December 2021

This Classifying AI Systems Interactive presents several AI system classification frameworks developed to distill AI systems into concise, comparable, and policy-relevant dimensions. It provides key takeaways and framework-specific results from CSET's analysis of more than 1,800 system classifications completed by survey respondents using the frameworks. You can explore the frameworks and example AI systems used in the survey, and even take the survey yourself.

Reports

Key Concepts in AI Safety: Specification in Machine Learning

Tim G. J. Rudner and Helen Toner
| December 2021

This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” outlined three categories of AI safety issues—problems of robustness, assurance, and specification—and the subsequent two papers described problems of robustness and assurance, respectively. This paper introduces specification as a key element in designing modern machine learning systems that operate as intended.

Data Brief

Classifying AI Systems

Catherine Aiken
| November 2021

This brief explores the development and testing of artificial intelligence system classification frameworks intended to distill AI systems into concise, comparable, and policy-relevant dimensions. Drawing on more than 1,800 system classifications, it identifies several factors that increase a framework's utility for human classification of AI systems and that enable AI system management, risk assessment, and governance.

Data Visualization

AI Education Catalog

Claire Perkins, Diana Gehlhaus, Kayla Goode, Jennifer Melot, Ehrik Aldana, Grace Doerfler, and Gayani Gamage
| October 2021

Created through a joint partnership between CSET and the AI Education Project, the AI Education Catalog aims to raise awareness of the AI-related programs available to students and educators, as well as to help inform AI education and workforce policy.

Formal Response

Recommendations for the National AI Research Resource Task Force

Dakota Cary
| September 27, 2021

CSET submitted this comment to the Office of Science and Technology Policy and the National Science Foundation to support the National Artificial Intelligence Research Resource (NAIRR) Task Force in developing an implementation roadmap that would give AI researchers and students across scientific disciplines access to computational resources, high-quality data, educational tools, and user support.

Reports

AI Accidents: An Emerging Threat

Zachary Arnold and Helen Toner
| July 2021

As modern machine learning systems become more widely used, the potential costs of malfunctions grow. This policy brief draws on trends already visible today — in both newly deployed artificial intelligence systems and older technologies — to show how damaging the AI accidents of the future could be. It describes a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce those risks.

Reports

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.

Reports

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.

Reports

Key Concepts in AI Safety: An Overview

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.

Reports

AI Verification

Matthew Mittelsteadt
| February 2021

The rapid integration of artificial intelligence into military systems raises critical questions of ethics, design and safety. While many states and organizations have called for some form of “AI arms control,” few have discussed the technical details of verifying countries’ compliance with these regulations. This brief offers a starting point, defining the goals of “AI verification” and proposing several mechanisms to support arms inspections and continuous verification.