Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence, such as talent, data, and computational power, as well as how AI can be used in cybersecurity and other national security settings. We also conduct research on biotechnology and on the policy tools that can be used to shape AI’s development and use.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Education in China and the United States

Dahlia Peterson, Kayla Goode, and Diana Gehlhaus
| September 2021

A globally competitive AI workforce hinges on the education, development, and sustainment of the best and brightest AI talent. This issue brief provides an overview of the education systems in China and the United States, offering context for the accompanying main report, “AI Education in China and the United States: A Comparative Assessment.”

Data Brief

China is Fast Outpacing U.S. STEM PhD Growth

Remco Zwetsloot, Jack Corrigan, Emily S. Weinstein, Dahlia Peterson, Diana Gehlhaus, and Ryan Fedasiuk
| August 2021

Since the mid-2000s, China has consistently graduated more STEM PhDs than the United States, a key indicator of a country’s future competitiveness in STEM fields. This paper explores the data on STEM PhD graduation rates and projects their growth over the next five years, during which the gap between China and the United States is expected to increase significantly.

Data Brief

U.S. AI Summer Camps

Claire Perkins and Kayla Goode
| August 2021

Summer camps are an integral part of many U.S. students’ education, but little is known about camps that focus on artificial intelligence education. This data brief maps out the AI summer camp landscape in the United States and explores the camps’ locations, target age ranges, prices, and hosting organization types.

Reports

AI Accidents: An Emerging Threat

Zachary Arnold and Helen Toner
| July 2021

As modern machine learning systems become more widely used, the potential costs of malfunctions grow. This policy brief describes how trends already visible today, both in newly deployed artificial intelligence systems and in older technologies, suggest how damaging the AI accidents of the future could be. It presents a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce those risks.

Reports

U.S. Demand for AI Certifications

Diana Gehlhaus and Ines Pancorbo
| June 2021

This issue brief explores whether artificial intelligence and AI-related certifications serve as potential pathways into the U.S. AI workforce. Drawing on U.S. job postings data for AI occupations from 2010 to 2020, the authors find little employer demand for these certifications. From this perspective, such certifications appear to offer more hype than promise.

Reports

U.S. AI Workforce

Diana Gehlhaus and Ilya Rahkovsky
| April 2021

A lack of good data on the U.S. artificial intelligence workforce limits the potential effectiveness of policies meant to increase and cultivate this cadre of talent. In this issue brief, the authors bridge that information gap with new analysis on the state of the U.S. AI workforce, along with insight into the ongoing concern over AI talent shortages. Their findings suggest some segments of the AI workforce are more likely than others to be experiencing a supply-demand gap.

Reports

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.

Reports

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.

Reports

Key Concepts in AI Safety: An Overview

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.

Reports

AI Verification

Matthew Mittelsteadt
| February 2021

The rapid integration of artificial intelligence into military systems raises critical questions of ethics, design, and safety. While many states and organizations have called for some form of “AI arms control,” few have discussed the technical details of verifying countries’ compliance with these regulations. This brief offers a starting point, defining the goals of “AI verification” and proposing several mechanisms to support arms inspections and continuous verification.