Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, and biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Visualization

Classifying AI Systems

Catherine Aiken and Brian Dunn
| December 2021

This Classifying AI Systems interactive presents several AI system classification frameworks developed to distill AI systems into concise, comparable and policy-relevant dimensions. It provides key takeaways and framework-specific results from CSET’s analysis of more than 1,800 system classifications done by survey respondents using the frameworks. You can explore the frameworks and example AI systems used in the survey, and even take the survey.

Reports

Key Concepts in AI Safety: Specification in Machine Learning

Tim G. J. Rudner and Helen Toner
| December 2021

This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” outlined three categories of AI safety issues—problems of robustness, assurance, and specification—and the subsequent two papers described problems of robustness and assurance, respectively. This paper introduces specification as a key element in designing modern machine learning systems that operate as intended.

Data Brief

Classifying AI Systems

Catherine Aiken
| November 2021

This brief explores the development and testing of artificial intelligence system classification frameworks intended to distill AI systems into concise, comparable and policy-relevant dimensions. Comparing more than 1,800 system classifications, it points to several factors that increase the utility of a framework for human classification of AI systems and enable AI system management, risk assessment and governance.

Reports

AI Accidents: An Emerging Threat

Zachary Arnold and Helen Toner
| July 2021

As modern machine learning systems become more widely used, the potential costs of malfunctions grow. Drawing on trends already visible today, both in newly deployed artificial intelligence systems and in older technologies, this policy brief shows how damaging the AI accidents of the future could be. It describes a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce these risks.

Data Visualization

National Cybersecurity Center Map

Dakota Cary and Jennifer Melot
| July 2021

China wants to be a “cyber powerhouse” (网络强国). At the heart of this mission is the sprawling 40 km² campus of the National Cybersecurity Center. Formally called the National Cybersecurity Talent and Innovation Base (国家网络安全人才与创新基地), the NCC is located in Wuhan. The campus, under construction since 2017, includes seven centers for research, talent cultivation, and entrepreneurship; two government-focused laboratories; and a National Cybersecurity School.

Data Visualization

PARAT – Tracking the Activity of AI Companies

Rebecca Gelles, Zachary Arnold, Ngor Luong, and Jennifer Melot
| June 2021

CSET’s Private-sector AI-Related Activity Tracker (PARAT) collects data related to companies’ AI research and development to inform analysis of the global AI sector. The global AI market is already expanding rapidly and is likely to continue growing in the coming years. Identifying “AI companies” helps illustrate the size and health of the AI industry in which they participate, as well as the most sought-after skills and experience in the AI workforce.

Testimony

CSET Research Fellow Zachary Arnold testified before the U.S.-China Economic and Security Review Commission hearing on “U.S. Investment in China’s Capital Markets and Military-Industrial Complex.” Arnold discussed China’s use of financial capital flows and the state’s prominent role in allocating capital to specific firms and sectors.

Testimony

CSET Research Analyst Emily Weinstein testified before the U.S.-China Economic and Security Review Commission hearing on “U.S. Investment in China’s Capital Markets and Military-Industrial Complex.” Weinstein discussed China’s military-civil fusion strategy, Chinese university investment firms, and Chinese talent programs.

Reports

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.

Reports

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.