Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Reducing the Risks of Artificial Intelligence for Military Decision Advantage

Wyatt Hoffman and Heeu Millie Kim
| March 2023

Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty: the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood of AI failures and containing their consequences.

Reports

One Size Does Not Fit All

Heather Frase
| February 2023

Artificial intelligence is so diverse in its range that no simple one-size-fits-all assessment approach can adequately be applied to it. AI systems have a wide variety of functions, capabilities, and outputs. They are also built with different tools, data modalities, and resources, further complicating their assessment. Thus, a collection of approaches and processes is needed to cover a wide range of AI products, tools, services, and resources.

Reports

Chinese AI Investment and Commercial Activity in Southeast Asia

Ngor Luong, Channing Lee, and Margarita Konaev
| February 2023

China’s government has pushed the country’s technology and financial firms to expand abroad, and Southeast Asia’s growing economies — and AI companies — offer promising opportunities. This report examines the scope and nature of Chinese investment in the region. It finds that China currently plays a limited role in Southeast Asia’s emerging AI markets outside of Singapore and that Chinese investment activity still trails behind that of the United States. Nevertheless, Chinese tech companies, with support from the Chinese government, have established a broad range of other AI-related linkages with public and commercial actors across Southeast Asia.

Formal Response

Comment to NIST on the AI Risk Management Framework

Mina Narayanan
| September 29, 2022

CSET submitted the following comment in response to the National Institute of Standards and Technology's second draft of its AI Risk Management Framework.

Reports

Quad AI

Husanjot Chahal, Ngor Luong, Sara Abdulla, and Margarita Konaev
| May 2022

Through the Quad forum, the United States, Australia, Japan and India have committed to pursuing an open, accessible and secure technology ecosystem and offering a democratic alternative to China’s techno-authoritarian model. This report assesses artificial intelligence collaboration across the Quad and finds that while Australia, Japan and India each have close AI-related research and investment ties to both the United States and China, they collaborate far less with one another.

Data Brief

Exploring Clusters of Research in Three Areas of AI Safety

Helen Toner and Ashwin Acharya
| February 2022

Problems of AI safety are the subject of increasing interest for engineers and policymakers alike. This brief uses the CSET Map of Science to investigate how research into three areas of AI safety — robustness, interpretability and reward learning — is progressing. It identifies eight research clusters that contain a significant amount of research relating to these three areas and describes trends and key papers for each of them.

Data Visualization

Classifying AI Systems

Catherine Aiken and Brian Dunn
| December 2021

This Classifying AI Systems Interactive presents several AI system classification frameworks developed to distill AI systems into concise, comparable and policy-relevant dimensions. It provides key takeaways and framework-specific results from CSET’s analysis of more than 1,800 system classifications done by survey respondents using the frameworks. You can explore the frameworks and example AI systems used in the survey, and even take the survey.

Reports

Key Concepts in AI Safety: Specification in Machine Learning

Tim G. J. Rudner and Helen Toner
| December 2021

This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” outlined three categories of AI safety issues—problems of robustness, assurance, and specification—and the subsequent two papers described problems of robustness and assurance, respectively. This paper introduces specification as a key element in designing modern machine learning systems that operate as intended.

Data Brief

Classifying AI Systems

Catherine Aiken
| November 2021

This brief explores the development and testing of artificial intelligence system classification frameworks intended to distill AI systems into concise, comparable and policy-relevant dimensions. Comparing more than 1,800 system classifications, it points to several factors that increase the utility of a framework for human classification of AI systems and enable AI system management, risk assessment and governance.

Reports

Responsible and Ethical Military AI

Zoe Stanley-Lockman
| August 2021

Allies of the United States have begun to develop their own policy approaches to responsible military use of artificial intelligence. This issue brief looks at key allies with articulated, emerging, and nascent views on how to manage ethical risk in adopting military AI. The report compares their convergences and divergences, offering pathways for the United States, its allies, and multilateral institutions to develop common approaches to responsible AI implementation.