Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence, such as talent, data, and computational power, as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, as well as biotechnology.

Reports

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.


Read our translation of a document that announces China’s 2023 research priorities in explainable and generalizable artificial intelligence methods. It also provides state funds for projects related to these priorities, and explains how Chinese AI researchers can apply for funding. Applications of AI in medicine, biology, physics, materials science, and mathematics are prominent among the 2023 priorities.

Reports

Confidence-Building Measures for Artificial Intelligence

Andrew Lohn
| August 3, 2023

Foundation models could eventually introduce several pathways for undermining state security: accidents, inadvertent escalation, unintentional conflict, weapons proliferation, and interference with human diplomacy are just a few on a long list. The Confidence-Building Measures for Artificial Intelligence workshop, hosted by the Geopolitics Team at OpenAI and the Berkeley Risk and Security Lab at the University of California, Berkeley, brought together a multistakeholder group to think through tools and strategies for mitigating the risks that foundation models pose to international security.

Identifying emerging technologies is critical to governments, the private sector, and researchers, but these groups lack a shared analytical approach for assessing the trajectories of new technologies. Supply chain security research provides a mature, relevant framework for calibrating efforts to protect and promote emerging technologies. This report offers policymakers a template for mapping emerging technology supply chains using two tools developed by CSET's Emerging Technology Observatory: the Map of Science and the Supply Chain Explorer.

Reports

The Race for U.S. Technical Talent

Diana Gehlhaus, James Ryseff, and Jack Corrigan
| August 2023

Technical talent is vital to innovation and economic growth, and attracting these highly mobile workers is critical to staying on the cutting edge of the technological frontier. Conventional wisdom holds that the defense community generally struggles to access this talent pool. This policy brief uses LinkedIn data to track the movement of tech workers between industries and metro areas, with a particular focus on the U.S. Department of Defense, the defense industrial base, and the so-called “Big Tech” companies.

CSET submitted the following comment in response to a Request for Information (RFI) from the National Science Foundation (NSF) about the development of the newly established Technology, Innovation, and Partnerships (TIP) Directorate, in accordance with the CHIPS and Science Act of 2022.

Reports

Adding Structure to AI Harm

Mia Hoffmann and Heather Frase
| July 2023

Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are deployed. This report presents a standardized conceptual framework for defining, tracking, classifying, and understanding harms caused by AI. It lays out the key elements required to identify AI harm and their basic relational structure, and offers definitions without imposing a single interpretation of AI harm. The brief concludes with an example of how to apply and customize the framework while preserving its modular structure.

Jenny Jun's testimony before the House Foreign Affairs Subcommittee on the Indo-Pacific for a hearing titled "Illicit IT: Bankrolling Kim Jong Un."

Data Brief

Voices of Innovation

Sara Abdulla and Husanjot Chahal
| July 2023

This data brief identifies the most influential AI researchers in the United States between 2010 and 2021 via three metrics: number of AI publications, citations, and AI h-index. It examines their demographic profiles, career trajectories, and research collaboration rates, finding that most are men in the later stages of their careers, largely concentrated in 10 elite universities and companies, and that nearly 70 percent of America’s top AI researchers were born abroad.
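For reference, the h-index used above follows the standard definition: the largest h such that a researcher has at least h papers with at least h citations each. The sketch below is a minimal illustration in Python; the function name and the sample citation counts are hypothetical, not drawn from the brief.

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that at least h papers
    have at least h citations each (the standard h-index)."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Hypothetical example: four of these five papers have at least
# four citations each, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```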

Data Brief

Who Cares About Trust?

Autumn Toney and Emelia Probasco
| July 2023

Artificial intelligence-enabled systems are transforming society and driving an intense focus on what policy and technical communities can do to ensure that those systems are trustworthy and used responsibly. This analysis draws on prior work about the use of trustworthy AI terms to identify 18 clusters of research papers that contribute to the development of trustworthy AI. In identifying these clusters, the analysis also reveals that some concepts, like "explainability," are forming distinct research areas, whereas other concepts, like "reliability," appear to be accepted as metrics and broadly applied.

Data Snapshot

Tracking Industry in Government Contracts

Christian Schoeberl
| July 19, 2023

Data Snapshots are informative descriptions and quick analyses that dig into CSET’s unique data resources. This short series explores how government procurement data can shed light on federal interest in, and adoption of, emerging technologies. It analyzes contract metadata, provided through a collaborative project with Govini, to track key emerging technologies in the federal procurement process.