Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

China’s Embodied AI: A Path to AGI

William Hannas, Huey-Meei Chang, Valentin Weber, and Daniel Chou
| December 2025

China is embracing “embodied AI”—artificial intelligence integrated with physical agents, such as robots and drones—both for commercial reasons and as a path to artificial general intelligence (AGI). The trend reflects China’s signature approach to AI, which pursues diverse paths to AI dominance as alternatives to the large models favored in the United States. This report documents PRC support for AI embodiment, describes how it is understood by China’s research community, and maps out the related infrastructure.

Reports

AI Governance at the Frontier

Mina Narayanan, Jessica Ji, Vikram Venkatram, and Ngor Luong
| November 2025

This report presents an analytic approach to help U.S. policymakers deconstruct artificial intelligence governance proposals by identifying their underlying assumptions, which are the foundational elements that facilitate the success of a proposal. By applying the approach to five U.S.-based AI governance proposals from industry, academia, and civil society, as well as state and federal government, this report demonstrates how identifying assumptions can help policymakers make informed, flexible decisions about AI under uncertainty.

As artificial intelligence systems are deployed and affect more aspects of daily life, effective risk mitigation becomes imperative to prevent harm. This report analyzes AI incidents to improve our understanding of how risks from AI materialize in practice. By identifying six mechanisms of harm, it sheds light on the different pathways to harm, and on the variety of mitigation strategies needed to address them.

Reports

The Use of Open Models in Research

Kyle Miller, Mia Hoffmann, and Rebecca Gelles
| October 2025

This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights and derives a taxonomy of the use cases that open weights enable. The authors identified seven distinct open-weight use cases that allow researchers to investigate a wider scope of questions, explore more avenues of experimentation, and implement a larger set of techniques.

Reports

AI System-to-Model Innovation

Jonah Schiestle and Andrew Imbrie
| July 2025

System-to-model innovation is an emerging pathway in artificial intelligence that has driven progress in several prominent areas over the last decade. System-level innovations advance with the diffusion of AI and expand the base of contributors to leading-edge progress in the field. Countries that can identify and harness system-level innovations faster and more comprehensively will gain crucial economic and military advantages over competitors. This paper analyzes the benefits of system-to-model innovation and suggests a three-part framework to navigate the policy implications: protect, diffuse, and anticipate.

Artificial intelligence (AI) is beginning to change cybersecurity. This report takes a comprehensive look across cybersecurity to anticipate whether those changes will help cyber defense or offense. Rather than a single answer, there are many ways that AI will help both cyber attackers and defenders. The report finds that there are also several actions that defenders can take to tilt the odds to their favor.

Reports

Wuhan’s AI Development

William Hannas, Huey-Meei Chang, and Daniel Chou
| May 2025

Wuhan, China’s inland metropolis, is paving the way for a nationwide rollout of “embodied” artificial intelligence meant to fast-track scientific discovery, optimize production, streamline commerce, and facilitate state supervision of social activities. Grounded in real-world data, the AI grows smarter, offering a pathway to artificial “general” intelligence that will reinforce state ideology and boost economic goals. This report documents the genesis of Wuhan’s AGI initiative and its multifaceted deployment.

Reports

Promoting AI Innovation Through Competition

Jack Corrigan
| May 2025

Maintaining long-term U.S. leadership in artificial intelligence will require policymakers to foster a diversified, contestable, and competitive market for AI systems. Today, however, incumbent technology companies maintain a distinct advantage in the production of large AI models, and they have the means and motive to use their control over key chokepoints in the AI supply chain (compute, data, foundation models, distribution channels) to stifle competition. This report explores the associated economic and national security risks, and offers recommendations for maintaining an open and competitive AI industry.

Helen Toner testified before the House Judiciary Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet on recommendations to bolster security and transparency around U.S.-developed frontier AI.