Reports

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

China’s Military AI Wish List

Emelia Probasco, Sam Bresnick, and Cole McFaul
| February 2026

This report examines thousands of Chinese-language open-source requests for proposal (RFPs) published by the People’s Liberation Army between January 1, 2023, and December 31, 2024. The RFPs the authors reviewed offer insights into the PLA’s priorities and ambitions for AI-enabled military technologies associated with C5ISRT: command, control, communications, computers, cyber, intelligence, surveillance, reconnaissance, and targeting.


As artificial intelligence systems are deployed and affect more aspects of daily life, effective risk mitigation becomes imperative to prevent harm. This report analyzes AI incidents to improve our understanding of how risks from AI materialize in practice. By identifying six mechanisms of harm, it sheds light on the different pathways to harm, and on the variety of mitigation strategies needed to address them.

Data Visualization

PATHWISE

October 2025

PATHWISE (Prototype Analytics for Tracking High-Demand Workforce in Innovative Skill Ecosystems) provides policymakers, educators, and analysts with a powerful new way to explore how the United States develops and deploys talent in emerging technology fields.

Read our translation of a Chinese draft standard that proposes rules for generative AI data annotation and labeling, with an eye toward improving the safety and security of GenAI systems.

Report

The Use of Open Models in Research

Kyle Miller, Mia Hoffmann, and Rebecca Gelles
| October 2025

This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights, and derives a taxonomy of the use cases that open weights enable. The authors identified seven distinct open-weight use cases that allow researchers to investigate a wider scope of questions, explore more avenues of experimentation, and implement a larger set of techniques.

Read our translation of trial guidelines that describe safety measures for the operation of autonomous vehicles on China’s roads.

In the second installment of our blog series analyzing 147 AI-related laws enacted by Congress between January 2020 and March 2025, drawn from AGORA, we explore the governance strategies, risk-related concepts, and harms addressed in the legislation. In the first blog, we showed that the majority of these AI-related legislative documents came from National Defense Authorization Acts and apply to national security contexts.

Read our translation of a Chinese Ministry of Education guide that offers best practices for generative AI adoption in primary and secondary schools.

Read our translation of a notice that expands Chinese export controls on rare earths.

Report

U.S. AI Statecraft

Pablo Chavez
| October 2025

Recent U.S.-Gulf AI partnerships represent billions of dollars in strategic technology deals, but they raise critical questions about governance, oversight, and long-term influence. This analysis examines four major AI initiatives with Saudi Arabia and the United Arab Emirates, discussing critical issues including fragmented oversight, technology diversion, and AI sovereignty. It proposes a framework to transform ad hoc dealmaking into principled, transparent, and rule-bound AI statecraft that advances U.S. interests, strengthens technology relationships with allies and partners, and establishes durable governance mechanisms for U.S. AI deployments abroad.