Reports

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also conduct research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Reports

China’s Military AI Wish List

Emelia Probasco, Sam Bresnick, and Cole McFaul
| February 2026

This report examines thousands of Chinese-language open-source requests for proposal (RFPs) published by the People’s Liberation Army between January 1, 2023, and December 31, 2024. The RFPs the authors reviewed offer insights into the PLA’s priorities and ambitions for AI-enabled military technologies associated with C5ISRT: command, control, communications, computers, cyber, intelligence, surveillance, reconnaissance, and targeting.

CSET Research Analyst Emily Weinstein testified before the U.S.-China Economic and Security Review Commission hearing on "U.S. Investment in China's Capital Markets and Military-Industrial Complex." Weinstein discussed China's military-civil fusion strategy, its role in universities and investment firms, and Chinese talent programs.

See our original translation of China's major 2018 Party and state agency reorganization plan.

Testimony

Testimony Before Senate Foreign Relations Committee

Saif M. Khan
| March 17, 2021

CSET Research Fellow Saif M. Khan testified before the Senate Foreign Relations Committee for its hearing, "Advancing Effective U.S. Policy for Strategic Competition with China in the Twenty-First Century." Khan spoke to the importance of U.S. leadership in semiconductor and artificial intelligence technology.

Reports

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.

Reports

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.

Reports

Key Concepts in AI Safety: An Overview

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.

Reports

Lessons from Stealth for Emerging Technologies

Peter Westwick
| March 2021

Stealth technology was one of the most decisive developments in military aviation in the last 50 years. With U.S. technological leadership now under challenge, especially from China, this issue brief derives several lessons from the history of Stealth to guide current policymakers. The example of Stealth shows how the United States produced one critical technology in the past and how it might produce others today.

Reports

Chinese Government Guidance Funds

Ngor Luong, Zachary Arnold, and Ben Murphy
| March 2021

The Chinese government is pouring money into public-private investment funds, known as guidance funds, to advance China’s strategic and emerging technologies, including artificial intelligence. These funds are mobilizing massive amounts of capital from public and private sources—prompting both concern and skepticism among outside observers. This overview presents essential findings from our full-length report on these funds, analyzing the guidance fund model, its intended benefits and weaknesses, and its long-term prospects for success.

Reports

Understanding Chinese Government Guidance Funds

Ngor Luong, Zachary Arnold, and Ben Murphy
| March 2021

China’s government is using public-private investment funds, known as guidance funds, to deploy massive amounts of capital in support of strategic and emerging technologies, including artificial intelligence. Drawing exclusively on Chinese-language sources, this report explores how guidance funds raise and deploy capital, manage their investments, and interact with public and private actors. The guidance fund model is no silver bullet, but it has many advantages over traditional industrial policy mechanisms.

Reports

Academics, AI, and APTs

Dakota Cary
| March 2021

Six Chinese universities have relationships with Advanced Persistent Threat (APT) hacking teams. Their activities range from recruitment to running cyber operations. These partnerships, themselves a case study in military-civil fusion, allow state-sponsored hackers to quickly move research from the lab to the field. This report examines these universities’ relationships with known APTs and analyzes the schools’ AI/ML research that may translate to future operational capabilities.