Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

CSET Research Fellow Zachary Arnold testified before the U.S.-China Economic and Security Review Commission hearing on "U.S. Investment in China's Capital Markets and Military-Industrial Complex." Arnold discussed China's use of financial capital flows and the state's prominent role in allocating capital to specific firms and sectors.

CSET Research Analyst Emily Weinstein testified before the U.S.-China Economic and Security Review Commission hearing on "U.S. Investment in China's Capital Markets and Military-Industrial Complex." Weinstein discussed China's military-civil fusion strategy, its role in university investment firms, and Chinese talent programs.

See our original translation of China's major 2018 Party and state agency reorganization plan.

Testimony

Testimony Before Senate Foreign Relations Committee

Saif M. Khan
| March 17, 2021

CSET Research Fellow Saif M. Khan testified before the Senate Foreign Relations Committee for its hearing, "Advancing Effective U.S. Policy for Strategic Competition with China in the Twenty-First Century." Khan spoke to the importance of U.S. leadership in semiconductor and artificial intelligence technology.

Reports

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.

Reports

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.

Reports

Key Concepts in AI Safety: An Overview

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.

Reports

Lessons from Stealth for Emerging Technologies

Peter Westwick
| March 2021

Stealth technology was one of the most decisive developments in military aviation in the last 50 years. With U.S. technological leadership now under challenge, especially from China, this issue brief derives several lessons from the history of Stealth to guide current policymakers. The example of Stealth shows how the United States produced one critical technology in the past and how it might produce others today.

Reports

Chinese Government Guidance Funds

Ngor Luong, Zachary Arnold, and Ben Murphy
| March 2021

The Chinese government is pouring money into public-private investment funds, known as guidance funds, to advance China’s strategic and emerging technologies, including artificial intelligence. These funds are mobilizing massive amounts of capital from public and private sources—prompting both concern and skepticism among outside observers. This overview presents essential findings from our full-length report on these funds, analyzing the guidance fund model, its intended benefits and weaknesses, and its long-term prospects for success.

Reports

Understanding Chinese Government Guidance Funds

Ngor Luong, Zachary Arnold, and Ben Murphy
| March 2021

China’s government is using public-private investment funds, known as guidance funds, to deploy massive amounts of capital in support of strategic and emerging technologies, including artificial intelligence. Drawing exclusively on Chinese-language sources, this report explores how guidance funds raise and deploy capital, manage their investments, and interact with public and private actors. The guidance fund model is no silver bullet, but it has many advantages over traditional industrial policy mechanisms.