Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, as well as biotechnology.

Annual Report

CSET at Five

Center for Security and Emerging Technology | March 2024

In honor of CSET’s fifth birthday, this annual report looks back at CSET’s successes in 2023 and over the past five years. It explores CSET’s different lines of research and cross-cutting projects and spotlights some of its most impactful research products.

Analysis

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.

Analysis

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.

Analysis

Key Concepts in AI Safety: An Overview

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.

Analysis

Lessons from Stealth for Emerging Technologies

Peter Westwick | March 2021

Stealth technology was one of the most decisive developments in military aviation in the last 50 years. With U.S. technological leadership now under challenge, especially from China, this issue brief derives several lessons from the history of Stealth to guide current policymakers. The example of Stealth shows how the United States produced one critical technology in the past and how it might produce others today.

Analysis

Chinese Government Guidance Funds

Ngor Luong, Zachary Arnold, and Ben Murphy | March 2021

The Chinese government is pouring money into public-private investment funds, known as guidance funds, to advance China’s strategic and emerging technologies, including artificial intelligence. These funds are mobilizing massive amounts of capital from public and private sources—prompting both concern and skepticism among outside observers. This overview presents essential findings from our full-length report on these funds, analyzing the guidance fund model, its intended benefits and weaknesses, and its long-term prospects for success.

Analysis

Understanding Chinese Government Guidance Funds

Ngor Luong, Zachary Arnold, and Ben Murphy | March 2021

China’s government is using public-private investment funds, known as guidance funds, to deploy massive amounts of capital in support of strategic and emerging technologies, including artificial intelligence. Drawing exclusively on Chinese-language sources, this report explores how guidance funds raise and deploy capital, manage their investments, and interact with public and private actors. The guidance fund model is no silver bullet, but it has many advantages over traditional industrial policy mechanisms.

Analysis

Academics, AI, and APTs

Dakota Cary | March 2021

Six Chinese universities have relationships with Advanced Persistent Threat (APT) hacking teams. Their activities range from recruitment to running cyber operations. These partnerships, themselves a case study in military-civil fusion, allow state-sponsored hackers to quickly move research from the lab to the field. This report examines these universities’ relationships with known APTs and analyzes the schools’ AI/ML research that may translate to future operational capabilities.

Analysis

AI Verification

Matthew Mittelsteadt | February 2021

The rapid integration of artificial intelligence into military systems raises critical questions of ethics, design, and safety. While many states and organizations have called for some form of “AI arms control,” few have discussed the technical details of verifying countries’ compliance with these regulations. This brief offers a starting point, defining the goals of “AI verification” and proposing several mechanisms to support arms inspections and continuous verification.

Analysis

Trusted Partners

Margarita Konaev, Tina Huang, and Husanjot Chahal | February 2021

As the U.S. military integrates artificial intelligence into its systems and missions, there are outstanding questions about the role of trust in human-machine teams. This report examines the drivers and effects of such trust, assesses the risks from too much or too little trust in intelligent technologies, reviews efforts to build trustworthy AI systems, and offers future directions for research on trust relevant to the U.S. military.

Analysis

Assessing the Scope of U.S. Visa Restrictions on Chinese Students

Remco Zwetsloot, Emily S. Weinstein, and Ryan Fedasiuk | February 2021

In May 2020, the White House announced it would deny visas to Chinese graduate students and researchers affiliated with organizations that implement or support China’s military-civil fusion strategy. The authors discuss several ways this policy might be implemented. Based on Chinese and U.S. policy documents and data sources, they estimate that between three thousand and five thousand Chinese students might be prevented from entering U.S. graduate programs each year.