Research

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence, such as talent, data, and computational power; how AI can be used in cybersecurity and other national security settings; the policy tools that can shape AI’s development and use; and biotechnology.

Report

China’s STI Operations

William Hannas and Huey-Meei Chang | January 2021

Open source intelligence (OSINT) and science and technology intelligence (STI) are practiced differently in the United States and China, with China placing greater value on both. In the U.S. understanding, OSINT “enables” classified reporting; in China, it is the intelligence of first resort. The contrast extends to STI, which has a lower priority in the U.S. system, whereas China’s top leaders personally lavish attention on STI and rely on it for national decisions. Establishing a “National S&T Analysis Center” within the U.S. government could help address these gaps.

Analysis

The Path of Least Resistance

Margarita Konaev and Husanjot Chahal | April 2021

As multinational collaboration on emerging technologies takes center stage, U.S. allies and partners must overcome the technological, bureaucratic, and political barriers to working together. This report assesses the challenges to multinational collaboration and explains how joint projects centered on artificial intelligence applications for military logistics and sustainment offer a viable path forward.

Analysis

U.S. AI Workforce

Diana Gehlhaus and Ilya Rahkovsky | April 2021

A lack of good data on the U.S. artificial intelligence workforce limits the potential effectiveness of policies meant to increase and cultivate this cadre of talent. In this issue brief, the authors bridge that information gap with new analysis on the state of the U.S. AI workforce, along with insight into the ongoing concern over AI talent shortages. Their findings suggest some segments of the AI workforce are more likely than others to be experiencing a supply-demand gap.

Analysis

China’s Progress in Semiconductor Manufacturing Equipment

Will Hunt, Saif M. Khan, and Dahlia Peterson | March 2021

To reduce its dependence on the United States and its allies for semiconductors, China is building domestic semiconductor manufacturing facilities by importing U.S., Japanese, and Dutch semiconductor manufacturing equipment. In the longer term, it also hopes to indigenize this equipment to replace imports. U.S. and allied policy responses to China’s efforts will significantly affect its prospects for success in this challenging task.

Analysis

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.

Analysis

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.

Analysis

Key Concepts in AI Safety: An Overview

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.

Analysis

Lessons from Stealth for Emerging Technologies

Peter Westwick | March 2021

Stealth technology was one of the most decisive developments in military aviation of the past 50 years. With U.S. technological leadership now under challenge, especially from China, this issue brief derives several lessons from the history of Stealth to guide current policymakers. The example of Stealth shows how the United States produced one critical technology in the past and how it might produce others today.

Analysis

Chinese Government Guidance Funds

Ngor Luong, Zachary Arnold, and Ben Murphy | March 2021

The Chinese government is pouring money into public-private investment funds, known as guidance funds, to advance China’s strategic and emerging technologies, including artificial intelligence. These funds are mobilizing massive amounts of capital from public and private sources—prompting both concern and skepticism among outside observers. This overview presents essential findings from our full-length report on these funds, analyzing the guidance fund model, its intended benefits and weaknesses, and its long-term prospects for success.

Analysis

Understanding Chinese Government Guidance Funds

Ngor Luong, Zachary Arnold, and Ben Murphy | March 2021

China’s government is using public-private investment funds, known as guidance funds, to deploy massive amounts of capital in support of strategic and emerging technologies, including artificial intelligence. Drawing exclusively on Chinese-language sources, this report explores how guidance funds raise and deploy capital, manage their investments, and interact with public and private actors. The guidance fund model is no silver bullet, but it has many advantages over traditional industrial policy mechanisms.

Analysis

Academics, AI, and APTs

Dakota Cary | March 2021

Six Chinese universities have relationships with Advanced Persistent Threat (APT) hacking teams. Their activities range from recruitment to running cyber operations. These partnerships, themselves a case study in military-civil fusion, allow state-sponsored hackers to quickly move research from the lab to the field. This report examines these universities’ relationships with known APTs and analyzes the schools’ AI/ML research that may translate to future operational capabilities.