Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

China’s Advanced AI Research

William Hannas, Huey-Meei Chang, Daniel Chou, and Brian Fleeger | July 2022

China is following a national strategy to lead the world in artificial intelligence by 2030, including by pursuing “general AI” that can act autonomously in novel circumstances. Open-source research identifies 30 Chinese institutions engaged in one or more aspects of this effort, including machine learning, brain-inspired AI, and brain-computer interfaces. This report previews a CSET pilot program that will track China’s progress and provide timely alerts.

Topics: Applications and implications · China · Data, algorithms and models · International standing

Analysis

Decoupling in Strategic Technologies

Tim Hwang and Emily S. Weinstein | July 2022

Geopolitical tensions between the United States and China have sparked an ongoing dialogue in Washington about the phenomenon of “decoupling”—the use of public policy tools to separate the multifaceted economic ties that connect the two powers. This issue brief provides a historical lens on the efficacy of one specific aspect of this broader decoupling phenomenon: using export controls and related trade policies to prevent a rival from acquiring the equipment and know-how to catch up to the United States in cutting-edge, strategically important technologies.

Analysis

AI Faculty Shortages

Remco Zwetsloot and Jack Corrigan | July 2022

Universities are the engines that power the AI talent pipeline, but mounting evidence suggests that U.S. computer science departments do not have enough faculty to meet growing student interest. This paper explores the potential mismatch between supply and demand in AI education, discusses possible causes and consequences, and offers recommendations for increasing teaching capacity at U.S. universities.

Analysis

Silicon Twist

Ryan Fedasiuk, Karson Elmgren, and Ellen Lu | June 2022

The Chinese military’s progress in artificial intelligence largely depends on continued access to high-end semiconductors. By analyzing thousands of purchasing records, this policy brief offers a detailed look at how China’s military comes to access these devices. The authors find that most computer chips ordered by Chinese military units are designed by American companies, and outline steps that the U.S. government could take to curtail their access.
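To illustrate the kind of tally that such a purchasing-record analysis involves, here is a minimal Python sketch; the records, names, and field labels below are invented for illustration and are not drawn from the brief itself.

    # Hypothetical example: count purchase records by the home country of each
    # chip's designer. All records and names below are invented.
    from collections import Counter

    purchase_records = [
        {"buyer": "Unit A", "chip_designer": "Designer X", "designer_country": "United States"},
        {"buyer": "Unit B", "chip_designer": "Designer Y", "designer_country": "United States"},
        {"buyer": "Unit C", "chip_designer": "Designer Z", "designer_country": "China"},
    ]

    counts = Counter(record["designer_country"] for record in purchase_records)
    total = sum(counts.values())
    for country, n in counts.most_common():
        print(f"{country}: {n} of {total} records ({n / total:.0%})")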

Analysis

Chokepoints

Ben Murphy | May 2022

China’s "Science and Technology Daily," a state-run newspaper, published a revealing series of articles in 2018 on 35 different Chinese technological import dependencies. The articles, accessible here in English for the first time, express concern that strategic Chinese industries are vulnerable to any disruption to their supply of specific U.S., Japanese, and European “chokepoint” technologies. This issue brief summarizes the article series and analyzes the Chinese perspective on these import dependencies and their causes.

Analysis

Securing AI

Andrew Lohn and Wyatt Hoffman | March 2022

As with traditional software, vulnerabilities in machine learning software can lead to sabotage or information leaks. And as with traditional software, sharing information about vulnerabilities helps defenders protect their systems and helps attackers exploit them. This brief examines some of the key differences between vulnerabilities in traditional and machine learning systems and how those differences can affect the vulnerability disclosure and remediation processes.

Data Brief

Exploring Clusters of Research in Three Areas of AI Safety

Helen Toner and Ashwin Acharya | February 2022

Problems of AI safety are the subject of increasing interest for engineers and policymakers alike. This brief uses the CSET Map of Science to investigate how research into three areas of AI safety — robustness, interpretability and reward learning — is progressing. It identifies eight research clusters that contain a significant amount of research relating to these three areas and describes trends and key papers for each of them.

Analysis

AI and Compute

Andrew Lohn and Micah Musser | January 2022

Between 2012 and 2018, the amount of computing power used by record-breaking artificial intelligence models doubled every 3.4 months. Even with money pouring into the AI field, this trendline is unsustainable. Because of cost, hardware availability and engineering difficulties, the next decade of AI can't rely exclusively on applying more and more computing power to drive further progress.
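To make the growth rate concrete, the short Python sketch below works out what a 3.4-month doubling time implies over a window of that length; the 72-month window is an assumption for illustration, not a figure taken from the brief.

    # Arithmetic illustration: growth implied by a fixed 3.4-month doubling time.
    DOUBLING_TIME_MONTHS = 3.4      # doubling time cited for 2012-2018
    WINDOW_MONTHS = 6 * 12          # assumed length of the window, for illustration

    doublings = WINDOW_MONTHS / DOUBLING_TIME_MONTHS
    growth_factor = 2 ** doublings

    print(f"{doublings:.1f} doublings, roughly a {growth_factor:,.0f}x increase in compute")
    # With these assumptions the factor is on the order of millions, which is why
    # the brief argues the trend cannot continue on compute scaling alone.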

Analysis

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, and Ido Wulkan | December 2021

Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.

Analysis

Wisdom of the Crowd as Arbiter of Expert Disagreement

Michael Page | December 2021

How can state-of-the-art probabilistic forecasting tools be used to advance expert debates on big policy questions? Using Foretell, a crowd forecasting platform piloted by CSET, we trialed a method to break down a big question—“What is the future of the DOD-Silicon Valley relationship?”—into measurable components, and then leveraged the wisdom of the crowd to reduce uncertainty and arbitrate disagreement among a group of experts.
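As a simple illustration of the wisdom-of-the-crowd idea (a generic sketch, not Foretell's actual aggregation method, which is not detailed here), the Python snippet below pools individual probability forecasts on a yes/no question with an unweighted mean and scores each forecast against the outcome using the Brier score; all forecast values are made up.

    # Hypothetical example: aggregate crowd forecasts on a binary question and
    # score them with the Brier score (squared error; lower is better).
    forecasts = [0.25, 0.40, 0.55, 0.30, 0.70]   # invented individual probabilities
    outcome = 1                                  # 1 if the event occurred, 0 if not

    def brier(p: float, outcome: int) -> float:
        return (p - outcome) ** 2

    crowd = sum(forecasts) / len(forecasts)
    print(f"crowd forecast {crowd:.2f} -> Brier {brier(crowd, outcome):.3f}")
    for p in forecasts:
        print(f"individual {p:.2f} -> Brier {brier(p, outcome):.3f}")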