Research

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — and how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, as well as biotechnology.

Report

Hacking AI

Andrew Lohn
| December 2020

Machine learning systems’ vulnerabilities are pervasive. Hackers and adversaries can easily exploit them. As such, managing the risks is too large a task for the technology community to handle alone. In this primer, Andrew Lohn writes that policymakers must understand the threats well enough to assess the dangers that the United States, its military and intelligence services, and its civilians face when they use machine learning.

Analysis

The U.S. AI Workforce

Diana Gehlhaus and Santiago Mutis
| January 2021

As the United States seeks to maintain a competitive edge in artificial intelligence, the strength of its AI workforce will be of paramount importance. In order to understand the current state of the domestic AI workforce, Diana Gehlhaus and Santiago Mutis define the AI workforce and offer a preliminary assessment of its size, composition, and key characteristics. Among their findings: The domestic supply of AI talent consisted of an estimated 14 million workers (or about 9% of total U.S. employment) as of 2018.

Analysis

A New Institutional Approach to Research Security in the United States

Melissa Flagg and Zachary Arnold
| January 2021

U.S. research security requires trust and collaboration between those conducting R&D and the federal government. Most R&D takes place in the private sector, outside of government authority and control, and researchers are wary of federal government or law enforcement involvement in their work. Despite these challenges, as adversaries work to extract science, technology, data and know-how from the United States, the U.S. government is pursuing an ambitious research security initiative. In order to secure the 78 percent of U.S. R&D funded outside the government, authors Melissa Flagg and Zachary Arnold propose a new, public-private research security clearinghouse, with leadership from academia, business, philanthropy, and government and a presence in the most active R&D hubs across the United States.

Analysis

AI and the Future of Cyber Competition

Wyatt Hoffman
| January 2021

As states turn to AI to gain an edge in cyber competition, it will change the cat-and-mouse game between cyber attackers and defenders. Embracing machine learning systems for cyber defense could drive more aggressive and destabilizing engagements between states. Wyatt Hoffman writes that cyber competition already has the ingredients needed for escalation to real-world violence, even if these ingredients have yet to come together in the right conditions.

Analysis

Mapping U.S. Multinationals’ Global AI R&D Activity

Roxanne Heston and Remco Zwetsloot
| December 2020

Many factors influence where U.S. tech multinational corporations decide to conduct their global artificial intelligence research and development (R&D). Company AI labs are spread all over the world, especially in North America, Europe and Asia. But in contrast to AI labs, most company AI staff remain concentrated in the United States. Roxanne Heston and Remco Zwetsloot explain where these companies conduct AI R&D, why they select particular locations, and how they establish their presence there. The report is accompanied by a new open-source dataset of more than 60 AI R&D labs run by these companies worldwide.

Analysis

Universities and the Chinese Defense Technology Workforce

Ryan Fedasiuk and Emily Weinstein
| December 2020

To help U.S. policymakers address long-held concerns about risks and threats associated with letting Chinese university students or graduates study in the United States, CSET experts examine which forms of collaboration, and with which Chinese universities, pose the greatest risk to U.S. research security.

Analysis

Automating Cyber Attacks

Ben Buchanan, John Bansemer, Dakota Cary, Jack Lucas, and Micah Musser
| November 2020

Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks, and what strategies attackers are likely or less likely to use. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.

China has built a surveillance state that has increasingly incorporated AI-enabled technologies. Their use during the COVID-19 pandemic has softened the image of China’s surveillance system, presenting unique challenges to preventing the spread of such technologies around the globe. This policy brief outlines core actions the United States and its allies can take to combat the spread of surveillance systems that threaten basic human rights.

The United States has long used export controls to prevent the proliferation of advanced semiconductors and the inputs necessary to produce them. With Beijing building up its own chipmaking industry, the United States has begun tightening restrictions on exports of semiconductor manufacturing equipment to China. This brief provides an overview of U.S. semiconductor export control policies and analyzes the impacts of those policies on U.S.-China trade.

Analysis

Future Indices

Michael Page, Catherine Aiken, and Dewey Murdick
| October 19, 2020

Foretell is CSET’s crowd forecasting pilot project focused on technology and security policy. It connects historical and forecast data on near-term events with the big-picture questions that are most relevant to policymakers. This issue brief uses recent forecast data to illustrate Foretell’s methodology.