Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Brief

Using Machine Learning to Fill Gaps in Chinese AI Market Data

Zachary Arnold, Joanne Boisson, Lorenzo Bongiovanni, Daniel Chou, Carrie Peelman, and Ilya Rahkovsky
| February 2021

In this proof-of-concept project, CSET and Amplyfi Ltd. used machine learning models and Chinese-language web data to identify Chinese companies active in artificial intelligence. Most of these companies were not labeled or described as AI-related in two high-quality commercial datasets. The authors' findings show that using structured data alone—even from the best providers—will yield an incomplete picture of the Chinese AI landscape.

Reports

China’s STI Operations

William Hannas and Huey-Meei Chang
| January 2021

Open source intelligence (OSINT) and science and technology intelligence (STI) are realized differently in the United States and China, with China placing greater value on both. In the U.S. understanding, OSINT “enables” classified reporting, while in China it is the intelligence of first resort. The contrast extends to STI, which has a lower priority in the U.S. system, whereas China’s top leaders personally lavish great attention on STI and rely on it for national decisions. Establishing a “National S&T Analysis Center” within the U.S. government could help address these challenges.

Reports

AI and the Future of Cyber Competition

Wyatt Hoffman
| January 2021

As states turn to AI to gain an edge in cyber competition, the cat-and-mouse game between cyber attackers and defenders will change. Embracing machine learning systems for cyber defense could drive more aggressive and destabilizing engagements between states. Wyatt Hoffman writes that cyber competition already has the ingredients needed for escalation to real-world violence, even if these ingredients have yet to come together under the right conditions.

Reports

Hacking AI

Andrew Lohn
| December 2020

Machine learning systems’ vulnerabilities are pervasive, and hackers and adversaries can easily exploit them, making risk management too large a task for the technology community to handle alone. In this primer, Andrew Lohn writes that policymakers must understand the threats well enough to assess the dangers that the United States, its military and intelligence services, and its civilians face when they use machine learning.

Reports

Universities and the Chinese Defense Technology Workforce

Ryan Fedasiuk and Emily S. Weinstein
| December 2020

To help U.S. policymakers address long-held concerns about risks and threats associated with letting Chinese university students or graduates study in the United States, CSET experts examine which forms of collaboration, and with which Chinese universities, pose the greatest risk to U.S. research security.

Reports

Automating Cyber Attacks

Ben Buchanan, John Bansemer, Dakota Cary, Jack Lucas, and Micah Musser
| November 2020

Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks, and what strategies attackers are likely or less likely to use. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.

Data Brief

U.S. Demand for Talent at the Intersection of AI and Cybersecurity

Cindy Martinez and Micah Musser
| November 2020

As demand for cybersecurity experts in the United States has grown faster than the supply of qualified workers, some organizations have turned to artificial intelligence to bolster their overwhelmed cyber teams. Organizations may opt for distinct teams that specialize exclusively in AI or cybersecurity, but there is a benefit to having employees with overlapping experience in both domains. This data brief analyzes hiring demand for individuals with a combination of AI and cybersecurity skills.

Reports

Destructive Cyber Operations and Machine Learning

Dakota Cary and Daniel Cebul
| November 2020

Machine learning may provide cyber attackers with the means to execute more effective and more destructive attacks against industrial control systems. As new ML tools are developed, CSET discusses the ways in which attackers may deploy these tools and the most effective avenues for industrial system defenders to respond.

Reports

Russian AI Research 2010-2018

Margarita Konaev and James Dunham
| October 2020

Over the last decade, Moscow has boosted funding of universities and implemented reforms in order to make Russia a global leader in AI. As part of that effort, Russian researchers have expanded their English-language publication output, a key—if imperfect—measure of the country’s innovation and impact. Between 2010 and 2018, the number of English-language publications by Russian scientists in AI-related fields increased six-fold.

Reports

Downscaling Attack and Defense

Andrew Lohn
| October 7, 2020

The resizing of images, typically a required preprocessing step for computer vision systems, is vulnerable to attack. Images can be crafted to look completely different at machine-vision scales than at other scales, and the default settings of some common computer vision and machine learning systems leave them open to this kind of manipulation.
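
As a rough illustration of the mechanism the report describes (not code taken from the report), the sketch below shows why naive downscaling is attackable: nearest-neighbor resizing samples only a sparse grid of pixels, so a small image planted on that grid can dominate the downscaled view while the full-resolution picture still looks like the benign cover. The names, sizes, and choice of Pillow here are illustrative assumptions.

import numpy as np
from PIL import Image

def embed_hidden(cover, hidden, factor):
    # Resize the hidden image to the dimensions the downscaler will produce.
    cover_arr = np.array(cover.convert("RGB"))
    h, w = cover_arr.shape[0] // factor, cover_arr.shape[1] // factor
    hidden_arr = np.array(hidden.convert("RGB").resize((w, h)))
    out = cover_arr.copy()
    # Nearest-neighbor downscaling by `factor` keeps roughly one pixel per
    # factor-by-factor block; the exact offset depends on the resizer, so a
    # real attack would be tuned to the target library's settings.
    off = factor // 2
    out[off::factor, off::factor][:h, :w] = hidden_arr
    return Image.fromarray(out)

cover = Image.new("RGB", (512, 512), "white")   # stand-in for a benign image
hidden = Image.new("RGB", (64, 64), "red")      # stand-in for unwanted content
attack = embed_hidden(cover, hidden, factor=8)
# What a vision pipeline "sees" after default nearest-neighbor downscaling:
small = attack.resize((64, 64), resample=Image.NEAREST)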