Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Reports

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Machine Learning and Cybersecurity

Micah Musser and Ashton Garriott
| June 2021

Cybersecurity operators have increasingly relied on machine learning to address a rising number of threats. But will machine learning give them a decisive advantage or just help them keep pace with attackers? This report explores the history of machine learning in cybersecurity and the potential it has for transforming cyber defense in the near future.
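To make the defensive use case concrete, here is a minimal sketch of one common technique in this space, unsupervised anomaly detection over network traffic features, using scikit-learn. The detector choice and all feature values are illustrative assumptions, not material from the report.

```python
# Minimal sketch: flagging anomalous network flows with an unsupervised
# detector. All feature values below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical flow features: [bytes_sent, bytes_received, duration_sec]
normal_traffic = rng.normal(loc=[500, 800, 2.0], scale=[50, 80, 0.5], size=(1000, 3))
suspicious = np.array([[50_000.0, 100.0, 0.1]])  # a large one-way burst

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# predict() returns 1 for inliers and -1 for anomalies
print(detector.predict(suspicious))  # expected: [-1]
```

Detectors like this illustrate both sides of the report's question: they scale triage well beyond human capacity, but attackers can probe and adapt to them much as they do to signature-based defenses.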

Reports

Truth, Lies, and Automation

Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova
| May 2021

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.
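As a rough illustration of what an automated text generator looks like in practice, the sketch below uses the openly available GPT-2 through Hugging Face's transformers library, since GPT-3 itself is reached through a hosted API. The prompt is an arbitrary placeholder, not an example from the report.

```python
# Minimal sketch of autoregressive text generation with an open model.
# GPT-2 stands in for GPT-3 here purely for reproducibility.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Emerging technology policy over the next decade will"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```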

Reports

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.
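For readers new to the area, here is a minimal sketch of one elementary interpretability technique, an input-gradient saliency map, written in PyTorch. The model and input are illustrative placeholders, not examples from the paper.

```python
# Minimal sketch: input-gradient saliency, which scores how sensitive a
# model's output is to each input feature. Model and input are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(1, 10, requires_grad=True)

model(x).backward()

# Larger absolute gradients mark features the prediction is more sensitive
# to: a starting point for assurance, not a complete explanation.
saliency = x.grad.abs().squeeze()
print(saliency)
```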

Reports

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.
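As a concrete illustration, the sketch below implements the fast gradient sign method (FGSM), a standard recipe for constructing adversarial examples; the model, input, and label are placeholders rather than anything drawn from the paper.

```python
# Minimal sketch of FGSM: nudge each input feature in the direction that
# most increases the loss. Model, input, and label are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(1, 10, requires_grad=True)
label = torch.tensor([0])

loss = nn.functional.cross_entropy(model(x), label)
loss.backward()

epsilon = 0.1  # perturbation budget; small enough to be hard to notice
x_adv = x + epsilon * x.grad.sign()

print(model(x), model(x_adv))  # the prediction can flip despite a tiny change
```

The same recipe scales to images and other high-dimensional inputs, where the perturbation can be visually imperceptible while still changing the model's output.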

Reports

Key Concepts in AI Safety: An Overview

Tim G. J. Rudner and Helen Toner
| March 2021

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.

Reports

Academics, AI, and APTs

Dakota Cary
| March 2021

Six Chinese universities have relationships with Advanced Persistent Threat (APT) hacking teams. Their activities range from recruitment to running cyber operations. These partnerships, themselves a case study in military-civil fusion, allow state-sponsored hackers to quickly move research from the lab to the field. This report examines these universities’ relationships with known APTs and analyzes the schools’ AI/ML research that may translate to future operational capabilities.

Reports

AI Verification

Matthew Mittelsteadt
| February 2021

The rapid integration of artificial intelligence into military systems raises critical questions of ethics, design and safety. While many states and organizations have called for some form of “AI arms control,” few have discussed the technical details of verifying countries’ compliance with these regulations. This brief offers a starting point, defining the goals of “AI verification” and proposing several mechanisms to support arms inspections and continuous verification.
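As a loose illustration of the genre, the sketch below shows one generic verification primitive: cryptographically fingerprinting a model's weights so inspectors can later confirm that a fielded system matches the version they examined. This is an assumed example for illustration, not a mechanism proposed in the brief.

```python
# Minimal sketch: hash a model's weights so a deployed system can be
# checked against an inspected baseline. Illustrative only.
import hashlib

import torch.nn as nn

def fingerprint(model: nn.Module) -> str:
    """Return a SHA-256 digest over the model's parameters in a fixed order."""
    state = model.state_dict()
    h = hashlib.sha256()
    for name in sorted(state):
        h.update(name.encode())
        h.update(state[name].cpu().numpy().tobytes())
    return h.hexdigest()

model = nn.Linear(4, 2)
print(fingerprint(model))  # record at inspection time, re-check in the field
```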

Reports

AI and the Future of Cyber Competition

Wyatt Hoffman
| January 2021

As states turn to AI to gain an edge in cyber competition, the technology will change the cat-and-mouse game between cyber attackers and defenders. Embracing machine learning systems for cyber defense could drive more aggressive and destabilizing engagements between states. Wyatt Hoffman writes that cyber competition already has the ingredients needed for escalation to real-world violence, even if those ingredients have yet to come together under the right conditions.

Reports

Hacking AI

Andrew Lohn
| December 2020

Machine learning systems’ vulnerabilities are pervasive, and hackers and adversaries can easily exploit them. Managing the risks is therefore too large a task for the technology community to handle alone. In this primer, Andrew Lohn writes that policymakers must understand the threats well enough to assess the dangers that the United States, its military and intelligence services, and its civilians face when they use machine learning.

Reports

Automating Cyber Attacks

Ben Buchanan, John Bansemer, Dakota Cary, Jack Lucas, and Micah Musser
| November 2020

Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks, and which strategies attackers are more or less likely to use. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.