Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Poison in the Well

Andrew Lohn
| June 2021

Modern machine learning often relies on open-source datasets, pretrained models, and machine learning libraries from across the internet, but are those resources safe to use? Previously successful digital supply chain attacks against cyber infrastructure suggest the answer may be no. This report introduces policymakers to these emerging threats and provides recommendations for how to secure the machine learning supply chain.

Reports

U.S. Demand for AI Certifications

Diana Gehlhaus and Ines Pancorbo
| June 2021

This issue brief explores whether artificial intelligence and AI-related certifications serve as potential pathways to enter the U.S. AI workforce. The authors find that according to U.S. AI occupation job postings data over 2010–2020, there is little demand from employers for AI and AI-related certifications. From this perspective, such certifications appear to present more hype than promise.

Reports

Machine Learning and Cybersecurity

Micah Musser and Ashton Garriott
| June 2021

Cybersecurity operators have increasingly relied on machine learning to address a rising number of threats. But will machine learning give them a decisive advantage or just help them keep pace with attackers? This report explores the history of machine learning in cybersecurity and the potential it has for transforming cyber defense in the near future.

Reports

Truth, Lies, and Automation

Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova
| May 2021

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.

Reports

U.S. AI Workforce

Diana Gehlhaus and Ilya Rahkovsky
| April 2021

A lack of good data on the U.S. artificial intelligence workforce limits the potential effectiveness of policies meant to increase and cultivate this cadre of talent. In this issue brief, the authors bridge that information gap with new analysis on the state of the U.S. AI workforce, along with insight into the ongoing concern over AI talent shortages. Their findings suggest some segments of the AI workforce are more likely than others to be experiencing a supply-demand gap.

Reports

Academics, AI, and APTs

Dakota Cary
| March 2021

Six Chinese universities have relationships with Advanced Persistent Threat (APT) hacking teams. Their activities range from recruitment to running cyber operations. These partnerships, themselves a case study in military-civil fusion, allow state-sponsored hackers to quickly move research from the lab to the field. This report examines these universities’ relationships with known APTs and analyzes the schools’ AI/ML research that may translate to future operational capabilities.

Reports

Assessing the Scope of U.S. Visa Restrictions on Chinese Students

Remco Zwetsloot, Emily S. Weinstein, and Ryan Fedasiuk
| February 2021

In May 2020, the White House announced it would deny visas to Chinese graduate students and researchers who are affiliated with organizations that implement or support China’s military-civil fusion strategy. The authors discuss several ways this policy might be implemented. Based on Chinese and U.S. policy documents and data sources, they estimate that between three and five thousand Chinese students might be prevented from entering U.S. graduate programs each year.

Reports

The U.S. AI Workforce

Diana Gehlhaus and Santiago Mutis
| January 2021

As the United States seeks to maintain a competitive edge in artificial intelligence, the strength of its AI workforce will be of paramount importance. In order to understand the current state of the domestic AI workforce, Diana Gehlhaus and Santiago Mutis define the AI workforce and offer a preliminary assessment of its size, composition, and key characteristics. Among their findings: The domestic supply of AI talent consisted of an estimated 14 million workers (or about 9% of total U.S. employment) as of 2018.

Reports

AI and the Future of Cyber Competition

Wyatt Hoffman
| January 2021

As states turn to AI to gain an edge in cyber competition, it will change the cat-and-mouse game between cyber attackers and defenders. Embracing machine learning systems for cyber defense could drive more aggressive and destabilizing engagements between states. Wyatt Hoffman writes that cyber competition already has the ingredients needed for escalation to real-world violence, even if these ingredients have yet to come together in the right conditions.

Reports

Hacking AI

Andrew Lohn
| December 2020

Machine learning systems’ vulnerabilities are pervasive. Hackers and adversaries can easily exploit them. As such, managing the risks is too large a task for the technology community to handle alone. In this primer, Andrew Lohn writes that policymakers must understand the threats well enough to assess the dangers that the United States, its military and intelligence services, and its civilians face when they use machine learning.