Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, along with biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk

Josh A. Goldstein, Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel, and Katerina Sedova
| January 2023

Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near-human levels. In this report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.

Reports

China’s AI Workforce

Diana Gehlhaus, Joanne Boisson, Sara Abdulla, Jacob Feldgoise, Luke Koslosky, and Dahlia Peterson
| November 2022

U.S. policies on artificial intelligence education and the AI workforce must aim to grow, cultivate, attract, and retain the world’s best and brightest. Given China’s role as a producer of AI talent, understanding its AI workforce could provide important insight. This report analyzes AI workforce demand in China using a novel dataset of 6.8 million job postings, then outlines potential implications along with future reports in this series.

Formal Response

Comment to the Office of the National Cyber Director on Cyber Workforce, Training, and Education

Ali Crawford and Jessica Ji
| November 1, 2022

CSET's Ali Crawford and Jessica Ji submitted this comment to the Office of the National Cyber Director in response to a request for information on a national strategy for a cyber workforce, training, and education.

Reports

Downrange: A Survey of China’s Cyber Ranges

Dakota Cary
| September 2022

China is rapidly building cyber ranges that allow cybersecurity teams to test new tools, practice attack and defense, and evaluate the cybersecurity of a particular product or service. The presence of these facilities suggests a concerted effort on the part of the Chinese government, in partnership with industry and academia, to advance technological research and upskill its cybersecurity workforce—more evidence that China has entered near-peer status with the United States in the cyber domain.

Data Brief

Mapping Biosafety Level-3 Laboratories by Publications

Caroline Schuerger, Sara Abdulla, and Anna Puglisi
| August 2022

Biosafety Level-3 (BSL-3) laboratories are an essential part of research infrastructure and are used to develop vaccines and therapies. The research conducted in them provides insights into host-pathogen interactions that may help prevent future pandemics. However, these facilities also potentially pose a risk to society through lab accidents or misuse. Despite their importance, there is no comprehensive list of BSL-3 facilities or the institutions in which they are housed. By systematically assessing PubMed articles published in English from 2006 to 2021, this paper maps institutions that host BSL-3 labs by their locations, augmenting current knowledge of where high-containment research is conducted globally.

Reports

Will AI Make Cyber Swords or Shields?

Andrew Lohn and Krystal Jackson
| August 2022

Funding and priorities for technology development today determine the terrain for digital battles tomorrow, and they provide the arsenals for both attackers and defenders. Unfortunately, researchers and strategists disagree on which technologies will ultimately be most beneficial and which cause more harm than good. This report provides three examples showing that, while the future of technology is impossible to predict with certainty, there is enough empirical data and mathematical theory to have these debates with more rigor.

Reports

U.S. High School Cybersecurity Competitions

Kayla Goode, Ali Crawford, and Christopher Back
| July 2022

In the current cyber-threat environment, a well-educated workforce is critical to U.S. national security. Today, however, nearly six hundred thousand cybersecurity positions remain unfilled across the public and private sectors. This report explores high school cybersecurity competitions as a potential avenue for increasing the domestic cyber talent pipeline. The authors examine the competitions, their reach, and their impact on students’ educational and professional development.

Reports

Will AI Make Cyber Swords or Shields?

Andrew Lohn
| July 27, 2022

This paper aims to demonstrate the value of mathematical models for policy debates about technological progress in cybersecurity by considering phishing, vulnerability discovery, and the dynamics between patching and exploitation. The authors then adjust the inputs to those mathematical models to match possible advances in their underlying technology.

Reports

AI Faculty Shortages

Remco Zwetsloot and Jack Corrigan
| July 2022

Universities are the engines that power the AI talent pipeline, but mounting evidence suggests that U.S. computer science departments do not have enough faculty to meet growing student interest. This paper explores the potential mismatch between supply and demand in AI education, discusses possible causes and consequences, and offers recommendations for increasing teaching capacity at U.S. universities.

Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models. Although these attacks were initially conceived of and studied digitally, in that the raw pixel values of the image were perturbed, recent work has demonstrated that these attacks can successfully transfer to the physical world. This can be accomplished by printing out the patch and adding it into scenes of newly captured images or video footage.