Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, as well as biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

The Use of Open Models in Research

Kyle Miller, Mia Hoffmann, and Rebecca Gelles
| October 2025

This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights, and it derives a taxonomy of the use cases that open weights enable. The authors identify seven distinct open-weight use cases that allow researchers to investigate a wider scope of questions, explore more avenues of experimentation, and implement a larger set of techniques.
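
As a minimal sketch of what weight access enables (our illustration, not an example from the report), the Python snippet below loads a hypothetical open-weight model with the Hugging Face transformers library and reads its per-layer activations and raw parameters, the kind of access that closed, API-only models do not provide. The model name is a placeholder.

```python
# Minimal sketch: weight-level access that API-only models do not offer.
# Assumes the Hugging Face `transformers` library; the model name is a
# placeholder for any open-weight checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "example-org/open-weight-model"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Open weights allow direct inspection.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Researchers can inspect internals directly: per-layer hidden states and
# raw weight tensors, which API-only access hides.
print(len(outputs.hidden_states))       # one tensor per layer, plus embeddings
print(next(model.parameters()).shape)   # a raw weight tensor
```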

Reports

Harmonizing AI Guidance: Distilling Voluntary Standards and Best Practices into a Unified Framework

Kyle Crichton, Abhiram Reddy, Jessica Ji, Ali Crawford, Mia Hoffmann, Colin Shea-Blymyer, and John Bansemer
| September 2025

Organizations looking to adopt artificial intelligence (AI) systems face the challenge of deciphering a myriad of voluntary standards and best practices, a task requiring time, resources, and expertise that many cannot afford. To address this problem, this report distills over 7,000 recommended practices from 52 reports into a single harmonized framework. By integrating new AI guidance with existing safety and security practices, this work provides a road map for organizations navigating the complex landscape of AI guidance.
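
To make the harmonization step concrete, here is a small hypothetical sketch of how overlapping recommendations from different source documents might collapse into a single framework entry; the schema, field names, and example sources are illustrative assumptions, not the report’s actual structure.

```python
# Hypothetical sketch of a harmonized-framework entry. The schema and the
# example sources are illustrative, not the report's actual structure.
from dataclasses import dataclass, field

@dataclass
class HarmonizedPractice:
    practice_id: str
    description: str        # unified wording of the recommended practice
    category: str           # e.g., security, governance, safety
    sources: list[str] = field(default_factory=list)  # documents recommending it

# Overlapping recommendations from separate guidance documents map to one entry.
entry = HarmonizedPractice(
    practice_id="SEC-01",
    description="Restrict access to model weights to authorized personnel.",
    category="security",
    sources=["Framework A, sec. 3.2", "Guideline B, practice 17"],
)
print(f"{entry.practice_id}: {entry.description} ({len(entry.sources)} sources)")
```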

Reports

The Future of Work-Based Learning for Cyber Jobs

Ali Crawford
| July 2025

This roundtable report explores how practitioners, researchers, educators, and government officials view work-based learning as a tool for strengthening the cybersecurity workforce. The discussion provided insight and context into what makes work-based learning unique, effective, and valuable for the cyber workforce.

Reports

AI System-to-Model Innovation

Jonah Schiestle and Andrew Imbrie
| July 2025

System-to-model innovation is an emerging pathway in artificial intelligence that has driven progress in several prominent areas over the last decade. System-level innovations advance with the diffusion of AI and expand the base of contributors to leading-edge progress in the field. Countries that can identify and harness system-level innovations faster and more comprehensively will gain crucial economic and military advantages over competitors. This paper analyzes the benefits of system-to-model innovation and proposes a three-part framework for navigating the policy implications: protect, diffuse, and anticipate.

Artificial intelligence (AI) is beginning to change cybersecurity. This report takes a comprehensive look across cybersecurity to anticipate whether those changes will help cyber defense or offense. Rather than yielding a single answer, the analysis finds many ways that AI will help both cyber attackers and defenders. The report also identifies several actions that defenders can take to tilt the odds in their favor.

Reports

Defending Against Intelligent Attackers at Large Scales

Andrew Lohn
| April 2025

We investigate the scale of attack and defense mathematically in the context of AI’s possible effects on cybersecurity. Today, highly scaled cyber attacks against a given target, such as those launched by worms or botnets, typically all fail or all succeed.
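
A hedged illustration of the all-or-nothing point (our numbers, not the report’s): if one automated attempt against a given target succeeds with probability p, then n independent attempts would succeed at least once with probability 1 - (1 - p)^n, but n copies of the same exploit against the same defenses behave like a single correlated attempt.

```python
# Illustrative contrast between independent and perfectly correlated attack
# outcomes at scale. The values of p and n are made up for this sketch.
p = 0.05   # chance one attempt against a given target succeeds
n = 1000   # number of attempts (e.g., copies launched by a worm or botnet)

independent = 1 - (1 - p) ** n   # at least one success, if attempts were independent
correlated = p                   # identical attacks, identical defenses: all or nothing

print(f"independent attempts succeed at least once: {independent:.4f}")  # ~1.0
print(f"perfectly correlated attempts succeed:      {correlated:.4f}")   # still 0.05
```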

Unlike other domains of conflict, and unlike other fields facing high anticipated risk from AI, the cyber domain is intrinsically digital, with a tight feedback loop between AI training and cyber application. Cyber may see some of the largest and earliest impacts from AI, so it is important to understand how the domain may change as AI continues to advance. Our approach was to review the literature, collecting nine arguments that have been proposed for offensive advantage in cyber conflict and nine for defensive advantage.

Reports

How to Assess the Likelihood of Malicious Use of Advanced AI Systems

Josh A. Goldstein and Girish Sastry
| March 2025

As new advanced AI systems roll out, there is widespread disagreement about malicious use risks. Are bad actors likely to misuse these tools for harm? This report presents a simple framework to guide the questions researchers ask—and the tools they use—to evaluate the likelihood of malicious use.

Formal Response

CSET’s Recommendations for an AI Action Plan

March 14, 2025

In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and maximizing benefits while mitigating risks. Our response highlights policies to strengthen the AI workforce, secure technology from illicit transfers, and foster an open and competitive AI ecosystem.

Reports

Cybersecurity Risks of AI-Generated Code

Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles
| November 2024

Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications.
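
As a hypothetical illustration of what insecure AI-generated code can look like (our example, not one drawn from the report), the snippet below contrasts an injection-prone query pattern that code generation models are known to reproduce with a safer parameterized version.

```python
# Illustrative only: an injection-prone pattern next to a parameterized fix.
# Not an example from the report.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: the driver escapes input via a parameterized placeholder.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```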