Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence, such as talent, data, and computational power, as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI's development and use, and biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

AI Governance at the Frontier

Mina Narayanan, Jessica Ji, Vikram Venkatram, and Ngor Luong
| November 2025

This report presents an analytic approach to help U.S. policymakers deconstruct artificial intelligence governance proposals by identifying their underlying assumptions, the foundational conditions on which a proposal's success depends. By applying the approach to five U.S.-based AI governance proposals from industry, academia, and civil society, as well as state and federal government, this report demonstrates how identifying assumptions can help policymakers make informed, flexible decisions about AI under uncertainty.

As artificial intelligence systems are deployed and affect more aspects of daily life, effective risk mitigation becomes imperative to prevent harm. This report analyzes AI incidents to improve our understanding of how risks from AI materialize in practice. By identifying six mechanisms of harm, it sheds light on the different pathways to harm, and on the variety of mitigation strategies needed to address them.

Reports

The Use of Open Models in Research

Kyle Miller, Mia Hoffmann, and Rebecca Gelles
| October 2025

This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights and derives a taxonomy of the use cases that open weights enable. The authors identified seven distinct open-weight use cases that allow researchers to investigate a wider scope of questions, explore more avenues of experimentation, and implement a larger set of techniques.

Reports

Biotech Manufacturing Apprenticeships

Luke Koslosky, Steph Batalis, and Ronnie Kinoshita
| August 2025

This report examines lessons from the North Carolina Life Sciences Apprenticeship Consortium for pharmaceutical and biomanufacturing workforce development, and analyzes how apprenticeship programs help address workforce shortages in emerging tech fields. It offers a practical framework with important considerations for designing and launching programs, and serves as a resource for employers, regional leaders, and policymakers seeking to build a more resilient and technically skilled workforce.

Reports

The Future of Work-Based Learning for Cyber Jobs

Ali Crawford
| July 2025

This roundtable report explores how practitioners, researchers, educators, and government officials view work-based learning as a tool for strengthening the cybersecurity workforce. The discussion offered insight into what makes work-based learning unique, effective, and valuable for the cyber workforce.

Artificial intelligence (AI) is beginning to change cybersecurity. This report takes a comprehensive look across cybersecurity to anticipate whether those changes will favor cyber defense or offense. Rather than a single answer, it finds many ways that AI will help both cyber attackers and defenders, along with several actions defenders can take to tilt the odds in their favor.

Reports

Top-Tier Research Status for HBCUs?

Jaret C. Riddick and Brendan Oliss
| April 2025

The Carnegie Classification of Institutions of Higher Education is simplifying its top-tier R1 research criteria this year. Recognizing the strategic importance of historically Black colleges and universities, Congress passed Section 223 of the 2023 National Defense Authorization Act to increase defense research capacity by encouraging the most eligible of these institutions to pursue the highly coveted R1 status. This in-depth analysis examines the 2025 classification changes, their effect on eligible HBCUs, and strategies for Congress to maintain progress.

Reports

AI for Military Decision-Making

Emelia Probasco, Helen Toner, Matthew Burtell, and Tim G. J. Rudner
| April 2025

Artificial intelligence is reshaping military decision-making. This concise overview explores how AI-enabled systems can enhance situational awareness and accelerate critical operational decisions, even in high-pressure, dynamic environments. Yet it also highlights the essential need for clear operational scopes, robust training, and vigilant risk mitigation to counter the inherent challenges of using AI, such as data biases and automation pitfalls. This report offers a balanced framework to help military leaders integrate AI responsibly and effectively.

Reports

How to Assess the Likelihood of Malicious Use of Advanced AI Systems

Josh A. Goldstein and Girish Sastry
| March 2025

As new advanced AI systems roll out, there is widespread disagreement about malicious use risks. Are bad actors likely to misuse these tools for harm? This report presents a simple framework to guide the questions researchers ask—and the tools they use—to evaluate the likelihood of malicious use.

Formal Response

CSET’s Recommendations for an AI Action Plan

March 14, 2025

In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and maximizing benefits while mitigating risks. Our response highlights policies to strengthen the AI workforce, secure technology from illicit transfers, and foster an open and competitive AI ecosystem.