Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Defending Against Intelligent Attackers at Large Scales

Andrew Lohn
| April 22, 2025

We investigate the scale of attack and defense mathematically in the context of AI's possible effect on cybersecurity. Today, highly scaled cyberattacks against a given target, such as those launched by worms or botnets, typically all fail or all succeed.

Unlike other domains of conflict, and unlike other fields with high anticipated risk from AI, the cyber domain is intrinsically digital, with a tight feedback loop between AI training and cyber application. Cyber may see some of the largest and earliest impacts from AI, so it is important to understand how the domain may change as AI continues to advance. Our approach reviewed the literature, collecting nine arguments that have been proposed for offensive advantage in cyber conflict and nine for defensive advantage.

Reports

Top-Tier Research Status for HBCUs?

Jaret C. Riddick and Brendan Oliss
| April 2025

The Carnegie Classification of Institutions of Higher Education is simplifying its top-tier R1 research criteria this year. Recognizing the strategic importance of historically Black colleges and universities, Congress passed Section 223 of the 2023 National Defense Authorization Act to increase defense research capacity by encouraging the most eligible among these institutions to seek the highly coveted R1 status. This in-depth analysis examines the 2025 classification changes, their effect on eligible HBCUs, and strategies for Congress to maintain progress.

Reports

How to Assess the Likelihood of Malicious Use of Advanced AI Systems

Josh A. Goldstein and Girish Sastry
| March 2025

As new advanced AI systems roll out, there is widespread disagreement about malicious use risks. Are bad actors likely to misuse these tools for harm? This report presents a simple framework to guide the questions researchers ask—and the tools they use—to evaluate the likelihood of malicious use.

Formal Response

CSET’s Recommendations for an AI Action Plan

March 14, 2025

In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and maximizing benefits while mitigating risks. Our response highlights policies to strengthen the AI workforce, secure technology from illicit transfers, and foster an open and competitive AI ecosystem.

Reports

The State of AI-Related Apprenticeships

Luke Koslosky and Jacob Feldgoise
| February 2025

As artificial intelligence permeates the economy, demand for AI talent at all levels of educational attainment will expand accordingly. Apprenticeships are an effective education and training pathway in other industries, but are they suitable for AI-related roles? This report analyzes trends in AI-related apprenticeships across the United States from 2013 through 2023. It explores the growth of these programs, completion rates, demographic and geographic information, and the organizations sponsoring these programs.

Reports

Chinese Critiques of Large Language Models

William Hannas, Huey-Meei Chang, Maximilian Riesenhuber, and Daniel Chou
| January 2025

Large generative models are widely viewed as the most promising path to general (human-level) artificial intelligence and attract investment in the billions of dollars. The present enthusiasm notwithstanding, many ranking Chinese scientists regard this singular approach to AGI as ill-advised. This report documents these critiques in China’s research, public statements, and government planning, while pointing to additional, pragmatic reasons for China’s pursuit of a diversified research portfolio.

Reports

AI and the Future of Workforce Training

Matthias Oschinski, Ali Crawford, and Maggie Wu
| December 2024

The emergence of artificial intelligence as a general-purpose technology could profoundly transform work across industries, potentially affecting a variety of occupations. While previous technological shifts largely enhanced productivity and wages for white-collar workers but led to displacement pressures for blue-collar workers, AI may significantly disrupt both groups. This report examines the changing landscape of workforce development, highlighting the crucial role of community colleges, alternative career pathways, and AI-enabled training solutions in preparing workers for this transition.

Reports

AI Safety and Automation Bias

Lauren Kahn, Emelia Probasco, and Ronnie Kinoshita
| November 2024

Automation bias is a critical issue for artificial intelligence deployment. It can cause otherwise knowledgeable users to make crucial and even obvious errors. Organizational, technical, and educational leaders can mitigate these biases through training, design, and processes. This paper explores automation bias and ways to mitigate it through three case studies: Tesla’s autopilot incidents, aviation incidents at Boeing and Airbus, and Army and Navy air defense incidents.

Reports

Cybersecurity Risks of AI-Generated Code

Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles
| November 2024

Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications.