Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Reports

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

How to Assess the Likelihood of Malicious Use of Advanced AI Systems

Josh A. Goldstein and Girish Sastry
| March 2025

As new advanced AI systems roll out, there is widespread disagreement about malicious use risks. Are bad actors likely to misuse these tools for harm? This report presents a simple framework to guide the questions researchers ask—and the tools they use—to evaluate the likelihood of malicious use.

Formal Response

CSET’s Recommendations for an AI Action Plan

March 14, 2025

In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and maximizing benefits while mitigating risks. Our response highlights policies to strengthen the AI workforce, secure technology from illicit transfers, and foster an open and competitive AI ecosystem.

Reports

Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches

Mina Narayanan, Christian Schoeberl, and Tim G. J. Rudner
| February 2025

Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluate their explainability and interpretability claims in the context of AI-enabled recommendation systems and offers considerations for policymakers seeking to support AI evaluations.

Reports

AI Incidents: Key Components for a Mandatory Reporting Regime

Ren Bin Lee Dixon and Heather Frase
| January 2025

This follow-up report builds on the framework presented in the March 2024 CSET issue brief, “An Argument for Hybrid AI Incident Reporting,” by identifying the key components of AI incidents that should be documented under a mandatory reporting regime. Designed to complement and operationalize the original framework, it offers guidance on these critical elements, fostering consistent, comprehensive incident reporting and advancing efforts to document and address AI-related harms.

Reports

Cybersecurity Risks of AI-Generated Code

Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles
| November 2024

Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications.

Reports

Through the Chat Window and Into the Real World: Preparing for AI Agents

Helen Toner, John Bansemer, Kyle Crichton, Matthew Burtell, Thomas Woodside, Anat Lior, Andrew Lohn, Ashwin Acharya, Beba Cibralic, Chris Painter, Cullen O’Keefe, Iason Gabriel, Kathleen Fisher, Ketan Ramakrishnan, Krystal Jackson, Noam Kolt, Rebecca Crootof, and Samrat Chatterjee
| October 2024

Computer scientists have long sought to build systems that can actively and autonomously carry out complicated goals in the real world—commonly referred to as artificial intelligence "agents." Recently, significant progress in large language models has fueled new optimism about the prospect of building sophisticated AI agents. This CSET-led workshop report synthesizes findings from a May 2024 workshop on this topic, including what constitutes an AI agent, how the technology is improving, what risks agents exacerbate, and intervention points that could help.

Reports

Securing Critical Infrastructure in the Age of AI

Kyle Crichton, Jessica Ji, Kyle Miller, John Bansemer, Zachary Arnold, David Batz, Minwoo Choi, Marisa Decillis, Patricia Eke, Daniel M. Gerstein, Alex Leblang, Monty McGee, Greg Rattray, Luke Richards, and Alana Scott
| October 2024

As critical infrastructure operators and providers seek to harness the benefits of new artificial intelligence capabilities, they must also manage associated risks from both AI-enabled cyber threats and potential vulnerabilities in deployed AI systems. In June 2024, CSET led a workshop to assess these issues. This report synthesizes our findings, drawing on lessons from cybersecurity and insights from critical infrastructure sectors to identify challenges and potential risk mitigations associated with AI adoption.

Reports

Building the Tech Coalition

Emelia Probasco
| August 2024

The U.S. Army’s 18th Airborne Corps can now target artillery just as efficiently as the best unit in recent American history—and it can do so with two thousand fewer servicemembers. This report presents a case study of how the 18th Airborne partnered with tech companies to develop, prototype, and operationalize software and artificial intelligence for clear military advantage. The lessons learned inform recommendations to the U.S. Department of Defense as it pushes to further develop and adopt AI and other new technologies.

Formal Response

Comment on Commerce Department RFI 89 FR 27411

Catherine Aiken, James Dunham, Jacob Feldgoise, Rebecca Gelles, Ronnie Kinoshita, Mina Narayanan, and Christian Schoeberl
| July 16, 2024

CSET submitted the following comment in response to a Request for Information (RFI) from the Department of Commerce regarding 89 FR 27411.

Reports

Enabling Principles for AI Governance

Owen Daniels and Dewey Murdick
| July 2024

How to govern artificial intelligence is rightfully top of mind for lawmakers and policymakers. To govern AI effectively, regulators must 1) know the terrain of AI risk and harm by tracking incidents and collecting data; 2) develop their own AI literacy and build better public understanding of the benefits and risks; and 3) preserve adaptability and agility by developing policies that can be updated as AI evolves.