Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, as well as biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Translation

AI Control: How to Make Use of Misbehaving AI Agents (Chinese Translation)

Kendrea Beers and Cody Rushing
| November 5, 2025

This is a Chinese translation of the CSET blog post "AI Control: How to Make Use of Misbehaving AI Agents."

Data Snapshot

Mapping Space Debris

Kathleen Curlee and Lauren Kahn
| November 3, 2025

Data Snapshots are informative descriptions and quick analyses that dig into CSET’s unique data resources. This data interactive maps each of the over 34,000 pieces of space debris the United States government has tracked since 1958, bringing Earth’s crowded orbits to life. It shows how seven decades of launches, collisions, and anti-satellite tests—and just a few catastrophic events by a handful of countries—have created most of today’s debris, potentially endangering the $1.8 trillion global space economy that depends on unfettered access to orbits.

As artificial intelligence systems are deployed and affect more aspects of daily life, effective risk mitigation becomes imperative to prevent harm. This report analyzes AI incidents to improve our understanding of how risks from AI materialize in practice. By identifying six mechanisms of harm, it sheds light on the different pathways to harm, and on the variety of mitigation strategies needed to address them.

Data Visualization

PATHWISE

October 2025

PATHWISE (Prototype Analytics for Tracking High-Demand Workforce in Innovative Skill Ecosystems) provides policymakers, educators, and analysts with a powerful new way to explore how the United States develops and deploys talent in emerging technology fields.

Read our translation of a Chinese draft standard that proposes rules for generative AI data annotation and data labeling with an eye toward improving the safety and security of GenAI systems.

Reports

The Use of Open Models in Research

Kyle Miller, Mia Hoffmann, and Rebecca Gelles
| October 2025

This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights, and derives a taxonomy of the use cases that open weights enable. The authors identified seven distinct use cases that allow researchers to investigate a wider scope of questions, explore more avenues of experimentation, and implement a larger set of techniques.

Read our translation of trial guidelines that describe safety measures for the operation of autonomous vehicles on China’s roads.

In the second installment of our blog series analyzing 147 AI-related laws from AGORA enacted by Congress between January 2020 and March 2025, we explore the governance strategies, risk-related concepts, and harms addressed in the legislation. In the first blog, we showed that the majority of these AI-related legislative documents were drawn from National Defense Authorization Acts and apply to national security contexts.

Read our translation of a Chinese Ministry of Education guide that offers best practices for generative AI adoption in primary and secondary schools.