Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, as well as biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.


CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies regarding the appointment of Chief AI Officers, Risk Management for AI, and other processes following the October 30, 2023 Executive Order on AI.

Reports

Assessing China’s AI Workforce

Dahlia Peterson, Ngor Luong, and Jacob Feldgoise
| November 2023

Demand for talent is one of the core elements of technological competition between the United States and China. In this issue brief, we explore demand signals in China’s domestic AI workforce in two ways: geographically and within the defense and surveillance sectors. Our exploration of job postings from Spring 2021 finds that more than three-quarters of all AI job postings are concentrated in just three regions: the Yangtze River Delta region, the Pearl River Delta, and the Beijing-Tianjin-Hebei area.

Data Brief

The Antimicrobial Resistance Research Landscape and Emerging Solutions

Vikram Venkatram and Katherine Quinn
| November 2023

Antimicrobial resistance (AMR) is one of the world’s most pressing health threats, and basic research is the first step toward identifying solutions. This brief examines the AMR research landscape since 2000, finding that the volume of research is increasing and that the United States is a leading publisher, but also that novel solutions such as phages and synthetic antimicrobial production make up only a small portion of that research.

Reports

Skating to Where the Puck Is Going

Helen Toner, Jessica Ji, John Bansemer, and Lucy Lim
| October 2023

AI capabilities are evolving quickly and pose novel—and likely significant—risks. In these rapidly changing conditions, how can policymakers effectively anticipate and manage risks from the most advanced and capable AI systems at the frontier of the field? This Roundtable Report summarizes some of the key themes and conclusions of a July 2023 workshop on this topic jointly hosted by CSET and Google DeepMind.

Reports

Decoding Intentions

Andrew Imbrie, Owen Daniels, and Helen Toner
| October 2023

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? Policymakers can send credible signals of their intent by making pledges or committing to undertaking certain actions for which they will pay a price—political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat. Talk is cheap, but inadvertent escalation is costly to all sides.

Reports

The Inigo Montoya Problem for Trustworthy AI (International Version)

Emelia Probasco and Kathleen Curlee
| October 2023

Australia, Canada, Japan, the United Kingdom, and the United States emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, these countries define each of these principles in slightly different ways that could have large impacts on interoperability and the formulation of international norms. This creates what we call the “Inigo Montoya problem” in trustworthy AI, inspired by "The Princess Bride" movie quote: “You keep using that word. I do not think it means what you think it means.”

Other

Techniques to Make Large Language Models Smaller: An Explainer

Kyle Miller and Andrew Lohn
| October 11, 2023

This explainer surveys techniques for producing smaller, more efficient language models that require fewer resources to develop and operate. Importantly, information on how to apply these techniques, along with many of the resulting small models, is openly available online for anyone to use. The combination of small (i.e., easy to use) and open (i.e., easy to access) could have significant implications for artificial intelligence development.

In collaboration with colleagues from CNAS and the Atlantic Council, CSET researchers Ngor Luong and Emily Weinstein provided this comment in response to Treasury's Advance Notice of Proposed Rulemaking requesting public comment (TREAS-DO-2023-0009-0001).

Reports

The PRC’s Efforts Abroad

Owen Daniels
| September 2023

This report summarizes more than 20 CSET reports, translations, and data analyses to provide insight into the steps China has taken to increase its technological competitiveness beyond its own borders.

Reports

The PRC’s Domestic Approach

Owen Daniels
| September 2023

This report summarizes more than 20 CSET reports, translations, and data analyses to provide insight into China’s internal actions to advance and implement its technology-related policy goals.