Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also conduct research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Skating to Where the Puck Is Going

Helen Toner, Jessica Ji, John Bansemer, and Lucy Lim
| October 2023

AI capabilities are evolving quickly and pose novel—and likely significant—risks. In these rapidly changing conditions, how can policymakers effectively anticipate and manage risks from the most advanced and capable AI systems at the frontier of the field? This Roundtable Report summarizes some of the key themes and conclusions of a July 2023 workshop on this topic jointly hosted by CSET and Google DeepMind.

Reports

Decoding Intentions

Andrew Imbrie, Owen Daniels, and Helen Toner
| October 2023

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? Policymakers can send credible signals of their intent by making pledges or committing to undertaking certain actions for which they will pay a price—political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat. Talk is cheap, but inadvertent escalation is costly to all sides.

Other

Techniques to Make Large Language Models Smaller: An Explainer

Kyle Miller and Andrew Lohn
| October 11, 2023

This explainer provides an overview of techniques for producing smaller, more efficient language models that require fewer resources to develop and operate. Importantly, information on how to leverage these techniques, along with many of the resulting small models, is openly available online for anyone to use. The combination of small (i.e., easy to use) and open (i.e., easy to access) could have significant implications for artificial intelligence development.

In collaboration with colleagues from CNAS and the Atlantic Council, CSET researchers Ngor Luong and Emily Weinstein submitted this comment in response to the Department of the Treasury's Advance Notice of Proposed Rulemaking (TREAS-DO-2023-0009-0001).

Reports

The PRC’s Efforts Abroad

Owen Daniels
| September 2023

This report summarizes more than 20 CSET reports, translations, and data analyses to provide insight into the steps China has taken to increase its technological competitiveness beyond its own borders.

Reports

The PRC’s Domestic Approach

Owen Daniels
| September 2023

This report summarizes more than 20 CSET reports, translations, and data analyses to provide insight into China’s internal actions to advance and implement its technology-related policy goals.

Data Visualization

ETO Scout

September 2023

Scout is ETO's discovery tool for Chinese-language writing on science and technology. Scout compiles, tags, and summarizes news and commentary from selected Chinese sources, helping English-speaking users easily keep up to date, skim the latest news, and discover new perspectives. Use the Scout web interface to browse and filter articles, or get customized updates delivered to your inbox through the Scout email service.

Data Brief

Bayh-Dole Patent Trends

Sara Abdulla and Jack Corrigan
| August 2023

This brief examines trends in patents generated through federally funded research, commonly known as Bayh-Dole patents. We find that while Bayh-Dole patents make up a small proportion of U.S. patents overall, they are much more common in certain fields, especially the biosciences and fields related to national defense. Academic institutions are major recipients of Bayh-Dole patents, and the funding landscape for patent-producing research has shifted since the Bayh-Dole Act was enacted in 1980.

Reports

Onboard AI: Constraints and Limitations

Kyle Miller and Andrew Lohn
| August 2023

Artificial intelligence that makes news headlines, such as ChatGPT, typically runs in well-maintained data centers with an abundant supply of compute and power. However, these resources are far more limited on many real-world systems, such as drones, satellites, or ground vehicles. As a result, the AI that can run onboard these devices will often be inferior to state-of-the-art models, which can limit its usability and increase the need for additional safeguards in high-risk contexts. This issue brief contextualizes these challenges and provides policymakers with recommendations on how to engage with these technologies.

Reports

Confidence-Building Measures for Artificial Intelligence

Andrew Lohn
| August 3, 2023

Foundation models could eventually introduce several pathways for undermining state security: accidents, inadvertent escalation, unintentional conflict, weapons proliferation, and interference with human diplomacy are just a few items on a long list. The Confidence-Building Measures for Artificial Intelligence workshop, hosted by the Geopolitics Team at OpenAI and the Berkeley Risk and Security Lab at the University of California, Berkeley, brought together a multistakeholder group to think through the tools and strategies for mitigating the potential risks that foundation models pose to international security.