Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also conduct research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

The Core of Federal Cyber Talent

Ali Crawford
| January 2024

Strengthening the federal cyber workforce is one of the main priorities of the National Cyber Workforce and Education Strategy. The National Science Foundation’s CyberCorps Scholarship-for-Service program is a direct cyber talent pipeline into the federal workforce. As demand for federal cyber talent grows, some form of program expansion is needed. This policy brief summarizes trends from participating institutions to understand how the program might expand and illustrates a potential future artificial intelligence (AI) federal scholarship-for-service program.

Reports

Scaling AI

Andrew Lohn
| December 2023

While recent progress in artificial intelligence (AI) has relied primarily on increasing the size and scale of the models and computing budgets for training, we ask whether those trends will continue. Financial incentives work against further scaling, and there can be diminishing returns to additional investment. These effects may already be slowing growth among the very largest models. Future progress in AI may rely more on ideas for shrinking models and inventive use of existing models than on simply increasing investment in compute resources.

Reports

Controlling Large Language Model Outputs: A Primer

Jessica Ji, Josh A. Goldstein, and Andrew Lohn
| December 2023

Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.

Other

AI and Biorisk: An Explainer

Steph Batalis
| December 2023

Recent government directives, international conferences, and media headlines reflect growing concern that artificial intelligence could exacerbate biological threats. When it comes to biorisk, AI tools are cited as enablers that lower information barriers, enhance novel biothreat design, or otherwise increase a malicious actor’s capabilities. In this explainer, CSET Biorisk Research Fellow Steph Batalis summarizes the state of the biorisk landscape with and without AI.

CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies regarding the appointment of Chief AI Officers, Risk Management for AI, and other processes following the October 30, 2023 Executive Order on AI.

AI has the potential to revolutionize approaches to climate change research. Using CSET's Map of Science, this data brief maps the production of research publications at the intersection of AI and climate change to better understand how AI methods are being applied to climate change-related research.

Data Brief

The Antimicrobial Resistance Research Landscape and Emerging Solutions

Vikram Venkatram and Katherine Quinn
| November 2023

Antimicrobial resistance (AMR) is one of the world’s most pressing global health threats. Basic research is the first step towards identifying solutions. This brief examines the AMR research landscape since 2000, finding that the amount of research is increasing and that the U.S. is a leading publisher, but also that novel solutions like phages and synthetic antimicrobial production are a small portion of that research.

Reports

Skating to Where the Puck Is Going

Helen Toner, Jessica Ji, John Bansemer, and Lucy Lim
| October 2023

AI capabilities are evolving quickly and pose novel—and likely significant—risks. In these rapidly changing conditions, how can policymakers effectively anticipate and manage risks from the most advanced and capable AI systems at the frontier of the field? This Roundtable Report summarizes some of the key themes and conclusions of a July 2023 workshop on this topic jointly hosted by CSET and Google DeepMind.

Formal Response

Comment on OSTP RFI 88 FR 60513

Steph Batalis
| October 16, 2023

CSET submitted the following comment in response to a Request for Information (RFI) from the White House's Office of Science and Technology Policy about potential changes to the Policies for Federal and Institutional Oversight of Life Sciences Dual Use Research of Concern (DURC) and Recommended Policy Guidance for Departmental Development of Review Mechanisms for Potential Pandemic Pathogen Care and Oversight (P3CO).

Other

Techniques to Make Large Language Models Smaller: An Explainer

Kyle Miller and Andrew Lohn
| October 11, 2023

This explainer provides an overview of techniques to produce smaller and more efficient language models that require fewer resources to develop and operate. Importantly, information on how to leverage these techniques, along with many of the resulting small models, is openly available online for anyone to use. The combination of small (i.e., easy to use) and open (i.e., easy to access) models could have significant implications for artificial intelligence development.