Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Reports

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Brief

The Antimicrobial Resistance Research Landscape and Emerging Solutions

Vikram Venkatram and Katherine Quinn
| November 2023

Antimicrobial resistance (AMR) is one of the world’s most pressing global health threats. Basic research is the first step toward identifying solutions. This brief examines the AMR research landscape since 2000, finding that the amount of research is increasing and that the U.S. is a leading publisher, but also that novel solutions like phages and synthetic antimicrobial production make up a small portion of that research.

Reports

Decoding Intentions

Andrew Imbrie, Owen Daniels, and Helen Toner
| October 2023

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? Policymakers can send credible signals of their intent by making pledges or committing to undertaking certain actions for which they will pay a price—political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat. Talk is cheap, but inadvertent escalation is costly to all sides.

Formal Response

Comment on OSTP RFI 88 FR 60513

Steph Batalis
| October 16, 2023

CSET submitted the following comment in response to a Request for Information (RFI) from the White House's Office of Science and Technology Policy about potential changes to the Policies for Federal and Institutional Oversight of Life Sciences Dual Use Research of Concern (DURC) and Recommended Policy Guidance for Departmental Development of Review Mechanisms for Potential Pandemic Pathogen Care and Oversight (P3CO).

Reports

The Inigo Montoya Problem for Trustworthy AI (International Version)

Emelia Probasco and Kathleen Curlee
| October 2023

Australia, Canada, Japan, the United Kingdom, and the United States emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, these countries define each of these principles in slightly different ways that could have large impacts on interoperability and the formulation of international norms. This creates what we call the “Inigo Montoya problem” in trustworthy AI, inspired by "The Princess Bride" movie quote: “You keep using that word. I do not think it means what you think it means.”

Data Visualization

ETO Scout

September 2023

Scout is ETO's discovery tool for Chinese-language writing on science and technology. Scout compiles, tags, and summarizes news and commentary from selected Chinese sources, helping English-speaking users easily keep up to date, skim the latest news, and discover new perspectives. Use the Scout web interface to browse and filter articles, or get customized updates delivered to your inbox through the Scout email service.

Reports

Understanding the Global Gain-of-Function Research Landscape

Caroline Schuerger, Steph Batalis, Katherine Quinn, Ronnie Kinoshita, Owen Daniels, and Anna Puglisi
| August 2023

Gain- and loss-of-function research have contributed to breakthroughs in vaccine development, genetic research, and gene therapy. At the same time, a subset of gain- and loss-of-function studies involve high-risk, highly virulent pathogens that could spread widely among humans if deliberately or unintentionally released. In this report, we map the gain- and loss-of-function global research landscape using a quantitative approach that combines machine learning with subject-matter expert review.

Data Brief

Bayh-Dole Patent Trends

Sara Abdulla and Jack Corrigan
| August 2023

This brief examines trends in patents generated through federally funded research, otherwise known as Bayh-Dole patents. We find that while Bayh-Dole patents make up a small proportion of U.S. patents overall, they are much more common in certain fields, especially in the biosciences and national defense-related fields. Academic institutions are major recipients of Bayh-Dole patents, and the funding landscape for patent-producing research has shifted since Bayh-Dole came into effect in 1980.

Data Brief

Assessing South Korea’s AI Ecosystem

Cole McFaul, Husanjot Chahal, Rebecca Gelles, and Margarita Konaev
| August 2023

This data brief examines South Korea’s progress in its development of artificial intelligence. The authors find that the country excels in semiconductor manufacturing, is a global leader in the production of AI patents, and is an important contributor to AI research. At the same time, the AI investment ecosystem remains nascent, and despite the country's highly developed AI workforce, demand for AI talent may soon outpace supply.

Data Brief

Voices of Innovation

Sara Abdulla and Husanjot Chahal
| July 2023

This data brief identifies the most influential AI researchers in the United States between 2010 and 2021 via three metrics: number of AI publications, citations, and AI h-index. It examines their demographic profiles, career trajectories, and research collaboration rates, finding that most are men in the later stages of their career, largely concentrated in 10 elite universities and companies, and that nearly 70 percent of America’s top AI researchers were born abroad.

Data Brief

Who Cares About Trust?

Autumn Toney and Emelia Probasco
| July 2023

Artificial intelligence-enabled systems are transforming society and driving an intense focus on what policy and technical communities can do to ensure that those systems are trustworthy and used responsibly. This analysis draws on prior work about the use of trustworthy AI terms to identify 18 clusters of research papers that contribute to the development of trustworthy AI. In identifying these clusters, the analysis also reveals that some concepts, like "explainability," are forming distinct research areas, whereas other concepts, like "reliability," appear to be accepted as metrics and broadly applied.