Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Repurposing the Wheel: Lessons for AI Standards

Mina Narayanan, Alexandra Seymour, Heather Frase, and Karson Elmgren
| November 2023

Standards enable good governance practices by establishing consistent measurement and norms for interoperability, but creating standards for AI is a challenging task. The Center for Security and Emerging Technology and the Center for a New American Security hosted a series of workshops in the fall of 2022 to examine standards development in the areas of finance, worker safety, cybersecurity, sustainable buildings, and medical devices in order to apply the lessons learned in these domains to AI. This workshop report summarizes our findings and recommendations.

Reports

Decoding Intentions

Andrew Imbrie, Owen Daniels, and Helen Toner
| October 2023

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? Policymakers can send credible signals of their intent by making pledges or committing to undertaking certain actions for which they will pay a price—political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat. Talk is cheap, but inadvertent escalation is costly to all sides.

Reports

The Inigo Montoya Problem for Trustworthy AI (International Version)

Emelia Probasco and Kathleen Curlee
| October 2023

Australia, Canada, Japan, the United Kingdom, and the United States emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, these countries define each of these principles in slightly different ways that could have large impacts on interoperability and the formulation of international norms. This creates what we call the “Inigo Montoya problem” in trustworthy AI, inspired by "The Princess Bride" movie quote: “You keep using that word. I do not think it means what you think it means.”

Testimony

Advanced Technology: Examining Threats to National Security

Dewey Murdick
| September 19, 2023

CSET Executive Director Dr. Dewey Murdick testified before the Senate Homeland Security and Governmental Affairs Emerging Threats Subcommittee on challenges related to emerging technologies and national security.


Data Brief

Assessing South Korea’s AI Ecosystem

Cole McFaul, Husanjot Chahal, Rebecca Gelles, and Margarita Konaev
| August 2023

This data brief examines South Korea’s progress in its development of artificial intelligence. The authors find that the country excels in semiconductor manufacturing, is a global leader in the production of AI patents, and is an important contributor to AI research. At the same time, the AI investment ecosystem remains nascent, and despite the country’s highly developed AI workforce, demand for AI talent may soon outpace supply.


Reports

Adding Structure to AI Harm

Mia Hoffmann and Heather Frase
| July 2023

Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are deployed. This report presents a standardized conceptual framework for defining, tracking, classifying, and understanding harms caused by AI. It lays out the key elements required for the identification of AI harm and their basic relational structure, and offers definitions without imposing a single interpretation of AI harm. The report concludes with an example of how to apply and customize the framework while keeping its modular structure.

Reports

A Matrix for Selecting Responsible AI Frameworks

Mina Narayanan and Christian Schoeberl
| June 2023

Process frameworks provide a blueprint for organizations implementing responsible artificial intelligence (AI), but the sheer number of frameworks, along with their loosely specified audiences, can make it difficult for organizations to select ones that meet their needs. This report presents a matrix that organizes approximately 40 public process frameworks according to their areas of focus and the teams that can use them. Ultimately, the matrix helps organizations select the right resources for implementing responsible AI.

Reports

Financing “The New Oil”

Anthony Ferrara and Sara Abdulla
| May 2023

Israel has by far the largest AI ecosystem in the Middle East as measured in AI companies and financial investments, and foreign investors play a critical role in Israel’s AI market growth. This issue brief finds that AI investments in Israel have mostly originated from the United States. To date, Chinese investors have played a limited role in funding Israel’s dynamic AI companies. But understanding the risk of Chinese investments into the Israeli AI ecosystem will be important for the national security of both the United States and Israel.