Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, as well as biotechnology.

Annual Report

CSET at Five

Center for Security and Emerging Technology
| March 2024

In honor of CSET’s fifth birthday, this annual report looks back at CSET’s successes in 2023 and over the course of the past five years. It explores CSET’s different lines of research and cross-cutting projects, and spotlights some of its most impactful research products.

Analysis

An Argument for Hybrid AI Incident Reporting

Ren Bin Lee Dixon Heather Frase
| March 2024

Artificial intelligence incidents have grown more frequent alongside the rapid advancement of AI capabilities over the past decade. However, there is not yet a concerted policy effort in the United States to monitor, document, and aggregate AI incident data to enhance the understanding of AI-related harm and inform safety policies. This report proposes a federated approach consisting of hybrid incident reporting frameworks to standardize reporting practices and prevent missing data.

Formal Response

Comment on NIST RFI Related to the Executive Order Concerning Artificial Intelligence (88 FR 88368)

Mina Narayanan Jessica Ji Heather Frase
| February 2, 2024

On February 2, 2024, CSET's Assessment and CyberAI teams submitted a response to NIST's Request for Information related to the Executive Order Concerning Artificial Intelligence (88 FR 88368). In the submission, CSET compiles recommendations from six CSET reports and analyses to assist NIST in its implementation of AI Executive Order requirements.

CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies regarding the appointment of Chief AI Officers, Risk Management for AI, and other processes following the October 30, 2023 Executive Order on AI.

Analysis

Repurposing the Wheel: Lessons for AI Standards

Mina Narayanan Alexandra Seymour Heather Frase Karson Elmgren
| November 2023

Standards enable good governance practices by establishing consistent measurement and norms for interoperability, but creating standards for AI is a challenging task. In the fall of 2022, the Center for Security and Emerging Technology and the Center for a New American Security hosted a series of workshops examining standards development in finance, worker safety, cybersecurity, sustainable buildings, and medical devices, with the goal of applying lessons learned in these domains to AI. This workshop report summarizes our findings and recommendations.

Testimony

Advanced Technology: Examining Threats to National Security

Dewey Murdick
| September 19, 2023

CSET Executive Director Dr. Dewey Murdick testified before the Senate Homeland Security and Governmental Affairs Emerging Threats Subcommittee on challenges related to emerging technologies and national security.

This explainer defines criteria for effective AI Incident Collection and identifies tradeoffs between potential reporting models: mandatory, voluntary, and citizen reporting.

CSET submitted the following comment in response to a Request for Information (RFI) from the National Science Foundation (NSF) about the development of the newly established Technology, Innovation, and Partnerships (TIP) Directorate, in accordance with the CHIPS and Science Act of 2022.

Analysis

Adding Structure to AI Harm

Mia Hoffmann Heather Frase
| July 2023

Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are deployed. This report presents a standardized conceptual framework for defining, tracking, classifying, and understanding harms caused by AI. It lays out the key elements required for the identification of AI harm and their basic relational structure, offering definitions without imposing a single interpretation of AI harm. The report concludes with an example of how to apply and customize the framework while keeping its modular structure.

Analysis

A Matrix for Selecting Responsible AI Frameworks

Mina Narayanan Christian Schoeberl
| June 2023

Process frameworks provide a blueprint for organizations implementing responsible artificial intelligence (AI), but the sheer number of frameworks, along with their loosely specified audiences, can make it difficult for organizations to select ones that meet their needs. This report presents a matrix that organizes approximately 40 public process frameworks according to their areas of focus and the teams that can use them. Ultimately, the matrix helps organizations select the right resources for implementing responsible AI.

Analysis

Reducing the Risks of Artificial Intelligence for Military Decision Advantage

Wyatt Hoffman Heeu Millie Kim
| March 2023

Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty: the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood, and containing the consequences, of AI failures.