Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research biotechnology and the policy tools that can be used to shape AI’s development and use.

Analysis

Understanding the Global Gain-of-Function Research Landscape

Caroline Schuerger, Steph Batalis, Katherine Quinn, Ronnie Kinoshita, Owen Daniels, and Anna Puglisi | August 2023

Gain- and loss-of-function research has contributed to breakthroughs in vaccine development, genetic research, and gene therapy. At the same time, a subset of gain- and loss-of-function studies involves high-risk, highly virulent pathogens that could spread widely among humans if deliberately or unintentionally released. In this report, we map the global gain- and loss-of-function research landscape using a quantitative approach that combines machine learning with subject-matter expert review.
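
The report itself does not publish code, but the general pattern it describes (a machine learning model screening the literature, with uncertain cases routed to subject-matter experts) can be sketched roughly as follows. This is a hypothetical illustration only: the classifier, training snippets, and confidence thresholds are assumptions, not CSET's actual pipeline.

```python
# Hypothetical sketch of ML-assisted screening with expert review; not CSET's
# actual pipeline. A simple text classifier scores abstracts for gain-/loss-of-
# function relevance, and low-confidence cases are routed to experts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed expert-labeled examples: abstract text -> relevant (1) or not (0).
train_texts = [
    "serial passage experiments increased viral transmissibility in ferrets",
    "loss-of-function mutations engineered in a highly virulent pathogen",
    "routine influenza surveillance and vaccine strain selection",
    "catalog of crop genome annotations for agricultural breeding",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

def triage(abstracts, low=0.3, high=0.7):
    """Auto-label confident predictions; route uncertain ones to expert review."""
    auto_labeled, needs_expert_review = [], []
    for text, prob in zip(abstracts, model.predict_proba(abstracts)[:, 1]):
        if low < prob < high:
            needs_expert_review.append(text)
        else:
            auto_labeled.append((text, prob >= high))
    return auto_labeled, needs_expert_review
```

The thresholds above are placeholders for whatever review criteria a team adopts; the point is simply that model output and expert judgment are combined rather than relying on either alone.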

Testimony

Advanced Technology: Examining Threats to National Security

Dewey Murdick | September 19, 2023

CSET Executive Director Dr. Dewey Murdick testified before the Senate Homeland Security and Governmental Affairs Emerging Threats Subcommittee on challenges related to emerging technologies and national security.

This explainer defines criteria for effective AI Incident Collection and identifies tradeoffs between potential reporting models: mandatory, voluntary, and citizen reporting.

CSET submitted the following comment in response to a Request for Information (RFI) from the National Science Foundation (NSF) about the development of the newly established Technology, Innovation, and Partnerships (TIP) Directorate, in accordance with the CHIPS and Science Act of 2022.

Analysis

Adding Structure to AI Harm

Mia Hoffmann and Heather Frase | July 2023

Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are deployed. This report presents a standardized conceptual framework for defining, tracking, classifying, and understanding harms caused by AI. It lays out the key elements required to identify AI harm, their basic relational structure, and their definitions, without imposing a single interpretation of AI harm. The brief concludes with an example of how to apply and customize the framework while keeping its modular structure.
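
As a rough illustration of what such a structured harm record might look like in practice, the sketch below encodes a few plausible elements (the AI system, the event, the affected entity, and the type of harm) as a small data class. The field names and categories are hypothetical stand-ins, not the framework's actual schema.

```python
# Hypothetical sketch of a structured AI harm record; field names and harm
# categories are illustrative and do not reproduce the report's framework.
from dataclasses import dataclass, field
from enum import Enum

class HarmType(Enum):
    PHYSICAL = "physical"
    FINANCIAL = "financial"
    PSYCHOLOGICAL = "psychological"
    CIVIL_RIGHTS = "civil_rights"

@dataclass
class AIHarmRecord:
    system: str                      # the AI system involved
    event_description: str           # what happened once the system was deployed
    affected_entity: str             # person, group, or organization harmed
    harm_types: list[HarmType] = field(default_factory=list)
    harm_realized: bool = True       # realized harm vs. an identified risk of harm

# Example record (invented for illustration).
record = AIHarmRecord(
    system="resume-screening model",
    event_description="qualified applicants were systematically screened out",
    affected_entity="job applicants",
    harm_types=[HarmType.FINANCIAL, HarmType.CIVIL_RIGHTS],
)
```

Keeping the elements modular in this way is what allows different organizations to classify and aggregate harm reports under their own definitions, as the brief's customization example suggests.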

Analysis

A Matrix for Selecting Responsible AI Frameworks

Mina Narayanan and Christian Schoeberl | June 2023

Process frameworks provide a blueprint for organizations implementing responsible artificial intelligence (AI), but the sheer number of frameworks, along with their loosely specified audiences, can make it difficult for organizations to select ones that meet their needs. This report presents a matrix that organizes approximately 40 public process frameworks according to their areas of focus and the teams that can use them. Ultimately, the matrix helps organizations select the right resources for implementing responsible AI.
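
To make the idea of the matrix concrete, here is a minimal hypothetical sketch of how an organization might filter frameworks by focus area and by the team that can use them. The framework names, focus areas, and teams are invented for illustration and are not the roughly 40 entries catalogued in the report.

```python
# Hypothetical sketch of a framework-selection matrix; entries are invented.
FRAMEWORK_MATRIX = [
    {"name": "Framework A", "focus": "fairness assessment", "teams": {"product", "compliance"}},
    {"name": "Framework B", "focus": "risk management", "teams": {"engineering"}},
    {"name": "Framework C", "focus": "fairness assessment", "teams": {"engineering", "research"}},
]

def select_frameworks(focus: str, team: str) -> list[str]:
    """Return frameworks covering the given focus area that the team can use."""
    return [
        entry["name"]
        for entry in FRAMEWORK_MATRIX
        if entry["focus"] == focus and team in entry["teams"]
    ]

print(select_frameworks("fairness assessment", "engineering"))  # -> ['Framework C']
```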

Analysis

One Size Does Not Fit All

Heather Frase | February 2023

Artificial intelligence is so diverse that no simple, one-size-fits-all assessment approach can adequately be applied to it. AI systems vary widely in their functionality, capabilities, and outputs. They are also created with different tools, data modalities, and resources, which further complicates their assessment. Thus, a collection of assessment approaches and processes is needed to cover the wide range of AI products, tools, services, and resources.

Analysis

A Common Language for Responsible AI

Emelia Probasco | October 2022

Policymakers, engineers, program managers, and operators need the bedrock of a common set of terms to instantiate responsible AI for the Department of Defense. Rather than create a DOD-specific set of terms, this paper argues that the DOD could benefit from adopting the key characteristics defined by the National Institute of Standards and Technology in its draft AI Risk Management Framework, with only two exceptions.

Formal Response

Comment to NIST on the AI Risk Management Framework

Mina Narayanan | September 29, 2022

CSET submitted the following comment in response to the National Institute of Standards and Technology's second draft of its AI Risk Management Framework.

Data Brief

Exploring Clusters of Research in Three Areas of AI Safety

Helen Toner and Ashwin Acharya | February 2022

Problems of AI safety are the subject of increasing interest for engineers and policymakers alike. This brief uses the CSET Map of Science to investigate how research into three areas of AI safety — robustness, interpretability, and reward learning — is progressing. It identifies eight research clusters that contain a significant amount of research relating to these three areas and describes trends and key papers for each of them.

Analysis

Key Concepts in AI Safety: Specification in Machine Learning

Tim G. J. Rudner and Helen Toner | December 2021

This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” outlined three categories of AI safety issues—problems of robustness, assurance, and specification—and the subsequent two papers described problems of robustness and assurance, respectively. This paper introduces specification as a key element in designing modern machine learning systems that operate as intended.