Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Analysis

Assessing China’s AI Workforce

Dahlia Peterson, Ngor Luong, and Jacob Feldgoise | November 2023

Demand for talent is one of the core elements of technological competition between the United States and China. In this issue brief, we explore demand signals in China's domestic AI workforce in two ways: geographically and within the defense and surveillance sectors. Our analysis of job postings from spring 2021 finds that more than three-quarters of all AI job postings are concentrated in just three regions: the Yangtze River Delta, the Pearl River Delta, and the Beijing-Tianjin-Hebei area.

Formal Response

Comment on NIST RFI Related to the Executive Order Concerning Artificial Intelligence (88 FR 88368)

Mina Narayanan, Jessica Ji, and Heather Frase | February 2, 2024

On February 2, 2024, CSET's Assessment and CyberAI teams submitted a response to NIST's Request for Information related to the Executive Order Concerning Artificial Intelligence (88 FR 88368). The submission compiles recommendations from six CSET reports and analyses to assist NIST in implementing the AI Executive Order's requirements.

CSET also submitted a comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies on the appointment of Chief AI Officers, risk management for AI, and other processes following the October 30, 2023, Executive Order on AI.

Analysis

Repurposing the Wheel: Lessons for AI Standards

Mina Narayanan, Alexandra Seymour, Heather Frase, and Karson Elmgren | November 2023

Standards enable good governance practices by establishing consistent measurement and norms for interoperability, but creating standards for AI is a challenging task. In the fall of 2022, the Center for Security and Emerging Technology and the Center for a New American Security hosted a series of workshops examining standards development in finance, worker safety, cybersecurity, sustainable buildings, and medical devices, with the aim of applying lessons from these domains to AI. This workshop report summarizes our findings and recommendations.

Testimony

Advanced Technology: Examining Threats to National Security

Dewey Murdick | September 19, 2023

CSET Executive Director Dr. Dewey Murdick testified before the Senate Homeland Security and Governmental Affairs Emerging Threats Subcommittee on challenges related to emerging technologies and national security.

This explainer defines criteria for effective AI incident collection and identifies trade-offs among three potential reporting models: mandatory, voluntary, and citizen reporting.

CSET submitted the following comment in response to a Request for Information (RFI) from the National Science Foundation (NSF) about the development of the newly established Technology, Innovation, and Partnerships (TIP) Directorate, in accordance with the CHIPS and Science Act of 2022.

Analysis

Adding Structure to AI Harm

Mia Hoffmann and Heather Frase | July 2023

Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are deployed. This report presents a standardized conceptual framework for defining, tracking, classifying, and understanding harms caused by AI. It lays out the key elements required to identify AI harm and their basic relational structure, and it provides definitions without imposing a single interpretation of AI harm. The report concludes with an example of how to apply and customize the framework while keeping its modular structure.
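As a rough, hypothetical illustration of how a modular framework like this might be represented in practice, the sketch below models an affected entity, an implicated AI system, and the relation between them. All names and fields are our own assumptions for demonstration, not the report's taxonomy.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: names and fields are illustrative
# assumptions, not the taxonomy defined in the report.

@dataclass
class AISystem:
    name: str
    deployment_context: str  # where and how the system was used

@dataclass
class HarmEvent:
    entity: str                    # who or what experienced the harm
    harm_type: str                 # e.g., "physical", "financial"
    ai_system: Optional[AISystem]  # the system implicated, if any
    notes: Optional[str] = None

def is_ai_harm(event: HarmEvent) -> bool:
    """Treat an event as AI harm only when both an affected entity
    and an implicated AI system are identified."""
    return bool(event.entity) and event.ai_system is not None

# Example: a harm event tied to a deployed system qualifies.
event = HarmEvent("loan applicant", "financial",
                  AISystem("credit scorer", "consumer lending"))
assert is_ai_harm(event)
```

Because each element is a separate piece, a user could swap in their own harm categories or entity types without changing the overall relational structure, which is the sense in which such a framework stays modular.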

Analysis

A Matrix for Selecting Responsible AI Frameworks

Mina Narayanan and Christian Schoeberl | June 2023

Process frameworks provide a blueprint for organizations implementing responsible artificial intelligence (AI), but the sheer number of frameworks, along with their loosely specified audiences, can make it difficult for organizations to select ones that meet their needs. This report presents a matrix that organizes approximately 40 public process frameworks according to their areas of focus and the teams that can use them. Ultimately, the matrix helps organizations select the right resources for implementing responsible AI.
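As a minimal, invented sketch, the selection that such a matrix supports amounts to filtering frameworks along two axes: area of focus and the team that can use them. The entries and field names below are illustrative assumptions, not drawn from the report.

```python
# Minimal sketch with invented entries: organize process frameworks
# by area of focus and intended teams, then filter on both axes.

frameworks = [
    {"name": "Framework A", "focus": "risk management",
     "teams": {"governance", "engineering"}},
    {"name": "Framework B", "focus": "documentation",
     "teams": {"product"}},
    {"name": "Framework C", "focus": "risk management",
     "teams": {"governance"}},
]

def select(catalog, focus=None, team=None):
    """Return frameworks matching an area of focus and/or a team."""
    return [f for f in catalog
            if (focus is None or f["focus"] == focus)
            and (team is None or team in f["teams"])]

# A governance team looking for risk-management guidance:
print(select(frameworks, focus="risk management", team="governance"))
```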

Analysis

Reducing the Risks of Artificial Intelligence for Military Decision Advantage

Wyatt Hoffman and Heeu Millie Kim | March 2023

Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty: the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood of AI failures and containing their consequences.

Analysis

One Size Does Not Fit All

Heather Frase | February 2023

Artificial intelligence is so diverse that no single, one-size-fits-all approach can adequately assess it. AI systems vary widely in their functionality, capabilities, and outputs, and they are built with different tools, data modalities, and resources, which adds further variety to their assessment. A collection of approaches and processes is therefore needed to cover the wide range of AI products, tools, services, and resources.