Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence, such as talent, data, and computational power, as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, and we study biotechnology.

Analysis

AI Safety and Automation Bias

Lauren Kahn, Emelia Probasco, and Ronnie Kinoshita
| November 2024

Automation bias is a critical issue for artificial intelligence deployment. It can cause otherwise knowledgeable users to make crucial and even obvious errors. Organizational, technical, and educational leaders can mitigate this bias through training, design, and processes. This paper explores automation bias and ways to mitigate it through three case studies: Tesla’s Autopilot incidents, aviation incidents at Boeing and Airbus, and Army and Navy air defense incidents.

Formal Response

Comment on DHS’s Proposed Rule Modernizing H-1B Requirements

Luke Koslosky
| December 2023

CSET submitted the following comment in response to a DHS Notice of Proposed Rulemaking from U.S. Citizenship and Immigration Services about modernizing H-1B requirements, providing flexibility in the F-1 program, and making program improvements affecting other nonimmigrant workers.

Analysis

Scaling AI

Andrew Lohn
| December 2023

While recent progress in artificial intelligence (AI) has relied primarily on increasing the size and scale of models and the computing budgets used to train them, we ask whether those trends will continue. Financial incentives work against further scaling, and additional investment can yield diminishing returns. These effects may already be slowing growth among the very largest models. Future progress in AI may rely more on ideas for shrinking models and inventive use of existing models than on simply increasing investment in compute resources.

Analysis

Controlling Large Language Model Outputs: A Primer

Jessica Ji, Josh A. Goldstein, and Andrew Lohn
| December 2023

Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why controlling model outputs is so challenging.

Other

AI and Biorisk: An Explainer

Steph Batalis
| December 2023

Recent government directives, international conferences, and media headlines reflect growing concern that artificial intelligence could exacerbate biological threats. When it comes to biorisk, AI tools are cited as enablers that lower information barriers, enhance novel biothreat design, or otherwise increase a malicious actor’s capabilities. In this explainer, CSET Biorisk Research Fellow Steph Batalis summarizes the state of the biorisk landscape with and without AI.

CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies regarding the appointment of Chief AI Officers, Risk Management for AI, and other processes following the October 30, 2023 Executive Order on AI.

Analysis

Repurposing the Wheel: Lessons for AI Standards

Mina Narayanan, Alexandra Seymour, Heather Frase, and Karson Elmgren
| November 2023

Standards enable good governance by establishing consistent measurement practices and norms for interoperability, but creating standards for AI is a challenging task. In the fall of 2022, the Center for Security and Emerging Technology and the Center for a New American Security hosted a series of workshops examining standards development in finance, worker safety, cybersecurity, sustainable buildings, and medical devices in order to apply lessons from these domains to AI. This workshop report summarizes our findings and recommendations.

Analysis

Assessing China’s AI Workforce

Dahlia Peterson, Ngor Luong, and Jacob Feldgoise
| November 2023

Demand for talent is one of the core elements of technological competition between the United States and China. In this issue brief, we explore demand signals in China’s domestic AI workforce in two ways: geographically and within the defense and surveillance sectors. Our analysis of job postings from spring 2021 finds that more than three-quarters of all AI job postings are concentrated in just three regions: the Yangtze River Delta, the Pearl River Delta, and the Beijing-Tianjin-Hebei area.

Translation

Translation Snapshot: Chinese AI White Papers

Ben Murphy
| November 29, 2023

Translation Snapshots are short posts that highlight related translations produced by CSET’s in-house translation team. Each snapshot identifies relevant translations, provides short summaries, and links to the full translations. Check back regularly for additional Translation Snapshots highlighting our work.

Read our translation of an announcement by a Chinese Communist Party-run association for scientists that names 28 questions as China’s outstanding S&T issues and challenges of 2023.

Read our translation of a Chinese government policy for the near-term development of computing power.