Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Analysis

AI Safety and Automation Bias

Lauren Kahn, Emelia Probasco, and Ronnie Kinoshita
| November 2024

Automation bias is a critical issue for artificial intelligence deployment. It can lead otherwise knowledgeable users to make serious, even obvious, errors. Organizational, technical, and educational leaders can mitigate automation bias through training, design, and processes. This paper explores automation bias and ways to mitigate it through three case studies: Tesla’s autopilot incidents, aviation incidents at Boeing and Airbus, and Army and Navy air defense incidents.


Read our original translation of an Israeli government document, which took effect in 2022, that details the process by which the government conducts national security reviews of foreign investments. This document strengthens and expands the scope of earlier foreign investment screening rules that the Israeli government adopted in 2019.

Read our original translation of an Israeli government document, which took effect in 2019, that details the process by which the government conducts national security reviews of foreign investments.

Data Snapshot

Keyword Cascade Plots

Autumn Toney and Melissa Flagg
| February 1, 2023

Data Snapshots are informative descriptions and quick analyses that dig into CSET’s unique data resources. This three-part series presents a method to explore and visualize connections across CSET’s research clusters and enable identification of research of interest within CSET’s merged corpus of scholarly literature and Map of Science.

Analysis

U.S. Outbound Investment into Chinese AI Companies

Emily S. Weinstein and Ngor Luong
| February 2023

U.S. policymakers are increasingly concerned about the national security implications of U.S. investments in China, and some are considering a new regime for screening outbound investments on national security grounds. The authors identify the main U.S. investors active in the Chinese artificial intelligence market and the set of AI companies in China that have benefited from U.S. capital. They also recommend next steps for U.S. policymakers to better address concerns over capital flowing into the Chinese AI ecosystem.

CSET Non-Resident Senior Fellow Kevin Wolf testified before the Senate Banking Committee on U.S. export control policy and opportunities.

See our original translation of a document that describes, in broad strokes, the Chinese Communist Party’s guidelines for how big data can be used to spur economic development.

Formal Response

Comment to the National Biotechnology and Biomanufacturing Initiative

Caroline Schuerger, Steph Batalis, and Vikram Venkatram
| January 20, 2023

CSET’s Dr. Caroline Schuerger, Dr. Steph Batalis, and Vikram Venkatram submitted this comment with recommendations for the National Biotechnology and Biomanufacturing Initiative.

The U.S. semiconductor supply chain’s resilience will meaningfully increase only if current efforts to re-shore fabrication (that is, to situate more facilities that make its key parts in the United States) are met with commensurate efforts to re-shore upstream material production along with downstream assembly, test, and packaging (ATP) of finished microelectronics.

Analysis

Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk

Josh A. Goldstein, Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel, and Katerina Sedova
| January 2023

Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near-human levels. In a new report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.

See our original translation of a document announcing a new crackdown on mobile apps by China’s internet regulator.