Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Analysis

AI Safety and Automation Bias

Lauren Kahn, Emelia Probasco, and Ronnie Kinoshita
| November 2024

Automation bias is a critical issue for artificial intelligence deployment. It can cause otherwise knowledgeable users to make crucial and even obvious errors. Organizational, technical, and educational leaders can mitigate these biases through training, design, and processes. This paper explores automation bias and ways to mitigate it through three case studies: Tesla’s autopilot incidents, aviation incidents at Boeing and Airbus, and Army and Navy air defense incidents.

Analysis

Fueling China’s Innovation: The Chinese Academy of Sciences and Its Role in the PRC’s S&T Ecosystem

Cole McFaul, Hanna Dohmen, Sam Bresnick, and Emily S. Weinstein
| October 2024

The Chinese Academy of Sciences is among the most important S&T organizations in the world and plays a key role in advancing Beijing’s S&T objectives. This report provides an in-depth look into the organization and its various functions within China’s S&T ecosystem, including advancing S&T research, fostering the commercialization of critical and emerging technologies, and contributing to S&T policymaking.

Analysis

Enabling Principles for AI Governance

Owen Daniels and Dewey Murdick
| July 2024

How to govern artificial intelligence is a concern that is rightfully top of mind for lawmakers and policymakers. To govern AI effectively, regulators must 1) know the terrain of AI risk and harm by tracking incidents and collecting data; 2) develop their own AI literacy and build better public understanding of the benefits and risks; and 3) preserve adaptability and agility by developing policies that can be updated as AI evolves.

Analysis

China’s Military AI Roadblocks

Sam Bresnick
| June 2024

China’s leadership believes that artificial intelligence will play a central role in future wars. However, the author's comprehensive review of dozens of Chinese-language journal articles about AI and warfare reveals that Chinese defense experts claim that Beijing is facing several technological challenges that may hinder its ability to capitalize on the advantages provided by military AI. This report outlines these perceived barriers and identifies several technologies that Chinese experts believe may help the country develop and deploy military AI-enabled systems.

Analysis

Gao Huajian and the China Talent Returnee Question

William Hannas, Huey-Meei Chang, and Daniel Chou
| May 2024

The celebrated return to China of its overseas scientists, as evidenced in the recent case of physicist Gao Huajian, is typically cited as a loss to the United States. This report takes a contrarian view, arguing that the benefits equation is far more complicated. PRC programs that channel diaspora achievements “back” to China and the inclination of many scientists to work in familiar venues blur the distinction between returning to China and staying in place.

Analysis

Bibliometric Analysis of China’s Non-Therapeutic Brain-Computer Interface Research

William Hannas, Huey-Meei Chang, Rishika Chauhan, Daniel Chou, John O’Callaghan, Max Riesenhuber, Vikram Venkatram, and Jennifer Wang
| March 2024

China’s brain-computer interface research has two dimensions. Besides its usual applications in neuropathology, China is extending the benefits of BCI to the general population, aiming at enhanced cognition and a “merger” of natural and artificial intelligence. This report, authored in collaboration with researchers from the Department of War Studies at King’s College London, uses bibliometric analysis and expert assessment of technical documents to evaluate China’s BCI research, and concludes that the research is on track to achieve its targets.

Analysis

Assessing China’s AI Workforce

Dahlia Peterson, Ngor Luong, and Jacob Feldgoise
| November 2023

Demand for talent is one of the core elements of technological competition between the United States and China. In this issue brief, we explore demand signals in China’s domestic AI workforce in two ways: geographically and within the defense and surveillance sectors. Our exploration of job postings from Spring 2021 finds that more than three-quarters of all AI job postings are concentrated in just three regions: the Yangtze River Delta region, the Pearl River Delta, and the Beijing-Tianjin-Hebei area.

Data Brief

The Antimicrobial Resistance Research Landscape and Emerging Solutions

Vikram Venkatram and Katherine Quinn
| November 2023

Antimicrobial resistance (AMR) is one of the world’s most pressing global health threats. Basic research is the first step towards identifying solutions. This brief examines the AMR research landscape since 2000, finding that the amount of research is increasing and that the U.S. is a leading publisher, but also that novel solutions like phages and synthetic antimicrobial production are a small portion of that research.

Analysis

Decoding Intentions

Andrew Imbrie, Owen Daniels, and Helen Toner
| October 2023

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? Policymakers can send credible signals of their intent by making pledges or committing to undertaking certain actions for which they will pay a price—political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat. Talk is cheap, but inadvertent escalation is costly to all sides.

Analysis

The Inigo Montoya Problem for Trustworthy AI (International Version)

Emelia Probasco and Kathleen Curlee
| October 2023

Australia, Canada, Japan, the United Kingdom, and the United States emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, these countries define each of these principles in slightly different ways that could have large impacts on interoperability and the formulation of international norms. This creates what we call the “Inigo Montoya problem” in trustworthy AI, inspired by "The Princess Bride" movie quote: “You keep using that word. I do not think it means what you think it means.”

In collaboration with colleagues from CNAS and the Atlantic Council, CSET researchers Ngor Luong and Emily Weinstein provided this comment in response to Treasury's Advance Notice of Proposed Rulemaking request for public comment (TREAS-DO-2023-0009-0001).