Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI's development and use, as well as biotechnology.

Analysis

AI Safety and Automation Bias

Lauren Kahn, Emelia Probasco, and Ronnie Kinoshita
| November 2024

Automation bias is a critical issue for artificial intelligence deployment. It can cause otherwise knowledgeable users to make crucial and even obvious errors. Organizational, technical, and educational leaders can mitigate these biases through training, design, and processes. This paper explores automation bias and ways to mitigate it through three case studies: Tesla's Autopilot incidents, aviation incidents at Boeing and Airbus, and Army and Navy air defense incidents.

Data Brief

Identifying Emerging Technologies in Research

Catherine Aiken, James Dunham, Jennifer Melot, and Zachary Arnold
| December 2024

This paper presents two new methods for identifying research relevant to emerging technology. The authors developed and deployed technology topic classification and targeted research field scoring over a corpus of scientific literature to identify research relevant to cybersecurity, large language model development, and chip design and fabrication — expanding CSET's existing set of topic classifications for AI, computer vision, NLP, robotics, and AI safety. The paper summarizes the motivation, methods, and results.

Analysis

AI and the Future of Workforce Training

Matthias Oschinski, Ali Crawford, and Maggie Wu
| December 2024

The emergence of artificial intelligence as a general-purpose technology could profoundly transform work across industries, potentially affecting a variety of occupations. While previous technological shifts largely enhanced productivity and wages for white-collar workers but led to displacement pressures for blue-collar workers, AI may significantly disrupt both groups. This report examines the changing landscape of workforce development, highlighting the crucial role of community colleges, alternative career pathways, and AI-enabled training solutions in preparing workers for this transition.

Analysis

Staying Current with Emerging Technology Trends: Using Big Data to Inform Planning

Emelia Probasco and Christian Schoeberl
| December 2024

This report proposes an approach to systematically identify promising research using big data and analyze that research’s potential impact through structured engagements with subject-matter experts. The methodology offers a structured way to proactively monitor the research landscape and inform strategic R&D priorities.

Formal Response

RFI Response: Safety Considerations for Chemical and/or Biological AI Models

Steph Batalis and Vikram Venkatram
| December 3, 2024

Dr. Steph Batalis and Vikram Venkatram offered the following comment in response to the National Institute of Standards and Technology's request for information on safety considerations for chemical and biological AI models.

Artificial intelligence (AI) tools offer exciting possibilities for advancing scientific, biomedical, and public health research. At the same time, these tools have raised concerns about their potential to contribute to biological threats, like those from pathogens and toxins. This report describes pathways that result in biological harm, with or without AI, and a range of governance tools and mitigation measures to address them.

Analysis

Acquiring AI Companies: Tracking U.S. AI Mergers and Acquisitions

Jack Corrigan, Ngor Luong, and Christian Schoeberl
| November 2024

Maintaining U.S. technological leadership in the years ahead will require policymakers to promote competition in the AI market and prevent industry leaders from wielding their power in harmful ways. This brief examines trends in U.S. mergers and acquisitions of artificial intelligence companies. The authors found that AI-related M&A deals have grown significantly over the last decade, with large U.S. tech companies being the most prolific acquirers of AI firms.

Analysis

Cybersecurity Risks of AI-Generated Code

Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles
| November 2024

Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications.

Analysis

Fueling China’s Innovation: The Chinese Academy of Sciences and Its Role in the PRC’s S&T Ecosystem

Cole McFaul, Hanna Dohmen, Sam Bresnick, and Emily S. Weinstein
| October 2024

The Chinese Academy of Sciences is among the most important S&T organizations in the world and plays a key role in advancing Beijing’s S&T objectives. This report provides an in-depth look into the organization and its various functions within China’s S&T ecosystem, including advancing S&T research, fostering the commercialization of critical and emerging technologies, and contributing to S&T policymaking.

Analysis

Through the Chat Window and Into the Real World: Preparing for AI Agents

Helen Toner, John Bansemer, Kyle Crichton, Matthew Burtell, Thomas Woodside, Anat Lior, Andrew Lohn, Ashwin Acharya, Beba Cibralic, Chris Painter, Cullen O’Keefe, Iason Gabriel, Kathleen Fisher, Ketan Ramakrishnan, Krystal Jackson, Noam Kolt, Rebecca Crootof, and Samrat Chatterjee
| October 2024

Computer scientists have long sought to build systems that can actively and autonomously carry out complicated goals in the real world—commonly referred to as artificial intelligence "agents." Recently, significant progress in large language models has fueled new optimism about the prospect of building sophisticated AI agents. This CSET-led workshop report synthesizes findings from a May 2024 workshop on this topic, including what constitutes an AI agent, how the technology is improving, what risks agents exacerbate, and intervention points that could help.