Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence, such as talent, data, and computational power, as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, as well as biotechnology.

Analysis

Chinese Critiques of Large Language Models

William Hannas, Huey-Meei Chang, Maximilian Riesenhuber, and Daniel Chou | January 2025

Large generative models are widely viewed as the most promising path to general (human-level) artificial intelligence and attract billions of dollars in investment. The present enthusiasm notwithstanding, a chorus of high-ranking Chinese scientists regards this singular approach to AGI as ill-advised. This report documents these critiques in China’s research, public statements, and government planning, while pointing to additional, pragmatic reasons for China’s pursuit of a diversified research portfolio.

Analysis

How to Assess the Likelihood of Malicious Use of Advanced AI Systems

Josh A. Goldstein and Girish Sastry | March 2025

As new advanced AI systems roll out, there is widespread disagreement about malicious use risks. Are bad actors likely to misuse these tools for harm? This report presents a simple framework to guide the questions researchers ask—and the tools they use—to evaluate the likelihood of malicious use.

Formal Response

CSET’s Recommendations for an AI Action Plan

March 14, 2025

In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and maximizing benefits while mitigating risks. Our response highlights policies to strengthen the AI workforce, secure technology from illicit transfers, and foster an open and competitive AI ecosystem.

Analysis

Cybersecurity Risks of AI-Generated Code

Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles | November 2024

Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications.

Analysis

Through the Chat Window and Into the Real World: Preparing for AI Agents

Helen Toner, John Bansemer, Kyle Crichton, Matthew Burtell, Thomas Woodside, Anat Lior, Andrew Lohn, Ashwin Acharya, Beba Cibralic, Chris Painter, Cullen O’Keefe, Iason Gabriel, Kathleen Fisher, Ketan Ramakrishnan, Krystal Jackson, Noam Kolt, Rebecca Crootof, and Samrat Chatterjee | October 2024

Computer scientists have long sought to build systems that can actively and autonomously carry out complicated goals in the real world, commonly referred to as artificial intelligence "agents." Recently, significant progress in large language models has fueled new optimism about the prospect of building sophisticated AI agents. This CSET-led report synthesizes findings from a May 2024 workshop on the topic, including what constitutes an AI agent, how the technology is improving, what risks agents exacerbate, and intervention points that could help.

Analysis

Securing Critical Infrastructure in the Age of AI

Kyle Crichton, Jessica Ji, Kyle Miller, John Bansemer, Zachary Arnold, David Batz, Minwoo Choi, Marisa Decillis, Patricia Eke, Daniel M. Gerstein, Alex Leblang, Monty McGee, Greg Rattray, Luke Richards, and Alana Scott | October 2024

As critical infrastructure operators and providers seek to harness the benefits of new artificial intelligence capabilities, they must also manage associated risks from both AI-enabled cyber threats and potential vulnerabilities in deployed AI systems. In June 2024, CSET led a workshop to assess these issues. This report synthesizes our findings, drawing on lessons from cybersecurity and insights from critical infrastructure sectors to identify challenges and potential risk mitigations associated with AI adoption.

Data Snapshot

Identifying Cyber Education Hotspots: An Interactive Guide

Maggie Wu and Brian Love | June 5, 2024

In February 2024, CSET introduced its new cybersecurity jobs dataset, a resource comprising approximately 1.4 million LinkedIn profiles of current U.S. cybersecurity workers. This data snapshot uses the dataset to identify the institutions that produce the most cybersecurity talent.

Analysis

Putting Teeth into AI Risk Management

Matthew Schoemaker | May 2024

President Biden's October 2023 executive order prioritizes the governance of artificial intelligence in the federal government, prompting the urgent creation of AI risk management standards and procurement guidelines. Soon after the order's signing, the Office of Management and Budget issued guidance for federal departments and agencies, including minimum risk standards for AI in federal contracts. As in cybersecurity, procurement rules will be used to enforce AI development best practices among federal suppliers. This report offers recommendations for implementing AI risk management procurement rules.

Analysis

How Persuasive is AI-Generated Propaganda?

Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, and Michael Tomz | February 2024

Research participants who read propaganda generated by GPT-3 davinci (a large language model) were nearly as persuaded as those who read real propaganda from Iran or Russia, according to a new peer-reviewed study by Josh A. Goldstein and co-authors.

Data Snapshot

Introducing the Cyber Jobs Dataset

Maggie Wu | February 6, 2024

This data snapshot is the first in a series on CSET’s cybersecurity jobs data, a new dataset created by classifying data from 513 million LinkedIn user profiles. Here, we offer an overview of its creation and explore some use cases for analysis.

Analysis

The Core of Federal Cyber Talent

Ali Crawford | January 2024

Strengthening the federal cyber workforce is one of the main priorities of the National Cyber Workforce and Education Strategy. The National Science Foundation’s CyberCorps Scholarship-for-Service program is a direct cyber talent pipeline into the federal workforce. As the program works to meet growing demand for cyber talent, some form of expansion appears necessary. This policy brief summarizes trends among participating institutions to understand how the program might expand, and outlines a potential future artificial intelligence (AI) federal scholarship-for-service program.