Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also conduct research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Analysis

Building the Tech Coalition

Emelia Probasco
| August 2024

The U.S. Army’s 18th Airborne Corps can now target artillery just as efficiently as the best unit in recent American history—and it can do so with two thousand fewer servicemembers. This report presents a case study of how the 18th Airborne Corps partnered with tech companies to develop, prototype, and operationalize software and artificial intelligence for clear military advantage. The lessons learned inform recommendations for the U.S. Department of Defense as it pushes to further develop and adopt AI and other new technologies.

Analysis

Cybersecurity Risks of AI-Generated Code

Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles
| November 2024

Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications.
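
The report's three risk categories are not enumerated in this summary, but a small, hypothetical example illustrates the kind of direct risk at stake: an AI assistant suggesting code that splices user input into a SQL query. The function names and schema below are invented for illustration and are not drawn from the report.

```python
import sqlite3

# Hypothetical illustration (not from the report): the kind of code an AI
# assistant might plausibly suggest, which concatenates user input directly
# into a SQL string and is therefore vulnerable to injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# The conventional fix: a parameterized query that keeps user data out of
# the SQL statement itself.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)", [("alice",), ("bob",)])
    print(find_user_unsafe(conn, "' OR '1'='1"))  # injection dumps every row
    print(find_user_safe(conn, "' OR '1'='1"))    # returns no rows
```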

Analysis

Through the Chat Window and Into the Real World: Preparing for AI Agents

Helen Toner, John Bansemer, Kyle Crichton, Matthew Burtell, Thomas Woodside, Anat Lior, Andrew Lohn, Ashwin Acharya, Beba Cibralic, Chris Painter, Cullen O’Keefe, Iason Gabriel, Kathleen Fisher, Ketan Ramakrishnan, Krystal Jackson, Noam Kolt, Rebecca Crootof, and Samrat Chatterjee
| October 2024

Computer scientists have long sought to build systems that can actively and autonomously carry out complicated goals in the real world—commonly referred to as artificial intelligence "agents." Recently, significant progress in large language models has fueled new optimism about the prospect of building sophisticated AI agents. This CSET-led workshop report synthesizes findings from a May 2024 workshop on the topic, including what constitutes an AI agent, how the technology is improving, which risks agents exacerbate, and which intervention points could help.

Analysis

Securing Critical Infrastructure in the Age of AI

Kyle Crichton, Jessica Ji, Kyle Miller, John Bansemer, Zachary Arnold, David Batz, Minwoo Choi, Marisa Decillis, Patricia Eke, Daniel M. Gerstein, Alex Leblang, Monty McGee, Greg Rattray, Luke Richards, and Alana Scott
| October 2024

As critical infrastructure operators and providers seek to harness the benefits of new artificial intelligence capabilities, they must also manage associated risks from both AI-enabled cyber threats and potential vulnerabilities in deployed AI systems. In June 2024, CSET led a workshop to assess these issues. This report synthesizes our findings, drawing on lessons from cybersecurity and insights from critical infrastructure sectors to identify challenges and potential risk mitigations associated with AI adoption.

Data Snapshot

Identifying Cyber Education Hotspots: An Interactive Guide

Maggie Wu and Brian Love
| June 5, 2024

In February 2024, CSET introduced its cybersecurity jobs dataset, a novel resource comprising roughly 1.4 million LinkedIn profiles of current U.S. cybersecurity workers. This data snapshot uses the dataset to identify the institutions that produce the most cybersecurity talent.

Analysis

Putting Teeth into AI Risk Management

Matthew Schoemaker
| May 2024

President Biden's October 2023 executive order prioritizes the governance of artificial intelligence in the federal government, prompting the urgent creation of AI risk management standards and procurement guidelines. Soon after the order's signing, the Office of Management and Budget issued guidance for federal departments and agencies, including minimum risk standards for AI in federal contracts. As in cybersecurity, procurement rules will be used to enforce AI development best practices among federal suppliers. This report offers recommendations for implementing AI risk management rules in federal procurement.

Analysis

How Persuasive is AI-Generated Propaganda?

Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, and Michael Tomz
| February 2024

Research participants who read propaganda generated by GPT-3 davinci (a large language model) were nearly as persuaded as those who read real propaganda from Iran or Russia, according to a new peer-reviewed study by Josh A. Goldstein and co-authors.

Data Snapshot

Introducing the Cyber Jobs Dataset

Maggie Wu
| February 6, 2024

This data snapshot is the first in a series on CSET’s cybersecurity jobs data, a new dataset created by classifying data from 513 million LinkedIn user profiles. Here, we offer an overview of its creation and explore some use cases for analysis.
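
CSET's actual classification pipeline is not described in this summary. As a purely hypothetical sketch of what profile classification can involve, the keyword rule below flags cybersecurity job titles; a system operating at the scale of hundreds of millions of profiles would more plausibly use a trained classifier over many profile fields.

```python
# Purely hypothetical sketch: the keyword list and function are invented for
# illustration and do not reflect CSET's actual methodology.
CYBER_TITLE_KEYWORDS = (
    "security analyst", "security engineer", "penetration tester",
    "incident response", "soc analyst", "ciso",
)

def looks_like_cyber_job(job_title: str) -> bool:
    """Return True if the job title contains a cybersecurity keyword."""
    title = job_title.lower()
    return any(keyword in title for keyword in CYBER_TITLE_KEYWORDS)

if __name__ == "__main__":
    titles = ["Senior Security Engineer", "Marketing Manager", "SOC Analyst II"]
    print([t for t in titles if looks_like_cyber_job(t)])
```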

Analysis

The Core of Federal Cyber Talent

Ali Crawford
| January 2024

Strengthening the federal cyber workforce is one of the main priorities of the National Cyber Workforce and Education Strategy. The National Science Foundation’s CyberCorps Scholarship-for-Service program is a direct cyber talent pipeline into the federal workforce. As the program works to meet growing demand for cyber talent, some form of expansion is needed. This policy brief summarizes trends among participating institutions to understand how the program might expand, and illustrates what a potential future federal scholarship-for-service program for artificial intelligence (AI) could look like.

Analysis

Scaling AI

Andrew Lohn
| December 2023

While recent progress in artificial intelligence (AI) has relied primarily on increasing the size of models and the computing budgets used to train them, we ask whether those trends will continue. Financial incentives work against continued scaling, and further investment can bring diminishing returns. These effects may already be slowing growth among the very largest models. Future progress in AI may rely more on ideas for shrinking models and on inventive use of existing models than on simply increasing investment in compute resources.
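
The diminishing-returns point can be made concrete with the generic power-law form reported in the empirical scaling-laws literature (illustrative notation; the formula and constants are not taken from this report):

```latex
% Loss as a power law in model size N, with fitted constants N_c and \alpha_N > 0
% (generic form from the scaling-laws literature, not a result of this report):
L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}
% Marginal improvement from added scale:
\frac{dL}{dN} = -\,\alpha_N \,\frac{N_c^{\alpha_N}}{N^{\alpha_N + 1}}
```

Under this form, each additional unit of scale buys a smaller absolute reduction in loss even as training costs keep growing, which is the economic tension the report examines.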

Analysis

Controlling Large Language Model Outputs: A Primer

Jessica Ji, Josh A. Goldstein, and Andrew Lohn
| December 2023

Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.
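
The four techniques are not named in this summary. As a minimal sketch of one commonly used class of controls, the snippet below applies a rule-based filter to model text after generation; real deployments typically rely on learned classifiers rather than keyword rules, and the patterns here are invented for illustration.

```python
import re

# Hypothetical block-list patterns; illustrative only.
BLOCKED_PATTERNS = [
    re.compile(r"\b(?:password|api[_ ]?key)\s*[:=]", re.IGNORECASE),  # credential leaks
    re.compile(r"rm\s+-rf\s+/"),                                      # destructive commands
]

REFUSAL = "[output withheld by content filter]"

def filter_output(model_text: str) -> str:
    """Return the model's text unchanged, or a refusal if any rule matches."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_text):
            return REFUSAL
    return model_text

if __name__ == "__main__":
    print(filter_output("Here is a haiku about autumn leaves."))
    print(filter_output("Sure! Set api_key = 'sk-123' in your script."))
```

Rule-based filters like this are easy to evade with paraphrase, which is one illustration of why controlling model outputs is so challenging and why such filters are usually layered with training-time techniques rather than used alone.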