Reports

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

China’s Military AI Wish List

Emelia Probasco, Sam Bresnick, and Cole McFaul
| February 2026

This report examines thousands of Chinese-language open-source requests for proposal (RFPs) published by the People’s Liberation Army between January 1, 2023, and December 31, 2024. The RFPs the authors reviewed offer insights into the PLA’s priorities and ambitions for AI-enabled military technologies associated with C5ISRT: command, control, communications, computers, cyber, intelligence, surveillance, reconnaissance, and targeting.

Reports

A Common Language for Responsible AI

Emelia Probasco
| October 2022

Policymakers, engineers, program managers, and operators need a common set of terms as the bedrock for instantiating responsible AI in the Department of Defense. Rather than create a DOD-specific set of terms, this paper argues that the DOD could benefit from adopting the key characteristics defined by the National Institute of Standards and Technology in its draft AI Risk Management Framework, with only two exceptions.

Reports

Truth, Lies, and Automation

Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova
| May 2021

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to assess its potential for misuse in disinformation campaigns. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.

Data Brief

Machine Intelligence for Scientific Discovery and Engineering Invention

Matthew Daniels, Autumn Toney, Melissa Flagg, and Charles Yang
| May 2021

The advantages of nations depend in part on their access to new inventions, and modern applications of artificial intelligence can help accelerate the creation of new inventions in the years ahead. This data brief is a first step toward understanding how modern AI and machine learning have begun accelerating growth across a wide array of science and engineering disciplines.

Reports

A DPA for the 21st Century

Jamie Baker
| April 2021

The Defense Production Act can be an effective tool to bring U.S. industrial might to bear on broader national security challenges, including those in technology. If updated and used to its full effect, the DPA could be leveraged to encourage development and governance of artificial intelligence. And debate about the DPA’s use for AI purposes can serve to shape and condition expectations about the role the law’s authorities should or could play, as well as to identify essential legislative gaps.

Reports

The Path of Least Resistance

Margarita Konaev and Husanjot Chahal
| April 2021

As multinational collaboration on emerging technologies takes center stage, U.S. allies and partners must overcome the technological, bureaucratic, and political barriers to working together. This report assesses the challenges to multinational collaboration and explains how joint projects centered on artificial intelligence applications for military logistics and sustainment offer a viable path forward.

Reports

Trusted Partners

Margarita Konaev, Tina Huang, and Husanjot Chahal
| February 2021

As the U.S. military integrates artificial intelligence into its systems and missions, there are outstanding questions about the role of trust in human-machine teams. This report examines the drivers and effects of such trust, assesses the risks from too much or too little trust in intelligent technologies, reviews efforts to build trustworthy AI systems, and offers future directions for research on trust relevant to the U.S. military.

Data Brief

“Cool Projects” or “Expanding the Efficiency of the Murderous American War Machine?”

Catherine Aiken, Rebecca Kagan, and Michael Page
| November 2020

Is there a rift between the U.S. tech sector and the Department of Defense? To better understand this relationship, CSET surveyed U.S. AI industry professionals about their views toward working on DOD-funded AI projects. The authors find that these professionals hold a broad range of opinions about working with DOD. Among the key findings: Most AI professionals are positive or neutral about working on DOD-funded AI projects, and willingness to work with DOD increases for projects with humanitarian applications.

National security leaders view AI as a priority technology for defending the United States. This two-part analysis is intended to help policymakers better understand the scope and implications of U.S. military investment in autonomy and AI. It focuses on the range of autonomous and AI-enabled technologies the Pentagon is developing, the military capabilities these applications promise to deliver, and the impact that such advances could have on key strategic issues.

This brief examines how the Pentagon’s investments in autonomy and AI may affect its military capabilities and strategic interests. It proposes that DOD invest in improving its understanding of trust in human-machine teams and leverage existing AI technologies to enhance military readiness and endurance. In the long term, investments in reliable, trustworthy, and resilient AI systems are critical for ensuring sustained military, technological, and strategic advantages.

The Pentagon has a wide range of research and development programs using autonomy and AI in unmanned vehicles and systems, information processing, decision support, targeting functions, and other areas. This policy brief delves into the details of DOD’s science and technology program to assess trends in funding, key areas of focus, and gaps in investment that could stymie the development and fielding of AI systems in operational settings.