Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

AI Education in China and the United States

Dahlia Peterson, Kayla Goode, and Diana Gehlhaus
| September 2021

A globally competitive AI workforce hinges on the education, development, and sustainment of the best and brightest AI talent. This issue brief compares efforts to integrate AI education in China and the United States, along with the advantages and disadvantages each approach entails. The authors consider key differences in system design and oversight, as well as strategic planning. They then explore implications for the U.S. national security community.

Reports

Education in China and the United States

Dahlia Peterson, Kayla Goode, and Diana Gehlhaus
| September 2021

A globally competitive AI workforce hinges on the education, development, and sustainment of the best and brightest AI talent. This issue brief provides an overview of the education systems in China and the United States, lending context to better understand the accompanying main report, “AI Education in China and the United States: A Comparative Assessment.”

Reports

Small Data’s Big AI Potential

Husanjot Chahal, Helen Toner, and Ilya Rahkovsky
| September 2021

Conventional wisdom suggests that cutting-edge artificial intelligence is dependent on large volumes of data. An overemphasis on “big data” ignores the existence—and underestimates the potential—of several AI approaches that do not require massive labeled datasets. This issue brief is a primer on “small data” approaches to AI. It presents exploratory findings on the current and projected progress in scientific research across these approaches, which country leads, and the major sources of funding for this research.

Data Brief

China is Fast Outpacing U.S. STEM PhD Growth

Remco Zwetsloot, Jack Corrigan, Emily S. Weinstein, Dahlia Peterson, Diana Gehlhaus, and Ryan Fedasiuk
| August 2021

Since the mid-2000s, China has consistently graduated more STEM PhDs than the United States, a key indicator of a country’s future competitiveness in STEM fields. This paper explores the data on STEM PhD graduation rates and projects their growth over the next five years, during which the gap between China and the United States is expected to increase significantly.

Data Brief

U.S. AI Summer Camps

Claire Perkins and Kayla Goode
| August 2021

Summer camps are an integral part of many U.S. students’ education, but little is known about camps that focus on artificial intelligence education. This data brief maps out the AI summer camp landscape in the United States and explores the camps’ locations, target age ranges, prices, and hosting organization types.

Reports

Ending Innovation Tourism

Melissa Flagg and Jack Corrigan
| July 2021

As dual-use technologies transform the national security landscape, the U.S. Department of Defense has established a variety of offices and programs dedicated to bringing private sector innovation into the military. However, these efforts have largely failed to drive cutting-edge commercial technology into major military platforms and systems. This report examines the shortcomings of the DOD’s current approach to defense innovation and offers recommendations for a more effective strategy.

Reports

The Huawei Moment

Alex Rubin, Alan Omar Loera Martinez, Jake Dow, and Anna Puglisi
| July 2021

For the first time, a Chinese company—Huawei—is set to lead the global transition from one key national security infrastructure technology to the next. How did Washington, at the beginning of the twenty-first century, fail to protect U.S. firms in this strategic technology and allow a geopolitical competitor to take a leadership position in national security-relevant critical infrastructure such as telecommunications? This policy brief highlights the characteristics of 5G development that China leveraged, exploited, and supported to take the lead in this key technology. The Huawei case study is in some ways the canary in the coal mine for emerging technologies and an illustration of what can happen to U.S. competitiveness when China’s companies do not have to base decisions on market forces.

Reports

U.S. Demand for AI Certifications

Diana Gehlhaus and Ines Pancorbo
| June 2021

This issue brief explores whether artificial intelligence and AI-related certifications serve as potential pathways to enter the U.S. AI workforce. The authors find that, according to data on U.S. job postings for AI occupations from 2010 to 2020, there is little demand from employers for AI and AI-related certifications. From this perspective, such certifications appear to present more hype than promise.

Reports

Research Security, Collaboration, and the Changing Map of Global R&D

Melissa Flagg, Autumn Toney, and Paul Harris
| June 2021

The global map of research has shifted dramatically over the last 20 years. Annual global investment in research and development has tripled, and the United States’ share of both global R&D funding and total research output is diminishing. The open research system, with its expanding rates of investment and interconnectedness, has delivered tremendous benefits to many nations but also created new challenges for research integrity and security.

Reports

Truth, Lies, and Automation

Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova
| May 2021

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.