Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, as well as biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

System Re-engineering

Melissa Flagg and Paul Harris
| September 2020

The United States must adopt a new approach to R&D policy to optimize the diversity of the current system, manage the risks of system dispersion and deliver the benefits of R&D to society. This policy brief provides a new framework for understanding the U.S. R&D ecosystem and recommendations for repositioning the role of the federal government in R&D.

Reports

Optional Practical Training

Zachary Arnold and Remco Zwetsloot
| September 2020

Preserving pathways for high-skilled foreign talent is critical to U.S. leadership in artificial intelligence.

Testimony

CSET Founding Director Jason Matheny testified before the House Budget Committee for its hearing, "Machines, Artificial Intelligence, & the Workforce: Recovering and Readying Our Economy for the Future." Dr. Matheny's full testimony as prepared for delivery is available in the publication.

Reports

Mainframes: A Provisional Analysis of Rhetorical Frames in AI

Andrew Imbrie, James Dunham, Rebecca Gelles, and Catherine Aiken
| August 2020

Are great powers engaged in an artificial intelligence arms race? This issue brief explores the rhetorical framing of AI by analyzing more than 4,000 English-language articles over a seven-year period. Among its findings: a growing number of articles frame AI development as a competition, but articles using the competition frame represent a declining proportion of articles about AI.

One sentence summarizes the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. This AI triad of computing power, algorithms, and data offers a framework for decision-making in national security policy.

Reports

Deepfakes: A Grounded Threat Assessment

Tim Hwang
| July 2020

The rise of deepfakes could enhance the effectiveness of disinformation efforts by states, political parties and adversarial actors. How rapidly is this technology advancing, and who is likely to adopt it for malicious ends? This report offers a comprehensive deepfake threat assessment grounded in the latest machine learning research on generative models.

Reports

Shaping the Terrain of AI Competition

Tim Hwang
| June 2020

How should democracies effectively compete against authoritarian regimes in the AI space? This report offers a “terrain strategy” for the United States to leverage the malleability of artificial intelligence to offset authoritarians' structural advantages in engineering and deploying AI.

Data Brief

AI Hubs in the United States

Justin Olander and Melissa Flagg
| May 2020

With the increasing importance of artificial intelligence and the competition for AI talent, it is essential to understand the U.S. domestic industrial AI landscape. This data brief maps where AI talent is produced, where it concentrates, and where equity funding for AI companies goes. This mapping reveals distinct AI hubs emerging across the country, with different growth rates, investment levels, and potential access to talent.

Machine learning advances are transforming cyber strategy and operations. This necessitates studying national security issues at the intersection of AI and cybersecurity, including offensive and defensive cyber operations, the cybersecurity of AI systems, and the effect of new technologies on global stability. 

While AI innovation would presumably continue in some form without Big Tech, the authors find that breaking up the largest technology companies could fundamentally change the broader AI innovation ecosystem, likely affecting the development of AI applications for national security.