Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Visualization

Chinese Talent Program Tracker

Emily S. Weinstein
| November 2020

China operates a number of party- and state-sponsored talent programs to recruit researchers, Chinese citizens and non-citizens alike, to bolster its strategic civilian and military goals. CSET has created a tracker to catalog publicly available information about these programs. This catalog is a work in progress; if you have further information on programs not yet included, or if you spot an error, please complete the form at http://bit.ly/ChineseTalent.

Reports

Destructive Cyber Operations and Machine Learning

Dakota Cary and Daniel Cebul
| November 2020

Machine learning may provide cyber attackers with the means to execute more effective and more destructive attacks against industrial control systems. As new ML tools are developed, CSET discusses the ways in which attackers may deploy these tools and the most effective avenues for industrial system defenders to respond.

Formal Response

New Student Visa Rule Likely to Harm National Security More Than Help

Jason Matheny and Zachary Arnold
| October 26, 2020

CSET submitted the following comment to the Department of Homeland Security regarding a fixed time period of admission for nonimmigrant students, exchange visitors and representatives of foreign information media.

Reports

Downscaling Attack and Defense

Andrew Lohn
| October 7, 2020

Image resizing, typically a required preprocessing step for computer vision systems, is vulnerable to attack. Images can be crafted so that what the machine sees after downscaling is completely different from what a human sees at full size, and the default settings of some common computer vision and machine learning systems are vulnerable.
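
The mechanism can be illustrated with a short sketch. The code below is a hypothetical, minimal example of such a downscaling attack, assuming a simple nearest-neighbor resizer; the function names, image sizes, and payload are illustrative and are not drawn from the report.

    # Minimal sketch of an image-scaling ("downscaling") attack, assuming
    # nearest-neighbor resizing; sizes and names here are illustrative only.
    import numpy as np

    def nearest_downscale(img: np.ndarray, out_size: int) -> np.ndarray:
        """Downscale a square grayscale image by keeping one source pixel per output pixel."""
        in_size = img.shape[0]
        idx = np.arange(out_size) * in_size // out_size  # the only source pixels the resizer reads
        return img[np.ix_(idx, idx)]

    rng = np.random.default_rng(0)
    benign = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)            # what a human reviewer sees
    payload = np.zeros((32, 32), dtype=np.uint8)
    payload[8:24, 8:24] = 255                                                  # what the model should see instead

    # Craft the attack image: overwrite only the pixels the resizer will sample.
    attack = benign.copy()
    idx = np.arange(32) * 256 // 32
    attack[np.ix_(idx, idx)] = payload

    small = nearest_downscale(attack, 32)
    print(np.array_equal(small, payload))   # True: the model's input is the hidden payload
    print(np.mean(attack != benign))         # roughly 1.5% of pixels changed, so the full-size image still looks benign

Because only the pixels that survive resizing are modified, the full-resolution image is nearly indistinguishable from the benign one while the downscaled input to the model is entirely attacker-controlled, which is why the choice of resizing algorithm and its default settings matters for defenders.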

Reports

An Alliance-Centered Approach to AI

Andrew Imbrie and Ryan Fedasiuk
| September 2020

Collaborating with allies to shape the trajectory of artificial intelligence and protect against digital authoritarianism.

Reports

Optional Practical Training

Zachary Arnold and Remco Zwetsloot
| September 2020

Preserving pathways for high-skilled foreign talent critical to U.S. leadership in artificial intelligence.

CSET Founding Director Jason Matheny testified before the House Budget Committee for its hearing, "Machines, Artificial Intelligence, & the Workforce: Recovering and Readying Our Economy for the Future." Dr. Matheny's full testimony as prepared for delivery can be found below.

One sentence summarizes the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. This AI triad of computing power, algorithms, and data offers a framework for decision-making in national security policy.
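
As a purely illustrative sketch of that triad, the toy example below fits a linear model with gradient descent; the dataset, model, and parameter values are hypothetical and are not drawn from any CSET publication.

    # Illustrative sketch of the "AI triad": data, algorithms, and computing power.
    # All names and values are hypothetical examples.
    import numpy as np

    rng = np.random.default_rng(42)
    X = rng.normal(size=(1000, 3))                 # DATA: observations the system learns from
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + rng.normal(scale=0.1, size=1000)

    w = np.zeros(3)
    learning_rate, steps = 0.1, 500                # COMPUTE: more steps means more computing power spent
    for _ in range(steps):                         # ALGORITHM: gradient descent on squared error
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= learning_rate * grad

    print(np.round(w, 2))                          # recovers roughly [2.0, -1.0, 0.5]

Each piece maps onto one leg of the triad: the synthetic dataset is the data, gradient descent is the algorithm, and the number of update steps stands in for computing power.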

CSET and the Bipartisan Policy Center partnered with Representatives Robin Kelly and Will Hurd to propose guidelines for the national security considerations that must be addressed in a national AI strategy. The findings identify key areas for improvement in defense and intelligence that would put the nation on a path to large-scale development and deployment of AI tools in support of national security.

Reports

Deepfakes: A Grounded Threat Assessment

Tim Hwang
| July 2020

The rise of deepfakes could enhance the effectiveness of disinformation efforts by states, political parties and adversarial actors. How rapidly is this technology advancing, and who might realistically adopt it for malicious ends? This report offers a comprehensive deepfake threat assessment grounded in the latest machine learning research on generative models.