Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Reports

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Translation

Translation Snapshot: Chinese Overseas Talent Recruitment

Ben Murphy
| September 6, 2023

Translation Snapshots are short posts that highlight related translations produced by CSET’s in-house translation team. Each snapshot identifies relevant translations, provides short summaries, and links to the full translations. Check back regularly for additional Translation Snapshots highlighting our work.

Reports

Onboard AI: Constraints and Limitations

Kyle Miller and Andrew Lohn
| August 2023

Artificial intelligence that makes news headlines, such as ChatGPT, typically runs in well-maintained data centers with an abundant supply of compute and power. However, these resources are more limited on many real-world systems, such as drones, satellites, or ground vehicles. As a result, the AI that can run onboard these devices will often be inferior to state-of-the-art models. That gap can affect their usability and increase the need for additional safeguards in high-risk contexts. This issue brief contextualizes these challenges and provides policymakers with recommendations on how to engage with these technologies.

Data Brief

U.S. and Chinese Military AI Purchases

Margarita Konaev, Ryan Fedasiuk, Jack Corrigan, Ellen Lu, Alex Stephenson, Helen Toner, and Rebecca Gelles
| August 2023

This data brief uses procurement records published by the U.S. Department of Defense and China’s People’s Liberation Army between April and November of 2020 to assess and, where appropriate, compare what each military is buying when it comes to artificial intelligence. We find that the two militaries are prioritizing similar application areas, especially intelligent and autonomous vehicles and AI applications for intelligence, surveillance, and reconnaissance.

Reports

Confidence-Building Measures for Artificial Intelligence

Andrew Lohn
| August 3, 2023

Foundation models could eventually introduce several pathways for undermining state security: accidents, inadvertent escalation, unintentional conflict, weapons proliferation, and interference with human diplomacy are just a few on a long list. The Confidence-Building Measures for Artificial Intelligence workshop, hosted by the Geopolitics Team at OpenAI and the Berkeley Risk and Security Lab at the University of California, Berkeley, brought together a multistakeholder group to think through the tools and strategies to mitigate the potential risks that foundation models pose to international security.

Testimony

Jenny Jun's testimony before the House Foreign Affairs Subcommittee on the Indo-Pacific for a hearing titled "Illicit IT: Bankrolling Kim Jong Un."

Reports

China’s Cognitive AI Research

William Hannas, Huey-Meei Chang, Max Riesenhuber, and Daniel Chou
| July 2023

An expert assessment of Chinese scientific literature validates China's public claim to be working toward artificial general intelligence (AGI). At a time when other nations are contemplating safeguards on AI research, China’s push toward AGI challenges emerging global norms, underscoring the need for a serious open-source monitoring program to serve as a foundation for outreach and mitigation.

Reports

Autonomous Cyber Defense

Andrew Lohn, Anna Knack, Ant Burke, and Krystal Jackson
| June 2023

The current AI-for-cybersecurity paradigm focuses on detection using automated tools, but it has largely neglected holistic autonomous cyber defense systems — ones that can act without human tasking. That is poised to change: tools are proliferating for training reinforcement learning-based AI agents to provide broader autonomous cybersecurity capabilities. The resulting agents are still rudimentary and publications are few, but the current barriers are surmountable, and effective agents would be a substantial boon to society.

Reports

Spotlight on Beijing Institute for General Artificial Intelligence

Huey-Meei Chang and William Hannas
| May 2023

In late 2020, China established the Beijing Institute for General Artificial Intelligence, a state-backed institution dedicated to building software that emulates or surpasses human cognition in many or all of its aspects. Open-source materials now available provide insight into BIGAI’s goals, scope, organization, methodology, and staffing. The project formalizes a trend evident in Chinese AI development toward broadly capable (general) AI.

Data Brief

“The Main Resource is the Human”

Micah Musser, Rebecca Gelles, Ronnie Kinoshita, Catherine Aiken, and Andrew Lohn
| April 2023

Progress in artificial intelligence (AI) depends on talented researchers, well-designed algorithms, quality datasets, and powerful hardware. The relative importance of these factors is often debated, with many recent “notable” models requiring massive expenditures on advanced hardware. But how important is computational power for AI progress in general? This data brief explores the results of a survey of more than 400 AI researchers to evaluate the importance and distribution of computational needs.

Reports

Adversarial Machine Learning and Cybersecurity

Micah Musser
| April 2023

Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet significant research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations ameliorate them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts to help answer these questions.