Reports

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — and how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, as well as biotechnology.

Reports

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Brief

Assessing South Korea’s AI Ecosystem

Cole McFaul, Husanjot Chahal, Rebecca Gelles, and Margarita Konaev
| August 2023

This data brief examines South Korea’s progress in its development of artificial intelligence. The authors find that the country excels in semiconductor manufacturing, is a global leader in the production of AI patents, and is an important contributor to AI research. At the same time, the AI investment ecosystem remains nascent, and although the country has a highly developed AI workforce, demand for AI talent may soon outpace supply.

Reports

Confidence-Building Measures for Artificial Intelligence

Andrew Lohn
| August 3, 2023

Foundation models could eventually introduce several pathways for undermining state security: accidents, inadvertent escalation, unintentional conflict, the proliferation of weapons, and interference with human diplomacy are just a few on a long list. The Confidence-Building Measures for Artificial Intelligence workshop, hosted by the Geopolitics Team at OpenAI and the Berkeley Risk and Security Lab at the University of California, Berkeley, brought together a multistakeholder group to think through tools and strategies for mitigating the risks that foundation models pose to international security.

Testimony

Illicit IT: Bankrolling Kim Jong Un

Jenny Jun

Jenny Jun’s testimony before the House Foreign Affairs Subcommittee on the Indo-Pacific for a hearing titled “Illicit IT: Bankrolling Kim Jong Un.”

Reports

Autonomous Cyber Defense

Andrew Lohn, Anna Knack, Ant Burke, and Krystal Jackson
| June 2023

The current AI-for-cybersecurity paradigm focuses on detection using automated tools, but it has largely neglected holistic autonomous cyber defense systems — ones that can act without human tasking. That is poised to change as tools are proliferating for training reinforcement learning-based AI agents to provide broader autonomous cybersecurity capabilities. The resulting agents are still rudimentary and publications are few, but the current barriers are surmountable and effective agents would be a substantial boon to society.

Reports

A Shot of Resilience

Steph Batalis and Anna Puglisi
| May 2023

Vaccines keep the U.S. public healthy while safeguarding economic stability and biosecurity. This report assesses the domestic vaccine manufacturing landscape and identifies two major vulnerabilities: a reliance on foreign manufacturers and a lack of manufacturing redundancy. Maintaining a resilient vaccine supply will require the U.S. government to take steps to protect the existing supply, identify and monitor manufacturing vulnerabilities, and create a stronger domestic production base.

Reports

Financing “The New Oil”

Anthony Ferrara and Sara Abdulla
| May 2023

Israel has by far the largest AI ecosystem in the Middle East, as measured by AI companies and financial investments, and foreign investors play a critical role in the growth of Israel’s AI market. This issue brief finds that AI investments in Israel have mostly originated from the United States. To date, Chinese investors have played a limited role in funding Israel’s dynamic AI companies. But understanding the risk of Chinese investment in the Israeli AI ecosystem will be important for the national security of both the United States and Israel.

Data Brief

“The Main Resource is the Human”

Micah Musser, Rebecca Gelles, Ronnie Kinoshita, Catherine Aiken, and Andrew Lohn
| April 2023

Progress in artificial intelligence (AI) depends on talented researchers, well-designed algorithms, quality datasets, and powerful hardware. The relative importance of these factors is often debated, with many recent “notable” models requiring massive expenditures on advanced hardware. But how important is computational power for AI progress in general? This data brief explores the results of a survey of more than 400 AI researchers to evaluate the importance and distribution of computational needs.

Reports

Viral Families and Disease X: A Framework for U.S. Pandemic Preparedness Policy

Caroline Schuerger, Steph Batalis, Katherine Quinn, Amesh Adalja, and Anna Puglisi
| April 2023

Pandemic threats are increasing as globalization, urbanization, and encroachment on animal habitats cause infectious outbreaks to become more frequent and severe. It is imperative that the United States build a pipeline of medical countermeasure development, beginning with basic scientific research and culminating in approved therapies. This report assesses preparedness for families of viral pathogens of pandemic potential and offers recommendations for steps the U.S. government can take to prepare for future pandemics.

Reports

Adversarial Machine Learning and Cybersecurity

Micah Musser
| April 2023

Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet significant research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations ameliorate them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts to help answer these questions.

Reports

Reducing the Risks of Artificial Intelligence for Military Decision Advantage

Wyatt Hoffman and Heeu Millie Kim
| March 2023

Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty: the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood of AI failures and containing their consequences.