Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.


Jenny Jun's testimony before the House Foreign Affairs Subcommittee on the Indo-Pacific for a hearing titled "Illicit IT: Bankrolling Kim Jong Un."

Data Brief

Who Cares About Trust?

Autumn Toney and Emelia Probasco
| July 2023

Artificial intelligence-enabled systems are transforming society and driving an intense focus on what policy and technical communities can do to ensure that those systems are trustworthy and used responsibly. This analysis draws on prior work about the use of trustworthy AI terms to identify 18 clusters of research papers that contribute to the development of trustworthy AI. In identifying these clusters, the analysis also reveals that some concepts, like "explainability," are forming distinct research areas, whereas other concepts, like "reliability," appear to be accepted as metrics and broadly applied.

Reports

Defending the Ultimate High Ground

Corey Crowell and Sam Bresnick
| July 2023

China has poured resources into improving the resilience of its space architecture. But how much progress has Beijing made? This issue brief analyzes China’s space resilience efforts and identifies areas where the United States may need to invest to keep pace.

Data Brief

The Inigo Montoya Problem for Trustworthy AI

Emelia Probasco, Autumn Toney, and Kathleen Curlee
| June 2023

When the technology and policy communities use terms associated with trustworthy AI, could they be talking past one another? This paper examines the use of trustworthy AI keywords and the potential for an “Inigo Montoya problem” in trustworthy AI, inspired by "The Princess Bride" movie quote: “You keep using that word. I do not think it means what you think it means.”

Reports

Autonomous Cyber Defense

Andrew Lohn, Anna Knack, Ant Burke, and Krystal Jackson
| June 2023

The current AI-for-cybersecurity paradigm focuses on detection using automated tools, but it has largely neglected holistic autonomous cyber defense systems — ones that can act without human tasking. That is poised to change as tools are proliferating for training reinforcement learning-based AI agents to provide broader autonomous cybersecurity capabilities. The resulting agents are still rudimentary and publications are few, but the current barriers are surmountable and effective agents would be a substantial boon to society.

Reports

Volunteer Force

Christine H. Fox and Emelia Probasco
| May 2023

U.S. tech companies have played a critical role in the international effort to support and defend Ukraine against Russia. To better understand and envision how these companies can help U.S. strategic interests, CSET convened a group of industry experts and former government leaders to discuss lessons learned from the ongoing war in Ukraine and what those lessons might mean for the future. The workshop’s discussion and this accompanying report expand on the themes explored in the October 2022 "Foreign Affairs" article, "Big Tech Goes to War."

Data Brief

“The Main Resource is the Human”

Micah Musser, Rebecca Gelles, Ronnie Kinoshita, Catherine Aiken, and Andrew Lohn
| April 2023

Progress in artificial intelligence (AI) depends on talented researchers, well-designed algorithms, quality datasets, and powerful hardware. The relative importance of these factors is often debated, with many recent “notable” models requiring massive expenditures on advanced hardware. But how important is computational power for AI progress in general? This data brief explores the results of a survey of more than 400 AI researchers to evaluate the importance and distribution of computational needs.

Reports

Adversarial Machine Learning and Cybersecurity

Micah Musser
| April 2023

Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet significant research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations ameliorate them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts to help answer these questions.

Reports

Reducing the Risks of Artificial Intelligence for Military Decision Advantage

Wyatt Hoffman and Heeu Millie Kim
| March 2023

Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty: the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood, and containing the consequences, of AI failures.

Reports

Examining Singapore’s AI Progress

Kayla Goode, Heeu Millie Kim, and Melissa Deng
| March 2023

Despite its small size, the city-state of Singapore continues to rise as an artificial intelligence hub, presenting significant opportunities for international collaboration. Initiatives such as fast-tracking patent approvals, incentivizing private investment, and addressing talent shortfalls are driving the country’s rapid growth in AI. Such initiatives offer potential models for others seeking to leverage the technology, as well as opportunities for collaboration in AI education and talent exchanges, research and development, and governance. The United States and Singapore share similar goals regarding the development and use of trusted and responsible AI and should continue to foster greater collaboration among public and private sector entities.