Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also conduct research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Reports

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Brief

Assessing South Korea’s AI Ecosystem

Cole McFaul, Husanjot Chahal, Rebecca Gelles, and Margarita Konaev
| August 2023

This data brief examines South Korea’s progress in its development of artificial intelligence. The authors find that the country excels in semiconductor manufacturing, is a global leader in the production of AI patents, and is an important contributor to AI research. At the same time, South Korea’s AI investment ecosystem remains nascent, and despite the country’s highly developed AI workforce, demand for AI talent may soon outpace supply.

Data Brief

U.S. and Chinese Military AI Purchases

Margarita Konaev, Ryan Fedasiuk, Jack Corrigan, Ellen Lu, Alex Stephenson, Helen Toner, and Rebecca Gelles
| August 2023

This data brief uses procurement records published by the U.S. Department of Defense and China’s People’s Liberation Army between April and November of 2020 to assess what each military is buying when it comes to artificial intelligence and, where appropriate, to compare the two. We find that the two militaries are prioritizing similar application areas, especially intelligent and autonomous vehicles and AI applications for intelligence, surveillance, and reconnaissance.

Data Brief

Who Cares About Trust?

Autumn Toney and Emelia Probasco
| July 2023

Artificial intelligence-enabled systems are transforming society and driving an intense focus on what policy and technical communities can do to ensure that those systems are trustworthy and used responsibly. This analysis draws on prior work about the use of trustworthy AI terms to identify 18 clusters of research papers that contribute to the development of trustworthy AI. In identifying these clusters, the analysis also reveals that some concepts, like "explainability," are forming distinct research areas, whereas other concepts, like "reliability," appear to be accepted as metrics and broadly applied.

Reports

Defending the Ultimate High Ground

Corey Crowell and Sam Bresnick
| July 2023

China has poured resources into improving the resilience of its space architecture. But how much progress has Beijing made? This issue brief analyzes China’s space resilience efforts and identifies areas where the United States may need to invest to keep pace.

Data Brief

The Inigo Montoya Problem for Trustworthy AI

Emelia Probasco, Autumn Toney, and Kathleen Curlee
| June 2023

When the technology and policy communities use terms associated with trustworthy AI, could they be talking past one another? This paper examines the use of trustworthy AI keywords and the potential for an “Inigo Montoya problem” in trustworthy AI, inspired by "The Princess Bride" movie quote: “You keep using that word. I do not think it means what you think it means.”

Reports

Financing “The New Oil”

Anthony Ferrara and Sara Abdulla
| May 2023

Israel has by far the largest AI ecosystem in the Middle East as measured by AI companies and financial investments, and foreign investors play a critical role in the growth of Israel’s AI market. This issue brief finds that AI investments in Israel have mostly originated from the United States. To date, Chinese investors have played a limited role in funding Israel’s dynamic AI companies. But understanding the risk of Chinese investment in the Israeli AI ecosystem will be important for the national security of both the United States and Israel.

Reports

Volunteer Force

Christine H. Fox and Emelia Probasco
| May 2023

U.S. tech companies have played a critical role in the international effort to support and defend Ukraine against Russia. To better understand and envision how these companies can help U.S. strategic interests, CSET convened a group of industry experts and former government leaders to discuss lessons learned from the ongoing war in Ukraine and what those lessons might mean for the future. The workshop’s discussion and this accompanying report expand on the themes explored in the October 2022 "Foreign Affairs" article, "Big Tech Goes to War."

Reports

Chinese AI Investment and Commercial Activity in Southeast Asia

Ngor Luong, Channing Lee, and Margarita Konaev
| February 2023

China’s government has pushed the country’s technology and financial firms to expand abroad, and Southeast Asia’s growing economies — and AI companies — offer promising opportunities. This report examines the scope and nature of Chinese investment in the region. It finds that China currently plays a limited role in Southeast Asia’s emerging AI markets outside of Singapore and that Chinese investment activity still trails behind that of the United States. Nevertheless, Chinese tech companies, with support from the Chinese government, have established a broad range of other AI-related linkages with public and commercial actors across Southeast Asia.

Reports

A Common Language for Responsible AI

Emelia Probasco
| October 2022

Policymakers, engineers, program managers and operators need a common set of terms as the bedrock for instantiating responsible AI across the Department of Defense. Rather than create a DOD-specific set of terms, this paper argues that the DOD could benefit from adopting the key characteristics defined by the National Institute of Standards and Technology in its draft AI Risk Management Framework, with only two exceptions.

Reports

Quad AI

Husanjot Chahal, Ngor Luong, Sara Abdulla, and Margarita Konaev
| May 2022

Through the Quad forum, the United States, Australia, Japan and India have committed to pursuing an open, accessible and secure technology ecosystem and offering a democratic alternative to China’s techno-authoritarian model. This report assesses artificial intelligence collaboration across the Quad and finds that while Australia, Japan and India each have close AI-related research and investment ties to both the United States and China, they collaborate far less with one another.