Reports

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, and we study biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.


This is a Chinese translation of Secretary of Commerce Gina Raimondo's February 23, 2023 speech at Georgetown University’s School of Foreign Service, titled “The CHIPS Act and a Long-term Vision for America’s Technological Leadership.”

Translation

Kína külföldi technológiára irányuló kívánság listája

Ryan Fedasiuk, Emily S. Weinstein, and Anna Puglisi
| April 6, 2023

This is a Hungarian translation of the May 2021 CSET Issue Brief “China’s Foreign Technology Wish List.”

Translation

«Список пожеланий» Китая в области иностранных технологий

Ryan Fedasiuk, Emily S. Weinstein, and Anna Puglisi
| April 6, 2023

This is a Russian translation of the May 2021 CSET Issue Brief “China’s Foreign Technology Wish List.”

Read our original translation of an article that describes China’s “National Security Academic Fund,” which supports the China Academy of Engineering Physics (CAEP), China’s nuclear weapons research, development, and testing laboratory.

Report

Adversarial Machine Learning and Cybersecurity

Micah Musser
| April 2023

Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet significant research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations ameliorate them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts to help answer these questions.

Data Snapshot

The Dynamic Face of AI Pre-Baccalaureate Credentials

Sara Abdulla
| March 29, 2023

Data Snapshots are informative descriptions and quick analyses that dig into CSET’s unique data resources. This five-part series uses data from the U.S. Department of Education and other select sources to complement existing CSET work on the U.S. AI workforce.

Read our original translation of China’s short- to mid-term strategy for expanding domestic demand in its economy.

See our original translation of South Korea’s industrial technology protection law, as amended in January 2023. The law aims to prevent technologies vital to South Korean national security or economic competitiveness from being divulged to or shared with foreign countries or corporations without the government’s knowledge.


Report

Reducing the Risks of Artificial Intelligence for Military Decision Advantage

Wyatt Hoffman and Heeu Millie Kim
| March 2023

Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty: the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood of AI failures and containing their consequences.