Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, and biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.


Read our translation of Chinese President Xi Jinping’s speech at a major Chinese S&T conference in June 2024.

Data Visualization

ETO PARAT

June 2024

PARAT (Private-sector AI-Related Activity Tracker) is ETO's hub for data about private-sector companies and their AI activities. PARAT's easy-to-use interface brings together data on companies' AI research publications, patents, and hiring, enabling customizable, data-driven comparison and trend analysis. Use PARAT to explore how hundreds of leading companies around the world are engaged in AI, from Big Tech titans and leading generative AI startups to the entire S&P 500.

This publication examines how emerging AI tools—including LLM-based chatbots and biological design tools—are reshaping the biosecurity landscape for commercial DNA synthesis.

Data Brief

A Quantitative Assessment of Department of Defense S&T Publication Collaborations

Emelia Probasco and Autumn Toney
| June 2024

While the effects of the U.S. Department of Defense’s broad investments in research and development go far beyond what is publicly disclosed, authors affiliated with the DOD do publish papers about their research. This analysis examines more than 100,000 papers by DOD-affiliated authors since 2000 and offers insight into the patterns of research publication and collaboration by the DOD.

Report

China’s Military AI Roadblocks

Sam Bresnick
| June 2024

China’s leadership believes that artificial intelligence will play a central role in future wars. However, the author’s comprehensive review of dozens of Chinese-language journal articles on AI and warfare reveals that Chinese defense experts believe Beijing faces several technological challenges that may hinder its ability to capitalize on the advantages of military AI. This report outlines these perceived barriers and identifies several technologies that Chinese experts believe may help the country develop and deploy military AI-enabled systems.

Data Snapshot

Pushing the Limits: Huawei’s AI Chip Tests U.S. Export Controls

Jacob Feldgoise and Hanna Dohmen
| June 17, 2024

Since 2019, the U.S. government has imposed restrictive export controls on Huawei, one of China’s leading tech giants, seeking in part to hinder the company’s AI chip development efforts. This data snapshot reveals exactly how Huawei’s latest AI chip, the Ascend 910B, improves on the prior generation and demonstrates how export controls are likely hindering Huawei’s production.

Report

Trust Issues: Discrepancies in Trustworthy AI Keywords Use in Policy and Research

Emelia Probasco, Kathleen Curlee, and Autumn Toney
| June 2024

Policy and research communities strive to mitigate AI harm while maximizing its benefits. Achieving effective and trustworthy AI requires a shared language. Our analysis of policies from different countries and of the research literature identifies consensus on six critical concepts: accountability, explainability, fairness, privacy, security, and transparency.

Report

Key Concepts in AI Safety: Reliable Uncertainty Quantification in Machine Learning

Tim G. J. Rudner and Helen Toner
| June 2024

This paper is the fifth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. This paper explores the opportunities and challenges of building AI systems that “know what they don’t know.”

Read our translation of a draft Chinese government framework for a system of standards for AI.

Read our translation of a report by a Chinese state-run think tank that describes how the Chinese government and foreign governments are using large AI models.