Reports

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — and how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, as well as biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

China’s Military AI Roadblocks

Sam Bresnick
| June 2024

China’s leadership believes that artificial intelligence will play a central role in future wars. However, the author's comprehensive review of dozens of Chinese-language journal articles on AI and warfare reveals that Chinese defense experts believe Beijing faces several technological challenges that may hinder its ability to capitalize on the advantages of military AI. This report outlines these perceived barriers and identifies several technologies that Chinese experts believe may help the country develop and deploy AI-enabled military systems.

Data Snapshot

Pushing the Limits: Huawei’s AI Chip Tests U.S. Export Controls

Jacob Feldgoise and Hanna Dohmen
| June 17, 2024

Since 2019, the U.S. government has imposed restrictive export controls on Huawei—one of China’s leading tech giants—seeking, in part, to hinder the company’s AI chip development efforts. This data snapshot reveals exactly how Huawei’s latest AI chip—the Ascend 910B—improves on the prior generation and shows how export controls are likely hindering Huawei’s production.

Reports

Trust Issues: Discrepancies in Trustworthy AI Keywords Use in Policy and Research

Emelia Probasco, Kathleen Curlee, and Autumn Toney
| June 2024

Policy and research communities strive to mitigate AI harms while maximizing its benefits. Achieving effective and trustworthy AI requires a shared language. This report’s analysis of policies from different countries and of the research literature identifies consensus around six critical concepts: accountability, explainability, fairness, privacy, security, and transparency.

Reports

Key Concepts in AI Safety: Reliable Uncertainty Quantification in Machine Learning

Tim G. J. Rudner and Helen Toner
| June 2024

This paper is the fifth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. This paper explores the opportunities and challenges of building AI systems that “know what they don’t know.”

Read our translation of a draft Chinese government framework for a system of standards for AI.

Read our translation of a report by a Chinese state-run think tank that describes how the Chinese government and foreign governments are using large AI models.

Data Snapshot

Identifying Cyber Education Hotspots: An Interactive Guide

Maggie Wu and Brian Love
| June 5, 2024

In February 2024, CSET introduced its cybersecurity jobs dataset, a novel resource comprising roughly 1.4 million LinkedIn profiles of current U.S. cybersecurity workers. This data snapshot uses the dataset to identify the institutions that produce the most cybersecurity talent.

CSET Non-Resident Senior Fellow Kevin Wolf testified before the U.S.-China Economic and Security Review Commission on economic competition with China.

Reports

Putting Teeth into AI Risk Management

Matthew Schoemaker
| May 2024

President Biden's October 2023 executive order prioritizes the governance of artificial intelligence in the federal government, prompting the urgent creation of AI risk management standards and procurement guidelines. Soon after the order's signing, the Office of Management and Budget issued guidance for federal departments and agencies, including minimum risk standards for AI in federal contracts. As with cybersecurity, procurement rules will be used to enforce AI development best practices among federal suppliers. This report offers recommendations for implementing AI risk management procurement rules.

Reports

China and Medical AI

Caroline Schuerger, Vikram Venkatram, and Katherine Quinn
| May 2024

Medical artificial intelligence, which depends on large repositories of biological data, can improve public health and contribute to the growing global bioeconomy. Countries that strategically prioritize medical AI could gain a competitive advantage and set global norms. This report examines China’s stated goals for medical AI, finding that the country’s strategy for biodata collection and medical AI development positions it to be an economic and technological leader in this sector.