Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also conduct research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Analysis

Shaping the U.S. Space Launch Market

Michael O’Connor and Kathleen Curlee
| February 2025

The United States leads the world in space launch by nearly every measure: number of launches, total mass to orbit, satellite count, and more. SpaceX’s emergence has provided regular, reliable, and relatively affordable launches to commercial and national security customers. However, today’s market consolidation, coupled with the capital requirements of rocket development, may make it difficult for new competitors to break in and for the space launch market to remain dynamic.

Analysis

AI Incidents: Key Components for a Mandatory Reporting Regime

Ren Bin Lee Dixon and Heather Frase
| January 2025

This follow-up report builds on the foundational framework presented in the March 2024 CSET issue brief, “An Argument for Hybrid AI Incident Reporting,” by identifying key components of AI incidents that should be documented within a mandatory reporting regime. Designed to complement and operationalize our original framework, this report promotes the implementation of such a regime. By providing guidance on these critical elements, the report fosters consistent and comprehensive incident reporting, advancing efforts to document and address AI-related harms.

Read our translation of an interview with Chinese AI expert Song-Chun Zhu, who argues that China’s AI industry should chart a different course than the current U.S. focus on data- and compute-heavy large language models.

Analysis

Chinese Critiques of Large Language Models

William Hannas, Huey-Meei Chang, Maximilian Riesenhuber, and Daniel Chou
| January 2025

Large generative models are widely viewed as the most promising path to general (human-level) artificial intelligence and attract investment in the billions of dollars. The present enthusiasm notwithstanding, a chorus of ranking Chinese scientists regards this singular approach to AGI as ill-advised. This report documents these critiques in China’s research, public statements, and government planning, while pointing to additional, pragmatic reasons for China’s pursuit of a diversified research portfolio.

Read our translation of a Chinese government policy document that encourages the creation of “new-style R&D institutions,” which differ from traditional Chinese laboratories and research institutes in that they are not state-run and have additional sources of income besides government funding.

Read our translation of statements issued by four Chinese industry associations condemning the December 2, 2024, U.S. sanctions against Chinese companies.

Data Brief

Identifying Emerging Technologies in Research

Catherine Aiken, James Dunham, Jennifer Melot, and Zachary Arnold
| December 2024

This paper presents two new methods for identifying research relevant to emerging technology. The authors developed and deployed technology topic classification and targeted research field scoring over a corpus of scientific literature to identify research relevant to cybersecurity, LLM development, and chip fabrication and design — expanding CSET’s existing set of topic classifications for AI, computer vision, NLP, robotics, and AI safety. The paper summarizes motivation, methods, and results.

Analysis

AI and the Future of Workforce Training

Matthias Oschinski, Ali Crawford, and Maggie Wu
| December 2024

The emergence of artificial intelligence as a general-purpose technology could profoundly transform work across industries, potentially affecting a variety of occupations. While previous technological shifts largely enhanced productivity and wages for white-collar workers but led to displacement pressures for blue-collar workers, AI may significantly disrupt both groups. This report examines the changing landscape of workforce development, highlighting the crucial role of community colleges, alternative career pathways, and AI-enabled training solutions in preparing workers for this transition.

Data Visualization

ETO AGORA

December 2024

The Emerging Technology Observatory’s AGORA (AI GOvernance and Regulatory Archive) is a living collection of AI-relevant laws, regulations, standards, and other governance documents from the United States and around the world. Updated regularly, AGORA includes summaries, document text, thematic tags, and filters to help users quickly discover and analyze key developments in AI governance.

Analysis

Staying Current with Emerging Technology Trends: Using Big Data to Inform Planning

Emelia Probasco and Christian Schoeberl
| December 2024

This report proposes an approach to systematically identify promising research using big data and analyze that research’s potential impact through structured engagements with subject-matter experts. The methodology offers a structured way to proactively monitor the research landscape and inform strategic R&D priorities.