Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, and we study biotechnology.

Analysis

Chinese Critiques of Large Language Models

William Hannas, Huey-Meei Chang, Maximilian Riesenhuber, and Daniel Chou
| January 2025

Large generative models are widely viewed as the most promising path to general (human-level) artificial intelligence and attract billions of dollars in investment. Despite the present enthusiasm, a chorus of senior Chinese scientists regards this singular approach to AGI as ill-advised. This report documents these critiques in China’s research, public statements, and government planning, while pointing to additional, pragmatic reasons for China’s pursuit of a diversified research portfolio.

Analysis

Government AI Hire, Use, Buy (HUB) Roundtable Series – Roundtable 4: Capstone

Danny Hague, Natalie Roisman, Matthias Oschinski, and Carolina Pachon
| March 2025

Georgetown University’s Center for Security and Emerging Technology and Beeck Center for Social Impact and Innovation, together with the Georgetown Law Institute for Technology Law and Policy (Tech Institute), led a series of invite-only roundtables over the course of 2024 to grapple with the legal liability questions that artificial intelligence poses, examine AI’s potential to transform government services, and consider how the government can better attract and use AI talent. This report, authored in 2024 following those discussions, is the fourth and final installment of a four-part series.

Analysis

Government AI Hire, Use, Buy (HUB) Roundtable Series – Roundtable 3: Government as a Buyer of AI

Carolina Oxenstierna, Aaron Snow, and Danny Hague
| March 2025

Georgetown University’s Center for Security and Emerging Technology and Beeck Center for Social Impact and Innovation, together with the Georgetown Law Institute for Technology Law and Policy (Tech Institute), led a series of invite-only roundtables over the course of 2024 to grapple with the legal liability questions that artificial intelligence poses, examine AI’s potential to transform government services, and consider how the government can better attract and use AI talent. This report, authored in 2024 following those discussions, is the third installment of a four-part series.

Analysis

Government AI Hire, Use, Buy (HUB) Roundtable Series – Roundtable 2: Government as an Employer of AI Talent

Danny Hague, Carolina Oxenstierna, and Matthias Oschinski
| March 2025

Georgetown University’s Center for Security and Emerging Technology and Beeck Center for Social Impact and Innovation, together with the Georgetown Law Institute for Technology Law and Policy (Tech Institute), led a series of invite-only roundtables over the course of 2024 to grapple with the legal liability questions that artificial intelligence poses, examine AI’s potential to transform government services, and consider how the government can better attract and use AI talent. This report, authored in 2024 following those discussions, is the second installment of a four-part series.

Analysis

Government AI Hire, Use, Buy (HUB) Roundtable Series – Roundtable 1: Government as a User of AI

Carolina Oxenstierna, Alice Cao, and Danny Hague
| March 2025

Georgetown University’s Center for Security and Emerging Technology and Beeck Center for Social Impact and Innovation, together with the Georgetown Law Institute for Technology Law and Policy (Tech Institute), led a series of invite-only roundtables over the course of 2024 to grapple with the legal liability questions that artificial intelligence poses, examine AI’s potential to transform government services, and consider how the government can better attract and use AI talent. This report, authored in 2024 following those discussions, is the first installment of a four-part series.

Analysis

How to Assess the Likelihood of Malicious Use of Advanced AI Systems

Josh A. Goldstein and Girish Sastry
| March 2025

As new advanced AI systems roll out, there is widespread disagreement about malicious use risks. Are bad actors likely to misuse these tools for harm? This report presents a simple framework to guide the questions researchers ask—and the tools they use—to evaluate the likelihood of malicious use.

Formal Response

CSET’s Recommendations for an AI Action Plan

March 14, 2025

In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and maximizing benefits while mitigating risks. Our response highlights policies to strengthen the AI workforce, secure technology from illicit transfers, and foster an open and competitive AI ecosystem.

Analysis

The State of AI-Related Apprenticeships

Luke Koslosky and Jacob Feldgoise
| February 2025

As artificial intelligence permeates the economy, the demand for AI talent at all levels of educational attainment will expand in kind. Apprenticeships are an effective education and training pathway in other industries, but are they suitable for AI-related roles? This report analyzes trends in AI-related apprenticeships across the United States from 2013 through 2023. It explores the growth of these programs, completion rates, demographic and geographic information, and the organizations sponsoring these programs.

Analysis

Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches

Mina Narayanan, Christian Schoeberl, and Tim G. J. Rudner
| February 2025

Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluate their explainability and interpretability claims in the context of AI-enabled recommendation systems and offers considerations for policymakers seeking to support AI evaluations.

Analysis

Shaping the U.S. Space Launch Market

Michael O’Connor and Kathleen Curlee
| February 2025

The United States leads the world in space launch by nearly every measure: number of launches, total mass to orbit, satellite count, and more. SpaceX’s emergence has provided regular, reliable, and relatively affordable launches to commercial and national security customers. However, today’s market consolidation, coupled with the capital requirements of rocket development, may make it difficult for new competitors to break in and keep the space launch market dynamic.

Analysis

AI Incidents: Key Components for a Mandatory Reporting Regime

Ren Bin Lee Dixon and Heather Frase
| January 2025

This follow-up report builds on the foundational framework presented in the March 2024 CSET issue brief, “An Argument for Hybrid AI Incident Reporting,” by identifying key components of AI incidents that should be documented within a mandatory reporting regime. Designed to complement and operationalize our original framework, this report promotes the implementation of such a regime. By providing guidance on these critical elements, the report fosters consistent and comprehensive incident reporting, advancing efforts to document and address AI-related harms.