Assessment

AI Governance at the Frontier

Mina Narayanan, Jessica Ji, Vikram Venkatram, and Ngor Luong
| November 2025

This report presents an analytic approach to help U.S. policymakers deconstruct artificial intelligence governance proposals by identifying their underlying assumptions: the foundational conditions that must hold for a proposal to succeed. By applying the approach to five U.S.-based AI governance proposals from industry, academia, civil society, and state and federal government, this report demonstrates how identifying assumptions can help policymakers make informed, flexible decisions about AI under uncertainty.

California’s Approach to AI Governance

Devin Von Arx
| November 4, 2025

This blog examines 18 AI-related laws that California enacted in 2024, eight of which are explored in more detail in an accompanying CSET Emerging Technology Observatory (ETO) blog. This blog also chronicles California’s history of regulating AI and other emerging technologies and highlights several AI bills that have moved through the California legislature in 2025.

As artificial intelligence systems are deployed and affect more aspects of daily life, effective risk mitigation becomes imperative to prevent harm. This report analyzes AI incidents to improve our understanding of how risks from AI materialize in practice. By identifying six mechanisms of harm, it sheds light on the distinct pathways through which harm occurs and on the variety of mitigation strategies needed to address them.

The Use of Open Models in Research

Kyle Miller, Mia Hoffmann, and Rebecca Gelles
| October 2025

This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights and derives a taxonomy of use cases that open weights enable. The authors identify seven distinct open-weight use cases that allow researchers to investigate a wider scope of questions, explore more avenues of experimentation, and implement a larger set of techniques.

To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general-purpose AI comply with the AI Act. This blog reviews the measures set out in the new Code’s safety and security chapter, assesses how they compare to existing practices, and considers what the Code’s global impact might be.

Why Donald Trump’s AI Strategy Needs More Safeguards

The National Interest
| July 24, 2025

Adrian Thinnyun and Zachary Arnold shared their expert analysis in an op-ed published by The National Interest. In their piece, they argue that the United States should adopt a learning-focused, industry-led self-regulatory framework for AI, drawing lessons from the Institute of Nuclear Power Operations, which the nuclear sector established after the Three Mile Island accident, to prevent a public backlash and ensure the safe, widespread deployment of transformative AI technologies.

CSET’s Lauren A. Kahn and CFR’s Michael C. Horowitz shared their expert analysis in an op-ed published by AI Frontiers. In their piece, they examine the growing calls to regulate artificial intelligence in ways similar to how nuclear technology is regulated.

Fixing the Pentagon’s Broken Innovation Pipeline

The National Interest
| June 25, 2025

CSET’s Lauren A. Kahn and CFR’s Michael C. Horowitz shared their expert analysis in an op-ed published by The National Interest. In their piece, they explore how the U.S. Department of Defense’s outdated budget process is undermining the military’s ability to adopt and scale emerging technologies quickly enough to deter rising global threats.

AI Safety Evaluations: An Explainer

Jessica Ji, Vikram Venkatram, and Steph Batalis
| May 28, 2025

Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work? Our explainer lays out the fundamental types of AI safety evaluations, along with their respective strengths and limitations.

Dewey Murdick and Miriam Vogel shared their expert analysis in an op-ed published by Fortune. In their piece, they highlight the urgent need for the United States to strengthen its AI literacy and incident reporting systems to maintain global leadership amid rapidly advancing international competition, especially from China’s booming AI sector.