Assessment - Line of Research

Assessment

AI/ML systems are failure-prone, unreliable, and opaque. This research line seeks to understand and contribute to the development and adoption of AI standards, testing procedures, best practices, regulation, auditing, and certification. It identifies areas where U.S. policy could promote the responsible, safe, and reliable deployment of AI/ML capabilities. It encompasses exploration of AI/ML accidents, harms, and vulnerabilities; AI trustworthiness, safety, standards, testing, and evaluation; AI adoption, regulation, and policy; and attempts to understand when systems work well, when they fail, and how such failures could be mitigated.

Recent Publications

Reports

AI Governance at the Frontier

Mina Narayanan, Jessica Ji, Vikram Venkatram, and Ngor Luong
| November 2025

This report presents an analytic approach to help U.S. policymakers deconstruct artificial intelligence governance proposals by identifying their underlying assumptions, which are the foundational elements that facilitate the success of a proposal. By applying the approach to five U.S.-based AI governance proposals from industry, academia, and civil society, as well...


As artificial intelligence systems are deployed and affect more aspects of daily life, effective risk mitigation becomes imperative to prevent harm. This report analyzes AI incidents to improve our understanding of how risks from AI materialize in practice. By identifying six mechanisms of harm, it sheds light on the different...



The Use of Open Models in Research

Kyle Miller, Mia Hoffmann, and Rebecca Gelles
| October 2025

This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights and derives a taxonomy of use cases that open weights enable. The authors identified a diverse range of seven open-weight use cases that allow researchers to investigate a wider scope...


Recent Blog Articles

California’s Approach to AI Governance

Devin Von Arx
| November 4, 2025

This blog examines 18 AI-related laws that California enacted in 2024, eight of which are explored in more detail in an accompanying CSET Emerging Technology Observatory (ETO) blog. It also chronicles California’s history of regulating AI and other emerging technologies and highlights several AI bills that have moved through...


To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general purpose AI comply with the AI Act. This blog reviews the measures set out in the new Code’s...


AI Safety Evaluations: An Explainer

Jessica Ji, Vikram Venkatram, and Steph Batalis
| May 28, 2025

Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work? Our explainer lays out the different fundamental types of AI safety evaluations alongside their respective strengths and limitations.


Our People

Mia Hoffmann

Non-Resident Research Fellow

Mina Narayanan

Research Analyst

Related News

CSET Research Analyst Mina Narayanan shared her expert insights in an article published by Defense One. The piece examines President Trump’s newly released AI Action Plan, which outlines a sweeping effort to secure American dominance in artificial intelligence by accelerating military adoption, fast-tracking infrastructure, and expanding U.S. influence in global AI governance.
In their op-ed in the Bulletin of the Atomic Scientists, Mia Hoffmann, Mina Narayanan, and Owen J. Daniels discuss the upcoming French Artificial Intelligence Action Summit in Paris, which aims to establish a shared and effective governance framework for AI.
In The News

5 AI trends to watch in 2025

January 7, 2025
Mina Narayanan provided her expert insights in an article published by GZERO Media. The article discusses the evolving landscape of AI policy under the incoming administration and the key trends shaping AI regulation, national security, and U.S.-China competition in 2025.
In a recent episode of the Corner Alliance’s “AI, Government, and the Future” podcast, which explores the challenges of assessing AI systems and managing their risks, Mina Narayanan, a Research Analyst at CSET, provides her expert take.
In an article published by WIRED that delves into recent developments in the international regulation of artificial intelligence (AI) for military use, CSET's Lauren Kahn provided her expert analysis.
In a recent Bloomberg article, CSET's Helen Toner provides her expert analysis on Beijing's implementation of fresh regulations governing artificial intelligence (AI) services.