Mina Narayanan, Jessica Ji, Vikram Venkatram, and Ngor Luong
November 2025
This report presents an analytic approach to help U.S. policymakers deconstruct artificial intelligence governance proposals by identifying their underlying assumptions, which are the foundational elements that facilitate the success of a proposal. By applying the approach to five U.S.-based AI governance proposals from industry, academia, and civil society, as well as state and federal government, this report demonstrates how identifying assumptions can help policymakers make informed, flexible decisions about AI under uncertainty.
This blog examines 18 AI-related laws that California enacted in 2024, 8 of which are explored in more detail in an accompanying CSET Emerging Technology Observatory (ETO) blog. This blog also chronicles California’s history of regulating AI and other emerging technologies and highlights several AI bills that have moved through the California legislature in 2025.
As artificial intelligence systems are deployed and affect more aspects of daily life, effective risk mitigation becomes imperative to prevent harm. This report analyzes AI incidents to improve our understanding of how risks from AI materialize in practice. By identifying six mechanisms of harm, it sheds light on the different pathways to harm, and on the variety of mitigation strategies needed to address them.
This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights and derives a taxonomy of use cases that open weights enable. The authors identified a diverse range of seven open-weight use cases that allow researchers to investigate a wider scope of questions, explore more avenues of experimentation, and implement a larger set of techniques.
To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general-purpose AI comply with the AI Act. This blog reviews the measures set out in the new Code’s safety and security chapter, assesses how they compare to existing practices, and considers what the Code’s global impact might be.
Adrian Thinnyun and Zachary Arnold shared their expert analysis in an op-ed published by The National Interest. In their piece, they argue that the United States must adopt a learning-focused, industry-led self-regulatory framework for AI, drawing lessons from the nuclear sector’s post-Three Mile Island Institute of Nuclear Power Operations to prevent a public backlash and ensure safe, widespread deployment of transformative AI technologies.
CSET’s Lauren A. Kahn and CFR’s Michael C. Horowitz shared their expert analysis in an op-ed published by AI Frontiers. In their piece, they examine the growing calls to regulate artificial intelligence in ways similar to nuclear technology.
CSET’s Lauren A. Kahn and CFR’s Michael C. Horowitz shared their expert analysis in an op-ed published by The National Interest. In their piece, they explore how the U.S. Department of Defense’s outdated budget process is undermining the military’s ability to adopt and scale emerging technologies quickly enough to deter rising global threats.
Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work? Our explainer lays out the different fundamental types of AI safety evaluations alongside their respective strengths and limitations.
Dewey Murdick and Miriam Vogel shared their expert analysis in an op-ed published by Fortune. In their piece, they highlight the urgent need for the United States to strengthen its AI literacy and incident reporting systems to maintain global leadership amid rapidly advancing international competition, especially from China’s booming AI sector.