Assessment

The Use of Open Models in Research

Kyle Miller, Mia Hoffmann, and Rebecca Gelles
| October 2025

This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights and derives a taxonomy of the use cases that open weights enable. The authors identify seven distinct open-weight use cases that allow researchers to investigate a wider scope of questions, explore more avenues of experimentation, and implement a larger set of techniques.

CSET’s Jessica Ji shared her expert analysis in an article published by Fortune. The article discusses the broader trends in AI companionship and adult-oriented content, the growing demand for emotionally engaging interactions with AI, and the challenges companies face in balancing user freedom with safety and age verification.

To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general-purpose AI comply with the AI Act. This blog reviews the measures set out in the new Code's safety and security chapter, assesses how they compare to existing practices, and considers what the Code's global impact might be.

Why Donald Trump’s AI Strategy Needs More Safeguards

The National Interest
| July 24, 2025

Adrian Thinnyun and Zachary Arnold shared their expert analysis in an op-ed published by The National Interest. In their piece, they argue that the United States should adopt a learning-focused, industry-led self-regulatory framework for AI, drawing lessons from the nuclear sector's post-Three Mile Island Institute of Nuclear Power Operations to prevent a public backlash and ensure safe, widespread deployment of transformative AI technologies.

CSET’s Lauren A. Kahn and CFR’s Michael C. Horowitz shared their expert analysis in an op-ed published by AI Frontiers. In their piece, they examine the growing calls to regulate artificial intelligence in ways similar to nuclear technology.

Fixing the Pentagon’s Broken Innovation Pipeline

The National Interest
| June 25, 2025

CSET’s Lauren A. Kahn and CFR’s Michael C. Horowitz shared their expert analysis in an op-ed published by The National Interest. In their piece, they explore how the U.S. Department of Defense’s outdated budget process is undermining the military’s ability to adopt and scale emerging technologies quickly enough to deter rising global threats.

AI Safety Evaluations: An Explainer

Jessica Ji, Vikram Venkatram, and Steph Batalis
| May 28, 2025

Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work? Our explainer lays out the different fundamental types of AI safety evaluations alongside their respective strengths and limitations.

Dewey Murdick and Miriam Vogel shared their expert analysis in an op-ed published by Fortune. In their piece, they highlight the urgent need for the United States to strengthen its AI literacy and incident reporting systems to maintain global leadership amid rapidly advancing international competition, especially from China’s booming AI sector.

Phase two of military AI has arrived

MIT Technology Review
| April 15, 2025

A CSET report was highlighted in an article published by MIT Technology Review. The article discusses the U.S. military’s growing use of generative AI—such as chatbot-style tools modeled after ChatGPT—for intelligence analysis and decision support during deployments.

AI for Military Decision-Making

Emelia Probasco, Helen Toner, Matthew Burtell, and Tim G. J. Rudner
| April 2025

Artificial intelligence is reshaping military decision-making. This concise overview explores how AI-enabled systems can enhance situational awareness and accelerate critical operational decisions—even in high-pressure, dynamic environments. Yet, it also highlights the essential need for clear operational scopes, robust training, and vigilant risk mitigation to counter the inherent challenges of using AI, such as data biases and automation pitfalls. This report offers a balanced framework to help military leaders integrate AI responsibly and effectively.