Assessment

CSET’s Lauren A. Kahn and CFR’s Michael C. Horowitz shared their expert analysis in an op-ed published by AI Frontiers. In their piece, they examine the growing calls to regulate artificial intelligence in ways similar to nuclear technology.

Fixing the Pentagon’s Broken Innovation Pipeline

The National Interest
| June 25, 2025

CSET’s Lauren A. Kahn and CFR’s Michael C. Horowitz shared their expert analysis in an op-ed published by The National Interest. In their piece, they explore how the U.S. Department of Defense’s outdated budget process is undermining the military’s ability to adopt and scale emerging technologies quickly enough to deter rising global threats.

AI Safety Evaluations: An Explainer

Jessica Ji, Vikram Venkatram, and Steph Batalis
| May 28, 2025

Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work? Our explainer lays out the fundamental types of AI safety evaluations, along with their respective strengths and limitations.

Dewey Murdick and Miriam Vogel shared their expert analysis in an op-ed published by Fortune. In their piece, they highlight the urgent need for the United States to strengthen its AI literacy and incident reporting systems to maintain global leadership amid rapidly advancing international competition, especially from China’s booming AI sector.

Phase two of military AI has arrived

MIT Technology Review
| April 15, 2025

A CSET report was highlighted in an article published by MIT Technology Review. The article discusses the U.S. military’s growing use of generative AI—such as chatbot-style tools modeled after ChatGPT—for intelligence analysis and decision support during deployments.

AI for Military Decision-Making

Emelia Probasco, Helen Toner, Matthew Burtell, and Tim G. J. Rudner
| April 2025

Artificial intelligence is reshaping military decision-making. This concise overview explores how AI-enabled systems can enhance situational awareness and accelerate critical operational decisions, even in high-pressure, dynamic environments. Yet it also highlights the essential need for clear operational scopes, robust training, and vigilant risk mitigation to counter the inherent challenges of using AI, such as data biases and automation pitfalls. This report offers a balanced framework to help military leaders integrate AI responsibly and effectively.

Dewey Murdick and William Hannas shared their expert analysis in an op-ed published by The National Interest. In their piece, they discuss China’s approach to artificial intelligence and the lessons it offers American policymakers.

How to Assess the Likelihood of Malicious Use of Advanced AI Systems

Josh A. Goldstein and Girish Sastry
| March 2025

As new advanced AI systems roll out, there is widespread disagreement about malicious use risks. Are bad actors likely to misuse these tools for harm? This report presents a simple framework to guide the questions researchers ask—and the tools they use—to evaluate the likelihood of malicious use.

In response to the Office of Science and Technology Policy’s request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and maximizing benefits while mitigating risks. Our response highlights policies to strengthen the AI workforce, secure technology from illicit transfers, and foster an open and competitive AI ecosystem.

Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches

Mina Narayanan, Christian Schoeberl, and Tim G. J. Rudner
| February 2025

Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluate their explainability and interpretability claims in the context of AI-enabled recommendation systems and offers considerations for policymakers seeking to support AI evaluations.