Emelia Probasco, Helen Toner, Matthew Burtell, and Tim G. J. Rudner | April 2025
Artificial intelligence is reshaping military decision-making. This overview explores how AI-enabled systems can enhance situational awareness and accelerate critical operational decisions, even in high-pressure, dynamic environments. It also highlights the need for clear operational scopes, robust training, and vigilant risk mitigation to counter the inherent challenges of using AI, such as data bias and automation pitfalls. The report offers a balanced framework to help military leaders integrate AI responsibly and effectively.
In their op-ed in The National Interest, Dewey Murdick and William Hannas discuss China's approach to artificial intelligence and the lessons it offers American policymakers.
As new advanced AI systems roll out, there is widespread disagreement about malicious use risks. Are bad actors likely to misuse these tools for harm? This report presents a simple framework to guide the questions researchers ask—and the tools they use—to evaluate the likelihood of malicious use.
In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and maximizing benefits while mitigating risks. Our response highlights policies to strengthen the AI workforce, secure technology from illicit transfers, and foster an open and competitive AI ecosystem.
Mina Narayanan, Christian Schoeberl, and Tim G. J. Rudner | February 2025
Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluate their explainability and interpretability claims in the context of AI-enabled recommendation systems and offers considerations for policymakers seeking to support AI evaluations.
In their op-ed in the Bulletin of the Atomic Scientists, Mia Hoffmann, Mina Narayanan, and Owen J. Daniels discuss the upcoming French Artificial Intelligence Action Summit in Paris, which aims to establish a shared and effective governance framework for AI.
This follow-up report builds on the foundational framework presented in the March 2024 CSET issue brief, “An Argument for Hybrid AI Incident Reporting,” by identifying key components of AI incidents that should be documented within a mandatory reporting regime. Designed to complement and operationalize our original framework, this report promotes the implementation of such a regime. By providing guidance on these critical elements, the report fosters consistent and comprehensive incident reporting, advancing efforts to document and address AI-related harms.
In their FedScoop op-ed, Jack Corrigan and Owen J. Daniels discuss the challenges of regulating artificial intelligence (AI) during an election season, when lawmakers are more focused on politics than policy.
In their Lawfare op-ed, Helen Toner and Zachary Arnold discuss the growing concerns and divisions within the AI community regarding the risks posed by artificial intelligence.