Mina Narayanan, Christian Schoeberl, and Tim G. J. Rudner | February 2025
Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluate their explainability and interpretability claims in the context of AI-enabled recommendation systems and offers considerations for policymakers seeking to support AI evaluations.
In their op-ed in the Bulletin of the Atomic Scientists, Mia Hoffmann, Mina Narayanan, and Owen J. Daniels discuss the upcoming French Artificial Intelligence Action Summit in Paris, which aims to establish a shared and effective governance framework for AI.
This follow-up report builds on the foundational framework presented in the March 2024 CSET issue brief,
“An Argument for Hybrid AI Incident Reporting,”
by identifying key components of AI incidents that should be documented within a mandatory reporting regime. Designed to complement and operationalize our original framework, this report promotes the implementation of such a regime. By providing guidance on these critical elements, the report fosters consistent and comprehensive incident reporting, advancing efforts to document and address AI-related harms.
In their FedScoop op-ed, Jack Corrigan and Owen J. Daniels discuss the challenges of regulating artificial intelligence (AI) during an election season, when lawmakers are more focused on politics than policy.
In their Lawfare op-ed, Helen Toner and Zachary Arnold discuss the growing concerns and divisions within the AI community regarding the risks posed by artificial intelligence.
In his op-ed in TIME, Jack Corrigan discusses the landmark antitrust ruling that officially named Google a monopoly, marking the first significant antitrust defeat for a major internet platform in over two decades.
In their article for the Council on Foreign Relations, Jack Corrigan and Owen J. Daniels provide their expert analysis of the Supreme Court decision overturning the Chevron doctrine and its implications for artificial intelligence (AI) governance.
A CSET report was highlighted in an article by Defense One. The article discusses new findings suggesting that the Pentagon may have discovered how to quickly and cost-effectively acquire technology, particularly in the realm of AI-driven capabilities.
The U.S. Army’s 18th Airborne Corps can now target artillery just as efficiently as the best unit in recent American history—and it can do so with two thousand fewer servicemembers. This report presents a case study of how the 18th Airborne partnered with tech companies to develop, prototype, and operationalize software and artificial intelligence for clear military advantage. The lessons learned inform recommendations to the U.S. Department of Defense as it pushes to further develop and adopt AI and other new technologies.