AI/ML systems are failure-prone, unreliable, and opaque. This research line seeks to understand and contribute to the development and adoption of AI standards, testing procedures, best practices, regulation, auditing, and certification, and it identifies areas where U.S. policy could promote the responsible, safe, and reliable deployment of AI/ML capabilities. It encompasses exploration of AI/ML accidents, harms, and vulnerabilities; AI trustworthiness, safety, standards, testing, and evaluation; AI adoption, regulation, and policy; and efforts to understand when systems work well, when they fail, and how such failures can be mitigated.
In The News
Assessing Risks and Setting Effective AI Standards with Mina Narayanan of Center for Security and Emerging Technology
March 13, 2024
In a recent episode of Corner Alliance's "AI, Government, and the Future" podcast, which explores the challenges of assessing AI systems and managing their risks, Mina Narayanan, a Research Analyst at CSET, provides her expert take.

In an article published by WIRED on recent developments in the international regulation of artificial intelligence (AI) for military use, CSET's Lauren Kahn provided her expert analysis.
China Wants to Regulate Its Artificial Intelligence Sector Without Crushing It
August 14, 2023
In a recent Bloomberg article, CSET's Helen Toner provides her expert analysis of Beijing's implementation of new regulations governing artificial intelligence (AI) services.