AI/ML systems can be failure-prone, unreliable, and opaque. This research line seeks to understand and contribute to the development and adoption of AI standards, testing procedures, best practices, regulation, auditing, and certification, and identifies areas where U.S. policy could promote the responsible, safe, and reliable deployment of AI/ML capabilities. It encompasses exploration of AI/ML accidents, harms, and vulnerabilities; AI trustworthiness, safety, standards, testing, and evaluation; AI adoption, regulation, and policy; and efforts to understand when systems work well, when they fail, and how such failures can be mitigated.

In The News
China Wants to Regulate Its Artificial Intelligence Sector Without Crushing It
August 14, 2023
In a recent Bloomberg article, CSET's Helen Toner provides expert analysis of Beijing's implementation of new regulations governing artificial intelligence (AI) services.
The Messenger published an article featuring insights from CSET's Mina Narayanan. The article delves into the growing concerns surrounding the regulation of artificial intelligence and the challenges Congress faces in developing rules for its use.
CSET's Heather Frase was quoted by the Associated Press in an article discussing the Biden administration's efforts to ensure the responsible development of AI.