Assessment - Line of Research

Assessment

AI/ML systems can be failure-prone, unreliable, and opaque. This research line seeks to understand and contribute to the development and adoption of AI standards, testing procedures, best practices, regulation, auditing, and certification, and it identifies areas where U.S. policy could promote the responsible, safe, and reliable deployment of AI/ML capabilities. It encompasses AI/ML accidents, harms, and vulnerabilities; AI trustworthiness, safety, standards, testing, and evaluation; and AI adoption, regulation, and policy, with the aim of understanding when systems work well, when they fail, and how those failures can be mitigated.

Recent Publications

Analysis

Trust Issues: Discrepancies in Trustworthy AI Keywords Use in Policy and Research

Emelia Probasco, Kathleen Curlee, and Autumn Toney

Policy and research communities strive to mitigate AI harms while maximizing its benefits. Doing so effectively requires a shared language for trustworthy AI. This analysis of policy documents from multiple countries and of the research literature finds consensus around six core concepts: accountability, explainability, fairness, privacy, security, and transparency.


This paper is the fifth installment in a series on “AI safety,” an area of machine learning research that aims to identify the causes of unintended behavior in machine learning systems and to develop tools that ensure these systems work safely and reliably. It explores the opportunities and challenges of building...


Analysis

Putting Teeth into AI Risk Management

Matthew Schoemaker | May 2024

President Biden's October 2023 executive order prioritizes the governance of artificial intelligence in the federal government, prompting the urgent creation of AI risk management standards and procurement guidelines. Soon after the order's signing, the Office of Management and Budget issued guidance for federal departments and agencies, including minimum risk standards...


Recent Blog Articles

In a Taiwan conflict, tough choices could come for Big Tech

Sam Bresnick and Emelia Probasco | April 23, 2024

In an op-ed featured in Breaking Defense, CSET's Sam Bresnick and Emelia Probasco analyze the involvement of U.S. tech giants in conflicts such as the war in Ukraine and raise questions about their roles and potential entanglements in future conflicts, particularly one involving Taiwan.


China Bets Big on Military AI

Sam Bresnick | April 3, 2024

In an op-ed published by the Center for European Policy Analysis (CEPA), CSET's Sam Bresnick analyzes China's evolving military capabilities, its growing emphasis on battlefield information, and the role of AI.


For Government Use of AI, What Gets Measured Gets Managed

Matthew Burtell and Helen Toner | March 28, 2024

In an op-ed featured in Lawfare, CSET's Matthew Burtell and Helen Toner examine the implications of government procurement and deployment of artificial intelligence (AI) systems, emphasizing the need for high ethical and safety standards.


Our People

Heather Frase

Senior Fellow

Christian Schoeberl

Data Research Analyst

Mia Hoffmann

Research Fellow

Mina Narayanan

Research Analyst

Related News

In a recent episode of Corner Alliance's "AI, Government, and the Future" podcast, which explores the challenges of assessing AI systems and managing their risks, CSET Research Analyst Mina Narayanan offers her expert take.
In a WIRED article on recent developments in the international regulation of artificial intelligence (AI) for military use, CSET's Lauren Kahn provides her expert analysis.
In a recent Bloomberg article, CSET's Helen Toner analyzes Beijing's implementation of new regulations governing artificial intelligence (AI) services.