Assessment - Line of Research

Assessment

AI/ML systems are failure-prone, unreliable, and opaque. This research line seeks to understand and contribute to the development and adoption of AI standards, testing procedures, best practices, regulation, auditing, and certification. It identifies areas where U.S. policy could promote the responsible, safe, and reliable deployment of AI/ML capabilities. It encompasses exploration of AI/ML accidents, harms, and vulnerabilities; AI trustworthiness, safety, standards, testing, and evaluation; and AI adoption, regulation, and policy. It also seeks to understand when systems work well, when they fail, and how such failures can be mitigated.

Recent Publications

Analysis

Putting Teeth into AI Risk Management

Matthew Schoemaker
| May 2024

President Biden's October 2023 executive order prioritizes the governance of artificial intelligence in the federal government, prompting the urgent creation of AI risk management standards and procurement guidelines. Soon after the order's signing, the Office of Management and Budget issued guidance for federal departments and agencies, including minimum risk standards...


Analysis

An Argument for Hybrid AI Incident Reporting

Ren Bin Lee Dixon and Heather Frase
| March 2024

Artificial intelligence incidents have occurred alongside the rapid advancement of AI capabilities over the past decade. However, there is not yet a concerted policy effort in the United States to monitor, document, and aggregate AI incident data to enhance the understanding of AI-related harm and inform safety policies. This...


Formal Response

Comment on NIST RFI Related to the Executive Order Concerning Artificial Intelligence (88 FR 88368)

Mina Narayanan, Jessica Ji, and Heather Frase
| February 2, 2024

On February 2, 2024, CSET's Assessment and CyberAI teams submitted a response to NIST's Request for Information related to the Executive Order Concerning Artificial Intelligence (88 FR 88368). In the submission, CSET compiles recommendations from six CSET reports and analyses in order to assist NIST in its implementation of AI...


Recent Blog Articles

In a Taiwan conflict, tough choices could come for Big Tech

Sam Bresnick and Emelia Probasco
| April 23, 2024

In their op-ed featured in Breaking Defense, CSET's Sam Bresnick and Emelia Probasco analyze the involvement of US tech giants in conflicts such as the war in Ukraine, and raise important questions about those companies' roles and potential entanglements in future conflicts, particularly one involving Taiwan.


China Bets Big on Military AI

Sam Bresnick
| April 3, 2024

In his op-ed published by the Center for European Policy Analysis (CEPA), CSET's Sam Bresnick analyzes China's evolving military capabilities, its growing emphasis on battlefield information, and the role of AI.


For Government Use of AI, What Gets Measured Gets Managed

Matthew Burtell and Helen Toner
| March 28, 2024

In their op-ed featured in Lawfare, CSET's Matthew Burtell and Helen Toner examine the significant implications of government procurement and deployment of artificial intelligence (AI) systems, emphasizing the need for high ethical and safety standards.


Our People

Heather Frase

Senior Fellow

Christian Schoeberl

Data Research Analyst

Mia Hoffmann

Research Fellow

Mina Narayanan

Research Analyst

Related News

In a recent episode of Corner Alliance's "AI, Government, and the Future" podcast exploring the challenges of assessing AI systems and managing their risks, Mina Narayanan, a Research Analyst at CSET, provides her expert take.
In an article published by WIRED on recent developments in the international regulation of artificial intelligence (AI) for military use, CSET's Lauren Kahn provides her expert analysis.
In a recent Bloomberg article, CSET's Helen Toner provides her expert analysis on Beijing's implementation of fresh regulations governing artificial intelligence (AI) services.