Assessment - Line of Research


AI/ML systems are failure-prone, unreliable, and opaque. This research line seeks to understand and contribute to the development and adoption of AI standards, testing procedures, best practices, regulation, auditing, and certification. It identifies areas where U.S. policy could promote the responsible, safe, and reliable deployment of AI/ML capabilities. It encompasses exploration of AI/ML accidents, harms, and vulnerabilities; AI trustworthiness, safety, standards, testing, and evaluation; AI adoption, regulation, and policy; and attempts to understand when systems work well, when they fail, and how such failures could be mitigated.

Recent Publications

Formal Response

Comment on NIST RFI Related to the Executive Order Concerning Artificial Intelligence (88 FR 88368)

Mina Narayanan, Jessica Ji, Heather Frase
| February 2, 2024

On February 2, 2024, CSET's Assessment and CyberAI teams submitted a response to NIST's Request for Information related to the Executive Order Concerning Artificial Intelligence (88 FR 88368). In the submission, CSET compiles recommendations from six CSET reports and analyses to assist NIST in its implementation of AI...


CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies regarding the appointment of Chief AI Officers, Risk Management for AI, and other processes following the October 30, 2023 Executive...



Repurposing the Wheel: Lessons for AI Standards

Mina Narayanan, Alexandra Seymour, Heather Frase, Karson Elmgren
| November 2023

Standards enable good governance practices by establishing consistent measurement and norms for interoperability, but creating standards for AI is a challenging task. The Center for Security and Emerging Technology and the Center for a New American Security hosted a series of workshops in the fall of 2022 to examine standards...


Recent Blog Articles

CSET’s Must Read Research: A Primer

Tessa Baker
| December 18, 2023

This guide provides a run-down of CSET’s research since 2019 for first-time visitors and long-term fans alike. Quickly get up to speed on our “must-read” research and learn about how we organize our work.


In an op-ed published in The Bulletin, CSET's Owen J. Daniels discusses the Biden administration's executive order on responsible AI use, emphasizing the importance of clear signals in AI policymaking.


In their recent article published by the Brennan Center for Justice, CSET's Heather Frase and Mia Hoffmann, along with Edgardo Cortés and Lawrence Norden from the Brennan Center, delve into the growing role of artificial intelligence (AI) in election administration. The piece explores the potential benefits and risks associated with...


Our People

Heather Frase

Senior Fellow

Christian Schoeberl

Data Research Analyst

Karson Elmgren

Research Analyst

Mia Hoffmann

Research Fellow

Mina Narayanan

Research Analyst

Related News

In an article published by WIRED that delves into recent developments in the international regulation of artificial intelligence (AI) for military use, CSET's Lauren Kahn provided her expert analysis.

In a recent Bloomberg article, CSET's Helen Toner provides her expert analysis on Beijing's implementation of fresh regulations governing artificial intelligence (AI) services.

In The News

Congress Is Falling Behind on AI

May 16, 2023
The Messenger published an article featuring insights from CSET's Mina Narayanan. The article delves into the growing concerns surrounding the regulation of artificial intelligence and the challenges Congress faces in developing rules for its use.