Assessment - Line of Research

Assessment

AI/ML systems are failure-prone, unreliable, and opaque. This research line seeks to understand and contribute to the development and adoption of AI standards, testing procedures, best practices, regulation, auditing, and certification. It identifies areas where U.S. policy could promote the responsible, safe, and reliable deployment of AI/ML capabilities. It encompasses exploration of AI/ML accidents, harms, and vulnerabilities; AI trustworthiness, safety, standards, testing, and evaluation; AI adoption, regulation, and policy; and attempts to understand when systems work well, when they fail, and how such failures could be mitigated.

Recent Publications

Analysis

An Argument for Hybrid AI Incident Reporting

Ren Bin Lee Dixon and Heather Frase
| March 2024

Artificial intelligence incidents have occurred alongside the rapid advancement of AI capabilities over the past decade. However, there is not yet a concerted policy effort in the United States to monitor, document, and aggregate AI incident data to enhance the understanding of AI-related harm and inform safety policies. This...

Read More

Formal Response

Comment on NIST RFI Related to the Executive Order Concerning Artificial Intelligence (88 FR 88368)

Mina Narayanan, Jessica Ji, and Heather Frase
| February 2, 2024

On February 2, 2024, CSET's Assessment and CyberAI teams submitted a response to NIST's Request for Information related to the Executive Order Concerning Artificial Intelligence (88 FR 88368). In the submission, CSET compiles recommendations from six CSET reports and analyses in order to assist NIST in its implementation of AI...

Read More

CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies regarding the appointment of Chief AI Officers, Risk Management for AI, and other processes following the October 30, 2023 Executive...

Read More

Recent Blog Articles

China Bets Big on Military AI

Sam Bresnick
| April 3, 2024

In his op-ed published by the Center for European Policy Analysis (CEPA), CSET's Sam Bresnick shared his expert analysis of China's evolving military capabilities, its growing emphasis on battlefield information, and the role of AI.

Read More

For Government Use of AI, What Gets Measured Gets Managed

Matthew Burtell and Helen Toner
| March 28, 2024

In their op-ed featured in Lawfare, CSET’s Matthew Burtell and Helen Toner shared their expert analysis on the significant implications of government procurement and deployment of artificial intelligence (AI) systems, emphasizing the need for high ethical and safety standards.

Read More

The October 30, 2023, White House executive order on artificial intelligence requires companies developing the most advanced AI models to report safety testing results to the federal government. CSET Horizon Junior Fellow Thomas Woodside writes that these requirements are a good first step towards managing uncertain risks and Congress should...

Read More

Our People

Heather Frase

Senior Fellow

Christian Schoeberl

Data Research Analyst

Karson Elmgren

Research Analyst

Mia Hoffmann

Research Fellow

Mina Narayanan

Research Analyst

Related News

In an article published by WIRED that delves into recent developments in the international regulation of artificial intelligence (AI) for military use, CSET's Lauren Kahn provided her expert analysis.

In a recent Bloomberg article, CSET's Helen Toner provides her expert analysis on Beijing's implementation of fresh regulations governing artificial intelligence (AI) services.

In The News

Congress Is Falling Behind on AI

May 16, 2023

The Messenger published an article featuring insights from CSET's Mina Narayanan. The article delves into the growing concerns surrounding the regulation of artificial intelligence and the challenges Congress faces in developing rules for its use.