Assessment - Line of Research

Assessment

AI/ML systems are failure-prone, unreliable, and opaque. This research line seeks to understand and contribute to the development and adoption of AI standards, testing procedures, best practices, regulation, auditing, and certification. It identifies areas where U.S. policy could promote the responsible, safe, and reliable deployment of AI/ML capabilities. It encompasses exploration of AI/ML accidents, harms, and vulnerabilities; AI trustworthiness, safety, standards, testing, and evaluation; and AI adoption, regulation, and policy, and it seeks to understand when systems work well, when they fail, and how such failures can be mitigated.

Recent Publications

Analysis

Building the Tech Coalition

Emelia Probasco
| August 2024

The U.S. Army’s 18th Airborne Corps can now target artillery just as efficiently as the best unit in recent American history—and it can do so with two thousand fewer servicemembers. This report presents a case study of how the 18th Airborne partnered with tech companies to develop, prototype, and operationalize...

Formal Response

Comment on Commerce Department RFI 89 FR 27411

Catherine Aiken, James Dunham, Jacob Feldgoise, Rebecca Gelles, Ronnie Kinoshita, Mina Narayanan, and Christian Schoeberl
| July 16, 2024

CSET submitted the following comment in response to a Request for Information (RFI) from the Department of Commerce regarding 89 FR 27411.

Analysis

Enabling Principles for AI Governance

Owen Daniels and Dewey Murdick
| July 2024

How to govern artificial intelligence is a concern that is rightfully top of mind for lawmakers and policymakers. To govern AI effectively, regulators must 1) know the terrain of AI risk and harm by tracking incidents and collecting data; 2) develop their own AI literacy and build better public understanding of...

Recent Blog Articles

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules...

In a Taiwan conflict, tough choices could come for Big Tech

Sam Bresnick and Emelia Probasco
| April 23, 2024

In their op-ed featured in Breaking Defense, CSET's Sam Bresnick and Emelia Probasco provide their expert analysis on the involvement of US tech giants in conflicts such as the war in Ukraine, and raise important questions about their role and potential entanglements in future conflicts, particularly those involving Taiwan.

China Bets Big on Military AI

Sam Bresnick
| April 3, 2024

In his op-ed published by the Center for European Policy Analysis (CEPA), CSET's Sam Bresnick shared his expert analysis of China's evolving military capabilities and its growing emphasis on battlefield information and the role of AI.

Our People

Heather Frase

Senior Fellow

Christian Schoeberl

Data Research Analyst

Mia Hoffmann

Research Fellow

Mina Narayanan

Research Analyst

Related News

In a recent episode of the Corner Alliance's "AI, Government, and the Future" podcast that explores the challenges of assessing AI systems and managing their risk, Mina Narayanan, a Research Analyst at CSET, provides her expert take.
In an article published by WIRED that delves into recent developments in the international regulation of artificial intelligence (AI) for military use, CSET's Lauren Kahn provided her expert analysis.
In a recent Bloomberg article, CSET's Helen Toner provides her expert analysis on Beijing's implementation of fresh regulations governing artificial intelligence (AI) services.