Blumenthal and Hawley’s U.S. AI Act Framework: CSET’s Perspective and Contributions

Tessa Baker

September 19, 2023

On September 8, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) released their Bipartisan Framework for a U.S. AI Act. The framework includes many ideas and recommendations that CSET research has highlighted over the past four years. This blog post collects some of the most relevant reports and outlines CSET's perspective on the framework's elements.

Related Content

The summer of 2023 will likely go down in the history books as the "Summer of AI," with numerous advances in LLMs and other generative AI systems capturing the public consciousness. At the same time, the Biden administration announced multiple efforts to protect the American public's safety, security, and privacy while ensuring the United States continues to lead the world in AI innovation. This blog post summarizes some of these major executive branch actions and highlights related CSET insights.

On July 21, the White House announced voluntary commitments from seven AI firms to ensure safe, secure, and transparent AI. CSET’s research provides important context to this discussion.

Dr. Dewey Murdick testified before the House Science Committee on steps the United States can take to support U.S. AI innovation, prevent authoritarian governments from surpassing us in AI, and improve user safety.

CSET Director Dewey Murdick testified before the Senate Select Committee on Intelligence hearing on "Countering the People’s Republic of China’s Economic and Technological Plan for Dominance." Murdick discussed China's strategy to move towards self-sufficiency in key technologies and steps the United States can take to respond.

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.

This explainer defines criteria for effective AI incident collection and identifies tradeoffs among potential reporting models: mandatory, voluntary, and citizen reporting.

Analysis

A Common Language for Responsible AI

October 2022

Policymakers, engineers, program managers, and operators need a common set of terms as the bedrock for instantiating responsible AI within the Department of Defense. Rather than create a DOD-specific set of terms, this paper argues that the DOD would benefit from adopting the key characteristics defined by the National Institute of Standards and Technology in its draft AI Risk Management Framework, with only two exceptions.

Analysis

Adding Structure to AI Harm

July 2023

Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and of the circumstances that lead to their occurrence once AI systems are deployed. This report presents a standardized conceptual framework for defining, tracking, classifying, and understanding harms caused by AI. It lays out the key elements required to identify AI harm, their basic relational structure, and their definitions, without imposing a single interpretation of AI harm. The brief concludes with an example of how to apply and customize the framework while preserving its modular structure.

As modern machine learning systems become more widely used, the potential costs of malfunctions grow. This policy brief describes how trends we already see today—both in newly deployed artificial intelligence systems and in older technologies—show how damaging the AI accidents of the future could be. It describes a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce these risks.