Blog

Explore blog posts from CSET experts on the intersection of technology and policy, from in-depth analyses and expert op-eds to discussions of diversity and inclusion in the technology field.

The European Union's Artificial Intelligence Act officially came into force today, after more than five years of legislative process and negotiation. The Act marks a significant milestone, but it also opens a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, including its rules for general-purpose AI and its governance structures, and offers insights into the timeline and what to expect next.


The Executive Order on Safe, Secure, and Trustworthy AI: Decoding Biden’s AI Policy Roadmap

Ronnie Kinoshita, Luke Koslosky, and Tessa Baker
| May 3, 2024

On October 30, 2023, the Biden administration released its long-awaited Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. CSET has broken down the EO, focusing on specific government deliverables. Our EO Provision and Timeline tracker lists which agencies are responsible for implementing each EO provision and the deadlines they face.

This blog post by CSET’s Executive Director Dewey Murdick explores two metaphorical lenses for governing the frontier of AI. The "Space Exploration Approach" likens AI models to spacecraft venturing into unexplored territory, requiring detailed planning and regular updates. The "Snake-Filled Garden Approach" views AI as a garden containing both harmless and dangerous 'snakes,' necessitating rigorous testing and risk assessment. In the post, Dewey examines these metaphors and the different ways they can inform an AI governance strategy that balances innovation with safety, all while emphasizing the importance of ongoing learning and adaptability.

A Guide to the Proposed Outbound Investment Regulations

Ngor Luong and Emily S. Weinstein
| October 6, 2023

The August 9 Executive Order aims to restrict certain U.S. investments in key technology areas. In a previous post, we proposed an end-user approach to crafting an AI investment prohibition. In this follow-on post, we rely on existing and hypothetical transactions to test scenarios where U.S. investments in China’s AI ecosystem would or would not be covered under the proposed program, and highlight outstanding challenges.

On September 8, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) released their Bipartisan Framework on AI Legislation. The framework includes many ideas and recommendations that CSET research has highlighted over the past four years. This blog post highlights some of the most relevant reports and CSET’s perspective on the framework’s elements.

Recent announcements from both Pentagon and congressional leaders offer a significant opportunity to rapidly deliver autonomous systems technology at scale to U.S. warfighters well into the future. Dr. Jaret Riddick, CSET Senior Fellow and former Principal Director for Autonomy in USD(R&E), offers his perspective on DOD’s Replicator Initiative and recent legislative proposals on DOD autonomy.

Scoping AI for National Security: An Impossible Task?

Emily S. Weinstein and Ngor Luong
| August 28, 2023

On August 9, 2023, the Biden administration announced an executive order to restrict certain U.S. investments in China’s key technology sectors, including artificial intelligence. This blog post proposes implementing investment restrictions related to AI for national security using a list-based end-user approach that builds upon existing list-based tools.

In & Out of China: Financial Support for AI Development

Ngor Luong and Margarita Konaev
| August 10, 2023

Drawing from prior CSET research, this blog post describes different domestic and international initiatives the Chinese government and companies are pursuing to shore up investment in AI and meet China’s strategic objectives, as well as indicators to track their future trajectories.

Why Improving AI Reliability Metrics May Not Lead to Reliability

Romeo Valentin and Helen Toner
| August 8, 2023

How can we measure the reliability of machine learning systems? And do these measures really help us predict real-world performance? A recent study by the Stanford Intelligent Systems Laboratory, supported by CSET funding, provides new evidence that models may perform well on certain reliability metrics while still being unreliable in other ways. This blog post summarizes the study’s results, which suggest that policymakers and regulators should not think of “reliability” or “robustness” as a single, easy-to-measure property of an AI system. Instead, AI reliability requirements will need to consider which facets of reliability matter most for a given use case and how those facets can be evaluated.
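
To make the underlying point concrete, here is a minimal, hypothetical Python sketch of measuring several distinct facets of reliability for the same model. The dataset, model, noise level, and metrics below are illustrative assumptions, not details drawn from the Stanford study; the sketch only shows how a model can look reliable on one facet (in-distribution accuracy) while scoring poorly on others (accuracy under distribution shift, calibration).

```python
# Illustrative sketch only: all data, model, and metric choices are
# assumptions for demonstration, not drawn from the Stanford study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "in-distribution" data, split into train and test sets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Facet 1: accuracy on data drawn from the training distribution.
in_dist_acc = accuracy_score(y_test, model.predict(X_test))

# Facet 2: accuracy under a simple distribution shift (added feature noise).
X_shifted = X_test + rng.normal(scale=2.0, size=X_test.shape)
shifted_acc = accuracy_score(y_test, model.predict(X_shifted))

# Facet 3: a crude calibration check -- mean predicted confidence vs.
# actual accuracy on the shifted data (a large gap means overconfidence).
mean_confidence = model.predict_proba(X_shifted).max(axis=1).mean()
calibration_gap = mean_confidence - shifted_acc

print(f"In-distribution accuracy:  {in_dist_acc:.2f}")
print(f"Accuracy under shift:      {shifted_acc:.2f}")
print(f"Confidence minus accuracy: {calibration_gap:.2f}")
```

A single headline number from any one of these facets would not reveal the behavior captured by the others, which is the study's broader caution against treating reliability as one easy-to-measure property.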

For Export Controls on AI, Don’t Forget the “Catch-All” Basics

Emily S. Weinstein and Kevin Wolf
| July 5, 2023

Existing U.S. government tools and approaches may help mitigate some of the issues worrying AI observers. This blog post describes long-standing “catch-all” controls, administered by the Department of Commerce’s Bureau of Industry and Security (BIS), and how they might be used to address some of these threats.

Controlling Access to Compute via the Cloud: Options for U.S. Policymakers, Part II

Hanna Dohmen, Jacob Feldgoise, Emily S. Weinstein, and Timothy Fist
| June 5, 2023

In the second of a series of publications, CSET and CNAS outline one avenue the U.S. government could pursue to cut off China’s access to cloud computing services in support of military, security, or intelligence services end use(r)s. The authors discuss pros, cons, and limitations.