Blog

Explore blog posts from CSET experts on the nexus of technology and policy, from in-depth analyses and expert op-eds to discussions of inclusion and diversity in technology.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.

This trip report covers the major takeaways from the Future of Drones in Ukraine conference, co-hosted by the U.S. Defense Innovation Unit and Ukraine’s Brave1. It gives an overview of how drones are being deployed in Ukraine, the pace and scale of technological and operational innovation for inexpensive, commercially available drones, and the outlook for scaling these capabilities, both in Ukraine and through the U.S. Replicator initiative.

The Executive Order on Safe, Secure, and Trustworthy AI: Decoding Biden’s AI Policy Roadmap

Ronnie Kinoshita, Luke Koslosky, and Tessa Baker
| May 3, 2024

On October 30, 2023, the Biden administration released its long-awaited Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. CSET has broken down the EO, focusing on specific government deliverables. Our EO Provision and Timeline tracker lists the agencies responsible for implementing each provision and the deadlines they face.

Working Through Our Global AI Trust Issues

Kathleen Curlee
| November 1, 2023

The use of the word “trustworthy” in relation to AI has sparked debate among policymakers and experts alike. This blog post explores different understandings of trustworthy AI among international actors, as well as challenges in establishing an international trustworthy AI consensus.

This blog post by CSET’s Executive Director Dewey Murdick explores two metaphorical lenses for governing the frontier of AI. The "Space Exploration Approach" likens AI models to spacecraft venturing into unexplored territory, requiring detailed planning and regular updates. The "Snake-Filled Garden Approach" views AI as a garden containing both harmless and dangerous 'snakes,' necessitating rigorous testing and risk assessment. Dewey examines these metaphors and the ways they can inform an AI governance strategy that balances innovation with safety, emphasizing the importance of ongoing learning and adaptability.

Replicator: A Bold New Path for DoD

Michael O’Connor
| September 18, 2023

The Replicator effort by the U.S. Department of Defense (DoD) is intended to overcome some of the military challenges posed by China’s People’s Liberation Army (PLA). This blog post identifies tradeoffs for the Department to consider as it charts the path for Replicator and provides a sense of industry’s readiness to support the effort.

Recent announcements from Pentagon and congressional leaders offer a significant opportunity to rapidly deliver autonomous systems technology at scale for U.S. warfighters well into the future. Dr. Jaret Riddick, CSET Senior Fellow and former Principal Director for Autonomy in USD(R&E), offers his perspective on DoD’s Replicator Initiative and recent legislative proposals on DoD autonomy.

Why Improving AI Reliability Metrics May Not Lead to Reliability

Romeo Valentin and Helen Toner
| August 8, 2023

How can we measure the reliability of machine learning systems? And do these measures really help us predict real-world performance? A recent study by the Stanford Intelligent Systems Laboratory, supported by CSET funding, provides new evidence that models may perform well on certain reliability metrics while still being unreliable in other ways. This blog post summarizes the study’s results, which suggest that policymakers and regulators should not think of “reliability” or “robustness” as a single, easy-to-measure property of an AI system. Instead, AI reliability requirements will need to consider which facets of reliability matter most for any given use case, and how those facets can be evaluated.
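For readers who want intuition for how reliability metrics can disagree, here is a minimal, hypothetical sketch (not the Stanford study’s code or data). It scores two simulated classifiers on accuracy and on expected calibration error (ECE), a common calibration metric, and shows that the more accurate model can still be the less reliable one by another measure.

```python
# Illustrative sketch only: two hypothetical models, two "reliability" facets.
# Neither the models nor the numbers come from the study discussed above.
import numpy as np

rng = np.random.default_rng(0)

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; average the |confidence - accuracy| gap."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

n = 10_000

# Hypothetical model A: accurate but overconfident (always reports 0.99).
correct_a = rng.random(n) < 0.90      # right 90% of the time
conf_a = np.full(n, 0.99)

# Hypothetical model B: less accurate but well calibrated.
conf_b = rng.uniform(0.5, 1.0, n)
correct_b = rng.random(n) < conf_b    # accuracy tracks stated confidence

for name, conf, correct in [("A", conf_a, correct_a), ("B", conf_b, correct_b)]:
    print(f"model {name}: accuracy={correct.mean():.2f}, "
          f"ECE={expected_calibration_error(conf, correct):.3f}")
```

Model A "wins" on accuracy yet is far worse calibrated, so which model counts as more reliable depends on which facet matters for the use case, which is exactly the post’s point.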

Collaborations between researchers and policymakers are necessary for progress but can be challenging in practice. This blog post reports on recent discussions among privacy experts about the obstacles they face when engaging in the policy space and offers advice on how to overcome them.

On July 21, the White House announced voluntary commitments from seven AI firms to ensure safe, secure, and transparent AI. CSET’s research provides important context to this discussion.

With the rapid integration of AI into our daily lives, we must all learn when and whether to trust the technology, understand its capabilities and limitations, and adapt as these systems — and our functional relationships with them — evolve.