Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.

CSET’s Must Read Research: A Primer

Tessa Baker
| December 18, 2023

This guide provides a run-down of CSET’s research since 2019 for first-time visitors and long-term fans alike. Quickly get up to speed on our “must-read” research and learn about how we organize our work.

On October 17, 2023, the Bureau of Industry and Security (BIS) issued an update to last year’s export controls on advanced computing, supercomputing, and semiconductor manufacturing equipment. This blog post provides an overview of the updated advanced computing controls, analyzes more than 100 relevant chips, and discusses the licensing policies for the expanded chip restrictions and the broadened country scope.

Commentary: Balancing AI Governance with Opportunity

Jaret C. Riddick
| November 30, 2023

On October 30, 2023, the Biden administration released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). This blog post considers a potential lack of balance in the policy discussion between the restrictions that come with governance and the promise of opportunity unleashed by innovation. Is it possible to apply AI standards, governance, safeguards, and protections in a manner that stimulates innovation, competitiveness, and global leadership for the United States in AI?

The Global Distribution of STEM Graduates: Which Countries Lead the Way?

Brendan Oliss, Cole McFaul, and Jaret C. Riddick
| November 27, 2023

Discover how the global landscape of STEM graduates is shifting, potentially reshaping the future of innovation and education worldwide. This blog post analyzes recent education data from the countries with the most graduates in Science, Technology, Engineering, and Mathematics (STEM) fields. For each of the top eleven countries by number of STEM graduates, we present both the total number of STEM graduates and STEM graduates as a percentage of all graduates in 2020.

There’s a lot to digest in the White House’s October 30 AI Executive Order. Our tracker is a useful starting point to identify key provisions and monitor the government’s progress against specific milestones, but grappling with the substance is an entirely different matter. This blog post, focusing on Section 4 of the EO (“Developing Guidelines, Standards, and Best Practices for AI Safety and Security”), is the first in a series that summarizes interesting provisions, shares some of our initial reactions, and highlights some of CSET’s research that may help the USG tackle the EO.

The Executive Order on Safe, Secure, and Trustworthy AI: Decoding Biden’s AI Policy Roadmap

Ronnie Kinoshita, Luke Koslosky, and Tessa Baker
| May 3, 2024

On October 30, 2023, the Biden administration released its long-awaited Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. CSET has broken down the EO, focusing on specific government deliverables. Our EO Provision and Timeline tracker lists the agencies responsible for implementing each EO provision and their deadlines.

This blog post by CSET’s Executive Director Dewey Murdick explores two metaphorical lenses for governing the frontier of AI. The “Space Exploration Approach” likens AI models to spacecraft venturing into unexplored territories, requiring detailed planning and regular updates. The “Snake-Filled Garden Approach” views AI as a garden containing both harmless and dangerous “snakes,” necessitating rigorous testing and risk assessment. In the post, Dewey examines these metaphors and the different ways they can inform an AI governance strategy that balances innovation with safety, all while emphasizing the importance of ongoing learning and adaptability.

What Does AI Red-Teaming Actually Mean?

Jessica Ji
| October 24, 2023

“AI red-teaming” is currently a hot topic, but what does it actually mean? This blog post explains the term’s cybersecurity origins, why AI red-teaming should incorporate cybersecurity practices, and how its evolving definition and sometimes inconsistent usage can be misleading for policymakers interested in exploring testing requirements for AI systems.

A Guide to the Proposed Outbound Investment Regulations

Ngor Luong and Emily S. Weinstein
| October 6, 2023

The August 9 Executive Order aims to restrict certain U.S. investments in key technology areas. In a previous post, we proposed an end-user approach to crafting an AI investment prohibition. In this follow-on post, we rely on existing and hypothetical transactions to test scenarios where U.S. investments in China’s AI ecosystem would or would not be covered under the proposed program, and highlight outstanding challenges.

Memory Safety: An Explainer

Chris Rohlf
| September 26, 2023

Memory safety issues remain endemic in cybersecurity and are often seen as a never-ending source of cyber vulnerabilities. The topic has recently gained prominence, with the White House Office of the National Cyber Director (ONCD) releasing a request for comments on how to strengthen the open-source ecosystem. But what exactly is memory safety? This blog post describes the historical antecedents in computing that helped create one aspect of today’s insecure cyber ecosystem. There will be no quick fixes, but there is encouraging progress towards addressing these long-standing security issues.
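
To make the concept concrete, here is a minimal, hypothetical C sketch (not taken from the post itself) of one of the most common classes of memory safety bugs, an out-of-bounds write: C performs no bounds checking, so copying data into a buffer that is too small silently corrupts adjacent memory.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char name[16];                                    /* room for 15 chars + NUL */
    const char *input = "abcdefghijklmnopqrstuvwxyz"; /* 26 chars + NUL: too long */

    /* Undefined behavior: strcpy copies past the end of `name`,
     * overwriting whatever happens to sit next to it in memory. */
    strcpy(name, input);

    printf("%s\n", name);
    return 0;
}
```

Depending on the compiler, platform, and luck, a program like this may appear to work, crash, or become exploitable; memory-safe languages rule out this class of bug by enforcing bounds checks or rejecting such code outright.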