Breaking Down the Biden AI EO: Ensuring Safe and Secure AI

Tessa Baker

November 15, 2023

There’s a lot to digest in the White House’s October 30 AI Executive Order. Our tracker is a useful starting point for identifying key provisions and monitoring the government’s progress against specific milestones, but grappling with the substance is an entirely different matter. This blog post, focusing on Section 4 of the EO (“Developing Guidelines, Standards, and Best Practices for AI Safety and Security”), is the first in a series that summarizes notable provisions, shares some of our initial reactions, and highlights CSET research that may help the U.S. government implement the EO.
