Cybersecurity of AI Systems

CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies on appointing Chief AI Officers, managing risks from AI, and other processes following the October 30, 2023 Executive Order on AI.

What Does AI Red-Teaming Actually Mean?

Jessica Ji
| October 24, 2023

“AI red-teaming” is currently a hot topic, but what does it actually mean? This blog post explains the term’s cybersecurity origins, why AI red-teaming should incorporate cybersecurity practices, and how its evolving definition and sometimes inconsistent usage can be misleading for policymakers interested in exploring testing requirements for AI systems.

Skating to Where the Puck Is Going

Helen Toner, Jessica Ji, John Bansemer, and Lucy Lim
| October 2023

AI capabilities are evolving quickly and pose novel—and likely significant—risks. In these rapidly changing conditions, how can policymakers effectively anticipate and manage risks from the most advanced and capable AI systems at the frontier of the field? This Roundtable Report summarizes some of the key themes and conclusions of a July 2023 workshop on this topic jointly hosted by CSET and Google DeepMind.

Memory Safety: An Explainer

Chris Rohlf
| September 26, 2023

Memory safety issues remain endemic in cybersecurity and are often seen as a never-ending source of cyber vulnerabilities. The topic has recently gained prominence, with the White House Office of the National Cyber Director (ONCD) releasing a request for comments on how to strengthen the open-source ecosystem. But what exactly is memory safety? This blog describes the historical antecedents in computing that helped create one aspect of today’s insecure cyber ecosystem. There will be no quick fixes, but there is encouraging progress toward addressing these long-standing security issues.
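For readers unfamiliar with the term, the snippet below is an illustrative sketch, not drawn from the blog post itself, of the kind of defect memory safety refers to: a C function that writes past the end of a fixed-size buffer, next to a bounds-checked alternative. The function names and buffer size are invented for illustration.

```c
/* Illustrative sketch of a spatial memory safety bug in C and a safer
 * alternative. The names and sizes here are hypothetical examples. */
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy performs no bounds check, so any name longer than 15
 * characters (plus the terminating NUL) writes past the end of `buf`,
 * corrupting adjacent memory. */
void greet_unsafe(const char *name) {
    char buf[16];
    strcpy(buf, name);          /* out-of-bounds write if name is too long */
    printf("Hello, %s\n", buf);
}

/* Safer: snprintf truncates to the buffer size and always NUL-terminates. */
void greet_safe(const char *name) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", name);
    printf("Hello, %s\n", buf);
}

int main(void) {
    greet_safe("Ada");                                   /* fits in the buffer */
    greet_safe("a-name-much-longer-than-sixteen-chars"); /* truncated, not corrupted */
    /* Calling greet_unsafe() with the long string above would overflow buf --
     * the class of defect that memory-safe languages rule out by construction. */
    return 0;
}
```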

In a BBC article discussing the urgent need to integrate cybersecurity measures into artificial intelligence systems, CSET's Andrew Lohn provided expert analysis.

Securing AI Makes for Safer AI

John Bansemer and Andrew Lohn
| July 6, 2023

Recent discussions of AI have focused on safety, reliability, and other risks. Lost in this debate is the real need to secure AI against malicious actors. This blog post applies lessons from traditional cybersecurity to emerging AI-model risks.

CSET's Andrew Lohn and Krystal Jackson discussed the potential for reinforcement learning to support cyber defense.

A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova, produced in collaboration with OpenAI and the Stanford Internet Observatory, was cited in an article published by Forbes.

Breaking Defense published an article that explores both the potential benefits and risks of generative artificial intelligence, featuring insights from CSET's Micah Musser.

CSET Senior Fellow Dr. Heather Frase discussed her research on effectively evaluating and assessing AI systems across a broad range of applications.