Cybersecurity of AI Systems

Putting Teeth into AI Risk Management

Matthew Schoemaker
| May 2024

President Biden's October 2023 executive order prioritizes the governance of artificial intelligence in the federal government, prompting the urgent creation of AI risk management standards and procurement guidelines. Soon after the order's signing, the Office of Management and Budget issued guidance for federal departments and agencies, including minimum risk standards for AI in federal contracts. As with cybersecurity, procurement rules will be used to enforce AI development best practices among federal suppliers. This report offers recommendations for implementing AI risk management procurement rules.

How Will AI Change Cyber Operations?

Jenny Jun
| April 30, 2024

In her op-ed in War on the Rocks, CSET's Jenny Jun examined the nuanced relationship between AI and cyber operations, highlighting both the optimism and the caution within the U.S. government regarding AI's impact on cyber defense and offense.

CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies on the appointment of Chief AI Officers, risk management for AI, and other processes following the October 30, 2023 Executive Order on AI.

What Does AI Red-Teaming Actually Mean?

Jessica Ji
| October 24, 2023

“AI red-teaming” is currently a hot topic, but what does it actually mean? This blog post explains the term’s cybersecurity origins, why AI red-teaming should incorporate cybersecurity practices, and how its evolving definition and sometimes inconsistent usage can be misleading for policymakers interested in exploring testing requirements for AI systems.

Skating to Where the Puck Is Going

Helen Toner, Jessica Ji, John Bansemer, and Lucy Lim
| October 2023

AI capabilities are evolving quickly and pose novel—and likely significant—risks. In these rapidly changing conditions, how can policymakers effectively anticipate and manage risks from the most advanced and capable AI systems at the frontier of the field? This Roundtable Report summarizes some of the key themes and conclusions of a July 2023 workshop on this topic jointly hosted by CSET and Google DeepMind.

Memory Safety: An Explainer

Chris Rohlf
| September 26, 2023

Memory safety issues remain endemic in cybersecurity and are often seen as a never-ending source of cyber vulnerabilities. The topic has recently gained prominence with the White House Office of the National Cyber Director (ONCD) releasing a request for comments on how to strengthen the open-source ecosystem. But what exactly is memory safety? This blog describes the historical antecedents in computing that helped create one aspect of today's insecure cyber ecosystem. There will be no quick fixes, but there is encouraging progress toward addressing these long-standing security issues.

CSET's Andrew Lohn provided expert analysis in a BBC article on the urgent need to integrate cybersecurity measures into artificial intelligence systems.

Securing AI Makes for Safer AI

John Bansemer and Andrew Lohn
| July 6, 2023

Recent discussions of AI have focused on safety, reliability, and other risks. Lost in this debate is the real need to secure AI against malicious actors. This blog post applies lessons from traditional cybersecurity to emerging AI-model risks.

CSET's Andrew Lohn and Krystal Jackson discussed the potential for reinforcement learning to support cyber defense.

A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in collaboration with OpenAI and Stanford Internet Observatory was cited in an article published by Forbes.