Securing AI Makes for Safer AI

John Bansemer and Andrew Lohn

July 6, 2023

Recent discussions of AI have focused on safety, reliability, and other risks. Largely absent from this debate is the need to secure AI systems against malicious actors. This blog post applies lessons from traditional cybersecurity to emerging risks facing AI models.
