Cybersecurity of AI Systems

CSET's Andrew Lohn provided expert analysis in a BBC article discussing the urgent need to integrate cybersecurity measures into artificial intelligence systems.

Securing AI Makes for Safer AI

John Bansemer and Andrew Lohn | July 6, 2023

Recent discussions of AI have focused on safety, reliability, and other risks. Lost in this debate is the real need to secure AI against malicious actors. This blog post applies lessons from traditional cybersecurity to emerging AI-model risks.

CSET's Andrew Lohn and Krystal Jackson discussed the potential for reinforcement learning to support cyber defense.

A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova, produced in collaboration with OpenAI and the Stanford Internet Observatory, was cited in an article published by Forbes.

Breaking Defense published an article that explores both the potential benefits and risks of generative artificial intelligence, featuring insights from CSET's Micah Musser.

CSET Senior Fellow Dr. Heather Frase discussed her research on effectively evaluating and assessing AI systems across a broad range of applications.

Artificial intelligence systems are being deployed rapidly across all sectors of the economy, yet a significant body of research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations mitigate them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts convened to help answer these questions.

CSET's Josh A. Goldstein was recently quoted in a WIRED article about state-backed hacking groups using fake LinkedIn profiles to steal information from their targets. Goldstein offers insight into the broader challenges such tactics pose in the disinformation space.

A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova, produced in collaboration with OpenAI and the Stanford Internet Observatory, was cited in an article published by Grid. The report examines the potential misuse of language models for influence operations in the future and proposes a framework for evaluating possible mitigations.

A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova, produced in collaboration with OpenAI and the Stanford Internet Observatory, was cited in an article published on Medium. The report explores how language models could be misused for influence operations in the future and provides a framework for assessing potential mitigation strategies.