Cybersecurity Risks of AI-Generated Code

Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles

November 2024

Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications.
