
Cybersecurity Risks of AI-Generated Code

Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles | November 2024

Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications.

Mia Hoffmann provided her expert insights in an article published by TIME. The article discusses concerns about artificial intelligence (AI) affecting the 2024 U.S. elections through misinformation and deepfakes.

This blog post describes key takeaways from the NATO-Ukraine Defense Innovators Forum, held in Krakow, Poland, in June 2024. It provides an overview of changing concepts of operation, battlefield realities, and technological aspirations and innovations in Ukraine, with a focus on uncrewed aerial vehicles (UAVs) and counter-UAV systems. It builds upon CSET’s previous blog post from the Future of Drones in Ukraine conference held in Warsaw in November 2023.

In their op-ed in Foreign Policy, Josh A. Goldstein and Renée DiResta discuss recent efforts by the U.S. government to disrupt Russian influence operations, highlighting how Russia uses fake domains, media outlets, and social media influencers to manipulate global public conversations.

Through the Chat Window and Into the Real World: Preparing for AI Agents

Helen Toner, John Bansemer, Kyle Crichton, Matthew Burtell, Thomas Woodside, Anat Lior, Andrew Lohn, Ashwin Acharya, Beba Cibralic, Chris Painter, Cullen O’Keefe, Iason Gabriel, Kathleen Fisher, Ketan Ramakrishnan, Krystal Jackson, Noam Kolt, Rebecca Crootof, and Samrat Chatterjee | October 2024

Computer scientists have long sought to build systems that can actively and autonomously carry out complicated goals in the real world—commonly referred to as artificial intelligence "agents." Recently, significant progress in large language models has fueled new optimism about the prospect of building sophisticated AI agents. This CSET-led workshop report synthesizes findings from a May 2024 workshop on this topic, including what constitutes an AI agent, how the technology is improving, what risks agents exacerbate, and intervention points that could help.

Securing Critical Infrastructure in the Age of AI

Kyle Crichton, Jessica Ji, Kyle Miller, John Bansemer, Zachary Arnold, David Batz, Minwoo Choi, Marisa Decillis, Patricia Eke, Daniel M. Gerstein, Alex Leblang, Monty McGee, Greg Rattray, Luke Richards, and Alana Scott | October 2024

As critical infrastructure operators and providers seek to harness the benefits of new artificial intelligence capabilities, they must also manage associated risks from both AI-enabled cyber threats and potential vulnerabilities in deployed AI systems. In June 2024, CSET led a workshop to assess these issues. This report synthesizes our findings, drawing on lessons from cybersecurity and insights from critical infrastructure sectors to identify challenges and potential risk mitigations associated with AI adoption.

Revisiting AI Red-Teaming

Jessica Ji and Colin Shea-Blymyer | September 26, 2024

This year, CSET researchers returned to the DEF CON cybersecurity conference to explore how understandings of AI red-teaming practices have evolved among cybersecurity practitioners and AI experts. This blog post, a companion to "How I Won DEF CON’s Generative AI Red-Teaming Challenge", summarizes our takeaways and concludes with a list of outstanding research questions regarding AI red-teaming, some of which CSET hopes to address in future work.

How I Won DEF CON’s Generative AI Red-Teaming Challenge

Colin Shea-Blymyer | September 26, 2024

In August 2024, CSET Research Fellow Colin Shea-Blymyer attended DEF CON, the world’s largest hacking convention, to break powerful artificial intelligence systems. He participated in the AI red-teaming challenge and won. This blog post details his experiences with the challenge, what it took to win the grand prize, and what he learned about the state of AI testing.

View this session of our Security and Emerging Technology Seminar Series, held on August 1 at 12 p.m. ET. The session featured a discussion of the President’s Council of Advisors on Science and Technology (PCAST) report on a strategy for cyber-physical resilience.

In their op-ed featured in MIT Technology Review, Josh A. Goldstein and Renée DiResta provide their expert analysis on OpenAI's first report on the misuse of its generative AI.