CyberAI

Will AI Make Cyber Swords or Shields?

Andrew Lohn and Krystal Jackson
| August 2022

Funding and priorities for technology development today determine the terrain for digital battles tomorrow, and they provide the arsenals for both attackers and defenders. Unfortunately, researchers and strategists disagree on which technologies will ultimately be most beneficial and which will cause more harm than good. This report provides three examples showing that, while the future of technology is impossible to predict with certainty, there is enough empirical data and mathematical theory to have these debates with more rigor.

U.S. High School Cybersecurity Competitions

Kayla Goode, Ali Crawford, and Christopher Back
| July 2022

In the current cyber-threat environment, a well-educated workforce is critical to U.S. national security. Today, however, nearly six hundred thousand cybersecurity positions remain unfilled across the public and private sectors. This report explores high school cybersecurity competitions as a potential avenue for increasing the domestic cyber talent pipeline. The authors examine the competitions, their reach, and their impact on students’ educational and professional development.

Will AI Make Cyber Swords or Shields?

Andrew Lohn
| July 27, 2022

We aim to demonstrate the value of mathematical models for policy debates about technological progress in cybersecurity by considering phishing, vulnerability discovery, and the dynamics between patching and exploitation. We then adjust the inputs to those mathematical models to match some possible advances in their underlying technology.

A CSET report illustrates how malign actors exploit AI to automate disinformation campaigns.

In his testimony before the House Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation, Senior Fellow Andrew Lohn offered recommendations on how to mitigate AI security risks.

Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models. Although these attacks were initially conceived of and studied digitally, in that the raw pixel values of the image were perturbed, recent work has demonstrated that these attacks can successfully transfer to the physical world. This can be accomplished by printing out the patch and adding it into scenes of newly captured images or video footage.
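In digital form, "applying" a patch amounts to overwriting a region of the image's pixel array. The sketch below illustrates only that mechanical step with NumPy; in a real attack the patch contents would first be optimized against a target model, which is omitted here, and the array shapes and placement are illustrative assumptions.

```python
import numpy as np

def apply_patch(image: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    """Digitally apply a patch by overwriting a rectangular pixel region."""
    patched = image.copy()
    h, w = patch.shape[:2]
    patched[top:top + h, left:left + w] = patch
    return patched

# Toy example: a 64x64 RGB "scene" and an 8x8 patch of random pixels
# standing in for an optimized adversarial patch.
scene = np.zeros((64, 64, 3), dtype=np.uint8)
patch = np.random.randint(0, 256, size=(8, 8, 3), dtype=np.uint8)
patched_scene = apply_patch(scene, patch, top=10, left=20)
```

The physical-world variant described above replaces this array edit with printing the patch and photographing it in a scene, so the perturbation must survive lighting, scale, and camera noise.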

In his testimony before the House of Representatives Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation, Senior Fellow Andrew Lohn discussed the application of AI systems in cybersecurity and AI’s vulnerabilities.

CSET Senior Fellow Andrew Lohn testified before the House of Representatives Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation at a hearing on "Securing the Future: Harnessing the Potential of Emerging Technologies While Mitigating Security Risks." Lohn discussed the application of AI systems in cybersecurity and AI’s vulnerabilities.

CSET voiced support for OpenAI, Cohere, and AI21 Labs' joint statement on best practices for any organization developing or deploying large language models.

At a hearing before the House Science Subcommittee on Investigations and Oversight and Subcommittee on Research and Technology, CSET Senior Fellow Andrew Lohn explained the vulnerabilities of open-source software.