CyberAI

Hacking AI

Andrew Lohn
| December 2020

Machine learning systems’ vulnerabilities are pervasive, and hackers and adversaries can exploit them with relative ease. Managing the risks is therefore too large a task for the technology community to handle alone. In this primer, Andrew Lohn writes that policymakers must understand the threats well enough to assess the dangers that the United States, its military and intelligence services, and its civilians face when they use machine learning.
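
The primer is aimed at policymakers rather than programmers, but a canonical example of the kind of vulnerability it describes, the fast gradient sign method (FGSM), fits in a few lines. A minimal sketch, assuming PyTorch; the untrained model and random input below are illustrative stand-ins for a real classifier and image, not the report's own example:

```python
import torch
import torch.nn as nn

# Stand-in classifier; any differentiable model is vulnerable in principle.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, y, epsilon=0.1):
    """Fast gradient sign method: nudge each pixel by +/-epsilon in
    whichever direction most increases the model's loss on label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range

# A random "image" stands in for real data in this sketch.
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(x, y)
print(model(x).argmax(), model(x_adv).argmax())  # labels may differ after the perturbation
```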

Automating Cyber Attacks

Ben Buchanan, John Bansemer, Dakota Cary, Jack Lucas, and Micah Musser
| November 2020

Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks and which strategies attackers are more or less likely to adopt. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.

U.S. Demand for Talent at the Intersection of AI and Cybersecurity

Cindy Martinez and Micah Musser
| November 2020

As demand for cybersecurity experts in the United States has grown faster than the supply of qualified workers, some organizations have turned to artificial intelligence to bolster their overwhelmed cyber teams. Organizations may opt for distinct teams that specialize exclusively in AI or cybersecurity, but there is a benefit to having employees with overlapping experience in both domains. This data brief analyzes hiring demand for individuals with a combination of AI and cybersecurity skills.

Destructive Cyber Operations and Machine Learning

Dakota Cary and Daniel Cebul
| November 2020

Machine learning may provide cyber attackers with the means to execute more effective and more destructive attacks against industrial control systems. This report discusses the ways attackers may deploy emerging ML tools and the most effective avenues for industrial system defenders to respond.
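
The report is a strategic assessment rather than a tutorial, but one commonly discussed defensive avenue for industrial systems is anomaly detection on process telemetry (named here as an illustration, not the report's prescription). A minimal sketch, assuming scikit-learn's IsolationForest and synthetic sensor readings as stand-ins for a production detector and real plant data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in telemetry: normal operation clusters around set points
# (e.g., pressure, temperature, valve position).
normal = rng.normal(loc=[50.0, 120.0, 0.6],
                    scale=[1.0, 2.0, 0.05],
                    size=(5000, 3))

# Train only on data from known-good operation.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A reading driven far outside the normal envelope, as a destructive
# attack on an actuator might produce.
suspect = np.array([[50.5, 160.0, 0.95]])
print(detector.predict(suspect))  # -1 flags an anomaly, 1 means normal
```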

Downscaling Attack and Defense

Andrew Lohn
| October 7, 2020

The resizing of images, typically a required preprocessing step for computer vision systems, is vulnerable to attack. Images can be crafted to look completely different at machine-vision scales than at the scales humans see, and the default settings of some common computer vision and machine learning systems are vulnerable.
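
To make the mechanism concrete: under a nearest-neighbor resizer, only the pixels at the sampled grid positions survive downscaling, so an attacker can overwrite just those pixels. A minimal NumPy sketch under that assumption; production libraries such as OpenCV or Pillow differ in sampling details, which is exactly why their default settings matter:

```python
import numpy as np

def nearest_indices(src_len, dst_len):
    # Index of the source pixel sampled for each destination pixel
    # under simple nearest-neighbor downscaling.
    return np.arange(dst_len) * src_len // dst_len

def downscale_nearest(img, dst_h, dst_w):
    rows = nearest_indices(img.shape[0], dst_h)
    cols = nearest_indices(img.shape[1], dst_w)
    return img[np.ix_(rows, cols)]

def embed_attack(carrier, target):
    # Overwrite only the pixels the resizer will sample, so the
    # full-size image still looks like the carrier almost everywhere.
    attack = carrier.copy()
    rows = nearest_indices(carrier.shape[0], target.shape[0])
    cols = nearest_indices(carrier.shape[1], target.shape[1])
    attack[np.ix_(rows, cols)] = target
    return attack

carrier = np.full((512, 512), 255, dtype=np.uint8)  # plain white "benign" image
target = np.zeros((64, 64), dtype=np.uint8)         # all-black "payload" image
attack = embed_attack(carrier, target)

assert np.array_equal(downscale_nearest(attack, 64, 64), target)
# Only 64*64 of 512*512 pixels changed (~1.6%), so the attack image still
# looks white at full resolution but is entirely black after resizing.
```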

The AI Triad and What It Means for National Security Strategy

Ben Buchanan
| August 2020

One sentence summarizes the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. This AI triad of computing power, algorithms, and data offers a framework for decision-making in national security policy.

The U.S. Has AI Competition All Wrong

Foreign Affairs
| August 7, 2020

AI competition among nations comes down to a technical triad: data, algorithms, and computing power. While the first two elements receive an enormous amount of policy attention, compute is often overlooked. CSET's Ben Buchanan explores its potential in Foreign Affairs.

Deepfakes: A Grounded Threat Assessment

Tim Hwang
| July 2020

The rise of deepfakes could enhance the effectiveness of disinformation efforts by states, political parties, and adversarial actors. How rapidly is this technology advancing, and who in reality might adopt it for malicious ends? This report offers a comprehensive deepfake threat assessment grounded in the latest machine learning research on generative models.
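
For background on the generative models the assessment draws on: most deepfake pipelines rest on adversarial training between a generator and a discriminator. A minimal GAN sketch, assuming PyTorch; the tiny networks and toy two-dimensional data are illustrative stand-ins, not a deepfake model:

```python
import torch
import torch.nn as nn

# Toy generator and discriminator; real deepfake models are far larger,
# but the adversarial objective is the same.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, 2) + 3.0  # stand-in for real data (e.g., face images)

for step in range(1000):
    fake = G(torch.randn(128, 16))

    # Discriminator: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(128, 1)) +
              bce(D(fake.detach()), torch.zeros(128, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator into labeling fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(128, 1))
    g_loss.backward()
    opt_g.step()
```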

AI will alter the nature of cybersecurity in unanticipated ways. Ben Buchanan, director of CSET's CyberAI program, wrote a research agenda for understanding these changes, including “how AI & machine learning can be used to detect malicious code.”
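
As one illustration of that research direction (chosen here for concreteness, not taken from the agenda itself): static malware detection with learned features. A minimal sketch, assuming scikit-learn and hashed character n-grams over byte strings; the samples and labels are illustrative stand-ins, not real data:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

# Stand-in samples: hex-encoded byte streams of binaries or scripts.
# A real pipeline would extract these from labeled malware corpora.
samples = ["4d5a9000 0003", "7f454c46 0101", "4d5a9000 ffff", "7f454c46 0202"]
labels = [1, 0, 1, 0]  # 1 = malicious, 0 = benign (illustrative only)

# Hash character n-grams so the feature space stays fixed-size
# no matter how many distinct byte sequences appear in training.
vectorizer = HashingVectorizer(analyzer="char", ngram_range=(2, 4),
                               n_features=2**16)
X = vectorizer.transform(samples)

clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["4d5a9000 0003"])))
```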

CyberLaw Podcast: Whaling at Scale

Steptoe & Johnson
| June 8, 2020

"Does machine learning get offensive actors anything they don't already have?" asks Ben Buchanan, Director of CSET's CyberAI program. He joined the CyberLaw podcast to discuss the impacts of AI on offensive and defensive cyber operations.