CyberAI

The new grant will contribute to the CyberAI Project's research at the intersection of artificial intelligence and cybersecurity.

Securing AI

Andrew Lohn and Wyatt Hoffman
| March 2022

As in traditional software, vulnerabilities in machine learning software can lead to sabotage or information leakage. And as in traditional software, sharing information about vulnerabilities helps defenders protect their systems and helps attackers exploit them. This brief examines some of the key differences between vulnerabilities in traditional and machine learning systems and how those differences affect the vulnerability disclosure and remediation processes.

Hacking Poses Risks for Artificial Intelligence

SIGNAL Online
| March 1, 2022

CSET Senior Fellow Andrew Lohn discusses the susceptibility of AI and machine learning software to data poisoning.
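
As a rough illustration of the concept, the sketch below flips a fraction of training labels and compares a model's accuracy before and after. It is a minimal, hypothetical example using scikit-learn on synthetic data; the dataset, model, and poisoning rate are assumptions for demonstration, not details from the interview.

# Minimal label-flipping data poisoning sketch (illustrative assumption,
# not a technique described in the interview).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set: flip 20 percent of the labels.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=len(y_poisoned) // 5, replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))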

CSET Research Analyst Dakota Cary testified before the U.S.-China Economic and Security Review Commission hearing on "China’s Cyber Capabilities: Warfare, Espionage, and Implications for the United States." Cary discussed how Chinese universities cooperate with China’s military and intelligence services to develop talent capable of conducting state-sponsored cyberespionage operations.

Artificial intelligence offers enormous promise to address a number of societal challenges, but it can also exacerbate existing ones. CSET Research Fellow Katerina Sedova and John Bansemer, CSET Senior Fellow and Director of the CyberAI Project, discussed countering the threat of automated disinformation.

AI and Compute

Andrew Lohn and Micah Musser
| January 2022

Between 2012 and 2018, the amount of computing power used by record-breaking artificial intelligence models doubled every 3.4 months. Even with money pouring into the AI field, this trendline is unsustainable. Because of cost, hardware availability, and engineering difficulties, the next decade of AI can't rely exclusively on applying ever more computing power to drive further progress.
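
To make the quoted rate concrete: a doubling every 3.4 months implies roughly an 11.5x increase in compute each year, and on the order of a few-million-fold growth if sustained for six years. The short calculation below works that out; the time horizons chosen are illustrative assumptions, not figures from the report.

# Arithmetic implied by a compute doubling every 3.4 months.
DOUBLING_MONTHS = 3.4

def growth_factor(months: float) -> float:
    # Growth factor after `months` at one doubling per 3.4 months.
    return 2 ** (months / DOUBLING_MONTHS)

print(f"per year:     {growth_factor(12):,.1f}x")   # ~11.5x
print(f"over 6 years: {growth_factor(72):,.0f}x")   # ~2.4 million x, if sustained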

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, and Ido Wulkan
| December 2021

Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.

Making AI Work for Cyber Defense

Wyatt Hoffman
| December 2021

Artificial intelligence will play an increasingly important role in cyber defense, but vulnerabilities in AI systems call into question their reliability in the face of evolving offensive campaigns. Because securing AI systems can require trade-offs that depend on the types of threats defenders face, they are often caught in a constant balancing act. This report explores the challenges in AI security and their implications for deploying AI-enabled cyber defenses at scale.

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, and Ido Wulkan
| December 2021

Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.

Stanford HAI Director of Policy Russell Wald, CSET Senior Fellow Andrew Lohn, and Stanford HAI Postdoctoral Fellow Jeff Ding discussed how a National Research Cloud would impact U.S. national security.