Tag Archive: CyberAI

Jessica Ji is a Research Analyst working on the CyberAI Project.

Donna Artusy is a Non-Resident Research Fellow at CSET, where she contributes to the CyberAI Project.

Walter Haydock is a Non-Resident Research Fellow at CSET, where he contributes to the CyberAI Project.

Hacking Poses Risks for Artificial Intelligence

SIGNAL Online
| March 1, 2022

CSET Senior Fellow Andrew Lohn discusses the potential for AI and machine learning software to be susceptible to data poisoning.

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, Ido Wulkan
| December 2021

Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, Ido Wulkan
| December 2021

Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.

Robot Hacking Games

Dakota Cary
| September 2021

Software vulnerability discovery, patching, and exploitation—collectively known as the vulnerability lifecycle—is time-consuming and labor-intensive. Automating the process could significantly improve both software security and offensive hacking. The Defense Advanced Research Projects Agency’s Cyber Grand Challenge supported teams of researchers from 2014 to 2016 that worked to create these tools. China took notice. In 2017, China hosted its first Robot Hacking Game, seeking to automate the software vulnerability lifecycle. Since then, China has hosted seven such competitions, and the People’s Liberation Army has increased its role in hosting the games.

Poison in the Well

Andrew Lohn
| June 2021

Modern machine learning often relies on open-source datasets, pretrained models, and machine learning libraries from across the internet, but are those resources safe to use? Previously successful digital supply chain attacks against cyber infrastructure suggest the answer may be no. This report introduces policymakers to these emerging threats and provides recommendations for how to secure the machine learning supply chain.

Machine Learning and Cybersecurity

Micah Musser, Ashton Garriott
| June 2021

Cybersecurity operators have increasingly relied on machine learning to address a rising number of threats. But will machine learning give them a decisive advantage or just help them keep pace with attackers? This report explores the history of machine learning in cybersecurity and the potential it has for transforming cyber defense in the near future.

Academics, AI, and APTs

Dakota Cary
| March 2021

Six Chinese universities have relationships with Advanced Persistent Threat (APT) hacking teams. Their activities range from recruitment to running cyber operations. These partnerships, themselves a case study in military-civil fusion, allow state-sponsored hackers to quickly move research from the lab to the field. This report examines these universities’ relationships with known APTs and analyzes the schools’ AI/ML research that may translate to future operational capabilities.