Tag Archive: CyberAI

Join us for the next session of our Security and Emerging Technology Seminar Series on August 1 at 12 p.m. ET. This session will feature a discussion on the President’s Council of Advisors on Science and Technology (PCAST) Report on Strategy for Cyber-Physical Resilience.

In their op-ed featured in MIT Technology Review, Josh A. Goldstein and Renée DiResta provide their expert analysis on OpenAI's first report on the misuse of its generative AI.

In an article published by NPR discussing the surge in AI-generated spam on Facebook and other social media platforms, CSET's Josh A. Goldstein provided his expert insights.

Deepfakes, Elections, and Shrinking the Liar’s Dividend

Brennan Center for Justice
| January 23, 2024

In an article published by the Brennan Center for Justice, Josh A. Goldstein and Andrew Lohn delve into the concerns about the spread of misleading deepfakes and the liar's dividend.

Hacking Poses Risks for Artificial Intelligence

SIGNAL Online
| March 1, 2022

CSET Senior Fellow Andrew Lohn discusses the potential for AI and machine learning software to be susceptible to data poisoning.

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, Ido Wulkan
| December 2021

Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, Ido Wulkan
| December 2021

Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.

Robot Hacking Games

Dakota Cary
| September 2021

Software vulnerability discovery, patching, and exploitation—collectively known as the vulnerability lifecycle—is time consuming and labor intensive. Automating the process could significantly improve both software security and offensive hacking. The Defense Advanced Research Projects Agency's Cyber Grand Challenge supported teams of researchers from 2014 to 2016 that worked to create these tools. China took notice. In 2017, China hosted its first Robot Hacking Game, seeking to automate the software vulnerability lifecycle. Since then, China has hosted seven such competitions, and the People's Liberation Army has increased its role in hosting the games.

Poison in the Well

Andrew Lohn
| June 2021

Modern machine learning often relies on open-source datasets, pretrained models, and machine learning libraries from across the internet, but are those resources safe to use? Previously successful digital supply chain attacks against cyber infrastructure suggest the answer may be no. This report introduces policymakers to these emerging threats and provides recommendations for how to secure the machine learning supply chain.

Machine Learning and Cybersecurity

Micah Musser, Ashton Garriott
| June 2021

Cybersecurity operators have increasingly relied on machine learning to address a rising number of threats. But will machine learning give them a decisive advantage or just help them keep pace with attackers? This report explores the history of machine learning in cybersecurity and the potential it has for transforming cyber defense in the near future.