
Truth, Lies, and Automation

How Language Models Could Change Disinformation

Ben Buchanan

Andrew Lohn

Micah Musser

Katerina Sedova

May 2021

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to assess its potential for misuse in disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.


Related Content

Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.

Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.

CSET Senior Fellow Andrew Lohn testified before the House of Representatives Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation at a hearing on "Securing the Future: Harnessing the Potential of Emerging Technologies While Mitigating Security Risks." Lohn discussed the application of AI systems in cybersecurity and AI’s vulnerabilities.

Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near human levels. In a new report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.

Machine learning advances are transforming cyber strategy and operations. This necessitates studying national security issues at the intersection of AI and cybersecurity, including offensive and defensive cyber operations, the cybersecurity of AI systems, and the effect of new technologies on global stability.