
Truth, Lies, and Automation

How Language Models Could Change Disinformation

Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova

May 2021

Growing popular and industry interest in high-performing natural language generation models has raised concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, and analyzes its potential for misuse in disinformation campaigns. A model like GPT-3 may enable disinformation actors to substantially reduce the work required to write disinformation while expanding its reach and potentially its effectiveness.
