How Much Money Could Large Language Models Save Propagandists?

Micah Musser

August 9, 2023

Large language models (LLMs) could be used by malicious actors to generate disinformation at scale. But how likely is this risk, and what economic incentives do propagandists actually face to turn to LLMs? New analysis, uploaded to arXiv and summarized here, suggests that a well-run human-machine team using existing LLMs (even open-source models that are not cutting edge) would almost certainly save a propagandist money on content generation relative to a human-only operation.

Read arXiv Preprint
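
To make the economic claim concrete, the comparison can be sketched as simple arithmetic: a human-only operation pays writers for every post, while a human-machine team pays for model inference plus human review of the machine-written drafts. The short Python sketch below illustrates that arithmetic with entirely hypothetical wage, productivity, and API-pricing figures; it is not the cost model or the parameters used in the preprint.

# Illustrative cost comparison: a human-only operation pays writers to draft
# every post, while a human-machine team pays for LLM inference plus human
# review. All numbers below are hypothetical placeholders, not figures from
# the preprint.

def human_only_cost(num_posts, posts_per_hour, hourly_wage):
    """Total cost when human writers draft every post from scratch."""
    return (num_posts / posts_per_hour) * hourly_wage

def human_machine_cost(num_posts, tokens_per_post, cost_per_1k_tokens,
                       reviews_per_hour, hourly_wage):
    """Total cost when an LLM drafts posts and humans only review and edit them."""
    inference = num_posts * (tokens_per_post / 1000) * cost_per_1k_tokens
    review = (num_posts / reviews_per_hour) * hourly_wage
    return inference + review

if __name__ == "__main__":
    N = 10_000  # posts per month (hypothetical campaign size)
    human = human_only_cost(N, posts_per_hour=4, hourly_wage=20.0)
    hybrid = human_machine_cost(N, tokens_per_post=500,
                                cost_per_1k_tokens=0.002,  # placeholder API price
                                reviews_per_hour=20, hourly_wage=20.0)
    print(f"Human-only:    ${human:,.2f}")
    print(f"Human-machine: ${hybrid:,.2f}")
    print(f"Savings:       ${human - hybrid:,.2f}")

Under placeholder numbers like these, the hybrid operation comes out cheaper whenever reviewing a machine draft takes meaningfully less labor than writing a post from scratch and per-post inference costs stay small, which is the general intuition behind the finding summarized above.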

Related Content

CSET has received a lot of questions about LLMs and their implications. But these questions and discussions often miss some basics about how LLMs work. In this blog post, we ask CSET’s NLP Engineer, James Dunham, to help us explain LLMs in plain English.

Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near human levels. In a new report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and they provide a framework for assessing potential mitigation strategies.

CSET's Andrew Lohn and Josh A. Goldstein share their insights on the difficulties of identifying AI-generated text in disinformation campaigns in their op-ed in Lawfare.

CSET's Josh A. Goldstein and OpenAI's Girish Sastry co-authored an insightful article on language models and disinformation that was published in Foreign Affairs.

Analysis

Truth, Lies, and Automation

May 2021

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.