Disinformation

Controlling Large Language Model Outputs: A Primer

Jessica Ji, Josh A. Goldstein, and Andrew Lohn
| December 2023

Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.

In a KCBS Radio segment that explores the rapid rise of AI and its potential impact on the 2024 election, CSET's Josh Goldstein provides his expert insights.

Large language models (LLMs) could potentially be used by malicious actors to generate disinformation at scale. But how likely is this risk, and what economic incentives do propagandists actually face to turn to LLMs? New analysis uploaded to arXiv and summarized here suggests that a well-run human-machine team using existing LLMs (even open-source ones that are not cutting edge) would almost certainly save a propagandist money on content generation relative to a human-only operation.

In a Forbes article discussing the challenges posed by AI-generated content in the context of political campaigns and the upcoming presidential election, CSET's Josh A. Goldstein provided his expert take.

In an article published by NPR that discusses the accessibility and potential impact of generative artificial intelligence applications on political campaigns and public opinion, CSET's Micah Musser provided his expert insights.

CSET's Andrew Lohn and Josh A. Goldstein share their insights on the difficulties of identifying AI-generated text in disinformation campaigns in their op-ed in Lawfare.

CSET's Jenny Jun was featured in the Atlantic Council's The 5x5, a series that showcases five experts answering five questions on a common theme, trend, or current event in the world of cyber.

A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in collaboration with OpenAI and Stanford Internet Observatory was cited in an article published by Forbes.

The Coming Age of AI-Powered Propaganda

Foreign Affairs
| April 7, 2023

CSET's Josh A. Goldstein and OpenAI's Girish Sastry co-authored an insightful article on language models and disinformation that was published in Foreign Affairs.

NPR published an article featuring expert insight from CSET's Josh Goldstein.