Disinformation

How might AI impact the democratic process and how should policymakers respond? What steps can the media, AI providers, and social media companies take to help people find reliable information and recognize when content is AI-generated? On April 10, CSET Research Fellow Josh Goldstein and a panel of outside experts discussed these and other challenges.

In a new preprint paper, CSET's Josh A. Goldstein and the Stanford Internet Observatory's Renee DiResta explored the use of AI-generated imagery to drive Facebook engagement.

How Persuasive is AI-Generated Propaganda?

Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, Michael Tomz
| February 2024

Research participants who read propaganda generated by GPT-3 davinci (a large language model) were nearly as persuaded as those who read real propaganda from Iran or Russia, according to a new peer-reviewed study by Josh A. Goldstein and co-authors.

Deepfakes, Elections, and Shrinking the Liar’s Dividend

Brennan Center for Justice
| January 23, 2024

In an article published by the Brennan Center for Justice, Josh A. Goldstein and Andrew Lohn examine concerns about the spread of misleading deepfakes and the liar's dividend.

In a WIRED article discussing issues with Microsoft's AI chatbot providing misinformation, conspiracies, and outdated information in response to political queries, CSET's Josh A. Goldstein provided his expert insights.

Controlling Large Language Model Outputs: A Primer

Jessica Ji, Josh A. Goldstein, Andrew Lohn
| December 2023

Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.

In a KCBS Radio segment that explores the rapid rise of AI and its potential impact on the 2024 election, CSET's Josh Goldstein provides his expert insights.

Large language models (LLMs) could potentially be used by malicious actors to generate disinformation at scale. But how likely is this risk, and what economic incentives do propagandists actually face to turn to LLMs? New analysis uploaded to arXiv and summarized here suggests it is all but certain that a well-run human-machine team using existing LLMs (even open-source models that are not cutting edge) would save a propagandist money on content generation relative to a human-only operation.
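As a purely illustrative sketch of the kind of cost comparison the analysis describes, the snippet below models a propagandist's content-generation costs under a human-only workflow versus a human-machine workflow in which an LLM drafts text and a human reviews it. All parameter values are hypothetical assumptions for illustration, not figures from the paper.

```python
# Illustrative cost comparison for content generation.
# All numbers below are hypothetical placeholders, not figures from the analysis.

def human_only_cost(articles: int, cost_per_article: float) -> float:
    """Total cost when every article is written by a paid human author."""
    return articles * cost_per_article

def human_machine_cost(articles: int, tokens_per_article: int,
                       cost_per_1k_tokens: float,
                       review_cost_per_article: float) -> float:
    """Total cost when an LLM drafts each article and a human reviews/edits it."""
    generation = articles * (tokens_per_article / 1000) * cost_per_1k_tokens
    review = articles * review_cost_per_article
    return generation + review

if __name__ == "__main__":
    n = 1000  # hypothetical campaign size
    human_only = human_only_cost(n, cost_per_article=50.0)       # assumed writer fee
    hybrid = human_machine_cost(n, tokens_per_article=800,
                                cost_per_1k_tokens=0.002,        # assumed API price
                                review_cost_per_article=5.0)     # assumed editor time
    print(f"Human-only:    ${human_only:,.2f}")
    print(f"Human-machine: ${hybrid:,.2f}")
```

Under these assumed inputs the hybrid workflow is far cheaper per article; the qualitative conclusion in the analysis is that this gap persists across a wide range of plausible parameter values.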

In a Forbes article discussing the challenges posed by AI-generated content in the context of political campaigns and the upcoming presidential election, CSET's Josh A. Goldstein provided his expert take.

In an article published by NPR that discusses the accessibility and potential impact of generative artificial intelligence applications on political campaigns and public opinion, CSET's Micah Musser provided his expert insights.