Josh A. Goldstein is a Research Fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Prior to joining CSET, he was a pre- and postdoctoral fellow at the Stanford Internet Observatory. His research has included investigating covert influence operations on social media platforms, studying the effects of foreign interference on democratic societies, and exploring how emerging technologies will impact the future of propaganda campaigns. Based on this work, he has briefed the U.S. Department of Defense, the Department of State, and senior technology journalists. He has been published in outlets including Brookings, Lawfare, and Foreign Policy. He holds an MPhil and DPhil in International Relations from the University of Oxford, where he studied as a Clarendon Scholar, and an A.B. in Government from Harvard College.
Related Content
Old Meets New in Online Influence
December 2024: In his Tech Policy Press op-ed, Josh A. Goldstein discusses Meta's quarterly threat report, which highlights the discovery of five networks of fake accounts, one each from Moldova, Iran, and Lebanon and two from India, attempting to manipulate…
Russia’s Global Information Operations Have Grown Up
October 2024: In their op-ed in Foreign Policy, Josh A. Goldstein and Renée DiResta discuss recent efforts by the U.S. government to disrupt Russian influence operations, highlighting how Russia uses fake domains, media outlets, and social media…
In their op-ed in MIT Technology Review, Josh A. Goldstein and Renée DiResta analyze OpenAI's first report on the misuse of its generative AI.
In an article published by NPR that discusses the surge in AI-generated spam on Facebook and other social media platforms, CSET's Josh A. Goldstein provided his expert insights.
In an article published by the Financial Times exploring the rapid rise of AI-generated conspiracy theories and spam content on social media platforms, CSET's Josh A. Goldstein provided his expert insights.
How Spammers, Scammers and Creators Leverage AI-Generated Images on Facebook for Audience Growth
March 2024: In a new preprint paper, CSET's Josh A. Goldstein and the Stanford Internet Observatory's Renée DiResta explored the use of AI-generated imagery to drive Facebook engagement.
Research participants who read propaganda generated by GPT-3 davinci (a large language model) were nearly as persuaded as those who read real propaganda from Iran or Russia, according to a new peer-reviewed study by Josh A. Goldstein and co-authors.
In an article published by the Brennan Center for Justice, Josh A. Goldstein and Andrew Lohn delve into concerns about the spread of misleading deepfakes and the liar's dividend.
Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to…
In a KCBS Radio segment exploring the rapid rise of AI and its potential impact on the 2024 election, CSET's Josh A. Goldstein provides his expert insights.
In a commentary published by Nature, Josh A. Goldstein and Zachary Arnold, along with co-authors, explore how artificial intelligence, including large language models like ChatGPT, can enhance science advice for policymaking.
In an article published by The New York Times that discusses the increasing use of artificial intelligence in political campaigns and the concerns it raises regarding disinformation and manipulation, CSET's Josh A. Goldstein provides his expert insights.
CSET's Andrew Lohn and Josh A. Goldstein share their insights on the difficulties of identifying AI-generated text in disinformation campaigns in their op-ed in Lawfare.
The Coming Age of AI-Powered Propaganda
April 2023: CSET's Josh A. Goldstein and OpenAI's Girish Sastry co-authored an insightful article on language models and disinformation that was published in Foreign Affairs.
Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk
January 2023: Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near-human levels. In a new report, authors at CSET, OpenAI,…
In a study for Harvard's Misinformation Review, Research Fellow Josh A. Goldstein looks at how tactics used in political influence operations are repurposed for commercial ends.