Josh A. Goldstein is a Research Fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Prior to joining CSET, he was a pre- and postdoctoral fellow at the Stanford Internet Observatory. His research has included investigating covert influence operations on social media platforms, studying the effects of foreign interference on democratic societies, and exploring how emerging technologies will impact the future of propaganda campaigns. Based on this work, he has briefed the U.S. Department of Defense, the Department of State, and senior technology journalists. His writing has been published in outlets including Brookings, Lawfare, and Foreign Policy. He holds an MPhil and a DPhil in International Relations from the University of Oxford, where he studied as a Clarendon Scholar, and an A.B. in Government from Harvard College.
Related Content
In a KCBS Radio segment that explores the rapid rise of AI and its potential impact on the 2024 election, CSET's Josh Goldstein provides his expert insights.
In a commentary published by Nature, Josh A. Goldstein and Zachary Arnold, along with co-authors, explore how artificial intelligence, including large language models like ChatGPT, can enhance science advice for policymaking.
In an article published by The New York Times that discusses the increasing use of artificial intelligence in political campaigns and the concerns it raises regarding disinformation and manipulation, CSET's Josh A. Goldstein provides his…
CSET's Andrew Lohn and Josh A. Goldstein share their insights on the difficulty of identifying AI-generated text in disinformation campaigns in an op-ed for Lawfare.
The Coming Age of AI-Powered Propaganda
April 2023
CSET's Josh A. Goldstein and OpenAI's Girish Sastry co-authored an insightful article on language models and disinformation that was published in Foreign Affairs.
Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk
January 2023
Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near-human levels. In a new report, authors at CSET, OpenAI,…
In a study for Harvard's Misinformation Review, Research Fellow Josh Goldstein examines how tactics used in political influence operations are repurposed for commercial ends.