Advancements in AI have led to the creation of large language models, such as OpenAI's GPT-3, that have generated a lot of buzz. These models can produce original text and adapt to a range of tasks. Commentators have suggested that these models could revolutionize content creation, but also that they could be used to automate propaganda campaigns. What do the threats from large language models look like in the disinformation space? Are they more bark than bite?
CSET Research Fellow Josh Goldstein hosted a panel conversation with experts from the private sector, government, and academia to discuss the threat from automated influence operations, as well as potential mitigations.
The panel was hosted as part of CyberScoop's CyberWeek, the nation's largest week-long cybersecurity festival, which focuses on digital threats, best practices, and the U.S. government's work to improve cyberspace.
Recording and Discussion
Participants
Josh A. Goldstein is a Research Fellow at Georgetown's Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Prior to joining CSET, he was a pre- and postdoctoral fellow at the Stanford Internet Observatory. His research has included investigating covert influence operations on social media platforms, studying the effects of foreign interference on democratic societies, and exploring how emerging technologies will impact the future of propaganda campaigns. Based on this work, he has briefed the U.S. Department of Defense, the Department of State, and senior technology journalists. His writing has appeared in outlets including Brookings, Lawfare, and Foreign Policy. He holds an MPhil and DPhil in International Relations from the University of Oxford, where he studied as a Clarendon Scholar, and an A.B. in Government from Harvard College.
Sarah Kreps is the John L. Wetherill Professor of Government, Adjunct Professor of Law, and Director of the Tech Policy Institute at Cornell University. She is also a Non-Resident Senior Fellow at the Brookings Institution and a life member of the Council on Foreign Relations. Her work lies at the intersection of technology, politics, and national security, and is the subject of five books and a range of articles in academic journals such as the New England Journal of Medicine, Science Advances, Vaccine, Journal of the American Medical Association (JAMA) Network Open, American Political Science Review, and Journal of Cybersecurity, policy journals such as Foreign Affairs, and media outlets such as CNN, the BBC, the New York Times, and the Washington Post. She holds a BA from Harvard University, an MSc from Oxford, and a PhD from Georgetown. From 1999 to 2003, she served as an active-duty officer in the United States Air Force.
Girish Sastry is a researcher on the Policy Research team at OpenAI, where he currently focuses on issues related to the security, misuse, and evaluation of AI systems. Prior to OpenAI, he spent time as a research engineer in machine learning at the University of Oxford and as a data scientist at various internet technology startups. He holds a BA in Computer Science from Yale University.
J.D. Maddox is an independent consultant serving as Chief Technology Advisor to the U.S. Global Engagement Center. J.D. is the CEO of Inventive Insights LLC, a small national security consultancy, and an adjunct professor of national security studies at George Mason University's Schar School. He is also a frequent author of national security commentaries. Prior to his role at Inventive Insights, J.D. served as a CIA branch chief, as Deputy Coordinator of the U.S. Global Engagement Center, as an advisor to the Secretary of Homeland Security, and as a U.S. Army Psychological Operations team leader.