Cutting-edge text generation systems have demonstrated enormous capability, producing op-eds, stories, and more. These systems' potential for disinformation has often been theorized and is the cause of significant concern. In "Truth, Lies, and Automation," CSET considered the degree to which GPT-3—the leading AI text generation system—can write credible content for disinformation campaigns. Now, join the authors of this seminal report for a discussion of its findings, implications, and recommendations. From churning out climate change denial tweets to persuading Americans to change their minds on global issues such as sanctions on China, the authors discuss what the system can and cannot do—and how we might guard against the coming risks of automated disinformation.
Recording and Discussion
Andrew Lohn is a Senior Fellow at Georgetown's Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Prior to joining CSET, he was an Information Scientist at the RAND Corporation, where he led research focusing mainly on cybersecurity and artificial intelligence. Prior to RAND, Andrew worked in materials science and nanotechnology at Sandia National Laboratories, NASA, Hewlett Packard Labs, and a few startup companies. He has published in a variety of fields, and his work has been covered in MIT Technology Review, Gizmodo, Foreign Policy, and the BBC. He has a PhD in electrical engineering from UC Santa Cruz and a bachelor's degree in engineering from McMaster University.
Katerina Sedova is a Research Fellow at Georgetown's Center for Security and Emerging Technology (CSET), where she works on the CyberAI Project. Most recently, she advised Sen. Maggie Hassan on cybersecurity and technology policy issues and drafted key legislation as a TechCongress fellow with the Senate Homeland Security and Governmental Affairs Committee. Previously, she published research and advised projects on disinformation, state-sponsored information operations, and OSINT for the NATO Strategic Communications Centre of Excellence, the Department of State, and the Department of Defense. She started her career at Microsoft, where she led engineering teams working on the security, networking, and performance components of the internet browsing platform, and is a named inventor on multiple patents awarded to Microsoft. She holds a B.A. in Political Science from California State University and an M.S. in Foreign Service from Georgetown University, where she focused on strategic competition and engagement in the cyber domain, Russia, Ukraine, and NATO. She speaks Ukrainian and Russian.
Micah Musser is a Research Analyst at Georgetown's Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Previously, he worked as a Research Assistant at the Berkley Center for Religion, Peace, and World Affairs. He graduated from Georgetown University's College of Arts and Sciences with a B.A. (summa cum laude) in Government, focusing on political theory.
Girish Sastry is a researcher on the Policy Research team at OpenAI, where he currently focuses on issues related to the security, misuse, and evaluation of AI systems. Prior to OpenAI, he spent time as a machine learning research engineer at the University of Oxford and as a data scientist at various internet technology startups. He holds a B.A. in Computer Science from Yale University.