
Katerina Sedova


Katerina Sedova was a Research Fellow at Georgetown's Center for Security and Emerging Technology (CSET), where she worked on the CyberAI Project. Most recently, she advised Sen. Maggie Hassan on cybersecurity and technology policy issues and drafted key legislation as a TechCongress fellow with the Senate Homeland Security and Governmental Affairs Committee. Previously, she published research and advised projects on disinformation, state-sponsored information operations, and OSINT for the NATO Strategic Communications Centre of Excellence, the Department of State, and the Department of Defense. She began her career at Microsoft, where she led engineering teams working on the security, networking, and performance components of its internet browsing platform, and was named an inventor on multiple patents awarded to Microsoft. She holds a B.A. in Political Science from California State University and an M.S. in Foreign Service from Georgetown University, where she focused on strategic competition and engagement in the cyber domain, Russia, Ukraine, and NATO. She speaks Ukrainian and Russian.

Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.

Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.

Chinese and Russian government officials are keen to publicize their countries’ strategic partnership in emerging technologies, particularly artificial intelligence. This report evaluates the scope of cooperation between China and Russia as well as relative trends over time in two key metrics of AI development: research publications and investment. The findings expose gaps between aspirations and reality, bringing greater accuracy and nuance to current assessments of Sino-Russian tech cooperation.

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to assess its potential for misuse in disinformation operations. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.