Andrew Lohn is a Senior Fellow at Georgetown's Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Prior to joining CSET, he was an Information Scientist at the RAND Corporation, where he led research focusing mainly on cybersecurity and artificial intelligence. Before RAND, Andrew worked in materials science and nanotechnology at Sandia National Laboratories, NASA, Hewlett Packard Labs, and several startup companies. He has published in a variety of fields, and his work has been covered by MIT Technology Review, Gizmodo, Foreign Policy, and the BBC. He has a PhD in electrical engineering from UC Santa Cruz and a Bachelor's in Engineering from McMaster University.
Poison in the Well (June 2021)
Modern machine learning often relies on open-source datasets, pretrained models, and machine learning libraries from across the internet, but are those resources safe to use? Previously successful digital supply chain attacks against cyber infrastructure suggest the answer may be no. This report introduces policymakers to these emerging threats and provides recommendations for how to secure the machine learning supply chain.
Truth, Lies, and Automation (May 2021)
Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential for misuse in disinformation campaigns. A model like GPT-3 may help disinformation actors substantially reduce the work required to write disinformation while expanding its reach and, potentially, its effectiveness.
Hacking AI (December 2020)
Machine learning systems' vulnerabilities are pervasive, and hackers and adversaries can readily exploit them. Managing the resulting risks is too large a task for the technology community to handle alone. In this primer, Andrew Lohn writes that policymakers must understand the threats well enough to assess the dangers that the United States, its military and intelligence services, and its civilians face when they use machine learning.