Andrew Lohn is a Senior Fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Prior to joining CSET, he was an Information Scientist at the RAND Corporation, where he led research focusing mainly on cybersecurity and artificial intelligence. Before RAND, Andrew worked in materials science and nanotechnology at Sandia National Laboratories, NASA, Hewlett Packard Labs, and a few startup companies. He has published in a variety of fields, and his work has been covered by MIT Technology Review, Gizmodo, Foreign Policy, and the BBC. He holds a PhD in electrical engineering from UC Santa Cruz and a bachelor's degree in engineering from McMaster University.
Related Content
Artificial intelligence that makes news headlines, such as ChatGPT, typically runs in well-maintained data centers with an abundant supply of compute and power. However, these resources are far more limited on many systems in the real world, such as drones, satellites, or ground vehicles. As a result, the AI that can run onboard these devices will often be inferior to state-of-the-art models. That gap can limit their usefulness and heighten the need for additional safeguards in high-risk contexts. This issue brief contextualizes these challenges and provides policymakers with recommendations on how to engage with these technologies.
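As a rough illustration of why onboard models lag the state of the art, the sketch below compares the memory needed just to store model weights at several numeric precisions against a notional onboard budget. The parameter counts, precisions, and 4 GB budget are assumptions for illustration, not figures from the brief.

```python
# Memory needed just to hold model weights, by parameter count and precision.
# The 4 GB onboard budget and the model sizes are assumptions for illustration.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}
DEVICE_BUDGET_GB = 4  # hypothetical memory available on a drone or satellite

for params_billions in (1, 7, 70):
    for precision, nbytes in BYTES_PER_PARAM.items():
        gigabytes = params_billions * nbytes  # 1e9 params * bytes, over 1e9 bytes/GB
        verdict = "fits" if gigabytes <= DEVICE_BUDGET_GB else "too big"
        print(f"{params_billions:>3}B params @ {precision}: {gigabytes:6.1f} GB ({verdict})")
```

Even before accounting for activations, batching, or power draw, only the smallest or most aggressively quantized models fit within a modest onboard budget.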
Two CSET researchers are coauthors of a new multi-organization report on the safety of AI systems, led by OpenAI and the Berkeley Risk and Security Lab. The report, published on arXiv, identifies six confidence-building measures (CBMs) that AI labs could adopt to reduce hostility, prevent conflict escalation, and improve trust between parties with respect to foundation AI models.
Securing AI Makes for Safer AI
July 2023
Recent discussions of AI have focused on safety, reliability, and other risks. Lost in this debate is the real need to secure AI against malicious actors. This blog post applies lessons from traditional cybersecurity to emerging AI-model risks.
The current AI-for-cybersecurity paradigm focuses on detection using automated tools, but it has largely neglected holistic autonomous cyber defense systems, ones that can act without human tasking. That is poised to change: tools for training reinforcement learning-based AI agents to provide broader autonomous cybersecurity capabilities are proliferating. The resulting agents are still rudimentary and publications are few, but the current barriers are surmountable, and effective agents would be a substantial boon to society.
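To make the training approach concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy "defend the network" task. Everything in it (states, actions, transition probabilities, rewards) is invented for illustration; real autonomous cyber defense research uses far richer simulated networks and deep reinforcement learning rather than a three-state table.

```python
import random

STATES = ["clean", "probed", "compromised"]      # toy network conditions
ACTIONS = ["monitor", "patch", "isolate_host"]   # toy defensive actions

def step(state, action):
    """Hypothetical environment dynamics: return (next_state, reward)."""
    if state == "clean":
        # Attackers probe the clean network 30% of the time, whatever we do.
        return ("probed", -1) if random.random() < 0.3 else ("clean", 0)
    if state == "probed":
        if action == "patch":
            return ("clean", 5)
        # Unpatched probes escalate to compromise half the time.
        return ("compromised", -10) if random.random() < 0.5 else ("probed", -1)
    # state == "compromised": only isolating the host recovers the network.
    return ("clean", 2) if action == "isolate_host" else ("compromised", -10)

# Q-learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

state = "clean"
for _ in range(10_000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                     # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
    next_state, reward = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

# The learned policy should patch probed hosts and isolate compromised ones.
for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))
```

The gap between this toy and a deployable defender, namely realistic network state spaces, partial observability, and adversarial attackers, is exactly where the surmountable barriers lie.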
CSET's Andrew Lohn and Joshua A. Goldstein share their insights on the difficulties of identifying AI-generated text in disinformation campaigns in an op-ed for Lawfare.
Progress in artificial intelligence (AI) depends on talented researchers, well-designed algorithms, quality datasets, and powerful hardware. The relative importance of these factors is often debated, with many recent “notable” models requiring massive expenditures on advanced hardware. But how important is computational power for AI progress in general? This data brief explores the results of a survey of more than 400 AI researchers to evaluate the importance and distribution of computational needs.
Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet significant research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations ameliorate them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts to help answer these questions.
Funding and priorities for technology development today determine the terrain for digital battles tomorrow, and they provide the arsenals for both attackers and defenders. Unfortunately, researchers and strategists disagree on which technologies will ultimately be most beneficial and which will cause more harm than good. This report provides three examples showing that, while the future of technology is impossible to predict with certainty, there is enough empirical data and mathematical theory to have these debates with more rigor.
Andrew Lohn’s Testimony Before the House Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation
June 2022
CSET Senior Fellow Andrew Lohn testified before the House of Representatives Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation at a hearing on "Securing the Future: Harnessing the Potential of Emerging Technologies While Mitigating Security Risks." Lohn discussed the application of AI systems in cybersecurity and AI’s vulnerabilities.
Andrew Lohn’s Testimony before House of Representatives Science, Space, and Technology Subcommittees
May 2022
CSET Senior Fellow Andrew Lohn testified before the House of Representatives Science, Space, and Technology Subcommittee on Investigations and Oversight and Subcommittee on Research and Technology at a hearing on "Securing the Digital Commons: Open-Source Software Cybersecurity." Lohn discussed how the United States can maximize sharing within the artificial intelligence community while reducing risks to the AI supply chain.
Andrew Lohn’s Testimony before the Senate Armed Services Subcommittee on Cybersecurity
May 2022
CSET Senior Fellow Andrew Lohn testified before the U.S. Senate Armed Services Subcommittee on Cybersecurity at a hearing on artificial intelligence applications to operations in cyberspace. Lohn discussed AI's capabilities and vulnerabilities in both cyber defense and offense.
Like traditional software, vulnerabilities in machine learning software can lead to sabotage or information leakages. Also like traditional software, sharing information about vulnerabilities helps defenders protect their systems and helps attackers exploit them. This brief examines some of the key differences between vulnerabilities in traditional and machine learning systems and how those differences can affect the vulnerability disclosure and remediation processes.
Between 2012 and 2018, the amount of computing power used by record-breaking artificial intelligence models doubled every 3.4 months. Even with money pouring into the AI field, this trendline is unsustainable. Because of cost, hardware availability, and engineering difficulties, the next decade of AI can't rely exclusively on applying more and more computing power to drive further progress.
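For a sense of scale, the arithmetic below compounds that 3.4-month doubling time over a year, over the 2012-2018 window, and over a further decade. The horizons are illustrative, and the figures are simple extrapolations rather than measurements from the brief.

```python
# Back-of-the-envelope extrapolation of a 3.4-month compute doubling time.
DOUBLING_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """Multiplier on compute after `months`, doubling every 3.4 months."""
    return 2 ** (months / DOUBLING_MONTHS)

print(f"One year:          {growth_factor(12):,.1f}x")   # roughly 11.5x
print(f"2012-2018 (72 mo): {growth_factor(72):,.0f}x")   # roughly 2.4 million x if sustained
print(f"Another decade:    {growth_factor(120):.1e}x")   # roughly 4e10 x, clearly unsustainable
```

No plausible budget or chip supply grows by tens of billions of times in a decade, which is why the trend must bend.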
Modern machine learning often relies on open-source datasets, pretrained models, and machine learning libraries from across the internet, but are those resources safe to use? Previously successful digital supply chain attacks against cyber infrastructure suggest the answer may be no. This report introduces policymakers to these emerging threats and provides recommendations for how to secure the machine learning supply chain.
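As one concrete flavor of defense, the sketch below shows hash pinning: refusing to deserialize a pretrained model unless it matches a cryptographic hash published through a trusted channel. This is generic supply chain hygiene offered for illustration, not a recommendation quoted from the report; the demo file is created locally to stand in for a downloaded artifact.

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=8192):
    """Stream the file through SHA-256 so large model files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_if_trusted(path, pinned_hash):
    """Refuse to deserialize the artifact unless its hash matches the pin."""
    actual = sha256_of(path)
    if actual != pinned_hash:
        raise ValueError(f"hash mismatch: expected {pinned_hash}, got {actual}")
    # Only now would it be safer to deserialize, e.g. with torch.load(path).
    with open(path, "rb") as f:
        return f.read()

# Demo: a throwaway local file stands in for a downloaded pretrained model.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend these bytes are model weights")
    path = f.name

pinned = sha256_of(path)  # in practice, published out-of-band by the model's author
print(len(load_model_if_trusted(path, pinned)), "bytes loaded")  # matches: loads
try:
    load_model_if_trusted(path, "0" * 64)                        # mismatch: rejected
except ValueError as err:
    print("rejected:", err)
os.remove(path)
```

Hash pinning only helps if the pinned value itself comes from a source the attacker cannot tamper with, which is part of why securing the whole supply chain, not any single link, matters.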
Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.
Machine learning systems’ vulnerabilities are pervasive, and hackers and adversaries can easily exploit them. Managing the risks is too large a task for the technology community to handle alone. In this primer, Andrew Lohn writes that policymakers must understand the threats well enough to assess the dangers that the United States, its military and intelligence services, and its civilians face when they use machine learning.