Helen Toner is Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Centre for the Governance of AI. Helen has written for Foreign Affairs and other outlets on the national security implications of AI and machine learning for China and the United States, and has testified before the U.S.-China Economic and Security Review Commission. She is a member of the board of directors of OpenAI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne.

U.S. and Chinese Military AI Purchases
August 2023
This data brief uses procurement records published by the U.S. Department of Defense and China’s People’s Liberation Army between April and November of 2020 to assess and, where appropriate, compare what each military is buying when it comes to artificial intelligence. We find that the two militaries are prioritizing similar application areas, especially intelligent and autonomous vehicles and AI applications for intelligence, surveillance, and reconnaissance.
How can we measure the reliability of machine learning systems? And do these measures really help us predict real-world performance? A recent study by the Stanford Intelligent Systems Laboratory, supported by CSET funding, provides new evidence that models may perform well on certain reliability metrics while still being unreliable in other ways. This blog post summarizes the study’s results, which suggest that policymakers and regulators should not think of “reliability” or “robustness” as a single, easy-to-measure property of an AI system. Instead, AI reliability requirements will need to consider which facets of reliability matter most for any given use case, and how those facets can be evaluated.
During her interview with ABC News Live, CSET's Helen Toner discussed the significant growth of artificial intelligence, with particular emphasis on its impact on national security.
In an op-ed published in TIME, CSET's Helen Toner discusses the challenges of understanding and interacting with chatbots powered by large language models, a form of artificial intelligence.
What exactly are the differences between generative AI, large language models, and foundation models? This post aims to clarify what each of these three terms means, how they overlap, and how they differ.
Will China Set Global Tech Standards?
March 2022
Director of Strategy Helen Toner explored China's tech standards in ChinaFile.
Problems of AI safety are the subject of increasing interest for engineers and policymakers alike. This brief uses the CSET Map of Science to investigate how research into three areas of AI safety — robustness, interpretability and reward learning — is progressing. It identifies eight research clusters that contain a significant amount of research relating to these three areas and describes trends and key papers for each of them.
Privacy is Power
January 2022
In an opinion piece for Foreign Affairs, a team of CSET authors and coauthors makes the case for using privacy-enhancing technologies to protect personal privacy.
This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” outlined three categories of AI safety issues—problems of robustness, assurance, and specification—and the subsequent two papers described problems of robustness and assurance, respectively. This paper introduces specification as a key element in designing modern machine learning systems that operate as intended.
‘Small Data’ Is Also Crucial for Machine Learning
October 2021
In their op-ed for Scientific American, Husanjot Chahal and Helen Toner argue that small data can also help drive AI breakthroughs.
Small Data’s Big AI Potential
September 2021
Conventional wisdom suggests that cutting-edge artificial intelligence is dependent on large volumes of data. An overemphasis on “big data” ignores the existence—and underestimates the potential—of several AI approaches that do not require massive labeled datasets. This issue brief is a primer on “small data” approaches to AI. It presents exploratory findings on the current and projected progress in scientific research across these approaches, which country leads, and the major sources of funding for this research.
AI Accidents: An Emerging Threat
July 2021
As modern machine learning systems become more widely used, the potential costs of malfunctions grow. This policy brief describes how trends we already see today—both in newly deployed artificial intelligence systems and in older technologies—show how damaging the AI accidents of the future could be. It describes a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce these risks.
This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.
This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.
Key Concepts in AI Safety: An Overview
March 2021
This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.
Why robustness is key to deploying AI
June 2020
"When it comes to high-stakes settings," write Jacob Steinhardt and CSET's Helen Toner, "machine learning is a risky choice." Their piece in Brookings TechStream offers rules of thumb for assessing risk of failure in ML.
CSET's Helen Toner highlights OpenAI's delayed release of GPT-2 and the increased attention it brought to publication norms in the AI research community in 2019. This piece was featured in the Shanghai Institute for Science of Science's "AI Governance in 2019" report.
CSET’s Helen Toner, Jeff Ding, and Elsa Kania testified before the U.S.-China Economic and Security Review Commission on U.S.-China Competition in Artificial Intelligence: Policy, Industry, and Strategy.
CSET Director of Strategy Helen Toner testified before the U.S.-China Economic and Security Review Commission at a hearing on "Technology, Trade, and Military-Civil Fusion: China’s Pursuit of Artificial Intelligence, New Materials, and New Energy."