
Helen Toner

Director of Strategy and Foundational Research Grants

Helen Toner is Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI. Helen has written for Foreign Affairs and other outlets on the national security implications of AI and machine learning for China and the United States, and has testified before the U.S.-China Economic and Security Review Commission. She is a member of the board of directors of OpenAI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne.

Director of Strategy Helen Toner explored China's tech standards in ChinaFile.

Problems of AI safety are the subject of increasing interest for engineers and policymakers alike. This brief uses the CSET Map of Science to investigate how research into three areas of AI safety — robustness, interpretability, and reward learning — is progressing. It identifies eight research clusters that contain a significant amount of research relating to these three areas and describes trends and key papers for each of them.

Privacy is Power

January 2022

In an opinion piece for Foreign Affairs, a team of CSET authors and outside coauthors makes the case for the use of privacy-enhancing technologies to protect personal privacy.

This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” outlined three categories of AI safety issues—problems of robustness, assurance, and specification—and the subsequent two papers described problems of robustness and assurance, respectively. This paper introduces specification as a key element in designing modern machine learning systems that operate as intended.

In their op-ed for Scientific American, Husanjot Chahal and Helen Toner argue that "small data" approaches can drive AI breakthroughs.

Conventional wisdom suggests that cutting-edge artificial intelligence is dependent on large volumes of data. An overemphasis on “big data” ignores the existence—and underestimates the potential—of several AI approaches that do not require massive labeled datasets. This issue brief is a primer on “small data” approaches to AI. It presents exploratory findings on the current and projected progress in scientific research across these approaches, which country leads, and the major sources of funding for this research.

As modern machine learning systems become more widely used, the potential costs of malfunctions grow. This policy brief describes how trends we already see today—both in newly deployed artificial intelligence systems and in older technologies—show how damaging the AI accidents of the future could be. It describes a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce these risks.

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.

"When it comes to high-stakes settings," write Jacob Steinhardt and CSET's Helen Toner, "machine learning is a risky choice." Their piece in Brookings TechStream offers rules of thumb for assessing risk of failure in ML.

CSET's Helen Toner highlights OpenAI's delayed release of GPT-2 and the increased attention it brought to publication norms in the AI research community in 2019. This piece was featured in the Shanghai Institute for Science of Science's "AI Governance in 2019" report.

CSET’s Helen Toner, Jeff Ding, and Elsa Kania testified before the U.S.-China Economic and Security Review Commission on "U.S.-China Competition in Artificial Intelligence: Policy, Industry, and Strategy."

CSET Director of Strategy Helen Toner testified before the U.S.-China Economic and Security Review Commission in "Technology, Trade, and Military-Civil Fusion: China’s Pursuit of Artificial Intelligence, New Materials, and New Energy."