
Tim G. J. Rudner

Non-Resident AI/ML Fellow

Tim G. J. Rudner is an Assistant Professor of Statistical Sciences at the University of Toronto, a Faculty Member at the Vector Institute for Artificial Intelligence, and a Faculty Affiliate at the Schwartz Reisman Institute for Technology and Society. His research interests span probabilistic machine learning, AI safety, and AI governance. Tim is also a Junior Research Fellow of Trinity College at the University of Cambridge, an Associate Member of the Department of Computer Science at the University of Oxford, a Faculty Associate at Harvard University’s Berkman Klein Center for Internet & Society, and an AI Fellow at Georgetown University’s Center for Security and Emerging Technology. Before joining the University of Toronto, he was an Assistant Professor and Faculty Fellow at New York University. Tim holds a PhD in Computer Science from the University of Oxford, where he was a Qualcomm Innovation Fellow and a Rhodes Scholar.

Related Content

Reports

AI for Military Decision-Making

March 2025

Artificial intelligence is reshaping military decision-making. This concise overview explores how AI-enabled systems can enhance situational awareness and accelerate critical operational decisions, even in high-pressure, dynamic environments. Yet it also highlights the essential need for clear…

Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluate their explainability and interpretability claims in…

This paper is the fifth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure…

This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure…

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure…

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure…

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure…