Tim G. J. Rudner

Non-Resident AI/ML Fellow

Tim G. J. Rudner is a Non-Resident AI/ML Fellow at Georgetown’s Center for Security and Emerging Technology (CSET). He is currently completing his Ph.D. in Computer Science at the University of Oxford, where he conducts research on probabilistic machine learning, reinforcement learning, and AI safety. Previously, Tim worked at Amazon Research, the European Central Bank, and the European Space Agency’s Frontier Development Lab. He holds an M.Sc. in Statistics from the University of Oxford and a B.S. in Applied Mathematics and Economics from Yale University. Tim is also a Fellow of the German Academic Scholarship Foundation and a Rhodes Scholar.

Related Content

Analysis

AI for Military Decision-Making

March 2025

Artificial intelligence is reshaping military decision-making. This concise overview explores how AI-enabled systems can enhance situational awareness and accelerate critical operational decisions—even in high-pressure, dynamic environments. Yet, it also highlights the essential need for clear…

Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluate their explainability and interpretability claims in…

This paper is the fifth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure…

This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure…

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure…

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure…

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure…