Reports

Beyond P(doom) for AI Risk: Quantifying Uncertainty Without Probability

Andrew Lohn

May 2026

As artificial intelligence introduces new risks, some potentially catastrophic or even existential, there is little data or detailed theory with which to assess them. Policymakers often resort to experts' best guesses for the probability of doom, but probability is not always the most appropriate tool, especially for the kinds of uncertainty involved in AI risk. This report provides a brief introduction to Belief and Plausibility, an alternative approach that is mathematically rigorous, uses familiar vocabulary, and requires policymakers to ask only two simple questions.
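Belief and Plausibility come from Dempster-Shafer evidence theory, in which weight is assigned to *sets* of outcomes rather than to single outcomes, so an expert need not split uncommitted evidence into precise probabilities. As a minimal sketch (the outcome labels and mass values below are illustrative assumptions, not figures from the report): Belief in a hypothesis sums the mass of evidence wholly supporting it, while Plausibility sums the mass of evidence not contradicting it.

```python
# Illustrative Dempster-Shafer Belief/Plausibility calculation.
# The mass function assigns weight to sets of outcomes; mass on the
# full set {"catastrophe", "no_catastrophe"} represents evidence that
# cannot distinguish between the outcomes (uncommitted evidence).
# All labels and numbers here are hypothetical.
masses = {
    frozenset({"catastrophe"}): 0.1,
    frozenset({"no_catastrophe"}): 0.4,
    frozenset({"catastrophe", "no_catastrophe"}): 0.5,  # uncommitted
}

def belief(hypothesis, masses):
    """Bel(A): total mass on subsets of A (evidence *for* A)."""
    return sum(m for s, m in masses.items() if s <= hypothesis)

def plausibility(hypothesis, masses):
    """Pl(A): total mass on sets overlapping A (evidence not *against* A)."""
    return sum(m for s, m in masses.items() if s & hypothesis)

A = frozenset({"catastrophe"})
print(belief(A, masses), plausibility(A, masses))
```

With these example masses, Belief in catastrophe is 0.1 and Plausibility is 0.6; the gap between the two numbers expresses how much of the evidence is uncommitted, which a single point probability cannot convey.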

