Executive Summary
Many observers wonder about the potential for artificial intelligence to pose catastrophic risks, but, fortunately, there is little empirical evidence on which to base those assessments. Absent such evidence, experts often use their best guesses to estimate the probability of an AI-induced catastrophe or apocalypse (i.e., p-doom). Although subjective expert assessment may be the best evidence available, policymakers and risk analysts are not restricted to asking for probabilities. This brief promotes additional tools for handling uncertainty in AI risk assessments.
Imagine you are asked to roll a six-sided die but you have seen only three of its sides: one side has a star etched on it, two sides are blank, and the other three are unknown. Predicting the outcome involves part randomness and part ignorance. If you are asked for the probability of a star, you might note that one of the three sides you have seen has a star and answer 1/3. Asked how confident you can be that a star will come up, you know of only one side with a star, so 1/6 would be a reasonable answer. Asked whether a star could come up, you might note that four of the six sides could have a star, so 4/6 would be a reasonable answer. These questions appear similar, but their differences matter if you are a decision-maker who cares about stars.
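To make the dice arithmetic concrete, the three answers can be computed directly. The following is a minimal Python sketch; the list representation of the die is an illustrative assumption, not part of the brief:

```python
# Six-sided die: three sides observed (one star, two blank); three unseen.
sides = ["star", "blank", "blank", "unknown", "unknown", "unknown"]

known = [s for s in sides if s != "unknown"]

# Probability estimate: extrapolate from the observed sides (1 of 3).
probability = known.count("star") / len(known)

# Belief: only the side known to carry a star counts as evidence (1 of 6).
belief = sides.count("star") / len(sides)

# Plausibility: every side that could carry a star, i.e., the known star
# plus the three unknown sides (4 of 6).
plausibility = (sides.count("star") + sides.count("unknown")) / len(sides)

print(f"probability={probability:.3f}  belief={belief:.3f}  "
      f"plausibility={plausibility:.3f}")
```

Running the sketch prints probability 0.333, Belief 0.167, and Plausibility 0.667, matching the 1/3, 1/6, and 4/6 answers above.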
In AI risk, unlike in dice rolls, ignorance, not randomness, is the dominant form of uncertainty, so the best techniques for handling it are not always probabilistic. Alternative mathematical techniques exist that are just as rigorous as probability. They also use familiar terms from common discourse, such as Belief and Plausibility, allowing them to become part of the popular AI risk vernacular and to be communicated easily to decision-makers.
The way to think of the mathematical term Belief is that it expresses how confident one can be based on the evidence. For instance, the evidence allows us to hold a 1/6 degree of belief that the die will come up with a star. Plausibility expresses what is left after removing the counter-evidence. Two of the six sides are known to be blank and so cannot show a star, so the Plausibility of a star is 4/6. The gap between Belief and Plausibility is due to ignorance. Without ignorance, Belief and Plausibility become the same number, equal to the probability.
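For readers who prefer notation, the same relationships can be stated compactly. This is a sketch in standard evidence-theory notation, where Bel and Pl abbreviate Belief and Plausibility:

```latex
% Plausibility is what remains after removing the counter-evidence:
\mathrm{Pl}(A) = 1 - \mathrm{Bel}(\lnot A)

% Dice example: two of six sides are known non-stars, so
\mathrm{Bel}(\mathrm{star}) = \tfrac{1}{6}, \qquad
\mathrm{Pl}(\mathrm{star}) = 1 - \tfrac{2}{6} = \tfrac{4}{6}

% The gap Pl - Bel measures ignorance; without ignorance the two
% coincide and equal the probability:
\mathrm{Bel}(A) \le P(A) \le \mathrm{Pl}(A)
```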
This issue brief explains why analysts and decision-makers need alternatives to probability for handling the uncertainty in AI risk. It explains Belief, Plausibility, and how they relate to probability in an intuitively accessible way. And it demonstrates how to calculate Belief and Plausibility in the context of expert assessments of AI risk.
At a high level, enacting the change sought by this brief is easy. Policymakers only need to ask two additional questions when discussing AI risks. The first is either "How certain are you that this risk will occur?" or, even better, "How strong is the evidence supporting this hypothetical outcome?" The second is "How certain are you that this risk will not occur?" or "How strong is the evidence against this hypothetical outcome?"
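As a rough illustration of how answers to those two questions translate into Belief and Plausibility, consider the following sketch in Python. The numbers are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical analyst answers (illustrative numbers only):
# Q1: How strong is the evidence FOR the hypothetical outcome?
belief_for = 0.20      # Belief that the risk occurs

# Q2: How strong is the evidence AGAINST the hypothetical outcome?
belief_against = 0.50  # Belief that the risk does not occur

# Plausibility of the risk: what remains after removing counter-evidence.
plausibility = 1 - belief_against      # 0.50

# The gap between Plausibility and Belief measures ignorance.
ignorance = plausibility - belief_for  # 0.30

print(f"Belief = {belief_for:.2f}, Plausibility = {plausibility:.2f}, "
      f"ignorance gap = {ignorance:.2f}")
```

In this hypothetical, the analyst's evidence supports a Belief of 0.20 that the risk occurs and a Plausibility of 0.50, with the 0.30 gap flagging how much of the assessment rests on ignorance rather than evidence.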
Asking those two additional questions will force analysts to confront their sources of uncertainty more directly and drive analysts to expand their risk analysis toolbox. Answering those questions, and communicating those answers, is also a low lift because the analytical techniques already exist and because the vocabulary is already familiar. This brief provides an introduction to those techniques and vocabulary.