Ord et al.’s Probing the Improbable
Toby Ord, Rafaela Hillerbrand, Anders Sandberg Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes
When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect.
[R]isk analysis commonly falls back on the distinction between model and parameter uncertainty. We argue that this dichotomy is not well suited for incorporating information about the reliability of the theories involved in the risk assessment. … [W]e therefore propose a three-fold distinction between an argument’s theory, its model, and its calculations.
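The Bayesian point quoted above can be sketched in a few lines. This is my own illustrative arithmetic, not the paper’s worked example; all the numbers are invented for the sake of the sketch:

```python
# Sketch of the adjustment: an expert's figure P(X | argument sound) is
# combined with the chance that the argument itself is flawed.
# All numbers below are purely illustrative assumptions.

def adjusted_probability(p_x_given_sound, p_flaw, p_x_given_flaw):
    """Total probability of outcome X, allowing for argument failure."""
    p_sound = 1.0 - p_flaw
    return p_x_given_sound * p_sound + p_x_given_flaw * p_flaw

# Suppose an expert claims a one-in-a-billion catastrophe risk, but there
# is a 1-in-1000 chance the argument is flawed, and a flawed argument
# leaves us with (say) a 1-in-1000 risk.
p = adjusted_probability(1e-9, 1e-3, 1e-3)
# The flaw term dominates: p is roughly 1e-6, about a thousand times
# the expert's headline figure.
```

The point of the sketch is that once P(flaw) × P(X | flaw) exceeds the expert’s estimate, the headline figure is essentially irrelevant.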
There are a number of papers warning that when experts give extremely low probability estimates, they are normally implicitly assuming that their understanding of the situation is complete and correct, and that we should therefore treat their estimates with caution. This paper shows how to make the necessary adjustment using a straightforward Bayesian argument. I have two quibbles:
- Some people who are employed as ‘experts’ routinely make this adjustment, and would seem to be better placed than ‘the man in the street’ to do so. I think that this should be encouraged, and that experts who don’t appreciate the limitations of their own knowledge, beliefs and estimates should be ‘re-educated’.
- The theory-model-calculation distinction has some advantages over the more common distinction between model selection and model application, but could, I think, do with further development.
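One direction such development might take: under the (optimistic, and here purely assumed) simplification that flaws at the three levels occur independently, the overall soundness of an argument is the product of the per-level soundness probabilities. A minimal sketch, with invented numbers:

```python
# Hedged sketch of the theory/model/calculation split, assuming
# (questionably) that flaws at the three levels are independent.

def p_argument_sound(p_theory_flaw, p_model_flaw, p_calc_flaw):
    """Probability the whole argument is sound, given per-level flaw rates."""
    return (1 - p_theory_flaw) * (1 - p_model_flaw) * (1 - p_calc_flaw)

# Illustrative guesses: theory flaws rare, modelling flaws less so,
# calculation slips relatively common.
p_flaw = 1 - p_argument_sound(1e-4, 1e-3, 1e-2)
# The largest per-level flaw rate dominates: p_flaw is about 0.011.
```

Even with generous per-level figures, the combined flaw probability sits around a percent, which swamps any 1e-9-style estimate.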
The dependence on theory etc. is brought out by Jack Good’s notation P(A|B:C), where C denotes the context. The general point is that errors in determining a probability are often due to the use of an inappropriate conception of the context.
It seems to me that experts do often have conceptions that bundle together the estimation of parameters, the selection of a model from those available, and the general theory. But another way of looking at it (following Good) is that:
- Some aspects of the conception will have a straightforward ‘space’ of possibilities, and the parameters within this space will have been adequately determined by experience, preferably by experimentation.
- Some aspects will be more discrete, with less obvious alternatives. Nevertheless they will be reasonably ‘predictive’, and these predictions will have been checked. (I.e., subjected to Popper’s falsification.)
- Other aspects have not been found wanting, but have not really been tested. (E.g., the view that we have a ‘new economy’ that will never crash.)
These have different impacts on how we should interpret naive estimates. For the third kind of aspect, at least, it doesn’t make much sense to consider P(True); what matters is P(It matters). This will depend on how the estimate is to be acted upon. (E.g., if everyone acts as if there were no risk, the risk may be increased.) Moreover:
- The argument in the paper seems to lead to notions of imprecise probabilities (e.g., as in Boole or Keynes). Any precise probability would often seem arbitrary.
- As the paper says, just because a risk assessment may be wrong does not mean that the risk is actually higher. But it does seem reasonable to act as if it were. This is reminiscent of the Ellsberg ‘paradox’.
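The imprecise-probability reading of the first point can be made concrete: if we can only bound the chance of a flaw rather than pin it down, the adjusted estimate becomes an interval rather than a point. A minimal sketch, with ranges that are my own assumptions:

```python
# If P(flaw) and P(X | flaw) are only known to lie in ranges, the adjusted
# P(X) is an interval. The expression is linear in each unknown, so the
# extremes occur at the range endpoints enumerated below.

def adjusted_interval(p_x_given_sound, p_flaw_range, p_x_given_flaw_range):
    """Lower/upper bounds on P(X) over the stated endpoint ranges."""
    corners = [p_x_given_sound * (1 - pf) + pxf * pf
               for pf in p_flaw_range
               for pxf in p_x_given_flaw_range]
    return min(corners), max(corners)

lo, hi = adjusted_interval(1e-9, (1e-4, 1e-2), (1e-4, 1e-1))
# The interval spans several orders of magnitude, so any single "precise"
# figure chosen from within it would indeed look arbitrary.
```

This is the sense in which a precise posterior risk figure seems arbitrary: the interval, not any point in it, is what the evidence supports.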