Risks to scientists from mis-predictions

The recent conviction of six seismologists and a public official for reassuring the public about the risk of an earthquake shortly before a deadly one struck raises many issues, mostly legal. Here I want to focus on the scientific aspects, specifically the assessment and communication of uncertainty.

A recent paper by O’Hagan notes that there is “wide recognition that the appropriate representation for expert judgements of uncertainty is as a probability distribution for the unknown quantity of interest …”. This conflicts with UK best practice, as described by Spiegelhalter at Understanding Uncertainty. My own views have been formed by experience of potential and actual crises in which the evaluation of uncertainty played a key role.

From a mathematical perspective, probability theory is a well-grounded theory resting on certain axioms. There are plausible arguments that these axioms are often satisfied, but these arguments are empirical and hence should be considered at best scientific rather than mathematical or ‘universally true’. O’Hagan’s arguments, for example, start from the assumption that uncertainty is nothing but a number, ignoring Spiegelhalter’s ‘Knightian uncertainty’.

Thus it seems to me that where rare critical decisions must be made and there is a lack of evidence to support belief in the axioms, one should recognize the attendant non-probabilistic uncertainty, and that failure to do so is a serious error, meriting some censure. In practice one needs relevant guidance, such as the UK is developing, interpreted for specific areas such as seismology. This should provide both guidance (such as that at Understanding Uncertainty) to scientists and material to be used in communicating risk to the public, preferably with some legal status. But what should such guidance be? Spiegelhalter’s is a good start, but needs developing.

My own view is that one should have standard techniques that can put reasonable bounds on probabilities, so that one has something relatively well peer-reviewed, ‘authorised’ and ‘scientific’ to inform critical decisions. But in applying any method one should recognize the assumptions made to support its use, and highlight them. Thus one may say that, according to the usual methods, ‘the probability is p’, but that there are various named factors which lead one to suppose that the ‘true risk’ may be significantly higher (or lower). But is this enough?
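As a minimal sketch of what such ‘reasonable bounds’ might look like in practice (in Python, with entirely hypothetical numbers and assumption names of my own), one can give each named assumption a plausible range rather than a point value and report the interval of probabilities that results:

```python
# Sketch: report a probability as a range by varying the named assumptions
# over plausible values, rather than quoting a single figure.
# All numbers here are hypothetical, for illustration only.

def posterior(base_rate, lik_event, lik_no_event):
    """Bayes' rule: P(event | evidence) for one choice of assumptions."""
    p_evidence = lik_event * base_rate + lik_no_event * (1.0 - base_rate)
    return lik_event * base_rate / p_evidence

# Named assumptions, each as a low/high pair rather than a point estimate.
base_rates   = [1e-4, 1e-3]   # background chance of a major event in the period
lik_event    = [0.3, 0.7]     # P(observed anomaly | major event imminent)
lik_no_event = [0.01, 0.05]   # P(observed anomaly | no major event)

estimates = [posterior(b, le, ln)
             for b in base_rates
             for le in lik_event
             for ln in lik_no_event]

print(f"P(event | evidence) in [{min(estimates):.4f}, {max(estimates):.4f}]"
      " under the stated assumptions")
```

The width of the reported interval then itself communicates how sensitive ‘the probability’ is to the assumptions, which is precisely the information a critical decision-maker needs.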

Some of those involved in crisis management have noted that scientists generally seem to underestimate risk. If so, then even the above approach (and the similar approach of Understanding Uncertainty) could tend to understate risk. So do scientists tend to understate the risks pertaining to crises, and if so, why?

It seems to me that one cannot be definitive about this, since there are, from a statistical perspective – thankfully – very few crises or even near-crises. But my impression is that there could be something in it. Why?

As at L’Aquila, human and organisational factors seem to play a role, so that some answers seem to need more justification than others. Any ‘standard techniques’ would need to take account of these tendencies. For example, I have often said that the key to good advice is to have a good customer: one who desires an adequate answer, whatever it is, who fully appreciates the dangers of misunderstanding arising, and who is prepared to invest the time in ensuring adequate communication. This often requires debate and perhaps role-playing prior to any crisis. This was not achieved at L’Aquila. But is even this enough?

Here I speculate even more. In my own work, it seems to me that where a quantity such as P(A|B) is required and scientists/statisticians only have a good estimate of P(A|B’) for some B’ that is more general than B, then P(A|B’) will be taken as ‘the scientific’ estimate of P(A|B). This is so common that it seems to be a ‘rule of pragmatic inference’, albeit one that seems unsupported by the kind of axiomatic arguments that O’Hagan endorses. My own experience is that it can seriously underestimate P(A|B).
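A toy Bayesian calculation, with made-up numbers, shows how large the gap can be. Suppose B refines B’ by some additional evidence E (say, an unusual foreshock swarm), that the general estimate is P(A|B’) = 10⁻⁴, and that E is fifty times more likely when A is imminent than when it is not. Then, in odds form,

$$
\frac{P(A \mid B', E)}{P(\lnot A \mid B', E)}
  \;=\; \frac{P(E \mid A, B')}{P(E \mid \lnot A, B')} \cdot \frac{P(A \mid B')}{P(\lnot A \mid B')}
  \;\approx\; 50 \times 10^{-4} \;=\; 5 \times 10^{-3},
$$

so that P(A|B) ≈ 5 × 10⁻³: still small, but quoting P(A|B’) as ‘the’ probability would understate the risk in the actual situation by a factor of about fifty.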

The facts of the L’Aquila case are not clear to me, but I suppose that the scientists made their assessment based on the best available scientific data. To put it another way, they would not have taken account of ad hoc observations, such as amateur observations of radon gas fluctuations. Part of the L’Aquila problem seems to be that the amateur observations provided a warning which the population were led to discount on the basis of the ‘scientific’ analysis. More generally, in a crisis one often has a conflict between a scientific analysis based on sound data and non-scientific views verging on divination. How should these diverse views inform the overall assessment?

In most cases one can make a reasonable scientific analysis based on sound data and ‘authorised assumptions’, taking account of recognized factors. I think that one should always strive to do so, and to communicate the results. But if that is all that one does then one is inevitably ignoring the particulars of the case, which may substantially increase the risk. One may also want to take a broader decision-theoretic view. For example, if the peaks in radon gas levels were unusual then taking them as a portent might be prudent, even in the absence of any relevant theory. The only reason for not doing so would be if the underlying mechanisms were well understood and the gas levels were known to be simply consequent on the scientific data, thus providing no additional information. Such an approach is particularly indicated where – as I think is the case in seismology – even the best scientific analysis has a poor track record.
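The decision-theoretic point can be made crude and explicit, with purely illustrative symbols and numbers of my own. If a precaution costs C and would avert an expected loss L should the event occur, then it is worth taking whenever

$$
p \cdot L \;>\; C, \qquad \text{i.e.}\quad p \;>\; \frac{C}{L}.
$$

If, say, L is a thousand times C, the threshold is p > 10⁻³: a probability that scientists might reasonably describe as ‘low’ can still be well above the level at which precaution is warranted, so a figure that looks reassuringly small from a scientific standpoint may still be decision-relevant.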

The bottom line, then, is that I think one should always provide ‘the best scientific analysis’, in the sense of an analysis that gives a numeric probability (or probability range, etc.), but one also needs to establish a best practice that takes a broader view of the issue in question, and in particular of the limitations and potential biases of ‘best practice’.

The O’Hagan paper quoted at the start says – of conventional probability theory – that “Alternative, but similarly compelling, axiomatic or rational arguments do not appear to have been advanced for other ways of representing uncertainty.” This overlooks Boole, Keynes, Russell and Good, for example. It may be timely to reconsider the adequacy of the conventional assumptions. It might also be that ‘best scientific practice’ needs to be adapted to cope with messy real-world situations. L’Aquila was not a laboratory.

See Also

My notes on uncertainty and on current debates.

Dave Marsay