Blackett’s ‘Black Swans’

UK Government Office for Science, Blackett Review of High Impact Low Probability Risks, ed. Prof. Beddington, 2011.

The question from the UK Cabinet Office and Ministry of Defence was:

“How can we ensure that we minimise strategic surprises from high impact low probability risks?”

“The panel considered how Government could best [identify], assess, communicate and quantify the inherent uncertainty in these types of risk.”

“The report identifies several recommendations for further strengthening UK government’s approaches to addressing these types of risk. It will also be of value to the wider Risk Management community.”

“The recommendations build on existing practice, with an emphasis on refreshed thinking in a number of areas. The most notable over-arching factor in these recommendations is the repeated need for the inclusion of external experts and readiness to consider unlikely risks.”

I find the report good to excellent as far as it goes, particularly the annexes that wrestle with the nature of uncertainty. It goes well beyond the notion that risk and uncertainty are just numbers, with much important detail. For example, the annex on epistemic risk provides some good insights into why we can’t simply apply a ‘scientific’ approach. But it does not seem to me that the report provides a ‘fully baked’ picture, and it does not do enough to encourage debate on these topics, or even to identify a need to revisit them. But I will critique it anyway.

What are ‘probability’ and ‘low probability’?

The title and the main body use the term ‘probability’ as if we all had a common and appropriate conception of it. But on reading the annexes I doubt this. The work of Jack Good is quoted; but if his work is relevant then presumably so is that of Keynes (on which Good built). In which case, there may be much more to ‘probability’ than many think. On the other hand, perhaps ‘uncertainty’ is now the broader term, with ‘probability’ reserved for simple uncertainties that can be represented by numbers; but this would seem to make nonsense of the scope. A dichotomy to be resolved?

Some (like Taleb) use the term ‘low probability’ without seeking to imply that probability is ‘tame’ or ‘measurable’. But there is still a problem: using the Renn approach one can only use subjective probability (one has no access to reality), yet it is the objective uncertainty that influences events.
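To make the contrast concrete, here is a minimal sketch (my own illustration, not anything proposed in the report) of the difference between probability as a single number and a Boole/Keynes-style interval estimate; all the names and figures are hypothetical. With an interval, some decision questions come out as ‘indeterminate’, which is arguably the honest answer for a poorly-evidenced risk.

```python
# A minimal sketch (not from the report) contrasting probability as a
# single number with a Boole/Keynes-style interval estimate. All names
# and figures here are hypothetical.

from dataclasses import dataclass

@dataclass
class IntervalProb:
    lo: float  # lower bound on the probability
    hi: float  # upper bound on the probability

    def exceeds(self, threshold: float) -> str:
        """Can we say the probability exceeds a decision threshold?"""
        if self.lo > threshold:
            return "yes"
        if self.hi < threshold:
            return "no"
        return "indeterminate"  # the interval straddles the threshold

# A 'tame', well-evidenced risk: the interval is tight.
flood = IntervalProb(0.009, 0.011)

# A poorly-evidenced risk: the honest answer is a wide interval.
systemic_crisis = IntervalProb(0.001, 0.2)

for name, p in [("flood", flood), ("systemic crisis", systemic_crisis)]:
    print(name, "exceeds 0.5% threshold:", p.exceeds(0.005))
```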

An alternative question would have been: “How can we ensure that we minimise strategic surprises?” It seems to me that if something does not have at least the potential for high impact then it is not strategic, and that if it comes as a surprise then we must have thought its probability low. If we want to point to Taleb’s insights, we might direct particular attention to ‘avoiding systematic under-appreciation of the potential for strategic shocks’, or some other phrase that avoids misleading terminology.

The annexes cover different aspects of uncertainty. It would be helpful to relate them to a common theory, such as Boole’s, showing where they differ in presentation or substance.

The annex on epistemic risk says:

“… the Bayesian paradigm offers an obvious rational framework for evaluating scientific evidence for decision support.”

But, in view of the later annexes, is it adequate?
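As a reminder of what that paradigm actually does, here is a minimal Bayes’-rule sketch (mine, with made-up numbers). It also exhibits the well-known corner case that matters for ‘black swans’: a hypothesis assigned prior probability zero (one that was never identified) can never be revived, however strong the evidence.

```python
# A minimal sketch of the Bayesian paradigm (mine, with made-up numbers).
# Note the corner case that matters for 'black swans': a hypothesis given
# prior probability zero can never be revived, however strong the evidence.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) by Bayes' rule."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence if p_evidence > 0 else prior

# Strong evidence (a 100:1 likelihood ratio) moves a modest prior a long way ...
print(bayes_update(prior=0.05, p_e_given_h=0.99, p_e_given_not_h=0.0099))  # ~0.84

# ... but a risk never identified (prior zero) stays at zero.
print(bayes_update(prior=0.0, p_e_given_h=0.99, p_e_given_not_h=0.0099))   # 0.0
```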

Risk identification

A common belief among managers is that things have to be identified, perhaps even identified as definite, before they can be managed. As in the first set of quotes, above, the report seems to make this assumption.

“It can be challenging for the Government to be confident that it has used the best available evidence and expert judgement to identify, assess and prioritise, a representative range of challenging yet plausible risk scenarios …

Government needs to identify and mitigate risks arising from a wide range of non-malicious hazards and malicious threats.”

“Once a potential risk [sic] has been identified … .”

“… risks for which there is insufficient evidence are kept on a ‘risks under review’ list.”

“Government should use probabilistic analysis, where it is available, in support of its risk management process to evaluate defined scenarios and inform decision making about significant individual risks.”

Despite this, it seems to me that some of the suggestions (such as consulting experts and improving the representation of uncertainty) would help to reduce strategic surprises, and thus come within the scope of the ‘exam question’. For example, in 2005 it was clear to many economists that there was a bubble that would end in trouble sooner or later, but it was not clear what would puncture the bubble, or when, or what the scapegoat would be. If there really were many very different bubble-bursting scenarios then there was clearly ‘insufficient evidence’ for any one of them. Even after an event, historians can make a living discussing causes. Often, then, there will be surprises for whose ‘scenarios’ there had not been sufficient evidence; indeed, if there were sufficient evidence one wonders why they were surprises. The language of management (‘identify and mitigate’) hardly seems appropriate. We do not necessarily need to identify scenarios in order to recognise a potential and take avoiding action. For example, we might have identified and sought to remedy group-think among mainstream economists.

“The majority of high impact low probability risks can be identified, and their likelihoods characterised …”

This may seem true with hindsight, but is it?

“… Any idea that, faced with a decision about a very complex matter, a decision maker can simply state ‘OK we’ll sort this out if it happens’ is not practically rigorous – every possibility must be thought through, including the inconceivable.”

One way of reducing strategic surprises is to ‘think through’ every possibility, but what about the remainder?

 

Evaluation of evidence

 

“There are a number of questions that could be asked to probe the importance of new or unanticipated evidence:

  • How independent is each information source, and how reliable?
  • Is this information really ‘surprising’?
  • Is it likely that the source could be wrong?
  • When considering all of the ‘surprising information’ received, is it contradictory; or does it partially/fully confirm what other sources have been saying?”

These are important questions to ask, but all too often such procedures can lead to the discounting of evidence that one’s preconceptions are wrong. As with the assessment of Iraqi WMD, one wants to avoid being unduly influenced by one’s prejudices and false assumptions.
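The first question in the list, on independence, can be given a simple quantitative gloss. The sketch below (mine; the numbers are invented) shows how treating three sources as independent multiplies their evidential force, overstating confidence when they are in fact echoes of a single original report.

```python
# A sketch (mine; the numbers are invented) of why the independence
# question matters. Naively multiplying likelihood ratios treats every
# source as independent; if three reports are echoes of one original,
# only one update is warranted.

def posterior_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Multiply prior odds by each source's likelihood ratio (naive Bayes)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def to_prob(odds: float) -> float:
    return odds / (1 + odds)

prior_odds = 1 / 9  # prior probability 0.1
lr = 4.0            # each source is moderately supportive

# Three sources treated as independent: confidence is overstated.
print(to_prob(posterior_odds(prior_odds, [lr, lr, lr])))  # ~0.88

# Recognising that they are copies of a single report: far weaker.
print(to_prob(posterior_odds(prior_odds, [lr])))          # ~0.31
```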

Separating assessment and management

“It is accepted within government that the assessment and representation of risk is a separate process to that of managing risk.”

This separation of concerns is common, and clearly desirable. If uncertainty can be represented by a number then such a separation would seem indicated. But if, as argued in the annexes, there is significant true (Knightian) uncertainty then there may need to be a two-way dialogue.

“Discussion of risks should also be framed so that senior decision makers can feel comfortable in being able to challenge ‘expert’ assessments.”

Agreed, particularly where there is true uncertainty in Knight’s sense.

Sources of insight

“Both industry and government face the same difficult challenge in assessing and understanding high impact low probability risks.”

Clearly, both industry and government face challenges for which the status quo (‘risk is a number’) is inadequate, and it would be useful to share insights, techniques, lessons identified and case studies more. But the two parties do not face the same challenges. Most UK companies and their employees have very limited liabilities: no-one gets a negative bonus for failing to anticipate strategic events, and there is even an opportunity covertly to create problems and then be rewarded for dealing with them as they arise. Some government risks are not in the same league. Those who face the potential for serious problems that they will not be able to walk away from, such as the military or relief agencies, may also have important insights.

Mathematics and science

“When applied universally … this approach can [perhaps] lead … to unacceptable consequences counter to the aims of science to explain and predict natural phenomena.”

  “… research is needed into how this impinges on science-informed and risk-informed decision-making, especially in the context of low probability, high consequence ‘black swan’ events.”

“Rigour is strict enforcement of rules to an end. Logical rigour in mathematics and science require an exactness and adherence to logical rules that derive from the pursuit of strict truth. … Scientific rigour requires selective inattention to the difficulties it cannot yet address.”

This confuses mathematical and scientific rigour and approaches to evidence, yet elsewhere the report refers to Jack Good, who explains the difference. The mathematical approach is more general and, freed of some common assumptions, can be more appropriate. We need rigour as in ‘logical exactitude’ rather than ‘adherence to rules’. A key difference is in the treatment of assumptions: mathematics can challenge even where science presumes. But research IS needed.

World-view

Perhaps most important, but also most controversial – and hence left to last – is the issue of ‘world view’. Risks arise in the world, so our view of the world affects our view of risks. If a risk really is of low probability then it seems reasonable not to let it dictate our lives; the real danger is from risks that are under-appreciated. For example, what should concern us about a natural hazard is not so much the ‘once in a century’ event that really does happen about once every hundred years, but the one that happens more often than supposed. A natural disaster is possibly just bad luck, but more often its likelihood or impact had been underestimated.

Part of the problem is risk management itself. After a flood the risk of subsequent flooding is appreciated and drains are kept clear; but after a period without flood we ‘learn’ that floods are unlikely, the perception of risk reduces, drains can remain blocked, and the risk of flooding increases. More generally, we manage our lives according to our perception of risk, so there is a reflexive relationship: a risk is only a serious risk if it is under-appreciated. Thus the inherently challenging risks are those that are invisible, obscure, complex, confused or otherwise outside our management approach. Perhaps we should rise to the challenge?
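The flood example can be caricatured in a few lines of code. This is a toy model of the reflexive relationship (entirely my own; every parameter is invented): perceived risk decays in flood-free years, upkeep lapses, and the actual risk creeps back up until a flood resets perceptions.

```python
# A toy model (entirely my own; every parameter is invented) of the
# reflexive flood story: perceived risk decays in flood-free years,
# upkeep lapses, and the actual risk creeps back up.

import random

random.seed(1)

base_risk = 0.05        # annual flood probability with well-kept drains
neglect_penalty = 0.10  # extra risk when perception (and upkeep) lapses
perceived = 1.0         # 1.0 = vivid memory of flooding; decays each dry year

for year in range(1, 31):
    actual_risk = base_risk + neglect_penalty * (1 - perceived)
    if random.random() < actual_risk:
        perceived = 1.0  # a flood restores the perception of risk
        print(f"year {year:2d}: flood (risk had crept up to {actual_risk:.2f})")
    else:
        perceived *= 0.8  # memory fades; drains stay blocked a little longer
```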

“With sufficient knowledge and informed judgement uncertainty can be characterised statistically. It follows that strategic surprises arise from lack of knowledge or the inability to perceive the consequences of what is known.”

This is reminiscent of the view that free will must be an illusion, because our actions are either deterministic or probabilistic (in the Bayesian sense). According to Keynes, on the other hand, economies harbour genuine, non-statistical uncertainties, leaving scope for free will. Mathematically, statistics tell you about the past; inferences about the future rely on some principle of induction. But if one has data generated by a particular system (an economy, say), the best that induction based on statistics can do is extrapolate its behaviour as if the system will endure. It might not. Such, surely, are ‘high impact low probability’ events?
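The point about induction can be illustrated with synthetic data (my sketch; nothing here is from the report): a trend fitted to a stable regime extrapolates confidently, and is silently and badly wrong when the regime breaks.

```python
# A sketch (mine; synthetic data) of the limits of statistical induction:
# a trend fitted to a stable regime extrapolates as if the system will
# endure, and is silently wrong when the regime breaks.

import numpy as np

rng = np.random.default_rng(0)

# Years 0-19: a stable regime of steady growth plus noise.
t_past = np.arange(20)
y_past = 2.0 * t_past + rng.normal(0, 1, 20)

# Fit a trend to the stable period and extrapolate to year 25.
slope, intercept = np.polyfit(t_past, y_past, 1)
print("forecast for year 25:", slope * 25 + intercept)  # roughly 50

# But the system does not endure: the regime breaks at year 20.
actual_year_25 = 2.0 * 20 - 10.0 * (25 - 20)  # collapse after year 20
print("actual at year 25:", actual_year_25)   # -10.0
```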

My Conclusion

The report shows that uncertainty is not just a number. There are different notions of uncertainty, and use of the wrong one can sometimes lead to ‘unacceptable consequences’, so the choice matters. Yet my own experience is that people often ‘come to the table’ with incompatible notions which, for lack of a common framework (such as Boole’s), they cannot resolve and may not even identify. Research is needed, and in my view a wide-ranging programme should be recommended.

Part of the problem, I think, is that we expect politicians to be definite: uncertainty is seen as a weakness. Thus a hypothesis that survives for a while can become a dogma, which then cannot be challenged. Not only should politicians, like Keynes, change their minds when the facts change; they should also strive to keep an open mind, particularly when all about them people are (psychologically) certain.

See Also

Gov. thinking: Science advice, risk management; Uncertainty: Good Thinking, Knight, Taleb, ‘is not probability’; Solutions component?: positive deviancy, appreciative inquiry

Dave Marsay
