# The Norm Chronicles

Michael Blastland & David Spiegelhalter, *The Norm Chronicles*, Profile Books, 2013.

Blastland is the journalist who devised Radio 4’s numerate *More or Less*.

Spiegelhalter is the televisual Winton Professor of the Public Understanding of Risk at the University of Cambridge, UK.

## 1. Introduction

… Can risk claim to be true to numbers and to you at the same time? We will present both sides as we try to find out, but we will tell you our conclusion now.
It can’t. For people, probability does not exist.
That’s an extraordinary claim … [so read the book!].

Probability … yokes together … the orderly view of whole populations seen in numbers from above and the sometimes lonely view in the maze of stories below. … The news is full of it, and no wonder – it seems to offer a hold on the future. Which is why it is a little bit inconvenient that it doesn’t exist.

## 17. Judgment Day

We think that there’s more uncertainty than you would think from the way that people throw numbers around.
We think this especially because when you try to grab hold of probability it somehow slips through your fingers. It’s hard to say what it really is for an individual. It’s hard, too, to say how the average affects an individual.

We reject all these explanations [of frequency, propensity and ‘non-informative priors’] and take a very pragmatic stance – that [a quoted] ‘chance of a heart attack’ is not really [your] risk, and not even an estimate of some propensity of it. It is based on a few items of limited information, and should be treated like the ‘probability’ of [a horse] winning – reasonable betting odds given current information [as taken account of by the doctor]. No more and no less.

[Probability] is necessarily a judgment and does not exist as a property of the outside world. Risk, in this sense, is a measure of what we don’t and can’t know as much as a measure of what we can.
All of which forms part of a rather startling conclusion: that independent, objective probability, as Norm says at the last, doesn’t exist.
Nor, as we say, does the average person exist to whom the average risk is supposed to apply.

So in practical terms, for the events of life in general, when we say a certain activity is dangerous and quote its risk as so many MicroMorts, these numbers should be considered only as reasonable betting odds given what we know. As soon as we know more … the risk changes … .
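The MicroMort framing can be made concrete. A MicroMort is defined as a one-in-a-million chance of death, so any small probability of death can be converted into MicroMorts and back. A minimal sketch (the figures passed in below are illustrative, not taken from the book):

```python
# Convert between small probabilities of death and MicroMorts.
# One MicroMort = a one-in-a-million chance of death.

def to_micromorts(probability_of_death: float) -> float:
    """Express a probability of death as MicroMorts."""
    return probability_of_death * 1_000_000

def to_probability(micromorts: float) -> float:
    """Express a number of MicroMorts as a probability of death."""
    return micromorts / 1_000_000

# Illustrative only: these are 'reasonable betting odds given
# current information', not fixed properties of any activity.
print(round(to_micromorts(0.000007)))  # 7
print(to_probability(20))              # 2e-05
```

As the excerpt stresses, the number attached to an activity should be read as betting odds given what is currently known, so any such figure is provisional: new information changes it.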

Probability sounds sensible enough, but whenever you reach for a firm and meaningful definition the concept loses shape – although it is a number, show us the scales or stick that you can measure it with. … DS says he has spent many years trying to work out why people find probability intuitively difficult and confusing. MB adds that he has often reported people’s communication of risk and found the communicators don’t really know what they are communicating. Just when anxious people want most clarity, they find muddle. There is a reason for that. It is a muddle.

[One might] say ‘of 100 ways that things might turn out for you over the next 10 years, in 12 of them you will have a heart attack or stroke.’
So, which of the 100 are you this time?

This is a very authoritative view, that probabilities are not really ‘objective’ in the way that many suppose. They may sometimes be ‘the best basis there is for decision-making’, but there are always caveats, which sometimes bite.

But how do we know when we should particularly be concerned?

My suggestion is that we take probability as ‘reasonable betting odds given current information’, but note that information is often incomplete. In particular, estimates of probability inevitably rely on a type of induction from some ‘evidence base’. So perhaps we should think in terms of probabilities based on different evidence bases. For example, a doctor may cite a study and say something like ‘based on figures for typical English men, your probability of … is …’, whereas you may be thinking in terms of your own family history. Neither is ‘the objective probability’, and the best figure to use might be a subject for debate. (I would suggest thinking in terms of a range, not a precise value.)
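The suggestion of a range rather than a point value can be sketched as follows. The evidence-base names and figures here are invented purely for illustration:

```python
# Represent a risk as a range across evidence bases,
# rather than as a single 'objective' number.

estimates = {
    "typical English men (study)": 0.12,   # illustrative figure
    "own family history (informal)": 0.20, # illustrative figure
}

low, high = min(estimates.values()), max(estimates.values())
print(f"10-year risk: between {low:.0%} and {high:.0%}, "
      f"depending on the evidence base used")
```

Reporting the spread keeps visible what a single point estimate hides: that the number depends on which evidence base was used to produce it.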

More generally we may identify different evidence bases, precedents and assumptions from which we may make different (relatively ‘objective’) estimates. When these estimates differ significantly (in terms of the actions to be taken), this can be taken as an indication that more than usual attention should be paid to this uncertainty, and – ideally – attempts made to reduce it or at least hedge against it.
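The idea of flagging significant disagreement can also be sketched: compute the action each estimate implies under some decision threshold, and warn when the implied actions diverge. The threshold and estimates below are assumptions for illustration only:

```python
# Flag when different evidence bases would lead to different actions.

TREATMENT_THRESHOLD = 0.15  # illustrative decision threshold

estimates = {
    "population study": 0.12,  # illustrative figure
    "family history":   0.20,  # illustrative figure
}

# The action implied by each evidence base: treat or not.
actions = {name: p >= TREATMENT_THRESHOLD for name, p in estimates.items()}

if len(set(actions.values())) > 1:
    print("Estimates disagree on the action to take: pay more than "
          "usual attention to this uncertainty, and try to reduce "
          "it or hedge against it.")
else:
    print("All evidence bases point to the same action.")
```

The point of the check is exactly the one made above: disagreement matters when it is significant *in terms of the actions to be taken*, not merely when the numbers differ.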