# Matthews’ *Chancing It*

Robert Matthews, *Chancing It: The Laws of Chance and How They Can Work for You*, Profile Books, 2016

This is a ‘popular science / business’ book. Judging it by its cover, I expect to find it on sale at airports. But inside is a well-written book on probability from a Bayesian perspective. It is not technical, but tells the story of the development of ‘the laws of probability’ and covers the main controversies, with some familiar examples woven in.

The argument for Bayesianism amounts to:

1. It is based on sound mathematics (i.e. probability theory).
2. It is clearly true in the examples as described, which are intended to be representative.
3. Ignoring or going against the ‘laws’ can be bad for your wallet (or health).
4. Big organisations are increasingly getting value out of applications to ‘big data’.

To its credit – and unlike some other Bayesian accounts – the book does not argue that (1) is enough. It seems to me that it would justify the assertion that Bayesianism is ‘state of the art’, which is probably all that most readers want. Key controversial claims include:

- Sometimes, even when there is no actual underlying probabilistic mechanism (such as a roulette wheel), one can treat a problem of uncertainty ‘as if’ there were.
- Sometimes the above approach is dangerous.
- Having ‘degrees of belief’ implies, in effect, that one should be a Bayesian.

The first point is well illustrated, and distinguishes Bayesianism from alternatives such as frequentism. The second point distinguishes this author’s more sophisticated Bayesianism from others’ more simplistic ideas, and reasonable illustrations are provided (although fundamentalist Bayesians may disagree). The third point has been well established mathematically, and so is beyond criticism. But the book also makes use of the notion of ‘weights of evidence’ as used by Turing. So, should one have such ‘degrees of belief’? If so, then Bayesianism has been established as the one true method. If not, then Bayesianism is still ‘state of the art’, but is (sometimes) wrong (and may even be inferior to some past art).

The book discusses Laplace’s controversial ‘principle of indifference’. If one has no evidence then one must fall back on some such principle to estimate a ‘prior probability’. Suppose that for two bent coins one has a degree of belief of 0.5 that the next toss will yield ‘Heads’, but that one is based on the principle of indifference alone, whereas the other is supported by a long-running experiment whose results were consistent with the hypothesized probability. If ‘degree of belief’ measures all aspects of uncertainty, then one ought to be equally prepared to gamble with favourable odds in the two cases. It would seem that at Bletchley Park they should have ignored weight of evidence except as reflected in probabilities. Is this reasonable? Effective? Efficient? Worse, at the start of Turing’s paper as cited (but not quoted) in Matthews’ book, Turing says:

> The theory of probability may be used in cryptography with most effect when the type of cipher used is already fully understood, and it only remains to find the actual keys. It is of rather less value when one is trying to diagnose the type of cipher, but if definite rival theories about the type of cipher are suggested it may be used to decide between them.

This seems to capture the spirit of the book’s examples. Thus, it seems to me, the reasons for accepting Bayesianism (with appropriate qualifications) are not really those given in the book. Rather, we can treat it as a heuristic with which we have had good experiences, and hence with which we can expect similarly good experiences – at least in the short run – in sufficiently similar circumstances, with the caveat that we need to be conscientious in establishing ‘similarity’.
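The earlier bent-coin comparison can be sketched numerically. A common way to model a degree of belief about a coin’s bias is a Beta distribution (this modelling choice, and the numbers below, are my illustration, not the book’s): the indifference-only belief is a flat Beta(1, 1), while the experimentally supported one might be a Beta(501, 501) posterior after 500 heads in 1,000 tosses. Both give the same ‘degree of belief’ of 0.5 for the next toss, yet they carry very different weights of evidence, visible in how concentrated each distribution is:

```python
# Two 'degree of belief 0.5' beliefs that differ in weight of evidence.
# Belief about a coin's heads-probability modelled as Beta(a, b):
#   mean = a / (a + b),  variance = a*b / ((a + b)**2 * (a + b + 1)).
# Illustrative numbers only, not taken from the book.

def beta_mean(a, b):
    return a / (a + b)

def beta_var(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1))

cases = [
    ("indifference alone (Beta(1, 1))", 1, 1),
    ("500 heads in 1000 tosses (Beta(501, 501))", 501, 501),
]
for label, a, b in cases:
    sd = beta_var(a, b) ** 0.5
    print(f"{label}: degree of belief = {beta_mean(a, b):.3f}, sd = {sd:.4f}")
```

Both lines report a degree of belief of 0.500, but the standard deviation shrinks from roughly 0.29 to under 0.02. A single number cannot distinguish the two cases, which is the gap that ‘weight of evidence’ is meant to fill.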

More technically, it is not clear to me quite what the proper scope of Matthews’ variant of Bayesianism is. He clearly goes further than frequentists, for example, in supposing that Bayesianism can be valid even when there is no given underlying random mechanism. The book also details some problems with using a Bayesian approach. But whereas many Bayesians would describe these as misuses of Bayesian techniques, Matthews makes it clear that some of them are genuine limitations. For example, in economics we sometimes ‘simply do not know’.

The situation reminds me of the case for Newtonian mechanics prior to Einstein. This satisfied the four points above, yet the anomalous precession of Mercury’s perihelion proved inconsistent with the theory. So for me, a key question is: ‘is Bayesianism almost correct, or based on a fundamental mistake?’

Turing advocated Bayesianism when the general situation was ‘fully understood’, but not otherwise. In the first case the use of probability had been fully tested and found adequate. In the latter case, not. For example, the geeks’ use of econometrics may be justified when the economy is carrying on as usual, but can say nothing about the prospects for a radical change, such as a crash.

Matthews notes that probability can be broken down into ‘prior probability’ and likelihood, and repeatedly observes that it is the prior that is controversial when there seems no justification in the particular case. The chapter on Turing describes Turing’s methods without pointing out what ought perhaps to be obvious: that Turing’s ‘weight of evidence’ (due to Keynes) does not rely on priors. In this sense we might say that Turing was sometimes a sophisticated ‘likelihoodist’, not always a Bayesian.
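The point about priors can be made concrete. Turing measured weight of evidence in decibans: ten times the base-10 logarithm of the likelihood ratio between two rival hypotheses. A minimal sketch (my own illustration, with made-up likelihoods) shows that this quantity is computed from likelihoods alone, while the prior only enters when converting odds:

```python
import math

# Turing-style weight of evidence in decibans:
#   w = 10 * log10( P(E | H1) / P(E | H2) )
# It depends only on the two likelihoods, not on any prior.
# The likelihood values below are invented for illustration.

def weight_of_evidence_db(lik_h1, lik_h2):
    return 10 * math.log10(lik_h1 / lik_h2)

def update_odds(prior_odds, lik_h1, lik_h2):
    # Bayes in odds form: posterior odds = prior odds * likelihood ratio.
    return prior_odds * (lik_h1 / lik_h2)

w = weight_of_evidence_db(0.8, 0.2)       # the same evidence...
print(f"weight of evidence: {w:.1f} decibans")
print(update_odds(1.0, 0.8, 0.2))         # ...with an indifferent prior
print(update_odds(0.1, 0.8, 0.2))         # ...with a sceptical prior
```

The weight of evidence (about 6 decibans here) is the same whatever prior is fed into `update_odds`, which is one way of seeing how a ‘likelihoodist’ could put Turing’s machinery to work without committing to a prior at all.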

My understanding is that once a cipher had been broken and fully understood, the subsequent key-finding was largely Bayesian. But:

- In scheduling the use of resources based on different leads, account would be taken of the weight of evidence for a lead as well as the probability of it leading to a solution.
- The process had built in checks, which meant that it was recognized when the cipher had changed (e.g., by the introduction of a new wheel), and the subsequent activity, while informed by probability theory, did not always make use of priors, which would often have been meaningless.

Similar considerations would seem to apply in other areas.