Who thinks probability is just a number? A plea.

Many people think – perhaps they were taught it – that it is meaningful to talk about the unconditional probability of ‘Heads’ (i.e. P(Heads)) for a real coin, and even that there are logical or mathematical arguments to this effect. I have been collecting and commenting on works which have been – too widely – interpreted in this way, and quoting their authors in contradiction. De Finetti seemed to be the only respected figure who thought he had provided such an argument. But a friendly economist has just forwarded a link to a recent work that debunks this notion, based on a wider reading of de Finetti’s work.

So, am I done? Does anyone have any seemingly mathematical sources for the view that ‘probability is just a number’ for me to consider?

I have already covered:

There are some more modern authors who make strong claims about probability, but – unless you know different – they rely on the above, and hence do not need to be addressed separately. I also opine on a few less well-known sources: you can search my blog to check.

Dave Marsay

The search for MH370: uncertainty

There is an interesting podcast about the search for MH370 by a former colleague. I think it illustrates in a relatively accessible form some aspects of uncertainty.

According to the familiar theory, if one has an initial probability distribution over the globe for the location of MH370’s flight recorder, say, then one can update it using Bayes’ rule to get a refined distribution. Conventionally, one should search where there is a higher probability density (all else being equal). But in this case it is fairly obvious that there is no principled way of deriving an initial distribution, and even Bayes’ rule is problematic. Conventionally, one should do the best one can, and search accordingly.
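For concreteness, the conventional machinery can be sketched in a few lines of Python. Everything here is invented for illustration – the cells, the prior and the detection probability have nothing to do with the actual search – but it shows how Bayes’ rule shifts probability away from a cell that has been searched without success, without ever reducing it to zero:

```python
# Toy Bayesian search update: three cells with an assumed prior, and an
# assumed probability of detecting the object if we search the right cell.
prior = {"A": 0.5, "B": 0.3, "C": 0.2}
p_detect = 0.8

# We search cell A and find nothing. By Bayes' rule:
#   P(object in X | no find) is proportional to
#   P(object in X) * P(no find | object in X)
likelihood = {cell: (1 - p_detect) if cell == "A" else 1.0 for cell in prior}
unnormalised = {cell: prior[cell] * likelihood[cell] for cell in prior}
total = sum(unnormalised.values())
posterior = {cell: unnormalised[cell] / total for cell in prior}

print({cell: round(p, 3) for cell, p in posterior.items()})
# -> {'A': 0.167, 'B': 0.5, 'C': 0.333}
```

Of course, this sketch assumes precisely what the post questions: that a principled prior and detection probability exist to be written down.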

The podcaster (Simon) gives examples of some hypotheses (such as the pilot remaining well, well-motivated and unhindered throughout) for which the probabilistic approach is more reasonable. One can then split one’s effort over such credible hypotheses, not ruled out by the evidence.

A conventional probabilist would note that any ‘rational’ search would be equivalent to some initial probability distribution over hypotheses, and hence some overall distribution. This may be so, but it is clear from Simon’s account that this would hardly be helpful.

I have been involved in similar situations, and have found it easier to explain the issues to non-mathematicians when there is some severe resource constraint, such as time. For example, suppose we are looking for a person. The conventional approach is to maximise our estimated probability of finding them, based on our estimated probabilities of their having acted in various ways (e.g., run for it, hunkered down). An alternative is to consider the ways they may ‘reasonably’ be thought to have acted and then to seek to maximise the worst-case probability of finding them. Then again, we may have a ranking of the ways that they may have acted, and seek to maximise the number of ways for which the probability of our success exceeds some acceptable amount (e.g. 90%). The key point here is that there are many reasonable objectives one might have, for only one of which the conventional assumptions are valid. The relevant mathematics does still apply, though!
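These three objectives can be made concrete with a toy calculation. All of the plans, success probabilities and hypothesis weights below are invented for illustration; the point is only that perfectly reasonable criteria can select different plans from the same numbers:

```python
# Three objectives for choosing a search plan. Every figure below is
# invented purely for illustration.
plans = {
    "cover the roads": {"ran for it": 0.95, "hunkered down": 0.40},
    "sweep the woods": {"ran for it": 0.50, "hunkered down": 0.78},
}
weights = {"ran for it": 0.5, "hunkered down": 0.5}  # conventional prior

def expected_success(p):
    """Conventional: maximise the estimated probability of finding them."""
    return sum(weights[h] * p[h] for h in p)

def worst_case(p):
    """Robust: maximise the worst case over the credible hypotheses."""
    return min(p.values())

def acceptable_count(p, level=0.9):
    """Satisficing: count hypotheses for which success exceeds some level."""
    return sum(1 for v in p.values() if v >= level)

for criterion in (expected_success, worst_case, acceptable_count):
    best = max(plans, key=lambda name: criterion(plans[name]))
    print(f"{criterion.__name__}: {best}")
# prints:
# expected_success: cover the roads
# worst_case: sweep the woods
# acceptable_count: cover the roads
```

With these (made-up) numbers, maximising expected success and maximising the worst case recommend different plans, which is the point at issue.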

Dave Marsay

Mathematical modelling

I had the good fortune to attend a public talk on mathematical modelling, organised by the University of Birmingham (UK). The speaker, Dr Nira Chamberlain CMath FIMA CSci, is a council member of the appropriate institution, and so may reasonably be thought to be speaking for mathematicians generally.

He observed that there were many professional areas that used mathematics as a tool, and that they generally failed to see the need for professional mathematicians as such. He thought that mathematical modelling was one area where – at least for the more important problems – mathematicians ought to be involved. He gave examples of modelling, including one of the financial crisis.

The main conclusion seemed very reasonable, and in line with the beliefs of most ‘right thinking’ mathematicians. But on reflection, I wonder if my non-mathematician professional colleagues would accept it. In the 19th century professional mathematicians were proclaiming it a mathematical fact that the physical world conformed to classical geometry. On this evidence, mathematicians do not seem to have any special ability to produce valid models. Indeed, in the run-up to the financial crash too many professional mathematicians were advocating mainstream mathematical models of finance and economies in which the crash was impossible.

In Dr Chamberlain’s own model of the crash, it seems that deregulation and competition led to excessive risk taking, risks which eventually materialised. A colleague who is a professional scientist but not a professional mathematician has advised me that this general model was recognised by the UK at the time of our deregulation, but that it was assumed (as Greenspan assumed) that somehow some institution would step in to foreclose this excessive risk taking. To me, the key thing to note is that the risks being taken were systemic and not necessarily recognised by those taking them. To me, the virtue of a model does not just depend on its being correct in some abstract sense, but also on its having ‘traction’ with relevant policy and decision makers and takers. Thus, reflecting on the talk, I am left accepting the view of many of my colleagues that some mathematical models are too important to be left to mathematicians.

If we have a thesis and antithesis, then the synthesis that I and my colleagues have long come to is that important mathematical modelling needs to be a collaborative endeavour, with mathematicians having a special role in challenging, interpreting and (potentially) developing the model, including developing (as Dr C said) new mathematics where necessary. A modelling team will often need mathematicians ‘on tap’ to apply various methods and theories, and this is common. But what is also needed is a mathematical insight into the appropriateness of these tools and the meaning of the results. This requires people who are more concerned with their mathematical integrity than with satisfying their non-mathematical paymasters. It seems to me that these are a sub-set of those who are generally regarded as ‘professional’. How do we identify such people?

Dave Marsay 


More to Uncertainty than Probability!

My paper, based on the discussion paper referred to in a previous post, has just been published. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

(Actually, my paper was published in Jan 2016, but somehow this request for comments got stuck in a limbo somewhere. Better late than never?)

Which rationality?

We often suppose that rationality is ‘a good thing’, but is it always?

Rationality is variously defined as being in accord with reason, logic or ‘the facts’. Here ‘reason’ may mean one’s espoused or actual reasons, or it may mean in accord with some external standards. Thus in its broadest interpretation, it seems that anything that has a reason for being the way that it is may be considered broadly rational. But the notion of rationality derives from ‘reason’, one aspect of which is ‘sound judgement, good sense’. This suggests some external standard.

If we use the term ‘simple’ to denote a situation in which there are definite ‘objective’ standards of soundness and goodness, then rationality in simple situations is behaviour that accords with those standards. Philosophers can argue endlessly about whether any such situations exist, so it seems sensible to define rationality more generally as being relative to some set of standards. The question then being: What standards?

My natural inclination as a mathematician is that those standards should always include the best relevant logics, including mathematics. Yet I have witnessed many occasions on which the use of mathematics has tended to promote disasters, and the advocates of such approaches (apart from those few who think like me) have seemed culpable. I have a great deal of respect and sympathy for the view that mathematics is harmful in complex situations. Yet intellectually it seems quite wrong, and I cannot accept it.

In each case there seems to be some specific failing, which many of my colleagues have attributed to some human factor, such as hubris or the need to keep a job or preserve an institution. But the perpetrators do not seem to me to be much different from the rest of us, and I have long thought that there is some more fundamental common standard that is incompatible with the use of reason. The financial crises of 2007/8/9 are cases where it is hard to believe that most of those pushing the ‘mathematical’ view that turned out to be harmful were either irrational or rationally harmful.

Here I want to suggest a possible explanation.

From a theoretical perspective, there are no given ‘facts’, ‘logics’ or ‘reasons’ that we can rely on. This certainly seems to be true of finance and economics. For example, in economics the models used may be mathematical and in this sense beyond criticism, but the issue of their relevance to a particular situation is never purely logical, and ought to be questioned. Yet it seems that many institutions, including businesses, rely on having absolute beliefs: questioning them would be wasteful in the short run. So individual actors tend not only to be rational, but to be narrowly rational ‘in the short run’, which normally goes with acting ‘as if’ one had definite facts.

For example, it seems to me to be a fact that, according to the best contemporary scientific theories, the earth is not stationary. It is generally expedient for me to act ‘as if’ I knew that the earth moved. But unless we can be absolutely sure that the earth moves, the tendency to suppose that it is a fact that the earth moves could be dangerous. (You could try substituting other ‘facts’, such as that economies always tend to a healthy equilibrium.)

In a healthy society there would be a competition of ideas,  such that society as a whole could be said to be being more broadly rational, even while its actors were being only narrowly rational. For example, a science would consist of various schools, each of which would be developing its own theories, consistent with the facts, which between them would be exploring and developing the space of all such credible theories. At a practical level, an engineer would appreciate the difference between building a bridge based on a theory that had been tested on similar bridges, and building a novel type of bridge where the existing heuristics could not be relied upon.

I do not think that human society as a whole is healthy in this sense. Why not? In evolutionary theory separate niches, such as islands, promote the development of healthy diversity. Perhaps the rise of global communications and trade, and even the spread of the use of English, is eliminating the niches in which ideas can be explored and so is having a long-run negative impact that needs to be compensated for?

Thus I think we need to distinguish between short and long-run rationalities, and to understand and foster both. It seems to me that most of the time, for most areas of life, short-run rationality is adequate, and it is this that is familiar. But this needs to be accompanied by an understanding of the long-run issues, and an appropriate balance achieved. Perhaps too much (short-run) rationality can be harmful (in the long-run). And not only in economies.

Dave Marsay


Instrumental Probabilities

Reflecting on my recent contribution to the economics ejournal special issue on uncertainty (comments invited), I realised that from a purely mathematical point of view, the current mainstream mathematical view, as expressed by Dawid, could be seen as a very much more accessible version of Keynes’. But there is a difference in expression that can be crucial.

In Keynes’ view ‘probability’ is a very general term, so that it is always legitimate to ask about the probability of something. The challenge is to determine the probability, and in particular whether it is just a number. In some usages, as in Kolmogorov’s, the term probability is reserved for those cases where certain axioms hold. In such cases the answer to a request for a probability might be to say that there isn’t one. This seems safe even if it conflicts with the questioner’s presuppositions about the universality of probabilities. The instrumentalist view of Dawid, however, suggests that probabilistic methods are tools that can always be used. Thus the probability may exist even if it does not have the significance that one might think and, in particular, it is not appropriate to use it for ‘rational decision making’.

I have often come across seemingly sensible people who use ‘sophisticated mathematics’ in strange ways. I think perhaps they take an instrumentalist view of mathematics as a whole, and not just probability theory. This instrumentalist mathematics reminds me of Keynes’ ‘pseudo-mathematics’. But the key difference is that mathematicians, such as Dawid, know that the usage is only instrumentalist and that there are other questions to be asked. The problem is not the instrumentalist view as such, but the dogma (of at least some) that it is heretical to question widely used instruments.

The financial crises of 2007/8 were partly attributed by Lord Turner to the use of ‘sophisticated mathematics’. From Keynes’ perspective it was the use of pseudo-mathematics. My view is that if it is all you have then even pseudo-mathematics can be quite informative, and hence worthwhile. One just has to remember that it is not ‘proper’ mathematics. In Dawid’s terminology the problem seems to be the instrumental use of mathematics without any obvious concern for its empirical validity. Indeed, since his notion of validity concerns limiting frequencies, one might say that the problem was the use of an instrument that was stunningly inappropriate to the question at issue.

It has long seemed to me that a similar issue arises with many miscarriages of justice, intelligence blunders and significant policy mis-steps. In Keynes’ terms people are relying on a theory that simply does not apply. In Dawid’s terms one can put it more bluntly: decision-takers were relying on the fact that something had a very high probability when they ought to have been paying more attention to the evidence in the actual situation, which showed that the probability was – in Dawid’s terms – empirically invalid. It could even be that the thing with a high instrumental probability was very unlikely, all things considered.

Decision-making under uncertainty: ‘after Keynes’

I have a new discussion paper. I am happy to take comments here, on LinkedIn, at the more formal Economics e-journal site or by email (if you have it!), but I wish to record substantive comments on the journal site, while continuing to build up here, with my comments, a record of whatever any school of thought may think relevant.

Please do comment somewhere.

Clarifications

I refer to Keynes’ ‘weights of argument’ mostly as something to be taken into account in addition to probability. For example, if one has two urns each with a mix of 100 otherwise identical black and white balls, where the first urn is known to have an equal number of each colour, but the mix for the other urn is unknown, then conventionally one has an equal probability of drawing a black ball from each urn, but the weight of argument is greater for the first than the second.
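A small calculation makes the contrast concrete. Here the unknown urn is modelled by a uniform prior over its composition – one conventional choice, not Keynes’ own – and the resulting spread is only a crude numerical proxy for weight, which (as discussed below) Keynes argues cannot be reduced to a single number:

```python
# Two urns, each with 100 black or white balls. Urn 1 is known to be 50:50;
# urn 2's mix is unknown, modelled here by a uniform prior over the number
# of black balls (an illustrative convention, not Keynes' own analysis).
counts = range(101)                 # possible numbers of black balls in urn 2
p_black_urn1 = 0.5                  # known composition
p_black_urn2 = sum(k / 100 for k in counts) / len(counts)  # = 0.5 by symmetry

# The point probabilities agree, but the spread of the underlying chance
# differs: zero for urn 1, substantial for urn 2. This spread is only a
# crude proxy for the difference in weight of argument.
spread_urn2 = sum((k / 100 - 0.5) ** 2 for k in counts) / len(counts)

print(p_black_urn1, round(p_black_urn2, 3), round(spread_urn2, 3))
# -> 0.5 0.5 0.085
```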

Keynes does not fully develop his notion of weights, and it seems not to be well understood; I wanted my overview of Keynes’ views to be non-contentious. But prompted by some off-line comments, I should clarify.

Ch. VI para 8 is worth reading, followed by Ch. III para 8. Whatever the weight may be, it is ‘strengthened by’:

  • Being more numerous.
  • Having been obtained under a greater variety of conditions.
  • Concerning a greater generalisation.

Keynes argues that this weight cannot be reduced to a single number, and so weights can be incomparable. He uses the term ‘strength’ to indicate that something is increased while recognising that it may not be measurable. This can be confusing, as in Ch. III para 7, where he refers to ‘the strength of argument’. In simple cases this would just be the probability, not to be confused with the weight.

It seems to me that Keynes’ concerns relate to Mayo’s:

Severity Principle: Data x provide good evidence for hypothesis H if and only if x results from a test procedure T which, taken as a whole, constitutes H having passed a severe test – that is, a procedure which would have, with very high probability, uncovered discrepancies from H, and yet no such error is detected.

In cases where one has performed a test, severity seems roughly to correspond to having a strong weight, at least in simpler cases. Keynes’ notion applies more broadly. Currently, it seems to me, care needs to be taken in applying either to particular cases. But that is no reason to ignore them.
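A toy numerical example may help. The sketch below is my own, not Mayo’s or Keynes’; the hypothesis, test procedure and alternative are all invented for illustration:

```python
# A toy version of Mayo's severity, for a coin-bias test.
# H: the coin is fair, p = 0.5. Test T: toss n = 100 times and flag a
# discrepancy from H if at least 60 heads are seen.
from math import comb

def p_at_least(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, cutoff = 100, 60

# Suppose 55 heads are observed, so H 'passes' (no discrepancy is flagged).
# The severity of that pass, against the alternative p = 0.7, is the
# probability that T would have flagged a discrepancy were p really 0.7.
severity = p_at_least(n, cutoff, 0.7)
print(round(severity, 3))
```

Here the severity against p = 0.7 comes out very high (close to 1), so the pass counts as good evidence against that degree of discrepancy; against an alternative close to 0.5 it would be low, and the same pass would tell us little.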


Dave Marsay