Probability as a guide to life

‘Probability is the very guide to life.’

Cicero may have been right, but ‘probability’ means something quite different nowadays to what it did millennia ago. So what kind of probability is a suitable guide to life, and when?

Suppose that we are told that ‘P(X) = p’. Often there is some implied real or virtual population, a proportion p of which has the property X. To interpret such a probability statement we need to know what the relevant population is. Such statements are then normally reliable. More controversial are conditional probabilities, such as ‘P(X|Y) = p’: if you satisfy Y, does P(X) = p hold ‘for you’?

Suppose that:

  1. All the properties of interest (such as X and Y) can be expressed as unions of elements of some disjoint basis, B.
  2. For all such basis properties, B, P(X|B) is known.
  3. The conditional probabilities of interest are derived from the basis properties in the usual way. (E.g., P(X|B1∪B2) = (P(B1)·P(X|B1) + P(B2)·P(X|B2))/P(B1∪B2).)
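As a minimal sketch of suppositions 1-3 (my illustration, with made-up numbers), and of the range of values that appears once we condition on a set that cuts across the basis:

```python
# Sketch: conditional probabilities over a disjoint basis (made-up numbers).
P_B = {"B1": 0.6, "B2": 0.4}          # P(B) for each basis property
P_X_given_B = {"B1": 0.9, "B2": 0.1}  # P(X|B), known by supposition 2

def p_x_given(cells):
    """P(X|Y) for Y a union of basis cells, as in supposition 3."""
    num = sum(P_B[b] * P_X_given_B[b] for b in cells)
    return num / sum(P_B[b] for b in cells)

print(p_x_given(["B1", "B2"]))  # P(X | B1 u B2) = 0.58

# For a set Z that cuts across the basis, P(X|Z) is not determined.
# If we assume only that Z is irrelevant to X within each cell,
# P(X|Z) is a weighted average of the P(X|B), with unknown weights:
print((min(P_X_given_B.values()), max(P_X_given_B.values())))  # [0.1, 0.9]
```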

The conditional probabilities constructed in this way are meaningful, but if we are interested in some other set, Z, the conditional probability P(X|Z) could take a range of values. We then need to reconsider decision making. Instead of maximising a probability (or utility), the following heuristics may apply:

  • If the range makes a significant difference, try to get more precise data. This may be by taking more samples, or by refining the properties considered.
  • Consider the best outcome under the worst-case probabilities.
  • If the above is not acceptable, make some reasonable assumptions until an acceptable result is possible.

For example, suppose that we have some urns, each containing a mix of balls, some of which are white. We can choose an urn and then pick a ball at random. We want white balls. What should we do? The conventional rule is to assess the proportion of white balls in each urn and pick the urn with the highest. This is uncontroversial if our assessments are reliable. But suppose we are faced with an urn with an unknown mix. Conventionally our assessment should not depend on whether we want to obtain or avoid a white ball. But if we want white balls, the worst-case proportion is no white balls, and we avoid this urn; whereas if we want to avoid white balls, the worst-case proportion is all white balls, and we again avoid this urn.
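A minimal sketch of the worst-case heuristic for this example, representing each urn’s white proportion as an interval (the numbers are hypothetical):

```python
# Sketch: choosing an urn under interval-valued proportions of white balls.
urns = {
    "known": (0.6, 0.6),    # reliably assessed: 60% white
    "unknown": (0.0, 1.0),  # no grounds for assessing the mix
}

def maximin_choice(urns, want_white=True):
    # The worst case for a white-seeker is the low end of the interval;
    # for a white-avoider it is the high end (few non-white balls).
    def worst(lo_hi):
        lo, hi = lo_hi
        return lo if want_white else 1.0 - hi
    return max(urns, key=lambda u: worst(urns[u]))

print(maximin_choice(urns, want_white=True))   # 'known': avoid the unknown urn
print(maximin_choice(urns, want_white=False))  # 'known': again avoid it
```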

If our assessments are unbiased then we would expect to do better with the conventional rule most of the time and in the long run. For example, if the non-white balls are black, and urns are equally likely to be filled with black balls as with white, then assessing that an urn with unknown contents has half white balls is justified. But in other cases we just don’t know, and in choosing this urn we could do consistently badly. There is a difference between an urn whose contents are unknown but for which you have good grounds for estimating the proportion, and an urn where you have no grounds for assessing the proportion.

If precise probabilities are to be the very guide to life, it had better be a dull life. For more interesting lives imprecise probabilities can be used to reduce the possibilities. It is often informative to identify worst-case options, but one can be left with genuine choices. Conventional rationality is the only way to reduce living to a formula: but is it such a good idea?

Dave Marsay


Haldane’s Tails of the Unexpected

A. Haldane and B. Nelson, ‘Tails of the unexpected’, The Credit Crisis Five Years On: Unpacking the Crisis conference, University of Edinburgh Business School, 8-9 June 2012.

The credit crisis is blamed on a simplistic belief in ‘the Normal Distribution’ and its ‘thin tails’, understating risk. Complexity and chaos theories point to greater risks, as does the work of Taleb.

Modern weather forecasting is pointed to as good relevant practice, where one can spot trouble brewing. Robust and resilient regulatory mechanisms need to be employed. It is no good relying on statistics like VaR (Value at Risk) that assume a normal distribution. The Bank of England is developing an approach based on these ideas.

Comment

Risk arises when the statistical distribution of the future can be calculated or is known. Uncertainty arises when this distribution is incalculable, perhaps unknown.

While the paper acknowledges Keynes’ economics and Knightian uncertainty, it overlooks Keynes’ Treatise on Probability, which underpins his economics.

Much of modern econometric theory is … underpinned by the assumption of randomness in variables and estimated error terms.

Keynes was critical of this assumption, and of this model:

Economics … shift[ed] from models of Classical determinism to statistical laws. … Evgeny Slutsky (1927) and Ragnar Frisch (1933) … divided the dynamics of the economy into two elements: an irregular random element or impulse and a regular systematic element or propagation mechanism. This impulse/propagation paradigm remains the centrepiece of macro-economics to this day.
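As a minimal sketch (my illustration, not from the paper), the impulse/propagation paradigm in its simplest form is an autoregressive model:

```python
# Sketch: impulse/propagation as an AR(1) model, x_t = a*x_{t-1} + e_t.
import random

random.seed(1)
a, x = 0.9, 0.0         # 'a' is the systematic propagation mechanism
series = []
for _ in range(200):
    e = random.gauss(0.0, 1.0)  # irregular random impulse
    x = a * x + e               # regular systematic propagation
    series.append(x)
print(series[:5])
# Keynes' point, below: the assumed randomness of the impulses can
# only be validated empirically, and may fail.
```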

Keynes pointed out that such assumptions could only be validated empirically; in the Treatise he cited Lexis’s falsification, as the current paper also does.

The paper cites a game of paper/scissors/stone which Sotheby’s thought was a simple game of chance but which Christie’s saw as an opportunity for strategizing – and won millions of dollars. Apparently Christie’s consulted some 11-year-old girls, but they might equally well have been familiar with Shannon’s machine for defeating strategy-impaired humans. With this in mind, it is not clear why the paper characterises uncertainty as merely being about unknown probability distributions, as distinct from Keynes’ more radical position, that there is no such distribution.

The paper is critical of nerds, who apparently ‘like to show off’. But to me the problem is not the show-offs, but those who don’t know as much as they think they know. They pay too little attention to the theory, not too much. The girls and Shannon seem okay to me: it is those nerds who see everything as the product of randomness or a game of chance who are the problem.

If we compare the Slutsky-Frisch model with Kuhn’s description of the development of science, then economics is assumed to develop in much the same way as normal science, but without ever undergoing anything like a (systemic) paradigm shift. Thus, while the model may be correct most of the time, violations, such as in 2007/8, matter.

Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.

One can understand this reasoning by analogy with science: the more dominant a school which protects its core myths, the greater the reaction and impact when the myths are exposed. But in finance it may not be just ‘risk control’ that causes a problem. Any optimisation that is blind to the possibility of systemic change may tend to increase the chance of change (for good or ill). [E.g. N. Bohr, Atomic Physics and Human Knowledge, Ox Bow Press, 1958.]

See Also

Previous posts on articles by or about Haldane, along similar lines:

My notes on:

Dave Marsay

Anyone for Tennis?

An example of Knightian uncertainty?

Sam, a Norwegian statistician, and Gina, a Moldovan game-theorist, have just met on holiday and are playing tennis. Sam knows that in previous games Gina has taken 70% of the opportunities to ‘go to the net’, and that out of 10 opportunities in their games so far, she has gone to the net 7 times.

What is the probability that Gina will go to the net at the next opportunity? (And what is your reasoning? You may consult my notes on probability.)

More similar puzzles are here.

Dave Marsay

Making your mind up (NS)

Difficult choices to make? A heavy dose of irrationality may be just what you need.

Comment on a New Scientist article, 12 Nov. 2011, pg 39.

The on-line version is ‘Decision time: How subtle forces shape your choices’, with the standfirst: ‘Struggling to make your mind up? Interpret your gut instincts to help you make the right choice.’

The article talks a lot about decision theory and rationality. No definitions are given, but it seems to be assumed that all decisions are analogous to decisions about games of chance, and it is supposed, without motivation, that the objective is always to maximize expected utility. This might make sense for gamblers who expect to live forever without ever running out of funds, but more generally it is unjustified.

Well-known alternatives include:

  • taking account of the chances of going broke (short-term) and never getting to the ‘expected’ (long-term) returns (see the sketch after this list).
  • taking account of uncertainty, as in Ellsberg’s approach.
  • taking account of the cost of evaluating options, as in March’s ‘bounded rationality’.
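To illustrate the first alternative (a hypothetical gamble, not one from the article): a bet can have a large expected return and still ruin almost everyone who follows it.

```python
# Sketch: positive expected value, but near-certain ruin.
# Stake the whole bankroll on a 60% chance to triple it, else lose it all.
import random

random.seed(2)
def play(rounds=10, bankroll=1.0):
    for _ in range(rounds):
        bankroll = bankroll * 3 if random.random() < 0.6 else 0.0
        if bankroll == 0.0:
            break
    return bankroll

results = [play() for _ in range(10_000)]
print(sum(results) / len(results))                    # mean ~ 1.8**10 = 357
print(sum(r == 0.0 for r in results) / len(results))  # but ~99.4% go broke
```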

The logic of inconsistency

A box claims that ‘intransitive preferences’ give mathematicians a headache. But as a mathematician I find that some people’s assumptions about rationality give me a headache, especially if they try to force them on to me.

Suppose that I prefer apples to plums to pears, but I prefer a mixture to having just apples. If I am given the choice between apples and plums I will pick apples. If I am then given the choice between plums and pears I will pick plums. If I am now given the choice between apples and pears I will pick pears, to have a good spread of fruit. According to the article I am inconsistent and illogical: I should have chosen apples. But what kind of logic is it in which I would end up with all meat and no gravy? Or all bananas and no custard?

Another reason I might pick pears would be if I wanted to acquire things that appear scarce. Thus being offered a choice of apples or plums suggests that neither is scarce, so what I really want is pears. In this case, if I were subsequently given a choice of plums or pears I would choose pears, even though I actually prefer plums. A question imparts information, and is not just a means of eliciting information.

In criticising rationality one needs to consider exactly what the notion of ‘utility’ is, and whether or not it is appropriate.

Human factors

On the last page it becomes clear that ‘utility’ is even narrower than one might suppose. Most games of chance have an expected monetary loss for the gambler and thus – it seems – such gamblers are ‘irrational’. But maybe there is something about the experience that they value. They may, for example, be developing friendships that will stand them in good stead. Perhaps if we counted such expected benefits, gambling might be rational. Could buying a lottery ticket be rational if it gave people hope and something to talk about with friends?

If we expect that co-operation or conformity have a benefit, then could not such behaviours be rational? The example is given of someone who donates anonymously to charity. “In purely evolutionary terms, it is a bad choice.” But why? What if we feel better about ourselves and are able to act more confidently in social situations where others may be donors?

Retirement

“Governments wanting us to save up for retirement need to understand why we are so bad at making long-term decisions.”

But are we so very bad? This could do with much more analysis. On the article’s view of rationality, under-saving could be caused by a combination of:

  • poor expected returns on savings (especially at the moment)
  • pessimism about life expectancy
  • heavy discounting of future value
  • an anticipation of a need to access the funds before retirement
    (e.g., due to redundancy or emigration).

The article suggests that there might also be some biases. These should be considered, although they are really just departures from a normative notion of rationality that may not be appropriate. But I think one would really want to consider broader factors bearing on expected utility. Maybe, for example, investing in one’s children’s future may seem more sensible. Similarly, in some cultures, investing in one’s aura of success (sports car, smart suits, …) might be a rational gamble. Is it that ‘we’ as individuals are bad at making long-term decisions, or that society as a whole has led to a situation in which for many people it is ‘rational’ to save less than governments think we ought to have? The notion of rationality in the article hardly seems appropriate to address this question.

Conclusion

The article raises some important issues but takes much too limited a view of even mathematical decision theory and seems – uncritically – to suppose that it is universally normatively correct. Maybe what we need is not so much irrationality as the right rationality, at least as a guide.

See also

Kahneman: anomalies paper, Review, Judgment. Uncertainty: Cosmides and Tooby, Ellsberg. Examples. Inferences from utterances.

Dave Marsay

How to live in a world that we don’t understand, and enjoy it (Taleb)

N. Taleb, ‘How to live in a world that we don’t understand, and enjoy it’, Goldstone Lecture 2011 (U. Penn, Wharton).

Notes from the talk

Taleb returns to his alma mater. This talk supersedes his previous work (e.g. Black Swan). His main points are:

  • We don’t have a word for the opposite of fragile.
      Fragile systems have small probability of huge negative payoff
      Robust systems have consistent payoffs
      ? has a small probability of a large pay-off
  • Fragile systems eventually fail. ? systems eventually come good.
  • Financial statistics have a kurtosis that cannot in practice be measured, and tend to hugely under-estimate risk.
      Often more than 80% of kurtosis over a few years is contributed by a single (memorable) day.
  • We should try to create ? systems.
      He calls them convex systems, where the expected return exceeds the return given the expected environment (see the sketch after this list).
      Fragile systems are concave, where the expected return is less than the return from the expected situation.
      He also talks about ‘creating optionality’.
  • He notes an ‘action bias’: whenever there is a game like the stock market, we want to get involved and win. It may be better not to play.
  • He gives some examples.
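As a minimal sketch of the convex/concave distinction (my illustration; it is just Jensen’s inequality, E[f(X)] ≥ f(E[X]) for convex f):

```python
# Sketch: convex vs concave payoffs under an uncertain environment.
import random

random.seed(0)
env = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # uncertain environment
mean_env = sum(env) / len(env)

payoffs = {
    "convex ('?')": lambda x: x * x,         # gains from variability
    "concave (fragile)": lambda x: -(x * x)  # harmed by variability
}
for name, f in payoffs.items():
    expected_return = sum(f(x) for x in env) / len(env)  # E[f(X)]
    return_at_expected = f(mean_env)                     # f(E[X])
    print(name, round(expected_return, 3), round(return_at_expected, 3))
# Convex: E[f(X)] > f(E[X]); concave: E[f(X)] < f(E[X]).
```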

Comments

Taleb is dismissive of economists who talk about Knightian uncertainty, which goes back to Keynes’ Treatise on Probability. Their corresponding story is that:

  • Fragile systems are vulnerable to ‘true uncertainty’
  • Fragile systems eventually fail
  • Practical numeric measures of risk ignore ‘true uncertainty’.
  • We should try to create systems that are robust to or exploit true uncertainty.
  • Rather than trying to be the best at playing the game, we should try to change the rules of the game or play a ‘higher’ game.
  • Keynes gives examples.

The difference is that Taleb implicitly supposes that financial systems etc. are stochastic, but have too much kurtosis for us to be able to estimate their parameters. Rare events are regarded as generated stochastically. Keynes (and Whitehead) suppose that it may be possible to approximate such systems by a stochastic model for a while, but that rare events denote a change to a new model, so that – for example – there is no universal economic theory. Instead, we occasionally have new economics, calling for new stochastic models. Practically, there seems little to choose between them, so far.

From a scientific viewpoint, one can only assess definite stochastic models. Thus, as Keynes and Whitehead note, one can only say that a given model fitted the data up to a certain date, and then it didn’t. The notion that there is a true universal stochastic model is not provable scientifically, but neither is it falsifiable. Hence according to Popper one should not entertain it as a view. This is possibly too harsh on Taleb, but the point is this:

Taleb’s explanation has pedagogic appeal, but this shouldn’t detract from an appreciation of alternative explanations based on non-stochastic uncertainty.

In particular:

  • Taleb (in this talk) seems to regard rare crises as ‘acts of fate’, whereas Keynes regards them as arising from misperceptions on the part of regulators and major ‘players’. This suggests that we might be able to ameliorate them.
  • Taleb implicitly uses the language of probability theory, as if this were rational. Yet his argument (like Keynes’) undermines the notion of probability as derived from rational decision theory.
      Not playing is better whenever there is Knightian uncertainty.
      Maybe we need to be able to talk about systems that thrive on uncertainty, in addition to convex systems.
  • Taleb also views the up-side as good fortune, whereas we might view it as an innovation, by whatever combination of luck, inspiration, understanding and hard work.

See also

On fat tails versus epochs.

Dave Marsay

Uncertainty, utility and paradox


Allais

Allais devised two choices:

  1. between a definite £1M and a gamble whose expected return was much greater, but which could give nothing
  2. between two gambles

He showed that most people made choices that were inconsistent with expected utility theory, and hence paradoxical.

In the first choice, one option has a certain payoff and so is reasonably preferred. In the second, both options have similarly uncertain outcomes, and so it is reasonable to choose between them on expected utility. In general, uncertainty reasonably detracts from expected utility.
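The post does not give numbers; with the usual Allais figures (in £M), the inconsistency can be checked directly, since expected utility theory makes the two choices equivalent for any utility function:

```python
# Sketch: the standard Allais choices (usual textbook numbers, in £M).
u = lambda x: x  # try any increasing utility, e.g. lambda x: x ** 0.5

EU_1A = 1.00 * u(1)                              # £1M for certain
EU_1B = 0.89 * u(1) + 0.10 * u(5) + 0.01 * u(0)  # the risky alternative
EU_2A = 0.11 * u(1) + 0.89 * u(0)
EU_2B = 0.10 * u(5) + 0.90 * u(0)

# For ANY utility u, EU_1A - EU_1B == EU_2A - EU_2B
# (both equal 0.11*u(1) - 0.10*u(5) - 0.01*u(0)), so preferring
# 1A and 2B together is inconsistent with expected utility;
# yet that is what most people choose.
print(EU_1A - EU_1B, EU_2A - EU_2B)
```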

Ellsberg

Ellsberg devised a similar paradox, and again people consistently prefer the alternatives with the least uncertainty.
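The post does not spell the paradox out; in the usual single-urn version (30 red balls, 60 black or yellow in unknown proportion) the uncertain options have interval-valued probabilities:

```python
# Sketch: the single-urn Ellsberg setup (usual textbook numbers).
n_red, n_other, total = 30, 60, 90
p_red = n_red / total                               # precise: 1/3
p_black = (0 / total, n_other / total)              # imprecise: [0, 2/3]
p_black_or_yellow = n_other / total                 # precise: 2/3
p_red_or_yellow = (p_red, p_red + n_other / total)  # imprecise: [1/3, 1]

print(p_red, p_black)                      # most people bet on red
print(p_black_or_yellow, p_red_or_yellow)  # and on black-or-yellow
# No single probability for 'black' rationalises both preferences:
# people consistently prefer the option with the least uncertainty.
```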

See also

mathematics, illustrations, examples.

Dave Marsay

Examples of Uncertainty in Real Decisions

Uncertainty, beyond that of numeric probability, is apparent in many familiar decisions. Here the focus is on those that may be familiar, where overlooked uncertainty seems to have led to important mistakes. See Sources of Uncertainty for an overview of the situations and factors considered.

Financial crash

Before the financial crash of 2007/8, finance was largely considered from the point of view that risk is variability. Keynes was ignored, both his economics and his mathematics of uncertainty and risk. After the crash, Keynes’ economics and Keynesian economics came to the fore, and his ‘Knightian uncertainty’ became more widely recognized. It is perhaps clear that the conditions and factors above – largely based on Keynes’ work – were operative. An approach to uncertainty that sought to uncover the key factors might have been more helpful than thinking of them as sources of variability and probability distributions.

UK Miscarriages of Justice

Emotion and assessment

The UK’s most notorious miscarriages of justice often share some of the following characteristics:

An event evokes public outrage (and hence tends to be rare). There is intense pressure to find and punish those guilty. Suspects who lie outside the mainstream of society are found.

Thus one tends not to have the conditions that support reliable probability judgements.

In the Birmingham Six case, a key piece of evidence was a forensic test that showed that one of them had handled explosives ‘with a 99% certainty’. An appeal was turned down on these reflexive grounds:

“If they won, it would mean that the police were guilty of perjury; that they were guilty of violence and threats; that the confessions were involuntary and improperly admitted in evidence; and that the convictions were erroneous. That would mean that the Home Secretary would have either to recommend that they be pardoned or to remit the case to the Court of Appeal. That was such an appalling vista that every sensible person would say, ‘It cannot be right that these actions should go any further.’”

In their final appeal it was recognized that a similar forensic result could have been obtained if the suspect had handled playing cards. Similar forensic problems bedevilled other cases, such as the Maguire seven.

Bayesian reasoning

The case R v T raised some relatively mundane issues of estimation. The weight of evidence depends on an estimate of the likelihood of the evidence supposing that the suspect is innocent. In R v T, footmarks found at the scene of a murder matched an associate’s shoes. The original forensic scientist used an approximation to whole-population statistics for the prevalence of the shoes. But for many crimes the perpetrators are likely to be drawn from some local population whose members are more similar to each other than to the general population. If the print of a particular shoe is found, that shoe is likely to be more common among the associates of the victim than in the population as a whole, so the evidence is less diagnostic than whole-population statistics suggest.
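A minimal sketch of the point, using the likelihood ratio and hypothetical prevalences (not the figures from R v T):

```python
# Sketch: the reference population changes the weight of evidence.
# Likelihood ratio LR = P(match | guilty) / P(match | innocent).
p_match_given_guilty = 1.0  # the perpetrator's shoe matches by definition
general_prevalence = 0.01   # shoe frequency in the whole population
local_prevalence = 0.10     # frequency among the relevant local group

LR_general = p_match_given_guilty / general_prevalence  # 100
LR_local = p_match_given_guilty / local_prevalence      # 10
print(LR_general, LR_local)
# Whole-population statistics overstate the evidence tenfold here.
```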

Weapons of Mass Destruction

Most westerners, at least, regarded it as probable or highly probable that Saddam Hussein had WMD, leading to the decision to invade Iraq, after which none were found. From a probability perspective this may seem to be just bad luck. But it does seem odd that an assessment made on such a large and wide evidence base was so wrong.

This is clearly an area where probability estimation doesn’t meet the conditions needed to be non-contentious: Saddam was not a randomly selected dictator. Thus one might have been prompted to look for the specific factors. There was some evidence, at the time, of:

  • complexity, particularly reflexivity
  • vagueness
  • source unreliability (widely blamed).

This might have prompted more detailed consideration, for example, of Saddam’s motivation: if he had no WMD, what did he have to lose by letting it be known? It seems unlikely that a routine sensitivity analysis would have been as insightful.

Stockwell

Two weeks after London’s 7/7 bombings and a day after an attempted bombing, Jean Charles de Menezes was mistaken for a bomber and shot at Stockwell tube station. This case has some similarities to miscarriages of justice. As the Gold Commander made clear at the inquest, the key test was the balance of probability between the suspect being about to cause another atrocity and an innocent man being killed. The standard is thus explicitly probabilistic, rather than being one of ‘reasonable doubt’.

The suspect was being followed by ‘James’s team’, and James said that ‘it was probably him [the known terrorist]’. From then on nothing suggested the suspect’s innocence, and he was shot in the belief that he was about to blow himself up.

The inquest did not particularly criticise any of those involved, but from an uncertainty perspective the following give pause for thought:

  • the conditions were far from routine
  • there were some similarities with known miscarriages of justice in terrorist cases
  • the specific factors above were present

More particularly:

  • The Gold Commander had access to relevant information that James lacked, which appears not to have been taken into account.
  • James regarded the request for a ‘probability assessment’ (as against hard evidence) as improper, and only provided one under pressure.
  • In assessing probability, nothing that James’ team had seen (apart from some nervousness) suggested that the suspect was a terrorist. The main thing they had been told was that the suspect had come out of the flat of the known terrorist, but by then the Gold Commander knew that the terrorist’s flat had a shared doorway, so the probability assessment should have been reduced accordingly.
  • Those who shot the suspect were relying on James’ judgement, but were unaware of the circumstances in which he had given it.

With hindsight it may be significant that:

  • The suspect had got off the bus at Brixton, found the station to be closed, and got back on. The station was closed due to a security alert, but – not knowing this – the behaviour may have seemed to be anti-surveillance. [The inquest found that this innocent behaviour did not contribute to the death.]
  • The Gold Commander was in a reflexive situation: if the suspect was not shot then it must have been assessed that ‘on the balance of probability’ the suspect was innocent, in which case he ought not to have been followed.

Time was pressing, but a fuller consideration of uncertainty might have led to:

  • James being asked to supply descriptions of, and/or likelihoods for, what he had seen under the terrorist and innocent hypotheses, rather than ‘final’ probabilities (see the sketch below).
  • Consideration being given to innocent explanations for the suspect’s behaviour.
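A minimal sketch of why likelihoods travel better than ‘final’ probabilities (the numbers are hypothetical): the field team reports how likely its observations are under each hypothesis, and the commander, who knows the base rate, supplies the prior.

```python
# Sketch: combining field likelihoods with the commander's prior.
p_obs_given_terrorist = 0.5  # the observations, if he is the bomber
p_obs_given_innocent = 0.3   # the same observations, if he is not

prior = 0.2  # commander's prior, e.g. knowing about the shared doorway

posterior = (prior * p_obs_given_terrorist) / (
    prior * p_obs_given_terrorist + (1 - prior) * p_obs_given_innocent
)
print(round(posterior, 2))  # ~0.29
# A bare 'it was probably him' from the field cannot be corrected
# for information, such as the shared doorway, that only the
# commander holds; likelihoods can.
```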

More

Ulrich Beck opined (1992) that the Knightian ‘true uncertainty’ aspects of risk, particularly the reflexive ones, are being mishandled, with widespread adverse consequences. Naomi Klein has a similar view. Here are some relatively mundane specifics.

Economic Recovery from 2007/8

Robert Skidelsky, an advocate of Keynes and his view of uncertainty, has noted:

Keynes thought that the chief implicit assumption underlying the classical theory of the economy was that of perfect knowledge. “Risks,” he wrote, “were supposed to be capable of an exact actuarial computation. The calculus of probability … was supposed to be capable of reducing uncertainty to the same calculable status as certainty itself.”

For Keynes, this is untenable: “Actually…we have as a rule only the vaguest idea of any but the most direct consequences of our acts.” This made investment, which is always a bet on the future, dependent on fluctuating states of confidence. Financial markets, through which investment is made, were always liable to collapse when something happened to disturb business confidence. Therefore, market economies were inherently unstable.

Unless we start discussing economics in a Keynesian framework, we are doomed to a succession of crises and recessions. If we don’t, the next one will come sooner than we think.

Climate Change

Much of the climate change ‘debate’ seems to be driven by preconceived ideas and special interests, but these positions tend to align with different views on uncertainty.

Mobile phone cancer risk

The International Agency for Research on Cancer (IARC), part of the World Health Organization (WHO), has issued a press release stating that it:

has classified radiofrequency electromagnetic fields as possibly carcinogenic to humans (Group 2B), based on an increased risk for glioma, a malignant type of brain cancer, associated with wireless phone use.

… The conclusion means that there could be some risk, and therefore we need to keep a close watch for a link between cell phones and cancer risk.

Here Group 2B, ‘Possibly carcinogenic to humans’, means: “This category is used for agents for which there is limited evidence of carcinogenicity … .” Thus it is possible that there is no carcinogenicity.

The understanding uncertainty blog has noted how the British media have confused the issues, giving the impression that there was an increased risk of cancer. But from a probability perspective, what does ‘could be some risk’ mean? If the probability of risk r is p(r), then (from a standard Bayesian viewpoint) the overall risk is ∫r·p(r)dr, which is positive unless there is definitely no risk. Thus if ‘there could be some risk’ then there is some risk. On the other hand, if we assess the risk as an interval, [0, small], then it is clear that there could be no risk, but (as the IARC suggests) further research is required to reduce the uncertainty. Note the IARC’s statement that:

The Working Group did not quantitate the risk; however, one study of past cell phone use (up to the year 2004), showed a 40% increased risk for gliomas in the highest category of heavy users (reported average: 30 minutes per day over a 10‐year period).

This is presumably the worst case to hand (balancing apparent effect and weight of evidence), so that (confusion of language apart) it is easy to interpret the release in terms of uncertainty, noting the link to heavy usage. It is unfortunate that the British media did not: maybe we do need a more nuanced language?
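A minimal sketch of the contrast drawn above, with made-up numbers:

```python
# Sketch: 'could be some risk' under a Bayesian point value vs an interval.
r1 = 1e-4          # a hypothetical small positive risk
p_some_risk = 0.5  # Bayesian weight on r = r1 (the rest on r = 0)

bayes_risk = p_some_risk * r1  # E[r] = integral of r*p(r) dr, here 5e-05
interval_risk = (0.0, r1)      # imprecise assessment: [0, small]

print(bayes_risk)     # strictly positive: 'there IS some risk'
print(interval_risk)  # includes 0: 'there COULD be no risk'
```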

See Also

Reasoning under uncertainty methods, biases and uncertainty, metaphors, scaling.

David Marsay