Probability as a guide to life

‘Probability is the very guide to life.’

Cicero may have been right, but ‘probability’ means something quite different nowadays from what it did millennia ago. So what kind of probability is a suitable guide to life, and when?

Suppose that we are told that ‘P(X) = p’. Often there is some implied real or virtual population, a proportion p of which has the property X. To interpret such a probability statement we need to know what the relevant population is. Such statements are then normally reliable. More controversial are conditional probabilities, such as ‘P(X|Y) = p’. If you satisfy Y, does P(X) = p ‘for you’?

Suppose that:

  1. All the properties of interest (such as X and Y) can be expressed as unions of some disjoint basis of properties, B.
  2. For all such basis properties, B, P(X|B) is known.
  3. The conditional probabilities of interest are derived from the basis properties in the usual way. (E.g., P(X|B1∪B2) = (P(B1).P(X|B1) + P(B2).P(X|B2))/P(B1∪B2).)

The conditional probabilities constructed in this way are meaningful, but if we are interested in some other set, Z, the conditional probability P(X|Z) could take a range of values. But then we need to reconsider decision making. Instead of maximising a probability (or utility), the following heuristics may apply:

  • If the range makes significant difference, try to get more precise data. This may be by taking more samples, or by refining the properties considered.
  • Consider the best outcome for the worst-case probabilities.
  • If the above is not acceptable, make some reasonable assumptions until an acceptable result is possible.

For example, suppose that we have some urns, each containing a mix of balls, some of which are white. We can choose an urn and then pick a ball at random. We want white balls. What should we do? The conventional rule consists of assessing the proportion of white balls in each urn and picking an urn with the highest proportion. This is uncontroversial if our assessments are reliable. But suppose we are faced with an urn with an unknown mix? Conventionally our assessment should not depend on whether we want to obtain or avoid a white ball. But if we want white balls the worst-case proportion is no white balls, and we avoid this urn; whereas if we want to avoid white balls the worst-case proportion is all white balls, and we again avoid this urn.

If our assessments are not biased then we would expect to do better with the conventional rule most of the time and in the long-run. For example, if the non-white balls are black, and urns are equally likely to be filled with black as white balls, then assessing that an urn with unknown contents has half white balls is justified. But in other cases we just don’t know, and choosing this urn we could do consistently badly. There is a difference between an urn whose contents are unknown, but for which you have good grounds for estimating proportion, and an urn where you have no grounds for assessing proportion.
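To illustrate the worst-case heuristic, here is a minimal sketch in which each urn’s proportion of white balls is known only to within an interval. The intervals are invented for illustration; nothing here comes from real data.

```python
# Maximin choice among urns whose proportion of white balls is only
# known to lie within an interval (illustrative numbers).
urns = {
    "A": (0.4, 0.6),   # roughly half white, reasonably well estimated
    "B": (0.0, 1.0),   # completely unknown mix
    "C": (0.3, 0.35),  # precisely estimated, fewer white
}

def best_urn_wanting_white(urns):
    # Judge each urn by its worst case (the lower bound) and pick the best.
    return max(urns, key=lambda u: urns[u][0])

def best_urn_avoiding_white(urns):
    # Now the worst case is the upper bound; pick the urn that minimises it.
    return min(urns, key=lambda u: urns[u][1])

print(best_urn_wanting_white(urns))   # "A": the unknown urn B is avoided
print(best_urn_avoiding_white(urns))  # "C": B is avoided again
```

As in the urn story above, the completely unknown urn is avoided whichever way our interests point.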

If precise probabilities are to be the very guide to life, it had better be a dull life. For more interesting lives imprecise probabilities can be used to reduce the possibilities. It is often informative to identify worst-case options, but one can be left with genuine choices. Conventional rationality is the only way to reduce living to a formula: but is it such a good idea?

Dave Marsay


Disease

“You are suffering from a disease that, according to your manifest symptoms, is either A or B. For a variety of demographic reasons disease A happens to be nineteen times as common as B. The two diseases are equally fatal if untreated, but it is dangerous to combine the respectively appropriate treatments. Your physician orders a certain test which, through the operation of a fairly well understood causal process, always gives a unique diagnosis in such cases, and this diagnosis has been tried out on equal numbers of A- and B-patients and is known to be correct on 80% of those occasions. The tests report that you are suffering from disease B. Should you nevertheless opt for the treatment appropriate to A … ?”

My thoughts below …

.

.

.

.

.

.

.

.

If, following Good, we use

P(A|B:C) to denote the probability of A, conditional on B in the context C,
Odds(A1/A2|B:C) to denote the odds P(A1|B:C)/P(A2|B:C), and
LR(B|A1/A2:C) to denote the likelihood ratio, P(B|A1:C)/P(B|A2:C),

then we want

Odds(A/B | diagnosis of B : you), given
Odds(A/B : population) and
P(diagnosis of B | B : test), and similarly for A.

This looks like a job for Bayes’ rule! In Odds form this is

Odds(A1/A2|B:C) = LR(B|A1/A2:C).Odds(A1/A2:C).

If we ignore the dependence on context, this would yield

Odds(A/B | diagnosis of B ) = LR(diagnosis of B | A/B ).Odds(A/B).
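Taking the puzzle’s figures at face value, and ignoring any difference of context for the moment, the arithmetic runs as follows (a sketch, not an endorsement of ignoring the contexts):

```python
# Bayes' rule in odds form for the disease puzzle, taking the stated
# figures at face value and ignoring any difference of context.
prior_odds_A_over_B = 19.0   # A is nineteen times as common as B
p_diag_B_given_A = 0.20      # test wrong on A-patients 20% of the time
p_diag_B_given_B = 0.80      # test right on B-patients 80% of the time

# LR(diagnosis of B | A/B)
likelihood_ratio = p_diag_B_given_A / p_diag_B_given_B
# Odds(A/B | diagnosis of B)
posterior_odds = likelihood_ratio * prior_odds_A_over_B
p_A = posterior_odds / (1 + posterior_odds)

print(posterior_odds)  # 4.75: despite the test, A remains the better bet
print(round(p_A, 3))   # 0.826
```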

But are we justified in ignoring the differences? For simplicity, suppose that the tests were conducted on a representative sample of the population, so that we have Odds(A/B | diagnosis of B : population), but still need Odds(A/B | diagnosis of B : you). According to Blackburn’s population indifference principle (PIP) you ‘should’ use the whole population statistics, but his reasons seem doubtful. Suppose that:

  • You thought yourself in every way typical of the population as a whole.
  • The prevalence of diseases among those you know was consistent with the whole population data.

Then PIP seems more reasonable. But if you are of a minority ethnicity – for example – with many relatives, neighbours and friends who share your distinguishing characteristic, then it might be more reasonable to use an informal estimate based on a more appropriate population, rather than a better quality estimate based on a less appropriate population. (This is a kind of converse to the availability heuristic.)

See Also

My notes on Cohen for a discussion of alternatives.

Other, similar, Puzzles.

My notes on probability.

Dave Marsay

Cab accident

“In a certain town blue and green cabs operate in a ratio of 85 to 15, respectively. A witness identifies a cab in a crash as green, and the court is told [based on a test] that in the relevant light conditions he can distinguish blue cabs from green ones in 80% of cases. [What] is the probability (expressed as a percentage) that the cab involved in the accident was blue?” (See my notes on Cohen for a discussion of alternatives.)

For bonus points … if you were involved, what questions might you reasonably ask before estimating the required percentage? Does your first answer imply some assumptions about the answers, and are they reasonable?

My thoughts below:

.

.

.

.

.

.

If, following Good, we use

P(A|B:C) to denote the probability of A, conditional on B in the context C,
Odds(A1/A2|B:C) to denote the odds P(A1|B:C)/P(A2|B:C), and
LR(B|A1/A2:C) to denote the likelihood ratio, P(B|A1:C)/P(B|A2:C).

Then we want P(blue| witness: accident), which can be derived by normalisation from Odds(blue/green| witness : accident).
We have Odds(blue/green: city) and the statement that the witness “can distinguish blue cabs from green ones in 80% of cases”.

Let us suppose (as I think is the intention) that this means that we know Odds(witness| blue/green: test) under the test conditions. This looks like a job for Bayes’ rule! In Odds form this is

Odds(A1/A2|B:C) = LR(B|A1/A2:C).Odds(A1/A2:C),

as can be verified from the identity P(A|B:C) = P(A&B:C)/P(B:C) whenever P(B:C)≠0.

If we ignore the contexts, this would yield:

Odds(blue/green| witness) = LR(witness| blue/green).Odds(blue/green),

as required. But this would only be valid if the context made no difference. For example, suppose that:

  • Green cabs have many more accidents than blue ones.
  • The accident was in an area where green cabs were more common.
  • The witness knew that blue cabs were much more common than green and yet was still confident that it was a green cab.

In each case, one would wish to re-assess the required odds. Would it be reasonable to assume that none of the above applied, if one didn’t ask?
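For comparison, here is what the calculation gives if we do take the statements at face value and ignore the contexts (exactly the assumptions the bullet points above call into question):

```python
# Bayes' rule in odds form for the cab puzzle, assuming no difference
# between the test, city and accident contexts.
prior_odds_blue_over_green = 85.0 / 15.0   # city-wide ratio of cabs
p_says_green_given_blue = 0.20             # witness wrong about blue cabs
p_says_green_given_green = 0.80            # witness right about green cabs

# LR(witness says green | blue/green)
lr = p_says_green_given_blue / p_says_green_given_green
# Odds(blue/green | witness says green)
posterior_odds = lr * prior_odds_blue_over_green
p_blue = posterior_odds / (1 + posterior_odds)

print(round(p_blue * 100, 1))  # 58.6: more likely blue, despite the witness
```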

See Also

Other Puzzles.

My notes on probability.

Dave Marsay

Are more intelligent people more biased?

It has been claimed that:

U.S. intelligence agents may be more prone to irrational inconsistencies in decision making compared to college students and post-college adults … .

This is scary, if unsurprising to many. Perhaps more surprisingly:

Participants who had graduated college seemed to occupy a middle ground between college students and the intelligence agents, suggesting that people with more “advanced” reasoning skills are also more likely to show reasoning biases.

It seems as if there is some serious mis-education in the US. But what is it?

The above conclusions are based on responses to the following two questions:

1. The U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Do you: (a) Save 200 people for sure, or (b) choose the option with 1/3 probability that 600 will be saved and a 2/3 probability no one will be saved?

2. In the same scenario, do you (a) pick the option where 400 will surely die, or instead (b) a 2/3 probability that all 600 will die and a 1/3 probability no one dies?

You might like to think about your answers to the above, before reading on.

.

.

.

.

.

The paper claims that:

Notably, the different scenarios resulted in the same potential outcomes — the first option in both scenarios, for example, has a net result of saving 200 people and losing 400.

Is this what you thought? You might like to re-read the questions and reconsider your answer, before reading on.

.

.

.

.

.

The questions may appear to contain statements of fact, that we are entitled to treat as ‘given’. But in real-life situations we should treat such questions as utterances, and use the appropriate logics. This may give the same result as taking them at face value – or it may not.

It is (sadly) probably true that if this were a UK school examination question then the appropriate logic would be (1) to treat the statements ‘at face value’ and (2) to assume that if 200 people will be saved ‘for sure’ then exactly 200 people will be saved, no more. On the other hand, this is just the kind of question that I ask mathematics graduates to check that they have an adequate understanding of the issues before advising decision-takers. In the questions as set, the (b) options are the same, but (1a) is preferable to (2a), unless one is in the very rare situation of knowing exactly how many will die. With this interpretation, the more education and the more experience, the better the decisions – even in the US 😉

It would be interesting to repeat the experiment with less ambiguous wording. Meanwhile, I hope that intelligence agents are not being re-educated. Or have I missed something?

Also

Kahneman’s Thinking, fast and slow has a similar example, in which we are given ‘exact scientific estimates’ of probable outcomes, avoiding the above ambiguity. This might be a good candidate experimental question.

Kahneman’s question is not without its own subtleties, though. It concerns the efficacy of ‘programs to combat disease’. It seems to me that if I was told that a vaccine would save 1/3 of the lives, I would suppose that it had been widely tested, and that the ‘scientific’ estimate was well founded. On the other hand, if I was told that there was a 2/3 chance of the vaccine being ineffective I would suppose that it hadn’t been tested adequately, and the ‘scientific’ estimate was really just an informed guess. In this case, I would expect the estimate of efficacy to be revised in the light of new information. It could even be that while some scientist has made an honest estimate based on the information that they have, some other scientist (or technician) already knows that the vaccine is ineffective. A program based on such a vaccine would be more complicated and ‘risky’ than one based on a well-founded estimate, and so I would be reluctant to recommend it. (Ideally, I would want to know a lot more about how the estimates were arrived at, but if pressed for a quick decision, this is what I would do.)

Could the framing make a difference? In one case, we are told that ‘scientifically’, 200 people will be saved. But scientific conclusions always depend on assumptions, so really one should say ‘if … then 200 will be saved’. My experience is that otherwise the outcome should not be expected, and that saving 200 is the best that should be expected. In the other case we are told that ‘400 will die’. This seems to me to be a very odd thing to say. From a logical perspective one would like to understand the circumstances in which someone would put it like this. I would be suspicious, and might well (‘irrationally’) avoid a program described in that way.

Addenda

The example also shows a common failing, in assuming that the utility is proportional to lives lost. Suppose that when we are told that lives will be ‘saved’ we assume that we will get credit, then we might take the utility from saving lives to be number of lives saved, but with a limit of ‘kudos’ at 250 lives saved. In this case, it is rational to save 200 ‘for sure’, as the expected credit from taking a risk is very much lower. On the other hand, if we are told that 400 lives will be ‘lost’ we might assume that we will be blamed, and take the utility to be minus the lives lost, limited at -10. In this case it is rational to take a risk, as we have some chance of avoiding the worst case utility, whereas if we went for the sure option we would be certain to suffer the worst case.
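As a sketch, using the hypothetical caps suggested above (+250 ‘kudos’ for lives saved, −10 ‘blame’ for lives lost):

```python
# Expected utilities under the asymmetric, capped utilities suggested above.
# The caps (+250 kudos, -10 blame) are illustrative, not empirical.

def kudos(saved):
    return min(saved, 250)   # credit for saving lives, capped

def blame(lost):
    return max(-lost, -10)   # blame for losing lives, floored

# 'Save' framing: 200 saved for sure vs 1/3 chance of saving all 600.
eu_sure_save = kudos(200)                                # 200
eu_gamble_save = (1/3) * kudos(600) + (2/3) * kudos(0)   # 250/3, about 83.3
print(eu_sure_save > eu_gamble_save)   # True: the sure option is rational

# 'Lose' framing: 400 die for sure vs 2/3 chance that all 600 die.
eu_sure_lose = blame(400)                                # -10, the worst case
eu_gamble_lose = (2/3) * blame(600) + (1/3) * blame(0)   # -20/3, about -6.7
print(eu_gamble_lose > eu_sure_lose)   # True: the gamble is rational
```

With these caps, the same person rationally chooses the sure thing in the ‘save’ framing and the gamble in the ‘lose’ framing, without any inconsistency.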

These kinds of asymmetric utilities may be just the kind that experts experience. More study required?


Dave Marsay

Mathematics, psychology, decisions

I attended a conference on the mathematics of finance last week. It seems that things would have gone better in 2007/8 if only policy makers had employed some mathematicians to critique the then dominant dogmas. But I am not so sure. I think one would need to understand why people went along with the dogmas. Psychology, such as behavioural economics, doesn’t seem to help much, since although it challenges some aspects of the dogmas it fails to challenge (and perhaps even promotes) other aspects, so that it is not at all clear how it could have helped.

Here I speculate on an answer.

Finance and economics are either empirical subjects or they are quasi-religious, based on dogmas. The problems seem to arise when they are the latter but we mistake them for the former. If they are empirical then they have models whose justification is based on evidence.

Naïve inductivism boils down to the view that whatever has always (never) been the case will continue always (never) to be the case. Logically it is untenable, because one often gets clashes, where two different applications of naïve induction are incompatible. But pragmatically, it is attractive.

According to naïve inductivism we might suppose that if the evidence has always fitted the models, then actions based on the supposition that they will continue to do so will be justified. (Hence, ‘it is rational to act as if the model is true’). But for something as complex as an economy the models are necessarily incomplete, so that one can only say that the evidence fitted the models within the context as it was at the time. Thus all that naïve inductivism could tell you is that ‘it is rational’ to act as if the model is true, unless and until the context should change. But many of the papers at the mathematics of finance conference were pointing out specific cases in which the actions ‘obviously’ changed the context, so that naïve inductivism should not have been applied.

It seems to me that one could take a number of attitudes:

  1. It is always rational to act on naïve inductivism.
  2. It is always rational to act on naïve inductivism, unless there is some clear reason why not.
  3. It is always rational to act on naïve inductivism, as long as one has made a reasonable effort to rule out any contra-indications (e.g., by considering ‘the whole’).
  4. It is only reasonable to act on naïve inductivism when one has ruled out any possible changes to the context, particularly reactions to our actions, by considering an adequate experience base.

In addition, one might regard the models as conditionally valid, and hedge accordingly. (‘Unless and until there is a reaction’.) Current psychology seems to suppose (1) and hence has little to help us understand why people tend to lean too strongly on naïve inductivism. It may be that a belief in (1) is not really psychological, but simply a consequence of education (i.e., cultural).

See Also

Russell’s Human Knowledge. My media for the conference.

Dave Marsay

Making your mind up (NS)

Difficult choices to make? A heavy dose of irrationality may be just what you need.

Comment on a New Scientist article, 12 Nov. 2011, pg 39.

The on-line version is Decision time: How subtle forces shape your choices: Struggling to make your mind up? Interpret your gut instincts to help you make the right choice.

The article talks a lot about decision theory and rationality. No definitions are given, but it seems to be assumed that all decisions are analogous to decisions about games of chance. It is clearly supposed, without argument, that the objective is always to maximize expected utility. This might make sense for gamblers who expect to live forever without ever running out of funds, but more generally it is unmotivated.

Well-known alternatives include:

  • taking account of the chances of going broke (short-term) and never getting to the ‘expected’ (long-term) returns.
  • taking account of uncertainty, as in Ellsberg’s approach.
  • taking account of the cost of evaluating options, as in March’s ‘bounded rationality’.

The logic of inconsistency

A box claims that ‘intransitive preferences’ give mathematicians a head-ache. But as a mathematician I find that some people’s assumptions about rationality give me a headache, especially if they try to force them on to me.

Suppose that I prefer apples to plums to pears, but I prefer a mixture to having just apples. If I am given the choice between apples and plums I will pick apples. If I am then given the choice between plums and pears I will pick plums. If I am now given the choice between apples and pears I will pick pears, to have a good spread of fruit. According to the article I am inconsistent and illogical: I should have chosen apples. But what kind of logic is it in which I would end up with all meat and no gravy? Or all bananas and no custard?
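A sketch of how such pairwise choices can maximise a perfectly consistent utility over the resulting basket. The fruit values and ‘variety bonus’ are invented for illustration:

```python
# Sequential fruit choices that look 'intransitive' pairwise but maximise
# a consistent utility over the basket. Values and bonus are made up.
value = {"apple": 3, "plum": 2, "pear": 1}   # apples > plums > pears
VARIETY_BONUS = 2.5                          # reward for a new kind of fruit

def gain(basket, fruit):
    # Marginal utility of adding one more fruit to the current basket.
    return value[fruit] + (VARIETY_BONUS if fruit not in basket else 0)

def choose(basket, a, b):
    pick = a if gain(basket, a) >= gain(basket, b) else b
    basket.append(pick)
    return pick

basket = []
print(choose(basket, "apple", "plum"))  # apple: preferred outright
print(choose(basket, "plum", "pear"))   # plum
print(choose(basket, "apple", "pear"))  # pear: variety now beats another apple
```

Each choice maximises the same utility function; it is only the pairwise summary that looks ‘intransitive’.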

Another reason I might pick pears is if I wanted to acquire things that appeared scarce. Being offered a choice of apples or plums suggests that neither is scarce, so what I really want is pears. In this case, if I were subsequently given a choice of plums or pears I would choose pears, even though I actually prefer plums. A question imparts information, and is not just a means of eliciting it.

In criticising rationality one needs to consider exactly what the notion of ‘utility’ is, and whether or not it is appropriate.

Human factors

On the last page it becomes clear that ‘utility’ is even narrower than one might suppose. Most games of chance have an expected monetary loss for the gambler and thus – it seems – such gamblers are ‘irrational’. But maybe there is something about the experience that they value. They may, for example, be developing friendships that will stand them in good stead. Perhaps if we counted such expected benefits, gambling might be rational. Could buying a lottery ticket be rational if it gave people hope and something to talk about with friends?

If we expect that co-operation or conformity have a benefit, then could not such behaviours be rational? The example is given of someone who donates anonymously to charity. “In purely evolutionary terms, it is a bad choice.” But why? What if we feel better about ourselves and are able to act more confidently in social situations where others may be donors?

Retirement

“Governments wanting us to save up for retirement need to understand why we are so bad at making long-term decisions.”

But are we so very bad? This could do with much more analysis. On the article’s view of rationality, under-saving could be caused by a combination of:

  • poor expected returns on savings (especially at the moment)
  • pessimism about life expectancy
  • heavy discounting of future value
  • an anticipation of a need to access the funds before retirement
    (e.g., due to redundancy or emigration).

The article suggests that there might also be some biases. These should be considered, although they are really just departures from a normative notion of rationality that may not be appropriate. But I think one would really want to consider broader factors affecting expected utility. Maybe, for example, investing in one’s children’s future seems a more sensible investment. Similarly, in some cultures, investing in an aura of success (sports car, smart suits, …) might be a rational gamble. Is it that ‘we’ as individuals are bad at making long-term decisions, or has society as a whole led to a situation in which for many people it is ‘rational’ to save less than governments think we ought to? The notion of rationality in the article hardly seems appropriate to address this question.
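The listed factors can be folded into a rough expected-present-value sketch of saving £1 until retirement. Every parameter value here is invented purely for illustration:

```python
# Expected present value of saving 1 unit until retirement, folding in the
# factors listed above. All parameter values are invented for illustration.
years = 30
annual_return = 0.01        # poor expected returns on savings
discount_rate = 0.05        # heavy discounting of future value
p_survive = 0.90            # pessimism about reaching retirement
p_early_access = 0.20       # chance of needing the funds early
early_access_value = 0.7    # value recovered if accessed early (penalties)

value_at_retirement = (1 + annual_return) ** years
discounted = value_at_retirement / (1 + discount_rate) ** years

expected_value = (p_early_access * early_access_value
                  + (1 - p_early_access) * p_survive * discounted)
print(round(expected_value, 2))  # well below 1: 'rational' not to save
```

With plausible-looking parameters of this sort, under-saving needs no appeal to bias at all.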

Conclusion

The article raises some important issues but takes much too limited a view of even mathematical decision theory and seems – uncritically – to suppose that it is universally normatively correct. Maybe what we need is not so much irrationality as the right rationality, at least as a guide.

See also

Kahneman: anomalies paper, Review, Judgment. Uncertainty: Cosmides and Tooby, Ellsberg. Examples. Inferences from utterances.

Dave Marsay

Kahneman et al’s Anomalies

Daniel Kahneman, Jack L. Knetsch, Richard H. Thaler Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias The Journal of Economic Perspectives, 5(1), pp. 193-206, Winter 1991

[Some] “behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences … . An empirical result qualifies as an anomaly if it is difficult to “rationalize,” or if implausible assumptions are necessary to explain it within the paradigm.”

The first candidate anomaly is:

“A wine-loving economist we know purchased some nice Bordeaux wines … . The wines have greatly appreciated in value, so that a bottle that cost only $10 when purchased would now fetch $200 at auction. This economist now drinks some of this wine occasionally, but would neither be willing to sell the wine at the auction price nor buy an additional bottle at that price.”

This is an example of the effects in the title. But is it anomalous? Suppose that the economist can spare $120 but not $200 on self-indulgences, of which wine is her favourite. Would this not explain why she might buy a crate cheaply but not pay a lot for a bottle, nor sell it at a profit? The anomalies seem to be relative to expected utility theory. However, some of the other examples may be genuine psychological effects.

See also

Kahneman’s review, Keynes’ General Theory

Dave Marsay