Uncertainty is not just probability

My paper, based on the discussion paper referred to in a previous post, has just been published. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

Dave Marsay

Which Car?: a puzzle

Here’s a variation on some of my other uncertainty puzzles:

You are thinking of buying a new car. Your chosen model comes in a choice of red or silver. You are about to buy a red one when you learn that red car drivers have twice the accident rate of those who drive silver ones.

Should you switch, and why?

Dave Marsay

Football – substitution

A Spanish banker has made some interesting observations about a football coach's substitution choices.

The coach can make one last substitution. He can substitute an attacker for a defender or vice versa. With more attackers the team is more likely to score, but also more likely to be scored against. Substituting a defender makes the final score less uncertain. Hence there is some link with Ellsberg’s paradox. What should the coach do? How should he decide?

 

 

A classic solution would be to estimate the probability of getting through the round, depending on the choice made. But is this right?

 

Pause for thought …

 

As the above banker observes, a ‘dilemma’ arises in something like the last round of the 2012 Group C matches, where the probabilities depend, reflexively, on the decisions of the other coaches. He gives the details in terms of game theory. But what is the general approach?

 

 

The classic approach is to set up a game between the coaches. One derives a payoff matrix from which the ‘maximin’ strategy can be determined. Is this the best approach?
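A maximin choice can be read straight off a payoff matrix. Here is a minimal sketch; the options and payoff numbers below are invented purely for illustration (say, the coach's assumed probability of progressing from the group under each pair of choices):

```python
# Maximin from a payoff matrix -- hypothetical numbers, for illustration only.
# Rows: our coach's substitution options; columns: the rival coach's options.
payoffs = {
    "attacker": {"rival_attacks": 0.50, "rival_defends": 0.70},
    "defender": {"rival_attacks": 0.55, "rival_defends": 0.60},
}

def maximin(payoffs):
    """Pick the row whose worst-case payoff is largest."""
    return max(payoffs, key=lambda row: min(payoffs[row].values()))

print(maximin(payoffs))  # 'defender': its worst case (0.55) beats 0.50
```

Note the shape of the answer: the attacking option has the better best case (0.70), but maximin looks only at worst cases, which is exactly why its appropriateness is worth questioning.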

 

 

If you are in doubt, then that is ‘radical uncertainty’. If not, then consider the alternative in the article: perhaps you should have been in doubt. The implications, as described in the article, have a wider importance, and not just for Spanish bankers.

See Also

Other Puzzles, and my notes on uncertainty.

Dave Marsay 

The Sultan’s daughters

The IMA website has the following puzzle:

A sultan has 100 daughters. A commoner may be given a chance to marry one of the daughters, but he must first pass a test. He will be presented with the daughters one at a time. As each one comes before him she will tell him the size of her dowry, and he must then decide whether to accept or reject her (he is not allowed to return to a previously rejected daughter). However, the sultan will only allow the marriage to take place if the commoner chooses the daughter with the highest dowry. If he gets it wrong he will be executed! The commoner knows nothing about the distribution of dowries. What strategy should he adopt?

You might want to think about it first. The ‘official’ answer is …

 

 

 

 

 

One strategy the commoner could adopt is simply to pick a daughter at random. This would give him a 1/100 chance of getting the correct daughter. [But] the probability of the commoner accepting the daughter with the highest dowry is about 37% if he rejects the first 37 daughters and then chooses the next one whose dowry is greater than any he’s seen so far. This is a fraction 1/e of the total number of daughters (rounded to the nearest integer) and is significantly better than just choosing at random!
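The quoted strategy is easy to check by simulation, assuming (as the official answer implicitly does) that the dowries are presented in random order:

```python
import random

def best_dowry_found(n=100, skip=37, trials=20_000, rng=random):
    """Estimate the success rate of 'reject the first `skip` daughters,
    then accept the first one whose dowry beats all seen so far'."""
    wins = 0
    for _ in range(trials):
        dowries = [rng.random() for _ in range(n)]
        threshold = max(dowries[:skip])
        chosen = None
        for d in dowries[skip:]:
            if d > threshold:
                chosen = d  # first record-breaker after the rejection phase
                break
        if chosen == max(dowries):  # marriage only if she has the top dowry
            wins += 1
    return wins / trials

print(best_dowry_found())  # ≈ 0.37, the 1/e law
```

Note that if the highest dowry appears among the first 37 daughters, no later record-breaker exists and the commoner fails, which is why the rate is about 37% rather than higher.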

My question:

Given that the sultan knows what dowry each daughter has, in which order should he present the daughters to minimise the chance of one of them having to marry the commoner? With this in mind, what is the commoner’s best strategy? (And what has this to do with the financial crisis?)

See also

More puzzles.

Dave Marsay

Medical treatment puzzle

These are attempts at examples of uncertainty that may be familiar.

Medical treatment

You go to the doctor, who tells you that you have a disease for which there are two treatments: a well-established one and a new one, which – based on one trial – seems slightly better. You are undecided between treatments.

You now realise that your identical twin had the same disease a while back, had the old treatment, and it worked. Is this a good reason to decide on the old treatment? Why, or why not?

Now, a variation. Suppose that the reported advantage of the new treatment is such that after realising that your twin had the same disease, you are undecided. You get home and hear on the news that, whereas you had thought of all instances of the disease as being similar, it has just been found that there are 10 distinct variants that may respond differently to particular treatments. Which treatment do you now prefer, and why?

Trains

This is – hopefully – a simpler example, raising a sub-set of the issues.

You need to go to Bigcity regularly. For the same cost, and at convenient times, you could go by X’s or Y’s train (by different routes). The rail companies publish standardised, audited statistics concerning numbers of trains cancelled per week, average delays and the number of trains that are more than 10 minutes late. X’s trains seem marginally better. But you have a colleague who uses the same train that you would, provided by Y, and has found it to be reliable and on time. Which train do you choose, and why?

See also

Similar puzzles.

Dave Marsay

Making your mind up (NS)

Difficult choices to make? A heavy dose of irrationality may be just what you need.

Comment on a New Scientist article, 12 Nov. 2011, pg 39.

The on-line version is Decision time: How subtle forces shape your choices: Struggling to make your mind up? Interpret your gut instincts to help you make the right choice.

The article talks a lot about decision theory and rationality. No definitions are given, but it seems to be assumed that all decisions are analogous to decisions about games of chance. It is clearly supposed, without motivation, that the objective is always to maximize expected utility. This might make sense for gamblers who expect to live forever without ever running out of funds, but is otherwise unmotivated.

Well-known alternatives include:

  • taking account of the chances of going broke (short-term) and never getting to the ‘expected’ (long-term) returns.
  • taking account of uncertainty, as in Ellsberg’s approach.
  • taking account of the cost of evaluating options, as in March’s ‘bounded rationality’.
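The first of these alternatives – the risk of going broke before the ‘expected’ long run arrives – can be illustrated with a toy betting simulation. All the numbers below are hypothetical: a bet that doubles your stake with probability 0.6, repeated 50 times.

```python
import random

def final_wealth(fraction, rounds=50, p=0.6, rng=random):
    """Repeatedly stake `fraction` of current wealth on a favourable bet:
    with probability p the stake is doubled, otherwise it is lost."""
    w = 1.0
    for _ in range(rounds):
        stake = fraction * w
        w += stake if rng.random() < p else -stake
    return w

random.seed(0)
# Staking everything maximises *expected* wealth (1.2**50), but you end
# with anything at all only if every single bet wins (p = 0.6**50):
print(sum(final_wealth(1.0) > 0 for _ in range(10_000)))  # almost always 0
# A modest (Kelly-style, 2p - 1 = 0.2) fraction usually grows instead:
print(sum(final_wealth(0.2) > 1 for _ in range(10_000)))
```

So a strategy can maximise expected utility per bet and still lead, with near-certainty, to ruin – the gap the article’s notion of rationality glosses over.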

The logic of inconsistency

A box claims that ‘intransitive preferences’ give mathematicians a headache. But as a mathematician I find that some people’s assumptions about rationality give me a headache, especially if they try to force them on to me.

Suppose that I prefer apples to plums to pears, but I prefer a mixture to having just apples. If I am given the choice between apples and plums I will pick apples. If I am then given the choice between plums and pears I will pick plums. If I am now given the choice between apples and pears I will pick pears, to have a good spread of fruit. According to the article I am inconsistent and illogical: I should have chosen apples. But what kind of logic is it in which I would end up with all meat and no gravy? Or all bananas and no custard?

Another reason I might pick pears would be if I wanted to acquire things that appeared scarce. Thus being offered a choice of apples or plums suggests that neither is scarce, so what I really want is pears. In this case, if I were subsequently given a choice of plums or pears I would choose pears, even though I actually prefer plums. A question imparts information, and is not just a means of eliciting information.

In criticising rationality one needs to consider exactly what the notion of ‘utility’ is, and whether or not it is appropriate.

Human factors

On the last page it becomes clear that ‘utility’ is even narrower than one might suppose. Most games of chance have an expected monetary loss for the gambler and thus – it seems – such gamblers are ‘irrational’. But maybe there is something about the experience that they value. They may, for example, be developing friendships that will stand them in good stead. Perhaps if we counted such expected benefits, gambling might be rational. Could buying a lottery ticket be rational if it gave people hope and something to talk about with friends?

If we expect that co-operation or conformity have a benefit, then could not such behaviours be rational? The example is given of someone who donates anonymously to charity. “In purely evolutionary terms, it is a bad choice.” But why? What if we feel better about ourselves and are able to act more confidently in social situations where others may be donors?

Retirement

“Governments wanting us to save up for retirement need to understand why we are so bad at making long-term decisions.”

But are we so very bad? This could do with much more analysis. On the article’s view of rationality, under-saving could be caused by a combination of:

  • poor expected returns on savings (especially at the moment)
  • pessimism about life expectancy
  • heavy discounting of future value
  • an anticipation of a need to access the funds before retirement
    (e.g., due to redundancy or emigration).

The article suggests that there might also be some biases. These should be considered, although they are really just departures from a normative notion of rationality that may not be appropriate. But I think one would really want to consider broader influences on expected utility. Maybe, for example, investing in one’s children’s future seems a more sensible investment. Similarly, in some cultures, investing in one’s aura of success (sports car, smart suits, …) might be a rational gamble. Is it that ‘we’ as individuals are bad at making long-term decisions, or that society as a whole has led to a situation in which, for many people, it is ‘rational’ to save less than governments think we ought to? The notion of rationality in the article hardly seems appropriate to address this question.

Conclusion

The article raises some important issues but takes much too limited a view of even mathematical decision theory and seems – uncritically – to suppose that it is universally normatively correct. Maybe what we need is not so much irrationality as the right rationality, at least as a guide.

See also

Kahneman: anomalies paper, Review, Judgment. Uncertainty: Cosmides and Tooby, Ellsberg. Examples. Inferences from utterances.

Dave Marsay

GLS Shackle, imagined and deemed possible?

Background

This is a personal view of GLS Shackle’s uncertainty. Having previously used Keynes’ approach to identify possible failure modes in systems, including financial systems (in the run-up to the collapse of the tech bubble), I became concerned in 2007 that there was another bubble with the potential for a Keynes-type 25% drop in equities, constituting a ‘crisis’. In discussions with government advisers I first came across Shackle. The differences between him and Keynes were emphasised. I tried to make sense of Shackle, so that I could form my own view, but failed. Unfinished business.

Since the crash of 2008 there have been various attempts to compare and contrast Shackle and Keynes, and others. Here I imagine a solution to the conundrum which I deem possible: unless you know different?

Imagined Shackle

Technically, Shackle seems to focus on the wickeder aspects of uncertainty, to seek to explain them and their significance to economists and politicians, and to advise on how to deal with them. Keynes provides a more academic view, covering all kinds of uncertainty, contrasting tame probabilities with wicked uncertainties, helping us to understand both in a language that is better placed to survive the passage of time and the interpretation by a wider – if more technically aware – audience.

Politically, Shackle lacks the baggage of Lord Keynes, whose image has been tarnished by the misuse of the term ‘Keynesian’. (Like Keynes, I am not a Keynesian.)

Conventional probability theory would make sense if the world was a complicated randomizing machine, so that one has ‘the law of large numbers’: that in the long run particular events will tend to occur with some characteristic, stable, frequency. Thus in principle it would be possible to learn the frequency of events, such that reasonably rare events would be about as rare as we expect them to be. Taleb has pointed out that we can never learn the frequencies of very rare events, and that this is a technical flaw in many accounts of probability theory, which fail to point this out. But Keynes and Shackle have more radical concerns.
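Taleb’s point is easy to demonstrate: at any realistic sample size, frequency estimates for common events settle down, while very rare events simply fail to show up at all. A sketch, with probabilities invented for illustration:

```python
import random

def estimated_frequency(p, n, trials=1000, rng=random):
    """Spread of frequency estimates for an event of true probability p,
    each estimate based on a sample of size n: return (min, max) over runs."""
    ests = [sum(rng.random() < p for _ in range(n)) / n for _ in range(trials)]
    return min(ests), max(ests)

random.seed(2)
print(estimated_frequency(0.3, 1000))     # estimates cluster tightly near 0.3
print(estimated_frequency(0.0005, 1000))  # often 0.0: the event is invisible
```

With the common event, the ‘law of large numbers’ does its work; with the rare one, most samples contain no occurrences at all, so the data cannot distinguish a small probability from an impossibility.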

If we think of the world as a complicated randomizing machine, then, as in Whitehead, it is one which can suddenly change. Shackle’s approach, in so far as I understand it, is to be open to the possibility of a change, to recognize when the evidence of a change is overwhelming, and to react to it. This is an important difference from the conventional approach, in which all inference is done on the assumption that the machine is known. Any evidence that it may have changed is simply normalised away. Shackle’s approach is clearly superior in all those situations where substantive change can occur.

Shackle terms decisions about a possibly changing world ‘critical’. He makes the point that the application of a predetermined strategy or habit is not a decision proper: all ‘real’ decisions are critical in that they make a lasting difference to the situation. Thus one has strategies for situations that one expects to repeat, and makes decisions about situations that one is trying to ‘move on’. This seems a useful distinction.

Shackle’s approach to critical decisions is to imagine potential changes to new behaviours, to assess them and then to choose between those deemed possible. This is based on preference, not expected utility, because ‘probability’ does not make sense. He gives an example of a French guard at the time of the revolution who can either give access to a key prisoner or not. He expects to lose his life if he makes the wrong decision, depending on whether the revolution succeeds or not. A conventional approach would be based on the realisation that most attempted revolutions fail. But his choice may have a big impact on whether or not the revolution succeeds. So Shackle advocates imagining the two possible outcomes and their impact on him, and then making a choice. This seems reasonable. The situation is one of choice, not probability.

Keynes can support Shackle’s reasoning. But he also supports other types of wicked uncertainty. Firstly, it is not always the case that a change is ‘out of the blue’. One may not be able to predict when the change will come, but it is sometimes possible to see that there is an economic bubble, and the French guard probably had some indications that he was living in extraordinary times. Thus Keynes goes beyond Shackle’s pragmatism.

In reality, there is no strict dualism between probabilistic behaviour and chaos, between probability and Shackle’s complete ignorance. There are regions in-between that Keynes helps explore. For example, the French guard is not faced with a strictly probabilistic situation, but could usefully think in terms of probabilities conditioned on his actions. In economics, one might usefully think of outcomes as conditioned on the survival of conventions and institutions (October 2011).

I also have a clearer view of why consideration of Shackle led to the rise of behavioural economics: if one is ‘being open’ and ‘imagining’ then psychology is clearly important. On the other hand, much of behavioural economics seems to use conventional rationality as some form of ‘gold standard’ for reasoning under uncertainty, and to consider departures from it as ‘bias’. But then I don’t understand that either!

Addendum

(Feb 2012, after Blue’s comments.)

I have often noticed that decision-takers and their advisers have different views about how to tackle uncertainty, with decision-takers focusing on the non-probabilistic aspects while their advisers (e.g. scientists or at least scientifically trained) tend to, and may even insist on, treating the problem probabilistically, and hence have radically different approaches to problem-solving. Perhaps the situation is crucial for the decision-taker, but routine for the adviser? (‘The agency problem.’) (Econophysics seems to suffer from this.)

I can see how Shackle had much that was potentially helpful in the run-up to the financial crash. But it seems to me no surprise that the neoclassical mainstream was unmoved by it. They didn’t regard the situation as crucial, and didn’t imagine or deem possible a crash. Unless anyone knows different, there seems to be nothing in Shackle’s key ideas that provides as explicit a warning as Keynes’. While Shackle was more acceptable than Keynes (lacking the ‘Keynesian’ label) he still seems less to the point. One needs both together.

See Also

Prigogine, who provides models of systems that can suddenly change (‘become’). He also relates this to Shackle’s discussion of how making decisions relates to the notion of ‘time’.

Dave Marsay