Uncertainty is not just probability

My paper, based on the discussion paper referred to in a previous post, has just been published. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

Dave Marsay

Which Car?: a puzzle

Here’s a variation on some of my other uncertainty puzzles:

You are thinking of buying a new car. Your chosen model comes in a choice of red or silver. You are about to buy a red one when you learn that red car drivers have twice the accident rate of those who drive silver ones.

Should you switch, and why?

Dave Marsay

Football – substitution

A Spanish banker has made some interesting observations about a football coach’s substitution choice.

The coach can make one last substitution: an attacker for a defender, or vice versa. With more attackers the team is more likely to score but also more likely to be scored against; bringing on a defender makes the final score less uncertain. Hence there is some link with Ellsberg’s paradox. What should the coach do? How should he decide?

A classic solution would be to estimate the probability of getting through the round, depending on the choice made. But is this right?

Pause for thought …

As the above banker observes, a ‘dilemma’ arises in something like the last round of the 2012 group C matches, where each team’s probabilities depend, reflexively, on the other’s decisions. He gives the details in terms of game theory. But what is the general approach?

The classic approach is to set up a game between the coaches: one gets a payoff matrix from which the ‘maximin’ strategy can be determined. Is this the best approach?
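
For concreteness, here is a minimal sketch of that classic approach. The payoffs – each being our coach’s chance of getting through the round – are made-up numbers of mine, not the banker’s:

```python
# Rows: our coach's substitution; columns: the rival coach's choice.
# Entries: our (hypothetical) probability of getting through the round.
payoff = {
    ("attack", "attack"): 0.50,
    ("attack", "defend"): 0.40,
    ("defend", "attack"): 0.30,
    ("defend", "defend"): 0.60,
}
choices = ("attack", "defend")

# Maximin: pick the row whose worst case (over the rival's choices) is
# best.  Here 'attack' guarantees at least 0.40; 'defend' only 0.30.
worst = {r: min(payoff[r, c] for c in choices) for r in choices}
print(max(worst, key=worst.get), worst)
```

Note that this treats the probabilities as known and fixed; the ‘dilemma’ above arises precisely because they depend, reflexively, on what the other coach decides.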

If you are in doubt, then that is ‘radical uncertainty’. If not, then consider the alternative in the article: perhaps you should have been in doubt. The implications, as described in the article, have a wider importance, and not just for Spanish bankers.

See Also

Other Puzzles, and my notes on uncertainty.

Dave Marsay 

The Sultan’s daughters

The IMA website has the following puzzle:

A sultan has 100 daughters. A commoner may be given a chance to marry one of the daughters, but he must first pass a test. He will be presented with the daughters one at a time. As each one comes before him she will tell him the size of her dowry, and he must then decide whether to accept or reject her (he is not allowed to return to a previously rejected daughter). However, the sultan will only allow the marriage to take place if the commoner chooses the daughter with the highest dowry. If he gets it wrong he will be executed! The commoner knows nothing about the distribution of dowries. What strategy should he adopt?

You might want to think about it first. The ‘official’ answer is …

One strategy the commoner could adopt is simply to pick a daughter at random. This would give him a 1/100 chance of getting the correct daughter. [But] the probability of the commoner accepting the daughter with the highest dowry is about 37% if he rejects the first 37 daughters and then chooses the next one whose dowry is greater than any he’s seen so far. This is a fraction 1/e of the total number of daughters (rounded to the nearest integer) and is significantly better than just choosing at random!
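
The 1/e rule is easy to check by simulation. A minimal sketch: since the commoner knows nothing of the distribution, only the relative ranks of the dowries matter, so shuffling the ranks at random suffices:

```python
import random

def trial(n=100, cutoff=37):
    """Reject the first `cutoff` daughters, then accept the first whose
    dowry beats everything seen so far; succeed iff she is the best."""
    ranks = list(range(n))          # n - 1 is the highest dowry
    random.shuffle(ranks)
    best_seen = max(ranks[:cutoff])
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1
    return False                    # the best was among those rejected

trials = 100_000
print(sum(trial() for _ in range(trials)) / trials)   # about 0.37
```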

My question:

Given that the sultan knows what dowry each daughter has, in which order should he present the daughters to minimise the chance of one of them having to marry the commoner? With this in mind, what is the commoner’s best strategy? (And what has this to do with the financial crisis?)

See also

More puzzles.

Dave Marsay

Medical treatment puzzle

These are attempts at examples of uncertainty that may be familiar.

Medical treatment

You go to the doctor, who tells you that you have a disease for which there are two treatments, a well-established one and a new one, which – based on one trial – seems slightly better. You are undecided between treatments.

You now realise that your identical twin had the same disease a while back, had the old treatment, and it worked. Is this a good reason to decide on the old treatment? Why, or why not?

Now, a variation. Suppose that the reported advantage of the new treatment is such that, even after allowing for your twin’s experience, you are undecided. You get home and hear on the news that, whereas you had thought of all instances of the disease as being similar, it has just been found that there are 10 distinct variants that may respond differently to particular treatments. Which treatment do you now prefer, and why?

Trains

This is – hopefully – a simpler example, raising a sub-set of the issues.

You need to go to Bigcity regularly. For the same cost, and at convenient times, you could go by X’s or Y’s trains (by different routes). The rail companies publish standardised, audited statistics on the numbers of trains cancelled per week, average delays, and the numbers of trains more than 10 minutes late. X’s trains seem marginally better. But you have a colleague who uses the same Y train that you would, and has found it reliable and on time. Which train do you choose, and why?

See also

Similar puzzles.

Dave Marsay

Making your mind up (NS)

Difficult choices to make? A heavy dose of irrationality may be just what you need.

Comment on a New Scientist article, 12 Nov. 2011, pg 39.

The on-line version is “Decision time: How subtle forces shape your choices”, with the sub-heading: “Struggling to make your mind up? Interpret your gut instincts to help you make the right choice.”

The article talks a lot about decision theory and rationality. No definitions are given, but it seems to be assumed that all decisions are analogous to decisions about games of chance, and it is supposed, without motivation, that the objective is always to maximize expected utility. This might make sense for gamblers who expect to live forever without ever running out of funds, but it is otherwise unmotivated.

Well-known alternatives include:

  • taking account of the chances of going broke (short-term) and never getting the ‘expected’ (long-term) returns (see the sketch after this list).
  • taking account of uncertainty, as in Ellsberg’s approach.
  • taking account of the cost of evaluating options, as in March’s ‘bounded rationality’.
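
On the first of these: a standard illustration (the numbers are mine) is a repeated bet whose expected value per round is positive, yet which sends almost every player broke, because the expected log-return is negative:

```python
import random

# Each round multiplies wealth by 1.5 or 0.6 with equal probability.
# Expected factor: 0.5*1.5 + 0.5*0.6 = 1.05 > 1, yet the expected log
# factor is 0.5*(ln 1.5 + ln 0.6) ~ -0.053 < 0, so a player who keeps
# staking everything sees their wealth tend to zero almost surely.
random.seed(1)
rounds, players, broke = 1_000, 10_000, 0
for _ in range(players):
    w = 1.0
    for _ in range(rounds):
        w *= 1.5 if random.random() < 0.5 else 0.6
    if w < 0.01:                    # under 1% of starting wealth
        broke += 1
print(f"{100 * broke / players:.1f}% effectively broke")   # ~99.9%
```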

The logic of inconsistency

A box claims that ‘intransitive preferences’ give mathematicians a headache. But as a mathematician I find that some people’s assumptions about rationality give me a headache, especially if they try to force them on to me.

Suppose that I prefer apples to plums to pears, but I prefer a mixture to having just apples. If I am given the choice between apples and plums I will pick apples. If I am then given the choice between plums and pears I will pick plums. If I am now given the choice between apples and pears I will pick pears, to have a good spread of fruit. According to the article I am inconsistent and illogical: I should have chosen apples. But what kind of logic is it in which I would end up with all meat and no gravy? Or all bananas and no custard?
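
To make this concrete, here is one way – my formalisation, not the article’s – to write down a single consistent utility, defined over the collection of fruit I end up with, that reproduces exactly the ‘intransitive’ choices above:

```python
from collections import Counter

BASE = {"apple": 3.0, "plum": 2.0, "pear": 1.0}   # apples > plums > pears

def value(holding):
    """The k-th piece of any one fruit is worth base / 4**k, so a
    varied collection beats a monotonous one (diminishing returns)."""
    return sum(BASE[f] / 4**k for f, n in Counter(holding).items()
               for k in range(n))

holding = []
for a, b in [("apple", "plum"), ("plum", "pear"), ("apple", "pear")]:
    pick = a if value(holding + [a]) >= value(holding + [b]) else b
    holding.append(pick)
    print(f"offered {a} or {b}: pick {pick}")
# offered apple or plum: pick apple
# offered plum or pear: pick plum
# offered apple or pear: pick pear -- 'intransitive', yet one utility
```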

Another reason I might pick pears is if I wanted to acquire things that appear scarce. Being offered a choice of apples or plums suggests that neither is scarce, so what I really want is pears. In this case, if I were subsequently given a choice of plums or pears I would choose pears, even though I actually prefer plums. A question imparts information; it is not just a means of eliciting information.

In criticising rationality one needs to consider exactly what the notion of ‘utility’ is, and whether or not it is appropriate.

Human factors

On the last page it becomes clear that ‘utility’ is even narrower than one might suppose. Most games of chance have an expected monetary loss for the gambler and thus – it seems – such gamblers are ‘irrational’. But maybe there is something about the experience that they value. They may, for example, be developing friendships that will stand them in good stead. Perhaps if we counted such expected benefits, gambling might be rational. Could buying a lottery ticket be rational if it gave people hope and something to talk about with friends?

If we expect that co-operation or conformity have a benefit, then could not such behaviours be rational? The example is given of someone who donates anonymously to charity. “In purely evolutionary terms, it is a bad choice.” But why? What if we feel better about ourselves and are able to act more confidently in social situations where others may be donors?

Retirement

“Governments wanting us to save up for retirement need to understand why we are so bad at making long-term decisions.”

But are we so very bad? This could do with much more analysis. On the article’s view of rationality, under-saving could be caused by a combination of:

  • poor expected returns on savings (especially at the moment)
  • pessimism about life expectancy
  • heavy discounting of future value
  • an anticipation of a need to access the funds before retirement
    (e.g., due to redundancy or emigration).

The article suggests that there might also be some biases. These should be considered, although they are really just departures from a normative notion of rationality that may not be appropriate. But I think one would really want to consider broader influences on expected utility. Maybe, for example, investing in one’s children’s future may seem a more sensible investment. Similarly, in some cultures, investing in one’s aura of success (sports car, smart suits, …) might be a rational gamble. Is it that ‘we’ as individuals are bad at making long-term decisions, or that society as a whole has led to a situation in which, for many people, it is ‘rational’ to save less than governments think we ought to? The notion of rationality in the article hardly seems appropriate to address this question.

Conclusion

The article raises some important issues but takes much too limited a view of even mathematical decision theory and seems – uncritically – to suppose that it is universally normatively correct. Maybe what we need is not so much irrationality as the right rationality, at least as a guide.

See also

Kahneman: anomalies paper, Review, Judgment. Uncertainty: Cosmides and Tooby, Ellsberg. Examples. Inferences from utterances.

Dave Marsay

GLS Shackle, imagined and deemed possible?

Background

This is a personal view of GLS Shackle’s uncertainty. Having previously used Keynes’ approach to identify possible failure modes in systems, including financial systems (in the run-up to the collapse of the tech bubble), I became concerned in 2007 that there was another bubble with a potential for a Keynes-type 25% drop in equities, constituting a ‘crisis’. In discussions with government advisers I first came across Shackle, and the differences between him and Keynes were emphasised. I tried to make sense of Shackle, so that I could form my own view, but failed. Unfinished business.

Since the crash of 2008 there have been various attempts to compare and contrast Shackle and Keynes, and others. Here I imagine a solution to the conundrum which I deem possible: unless you know different?

Imagined Shackle

Technically, Shackle seems to focus on the wickeder aspects of uncertainty, to seek to explain them and their significance to economists and politicians, and to advise on how to deal with them. Keynes provides a more academic view, covering all kinds of uncertainty, contrasting tame probabilities with wicked uncertainties, helping us to understand both in a language that is better placed to survive the passage of time and the interpretation by a wider – if more technically aware – audience.

Politically, Shackle lacks the baggage of Lord Keynes, whose image has been tarnished by the misuse of the term ‘Keynesian’. (Like Keynes, I am not a Keynesian.)

Conventional probability theory would make sense if the world was a complicated randomizing machine, so that one has ‘the law of large numbers’: that in the long run particular events will tend to occur with some characteristic, stable, frequency. Thus in principle it would be possible to learn the frequency of events, such that reasonably rare events would be about as rare as we expect them to be. Taleb has pointed out that we can never learn the frequencies of very rare events, and that this is a technical flaw in many accounts of probability theory, which fail to point this out. But Keynes and Shackle have more radical concerns.

If we think of the world as a complicated randomizing machine then, as in Whitehead, it is one which can suddenly change. Shackle’s approach, in so far as I understand it, is to be open to the possibility of a change, to recognize when the evidence of a change is overwhelming, and to react to it. This is an important difference from the conventional approach, in which all inference is done on the assumption that the machine is known: any evidence that it may have changed is simply normalised away. Shackle’s approach is clearly superior in all those situations where substantive change can occur.

Shackle terms decisions about a possibly changing world ‘critical’. He makes the point that the application of a predetermined strategy or habit is not a decision proper: all ‘real’ decisions are critical in that they make a lasting difference to the situation. Thus one has strategies for situations that one expects to repeat, and makes decisions about situations that one is trying to ‘move on’. This seems a useful distinction.

Shackle’s approach to critical decisions is to imagine potential changes to new behaviours, to assess them, and then to choose between those deemed possible. This is based on preference, not expected utility, because ‘probability’ does not make sense. He gives an example of a French guard at the time of the revolution who can either give access to a key prisoner or not. He expects to lose his life if he makes the wrong decision, depending on whether the revolution succeeds or not. A conventional approach would be based on the realisation that most attempted revolutions fail. But his choice may have a big impact on whether or not the revolution succeeds. So Shackle advocates imagining the two possible outcomes and their impact on him, and then making a choice. This seems reasonable: the situation is one of choice, not probability.

Keynes can support Shackle’s reasoning. But he also supports other types of wicked uncertainty. Firstly, it is not always the case that a change is ‘out of the blue’. One may not be able to predict when the change will come, but it is sometimes possible to see that there is an economic bubble, and the French guard probably had some indications that he was living in extraordinary times. Thus Keynes goes beyond Shackle’s pragmatism.

In reality, there is no strict dualism between probabilistic behaviour and chaos, between probability and Shackle’s complete ignorance. There are regions in-between that Keynes helps explore. For example, the French guard is not faced with a strictly probabilistic situation, but could usefully think in terms of probabilities conditioned on his actions. In economics, one might usefully think of outcomes as conditioned on the survival of conventions and institutions (October 2011).

I also have a clearer view of why consideration of Shackle led to the rise of behavioural economics: if one is ‘being open’ and ‘imagining’ then psychology is clearly important. On the other hand, much of behavioural economics seems to use conventional rationality as some form of ‘gold standard’ for reasoning under uncertainty, and to consider departures from it as ‘biases’. But then I don’t understand that either!

Addendum

(Feb 2012, after Blue’s comments.)

I have often noticed that decision-takers and their advisers have different views about how to tackle uncertainty, with decision-takers focusing on the non-probabilistic aspects while their advisers (e.g. scientists or at least scientifically trained) tend to, and may even insist on, treating the problem probabilistically, and hence have radically different approaches to problem-solving. Perhaps the situation is crucial for the decision-taker, but routine for the adviser? (‘The agency problem.’) (Econophysics seems to suffer from this.)

I can see how Shackle had much that was potentially helpful in the run-up to the financial crash. But it seems to me no surprise that the neoclassical mainstream was unmoved by it. They didn’t regard the situation as crucial, and didn’t imagine or deem possible a crash. Unless anyone knows different, there seems to be nothing in Shackle’s key ideas that provides as explicit a warning as Keynes. While Shackle was more acceptable than Keynes (lacking the ‘Keynesian’ label), he also still seems less to the point. One needs both together.

See Also

Prigogine, who provides models of systems that can suddenly change (‘become’). He also relates to Shackle’s discussion of how making decisions relates to the notion of ‘time’.

Dave Marsay

Kahneman et al’s Anomalies

Daniel Kahneman, Jack L. Knetsch, Richard H. Thaler Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias The Journal of Economic Perspectives, 5(1), pp. 193-206, Winter 1991

[Some] “behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences … . An empirical result qualifies as an anomaly if it is difficult to “rationalize,” or if implausible assumptions are necessary to explain it within the paradigm.”

The first candidate anomaly is:

“A wine-loving economist we know purchased some nice Bordeaux wines … . The wines have greatly appreciated in value, so that a bottle that cost only $10 when purchased would now fetch $200 at auction. This economist now drinks some of this wine occasionally, but would neither be willing to sell the wine at the auction price nor buy an additional bottle at that price.”

This is an example of the effects in the title. Is it anomalous? Suppose that the economist can spare $120 but not $200 on self-indulgences, of which wine is her favourite. Would this not explain why she might buy a crate cheaply but not pay a lot for a bottle, or sell one at a profit? The anomalies seem to be relative to expected utility theory. (However, some of the other examples may be genuine psychological effects.)

See also

Kahneman’s review, Keynes’ General Theory

Dave Marsay

How to live in a world that we don’t understand, and enjoy it (Taleb)

N Taleb How to live in a world that we don’t understand, and enjoy it Goldstone Lecture 2011 (U Penn, Wharton)

Notes from the talk

Taleb returns to his alma mater. This talk supersedes his previous work (e.g. Black Swan). His main points are:

  • We don’t have a word for the opposite of fragile.
      Fragile systems have small probability of huge negative payoff
      Robust systems have consistent payoffs
      ? has a small probability of a large pay-off
  • Fragile systems eventually fail. ? systems eventually come good.
  • Financial statistics have a kurtosis that cannot in practice be measured, and tend to hugely under-estimate risk.
      Often more than 80% of kurtosis over a few years is contributed by a single (memorable) day.
  • We should try to create ? systems.
      He calls them convex systems, where the expected return exceeds the return in the expected environment (see the sketch after this list).
      Fragile systems are concave, where the expected return is less than the return from the expected situation.
      He also talks about ‘creating optionality’.
  • He notes an ‘action bias’: whenever there is a game like the stock market we want to get involved and win. It may be better not to play.
  • He gives some examples.
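
Two of these claims are easy to illustrate numerically; the figures below are my toy numbers, not Taleb’s. First, a single memorable day can dominate measured kurtosis; second, a convex payoff’s expected return exceeds its return in the expected environment (Jensen’s inequality):

```python
import random, statistics

random.seed(0)
# One crash day dominates the fourth moment of ~3 years of quiet returns.
returns = [random.gauss(0, 0.01) for _ in range(750)]
returns[100] = -0.20                      # the single memorable day
m = statistics.fmean(returns)
fourth = [(r - m) ** 4 for r in returns]
print(f"crash day's share of kurtosis: {fourth[100] / sum(fourth):.0%}")

# Convexity: E[f(X)] > f(E[X]) for a convex (option-like) payoff f.
f = lambda x: max(x - 1.0, 0.0)
xs = [0.5, 1.5]                           # two equally likely environments
print(sum(f(x) for x in xs) / 2, ">", f(sum(xs) / 2))   # 0.25 > 0.0
```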

Comments

Taleb is dismissive of economists who talk about Knightian uncertainty, which goes back to Keynes’ Treatise on Probability. Their corresponding story is that:

  • Fragile systems are vulnerable to ‘true uncertainty’
  • Fragile systems eventually fail
  • Practical numeric measures of risk ignore ‘true uncertainty’.
  • We should try to create systems that are robust to or exploit true uncertainty.
  • Rather than trying to be the best at playing the game, we should try to change the rules of the game or play a ‘higher’ game.
  • Keynes gives examples.

The difference is that Taleb implicitly supposes that financial systems etc. are stochastic, but have too much kurtosis for us to be able to estimate their parameters: rare events are regarded as rare events generated stochastically. Keynes (and Whitehead) suppose that it may be possible to approximate such systems by a stochastic model for a while, but that the rare events denote a change to a new model, so that – for example – there is not a universal economic theory. Instead, we occasionally have new economics, calling for new stochastic models. Practically, there seems little to choose between them, so far.

From a scientific viewpoint, one can only assess definite stochastic models. Thus, as Keynes and Whitehead note, one can only say that a given model fitted the data up to a certain date, and then it didn’t. The notion that there is a true universal stochastic model is not provable scientifically, but neither is it falsifiable. Hence, according to Popper, one should not entertain it as a view. This is possibly too harsh on Taleb, but the point is this:

Taleb’s explanation has pedagogic appeal, but this shouldn’t detract from an appreciation of alternative explanations based on non-stochastic uncertainty.

In particular:

  • Taleb (in this talk) seems to regard rare crises as ‘acts of fate’, whereas Keynes regards them as arising from misperceptions on the part of regulators and major ‘players’. This suggests that we might be able to ameliorate them.
  • Taleb implicitly uses the language of probability theory, as if this were rational. Yet his argument (like Keynes’) undermines the notion of probability as derived from rational decision theory.
      Not playing is better whenever there is Knightian uncertainty.
      Maybe we need to be able to talk about systems that thrive on uncertainty, in addition to convex systems.
  • Taleb also views the up-side as good fortune, whereas we might view it as an innovation, by whatever combination of luck, inspiration, understanding and hard work.

See also

On fat tails versus epochs.

Dave Marsay

Quantum Minds

A New Scientist Cover Story (No. 2828 3 Sept 2011) opines that:

‘The fuzziness and weird logic of the way particles behave applies surprisingly well to how humans think.’ (banner, p34)

It starts:

‘The quantum world defies the rules of ordinary logic.’

The first two examples are the infamous two-slit experiment and an experiment by Tversky and Shafir supposedly showing violation of the ‘sure thing principle’. But do they?

Saving classical logic

According to George Boole (Laws of Thought), when a series of assumptions and applications of logic leads to a falsehood I must abandon one of the assumptions or one of the rules of inference, but I can ‘save’ whichever one I am most wedded to. So to save ‘ordinary logic’ it suffices to identify a dodgy assumption.

Two-slits experiment

The article says of the two-slits experiment:

‘… the pattern you should get – ordinary physics and logic would suggest – should be …’

There is a missing factor here: the classical (Bayesian) assumptions about ‘how probabilities work’. Thus I could save ‘ordinary logic’ by abandoning common-sense probability theory.

Actually, there is a more obvious culprit. As Kant pointed out, the assumption that the world is composed of objects with attributes, having relationships with each other, belongs to common-sense physics, not logic. For example, two isolated individuals may behave like objects, but when they come into communion the whole may be more than the sum of the parts. Looking at the two-slit experiment this way, the stuff that we regard as a particle seems isolated, and hence object-like, until it ‘comes into communion with’ the apparatus, when the whole may be un-object-like; but then a new steady state ‘emerges’, which is object-like and which we regard as a particle. The experiment is telling us something about the nature of the communion. Prigogine has a mathematization of this.

Thus one can abandon the common-sense assumption that ‘a communion is nothing but the sum of objects’, thus saving classical logic.

Sure Thing Principle

An example is given (pg 36) that appears to violate Savage’s sure-thing principle and hence ‘classical logic’. But, as above, we might prefer to abandon our probability theory rather than our logic; and there are plenty of alternatives.

The sure-thing principle applies to ‘economic man’, who has some unusual values. For example, if he values a winter sun holiday at $500 and a skiing holiday at $500 then he ‘should’ be happy to pay $500 for a holiday in which he only finds out which it is when he gets there. The assumptions of classical economic man only seem to apply to people who have lots of spare money and are used to gambling with it. Perhaps the experimental subjects were different?

The details of the experiment as reported also repay attention. A gamble with an even chance of winning $200 or losing $100 is available. Experimental subjects all had a first gamble. In case A subjects were told they had won. In case B they were told they had lost. In case C they were not told. They were all invited to gamble again.

Most subjects (69%) wanted to gamble again in case A. This seems reasonable, as over the two gambles they were then guaranteed a gain of at least $100. Fewer subjects (59%) wanted to gamble again in case B. This seems reasonable, as they risked a $200 loss overall. The fewest subjects (36%) wanted to gamble again in case C. This seems to violate the sure-thing principle, which (according to the article) says that anyone who would gamble in both of the first two cases should gamble in the third. But from the figures above we can only deduce that – if they are representative – at least 28% (i.e. 69% + 59% – 100%) would gamble in both cases A and B. Since 36% gambled in case C, the data does not imply that anyone would gamble in both A and B but not in C.

If one chooses a person at random, then the probability that they gambled again in both cases A and B is at least 28% and at most 59% (it cannot exceed the smaller of the two rates). Splitting the difference between 28% and 100% – a kind of principle of indifference – yields the 64% quoted in the article. A possible explanation for the dearth of such subjects is that they were not wealthy (so having non-linear utilities in the region of $100s) and that those who couldn’t afford to lose $100 had good uses in mind for $200, preferring a certain win of $200 to an even chance of ending with $400 or only $100. This seems reasonable.
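
The bounds used above are just the Fréchet inequalities; a two-line check, using the percentages reported in the article:

```python
p_a, p_b = 0.69, 0.59              # gambled again after a win / a loss
lower = max(0.0, p_a + p_b - 1.0)  # at least 28% gambled in both cases
upper = min(p_a, p_b)              # and at most 59% did
print(f"P(gambled in both A and B) lies between {lower:.0%} and {upper:.0%}")
```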

Others’ criticisms here. See also some notes on uncertainty and probability.

Dave Marsay

Better than Rational

Cosmides, L. & Tooby, J. (1994). Better than rational: Evolutionary psychology and the invisible hand. American Economic Review, 84 (2), 327-332.

Summary

[Mainstream Psychologists and behaviourists have studied] “biases” and “fallacies”-many of which are turning out to be experimental artifacts or misinterpretations (see G. Gigerenzer, 1991). [Gigerenzer, G. “How to Make Cognitive Illusions Disappear: Beyond Heuristics and Biases,” in W. Stroebe and M. Hewstone, eds.,  European review of social psychology, Vol. 2. Chichester, U.K.: Wiley, 1991, pp. 83-115.]

… 

One point is particularly important for economists to appreciate: it can be demonstrated that “rational” decision-making methods (i.e., the usual methods drawn from logic, mathematics, and probability theory) are computationally very weak: incapable of solving the natural adaptive problems our ancestors had to solve reliably in order to reproduce (e.g., Cosmides and Tooby, 1987; Tooby and Cosmides, 1992a; Steven Pinker, 1994).

…  sharing rules [should be] appealing in conditions of high variance, and unappealing when resource accrual is a matter of effort rather than of luck (Cosmides and Tooby, 1992).

Comment

They rightly criticise ‘some methods’ drawn from mathematics etc., but some have interpreted this as meaning that “logic, mathematics, and probability theory are … incapable of solving the natural adaptive problems our ancestors had to solve reliably in order to reproduce”. This leads them to overlook relevant theories, such as Whitehead’s and Keynes‘.

See Also

Relevant mathematics, Avoiding unknown probabilities, Kahneman on biases

NOTE

This has been copied to my bibliography section under ‘rationality and uncertainty’, ‘more …’, where it has more links. Please comment there.

Dave Marsay

When and why do people avoid unknown probabilities in decisions under uncertainty?

Rode, C., Cosmides, L., Hell, W., & Tooby, J. (1999). When and why do people avoid unknown probabilities in decisions under uncertainty? Testing some predictions from optimal foraging theory. Cognition, 72, 269-304.

Summary

Sets up a foraging ‘system’ to explore decision-making.

In this view, the system is not designed merely to maximize expected utility. It is designed to minimize the probability of an outcome that fails to satisfy one’s need, as per Keynes.

The people who participated in our experiments executed complex decision strategies, ones that take into account three parameters (mean, variance, and need level) rather than just the single parameter (mean) emphasized by some normative theories. Their intuitions were so on target that their decisions very closely tracked the actual probabilities of each box satisfying their needs. This was true even though explicitly deriving these probabilities is a nontrivial mathematical calculation.

Comment

This gives a foraging setting in which, rather than gathering the most food in the long run, the aim is – firstly – to have enough to survive in the short run, and only then to build up a surplus in the long run. It rightly notes that this calls for a different approach (sketched below). Confusingly (to me), it describes the utility approach as ‘logical’ and ‘mathematical’, from which some seem to infer that trying to maximize sustainability is not.
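
A minimal sketch of such a strategy – the normal distribution and the numbers are my assumptions, not the paper’s – is to choose the box that maximises the probability of meeting the need, which need not be the box with the higher mean:

```python
import math

def p_enough(mean, sd, need):
    """P(X >= need) for X ~ Normal(mean, sd)."""
    return 0.5 * (1.0 - math.erf((need - mean) / (sd * math.sqrt(2))))

boxes = {"A": (12.0, 6.0),   # higher mean, high variance
         "B": (10.0, 1.0)}   # lower mean, reliable
need = 8.0                   # the amount required to survive
for name, (mean, sd) in boxes.items():
    print(name, f"P(enough) = {p_enough(mean, sd, need):.2f}")
# B wins (0.98 vs 0.75) despite its lower mean; raise `need` above both
# means and the risky box A becomes the better gamble.
```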

  • A strategy that seeks to maximize expected return, or return ‘in the long run’, may not be appropriate when there is short-term jeopardy. (As Keynes said, ‘In the long run we are all dead.’)
  • It is not logical or mathematical to use a theory whose assumptions / axioms are known to be false, although (according to some definitions) it may be ‘rational’. If one is not certain that the assumptions / axioms are ‘true’, it is not logical or mathematical to ignore the uncertainty.
  • Logic and mathematics such as Keynes‘ can cope with short-term decisions, or situations where a balance is needed between short and long-run issues.
  • Logically, typical foraging tasks are best met by a population of foragers with different ‘attitudes to risk’. That is, most foragers may take a short term view but some need to take a long term view (to find new food sources). This relies on sharing when the explorers come back empty-handed.

One should also note that the original paper uses variance in a stereotyped way that is not always appropriate, as emphasised by Taleb, who also discusses the general problem of ‘resilience to tail risk’.

See Also

Paradoxes, Mathematics, Allen, Better than Rational .

Dave Marsay

Uncertainty, utility and paradox

Allais

Allais devised two choices:

  1. between a definite £1M and a gamble whose expected return was much greater, but which could give nothing
  2. between two gambles

He showed that most people made choices that were inconsistent with expected utility theory, and hence paradoxical.

In the first choice, one option has a certain payoff and so is reasonably preferred. In the second, both options have similarly uncertain outcomes, and so it is reasonable to choose based on expected utility. In general, uncertainty reasonably detracts from expected utility.
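
For concreteness, the usual presentation of Allais’s two choices (the standard textbook numbers, which the post above only summarises) shows why the common pattern of answers cannot maximise any expected utility:

```python
# Choice 1: 1A = £1M for sure;  1B = 10% £5M, 89% £1M, 1% nothing.
# Choice 2: 2A = 11% £1M;       2B = 10% £5M (otherwise nothing).
ev = lambda lottery: sum(p * x for p, x in lottery)
print(ev([(1.0, 1)]), ev([(0.10, 5), (0.89, 1), (0.01, 0)]))  # 1.00 vs 1.39
print(ev([(0.11, 1)]), ev([(0.10, 5)]))                       # 0.11 vs 0.50
# Most people choose 1A and 2B.  With u(0) = 0, expected utility requires:
#   1A > 1B  =>  0.11*u(1) > 0.10*u(5)
#   2B > 2A  =>  0.10*u(5) > 0.11*u(1)
# The two preferences contradict each other whatever u is: the common
# pattern is inconsistent with *any* expected utility -- the 'paradox'.
```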

Ellsberg

Ellsberg devised a similar paradox, but again people consistently prefer alternatives with the least uncertainty.

See also

mathematics, illustrations, examples.

Dave Marsay