Probability Paradoxes

A ‘paradox’ is literally ‘a statement against accepted opinion’, but the term is mainly used for a situation where two plausible, even persuasive, arguments lead to contradictory results. The notion that probability is ‘just a number’ provides fertile ground for constructing such puzzles. The resolution, as so often, is to be more careful with one’s logic. In particular …

Possible probabilities are a generalisation of conventional, precise probabilities in which one supposes that more than one probability function may be possible. Here it is shown how the theory circumvents paradoxes due to Ellsberg and Allais, in which humans (such as Keynes) consistently violate the expected utility hypothesis.

Ellsberg’s single urn

From Wikipedia:

Suppose you have an urn containing 30 red balls and 60 other balls that are either black or yellow. You don’t know how many black or how many yellow balls there are, but you do know that the total number of black balls plus the total number of yellow balls equals 60. The balls are well mixed so that each individual ball is as likely to be drawn as any other. You are now given a choice between two gambles:

Gamble A: You receive $100 if you draw a red ball.
Gamble B: You receive $100 if you draw a black ball.

Also you are given the choice between these two gambles (about a different draw from the same urn):

Gamble C: You receive $100 if you draw a red or yellow ball.
Gamble D: You receive $100 if you draw a black or yellow ball.

This situation poses both Knightian uncertainty – whether the non-red balls are all yellow or all black, which is not quantified – and probability – whether the ball is red or non-red, which is ⅓ vs. ⅔.

In terms of possible probabilities, we have:

P[A] = ⅓,         P[B] = [0,⅔],

P[C] = [⅓,1],   P[D] = ⅔.

According to the corresponding theory of value and utility we might reasonably prefer A to B and D to C, as long as we take some account of the ‘spread’, which is what most people seem to do. There is no paradox.
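These intervals, and a preference that takes some account of the spread, can be checked with a short sketch. The enumeration over black-ball counts and the spread-penalised valuation below are illustrative assumptions, not the author’s definitions:

```python
from fractions import Fraction

# The urn holds 30 red balls and 60 black-or-yellow balls; the number of
# black balls b may be anything from 0 to 60.
def interval(count):
    """(min, max) of P[event] as b ranges over every possible value."""
    probs = [Fraction(count(b), 90) for b in range(61)]
    return (min(probs), max(probs))

P_A = interval(lambda b: 30)             # red
P_B = interval(lambda b: b)              # black
P_C = interval(lambda b: 30 + (60 - b))  # red or yellow
P_D = interval(lambda b: 60)             # black or yellow

# One illustrative uncertainty-averse valuation: midpoint of the
# interval minus a penalty proportional to its spread.  Any positive
# penalty gives the typically observed preferences.
def value(lo, hi, aversion=Fraction(1, 4)):
    return (lo + hi) / 2 - aversion * (hi - lo)

assert value(*P_A) > value(*P_B)  # prefer A to B
assert value(*P_D) > value(*P_C)  # prefer D to C
```

With zero aversion the valuation reduces to the midpoint and the preferences collapse to indifference, which is how the precise-probability analysis loses the distinction.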

Now, if these choices were to be based on the same draw, we would have a technical problem in that

P[A] + P[D] = 1 = P[B] + P[C],

so we should not strictly prefer the pair A, D to the pair B, C, since they give the same outcome. This is the basis for the apparent paradox. But in our theory of value we do not insist that the value of ‘picking a red or black or yellow ball’ be split additively between arbitrary components. To put it another way, we avoid the paradox by having a notion of value that is not straightforwardly additive, reflecting the fact that while probabilities add, uncertainties may cancel each other out, as this example shows.
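The cancellation is easy to verify on a single draw: whatever colour comes up, the pair A, D pays out exactly once, and so does the pair B, C. A minimal sketch:

```python
# On one draw, A pays on red, D on black or yellow; B pays on black,
# C on red or yellow.  Each pair covers every colour exactly once, so
# the unknown black/yellow split is irrelevant to the paired payoffs.
def payout(colour):
    a = 100 if colour == "red" else 0
    b = 100 if colour == "black" else 0
    c = 100 if colour in ("red", "yellow") else 0
    d = 100 if colour in ("black", "yellow") else 0
    return a + d, b + c

for colour in ("red", "black", "yellow"):
    assert payout(colour) == (100, 100)  # both pairs pay $100, always
```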

Ellsberg’s two urns

Suppose that each of two urns contains 100 red and black balls. The first has balls in unknown proportions; the other has 50 of each. Would you rather bet on R1 (red from urn 1), B1 (black from urn 1), R2 or B2? Ellsberg notes that people typically have preferences

R1 ≈ B1 < R2 ≈ B2,

which are inconsistent with expected utility maximisation based on (precise) probabilities. But for possible probabilities these make sense:

P[R1] = [0,1]
P[B1] = [0,1]
P[R2] = 0.5
P[B2] = 0.5

and the ‘value’ of a bet with a probability of [0,1] is generally less than that of one with probability 0.5. Ellsberg argues that if you prefer R2 to R1 then you must regard urn 1 as having fewer than 50 red balls, and similarly that urn 1 must have fewer than 50 black balls, contradicting the given fact that it has 100 balls, black or red. Using possible probabilities, this argument shows only that urn 1 might have fewer than 50 red balls, and also might have fewer than 50 black balls. The imprecision dissolves the contradiction and hence the paradox.
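The two-urn comparison can be sketched the same way, again using an assumed spread-penalised value (any positive penalty on the spread reproduces the typical preferences):

```python
# Interval probabilities for Ellsberg's two urns: urn 1 holds 100
# red-or-black balls in unknown proportion, urn 2 holds 50 of each.
P = {
    "R1": (0.0, 1.0),
    "B1": (0.0, 1.0),
    "R2": (0.5, 0.5),
    "B2": (0.5, 0.5),
}

# Illustrative uncertainty-averse value: midpoint minus a quarter of
# the spread (an assumption, as in the single-urn sketch).
def value(lo, hi):
    return (lo + hi) / 2 - (hi - lo) / 4

assert value(*P["R1"]) == value(*P["B1"])  # R1 ~ B1
assert value(*P["R2"]) == value(*P["B2"])  # R2 ~ B2
assert value(*P["R1"]) < value(*P["R2"])   # prefer the known urn
```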


Allais’ paradox

The Allais paradox arises when comparing participants’ choices in two different experiments, each of which consists of a choice between two gambles. The payoffs for each gamble in each experiment are as follows:

Experiment 1
  Gamble 1A: 100% chance of $1 million.
  Gamble 1B: 89% chance of $1 million; 1% chance of nothing; 10% chance of $5 million.

Experiment 2
  Gamble 2A: 89% chance of nothing; 11% chance of $1 million.
  Gamble 2B: 90% chance of nothing; 10% chance of $5 million.

[W]hen presented with a choice between 1A and 1B, most people would choose 1A. Likewise, when presented with a choice between 2A and 2B, most people would choose 2B. Allais further asserted that it was reasonable to choose 1A alone or 2B alone.

However, that the same person (who chose 1A alone or 2B alone) would choose both 1A and 2B together is inconsistent with expected utility theory. According to expected utility theory, the person should choose either 1A and 2A or 1B and 2B.

In terms of possible probabilities, choice 1A seems clear-cut: we would know if the experimenter cheated. In 1B we expect to win a large amount unless the gamble is biased. That is, P[lose|1A] = 0 is precise and virtually unconditional, whereas P[lose|1B:C] is less precise and conditional. Hence 1A seems a reasonable choice. For experiment 2, both P[win|2A:C] and P[win|2B:C] are imprecise and conditional, about equally so. Hence it seems reasonable to decide on the basis of their nominal values ($110,000 versus $500,000). Here it is the relative conditionality that does most to resolve the paradox.
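The nominal values, and the common-consequence structure that makes the popular pair of choices inconsistent with expected utility theory, can be checked directly. A worked sketch; the utility functions tried at the end are arbitrary examples:

```python
import math

# Expected payoffs (the 'nominal values' above), in whole dollars.
E_2A = (11 * 1_000_000) // 100  # 11% chance of $1 million
E_2B = (10 * 5_000_000) // 100  # 10% chance of $5 million
assert (E_2A, E_2B) == (110_000, 500_000)

# The two experiments differ only by a common consequence: an 89%
# chance of $1 million in experiment 1 versus an 89% chance of nothing
# in experiment 2.  So for ANY utility u, EU(1A) - EU(1B) equals
# EU(2A) - EU(2B), and expected utility theory forces the same
# ordering in both experiments.
def eu_gap(u):
    eu_1a = u(1_000_000)
    eu_1b = 0.89 * u(1_000_000) + 0.01 * u(0) + 0.10 * u(5_000_000)
    eu_2a = 0.89 * u(0) + 0.11 * u(1_000_000)
    eu_2b = 0.90 * u(0) + 0.10 * u(5_000_000)
    return (eu_1a - eu_1b) - (eu_2a - eu_2b)

# The identity holds (to rounding) for linear and concave utilities
# alike, so choosing 1A together with 2B violates expected utility.
for u in (lambda x: x, math.sqrt):
    assert abs(eu_gap(u)) < 1e-6
```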

Dave Marsay
