Ellsberg’s Risk, Ambiguity and Decision
Daniel Ellsberg, Risk, Ambiguity and Decision, Garland, 2001
This is Ellsberg’s 1962 Harvard thesis, building on his QJE paper, Risk, Ambiguity and the Savage Axioms. (And written before his rise to notoriety.) The approach taken is descriptive rather than normative, but builds on the insights of Keynes, Savage, Shackle and Good.
Vagueness, confidence and the weight of arguments
[T]he particular assumption that all degrees of belief may be represented by definite, uniquely-defined numbers … commits one to too great precision, leading either to absurdities, or else to undue restriction of the field of applicability of the idea of probability.
The nature and use of normative theory
The principles one may feel “sure of” … are guides not to “truth” but to consistency … .
The utility axioms as norms
Ellsberg, in effect, is discussing the foundations of science.
Normative theory and empirical research
[A]ny agreements reached prior to measurement [sic] are not binding and serve as devices to guide progress, not rules to limit it. We don’t know how to measure the values of decision today, and until we do, it would be foolish to agree any commitment once and for all. [Quoting C.W. Churchman 1956.]
The Bernoulli proposition
For a reasonable decision-maker, there exists a set of numbers … corresponding to the uncertain outcomes of any risky proposition or “gamble,” and a set of numbers (numerical probabilities) corresponding to the events determining these outcomes, such that … mathematical expectations … will reflect his actual, deliberated preferences among these gambles.
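The proposition can be illustrated with a toy expected-utility calculation (the gambles, probabilities and utilities below are invented purely for illustration; they are not Ellsberg’s):

```python
def expected_utility(gamble):
    # gamble: list of (probability, utility) pairs for its possible outcomes.
    return sum(p * u for p, u in gamble)

# Two hypothetical gambles; the Bernoulli proposition says the deliberated
# preference between them should match the ordering of these expectations.
g1 = [(0.5, 100.0), (0.5, 0.0)]   # a fair shot at 100
g2 = [(0.9, 40.0), (0.1, 20.0)]   # safer, smaller payoffs
assert expected_utility(g1) > expected_utility(g2)  # 50 > 38
```

The entire content of the proposition is that such numbers exist and rank the gambles as the decision-maker would; the urn examples below question whether they always do.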
A possible counter-example: are there uncertainties that are not risks?
Ellsberg introduces his urn examples, for which he supposes that some people would regard it – even after reflection – as ‘reasonable’ to violate the above proposition. This is commonly known as ‘the Ellsberg paradox’.
Vulgar evaluations of risk
[I] am encouraged to challenge what is becoming a new orthodoxy of opinion by the thought that the theory of expected value … appeared to generations of subtle minds, as just as unshakable, as intuitively and logically compelling, as uniquely “right,” as does the “Sure-Thing Principle” today to the truest believer.
Discusses utilities and coherence, as ideals.
Opinions that make horse races
Ellsberg quotes Savage:
If I offer you $10.00 if any one horse of your choosing wins a given race, your decision tells me operationally which you consider to be the most probable winner.
Ellsberg advises that:
In the following chapter, we shall consider some situations in which the theory seems … to lead me astray.
He also quotes Savage’s highlighting of:
the assumption that on which of two events the person will choose to stake a given prize does not depend on the prize itself.
Ellsberg develops his well-known and paradoxical urn examples.
Why are some uncertainties not risks?
Decision criteria …
It is not appropriate to reason in terms of ‘games against nature’.
Ellsberg discusses variations on the Hurwicz criterion.
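The textbook form of the Hurwicz criterion blends the best and worst outcomes of an act with an ‘optimism’ coefficient alpha; a minimal sketch (this is the standard form, not any of Ellsberg’s particular variations):

```python
def hurwicz(outcomes, alpha):
    # alpha = 0 gives pure pessimism (maximin); alpha = 1 gives pure
    # optimism (maximax); intermediate values blend the two.
    return alpha * max(outcomes) + (1 - alpha) * min(outcomes)

# A cautious agent (alpha = 0.2) scores a risky act [0, 100] below a
# safe act [40, 40]; an optimistic one (alpha = 0.8) reverses this.
assert hurwicz([0, 100], 0.2) < hurwicz([40, 40], 0.2)
assert hurwicz([0, 100], 0.8) > hurwicz([40, 40], 0.8)
```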
Allais and the sure-thing principle
Ellsberg notes that Allais’ example raises concerns about value that are in addition to the uncertainty issues that Ellsberg explores.
Ellsberg claims that it is reasonable to violate the assumptions of various (numeric) probability theories, but does not propose an alternative normative theory. It could be that although Savage himself (as quoted by Ellsberg) would violate his own principles, those principles are in fact ‘mathematically valid’. The question is left open.
In choosing between urns, one is choosing between cases in which the long-run trend is given by the law of large numbers. Let us suppose (as is usual) that it is reasonable to be guided by this law. (For example, that one is very rich and will live forever.) It does seem reasonable to prefer the urn for which the expected outcome, based on the law, is the greatest.
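The long-run trend appealed to here can be simulated (the urn and payoff below are invented for illustration; Ellsberg’s urns differ):

```python
import random

random.seed(0)

# Hypothetical urn: a 50/50 chance of a $100 prize per draw. By the law
# of large numbers, the running average payoff settles near the expected
# value, $50, so in the long run it is this expectation that matters.
n = 100_000
wins = sum(random.random() < 0.5 for _ in range(n))
average_payoff = 100 * wins / n  # close to 50
```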
Consider a fair coin versus a coin of unknown bias, a situation similar to Ellsberg’s urns. According to the usual theory, one must assign some probability, p, of Heads to the unknown coin, so that there are only these possibilities:
- You are indifferent between coins when betting on Heads, and also when betting on Tails (p=0.5).
- You would prefer the fair coin when betting on Heads, and the unknown coin for Tails (p<0.5).
- Conversely (p>0.5).
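The three cases above exhaust the possibilities: under the usual theory, no single assignment of p can make the fair coin strictly preferred for both bets. A quick check (a sketch, with a $1 stake assumed):

```python
def fair_strictly_preferred(p, stake=1.0):
    # Expected value of a bet against each coin.
    ev_fair = 0.5 * stake                # same for Heads and Tails
    ev_unknown_heads = p * stake
    ev_unknown_tails = (1 - p) * stake
    # Strict preference for the fair coin on BOTH bets requires
    # 0.5 > p and 0.5 > 1 - p, which no p can satisfy.
    return ev_fair > ev_unknown_heads and ev_fair > ev_unknown_tails

assert not any(fair_strictly_preferred(i / 100) for i in range(101))
```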
It is supposedly never ‘rational’ to definitely prefer the fair coin when betting on either Heads or Tails. But how ‘should’ we compare the corresponding expectations?
The usual rule is to compare their mid-points, but this seems under-motivated. If one is cautious, it may be reasonable to take the worst case, typically one of the extremes. Thus one would prefer a fair coin to one of unknown bias, whether betting on heads or tails, which violates the above recommendation.
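The worst-case rule can be made concrete by treating the unknown bias as an interval of possibilities rather than a single number (a sketch; the interval endpoints below are assumed for illustration):

```python
def worst_case_ev(bias_lo, bias_hi, bet_on_heads, stake=1.0):
    # Expected return of a bet, evaluated at the least favourable bias
    # in the interval [bias_lo, bias_hi].
    if bet_on_heads:
        return bias_lo * stake        # the worst bias for a Heads bet
    return (1 - bias_hi) * stake      # the worst bias for a Tails bet

# Fair coin: bias known to be exactly 0.5. Unknown coin: bias could be
# anywhere in [0, 1]. On the worst-case rule the cautious bettor prefers
# the fair coin for BOTH bets, violating the 'rationality' constraint.
for heads in (True, False):
    assert worst_case_ev(0.5, 0.5, heads) > worst_case_ev(0.0, 1.0, heads)
```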
Suppose that by choosing the fair coin one gets an adequate long-run return, whereas the return from the unknown coin could be disastrous. In such a case, the ‘irrational’ choice of the fair coin seems reasonable. For example, suppose that you thought that the coin was two-sided, with Heads and Tails being equally likely. Would you commit yourself to betting on Heads forever? Or suppose that the coin was to be chosen by someone else. Would you assume that they would be fair? The ‘standard’ assumption, that you can treat the uncertainty as a probability, seems not only unmotivated, but wrong.
My notes on uncertainty.