February 28, 2012
A familiar probability example, using urns, is adapted to illustrate ‘true’ (non-numeric) uncertainty.
The following is a good teaching example:
Suppose that an urn is known to contain black and white balls that are otherwise identical. A subject claims to be able to predict the colour of a ball that they draw ‘at random’.
They ‘predict’ and draw a black ball. What are the odds that they are really able to predict?
From a Bayesian perspective, the final odds are the initial odds times the likelihood ratio. If there are b black and w white balls and we represent the evidence by E and likelihoods by P( E | ), then P( E | Predict ) = 1 and P( E | Luck ) = b/(b+w). Thus the rarer the phenomenon predicted, the more a correct prediction tends to support the claim of reliable prediction.
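The odds calculation above can be sketched directly. This is a minimal illustration, not from the original post: it assumes the subject predicts black and is correct, so the likelihood ratio is P( E | Predict ) / P( E | Luck ) = (b+w)/b.

```python
# Posterior odds that the subject can really predict, after one correct
# prediction of black, via Bayes' rule:
#   posterior odds = prior odds * likelihood ratio
# with P(E | Predict) = 1 and P(E | Luck) = b/(b+w).

def posterior_odds(prior_odds, b, w):
    """Odds on 'can predict' after one correct prediction of black."""
    likelihood_ratio = (b + w) / b  # = 1 / (b/(b+w))
    return prior_odds * likelihood_ratio

# With even prior odds and a rare colour (1 black ball among 10),
# a single correct prediction multiplies the odds by 10.
print(posterior_odds(1.0, b=1, w=9))  # -> 10.0
```

Note how the rarity of the predicted colour drives the update: with b = 5, w = 5 the same correct prediction would only double the odds.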
There is, however, some subjectivity in the estimated probability that the subject can predict:
- In this case, the initial odds seem somewhat arbitrary, and Bayes’ rule seems not to apply to them. For example, have you considered that balls of different colours might be at different temperatures, and so be distinguishable by touch? Such a thought is not ‘evidence’ in the sense of Bayes’ rule, but it might change your subjective estimate of the probability prior to the draw.
- If we do not know the proportions of black and white balls for sure then the likelihood is uncertain.
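The second bullet can be made concrete: when the composition of the urn is unknown, P( E | Luck ) is itself uncertain, and one Bayesian response is to average it over a prior on the composition. A minimal sketch, under the hypothetical assumption of a uniform prior over the number of black balls in an urn of known size n:

```python
# Expected P(E | Luck) for a prediction of 'black', averaging b/n over a
# uniform prior on b in {0, 1, ..., n}. A single point estimate like this
# hides how spread out the possible likelihoods are.

def expected_luck_likelihood(n):
    """Mean of b/n with b uniform on 0..n, for an urn of known size n."""
    return sum(b / n for b in range(n + 1)) / (n + 1)

print(expected_luck_likelihood(10))  # averages to 0.5 for any n
```

Under this particular prior the expectation is always 1/2, whatever n is, yet the true likelihood could be anywhere from 0 to 1: the number summarises the uncertainty without removing it.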
Here we introduce a different type of uncertainty:
Suppose now that the subject is faced with two urns and selects a ball from one. Given the number of black and white balls in each urn, what is the likelihood, P( E | Luck ), of a correct prediction due to luck?
If you think the question is ambiguous, please disambiguate it however you wish.
Suppose you know the total numbers of black and white balls in the two urns. Is the likelihood estimate P( E | Luck ) = b/(b+w) reasonable? Could it be biased? How?
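One way to see a possible bias, under one disambiguation of the question (an assumption, not stated above): the subject always predicts black, picks one of the two urns uniformly at random, and then draws a ball from it. The pooled estimate b/(b+w) weights every ball equally, whereas the actual chance of a lucky match weights each urn equally, so the two disagree whenever the urns differ in size. A sketch, with illustrative urn compositions:

```python
# Compare the pooled likelihood b/(b+w) with the actual probability of a
# lucky correct 'black' prediction when an urn is chosen uniformly first.
# The urn compositions below are illustrative, not from the original post.

def pooled_likelihood(urns):
    """b/(b+w) computed over all balls in all urns together."""
    b = sum(black for black, white in urns)
    n = sum(black + white for black, white in urns)
    return b / n

def uniform_urn_likelihood(urns):
    """P(black) when an urn is chosen uniformly, then a ball from it."""
    return sum(black / (black + white) for black, white in urns) / len(urns)

urns = [(1, 1), (8, 2)]              # (black, white): one small urn, one large
print(pooled_likelihood(urns))       # 9/12 = 0.75
print(uniform_urn_likelihood(urns))  # (1/2 + 8/10)/2 = 0.65
```

Here the pooled estimate over-weights the larger urn, overstating the chance of a lucky hit; with the urn sizes reversed it could just as easily understate it.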