Tversky & Kahneman’s Judgment u Uncertainty

Amos Tversky and Daniel Kahneman, ‘Judgment under Uncertainty: Heuristics and Biases’, Science, New Series, Vol. 185, No. 4157 (Sep. 27, 1974), pp. 1124–1131.

This seminal work notes that people have to use heuristics to estimate probabilities, and that this can give rise to systematic biases. It forms a key part of their Nobel-prize-winning work and of the foundations of behavioural economics. It covers representativeness, availability and ‘adjustment and anchoring’. There does not appear to be a freely available version.

Comments

In the discussion it becomes clear that by ‘probability’ the authors mean the narrow numeric concept often attributed to Bayes. Thus the probability of Heads for a particular coin is definitely 0.5, rather than an uncertain ‘0.5-ish’. The paper is therefore really about ‘judgement under numeric probability’, with no discussion of the difference between probability and broader ‘Knightian’ uncertainty, or of its impact on the experiments or their interpretation. Given the general tendency to conflate probability and uncertainty, and the lack of detail on the experiments (particularly on the questions asked), it is hard to be sure how to interpret the findings of this paper. There seem to be many more credible explanations for the reported results than those that the authors consider.

In the ‘representativeness’ section I note, from later in the paper, that the subjects were undergraduates. I wonder whether (as the authors themselves seem to) they confuse likelihood with probability, and whether they mis-apply the principle of indifference.
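To illustrate the likelihood/probability distinction (a sketch with my own numbers, not the paper’s): a description may be highly ‘representative’ of an engineer, so the likelihood P(description | engineer) is high, yet the probability P(engineer | description) can still be modest once base rates are taken into account.

```python
# Minimal sketch (illustrative numbers of my own): likelihood vs probability.
p_desc_given_engineer = 0.9   # likelihood: the description fits an engineer well
p_desc_given_lawyer = 0.3     # likelihood: it fits a lawyer less well
p_engineer = 0.1              # base rate: engineers are 10% of the group
p_lawyer = 0.9

# Bayes' rule: posterior is likelihood weighted by the base rate.
p_desc = p_desc_given_engineer * p_engineer + p_desc_given_lawyer * p_lawyer
p_engineer_given_desc = p_desc_given_engineer * p_engineer / p_desc

print(p_engineer_given_desc)  # ~0.25: high likelihood, but modest probability
```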

Under ‘insensitivity to predictability’ it is noted that, when asked to judge how well current student-teachers will perform as teachers in five years’ time, people rely too much on current performance, which should only be a very rough guide. Thus the top 10% of current students will not be the top 10% of teachers in five years’ time. This is held up as a bias. But what exactly was the question? If I were trying to rank teachers in five years’ time, it would seem reasonable to base the ranking on their current rankings, if that is all I have (as the sketch below illustrates). This example could be more about the subtleties of the notion of ‘prediction’ than about probability.
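A rough simulation of the tension here (a toy model of my own, not the paper’s): if future performance is only weakly predicted by current performance, most of the current top 10% will indeed drop out of the future top 10%, yet ranking by current performance is still the best ordering available when current performance is all the information to hand.

```python
# Toy model (my own assumptions): future performance is a noisy function of
# current performance with modest predictive validity r.
import random

random.seed(0)
r = 0.4        # assumed correlation between current and future performance
n = 10_000

current = [random.gauss(0, 1) for _ in range(n)]
future = [r * c + (1 - r ** 2) ** 0.5 * random.gauss(0, 1) for c in current]

cutoff_now = sorted(current, reverse=True)[n // 10]
cutoff_later = sorted(future, reverse=True)[n // 10]
top_now = [i for i in range(n) if current[i] >= cutoff_now]
still_top = sum(1 for i in top_now if future[i] >= cutoff_later)

# Most of the current top 10% are no longer in the top 10% later...
print(f"share of current top 10% still in the future top 10%: "
      f"{still_top / len(top_now):.0%}")
# ...but if current score is all we have, ranking by it remains the best
# available guess at the future ranking.
```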

The sub-section ‘illusion of validity’ notes that consistency in the data tends to lead to confidence in the interpretation. It raises some important points, but conflates statistical variability with more structured variability. This is related to the issue of Knightian uncertainty.

In the ‘availability’ section an experiment with red and white balls being drawn from an urn is discussed. I make the following observations:

  1. The subjects were asked to compare probability values which were actually very close. Consistent errors here do not seem very worrying.
  2. The subjects tended to overestimate the probability of conjunctive events and underestimate that of disjunctive events. But taking account of uncertainty tends to have the same effect (below).
  3. The paper draws an analogy with planning a project or risk-managing a nuclear power station. From a probability perspective all probabilities are comparable on the same scale, but from an uncertainty perspective the situations are quite different, and so the comparison is not valid.

The second point can be motivated as follows. If the true probability of an event is (p + e), where e is an error term symmetrically distributed about zero, then the conventional probability is p and the uncertainty is e. The probability of n successes in a row is then (p + e)^n = p^n + n·p^(n-1)·e + (n(n-1)/2)·p^(n-2)·e^2 + … . The first term is what the paper assumes, which ignores the uncertainty. The terms with odd powers of e have zero expected value, since e is a symmetric error term, while the terms with even (non-zero) powers of e make a positive contribution, so the expected total is strictly greater than the first term. Thus the paper’s calculation underestimates the probability of a conjunction when the underlying probability is uncertain. (Applying the same argument to the probability of no successes shows that it correspondingly overestimates the probability of a disjunction.) Such uncertainty is easy to imagine here: the paints might feel slightly different, or might leave the balls at slightly different temperatures, so that the draws are not exactly uniform. Perhaps people routinely take account of this kind of uncertainty?
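A quick numerical check of this argument (illustrative numbers of my own, loosely in the spirit of the urn experiment): with a small symmetric error on the per-draw probability, the expected probability of the conjunction comes out above the naive p^n, and that of the disjunction below the naive 1 − (1 − p)^n.

```python
# Monte Carlo check (my own illustrative numbers). If the per-draw probability
# is p + e, with e a small symmetric error, then the expected probability of
# n successes in a row exceeds p**n, while the expected probability of at
# least one success in n draws falls below 1 - (1 - p)**n.
import random

random.seed(1)
n, trials = 7, 200_000
p_conj, p_disj = 0.9, 0.1   # assumed per-draw success probabilities

conj = disj = 0.0
for _ in range(trials):
    e = random.uniform(-0.05, 0.05)          # symmetric error term
    conj += (p_conj + e) ** n                # conjunction: n successes in a row
    disj += 1 - (1 - p_disj - e) ** n        # disjunction: at least one success

print(f"conjunction: naive {p_conj ** n:.3f}, "
      f"allowing for uncertainty {conj / trials:.3f}")
print(f"disjunction: naive {1 - (1 - p_disj) ** n:.3f}, "
      f"allowing for uncertainty {disj / trials:.3f}")
```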

Under ‘discussion’ it is noted that coins have no memories and are therefore incapable of generating sequential dependencies. But if one is not certain that the coin is precisely fair then, with enough tosses, one might learn what its bias is. That is, even if the coin has no memory and is incapable of generating dependencies, you have a memory, and there may be dependencies between your estimates.
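A minimal sketch of this point (my own, not from the paper): give the uncertain bias a uniform Beta(1, 1) prior and update it toss by toss. The predictive probability of Heads then shifts with what has been observed, so successive estimates are dependent even though the tosses themselves are independent given the bias.

```python
# Sketch (my own assumptions): Bayesian updating of an uncertain coin bias.
# The coin has no memory, but the observer does, so predictions change.

def predictive_p_heads(heads: int, tails: int, a: float = 1.0, b: float = 1.0) -> float:
    """Posterior-predictive P(next toss is Heads) under a Beta(a, b) prior."""
    return (a + heads) / (a + b + heads + tails)

tosses = "HHHHHHHHTH"  # an illustrative run of observed tosses
heads = tails = 0
for t in tosses:
    print(f"after {heads + tails:2d} tosses, P(next is Heads) = "
          f"{predictive_p_heads(heads, tails):.3f}")
    heads += t == "H"
    tails += t == "T"
```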

See Also

Kahneman behavioural economics, anomalies.

Dave Marsay
