Kahneman’s Maps of Bounded Rationality
Daniel Kahneman, "Maps of Bounded Rationality: A Perspective on Intuitive Judgment and Choice", Nobel Prize Lecture, December 8, 2002.
This is a review of Kahneman’s system 1 / system 2 model as described in his lecture, which is intended for a broad audience. I comment on biases as such elsewhere.
My interpretation of the paper is that system 1 (sensing) is regarded as ‘accessing’ reality and presenting an unambiguous object-based impression of it to system 2 (reasoning). Reasoning can check the overall impressions for conformance with ‘folk-science’ rules, and then accept and integrate them, or reject and possibly correct them. Thus any inherent ambiguity is resolved by our intuitive sensing, and if the resulting impression seems reasonable, it is accepted unchallenged.
Following the crash of 2008 it seems credible that people form unduly definite opinions on values and risks, perhaps informed by intuitive heuristics, which their reasoning does too little to check. But although Kahneman was awarded the economics prize, the paper does not seem to be about judgements such as value and risk: they are not attributes of objects, and they are not sensed or ‘accessed’. So I consider the model more generally, as in sensing that there is a red bat.
If true, this model would explain why people tend to make poor decisions in complex situations where there is no unambiguous object-based ‘reality’ to be accessed. Kahneman uses this general model to derive more specific models of biases.
It seems to me reasonable that there is some such sensing / reasoning distinction, with sensing acting more or less automatically and then being corrected by reasoning. I doubt both the object-basing and the lack of ambiguity of representations, which would seem to doom us to sleep-walk from mistake to mistake. It may be that most people see like this most of the time in most circumstances, but it may not be universal, and even for those who do see like this it may be a habit that could be unlearnt, rather than biologically inherent.
When I look at the night sky it seems to me that I often see a ‘bright spot’, and that only after a time does reasoning classify it as, for example, a star or a plane. But ‘bright spot’ is not a designation of a real-world object. It is simply a summary of what is happening on my retina. If asked what it was I would immediately give a list of possibilities (satellite, space-station, UFO, … ). It is not obvious that my sensing has any difficulty in representing the ambiguity as to whether it is a shining object, a reflective object or an illusion. Presumably in these cases it has an object type ‘all the objects that might give rise to a bright spot’, but then it seems misleading to say that the description is object-based.

Indeed, it seems to me that when looking at an abstract work of art I have the impression of red squares and so on, and am not necessarily ‘accessing’ reality. Under Kahneman’s model, when I watch TV the reality that I am ‘accessing’ is presumably a TV screen, so that I must use reasoning to decode the scene that the TV is presenting. But it seems to me (from a physical viewpoint) that I must use the same system to decode the scene whether I am viewing it directly or through the TV. I do not think that sensing ‘knows’ that it is watching a TV, and so it could not treat the sensations differently.
The Ishihara colour perception test (http://en.wikipedia.org/wiki/Ishihara_color_test) consists of a disc made up of many coloured dots. These are intended to conjure up pictures, but the things pictured are not ‘real’. People with normal vision see only the ‘good’ picture, whereas people with colour blindness see only the ‘bad’ picture or no picture. This is consistent with Kahneman’s model. However, I can see both good and bad patterns, as if sensing were representing the ambiguity. If I report what I actually see it tends to confuse the doctors and waste a lot of time, so I use reason to decide what to report. It is hard to reconcile this with Kahneman’s model.
My suggestion, then, is that – contrary to Kahneman – while sensing uses a constrained ‘object-like’ representation schema, sensing and reasoning between them can potentially recognize when things represented as objects are less object-like than the norm, and deal with them accordingly. Thus when sensing represents a rainbow there is no confusion when it appears to break the laws of physics. Nor is there difficulty in sensing representations, such as TV pictures. Similarly, it seems that sensing can represent some additional ambiguity over and above that inherent in its classifying scheme, such as ‘a bright spot in the sky’.

Perhaps what the work of Kahneman and others is showing is a general and dangerous, but not universal and unavoidable, tendency to simplify. Thus when we first see a rainbow we treat it as if there were some real object, and we habitually represent things the way we normally do in that context, suppressing ambiguity for efficiency and convenience. But we learn that rainbows are not like other objects, and that spots in the sky can have many different types of causes. Perhaps reasoning can learn to improve in this respect. I hope so.
A 2011 Google Talk publicising Thinking, Fast and Slow is more accessible than his papers. It also describes how the ‘quality’ of the system 1 output is used to inform a confidence estimate, which in turn informs probability estimates. This indicates a general ‘degree of uncertainty’ but does not specifically distinguish ambiguity from straightforward inaccuracy.
Cosmides and Tooby give a different perspective, which they oppose to Kahneman’s views on biases.