Kahneman’s Thinking, Fast and Slow
Daniel Kahneman Thinking, Fast and Slow Penguin, 2012
A popular book, covering much the same ground as his Nobel speech and papers, but more accessible. I generally have difficulty with Kahneman, which seems to be because he talks an alien language, but I felt that I had less difficulty here. I comment as a mathematician with an interest in concepts of rationality.
The basic framework is of fast intuitive thinking (system 1) imperfectly corrected by slower, ‘rational’, thinking (system 2).
… the idea that our minds are susceptible to systematic errors is now generally accepted.
Intuition is nothing more and nothing less than recognition.
These observations … present a deep challenge to the rationality assumptions favoured in standard economics.
I Two Systems
The characters of the story
System 2 is activated when an event is detected that violates the model of the world that system 1 maintains.
The lazy controller
All roses are flowers
Some flowers fade quickly
Therefore some roses fade quickly
A large majority of college students endorse this syllogism … . Failing these minitests appears to be … a matter of insufficient motivation. … The ease with which they are satisfied enough to stop thinking is rather troubling. … The psychologist would call [those who get this type of question correct] more rational.
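The invalidity of the syllogism can be checked mechanically: it is enough to exhibit one model in which both premises are true and the conclusion is false. A minimal sketch in Python (the two-flower model is a hypothetical example, not from the book):

```python
# A tiny model in which both premises hold but the conclusion fails.
flowers = [
    {"name": "rose",  "is_rose": True,  "fades_quickly": False},
    {"name": "tulip", "is_rose": False, "fades_quickly": True},
]

# Premise 1 holds by construction: every item in `flowers` is a flower,
# including the roses among them.
some_flowers_fade_quickly = any(f["fades_quickly"] for f in flowers)
some_roses_fade_quickly = any(
    f["fades_quickly"] for f in flowers if f["is_rose"]
)

assert some_flowers_fade_quickly        # premise 2: True
assert not some_roses_fade_quickly      # conclusion: False, so invalid
```

The quick-fading flowers may all be non-roses, which is precisely the possibility that system 1 fails to keep track of.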
The associative machine
Of ‘priming studies’:
You have no choice but to accept that the major conclusions of these studies are true. More important, you must accept that they are true about you.
When you feel strained, you are more likely to be vigilant and suspicious, invest more effort in what you are doing, feel less comfortable, and make fewer errors, but you also are less intuitive and creative than usual.
A happy mood loosens the control of system 2 over performance: when in a good mood, people become more intuitive and creative but also more prone to logical errors.
Norms, surprises, and causes
System 1, which understands language, has access to norms of categories, which specify the range of plausible values as well as the most typical cases.
We are evidently ready from birth to have impressions of causality, which do not depend on reasoning about patterns of causality.
… the evidence is that we are born prepared to make intentional attributions …
Statistical thinking derives conclusions about individual cases from properties of categories and ensembles. Unfortunately, System 1 does not have the capability for this mode of reasoning; System 2 can learn to think statistically, but few people receive the necessary training.
A machine for jumping to conclusions
In the absence of an explicit context, System 1 generate[s] a likely context on its own.
System 1 does not keep track of alternatives that it rejects, or even the fact that there were alternatives.
System 1 is radically insensitive to both the quality and quantity of the information that gives rise to impressions and intuitions.
How judgements happen
… System 1 represents categories by a prototype or a set of typical examples … .
II Heuristics and Biases
The science of availability
Among the basic features of System 1 is its ability to set expectations and to be surprised when the expectations are violated. The system also retrieves possible causes of a surprise, usually by finding a possible cause among recent surprises.
Tom W’s speciality
The concept “the probability that Tom W studies computer science” is not a simple one. Logicians and statisticians disagree about its meaning, and some would say it has no meaning at all.
The illusion of understanding
… we apply the word know only when what was known is true and can be shown to be true.
Many intelligent and well-informed people were keenly interested in the future of the economy and did not believe a catastrophe was imminent; I infer from this that the crisis [of 2007/8] was not knowable.
Intuition versus formulas
Several studies have shown that human decision makers are inferior to a prediction formula even when they are given the score given by the formula! They feel they can overrule the formula because they have additional information about the case, but they are wrong more often than not. … there are few circumstances under which it is a good idea to substitute judgement for a formula. … The name “broken leg rule” has stuck. The point is, of course, that broken legs are very rare – as well as decisive.
… the hostility to algorithms will probably soften as their role in everyday life continues to expand.
Expert intuition: when can we trust it?
… how can we evaluate the probable validity of an intuitive judgment? … The answer comes from the two basic conditions for acquiring a skill:
- an environment that is sufficiently regular to be predictable
- an opportunity to learn these regularities through prolonged practice.
Robin Hogarth described “wicked” environments, in which professionals are likely to learn the wrong lessons from experience.
… intuition cannot be trusted in the absence of stable regularities in the environment.
The unrecognized limits of professional skill help explain why experts are often overconfident.
The outside view
… people who have information about an individual case rarely feel the need to know the statistics of the class to which the case belongs.
A proud emphasis on the uniqueness of cases is also common in medicine, in spite of recent advances in evidence-based medicine that point the other way. Medical statistics and baseline predictions come up with increasing frequency in conversations between patients and physicians. However, the remaining ambivalence about the outside [statistical] view in the medical profession is expressed in concerns about the impersonality of procedures that are guided by statistics and checklists.
The treatment for the planning fallacy [is] reference class forecasting … :
- Identify an appropriate reference class … .
- Obtain the statistics of the reference class … .
- Use specific information about the case to adjust the baseline prediction … .
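The three steps reduce to a trivial calculation: anchor on the reference-class statistic, then shift it only for case-specific information. A sketch, in which the reference class and every number are illustrative assumptions, not figures from the book:

```python
def reference_class_forecast(baseline, adjustment=0.0):
    """Start from the reference-class statistic; adjust only for
    specific information about the case."""
    return baseline + adjustment

# Step 1: pick a reference class, e.g. "past projects of this kind"
#         (hypothetical).
# Step 2: obtain its statistic, say a mean cost overrun of 40%
#         (an assumed number).
baseline_overrun = 0.40

# Step 3: specific information shifts the estimate; with none, the
#         baseline stands unchanged.
no_information_estimate = reference_class_forecast(baseline_overrun)
adjusted_estimate = reference_class_forecast(baseline_overrun, adjustment=-0.05)
```

The point of the procedure is the anchor: the planner's inside view is replaced by the class statistic before any case-specific adjustment is allowed.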
The engine of capitalism
Allowing for the information that does not come to mind – perhaps because one never knew it – is impossible.
Even if they knew how little they know, the executives would be penalized for admitting it.
… inadequate appreciation of the uncertainty of the environment inevitably leads economic agents to take risks they should avoid. However, optimism is highly valued, socially and in the market; people and firms reward the providers of dangerously misleading information more than they reward truth tellers. One of the lessons of the financial crisis that led to the Great Recession is that there are periods in which competition, among experts and among organisations, creates powerful forces that favor a collective blindness to risk and uncertainty.
Other professionals must deal with the fact that an expert worthy of the name is expected to display high confidence.
An unbiased appreciation of uncertainty is a cornerstone of rationality – but is not what people and organisations want.
I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.
In other situations, overconfidence was mitigated (but not eliminated) when judges were encouraged to consider competing hypotheses.
[Premortems are suggested:] “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of the disaster.”
This discusses variations on ‘decisions by numbers’, as if such decision-making were always reasonable.
The fourfold pattern
… there is a powerful argument that a decision maker who wishes to be rational must conform to the expectation principle. This was the main point of the axiomatic version of utility theory that von Neumann and Morgenstern introduced in 1944. They proved that any weighting of uncertain outcomes that is not strictly proportional to probability leads to inconsistencies and other disasters.
Kahneman claims that the choices in the Allais paradox ‘do not make logical sense’ and such people are ‘misguided’.
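The inconsistency in the common Allais choices can be verified directly. Under expected utility, EU(A) − EU(B) is algebraically identical to EU(C) − EU(D), so preferring A to B while preferring D to C is inconsistent whatever the utility function. A sketch using the standard Allais lotteries (payouts in millions; the utility functions tried are arbitrary examples):

```python
import math

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {payout: probability}."""
    return sum(p * u(x) for x, p in lottery.items())

A = {1: 1.00}                      # $1M for certain
B = {1: 0.89, 5: 0.10, 0: 0.01}    # mostly $1M, some chance of $5M or nothing
C = {1: 0.11, 0: 0.89}
D = {5: 0.10, 0: 0.90}

# Most people prefer A over B, and D over C.  But for any utility
# function u, the gap EU(A) - EU(B) equals the gap EU(C) - EU(D):
# both reduce to 0.11*u(1) - 0.10*u(5) - 0.01*u(0).
for u in (lambda x: x, math.sqrt, math.log1p):
    gap_AB = expected_utility(A, u) - expected_utility(B, u)
    gap_CD = expected_utility(C, u) - expected_utility(D, u)
    assert abs(gap_AB - gap_CD) < 1e-9
```

So an expected-utility maximiser who prefers A to B must also prefer C to D; the typical pattern of choices violates the expectation principle, which is the sense in which Kahneman calls it misguided.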
We have neither the inclination nor the mental resources to enforce consistency on our preferences, and our preferences are not magically set out to be coherent, as they are in the rational-agent model.
Kahneman notes that ‘the remembering self’ is more significant than ‘the experiencing self’.
The only test of rationality is not whether a person’s beliefs are reasonable, but whether they are internally consistent.
Humans, unlike Econs, need help to make good decisions, and there are informed and unintrusive ways to provide that help.
Organizations are better than individuals when it comes to avoiding errors, because they naturally think more slowly and have the power to impose orderly procedures. Organizations can institute and enforce the application of useful checklists, as well as more elaborate exercises, such as reference-class forecasting and the premortem. … The operative concept is routine.
Kahneman’s work on biases has influenced behavioural economics, which criticises conventional economics for supposing that people act rationally. But all that is normally supposed is that the net effect is as if people act more or less rationally, because rationality is advantageous in a competitive market. If one thinks of the market as an ‘organization’, or supposes that the main actors within the market (e.g., fund managers) operate within an organization, then this book would seem to predict that organizations act more rationally than typical individuals, thus seeming to support conventional economic assumptions.
The concept of knowledge is key to, for example, the ‘illusion of understanding’, but is not explained. But (in ‘illusion of understanding’) it seems to have the following remarkable properties:
- It is ‘true’ and ‘can be shown to be true’.
- It will be known to all but a few of any intelligent and well-informed people who are keenly interested.
Thus, it seems, smart people tend to know everything of importance that can be known: all our big problems are caused by things that are not only unknown but unknowable. If true, this would seem to be of much greater significance than this book, and to provide some balance. (A more conventional view is that – outside of mathematics – people simply have beliefs that they believe they can justify, consistent with the prevailing norms and culture.)
Kahneman advocates ‘statistical thinking’ within system 2, but while he recognizes that there are many variants of such thinking, he does not make clear which he favours.
The flower example (from ‘the lazy controller’) can be adapted as follows:
P( fades quickly | flower ) > 0, P( rose | flower ) > 0, therefore P( fades quickly | rose ) > 0.
This is wrong, for the reason given. But (in ‘the outside view’) Kahneman advocates reference class forecasting. Here one takes P( fades quickly | rose ) = P( fades quickly | flower ) as an initial estimate, and then uses ‘specific information about the case’ to adjust the baseline. Thus if we have no specific information we let the estimate stand. This seems wrong, or at best simplistic.
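A toy population makes the worry concrete; the composition below is invented purely for illustration:

```python
# A population in which both conditions of the adapted 'syllogism' hold,
# yet the reference-class baseline is a poor estimate for roses.
population = (
    [{"kind": "rose",  "fades": False}] * 50   # no rose fades quickly
    + [{"kind": "tulip", "fades": True}] * 50  # every tulip does
)

def prob(items, pred):
    """Proportion of items satisfying pred."""
    items = list(items)
    return sum(1 for x in items if pred(x)) / len(items)

p_fades_given_flower = prob(population, lambda x: x["fades"])           # 0.5
p_rose_given_flower = prob(population, lambda x: x["kind"] == "rose")   # 0.5
p_fades_given_rose = prob(
    [x for x in population if x["kind"] == "rose"], lambda x: x["fades"]
)                                                                       # 0.0
```

Both conditions hold (P( fades quickly | flower ) = 0.5 > 0 and P( rose | flower ) = 0.5 > 0), yet the reference-class baseline of 0.5 is maximally wrong about roses; with no specific information, letting the baseline stand can mislead badly.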
Kahneman seems to regard consistency as desirable, without considering whether or not it is achievable, or what the implications of striving for it might be. The utility theory axioms of von Neumann and Morgenstern, based on notions of consistency, are cited as if they always held. But as they noted, the axioms only hold in stable situations, and hence not in situations like the crash of 2007/8. If the rules of the game change then so should we: consistency is not always a good strategy. Also, Kahneman assumes ‘risk neutrality’ in setting his norm, so that in his examples the value of money is proportional to its face-value. But – as far as I can see – he gives no justification for this.
Kahneman praises organizations for their greater rationality, due to their ‘orderly procedures’. It is clear how this could be a benefit in stable times, but in interesting times such practices could stifle innovation. In particular, while in stable times we might expect confident predictions from our experts, in times of pending and actual crisis such confidence would seem to be a clear sign of a lack of understanding of the prevalent uncertainties.
Limits of judgment
Kahneman warns that system 1 thinking relies on:
- an environment that is sufficiently regular to be predictable,
- an opportunity to learn these regularities through prolonged practice.
But does not conventional system 2 statistical thinking also rely on these simplicities?
The book is full of good insights into how people and organisations think, but its normative content only really seems to be reliable in situations in which statistical thinking is reliable, and the different schools of thought all amount – in practice – to the same thing. I question the straightforward application of its ideas to complex systems such as economies, and in particular the idea that intelligent, motivated and well-informed people working within organisations tend to know what is going on. Incompetence often seems to me a good explanation for organizational blunders, and I do think that a better understanding of human knowledge could help.
Kahneman describes system 2 as if it had objective rationality, without considering the possibility that system 2 is culturally biased, or that thinking within an organisation might have biases characteristic of the organisation. In so doing it seems to me that Kahneman is reflecting a narrow culture that I do not share.
Other work by Kahneman et al.