Gladwell’s Blink

Malcolm Gladwell, Blink: The Power of Thinking Without Thinking, Penguin, 2005.

Gladwell argues that first impressions can be more reliable than considered analysis, and that performance can be improved by practice.

So, when should we trust our instincts and when should we be wary of them?

Gladwell tells many interesting stories, which may merit reading even if one does not entirely accept his analysis.

The statue that didn’t look right

A statue was being bought that looked too good to be true. Scientific testing showed that it could not be a fake, whereas many experts’ first reactions were that something was wrong. It turned out to be a fake, after it had been paid for.

“I always considered scientific opinion more objective than aesthetic judgements,” [the curator said].

My comments

The scientific opinion can be divided into two claims: ‘The statue could not have been faked using any method known to me’ and ‘No-one could ever invent a new method’. The first claim seems genuinely scientific. The second is not. It seems to me that if the scientist had been clearer about his findings (as distinct from his assessment) then a lot of trouble might have been saved.

The issue relates to the pragmatic notion of making decisions based on the model that you have, only questioning the model when it is clearly wrong. For the statue, this pragmatic approach seems inappropriate.

Van Riper’s Big Victory

This describes the US war-game in preparation for the second Gulf War, with van Riper playing the adversary. The US forces (‘blue’) had far superior equipment and sophisticated decision support, yet the first time around van Riper managed to defeat them.

[The] mistake that Blue Team made [was that they] had a system in place that forced their commanders to stop and talk things over and figure out what was going on. They would have been fine if the problem in front of them demanded logic. … [They] needed to solve an insight problem, but their powers of insight had been extinguished.

They were so focussed on the mechanics and the process that they never looked at the problem holistically. In the act of tearing something apart, you lose its meaning.


If we take what Gladwell is saying literally, then using logic precludes holistic thinking and insight. The Blue Team were using ‘Operational Net Assessment’ (ONA), which is not described in full, but includes identifying factors and the relationships between them. If it does suppress other modes of thinking, then what they were trying to do was like driving a car based on an understanding of how all the components work, without an appropriate high-level picture (such as that the brakes make the car slow down). The mistake, then, seems to me to be an inappropriate level of analysis and the wrong logic. They might have done better with some familiarity with Whitehead, for example.

A Crisis in ER

A hospital ER was overloaded. It was recognised that if only they could be more accurate in their handling of potential heart-attack cases, they could cope. In this case a statistical study produced an algorithm that hugely out-performed the doctors.

What screws up doctors when they are trying to predict heart attacks is that they take too much information into account.


What strikes me is that the algorithm only took account of ‘likelihood’ information, i.e. symptoms that were more likely when someone was about to have, was having, or had just had an attack. The doctors were also taking account of information that affected their ‘priors’, i.e. conditions that made it more likely that they would have an attack at some point.

According to standard Bayesianism, the priors ought to be relevant, and so the explanation would seem to be that the doctors were very bad at estimating priors. Yet they had great experience. What could be going on?

If you look at it mathematically, a missing factor from the account is the probability of someone with a misleading symptom (such as heartburn) going to hospital with a possible attack. It is clear from the account that those more at risk would be more likely to go to hospital. This would tend to even out the priors for someone attending hospital, so that only the actual symptoms and their likelihoods matter.

Symbolically, we might think that we want P(Attack | Symptoms, Risk Factors), but actually we need P(Attack | Symptoms, Risk Factors, Attended), where the decision to Attend is influenced by the perception of risk.
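The evening-out effect can be illustrated with a toy calculation. This is only a sketch: all the numbers (attack rates, attendance rates) are hypothetical, chosen so that attendance probability roughly tracks perceived risk, as the argument above supposes.

```python
def p_attack_given_attend(p_attack, p_attend_no_attack, p_attend_attack=0.9):
    """Bayes' rule for P(Attack | Attended) within one risk group.

    p_attack          -- prior P(Attack) for this risk group
    p_attend_no_attack -- P(Attend | no attack), higher for worried high-risk people
    p_attend_attack   -- P(Attend | attack), assumed similar across groups
    """
    numerator = p_attack * p_attend_attack
    denominator = numerator + (1 - p_attack) * p_attend_no_attack
    return numerator / denominator

# Hypothetical numbers: the high-risk group has a 10x higher prior,
# but its members without an attack are also 10x more likely to attend.
high_risk = p_attack_given_attend(p_attack=0.20, p_attend_no_attack=0.18)
low_risk = p_attack_given_attend(p_attack=0.02, p_attend_no_attack=0.018)

print(f"P(Attack | Attended), high-risk group: {high_risk:.3f}")
print(f"P(Attack | Attended), low-risk group:  {low_risk:.3f}")
```

Although the unconditional priors differ by a factor of ten, the posteriors given attendance come out nearly equal (about 0.56 versus 0.51 with these numbers), so within the attending population the risk factors carry little extra information and only the symptom likelihoods matter.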

A Small Miracle

Taking our powers of rapid cognition seriously means that we have to acknowledge the subtle influences that can alter or undermine or bias the products of our unconscious.

Gladwell gives many examples, although some of them don’t seem so subtle – and what about innocent conscious biases that arise because of simplistic use of statistics, for example?

See Also

My notes on: Rationality and uncertainty, generally; Psychology; Kahneman.

Dave Marsay


