Are more intelligent people more biased?

It has been claimed that:

U.S. intelligence agents may be more prone to irrational inconsistencies in decision making compared to college students and post-college adults … .

This is scary, if unsurprising to many. Perhaps more surprisingly:

Participants who had graduated college seemed to occupy a middle ground between college students and the intelligence agents, suggesting that people with more “advanced” reasoning skills are also more likely to show reasoning biases.

It seems as if there is some serious mis-education in the US. But what is it?

The above conclusions are based on responses to the following two questions:

1. The U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Do you: (a) save 200 people for sure, or (b) choose the option with a 1/3 probability that 600 will be saved and a 2/3 probability that no one will be saved?

2. In the same scenario, do you (a) pick the option where 400 will surely die, or instead (b) the option with a 2/3 probability that all 600 will die and a 1/3 probability that no one dies?

You might like to think about your answers to the above, before reading on.

.

.

.

.

.

The paper claims that:

Notably, the different scenarios resulted in the same potential outcomes — the first option in both scenarios, for example, has a net result of saving 200 people and losing 400.

Is this what you thought? You might like to re-read the questions and reconsider your answer, before reading on.

.

.

.

.

.

The questions may appear to contain statements of fact that we are entitled to treat as ‘given’. But in real-life situations we should treat such questions as utterances, and use the appropriate logics. This may give the same result as taking them at face value – or it may not.

It is (sadly) probably true that if this were a UK school examination question then the appropriate logic would be (1) to treat the statements ‘at face value’ and (2) to assume that if 200 people will be saved ‘for sure’ then exactly 200 people will be saved, no more. On the other hand, this is just the kind of question that I ask mathematics graduates, to check that they have an adequate understanding of the issues before advising decision-takers. In the questions as set, the (b) options are the same, but (1a) is preferable to (2a), unless one is in the very rare situation of knowing exactly how many will die: ‘save 200 for sure’ guarantees at least 200 survivors, whereas ‘400 will surely die’ allows at most 200. With this interpretation, the more education and the more experience, the better the decisions – even in the US 😉
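To make the contrast explicit, here is a minimal sketch in Python. The face-value reading is the paper’s; the ‘at least’ reading is my interpretation of the wording, not anything stated in the study.

```python
TOTAL = 600  # lives at stake in both questions

# Face-value ('exam') reading: the numbers are exact. Represent each option
# by the (min, max) range of possible survivors.
exact_1a = (200, 200)                  # exactly 200 saved
exact_2a = (TOTAL - 400, TOTAL - 400)  # exactly 400 die, so exactly 200 survive
print(exact_1a == exact_2a)            # True: the sure options are equivalent

# 'At least' reading: 'save 200 for sure' bounds survivors from below, while
# '400 will surely die' bounds deaths from below, i.e. survivors from above.
at_least_1a = (200, TOTAL)       # at least 200 saved, possibly all 600
at_least_2a = (0, TOTAL - 400)   # at least 400 die, so at most 200 survive

# The worst case of (1a) is at least as good as the best case of (2a), so
# (1a) weakly dominates (2a): the sure options are no longer equivalent.
print(at_least_1a[0] >= at_least_2a[1])  # True
```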

It would be interesting to repeat the experiment with less ambiguous wording. Meanwhile, I hope that intelligence agents are not being re-educated. Or have I missed something?

Also

Kahneman’s Thinking, fast and slow has a similar example, in which we are given ‘exact scientific estimates’ of probable outcomes, avoiding the above ambiguity. This might be a good candidate experimental question.

Kahneman’s question is not without its own subtleties, though. It concerns the efficacy of ‘programs to combat disease’. It seems to me that if I were told that a vaccine would save 1/3 of the lives, I would suppose that it had been widely tested, and that the ‘scientific’ estimate was well founded. On the other hand, if I were told that there was a 2/3 chance of the vaccine being ineffective, I would suppose that it hadn’t been tested adequately, and that the ‘scientific’ estimate was really just an informed guess.

In the latter case, I would expect the estimate of efficacy to be revised in the light of new information. It could even be that while some scientist has made an honest estimate based on the information that they have, some other scientist (or technician) already knows that the vaccine is ineffective. A program based on such a vaccine would be more complicated and ‘risky’ than one based on a well-founded estimate, and so I would be reluctant to recommend it. (Ideally, I would want to know a lot more about how the estimates were arrived at, but if pressed for a quick decision, this is what I would do.)
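To illustrate (this is my own construction, not Kahneman’s): suppose the ‘true’ efficacy is drawn from a Beta prior with mean 1/3 – tight when the estimate is well founded, diffuse when it is an informed guess. The particular Beta parameters and the 5% ‘essentially useless’ threshold below are assumptions chosen only to make the contrast visible.

```python
import random

def simulate(alpha, beta, trials=100_000):
    """Draw a 'true' efficacy p from a Beta(alpha, beta) prior, then an
    outcome: with probability p all 600 are saved, otherwise none are."""
    saved_total = 0
    near_useless = 0
    for _ in range(trials):
        p = random.betavariate(alpha, beta)  # the program's true efficacy
        if random.random() < p:
            saved_total += 600
        if p < 0.05:                         # efficacy essentially nil
            near_useless += 1
    return saved_total / trials, near_useless / trials

# Well-founded estimate: Beta(100, 200) has mean 1/3, tightly concentrated.
print(simulate(100, 200))  # roughly (200, 0.00): 200 saved expected, never useless
# Informed guess: Beta(1, 2) has the same mean 1/3 but is very diffuse.
print(simulate(1, 2))      # roughly (200, 0.10): same expectation, real chance p ~ 0
```

Both priors give the same expected number saved, but only the diffuse one leaves a serious chance that the program turns out to be useless, which is the extra ‘risk’ that would make me reluctant to recommend it.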

Could the framing make a difference? In one case, we are told that ‘scientifically’ 200 people will be saved. But scientific conclusions always depend on assumptions, so really one should say ‘if …. then 200 will be saved’. My experience is that when the assumptions fail the outcome should not be expected, so saving 200 is the best that should be expected, not a guarantee. In the other case we are told that ‘400 will die’. This seems to me a very odd thing to say: from a logical perspective one would like to understand the circumstances in which someone would put it like this. I would be suspicious, and might well (‘irrationally’) avoid a program described in that way.

Addenda

The example also shows a common failing: assuming that the utility is proportional to lives lost or saved. Suppose that when we are told that lives will be ‘saved’ we assume that we will get credit. Then we might take the utility from saving lives to be the number of lives saved, but with the ‘kudos’ capped at 250 lives saved. In this case it is rational to save 200 ‘for sure’, as the expected credit from taking a risk is very much lower. On the other hand, if we are told that 400 lives will be ‘lost’ we might assume that we will be blamed, and take the utility to be minus the number of lives lost, floored at -10. In this case it is rational to take a risk, as we then have some chance of avoiding the worst-case utility, whereas if we went for the sure option we would be certain to suffer the worst case.
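A short calculation bears this out. The sketch below, in Python, uses the capped utilities suggested above; the caps of 250 ‘kudos’ and -10 ‘blame’ are of course just illustrative.

```python
def expected_utility(lottery, utility):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in lottery)

# 'Saved' frame: utility is the number of lives saved, capped at 250 'kudos'.
def u_saved(saved):
    return min(saved, 250)

print(expected_utility([(1.0, 200)], u_saved))            # 200.0: sure option
print(expected_utility([(1/3, 600), (2/3, 0)], u_saved))  # ~83.3: the gamble

# 'Lost' frame: utility is minus the number of lives lost, floored at -10 'blame'.
def u_lost(lost):
    return max(-lost, -10)

print(expected_utility([(1.0, 400)], u_lost))             # -10.0: certain worst case
print(expected_utility([(2/3, 600), (1/3, 0)], u_lost))   # ~-6.7: the gamble wins
```

With these caps, the sure option wins in the ‘saved’ frame and the gamble wins in the ‘lost’ frame, reproducing the supposedly inconsistent pattern of answers.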

These kinds of asymmetric utilities may be just the kind that experts experience. More study required?

 

Dave Marsay



2 Responses to Are more intelligent people more biased?

  1. OVVO says:

    Hi Dave, per the Linkedin discussion, here is a paper on the importance of reference points and domain specific utility functions:

  2. Blue Aurora says:

    Out of curiosity, David Marsay, have you read the original pieces of published scholarly research by Daniel Kahneman and Amos Tversky?

    Although I did enjoy Daniel Kahneman’s book, he’s merely reiterating, for a more general audience, the research he did earlier.

    But as for “intelligent people” being more prone to bias…well, perhaps this might be a trite finding to some, but it doesn’t surprise me. Overconfidence in one’s capabilities and cognitive faults like confirmatory bias explain this quite well.
