Williams et al.’s Hobgoblin

Elanor F. Williams, David Dunning and Justin Kruger, “The Hobgoblin of Consistency: Algorithmic Judgment Strategies Underlie Inflated Self-Assessments of Performance”, Journal of Personality and Social Psychology, 2013, Vol. 104, No. 6, 976–994.

Georgia Tech’s Jack Thomason fumbled the ball near the 30-yard line. Roy Riegels … scooped up the ball, quickly scanned the chaos of players in front of him, and then ran for the daylight of the goal line to try to score a touchdown. There was only one flaw. In the middle of the maelstrom, Riegels had been turned around and was now running for the wrong goal line.

What circumstances steer people toward confident error?

    We propose that people often bring to mind some rule or algorithm, one they follow in a systematic way, when they face a particular class of everyday puzzles. Frequently, that algorithm will be right and, thus, using it consistently is an accurate and useful cue to competence. However, at other times, that rule carries a glitch or omission that steers people, instead, toward a wrong answer.

That is, people are often pragmatic, and this gives them confidence, which may be false.

Will having a rule to cite lead people to be mistakenly confident in wrong conclusions? Relative to those who do not follow any hard and fast rule, will those faithfully following some formula express confidence that may not be accompanied by much in the way of demonstrable competence?

We asserted that the consistency or systematicity with which people approach a class of problems positively predicts how confident they are in their performance, irrespective of the actual quality of that performance. In contrast, people who approach each problem in a variable or ad hoc way would be less confident in their decisions.

    In a phrase, we proposed that it is “rational” errors, more than haphazard ones, that underlie the undue confidence that poor performers imbue in their decisions. By rational, we refer to errors that are produced by the systematic application of a misguided rule rather than by a more haphazard and inconsistent process.

Why might consistent use of erroneous rules lead to confidence? One such plausible mechanism, which we test, is that having a rational rule to follow, whether right or wrong, may preclude people from considering alternative conclusions.

[Koriat (2012) had already shown for general knowledge questions that:] People will be confident in answers associated with consistency across time or people regardless of whether those answers are wrong or right.

This paper focusses on problem solving, and consistency across problems.

Do people assume a set of problems fall in a single “equivalence class” that calls for the application of [the] same rule to reach a solution? … instead of asking whether multiple inputs tend to consistently favour one solution over another, we ask instead whether people rely mostly on one input, a rational rule or algorithm, or whether they switch … as they move from problem to problem.

[In asking experts to make predictions about future world events,] Tetlock [2005] divided his experts into hedgehogs, those who approached their predictions with an overarching grand theory, versus foxes, who tailored their approach to each prediction. He found that hedgehogs were more overconfident in their predictions than were foxes. Tetlock anticipated our hypotheses: Individuals who consistently applied an overarching rule or theory were the most overconfident … . … he found that foxes, who dismiss the value of overarching theories, were more accurate in their predictions than their hedgehog counterparts.

Here, however, we examine cases in which there is a correct algorithm to follow. The best performers will necessarily be hedgehogs consistently applying the right algorithm, rather than foxes using different approaches for each problem. The question then becomes: how much more confident will these “correct” hedgehogs be than those applying an incorrect set of rules? We predict that there may be a difference in confidence, but that it will hardly reflect just how different the performance is between “correct” and “incorrect” hedgehogs.

Participants were not explicitly rewarded for correct answers. It is not clear how much time they had.

The tasks were

  • Variations on the Wason selection task, a logical task that many people get wrong.
  • A test of ‘folk physics’, which many get wrong.
  • A multiple-choice mathematical question (in financial clothing), where again many make a simple mistake.
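To make the first task concrete, here is a minimal sketch of the classic Wason selection task. The card faces and the vowel/even-number rule are the standard textbook version, assumed here rather than taken from the paper; the cards that must be flipped are exactly those whose hidden face could falsify the rule:

```python
# Classic Wason selection task (textbook version, assumed):
# each card has a letter on one side and a digit on the other.
# Visible faces: A, K, 4, 7. Rule under test:
# "If a card shows a vowel, the other side shows an even number."

VISIBLE = ['A', 'K', '4', '7']

def is_vowel(face):
    return face in 'AEIOU'

def is_even_digit(face):
    return face.isdigit() and int(face) % 2 == 0

def rule_holds(letter, number):
    # The conditional "vowel -> even" is only violated by (vowel, odd).
    return (not is_vowel(letter)) or is_even_digit(number)

def must_flip(face):
    """A card must be flipped iff some possible hidden face could falsify the rule."""
    if face.isdigit():
        # Hidden side is a letter.
        return any(not rule_holds(ch, face) for ch in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ')
    # Hidden side is a digit.
    return any(not rule_holds(face, str(d)) for d in range(10))

print([f for f in VISIBLE if must_flip(f)])  # ['A', '7']
```

The common mistake is to flip A and 4 (confirmation) rather than A and 7: the 4 card cannot falsify the rule, while an odd number behind the A, or a vowel behind the 7, can.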

Results were analysed:

  • To see how confidence depended on performance, with the expectation being that poor performers would be significantly over-confident, and good performers would be slightly under-confident.
  • To see if those following a rule (right or wrong) were more confident.
  • To see if, for poor performers, consistency was associated with poorer performance.
  • To see how confidence would compare between those who were consistent and correct and those who were consistent and wrong.
  • To see if people were aware that they were following a rule, and if they could identify it from a list, or articulate it.
  • To see if confidence correlated with the number of alternatives considered.

Participants rated their confidence by comparison with the other participants, and how many questions they thought they had got right. Previous results were replicated. Also:

In short, high consistency did not indicate good performance as much as it indicated extreme performance, both good and bad.

The more consistent people were in their judgment, the more confident they were in their responses.

In short, participants who were 100% right in their judgments proved not to be that much more confident in their performance than those who were 100% wrong in their answers.

59% of participants stated they were following a specific rule … . Further, those who stated they were following a rule showed significantly more consistency … . Not surprisingly, those who stated they followed a rule rated their performance more positively on all self-perception measures (except for average confidence) relative to those who stated they did not. These more positive self-perceptions arose, even though, at least in one measure (raw test score), rule-following participants objectively did worse than their less rule-bound peers.

A mediational analysis showed that it was the consistency of their approach that linked citing a rule to heightened (and misguided) self-perceptions. … In short, those who explicitly cited a rule were, in fact, using a more consistent approach in their decision making, one that led them to levels of confidence not necessarily matched by objective performance.
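For readers unfamiliar with mediational analysis, a toy Baron–Kenny-style check can illustrate the idea. Everything below is simulated for the sketch (variable names and effect sizes are invented, not the paper’s data): citing a rule (X) raises consistency (M), which raises confidence (Y), so the direct effect of X on Y shrinks once the mediator M is controlled for.

```python
# Toy mediation sketch with simulated data (not the paper's):
# rule_cited (X) -> consistency (M) -> confidence (Y).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
rule_cited = rng.integers(0, 2, n).astype(float)         # X: cited a rule?
consistency = 0.8 * rule_cited + rng.normal(0, 1, n)     # M: judgmental consistency
confidence = 0.7 * consistency + rng.normal(0, 1, n)     # Y: self-rated confidence

def slopes(y, predictors):
    """OLS coefficients of y on the given predictors (intercept included)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = slopes(confidence, [rule_cited])[0]                # c: total effect of X on Y
direct = slopes(confidence, [rule_cited, consistency])[0]  # c': effect of X controlling for M

# Evidence of mediation: the direct effect is much smaller than the total effect.
print(f"total={total:.2f}, direct={direct:.2f}")
```

Because the simulated effect of citing a rule on confidence runs entirely through consistency, the direct coefficient collapses toward zero once consistency enters the regression, which is the signature pattern the paper reports.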

… the rule-following groups had consistently more positive views of their performances than did the nonfollowing group. The correct-rule group rated themselves higher on all five self-evaluation measures. The wrong-rule group did the same on three of five measures including the composite measure, despite the fact that objective performance was lower for the wrong-rule group than it was for the no-rule group.

… the manipulation used in Study 5, which gave participants a chance to adopt a rule if they wished to, had a direct impact on judgmental consistency, which led to a rise in how highly participants evaluated their performance. … even though … they were doing worse than the control group.

… better performing participants reported considering fewer alternatives than poorer performing participants. [Conversely:] To the extent that participants reported considering a greater number of alternatives, they were significantly less consistent in their approach and more negative in their self-evaluations.

… consideration of alternatives [is a partial] mediator of the link between consistency and self-evaluation.

General discussion

… people are confident in their conclusions to the extent that those conclusions are reached via a systematic algorithm or procedure that is used faithfully across a class of decision problems.

It is conjectured that adopting a rule increases ‘fluency’ and/or familiarity.

It may be profitable for future research to discuss further the important distinction between being uninformed and misinformed.

Concluding Remarks

In sum, our data suggest that a wise consistency can be the fount of accurate judgment, but they also suggest that a foolish consistency, as Ralph Waldo Emerson once observed, may serve as a hobgoblin for little minds. Across our studies, it was also a foolish consistency that gave metaphorical swelled heads to some of those little minds as they evaluated the quality of their work.


Decisions are often made in settings where what is acceptable is socially or organisationally set, and a part of acceptability is often having some rule-like justification. While this was not an explicit factor here, the participants may nonetheless have seen ad-hoc decision-making as intrinsically undesirable, and been more confident if they had identified a (to them) plausible rule. Moreover, the participants may – rightly – have viewed the problems posed as puzzles, with an objectively correct answer determined according to some objectively correct rule, which they wished to identify.

A notable feature here was that participants were allowed to continue even when they were entirely wrong. There was no feedback on their performance at all: a most unusual situation for most of us. It seems to me possible that participants were simply being pragmatic, in so far as they applied a model (rule) that gave rise to no (known) problems.

Consideration by participants of concrete alternative solutions to some puzzles was studied. More to the point, one would wish to study consideration of alternative rules. In crisis management, for example, it is common practice in exercises to tell participants that something bad has happened and to get them to look for alternative solutions (policies, structures, rules). By analogy, one might have provided participants with some (possibly fake) feedback, and seen whether they changed their rules. This would distinguish between over-confidence in the current rules and under-imagination in developing alternatives.

Dave Marsay

