Complexity, uncertainty and heuristics


Typical heuristics are good for coping with typical cases, which may or may not be complex or uncertain. Considering when various heuristics do or do not work can be helpful in understanding complexity and uncertainty, and how to cope with them.

The focus is on heuristics associated with science.


Peirce thought that: “Doubt, like belief, requires justification. It arises from confrontation with some specific recalcitrant matter of fact (which Dewey called a ‘situation’), which unsettles our belief in some specific proposition”. This is clearly too complacent in the face of complexity and uncertainty. His view of science, that “A theory that proves itself more successful than its rivals in predicting and controlling our world is said to be nearer the truth”, fails to distinguish between short-run and long-run truths. It also cannot cope with an uncertain world, where even probabilistic prediction may simply not be possible.

Peirce’s pragmaticism supposes that “What recommends the scientific method of inquiry above all others is that it is deliberately designed to arrive, eventually, at the ultimately most secure beliefs, upon which the most successful practices can eventually be based.” Now, as Keynes observed, some things are ephemeral and hence, according to cybernetics, not useful for control. It might seem, then, that we should focus on the stable properties behind the ‘secure beliefs’. But many things are neither so ephemeral as to be unobservable nor so stable as to be ‘secure’. In Whitehead’s view, ‘the level of the game’ is just this intermediate level which science ignores, as either being too changeable to be represented by a law or too structured to be aggregated up statistically. Worse, although such things do not actually satisfy the assumptions of pragmatism, they may endure for long enough to be mistaken for laws, as in the great moderation.

The assumptions of pragmatism tend not to be true across epochs but – in practice – are often more nearly true within an epoch, provided that one has enough representative data about the epoch.

Occam’s razor

Occam’s razor, or ‘the law of parsimony’, recommends making the fewest possible assumptions: one is parsimonious with one’s assumptions. In particular, one should not assume less complexity or more certainty than might be appropriate. Thus one should always suppose that a situation has the maximum possible complexity and uncertainty, unless it can be shown otherwise for the case at hand. Whitehead takes this to an extreme.

Keep it simple, stupid (KISS)

Unfortunately, Occam’s razor is often taken to mean that one should choose the simplest model. Thus, given a general model with a parameter p that fits the data for a range of values of p, where p=0 yields a much simpler model, a widespread practice would be to use the simpler model. This yields the opposite of Occam’s razor, which would report the general model: fixing p=0 is an extra assumption, not a saving.
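The contrast can be sketched numerically. In the toy example below (the data, the parameter p and all values are invented for illustration), the restricted model fixes p=0 while the general model estimates it. Because the models are nested, least squares guarantees the general model fits at least as well, so the false razor’s preference for the restricted model is a choice of assumption, not of fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data simulated from a 'general' model y = a + p*x with a small but real p.
x = np.linspace(0, 10, 50)
y = 2.0 + 0.15 * x + rng.normal(0, 1.0, size=x.size)

# General model: estimate both intercept and slope p.
general = np.polyfit(x, y, 1)            # returns [p, a]
# Restricted model: fix p = 0; the least-squares constant is the mean.
restricted = np.mean(y)

rss_general = np.sum((y - np.polyval(general, x)) ** 2)
rss_restricted = np.sum((y - restricted) ** 2)

# The 'false razor' (KISS) keeps the restricted model whenever it fits
# tolerably well; Occam's razor, as read above, reports the general model
# because the data do not justify the extra assumption that p = 0.
print(rss_general, rss_restricted)
```

Since the restricted model is a special case of the general one, rss_general can never exceed rss_restricted; the question is only whether the difference justifies the assumption p=0.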

The false razor tends to be more true within an epoch.


For the great moderation, the false razor would recommend false induction: what has always happened will always happen, and what has never happened will never happen. For example, once the great moderation had lasted a while, it would suggest that ‘logically’ it should last forever. Certainly, the burden of proof would be on those who thought otherwise. Induction tends to be true within an epoch, since a significant change typically indicates a change of epoch.
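A minimal simulation of this kind of false induction (the series, epochs and numbers are all invented for illustration): during a long calm spell, ‘no move larger than anything yet seen will ever occur’ looks like a law, until the epoch changes.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 'great moderation': 200 periods of small, well-behaved shocks,
# followed by a change of epoch in which volatility jumps.
calm = rng.normal(0, 0.5, size=200)
crisis = rng.normal(0, 4.0, size=20)

# False induction from within the old epoch: 'what has never happened
# will never happen' -- no move beyond the largest yet seen is expected.
threshold = np.max(np.abs(calm))

# The new epoch breaches the old 'law' almost immediately.
breaches = np.sum(np.abs(crisis) > threshold)
print(threshold, breaches)
```

The extrapolation was sound as a statement about the calm epoch; the error lies in treating it as an unconditional prediction.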


Human beings have a psychological yearning for certain predictions or, failing that, for ‘objective’ probability distributions. These are impossible where there is genuine uncertainty, as in many complex situations, so there can be no reasonable heuristic for making them. But, consistent with Whitehead and Occam’s razor, one can make extrapolations and one can often make anticipations.

Wherever a heuristic gives a reasonable prediction within an epoch, it is reasonable to note it as a conditional prediction or as an extrapolation. Thus if X has never happened and one has lots of data, saying that X will never happen is a genuine extrapolation even if it may be a poor prediction (e.g. in a conflict situation). To make a reasoned anticipation one has to consider the ‘higher’ epoch which is likely to be sustained even when the detailed level changes. For example, you might think that the rules of football might change soon, marking a change in epoch, but some things (such as the shape of the ball) might be ‘secure’. You might then anticipate what the new rules might be, without necessarily being able to associate a probability distribution.

In some cases one might know when the epoch was going to end. Otherwise one might assess:

  • The robustness of the epoch to fat-tailed events.
  • Which individuals or potential collaborations might have both an interest in ending it and the capability to end it or make it more vulnerable to fat-tailed events. 
  • Which existing self-referential cycles have the potential to ‘explode’, changing the epoch.
  • Which self-referential cycles have the potential to come into being, changing the epoch.
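The last two checks can be caricatured in a few lines of code (the dynamics and numbers are illustrative only, not a model of any real system): a self-referential cycle, such as prices feeding expectations feeding prices, dissipates perturbations while its loop gain stays below 1, and ‘explodes’ – a candidate change of epoch – once adaptation pushes the gain above 1.

```python
def run_cycle(gain: float, steps: int = 50, shock: float = 1.0) -> float:
    """Iterate x -> gain * x from an initial shock; return the final level."""
    x = shock
    for _ in range(steps):
        x = gain * x
    return x

damped = run_cycle(0.9)     # perturbations dissipate: the epoch endures
explosive = run_cycle(1.1)  # perturbations build: the epoch is vulnerable

print(damped, explosive)
```

The point of the caricature is that a small change in the feedback (0.9 versus 1.1) makes the qualitative difference, which is why assessing such cycles matters more than extrapolating the current level.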

There are also other factors (not well understood) that influence the tendency of changes to propagate and build through a network, rather than dissipate. A new set of heuristics is needed. Watch this space?


It would seem rash not to consider the possibility that radical complexity and uncertainty are factors in any large-scale system that involves a degree of adapting, learning or evolving. Science, engineering, managerialism and common sense tend to focus on things that are ‘true’ in the short term and treat them as if they were immutable, unless informed by longer-term experience. We can typically ‘fix’ these heuristics by noting that they are really only extrapolations and then either considering the wider context or taking account of the uncertainties which arise from not explicitly considering the wider context.

Thus, while we might go along with Occam’s razor and pragmatism to the extent of seeking the simplest possible models for things, we should at the same time recognize the inevitable limits to such models and allow for the inevitable uncertainties about what those limits may be.

See Also

Reasoning in complex, dynamic worlds, How much complexity and uncertainty?


Dave Marsay


About Dave Marsay
Mathematician with an interest in 'good' reasoning.

