Complexity Demystified: A guide for practitioners

Patrick Beautement and Christine Broenner, Complexity Demystified: A guide for practitioners, Triarchy Press, 2011.

First Impressions

  • The title comes close to ‘complexity made simple’, which would be absurd. A favourable interpretation (after Einstein) would be ‘complexity made as straightforward as possible, but no more.’
  • The references look good.
  • The illustrations look appropriate, of suitable quality, quantity and relevance.

Skimming through, I gained a good impression of who the book was for and what it had to offer them. This was borne out (below).

Summary

Who is it for?

Complexity is here seen from the viewpoint of a ‘coal face’ practitioner who:

  • Is dealing with problems that are not amenable to a conventional managerial approach (e.g. set targets, monitor progress against targets, …).
  • Has had some success and shown some insight and aptitude.
  • Is being thwarted by stakeholders (e.g., donors, management) who hold a conventional management view and use conventional ‘tools’, such as accountability against pre-agreed targets.

What is complexity?

Complexity is characterised as a situation where:

  • One can identify potential behaviours and value them, mostly in advance.
  • Unlike simpler situations, one cannot predict what the priorities will be, or when: a plan that is a programme will fail.
  • One can react to behaviours by suppressing negative behaviours and supporting positive ones: a plan is a valuation, activity is adaptation.

Complexity leads to uncertainty.

Details

Complexity science principles, concepts and techniques

The first two context-setting sections were well written and informative. This one is about academic theory, which we have been warned not to expect too much of; such theory is not [yet?] ‘real-world ready’ – ready to be ‘applied to’ real complex situations – but it does supply some useful conceptual tools.

The approach

In effect, commonplace ‘pragmatism’ is not adequate, so the notion of pragmatism is adapted. Instead of persisting with one’s view for as long as it seems adequate, one seeks to use a broad range of cognitive tools to check one’s understanding and look for alternatives, particularly looking out for any unanticipated changes as soon as they occur.

The book refers to a ‘community of practice’, which suggests that there is already a community that has identified and is grappling with the problems, but needs some extra hints and tips. The approach seems down to earth and ‘pragmatic’, not challenging ideologies, cultures or other deeply held values.

Case Studies

These covered a good range, with those where the authors had been more closely involved being the better for it. I found the one on Ludlow particularly insightful, chiming with my own experiences. I am tempted to blog separately on the ‘fuel protests in the UK in 2000’, as I was engaged with some of the team involved at the time on related issues. But some of the issues raised here seem quite generally important.

Interesting points

  • Carl Sagan is cited to the effect that the left brain deals with detail, the right with context – the ‘bigger picture’. In my opinion many organisations focus too readily on the short term, to the exclusion of the long term, and if they do focus on the long term they tend to do it ‘by the clock’ with no sense of ‘as required’. Balancing long-term and short-term needs can be the most challenging aspect of interventions.
  • ECCS 09 is made much of. I can vouch for the insightful nature of the practitioners’ workshop that the authors led.
  • I have worked with Patrick, so had prior sight of some of the illustrations. The account is recognizable, but all the better for the insights of ECCS 09 and – possibly – not having to fit with the prejudices of some unsympathetic stakeholders. In a sense, this is the book that we have been lacking.

Related work

Management

  • Leadership agility: A business imperative for a VUCA world.
    Takes a similar view about complexity and how to work with it.
  • The Cynefin Framework.
    Positions complexity between complicated (familiar management techniques work) and chaos (act first). Advocates ‘probe-sense-respond’, which reflects some of the same views as ‘Complexity Demystified’. (The authors have discussed the issues.)

Conclusions

The book considers all types of complexity, revealing that what is required is a more thoughtful approach to pragmatism than is the norm for familiar situations, together with a range of thought-provoking tools, the practical expediency of some of which I can vouch for. As such it provides 259 pages of good guidance. If it also came to be a common source across many practitioner domains then it could also facilitate cross-domain discussions on complex topics, something that I feel would be most useful. (Currently some excellent practice is being obscured by the use of ‘silo’ languages and tools, inhibiting collaboration and cross-cultural learning.)

The book seems to me to be strongest in giving guidance to practitioners who are taking, or are constrained to take, a phenomenological approach: seeking to make sense of situations before reacting. This type of approach has been the focus of western academic research and much practice for the last few decades, and in some quarters the notion that one might act without being able to justify one’s actions would be anathema. The book gives some new tools which, it is hoped, will be useful in justifying action, but I have a concern that some situations will still be novel and that, to be effective, practitioners may need to act outside the currently accepted concepts, whatever they are. I would have liked the book to be more explicit about its scope since:

  • Some practitioners can actually cope quite well with such supposedly chaotic situations. Currently, observers tend not to appreciate the extreme complexity of others’ situations, and so under-value their achievements. This is unfortunate, as, for example:
    • Bleeding edge practitioners might find themselves stymied by managers and other stakeholders who have too limited a concept of ‘accountability’.
    • Many others could learn from such practitioners, or employ their insights.
  • Without an appreciation of the complexity/chaos boundary, practitioners may take on tasks that are too difficult for them or the tools at their disposal, or where they may lose stakeholder engagement through having different notions of what is ‘appropriately pragmatic’.
  • An organisation that had some appreciation of the boundary could facilitate mentoring etc.
  • We could start to identify and develop tools with a broader applicability.

In fact, some of the passages in the book would, I believe, be helpful even in the ‘chaos’ situation. If we had a clearer ‘map’ the guidance on relatively straightforward complexity could be simplified and the key material for that complexity which threatens chaos could be made more of. My attempt at drawing such a distinction is at https://djmarsay.wordpress.com/notes/about-these-posts/work-in-progress/complexity/ .

In practice, novelty is more often found in long-term factors, not least because if we do not prepare for novelty sufficiently in advance, we will be unable to react effectively. While I would never wish to advocate too clean a separation between practice and policy, or between short- and long-term considerations, we can perhaps take a leaf out of the book and venture some guidance, not to be taken too rigidly. If conventional pragmatism is appropriate at the immediate ‘coal face’ in the short run, then this book is a guide for those practitioners who are taking a step back and considering complex medium-term issues. It would usefully inform policy makers in considering the long run, but it does not directly address the full complexities which they face, which are often inherently mysterious when seen from a narrow phenomenological stance. It does not provide guidance tailored for policy makers, nor does it give practitioners a view of policy issues. But it could provide a much-needed contribution towards spanning what can be a difficult practice / policy divide.

See Also

Some mathematics of complexity, Reasoning in a complex dynamic world

Dave Marsay


Reasoning and natural selection

Cosmides, L. & Tooby, J. (1991). Reasoning and natural selection. Encyclopedia of Human Biology, vol. 6. San Diego: Academic Press.

Summary

Argues that logical reasoning, by which it seems to mean classical induction and symbolic reasoning, is not favoured by evolution; instead one has reasoning particular to the social context. It argues that in typical situations it is either not possible or not practical to consider ‘all hypotheses’, that the generation of hypotheses to consider is problematic, and that this is typically done using implicit specific theories. Has a discussion of the ‘green and blue cabs’ example (a worked version is sketched below).
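For concreteness, here is a worked version of the standard cabs example (my sketch, using the usual Tversky and Kahneman figures, which may differ from the paper’s variant): 85% of cabs are Green, 15% Blue, and a witness who is 80% reliable reports a Blue cab.

    # Bayes' rule for the standard 'green and blue cabs' base-rate example.
    # Figures are the usual ones (85/15 base rate, 80% reliable witness),
    # not necessarily those discussed in the paper.
    p_blue, p_green = 0.15, 0.85      # base rates
    p_correct = 0.80                  # witness reliability

    # P(witness says 'blue') = true positives + false positives
    p_says_blue = p_blue * p_correct + p_green * (1 - p_correct)

    # P(cab is blue | witness says 'blue')
    p_blue_given_report = p_blue * p_correct / p_says_blue
    print(round(p_blue_given_report, 2))  # 0.41: still more likely Green

The intuitive answer (around 80%) ignores the base rate: generating and weighing the relevant hypotheses is exactly where unaided reasoning struggles.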

Comment

In real situations one cannot safely assume induction, and one lacks the ‘facts’ needed to perform symbolic reasoning. Logically, then, empirical reasoning would seem more suitable. Keynes, for example, considers the impact of not being able to consider ‘all hypotheses’.

While the case against classical rationality seems sound, the argument leaves the way open for an alternative rationality, e.g. one based on Whitehead and Keynes.

See Also

Later work

Better than rational, uncertainty aversion.

Other

Reasoning, mathematics.

Dave Marsay

Better than Rational

Cosmides, L. & Tooby, J. (1994). Better than rational: Evolutionary psychology and the invisible hand. American Economic Review, 84 (2), 327-332.

Summary

[Mainstream psychologists and behaviourists have studied] “biases” and “fallacies” – many of which are turning out to be experimental artifacts or misinterpretations (see G. Gigerenzer, 1991). [Gigerenzer, G. “How to Make Cognitive Illusions Disappear: Beyond Heuristics and Biases,” in W. Stroebe and M. Hewstone, eds., European Review of Social Psychology, Vol. 2. Chichester, U.K.: Wiley, 1991, pp. 83-115.]

… 

One point is particularly important for economists to appreciate: it can be demonstrated that “rational” decision-making methods (i.e., the usual methods drawn from logic, mathematics, and probability theory) are computationally very weak: incapable of solving the natural adaptive problems our ancestors had to solve reliably in order to reproduce (e.g., Cosmides and Tooby, 1987; Tooby and Cosmides, 1992a; Steven Pinker, 1994).

…  sharing rules [should be] appealing in conditions of high variance, and unappealing when resource accrual is a matter of effort rather than of luck (Cosmides and Tooby, 1992).

Comment

They rightly criticise ‘some methods’ drawn from mathematics etc., but some have interpreted this as meaning that “logic, mathematics, and probability theory are … incapable of solving the natural adaptive problems our ancestors had to solve reliably in order to reproduce”. This leads them to overlook relevant theories, such as Whitehead’s and Keynes’.

See Also

Relevant mathematics, Avoiding unknown probabilities, Kahneman on biases

NOTE

This has been copied to my bibliography section under ‘rationality and uncertainty’, ‘more …’, where it has more links. Please comment there.

Dave Marsay

When and why do people avoid unknown probabilities in decisions under uncertainty?

Rode, C., Cosmides, L., Hell, W., & Tooby, J. (1999). When and why do people avoid unknown probabilities in decisions under uncertainty? Testing some predictions from optimal foraging theory. Cognition, 72, 269-304.

Summary

Sets up a foraging ‘system’ to explore decision-making.

In this view, the system is not designed merely to maximize expected utility. It is designed to minimize the probability of an outcome that fails to satisfy one’s need, as per Keynes.

The people who participated in our experiments executed complex decision strategies, ones that take into account three parameters – mean, variance, and need level – rather than just the single parameter (mean) emphasized by some normative theories. Their intuitions were so on target that their decisions very closely tracked the actual probabilities of each box satisfying their needs. This was true even though explicitly deriving these probabilities is a nontrivial mathematical calculation.
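Deriving the need-satisfaction probabilities is less forbidding under a normal approximation; a minimal sketch (my construction, not the paper’s actual task):

    # Need-based choice: pick the option most likely to meet a need,
    # assuming each option's payoff is roughly normal with known mean and
    # variance. (Illustrative numbers; the paper's boxes are discrete.)
    from math import erf, sqrt

    def p_meets_need(mean, var, need):
        """P(payoff >= need) under a normal approximation."""
        z = (need - mean) / sqrt(var)
        return 0.5 * (1 - erf(z / sqrt(2)))  # 1 - Phi(z)

    need = 10
    boxes = {'A': (12, 1), 'B': (13, 36)}  # (mean, variance)
    for name, (m, v) in boxes.items():
        print(name, round(p_meets_need(m, v, need), 3))
    # A: 0.977, B: 0.691 - B has the higher mean, yet A is far more
    # likely to satisfy the need.

So a need-minded chooser and an expected-value maximiser can rank the same options in opposite orders, which is the paper’s point.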

Comment

This gives a foraging setting in which rather than gathering the most food in the long run, the aim is – firstly – to have enough to survive in the short run, and then to build up a surplus in the long run. It rightly notes that this calls for a different approach. Confusingly (to me) it describes the utility approach as ‘logical’ and ‘mathematical’, from which some seem to infer that trying to maximize sustainability is not.

  • A strategy that seeks to maximize expected return, or return ‘in the long run’, may not be appropriate when there is short-term jeopardy. (As Keynes said, ‘In the long run we are all dead.’)
  • It is not logical or mathematical to use a theory whose assumptions / axioms are known to be false, although (according to some definitions) it may be ‘rational’. If one is not certain that the assumptions / axioms are ‘true’, it is not logical or mathematical to ignore the uncertainty.
  • Logic and mathematics such as Keynes’ can cope with short-term decisions, or with situations where a balance is needed between short- and long-run issues.
  • Logically, typical foraging tasks are best met by a population of foragers with different ‘attitudes to risk’. That is, most foragers may take a short-term view but some need to take a long-term view (to find new food sources). This relies on sharing when the explorers come back empty-handed. (A toy simulation is sketched below.)
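A toy simulation of such a mixed population (my construction, with made-up numbers):

    # Mixed forager population: nine 'exploiters' reliably gather from
    # known sources; one 'explorer' usually returns empty-handed but
    # occasionally finds a rich new source. Pooling the haul keeps the
    # explorer viable. (All figures are illustrative.)
    import random
    random.seed(1)

    N_EXPLOIT, DAYS, NEED = 9, 1000, 1.0  # need: food per head per day
    total = 0.0
    for _ in range(DAYS):
        haul = sum(random.gauss(1.2, 0.2) for _ in range(N_EXPLOIT))  # steady yields
        if random.random() < 0.05:  # the explorer's rare big find
            haul += 20.0
        total += haul
    print(total / (DAYS * (N_EXPLOIT + 1)) >= NEED)  # True: the mix is viable on average

Without sharing, the explorer would starve on most days; with it, the group retains its option on new food sources.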

One should also note that the original paper uses variance in a stereotyped way that is not always appropriate, as emphasised by Taleb, who also discusses the general problem of ‘resilience to tail risk’.

See Also

Paradoxes, Mathematics, Allen, Better than Rational.

Dave Marsay

Finding the true meaning of risk

Interpretation as Risk

The New Scientist has an important article by Nicolas Bouleau: Issue 2818, 25 June 2011, p. 30.

Whether it’s a shape, a structure or a work of art, once you see a meaning in something, there’s no going back. The reasons why run deep.

… the assertion that a particular phenomenon is “random” is neither trivial nor obvious and, except in textbook cases, is the subject of debate.

This … irreversibility of interpretation … holds quite generally: once you perceive something in the world – a shape, structure or a meaning – you can’t go back. …

All this is crucial to truly understanding risk. The belief some people have that risks can be objectively measured means expunging their interpretative aspect, even though that aspect is an essential part of understanding risk. From the epistemic point of view, it is the meaning of the event that determines the risk. The probabilistic representation … is too simplistic.

Usually there isn’t enough information for such a model: we do not know the probabilities of rare events occurring since there will never be enough data, we do not have a full description of what can happen, and we do not know how to calculate the cost of that event occurring.

….

The bottom line – quite literally, sometimes – is that to really understand risk, we have no choice but to take account of the way people interpret events.

Comments

The conclusion seems sound, but:

  • I am not sure that it is useful to imagine that anything really ‘is’ random or meaningful: these are in the eye of the beholder.
  • When abroad I often see things that appear random to me but which I believe to be meaningful to the locals.
  • The article is full of disparaging remarks about how ‘people’ make sense of things, without considering whether this is cultural or biological, for example, or what might be done to correct or compensate for such tendencies. A link to behavioural economics would be interesting.
  • The pièce de résistance is a pair of similar figures. The intention is that the first initially looks random, but after looking at the second and seeing words picked out in colour, one looks back at the first figure and sees words. The assertion is that ‘people’ cannot suppress this autonomous ‘sense making’. But some can.

Selecting for ‘Negative Capability’

To me, the significance is not so much about the nature of risk (which aligns with Keynes, for example) but about the reasons why people are blind to risk: because once they ‘see’ how the economy (or whatever) works they are unable to ‘see’ any other possibility. The implication seems to be that the blindness here is the same kind as in the optical example. If so, maybe we should use the optical example (or other colour-blindness tests) to select those with Keats’ ‘negative capability’ for roles that need to ‘see’ risk. But is it really so?

See also

Search my blog for uncertainty, risk or crisis.

Dave Marsay

Science advice and the management of risk

Science advice and the management of risk in government and business

The Foundation for Science and Technology, 10 November 2010

An authoritative summary of the UK government’s position on risk, with talks and papers.

  • Beddington gives a good overview. He discusses probability versus impact ‘heat maps’, the use of ‘worst case’ scenarios, the limitations of heat maps, and Blackett reviews. He also discusses how management strategy has to reflect both the location on the heat map and the uncertainty in that location.
  • Omand discusses ‘Why won’t they (politicians) listen (to the experts)?’ He notes the difference between secrets (hard to uncover) and mysteries (hard to make sense of), and makes ‘common cause’ between science and intelligence in attempting to communicate with politicians. He presents a familiar type of chart in which probability is thought of as totally ordered (as in Bayesian probability) and seeks to standardise the descriptors of ranges of probability, such as ‘highly probable’.
  • Goodman discusses economic risk management and the need to cope with ‘irrational cycles of exuberance’, focussing on ‘low probability high impact’ events. Only some risks can be quantified. He recommends the ‘generalised Pareto distribution’ (a sketch of its use is given after the discussion points below).
  • Spiegelhalter introduced the discussion with some important insights:

The issue ultimately comes down to whether we can put numbers on these events.  … how can a figure communicate the enormous number of assumptions which underlie such quantifications? … The … goal of a numerical probability … becomes much more difficult when dealing with deeper uncertainties. … This concerns the acknowledgment of indeterminacy and ignorance.

Standard methods of analysis deal with recognised, quantifiable uncertainties, but this is only part of the story, although … we tend to focus at this level. A first extra step is to be explicit about acknowledged inadequacies – things that are not put into the analysis such as the methane cycle in climate models. These could be called ‘indeterminacy’. We do not know how to quantify them but we know they might be influential.

Yet there are even greater unknowns which require an essential humility. This is not just ignorance about what is wrong with the model, it is an acknowledgment that there could be a different conceptual basis for our analysis, another way to approach the problem.

There will be a continuing debate about the process of communicating these deeper uncertainties.

  • The discussion covered the following:
    • More coverage of the role of emotion and group think is needed.
    • “[G]overnments did not base policies on evidence; they proclaimed them because they thought that a particular policy would attract votes. They would then seek to find evidence that supported their view. It would be more realistic to ask for policies to be evidence tested [rather than evidence-based.]”
    • “A new language was needed to describe uncertainty and the impossibility of removing risk from ordinary life … .”
    •  Advisors must advise, not covertly subvert decision-making.
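On Goodman’s point, the standard way to use a generalised Pareto distribution is ‘peaks over threshold’; a minimal sketch (my illustration, not from the talk), assuming scipy is available:

    # Peaks-over-threshold: fit a generalised Pareto distribution (GPD) to
    # exceedances over a high threshold and estimate a tail probability.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    losses = rng.standard_t(df=3, size=10_000)  # hypothetical heavy-tailed losses

    u = np.quantile(losses, 0.95)               # threshold: the 95th percentile
    exceedances = losses[losses > u] - u

    shape, _, scale = stats.genpareto.fit(exceedances, floc=0)

    # P(loss > x) ~ P(loss > u) * P(GPD exceedance > x - u)
    x = u + 3.0
    p_tail = (len(exceedances) / len(losses)) * stats.genpareto.sf(x - u, shape, scale=scale)
    print(f"estimated P(loss > {x:.2f}) = {p_tail:.5f}")

Of course this quantifies only the recognised, quantifiable part of the risk – the first of the levels Spiegelhalter distinguishes above.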

Comments

If we accept that there is more to uncertainty than can be reflected in a typical scale of probability, then it is no wonder that organisational decisions fail to take account of it adequately, or that some advisors seek to subvert such poor processes. Moreover, this seems to be a ‘difference that makes a difference’.

From a Keynesian perspective conditional probabilities, P(X|A), sometimes exist but unconditional ones, P(X), rarely do. As Spiegelhalter notes, it is often the assumptions that are wrong: the estimated probability is then irrelevant. Spiegelhalter mentioned the common use of ‘sensitivity analysis’, noting that it is unhelpful. But what is commonly done is to test the sensitivity of P(X|y,A) to some minor variable y while keeping the assumptions, A, fixed. What is more often needed (for these types of risk) is a sensitivity to the assumptions themselves. Thus, if P(X|A) is high:

  • one needs to identify possible alternatives, A’, to A for which P(X|A’) is low, no matter how improbable A’ may be regarded.

Here:

  • ‘Possible’ means consistent with the evidence rather than anything psychological.
  • The criteria for what is regarded as ‘low’ or ‘high’ will be set by the decision context.

The rationale is that everything that has ever happened was, with hindsight, possible: the things that catch us out are those that we overlooked, perhaps because we thought them improbable.
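A toy sketch of this assumption-level sensitivity check (my construction; thresholds and numbers are purely illustrative):

    # Rather than varying a minor parameter y within fixed assumptions A,
    # search for alternative assumption sets A' that remain consistent
    # with the evidence but drive P(X|A') low.
    candidates = {
        "A   (working assumptions)":        {'p_x': 0.90, 'evidence_likelihood': 0.60},
        "A'  (improbable but consistent)":  {'p_x': 0.10, 'evidence_likelihood': 0.05},
        "A'' (inconsistent with evidence)": {'p_x': 0.20, 'evidence_likelihood': 1e-6},
    }

    CONSISTENT = 0.01  # 'possible' = evidence likelihood above this floor
    LOW = 0.30         # what counts as 'low' is set by the decision context

    for name, c in candidates.items():
        if c['evidence_likelihood'] >= CONSISTENT and c['p_x'] <= LOW:
            print(f"warning: {name} is possible yet makes X unlikely (P = {c['p_x']})")

Here only A' triggers the warning: it is consistent with the evidence, however improbable it is regarded, so it belongs in the risk communication.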

A conventional analysis would overlook emergent properties, such as boom-and-bust cycles of ‘irrational’ exuberance. Thus in considering alternatives one needs to consider potential emotions and other emergent phenomena and epochal events.

This suggests a typical ‘risk communication’ would consist of an extrapolated ‘main case’ probability together with a description of scenarios in which the opposite probability would hold.

See also

mathematics, heat maps, extrapolation and induction

Other debates, my bibliography.

Dave Marsay

 

Uncertainty, utility and paradox


Allais

Allais devised two choices:

  1. between a definite £1M and a gamble whose expected return was much greater, but which could give nothing
  2. between two gambles

He showed that most people made choices that were inconsistent with expected utility theory, and hence paradoxical.

In the first choice, one option has a certain payoff and so is reasonably preferred. In the second, both options have similarly uncertain outcomes and so it is reasonable to choose based on expected utility. In general, uncertainty reasonably detracts from expected utility.
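For concreteness, the usual Allais figures (my sketch; the post above does not fix the exact numbers): choice 1 is A = £1M for certain versus B = 10% £5M, 89% £1M, 1% nothing; choice 2 is C = 11% £1M versus D = 10% £5M, otherwise nothing. Most people choose A and D.

    # Check that no expected-utility maximiser can prefer both A and D.
    def eu(lottery, u):
        """Expected utility of [(probability, outcome), ...] under u."""
        return sum(p * u(x) for p, x in lottery)

    A = [(1.00, 1)]
    B = [(0.10, 5), (0.89, 1), (0.01, 0)]
    C = [(0.11, 1), (0.89, 0)]
    D = [(0.10, 5), (0.90, 0)]

    # For any u with u(0) = 0:
    #   A preferred to B  <=>  0.11*u(1) > 0.10*u(5)
    #   D preferred to C  <=>  0.10*u(5) > 0.11*u(1)
    # so the two common preferences contradict each other.
    u = lambda x: x ** 0.5  # one example: a concave (risk-averse) utility
    print(eu(A, u) > eu(B, u), eu(D, u) > eu(C, u))  # never both True

The comments carry the whole paradox: preferring both A and D requires 0.11·u(£1M) to be simultaneously greater and smaller than 0.10·u(£5M).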

Ellsberg

Ellsberg devised a similar paradox, in which people consistently prefer the alternatives with the least uncertainty (ambiguity), again inconsistently with expected utility theory.
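The usual single-urn version (again my sketch, not from the post): an urn holds 90 balls, 30 red and 60 black or yellow in unknown proportion; most people prefer betting on red over black, and also on ‘black or yellow’ over ‘red or yellow’.

    # Scan subjective probabilities p = P(black) in [0, 2/3] and check that
    # none supports both common preferences; P(red) = 1/3, P(yellow) = 2/3 - p.
    found = [p for p in (k / 300 for k in range(201))
             if 1/3 > p                            # red preferred to black
             and p + (2/3 - p) > 1/3 + (2/3 - p)]  # black-or-yellow over red-or-yellow
    print(found)  # []: no single probability assignment rationalises both

The second condition simplifies to p > 1/3, directly contradicting the first, so the preference for known probabilities cannot be captured by any single Bayesian prior.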

See also

mathematics, illustrations, examples.

Dave Marsay