Complexity Demystified: A guide for practitioners

P. Beautement & C. Broenner, Complexity Demystified: A guide for practitioners, Triarchy Press, 2011.

First Impressions

  • The title comes close to ‘complexity made simple’, which would be absurd. A favourable interpretation (after Einstein) would be ‘complexity made as straightforward as possible, but no more.’
  • The references look good.
  • The illustrations look appropriate, of suitable quality, quantity and relevance.

Skimming through I gained a good impression of who the book was for and what it had to offer them. This was borne out (below).

Summary

Who is it for?

Complexity is here seen from the viewpoint of a ‘coal face’ practitioner who:

  • Is dealing with problems that are not amenable to a conventional managerial approach (e.g. set targets, monitor progress against targets, …).
  • Has had some success and shown some insight and aptitude.
  • Is being thwarted by stakeholders (e.g., donors, management) who hold a conventional management view and use conventional ‘tools’, such as accountability against pre-agreed targets.

What is complexity?

Complexity is characterised as a situation where:

  • One can identify potential behaviours and value them, mostly in advance.
  • Unlike simpler situations, one cannot predict what the priorities will be, or when: a plan that is a program will fail.
  • One can react to behaviours by suppressing negative behaviours and supporting positive ones: a plan is a valuation, activity is adaptation.

Complexity leads to uncertainty.

Details

Complexity science principles, concepts and techniques

The first two context-settings were well written and informative. This is about academic theory, which we have been warned not to expect too much of; such theory is not [yet?] ‘real-world ready’ – ready to be ‘applied to’ real complex situations – but it does supply some useful conceptual tools.

The approach

In effect, commonplace ‘pragmatism’ is not adequate, so the notion of pragmatism is adapted. Instead of persisting with one’s view for as long as it seems adequate, one seeks to use a broad range of cognitive tools to check one’s understanding and look for alternatives, particularly looking out for any unanticipated changes as soon as they occur.

The book refers to a ‘community of practice’, which suggests that there is already a community that has identified and is grappling with the problems, but is in need of some extra hints and tips. The approach seems down to earth and ‘pragmatic’, not challenging ideologies, cultures or other deeply held values.

Case Studies

These cover a good range, and those where the authors had been more closely involved are the better for it. I found the one on Ludlow particularly insightful, chiming with my own experiences. I am tempted to blog separately on the ‘fuel protests in the UK in 2000’, as I was engaged with some of the team involved at the time, on related issues. But some of the issues raised here seem quite generally important.

Interesting points

  • Carl Sagan is cited to the effect that the left brain deals with detail, the right with context – the ‘bigger picture’. In my opinion many organisations focus too readily on the short term, to the exclusion of the long term, and if they do focus on the long term they tend to do it ‘by the clock’ with no sense of ‘as required’. Balancing long-term and short-term needs can be the most challenging aspect of interventions.
  • ECCS 09 is made much of. I can vouch for the insightful nature of the practitioners’ workshop that the authors led.
  • I have worked with Patrick, so had prior sight of some of the illustrations. The account is recognizable, but all the better for the insights of ECCS 09 and – possibly – not having to fit with the prejudices of some unsympathetic stakeholders. In a sense, this is the book that we have been lacking.

Related work

Management

  • Leadership agility: A business imperative for a VUCA world.
    Takes a similar view about complexity and how to work with it.
  • The Cynefin Framework.
    Positions complexity between complicated (familiar management techniques work) and chaos (act first). Advocates ‘probe-sense-respond’, which reflects some of the same views as ‘Complexity Demystified’. (The authors have discussed the issues.)

Conclusions

The book considers all types of complexity, revealing that what is required is a more thoughtful approach to pragmatism than is the norm for familiar situations, together with a range of thought-provoking tools, the practical expediency of some of which I can vouch for. As such it provides 259 pages of good guidance. If it came to be a common source across many practitioner domains then it could also facilitate cross-domain discussions on complex topics, something that I feel would be most useful. (Currently some excellent practice is being obscured by the use of ‘silo’ languages and tools, inhibiting collaboration and cross-cultural learning.)

The book seems to me to be strongest in giving guidance to practitioners who are taking, or are constrained to take, a phenomenological approach: seeking to make sense of situations before reacting. This type of approach has been the focus of western academic research and much practice for the last few decades, and in some quarters the notion that one might act without being able to justify one’s actions would be anathema. The book gives some new tools which, it is hoped, will be useful in justifying action, but I have a concern that some situations will still be novel and that, to be effective, practitioners may still need to act outside the currently accepted concepts, whatever they are. I would have liked the book to be more explicit about its scope, since:

  • Some practitioners can actually cope quite well with such supposedly chaotic situations. Currently, observers tend not to appreciate the extreme complexity of others’ situations, and so under-value their achievements. This is unfortunate, as, for example:
    • Bleeding edge practitioners might find themselves stymied by managers and other stakeholders who have too limited a concept of ‘accountability’.
    • Many others could learn from such practitioners, or employ their insights.
  • Without an appreciation of the complexity/chaos boundary, practitioners may take on tasks that are too difficult for them or the tools at their disposal, or where they may lose stakeholder engagement through having different notions of what is ‘appropriately pragmatic’.
  • An organisation that had some appreciation of the boundary could facilitate mentoring etc.
  • We could start to identify and develop tools with a broader applicability.

In fact, some of the passages in the book would, I believe, be helpful even in the ‘chaos’ situation. If we had a clearer ‘map’ the guidance on relatively straightforward complexity could be simplified and the key material for that complexity which threatens chaos could be made more of. My attempt at drawing such a distinction is at https://djmarsay.wordpress.com/notes/about-these-posts/work-in-progress/complexity/ .

In practice, novelty is more often found in long-term factors, not least because if we do not prepare for novelty sufficiently in advance, we will be unable to react effectively. While I would never wish to advocate too clean a separation between practice and policy, or between short- and long-term considerations, we can perhaps take a leaf out of the book and venture some guidance, not to be taken too rigidly. If conventional pragmatism is appropriate at the immediate ‘coal face’ in the short run, then this book is a guide for those practitioners who are taking a step back and considering complex medium-term issues. It would also usefully inform policy makers in considering the long run, but it does not directly address the full complexities that they face, which are often inherently mysterious when seen from a narrow phenomenological stance. It does not provide guidance tailored for policy makers, nor does it give practitioners a view of policy issues. But it could provide a much-needed contribution towards spanning what can be a difficult practice / policy divide.

Addendum

One of the authors has developed eleven ‘Principles of Practice’. These reflect the view that, in practice, the most significant ‘unintended consequences’ could have been avoided. I think there is a lot of ‘truth’ in this. But it seems to me that however ‘complexity worthy’ one is, and however much one thinks one has followed ‘best practice’ – including that covered by this book – there are always going to be ‘unintended consequences’. It’s just that one can anticipate that they will be less serious, and in particular not as serious as the original problem one was trying to solve.

See Also

Some mathematics of complexity, Reasoning in a complex dynamic world

Dave Marsay

Reasoning and natural selection

Cosmides, L. & Tooby, J. (1991). Reasoning and natural selection. Encyclopedia of Human Biology, vol. 6. San Diego: Academic Press.

Summary

Argues that logical reasoning, by which the authors seem to mean classical induction and symbolic reasoning, is not favoured by evolution. Instead one has reasoning particular to the social context. It argues that in typical situations it is either not possible or not practical to consider ‘all hypotheses’, and that the generation of hypotheses to consider is problematic. It argues that this is typically done using implicit specific theories. Has a discussion of the ‘green and blue cabs’ example.

Comment

In real situations one cannot just assume induction, and one lacks the ‘facts’ needed to perform symbolic reasoning. Logically, then, empirical reasoning would seem more suitable. Keynes, for example, considers the impact of not being able to consider ‘all hypotheses’.

While the case against classical rationality seems sound, the argument leaves the way open for an alternative rationality, e.g. based on Whitehead and Keynes.

See Also

Later work

Better than rational, uncertainty aversion.

Other

Reasoning, mathematics.

Dave Marsay

Better than Rational

Cosmides, L. & Tooby, J. (1994). Better than rational: Evolutionary psychology and the invisible hand. American Economic Review, 84 (2), 327-332.

Summary

[Mainstream Psychologists and behaviourists have studied] “biases” and “fallacies” – many of which are turning out to be experimental artifacts or misinterpretations (see G. Gigerenzer, 1991). [Gigerenzer, G. “How to Make Cognitive Illusions Disappear: Beyond Heuristics and Biases,” in W. Stroebe and M. Hewstone, eds., European Review of Social Psychology, Vol. 2. Chichester, U.K.: Wiley, 1991, pp. 83-115.]

… 

One point is particularly important for economists to appreciate: it can be demonstrated that “rational” decision-making methods (i.e., the usual methods drawn from logic, mathematics, and probability theory) are computationally very weak: incapable of solving the natural adaptive problems our ancestors had to solve reliably in order to reproduce (e.g., Cosmides and Tooby, 1987; Tooby and Cosmides, 1992a; Steven Pinker, 1994).

…  sharing rules [should be] appealing in conditions of high variance, and unappealing when resource accrual is a matter of effort rather than of luck (Cosmides and Tooby, 1992).

Comment

They rightly criticise ‘some methods’ drawn from mathematics etc., but some have interpreted this as meaning that “logic, mathematics, and probability theory are … incapable of solving the natural adaptive problems our ancestors had to solve reliably in order to reproduce”. But this leads them to overlook relevant theories, such as Whitehead’s and Keynes’.

See Also

Relevant mathematics, Avoiding unknown probabilities, Kahneman on biases

NOTE

This has been copied to my bibliography section under ‘rationality and uncertainty’, ‘more …’, where it has more links. Please comment there.

Dave Marsay

When and why do people avoid unknown probabilities in decisions under uncertainty?

Rode, C., Cosmides, L., Hell, W., & Tooby, J. (1999). When and why do people avoid unknown probabilities in decisions under uncertainty? Testing some predictions from optimal foraging theory. Cognition, 72, 269-304.

Summary

Sets up a foraging ‘system’ to explore decision-making.

In this view, the system is not designed merely to maximize expected utility. It is designed to minimize the probability of an outcome that fails to satisfy one’s need, as per Keynes.

The people who participated in our experiments executed complex decision strategies, ones that take into account three parameters – mean, variance, and need level – rather than just the single parameter (mean) emphasized by some normative theories. Their intuitions were so on target that their decisions very closely tracked the actual probabilities of each box satisfying their needs. This was true even though explicitly deriving these probabilities is a nontrivial mathematical calculation.

Comment

This gives a foraging setting in which rather than gathering the most food in the long run, the aim is – firstly – to have enough to survive in the short run, and then to build up a surplus in the long run. It rightly notes that this calls for a different approach. Confusingly (to me) it describes the utility approach as ‘logical’ and ‘mathematical’, from which some seem to infer that trying to maximize sustainability is not.

  • A strategy that seeks to maximize expected return / return ‘in the long run’ may not be appropriate when there is short-term jeopardy (as Keynes said, ‘In the long run we are all dead’); see the sketch after this list.
  • It is not logical or mathematical to use a theory whose assumptions / axioms are known to be false, although (according to some definitions) it may be ‘rational’. If one is not certain that the assumptions / axioms are ‘true’, it is not logical or mathematical to ignore the uncertainty.
  • Logic and mathematics such as Keynes’ can cope with short-term decisions, or situations where a balance is needed between short and long-run issues.
  • Logically, typical foraging tasks are best met by a population of foragers with different ‘attitudes to risk’. That is, most foragers may take a short term view but some need to take a long term view (to find new food sources). This relies on sharing when the explorers come back empty-handed.
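
To see why the probability of meeting a need is a ‘nontrivial mathematical calculation’, here is a minimal sketch of need-based choice. It is my own illustration, with made-up numbers and a normal approximation that the paper does not use:

    from math import erf, sqrt

    def p_meets_need(mean, sd, need):
        # P(X >= need) for X ~ Normal(mean, sd): 1 - Phi((need - mean) / sd).
        z = (need - mean) / sd
        return 0.5 * (1 - erf(z / sqrt(2)))

    boxes = {"steady": (10.0, 1.0), "risky": (12.0, 8.0)}  # (mean, sd) of yield
    need = 9.0  # the short-run survival threshold
    for name, (mean, sd) in boxes.items():
        print(name, round(p_meets_need(mean, sd, need), 3))
    # steady: 0.841, risky: 0.646 - the box with the higher mean is the less
    # likely to satisfy the need, so maximising expected return and minimising
    # the probability of falling short pick different boxes.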

One should also note that the original paper uses variance in a stereotyped way that is not always appropriate, as emphasised by Taleb, who also discusses the general problem of ‘resilience to tail risk’.

See Also

Paradoxes, Mathematics, Allen, Better than Rational.

Dave Marsay

Finding the true meaning of risk

Interpretation as Risk

The New Scientist has an important article by Nicolas Bouleau: Issue 2818, 25 June 2011, p. 30.

Whether it’s a shape, a structure or a work of art, once you see a meaning in something, there’s no going back. The reasons why run deep.

… the assertion that a particular phenomenon is “random” is neither trivial nor obvious and, except in textbook cases, is the subject of debate.

This … irreversibility of interpretation … holds quite generally: once you perceive something in the world – a shape, structure or a meaning – you can’t go back. …

All this is crucial to truly understanding risk. The belief some people have that risks can be objectively measured means expunging their interpretative aspect, even though that aspect is an essential part of understanding risk. From the epistemic point of view, it is the meaning of the event that determines the risk. The probabilistic representation … is too simplistic.

Usually there isn’t enough information for such a model: we do not know the probabilities of rare events occurring since there will never be enough data, we do not have a full description of what can happen, and we do not know how to calculate the cost of that event occurring.

….

The bottom line – quite literally, sometimes – is that to really understand risk, we have no choice but to take account of the way people interpret events.

Comments

The conclusion seems sound, but:

  • I am not sure that it is useful to imagine that anything really ‘is’ random or meaningful: these are in the eye of the beholder.
  • When abroad I often see things that appear random to me but which I believe to be meaningful to the locals.
  • The article is full of disparaging remarks about how ‘people’ make sense of things, without considering whether these tendencies are cultural or biological, for example, or what might be done to correct or compensate for them. A link to behavioural economics would be interesting.
  • The pièce de résistance is a pair of similar figures. The intention is that the first initially looks random, but after looking at the second and seeing words picked out in colour one looks back at the first figure and sees words. The assertion is that ‘people’ cannot suppress this autonomous ‘sense making’. But some can.

Selecting for ‘Negative Capability’

To me, the significance is not so much about the nature of risk (which aligns with Keynes, for example) but about the reasons why people are blind to risk: because once they ‘see’ how the economy (or whatever) works they are unable to ‘see’ any other possibility. The implication seems to be that the blindness here is the same kind as in the optical example. If so, maybe we should use the optical example (or other colour-blindness tests) to select those with Keats’ ‘negative capability’ for roles that need to ‘see’ risk. But is it really so?

See also

Search my blog for uncertainty, risk or crisis.

Dave Marsay

Science advice and the management of risk

Science advice and the management of risk in government and business

The foundation for science and technology, 10 November 2010

An authoritative summary of the UK government’s position on risk, with talks and papers.

  •  Beddington gives a good overview. He discusses probability versus impact ‘heat maps’, the use of ‘worst case’ scenarios, the limitations of heat maps and Blackett reviews. He discusses how management strategy has to reflect both the location on the heat map and the uncertainty in the location.
  • Omand discusses ‘Why won’t they (politicians) listen (to the experts)?’ He notes the difference between secrets (hard to uncover) and mysteries (hard to make sense of), and makes ‘common cause’ between science and intelligence in attempting to communicate with politicians. He presents a familiar type of chart in which probability is thought of as totally ordered (as in Bayesian probability) and seeks to standardise the descriptors for ranges of probability, such as ‘highly probable’.
  • Goodman discusses economic risk management and the need to cope with ‘irrational cycles of exuberance’, focussing on ‘low probability high impact’ events. Only some risks can be quantified. Recommends ‘generalised Pareto distribution’.
  • Spiegelhalter introduced the discussion with some important insights:

The issue ultimately comes down to whether we can put numbers on these events.  … how can a figure communicate the enormous number of assumptions which underlie such quantifications? … The … goal of a numerical probability … becomes much more difficult when dealing with deeper uncertainties. … This concerns the acknowledgment of indeterminacy and ignorance.

Standard methods of analysis deal with recognised, quantifiable uncertainties, but this is only part of the story, although … we tend to focus at this level. A first extra step is to be explicit about acknowledged inadequacies – things that are not put into the analysis such as the methane cycle in climate models. These could be called ‘indeterminacy’. We do not know how to quantify them but we know they might be influential.

Yet there are even greater unknowns which require an essential humility. This is not just ignorance about what is wrong with the model, it is an acknowledgment that there could be a different conceptual basis for our analysis, another way to approach the problem.

There will be a continuing debate  about the process of communicating these deeper uncertainties.

  • The discussion covered the following:
    • More coverage of the role of emotion and group think is needed.
    • “[G]overnments did not base policies on evidence; they proclaimed them because they thought that a particular policy would attract votes. They would then seek to find evidence that supported their view. It would be more realistic to ask for policies to be evidence tested [rather than evidence-based.]”
    • “A new language was needed to describe uncertainty and the impossibility of removing risk from ordinary life … .”
    •  Advisors must advise, not covertly subvert decision-making.

Comments

If we accept that there is more to uncertainty than can be reflected in a typical scale of probability, then it is no wonder that organisational decisions fail to take account of it adequately, or that some advisors seek to subvert such poor processes. Moreover, this seems to be a ‘difference that makes a difference’.

From a Keynesian perspective conditional probabilities, P(X|A), sometimes exist but unconditional ones, P(X), rarely do. As Spiegelhalter notes, it is often the assumptions that are wrong: the estimated probability is then irrelevant. Spiegelhalter mentioned the common use of ‘sensitivity analysis’, noting that it is unhelpful. But what is commonly done is to test the sensitivity of P(X|y,A) to some minor variable y while keeping the assumptions, A, fixed. What is more often needed (for these types of risk) is sensitivity to the assumptions themselves, as sketched below. Thus, if P(X|A) is high:

  • one needs to identify possible alternatives, A’, to A for which P(X|A’) is low, no matter how improbable A’ may be regarded.

Here:

  • ‘Possible’ means consistent with the evidence rather than anything psychological.
  • The criteria for what is regarded as ‘low’ or ‘high’ will be set by the decision context.

The rationale is that everything that has ever happened was, with hindsight, possible: the things that catch us out are those that we overlooked, perhaps because we thought them improbable.
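
As a toy sketch of that procedure (my own, with hypothetical numbers; nothing like this appears in the talks):

    # Hypothetical numbers: the point is the procedure, not the values.
    p_x_given = {
        "A: working assumptions": 0.95,
        "A1: correlated defaults": 0.15,   # evidence-consistent alternative
        "A2: liquidity contagion": 0.10,   # evidence-consistent alternative
    }
    low = 0.5  # what counts as 'low' is set by the decision context
    challenges = {a: p for a, p in p_x_given.items() if p < low}
    # Risk communication: report the main-case P(X|A) together with the
    # evidence-consistent scenarios under which the opposite would hold.
    print(challenges)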

A conventional analysis would overlook emergent properties, such as booming cycles of ‘irrational’ exuberance. Thus in considering alternatives one needs to consider potential emotions and other emergent and epochal events.

This suggests a typical ‘risk communication’ would consist of an extrapolated ‘main case’ probability together with a description of scenarios in which the opposite probability would hold.

See also

mathematics, heat maps, extrapolation and induction

Other debates, my bibliography.

Dave Marsay


Uncertainty, utility and paradox


Allais

Allais devised two choices:

  1. between a definite £1M and a gamble whose expected return was much greater, but which could give nothing
  2. between two gambles

He showed that most people made choices that were inconsistent with expected utility theory, and hence paradoxical.

In the first choice, one option has a certain payoff and so is reasonably preferred. In the second, both options have similarly uncertain outcomes and so it is reasonable to choose based on expected utility. In general, uncertainty reasonably detracts from expected utility.
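
A worked version, using the textbook numbers commonly attributed to Allais (his original stakes differ), makes the inconsistency explicit:

    # Lotteries as {outcome in £M: probability}.
    A = {1: 1.00}                    # choice 1: £1M for sure, versus ...
    B = {5: 0.10, 1: 0.89, 0: 0.01}  # ... a richer gamble that could give nothing
    C = {1: 0.11, 0: 0.89}           # choice 2: one gamble, versus ...
    D = {5: 0.10, 0: 0.90}           # ... another

    def eu(lottery, u=lambda x: x):
        # Expected utility; with u = identity this is the expected value.
        return sum(p * u(x) for x, p in lottery.items())

    print(eu(A), eu(B), eu(C), eu(D))  # 1.0, 1.39, 0.11, 0.5
    # Most people choose A (certainty) and D. But for ANY utility u,
    # eu(A) > eu(B) rearranges to 0.11*u(1) > 0.10*u(5) + 0.01*u(0),
    # which is exactly eu(C) > eu(D): choosing both A and D is
    # inconsistent with expected utility theory, whatever u is.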

Ellsberg

Ellsberg devised a similar paradox, in which again people consistently prefer the alternatives with the least uncertainty.

See also

mathematics, illustrations, examples.

Dave Marsay

Induction, novelty and possibilistic causality

The concept of induction normally bundles together a number of stages, of which the key ones are modelling and extrapolating. Here I speculatively consider causality through the ‘lens’ of induction.

If I perform induction and what is subsequently observed fits the extrapolation then, in a sense, there is no novelty. If what happened was part of an epoch where things fit the model, then the epoch has not ended. I only need to adjust some parameter within the model that is supposed to vary with time.  In this case I can say that conformance to the model (with the value of its variables) could have caused the observed behaviour. That is, any notion of causality is entailed by the model. If we consider modelling and extrapolation as flow, then what happens seems to be flowing within the epoch. The general model (with some ‘slack’ in its variables) describes a tendency for change, that can be described as a field (as Smuts does).

As with the interpretation of induction, we have to be careful. There may be multiple inconsistent models and hence multiple inconsistent possible causes. For example, an aircraft plot may fit both civil and military aircraft, which may heading for different airports. Similarly, we often need to make assumptions to make the data fit the model, so different assumptions can lead to different models. For example, if an aircraft suddenly loses height we may assume that it had received an instruction, or that it is in trouble. These would lead to different extrapolations. As with induction, we neglect the caveats at our peril.

We can distinguish the following types of ‘surprise’:

  1. Where sometimes rare events happen within an epoch, without affecting the epoch. (Like an aircraft being struck by lightning, harmlessly.)
  2. Where the induction was only possibilistic, and one of the possible predictions actually occurred. (Where one predicts that at least one aircraft will manoeuvre to avoid a collision, or there will be a crash.)
  3. Where induction shows that the epoch has become self-defeating. (As when a period of an aircraft flying straight and level has to be ended to avoid a crash – which would end the epoch anyway.)
  4. Where the epoch is ended by external events. (As when air traffic control fails.)

These all distinguish between different types of ‘cause’. Sometimes two or more types may act together. (For example, when two airplanes crash together, the ‘cause’ usually involves both planes and air traffic control. Similarly, if a radar is tracking an aircraft flying straight and level, we can say that the current location of the aircraft is ‘caused by’ the laws of physics, the steady hand of the pilot, and the continued availability of fuel etc. But in a sense it is also ‘caused by’ not having been shot down.)

If the epoch appears to have continued then a part of the cause is the lack of all those things that could have ended it. If the epoch appears to have ended then we may have no model or only a very partial model for what happens. If we have a fuller model we can use that to explain what happened and hence to describe ‘the cause’. But with a partial model we may only be able to put constraints on what happened in a very vague way. (For example, if we launch a rocket we may say what caused it to reach its intended target, but if it misbehaves we could only say that it will end up somewhere in quite a large zone, and we may be able to say what caused it to fail but not what caused it to land where it did. Rockets are designed to operate within the bounds of what is understood: if they fail ‘interesting’ things can happen.) Thus we may not always be able to give a possible cause for the event of interest, but would hope to be able to say something helpful.

In so far as we can talk about causes, we are talking about the result of applying a theory / model / hypothesis that fits the data. The use of the word ‘cause’ is thus a short-hand for the situation where the relevant theory is understood.

Any attempt to draw conclusions from data involves modelling, and the effectiveness of induction feeds back into the modelling process, fitting some hypotheses while violating others. The term ‘cause’ suggests that this process is mature and reliable. Its use thus tends to go with a pragmatic approach. Otherwise one should be aware of the inevitable uncertainties. To say that X [possibly] causes Y is simply to say that one’s experience to date fits X causes Y, subject to certain assumptions. It may not be sensible to rely on this, for example where you are in an adversarial situation and your opponent has a broader range of relevant experience than you, or where you are using your notion of causality to influence something that may be counter-adapting. Any notion of causality is just a theory. Thus it seems quite proper for physicists to seek to redefine causality in order to cope with Quantum Physics.

Dave Marsay

Uncertainty and risk ‘Heat Maps’

Risk heat maps

A risk ‘heat map’ shows possible impact against likelihood of various events or scenarios, as in this one from the EIU website:

The ‘Managing Uncertainty’ blog draws attention to it and raises some interesting issues. Importantly, it notes that it includes events with both positive and negative potential impacts. But I go further and note that in assigning a small blob to each event, it fails to show  Knightian uncertainty at all.

Incorporating uncertainty

Uncertainty can be shown by having multiple blobs per event, perhaps smearing them into a region; a minimal plotting sketch follows the list below. One way to set the blobs is to get multiple stakeholders to mark their own assessments. My experience in crisis management and security is that:

  • Stakeholders will tend to judge impact for their own organisations. This can be helpful, but often one will want them to also assess the impact on ‘the big picture’ and the ‘route’ through which that impact may take effect. This can help flesh out the scenario. For example, perhaps one organisation doesn’t see any (direct) impact on itself, but another organisation sees that although it will be affected it can shift the burden to the unsuspecting first organisation.
  • Often, the risk comes from a lack of preparation, which often comes from a lack of anticipation. Thus the situation is highly reflexive. One can use the heat map to show a range of outcomes from ‘taken by surprise’ to ‘fully prepared’.
  • One generally needs some sort of role-playing ‘game’ backed by good analysis before stakeholders can make reasonable appreciations of the impact on ‘the whole’.
  • It is often helpful for stakeholders to mark the range of positions assumed within their organisations.
  • A suitably marked heat map can be used to facilitate debate, with scenarios and marks developed until one either has convergence or a clear idea of why convergence is lacking.
  • The various scenarios will often need some analysis to bring out the key relationships (‘e.g. contagion’), which can then be validated by further debate / gaming.
  • Out of the debate, supported by the heat map with rationalised scenarios, comes a view about which issues need to be communicated better or more widely, so that all organisations appreciate the relative importance of their uncertainties, and how they are affected by and affect others’.
  • Any difficulties in the above (such as irreconcilable views, or questions that cannot be answered) lead to requirements for further research, debate, etc.
  • When time is pressing a ‘bold decision’ may need to substitute for thorough analysis. But there is then a danger that the residual risks become ‘unspeakable’. The quality of the debate, to avoid this and other kinds of groupthink, can thus be critical.
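
A minimal plotting sketch (my own, with invented marks; not from the blog or the EIU) of a heat map with several stakeholder marks per event rather than a single blob:

    import matplotlib.pyplot as plt

    # event -> stakeholder marks as (likelihood, impact); impacts may be
    # positive or negative, and the spread of marks shows the uncertainty.
    assessments = {
        "event 1": [(0.2, 0.7), (0.35, 0.9), (0.3, 0.5)],
        "event 2": [(0.6, -0.4), (0.7, 0.6)],  # stakeholders disagree on sign
    }
    fig, ax = plt.subplots()
    for event, marks in assessments.items():
        xs, ys = zip(*marks)
        ax.scatter(xs, ys, alpha=0.6, label=event)
    ax.axhline(0, linewidth=0.5)  # separate positive from negative impacts
    ax.set_xlabel("likelihood")
    ax.set_ylabel("impact")
    ax.legend()
    plt.show()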

Example

The UK at first misunderstood the nature of the protestors in the ‘first fuel crisis’ of 2000, which could have had dire consequences. It proved key that the risk heat map showed not only the mainstream view but also credible alternatives. This is seen to be a case where the Internet, mobile phones and social media changed the nature of protest. With this in mind, the EIU’s event 2 (technology leading to rapid political and economic change) could have positive or negative consequences, depending on how well governments respond. It may be that democratic governments believe that they can respond to rapid change, but it ought still to be flagged up as a risk.

See also

Cynefin, mathematics of uncertainty

Dave Marsay 

The Precautionary Principle and Risk

Definition

The precautionary principle is that:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.

It thus applies in situations of uncertainty: better safe than sorry. It has been criticised for holding back innovation. But a ‘precautionary measure’ can be anything that mitigates the risk, not just forgoing the innovation. In particular, if the potential ‘harm’ is very mild or easy to remediate, then there may be no need for costly ‘measures’.

Measures

There may be a cancer risk from mobile phones. An appropriate response is to advise restraint in the use of mobile phones, particularly by young people, and more research.

In the run-up to the financial crisis of 2007/8 there was an (indirect) threat to human health. An appropriate counter-measure might have been to encourage a broader base of economic research, including non-Bayesians.

Criticisms

Volokh sees the principle as directed against “politically disfavoured technologies” and hence potentially harmful. In particular, Matt Ridley considers that the German E. coli outbreak of 2011 might have been prevented if the food had been irradiated, but irradiation had been regarded as a possible threat, and hence under the precautionary principle had not been used. But the principle ought to be applied to all innovations, including large-scale organic farming, in which case irradiation might seem to be an appropriate precautionary measure. Given the fears about irradiation, it might have been used selectively – after test results or to quash an E. coli outbreak. In any event, there should be a balance of threats and measures.

Conclusion

The precautionary principle seems reasonable, but needs to be applied evenly, not just to ‘Frankenstein technologies’. It could be improved by emphasising the need for the measures to be ‘proportional’ to the down-side risk.

Dave Marsay

How to Grow a Mind

How to Grow a Mind: Statistics, Structure, and Abstraction

Joshua B. Tenenbaum, et al.
Science 331, 1279 (2011);
DOI: 10.1126/science.1192788

This interesting paper proposes that human reasoning, far from being uniquely human, is understandable in terms of the mathematics of inference, and in particular that concept learning is ‘just’ the combination of Bayesian inference and abstract induction found in hierarchical Bayesian models (HBM). This has implications for two debates:

  • how to conceptualise how people learn
  • the validity of Bayesian methods

These may help, for example:

  • to understand how thinking may be influenced, for example, by culture or experience
  • to aid teaching
  • to understand what might be typical mistakes of the majority
  • to understand mistakes typical of important minorities

If it were the case that humans are Bayesians (as others have also claimed, but with less scope) and if one thought that Bayesian thinking had certain flaws, then one would expect to find evidence of these in human activities (as one does – watch this blog e.g. here). But the details matter.

In HBM one considers that observations are produced by a likelihood function that has a probability distribution, or a longer chain of likelihood functions ‘topped out’ by a probability function. This is equivalent to having a chain of conditional likelihood functions, with the likelihood of the conditions of each function being given by the next one, topped out by an unconditional probability distribution, to make it Bayesian. The paper explains how a Chinese restaurant process (CRP) is used to decide whether new observations fit an existing category (node in the HBM) or a new one is required. In terms of ordinary Bayesian probability theory, this corresponds to creating a new hypothesis when the evidence does not fit any of the existing ones. It thus breaks the Bayesian assumption that the probabilities of the hypotheses sum to 1. Thus the use of the HBM is Bayesian only for as long as there is no observed novelty. So far, then, the way that humans reason would seem to meet criticisms of ‘pure’ Bayes.
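
For concreteness, here is a minimal sketch of a standard CRP categoriser (my reading of the mechanism, not the paper’s code): an observation joins an existing category with probability proportional to that category’s size, or founds a new one with probability proportional to a concentration parameter alpha – the analogue of creating a new hypothesis when the evidence fits none of the existing ones.

    import random

    def crp_tables(n_observations, alpha=1.0, seed=0):
        rng = random.Random(seed)
        tables = []  # observation counts per category ('table')
        for _ in range(n_observations):
            # Existing categories weighted by size; alpha weights novelty.
            weights = tables + [alpha]
            i = rng.choices(range(len(weights)), weights=weights)[0]
            if i == len(tables):
                tables.append(1)  # create a novel category
            else:
                tables[i] += 1    # assimilate to an existing category
        return tables

    print(crp_tables(100))  # typically a few large categories and several small ones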

A pragmatic approach is to use the existing model unless and until it is definitely broken, and this seems to be how the paper says humans think. But the paper does not distinguish between the following two situations:

  1. We seem to be in a familiar, routine, situation with no particular reason to expect surprises.
  2. We are in a completely novel situation, perhaps where others are seeking to outwit us.

The pragmatic approach seems reasonable when surprises are infrequent, ‘out of the blue’ and ‘not to be helped’. One proceeds as if one is a Bayesian until one has to change, in which case one fixes the Bayesian model (HBM) and goes back to being a de-facto Bayesian. But if surprises are more frequent then there are theoretical benefits in discounting the Bayesian priors (or frequentist frequency information), discounting more the more surprises are to be expected. This could be accommodated by the CRP-based categorisation process, to give an approach that was pragmatic in a broad sense, but not in James’ pedantic sense.

There are two other ways in which one might depart further from a pure Bayesian approach, although these are not covered by the paper:

  • In a novel situation for which there is no sound basis for any ‘priors’, use likelihood-based reasoning rather than trying (as HBM does) to extrapolate from previous experience.
  • In a novel situation, if previous experience has not provided a matching ‘template’ in HBM, consider other sources of templates, e.g.:
    • theoretical (e.g., mathematical) reasoning
    • advice from others

Conclusion

An interesting paper, but we perhaps shouldn’t take its endorsement of Bayesian reasoning too pedantically: there may be other explanations, and even if people are naturally Bayesians in the strict technical sense, that doesn’t necessarily mean that they are beyond education.

Dave Marsay

Which Mathematics of Uncertainty for Today’s Challenges?

This is a slight adaptation of a technical paper presented to an IMA conference 16 Nov. 2009, in the hope that it may be of broader interest. It argues that ‘Knightian uncertainty’, in Keynes’ mathematical form, provides a much more powerful, appropriate and safer approach to uncertainty than the more familiar ‘Bayesian (numeric) probability’.

Issues

Conventional Probability

The combination of inherent uncertainty and the rate of change challenges our capabilities.

There are gaps in the capability to handle both inherent uncertainty and rapid change.

Keynes et al suggest that there is more to uncertainty than random probability. We seem to be able to cope with high volumes of deterministic or probabilistic data, or low volumes of less certain data, but to have problems at the margins. This leads to the questions:

  • How complex is the contemporary world?
  • What is the perceptual problem?
  • What is contemporary uncertainty like?
  • How is uncertainty engaged with?

Probability arises from a definite context

Objective numeric probabilities can arise through random mechanisms, as in gambling. Subjective probabilities are often adequate for familiar situations where decisions are short-term, with only cumulative long-term impact, at worst. This is typical of the application of established science and engineering, where one has a kind of ‘information dominance’ and there are only variations within an established frame / context.

Contexts

Thus (numeric) probability is appropriate where:

  • Competition is coherent and takes place within a stable, utilitarian, framework.
  • Innovation does not challenge the over-arching status quo or ‘world view’
  • We only ever need to estimate the current parameters within a given model.
  • Uncertainty can be managed. Uncertainty about estimates can be represented by numbers (probability distributions), as if they were principally due to noise or other causes of variation.
  • Numeric probability is multiplied by value to give a utility, which is optimised.
  • Risk is only a number, negative utility.

Uncertainty is measurable (in one dimension) where one has so much stability that almost everything is measurable.

Probability Theory

Probability theories typically build on Bayes’ rule [Cox]:

P(H|E) = P(H).(P(E|H)/P(E)),

where P(E|H) denotes the ‘likelihood’, the probability of evidence, E, given a hypothesis, H. Thus the final probability is the prior probability times the ‘likelihood ratio’.

The key assumptions are that:

  • The selection of evidence for a given hypothesis, H, is indistinguishable from a random process with a proper numeric likelihood function, P( · |H).
  • The selection of the hypothesis that actually holds is indistinguishable from random selection from a set {Hi} with ‘priors’ P(Hi) – that can reasonably be estimated – such that
    • P(Hi ∩ Hj) = 0 for i ≠ j (non-intersection)
    • P(∪i Hi) = 1 (completeness).

It follows that P(E) = Σi P(E|Hi).P(Hi) is well-defined.

H may be composite, so that there are many proper sub-hypotheses, h ⇒ H, with different likelihoods, P(E|h). It is then common to use the Bayesian likelihood,

P(E|H) = ∫_{h ⇒ H} P(E|h).dP(h|H),

or

P(E|H) = P(E|h), for some representative hypothesis h.

In either case, hypotheses should be chosen to ensure that the expected likelihood is maximal for the true hypothesis.

Bayes noted a fundamental problem with such conventional probability: “[Even] where the course of nature has been the most constant … we can have no reason for thinking that there are no causes in nature which will ever interfere with the operations of the causes from which this constancy is derived.”

Uncertainty in Contemporary Life

Uncertainty arises from an indefinite context

Uncertainty may arise through human decision-making, adaptation or evolution, and may be significant for situations that are unfamiliar or for decisions that may have long-term  impact. This is typical of the development of science in new areas, and of competitions where unexpected innovation can transform aspects of contemporary life. More broadly still, it is typical of situations where we have a poor information position or which challenge our sense-making, and where we could be surprised, and so need to alter our framing of the situation. For example, where others can be adaptive or innovative and hence surprising.

Contexts

  • Competitions, cooperations, collaborations, confrontations and conflicts all nest and overlap messily, each with their own nature.
  • Perception is part of multiple co-adaptations.
  • Uncertainty can be shaped but not fully tamed. Only the most careful reasoning will do.
  • Uncertainty and utility are imprecise and conditional. One can only satisfice, not optimise.
  • Critical risks arise from the unanticipated.

Likelihoods, Evidence

In Plato’s Republic the elite make the rules, which form a fixed context for the plebs. But in contemporary life the rulers only rule with the consent of the ruled, and in so far as the rules of the game ‘cause’ (or at least influence) the behaviour of the players, the participants have reason to interfere with causes, and in many cases we expect it: it is how things get done. J.M. Keynes and I.J. Good (under A.M. Turing) developed techniques that may be used for such ‘haphazard’ situations, as well as random ones.

The distinguishing concepts are: the law of evidence, generalized weight of evidence (woe) and iterative fusion.

If a datum, E, has a distribution f(·) over a possibility space then, for any distribution g(·) over the same space,

∫log(f(E)).f(E) ≥ ∫log(g(E)).f(E).

I.e. the cross-entropy never exceeds the entropy (Gibbs’ inequality): the expected log-likelihood is greatest for the true distribution. For a hypothesis H in a context, C, such that the likelihood function P_H:C is well-defined, the weight of evidence (woe) due to E for H is defined to be:

W(E|H:C) ≡ log(P_H:C(E)).

Thus the ‘law of evidence’: the expected woe for the truth is never exceeded by that for any other hypothesis. (But the evidence may indicate that many or none of the hypotheses fit.) For composite hypotheses, the generalized woe is:

W(E|H:C) ≡ sup_{h ⇒ H} W(E|h:C).

This is defined even for a haphazard selection of h.

Let d_s(·) be a discounting factor for the source, s [Good]. If one has independent evidence, E_s, from different sources, s, then typically the fusion equation is:

W(E|H:C, d_s) ≤ Σ_s d_s(W(E_s|H:C)),

with equality for precise hypotheses. Together, generalized woe and fusion determine how woe is propagated through a network, where the woe for a hypothesis is dependent on an assumption which itself has evidence. The inequality forces iterative fusion, whereby one refines candidate hypotheses until one has adequate precision. If circumstantial evidence indicates that the particular situation is random, one could take full account of it, to obtain the same result as Bayes, or discount [Good].

In some cases it is convenient, as Keynes does, to use an interval likelihood or woe, taking the infimum and supremum of possible values. The only assumption is that the evidence can be described as a probabilistic outcome of a definite hypothesis, even if the overall situation is haphazard. In practice, the use of likelihoods is often combined with conjectural causal modelling, to try to get at a deep understanding of situations.
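
A sketch of these definitions (my own notation and toy numbers, not from the paper) may help fix ideas:

    import math

    def woe(likelihood, evidence):
        # W(E|h:C) = log(P_h:C(E)) for a precise (sub-)hypothesis h.
        return math.log(likelihood(evidence))

    def generalized_woe(sub_likelihoods, evidence):
        # W(E|H:C) = sup over sub-hypotheses h of H of W(E|h:C).
        return max(woe(lk, evidence) for lk in sub_likelihoods)

    def fused_woe_bound(woes, discounts):
        # Upper bound on the fused woe: the sum of discounted per-source
        # woes, with equality only for precise hypotheses.
        return sum(d * w for w, d in zip(woes, discounts))

    # Example: composite 'biased coin' (bias 0.6 or 0.9) versus the precise
    # 'fair coin', after observing a head.
    head = True
    fair = lambda e: 0.5
    biased = [lambda e: 0.6, lambda e: 0.9]
    print(generalized_woe(biased, head) - woe(fair, head))  # > 0: favours bias
    print(fused_woe_bound([0.5, 0.6], [1.0, 0.8]))  # fusing two discounted sources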

Examples

Crises

Typical crisis dynamics

Above is an informal attempt to illustrate typical crisis dynamics, such as those of the financial crisis of 2007/8. It is intended to capture the notion that conventional probability calculations may suffice for long periods, but over-dependence on such classical constructs can lead to shocks or crises. To avoid or mitigate these, more attention should be given to uncertainty [Turner].

An ambush

Uncertainty is not necessarily esoteric or long-term. It can be found wherever the assumptions of conventional probability theory do not hold, in particular in multilevel games. I would welcome more examples that are simple to describe, relatively common and where the significance of uncertainty is easy to show.

Deer need to make a morning run from A to B. Routes r, s, t are possible. A lion may seek to ambush them. Suppose that the indicators of potential ambushes are equal. Now in the last month route r has been used 25 times, s 5 times and t never, without incident. What is the ‘probability’ of an ambush for the 3 routes?

Let A=“The Lion deploys randomly each day with a fixed probability distribution, p”. Here we could use a Bayesian probability distribution over p, with some sensitivity analysis.

But this is not the only possibility. Alternatively, let B = “The Lion has reports about some of our runs, and will adapt his deployments.” We could use a Bayesian model for the Lion, but with less confidence. Or we could use likelihoods.

Route s is intermediate in characteristics between the other two. There is no reason to expect an ambush at s that doesn’t apply to one of the other two. On the other hand, if the ambush is responsive to the number of times a route is used then r is more likely than s or t, and if the ambush is on a fixed route, it is only likely to be on t. Hence s is the least likely to have an ambush.

Consistently selecting routes using a fixed probability distribution is not as effective as a muddling strategy [Binmore] which varies the distribution, supporting learning and avoiding an exploitable equilibrium.
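
A toy simulation (my own construction, with invented parameters) contrasts a fixed route distribution with a muddling strategy against an adaptive Lion, as in model B:

    import random

    def lion(history):
        # Model B: the Lion ambushes the most-used recent route.
        recent = history[-30:]
        return max(set(recent), key=recent.count) if recent else "r"

    def ambush_rate(choose, days=1000, seed=0):
        rng, history, hits = random.Random(seed), [], 0
        for _ in range(days):
            trap = lion(history)
            route = choose(rng)
            hits += (route == trap)
            history.append(route)
        return hits / days

    fixed = lambda rng: rng.choices("rst", weights=[25, 5, 0])[0]  # as in the story
    muddling = lambda rng: rng.choice("rst")  # varied, so nothing to exploit
    print(ambush_rate(fixed), ambush_rate(muddling))
    # The fixed strategy is ambushed roughly 25/30 of the time; the muddler
    # only about a third of the time.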

Concluding Remarks

Conventional (numeric) probability, utility and rationality all extrapolate based on a presumption of stability. If two or more parties are co-adapting or co-evolving any equilibria tend to be punctuated, and so a more general approach to uncertainty, information, communication, value and rationality is indicated, as identified by Keynes, with implications for ‘risk’.

Dave Marsay, Ph.D., C.Math FIMA, Fellow ISRS

References:

Bayes, T. An Essay towards solving a Problem in the Doctrine of Chances (1763), Philosophical Transactions of the Royal Society of London 53, 370–418. Regarded by most English-speakers as ‘the source’.

Binmore, K, Rational Decisions (2009), Princeton U Press. Rationality for ‘muddles’, citing Keynes and Turing. Also http://else.econ.ucl.ac.uk/papers/uploaded/266.pdf .

Cox, R.T. The Algebra of Probable Inference (1961) Johns Hopkins University Press, Baltimore, MD. The main justification for the ‘Bayesian’ approach, based on a belief function for sets whose results are comparable. Keynes et al deny these assumptions. Also Jaynes, E.T. Probability Theory: The Logic of Science (1995) http://bayes.wustl.edu/etj/prob/book.pdf .

Good, I.J. Probability and Weighting of Evidence (1950), Griffin, London. Describes the basic techniques developed and used at Bletchley Park. Also Explicativity: A Mathematical Theory of Explanation with Statistical Applications (1977) Proc. R. Soc. Lond. A 354, 303-330, etc. Covers discounting, particularly of priors. More details have continued to be released up until 2006.

Hodges, A. Alan Turing (1983) Hutchinson, London. Describes the development and use of ‘weights of evidence’, “which constituted his major conceptual advance at Bletchley”.

Keynes, J.M. Treatise on Probability (1920), MacMillan, London. Fellowship essay, under Whitehead. Seminal work, outlines the pros and cons of the numeric approach to uncertainty, and develops alternatives, including interval probabilities and the notions of likelihood and weights of evidence, but not a ‘definite method’ for coping with uncertainty.

Smuts, J.C. The Scientific World-Picture of Today, British Assoc. for the Advancement of Science, Report of the Centenary Meeting. London: Office of the BAAS. 1931. (The Presidential Address.) A view from an influential guerrilla leader, General, War Cabinet Minister and supporter of ‘modern’ science, who supported Keynes and applied his ideas widely.

Turner, The Turner Review: A regulatory response to the global banking crisis (2009). Notes the consequences of simply extrapolating, ignoring non-probabilistic (‘Knightian’) uncertainty.

Whitehead, A.N. Process and Reality (1929: 1979 corrected edition) Eds. D.R. Griffin and D.W. Sherburne, Free Press. Whitehead developed the logical alternative to the classical view of uniform unconditional causality.

All watched over by machines of loving grace

What?

An Adam Curtis documentary shown on the BBC May/June 2011.

Comment

The trailers (above link) give a good feel for the series, which is entertaining, with some good video, music, pseudo-history and comment. The details shouldn’t be taken too seriously, but it is thought-provoking, on some topics that need thought.

Thoughts

The series ends:

The idea that human beings are helpless chunks of hardware controlled by software programs written in their genetic codes [remains powerfully influential in our society]. The question is, have we embraced that idea because it is a comfort in a world where everything that we do, either good or bad, seems to have terrible unforeseen consequences? …

We have embraced a fatalistic philosophy of us as helpless computing machines, to both excuse and explain our political failure to change the world.

This thesis has three parts:

  1. that everything we do has terrible unforeseen consequences
  2. that we are fatalistic in the face of such uncertainty
  3. that we have adopted a machine metaphor as ‘cover’ for our fatalism.

Uncertainty

The program demonizes unforeseen consequences. Certainly we should be troubled by them, and their implications for rationalism and pragmatism. But if there were no uncertainties then we could be rational and ‘should’ behave like machines. Reasoning in a complex, dynamic world calls for more than narrowly rational machine-like calculation, and gives purpose to being human.

Fatalism

It seems reasonable to suppose that most of the time most people can do little to influence the factors that shape their lives, but I think this is true even when people can perfectly well see the likely consequences of what is being done in their name. What is at issue here is not so much ordinary fatalism, which seems justified, as the charge that those who are making big decisions on our behalf are also fatalistic.

In democracies, no-one makes a free decision anymore. Everyone is held accountable and expected to abide by generally accepted norms and procedures. In principle, whenever one has a novel situation the extant rules should be at least briefly reviewed, lest they lead to ‘unforeseen consequences’. A fatalist would presumably not do this. Perhaps the failure, then, is not to challenge assumptions or ‘kick against’ constraints.

The machine metaphor

Computers and mathematicians played a big role in the documentary. Humans are seen as being programmed by a genetic code that has evolved to self-replicate. But evolution leads to ‘punctuated equilibrium’ and epochs. Reasoning in epochs is not like reasoning in stable situations, the preserve of rule-driven machines. The mathematics of Whitehead and Turing supports the machine-metaphor, but only within an epoch. How would a genetically programmed person fare if they moved to a different culture or had to cope with new technologies radically transforming their daily lives? One might suppose that we are encoded for ‘general ways of living and learning’, but then we would seem to require a grasp of uncertainty beyond that which we currently associate with machines.

Notes

  • The program had a discussion on altruism and other traits in which behaviours might disbenefit the individual but advantage those who are genetically similar over others. This would seem to justify much terrorism and even suicide-bombing. The machine metaphor would seem undesirable for reasons other than its tendency to fatalism.
  • An alternative to absolute fatalism would be fatalism about long-term consequences. This would lead to a short-term-ism that might provide a better explanation for real-world events.
  • The financial crash of 2007/8 was preceded by a kind of fatalism, in that it was supposed that free markets could never crash. This was associated with machine trading, but neither a belief in the machine metaphor nor a fear of unintended consequences seems to have been at the root of the problem. A belief in the potency of markets was perhaps reasonable (in the short term) once the high-tech bubble had burst. The problem seems to be that people got hooked on the bubble drug, and went into denial.
  • Mathematicians came in for some implicit criticism in the program. But the only subject of mathematics is mathematics. In applying mathematics to real systems the error is surely in substituting myth for science. If some people mis-use mathematics, the mathematics is no more at fault than their pencils. (Although maybe mathematicians ought to be more vigorous in uncovering abuse, rather than just doing mathematics.)

Conclusion

Entertaining, thought-provoking.

Dave Marsay

Out of Control

Kevin Kelly’s ‘Out of Control’ (1994), sub-titled “The New Biology of Machines, Social Systems, and the Economic World”, gives ‘the nine laws of god’, which it commends for all future systems, including organisations and economies. They didn’t work out too well in 2008.

The claims

The book is introduced (above) by:

“Out of Control is a summary of what we know about self-sustaining systems, both living ones such as a tropical wetland, or an artificial one, such as a computer simulation of our planet. The last chapter of the book, “The Nine Laws of God,” is a distillation of the nine common principles that all life-like systems share. The major themes of the book are:

  • As we make our machines and institutions more complex, we have to make them more biological in order to manage them.
  • The most potent force in technology will be artificial evolution. We are already evolving software and drugs … .
  • Organic life is the ultimate technology, and all technology will improve towards biology.
  • The main thing computers are good for is creating little worlds so that we can try out the Great Questions. …
  • As we shape technology, it shapes us. We are connecting everything to everything, and so our entire culture is migrating to a “network culture” and a new network economics.

In order to harvest the power of organic machines, we have to instill in them guidelines and self-governance, and relinquish some of our total control.”

Holism

Much of the book is Holistic in nature. The above could be read as applying the ideas of Smuts’ Holism to newer technologies. (Chapter 19 does make explicit reference to J.C. Smuts in connection with internal selection, but doesn’t reference his work.)

Jan Smuts based his work on wide experience, including improving arms production in the Great War, and went on to found ecology and help modernise the sciences, thus leading to the views that Kelly picks up on. Superficially, Kelly’s book is greatly concerned with technology that post-dates Smuts, but his arguments claim to be quite general, so an apostle of Smuts would expect Kelly to be consistent with him, while applying the ideas to the new realm. But where does Kelly depart from Smuts, and what new insights does he bring? Below we pick out Kelly’s key texts and compare them.

The nine Laws of God

The laws with my italics are:

Distribute being

When the sum of the parts can add up to more than the parts, then that extra being … is distributed among the parts. Whenever we find something from nothing, we find it arising from a field of many interacting smaller pieces. All the mysteries we find most interesting — life, intelligence, evolution — are found in the soil of large distributed systems.

The first phrase is clearly Holistic, and perhaps consistent with Smuts’ view that the ‘extra’ arises from the ‘field of interactions’. However, in many current technologies the ‘pieces’ are very hard-edged, with limited ‘mutual interaction’.
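
As an illustration of the first phrase (my own sketch, not anything in Kelly): in Conway’s Game of Life a coherent ‘glider’ — something arising from nothing — emerges from a field of many interacting small pieces obeying purely local rules.

    from collections import Counter

    def step(live):
        """One Game of Life step; live is a set of (x, y) cells."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    # A 'glider': after four steps the same shape reappears, displaced.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))   # the glider has moved by (1, 1): emergent motion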

Control from the bottom up

When everything is connected to everything in a distributed network … overall governance must arise from the most humble interdependent acts done locally in parallel, and not from a central command. …

The phrases ‘bottom up’ and ‘humble interdependent acts’ seem inconsistent with Smuts’ own behaviour, for example in taking the ‘go’ decision for D-day. Generally, Kelly seems to ignore or deny the need for different operational levels, as in the military’s tactical and strategic.
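
A toy model of the contrast (my sketch, with arbitrary parameters): agents on a ring who repeatedly adopt the local majority view produce large-scale order from ‘humble interdependent acts done locally’, with no central command — but note that nothing in it corresponds to a strategic level.

    import random

    random.seed(0)
    N = 40
    state = [random.choice([0, 1]) for _ in range(N)]   # random initial views

    for _ in range(20):                                 # repeated local acts
        state = [1 if state[(i - 1) % N] + state[i] + state[(i + 1) % N] >= 2
                 else 0
                 for i in range(N)]

    print(''.join(map(str, state)))   # large uniform blocks: order, no centre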

Cultivate increasing returns

Each time you use an idea, a language, or a skill you strengthen it, reinforce it, and make it more likely to be used again. … Success breeds success. In the Gospels, this principle of social dynamics is known as “To those who have, more will be given.” Anything which alters its environment to increase production of itself is playing the game … And all large, sustaining systems play the game … in economics, biology, computer science, and human psychology. …

Smuts seems to have been the first to recognize that one could inherit a tendency to have more of something (such as height) than one’s parents, so that a successful tendency (such as being tall) would be reinforced. The difference between Kelly and Smuts is that Kelly has a general rule whereas Smuts has it as a product of evolution for each attribute. Kelly’s version also needs to be balanced against not optimising (below).
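
A standard toy model of ‘success breeds success’ (my choice of illustration, not Kelly’s or Smuts’) is the Polya urn, in which every use of an idea adds another copy of it.

    import random

    random.seed(2)
    urn = ['A', 'B']                  # two competing ideas, initially equal
    for _ in range(1000):
        pick = random.choice(urn)     # the more an idea has been used...
        urn.append(pick)              # ...the more likely it is used again
    print(urn.count('A') / len(urn))  # settles to a random limit, not 1/2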

Grow by chunking

The only way to make a complex system that works is to begin with a simple system that works. Attempts to instantly install highly complex organization — such as intelligence or a market economy — without growing it, inevitably lead to failure. … Time is needed to let each part test itself against all the others. Complexity is created, then, by assembling it incrementally from simple modules that can operate independently.

Kelly is uncomfortable with the term ‘complex’. In Smuts’ usage a military platoon attack is often ‘complex’, whereas a superior headquarters could be simple. Systems with humans in them naturally tend to be complex (as Kelly describes) and are only made simple by prescriptive rules and procedures. In many settings such process-driven systems would (as Kelly describes them) be quite fragile, and unable to operate independently in a demanding environment (e.g., one with a thinking adversary). Thus I suppose that Kelly is advocating starting with small but adaptable systems and growing them. This is desirable, but often Smuts did not have that luxury, and had to re-engineer systems, such as production or fighting systems, ‘on the fly’.

Maximize the fringes

… A uniform entity must adapt to the world by occasional earth-shattering revolutions, one of which is sure to kill it. A diverse heterogeneous entity, on the other hand, can adapt to the world in a thousand daily mini revolutions, staying in a state of permanent, but never fatal, churning. Diversity favors remote borders, the outskirts, hidden corners, moments of chaos, and isolated clusters. In economic, ecological, evolutionary, and institutional models, a healthy fringe speeds adaptation, increases resilience, and is almost always the source of innovations.

A large uniform entity cannot adapt and maintain its uniformity, and so is unsustainable in the face of a changing situation or environment. If diversity is allowed then parts can adapt independently, and generally favourable adaptations spread. Moreover, the more diverse an entity is, the more it can fill a variety of niches, and the more likely it is to survive some shock. Here Kelly, Smuts and Darwin essentially agree.
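
A crude sketch of the argument, with invented numbers: when the environment shifts, a uniform population may be wiped out while a diverse one retains a fringe of survivors near the new optimum.

    import random

    random.seed(3)
    uniform = [0.5] * 100                                 # everyone identical
    diverse = [random.uniform(0, 1) for _ in range(100)]  # a healthy fringe

    new_optimum = 0.9                                     # the world changes

    def survives(x):
        return abs(x - new_optimum) < 0.1                 # arbitrary tolerance

    print(sum(map(survives, uniform)))  # 0: the uniform entity is wiped out
    print(sum(map(survives, diverse)))  # a fringe survives and can repopulate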

Honor your errors

A trick will only work for a while, until everyone else is doing it. To advance from the ordinary requires a new game, or a new territory. But the process of going outside the conventional method, game, or territory is indistinguishable from error. Even the most brilliant act of human genius, in the final analysis, is an act of trial and error. … Error, whether random or deliberate, must become an integral part of any process of creation. Evolution can be thought of as systematic error management.

Here the problem of competition is addressed. Kelly supposes that the only viable strategy in the face of complexity is blind trial and error, ‘the no strategy strategy’. But the main thing is to be able to identify actual errors. Smuts might also add that one might learn from near-misses and other potential errors.
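
As a minimal sketch of ‘evolution as systematic error management’ (mine; the fitness function and step size are arbitrary): random trials, with actual errors identified and discarded, steadily improve a guess.

    import random

    random.seed(4)

    def fitness(x):
        return -(x - 3.0) ** 2          # peak at x = 3, unknown to the searcher

    x = 0.0
    for _ in range(200):
        trial = x + random.gauss(0, 0.5)  # a deliberate 'error' (mutation)
        if fitness(trial) > fitness(x):   # identify real errors; keep what works
            x = trial
    print(round(x, 2))                    # ends close to 3.0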

Pursue no optima; have multiple goals

 …  a large system can only survive by “satisficing” (making “good enough”) a multitude of functions. For instance, an adaptive system must trade off between exploiting a known path of success (optimizing a current strategy), or diverting resources to exploring new paths (thereby wasting energy trying less efficient methods). …  forget elegance; if it works, it’s beautiful.

Here Kelly confuses ‘a known path of success’ with ‘a current strategy’, which may explain why he is dismissive of strategy. Smuts would say that getting an adequate balance between the exploitation of manifest success and the exploration of alternatives would be a key feature of any strategy. Sometimes it pays not to go after near-term returns, perhaps even accepting a loss.
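
The balance Smuts would call strategic can be made concrete as an ‘epsilon-greedy’ bandit — my sketch, with invented payoffs: mostly exploit the best-known path, but divert a fixed fraction of effort to exploring alternatives.

    import random

    random.seed(5)
    payoff = {'known': 1.0, 'novel': 1.5}   # invented: the novel path is better
    totals = {'known': 0.0, 'novel': 0.0}
    counts = {'known': 0, 'novel': 0}

    def estimate(arm):
        return totals[arm] / counts[arm] if counts[arm] else 0.0

    for _ in range(1000):
        if random.random() < 0.1:                 # explore ('waste' energy)
            arm = random.choice(['known', 'novel'])
        else:                                     # exploit the best estimate
            arm = max(payoff, key=estimate)
        reward = payoff[arm] + random.gauss(0, 0.5)
        counts[arm] += 1
        totals[arm] += reward

    print(max(payoff, key=estimate))              # exploration finds 'novel'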

Seek persistent disequilibrium

Neither constancy nor relentless change will support a creation. A good creation … is persistent disequilibrium — a continuous state of surfing forever on the edge between never stopping but never falling. Homing in on that liquid threshold is the still mysterious holy grail of creation and the quest of all amateur gods.

This is a key insight. The implication is that even the nine laws do not guarantee success. Kelly does not say how the disequilibrium is generated. In many systems it is only generated as part of an eco-system, so that reducing the challenge to a system can lead to its virtual death. A key part of growth (above) is to grow the ability to maintain a healthy disequilibrium despite increasingly novel challenges.
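
The logistic map provides a standard illustration (my choice, not Kelly’s) of the three regimes — constancy, relentless change, and the persistent disequilibrium between them.

    def orbit(r, x=0.2, skip=500, keep=6):
        """Iterate the logistic map x -> r*x*(1-x), reporting late behaviour."""
        for _ in range(skip):
            x = r * x * (1 - x)
        out = []
        for _ in range(keep):
            x = r * x * (1 - x)
            out.append(round(x, 3))
        return out

    print(orbit(2.8))  # constancy: a single fixed point
    print(orbit(3.5))  # in between: oscillating forever, never settling
    print(orbit(3.9))  # relentless change: chaos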

Change changes itself

… When extremely large systems are built up out of complicated systems, then each system begins to influence and ultimately change the organizations of other systems. That is, if the rules of the game are composed from the bottom up, then it is likely that interacting forces at the bottom level will alter the rules of the game as it progresses.  Over time, the rules for change get changed themselves. …

It seems that the changes to the rules are blindly adaptive. This may be because, unlike Smuts, Kelly does not believe in strategy, or in the power of theory to enlighten.

Kelly’s discussion

These nine principles underpin the awesome workings of prairies, flamingoes, cedar forests, eyeballs, natural selection in geological time, and the unfolding of a baby elephant from a tiny seed of elephant sperm and egg.

These same principles of bio-logic are now being implanted in computer chips, electronic communication networks, robot modules, pharmaceutical searches, software design, and corporate management, in order that these artificial systems may overcome their own complexity.

When the Technos is enlivened by Bios we get artifacts that can adapt, learn, and evolve. …

The intensely biological nature of the coming culture derives from five influences:

    • Despite the increasing technization of our world, organic life — both wild and domesticated — will continue to be the prime infrastructure of human experience on the global scale.
    • Machines will become more biological in character.
    • Technological networks will make human culture even more ecological and evolutionary.
    • Engineered biology and biotechnology will eclipse the importance of mechanical technology.
    • Biological ways will be revered as ideal ways.

 …

As complex as things are today, everything will be more complex tomorrow. The scientists and projects reported here have been concerned with harnessing the laws of design so that order can emerge from chaos, so that organized complexity can be kept from unraveling into unorganized complications, and so that something can be made from nothing.

My discussion

Considering local action only, Kelly’s arguments often come down to the supposed impossibility of effective strategy in the face of complexity, leading to the recommendation of the universal ‘no strategy strategy’: continually adapt to the actual situation, identifying and setting appropriate goals and sub-goals. Superficially this seems quite restrictive, but we are free as to how we interpret events, learn, set goals, monitor progress and react. There seems to be nothing to prevent us from following a more substantial strategy while describing it in Kelly’s terms.

The ‘bottom up’ principle seems to be based on the difficulty of central control. But Kelly envisages the use of markets, which can be seen as a ‘no control’ control. That is, we are heavily influenced by markets, but they have no intention. An alternative would be to allow a range of mechanisms, ideally also without intention: whatever is supported by an appropriate majority (2/3?).

For economics, Kelly’s laws are suggestive of Hayek, whereas Smuts’ approach was shared with his colleague, Keynes. 

Conclusion

What is remarkable about Kelly’s laws is the impotence of individuals in the face of ‘the system’. It would seem better to allow ‘central’ (or intermediate) mechanisms to be ‘bottom up’ in the sense that they are supported by an informed ‘bottom’.

David Marsay