Traffic bunching

In heavy traffic, such as on motorways in rush-hour, there is often oscillation in speed and there can even be mysterious ‘emergent’ halts. The use of variable speed limits can result in everyone getting along a given stretch of road more quickly.

Soros (worth reading) has written an article that suggests that this is all to do with the humanity and ‘thinking’ of the drivers, and that something similar is the case for economic and financial booms and busts. This might seem to indicate that ‘mathematical models’ are part of the problem, not the solution. So I suggest the following thought experiment:

Suppose a huge number of  identical driverless cars with deterministic control functions all try to go along the same road, seeking to optimise performance in terms of ‘progress’ and fuel economy. Will they necessarily succeed, or might there be some ‘tragedy of the commons’ that can only be resolved by some overall regulation? What are the critical factors? Is the nature of the ‘brains’ one of them?

Are these problems the preserve of psychologists, or does mathematics have anything useful to say?
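
One way in which mathematics does have something to say: below is a minimal sketch (my own illustration, with invented parameter values) of the optimal-velocity car-following model of Bando et al., in which identical, deterministic cars on a ring road can turn a tiny perturbation into a stop-and-go wave, so ‘emergent’ halts need no human quirks at all.

```python
import numpy as np

# Minimal sketch of the optimal-velocity car-following model (Bando et al.):
# identical, deterministic cars on a ring road. All parameters are illustrative.
N = 50              # number of cars
L = 25.0 * N        # ring-road length (m), so the equilibrium headway is 25 m
a = 1.0             # driver/controller sensitivity (1/s)
dt, steps = 0.1, 5000

def v_opt(gap):
    """Desired speed (m/s) as a function of headway (tanh form from Bando et al.)."""
    return 16.8 * (np.tanh(0.086 * (gap - 25.0)) + 0.913)

x = np.linspace(0.0, L, N, endpoint=False)   # equally spaced cars
v = np.full(N, v_opt(L / N))                 # all start at the equilibrium speed
x[0] += 0.5                                  # one car is nudged by half a metre

for _ in range(steps):
    gap = (np.roll(x, -1) - x) % L           # headway to the car in front
    v += a * (v_opt(gap) - v) * dt           # relax towards the 'optimal' speed
    x = (x + v * dt) % L

# With uniform flow the speed spread would stay near zero; here the perturbation
# grows into a stop-and-go wave, because v_opt'(25) exceeds a/2 (the linear
# instability condition for this model).
print(f"speed spread after {steps * dt:.0f}s: {v.max() - v.min():.1f} m/s")
```

Whether a fleet of such cars suffers a ‘tragedy of the commons’ turns on exactly the factors asked about above: the shape of the control function, its sensitivity and the traffic density, all of which an overall regulator could in principle adjust.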

Dave Marsay

Hercock’s Cohesion

Robert G. Hercock Cohesion: The Making of Society 2009.

Having had Robert critique some of my work, I could hardly not comment on this think-piece. It draws on modern complexity theory and a broad view of relevant historical examples and current trends to create a credible narrative. For me, his key conclusions are:

  1. “[G]iven a sufficient degree of communication … the cooperative assembly of [a cohesive society] is inevitable.”
  2. To be cohesive, a society should be “global politically federated, yet culturally diverse”.

The nature of communication envisaged seems to be indicated by:

 “From smoke signals, and the electric telegraph, through to fibre optics, and the Internet … the manifest boom in all forms of communication is bringing immense capabilities to form new social collectives and positive cultural developments.”

I ‘get’ that increasing communication will bring immense capabilities to support the cooperative assembly of a cohesive global society, but am not convinced that the effective exploitation of this capability is inevitable. In chapter 6 (‘Bridges’) Robert says:

 “The truth is we now need a new shared set of beliefs. … Unfortunately, no one appears to have the faintest idea what such a common set of beliefs should look like, or where it might arise from, or who has responsibility to make it happen, or how, etc. Basically this is the challenge of the 21st century; we stand or fall on this battle for a common cultural nexus.”  

 This is closer to my own thinking.

People have different understandings of terms like ‘federated’. My preference is for subsidiarity: the idea that one has the minimum possible governance, with reliance on the minimum possible shared beliefs and common cultures. In complex situations these minimum levels are not obvious or static, so I would see an effective federation as engaging tentatively at a number of ‘levels’, ‘veering and hauling’ between them, and with strong arrangements for ‘horizon scanning’ and debate with the maximum possible diversity of views. Thus there would be not only cultural diversity but ‘viewpoint diversity within federated debate’. What is needed seems somewhat like Holism and glocalization.

Thinking of the EU, diversity of monetary policy might make the EU as an institution more cohesive while making its members’ economies less cohesive. To put it another way, attempts to enforce cohesion at the monetary level can threaten cohesion at the political level. So it is not clear to me that one can think of a society as simply ‘being cohesive’. Rather it should be cohesive in the sense appropriate to its current situation. Cohesion should be ‘adaptive’. Leadership and vision seem to be required to achieve this: it is not automatic.

In the mid-80s many of those involved in the development of communications technologies thought that they would promote world peace, sometimes citing the kind of works that Robert does. I had and have two reservations. Firstly, the quality of communications matters. Thus [it was thought] one probably needed digital video, mobile phones and the Internet, all integrated in a way that was easy to use. [The Apple Macintosh made this credible.] Thus, if there was a clash between Soviet secret police and Jewish protestors [common at the time], the whole world could take an informed view, rather than relying on the media. [This was before the development of video faking capabilities.] Secondly, while this would destabilize autocratic regimes, it was another issue as to what would happen next. It was generally felt that the only possible ‘properly’ stable states were democratic, but views differed on whether such states would necessarily stabilize.

Subsequent experience, such as the Arab spring, supports the view that YouTube and Facebook undermine oppressive regimes. But I remain unconvinced that ‘the cooperative assembly of [a cohesive society] is inevitable’ in Africa, the Middle East, Russia or South America, or that more communications would make it so. It certainly seems that if the process is inevitable, it can be much too slow.

My own thinking in the 80s was informed by the uncertainty and complexity theory of Keynes, Whitehead, Turing and Smuts, which predates that which Robert cites, and which informed the development of the United Nations as a part of ‘the cooperative assembly of a cohesive global society’. Robert seems to be arguing that according to modern theory such efforts were not necessary, but even so they may have been beneficial if all they did was speed the process up by a few generations. Moreover, the EU example seems to support my view that these older theories are usefully more advanced than their contemporary counterparts.

The financial crash of 2008 occurred part way through the writing of the book. Like any history, explanations differ, and Robert gives a credible account in terms of modern complexity theory. But logic teaches us to be cautious about such post-hoc explanations. It seems to me that Keynes’ theory explains it adequately, and having been developed before the event should be given more credence.

Robert seems to regard the global crash of 2008 as a result of a loss of cohesion:

“When economies, states and societies lose their cohesion, people suffer; to be precise a lot of people end up paying the cost. In the recession of 2008/09 … ”

But Keynes shows how it is cohesion (‘sticking together’) that causes global crashes. Firstly, in a non-globalized economy a crash in one part can be compensated for by the stability of another part, a bit like China saving the situation, but more so. Secondly, (to quote Patton) ‘if everyone is thinking alike then no-one is thinking’. Once group-think is established ‘expectations’ become ossified, and the market is disconnected from reality.

Robert’s notion of cohesion is “global politically federated, yet culturally diverse”. One can see how in 2008 and currently in the EU (and North Africa and elsewhere) de jure and de facto regulatory structures change, consistent with Robert’s view. But according to Keynes this is a response to an actual or potential crisis, rather than a causative factor. One can have a chain of crises in which political change leads to emergent social or economic problems, leading to political change and so on. Robert seems to suppose that this must settle down into some stable federation. If so then perhaps only the core principles will be stable, and even these might need to be continually reinterpreted and refreshed, much as I have tried to do here.

On a more conceptual note, Robert qualifies the conclusion with “The evidence from all of the fields considered in this text suggests …”. But the conclusion could only be formally sustained by an argument employing induction. Now, if improved communication is really going to change the world so much then it will undermine the basis of any induction. (In Whitehead’s terms, induction only works within an epoch, but here the epoch itself is changed.) The best one could say would be that on current trends a move towards greater cohesion appears inevitable. This is a more fundamental problem than only considering evidence from a limited range of fields. More evidence from more fields could not overcome this problem.

Dave Marsay

The End of a Physics Worldview (Kauffman)

Thought provoking, as usual. This video goes beyond his previous work, but in the same direction. His point is that it is a mistake to think of ecologies and economies as if they resembled the typical world of Physics. A previous written version is at npr, followed by a later development.

He builds on Kant’s notion of wholes, noting (as Kant did before him) that the existence of such wholes is inconsistent with classical notions of causality.  He ties this in to biological examples. This complements Prigogine, who did a similar job for modern Physics.

Kauffman is critical of mathematics and ‘mathematization’, but seems unaware of the mathematics of Keynes and Whitehead. Kauffman’s view seems the same as that due to Bergson and Smuts, which in the late 1920s defined ‘modern science’. To me the problem behind the financial crash lies not in science or mathematics or even in economics, but in the brute fact that politicians and financiers were wedded to a pre-modern (pre-Kantian) view of economics and mathematics. Kauffman’s work may help enlighten them on the need, but not on the potential role for modern mathematics.

Kauffman notes that at any one time there are ‘adjacent possibles’ and that in the near future they may come to pass, and that – conceptually – one could associate a probability distribution with these possibilities. But as new possibilities come to pass new adjacent possibilities arise. Kauffman supposes that it is not possible to know what these are, and hence one cannot have a probability distribution, much of information theory makes no sense, and one cannot reason effectively. The challenge, then, is to discover how we do, in fact, reason.

Kauffman does not distinguish between short and long run. If we do so then we see that if we know the adjacent possible then our conventional reasoning is appropriate in the short-term, and Kauffman’s concerns are really about the long-term: beyond the point at which we can see the potential possibles that may arise. To this extent, at least, Kauffman’s post-modern vision seems little different from the modern vision of the 1920s and 30s, before it was trivialized.

Dave Marsay

GLS Shackle, imagined and deemed possible?

Background

This is a personal view of GLS Shackle’s uncertainty. Having previously used Keynes’ approach to identify possible failure modes in systems, including financial systems (in the run-up to the collapse of the tech bubble), I became concerned in 2007 that there was another bubble with a potential for a Keynes-type 25% drop in equities, constituting a ‘crisis’. In discussions with government advisers I first came across Shackle. The differences between him and Keynes were emphasised. I tried to make sense of Shackle, so that I could form my own view, but failed. Unfinished business.

Since the crash of 2008 there have been various attempts to compare and contrast Shackle and Keynes, and others. Here I imagine a solution to the conundrum which I deem possible: unless you know different?

Imagined Shackle

Technically, Shackle seems to focus on the wickeder aspects of uncertainty, to seek to explain them and their significance to economists and politicians, and to advise on how to deal with them. Keynes provides a more academic view, covering all kinds of uncertainty, contrasting tame probabilities with wicked uncertainties, helping us to understand both in a language that is better placed to survive the passage of time and the interpretation by a wider – if more technically aware – audience.

Politically, Shackle lacks the baggage of Lord Keynes, whose image has been tarnished by the misuse of the term ‘Keynesian’. (Like Keynes, I am not a Keynesian.)

Conventional probability theory would make sense if the world were a complicated randomizing machine, so that one has ‘the law of large numbers’: that in the long run particular events will tend to occur with some characteristic, stable, frequency. Thus in principle it would be possible to learn the frequency of events, such that reasonably rare events would be about as rare as we expect them to be. Taleb has pointed out that we can never learn the frequencies of very rare events, and that this is a technical flaw in many accounts of probability theory, which fail to point this out. But Keynes and Shackle have more radical concerns.
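
A quick numerical illustration of Taleb’s point (my own sketch; the ‘true’ probability and the sample sizes are arbitrary assumptions):

```python
import numpy as np

# Estimating the frequency of a rare event from finite samples: the estimates
# vary wildly at sample sizes that would pin down a common event precisely.
rng = np.random.default_rng(0)
p_true = 1e-4                       # assumed 'true' long-run frequency
for n in (1_000, 10_000, 100_000):
    estimates = rng.binomial(n, p_true, size=20) / n   # 20 independent studies
    print(f"n={n:>6}: estimated frequency ranges from {estimates.min():.5f} "
          f"to {estimates.max():.5f} (true value {p_true})")
```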

If we think of the world as a complicated randomizing machine, then, as in Whitehead, it is one which can suddenly change. Shackle’s approach, in so far as I understand it, is to be open to the possibility of a change, recognize when the evidence of a change is overwhelming, and to react to it. This is an important difference from the conventional approach, in which all inference is done on the assumption that the machine is known. Any evidence that it may have changed is simply normalised away. Shackle’s approach is clearly superior in all those situations where substantive change can occur.

Shackle terms decisions about a possibly changing world ‘critical’. He makes the point that the application of a predetermined strategy or habit is not a decision proper: all ‘real’ decisions are critical in that they make a lasting difference to the situation. Thus one has strategies for situations that one expects to repeat, and makes decisions about situations that one is trying to ‘move on’. This seems a useful distinction.

Shackle’s approach to critical decisions is to imagine potential changes to new behaviours, to assess them and then to choose between those deemed possible. This is based on preference, not expected utility, because ‘probability’ does not make sense. He gives an example of a French guard at the time of the revolution who can either give access to a key prisoner or not. He expects to lose his life if he makes the wrong decision, depending on whether the revolution succeeds or not. A conventional approach would be based on the realisation that most attempted revolutions fail. But his choice may have a big impact on whether or not the revolution succeeds. So Shackle advocates imagining the two possible outcomes and their impact on him, and then making a choice. This seems reasonable. The situation is one of choice, not probability.
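
As a crude caricature (mine, not Shackle’s own formalism) of the contrast between expected utility and choosing among imagined, deemed-possible outcomes, consider the guard’s problem with invented payoffs:

```python
# Toy payoffs (invented): 1 = the guard survives, 0 = he dies.
payoff = {
    ("give_access", "revolution_succeeds"): 1,
    ("give_access", "revolution_fails"):    0,
    ("refuse",      "revolution_succeeds"): 0,
    ("refuse",      "revolution_fails"):    1,
}
actions = ["give_access", "refuse"]
states = ["revolution_succeeds", "revolution_fails"]

# Conventional view: apply a base rate ("most revolutions fail") and maximise
# expected payoff -- ignoring that the guard's own act may shift the outcome.
p_success = 0.2                     # assumed base rate
expected = {a: sum(payoff[a, s] * (p_success if s == "revolution_succeeds"
                                   else 1 - p_success) for s in states)
            for a in actions}
print("expected-utility choice:", max(expected, key=expected.get))

# Shackle-style view: both states are deemed possible whatever he does, so he
# compares the imagined worst and best outcomes of each action and chooses by
# preference, not by probability weighting.
focus = {a: (min(payoff[a, s] for s in states),
             max(payoff[a, s] for s in states)) for a in actions}
print("(worst, best) imagined outcomes per action:", focus)
```

In this symmetric toy the imagined outcomes tie, which is roughly the point: calculation does not settle the matter, and the guard is thrown back on imagination and preference.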

Keynes can support Shackle’s reasoning. But he also addresses other types of wicked uncertainty. Firstly, it is not always the case that a change is ‘out of the blue’. One may not be able to predict when the change will come, but it is sometimes possible to see that there is an economic bubble, and the French guard probably had some indications that he was living in extraordinary times. Thus Keynes goes beyond Shackle’s pragmatism.

In reality, there is no strict dualism between probabilistic behaviour and chaos, between probability and Shackle’s complete ignorance. There are regions in-between that Keynes helps explore. For example, the French guard is not faced with a strictly probabilistic situation, but could usefully think in terms of probabilities conditioned on his actions. In economics, one might usefully think of outcomes as conditioned on the survival of conventions and institutions (October 2011).

I also have a clearer view of why consideration of Shackle led to the rise of behavioural economics: if one is ‘being open’ and ‘imagining’ then psychology is clearly important. On the other hand, much of behavioural economics seems to use conventional rationality as some form of ‘gold standard’ for reasoning under uncertainty, and to consider departures from it as a ‘bias’. But then I don’t understand that either!

Addendum

(Feb 2012, after Blue’s comments.)

I have often noticed that decision-takers and their advisers have different views about how to tackle uncertainty, with decision-takers focusing on the non-probabilistic aspects while their advisers (e.g. scientists or at least scientifically trained) tend to, and may even insist on, treating the problem probabilistically, and hence have radically different approaches to problem-solving. Perhaps the situation is crucial for the decision-taker, but routine for the adviser? (‘The agency problem.’) (Econophysics seems to suffer from this.)

I can see how Shackle had much that was potentially helpful in the run-up to the financial crash. But it seems to me no surprise that the neoclassical mainstream was unmoved by it. They didn’t regard the situation as crucial, and didn’t imagine or deem possible a crash. Unless anyone knows different, there seems to be nothing in Shackle’s key ideas that provides as explicit a warning as Keynes. While Shackle was more acceptable than Keynes (lacking the ‘Keynesian’ label) he still seems less to the point. One needs both together.

See Also

Prigogine, who provides models of systems that can suddenly change (‘become’). He also relates to Shackle’s discussion on how making decisions relates to the notion of ‘time’.

Dave Marsay

Systemism: the alternative to individualism and holism

Mario Bunge Systemism: the alternative to individualism and holism Journal of Socio-Economics 29 (2000) 147–157

“Three radical worldviews and research approaches are salient in social studies: individualism, holism, and systemism.”

[Systemism] “is centered in the following postulates:
1. Everything, whether concrete or abstract, is a system or an actual or potential component of a system;
2. systems have systemic (emergent) features that their components lack, whence
3. all problems should be approached in a systemic rather than in a sectoral fashion;
4. all ideas should be put together into systems (theories); and
5. the testing of anything, whether idea or artifact, assumes the validity of other items, which are taken as benchmarks, at least for the time being.”

Thus systemism resembles Smuts’ Holism. Bunge uses the term ‘holism’ for what Smuts terms wholism: the notion that systems should be subservient to their ‘top’ level, the ‘whole’. This usage apart, Bunge appears to be saying something important. Like Smuts, he notes the systemic nature of mathematics, in distinction to those who note the tendency to apply mathematical formulae thoughtlessly, as in some notorious financial mathematics.

Much of the main body is taken up with the need for micro-macro analyses and the limitations of piece-meal approaches, something familiar to Smuts and Keynes. On the other hand he says: “I support the systems that benefit me, and sabotage those that hurt me.” without flagging up the limitations of such an approach in complex situations. He even suggests that an interdisciplinary subject such as biochemistry is nothing but the overlap of the two disciplines. If this is the case, I find it hard to grasp the importance of such subjects. I would take a Kantian view, in which bringing into communion two disciplines can be more than the sum of the parts.

In general, Bunge’s arguments in favour of what he calls systemism and Smuts called holism seem sound, but they lack the insights into complexity and uncertainty of the original.

See also

Andy Denis’ response to Bunge adds some arguments in favour of Holism. Its main purpose, though, is to contradict Bunge’s assertion that laissez-faire is incompatible with systemism. It is argued that a belief in Adam Smith’s invisible hand could support laissez-faire. It is not clear what might constitute grounds for such a belief. (My own view is that even a government that sought to leverage the invisible hand would have a duty to monitor the workings of such a hand, and to take action should it fail, as in the economic crisis of 2007/8. It is not clear how politics might facilitate this.)

Also my complexity.

Dave Marsay

From Being to Becoming

I. Prigogine, From Being to Becoming: Time and Complexity in the Physical Sciences, WH Freeman, 1980 

 See new page.

Summary

“This book is about time.” But it has much to say about complexity, uncertainty, probability, dynamics and entropy. It builds on his Nobel lecture, re-using many of the models and arguments, but taking them further.

Being is classically modelled by a state within a landscape, subject to a fixed ‘master equation’ describing changes with time. The state may be an attribute of an object (classical dynamics) or a probability ‘wave’ (quantum mechanics). [This unification seems most fruitful.] Such change is ‘reversible’ in the sense that if one reverses the ‘arrow of time’ one still has a dynamical system.

Becoming refers to more fundamental, irreversible, change, typical of ‘complex systems’ in chemistry, biology and sociology, for example. 
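
To make the reversible/irreversible contrast concrete, here is a minimal sketch (my own illustration, not Prigogine’s formalism): a harmonic oscillator integrated forward, and then with the ‘arrow of time’ reversed, returns exactly to its starting state; a diffusive step has no such inverse.

```python
# 'Being': a reversible dynamical system. Run a harmonic oscillator forward,
# reverse the arrow of time (flip the velocity), run again, recover the start.
def leapfrog(x, v, dt, steps):
    for _ in range(steps):
        v -= 0.5 * dt * x        # half kick (force = -x, unit mass and stiffness)
        x += dt * v              # drift
        v -= 0.5 * dt * x        # half kick
    return x, v

dt, steps = 0.01, 1000
x1, v1 = leapfrog(1.0, 0.0, dt, steps)
x2, v2 = leapfrog(x1, -v1, dt, steps)    # time reversal
print(f"after a forward and a reversed run: x = {x2:.6f}, v = {-v2:.6f} "
      "(started at x = 1, v = 0)")

# 'Becoming': a step like  x += random_noise  has no such inverse; that is the
# flavour of the irreversible change discussed above.
```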

The book reviews the state of the art in theories of Being and Becoming, providing the hooks for its later reconciliation. Both sets of theories are phenomenological – about behaviours. Prigogine shows that not only is there no known link between the two theories, but that they are incompatible.

Prigogine’s approach is to replace the notion of Being as being represented by a state, analogous to a point in a vector space,  by that of an ‘operator’ within something like a Hilbert Space. Stable operators can be thought of as conventional states, but operators can become unstable, which leads to non-statelike behaviours. Prigogine shows how in some cases this can give rise to ‘becoming’.

This would, in itself, seem a great and much needed subject for a book, but Prigogine goes on to consider the consequences for time. He shows how time arises from the operators. If everything is simple and stable then one has classical time. But if the operators are complex then one can have a multitude of times at different rates, which may be erratic or unstable. I haven’t got my head around this bit yet.

Some Quotes

Preface

… the main thesis …can be formulated as:

  1. Irreversible processes are as real as reversible ones …
  2. Irreversible processes play a fundamental constructive role in the physical world …
  3. Irreversibility … corresponds … to an embedding of dynamics within a vaster formalism. [Processes instead of points.] (xiii)

The classical, often called “Galilean,” view of science was to regard the world as an “object,” to try to describe the physical world as if it were being seen from the outside as an object of analysis to which we do not belong. (xv)

… in physics, as in sociology, only various possible “scenarios” can be predicted. [One cannot predict actual outcomes, only identify possibilities.] (xvii)

Introduction

… dynamics … seemed to form a closed universal system, capable of yielding the answer to any question asked. (3)

… Newtonian dynamics is replaced by quantum mechanics and by relativistic mechanics. However, these new forms of dynamics … have inherited the idea of Newtonian physics: a static universe, a universe of being without becoming. (4)

The Physics of Becoming

The interplay between function, structure and fluctuations leads to the most unexpected phenomena, including order through fluctuations … . (101)

… chemical instabilities involve long-range order through which the system acts as a whole. (104)

… the system obeys deterministic laws [as in classical dynamics] between two bifurcation points, but in the neighbourhood of the bifurcation points fluctuations play an essential role and determine the “branch” that the system will follow. (106) [This is termed ‘structurally unstable’.]

… a cyclic network of reactions [is] called a hypercycle. When such networks compete with one another, they display the ability to evolve through mutation and replication into greater complexity. …
The concept of structural stability seems to express in the most compact way the idea of innovation, the appearance of a new mechanism and a new species, … . (109)

… the origin of life may be related to successive instabilities somewhat analogous to the successive bifurcations that have led to a state of matter of increasing coherence. (123)

As an example, … consider the problem of urban evolution … (124) … such a model offers a new basis for the understanding of “structure” resulting from the actions (choices) of the many agents in a system, having in part at least mutually dependent criteria of action. (126)

… there are no limits to structural instability. Every system may present instabilities when suitable perturbations are introduced. Therefore, there can be no end to history. [DJM emphasis.] … we have … the constant generation of “new types” and “new ideas” that may be incorporated into the structure of the system, causing its continual evolution. (128)

… near bifurcations the law of large numbers essentially breaks down.
In general, fluctuations play a minor role … . However, near bifurcations they play a critical role because there the fluctuation drives the average. This is the very meaning of the concept of order through fluctuations … . (132)

… near a bifurcation point, nature always finds some clever way to avoid the consequences of the law of large numbers through an appropriate nucleation process. (134)

… For small-scale fluctuations, boundary effects will dominate and fluctuations will regress. … for large-scale fluctuations, boundary effects become negligible. Between these limiting cases lies the actual size of nucleation. (146)

… We may expect that in systems that are very complex, in the sense that there are many interacting species or components, [the degree of coupling between the system and its surroundings] will be very large, as will be the size of the fluctuation which could start the instability. Therefore … a sufficiently complex system is generally in a metastable state. (147) [But see Comments below.]

… Near instabilities, there are large fluctuations that lead to a breakdown of the usual laws of probability theory. (150)

The Bridge from Being to Becoming

[As foreshadowed by Bohr] we have a new form of complementarity – one between the dynamical and thermodynamic descriptions. (174)

… Irreversibility is the manifestation on a macroscopic scale of “randomness” on a microscopic scale. (178)

Contrary to what Boltzmann attempted to show there is no “deduction” of irreversibility from randomness – they are only cousins! (177)

The Microscopic Theory of Irreversible Processes

The step made … is quite crucial. We go from the dynamical system in terms of trajectories or wave packets to a description in terms of processes. (186)

… Various mechanisms may be involved, the important element being that they lead to a complexity on the microscopic level such that the basic concepts involved in the trajectory or wave function must be superseded by a statistical ensemble. (194)

The classical order was: particles first, the second law later – being before becoming! It is possible that this is no longer so when we come to the level of elementary particles and that here we must first introduce the second law before being able to define the entities. (199)

The Laws of Change

… Of special interest is the close relation between fluctuations and bifurcations which leads to deep alterations in the classical results of probability theory. The law of large numbers is no longer valid near bifurcations and the unicity of the solution of … equations for the probability distribution is lost. (204)

This mathematization leads us to a new concept of time and irreversibility … . (206)

… the classical description in terms of trajectories has to be given up either because of instability and randomness on the microscopic level or because of quantum “correlations”. (207)

… the new concept implies that age depends on the distribution itself and is therefore no longer an external parameter, a simple label as in the conventional formula.
We see how deeply the new approach modifies our traditional view of time, which now emerges as a kind of average over “individual times” of the ensemble. (210)

For a long time, the absolute predictability of classical mechanics, or the physics of being, was considered to be an essential element of the scientific picture of the physical world. … the scientific picture has shifted toward a new, more subtle conception in which both deterministic features and stochastic features play an essential role. (210)

The basis of classical physics was the conviction that the future is determined by the present, and therefore a careful study of the present permits the unveiling of the future. At no time, however, was this more than a theoretical possibility. Yet in some sense this unlimited predictability was an essential element of the scientific picture of the physical world. We may perhaps even call this the founding myth of classical science.
The situation is greatly changed today. … The incorporation of the limitation of our ways of acting on nature has been an essential element of progress. (214)

Have we lost essential elements of classical science in this recent evolution [of thought]? The increased limitation of deterministic laws means that we go from a universe that is closed to one that is open to fluctuations, to innovations.

… perhaps there is a more subtle form of reality that involves both laws and games, time and eternity. (215) 

Comments

Relationship to previous work

This book can be seen as a development of the work of Kant, Whitehead and Smuts on emergence, although – curiously – it makes little reference to them [pg xvii]. In their terms, reality cannot logically be described in terms of point-like states within spaces with fixed ‘master equations’ that govern their dynamics. Instead, it needs to be described in terms of ‘processes’. Prigogine goes beyond this by developing explicit mathematical models as examples of emergence (from being to becoming) within physics and chemistry.

Metastability

According to the quote above, sufficiently complex systems are inherently metastable. Some have supposed that globalisation inevitably leads to an inter-connected and hence complex and hence stable world. But globalisation could lead to homogenization or fungibility, a reduction in complexity and hence an increased vulnerability to fluctuations. As ever, details matter.

See Also

I. Prigogine and I. Stengers Order out of Chaos Heinemann 1984.
This is an update of a popular work on Prigogine’s theory of dissipative systems. He provides an unsympathetic account of Kant’s Critique of Pure Reason, supposing Kant to hold that there are “a unique set of principles on which science is based” without making reference to Kant’s concept of emergence, or of the role of communities. But he does set his work within the framework of Whitehead’s Process and Reality. Smuts’ Holism and Evolution, which draws on Kant and mirrors Whitehead, is also relevant, as a popular and influential account of the 1920s, helping to define the then ‘modern science’.

Dave Marsay

Composability

State of the art – software engineering

Composability is a system design principle that deals with the inter-relationships of components. “A highly composable system provides recombinant components that can be selected and assembled in various combinations … .” For information systems, from a software engineering perspective, the essential features are regarded as modularity and statelessness. Current inhibitors include:

“Lack of clear composition semantics that describe the intention of the composition and allow to manage change propagation.”

Broader context

Composability has a natural interpretation as readiness to be composed with others, and has broader applicability. For example, one suspects that if some people met their own clone, they would not be able to collaborate. Quite generally, composability would seem necessary but perhaps not sufficient for ‘good’ behaviour. Thus each culture tends to develop ways for people to work effectively together, but some sub-cultures seem parasitic, in that they couldn’t sustain themselves on their own.

Cultures tend to evolve, but technical interventions tend to be designed. How can we be sure that the resultant systems are viable under evolutionary pressure? Composability would seem to be an important element, as it allows elements to be re-used and recombined, with the aspiration of supporting change propagation.

Analysis

Composability is particularly evident, and important, in algorithms in statistics and data fusion. If modularity and statelessness are important for the implementation of the algorithms, it is clear that there are also characteristics of the algorithms as functions (ignoring internal details) that are important.

If we partition a given data set, apply a function to the parts and then combine the results, we want to get the same result no matter how the data is partitioned. That is, we want the result to depend on the data, not the partitioning.

In elections, for example, it is not necessarily true that a party that gets a majority of the votes overall will get the most candidates elected. This lack of composability can lead to a loss of confidence in the electoral process. Similarly, media coverage is often an editor’s precis of the precis by different reporters. One would hope that a similar story would emerge if one reporter had covered the whole.
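
A toy illustration of the electoral point (the vote counts are invented):

```python
# Seat-counting is not composable: combining per-constituency winners need not
# agree with the overall vote totals.
constituencies = [                 # (votes for A, votes for B)
    (5_100, 4_900),                # A wins narrowly
    (5_100, 4_900),                # A wins narrowly
    (1_000, 9_000),                # B wins by a landslide
]
seats = {"A": 0, "B": 0}
for a, b in constituencies:
    seats["A" if a > b else "B"] += 1
totals = {"A": sum(a for a, _ in constituencies),
          "B": sum(b for _, b in constituencies)}
print("seats:", seats)             # A takes 2 of the 3 seats
print("votes:", totals)            # yet B has a clear majority of the votes
```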

More technically, averages over parts cannot, in general, be combined to give a true overall average, whereas counting and summing are composable. Desired functions can often be computed composably by using a preparation function, then a composable function, then a projection or interpretation function. Thus an average can be computed by reporting the sum and count for each part, summing these over the parts to give an overall sum and count, and then projecting to get the average. If a given function can be implemented via two or more composable functions, then those functions must be ‘conjugate’: the same up to some change of basis. (For example, multiplication is composable, but one could prepare using logs and project using exponentiation to calculate a product using a sum.)
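
A minimal sketch of that prepare / compose / project pattern for the average (the function names are mine):

```python
from functools import reduce

# The mean itself is not composable, but (sum, count) summaries are.
def prepare(part):                 # per-part summary
    return (sum(part), len(part))

def compose(s1, s2):               # combine summaries, however the data was split
    return (s1[0] + s2[0], s1[1] + s2[1])

def project(summary):              # interpret the combined summary
    total, count = summary
    return total / count

data = [3.0, 5.0, 8.0, 10.0, 4.0]
for parts in ([data[:2], data[2:]], [data[:4], data[4:]]):
    print(project(reduce(compose, map(prepare, parts))))   # 6.0 for both splits
```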

In any domain, then, it is natural to look for composable functions and to implement algorithms in terms of them. This seems to have been widespread practice until the late 1980s, when it became more common to implement algorithms directly and then to worry about how to distribute them.

Iterative Composability

In some cases it is not possible to determine composable functions in advance, or perhaps at all. For example, where innovation can take place, or one is otherwise ignorant of what may be. Here one may look for a form of ‘iterative composability’, in which one hopes that the result is normally adequate, that there will be signs if it is not, and that one will be able to improve the situation. What matters is that this process should converge, so that one can get as close as one likes to the results one would get from using all the data.

Elections under FPTP (first past the post) are not composable, and one cannot tell if the party that is most voters’ first preference has failed to get in. AV (alternative vote) is also not composable, but one has more information (voters give rankings) and so can sometimes tell that there cannot have been a party that was most voters’ first preference and yet failed to get in. If there could have been, one could have a second round with only the top parties’ candidates. This is a partial step towards general iterative composability: the process might often be iteratively composable for the given situation, much more so than FPTP.

Parametric estimation is generally composable when one has a fixed number of entities whose parameters are being estimated. Otherwise one has an ‘association’ problem, which might be tackled differently for the different parts. If so, this needs to be detected and remedied, perhaps iteratively. This is effectively a form of hypothesis testing. Here the problem is that the testing of hypotheses using likelihood ratios is not composable. But, again, if hypotheses are compared, differences can be detected and remedial action taken. It is less obvious that this process will converge, but for constrained hypothesis spaces it does.

Innovation, transformation, freedom and rationality

It is common to suppose that people acting in their environment should characterise their situation within a context in enough detail to remove all but (numeric) probabilistic uncertainty, so that they can optimize. Acting sub-optimally, it is supposed, would not be rational. But if innovation is about transformation then a supposedly rational act may undermine the context of another, leading to a loss of performance and possibly crisis or chaos.

Simultaneous innovation could be managed by having an over-arching policy or plan, but this would clearly constrain freedom and hence genuine innovation. Too much innovation and one has chaos; too little and there is too little progress.

A composable approach is to seek innovations that respect each other’s contexts, and to make clear to others what one’s essential context is. This supports only very timid innovation if the innovation is rational (in the above sense), since no true (Knightian) uncertainty can be accepted. A more composable approach is to seek to minimise dependencies and to innovate in a way that accepts – possibly embraces – true uncertainty. This necessitates a deep understanding of the situation and its potentialities.

Conclusion

Composability is an important concept that can be applied quite generally. The structure of activity shouldn’t impact on the outcome of the activity (other than resource usage). This can mean developing core components that provide a sound infrastructure, and then adapting it to perform the desired tasks, rather than seeking to implement the desired functionality directly.

Dave Marsay

Complexity Demystified: A guide for practitioners

P. Beautement & C. Broenner Complexity Demystified: A guide for practitioners, Triarchy Press, 2011.

First Impressions

  • The title comes close to ‘complexity made simple’, which would be absurd. A favourable interpretation (after Einstein) would be ‘complexity made as straightforward as possible, but no more.’
  • The references look good.
  • The illustrations look appropriate, of suitable quality, quantity and relevance.

Skimming through I gained a good impression of who the book was for and what it had to offer them. This was borne out (below).

Summary

Who is it for?

Complexity is here viewed from the perspective of a ‘coal face’ practitioner:

  • Dealing with problems that are not amenable to a conventional managerial approach (e.g. set targets, monitor progress against targets, …).
  • Has had some success and shown some insight and aptitude.
  • Is being thwarted by stakeholders (e.g., donors, management) with a conventional management view and using conventional ‘tools’, such as accountability against pre-agreed targets.

What is complexity?

Complexity is characterised as a situation where:

  • One can identify potential behaviours and value them, mostly in advance.
  • Unlike simpler situations, one cannot predict what will be the priorities, when: a plan that is a program will fail.
  • One can react to behaviours by suppressing negative behaviours and supporting positive ones: a plan is a valuation, activity is adaptation.

Complexity leads to uncertainty.

Details

Complexity science principles, concepts and techniques

The first two context-settings were well written and informative. This is about academic theory, which we have been warned not to expect too much of; such theory is not [yet?] ‘real-world ready’ – ready to be ‘applied to’ real complex situations – but it does supply some useful conceptual tools.

The approach

In effect, commonplace ‘pragmatism’ is not adequate. The notion of pragmatism is adapted. Instead of persisting with one’s view as long as it seems to be adequate, one seeks to use a broad range of cognitive tools to check one’s understanding and look for alternatives, particularly looking out for any unanticipated changes as soon as they occur.

The book refers to a ‘community of practice’, which suggests that there is already a community that has identified and is grappling with the problems, but needs some extra hints and tips. The approach seems down to earth and ‘pragmatic’, not challenging ideologies, cultures or other deeply held values.

 Case Studies

These were a good range, with those where the authors had been more closely involved being the better for it. I found the one on Ludlow particularly insightful, chiming with my own experiences. I am tempted to blog separately on the ‘fuel protests in the UK in 2000’ as I was engaged with some of the team involved at the time, on related issues. But some of the issues raised here seem quite generally important.

Interesting points

  • Carl Sagan is cited to the effect that the left brain deals with detail, the right with context – the ‘bigger picture’. In my opinion many organisations focus too readily on the short term, to the exclusion of the long term, and if they do focus on the long term they tend to do it ‘by the clock’ with no sense of ‘as required’. Balancing long-term and short-term needs can be the most challenging aspect of interventions.
  • ECCS 09 is made much of. I can vouch for the insightful nature of the practitioners’ workshop that the authors led.
  • I have worked with Patrick, so had prior sight of some of the illustrations. The account is recognizable, but all the better for the insights of ECCS 09 and – possibly – not having to fit with the prejudices of some unsympathetic stakeholders. In a sense, this is the book that we have been lacking.

Related work

Management

  • Leadership agility: A business imperative for a VUCA world.
    Takes a similar view about complexity and how to work with it.
  • The Cynefin Framework.
    Positions complexity between complicated (familiar management techniques work) and chaos (act first). Advocates ‘probe-sense-respond’, which reflects some of the same views as ‘Complexity Demystified’. (The authors have discussed the issues.)

Conclusions

The book considers all types of complexity, revealing that what is required is a more thoughtful approach to pragmatism than is the norm for familiar situations, together with a range of thought-provoking tools, the practical expediency of some of which I can vouch for. As such it provides 259 pages of good guidance. If it also came to be a common source across many practitioner domains then it could facilitate cross-domain discussions on complex topics, something that I feel would be most useful. (Currently some excellent practice is being obscured by the use of ‘silo’ languages and tools, inhibiting collaboration and cross-cultural learning.)

The book seems to me to be strongest in giving guidance to practitioners who are taking, or are constrained to take, a phenomenological approach: seeking to make sense of situations before reacting. This type of approach has been the focus of western academic research and much practice for the last few decades, and in some quarters the notion that one might act without being able to justify one’s actions would be anathema. The book gives some new tools which it is hoped will be useful to justify action, but I have a concern that some situations will still be novel and that to be effective practitioners may still need to act outside the currently accepted concepts, whatever they are. I would have liked to see the book be more explicit about its scope since:

  • Some practitioners can actually cope quite well with such supposedly chaotic situations. Currently, observers tend not to appreciate the extreme complexity of others’ situations, and so under-value their achievements. This is unfortunate, as, for example:
    • Bleeding edge practitioners might find themselves stymied by managers and other stakeholders who have too limited a concept of ‘accountability’.
    • Many others could learn from such practitioners, or employ their insights.
  • Without an appreciation of the complexity/chaos boundary, practitioners may take on tasks that are too difficult for them or the tools at their disposal, or where they may lose stakeholder engagement through having different notions of what is ‘appropriately pragmatic’.
  • An organisation that had some appreciation of the boundary could facilitate mentoring etc.
  • We could start to identify and develop tools with a broader applicability.

In fact, some of the passages in the book would, I believe, be helpful even in the ‘chaos’ situation. If we had a clearer ‘map’ the guidance on relatively straightforward complexity could be simplified and the key material for that complexity which threatens chaos could be made more of. My attempt at drawing such a distinction is at https://djmarsay.wordpress.com/notes/about-these-posts/work-in-progress/complexity/ .

In practice, novelty is more often found in long-term factors, not least because if we do not prepare for novelty sufficiently in advance, we will be unable to react effectively. While I would never wish to advocate too clean a separation between practice and policy, or between short- and long-term considerations, we can perhaps take a leaf out of the book and venture some guidance, not to be taken too rigidly. If conventional pragmatism is appropriate at the immediate ‘coal face’ in the short run, then this book is a guide for those practitioners who are taking a step back and considering complex medium-term issues, and would usefully inform policy makers in considering the long run, but it does not directly address the full complexities which they face, which are often inherently mysterious when seen from a narrow phenomenological stance. It does not provide guidance tailored for policy makers, and nor does it give practitioners a view of policy issues. But it could provide a much-needed contribution towards spanning what can be a difficult practice / policy divide.

Addendum

One of the authors has developed eleven ‘Principles of Practice’. These reflect the view that, in practice, the most significant ‘unintended consequences’ could have been avoided. I think there is a lot of ‘truth’ in this. But it seems to me that however ‘complexity worthy’ one is, and however much one thinks one has followed ‘best practice’ – including that covered by this book – there are always going to be ‘unintended consequences’. It’s just that one can anticipate that they will be less serious, and not as serious as the original problem one was trying to solve.

See Also

Some mathematics of complexity, Reasoning in a complex dynamic world

Dave Marsay

How to Grow a Mind

How to Grow a Mind: Statistics, Structure, and Abstraction

Joshua B. Tenenbaum et al.
Science 331, 1279 (2011);
DOI: 10.1126/science.1192788

This interesting paper proposes that human reasoning, far from being uniquely human, is understandable in terms of the mathematics of inference, and in particular that concept learning is ‘just’ the combination of Bayesian inference and abstract induction found in hierarchical Bayesian models (HBMs). This has implications for two debates:

  • how to conceptualise how people learn
  • the validity of Bayesian methods

These may help, for example:

  • to understand how thinking may be influenced, for example, by culture or experience
  • to aid teaching
  • to understand what might be typical mistakes of the majority
  • to understand mistakes typical of important minorities

If it were the case that humans are Bayesians (as others have also claimed, but with less scope) and if one thought that Bayesian thinking had certain flaws, then one would expect to find evidence of these in human activities (as one does – watch this blog e.g. here). But the details matter.

In HBM one considers that observations are produced by a likelihood function that has a probability distribution, or a longer chain of likelihood functions ‘topped out’ by a probability function. This is equivalent to having a chain of conditional likelihood functions, with the likelihood of the conditions of each function being given by the next one, topped out by an unconditional probability distribution, to make it Bayesian. The paper explains how a Chinese restaurant process (CRP) is used to decide whether new observations fit an existing category (node in the HBM) or a new one is required. In terms of ordinary Bayesian probability theory, this corresponds to creating a new hypothesis when the evidence does not fit any of the existing ones. It thus breaks the Bayesian assumption that the probabilities of the hypotheses sum to 1. Thus the use of the HBM is Bayesian only for as long as there is no observed novelty. So far, the way that humans reason would seem to meet criticisms of ‘pure’ Bayes.
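
For concreteness, here is a minimal sketch of the Chinese restaurant process as a prior over category assignments (my own illustration; the paper combines this prior with likelihoods, which are omitted here):

```python
import numpy as np

# CRP prior: each observation joins an existing category with probability
# proportional to that category's size, or opens a new category with
# probability proportional to the concentration parameter alpha.
rng = np.random.default_rng(0)
alpha = 1.0                        # higher alpha = new categories appear more readily
counts = []                        # observations per existing category
for _ in range(50):
    probs = np.array(counts + [alpha], dtype=float)
    probs /= probs.sum()
    k = rng.choice(len(probs), p=probs)
    if k == len(counts):
        counts.append(1)           # a genuinely novel category is created
    else:
        counts[k] += 1
print("category sizes after 50 observations:", counts)
```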

A pragmatic approach is to use the existing model unless and until it is definitely broken, and this seems to be what the paper is saying about the way humans think. But the paper does not distinguish between the following two situations:

  1. We seem to be in a familiar, routine, situation with no particular reason to expect surprises.
  2. We are in a completely novel situation, perhaps where others are seeking to outwit us.

The pragmatic approach seems reasonable when surprises are infrequent, ‘out of the blue’ and ‘not to be helped’. One proceeds as if one is a Bayesian until one has to change, in which case one fixes the Bayesian model (HBM) and goes back to being a de facto Bayesian. But if surprises are more frequent then there are theoretical benefits in discounting the Bayesian priors (or frequentist frequency information), discounting more the more surprises are to be expected. This could be accommodated by the CRP-based categorisation process, to give an approach that was pragmatic in a broad sense, but not in the pedantic sense of James.
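
A sketch of what such discounting might look like in the simplest possible setting (my own illustration, not taken from the paper): a Beta-Bernoulli learner whose accumulated pseudo-counts are multiplied by a forgetting factor before each update.

```python
# Discounted Bayesian updating: old evidence fades, so the learner can track a
# world that changes. forget = 1.0 recovers the pure Bayesian.
def update(alpha, beta, observation, forget):
    alpha, beta = forget * alpha, forget * beta     # discount the pseudo-counts
    return (alpha + 1, beta) if observation else (alpha, beta + 1)

stream = [1] * 50 + [0] * 10       # the 'machine' changes after 50 observations
for forget in (1.0, 0.9):
    a, b = 1.0, 1.0                # uniform Beta(1, 1) prior
    for obs in stream:
        a, b = update(a, b, obs, forget)
    print(f"forget={forget}: estimated P(event) = {a / (a + b):.2f}")
```

The discounted learner tracks the change after observation 50 far more quickly than the pure Bayesian, at the price of noisier estimates when nothing is changing.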

There are two other ways in which one might depart further from a pure Bayesian approach, although these are not covered by the paper:

  • In a novel situation for which there is no sound basis for any ‘priors’ use likelihood-based reasoning rather than trying (as HBM does) to extrapolate from previous experience.
  • In a novel situation, if previous experience has not provided a matching ‘template’ in HBM, consider other sources of templates, e.g.:
    • theoretical (e.g., mathematical) reasoning
    • advice from others

Conclusion

An interesting paper, but we perhaps shouldn’t take its endorsement of Bayesian reasoning too pedantically: there may be other explanations, and even if people are naturally Bayesians in the strict technical sense, that doesn’t necessarily mean that they are beyond education.

Dave Marsay

All watched over by machines of loving grace

What?

An Adam Curtis documentary shown on the BBC May/June 2011.

Comment

The trailers (above link) give a good feel for the series, which is entertaining, with some good video, music, pseudo-history and comment. The details shouldn’t be taken too seriously, but it is thought-provoking, on some topics that need thought.

Thoughts

The series ends:

The idea that human beings are helpless chunks of hardware controlled by software programs written in their genetic codes [remains powerfully influential in our society]. The question is, have we embraced that idea because it is a comfort in a world where everything that we do, either good or bad, seems to have terrible unforeseen consequences? …

We have embraced a fatalistic philosophy of us as helpless computing machines, to both excuse and explain our political failure to change the world.

This thesis has three parts:

  1. that everything we do has terrible unforeseen consequences
  2. that we are fatalistic in the face of such uncertainty
  3. that we have adopted a machine metaphor as ‘cover’ for our fatalism.

Uncertainty

The program demonizes unforeseen consequences. Certainly we should be troubled by them, and their implications for rationalism and pragmatism. But if there were no uncertainties then we could be rational and ‘should’ behave like machines. Reasoning in a complex, dynamic world calls for more than narrowly rational machine-like calculation, and gives purpose to being human.

Fatalism

It seems reasonable to suppose that most of the time most people can do little to influence the factors that shape their lives, but I think this is true even when people can perfectly well see the likely consequences of what is being done in their name. What is at issue here is not so much ordinary fatalism, which seems justified, as the charge that those who are making big decisions on our behalf are also fatalistic.

In democracies, no-one makes a free decision anymore. Everyone is held accountable and expected to abide by generally accepted norms and procedures. In principle, whenever one has a novel situation the extant rules should be at least briefly reviewed, lest they lead to ‘unforeseen consequences’. A fatalist would presumably not do this. Perhaps the failure, then, is not to challenge assumptions or ‘kick against’ constraints.

The machine metaphor

Computers and mathematicians played a big role in the documentary. Humans are seen as being programmed by a genetic code that has evolved to self-replicate. But evolution leads to ‘punctuated equilibrium’ and epochs. Reasoning in epochs is not like reasoning in stable situations, the preserve of rule-driven machines. The mathematics of Whitehead and Turing supports the machine-metaphor, but only within an epoch. How would a genetically programmed person fare if they moved to a different culture or had to cope with new technologies radically transforming their daily lives? One might suppose that we are encoded for ‘general ways of living and learning’, but then we seem to require a grasp of uncertainty beyond that which we currently associate with machines.

Notes

  • The program had a discussion on altruism and other traits in which behaviours might disbenefit the individual but advantage those who are genetically similar over others. This would seem to justify much terrorism and even suicide-bombing. The machine metaphor would seem undesirable for reasons other than its tendency to fatalism.
  • An alternative to absolute fatalism would be fatalism about long-term consequences. This would lead to a short-termism that might provide a better explanation for real-world events.
  • The financial crash of 2007/8 was preceded by a kind of fatalism, in that it was supposed that free markets could never crash. This was associated with machine trading, but neither a belief in the machine metaphor nor a fear of unintended consequences seems to have been at the root of the problem. A belief in the potency of markets was perhaps reasonable (in the short term) once the high-tech bubble had burst. The problem seems to be that people got hooked on the bubble drug, and went into denial.
  • Mathematicians came in for some implicit criticism in the program. But the only subject of mathematics is mathematics. In applying mathematics to real systems the error is surely in substituting myth for science. If some people mis-use mathematics, the mathematics is no more at fault than their pencils. (Although maybe mathematicians ought to be more vigorous in uncovering abuse, rather than just doing mathematics.)

Conclusion

Entertaining, thought-provoking.

Dave Marsay

Out of Control

Kevin Kelly’s ‘Out of Control’ (1994), sub-titled “The New Biology of Machines, Social Systems, and the Economic World”, gives ‘the nine laws of god’, which it commends for all future systems, including organisations and economies. They didn’t work out too well in 2008.

The claims

The book is introduced (above) by:

“Out of Control is a summary of what we know about self-sustaining systems, both living ones such as a tropical wetland, or an artificial one, such as a computer simulation of our planet. The last chapter of the book, “The Nine Laws of God,” is a distillation of the nine common principles that all life-like systems share. The major themes of the book are:

  • As we make our machines and institutions more complex, we have to make them more biological in order to manage them.
  • The most potent force in technology will be artificial evolution. We are already evolving software and drugs … .
  • Organic life is the ultimate technology, and all technology will improve towards biology.
  • The main thing computers are good for is creating little worlds so that we can try out the Great Questions. …
  • As we shape technology, it shapes us. We are connecting everything to everything, and so our entire culture is migrating to a “network culture” and a new network economics.

In order to harvest the power of organic machines, we have to instill in them guidelines and self-governance, and relinquish some of our total control.”

Holism

Much of the book is Holistic in nature. The above could be read as applying the ideas of Smuts’ Holism to newer technologies. (Chapter 19 does make explicit reference to JC Smuts in connection with internal selection, but doesn’t reference his work.)

Jan Smuts based his work on wide experience, including improving arms production in the Great War, and went on to found ecology and help modernise the sciences, thus leading to the views that Kelly picks up on. Superficially, Kelly’s book is greatly concerned with technology that post-dates Smuts, but his arguments claim to be quite general, so an apostle of Smuts would expect Kelly to be consistent with Smuts, but applying the ideas to the new realm. But where does Kelly depart from Smuts, and what new insights does he bring? Below we pick out Kelly’s key texts and compare them.

The nine Laws of God

The laws with my italics are:

Distribute being

When the sum of the parts can add up to more than the parts, then that extra being … is distributed among the parts. Whenever we find something from nothing, we find it arising from a field of many interacting smaller pieces. All the mysteries we find most interesting — life, intelligence, evolution — are found in the soil of large distributed systems.

The first phrase is clearly Holistic, and perhaps consistent with Smuts’ view that the ‘extra’ arises from the ‘field of interactions’. However in many current technologies the ‘pieces’ are very hard-edged, with limited ‘mutual interaction’. 

Control from the bottom up

When everything is connected to everything in a distributed network … overall governance must arise from the most humble interdependent acts done locally in parallel, and not from a central command. …

The phrases ‘bottom up’ and ‘humble interdependent acts’ seem inconsistent with Smuts’ own behaviour, for example in taking the ‘go’ decision for D-day. Generally, Kelly seems to ignore or deny the need for different operational levels, as in the military’s tactical and strategic.

Cultivate increasing returns

Each time you use an idea, a language, or a skill you strengthen it, reinforce it, and make it more likely to be used again. … Success breeds success. In the Gospels, this principle of social dynamics is known as “To those who have, more will be given.” Anything which alters its environment to increase production of itself is playing the game … And all large, sustaining systems play the game … in economics, biology, computer science, and human psychology. …

Smuts seems to have been the first to recognize that one could inherit a tendency to have more of something (such as height) than one’s parents, so that a successful tendency (such as being tall) would be reinforced. The difference between Kelly and Smuts is that Kelly has a general rule whereas Smuts has it as a product of evolution for each attribute. Kelly’s version also needs to be balanced against not optimising (below).

Grow by chunking

The only way to make a complex system that works is to begin with a simple system that works. Attempts to instantly install highly complex organization — such as intelligence or a market economy — without growing it, inevitably lead to failure. … Time is needed to let each part test itself against all the others. Complexity is created, then, by assembling it incrementally from simple modules that can operate independently.

Kelly is uncomfortable with the term ‘complex’. In Smuts’ usage a military platoon attack is often ‘complex’, whereas a superior headquarters could be simple. Systems with humans in them naturally tend to be complex (as Kelly describes) and are only made simple by prescriptive rules and procedures. In many settings such process-driven systems would (as Kelly describes them) be quite fragile, and unable to operate independently in a demanding environment (e.g., one with a thinking adversary). Thus I suppose that Kelly is advocating starting with small but adaptable systems and growing them. This is desirable, but often Smuts did not have that luxury, and had to re-engineer systems, such as production or fighting systems, ‘on the fly’.

Maximize the fringes

… A uniform entity must adapt to the world by occasional earth-shattering revolutions, one of which is sure to kill it. A diverse heterogeneous entity, on the other hand, can adapt to the world in a thousand daily mini revolutions, staying in a state of permanent, but never fatal, churning. Diversity favors remote borders, the outskirts, hidden corners, moments of chaos, and isolated clusters. In economic, ecological, evolutionary, and institutional models, a healthy fringe speeds adaptation, increases resilience, and is almost always the source of innovations.

A large uniform entity cannot adapt and maintain its uniformity, and so is unsustainable in the face of a changing situation or environment. If diversity is allowed then parts can adapt independently, and generally favourable adaptations spread. Moreover, the more diverse an entity is the more it can fill a variety of niches, and the more likely it is to survive some shock. Here Kelly, Smuts and Darwin essentially agree.

Honor your errors

A trick will only work for a while, until everyone else is doing it. To advance from the ordinary requires a new game, or a new territory. But the process of going outside the conventional method, game, or territory is indistinguishable from error. Even the most brilliant act of human genius, in the final analysis, is an act of trial and error. … Error, whether random or deliberate, must become an integral part of any process of creation. Evolution can be thought of as systematic error management.

Here the problem of competition is addressed. Kelly supposes that the only viable strategy in the face of complexity is blind trial and error, ‘the no strategy strategy’. But the main thing is to be able to identify actual errors. Smuts might also add that one might learn from near-misses and other potential errors.

Pursue no optima; have multiple goals

 …  a large system can only survive by “satisficing” (making “good enough”) a multitude of functions. For instance, an adaptive system must trade off between exploiting a known path of success (optimizing a current strategy), or diverting resources to exploring new paths (thereby wasting energy trying less efficient methods). …  forget elegance; if it works, it’s beautiful.

Here Kelly confuses ‘a known path of success’ with ‘a current strategy’, which may explain why he is dismissive of strategy. Smuts would say that getting an adequate balance between the exploitation of manifest success and the exploration of alternatives would be a key feature of any strategy. Sometimes it pays not to go after near-term returns, perhaps even accepting a loss.
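
The balance that Smuts would insist on, between exploiting manifest success and exploring alternatives, is nowadays often illustrated by a ‘multi-armed bandit’. The sketch below is mine, not Kelly’s or Smuts’: an epsilon-greedy rule that mostly exploits the best-known option but reserves some effort for exploration. The names (epsilon_greedy, estimates) and the example beliefs are all invented for illustration.

    import random

    def epsilon_greedy(estimates, epsilon=0.1):
        """Pick an option: usually the best-known one (exploit), sometimes a
        random one (explore). `estimates` maps option -> estimated payoff."""
        if random.random() < epsilon:
            return random.choice(list(estimates))     # explore
        return max(estimates, key=estimates.get)      # exploit

    # current beliefs about three hypothetical courses of action
    beliefs = {"known path": 1.0, "variant A": 0.6, "variant B": 0.2}
    print(epsilon_greedy(beliefs, epsilon=0.2))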

Seek persistent disequilibrium

Neither constancy nor relentless change will support a creation. A good creation … is persistent disequilibrium — a continuous state of surfing forever on the edge between never stopping but never falling. Homing in on that liquid threshold is the still mysterious holy grail of creation and the quest of all amateur gods.

This is a key insight. The implication is that even the nine laws do not guarantee success. Kelly does not say how the disequilibrium is generated. In many systems it is only generated as part of an eco-system, so that reducing the challenge to a system can lead to its virtual death. A key part of growth (above) is to grow the ability to maintain a healthy disequilibrium despite increasing novel challenges.

Change changes itself

… When extremely large systems are built up out of complicated systems, then each system begins to influence and ultimately change the organizations of other systems. That is, if the rules of the game are composed from the bottom up, then it is likely that interacting forces at the bottom level will alter the rules of the game as it progresses.  Over time, the rules for change get changed themselves. …

It seems that the changes to the rules are blindly adaptive. This may be because, unlike Smuts, Kelly does not believe in strategy, or in the power of theory to enlighten.

Kelly’s discussion

These nine principles underpin the awesome workings of prairies, flamingoes, cedar forests, eyeballs, natural selection in geological time, and the unfolding of a baby elephant from a tiny seed of elephant sperm and egg.

These same principles of bio-logic are now being implanted in computer chips, electronic communication networks, robot modules, pharmaceutical searches, software design, and corporate management, in order that these artificial systems may overcome their own complexity.

When the Technos is enlivened by Bios we get artifacts that can adapt, learn, and evolve. …

The intensely biological nature of the coming culture derives from five influences:

    • Despite the increasing technization of our world, organic life — both wild and domesticated — will continue to be the prime infrastructure of human experience on the global scale.
    • Machines will become more biological in character.
    • Technological networks will make human culture even more ecological and evolutionary.
    • Engineered biology and biotechnology will eclipse the importance of mechanical technology.
    • Biological ways will be revered as ideal ways.

 …

As complex as things are today, everything will be more complex tomorrow. The scientists and projects reported here have been concerned with harnessing the laws of design so that order can emerge from chaos, so that organized complexity can be kept from unraveling into unorganized complications, and so that something can be made from nothing.

My discussion

Considering local action only, Kelly’s arguments often come down to the supposed impossibility of effective strategy in the face of complexity, leading to the recommendation of the universal ‘no strategy strategy’: continually adapt to the actual situation, identifying and setting appropriate goals and sub-goals. Superficially, this seems quite restrictive, but we are free as to how we interpret events, learn, set goals and monitor progress and react. There seems to be nothing to prevent us from following a more substantial strategy but describing it in Kelly’s terms.

 The ‘bottom up’ principle seems to be based on the difficulty of central control. But Kelly envisages the use of markets, which can be seen as a ‘no control control’. That is, we are heavily influenced by markets but they have no intention. An alternative would be to allow a range of mechanisms, ideally also without intention; whatever is supported by an appropriate majority (2/3?).

For economics, Kelly’s laws are suggestive of Hayek, whereas Smuts’ approach was shared with his colleague, Keynes. 

Conclusion

What is remarkable about Kelly’s laws is the impotence of the individuals in the face of ‘the system’. It would seem better to allow for ‘central’ (or intermediate) mechanisms to be ‘bottom up’ in the sense that they are supported by an informed ‘bottom’.

David Marsay

Cynefin Framework

YouTube has a good video by Dave Snowden on his/Cognitive Edge’s ‘Cynefin sense-making Framework’ for complexity and chaos. I speculate on its applicability outside routine management.

Overview

[Figure: the Cynefin framework. Image via Wikipedia.]

The Cynefin framework is very much from a human factors / organisational / management point of view, but may have wider potential applicability. It makes reference to evolutionary theories, but these seem not to be essential.

Components

The framework has four main components:

  • simple: sense, categorise, respond
  • complicated: sense, analyse, respond
  • complex: probe, sense, respond
  • chaos: act, sense, respond

plus: disorder: not knowing where one is, and not knowing what to do.
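
To fix ideas, the categorisation can be written down directly as a look-up from domain to response sequence. This is only my minimal sketch in Python, not part of Snowden’s or Cognitive Edge’s material; the names (RESPONSES, response_sequence) are my own.

    # Illustrative sketch only: the Cynefin domains and their response
    # sequences, as listed above. The names are my own.
    RESPONSES = {
        "simple":      ["sense", "categorise", "respond"],
        "complicated": ["sense", "analyse", "respond"],
        "complex":     ["probe", "sense", "respond"],
        "chaos":       ["act", "sense", "respond"],
    }

    def response_sequence(domain):
        """Return the recommended sequence, or a reminder that 'disorder'
        means one has not yet made sense of where one is."""
        return RESPONSES.get(domain, ["make sense of where you are first"])

    print(response_sequence("complex"))   # ['probe', 'sense', 'respond']
    print(response_sequence("disorder"))  # not knowing where one is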

Transitions

Problems can transit incrementally between simple and complicated, simple and complex, or complex and chaotic. But if one treats problems as if they were simple there is a risk of them becoming chaotic, in which case one cannot get them back to simple directly, but has to go via complex etc. It is best not to treat things as simple except where doing so would yield a great enough advantage to outweigh the risks. (Even here one should watch out.)
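
My reading of these transitions can likewise be sketched as a small graph: incremental moves between neighbouring domains, with no direct route back from chaos to simple. Again illustrative only; INCREMENTAL and can_step are my own names.

    # My reading of the incremental transitions described above; illustrative only.
    INCREMENTAL = {
        "simple":      {"complicated", "complex"},
        "complicated": {"simple"},
        "complex":     {"simple", "chaos"},
        "chaos":       {"complex"},
    }

    def can_step(frm, to):
        """True if an incremental move from `frm` to `to` is allowed."""
        return to in INCREMENTAL.get(frm, set())

    # A problem that has fallen into chaos cannot be brought straight back
    # to simple; it has to go via complex.
    assert not can_step("chaos", "simple")
    assert can_step("chaos", "complex")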

One escapes disorder by applying the framework and associated techniques. (One might modify the framework so that one transits out of order into disorder and can then go into chaos, but apparently managers can only cope with four components. 😉 )

Handling complexity

A complex situation is described as stable. One identifies ‘safe to fail’ probes, i.e. ones whose effects one could recover from, bringing the situation back to stability. In particular, one needs to be able to tell when the outcome of a probe is not safe, and have sufficient remediation resources to hand, and also to be able to tell when the outcome is positive, and have amplifying resources available. One then tries out such probes until what happens is acceptable and then seeks to amplify the effect (e.g., by pushing harder). Thus one has a form of ‘trial and error’, eventually leading to success by persistence.
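
As a caricature of this probing regime, and only that, the loop below tries probes, remediates unsafe outcomes and amplifies positive ones. The callables probes (a list of candidate probes), is_safe, is_positive, remediate and amplify are hypothetical stand-ins for whatever an organisation actually does.

    import random

    def handle_complex(probes, is_safe, is_positive, remediate, amplify,
                       max_tries=20):
        """Caricature of safe-to-fail probing: try probes, pull back from
        unsafe outcomes, amplify positive ones. All arguments are
        hypothetical callables (probes is a list of candidate probes)."""
        for _ in range(max_tries):
            outcome = random.choice(probes)()   # act tentatively
            if not is_safe(outcome):
                remediate(outcome)              # recover to the stable state
                continue
            if is_positive(outcome):
                amplify(outcome)                # e.g. push harder on what worked
                return outcome
        return None                             # persistence did not pay off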

Sense making

The video starts with an important preamble: although the framework is typically presented as a categorisation it should really be used for sense-making. That is, one needs to decide for the case at hand what are the appropriate definitions of the components. My interpretation is that ‘complicated’ is what an organisation can already analyse, ‘complex’ is what it – after some enlightening – may be able to get to handle, while ‘chaos’ is still too hard to handle. Thus one would naturally expect the definitions to vary.

Limitations

No palette of options, from which a definition of ‘complex’ could be developed, is provided. It is quite a ‘thin’ framework. 

If one had a given problem, one can see how (using the Cognitive Edge techniques or otherwise) one might usefully characterise complexity as more than run-of-the-mill complicatedness but still handle-able (as above), and identify the main features. This might be appropriate within a typical commercial organisation. But outside such conservative settings one has some potential issues:

  • It might not be possible to resolve a problem without going to the edge of chaos, and solutions might involve ‘leaps of faith’ through some chaos.
  • The current situation might not be stable, so there is nothing to return to with ‘safe to fail’.
  • Stability might not be desirable: one might want to survive in a hostile situation, which might depend on agility.
  • The situation might be complex or complicated (or complex in different ways)  depending on where you think the problem lies, or on what your strategy might be.

Examples

Economics

We wish economies to be ‘managed’ in the sense that we might intervene to promote growth while minimising risk. The Cynefin framework might be applied as follows:

  • Many commentators and even some economists and responsible officials seem to view the problem as simple. E.g., sense the debt, categorise it as ‘too much’, respond according to dogma.
  • Other commentators, and many who make money from financial markets, seem to see them as complicated: sense lots of data in various graphs, analyse and respond. Each situation has some novelty, but can be fitted into their overall approach.
  • Many commentators, some economists and many politicians seemed entranced by ‘the great moderation’, which seemed to guarantee a permanent stability, so that the economy was not chaos but was ‘at worst’ complex. Many of those involved seemed to appreciate the theoretical need for probe-sense-respond, but it became difficult (at least in the UK) to justify action (probes) for which one could not make a ‘business case’, because there may be no benefit other than the lessons identified and the reduction of options. Hence there was an inability to treat things as complex, leading to chaos.
  • Chaos (innovation) had been encouraged at the micro level in the belief that it could not destabilise the macro. But over 2007/8 it played a role in bringing down the economy. This led to activity that could be categorised as act (as a Keynesian), sense (what the market makers think), respond (with austerity).

Here one may note

  • That different parts and levels of the economy could be in different parts of the framework, and that one should consider influences between them.
  • The austerity option is simple, so chaos was reduced to simple directly, whereas a more Keynesian response would have been complex.
  • Whilst the austerity option is economically simple, it may lead to complex or chaotic situations elsewhere,  e.g. the social.

Crisis Management

Typically, potential crises are dealt with in the first place by appropriate departments, which are usually capable of handling simple and complicated situations, so that a full-blown crisis is typically complex or chaotic. If a situation is stable then one might think that the time pressure would be reduced, and so the situation would be less of a crisis. One can distinguish between time-scales:

  • a situation is stable in the short term, but may suddenly ‘blow up’
  • a situation is stable in the long term

and two notions of stability:

  • all indicators are varying around a constant mean
  • some aspects may be varying around a mean that is changing steadily but possibly rapidly (e.g. linear or exponential), but ‘the essential regulatory system’ is stable.

Thus one might regard a racing car as stable ‘in itself’ even as it races and even if it might crash. Similarly, a nuclear reactor that is in melt-down is stable in some sense: the nature of the crisis is stable, even if contamination is spreading.

With these interpretations, many crises are complex or disordered. If the situation is chaotic one might need some decisive action to stabilise it. If it is disordered then as a rule of thumb one might treat it as chaotic: the distinction seems slight, since there will be no time for navel-gazing.

In many crises there will be specialists who, by habit or otherwise, will want to treat the problem as merely complicated, applying their nostrums. Such actions need to be guarded and treated as probes, in the way a parent might watch over an over-confident child, unaware of the wider risks. Thus what appears to be sense-analyse-respond may be guarded to become probe-sense-respond.

In some cases a domain expert may operate effectively in a complex situation and might reasonably be given licence to do so, but as the situation develops one needs to be clear where responsibility for the beyond-complicated aspects lies. A common framework, such as Cynefin, would seem essential here.

In other cases a ‘heroic leader’ may be acting to bring order to chaos, but others may be quietly taking precautions in case it doesn’t come off, so that the distinction between ‘act-sense-respond’ and ‘probe-sense-respond’ may be subjective.

Quibbles

I may turn these notes into a graphic.

It seems to me that, with experience, one will often be able to judge that a situation is going to be simple, complicated or worse, but not whether it is going to be complex or chaotic. Moreover, the process can be much more iterative than the bare labels suggest. Thus in complex we may have a series of probes, {probe}, leading to sense being made and action that improves the situation but which typically leaves a less problematic complex, complicated or simple problem. Thus the complex part is {probe}-sense-respond, followed by others, to give {{probe}-sense-respond} [{complicated/simple}], with – in practice – some mis-steps leading to the problem actually getting worse, hence {{{probe}-sense-respond} [{complicated/simple}]}. The complicated is then {sense-analyse-respond}[{simple}] and simple is typically {sense-categorise-respond}: even simple is not often a one-shot activity.
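
The nesting above can be caricatured as a loop: in the complex regime a series of probes is run until sense is made, the response usually leaves an easier (complicated or simple) problem, and occasional mis-steps make things worse. The toy below is only my illustration of that reading; the probabilities and the function demo_handle are invented.

    import random

    def demo_handle(kind="complex", max_steps=10):
        """Toy illustration of the nested {probe}-sense-respond reading:
        probes are repeated until sense is made, responding usually leaves
        an easier problem, but mis-steps can make it worse."""
        order = ["simple", "complicated", "complex"]
        for _ in range(max_steps):
            if kind == "complex":
                probes = 1
                while random.random() < 0.5:   # {probe}: a series of probes
                    probes += 1
                print(f"complex: {probes} probe(s), then sense and respond")
            elif kind == "complicated":
                print("complicated: sense, analyse, respond")
            else:
                print("simple: sense, categorise, respond")
            i = order.index(kind)
            i += -1 if random.random() < 0.8 else +1   # usually easier, sometimes worse
            if i < 0:
                return "resolved"
            kind = order[min(i, len(order) - 1)]
        return kind

    print(demo_handle())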

With the above understanding, we can represent chaotic as a failure of the above. We start by probing and trying to make sense, but, failing that, we have to take a ‘shaping’ action. If this succeeds, we have a complex situation at worst. If not, we have to try again. Thus we have:

while complex fails: shape

Here I take the view that once we have found the situation to be beyond our sense-making resources we should treat it as if it is complex. If it turns out to be merely complicated or simple, so much the better: our ‘response’ is not an action in the ‘real’ world but simply a recognition of the type of situation and a selection of the appropriate methods.

My next quibble is on the probing. This implies taking an action which is ‘safe-to-fail’. But, particularly after taking a shaping action one may need to bundle the probe with some constraining activity, which prevents the disturbance from the probe from spreading. Also, part of the shaping may be to decouple parts of the system being studied so that probes become safe-to-fail.

Overall, I think a useful distinction is between situations where one can probe-sense-respond and those that call for interventions (‘shape’) that create the conditions for probing, analysing or categorising. Perhaps the distinction is between activities normally conducted by managers (complex at worst) and those that are normally conducted by CEOs, leaders etc. and hence outside the management box. Thus the management response to chaos might call for an act ‘from above’.
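
Putting these quibbles together, my reading of the chaotic case is ‘while complex fails: shape’: shaping actions, typically ‘from above’, until the situation is safe enough for probes bundled with containment. The sketch below assumes hypothetical callables (shape, safe_to_probe, probe_with_constraints) and is not part of the Cynefin material.

    def handle_chaos(shape, safe_to_probe, probe_with_constraints, max_shapes=5):
        """'while complex fails: shape' -- shaping actions (typically 'from
        above') until safe-to-fail probing becomes possible. All three
        arguments are hypothetical callables."""
        for _ in range(max_shapes):
            if safe_to_probe():                  # the situation is now complex at worst
                return probe_with_constraints()  # probes bundled with containment
            shape()                              # decisive action to create the conditions
        raise RuntimeError("still beyond our sense-making resources")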

Conclusion

Cynefin provides a sense-making framework, but if one is in a complex situation one may need a more specific framework, e.g. for complexity or for chaos/complexity. Outside routine management situations the chaos/complexity distinction may need to be reviewed. The distinction between probe-sense-respond and act-sense-respond seems hard to make in advance.

Dave Marsay

See also

Induction and epochs

 

Critique of Pure Reason

I. Kant’s Critique of Pure Reason, 2nd ed., 1787.

See new location.

David Marsay

Holism and Evolution

Holism and Evolution, 1927. Smuts’ notoriously inaccessible theory of evolution, building on and show-casing Keynes’ notion of uncertainty. Smuts made significant revisions and additions in later editions to reflect some of the details of the then-current understanding. Not all of these now appear to be an improvement. Although Smuts and Whitehead worked independently, they recognized that their theories were equivalent. The book is of most interest for its general approach, rather than its detail. Smuts went on to become the centennial president of the British Association for the Advancement of Science, drawing on these ideas to characterise ‘modern science’.

Holism is a term introduced by Smuts, in contrast to individualism and wholism. In the context of evolution it emphasises co-evolution between parts and wholes, with neither being dominant. The best explanation I have found is:

“Back in the days of those Ancient Greeks, Aristotle (384-322 BCE) gave us:

The whole is greater than the sum of its parts; (the composition law)
The part is more than a fraction of the whole. (the decomposition law)”

(From ‘Composition Laws’, Derek Hitchins’ Systems World.)

Smuts also develops Lloyd Morgan’s concept of emergence. For example, the evolutionary ‘fitness function’ may emerge from a co-adaptation rather than be fixed.

The book covers evolution from physics to personality. Smuts intended a sequel covering, for example, social and political evolution, but was distracted by the Second World War, among other things.

Smuts noted that according to the popular view of evolution, one would expect organisms to become more and more adapted to their environmental niches, whereas they were more ‘adapted to adapt’, particularly mankind. There seemed to be inheritance of variability in offspring as well as the more familiar inheritance of manifest characteristics, which suggested more sudden changes in the environment than had been assumed. This led Smuts to support research into the Wegener hypothesis (concerning continental drift) and the geographic origins of life-forms.

See also

Ian Stewart, Peter Allen

David Marsay

Life’s Other Secret

Ian Stewart Life’s Other Secret: The new mathematics of the living world, 1998.

This updates D’Arcy Thompson’s classic On growth and form, ending with a manifesto for a ‘new’ mathematics, and a good explanation of the relationship between mathematics and scientific ‘knowledge’.

Like most post-80s writings, its main failing is that it sees science as having achieved some great new insights in the 80s, ignoring the work of Whitehead et al, as explained by Smuts, for example.

Ian repeatedly notes the tendency for models to assume fixed rules, and hence only to apply within a fixed Whitehead-epoch, whereas (as Smuts also noted) life bears the imprint of having been formed during (catastrophic) changes of epoch.

The discussion provides some supporting evidence for the following, but does not develop the ideas:

The manifesto is for a model combining the strengths of cellular automata with Turing’s reaction-diffusion approach, and more. Thus it is similar to Smuts’ thoughts on Whitehead et al, as developed in SMUTS. Stewart also notes the inadequacy of the conventional interpretation of Shannon’s ‘information’.
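
For readers unfamiliar with the reaction-diffusion side of that manifesto, a minimal one-dimensional Gray-Scott run gives the flavour. It is a standard textbook stand-in for Turing’s approach, not anything from Stewart’s book, and the parameter values are conventional choices rather than fitted to anything.

    import numpy as np

    def gray_scott(n=200, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.060):
        """Minimal 1-D Gray-Scott reaction-diffusion run; a generic stand-in
        for Turing-style pattern formation, with conventional parameters."""
        u = np.ones(n)
        v = np.zeros(n)
        u[n // 2 - 5: n // 2 + 5] = 0.25        # seed a local disturbance
        v[n // 2 - 5: n // 2 + 5] = 0.5
        for _ in range(steps):
            lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u
            lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
            uvv = u * v * v
            u += Du * lap_u - uvv + F * (1 - u)
            v += Dv * lap_v + uvv - (F + k) * v
        return u, v

    u, v = gray_scott()
    print("pattern roughness:", float(np.std(v)))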

See also

Mathematics and real systems. Evolution and uncertainty, epochs.

Dave Marsay

Synthetic Modelling of Uncertain Temporal Systems

Overview

SMUTS is a computer-based ‘exploratorium’, to aid the synthetic modelling of uncertain temporal systems. I had previously worked on sense-making systems based on the ideas of Good, Turing and Keynes, and was asked to get involved in a study on the potential impact of any Y2K bugs, starting November 1999. Not having a suitable agreed model, we needed a generic modelling system, able to at least emulate the main features of all the part models. I had been involved in conflict resolution, where avoiding cultural biases and being able to meld different models was often key, and JC Smuts’ Holism and Evolution seemed a sound if hand-wavy approach. SMUTS is essentially a mathematical interpretation of Smuts. I was later able to validate it when I found from the Smuts Papers that Whitehead, Smuts and Keynes regarded their work as highly complementary. SMUTS is actually closer to Whitehead than Smuts.

Systems

An actual system is a part of the actual world that is largely self-contained, with inputs and outputs but with no significant external feedback-loops. What counts as significant is a matter of judgement. Any external feedback loop will typically have some effect, but we may not regard it as significant if we can be sure that any effects will build up too slowly to matter. It is a matter of analysis on larger systems to determine what might be considered smaller systems. Thus plankton are probably not a part of the weather system but may be a part of the climate system.

The term system may also be used for a model of a system, but here we mean an actual system.

Temporal

We are interested in how systems change in time, or ‘evolve’. Such change includes all types of evolution, adaptation, learning and desperation, and hence is much broader than the usual ‘mathematical models’.

Uncertain

Keynes’ notion of uncertainty is essentially Knightian uncertainty, but with more mathematical underpinning. It thus extends more familiar notions of probability as ‘just a number’. As Smuts emphasises, systems of interest can display a much richer variety of behaviours than typical probabilistic systems. Keynes has detailed the consequences for economics at length.

Modelling

Pragmatically, one develops a single model which one exploits until it fails. But for complex systems no single model can ever be adequate in the long run, and, as Keynes and Smuts emphasised, it is much better to recognize that any conventional model will be uncertain. A key part of the previous sense-making work was the multi-modelling concept of maintaining the broadest range of credible models, with some more precise and others more robust, and then hedging across them, following Keynes et al.
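
A minimal caricature of the multi-modelling idea, under my own assumptions rather than the actual sense-making system: several candidate models forecast, and instead of committing to the single best one we report a performance-weighted estimate together with the spread across models, so that disagreement stays visible. The function hedged_forecast and its scoring rule are invented for illustration.

    def hedged_forecast(models, history, x):
        """`models` is a list of callables f(x) -> prediction; `history` is a
        list of (x, actual) pairs used to score them. Returns a weighted
        estimate plus the spread across models (an invented scheme)."""
        def score(m):
            errors = [abs(m(xi) - yi) for xi, yi in history]
            return 1.0 / (1e-9 + sum(errors) / max(len(errors), 1))

        weights = [score(m) for m in models]
        predictions = [m(x) for m in models]
        weighted = sum(w * p for w, p in zip(weights, predictions)) / sum(weights)
        return {"estimate": weighted, "low": min(predictions), "high": max(predictions)}

    # two toy 'models' of the same system, scored on a little history
    models = [lambda x: 2 * x, lambda x: 2 * x + 1]
    history = [(1, 2.1), (2, 4.2), (3, 5.9)]
    print(hedged_forecast(models, history, 4))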

Synthetic

In conflict resolution it may be enough to simply show the different models of the different sides. But equally one may need to synthesize them, to understand the relationships between them and the scope for ‘rationalization’. In sense-making this is essential to the efficient and effective use of data; otherwise one can have a ‘combinatorial explosion’.

Test cases

To set SMUTS going, it was developed to emulate some familiar test cases.

  • Simple emergence. (From random to a monopoly.)
  • Symbiosis. (Emergence of two mutually supporting behaviours.)
  • Indeterminacy. (Emergence of co-existing behaviours where the proportions are indeterminate.)
  • Turing patterns. (Groups of mutually supporting dynamic behaviours.)
  • Forest fires. (The gold standard in epidemiology, thoroughly researched.)

In addition we had an example to show how the relationships between extremists and moderates were key to urban conflicts.

The aim in all of these was not to be as accurate as the standard methods or to provide predictions, but to demonstrate SMUTS’ usefulness in identifying the key factors and behaviours. 

Viewpoints

A key requirement was to be able to accommodate any relevant measure or sense-making aid, so that users could literally see what effects were consistent from run to run, what weren’t, and how this varied across cases. The initial phase had a range of standard measures, plus Shannon entropy, as a measure of diversity.
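
As a concrete example of such a measure, the Shannon entropy of the current mix of behaviours gives a simple diversity index: zero for a monoculture, maximal when all behaviours are equally common. The function below is the standard calculation, not SMUTS code.

    import math
    from collections import Counter

    def behaviour_entropy(behaviours):
        """Shannon entropy (in bits) of a list of behaviour labels: a simple
        diversity measure, zero for a monoculture."""
        counts = Counter(behaviours)
        n = len(behaviours)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    print(behaviour_entropy(["tree"] * 90 + ["fire"] * 5 + ["ground"] * 5))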

Core dynamics

Everything emerged from an interactional model. One specified the extent to which one behaviour would support or inhibit nearby behaviours of various types. By default behaviours were then randomized across an agora and the relationships applied. Behaviours might then change in an attempt to be more supported. The fullest range of variations on this was supported, including a range of update rules, strategies and learning. Wherever possible these were implemented as a continuous range rather than separate cases, and all combinations were allowed.
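
A minimal sketch of that kind of update, under my own assumptions: behaviours sit on a toy ‘agora’ (a grid), a support/inhibit matrix says how much each behaviour is helped or hindered by each neighbouring behaviour, and cells occasionally switch to whichever behaviour is locally best supported. SMUTS supported a continuous range of such rules; this is only the simplest discrete case, with invented numbers.

    import random

    # How much a behaviour (row) is supported (+) or inhibited (-) by a
    # neighbouring behaviour (column). Purely illustrative numbers.
    SUPPORT = {
        "tree":   {"tree": +1.0, "fire": -2.0, "ground": 0.0},
        "fire":   {"tree": +1.5, "fire": +0.5, "ground": -1.0},
        "ground": {"tree": 0.0,  "fire": 0.0,  "ground": +0.1},
    }
    BEHAVIOURS = list(SUPPORT)

    def step(agora):
        """One update of a toy agora: each cell may switch to the behaviour
        best supported by its four neighbours."""
        n = len(agora)
        new = [row[:] for row in agora]
        for i in range(n):
            for j in range(n):
                neighbours = [agora[(i + di) % n][(j + dj) % n]
                              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                if random.random() < 0.5:   # only some cells reconsider each step
                    new[i][j] = max(BEHAVIOURS,
                                    key=lambda b: sum(SUPPORT[b][nb] for nb in neighbours))
        return new

    agora = [[random.choice(BEHAVIOURS) for _ in range(20)] for _ in range(20)]
    for _ in range(10):
        agora = step(agora)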

Illustration

SMUTS enables one to explore complex dynamic systems

SMUTS has a range of facilities for creating, emulating and visualising systems.

By default there are four quadrants. The bottom right illustrates the inter-relationships (e.g., fire inhibits nearby trees, trees support nearby trees). The top right shows the behaviours spread over the agora (in this case ground, trees and fire). The bottom left shows  a time-history of one measure against another, in this case entropy versus value of trees. The top-left allows one to keep an eye on multiple displays, forming an over-arching view. In this example, as in many others, attempting to get maximum value (e.g. by building fire breaks or putting out all fires) leads to a very fragile system which may last a long time but which will completely burn out when it does go. If one allows fires to run their course, one typically gets an equilibrium in which there are frequent small fires which keep the undergrowth down so that there are never any large fires.
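
The forest-fire behaviour described here is close to the standard Drossel-Schwabl forest-fire model, which is easy to emulate: trees regrow at random, lightning occasionally ignites a tree, and fire spreads to neighbouring trees. Suppressing every fire just lets fuel accumulate for one large burn, while frequent small fires keep losses bounded. The sketch below is that textbook model, not SMUTS itself, and the parameters are arbitrary.

    import random

    EMPTY, TREE, FIRE = 0, 1, 2

    def step(grid, p_grow=0.05, p_lightning=0.0005):
        """One synchronous update of the Drossel-Schwabl forest-fire model."""
        n = len(grid)
        new = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                cell = grid[i][j]
                if cell == FIRE:
                    new[i][j] = EMPTY            # burnt out
                elif cell == EMPTY and random.random() < p_grow:
                    new[i][j] = TREE             # regrowth
                elif cell == TREE:
                    neighbours = [grid[(i + di) % n][(j + dj) % n]
                                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                    if FIRE in neighbours or random.random() < p_lightning:
                        new[i][j] = FIRE         # spread, or a lightning strike
        return new

    grid = [[TREE if random.random() < 0.5 else EMPTY for _ in range(50)]
            for _ in range(50)]
    for _ in range(200):
        grid = step(grid)
    print("trees remaining:", sum(row.count(TREE) for row in grid))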

Findings

It was generally possible to emulate text-book models to show realistic short-run behaviours of systems. In the long term, simpler systems tended to show behaviours like other emulations, and unlike real systems. Introducing some degree of evolution, adaptation or learning tended to produce markedly more realistic behaviours: the details didn’t matter. Having behaviours that took account of uncertainty and hedged also had a similar effect.

Outcomes

SMUTS had a recognized positive influence, for example on the first fuel crisis, but the main impact has been in validating the ideas of Smuts et al.

Dave Marsay