Allen’s Dynamics of Knowledge and Ignorance

Emphasis and notes in italics are mine. Additional notes also appear indented, as does this.


Knowledge is not what we thought it was. Knowledge about the consequences of our beliefs, policies and actions requires that we understand and can predict how the world works, and we don’t. This is because we are just participants in a complex, co-evolutionary system with multiple spatial and temporal scales of interaction, where learning and transformation are occurring, and which is therefore fundamentally irreversible. In this situation, we encounter the paradox that greater apparent knowledge can really lead to greater uncertainty, since that “knowledge” may rely on the view of the world as a mechanical system, rather than an evolutionary one. This approach, which includes that of traditional Systems Science, is based on the misconception that all systems, even social and economic ones, can be broken down into interacting, stable components, whose coupled working can be completely understood. The struggle to make human systems toe this line led to the invention of “Rational Man” and to “Homo Economicus”, artificial constructs designed to represent human responses as mechanical.
As a mathematician, I find Peter’s work consistent with the relevant logic and mathematics, while being considerably more accessible. What it provides, that mathematics cannot, is a link to real-world problems. Mathematics as such can only comment on the logical consequences of others’ beliefs: it can never confirm any belief; it may usefully disconfirm a belief, or cause it to be modified, in conjunction with those with subject-matter experience. Thus we may know that Geometry or Probability Theory are perfectly valid as mathematics, but we should also know that the appropriateness of such theories to any given non-mathematical situation is outside the scope of mathematics. While a mathematician may express an opinion on applicability, such an opinion does not merit the same ‘weight’ as a mathematician’s knowledge of mathematics as such.

1. Complexity, Simplicity and Knowledge

Understanding “reality”, creating apparent “knowledge”, requires us to reduce the real complexity of any particular situation to a simpler, more understandable one, by making specific simplifying assumptions. The hope is that there exists a representation that, while being sufficiently simple to be understood, remains sufficiently representative of reality to be useful. It is not certain that such a level exists, but in dealing with a situation, it is our hope that it does, and that the assumptions made are sufficiently true. What are these assumptions?
“Assume is used when the guess is based on little or no evidence.” Logically then, any deductions based on the assumptions will be conditional on those assumptions, which seems harmless. The harm, it seems to me, comes when one forgets or overlooks the assumptions and treats them as ‘facts’ or as definitely ‘pragmatically useful’.

1.1 The Assumptions used to reduce Complexity to Simplicity

These are:
  1. That we can define a boundary between the part of the world that we want to “understand” and the rest. In other words, we assume first that there is a “System” and an “Environment”.
  2. That we have rules for the classification of objects that lead to a relevant taxonomy for the system components, which will enable us to understand what is going on. This is often decided entirely intuitively.
  3. The third assumption concerns the level of description below that which we are trying to understand, and assumes that the sub-components are either all identical to each other and to the average, or have a diversity that is at all times distributed “normally” around the average.
  4. That the individual behaviour of sub-components can be described by average interaction parameters.
The mathematical representation that results from making all four of these assumptions is that of a mechanical system that appears to “predict” the future of the system perfectly.
Peter is rightly critical of how mathematics was being used in mathematical representations of economies, for example. Others have gone further and opined that mathematics in itself is somehow wrong, misleading or useless. Others have blamed problems on a ‘mechanical’ or ‘reductionist’ world-view. Here Peter links these concerns, and blames the combination of mathematics with some dubious assumptions. Thus reform of economics, for example, could be achieved by discarding either the mathematics or the assumptions.
Mainstream economics, for example, tends to rely implicitly on the assumption that there are, at least ‘in principle’, some constraints on system boundaries, relevant classifications and levels of detail. This creates an incentive for active participants to consider breaching any such constraints, and experience shows that they often do (advisedly or not).
A fifth assumption that is often made in building models to deal with “reality”, is that of stationarity or equilibrium. It is assumed in classical and neo-classical economics for example, that markets move rapidly to equilibrium, so that fixed relationships can be assumed between the different variables of the system.  …  If knowledge is obtained by making simplifying assumptions, is it real or an illusion?
In general, dynamic non-deterministic systems may have ‘critical instabilities’ which not only destroy the apparent equilibria, but which can lead to novel systems requiring novel representations with differing system boundaries, relevant classifications, levels of detail and interactions. The conventional assumption in economics is that either such instabilities cannot occur, or that those with power would make sure they didn’t occur, or that if they were to occur there would be nothing useful we could do about them anyway.

But in other areas it is commonplace to make best estimates of appropriate boundaries and look out for breaches, taking action to defend or extend the boundaries as and when necessary.
[There are] three fundamentally different factors in the working of the system:
  1. Values of external factors, which are not modelled as variables in the system. These reflect the “environment” of the system, and as such present the “selection” pressure that will affect the system.
  2. Effects of spatial or network interaction, of juxtaposition, of the entities underlying the system. Often these will express non-linear effects of density for example, capturing the effects of structure, configuration and organisation on the functional operation of the equations. Juxtaposition could also refer not simply to geographical space, but to an organisational or network proximity, so that these parameters both drive and reflect the effects of different organisational structures.
  3. Values corresponding to the “performance” of the entities underlying [the system], due to their internal characteristics like technology, level of knowledge or particular strategies.
These three entirely different aspects have not been separated out in much of the previous work concerning non-linear systems, and this has led to much confusion. 1) is the link of the system to its external context. 2) reflects the interactions between the components of the system and 3) connects the behaviour of the system to the internal characteristics of the individuals involved. …

1.2 The Modelling Outcomes of different Assumptions

Clearly, the use of the assumptions above provides different interpretive frameworks for reality, some imposing far more constraints on what can happen than others. Relating Assumptions to Outcomes in terms of types of model, we have:
  • Making all 4 assumptions plus stationarity gives either a static equilibrium model, or one corresponding to a cyclic or chaotic attractor
  • Making all 4 assumptions leads to System Dynamics, a mechanical representation of changes
  • Making assumptions 1 to 3 leads to Self-Organising Dynamic models, capable of reconfiguring their spatial or organisational structure.
  • Making only assumptions 1 and 2 leads to mathematical models of evolutionary processes where the environment, system components and sub-components all co-evolve in a non-mechanical mutual “learning” process

Part of the confusion leading up to the financial crises around 2008 was the invention of novel financial instruments that didn’t fit the received classificatory wisdom. So even a mathematical model resting on assumption 2 seems unappealing, despite its ‘non-mechanical mutual “learning”’.

2. Models and Knowledge: Simple to Complex.

The “knowledge” generation process is based on understanding “what must be true” about a system. From the above, we see that the more “assumptions” made about “what must be true” for a system, the more seemingly precise the “knowledge” about future behaviour will be. This is because the more things there are that “must be true”, the more constrained is the possible future behaviour. If almost nothing “must be true”, then the system is free to do whatever it likes. Simplicity can be plucked from complexity, therefore, if and only if the simplifying assumptions are true. Let us consider the different types of models briefly, in ascending order of complexity.
There is a kind of uncertainty principle here: the more assumptions one makes the more precise one can be about the behaviours but the greater the uncertainty about the assumptions.

2.1) Equilibrium Models

These are models of the “final” situation that a system will attain, providing that it does not change qualitatively. [The] system will “go to” the attractor in whose basin it happens to lie initially.
The advantage of studying the attractor of the dynamical system is that it is clearly predictive. Either it is a stationary value … or it is a predictable cycle or chaotic attractor with a fixed range of variation and average value. Such an approach seems to offer the possibility of rationally making decisions by considering the situation (attractor) before an action, investment or policy, and the situation afterwards. Cost/Benefit analysis, for example, is based on this (fundamentally flawed) idea.
The problem is of course, not only whether the 4 assumptions are actually true, but in addition, whether something may happen along the way to the attractor. It does not take into account the possibility of non-linear effects, through which small non-average fluctuations or differences can be amplified and change the nature of the system.
I.e., there may be ‘critical instabilities’.
… In reality, people discover their future, and may change their behaviour as a result of real time feed-backs, thus changing the “target” equilibrium to which the system was heading.
Assumptions may be mutually unrealistic in the long-run, e.g. ever increasing house prices ( as a proportion of GDP) and a healthy economy.
… Of course, on its way to the new one, something else may change in turn, and of course, we may even find that there were in fact several possible outcomes, and so the real task would seem to be that of revealing these possible pathways to different futures
This uncertainty is to be expected when there are critical instabilities.
The Simplicity of such simple models is beguiling but misleading. The knowledge they offer is too simple, and in most human systems is based on assumptions that are incorrect. Such models view the future as the present, and hence hide the reality of change.

2.2) Non-Linear Dynamical Models

Non-linear dynamics (System Dynamics) are what results generally from a modelling exercise when assumptions (1) to (4) above are made, but equilibrium is not assumed. [The] trajectory traced by such equations corresponds … to the most probable trajectory of an ensemble of such systems. … Non-linear dynamics can exhibit a rich spectrum of possible behaviours.
Dynamical systems can:
a) run towards different possible stationary states. So, instead of a single, “optimal” equilibrium, there may exist several possible equilibria, possibly with different spatial configurations, and the initial condition of the system will decide which it adopts.
b) have different possible cyclic solutions. These might be found to correspond to the business cycle, for example, or to long waves.
c) exhibit chaotic motions of various kinds, spreading over the surface of a strange attractor.
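These three regimes can be seen in even the simplest non-linear map. The sketch below (a generic illustration, not one of Peter’s models) iterates the logistic map x → r·x·(1−x): depending on the parameter r, the same equation settles to a fixed point, a 2-cycle, or a chaotic attractor.

```python
# Iterate the logistic map x -> r*x*(1-x) and inspect the attractor.
# A standard textbook illustration of behaviours (a), (b) and (c) above.

def iterate_logistic(r, x0=0.2, transient=500, keep=8):
    """Discard a transient, then return the next `keep` iterates (rounded)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(iterate_logistic(2.9))  # settles to a single fixed point
print(iterate_logistic(3.2))  # a stable 2-cycle: alternates between two values
print(iterate_logistic(3.9))  # chaotic: bounded, but never settling
```

Raising r plays the role of strengthening the non-linearity: nothing in the form of the equation changes, yet the qualitative behaviour of the attractor changes completely.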


But this is a mechanical system and assumes that the participating components do not change their behaviour as the result of their experiences. Behaviours are fixed, and there is no local detail, no good or bad luck, no individual diversity and no change in responses whatever the participants endure. In other words, once again, this is simply not going to be true for almost any human system. It is probably equally untrue for biological and ecological systems, and
hence the results and information calculated from such system models should be treated with great caution.

2.3) Self-Organising Systems

If we do not make assumption 4, then, provided we accept that different outcomes may now occur, we can explore the gains this brings. This corresponds to admitting to “freedoms” that exist within the system. If individuals do not know that they are supposed to behave like the average, and indeed do not know what the average is, then they don’t behave like the average, and have freedom to be non-average.
(Actually, for ‘fat-tailed’ distributions ‘the average’ is not observable, so individuals can’t be affected by it.)
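The note above can be illustrated numerically. In this sketch (my construction, not Peter’s) Cauchy samples are generated from uniform variates via the tan transform; their running mean keeps drifting however many samples are taken, whereas the running mean of normal samples settles down:

```python
# Running mean of heavy-tailed (Cauchy) samples vs. normal samples.
# For the Cauchy distribution the mean does not exist, so "the average"
# is indeed not observable: the running mean never converges.
import math, random

def running_mean(samples):
    total, means = 0.0, []
    for i, s in enumerate(samples, 1):
        total += s
        means.append(total / i)
    return means

rng = random.Random(42)
n = 100_000
cauchy = [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]
normal = [rng.gauss(0.0, 1.0) for _ in range(n)]

# Compare how much the running mean still moves over the last half of the run.
tail = slice(n // 2, n)
c_tail = running_mean(cauchy)[tail]
n_tail = running_mean(normal)[tail]
print("Cauchy running mean still drifts over:", round(max(c_tail) - min(c_tail), 4))
print("Normal running mean drifts over:", round(max(n_tail) - min(n_tail), 4))
```

The normal running mean is pinned down by the law of large numbers; the Cauchy running mean is repeatedly thrown about by single enormous samples, so no individual could ever “behave like the average”.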

This destroys the idea of a trajectory, and gives the system a collective adaptive capacity corresponding to the spontaneous spatial reorganisation of its structure…. The fact is that in the real system, unpredictable runs of good and bad luck, represented by “noise”, can and do occur, and these deviations from the average rate of events mean that a real system can “tunnel” through potential barriers, the separatrices in state space. As a result it can switch between attractor basins and spontaneously undergo changes in configuration and organisation. However, it is still a model couched in terms of “stereotypes”, excluding evolution and learning on the part of participants.
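This “tunnelling” between attractor basins can be sketched with a generic double-well system, dx/dt = x − x³, which has two stable states at x = ±1. The Euler–Maruyama simulation below is a minimal illustration of the mechanism, not Allen’s actual model: without noise the state stays in its starting basin forever; with noise it repeatedly jumps the barrier at x = 0.

```python
# Noise-driven switching between the two attractor basins of the
# double-well system dx/dt = x - x**3 (stable states at x = +1 and x = -1).
import math, random

def simulate(sigma, steps=50_000, dt=0.01, x0=1.0, seed=1):
    """Euler-Maruyama integration; returns (final state, basin switches)."""
    rng = random.Random(seed)
    x, switches, last_sign = x0, 0, 1
    for _ in range(steps):
        x += (x - x**3) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        sign = 1 if x >= 0 else -1
        if sign != last_sign:      # crossed the separatrix at x = 0
            switches += 1
            last_sign = sign
    return x, switches

_, quiet = simulate(sigma=0.0)   # deterministic: stays in its starting basin
_, noisy = simulate(sigma=0.8)   # with noise: crossings between basins occur
print("basin switches without noise:", quiet)
print("basin switches with noise:", noisy)
```

The deterministic (average-behaviour) model can never leave the basin it starts in; it is precisely the non-average fluctuations that let the system reorganise.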

2.4) Evolutionary Complex Systems

In biology … . This is the mechanism by which adaptation takes place. This demonstrates the vital part played by exploratory, non-average behaviour, and shows that, in the long term, evolution selects for populations with the ability to learn, rather than for populations with optimal, but fixed, behaviour.
In this section Peter presents a specific theory which demonstrates that the previous approaches are inadequate and can be improved on.
Now we can clearly see the basis for the predictive “knowledge” afforded by a mechanical model. It will only be true if the participating individuals have no freedom to change their behaviour in the light of their experiences. What is missing is the underlying, inner dynamic that is really running under the system dynamics. However, if all “eccentricity” is rapidly suppressed in the system, then evolution will itself be suppressed, and the “system dynamics” will then be a good representation of reality. This is the recipe for a mechanical system, and the ambition of many business managers and military men. However if instead micro-diversity is allowed and even encouraged then the system will contain an inherent capacity to adapt, change and evolve in response to whatever selective forces are placed upon it. Clearly therefore, sustainability is much more related to micro-diversity than to mechanical efficiency.
Several important points can now be made.
  • Firstly, a successful and sustainable evolutionary system will clearly be one in which there is freedom for imagination and creativity to explore at the individual level, and to seek out complementarities and loops of positive feedback which will generate a stable community of actors.
  • Secondly, the self-organisation of our system leads to a highly co-operative system, where the competition per individual is low, but where loops of positive feedback and synergy are high. In other words, the free evolution of the different populations, each seeking its own growth, leads to a community. An individual’s identity and any simple basis for future actions (i.e. self-interest) become unclear. The success of the network of individuals results from their combined interaction, and so the intelligence resides both in the network links and the particularities of the nodes.
  • The third important point, particularly for modellers, is that it would be impossible to infer (backwards) the “correct” model equations (even for this simple example) from observing the population dynamics of the system. Because any single behaviour could be playing a positive or negative role in a self, pair or triplet (etc.) interaction, it would be impossible to “untangle” its interactions and write down its equations simply by noting the population’s growth or decline.

 …Probably, most situations are sufficiently complex that the people involved do not really know the circles of interaction that sustain their existence.

Rational scientific thought has shown the inadequacy of rationality in surviving in the real world. We must always explore and go beyond any present domain if we are to maintain adaptive responses and survive.
This seems all very hopeful. But note:
  • The theory is incomplete, leaving scope for new insights to falsify the positive conclusions.
  • New factors, (such as social media) might destabilise established communities, and it might take a long time for new ones to form.
  • New forms of communities may not look like the old.

So maybe one shouldn’t rely too much on seeming emerging communities in deciding where or how to explore.

2.5 The General Structure of Modelling

[The] evolutionary model allows both for an organisational response to the environment (L+1) at the system level (L), and for adaptivity and learning to occur within components at level L-1. This couples the L+1, L and L-1 levels in a co-evolutionary process.
Complex systems modelling involving elements with internal structure that can change in response to their experiences, leads naturally to a hierarchy of linked levels of description. If all the levels of description are “satisfied” with their circumstances, then the hierarchy will be stable. But, when the behaviour and strategies of many individuals, at a given level, do not provide them with satisfactory pay-off in the macrostructure that exists, eccentric and deviant behaviour will be amplified which may lead to a structural re-organisation of the system. Stability, or at least quasi-stability, will occur when the microstructures of a given level are compatible with the macro-structures they both create and inhabit, and vice versa.
[If] we are interested in modelling the longer term associated with making strategic decisions and planning, then we must try to go beyond the “mechanical” description with fixed structure and try to develop models which can describe structural change and emergent levels of description endogenously.

3. Innovation and Design in Complex Systems

Here Peter looks at a case of interest. I pick out some general ‘conclusions’.

3.3. Trust, Experience and Chance

The work above, based on the ideas coming from evolutionary complex systems, shows us that the job of designing or defining new products is one of great subtlety. It also implies the need for trust and long term relationships in the management chain, because essentially, it is important to explore a range of concepts, and to be able to pursue them sufficiently far, without having a clear idea of their relative merits until afterwards.
This requires “loose” financial control, since the people making the “explorations” cannot justify their actions in terms of short-term returns.

4. The Law of Excess Diversity

In earlier times, we may have thought that we could know what the different possible environmental challenges were that a system might be expected to deal with, but now we are more accepting of uncertainty, and of the impossibility of knowing how many different things could occur. We recognise the issue of uncertainty and change both in the external environment and in our own organisations, technologies and ideas. The discussion above concerning complex systems and their evolution brings us to the recognition of a new law for systems. We shall call it the Law of Excess Diversity. It states that:
  • For a system to survive as a coherent entity over the medium and long term, it must have a number of internal states greater than those considered requisite to deal with the outside world.
… Here we define variety as being a selection of possibilities that share a common attribute space, and diversity as being a selection that spans different attribute spaces. In addition, this new law means that either there is hidden diversity within an “adaptable” system or that it has within it mechanisms that can produce diversity as and when it is required. This means that some overhead of diversity, or of a diversity creating mechanism, must be carried before it can be shown to be necessary.
The long-term functional capacity of a system, its ability to “deal with” a changing environment and implement new technologies, relies on the presence of actors having short-term sub-optimal behaviours.
“Excess” diversity, means that in addition to having a spread of responses that are specifically designed for the known possibilities, we need a range of “other” possible responses and behaviours, which in the present are not logically justified by the known “facts”.
This is because, “you don’t know what it is that you don’t know”.

5. Conclusions

If knowledge is used, then it changes behaviour. If behaviour changes, then the system may respond creatively, and we will have “used up” our knowledge. This is the meaning of co-evolution. Anything that has to interact with an environment, and with other living things, in order to survive, will find that the value of any piece of knowledge is ephemeral.

What matters is the capacity to generate new knowledge and to forget old. This is where non-average behaviour and internal diversity are crucial. It is the non-average behaviour that goes beyond the present structure of the system. But the present structure, coming from the average behaviour of the components, defines present rationality, normality and banality. It is the inner ferment of abnormality that can explore and invent, and providing this is tolerated, even helped and eventually assessed, then innovations can occur and with them structural change.

The successful co-evolution of a system with its environment therefore occurs through the dynamic interplay of the average and non-average behaviours within it. Successive instabilities occur each time that existing structure and organisation fail to withstand the impact of some new circumstance or behaviour, and when this occurs, the system re-structures, and becomes a different system subjected in its turn to the disturbances from its own non-average individuals and situations. It is this dialogue between successive “systems” and their own inner “richness” that provides the capacity for continuous adaptation and change.

This vital exploratory activity cannot be justified in the short term, but only in the long. The ability to explore, and to monitor and interpret the experience, is what will confer adaptability on an organisation. It is the essential creative, adaptive power of evolutionary complex systems, where structure and organisation emerge and change over time, in a pattern of competition and co-operation.
Understanding the source of “learning” within organisations, and within ourselves, comes down to representing the information flows that occur in the relevant attribute spaces of different actors, and their particular abilities to scan the outside world, and to make meaning and “knowledge” from this. It also concerns the internal relationships within the organisation and whether it is able to successfully monitor such learning, and in turn make meaning from it, and implement changes accordingly.
Management practice has tended to focus on improving the short-term efficiency and effectiveness of the organisation, through the use of competitive forces inside the business, by economic rationalisation and a general paring down of the system to the “leanest” possible. Such an approach runs counter to that required for sustainability in the long term, which as we have seen demands diversity and some slack in the system for exploration, as well as a co-operative atmosphere to allow knowledge to be built up where it is required. However, the use of the complexity paradigm for understanding and guiding business practices is only at its beginning, and so the conclusions here are only the first steps in this exciting new direction.

My Comments

To quote Peter:

Knowledge about the consequences of our beliefs, policies and actions requires that we understand and can predict how the world works, and we don’t.

As Plato noted, it would be nice if there were some reliable way of determining what we ought to do, so that our actions have no dependence on our free-will or judgement, only on the situation that we are (rightly) responding to. Then we need not take any responsibility for our actions: there was no alternative.

To be robust to criticism, any such method would need to be culturally independent and formal, and hence ‘logical’. For example, where quantity is involved it would need to be mathematical. But which mathematics? Picking the simplest credible mathematics, such as Euclidean Geometry or Probability Theory and then just getting on with life seems pragmatic, unless and until it leads to concrete problems. In this sense mathematics is just a tool for management, responding to its needs.

Unfortunately, as in Quantum Physics, experience suggests a kind of complementarity principle: we may have adequate beliefs for particular domains, but not (yet?) a universal set of beliefs. One approach to this is to work within familiar areas, applying familiar methods and avoiding genuine novelty. But people seem (understandably) discontent with this. The ‘pragmatic’ approach is to proceed with one’s beliefs to the best of one’s ability, reviewing them as and when necessary. But this is worrying, and the application of mathematics seems to improve efficiency and effectiveness at uncovering unintended consequences just as much as at achieving the intended effects. This may be why mathematics sometimes gets a bad press.

Peter lists four ‘common sense’ assumptions and, in effect, suggests that we could save mathematics and logic at the expense of abandoning these, where necessary. From a mathematician’s point of view this opens up whole new areas to explore: mostly those developed since about 1910 concerning structure and uncertainty (as discussed in my blog). This ‘modern’ (non-classical) mathematics shows that, for example, classical probability theory has no credible interpretation for some credible systems of interest, such as Peter describes.

Whitehead’s ‘Process Logic’, building on the work of Keynes, was an early attempt to explain this. Here everything is regarded as a process; processes act on processes and are transformed by processes, as in Peter’s Levels. Thus, to replace Peter’s questionable assumptions, we have:

  1. Processes can appear to be closed systems, with definite boundaries, but only unless and until they are acted on by larger processes (e.g. ‘evolutionary’ or ‘learning’.)
  2. Classificatory processes may be useful as long as they act within a stable system, but if the system changes so should they.
  3. Processes may appear to be closed and stable, interacting with and acting on other closed and stable systems, but only as long as these other processes actually are stable.
  4. All information (and hence every description) is contingent, as above.

Thus Peter’s approach in the first two sections is compatible with process theory. I do however have concerns:

  • The term ‘hierarchy’ has a lot of baggage, some of which may need to be rejected.
  • Peter says “If all the levels of description are “satisfied” with their circumstances, then the hierarchy will be stable.” I am not clear how to interpret this in process theory, but it seems wrong or misleading. If systems need to explore in order to adapt, then the exploratory system can never be ‘satisfied’ with the status quo. And if the exploratory system explores too far too fast the system as we have known it will surely be destabilised.

Section 3 seems reasonable, but I would say that exploration requires more than just financial control. While putting bounds on what may be explored is ultimately dangerous (as Peter argues), it may be prudent to constrain the rate and timing of exploration, so that the system as a whole can cope with the destabilizing effects of new insights. This is not easy.

Section 4 points to Ashby’s Cybernetics. I would add that while one does need to always ‘push the envelope’ and ‘think outside the box’, one needs a prudent approach before pushing too hard or opening up new boxes.

The conclusions seem reasonable but:

  • To ‘What matters is the capacity to generate new knowledge and to forget old’ I would say that the capacity to cope with new knowledge also matters.
  • While we do need ‘diversity and some slack in the system for exploration, as well as a co-operative atmosphere to allow knowledge to be built up where it is required’, too much diversity or slack can lead to ineffective co-operation, so we need to build up the capacity to cope with it, which (to me) suggests collaboration rather than co-operation.

Unfortunately, process theory is not widely understood and has attracted some baggage of its own. From a mathematical perspective it has largely been replaced by category theory. Categories are like processes but even more abstract. If one accepts the view that ‘proper’ mathematics is categorical, then one can ‘categorically’ say that much of Peter’s findings would be a necessary consequence of taking a mathematical approach to systems. This suggests that the appropriate mathematics for systems is categorical, not classical. In particular (classical) probability theory is not appropriate, and it matters.

Dave Marsay
