Synthetic Modelling of Uncertain Temporal Systems
February 19, 2011
SMUTS is a computer-based ‘exploratorium’, to aid the synthetic modelling of uncertain temporal systems. I had previously worked on sense-making systems based on the ideas of Good, Turing and Keynes, and was asked to get involved in a study, starting November 1999, on the potential impact of any Y2K bugs. Not having a suitable agreed model, we needed a generic modelling system, able at least to emulate the main features of all the part models. I had been involved in conflict resolution, where avoiding cultural biases and being able to meld different models was often key, and JC Smuts’ Holism and Evolution seemed a sound if hand-wavy approach. SMUTS is essentially a mathematical interpretation of Smuts. I was later able to validate it when I found from the Smuts Papers that Whitehead, Smuts and Keynes regarded their work as highly complementary. SMUTS is actually closer to Whitehead than to Smuts.
An actual system is a part of the actual world that is largely self-contained, with inputs and outputs but no significant external feedback loops. What counts as significant is a judgement. Any external feedback loop will typically have some effect, but we may disregard it if we can be sure that its effects build up too slowly to matter. Deciding what may be treated as smaller systems is a matter of analysis of the larger systems containing them. Thus plankton are probably not a part of the weather system but may be a part of the climate.
The term system may also be used for a model of a system, but here we mean an actual system.
We are interested in how systems change in time, or ‘evolve’. Such changes include all types of evolution, adaptation, learning and desperation, and hence are much broader than the usual ‘mathematical models’.
Keynes’ notion of uncertainty is essentially Knightian uncertainty, but with more mathematical underpinning. It thus extends more familiar notions of probability as ‘just a number’. As Smuts emphasises, systems of interest can display a much richer variety of behaviours than typical probabilistic systems. Keynes has detailed the consequences for economics at length.
Pragmatically, one develops a single model which one exploits until it fails. But for complex systems no single model can ever be adequate in the long run, and, as Keynes and Smuts emphasised, it can be much better to recognize that any conventional model will be uncertain. A key part of the previous sense-making work was the multi-modelling concept: maintaining the broadest range of credible models, some more precise and others more robust, and then hedging across them, following Keynes et al.
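The hedging idea can be illustrated with a minimal sketch. The function name, the forecast values and the equal default weights are assumptions made for the example, not SMUTS’ actual mechanism:

```python
def hedged_forecast(forecasts, weights=None):
    """Combine point forecasts from several models, reporting the spread
    across them as a crude uncertainty band rather than trusting any
    single model."""
    if weights is None:  # default assumption: weight the models equally
        weights = [1.0 / len(forecasts)] * len(forecasts)
    mean = sum(w * f for w, f in zip(weights, forecasts))
    spread = max(forecasts) - min(forecasts)
    return mean, spread

# Three credible but divergent models of the same quantity:
estimate, band = hedged_forecast([10.0, 12.0, 30.0])
```

The point is that disagreement between credible models is surfaced as part of the answer, rather than discarded in favour of a single ‘best’ model.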
In conflict resolution it may be enough simply to show the different models of the different sides. But equally one may need to synthesize them, to understand the relationships between them and the scope for ‘rationalization’. In sense-making this is essential to the efficient and effective use of data; otherwise one can have a ‘combinatorial explosion’.
To set SMUTS going, it was first developed to emulate some familiar test cases:
- Simple emergence. (From random to a monopoly.)
- Symbiosis. (Emergence of two mutually supporting behaviours.)
- Indeterminacy. (Emergence of co-existing behaviours where the proportions are indeterminate.)
- Turing patterns. (Groups of mutually supporting dynamic behaviours.)
- Forest fires. (The gold standard in epidemiology, thoroughly researched.)
In addition we had an example to show how the relationships between extremists and moderates were key to urban conflicts.
The aim in all of these was not to be as accurate as the standard methods or to provide predictions, but to demonstrate SMUTS’ usefulness in identifying the key factors and behaviours.
A key requirement was to be able to accommodate any relevant measure or sense-making aid, so that users could literally see what effects were consistent from run to run, what weren’t, and how this varied across cases. The initial phase had a range of standard measures, plus Shannon entropy, as a measure of diversity.
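As an illustration of the diversity measure, Shannon entropy over the mix of behaviours can be computed as below. The behaviour labels are illustrative, not SMUTS’ actual data:

```python
import math
from collections import Counter

def shannon_entropy(behaviours):
    """Shannon entropy (in bits) of the mix of behaviours: 0 for a
    monopoly, and highest when all behaviours are equally common."""
    counts = Counter(behaviours)
    n = len(behaviours)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# An even mix of two behaviours gives 1 bit; a monopoly gives 0 bits.
even_mix = shannon_entropy(['tree', 'fire'] * 8)   # → 1.0
monopoly = shannon_entropy(['tree'] * 16)          # → 0 bits
```

Tracking such a measure from run to run is one way of seeing whether diversity is being maintained or lost.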
Everything emerged from an interactional model. One specified the extent to which one behaviour would support or inhibit nearby behaviours of various types. By default behaviours were then randomized across an agora and the relationships applied. Behaviours might then change in an attempt to be more supported. The fullest range of variations on this was supported, including a range of update rules, strategies and learning. Wherever possible these were implemented as a continuous range rather than separate cases, and all combinations were allowed.
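A minimal sketch of such an interactional model follows, assuming just two behaviours and an illustrative support/inhibit matrix; the actual SMUTS rules were far richer, with continuous ranges of update rules, strategies and learning:

```python
import random

# SUPPORT[a][b]: the extent to which a neighbour doing b supports
# behaviour a.  The two behaviours and the numbers are illustrative.
SUPPORT = {
    'A': {'A': 1.0, 'B': -0.5},
    'B': {'A': -0.5, 'B': 1.0},
}

def step(grid):
    """Each cell adopts whichever behaviour its neighbours would most
    support, i.e. behaviours change in an attempt to be more supported."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            neighbours = [grid[(i + di) % n][(j + dj) % n]
                          for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            new[i][j] = max(SUPPORT, key=lambda b: sum(SUPPORT[b][nb]
                                                       for nb in neighbours))
    return new

# Behaviours are first randomized across the 'agora', then updated:
random.seed(0)
grid = [[random.choice('AB') for _ in range(10)] for _ in range(10)]
for _ in range(20):
    grid = step(grid)
```

With mutually supporting behaviours and symmetric inhibition, runs like this tend to coarsen from a random start into single-behaviour domains, along the lines of the ‘simple emergence’ case above.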
By default there are four quadrants. The bottom right illustrates the inter-relationships (e.g., fire inhibits nearby trees, trees support nearby trees). The top right shows the behaviours spread over the agora (in this case ground, trees and fire). The bottom left shows a time-history of one measure against another, in this case entropy versus value of trees. The top-left allows one to keep an eye on multiple displays, forming an over-arching view. In this example, as in many others, attempting to get maximum value (e.g. by building fire breaks or putting out all fires) leads to a very fragile system which may last a long time but which will completely burn out when it does go. If one allows fires to run their course, one typically gets an equilibrium in which there are frequent small fires which keep the undergrowth down so that there are never any large fires.
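The forest-fire dynamics described above can be sketched as a simple cellular automaton in the Drossel–Schwabl style. The growth and lightning rates here are illustrative assumptions, not SMUTS’ parameters:

```python
import random

# States: bare ground, trees, and fire.
GROUND, TREE, FIRE = 0, 1, 2

def step(grid, p_grow=0.05, p_lightning=0.001, rng=random):
    """One synchronous update on a toroidal grid."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            cell = grid[i][j]
            if cell == FIRE:
                new[i][j] = GROUND                 # fire burns out
            elif cell == TREE:
                neighbours = [grid[(i + di) % n][(j + dj) % n]
                              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                if FIRE in neighbours or rng.random() < p_lightning:
                    new[i][j] = FIRE               # fire inhibits nearby trees
            elif rng.random() < p_grow:
                new[i][j] = TREE                   # trees regrow on ground
    return new

rng = random.Random(1)
grid = [[TREE if rng.random() < 0.5 else GROUND for _ in range(20)]
        for _ in range(20)]
for _ in range(200):
    grid = step(grid, rng=rng)
```

Left to run its course, such a model tends to settle towards frequent small fires rather than rare catastrophic ones, in line with the equilibrium described above.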
It was generally possible to emulate text-book models so as to show realistic short-run behaviours of systems. In the long term, simpler systems tended to behave like other emulations, and unlike real systems. Introducing some degree of evolution, adaptation or learning tended to produce markedly more realistic behaviours: the details didn’t matter. Having behaviours that took account of uncertainty and hedged had a similar effect.
SMUTS had a recognized positive influence, for example on the first fuel crisis, but the main impact has been in validating the ideas of Smuts et al.