Morphogenesis concerns the origins of structure. According to Wikipedia, Turing’s paper describes ‘the way in which non-uniformity … may arise naturally out of a homogeneous, uniform state. The theory … is seen by some as the very beginning of chaos theory.’
Thus, while the paper focusses on morphogenesis in biology, it has a broad relevance to ‘emergent properties’ in systems of all kinds. I have therefore replaced Turing’s term ‘organism’ by the word ‘org’, a term invented by Turing’s colleague, Jack Good, to cover organised structures of all kinds, including organisms and organisations.
It is suggested that a system of [entities or factors] reacting together and diffusing through [the environment], is adequate to account for the main phenomena of morphogenesis. Such a system, although it may originally be quite homogeneous, may later develop a pattern or structure due to an instability of the homogeneous equilibrium, which is triggered off by random disturbances.
A Model …
Turing’s model of a cell comprises a structured environment (the cell) containing variables (chemicals) that interact with each other. Thus one has a ‘system’ with a ‘state’ which (Turing supposes) evolves according to known ‘laws’, represented by partial differential equations.
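Turing’s set-up can be sketched in code. The following toy version (the coefficients, diffusion rates and names are my own illustrative choices, not Turing’s) has a ring of cells, each holding the concentrations of two chemicals that react within the cell and diffuse to the neighbouring cells:

```python
import numpy as np

# A minimal sketch of a Turing-style ring of cells. The reaction
# coefficients and diffusion rates are illustrative choices of mine,
# not values from Turing's paper.

def laplacian(u):
    """Discrete Laplacian on a ring: each cell exchanges with its two neighbours."""
    return np.roll(u, 1) + np.roll(u, -1) - 2 * u

def step(x, y, dt=0.01, d_x=0.25, d_y=1.0):
    """One Euler step of linearised reaction plus diffusion."""
    fx = 2.0 * x - 1.0 * y    # x promotes itself; y inhibits x
    fy = 7.0 * x - 3.0 * y    # x promotes y; y decays
    return (x + dt * (fx + d_x * laplacian(x)),
            y + dt * (fy + d_y * laplacian(y)))

# Start almost homogeneous: tiny random disturbances about equilibrium.
rng = np.random.default_rng(0)
x = 1e-3 * rng.standard_normal(20)
y = 1e-3 * rng.standard_normal(20)
for _ in range(8000):
    x, y = step(x, y)
# By now the disturbances have grown into a spatial pattern:
# the homogeneous symmetry is broken.
```

Starting from an almost homogeneous state, the small random disturbances are amplified into a spatial pattern, which is the symmetry-breaking discussed next.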
The Breakdown of Symmetry and Homogeneity
The usual intuition is that
a system which has spherical symmetry, and whose state is changing because of chemical reactions and diffusion, will remain spherically symmetrical for ever. (The same would hold true if the state were changing according to the laws of electricity and magnetism, or of quantum mechanics.) It certainly cannot result in an organism such as a horse, which is not spherically symmetrical.
… It is, however, important that there are some deviations, for the system may reach a state of instability in which these irregularities, or certain components of them, tend to grow. If this happens a new and stable equilibrium is usually reached, with the symmetry entirely gone. The variety of such new equilibria will normally not be so great as the variety of irregularities giving rise to them.
… The situation is very similar to that which arises in connexion with electrical oscillators. It is usually easy to understand how an oscillator keeps going when once it has started, but on a first acquaintance it is not obvious how the oscillation begins.
… Unstable equilibrium is not, of course, a condition which occurs very naturally. It usually requires some rather artificial interference, such as placing a marble on the top of a dome. Since systems tend to leave unstable equilibria they cannot often be in them. Such equilibria can, however, occur naturally through a stable equilibrium changing into an unstable one. For example, if a rod is hanging from a point a little above its centre of gravity it will be in stable equilibrium. If, however, a mouse climbs up the rod the equilibrium eventually becomes unstable and the rod starts to swing.
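Turing’s rod-and-mouse example can be caricatured in code. In this toy model (the masses and heights are my own), the equilibrium is stable while the combined centre of gravity lies below the point of suspension, and the climbing mouse eventually raises it above:

```python
# A toy version (my own construction) of Turing's hanging-rod example:
# a rod suspended from a point just above its centre of gravity is a
# stable pendulum; as a mouse climbs above the pivot, the combined
# centre of gravity rises until the equilibrium turns unstable.

def is_stable(rod_mass=100.0, rod_cog=-0.05, mouse_mass=20.0, mouse_height=0.0):
    """Stable iff the combined centre of gravity lies below the pivot.

    Heights are measured upward from the pivot, so rod_cog = -0.05
    puts the rod's centre of gravity 5 cm below the suspension point.
    """
    cog = (rod_mass * rod_cog + mouse_mass * mouse_height) / (rod_mass + mouse_mass)
    return cog < 0

# As the mouse climbs, stability is eventually lost:
heights = [0.0, 0.1, 0.2, 0.3, 0.4]
print([is_stable(mouse_height=h) for h in heights])
# crossing: 20*h > 100*0.05, i.e. h > 0.25, so the last two entries are False
```

Nothing dramatic happens at the crossing itself; the system is still in equilibrium. What has changed is the character of that equilibrium.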
Left-handed and Right-handed …
Though these [emergent developmental] effects may be large compared with the statistical disturbances they are almost certainly small compared with the ordinary [development] effects. This will mean that they only have an appreciable effect during a short period in which the breakdown of left-right symmetry is occurring.
Reactions and Diffusion …
Turing considers a system whose evolution is governed by a partial differential equation with two components. The first is the ‘reaction’ component, which he assumes to be approximately linear. The second is a diffusion component. He shows that the solution is a sum of wave-like terms, each of which grows or decays exponentially, possibly while oscillating. Growth breaks symmetry.
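This can be checked numerically. In the following sketch (the reaction matrix and diffusion rates are illustrative choices of mine), the reaction alone is stable and diffusion alone is smoothing, yet together they make some patterned modes grow exponentially:

```python
import numpy as np

# Growth rates of the spatial modes of a linearised reaction-diffusion
# system on a ring of N cells (illustrative coefficients, not Turing's).
# Mode k is governed by the matrix A - q(k) * diag(Dx, Dy), where
# q(k) = 2 - 2*cos(2*pi*k/N) is the k-th eigenvalue of minus the
# discrete Laplacian on the ring.

A = np.array([[2.0, -1.0],    # reaction: x promotes itself, y inhibits x
              [7.0, -3.0]])   # x promotes y, y decays
Dx, Dy = 0.25, 1.0            # the inhibitor diffuses faster
N = 20

def growth_rate(k):
    """Largest real part among the eigenvalues of mode k."""
    q = 2.0 - 2.0 * np.cos(2.0 * np.pi * k / N)
    return np.linalg.eigvals(A - q * np.diag([Dx, Dy])).real.max()

rates = [growth_rate(k) for k in range(N // 2 + 1)]
# The homogeneous mode decays, but some patterned modes grow:
print(growth_rate(0) < 0, max(rates) > 0)   # True True
```

The homogeneous mode (k = 0) decays, so a perfectly uniform state stays uniform; but any disturbance with a component in a growing mode is amplified, and the symmetry goes.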
[T]he assumption that the system is still nearly homogeneous brings the problem within the range of what is capable of being treated mathematically. Even so many further simplifying assumptions have to be made. Another reason for giving this phase such attention is that it is in a sense the most critical period. That is to say, that if there is any doubt as to how the [org] is going to develop it is conceivable that a minute examination of it just after instability has set in might settle the matter, but an examination of it at any earlier time could never do so.
Types of Asymptotic Behaviour … after a Lapse of Time
[A]fter a lapse of time the behaviour of [the system above] is eventually dominated by the terms for which the corresponding [characteristic] has the largest real part. There may, however, be several terms for which this real part has the same value, and these terms will together dominate the situation, the other terms being ignored by comparison. There will, in fact, normally be either two or four such ‘leading’ terms. … One need not, however, normally anticipate that any further terms will have to be included.
Turing’s treatment is of a particular example. More generally, where Turing considers a ring, the number of terms to be considered is determined by the overall structure of the environment. In the simpler case one gets stationary wave-like patterns. With more terms one gets travelling waves, moving in opposite directions. The wave-length depends on both the reactants and the environment. Turing classifies the possible behaviours.
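One can exhibit the ‘leading terms’ by simulation. In this sketch (with illustrative coefficients of my own, not Turing’s), the fastest-growing mode predicted by the linear analysis is compared with the wave-number that actually dominates the pattern after a lapse of time:

```python
import numpy as np

# After a lapse of time the 'leading' terms dominate: the mode with the
# largest growth rate sets the wave-length of the pattern. Coefficients
# below are illustrative choices of mine, not Turing's.

A = np.array([[2.0, -1.0], [7.0, -3.0]])   # linearised reaction matrix
Dx, Dy = 0.25, 1.0                          # inhibitor diffuses faster
N, dt = 20, 0.01

def growth_rate(k):
    q = 2.0 - 2.0 * np.cos(2.0 * np.pi * k / N)
    return np.linalg.eigvals(A - q * np.diag([Dx, Dy])).real.max()

predicted = max(range(N // 2 + 1), key=growth_rate)   # fastest-growing mode

lap = lambda u: np.roll(u, 1) + np.roll(u, -1) - 2 * u
rng = np.random.default_rng(1)
x = 1e-3 * rng.standard_normal(N)
y = 1e-3 * rng.standard_normal(N)
for _ in range(8000):
    x, y = (x + dt * (A[0, 0] * x + A[0, 1] * y + Dx * lap(x)),
            y + dt * (A[1, 0] * x + A[1, 1] * y + Dy * lap(y)))

spectrum = np.abs(np.fft.rfft(x))
observed = int(np.argmax(spectrum[1:])) + 1   # skip the mean (k = 0)
# The dominant wave-number lies at (or next to) the fastest-growing mode.
print(predicted, abs(observed - predicted) <= 1)
```

The random disturbances feed every mode, but the fastest-growing mode ‘wins’, so the final wave-length is set by the reaction and diffusion rates, not by the details of the disturbance.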
Further Considerations …
[Whereas] it was supposed that the disturbances were not continuously operative, and that the marginal reaction rates did not change with the passage of time. These assumptions will now be dropped, though it will be necessary to make some other, less drastic, approximations to replace them. The (statistical) amplitude of the ‘noise’ disturbances will be assumed constant in time.
Turing considers the case where there is a growth term that sufficiently dominates all others, as when the term starts off as decay (i.e., damping) and then steadily and sufficiently rapidly increases, becoming growth (i.e., destabilising).
The physical significance of this latter approximation is that the disturbances near the time when the instability is zero are the only ones which have any appreciable ultimate effect. Those which occur earlier are damped out by the subsequent period of stability. Those which occur later have a shorter period of instability within which to develop to greater amplitude. This principle is familiar in radio, and is fundamental to the theory of the super-regenerative receiver.
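The principle admits a toy calculation (the growth-rate profile here is my own): a disturbance injected at time t is ultimately amplified by the exponential of the integral of the growth rate from t onwards, which is greatest for disturbances arriving just as the instability sets in:

```python
import math

# Sketch of the 'super-regenerative' principle with toy numbers of my own:
# the growth rate sigma(t) passes through zero at t = 0, from damping
# (negative) to growth (positive). A unit disturbance injected at time t
# is amplified by exp( integral of sigma from t to the end T ), so the
# disturbances that matter most are those arriving near the crossing.

def sigma(t):
    return 0.5 * t          # damping before t = 0, growth after

def amplification(t_inject, t_end=2.0, dt=1e-3):
    """Factor by which a disturbance at t_inject has grown by t_end."""
    total, t = 0.0, t_inject
    while t < t_end:
        total += sigma(t) * dt
        t += dt
    return math.exp(total)

early, at_crossing, late = amplification(-2.0), amplification(0.0), amplification(1.5)
print(early < at_crossing > late)   # True: the crossing dominates
```

Early disturbances are damped away before growth begins; late ones have little time left to grow; the crossing is the moment of maximum leverage.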
Turing shows that a negative cubic term leads to an ‘arrested’ exponential growth, and a new stability. However, if the term is positive one gets faster than exponential growth and – unless even higher order terms come into play – one gets infinite amplitude in finite time.
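The two cases can be illustrated by toy amplitude equations of my own (not Turing’s actual system):

```python
# Two toy amplitude equations, following Turing's discussion:
#   arrested growth:      da/dt = a - a**3   (negative cubic term)
#   catastrophic growth:  da/dt = a + a**3   (positive cubic term)
# The first settles to a new stable amplitude; the second blows up,
# reaching 'infinite amplitude in finite time' in the exact equation.

def evolve(cubic_sign, a=0.01, dt=1e-3, t_max=12.0, cap=1e6):
    steps = int(t_max / dt)
    for _ in range(steps):
        a += dt * (a + cubic_sign * a ** 3)
        if abs(a) > cap:          # numerical stand-in for 'infinity'
            return float('inf')
    return a

arrested = evolve(-1.0)     # tends to the new equilibrium a = 1
runaway = evolve(+1.0)      # escapes to (numerical) infinity
print(round(arrested, 3), runaway)   # 1.0 inf
```

The negative cubic term is the ‘effect previously ignored’ that halts the growth; with the positive sign, nothing in the model halts it at all.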
This phenomenon may be called ‘catastrophic instability’. In the case of two-dimensional systems [and higher] catastrophic instability is almost universal … . Naturally enough in the case of catastrophic instability the amplitude does not really reach infinity, but when it is sufficiently large some effect previously ignored becomes large enough to halt the growth.
Turing gives an example, resembling black blotches on white skin. In his next section he uses the new Manchester University Computer to study other cases, including catastrophic instability.
Non-linear Theory. Use of Digital Computers
The ‘wave’ theory which has been developed here depends essentially on the assumption that the reaction rates are linear functions of the concentrations, an assumption which is justifiable in the case of a system just beginning to leave a homogeneous condition. Such systems certainly have a special interest as giving the first appearance of a pattern, but they are the exception rather than the rule. Most of an organism, most of the time, is developing from one pattern into another, rather than from homogeneity into a pattern. One would like to be able to follow this more general process mathematically also. The difficulties are, however, such that one cannot hope to have any very embracing theory of such processes, beyond the statement of the equations. It might be possible, however, to treat a few particular cases in detail with the aid of a digital computer.
… It is thought … that the imaginary … systems which have been treated, and the principles which have been discussed, should be of some help in interpreting real … forms.
This paper has received most attention as a theory of biological morphogenesis, a reading that has gained some support from recent research. But Turing’s argument is more general. While the example given is biological, the theory would seem to apply as well to, for example, urban development, where one also has a combination of structured interaction and diffusion.
One metaphor for change is a rod lying across a pivot on a table: a mouse walks along and up it until the rod tips. The difference with Turing’s example is that there the rod remains in equilibrium as the mouse climbs. The change is not from one equilibrium to another, but from stability to instability. Thus Turing’s is a different kind of ‘tipping point’. The first kind is typically reversible; Turing’s kind is not. To detect this instability one needs to consider ‘higher order terms’ (details) and random disturbances. As a distribution, the situation is symmetric and the equilibrium remains. But a particular instance can break the equilibrium and, if the effect is magnified, give a catastrophic instability. Points to note include:
- If you don’t watch the mouse, there will be no warning of a change. Thus induction is only justified as long as you can be sure that there are no un-monitored particulars.
- If you ignore the possibility of a mouse, you will discount the possibility of change. So Occam’s razor is unsound.
- The significant higher-order terms could be direct positive feedback, which would obviously be troublesome, but could also be an over-reaction that provokes a cycle of further over-reactions, so that the indirect effect oscillates and builds up.
- The point at which the system becomes unstable is what is sometimes called the ‘nexus’.
- The impact of activity before the nexus is damped down, and so does not affect what happens afterward.
- After the nexus the system will often stabilise, in which case a great deal of effort will be needed to have any lasting impact.
- Thus the nexus is often the only opportunity to influence or predict the transformation of the system.
- Unless one is watching the mouse, one is practically restricted to one of two unsatisfactory strategies:
- Be ‘pragmatic’ in the sense of working with the system as it currently is, not looking out for specific change, but keeping back some reserves to cope with change should it happen, and then try to adjust to the new situation as best one can.
- Continually probe and engage with the situation, seeking to ‘get on top of’ and influence any change, should it occur.
- If one is watching, or even influencing, the mouse, one may be able to:
- Surge your activity as the nexus nears, so that you can be effective through the nexus and efficient otherwise.
- Move the situation towards or away from a nexus, or ‘shape’ the range of situations that may occur after it.
- If you are in a situation with many dimensions, you may be able to shape it so that only one or two issues dominate, and one can identify the range for the mouse, even if one cannot see the mouse. Alternatively, in a competitive situation you may be able to introduce extra factors to distract or confuse other actors, so that they are unable to identify and exploit any key factors.
In a competitive situation there is a clear ‘level of the fight’ issue: those who are unaware of the mouse, or are ‘pragmatically’ ignoring it, are unlikely to do well compared with those who are ‘playing’ the mouse. Ideally, one wants to shape situations so that when the nexus comes, there are no very bad outcomes for you. Otherwise, if one is struggling at the actual nexus, it is more straightforward force-on-force, a fight of attrition.
Relation to Other Work
Symmetry-breaking is an example of what Turing’s grand-tutor Whitehead called ‘emergence’. Whitehead’s approach was logical, with no calculus in sight. Keynes, a student of Whitehead’s, argued that the concept of ‘state’ was problematic, and showed how reflexive uncertainties could give rise to instabilities and emergence. Keynes’ work has sometimes been interpreted as being about ‘animal spirits’, hence leading to the view that the problems could be avoided by being more ‘rational’ or pragmatic.
In contrast, Turing considers a classical state-determined dynamical system, within a possibly fixed environment. Even here, one has a problem. Turing’s condition about higher-order terms corresponds to Keynes’ reflexive uncertainty, but without the psychological baggage. His paper is also more readable than Keynes’ Treatise or Whitehead’s work. It also resolves physics’ Loschmidt paradox by showing how time-symmetric dynamics together with randomness can give rise to ‘the arrow of time’.
Turing was a member of The Ratio Club, as was Ashby. Ashby’s Cybernetics is consistent with Whitehead’s theory and Turing’s model, and gives an explanation as to why new equilibria so commonly emerge, rather than chaos. (Keynes’ more jaundiced view was that we only see structure and simply ignore the chaos, treating it as ‘random’. There may be an element of truth in both views. But Ashby’s is less challenging.)
Turing’s statistical adviser, IJ (Jack) Good, developed a reformed notion of reasoning under uncertainty. In the conventional notion, prior to the nexus one would tend to use Bayes’ rule (i.e. probabilistic induction) to deduce that any change was increasingly unlikely. But if you know that there are mice, even if you can’t see them, you will view some change as increasingly likely, especially if you know that mice tend to run in straight lines (as suggested by Ashby’s model) or are in competition. In Good’s notation, the probability P(E|H:C) depends on a context ‘C’. Thus one can only deduce that certain events are improbable with respect to the current situation: one can say nothing about what would happen should a mouse upset things, unless one has observations relating to the mouse.
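The contrast can be caricatured in code. Below, the same run of quiet observations is read in two different contexts C (the numbers and rules are my own toy choices): under an exchangeable ‘no-mouse’ assumption, Laplace’s rule of succession makes change ever less probable, while in a context with a steadily climbing mouse the same quiet run makes change imminent:

```python
# A toy contrast (my own) between two of Good's contexts C for the same
# evidence E = 'n quiet periods with no change':
#   C1: changes are exchangeable rare events, so Laplace's rule of
#       succession makes change ever less probable;
#   C2: an unseen mouse climbs one rung per period and tips the rod at a
#       known height, so the same quiet run makes change ever more imminent.

def p_change_laplace(n):
    """P(change next period | n quiet periods), Laplace's rule of succession."""
    return 1.0 / (n + 2)

def p_change_mouse(n, tip_at=10):
    """P(change next period | mouse climbs one rung per period, tips at tip_at)."""
    return 0.0 if n + 1 < tip_at else 1.0

quiet = range(10)
induction = [p_change_laplace(n) for n in quiet]   # falls towards zero
mouse = [p_change_mouse(n) for n in quiet]         # jumps to 1 at the nexus
print(induction[-1] < induction[0], mouse[-1])     # True 1.0
```

The evidence is identical; only the context differs. That is Good’s point: the probabilities are conditional on C, and no amount of quiet observation bears on what happens outside it.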
As Wikipedia notes (above), Turing’s work is seen by some as the beginning of chaos theory. My own view is that chaos theory largely started without the benefit of any insights from Turing or others, and that the connection was only noticed later.
For Turing, the ‘initial condition’ is often one of stability, which some change of a variable turns into a critical instability and hence (typically) into alternate stabilities. Thus Turing puts chaos theory in a wide context. His work actually links to catastrophe theory, with Turing’s main case being a fold catastrophe. But Turing’s account is more straightforward: catastrophe theory is mostly about ‘deep results in topology’, and it generally assumes that one could work around the potential catastrophe if one recognised it, whereas in Turing’s framing one may be forced through its eye. Moreover, catastrophe theory mostly assumes some minimising principle, of the kind which Keynes criticised. The work of Keynes’ colleague Smuts and of Turing’s tutor, Russell, adds to Turing’s overall picture. Smuts and Keynes also provide examples of the application and impact of these ideas. It seems to me that their work makes more sense after having read Turing, and that Turing’s work has not been obsoleted.