From Being to Becoming

I. Prigogine, From Being to Becoming: Time and Complexity in the Physical Sciences, WH Freeman, 1980 

Summary

“This book is about time.” But it has much to say about complexity, uncertainty, probability, dynamics and entropy. It builds on his Nobel lecture, re-using many of the models and arguments, but taking them further.

Being is classically modelled by a state within a landscape, subject to a fixed ‘master equation’ describing changes with time. The state may be an attribute of an object (classical dynamics) or a probability ‘wave’ (quantum mechanics). [This unification seems most fruitful.] Such change is ‘reversible’ in the sense that if one reverses the ‘arrow of time’ one still has a dynamical system.

Becoming refers to more fundamental, irreversible, change, typical of ‘complex systems’ in chemistry, biology and sociology, for example. 

The book reviews the state of the art in theories of Being and Becoming, providing the hooks for their later reconciliation. Both sets of theories are phenomenological – about behaviours. Prigogine shows not only that there is no known link between the two theories, but that they are incompatible.

Prigogine’s approach is to replace the notion of Being as represented by a state, analogous to a point in a vector space, with that of an ‘operator’ within something like a Hilbert space. Stable operators can be thought of as conventional states, but operators can become unstable, which leads to non-statelike behaviours. Prigogine shows how in some cases this can give rise to ‘becoming’.

This would, in itself, seem a great and much needed subject for a book, but Prigogine goes on to consider the consequences for time. He shows how time arises from the operators. If everything is simple and stable then one has classical time. But if the operators are complex then one can have a multitude of times at different rates, which may be erratic or unstable. I haven’t got my head around this bit yet.

Some Quotes

Preface

… the main thesis …can be formulated as:

  1. Irreversible processes are as real as reversible ones …
  2. Irreversible processes play a fundamental constructive role in the physical world …
  3. Irreversibility … corresponds … to an embedding of dynamics within a vaster formalism. [Processes instead of points.] (xiii)

The classical, often called “Galilean,” view of science was to regard the world as an “object,” to try to describe the physical world as if it were being seen from the outside as an object of analysis to which we do not belong. (xv)

… in physics, as in sociology, only various possible “scenarios” can be predicted. [One cannot predict actual outcomes, only identify possibilities.] (xvii)

Introduction

… dynamics … seemed to form a closed universal system, capable of yielding the answer to any question asked. (3)

… Newtonian dynamics is replaced by quantum mechanics and by relativistic mechanics. However, these new forms of dynamics … have inherited the idea of Newtonian physics: a static universe, a universe of being without becoming. (4)

The Physics of Becoming

The interplay between function, structure and fluctuations leads to the most unexpected phenomena, including order through fluctuations … . (101)

… chemical instabilities involve long-range order through which the system acts as a whole. (104)

… the system obeys deterministic laws [as in classical dynamics] between two bifurcation points, but in the neighbourhood of the bifurcation points fluctuations play an essential role and determine the “branch” that the system will follow. (106) [This is termed “structurally unstable”.]

… a cyclic network of reactions [is] called a hypercycle. When such networks compete with one another, they display the ability to evolve through mutation and replication into greater complexity. …
The concept of structural stability seems to express in the most compact way the idea of innovation, the appearance of a new mechanism and a new species, … . (109)

… the origin of life may be related to successive instabilities somewhat analogous to the successive bifurcations that have led to a state of matter of increasing coherence. (123)

As an example, … consider the problem of urban evolution … (124) … such a model offers a new basis for the understanding of “structure” resulting from the actions (choices) of the many agents in a system, having in part at least mutually dependent criteria of action. (126)

… there are no limits to structural instability. Every system may present instabilities when suitable perturbations are introduced. Therefore, there can be no end to history. [DJM emphasis.] … we have … the constant generation of “new types” and “new ideas” that may be incorporated into the structure of the system, causing its continual evolution. (128)

… near bifurcations the law of large numbers essentially breaks down.
In general, fluctuations play a minor role … . However, near bifurcations they play a critical role because there the fluctuation drives the average. This is the very meaning of the concept of order through fluctuations … . (132)

… near a bifurcation point, nature always finds some clever way to avoid the consequences of the law of large numbers through an appropriate nucleation process. (134)

… For small-scale fluctuations, boundary effects will dominate and fluctuations will regress. … for large-scale fluctuations, boundary effects become negligible. Between these limiting cases lies the actual size of nucleation. (146)

… We may expect that in systems that are very complex, in the sense that there are many interacting species or components, [the degree of coupling between the system and its surroundings] will be very large, as will be the size of the fluctuation which could start the instability. Therefore … a sufficiently complex system is generally in a metastable state. (147) [But see Comments below.]

… Near instabilities, there are large fluctuations that lead to a breakdown of the usual laws of probability theory. (150)

The Bridge from Being to Becoming

[As foreshadowed by Bohr] we have a new form of complementarity – one between the dynamical and thermodynamic descriptions. (174)

… Irreversibility is the manifestation on a macroscopic scale of “randomness” on a microscopic scale. (178)

Contrary to what Boltzmann attempted to show, there is no “deduction” of irreversibility from randomness – they are only cousins! (177)

The Microscopic Theory of Irreversible Processes

The step made … is quite crucial. We go from the dynamical system in terms of trajectories or wave packets to a description in terms of processes. (186)

… Various mechanisms may be involved, the important element being that they lead to a complexity on the microscopic level such that the basic concepts involved in the trajectory or wave function must be superseded by a statistical ensemble. (194)

The classical order was: particles first, the second law later – being before becoming! It is possible that this is no longer so when we come to the level of elementary particles and that here we must first introduce the second law before being able to define the entities. (199)

The Laws of Change

… Of special interest is the close relation between fluctuations and bifurcations which leads to deep alterations in the classical results of probability theory. The law of large numbers is no longer valid near bifurcations and the unicity of the solution of … equations for the probability distribution is lost. (204)

This mathematization leads us to a new concept of time and irreversibility … . (206)

… the classical description in terms of trajectories has to be given up either because of instability and randomness on the microscopic level or because of quantum “correlations”. (207)

… the new concept implies that age depends on the distribution itself and is therefore no longer an external parameter, a simple label as in the conventional formula.
We see how deeply the new approach modifies our traditional view of time, which now emerges as a kind of average over “individual times” of the ensemble. (210)

For a long time, the absolute predictability of classical mechanics, or the physics of being, was considered to be an essential element of the scientific picture of the physical world. … the scientific picture has shifted toward a new, more subtle conception in which both deterministic features and stochastic features play an essential role. (210)

The basis of classical physics was the conviction that the future is determined by the present, and therefore a careful study of the present permits the unveiling of the future. At no time, however, was this more than a theoretical possibility. Yet in some sense this unlimited predictability was an essential element of the scientific picture of the physical world. We may perhaps even call this the founding myth of classical science.
The situation is greatly changed today. … The incorporation of the limitation of our ways of acting on nature has been an essential element of progress. (214)

Have we lost essential elements of classical science in this recent evolution [of thought]? The increased limitation of deterministic laws means that we go from a universe that is closed to one that is open to fluctuations, to innovations.

… perhaps there is a more subtle form of reality that involves both laws and games, time and eternity. (215) 

Comments

Relationship to previous work

This book can be seen as a development of the work of Kant, Whitehead and Smuts on emergence, although – curiously – it makes little reference to them [pg xvii]. In their terms, reality cannot logically be described in terms of point-like states within spaces with fixed ‘master equations’ that govern their dynamics. Instead, it needs to be described in terms of ‘processes’. Prigogine goes beyond this by developing explicit mathematical models as examples of emergence (from being to becoming) within physics and chemistry.

Metastability

According to the quote above, sufficiently complex systems are inherently metastable. Some have supposed that globalisation inevitably leads to an inter-connected and hence complex and hence stable world. But globalisation could lead to homogenization or fungibility, a reduction in complexity and hence an increased vulnerability to fluctuations. As ever, details matter.

See Also

I. Prigogine and I. Stengers, Order out of Chaos, Heinemann, 1984.
This is an update of a popular work on Prigogine’s theory of dissipative systems. He provides an unsympathetic account of Kant’s Critique of Pure Reason, supposing Kant to hold that there is “a unique set of principles on which science is based”, without making reference to Kant’s concept of emergence or to the role of communities. But he does set his work within the framework of Whitehead’s Process and Reality. Smuts’ Holism and Evolution, which draws on Kant and mirrors Whitehead, is also relevant as a popular and influential account of the 1920s, helping to define the then ‘modern science’.

Dave Marsay

Composability

State of the art – software engineering

“Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations … .” For information systems, from a software engineering perspective, the essential features are regarded as modularity and statelessness. Current inhibitors include:

“Lack of clear composition semantics that describe the intention of the composition and allow to manage change propagation.”

Broader context

Composability has a natural interpretation as readiness to be composed with others, and has broader applicability. For example, one suspects that if some people met their own clone, they would not be able to collaborate. Quite generally, composability would seem necessary, but perhaps not sufficient, for ‘good’ behaviour. Thus each culture tends to develop ways for people to work effectively together, but some sub-cultures seem parasitic, in that they couldn’t sustain themselves on their own.

Cultures tend to evolve, but technical interventions tend to be designed. How can we be sure that the resultant systems are viable under evolutionary pressure? Composability would seem to be an important element, as it allows elements to be re-used and recombined, with the aspiration of supporting change propagation.

Analysis

Composability is particularly evident, and important, in algorithms for statistics and data fusion. While modularity and statelessness are important for implementing such algorithms, there are also characteristics of the algorithms as functions (ignoring internal details) that are important in their own right.

If we partition a given data set, apply a function to the parts and then combine the results, we want to get the same result no matter how the data is partitioned. That is, we want the result to depend on the data, not on the partitioning.
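
As a minimal sketch of that property (the function names here are mine, for illustration only): a summary composes under a combiner if applying it to the parts and combining gives the same answer as applying it to all the data at once. Counting and summing pass the test; a naive mean does not.

```python
from itertools import chain

# A summary f composes under `combine` if, for any partition of the data,
# combine(f(part) for each part) == f(whole).

def count(xs):          # composable: counts add
    return len(xs)

def total(xs):          # composable: sums add
    return sum(xs)

def mean(xs):           # NOT composable: a mean of part-means weights parts equally
    return sum(xs) / len(xs)

def composed(f, combine, parts):
    return combine(f(p) for p in parts)

parts = [[1, 2, 3], [10]]
whole = list(chain.from_iterable(parts))

assert composed(count, sum, parts) == count(whole)    # 4 == 4
assert composed(total, sum, parts) == total(whole)    # 16 == 16
print(composed(mean, lambda ms: sum(ms) / 2, parts))  # 6.0: mean of the part-means
print(mean(whole))                                    # 4.0: the true mean differs
```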

In elections for example, it is not necessarily true that a party who gets a majority of the votes overall will get the most candidates elected. This lack of composability can lead to a loss of confidence in the electoral process. Similarly, media coverage is often an editor’s precis of the precis by different reporters. One would hope that a similar story would emerge if one reporter had covered the whole. 
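
A toy illustration of the electoral point, with made-up numbers: seats are decided constituency by constituency, and the seat totals do not compose into the overall vote totals.

```python
# Hypothetical three-constituency election between parties X and Y.
constituencies = [
    {"X": 100, "Y": 0},   # X piles up votes where it already wins
    {"X": 40,  "Y": 60},
    {"X": 40,  "Y": 60},
]

def winner(votes):
    return max(votes, key=votes.get)

seats, totals = {}, {}
for votes in constituencies:
    w = winner(votes)
    seats[w] = seats.get(w, 0) + 1
    for party, n in votes.items():
        totals[party] = totals.get(party, 0) + n

print(totals)  # {'X': 180, 'Y': 120} -- X has a clear majority of the votes
print(seats)   # {'X': 1, 'Y': 2}     -- yet Y wins most of the seats
```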

More technically, averages over parts cannot, in general, be combined to give a true overall average, whereas counting and summing are composable. Desired functions can often be computed composably by using a preparation function, then a composable function, then a projection or interpretation function. Thus an average can be computed by reporting the sum and count for each part, summing these over the parts to give an overall sum and count, and then projecting (dividing) to get the average. If a given function can be implemented via two or more composable functions, then those functions must be ‘conjugate’: the same up to some change of basis. (For example, multiplication is composable, but one could prepare using logs and project using exponentiation to calculate a product using a sum.)
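
A sketch of the prepare / compose / project pattern just described, with hypothetical helper names: a mean is carried through composition as a (sum, count) pair, and a product is carried as a sum of logarithms.

```python
import math
from functools import reduce

# Mean: prepare each part as a (sum, count) pair, compose by adding the pairs,
# project by dividing.
def prepare_mean(xs):
    return (sum(xs), len(xs))

def compose_pairs(a, b):
    return (a[0] + b[0], a[1] + b[1])

def project_mean(pair):
    s, n = pair
    return s / n

parts = [[1.0, 2.0, 3.0], [10.0]]
overall = reduce(compose_pairs, (prepare_mean(p) for p in parts))
print(project_mean(overall))   # 4.0, the true mean of all the data

# Product: prepare with logs, compose by summing, project with exp --
# composition by addition, conjugate to multiplication via a change of basis.
def prepare_product(xs):
    return sum(math.log(x) for x in xs)

overall_log = sum(prepare_product(p) for p in parts)
print(math.exp(overall_log))   # approximately 60.0, i.e. 1 * 2 * 3 * 10
```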

In any domain, then, it is natural to look for composable functions and to implement algorithms in terms of them. This seems to have been widespread practice until the late 1980s, when it became more common to implement algorithms directly and then to worry about how to distribute them.

Iterative Composability

In some cases it is not possible to determine composable functions in advance, or perhaps at all: for example, where innovation can take place, or where one is otherwise ignorant of what may arise. Here one may look for a form of ‘iterative composability’, in which one hopes that the result is normally adequate, that there will be signs if it is not, and that one will be able to improve the situation. What matters is that this process should converge, so that one can get as close as one likes to the results one would get from using all the data.
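
One hypothetical illustration of this (the scheme, names and numbers are mine, not the author’s): each part reports a composable summary relative to the current combined estimate, the combined estimate is fed back, and the cycle repeats until there is no sign of trouble, i.e. the estimate stops changing.

```python
# A crude sketch of 'iterative composability': an outlier-resistant mean.
# Each pass is composable (parts report a (sum, count) of the points they keep);
# a change in the combined estimate is the 'sign' that another pass is needed.
def iterative_mean(parts, window=5.0, tol=1e-6, max_iter=20):
    estimate = sum(sum(p) for p in parts) / sum(len(p) for p in parts)  # plain mean first
    for _ in range(max_iter):
        s, n = 0.0, 0
        for part in parts:
            kept = [x for x in part if abs(x - estimate) <= window]
            s += sum(kept)
            n += len(kept)
        new_estimate = s / n if n else estimate
        if abs(new_estimate - estimate) < tol:   # no sign of trouble: accept the result
            return new_estimate
        estimate = new_estimate                  # otherwise improve and try again
    return estimate

parts = [[1.0, 2.0, 3.0], [2.5, 30.0]]           # 30.0 is a gross outlier
print(iterative_mean(parts))                     # ~2.125, rather than the naive mean of 7.7
```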

Elections under FPTP (first past the post) are not composable, and one cannot tell whether the party that was most voters’ first preference has failed to get in. AV (alternative vote) is also not composable, but one has more information (voters give rankings) and so can sometimes tell that there cannot have been a party that was most voters’ first preference yet failed to get in. Where there could have been, one could hold a second round with only the top parties’ candidates. This is a partial step towards general iterative composability: AV might often be iteratively composable for the given situation, much more so than FPTP.

Parametric estimation is generally composable when one has a fixed number of entities whose parameters are being estimated. Otherwise one has an ‘association’ problem, which might be tackled differently for the different parts. If so, this needs to be detected and remedied, perhaps iteratively. This is effectively a form of hypothesis testing. Here the problem is that the testing of hypotheses using likelihood ratios is not composable. But, again, if hypotheses are compared differences can be detected and remedial action taken. It is less obvious that this process will converge, but for constrained hypothesis spaces it does.
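
One way to see the point about likelihood ratios, as a sketch with made-up data: for independent observations the log-likelihood-ratio statistic itself composes (the parts’ contributions simply add), but the thresholded accept/reject decision does not, which is why comparing the parts’ results can reveal that remedial action is needed.

```python
# Hypothetical test: H1 says the observations are N(1, 1), H0 says N(0, 1).
# The log-likelihood-ratio of one observation x is then x - 0.5, and independent
# observations add their contributions.
def log_lr(xs):
    return sum(x - 0.5 for x in xs)

THRESHOLD = 0.8   # accept H1 only if the log-LR exceeds this

part_a = [0.9, 0.6]
part_b = [0.8, 0.7]

for name, xs in [("part A", part_a), ("part B", part_b), ("pooled", part_a + part_b)]:
    llr = log_lr(xs)
    print(name, round(llr, 3), "H1" if llr > THRESHOLD else "H0")
# Part A and part B each decide H0 (log-LR ~0.5), but the pooled data decide H1
# (log-LR ~1.0): the statistic composes, the decision does not.
```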

Innovation, transformation, freedom and rationality

It is common to suppose that people acting in their environment should characterise their situation within a context in enough detail to remove all but (numeric) probabilistic uncertainty, so that they can optimize. Acting sub-optimally, it is supposed, would not be rational. But if innovation is about transformation, then a supposedly rational act may undermine the context of another, leading to a loss of performance and possibly crisis or chaos.

Simultaneous innovation could be managed by having an over-arching policy or plan, but this would clearly constrain freedom and hence genuine innovation. Too much innovation and one has chaos; too little and there is too little progress.

A composable approach is to seek innovations that respect each other’s contexts, and to make clear to others what one’s essential context is. This supports only very timid innovation if the innovation is rational (in the above sense), since no true (Knightian) uncertainty can be accepted. A more composable approach is to seek to minimise dependencies and to innovate in a way that accepts – possibly embraces – true uncertainty. This necessitates a deep understanding of the situation and its potentialities.

Conclusion

Composability is an important concept that can be applied quite generally. The structure of an activity shouldn’t affect its outcome (other than through resource usage). This can mean developing core components that provide a sound infrastructure, and then adapting that infrastructure to perform the desired tasks, rather than seeking to implement the desired functionality directly.

Dave Marsay