Knightian uncertainty and epochs

Frank Knight was a contemporary of Keynes who emphasised and popularised the importance of ‘Knightian uncertainty’, meaning uncertainty that cannot be reduced to Bayesian (numeric) probability. That is, it concerns events that are not deterministic but also not ‘random’ in the same sense as idealised gambling mechanisms.

Whitehead’s epochs, when stable and ergodic in the short term, tend to satisfy the Bayesian assumptions, whereas future epochs do not. Thus within the current epoch one has (Bayesian) probabilities; longer term one has Knightian uncertainty.

Example

Consider a casino with imperfect roulette wheels with unknown biases. It might reasonably change the wheel (or other gambling mechanism) whenever a punter seems to be doing infeasibly well. The current wheel may have associated unknown probabilities that can be estimated from the observed outcomes, but the longer-term uncertainty seems qualitatively different. If there are two wheels with known biases that might be used next, and we don’t know which is to be used, then, as Keynes shows, one needs to represent the ambiguity rather than being able to fuse the two probability distributions into one. (If the two wheels have equal and opposite biases then by the principle of indifference a fused version would have the same probability distribution as an idealised wheel, yet the two situations are quite different.)

Bayesian probability in context

In practice, the key to Bayesian probability is Bayes’ rule, by which the estimated probability of hypotheses is updated depending on the likelihood of new evidence against those hypotheses. (P(H|E)/P(H’|E) = {P(E|H)/P(E|H’)}•{P(H)/P(H’)}.) But estimated probabilities depend on the ‘context’ or epoch, which may change without our receiving any data. Thus, as Keynes and Jack Good point out, the probabilities should really be qualified by context, as in:

P(H|E:C)/P(H’|E:C) = {P(E|H:C)/P(E|H’:C)}•{P(H:C)/P(H’:C)}.

That is, the result of applying Bayes’ rule is conditional on our being in the same epoch. Whenever we consider data, we should not only consider what it means within our assumed context, but also whether it has implications for that context.

Taking a Bayesian approach, if G is a global certain context and C a sub-context that is believed but not certain, then taking

P(E|H:G) = P(E|H:C)  etc

is only valid when P(C:G) ≈ 1,

both before and after obtaining E. But by Bayes’ rule, for an alternative C’:

P(C|E:G)/P(C’|E:G) = {P(E|C:G)/P(E|C’:G)}•{P(C:G)/P(C’:G)}.

Thus, unless the evidence, E, is at least as likely for C as for any other possible sub-context, one needs to check that P(C|E:G) ≈ 1. If not then one may need to change the apparent context, C, and compute P(H|E:C’)/P(H’|E:C’) from scratch: Bayes’ rule in its common – simple – form does not apply. (There are also other technical problems, but this piece is about epochs.) If one is not certain about the epoch then for each possible epoch one has a different possible Bayesian probability, a form of Knightian uncertainty.
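
As a concrete illustration, here is a minimal sketch (Python, with made-up numbers) of the bookkeeping implied above: Bayes’ rule is applied to the hypotheses within the assumed context C, while the same evidence is also used to check whether C itself remains near-certain relative to a rival context C’.

```python
# Sketch only: Bayes' rule within an assumed context, plus a check on the
# context itself. All probabilities are hypothetical, chosen to illustrate
# the point rather than taken from any real problem.

def posterior_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' rule: P(H|E)/P(H'|E) = LR * P(H)/P(H')."""
    return likelihood_ratio * prior_odds

# Within the assumed context C: likelihoods of the evidence E under H and H'.
p_E_given_H_C, p_E_given_Hprime_C = 0.6, 0.2
odds_H_within_C = posterior_odds(prior_odds=1.0,
                                 likelihood_ratio=p_E_given_H_C / p_E_given_Hprime_C)

# The same evidence also bears on the context itself, relative to a rival C'.
p_E_given_C_G, p_E_given_Cprime_G = 0.05, 0.5   # E is surprising under C
prior_odds_C = 0.9 / 0.1                         # C was strongly believed
odds_C = posterior_odds(prior_odds_C, p_E_given_C_G / p_E_given_Cprime_G)
p_C_given_E_G = odds_C / (1.0 + odds_C)

print(f"posterior odds of H, assuming C: {odds_H_within_C:.1f}")
print(f"P(C|E:G) = {p_C_given_E_G:.2f}")
if p_C_given_E_G < 0.95:
    print("C is no longer near-certain: the simple within-context update is suspect.")
```

If the final check fails, the within-context answer should not be acted on: the update needs redoing under whichever context now seems credible.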

Representing uncertainty across epochs

Sometimes a change of context means that everything changes. At other times, some things can be ‘read across’ between epochs. For example, suppose that the hypotheses, H, are the same but the likelihoods P(E|H:C) change. Then one can use Bayes’ rule to maintain likelihoods conditioned on the possible contexts.

Information 

Shannon’s mathematical information theory builds on the notion of probability. A probability distribution determines an entropy, whilst information is measured by the change in entropy. The familiar case of Shannon’s theory (which is what people normally mean when they refer to ‘information theory’) makes assumptions that imply Bayesian probability and a single epoch. But in the real world one often has multiple or ambiguous epochs. Thus conventional notions of information are conditional on the assumed context. In practice, the component of ‘information’ measured by the conventional approach may be much less important than that which is neglected.
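
To make the context-dependence concrete, here is a small sketch (Python, with hypothetical distributions): the entropy, and hence the ‘information’ credited to an observation, depends entirely on which epoch’s probability distribution is assumed.

```python
# Sketch only: Shannon entropy, and information as a change in entropy.
# The distributions are hypothetical; the point is that the numbers depend
# entirely on which context/epoch is assumed.
from math import log2

def entropy(dist):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

# The same three outcomes, with different prior distributions in two epochs.
prior_in_epoch_A = [0.50, 0.25, 0.25]
prior_in_epoch_B = [0.90, 0.05, 0.05]

# Suppose an observation narrows things down to this posterior distribution.
posterior = [0.95, 0.04, 0.01]

for name, prior in (("A", prior_in_epoch_A), ("B", prior_in_epoch_B)):
    gain = entropy(prior) - entropy(posterior)
    print(f"epoch {name}: prior entropy {entropy(prior):.2f} bits, "
          f"information gained {gain:.2f} bits")
```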

Shocks

It is normally supposed that pieces of evidence are independent and so information commutes:

P(E1+E2|H:C) = P(E1|H:C).P(E2|H:C) = P(E2|H:C).P(E1|H:C) = P(E2+E1|H:C)

But if we need to re-evaluate the context, C, this is not the case unless we re-evaluate old data against the new context. Thus we may define a ‘shock’ as anything that requires us to change the likelihood functions for either past or future data. Such shocks occur when data arrive that had been considered unlikely under the working assumption. Alternatively, if we are maintaining likelihoods against many possible assumptions, a shock occurs when none of them remains probable.
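
The following sketch (Python, with hypothetical contexts, likelihoods and threshold) illustrates the second case: likelihoods are maintained against several candidate contexts, and a ‘shock’ is flagged when none of them remains a plausible explanation of the data, at which point old data would have to be re-scored against new candidate contexts.

```python
# Sketch only: maintain likelihoods against several candidate contexts and
# flag a 'shock' when none of them remains a plausible explanation of the
# data. The contexts, likelihood tables and threshold are all hypothetical.
import math

contexts = {
    "C1": {"a": 0.7, "b": 0.2, "c": 0.1},   # P(datum | C1)
    "C2": {"a": 0.6, "b": 0.3, "c": 0.1},   # P(datum | C2)
}
log_lik = {c: 0.0 for c in contexts}
SHOCK_LEVEL = 0.2   # worst acceptable geometric-mean likelihood per datum

data = ["a", "a", "b", "c", "c", "c", "c"]
for n, datum in enumerate(data, start=1):
    for c, table in contexts.items():
        log_lik[c] += math.log(table.get(datum, 1e-9))
    # Geometric-mean likelihood per datum for the best-supported context.
    best = max(math.exp(v / n) for v in log_lik.values())
    print(f"after {datum!r}: best per-datum likelihood {best:.2f}")
    if best < SHOCK_LEVEL:
        print("shock: no maintained context explains the data;"
              " old data must be re-scored against new candidate contexts")
        break
```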

See also

Complexity and epochs, Statistics and epochs, Reasoning in a complex, dynamic, world.

Keynes, Turing and Good developed a theory of ‘weight of evidence’ for reasoning in complex worlds.

David Marsay

The Cassandra Factor

A New Scientist article opines that some disasters are preceded by a slowing down of some characteristic ‘heart beat’.

From a Cybernetic point of view the key thing is that (as Keynes and Whitehead had previously noted) regular behaviour implies some sort of regulation, and that in ecosystems this typically comes about through predator-prey relationships, which in simple cases give rise to cyclical behaviours. As the article notes, a weakening of a regulatory factor can lead to a noticeable lengthening in the period of the cycles or – more generally – an increasing autocorrelation, preceding the failure of regulation and hence the crash.
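
That early-warning claim can be illustrated with a toy simulation (Python, illustrative parameters only): a quantity regulated towards a set point behaves roughly like an AR(1) process, and as the regulation weakens (the coefficient drifting towards 1) the lag-1 autocorrelation rises, which is the ‘slowing down’ the article describes.

```python
# Sketch only: weakening regulation seen as rising autocorrelation.
# A regulated quantity is modelled as an AR(1) process x[t+1] = phi*x[t] + noise,
# where phi near 0 means strong regulation (quick return to the set point)
# and phi near 1 means weak regulation. Parameters are illustrative.
import random
random.seed(1)

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a series."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[t] - mean) * (xs[t + 1] - mean) for t in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

for phi in (0.2, 0.5, 0.8, 0.95):          # regulation weakening
    x, xs = 0.0, []
    for _ in range(5000):
        x = phi * x + random.gauss(0.0, 1.0)
        xs.append(x)
    print(f"phi = {phi:.2f}: lag-1 autocorrelation ~ {lag1_autocorr(xs):.2f}")
```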

The article considers the ‘financial whirl’, noting that any successful method of predicting collapse would ‘quickly invalidate its own forecasts as investors changed their strategies to avoid its predictions.’ This reflects Keynes’ thoughts. But how general is their model? The editorial refers to the BSE crisis, in which the introduction of a cycle into the food ‘chain’ may have led, decades later, to the emergence of prions, causing BSE. This seems the opposite of the situation described in the article.

From a Cybernetic point of view, echoing Whitehead and Keynes, if we observe a system for a while we expect to be aware of those things that are regular, rather than what is ephemeral. This regularity reflects the action of regulatory systems, including their structure and strengths. Any change in regulation can lead to the end of an epoch, and may be perceived as a disaster, not just the failure of one element of regulation. Thus if an epoch is characterised by behaviour with a characteristic period, a speeding up or slowing down can both indicate significant change, which could be followed by disaster. But equally, a change may lead to an innovation that is perceived as a benefit. From this and related Cybernetic considerations:

  • A stable regulatory regime implies a stable characteristic behaviour in the thing regulated, so that change in what had been regulated indicates a change in regulation, and hence a significant change in behaviours, good or bad. This includes a change from a heart-beat that varies in frequency to one that is unhealthily ‘pure’.
  • A stable regulatory regime will rely on stable information inputs, which can only be effective if there is global stability, including lack of learning and innovation. So we should not expect any epoch to last forever, unless the system is already dead.
  • A regulatory system needs to cover both short-term and long-term aspects of behaviour. From an efficiency point of view it might focus on short-term issues most of the time, opening up and exploring longer-term issues periodically. It would be unfortunate to confuse this with the onset of a crisis, or to seek to inhibit it.
  • There is no widespread common appreciation of what makes for ‘good’ regulatory systems.

Dave Marsay

See also my Modelling the crash.

Modelling the Crash

Nature has a special issue on ‘Modelling the Crash’. May and Haldane propose a model intended to support policy. Johnson and Lux call for empirical support, meaning a clear link to financial data. Both views have a strong pedigree, yet both seem questionable.

In some ways this discussion pre-empts some of the material I am trying to build up on this blog, and may be ‘hard going’.

May and Haldane

May and Haldane point to connectivity as a factor that was missed out of the pre-crash economic analyses. They make an analogy with ecosystems, where low connectivity is chaotic, increasing connectivity brings stability, but then more connectivity brings instability. It may be a slight digression, but I think it important to focus on what is unstable. Initially it is the components, which are stabilised by being connected. But after a critical point the parts begin to form wholes. These now become the focus of attention. They are unstable because there are few of them. In an important sense, then, maximum complexity is where one has some connectedness (as in a liquid), but the parts are not ‘locked in’ to wholes (solids). If we think of connectivity as co-emerging with growth then the problem is not the kind of growth that loosely connects, but the growth beyond that, in which bubbles of self-reinforcing behaviours develop. This model would seem to suggest that we should be looking out for such feedbacks and bubbles. I agree. In biological evolution Lamarckian inheritance would lead to such bubbles. Maybe one lesson to be learned from nature is that Darwinian evolution is better: the survival of the satisfactory, rather than the over-optimisation of survival of the very fittest, leading to a population of seeming clones.
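
The ecological result alluded to can be sketched numerically (Python with numpy, illustrative parameters, in the style of May’s 1972 random-matrix argument): random ‘community’ matrices with self-regulating parts tend to be stable while interactions are sparse and weak, and lose stability as connectance grows. (The sketch shows only the loss of stability at high connectance, not the initial stabilisation of the parts.)

```python
# Sketch only, in the style of May's (1972) random-matrix argument.
# Each of S 'species' self-regulates (diagonal -1); off-diagonal interactions
# are present with probability C (the connectance) and drawn from N(0, sigma^2).
# The equilibrium is locally stable when every eigenvalue has negative real part.
# All parameters are illustrative.
import numpy as np
rng = np.random.default_rng(0)

S, sigma, trials = 50, 0.3, 200           # parts, interaction strength, repeats

for C in (0.05, 0.1, 0.2, 0.4, 0.8):      # increasing connectance
    stable = 0
    for _ in range(trials):
        A = rng.normal(0.0, sigma, (S, S)) * (rng.random((S, S)) < C)
        np.fill_diagonal(A, -1.0)
        if np.max(np.linalg.eigvals(A).real) < 0:
            stable += 1
    print(f"connectance {C:.2f}: stable in {stable}/{trials} trials")
```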

The paper is largely about robustness and herding, but put in rather narrow terms. It identifies the key issues as homogeneity and fragility, and modularity. It does recognize its own limitations.

I found the paper quite reasonable, but I do see that the presentation invites the supposition that it is leaning too heavily on its ‘mathematical model’.

Johnson and Lux

‘Shouldn’t we test against data sets?’ This raises great challenges.  We know how to test within epochs, and conventional economic theories seem reasonable enough. The challenges are thus:

  • What should we be looking for in theories that span epochs?
  • How do we test theories when the data sets span epochs?

It is common practice to apply ‘Occam’s razor’, which seeks to simplify as much as possible, yet Keynes and Whitehead tend to lead to a contrary view: never assume regularity without evidence. Thus even if we had a theory (like May and Haldane’s) that fitted all the data, Keynes would predict continuing innovation and hence we would have no reason to trust the theory in future. (See my ‘Statistics and epochs’.)

So What? May and Haldane’s model seems insightful and useful, but leaves open a gap for a more comprehensive ‘model’. Such a model would include nodes with links like those of May and Haldane. Should it be extended into detailed models of how banks work? Or should it seek to identify further key factors? I would like to see it address the issues raised by Lord Turner in ‘Mathematics and real systems’.

Dave Marsay

See also ‘The Cassandra Factor’.

Mathematics and real systems

The UK FSA’s Lord Turner, in the Turner Review of the financial crisis of 2008, was critical of the role of mathematics in misleading decision-makers about the possibility of a crisis. I have also found similar cynicism in other areas involving real complex systems. It seems that mathematics befuddles and misleads. (Or am I being unduly sensitive?)

In this INET interview Lord Turner provides a more considered and detailed critique of mathematics than I have come across from him before. (Unless you know different?) In defence of mathematicians I note:

  • His general criticism is that ‘sophisticated’ mathematical models were given a credibility that they did not deserve. But we do not judge a car on its engine alone. What mattered was not how mathematically brilliant the models may have been, but how they corresponded to reality. This was the preserve of the economist. If an economist declares certain assumptions to be true, the mathematician will build on them, yielding a model with certain behaviours. Normally you would expect the economist to reconsider the assumptions if the resultant behaviour wasn’t credible. But this didn’t happen.
  •  If the potential for crashes was a concern, no amount of statistical analysis based on data since the last big crash was ever going to be helpful. The problem was with the science, not the mathematics.
  • Turner is critical of the commonplace application of probability theory, extrapolating from past data, as if this were the only ‘mathematical’ approach.
  • Turner continually refers to ‘Knightian Uncertainty’ as if it were extra-mathematical. He does not note that Frank Knight was a contemporary of Keynes, at a time when Keynes’ best known work was mathematical.
  • Turner refers to Keynes’ Treatise on Probability without remarking that it provides mathematical models for different types of Knightian uncertainty, or linking it to Whitehead’s (mathematical) work on complex systems.
  • In Whitehead’s terms, Turner’s criticism of mathematics is that it can only provide extrapolations within a given epoch. But this is to ignore the work of Whitehead, Keynes and Turing, for example, on ‘emergent properties’.

It seems clear that the economist’s assumption of ‘the end of history’ for economics led to the use of mathematical models that were only useful for making predictions conditional on the assumption that ‘the rules of the game are unchanging’. Is it reasonable to blame the mathematicians if some mistook an assumption for a mathematical theorem? (Possibly?) More importantly, Turner notes that the ‘end of history’ assumption led to a side-lining of economists who could work across epochs. They need to be revived. Perhaps so too do those mathematicians with a flair for the appropriate mathematics?

David Marsay, C. Math FIMA

See also: IMA paper, statistics blog.

Statistics and epochs

Statistics come in two flavours. The formal variety is based on samples with a known distribution. The empirical variety is drawn using a real-world process. If there is a known distribution then we know the ‘rules of the situation’ and hence are in a single epoch, albeit one that may have sub-epochs. In Cybernetic terms there is usually an implicit assumption that the situation is stable or in one of the equilibria of a polystable system, and hence that the data was drawn from a single epoch. Otherwise the statistics are difficult to interpret.

Statistics are often intended to be predictive, by extrapolation. But this depends on the epoch enduring. Hence the validity of a statistic is dependent on it being taken from a single epoch, and the application of the statistic is dependent on the epoch continuing.

For example, suppose that we have found that all swans are white. We cannot conclude that we will never see black swans, only that if:

  • we stick to the geographic area from which we drew our conclusion
  • our sample of swans was large enough and representative enough to be significant
  • we are extrapolating over a time-period that is short compared with any evolutionary or other selective processes
  • there is no other agency that has an interest in falsifying our predictions.

then we are unlikely to see swans that are not white.

In particular the ‘law of large numbers’ should have appropriate caveats.
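
A toy illustration of the caveat (Python, made-up numbers): within one epoch the sample mean settles down just as the law of large numbers says, but if the underlying process changes part-way through, the ‘converged’ statistic quietly misleads about the future, and no amount of earlier data would have helped.

```python
# Sketch only: the law of large numbers within an epoch vs. across an epoch
# change. The observable has mean 0.0 in the first epoch and mean 1.0 after
# an unannounced change of epoch. All numbers are illustrative.
import random
random.seed(0)

epoch1 = [random.gauss(0.0, 1.0) for _ in range(10_000)]
epoch2 = [random.gauss(1.0, 1.0) for _ in range(10_000)]

estimate = sum(epoch1) / len(epoch1)
print(f"estimate of the mean from epoch 1: {estimate:+.3f}")   # close to 0.0

# Extrapolating that estimate into epoch 2 leaves a persistent error that no
# amount of epoch-1 data could have reduced.
bias = sum(x - estimate for x in epoch2) / len(epoch2)
print(f"average error when applied in epoch 2: {bias:+.3f}")    # close to 1.0
```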

Annotated Bibliography

PLEASE NOTE: There is now a page. This post is NOT being maintained.


WR Ashby

Design for a brain 1960. (Ed. 2 has significant improvements.) A cybernetic classic. Ashby notes the correspondence to game theory (and thus to Whitehead). In simpler cases Cybernetic language is considerably more accessible than Whitehead’s.

With Conant: Every Good Regulator of a System must be a Model of that System, 1970. It claims that:

“The theorem has the interesting corollary that the living brain, so far as it is to be successful and efficient as a regulator for survival, must proceed, in learning, by the formation of a model (or models) of its environment.” [My italics.]

A regulator, here, is an objective mechanism required to maintain pre-defined objective criteria: i.e., keep some output within a ‘good’ domain. It may also have other functions, e.g. optimising something. The theory shows that in so far as a system to be regulated conforms to some model, it will need – in effect – to be modelled to be regulated effectively. It does not explicitly consider feed-back from output to input. Where this is present one could apply the theory to short-term regulation, within the delay of the feed-back. Alternatively one could apply it at the level of strategy rather than base events and activities. In this case one needs to model ‘the game’, including the ‘board’, the players and their motivations.
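
A toy version of that claim, offered only as a sketch (Python, with a made-up environment): the quality of regulation tracks how closely the regulator’s internal model matches the environment being regulated.

```python
# Sketch only: the quality of regulation tracks the quality of the regulator's
# internal model of its environment. The environment produces a disturbance
# d[t+1] = 0.9 * d[t] (hypothetical); the regulated output is y = disturbance + action.
def total_residual(model_coeff, true_coeff=0.9, steps=50):
    """Sum of |output| when the regulator predicts the disturbance with model_coeff."""
    d, residual = 1.0, 0.0
    for _ in range(steps):
        d_next = true_coeff * d            # what the environment actually does next
        action = -model_coeff * d          # what the regulator expects, and cancels
        residual += abs(d_next + action)   # regulated output y
        d = d_next
    return residual

for coeff in (0.9, 0.7, 0.0):              # perfect model, poor model, no model
    print(f"model coefficient {coeff:.1f}: total residual {total_residual(coeff):.2f}")
```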

JM Keynes

Keynes was a mathematician and student of Whitehead, employed by the Treasury during the war, who worked with JC Smuts on the transition to peace. The Prime Minister (Lloyd George) supported Keynes in producing the first two references as ‘lessons identified’ from the war.

Treatise on Probability 1921. Keynes’ critique of ‘standard’ (numeric) probability theory, together with some useful generalisations. Refers to Whitehead (Keynes’ tutor). Underpins subsequent work.

The Economic Consequences of the Peace 1919. Keynes shows that although depressions were impossible according to the then classical economics, based on standard probability, ‘animal spirits’ introduced greater uncertainties, of the kind described in his treatise, which made a crash and a sustained depression virtually certain. Thus whereas the conventional view was that economies were inherently buoyant, so that recessions could only occur as a result of ‘exogenous events’, such as war, a depression could actually occur endogenously – due to ‘market failures’.

The General Theory of Employment, Interest and Money 1936. The master work on economics. It explains how crises such as those of 1929 and 2007/8 occur, and suggests some remedies. Underpinned by the treatise on probability, but this is not widely appreciated. ‘Keynesian’ economics has come to mean the application of stock remedies whatever the cause; in this sense, Keynes was not a Keynesian. Rather he developed a theory as a way of looking at economies. His approach was to follow where theory led, taking account of residual uncertainties.

JC Smuts

Smuts worked with Keynes on economics.

Holism and evolution 1927. Smuts’ notoriously inaccessible theory of evolution, building on and show-casing Keynes’ notion of uncertainty. Although Smuts and Whitehead worked independently, they recognized that their theories were equivalent.

The Scientific World-Picture of Today, in ‘British Association for the Advancement of Science, Report of the Centenary Meeting’. London: Office of the BAAS, 1932. Smuts’ presidential address, outlining the new ideas in ‘Holism and evolution’ and their import.

I Stewart

Ian has done more than most to explore, develop and explain the most important parts of qualitative mathematics.

Life’s Other Secret: The new mathematics of the living world, 1998. This updates D’Arcy Thompson’s classic On growth and form, ending with a manifesto for a ‘new’ mathematics, and a good explanation of the relationship between mathematics and scientific ‘knowledge’. Like most post-80s writings, its main failing is that it sees science as having achieved some great new insights in the 80s, ignoring the work of Whitehead et al, as explained by Smuts, for example.

M Tse-Tung

Chairman Mao.

On Contradiction 1937. Possibly the most popular account of Whitehead’s ideas.

AN Whitehead

Whitehead was Russell and Keynes’ tutor.

Process and Reality 1929. Notoriously hard going, it refers to Keynes’ work on uncertainty.

W Whitman

American civil-war thinker and poet. Part of the inspiration for the Beat movement.

Leaves of Grass 1855. Part of Smuts’ inspiration.

See also

ABACI – a consultancy who offer to tailor solutions.

Dave Marsay

Complexity and epochs

It is becoming widely recognized (e.g. after the financial crash of 2008) that complexity matters. But while it is clear that systems of interest are complex in some sense, it is not always clear that any particular theory of complexity captures the aspects of importance.

We commonly observe that systems of interest do display epochs; moreover, many such systems involve some sort of adaptation, learning or evolution, and so, according to Whitehead and Ashby (Cybernetics), will display epochs. Thus key features of interest are:

  •  polystability: left to its own devices the system will tend to settle down into one or other of a number of possible equilibria, not just one.
  • exogenous changeability: the potential equilibria themselves change under external influence.
  • endogenous changeability: the potential equilibria change under internal influence.

For example, a person in an institution such as an old folk’s home is likely to settle into a routine, but there may be other routines that they might have adopted, if things had happened differently earlier on. Thus their behaviour is polystable, except in a very harsh institution. Their equilibrium might be upset by a change in the time at which the papers are delivered, an exogenous change. Endogenous factors are typically slower-acting. For example, adopting a poor diet may (in the long run) impact on their ability to recover from illnesses and hence on their ability to carry on their established routine. For a while the routine may carry on as normal, only to suddenly become non-viable.
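
A minimal numerical sketch of polystability (Python, illustrative dynamics): noisy dynamics on a double-well landscape settle into one of two ‘routines’, and an exogenous change to the landscape can abolish one equilibrium, flipping the system into the other.

```python
# Sketch only: polystability as noisy dynamics on a double-well landscape
# V(x) = x^4/4 - x^2/2 - tilt*x, with stable 'routines' near x = -1 and x = +1.
# An exogenous change to 'tilt' removes one equilibrium and the system flips.
# All parameters are illustrative.
import random
random.seed(2)

def step(x, tilt, noise=0.15, dt=0.05):
    """One Euler step of noisy gradient descent on V."""
    slope = x ** 3 - x - tilt               # dV/dx
    return x - slope * dt + random.gauss(0.0, noise) * dt ** 0.5

x, tilt = -1.0, 0.0
for t in range(4000):
    if t == 2000:
        tilt = 0.6        # exogenous change: the left-hand routine ceases to exist
    x = step(x, tilt)
    if t % 500 == 0:
        print(f"t = {t:4d}  tilt = {tilt:.1f}  x = {x:+.2f}")
```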

Cybernetics and epochs

It is often useful, in reasoning, to have a model of a possible world that displays the behaviours that we are interested in. At the least it can give us some credible candidate hypotheses.

W.R. Ashby noted that his theory of Cybernetics had a formal correspondence with Whitehead’s theory of processes. Thus an epoch is a region of the field in which a polystable system displays a particular stability, and Ashby’s model gives a possible explanation for changes in epoch.

Brain activity has multiple possible equilibria, which a stimulus might upset.

Ashby's illustration of polystability in the brain.

Thinking of economies, the conventional (pre-Keynesian) view was that economies were ‘ultrastable’, necessarily tending to an equilibrium unless upset by an external event (e.g., war). Keynes noted that there were multiple possible equilibria, so that an economy that was growing had a potential to be in a depression, and if it flipped into a depression then it might have a potential for growth that it might need some stimulus to achieve. In the simplest version, as in much of Ashby’s work, one might have a closed polystable system that could be explored. In the more general case there might be ‘emergent, creative, evolution’ in which nothing was necessarily fixed forever. In this case Whitehead’s formulation seems more natural.

David Marsay

Collaboration and reason

To ‘collaborate’ is to act ‘jointly’, especially on an artistic work. To ‘co-operate’ is simply to act together. Thus in a collaboration the differences between actors, and the need for something to join them together, are more emphasised.

The acting together may take place in a single ‘coherent’ epoch, or may span epochs.

  • If we anticipate a single epoch then there is a coherent set of rules, constraints and drivers, which may be thought of rationally, with roles allocated to actors. These roles may be tied together by a conventional plan within a conventional organisational structure, and conventionally managed. Thus, within the context of some O&M conventions, one seeks co-operation. If any of the actors are out of step with the established ways of the epoch, they should be brought into line before or while co-operating.
  • If we have a mix of actors their activity might tend to complexify what would otherwise have been a coherent epoch. We may need to collaborate simply because the actors are artistic, even on a simple task. (We may need to add some artistic or otherwise interesting aspect to the activity, simply to keep the actors engaged.)
  • If the activity concerns a zone that may have many epochs, e.g. is open-ended, then one necessarily wants actors with different capabilities, such as those who can adapt to the here and now, those who can anticipate possible futures, and those who are good at hedging or otherwise developing robust plans and policies.
  • In many organisations a person who reasons across different time-scales simultaneously, and hence reasons incoherently, may be seen as ‘illogical’ and confused. Using different actors, particularly those from different parent organisations, with different disciplines and even different cultural and biological inheritances, can mask or help ‘rationalise’ these apparent defects.

Thus, I suggest that the nature of appropriate collaboration can usefully be thought of in terms of the types of epochs to be covered and the types of reasoning required, as well as, for example, familiarity with the subject matters. Co-operation within an epoch might be seen as collecting ‘the facts’, making decisions, and then acting to an agreed plan. But in fuller collaboration there may be no ‘facts’ and no possibility of mapping out the zone of desired activity, let alone the possibility of someone making coherent sense of the situation.

Reasoning in epochs

The term ‘reason’ derives from the term ‘consider’. It can be applied to any situation where a person or other ‘chooser’ has options and selects one based on some consideration of their characteristics. Always choosing at random does not display any reason, but (as in game theory) choosing at random might be reasonable if one does so after considering the situation. One can rarely tell if a given action of another was the result of reason.

Often the term ‘reasonable’ is confined to occasions where we think the reasoned actions ‘make sense’, but this usage can be problematic except in very dull situations.

Epochs (q.v.) as defined by Whitehead are (roughly) zones of subject-space-time-scale that have stable rules. They are separated by transitions or transformations that may be perceived as crises.

The term ‘rational’ means ‘endowed with reason’, but its derivation from ‘ratio’ (quantitative relation …) suggests more precision than just considering. If one has a given epoch with known rules that are stable and sufficiently precise, then one can reason rationally, and where rationality is practical it would seem to be optimal. But if one is considering a future in which there are multiple possible significantly different epochs then the effective rules are unknown and unstable, so apparent rationality can only be achieved by inventing rules. This may be effective if one is dominant in the sense that one can dictate or strongly affect the rules, but not otherwise. In some situations others may react against your attempts to impose or influence rules, so that this blinkered approach to rationalism may be expected to fail.

Another way to think of this is that if an epoch is stable and adequately understood then it may be ‘mapped’. Although ‘the map is not the territory’ one can use a sufficiently detailed map as if it were the territory. But a map may fail to represent the territory adequately when:

  • We had insufficient data.
  • We missed features that turned out to be vital (e.g., fords).
  • Someone else produced the map, and we don’t understand their conventions.
  • The territory has changed (e.g., after a flood).

Where a situation lacks the inherent stability, or one lacks the information, one might seek to ‘satisfice’ rather than (as rationality does) to optimize. When in an epoch, to be reasonable one needs to consider:

  • The vulnerabilities of the current epoch, and which actors may benefit (or may think that they may benefit) from its termination.
  • The potential new epochs, and which actors may prefer, or come to prefer, which.
  • The key factors in actors’ preferences and in determining which epochs may actualise.
  • Which actors have what preferences for changes to these factors, and what direct or indirect influences they may have (including oneself and one’s collaborators).
  • Which strategies (including collaborations) have what prospects of avoiding the worst outcomes, and what weaknesses and opportunities may arise, and what new factors may need to be considered.

Similarly if one is in turmoil, but with more coping than consideration. A point to note is that a factor may be important just because an influential actor thinks that it is, so the above considerations do not derive from any objective analysis of any objective situation.

Rationality, then, presupposes that we have a good ‘map’. A common practice, lacking a good map, is for a group or organisation to ‘agree’ on a map and then act rationally. Here we argue that one ought at least to consider the consequences of one’s actions (in terms of epochs). Keynes (‘Economic Consequences of the Peace’) argued that this could make a critical difference: perhaps even avoiding economic crashes and world wars. I note that mis-deeds and crises of all kinds seem to have been ‘caused’ by the use of the wrong kind of reasoning, but am not clear that effective reform is possible or necessarily even desirable.

Epochs in time

Keynes and Whitehead think of activity taking place in different epochs, covering different zones of theme-space-time-scale, such as the British economy between Versailles and the Great Crash. Here is my attempt to illustrate this:

An epoch may end in a transition, leading to a new epoch.

The horizontal axis is time. The vertical axis is some indicator of merit. The subjective merit may not be ‘true’ to any actual merit, and so the axis is shown skewed. The period from A to B is where one has an established epoch, with ‘rules of the game’ that one may expect to last forever. Knowing the (current) rules, one is able to extract ‘value’. The circles represent bubbles. If only you had realised that the epoch was vulnerable you might have acted more sustainably, sacrificing some performance. The bubbles show the short-term benefit over the potential long-term benefit, which in practice may not be known.

At B the rules of the game change. Perhaps some resource runs out, or the exploiters of the old game become so common that they are exploited in turn. Effectiveness crashes. Actions may no longer be effective. What was once valued may have much less value, or may be a liability. Actors are trying to cope with the chaos, to understand the new rules, and perhaps to impose their own or influence those which emerge. At the point of inflection, C, the situation is showing the first signs of stabilising. By A’ a new epoch, with new rules, is becoming established. The same or new actors may become effective in this new epoch.

What is shown is a relatively simple case. In practice:

  • The period of transition between epochs may be so chaotic that no-one can make any sense of it and one appears to have one epoch followed by another, with no perceptible in-between.
  • Epochs are embedded in other epochs (like games within games), so that a transition between low-level epochs may be driven by or influenced by higher epochs, thought of as games.
  • A new epoch may start to get established but fail. One may then have an indefinite succession of not-quite epochs with rules not quite established.

Reasoning in a complex, dynamic, world

I’m publishing what might seem an eclectic mix of stuff here, so I’ve started with something of a Rosetta Stone:

How models may relate to reality (Peter Allen): conventional models tend to ignore potential long-term changes.

This is almost self-explanatory: most of our reasoning habits are okay in the short term, but may not be appropriate for the long-term.

A point to note is that sometimes ‘the long term’ can rush upon us. It is sometimes said that the best we can do is to reason for the short term, and respond and adapt to the long-term changes as they arise. But is this good enough? Can we do better?

More on this in some of the other pages. For now, trust me: they are related.

Dave Marsay