AI pros and cons

Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher, The Metamorphosis, The Atlantic, August 2019.

AI will bring many wonders. It may also destabilize everything from nuclear détente to human friendships. We need to think much harder about how to adapt.

The authors are looking for comments. My initial reaction is here. I hope to say more. Meanwhile, I’d appreciate your reactions.

 

Dave Marsay

What logical term or concept ought to be more widely known?

Various, What scientific term or concept ought to be more widely known?, Edge, 2017.

INTRODUCTION: SCIENTIA

Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. …

Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.

Contributions

As against others on:

(This is as far as I’ve got.)

Comment

I’ve grouped the contributions according to whether or not I think they give due weight to the notion of uncertainty as expressed in my blog. Interestingly, Steven Pinker seems not to give it due weight in his article, whereas he is credited by Nicholas G. Carr with some profound insights (in the first of the second batch). So maybe I am not reading them right.

My own thinking

Misplaced Concreteness

Whitehead’s fallacy of misplaced concreteness, also known as the reification fallacy, “holds when one mistakes an abstract belief, opinion, or concept about the way things are for a physical or ‘concrete’ reality.” Most of what we think of as knowledge is ‘known about a theory’ rather than truly ‘known about reality’. The difference seems to matter in psychology, sociology, economics and physics. This is not a term or concept of any particular science, but rather a seeming ‘brute fact’ of ‘the theory of science’ that the above article perhaps ought to have called attention to.

Morphogenesis

My own specific suggestion, to illustrate the above fallacy, would be Turing’s theory of ‘Morphogenesis’. The particular predictions seem to have been confirmed ‘scientifically’, but it is essentially a logical / mathematical theory. If, as the introduction to the Edge article suggests, science is “reliable methods for obtaining knowledge”, then it seems to me that logic and mathematics are more reliable than empirical methods, and deserve some special recognition. I must concede, though, that it may be hard to tell logic from pseudo-logic, and that unless you can do so my distinction is potentially dangerous.

The second law of thermodynamics, and much common-sense rationality, assume a situation in which the law of large numbers applies. But Turing adds to the second law’s notion of random dissipation a notion of relative structuring (as in gravity) to show that ‘critical instabilities’ are inevitable. These are inconsistent with the law of large numbers, so the assumptions of the second law of thermodynamics (and much else) cannot be true. The universe cannot be ‘closed’ in the second law’s sense.
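To make this concrete, here is a minimal numerical sketch of my own (a generic linearised activator-inhibitor system, not Turing’s original equations): two fields that are stable when well mixed, but where the inhibitor diffusing faster than the activator lets tiny random fluctuations grow into large-scale spatial structure instead of averaging away.

    # A minimal sketch (mine, not Turing's own model) of a Turing-type instability:
    # without diffusion the kinetics are stable, but with unequal diffusion tiny
    # random fluctuations grow into a spatial pattern rather than cancelling out.
    import numpy as np

    rng = np.random.default_rng(0)
    n, dx, dt, steps = 200, 1.0, 0.01, 2500
    Du, Dv = 1.0, 40.0                      # the inhibitor diffuses much faster
    fu, fv, gu, gv = 1.0, -1.0, 2.0, -1.8   # linearised kinetics: stable when well mixed

    u = 0.01 * rng.standard_normal(n)       # tiny fluctuations about equilibrium
    v = 0.01 * rng.standard_normal(n)

    def lap(x):
        """Discrete Laplacian with periodic boundaries."""
        return (np.roll(x, 1) - 2 * x + np.roll(x, -1)) / dx**2

    for _ in range(steps):
        u, v = (u + dt * (fu * u + fv * v + Du * lap(u)),
                v + dt * (gu * u + gv * v + Dv * lap(v)))

    # The spatial mean stays near zero, but the spread grows enormously: the
    # fluctuations organise into structure instead of obeying a law of large numbers.
    print("mean of u:", round(float(u.mean()), 3), " spread of u:", round(float(u.std()), 3))

The particular numbers do not matter; the point is that the averaging assumptions fail exactly where the interesting structure comes from.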

Implications

If the assumptions of the second law hold, there would seem to be no room for free will, hence no reason to believe in our agency, and hence no point in any of the contributions to Edge: they are what they are and we do what we do. But Pinker does not go so far: he simply notes that if things inevitably degrade we do not need to beat ourselves up, or look for scapegoats, when things go wrong. But this can be true even if the second law does not apply. If we take Turing seriously then a seemingly permanent status quo can contain the reasons for its own destruction, so that turning a blind eye and doing nothing can mean sleep-walking to disaster. Where Pinker concludes:

[An] underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.

This would seem to follow more clearly from the theory of morphogenesis than the second law. Turing’s theory also goes some way to suggesting or even explaining the items in the second batch. So, I commend it.

 

Dave Marsay

 

 

How can economics be a science?

This note is prompted by Thaler’s Nobel prize, the reaction to it, and attempts by mathematicians to explain both what they actually do and what they could do. Briefly, mathematicians are increasingly employed to assist practitioners (such as financiers) to sharpen their tools and improve their results, in some pre-defined sense (such as making more profit). They are less used to sharpen core ideas, much less to challenge assumptions. This is unfortunate when tools are misused and mathematicians blamed. It is no good saying that mathematicians should not go along with such misuse, since the misuse is often not obvious without some (expensive) investigation, and in any case whistleblowers are likely to get shown the door (even if only for being inefficient).

Mainstream economics aspires to be a science in the sense of being able to make predictions, at least probabilistically. Some (mostly before 2007/8) claimed that it achieved this, because its methods were scientific. But are they? Keynes coined the term ‘pseudo-mathematical’ for the then mainstream practices, whereby mathematics was applied without due regard for the soundness of the application. Then, as now, the mathematics in itself is as much beyond doubt as anything can be. The problem is a ‘halo effect’ whereby the application is regarded as ‘true’ just because the mathematics is. It is like physics before Einstein, whereby some (such as Locke) thought that classical geometry must be ‘true’ as physics, largely because it was so true as mathematics and they couldn’t envisage an alternative.

From a logical perspective, all that the use of scientific methods can do is to make probabilistic predictions that are contingent on there being no fundamental change. In some domains (such as particle physics, cosmology) there have never been any fundamental changes (at least since soon after the big bang) and we may not expect any. But economics, as life more generally, seems full of changes.

Popper famously noted that proper science is in principle falsifiable. Many practitioners in science and science-like fields regard the aim of their domain as being to produce ‘scientific’ predictions. They have had to change their theories in the past, and may have to do so again. But many still suppose that there is some ultimate ‘true’ theory, to which their theories are tending. According to Popper, though, this is not a ‘proper’ scientific belief. Following Keynes we may call it an example of ‘pseudo-science’: something that masquerades as a science but goes beyond its bounds.

One approach to mainstream economics, then, is to disregard the pseudo-scientific ideology and just take its scientific content. Thus we may regard its predictions as mere extrapolations, and look out for circumstances in which they may not be valid. (As Eddington did for cosmology.)

Mainstream economics depends heavily on two notions:

  1. That there is some pre-ordained state space.
  2. That transitions evolve according to fixed conditional probabilities.

For most of us, most of the time, fortunately, these seem credible locally and in the short term, but not globally in space-time. (At the time of writing it seems hard to believe that just after the big bang there were in any meaningful sense state spaces and conditional probabilities that are now being realised.) We might adjust the usual assumptions:

The ‘real’ state of nature is unknowable, but one can make reasonable observations and extrapolations that will be ‘good enough’ most of the time for most routine purposes.

This is true for hard and soft sciences, and for economics. What varies is the balance between the routine and the exceptional.
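As a minimal sketch of my own (a toy two-state model, not a claim about any actual economy), consider estimating supposedly fixed transition probabilities from a long run of past data and then letting the underlying regime shift: the extrapolation stays precise-looking while becoming badly wrong.

    # A toy illustration of the two notions above: a fixed state space and fixed
    # transition probabilities are estimated from the past, then the 'true'
    # transition matrix changes, and the extrapolated long-run behaviour misleads.
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(P, start, steps):
        """Simulate a two-state Markov chain with transition matrix P."""
        states = [start]
        for _ in range(steps):
            states.append(rng.choice(2, p=P[states[-1]]))
        return np.array(states)

    P_old = np.array([[0.95, 0.05],     # state 0 = 'calm', state 1 = 'stressed'
                      [0.50, 0.50]])
    P_new = np.array([[0.80, 0.20],     # after an unheralded structural change
                      [0.10, 0.90]])

    past = simulate(P_old, 0, 5000)

    # Estimate the transition probabilities, assuming they are fixed.
    counts = np.zeros((2, 2))
    for a, b in zip(past[:-1], past[1:]):
        counts[a, b] += 1
    est = counts / counts.sum(axis=1, keepdims=True)

    # Long-run share of 'stressed' periods implied by the estimate (its stationary
    # distribution), versus what actually happens once the regime has shifted.
    evals, evecs = np.linalg.eig(est.T)
    stat = np.real(evecs[:, np.argmax(np.real(evals))])
    stat = stat / stat.sum()
    future = simulate(P_new, int(past[-1]), 5000)
    print("predicted share of stressed periods:", round(float(stat[1]), 3))
    print("realised share of stressed periods: ", round(float(future.mean()), 3))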

Keynes observed that some economic structures work because people expect them to. For example, gold tends to rise in price because people think of it as being relatively sound. Thus anything that has a huge effect on expectations can undermine any prior extrapolations. This might be a new product or service, an independence movement, a conflict or a cyber failing. These all have a structural impact on economies that can cascade. But will the effect dissipate as it spreads, or may it result in a noticeable shift? A mainstream economist would argue that all such impacts are probabilistic, and hence all that was happening was that we were observing new parts of the existing state space and new transitions. Even if we suppose for a moment that this is true, it is not a scientific belief, and it hardly seems a useful way of thinking about potential and actual crises.

Mainstream economists suppose that people are ‘rational’, by which they mean that they act as if they are maximizing some utility, which is something to do with value and probability. But, even if the world is probabilistic, being rational is not necessarily scientific. For example, when a levee is built to withstand a ‘100 year storm’, this is scientific if it is clear that the claim is based on past storm data. But it is unscientific if there is an implicit claim that the climate cannot change. When building a levee it may be ‘rational’ to build it to withstand all but very improbable storms, but it is more sensible to add a margin and make contingency arrangements (as engineers normally do). In much of life it is common experience that the ‘scientific’ results aren’t entirely reliable, so it is ‘unscientific’ (or at least unreasonable) to rely on them totally.
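A small worked example (my own illustrative numbers, not from any design code): even if the climate is unchanged, a levee built exactly to the ‘100 year’ level has roughly a 40% chance of being overtopped at least once during a 50-year design life; if the climate shifts so that the old ‘100 year’ storm now recurs every 50 years on average, that rises to nearly two thirds.

    # Chance of at least one exceedance of the design level during a 50-year life,
    # assuming independent years. The second figure assumes a shifted climate in
    # which the old '100 year' storm now has an annual probability of 1/50.
    p_stable  = 1 - (1 - 1/100) ** 50
    p_shifted = 1 - (1 - 1/50) ** 50
    print(round(p_stable, 3), round(p_shifted, 3))   # ~0.395 and ~0.636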

Much of this is bread-and-butter in disciplines other than economics, and I am not sure that what economists mostly need is to improve their mathematics: they need to improve their sciencey-ness, and then use mathematics better. But I do think that they need somehow to come to a better appreciation of the mathematics of uncertainty, beyond basic probability  theory and its ramifications.

Dave Marsay

 

 

Uncertainty is not just probability

I have just had my paper published, based on the discussion paper referred to in a previous post. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

Dave Marsay

Evolution of Pragmatism?

A common ‘pragmatic’ approach is to keep doing what you normally do until you hit a snag, and (only) then to reconsider. Whereas Lamarckian evolution would lead to the ‘survival of the fittest’, with everyone adapting to the current niche and tending to yield a homogeneous population, Darwinian evolution has survival of the maximal variety of all those who can survive, with characteristics only dying out when they are not viable. This evolution of diversity makes for greater resilience, which is maybe why ‘pragmatic’ Darwinian evolution has evolved.

The products of evolution are generally also pragmatic, in that they have virtually pre-programmed behaviours which ‘unfold’ in the environment. Plants grow and procreate, while animals have a richer variety of behaviours, but still tend just to do what they do. But humans can ‘think for themselves’ and be ‘creative’, and so have the possibility of not being just pragmatic.

I was at a (very good) lecture by Alice Roberts last night on the evolution of technology. She noted that many creatures use tools, but humans seem to be unique in that at some critical population mass the manufacture and use of tools becomes sustained through teaching, copying and co-operation. It occurred to me that much of this could be pragmatic. After all, until recently development has been very slow, and so may well have been driven by specific practical problems rather than continual searching for improvements. Also, the more recent upswing of innovation seems to have been associated with an increased mixing of cultures and decreased intolerance for people who think for themselves.

In biological evolution mutations can lead to innovation, so evolution is not entirely pragmatic, but their impact is normally limited by the need to fit the current niche, so evolution typically appears to be pragmatic. The role of mutations is more to increase the diversity of behaviours within the niche, rather than innovation as such.

In social evolution there will probably always have been mavericks and misfits, but the social pressure has been towards conformity. I conjecture that such an environment has favoured a habit of pragmatism. These days, it seems to me, a better approach would be more open-minded, inclusive and exploratory, but possibly we do have a biologically-conditioned tendency to be overly pragmatic: to confuse conventions with facts and heuristics with laws of nature, and not to challenge widely-held beliefs.

The financial crash of 2008 was blamed by some on mathematics. This seems ridiculous. But the post Cold War world was largely one of growth with the threat of nuclear devastation much diminished, so it might be expected that pragmatism would be favoured. Thus powerful tools (mathematical or otherwise) could be taken up and exploited pragmatically, without enough consideration of the potential dangers. It seems to me that this problem is much broader than economics, but I wonder what the cure is, apart from better education and more enlightened public debate?

Dave Marsay

 

 

Traffic bunching

In heavy traffic, such as on motorways in rush-hour, there is often oscillation in speed and there can even be mysterious ’emergent’ halts. The use of variable speed limits can result in everyone getting along a given stretch of road quicker.

Soros (worth reading) has written an article that suggests that this is all to do with the humanity and ‘thinking’ of the drivers, and that something similar is the case for economic and financial booms and busts. This might seem to indicate that ‘mathematical models’ were a part of our problems, not solutions. So I suggest the following thought experiment:

Suppose a huge number of  identical driverless cars with deterministic control functions all try to go along the same road, seeking to optimise performance in terms of ‘progress’ and fuel economy. Will they necessarily succeed, or might there be some ‘tragedy of the commons’ that can only be resolved by some overall regulation? What are the critical factors? Is the nature of the ‘brains’ one of them?

Are these problems the preserve of psychologists, or does mathematics have anything useful to say?
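As a minimal sketch of the thought experiment (my own choice of a standard ‘optimal velocity’ car-following model; other choices are possible), consider identical, deterministic cars on a ring road, with one car nudged slightly out of place:

    # Identical deterministic cars on a ring road, each relaxing towards a speed
    # that depends only on the gap to the car ahead. A single small perturbation
    # grows into stop-and-go waves instead of being smoothed away.
    import numpy as np

    n_cars, road_len, dt, steps = 100, 200.0, 0.1, 10000
    a = 1.0                                   # sensitivity of the (identical) controllers

    def desired_speed(gap):
        """Optimal-velocity function: slow when close, faster when clear."""
        return np.tanh(gap - 2.0) + np.tanh(2.0)

    x = np.linspace(0.0, road_len, n_cars, endpoint=False)   # evenly spaced start
    v = desired_speed(road_len / n_cars) * np.ones(n_cars)   # uniform initial speed
    x[0] += 0.5                                              # one small disturbance

    for _ in range(steps):
        gap = (np.roll(x, -1) - x) % road_len                # distance to the car ahead
        v += dt * a * (desired_speed(gap) - v)
        x = (x + dt * v) % road_len

    print("spread of speeds after the run:", round(float(v.std()), 3))
    print("slowest / fastest car:", round(float(v.min()), 3), "/", round(float(v.max()), 3))

With these settings the uniform flow is unstable, so the identical ‘drivers’ end up in something like a tragedy of the commons that some overall regulation (a variable speed limit, or a change to the control law) might avoid.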

Dave Marsay

Haldane’s The dog and the Frisbee

Andrew Haldane The dog and the Frisbee

Haldane argues in favour of simplified regulation. I find the conclusions reasonable, but have some quibbles about the details of the argument. My own view is that many of our financial problems have been due – at least in part – to a misrepresentation of the associated mathematics, and so I am keen to ensure that we avoid similar misunderstandings in the future. I see this as a primary responsibility of ‘regulators’, viewed in the round.

The paper starts with a variation of Ashby’s ball-catching observation, involving a dog and a Frisbee instead of a man and a ball: you don’t need to estimate the position of the Frisbee or be an expert in aerodynamics: a simple, natural heuristic will do. He applies this analogy to financial regulation, but it is somewhat flawed. When catching a Frisbee one relies on the Frisbee behaving normally, but in financial regulation one is concerned with what had seemed to be abnormal, such as the crisis period of 2007/8.

It is noted of Game theory that

John von Neumann and Oskar Morgenstern established that optimal decision-making involved probabilistically-weighting all possible future outcomes.

In apparent contrast

Many of the dominant figures in 20th century economics – from Keynes to Hayek, from Simon to Friedman – placed imperfections in information and knowledge centre-stage. Uncertainty was for them the normal state of decision-making affairs.

“It is not what we know, but what we do not know which we must always address, to avoid major failures, catastrophes and panics.”

The game-theory thinking is characterised as ignoring the possibility of uncertainty, which – from a mathematical point of view – seems an absurd misreading. Theories can only ever have conditional conclusions: any unconditional interpretation is a misinterpretation that goes beyond the proper bounds. The paper – rightly – rejects the conclusions of two-player zero-sum static game theory. But its critique of such a theory is much less thorough than von Neumann and Morgenstern’s own (e.g. their 4.3.3) and fails to identify which conditions are violated by economics. More worryingly, it seems to invite the reader to accept them, as here:

The choice of optimal decision-making strategy depends importantly on the degree of uncertainty about the environment – in statistical terms, model uncertainty. A key factor determining that uncertainty is the length of the sample over which the model is estimated. Other things equal, the smaller the sample, the greater the model uncertainty and the better the performance of simple, heuristic strategies.

This seems to suggest that – contra game theory – we could ‘in principle’ establish a sound model, if only we had enough data. Yet:

Einstein wrote that: “The problems that exist in the world today cannot be solved by the level of thinking that created them”.

There seems to be a non sequitur here: if new thinking is repeatedly being applied then surely the nature of the system will continually be changing? Or is it proposed that the ‘new thinking’ will yield a final solution, eliminating uncertainty? If ‘new thinking’ is indeed repeatedly being applied then the regularity conditions of basic game theory (e.g. at 4.6.3 and 11.1.1) are not met (as discussed at 2.2.3). It is certainly not an unconditional conclusion that the methods of game theory apply to economies beyond the short run, and experience would seem to show that such an assumption would be false.

The paper recommends the use of heuristics, by which it presumably means what Gigerenzer means: methods that ignore some of the data. Thus, for example, all formal methods are heuristics, since they ignore intuition. But a dog catching a Frisbee only has its own experience, which it is using, and so presumably – by this definition – is not actually using a heuristic either. In 2006 most financial and economic methods were heuristics in the sense that they ignored the lessons identified by von Neumann and Morgenstern. Gigerenzer’s definition seems hardly helpful. The dictionary definition relates to learning on one’s own, ignoring others. The economic problem, it seems to me, was of paying too much attention to the wrong people, and too little to those such as von Neumann and Morgenstern – and Keynes.

The implication of the paper and Gigerenzer is, I think, that a heuristic is a set method that is used, rather than solving a problem from first principles. This is clearly a good idea, provided that the method incorporates a check that whatever principles it relies upon do in fact hold in the case at hand. (This is what economists have often neglected to do.) If set methods are used as meta-heuristics to identify the appropriate heuristics for particular cases, then one has something like recognition-primed decision-making. It could be argued that the financial community had such meta-heuristics, which led to the crash: the adoption of heuristics as such seems not to be a solution. Instead one needs to appreciate what kinds of heuristic are appropriate when. Game theory shows us that the probabilistic heuristics are ill-founded when there is significant innovation, as there was before, during and immediately after 2007/8. In so far as economics and finance are games, some events are game-changers. The problem is not the proper application of mathematical game theory, but the ‘pragmatic’ application of a simplistic version: playing the game as it appears to be unless and until it changes. An unstated possible deduction from the paper is surely that such ‘pragmatic’ approaches are inadequate. For mutable games, strategy needs to take place at a higher level than it does for fixed games: it is not just that different strategies are required, but that ‘strategy’ has a different meaning: it should at least recognize the possibility of a change to a seemingly established status quo.

If we take an analogy with a dog and a Frisbee, and consider Frisbee catching to be a statistically regular problem, then the conditions of simple game theory may be met, and it is also possible to establish statistically that a heuristic (method) is adequate. But if there is innovation in the situation then we cannot rely on any simplistic theory or on any learnt methods. Instead we need a more principled approach, such as that of Keynes or Ashby, considering the conditionality and looking out for potential game-changers. The key is not just simpler regulation, but regulation that is less reliant on conditions that we expect to hold but which, on maturer reflection, are not totally reliable. In practice this may necessitate a mature on-going debate to adjust the regime to potential game-changers as they emerge.

See Also

Ariel Rubinstein opines that:

classical game theory deals with situations where people are fully rational.

Yet von Neumann and Morgenstern (4.1.2) note that:

the rules of rational behaviour must provide definitely for the possibility of irrational conduct on the part of others.

Indeed, in a paradigmatic zero-sum two person game, if the other person plays rationally (according to game theory) then your expected return is the same irrespective of how you play. Thus it is of the essence that you consider potential non-rational plays. I take it, then, that game theory as reflected in economics is a very simplified – indeed an over-simplified – version. It is presumably this distorted version that Haldane’s criticisms properly apply to.

Dave Marsay

Haldane’s Tails of the Unexpected

A. Haldane, B. Nelson, Tails of the unexpected, The Credit Crisis Five Years On: Unpacking the Crisis conference, University of Edinburgh Business School, 8-9 June 2012

The credit crisis is blamed on a simplistic belief in ‘the Normal Distribution’ and its ‘thin tails’, understating risk. Complexity and chaos theories point to greater risks, as does the work of Taleb.

Modern weather forecasting is pointed to as good relevant practice, where one can spot trouble brewing. Robust and resilient regulatory mechanisms need to be employed. It is no good relying on statistics like VaR (Value at Risk) that assume a normal distribution. The Bank of England is developing an approach based on these ideas.

Comment

Risk arises when the statistical distribution of the future can be calculated or is known. Uncertainty arises when this distribution is incalculable, perhaps unknown.
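Even within the realm of calculable risk, the tail assumption matters enormously. A minimal sketch of my own (not from the paper), comparing the chance of a move beyond four standard deviations under a Normal and under a fat-tailed Student-t distribution with the same scale:

    # Monte Carlo comparison of tail probabilities at the same threshold (in units
    # of standard deviation). The fat-tailed case is roughly a hundred times more
    # likely to produce a '4 sigma' day, which VaR under Normal assumptions misses.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 2_000_000
    z = rng.standard_normal(n)
    t = rng.standard_t(3, n)
    t = t / t.std()                      # rescale to unit standard deviation

    print("P(|X| > 4 sd), Normal:    ", (np.abs(z) > 4).mean())
    print("P(|X| > 4 sd), Student-t3:", (np.abs(t) > 4).mean())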

While the paper acknowledges Keynes’ economics and Knightian uncertainty, it overlooks Keynes’ Treatise on Probability, which underpins his economics.

Much of modern econometric theory is … underpinned by the assumption of randomness in variables and estimated error terms.

Keynes was critical of this assumption, and of this model:

Economics … shift[ed] from models of Classical determinism to statistical laws. … Evgeny Slutsky (1927) and Ragnar Frisch (1933) … divided the dynamics of the economy into two elements: an irregular random element or impulse and a regular systematic element or propagation mechanism. This impulse/propagation paradigm remains the centrepiece of macro-economics to this day.

Keynes pointed out that such assumptions could only be validated empirically, and in the Treatise he cited Lexis’s falsification (as the current paper also does).

The paper cites a game of paper/scissors/stone which Sotheby’s thought was a simple game of chance but which Christie’s saw as an opportunity for strategizing – and won millions of dollars. Apparently Christie’s consulted some 11-year-old girls, but they might equally well have been familiar with Shannon’s machine for defeating strategy-impaired humans. With this in mind, it is not clear why the paper characterises uncertainty as merely being about unknown probability distributions, as distinct from Keynes’ more radical position, that there is no such distribution.

The paper is critical of nerds, who apparently ‘like to show off’.  But to me the problem is not the show-offs, but those who don’t know as much as they think they know. They pay too little attention to the theory, not too much. The girls and Shannon seem okay to me: it is those nerds who see everything as the product of randomness or a game of chance who are the problem.

If we compare the Slutsky-Frisch model with Kuhn’s description of the development of science, then economics is assumed to develop in much the same way as normal science, but without ever undergoing anything like a (systemic) paradigm shift. Thus, while the model may be correct most of the time, violations, such as in 2007/8, matter.

Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.

One can understand this reasoning by analogy with science: the more dominant a school which protects its core myths, the greater the reaction and impact when the myths are exposed. But in finance it may not be just ‘risk control’ that causes a problem. Any optimisation that is blind to the possibility of systemic change may tend to increase the chance of change (for good or ill) [e.g. Bohr, Atomic Physics and Human Knowledge, Ox Bow Press, 1958].

See Also

Previous posts on articles by or about Haldane, along similar lines:

My notes on:

Dave Marsay

Hercock’s Cohesion

Robert G. Hercock, Cohesion: The Making of Society, 2009.

Having had Robert critique some of my work, I could hardly not comment on this think-piece. It draws on modern complexity theory and a broad view of relevant historical examples and current trends to create a credible narrative. For me, his key conclusions are:

  1. “[G]iven a sufficient degree of communication … the cooperative assembly of [a cohesive society] is inevitable.”
  2. To be cohesive, a society should be “global politically federated, yet culturally diverse”.

The nature of communication envisaged seems to be indicated by:

 “From smoke signals, and the electric telegraph, through to fibre optics, and the Internet … the manifest boom in all forms of communication is bringing immense capabilities to form new social collectives and positive cultural developments.”

I ‘get’ that increasing communication will bring immense capabilities to support the cooperative assembly of a cohesive global society, but am not convinced that the effective exploitation of this capability is inevitable. In chapter 6 (‘Bridges’) Robert says:

 “The truth is we now need a new shared set of beliefs. … Unfortunately, no one appears to have the faintest idea what such a common set of beliefs should look like, or where it might arise from, or who has responsibility to make it happen, or how, etc. Basically this is the challenge of the 21st century; we stand or fall on this battle for a common cultural nexus.”  

 This is closer to my own thinking.

People have different understandings of terms like ‘federated’. My preference is for subsidiarity: the idea that one has the minimum possible governance, with reliance on the minimum possible shared beliefs and common cultures. In complex situations these minimum levels are not obvious or static, so I would see an effective federation as engaging tentatively at a number of ‘levels’, ‘veering and hauling’ between them, and with strong arrangements for ‘horizon scanning’ and debate with the maximum possible diversity of views. Thus there would be not only cultural diversity but ‘viewpoint diversity within federated debate’. What is needed seems somewhat like Holism and glocalization.

Thinking of the EU, diversity of monetary policy might make the EU as an institution more cohesive while making its members’ economies less cohesive. To put it another way, attempts to enforce cohesion at the monetary level can threaten cohesion at the political level. So it is not clear to me that one can think of a society as simply ‘being cohesive’. Rather it should be cohesive in the sense appropriate to its current situation. Cohesion should be ‘adaptive’. Leadership and vision seem to be required to achieve this: it is not automatic.

In the mid 80s many of those involved in the development of communications technologies thought that they would promote world peace, sometimes citing the kind of works that Robert does. I had and have two reservations. Firstly, the quality of communications matters. Thus [it was thought] one probably needed digital video, mobile phones and the Internet, all integrated in a way that was easy to use. [The Apple Macintosh made this credible.] Thus, if there was a clash between Soviet secret police and Jewish protestors [common at the time], the whole world could take an informed view, rather than relying on the media. [This was before the development of video faking capabilities.] Secondly, while this would destabilize autocratic regimes, it was another issue as to what would happen next. It was generally felt that the only possible ‘properly’ stable states were democratic, but views differed on whether such states would necessarily stabilize.

Subsequent experience, such as the Arab spring, supports the view that YouTube and Facebook undermine oppressive regimes. But I remain unconvinced that ‘the cooperative assembly of [a cohesive society] is inevitable’ in Africa, the Middle East, Russia or South America, or that more communications would make it so. It certainly seems that if the process is inevitable, it can be much too slow.

My own thinking in the 80s was informed by the uncertainty and complexity theory of Keynes, Whitehead, Turing and Smuts, which predates that which Robert cites, and which informed the development of the United Nations as a part of ‘the cooperative assembly of a cohesive global society’. Robert seems to be arguing that according to modern theory such efforts were not necessary, but even so they may have been beneficial if all they did was speed the process up by a few generations. Moreover, the EU example seems to support my view that these theories are usefully more advanced than their contemporary counterparts.

The financial crash of 2008 occurred part way through the writing of the book. Like any history, explanations differ, and Robert gives a credible account in terms of modern complexity theory. But logic teaches us to be cautious about such post-hoc explanations. It seems to me that Keynes’ theory explains it adequately, and having been developed before the event should be given more credence.

Robert seems to regard the global crash of 2008 as a result of a loss of cohesion:

“When economies, states and societies lose their cohesion, people suffer; to be precise a lot of people end up paying the cost. In the recession of 2008/09 … “

But Keynes shows how it is cohesion (‘sticking together’) that causes global crashes. Firstly, in a non-globalized economy a crash in one part can be compensated for by the stability of another part, a bit like China saving the situation, but more so. Secondly, (to quote Patton) ‘if everyone is thinking alike then no-one is thinking’. Once group-think is established ‘expectations’ become ossified, and the market is disconnected from reality.

Robert’s notion of cohesion is “global politically federated, yet culturally diverse”. One can see how in 2008, and currently in the EU (and North Africa and elsewhere), de jure and de facto regulatory structures change, consistent with Robert’s view. But according to Keynes this is a response to an actual or potential crisis, rather than a causative factor. One can have a chain of crises in which political change leads to emergent social or economic problems, leading to political change, and so on. Robert seems to suppose that this must settle down into some stable federation. If so then perhaps only the core principles will be stable, and even these might need to be continually reinterpreted and refreshed, much as I have tried to do here.

On a more conceptual note, Robert qualifies the conclusion with “The evidence from all of the fields considered in this text suggests …”. But the conclusion could only be formally sustained by an argument employing induction. Now, if improved communications really is going to change the world so much then it will undermine the basis of any induction. (In Whitehead’s terms, induction only works within an epoch, but here the epoch is changed.) The best one could say would be that on current trends a move towards greater cohesion appears inevitable. This is a more fundamental problem than only considering evidence from a limited range of fields. More evidence from more fields could not overcome this problem.

Dave Marsay

The End of a Physics Worldview (Kauffman)

Thought provoking, as usual. This video goes beyond his previous work, but in the same direction. His point is that it is a mistake to think of ecologies and economies as if they resembled the typical world of Physics. A previous written version is at npr, followed by a later development.

He builds on Kant’s notion of wholes, noting (as Kant did before him) that the existence of such wholes is inconsistent with classical notions of causality.  He ties this in to biological examples. This complements Prigogine, who did a similar job for modern Physics.

Kauffman is critical of mathematics and ‘mathematization’, but seems unaware of the mathematics of Keynes and Whitehead. Kauffman’s view seems the same as that due to Bergson and Smuts, which in the late 1920s defined ‘modern science’. To me the problem behind the financial crash lies not in science or mathematics or even in economics, but in the brute fact that politicians and financiers were wedded to a pre-modern (pre-Kantian) view of economics and mathematics. Kauffman’s work may help enlighten them on the need, but not on the potential role for modern mathematics.

Kauffman notes that at any one time there are ‘adjacent possibles’ and that in the near future they may come to pass, and that – conceptually – one could associate a probability distribution with these possibilities. But as new possibilities come to pass new adjacent possibilities arise. Kauffman supposes that it is not possible to know what these are, and hence one cannot have a probability distribution, much of information theory makes no sense, and one cannot reason effectively. The challenge, then, is to discover how we do, in fact, reason.

Kauffman does not distinguish between short and long run. If we do so then we see that if we know the adjacent possible then our conventional reasoning is appropriate in the short-term, and Kauffman’s concerns are really about the long-term: beyond the point at which we can see the potential possibles that may arise. To this extent, at least, Kauffman’s post-modern vision seems little different from the modern vision of the 1920s and 30s, before it was trivialized.

Dave Marsay

The voice of science: let’s agree to disagree (Nature)

Sarewitz uses his Nature column to argue against forced or otherwise false consensus in science.

“The very idea that science best expresses its authority through consensus statements is at odds with a vibrant scientific enterprise. … Science would provide better value to politics if it articulated the broadest set of plausible interpretations, options and perspectives, imagined by the best experts, rather than forcing convergence to an allegedly unified voice.”

D. Sarewitz, The voice of science: let’s agree to disagree, Nature, Vol. 478, pg. 3, 6 October 2011.

Sarewitz seems to be thinking in terms of issues such as academic freedom and vibrancy. But there are arguably more important aspects. Given any set of experiments or other evidence there will generally be a wide range of credible theories. The choice of a particular theory is not determined by any logic, but by such factors as which one was thought of first and by whom, and which is easiest to work with in making predictions, etc.

In issues like smoking and climate change the problem is that the paucity of data is obvious and different credible theories lead to different policy or action recommendations. Thus no one detailed theory is credible. We need a different way of reasoning, that should at least recognize the range of credible theories and the consequential uncertainty.

I have experience of a different kind of problem: where one has seemingly well established theories but these are suddenly falsified in a crisis (as in the financial crash of 2008). Politicians (and the public, where they are involved) understandably lose confidence in the ‘science’ and can fall back on instincts that may or may not be appropriate. One can try to rebuild a credible theory over-night (literally) from scratch, but this is not recommended. Some scientists have a clear grasp of their subject. They understand that the accepted theory is part science part narrative and are able to help politicians understand the difference. We may need more of these.

Enlightened scientists will seek to encourage debate, e.g. via enlightened journals, but in some fields, as in economics, they may find themselves ‘out in the cold’. We need to make sure that such people have a platform. I think that this goes much broader than the committees Sarewitz is considering.

I also think that many of our contemporary problems arise because societies tend to suppress uncertainty, being more comfortable with consensus and giving more credence to people who are confident in their subject. This attitude suppresses a consideration of alternatives and turns novelty into shocks, which can have disastrous results.

Previous work

In a 2001 Nature article Roger Pielke covers much the same ground. But he also says:

“Take for example weather forecasters, who are learning that the value to society of their forecasts is enhanced when decision-makers are provided with predictions in probabilistic rather than categorical fashion and decisions are made in full view of uncertainty.”

From this and his blog it seems that the uncertainty is merely probabilistic, and differs only in magnitude. But it seems to me that, before global warming became significant, weather forecasting and climate modelling seemed probabilistic, but that there was an intermediate time-scale (in the UK, one or two weeks) which was always more complex and which had different types of uncertainty, as described by Keynes. But this does not detract from the main point of the article.

See also

Popper’s Logic of Scientific Discovery, Roger Pielke’s blog (with a link to his 2001 article in Nature on the same topic).

Dave Marsay

How to live in a world that we don’t understand, and enjoy it (Taleb)

N. Taleb, How to live in a world that we don’t understand, and enjoy it, Goldstone Lecture 2011 (U Penn, Wharton)

Notes from the talk

Taleb returns to his alma mater. This talk supersedes his previous work (e.g. Black Swan). His main points are:

  • We don’t have a word for the opposite of fragile.
      Fragile systems have small probability of huge negative payoff
      Robust systems have consistent payoffs
      ? has a small probability of a large pay-off
  • Fragile systems eventually fail. ? systems eventually come good.
  • Financial statistics have a kurtosis that cannot in practice be measured, and tend to hugely under-estimate risk.
      Often more than 80% of kurtosis over a few years is contributed by a single (memorable) day. (Illustrated in the sketch after this list.)
  • We should try to create ? systems.
      He calls them convex systems, where the expected return exceeds the return given the expected environment.
      Fragile systems are concave, where the expected return is less than the return from the expected situation.
      He also talks about ‘creating optionality’.
  • He notes an ‘action bias’: whenever there is a game like the stock market, we want to get involved and win. It may be better not to play.
  • He gives some examples.
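Two of the points above can be illustrated with a minimal sketch of my own (toy numbers, not Taleb’s data): the share of measured kurtosis that comes from the single most extreme day, and the way a convex payoff gains from variability while a concave one loses (Jensen’s inequality).

    # (i) With fat-tailed daily returns, the single biggest day often contributes
    #     most of the measured kurtosis, so the estimate is unstable.
    # (ii) A convex (option-like) exposure has an expected payoff above its payoff
    #      at the expected environment; a concave one has the reverse.
    import numpy as np

    rng = np.random.default_rng(3)

    returns = rng.standard_t(3, 1250) * 0.01          # ~5 years of fat-tailed daily returns
    dev4 = (returns - returns.mean()) ** 4
    print("share of the fourth-moment sum from the biggest single day:",
          round(float(dev4.max() / dev4.sum()), 2))

    x = rng.normal(1.0, 0.5, 100_000)                 # uncertain 'environment'
    convex = np.maximum(x - 1.0, 0.0)                 # option-like payoff
    concave = -np.maximum(1.0 - x, 0.0)               # short-option-like payoff
    # Both payoffs are exactly 0 at the expected environment x = 1.
    print("expected payoff, convex: ", round(float(convex.mean()), 3))
    print("expected payoff, concave:", round(float(concave.mean()), 3))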

Comments

Taleb is dismissive of economists who talk about Knightian uncertainty, which goes back to Keynes’ Treatise on Probability. Their corresponding story is that:

  • Fragile systems are vulnerable to ‘true uncertainty’
  • Fragile systems eventually fail
  • Practical numeric measures of risk ignore ‘true uncertainty’.
  • We should try to create systems that are robust to or exploit true uncertainty.
  • Rather than trying to be the best at playing the game, we should try to change the rules of the game or play a ‘higher’ game.
  • Keynes gives examples.

The difference is that Taleb implicitly supposes that financial systems etc. are stochastic, but have too much kurtosis for us to be able to estimate their parameters. Rare events are regarded as rare events generated stochastically. Keynes (and Whitehead) suppose that it may be possible to approximate such systems by a stochastic model for a while, but the rare events denote a change to a new model, so that – for example – there is not a universal economic theory. Instead, we occasionally have new economics, calling for new stochastic models. Practically, there seems little to choose between them, so far.

From a scientific viewpoint, one can only assess definite stochastic models. Thus, as Keynes and Whitehead note, one can only say that a given model fitted the data up to a certain date, and then it didn’t. The notion that there is a true universal stochastic model is not provable scientifically, but neither is it falsifiable. Hence according to Popper one should not entertain it as a scientific view. This is possibly too harsh on Taleb, but the point is this:

Taleb’s explanation has pedagogic appeal, but this shouldn’t detract from an appreciation of alternative explanations based on non-stochastic uncertainty.

 In particular:

  • Taleb (in this talk) seems to regard rare crises as ‘acts of fate’ whereas Keynes regards them as arising from misperceptions on the part of regulators and major ‘players’. This suggests that we might be able to ameliorate them.
  • Taleb implicitly uses the language of probability theory, as if this were rational. Yet his argument (like Keynes’) undermines the notion of probability as derived from rational decision theory.
      Not playing is better whenever there is Knightian uncertainty.
      Maybe we need to be able to talk about systems that thrive on uncertainty, in addition to convex systems.
  • Taleb also views the up-side as good fortune, whereas we might view it as an innovation, by whatever combination of luck, inspiration, understanding and hard work.

See also

On fat tails versus epochs.

Dave Marsay

From Being to Becoming

I. Prigogine, From Being to Becoming: Time and Complexity in the Physical Sciences, WH Freeman, 1980 

 See new page.

Summary

“This book is about time.” But it has much to say about complexity, uncertainty, probability, dynamics and entropy. It builds on his Nobel lecture, re-using many of the models and arguments, but taking them further.

Being is classically modelled by a state within a landscape, subject to a fixed ‘master equation’ describing changes with time. The state may be an attribute of an object (classical dynamics) or a probability ‘wave’ (quantum mechanics). [This unification seems most fruitful.] Such change is ‘reversible’ in the sense that if one reverses the ‘arrow of time’ one still has a dynamical system.

Becoming refers to more fundamental, irreversible change, typical of ‘complex systems’ in chemistry, biology and sociology, for example.

The book reviews the state of the art in theories of Being and Becoming, providing the hooks for its later reconciliation. Both sets of theories are phenomenological – about behaviours. Prigogine shows that not only is there no known link between the two theories, but that they are incompatible.

Prigogine’s approach is to replace the notion of Being as represented by a state, analogous to a point in a vector space, by that of an ‘operator’ within something like a Hilbert space. Stable operators can be thought of as conventional states, but operators can become unstable, which leads to non-statelike behaviours. Prigogine shows how in some cases this can give rise to ‘becoming’.

This would, in itself, seem a great and much needed subject for a book, but Prigogine goes on to consider the consequences for time. He shows how time arises from the operators. If everything is simple and stable then one has classical time. But if the operators are complex then one can have a multitude of times at different rates, which may be erratic or unstable. I haven’t got my head around this bit yet.

Some Quotes

Preface

… the main thesis …can be formulated as:

  1. Irreversible processes are as real as reversible ones …
  2. Irreversible processes play a fundamental constructive role in the physical world …
  3. Irreversibility … corresponds … to an embedding of dynamics within a vaster formalism. [Processes instead of points.] (xiii)

The classical, often called “Galilean,” view of science was to regard the world as an “object,” to try to describe the physical world as if it were being seen from the outside as an object of analysis to which we do not belong. (xv)

… in physics, as in sociology, only various possible “scenarios” can be predicted. [One cannot predict actual outcomes, only identify possibilities.] (xvii)

Introduction

… dynamics … seemed to form a closed universal system, capable of yielding the answer to any question asked. (3)

… Newtonian dynamics is replaced by quantum mechanics and by relativistic mechanics. However, these new forms of dynamics … have inherited the idea of Newtonian physics: a static universe, a universe of being without becoming. (4)

The Physics of Becoming

The interplay between function, structure and fluctuations leads to the most unexpected phenomena, including order through fluctuations … . (101)

… chemical instabilities involve long-range order through which the system acts as a whole. (104)

… the system obeys deterministic laws [as in classical dynamics] between two bifurcation points, but in the neighbourhood of the bifurcation points fluctuations play an essential role and determine the “branch” that the system will follow. (106) [This is termed ‘structurally unstable’.]

… a cyclic network of reactions [is] called a hypercycle. When such networks compete with one another, they display the ability to evolve through mutation and replication into greater complexity. …
The concept of structural stability seems to express in the most compact way the idea of innovation, the appearance of a new mechanism and a new species, … . (109)

… the origin of life may be related to successive instabilities somewhat analogous to the successive bifurcations that have led to a state of matter of increasing coherence. (123)

As an example, … consider the problem of urban evolution … (124) … such a model offers a new basis for the understanding of “structure” resulting from the actions (choices) of the many agents in a system, having in part at least mutually dependent criteria of action. (126)

… there are no limits to structural instability. Every system may present instabilities when suitable perturbations are introduced. Therefore, there can be no end to history. [DJM emphasis.] … we have … the constant generation of “new types” and “new ideas” that may be incorporated into the structure of the system, causing its continual evolution. (128)

… near bifurcations the law of large numbers essentially breaks down.
In general, fluctuations play a minor role … . However, near bifurcations they play a critical role because there the fluctuation drives the average. This is the very meaning of the concept of order through fluctuations … . (132)

… near a bifurcation point, nature always finds some clever way to avoid the consequences of the law of large numbers through an appropriate nucleation process. (134)

… For small-scale fluctuations, boundary effects will dominate and fluctuations will regress. … for large-scale fluctuations, boundary effects become negligible. Between these limiting cases lies the actual size of nucleation. (146)

… We may expect that in systems that are very complex, in the sense that there are many interacting species or components, [the degree of coupling between the system and its surroundings] will be very large, as will be the size of the fluctuation which could start the instability. Therefore … a sufficiently complex system is generally in a metastable state. (147) [But see Comments below.]

… Near instabilities, there are large fluctuations that lead to a breakdown of the usual laws of probability theory. (150)

The Bridge from Being to Becoming

[As foreshadowed by Bohr] we have a new form of complementarity – one between the dynamical and thermodynamic descriptions. (174)

… Irreversibility is the manifestation on a macroscopic scale of “randomness” on a microscopic scale. (178)

Contrary to what Boltzmann attempted to show there is no “deduction” of irreversibility from randomness – they are only cousins! (177)

The Microscopic Theory of Irreversible Processes

The step made … is quite crucial. We go from the dynamical system in terms of trajectories or wave packets to a description in terms of processes. (186)

… Various mechanisms may be involved, the important element being that they lead to a complexity on the microscopic level such that the basic concepts involved in the trajectory or wave function must be superseded by a statistical ensemble. (194)

The classical order was: particles first, the second law later – being before becoming! It is possible that this is no longer so when we come to the level of elementary particles and that here we must first introduce the second law before being able to define the entities. (199)

The Laws of Change

… Of special interest is the close relation between fluctuations and bifurcations which leads to deep alterations in the classical results of probability theory. The law of large numbers is no longer valid near bifurcations and the unicity of the solution of … equations for the probability distribution is lost. (204)

This mathematization leads us to a new concept of time and irreversibility … . (206)

… the classical description in terms of trajectories has to be given up either because of instability and randomness on the microscopic level or because of quantum “correlations”. (207)

… the new concept implies that age depends on the distribution itself and is therefore no longer an external parameter, a simple label as in the conventional formula.
We see how deeply the new approach modifies our traditional view of time, which now emerges as a kind of average over “individual times” of the ensemble. (210)

For a long time, the absolute predictability of classical mechanics, or the physics of being, was considered to be an essential element of the scientific picture of the physical world. … the scientific picture has shifted toward a new, more subtle conception in which both deterministic features and stochastic features play an essential role. (210)

The basis of classical physics was the conviction that the future is determined by the present, and therefore a careful study of the present permits the unveiling of the future. At no time, however, was this more than a theoretical possibility. Yet in some sense this unlimited predictability was an essential element of the scientific picture of the physical world. We may perhaps even call this the founding myth of classical science.
The situation is greatly changed today. … The incorporation of the limitation of our ways of acting on nature has been an essential element of progress. (214)

Have we lost essential elements of classical science in this recent evolution [of thought]? The increased limitation of deterministic laws means that we go from a universe that is closed to one that is open to fluctuations, to innovations.

… perhaps there is a more subtle form of reality that involves both laws and games, time and eternity. (215) 

Comments

Relationship to previous work

This book can be seen as a development of the work of Kant, Whitehead and Smuts on emergence, although – curiously – it makes little reference to them [pg xvii]. In their terms, reality cannot logically be described in terms of point-like states within spaces with fixed ‘master equations’ that govern their dynamics. Instead, it needs to be described in terms of ‘processes’. Prigogine goes beyond this by developing explicit mathematical models as examples of emergence (from being to becoming) within physics and chemistry.

Metastability

According to the quote above, sufficiently complex systems are inherently metastable. Some have supposed that globalisation inevitably leads to an inter-connected and hence complex and hence stable world. But globalisation could lead to homogenization or fungibility, a reduction in complexity and hence an increased vulnerability to fluctuations. As ever, details matter.
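To make the ‘order through fluctuations’ and ‘law of large numbers breaks down’ quotations concrete, here is a minimal sketch of my own (not one of Prigogine’s models): a noisy pitchfork bifurcation, dx/dt = rx − x³. Below the bifurcation the fluctuations average away; above it they select a branch, and the ensemble mean then describes no individual realisation.

    # An ensemble of noisy realisations of dx/dt = r*x - x^3. For r < 0 the noise
    # averages away; for r > 0 each realisation is driven to one of the branches
    # near +/- sqrt(r), so the ensemble mean (near zero) is not representative.
    import numpy as np

    rng = np.random.default_rng(4)

    def run_ensemble(r, n_runs=2000, steps=4000, dt=0.01, noise=0.01):
        x = np.zeros(n_runs)
        for _ in range(steps):
            x += dt * (r * x - x ** 3) + noise * np.sqrt(dt) * rng.standard_normal(n_runs)
        return x

    for r in (-0.5, 0.5):
        x = run_ensemble(r)
        print(f"r = {r:+.1f}: ensemble mean = {x.mean():+.3f}, typical |x| = {np.abs(x).mean():.3f}")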

See Also

I. Prigogine and I. Stengers, Order out of Chaos, Heinemann, 1984.
This is an update of a popular work on Prigogine’s theory of dissipative systems. He provides an unsympathetic account of Kant’s Critique of Pure Reason, supposing Kant to hold that there is “a unique set of principles on which science is based” without making reference to Kant’s concept of emergence, or of the role of communities. But he does set his work within the framework of Whitehead’s Process and Reality. Smuts’ Holism and Evolution, which draws on Kant and mirrors Whitehead, is also relevant, as a popular and influential account of the 1920s, helping to define the then ‘modern science’.

Dave Marsay

Composability

State of the art – software engineering

“Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations … .” For information systems, from a software engineering perspective, the essential features are regarded as modularity and statelessness. Current inhibitors include:

“Lack of clear composition semantics that describe the intention of the composition and allow to manage change propagation.”

Broader context

Composability has a natural interpretation as readiness to be composed with others, and has broader applicability. For example, one suspects that if some people met their own clone, they would not be able to collaborate. Quite generally, composability would seem necessary but perhaps not sufficient for ‘good’ behaviour. Thus each culture tends to develop ways for people to work effectively together, but some sub-cultures seem parasitic, in that they could not sustain themselves on their own.

Cultures tend to evolve, but technical interventions tend to be designed. How can we be sure that the resultant systems are viable under evolutionary pressure? Composability would seem to be an important element, as it allows elements to be re-used and recombined, with the aspiration of supporting change propagation.

Analysis

Composability is particularly evident, and important, in algorithms in statistics and data fusion. While modularity and statelessness are important for the implementation of the algorithms, there are also characteristics of the algorithms as functions (ignoring internal details) that matter.

If we partition a given data set, apply a function to the parts and then combine the results, we want to get the same result no matter how the data is partitioned. That is, we want the result to depend on the data, not the partitioning.

In elections for example, it is not necessarily true that a party who gets a majority of the votes overall will get the most candidates elected. This lack of composability can lead to a loss of confidence in the electoral process. Similarly, media coverage is often an editor’s precis of the precis by different reporters. One would hope that a similar story would emerge if one reporter had covered the whole. 
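As a minimal sketch of the electoral point (in Python, with invented constituency figures): under first-past-the-post the seats are decided constituency by constituency, so the seat count is not a composable function of the votes, and the party with the most votes overall need not win the most seats.

```python
# Toy illustration (invented data): vote totals compose simply across
# constituencies, but seat counts do not.
from collections import Counter

constituencies = [
    {"A": 9000, "B": 1000},   # A wins by a landslide
    {"A": 4900, "B": 5100},   # B wins narrowly
    {"A": 4900, "B": 5100},   # B wins narrowly
]

seats, votes = Counter(), Counter()
for c in constituencies:
    seats[max(c, key=c.get)] += 1   # seat goes to the local plurality winner
    votes.update(c)                 # vote totals, by contrast, compose simply

print("votes:", dict(votes))   # {'A': 18800, 'B': 11200} -- A has most votes
print("seats:", dict(seats))   # {'A': 1, 'B': 2}         -- B has most seats
```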

More technically, averages over parts cannot, in general, be combined to give a true overall average, whereas counting and summing are composable. Desired functions can often be computed composably by using a preparation function, then a composable function, then a projection or interpretation function. Thus an average can be computed by preparing a sum and count for each part, summing over the parts to give an overall sum and count, and then projecting to get the average. If a given function can be implemented via two or more composable functions, then those functions must be ‘conjugate’: the same up to some change of basis. (For example, multiplication is composable, but one could also prepare using logarithms and project using exponentiation to calculate a product using a sum.)
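A minimal sketch of this prepare / compose / project pattern, in Python (my own illustration, not from any cited source): averages do not compose directly, but (sum, count) pairs do, and a product can be obtained from the composable sum by preparing with logarithms and projecting with the exponential.

```python
# Sketch (illustrative only) of the prepare / compose / project pattern.
from functools import reduce
import math

parts = [[1, 2, 3], [10]]   # one data set, arbitrarily partitioned

# Averaging is not composable: the mean of the per-part means is wrong.
mean_of_means = sum(sum(p) / len(p) for p in parts) / len(parts)        # 6.0
true_mean = sum(sum(p) for p in parts) / sum(len(p) for p in parts)     # 4.0

# Prepare each part as a (sum, count) pair; these compose by addition;
# a final projection recovers the average.
def prepare(part):
    return (sum(part), len(part))

def compose(a, b):
    return (a[0] + b[0], a[1] + b[1])

def project(acc):
    total, count = acc
    return total / count

composable_mean = project(reduce(compose, map(prepare, parts)))         # 4.0
assert composable_mean == true_mean != mean_of_means

# Conjugacy: a product can be computed via the composable sum by preparing
# with logarithms and projecting with the exponential (positive values only).
product = math.exp(sum(math.log(x) for p in parts for x in p))          # 60.0
assert abs(product - 1 * 2 * 3 * 10) < 1e-9
```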

In any domain, then, it is natural to look for composable functions and to implement algorithms in terms of them. This seems to have been widespread practice until the late 1980s, when it became more common to implement algorithms directly and then to worry about how to distribute them.

Iterative Composability

In some cases it is not possible to determine composable functions in advance, or perhaps at all – for example, where innovation can take place, or where one is otherwise ignorant of what may arise. Here one may look for a form of ‘iterative composability’, in which one hopes that the result is normally adequate, that there will be signs if it is not, and that one will be able to improve the situation. What matters is that this process should converge, so that one can get as close as one likes to the results one would get from using all the data.
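As a hedged illustration of the iterative idea (my own example, not from the source): the median of a partitioned data set is not composable, but per-part counts are, so one can repeatedly ask the parts composable questions and converge on the answer one would get from the pooled data.

```python
# Illustrative sketch of 'iterative composability': binary-search for the
# overall median using only composable per-part queries, getting as close
# as we like to the pooled-data answer.

def count_below(part, threshold):
    """Composable per-part summary: how many values lie below the threshold."""
    return sum(1 for x in part if x < threshold)

def iterative_median(parts, lo, hi, tol=1e-6):
    n = sum(len(p) for p in parts)                        # counting composes
    target = (n + 1) // 2                                 # lower-median rank
    while hi - lo > tol:
        mid = (lo + hi) / 2
        below = sum(count_below(p, mid) for p in parts)   # composable combine
        if below < target:
            lo = mid            # too little mass below: the median is higher
        else:
            hi = mid            # enough mass below: the median is no higher
    return (lo + hi) / 2

parts = [[1.0, 9.0, 2.0], [7.0, 3.0]]                     # pooled median is 3.0
print(round(iterative_median(parts, 0.0, 10.0), 3))       # ~3.0
```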

Elections under FPTP (first past the post) are not composable, and one cannot tell whether a party that was most voters’ first preference has failed to get in. AV (the alternative vote) is also not composable, but one has more information (voters give rankings) and so can sometimes tell that there cannot have been a party that was most voters’ first preference yet failed to get in. If there could have been, one could hold a second round with only the top parties’ candidates. This is a partial step towards general iterative composability: such a scheme might often be iteratively composable for the given situation, much more so than FPTP.

Parametric estimation is generally composable when one has a fixed number of entities whose parameters are being estimated. Otherwise one has an ‘association’ problem, which might be tackled differently for the different parts. If so, this needs to be detected and remedied, perhaps iteratively. This is effectively a form of hypothesis testing. Here the problem is that the testing of hypotheses using likelihood ratios is not composable. But, again, if hypotheses are compared differences can be detected and remedial action taken. It is less obvious that this process will converge, but for constrained hypothesis spaces it does.
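The sketch below (a simplified illustration of my own, not the author’s method) shows the composable case – combining per-part sufficient statistics for an agreed set of entities – together with a crude consistency check that could flag an association problem for iterative remediation.

```python
# Simplified sketch: with an agreed set of entities, per-part sufficient
# statistics compose exactly; when the parts may have associated observations
# differently, a consistency check can flag the need for iterative remediation.

def suff_stats(xs):
    """Per-part sufficient statistics for a Gaussian: (count, sum, sum of squares)."""
    return (len(xs), sum(xs), sum(x * x for x in xs))

def combine(a, b):
    """Sufficient statistics compose by component-wise addition."""
    return tuple(x + y for x, y in zip(a, b))

def estimate(stats):
    """Project combined statistics onto (mean, variance)."""
    n, s, ss = stats
    mean = s / n
    return mean, ss / n - mean * mean

# Composable case: both parts report statistics against the same entity labels.
part1 = {"entity_1": suff_stats([1.0, 1.2]), "entity_2": suff_stats([5.0])}
part2 = {"entity_1": suff_stats([0.9]),      "entity_2": suff_stats([5.1, 4.8])}
combined = {k: combine(part1[k], part2[k]) for k in part1}
print({k: estimate(v) for k, v in combined.items()})

# Crude association check: if the parts' per-entity means differ wildly, they
# may have labelled the entities differently -- flag for iterative repair.
def association_suspect(a, b, gate=3.0):
    return any(abs(estimate(a[k])[0] - estimate(b[k])[0]) > gate for k in a)

print("re-associate?", association_suspect(part1, part2))   # False here
```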

Innovation, transformation, freedom and rationality

It is common to suppose that people acting in their environment should characterise their situation within a context in enough detail to remove all but (numeric) probabilistic uncertainty, so that they can optimize. Acting sub-optimally, it is supposed, would not be rational. But if innovation is about transformation then a supposedly rational act may undermine the context of another, leading to a loss of performance and possibly crisis or chaos.

Simultaneous innovation could be managed by having an over-arching policy or plan, but this would clearly constrain freedom and hence genuine innovation. Too much innovation and one has chaos; too little and there is too little progress.

A composable approach is to seek innovations that respect each other’s contexts, and to make clear to others what one’s essential context is. This supports only very timid innovation if the innovation is rational (in the above sense), since no true (Knightian) uncertainty can be accepted. A more composable approach is to seek to minimise dependencies and to innovate in a way that accepts – possibly embraces – true uncertainty. This necessitates a deep understanding of the situation and its potentialities.

Conclusion

Composability is an important concept that can be applied quite generally. The structure of activity shouldn’t impact on the outcome of the activity (other than resource usage). This can mean developing core components that provide a sound infrastructure, and then adapting it to perform the desired tasks, rather than seeking to implement the desired functionality directly.

Dave Marsay

Cyber Doctrine

Cyber Doctrine: Towards a coherent evolutionary framework for learning resilience, ISRS, JP MacIntosh, J Reid and LR Tyler.

A large booklet that provides a critical contribution to the Cyber debate. Here I provide my initial reactions: the document merits more detailed study.

Topics

Scope

Just as financial security is about more than just defending against bank-robbers, cyber security is about more than just defending against deliberate attack, and extends to all aspects of resilience, including freedom from whatever delusions might be analogous to the efficient market hypothesis.

Approach

Innovation is key to a vibrant Cyberspace, and further innovation in Cyberspace is vital to our real lives. Thus a notion of security based on constraint, or of resilience based on always returning to the status quo, is simply not appropriate.

Resilience and Transformation

Resilience is defined as “the enduring power of a body or bodies for transformation, renewal and recovery through the flux of interactions and flow of events.” It is not just the ability to ‘bounce back’ to its previous state. It implies the ability to learn from events and adapt to be in a better position to face them.

Transformation is taken to be the key characteristic. It is not defined, which might lead people to turn to wikipedia, whose notion does not explicitly address complexity or uncertainty. I would like to see more emphasis on the long-run issues of adapting to evolve as against sequentially adapting to what one thinks the current needs are. This may include ‘deep transformation’ and ‘transformation in contact’ and the elimination of parts that are no longer needed.

Pragmatism 

The document claims to be ‘pragmatic’: I have concerns about what this term means to readers. According to wikipedia, “it describes a process where theory is extracted from practice, and applied back to practice to form what is called intelligent practice.” Fair enough. But the efficient market hypothesis was once regarded as pragmatic, and there are many who think it pragmatic to act as if one’s beliefs were true. Effective Cyber practice would seem to depend on an appropriate notion of pragmatism, which a doctrine perhaps ought to elucidate.

Glocalization

The document advocates glocalization. According to wikipedia this means ‘think global act local’ and the document refers to a variant: “the compression of the world and the intensification of the consciousness of the world as a whole”. But how should we conceive the whole? The document says “In cyberspace our lives are conducted through a kaleidoscope of global and local relations, which coalesce and dissipate as diverse glocals.” Thus this is not wholism (which supposes that the parts should be dominated by the needs of the whole) but a more holistic vision, which seeks a sustainable solution, somehow ‘balancing’ a range of needs on a range of scales. The doctrinal principles will need to support the structuring and balancing more explicitly.

Composability

The document highlights composability as a key aspect of best structural practice that – pragmatically – perhaps ought to be leveraged further. I intend to blog specifically on this. Effective collaboration is clearly essential to innovation, including resilience. Composability would seem essential to effective collaboration.

Visualisation: Quads

I imagine that anyone who has worked on these types of complex issue, with all their uncertainties, will recognize the importance of visual aids that can be talked around. There are many that are helpful when interpreted with understanding and discretion, but I have yet to find any that can ‘stand alone’ without risk of mis-interpretation. Diagram 6 (page 89) seems at first sight a valuable contribution to the corpus, worthy of further study and perhaps development.

I consider Perrow limited because his ‘yardstick’ tends to be an existing system and his recommendation seems to be ‘complexity and uncertainty are dangerous’. But if we want resilience through innovation we cannot avoid complexity and uncertainty. Further, glocalization seems to imply a turbulent diversity of types of coupling, such that Perrow’s analysis is impossible to apply.

I have come across the Johari window used in government as a way of explaining uncertainty, but here the yardstick is what others think they know, and in any case the concept of ‘knowledge’ seems just as difficult as that of uncertainty. So while this motivates, it doesn’t really explain.

The top ‘quad’ says something important about conventional economics. Much of life is a zero sum game: if I eat the cake, then you can’t. But resilience is about other aspects of life: we need a notion of rationality that suits this side of life. This will need further development.

Positive Deviancy and Education

 Lord Reid (below) made some comments when launching the booklet that clarify some of the issues. He emphasises the role for positive deviancy and education in the sense of ‘bringing out’. This seems to me to be vital.

Control and Patching

Lord Reid (below) emphasises that neither a control-based approach nor continual ‘patching’ is enough. There is a qualitative change in the nature of Cyber, and hence a need for a completely different approach. This might have been made more explicit in the document.

Criticisms

The main criticisms that I have seen either concern recommendations that the critics wrongly assume John Reid to be making (e.g., for more control) or appear to be based on a dislike of Lord Reid. In any case, changes such as those proposed would seem to call for a more international figure-head or lead institution, perhaps with ISRS in a supporting role.

What next?

The argument for having some doctrine matches my own leanings, as does the general trend of the suggestions. But (as the government, below, says) one needs an international consensus, which in practice would seem to mean an approach endorsed by the UN security council (including America, France, Russia and China). The seeming hopelessness of such a task leads people to underestimate the risks of the status quo, or of ‘evolutionary’ patching of it with either less order or more control. As with the financial crisis, this may be the biggest threat to our security, let alone our resilience.

It seems to me, though, that behind the specific ideas proffered the underlying instincts are not all that different from those of the founders of the UN, and that seen in that context the ideas might not be too far from being attractive to each of the permanent members, if only the opportunities were appreciated.

Any re-invention or re-articulation of the principles of the UN would naturally have an impact on member states, and call for some adjustment to their legal codes. The UK’s latest Prevent strategy already emphasises the ‘fundamental values’ of ‘universal human rights, equality before the law, democracy and full participation in our society’.  In effect, we could see the proposed Cyber doctrine as proposing principles that would support a right to live in a reasonably resilient society. If for resilience we read sustainability, then we could say that there should be a right to be able to sustain oneself without jeopardising the prospects of one’s children and grandchildren. I am not sure what ‘full participation in our society’ would mean under reformed principles, but I see governments as having a role in fostering the broadest range of possible ‘positive deviants’, rather than (perhaps inadvertently) encouraging dangerous groupthink. These thoughts are perhaps prompted more by Lord Reid’s comments than the document itself.

Conclusion

 The booklet raises important issues about the nature, opportunities and threats of globalisation as impacted by Cyberspace. It seems clear that there is a consequent need for doctrine, but not yet what routes forward there may be. Food for thought, but not a clear prospectus.

See Also

Government position, Lord Reid’s Guardian article, Police Led Intelligence, some negative comment.

Dave Marsay

Complexity Demystified: A guide for practitioners

P. Beautement & C. Broenner Complexity Demystified: A guide for practitioners, Triarchy Press, 2011.

First Impressions

  • The title comes close to ‘complexity made simple’, which would be absurd. A favourable interpretation (after Einstein) would be ‘complexity made as straightforward as possible, but no more.’
  • The references look good.
  • The illustrations look appropriate, of suitable quality, quantity and relevance.

Skimming through, I gained a good impression of who the book was for and what it had to offer them. This was borne out (below).

Summary

Who is it for?

Complexity is here viewed from the perspective of a ‘coal face’ practitioner who:

  • Is dealing with problems that are not amenable to a conventional managerial approach (e.g., set targets, monitor progress against targets, …).
  • Has had some success and shown some insight and aptitude.
  • Is being thwarted by stakeholders (e.g., donors, management) who take a conventional management view and use conventional ‘tools’, such as accountability against pre-agreed targets.

What is complexity?

Complexity is characterised as a situation where:

  • One can identify potential behaviours and value them, mostly in advance.
  • Unlike simpler situations, one cannot predict what the priorities will be, or when: a plan that is a program will fail.
  • One can react to behaviours by suppressing negative behaviours and supporting positive ones: a plan is a valuation, activity is adaptation.

Complexity leads to uncertainty.

Details

Complexity science principles, concepts and techniques

The first two context-settings were well written and informative. This is about academic theory, which we have been warned not to expect too much of; such theory is not [yet?] ‘real-world ready’ – ready to be ‘applied to’ real complex situations – but it does supply some useful conceptual tools.

The approach

In effect, commonplace ‘pragmatism’ is not adequate, and the notion of pragmatism is adapted. Instead of persisting with one’s view for as long as it seems to be adequate, one seeks to use a broad range of cognitive tools to check one’s understanding and look for alternatives, in particular looking out for any unanticipated changes as soon as they occur.

The book refers to a ‘community of practice’, which suggests that there is already a community that has identified and is grappling with the problems, but needs some extra hints and tips. The approach seems down to earth and ‘pragmatic’, not challenging ideologies, cultures, or other deeply held values.

 Case Studies

These were a good range, with those where the authors had been more closely involved being the better for it. I found the one on Ludlow particularly insightful, chiming with my own experiences. I am tempted to blog separately on the ‘fuel protests in the UK in 2000’, as I was engaged with some of the team involved at the time, on related issues. But some of the issues raised here seem quite generally important.

Interesting points

  • Carl Sagan is cited to the effect that the left brain deals with detail, the right with context – the ‘bigger picture’. In my opinion many organisations focus too readily on the short term, to the exclusion of the long term, and if they do focus on the long term they tend to do it ‘by the clock’ with no sense of ‘as required’. Balancing long-term and short-term needs can be the most challenging aspect of interventions.
  • ECCS 09 is made much of. I can vouch for the insightful nature of the practitioners’ workshop that the authors led.
  • I have worked with Patrick, so had prior sight of some of the illustrations. The account is recognizable, but all the better for the insights of ECCS 09 and – possibly – not having to fit with the prejudices of some unsympathetic stakeholders. In a sense, this is the book that we have been lacking.

Related work

Management

  • Leadership agility: A business imperative for a VUCA world.
    Takes a similar view about complexity and how to work with it.
  • The Cynefin Framework.
    Positions complexity between complicated (familiar management techniques work) and chaos (act first). Advocates ‘probe-sense-respond’, which reflects some of the same views as ‘Complexity Demystified’. (The authors have discussed the issues.)

Conclusions

The book considers all types of complexity, revealing that what is required is a more thoughtful approach to pragmatism than is the norm for familiar situations, together with a range of thought-provoking tools, the practical expediency of some of which I can vouch for. As such it provides 259 pages of good guidance. If it also came to be a common source across many practitioner domains then it could also facilitate cross-domain discussions on complex topics, something that I feel would be most useful. (Currently some excellent practice is being obscured by the use of ‘silo’ languages and tools, inhibiting collaboration and cross-cultural learning.)

The book seems to me to be strongest in giving guidance to practitioners who are taking, or are constrained to take, a phenomenological approach: seeking to make sense of situations before reacting. This type of approach has been the focus of western academic research and much practice for the last few decades, and in some quarters the notion that one might act without being able to justify one’s actions would be anathema. The book gives some new tools which it is hoped will be useful in justifying action, but I have a concern that some situations will still be novel and that, to be effective, practitioners may still need to act outside the currently accepted concepts, whatever they are. I would have liked to see the book be more explicit about its scope since:

  • Some practitioners can actually cope quite well with such supposedly chaotic situations. Currently, observers tend not to appreciate the extreme complexity of others’ situations, and so under-value their achievements. This is unfortunate, as, for example:
    • Bleeding edge practitioners might find themselves stymied by managers and other stakeholders who have too limited a concept of ‘accountability’.
    • Many others could learn from such practitioners, or employ their insights.
  • Without an appreciation of the complexity/chaos boundary, practitioners may take on tasks that are too difficult for them or the tools at their disposal, or where they may lose stakeholder engagement through having different notions of what is ‘appropriately pragmatic’.
  • An organisation that had some appreciation of the boundary could facilitate mentoring etc.
  • We could start to identify and develop tools with a broader applicability.

In fact, some of the passages in the book would, I believe, be helpful even in the ‘chaos’ situation. If we had a clearer ‘map’ the guidance on relatively straightforward complexity could be simplified and the key material for that complexity which threatens chaos could be made more of. My attempt at drawing such a distinction is at https://djmarsay.wordpress.com/notes/about-these-posts/work-in-progress/complexity/ .

In practice, novelty is more often found in long-term factors, not least because if we do not prepare for novelty sufficiently in advance, we will be unable to react effectively. While I would never wish to advocate too clean a separation between practice and policy, or between short and long-term considerations, we can perhaps take a leaf out of the book and venture some guidance, not to be taken too rigidly. If conventional pragmatism is appropriate at the immediate ‘coal face’ in the short run, then this book is a guide for those practitioners who are taking a step back and considering complex medium-term issues. It would also usefully inform policy makers considering the long run, but it does not directly address the full complexities which they face, which are often inherently mysterious when seen from a narrow phenomenological stance. It does not provide guidance tailored for policy makers, nor does it give practitioners a view of policy issues. But it could provide a much-needed contribution towards spanning what can be a difficult practice / policy divide.

Addendum

One of the authors has developed eleven ‘Principles of Practice’. These reflect the view that, in practice, the most significant ‘unintended consequences’ could have been avoided. I think there is a lot of ‘truth’ in this. But it seems to me that however ‘complexity worthy’ one is, and however much one thinks one has followed ‘best practice’ – including that covered by this book – there are always going to be ‘unintended consequences’. It is just that one can anticipate that they will be less serious, and not as serious as the original problem one was trying to solve.

See Also

Some mathematics of complexity, Reasoning in a complex dynamic world

Dave Marsay