Uncertainty is not just probability

My paper, based on the discussion paper referred to in a previous post, has just been published. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

Dave Marsay

Evolution of Pragmatism?

A common ‘pragmatic’ approach is to keep doing what you normally do until you hit a snag, and (only) then to reconsider. Whereas Lamarckian evolution would lead to the ‘survival of the fittest’, with everyone adapting to the current niche, tending to yield a homogeneous population, Darwinian evolution has survival of the maximal variety of all those who can survive, with characteristics only dying out when they are not viable. This evolution of diversity makes for greater resilience, which is maybe why ‘pragmatic’ Darwinian evolution has evolved.

The products of evolution are generally also pragmatic, in that they have virtually pre-programmed behaviours which ‘unfold’ in the environment. Plants grow and procreate, while animals have a richer variety of behaviours, but still tend just to do what they do. But humans can ‘think for themselves’ and be ‘creative’, and so have the possibility of not being just pragmatic.

I was at a (very good) lecture by Alice Roberts last night on the evolution of technology. She noted that many creatures use tools, but humans seem to be unique in that at some critical population mass the manufacture and use of tools becomes sustained through teaching, copying and co-operation. It occurred to me that much of this could be pragmatic. After all, until recently development has been very slow, and so may well have been driven by specific practical problems rather than continual searching for improvements. Also, the more recent upswing of innovation seems to have been associated with an increased mixing of cultures and decreased intolerance for people who think for themselves.

In biological evolution mutations can lead to innovation, so evolution is not entirely pragmatic, but their impact is normally limited by the need to fit the current niche, so evolution typically appears to be pragmatic. The role of mutations is more to increase the diversity of behaviours within the niche, rather than innovation as such.

In social evolution there will probably always have been mavericks and misfits, but the social pressure has been towards conformity. I conjecture that such an environment has favoured a habit of pragmatism. These days, it seems to me, a better approach would be more open-minded, inclusive and exploratory, but possibly we do have a biologically-conditioned tendency to be overly pragmatic: to confuse conventions for facts and heuristics for laws of nature, and not to challenge widely-held beliefs.

The financial crash of 2008 was blamed by some on mathematics. This seems ridiculous. But the post Cold War world was largely one of growth with the threat of nuclear devastation much diminished, so it might be expected that pragmatism would be favoured. Thus powerful tools (mathematical or otherwise) could be taken up and exploited pragmatically, without enough consideration of the potential dangers. It seems to me that this problem is much broader than economics, but I wonder what the cure is, apart from better education and more enlightened public debate.

Dave Marsay

Traffic bunching

In heavy traffic, such as on motorways in rush-hour, there is often oscillation in speed and there can even be mysterious ‘emergent’ halts. The use of variable speed limits can result in everyone getting along a given stretch of road quicker.

Soros (worth reading) has written an article that suggests that this is all to do with the humanity and ‘thinking’ of the drivers, and that something similar is the case for economic and financial booms and busts. This might seem to indicate that ‘mathematical models’ were a part of our problems, not solutions. So I suggest the following thought experiment:

Suppose a huge number of identical driverless cars with deterministic control functions all try to go along the same road, seeking to optimise performance in terms of ‘progress’ and fuel economy. Will they necessarily succeed, or might there be some ‘tragedy of the commons’ that can only be resolved by some overall regulation? What are the critical factors? Is the nature of the ‘brains’ one of them?
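The thought experiment can even be run. Below is a minimal sketch (my own toy model, not a real controller): identical deterministic cars on a ring road following Bando’s ‘optimal velocity’ car-following rule, where each car accelerates towards a preferred speed set by its headway. The sensitivity parameter `a` stands in for the nature of the ‘brains’.

```python
import math

def simulate(a, n_cars=20, road=40.0, steps=5000, dt=0.1):
    """Identical deterministic cars on a ring road, each accelerating
    towards an 'optimal velocity' that depends only on its headway
    (Bando's car-following rule).  'a' is the response sensitivity."""
    def v_opt(gap):
        return math.tanh(gap - 2.0) + math.tanh(2.0)

    x = [i * road / n_cars for i in range(n_cars)]  # equally spaced cars
    x[0] += 0.01                                    # one tiny perturbation
    v = [v_opt(road / n_cars)] * n_cars             # uniform equilibrium speed

    for _ in range(steps):
        gaps = [(x[(i + 1) % n_cars] - x[i]) % road for i in range(n_cars)]
        v = [vi + a * (v_opt(g) - vi) * dt for vi, g in zip(v, gaps)]
        x = [(xi + vi * dt) % road for xi, vi in zip(x, v)]

    mean = sum(v) / n_cars
    return sum((vi - mean) ** 2 for vi in v) / n_cars  # variance of speeds

# Sluggish response (small a): the perturbation grows into stop-and-go waves.
# Brisk response (large a): the flow settles back to uniform speed.
```

With a sluggish response the single tiny perturbation grows into stop-and-go waves; with a brisk response the flow returns to uniform. So even identical deterministic ‘brains’ can produce emergent jams, and the control function is indeed a critical factor.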

Are these problems the preserve of psychologists, or does mathematics have anything useful to say?

Dave Marsay

Haldane’s The dog and the Frisbee

Andrew Haldane The dog and the Frisbee

Haldane argues in favour of simplified regulation. I find the conclusions reasonable, but have some quibbles about the details of the argument. My own view is that many of our financial problems have been due – at least in part – to a misrepresentation of the associated mathematics, and so I am keen to ensure that we avoid similar misunderstandings in the future. I see this as a primary responsibility of ‘regulators’, viewed in the round.

The paper starts with a variation of Ashby’s ball-catching observation, involving a dog and a Frisbee instead of a man and a ball: you don’t need to estimate the position of the Frisbee or be an expert in aerodynamics: a simple, natural heuristic will do. He applies this analogy to financial regulation, but it is somewhat flawed. When catching a Frisbee one relies on the Frisbee behaving normally, but in financial regulation one is concerned with what had seemed to be abnormal, such as the crisis period of 2007/8.

It is noted of Game theory that

John von Neumann and Oskar Morgenstern established that optimal decision-making involved probabilistically-weighting all possible future outcomes.

In apparent contrast

Many of the dominant figures in 20th century economics – from Keynes to Hayek, from Simon to Friedman – placed imperfections in information and knowledge centre-stage. Uncertainty was for them the normal state of decision-making affairs.

“It is not what we know, but what we do not know which we must always address, to avoid major failures, catastrophes and panics.”

The Game Theory thinking is characterised as ignoring the possibility of uncertainty, which – from a mathematical point of view – seems an absurd misreading. Theories can only ever have conditional conclusions: any unconditional misinterpretation goes beyond the proper bounds. The paper – rightly – rejects the conclusions of two-player zero-sum static game theory. But its critique of such a theory is much less thorough than von Neumann and Morgenstern’s own (e.g. their 4.3.3) and fails to identify which conditions are violated by economics. More worryingly, it seems to invite the reader to accept them, as here:

The choice of optimal decision-making strategy depends importantly on the degree of uncertainty about the environment – in statistical terms, model uncertainty. A key factor determining that uncertainty is the length of the sample over which the model is estimated. Other things equal, the smaller the sample, the greater the model uncertainty and the better the performance of simple, heuristic strategies.
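The statistical claim in this passage is easy to illustrate with a toy experiment (my own sketch, not from the paper): with a small noisy sample, a model flexible enough to fit every data point does worse out of sample than the crude heuristic of always predicting the training mean.

```python
import random

def experiment(n_train=12, n_test=200, noise=0.3, seed=0):
    """Simple heuristic (predict the training mean, ignoring x entirely)
    versus complex model (the polynomial that fits every training point
    exactly) on noisy data from a plain linear trend y = x + noise."""
    rng = random.Random(seed)
    xs = [i / (n_train - 1) for i in range(n_train)]
    ys = [x + rng.gauss(0, noise) for x in xs]
    mean_y = sum(ys) / n_train

    def interpolate(x):  # Lagrange form of the exact-fit polynomial
        total = 0.0
        for i in range(n_train):
            term = ys[i]
            for j in range(n_train):
                if j != i:
                    term *= (x - xs[j]) / (xs[i] - xs[j])
            total += term
        return total

    simple = complex_ = 0.0
    for _ in range(n_test):
        x = rng.random()
        y = x + rng.gauss(0, noise)        # fresh data, same process
        simple += (y - mean_y) ** 2
        complex_ += (y - interpolate(x)) ** 2
    return simple / n_test, complex_ / n_test
```

Here the ‘complex model’ overfits the noise in the small sample, so its out-of-sample error dwarfs that of the mean-predictor: exactly the sense in which model uncertainty favours simple heuristics.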

This seems to suggest that – contra game theory – we could ‘in principle’ establish a sound model, if only we had enough data. Yet:

Einstein wrote that: “The problems that exist in the world today cannot be solved by the level of thinking that created them”.

There seems a non-sequitur here: if new thinking is repeatedly being applied then surely the nature of the system will continually be changing? Or is it proposed that the ‘new thinking’ will yield a final solution, eliminating uncertainty? If it is the case that ‘new thinking’ is repeatedly being applied then the regularity conditions of basic game theory (e.g. at 4.6.3 and 11.1.1) are not met (as discussed at 2.2.3). It is certainly not an unconditional conclusion that the methods of game theory apply to economies beyond the short-run, and experience would seem to show that such an assumption would be false.

The paper recommends the use of heuristics, by which it presumably means what Gigerenzer means: methods that ignore some of the data. Thus, for example, all formal methods are heuristics, since they ignore intuition. But a dog catching a Frisbee has only its own experience, all of which it is using, and so presumably – by this definition – is not actually using a heuristic either. In 2006 most financial and economic methods were heuristics in the sense that they ignored the lessons identified by von Neumann and Morgenstern. Gigerenzer’s definition seems hardly helpful. The dictionary definition relates to learning on one’s own, ignoring others. The economic problem, it seems to me, was one of paying too much attention to the wrong people, and too little to those such as von Neumann and Morgenstern – and Keynes.

The implication of the paper and Gigerenzer is, I think, that a heuristic is a set method that is used, rather than solving a problem from first principles. This is clearly a good idea, provided that the method incorporates a check that whatever principles it relies upon do in fact hold in the case at hand. (This is what economists have often neglected to do.) If set methods are used as meta-heuristics to identify the appropriate heuristics for particular cases, then one has something like recognition-primed decision-making. It could be argued that the financial community had such meta-heuristics, which led to the crash: the adoption of heuristics as such seems not to be a solution. Instead one needs to appreciate what kinds of heuristic are appropriate when. Game theory shows us that probabilistic heuristics are ill-founded when there is significant innovation, as there was before, during and immediately after 2007/8. In so far as economics and finance are games, some events are game-changers. The problem is not the proper application of mathematical game theory, but the ‘pragmatic’ application of a simplistic version: playing the game as it appears to be unless and until it changes. An unstated possible deduction from the paper is surely that such ‘pragmatic’ approaches are inadequate. For mutable games, strategy needs to take place at a higher level than it does for fixed games: it is not just that different strategies are required, but that ‘strategy’ has a different meaning: it should at least recognize the possibility of a change to a seemingly established status quo.

If we take the analogy of a dog and a Frisbee, and consider Frisbee-catching to be a statistically regular problem, then the conditions of simple game theory may be met, and it is also possible to establish statistically that a heuristic (method) is adequate. But if there is innovation in the situation then we cannot rely on any simplistic theory or on any learnt methods. Instead we need a more principled approach, such as that of Keynes or Ashby, considering the conditionality and looking out for potential game-changers. The key is not just simpler regulation, but regulation that is less reliant on conditions that we expect to hold but which, on maturer reflection, are not totally reliable. In practice this may necessitate a mature on-going debate to adjust the regime to potential game-changers as they emerge.

See Also

Ariel Rubinstein opines that:

classical game theory deals with situations where people are fully rational.

Yet von Neumann and Morgenstern (4.1.2) note that:

the rules of rational behaviour must provide definitely for the possibility of irrational conduct on the part of others.

Indeed, in a paradigmatic zero-sum two-person game, if the other person plays rationally (according to game theory) then your expected return is the same irrespective of how you play. Thus it is of the essence that you consider potential non-rational plays. I take it, then, that game theory as reflected in economics is a very simplified – indeed an over-simplified – version. It is presumably this distorted version that Haldane’s criticisms properly apply to.
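This indifference is easy to check directly. In the sketch below, using paper/scissors/stone as the zero-sum game, the row player’s expected return against a game-theoretically rational opponent is identical for every strategy:

```python
from fractions import Fraction

# Row player's payoffs for paper/scissors/stone: +1 win, 0 draw, -1 loss.
# Rows and columns are in the order paper, scissors, stone.
payoff = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

# The opponent plays the game-theoretic equilibrium: each move with
# probability 1/3.
mix = [Fraction(1, 3)] * 3

# Expected return of each pure strategy against that mix.
expected = [sum(p * payoff[i][j] for j, p in enumerate(mix))
            for i in range(3)]
# Every entry is 0: against a rational opponent, all plays score alike.
```

Since every play breaks even against the equilibrium mix, the only way to do better is to model the opponent’s possible non-rational behaviour, which is precisely von Neumann and Morgenstern’s point.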

Dave Marsay

Haldane’s Tails of the Unexpected

A. Haldane, B. Nelson Tails of the unexpected,  The Credit Crisis Five Years On: Unpacking the Crisis conference, University of Edinburgh Business School, 8-9 June 2012

The credit crisis is blamed on a simplistic belief in ‘the Normal Distribution’ and its ‘thin tails’, understating risk. Complexity and chaos theories point to greater risks, as does the work of Taleb.

Modern weather forecasting is pointed to as good relevant practice, where one can spot trouble brewing. Robust and resilient regulatory mechanisms need to be employed. It is no good relying on statistics like VaR (Value at Risk) that assume a normal distribution. The Bank of England is developing an approach based on these ideas.

Comment

Risk arises when the statistical distribution of the future can be calculated or is known. Uncertainty arises when this distribution is incalculable, perhaps unknown.

While the paper acknowledges Keynes’ economics and Knightian uncertainty, it overlooks Keynes’ Treatise on Probability, which underpins his economics.

Much of modern econometric theory is … underpinned by the assumption of randomness in variables and estimated error terms.

Keynes was critical of this assumption, and of this model:

Economics … shift[ed] from models of Classical determinism to statistical laws. … Evgeny Slutsky (1927) and Ragnar Frisch (1933) … divided the dynamics of the economy into two elements: an irregular random element or impulse and a regular systematic element or propagation mechanism. This impulse/propagation paradigm remains the centrepiece of macro-economics to this day.

Keynes pointed out that such assumptions could only be validated empirically and (as the current paper also does) in the Treatise he cited Lexis’s falsification.
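The impulse/propagation paradigm itself is easy to exhibit. In the toy sketch below (the coefficients are my own illustrative choices, not Slutsky’s or Frisch’s), pure white-noise ‘impulses’ fed through a fixed second-order ‘propagation mechanism’ produce smooth, business-cycle-like swings:

```python
import random

def frisch_slutsky(steps=400, seed=1):
    """White-noise 'impulses' fed through a second-order 'propagation
    mechanism' (an AR(2) filter with complex roots inside the unit
    circle): the output swings in recurrent cycle-like waves even
    though the shocks are pure, patternless noise."""
    rng = random.Random(seed)
    a1, a2 = 1.6, -0.8   # illustrative damped-oscillation coefficients
    y = [0.0, 0.0]
    for _ in range(steps):
        y.append(a1 * y[-1] + a2 * y[-2] + rng.gauss(0, 1))
    return y[2:]
```

The point at issue is then Keynes’: whether the economy’s disturbances really are random in this way is an empirical assumption, not a given.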

The paper cites a game of paper/scissors/stone which Sotheby’s thought was a simple game of chance but which Christie’s saw as an opportunity for strategizing – and won millions of dollars. Apparently Christie’s consulted some 11-year-old girls, but they might equally well have been familiar with Shannon’s machine for defeating strategy-impaired humans. With this in mind, it is not clear why the paper characterises uncertainty as merely being about unknown probability distributions, as distinct from Keynes’ more radical position, that there is no such distribution.
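In the spirit of Shannon’s machine, a few lines suffice to exploit a ‘strategy-impaired’ opponent. In this sketch (the opponent’s bias towards stone and the simple counting rule are my own assumptions), the bot just counts the opponent’s past moves and plays the counter to the most frequent one:

```python
import random

def exploit(rounds=3000, seed=2):
    """A 'strategy-impaired' opponent favours stone; the bot counts the
    opponent's past moves and plays the counter to the most frequent one.
    Returns the bot's average score per round (+1 win, -1 loss)."""
    rng = random.Random(seed)
    beats = {'stone': 'paper', 'paper': 'scissors', 'scissors': 'stone'}
    counts = {'stone': 0, 'paper': 0, 'scissors': 0}
    score = 0
    for _ in range(rounds):
        move = rng.choices(['stone', 'paper', 'scissors'],
                           weights=[0.5, 0.25, 0.25])[0]
        bot = beats[max(counts, key=counts.get)]  # counter the favourite
        if bot == beats[move]:
            score += 1                            # bot wins
        elif move == beats[bot]:
            score -= 1                            # bot loses
        counts[move] += 1
    return score / rounds
```

Against an opponent who randomises properly the bot’s edge would vanish; it is only the failure to randomise that leaves anything to strategize over.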

The paper is critical of nerds, who apparently ‘like to show off’. But to me the problem is not the show-offs, but those who don’t know as much as they think they know. They pay too little attention to the theory, not too much. The girls and Shannon seem okay to me: it is those nerds who see everything as the product of randomness or a game of chance who are the problem.

If we compare the Slutsky-Frisch model with Kuhn’s description of the development of science, then economics is assumed to develop in much the same way as normal science, but without ever undergoing anything like a (systemic) paradigm shift. Thus, while the model may be correct most of the time, violations, such as in 2007/8, matter.

Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.

One can understand this reasoning by analogy with science: the more dominant a school which protects its core myths, the greater the reaction and impact when the myths are exposed. But in finance it may not be just ‘risk control’ that causes a problem. Any optimisation that is blind to the possibility of systemic change may tend to increase the chance of change (for good or ill) [e.g. Bohr, Atomic Physics and Human Knowledge, Ox Bow Press, 1958].

See Also

Previous posts on articles by or about Haldane, along similar lines:

My notes on:

Dave Marsay

Hercock’s Cohesion

Robert G. Hercock Cohesion: The Making of Society 2009.

Having had Robert critique some of my work, I could hardly not comment on this think-piece. It draws on modern complexity theory and a broad view of relevant historical examples and current trends to create a credible narrative. For me, his key conclusions are:

  1. “[G]iven a sufficient degree of communication … the cooperative assembly of [a cohesive society] is inevitable.”
  2. To be cohesive, a society should be “global politically federated, yet culturally diverse”.

The nature of communication envisaged seems to be indicated by:

 “From smoke signals, and the electric telegraph, through to fibre optics, and the Internet … the manifest boom in all forms of communication is bringing immense capabilities to form new social collectives and positive cultural developments.”

I ‘get’ that increasing communication will bring immense capabilities to support the cooperative assembly of a cohesive global society, but am not convinced that the effective exploitation of the capability in this way is inevitable. In chapter 6 (‘Bridges’) Robert says:

 “The truth is we now need a new shared set of beliefs. … Unfortunately, no one appears to have the faintest idea what such a common set of beliefs should look like, or where it might arise from, or who has responsibility to make it happen, or how, etc. Basically this is the challenge of the 21st century; we stand or fall on this battle for a common cultural nexus.”  

 This is closer to my own thinking.

People have different understandings of terms like ‘federated’. My preference is for subsidiarity: the idea that one has the minimum possible governance, with reliance on the minimum possible shared beliefs and common cultures. In complex situations these minimum levels are not obvious or static, so I would see an effective federation as engaging tentatively at a number of ‘levels’, ‘veering and hauling’ between them, and with strong arrangements for ‘horizon scanning’ and debate with the maximum possible diversity of views. Thus there would be not only cultural diversity but ‘viewpoint diversity within federated debate’. What is needed seems somewhat like Holism and glocalization.

Thinking of the EU, diversity of monetary policy might make the EU as an institution more cohesive while making their economies less cohesive. To put it another way, attempts to enforce cohesion at the monetary level can threaten cohesion at the political level. So it is not clear to me that one can think of a society as simply ‘being cohesive’. Rather it should be cohesive in the sense appropriate to its current situation. Cohesion should be ‘adaptive’. Leadership and vision seem to be required to achieve this: it is not automatic.

In the mid 80s many of those involved in the development of communications technologies thought that they would promote world peace, sometimes citing the kind of works that Robert does. I had and have two reservations. Firstly, the quality of communications matters. Thus [it was thought] one probably needed digital video, mobile phones and the Internet, all integrated in a way that was easy to use. [The Apple Macintosh made this credible.] Thus, if there was a clash between Soviet secret police and Jewish protestors [common at the time], the whole world could take an informed view, rather than relying on the media. [This was before the development of video-faking capabilities.] Secondly, while this would destabilize autocratic regimes, it was another issue as to what would happen next. It was generally felt that the only possible ‘properly’ stable states were democratic, but views differed on whether such states would necessarily stabilize.

Subsequent experience, such as the Arab spring, supports the view that YouTube and Facebook undermine oppressive regimes. But I remain unconvinced that ‘the cooperative assembly of [a cohesive society] is inevitable’ in Africa, the Middle East, Russia or South America, or that more communications would make it so. It certainly seems that if the process is inevitable, it can be much too slow.

My own thinking in the 80s was informed by the uncertainty and complexity theory of Keynes, Whitehead, Turing and Smuts, which predates that which Robert cites, and which informed the development of the United Nations as a part of ‘the cooperative assembly of a cohesive global society’. Robert seems to be arguing that according to modern theory such efforts were not necessary, but even so they may have been beneficial if all they did was speed the process up by a few generations. Moreover, the EU example seems to support my view that these theories are usefully more advanced than their contemporary counterparts.

The financial crash of 2008 occurred part way through the writing of the book. As with any history, explanations differ, and Robert gives a credible account in terms of modern complexity theory. But logic teaches us to be cautious about such post-hoc explanations. It seems to me that Keynes’ theory explains it adequately, and, having been developed before the event, should be given more credence.

Robert seems to regard the global crash of 2008 as a result of a loss of cohesion:

“When economies, states and societies lose their cohesion, people suffer; to be precise a lot of people end up paying the cost. In the recession of 2008/09 … “

But Keynes shows how it is cohesion (‘sticking together’) that causes global crashes. Firstly, in a non-globalized economy a crash in one part can be compensated for by the stability of another part, a bit like China saving the situation, but more so. Secondly, (to quote Patton) ‘if everyone is thinking alike then no-one is thinking’. Once group-think is established ‘expectations’ become ossified, and the market is disconnected from reality.

Robert’s notion of cohesion is “global politically federated, yet culturally diverse”. One can see how in 2008 and currently in the EU (and North Africa and elsewhere) de jure and de facto regulatory structures change, consistent with Robert’s view. But according to Keynes this is a response to an actual or potential crisis, rather than a causative factor. One can have a chain of crises in which political change leads to emergent social or economic problems, leading to political change and so on. Robert seems to suppose that this must settle down into some stable federation. If so then perhaps only the core principles will be stable, and even these might need to be continually reinterpreted and refreshed, much as I have tried to do here.

On a more conceptual note, Robert qualifies the conclusion with “The evidence from all of the fields considered in this text suggests …”. But the conclusion could only be formally sustained by an argument employing induction. Now, if improved communications is really going to change the world so much then it will undermine the basis of any induction. (In Whitehead’s terms, induction only works within an epoch, but here the epoch itself is changed.) The best one could say would be that on current trends a move towards greater cohesion appears inevitable. This is a more fundamental problem than only considering evidence from a limited range of fields. More evidence from more fields could not overcome this problem.

Dave Marsay

The End of a Physics Worldview (Kauffman)

Thought provoking, as usual. This video goes beyond his previous work, but in the same direction. His point is that it is a mistake to think of ecologies and economies as if they resembled the typical world of Physics. A previous written version is at npr, followed by a later development.

He builds on Kant’s notion of wholes, noting (as Kant did before him) that the existence of such wholes is inconsistent with classical notions of causality. He ties this in to biological examples. This complements Prigogine, who did a similar job for modern Physics.

Kauffman is critical of mathematics and ‘mathematization’, but seems unaware of the mathematics of Keynes and Whitehead. Kauffman’s view seems the same as that due to Bergson and Smuts, which in the late 1920s defined ‘modern science’. To me the problem behind the financial crash lies not in science or mathematics or even in economics, but in the brute fact that politicians and financiers were wedded to a pre-modern (pre-Kantian) view of economics and mathematics. Kauffman’s work may help enlighten them on the need, but not on the potential role for modern mathematics.

Kauffman notes that at any one time there are ‘adjacent possibles’ and that in the near future they may come to pass, and that – conceptually – one could associate a probability distribution with these possibilities. But as new possibilities come to pass, new adjacent possibilities arise. Kauffman supposes that it is not possible to know what these are, and hence one cannot have a probability distribution, so much of information theory makes no sense, and one cannot reason effectively. The challenge, then, is to discover how we do, in fact, reason.

Kauffman does not distinguish between short and long run. If we do so then we see that if we know the adjacent possible then our conventional reasoning is appropriate in the short-term, and Kauffman’s concerns are really about the long-term: beyond the point at which we can see the potential possibles that may arise. To this extent, at least, Kauffman’s post-modern vision seems little different from the modern vision of the 1920s and 30s, before it was trivialized.

Dave Marsay