Uncertainty is not just probability

My paper, based on the discussion paper referred to in a previous post, has just been published. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

Dave Marsay

Are financiers really stupid?

The New Scientist (30 March 2013) has the following question, under the heading ‘Stupid is as stupid does’:

Jack is looking at Anne but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?

Possible answers are: “yes”, “no” or “cannot be determined”.

You might want to think about this before scrolling down.

.

.

.

.

.

.

.

It is claimed that while ‘the vast majority’ (presumably including financiers, whose thinking is being criticised) think the answer is “cannot be determined”,

careful deduction shows that the answer is “yes”.

Similar views are expressed at a learning blog and at a Physics blog, although the ‘careful deductions’ are not given. Would you like to think again?

.

.

.

.

.

.

.

.

Now I have a confession to make. My first impression was that the closest of the admissible answers is ‘cannot be determined’, and, having thought carefully for a while, I have not changed my mind. Am I stupid? (Based on this evidence!) You might like to think about this before scrolling down.

.

.

.

.

.

.

.

Some people object that the term ‘is married’ may not be well-defined, but that is not my concern. Suppose that one has a definition of marriage that is as complete and precise as possible. What is the correct answer? Does that change your thinking?

.

.

.

.

.

.

.

Okay, here are some candidate answers that I would prefer, if allowed:

  1. There are cases in which the answer cannot be determined.
  2. It is not possible to prove that there are not cases in which the answer cannot be determined. (So that the answer could actually be “yes”, but we cannot know that it is “yes”.)

Either way, it cannot be proved that there is a complete and precise way of determining the answer, but for different reasons. I lean towards the first answer, but am not sure. Which of the two holds is not a logical or mathematical question, but a question about ‘reality’, so one should ask a Physicist. My reasoning follows … .

.

.

.

.

.

.

.

.

Suppose that Anne marries Henry who dies while out in space, with a high relative velocity and acceleration. Then to answer yes we must at least be able to determine a unique time in Anne’s time-frame in which Henry dies, or else (it seems to me) there will be a period of time in which Anne’s status is indeterminate. It is not just that we do not know what Anne’s status is; she has no ‘objective’ status.

If there is some experiment which really proves that there is no possible ‘objective’ time (and I am not sure that there is) then am I not right? Even if there is no such experiment, one cannot determine the truth of physical theories, only fail to disprove them. So either way, am I not right?

Enlightenment, please. The link to finance is that the New Scientist article says that

Employees leaving logic at the office door helped cause the financial crisis.

I agree, but it seems to me (after Keynes) that it was their use of the kind of ‘classical’ logic that is implicitly assumed in the article that is at fault. Being married is a relation, not a proposition about Anne. Anne has no state or attributes from which her marital status can be determined, any more than terms such as crash, recession, money supply, inflation, inequality, value or ‘the will of the people’ have any correspondence in real economies.  Unless you know different?
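The ‘careful deduction’ that the article has in mind presumably runs on classical bivalence: Anne is either married or unmarried, and in either case some married person is looking at an unmarried one. A toy enumeration (my own sketch, not from the article) makes the case-split explicit, and shows how it fails once a third, indeterminate, status is admitted:

```python
# Toy enumeration of the Jack/Anne/George puzzle (illustrative sketch).
# Jack is married, George is not; Anne's status is the unknown.
# The looking relation is: Jack -> Anne, Anne -> George.

def married_looks_at_unmarried(anne_status):
    """True if some married person looks at an unmarried one."""
    people = {"Jack": "married", "Anne": anne_status, "George": "unmarried"}
    looks = [("Jack", "Anne"), ("Anne", "George")]
    return any(people[a] == "married" and people[b] == "unmarried"
               for a, b in looks)

# Classical (two-valued) logic: Anne is either married or unmarried.
classical = {married_looks_at_unmarried(s) for s in ("married", "unmarried")}
print(classical)  # {True}: the answer is "yes" in both cases.

# Admit a third status, as in the relativistic example above:
three_valued = {married_looks_at_unmarried(s)
                for s in ("married", "unmarried", "indeterminate")}
print(three_valued)  # {True, False}: now it cannot be determined.
```

The classical answer depends entirely on ‘married’ being a two-valued attribute of Anne; drop that assumption, and ‘cannot be determined’ is back on the table.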

Dave Marsay

Mathematics, psychology, decisions

I attended a conference on the mathematics of finance last week. It seems that things would have gone better in 2007/8 if only policy makers had employed some mathematicians to critique the then dominant dogmas. But I am not so sure. I think one would need to understand why people went along with the dogmas. Psychology, such as behavioural economics, doesn’t seem to help much: although it challenges some aspects of the dogmas, it fails to challenge (and perhaps even promotes) other aspects, so it is not at all clear how it could have helped.

Here I speculate on an answer.

Finance and economics are either empirical subjects or they are quasi-religious, based on dogmas. The problems seem to arise when they are the latter but we mistake them for the former. If they are empirical then they have models whose justification is based on evidence.

Naïve inductivism boils down to the view that whatever has always (never) been the case will continue always (never) to be the case. Logically it is untenable, because one often gets clashes, where two different applications of naïve induction are incompatible. But pragmatically, it is attractive.

According to naïve inductivism we might suppose that if the evidence has always fitted the models, then actions based on the supposition that they will continue to do so will be justified. (Hence, ‘it is rational to act as if the model is true’). But for something as complex as an economy the models are necessarily incomplete, so that one can only say that the evidence fitted the models within the context as it was at the time. Thus all that naïve inductivism could tell you is that ‘it is rational’ to act as if the model is true, unless and until the context should change. But many of the papers at the mathematics of finance conference were pointing out specific cases in which the actions ‘obviously’ changed the context, so that naïve inductivism should not have been applied.

It seems to me that one could take a number of attitudes:

  1. It is always rational to act on naïve inductivism.
  2. It is always rational to act on naïve inductivism, unless there is some clear reason why not.
  3. It is always rational to act on naïve inductivism, as long as one has made a reasonable effort to rule out any contra-indications (e.g., by considering ‘the whole’).
  4. It is only reasonable to act on naïve inductivism when one has ruled out any possible changes to the context, particularly reactions to our actions, by considering an adequate experience base.

In addition, one might regard the models as conditionally valid, and hedge accordingly. (‘Unless and until there is a reaction’.) Current psychology seems to suppose (1) and hence has little to help us understand why people tend to lean too strongly on naïve inductivism. It may be that a belief in (1) is not really psychological, but simply a consequence of education (i.e., cultural).
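As an illustration of attitude (1) and its limits, here is a minimal sketch (my own, with made-up numbers): a model whose evidence ‘always fitted’ in one regime, applied naïvely after the context changes:

```python
import random
import statistics

random.seed(1)
# Observations from a stable regime ...
before = [random.gauss(0.0, 1.0) for _ in range(500)]
# ... then a structural break (a context change) shifts the mean.
after = [random.gauss(5.0, 1.0) for _ in range(500)]

model_mean = statistics.mean(before)   # the 'well-evidenced' model

def fits(x, tol=4.0):
    """Naive check: is the observation within tol of the modelled mean?"""
    return abs(x - model_mean) < tol

fit_before = sum(fits(x) for x in before) / len(before)
fit_after = sum(fits(x) for x in after) / len(after)
print(fit_before, fit_after)  # near-perfect fit before; poor fit after
```

The model is ‘rational to act on’ right up until the break, which is exactly the point: the evidence only ever supported it within the old context.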

See Also

Russell’s Human Knowledge. My media for the conference.

Dave Marsay

Haldane’s The dog and the Frisbee

Andrew Haldane The dog and the Frisbee

Haldane argues in favour of simplified regulation. I find the conclusions reasonable, but have some quibbles about the details of the argument. My own view is that much of our financial problems have been due – at least in part – to a misrepresentation of the associated mathematics, and so I am keen to ensure that we avoid similar misunderstandings in the future. I see this as a primary responsibility of ‘regulators’, viewed in the round.

The paper starts with a variation of Ashby’s ball-catching observation, involving a dog and a Frisbee instead of a man and a ball: you don’t need to estimate the position of the Frisbee or be an expert in aerodynamics: a simple, natural heuristic will do. He applies this analogy to financial regulation, but it is somewhat flawed. When catching a Frisbee one relies on the Frisbee behaving normally, but in financial regulation one is concerned with what had seemed to be abnormal, such as the crisis period of 2007/8.

It is noted of Game theory that

John von Neumann and Oskar Morgenstern established that optimal decision-making involved probabilistically-weighting all possible future outcomes.

In apparent contrast

Many of the dominant figures in 20th century economics – from Keynes to Hayek, from Simon to Friedman – placed imperfections in information and knowledge centre-stage. Uncertainty was for them the normal state of decision-making affairs.

“It is not what we know, but what we do not know which we must always address, to avoid major failures, catastrophes and panics.”

The Game Theory thinking is characterised as ignoring the possibility of uncertainty, which – from a mathematical point of view – seems an absurd misreading. Theories can only ever have conditional conclusions: any unconditional misinterpretation goes beyond the proper bounds. The paper – rightly – rejects the conclusions of two-player zero-sum static game theory. But its critique of such a theory is much less thorough than von Neumann and Morgenstern’s own (e.g. their 4.3.3) and fails to identify which conditions are violated by economics. More worryingly, it seems to invite the reader to accept them, as here:

The choice of optimal decision-making strategy depends importantly on the degree of uncertainty about the environment – in statistical terms, model uncertainty. A key factor determining that uncertainty is the length of the sample over which the model is estimated. Other things equal, the smaller the sample, the greater the model uncertainty and the better the performance of simple, heuristic strategies.

This seems to suggest that – contra game theory – we could ‘in principle’ establish a sound model, if only we had enough data. Yet:

Einstein wrote that: “The problems that exist in the world today cannot be solved by the level of thinking that created them”.

There seems a non-sequitur here: if new thinking is repeatedly being applied then surely the nature of the system will continually be changing? Or is it proposed that the ‘new thinking’ will yield a final solution, eliminating uncertainty? If it is the case that ‘new thinking’ is repeatedly being applied then the regularity conditions of basic game theory (e.g. at 4.6.3 and 11.1.1) are not met (as discussed at 2.2.3). It is certainly not an unconditional conclusion that the methods of game theory apply to economies beyond the short-run, and experience would seem to show that such an assumption would be false.

The paper recommends the use of heuristics, by which it presumably means what Gigerenzer means: methods that ignore some of the data. Thus, for example, all formal methods are heuristics since they ignore intuition. But a dog catching a Frisbee only has its own experience, which it is using, and so presumably – by this definition – is not actually using a heuristic either. In 2006 most financial and economics methods were heuristics in the sense that they ignored the lessons identified by von Neumann and Morgenstern. Gigerenzer’s definition seems hardly helpful. The dictionary definition relates to learning on one’s own, ignoring others. The economic problem, it seems to me, was of paying too much attention to the wrong people, and too little to those such as von Neumann and Morgenstern – and Keynes.

The implication of the paper and Gigerenzer is, I think, that a heuristic is a set method that is used, rather than solving a problem from first principles. This is clearly a good idea, provided that the method incorporates a check that whatever principles it relies upon do in fact hold in the case at hand. (This is what economists have often neglected to do.) If set methods are used as meta-heuristics to identify the appropriate heuristics for particular cases, then one has something like recognition-primed decision-making. It could be argued that the financial community had such meta-heuristics, which led to the crash: the adoption of heuristics as such seems not to be a solution. Instead one needs to appreciate what kinds of heuristic are appropriate when. Game theory shows us that the probabilistic heuristics are ill-founded when there is significant innovation, as there was before, during and immediately after 2007/8. In so far as economics and finance are games, some events are game-changers. The problem is not the proper application of mathematical game theory, but the ‘pragmatic’ application of a simplistic version: playing the game as it appears to be unless and until it changes. An unstated possible deduction from the paper is surely that such ‘pragmatic’ approaches are inadequate. For mutable games, strategy needs to take place at a higher level than it does for fixed games: it is not just that different strategies are required, but that ‘strategy’ has a different meaning: it should at least recognize the possibility of a change to a seemingly established status quo.

If we take an analogy with a dog and a Frisbee, and consider Frisbee catching to be a statistically regular problem, then the conditions of simple game theory may be met, and it is also possible to establish statistically that a heuristic (method) is adequate. But if there is innovation in the situation then we cannot rely on any simplistic theory or on any learnt methods. Instead we need a more principled approach, such as that of Keynes or Ashby, considering the conditionality and looking out for potential game-changers. The key is not just simpler regulation, but regulation that is less reliant on conditions that we expect to hold but which, on maturer reflection, are not totally reliable. In practice this may necessitate a mature on-going debate to adjust the regime to potential game-changers as they emerge.

See Also

Ariel Rubinstein opines that:

classical game theory deals with situations where people are fully rational.

Yet von Neumann and Morgenstern (4.1.2) note that:

the rules of rational behaviour must provide definitely for the possibility of irrational conduct on the part of others.

Indeed, in a paradigmatic zero-sum two-person game, if the other person plays rationally (according to game theory) then your expected return is the same irrespective of how you play. Thus it is of the essence that you consider potential non-rational plays. I take it, then, that game theory as reflected in economics is a very simplified – indeed an over-simplified – version. It is presumably this distorted version that Haldane’s criticisms properly apply to.

Dave Marsay

Haldane’s Tails of the Unexpected

A. Haldane, B. Nelson Tails of the unexpected,  The Credit Crisis Five Years On: Unpacking the Crisis conference, University of Edinburgh Business School, 8-9 June 2012

The credit crisis is blamed on a simplistic belief in ‘the Normal Distribution’ and its ‘thin tails’, understating risk. Complexity and chaos theories point to greater risks, as does the work of Taleb.

Modern weather forecasting is pointed to as good relevant practice, where one can spot trouble brewing. Robust and resilient regulatory mechanisms need to be employed. It is no good relying on statistics like VaR (Value at Risk) that assume a normal distribution. The Bank of England is developing an approach based on these ideas.
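The point about VaR can be illustrated with a hedged sketch (hypothetical returns, not real data): fit a normal distribution to a fat-tailed return series and the 99% VaR it reports understates the loss quantile actually present in the data:

```python
import random
import statistics
from statistics import NormalDist

random.seed(0)

def student_t(df):
    """Draw from a Student-t: normal over sqrt(chi-square / df)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / (chi2 / df) ** 0.5

# Hypothetical daily returns with fat tails (t with 5 degrees of freedom).
returns = [student_t(5) for _ in range(100_000)]

# 99% VaR as a normal-theory modeller would compute it, fitting
# mean and standard deviation to the very same data:
mu = statistics.mean(returns)
sigma = statistics.stdev(returns)
normal_var = -NormalDist(mu, sigma).inv_cdf(0.01)

# Empirical 99% VaR: the actual 1st-percentile loss in the data.
empirical_var = -sorted(returns)[int(0.01 * len(returns))]

print(f"normal VaR: {normal_var:.2f}, empirical VaR: {empirical_var:.2f}")
```

The normal fit matches the centre of the distribution well, yet its tail quantile falls short of the empirical one: the thin-tail assumption, not the fitting, is what understates the risk.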

Comment

Risk arises when the statistical distribution of the future can be calculated or is known. Uncertainty arises when this distribution is incalculable, perhaps unknown.

While the paper acknowledges Keynes’ economics and Knightian uncertainty, it overlooks Keynes’ Treatise on Probability, which underpins his economics.

Much of modern econometric theory is … underpinned by the assumption of randomness in variables and estimated error terms.

Keynes was critical of this assumption, and of this model:

Economics … shift[ed] from models of Classical determinism to statistical laws. … Evgeny Slutsky (1927) and Ragnar Frisch (1933) … divided the dynamics of the economy into two elements: an irregular random element or impulse and a regular systematic element or propagation mechanism. This impulse/propagation paradigm remains the centrepiece of macro-economics to this day.

Keynes pointed out that such assumptions could only be validated empirically and (as the current paper also does) in the Treatise he cited Lexis’s falsification.

The paper cites a game of paper/scissors/stone which Sotheby’s thought was a simple game of chance but which Christie’s saw as an opportunity for strategizing – and won millions of dollars. Apparently Christie’s consulted some 11 year old girls, but they might equally well have been familiar with Shannon‘s machine for defeating strategy-impaired humans. With this in mind, it is not clear why the paper characterises uncertainty as merely being about unknown probability distributions, as distinct from Keynes’ more radical position, that there is no such distribution.

The paper is critical of nerds, who apparently ‘like to show off’. But to me the problem is not the show-offs, but those who don’t know as much as they think they know. They pay too little attention to the theory, not too much. The girls and Shannon seem okay to me: it is those nerds who see everything as the product of randomness or a game of chance who are the problem.
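A Shannon-style strategizer can be sketched in a few lines (my own toy, not Shannon’s actual machine, which tracked patterns of play rather than raw frequencies): it counts the opponent’s moves and counters the most common one, beating a habit-bound opponent while gaining nothing against pure chance:

```python
import random

random.seed(2)
MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def counter(move):
    """The move that beats `move`."""
    return next(m for m in MOVES if BEATS[m] == move)

def play(opponent, rounds=10_000):
    """Frequency-counting player vs an opponent strategy; net wins."""
    counts = {m: 1 for m in MOVES}  # Laplace-smoothed move history
    score = 0
    for _ in range(rounds):
        # Predict the opponent's most common move so far and counter it.
        predicted = max(counts, key=counts.get)
        ours, theirs = counter(predicted), opponent()
        if BEATS[ours] == theirs:
            score += 1
        elif BEATS[theirs] == ours:
            score -= 1
        counts[theirs] += 1
    return score

def biased():
    """A habit-bound opponent who favours rock."""
    return random.choices(MOVES, weights=[5, 3, 2])[0]

def uniform():
    """An opponent playing the minimax mixed strategy: pure chance."""
    return random.choice(MOVES)

print(play(biased), play(uniform))
```

Against the biased opponent the net score is large and positive; against the uniform one it hovers around zero, which is the game-theoretic point: the value of strategizing lies entirely in the opponent’s departures from the minimax mixed strategy.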

If we compare the Slutsky-Frisch model with Kuhn’s description of the development of science, then economics is assumed to develop in much the same way as normal science, but without ever undergoing anything like a (systemic) paradigm shift. Thus, while the model may be correct most of the time, violations, such as in 2007/8, matter.

Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.

One can understand this reasoning by analogy with science: the more dominant a school which protects its core myths, the greater the reaction and impact when the myths are exposed. But in finance it may not be just ‘risk control’ that causes a problem. Any optimisation that is blind to the possibility of systemic change may tend to increase the chance of change (for good or ill) [E.g. Bohr, Atomic Physics and Human Knowledge, Ox Bow Press 1958].

See Also

Previous posts on articles by or about Haldane, along similar lines:

My notes on:

Dave Marsay

Avoiding ‘Black Swans’

A UK Blackett Review has reviewed some approaches to uncertainty relevant to the question “How can we ensure that we minimise strategic surprises from high impact low probability risks”. I have already reviewed the report in its own terms. Here I consider the question.

  • One person’s surprise may be the result of another person’s innovation, so we need to consider the up-sides and down-sides together.
  • In this context ‘low probability’ is subjective. Things are only surprising if we didn’t expect them, so the reference to low probability is superfluous.
  • Similarly, strategic surprise necessarily relates to things that – if only in anticipation – have high impact.
  • Given that we are concerned with areas of innovation and high uncertainty, the term ‘minimise’ is overly ambitious. Reducing would be good. Thinking that we have minimised would be bad.

The question might be simplified to two parts:

  1. “How can we ensure that we strategize?”
  2. “How can we strategize?”

These questions clearly have very important related considerations, such as:

  • What in our culture inhibits strategizing?
  • Who can we look to for exemplars?
  • How can we convince stakeholders of the implications of not strategizing?
  • What else will we need to do?
  • Who might we co-opt or collaborate with?

But here I focus on the more widely-applicable aspects. On the first question the key point seems to be that, where the Blackett review points out the limitations of a simplistic view of probability, there are many related misconceptions and misguided practices that blind us to the possibility, or the benefits, of strategizing. In effect, as in economics, we have got ourselves locked into ‘no-strategy strategies’, where we believe that a short-term adaptive approach, with no broader or long-term view, is the best, and that more strategic approaches are a snare and a delusion. Thus the default answer to the original question seems to be ‘you don’t – you just live with the consequences’. In some cases this might be right, but I do not think that we should take it for granted. This leads on to the second part.

We at least need ‘eyes open minds open’, to be considering potential surprises, and keeping score. If (for example, as in International Relations) it seems that none of our friends do better than chance, we should consider cultivating some more. But the scoring and rewarding is an important issue. We need to be sure that our mechanisms aren’t recognizing short-term performance at the expense of long-run sustainability. We need informed views about what ‘doing well’ would look like and what are the most challenging issues, and to seek to learn and engage with those who are doing well. We then need to engage in challenging issues ourselves, if only to develop and then maintain our understanding and capability.

If we take the financial sector as an example, there used to be a view that regulation was not needed. There are two more moderate views:

  1. That the introduction of rules would distort and destabilise the system.
  2. That although the system is not inherently stable, the government is not competent to regulate, and no regulation is better than bad regulation.

My view is that what is commonly meant by ‘regulation’ is very tactical, whereas the problems are strategic. We do not need a ‘strategy for regulation’: we need strategic regulation. One of the dogmas of capitalism is that it involves ‘free markets’ in which information plays a key role. But in the noughties the markets were clearly not free in this sense. A potential role for a regulator, therefore, would be to perform appropriate ‘horizon scanning’ and to inject appropriate information to ‘nudge’ the system back into sustainability. Some voters would be suspicious of a government that attempts to strategize, but perhaps this form of regulation could be seen as simply better-informed muddling, particularly if there were strong disincentives to take unduly bold action.

But finance does not exist separate from other issues. A UK ‘regulator’ would need to be a virtual beast spanning the departments, working within the confines of regular general elections, and being careful not to awaken memories of Cromwell.

This may seem terribly ambitious, but maybe we could start with reformed concepts of probability, performance, etc. 

Comments?

See also

JS Mill’s views

Other debates, my bibliography.  

Dave Marsay

The money forecast

A review of ‘The money forecast’, A. Haldane, New Scientist, 10 Dec. 2011. The on-line version is ‘To navigate economic storms we need better forecasting’.

Summary

Andrew Haldane, ‘Andy’, is one of the more insightful and – hopefully – influential members of the UK economic community, recognising that new ways of thinking are needed and taking a lead in their development.

He refers to a previous article ‘Revealed – the Capitalist network that runs the world’, which inspires him to attempt to map the world of finance.

“… Making sense of the financial system is more an act of archaeology than futurology.”

Of the pre-crisis approach it says:

“… The mistake came in thinking the behaviour of the system was just an aggregated version of the behaviour of the individual. …

    Interactions between agents are what matters. And the key to that is to explore the underlying architecture of the network, not the behaviour of any one node. To make an analogy, you cannot understand the brain by focusing on a neuron – and then simply multiplying by 100 billion. …

… When parts started to malfunction … no one had much idea what critical faculties would be impaired.

    That uncertainty, coupled with dense financial wiring, turned small failures into systemic collapse. …

    Those experiences are now seared onto the conscience of regulators. Systemic risk has entered their lexicon, and to understand that risk, they readily acknowledge the need to join the dots across the network. So far, so good. Still lacking are the data and models necessary to turn this good intent into action.

… Other disciplines have cut a dash in their complex network mapping over the past generation, assisted by increases in data-capture and modelling capability made possible by technology. One such is weather forecasting … .

   Success stories can also be told about utility grids and transport networks, the web, social networks, global supply chains and perhaps the most complex web of all, the brain.

    …  imagine the scene a generation hence. There is a single nerve centre for global finance. Inside, a map of financial flows is being drawn in real time. The world’s regulatory forecasters sit monitoring the financial world, perhaps even broadcasting it to the world’s media.

    National regulators may only be interested in a quite narrow subset of the data for the institutions for which they have responsibility. These data could be part of, or distinct from, the global architecture.

    … it would enable “what-if?” simulations to be run – if UK bank Northern Rock is the first domino, what will be the next?”

Comments

I am unconvinced that archaeology, weather forecasting or the other examples are really as complex as economic forecasting, which can be reflexive: if all the media forecast a crash there probably will be one, irrespective of the ‘objective’ financial and economic conditions. Similarly, prior to the crisis most people seemed to believe in ‘the great moderation’, and the good times rolled on, seemingly.

Prior to the crisis I was aware that a minority of British economists were concerned about the resilience of the global financial system and that the ‘great moderation’ was a cross between a house of cards and a pyramid selling scheme. In their view, a global financial crisis precipitated by a US crisis was the greatest threat to our security. In so far as I could understand their concerns, Keynes’ mathematical work on uncertainty together with his later work on economics seemed to be key.

Events in 2007 were worrying. I was advised that the Chinese were thinking more sensibly about these issues, and I took the opportunity to visit China in Easter 2008, hosted by the Chinese Young Persons Tourist Group, presumably not noted for their financial and economic acumen. It was very apparent from a coach ride from Beijing to the Great Wall that their program of building new towns and moving peasants in was on hold. The reason given by the Tour Guide was that the US financial system was expected to crash after their Olympics, leading to a slow-down in their economic growth, which needed to be above 8% or else they faced civil unrest. Once tipped off, similar measures to mitigate a crisis were apparent almost everywhere. I also talked to a financier, and had some great discussions about Keynes and his colleagues, and the implications for the crash. In the event the crisis seems to have been triggered by other causes, but Keynes’ conceptual framework still seemed relevant.

The above only went to reinforce my prejudice:

  • Not only is uncertainty important, but one needs to understand its ramifications at least as well as Keynes did (e.g. in his Treatise and ‘Economic Consequences of the Peace’).
  • Building on this, concepts such as risk need to be understood to their fullest extent, not reduced to numbers.
  • The quotes above are indicative of the need for a holistic approach. Whatever variety one prefers, I do think that this cannot be avoided.
  • The quote about national regulators only having a narrow interest seems remarkably reductionist. I would think that they would all need a broad interest and to be exchanging data and views, albeit they may only have narrow responsibilities. Financial storms can spread around the world quicker than meteorological ones.
  • The – perhaps implicit – notion of only monitoring financial ‘flows’ seems ludicrous. I knew that the US was bound to fail eventually, but it was only by observing changes in migration that I realised it was imminent. Actually, I might have drawn the same conclusion from observing changes in financial regulation in China, but that still was not a ‘financial flow’. I did previously draw similar conclusions talking to people who were speculating on ‘buy to let’, thinking it a sure-thing.
  • Interactions between agents and architectures are important, but if Keynes was right then what really matters are changes to ‘the rules of the games’. The end of the Olympics was not just a change in ‘flows’ but a potential game-changer.
  • Often it is difficult to predict what will trigger a crisis, but one can observe when the situation is ripe for one. To draw an analogy with forest fires, one can’t predict when someone will drop a bottle or a lit cigarette, but one can observe when the tinder has built up and is dry.
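The “what-if?” simulation that the article imagines can be sketched as a toy contagion model (all names, exposures and buffers here are illustrative, not data): failures propagate along the network wiring, and which dominoes fall depends on the architecture, not on any one node:

```python
# Toy "what-if?" contagion simulation on a hypothetical interbank network.
# Each bank has a capital buffer; when a counterparty fails, the exposure
# to it is written off, and a bank whose total losses exceed its buffer
# fails in turn.

exposures = {  # lender -> {borrower: amount}; illustrative numbers only
    "NorthernRock": {},
    "BankA": {"NorthernRock": 8, "BankB": 2},
    "BankB": {"BankA": 3},
    "BankC": {"BankB": 6, "BankA": 1},
}
buffers = {"NorthernRock": 1, "BankA": 5, "BankB": 2, "BankC": 4}

def cascade(first_failure):
    """Propagate failures until no further bank's losses exceed its buffer."""
    failed = {first_failure}
    losses = {bank: 0.0 for bank in buffers}
    frontier = [first_failure]
    while frontier:
        fallen = frontier.pop()
        for lender, book in exposures.items():
            if lender not in failed and fallen in book:
                losses[lender] += book[fallen]
                if losses[lender] > buffers[lender]:
                    failed.add(lender)
                    frontier.append(lender)
    return failed

print(cascade("NorthernRock"))
```

In this toy network cascade("NorthernRock") brings down every bank, while cascade("BankB") stops after two: the same mechanics, but a different pattern of exposures, which is precisely why the architecture of the wiring, and not any single node, is what a regulator would need to map.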

It thus seems to me that while Andy Haldane is insightful, the actual article is not that enlightening, and invites a much too prosaic view of forecasting. Even if we think that Keynes was wrong I am fairly sure that we need to develop language and concepts in which we can have a discussion of the issues, even if only ‘Knightian uncertainty’. The big problem that I had prior to the crisis was the lack of a possibility of such a discussion. If we are to learn anything from the crisis it is surely that such discussions are essential. The article could be a good start.

See Also

The short long. On the trend to short-termism.

Control rights (and wrongs). On the imbalance between incentives and risks in banking.

Risk Off. A behaviourist’s view of risk. It notes that prior to the crash ‘risk was under-priced’.

Dave Marsay