How can economics be a science?

This note is prompted by Thaler’s Nobel prize, the reaction to it, and attempts by mathematicians to explain both what they actually do and what they could do. Briefly, mathematicians are increasingly employed to assist practitioners (such as financiers) to sharpen their tools and improve their results, in some pre-defined sense (such as making more profit). They are less used to sharpen core ideas, much less to challenge assumptions. This is unfortunate when tools are misused and mathematicians blamed. It is no good saying that mathematicians should not go along with such misuse, since the misuse is often not obvious without some (expensive) investigations, and in any case whistleblowers are likely to get shown the door (even if only for being inefficient).

Mainstream economics aspires to be a science in the sense of being able to make predictions, at least probabilistically. Some (mostly before 2007/8) claimed that it achieved this, because its methods were scientific. But are they? Keynes coined the term ‘pseudo-mathematical’ for the then mainstream practices, whereby mathematics was applied without due regard for the soundness of the application. Then, as now, the mathematics in itself is as much beyond doubt as anything can be. The problem is a ‘halo effect’ whereby the application is regarded as ‘true’ just because the mathematics is. It is like physics before Einstein, when some (such as Locke) thought that classical geometry must be ‘true’ as physics, largely because it was so true as mathematics and they couldn’t envisage an alternative.

From a logical perspective, all that the use of scientific methods can do is to make probabilistic predictions that are contingent on there being no fundamental change. In some domains (such as particle physics, cosmology) there have never been any fundamental changes (at least since soon after the big bang) and we may not expect any. But economics, as life more generally, seems full of changes.

Popper famously noted that proper science is in principle falsifiable. Many practitioners in science and science-like fields regard the aim of their domain as being to produce ‘scientific’ predictions. They have had to change their theories in the past, and may have to do so again. But many still suppose that there is some ultimate ‘true’ theory, to which their theories are tending. But according to Popper this is not a ‘proper’ scientific belief. Following Keynes we may call it an example of ‘pseudo-science’: something that masquerades as a science but goes beyond its bounds.

One approach to mainstream economics, then, is to disregard the pseudo-scientific ideology and just take its scientific content. Thus we may regard its predictions as mere extrapolations, and look out for circumstances in which they may not be valid. (As Eddington did for cosmology.)

Mainstream economics depends heavily on two notions:

  1. That there is some pre-ordained state space.
  2. That transitions evolve according to fixed conditional probabilities.
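Together these amount to assuming something like a stationary Markov model. As a minimal sketch of the assumed picture (my own illustration – the states and numbers are arbitrary):

    import numpy as np

    rng = np.random.default_rng(7)
    states = ("boom", "steady", "bust")            # 1. a pre-ordained state space
    P = np.array([[0.80, 0.15, 0.05],              # 2. fixed conditional probabilities
                  [0.20, 0.60, 0.20],
                  [0.30, 0.30, 0.40]])

    s, path = 1, []
    for _ in range(12):
        s = rng.choice(3, p=P[s])                  # the transition law never changes
        path.append(states[s])
    print(" -> ".join(path))

Everything such a model can ever do is fixed in advance by the state space and the matrix; ‘learning’ can only mean estimating the entries.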

For most of us, most of the time, fortunately, these seem credible locally and in the short term, but not globally in space-time. (At the time of writing it seems hard to believe that just after the big bang there were in any meaningful sense state spaces and conditional probabilities that are now being realised.) We might adjust the usual assumptions:

The ‘real’ state of nature is unknowable, but one can make reasonable observations and extrapolations that will be ‘good enough’ most of the time for most routine purposes.

This is true for hard and soft sciences, and for economics. What varies is the balance between the routine and the exceptional.

Keynes observed that some economic structures work because people expect them to. For example, gold tends to rise in price because people think of it as being relatively sound. Thus anything that has a huge effect on expectations can undermine any prior extrapolations. This might be a new product or service, an independence movement, a conflict or a cyber failing. These all have a structural impact on economies that can cascade. But will the effect dissipate as it spreads, or may it result in a noticeable shift? A mainstream economist would argue that all such impacts are probabilistic, and hence all that was happening was that we were observing new parts of the existing state space and new transitions. Even if we suppose for a moment that this is true, it is not a scientific belief, and hardly seems a useful way of thinking about potential and actual crises.

Mainstream economists suppose that people are ‘rational’, by which they mean that they act as if they are maximizing some utility, which is something to do with value and probability. But, even if the world is probabilistic, being rational is not necessarily scientific. For example, when a levee is built to withstand a ‘100 year storm’, this is scientific if it is clear that the claim is based on past storm data. But it is unscientific if there is an implicit claim that the climate cannot change. When building a levee it may be ‘rational’ to build it to withstand all but very improbable storms, but it is more sensible to add a margin and make contingency arrangements (as engineers normally do). In much of life it is common experience that the ‘scientific’ results aren’t entirely reliable, so it is ‘unscientific’ (or at least unreasonable) to rely on them totally.

Much of this is bread-and-butter in disciplines other than economics, and I am not sure that what economists mostly need is to improve their mathematics: they need to improve their sciencey-ness, and then use mathematics better. But I do think that they need somehow to come to a better appreciation of the mathematics of uncertainty, beyond basic probability theory and its ramifications.

Dave Marsay

Mathematical Modelling

Mathematics, and modelling in particular, is very powerful, and hence can be very risky if you get it wrong, as in mainstream economics. But is modelling inappropriate – as has been claimed – or is it just that it has not been done well enough?

As a mathematician who has dabbled in modelling and economics I thought I’d try my hand at modelling economies. What harm could there be?

My first notion is that actors’ activity is habitual.

My second is that habits persist until there is a ‘bad’ experience, in which case they are revised. What is taken account of, what counts as ‘bad’ and how habits are replaced or revised are all subject to meta-habits (habits about habits).

In particular, mainstream economists suppose that actors seek to maximise their utilities, and they never revise this approach. But this may be too restrictive.

Myself, I would add that most actors mostly seek to copy others and also tend to discount experiences and lessons identified by previous generations.

With some such ‘axioms’ (suitably formalised) as those above, one can predict booms and busts leading to new ‘epochs’ characterised by dominant theories and habits. For example, suppose that some actors habitually borrow as much as they can to invest in an asset (such as a house for rent) and the asset class performs well. Then they will continue in their habit, and others who have done less well will increasingly copy them, fuelling an asset price boom. But no asset class is worth an infinite amount, so the boom must end, resulting in disappointment and changes in habit, which may again be copied by those who are losing out on the asset class, giving a bust. Thus one has an ‘emergent behaviour’ that contradicts some of the implicit mainstream assumptions about rationality (such as ‘ergodicity’), and hence the possibility of meaningful ‘expectations’ and utility functions to be maximized. This is not to say that such things cannot exist, only that if they do exist it must be due to some economic law as yet unidentified, and we need an alternative explanation for booms and busts.
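To make this concrete, here is one crude formalisation of such ‘axioms’ (a toy of my own; the functional forms and parameter values are arbitrary, chosen only to show that the story can be formalised):

    import numpy as np

    rng = np.random.default_rng(0)
    n, steps = 1000, 400
    investing = rng.random(n) < 0.05        # habit: does this actor hold the asset?
    price, history = 1.0, []

    for t in range(steps):
        demand = investing.mean()
        # returns are driven by inflows, and dragged down as the price
        # detaches from any plausible fundamental value
        ret = 0.1 * (demand - 0.05) - 0.02 * np.log(price) + 0.01 * rng.normal()
        price *= 1 + ret
        history.append((t, demand, price))
        if ret > 0:
            # 'good' experience: habits persist, and others copy the winners
            investing |= (~investing) & (rng.random(n) < 0.1 * demand)
        else:
            # 'bad' experience: some investors revise their habit and sell
            investing &= rng.random(n) > 0.2

    for t, demand, p in history[::50]:
        print(f"t={t:3d}  holders={demand:4.0%}  price={p:7.2f}")

In runs of this kind one should expect slow, self-reinforcing booms punctuated by sharp busts – successive ‘epochs’ – with no actor maximizing any utility.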

What I take from this is that mathematical models seem possible and may even provide insights. I do not assume that a model that is adequate in the short-run will necessarily continue to be adequate, and my model shows how economic epochs can be self-destructing. To me, the problem in economics is not so much that it uses mathematics and in particular mathematical modelling but that it does so badly. My ‘axioms’ mimic the approach that Einstein took to physics: they replace an absolutist model with a relativistic one, and show that it makes a difference. In my model there are no magical ‘expectations’; rather, actors may have realistic habits and expectations, based on their experience and interpretation of the media and other sources, which may be ‘correct’ (or at least not falsified) in the short-run, but which cannot provide adequate predictions for the longer run. To survive a change of epochs our actors would need to be at least following some actors who were monitoring and thinking about the overall situation more broadly and deeply than those who focus on short-run utility. (Something that currently seems lacking.)

David Marsay

Uncertainty is not just probability

My paper, based on the discussion paper referred to in a previous post, has just been published. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

Dave Marsay

Are financiers really stupid?

The New Scientist (30 March 2013) has the following question, under the heading ‘Stupid is as stupid does’:

Jack is looking at Anne but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?

Possible answers are: “yes”, “no” or “cannot be determined”.

You might want to think about this before scrolling down.

.

.

.

.

.

.

.

It is claimed that while ‘the vast majority’ (presumably including financiers, whose thinking is being criticised) think the answer is “cannot be determined”,

careful deduction shows that the answer is “yes”.

Similar views are expressed at a learning blog and at a Physics blog, although the ‘careful deductions’ are not given. Would you like to think again?

.

.

.

.

.

.

.

.

Now I have a confession to make. My first impression is that the closest of the admissible answers is ‘cannot be determined’, and having thought carefully for a while, I have not changed my mind. Am I stupid? (Based on this evidence!) You might like to think about this before scrolling down.

.

.

.

.

.

.

.

Some people object that the term ‘is married’ may not be well-defined, but that is not my concern. Suppose that one has a definition of marriage that is as complete and precise as possible. What is the correct answer? Does that change your thinking?

.

.

.

.

.

.

.

Okay, here are some candidate answers that I would prefer, if allowed:

  1. There are cases in which the answer cannot be determined.
  2. It is not possible to prove that there are not cases in which the answer cannot be determined. (So that the answer could actually be “yes”, but we cannot know that it is “yes”.)

Either way, it cannot be proved that there is a complete and precise way of determining the answer, but for different reasons. I lean towards the first answer, but am not sure. Which it is, is not a logical or mathematical question, but a question about ‘reality’, so one should ask a Physicist. My reasoning follows … .

.

.

.

.

.

.

.

.

Suppose that Anne marries Henry who dies while out in space, with a high relative velocity and acceleration. Then to answer yes we must at least be able to determine a unique time in Anne’s time-frame in which Henry dies, or else (it seems to me) there will be a period of time in which Anne’s status is indeterminate. It is not just that we do not know what Anne’s status is; she has no ‘objective’ status.

If there is some experiment which really proves that there is no possible ‘objective’ time (and I am not sure that there is) then am I not right? Even if there is no such experiment, one cannot determine the truth of physical theories, only fail to disprove them. So either way, am I not right?

Enlightenment, please. The link to finance is that the New Scientist article says that

Employees leaving logic at the office door helped cause the financial crisis.

I agree, but it seems to me (after Keynes) that it was their use of the kind of ‘classical’ logic that is implicitly assumed in the article that is at fault. Being married is a relation, not a proposition about Anne. Anne has no state or attributes from which her marital status can be determined, any more than terms such as crash, recession, money supply, inflation, inequality, value or ‘the will of the people’ have any correspondence in real economies. Unless you know different?
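For completeness, the ‘careful deduction’ that the cited blogs leave out can be spelt out. Note that it only goes through under the classical assumption that ‘Anne is married’ is a definite true-or-false proposition (a sketch of mine):

    jack, george = True, False             # given: Jack is married, George is not
    for anne in (True, False):             # classical assumption: Anne IS one or the other
        # (looker married?, looked-at married?) for each 'looking' pair
        pairs = [(jack, anne), (anne, george)]
        answer = any(a and not b for a, b in pairs)
        print(f"if Anne is {'married' if anne else 'unmarried'}: {answer}")

Both branches print True, so classically the answer is “yes” whatever Anne’s status: it is exactly the excluded middle that does the work. Drop that assumption, as argued above, and the deduction fails.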

Dave Marsay

Mathematics, psychology, decisions

I attended a conference on the mathematics of finance last week. It seems that things would have gone better in 2007/8 if only policy makers had employed some mathematicians to critique the then dominant dogmas. But I am not so sure. I think one would need to understand why people went along with the dogmas. Psychology, such as behavioural economics, doesn’t seem to help much, since although it challenges some aspects of the dogmas it fails to challenge (and perhaps even promotes) other aspects, so that it is not at all clear how it could have helped.

Here I speculate on an answer.

Finance and economics are either empirical subjects or they are quasi-religious, based on dogmas. The problems seem to arise when they are the latter but we mistake them for the former. If they are empirical then they have models whose justification is based on evidence.

Naïve inductivism boils down to the view that whatever has always (never) been the case will continue always (never) to be the case. Logically it is untenable, because one often gets clashes, where two different applications of naïve induction are incompatible. But pragmatically, it is attractive.

According to naïve inductivism we might suppose that if the evidence has always fitted the models, then actions based on the supposition that they will continue to do so will be justified. (Hence, ‘it is rational to act as if the model is true’.) But for something as complex as an economy the models are necessarily incomplete, so that one can only say that the evidence fitted the models within the context as it was at the time. Thus all that naïve inductivism could tell you is that ‘it is rational’ to act as if the model is true, unless and until the context should change. But many of the papers at the mathematics of finance conference were pointing out specific cases in which the actions ‘obviously’ changed the context, so that naïve inductivism should not have been applied.

It seems to me that one could take a number of attitudes:

  1. It is always rational to act on naïve inductivism.
  2. It is always rational to act on naïve inductivism, unless there is some clear reason why not.
  3. It is always rational to act on naïve inductivism, as long as one has made a reasonable effort to rule out any contra-indications (e.g., by considering ‘the whole’).
  4. It is only reasonable to act on naïve inductivism when one has ruled out any possible changes to the context, particularly reactions to our actions, by considering an adequate experience base.

In addition, one might regard the models as conditionally valid, and hedge accordingly. (‘Unless and until there is a reaction’.) Current psychology seems to suppose (1) and hence has little to help us understand why people tend to lean too strongly on naïve inductivism. It may be that a belief in (1) is not really psychological, but simply a consequence of education (i.e., cultural).

See Also

Russell’s Human Knowledge. My media for the conference.

Dave Marsay

Haldane’s The dog and the Frisbee

Andrew Haldane The dog and the Frisbee

Haldane argues in favour of simplified regulation. I find the conclusions reasonable, but have some quibbles about the details of the argument. My own view is that many of our financial problems have been due – at least in part – to a misrepresentation of the associated mathematics, and so I am keen to ensure that we avoid similar misunderstandings in the future. I see this as a primary responsibility of ‘regulators’, viewed in the round.

The paper starts with a variation of Ashby’s ball-catching observation, involving a dog and a Frisbee instead of a man and a ball: you don’t need to estimate the position of the Frisbee or be an expert in aerodynamics: a simple, natural heuristic will do. He applies this analogy to financial regulation, but it is somewhat flawed. When catching a Frisbee one relies on the Frisbee behaving normally, but in financial regulation one is concerned with what had seemed to be abnormal, such as the crisis period of 2007/8.

It is noted of Game theory that

John von Neumann and Oskar Morgenstern established that optimal decision-making involved probabilistically-weighting all possible future outcomes.

In apparent contrast

Many of the dominant figures in 20th century economics – from Keynes to Hayek, from Simon to Friedman – placed imperfections in information and knowledge centre-stage. Uncertainty was for them the normal state of decision-making affairs.

“It is not what we know, but what we do not know which we must always address, to avoid major failures, catastrophes and panics.”

The Game Theory thinking is characterised as ignoring the possibility of uncertainty, which – from a mathematical point of view – seems an absurd misreading. Theories can only ever have conditional conclusions: any unconditional misinterpretation goes beyond the proper bounds. The paper – rightly – rejects the conclusions of two-player zero-sum static game theory. But its critique of such a theory is much less thorough than von Neumann and Morgenstern’s own (e.g. their 4.3.3) and fails to identify which conditions are violated by economics. More worryingly, it seems to invite the reader to accept them, as here:

The choice of optimal decision-making strategy depends importantly on the degree of uncertainty about the environment – in statistical terms, model uncertainty. A key factor determining that uncertainty is the length of the sample over which the model is estimated. Other things equal, the smaller the sample, the greater the model uncertainty and the better the performance of simple, heuristic strategies.
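The sample-size claim itself is easy to illustrate (a toy of my own, not from the paper): compare a plug-in ‘optimal’ portfolio, built from estimated means, with a fixed 1/N-style rule, judging both by mean-variance utility under the true distribution:

    import numpy as np

    rng = np.random.default_rng(1)
    mu = np.array([0.08, 0.02])      # true mean returns (unknown to the agent)
    vol, lam = 0.2, 5.0              # known volatility; risk aversion

    def utility(w):
        # mean-variance utility evaluated under the *true* distribution
        return w @ mu - 0.5 * lam * vol**2 * (w @ w)

    w_simple = np.array([0.25, 0.25])    # a fixed, 1/N-style heuristic
    for n in (5, 50, 500):               # length of the estimation sample
        u = np.mean([utility(rng.normal(mu, vol, (n, 2)).mean(0) / (lam * vol**2))
                     for _ in range(20_000)])   # plug-in optimum from estimated means
        print(f"n={n:3d}  plug-in {u:+.4f}   heuristic {utility(w_simple):+.4f}")

With a small sample the estimation error swamps the optimisation and the heuristic wins; with enough data the plug-in rule overtakes it. So far this supports the quoted passage – but note the buried assumption that the data-generating process never changes.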

This seems to suggest that – contra game theory – we could ‘in principle’ establish a sound model, if only we had enough data. Yet:

Einstein wrote that: “The problems that exist in the world today cannot be solved by the level of thinking that created them”.

There seems a non-sequitur here: if new thinking is repeatedly being applied then surely the nature of the system will continually be changing? Or is it proposed that the ‘new thinking’ will yield a final solution, eliminating uncertainty? If it is the case that ‘new thinking’ is repeatedly being applied then the regularity conditions of basic game theory (e.g. at 4.6.3 and 11.1.1) are not met (as discussed at 2.2.3). It is certainly not an unconditional conclusion that the methods of game theory apply to economies beyond the short-run, and experience would seem to show that such an assumption would be false.

The paper recommends the use of heuristics, by which it presumably means what Gigerenzer means: methods that ignore some of the data. Thus, for example, all formal methods are heuristics since they ignore intuition. But a dog catching a Frisbee only has its own experience, which it is using, and so presumably – by this definition – is not actually using a heuristic either. In 2006 most financial and economic methods were heuristics in the sense that they ignored the lessons identified by von Neumann and Morgenstern. Gigerenzer’s definition seems hardly helpful. The dictionary definition relates to learning on one’s own, ignoring others. The economic problem, it seems to me, was of paying too much attention to the wrong people, and too little to those such as von Neumann and Morgenstern – and Keynes.

The implication of the paper and Gigerenzer is, I think, that a heuristic is a set method that is used, rather than solving a problem from first principles. This is clearly a good idea, provided that the method incorporates a check that whatever principles it relies upon do in fact hold in the case at hand. (This is what economists have often neglected to do.) If set methods are used as meta-heuristics to identify the appropriate heuristics for particular cases, then one has something like recognition-primed decision-making. It could be argued that the financial community had such meta-heuristics, which led to the crash: the adoption of heuristics as such seems not to be a solution. Instead one needs to appreciate what kinds of heuristic are appropriate when. Game theory shows us that the probabilistic heuristics are ill-founded when there is significant innovation, as there was prior to, through and immediately after 2007/8. In so far as economics and finance are games, some events are game-changers. The problem is not the proper application of mathematical game theory, but the ‘pragmatic’ application of a simplistic version: playing the game as it appears to be unless and until it changes. An unstated possible deduction from the paper is surely that such ‘pragmatic’ approaches are inadequate. For mutable games, strategy needs to take place at a higher level than it does for fixed games: it is not just that different strategies are required, but that ‘strategy’ has a different meaning: it should at least recognize the possibility of a change to a seemingly established status quo.

If we take an analogy with a dog and a Frisbee, and consider Frisbee catching to be a statistically regular problem, then the conditions of simple game theory may be met, and it is also possible to establish statistically that a heuristic (method) is adequate. But if there is innovation in the situation then we cannot rely on any simplistic theory or on any learnt methods. Instead we need a more principled approach, such as that of Keynes or Ashby, considering the conditionality and looking out for potential game-changers. The key is not just simpler regulation, but regulation that is less reliant on conditions that we expect to hold but which, on maturer reflection, are not totally reliable. In practice this may necessitate a mature on-going debate to adjust the regime to potential game-changers as they emerge.

See Also

Ariel Rubinstein opines that:

classical game theory deals with situations where people are fully rational.

Yet von Neumann and Morgenstern (4.1.2) note that:

the rules of rational behaviour must provide definitely for the possibility of irrational conduct on the part of others.

Indeed, in a paradigmatic zero-sum two-person game, if the other person plays rationally (according to game theory) then your expected return is the same irrespective of how you play. Thus it is of the essence that you consider potential non-rational plays. I take it, then, that game theory as reflected in economics is a very simplified – indeed an over-simplified – version. It is presumably this distorted version that Haldane’s criticisms properly apply to.

Dave Marsay

Haldane’s Tails of the Unexpected

A. Haldane, B. Nelson, Tails of the unexpected, The Credit Crisis Five Years On: Unpacking the Crisis conference, University of Edinburgh Business School, 8-9 June 2012

The credit crisis is blamed on a simplistic belief in ‘the Normal Distribution’ and its ‘thin tails’, understating risk. Complexity and chaos theories point to greater risks, as does the work of Taleb.

Modern weather forecasting is pointed to as good relevant practice, where one can spot trouble brewing. Robust and resilient regulatory mechanisms need to be employed. It is no good relying on statistics like VaR (Value at Risk) that assume a normal distribution. The Bank of England is developing an approach based on these ideas.
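As a toy illustration of the VaR point (mine, not the paper’s), compare normal quantiles with those of a fat-tailed Student-t alternative matched to the same standard deviation:

    import numpy as np
    from scipy import stats

    sigma = 0.01                               # daily return std of 1%
    nu = 3                                     # degrees of freedom: heavy tails
    scale_t = sigma * np.sqrt((nu - 2) / nu)   # match the Student-t's std to sigma
    for p in (0.99, 0.999):
        var_normal = stats.norm.ppf(p, scale=sigma)
        var_t = stats.t.ppf(p, df=nu, scale=scale_t)
        print(f"{p:.3f}  normal VaR {var_normal:.4f}   fat-tailed VaR {var_t:.4f}")

At the 99.9% level the normal assumption understates the loss threshold by nearly a factor of two – and this is still within a world where some fixed distribution exists at all.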

Comment

Risk arises when the statistical distribution of the future can be calculated or is known. Uncertainty arises when this distribution is incalculable, perhaps unknown.

While the paper acknowledges Keynes’ economics and Knightian uncertainty, it overlooks Keynes’ Treatise on Probability, which underpins his economics.

Much of modern econometric theory is … underpinned by the assumption of randomness in variables and estimated error terms.

Keynes was critical of this assumption, and of this model:

Economics … shift[ed] from models of Classical determinism to statistical laws. … Evgeny Slutsky (1927) and Ragnar Frisch (1933) … divided the dynamics of the economy into two elements: an irregular random element or impulse and a regular systematic element or propagation mechanism. This impulse/propagation paradigm remains the centrepiece of macro-economics to this day.

Keynes pointed out that such assumptions could only be validated empirically and (as the current paper also does) in the Treatise he cited Lexis’s falsification.

The paper cites a game of paper/scissors/stone which Sotheby’s thought was a simple game of chance but which Christie’s saw as an opportunity for strategizing – and won millions of dollars. Apparently Christie’s consulted some 11 year old girls, but they might equally well have been familiar with Shannon‘s machine for defeating strategy-impaired humans. With this in mind, it is not clear why the paper characterises uncertainty as merely being about unknown probability distributions, as distinct from Keynes’ more radical position, that there is no such distribution.

The paper is critical of nerds, who apparently ‘like to show off’.  But to me the problem is not the show-offs, but those who don’t know as much as they think they know. They pay too little attention to the theory, not too much. The girls and Shannon seem okay to me: it is those nerds who see everything as the product of randomness or a game of chance who are the problem.

If we compare the Slutsky Frisch model with Kuhn’s description of the development of science, then economics is assumed to develop in much the same way as normal science, but without ever undergoing anything like a (systemic) paradigm shift. Thus, while the model may be correct most of the time, violations, such as in 2007/8, matter.

Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.

One can understand this reasoning by analogy with science: the more dominant a school which protects its core myths, the greater the reaction and impact when the myths are exposed. But in finance it may not be just ‘risk control’ that causes a problem. Any optimisation that is blind to the possibility of systemic change may tend to increase the chance of change (for good or ill) [E.g. Bohr Atomic Physics and Human Knowledge. Ox Bow Press 1958].
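The ‘volcanic eruption’ mechanism is easy to caricature (a toy of mine): let stress accumulate steadily, and only release it once it exceeds a policy threshold. The tighter small releases are suppressed, the more the same total stress arrives in a few huge events:

    import numpy as np

    rng = np.random.default_rng(4)

    def releases(threshold, steps=100_000):
        # stress arrives steadily; it is only released past the threshold
        pressure, out = 0.0, []
        for _ in range(steps):
            pressure += rng.exponential(1.0)
            if pressure > threshold:
                out.append(pressure)
                pressure = 0.0
        return np.array(out)

    for threshold in (2.0, 20.0, 200.0):     # tighter 'control' of small bumps
        r = releases(threshold)
        print(f"threshold {threshold:5.0f}: {len(r):6d} events, "
              f"mean size {r.mean():6.1f}, biggest {r.max():6.1f}")

The long-run total released is much the same in each case; the policy only chooses between many small corrections and occasional catastrophes.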

See Also

Previous posts on articles by or about Haldane, along similar lines:


Dave Marsay

Avoiding ‘Black Swans’

A UK Blackett Review has reviewed some approaches to uncertainty relevant to the question “How can we ensure that we minimise strategic surprises from high impact low probability risks”. I have already reviewed the report in its own terms. Here I consider the question.

  • One person’s surprise may be as a result of another person’s innovation, so we need to consider the up-sides and down-sides together.
  • In this context ‘low probability’ is subjective. Things are not surprising unless we didn’t expect them, so the reference to low probability is superfluous.
  • Similarly, strategic surprise necessarily relates to things that – if only in anticipation – have high impact.
  • Given that we are concerned with areas of innovation and high uncertainty, the term ‘minimise’ is overly ambitious. Reducing would be good. Thinking that we have minimized would be bad.

The question might be simplified to two parts:

  1. “How can we ensure that we strategize?”
  2. “How can we strategize?”

These questions clearly have very important related considerations, such as:

  • What in our culture inhibits strategizing?
  • Who can we look to for exemplars?
  • How can we convince stakeholders of the implications of not strategizing?
  • What else will we need to do?
  • Who might we co-opt or collaborate with?

But here I focus on the more widely-applicable aspects. On the first question the key point seems to be that, where the Blackett review points out the limitations of a simplistic view of probability, there are many related misconceptions and misguided ways of thinking that blind us to the possibility of or benefits of strategizing. In effect, as in economics, we have got ourselves locked into ‘no-strategy strategies’, where we believe that a short-term adaptive approach, with no broader or long-term view, is the best, and that more strategic approaches are a snare and a delusion. Thus the default answer to the original question seems to be ‘you don’t – you just live with the consequences’. In some cases this might be right, but I do not think that we should take it for granted. This leads on to the second part.

We at least need ‘eyes open minds open’, to be considering potential surprises, and keeping score. If (for example, as in International Relations) it seems that none of our friends do better than chance, we should consider cultivating some more. But the scoring and rewarding is an important issue. We need to be sure that our mechanisms aren’t recognizing short-term performance at the expense of long-run sustainability. We need informed views about what ‘doing well’ would look like and what are the most challenging issues, and to seek to learn and engage with those who are doing well. We then need to engage in challenging issues ourselves, if only to develop and then maintain our understanding and capability.

If we take the financial sector as an example, there used to be a view that regulation was not needed. There are two more moderate views:

  1. That the introduction of rules would distort and destabilise the system.
  2. That although the system is not inherently stable, the government is not competent to regulate, and no regulation is better than bad regulation.

My view is that what is commonly meant by ‘regulation’ is very tactical, whereas the problems are strategic. We do not need a ‘strategy for regulation’: we need strategic regulation. One of the dogmas of capitalism is that it involves ‘free markets’ in which information plays a key role. But in the noughties the markets were clearly not free in this sense. A potential role for a regulator, therefore, would be to perform appropriate ‘horizon scanning’ and to inject appropriate information to ‘nudge’ the system back into sustainability. Some voters would be suspicious of a government that attempts to strategize, but perhaps this form of regulation could be seen as simply better-informed muddling, particularly if there were strong disincentives to take unduly bold action.

But finance does not exist separate from other issues. A UK ‘regulator’ would need to be a virtual beast spanning the departments, working within the confines of regular general elections, and being careful not to awaken memories of Cromwell.

This may seem terribly ambitious, but maybe we could start with reformed concepts of probability, performance, etc. 

Comments?

See also

JS Mill’s views

Other debates, my bibliography.  

Dave Marsay

The money forecast

A review of ‘The money forecast’, A. Haldane, New Scientist, 10 Dec. 2011. The on-line version is ‘To navigate economic storms we need better forecasting’.

Summary

Andrew Haldane, ‘Andy’, is one of the more insightful and – hopefully – influential members of the UK economic community, recognising that new ways of thinking are needed and taking a lead in their development.

He refers to a previous article ‘Revealed – the Capitalist network that runs the world’, which inspires him to attempt to map the world of finance.

“… Making sense of the financial system is more an act of archaeology than futurology.”

Of the pre-crisis approach it says:

“… The mistake came in thinking the behaviour of the system was just an aggregated version of the behaviour of the individual. …

Interactions between agents are what matters. And the key to that is to explore the underlying architecture of the network, not the behaviour of any one node. To make an analogy, you cannot understand the brain by focusing on a neuron – and then simply multiplying by 100 billion. …

… When parts started to malfunction … no one had much idea what critical faculties would be impaired.

That uncertainty, coupled with dense financial wiring, turned small failures into systemic collapse. …

Those experiences are now seared onto the conscience of regulators. Systemic risk has entered their lexicon, and to understand that risk, they readily acknowledge the need to join the dots across the network. So far, so good. Still lacking are the data and models necessary to turn this good intent into action.

… Other disciplines have cut a dash in their complex network mapping over the past generation, assisted by increases in data-capture and modelling capability made possible by technology. One such is weather forecasting … .

Success stories can also be told about utility grids and transport networks, the web, social networks, global supply chains and perhaps the most complex web of all, the brain.

… imagine the scene a generation hence. There is a single nerve centre for global finance. Inside, a map of financial flows is being drawn in real time. The world’s regulatory forecasters sit monitoring the financial world, perhaps even broadcasting it to the world’s media.

National regulators may only be interested in a quite narrow subset of the data for the institutions for which they have responsibility. These data could be part of, or distinct from, the global architecture.

… it would enable “what-if?” simulations to be run – if UK bank Northern Rock is the first domino, what will be the next?”

Comments

I am unconvinced that archaeology, weather forecasting or the other examples are really as complex as economic forecasting, which can be reflexive: if all the media forecast a crash there probably will be one, irrespective of the ‘objective’ financial and economic conditions. Similarly, prior to the crisis most people seemed to believe in ‘the great moderation’, and the good times rolled on, seemingly.

Prior to the crisis I was aware that a minority of British economists were concerned about the resilience of the global financial system and that the ‘great moderation’ was a cross between a house of cards and a pyramid selling scheme. In their view, a global financial crisis precipitated by a US crisis was the greatest threat to our security. In so far as I could understand their concerns, Keynes’ mathematical work on uncertainty together with his later work on economics seemed to be key.

Events in 2007 were worrying. I was advised that the Chinese were thinking more sensibly about these issues, and I took the opportunity to visit China in Easter 2008, hosted by the Chinese Young Persons Tourist Group, presumably not noted for their financial and economic acumen. It was very apparent from a coach ride from Beijing to the Great Wall that their program of building new towns and moving peasants in was on hold. The reason given by the Tour Guide was that the US financial system was expected to crash after their Olympics, leading to a slow-down in their economic growth, which needed to be above 8% or else they faced civil unrest. Once tipped off, similar measures to mitigate a crisis were apparent almost everywhere. I also talked to a financier, and had some great discussions about Keynes and his colleagues, and the implications for the crash. In the event the crisis seems to have been triggered by other causes, but Keynes’ conceptual framework still seemed relevant.

The above only went to reinforce my prejudice:

  • Not only is uncertainty important, but one needs to understand its ramifications as least as well as Keynes did (e.g. in his Treatise and ‘Economic Consequences of the Peace’).
  • Building on this, concepts such as risk need to be understood to their fullest extent, not reduced to numbers.
  • The quotes above are indicative of the need for a holistic approach. Whatever variety one prefers, I do think that this cannot be avoided.
  • The quote about national regulators only having a narrow interest seems remarkably reductionist. I would think that they would all need a broad interest and to be exchanging data and views, albeit they may only have narrow responsibilities. Financial storms can spread around the world quicker than meteorological ones.
  • The – perhaps implicit – notion of only monitoring financial ‘flows’ seems ludicrous. I knew that the US was bound to fail eventually, but it was only by observing changes in migration that I realised it was imminent. Actually, I might have drawn the same conclusion from observing changes in financial regulation in China, but that still was not a ‘financial flow’. I did previously draw similar conclusions talking to people who were speculating on ‘buy to let’, thinking it a sure-thing.
  • Interactions between agents and architectures are important, but if Keynes was right then what really matters are changes to ‘the rules of the games’. The end of the Olympics was not just a change in ‘flows’ but a potential game-changer.
  • Often it is difficult to predict what will trigger a crisis, but one can observe when the situation is ripe for one. To draw an analogy with forest fires, one can’t predict when someone will drop a bottle or a lit cigarette, but one can observe when the tinder has built up and is dry.

It thus seems to me that while Andy Haldane is insightful, the actual article is not that enlightening, and invites a much too prosaic view of forecasting. Even if we think that Keynes was wrong I am fairly sure that we need to develop language and concepts in which we can have a discussion of the issues, even if only ‘Knightian uncertainty’. The big problem that I had prior to the crisis was the lack of a possibility of such a discussion. If we are to learn anything from the crisis it is surely that such discussions are essential. The article could be a good start.

See Also

The short long. On the trend to short-termism.

Control rights (and wrongs). On the imbalance between incentives and risks in banking.

Risk Off. A behaviourist’s view of risk. It notes that prior to the crash ‘risk was under-priced’.

Dave Marsay

How mathematical modelling seduced Wall Street (NS)

How mathematical modelling seduced Wall Street

New Scientist, 22 Oct. 2011.

See also page 10 A better way to price the future takes hold.

In the print version this is ‘Unruly humans vs the lust for order’, and it ends by criticising ‘models in the physical sciences’. Whitehead, co-author of Principia Mathematica, has shown in forensic detail, in his Process and Reality, the limitations of conventional models. Keynes had also covered much the same ground in his Treatise on Probability. More recently, Good joined the dots while Prigogine developed a mathematical model showing the severe limitations of the conventional approach. Yet the online version seems to criticise ‘mathematical modelling’.

I think the actual problem of Wall Street is its pragmatism. In the short-run we earn bonuses, in the long-run we are retired. So it is pragmatic to make money while the opportunity is there. The problem is in ‘valuing the future’ (pg 10). In markets where we can always move on, we don’t value the future. Why should we, unless we have a stake in it? But Whitehead and Keynes also note a kind of ‘lust for order’, or at least an assumption that whatever order there happens to be will endure. But whether it was short-termism or a misguided attitude to order, mathematical modelling appears innocent.

Institutional Investor

How to understand the limits of financial models is for a more financially aware audience, but raises new issues.

“… there has been a frantic attempt to prevent loss, to restore the status quo ante at all cost”

The status quo ante was very risky: we should not be seeking to return to it. (Keynes showed why.)

“Quants were the theorists”

Oh dear. If the quants had been mathematicians they would have realised that economics was an empirical subject, and appreciated the uncertainties that Keynes highlighted.

“… traders were the experimentalists, and we collaborated to develop and explore our models.”

Oh dear. In an empirical subject, how can one separate ‘theory’ and ‘experiment’ like this? And what can one deduce from traders’ experiments?

“If you are someone who cannot distinguish between God’s creations and man’s idols, you may mistake models for deep laws. Many economists are such people.”

So blame such economists, not mathematicians (or physicists).

“We have seen corporations treated with the kindness owed to individuals, in the hope, perhaps, that their well-being would trickle down to individuals, and individuals treated with the kindness owed to objects.”

Perceptive. Derman’s prescription includes:

“Avoid axiomatization. Axioms and theorems are suitable for mathematics, but finance is concerned with the real world. Every financial axiom is pretty much wrong; the most-relevant questions in creating a model are, how wrong and in what way?”

If one doesn’t axiomatize one cannot do mathematics. One is left to apply formulae and methods with no real understanding. Keynes’ attempts to axiomatize probability and economics were critical in revealing the flaws in conventional thinking. The mistake is to turn axioms into dogma.

“The dangerous part of Black-Scholes is the further assumption that the sole risk of a stock is the risk of diffusion, which isn’t true. But the more realistically you can define risk, the better the model will become.”

How does one define risk, if not with axioms? I tend to go along with Keynes, in supposing that one cannot define risk, but can give an axiomatization that falls short of a precise definition.

“When someone shows you an economic or financial model that involves mathematics, you should understand that, despite the confident appearance of the equations, what lies beneath is a substrate of great simplification and — only sometimes — great imagination, perhaps even intuition.”

Having axioms shows exactly what ‘lies beneath’. Being able to produce an axiomatization is a good test of one’s understanding. Thus financial modellers typically define away risk; the mathematics makes this clear. What else would?

“Beware of idolatry. The greatest conceptual danger is idolatry: believing that someone can write down a theory that encapsulates human behavior and thereby free you of the obligation to think for yourself. A model may be entrancing, but no matter how hard you try, you will not be able to breathe life into it. To confuse a model with a theory is to believe that humans obey mathematical rules, and to invite future disaster.”

This gives us a clue to some of the confusion. Mathematical models and rules (such as Keynes’) can reflect imprecision and uncertainty. The problem is that the customers for economic models wanted precision and certainty, and were content with models that were mathematical in the sense that they were based on formulae using mathematical operators with no concern for their validity.

Derman reminds us of some earlier (2009) advice:

“• I will remember that I didn’t make the world and that it doesn’t satisfy my equations.

• Though I will use the models that I or others create to boldly estimate value, I will always look over my shoulder and never forget that the model is not the world.

• I will not be overly impressed by mathematics. I will never sacrifice reality for elegance without explaining to end users why I have done so.

• I will not give the people who use my models false comfort about their accuracy. I will make the assumptions and oversights explicit to all who use them.

• I understand that my work may have enormous effects on society and the economy, many beyond my apprehension.”

These seem reasonable. However most modellers have been paid by people who appear to have no concern for the longer term effects, and the apparent desire to return to the status quo ante suggests that they still don’t. It is no good giving advice to modellers (mathematical or otherwise) unless there are fundamental changes to financial institutions, changes that are incompatible with conventional capitalism, “a way of life in which all the standards of the past are supposedly subservient to the goal of efficient, timely production”.

“We need free markets, but we need them to be principled.”

Agreed. Can’t mathematics help?

Reuters

The Physics of an economic crisis is along much the same lines.

See Also

Kauffman, the End of a Physics Worldview, takes a more theoretical approach to the same issue. Or Good, a mathematician who explores the limitations of theories and models.

Dave Marsay

GLS Shackle, imagined and deemed possible?

Background

This is a personal view of GLS Shackle’s uncertainty. Having previously used Keynes’ approach to identify possible failure modes in systems, including financial systems (in the run-up to the collapse of the tech bubble), I became concerned in 2007 that there was another bubble with a potential for a Keynes-type 25% drop in equities, constituting a ‘crisis’. In discussions with government advisers I first came across Shackle. The differences between him and Keynes were emphasised. I tried to make sense of Shackle, so that I could form my own view, but failed. Unfinished business.

Since the crash of 2008 there have been various attempts to compare and contrast Shackle and Keynes, and others. Here I imagine a solution to the conundrum which I deem possible: unless you know different?

Imagined Shackle

Technically, Shackle seems to focus on the wickeder aspects of uncertainty, to seek to explain them and their significance to economists and politicians, and to advise on how to deal with them. Keynes provides a more academic view, covering all kinds of uncertainty, contrasting tame probabilities with wicked uncertainties, helping us to understand both in a language that is better placed to survive the passage of time and the interpretation by a wider – if more technically aware – audience.

Politically, Shackle lacks the baggage of Lord Keynes, whose image has been tarnished by the misuse of the term ‘Keynesian’. (Like Keynes, I am not a Keynesian.)

Conventional probability theory would make sense if the world was a complicated randomizing machine, so that one has ‘the law of large numbers’: that in the long run particular events will tend to occur with some characteristic, stable, frequency. Thus in principle it would be possible to learn the frequency of events, such that reasonably rare events would be about as rare as we expect them to be. Taleb has pointed out that we can never learn the frequencies of very rare events, and that this is a technical flaw in many accounts of probability theory, which fail to point this out. But Keynes and Shackle have more radical concerns.
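Taleb’s point is easy to see with a toy calculation (my own): with a fixed length of history, the relative error of a frequency estimate blows up as the event gets rarer:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 10_000                                   # a 'long' record of observations
    for p in (0.1, 1e-3, 1e-5):
        counts = rng.binomial(n, p, size=1000)   # 1000 parallel histories
        rel_err = (counts / n).std() / p         # spread of the estimate, relative to truth
        unseen = (counts == 0).mean()            # histories in which the event never occurs
        print(f"p={p:7}  relative error {rel_err:4.2f}, never seen in {unseen:4.0%} of histories")

For the rarest event most histories contain no occurrences at all, so its frequency is not learnable from data of this length – even granting the randomizing-machine picture.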

If we think of the world as a complicated randomizing machine then, as in Whitehead, it is one which can suddenly change. Shackle’s approach, in so far as I understand it, is to be open to the possibility of a change, recognize when the evidence of a change is overwhelming, and to react to it. This is an important difference from the conventional approach, in which all inference is done on the assumption that the machine is known. Any evidence that it may have changed is simply normalised away. Shackle’s approach is clearly superior in all those situations where substantive change can occur.

Shackle terms decisions about a possibly changing world ‘critical’. He makes the point that the application of a predetermined strategy or habit is not a decision proper: all ‘real’ decisions are critical in that they make a lasting difference to the situation. Thus one has strategies for situations that one expects to repeat, and makes decisions about situations that one is trying to ‘move on’. This seems a useful distinction.

Shackle’s approach to critical decisions is to imagine potential changes to new behaviours, to assess them and then to choose between those deemed possible. This is based on preference not expected utility, because ‘probability’ does not make sense. He gives an example of a French guard at the time of the revolution who can either give access to a key prisoner or not. He expects to lose his life if he makes the wrong decision, depending on whether the revolution succeeds or not. A conventional approach would be based on the realisation that most attempted revolutions fail. But his choice may have a big impact on whether or not the revolution succeeds. So Shackle advocates imagining the two possible outcomes and their impact on him, and then making a choice. This seems reasonable. The situation is one of choice, not probability.

Keynes can support Shackle’s reasoning. But he also supports other types of wicked uncertainty. Firstly, it is not always the case that a change is ‘out of the blue’. One may not be able to predict when the change will come, but it is sometimes possible to see that there is an economic bubble, and the French guard probably had some indications that he was living in extraordinary times. Thus Keynes goes beyond Shackle’s pragmatism.

In reality, there is no strict dualism between probabilistic behaviour and chaos, between probability and Shackle’s complete ignorance. There are regions in-between that Keynes helps explore. For example, the French guard is not faced with a strictly probabilistic situation, but could usefully think in terms of probabilities conditioned on his actions. In economics, one might usefully think of outcomes as conditioned on the survival of conventions and institutions (October 2011).
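As a toy version of that thought (the numbers are mine, purely illustrative, and not Shackle’s or Keynes’):

    # Payoffs to the guard: he gains if he ends up on the winning side,
    # and loses his life otherwise.
    payoff = {("release", "succeeds"): +5, ("release", "fails"): -10,
              ("refuse",  "succeeds"): -10, ("refuse",  "fails"): +1}

    def expected_utility(act, p_success):
        return (p_success * payoff[(act, "succeeds")]
                + (1 - p_success) * payoff[(act, "fails")])

    # Conventional view: one fixed base rate, however the guard acts.
    for act in ("release", "refuse"):
        print(f"fixed prior      {act:7s} EU = {expected_utility(act, 0.1):+.2f}")

    # Conditioned view: the act itself shifts the chance the revolution succeeds.
    p_given_act = {"release": 0.8, "refuse": 0.3}
    for act in ("release", "refuse"):
        print(f"act-conditioned  {act:7s} EU = {expected_utility(act, p_given_act[act]):+.2f}")

With a fixed base rate the guard should refuse; once the probabilities are conditioned on his own act, the recommendation flips. The arithmetic is trivial – the substantive question is which conditioning one admits.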

I also have a clearer view why consideration of Shackle led to the rise in behavioural economics: if one is ‘being open’ and ‘imagining’ then psychology is clearly important. On the other hand, much of behavioral economics seems to use conventional rationality as some form of ‘gold standard’ for reasoning under uncertainty, and to consider departures from it as a ‘bias’.  But then I don’t understand that either!

Addendum

(Feb 2012, after Blue’s comments.)

I have often noticed that decision-takers and their advisers have different views about how to tackle uncertainty, with decision-takers focusing on the non-probabilistic aspects while their advisers (e.g. scientists or at least scientifically trained) tend to, and may even insist on, treating the problem probabilistically, and hence have radically different approaches to problem-solving. Perhaps the situation is crucial for the decision-taker, but routine for the adviser? (‘The agency problem.’) (Econophysics seems to suffer from this.)

I can see how Shackle had much that was potentially helpful in the run-up to the financial crash. But it seems to me no surprise that the neoclassical mainstream was unmoved by it. They didn’t regard the situation as crucial, and didn’t imagine or deem possible a crash. Unless anyone knows different, there seems to be nothing in Shackle’s key ideas that provides as explicit a warning as Keynes. While Shackle was more acceptable than Keynes (lacking the ‘Keynesian’ label) he also seems less to the point. One needs both together.

See Also

Prigogine, who provides models of systems that can suddenly change or ‘become’. He also relates to Shackle’s discussion of how making decisions relates to the notion of ‘time’.

Dave Marsay

Bretton Woods: Modelling and Economics

The institute for new economic thinking has a video on modelling and economics. It is considerably more interesting than it might have been before the financial crises beginning in 2007. I make a few points from a mathematical perspective.

  • There is a tendency to apply a ‘canned’ model, varying a few parameters, rather than to engage in genuine modelling. The difference makes a difference. In the run-up to the crises of 2007 on there was wide-spread agreement on key aspects of economic theory and some fixed models came to be treated as ‘fact’. In this sense, modelling had stopped. So maybe proper modelling in economics would be a useful innovation? 😉
  • Milton Friedman distinguishes between models that predict well (short-term) and those that have ‘realistic’ micro-features. One should also be concerned about the typical behaviours of the model.
  • One particularly needs, as Keynes did, to distinguish between short-run and long-run models.
  • Models that are solely judged by their ability to predict short-run events will tend to forget about significant events (e.g. crises) that occur over a longer time-frame, and to fall into the habit of extrapolating from current trends, rather than seeking to model potential changes to the status quo.
  • Again, as Keynes pointed out, in complex situations one often cannot predict the long-run future, but only anticipate potential failure modes (scenarios).
  • A single model is at best a possible model. There will always be alternatives (scenarios). One at least needs a representative set of credible models if one is to rely on them.
  • As Keynes said, there is a reflexive relationship between one’s long-run model and what actually happens. Crises mitigated are less likely to happen. A belief in the inevitable stability of the status quo increases the likelihood of a failure.
  • Generally, as Keynes said, the economic system works because people expect it to work. We are part of the system to be modelled.
  • It is better for a model to be imprecise but reliable than to be precisely wrong. This particularly applies to assumptions about human behaviour.
  • It may be better for a model to have some challenging gaps than to fill those gaps with myths.

Part 2 ‘Progress in Economics’ gives the impression that understanding crises is what is most needed, whereas much of the modelling video used language that seems more appropriate to adding epicycles to our models of the new status quo – if we ever have one.

See Also

Reasoning in a complex, dynamic, world, Which mathematics of uncertainty? , Keynes’ General Theory

Dave Marsay

How to live in a world that we don’t understand, and enjoy it (Taleb)

N. Taleb, How to live in a world that we don’t understand, and enjoy it, Goldstone Lecture 2011 (U Penn, Wharton)

Notes from the talk

Taleb returns to his alma mater. This talk supersedes his previous work (e.g. Black Swan). His main points are:

  • We don’t have a word for the opposite of fragile.
      Fragile systems have small probability of huge negative payoff
      Robust systems have consistent payoffs
      ? has a small probability of a large pay-off
  • Fragile systems eventually fail. ? systems eventually come good.
  • Financial statistics have a kurtosis that cannot in practice be measured, and tend to hugely under-estimate risk (see the sketch after this list).
      Often more than 80% of kurtosis over a few years is contributed by a single (memorable) day.
  • We should try to create ? systems.
      He calls them convex systems, where the expected return exceeds the return given the expected environment.
      Fragile systems are concave, where the expected return is less than the return from the expected situation.
      He also talks about ‘creating optionality’.
  • He notes an ‘action bias’, where whenever there is a game like the stock market then we want to get involved and win. It may be better not to play.
  • He gives some examples.
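The kurtosis point flagged above can be illustrated with a toy simulation (mine, not Taleb’s):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    x = stats.t.rvs(df=3, size=1250, random_state=rng)   # ~5 years of fat-tailed daily returns
    worst = np.argmax(np.abs(x))                         # the single most extreme day
    k_all = stats.kurtosis(x, fisher=False)
    k_rest = stats.kurtosis(np.delete(x, worst), fisher=False)
    print(f"sample kurtosis {k_all:.1f}; without the one biggest day {k_rest:.1f}")

A single day can carry much of the measured kurtosis, so the sample statistic tells you more about one memorable event than about the underlying distribution.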

Comments

Taleb is dismissive of economists who talk about Knightian uncertainty, which goes back to Keynes’ Treatise on Probability. Their corresponding story is that:

  • Fragile systems are vulnerable to ‘true uncertainty’
  • Fragile systems eventually fail
  • Practical numeric measures of risk ignore ‘true uncertainty’.
  • We should try to create systems that are robust to or exploit true uncertainty.
  • Rather than trying to be the best at playing the game, we should try to change the rules of the game or play a ‘higher’ game.
  • Keynes gives examples.

The difference is that Taleb implicitly supposes that financial systems etc. are stochastic, but have too much kurtosis for us to be able to estimate their parameters: rare events are extreme draws from a fixed stochastic process. Keynes (and Whitehead) suppose that it may be possible to approximate such systems by a stochastic model for a while, but that the rare events denote a change to a new model, so that – for example – there is no universal economic theory. Instead, we occasionally have new economics, calling for new stochastic models. Practically, there seems little to choose between them, so far.

From a scientific viewpoint, one can only assess definite stochastic models. Thus, as Keynes and Whitehead note, one can only say that a given model fitted the data up to a certain date, and then it didn’t. The notion that there is a true universal stochastic model is not provable scientifically, but neither is it falsifiable. Hence, according to Popper, one should not entertain it as a scientific view. This is possibly too harsh on Taleb, but the point is this:

Taleb’s explanation has pedagogic appeal, but this shouldn’t detract from an appreciation of alternative explanations based on non-stochastic uncertainty.

In particular:

  • Taleb (in this talk) seems to regard rare crises as ‘acts of fate’, whereas Keynes regards them as arising from misperceptions on the part of regulators and major ‘players’. This suggests that we might be able to ameliorate them.
  • Taleb implicitly uses the language of probability theory, as if this were rational. Yet his argument (like Keynes’) undermines the notion of probability as derived from rational decision theory.
      Not playing is better whenever there is Knightian uncertainty.
      Maybe we need to be able to talk about systems that thrive on uncertainty, in addition to convex systems.
  • Taleb also views the up-side as good fortune, whereas we might view it as an innovation, by whatever combination of luck, inspiration, understanding and hard work.

See also

On fat tails versus epochs.

Dave Marsay

Systemism: the alternative to individualism and holism

Mario Bunge Systemism: the alternative to individualism and holism Journal of Socio-Economics 29 (2000) 147–157

“Three radical worldviews and research approaches are salient in social studies: individualism, holism, and systemism.”

[Systemism] “is centered in the following postulates:
1. Everything, whether concrete or abstract, is a system or an actual or potential component of a system;
2. systems have systemic (emergent) features that their components lack, whence
3. all problems should be approached in a systemic rather than in a sectoral fashion;
4. all ideas should be put together into systems (theories); and
5. the testing of anything, whether idea or artifact, assumes the validity of other items, which are taken as benchmarks, at least for the time being.”

Thus systemism resembles Smuts’ Holism. Bunge uses the term ‘holism’ for what Smuts terms wholism: the notion that systems should be subservient to their ‘top’ level, the ‘whole’. This usage apart, Bunge appears to be saying something important. Like Smuts, he notes the systemic nature of mathematics, in distinction to those who note the tendency to apply mathematical formulae thoughtlessly, as in some notorious financial mathematics.

Much of the main body is taken up with the need for micro-macro analyses and the limitations of piecemeal approaches, something familiar to Smuts and Keynes. On the other hand he says: “I support the systems that benefit me, and sabotage those that hurt me.” without flagging up the limitations of such an approach in complex situations. He even suggests that an interdisciplinary subject such as biochemistry is nothing but the overlap of the two disciplines. If that were the case, it would be hard to grasp the importance of such subjects. I would take a Kantian view, in which bringing two disciplines into communion can yield more than the sum of the parts.

In general, Bunge’s arguments in favour of what he calls systemism and Smuts called holism seem sound, but they lack the insights into complexity and uncertainty of the original.

See also

Andy Denis’ response to Bunge adds some arguments in favour of Holism. Its main purpose, though, is to contradict Bunge’s assertion that laissez-faire is incompatible with systemism. It is argued that a belief in Adam Smith’s invisible hand could support laissez-faire. It is not clear what might constitute grounds for such a belief. (My own view is that even a government that sought to leverage the invisible hand would have a duty to monitor the workings of such a hand, and to take action should it fail, as in the economic crisis of 2007/8. It is not clear how politics might facilitate this.)

Also my complexity.

Dave Marsay

Quantum Minds

A New Scientist Cover Story (No. 2828, 3 Sept 2011) opines that:

‘The fuzziness and weird logic of the way particles behave applies surprisingly well to how humans think’. (banner, p34)

It starts:

‘The quantum world defies the rules of ordinary logic.’

The first two examples are the infamous two-slit experiment and an experiment by Tversky and Shafir supposedly showing violation of the ‘sure thing principle’. But do they?

Saving classical logic

According to George Boole (Laws of Thought), when a series of assumptions and applications of logic leads to a falsehood, I must abandon one of the assumptions or one of the rules of inference, but I can ‘save’ whichever one I am most wedded to. So to save ‘ordinary logic’ it suffices to identify a dodgy assumption.

Two-slits experiment

The article says of the two-slits experiment:

‘… the pattern you should get – ordinary physics and logic would suggest – should be ..’

There is a missing factor here: the classical (Bayesian) assumptions about ‘how probabilities work’. Thus I could save ‘ordinary logic’ by abandoning common-sense probability theory.
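
To make the missing assumption concrete: the common-sense rule adds the probabilities of the two routes, whereas quantum mechanics adds complex amplitudes and then squares, giving a cross (interference) term. A minimal sketch, with amplitudes invented for illustration rather than taken from the article:

```python
import cmath

a1 = 0.5 + 0.0j                      # amplitude via slit 1
a2 = 0.5 * cmath.exp(1j * cmath.pi)  # amplitude via slit 2, phase-shifted

classical = abs(a1) ** 2 + abs(a2) ** 2  # probabilities add: 0.5
quantum = abs(a1 + a2) ** 2              # amplitudes add first: ~0.0

print(classical, quantum)
```

The discrepancy is the interference term; what it undermines is the additivity assumption, not logic itself.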

Actually, there is a more obvious culprit. As Kant pointed out, the assumption that the world is composed of objects with attributes, having relationships with each other, belongs to common-sense physics, not logic. For example, two isolated individuals may behave like objects, but when they come into communion the whole may be more than the sum of the parts. Looking at the two-slit experiment this way, the stuff that we regard as a particle seems isolated, and hence object-like, until it ‘comes into communion with’ the apparatus, when the whole may be un-object-like; but then a new steady state ‘emerges’, which is object-like and which we regard as a particle. The experiment is telling us something about the nature of the communion. Prigogine has a mathematization of this.

One can therefore abandon the common-sense assumption that ‘a communion is nothing but the sum of objects’, saving classical logic.

Sure Thing Principle

An example is given (pg 36) that appears to violate Savage’s sure-thing principle and hence ‘classical logic’. But, as above, we might prefer to abandon our probability theory rather than our logic; and there are plenty of alternatives.

The sure-thing principle applies to ‘economic man’, who has some unusual values. For example, if he values a winter sun holiday at $500 and a skiing holiday at $500, then he ‘should’ be happy to pay $500 for a holiday in which he only finds out which it is when he gets there. The assumptions of classical economic man only seem to apply to people who have lots of spare money and are used to gambling with it. Perhaps the experimental subjects were different?

The details of the experiment as reported also repay attention. A gamble with an even chance of winning $200 or losing $100 is available. Experimental subjects all had a first gamble. In case A subjects were told they had won. In case B they were told they had lost. In case C they were not told. They were all invited to gamble again.

Most subjects (69%) wanted to gamble again in case A. This seems reasonable, as over the two gambles they were guaranteed a gain of at least $100. Fewer subjects (59%) wanted to gamble again in case B. This also seems reasonable, as they risked a $200 loss overall. The fewest subjects (36%) wanted to gamble again in case C. This seems to violate the sure-thing principle, which (according to the article) says that anyone who would gamble in both of the first two cases should gamble in the third. But from the figures above we can only deduce that – if they are representative – at least 28% (i.e. 100% - (100% - 69%) - (100% - 59%)) would gamble in both cases. Since 36% gambled in case C, the data does not imply that anyone would gamble in A and B but not in C.

If one chooses a person at random, the probability that they would gamble again in both cases A and B is between 28% and 100%. The convention in ‘classical’ probability theory is to split the difference (a kind of principle of indifference), yielding 64% (as in the article). A possible explanation for the dearth of such subjects is that they were not wealthy (so having non-linear utilities in the region of a few hundred dollars), and that those who could not afford to lose $100 had good uses in mind for $200, preferring a certain win of $200 to an evens chance of winning $400 or only $100. This seems reasonable.
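
For those who wish to check the arithmetic, a minimal sketch of the bound used above:

```python
p_gamble_after_win, p_gamble_after_loss, p_gamble_unknown = 0.69, 0.59, 0.36

# Frechet lower bound on the fraction who would gamble in both known cases:
# P(A and B) >= P(A) + P(B) - 1.
lower_bound_both = max(0.0, p_gamble_after_win + p_gamble_after_loss - 1.0)

print(f"at least {lower_bound_both:.0%} would gamble in both known cases")
print(f"observed in the unknown case: {p_gamble_unknown:.0%}; "
      f"consistent with the sure-thing principle: "
      f"{p_gamble_unknown >= lower_bound_both}")
```

So the aggregate figures alone cannot convict any individual subject of violating the principle.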

Others’ criticisms here. See also some notes on uncertainty and probability.

Dave Marsay

Finding the true meaning of risk

Interpretation as Risk

The New Scientist has an important article by Nicolas Bouleau, Issue 2818, 25 June 2011, pg 30.

Whether it’s a shape, a structure or a work of art, once you see a meaning in something, there’s no going back. The reasons why run deep.

… the assertion that a particular phenomenon is “random” is neither trivial nor obvious and, except in textbook cases, is the subject of debate.

This ..  irreversibility of interpretation … holds quite generally: once you perceive something in the world – a shape, structure or a meaning – you can’t go back. …

All this is crucial to truly understanding risk. The belief some people have that risks can be objectively measured means expunging their interpretative aspect, even though that aspect is an essential part of understanding risk. From the epistemic point of view, it is the meaning of the event that determines the risk. The probabilistic representation … is too simplistic.

Usually there isn’t enough information for such a model: we do not know the probabilities of rare events occurring since there will never be enough data, we do not have a full description of what can happen, and we do not know how to calculate the cost of that event occurring.

….

The bottom line – quite literally, sometimes – is that to really understand risk, we have no choice but to take account of the way people interpret events.

Comments

The conclusion seems sound, but

  • I am not sure that it is useful to imagine that anything really ‘is’ random or meaningful: these are in the eye of the beholder.
  • When abroad I often see things that appear random to me but which I believe to be meaningful to the locals.
  • The article is full of disparaging remarks about how ‘people’ make sense of things, without considering whether this is cultural or biological, for example, or what might be done to correct or compensate for it. A link to behavioural economics would be interesting.
  • The pièce de résistance is a pair of similar figures. The intention is that the first initially looks random, but that after looking at the second, in which words are picked out in colour, one sees words in the first figure too. The assertion is that ‘people’ cannot suppress this autonomous ‘sense making’. But some can.

Selecting for ‘Negative Capability’

To me, the significance is not so much about the nature of risk (which aligns with Keynes, for example) as about the reasons why people are blind to risk: once they ‘see’ how the economy (or whatever) works, they are unable to ‘see’ any other possibility. The implication seems to be that the blindness here is of the same kind as in the optical example. If so, maybe we should use the optical example (or other such tests) to select those with Keats’ ‘negative capability’ for roles that need to ‘see’ risk. But is it really so?

See also

Search my blog for uncertainty, risk or crisis.

Dave Marsay