Bretton Woods: Modelling and Economics

The Institute for New Economic Thinking has a video on modelling and economics. It is considerably more interesting than it might have been before the financial crisis that began in 2007. I make a few points from a mathematical perspective.

  • There is a tendency to apply a ‘canned’ model, varying a few parameters, rather than to engage in genuine modelling. The difference makes a difference. In the run-up to the crisis of 2007 onwards there was widespread agreement on key aspects of economic theory, and some fixed models came to be treated as ‘fact’. In this sense, modelling had stopped. So maybe proper modelling in economics would be a useful innovation? 😉
  • Milton Friedman distinguishes between models that predict well (short-term) and those that have ‘realistic’ micro-features. One should also be concerned about the typical behaviours of the model.
  • One particularly needs, as Keynes did, to distinguish between short-run and long-run models.
  • Models that are judged solely by their ability to predict short-run events will tend to overlook significant events (e.g. crises) that occur over a longer time-frame, and to fall into the habit of extrapolating from current trends rather than seeking to model potential changes to the status quo (see the sketch after this list).
  • Again, as Keynes pointed out, in complex situations one often cannot predict the long-run future, but only anticipate potential failure modes (scenarios).
  • A single model is at best a possible model. There will always be alternatives (scenarios). One at least needs a representative set of credible models if one is to rely on them.
  • As Keynes said, there is a reflexive relationship between one’s long-run model and what actually happens. Crises mitigated are less likely to happen. A belief in the inevitable stability of the status quo increases the likelihood of a failure.
  • Generally, as Keynes said, the economic system works because people expect it to work. We are part of the system to be modelled.
  • It is better for a model to be imprecise but reliable than to be precisely wrong. This particularly applies to assumptions about human behaviour.
  • It may be better for a model to have some challenging gaps than to fill those gaps with myths.
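
As a minimal sketch of the extrapolation point above (all numbers are made up for illustration): a trend fitted to data from a stable period predicts well in the short run, but says nothing about, and is then badly wrong after, a change to the status quo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical series: steady growth for 40 periods, then a structural break.
t = np.arange(50.0)
y = 100 + 2 * t + rng.normal(0, 1, 50)
y[40:] -= 30  # a crisis: the old model stops applying

# A 'canned' short-run model: fit a trend to the pre-break data and extrapolate.
slope, intercept = np.polyfit(t[:40], y[:40], 1)
forecast = slope * t[40:] + intercept

# The forecast error is large and one-sided after the break.
print("mean error after the break:", (y[40:] - forecast).mean())  # about -30
```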

Part 2, ‘Progress in Economics’, gives the impression that understanding crises is what is most needed, whereas much of the modelling video used language that seems more appropriate to adding epicycles to our models of the new status quo – if we ever have one.

See Also

Reasoning in a complex, dynamic world; Which mathematics of uncertainty?; Keynes’ General Theory

Dave Marsay

How to live in a world that we don’t understand, and enjoy it (Taleb)

N. Taleb, How to live in a world that we don’t understand, and enjoy it, Goldstone Lecture 2011 (U Penn, Wharton)

Notes from the talk

Taleb returns to his alma mater. This talk supersedes his previous work (e.g. Black Swan). His main points are:

  • We don’t have a word for the opposite of fragile.
      Fragile systems have small probability of huge negative payoff
      Robust systems have consistent payoffs
      ? has a small probability of a large pay-off
  • Fragile systems eventually fail. ? systems eventually come good.
  • Financial statistics have a kurtosis that cannot in practice be measured, and tend to underestimate risk hugely.
      Often more than 80% of kurtosis over a few years is contributed by a single (memorable) day.
  • We should try to create ? systems.
      He calls them convex systems, where the expected return exceeds the return given the expected environment (in effect, Jensen’s inequality; see the sketch after this list).
      Fragile systems are concave, where the expected return is less than the return from the expected situation.
      He also talks about ‘creating optionality’.
  • He notes an ‘action bias’: whenever there is a game, like the stock market, we want to get involved and win. It may be better not to play.
  • He gives some examples.
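
Taleb’s convex/concave distinction is essentially Jensen’s inequality: for a convex payoff f, E[f(X)] ≥ f(E[X]). A minimal sketch with assumed option-like payoffs:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 100_000)  # an uncertain environment, expected value 0

convex = lambda s: np.maximum(s, 0)     # optionality: capped downside, open upside
concave = lambda s: -np.maximum(-s, 0)  # fragility: capped upside, open downside

# Convex: the expected return exceeds the return given the expected environment.
print(convex(x).mean(), ">", convex(x.mean()))    # ~0.40 > 0.0
# Concave: the expected return falls short of it.
print(concave(x).mean(), "<", concave(x.mean()))  # ~-0.40 < 0.0
```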

Comments

Taleb is dismissive of economists who talk about Knightian uncertainty, which goes back to Keynes’ Treatise on Probability. Their corresponding story is that:

  • Fragile systems are vulnerable to ‘true uncertainty’
  • Fragile systems eventually fail
  • Practical numeric measures of risk ignore ‘true uncertainty’.
  • We should try to create systems that are robust to or exploit true uncertainty.
  • Rather than trying to be the best at playing the game, we should try to change the rules of the game or play a ‘higher’ game.
  • Keynes gives examples.

The difference is that Taleb implicitly supposes that financial systems and the like are stochastic, but have too much kurtosis for us to be able to estimate their parameters. Rare events are regarded as extreme draws from the same stochastic process. Keynes (and Whitehead) suppose that it may be possible to approximate such systems by a stochastic model for a while, but that the rare events denote a change to a new model, so that – for example – there is no universal economic theory. Instead, we occasionally have new economics, calling for new stochastic models. Practically, there seems little to choose between them, so far.
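
One can make the contrast concrete. The sketch below (all parameters assumed for illustration) compares a single fat-tailed model with a Gaussian ‘epoch’ punctuated by a one-day change of regime: the two series look similar in the bulk, and in each the sample kurtosis is, as Taleb observes, dominated by a single memorable day.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2)
n = 750  # roughly three years of daily returns

# Taleb-style: one universal stochastic model with fat tails (Student-t, 3 dof).
fat_tail = rng.standard_t(3, n)

# Keynes/Whitehead-style: a well-behaved epoch, then a one-day regime change.
epochs = rng.normal(0, 1, n)
epochs[500] = -10.0  # the 'new economics' arrives as a single extreme day

for name, r in (("fat-tail", fat_tail), ("epochs  ", epochs)):
    worst = np.argmax(np.abs(r))
    print(name, "kurtosis:", round(kurtosis(r), 1),
          "| without the worst day:", round(kurtosis(np.delete(r, worst)), 1))
```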

From a scientific viewpoint, one can only assess definite stochastic models. Thus, as Keynes and Whitehead note, one can only say that a given model fitted the data up to a certain date, and then it didn’t. The notion that there is a true universal stochastic model is not provable scientifically, but neither is it falsifiable. Hence, according to Popper, one should not entertain it as a view. This is possibly too harsh on Taleb, but the point is this:

Taleb’s explanation has pedagogic appeal, but this shouldn’t detract from an appreciation of alternative explanations based on non-stochastic uncertainty.

In particular:

  • Taleb (in this talk) seems to regard rare crises as ‘acts of fate’, whereas Keynes regards them as arising from misperceptions on the part of regulators and major ‘players’. This suggests that we might be able to ameliorate them.
  • Taleb implicitly uses the language of probability theory, as if this were rational. Yet his argument (like Keynes’) undermines the notion of probability as derived from rational decision theory.
      Not playing is better whenever there is Knightian uncertainty.
      Maybe we need to be able to talk about systems that thrive on uncertainty, in addition to convex systems.
  • Taleb also views the up-side as good fortune, whereas we might view it as an innovation, achieved by whatever combination of luck, inspiration, understanding and hard work.

See also

On fat tails versus epochs.

Dave Marsay

The Logic of Scientific Discovery

K.R. Popper, The Logic of Scientific Discovery, Routledge, 1980. A review. (The last edition has some useful clarifications.) See new location.

Dave Marsay

Systemism: the alternative to individualism and holism

Mario Bunge, Systemism: the alternative to individualism and holism, Journal of Socio-Economics 29 (2000), 147–157.

“Three radical worldviews and research approaches are salient in social studies: individualism, holism, and systemism.”

[Systemism] “is centered in the following postulates:
1. Everything, whether concrete or abstract, is a system or an actual or potential component of a system;
2. systems have systemic (emergent) features that their components lack, whence
3. all problems should be approached in a systemic rather than in a sectoral fashion;
4. all ideas should be put together into systems (theories); and
5. the testing of anything, whether idea or artifact, assumes the validity of other items, which are taken as benchmarks, at least for the time being.”

Thus systemism resembles Smuts’ Holism. Bunge uses the term ‘holism’ for what Smuts terms wholism: the notion that systems should be subservient to their ‘top’ level, the ‘whole’. This usage apart, Bunge appears to be saying something important. Like Smuts, he notes the systemic nature of mathematics, in distinction to those who note the tendency to apply mathematical formulae thoughtlessly, as in some notorious financial mathematics.

Much of the main body is taken up with the need for micro-macro analyses and the limitations of piecemeal approaches, something familiar to Smuts and Keynes. On the other hand he says “I support the systems that benefit me, and sabotage those that hurt me”, without flagging up the limitations of such an approach in complex situations. He even suggests that an interdisciplinary subject such as biochemistry is nothing but the overlap of the two disciplines. If that were so, it would be hard to see why such subjects matter. I would take a Kantian view, in which bringing two disciplines into communion can be more than the sum of the parts.

In general, Bunge’s arguments in favour of what he calls systemism and Smuts called holism seem sound, but they lack the insights into complexity and uncertainty of the original.

See also

Andy Denis’ response to Bunge adds some arguments in favour of holism. Its main purpose, though, is to contradict Bunge’s assertion that laissez-faire is incompatible with systemism. It is argued that a belief in Adam Smith’s invisible hand could support laissez-faire. It is not clear what might constitute grounds for such a belief. (My own view is that even a government that sought to leverage the invisible hand would have a duty to monitor the workings of such a hand, and to take action should it fail, as in the economic crisis of 2007/8. It is not clear how politics might facilitate this.)

Also my notes on complexity.

Dave Marsay

Quantum Minds

A New Scientist Cover Story (No. 2828, 3 Sept 2011) opines that:

‘The fuzziness and weird logic of the way particles behave applies surprisingly well to how humans think.’ (banner, p. 34)

It starts:

‘The quantum world defies the rules of ordinary logic.’

The first two examples are the infamous two-slit experiment and an experiment by Tversky and Shafir supposedly showing violation of the ‘sure thing principle’. But do they?

Saving classical logic

According to George Boole (The Laws of Thought), when a series of assumptions and applications of logic leads to a falsehood, I must abandon one of the assumptions or one of the rules of inference, but I can ‘save’ whichever one I am most wedded to. So, to save ‘ordinary logic’, it suffices to identify a dodgy assumption.

Two-slit experiment

The article says of the two-slit experiment:

‘… the pattern you should get – ordinary physics and logic would suggest – should be …’

There is a missing factor here: the classical (Bayesian) assumptions about ‘how probabilities work’. Thus I could save ‘ordinary logic’ by abandoning common-sense probability theory.

Actually, there is a more obvious culprit. As Kant pointed out, the assumption that the world is composed of objects with attributes, having relationships with each other, belongs to common-sense physics, not logic. For example, two isolated individuals may behave like objects, but when they come into communion the sum may be more than the sum of the parts. Looking at the two-slit experiment this way, the stuff that we regard as a particle seems isolated, and hence object-like, until it ‘comes into communion with’ the apparatus, when the whole may be un-object-like; but then a new steady state ‘emerges’, which is object-like and which we regard as a particle. The experiment is telling us something about the nature of the communion. Prigogine has a mathematization of this.

Thus one can abandon the common-sense assumption that ‘a communion is nothing but the sum of objects’, thus saving classical logic.
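
Either way, a worked version of the arithmetic (with assumed amplitudes) shows what common-sense probability misses: it adds the probabilities of the two routes, whereas quantum mechanics adds complex amplitudes first, so the ‘communion’ of particle and apparatus shows interference.

```python
import numpy as np

# Assumed complex amplitudes for reaching one point on the screen via each slit.
a1 = 0.5 * np.exp(1j * 0.0)    # via slit 1
a2 = 0.5 * np.exp(1j * np.pi)  # via slit 2, half a wavelength out of phase

p1, p2 = abs(a1) ** 2, abs(a2) ** 2

print("common-sense probability:", p1 + p2)             # 0.5: routes treated as exclusive
print("observed (amplitudes add):", abs(a1 + a2) ** 2)  # ~0: a dark fringe
```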

Sure Thing Principle

An example is given (p. 36) that appears to violate Savage’s sure-thing principle and hence ‘classical logic’. But, as above, we might prefer to abandon our probability theory rather than our logic. And there are plenty of alternatives.

The sure-thing principle applies to ‘economic man’, who has some unusual values. For example, if he values a winter sun holiday at $500 and a skiing holiday at $500, then he ‘should’ be happy to pay $500 for a holiday in which he only finds out which it is when he gets there. The assumptions of classical economic man only seem to apply to people who have lots of spare money and are used to gambling with it. Perhaps the experimental subjects were different?

The details of the experiment as reported also repay attention. A gamble with an even chance of winning $200 or losing $100 is available. All the subjects took a first gamble. In case A subjects were told they had won. In case B they were told they had lost. In case C they were not told. All were then invited to gamble again.

Most subjects (69%) wanted to gamble again in case A. This seems reasonable, as over the two gambles they were guaranteed a gain of at least $100. Fewer subjects (59%) wanted to gamble again in case B. This also seems reasonable, as they risked a $200 loss overall. The fewest subjects (36%) wanted to gamble again in case C. This seems to violate the sure-thing principle, which (according to the article) says that anyone who gambles in both of the first two cases should gamble in the third. But from the figures above we can only deduce that – if they are representative – at least 28% (i.e. 69% + 59% − 100%) would gamble in both cases. Since 36% gambled in case C, the data does not imply that anyone would gamble in cases A and B but not in C.
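
The 28% figure is the inclusion-exclusion (Fréchet) lower bound on the overlap. A quick check of the arithmetic, using the article’s figures:

```python
p_a, p_b, p_c = 0.69, 0.59, 0.36  # fractions gambling again in cases A, B, C

# Frechet bounds on the fraction who would gamble again in BOTH cases A and B.
lower = max(0.0, p_a + p_b - 1.0)  # 0.28
upper = min(p_a, p_b)              # 0.59

print(f"P(gamble in A and B) lies in [{lower:.2f}, {upper:.2f}]")
# The sure-thing principle only forces this overlap to gamble in case C,
# so the data contradict it only if the lower bound exceeds p_c.
print("contradiction forced by the data:", lower > p_c)  # False: 0.28 <= 0.36
```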

If one chooses a person at random, the data only constrain the probability that they gambled again in both cases A and B to lie between 28% and 59%. If one further assumes that not knowing the outcome is equivalent to averaging over the two known outcomes (a kind of principle of indifference), classical probability theory predicts (69% + 59%)/2 = 64% for case C (as in the article). A possible explanation for the shortfall is that the subjects were not wealthy (so having non-linear utilities in the region of $100s) and that those who couldn’t afford to lose $100 had good uses in mind for $200, preferring a certain win of $200 to an evens chance of winning $400 or only $100. This seems reasonable.
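
The non-linear utility story can be checked with a small example. Assuming a risk-averse utility u(x) = −1/(w + x) over spare wealth w plus winnings (both the functional form and w = $100 are illustrative assumptions, not from the article), a certain $200 beats the evens gamble:

```python
wealth = 100.0                     # assumed spare cash: losses bite
u = lambda x: -1.0 / (wealth + x)  # a simple risk-averse (CRRA-type) utility

keep = u(200)                         # bank the certain $200
gamble = 0.5 * u(400) + 0.5 * u(100)  # evens chance of $400 or only $100
print("prefer the certain $200:", keep > gamble)  # True
```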

Others’ criticisms here. See also some notes on uncertainty and probability.

Dave Marsay