Lo & Mueller’s Physics Envy

A.W. Lo and M.T. Mueller, WARNING: Physics Envy May Be Hazardous To Your Wealth!, 2010.

This puts forward the view that

“[T]he failure of quantitative models in economics is almost always the result of a mismatch between the type of uncertainty in effect and the methods used to manage it.”

This failure may have contributed to the crash of 2007/8. An MIT video presentation may be helpful.

Taxonomy

 “[Situations] can be classified along a continuous spectrum according to the type of uncertainty involved, with religion at one extreme (irreducible uncertainty), economics and psychology in the middle (partially reducible uncertainty) and mathematics and physics at the other extreme (certainty).”

The five levels are:

  1. Certainty.
  2. Probabilistic uncertainty.
    Objective Bayesian.
  3. Fully reducible uncertainty (see the sketch after this list).
    “This is risk with a degree of uncertainty … due to unknown probabilities for a fully enumerated set of outcomes that we presume are still completely known. … randomness can be rendered arbitrarily close to Level-2 [Objective Bayesian] uncertainty with sufficiently large amounts of data using the tools of statistical analysis.”
  4. Partially reducible uncertainty.
    “[T]here is a limit to what we can deduce about the underlying phenomena generating the data.”
  5. Irreducible uncertainty.
    “[A] state of total ignorance … that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or … thinking harder and smarter. [It] is beyond the reach of probabilistic reasoning, statistical inference, and any meaningful quantification. …”
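
To illustrate item 3, here is a minimal sketch of my own, in Python (the coin, its bias of 0.37 and the sample sizes are all illustrative assumptions, not from the paper): the set of outcomes is fully known, the probability is not, but simple frequency estimation drives the error towards zero as data accumulates, rendering the situation ‘arbitrarily close’ to level 2.

  import random

  random.seed(1)
  TRUE_P = 0.37  # the coin's bias: hidden from the observer (an assumed value)

  def estimate_bias(n_flips):
      """Frequency estimate of the bias from n_flips observed tosses."""
      heads = sum(random.random() < TRUE_P for _ in range(n_flips))
      return heads / n_flips

  for n in (10, 100, 10_000, 1_000_000):
      est = estimate_bias(n)
      print(f"n={n:>9}: estimate={est:.4f}  error={abs(est - TRUE_P):.4f}")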

Details

“Partially reducible uncertainty contains a component that can never be quantified, and irreducible uncertainty is the Knightian limit of unparametrizable randomness.”

“Examples [of level 4 uncertainty] include data-generating processes that exhibit:
(1)   stochastic or time-varying parameters that vary too frequently to be estimated accurately;
(2)   nonlinearities too complex to be captured by existing models, techniques, and datasets;
(3)   nonstationarities and non-ergodicities that render useless the Law of Large Numbers, Central Limit Theorem, and other methods of statistical inference and approximation; and
(4)   the dependence on relevant but unknown and unknowable conditioning information.
Although the laws of probability still operate at this level, there is a non-trivial degree of uncertainty regarding the underlying structures … we are in a casino that may or may not be honest, and the rules tend to change from time to time without notice. In this situation, classical statistics may not be as useful as a Bayesian perspective … .”
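
Item (3) of this list can be illustrated with a toy sketch of my own (the random-walk drift and the scales are arbitrary assumptions): when the ‘true’ mean itself drifts, the running sample mean never settles down, so the Law of Large Numbers gives no purchase.

  import random

  random.seed(2)

  def running_means(n_steps, drift_sd=0.05, noise_sd=1.0):
      """Running sample mean of a process whose true mean follows a random walk."""
      mu, total, means = 0.0, 0.0, []
      for t in range(1, n_steps + 1):
          mu += random.gauss(0.0, drift_sd)    # the 'true' mean drifts: non-stationary
          total += random.gauss(mu, noise_sd)  # one observation at time t
          means.append(total / t)
      return means

  m = running_means(100_000)
  for t in (1_000, 10_000, 100_000):
      print(f"running mean at t={t:>6}: {m[t - 1]:+.3f}")
  # In the stationary case these would converge; here they wander indefinitely.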

“[I]rreducible uncertainty seems more likely to be the exception rather than the rule. … essentially an admission of intellectual defeat …”

“[W]e can observe the full range of uncertainty from Level 5 to Level 1 just by varying the information available to the observer.”

Finance

“[M]ean-reversion strategies are adding to the demand for losers and increasing the supply of winners, thereby stabilizing supply/demand imbalances.

    However, on occasion information affecting all stocks in the same direction arises, temporarily replacing mean reversion with momentum … . In such scenarios [they] will suffer large losses … and only [afterwards] will [they] begin to profit again.”
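
The mechanism in this quote can be sketched with a toy contrarian book (my construction, not the authors’ model; all the coefficients are invented for illustration): go long yesterday’s losers and short yesterday’s winners. Under mean-reverting idiosyncratic noise the book earns steadily; on days when a common shock moves every stock the same way and persists, the same positions lose heavily.

  import random

  random.seed(3)
  N, T = 50, 250            # stocks, trading days (illustrative sizes)
  REV = -0.3                # daily mean-reversion coefficient (assumed)
  SHOCK_DAYS = {100, 101}   # a brief market-wide momentum episode

  prev = [random.gauss(0.0, 0.01) for _ in range(N)]
  for t in range(T):
      if t in SHOCK_DAYS:
          # A common shock moves all stocks the same way and persists: momentum.
          rets = [0.05 + 0.3 * prev[i] + random.gauss(0.0, 0.01) for i in range(N)]
      else:
          rets = [REV * prev[i] + random.gauss(0.0, 0.01) for i in range(N)]
      # Contrarian book: long yesterday's losers, short yesterday's winners.
      pnl = sum(-prev[i] * rets[i] for i in range(N))
      if t % 50 == 0 or t in SHOCK_DAYS:
          print(f"day {t:>3}: P&L = {pnl:+.5f}")
      prev = rets

The steady profits come precisely from supplying liquidity against reversion; when momentum takes over, those same positions are exactly wrong, as the quote describes.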

“[S]tatistical changes in regime, estimation errors, erroneous input data, institutional rigidities such as legal and regulatory constraints, and unanticipated market shocks are possible, i.e., Level-4 and Level-5 uncertainty.”

The paper also identifies a gap between quants’ and senior managers’ understanding of uncertainty, so that some uncertainties may go unrecognized by anyone, and hence be un-managed and liable to ‘blow up’. For example, if a strategy has unrecognized contingencies then success over a period may lead to increased investment in that strategy, putting downward pressure on returns.

Comments

The paper positions mathematics as only good for dealing with certainty and says that only religion can deal with irreducible uncertainty. Even probability theory mostly deals with ‘probabilistic determinism’: one knows for sure what the best bet is, even if one doesn’t know how a particular gamble will turn out. However, the mathematics of Whitehead and Keynes seems to me to provide a language and theory that covers the five levels to better effect, for example in helping to distinguish between stochastic randomness and haphazardness, and between short-, mid- and long-run effects. Without these distinctions the paper is left with no handle on irreducible uncertainty.

The paper provides some relevant examples, which makes it all the more unfortunate that it has no theoretical underpinning.

In one example, samples are taken at random from one of two oscillators. The paper shows how this is amenable to conventional techniques when the switching is either slow enough or fast enough, but not when switching occurs at a rate similar to that of observation. This could usefully be related to the observation that classical techniques are often good in the short term (like forecasting tomorrow’s weather) and can be good in the long run (like forecasting the climate for next year), but may be less good over intermediate periods (like the weather next week). As in 2007, one wants to know when the short run may be about to run out. A toy version of this effect is sketched below.
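
Here is that toy version (mine, not the paper’s two-oscillator set-up: I substitute a hidden two-state Gaussian regime for brevity). A short-window mean plays the role of the ‘weather’ forecast and the long-run mean that of the ‘climate’ forecast; all the rates and sizes are assumptions.

  import random

  random.seed(4)

  def prediction_errors(p_switch, T=100_000, window=20):
      """One-step-ahead squared error for a short-window mean ('weather')
      versus the long-run mean of zero ('climate'), under a hidden
      two-state regime (means +1/-1, unit noise) flipping with p_switch."""
      mu, recent = 1.0, []
      err_local = err_global = 0.0
      for _ in range(T):
          if random.random() < p_switch:
              mu = -mu                          # regime flip
          y = random.gauss(mu, 1.0)
          if len(recent) == window:
              local = sum(recent) / window
              err_local += (y - local) ** 2
              err_global += y ** 2              # long-run mean is 0 by symmetry
              recent.pop(0)
          recent.append(y)
      n = T - window
      return err_local / n, err_global / n

  for p in (0.001, 0.05, 0.5):
      loc, glo = prediction_errors(p)
      print(f"flip prob {p:<5}: short-window MSE = {loc:.2f}, long-run MSE = {glo:.2f}")

When switching is slow the short window approaches the within-regime noise floor; when it is very fast the long-run mean does about as well as anything; when the regime length is comparable to the window, the short window loses its edge and neither simple tool gets near the floor.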

Another example considers the growth of mean-reversion strategies. Again, one can go further than the paper. Even without financial markets, good firms will prosper and bad ones fail. If market-makers pay attention to fundamentals or other indicators then they may perform a useful function in encouraging good firms and discouraging bad ones. But if they simply react to the numbers they would not seem to be adding any value, and may be free-riding. This not only puts downward pressure on returns but corrupts the normal ‘market discipline’.

I have previously noted that decision-making needs to reflect the type of uncertainty and in advising non-mathematicians I have tried using a framework of levels similar to the one here as a way of organizing findings. But it seems to me that using a check-list in lieu of a sound understanding of the issues may not be effective, unless economics is simpler than Keynes supposes.

The paper’s definitions of levels roughly correspond to:

  1. next_state = f( state ), for some known function, f( )
  2. P( next_state | state ) = p( next_state , state ) for some known probability distribution, p( , )
  3. P( next_state | state ) = p( next_state , state ) for some unknown probability distribution, p( , ), that it is expected can be estimated.
  4. P( next_state | state ) = p( next_state , state ) for some unknown probability distribution, p( , ), that it is expected can only partially be estimated.
  5.  Total uncertainty.

 A common type of uncertainty intermediate between 2 and 3 is:

  • Level 2.5: P( next_state | state ) = p( next_state , state, Θ) for some known probability distribution, p( , , ) and unknown parameter, Θ, that it is expected can be estimated.
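
A minimal sketch of this level 2.5 case, in Python (the ‘true’ Θ values are assumptions, hidden from the analyst): the Gaussian family is taken as known, and only Θ = (μ, σ) has to be estimated, with the estimates tightening as data accumulates.

  import random, statistics

  random.seed(5)
  TRUE_MU, TRUE_SIGMA = 2.0, 0.5   # the unknown parameter Θ (assumed values)

  def fit_gaussian(n):
      """Level 2.5: the Gaussian family is known; only Θ = (mu, sigma) is estimated."""
      xs = [random.gauss(TRUE_MU, TRUE_SIGMA) for _ in range(n)]
      return statistics.fmean(xs), statistics.stdev(xs)

  for n in (20, 2_000, 200_000):
      mu_hat, sigma_hat = fit_gaussian(n)
      print(f"n={n:>6}: mu_hat={mu_hat:.3f}, sigma_hat={sigma_hat:.3f}")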

For example, if (as is common in physics) one really does know that a distribution is Gaussian, but not its parameters, one has much more uncertainty than level 2 but a lot less than level 3. Thus, as the paper says, the levels are marks along a continuum, not discrete steps. Now, the paper’s description of level 4 uncertainty refers to a casino that may be dishonest. But suppose that for a particular game there are only a limited number of ways that the casino could cheat. Then we have only level 2.5 uncertainty. Now suppose that the casino does not normally cheat but could cheat when we, a high-stakes player, play, and that this is a ‘game’ in the sense of game theory: the casino’s winning strategies depend on ours. Then we have:

  • P( next_state | state ) = p( next_state , state, Θ ) for some known probability distribution, p( , , ) and parameter, Θ, that may respond to our strategy.

This resembles the discussion of mean-reversion strategies above. Where does this fit into the scale? It has elements of level 5, but is in some respects more constrained than level 3. It seems to me that there is no linear ‘scale’; instead there is a partially ordered set of types that are best understood in terms of the empirical evidence we have or expect to have and the assumptions that we make. Perhaps one ought to consider the actual uncertainty present in any domain and adapt any ‘scale’ to it, rather than trying to use the scale as a check-list or ‘frame’ for understanding. More immediately, we can treat the check-list as simply an illustration of Keynes’ notion that the type of uncertainty matters. A toy sketch of such a responsive casino is given below.
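
Here is the promised toy sketch of a responsive casino (entirely my invention: the cheat rule, stakes and round counts are illustrative assumptions). An edge estimated at low stakes says nothing about the odds we will face at high stakes, because Θ responds to our strategy.

  import random

  random.seed(6)

  def mean_pnl(stake, n_rounds=50_000):
      """A casino whose parameter responds to our strategy: the nominally
      fair 50/50 game is skewed against large players (an invented rule)."""
      p_win = 0.5 if stake < 100 else 0.4   # Theta depends on our stake
      wins = sum(random.random() < p_win for _ in range(n_rounds))
      return (2 * wins - n_rounds) * stake / n_rounds  # mean P&L per round

  print("edge estimated at stake 1:   ", round(mean_pnl(1), 4))     # looks fair
  print("edge realised at stake 1000: ", round(mean_pnl(1_000), 1)) # it is not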

See Also

Ellsberg, who identifies a different type of uncertainty, intermediate between levels 2 and 3. Others TBD.

 Dave Marsay
