Friedman et al.’s Risky Curves

Daniel Friedman, R. Mark Isaac, Duncan James and Shyam Sunder, Risky Curves: On the Empirical Failure of Expected Utility, Routledge, 2014.

1 The challenge of understanding choice under risk

It would … be comforting to have a well-grounded theory that organizes our observations, guides our decisions, and predicts what others might do in this uncertain world.

We … argue that [Expected Utility Theory] (and its cousins) fail to offer useful predictions as to what actual people end up doing.

Under the received theory, it is considered scientifically useful to model choices under risk (or uncertainty) as maximizing the expectation of some curved function of wealth, income, or other outcomes. Indeed, many social scientists have the impression that by applying some elicitation instrument to collect data, a researcher can estimate some permanent aspect of an individual’s attitude or personality (e.g. a coefficient of risk aversion) that governs the individual’s choice behaviour.

[We] will be careful to distinguish the possibility-of-harm meaning of risk from the dispersion meaning.

[To] deserve attention, a scientific theory must be able to predict and explain better than known alternatives.

The problem is that the estimated parameters, e.g., risk-aversion coefficients, exhibit remarkably little stability outside the context in which they are fitted. Their power to predict out-of-sample is in the poor to non-existent range, and we have seen no convincing victories over naïve alternatives. … EUT and its generalizations have provided surprisingly little insight into economic phenomena such as securities markets, insurance, gambling, or business cycles.

2 Historical review of research through 1960

This focusses on Bernoulli and on von Neumann and Morgenstern (VNM). There is also a discussion of C. Jackson Grayson’s finding that ‘attitude to risk’ is context-dependent, in a way that will be familiar to anyone who has worked with decision-makers (DMs).

3 Measuring individual risk preferences

This discusses various ways of attempting to elicit attitudes to risk, which yield wildly inconsistent results. Lack of numeracy seems the best – but far from adequate – explanation for the observations. That is, numerate DMs tend to be risk-neutral (possibly because they regard maximizing expected payoff as a reasonable and relatively easily computed approach).
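
To make this concrete, here is a minimal sketch (the lottery and the curved utility function are my own illustrative choices, not the book’s) of how a risk-neutral DM and a DM with a concave Bernoulli function can rank the same pair of lotteries differently:

```python
# Illustrative only: risk-neutral choice (maximize expected payoff)
# versus choice under a concave "Bernoulli" utility function.
import math

def expected_value(lottery):
    """Expected payoff of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u):
    """Expected utility under a (possibly curved) utility function u."""
    return sum(p * u(x) for p, x in lottery)

safe = [(1.0, 100)]              # 100 for sure
gamble = [(0.5, 0), (0.5, 250)]  # higher expected payoff, more dispersion

log_u = lambda x: math.log(x + 1)  # a concave (risk-averse) utility

# A risk-neutral DM picks the gamble: expected payoffs 100 vs 125.
print(expected_value(safe), expected_value(gamble))
# A log-utility DM picks the safe option: about 4.62 vs 2.76.
print(expected_utility(safe, log_u), expected_utility(gamble, log_u))
```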

4 Aggregate-level evidence from the field

It is noted that where deliberate decisions involve calculations, engineers and others do not seek to maximize expected utility, but more often seek to minimize the probability of failure. In fields such as gambling, where decisions are typically not deliberate, trying to fit observed DM behaviours to variations on EUT leads to “absurd” conclusions.

Particular puzzles are ‘interest parity’ and ‘equity premium’.

  • According to the ‘uncovered interest parity’ dogma, exchange rates should adjust to compensate for differences in interest rates, but the puzzle is that they tend to go in the opposite direction.
  • The mainstream CAPM (Capital Asset Pricing Model) assumes a fairly constant ‘market risk premium’. The puzzle is that the premium is far from fixed and is often much larger than theory suggests, sometimes by an order of magnitude.

The book notes the lack of success in explaining these, and offers no explanation other than:

average ex-post returns are highly imperfect estimates of expected returns.

So far, each proposed explanation sticks to the idea of aversion to dispersion risk, and none considers questioning the theoretical or empirical underpinnings of that assumption.

5 What are risk preferences?

EUT is likened to the idea of phlogiston, but no adequate replacement has yet been found, even when attention is restricted to explicit lotteries with given probabilities. The best predictor so far is minimising expected loss (i.e., valuing gains at zero and losses at nominal value), sometimes called ‘minimising the lower tail’.

By Occam’s Razor, anything more complicated requires careful justification.
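
For concreteness, here is a minimal sketch of that criterion as I read it (the lotteries are my own illustrative numbers):

```python
# The 'minimise expected loss' criterion: gains are valued at zero,
# losses at nominal value. Lotteries are (probability, payoff) pairs.

def expected_loss(lottery):
    """Expected loss, counting gains as 0 and losses at face value."""
    return sum(p * max(-x, 0.0) for p, x in lottery)

def choose_min_expected_loss(lotteries):
    """Pick the lottery with the smallest expected loss ('lower tail')."""
    return min(lotteries, key=expected_loss)

a = [(0.9, 50), (0.1, -200)]  # mean 25, but expected loss 20
b = [(0.5, -10), (0.5, 30)]   # mean 10, expected loss only 5
print(choose_min_expected_loss([a, b]))  # -> b
```

Note that the criterion can prefer a lottery with the lower mean, which is what distinguishes it from risk-neutral expected-payoff maximization.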

6 Risky opportunities

[Insurance] simplifies one’s life by reducing the number, diversity, and cost of contingency plans, and indirectly expands the contingency set.

The difference between net and gross gain is discussed, as when income is taxed.

In all these cases, an uninformed outsider – one who observes only the gross payoffs and casually assumes a linear net payoff function – might be tempted to assume a risk-averse [decision maker]. An informed observer, who sees the varying net payoff functions, will be able to use that variability to correctly predict how revealed risk aversion will depend on context. That observer will avoid the specification error of attributing context dependence to an unstable concave Bernoulli function [i.e., unstable attitude to risk].

Friedman and Savage (1948) motivated their famous Bernoulli function with a vague story about the possibility of the DM moving up a rung on the social ladder.

We retain the neoclassical economists’ standard operating procedure of representing choice as the solution of a constrained optimisation problem. [But we] simply assume that [the utility function] is linear. More sophisticated opportunity sets can incorporate embedded real options and the relationship … between gross and net payoffs.

We can’t claim that it will be able to explain all observed behaviour, but it seems to us a way to make nontrivial progress.

7 Possible ways forward

This discusses a variety of ideas, some inspired by evolution, learning and neuro-imaging. It notes how apparently irrelevant factors can alter the set of ‘feasible strategies’ and hence the ‘opportunity set’. Overall, though …

Our position is similar to those of classical physicists circa 1900 who had no conception of relativity or quantum mechanics but were acutely aware of fatal empirical gaps in contemporary theory. Or … the chemist who perceived the empirical folly of phlogiston but had no concept of molecular reactions.

Comments

Criticisms

The book is critical of Bernoulli, von Neumann and Morgenstern and other ‘mathematical’ models. But:

  • Bernoulli regarded his formulation only as some kind of average, and himself pointed out exceptions – some of which the book uses to criticise him.
  • Von Neumann and Morgenstern give a mathematical deduction of their theory from some explicit assumptions. The book’s supposed counter-examples appear (to me) to violate these assumptions.

The book is also critical of the non-numerate for not maximizing expected utility. But in so doing it seems to confuse ‘objective’ and subjective expectations. It seems to me that the subjective expectations of the non-numerate can be quite different from what they ‘should be’, and it is still possible that they maximize their subjective expectation (although I do think they take account of their uncertainty as well).

Aim of economics

The challenge of economics is to identify a precise, unconditional and well-grounded theory, and many extant theories are criticised for falling short. But – from a mathematical perspective – in the light of Whitehead, Keynes, Russell and Turing, why should we think that such a theory is possible, even in principle? Worse, if no such reliable predictive theory is possible, then according to the authors no theory ‘deserves attention’. Does this mean that in the run-up to the crises around 2008 economics was correct to stay silent about the dangers? (For example, that Taleb’s work did not ‘deserve attention’?)

Occam’s razor

The book invokes Occam’s razor. Yet:

  • There is no evidence that economies really are simple, so simplicity may not be a good guide.
  • There is no evidence that our subjective notions of simplicity are at all reliable, except in very simple cases.
  • It seems to me that some regard Bernoulli’s version of EUT as simpler than the book’s proposal (minimising expected loss), with no obvious way to adjudicate.
  • It seems to me that Bernoulli’s own view, a context-dependent compromise between the above two extremes, is more reasonable than either alone, albeit not simple.

Scope and implicit assumptions

The book considers EUT and its variants as descriptions of how people actually make decisions, and finds them wanting. It does not explicitly consider how people ‘should’ make decisions, but I have the impression that it regards maximizing expected net payoffs as sensible. This implies that there is such a thing as ‘a neutral attitude to risk’, and so presumably numerate people could be taught to comply; the evidence cited suggests that they have been. But if (as in C. Jackson Grayson’s findings) there is more to it than that, then perhaps people could be taught that too. Either way, the way people make decisions could change. Moreover, if we are talking about financial decisions, then since wealth is highly concentrated, it could be that only a few people would need to learn. Or maybe they already have theories that are better than those the book considers? To me the more important question is: how should people make decisions? Or rather, how should we assess the ‘risks’ (considered broadly) of candidate strategies?

Dispersion aversion

Other things being equal, it is reasonable for the relatively well-off to try to minimise dispersion, as when people take out insurance. The book repeatedly points out that dispersion minimisation is absurd as a general strategy, but its examples are ones where the expected gains differ significantly, in which case I do not think anyone would consider it sensible to ignore them.

As a thought experiment, suppose that someone is in a position to make savings every month. If they keep giving away all their wealth then they will have zero dispersion, which cannot be improved upon. But is this really the best strategy?

Attitude to risk

The book claims that academics have tended to regard ‘attitude to risk’ as a fixed attribute of an individual, independent of context. But as Bernoulli argues in considering insurance, this makes no sense. I am not sure how important it is, as most cases that I am aware of have only considered short-term behaviours.

Uncertainty aversion

The book only considers lotteries with objective probabilities. It thus does not consider cases like Ellsberg’s, where the probability is not given. But it seems to me that, even as a numerate person, if I have not yet calculated a probability then the situation is subjectively like Ellsberg’s; so anyone seeking a descriptive account needs to consider Ellsberg-style uncertainty, and hence to distinguish between risk aversion and uncertainty aversion. (It is hard to see how risk aversion could affect the innumerate, for example.)
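
To illustrate the distinction, here is a small sketch in the spirit of Ellsberg (the urns and the interval representation of the unknown probability are my assumptions, not the book’s):

```python
# Risk aversion and uncertainty (ambiguity) aversion come apart when a
# probability is only known to lie in an interval, as in Ellsberg's urns.

def ev_bounds(p_lo, p_hi, win, lose=0.0):
    """Lower and upper expected value of a bet paying `win` with a
    probability known only to lie in [p_lo, p_hi], else `lose`."""
    return (p_lo * win + (1 - p_lo) * lose,
            p_hi * win + (1 - p_hi) * lose)

known = ev_bounds(0.5, 0.5, 100)      # 50/50 urn: EV pinned at 50
ambiguous = ev_bounds(0.0, 1.0, 100)  # unknown urn: EV anywhere in [0, 100]

print(known, ambiguous)  # (50.0, 50.0) vs (0.0, 100.0)
# Both bets have the same midpoint, so dispersion-risk aversion alone
# cannot separate them; a worst-case evaluator prefers the known urn
# (lower bound 50 versus 0). That preference is uncertainty aversion.
```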

A suggestion

From a mathematical viewpoint, VNM only deals with a special case. An alternative to seeking a general theory would be to develop a range of special cases that could then be adapted to actual circumstances. Having worked with and for a range of DMs, the following seems to me quite common.

Suppose that you have a policy, plan or strategy that you regard as adequate, or that has at least been approved. You are offered the chance to change things in a way that may mean re-planning. Then the ‘value’ of making the change takes account not only of the ‘expected’ gains and losses, but also of the expected cost and uncertainties of re-planning. This seems to me to cut across quite a few of the variants of EUT that the authors consider, and to be quite sensible. It also links to Ellsberg, which I think deserves more consideration than the book gives it. Following Ellsberg, I would (and perhaps even ‘should’) accept some loss in expected utility in exchange for greater certainty about the utility. This is not just about the expected ‘dispersion’ of the outcomes but about more general unknowns. Thus I would normally expect to be relatively clear about the possibilities for my selected plan, but much less clear about alternatives that I have not considered.
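
As a hypothetical formulation, purely to fix ideas (the function, its arguments and the weight are mine, not the authors’):

```python
# The 'value of change': the gain from switching plans, discounted by
# the expected cost of re-planning and by a penalty for the extra
# uncertainty of a less well-understood alternative.

def value_of_change(expected_gain, replanning_cost, extra_uncertainty,
                    uncertainty_weight=1.0):
    """Net value of abandoning an approved plan for an alternative."""
    return (expected_gain - replanning_cost
            - uncertainty_weight * extra_uncertainty)

# A modest expected gain may not justify a switch once re-planning and
# the unknowns of an unexplored alternative are counted in.
print(value_of_change(expected_gain=20, replanning_cost=8,
                      extra_uncertainty=15))  # -> -3: keep the plan
```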

What is risk?

The book appears to regard risk as something like historic variability that can be estimated from the available data. Thus the DM drives while looking in the rear-view mirror. But:

  • Economic development often depends on factors that have not previously been reflected in the data. If we had all relevant data then possibly we could predict by extrapolation. But how do we know what is relevant?
  • As VNM notes, economies have an aspect of gaming in them, and ‘the rules of the game’ may change from time to time, as when coalitions change or there is a change of hegemon. When the rules change, behaviours should and typically do change, even if ‘preferences’ stay fixed.

On the face of it, risk is a much more nuanced concept than the book supposes. For example, following Friedman and Savage, I note the role of social standing: the risk of a permanent loss of standing is of a different type from the risk of a temporary set-back in one’s ambitions. I also note that for stocks and shares many ‘expect’ ‘buy and hold’ to be a better strategy than buying and selling based on one’s current expectations. This is because we know that our expectations are not very reliable. So why use them to value decisions? Why not take this more strategic approach?

Ergodicity

The book does not consider ergodicity as much as I would like. It is normally assumed that the short-run expectation over current options is a good guide to long-run expectations, so that the law of large numbers applies. That is, it is assumed that bad luck now can be compensated for by better luck in the future. But some decisions are ‘critical’ in the sense that bad luck now can blight one’s future for a long time. One wants a theory that differentiates. Alternatively, in situations like investments, where one can withdraw, one can identify indicators of incipient non-ergodicity (and hence critical instabilities), and ‘play the game’ only for as long as it is consistent with one’s expectations. The aim should then be to be more sensitive to departures from normalcy than one’s peers. Thus I propose that one should strategize one’s way around risk, rather than trying to ‘measure’ it.
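
A standard toy simulation (my example, not the book’s) makes the point:

```python
# A standard toy model of non-ergodicity: each round, wealth is
# multiplied by 1.5 or 0.6 with equal probability. The expected
# (ensemble) growth factor per round is 0.5*1.5 + 0.5*0.6 = 1.05, yet
# the time-average growth rate is 0.5*ln(1.5) + 0.5*ln(0.6), which is
# negative, so the typical individual path shrinks.
import math
import random

random.seed(0)
ROUNDS, PATHS = 50, 100_000

finals = []
for _ in range(PATHS):
    wealth = 1.0
    for _ in range(ROUNDS):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    finals.append(wealth)

mean_wealth = sum(finals) / PATHS           # pulled up by a few lucky paths
median_wealth = sorted(finals)[PATHS // 2]  # what a typical DM experiences

print(mean_wealth)    # noisy, but of the order of 1.05**50, about 11.5
print(median_wealth)  # close to exp(50 * -0.053), about 0.07
print(0.5 * math.log(1.5) + 0.5 * math.log(0.6))  # about -0.053 per round
```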

Puzzles

Interest parity

According to the ‘uncovered interest parity’ dogma, exchange rates should adjust to compensate for differences in interest rates, but the puzzle is that they tend to go in the opposite direction.  It seems to me that if a bank is offering exceptionally high rates of interest then it must be desperate to raise money, in which case it is probably fragile, and I will not invest (unless covered by a government guarantee). Thus the issue is not my ‘attitude to risk’ but the fact that beyond some point a higher interest rate ceases to be an attractive opportunity and becomes a warning sign. If I have my savings in a boring bank I will have plans for that money. If I see a ‘too good to be true’ alternative, I do not just form an ‘expectation’ but I consider my uncertainty: how can I be sure that the bank will not fail when I know so little about it?

Equity premium

The mainstream CAPM (Capital Asset Pricing Model) assumes a fairly constant ‘market risk premium’, but the premium is far from fixed and often much larger than theory suggests. It seems to me that market players could be taking into account uncertainty as well as dispersion risk. As in the interest parity case, if I have a boring investment it will take more than an increase in ‘expected return’ to tempt me: I will consider its conditions and uncertainties as well, as in Ellsberg.

Conclusion

It seems to me that there is little evidence that DMs take account only of value and expectations. Perhaps some also take account of uncertainty, as I think they should. A simple formulation, following Keynes, would be as follows.

Instead of using subjective ‘best estimates’ of value and probability to form a subjective ‘expected utility’, follow Boole in putting bounds on values and probabilities, and hence on ‘average utility’. If the best worst-case average utility (i.e., maximin) is acceptable (e.g., you will survive), then do that. Otherwise gamble, e.g. by following EUT if that is acceptable, or the best best-case if necessary. The justification for this heuristic is in terms of policy / planning / strategy: seeking long-term success while taking account of shorter-term hazards.
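
Here is a minimal sketch of the heuristic (the interface, thresholds and numbers are my own illustration):

```python
# Choose among options carrying interval estimates of 'average utility'
# (worst case, best estimate, best case), per the Boole/Keynes idea of
# bounding values and probabilities rather than point-estimating them.

def choose(options, acceptable):
    """options: dict name -> (worst, best_estimate, best) average utility.
    Returns (name, rationale)."""
    # 1. Maximin: the best worst-case. Take it if it is acceptable.
    maximin = max(options, key=lambda o: options[o][0])
    if options[maximin][0] >= acceptable:
        return maximin, "maximin acceptable"
    # 2. Otherwise gamble on the best estimate (EUT-style), if acceptable.
    eut = max(options, key=lambda o: options[o][1])
    if options[eut][1] >= acceptable:
        return eut, "gambling on best estimate"
    # 3. Failing that, go for the best best-case.
    return max(options, key=lambda o: options[o][2]), "best best-case"

opts = {
    "hold": (4.0, 5.0, 6.0),    # tight bounds, modest utility
    "pivot": (1.0, 7.0, 12.0),  # wide bounds, higher 'expected' utility
}
print(choose(opts, acceptable=3.0))   # ('hold', 'maximin acceptable')
print(choose(opts, acceptable=6.5))   # ('pivot', 'gambling on best estimate')
print(choose(opts, acceptable=10.0))  # ('pivot', 'best best-case')
```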

See Also

My Uncertainty in Planning (SIGPLAN 2000) is about DMs who are route planning, showing that using a game-theoretical framework to account for uncertainty improves upon conventional utility maximization. My current speculation is that this forms a much better normative model for decision-making in general than does utility maximization, and that some DMs – as capable individuals or in the aggregate – may make decisions ‘as if’ they are thinking ahead.

I also have some general notes on uncertainty and economics.

 

Dave Marsay

 
