Walley’s Statistical Reasoning

Peter Walley, Statistical Reasoning with Imprecise Probabilities, Chapman and Hall, 1991.

This develops a general, behaviourally based theory of imprecise probabilities to support statistical reasoning, which it mainly contrasts with conventional precise Bayesian probability theory and with sensitivity analysis.

The probability theory is not computationally complete in itself, but – following de Finetti – builds on a complete theory of previsions: valuations of gambles.

Previsions

The coherence axioms of previsions are:

  1. P(X) ≥ inf X,
  2. P(λX) = λP(X) for λ > 0,
  3. P(X+Y) ≥ P(X)+P(Y).

The first simply says that a gamble is worth at least as much as its worst possible outcome. The third illustrates the technical convenience of previsions over probabilities: with previsions, if you hold both X and Y you receive the rewards of each, so both count, whereas with probabilities one must avoid double-counting. It is an inequality rather than an equality because you may not have a neutral ‘attitude to risk’. The justification for the second axiom (at 2.2.4) is attributed to Cedric Smith.
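
To make the axioms concrete, here is a minimal numerical sketch of my own (nothing like it appears in the book), reading P as a lower prevision given by the lower envelope of a small credal set of probability mass functions over three outcomes; such an envelope satisfies all three axioms, and the code simply checks this on a few example gambles. The credal set, the gambles and the function names are assumptions of the sketch.

```python
import itertools
import numpy as np

# Three-outcome possibility space; a gamble is just a reward vector.
# The lower prevision is taken to be the lower envelope of a small credal set.
CREDAL_SET = [np.array(p) for p in ([0.2, 0.3, 0.5], [0.4, 0.4, 0.2], [0.1, 0.6, 0.3])]

def lower_prevision(gamble):
    """Infimum of the expectation of `gamble` over the credal set."""
    return min(float(p @ gamble) for p in CREDAL_SET)

gambles = [np.array(g) for g in ([1.0, 0.0, -1.0], [2.0, -1.0, 0.5], [0.0, 3.0, -2.0])]

for X in gambles:
    assert lower_prevision(X) >= X.min()                     # axiom 1: P(X) >= inf X
    for lam in (0.5, 2.0):                                   # axiom 2: P(lam X) = lam P(X), lam > 0
        assert np.isclose(lower_prevision(lam * X), lam * lower_prevision(X))

for X, Y in itertools.combinations(gambles, 2):              # axiom 3: P(X+Y) >= P(X) + P(Y)
    assert lower_prevision(X + Y) >= lower_prevision(X) + lower_prevision(Y) - 1e-12

print("all three coherence axioms hold for this lower envelope")
```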

The role of mathematical axioms

Much of this book is concerned with the mathematical consequences of the coherence axioms [above]. We believe that it is useful to present the theory through mathematical definitions, axioms and theorems, in order to state both the assumptions and the results clearly and rigorously. However, it is advisable to be wary of any system of axioms, especially of axioms that are proposed as norms of rationality. Such axioms require compelling justification.

… the reader should study these ‘justifications’ critically! (1.6.7)

On the other hand …

… It seems unlikely that probabilistic reasoning can ever be completely formalized, because it is implausible that the intelligence and imagination needed to devise useful assessment strategies, and the judgments to apply them, could be completely reduced to formal principles. (1.7.11)

Inference

Walley develops imprecise variations of the likelihood principle and Bayes’ rule, but these are less normative than in the precise case, and it is often possible to improve on the limits that they give. He also develops a precise ‘generalized Bayes’ rule’, but this depends on the following ‘Updating principle’:

Suppose that you intend to accept a gamble Z provided that you observe just the event B. Then you should accept the contingent gamble ZB (which pays as Z if B occurs and nothing otherwise). And vice versa.

The argument is that these have the same effect. Yet I am uncomfortable about this. Perhaps if my gamble were kept secret I might agree, but in stock markets gambles send signals that can affect the market, so the principle appears not to apply there, and is certainly questionable. It thus seems more reasonable to follow Good in not making the principle an assumption, but treating it as something to be considered which, if it holds in a particular case, justifies a precise generalized Bayes’ rule.
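
If I have understood the development correctly, when the lower probability of the conditioning event is positive the generalized Bayes’ rule pins down the conditional lower prevision, and it agrees with the lower envelope of the member-by-member conditional expectations. The sketch below (my own toy numbers, reusing the credal set from the earlier sketch, not an example from the book) computes the answer both ways.

```python
import numpy as np
from scipy.optimize import brentq

# Same toy credal set as before; X is a gamble and B the indicator of the
# event we condition on (B has positive probability under every member).
CREDAL_SET = [np.array(p) for p in ([0.2, 0.3, 0.5], [0.4, 0.4, 0.2], [0.1, 0.6, 0.3])]
X = np.array([4.0, 1.0, -2.0])
B = np.array([1.0, 1.0, 0.0])

def lower_prevision(gamble):
    return min(float(p @ gamble) for p in CREDAL_SET)

# Route 1: lower envelope of the member-by-member conditional expectations E_p(X | B).
envelope = min(float((p * B) @ X) / float(p @ B) for p in CREDAL_SET)

# Route 2: the generalized Bayes' rule -- the value mu at which P(B(X - mu)) = 0.
gbr = brentq(lambda mu: lower_prevision(B * (X - mu)), X.min(), X.max())

print(envelope, gbr)   # the two routes agree in this setup (both about 1.43)
```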

Muddles

Binmore has introduced the concept of ‘muddles’, where the law of large numbers fails. Walley recognizes that a probability may be imprecise either because of a lack of subjective precision or because of muddles (e.g. 8.5.5). He notes that Bayesian sensitivity analysis precludes muddles, whereas his method – using ‘robust Bernoulli models’ (9.6) – allows for them.

… The imprecision might be much smaller than 0.1 for relatively stable physical processes such as coin tosses, but might be much larger than 0.1 for some unstable or poorly understood phenomena such as economic processes.

These are strong arguments against the sensitivity analysis interpretation … we expect the difference in interpretation to lead to important differences in practice.
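
To make the contrast with stable processes concrete, here is a toy simulation of my own (not Binmore’s or Walley’s construction): a binary process whose chance of success drifts between 0.2 and 0.8 over ever-longer blocks. Its running relative frequency keeps swinging over a wide band rather than converging, which is the sort of behaviour for which a single precise chance, or a sensitivity analysis around one, seems the wrong model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 'unstable' binary process: the chance of success alternates between
# 0.2 and 0.8 over blocks whose lengths grow geometrically, so no single
# limiting frequency ever establishes itself.
chances = [0.8 if k % 2 else 0.2 for k in range(12)]
lengths = [3 ** k for k in range(12)]

outcomes = np.concatenate(
    [rng.random(n) < p for p, n in zip(chances, lengths)]
).astype(float)

running_freq = np.cumsum(outcomes) / np.arange(1, len(outcomes) + 1)

# Even late in the sequence the running relative frequency still swings over a
# wide band instead of converging, as it would for a stable chance.
tail = running_freq[len(running_freq) // 10 :]
print(f"late running frequency ranges over [{tail.min():.2f}, {tail.max():.2f}]")
```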

Comments

Usefulness

Even if one is fairly sure that one will not adopt imprecise probabilities as a practical method, preferring sensitivity analysis, the discussion still gives a useful theoretical background for Bayesian sensitivity analysis, which is otherwise ad hoc. This can help to give grounds for confidence in such analysis and – alternatively – to identify cases where such analysis is not reliable. One might then be able to develop an ad hoc solution (such as considering cases), based on multiple sensitivity analyses, that is reliably grounded.

Previsions and probabilities

Academically, it might have been more straightforward to develop a complete theory of previsions first, and then make the probability theory a straight application. But Walley, perhaps wisely, develops the theories of prevision and of probability together, to retain the reader’s interest and provide a link to more familiar material and experience. But this does confuse the logic somewhat.

9.1.7 provides an example of a set of probability judgments that are coherent according to Walley’s theory but not realisable by any precise probability. Unfortunately the example is not well motivated, and so it is not clear whether it is a good or a bad thing to permit such examples. It would be good to see some practical examples.

Gambling

Walley follows Smith in developing probability currency, particularly in support of (2) above, yet Smith’s concept is very limited, as he acknowledges. In essence, the statistician has to care no more about the subjects of the statistics in themselves than a gambler does about coins, wheels and cards in themselves.

Walley gives his own justification for the prevision axiom (2) above. This compares
(a) a reward αX in probability currency with
(b) a reward αX in probability currency provided that an extraneous random event with known positive chance β (independent) occurs, otherwise 0.

It argues that if (a) is desirable then so is (b) and hence so is a reward αβX. This seems reasonable. But previsions are based on gambling, and – like Smith – I would not risk more than I have, so while the converse of the above does seem reasonable, it does not reflect my own habits: my ‘attitude to risk’ depends on what is at stake and what I have to stake. It may be that the theory could be fixed, but I suspect it would lose its generality. Instead, it would be a theory of some idealised gambling behaviour.
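
The point about stakes can be made concrete with a standard toy model (my own sketch, not Walley’s or Smith’s): an agent with modest wealth and, say, logarithmic utility will accept a small multiple of a favourable gamble but refuse a large multiple of the same gamble, so their acceptances do not scale in the way axiom (2) presupposes.

```python
import numpy as np

# A favourable gamble: win 1.2 or lose 0.8 with equal chance (expected value +0.2).
outcomes = np.array([1.2, -0.8])
probs = np.array([0.5, 0.5])
wealth = 10.0

def acceptable(scale):
    """Accept scale * X iff expected log-utility of wealth does not fall.
    Log utility is just one stand-in for a stake-dependent attitude to risk."""
    return float(probs @ np.log(wealth + scale * outcomes)) >= np.log(wealth)

print(acceptable(1.0))   # True: a small stake in the favourable gamble is accepted
print(acceptable(10.0))  # False: ten times the stake risks too much of the wealth
```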

The formalism suggests that probabilities can be represented by an interval. But if an urn contains fair, two-headed and two-tailed coins, then for a coin drawn at random the possible values of P(Heads) are {0, 0.5, 1}, not the whole interval [0, 1]. Similarly, we may think that the economy will continue to be consistent with its observed statistical behaviour, or will crash steeply, while regarding a moderate fall as highly implausible.
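
To spell the point out (again my own sketch): the lower and upper values computed from the chance set {0, 0.5, 1} coincide with those computed from the whole interval [0, 1], both for the next toss and for a two-toss event, so a summary built only from lower and upper envelopes cannot record that intermediate chances such as 0.3 have been ruled out.

```python
import numpy as np

# Two descriptions of the coin drawn from the urn: its chance of heads is
# exactly one of {0, 0.5, 1}, or merely somewhere in [0, 1].
discrete = np.array([0.0, 0.5, 1.0])
interval = np.linspace(0.0, 1.0, 101)

def envelope(chances, event_prob):
    """Lower and upper probability of an event across the candidate chances."""
    values = event_prob(chances)
    return float(values.min()), float(values.max())

heads_next_toss = lambda t: t                  # chance of heads on the next toss
one_head_in_two = lambda t: 2 * t * (1 - t)    # chance of exactly one head in two tosses

for event_prob in (heads_next_toss, one_head_in_two):
    print(envelope(discrete, event_prob), envelope(interval, event_prob))
    # both descriptions yield the same lower/upper values, so the envelope alone
    # cannot record that intermediate chances such as 0.3 have been excluded
```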

The role of mathematical axioms

Walley is critical of those, like Keynes, who provide a theory without a standard general interpretation and hence with no justification for the view that the theory is universal. Taking up Walley’s challenge to be critical of his theory, I do not find his justification at all compelling. His notion of gambling seems highly idealised. It is also worth noting that generations of the best minds found the axioms of Euclidean geometry compelling, and yet we now see that as a model of reality they are false.

It seems to me simpler to treat a mathematical theory, even one of probability, as simply one among many possible mathematical theories that should only ever be applied with its assumptions and credible alternative theories in mind, and with a commitment not to gamble too much on the theory without checking it.

Walley points out that the conventional approaches to probability rely on assumptions that are simply not credible as regards economics, and any claimed results should be appropriately caveated. For example, it is simply not possible to use conventional statistical reasoning to support a view that economies will continue to be consistent with their past behaviour.

It seems to me that Walley’s approach still makes questionable assumptions, and hence results will still need to be caveated and critiqued. But Walley’s assumptions do seem usefully more general than the conventional ones.

There seems no evidence that a completely general theory is possible, but that should not stop us trying.

Implicit assumptions

By default, the theory seems to assume:

  • That the underlying reality is in some sense ‘statistically stable’, as in sampling from a fixed population, as distinct from sampling from a population that may be changing.
  • That my actions do not affect what happens. For example, that if I ‘gamble’ by buying a large quantity of stock, this does not affect its price.
  • That there is no conflict between short and long-term considerations in determining which gambles are acceptable.
  • That there is no value in revealing preferences. For example, if I prefer beer to water to wine, I would choose water rather than wine and beer rather than water. But if in a social context where seeming to prefer beer to wine would result in a loss of ‘social capital’, I might choose wine over beer. This is inconsistent, but hardly irrational.
  • That there is no value ‘in having a horse in the race’. If I gamble simply to take part, then previsions do not necessarily add.
  • That if I am indifferent between X and Y then with any small inducement to prefer X, I should prefer X.
  • That I have no strategy that cannot be modelled by its effect on my valuations.
  • That in muddles what is of interest are just the short term and long term behaviours, with no interest in the middle term. (Hence no interest in economic crashes.)

This last point applies to all uncertainty theories of which I am aware.

Extensions

Smith cites von Neumann and Morgenstern for more complex situations, such as when we do care about the subject of the statistics. Consideration of their work would seem to lead us much closer to the generality that Walley seems to suppose.

Conclusions

The term ‘statistics’ derives from studies of state data. Walley has failed to argue that his approach is suitable for such statistics, although it has obvious application to more routine data analysis, where the statistician is disinterested. It is unfortunate that no practical examples of the proper scope and consequences are given.

This work is important in understanding Bayesian probability and sensitivity analysis, showing – for example – their limitations as regards modelling economies. But I am unconvinced that it provides an approach that is sufficiently general for economies, and I do not see how it could usefully be applied to the issue of crashes, for example, although it does have uses for lesser muddles and in particular for some common statistical problems. It is also useful as an alternative to sensitivity analysis where there is no clear ‘central’ precise probability, or where there are concerns about possible correlations.

Lacking a clearly defined class of applications for which it is valid and offers benefits over sensitivity analysis, the work motivates further study of the important issues that it raises (explicitly and implicitly) more than it motivates universal adoption of its techniques.
