# Lindley’s Philosophy of Statistics

Dennis V. Lindley, *The Statistician: Journal of the Royal Statistical Society, Series D*, Vol. 49, No. 3 (2000), pp. 293–337.

## The Philosophy of Statistics

[S]tatistical inference is firmly based on probability alone. Progress is therefore dependent on the construction of a probability model …  probabilities are personal. … (summary)

But:

… If a philosophical position can be developed that embraces all uncertainty, it will provide an important advance in our understanding of the world. At the moment it would be presumptive to claim so much.

## Measuring uncertainty

### Reasons for assuming measurability

[I]t is only by associating numbers with any scientific concept that the concept can be properly understood.

… We want to measure uncertainties in order to combine them. A politician said that he preferred adverbs to numbers. Unfortunately it is difficult to combine adverbs.

### Method of measurement

Consider before you an urn containing a known number N of balls that are as nearly identical as modern engineering can make them. Suppose that one ball is drawn at random from the urn. For this to make sense, it is needful to define randomness. Imagine that the balls are numbered consecutively from 1 to N and suppose that, at no cost to you, you were offered a prize if ball 57 were drawn. Suppose, alternatively, that you were offered the same prize if ball 12 were drawn. If you are indifferent between the two propositions and, in extension, between any two numbers between 1 and N, then, for you, the ball is drawn at random. Notice that the definition of randomness is subjective; it depends on you.

Having said what is meant by the drawing of a ball at random, forget the numbers and suppose that R of the balls are red and the remainder white, the colouring not affecting your opinion of randomness. Consider the uncertain event that the ball, withdrawn at random, is red. The suggestion is that this provides a standard for uncertainty and that the measure is R/N, the proportion of red balls in the urn. … Now pass to any event, or proposition, which can either happen or not, be true or false. It is proposed to measure your uncertainty associated with the event happening by comparison with the standard. If you think that the event is just as uncertain as the random drawing of a red ball from an urn containing N balls, of which R are red, then the event has uncertainty R/N for you. R and N are for you to choose. For given N, it is easy to see that there cannot be more than one such R. There is now a measure of uncertainty for any event or proposition. …
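Lindley's urn standard lends itself to a small computational sketch. The `prefers_event` oracle below is hypothetical, a stand-in for a person's comparative judgements (here faked with a hidden 'true' personal probability), and the bisection simply exploits Lindley's observation that for given N there cannot be more than one such R:

```python
N = 1000  # number of balls in the standard urn

def prefers_event(r, true_p=0.3):
    """Stand-in for a subject's judgement: does the event seem MORE likely
    than drawing red from an urn with r red balls out of N?
    (Here we fake the subject with a hidden personal probability of 0.3.)"""
    return true_p > r / N

def elicit(judge, n=N):
    """Bisection on R: find the smallest R at which the standard draw is
    judged at least as likely as the event, i.e. the point of indifference."""
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if judge(mid):
            lo = mid  # event still judged more likely than r/N: raise r
        else:
            hi = mid  # event judged no more likely: lower r
    return hi / n

print(elicit(prefers_event))  # prints 0.3
```

The elicited value recovers the hidden probability to within 1/N, mirroring the claim that comparison with the standard pins down a unique R.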

The sketchy demonstration here may not be thought adequate for such an important conclusion, so let us look at other approaches.

### Other demonstrations that uncertainty is nothing but probability

Lindley refers to de Finetti, Ramsey, Savage.

… My personal favourite among presentations that have full rigour is that of DeGroot (1970) [Optimal Statistical Decisions], chapter 6. An excellent recent presentation is Bernardo and Smith (1994) [Bayesian Theory].

… Some writers have considered the axioms carefully and produced objections. A fine critique is Walley (1991), who went on to construct a system that uses a pair of numbers, called upper and lower probabilities, in place of the single probability. The result is a more complicated system. My position is that the complication seems unnecessary. I have yet to meet a situation in which the probability approach appears to be inadequate and where the inadequacy can be fixed by employing upper and lower values. The pair is supposed to deal with the precision of probability assertions; yet probability alone contains a measure of its own precision. I believe in simplicity; provided that it works, the simpler is to be preferred over the complicated, essentially Occam’s razor.

## Inference

The formulation that has served statistics well throughout this century is based on the data having, for each value of a parameter, a probability distribution. This accords with the idea that the uncertainty in the data needs to be described probabilistically. …

… It is often said that the parameters are assumed to be random quantities. This is not so. It is the axioms that are assumed, from which the randomness property is deduced.
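As a minimal illustration of this formulation (my example, not Lindley's): data with a distribution for each parameter value, combined with a prior by Bayes' rule. In the conjugate Beta-Binomial case the update is a one-liner:

```python
# Beta(a, b) prior on a success probability; Binomial data with s successes
# and f failures; Bayes' rule gives a Beta(a + s, b + f) posterior.

def posterior(prior_a, prior_b, successes, failures):
    """Conjugate Beta-Binomial update."""
    return prior_a + successes, prior_b + failures

a, b = posterior(1, 1, successes=7, failures=3)  # uniform prior, 7/10 successes
print(a, b)        # Beta(8, 4)
print(a / (a + b)) # posterior mean = 8/12 ≈ 0.667
```

The parameter is not "assumed random" anywhere in this code; its distribution simply follows from applying the probability calculus to the prior and the likelihood.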

## Subjectivity

[P]robability … depends on two arguments: the element whose uncertainty is being described and the knowledge on which that uncertainty is based. The omission of the conditioning argument often leads to confusion.

… Suppose that Θ is the scientific quantity of interest. … Initially the scientist will know little about Θ, because the relevant knowledge base K is small, and two scientists will have different opinions, expressed through probabilities p(Θ|K). Experiments will be conducted, data x obtained and their probabilities updated to p(Θ|x, K) in the way already described. It can be demonstrated (see Edwards et al. (1963)) that under circumstances that typically obtain, as the amount of data increases, the disparate views will converge, typically to where Θ is known, or at least determined with considerable precision. This is what is observed in practice: whereas initially scientists vary in their views and discuss, sometimes vigorously, among themselves, they eventually come to agreement. As someone said, the apparent objectivity is really a consensus. There is therefore good agreement here between scientific practice and the Bayesian paradigm.
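The merging-of-opinions result cited from Edwards et al. (1963) is easy to watch happen in a simulation (an assumed toy example, not from the paper): two scientists start with quite different Beta priors on a success probability, observe the same data, and their posterior means close in on each other:

```python
import random

random.seed(0)  # reproducible run
theta_true = 0.6
priors = {"optimist": (8.0, 2.0), "sceptic": (2.0, 8.0)}  # Beta(a, b) each

successes = 0
for n in range(1, 1001):
    if random.random() < theta_true:
        successes += 1
    if n in (10, 100, 1000):
        # Beta posterior mean: (a + successes) / (a + b + n)
        means = {name: (a + successes) / (a + b + n)
                 for name, (a, b) in priors.items()}
        print(n, {k: round(v, 3) for k, v in means.items()})
# The gap between the two posterior means shrinks roughly like 1/n.
```

With 1000 observations the two posterior means differ by less than 0.01, however far apart the priors started: the "apparent objectivity" is a data-driven consensus.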

## Models

The philosophical position developed here is that uncertainty should be described solely in terms of your probability. The implementation of this idea requires the construction of probability distributions for all the uncertain elements in the reality being studied. … The statistician’s task is to construct a model for the uncertain world under study. Having done this, the probability calculus enables the specific aspects of interest to have their uncertainties computed on the knowledge that is available. … The statistician’s task is to articulate the scientist’s uncertainties in the language of probability, and then to compute with the numbers found.

… The fundamental problem of inference and induction is to use past data to predict future data.

## Models again

… If … you are genuinely uncertain whether model M1 or model M2 obtains, then describe your uncertainty by probability, producing a model that has M1 with probability γ and M2 with probability 1 − γ. Part of the inferential problem will be the passage from γ to p(M1 | x). This is a problem that has been discussed (O’Hagan, 1995), and where impropriety is best avoided and conglomerability assumed.
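The passage from the prior γ to the posterior model probability is just Bayes' rule applied at the level of models. A hedged sketch, with illustrative marginal-likelihood numbers (not from O’Hagan):

```python
def posterior_model_prob(gamma, lik_m1, lik_m2):
    """p(M1 | x) = gamma * p(x|M1) / (gamma * p(x|M1) + (1 - gamma) * p(x|M2)).

    gamma   : prior probability of model M1
    lik_m1  : marginal likelihood p(x | M1)
    lik_m2  : marginal likelihood p(x | M2)
    """
    num = gamma * lik_m1
    return num / (num + (1 - gamma) * lik_m2)

print(posterior_model_prob(0.5, lik_m1=0.02, lik_m2=0.005))  # ≈ 0.8
```

Note that the marginal likelihoods require proper priors within each model, which is where the warning about impropriety bites.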

### Decision analysis

… The foundational argument goes on to show that the merits of the consequence (d, Θ) can be described by a real number u(d, Θ), termed the utility of the consequence. … None of the arguments given here apply to the case of two, or more, decision makers who do not have a common purpose, or may even be in conflict. This is an important limitation on maximized expected utility.
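For a single decision maker the prescription is mechanical: weight each utility u(d, Θ) by your probability for Θ and pick the decision with the largest expectation. A toy sketch with made-up numbers:

```python
# Made-up probabilities p(theta) and utilities u(d, theta).
p_theta = {"wet": 0.3, "dry": 0.7}
utility = {
    ("umbrella", "wet"): 0.8, ("umbrella", "dry"): 0.6,
    ("no umbrella", "wet"): 0.0, ("no umbrella", "dry"): 1.0,
}

def expected_utility(d):
    """E[u(d, theta)] under p(theta)."""
    return sum(p * utility[(d, th)] for th, p in p_theta.items())

best = max(("umbrella", "no umbrella"), key=expected_utility)
print(best, expected_utility(best))  # prints: no umbrella 0.7
```

Nothing in this computation accommodates a second decision maker with different utilities, which is exactly the limitation Lindley flags.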

## Science

[T]he scientific method consists in expressing your view of your uncertain world in terms of probability, performing experiments to obtain data, and using that data to update your probability and hence your view of the world. Although the emphasis in this updating is ordinarily put on Bayes, effectively the product rule, the elimination of the ubiquitous nuisance parameters by the addition rule … is also important.
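The role of the addition rule in eliminating nuisance parameters can be shown in miniature (made-up numbers): given a joint posterior p(θ, ν | x) on a grid, summing over ν yields the marginal p(θ | x):

```python
# p(theta, nu | x) on a tiny grid, normalized to 1.
joint = {
    ("th1", "nu1"): 0.10, ("th1", "nu2"): 0.20,
    ("th2", "nu1"): 0.30, ("th2", "nu2"): 0.40,
}

# Addition rule: p(theta | x) = sum over nu of p(theta, nu | x).
marginal = {}
for (theta, nu), p in joint.items():
    marginal[theta] = marginal.get(theta, 0.0) + p

print({k: round(v, 3) for k, v in marginal.items()})  # → {'th1': 0.3, 'th2': 0.7}
```

In realistic models the sum becomes an integral, but the principle is the same: the nuisance parameter disappears by marginalization, not by plugging in an estimate.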

## Conclusions

To carry out this scheme for the large world is impossible. It is essential to use a small world, which introduces simplification but often causes distortion. … Where a real difficulty arises is in the construction of the model … there is a real gap in our appreciation of how to assess probabilities-of how to express our uncertainties in the requisite form. My view is that the most important statistical research topic as we enter the new millennium is the development of sensible methods of probability assessment. This will require co-operation with numerate experimental psychologists and much experimental work.

Lindley makes a good case for uncertainty in science being adequately represented by conventional, precise, Bayesian probability. To me, the key features are:

• Science is concerned with measurement and the measurable.
(Mathematically, there is no doubt that if you must have numbers that add and multiply then you must use probability.)
• Scientists typically work towards a culmination, such as the presentation of results.
(Hence each analysis is a finished item, rather than being part of a longer-term engagement or ‘strategy’.)
• Scientists often have a detachment from their subject, only observing it.
• Scientists are often prepared to make comparative probability judgements, and even numeric assessments.
• Scientists often find that their subjects behave as if they are ‘really’ probabilistic.
(Find or assume?)
• Where scientists have a common purpose, their models and assessments tend to converge, with enough data.
(Unlike some politicians.)

Lindley acknowledges that probabilities are not ‘really’ precise, and appreciates imprecise probability theory academically, but has not found the explicit representation of imprecision to be practically useful. He seems to find his theory to be widely applicable, but is not himself claiming any more general theory than Ramsey or Savage.

Dave Marsay
