Jaynes’ Probability Theory
E. T. Jaynes, *Probability Theory: The Logic of Science*, Washington University, St. Louis
This is a physicist’s view of probability as a logic, which generalizes both frequentist and Bayesian views. It contains many good insights and is popular among physicists. Like the Bayesians, Jaynes proposes that probability ‘should’ be treated as just a number, but he suggests a maximum entropy principle to be used in place of subjective priors.
As a logic, Jaynes proposes this as the ideal limiting case for an ‘infinitely educated’ robot, not for a human having to deal with incomplete and ambiguous data.
Approach and advantages
It purports to follow Jeffreys, Cox, Shannon and Pólya.
“[I]f degrees of plausibility are represented by real numbers, then there is a uniquely determined set of quantitative rules for conducting inference. …The important new feature [is] that these rules [are] now seen as uniquely valid principles of logic in general, making no reference to “chance” or “random variables”; so their range of application is vastly greater than had been supposed in the conventional probability theory… . As a result, the imaginary distinction between “probability theory” and “statistical inference” disappears, and the field achieves not only logical unity and simplicity, but far greater technical power and flexibility in applications.”
The advantage of Maximum Entropy over subjective priors is that:
“… Maximum Entropy is a non-speculative procedure, in the sense that it invokes no hypotheses beyond the sample space and the evidence that is in the available data.”
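The maximum entropy procedure can be made concrete with Jaynes’ own standard illustration, the Brandeis dice problem: given only that a die’s long-run mean is 4.5 (rather than the fair value 3.5), assign the distribution over faces that maximizes entropy subject to that constraint. The solution has the exponential form p_i ∝ exp(λi), with the Lagrange multiplier λ chosen to match the mean. A minimal sketch (the function name and bisection bracket are illustrative choices, not from Jaynes):

```python
import math

def maxent_dice(target_mean, faces=6, tol=1e-12):
    """Maximum-entropy distribution over die faces 1..faces subject to a
    fixed mean: p_i is proportional to exp(lam * i), with the Lagrange
    multiplier lam found by bisection so the mean matches the constraint."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, faces + 1), w)) / z

    lo, hi = -50.0, 50.0  # bracket for lam; mean(lam) is increasing in lam
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_dice(4.5)  # probabilities rise monotonically toward face 6
```

Note how non-speculative this is: the only inputs are the sample space (the six faces) and the constraint actually in evidence (the mean); nothing else is assumed.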
“The equations also reproduce a more complicated phenomenon, divergence of opinions. One might expect that open discussion of public issues would tend to bring about a general consensus. On the contrary, we observe repeatedly that when some controversial issue has been discussed vigorously for a few years, society becomes polarized into two opposite extreme camps; it is almost impossible to find anyone who retains a moderate view. Probability theory as logic shows how two persons, given the same information, may have their opinions driven in opposite directions by it, and what must be done to avoid this. [Otherwise,] a false premise built into a model which is never questioned, cannot be removed by any amount of new data.”
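Jaynes’ divergence mechanism can be sketched with a single Bayes update. Two agents share the same prior for a hypothesis H and observe the same datum D (“a particular source asserts H”), but hold different unquestioned premises about the source’s reliability, so the same evidence drives their posteriors in opposite directions. This is a hypothetical toy illustration of the qualitative point, not Jaynes’ worked example:

```python
def posterior(prior, p_d_given_h, p_d_given_not_h):
    """One Bayes update: return P(H|D) given P(H) and the two likelihoods."""
    num = prior * p_d_given_h
    return num / (num + (1 - prior) * p_d_given_not_h)

prior = 0.5  # both agents start undecided about H

# Agent 1 models the source as usually right; the assertion raises belief.
trusting = posterior(prior, 0.9, 0.1)      # -> 0.9

# Agent 2 models the source as usually wrong; the SAME assertion lowers it.
distrusting = posterior(prior, 0.1, 0.9)   # -> 0.1
```

The divergence lives entirely in the likelihood models, which is why no amount of further data from the same source can remove it: each new assertion is filtered through the same false premise, which is never itself put to the test.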
The basic desiderata and justification
Jaynes introduces the problem of designing a reasoning robot:
“To each proposition about which it reasons, our robot must assign some degree of plausibility, based on the evidence we have given it; and whenever it receives new evidence it must revise these assignments to take that new evidence into account. In order that these plausibility assignments can be stored and modified in the circuits of its brain, they must be associated with some definite physical quantity, such as voltage or pulse duration or a binary coded number, etc., however our engineers want to design the details. For present purposes this means that there will have to be some kind of association between degrees of plausibility and real numbers:
“[This association] is also required theoretically; we do not see the possibility of any consistent theory without a property that is equivalent functionally to [the above] Desideratum.”
Other approaches to probability theory
Jaynes notes (correctly) that Bayesians actually follow de Finetti rather than Bayes; their approach is based on avoiding a sure loss in gambling. Jaynes also considers lattice-like theories of comparative probability, arguing that non-comparable probabilities correspond to a lack of knowledge that would normally be filled in by experience. He has two main issues with incomplete comparisons:
- Fine’s 1974 observation that ‘ordering relations may not be assigned arbitrarily’, on pain of technical problems when updating the lattice.
- Computer representations must ‘at some stage’ use real numbers.
Jaynes does not consider representations such as Bayes’ confidence intervals or Keynes’s, in which one can mix numeric and comparative probabilities. He concludes:
“[T]he Laplace-Bayes theory does not describe the inductive reasoning of actual human brains; it describes the ideal limiting case of an “infinitely educated” brain.”
Thus non-numerical representations of probability seem to have advantages, but [Jaynes supposes], as there is as yet no satisfactory theory, numerical representations will have to do. [It would also seem, from Bayes et al., that numeric representations may be adequate for physics and other relatively mechanistic situations.]