Binmore’s Rational Decisions
Ken Binmore, Rational Decisions, Princeton University Press, 2009.
Binmore explains the foundations of Bayesian decision theory, shows why Savage restricted the theory’s application to small worlds, and argues that the Bayesian approach to knowledge is inadequate in a large world.
Binmore also introduces the notion of a ‘muddled’ strategy. (A sequence is muddled if no amount of data leads to a precise estimate of its probability, i.e. the law of large numbers fails for it. The variability in such a strategy is therefore not ‘random’ in the strong sense assumed by conventional probability theory.) Binmore shows that muddled strategies can sometimes beat conventional mixed strategies, perhaps because they can prevent other players from ‘defecting’.
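The failure of the law of large numbers is easy to demonstrate. Here is a small sketch (my own construction, not one from the book): a 0/1 sequence built from blocks of doubling length, whose running frequency of 1s oscillates forever between roughly 1/3 and 2/3 instead of settling on a single value, so no amount of data yields a precise probability estimate.

```python
def muddled_bits(n_blocks=20):
    """Alternate blocks of 1s and 0s whose lengths double each time."""
    bits, length, value = [], 1, 1
    for _ in range(n_blocks):
        bits.extend([value] * length)
        length *= 2
        value = 1 - value
    return bits

bits = muddled_bits()

# Track the running relative frequency of 1s.
running, count = [], 0
for i, b in enumerate(bits, start=1):
    count += b
    running.append(count / i)

# Even deep into the sequence, the frequency keeps swinging between
# roughly 1/3 and 2/3: there is no limit for data to converge on.
tail = running[len(running) // 2:]
print(min(tail), max(tail))
```

The upper and lower limit frequencies (here 2/3 and 1/3) are well defined even though the limit itself does not exist, which is the sense in which the sequence has an interval of probabilities rather than a single probability.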
Small worlds and consistency
Ken notes that:
- ‘Jimmie’ Savage, who is often credited as the creator of contemporary Bayesian decision theory, “held the view that it is only rational to apply Bayesian decision theory in small worlds. But what is a small world?” (1.1)
- Most theories of decision-making can be split into two parts: modelling ‘the world’ and then making a decision. To many, being rational means acting consistently with one’s beliefs, that is: acting as if the model were true (1.3).
- “Bayesian decision theory assumes that [actors] make stable and consistent choices in the presence of events that lack objective probabilities.” (7.1.1)
- “Savage … restricted the application of his theory to what he called small worlds in which it makes sense to insist on consistency” (7.1.1). “According to Savage, a small world is one within which it is always possible to ‘look before you leap’.”
- “In a large world, the possibility of an unpleasant surprise that reveals some consideration overlooked in [the] original model can’t be discounted.” (7.1.1)
In other words, in a small world it is possible in principle to assign Bayesian priors and to use Bayes’ rule to update them. But in large worlds this is not appropriate:
- You might not know what it is that you don’t know (8.2).
- Your opponents may not be ‘open books’ (8.).
- The situation might be complex in the sense of incorporating self-reference (8.4).
Ken (7.3.1) considers the familiar Dutch Book argument that rational action implies consistent (numeric) subjective probabilities. Ken (after Cedric Smith) notes that “It isn’t very realistic to insist that [a person] must always bet one way or the other no matter what level of confidence she may have in her beliefs.” (7.3.1) “… do we really want to insist that she always has a subjective probability for everything – even for events for which she is entirely ignorant?” “If we insist on completeness but accept that [gamblers] may be averse to uncertainty, then we may have to give up the strong consistency requirements …”
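Smith’s idea can be sketched in a few lines (the function and names are mine, not from the book): an agent has a lower and an upper betting rate for an event, will buy a unit-stake bet on the event when its price is below the lower rate, sell one when the price is above the upper rate, and simply abstain in between. Because there is a zone where she refuses to bet either way, no single ‘fair price’ – no precise subjective probability – can be elicited from her, and the Dutch Book argument does not get started.

```python
def betting_decision(price, lower, upper):
    """Decide whether to trade a $1-stake bet on an event, given the
    agent's lower and upper betting rates for that event."""
    if price < lower:
        return "buy"      # bet on the event: the price is favourable
    if price > upper:
        return "sell"     # bet against the event
    return "abstain"      # zone of uncertainty: no bet either way

# An agent whose confidence in the event spans the interval [0.3, 0.6]:
for p in (0.2, 0.45, 0.7):
    print(p, betting_decision(p, 0.3, 0.6))
```

When `lower == upper` the abstention zone vanishes and we recover the conventional Bayesian agent with a single subjective probability.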
Implications for Knowledge and Science
Binmore writes: “My guess is that the problem of scientific induction will always remain unsolved, because it is one of those problems that has no definite solution.” (7.5.2)
Ken considers Bayesian updating to be the routine refinement of variables within a fixed, ‘known’, model. Thus knowledge as such is fixed until one has a surprise, when one needs to change the model – perhaps radically – identify its variables, and then estimate them. He supposes that this knowledge will be fixed and known in typical sub-games. In particular, if players ‘know’ that they are all rational, they may all continue playing according to the rules of the sub-game until they recognize that some players are being irrational in relation to that sub-game, in which case their play, the game and the rules may change.
Ken shows that if uncertainty has a range of possible probabilities, then any decision method that satisfies some mild axioms depends only on the values of the extremes and not (for example) on second-order probability distributions. (9.1.1)
Rubin’s axiom holds that if, having made a decision, a random act makes the outcomes from all possible actions the same, then one should ‘be consistent’ and stick with one’s selected action. Ken argues that this must be abandoned:
We therefore have to learn to tolerate some muddling of the boundary between preferences and beliefs in a large world. (9.1.2)
Ken develops an exemplar theory, only a little larger than small worlds, in which one has incomplete subjective probabilities, represented by intervals. This gives a multiplicative form of the Hurwicz Criterion, and supports Ellsberg’s views.
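On my reading (a sketch under my own normalisation, with utilities taken to be positive): where the classical Hurwicz criterion scores an interval of expected utilities by a weighted sum of its endpoints, the multiplicative form scores it by a weighted geometric mean. The geometric form penalises wide intervals more heavily, which is one way to rationalise the ambiguity aversion observed in Ellsberg’s experiments.

```python
def hurwicz_additive(u_lo, u_hi, h):
    """Classical Hurwicz: weighted sum of the interval's endpoints."""
    return h * u_lo + (1 - h) * u_hi

def hurwicz_multiplicative(u_lo, u_hi, h):
    """Multiplicative form: weighted geometric mean of the endpoints."""
    return u_lo ** h * u_hi ** (1 - h)

# A sure utility of 4 versus an ambiguous act worth somewhere in [1, 9].
# The additive rule (h = 0.5) prefers the ambiguous act; the
# multiplicative rule prefers the sure one -- ambiguity aversion.
print(hurwicz_additive(1, 9, 0.5))        # 5.0
print(hurwicz_multiplicative(1, 9, 0.5))  # 3.0
print(hurwicz_multiplicative(4, 4, 0.5))  # 4.0
```

The parameter h plays the role of a pessimism weight in both forms; the two rules coincide only when the interval collapses to a point.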
A key technical innovation in Ken’s account is a ‘muddling box’. This is like a randomizer, but instead of containing a pseudo-random sequence of 0s and 1s with a fixed probability of 1s, it has a collection of sequences with probabilities between set limits. He finishes his account by discussing games in which players do better with muddling strategies than deterministic or random ones, thus motivating the use of imprecise probabilities.
Pragmatism and logic
Ken describes Bayesian updating as implying a fixed ‘known’ contextual model, and hence as a form of pragmatism. It also resonates with Whitehead’s model in Process and Reality. In effect, the conventional (Bayesian) approach to rationality assumes that there is a single, global, permanent epoch with some overarching rules that never change.
As noted above, Rubin’s axiom requires one to stick with one’s selected action when a random act makes the outcomes from all possible actions the same: one only changes action when there is a manifest reason. This rules out having strategies that might break ties for other reasons, e.g. possible meta-games. Bayesianism thus supposes that surprises are no more common or significant than the fixed model allows. The key question, then, is what is ‘pragmatic’ and ‘resilient’ when surprises can be more common and more significant. Ken gives a good answer based on imprecise probabilities and shows how ‘muddling strategies’ can be better than supposedly rational ones. But he leaves us quite a challenge in considering decision making under a broader range of uncertainties.