UK judge rules against probability theory? R v T

Actually, the judge was a bit more considered than my title suggests. In my defence the Guardian says:

“Bayes’ theorem is a mathematical equation used in court cases to analyse statistical evidence. But a judge has ruled it can no longer be used. Will it result in more miscarriages of justice?”

The case involved Nike trainers and appears to be the same as that in a recent appeal judgment, although it doesn’t actually involve Bayes’ rule: it involves only the likelihood ratio, not any priors. An expert witness had said:

“… there is at this stage a moderate degree of scientific evidence to support the view that the [appellant’s shoes] had made the footwear marks.”

The appeal hinged around the question of whether this was a reasonable representation of a reasonable inference.

According to Keynes, Knight and Ellsberg, probabilities are grounded on either logic, statistics or estimates. Prior probabilities are – by definition – never grounded on statistics and in practical applications rarely grounded on logic, and hence must be estimates. Estimates are always open to challenge, and might reasonably be discounted, particularly where one wants to be ‘beyond reasonable doubt’.

Likelihood ratios are typically more objective and hence more reliable. In this case they might have been based on good quality relevant statistics, in which case the judge supposed that it might be reasonable to state that there was a moderate degree of scientific evidence. But this was not the case. Expert estimates had supplied what the available database had lacked, so introducing additional uncertainty. This might have been reasonable, but the estimate appears not to have been based on relevant experience.
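As a purely illustrative sketch (the figures below are invented, not taken from the case), the odds form of Bayes’ rule shows how sensitive a conclusion is to an estimated likelihood ratio:

```python
# Illustrative only: all figures here are hypothetical, not those from R v T.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Odds form of Bayes' rule: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

prior = 1 / 1000  # a hypothetical prior odds on the shoes having made the marks

# When the database is poor, the likelihood ratio is itself only an estimate.
# A plausible range of estimates produces very different conclusions:
for lr in (10, 100, 1000):
    odds = posterior_odds(prior, lr)
    print(f"LR = {lr:4d}: posterior odds = {odds:.2f}, "
          f"probability = {odds / (1 + odds):.3f}")
```

The point is only that where the likelihood ratio rests on estimates rather than data, the posterior inherits that uncertainty.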

My deduction from this is that where there is doubt about the proper figures to use, that doubt should be acknowledged and the defendant given the benefit of it. As the judge says:

“… it is difficult to see how an opinion … arrived at through the application of a formula could be described as ‘logical’ or ‘balanced’ or ‘robust’, when the data are as uncertain as we have set out and could produce such different results.”

This case would seem to have wider implications:

“… we do not consider that the word ‘scientific’ should be used, as … it is likely to give an impression … of a degree of precision and objectivity that is not present given the current state of this area of expertise.”

My experience is that such estimates are often used by scientists, and the result confounded with ‘science’. I have sometimes heard this practice justified on the grounds that some ‘measure’ of probability is needed and that if an estimate is needed it is best that it should be given by an independent scientist or analyst than by an advocate or, say, politician. Maybe so, but perhaps we should indicate when this has happened, and the impact it has on the result. (It might be better to follow the advice of Keynes.)

Royal Statistical Society

The guidance for forensic scientists is:

“There is a long history and ample recent experience of misunderstandings relating to statistical information and probabilities which have contributed towards serious miscarriages of justice. … forensic scientists and expert witnesses, whose evidence is typically the immediate source of statistics and probabilities presented in court, may also lack familiarity with relevant terminology, concepts and methods.”

“Guide No 1 is designed as a general introduction to the role of probability and statistics in criminal proceedings, a kind of vade mecum for the perplexed forensic traveller; or possibly, ‘Everything you ever wanted to know about probability in criminal litigation but were too afraid to ask’. It explains basic terminology and concepts, illustrates various forensic applications of probability, and draws attention to common reasoning errors (‘traps for the unwary’).”

The guide is clearly much needed. It states:

“The best measure of uncertainty is probability, which measures uncertainty on a scale from 0 to 1.”

This statement is nowhere supported by any evidence whatsoever. No consideration is given to alternatives, such as those of Keynes, or to the legal concept of “beyond reasonable doubt.”

“The type of probability that arises in criminal proceedings is overwhelmingly of the subjective variety, …”

There is no consideration of Boole and Keynes’ more logical notion, nor of any reason why we should take notice of the subjective opinions of others.

“Whether objective expressions of chance or subjective measures of belief, probabilistic calculations of (un)certainty obey the axiomatic laws of probability, …”

But how do we determine whether those axioms are appropriate to the situation at hand? The reader is not told whether the term axiom is to be interpreted in its mathematical or lay sense: as something to be proved, or as something that may be assumed without further thought. The first example given is:

“Consider an unbiased coin, with an equal probability of producing a ‘head’ or a ‘tail’ on each coin-toss. …”

Probability here is mathematical. Considering the probability of an untested coin of unknown provenance would be more subjective. It is the handling of the subjective component that is at issue, an issue that the example does not help to address. More realistically:

“Assessing the adequacy of an inference is never a purely statistical matter in the final analysis, because the adequacy of an inference is relative to its purpose and what is at stake in any particular context in relying on it.”

“… an expert report might contain statements resembling the following:
* “Footwear with the pattern and size of the sole of the defendant’s shoe occurred in approximately 2% of burglaries.” …
It is vital for judges, lawyers and forensic scientists to be able to identify and evaluate the assumptions which lie behind these kinds of statistics.”

This is good advice, which the appeal judge took. However, while I have not read and understood every detail of the guidance, it seems to me that the judge’s understanding went beyond the guidance, including its ‘traps for the unwary’.

The statistical guidance cites the following guidance from the forensic scientists’ professional body:

“Logic: The expert will address the probability of the evidence given the proposition and relevant background information and not the probability of the proposition given the evidence and background information.”

This seems sound, but needs supporting by detailed advice. In particular none of the above guidance explicitly takes account of the notion of ‘beyond reasonable doubt’.
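The distinction the guidance draws can be written out explicitly. Writing H for the proposition, E for the evidence and I for the background information, Bayes’ rule gives:

```latex
P(H \mid E, I) = \frac{P(E \mid H, I)\, P(H \mid I)}{P(E \mid I)}
```

The expert is to address the likelihood P(E | H, I) on the right; moving to P(H | E, I) requires a prior P(H | I), and simply equating the two is the familiar ‘prosecutor’s fallacy’ of the transposed conditional.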

Forensic science view

Science and Justice has an article which opines:

“Our concern is that the judgment will be interpreted as being in opposition to the principles of logical interpretation of evidence. We re-iterate those principles and then discuss several extracts from the judgment that may be potentially harmful to the future of forensic science.”

The full article is behind a pay-wall, but I would like to know what principles it is referring to. It is hard to see how there could be a conflict, unless there are some extra principles not in the RSS guidance.

Criminal Law Review

Forensic Science Evidence in Question argues that:

“The strict ratio of R. v T is that existing data are legally insufficient to permit footwear mark experts to utilise probabilistic methods involving likelihood ratios when writing reports or testifying at trial. For the reasons identified in this article, we hope that the Court of Appeal will reconsider this ruling at the earliest opportunity. In the meantime, we are concerned that some of the Court’s more general statements could frustrate the jury’s understanding of forensic science evidence, and even risk miscarriages of justice, if extrapolated to other contexts and forms of expertise. There is no reason in law why errant obiter dicta should be permitted to corrupt best scientific practice.”

In this account it is clear that the substantive issues are about likelihoods rather than probabilities, and that consideration of ‘prior probabilities’ is not relevant here. This is different from the Royal Statistical Society’s account, which emphasises subjective probability. However, in considering the likelihood of the evidence conditioned on the suspect’s innocence, it is implicitly assumed that the perpetrator is typical of the UK population as a whole, or of people at UK crime scenes as a whole. But suppose that women are most often murdered by men that they are or have been close to, and that such men are likely to be more similar to each other than people randomly selected from the population as a whole. Then it is reasonable to suppose that the likelihood of the evidence, given that the perpetrator is some other male known to the victim, will be significantly greater than the likelihood given that it is some random man. The use of an inappropriate likelihood introduces a bias.
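The point can be made with a crude numerical sketch (all frequencies below are invented, purely for illustration):

```python
# Hypothetical frequencies, for illustration only.

def likelihood_ratio(p_evidence_if_guilty: float,
                     p_evidence_if_innocent: float) -> float:
    """LR = P(evidence | guilty) / P(evidence | innocent)."""
    return p_evidence_if_guilty / p_evidence_if_innocent

p_if_guilty = 1.0  # assume the defendant's shoes would certainly match

# Frequency of the shoe pattern among the alternative suspects:
whole_population = 0.02  # say, 2% of the general population
similar_subgroup = 0.20  # say, 20% among men known to the victim (clustered tastes)

print(likelihood_ratio(p_if_guilty, whole_population))  # strong-looking evidence
print(likelihood_ratio(p_if_guilty, similar_subgroup))  # much weaker evidence
```

If the plausible alternative perpetrators are drawn from a subgroup in which the trait is clustered, the whole-population figure overstates the strength of the evidence.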

My advice: do not get involved with people who mostly get involved with people like you, unless you trust them all.

The Appeal

Prof. Jamieson, an expert on the evaluation of evidence whose statements informed the appeal, said:

“It is essential for the population data for these shoes be applicable to the population potentially present at the scene. Regional, time, and cultural differences all affect the frequency of particular footwear in a relevant population. That data was simply not … . If the shoes were more common in such a population then the probative value is lessened. The converse is also true, but we do not know which is the accurate position.”

Thus the professor is arguing that the estimated likelihood could be too high or too low, and that the defence ought to be given the benefit of the doubt. I have argued that using a whole-population likelihood is actually likely to be biased against the defence, as I expect such traits as the choice of shoes to be clustered.

Science and Justice

Faigman, Jamieson et al., “Response to Aitken et al. on R v T”, Science and Justice 51 (2011) 213–214.

This argues against an unthinking application of likelihood ratios, noting:

  • That the defence may reasonably not be able to explain the evidence, so that there may be no reliable source for an innocent hypothesis.
  • That assessment of likelihoods will depend on experience, the basis for which should be disclosed and open to challenge.
  • If there is doubt as to how to handle uncertainty, any method ought to be tested in court and not dictated by armchair experts.

On the other hand, when it says “Accepting that probability theory provides a coherent foundation …” it fails to note that coherence is beside the point: is it credible?

Comment

The current situation seems unsatisfactory, with the best available advice both too simplistic and not simple enough. In similar situations I have co-authored a large document which has then been split into two: guidance for practitioners and justification. It may not be possible to give comprehensive guidance for practitioners, in which case one should aim to give ‘safe’ advice, so that practitioners are clear about when they can use their own judgment and when they should seek advice. This inevitably becomes a ‘legal’ document, but that seems unavoidable.

In my view it should not be simply assumed that the appropriate representation of uncertainty is ‘nothing but a number’. Instead one should take Keynes’ concerns seriously in the guidance and explicitly argue for a simpler approach avoiding ‘reasonable doubt’, where appropriate. I would also suggest that any proposed principles ought to be compared with past cases, particularly those which have turned out to be miscarriages of justice. As the appeal judge did, this might usefully consider foreign cases to build up an adequate ‘database’.

My expectation is that this would show that the use of whole-population likelihoods as in R v T is biased against defendants who are in a suspect social group.

More generally, I think that any guidance ought to apply to my growing uncertainty puzzles, even if it only cautions against a simplistic application of any rule in such cases.

See Also

Blogs: The Register, W Briggs and Convicted by statistics (referring to previous miscarriages).

My notes on probability. A relevant puzzle.

Dave Marsay 

How mathematical modelling seduced Wall Street (NS)

How mathematical modelling seduced Wall Street

New Scientist, 22 Oct. 2011.

See also page 10 A better way to price the future takes hold.

In the print version this is ‘Unruly humans vs the lust for order’, and it ends by criticising ‘models in the physical sciences’. Whitehead, co-author of Principia Mathematica, has shown in forensic detail, in his Process and Reality, the limitations of conventional models. Keynes had also covered much the same ground in his Treatise on Probability. More recently, Good joined the dots while Prigogine developed a mathematical model showing the severe limitations of the conventional approach. Yet the online version seems to criticise ‘mathematical modelling’.

I think the actual problem of Wall Street is its pragmatism. In the short run we earn bonuses; in the long run we are retired. So it is pragmatic to make money while the opportunity is there. The problem is in ‘valuing the future’ (pg 10). In markets where we can always move on, we don’t value the future. Why should we, unless we have a stake in it? But Whitehead and Keynes also note a kind of ‘lust for order’, or at least an assumption that whatever order there happens to be will endure. But whether the fault was short-termism or a misguided attitude to order, mathematical modelling appears innocent.

Institutional Investor

How to understand the limits of financial models is for a more financially aware audience, but raises new issues.

“… there has been a frantic attempt to prevent loss, to restore the status quo ante at all cost”

The status quo ante was very risky: we should not be seeking to return to it. (Keynes showed why.)

“Quants were the theorists”

Oh dear. If the quants had been mathematicians they would have realised that economics was an empirical subject, and appreciated the uncertainties that Keynes highlighted.

“… traders were the experimentalists, and we collaborated to develop and explore our models.”

Oh dear. In an empirical subject, how can one separate ‘theory’ and ‘experiment’ like this? And what can one deduce from traders’ experiments?

“If you are someone who cannot distinguish between God’s creations and man’s idols, you may mistake models for deep laws. Many economists are such people.”

So blame such economists, not mathematicians (or physicists).

“We have seen corporations treated with the kindness owed to individuals, in the hope, perhaps, that their well-being would trickle down to individuals, and individuals treated with the kindness owed to objects.”

Perceptive. Derman’s prescription includes:

“Avoid axiomatization. Axioms and theorems are suitable for mathematics, but finance is concerned with the real world. Every financial axiom is pretty much wrong; the most-relevant questions in creating a model are, how wrong and in what way?”

If one doesn’t axiomatize one cannot do mathematics. One is left to apply formulae and methods with no real understanding. Keynes’ attempt to axiomatize probability and economics was critical in revealing the flaws in conventional thinking. The mistake is to turn axioms into dogma.

“The dangerous part of Black-Scholes is the further assumption that the sole risk of a stock is the risk of diffusion, which isn’t true. But the more realistically you can define risk, the better the model will become.”

How does one define risk, if not with axioms? I tend to go along with Keynes, in supposing that one cannot define risk precisely, but can give an axiomatization that falls short of a precise definition.

“When someone shows you an economic or financial model that involves mathematics, you should understand that, despite the confident appearance of the equations, what lies beneath is a substrate of great simplification and — only sometimes — great imagination, perhaps even intuition.”

Having axioms shows exactly what ‘lies beneath’. Being able to produce an axiomatization is a good test of one’s understanding. Thus financial modellers typically define away risk; the mathematics makes this clear: what else would?

“Beware of idolatry. The greatest conceptual danger is idolatry: believing that someone can write down a theory that encapsulates human behavior and thereby free you of the obligation to think for yourself. A model may be entrancing, but no matter how hard you try, you will not be able to breathe life into it. To confuse a model with a theory is to believe that humans obey mathematical rules, and to invite future disaster.”

This gives us a clue to some of the confusion. Mathematical models and rules (such as Keynes’) can reflect imprecision and uncertainty. The problem is that the customers for economic models wanted precision and certainty, and were content with models that were mathematical in the sense that they were based on formulae using mathematical operators with no concern for their validity.

Derman reminds us of some earlier (2009) advice:

“• I will remember that I didn’t make the world and that it doesn’t satisfy my equations.

• Though I will use the models that I or others create to boldly estimate value, I will always look over my shoulder and never forget that the model is not the world.

• I will not be overly impressed by mathematics. I will never sacrifice reality for elegance without explaining to end users why I have done so.

• I will not give the people who use my models false comfort about their accuracy. I will make the assumptions and oversights explicit to all who use them.

• I understand that my work may have enormous effects on society and the economy, many beyond my apprehension.”

These seem reasonable. However most modellers have been paid by people who appear to have no concern for the longer-term effects, and the apparent desire to return to the status quo ante suggests that they still don’t. It is no good giving advice to modellers (mathematical or otherwise) unless there are fundamental changes to financial institutions, changes that are incompatible with conventional capitalism, “a way of life in which all the standards of the past are supposedly subservient to the goal of efficient, timely production”.

“We need free markets, but we need them to be principled.”

Agreed. Can’t mathematics help?

Reuters

The Physics of an economic crisis is along much the same lines.

See Also

Kauffman, the End of a Physics Worldview, takes a more theoretical approach to the same issue. Or Good, a mathematician who explores the limitations of theories and models.

Dave Marsay

GLS Shackle, imagined and deemed possible?

Background

This is a personal view of GLS Shackle’s uncertainty. Having previously used Keynes’ approach to identify possible failure modes in systems, including financial systems (in the run-up to the collapse of the tech bubble), I became concerned in 2007 that there was another bubble with a potential for a Keynes-type 25% drop in equities, constituting a ‘crisis’. In discussions with government advisers I first came across Shackle. The differences between him and Keynes were emphasised. I tried to make sense of Shackle, so that I could form my own view, but failed. Unfinished business.

Since the crash of 2008 there have been various attempts to compare and contrast Shackle and Keynes, and others. Here I imagine a solution to the conundrum which I deem possible: unless you know different?

Imagined Shackle

Technically, Shackle seems to focus on the wickeder aspects of uncertainty, to seek to explain them and their significance to economists and politicians, and to advise on how to deal with them. Keynes provides a more academic view, covering all kinds of uncertainty, contrasting tame probabilities with wicked uncertainties, helping us to understand both in a language that is better placed to survive the passage of time and the interpretation by a wider – if more technically aware – audience.

Politically, Shackle lacks the baggage of Lord Keynes, whose image has been tarnished by the misuse of the term ‘Keynesian’. (Like Keynes, I am not a Keynesian.)

Conventional probability theory would make sense if the world was a complicated randomizing machine, so that one has ‘the law of large numbers’: that in the long run particular events will tend to occur with some characteristic, stable, frequency. Thus in principle it would be possible to learn the frequency of events, such that reasonably rare events would be about as rare as we expect them to be. Taleb has pointed out that we can never learn the frequencies of very rare events, and that this is a technical flaw in many accounts of probability theory, which fail to point this out. But Keynes and Shackle have more radical concerns.
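Taleb’s point about very rare events is easy to demonstrate numerically. A minimal sketch (with an arbitrary seed and an invented probability):

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

p_true = 1e-6      # a hypothetical 'one in a million' event
n_trials = 10_000  # far fewer trials than 1 / p_true

# Count how often the event occurs in the sample:
observed = sum(random.random() < p_true for _ in range(n_trials))

# With n_trials much smaller than 1/p_true the event is typically never
# observed, so the naive frequency estimate cannot distinguish
# p = 1e-6 from p = 0 (or, for that matter, from p = 1e-4).
print(f"observed {observed} events in {n_trials} trials")
```

No amount of cleverness in the estimator rescues this: the data simply do not bear on frequencies far below one over the sample size.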

If we think of the world as a complicated randomizing machine, then, as in Whitehead, it is one which can suddenly change. Shackle’s approach, in so far as I understand it, is to be open to the possibility of a change, to recognize when the evidence of a change is overwhelming, and to react to it. This is an important difference from the conventional approach, in which all inference is done on the assumption that the machine is known. Any evidence that it may have changed is simply normalised away. Shackle’s approach is clearly superior in all those situations where substantive change can occur.

Shackle terms decisions about a possibly changing world ‘critical’. He makes the point that the application of a predetermined strategy or habit is not a decision proper: all ‘real’ decisions are critical in that they make a lasting difference to the situation. Thus one has strategies for situations that one expects to repeat, and makes decisions about situations that one is trying to ‘move on’. This seems a useful distinction.

Shackle’s approach to critical decisions is to imagine potential changes to new behaviours, to assess them and then to choose between those deemed possible. This is based on preference, not expected utility, because ‘probability’ does not make sense. He gives an example of a French guard at the time of the revolution who can either give access to a key prisoner or not. He expects to lose his life if he makes the wrong decision, depending on whether the revolution succeeds or not. A conventional approach would be based on the realisation that most attempted revolutions fail. But his choice may have a big impact on whether or not the revolution succeeds. So Shackle advocates imagining the two possible outcomes and their impact on him, and then making a choice. This seems reasonable. The situation is one of choice, not probability.

Keynes can support Shackle’s reasoning. But he also supports other types of wicked uncertainty. Firstly, it is not always the case that a change is ‘out of the blue’. One may not be able to predict when the change will come, but it is sometimes possible to see that there is an economic bubble, and the French guard probably had some indications that he was living in extraordinary times. Thus Keynes goes beyond Shackle’s pragmatism.

In reality, there is no strict dualism between probabilistic behaviour and chaos, between probability and Shackle’s complete ignorance. There are regions in-between that Keynes helps explore. For example, the French guard is not faced with a strictly probabilistic situation, but could usefully think in terms of probabilities conditioned on his actions. In economics, one might usefully think of outcomes as conditioned on the survival of conventions and institutions (October 2011).

I also have a clearer view of why consideration of Shackle led to the rise of behavioural economics: if one is ‘being open’ and ‘imagining’ then psychology is clearly important. On the other hand, much of behavioural economics seems to use conventional rationality as some form of ‘gold standard’ for reasoning under uncertainty, and to consider departures from it as ‘bias’. But then I don’t understand that either!

Addendum

(Feb 2012, after Blue’s comments.)

I have often noticed that decision-takers and their advisers have different views about how to tackle uncertainty, with decision-takers focusing on the non-probabilistic aspects while their advisers (e.g. scientists or at least scientifically trained) tend to, and may even insist on, treating the problem probabilistically, and hence have radically different approaches to problem-solving. Perhaps the situation is crucial for the decision-taker, but routine for the adviser? (‘The agency problem.’) (Econophysics seems to suffer from this.)

I can see how Shackle had much that was potentially helpful in the run-up to the financial crash. But it seems to me no surprise that the neoclassical mainstream was unmoved by it. They didn’t regard the situation as crucial, and didn’t imagine or deem possible a crash. Unless anyone knows different, there seems to be nothing in Shackle’s key ideas that provides as explicit a warning as Keynes. While Shackle was more acceptable than Keynes (lacking the ‘Keynesian’ label) he still seems less to the point. One needs both together.

See Also

Prigogine, who provides models of systems that can suddenly change (‘become’). He also relates to Shackle’s discussion of how making decisions relates to the notion of ‘time’.

Dave Marsay

New Look

I have changed the ‘theme’ (look and feel) of this blog, to provide more room for drop-down menus. I hope it isn’t a pain.

The voice of science: let’s agree to disagree (Nature)

Sarewitz uses his Nature column to argue against forced or otherwise false consensus in science.

“The very idea that science best expresses its authority through consensus statements is at odds with a vibrant scientific enterprise. … Science would provide better value to politics if it articulated the broadest set of plausible interpretations, options and perspectives, imagined by the best experts, rather than forcing convergence to an allegedly unified voice.”

D. Sarewitz The voice of science: let’s agree to disagree Nature Vol 478 Pg 3, 6 October 2011.

Sarewitz seems to be thinking in terms of issues such as academic freedom and vibrancy. But there are arguably more important aspects. Given any set of experiments or other evidence there will generally be a wide range of credible theories. The choice of a particular theory is not determined by any logic, but by such factors as which one was thought of first and by whom, and which is easiest to work with in making predictions etc.

In issues like smoking and climate change the problem is that the paucity of data is obvious and different credible theories lead to different policy or action recommendations. Thus no one detailed theory is credible. We need a different way of reasoning, that should at least recognize the range of credible theories and the consequential uncertainty.

I have experience of a different kind of problem: where one has seemingly well established theories but these are suddenly falsified in a crisis (as in the financial crash of 2008). Politicians (and the public, where they are involved) understandably lose confidence in the ‘science’ and can fall back on instincts that may or may not be appropriate. One can try to rebuild a credible theory over-night (literally) from scratch, but this is not recommended. Some scientists have a clear grasp of their subject. They understand that the accepted theory is part science part narrative and are able to help politicians understand the difference. We may need more of these.

Enlightened scientists will seek to encourage debate, e.g. via enlightened journals, but in some fields, as in economics, they may find themselves ‘out in the cold’. We need to make sure that such people have a platform. I think that this goes much broader than the committees Sarewitz is considering.

I also think that many of our contemporary problems arise because societies tend to suppress uncertainty, being more comfortable with consensus and giving more credence to people who are confident in their subject. This attitude suppresses consideration of alternatives and turns novelty into shocks, which can have disastrous results.

Previous work

In a 2001 Nature article Roger Pielke covers much the same ground. But he also says:

“Take for example weather forecasters, who are learning that the value to society of their forecasts is enhanced when decision-makers are provided with predictions in probabilistic rather than categorical fashion and decisions are made in full view of uncertainty.”

From this and his blog it seems that the uncertainty is merely probabilistic, and differs only in magnitude. But it seems to me that, before global warming became significant, weather forecasting and climate modelling seemed probabilistic, but there was an intermediate time-scale (in the UK, one or two weeks) which was always more complex and which had different types of uncertainty, as described by Keynes. But this does not detract from the main point of the article.

See also

Popper’s Logic of Scientific Discovery , Roger Pielke’s blog (with a link to his 2001 article in Nature on the same topic).

Dave Marsay

Is there a reasonable way to count votes?

I have seen a supposedly learned journal which suggests that there is no ‘fair’ way to select a winner from a set of ballots. Although not emphasised by Wikipedia or the UN, this is a commonly held view, often relying on abstruse maths.

A simple example

Suppose that there are two candidates who are the first choice of an equal number of voters: a tie. Then no fair deterministic method can select a winner. But is this a practical problem?

Typical proofs

Condorcet was among the first to note that a set of ballots could imply a 3-way tie, with A above B on most ballots, B above C on most, and yet C above A on most ballots. Such ties can be relatively robust, in that the addition of another ballot would not necessarily break the tie. For example, if a voters vote ABC, b voters vote BCA and c vote CAB then there is a 3-way tie (A over B over C over A) whenever (a+b)>c etc. If those who voted ABC wish to eliminate C they can do so by voting BAC, in which case B has a clear majority. This is certainly a problem, but it did not stop Condorcet from devising what he regarded as a ‘reasonable’ method. The difficulties lie in breaking n-way ties.

The best known impossibility theorem is Arrow’s. It is implicit that the ‘voting system’ is deterministic. The problem is in what it calls “irrelevant alternatives”. But in Condorcet’s 3-way tie, C is ‘relevant’ to the choice between A and B, although Arrow says that it isn’t. So although Arrow’s approach is interesting, the result could be summarised as saying that ties are a problem.

The Gibbard-Satterthwaite theorem extends Arrow by replacing consideration of ‘irrelevant’ alternatives with consideration of tactical voting. This is a much more reasonable criterion. As Condorcet and others appreciated, tactical voting can easily arise in any definite method that seeks to break a 3-way tie. But how significant is this?
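The cycle is easy to check directly. A minimal sketch with a=4, b=3 and c=2, so that (a+b)>c and the analogous conditions hold:

```python
# Condorcet's 3-way cycle: 4 voters rank ABC, 3 rank BCA, 2 rank CAB.
ballots = [("A", "B", "C")] * 4 + [("B", "C", "A")] * 3 + [("C", "A", "B")] * 2

def majority_over(x, y, ballots):
    """True if x is ranked above y on a strict majority of ballots."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# Each candidate beats one rival and loses to the other: a genuine cycle.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(x, "has a majority over", y, ":", majority_over(x, y, ballots))
```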

A simple counter-example

The mathematics above is correct, but we may doubt the interpretation. A simple method is to put the ballots into a hat, pull one out ‘at random’, and select a winner from it.

Irrelevant alternatives:  the winner depends only on the chosen ballot, not on any alternatives.

Tactical voting: how people vote only matters if their ballot is selected, in which case they would do best to vote honestly.

This method is not very attractive, but it does show that the impossibility theorems need careful interpretation.

A more reasonable counter-example?

I have devised this example to be ‘obviously’ reasonable, without necessarily being the best. First, a candidate is said to have a majority over another if it is ranked higher on more ballots. The method is:

  • If we can divide the candidates into two sets, such that all members of the first set have a majority over all members of the second set, then the second set is eliminated.
  • This is done to yield a minimal candidate set, with ties between its members.
  • A ballot is chosen at random and that member of the candidate set that is ranked highest is selected.
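The method above can be sketched in Python. I read the ‘minimal candidate set’ as what voting theorists call the Smith set (the smallest set whose members all have majorities over everyone outside it); that reading, and all the names below, are my own interpretation rather than a definitive implementation:

```python
import random
from itertools import combinations

def beats(ballots, x, y):
    """True if x is ranked above y on a majority of the ballots."""
    above = sum(1 for r in ballots if r.index(x) < r.index(y))
    return above > len(ballots) - above

def minimal_candidate_set(ballots):
    """Smallest set whose members all beat every candidate outside it."""
    cands = list(ballots[0])
    for size in range(1, len(cands) + 1):
        for s in combinations(cands, size):
            outsiders = [y for y in cands if y not in s]
            if all(beats(ballots, x, y) for x in s for y in outsiders):
                return set(s)
    return set(cands)  # unreachable: the full set always qualifies

def select_winner(ballots, rng=random):
    s = minimal_candidate_set(ballots)
    ballot = rng.choice(ballots)              # a ballot chosen at random
    return next(c for c in ballot if c in s)  # its top-ranked member of the set

# The cyclic profile above leaves a 3-way tie, so all candidates survive:
cyclic = [("A", "B", "C")] * 4 + [("B", "C", "A")] * 3 + [("C", "A", "B")] * 2
assert minimal_candidate_set(cyclic) == {"A", "B", "C"}
# With a clear majority winner, the candidate set collapses to that candidate:
clear = [("A", "B", "C")] * 5 + [("B", "A", "C")] * 2
assert minimal_candidate_set(clear) == {"A"}
assert select_winner(clear) == "A"
```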

Independence from Irrelevant Alternatives

All members of the candidate set are relevant in the sense that there is no agreed way to choose between them. In this sense the above method meets the requirement for ‘independence from irrelevant alternatives’. The more conventional requirement seems much too strong.

Tactical Voting

Is the method liable to tactical voting? First, as before, if my candidate gets into the candidate set then it pays to rank that candidate first, and the other rankings do not affect the outcome. Hence the only scope for tactical voting is in determining the candidate set. As above, I could try to eliminate candidates from the candidate set by exploiting cycles, but I cannot falsely promote a candidate into the candidate set, since doing so requires ranking it highly, which an honest supporter would do anyway. In this sense the scope for tactical voting is limited. The remaining tactic is to vote for the candidate with the best chance of beating one’s least-liked candidate; if that is one’s priority, then this is arguably an honest vote. It may be that we need to distinguish between types of tactical voting, rather than treat them all as equally disreputable.

Types of tactics

Some voting systems, such as first-past-the-post, waste votes for genuine first preferences when they are unpopular. Some hold this to be a good thing, but it means that the votes cast are not a reliable indication of actual preferences. While it would be ‘a good thing’ for a system not to penalize voters who rank all their preferences honestly, a compromise would be a system that never penalizes voters for ranking their true first preference first. Not all tactics are equally harmful. In Condorcet’s system problems arise with ties. If, as above, ties are broken by selecting ballots at random, there is no incentive not to rank one’s first preference first. An alternative method would be to eliminate candidates who were most often worst-ranked. This would encourage voters to take account of how often candidates were likely to be given a low ranking, and so might encourage them to rank a more moderate candidate first. While this is tactical voting, it is not obvious that it would be a ‘bad thing’. Finally, some voting systems (such as first-past-the-post and approval voting) call for tactics that depend on subtle assessments of how others are likely to vote. Under the alternative vote, for instance, you want to work out in which round candidates will be eliminated and who will pick up their next preferences. Tactics which depend on relatively simple factors, such as who is more moderate, seem preferable. The aim here is not to advocate any particular method, but to cast doubt on simplistic interpretations of Arrow’s theorem.

See also

Condorcet anticipated Arrow, Approval voting as a counter-example to Arrow.

Dave Marsay

Kahneman et al’s Anomalies

Daniel Kahneman, Jack L. Knetsch, Richard H. Thaler Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias The Journal of Economic Perspectives, 5(1), pp. 193-206, Winter 1991

[Some] “behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences … . An empirical result qualifies as an anomaly if it is difficult to “rationalize,” or if implausible assumptions are necessary to explain it within the paradigm.”

The first candidate anomaly is:

“A wine-loving economist we know purchased some nice Bordeaux wines … . The wines have greatly appreciated in value, so that a bottle that cost only $10 when purchased would now fetch $200 at auction. This economist now drinks some of this wine occasionally, but would neither be willing to sell the wine at the auction price nor buy an additional bottle at that price.”

This is an example of the effects in the title. But is it anomalous? Suppose that the economist can spare $120 but not $200 on self-indulgences, of which wine is her favourite. Would this not explain why she might buy a crate cheaply but neither pay a lot for a single bottle nor sell one at a profit? The ‘anomalies’ seem to be anomalies only relative to expected utility theory. However, some of the other examples may reflect genuine psychological effects.

See also

Kahneman’s review, Keynes’ General Theory

Dave Marsay