Cynefin Framework

YouTube has a good video by Dave Snowden on his/Cognitive Edge’s ‘Cynefin sense-making framework’ for complexity and chaos. I speculate on its applicability outside routine management.

Overview

[Figure: the Cynefin framework. Image via Wikipedia.]

The Cynefin framework is very much from a human factors / organisational / management point of view, but may have wider potential applicability. It makes reference to evolutionary theories, but these seem not to be essential.

Components

The framework has four main components:

  • simple: sense, categorise, respond
  • complicated: sense, analyse, respond
  • complex: probe, sense, respond
  • chaos: act, sense, respond

plus: disorder: not knowing where one is, and not knowing what to do.

Transitions

Problems can transit incrementally between simple and complicated, between simple and complex, or between complex and chaotic. But if one treats problems as if they were simple there is a risk of them becoming chaotic, in which case one cannot get them back to simple directly, but has to go via complex etc. It is best not to treat things as simple except where doing so would yield a great enough advantage to outweigh the risks. (Even here one should watch out.)

One escapes disorder by applying the framework and associated techniques. (One might modify the framework so that one transits out of order into disorder and can then go into chaos, but apparently managers can only cope with four components. 😉 )

Handling complexity

A complex situation is described as stable. One identifies ‘safe to fail’ probes, i.e. ones whose effects one could recover from, bringing the situation back to stability. In particular, one needs to be able to tell when the outcome of a probe is not safe, and to have to hand sufficient remediation resources; one also needs to be able to tell when the outcome is positive, and to have available amplifying resources. One then tries out such probes until what happens is acceptable, and then seeks to amplify the effect (e.g., by pushing harder). Thus one has a form of ‘trial and error’, eventually leading to success by persistence.

Sense making

The video starts with an important preamble: although the framework is typically presented as a categorisation it should really be used for sense-making. That is, one needs to decide for the case at hand what are the appropriate definitions of the components. My interpretation is that ‘complicated’ is what an organisation can already analyse, ‘complex’ is what it – after some enlightening – may be able to get to handle, while ‘chaos’ is still too hard to handle. Thus one would naturally expect the definitions to vary.

Limitations

No palette of options, from which a definition of ‘complex’ could be developed, is provided. It is quite a ‘thin’ framework. 

If one had a given problem, one can see how (using the Cognitive Edge techniques or otherwise) one might usefully characterise complexity as more than run-of-the-mill complicatedness but still handle-able (as above), and identify the main features. This might be appropriate within a typical commercial organisation. But outside such conservative settings one has some potential issues:

  • It might not be possible to resolve a problem without going to the edge of chaos, and solutions might involve ‘leaps of faith’ through some chaos.
  • The current situation might not be stable, so there is nothing to return to with ‘safe to fail’.
  • Stability might not be desirable: one might want to survive in a hostile situation, which might depend on agility.
  • The situation might be complex or complicated (or complex in different ways) depending on where you think the problem lies, or on what your strategy might be.

Examples

Economics

We wish economies to be ‘managed’ in the sense that we might intervene to promote growth while minimising risk. The Cynefin framework might be applied as follows:

  • Many commentators and even some economists and responsible officials seem to view the problem as simple. E.g., sense the debt, categorise it as ‘too much’, respond according to dogma.
  • Other commentators, and many who make money from financial markets, seem to see the economy as complicated: sense lots of data in various graphs, analyse and respond. Each situation has some novelty, but can be fitted into their overall approach.
  • Many commentators, some economists and many politicians seemed entranced by ‘the great moderation’, which seemed to guarantee a permanent stability, so that the economy was not chaos but was ‘at worst’ complex. Many of those involved seemed to appreciate the theoretical need for probe-sense-respond, but it became difficult (at least in the UK) to justify action (probes) for which one could not make a ‘business case’, because there may be no benefit other than the lessons identified and the reduction of options. Hence there was an inability to treat things as complex, leading to chaos.
  • Chaos (innovation) had been encouraged at the micro level in the belief that it could not destabilise the macro. But over 2007/8 it played a role in bringing down the economy. This led to activity that could be categorised as act (as a Keynesian), sense (what the market makers think), respond (with austerity).

Here one may note

  • That different parts and levels of the economy could be in different parts of the framework, and that one should consider influences between them.
  • The austerity option is simple, so chaos was reduced to simple directly, whereas a more Keynesian response would have been complex.
  • Whilst the austerity option is economically simple, it may lead to complex or chaotic situations elsewhere, e.g. in the social sphere.

Crisis Management

Typically, potential crises are dealt with in the first place by appropriate departments, who are typically capable of handling simple and complicated situations, so that a full-blown crisis is typically complex or chaotic. If a situation is stable then one might think that the time pressure would be reduced, and so the situation would be less of a crisis. One can distinguish two time-scales:

  • a situation is stable in the short term, but may suddenly ‘blow up’
  • a situation is stable in the long term

and two notions of stability:

  • all indicators are varying around a constant mean
  • some aspects may be varying around a mean that is changing steadily but possibly rapidly (e.g. linearly or exponentially), but ‘the essential regulatory system’ is stable.

Thus one might regard a racing car as stable ‘in itself’ even as it races and even if it might crash. Similarly, a nuclear reactor that is in melt-down is stable in some sense: the nature of the crisis is stable, even if contamination is spreading.

With these interpretations, many crises are complex or disordered. If the situation is chaotic one might need some decisive action to stabilise it. If it is disordered then as a rule of thumb one might treat it as chaotic: the distinction seems slight, since there will be no time for navel-gazing.

In many crises there will be specialists who, by habit or otherwise, will want to treat the problem as merely complicated, applying their nostrums. Such actions need to be guarded and treated as probes, in the way a parent might watch over an over-confident child, unaware of the wider risks. Thus what appears to be sense-analyse-respond may be guarded to become probe-sense-respond.

In some cases a domain expert may operate effectively in a complex situation and might reasonably be given licence to do so, but as the situation develops one needs to be clear where responsibility for the beyond-complicated aspects lies. A common framework, such as Cynefin, would seem essential here.

In other cases a ‘heroic leader’ may be acting to bring order to chaos, but others may be quietly taking precautions in case it doesn’t come off, so that the distinction between ‘act-sense-respond’ and ‘probe-sense-respond’ may be subjective.

Quibbles

I may turn these notes into a graphic.

It seems to me that, with experience, one will often be able to judge that a situation is going to be simple, complicated or worse, but not whether it is going to be complex or chaotic. Moreover, the process can be much more iterative than the one-shot sequences suggest. Thus in the complex case we may have a series of probes, {probe}, leading to sense being made and action that improves the situation but which typically leaves a less problematic complex, complicated or simple problem. Thus the complex part is {probe}-sense-respond, followed by others, to give {{probe}-sense-respond} [{complicated/simple}], with – in practice – some mis-steps leading to the problem actually getting worse, hence {{{probe}-sense-respond} [{complicated/simple}]}. The complicated is then {sense-analyse-respond}[{simple}] and simple is typically {sense-categorise-respond}: even simple is not often a one-shot activity.

With the above understanding, we can represent chaotic as a failure of the above. We start by probing and trying to make sense, but, failing that, we have to take a ‘shaping’ action. If this succeeds, we have a complex situation at worst. If not, we have to try again. Thus we have:

while complex fails: shape

Here I take the view that once we have found the situation to be beyond our sense-making resources we should treat it as if it is complex. If it turns out to be merely complicated or simple, so much the better: our ‘response’ is not an action in the ‘real’ world but simply a recognition of the type of situation and a selection of the appropriate methods.
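As a minimal sketch (not part of the original framework), this reading can be written as a loop; the probe, sense-making, shaping and response functions are placeholders for whatever the situation actually calls for:

    def handle(situation, probe, make_sense, shape, respond, max_attempts=10):
        # Treat anything beyond our sense-making resources as (at worst) complex:
        # keep probing; if no sense can be made, take a shaping action and try again.
        for _ in range(max_attempts):
            observation = probe(situation)              # a 'safe to fail' probe
            understanding = make_sense(observation)
            if understanding is None:                   # chaotic: sense-making failed
                situation = shape(situation)            # decisive action 'from above'
                continue
            return respond(situation, understanding)    # now complex, complicated or simple
        raise RuntimeError("situation still chaotic after repeated shaping")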

My next quibble is on the probing. This implies taking an action which is ‘safe-to-fail’. But, particularly after taking a shaping action one may need to bundle the probe with some constraining activity, which prevents the disturbance from the probe from spreading. Also, part of the shaping may be to decouple parts of the system being studied so that probes become safe-to-fail.

Overall, I think a useful distinction is between situations where one can probe-sense-respond and those that call for interventions (‘shape’) that create the conditions for probing, analysing or categorising. Perhaps the distinction is between activities normally conducted by managers (complex at worst) and those that are normally conducted by CEOs, leaders etc., and hence outside the management box. Thus the management response to chaos might call for an act ‘from above’.

Conclusion

Cynefin provides a sense-making framework, but if one is in a complex situation one may need a more specific framework, e.g. for complexity or for chaos/complexity. Outside routine management situations the chaos / complexity distinction may need to be reviewed. The distinction between probe-sense-respond and act-sense-respond seems hard to make in advance.

Dave Marsay

See also

Induction and epochs

 

Illustrations of Uncertainty

Some examples of uncertainty, based on those invented by others. As such, they are simpler than real examples. See Sources of uncertainty for an overview of the situations and factors referred to.

Pirates: Predicting the outcome of a decision that you have yet to make

Jack Sparrow can’t predict events that he can influence. Here we generalise this observation, revealing limits to probability theories.

In ‘Pirates of the Caribbean’ the hero, Captain Jack Sparrow, mocks the conventions of the day, including probability theory. In ‘On Stranger Tides’ when asked to make a prediction he says something like ‘I never make a prediction on something that I will be able to influence’. This has a mundane interpretation (even he can’t predict what he will do). But it also suggests the following paradox.

Let {Ei} be a set of possible future events dependent on a set, {Dj}, of possible decisions. Then, according to probability theory, for each i, P(Ei) ≡ ∑j P(Ei|Dj)·P(Dj).

Hence to determine the event probabilities, {P(Ei)}, we need to determine the decision probabilities, {P(Dj)}. This seems straightforward if the decision is not dependent on us, but is problematic if we are to make the decision.
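As a minimal sketch (not from the original post; the numbers are purely illustrative), the dependence of the event probabilities on the decision probabilities can be made concrete:

    # Law of total probability: P(Ei) = sum over j of P(Ei|Dj) * P(Dj).
    # Illustrative numbers only: two decisions, two events.
    p_event_given_decision = {
        ("good outcome", "operate"): 0.7,
        ("good outcome", "wait"): 0.4,
        ("bad outcome", "operate"): 0.3,
        ("bad outcome", "wait"): 0.6,
    }

    def event_probability(event, p_decision):
        # P(event), given a probability distribution over our own decisions.
        return sum(p_event_given_decision[(event, d)] * p
                   for d, p in p_decision.items())

    # If we change our mind about P(Dj) - with no new evidence - P(Ei) changes too.
    print(event_probability("good outcome", {"operate": 0.5, "wait": 0.5}))  # 0.55
    print(event_probability("good outcome", {"operate": 0.9, "wait": 0.1}))  # 0.67

Changing {P(Dj)} between the two calls changes P(Ei) even though no new evidence has arrived, which is the difficulty discussed next.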

According to Bayes’ rule the probability of an event only changes when new evidence is received. Thus if we consider a decision to have a particular probability it is problematic if we change our mind without receiving more information.

As an example, suppose that an operation on our child might be beneficial, but has to be carried out within the next hour. The pros and cons are explained to us, and then we have an hour to decide, alone (in an age before modern communications). We are asked how likely we are to go ahead initially, then at half an hour, then for our final decision. It seems obvious that {P(Dj)} would most likely change, if only to become more definite. Indeed, it is in the nature of making a decision that it should change.

From a purely mathematical perspective, there is no problem. As Keynes emphasized, not all future events can be assigned numeric probabilities: sometimes one just doesn’t know. ‘Weights of evidence’ are more general. In this scenario we can see that initially {P(Dj)} would be based on a rough assessment of the evidence, with the rest of the time spent weighing things up more carefully, until finally the pans tip completely and one has a decision. The concept of probability, beyond weight of evidence, is not needed to make a decision.

We could attempt to rescue probabilities by supposing that we only take account of probability estimates that take full account of all the evidence available. Keynes does this, by taking probability to mean what a super-being would make of the evidence; but then our decision-maker is not a super-being, and so we can only say what the probability distribution should be, not what it is ‘likely’ to be. More seriously, in an actual decision such as this the decision-makers will be considering how the decision can be justified, both to themselves and to others. Justifications often involve stories, and hence are creative acts. It seems hard to see how an outsider, however clever, could determine what should be done. Thus even a Keynesian logical probability does not seem applicable.

Area

Wittgenstein pointed out that if you could arrange for darts to land with a uniform probability distribution on a unit square, then the probability of the dart landing on a sub-set of the square would equal its area, and vice-versa. But some sub-sets are not measurable (e.g. Vitali sets, whose construction relies on the axiom of choice), so some (admittedly obscure) probabilities would be paradoxical if they existed.

Cabs

Tversky and Kahneman, working on behavioural economics, posed what is now a classic problem:

A cab was involved in a hit-and-run accident at night. Two cab companies, the Green and the Blue, operate in the city. You are given the following data:
(i) 85% of the cabs in the city are Green and 15% are Blue;
(ii) A witness identified the cab as a Blue cab.
The court tested his ability to identify cabs under the appropriate visibility conditions. When presented with a sample of cabs (half of which were Blue and half of which were Green) the witness made correct identifications in 80% of the cases and erred in 20% of the cases.

Question: What is the probability that the cab involved in the accident was Blue rather than Green?

People generally say 80%, whereas Kahneman and Tversky, taking account of the base rate using Bayes’ rule, gave 41%. This is highly plausible and generally accepted as an example of the ‘base rate fallacy’. But this answer seems to assume that the witness is always equally accurate against both types of cab, and from an uncertainty perspective we should challenge all such assumptions.

If the witness has lived in an area where most cabs are Green then they may tend to call cabs Green when they are in doubt, and only call them Blue when they are clear. When tested they may have stuck with this habit, or may have corrected for it. We just do not know. It is possible that the witness never mistakes Green for Blue, and so the required probability is 100%. This might happen if, for example, the Blue cabs had a distinctive logo that the witness (who might be colour-blind) used as a recognition feature. At the other extreme (if, say, it were the Green cabs that had a distinctive logo, so that Blue cabs are never mistaken for Green), the required probability is 31%.
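A minimal sketch of the arithmetic (not from the original post; the error-rate splits correspond to the assumptions just discussed):

    def p_blue_given_says_blue(p_says_blue_if_blue, p_says_blue_if_green,
                               base_rate_blue=0.15):
        # Bayes' rule: P(cab is Blue | witness says Blue).
        numerator = p_says_blue_if_blue * base_rate_blue
        denominator = numerator + p_says_blue_if_green * (1 - base_rate_blue)
        return numerator / denominator

    # Standard reading: the witness is 80% reliable against both colours.
    print(p_blue_given_says_blue(0.8, 0.2))  # ~0.41
    # Witness never calls a Green cab Blue; to keep 80% overall accuracy on the
    # 50/50 test sample, only 60% of Blue cabs can have been recognised.
    print(p_blue_given_says_blue(0.6, 0.0))  # 1.0
    # Witness never calls a Blue cab Green; then 40% of Green cabs must have
    # been miscalled Blue to give 80% overall accuracy.
    print(p_blue_given_says_blue(1.0, 0.4))  # ~0.31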

Finally, a witness would normally have the option of saying that they were not sure. In this case it might be reasonable to suppose that they would only say that the cab was Blue if – after taking account of the base rate – the probability was reasonably high, say 80%. Thus an answer of 80% seems more justifiable than the official answer of 41%, but it might be better to give a range of answers for different assumptions, which could then be checked. (This is not to say that people do not often neglect the base rate when they shouldn’t, but simply to say that the normative theory that was being used was not fully reliable.)

Tennis

Gärdenfors, Peter & Nils-Eric Sahlin. 1988. Decision, Probability, and Utility includes the following example:

Miss Julie … is invited to bet on the outcome of three different tennis matches:

  • In Match A, Julie is well-informed about the two players. She predicts that the match will be very even.
  • In Match B, Julie knows nothing about the players.
  • In Match C, Julie has overheard that one of the players is much better than the other but—since she didn’t hear which of the players was better—otherwise she is in the same position as in Match B.

Now, if Julie is pressed to evaluate the probabilities she would say that in all three matches, given the information she has, each of the players has a 50% chance of winning.

Miss Julie’s uncertainties, following Keynes, are approximately [0.5], [0,1] and {0,1}. That is, they are like those of a fair coin, a coin whose bias is unknown, or a coin that is two-sided, but we do not know if it is ‘heads’ or ‘tails’. If Miss Julie is risk-averse she may reasonably prefer to bet on match A than on either of the other two.

The difference can perhaps be made clearer if a friend of Miss Julie’s, Master Keynes, offers an evens bet on a match, as he always does. For match A Miss Julie might consider this fair. But for matches B and C she might worry that Master Keynes may have some additional knowledge and hence an unfair advantage.

Suppose now that Keynes offers odds of 2:1. In match A this seems fair. In match C it seems unfair, since if Keynes knows which player is better he will still have the better side of the bet. In match B things are less clear. Does Keynes know Miss Julie’s estimate of the odds? Is he under social pressure to make a fair, perhaps generous, offer? In deciding which matches to bet on, Miss Julie has to consider very different types of factor, so in this sense ‘the uncertainties are very different’.

(This example was suggested by Michael Smithson.) 
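A minimal sketch of why the three matches feel different despite the same 50% point estimate (not from the original; the 2:1 payoff convention and the interval representations are my assumptions):

    # Represent Julie's uncertainty that 'her player wins' as a range of
    # admissible probabilities: match A ~ [0.5, 0.5], matches B and C ~ [0, 1].
    # Assume a bet at odds 2:1: stake 1, net +2 if her player wins, -1 if not.

    def expected_value(p):
        return 2 * p - 1 * (1 - p)  # = 3p - 1

    def worst_case_ev(p_low, p_high):
        # A risk- (ambiguity-) averse bettor looks at the worst admissible case.
        return min(expected_value(p_low), expected_value(p_high))

    print(worst_case_ev(0.5, 0.5))  # match A:  0.5 - attractive
    print(worst_case_ev(0.0, 1.0))  # match B: -1.0 - the whole stake is at risk
    print(worst_case_ev(0.0, 1.0))  # match C: -1.0 - and Keynes may know which side is better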

Shoes

If a group have a distinctive characteristic, then the use of whole population likelihoods for an in-group crime is biased against a suspect.

For example, suppose that a group of 20 social dancers all wear shoes supplied by X all the time. One of them murders another, leaving a clear shoe-mark. The police suspect Y and find matching shoes. What is the weight of evidence?

If the police fail to take account of the strange habits of the social group, they may simply note that X supplies 1% of the UK’s shoes, and use that to inform the likelihood, yielding moderate evidence against Y. But the most that one should deduce from the evidence is that the murderer was likely to be one of the dance group.
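A minimal sketch of the two likelihood ratios (not from the original post; it treats the shoe-mark as the only evidence):

    # Likelihood ratio = P(matching shoes | Y is the murderer) /
    #                    P(matching shoes | Y is innocent).

    # Whole-population reasoning: a random innocent person has a 1% chance of
    # wearing shoes from X, so the match looks like evidence against Y.
    lr_whole_population = 1.0 / 0.01   # 100

    # Within the dance group everyone wears X shoes all the time, so the match
    # is certain whether or not Y is the murderer: it tells us nothing about Y.
    lr_within_group = 1.0 / 1.0        # 1

    # What the mark does support is 'the murderer was one of the 20 dancers'
    # rather than a random member of the public - and that support is shared
    # equally by all 20 of them.
    lr_group_vs_public = 1.0 / 0.01    # 100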

The problem here is that many (most?) people do belong to some group or groups with whom they share distinctive characteristics.

More illustrations

Yet to be provided.

See also

mathematics, paradoxes.

Dave Marsay

Sources of Uncertainty

What causes uncertainty, beyond straightforward numeric probability?

Conditions

Uncertainty can affect either ‘prior probabilities’ or likelihoods. Both may be uncontentious when:

  • One is working in an area where one has a proven track record at estimating probabilities.

Prior Probabilities are relatively uncontentious when:

  • One has good reason to suppose that one has a genuinely random sample from a population for which one has a good statistical calculation.
  • One has good reason to suppose that certain possibilities are equally uncertain (so that one can apply the principle of indifference).

Likelihoods are relatively uncontentious when:

  • The processes being observed are constrained and routine.
  • The hypothesis being considered is precise enough to determine meaningful likelihoods without undue averaging over cases.

Otherwise, the ability to estimate probabilities is questionable, so that one has reason to be uncertain about any estimate. There is a difference of opinion as to whether in such circumstances one should nonetheless make the best estimate one can, and live with the consequences, or take explicit account of uncertainty, and if so whether one simply performs a sensitivity analysis, varying the estimates, or if one needs a more ‘forensic’ approach.

Factors

Some of the things that can specifically contribute to uncertainty are as follows:

Complexity

If the situation is complex, it is hard to have confidence in any estimates. In particular, complexity can give rise to innovation.

Reflexivity

If the probability estimate is being made in support of a decision that will impact upon the situation being observed, it may be ‘reflexive’ in the sense that what may happen depends upon the decision being made, which depends upon the estimate.

Source reliability

The likelihood of a source stating that something is the case is not the same as the likelihood of that statement. ‘They would say that, wouldn’t they’.

Vagueness

In assessing a collection of evidence against a hypothesis it is common to assess each item individually and then to ‘fuse’ the assessments, to establish the overall probability. When assessing a vague hypothesis this can lead to an over-estimate of the probability.

Impact on Reasoning

With probabilistic reasoning, as one gets more relevant evidence the probability assigned to the truth will converge to 1. Hence the greater the evidence that points to a conclusion, the more one tends to suppose it to be valid. Sensitivity analysis will tend to discount some evidence, but it remains true that the more evidence one has, the more certain one supposedly can be of the result.
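A minimal sketch of this convergence (not from the original; note that it assumes the set of hypotheses contains the truth, which is exactly what may fail in the situations discussed above):

    import random

    random.seed(0)
    hypotheses = {"fair": 0.5, "biased": 0.6}   # candidate models of a coin
    true_bias = 0.6                             # the 'truth' is in the set

    for flips in (10, 100, 1000):
        posterior = {"fair": 0.5, "biased": 0.5}
        for _ in range(flips):
            heads = random.random() < true_bias
            for h, p_heads in hypotheses.items():
                posterior[h] *= p_heads if heads else (1 - p_heads)
            total = sum(posterior.values())
            posterior = {h: v / total for h, v in posterior.items()}
        print(flips, round(posterior["biased"], 3))
    # With enough evidence the posterior on the true hypothesis approaches 1 -
    # provided the model family was right in the first place.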

See also

Real examples

Dave Marsay

Critique of Pure Reason

I. Kant’s Critique of Pure Reason, 2nd ed., 1787.

See new location.

David Marsay

Examples of Uncertainty in Real Decisions

Uncertainty, beyond that of numeric probability, is apparent in many familiar decisions. Here the focus is on those that may be familiar, where overlooked uncertainty seems to have led to important mistakes. See Sources of Uncertainty for an overview of the situations and factors considered.

Financial crash

Before the financial crash of 2007/8 finance was largely considered from the point of view that risk is variability. Keynes was ignored, both his economics and his mathematics of uncertainty and risk. After the crash Keynes’ economics and Keynesian economics came to the fore, and his ‘Knightian uncertainty’ became more widely recognized. It is perhaps clear that the conditions and factors above – largely based on Keynes’ work – were operative. An approach to uncertainty that sought to uncover the key factors might have been more helpful than thinking of them as sources of variability and probability distributions.

UK Miscarriages of Justice

Emotion and assessment

The UK’s most notorious miscarriages of justice often share some of the following characteristics:

An event evokes public outrage (and hence tends to be rare). There is intense pressure to find and punish those guilty. Suspects who lie outside the mainstream of society are found.

Thus one tends not to have the conditions that support reliable probability judgements.

In the Birmingham Six case, a key piece of evidence was a forensic test that showed that one of them had handled explosives ‘with a 99% certainty’. An appeal was turned down on these reflexive grounds:

“If they won, it would mean that the police were guilty of perjury; that they were guilty of violence and threats; that the confessions were involuntary and improperly admitted in evidence; and that the convictions were erroneous. That would mean that the Home Secretary would have either to recommend that they be pardoned or to remit the case to the Court of Appeal. That was such an appalling vista that every sensible person would say, ‘It cannot be right that these actions should go any further.’”

In their final appeal it was recognized that a similar forensic result could have been obtained if the suspect had handled playing cards. Similar forensic problems bedevilled other cases, such as the Maguire seven.

Bayesian reasoning

The case R v T has raised some relatively mundane issues of estimation. The weight of evidence depends on an estimate of the likelihood of the evidence supposing that the suspect is innocent. In R v T footmarks were found at the scene of a murder that matched an associate’s shoes. The original forensic scientist used an approximation to whole-population statistics for the prevalence of the shoes. But for many crimes the perpetrators are likely to be drawn from some local population whose members are more similar to each other than to the general population, so typical forensic evidence is likely to be more probable for the appropriate population than for the population as a whole: if the print of a particular shoe is found, then that shoe is likely to be more common among the associates of the victim than in the population as a whole.

Weapons of Mass Destruction

Most westerners, at least, regarded it as probable or highly probable that Saddam Hussein had WMD, leading to the decision to invade Iraq, after which none were found. From a probability perspective this may seem to be just bad luck. But it does seem odd that an assessment made on such a large and wide evidence base was so wrong.

This is clearly an area where probability estimation doesn’t meet the conditions to be non-contentious: Saddam was not a randomly selected dictator. Thus one might have been prompted to look for the specific factors. There was some evidence, at the time, of:

  • complexity, particularly reflexivity
  • vagueness
  • source unreliability (widely blamed).

This might have prompted more detailed consideration, for example, of Saddam’s motivation: if he had no WMD, what did he have to lose by letting it be known? It seems unlikely that a routine sensitivity analysis would have been as insightful.

Stockwell

Two weeks after London’s 7/7 bombings and a day after an attempted bombing, Jean Charles de Menezes was mistaken for a bomber and shot at Stockwell tube station. This case has some similarities to miscarriages of justice. As the Gold Commander made clear at the inquest, the key test was the balance of probability between the suspect being about to cause another atrocity and an innocent man being killed. The standard is thus explicitly probabilistic rather than being one of ‘reasonable doubt’.

The suspect was being followed by ‘James’s team’, and James said that ‘it was probably him [the known terrorist]’. From then on nothing suggested the suspect’s innocence, and he was shot before he could blow himself up.

The inquest did not particularly criticise any of those involved, but from an uncertainty perspective the following give pause for thought:

  • the conditions were far from routine.
  • there were some similarities with known miscarriages of justice in terrorist cases
  • the specific factors above were present

More particularly:

  • The Gold Commander had access to relevant information that James lacked, which appears not to have been taken into account.
  • James regarded the request for a ‘probability assessment’ (as against hard evidence) as improper, and only provided one under pressure.
  • In assessing probability, nothing that James’ team had seen (apart from some nervousness) suggested that the suspect was a terrorist. The main thing they had been told was that the suspect had come out of the flat of the known terrorist, but by then the Gold Commander knew that the terrorist’s flat had a shared doorway, so the probability assessment should have been reduced accordingly.
  • Those who shot the suspect were relying on James’ judgement, but were unaware of the circumstances in which he had given it.

With hindsight it may be significant that:

  • The suspect had got off the bus at Brixton, found the station to be closed, and got back on. The station was closed due to a security alert, but – not knowing this – the behaviour may have seemed to be anti-surveillance. [The inquest found that this innocent behaviour did not contribute to the death.]
  • The Gold Commander was in a reflexive situation: if the suspect was not shot then it must have been assessed that ‘on the balance of probability’ the suspect was innocent, in which case he ought not to have been followed.

Time was pressing, but a fuller consideration of uncertainty might have led to:

  • James being asked to supply descriptions of, and/or likelihoods for, what he had seen against the terrorist and innocent hypotheses, rather than ‘final’ probabilities.
  • Consideration being given to innocent explanations for the suspect’s behaviour

More

Ulrich Beck opined (1992) that the Knightian ‘true uncertainty’ aspects of risk, particularly the reflexive ones, are being mishandled, with widespread adverse consequences. Naomi Klein has a similar view. Here are some relatively mundane specifics.

Economic Recovery from 2007/8

Robert Skidelsky, an advocate of Keynes and his view of uncertainty, has noted:

Keynes thought that the chief implicit assumption underlying the classical theory of the economy was that of perfect knowledge. “Risks,” he wrote, “were supposed to be capable of an exact actuarial computation. The calculus of probability … was supposed to be capable of reducing uncertainty to the same calculable status as certainty itself.”

For Keynes, this is untenable: “Actually…we have as a rule only the vaguest idea of any but the most direct consequences of our acts.” This made investment, which is always a bet on the future, dependent on fluctuating states of confidence. Financial markets, through which investment is made, were always liable to collapse when something happened to disturb business confidence. Therefore, market economies were inherently unstable.

Unless we start discussing economics in a Keynesian framework, we are doomed to a succession of crises and recessions. If we don’t, the next one will come sooner than we think.

Climate Change

Much of the climate change ‘debate’ seems to be being driven by preconceived ideas and special interests, but these positions tend to align with different views on uncertainty.

Mobile phone cancer risk

The International Agency for Research on Cancer (IARC), part of the World Health Organization (WHO), has issued a press release stating that it:

has classified radiofrequency electromagnetic fields as possibly carcinogenic to humans (Group 2B), based on an increased risk for glioma, a malignant type of brain cancer, associated with wireless phone use.

…. The conclusion means that there could be some risk, and therefore we need to keep a close watch for a link between cell phones and cancer risk.”

Where: Group 2B; Possibly carcinogenic to humans: “This category is used for agents for which there is limited evidence of carcinogenicity … .” Thus it is possible that there is no carcinogenicity.

The understanding uncertainty blog has noted how the British media has confused the issues, giving the impression that there was an increased risk of cancer. But from a probability perspective, what does ‘could be some risk’ mean? If the probability of a risk r is p(r) then (from a standard Bayesian viewpoint) the (overall) risk is ∫ p(r)·r dr, which is positive unless there is definitely no risk. Thus if ‘there could be some risk’ then there is some risk. On the other hand, if we assess the risk as an interval, [0, small], then it is clear that there could be no risk, but (as the IARC suggests) further research is required to reduce the uncertainty. The IARC’s statement is that:

The Working Group did not quantitate the risk; however, one study of past cell phone use (up to the year 2004), showed a 40% increased risk for gliomas in the highest category of heavy users (reported average: 30 minutes per day over a 10‐year period).

This is presumably the worst case to hand (balancing apparent effect and weight of evidence), so that (confusion of language apart) it is easy to interpret the release in terms of uncertainty, noting the link to heavy usage. It is unfortunate that the British media did not: maybe we do need a more nuanced language?
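A minimal sketch of the two readings (not from the original; the distribution over r is invented purely for illustration):

    # Bayesian point view: overall risk = integral of p(r) * r dr,
    # here discretised over a few possible risk levels r.
    p_of_r = {0.0: 0.7, 0.01: 0.2, 0.05: 0.1}   # 'probably harmless, possibly not'

    overall_risk = sum(p * r for r, p in p_of_r.items())
    print(overall_risk)     # ~0.007: positive unless p(r=0) is exactly 1

    # Interval view: report the range of risks consistent with the evidence.
    risk_interval = (min(p_of_r), max(p_of_r))
    print(risk_interval)    # (0.0, 0.05): 'there could be no risk' is expressible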

See Also

Reasoning under uncertainty methods, biases and uncertainty, metaphors, scaling

David Marsay

AV: Yes or No? A comparison of the Campaigns’ ‘reasons’

At last we have some sensible claims to compare, at the Beeb. Here are some comments:

YES Campaign

Its reasons

  1. AV makes people work harder
  2. AV cuts safe seats
  3. AV is a simple upgrade
  4. AV makes votes count
  5. AV is our one chance for a change

An Assessment

These are essentially taken from the all-party Jenkins Commission. The NO Campaign rejoinders seem to be:

  1. Not significantly so.
  2. Not significantly so.
  3. AV will require computers and £250M to implement (see below).
  4. AV makes votes count twice, or more (see below).
  5. Too right!

A Summary

Worthy, but dull.

Addenda

I would add:

  • There would be a lot less need for tactical voting
  • The results would more reliably indicate people’s actual first preferences
  • It would be a lot easier to vote out an unpopular government – no ‘vote splitting’
  • It would make it possible for a new party to grow support across elections to challenge the status quo.
  • It may lead to greater turnout, especially in seats that are currently safe

NO Campaign Reasons

AV is unfair

Claim

“… some people would get their vote counted more times than others. For generations, elections in the UK have been based on the fundamental principle of ‘one person, one vote’. AV would undermine all that by allowing the supporters of fringe parties to have their second, third or fourth choices counted – while supporters of the mainstream candidates would only get their vote counted once.”

Notes

According to the Concise OED a vote is ‘a formal expression of will or opinion in regard to election of … signified by ballot …’. Thus the Irish, Scottish, Welsh and Australians, who cast similar ballots to AV, ‘have one vote’. The NO Campaign’s use of the term ‘counted’ is also confusing. The general meaning is a ‘reckoning’, and in this sense each polling station has one count per election, and this remains true under AV. A peculiarity of AV is that ballots are also counted in the sense of ‘finding the number of’. (See ‘maths of voting’ for more.)

Assessment

There is no obvious principle that requires us to stick with FPTP: all ballots are counted according to the same rules.

Should ‘supporters of fringe parties’ have their second preferences counted? The ‘fringe’ includes:

  • Local candidates, such as a doctor trying to stop the closure of a hospital
  • The Greens
  • In some constituencies, Labour, LibDem, Conservative.

AV is blind to everything except how voters rank the candidates. Consider an election in which the top three candidates get 30%, 28% and 26%, with some also-rans. According to the NO campaign the candidate with the narrow lead should be declared the winner. Thus they would disregard the preferences of anyone who votes for their hospital (say). Is this reasonable?
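A minimal sketch of how an AV (instant run-off) count proceeds (not from the original post; the ballots are invented for illustration):

    from collections import Counter

    def av_count(ballots):
        # Instant run-off: repeatedly eliminate the last-placed candidate and
        # transfer each of their ballots to its next surviving preference.
        remaining = {c for ballot in ballots for c in ballot}
        while True:
            tallies = Counter()
            for ballot in ballots:
                for choice in ballot:            # each ballot counts once,
                    if choice in remaining:      # for its top surviving choice
                        tallies[choice] += 1
                        break
            leader, votes = tallies.most_common(1)[0]
            if votes * 2 > sum(tallies.values()):  # majority of continuing ballots
                return leader
            remaining.discard(min(tallies, key=tallies.get))

    # Invented ballots: 30 A, 28 B, 26 C, 16 D (D's supporters prefer C next).
    ballots = [("A",)] * 30 + [("B",)] * 28 + [("C",)] * 26 + [("D", "C")] * 16
    print(av_count(ballots))  # C - the narrow leader A need not win once preferences transfer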

AV is not widely used

True-ish, but neither is FPTP (in terms of countries – one of them is large), and variants of AV (IRV, STV, …) together are the most widely used.

AV is expensive

Countries that use AV do not need election machinery (computers). Australian elections may cost more than ours, but Australia is a much bigger country with a smaller population.

AV hands more power to politicians

See the Jenkins Commission.

AV supporters are sceptical

Opposition to FPTP is split between variants of AV, with single-member constituencies and forms of PR. The Jenkins Commission recommended AV+, seeking to provide the best of both. The referendum is FPTP and hence can only cope with two alternatives: YES or NO.

I don’t know that AV supporters are sceptical about a move away from FPTP – they just differ on what would be ideal.

Addenda

  • The NO campaign is playing down the ‘strong and stable government’ argument. The flip side is that an unpopular government can survive.
  • A traditional argument for FPTP was that it encourages tactical voting and hence politicking, and hence develops tough leaders, good at dealing with foreigners. We haven’t heard this, this time. Maybe the times are different?

See Also

AV: the worst example

According to the NO campaign the Torfaen election shows AV in the worst light. Labour won with 44.8%, followed by Conservative (20%), LibDem (16.6%) and 6 more (5.3% or less each). The No campaign claim that under AV the 8th-placed candidate, an Independent, could have won. But for that to happen Labour would have had to pick up less than 5.3% from the other candidates, including the LibDems, and the Independent would have had to be ranked higher than the others by a majority. In particular, the Independent could not have won without support from Conservative voters.

Is it reasonable for Conservatives to complain?:

  • Conservative votes contributed to the victory.
  • Don’t the Conservatives prefer this to Labour?

It is also worth noting that the Independent would have to have picked up most second-rank votes from the Greens and UKIP, and so on, which also seems unlikely.

See Also

AV pros and cons

Dave Marsay