Probability as a guide to life

‘Probability is the very guide to life.’

Cicero may have been right, but ‘probability’ means something quite different nowadays to what it did millennia ago. So what kind of probability is a suitable guide to life, and when?

Suppose that we are told that ‘P(X) = p’. Often there is some implied real or virtual population, P, a proportion ‘p’ of which has the property ‘X’. To interpret such a probability statement we need to know what the relevant population is. Such statements are then normally reliable. More controversial are conditional probabilities, such as ‘P(X|Y) = p’. If you satisfy Y, does P(X)=p ‘for you’?

Suppose that:

  1. All the properties of interest (such as X and Y) can be expressed as unions of elements of some disjoint basis, B.
  2. For all such basis properties, B, P(X|B) is known.
  3. The conditional probabilities of interest are derived from the basis properties in the usual way. (E.g., P(X|B1∪B2) = (P(B1)·P(X|B1) + P(B2)·P(X|B2))/P(B1∪B2), as in the sketch below.)
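
As a minimal sketch of this construction (the basis properties and all the numbers below are invented for illustration):

```python
# The basis properties B1..B3 and all numbers are invented for illustration.
P_B = {"B1": 0.3, "B2": 0.5, "B3": 0.2}          # P(B) for each basis property
P_X_given_B = {"B1": 0.9, "B2": 0.1, "B3": 0.5}  # P(X|B), assumed known

def p_x_given_union(basis):
    """P(X | union of disjoint basis properties), derived in the usual way."""
    p_union = sum(P_B[b] for b in basis)
    weighted = sum(P_B[b] * P_X_given_B[b] for b in basis)
    return weighted / p_union

print(p_x_given_union(["B1", "B2"]))  # (0.27 + 0.05) / 0.8 = 0.4
```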

The conditional probabilities constructed in this way are meaningful, but if we are interested in some other set, Z, that is not such a union of basis properties, the conditional probability P(X|Z) could take a range of values. But then we need to reconsider decision-making. Instead of maximising a probability (or utility), the following heuristics may apply:

  • If the range makes a significant difference, try to get more precise data. This may be done by taking more samples, or by refining the properties considered.
  • Consider the best outcome for the worst-case probabilities.
  • If the above is not acceptable, make some reasonable assumptions until an acceptable result is possible.

For example, suppose that there are some urns, each containing a mix of balls, some of which are white. We can choose an urn and then pick a ball at random. We want white balls. What should we do? The conventional rule consists of assessing the proportion of white balls in each urn and picking the urn with the highest proportion. This is uncontroversial if our assessments are reliable. But suppose we are faced with an urn with an unknown mix? Conventionally our assessment should not depend on whether we want to obtain or avoid a white ball. But if we want white balls the worst-case proportion is no white balls, and we avoid this urn; whereas if we want to avoid white balls the worst-case proportion is all white balls, and we again avoid this urn.
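
A minimal sketch of this worst-case rule, representing each urn by an interval of possible proportions of white balls (the intervals are invented; the urn with an unknown mix gets the vacuous interval [0, 1]):

```python
# Each urn is described by an interval [low, high] for the proportion of
# white balls; the numbers are illustrative only.
urns = {
    "urn_A": (0.4, 0.6),    # assessed fairly precisely
    "urn_B": (0.2, 0.3),
    "unknown": (0.0, 1.0),  # no grounds for assessing the proportion
}

def best_urn(want_white):
    """Pick the urn with the best worst-case proportion."""
    if want_white:
        # worst case for a white-seeker is the lower bound
        return max(urns, key=lambda u: urns[u][0])
    # worst case for a white-avoider is the upper bound
    return min(urns, key=lambda u: urns[u][1])

print(best_urn(want_white=True))   # urn_A: the unknown urn is avoided
print(best_urn(want_white=False))  # urn_B: the unknown urn is avoided again
```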

If our assessments are not biased then we would expect to do better with the conventional rule most of the time and in the long run. For example, if the non-white balls are black, and urns are as likely to be filled with black balls as with white, then assessing that an urn with unknown contents has half white balls is justified. But in other cases we just don’t know, and by choosing this urn we could do consistently badly. There is a difference between an urn whose contents are unknown, but for which you have good grounds for estimating the proportion, and an urn where you have no grounds for assessing the proportion.

If precise probabilities are to be the very guide to life, it had better be a dull life. For more interesting lives imprecise probabilities can be used to reduce the possibilities. It is often informative to identify worst-case options, but one can be left with genuine choices. Conventional rationality is the only way to reduce living to a formula: but is it such a good idea?

Dave Marsay


Artificial Intelligence?

The subject of ‘Artificial Intelligence’ (AI) has long provided ample scope for inconclusive debates. Wikipedia seems to have settled on a view that we may take as a straw man:

Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. [Dartmouth Conference, 1956] The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds. [John Searle’s straw-man hypothesis]

Readers of my blog will realise that I agree with Searle that his hypothesis is wrong, but for different reasons. It seems to me that mainstream AI (mAI) is about being able to take instruction. This is a part of learning, but by no means all. Thus – I claim – mAI is about a sub-set of intelligence. In many organisational settings it may be that sub-set which the organisation values. It may even be that an AI that ‘thought for itself’ would be a danger. For example, in old discussions about whether or not some type of AI could ever act as a G.P. (General Practitioner – first-line doctor) the underlying issue has been whether G.P.s ‘should’ think for themselves, or just apply their trained responses. My own experience is that sometimes G.P.s doubt the applicability of what they have been taught, and that sometimes this is ‘a good thing’. In effect, we sometimes want to train people, or otherwise arrange for them to react in predictable ways, as if they were machines. mAI can create better machines, and thus has many key roles to play. But between mAI and ‘superhuman intelligence’ there seems to be an important gap: the kind of intelligence that makes us human. Can machines display such intelligence? (Can people, in organisations that treat them like machines?)

One successful mainstream approach to AI is to work with probabilities, such as P(A|B) (‘the probability of A given B’), making extensive use of Bayes’ rule, and such an approach is sometimes thought to be ‘logical’, ‘mathematical’, ‘statistical’ and ‘scientific’. But, mathematically, we can generalise the approach by taking account of some context, C, using Jack Good’s notation P(A|B:C) (‘the probability of A given B, in the context C’). AI that is explicitly or implicitly statistical is more successful when it operates within a definite fixed context, C, for which the appropriate probabilities are (at least approximately) well-defined and stable. For example, training within an organisation will typically seek to enable staff (or machines) to characterise their job sufficiently well for it to become routine. In practice ‘AI’-based machines often show a little intelligence beyond that described above: they will monitor the situation and ‘raise an exception’ when the situation is too far outside what it ‘expects’. But this just points to the need for a superior intelligence to resolve the situation. Here I present some thoughts.
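
As a hedged sketch of Good’s notation (the contexts, events and numbers below are invented; a real system would learn such tables from data):

```python
# Context-indexed probability tables: P(a|b:c) is looked up per context,
# and an out-of-context query 'raises an exception' rather than guessing.
tables = {
    "clinic":   {("flu", "fever"): 0.3},
    "outbreak": {("flu", "fever"): 0.7},
}

def prob(a, b, context):
    """P(a|b:c): probability looked up per context; fails loudly outside it."""
    try:
        return tables[context][(a, b)]
    except KeyError:
        # the 'raise an exception' behaviour: defer to a superior intelligence
        raise ValueError(f"no stable probability for {(a, b)!r} in context {context!r}")

print(prob("flu", "fever", "clinic"))    # 0.3
print(prob("flu", "fever", "outbreak"))  # 0.7: same B, different context C
```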

When we state ‘P(A|B)=p’ we are often not just asserting the probability relationship: it is usually implicit that ‘B’ is the appropriate condition to consider if we are interested in ‘A’. Contemporary mAI usually takes the conditions as given, and computes ‘target’ probabilities from given probabilities. Whilst this requires a kind of intelligence, it seems to me that humans will sometimes also revise the conditions being considered, and this requires a different type of intelligence (not just the ability to apply Bayes’ rule). For example, astronomers who refine the values of relevant parameters are displaying some intelligence and are ‘doing science’, but those first in the field, who determined which parameters are relevant, employed a different kind of intelligence and were doing a different kind of science. What we need, at least, is an appropriate way of interpreting and computing ‘probability’ to support this enhanced intelligence.

The notions of Whitehead, Keynes, Russell, Turing and Good seem to me a good start, albeit they need explaining better – hence this blog. Economics may provide an example. The notion of probability routinely used would be appropriate if we were certain about some fundamental assumptions. But are we? At least we should realise that it is not logical to attempt to justify those assumptions by reasoning using concepts that implicitly rely on them.

Dave Marsay

What should replace utility maximization in economics?

Mainstream economics has been based on the idea of people producing and trading in order to maximize their utility, which depends on their assigning values and conditional probabilities to outcomes. Thus, in particular, mainstream economics implies that people do best by assigning probabilities to possible outcomes, even when there seems no sensible way to do this (such as when considering a possible crash). Ken Arrow has asked, if one rejects utility maximization, what should one replace it with?

The assumption here seems to be that it is better to have a wrong theory than to have no theory. The fear seems to be that economies would grind to a halt unless they were sanctioned by some theory – even a wrong one. But this fear seems at odds with another common view, that economies are driven by businesses, which are driven by ‘pragmatic’ men. It might be that without the endorsement of some (wrong) theory some practices, such as the development of novel financial instruments and the use of high leverage, would be curtailed. But would this be a bad thing?

Nonetheless, Arrow’s challenge deserves a response.

There are many variations in detail of utility maximization theories. Suppose we identify ‘utility maximization’ as a possible heuristic; utility maximization theory then claims that people use some specific heuristics, so an obvious alternative is to consider a wider range. The implicit idea behind utility maximization theory seems to be that, under a competitive regime resembling evolution, the evolutionarily stable strategies (‘the good ones’) do maximize some utility function, so that in time utility maximizers ought to come to dominate economies. (Maybe poor people do not maximize any utility, but they – supposedly – have relatively little influence on economies.) But this idea is hardly credible. If – as seems to be the case – economies have significant ‘Black Swans’ (low-probability, high-impact events) then utility maximizers who ignore the possibility of a Black Swan (such as a crash) will do better in the short term, and so the economy will become dominated by people with the wrong utilities. People with the right utilities would do better in the long run, but have two problems: they need to survive the short term and they need to estimate the probability of the Black Swan. No method has been suggested for doing this. An alternative is to take account of some notional utility but also take account of any other factors that seem relevant.
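
As a toy illustration of that short-run/long-run tension (all parameters – crash probability, returns, horizons – are invented, and median wealth is reported since a few lucky survivors distort the mean):

```python
import random, statistics

# Invented parameters: a 3% annual chance of a crash, a higher return for
# ignoring it, ruin for the unhedged when it happens.
random.seed(0)
P_CRASH, RUNS = 0.03, 2000

def final_wealth(years, hedged):
    w = 1.0
    for _ in range(years):
        if random.random() < P_CRASH:
            w *= 0.8 if hedged else 0.0    # hedger survives the crash; ignorer is ruined
        else:
            w *= 1.04 if hedged else 1.10  # hedging costs some return in normal years
    return w

for years in (5, 50):
    for hedged in (False, True):
        runs = [final_wealth(years, hedged) for _ in range(RUNS)]
        label = "hedged " if hedged else "ignorer"
        print(years, label, round(statistics.median(runs), 2))
# Typically: the ignorer looks better over 5 years, the hedger over 50.
```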

For example, when driving a hire-car along a winding road with a sheer drop I ‘should’ adjust my speed to trade time of arrival against risk of death or injury. But usually I simply reduce my speed to the point where the risk is slight, and accept the consequential delay. These are qualitative judgements, not arithmetic trade-offs. Similarly an individual might limit their at-risk investments (e.g. stocks) so that a reasonable fall (e.g. 25%) could be tolerated, rather than try to keep track of all the possible things that could go wrong (such as terrorists stealing a US Minuteman) and their likely impact.
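
A minimal sketch of such a threshold rule (the numbers are illustrative):

```python
# Cap at-risk holdings so that an assumed 'reasonable fall' stays within a
# tolerable overall loss, rather than optimising an expected utility over
# everything that could go wrong.
def max_at_risk_fraction(tolerable_loss, assumed_fall=0.25):
    """Largest fraction of wealth to hold at risk so that a fall of
    `assumed_fall` costs at most `tolerable_loss` of total wealth."""
    return min(1.0, tolerable_loss / assumed_fall)

print(max_at_risk_fraction(0.10))  # tolerate a 10% overall loss -> at most 40% in stocks
```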

More generally, we could suppose that people act according to their own heuristics, and that there are competitive pressures on heuristics, but not that utility maximization is necessarily ‘best’, or even that a healthy economy relies on most people having similar heuristics, or that there is some stable set of ‘good’ heuristics. All these questions (and possibly more) could be left open for study and debate. As a mathematician it seems to me that decision-making involves ideas, and that ideas are never unique or final, so that novel heuristics could arise and be successful from time to time. Or at least, the contrary would require an explanation. In terms of game theory, the conventional theory seems to presuppose a fixed single-level game, whereas – like much else – economies seem to have scope for changing the game and even for creating higher-level games, without limit. In that case the strategies must surely change, being created rather than drawn from a fixed set.

See Also

Some evidence against utility maximization. (Arrow’s response prompted this post).

My blog on reasoning under uncertainty with application to economics.

Dave Marsay

Evolution of Pragmatism?

A common ‘pragmatic’ approach is to keep doing what you normally do until you hit a snag, and (only) then to reconsider. Whereas Lamarckian evolution would lead to the ‘survival of the fittest’, with everyone adapting to the current niche, tending to yield a homogeneous population, Darwinian evolution has survival of the maximal variety of all those who can survive, with characteristics only dying out when they are not viable. This evolution of diversity makes for greater resilience, which is maybe why ‘pragmatic’ Darwinian evolution has evolved.

The products of evolution are generally also pragmatic, in that they have virtually pre-programmed behaviours which ‘unfold’ in the environment. Plants grow and procreate, while animals have a richer variety of behaviours, but still tend just to do what they do. But humans can ‘think for themselves’ and be ‘creative’, and so have the possibility of not being just pragmatic.

I was at a (very good) lecture by Alice Roberts last night on the evolution of technology. She noted that many creatures use tools, but humans seem to be unique in that at some critical population mass the manufacture and use of tools becomes sustained through teaching, copying and co-operation. It occurred to me that much of this could be pragmatic. After all, until recently development has been very slow, and so may well have been driven by specific practical problems rather than continual searching for improvements. Also, the more recent upswing of innovation seems to have been associated with an increased mixing of cultures and decreased intolerance for people who think for themselves.

In biological evolution mutations can lead to innovation, so evolution is not entirely pragmatic, but their impact is normally limited by the need to fit the current niche, so evolution typically appears to be pragmatic. The role of mutations is more to increase the diversity of behaviours within the niche, rather than innovation as such.

In social evolution there will probably always have been mavericks and misfits, but the social pressure has been towards conformity. I conjecture that such an environment has favoured a habit of pragmatism. These days, it seems to me, a better approach would be more open-minded, inclusive and exploratory, but possibly we do have a biologically-conditioned tendency to be overly pragmatic: to confuse conventions with facts and heuristics with laws of nature, and not to challenge widely-held beliefs.

The financial crash of 2008 was blamed by some on mathematics. This seems ridiculous. But the post Cold War world was largely one of growth with the threat of nuclear devastation much diminished, so it might be expected that pragmatism would be favoured. Thus powerful tools (mathematical or otherwise) could be taken up and exploited pragmatically, without enough consideration of the potential dangers. It seems to me that this problem is much broader than economics, but I wonder what the cure is, apart from better education and more enlightened public debate?

Dave Marsay



JIC, Syria and Uncertainty

This page considers the case that the Assad regime used gas against the rebels on 21st August 2013 from a theory of evidence perspective. For a broader account, see Wikipedia.

The JIC Assessment

The JIC concluded on the 27th that it was:

highly likely that the Syrian regime was responsible.

In the covering letter (29th) the chair said:

Against that background, the JIC concluded that it is highly likely that the regime was responsible for the CW attacks on 21 August. The JIC had high confidence in all of its assessments except in relation to the regime’s precise motivation for carrying out an attack of this scale at this time – though intelligence may increase our confidence in the future.

A cynic or pedant might note the caveat:

The paper’s key judgements, based on the information and intelligence available to us as of 25 August, are attached.

Mathematically-based analysis

From a mathematical point of view, the JIC report is an ‘utterance’, and one needs to consider the context in which it was produced. Hopefully, best practice would include identifying the key steps in the conclusion and seeking out and hastening any possible contrary reports. Thus one might reasonably suppose that the letter on the 29th reflected all obviously relevant information available up to the end of the 28th, but perhaps not some other inputs, such as ‘big data’, that only yield intelligence after extensive processing and analysis.

But what is the chain of reasoning (29th)?

It is being claimed, including by the regime, that the attacks were either faked or undertaken by the Syrian Armed Opposition. We have tested this assertion using a wide range of intelligence and open sources, and invited HMG and outside experts to help us establish whether such a thing is possible. There is no credible intelligence or other evidence to substantiate the claims or the possession of CW by the opposition. The JIC has therefore concluded that there are no plausible alternative scenarios to regime responsibility.

The JIC had high confidence in all of its assessments except in relation to the regime’s precise motivation for carrying out an attack of this scale at this time – though intelligence may increase our confidence in the future.

The report of the 27th is more nuanced:

There is no credible evidence that any opposition group has used CW. A number continue to seek a CW capability, but none currently has the capability to conduct a CW attack on this scale.

Russia claims to have a ‘good degree of confidence’ that the attack was an ‘opposition provocation’ but has announced that they support an investigation into the incident. …

In contrast, concerning Iraqi WMD, we were told that “lack of evidence is not evidence of lack”. But mathematics is not so rigid: it depends on one’s intelligence sources and analysis. Presumably in 2003 we lacked the means to detect Iraqi CW, but now – having learnt the lesson – we would know almost as soon as any one of a number of disparate groups acquires CW.  Many outside the intelligence community might not find this credible, leading to a lack of confidence in the report. Others would take the JIC’s word for it. But while the JIC may have evidence that supports their rating, it seems to me that they have not even alluded to a key part of it.

Often, of course, an argument may be technically flawed but still lead to a correct conclusion. To fix the argument one would want a much greater understanding of the situation. For example, the Russians seem to suggest that one opposition group would be prepared to gas another, presumably to draw the US and others into the war. Is the JIC saying that this is not plausible, or simply that no such group (yet) has the means? Without clarity, it is difficult for an outsider to assess the report and draw their own conclusion.

Finally, it is notable that regime responsibility for the attack of the 21st is rated ‘highly likely’, the same as their responsibility for previous attacks. Yet mathematically the rating should depend on what is called ‘the likelihood’, which one would normally expect to increase with time. Hence one would expect the rating to increase from possible (in the immediate aftermath) through likely to highly likely, as the kind of issues described above are dealt with. This unexpectedly high rating calls for an explanation, which would need to address the most relevant factors.
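
To make the mathematical point concrete, here is a hedged sketch of Bayesian updating, with an invented prior and invented likelihood ratios for successive items of evidence; on this view the rating would normally climb as evidence accumulates:

```python
# Invented prior and likelihood ratios, for illustration only.
prior = 0.5                    # immediately after the attack: 'possible'
likelihood_ratios = [3, 2, 4]  # successive items of evidence, each favouring guilt

posterior = prior
for lr in likelihood_ratios:
    odds = posterior / (1 - posterior) * lr
    posterior = odds / (1 + odds)
    print(round(posterior, 2))  # 0.75, 0.86, 0.96: possible -> likely -> highly likely
```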

Anticipating the UN Inspectors

The UN weapons inspectors are expected to produce much relevant evidence. For example, it may be that even if an opposition group had CW an attack would necessarily lack some key signatures. But, from a mathematical point of view, one cannot claim that one explanation is ‘highly likely’ without considering all the alternatives and taking full account of how the evidence was obtained. It is quite true, as the PM argued, that there will always be gaps that require judgement to span. But we should strive to make the gap as slight as possible, and to be clear about what it is. While one would not want a JIC report to be phrased in terms of mathematics, it would seem that appropriate mathematics could be a valuable aid to critical thinking. Hopefully we shall soon have an assessment that genuinely rates ‘highly likely’ independently of any esoteric expertise, whether intelligence or mathematics.


30th August: US

The US assessment concludes that the attack was by Assad’s troops, using rockets to deliver a nerve agent, following their usual procedures. This ought to be confirmed or disconfirmed by the inspectors, with reasonable confidence. Further, the US claim ‘high confidence’ in their assessment, rather than very high confidence. Overall, the US assessment appears to be about what one would expect if Assad’s troops were responsible.

31st August: Blog

There is a good private-enterprise analysis of the open-source material. It makes a good case that the rockets’ payloads were not very dense, and probably a chemical gas. However, it points out that only the UN inspectors could determine if the payload was a prohibited substance, or some other substance such as is routinely used by respectable armies and police forces.

It makes no attribution of the rockets. The source material is clearly intended to show them being used by the Assad regime, but there is no discussion of whether or not any rebel groups could have made, captured or otherwise acquired them.

2nd September: France

The French have declassified a dossier. Again, it presents assertion and argumentation rather than evidence. The key points seem to be:

  • A ‘large’ amount of gas was used.
  • Rockets were probably used (presumably many).
  • No rebel group has the ability to fire rockets (unlike the Vietcong in Vietnam).

This falls short of a conclusive argument. Nothing seems to rule out the possibility of an anti-Assad outside agency loading up an ISO container (or a mule train) with CW (perhaps in rockets), and delivering them to an opposition group along with an adviser. (Not all the opposition groups are allies.)

4th September: Germany

A German report includes:

  • Conjecture that the CW mix was stronger than intended, and hence lethal rather than temporarily disabling.
  • That a Hezbollah official said that Assad had ‘lost his nerve’ and ordered the attack.

It is not clear if the Hezbollah utterance was based on good grounds or was just speculation.

4th September: Experts

Some independent experts have given an analysis of the rockets that is similar in detail to that provided by Colin Powell to the UN in 2003, providing some support for the official dossiers. They assess that each warhead contained 50 litres (13 gallons) of agent. They also assess that the rebels could have constructed the rockets, but could not have produced the large quantity of agent.

No figure is given for the number of rockets, but I have seen a figure of 100, which seems the right order of magnitude. This would imply 5,000 litres or 1,300 gallons, if all held the agent. A large tanker truck has a capacity of about 7 times this, so it does not seem impossible that such an amount could have been smuggled in.

This report essentially puts a little more detail on the blog of 31st August, and is seen as being more authoritative.

5th September: G20

The UK has confirmed that Sarin was used, but seems not to have commented on whether it was of typical ‘military quality’, or more home-made.

Russia has given the UN a 100 page dossier of its own, and I have yet to see a credible debunking (early days, and I haven’t found it on-line).

The squabbles continue. The UN wants to wait for its inspectors.

6th September: Veteran Intelligence Professionals for Sanity

An alternative, unofficial narrative. Can this be shown to be incredible? Will it be countered?

9th September: Germany

German secret sources indicate that Assad had no involvement in the CW attack (although others in the regime might have).

9th September: FCO news conference

John Kerry, at a UK FCO news conference, gives a very convincing account of the evidence for CW use, but without indicating any evidence that the chemicals were delivered by rocket. He is asked about Assad’s involvement, but notes that all that is claimed is senior regime culpability.

UN Inspectors’ Report

21st September. The long-awaited report concludes that rockets were used to deliver Sarin. The report, at first read, seems professional and credible. It is similar in character to the evidence that Colin Powell presented to the UN in 2003, but without the questionable ‘judgments’. It provides some key details (type of rocket, trajectory) which – one hopes – could be tied to the Assad regime, especially given US claims to have monitored rocket launches. Otherwise, the rockets appear to be of a type that the rebels could have used.

The report does not discuss the possibility, raised by the regime, that conventional rockets had accidentally hit a rebel chemical store, but the technical details do seem to rule it out. There is an interesting point here. Psychologically, the fact that the regime raised a possibility in their defence which has been shown to be false increases our scepticism about them. But mathematically, if they are innocent then we would not expect them to know what happened, and hence we would not expect their conjectures to be correct. Such a false conjecture could even be counted as evidence in their favour, particularly if we thought them competent enough to realise that such an invention would easily be falsified by the inspectors.
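
A hedged sketch of that point as a likelihood ratio, with invented probabilities:

```python
# Invented probabilities, for illustration only.
p_false_story_if_guilty = 0.2    # a guilty regime knows what happened
p_false_story_if_innocent = 0.6  # an innocent regime is only guessing

lr = p_false_story_if_guilty / p_false_story_if_innocent
print(round(lr, 2))  # 0.33: the falsified conjecture slightly favours innocence

prior = 0.9                         # e.g. 'highly likely' guilty beforehand
odds = prior / (1 - prior) * lr
print(round(odds / (1 + odds), 2))  # ~0.75: still likely, but reduced
```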


Initial formal reactions

Initial reactions from the US, UK and French are that the technical details, including the trajectory, rule out rebel responsibility. They appear to be in a good position to make such a determination, and it would normally be a conclusion that I would take at face value. But given the experience of Iraq and their previous dossiers, it seems quite possible that they would say what they said even without any specific evidence. A typical response, from US ambassador to the UN Samantha Power, was:

“The technical details of the UN report make clear that only the regime could have carried out this large-scale chemical weapons attack.”

Being just a little pedantic, this statement is literally false: one would at least have to take the technical details to a map showing rebel and regime positions, and have some idea of the range of the rockets. From the Russian comments, it would seem they have not been convinced.

Media reaction

A Telegraph report includes:

Whether the rebels have captured these delivery systems – along with sarin gas – from government armouries is unknown. Even if they have, experts said that operating these weapons successfully would be exceptionally difficult.

”It’s hard to say with certainty that the rebels don’t have access to these delivery systems. But even if they do, using them in such a way as to ensure that the attack was successful is the bit the rebels won’t know how to do,” said Dina Esfandiary, an expert on chemical weapons at the International Institute for Strategic Studies.

The investigators had enough evidence to trace the trajectories followed by two of the five rockets. If the data they provide is enough to pinpoint the locations from which the weapons were launched, this should help to settle the question of responsibility.

John Kerry, the US secretary of state, says the rockets were fired from areas of Damascus under the regime’s control, a claim that strongly implicates Mr Assad’s forces.

This suggests that there might be a strong case against the regime. But it is not clear that the government would be the only source of weapons for the rebels, that the rebels would need sophisticated launchers (rather than sticks) or that they would lack advice. Next, given the information on type, timing and bearing it should be possible to identify the rockets, if the US was monitoring their trajectories at the time, and hence it might be possible to determine where they came from, in which case the evidence trail would lead strongly to the regime. (Elsewhere it has been asserted that one of the rockets was fired from within the main Syrian Army base, in which case one would have thought they would have noticed a rebel group firing out.)

17 September: Human Rights Watch

Human Rights Watch has marked the UN estimate of the trajectories on a map, clearly showing that they could have been fired from the Republican Guard 104 Brigade area.

Connecting the dots provided by these numbers allows us to see for ourselves where the rockets were likely launched from and who was responsible.

This isn’t conclusive, given the limited data available to the UN team, but it is highly suggestive and another piece of the puzzle.

This seems a reasonable analysis. The BBC has said of it:

Human Rights Watch says the document reveals details of the attack that strongly suggest government forces were behind the attack.

But this seems to exaggerate the strength of the evidence. One would at least want to see if the trajectories are consistent with the rockets having been launched from rebel-held areas (map, anyone?). It also seems a little odd that a salvo of M14 rockets appears to have been fired over the presidential palace. Was the Syrian Army that desperate? Depending on the view that one takes of these questions, the evidence could favour the rebel hypothesis. On the other hand, if the US could confirm that the only rockets fired at that time to those sites came from government areas, that would seem conclusive.

(Wikipedia gives technical details of rockets. It notes use by the Taliban, and quotes its normal maximum range as 9.8km. The Human Rights Watch analysis seems to be assuming that this will not be significantly reduced by the ad-hoc adaptation to carry gas. Is this credible? My point here is that the lack of explicit discussion of such aspects in the official dossiers leaves room for doubt, which could be dispelled if their ‘very high confidence’ is justified.)

18 September: Syrian “proof”

The BBC has reported that the Syrians have provided what they consider proof to the Russians that the rebels were responsible for the CW attack, and that the Russians are evaluating it. I doubt that this will be proof, but perhaps it will reduce our confidence in the ‘very high’ likelihood that the regime was responsible. (Probably not!) It may, though, flush out more conclusive evidence, either way.

19 September: Forgery?

Assad has claimed that the materials recovered by the UN inspectors were forged. The report talks about rebels moving material, and it is not immediately clear, as the official dossiers claim, that this hypothesis is not credible, particularly if the rebels had technical support.

Putin has confirmed that the rockets used were obsolete Soviet-era ones, no longer in use by the Syrian Army.

December: US Intelligence?

Hersh claims that the US had intelligence that the Syrian rebels had chemical weapons, and that the US administration deliberately ‘adjusted’ the intelligence to make it appear much more damning of the Syrian regime. (This is disputed.)


The UN Inspectors’ report is clear about what it has found. It is careful not to make deductive leaps, but provides ample material to support further analysis. For example, while it finds that Sarin was delivered by rockets that could have been launched from a regime area, it does not rule out rebel responsibility. But it does give details of type, time and direction, such that if – as appears to be the case from their dossier – the US were monitoring the area, it should be possible to determine whether the rockets were actually fired by the regime. Maybe someone will assemble the pieces for us.

My own view is not that Assad did not do it or that we should not attack, but that any attack based on the grounds that Assad used CW should be supported by clear, specific evidence, which the dossiers prior to the UN report did not provide. Even now, we lack a complete case. Maybe the UN should have its own intelligence capability? Or could we attack on purely humanitarian grounds, not basing the justification on the possible events on 21 Aug? Or share our intelligence with the Russians and Chinese?

Maybe no-one is interested any more?

See Also

Telegraph on anti-spy cynicism. Letters. More controversially: inconclusive allegations, and an attempted debunking.

Discussion of weakness of case that Assad was personally involved. Speculation on UN findings.

A feature of the debate seems to be that those who think that ‘something must be done’ tend to be critical of those who question the various dossiers, and those who object to military action tend to throw mud at the dossiers, justified or not. So maybe my main point should be that, irrespective of the validity of the JIC assessment, we need a much better quality of debate, engaging the public and those countries with different views, not just our traditional allies.

A notable exception was a private blog, which looked very credible, but fell short of claiming “high likelihood”. It gives details of two candidate delivery rockets, and hoped that the UN inspectors would get evidence from them, as they did. Neither rocket was known to have been used, but neither do they appear to be beyond the ability of rebel groups to use (with support). The comments are also interesting, e.g.:

There is compelling evidence that the Saudi terrorists operating in Syria, some having had training from an SAS mercenary working out of Dubai who is reporting back to me, are responsible for the chemical attack in the Ghouta area of Damascus.

The AIPAC derived ‘red line’ little game and frame-up was orchestrated at the highest levels of the American administration and liquid sarin binary precursors mainly DMMP were supplied by Israeli handled Saudi terrorists to a Jabhat al-Nusra Front chemist and fabricator.

Israel received supplies of the controlled substance DMMP from Solkatronic Chemicals of Morrisville, Pa.

This at least has some detail, although not such as can be easily checked.

Finally, I am beginning to get annoyed by the media’s use of scare quotes around Russian “evidence”.

Dave Marsay

Are more intelligent people more biased?

It has been claimed that:

U.S. intelligence agents may be more prone to irrational inconsistencies in decision making compared to college students and post-college adults … .

This is scary, if unsurprising to many. Perhaps more surprisingly:

Participants who had graduated college seemed to occupy a middle ground between college students and the intelligence agents, suggesting that people with more “advanced” reasoning skills are also more likely to show reasoning biases.

It seems as if there is some serious mis-education in the US. But what is it?

The above conclusions are based on responses to the following two questions:

1. The U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Do you: (a) Save 200 people for sure, or (b) choose the option with 1/3 probability that 600 will be saved and a 2/3 probability no one will be saved?

2. In the same scenario, do you (a) pick the option where 400 will surely die, or instead (b) a 2/3 probability that all 600 will die and a 1/3 probability no one dies?

You might like to think about your answers to the above, before reading on.






The paper claims that:

Notably, the different scenarios resulted in the same potential outcomes — the first option in both scenarios, for example, has a net result of saving 200 people and losing 400.

Is this what you thought? You might like to re-read the questions and reconsider your answer, before reading on.






The questions may appear to contain statements of fact, that we are entitled to treat as ‘given’. But in real-life situations we should treat such questions as utterances, and use the appropriate logics. This may give the same result as taking them at face value – or it may not.

It is (sadly) probably true that if this were a UK school examination question then the appropriate logic would be (1) to treat the statements ‘at face value’ and (2) to assume that if 200 people will be saved ‘for sure’ then exactly 200 people will be saved, no more. On the other hand, this is just the kind of question that I ask mathematics graduates to check that they have an adequate understanding of the issues before advising decision-takers. In the questions as set, the (b) options are the same, but (1a) is preferable to (2a), unless one is in the very rare situation of knowing exactly how many will die. With this interpretation, the more education and the more experience, the better the decisions – even in the US 😉
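
A small sketch of this interpretation, reading each (a) option as an interval of lives saved rather than an exact number (the interval reading is the assumption being made explicit here):

```python
# Assumed reading: 'save 200 for sure' means at least 200 saved;
# '400 will surely die' means at most 200 saved.
TOTAL = 600
option_1a = (200, TOTAL)  # lives saved: at least 200, possibly more
option_2a = (0, 200)      # lives saved: at most 200

def weakly_preferable(a, b):
    """a dominates b if both its worst case and its best case are at least as good."""
    return a[0] >= b[0] and a[1] >= b[1]

print(weakly_preferable(option_1a, option_2a))  # True: (1a) preferable to (2a)
# Only if both options collapse to exactly 200 saved are they equivalent.
```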

It would be interesting to repeat the experiment with less ambiguous wording. Meanwhile, I hope that intelligence agents are not being re-educated. Or have I missed something?


Kahneman’s Thinking, fast and slow has a similar example, in which we are given ‘exact scientific estimates’ of probable outcomes, avoiding the above ambiguity. This might be a good candidate experimental question.

Kahneman’s question is not without its own subtleties, though. It concerns the efficacy of ‘programs to combat disease’. It seems to me that if I was told that a vaccine would save 1/3 of the lives, I would suppose that it had been widely tested, and that the ‘scientific’ estimate was well founded. On the other hand, if I was told that there was a 2/3 chance of the vaccine being ineffective I would suppose that it hadn’t been tested adequately, and the ‘scientific’ estimate was really just an informed guess. In this case, I would expect the estimate of efficacy to be revised in the light of new information. It could even be that while some scientist has made an honest estimate based on the information that they have, some other scientist (or technician) already knows that the vaccine is ineffective. A program based on such a vaccine would be more complicated and ‘risky’ than one based on a well-founded estimate, and so I would be reluctant to recommend it. (Ideally, I would want to know a lot more about how the estimates were arrived at, but if pressed for a quick decision, this is what I would do.)

Could the framing make a difference? In one case, we are told that ‘scientifically’, 200 people will be saved. But scientific conclusions always depend on assumptions, so really one should say ‘if …. then 200 will be saved’. My experience is that otherwise the outcome should not be expected, and that saving 200 is the best that should be expected. In the other case we are told that ‘400 will die’. This seems to me to be a very odd thing to say. From a logical perspective one would like to understand the circumstances in which someone would put it like this. I would be suspicious, and might well (‘irrationally’) avoid a program described in that way.


The example also shows a common failing, in assuming that the utility is proportional to lives lost. Suppose that when we are told that lives will be ‘saved’ we assume that we will get credit; then we might take the utility from saving lives to be the number of lives saved, but with a limit of ‘kudos’ at 250 lives saved. In this case, it is rational to save 200 ‘for sure’, as the expected credit from taking a risk is very much lower. On the other hand, if we are told that 400 lives will be ‘lost’ we might assume that we will be blamed, and take the utility to be minus the lives lost, limited at -10. In this case it is rational to take a risk, as we have some chance of avoiding the worst-case utility, whereas if we went for the sure option we would be certain to suffer the worst case.
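
Checking the arithmetic with the caps suggested above (credit limited at 250 lives saved, blame limited at -10):

```python
# A check of the arithmetic above; the caps are the figures from the text.
def u_gain(saved):
    return min(saved, 250)  # credit capped at 250 lives saved

def u_loss(lost):
    return max(-lost, -10)  # blame capped at -10

# Gain frame: sure 200 saved vs a 1/3 chance of 600 saved.
print(u_gain(200), round((1/3) * u_gain(600) + (2/3) * u_gain(0), 1))  # 200 vs 83.3
# Loss frame: sure 400 dead vs a 2/3 chance of 600 dead.
print(u_loss(400), round((2/3) * u_loss(600) + (1/3) * u_loss(0), 1))  # -10 vs -6.7
# So it is rational to play safe in the gain frame but gamble in the loss frame.
```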

These kinds of asymmetric utilities may be just the kind that experts experience. More study required?


Dave Marsay

Mathematics, psychology, decisions

I attended a conference on the mathematics of finance last week. It seems that things would have gone better in 2007/8 if only policy makers had employed some mathematicians to critique the then dominant dogmas. But I am not so sure. I think one would need to understand why people went along with the dogmas. Psychology, such as behavioural economics, doesn’t seem to help much, since although it challenges some aspects of the dogmas it fails to challenge (and perhaps even promotes) other aspects, so that it is not at all clear how it could have helped.

Here I speculate on an answer.

Finance and economics are either empirical subjects or they are quasi-religious, based on dogmas. The problems seem to arise when they are the latter but we mistake them for the former. If they are empirical then they have models whose justification is based on evidence.

Naïve inductivism boils down to the view that whatever has always (never) been the case will continue always (never) to be the case. Logically it is untenable, because one often gets clashes, where two different applications of naïve induction are incompatible. But pragmatically, it is attractive.

According to naïve inductivism we might suppose that if the evidence has always fitted the models, then actions based on the supposition that they will continue to do so will be justified. (Hence, ‘it is rational to act as if the model is true’). But for something as complex as an economy the models are necessarily incomplete, so that one can only say that the evidence fitted the models within the context as it was at the time. Thus all that naïve inductivism could tell you is that ‘it is rational’ to act as if the model is true, unless and until the context should change. But many of the papers at the mathematics of finance conference were pointing out specific cases in which the actions ‘obviously’ changed the context, so that naïve inductivism should not have been applied.

It seems to me that one could take a number of attitudes:

  1. It is always rational to act on naïve inductivism.
  2. It is always rational to act on naïve inductivism, unless there is some clear reason why not.
  3. It is always rational to act on naïve inductivism, as long as one has made a reasonable effort to rule out any contra-indications (e.g., by considering ‘the whole’).
  4. It is only reasonable to act on naïve inductivism when one has ruled out any possible changes to the context, particularly reactions to our actions, by considering an adequate experience base.

In addition, one might regard the models as conditionally valid, and hedge accordingly. (‘Unless and until there is a reaction’.) Current psychology seems to suppose (1) and hence has little to help us understand why people tend to lean too strongly on naïve inductivism. It may be that a belief in (1) is not really psychological, but simply a consequence of education (i.e., cultural).

See Also

Russell’s Human Knowledge. My media for the conference.

Dave Marsay