What logical term or concept ought to be more widely known?

Various authors, What scientific term or concept ought to be more widely known? Edge, 2017.

INTRODUCTION: SCIENTIA

Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. …

Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.

Contributions

As against others on:

(This is as far as I’ve got.)

Comment

I’ve grouped the contributions according to whether or not I think they give due weight to the notion of uncertainty as expressed in my blog. Interestingly, Steven Pinker seems not to give due weight to it in his article, whereas he is credited by Nicholas G. Carr with some profound insights (in the first of the second batch). So maybe I am not reading them right.

My own suggestion would be Turing’s theory of ‘Morphogenesis’. The particular predictions seem to have been confirmed ‘scientifically’, but it is essentially a logical / mathematical theory. If, as the introduction suggests, science is “reliable methods for obtaining knowledge” then it seems to me that logic and mathematics are more reliable than empirical methods, and deserve some special recognition. Although, I must concede that it may be hard to tell logic from pseudo-logic, and that unless you can do so my distinction is potentially dangerous.

Morphogenesis

The second law of thermodynamics, and much common-sense rationality, assume a situation in which the law of large numbers applies. But Turing adds to the second law’s notion of random dissipation a notion of relative structuring (as in gravity) to show that ‘critical instabilities’ are inevitable. These are inconsistent with the law of large numbers, so the assumptions of the second law of thermodynamics (and much else) cannot always hold. The universe cannot be ‘closed’ in the sense the second law requires.
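
The flavour of this can be sketched numerically. The following is a minimal illustration, not Turing’s own 1952 equations: a linearised activator-inhibitor pair on a ring, with coefficients I have assumed purely for demonstration. The reaction alone is stable and the uniform (mean) mode decays, yet adding diffusion, itself a dissipative process, makes a band of wavelengths grow: a ‘critical instability’ arising from a near-uniform state.

```python
import numpy as np

# Minimal linearised activator-inhibitor model on a ring (illustrative,
# assumed coefficients; not Turing's 1952 system). The reaction terms alone
# are stable (trace < 0, det > 0), but with a fast-diffusing inhibitor a
# band of spatial wavelengths becomes unstable: Turing's insight.
N, dx, dt, steps = 64, 1.0, 0.01, 8000
Du, Dv = 1.0, 20.0                       # inhibitor diffuses much faster
a, b, c, d = 0.5, -1.0, 1.0, -1.0        # assumed reaction coefficients

rng = np.random.default_rng(0)
u = 1e-3 * rng.standard_normal(N)        # tiny noise about the uniform state
v = 1e-3 * rng.standard_normal(N)
lap = lambda w: (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx**2

u0_max = np.abs(u).max()
for _ in range(steps):
    u, v = (u + dt * (a * u + b * v + Du * lap(u)),
            v + dt * (c * u + d * v + Dv * lap(v)))

print(np.abs(u).max() / u0_max)   # perturbations grow rather than dissipate
print(abs(u.mean()))              # while the uniform mode has all but decayed
```

On this linear model the growth is unbounded; in Turing’s nonlinear equations it saturates into the spotted and striped patterns of morphogenesis.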

Implications

If the assumptions of the second law held, there would seem to be no room for free will, hence no reason to believe in our agency, and hence no point in any of the contributions to Edge: they are what they are and we do what we do. But Pinker does not go so far: he simply notes that if things inevitably degrade we need not beat ourselves up, or look for scapegoats, when things go wrong. But this can be true even if the second law does not apply. If we take Turing seriously then a seemingly permanent status quo can contain the reasons for its own destruction, so that turning a blind eye and doing nothing can mean sleep-walking to disaster. Pinker concludes:

[An] underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.

This would seem to follow more clearly from the theory of morphogenesis than the second law. Turing’s theory also goes some way to suggesting or even explaining the items in the second batch. So, I commend it.

Dave Marsay

 

 


Probability as a guide to life

‘Probability is the very guide to life.’

Cicero may have been right, but ‘probability’ means something quite different nowadays to what it did millennia ago. So what kind of probability is a suitable guide to life, and when?

Suppose that we are told that ‘P(X) = p’. Often there is some implied real or virtual population, P, a proportion ‘p’ of which has the property ‘X’. To interpret such a probability statement we need to know what the relevant population is. Such statements are then normally reliable. More controversial are conditional probabilities, such as ‘P(X|Y) = p’. If you satisfy Y, does P(X)=p ‘for you’?

Suppose that:

  1. All the properties of interest (such as X and Y) can be expressed as a union of elements of some disjoint basis, B.
  2. For all such basis properties, B, P(X|B) is known.
  3. That the conditional probabilities of interest are derived from the basis properties in the usual way. (E.g. P(X|B1∪B2) = (P(B1).P(X|B1) + P(B2).P(X|B2))/P(B1∪B2).)
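
As a toy illustration of steps 1–3 (with assumed numbers), conditional probabilities over unions of basis cells are fully determined, but for a set Z that cuts across cells only a range can be given:

```python
from fractions import Fraction as F

# Assumed numbers: a three-cell disjoint basis with P(cell) and P(X|cell).
basis = {
    "B1": (F(2, 10), F(9, 10)),
    "B2": (F(3, 10), F(5, 10)),
    "B3": (F(5, 10), F(1, 10)),
}

def p_x_given_union(cells):
    # P(X | B1 u B2) = (P(B1).P(X|B1) + P(B2).P(X|B2)) / (P(B1) + P(B2)), etc.
    num = sum(basis[c][0] * basis[c][1] for c in cells)
    den = sum(basis[c][0] for c in cells)
    return num / den

print(p_x_given_union(["B1", "B2"]))   # 33/50: fully determined

# For a set Z that straddles cells in unknown proportions, P(X|Z) is only
# bounded by the extreme cell-conditional probabilities that Z meets.
touched = ["B1", "B2", "B3"]
lo = min(basis[c][1] for c in touched)
hi = max(basis[c][1] for c in touched)
print(lo, hi)                          # 1/10 9/10: an imprecise probability
```

The pair (lo, hi) is an imprecise probability: nothing in the given data narrows P(X|Z) any further.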

The conditional probabilities constructed in this way are meaningful, but if we are interested in some other set, Z, the conditional probability P(X|Z) could take a range of values. But then we need to reconsider decision making. Instead of maximising a probability (or utility), the following heuristics may apply:

  • If the range makes a significant difference, try to get more precise data. This may be by taking more samples, or by refining the properties considered.
  • Consider the best outcome for the worst-case probabilities.
  • If the above is not acceptable, make some reasonable assumptions until there is an acceptable result possible.

For example, suppose that there are some urns, each containing a mix of balls, some of which are white. We can choose an urn and then pick a ball at random. We want white balls. What should we do? The conventional rule consists of assessing the proportion of white balls in each urn, and picking an urn with the most. This is uncontroversial if our assessments are reliable. But suppose we are faced with an urn with an unknown mix? Conventionally our assessment should not depend on whether we want to obtain or avoid a white ball. But if we want white balls the worst-case proportion is no white balls, and we avoid this urn; whereas if we want to avoid white balls the worst-case proportion is all white balls, and we again avoid this urn.

If our assessments are not biased then we would expect to do better with the conventional rule most of the time and in the long-run. For example, if the non-white balls are black, and urns are equally likely to be filled with black as white balls, then assessing that an urn with unknown contents has half white balls is justified. But in other cases we just don’t know, and choosing this urn we could do consistently badly. There is a difference between an urn whose contents are unknown, but for which you have good grounds for estimating proportion, and an urn where you have no grounds for assessing proportion.
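
The urn example can be sketched with interval-valued proportions (the numbers are assumed for illustration):

```python
# Urns described by an interval [lo, hi] for their proportion of white balls
# (assumed numbers). The mystery urn's proportion could be anything at all.
urns = {
    "known_60": (0.6, 0.6),
    "known_40": (0.4, 0.4),
    "unknown":  (0.0, 1.0),
}

def best_urn(want_white):
    """Pick the urn whose worst-case chance of the desired outcome is best."""
    def worst_case(interval):
        lo, hi = interval
        return lo if want_white else 1.0 - hi
    return max(urns, key=lambda name: worst_case(urns[name]))

print(best_urn(want_white=True))    # known_60: worst cases are 0.6, 0.4, 0.0
print(best_urn(want_white=False))   # known_40: worst cases are 0.4, 0.6, 0.0
```

Whether we seek or avoid white balls, the worst-case rule never selects the unknown urn: exactly the asymmetry described above.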

If precise probabilities are to be the very guide to life, it had better be a dull life. For more interesting lives imprecise probabilities can be used to reduce the possibilities. It is often informative to identify worst-case options, but one can be left with genuine choices. Conventional rationality is the only way to reduce living to a formula: but is it such a good idea?

Dave Marsay

Uncertainty is not just probability

I have just had published my paper, based on the discussion paper referred to in a previous post. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

Dave Marsay

Artificial Intelligence?

The subject of ‘Artificial Intelligence’ (AI) has long provided ample scope for long and inconclusive debates. Wikipedia seems to have settled on a view, which we may take as a straw man:

Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. [Dartmouth Conference, 1956] The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds. [John Searle’s straw-man hypothesis]

Readers of my blog will realise that I agree with Searle that his hypothesis is wrong, but for different reasons. It seems to me that mainstream AI (mAI) is about being able to take instruction. This is a part of learning, but by no means all. Thus – I claim – mAI is about a sub-set of intelligence. In many organisational settings it may be that sub-set which the organisation values. It may even be that an AI that ‘thought for itself’ would be a danger. For example, in old discussions about whether or not some type of AI could ever act as a G.P. (General Practitioner – first-line doctor), the underlying issue has been whether G.P.s ‘should’ think for themselves, or just apply their trained responses. My own experience is that sometimes G.P.s doubt the applicability of what they have been taught, and that sometimes this is ‘a good thing’.

In effect, we sometimes want to train people, or otherwise arrange for them to react in predictable ways, as if they were machines. mAI can create better machines, and thus has many key roles to play. But between mAI and ‘superhuman intelligence’ there seems to be an important gap: the kind of intelligence that makes us human. Can machines display such intelligence? (Can people, in organisations that treat them like machines?)

One successful mainstream approach to AI is to work with probabilities, such as P(A|B) (‘the probability of A given B’), making extensive use of Bayes’ rule, and such an approach is sometimes thought to be ‘logical’, ‘mathematical’, ‘statistical’ and ‘scientific’. But, mathematically, we can generalise the approach by taking account of some context, C, using Jack Good’s notation P(A|B:C) (‘the probability of A given B, in the context C’). AI that is explicitly or implicitly statistical is more successful when it operates within a definite fixed context, C, for which the appropriate probabilities are (at least approximately) well-defined and stable. For example, training within an organisation will typically seek to enable staff (or machines) to characterise their job sufficiently well for it to become routine. In practice ‘AI’-based machines often show a little intelligence beyond that described above: they will monitor the situation and ‘raise an exception’ when the situation is too far outside what it ‘expects’. But this just points to the need for a superior intelligence to resolve the situation. Here I present some thoughts.

When we state ‘P(A|B)=p’ we are often not just asserting the probability relationship: it is usually implicit that ‘B’ is the appropriate condition to consider if we are interested in ‘A’. Contemporary mAI usually takes the conditions as given, and computes ‘target’ probabilities from given probabilities. Whilst this requires a kind of intelligence, it seems to me that humans will sometimes also revise the conditions being considered, and this requires a different type of intelligence (not just the ability to apply Bayes’ rule). For example, astronomers who refine the value of relevant parameters are displaying some intelligence and are ‘doing science’, but those first in the field, who determined which parameters are relevant, employed a different kind of intelligence and were doing a different kind of science. What we need, at least, is an appropriate way of interpreting and computing ‘probability’ to support this enhanced intelligence.
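
One way to see why the choice of conditions matters as much as the arithmetic is a Simpson’s-paradox-style example in Good’s notation (the counts are invented for illustration):

```python
from fractions import Fraction

# Invented counts illustrating Good's P(A|B:C): recovery (A) given treatment
# (B) in two contexts C. Within each context the treatment does better, yet
# pooled over contexts it appears worse (the numbers follow the classic
# Simpson's-paradox pattern; they are not real data).
#        context: (recovered_treated, treated, recovered_untreated, untreated)
data = {
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

for ctx, (rt, t, ru, u) in data.items():
    assert Fraction(rt, t) > Fraction(ru, u)    # P(A|B:C) > P(A|not B:C)

rt = sum(v[0] for v in data.values()); t = sum(v[1] for v in data.values())
ru = sum(v[2] for v in data.values()); u = sum(v[3] for v in data.values())
print(float(Fraction(rt, t)), float(Fraction(ru, u)))   # 0.78 vs ~0.826
assert Fraction(rt, t) < Fraction(ru, u)        # drop C and the comparison flips
```

Bayes’ rule is applied correctly in both cases; the intelligence lies in recognising that C is a condition worth considering at all.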

The notions of Whitehead, Keynes, Russell, Turing and Good seem to me a good start, albeit that they need explaining better – hence this blog. Economics may provide an example. The notion of probability routinely used would be appropriate if we were certain about some fundamental assumptions. But are we? At least we should realise that it is not logical to attempt to justify those assumptions by reasoning using concepts that implicitly rely on them.

Dave Marsay

What should replace utility maximization in economics?

Mainstream economics has been based on the idea of people producing and trading in order to maximize their utility, which depends on their assigning values and conditional probabilities to outcomes. Thus, in particular, mainstream economics implies that people do best by assigning probabilities to possible outcomes, even when there seems no sensible way to do this (such as when considering a possible crash). Ken Arrow has asked: if one rejects utility maximization, what should one replace it with?

The assumption here seems to be that it is better to have a wrong theory than to have no theory. The fear seems to be that economies would grind to a halt unless they were sanctioned by some theory – even a wrong one. But this fear seems at odds with another common view, that economies are driven by businesses, which are driven by ‘pragmatic’ men. It might be that without the endorsement of some (wrong) theory some practices, such as the development of novel financial instruments and the use of high leverage, would be curtailed. But would this be a bad thing?

Nonetheless, Arrow’s challenge deserves a response.

There are many variations in detail of utility maximization theories. Suppose we identify ‘utility maximization’ as a possible heuristic; then utility maximization theory claims that people use some specific heuristics, so an obvious alternative is to consider a wider range. The implicit idea behind utility maximization theory seems to be that, under a competitive regime resembling evolution, the evolutionary stable strategies (‘the good ones’) do maximize some utility function, so that in time utility maximizers ought to get to dominate economies. (Maybe poor people do not maximize any utility, but they – supposedly – have relatively little influence on economies.) But this idea is hardly credible. If – as seems to be the case – economies have significant ‘Black Swans’ (low-probability, high-impact events) then utility maximizers who ignore the possibility of a Black Swan (such as a crash) will do better in the short term, and so the economy will become dominated by people with the wrong utilities. People with the right utilities would do better in the long run, but have two problems: they need to survive the short term and they need to estimate the probability of the Black Swan. No method has been suggested for doing this. An alternative is to take account of some notional utility but also take account of any other factors that seem relevant.
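
A crude simulation (with invented parameters) illustrates the argument: an agent whose ‘utility maximization’ ignores the possibility of a crash leads early on, but is eventually all but wiped out, while a cautious heuristic survives.

```python
import random

# A crude sketch with invented parameters. Each period the risky asset
# returns 5%, but with probability 0.05 a crash keeps only 5% of its value;
# the safe asset returns 1%. The 'ignorer' maximizes short-run utility as
# if crashes had probability zero; the cautious agent caps risky exposure.
random.seed(0)
SAFE, RISKY, CRASH_P, CRASH_KEEP, PERIODS = 1.01, 1.05, 0.05, 0.05, 2000

ignorer, cautious = 1.0, 1.0
ignorer_led_early = False
for t in range(PERIODS):
    r = RISKY * (CRASH_KEEP if random.random() < CRASH_P else 1.0)
    ignorer *= r                          # 100% in the risky asset
    cautious *= 0.8 * SAFE + 0.2 * r      # only 20% at risk
    if t < 10 and ignorer > cautious:
        ignorer_led_early = True          # short term, ignoring the Black Swan pays

print(ignorer_led_early, ignorer < 1.0, ignorer < cautious)
```

The seed and parameters are arbitrary; the qualitative pattern – short-term advantage, long-run ruin – is the point.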

For example, when driving a hire-car along a winding road with a sheer drop I ‘should’ adjust my speed to trade time of arrival against risk of death or injury. But usually I simply reduce my speed to the point where the risk is slight, and accept the consequential delay. These are qualitative judgements, not arithmetic trade-offs. Similarly an individual might limit their at-risk investments (e.g. stocks) so that a reasonable fall (e.g. 25%) could be tolerated, rather than try to keep track of all the possible things that could go wrong (such as terrorists stealing a US Minuteman) and their likely impact.

More generally, we could suppose that people act according to their own heuristics, and that there are competitive pressures on heuristics, but not that utility maximization is necessarily ‘best’, or that a healthy economy relies on most people having similar heuristics, or that there is some stable set of ‘good’ heuristics. All these questions (and possibly more) could be left open for study and debate. As a mathematician it seems to me that decision-making involves ideas, and that ideas are never unique or final, so that novel heuristics could arise and be successful from time to time. Or at least, the contrary would require an explanation. In terms of game theory, the conventional theory seems to presuppose a fixed single-level game, whereas – like much else – economies seem to have scope for changing the game and even for creating higher-level games, without limit. In this case, the strategies must surely change, and be created rather than drawn from a fixed set?

See Also

Some evidence against utility maximization. (Arrow’s response prompted this post).

My blog on reasoning under uncertainty with application to economics.

Dave Marsay

Who thinks probability is just a number? A plea.

Many people think – perhaps they were taught it – that it is meaningful to talk about the unconditional probability of ‘Heads’ (i.e. P(Heads)) for a real coin, and even that there are logical or mathematical arguments to this effect. I have been collecting and commenting on works which have been – too widely – interpreted in this way, and quoting their authors in contradiction. De Finetti seemed to be the only respected person who thought that he had provided such an argument. But a friendly economist has just forwarded a link to a recent work that debunks this notion, based on wider reading of his work.

So, am I done? Does anyone have any seeming mathematical sources for the view that ‘probability is just a number’ for me to consider?

I have already covered:

There are some more modern authors who make strong claims about probability, but – unless you know different – they rely on the above, and hence do not need to be addressed separately. I do also opine on a few less well known sources: you can search my blog to check.

Dave Marsay

JIC, Syria and Uncertainty

This page considers the case that the Assad regime used gas against the rebels on 21st August 2013 from a theory of evidence perspective. For a broader account, see Wikipedia.

The JIC Assessment

The JIC concluded on the 27th that it was:

highly likely that the Syrian regime was responsible.

In the covering letter (29th) the chair said:

Against that background, the JIC concluded that it is highly likely that the regime was responsible for the CW attacks on 21 August. The JIC had high confidence in all of its assessments except in relation to the regime’s precise motivation for carrying out an attack of this scale at this time – though intelligence may increase our confidence in the future.

A cynic or pedant might note the caveat:

The paper’s key judgements, based on the information and intelligence available to us as of 25 August, are attached.

Mathematically-based analysis

From a mathematical point of view, the JIC report is an ‘utterance’, and one needs to consider the context in which it was produced. Hopefully, best practice would include identifying the key steps in the conclusion and seeking out and hastening any possible contrary reports. Thus one might reasonably suppose that the letter on the 29th reflected all obviously relevant information available up to the end of the 28th, but perhaps not some other inputs, such as ‘big data’, that only yield intelligence after extensive processing and analysis.

But what is the chain of reasoning (29th)?

It is being claimed, including by the regime, that the attacks were either faked or undertaken by the Syrian Armed Opposition. We have tested this assertion using a wide range of intelligence and open sources, and invited HMG and outside experts to help us establish whether such a thing is possible. There is no credible intelligence or other evidence to substantiate the claims or the possession of CW by the opposition. The JIC has therefore concluded that there are no plausible alternative scenarios to regime responsibility.

The JIC had high confidence in all of its assessments except in relation to the regime’s precise motivation for carrying out an attack of this scale at this time – though intelligence may increase our confidence in the future.

The report of the 27th is more nuanced:

There is no credible evidence that any opposition group has used CW. A number continue to seek a CW capability, but none currently has the capability to conduct a CW attack on this scale.

Russia claims to have a ‘good degree of confidence’ that the attack was an ‘opposition provocation’ but has announced that they support an investigation into the incident. …

In contrast, concerning Iraqi WMD, we were told that “lack of evidence is not evidence of lack”. But mathematics is not so rigid: it depends on one’s intelligence sources and analysis. Presumably in 2003 we lacked the means to detect Iraqi CW, but now – having learnt the lesson – we would know almost as soon as any one of a number of disparate groups acquires CW. Many outside the intelligence community might not find this credible, leading to a lack of confidence in the report. Others would take the JIC’s word for it. But while the JIC may have evidence that supports their rating, it seems to me that they have not even alluded to a key part of it.

Often, of course, an argument may be technically flawed but still lead to a correct conclusion. To fix the argument one would want a much greater understanding of the situation. For example, the Russians seem to suggest that one opposition group would be prepared to gas another, presumably to draw the US and others into the war. Is the JIC saying that this is not plausible, or simply that no such group (yet) has the means? Without clarity, it is difficult for an outsider to assess the report and draw their own conclusion.

Finally, it is notable that regime responsibility for the attack of the 21st is rated ‘highly likely’, the same as their responsibility for previous attacks. Yet mathematically the rating should depend on what is called ‘the likelihood’, which one would normally expect to increase with time. Hence one would expect the rating to increase from possible (in the immediate aftermath) through likely to highly likely, as the kind of issues described above are dealt with. This unexpectedly high rating calls for an explanation, which would need to address the most relevant factors.
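
A toy Bayesian update (with invented likelihood ratios) shows the expected pattern: as independent pieces of evidence accumulate, the posterior – and hence the appropriate rating – should normally climb.

```python
# Toy Bayesian updating with invented likelihood ratios. Each new report with
# likelihood ratio L multiplies the odds in favour of regime responsibility.
def rating(p):
    return "highly likely" if p > 0.9 else "likely" if p > 0.6 else "possible"

odds = 1.0                     # start undecided: P = 0.5, i.e. 'possible'
for lr in [2.0, 3.0, 4.0]:     # assumed likelihood ratios of successive reports
    odds *= lr
    p = odds / (1 + odds)
    print(round(p, 3), rating(p))
# 0.667 likely / 0.857 likely / 0.96 highly likely
```

If a rating fails to climb as evidence accumulates, either the evidence is weaker than claimed or the initial rating was inflated – which is the puzzle noted above.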

Anticipating the UN Inspectors

The UN weapons inspectors are expected to produce much relevant evidence. For example, it may be that even if an opposition group had CW an attack would necessarily lack some key signatures. But, from a mathematical point of view, one cannot claim that one explanation is ‘highly likely’ without considering all the alternatives and taking full account of how the evidence was obtained. It is quite true, as the PM argued, that there will always be gaps that require judgement to span. But we should strive to make the gap as slight as possible, and to be clear about what it is. While one would not want a JIC report to be phrased in terms of mathematics, it would seem that appropriate mathematics could be a valuable aid to critical thinking. Hopefully we shall soon have an assessment that genuinely rates ‘highly likely’ independently of any esoteric expertise, whether intelligence or mathematics.

Updates

30th August: US

The US assessment concludes that the attack was by Assad’s troops, using rockets to deliver a nerve agent, following their usual procedures. This ought to be confirmed or disconfirmed by the inspectors, with reasonable confidence. Further, the US claim ‘high confidence’ in their assessment, rather than very high confidence. Overall, the US assessment appears to be about what one would expect if Assad’s troops were responsible.

31st August: Blog

There is a good private-enterprise analysis of the open-source material. It makes a good case that the rockets’ payloads were not very dense, and probably a chemical gas. However, it points out that only the UN inspectors could determine if the payload was a prohibited substance, or some other substance such as is routinely used by respectable armies and police forces.

It makes no attribution of the rockets. The source material is clearly intended to show them being used by the Assad regime, but there is no discussion of whether or not any rebel groups could have made, captured or otherwise acquired them.

2nd September: France

The French have declassified a dossier. Again, it presents assertion and argumentation rather than evidence. The key points seem to be:

  • A ‘large’ amount of gas was used.
  • Rockets were probably used (presumably many).
  • No rebel group has the ability to fire rockets (unlike the Vietcong in Vietnam).

This falls short of a conclusive argument. Nothing seems to rule out the possibility of an anti-Assad outside agency loading up an ISO container (or a mule train) with CW (perhaps in rockets), and delivering them to an opposition group along with an adviser. (Not all the opposition groups are allies.)

4th September: Germany

A German report includes:

  • Conjecture that the CW mix was stronger than intended, and hence lethal rather than temporarily disabling.
  • That a Hezbollah official said that Assad had ‘lost his nerve’ and ordered the attack.

It is not clear if the Hezbollah utterance was based on good grounds or was just speculation.

4th September: Experts

Some independent experts have given an analysis of the rockets that is similar in detail to that provided by Colin Powell to the UN in 2003, providing some support for the official dossiers. They assess that each warhead contained 50 litres (13 gallons) of agent, and that the rebels could have constructed the rockets, but not produced the large quantity of agents.

No figure is given for the number of rockets, but I have seen a figure of 100, which seems the right order of magnitude. This would imply 5,000 litres or 1,300 gallons, if all held the agent. A large tanker truck has a capacity of about 7 times this, so it does not seem impossible that such an amount could have been smuggled in.
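
Checking the arithmetic (the figure of 100 rockets is, as stated, only an order-of-magnitude guess):

```python
# Back-of-envelope check (the count of ~100 rockets is only a rough figure).
LITRES_PER_WARHEAD = 50
ROCKETS = 100
LITRES_PER_US_GALLON = 3.78541

total_litres = LITRES_PER_WARHEAD * ROCKETS
print(total_litres)                                # 5000 litres
print(round(total_litres / LITRES_PER_US_GALLON))  # 1321, i.e. roughly 1,300 US gallons
print(7 * total_litres)                            # 35000 litres: a large tanker load
```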

This report essentially puts a little more detail on the blog of 31st August, and is seen as being more authoritative.

5th September: G20

The UK has confirmed that Sarin was used, but seems not to have commented on whether it was of typical ‘military quality’, or more home-made.

Russia has given the UN a 100 page dossier of its own, and I have yet to see a credible debunking (early days, and I haven’t found it on-line).

The squabbles continue. The UN wants to wait for its inspectors.

6th September: Veteran Intelligence Professionals for Sanity

An alternative, unofficial narrative. Can this be shown to be incredible? Will it be countered?

9th September: German

German secret sources indicate that Assad had no involvement in the CW attack (although others in the regime might have).

9th September: FCO news conference

John Kerry, at a UK FCO news conference, gives a very convincing account of the evidence for CW use, but without indicating any evidence that the chemicals were delivered by rocket. He is asked about Assad’s involvement, but notes that all that is claimed is senior regime culpability.

UN Inspectors’ Report

21st September. The long-awaited report concludes that rockets were used to deliver Sarin. The report, at first read, seems professional and credible. It is similar in character to the evidence that Colin Powell presented to the UN in 2003, but without the questionable ‘judgments’. It provides some key details (type of rocket, trajectory) which – one hopes – could be tied to the Assad regime, especially given US claims to have monitored rocket launches. Otherwise, they appear to be of a type that the rebels could have used.

The report does not discuss the possibility, raised by the regime, that conventional rockets had accidentally hit a rebel chemical store, but the technical details do seem to rule it out. There is an interesting point here. Psychologically, the fact that the regime raised a possibility in their defence which has been shown to be false increases our scepticism about them. But mathematically, if they are innocent then we would not expect them to know what happened, and hence we would not expect their conjectures to be correct. Such a false conjecture could even be counted as evidence in their favour, particularly if we thought them competent enough to realise that such an invention would easily be falsified by the inspectors.
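
The point can be put in Bayesian terms with invented numbers: if an innocent party guesses blind, a falsified conjecture is unsurprising, whereas a competent guilty party would avoid easily-falsified claims, so observing the falsified conjecture can shift the odds slightly towards innocence.

```python
# Invented numbers. An innocent regime guesses blind, so its conjecture about
# what happened is likely to be falsified; a competent guilty regime knows the
# truth and would tend to avoid easily-falsified claims.
p_false_given_innocent = 0.8    # assumed
p_false_given_guilty = 0.3      # assumed

prior_guilt = 0.9               # e.g. a 'highly likely' starting rating
prior_odds = prior_guilt / (1 - prior_guilt)

# Observing the falsified conjecture multiplies the odds of guilt by the
# likelihood ratio P(obs | guilty) / P(obs | innocent) = 0.3 / 0.8.
posterior_odds = prior_odds * (p_false_given_guilty / p_false_given_innocent)
posterior_guilt = posterior_odds / (1 + posterior_odds)
print(round(posterior_guilt, 3))   # 0.771: nudged towards innocence
```

The numbers are arbitrary; the direction of the shift is what the argument turns on.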

Reaction

Initial formal reactions

Initial reactions from the US, UK and French are that the technical details, including the trajectory, rule out rebel responsibility. They appear to be in a good position to make such a determination, and it would normally be a conclusion that I would take at face value. But given the experience of Iraq and their previous dossiers, it seems quite possible that they would say what they said even without any specific evidence. A typical response, from US ambassador to the UN Samantha Power, was:

“The technical details of the UN report make clear that only the regime could have carried out this large-scale chemical weapons attack.”

Being just a little pedantic, this statement is literally false: one would at least have to take the technical details to a map showing rebel and regime positions, and have some idea of the range of the rockets. From the Russian comments, it would seem they have not been convinced.

Media reaction

A Telegraph report includes:

Whether the rebels have captured these delivery systems – along with sarin gas – from government armouries is unknown. Even if they have, experts said that operating these weapons successfully would be exceptionally difficult.

”It’s hard to say with certainty that the rebels don’t have access to these delivery systems. But even if they do, using them in such a way as to ensure that the attack was successful is the bit the rebels won’t know how to do,” said Dina Esfandiary, an expert on chemical weapons at the International Institute for Strategic Studies.

The investigators had enough evidence to trace the trajectories followed by two of the five rockets. If the data they provide is enough to pinpoint the locations from which the weapons were launched, this should help to settle the question of responsibility.

John Kerry, the US secretary of state, says the rockets were fired from areas of Damascus under the regime’s control, a claim that strongly implicates Mr Assad’s forces.

This suggests that there might be a strong case against the regime. But it is not clear that the government would be the only source of weapons for the rebels, that the rebels would need sophisticated launchers (rather than sticks) or that they would lack advice. Next, given the information on type, timing and bearing it should be possible to identify the rockets, if the US was monitoring their trajectories at the time, and hence it might be possible to determine where they came from, in which case the evidence trail would lead strongly to the regime. (Elsewhere it has been asserted that one of the rockets was fired from within the main Syrian Army base, in which case one would have thought they would have noticed a rebel group firing out.)

17 September: Human Rights Watch

Human Rights Watch has marked the UN estimate of the trajectories on a map, clearly showing that they could have been fired from the Republican Guard 104 Brigade area.

Connecting the dots provided by these numbers allows us to see for ourselves where the rockets were likely launched from and who was responsible.

This isn’t conclusive, given the limited data available to the UN team, but it is highly suggestive and another piece of the puzzle.

This seems a reasonable analysis. The BBC has said of it:

Human Rights Watch says the document reveals details of the attack that strongly suggest government forces were behind the attack.

But this seems to exaggerate the strength of the evidence. One would at least want to see if the trajectories are consistent with the rockets having been launched from rebel-held areas (map, anyone?). It also seems a little odd that a salvo of M14 rockets appears to have been fired over the presidential palace. Was the Syrian Army that desperate? Depending on the view that one takes of these questions, the evidence could favour the rebel hypothesis. On the other hand, if the US could confirm that the only rockets fired at that time to those sites came from government areas, that would seem conclusive.

(Wikipedia gives technical details of rockets. It notes use by the Taliban, and quotes its normal maximum range as 9.8km. The Human Rights Watch analysis seems to be assuming that this will not be significantly reduced by the ad-hoc adaptation to carry gas. Is this credible? My point here is that the lack of explicit discussion of such aspects in the official dossiers leaves room for doubt, which could be dispelled if their ‘very high confidence’ is justified.)

18 September: Syrian “proof”

The BBC has reported that the Syrians have provided what they consider proof to the Russians that the rebels were responsible for the CW attack, and that the Russians are evaluating it. I doubt that this will be proof, but perhaps it will reduce our confidence in the ‘very high’ likelihood that the regime was responsible. (Probably not!) It may, though, flush out more conclusive evidence, either way.

19 September: Forgery?

Assad has claimed that the materials recovered by the UN inspectors were forged. The report talks about rebels moving material, and it is not immediately clear that this hypothesis is not credible, as the official dossiers claim, particularly if the rebels had technical support.

Putin has confirmed that the rockets used were obsolete Soviet-era ones, no longer in use by the Syrian Army.

December: US Intelligence?

Hersh claims that the US had intelligence that the Syrian rebels had chemical weapons, and that the US administration deliberately ‘adjusted’ the intelligence to make it appear much more damning of the Syrian regime. (This is disputed.)

Comment

The UN inspectors’ report is clear about what it has found. It is careful not to make deductive leaps, but provides ample material to support further analysis. For example, while it finds that Sarin was delivered by rockets that could have been launched from a regime area, it does not rule out rebel responsibility. But it does give details of type, time and direction, such that if – as appears to be the case from their dossier – the US were monitoring the area, it should be possible to conclude whether the rockets were actually fired by the regime. Maybe someone will assemble the pieces for us.

My own view is not that Assad did not do it or that we should not attack, but that any attack based on the grounds that Assad used CW should be supported by clear, specific evidence, which the dossiers prior to the UN report did not provide. Even now, we lack a complete case. Maybe the UN should have its own intelligence capability? Or could we attack on purely humanitarian grounds, not basing the justification on the possible events on 21 Aug? Or share our intelligence with the Russians and Chinese?

Maybe no-one is interested any more?

See Also

Telegraph on anti-spy cynicism. Letters. More controversially: inconclusive allegations, and an attempted debunking.

Discussion of weakness of case that Assad was personally involved. Speculation on UN findings.

A feature of the debate seems to be that those who think that ‘something must be done’ tend to be critical of those who question the various dossiers, and those who object to military action tend to throw mud at the dossiers, justified or not. So maybe my main point should be that, irrespective of the validity of the JIC assessment, we need a much better quality of debate, engaging the public and those countries with different views, not just our traditional allies.

A notable exception was a private blog, which looked very credible, but fell short of claiming “high likelihood”. It gives details of two candidate delivery rockets, and hoped that the UN inspectors would get evidence from them, as they did. Neither rocket was known to have been used by the rebels, but neither do they appear to be beyond the ability of rebel groups to use (with support). The comments are also interesting, e.g.:

There is compelling evidence that the Saudi terrorists operating in Syria, some having had training from an SAS mercenary working out of Dubai who is reporting back to me, are responsible for the chemical attack in the Ghouta area of Damascus.

The AIPAC derived ‘red line’ little game and frame-up was orchestrated at the highest levels of the American administration and liquid sarin binary precursors mainly DMMP were supplied by Israeli handled Saudi terrorists to a Jabhat al-Nusra Front chemist and fabricator.

Israel received supplies of the controlled substance DMMP from Solkatronic Chemicals of Morrisville, Pa.

This at least has some detail, although not such as can be easily checked.

Finally, I am beginning to get annoyed by the media’s use of scare quotes around Russian “evidence”.

Dave Marsay