Traffic bunching

In heavy traffic, such as on motorways in rush-hour, there is often oscillation in speed, and there can even be mysterious ‘emergent’ halts. The use of variable speed limits can result in everyone getting along a given stretch of road more quickly.

Soros (worth reading) has written an article that suggests that this is all to do with the humanity and ‘thinking’ of the drivers, and that something similar is the case for economic and financial booms and busts. This might seem to indicate that ‘mathematical models’ were a part of our problems, not solutions. So I suggest the following thought experiment:

Suppose a huge number of identical driverless cars with deterministic control functions all try to go along the same road, seeking to optimise performance in terms of ‘progress’ and fuel economy. Will they necessarily succeed, or might there be some ‘tragedy of the commons’ that can only be resolved by some overall regulation? What are the critical factors? Is the nature of the ‘brains’ one of them?
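The thought experiment can be sketched in a few lines. The following is a minimal simulation in the style of the Bando ‘optimal velocity’ car-following model (the model choice, parameters and stability threshold are my own illustrative assumptions, not anything from the post): identical, deterministic cars on a ring road, with one car nudged slightly out of place. Below a critical sensitivity the uniform flow is unstable and stop-and-go waves emerge; above it the perturbation dies away.

```python
import math

def simulate_ring(n=50, length=100.0, a=1.0, steps=15000, dt=0.05):
    """Identical, deterministic cars on a ring road, each relaxing at
    rate `a` toward an 'optimal' speed V(gap) for its current headway
    (a Bando-style optimal-velocity model).  Linear analysis says the
    uniform flow is unstable when a < 2*V'(gap)."""
    V = lambda gap: math.tanh(gap - 2.0) + math.tanh(2.0)  # optimal speed
    x = [i * length / n for i in range(n)]   # equally spaced cars...
    v = [V(length / n)] * n                  # ...moving at the uniform speed
    x[0] += 0.1                              # one tiny perturbation
    for _ in range(steps):
        gaps = [(x[(i + 1) % n] - x[i]) % length for i in range(n)]
        acc = [a * (V(g) - v[i]) for i, g in enumerate(gaps)]
        v = [max(0.0, v[i] + acc[i] * dt) for i in range(n)]
        x = [(x[i] + v[i] * dt) % length for i in range(n)]
    return v

# Spread of final speeds: near zero means smooth flow; a large spread
# means stop-and-go waves have emerged despite identical, deterministic cars.
jam = simulate_ring(a=1.0)      # sensitivity below the stability threshold
smooth = simulate_ring(a=3.0)   # sensitivity above it
spread_jam = max(jam) - min(jam)
spread_smooth = max(smooth) - min(smooth)
```

The point of the sketch is that nothing stochastic or ‘human’ is needed for bunching: the same deterministic control law produces smooth flow or emergent jams depending on a single parameter.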

Are these problems the preserve of psychologists, or does mathematics have anything useful to say?

Dave Marsay

Are financiers really stupid?

The New Scientist (30 March 2013) has the following question, under the heading ‘Stupid is as stupid does’:

Jack is looking at Anne but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?

Possible answers are: “yes”, “no” or “cannot be determined”.

You might want to think about this before scrolling down.

.

.

.

.

.

.

.

It is claimed that while ‘the vast majority’ (presumably including financiers, whose thinking is being criticised) think the answer is “cannot be determined”,

careful deduction shows that the answer is “yes”.

Similar views are expressed at a learning blog and at a Physics blog, although the ‘careful deductions’ are not given. Would you like to think again?

.

.

.

.

.

.

.

.

Now I have a confession to make. My first impression is that the closest of the admissible answers is ‘cannot be determined’, and having thought carefully for a while, I have not changed my mind. Am I stupid? (Based on this evidence!) You might like to think about this before scrolling down.

.

.

.

.

.

.

.

Some people object that the term ‘is married’ may not be well-defined, but that is not my concern. Suppose that one has a definition of marriage that is as complete and precise as possible. What is the correct answer? Does that change your thinking?

.

.

.

.

.

.

.

Okay, here are some candidate answers that I would prefer, if allowed:

  1. There are cases in which the answer cannot be determined.
  2. It is not possible to prove that there are not cases in which the answer cannot be determined. (So that the answer could actually be “yes”, but we cannot know that it is “yes”.)

Either way, it cannot be proved that there is a complete and precise way of determining the answer, but for different reasons. I lean towards the first answer, but am not sure. Which of the two holds is not a logical or mathematical question, but a question about ‘reality’, so one should ask a Physicist. My reasoning follows … .

.

.

.

.

.

.

.

.

Suppose that Anne marries Henry, who dies while out in space with a high relative velocity and acceleration. Then to answer “yes” we must at least be able to determine a unique time, in Anne’s frame of reference, at which Henry dies; or else (it seems to me) there will be a period of time in which Anne’s status is indeterminate. It is not just that we do not know what Anne’s status is; she has no ‘objective’ status.

If there is some experiment which really proves that there is no possible ‘objective’ time (and I am not sure that there is) then am I not right? Even if there is no such experiment, one cannot determine the truth of physical theories, only fail to disprove them. So either way, am I not right?

Enlightenment, please. The link to finance is that the New Scientist article says that

Employees leaving logic at the office door helped cause the financial crisis.

I agree, but it seems to me (after Keynes) that it was their use of the kind of ‘classical’ logic that is implicitly assumed in the article that is at fault. Being married is a relation, not a proposition about Anne. Anne has no state or attributes from which her marital status can be determined, any more than terms such as crash, recession, money supply, inflation, inequality, value or ‘the will of the people’ have any correspondence in real economies.  Unless you know different?
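The disagreement can be made mechanical. Here is a small sketch (my own illustration, not the article's): the article's ‘careful deduction’ is a two-valued case split on Anne, and it fails as soon as a third, indeterminate status of the kind argued for above is admitted.

```python
def married_looking_at_unmarried(anne_status):
    """Jack (married) looks at Anne; Anne looks at George (unmarried).
    Returns True, or None when the facts do not settle the question."""
    if anne_status == "married":
        return True   # Anne (married) is looking at George (unmarried)
    if anne_status == "unmarried":
        return True   # Jack (married) is looking at Anne (unmarried)
    return None       # Anne's own status is objectively indeterminate

# Classical two-valued case split: the answer is "yes" either way.
classical = {married_looking_at_unmarried(s) for s in ("married", "unmarried")}

# Admitting a third, indeterminate status reopens the question.
three_valued = {married_looking_at_unmarried(s)
                for s in ("married", "unmarried", "indeterminate")}
```

The ‘careful deduction’ is thus valid exactly when marital status is a well-defined two-valued attribute of Anne, which is the assumption at issue.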

Dave Marsay

Intelligence-led: Intelligent?

In the UK, after various scandals in the 90s, it seemed that horizon scanning for potential problems, such as the BSE crisis, ought to be more intelligent, and even ‘intelligence-led’ or ‘evidence-led’ as against being prejudice- or spin-led. Listening to ministerial pronouncements on the horse-meat scandal, I wonder if the current so-called ‘intelligence-led’ approach is actually intelligent.

Suppose that the house next door becomes a refuge for drug-addicts. Which of the following are intelligent? Intelligence-led?

  1. Wait until there is a significant increase in crime locally – or until you get burgled – and then up-rate your security.
  2. Review security straight away.

In case you hadn’t guessed, this relates to my blog, and the question of what you mean by ‘information’ and ‘evidence’.

Does anyone have a definition of what is meant by ‘intelligence-led’ in this context?

Dave Marsay

P.S. I have some more puzzles on uncertainty.

 

Risks to scientists from mis-predictions

The recent conviction of six seismologists and a public official, for reassuring the public about the risk of an earthquake shortly before a deadly one struck, raises many issues, mostly legal, but I want to focus on the scientific aspects, specifically the assessment and communication of uncertainty.

A recent paper by O’Hagan notes that there is “wide recognition that the appropriate representation for expert judgements of uncertainty is as a probability distribution for the unknown quantity of interest …”. This conflicts with UK best practice, as described by Spiegelhalter at understanding uncertainty. My own views have been formed by experience of potential and actual crises where evaluation of uncertainty played a key role.

From a mathematical perspective, probability theory is a well-grounded theory depending on certain axioms. There are plausible arguments that these axioms are often satisfied, but these arguments are empirical and hence should be considered at best as scientific rather than mathematical or ‘universally true’. O’Hagan’s arguments, for example, start from the assumption that uncertainty is nothing but a number, ignoring Spiegelhalter’s ‘Knightian uncertainty’.

Thus it seems to me that, where there are rare critical decisions with a lack of evidence to support a belief in the axioms, one should recognize the attendant non-probabilistic uncertainty, and that failure to do so is a serious error, meriting some censure. In practice, one needs relevant guidance such as the UK is developing, interpreted for specific areas such as seismology. This should provide both guidance (such as that at understanding uncertainty) to scientists and material to be used in communicating risk to the public, preferably with some legal status. But what should such guidance be? Spiegelhalter’s is a good start, but needs developing.

My own view is that one should have standard techniques that can put reasonable bounds on probabilities, so that one has something that is relatively well peer-reviewed, ‘authorised’ and ‘scientific’ to inform critical decisions. But in applying any methods one should recognize any assumptions that have been made to support the use of those methods, and highlight them. Thus one may say that according to the usual methods, ‘the probability is p’, but that there are various named factors that lead you to suppose that the ‘true risk’ may be significantly higher (or lower). But is this enough?
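As a sketch of what ‘standard techniques that can put reasonable bounds on probabilities’ might look like (my own toy construction; all the numbers are invented), one can carry interval-valued inputs through the law of total probability rather than committing to point estimates, and report the resulting range alongside the named caveats:

```python
from itertools import product

def bound_total_probability(p_b, p_a_given_b, p_a_given_not_b):
    """Bounds on P(A) from interval-valued inputs, using
    P(A) = P(A|B)P(B) + P(A|~B)(1 - P(B)).
    The expression is multilinear in the three inputs, so its extremes
    over the box of intervals occur at corner points."""
    values = [q * b + r * (1 - b)
              for b, q, r in product(p_b, p_a_given_b, p_a_given_not_b)]
    return min(values), max(values)

# Hypothetical inputs: the conditionals are fairly well pinned down,
# but the base rate P(B) is only loosely known.
lo, hi = bound_total_probability((0.1, 0.4), (0.6, 0.7), (0.01, 0.05))
```

A wide interval here is itself information: it flags that the headline ‘probability’ is dominated by a poorly known input, which is just the kind of named factor the text suggests highlighting.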

Some involved in crisis management have noted that scientists generally seem to underestimate risk. If so, then even the above approach (and the similar approach of understanding uncertainty) could tend to understate risk. So do scientists tend to understate the risks pertaining to crises, and why?

It seems to me that one cannot be definitive about this, since there are, from a statistical perspective – thankfully – very few crises or even near-crises. But my impression is that there could be something in it. Why?

As at Aquila, human and organisational factors seem to play a role, so that some answers seem to need more justification than others. Any ‘standard techniques’ would need to take account of these tendencies. For example, I have often said that the key to good advice is to have a good customer, who desires an adequate answer – whatever it is – who fully appreciates the dangers of misunderstanding arising, and is prepared to invest the time in ensuring adequate communication. This often requires debate and perhaps role-playing, prior to any crisis. This was not achieved at Aquila. But is even this enough?

Here I speculate even more. In my own work, it seems to me that where a quantity such as P(A|B) is required and scientists/statisticians only have a good estimate of P(A|B’) for some B’ that is more general than B, then P(A|B’) will be taken as ‘the scientific’ estimate for P(A|B). This is so common that it seems to be a ‘rule of pragmatic inference’, albeit one that seems to be unsupported by the kind of arguments that O’Hagan supports. My own experience is that it can seriously underestimate P(A|B).
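A toy numerical example (the counts are invented) shows how this ‘rule of pragmatic inference’ can go wrong. Here B′ is a broad reference class for which good statistics exist, B is the narrower condition actually of interest, and the convenient estimate P(A|B′) understates P(A|B) eightfold:

```python
# Invented counts.  A = "event of interest occurs";
# B = the specific current condition; B' = the broader class with good data
# (here, for simplicity, B' is everything, so B' contains B).
counts = {
    # (in_B, A_occurs): number of cases
    (True, True): 8,
    (True, False): 92,
    (False, True): 2,
    (False, False): 898,
}

def conditional(counts, condition):
    """P(A | condition on membership of B), from the counts."""
    total = sum(n for (b, _), n in counts.items() if condition(b))
    hits = sum(n for (b, a), n in counts.items() if condition(b) and a)
    return hits / total

p_a_given_b_general = conditional(counts, lambda b: True)  # P(A|B'): the 'scientific' figure
p_a_given_b = conditional(counts, lambda b: b)             # P(A|B): the figure needed
```

Both figures are correct answers to different questions; the error lies in quietly substituting the question for which good data exist.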

The facts of the Aquila case are not clear to me, but I suppose that the scientists made their assessment based on the best available scientific data. To put it another way, they would not have taken account of ad-hoc observations, such as amateur observations of radon gas fluctuations. Part of the Aquila problem seems to be that the amateur observations provided a warning which the population were led to discount on the basis of ‘scientific’ analysis. More generally, in a crisis, one often has a conflict between a scientific analysis based on sound data and non-scientific views verging on divination. How should these diverse views inform the overall assessment?

In most cases one can make a reasonable scientific analysis based on sound data and ‘authorised assumptions’, taking account of recognized factors. I think that one should always strive to do so, and to communicate the results. But if that is all that one does then one is inevitably ignoring the particulars of the case, which may substantially increase the risk. One may also want to take a broader decision-theoretic view. For example, if the peaks in radon gas levels were unusual then taking them as a portent might be prudent, even in the absence of any relevant theory. The only reason for not doing so would be if the underlying mechanisms were well understood and the gas levels were known to be simply consequent on the scientific data, thus providing no additional information. Such an approach is particularly indicated where – as I think is the case in seismology – even the best scientific analysis has a poor track record.

The bottom line, then, is that I think that one should always provide ‘the best scientific analysis’ in the sense of an analysis that gives a numeric probability (or probability range etc) but one needs to establish a best practice that takes a broader view of the issue in question, and in particular the limitations and potential biases of ‘best practice’.

The O’Hagan paper quoted at the start says – of conventional probability theory – that “Alternative, but similarly compelling, axiomatic or rational arguments do not appear to have been advanced for other ways of representing uncertainty.” This overlooks Boole, Keynes, Russell and Good, for example. It may be timely to reconsider the adequacy of the conventional assumptions. It might also be that ‘best scientific practice’ needs to be adapted to cope with messy real-world situations. Aquila was not a laboratory.

See Also

My notes on uncertainty and on current debates.

Dave Marsay

Avoiding ‘Black Swans’

A UK Blackett Review has reviewed some approaches to uncertainty relevant to the question “How can we ensure that we minimise strategic surprises from high impact low probability risks”. I have already reviewed the report in its own terms. Here I consider the question.

  • One person’s surprise may be as a result of another person’s innovation, so we need to consider the up-sides and down-sides together.
  • In this context ‘low probability’ is subjective: things are only surprising if we did not expect them, so the reference to low probability is superfluous.
  • Similarly, strategic surprise necessarily relates to things that – if only in anticipation – have high impact.
  • Given that we are concerned with areas of innovation and high uncertainty, the term ‘minimise’ is overly ambitious. Reducing would be good. Thinking that we have minimized would be bad.

The question might be simplified to two parts:

  1. “How can we ensure that we strategize?”
  2. “How can we strategize?”

These questions clearly have very important related considerations, such as:

  • What in our culture inhibits strategizing?
  • Who can we look to for exemplars?
  • How can we convince stakeholders of the implications of not strategizing?
  • What else will we need to do?
  • Who might we co-opt or collaborate with?

But here I focus on the more widely-applicable aspects. On the first question the key point seems to be that, where the Blackett review points out the limitations of a simplistic view of probability, there are many related misconceptions and misguided habits that blind us to the possibility, or the benefits, of strategizing. In effect, as in economics, we have got ourselves locked into ‘no-strategy strategies’, in which we believe that a short-term adaptive approach, with no broader or longer-term view, is best, and that more strategic approaches are a snare and a delusion. Thus the default answer to the original question seems to be ‘you don’t – you just live with the consequences’. In some cases this might be right, but I do not think that we should take it for granted. This leads on to the second part.

We at least need ‘eyes open minds open’, to be considering potential surprises, and keeping score. If (for example, as in International Relations) it seems that none of our friends do better than chance, we should consider cultivating some more. But the scoring and rewarding is an important issue. We need to be sure that our mechanisms aren’t recognizing short-term performance at the expense of long-run sustainability. We need informed views about what ‘doing well’ would look like and what are the most challenging issues, and to seek to learn and engage with those who are doing well. We then need to engage in challenging issues ourselves, if only to develop and then maintain our understanding and capability.

If we take the financial sector as an example, there used to be a view that regulation was not needed. There are two more moderate views:

  1. That the introduction of rules would distort and destabilise the system.
  2. That although the system is not inherently stable, the government is not competent to regulate, and no regulation is better than bad regulation.

My view is that what is commonly meant by ‘regulation’ is very tactical, whereas the problems are strategic. We do not need a ‘strategy for regulation’: we need strategic regulation. One of the dogmas of capitalism is that it involves ‘free markets’ in which information plays a key role. But in the noughties the markets were clearly not free in this sense. A potential role for a regulator, therefore, would be to perform appropriate ‘horizon scanning’ and to inject appropriate information to ‘nudge’ the system back into sustainability. Some voters would be suspicious of a government that attempts to strategize, but perhaps this form of regulation could be seen as simply better-informed muddling, particularly if there were strong disincentives to take unduly bold action.

But finance does not exist separate from other issues. A UK ‘regulator’ would need to be a virtual beast spanning the departments, working within the confines of regular general elections, and being careful not to awaken memories of Cromwell.

This may seem terribly ambitious, but maybe we could start with reformed concepts of probability, performance, etc. 

Comments?

See also

JS Mill’s views

Other debates, my bibliography.  

Dave Marsay

The money forecast

A review of ‘The Money forecast’, A. Haldane, New Scientist, 10 Dec. 2011. The on-line version is ‘To Navigate economic storms we need better forecasting’.

Summary

Andrew Haldane, ‘Andy’, is one of the more insightful and – hopefully – influential members of the UK economic community, recognising that new ways of thinking are needed and taking a lead in their development.

He refers to a previous article ‘Revealed – the Capitalist network that runs the world’, which inspires him to attempt to map the world of finance.

“… Making sense of the financial system is more an act of archaeology than futurology.”

Of the pre-crisis approach it says:

“… The mistake came in thinking the behaviour of the system was just an aggregated version of the behaviour of the individual. …

    Interactions between agents are what matters. And the key to that is to explore the underlying architecture of the network, not the behaviour of any one node. To make an analogy, you cannot understand the brain by focusing on a neuron – and then simply multiplying by 100 billion. …

… When parts started to malfunction … no one had much idea what critical faculties would be impaired.

    That uncertainty, coupled with dense financial wiring, turned small failures into systemic collapse. …

    Those experiences are now seared onto the conscience of regulators. Systemic risk has entered their lexicon, and to understand that risk, they readily acknowledge the need to join the dots across the network. So far, so good. Still lacking are the data and models necessary to turn this good intent into action.

… Other disciplines have cut a dash in their complex network mapping over the past generation, assisted by increases in data-capture and modelling capability made possible by technology. One such is weather forecasting … .

   Success stories can also be told about utility grids and transport networks, the web, social networks, global supply chains and perhaps the most complex web of all, the brain.

    …  imagine the scene a generation hence. There is a single nerve centre for global finance. Inside, a map of financial flows is being drawn in real time. The world’s regulatory forecasters sit monitoring the financial world, perhaps even broadcasting it to the world’s media.

    National regulators may only be interested in a quite narrow subset of the data for the institutions for which they have responsibility. These data could be part of, or distinct from, the global architecture.

    … it would enable “what-if?” simulations to be run – if UK bank Northern Rock is the first domino, what will be the next?”

Comments

I am unconvinced that archaeology, weather forecasting or the other examples are really as complex as economic forecasting, which can be reflexive: if all the media forecast a crash, there probably will be one, irrespective of the ‘objective’ financial and economic conditions. Similarly, prior to the crisis most people seemed to believe in ‘the great moderation’, and the good times rolled on, seemingly.

Prior to the crisis I was aware that a minority of British economists were concerned about the resilience of the global financial system and that the ‘great moderation’ was a cross between a house of cards and a pyramid selling scheme. In their view, a global financial crisis precipitated by a US crisis was the greatest threat to our security. In so far as I could understand their concerns, Keynes’ mathematical work on uncertainty together with his later work on economics seemed to be key.

Events in 2007 were worrying. I was advised that the Chinese were thinking more sensibly about these issues, and I took the opportunity to visit China at Easter 2008, hosted by the Chinese Young Persons Tourist Group, presumably not noted for their financial and economic acumen. It was very apparent from a coach ride from Beijing to the Great Wall that their program of building new towns and moving peasants in was on hold. The reason given by the Tour Guide was that the US financial system was expected to crash after their Olympics, leading to a slow-down in their economic growth, which needed to be above 8% or else they faced civil unrest. Once tipped off, similar measures to mitigate a crisis were apparent almost everywhere. I also talked to a financier, and had some great discussions about Keynes and his colleagues, and the implications for the crash. In the event the crisis seems to have been triggered by other causes, but Keynes’ conceptual framework still seemed relevant.

The above only went to reinforce my prejudice:

  • Not only is uncertainty important, but one needs to understand its ramifications at least as well as Keynes did (e.g. in his Treatise and ‘Economic Consequences of the Peace’).
  • Building on this, concepts such as risk need to be understood to their fullest extent, not reduced to numbers.
  • The quotes above are indicative of the need for a holistic approach. Whatever variety one prefers, I do think that this cannot be avoided.
  • The quote about national regulators only having a narrow interest seems remarkably reductionist. I would think that they would all need a broad interest and to be exchanging data and views, albeit they may only have narrow responsibilities. Financial storms can spread around the world quicker than meteorological ones.
  • The – perhaps implicit – notion of only monitoring financial ‘flows’ seems ludicrous. I knew that the US was bound to fail eventually, but it was only by observing changes in migration that I realised it was imminent. Actually, I might have drawn the same conclusion from observing changes in financial regulation in China, but that still was not a ‘financial flow’. I did previously draw similar conclusions talking to people who were speculating on ‘buy to let’, thinking it a sure-thing.
  • Interactions between agents and architectures are important, but if Keynes was right then what really matters are changes to ‘the rules of the games’. The end of the Olympics was not just a change in ‘flows’ but a potential game-changer.
  • Often it is difficult to predict what will trigger a crisis, but one can observe when the situation is ripe for one. To draw an analogy with forest fires, one can’t predict when someone will drop a bottle or a lit cigarette, but one can observe when the tinder has built up and is dry.

It thus seems to me that while Andy Haldane is insightful, the actual article is not that enlightening, and invites a much too prosaic view of forecasting. Even if we think that Keynes was wrong I am fairly sure that we need to develop language and concepts in which we can have a discussion of the issues, even if only ‘Knightian uncertainty’. The big problem that I had prior to the crisis was the lack of a possibility of such a discussion. If we are to learn anything from the crisis it is surely that such discussions are essential. The article could be a good start.

See Also

The short long. On the trend to short-termism.

Control rights (and wrongs). On the imbalance between incentives and risks in banking.

Risk Off. A behaviourist’s view of risk. It notes that prior to the crash ‘risk was under-priced’.

  Dave Marsay

 

UK judge rules against probability theory? R v T

Actually, the judge was a bit more considered than my title suggests. In my defence the Guardian says:

“Bayes’ theorem is a mathematical equation used in court cases to analyse statistical evidence. But a judge has ruled it can no longer be used. Will it result in more miscarriages of justice?”

The case involved Nike trainers and appears to be the same as that in a recent appeal judgment, although it doesn’t actually involve Bayes’ rule: it involves only the likelihood ratio, not any priors. An expert witness had said:

“… there is at this stage a moderate degree of scientific evidence to support the view that the [appellant’s shoes] had made the footwear marks.”

The appeal hinged around the question of whether this was a reasonable representation of a reasonable inference.

According to Keynes, Knight and Ellsberg, probabilities are grounded on either logic, statistics or estimates. Prior probabilities are – by definition – never grounded on statistics and in practical applications rarely grounded on logic, and hence must be estimates. Estimates are always open to challenge, and might reasonably be discounted, particularly where one wants to be ‘beyond reasonable doubt’.

Likelihood ratios are typically more objective and hence more reliable. In this case they might have been based on good quality relevant statistics, in which case the judge supposed that it might be reasonable to state that there was a moderate degree of scientific evidence. But this was not the case. Expert estimates had supplied what the available database had lacked, so introducing additional uncertainty. This might have been reasonable, but the estimate appears not to have been based on relevant experience.
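The sensitivity of such conclusions to the estimated figures is easy to exhibit. In the sketch below (the match probability, the frequencies, and the verbal bands are my own illustrative assumptions, loosely modelled on the kind of scale discussed in the case), a tenfold change in the expert's estimate of how common the sole pattern is moves the evidence a whole step up a verbal scale:

```python
def likelihood_ratio(p_evidence_if_defendants_shoe, freq_in_population):
    """LR = P(marks | defendant's shoe made them)
          / P(marks | some other shoe made them)."""
    return p_evidence_if_defendants_shoe / freq_in_population

def verbal(lr):
    """A verbal scale of the kind used with likelihood ratios
    (band boundaries here are illustrative, not authoritative)."""
    if lr < 10:
        return "limited support"
    if lr < 100:
        return "moderate support"
    return "moderately strong support"

# With a sparse database the frequency is largely an expert guess;
# plausible guesses a factor of ten apart change the stated conclusion.
conclusions = {f: verbal(likelihood_ratio(0.9, f)) for f in (0.2, 0.02, 0.002)}
```

When the stated conclusion depends this strongly on an estimate that the data cannot support, the uncertainty in the estimate is the dominant fact, not the ratio itself.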

My deduction from this is that where there is doubt about the proper figures to use, that doubt should be acknowledged and the defendant given the benefit of it. As the judge says:

“… it is difficult to see how an opinion … arrived at through the application of a formula could be described as ‘logical’ or ‘balanced’ or ‘robust’, when the data are as uncertain as we have set out and could produce such different results.”

This case would seem to have wider implications:

“… we do not consider that the word ‘scientific’ should be used, as … it is likely to give an impression … of a degree of precision and objectivity that is not present given the current state of this area of expertise.”

My experience is that such estimates are often used by scientists, and the result confounded with ‘science’. I have sometimes heard this practice justified on the grounds that some ‘measure’ of probability is needed and that if an estimate is needed it is best that it should be given by an independent scientist or analyst than by an advocate or, say, politician. Maybe so, but perhaps we should indicate when this has happened, and the impact it has on the result. (It might be better to follow the advice of Keynes.)

Royal Statistical Society

The guidance for forensic scientists is:

“There is a long history and ample recent experience of misunderstandings relating to statistical information and probabilities which have contributed towards serious miscarriages of justice. … forensic scientists and expert witnesses, whose evidence is typically the immediate source of statistics and probabilities presented in court, may also lack familiarity with relevant terminology, concepts and methods.”

“Guide No 1 is designed as a general introduction to the role of probability and statistics in criminal proceedings, a kind of vade mecum for the perplexed forensic traveller; or possibly, ‘Everything you ever wanted to know about probability in criminal litigation but were too afraid to ask’. It explains basic terminology and concepts, illustrates various forensic applications of probability, and draws attention to common reasoning errors (‘traps for the unwary’).”

The guide is clearly much needed. It states:

“The best measure of uncertainty is probability, which measures uncertainty on a scale from 0 to 1.”

This statement is nowhere supported by any evidence whatsoever. No consideration is given to alternatives, such as those of Keynes, or to the legal concept of “beyond reasonable doubt.”

“The type of probability that arises in criminal proceedings is overwhelmingly of the subjective variety, …”

There is no consideration of Boole and Keynes’ more logical notion, nor any reason given why one should take notice of the subjective opinions of others.

“Whether objective expressions of chance or subjective measures of belief, probabilistic calculations of (un)certainty obey the axiomatic laws of probability, …”

But how do we determine whether those axioms are appropriate to the situation at hand? The reader is not told whether the term axiom is to be interpreted in its mathematical or lay sense: as something to be proved, or as something that may be assumed without further thought. The first example given is:

“Consider an unbiased coin, with an equal probability of producing a ‘head’ or a ‘tail’ on each coin-toss. …”

Probability here is mathematical. Considering the probability of an untested coin of unknown provenance would be more subjective. It is the handling of the subjective component that is at issue, an issue that the example does not help to address. More realistically:

“Assessing the adequacy of an inference is never a purely statistical matter in the final analysis, because the adequacy of an inference is relative to its purpose and what is at stake in any particular context in relying on it.”

“… an expert report might contain statements resembling the following:
* “Footwear with the pattern and size of the sole of the defendant’s shoe occurred in approximately 2% of burglaries.” …
It is vital for judges, lawyers and forensic scientists to be able to identify and evaluate the assumptions which lie behind these kinds of statistics.”

This is good advice, which the appeal judge took. However, while I have not read and understood every detail of the guidance, it seems to me that the judge’s understanding went beyond the guidance, including its ‘traps for the unwary’.

The statistical guidance cites the following guidance from the forensic scientists’ professional body:

“Logic: The expert will address the probability of the evidence given the proposition and relevant background information and not the probability of the proposition given the evidence and background information.”

This seems sound, but needs supporting by detailed advice. In particular none of the above guidance explicitly takes account of the notion of ‘beyond reasonable doubt’.

Forensic science view

Science and Justice has an article which opines:

“Our concern is that the judgment will be interpreted as being in opposition to the principles of logical interpretation of evidence. We re-iterate those principles and then discuss several extracts from the judgment that may be potentially harmful to the future of forensic science.”

The full article is behind a pay-wall, but I would like to know what principles it is referring to. It is hard to see how there could be a conflict, unless there are some extra principles not in the RSS guidance.

Criminal law Review

Forensic Science Evidence in Question argues that:

 “The strict ratio of R. v T  is that existing data are legally insufficient to permit footwear mark experts to utilise probabilistic methods involving likelihood ratios when writing reports or testifying at trial. For the reasons identified in this article, we hope that the Court of Appeal will reconsider this ruling at the earliest opportunity. In the meantime, we are concerned that some of the Court’s more general statements could frustrate the jury’s understanding of forensic science evidence, and even risk miscarriages of justice, if extrapolated to other contexts and forms of expertise. There is no reason in law why errant obiter dicta should be permitted to corrupt best scientific practice.”

In this account it is clear that the substantive issues are about likelihoods rather than probabilities, and that consideration of ‘prior probabilities’ is not relevant here. This is different from the Royal Statistical Society’s account, which emphasises subjective probability. However, in considering the likelihood of the evidence conditioned on the suspect’s innocence, it is implicitly assumed that the perpetrator is typical of the UK population as a whole, or of people at UK crime scenes as a whole. But suppose that women are most often murdered by men that they are or have been close to, and that such men are likely to be more similar to each other than people randomly selected from the population as a whole. Then it is reasonable to suppose that the likelihood that the perpetrator is some other male known to the victim will be significantly greater than the likelihood of it being some random man. The use of an inappropriate likelihood introduces a bias.
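The clustering point can be put in numbers (all of them invented for illustration): if the plausible alternative culprits are drawn from the victim's circle, where tastes in footwear cluster, the relevant frequency may be far higher than the national one, and a whole-population likelihood ratio overstates the evidence against the defendant.

```python
def likelihood_ratio(p_match_if_guilty, freq_among_alternatives):
    """LR for the footwear mark, relative to a given pool of
    alternative possible perpetrators."""
    return p_match_if_guilty / freq_among_alternatives

p_match = 0.95          # P(mark matches | defendant's shoe made it) -- invented
freq_national = 0.01    # pattern frequency in the whole population -- invented
freq_circle = 0.20      # frequency among the victim's circle -- invented

lr_whole_population = likelihood_ratio(p_match, freq_national)  # looks strong
lr_clustered = likelihood_ratio(p_match, freq_circle)           # far weaker
```

The arithmetic is trivial; the substantive question, which the statistics alone cannot settle, is which reference population the innocent hypothesis should use.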

My advice: do not get involved with people who mostly get involved with people like you, unless you trust them all.

The Appeal

Prof. Jamieson, an expert on the evaluation of evidence whose statements informed the appeal, said:

“It is essential for the population data for these shoes be applicable to the population potentially present at the scene. Regional, time, and cultural differences all affect the frequency of particular footwear in a relevant population. That data was simply not … . If the shoes were more common in such a population then the probative value is lessened. The converse is also true, but we do not know which is the accurate position.”

Thus the professor is arguing that the estimated likelihood could be too high or too low, and that the defence ought to be given the benefit of the doubt. I have argued that using a whole population likelihood is likely to be actually biased against the defence, as I expect such traits as the choice of shoes to be clustered.

Science and Justice

Faigman, Jamieson et al., Response to Aitken et al. on R v T, Science and Justice 51 (2011) 213–214

This argues against an unthinking application of likelihood ratios, noting:

  • That the defence may reasonably not be able to explain the evidence, so that there may be no reliable source for an innocent hypothesis.
  • That assessment of likelihoods will depend on experience, the basis for which should be disclosed and open to challenge.
  • If there is doubt as to how to handle uncertainty, any method ought to be tested in court and not dictated by armchair experts.

On the other hand, when it says “Accepting that probability theory provides a coherent foundation …” it fails to note that coherence is beside the point: is it credible?

Comment

The current situation seems unsatisfactory, with the best available advice both too simplistic and not simple enough. In similar situations I have co-authored a large document which has then been split into two: guidance for practitioners and justification. It may not be possible to give comprehensive guidance for practitioners, in which case one should aim to give ‘safe’ advice, so that practitioners are clear about when they can use their own judgment and when they should seek advice. This inevitably becomes a ‘legal’ document, but that seems unavoidable.

In my view it should not be simply assumed that the appropriate representation of uncertainty is ‘nothing but a number’. Instead one should take Keynes’ concerns seriously in the guidance and explicitly argue for a simpler approach avoiding ‘reasonable doubt’, where appropriate. I would also suggest that any proposed principles ought to be compared with past cases, particularly those which have turned out to be miscarriages of justice. As the appeal judge did, this might usefully consider foreign cases to build up an adequate ‘database’.

My expectation is that this would show that the use of whole-population likelihoods as in R v T is biased against defendants who are in a suspect social group.

More generally, I think that any guidance ought to apply to my growing collection of uncertainty puzzles, even if it only cautions against a simplistic application of any rule in such cases.

See Also

Blogs: The Register, W Briggs and Convicted by statistics (referring to previous miscarriages).

My notes on probability. A relevant puzzle.

Dave Marsay 

Systemism: the alternative to individualism and holism

Mario Bunge Systemism: the alternative to individualism and holism Journal of Socio-Economics 29 (2000) 147–157

“Three radical worldviews and research approaches are salient in social studies: individualism, holism, and systemism.”

[Systemism] “is centered in the following postulates:
1. Everything, whether concrete or abstract, is a system or an actual or potential component of a system;
2. systems have systemic (emergent) features that their components lack, whence
3. all problems should be approached in a systemic rather than in a sectoral fashion;
4. all ideas should be put together into systems (theories); and
5. the testing of anything, whether idea or artifact, assumes the validity of other items, which are taken as benchmarks, at least for the time being.”

Thus systemism resembles Smuts’ Holism. Bunge uses the term ‘holism’ for what Smuts terms wholism: the notion that systems should be subservient to their ‘top’ level, the ‘whole’. This usage apart, Bunge appears to be saying something important. Like Smuts, he notes the systemic nature of mathematics, in distinction to those who note the tendency to apply mathematical formulae thoughtlessly, as in some notorious financial mathematics.

Much of the main body is taken up with the need for micro-macro analyses and the limitations of piece-meal approaches, something familiar to Smuts and Keynes. On the other hand he says: “I support the systems that benefit me, and sabotage those that hurt me.” without flagging up the limitations of such an approach in complex situations. He even suggests that an interdisciplinary subject such as biochemistry is nothing but the overlap of the two disciplines. If this is the case, I find it hard to grasp the importance of such subjects. I would take a Kantian view, in which bringing two disciplines into communion can be more than the sum of the parts.

In general, Bunge’s arguments in favour of what he calls systemism and Smuts called holism seem sound, but they lack the insights into complexity and uncertainty of the original.

See also

Andy Denis’ response to Bunge adds some arguments in favour of Holism. Its main purpose, though, is to contradict Bunge’s assertion that laissez-faire is incompatible with systemism. It is argued that a belief in Adam Smith’s invisible hand could support laissez-faire. It is not clear what might constitute grounds for such a belief. (My own view is that even a government that sought to leverage the invisible hand would have a duty to monitor the workings of such a hand, and to take action should it fail, as in the economic crisis of 2007/8. It is not clear how politics might facilitate this.)

Also my complexity.

Dave Marsay

Composability

State of the art – software engineering

“Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations … .” For information systems, from a software engineering perspective, the essential features are regarded as modularity and statelessness. Current inhibitors include:

“Lack of clear composition semantics that describe the intention of the composition and allow to manage change propagation.”

Broader context

Composability has a natural interpretation as readiness to be composed with others, and has broader applicability. For example, one suspects that if some people met their own clone, they would not be able to collaborate. Quite generally, composability would seem necessary but perhaps not sufficient to ‘good’ behaviour. Thus each culture tends to develop ways for people to work effectively together, but some sub-cultures seem parasitic, in that they couldn’t sustain themselves on their own.

Cultures tend to evolve, but technical interventions tend to be designed. How can we be sure that the resultant systems are viable under evolutionary pressure? Composability would seem to be an important element, as it allows elements to be re-used and recombined, with the aspiration of supporting change propagation.

Analysis

Composability is particularly evident, and important, in algorithms in statistics and data fusion. While modularity and statelessness are important for the implementation of the algorithms, there are also characteristics of the algorithms as functions (ignoring internal details) that matter.

If we partition a given data set, apply a function to the parts and then combine the results, we want to get the same result no matter how the data is partitioned. That is, we want the result to depend on the data, not the partitioning.
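This property is easy to check mechanically. A minimal sketch, with illustrative numbers: summing is partition-invariant, whereas naively averaging the parts’ averages is not.

```python
# Partition invariance, sketched: summing is composable, but naively
# averaging the parts' averages generally is not.

data = [1.0, 2.0, 3.0, 10.0]

def mean(xs):
    return sum(xs) / len(xs)

# Sums agree however the data is partitioned.
assert sum(data[:1]) + sum(data[1:]) == sum(data[:3]) + sum(data[3:])

# An 'average of averages' depends on the partitioning.
avg_split_1 = mean([mean(data[:1]), mean(data[1:])])  # parts of size 1 and 3
avg_split_2 = mean([mean(data[:2]), mean(data[2:])])  # parts of size 2 and 2
assert avg_split_1 != avg_split_2

# Only the equal-sized partition happens to agree with the true mean.
assert avg_split_2 == mean(data)
```

The result depends on the partitioning, not just the data, so the naive averaging function fails the test above.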

In elections, for example, it is not necessarily true that a party that gets a majority of the votes overall will get the most candidates elected. This lack of composability can lead to a loss of confidence in the electoral process. Similarly, media coverage is often an editor’s précis of the précis by different reporters. One would hope that a similar story would emerge if one reporter had covered the whole.

More technically, averages over parts cannot, in general, be combined to give a true overall average, whereas counting and summing are composable. Desired functions can often be computed composably by using a preparation function, then a composable function, then a projection or interpretation function. Thus an average can be computed by preparing each part as a sum and count, summing over parts to give an overall sum and count, then projecting to get the average. If a given function can be implemented via two or more composable functions, then those functions must be ‘conjugate’: the same up to some change of basis. (For example, multiplication is composable, but one could prepare using logs and project using exponentiation to calculate a product using a sum.)
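The prepare/combine/project pattern above can be sketched directly (names are mine, not a standard API):

```python
import math

# Averages are not composable directly, but (sum, count) pairs are.

def prepare(part):
    """Map a list of numbers to a composable summary."""
    return (sum(part), len(part))

def combine(a, b):
    """Combine two summaries; associative and commutative."""
    return (a[0] + b[0], a[1] + b[1])

def project(summary):
    """Interpret the combined summary as an average."""
    total, count = summary
    return total / count

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

# Two different partitionings give the same result.
p1 = combine(prepare(data[:2]), prepare(data[2:]))
p2 = combine(prepare(data[:5]), prepare(data[5:]))
assert project(p1) == project(p2) == sum(data) / len(data)

# The 'conjugate' trick: a product computed via a composable sum,
# preparing with log and projecting with exp.
parts = [[2.0, 3.0], [4.0]]
log_sum = sum(math.log(x) for part in parts for x in part)
assert abs(math.exp(log_sum) - 24.0) < 1e-9
```

The combine step is associative and commutative, which is exactly what makes the result independent of how (and in what order) the data was partitioned.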

In any domain, then, it is natural to look for composable functions and to implement algorithms in terms of them. This seems to have been widespread practice until the late 1980s, when it became more common to implement algorithms directly and then to worry about how to distribute them.

Iterative Composability

In some cases it is not possible to determine composable functions in advance, or perhaps at all. For example, where innovation can take place, or one is otherwise ignorant of what may be. Here one may look for a form of ‘iterative composability’, in which one hopes that the result is normally adequate, that there will be signs if it is not, and that one will be able to improve the situation. What matters is that this process should converge, so that one can get as close as one likes to the results one would get from using all the data.

Elections under FPTP (first past the post) are not composable, and one cannot tell whether the party that is most voters’ first preference has failed to get in. AV (alternative vote) is also not composable, but one has more information (voters give rankings) and so can sometimes tell that there cannot have been a party that was most voters’ first preference and yet failed to get in. If there could have been, one could hold a second round with only the top parties’ candidates. This is a partial step towards general iterative composability: AV might often be iteratively composable for the given situation, much more so than FPTP.
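The non-composability of FPTP can be shown with a toy three-constituency election (the numbers are invented): one party has the most votes overall but the fewest seats.

```python
# Hypothetical three-constituency FPTP election: party A has the most
# votes overall, yet party B wins the most seats, so the seat result
# depends on how votes are partitioned into constituencies.

constituencies = [
    {"A": 100, "B": 60},   # A wins big here...
    {"A": 40,  "B": 45},   # ...but narrowly loses
    {"A": 40,  "B": 45},   # the other two seats.
]

seats = {"A": 0, "B": 0}
totals = {"A": 0, "B": 0}
for c in constituencies:
    winner = max(c, key=c.get)   # first past the post in this seat
    seats[winner] += 1
    for party, votes in c.items():
        totals[party] += votes

assert max(totals, key=totals.get) == "A"   # most votes overall (180 v 150)
assert max(seats, key=seats.get) == "B"     # but fewer seats (1 v 2)
```

Since the seat totals cannot be recovered from the overall vote totals, no amount of re-aggregation fixes this; one needs either more information per part (as AV partially provides) or an iterative procedure such as a second round.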

Parametric estimation is generally composable when one has a fixed number of entities whose parameters are being estimated. Otherwise one has an ‘association’ problem, which might be tackled differently for the different parts. If so, this needs to be detected and remedied, perhaps iteratively. This is effectively a form of hypothesis testing. Here the problem is that the testing of hypotheses using likelihood ratios is not composable. But, again, if hypotheses are compared differences can be detected and remedial action taken. It is less obvious that this process will converge, but for constrained hypothesis spaces it does.

Innovation, transformation, freedom and rationality

It is common to suppose that people acting in their environment should characterise their situation within a context in enough detail to remove all but (numeric) probabilistic uncertainty, so that they can optimize. Acting sub-optimally, it is supposed, would not be rational. But if innovation is about transformation then a supposedly rational act may undermine the context of another, leading to a loss of performance and possibly crisis or chaos.

Simultaneous innovation could be managed by having an over-arching policy or plan, but this would clearly constrain freedom and hence genuine innovation. Too much innovation and one has chaos; too little and there is too little progress.

A composable approach is to seek innovations that respect each other’s contexts, and to make clear to others what one’s essential context is. This supports only very timid innovation if the innovation is rational (in the above sense), since no true (Knightian) uncertainty can be accepted. A more composable approach is to seek to minimise dependencies and to innovate in a way that accepts – possibly embraces – true uncertainty. This necessitates a deep understanding of the situation and its potentialities.

Conclusion

Composability is an important concept that can be applied quite generally. The structure of an activity should not affect its outcome (other than resource usage). This can mean developing core components that provide a sound infrastructure, and then adapting that infrastructure to perform the desired tasks, rather than seeking to implement the desired functionality directly.

Dave Marsay

Cyber Doctrine

Cyber Doctrine: Towards a coherent evolutionary framework for learning resilience, ISRS, JP MacIntosh, J Reid and LR Tyler.

A large booklet that provides a critical contribution to the Cyber debate. Here I provide my initial reactions: the document merits more detailed study.

Topics

Scope

Just as financial security is about more than just defending against bank-robbers, cyber security is about more than just defending against deliberate attack, and extends to all aspects of resilience, including freedom from whatever delusions might be analogous to the efficient market hypothesis.

Approach

Innovation is key to a vibrant Cyberspace and further innovation in Cyberspace is vital to our real lives. Thus a notion of security based on constraint, or of resilience based on always returning to the status quo, is simply not appropriate.

Resilience and Transformation

Resilience is defined as “the enduring power of a body or bodies for transformation, renewal and recovery through the flux of interactions and flow of events.” It is not just the ability to ‘bounce back’ to its previous state. It implies the ability to learn from events and adapt to be in a better position to face them.

Transformation is taken to be the key characteristic. It is not defined, which might lead people to turn to wikipedia, whose notion does not explicitly address complexity or uncertainty. I would like to see more emphasis on the long-run issues of adapting to evolve as against sequentially adapting to what one thinks the current needs are. This may include ‘deep transformation’ and ‘transformation in contact’ and the elimination of parts that are no longer needed.

Pragmatism 

The document claims to be ‘pragmatic’: I have concerns about what this term means to readers. According to wikipedia, “it describes a process where theory is extracted from practice, and applied back to practice to form what is called intelligent practice.” Fair enough. But the efficient market hypothesis was once regarded as pragmatic, and there are many who think it pragmatic to act as if one’s beliefs were true. Effective Cyber practice would seem to depend on an appropriate notion of pragmatism, which a doctrine perhaps ought to elucidate.

Glocalization

The document advocates glocalization. According to wikipedia this means ‘think global act local’ and the document refers to a variant: “the compression of the world and the intensification of the consciousness of the world as a whole”. But how should we conceive the whole? The document says “In cyberspace our lives are conducted through a kaleidoscope of global and local relations, which coalesce and dissipate as diverse glocals.” Thus this is not wholism (which supposes that the parts should be dominated by the needs of the whole) but a more holistic vision, which seeks a sustainable solution, somehow ‘balancing’ a range of needs on a range of scales. The doctrinal principles will need to support the structuring and balancing more explicitly.

Composability

The document highlights composability as a key aspect of best structural practice that – pragmatically – perhaps ought to be leveraged further. I intend to blog specifically on this. Effective collaboration is clearly essential to innovation, including resilience. Composability would seem essential to effective collaboration.

Visualisation: Quads

I imagine that anyone who has worked on these types of complex issue, with all their uncertainties, will recognize the importance of visual aids that can be talked around. There are many that are helpful when interpreted with understanding and discretion, but I have yet to find any that can ‘stand alone’ without risk of mis-interpretation. Diagram 6 (page 89) seems at first sight a valuable contribution to the corpus, worthy of further study and perhaps development.

I consider Perrow limited because his ‘yardstick’ tends to be an existing system and his recommendation seems to be ‘complexity and uncertainty are dangerous’. But if we want resilience through innovation we cannot avoid complexity and uncertainty. Further, glocalization seems to imply a turbulent diversity of types of coupling, such that Perrow’s analysis is impossible to apply.

I have come across the Johari window used in government as a way of explaining uncertainty, but here the yardstick is what others think they know, and in any case the concept of ‘knowledge’ seems just as difficult as that of uncertainty. So while this motivates, it doesn’t really explain.

The top ‘quad’ says something important about conventional economics. Much of life is a zero sum game: if I eat the cake, then you can’t. But resilience is about other aspects of life: we need a notion of rationality that suits this side of life. This will need further development.

Positive Deviancy and Education

 Lord Reid (below) made some comments when launching the booklet that clarify some of the issues. He emphasises the role for positive deviancy and education in the sense of ‘bringing out’. This seems to me to be vital.

Control and Patching

Lord Reid (below) emphasises that a control-based approach, or continual ‘patching’, is not enough. There is a qualitative change in the nature of Cyber, and hence a need for a completely different approach. This might have been made more explicit in the document.

Criticisms

The main criticisms that I have seen have either been of recommendations that critics wrongly assume John Reid is making (e.g., for more control) or appear to be based on a dislike of Lord Reid. In any case, changes such as those proposed would seem to call for a more international figure-head or lead institution, perhaps with ISRS in a supporting role.

What next?

The argument for having some doctrine matches my own leanings, as does the general trend of the suggestions. But (as the government, below, says) one needs an international consensus, which in practice would seem to mean an approach endorsed by the UN security council (including America, France, Russia and China). Such a hopeless task seems to lead people to underestimate the risks of the status quo, or of ‘evolutionary’ patching of it with either less order or more control. As with the financial crisis, this may be the biggest threat to our security, let alone our resilience.

It seems to me, though, that behind the specific ideas proffered the underlying instincts are not all that different from those of the founders of the UN, and that seen in that context the ideas might not be too far from being attractive to each of the permanent members, if only the opportunities were appreciated.

Any re-invention or re-articulation of the principles of the UN would naturally have an impact on member states, and call for some adjustment to their legal codes. The UK’s latest Prevent strategy already emphasises the ‘fundamental values’ of ‘universal human rights, equality before the law, democracy and full participation in our society’.  In effect, we could see the proposed Cyber doctrine as proposing principles that would support a right to live in a reasonably resilient society. If for resilience we read sustainability, then we could say that there should be a right to be able to sustain oneself without jeopardising the prospects of one’s children and grandchildren. I am not sure what ‘full participation in our society’ would mean under reformed principles, but I see governments as having a role in fostering the broadest range of possible ‘positive deviants’, rather than (perhaps inadvertently) encouraging dangerous groupthink. These thoughts are perhaps prompted more by Lord Reid’s comments than the document itself.

Conclusion

 The booklet raises important issues about the nature, opportunities and threats of globalisation as impacted by Cyberspace. It seems clear that there is a consequent need for doctrine, but not yet what routes forward there may be. Food for thought, but not a clear prospectus.

See Also

Government position, Lord Reid’s Guardian article. , Police Led Intelligence, some negative comment.

Dave Marsay

Complexity Demystified: A guide for practitioners

P. Beautement & C. Broenner Complexity Demystified: A guide for practitioners, Triarchy Press, 2011.

First Impressions

  • The title comes close to ‘complexity made simple’, which would be absurd. A favourable interpretation (after Einstein) would be ‘complexity made as straightforward as possible, but no more.’
  • The references look good.
  • The illustrations look appropriate, of suitable quality, quantity and relevance.

Skimming through I gained a good impression of who the book was for and what it had to offer them. This was borne out (below).

Summary

Who is it for?

Complexity is here viewed from the viewpoint of a ‘coal face’ practitioner:

  • Dealing with problems that are not amenable to a conventional managerial approach (e.g. set targets, monitor progress against targets, …).
  • Has had some success and shown some insight and aptitude.
  • Is being thwarted by stakeholders (e.g., donors, management) with conventional management view and using conventional ‘tools’, such as accountability against pre-agreed targets.

What is complexity?

Complexity is characterised as a situation where:

  • One can identify potential behaviours and value them, mostly in advance.
  • Unlike simpler situations, one cannot predict what will be the priorities, when: a plan that is a program will fail.
  • One can react to behaviours by suppressing negative behaviours and supporting positive ones: a plan is a valuation, activity is adaptation.

Complexity leads to uncertainty.

Details

Complexity science principles, concepts and techniques

The first two context-settings were well written and informative. This is about academic theory, which we have been warned not to expect too much of; such theory is not [yet?] ‘real-world ready’ – ready to be ‘applied to’ real complex situations – but it does supply some useful conceptual tools.

The approach

In effect, commonplace ‘pragmatism’ is not adequate, and the notion of pragmatism is adapted. Instead of persisting with one’s view as long as it seems to be adequate, one seeks to use a broad range of cognitive tools to check one’s understanding and look for alternatives, particularly looking out for any unanticipated changes as soon as they occur.

The book refers to a ‘community of practice’, which suggests that there is already a community that has identified and is grappling with the problems, but needs some extra hints and tips. The approach seems down to earth and ‘pragmatic’, not challenging ideologies, cultures or other deeply held values.

Case Studies

These were a good range, with those where the authors had been more closely involved being the better for it. I found the one on Ludlow particularly insightful, chiming with my own experiences. I am tempted to blog separately on the ‘fuel protests in the UK in 2000’ as I was engaged with some of the team involved at the time, on related issues. But some of the issues raised here seem quite generally important.

Interesting points

  • Carl Sagan is cited to the effect that the left brain deals with detail, the right with context – the ‘bigger picture’. In my opinion many organisations focus too readily on the short term, to the exclusion of the long-term, and if they do focus on the long-term they tend to do it ‘by the clock’ with no sense of ‘as required’. Balancing long-term and short-term needs can be the most challenging aspect of interventions.
  • ECCS 09 is made much of. I can vouch for the insightful nature of the practitioners’ workshop that the authors led.
  • I have worked with Patrick, so had prior sight of some of the illustrations. The account is recognizable, but all the better for the insights of ECCS 09 and – possibly – not having to fit with the prejudices of some unsympathetic stakeholders. In a sense, this is the book that we have been lacking.

Related work

Management

  • Leadership agility: A business imperative for a VUCA world.
    Takes a similar view about complexity and how to work with it.
  • The Cynefin Framework.
    Positions complexity between complicated (familiar management techniques work) and chaos (act first). Advocates ‘probe-sense-respond’, which reflects some of the same views as ‘Complexity Demystified’. (The authors have discussed the issues.)

Conclusions

The book considers all types of complexity, revealing that what is required is a more thoughtful approach to pragmatism than is the norm for familiar situations, together with a range of thought-provoking tools, the practical expediency of some of which I can vouch for. As such it provides 259 pages of good guidance. If it also came to be a common source across many practitioner domains then it could also facilitate cross-domain discussions on complex topics, something that I feel would be most useful. (Currently some excellent practice is being obscured by the use of ‘silo’ languages and tools, inhibiting collaboration and cross-cultural learning.)

The book seems to me to be strongest in giving guidance to practitioners who are taking, or are constrained to take, a phenomenological approach: seeking to make sense of situations before reacting. This type of approach has been the focus of western academic research and much practice for the last few decades, and in some quarters the notion that one might act without being able to justify one’s actions would be anathema. The book gives some new tools which it is hoped will be useful to justify action, but I have a concern that some situations will still be novel and that, to be effective, practitioners may still need to act outside the currently accepted concepts, whatever they are. I would have liked to see the book be more explicit about its scope since:

  • Some practitioners can actually cope quite well with such supposedly chaotic situations. Currently, observers tend not to appreciate the extreme complexity of others’ situations, and so under-value their achievements. This is unfortunate, as, for example:
    • Bleeding edge practitioners might find themselves stymied by managers and other stakeholders who have too limited a concept of ‘accountability’.
    • Many others could learn from such practitioners, or employ their insights.
  • Without an appreciation of the complexity/chaos boundary, practitioners may take on tasks that are too difficult for them or the tools at their disposal, or where they may lose stakeholder engagement through having different notions of what is ‘appropriately pragmatic’.
  • An organisation that had some appreciation of the boundary could facilitate mentoring etc.
  • We could start to identify and develop tools with a broader applicability.

In fact, some of the passages in the book would, I believe, be helpful even in the ‘chaos’ situation. If we had a clearer ‘map’ the guidance on relatively straightforward complexity could be simplified and the key material for that complexity which threatens chaos could be made more of. My attempt at drawing such a distinction is at https://djmarsay.wordpress.com/notes/about-these-posts/work-in-progress/complexity/ .

In practice, novelty is more often found in long-term factors, not least because if we do not prepare for novelty sufficiently in advance, we will be unable to react effectively. While I would never wish to advocate too clean a separation between practice and policy, or between short and long-term considerations, we can perhaps take a leaf out of the book and venture some guidance, not to be taken too rigidly. If conventional pragmatism is appropriate at the immediate ‘coal face’ in the short run, then this book is a guide for those practitioners who are taking a step back and considering complex medium-term issues. It would usefully inform policy makers in considering the long run, but does not directly address the full complexities which they face, which are often inherently mysterious when seen from a narrow phenomenological stance. It does not provide guidance tailored for policy makers, and nor does it give practitioners a view of policy issues. But it could provide a much-needed contribution towards spanning what can be a difficult practice / policy divide.

Addendum

One of the authors has developed eleven ‘Principles of Practice’. These reflect the view that, in practice, the most significant ‘unintended consequences‘ could have been avoided. I think there is a lot of ‘truth’ in this. But it seems to me that however ‘complexity worthy’ one is, and however much one thinks one has followed ‘best practice’ – including that covered by this book – there are always going to be ‘unintended consequences’. It’s just that one can anticipate that they will be less serious, and not as serious as the original problem one was trying to solve.

See Also

Some mathematics of complexity, Reasoning in a complex dynamic world

Dave Marsay

Science advice and the management of risk

Science advice and the management of risk in government and business

The foundation for science and technology, 10 November 2010

An authoritative summary of the UK governments position on risk, with talks and papers.

  •  Beddington gives a good overview. He discusses probability versus impact ‘heat maps’, the use of ‘worst case’ scenarios, the limitations of heat maps and Blackett reviews. He discusses how management strategy has to reflect both the location on the heat map and the uncertainty in the location.
  • Omand discusses ‘Why won’t they (politicians) listen (to the experts)?’ He notes the difference between secrets (hard to uncover) and mysteries (hard to make sense of), and makes ‘common cause’ between science and intelligence in attempting to communicate with politicians. He presents a familiar type of chart in which probability is thought of as totally ordered (as in Bayesian probability) and seeks to standardise the descriptors of ranges of probability, such as ‘highly probable’.
  • Goodman discusses economic risk management and the need to cope with ‘irrational cycles of exuberance’, focussing on ‘low probability, high impact’ events. Only some risks can be quantified. He recommends the ‘generalised Pareto distribution’.
  • Spiegelhalter introduced the discussion with some important insights:

The issue ultimately comes down to whether we can put numbers on these events.  … how can a figure communicate the enormous number of assumptions which underlie such quantifications? … The … goal of a numerical probability … becomes much more difficult when dealing with deeper uncertainties. … This concerns the acknowledgment of indeterminacy and ignorance.

Standard methods of analysis deal with recognised, quantifiable uncertainties, but this is only part of the story, although … we tend to focus at this level. A first extra step is to be explicit about acknowledged inadequacies – things that are not put into the analysis such as the methane cycle in climate models. These could be called ‘indeterminacy’. We do not know how to quantify them but we know they might be influential.

Yet there are even greater unknowns which require an essential humility. This is not just ignorance about what is wrong with the model, it is an acknowledgment that there could be a different conceptual basis for our analysis, another way to approach the problem.

There will be a continuing debate  about the process of communicating these deeper uncertainties.

  • The discussion covered the following:
    • More coverage of the role of emotion and group think is needed.
    • “[G]overnments did not base policies on evidence; they proclaimed them because they thought that a particular policy would attract votes. They would then seek to find evidence that supported their view. It would be more realistic to ask for policies to be evidence tested [rather than evidence-based.]”
    • “A new language was needed to describe uncertainty and the impossibility of removing risk from ordinary life … .”
    •  Advisors must advise, not covertly subvert decision-making.
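Goodman's recommendation of the ‘generalised Pareto distribution’ refers to the standard peaks-over-threshold approach to tail risk. The sketch below is a minimal illustration of that approach, not anything presented at the event: the ‘losses’ are simulated from a heavy-tailed Student-t distribution purely as stand-in data, and the threshold choice (95th percentile) is an arbitrary assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated heavy-tailed daily losses (Student-t), standing in for real data.
losses = np.abs(stats.t.rvs(df=3, size=10_000, random_state=rng))

# Peaks-over-threshold: model exceedances over a high threshold
# with a generalised Pareto distribution.
u = np.quantile(losses, 0.95)            # threshold: an assumed 95th percentile
exceedances = losses[losses > u] - u
shape, _, scale = stats.genpareto.fit(exceedances, floc=0)

# Estimated probability of a loss beyond an extreme level x:
# P(loss > x) = P(loss > u) * P(exceedance > x - u).
x = 3 * u
p_u = (losses > u).mean()
p_x = p_u * stats.genpareto.sf(x - u, shape, loc=0, scale=scale)
print(f"fitted shape xi = {shape:.2f}, P(loss > {x:.1f}) ~ {p_x:.5f}")
```

The fitted shape parameter is what carries the ‘low probability high impact’ message: a positive shape implies a tail so heavy that losses far beyond anything yet observed remain credible.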

Comments

If we accept that there is more to uncertainty than  can be reflected in a typical scale of probability, then it is no wonder that organisational decisions fail to take account of it adequately, or that some advisors seek to subvert such poor processes. Moreover, this seems to be a ‘difference that makes a difference’.

From a Keynesian perspective, conditional probabilities, P(X|A), sometimes exist, but unconditional ones, P(X), rarely do. As Spiegelhalter notes, it is often the assumptions that are wrong: the estimated probability is then irrelevant. Spiegelhalter mentioned the common use of ‘sensitivity analysis’, noting that it is unhelpful. But what is commonly done is to test the sensitivity of P(X|y,A) to some minor variable y while keeping the assumptions, A, fixed. What is more often needed (for these types of risk) is sensitivity to the assumptions themselves. Thus, if P(X|A) is high:

  • one needs to identify possible alternatives, A’, to A for which P(X|A’) is low, no matter how improbable A’ may be regarded.

Here:

  • ‘Possible’ means consistent with the evidence rather than anything psychological.
  • The criteria for what is regarded as ‘low’ or ‘high’ will be set by the decision context.

The rationale is that everything that has ever happened was, with hindsight, possible: the things that catch us out are those that we overlooked, perhaps because we thought them improbable.
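The difference between varying a minor variable y and varying the assumptions A can be sketched numerically. In the hypothetical example below (all numbers illustrative, not taken from any real portfolio), X is a large cluster of loan defaults and the assumption A is the default correlation: holding A fixed at zero makes X look negligible, while an alternative A' with correlated defaults makes it material.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def p_large_loss(rho: float, n_sims: int = 50_000) -> float:
    """P(X | A): probability that at least 20 of 100 loans default,
    under the assumption A that defaults share a common factor with
    correlation rho. Each loan defaults with marginal probability 0.1."""
    n_loans, threshold = 100, norm.ppf(0.1)
    z = rng.standard_normal((n_sims, 1))          # shared economic shock
    e = rng.standard_normal((n_sims, n_loans))    # loan-specific shocks
    latent = np.sqrt(rho) * z + np.sqrt(1 - rho) * e
    return float(((latent < threshold).sum(axis=1) >= 20).mean())

# Varying the assumption A (rho) moves P(X|A) by orders of magnitude,
# which no amount of sensitivity analysis over minor variables reveals.
for rho in (0.0, 0.3, 0.6):
    print(f"rho = {rho:.1f}:  P(>=20 defaults) ~ {p_large_loss(rho):.4f}")
```

The point is not the model itself (a crude one-factor sketch) but that the alternative A' need only be consistent with the evidence: marginal default rates look identical under every rho.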

A conventional analysis would overlook emergent properties, such as booming cycles of ‘irrational’ exuberance. Thus, in considering alternatives, one needs to consider potential emotions and other emergent phenomena and epochal events.

This suggests that a typical ‘risk communication’ would consist of an extrapolated ‘main case’ probability together with a description of scenarios in which the opposite conclusion would hold.

See also

mathematics, heat maps, extrapolation and induction

Other debates, my bibliography.

Dave Marsay

 

The Precautionary Principle and Risk

Definition

The precautionary principle is that:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.

It thus applies in situations of uncertainty: better safe than sorry. It has been criticised for holding back innovation, but a ‘precautionary measure’ can be anything that mitigates the risk, not just forgoing the innovation. In particular, if the potential ‘harm’ is very mild or easy to remediate, then there may be no need for costly ‘measures’.

Measures

There may be a cancer risk from mobile phones. The appropriate response is to advise restraint in the use of mobile phones, particularly by young people, and more research.

In the run-up to the financial crisis of 2007/8 there was an (indirect) threat to human health. An appropriate counter-measure might have been to encourage a broader base of economic research, including non-Bayesians.

Criticisms

Volokh sees the principle as directed against “politically disfavoured technologies” and hence potentially harmful. In particular, Matt Ridley considers that the German E. coli outbreak of 2011 might have been prevented if the food had been irradiated, but irradiation had been regarded as posing a possible threat, and hence under the precautionary principle had not been used. But the principle ought to be applied to all innovations, including large-scale organic farming, in which case irradiation might seem to be an appropriate precautionary measure. Given the fears about irradiation, it might have been used selectively – after test results or to quash an E. coli outbreak. In any event, there should be a balance of threats and measures.

Conclusion

The precautionary principle seems reasonable, but needs to be applied evenly, not just to ‘Frankenstein technologies’. It could be improved by emphasising the need for the measures to be ‘proportional’ to the down-side risk.

Dave Marsay

Regulation and epochs

Conventional regulation aims at maintaining objective criteria and, as Conant and Ashby showed, regulators must have or form a model of their environment. But if future epochs are unpredictable, or the regulators are set up for the short term (e.g. being post-hoc adaptive), then the models will not be appropriate for the long term, leading to a loss of regulation at least until a new effective model can be formed.

Thus regulation based only on objective criteria is not sustainable in the long term. Loss of regulation can occur, for example, due to innovation by the system being regulated. More sustainable regulation (in the sense of preserving viability) might be achievable by taking a broader view of the system ‘as a whole’, perhaps engaging with it. For example, a ‘higher’ (strategic) regulator might monitor the overall situation, redirect the ‘lower’ (tactical) regulators and keep the lower regulators safe. The operation of these regulators would tend to correspond to Whitehead’s epochs (regulators would impose different rules, and different rules would call for different regulators).
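The loss of regulation at an epoch change can be illustrated with a toy simulation (the dynamics and numbers below are invented for illustration, not drawn from Conant and Ashby): a regulator whose model of the environment is fixed fails when the dynamics change, while one that re-estimates its model recovers.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_error_after_change(adaptive: bool, steps: int = 200) -> float:
    """Regulate y[t+1] = a*y[t] + u[t] + noise toward zero.
    The true gain `a` flips sign halfway through (an epoch change);
    the regulator acts on its model `a_model` of that gain."""
    a_true, a_model, y = 0.9, 0.9, 0.0
    errs = []
    for t in range(steps):
        if t == steps // 2:
            a_true = -0.9                       # epoch change: dynamics reverse
        u = -a_model * y                        # control based on the model
        y_prev, y = y, a_true * y + u + rng.normal(scale=0.1)
        errs.append(abs(y))
        if adaptive and abs(y_prev) > 0.2:
            a_hat = (y - u) / y_prev            # gain implied by the observation
            a_model += 0.3 * (a_hat - a_model)  # crude online re-estimation
    return float(np.mean(errs[steps // 2:]))    # regulation error after the change

print("fixed model:   ", mean_error_after_change(adaptive=False))
print("adaptive model:", mean_error_after_change(adaptive=True))
```

The fixed-model regulator's error explodes after the change (its control law actively destabilises the reversed dynamics), while the adaptive one loses regulation only briefly, until its new model forms – mirroring the point that a post-hoc adaptive regulator is only ever matched to the epoch it has seen.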

See also

Stafford Beer.

David Marsay

Collaboration and reason

To ‘collaborate’ is to act ‘jointly’, especially on an artistic work. To ‘co-operate’ is simply to act together. Thus in a collaboration the differences between actors, and the need for something to join them together, are more emphasised.

The acting together may take place in a single ‘coherent’ epoch, or may span epochs.

  • If we anticipate a single epoch then there is a coherent set of rules, constraints and drivers, which may be thought about rationally, with roles allocated to actors. These roles may be tied together by a conventional plan within a conventional organisational structure, and conventionally managed. Thus, within the context of some O&M conventions, one seeks co-operation. If any of the actors are out of step with the established ways of the epoch, they should be brought into line before or while co-operating.
  • If we have a mix of actors their activity might tend to complexify what would otherwise have been a coherent epoch. We may need to collaborate simply because the actors are artistic, even on a simple task. (We may need to add some artistic or otherwise interesting aspect to the activity, simply to keep the actors engaged.)
  • If the activity concerns a zone that may have many epochs, e.g. is open-ended, then one necessarily wants actors with different capabilities, such as those who can adapt to the here and now, those who can anticipate possible futures, and those who are good at hedging or otherwise developing robust plans and policies.
  • In many organisations a person who reasons across different time-scales simultaneously, and hence reasons incoherently, may be seen as ‘illogical’ and confused. Using different actors, particularly those from different parent organisations, with different disciplines and even different cultural and biological inheritances, can mask or help ‘rationalise’ these apparent defects.

Thus, I suggest that the nature of appropriate collaboration can usefully be thought of in terms of the types of epochs to be covered and the types of reasoning required, as well as, for example, familiarity with the subject matters. Co-operation within an epoch might be seen as collecting ‘the facts’, making decisions, and then acting to an agreed plan. But in fuller collaboration there may be no ‘facts’ and no possibility of mapping out the zone of desired activity, let alone the possibility of someone making coherent sense of the situation.