Can polls be reliable?

Election polls in many countries have seemed unusually unreliable recently. Why? And can they be fixed?

The most basic observation is that if one has a random sample of a population in which x% has some attribute then it is reasonable to estimate that x% of the whole population has that attribute, and that this estimate will tend to be more accurate the larger the sample is. In some polls sample size can be an issue, but not in the main political polls.
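As a minimal illustration of the sample-size point (my own sketch, not any pollster's actual method), the error of a simple random-sample estimate shrinks roughly as one over the square root of the sample size:

```python
import random

def average_error(true_share, n, trials=2000):
    """Mean absolute error of estimating a population share from a random sample of size n."""
    errors = []
    for _ in range(trials):
        hits = sum(random.random() < true_share for _ in range(n))
        errors.append(abs(hits / n - true_share))
    return sum(errors) / trials

# Quadrupling the sample size roughly halves the error (error ~ 1/sqrt(n)).
for n in (100, 400, 1600):
    print(n, round(average_error(0.40, n), 4))
```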

A fundamental problem with most polls is that the ‘random’ sample may not be truly representative, with some sub-groups over- or under-represented. Political polls have some additional issues that are sometimes blamed:

  • People with certain opinions may be reluctant to express them, or may even mislead.
  • There may be a shift in opinions with time, due to campaigns or events.
  • Different groups may differ in whether they actually vote, for example depending on the weather.

I also think that in the UK the trend to postal voting may have confused things, as postal voters will have missed out on the later stages of campaigns, and on later events (which were significant in the UK 2017 general election).

Pollsters have a lot of experience of compensating for these distortions, and are increasingly using ‘sophisticated mathematical tools’. How is this possible, and is there any residual uncertainty?

Back to mathematics: suppose that we have a science-like situation in which we know which factors (e.g. gender, age, social class …) are relevant. With a large enough sample we can partition the results by combination of factors, measure the proportions for each combination, and then combine these proportions, weighting by the prevalence of the combinations in the whole population. (More sophisticated approaches are used for smaller samples, but they only reduce the statistical reliability.)
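A minimal sketch of this ‘partition and re-weight’ idea (the factors, poll shares and prevalences below are invented purely for illustration):

```python
# Hypothetical illustration of weighting poll results by sub-group prevalence.
# The sub-groups, poll shares and population weights are invented.

poll_share = {            # share supporting party X within each polled sub-group
    ("young", "urban"): 0.55,
    ("young", "rural"): 0.40,
    ("old", "urban"): 0.45,
    ("old", "rural"): 0.30,
}
population_weight = {     # prevalence of each sub-group in the whole population
    ("young", "urban"): 0.20,
    ("young", "rural"): 0.15,
    ("old", "urban"): 0.35,
    ("old", "rural"): 0.30,
}

# Weighted estimate for the whole population: sum of (sub-group share x prevalence).
estimate = sum(poll_share[g] * population_weight[g] for g in poll_share)
print(f"Weighted estimate: {estimate:.1%}")  # ~41.8%
```

If the chosen factors or the assumed prevalences fail to capture what actually drives the vote, the re-weighting cannot correct for it, which is the point of the two failure modes below.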

Systematic errors can creep in in two ways:

  1. The method does not use just the poll data: it also relies on ‘laws of politics’ (such as the effect of rain) or other heuristics (such as that the swing among postal votes will be similar to that for votes in person), and these may be wrong.
  2. An important factor is missed. (For example, people with teenage children or grandchildren may vote differently from their peers when student fees are an issue.)

These issues have analogues in the science lab. In the first case one is using the wrong theory to interpret the data, and so the results are corrupted. In the second case one has some unnoticed ‘uncontrolled variable’ that can really confuse things.

A polling method using fixed factors and laws will only be reliable when they reasonably accurately capture the attributes of interest, and not when ‘the nature of politics’ is changing, as it often does and as it seems to be right now in North America and Europe. (According to game theory one should expect such changes when coalitions change or are under threat, as they are.) To do better, the polling organisation would need to understand the factors that the parties were bringing into play at least as well as the parties themselves, and possibly better. This seems unlikely, at least in the UK.

What can be done?

It seems to me that polls used to be relatively easy to interpret, possibly because they were simpler. Our more sophisticated contemporary methods make more detailed assumptions. To interpret them we would need to know what these assumptions were. We could then ‘aim off’, based on our own judgment. But this would involve pollsters in publishing some details of their methods, which they are naturally loth to do. So what could be done? Maybe we could have some agreed simple methods and publish findings as ‘extrapolations’ to inform debate, rather than as predictions. We could then factor in our own assumptions (for example, our assumptions about student turnout).

So, I don’t think that we can expect reliable poll findings that are predictions, but possibly we could have useful poll findings that would inform debate and allow us to take our own views. (A bit like any ‘big data’.)

Dave Marsay

 

Assessing and Communicating Risks and Uncertainty

David Spiegelhalter, Assessing and Communicating Risks and Uncertainty, Science in Parliament, vol. 69, no. 2, pp. 21–26. This is part of the IMA’s Mathematics Matters: A Crucial Contribution to the Country’s Economy.

This starts with a Harvard study showing that “a daily portion of red meat was associated with an increase in the annual risk of death by 13% over the period of the study”. Does this mean, as the Daily Express claimed, that “10% of all deaths could be avoided”?

David S uses ‘survival analysis’ to show that “a 40-year-old man who eats a quarter-pound burger for his working lunch each day can expect, on average, to live to 79, while his mate who avoids the burger can expect to live to 80.” He goes on: “over a lifetime habit, each daily portion of red meat is associated with about 30 minutes off your life expectancy …” (my emphasis).
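A rough back-of-the-envelope check of that figure (my own arithmetic, not Spiegelhalter's): one year of lost life expectancy, spread over roughly forty years of daily portions, does come out near the quoted half an hour per portion.

```python
# My own rough arithmetic (not from the article): spread one lost year over
# ~40 years of daily portions, starting from the age-40 example above.
years_lost = 80 - 79                      # from the burger vs no-burger example
portions = 40 * 365                       # about forty years of a daily habit
minutes_lost = years_lost * 365.25 * 24 * 60
print(round(minutes_lost / portions))     # ~36 minutes per portion
```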

As a mathematician advising politicians and other decision-makers, I would not be confident that policy-makers understood this, or that they would act appropriately. They might, for example, assume that we should all be discouraged from eating too much red meat.

Even some numerate colleagues with some exposure to statistics might, I think, suppose that their life expectancy was being reduced by eating red meat. But all that is being said is that if a random person were selected from the population as a whole then – knowing nothing about them – a statistician would ‘expect’ them to have a shorter life if they eat red meat. But every actual individual ‘you’ has a family history and many by 40 will have had cholesterol tests. It is not clear what the relevance to them is of the statistician’s ‘averaged’ figures.

Generally speaking, statistics gathered for one set of factors cannot be used to draw precise conclusions about other sets of factors, much less about individuals. David S’s previous advice at Don’t Know, Can’t Know applies. In my experience, it is not safe to assume that the audience will appreciate these finer points. All that I would take from the Harvard study is that if you eat red meat most days it might be a good idea to consult your doctor. I would also hope that there was research going on into the factors behind the apparent dangers.

See Also

I would appreciate a link to the original study.

Dave Marsay

Hercock’s Cohesion

Robert G. Hercock Cohesion: The Making of Society 2009.

Having had Robert critique some of my work, I could hardly not comment on this think-piece. It draws on modern complexity theory and a broad view of relevant historical examples and current trends to create a credible narrative. For me, his key conclusions are:

  1. “[G]iven a sufficient degree of communication … the cooperative assembly of [a cohesive society] is inevitable.”
  2. To be cohesive, a society should be “global politically federated, yet culturally diverse”.

The nature of communication envisaged seems to be indicated by:

 “From smoke signals, and the electric telegraph, through to fibre optics, and the Internet … the manifest boom in all forms of communication is bringing immense capabilities to form new social collectives and positive cultural developments.”

I ‘get’ that increasing communication will bring immense capabilities to support the cooperative assembly of a cohesive global society, but am not convinced that the effective exploitation of these capabilities in this way is inevitable. In chapter 6 (‘Bridges’) Robert says:

 “The truth is we now need a new shared set of beliefs. … Unfortunately, no one appears to have the faintest idea what such a common set of beliefs should look like, or where it might arise from, or who has responsibility to make it happen, or how, etc. Basically this is the challenge of the 21st century; we stand or fall on this battle for a common cultural nexus.”  

 This is closer to my own thinking.

People have different understandings of terms like ‘federated’. My preference is for subsidiarity: the idea that one has the minimum possible governance, with reliance on the minimum possible shared beliefs and common cultures. In complex situations these minimum levels are not obvious or static, so I would see an effective federation as engaging tentatively at a number of ‘levels’, ‘veering and hauling’ between them, and with strong arrangements for ‘horizon scanning’ and debate with the maximum possible diversity of views. Thus there would be not only cultural diversity but ‘viewpoint diversity within federated debate’. What is needed seems somewhat like Holism and glocalization.

Thinking of the EU, diversity of monetary policy might make the EU as an institution more cohesive while making the member economies less cohesive. To put it another way, attempts to enforce cohesion at the monetary level can threaten cohesion at the political level. So it is not clear to me that one can think of a society as simply ‘being cohesive’. Rather, it should be cohesive in the sense appropriate to its current situation. Cohesion should be ‘adaptive’. Leadership and vision seem to be required to achieve this: it is not automatic.

In the mid-80s many of those involved in the development of communications technologies thought that they would promote world peace, sometimes citing the kind of works that Robert does. I had and have two reservations. Firstly, the quality of communications matters. Thus [it was thought] one probably needed digital video, mobile phones and the Internet, all integrated in a way that was easy to use. [The Apple Macintosh made this credible.] Thus, if there was a clash between Soviet secret police and Jewish protestors [common at the time], the whole world could take an informed view, rather than relying on the media. [This was before the development of video-faking capabilities.] Secondly, while this would destabilize autocratic regimes, it was another issue as to what would happen next. It was generally felt that the only possible ‘properly’ stable states were democratic, but views differed on whether such states would necessarily stabilize.

Subsequent experience, such as the Arab spring, supports the view that YouTube and Facebook undermine oppressive regimes. But I remain unconvinced that ‘the cooperative assembly of [a cohesive society] is inevitable’ in Africa, the Middle East, Russia or South America, or that more communications would make it so. It certainly seems that if the process is inevitable, it can be much too slow.

My own thinking in the 80s was informed by the uncertainty and complexity theory of Keynes, Whitehead, Turing and Smuts, which predates that which Robert cites, and which informed the development of the United Nations as a part of ‘the cooperative assembly of a cohesive global society’. Robert seems to be arguing that according to modern theory such efforts were not necessary, but even so they may have been beneficial if all they did was speed the process up by a few generations. Moreover, the EU example seems to support my view that these theories are usefully more advanced than their contemporary counterparts.

The financial crash of 2008 occurred part way through the writing of the book. Like any history, explanations differ, and Robert gives a credible account in terms of modern complexity theory. But logic teaches us to be cautious about such post-hoc explanations. It seems to me that Keynes’ theory explains it adequately, and having been developed before the event should be given more credence.

Robert seems to regard the global crash of 2008 as a result of a loss of cohesion:

“When economies, states and societies lose their cohesion, people suffer; to be precise a lot of people end up paying the cost. In the recession of 2008/09 … “

But Keynes shows how it is cohesion (‘sticking together’) that causes global crashes. Firstly, in a non-globalized economy a crash in one part can be compensated for by the stability of another part, a bit like China saving the situation, but more so. Secondly, (to quote Patton) ‘if everyone is thinking alike then no-one is thinking’. Once group-think is established ‘expectations’ become ossified, and the market is disconnected from reality.

Robert’s notion of cohesion is “global politically federated, yet culturally diverse”. One can see how in 2008 and currently in the EU (and North Africa and elsewhere) de jure and de facto regulatory structures change, consistent with Robert’s view. But according to Keynes this is a response to an actual or potential crisis, rather than a causative factor. One can have a chain of crises in which political change leads to emergent social or economic problems, leading to political change and so on. Robert seems to suppose that this must settle down into some stable federation. If so then perhaps only the core principles will be stable, and even these might need to be continually reinterpreted and refreshed, much as I have tried to do here.

On a more conceptual note, Robert qualifies his conclusion with “The evidence from all of the fields considered in this text suggests …”. But the conclusion could only be formally sustained by an argument employing induction. Now, if improved communications really are going to change the world so much, then they will undermine the basis of any induction. (In Whitehead’s terms, induction only works within an epoch, but here the epoch itself is changed.) The best one could say would be that on current trends a move towards greater cohesion appears inevitable. This is a more fundamental problem than only considering evidence from a limited range of fields: more evidence from more fields could not overcome it.

Dave Marsay

Systemism: the alternative to individualism and holism

Mario Bunge Systemism: the alternative to individualism and holism Journal of Socio-Economics 29 (2000) 147–157

“Three radical worldviews and research approaches are salient in social studies: individualism, holism, and systemism.”

[Systemism] “is centered in the following postulates:
1. Everything, whether concrete or abstract, is a system or an actual or potential component of a system;
2. systems have systemic (emergent) features that their components lack, whence
3. all problems should be approached in a systemic rather than in a sectoral fashion;
4. all ideas should be put together into systems (theories); and
5. the testing of anything, whether idea or artifact, assumes the validity of other items, which are taken as benchmarks, at least for the time being.”

Thus systemism resembles Smuts’ Holism. Bunge uses the term ‘holism’ for what Smuts terms wholism: the notion that systems should be subservient to their ‘top’ level, the ‘whole’. This usage apart, Bunge appears to be saying something important. Like Smuts, he notes the systemic nature of mathematics, in distinction to those who note the tendency to apply mathematical formulae thoughtlessly, as in some notorious financial mathematics.

Much of the main body is taken up with the need for micro-macro analyses and the limitations of piecemeal approaches, something familiar to Smuts and Keynes. On the other hand he says: “I support the systems that benefit me, and sabotage those that hurt me.” without flagging up the limitations of such an approach in complex situations. He even suggests that an interdisciplinary subject such as biochemistry is nothing but the overlap of the two disciplines. If this is the case, I find it hard to grasp its importance. I would take a Kantian view, in which bringing two disciplines into communion can be more than the sum of the parts.

In general, Bunge’s arguments in favour of what he calls systemism and Smuts called holism seem sound, but they lack the insights into complexity and uncertainty of the original.

See also

Andy Denis’ response to Bunge adds some arguments in favour of Holism. Its main purpose, though, is to contradict Bunge’s assertion that laissez-faire is incompatible with systemism. It is argued that a belief in Adam Smith’s invisible hand could support laissez-faire. It is not clear what might constitute grounds for such a belief. (My own view is that even a government that sought to leverage the invisible hand would have a duty to monitor the workings of such a hand, and to take action should it fail, as in the economic crisis of 2007/8. It is not clear how politics might facilitate this.)

Also my complexity.

Dave Marsay

Cyber Doctrine

Cyber Doctrine: Towards a coherent evolutionary framework for learning resilience, ISRS, JP MacIntosh, J Reid and LR Tyler.

A large booklet that provides a critical contribution to the Cyber debate. Here I provide my initial reactions: the document merits more detailed study.

Topics

Scope

Just as financial security is about more than defending against bank robbers, cyber security is about more than defending against deliberate attack, and extends to all aspects of resilience, including freedom from whatever delusions might be analogous to the efficient market hypothesis.

Approach

Innovation is key to a vibrant Cyberspace, and further innovation in Cyberspace is vital to our real lives. Thus a notion of security based on constraint, or of resilience based on always returning to the status quo, is simply not appropriate.

Resilience and Transformation

Resilience is defined as “the enduring power of a body or bodies for transformation, renewal and recovery through the flux of interactions and flow of events.” It is not just the ability to ‘bounce back’ to its previous state. It implies the ability to learn from events and adapt to be in a better position to face them.

Transformation is taken to be the key characteristic. It is not defined, which might lead people to turn to Wikipedia, whose notion does not explicitly address complexity or uncertainty. I would like to see more emphasis on the long-run issues of adapting to evolve, as against sequentially adapting to what one thinks the current needs are. This may include ‘deep transformation’ and ‘transformation in contact’ and the elimination of parts that are no longer needed.

Pragmatism 

The document claims to be ‘pragmatic’: I have concerns about what this term means to readers. According to Wikipedia, “it describes a process where theory is extracted from practice, and applied back to practice to form what is called intelligent practice.” Fair enough. But the efficient market hypothesis was once regarded as pragmatic, and there are many who think it pragmatic to act as if one’s beliefs were true. Effective Cyber practice would seem to depend on an appropriate notion of pragmatism, which a doctrine perhaps ought to elucidate.

Glocalization

The document advocates glocalization. According to Wikipedia this means ‘think global, act local’, and the document refers to a variant: “the compression of the world and the intensification of the consciousness of the world as a whole”. But how should we conceive the whole? The document says “In cyberspace our lives are conducted through a kaleidoscope of global and local relations, which coalesce and dissipate as diverse glocals.” Thus this is not wholism (which supposes that the parts should be dominated by the needs of the whole) but a more holistic vision, which seeks a sustainable solution, somehow ‘balancing’ a range of needs on a range of scales. The doctrinal principles will need to support this structuring and balancing more explicitly.

Composability

The document highlights composability as a key aspect of best structural practice that – pragmatically – perhaps ought to be leveraged further. I intend to blog specifically on this. Effective collaboration is clearly essential to innovation, including resilience. Composability would seem essential to effective collaboration.

Visualisation: Quads

I imagine that anyone who has worked on these types of complex issue, with all their uncertainties, will recognize the importance of visual aids that can be talked around. There are many that are helpful when interpreted with understanding and discretion, but I have yet to find any that can ‘stand alone’ without risk of mis-interpretation. Diagram 6 (page 89) seems at first sight a valuable contribution to the corpus, worthy of further study and perhaps development.

I consider Perrow limited because his ‘yardstick’ tends to be an existing system and his recommendation seems to be ‘complexity and uncertainty are dangerous’. But if we want resilience through innovation we cannot avoid complexity and uncertainty. Further, glocalization seems to imply a turbulent diversity of types of coupling, such that Perrow’s analysis is impossible to apply.

I have come across the Johari window used in government as a way of explaining uncertainty, but here the yardstick is what others think they know, and in any case the concept of ‘knowledge’ seems just as difficult as that of uncertainty. So while this motivates, it doesn’t really explain.

The top ‘quad’ says something important about conventional economics. Much of life is a zero-sum game: if I eat the cake, then you can’t. But resilience is about other aspects of life: we need a notion of rationality that suits this side of life. This will need further development.

Positive Deviancy and Education

 Lord Reid (below) made some comments when launching the booklet that clarify some of the issues. He emphasises the role for positive deviancy and education in the sense of ‘bringing out’. This seems to me to be vital.

Control and Patching

Lord Reid (below) emphasises that a control-based approach, or continual ‘patching’, is not enough. There is a qualitative change in the nature of Cyber, and hence a need for a completely different approach. This might have been made more explicit in the document.

Criticisms

The main criticisms that I have seen have either been of recommendations that critics wrongly assume John Reid is making (e.g., for more control) or appear to be based on a dislike of Lord Reid. In any case, changes such as those proposed would seem to call for a more international figure-head or lead institution, perhaps with ISRS in a supporting role.

What next?

The argument for having some doctrine matches my own leanings, as does the general trend of the suggestions. But (as the government, below, says) one needs an international consensus, which in practice would seem to mean an approach endorsed by the UN Security Council (including America, France, Russia and China). Such a hopeless task seems to lead people to underestimate the risks of the status quo, or of ‘evolutionary’ patching of it with either less order or more control. As with the financial crisis, this may be the biggest threat to our security, let alone our resilience.

It seems to me, though, that behind the specific ideas proffered the underlying instincts are not all that different from those of the founders of the UN, and that seen in that context the ideas might not be too far from being attractive to each of the permanent members, if only the opportunities were appreciated.

Any re-invention or re-articulation of the principles of the UN would naturally have an impact on member states, and call for some adjustment to their legal codes. The UK’s latest Prevent strategy already emphasises the ‘fundamental values’ of ‘universal human rights, equality before the law, democracy and full participation in our society’.  In effect, we could see the proposed Cyber doctrine as proposing principles that would support a right to live in a reasonably resilient society. If for resilience we read sustainability, then we could say that there should be a right to be able to sustain oneself without jeopardising the prospects of one’s children and grandchildren. I am not sure what ‘full participation in our society’ would mean under reformed principles, but I see governments as having a role in fostering the broadest range of possible ‘positive deviants’, rather than (perhaps inadvertently) encouraging dangerous groupthink. These thoughts are perhaps prompted more by Lord Reid’s comments than the document itself.

Conclusion

 The booklet raises important issues about the nature, opportunities and threats of globalisation as impacted by Cyberspace. It seems clear that there is a consequent need for doctrine, but not yet what routes forward there may be. Food for thought, but not a clear prospectus.

See Also

Government position, Lord Reid’s Guardian article, Police Led Intelligence, some negative comment.

Dave Marsay

Science advice and the management of risk

Science advice and the management of risk in government and business

The foundation for science and technology, 10 November 2010

An authoritative summary of the UK government’s position on risk, with talks and papers.

  •  Beddington gives a good overview. He discusses probability versus impact ‘heat maps’, the use of ‘worst case’ scenarios, the limitations of heat maps and Blackett reviews. He discusses how management strategy has to reflect both the location on the heat map and the uncertainty in the location.
  • Omand discusses ‘Why won’t they (politicians) listen (to the experts)?’ He notes the difference between secrets (hard to uncover) and mysteries (hard to make sense of), and makes ‘common cause’ between science and intelligence in attempting to communicate with politicians. He presents a familiar type of chart in which probability is thought of as totally ordered (as in Bayesian probability) and seeks to standardise on descriptors of ranges of probability, such as ‘highly probable’.
  • Goodman discusses economic risk management and the need to cope with ‘irrational cycles of exuberance’, focussing on ‘low probability, high impact’ events. Only some risks can be quantified. He recommends the ‘generalised Pareto distribution’.
  • Spiegelhalter introduced the discussion with some important insights:

The issue ultimately comes down to whether we can put numbers on these events.  … how can a figure communicate the enormous number of assumptions which underlie such quantifications? … The … goal of a numerical probability … becomes much more difficult when dealing with deeper uncertainties. … This concerns the acknowledgment of indeterminacy and ignorance.

Standard methods of analysis deal with recognised, quantifiable uncertainties, but this is only part of the story, although … we tend to focus at this level. A first extra step is to be explicit about acknowledged inadequacies – things that are not put into the analysis such as the methane cycle in climate models. These could be called ‘indeterminacy’. We do not know how to quantify them but we know they might be influential.

Yet there are even greater unknowns which require an essential humility. This is not just ignorance about what is wrong with the model, it is an acknowledgment that there could be a different conceptual basis for our analysis, another way to approach the problem.

There will be a continuing debate  about the process of communicating these deeper uncertainties.

  • The discussion covered the following:
    • More coverage of the role of emotion and group think is needed.
    • “[G]overnments did not base policies on evidence; they proclaimed them because they thought that a particular policy would attract votes. They would then seek to find evidence that supported their view. It would be more realistic to ask for policies to be evidence tested [rather than evidence-based.]”
    • “A new language was needed to describe uncertainty and the impossibility of removing risk from ordinary life … .”
    •  Advisors must advise, not covertly subvert decision-making.

Comments

If we accept that there is more to uncertainty than can be reflected in a typical scale of probability, then it is no wonder that organisational decisions fail to take account of it adequately, or that some advisors seek to subvert such poor processes. Moreover, this seems to be a ‘difference that makes a difference’.

From a Keynesian perspective conditional probabilities, P(X|A), sometimes exist but unconditional ones, P(X), rarely do. As Spiegelhalter notes, it is often the assumptions that are wrong: the estimated probability is then irrelevant. Spiegelhalter mentioned the common use of ‘sensitivity analysis’, noting that it is unhelpful. But what is commonly done is to test the sensitivity of P(X|y,A) to some minor variable y while keeping the assumptions, A, fixed. What is more often needed (for these types of risk) is sensitivity to the assumptions themselves. Thus, if P(X|A) is high (see the sketch below):

  • one needs to identify possible alternatives, A’, to A for which P(X|A’) is low, no matter how improbable A’ may be regarded.

Here:

  • ‘Possible’ means consistent with the evidence rather than anything psychological.
  • The criteria for what is regarded as ‘low’ or ‘high’ will be set by the decision context.

The rationale is that everything that has ever happened was, with hindsight, possible: the things that catch us out are those that we overlooked, perhaps because we thought them improbable.
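A minimal sketch of the distinction (the probabilities and the alternative assumption sets are invented purely for illustration):

```python
# Hypothetical sketch contrasting ordinary sensitivity analysis with
# sensitivity to assumptions. All numbers and assumption labels are invented.

# P(X | y, A): probability of X under the working assumptions A,
# for small variations of a minor variable y.
p_given_A = {"y_low": 0.88, "y_mid": 0.90, "y_high": 0.92}

# P(X | A'): probability of X under credible alternative assumption sets A',
# each consistent with the evidence, however 'improbable' they are felt to be.
p_given_alternatives = {"A": 0.90, "A1_regime_change": 0.15, "A2_data_bias": 0.35}

print("Spread over y, assumptions fixed:",
      max(p_given_A.values()) - min(p_given_A.values()))        # small: looks 'robust'
print("Spread over assumption sets:",
      max(p_given_alternatives.values()) - min(p_given_alternatives.values()))  # large

# Decision-relevant flag: is there a credible A' under which P(X|A') is 'low'?
LOW = 0.5  # threshold set by the decision context
flagged = [a for a, p in p_given_alternatives.items() if p < LOW]
print("Alternatives needing attention:", flagged)
```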

A conventional analysis would overlook emergent properties, such as booming cycles of ‘irrational’ exuberance. Thus in considering alternatives one needs to consider potential emotions and other emergencies and epochal events.

This suggests a typical ‘risk communication’ would consist of an extrapolated ‘main case’ probability together with a description of scenarios in which the opposite probability would hold.

See also

mathematics, heat maps, extrapolation and induction

Other debates, my bibliography.

Dave Marsay

 

Uncertainty and risk ‘Heat Maps’

Risk heat maps

A risk ‘heat map’ shows possible impact against likelihood of various events or scenarios, as in this one from the EIU website:

The ‘Managing Uncertainty’ blog draws attention to it and raises some interesting issues. Importantly, it notes that the map includes events with both positive and negative potential impacts. But I go further and note that in assigning a single small blob to each event, it fails to show Knightian uncertainty at all.

Incorporating uncertainty

Uncertainty can be shown by having multiple blobs per event, perhaps smearing them into a region. One way to set the blobs is to get multiple stakeholders to mark their own assessments (a minimal plotting sketch follows the list below). My experience in crisis management and security is that:

  • Stakeholders will tend to judge impact for their own organisations. This can be helpful, but often one will want them also to assess the impact on ‘the big picture’ and the ‘route’ through which that impact may take effect. This can help flesh out the scenario. For example, perhaps an organisation doesn’t see any (direct) impact on itself, but another organisation sees that although it will be affected it can shift the burden to the unsuspecting first organisation.
  • Often, the risk comes from a lack of preparation, which in turn comes from a lack of anticipation. Thus the situation is highly reflexive. One can use the heat map to show a range of outcomes from ‘taken by surprise’ to ‘fully prepared’.
  • One generally needs some sort of role-playing ‘game’ backed by good analysis before stakeholders can make reasonable appreciations of the impact on ‘the whole’.
  • It is often helpful for stakeholders to mark the range of positions assumed within their organisations.
  • A suitably marked heat map can be used to facilitate debate, with scenarios and marks developed until one either has convergence or a clear idea of why convergence is lacking.
  • The various scenarios will often need some analysis to bring out the key relationships (e.g. ‘contagion’), which can then be validated by further debate / gaming.
  • Out of the debate, supported by the heat map with rationalised scenarios, comes a view about which issues need to be communicated better or more widely, so that all organisations appreciate the relative importance of their uncertainties, and how they are affected by and affect others’.
  • Any difficulties in above (such as irreconcilable views, or questions that cannot be answered) lead to requirements for further research, debate, etc.
  • When time is pressing a ‘bold decision’ may need to substitute for thorough analysis. But there is then a danger that the residual risks become ‘unspeakable’. The quality of the debate, to avoid this and other kinds of groupthink, can thus be critical.
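As a minimal sketch of the ‘multiple blobs per event’ idea (the events, assessments and figures are invented, and this illustrates only the plotting approach, not the EIU chart):

```python
# Hypothetical sketch: a likelihood/impact 'heat map' with several stakeholder
# assessments per event, so the spread of marks itself shows (Knightian) uncertainty.
# Events and numbers are invented for illustration.
import matplotlib.pyplot as plt

assessments = {  # event -> list of (likelihood, impact) marks from different stakeholders
    "Event A: supply disruption": [(0.3, 4), (0.5, 6), (0.7, 8)],
    "Event B: rapid tech change": [(0.6, -3), (0.6, 5), (0.8, 7)],  # impact sign disputed
}

fig, ax = plt.subplots()
for label, marks in assessments.items():
    xs, ys = zip(*marks)
    ax.scatter(xs, ys, label=label)          # one blob per stakeholder assessment
    ax.plot(xs, ys, alpha=0.3)               # join them to suggest the 'smeared' region

ax.axhline(0, color="grey", linewidth=0.5)   # divides positive from negative impact
ax.set_xlabel("Assessed likelihood")
ax.set_ylabel("Assessed impact (positive or negative)")
ax.legend()
plt.show()
```

The point is that the spread of marks for each event, rather than a single point, is what carries the uncertainty.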

Example

The UK at first misunderstood the nature of the protestors in the ‘first fuel crisis’ of 2000, which could have had dire consequences. It proved key that the risk heat map showed not only the mainstream view, but also credible alternatives. This is seen to be a case where the Internet, mobile phones and social media changed the nature of protest. With this in mind, the EIU’s event 2 (technology leading to rapid political and economic change) could have positive or negative consequences, depending on how well governments respond. It may be that democratic governments believe that they can respond to rapid change, but it ought still to be flagged up as a risk.

 See also

Cynefin, mathematics of uncertainty

Dave Marsay