Can polls be reliable?

Election polls in many countries have seemed unusually unreliable recently. Why? And can they be fixed?

The most basic observation is that if one has a random sample of a population in which x% has some attribute then it is reasonable to estimate that x% of the whole population has that attribute, and that this estimate will tend to be more accurate the larger the sample is. In some polls sample size can be an issue, but not in the main political polls.
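To put rough numbers on the sample-size point: for a simple random sample the statistical error shrinks like one over the square root of the sample size. A minimal sketch (the sample size and the 50/50 split are illustrative, not from any particular poll):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical political poll of about 1,000 respondents, split roughly 50/50:
print(f"+/- {100 * margin_of_error(0.5, 1000):.1f} percentage points")  # about +/- 3.1
```

This is the familiar ‘plus or minus 3%’, and it only covers sampling error; the distortions discussed below come on top of it.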

A fundamental problem with most polls is that the ‘random’ sample may not be representative of the population, with some sub-groups over- or under-represented. Political polls have some additional issues that are sometimes blamed:

  • People with certain opinions may be reluctant to express them, or may even mislead.
  • There may be a shift in opinions with time, due to campaigns or events.
  • Different groups may differ in whether they actually vote, for example depending on the weather.

I also think that in the UK the trend to postal voting may have confused things, as postal voters will have missed out on the later stages of campaigns, and on later events (which were significant in the UK 2017 general election).

Pollsters have a lot of experience at compensating for these distortions, and are increasingly using ‘sophisticated mathematical tools’. How is this possible, and is there any residual uncertainty?

Back to mathematics, suppose that we have a science-like situation in which we know which factors (e.g. gender, age, social class ..) are relevant. With a large enough sample we can partition the results by combination of factors, measure the proportions for each combination, and then combine these proportions, weighting by the prevalence of the combinations in the whole population. (More sophisticated approaches are used for smaller samples, but they only reduce the statistical reliability.)
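As a minimal sketch of this cell-weighting (‘post-stratification’) idea – the factors, population shares and poll counts below are hypothetical, chosen purely for illustration:

```python
# Cell weighting ("post-stratification"): partition respondents by combinations of
# factors, take the proportion supporting a party within each cell, then recombine
# using the cells' known shares of the whole population.

# Hypothetical population shares for combinations of (age band, social grade).
population_share = {
    ("18-34", "ABC1"): 0.15, ("18-34", "C2DE"): 0.15,
    ("35-54", "ABC1"): 0.20, ("35-54", "C2DE"): 0.15,
    ("55+",   "ABC1"): 0.20, ("55+",   "C2DE"): 0.15,
}

# Hypothetical poll results: (respondents in cell, respondents supporting party X).
poll = {
    ("18-34", "ABC1"): (80, 48),  ("18-34", "C2DE"): (40, 22),
    ("35-54", "ABC1"): (220, 99), ("35-54", "C2DE"): (120, 48),
    ("55+",   "ABC1"): (300, 105), ("55+",  "C2DE"): (240, 72),
}

raw = sum(s for _, s in poll.values()) / sum(n for n, _ in poll.values())
weighted = sum(population_share[cell] * s / n for cell, (n, s) in poll.items())

print(f"raw sample proportion:     {raw:.1%}")
print(f"population-weighted share: {weighted:.1%}")
# If a cell is badly under-sampled, its within-cell proportion is noisy and the
# weighted estimate inherits that noise. And if a relevant factor is missing
# altogether, no amount of weighting will correct for it.
```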

Systematic errors can creep in in two ways:

  1. Instead of using just the poll data, pollsters may apply ‘laws of politics’ (such as the effect of rain) or other heuristics (such as assuming that the swing among postal votes will be similar to that for votes in person), and these may be wrong.
  2. An important factor is missed. (For example, people with teenage children or grandchildren may vote differently from their peers when student fees are an issue.)

These issues have analogues in the science lab. In the first place one is using the wrong theory to interpret the data, and so the results are corrupted. In the second case one has some unnoticed ‘uncontrolled variable’ that can really confuse things.

A polling method using fixed factors and laws will only be reliable when they reasonably accurately reflect the attributes of interest, and not when ‘the nature of politics’ is changing, as it often does and as it seems to be right now in North America and Europe. (According to game theory one should expect such changes when coalitions change or are under threat, as they are.) To do better, the polling organisation would need to understand the factors that the parties were bringing into play at least as well as the parties themselves, and possibly better. This seems unlikely, at least in the UK.

What can be done?

It seems to me that polls used to be relatively easy to interpret, possibly because they were simpler. Our more sophisticated contemporary methods make more detailed assumptions. To interpret them we would need to know what these assumptions were. We could then ‘aim off’, based on our own judgment. But this would involve pollsters in publishing some details of their methods, which they are naturally loth to do. So what could be done? Maybe we could have some agreed simple methods and publish findings as ‘extrapolations’ to inform debate, rather than predictions. We could then factor in our own assumptions. (For example, our assumptions about student turnout.)

So, I don’t think that we can expect reliable poll findings that are predictions, but possibly we could have useful poll findings that would inform debate and allow us to take our own views. (A bit like any ‘big data’.)

Dave Marsay

 

Assessing and Communicating Risks and Uncertainty

David Spiegelhalter, Assessing and Communicating Risks and Uncertainty, Science in Parliament, vol. 69, no. 2, pp. 21–26. This is part of the IMA’s Mathematics Matters: A Crucial Contribution to the Country’s Economy.

This starts with a Harvard study showing that “a daily portion of red meat was associated with an increase in the annual risk of death by 13% over the period of the study”. Does this mean, as the Daily Express claimed, that “10% of all deaths could be avoided”?

David S uses ‘survival analysis’ to show that “a 40-year-old man who eats a quarter-pound burger for his working lunch each day can expect, on average, to live to 79, while his mate who avoids the burger can expect to live to 80.” He goes on: “over a lifetime habit, each daily portion of red meat is associated with about 30 minutes off your life expectancy .. ” (my emphasis.)
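To see how a modest increase in daily hazard translates into about a year of life expectancy, here is a rough sketch using a Gompertz mortality model; the parameters b and c are assumed values chosen to give plausible adult mortality, not figures from the Harvard study or from David S’s own calculation:

```python
import math

# Illustrative Gompertz hazard h(t) = b * exp(c * t), with t the age in years.
# b and c are assumptions, roughly calibrated so a 40-year-old lives to about 80.
b, c = 2.1e-5, 0.10

def expected_further_life(age, hazard_ratio=1.0, step=0.1):
    """Expected further years of life at `age`, found by numerically
    integrating the survival curve under a proportional-hazards assumption."""
    expectancy, survival, t = 0.0, 1.0, age
    while survival > 1e-6:
        hazard = hazard_ratio * b * math.exp(c * t)
        survival *= math.exp(-hazard * step)
        expectancy += survival * step
        t += step
    return expectancy

no_burger = 40 + expected_further_life(40)
daily_burger = 40 + expected_further_life(40, hazard_ratio=1.13)  # ~13% higher hazard
print(f"avoids the burger: lives to about {no_burger:.0f}")
print(f"daily burger:      lives to about {daily_burger:.0f}")
# The two figures come out roughly a year apart -- a population-averaged
# 'expectation', which is exactly the point made about individuals below.
```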

As a mathematician advising politicians and other decision-makers, I would not be confident that policy-makers understood this, or that they would act appropriately. They might, for example, assume that we should all be discouraged from eating too much red meat.

Even some numerate colleagues with some exposure to statistics might, I think, suppose that their life expectancy was being reduced by eating red meat. But all that is being said is that if a random person were selected from the population as a whole then – knowing nothing about them – a statistician would ‘expect’ them to have a shorter life if they eat red meat. But every actual individual ‘you’ has a family history, and many will have had cholesterol tests by 40. It is not clear what relevance the statistician’s ‘averaged’ figures have for them.

Generally speaking, statistics gathered for one set of factors cannot be used to draw precise conclusions about other sets of factors, much less about individuals. David S’s previous advice at Don’t Know, Can’t Know applies. In my experience, it is not safe to assume that the audience will appreciate these finer points. All that I would take from the Harvard study is that if you eat red meat most days it might be a good idea to consult your doctor. I would also hope that there was research going on into the factors behind the apparent dangers.

See Also

I would appreciate a link to the original study.

Dave Marsay

Hercock’s Cohesion

Robert G. Hercock Cohesion: The Making of Society 2009.

Having had Robert critique some of my work, I could hardly not comment on this think-piece. It draws on modern complexity theory and a broad view of relevant historical examples and current trends to create a credible narrative. For me, his key conclusions are:

  1. “[G]iven a sufficient degree of communication … the cooperative assembly of [a cohesive society] is inevitable.”
  2. To be cohesive, a society should be “global politically federated, yet culturally diverse”.

The nature of communication envisaged seems to be indicated by:

 “From smoke signals, and the electric telegraph, through to fibre optics, and the Internet … the manifest boom in all forms of communication is bringing immense capabilities to form new social collectives and positive cultural developments.”

I ‘get’ that increasing communication will bring immense capabilities to support the cooperative assembly of a cohesive global society, but am not convinced that the effective exploitation of these capabilities is inevitable. In chapter 6 (‘Bridges’) Robert says:

 “The truth is we now need a new shared set of beliefs. … Unfortunately, no one appears to have the faintest idea what such a common set of beliefs should look like, or where it might arise from, or who has responsibility to make it happen, or how, etc. Basically this is the challenge of the 21st century; we stand or fall on this battle for a common cultural nexus.”  

 This is closer to my own thinking.

People have different understandings of terms like ‘federated’. My preference is for subsidiarity: the idea that one has the minimum possible governance, with reliance on the minimum possible shared beliefs and common cultures. In complex situations these minimum levels are not obvious or static, so I would see an effective federation as engaging tentatively at a number of ‘levels’, ‘veering and hauling’ between them, and with strong arrangements for ‘horizon scanning’ and debate with the maximum possible diversity of views. Thus there would be not only cultural diversity but ‘viewpoint diversity within federated debate’. What is needed seems somewhat like Holism and glocalization.

Thinking of the EU, diversity of monetary policy might make the EU as an institution more cohesive while making its members’ economies less cohesive. To put it another way, attempts to enforce cohesion at the monetary level can threaten cohesion at the political level. So it is not clear to me that one can think of a society as simply ‘being cohesive’. Rather it should be cohesive in the sense appropriate to its current situation. Cohesion should be ‘adaptive’. Leadership and vision seem to be required to achieve this: it is not automatic.

In the mid 80s many of those involved in the development of communications technologies thought that they would promote world peace, sometimes citing the kind of works that Robert does. I had and have two reservations. Firstly, the quality of communications matters. Thus [it was thought] one probably needed digital video, mobile phones and the Internet, all integrated in way that was easy to use. [The Apple Macintosh made this credible.] Thus, if there was a clash between Soviet secret police and Jewish protestors [common at the time], the whole world could take an informed view, rather than relying on the media. [This was before the development of video faking capabilities]. Secondly, while this would destabilize autocratic regimes, it was another issue as to what would happen next. It was generally felt that the only possible ‘properly’ stable states were democratic, but views differed on whether such states would necessarily stabilize.

Subsequent experience, such as the Arab spring, supports the view that YouTube and Facebook undermine oppressive regimes. But I remain unconvinced that ‘the cooperative assembly of [a cohesive society] is inevitable’ in Africa, the Middle East, Russia or South America, or that more communications would make it so. It certainly seems that if the process is inevitable, it can be much too slow.

My own thinking in the 80s was informed by the uncertainty and complexity theory of Keynes, Whitehead, Turing and Smuts, which predates that which Robert cites, and which informed the development of the United Nations as a part of ‘the cooperative assembly of a cohesive global society’. Robert seems to be arguing that according to modern theory such efforts were not necessary, but even so they may have been beneficial if all they did was speed the process up by a few generations. Moreover, the EU example seems to support my view that these theories are usefully more advanced than their contemporary counterparts.

The financial crash of 2008 occurred part way through the writing of the book. Like any history, explanations differ, and Robert gives a credible account in terms of modern complexity theory. But logic teaches us to be cautious about such post-hoc explanations. It seems to me that Keynes’ theory explains it adequately, and having been developed before the event should be given more credence.

Robert seems to regard the global crash of 2008 as a result of a loss of cohesion:

“When economies, states and societies lose their cohesion, people suffer; to be precise a lot of people end up paying the cost. In the recession of 2008/09 … “

But Keynes shows how it is cohesion (‘sticking together’) that causes global crashes. Firstly, in a non-globalized economy a crash in one part can be compensated for by the stability of another part, a bit like China saving the situation, but more so. Secondly, (to quote Patton) ‘if everyone is thinking alike then no-one is thinking’. Once group-think is established ‘expectations’ become ossified, and the market is disconnected from reality.

Robert’s notion of cohesion is “global politically federated, yet culturally diverse”. One can see how in 2008 and currently in the EU (and North Africa and elsewhere) de jure and de facto regulatory structures change, consistent with Robert’s view. But according to Keynes this is a response to an actual or potential crisis, rather than a causative factor. One can have a chain of crises in which political change leads to emergent social or economic problems, leading to political change and so on. Robert seems to suppose that this must settle down into some stable federation. If so then perhaps only the core principles will be stable, and even these might need to be continually reinterpreted and refreshed, much as I have tried to do here.

On a more conceptual note, Robert qualifies the conclusion with “The evidence from all of the fields considered in this text suggests …”. But the conclusion could only be formally sustained by an argument employing induction. Now, if improved communications really are going to change the world so much then they will undermine the basis of any induction. (In Whitehead’s terms, induction only works within an epoch, but here the epoch is changed.) The best one could say would be that on current trends a move towards greater cohesion appears inevitable. This is a more fundamental problem than only considering evidence from a limited range of fields. More evidence from more fields could not overcome this problem.

Dave Marsay

Systemism: the alternative to individualism and holism

Mario Bunge, Systemism: the alternative to individualism and holism, Journal of Socio-Economics 29 (2000) 147–157.

“Three radical worldviews and research approaches are salient in social studies: individualism, holism, and systemism.”

[Systemism] “is centered in the following postulates:
1. Everything, whether concrete or abstract, is a system or an actual or potential component of a system;
2. systems have systemic (emergent) features that their components lack, whence
3. all problems should be approached in a systemic rather than in a sectoral fashion;
4. all ideas should be put together into systems (theories); and
5. the testing of anything, whether idea or artifact, assumes the validity of other items, which are taken as benchmarks, at least for the time being.”

Thus systemism resembles Smuts’ Holism. Bunge uses the term ‘holism’ for what Smuts terms wholism: the notion that systems should be subservient to their ‘top’ level, the ‘whole’. This usage apart, Bunge appears to be saying something important. Like Smuts, he notes the systemic nature of mathematics, in distinction to those who note the tendency to apply mathematical formulae thoughtlessly, as in some notorious financial mathematics.

Much of the main body is taken up with the need for micro-macro analyses and the limitations of piece-meal approaches, something familiar to Smuts and Keynes. On the other hand he says: “I support the systems that benefit me, and sabotage those that hurt me.” without flagging up the limitations of such an approach in complex situations. He even suggests that an interdisciplinary subject such as biochemistry is nothing but the overlap of the two disciplines. If this is the case, I find it hard to grasp the importance of such subjects. I would take a Kantian view, in which bringing two disciplines into communion can be more than the sum of the parts.

In general, Bunge’s arguments in favour of what he calls systemism and Smuts called holism seem sound, but they lack the insights into complexity and uncertainty of the original.

See also

Andy Denis’ response to Bunge adds some arguments in favour of Holism. Its main purpose, though, is to contradict Bunge’s assertion that laissez-faire is incompatible with systemism. It is argued that a belief in Adam Smith’s invisible hand could support laissez-faire. It is not clear what might constitute grounds for such a belief. (My own view is that even a government that sought to leverage the invisible hand would have a duty to monitor the workings of such a hand, and to take action should it fail, as in the economic crisis of 2007/8. It is not clear how politics might facilitate this.)

Also my complexity.

Dave Marsay

Cyber Doctrine

Cyber Doctrine: Towards a coherent evolutionary framework for learning resilience, ISRS, JP MacIntosh, J Reid and LR Tyler.

A large booklet that provides a critical contribution to the Cyber debate. Here I provide my initial reactions: the document merits more detailed study.

Topics

Scope

Just as financial security is about more than just defending against bank-robbers, cyber security is about more than just defending against deliberate attack, and extends to all aspects of resilience, including freedom from whatever delusions might be analogous to the efficient market hypothesis.

Approach

Innovation is key to a vibrant Cyberspace, and further innovation in Cyberspace is vital to our real lives. Thus notions of security based on constraint, or of resilience based on always returning to the status quo, are simply not appropriate.

Resilience and Transformation

Resilience is defined as “the enduring power of a body or bodies for transformation, renewal and recovery through the flux of interactions and flow of events.” It is not just the ability to ‘bounce back’ to its previous state. It implies the ability to learn from events and adapt to be in a better position to face them.

Transformation is taken to be the key characteristic. It is not defined, which might lead people to turn to wikipedia, whose notion does not explicitly address complexity or uncertainty. I would like to see more emphasis on the long-run issues of adapting to evolve as against sequentially adapting to what one thinks the current needs are. This may include ‘deep transformation’ and ‘transformation in contact’ and the elimination of parts that are no longer needed.

Pragmatism 

The document claims to be ‘pragmatic’: I have concerns about what this term means to readers. According to wikipedia, “it describes a process where theory is extracted from practice, and applied back to practice to form what is called intelligent practice.” Fair enough. But the efficient market hypothesis was once regarded as pragmatic, and there are many who think it pragmatic to act as if one’s beliefs were true. Effective Cyber practice would seem to depend on an appropriate notion of pragmatism, which a doctrine perhaps ought to elucidate.

Glocalization

The document advocates glocalization. According to wikipedia this means ‘think global act local’ and the document refers to a variant: “the compression of the world and the intensification of the consciousness of the world as a whole”. But how should we conceive the whole? The document says “In cyberspace our lives are conducted through a kaleidoscope of global and local relations, which coalesce and dissipate as diverse glocals.” Thus this is not wholism (which supposes that the parts should be dominated by the needs of the whole) but a more holistic vision, which seeks a sustainable solution, somehow ‘balancing’ a range of needs on a range of scales. The doctrinal principles will need to support the structuring and balancing more explicitly.

Composability

The document highlights composability as a key aspect of best structural practice that – pragmatically – perhaps ought to be leveraged further. I intend to blog specifically on this. Effective collaboration is clearly essential to innovation, including resilience. Composability would seem essential to effective collaboration.

Visualisation: Quads

I imagine that anyone who has worked on these types of complex issue, with all their uncertainties, will recognize the importance of visual aids that can be talked around. There are many that are helpful when interpreted with understanding and discretion, but I have yet to find any that can ‘stand alone’ without risk of mis-interpretation. Diagram 6 (page 89) seems at first sight a valuable contribution to the corpus, worthy of further study and perhaps development.

I consider Perrow limited because his ‘yardstick’ tends to be an existing system and his recommendation seems to be ‘complexity and uncertainty are dangerous’. But if we want resilience through innovation we cannot avoid complexity and uncertainty. Further, glocalization seems to imply a turbulent diversity of types of coupling, such that Perrow’s analysis is impossible to apply.

I have come across the Johari window used in government as a way of explaining uncertainty, but here the yardstick is what others think they know, and in any case the concept of ‘knowledge’ seems just as difficult as that of uncertainty. So while this motivates, it doesn’t really explain.

The top ‘quad’ says something important about conventional economics. Much of life is a zero sum game: if I eat the cake, then you can’t. But resilience is about other aspects of life: we need a notion of rationality that suits this side of life. This will need further development.

Positive Deviancy and Education

 Lord Reid (below) made some comments when launching the booklet that clarify some of the issues. He emphasises the role for positive deviancy and education in the sense of ‘bringing out’. This seems to me to be vital.

Control and Patching

Lord Reid (below) emphasises that a control-based approach, or continual ‘patching’, aren’t enough. There is a qualitative change in the nature of Cyber, and hence a need for a completely different approach. This might have been made more explicit in the document.

Criticisms

The main criticisms that I have seen either address recommendations that critics wrongly assume John Reid is making (e.g., for more control) or appear to be based on a dislike of Lord Reid. In any case, changes such as those proposed would seem to call for a more international figure-head or lead institution, perhaps with ISRS in a supporting role.

What next?

The argument for having some doctrine matches my own leanings, as does the general trend of  the suggestions. But (as the government, below, says) one needs an international consensus, which in practice would seem to mean an approach endorsed by the UN security council (including America, France, Russia and China). Such a hopeless task seems to lead people to underestimate the risks of the status quo, or of ‘evolutionary’ patching of it with either less order or more control. As with the financial crisis, this may be the biggest threat to our security, let alone our resilience.

It seems to me, though, that behind the specific ideas proffered the underlying instincts are not all that different from those of the founders of the UN, and that seen in that context the ideas might not be too far from being attractive to each of the permanent members, if only the opportunities were appreciated.

Any re-invention or re-articulation of the principles of the UN would naturally have an impact on member states, and call for some adjustment to their legal codes. The UK’s latest Prevent strategy already emphasises the ‘fundamental values’ of ‘universal human rights, equality before the law, democracy and full participation in our society’.  In effect, we could see the proposed Cyber doctrine as proposing principles that would support a right to live in a reasonably resilient society. If for resilience we read sustainability, then we could say that there should be a right to be able to sustain oneself without jeopardising the prospects of one’s children and grandchildren. I am not sure what ‘full participation in our society’ would mean under reformed principles, but I see governments as having a role in fostering the broadest range of possible ‘positive deviants’, rather than (perhaps inadvertently) encouraging dangerous groupthink. These thoughts are perhaps prompted more by Lord Reid’s comments than the document itself.

Conclusion

 The booklet raises important issues about the nature, opportunities and threats of globalisation as impacted by Cyberspace. It seems clear that there is a consequent need for doctrine, but not yet what routes forward there may be. Food for thought, but not a clear prospectus.

See Also

Government position, Lord Reid’s Guardian article, Police Led Intelligence, some negative comment.

Dave Marsay

Science advice and the management of risk

Science advice and the management of risk in government and business

The foundation for science and technology, 10 November 2010

An authoritative summary of the UK government’s position on risk, with talks and papers.

  •  Beddington gives a good overview. He discusses probability versus impact ‘heat maps’, the use of ‘worst case’ scenarios, the limitations of heat maps and Blackett reviews. He discusses how management strategy has to reflect both the location on the heat map and the uncertainty in the location.
  • Omand discusses ‘Why won’t they (politicians) listen (to the experts)?’ He notes the difference between secrets (hard to uncover) and mysteries (hard to make sense of), and makes ‘common cause’ between science and intelligence in attempting to communicate with politicians. He presents a familiar type of chart in which probability is thought of as totally ordered (as in Bayesian probability) and seeks to standardise on the descriptors of ranges of probability, such as ‘highly probable’.
  • Goodman discusses economic risk management and the need to cope with ‘irrational cycles of exuberance’, focussing on ‘low probability high impact’ events. Only some risks can be quantified. Recommends ‘generalised Pareto distribution’.
  • Spiegelhalter introduced the discussion with some important insights:

The issue ultimately comes down to whether we can put numbers on these events.  … how can a figure communicate the enormous number of assumptions which underlie such quantifications? … The … goal of a numerical probability … becomes much more difficult when dealing with deeper uncertainties. … This concerns the acknowledgment of indeterminacy and ignorance.

Standard methods of analysis deal with recognised, quantifiable uncertainties, but this is only part of the story, although … we tend to focus at this level. A first extra step is to be explicit about acknowledged inadequacies – things that are not put into the analysis such as the methane cycle in climate models. These could be called ‘indeterminacy’. We do not know how to quantify them but we know they might be influential.

Yet there are even greater unknowns which require an essential humility. This is not just ignorance about what is wrong with the model, it is an acknowledgment that there could be a different conceptual basis for our analysis, another way to approach the problem.

There will be a continuing debate  about the process of communicating these deeper uncertainties.

  • The discussion covered the following:
    • More coverage of the role of emotion and group think is needed.
    • “[G]overnments did not base policies on evidence; they proclaimed them because they thought that a particular policy would attract votes. They would then seek to find evidence that supported their view. It would be more realistic to ask for policies to be evidence tested [rather than evidence-based.]”
    • “A new language was needed to describe uncertainty and the impossibility of removing risk from ordinary life … .”
    •  Advisors must advise, not covertly subvert decision-making.

Comments

If we accept that there is more to uncertainty than  can be reflected in a typical scale of probability, then it is no wonder that organisational decisions fail to take account of it adequately, or that some advisors seek to subvert such poor processes. Moreover, this seems to be a ‘difference that makes a difference’.

From a Keynesian perspective conditional probabilities, P(X|A), sometimes exist but unconditional ones, P(X), rarely do. As Spiegelhalter notes, it is often the assumptions that are wrong: the estimated probability is then irrelevant. Spiegelhalter mentioned the common use of ‘sensitivity analysis’, noting that it is unhelpful. But what is commonly done is to test the sensitivity of P(X|y,A) to some minor variable y while keeping the assumptions, A, fixed. What is more often needed (for these types of risk) is sensitivity to the assumptions themselves. Thus, if P(X|A) is high:

  • one needs to identify possible alternatives, A’, to A for which P(X|A’) is low, no matter how improbable A’ may be regarded.

Here:

  • ‘Possible’ means consistent with the evidence rather than anything psychological.
  • The criteria for what is regarded as ‘low’ or ‘high’ will be set by the decision context.

The rationale is that everything that has ever happened was, with hind-sight, possible: the things that catch us out are those that we overlooked, perhaps because we thought them improbable.
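As a minimal sketch of this check (in contrast to ordinary sensitivity analysis), with the assumption sets, probabilities and the consistency test all invented purely for illustration:

```python
# Ordinary sensitivity analysis varies a minor variable y while holding the
# assumptions A fixed. The check suggested above instead varies A itself:
# among assumption sets consistent with the evidence, is there any A' for
# which P(X|A') is low?

# Hypothetical candidate assumption sets: (consistent_with_evidence, P(X|A')).
candidates = {
    "A (mainstream model)": (True, 0.90),
    "A1 (contagion between banks)": (True, 0.35),
    "A2 (herding / group-think)": (True, 0.20),
    "A3 (already falsified by evidence)": (False, 0.05),
}

LOW = 0.5  # what counts as 'low' or 'high' is set by the decision context

consistent = {name: p for name, (ok, p) in candidates.items() if ok}
challengers = {name: p for name, p in consistent.items() if p < LOW}

print(f"main case: P(X|A) = {consistent['A (mainstream model)']:.2f}")
for name, p in challengers.items():
    print(f"  but under {name}: P(X|A') = {p:.2f}")
# A useful 'risk communication' would report the main case together with these
# challenger scenarios, rather than a single unconditional P(X).
```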

A conventional analysis would overlook emergent properties, such as booming cycles of ‘irrational’ exuberance. Thus in considering alternatives one needs to consider potential emotions and other emergent effects, as well as epochal events.

This suggests a typical ‘risk communication’ would consist of an extrapolated ‘main case’ probability together with a description of scenarios in which the opposite probability would hold.

See also

mathematics, heat maps, extrapolation and induction

Other debates, my bibliography.

Dave Marsay

 

Uncertainty and risk ‘Heat Maps’

Risk heat maps

A risk ‘heat map’ shows possible impact against likelihood of various events or scenarios, as in this one from the EIU website:

The ‘Managing Uncertainty’ blog draws attention to it and raises some interesting issues. Importantly, it notes that the map includes events with both positive and negative potential impacts. But I go further and note that in assigning a small blob to each event, it fails to show Knightian uncertainty at all.

Incorporating uncertainty

Uncertainty can be shown by having multiple blobs per event, perhaps smearing them into a region. One way to set the blobs is to get multiple stakeholders to mark their own assessments. My experience in crisis management and security is that:

  • Stakeholders will tend to judge impact for their own organisations. This can be helpful, but often one will want them to also assess the impact on ‘the big picture’ and the ‘route’ through which that impact may take effect. This can help flesh out the scenario. For example, perhaps an organisation doesn’t see any (direct) impact on itself, but another organisation sees that although it will be affected it can shift the burden to the unsuspecting first organisation.
  • Often, the risk comes from a lack of preparation, which often comes from a lack of anticipation. Thus the situation is highly reflexive. One can use the heat map to show a range of outcomes from ‘taken by surprise’ to ‘fully prepared’.
  • One generally needs some sort of role-playing ‘game’ backed by good analysis before stakeholders can make reasonable appreciations of the impact on ‘the whole’.
  • It is often helpful for stakeholders to mark the range of positions assumed within their organisations.
  • A suitably marked heat map can be used to facilitate debate and scenarios and marks developed until one either has convergence or a clear idea of why convergence is lacking.
  • The various scenarios will often need some analysis to bring out the key relationships (‘e.g. contagion’), which can then be validated by further debate / gaming.
  • Out of the debate, supported by the heat map with rationalised scenarios, comes a view about which issues need to be communicated better or more widely, so that all organisations appreciate the relative importance of their uncertainties, and how they are affected by and affect others’.
  • Any difficulties in above (such as irreconcilable views, or questions that cannot be answered) lead to requirements for further research, debate, etc.
  • When time is pressing a ‘bold decision’ may need to substitute for thorough analysis. But there is then a danger that the residual risks become ‘unspeakable’. The quality of the debate, to avoid this and other kinds of groupthink, can thus be critical.
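As a minimal sketch of the ‘multiple blobs per event’ idea above – the events, stakeholders and scores are invented, and the 1–5 scales are just one possible convention:

```python
import matplotlib.pyplot as plt

# Each stakeholder marks (likelihood, impact) for each event on a 1-5 scale;
# the spread of marks for an event is a crude picture of its Knightian uncertainty.
assessments = {
    "Event 1: supply shock":    [(2, 4), (3, 5), (2, 3)],
    "Event 2: tech-led change": [(3, 2), (4, 4), (2, 5)],  # wide spread: contested
    "Event 3: regulatory fine": [(4, 2), (4, 2), (3, 2)],  # tight cluster: agreed
}

fig, ax = plt.subplots()
for event, marks in assessments.items():
    xs, ys = zip(*marks)
    ax.scatter(xs, ys, alpha=0.6, label=event)
ax.set_xlabel("likelihood (1-5)")
ax.set_ylabel("impact (1-5)")
ax.set_xlim(0.5, 5.5)
ax.set_ylim(0.5, 5.5)
ax.legend(loc="lower left", fontsize="small")
ax.set_title("Risk heat map with one blob per stakeholder per event")
plt.show()
```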

Example

The UK at first misunderstood the nature of the protestors in the ‘first fuel crisis’ of 2000, which could have had dire consequences. It proved key that the risk heat map showed not only the mainstream view, but also credible alternatives. This is seen to be a case where the Internet, mobile phones and social media changed the nature of protest. With this in mind the EIU’s event 2 (technology leading to rapid political and economic change) could have positive or negative consequences, depending on how well governments respond. It may be that democratic governments believe that they can respond to rapid change, but it ought still to be flagged up as a risk.

 See also

Cynefin, mathematics of uncertainty

Dave Marsay 

All watched over by machines of loving grace

What?

An Adam Curtis documentary shown on the BBC May/June 2011.

Comment

The trailers (above link) give a good feel for the series, which is entertaining, with some good video, music, pseudo-history and comment. The details shouldn’t be taken too seriously, but it is thought-provoking, on some topics that need thought.

Thoughts

The series ends:

The idea that human beings are helpless chunks of hardware controlled by software programs written in their genetic codes [remains powerfully influential in our society]. The question is, have we embraced that idea because it is a comfort in a world where everything that we do, either good or bad, seems to have terrible unforeseen consequences? …

We have embraced a fatalistic philosophy of us as helpless computing machines, to both excuse and explain our political failure to change the world.

This thesis has three parts:

  1. that everything we do has terrible unforeseen consequences
  2. that we are fatalistic in the face of such uncertainty
  3. that we have adopted a machine metaphor as ‘cover’ for our fatalism.

Uncertainty

The program demonizes unforeseen consequences. Certainly we should be troubled by them, and their implications for rationalism and pragmatism. But if there were no uncertainties then we could be rational and ‘should’ behave like machines. Reasoning in a complex, dynamic world calls for more than narrowly rational machine-like calculation, and gives purpose to being human.

Fatalism

It seems reasonable to suppose that most of the time most people can do little to influence the factors that shape their lives, but I think this is true even when people can perfectly well see the likely consequences of what is being done in their name. What is at issue here is not so much ordinary fatalism, which seems justified, as the charge that those who are making big decisions on our behalf are also fatalistic.

In democracies, no-one makes a free decision anymore. Everyone is held accountable and expected to abide by generally accepted norms and procedures. In principle whenever one has a novel situation the extant rules should be at least briefly reviewed, lest they lead to ‘unforeseen consequences’. A fatalist would presumably not do this. Perhaps the failure, then, lies in not challenging assumptions or ‘kicking against’ constraints.

The machine metaphor

Computers and mathematicians played a big role in the documentary. Humans are seen as being programmed by a genetic code that has evolved to self-replicate. But evolution leads to ‘punctuated equilibrium’ and epochs. Reasoning across epochs is not like reasoning in stable situations, the preserve of rule-driven machines. The mathematics of Whitehead and Turing supports the machine-metaphor, but only within an epoch. How would a genetically programmed person fare if they moved to a different culture or had to cope with new technologies radically transforming their daily lives? One might suppose that we are encoded for ‘general ways of living and learning’, but then we seem to require a grasp of uncertainty beyond that which we currently associate with machines.

Notes

  • The program had a discussion on altruism and other traits in which behaviours might disbenefit the individual but advantage those who are genetically similar over others. This would seem to justify much terrorism and even suicide-bombing. The machine metaphor would seem undesirable for reasons other than its tendency to fatalism.
  • An alternative to absolute fatalism would be fatalism about long-term consequences. This would lead to a short-termism that might provide a better explanation for real-world events.
  • The financial crash of 2007/8 was preceded by a kind of fatalism, in that it was supposed that free markets could never crash. This was associated with machine trading, but neither a belief in the machine metaphor nor a fear of unintended consequences seems to have been at the root of the problem. A belief in the potency of markets was perhaps reasonable (in the short term) once the high-tech bubble had burst. The problem seems to be that people got hooked on the bubble drug, and went into denial.
  • Mathematicians came in for some implicit criticism in the program. But the only subject of mathematics is mathematics. In applying mathematics to real systems the error is surely in substituting myth for science. If some people mis-use mathematics, the mathematics is no more at fault than their pencils. (Although maybe mathematicians ought to be more vigorous in uncovering abuse, rather than just doing mathematics.)

Conclusion

Entertaining, thought-provoking.

Dave Marsay

Out of Control

Kevin Kelly’s ‘Out of Control’ (1994), sub-titled “The New Biology of Machines, Social Systems, and the Economic World”, gives ‘the nine laws of god’, which it commends for all future systems, including organisations and economies. They didn’t work out too well in 2008.

The claims

The book is introduced (above) by:

“Out of Control is a summary of what we know about self-sustaining systems, both living ones such as a tropical wetland, or an artificial one, such as a computer simulation of our planet. The last chapter of the book, “The Nine Laws of God,” is a distillation of the nine common principles that all life-like systems share. The major themes of the book are:

  • As we make our machines and institutions more complex, we have to make them more biological in order to manage them.
  • The most potent force in technology will be artificial evolution. We are already evolving software and drugs … .
  • Organic life is the ultimate technology, and all technology will improve towards biology.
  • The main thing computers are good for is creating little worlds so that we can try out the Great Questions. …
  • As we shape technology, it shapes us. We are connecting everything to everything, and so our entire culture is migrating to a “network culture” and a new network economics.

In order to harvest the power of organic machines, we have to instill in them guidelines and self-governance, and relinquish some of our total control.”

Holism

Much of the book is Holistic in nature. The above could be read as applying the ideas of Smuts’ Holism to newer technologies. (Chapter 19 does make explicit reference to JC Smuts in connection with internal selection, but doesn’t reference his work.)

Jan Smuts based his work on wide experience, including improving arms production in the Great War, and went on to found ecology and help modernise the sciences, thus leading to the views that Kelly picks up on. Superficially, Kelly’s book is greatly concerned with technology that post-dates Smuts, but his arguments claim to be quite general, so an apostle of Smuts would expect Kelly to be consistent with him, while applying the ideas to the new realm. But where does Kelly depart from Smuts, and what new insights does he bring? Below we pick out Kelly’s key texts and compare them.

The nine Laws of God

The laws with my italics are:

Distribute being

When the sum of the parts can add up to more than the parts, then that extra being … is distributed among the parts. Whenever we find something from nothing, we find it arising from a field of many interacting smaller pieces. All the mysteries we find most interesting — life, intelligence, evolution — are found in the soil of large distributed systems.

The first phrase is clearly Holistic, and perhaps consistent with Smuts’ view that the ‘extra’ arises from the ‘field of interactions’. However in many current technologies the ‘pieces’ are very hard-edged, with limited ‘mutual interaction’. 

Control from the bottom up

When everything is connected to everything in a distributed network … overall governance must arise from the most humble interdependent acts done locally in parallel, and not from a central command. …

The phrases ‘bottom up’ and ‘humble interdependent acts’ seem inconsistent with Smuts’ own behaviour, for example in taking the ‘go’ decision for D-day. Generally, Kelly seems to ignore or deny the need for different operational levels, as in the military’s tactical and strategic.

Cultivate increasing returns

Each time you use an idea, a language, or a skill you strengthen it, reinforce it, and make it more likely to be used again. … Success breeds success. In the Gospels, this principle of social dynamics is known as “To those who have, more will be given.” Anything which alters its environment to increase production of itself is playing the game … And all large, sustaining systems play the game … in economics, biology, computer science, and human psychology. …

Smuts seems to have been the first to recognize that one could inherit a tendency to have more of something (such as height) than one’s parents, so that a successful tendency (such as being tall) would be reinforced. The difference between Kelly and Smuts is that Kelly has a general rule whereas Smuts has it as a product of evolution for each attribute. Kelly’s version also needs to be balanced against not optimising (below).

Grow by chunking

The only way to make a complex system that works is to begin with a simple system that works. Attempts to instantly install highly complex organization — such as intelligence or a market economy — without growing it, inevitably lead to failure. … Time is needed to let each part test itself against all the others. Complexity is created, then, by assembling it incrementally from simple modules that can operate independently.

Kelly is uncomfortable with the term ‘complex’. In Smuts’ usage a military platoon attack is often ‘complex’, whereas a superior headquarters could be simple. Systems with humans in them naturally tend to be complex (as Kelly describes) and are only made simple by prescriptive rules and procedures. In many settings such process-driven systems would (as Kelly describes them) be quite fragile, and unable to operate independently in a demanding environment (e.g., one with a thinking adversary). Thus I suppose that Kelly is advocating starting with small but adaptable systems and growing them. This is desirable, but often Smuts did not have that luxury, and had to re-engineer systems such as production or fighting systems ‘on the fly’.

Maximize the fringes

… A uniform entity must adapt to the world by occasional earth-shattering revolutions, one of which is sure to kill it. A diverse heterogeneous entity, on the other hand, can adapt to the world in a thousand daily mini revolutions, staying in a state of permanent, but never fatal, churning. Diversity favors remote borders, the outskirts, hidden corners, moments of chaos, and isolated clusters. In economic, ecological, evolutionary, and institutional models, a healthy fringe speeds adaptation, increases resilience, and is almost always the source of innovations.

A large uniform entity cannot adapt and maintain its uniformity, and so is unsustainable in the face of a changing situation or environment. If diversity is allowed then parts can adapt independently, and generally favourable adaptations spread. Moreover, the more diverse an entity is the more it can fill a variety of niches, and the more likely it is to survive some shock. Here Kelly, Smuts and Darwin essentially agree.

Honor your errors

A trick will only work for a while, until everyone else is doing it. To advance from the ordinary requires a new game, or a new territory. But the process of going outside the conventional method, game, or territory is indistinguishable from error. Even the most brilliant act of human genius, in the final analysis, is an act of trial and error. … Error, whether random or deliberate, must become an integral part of any process of creation. Evolution can be thought of as systematic error management.

Here the problem of competition is addressed. Kelly supposes that the only viable strategy in the face of complexity is blind trial and error, ‘the no strategy strategy’. But the main thing is to be able to identify actual errors. Smuts might also add that one might learn from near-misses and other potential errors.

Pursue no optima; have multiple goals

 …  a large system can only survive by “satisficing” (making “good enough”) a multitude of functions. For instance, an adaptive system must trade off between exploiting a known path of success (optimizing a current strategy), or diverting resources to exploring new paths (thereby wasting energy trying less efficient methods). …  forget elegance; if it works, it’s beautiful.

Here Kelly confuses ‘a known path of success’ with ‘a current strategy’, which may explain why he is dismissive of strategy. Smuts would say that getting an adequate balance between the exploitation of manifest success and the exploration of alternatives would be a key feature of any strategy. Sometimes it pays not to go after near-term returns, perhaps even accepting a loss.
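The exploit/explore trade-off being discussed has a standard toy formalisation, the multi-armed bandit. A minimal epsilon-greedy sketch (my illustration, not Kelly’s or Smuts’; the ‘strategies’ and their success rates are invented):

```python
import random

def epsilon_greedy(pull, n_arms, rounds=1000, epsilon=0.1):
    """Balance exploiting the best-known arm against exploring the others.

    `pull(arm)` returns a reward; `epsilon` is the fraction of effort kept
    for exploration even when one 'path of success' looks best.
    """
    counts = [0] * n_arms
    totals = [0.0] * n_arms
    for _ in range(rounds):
        if random.random() < epsilon or 0 in counts:
            arm = random.randrange(n_arms)  # explore
        else:
            arm = max(range(n_arms), key=lambda a: totals[a] / counts[a])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        totals[arm] += reward
    return counts, totals

# Illustration: three 'strategies' with hidden success rates.
rates = [0.2, 0.5, 0.6]
counts, totals = epsilon_greedy(lambda a: 1.0 if random.random() < rates[a] else 0.0, 3)
print("pulls per arm:", counts)  # most effort on the best arm, but never zero elsewhere
```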

Seek persistent disequilibrium

Neither constancy nor relentless change will support a creation. A good creation … is persistent disequilibrium — a continuous state of surfing forever on the edge between never stopping but never falling. Homing in on that liquid threshold is the still mysterious holy grail of creation and the quest of all amateur gods.

This is a key insight. The implication is that even the nine laws do not guarantee success. Kelly does not say how the disequilibrium is generated. In many systems it is only generated as part of an eco-system, so that reducing the challenge to a system can lead to its virtual death. A key part of growth (above) is to grow the ability to maintain a healthy disequilibrium despite increasingly novel challenges.

Change changes itself

… When extremely large systems are built up out of complicated systems, then each system begins to influence and ultimately change the organizations of other systems. That is, if the rules of the game are composed from the bottom up, then it is likely that interacting forces at the bottom level will alter the rules of the game as it progresses.  Over time, the rules for change get changed themselves. …

It seems that the changes to the rules are blindly adaptive. This may be because, unlike Smuts, Kelly does not believe in strategy, or in the power of theory to enlighten.

Kelly’s discussion

These nine principles underpin the awesome workings of prairies, flamingoes, cedar forests, eyeballs, natural selection in geological time, and the unfolding of a baby elephant from a tiny seed of elephant sperm and egg.

These same principles of bio-logic are now being implanted in computer chips, electronic communication networks, robot modules, pharmaceutical searches, software design, and corporate management, in order that these artificial systems may overcome their own complexity.

When the Technos is enlivened by Bios we get artifacts that can adapt, learn, and evolve. …

The intensely biological nature of the coming culture derives from five influences:

    • Despite the increasing technization of our world, organic life — both wild and domesticated — will continue to be the prime infrastructure of human experience on the global scale.
    • Machines will become more biological in character.
    • Technological networks will make human culture even more ecological and evolutionary.
    • Engineered biology and biotechnology will eclipse the importance of mechanical technology.
    • Biological ways will be revered as ideal ways.

 …

As complex as things are today, everything will be more complex tomorrow. The scientists and projects reported here have been concerned with harnessing the laws of design so that order can emerge from chaos, so that organized complexity can be kept from unraveling into unorganized complications, and so that something can be made from nothing.

My discussion

Considering local action only, Kelly’s arguments often come down to the supposed impossibility of effective strategy in the face of complexity, leading to the recommendation of the universal ‘no strategy strategy’: continually adapt to the actual situation, identifying and setting appropriate goals and sub-goals. Superficially, this seems quite restrictive, but we are free as to how we interpret events, learn, set goals and monitor progress and react. There seems to be nothing to prevent us from following a more substantial strategy but describing it in Kelly’s terms.

 The ‘bottom up’ principle seems to be based on the difficulty of central control. But Kelly envisages the use of markets, which can be seen as a ‘no control control’. That is, we are heavily influenced by markets but they have no intention. An alternative would be to allow a range of mechanisms, ideally also without intention; whatever is supported by an appropriate majority (2/3?).

For economics, Kelly’s laws are suggestive of Hayek, whereas Smuts’ approach was shared with his colleague, Keynes. 

Conclusion

What is remarkable about Kelly’s laws is the impotence of the individuals in the face of ‘the system’. It would seem better to allow for ‘central’ (or intermediate) mechanisms to be ‘bottom up’ in the sense that they are supported by an informed ‘bottom’.

See Also

David Marsay

AV: Yes or No? A comparison of the Campaigns’ ‘reasons’

At last we have some sensible claims to compare, at the Beeb. Here are some comments:

YES Campaign

Its reasons

  1. AV makes people work harder
  2. AV cuts safe seats
  3. AV is a simple upgrade
  4. AV makes votes count
  5. AV is our one chance for a change

An Assessment

These are essentially taken from the all-party Jenkins Commission. The NO Campaign rejoinders seem to be:

  1. Not significantly so.
  2. Not significantly so.
  3. AV will require computers and £250M to implement (see below).
  4. AV makes votes count twice, or more (see below).
  5. Too right!

A Summary

Worthy, but dull.

Addenda

I would add:

  • There would be a lot less need for tactical voting
  • The results would more reliably indicate people’s actual first preferences
  • It would be a lot easier to vote out an unpopular government – no ‘vote splitting’
  • It would make it possible for a new party to grow support across elections to challenge the status quo.
  • It may lead to greater turnout, especially in seats that are currently safe

NO Campaign Reasons

AV is unfair

Claim

“… some people would get their vote counted more times than others. For generations, elections in the UK have been based on the fundamental principle of ‘one person, one vote’. AV would undermine all that by allowing the supporters of fringe parties to have their second, third or fourth choices counted – while supporters of the mainstream candidates would only get their vote counted once.”

Notes

According to the Concise OED a vote is ‘a formal expression of will or opinion in regard to election of … signified by ballot …’ Thus the Irish, Scottish, Welsh and Australians, who cast similar ballots to AV, ‘have one vote’. The NO Campaign’s use of the term ‘counted’ is also confusing. The general meaning is a ‘reckoning’, and in this sense each polling station has one count per election, and this remains true under AV. A peculiarity of AV is that ballots are also counted in the sense of ‘finding the number of’. (See ‘maths of voting’ for more.)

Assessment

There is no obvious principle that requires us to stick with FPTP: all ballots are counted according to the same rules.

Should ‘supporters of fringe parties’ have their second preferences counted? The ‘fringe’ includes:

  • Local candidates, such as a doctor trying to stop the closure of a hospital
  • The Greens
  • In some constituencies, Labour, LibDem, Conservative.

AV is blind to everything except how voters rank the candidates. Consider an election in which the top three candidates get 30%, 28%, 26%, with some also-rans. According to the NO campaign the candidate with the narrow lead should be declared the winner. Thus they would disregard the preferences of anyone who votes for their hospital (say). Is this reasonable?
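To make the 30/28/26 example concrete, here is a minimal sketch of an AV (instant run-off) count; the ballots are invented purely to show how the lower preferences of eliminated candidates can overturn a narrow first-preference lead:

```python
from collections import Counter

def av_count(ballots):
    """Instant run-off (AV) count: repeatedly eliminate the bottom candidate and
    transfer their ballots to each ballot's next surviving preference, until
    someone has a majority of the ballots still in play."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tallies[choice] += 1
                    break
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()) or len(remaining) == 1:
            return leader, tallies
        remaining.remove(min(tallies, key=tallies.get))

# 100 invented ballots: first preferences 30 / 28 / 26 / 16 (hospital candidate).
ballots = (
    [("A",)] * 30 +
    [("B", "C")] * 28 +
    [("C", "B")] * 26 +
    [("Hospital", "C")] * 16   # hospital voters' second preferences decide it
)
print(av_count(ballots))  # candidate A leads on first preferences but C wins
```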

AV is not widely used

True-ish, but neither is FPTP (in terms of countries – one of them is large), and variants of AV (IRV, STV, …) together are the most widely used.

AV is expensive

Countries that use AV do not use election machinery. Australian elections may cost more than ours, but Australia is a much bigger country with a smaller population.

AV hands more power to politicians

See the Jenkins Commission.

AV supporters are sceptical

Opposition to FPTP is split between variants of AV (which keep single-member constituencies) and forms of PR. The Jenkins Commission recommended AV+, seeking to provide the best of both. The referendum is FPTP and hence can only cope with two alternatives: YES or NO.

I don’t know that AV supporters are sceptical about a move away from FPTP – they just differ on what would be ideal.

Addenda

  • The NO campaign is playing down the ‘strong and stable government’ argument. The flip side is that an unpopular government can survive.
  • A traditional argument for FPTP was that it encourages tactical voting and hence politicking, and hence develops tough leaders, good at dealing with foreigners. We haven’t heard this argument this time. Maybe the times are different?

See Also

AV: the worst example

According to the NO campaign the Torfaen election shows AV in the worst light. Labour won with 44.8%, followed by Conservative (20%), LibDem (16.6%) and 6 more (5.3% or less each). The NO campaign claims that under AV the 8th-placed candidate, an Independent, could have won. But to do so Labour would have had to have picked up less than 5.3% from the other candidates, including LibDem, and the Independent would have had to be ranked higher than the others by a majority. In particular, the Independent could not have won without support from Conservative voters.

Is it reasonable for Conservatives to complain?:

  • Conservative votes contributed to the victory.
  • Don’t the Conservatives prefer this to Labour?

It is also worth noting that the Independent would have to have picked up most second-rank votes from the Greens and UKIP, and so on, which also seems unlikely.

See Also

AV pros and cons

Dave Marsay