Traffic bunching

In heavy traffic, such as on motorways in the rush-hour, speeds often oscillate and there can even be mysterious ‘emergent’ halts. The use of variable speed limits can result in everyone getting along a given stretch of road more quickly.

Soros has written an article (worth reading) that suggests that this is all to do with the humanity and ‘thinking’ of the drivers, and that something similar is the case for economic and financial booms and busts. This might seem to indicate that ‘mathematical models’ are part of our problems, not of our solutions. So I suggest the following thought experiment:

Suppose a huge number of identical driverless cars with deterministic control functions all try to go along the same road, seeking to optimise performance in terms of ‘progress’ and fuel economy. Will they necessarily succeed, or might there be some ‘tragedy of the commons’ that can only be resolved by some overall regulation? What are the critical factors? Is the nature of the ‘brains’ one of them?
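
One way to explore the question is with a standard toy car-following model (an ‘optimal velocity’ model) on a ring road. The sketch below is mine; the control law and all of its parameters are illustrative assumptions, not a claim about any real vehicle:

    # Sketch: identical deterministic cars on a ring road (an optimal-velocity model).
    # All parameters are illustrative assumptions, not calibrated values.
    import numpy as np

    N, L = 50, 500.0                  # cars, road length in metres (mean gap 10 m)
    a, dt, steps = 1.0, 0.1, 5000     # driver sensitivity (1/s), time step (s), iterations

    def desired_speed(gap):
        """Speed (m/s) the controller aims for, given the gap to the car in front."""
        return 7.5 * (np.tanh(0.1 * (gap - 15.0)) + np.tanh(1.5))

    x = np.arange(N) * (L / N)                  # evenly spaced start
    v = np.full(N, desired_speed(L / N))        # everyone at the equilibrium speed
    x[0] += 1.0                                 # one car is nudged slightly forward

    for _ in range(steps):
        gap = (np.roll(x, -1) - x) % L          # distance to the car ahead (ring road)
        v += a * (desired_speed(gap) - v) * dt  # relax towards the desired speed
        v = np.maximum(v, 0.0)                  # no reversing
        x = (x + v * dt) % L

    print("spread of speeds after the perturbation: %.2f m/s" % (v.max() - v.min()))

With these settings the tiny perturbation grows into stop-and-go waves even though every car is identical and deterministic; with a sufficiently responsive controller (larger a) it dies away. Whether real fleets would need overall regulation is exactly the open question, but the toy model at least shows that ‘emergent’ jams need no human quirks.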

Are these problems the preserve of psychologists, or does mathematics have anything useful to say?

Dave Marsay

Haldane’s The dog and the Frisbee

Andrew Haldane The dog and the Frisbee

Haldane argues in favour of simplified regulation. I find the conclusions reasonable, but have some quibbles about the details of the argument. My own view is that many of our financial problems have been due – at least in part – to a misrepresentation of the associated mathematics, and so I am keen to ensure that we avoid similar misunderstandings in the future. I see this as a primary responsibility of ‘regulators’, viewed in the round.

The paper starts with a variation of Ashby’s ball-catching observation, involving a dog and a Frisbee instead of a man and a ball: you don’t need to estimate the position of the Frisbee or be an expert in aerodynamics: a simple, natural heuristic will do. He applies this analogy to financial regulation, but it is somewhat flawed. When catching a Frisbee one relies on the Frisbee behaving normally, but in financial regulation one is concerned with what had seemed to be abnormal, such as the crisis period of 2007/8.

It is noted of Game theory that

John von Neumann and Oskar Morgenstern established that optimal decision-making involved probabilistically-weighting all possible future outcomes.

In apparent contrast

Many of the dominant figures in 20th century economics – from Keynes to Hayek, from Simon to Friedman – placed imperfections in information and knowledge centre-stage. Uncertainty was for them the normal state of decision-making affairs.

“It is not what we know, but what we do not know which we must always address, to avoid major failures, catastrophes and panics.”

Game-theoretic thinking is here characterised as ignoring the possibility of uncertainty, which – from a mathematical point of view – seems an absurd misreading. Theories can only ever have conditional conclusions: any unconditional interpretation goes beyond their proper bounds. The paper – rightly – rejects the conclusions of two-player zero-sum static game theory. But its critique of such a theory is much less thorough than von Neumann and Morgenstern’s own (e.g. their 4.3.3) and fails to identify which conditions are violated by economics. More worryingly, it seems to invite the reader to accept those conclusions, as here:

The choice of optimal decision-making strategy depends importantly on the degree of uncertainty about the environment – in statistical terms, model uncertainty. A key factor determining that uncertainty is the length of the sample over which the model is estimated. Other things equal, the smaller the sample, the greater the model uncertainty and the better the performance of simple, heuristic strategies.
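
The small-sample point itself is sound, and easy to illustrate in a stationary setting with a toy simulation (my own sketch, not the paper’s analysis; the data-generating process is an arbitrary assumption, and whether stationarity holds is exactly what is at issue below):

    # Toy illustration: with small samples, a crude heuristic (predict the mean)
    # beats an estimated model (least squares on several weak predictors) out of
    # sample; with large samples the fitted model wins. Stationary data assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    k, beta, trials = 8, 0.1, 2000        # 8 weak predictors (assumed), many trials

    def out_of_sample_error(n):
        mse_model, mse_heuristic = 0.0, 0.0
        for _ in range(trials):
            X = rng.normal(size=(n + 1, k))
            y = beta * X.sum(axis=1) + rng.normal(size=n + 1)
            coef, *_ = np.linalg.lstsq(X[:n], y[:n], rcond=None)
            mse_model += (X[n] @ coef - y[n]) ** 2       # fitted model's prediction
            mse_heuristic += (y[:n].mean() - y[n]) ** 2  # 'just predict the mean'
        return mse_model / trials, mse_heuristic / trials

    for n in (15, 50, 500):
        model, heuristic = out_of_sample_error(n)
        print(f"sample size {n:3d}:  model {model:.2f}   heuristic {heuristic:.2f}")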

The quoted passage seems to suggest that – contra game theory – we could ‘in principle’ establish a sound model, if only we had enough data. Yet:

Einstein wrote that: “The problems that exist in the world today cannot be solved by the level of thinking that created them”.

There seems to be a non sequitur here: if new thinking is repeatedly being applied then surely the nature of the system will continually be changing? Or is it proposed that the ‘new thinking’ will yield a final solution, eliminating uncertainty? If ‘new thinking’ is repeatedly being applied then the regularity conditions of basic game theory (e.g. at 4.6.3 and 11.1.1) are not met (as discussed at 2.2.3). It is certainly not an unconditional conclusion that the methods of game theory apply to economies beyond the short run, and experience would seem to show that such an assumption would be false.

The paper recommends the use of heuristics, by which it presumably means what Gigerenzer means: methods that ignore some of the data. Thus, for example, all formal methods are heuristics, since they ignore intuition. But a dog catching a Frisbee only has its own experience, which it is using, and so presumably – by this definition – is not using a heuristic at all. In 2006 most financial and economic methods were heuristics in the sense that they ignored the lessons identified by von Neumann and Morgenstern. Gigerenzer’s definition seems hardly helpful. The dictionary definition relates to learning on one’s own, ignoring others. The economic problem, it seems to me, was of paying too much attention to the wrong people, and too little to those such as von Neumann and Morgenstern – and Keynes.

The implication of the paper and Gigerenzer is, I think, that a heuristic is a set method that is used, rather than solving a problem from first principles. This is clearly a good idea, provided that the method incorporates a check that whatever principles it relies upon do in fact hold in the case at hand. (This is what economists have often neglected to do.) If set methods are used as meta-heuristics to identify the appropriate heuristics for particular cases, then one has something like recognition-primed decision-making. It could be argued that the financial community had such meta-heuristics, which led to the crash: the adoption of heuristics as such seems not to be a solution. Instead one needs to appreciate what kinds of heuristics are appropriate when. Game theory shows us that probabilistic heuristics are ill-founded when there is significant innovation, as there was before, during and immediately after 2007/8. In so far as economics and finance are games, some events are game-changers. The problem is not the proper application of mathematical game theory, but the ‘pragmatic’ application of a simplistic version: playing the game as it appears to be unless and until it changes. An unstated possible deduction from the paper is surely that such ‘pragmatic’ approaches are inadequate. For mutable games, strategy needs to take place at a higher level than it does for fixed games: it is not just that different strategies are required, but that ‘strategy’ has a different meaning: it should at least recognize the possibility of a change to a seemingly established status quo.

If we take the analogy of a dog and a Frisbee, and consider Frisbee catching to be a statistically regular problem, then the conditions of simple game theory may be met, and it is also possible to establish statistically that a heuristic (method) is adequate. But if there is innovation in the situation then we cannot rely on any simplistic theory or on any learnt methods. Instead we need a more principled approach, such as that of Keynes or Ashby, considering the conditionality and looking out for potential game-changers. The key is not just simpler regulation, but regulation that is less reliant on conditions that we expect to hold but which, on maturer reflection, are not totally reliable. In practice this may necessitate a mature, on-going debate to adjust the regime to potential game-changers as they emerge.

See Also

Ariel Rubinstein opines that:

classical game theory deals with situations where people are fully rational.

Yet von Neumann and Morgenstern (4.1.2) note that:

the rules of rational behaviour must provide definitely for the possibility of irrational conduct on the part of others.

Indeed, in a paradigmatic zero-sum two-person game, if the other person plays rationally (according to game theory) then your expected return is the same irrespective of how you play. Thus it is of the essence that you consider potential non-rational plays. I take it, then, that game theory as reflected in economics is a very simplified – indeed an over-simplified – version. It is presumably this distorted version that Haldane’s criticisms properly apply to.
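
The zero-sum point is easy to check in the simplest case, matching pennies (a sketch of my own, not taken from either author):

    # Matching pennies: if the opponent mixes 50/50 (their game-theoretic optimum),
    # your expected payoff is 0 whatever mixture you choose.
    payoff = [[1, -1],        # rows: my Heads/Tails; columns: opponent's Heads/Tails
              [-1, 1]]

    opponent = [0.5, 0.5]                     # the 'rational' (minimax) mixed strategy
    for p in (0.0, 0.25, 0.5, 0.9):           # my probability of playing Heads
        me = [p, 1 - p]
        expected = sum(me[i] * opponent[j] * payoff[i][j]
                       for i in range(2) for j in range(2))
        print(f"P(Heads) = {p:.2f}   expected payoff = {expected:+.2f}")

Against an opponent with any bias, however, the expected payoff is no longer flat, which is precisely why the rules of rational behaviour must provide for the possibility of irrational conduct by others.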

Dave Marsay

Haldane’s Tails of the Unexpected

A. Haldane, B. Nelson, Tails of the unexpected, The Credit Crisis Five Years On: Unpacking the Crisis conference, University of Edinburgh Business School, 8-9 June 2012

The credit crisis is blamed on a simplistic belief in ‘the Normal Distribution’ and its ‘thin tails’, understating risk. Complexity and chaos theories point to greater risks, as does the work of Taleb.

Modern weather forecasting is pointed to as good relevant practice, where one can spot trouble brewing. Robust and resilient regulatory mechanisms need to be employed. It is no good relying on statistics like VaR (Value at Risk) that assume a normal distribution. The Bank of England is developing an approach based on these ideas.
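
As an aside, the VaR point is easy to illustrate with a toy calculation (my own sketch; the Student-t data-generating process and the numbers are assumptions, not the Bank’s methodology):

    # Illustration: a normal fit understates tail risk when returns are fat-tailed.
    import numpy as np

    rng = np.random.default_rng(1)
    returns = 0.01 * rng.standard_t(df=3, size=100_000)     # fat-tailed 'daily returns'

    mu, sigma = returns.mean(), returns.std()
    for level, z in ((0.99, 2.326), (0.999, 3.090)):        # z = normal quantiles
        var_normal = -(mu - z * sigma)                      # VaR from a fitted normal
        var_empirical = -np.quantile(returns, 1 - level)    # VaR from the data itself
        print(f"{level:.1%} VaR   normal fit: {var_normal:.4f}   empirical: {var_empirical:.4f}")

The further into the tail one looks, the larger the understatement; with actual financial returns the gap can be larger still.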

Comment

Risk arises when the statistical distribution of the future can be calculated or is known. Uncertainty arises when this distribution is incalculable, perhaps unknown.

While the paper acknowledges Keynes’ economics and Knightian uncertainty, it overlooks Keynes’ Treatise on Probability, which underpins his economics.

Much of modern econometric theory is … underpinned by the assumption of randomness in variables and estimated error terms.

Keynes was critical of this assumption, and of this model:

Economics … shift[ed] from models of Classical determinism to statistical laws. … Evgeny Slutsky (1927) and Ragnar Frisch (1933) … divided the dynamics of the economy into two elements: an irregular random element or impulse and a regular systematic element or propagation mechanism. This impulse/propagation paradigm remains the centrepiece of macro-economics to this day.

Keynes pointed out that such assumptions could only be validated empirically, and in the Treatise he cited Lexis’s falsification of them (as the current paper also does).

The paper cites a game of paper/scissors/stone which Sotheby’s thought was a simple game of chance but which Christie’s saw as an opportunity for strategizing – and won millions of dollars. Apparently Christie’s consulted some 11-year-old girls, but they might equally well have been familiar with Shannon’s machine for defeating strategy-impaired humans. With this in mind, it is not clear why the paper characterises uncertainty as merely being about unknown probability distributions, as distinct from Keynes’ more radical position, that there is no such distribution.
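
Shannon’s machine exploited patterns in its opponent’s play. A toy version of the idea (my own sketch, not Shannon’s circuit) is below:

    # Toy 'mind-reading' machine for paper/scissors/stone: predict the opponent's
    # next move from how they have tended to follow their own previous move, then
    # play the counter. It exploits habits; against truly random play it only
    # draws in expectation.
    import random
    from collections import defaultdict

    COUNTER = {"paper": "scissors", "scissors": "stone", "stone": "paper"}

    counts = defaultdict(lambda: defaultdict(int))   # counts[last_move][next_move]
    last = None

    def machine_move():
        if last is None or not counts[last]:
            return random.choice(list(COUNTER))      # nothing learnt yet: play randomly
        predicted = max(counts[last], key=counts[last].get)
        return COUNTER[predicted]                    # play whatever beats the prediction

    def observe(opponent_move):
        global last
        if last is not None:
            counts[last][opponent_move] += 1         # learn the opponent's habit
        last = opponent_move

    # Example: an opponent who habitually cycles paper -> scissors -> stone.
    cycle, wins = ["paper", "scissors", "stone"], 0
    for i in range(300):
        opp = cycle[i % 3]
        wins += machine_move() == COUNTER[opp]
        observe(opp)
    print(f"machine wins {wins} of 300 rounds against a habitual cycler")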

The paper is critical of nerds, who apparently ‘like to show off’.  But to me the problem is not the show-offs, but those who don’t know as much as they think they know. They pay too little attention to the theory, not too much. The girls and Shannon seem okay to me: it is those nerds who see everything as the product of randomness or a game of chance who are the problem.

If we compare the Slutsky–Frisch model with Kuhn’s description of the development of science, then economics is assumed to develop in much the same way as normal science, but without ever undergoing anything like a (systemic) paradigm shift. Thus, while the model may be correct most of the time, violations, such as in 2007/8, matter.

Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.

 One can understand this reasoning by analogy with science: the more dominant a school which protects its core myths, the greater the reaction and impact when the myths are exposed. But in finance it may not be just ‘risk control’ that causes a problem. Any optimisation that is blind to the possibility of systemic change may tend to increase the chance of change (for good or ill) [E.g. Bohr Atomic Physics and Human Knowledge. Ox Bow Press 1958].

See Also

Previous posts on articles by or about Haldane, along similar lines:

My notes on:

Dave Marsay

The voice of science: let’s agree to disagree (Nature)

Sarewitz uses his Nature column to argue against forced or otherwise false consensus in science.

“The very idea that science best expresses its authority through consensus statements is at odds with a vibrant scientific enterprise. … Science would provide better value to politics if it articulated the broadest set of plausible interpretations, options and perspectives, imagined by the best experts, rather than forcing convergence to an allegedly unified voice.”

D. Sarewitz The voice of science: let’s agree to disagree Nature Vol 478 Pg 3, 6 October 2011.

Sarewitz seems to be thinking in terms of issues such as academic freedom and vibrancy. But there are arguably more important aspects. Given any set of experiments or other evidence there will generally be a wide range of credible theories. The choice of a particular theory is not determined by any logic, but by such factors as which one was thought of first and by whom, and which is easiest to work with in making predictions, etc.

In issues like smoking and climate change the problem is that the paucity of data is obvious and different credible theories lead to different policy or action recommendations. Thus no single detailed theory is uniquely credible. We need a different way of reasoning, one that at least recognizes the range of credible theories and the consequent uncertainty.

I have experience of a different kind of problem: where one has seemingly well-established theories but these are suddenly falsified in a crisis (as in the financial crash of 2008). Politicians (and the public, where they are involved) understandably lose confidence in the ‘science’ and can fall back on instincts that may or may not be appropriate. One can try to rebuild a credible theory over-night (literally) from scratch, but this is not recommended. Some scientists have a clear grasp of their subject. They understand that the accepted theory is part science, part narrative, and are able to help politicians understand the difference. We may need more of these.

Enlightened scientists will seek to encourage debate, e.g. via enlightened journals, but in some fields, as in economics, they may find themselves ‘out in the cold’. We need to make sure that such people have a platform. I think that this goes much broader than the committees Sarewitz is considering.

I also think that many of our contemporary problems arise because societies tend to suppress uncertainty, being more comfortable with consensus and giving more credence to people who are confident in their subject. This attitude suppresses consideration of alternatives and turns novelty into shocks, which can have disastrous results.

Previous work

In a 2001 Nature article Roger Pielke covers much the same ground. But he also says:

“Take for example weather forecasters, who are learning that the value to society of their forecasts is enhanced when decision-makers are provided with predictions in probabilistic rather than categorical fashion and decisions are made in full view of uncertainty.”

From this and his blog it seems that, for Pielke, the uncertainty is merely probabilistic, differing only in magnitude. But it seems to me that, even before global warming became significant, weather forecasting and climate modelling only seemed probabilistic: there was an intermediate time-scale (in the UK, one or two weeks) which was always more complex and which involved different types of uncertainty, as described by Keynes. This does not detract from the main point of the article, though.

See also

Popper’s Logic of Scientific Discovery, Roger Pielke’s blog (with a link to his 2001 article in Nature on the same topic).

Dave Marsay

How to live in a world that we don’t understand, and enjoy it (Taleb)

N Taleb, How to live in a world that we don’t understand, and enjoy it, Goldstone Lecture 2011 (U Penn, Wharton)

Notes from the talk

Taleb returns to his alma mater. This talk supersedes his previous work (e.g. Black Swan). His main points are:

  • We don’t have a word for the opposite of fragile.
      Fragile systems have small probability of huge negative payoff
      Robust systems have consistent payoffs
      ? has a small probability of a large pay-off
  • Fragile systems eventually fail. ? systems eventually come good.
  • Financial statistics have a kurtosis that cannot in practice be measured, and so risk tends to be hugely under-estimated.
      Often more than 80% of kurtosis over a few years is contributed by a single (memorable) day.
  • We should try to create ? systems.
      He calls them convex systems, where the expected return exceeds the return given the expected environment (see the sketch after this list).
      Fragile systems are concave, where the expected return is less than the return from the expected situation.
      He also talks about ‘creating optionality’.
  • He notes an ‘action bias’: whenever there is a game like the stock market, we want to get involved and win. It may be better not to play.
  • He gives some examples.
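
A minimal numerical sketch of the convex/concave distinction (the pay-off functions below are arbitrary assumptions of mine, chosen only to make the point visible):

    # Convex vs concave pay-offs: for a convex pay-off the expected return exceeds
    # the return in the expected environment; for a concave one it falls short.
    # The pay-off functions are illustrative assumptions, not financial models.
    import numpy as np

    rng = np.random.default_rng(2)
    environment = rng.normal(size=1_000_000)         # an uncertain environment

    payoffs = {
        "convex ('?')":      lambda x: np.exp(x),    # limited downside, large upside
        "concave (fragile)": lambda x: -np.exp(-x),  # limited upside, large downside
    }

    for name, f in payoffs.items():
        expected_return = f(environment).mean()      # E[f(X)]
        return_at_expected = f(environment.mean())   # f(E[X])
        print(f"{name:18s} E[payoff]={expected_return:+.2f}  payoff at E[env]={return_at_expected:+.2f}")

The convex system gains more from variability than it loses, while the concave one does the reverse: this is Jensen’s inequality, and it is the sense in which the expected return can exceed the return given the expected environment.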

Comments

Taleb is dismissive of economists who talk about Knightian uncertainty, which goes back to Keynes’ Treatise on Probability. Their corresponding story is that:

  • Fragile systems are vulnerable to ‘true uncertainty’
  • Fragile systems eventually fail
  • Practical numeric measures of risk ignore ‘true uncertainty’.
  • We should try to create systems that are robust to or exploit true uncertainty.
  • Rather than trying to be the best at playing the game, we should try to change the rules of the game or play a ‘higher’ game.
  • Keynes gives examples.

The difference is that Taleb implicitly supposes that financial systems etc. are stochastic, but have too much kurtosis for us to be able to estimate their parameters. Rare events are regarded as having been generated stochastically. Keynes (and Whitehead) suppose that it may be possible to approximate such systems by a stochastic model for a while, but that the rare events denote a change to a new model, so that – for example – there is no universal economic theory. Instead, we occasionally have new economics, calling for new stochastic models. Practically, there seems little to choose between them, so far.

From a scientific viewpoint, one can only assess definite stochastic models. Thus, as Keynes and Whitehead note, one can only say that a given model fitted the data up to a certain date, and then it didn’t. The notion that there is a true universal stochastic model is not provable scientifically, but neither is it falsifiable. Hence, according to Popper, one should not entertain it as a view. This is possibly too harsh on Taleb, but the point is this:

Taleb’s explanation has pedagogic appeal, but this shouldn’t detract from an appreciation of alternative explanations based on non-stochastic uncertainty.

In particular:

  • Taleb (in this talk) seems to regard rare crises as ‘acts of fate’ whereas Keynes regards them as arising from misperceptions on the part of regulators and major ‘players’. This suggests that we might be able to ameliorate them.
  • Taleb implicitly uses the language of probability theory, as if this were rational. Yet his argument (like Keynes’) undermines the notion of probability as derived from rational decision theory.
      Not playing is better whenever there is Knightian uncertainty.
      Maybe we need to be able to talk about systems that thrive on uncertainty, in addition to convex systems.
  • Taleb also views the up-side as good fortune, whereas we might view it as an innovation, by whatever combination of luck, inspiration, understanding and hard work.

See also

On fat tails versus epochs.

Dave Marsay

Composability

State of the art – software engineering

“Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations …”. For information systems, from a software engineering perspective, the essential features are regarded as modularity and statelessness. Current inhibitors include:

“Lack of clear composition semantics that describe the intention of the composition and allow to manage change propagation.”

Broader context

Composability has a natural interpretation as readiness to be composed with others, and has broader applicability. For example, one suspects that if some people met their own clone, they would not be able to collaborate. Quite generally, composability would seem necessary, but perhaps not sufficient, for ‘good’ behaviour. Thus each culture tends to develop ways for people to work effectively together, but some sub-cultures seem parasitic, in that they could not sustain themselves on their own.

Cultures tend to evolve, but technical interventions tend to be designed. How can we be sure that the resultant systems are viable under evolutionary pressure? Composability would seem to be an important element, as it allows elements to be re-used and recombined, with the aspiration of supporting change propagation.

Analysis

Composability is particularly evident, and important, in algorithms in statistics and data fusion. If modularity and statelessness are important for the implementation of such algorithms, it is clear that characteristics of the algorithms as functions (ignoring internal details) also matter.

If we partition a given data set, apply a function to the parts and then combine the results, we want to get the same result no matter how the data is partitioned. That is, we want the result to depend on the data, not the partitioning.

In elections for example, it is not necessarily true that a party who gets a majority of the votes overall will get the most candidates elected. This lack of composability can lead to a loss of confidence in the electoral process. Similarly, media coverage is often an editor’s precis of the precis by different reporters. One would hope that a similar story would emerge if one reporter had covered the whole. 
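
A toy version of the electoral case (the vote counts are invented purely for illustration):

    # Invented example: party A wins more votes overall, but party B wins more
    # seats, because seat-by-seat winners do not compose into the overall vote share.
    constituencies = [        # (votes for A, votes for B) in each seat
        (900, 100),           # A piles up a huge majority in one seat
        (450, 550),
        (450, 550),
    ]

    votes_a, votes_b = (sum(v) for v in zip(*constituencies))
    seats_a = sum(a > b for a, b in constituencies)
    seats_b = sum(b > a for a, b in constituencies)
    print(f"votes: A={votes_a}, B={votes_b}")    # A 1800, B 1200
    print(f"seats: A={seats_a}, B={seats_b}")    # A 1,    B 2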

More technically, averages over parts cannot, in general, be combined to give a true overall average, whereas counting and summing are composable. Desired functions can often be computed composably by using a preparation function, then a composable function, then a projection or interpretation function. Thus an average can be computed by preparing each part as a sum and a count, summing over the parts to give an overall sum and count, and then projecting to get the average. If a given function can be implemented via two or more composable functions, then those functions must be ‘conjugate’: the same up to some change of basis. (For example, multiplication is composable, but one could prepare using logs and project using exponentiation to calculate a product using a sum.)
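
A sketch of the prepare/compose/project pattern for the two examples just given (an illustration of the idea only, not any particular library’s API):

    # Composable average: prepare each part as a (sum, count) pair, combine the
    # pairs, then project to the mean. The answer does not depend on how the
    # data was partitioned.
    from functools import reduce
    from math import exp, log, isclose

    def prepare(part):                 # part -> (sum, count)
        return (sum(part), len(part))

    def combine(a, b):                 # composable combination of two summaries
        return (a[0] + b[0], a[1] + b[1])

    def project(summary):              # (sum, count) -> average
        return summary[0] / summary[1]

    data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    for parts in ([data[:2], data[2:]], [data[:5], data[5:]]):
        summary = reduce(combine, (prepare(p) for p in parts))
        assert isclose(project(summary), 3.5)          # same answer, either partition

    # Conjugacy: a product can be computed through sums by preparing with log
    # and projecting with exp.
    def product(parts):
        return exp(sum(sum(map(log, p)) for p in parts))

    assert isclose(product([data[:3], data[3:]]), 720.0)   # 6! = 720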

In any domain, then, it is natural to look for composable functions and to implement algorithms in terms of them. This seems to have been widespread practice until the late 1980s, when it became more common to implement algorithms directly and then to worry about how to distribute them.

Iterative Composability

In some cases it is not possible to determine composable functions in advance, or perhaps at all: for example, where innovation can take place, or where one is otherwise ignorant of what may be. Here one may look for a form of ‘iterative composability’, in which one hopes that the result is normally adequate, that there will be signs if it is not, and that one will be able to improve the situation. What matters is that this process should converge, so that one can get as close as one likes to the results one would get from using all the data.

Elections under FPTP (first past the post) are not composable, and one cannot tell whether the party that is most voters’ first preference has failed to get in. AV (alternative vote) is also not composable, but one has more information (voters give rankings) and so can sometimes tell that there cannot have been a party that was most voters’ first preference yet failed to get in. If there could have been, one could hold a second round with only the top parties’ candidates. This is a partial step towards general iterative composability: for a given situation AV will often be iteratively composable, much more so than FPTP.

Parametric estimation is generally composable when one has a fixed number of entities whose parameters are being estimated. Otherwise one has an ‘association’ problem, which might be tackled differently for the different parts. If so, this needs to be detected and remedied, perhaps iteratively. This is effectively a form of hypothesis testing. Here the problem is that the testing of hypotheses using likelihood ratios is not composable. But, again, if hypotheses are compared, differences can be detected and remedial action taken. It is less obvious that this process will converge, but for constrained hypothesis spaces it does.

Innovation, transformation, freedom and rationality

It is common to suppose that people acting in their environment should characterise their situation within a context in enough detail to remove all but (numeric) probabilistic uncertainty, so that they can optimize. Acting sub-optimally, it is supposed, would not be rational. But if innovation is about transformation then a supposedly rational act may undermine the context of another, leading to a loss of performance and possibly to crisis or chaos.

Simultaneous innovation could be managed by having an over-arching policy or plan, but this would clearly constrain freedom and hence genuine innovation. Too much innovation and one has chaos; too little and there is too little progress.

A composable approach is to seek innovations that respect each other’s contexts, and to make clear to others what one’s essential context is. This supports only very timid innovation if the innovation is rational (in the above sense), since no true (Knightian) uncertainty can be accepted. A more composable approach is to seek to minimise dependencies and to innovate in a way that accepts – possibly embraces – true uncertainty. This necessitates a deep understanding of the situation and its potentialities.

Conclusion

Composability is an important concept that can be applied quite generally. The structure of an activity shouldn’t affect its outcome (other than resource usage). This can mean developing core components that provide a sound infrastructure, and then adapting that infrastructure to perform the desired tasks, rather than seeking to implement the desired functionality directly.

Dave Marsay

Cyber Doctrine

Cyber Doctrine: Towards a coherent evolutionary framework for learning resilience, ISRS, JP MacIntosh, J Reid and LR Tyler.

A large booklet that provides a critical contribution to the Cyber debate. Here I provide my initial reactions: the document merits more detailed study.

Topics

Scope

Just as financial security is about more than just defending against bank-robbers, cyber security is about more than just defending against deliberate attack, and extends to all aspects of resilience, including freedom from whatever delusions might be analogous to the efficient market hypothesis.

Approach

Innovation is key to a vibrant Cyberspace, and further innovation in Cyberspace is vital to our real lives. Thus a notion of security based on constraint, or of resilience based on always returning to the status quo, is simply not appropriate.

Resilience and Transformation

Resilience is defined as “the enduring power of a body or bodies for transformation, renewal and recovery through the flux of interactions and flow of events.” It is not just the ability to ‘bounce back’ to its previous state. It implies the ability to learn from events and adapt to be in a better position to face them.

Transformation is taken to be the key characteristic. It is not defined, which might lead people to turn to wikipedia, whose notion does not explicitly address complexity or uncertainty. I would like to see more emphasis on the long-run issues of adapting to evolve as against sequentially adapting to what one thinks the current needs are. This may include ‘deep transformation’ and ‘transformation in contact’ and the elimination of parts that are no longer needed.

Pragmatism 

The document claims to be ‘pragmatic’: I have concerns about what this term means to readers. According to wikipedia, “it describes a process where theory is extracted from practice, and applied back to practice to form what is called intelligent practice.” Fair enough. But the efficient market hypothesis was once regarded as pragmatic, and there are many who think it pragmatic to act as if one’s beliefs were true. Effective Cyber practice would seem to depend on an appropriate notion of pragmatism, which a doctrine perhaps ought to elucidate.

Glocalization

The document advocates glocalization. According to wikipedia this means ‘think global act local’ and the document refers to a variant: “the compression of the world and the intensification of the consciousness of the world as a whole”. But how should we conceive the whole? The document says “In cyberspace our lives are conducted through a kaleidoscope of global and local relations, which coalesce and dissipate as diverse glocals.” Thus this is not wholism (which supposes that the parts should be dominated by the needs of the whole) but a more holistic vision, which seeks a sustainable solution, somehow ‘balancing’ a range of needs on a range of scales. The doctrinal principles will need to support the structuring and balancing more explicitly.

Composability

The document highlights composability as a key aspect of best structural practice that – pragmatically – perhaps ought to be leveraged further. I intend to blog specifically on this. Effective collaboration is clearly essential to innovation, including resilience. Composability would seem essential to effective collaboration.

Visualisation: Quads

I imagine that anyone who has worked on these types of complex issue, with all their uncertainties, will recognize the importance of visual aids that can be talked around. There are many that are helpful when interpreted with understanding and discretion, but I have yet to find any that can ‘stand alone’ without risk of mis-interpretation. Diagram 6 (page 89) seems at first sight a valuable contribution to the corpus, worthy of further study and perhaps development.

I consider Perrow limited because his ‘yardstick’ tends to be an existing system and his recommendation seems to be ‘complexity and uncertainty are dangerous’. But if we want resilience through innovation we cannot avoid complexity and uncertainty. Further, glocalization seems to imply a turbulent diversity of types of coupling, such that Perrow’s analysis is impossible to apply.

I have come across the Johari window used in government as a way of explaining uncertainty, but here the yardstick is what others think they know, and in any case the concept of ‘knowledge’ seems just as difficult as that of uncertainty. So while this motivates, it doesn’t really explain.

The top ‘quad’ says something important about conventional economics. Much of life is a zero sum game: if I eat the cake, then you can’t. But resilience is about other aspects of life: we need a notion of rationality that suits this side of life. This will need further development.

Positive Deviancy and Education

 Lord Reid (below) made some comments when launching the booklet that clarify some of the issues. He emphasises the role for positive deviancy and education in the sense of ‘bringing out’. This seems to me to be vital.

Control and Patching

Lord Reid (below) emphasises that a control-based approach, or continual ‘patching’, is not enough. There is a qualitative change in the nature of Cyber, and hence a need for a completely different approach. This might have been made more explicit in the document.

Criticisms

The main criticisms that I have seen have either been of recommendations that critics wrongly assume John Reid to be making (e.g., for more control) or appear to be based on a dislike of Lord Reid. In any case, changes such as those proposed would seem to call for a more international figure-head or lead institution, perhaps with ISRS in a supporting role.

What next?

The argument for having some doctrine matches my own leanings, as does the general trend of  the suggestions. But (as the government, below, says) one needs an international consensus, which in practice would seem to mean an approach endorsed by the UN security council (including America, France, Russia and China). Such a hopeless task seems to lead people to underestimate the risks of the status quo, or of ‘evolutionary’ patching of it with either less order or more control. As with the financial crisis, this may be the biggest threat to our security, let alone our resilience.

It seems to me, though, that behind the specific ideas proffered the underlying instincts are not all that different from those of the founders of the UN, and that seen in that context the ideas might not be too far from being attractive to each of the permanent members, if only the opportunities were appreciated.

Any re-invention or re-articulation of the principles of the UN would naturally have an impact on member states, and call for some adjustment to their legal codes. The UK’s latest Prevent strategy already emphasises the ‘fundamental values’ of ‘universal human rights, equality before the law, democracy and full participation in our society’.  In effect, we could see the proposed Cyber doctrine as proposing principles that would support a right to live in a reasonably resilient society. If for resilience we read sustainability, then we could say that there should be a right to be able to sustain oneself without jeopardising the prospects of one’s children and grandchildren. I am not sure what ‘full participation in our society’ would mean under reformed principles, but I see governments as having a role in fostering the broadest range of possible ‘positive deviants’, rather than (perhaps inadvertently) encouraging dangerous groupthink. These thoughts are perhaps prompted more by Lord Reid’s comments than the document itself.

Conclusion

 The booklet raises important issues about the nature, opportunities and threats of globalisation as impacted by Cyberspace. It seems clear that there is a consequent need for doctrine, but not yet what routes forward there may be. Food for thought, but not a clear prospectus.

See Also

Government position, Lord Reid’s Guardian article, Police Led Intelligence, some negative comment.

Dave Marsay