Uncertainty is not just probability

My paper, based on the discussion paper referred to in a previous post, has just been published. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

Dave Marsay

What should replace utility maximization in economics?

Mainstream economics has been based on the idea of people producing and trading in order to maximize their utility, which depends on their assigning values and conditional probabilities to outcomes. Thus, in particular, mainstream economics implies that people do best by assigning probabilities to possible outcomes, even when there seems to be no sensible way to do this (such as when considering a possible crash). Ken Arrow has asked: if one rejects utility maximization, what should one replace it with?
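
For orientation, the textbook form of the claim (my rendering of the standard definition, not a quote from Arrow) is that a rational agent chooses the act that maximizes expected utility:

```latex
\mathrm{EU}(a) = \sum_i P(o_i \mid a)\, u(o_i),
\qquad a^\ast = \operatorname*{arg\,max}_a \mathrm{EU}(a)
```

Here u assigns values to outcomes and P(o_i | a) is the conditional probability of each outcome given the act – precisely the assignments that seem to have no sensible basis for events such as a crash.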

The assumption here seems to be that it is better to have a wrong theory than to have no theory. The fear seems to be that economies would grind to a halt unless they were sanctioned by some theory – even a wrong one. But this fear seems at odds with another common view, that economies are driven by businesses, which are driven by ‘pragmatic’ men. It might be that without the endorsement of some (wrong) theory some practices, such as the development of novel financial instruments and the use of high leverage, would be curtailed. But would this be a bad thing?

Nonetheless, Arrow’s challenge deserves a response.

There are many variations in detail of utility maximization theories. Suppose we identify ‘utility maximization’ as a possible heuristic; then utility maximization theory claims that people use some specific heuristics, so an obvious alternative is to consider a wider range. The implicit idea behind utility maximization theory seems to be that under a competitive regime resembling evolution, the evolutionarily stable strategies (‘the good ones’) do maximize some utility function, so that in time utility maximizers ought to come to dominate economies. (Maybe poor people do not maximize any utility, but they – supposedly – have relatively little influence on economies.) But this idea is hardly credible. If – as seems to be the case – economies have significant ‘Black Swans’ (low-probability, high-impact events) then utility maximizers who ignore the possibility of a Black Swan (such as a crash) will do better in the short-term, and so the economy will become dominated by people with the wrong utilities. People with the right utilities would do better in the long run, but have two problems: they need to survive the short-term, and they need to estimate the probability of the Black Swan. No method has been suggested for doing the latter. An alternative is to take account of some notional utility but also take account of any other factors that seem relevant.
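
To make the selection argument concrete, here is a minimal simulation sketch – my own construction with invented parameters (crash probability, crash size, yearly returns), not drawn from any particular economic model:

```python
# A crash-blind agent (all-in on the risky asset) vs a hedged agent (50/50
# with a safe asset), compared over short and long horizons.
import numpy as np

rng = np.random.default_rng(0)
P_CRASH, GOOD, CRASH, SAFE = 0.05, 1.08, 0.20, 1.02  # assumed yearly figures

def simulate(years, n_sims=100_000):
    """Terminal wealth of both agents along the same random crash paths."""
    crashes = rng.random((n_sims, years)) < P_CRASH
    returns = np.where(crashes, CRASH, GOOD)
    blind = returns.prod(axis=1)                        # 100% in the risky asset
    hedged = (0.5 * returns + 0.5 * SAFE).prod(axis=1)  # half in a safe asset
    return blind, hedged

for years in (5, 50):
    blind, hedged = simulate(years)
    print(f"{years:2d} years: crash-blind beats hedged in "
          f"{np.mean(blind > hedged):.0%} of runs "
          f"(median wealth {np.median(blind):.2f} vs {np.median(hedged):.2f})")
```

With these (invented) numbers the crash-blind agent wins most five-year comparisons, so short-term selection favours it, yet over fifty years the hedged agent typically ends far wealthier – and nothing in the simulation tells either agent how to estimate the crash probability from experience.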

For example, when driving a hire-car along a winding road with a sheer drop I ‘should’ adjust my speed to trade time of arrival against risk of death or injury. But usually I simply reduce my speed to the point where the risk is slight, and accept the consequential delay. These are qualitative judgements, not arithmetic trade-offs. Similarly an individual might limit their at-risk investments (e.g. stocks) so that a reasonable fall (e.g. 25%) could be tolerated, rather than try to keep track of all the possible things that could go wrong (such as terrorists stealing a US Minuteman) and their likely impact.
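
The stock-limiting rule can be made concrete with a toy calculation (my figures, purely illustrative):

```python
# Hypothetical 'tolerable fall' rule: cap the at-risk holding so that a
# plausible fall (here 25%) cannot push total wealth below an acceptable floor.
wealth, floor, plausible_fall = 100_000, 90_000, 0.25   # assumed figures
max_at_risk = (wealth - floor) / plausible_fall
print(f"Hold at most {max_at_risk:,.0f} in stocks: a 25% fall then costs "
      f"{max_at_risk * plausible_fall:,.0f}, leaving {floor:,}.")
```

No probability estimates are needed: only a qualitative judgement about what fall is ‘reasonable’ and what floor one can live with.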

More generally, we could suppose that people act according to their own heuristics, and that there are competitive pressures on heuristics, but not that utility maximization is necessarily ‘best’, or that a healthy economy relies on most people having similar heuristics, or that there is some stable set of ‘good’ heuristics. All these questions (and possibly more) could be left open for study and debate. As a mathematician it seems to me that decision-making involves ideas, and that ideas are never unique or final, so that novel heuristics could arise and be successful from time to time. Or at least, the contrary would require an explanation. In terms of game theory, the conventional theory seems to presuppose a fixed single-level game, whereas – like much else – economies seem to have scope for changing the game and even for creating higher-level games, without limit. In that case, surely, strategies must change, being created rather than drawn from a fixed set?

See Also

Some evidence against utility maximization. (Arrow’s response prompted this post).

My blog on reasoning under uncertainty with application to economics.

Dave Marsay

Haldane’s The dog and the Frisbee

Andrew Haldane The dog and the Frisbee

Haldane argues in favour of simplified regulation. I find the conclusions reasonable, but have some quibbles about the details of the argument. My own view is that many of our financial problems have been due – at least in part – to a misrepresentation of the associated mathematics, and so I am keen to ensure that we avoid similar misunderstandings in the future. I see this as a primary responsibility of ‘regulators’, viewed in the round.

The paper starts with a variation of Ashby’s ball-catching observation, involving a dog and a Frisbee instead of a man and a ball: you don’t need to estimate the position of the Frisbee or be an expert in aerodynamics: a simple, natural heuristic will do. He applies this analogy to financial regulation, but the analogy is somewhat flawed. When catching a Frisbee one relies on the Frisbee behaving normally, but in financial regulation one is concerned with what had seemed to be abnormal, such as the crisis period of 2007/8.

It is noted of game theory that

John von Neumann and Oskar Morgenstern established that optimal decision-making involved probabilistically-weighting all possible future outcomes.

In apparent contrast

Many of the dominant figures in 20th century economics – from Keynes to Hayek, from Simon to Friedman – placed imperfections in information and knowledge centre-stage. Uncertainty was for them the normal state of decision-making affairs.

“It is not what we know, but what we do not know which we must always address, to avoid major failures, catastrophes and panics.”

The game-theoretic thinking is characterised as ignoring the possibility of uncertainty, which – from a mathematical point of view – seems an absurd misreading. Theories can only ever have conditional conclusions: to interpret them unconditionally is a misinterpretation that goes beyond the theory’s proper bounds. The paper – rightly – rejects the conclusions of two-player zero-sum static game theory. But its critique of such a theory is much less thorough than von Neumann and Morgenstern’s own (e.g. their 4.3.3) and fails to identify which conditions are violated by economics. More worryingly, it seems to invite the reader to accept those conclusions, as here:

The choice of optimal decision-making strategy depends importantly on the degree of uncertainty about the environment – in statistical terms, model uncertainty. A key factor determining that uncertainty is the length of the sample over which the model is estimated. Other things equal, the smaller the sample, the greater the model uncertainty and the better the performance of simple, heuristic strategies.

This seems to suggest that – contra game theory – we could ‘in principle’ establish a sound model, if only we had enough data. Yet:

Einstein wrote that: “The problems that exist in the world today cannot be solved by the level of thinking that created them”.

There seems to be a non sequitur here: if new thinking is repeatedly being applied then surely the nature of the system will continually be changing? Or is it proposed that the ‘new thinking’ will yield a final solution, eliminating uncertainty? If ‘new thinking’ is repeatedly being applied then the regularity conditions of basic game theory (e.g. at 4.6.3 and 11.1.1) are not met (as discussed at 2.2.3). It is certainly not an unconditional conclusion that the methods of game theory apply to economies beyond the short-run, and experience would seem to show that such an assumption would be false.
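
Haldane’s sample-length claim, quoted above, is at least true of ordinary statistical estimation, as a toy experiment shows – my own construction, with an assumed data-generating process:

```python
# Out-of-sample error of a simple rule vs a flexible model, fitted on small
# and large samples drawn from one FIXED process.
import numpy as np

rng = np.random.default_rng(1)
truth = lambda x: 0.5 * x + 0.3 * np.sin(3 * x)      # assumed data-generator

def oos_mse(n_train, degree, trials=500):
    errs = []
    for _ in range(trials):
        x = rng.uniform(-2, 2, n_train)
        y = truth(x) + rng.normal(0, 0.5, n_train)
        coeffs = np.polyfit(x, y, degree)             # fit on this sample only
        x_test = rng.uniform(-2, 2, 200)
        errs.append(np.mean((np.polyval(coeffs, x_test) - truth(x_test)) ** 2))
    return np.mean(errs)

for n in (8, 800):
    print(f"n={n:3d}: simple (degree 1) MSE={oos_mse(n, 1):.3f}; "
          f"flexible (degree 5) MSE={oos_mse(n, 5):.3f}")
```

The crude rule wins on the small sample, the flexible model on the large one. But note the experiment assumes a fixed data-generating process; if ‘new thinking’ keeps changing the process, the large-sample regime never arrives – which is the point of the non sequitur above.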

The paper recommends the use of heuristics, by which it presumably means what Gigerenzer means: methods that ignore some of the data. Thus, for example, all formal methods are heuristics, since they ignore intuition. But a dog catching a Frisbee only has its own experience, all of which it is using, and so presumably – by this definition – is not actually using a heuristic at all. In 2006 most financial and economic methods were heuristics in this sense, in that they ignored the lessons identified by von Neumann and Morgenstern. Gigerenzer’s definition seems hardly helpful. The dictionary definition relates to learning on one’s own, ignoring others. The economic problem, it seems to me, was of paying too much attention to the wrong people, and too little to those such as von Neumann and Morgenstern – and Keynes.

The implication of the paper and Gigerenzer is, I think, that a heuristic is a set method that is used, rather than solving a problem from first principles. This is clearly a good idea, provided that the method incorporates a check that whatever principles it relies upon do in fact hold in the case at hand. (This is what economists have often neglected to do.) If set methods are used as meta-heuristics to identify the appropriate heuristics for particular cases, then one has something like recognition-primed decision-making. It could be argued that the financial community had such meta-heuristics, which led to the crash: the adoption of heuristics as such seems not to be a solution. Instead one needs to appreciate what kinds of heuristic are appropriate when. Game theory shows us that probabilistic heuristics are ill-founded when there is significant innovation, as there was before, during and immediately after 2007/8. In so far as economics and finance are games, some events are game-changers. The problem is not the proper application of mathematical game theory, but the ‘pragmatic’ application of a simplistic version: playing the game as it appears to be unless and until it changes. An unstated possible deduction from the paper is surely that such ‘pragmatic’ approaches are inadequate. For mutable games, strategy needs to take place at a higher level than it does for fixed games: it is not just that different strategies are required, but that ‘strategy’ has a different meaning: it should at least recognize the possibility of a change to a seemingly established status quo.

If we take the analogy of a dog and a Frisbee, and consider Frisbee-catching to be a statistically regular problem, then the conditions of simple game theory may be met, and it is also possible to establish statistically that a heuristic (method) is adequate. But if there is innovation in the situation then we cannot rely on any simplistic theory or on any learnt methods. Instead we need a more principled approach, such as that of Keynes or Ashby, considering the conditionality and looking out for potential game-changers. The key is not just simpler regulation, but regulation that is less reliant on conditions that we expect to hold but which, on mature reflection, are not totally reliable. In practice this may necessitate a mature on-going debate to adjust the regime to potential game-changers as they emerge.

See Also

Ariel Rubinstein opines that:

classical game theory deals with situations where people are fully rational.

Yet von Neumann and Morgenstern (4.1.2) note that:

the rules of rational behaviour must provide definitely for the possibility of irrational conduct on the part of others.

Indeed, in a paradigmatic zero-sum two-person game, if the other person plays rationally (according to game theory) then your expected return is the same irrespective of how you play. Thus it is of the essence that you consider potential non-rational plays. I take it, then, that game theory as reflected in economics is a very simplified – indeed an over-simplified – version. It is presumably this distorted version that Haldane’s criticisms properly apply to.
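
A worked instance of that indifference, using matching pennies as the paradigmatic game (my choice of example):

```python
# Matching pennies: if the opponent plays the minimax-optimal 50/50 mixture,
# your expected payoff is 0 whatever you do; only a NON-rational opponent
# (e.g. one playing 90/10) makes your choice of strategy matter.
payoff = [[+1, -1],          # rows: you play H or T; columns: opponent's H or T
          [-1, +1]]          # you win +1 on a match, lose 1 otherwise

def expected(p_heads, opp_mix):
    """Your expected payoff when you play Heads with probability p_heads."""
    return sum(q * (p_heads * payoff[0][j] + (1 - p_heads) * payoff[1][j])
               for j, q in enumerate(opp_mix))

for p in (0.0, 0.3, 1.0):
    print(f"P(you play H)={p}: vs 50/50 EV={expected(p, [0.5, 0.5]):+.2f}; "
          f"vs 90/10 EV={expected(p, [0.9, 0.1]):+.2f}")
```

Against the ‘rational’ opponent every row prints 0.00; against the 90/10 opponent your choice swings the expected payoff from −0.8 to +0.8.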

Dave Marsay

The money forecast

A review of ‘The money forecast’, A. Haldane, New Scientist, 10 Dec. 2011. The on-line version is ‘To navigate economic storms we need better forecasting’.

Summary

Andrew Haldane, ‘Andy’, is one of the more insightful and – hopefully – influential members of the UK economic community, recognising that new ways of thinking are needed and taking a lead in their development.

He refers to a previous article ‘Revealed – the Capitalist network that runs the world’, which inspires him to attempt to map the world of finance.

“… Making sense of the financial system is more an act of archaeology than futurology.”

Of the pre-crisis approach it says:

“… The mistake came in thinking the behaviour of the system was just an aggregated version of the behaviour of the individual. …

Interactions between agents are what matters. And the key to that is to explore the underlying architecture of the network, not the behaviour of any one node. To make an analogy, you cannot understand the brain by focusing on a neuron – and then simply multiplying by 100 billion. …

… When parts started to malfunction … no one had much idea what critical faculties would be impaired.

That uncertainty, coupled with dense financial wiring, turned small failures into systemic collapse. …

Those experiences are now seared onto the conscience of regulators. Systemic risk has entered their lexicon, and to understand that risk, they readily acknowledge the need to join the dots across the network. So far, so good. Still lacking are the data and models necessary to turn this good intent into action.

… Other disciplines have cut a dash in their complex network mapping over the past generation, assisted by increases in data-capture and modelling capability made possible by technology. One such is weather forecasting … .

Success stories can also be told about utility grids and transport networks, the web, social networks, global supply chains and perhaps the most complex web of all, the brain.

… imagine the scene a generation hence. There is a single nerve centre for global finance. Inside, a map of financial flows is being drawn in real time. The world’s regulatory forecasters sit monitoring the financial world, perhaps even broadcasting it to the world’s media.

National regulators may only be interested in a quite narrow subset of the data for the institutions for which they have responsibility. These data could be part of, or distinct from, the global architecture.

… it would enable “what-if?” simulations to be run – if UK bank Northern Rock is the first domino, what will be the next?”

Comments

I am unconvinced that archaeology, weather forecasting or the other examples are really as complex as economic forecasting, which can be reflexive: if all the media forecast a crash there probably will be one, irrespective of the ‘objective’ financial and economic conditions. Similarly, prior to the crisis most people seemed to believe in ‘the great moderation’, and the good times rolled on, seemingly.

Prior to the crisis I was aware that a minority of British economists were concerned about the resilience of the global financial system and that the ‘great moderation’ was a cross between a house of cards and a pyramid selling scheme. In their view, a global financial crisis precipitated by a US crisis was the greatest threat to our security. In so far as I could understand their concerns, Keynes’ mathematical work on uncertainty together with his later work on economics seemed to be key.

Events in 2007 were worrying. I was advised that the Chinese were thinking more sensibly about these issues, and I took the opportunity to visit China at Easter 2008, hosted by the Chinese Young Persons Tourist Group, presumably not noted for their financial and economic acumen. It was very apparent from a coach ride from Beijing to the Great Wall that their program of building new towns and moving peasants in was on hold. The reason given by the tour guide was that the US financial system was expected to crash after their Olympics, leading to a slow-down in China’s economic growth, which needed to stay above 8% or else they faced civil unrest. Once tipped off, similar measures to mitigate a crisis were apparent almost everywhere. I also talked to a financier, and had some great discussions about Keynes and his colleagues, and the implications for the crash. In the event the crisis seems to have been triggered by other causes, but Keynes’ conceptual framework still seemed relevant.

The above only went to reinforce my prejudice:

  • Not only is uncertainty important, but one needs to understand its ramifications at least as well as Keynes did (e.g. in his Treatise and ‘Economic Consequences of the Peace’).
  • Building on this, concepts such as risk need to be understood to their fullest extent, not reduced to numbers.
  • The quotes above are indicative of the need for a holistic approach. Whatever variety one prefers, I do think that this cannot be avoided.
  • The quote about national regulators only having a narrow interest seems remarkably reductionist. I would think that they would all need a broad interest and to be exchanging data and views, albeit they may only have narrow responsibilities. Financial storms can spread around the world quicker than meteorological ones.
  • The – perhaps implicit – notion of only monitoring financial ‘flows’ seems ludicrous. I knew that the US was bound to fail eventually, but it was only by observing changes in migration that I realised it was imminent. Actually, I might have drawn the same conclusion from observing changes in financial regulation in China, but that still was not a ‘financial flow’. I had previously drawn similar conclusions from talking to people who were speculating on ‘buy to let’, thinking it a sure thing.
  • Interactions between agents and architectures are important, but if Keynes was right then what really matters are changes to ‘the rules of the games’. The end of the Olympics was not just a change in ‘flows’ but a potential game-changer.
  • Often it is difficult to predict what will trigger a crisis, but one can observe when the situation is ripe for one. To draw an analogy with forest fires, one can’t predict when someone will drop a bottle or a lit cigarette, but one can observe when the tinder has built up and is dry.

It thus seems to me that while Andy Haldane is insightful, the actual article is not that enlightening, and invites a much too prosaic view of forecasting. Even if we think that Keynes was wrong, I am fairly sure that we need to develop language and concepts in which we can have a discussion of the issues, even if only of ‘Knightian uncertainty’. The big problem that I had prior to the crisis was that no such discussion seemed possible. If we are to learn anything from the crisis it is surely that such discussions are essential. The article could be a good start.

See Also

The short long. On the trend to short-termism.

Control rights (and wrongs). On the imbalance between incentives and risks in banking.

Risk Off. A behaviourist’s view of risk. It notes that prior to the crash ‘risk was under-priced’.

Dave Marsay

GLS Shackle, imagined and deemed possible?

Background

This is a personal view of GLS Shackle’s uncertainty. Having previously used Keynes’ approach to identify possible failure modes in systems, including financial systems (in the run-up to the collapse of the tech bubble), I became concerned in 2007 that there was another bubble with the potential for a Keynes-type 25% drop in equities, constituting a ‘crisis’. In discussions with government advisers I first came across Shackle, and the differences between him and Keynes were emphasised. I tried to make sense of Shackle, so that I could form my own view, but failed. Unfinished business.

Since the crash of 2008 there have been various attempts to compare and contrast Shackle and Keynes, and others. Here I imagine a solution to the conundrum which I deem possible: unless you know different?

Imagined Shackle

Technically, Shackle seems to focus on the wickeder aspects of uncertainty, to seek to explain them and their significance to economists and politicians, and to advise on how to deal with them. Keynes provides a more academic view, covering all kinds of uncertainty, contrasting tame probabilities with wicked uncertainties, helping us to understand both in a language that is better placed to survive the passage of time and the interpretation by a wider – if more technically aware – audience.

Politically, Shackle lacks the baggage of Lord Keynes, whose image has been tarnished by the misuse of the term ‘Keynesian’. (Like Keynes, I am not a Keynesian.)

Conventional probability theory would make sense if the world were a complicated randomizing machine, so that one has ‘the law of large numbers’: that in the long run particular events will tend to occur with some characteristic, stable frequency. Thus in principle it would be possible to learn the frequency of events, such that reasonably rare events would be about as rare as we expect them to be. Taleb has pointed out that we can never learn the frequencies of very rare events, and that this is a technical flaw in many accounts of probability theory, which fail to point it out. But Keynes and Shackle have more radical concerns.
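
Taleb’s technical point is easy to make concrete (a sketch with assumed numbers):

```python
# Estimating the frequency of a 1-in-10,000 event from 20,000 observations:
# most samples contain 0, 1 or 2 occurrences, so the estimate - and anything
# computed from it - is unstable by large factors.
import numpy as np

rng = np.random.default_rng(2)
p_true, n = 1e-4, 20_000
counts = rng.binomial(n, p_true, size=10_000)   # occurrences across many samples
print(f"true p = {p_true}")
print(f"P(sample contains no occurrences at all) = {np.mean(counts == 0):.0%}")
print(f"estimates of p actually seen: {np.unique(counts / n)[:4]} ...")
```

Roughly one sample in seven contains no occurrences at all, and so estimates the frequency as exactly zero; the rest scatter over several-fold ranges.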

If we think of the world as a complicated randomizing machine then, as in Whitehead, it is one which can suddenly change. Shackle’s approach, in so far as I understand it, is to be open to the possibility of a change, to recognize when the evidence of a change is overwhelming, and to react to it. This is an important difference from the conventional approach, in which all inference is done on the assumption that the machine is known. Any evidence that it may have changed is simply normalised away. Shackle’s approach is clearly superior in all those situations where substantive change can occur.

Shackle terms decisions about a possibly changing world ‘critical’. He makes the point that the application of a predetermined strategy or habit is not a decision proper: all ‘real’ decisions are critical in that they make a lasting difference to the situation. Thus one has strategies for situations that one expects to repeat, and makes decisions about situations that one is trying to ‘move on’. This seems a useful distinction.

Shackle’s approach to critical decisions is to imagine potential changes to new behaviours, to assess them and then to choose between those deemed possible. This is based on preference, not expected utility, because ‘probability’ does not make sense. He gives an example of a French guard at the time of the revolution who can either give access to a key prisoner or not. He expects to lose his life if he makes the wrong decision, depending on whether the revolution succeeds or not. A conventional approach would be based on the observation that most attempted revolutions fail. But his choice may have a big impact on whether or not the revolution succeeds. So Shackle advocates imagining the two possible outcomes and their impact on him, and then making a choice. This seems reasonable. The situation is one of choice, not probability.
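
A stylised version of the guard’s dilemma (my utilities and probabilities, purely illustrative) shows why the expected-utility machinery is no help here:

```python
# The expected-utility recommendation flips with the assumed P(success) - and
# that probability is exactly what the guard cannot know, not least because
# his own act helps determine it.
fates = {("free", True): 0, ("free", False): -100,   # utility of (act, success?)
         ("hold", True): -100, ("hold", False): 0}

for p_success in (0.1, 0.5, 0.9):
    eu = {act: p_success * fates[(act, True)]
               + (1 - p_success) * fates[(act, False)]
          for act in ("free", "hold")}
    print(f"P(success)={p_success}: EU={eu} -> choose '{max(eu, key=eu.get)}'")
```

The recommendation is hostage to a number that is not merely unknown but partly created by the decision itself.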

Keynes can support Shackle’s reasoning. But he also addresses other types of wicked uncertainty. Firstly, it is not always the case that a change comes ‘out of the blue’. One may not be able to predict when the change will come, but it is sometimes possible to see that there is an economic bubble, and the French guard probably had some indications that he was living in extraordinary times. Thus Keynes goes beyond Shackle’s pragmatism.

In reality, there is no strict dualism between probabilistic behaviour and chaos, between probability and Shackle’s complete ignorance. There are regions in-between that Keynes helps explore. For example, the French guard is not faced with a strictly probabilistic situation, but could usefully think in terms of probabilities conditioned on his actions. In economics, one might usefully think of outcomes as conditioned on the survival of conventions and institutions (October 2011).
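
In symbols (my rendering of the point, not Keynes’s notation), with C the event that the prevailing conventions and institutions survive:

```latex
P(\text{outcome} \mid \text{act}) =
    P(\text{outcome} \mid \text{act}, C)\, P(C \mid \text{act})
  + P(\text{outcome} \mid \text{act}, \neg C)\, P(\neg C \mid \text{act})
```

The first factor may be tame and estimable from experience, while P(C | act) may be a matter of judgement rather than measurement – the in-between region.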

I also have a clearer view of why consideration of Shackle led to the rise of behavioural economics: if one is ‘being open’ and ‘imagining’ then psychology is clearly important. On the other hand, much of behavioural economics seems to use conventional rationality as some form of ‘gold standard’ for reasoning under uncertainty, and to consider departures from it as ‘biases’. But then I don’t understand that either!

Addendum

(Feb 2012, after Blue’s comments.)

I have often noticed that decision-takers and their advisers have different views about how to tackle uncertainty: decision-takers focus on the non-probabilistic aspects, while their advisers (scientists, or at least the scientifically trained) tend to treat – and may even insist on treating – the problem probabilistically, so that the two have radically different approaches to problem-solving. Perhaps the situation is crucial for the decision-taker, but routine for the adviser? (‘The agency problem.’) (Econophysics seems to suffer from this.)

I can see how Shackle had much that was potentially helpful in the run-up to the financial crash. But it seems to me no surprise that the neoclassical mainstream was unmoved by it. They didn’t regard the situation as crucial, and didn’t imagine or deem possible a crash. Unless anyone knows different, there seems to be nothing in Shackle’s key ideas that provides as explicit a warning as Keynes. While Shackle was more acceptable than Keynes (lacking the ‘Keynesian’ label) he also still seems less to the point. One needs both together.

See Also

Prigogine, who provides models of systems that can suddenly change (‘become’). He also relates to Shackle’s discussion of how making decisions relates to the notion of ‘time’.

Dave Marsay

Bretton Woods: Modelling and Economics

The Institute for New Economic Thinking has a video on modelling and economics. It is considerably more interesting than it might have been before the financial crises beginning in 2007. I make a few points from a mathematical perspective.

  • There is a tendency to apply a ‘canned’ model, varying a few parameters, rather than to engage in genuine modelling. The difference makes a difference. In the run-up to the crises of 2007 on, there was widespread agreement on key aspects of economic theory, and some fixed models came to be treated as ‘fact’. In this sense, modelling had stopped. So maybe proper modelling in economics would be a useful innovation? 😉
  • Milton Friedman distinguishes between models that predict well (short-term) and those that have ‘realistic’ micro-features. One should also be concerned about the typical behaviours of the model.
  • One particularly needs, as Keynes did, to distinguish between short-run and long-run models.
  • Models that are solely judged by their ability to predict short-run events will tend to forget about significant events (e.g. crises) that occur over a longer time-frame, and to fall into the habit of extrapolating from current trends, rather than seeking to model potential changes to the status quo (see the sketch after this list).
  • Again, as Keynes pointed out, in complex situations one often cannot predict the long-run future, but only anticipate potential failure modes (scenarios).
  • A single model is at best a possible model. There will always be alternatives (scenarios). One at least needs a representative set of credible models if one is to rely on them.
  • As Keynes said, there is a reflexive relationship between one’s long-run model and what actually happens. Crises mitigated are less likely to happen. A belief in the inevitable stability of the status quo increases the likelihood of a failure.
  • Generally, as Keynes said, the economic system works because people expect it to work. We are part of the system to be modelled.
  • It is better for a model to be imprecise but reliable than to be precisely wrong. This particularly applies to assumptions about human behaviour.
  • It may be better for a model to have some challenging gaps than to fill those gaps with myths.
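
As a sketch of the extrapolation point in the list above (my construction, with an invented series): a model judged only on short-run prediction scores well right up to the unmodelled break.

```python
# A trend-following fit predicts well while the regime lasts, and fails
# exactly when it matters: at the (unmodelled) break.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(120).astype(float)
series = 100 + 0.5 * t + rng.normal(0, 1, 120)
series[100:] -= 30                    # an abrupt 'crisis' at t=100

window = 12                           # short-run model: fit the last 12 points
for now in (99, 100):
    xs = t[now - window:now]
    slope, intercept = np.polyfit(xs, series[now - window:now], 1)
    print(f"t={now}: predicted {slope * now + intercept:6.1f}, "
          f"actual {series[now]:6.1f}")
```

At t=99 the trend model is within the noise; at t=100 it misses by the full size of the break, yet its track record up to that point was excellent.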

Part 2 ‘Progress in Economics’ gives the impression that understanding crises is what is most needed, whereas much of the modelling video used language that seems more appropriate to adding epicycles to our models of the new status quo – if we ever have one.

See Also

Reasoning in a complex, dynamic, world, Which mathematics of uncertainty? , Keynes’ General Theory

Dave Marsay

Scientists of the subprime

‘Science of the subprime’ is currently available on BBC iPlayer.

Overview

Mathematicians and scientists were complicit in the crash. Financiers were ‘in thrall to mathematics’, with people like Stiglitz and Soros ‘lone voices in the wilderness’. The ‘low point’ was derivatives, which were ‘fiendishly complicated’, yet ‘mathematical models’ convinced people to trade in them.

The problem was that liberalisation led to an increase in connectedness, which was thought to be a good thing, but this went too far and led to a decrease in diversity, which made the whole system very fragile, eventually crashing. This was presented by Lord May from an ecological perspective.
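
A toy illustration of the diversity point (my construction, not Lord May’s model): diversification makes each bank individually safer, but if every bank diversifies into the same portfolio, failures become simultaneous.

```python
# Specialised banks (one asset each) fail often but separately; homogeneous
# banks (identical diversified portfolios) fail rarely but all at once.
import numpy as np

rng = np.random.default_rng(4)
n_banks, n_assets, capital, trials = 100, 10, 0.08, 20_000

def run(weights):
    """Average failure rate, and how much of the system fails when anything does."""
    rates, severities = [], []
    for _ in range(trials):
        shocks = 0.05 * rng.standard_t(df=3, size=n_assets)  # fat-tailed returns
        failed = -(weights @ shocks) > capital               # losses exceed capital
        rates.append(failed.mean())
        if failed.any():
            severities.append(failed.mean())
    return np.mean(rates), np.mean(severities)

specialised = np.eye(n_assets)[np.arange(n_banks) % n_assets]  # one asset per bank
homogeneous = np.full((n_banks, n_assets), 1 / n_assets)       # identical portfolios
for name, w in (("specialised", specialised), ("homogeneous", homogeneous)):
    rate, severity = run(w)
    print(f"{name}: failure rate {rate:.1%}; when any bank fails, "
          f"{severity:.0%} of the system fails with it")
```

Individually rational diversification reduces diversity across the system: the fragility is in the sameness, not in any single balance sheet.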

Perhaps the most interesting part was that Lord May had tackled his lunching partner Mervyn King before the crash, and that in 2003 Andrew Haldane had independently come up with a ‘toy model’ that he found compelling, but which failed to gain traction.

After the crash, none of the mainstream mathematical models gave any insight into what had gone wrong. The problem was that the models concerned single-point failures, not systemic failures [my words]. Since then Haldane and May have published a paper in Nature showing that structure matters.

The new activities are to generate financial maps, much like weather maps and transport maps.

One problem is the loss of diversity: the proposed solutions are

  • To ensure that banks suffer the consequences of their actions [no ‘moral hazard’].
  • To ’tilt the playing field’ against large players [the opposite of what is done now].

Another problem is the expectation of certainty: it must be recognized that sensible models can give insights but not reliable predictions.

In summary, the main story is that physics-based mathematics led decision-makers astray, and they wouldn’t be persuaded by Lord May or their own experts. There were also some comments on why this might be:

Gillian Tett (FT) commented that decision-makers needed predictions and the illusion of certainty from their models. A decision-maker commented on the tension between containing long-term risk and making a living in the short-run [but this was not developed]. Moreover, policy-makers tend to search for data, models or theories to support their views: the problems are not due to the science as such, but to the application of science.

Comments

  • This broadly reflects and amplifies the Turner review, but I found it less appealing than Lord Turner’s recent INET interview.
  • Gordon Brown ‘at the top of the shop’ shared these concerns, but seemed unable to intervene until his immediate post-crash speech. This seems to raise some interesting issues, especially if the key point was about financial diversity.
  • The underlying problem seems to be that the policy-makers and decision-makers are pragmatic, in a particular sense.
  • Even if the complexity explanation for the crash is correct, it is not clear that this is the only way that crashes can happen, so that pragmatic regulation based on ‘carry on but fix the hole’ may not be effective.
  • The explanations and observations are reminiscent of Keynes. Stiglitz, Soros and Brown had all commended Keynes pre-crash, and many have recognized the significance of Keynes post-crash. Yet he is not mentioned. Before the 1929 crash he thought the sustained performance of the stock market remarkable, rather than taking it for granted. His theory was that it would remain stable just so long as everyone was able to trade and expected everyone else to be able to trade, and the central role of confidence has been recognized ever since. The programme ignored this, which seems odd as the behaviourists are also quite fashionable.
  • Keynes’ underpinning theory of probability [not mentioned] is linked to his tutor Whitehead’s process logic, which underpins much of modern science, including ecology. This makes the problem quite clear: if mathematicians and scientists are employed by banks, and banks are run as ordinary commercial organisations, then they will be focussing on the short-term. The long-term is simply not their responsibility. That is what governments are for (at least according to Locke). But the central machinery doesn’t seem to be up to it. We shouldn’t blame anyone not in government, academia or similar supposedly ‘for the common good’ organisations.
  • There were plenty of mathematicians, scientists and economists (not just Lord May) who understood the issues and were trying hard to get the message across, many of them civil servants etc. If we don’t understand how they failed we may find ourselves in the same position again. I think that in the 90s and 00s everything became more ‘commercial’ and hence short-term. Or can we just kick out the physicists and bring on the ecologists?

See Also

General approach

Dave Marsay