Applications of Statistics

Lars Syll has commented on a book by David Salsburg, criticising workaday applications of statistics. Lars has this quote:

Kolmogorov established the mathematical meaning of probability: Probability is a measure of sets in an abstract space of events.

This is not quite right.

  • Kolmogorov established a possible meaning, not ‘the’ meaning. (Actually Wittgenstein anticipated him.)
  • Even taking this theory, it is not clear why the space should be ‘measurable’. More generally one has ‘upper’ and ‘lower’ measures, which need not be equal. One can extend the more familiar notions of probability, entropy, information and statistics to such measures. Such extended notions seem more credible.
  • In practice one often has some ‘given data’ which is at least slightly distant from the ‘real’ ‘events’ of interest. The data space is typically a rather tame ‘space’, so that a careful use of statistics is appropriate. But one still has the problem of ‘lifting’ the results to the ‘real events’.

These remarks seem to cover the critiques of Syll and Salsburg, but are more nuanced. Statistical results, like any mathematics, need to be interpreted with care. But, depending on which of the above remarks apply, the results may be more or less easy to interpret: not all naive statistics are equally dubious!

Dave Marsay

AI pros and cons

Henry A. Kissinger, Eric Schmidt, Daniel Huttenlocher, The Metamorphosis, Atlantic, August 2019.

AI will bring many wonders. It may also destabilize everything from nuclear détente to human friendships. We need to think much harder about how to adapt.

The authors are looking for comments. My initial reaction is here. I hope to say more. Meanwhile, I’d appreciate your reactions.


Dave Marsay

What logical term or concept ought to be more widely known?

Various, What scientific term or concept ought to be more widely known? Edge, 2017.


Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. …

Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.


As against others on:

(This is as far as I’ve got.)


I’ve grouped the contributions according to whether or not I think they give due weight to the notion of uncertainty as expressed in my blog. Interestingly Steven Pinker seems not to give due weight in his article, whereas he is credited by Nicholas G. Carr with some profound insights (in the first of the second batch). So maybe I am not reading them right.

My own thinking

Misplaced Concreteness

Whitehead’s fallacy of misplaced concreteness, also known as the reification fallacy, “holds when one mistakes an abstract belief, opinion, or concept about the way things are for a physical or “concrete” reality.” Most of what we think of as knowledge is ‘known about a theory’ rather than truly ‘known about reality’. The difference seems to matter in psychology, sociology, economics and physics. This is not a term or concept of any particular science, but rather a seeming ‘brute fact’ of ‘the theory of science’ that perhaps ought to have been called attention to in the above article.


My own specific suggestion, to illustrate the above fallacy, would be Turing’s theory of ‘Morphogenesis’. The particular predictions seem to have been confirmed ‘scientifically’, but it is essentially a logical / mathematical theory. If, as the introduction to the Edge article suggests, science is “reliable methods for obtaining knowledge” then it seems to me that logic and mathematics are more reliable than empirical methods, and deserve some special recognition. Although, I must concede that it may be hard to tell logic from pseudo-logic, and that unless you can do so my distinction is potentially dangerous.

The second law of thermodynamics, and much common sense rationality, assumes a situation in which the law of large numbers applies. But Turing adds to the second law’s notion of random dissipation a notion of relative structuring (as in gravity) to show that ‘critical instabilities’ are inevitable. These are inconsistent with the law of large numbers, so the assumptions of the second law of thermodynamics (and much else) cannot be true. The universe cannot be ‘closed’ in the second law’s sense.
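
This is not Turing’s mathematics, but a toy illustration of the general point: add self-reinforcing structure to random mixing and law-of-large-numbers reasoning breaks down. In the classical Pólya urn each ball drawn is returned together with another of the same colour, and the long-run proportion no longer converges to a fixed value:

```python
import random

def iid_fraction(n, p=0.5, seed=0):
    """Independent draws: the white fraction converges to p (law of large numbers)."""
    rng = random.Random(seed)
    white = sum(rng.random() < p for _ in range(n))
    return white / n

def polya_fraction(n, seed=0):
    """Polya urn: each ball drawn is returned with another of the same colour.
    The self-reinforcement means the limiting fraction is itself random."""
    rng = random.Random(seed)
    white, black = 1, 1
    for _ in range(n):
        if rng.random() < white / (white + black):
            white += 1
        else:
            black += 1
    return white / (white + black)

# Independent sampling settles near 0.5 in every run; Polya runs settle
# at run-dependent limits spread over (0, 1).
iid_runs = [iid_fraction(10_000, seed=s) for s in range(10)]
polya_runs = [polya_fraction(10_000, seed=s) for s in range(10)]
print(iid_runs)
print(polya_runs)
```

Each Pólya run does settle down, but to a limit that depends on its own early history, so no single ‘proportion’ describes the urn in advance.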


The assumptions of the second law seem to leave no room for free will, and hence no reason to believe in our agency, and hence no point in any of the contributions to Edge: things are what they are and we do what we do. But Pinker does not go so far: he simply notes that if things inevitably degrade we do not need to beat ourselves up, or look for scapegoats, when things go wrong. But this can be true even if the second law does not apply. If we take Turing seriously then a seemingly permanent status quo can contain the reasons for its own destruction, so that turning a blind eye and doing nothing can mean sleep-walking to disaster. Whereas Pinker concludes:

[An] underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.

This would seem to follow more clearly from the theory of morphogenesis than the second law. Turing’s theory also goes some way to suggesting or even explaining the items in the second batch. So, I commend it.


Dave Marsay



Heuristics or Algorithms: Confused?

The Editor of the New Scientist (Vol. 3176, 5 May 2018, Letters, p54) opined in response to Adrian Bowyer’s wish to distinguish between ‘heuristics’ and ‘algorithms’ in AI that:

This distinction is no longer widely made by practitioners of the craft, and we have to follow language as it is used, even when it loses precision.

Sadly, I have to accept that AI folk tend consistently to fail to respect a widely held distinction, but it seems odd that their failure should place an obligation on the New Scientist – which has a much broader readership than just AI folk. I would agree that, in addressing audiences that include significant sectors which fail to make some distinction, we need to be aware of the fact; but if the distinction is relevant – as Bowyer argues – surely we should explain it.

According to the freedictionary:

Heuristic: adj 1. Of or relating to a usually speculative formulation serving as a guide in the investigation or solution of a problem.

Algorithm: n: A finite set of unambiguous instructions that, given some set of initial conditions, can be performed in a prescribed sequence to achieve a certain goal and that has a recognizable set of end conditions.

It also offers this quote:

heuristic: of or relating to or using a general formulation that serves to guide investigation

algorithmic: of or relating to or having the characteristics of an algorithm

But perhaps this is not clear?

AI practitioners routinely apply algorithms as heuristics in the same way that a bridge designer may routinely use a computer program. We might reasonably regard a bridge-designing app as good if it correctly implements best practice in bridge-building, but this is not to say that a bridge designed using it would necessarily be safe, particularly if it has significant novelties (as in London’s wobbly bridge).

Thus any app (or other process) has two sides: as an algorithm and as a heuristic. As an algorithm we ask if it meets its concrete goals. As a heuristic we ask if it solves a real-world problem. Thus a process for identifying some kind of undesirable would be regarded as good algorithmically if it conformed to our idea of the undesirables, but may still be poor heuristically. In particular, good AI would seem to depend on someone understanding at least the factors involved in the problem. This may not always be the case, no matter how ‘mathematically sophisticated’ the algorithms involved.
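
A contrived sketch of the two sides (the ‘free money’ rule is entirely hypothetical): the function below is a good algorithm, in that it exactly meets its concrete specification, but a poor heuristic for the real-world problem of spotting spam:

```python
def flags_spam(message: str) -> bool:
    """Hypothetical algorithmic spec: flag any message containing the
    phrase 'free money'. As an algorithm this is exactly right: it meets
    its concrete goal for every input."""
    return "free money" in message.lower()

# Algorithmically good: the specification is met.
assert flags_spam("Claim your FREE MONEY now")
assert not flags_spam("Minutes of the bridge committee")

# Heuristically poor: spam evading the spec, and innocent text caught by it.
assert not flags_spam("Claim your f r e e m o n e y now")                 # spam missed
assert flags_spam("The article on free money (helicopter drops) is out")  # wrongly flagged
```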

Perhaps you could improve on this attempted explanation?

Dave Marsay

Probability as a guide to life

‘Probability is the very guide to life.’

Cicero may have been right, but ‘probability’ means something quite different nowadays to what it did millennia ago. So what kind of probability is a suitable guide to life, and when?

Suppose that we are told that ‘P(X) = p’. Often there is some implied real or virtual population, P, a proportion ‘p’ of which has the property ‘X’. To interpret such a probability statement we need to know what the relevant population is. Such statements are then normally reliable. More controversial are conditional probabilities, such as ‘P(X|Y) = p’. If you satisfy Y, does P(X)=p ‘for you’?

Suppose that:

  1. All the properties of interest (such as X and Y) can be expressed as a union of some disjoint basis, B.
  2. For all such basis properties, B, P(X|B) is known.
  3. That the conditional probabilities of interest are derived from the basis properties in the usual way. (E.g. P(X|B1∪B2) = (P(B1)·P(X|B1) + P(B2)·P(X|B2))/P(B1∪B2).)
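
As a numerical check on rule (3), here is a minimal sketch with made-up basis probabilities (the names B1, B2, B3 and all the numbers are illustrative only):

```python
def combine(p_b, p_x_given_b, parts):
    """P(X | union of disjoint basis parts), by the law of total probability:
    (sum of P(Bi) * P(X|Bi)) / (sum of P(Bi))."""
    num = sum(p_b[b] * p_x_given_b[b] for b in parts)
    den = sum(p_b[b] for b in parts)
    return num / den

# Illustrative (made-up) basis probabilities.
p_b = {"B1": 0.2, "B2": 0.3, "B3": 0.5}
p_x_given_b = {"B1": 0.9, "B2": 0.1, "B3": 0.4}

p = combine(p_b, p_x_given_b, ["B1", "B2"])
# (0.2*0.9 + 0.3*0.1) / (0.2 + 0.3) = 0.21 / 0.5 = 0.42
print(p)
```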

The conditional probabilities constructed in this way are meaningful, but if we are interested in some other set, Z, the conditional probability P(X|Z) could take a range of values. But then we need to reconsider decision making. Instead of maximising a probability (or utility), the following heuristics may apply:

  • If the range makes significant difference, try to get more precise data. This may be by taking more samples, or by refining the properties considered.
  • Consider the best outcome for the worst-case probabilities.
  • If the above is not acceptable, make some reasonable assumptions until there is an acceptable result possible.

For example, suppose that we have some urns, each containing a mix of balls, some of which are white. We can choose an urn and then pick a ball at random. We want white balls. What should we do? The conventional rule consists of assessing the proportion of white balls in each urn, and picking an urn with the most. This is uncontroversial if our assessments are reliable. But suppose we are faced with an urn with an unknown mix? Conventionally our assessment should not depend on whether we want to obtain or avoid a white ball. But if we want white balls the worst-case proportion is no white balls, and we avoid this urn, whereas if we want to avoid white balls the worst-case proportion is all white balls, and we again avoid this urn.

If our assessments are not biased then we would expect to do better with the conventional rule most of the time and in the long-run. For example, if the non-white balls are black, and urns are equally likely to be filled with black as white balls, then assessing that an urn with unknown contents has half white balls is justified. But in other cases we just don’t know, and choosing this urn we could do consistently badly. There is a difference between an urn whose contents are unknown, but for which you have good grounds for estimating proportion, and an urn where you have no grounds for assessing proportion.
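
The urn comparison can be sketched with interval-valued (‘imprecise’) probabilities; the intervals below are illustrative assumptions, not measurements:

```python
def worst_case_value(p_interval, want_white=True):
    """Value of an urn under the worst-case end of its white-ball interval.
    p_interval = (lower, upper) bounds on the proportion of white balls."""
    lo, hi = p_interval
    return lo if want_white else 1 - hi

known   = (0.5, 0.5)   # urn assessed, on good grounds, as half white
unknown = (0.0, 1.0)   # urn with no grounds for any assessment

# Whether we want white balls or want to avoid them, the worst case
# makes the unknown urn the one to avoid.
print(worst_case_value(known, True),  worst_case_value(unknown, True))
print(worst_case_value(known, False), worst_case_value(unknown, False))
```

Note how the precise urn keeps the same worst-case value either way, while the unknown urn’s worst case collapses to zero for both goals, which is exactly the asymmetry described above.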

If precise probabilities are to be the very guide to life, it had better be a dull life. For more interesting lives imprecise probabilities can be used to reduce the possibilities. It is often informative to identify worst-case options, but one can be left with genuine choices. Conventional rationality is the only way to reduce living to a formula: but is it such a good idea?

Dave Marsay

How can economics be a science?

This note is prompted by Thaler’s Nobel prize, the reaction to it, and attempts by mathematicians to explain both what they do do and what they could do. Briefly, mathematicians are increasingly employed to assist practitioners (such as financiers) to sharpen their tools and improve their results, in some pre-defined sense (such as making more profit). They are less used to sharpen core ideas, much less to challenge assumptions. This is unfortunate when tools are misused and mathematicians blamed. It is no good saying that mathematicians should not go along with such misuse, since the misuse is often not obvious without some (expensive) investigations, and in any case whistleblowers are likely to get shown the door (even if only for being inefficient).

Mainstream economics aspires to be a science in the sense of being able to make predictions, at least probabilistically. Some (mostly before 2007/8) claimed that it achieved this, because its methods were scientific. But are they? Keynes coined the term ‘pseudo-mathematical’ for the then mainstream practices, whereby mathematics was applied without due regard for the soundness of the application. Then, as now, the mathematics in itself is as much beyond doubt as anything can be. The problem is a ‘halo effect’ whereby the application is regarded as ‘true’ just because the mathematics is. It is like physics before Einstein, whereby some (such as Locke) thought that classical geometry must be ‘true’ as physics, largely because it was so true as mathematics and they couldn’t envisage an alternative.

From a logical perspective, all that the use of scientific methods can do is to make probabilistic predictions that are contingent on there being no fundamental change. In some domains (such as particle physics, cosmology) there have never been any fundamental changes (at least since soon after the big bang) and we may not expect any. But economics, as life more generally, seems full of changes.

Popper famously noted that proper science is in principle falsifiable. Many practitioners in science and science-like fields regard the aim of their domain as to produce ‘scientific’ predictions. They have had to change their theories in the past, and may have to do so again. But many still suppose that there is some ultimate ‘true’ theory, to which their theories are tending. But according to Popper this is not a ‘proper’ scientific belief. Following Keynes we may call it an example of ‘pseudo-science’: something that masquerades as a science but goes beyond its bounds.

One approach to mainstream economics, then, is to disregard the pseudo-scientific ideology and just take its scientific content. Thus we may regard its predictions as mere extrapolations, and look out for circumstances in which they may not be valid. (As Eddington did for cosmology.)

Mainstream economics depends heavily on two notions:

  1. That there is some pre-ordained state space.
  2. That transitions evolve according to fixed conditional probabilities.

For most of us, most of the time, fortunately, these seem credible locally and in the short term, but not globally in space-time. (At the time of writing it seems hard to believe that just after the big bang there were in any meaningful sense state spaces and conditional probabilities that are now being realised.) We might adjust the usual assumptions:

The ‘real’ state of nature is unknowable, but one can make reasonable observations and extrapolations that will be ‘good enough’ most of the time for most routine purposes.

This is true for hard and soft sciences, and for economics. What varies is the balance between the routine and the exceptional.

Keynes observed that some economic structures work because people expect them to. For example, gold tends to rise in price because people think of it as being relatively sound. Thus anything that has a huge effect on expectations can undermine any prior extrapolations. This might be a new product or service, an independence movement, a conflict or a cyber failing. These all have a structural impact on economies that can cascade. But will the effect dissipate as it spreads, or may it result in a noticeable shift? A mainstream economist would argue that all such impacts are probabilistic, and hence all that was happening was that we were observing new parts of the existing state space and new transitions. Even if we suppose for a moment that this is true, it is not a scientific belief, and hardly seems a useful way of thinking about potential and actual crises.

Mainstream economists suppose that people are ‘rational’, by which they mean that they act as if they are maximizing some utility, which is something to do with value and probability. But, even if the world is probabilistic, being rational is not necessarily scientific. For example, when a levee is built to withstand a ‘100 year storm’, this is scientific if it is clear that the claim is based on past storm data. But it is unscientific if there is an implicit claim that the climate cannot change. When building a levee it may be ‘rational’ to build it to withstand all but very improbable storms, but it is more sensible to add a margin and make contingency arrangements (as engineers normally do). In much of life it is common experience that the ‘scientific’ results aren’t entirely reliable, so it is ‘unscientific’ (or at least unreasonable) to rely on them totally.
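
The arithmetic behind the levee example, on its own probabilistic terms (the 50-year design life is an assumed figure for illustration):

```python
def p_at_least_one(annual_p, years):
    """Probability of at least one exceedance in `years` independent years."""
    return 1 - (1 - annual_p) ** years

# A '100 year storm' (annual probability 1/100) over an assumed 50-year design life:
print(round(p_at_least_one(0.01, 50), 3))   # about 0.395
# If the climate shifts and the annual probability merely doubles:
print(round(p_at_least_one(0.02, 50), 3))   # about 0.636
```

So even a ‘rational’ design is quite likely to be tested within its lifetime, and a modest change in the underlying climate moves the odds sharply, which is why the engineers’ margin and contingency arrangements are the sensible course.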

Much of this is bread-and-butter in disciplines other than economics, and I am not sure that what economists mostly need is to improve their mathematics: they need to improve their sciencey-ness, and then use mathematics better. But I do think that they need somehow to come to a better appreciation of the mathematics of uncertainty, beyond basic probability theory and its ramifications.

Dave Marsay



Why do people hate maths?

New Scientist 3141 (2 Sept 2017) has the cover splash ‘Your mathematical mind: Why do our brains speak the language of reality?’. The article (p 31) is titled ‘The origin of mathematics’.

I have made pedantic comments on previous articles on similar topics, to be told that the author’s intentions have been slightly skewed in the editing process. Maybe it has again. But some interesting (to me) points still arise.

Firstly, we are told that brain scans show that:

a network of brain regions involved in mathematical thought that was activated when mathematicians reflected on problems in algebra, geometry and topology, but not when they were thinking about non-mathsy things. No such distinction was visible in other academics. Crucially, this “maths network” does not overlap with brain regions involved in language.

It seems reasonable to suppose that many people do not develop such a maths capability from experience in ordinary life or non-mathsy subjects, and perhaps don’t really appreciate its significance. Such people would certainly find maths stressful, which may explain their ‘hate’. At least we can say – contradicting the cover splash – that most people lack a mathematical mind, which may explain the difficulties mathematicians have in communicating.

In addition, I have come across a few seemingly sensible people who may seem to hate maths, although I would rather say that they hate ‘pseudo-maths’. For example, it may be true that we have a better grasp on reality if we can think mathematically – as scientists and technologists routinely do – but it seems a huge jump – and misleading – to claim that mathematics is ‘the language of reality’ in any more objective sense. By pseudo-maths I mean something that appears to be maths (at least to the non-mathematician) but which uses ordinary reasoning to make bold claims (such as ‘is the language of reality’).

But there is a more fundamental problem. The article cites Ashby to the effect that ‘effective control’ relies on adequate models. Such models are of course computational and as such we rely on mathematics to reason about them. Thus we might say that mathematics is the language of effective control. If – as some seem to – we make a dichotomy between controllable and not controllable systems then mathematics is the pragmatic language of reality. Here we enter murky waters. For example, if reality is socially constructed then presumably pragmatic social sciences (such as economics) are necessarily concerned with control, as in their models. But one point of my blog is that the kind of maths that applies to control is only a small portion. There is at least the possibility that almost all things of interest to us as humans are better considered using different maths. In this sense it seems to me that some people justifiably hate control and hence related pseudo-maths. It would be interesting to give them a brain scan to see if their thinking appeared mathematical, or if they had some other characteristic networks of brain regions. Either way, I suspect that many problems would benefit from collaborations between mathematicians and those who hate pseudo-mathematics without necessarily being professional mathematicians. This seems to match my own experience.

Dave Marsay

Mathematical Modelling

Mathematics and modelling in particular is very powerful, and hence can be very risky if you get it wrong, as in mainstream economics. But is modelling inappropriate – as has been claimed – or is it just that it has not been done well enough?

As a mathematician who has dabbled in modelling and economics I thought I’d try my hand at modelling economies. What harm could there be?

My first notion is that actors’ activity is habitual.

My second is that habits persist until there is a ‘bad’ experience, in which case they are revised. What is taken account of, what counts as ‘bad’ and how habits are replaced or revised are all subject to meta-habits (habits about habits).

In particular, mainstream economists suppose that actors seek to maximise their utilities, and they never revise this approach. But this may be too restrictive.

Myself, I would add that most actors mostly seek to copy others and also tend to discount experiences and lessons identified by previous generations.

With some such ‘axioms’ (suitably formalised) as those above, one can predict booms and busts leading to new ‘epochs’ characterised by dominant theories and habits. For example, suppose that some actors habitually borrow as much as they can to invest in an asset (such as a house for rent) and the asset class performs well. Then they will continue in their habit, and others who have done less well will increasingly copy them, fuelling an asset price boom. But no asset class is worth an infinite amount, so the boom must end, resulting in disappointment and changes in habit, which may again be copied by those who are losing out on the asset class, giving a bust. Thus one has an ‘emergent behaviour’ that contradicts some of the implicit mainstream assumptions about rationality (such as ‘ergodicity’), and hence the possibility of meaningful ‘expectations’ and utility functions to be maximised. This is not to say that such things cannot exist, only that if they do exist it must be due to some economic law as yet unidentified, and we need an alternative explanation for booms and busts.
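
A minimal sketch of such ‘axioms’, with all the numbers made up: actors copy a successful buying habit, demand drives the price, and since no asset is worth an infinite amount a ceiling eventually triggers a bust and a revision of habits:

```python
def simulate(periods=60, n_actors=100):
    """Toy model: some actors habitually 'buy' the asset, the rest hold cash.
    Buying pushes the price up; while the asset does well, holders copy the
    buyers; past a ceiling the habit fails (the 'bad experience'), the price
    collapses and the habit is largely abandoned."""
    buyers, price, ceiling = 10, 1.0, 10.0
    prices, busts = [], 0
    for _ in range(periods):
        price *= 1 + 0.1 * buyers / n_actors               # demand fuels the boom
        prices.append(price)
        if price < ceiling:
            buyers = min(n_actors, buyers + max(1, buyers // 5))  # habit copied
        else:
            price *= 0.5                                   # the bust
            buyers = max(1, buyers // 10)                  # habits revised
            busts += 1
    return prices, busts

prices, busts = simulate()
print(busts, round(max(prices), 2))  # at least one boom-bust 'epoch' emerges
```

Nothing in the model is a long-run equilibrium: each ‘epoch’ of habit-copying carries the seeds of its own ending, which is the point being made above.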

What I take from this is that mathematical models seem possible and may even provide insights. I do not assume that a model that is adequate in the short-run will necessarily continue to be adequate, and my model shows how economic epochs can be self-destructing. To me, the problem in economics is not so much that it uses mathematics and in particular mathematical modelling but that it does so badly. My ‘axioms’ mimic the approach that Einstein took to physics: it replaces an absolutist model by a relativistic one, and shows that it makes a difference. In my model there are no magical ‘expectations’; rather, actors may have realistic habits and expectations, based on their experience and interpretation of the media and other sources, which may be ‘correct’ (or at least not falsified) in the short-run, but which cannot provide adequate predictions for the longer run. To survive a change of epochs our actors would need to be at least following some actors who were monitoring and thinking about the overall situation more broadly and deeply than those who focus on short run utility. (Something that currently seems lacking.)

David Marsay

Can polls be reliable?

Election polls in many countries have seemed unusually unreliable recently. Why? And can they be fixed?

The most basic observation is that if one has a random sample of a population in which x% has some attribute then it is reasonable to estimate that x% of the whole population has that attribute, and that this estimate will tend to be more accurate the larger the sample is. In some polls sample size can be an issue, but not in the main political polls.
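
The basic observation can be quantified by the usual margin of error for a sampled proportion, which shrinks only as one over the square root of the sample size:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A bigger sample narrows the interval, but only as 1/sqrt(n):
for n in (100, 1_000, 10_000):
    print(n, round(margin_of_error(0.5, n), 3))
```

At the sample sizes of the main political polls the statistical error is small, which is why the residual unreliability must come from elsewhere.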

A fundamental problem with most polls is that the ‘random’ sample may not be uniformly distributed, with some sub-groups over or under represented. Political polls have some additional issues, that are sometimes blamed:

  • People with certain opinions may be reluctant to express them, or may even mislead.
  • There may be a shift in opinions with time, due to campaigns or events.
  • Different groups may differ in whether they actually vote, for example depending on the weather.

I also think that in the UK the trend to postal voting may have confused things, as postal voters will have missed out on the later stages of campaigns, and on later events. (Which were significant in the UK 2017 general election.)

Pollsters have a lot of experience at compensating for these distortions, and are increasingly using ‘sophisticated mathematical tools’. How is this possible, and is there any residual uncertainty?

Back to mathematics, suppose that we have a science-like situation in which we know which factors (e.g. gender, age, social class ..) are relevant. With a large enough sample we can partition the results by combination of factors, measure the proportions for each combination, and then combine these proportions, weighting by the prevalence of the combinations in the whole population. (More sophisticated approaches are used for smaller samples, but they only reduce the statistical reliability.)
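
The weighting described above (often called post-stratification) can be sketched as follows; the groups and figures are invented for illustration:

```python
def poststratify(sample_props, population_weights):
    """Combine per-group poll proportions, weighting each group by its
    share of the whole population."""
    return sum(population_weights[g] * sample_props[g] for g in population_weights)

# Made-up example: support for a party, partitioned by age group.
sample_props = {"18-34": 0.60, "35-64": 0.45, "65+": 0.30}        # measured in the poll
population_weights = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # known population shares

# 0.30*0.60 + 0.50*0.45 + 0.20*0.30 = 0.465
print(poststratify(sample_props, population_weights))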

Systematic errors can creep in in two ways:

  1. Some ‘laws of politics’ (such as the effect of rain) or other heuristics used alongside the poll data (such as that the swing among postal votes will be similar to that for votes in person) may be wrong.
  2. An important factor is missed. (For example, people with teenage children or grandchildren may vote differently from their peers when student fees are an issue.)

These issues have analogues in the science lab. In the first case one is using the wrong theory to interpret the data, and so the results are corrupted. In the second case one has some unnoticed ‘uncontrolled variable’ that can really confuse things.

A polling method using fixed factors and laws will only be reliable when these reasonably accurately reflect the attributes of interest, and not when ‘the nature of politics’ is changing, as it often does and as it seems to be right now in North America and Europe. (According to game theory one should expect such changes when coalitions change or are under threat, as they are.) To do better, the polling organisation would need to understand the factors that the parties were bringing into play at least as well as the parties themselves, and possibly better. This seems unlikely, at least in the UK.

What can be done?

It seems to me that polls used to be relatively easy to interpret, possibly because they were simpler. Our more sophisticated contemporary methods make more detailed assumptions. To interpret them we would need to know what these assumptions were. We could then ‘aim off’, based on our own judgment. But this would involve pollsters in publishing some details of their methods, which they are naturally loth to do. So what could be done? Maybe we could have some agreed simple methods and publish findings as ‘extrapolations’ to inform debate, rather than predictions. We could then factor in our own assumptions. (For example, our assumptions about student turnout.)

So, I don’t think that we can expect reliable poll findings that are predictions, but possibly we could have useful poll findings that would inform debate and allow us to take our own views. (A bit like any ‘big data’.)

Dave Marsay


Mathematical modelling

I had the good fortune to attend a public talk on mathematical modelling, organised by the University of Birmingham (UK). The speaker, Dr Nira Chamberlain CMath FIMA CSci, is a council member of the appropriate institution, and so may reasonably be thought to be speaking for mathematicians generally.

He observed that there were many professional areas that used mathematics as a tool, and that they generally failed to see the need for professional mathematicians as such. He thought that mathematical modelling was one area where – at least for the more important problems – mathematicians ought to be involved. He gave examples of modelling, including one of the financial crisis.

The main conclusion seemed very reasonable, and in line with the beliefs of most ‘right thinking’ mathematicians. But on reflection, I wonder if my non-mathematician professional colleagues would accept it. In the 19th century professional mathematicians were proclaiming it a mathematical fact that the physical world conformed to classical geometry. On this basis, mathematicians do not seem to have any special ability to produce valid models. Indeed, in the run up to the financial crash there were too many professional mathematicians who were advocating some mainstream mathematical models of finance and economies in which the crash was impossible.

In Dr Chamberlain’s own model of the crash, it seems that deregulation and competition led to excessive risk taking, risks which eventually materialised. A colleague who is a professional scientist but not a professional mathematician has advised me that this general model was recognised by the UK at the time of our deregulation, but that it was assumed (as Greenspan did) that somehow some institution would step in to foreclose this excessive risk taking. To me, the key thing to note is that the risks being taken were systemic and not necessarily recognised by those taking them. To me, the virtue of a model does not just depend on it being correct in some abstract sense, but also on it ‘having traction’ with relevant policy and decision makers and takers. Thus, reflecting on the talk, I am left accepting the view of many of my colleagues that some mathematical models are too important to be left to mathematicians.

If we have a thesis and antithesis, then the synthesis that I and my colleagues have long come to is that any important mathematical model needs to be a collaborative endeavour, with mathematicians having a special role in challenging, interpreting and (potentially) developing the model, including developing (as Dr C said) new mathematics where necessary. A modelling team will often need mathematicians ‘on tap’ to apply various methods and theories, and this is common. But what is also needed is a mathematical insight into the appropriateness of these tools and the meaning of the results. This requires people who are more concerned with their mathematical integrity than with satisfying their non-mathematical pay-masters. It seems to me that these are a sub-set of those that are generally regarded as ‘professional’. How do we identify such people?

Dave Marsay 


Uncertainty is not just probability

I have just had published my paper, based on the discussion paper referred to in a previous post. In Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

Dave Marsay

Instrumental Probabilities

Reflecting on my recent contribution to the economics e-journal special issue on uncertainty (comments invited), I realised that, from a purely mathematical point of view, the current mainstream mathematical view, as expressed by Dawid, could be seen as a much more accessible version of Keynes’. But there is a difference in expression that can be crucial.

In Keynes’ view ‘probability’ is a very general term, so that it is always legitimate to ask about the probability of something. The challenge is to determine the probability, and in particular whether it is just a number. In some usages, as in Kolmogorov’s, the term probability is reserved for those cases where certain axioms hold. In such cases the answer to a request for a probability might be to say that there isn’t one. This seems safe even if it conflicts with the questioner’s presuppositions about the universality of probabilities. The instrumentalist view of Dawid, however, suggests that probabilistic methods are tools that can always be used. Thus the probability may exist even if it does not have the significance that one might think and, in particular, it may not be appropriate to use it for ‘rational decision making’.

I have often come across seemingly sensible people who use ‘sophisticated mathematics’ in strange ways. I think perhaps they take an instrumentalist view of mathematics as a whole, and not just of probability theory. This instrumentalist mathematics reminds me of Keynes’ ‘pseudo-mathematics’. But the key difference is that mathematicians, such as Dawid, know that the usage is only instrumentalist and that there are other questions to be asked. The problem is not the instrumentalist view as such, but the dogma (of at least some) that it is heretical to question widely used instruments.

The financial crises of 2007/8 were partly attributed by Lord Turner to the use of ‘sophisticated mathematics’. From Keynes’ perspective it was the use of pseudo-mathematics. My view is that if it is all you have then even pseudo-mathematics can be quite informative, and hence worthwhile. One just has to remember that it is not ‘proper’ mathematics. In Dawid’s terminology the problem seems to be the instrumental use of mathematics without any obvious concern for its empirical validity. Indeed, since his notion of validity concerns limiting frequencies, one might say that the problem was the use of an instrument that was stunningly inappropriate to the question at issue.

It has long seemed to me that a similar issue arises with many miscarriages of justice, intelligence blunders and significant policy mis-steps. In Keynes’ terms, people are relying on a theory that simply does not apply. In Dawid’s terms one can put it more bluntly: decision-takers were relying on the fact that something had a very high probability when they ought to have been paying more attention to the evidence in the actual situation, which showed that the probability was empirically invalid. It could even be that the thing with a high instrumental probability was very unlikely, all things considered.

Artificial Intelligence?

The subject of ‘Artificial Intelligence’ (AI) has long provided ample scope for long and inconclusive debates. Wikipedia seems to have settled on a view, which we may take as a straw man:

Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. [Dartmouth Conference, 1956]

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds. [John Searle’s straw-man hypothesis]

Readers of my blog will realise that I agree with Searle that his hypothesis is wrong, but for different reasons. It seems to me that mainstream AI (mAI) is about being able to take instruction. This is a part of learning, but by no means all of it. Thus – I claim – mAI is about a sub-set of intelligence. In many organisational settings it may be that sub-set which the organisation values. It may even be that an AI that ‘thought for itself’ would be a danger. For example, in old discussions about whether or not some type of AI could ever act as a G.P. (General Practitioner – first-line doctor), the underlying issue has been whether G.P.s ‘should’ think for themselves, or just apply their trained responses. My own experience is that sometimes G.P.s doubt the applicability of what they have been taught, and that sometimes this is ‘a good thing’. In effect, we sometimes want to train people, or otherwise arrange for them to react in predictable ways, as if they were machines. mAI can create better machines, and thus has many key roles to play. But between mAI and ‘superhuman intelligence’ there seems to be an important gap: the kind of intelligence that makes us human. Can machines display such intelligence? (Can people, in organisations that treat them like machines?)

One successful mainstream approach to AI is to work with probabilities, such as P(A|B) (‘the probability of A given B’), making extensive use of Bayes’ rule, and such an approach is sometimes thought to be ‘logical’, ‘mathematical’, ‘statistical’ and ‘scientific’. But, mathematically, we can generalise the approach by taking account of some context, C, using Jack Good’s notation P(A|B:C) (‘the probability of A given B, in the context C’). AI that is explicitly or implicitly statistical is more successful when it operates within a definite fixed context, C, for which the appropriate probabilities are (at least approximately) well-defined and stable. For example, training within an organisation will typically seek to enable staff (or machines) to characterise their job sufficiently well for it to become routine. In practice ‘AI’-based machines often show a little intelligence beyond that described above: they will monitor the situation and ‘raise an exception’ when the situation is too far outside what they ‘expect’. But this just points to the need for a superior intelligence to resolve the situation. Here I present some thoughts.
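To make the point concrete, here is a minimal sketch (not from any particular AI system; all numbers are illustrative assumptions) of Good’s context-dependent probability P(A|B:C): Bayes’ rule applied with parameters that are only well-defined relative to an explicit context C. The same evidence B yields quite different probabilities in different contexts.

```python
# A minimal sketch of context-dependent probability P(A|B:C).
# All parameter values are illustrative assumptions, not data.

def posterior(prior, likelihood, false_alarm):
    """P(A|B) via Bayes' rule, given P(A), P(B|A) and P(B|not A)."""
    evidence = likelihood * prior + false_alarm * (1.0 - prior)
    return likelihood * prior / evidence

# The same evidence B, assessed in two hypothetical contexts C:
contexts = {
    "stable": {"prior": 0.01, "likelihood": 0.9, "false_alarm": 0.05},
    "crisis": {"prior": 0.20, "likelihood": 0.9, "false_alarm": 0.30},
}

for name, c in contexts.items():
    p = posterior(c["prior"], c["likelihood"], c["false_alarm"])
    print(f"P(A|B:{name}) = {p:.3f}")
```

A machine that only computes within a fixed C is applying Bayes’ rule; deciding that the context has shifted from ‘stable’ to ‘crisis’ – so that the old probabilities no longer apply – is the different kind of intelligence discussed above.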

When we state ‘P(A|B)=p’ we are often not just asserting the probability relationship: it is usually implicit that ‘B’ is the appropriate condition to consider if we are interested in ‘A’. Contemporary mAI usually takes the conditions as given, and computes ‘target’ probabilities from given probabilities. Whilst this requires a kind of intelligence, it seems to me that humans will sometimes also revise the conditions being considered, and this requires a different type of intelligence (not just the ability to apply Bayes’ rule). For example, astronomers who refine the value of relevant parameters are displaying some intelligence and are ‘doing science’, but those first in the field, who determined which parameters are relevant, employed a different kind of intelligence and were doing a different kind of science. What we need, at least, is an appropriate way of interpreting and computing ‘probability’ to support this enhanced intelligence.

The notions of Whitehead, Keynes, Russell, Turing and Good seem to me a good start, albeit ones that need explaining better – hence this blog. Economics may provide an example. The notion of probability routinely used there would be appropriate if we were certain about some fundamental assumptions. But are we? At least we should realise that it is not logical to attempt to justify those assumptions by reasoning with concepts that implicitly rely on them.

Dave Marsay

The limits of (atomistic) mathematics

Lars Syll draws attention to a recent seminar on ‘Confronting economics’ by Tony Lawson, as part of the Bloomsbury Confrontations at UCLU.

If you replace his every use of the term ‘mathematics’ by something like ‘atomistic mathematics’ then I would regard this talk as not only very important, but true. Tony approvingly quotes Whitehead on challenging implicit assumptions. Is his own implicit assumption that mathematics is ‘atomistic’? What about Whitehead’s own mathematics, or that of Russell, Keynes and Turing? He (Tony) seems to suppose that mathematics can’t deal with emergent properties. So what is Whitehead’s work on Process, Keynes’ work on uncertainty, Russell’s work on knowledge or Turing’s work on morphogenesis all about?

Dave Marsay


Evolution of Pragmatism?

A common ‘pragmatic’ approach is to keep doing what you normally do until you hit a snag, and (only) then to reconsider. Whereas Lamarckian evolution would lead to the ‘survival of the fittest’, with everyone adapting to the current niche and tending to yield a homogeneous population, Darwinian evolution has survival of the maximal variety of all those who can survive, with characteristics only dying out when they are not viable. This evolution of diversity makes for greater resilience, which is maybe why ‘pragmatic’ Darwinian evolution has evolved.

The products of evolution are generally also pragmatic, in that they have virtually pre-programmed behaviours which ‘unfold’ in the environment. Plants grow and procreate, while animals have a richer variety of behaviours, but still tend just to do what they do. But humans can ‘think for themselves’ and be ‘creative’, and so have the possibility of not being just pragmatic.

I was at a (very good) lecture by Alice Roberts last night on the evolution of technology. She noted that many creatures use tools, but humans seem to be unique in that at some critical population mass the manufacture and use of tools becomes sustained through teaching, copying and co-operation. It occurred to me that much of this could be pragmatic. After all, until recently development has been very slow, and so may well have been driven by specific practical problems rather than continual searching for improvements. Also, the more recent upswing of innovation seems to have been associated with an increased mixing of cultures and decreased intolerance for people who think for themselves.

In biological evolution mutations can lead to innovation, so evolution is not entirely pragmatic, but their impact is normally limited by the need to fit the current niche, so evolution typically appears to be pragmatic. The role of mutations is more to increase the diversity of behaviours within the niche, rather than innovation as such.

In social evolution there will probably always have been mavericks and misfits, but the social pressure has been towards conformity. I conjecture that such an environment has favoured a habit of pragmatism. These days, it seems to me, a better approach would be more open-minded, inclusive and exploratory, but possibly we do have a biologically-conditioned tendency to be overly pragmatic: to mistake conventions for facts and heuristics for laws of nature, and not to challenge widely-held beliefs.

The financial crash of 2008 was blamed by some on mathematics. This seems ridiculous. But the post-Cold War world was largely one of growth, with the threat of nuclear devastation much diminished, so it might be expected that pragmatism would be favoured. Thus powerful tools (mathematical or otherwise) could be taken up and exploited pragmatically, without enough consideration of the potential dangers. It seems to me that this problem is much broader than economics, but I wonder what the cure is, apart from better education and more enlightened public debate?

Dave Marsay



Traffic bunching

In heavy traffic, such as on motorways in rush-hour, there is often oscillation in speed and there can even be mysterious ‘emergent’ halts. The use of variable speed limits can result in everyone getting along a given stretch of road more quickly.

Soros (worth reading) has written an article that suggests that this is all to do with the humanity and ‘thinking’ of the drivers, and that something similar is the case for economic and financial booms and busts. This might seem to indicate that ‘mathematical models’ were a part of our problems, not solutions. So I suggest the following thought experiment:

Suppose a huge number of identical driverless cars with deterministic control functions all try to go along the same road, seeking to optimise performance in terms of ‘progress’ and fuel economy. Will they necessarily succeed, or might there be some ‘tragedy of the commons’ that can only be resolved by some overall regulation? What are the critical factors? Is the nature of the ‘brains’ one of them?
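One hedged way to explore this thought experiment is with a standard car-following model – here the optimal-velocity model of Bando et al., with illustrative parameters chosen (sensitivity below the stability threshold) so that the uniform flow is unstable. Identical, fully deterministic ‘drivers’ on a ring road, given one tiny perturbation, develop a stop-and-go wave with no human psychology involved.

```python
# Toy sketch: identical deterministic cars on a ring road, following the
# optimal-velocity car-following model. All parameters are illustrative.
import math

N, L = 20, 40.0                  # cars, ring-road length (uniform headway = 2)
a, dt, steps = 1.0, 0.1, 10000   # driver sensitivity, time step, iterations

def V(h):
    """Desired speed as a function of headway (optimal velocity function)."""
    return math.tanh(h - 2.0) + math.tanh(2.0)

x = [i * L / N for i in range(N)]  # equally spaced cars
v = [V(L / N)] * N                 # all start at the uniform-flow speed
x[0] += 0.1                        # one tiny perturbation

for _ in range(steps):
    h = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]  # headways on the ring
    v = [vi + dt * a * (V(hi) - vi) for vi, hi in zip(v, h)]
    x = [(xi + dt * vi) % L for xi, vi in zip(x, v)]

# With a < 2*V'(2) the uniform flow is linearly unstable, and the speeds
# spread out into a stop-and-go wave despite identical controllers:
print(f"speed spread after t={steps * dt:.0f}: {max(v) - min(v):.2f}")
```

On this (assumed) model the answer to the thought experiment is that identical deterministic ‘brains’ are not enough: bunching emerges from the interaction itself, which is why some overall regulation (such as variable speed limits) can help everyone.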

Are these problems the preserve of psychologists, or does mathematics have anything useful to say?

Dave Marsay