Applications of Statistics

Lars Syll has commented on a book by David Salsburg, criticising workaday applications of statistics. Lars gives this quote:

Kolmogorov established the mathematical meaning of probability: Probability is a measure of sets in an abstract space of events.

This is not quite right.

  • Kolmogorov established a possible meaning, not ‘the’ meaning. (Actually Wittgenstein anticipated him.)
  • Even taking this theory, it is not clear why the space should be ‘measurable’. More generally one has ‘upper’ and ‘lower’ measures, which need not be equal. One can extend the more familiar notions of probability, entropy, information and statistics to such measures. Such extended notions seem more credible (see the sketch after this list).
  • In practice one often has some ‘given data’ which is at least slightly distant from the ‘real’ ‘events’ of interest. The data space is typically a rather tame ‘space’, so that a careful use of statistics is appropriate. But one still has the problem of ‘lifting’ the results to the ‘real events’.
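To make the second point concrete, here is a minimal sketch in Python (my own illustration, with invented numbers, not from Syll or Salsburg). One standard way ‘upper’ and ‘lower’ measures arise is when our knowledge is represented by a set of candidate probability distributions rather than a single Kolmogorov measure: the lower and upper probabilities of an event are then its worst-case and best-case values over that set, and they need not be equal.

```python
# Toy 'upper' and 'lower' probabilities over a finite event space {a, b, c}.
# The three candidate distributions are invented for illustration only.

credal_set = [
    {"a": 0.2, "b": 0.3, "c": 0.5},
    {"a": 0.4, "b": 0.1, "c": 0.5},
    {"a": 0.3, "b": 0.3, "c": 0.4},
]

def lower_prob(event, dists):
    """Worst-case probability of the event over all candidate distributions."""
    return min(sum(d[x] for x in event) for d in dists)

def upper_prob(event, dists):
    """Best-case probability of the event over all candidate distributions."""
    return max(sum(d[x] for x in event) for d in dists)

event = {"a", "b"}
print(lower_prob(event, credal_set))  # 0.5
print(upper_prob(event, credal_set))  # 0.6
```

A single measurable space forces these two numbers to coincide; dropping that assumption leaves a gap between them, which is one way of representing genuine uncertainty.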

These remarks seem to cover the critiques of Syll and Salsburg, but are more nuanced. Statistical results, like any mathematics, need to be interpreted with care. But, depending on which of the above remarks apply, the results may be more or less easy to interpret: not all naive statistics are equally dubious!

Dave Marsay


AI pros and cons

Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher, The Metamorphosis, The Atlantic, August 2019.

AI will bring many wonders. It may also destabilize everything from nuclear détente to human friendships. We need to think much harder about how to adapt.

The authors are looking for comments. My initial reaction is here. I hope to say more. Meanwhile, I’d appreciate your reactions.


Dave Marsay

What logical term or concept ought to be more widely known?

Various, What scientific term or concept ought to be more widely known?, Edge, 2017.

INTRODUCTION: SCIENTIA

Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. …

Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.

Contributions

As against others on:

(This is as far as I’ve got.)

Comment

I’ve grouped the contributions according to whether or not I think they give due weight to the notion of uncertainty as expressed in my blog. Interestingly, Steven Pinker seems not to give due weight to it in his article, whereas he is credited by Nicholas G. Carr with some profound insights (in the first of the second batch). So maybe I am not reading them right.

My own suggestion would be Turing’s theory of ‘Morphogenesis’. Its particular predictions seem to have been confirmed ‘scientifically’, but it is essentially a logical / mathematical theory. If, as the introduction suggests, science is “reliable methods for obtaining knowledge”, then it seems to me that logic and mathematics are more reliable than empirical methods, and deserve some special recognition. Although I must concede that it may be hard to tell logic from pseudo-logic, and that unless you can do so, my distinction is potentially dangerous.

Morphogenesis

The second law of thermodynamics, and much common-sense rationality, assume a situation in which the law of large numbers applies. But Turing adds to the second law’s notion of random dissipation a notion of relative structuring (as in gravity) to show that ‘critical instabilities’ are inevitable. These are inconsistent with the law of large numbers, so the assumptions of the second law of thermodynamics (and much else) cannot be true. The universe cannot be ‘closed’ in the sense the second law requires.
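For the mathematically inclined, here is the standard textbook sketch of Turing’s argument (my summary, not from the Edge piece). Take a two-species reaction-diffusion system

$$\frac{\partial u}{\partial t} = f(u,v) + D_u \nabla^2 u, \qquad \frac{\partial v}{\partial t} = g(u,v) + D_v \nabla^2 v,$$

with a homogeneous steady state that is stable in the absence of diffusion ($\operatorname{tr} J < 0$ and $\det J > 0$ for the Jacobian $J$ of $(f,g)$). Unequal diffusion rates can nevertheless destabilise it: perturbations at some finite wavelength grow whenever

$$D_v f_u + D_u g_v > 2\sqrt{D_u D_v \det J},$$

where the subscripts denote partial derivatives at the steady state. Pure dissipation would smooth everything towards uniformity; dissipation plus differential structuring produces the patterned ‘critical instability’ referred to above.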

Implications

The assumptions of the second law seem to leave no room for free will, hence no reason to believe in our agency, and hence no point in any of the contributions to Edge: things are what they are and we do what we do. But Pinker does not go so far: he simply notes that if things inevitably degrade, we do not need to beat ourselves up or look for scapegoats when things go wrong. But this can be true even if the second law does not apply. If we take Turing seriously, then a seemingly permanent status quo can contain the causes of its own destruction, so that turning a blind eye and doing nothing can mean sleep-walking to disaster. Pinker concludes:

[An] underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.

This would seem to follow more clearly from the theory of morphogenesis than from the second law. Turing’s theory also goes some way towards suggesting, or even explaining, the items in the second batch. So I commend it.

Dave Marsay


Why do people hate maths?

New Scientist 3141 (2 Sept 2017) has the cover splash ‘Your mathematical mind: Why do our brains speak the language of reality?’. The article (p 31) is titled ‘The origin of mathematics’.

I have made pedantic comments on previous articles on similar topics, only to be told that the authors’ intentions had been slightly skewed in the editing process. Maybe that has happened again. But some interesting (to me) points still arise.

Firstly, we are told that brain scans show:

a network of brain regions involved in mathematical thought that was activated when mathematicians reflected on problems in algebra, geometry and topology, but not when they were thinking about non-mathsy things. No such distinction was visible in other academics. Crucially, this “maths network” does not overlap with brain regions involved in language.

It seems reasonable to suppose that many people do not develop such a maths capability from experience in ordinary life or non-mathsy subjects, and perhaps don’t really appreciate its significance. Such people would certainly find maths stressful, which may explain their ‘hate’. At least we can say – contradicting the cover splash – that most people lack a mathematical mind, which may explain the difficulties mathematicians have in communicating.

In addition, I have come across a few seemingly sensible people who seem to hate maths, although I would rather say that they hate ‘pseudo-maths’. For example, it may be true that we have a better grasp on reality if we can think mathematically – as scientists and technologists routinely do – but it seems a huge jump – and misleading – to claim that mathematics is ‘the language of reality’ in any more objective sense. By pseudo-maths I mean something that appears to be maths (at least to the non-mathematician) but which uses ordinary reasoning to make bold claims (such as ‘is the language of reality’).

But there is a more fundamental problem. The article cites Ashby to the effect that ‘effective control’ relies on adequate models. Such models are of course computational, and as such we rely on mathematics to reason about them. Thus we might say that mathematics is the language of effective control. If – as some seem to – we make a dichotomy between controllable and uncontrollable systems, then mathematics is the pragmatic language of reality. Here we enter murky waters. For example, if reality is socially constructed then presumably pragmatic social sciences (such as economics) are necessarily concerned with control, as in their models. But one point of my blog is that the kind of maths that applies to control is only a small portion of the whole. There is at least the possibility that almost all things of interest to us as humans are better considered using different maths.

In this sense it seems to me that some people justifiably hate control and hence the related pseudo-maths. It would be interesting to give them a brain scan, to see whether their thinking appeared mathematical, or whether they had some other characteristic networks of brain regions. Either way, I suspect that many problems would benefit from collaborations between mathematicians and those who hate pseudo-maths without necessarily being professional mathematicians. This seems to match my own experience.
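Ashby’s point about adequate models is easy to illustrate. The following is a toy sketch in Python (my own, not from the article): a one-step controller that drives its error to zero when its model of the system is adequate, and destabilises the very same system when the model is wrong.

```python
# Toy illustration of 'effective control relies on adequate models'.
# The plant and the gains are invented for illustration.

def plant(x, u):
    """The true system: the control input acts with gain 2."""
    return x + 2.0 * u

def controller(x, target, assumed_gain):
    """Model-based control: choose u to cancel the error in one step,
    according to the controller's *assumed* gain."""
    return (target - x) / assumed_gain

def run(assumed_gain, steps=10):
    x = 10.0
    for _ in range(steps):
        x = plant(x, controller(x, target=0.0, assumed_gain=assumed_gain))
    return x

print(run(assumed_gain=2.0))  # adequate model: x is driven to 0.0
print(run(assumed_gain=0.5))  # inadequate model: |x| grows threefold each step
```

The mathematics of such controllable systems is well understood; the question raised above is how much of what matters to us behaves like this at all.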

Dave Marsay

Mathematical modelling

I had the good fortune to attend a public talk on mathematical modelling, organised by the University of Birmingham (UK). The speaker, Dr Nira Chamberlain CMath FIMA CSci, is a council member of the appropriate institution, and so may reasonably be thought to be speaking for mathematicians generally.

He observed that there were many professional areas that used mathematics as a tool, and that they generally failed to see the need for professional mathematicians as such. He thought that mathematical modelling was one area where – at least for the more important problems – mathematicians ought to be involved. He gave examples of modelling, including one of the financial crisis.

The main conclusion seemed very reasonable, and in line with the beliefs of most ‘right-thinking’ mathematicians. But on reflection, I wonder if my non-mathematician professional colleagues would accept it. In the 19th century professional mathematicians were proclaiming it a mathematical fact that the physical world conformed to classical geometry. On this basis, mathematicians do not seem to have any special ability to produce valid models. Indeed, in the run-up to the financial crash there were too many professional mathematicians advocating mainstream mathematical models of finance and economies in which the crash was impossible.

In Dr Chamberlain’s own model of the crash, it seems that deregulation and competition led to excessive risk-taking, and the risks eventually materialised. A colleague who is a professional scientist but not a professional mathematician has advised me that this general model was recognised by the UK at the time of our deregulation, but that it was assumed (as Greenspan assumed) that somehow some institution would step in to foreclose this excessive risk-taking. To me, the key thing to note is that the risks being taken were systemic and not necessarily recognised by those taking them. The virtue of a model does not just depend on its being correct in some abstract sense, but also on its ‘having traction’ with relevant policy and decision makers and takers. Thus, reflecting on the talk, I am left accepting the view of many of my colleagues that some mathematical models are too important to be left to mathematicians.

If we have a thesis and antithesis, then the synthesis that I and my colleagues have long come to is that important mathematical modelling needs to be a collaborative endeavour, with mathematicians having a special role in challenging, interpreting and (potentially) developing the model, including developing (as Dr C said) new mathematics where necessary. A modelling team will often need mathematicians ‘on tap’ to apply various methods and theories, and this is common. But what is also needed is mathematical insight into the appropriateness of these tools and the meaning of the results. This requires people who are more concerned with their mathematical integrity than with satisfying their non-mathematical paymasters. It seems to me that these are a subset of those who are generally regarded as ‘professional’. How do we identify such people?

Dave Marsay 


The limits of (atomistic) mathematics

Lars Syll draws attention to a recent seminar on ‘Confronting economics’ by Tony Lawson, as part of the Bloomsbury Confrontations at UCLU.

If you replace his every use of the term ‘mathematics’ by something like ‘atomistic mathematics’ then I would regard this talk as not only very important, but true. Tony approvingly quotes Whitehead on challenging implicit assumptions. Is his own implicit assumption that mathematics is ‘atomistic’? What about Whitehead’s own mathematics, or that of Russell, Keynes and Turing? He (Tony) seems to suppose that mathematics can’t deal with emergent properties. So what are Whitehead’s work on process, Keynes’s work on uncertainty, Russell’s work on knowledge and Turing’s work on morphogenesis all about?

Dave Marsay


Are more intelligent people more biased?

It has been claimed that:

U.S. intelligence agents may be more prone to irrational inconsistencies in decision making compared to college students and post-college adults … .

This is scary, if unsurprising to many. Perhaps more surprisingly:

Participants who had graduated college seemed to occupy a middle ground between college students and the intelligence agents, suggesting that people with more “advanced” reasoning skills are also more likely to show reasoning biases.

It seems as if there is some serious mis-education in the US. But what is it?

The above conclusions are based on responses to the following two questions:

1. The U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Do you: (a) Save 200 people for sure, or (b) choose the option with 1/3 probability that 600 will be saved and a 2/3 probability no one will be saved?

2. In the same scenario, do you (a) pick the option where 400 will surely die, or instead (b) a 2/3 probability that all 600 will die and a 1/3 probability no one dies?

You might like to think about your answers to the above, before reading on.

.

.

.

.

.

The paper claims that:

Notably, the different scenarios resulted in the same potential outcomes — the first option in both scenarios, for example, has a net result of saving 200 people and losing 400.

Is this what you thought? You might like to re-read the questions and reconsider your answer, before reading on.

.

.

.

.

.

The questions may appear to contain statements of fact, that we are entitled to treat as ‘given’. But in real-life situations we should treat such questions as utterances, and use the appropriate logics. This may give the same result as taking them at face value – or it may not.

It is (sadly) probably true that if this were a UK school examination question then the appropriate logic would be (1) to treat the statements ‘at face value’, and (2) to assume that if 200 people will be saved ‘for sure’ then exactly 200 people will be saved, no more. On the other hand, this is just the kind of question that I ask mathematics graduates, to check that they have an adequate understanding of the issues before advising decision-takers. In the questions as set, the (b) options are the same, but (1a) is preferable to (2a), unless one is in the very rare situation of knowing exactly how many will die. With this interpretation, the more education and the more experience, the better the decisions – even in the US 😉
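For what it is worth, here is the arithmetic of the two readings as a minimal Python sketch (it merely restates the numbers in the questions):

```python
# Face-value reading: all four options have the same expected number saved.
ev_1a = 200                        # '200 saved for sure'
ev_1b = (1/3) * 600 + (2/3) * 0    # 1/3 chance all 600 saved
ev_2a = 600 - 400                  # '400 will surely die'
ev_2b = (1/3) * 600 + (2/3) * 0    # 1/3 chance no one dies
print(ev_1a, ev_1b, ev_2a, ev_2b)  # 200 200.0 200 200.0

# 'At least' reading: (1a) promises that at least 200 are saved (possibly
# more), while (2a) fixes the deaths at exactly 400, so (1a) weakly
# dominates (2a); the (b) options are unchanged.
min_saved_1a, max_saved_1a = 200, 600
min_saved_2a, max_saved_2a = 200, 200
```

On the face-value reading the framings are equivalent; on the ‘at least’ reading, preferring (1a) to (2a) is not a bias but a sound decision.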

It would be interesting to repeat the experiment with less ambiguous wording. Meanwhile, I hope that intelligence agents are not being re-educated. Or have I missed something?

Also

Kahneman’s Thinking, fast and slow has a similar example, in which we are given ‘exact scientific estimates’ of probable outcomes, avoiding the above ambiguity. This might be a good candidate experimental question.

Kahneman’s question is not without its own subtleties, though. It concerns the efficacy of ‘programs to combat disease’. It seems to me that if I was told that a vaccine would save 1/3 of the lives, I would suppose that it had been widely tested, and that the ‘scientific’ estimate was well founded. On the other hand, if I was told that there was a 2/3 chance of the vaccine being ineffective I would suppose that it hadn’t been tested adequately, and the ‘scientific’ estimate was really just an informed guess. In this case, I would expect the estimate of efficacy to be revised in the light of new information. It could even be that while some scientist has made an honest estimate based on the information that they have, some other scientist (or technician) already knows that the vaccine is ineffective. A program based on such a vaccine would be more complicated and ‘risky’ than one based on a well-founded estimate, and so I would be reluctant to recommend it. (Ideally, I would want to know a lot more about how the estimates were arrived at, but if pressed for a quick decision, this is what I would do.)

Could the framing make a difference? In one case, we are told that, ‘scientifically’, 200 people will be saved. But scientific conclusions always depend on assumptions, so really one should say ‘if …. then 200 will be saved’. My experience is that otherwise the outcome should not be expected: saving 200 is the best that should be expected. In the other case we are told that ‘400 will die’. This seems to me to be a very odd thing to say. From a logical perspective one would like to understand the circumstances in which someone would put it like this. I would be suspicious, and might well (‘irrationally’) avoid a program described in that way.

Addenda

The example also shows a common failing: assuming that the utility is proportional to lives lost. Suppose that when we are told that lives will be ‘saved’ we assume that we will get credit; then we might take the utility from saving lives to be the number of lives saved, but with the ‘kudos’ capped at 250 lives saved. In this case, it is rational to save 200 ‘for sure’, as the expected credit from taking a risk is very much lower. On the other hand, if we are told that 400 lives will be ‘lost’ we might assume that we will be blamed, and take the utility to be minus the lives lost, limited at -10. In this case it is rational to take a risk, as we then have some chance of avoiding the worst-case utility, whereas if we went for the sure option we would be certain to suffer it.
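A minimal sketch in Python of this arithmetic (the caps of 250 and 10 are the invented ones above):

```python
def credit(saved):
    """Utility in the 'saved' framing: one unit per life saved, capped at 250."""
    return min(saved, 250)

def blame(lost):
    """Utility in the 'lost' framing: minus one per life lost, floored at -10."""
    return max(-lost, -10)

# 'Saved' framing: the sure option wins.
eu_sure   = credit(200)                                # 200
eu_gamble = (1/3) * credit(600) + (2/3) * credit(0)    # ~83.3
print(eu_sure, eu_gamble)

# 'Lost' framing: the gamble wins.
eu_sure   = blame(400)                                 # -10
eu_gamble = (1/3) * blame(0) + (2/3) * blame(600)      # ~-6.7
print(eu_sure, eu_gamble)
```

With these utilities the observed pattern of choices is entirely rational, which is the point of the addendum.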

These kinds of asymmetric utilities may be just the kind that experts experience. More study required?


Dave Marsay