Applications of Statistics

Lars Syll has commented on a book by David Salsburg that criticises workaday applications of statistics. Lars offers this quote:

Kolmogorov established the mathematical meaning of probability: Probability is a measure of sets in an abstract space of events.

This is not quite right.

  • Kolmogorov established a possible meaning, not ‘the’ meaning. (Actually Wittgenstein anticipated him.)
  • Even taking this theory, it is not clear why the space should be ‘measurable’. More generally one has ‘upper’ and ‘lower’ measures, which need not be equal. One can extend the more familiar notions of probability, entropy, information and statistics to such measures. Such extended notions seem more credible.
  • In practice one often has some ‘given data’ which is at least slightly distant from the ‘real’ ‘events’ of interest. The data space is typically a rather tame ‘space’, so that a careful use of statistics is appropriate. But one still has the problem of ‘lifting’ the results to the ‘real events’.

These remarks seem to cover the critiques of Syll and Salsburg, but are more nuanced. Statistical results, like any mathematics, need to be interpreted with care. But, depending on which of the above remarks apply, the results may be more or less easy to interpret: not all naive statistics are equally dubious!

Dave Marsay

AI pros and cons

Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher, The Metamorphosis, Atlantic, August 2019.

AI will bring many wonders. It may also destabilize everything from nuclear détente to human friendships. We need to think much harder about how to adapt.

The authors are looking for comments. My initial reaction is here. I hope to say more. Meanwhile, I’d appreciate your reactions.

Dave Marsay

What logical term or concept ought to be more widely known?

Various authors, What scientific term or concept ought to be more widely known? Edge, 2017.

INTRODUCTION: SCIENTIA

Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. …

Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.

Contributions

As against others on:

(This is as far as I’ve got.)

Comment

I’ve grouped the contributions according to whether or not I think they give due weight to the notion of uncertainty as expressed in my blog. Interestingly Steven Pinker seems not to give due weight in his article, whereas he is credited by Nicholas G. Carr with some profound insights (in the first of the second batch). So maybe I am not reading them right.

My own suggestion would be Turing’s theory of ‘Morphogenesis’. The particular predictions seem to have been confirmed ‘scientifically’, but it is essentially a logical / mathematical theory. If, as the introduction suggests, science is “reliable methods for obtaining knowledge” then it seems to me that logic and mathematics are more reliable than empirical methods, and deserve some special recognition. Though I must concede that it may be hard to tell logic from pseudo-logic, and that unless you can do so my distinction is potentially dangerous.

Morphogenesis

The second law of thermodynamics, and much common-sense rationality, assumes a situation in which the law of large numbers applies. But Turing adds to the second law’s notion of random dissipation a notion of relative structuring (as in gravity) to show that ‘critical instabilities’ are inevitable. These are inconsistent with the law of large numbers, so the assumptions of the second law of thermodynamics (and much else) cannot be true. The universe cannot be ‘closed’ in the second law’s sense.
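Turing’s point can be sketched numerically. The following is my own illustration, not from the post: a two-component reaction system that is stable on its own (a seeming ‘status quo’) acquires growing modes once its components diffuse at different rates. The Jacobian and diffusion coefficients are illustrative choices.

```python
import numpy as np

def growth_rate(J, Du, Dv, k):
    """Largest real part of the eigenvalues of the linearised
    reaction-diffusion system at spatial wavenumber k."""
    A = J - np.diag([Du, Dv]) * k ** 2
    return np.linalg.eigvals(A).real.max()

# Illustrative reaction Jacobian: stable without diffusion
# (negative trace, positive determinant).
J = np.array([[1.0, -1.0],
              [2.0, -1.5]])
Du, Dv = 0.05, 1.0  # activator diffuses slowly, inhibitor quickly

ks = np.linspace(0.0, 10.0, 500)
rates = [growth_rate(J, Du, Dv, k) for k in ks]

# The well-mixed state is stable (negative growth at k = 0), yet some
# finite wavelengths grow: a 'critical instability' of the uniform state.
print(f"growth at k=0: {rates[0]:.3f}, max over k: {max(rates):.3f}")
```

The positive maximum at a finite wavenumber is the linear signature of the pattern formation Turing analysed in his 1952 morphogenesis paper.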

Implications

The assumptions of the second law seem to leave no room for free will, hence no reason to believe in our agency, and hence no point in any of the contributions to Edge: they are what they are and we do what we do. But Pinker does not go so far: he simply notes that if things inevitably degrade we do not need to beat ourselves up, or look for scapegoats when things go wrong. But this can be true even if the second law does not apply. If we take Turing seriously then a seemingly permanent status quo can contain the reasons for its own destruction, so that turning a blind eye and doing nothing can mean sleep-walking to disaster. Pinker concludes:

[An] underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.

This would seem to follow more clearly from the theory of morphogenesis than the second law. Turing’s theory also goes some way to suggesting or even explaining the items in the second batch. So, I commend it.

Dave Marsay


Heuristics or Algorithms: Confused?

The Editor of the New Scientist (Vol. 3176, 5 May 2018, Letters, p54), responding to Adrian Bowyer’s wish to distinguish between ‘heuristics’ and ‘algorithms’ in AI, opined that:

This distinction is no longer widely made by practitioners of the craft, and we have to follow language as it is used, even when it loses precision.

Sadly, I have to accept that AI folk tend to consistently fail to respect a widely held distinction, but it seems odd that their failure has led to an obligation on the New Scientist – which has a much broader readership than just AI folk. I would agree that in addressing audiences that include significant sectors that fail to make some distinction, we need to be aware of the fact, but if the distinction is relevant – as Bowyer argues – surely we should explain it.

According to the freedictionary:

Heuristic: adj 1. Of or relating to a usually speculative formulation serving as a guide in the investigation or solution of a problem.

Algorithm: n: A finite set of unambiguous instructions that, given some set of initial conditions, can be performed in a prescribed sequence to achieve a certain goal and that has a recognizable set of end conditions.

It also offers this quote:

heuristic: of or relating to or using a general formulation that serves to guide investigation

algorithmic: of or relating to or having the characteristics of an algorithm

But perhaps this is not clear?

AI practitioners routinely apply algorithms as heuristics, in the same way that a bridge designer may routinely use a computer program. We might reasonably regard a bridge-designing app as good if it correctly implements best practice in bridge-building, but this is not to say that a bridge designed using it would necessarily be safe, particularly if it has significant novelties (as in London’s wobbly bridge).

Thus any app (or other process) has two sides: as an algorithm and as a heuristic. As an algorithm we ask if it meets its concrete goals. As a heuristic we ask if it solves a real-world problem. Thus a process for identifying some kind of undesirable would be regarded as good algorithmically if it conformed to our idea of the undesirables, but may still be poor heuristically. In particular, good AI would seem to depend on someone understanding at least the factors involved in the problem. This may not always be the case, no matter how ‘mathematically sophisticated’ the algorithms involved.
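The two sides can be shown with a deliberately toy sketch. This is my own hypothetical example, not from any real system: a filter that is a correct algorithm – it exactly implements its spec – but only a heuristic for the real problem it is meant to address.

```python
# Hypothetical spec: "flag any message containing a blocklisted word".
BLOCKLIST = {"winner", "prize"}

def flag(message: str) -> bool:
    """Algorithm: flag iff the message contains a blocklisted word."""
    return any(word in message.lower().split() for word in BLOCKLIST)

# Algorithmically good: conforms exactly to the spec.
assert flag("you are a winner") is True
assert flag("meeting at noon") is False

# Heuristically poor: a real-world undesirable slips through,
# because the spec does not capture the real problem.
assert flag("you are a w1nner") is False
```

The program is a good algorithm for its spec and, for a while, a passable heuristic for the real problem; the two judgements can come apart without any bug being present.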

Perhaps you could improve on this attempted explanation?

Dave Marsay

Probability as a guide to life

‘Probability is the very guide to life.’

Cicero may have been right, but ‘probability’ means something quite different nowadays to what it did millennia ago. So what kind of probability is a suitable guide to life, and when?

Suppose that we are told that ‘P(X) = p’. Often there is some implied real or virtual population, P, a proportion ‘p’ of which has the property ‘X’. To interpret such a probability statement we need to know what the relevant population is. Such statements are then normally reliable. More controversial are conditional probabilities, such as ‘P(X|Y) = p’. If you satisfy Y, does P(X)=p ‘for you’?

Suppose that:

  1. All the properties of interest (such as X and Y) can be expressed as unions of elements of some disjoint basis, B.
  2. For all such basis properties, B, P(X|B) is known.
  3. That the conditional probabilities of interest are derived from the basis properties in the usual way. (E.g. P(X|B1∪B2) = (P(B1)·P(X|B1) + P(B2)·P(X|B2))/P(B1∪B2).)
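The construction can be sketched with a hypothetical two-element basis; all numbers below are illustrative, not from the post.

```python
# Sketch of the construction above, with a hypothetical disjoint
# basis {B1, B2}.
P = {"B1": 0.3, "B2": 0.7}          # P(Bi) for each basis property
P_X_given = {"B1": 0.9, "B2": 0.2}  # P(X|Bi), known for each

def p_cond(basis):
    """P(X | union of the given basis sets), derived the usual way."""
    total = sum(P[b] for b in basis)
    return sum(P[b] * P_X_given[b] for b in basis) / total

# P(X|B1∪B2) = (P(B1)·P(X|B1) + P(B2)·P(X|B2)) / P(B1∪B2) = 0.41
print(p_cond({"B1", "B2"}))
# For a set Z that cuts across B1 and B2 in unknown proportions,
# P(X|Z) is only constrained to lie somewhere in [0.2, 0.9].
```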

The conditional probabilities constructed in this way are meaningful, but if we are interested in some other set, Z, the conditional probability P(X|Z) could take a range of values. But then we need to reconsider decision making. Instead of maximising a probability (or utility), the following heuristics may apply:

  • If the range makes significant difference, try to get more precise data. This may be by taking more samples, or by refining the properties considered.
  • Consider the best outcome for the worst-case probabilities.
  • If the above is not acceptable, make some reasonable assumptions until an acceptable result is possible.

For example, suppose that there are some urns, each containing a mix of balls, some of which are white. We can choose an urn and then pick a ball at random. We want white balls. What should we do? The conventional rule consists of assessing the proportion of white balls in each urn, and picking an urn with the most. This is uncontroversial if our assessments are reliable. But suppose we are faced with an urn with an unknown mix? Conventionally our assessment should not depend on whether we want to obtain or avoid a white ball. But if we want white balls the worst-case proportion is no white balls, and we avoid this urn, whereas if we want to avoid white balls the worst-case proportion is all white balls, and we again avoid this urn.

If our assessments are not biased then we would expect to do better with the conventional rule most of the time and in the long run. For example, if the non-white balls are black, and urns are as likely to be filled with black balls as with white, then assessing that an urn with unknown contents has half white balls is justified. But in other cases we just don’t know, and in choosing this urn we could do consistently badly. There is a difference between an urn whose contents are unknown but for which you have good grounds for estimating the proportion, and an urn where you have no grounds for assessing the proportion.
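The urn choice can be sketched by representing each assessment as an interval of possible proportions. The urn names and numbers here are my own hypothetical illustration.

```python
# Each assessment is an interval [lo, hi] of possible white-ball
# proportions; an urn with an unknown mix is [0, 1].
urns = {
    "known_40": (0.40, 0.40),  # assessed: 40% white
    "known_60": (0.60, 0.60),  # assessed: 60% white
    "unknown": (0.00, 1.00),   # no grounds for any assessment
}

def best_urn(want_white):
    """Pick the urn with the best worst-case proportion of white balls."""
    if want_white:
        # Worst case when wanting white is the low end of the interval.
        return max(urns, key=lambda u: urns[u][0])
    # Worst case when avoiding white is the high end.
    return min(urns, key=lambda u: urns[u][1])

print(best_urn(want_white=True))   # known_60
print(best_urn(want_white=False))  # known_40
```

Either way the unknown urn is avoided, whereas the conventional rule would treat it as half white regardless of our aim.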

If precise probabilities are to be the very guide to life, it had better be a dull life. For more interesting lives imprecise probabilities can be used to reduce the possibilities. It is often informative to identify worst-case options, but one can be left with genuine choices. Conventional rationality is the only way to reduce living to a formula: but is it such a good idea?

Dave Marsay

How can economics be a science?

This note is prompted by Thaler’s Nobel prize, the reaction to it, and attempts by mathematicians to explain both what they do do and what they could do. Briefly, mathematicians are increasingly employed to assist practitioners (such as financiers) to sharpen their tools and improve their results, in some pre-defined sense (such as making more profit). They are less used to sharpen core ideas, much less to challenge assumptions. This is unfortunate when tools are misused and mathematicians blamed. It is no good saying that mathematicians should not go along with such misuse, since the misuse is often not obvious without some (expensive) investigations, and in any case whistleblowers are likely to get shown the door (even if only for being inefficient).

Mainstream economics aspires to be a science in the sense of being able to make predictions, at least probabilistically. Some (mostly before 2007/8) claimed that it achieved this, because its methods were scientific. But are they? Keynes coined the term ‘pseudo-mathematical’ for the then mainstream practices, whereby mathematics was applied without due regard for the soundness of the application. Then, as now, the mathematics in itself is as much beyond doubt as anything can be. The problem is a ‘halo effect’ whereby the application is regarded as ‘true’ just because the mathematics is. It is like physics before Einstein, whereby some (such as Locke) thought that classical geometry must be ‘true’ as physics, largely because it was so true as mathematics and they couldn’t envisage an alternative.

From a logical perspective, all that the use of scientific methods can do is to make probabilistic predictions that are contingent on there being no fundamental change. In some domains (such as particle physics, cosmology) there have never been any fundamental changes (at least since soon after the big bang) and we may not expect any. But economics, as life more generally, seems full of changes.

Popper famously noted that proper science is in principle falsifiable. Many practitioners in science and science-like fields regard the aim of their domain as to produce ‘scientific’ predictions. They have had to change their theories in the past, and may have to do so again. But many still suppose that there is some ultimate ‘true’ theory, to which their theories are tending. But according to Popper this is not a ‘proper’ scientific belief. Following Keynes we may call it an example of ‘pseudo-science’: something that masquerades as a science but goes beyond its bounds.

One approach to mainstream economics, then, is to disregard the pseudo-scientific ideology and just take its scientific content. Thus we may regard its predictions as mere extrapolations, and look out for circumstances in which they may not be valid. (As Eddington did for cosmology.)

Mainstream economics depends heavily on two notions:

  1. That there is some pre-ordained state space.
  2. That transitions evolve according to fixed conditional probabilities.

For most of us, most of the time, fortunately, these seem credible locally and in the short term, but not globally in space-time. (At the time of writing it seems hard to believe that just after the big bang there were in any meaningful sense state spaces and conditional probabilities that are now being realised.) We might adjust the usual assumptions:

The ‘real’ state of nature is unknowable, but one can make reasonable observations and extrapolations that will be ‘good enough’ most of the time for most routine purposes.

This is true for hard and soft sciences, and for economics. What varies is the balance between the routine and the exceptional.

Keynes observed that some economic structures work because people expect them to. For example, gold tends to rise in price because people think of it as being relatively sound. Thus anything that has a huge effect on expectations can undermine any prior extrapolations. This might be a new product or service, an independence movement, a conflict or a cyber failing. These all have a structural impact on economies that can cascade. But will the effect dissipate as it spreads, or may it result in a noticeable shift? A mainstream economist would argue that all such impacts are probabilistic, and hence all that was happening was that we were observing new parts of the existing state space and new transitions. Even if we suppose for a moment that this is true, it is not a scientific belief, and it hardly seems a useful way of thinking about potential and actual crises.
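The two mainstream notions amount to assuming something like a fixed Markov chain. Here is a hypothetical sketch (the state names and probabilities are mine, purely for illustration) of why extrapolation fails when the structure itself shifts.

```python
import numpy as np

# Fixed state space {boom, bust} and fixed transition probabilities:
# the two mainstream assumptions in miniature.
P_before = np.array([[0.9, 0.1],   # boom -> boom, boom -> bust
                     [0.5, 0.5]])  # bust -> boom, bust -> bust

def long_run(P, steps=1000):
    """Long-run state distribution under a fixed transition matrix."""
    d = np.array([1.0, 0.0])  # start in boom
    for _ in range(steps):
        d = d @ P
    return d

print(long_run(P_before))  # the extrapolated 'normal'

# A structural change -- say, a crisis that makes busts persistent --
# is a new matrix, not a newly observed corner of the old state space.
P_after = np.array([[0.6, 0.4],
                    [0.1, 0.9]])
print(long_run(P_after))   # extrapolations from before now mislead
```

A mainstream reading treats the crisis as a rare transition within the old matrix; the structural reading says the matrix itself changed, so the old long-run distribution is no longer evidence about the new one.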

Mainstream economists suppose that people are ‘rational’, by which they mean that they act as if they are maximizing some utility, which is something to do with value and probability. But, even if the world is probabilistic, being rational is not necessarily scientific. For example, when a levee is built to withstand a ‘100 year storm’, this is scientific if it is clear that the claim is based on past storm data. But it is unscientific if there is an implicit claim that the climate cannot change. When building a levee it may be ‘rational’ to build it to withstand all but very improbable storms, but it is more sensible to add a margin and make contingency arrangements (as engineers normally do). In much of life it is common experience that the ‘scientific’ results aren’t entirely reliable, so it is ‘unscientific’ (or at least unreasonable) to totally rely on them.
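The storm example can be made concrete. Even taking the ‘scientific’ claim at face value, the chance of at least one such storm over a levee’s lifetime is large; the 50-year lifetime below is an illustrative assumption of mine.

```python
# '100-year storm': a 1% chance of occurrence in any given year,
# based purely on past storm data.
annual_p = 1 / 100
lifetime = 50  # illustrative service life of the levee, in years

# P(at least one such storm) = 1 - P(none in any of the years)
p_exceed = 1 - (1 - annual_p) ** lifetime
print(f"{p_exceed:.0%}")  # about 39%: hence margins and contingency plans
```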

Much of this is bread-and-butter in disciplines other than economics, and I am not sure that what economists mostly need is to improve their mathematics: they need to improve their sciencey-ness, and then use mathematics better. But I do think that they need somehow to come to a better appreciation of the mathematics of uncertainty, beyond basic probability theory and its ramifications.

Dave Marsay


Why do people hate maths?

New Scientist 3141 (2 Sept 2017) has the cover splash ‘Your mathematical mind: Why do our brains speak the language of reality?’. The article (p 31) is titled ‘The origin of mathematics’.

I have made pedantic comments on previous articles on similar topics, to be told that the author’s intentions have been slightly skewed in the editing process. Maybe it has again. But some interesting (to me) points still arise.

Firstly, we are told that brain scans reveal:

a network of brain regions involved in mathematical thought that was activated when mathematicians reflected on problems in algebra, geometry and topology, but not when they were thinking about non-mathsy things. No such distinction was visible in other academics. Crucially, this “maths network” does not overlap with brain regions involved in language.

It seems reasonable to suppose that many people do not develop such a maths capability from experience in ordinary life or non-mathsy subjects, and perhaps don’t really appreciate its significance. Such people would certainly find maths stressful, which may explain their ‘hate’. At least we can say – contradicting the cover splash – that most people lack a mathematical mind, which may explain the difficulties mathematicians have in communicating.

In addition, I have come across a few seemingly sensible people who may seem to hate maths, although I would rather say that they hate ‘pseudo-maths’. For example, it may be true that we have a better grasp on reality if we can think mathematically – as scientists and technologists routinely do – but it seems a huge jump – and misleading – to claim that mathematics is ‘the language of reality’ in any more objective sense. By pseudo-maths I mean something that appears to be maths (at least to the non-mathematician) but which uses ordinary reasoning to make bold claims (such as ‘is the language of reality’).

But there is a more fundamental problem. The article cites Ashby to the effect that ‘effective control’ relies on adequate models. Such models are of course computational, and as such we rely on mathematics to reason about them. Thus we might say that mathematics is the language of effective control. If – as some seem to – we make a dichotomy between controllable and not controllable systems, then mathematics is the pragmatic language of reality. Here we enter murky waters. For example, if reality is socially constructed then presumably pragmatic social sciences (such as economics) are necessarily concerned with control, as in their models. But one point of my blog is that the kind of maths that applies to control is only a small portion. There is at least the possibility that almost all things of interest to us as humans are better considered using different maths. In this sense it seems to me that some people justifiably hate control and hence the related pseudo-maths. It would be interesting to give them a brain scan to see if their thinking appeared mathematical, or if they had some other characteristic networks of brain regions. Either way, I suspect that many problems would benefit from collaborations between mathematicians and those who hate pseudo-mathematics without necessarily being professional mathematicians. This seems to match my own experience.

Dave Marsay