Who thinks probability is just a number? A plea.

Many people think – perhaps they were taught it – that it is meaningful to talk about the unconditional probability of ‘Heads’ (i.e. P(Heads)) for a real coin, and even that there are logical or mathematical arguments to this effect. I have been collecting and commenting on works that have been – too widely – interpreted in this way, and quoting their authors in contradiction. De Finetti seemed to be the only respected person who thought that he had provided such an argument. But a friendly economist has just forwarded a link to a recent work that debunks this notion, based on a wider reading of his work.

So, am I done? Does anyone have any seeming mathematical sources for the view that ‘probability is just a number’ for me to consider?

I have already covered:

There are some more modern authors who make strong claims about probability, but – unless you know different – they rely on the above, and hence do not need to be addressed separately. I do also opine on a few less well known sources: you can search my blog to check.

Dave Marsay

What logical term or concept ought to be more widely known?

Various authors, ‘What scientific term or concept ought to be more widely known?’, Edge, 2017.

INTRODUCTION: SCIENTIA

Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. …

Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.

Contributions

As against others on:

(This is as far as I’ve got.)

Comment

I’ve grouped the contributions according to whether or not I think they give due weight to the notion of uncertainty as expressed in my blog. Interestingly, Steven Pinker seems not to give due weight in his article, whereas he is credited by Nicholas G. Carr with some profound insights (in the first of the second batch). So maybe I am not reading them right.

My own suggestion would be Turing’s theory of ‘Morphogenesis’. The particular predictions seem to have been confirmed ‘scientifically’, but it is essentially a logical / mathematical theory. If, as the introduction suggests, science is “reliable methods for obtaining knowledge” then it seems to me that logic and mathematics are more reliable than empirical methods, and deserve some special recognition. I must concede, though, that it may be hard to tell logic from pseudo-logic, and that unless you can do so my distinction is potentially dangerous.

Morphogenesis

The second law of thermodynamics, and much common-sense rationality, assume a situation in which the law of large numbers applies. But Turing adds to the second law’s notion of random dissipation a notion of relative structuring (as in gravity) to show that ‘critical instabilities’ are inevitable. These are inconsistent with the law of large numbers, so the assumptions of the second law of thermodynamics (and much else) cannot always hold. The universe cannot be ‘closed’ in its sense.

Implications

The assumptions of the second law seem to leave no room for free will, hence no reason to believe in our agency, and hence no point in any of the contributions to Edge: things are what they are and we do what we do. But Pinker does not go so far: he simply notes that if things inevitably degrade we need not beat ourselves up, or look for scapegoats, when things go wrong. But this can be true even if the second law does not apply. If we take Turing seriously then a seemingly permanent status quo can contain the reasons for its own destruction, so that turning a blind eye and doing nothing can mean sleep-walking to disaster. Pinker concludes:

[An] underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.

This would seem to follow more clearly from the theory of morphogenesis than the second law. Turing’s theory also goes some way to suggesting or even explaining the items in the second batch. So, I commend it.

Dave Marsay

Heuristics or Algorithms: Confused?

The Editor of the New Scientist (Vol. 3176, 5 May 2018, Letters, p54) opined in response to Adrian Bowyer’s wish to distinguish between ‘heuristics’ and ‘algorithms’ in AI that:

This distinction is no longer widely made by practitioners of the craft, and we have to follow language as it is used, even when it loses precision.

Sadly, I have to accept that AI folk consistently fail to respect a widely held distinction, but it seems odd that their failure should impose an obligation on the New Scientist – which has a much broader readership than just AI folk. I would agree that, in addressing audiences that include significant sectors that fail to make some distinction, we need to be aware of the fact; but if the distinction is relevant – as Bowyer argues – surely we should explain it.

According to the Free Dictionary:

Heuristic: adj 1. Of or relating to a usually speculative formulation serving as a guide in the investigation or solution of a problem.

Algorithm: n: A finite set of unambiguous instructions that, given some set of initial conditions, can be performed in a prescribed sequence to achieve a certain goal and that has a recognizable set of end conditions.

It even offers this quote:

heuristic: of or relating to or using a general formulation that serves to guide investigation.

algorithmic: of or relating to or having the characteristics of an algorithm.

But perhaps this is not clear?

AI practitioners routinely apply algorithms as heuristics in the same way that a bridge designer may routinely use a computer program. We might reasonably regard a bridge-designing app as good if it correctly implements best practice in bridge-building, but this is not to say that a bridge designed using it would necessarily be safe, particularly if it has significant novelties (as in London’s wobbly bridge).

Thus any app (or other process) has two sides: as an algorithm and as a heuristic. As an algorithm we ask if it meets its concrete goals. As a heuristic we ask if it solves a real-world problem. Thus a process for identifying some kind of undesirable would be regarded as good algorithmically if it conformed to our idea of the undesirables, but may still be poor heuristically. In particular, good AI would seem to depend on someone understanding at least the factors involved in the problem. This may not always be the case, no matter how ‘mathematically sophisticated’ the algorithms involved.
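
One way to illustrate the two sides (my own illustration, not Bowyer’s or the Editor’s) is a greedy change-making routine: as an algorithm it is perfectly well defined and meets its concrete goal, but used as a heuristic for ‘fewest coins’ it can fail for some denominations.

```python
# Illustrative sketch: a greedy change-maker is a precise algorithm,
# but only a heuristic for the real-world goal of minimising coins.

def greedy_change(amount, coins):
    """Always take the largest coin that fits."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used if amount == 0 else None

def optimal_change(amount, coins):
    """Exhaustive dynamic programming: meets the real goal by construction."""
    best = {0: []}
    for a in range(1, amount + 1):
        candidates = [best[a - c] + [c] for c in coins if a - c in best]
        if candidates:
            best[a] = min(candidates, key=len)
    return best.get(amount)

# With coins {1, 3, 4} and amount 6, greedy gives three coins
# where two suffice.
print(greedy_change(6, [1, 3, 4]))   # [4, 1, 1]
print(optimal_change(6, [1, 3, 4]))  # [3, 3]
```

Both routines are good algorithms in the sense of doing exactly what their specifications say; only one is a good heuristic for the problem as posed.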

Perhaps you could improve on this attempted explanation?

Dave Marsay

Probability as a guide to life

‘Probability is the very guide to life.’

Cicero may have been right, but ‘probability’ means something quite different nowadays to what it did millennia ago. So what kind of probability is a suitable guide to life, and when?

Suppose that we are told that ‘P(X) = p’. Often there is some implied real or virtual population, P, a proportion ‘p’ of which has the property ‘X’. To interpret such a probability statement we need to know what the relevant population is. Such statements are then normally reliable. More controversial are conditional probabilities, such as ‘P(X|Y) = p’. If you satisfy Y, does P(X)=p ‘for you’?

Suppose that:

  1. All the properties of interest (such as X and Y) can be expressed as unions of elements of some disjoint basis, B.
  2. For all such basis properties, B, P(X|B) is known.
  3. The conditional probabilities of interest are derived from the basis properties in the usual way. (E.g. P(X|B1∪B2) = (P(B1).P(X|B1) + P(B2).P(X|B2))/P(B1∪B2).)
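
A minimal numerical sketch of step 3, with invented numbers, combining the basis properties in the usual way:

```python
# Deriving P(X|Z) when Z is a union of disjoint basis properties B
# with known P(B) and P(X|B). Numbers below are purely illustrative.

def conditional_on_union(basis):
    """basis: list of (P(B), P(X|B)) pairs for the disjoint B making up Z.
    Returns P(X|Z) = sum of P(B).P(X|B) over the basis, divided by P(Z)."""
    p_z = sum(p_b for p_b, _ in basis)
    return sum(p_b * p_x for p_b, p_x in basis) / p_z

# E.g. B1 with P(B1)=0.2, P(X|B1)=0.5 and B2 with P(B2)=0.3, P(X|B2)=0.9:
print(round(conditional_on_union([(0.2, 0.5), (0.3, 0.9)]), 3))  # 0.74
```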

The conditional probabilities constructed in this way are meaningful, but if we are interested in some other set, Z, the conditional probability P(X|Z) could take a range of values. But then we need to reconsider decision-making. Instead of maximising a probability (or utility), the following heuristics may apply:

  • If the range makes significant difference, try to get more precise data. This may be by taking more samples, or by refining the properties considered.
  • Consider the best outcome for the worst-case probabilities.
  • If the above is not acceptable, make some reasonable assumptions until there is an acceptable result possible.

For example, suppose that there are some urns, each containing a mix of balls, some of which are white. We can choose an urn and then pick a ball at random. We want white balls. What should we do? The conventional rule consists of assessing the proportion of white balls in each urn, and picking an urn with the most. This is uncontroversial if our assessments are reliable. But suppose we are faced with an urn with an unknown mix? Conventionally, our assessment should not depend on whether we want to obtain or avoid a white ball. But if we want white balls the worst-case proportion is no white balls, and we avoid this urn; whereas if we want to avoid white balls the worst-case proportion is all white balls, and we again avoid this urn.

If our assessments are not biased then we would expect to do better with the conventional rule most of the time and in the long run. For example, if the non-white balls are black, and urns are as likely to be filled with black balls as with white, then assessing that an urn with unknown contents has half white balls is justified. But in other cases we just don’t know, and in choosing this urn we could do consistently badly. There is a difference between an urn whose contents are unknown but for which you have good grounds for estimating the proportion, and an urn for which you have no grounds for assessing the proportion.
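
The urn argument can be sketched with interval-valued proportions (the intervals are illustrative, not from the post):

```python
# An urn is summarised by an interval (lo, hi) for its proportion of
# white balls: a known half-and-half urn versus a wholly unknown one.

known_urn = (0.5, 0.5)    # proportion of white balls known to be one half
unknown_urn = (0.0, 1.0)  # proportion only known to lie between 0 and 1

def worst_case(urn, want_white):
    """Worst-case chance of drawing what we want from this urn."""
    lo, hi = urn
    return lo if want_white else 1.0 - hi

# Whether we want white balls or not, the unknown urn's worst case is 0,
# so a worst-case decider avoids it either way.
for want_white in (True, False):
    print(worst_case(known_urn, want_white), worst_case(unknown_urn, want_white))
```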

If precise probabilities are to be the very guide to life, it had better be a dull life. For more interesting lives imprecise probabilities can be used to reduce the possibilities. It is often informative to identify worst-case options, but one can be left with genuine choices. Conventional rationality is the only way to reduce living to a formula: but is it such a good idea?

Dave Marsay

Why do people hate maths?

New Scientist 3141 (2 Sept 2017) has the cover splash ‘Your mathematical mind: Why do our brains speak the language of reality?’. The article (p 31) is titled ‘The origin of mathematics’.

I have made pedantic comments on previous articles on similar topics, to be told that the author’s intentions have been slightly skewed in the editing process. Maybe it has again. But some interesting (to me) points still arise.

Firstly, we are told that brain scans show that:

a network of brain regions involved in mathematical thought that was activated when mathematicians reflected on problems in algebra, geometry and topology, but not when they were thinking about non-mathsy things. No such distinction was visible in other academics. Crucially, this “maths network” does not overlap with brain regions involved in language.

It seems reasonable to suppose that many people do not develop such a maths capability from experience in ordinary life or non-mathsy subjects, and perhaps don’t really appreciate its significance. Such people would certainly find maths stressful, which may explain their ‘hate’. At least we can say – contradicting the cover splash – that most people lack a mathematical mind, which may explain the difficulties mathematicians have in communicating.

In addition, I have come across a few seemingly sensible people who seem to hate maths, although I would rather say that they hate ‘pseudo-maths’. For example, it may be true that we have a better grasp on reality if we can think mathematically – as scientists and technologists routinely do – but it seems a huge jump – and misleading – to claim that mathematics is ‘the language of reality’ in any more objective sense. By pseudo-maths I mean something that appears to be maths (at least to the non-mathematician) but which uses ordinary reasoning to make bold claims (such as ‘is the language of reality’).

But there is a more fundamental problem. The article cites Ashby to the effect that ‘effective control’ relies on adequate models. Such models are of course computational, and as such we rely on mathematics to reason about them. Thus we might say that mathematics is the language of effective control. If – as some seem to – we make a dichotomy between controllable and uncontrollable systems, then mathematics is the pragmatic language of reality. Here we enter murky waters. For example, if reality is socially constructed then presumably pragmatic social sciences (such as economics) are necessarily concerned with control, as in their models. But one point of my blog is that the kind of maths that applies to control is only a small portion of the whole. There is at least the possibility that almost all things of interest to us as humans are better considered using different maths. In this sense it seems to me that some people justifiably hate control and hence the related pseudo-maths. It would be interesting to give them a brain scan to see if their thinking appeared mathematical, or if they had some other characteristic networks of brain regions. Either way, I suspect that many problems would benefit from collaborations between mathematicians and those who hate pseudo-maths without necessarily being professional mathematicians. This seems to match my own experience.

Dave Marsay

Mathematical Modelling

Mathematics and modelling in particular is very powerful, and hence can be very risky if you get it wrong, as in mainstream economics. But is modelling inappropriate – as has been claimed – or is it just that it has not been done well enough?

As a mathematician who has dabbled in modelling and economics I thought I’d try my hand at modelling economies. What harm could there be?

My first notion is that actors’ activity is habitual.

My second is that habits persist until there is a ‘bad’ experience, in which case they are revised. What is taken account of, what counts as ‘bad’ and how habits are replaced or revised are all subject to meta-habits (habits about habits).

In particular, mainstream economists suppose that actors seek to maximise their utilities, and they never revise this approach. But this may be too restrictive.

Myself, I would add that most actors mostly seek to copy others and also tend to discount experiences and lessons identified by previous generations.

With some such ‘axioms’ (suitably formalised) as those above, one can predict booms and busts leading to new ‘epochs’ characterised by dominant theories and habits. For example, suppose that some actors habitually borrow as much as they can to invest in an asset (such as a house for rent) and the asset class performs well. Then they will continue in their habit, and others who have done less well will increasingly copy them, fuelling an asset-price boom. But no asset class is worth an infinite amount, so the boom must end, resulting in disappointment and changes in habit, which may again be copied by those who are losing out on the asset class, giving a bust. Thus one has an ‘emergent behaviour’ that contradicts some of the implicit mainstream assumptions about rationality (such as ‘ergodicity’), and hence the possibility of meaningful ‘expectations’ and utility functions to be maximised. This is not to say that such things cannot exist, only that if they do exist it must be due to some economic law as yet unidentified, and we need an alternative explanation for booms and busts.

What I take from this is that mathematical models seem possible and may even provide insights. I do not assume that a model that is adequate in the short run will necessarily continue to be adequate, and my model shows how economic epochs can be self-destructing. To me, the problem in economics is not so much that it uses mathematics and in particular mathematical modelling but that it does so badly. My ‘axioms’ mimic the approach that Einstein took to physics: it replaces an absolutist model by a relativistic one, and shows that it makes a difference. In my model there are no magical ‘expectations’; rather, actors may have realistic habits and expectations, based on their experience and interpretation of the media and other sources, which may be ‘correct’ (or at least not falsified) in the short run, but which cannot provide adequate predictions for the longer run. To survive a change of epochs our actors would need to be at least following some actors who were monitoring and thinking about the overall situation more broadly and deeply than those who focus on short-run utility. (Something that currently seems lacking.)

David Marsay

Can polls be reliable?

Election polls in many countries have seemed unusually unreliable recently. Why? And can they be fixed?

The most basic observation is that if one has a random sample of a population in which x% has some attribute then it is reasonable to estimate that x% of the whole population has that attribute, and that this estimate will tend to be more accurate the larger the sample is. In some polls sample size can be an issue, but not in the main political polls.
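
The basic observation can be made precise: for a simple random sample, the standard error of an estimated proportion shrinks like 1/sqrt(n).

```python
# A minimal sketch of why sample size matters for a simple random sample.
import math

def standard_error(p, n):
    """Approximate standard error of a proportion p estimated from a
    simple random sample of size n."""
    return math.sqrt(p * (1 - p) / n)

# A 1,000-person poll at p = 0.5 has about a 1.6-point standard error;
# quadrupling the sample merely halves it.
print(round(standard_error(0.5, 1000), 4))  # 0.0158
print(round(standard_error(0.5, 4000), 4))  # 0.0079
```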

A fundamental problem with most polls is that the ‘random’ sample may not be representative of the population, with some sub-groups over- or under-represented. Political polls have some additional issues that are sometimes blamed:

  • People with certain opinions may be reluctant to express them, or may even mislead.
  • There may be a shift in opinions with time, due to campaigns or events.
  • Different groups may differ in whether they actually vote, for example depending on the weather.

I also think that in the UK the trend to postal voting may have confused things, as postal voters will have missed out on the later stages of campaigns, and on later events. (Which were significant in the UK 2017 general election.)

Pollsters have a lot of experience at compensating for these distortions, and are increasingly using ‘sophisticated mathematical tools’. How is this possible, and is there any residual uncertainty?

Back to mathematics, suppose that we have a science-like situation in which we know which factors (e.g. gender, age, social class ..) are relevant. With a large enough sample we can partition the results by combination of factors, measure the proportions for each combination, and then combine these proportions, weighting by the prevalence of the combinations in the whole population. (More sophisticated approaches are used for smaller samples, but they only reduce the statistical reliability.)
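
The partition-and-reweight scheme can be sketched as follows (a post-stratification sketch; the factor groups and numbers are invented for illustration):

```python
# Measure the proportion with the attribute within each combination of
# factors, then weight by that combination's prevalence in the population.

sample_proportions = {"young": 0.60, "old": 0.30}   # observed in the poll
population_weights = {"young": 0.40, "old": 0.60}   # known prevalences

estimate = sum(population_weights[g] * sample_proportions[g]
               for g in sample_proportions)
print(round(estimate, 2))  # weighted estimate for the whole population
```

Note that the estimate depends on the weights being right: if the true prevalences differ, or a relevant factor is missing from the partition, the reweighting silently inherits the error.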

Systematic errors can creep in in two ways:

  1. Instead of using just the poll data, some ‘laws of politics’ (such as the effect of rain) or other heuristics (such as that the swing among postal votes will be similar to that for votes in person) may be relied on, and these may be wrong.
  2. An important factor is missed. (For example, people with teenage children or grandchildren may vote differently from their peers when student fees are an issue.)

These issues have analogues in the science lab. In the first place one is using the wrong theory to interpret the data, and so the results are corrupted. In the second case one has some unnoticed ‘uncontrolled variable’ that can really confuse things.

A polling method using fixed factors and laws will only be reliable when these reasonably accurately reflect the attributes of interest, and not when ‘the nature of politics’ is changing, as it often does and as it seems to be right now in North America and Europe. (According to game theory one should expect such changes when coalitions change or are under threat, as they are.) To do better, the polling organisation would need to understand the factors that the parties were bringing into play at least as well as the parties themselves, and possibly better. This seems unlikely, at least in the UK.

What can be done?

It seems to me that polls used to be relatively easy to interpret, possibly because they were simpler. Our more sophisticated contemporary methods make more detailed assumptions. To interpret them we would need to know what these assumptions were. We could then ‘aim off’, based on our own judgment. But this would involve pollsters in publishing some details of their methods, which they are naturally loth to do. So what could be done? Maybe we could agree some simple methods and publish findings as ‘extrapolations’ to inform debate, rather than as predictions. We could then factor in our own assumptions. (For example, our assumptions about student turnout.)

So, I don’t think that we can expect reliable poll findings that are predictions, but possibly we could have useful poll findings that would inform debate and allow us to take our own views. (A bit like any ‘big data’.)

Dave Marsay


The search for MH370: uncertainty

There is an interesting podcast about the search for MH370 by a former colleague. I think it illustrates in a relatively accessible form some aspects of uncertainty.

According to the familiar theory, if one has an initial probability distribution over the globe for the location of MH370’s flight recorder, say, then one can update it using Bayes’ rule to get a refined distribution. Conventionally, one should search where there is a higher probability density (all else being equal). But in this case it is fairly obvious that there is no principled way of deriving an initial distribution, and even Bayes’ rule is problematic. Conventionally, one should do the best one can, and search accordingly.
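
The familiar theory can be sketched as a prior over grid cells, updated by Bayes’ rule after an unsuccessful search of one cell (cells, probabilities and the detection rate are invented for illustration):

```python
# Bayesian search sketch: searching a cell detects the object with
# probability p_detect if it is there; a failed search down-weights
# that cell and renormalises the rest.

def update_after_failed_search(prior, searched, p_detect):
    """prior: dict cell -> probability. Returns the posterior after an
    unsuccessful search of `searched`."""
    unnorm = {cell: p * ((1 - p_detect) if cell == searched else 1.0)
              for cell, p in prior.items()}
    total = sum(unnorm.values())
    return {cell: p / total for cell, p in unnorm.items()}

prior = {"A": 0.5, "B": 0.3, "C": 0.2}
posterior = update_after_failed_search(prior, "A", p_detect=0.8)
# The searched cell becomes less likely; the others gain.
print({cell: round(p, 3) for cell, p in posterior.items()})
```

The machinery is straightforward once a prior is given; the point of the paragraph above is that for MH370 there is no principled way to choose that prior in the first place.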

The podcaster (Simon) gives examples of some hypotheses (such as the pilot being well, well-motivated and unhindered throughout) for which the probabilistic approach is more reasonable. One can then split one’s effort over such credible hypotheses, not ruled out by evidence.

A conventional probabilist would note that any ‘rational’ search would be equivalent to some initial probability distribution over hypotheses, and hence some overall distribution. This may be so, but it is clear from Simon’s account that this would hardly be helpful.

I have been involved in similar situations, and have found it easier to explain the issues to non-mathematicians when there is some severe resource constraint, such as time. For example, we are looking for a person. The conventional approach is to maximise our estimated probability of finding them based on our estimated probabilities of them having acted in various ways (e.g., run for it, hunkered down). An alternative is to consider the ways they may ‘reasonably’ be thought to have acted and then to seek to maximize the worst case probability of finding them. Then again, we may have a ranking of ways that they may have acted, and seek to maximize the number of ways for which the probability of our success exceeds some acceptable amount (e.g. 90%). The key point here is that there are many reasonable objectives one might have, for only one of which the conventional assumptions are valid. The relevant mathematics does still apply, though!
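
The three objectives above can be compared on invented numbers: each picks out a different ‘rational’ search plan.

```python
# P(find | plan, way-they-acted) for two hypothetical plans and two
# hypothesised behaviours; priors are assumed only by the conventional rule.

plans = {"sweep":  {"run": 0.9, "hide": 0.2},
         "cordon": {"run": 0.5, "hide": 0.6}}
priors = {"run": 0.7, "hide": 0.3}

def expected(plan):
    """Conventional rule: maximise the estimated probability of finding them."""
    return sum(priors[h] * p for h, p in plans[plan].items())

def worst_case(plan):
    """Maximise the worst-case probability over credible hypotheses."""
    return min(plans[plan].values())

def count_ok(plan, level=0.5):
    """Maximise the number of hypotheses with an acceptable P(find)."""
    return sum(p >= level for p in plans[plan].values())

print(max(plans, key=expected))    # the conventional choice
print(max(plans, key=worst_case))  # the maximin choice
print(max(plans, key=count_ok))    # the threshold-count choice
```

With these numbers the conventional rule prefers the sweep while the other two objectives prefer the cordon, illustrating that only one of several reasonable objectives validates the conventional assumptions.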

Dave Marsay