Who thinks probability is just a number? A plea.

Many people think – perhaps they were taught it – that it is meaningful to talk about the unconditional probability of ‘Heads’ (i.e., P(Heads)) for a real coin, and even that there are logical or mathematical arguments to this effect. I have been collecting and commenting on works that have been – too widely – interpreted in this way, and quoting their authors in contradiction. De Finetti seemed to be the only respected figure who thought he had provided such an argument, but a friendly economist has just forwarded a link to a recent work that debunks this notion, based on a wider reading of his work.

So, am I done? Does anyone have any seemingly mathematical sources for the view that ‘probability is just a number’ for me to consider?

I have already covered:

There are some more modern authors who make strong claims about probability, but – unless you know differently – they rely on the above, and hence do not need to be addressed separately. I also opine on a few less well-known sources: you can search my blog to check.

Dave Marsay

Artificial Intelligence?

The subject of ‘Artificial Intelligence’ (AI) has long provided ample scope for inconclusive debates. Wikipedia seems to have settled on a view, which we may take as a straw man:

Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. [Dartmouth Conference, 1956]

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds. [John Searle’s straw-man hypothesis]

Readers of my blog will realise that I agree with Searle that his hypothesis is wrong, but for different reasons. It seems to me that mainstream AI (mAI) is about being able to take instruction. This is a part of learning, but by no means all of it. Thus – I claim – mAI is about a subset of intelligence. In many organisational settings it may be that subset which the organisation values. It may even be that an AI that ‘thought for itself’ would be a danger. For example, in old discussions about whether some type of AI could ever act as a G.P. (General Practitioner – first-line doctor), the underlying issue has been whether G.P.s ‘should’ think for themselves, or just apply their trained responses. My own experience is that G.P.s sometimes doubt the applicability of what they have been taught, and that sometimes this is ‘a good thing’. In effect, we sometimes want to train people, or otherwise arrange for them to react in predictable ways, as if they were machines.

mAI can create better machines, and thus has many key roles to play. But between mAI and ‘superhuman intelligence’ there seems to be an important gap: the kind of intelligence that makes us human. Can machines display such intelligence? (Can people, in organisations that treat them like machines?)

One successful mainstream approach to AI is to work with probabilities, such as P(A|B) (‘the probability of A given B’), making extensive use of Bayes’ rule; such an approach is sometimes thought to be ‘logical’, ‘mathematical’, ‘statistical’ and ‘scientific’. But, mathematically, we can generalise the approach by taking account of some context, C, using Jack Good’s notation P(A|B:C) (‘the probability of A given B, in the context C’). AI that is explicitly or implicitly statistical is more successful when it operates within a definite fixed context, C, for which the appropriate probabilities are (at least approximately) well-defined and stable. For example, training within an organisation will typically seek to enable staff (or machines) to characterise their job sufficiently well for it to become routine. In practice, ‘AI’-based machines often show a little intelligence beyond that described above: they will monitor the situation and ‘raise an exception’ when it strays too far outside what they ‘expect’. But this just points to the need for a superior intelligence to resolve the situation. Here I present some thoughts.
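
As a minimal illustration of why the context C matters, here is a sketch in Python. All the data and probabilities are made up; the point is only that the same conditional probability P(A|B) can differ sharply between contexts, which is what Good’s P(A|B:C) makes explicit.

```python
# A minimal sketch (with made-up data): a fixed table of probabilities
# is only trustworthy while the context C stays fixed.

def conditional_prob(events, a, b):
    """Estimate P(A=a | B=b) by relative frequency over (A, B) pairs."""
    given_b = [ev for ev in events if ev[1] == b]
    return sum(1 for ev in given_b if ev[0] == a) / len(given_b)

# Hypothetical observations gathered in two different contexts, C1 and C2.
context_c1 = [(True, True)] * 80 + [(False, True)] * 20  # P(A|B:C1) = 0.8
context_c2 = [(True, True)] * 20 + [(False, True)] * 80  # P(A|B:C2) = 0.2

print(conditional_prob(context_c1, True, True))               # 0.8
print(conditional_prob(context_c2, True, True))               # 0.2
# Pooling data across contexts silently averages the contexts away:
print(conditional_prob(context_c1 + context_c2, True, True))  # 0.5
```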

When we state ‘P(A|B)=p’ we are often not just asserting the probability relationship: it is usually implicit that ‘B’ is the appropriate condition to consider if we are interested in ‘A’. Contemporary mAI usually takes the conditions as given, and computes ‘target’ probabilities from given probabilities. Whilst this requires a kind of intelligence, it seems to me that humans will sometimes also revise the conditions being considered, and this requires a different type of intelligence (not just the ability to apply Bayes’ rule). For example, astronomers who refine the values of relevant parameters are displaying some intelligence and are ‘doing science’, but those first in the field, who determined which parameters were relevant, employed a different kind of intelligence and were doing a different kind of science. What we need, at least, is an appropriate way of interpreting and computing ‘probability’ to support this enhanced intelligence.
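
For contrast, here is the purely mechanical step that contemporary mAI performs: computing a ‘target’ probability from given ones via Bayes’ rule. The numbers are illustrative; note that nothing in the calculation tells us whether B was the right condition to consider in the first place.

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B). All numbers are illustrative.

def bayes(p_b_given_a, p_a, p_b):
    return p_b_given_a * p_a / p_b

p_a = 0.01                                  # prior P(A)
p_b_given_a = 0.90                          # likelihood P(B|A)
p_b = p_b_given_a * p_a + 0.05 * (1 - p_a)  # P(B) by total probability
print(bayes(p_b_given_a, p_a, p_b))         # posterior P(A|B), about 0.154
```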

The notions of Whitehead, Keynes, Russell, Turing and Good seem to me a good start, although they need to be explained better – hence this blog. Economics may provide an example. The notion of probability routinely used there would be appropriate if we were certain about some fundamental assumptions. But are we? At least we should realise that it is not logical to attempt to justify those assumptions by reasoning using concepts that implicitly rely on them.

Dave Marsay

Distilling the Science from the Art

Geoff Evatt (University of Manchester, UK) gave a ‘Mathematics in the Workplace’ talk at the recent Manchester Festival of Mathematics and its Applications, printed in the October 2014 issue of Mathematics Today.

He showed how the mathematical modeller could turn their hand to subjects as diverse as financial regulation and … .

He is critical of the view that ‘Mathematical Modelling is like an Art’ and advocates the prescriptive teaching of best practice. His main motivation seems to be to attract more students and to increase up-take by industry (etc.).

This … will be achieved by academics from a variety of universities agreeing on what ‘best practice’ in the teaching of modelling is … .

Comments

Taking the title first, I accept that the term ‘art’ may be misleading, but I am not convinced that there is much science in, for example, finance, or that those funding the mathematics really care; so the term ‘science’ could be equally misleading and more dangerous. I would say that mathematical modelling is often a craft. Where it is part of a proper scientific endeavour, I would think that this is because of the domain experts, and it ought to be certified from a scientific rather than a mathematical point of view. To me, ‘best practice’ is to work closely with domain experts, to give them what they need, and to make sure that they understand what they do – and don’t – have. It is good to seek to be scientific and objective, but not to misrepresent what has actually been achieved.

In the run-up to the financial crash, best practice included characterising mathematical modelling in this area as an ‘art’ and not a science, to prevent financiers and politicians from thinking that the ‘mathematical’ nature of the models somehow lent them the credibility normally accorded to mathematics. A key part of the financial problem was that this was not well enough understood.

A key part of economics is the concept of ‘uncertainty’. The classical mathematical models did not model uncertainty beyond mere probability, possibly because it was not covered by contemporary mainstream courses.

Best practice would include ensuring that the mathematics used was appropriate to the domain, or at least explaining any shortfalls. I think that this requires more development than Evatt supposes. I also think that one would need to go beyond academics, to include people who understand the issues involved.

Dave Marsay

What should replace utility maximization in economics?

Mainstream economics has been based on the idea of people producing and trading in order to maximize their utility, which depends on their assigning values and conditional probabilities to outcomes. Thus, in particular, mainstream economics implies that people do best by assigning probabilities to possible outcomes, even when there seems to be no sensible way to do so (such as when considering a possible crash). Ken Arrow has asked: if one rejects utility maximization, what should one replace it with?
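
As a reminder of what the theory demands, here is a minimal sketch (with purely illustrative numbers): assign a probability and a value to every outcome of every option, then pick the option with the greatest expected utility. The sticking point raised above is the first step: where does the probability of a crash come from?

```python
# Expected utility: E[U(option)] = sum over outcomes of p * u.
# All probabilities and utilities below are illustrative.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

options = {
    "safe bond":    [(1.00, 3.0)],                 # a certain, modest payoff
    "risky stocks": [(0.95, 5.0), (0.05, -40.0)],  # assumes the crash probability is known!
}
best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # 'safe bond' here; lower the assumed 0.05 a little and the answer flips
```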

The assumption here seems to be that it is better to have a wrong theory than no theory. The fear seems to be that economies would grind to a halt unless they were sanctioned by some theory – even a wrong one. But this fear seems at odds with another common view, that economies are driven by businesses, which are driven by ‘pragmatic’ men. It might be that without the endorsement of some (wrong) theory some practices, such as the development of novel financial instruments and the use of high leverage, would be curtailed. But would that be a bad thing?

Nonetheless, Arrow’s challenge deserves a response.

There are many variations in detail of utility maximization theories. Suppose we identify ‘utility maximization’ as one possible heuristic; then utility maximization theory claims that people use some specific heuristics, and an obvious alternative is to consider a wider range of heuristics. The implicit idea behind utility maximization theory seems to be that, under a competitive regime resembling evolution, the evolutionarily stable strategies (‘the good ones’) do maximize some utility function, so that in time utility maximizers ought to come to dominate economies. (Maybe poor people do not maximize any utility, but they – supposedly – have relatively little influence on economies.) But this idea is hardly credible. If – as seems to be the case – economies have significant ‘Black Swans’ (low-probability, high-impact events), then utility maximizers who ignore the possibility of a Black Swan (such as a crash) will do better in the short term, and so the economy will become dominated by people with the wrong utilities. People with the right utilities would do better in the long run, but face two problems: they need to survive the short term, and they need to estimate the probability of the Black Swan. No method has been suggested for doing the latter. An alternative is to take account of some notional utility but also take account of any other factors that seem relevant.
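
A rough simulation of the short-term/long-term point is sketched below, with entirely invented parameters: ‘swan-blind’, highly leveraged agents typically look better over short horizons, while over long horizons the typical swan-blind agent is wiped out. Note that the simulation has to assume a known crash probability, which is exactly what the text above says cannot be done in practice.

```python
import random, statistics

random.seed(0)

def run(years, crash_prob, leveraged):
    """One agent's wealth trajectory; all multipliers are invented."""
    wealth = 1.0
    for _ in range(years):
        if random.random() < crash_prob:
            wealth *= 0.0 if leveraged else 0.7   # a crash wipes out the leveraged
        else:
            wealth *= 1.15 if leveraged else 1.05
    return wealth

CRASH = 0.02  # assumed known, which is precisely the problem
for years in (5, 50):
    blind = statistics.median(run(years, CRASH, True) for _ in range(10_001))
    cautious = statistics.median(run(years, CRASH, False) for _ in range(10_001))
    print(years, round(blind, 2), round(cautious, 2))
# Typical output: at 5 years the median swan-blind agent is ahead;
# at 50 years the median swan-blind agent has nothing.
```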

For example, when driving a hire car along a winding road with a sheer drop, I ‘should’ adjust my speed to trade time of arrival against risk of death or injury. But usually I simply reduce my speed to the point where the risk is slight, and accept the consequential delay. These are qualitative judgements, not arithmetic trade-offs. Similarly, an individual might limit their at-risk investments (e.g. stocks) so that a reasonable fall (e.g. 25%) could be tolerated, rather than try to keep track of all the possible things that could go wrong (such as terrorists stealing a US Minuteman) and their likely impact.
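
The investment version of that rule fits in a few lines; all the numbers below are hypothetical, and the point is that no probability of a crash is estimated anywhere.

```python
# A qualitative 'tolerable loss' rule rather than an arithmetic trade-off.
savings = 100_000          # hypothetical portfolio
tolerable_loss = 10_000    # the loss the investor could live with
reasonable_fall = 0.25     # an assumed 'reasonable' stock-market fall

max_at_risk = tolerable_loss / reasonable_fall
print(min(max_at_risk, savings))   # at most 40,000 in stocks
```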

More generally, we could suppose that people act according to their own heuristics, and that there are competitive pressures on heuristics, without supposing that utility maximization is necessarily ‘best’, that a healthy economy relies on most people having similar heuristics, or that there is some stable set of ‘good’ heuristics. All these questions (and possibly more) could be left open for study and debate. As a mathematician, it seems to me that decision-making involves ideas, and that ideas are never unique or final, so that novel heuristics could arise and be successful from time to time; or at least, the contrary would require an explanation. In terms of game theory, the conventional theory seems to presuppose a fixed single-level game, whereas – like much else – economies seem to have scope for changing the game, and even for creating higher-level games, without limit. In that case, surely the strategies must change, and are created rather than drawn from a fixed set?

See Also

Some evidence against utility maximization. (Arrow’s response prompted this post).

My blog on reasoning under uncertainty with application to economics.

Dave Marsay

The limits of (atomistic) mathematics

Lars Syll draws attention to a recent seminar on ‘Confronting economics’ by Tony Lawson, as part of the Bloomsbury Confrontations at UCLU.

If you replace his every use of the term ‘mathematics’ by something like ‘atomistic mathematics’ then I would regard this talk as not only very important, but true. Tony approvingly quotes Whitehead on challenging implicit assumptions. Is his own implicit assumption that mathematics is ‘atomistic’? What about Whitehead’s own mathematics, or that of Russell, Keynes and Turing? He (Tony) seems to suppose that mathematics can’t deal with emergent properties. So what are Whitehead’s work on process, Keynes’ work on uncertainty, Russell’s work on knowledge and Turing’s work on morphogenesis all about?

Dave Marsay


Evolution of Pragmatism?

A common ‘pragmatic’ approach is to keep doing what you normally do until you hit a snag, and (only) then to reconsider. Whereas Lamarckian evolution would lead to the ‘survival of the fittest’, with everyone adapting to the current niche and tending to yield a homogeneous population, Darwinian evolution has survival of the maximal variety of all those who can survive, with characteristics only dying out when they are not viable. This evolution of diversity makes for greater resilience, which is maybe why ‘pragmatic’ Darwinian evolution has evolved.

The products of evolution are generally also pragmatic, in that they have virtually pre-programmed behaviours which ‘unfold’ in the environment. Plants grow and procreate, while animals have a richer variety of behaviours, but still tend just to do what they do. But humans can ‘think for themselves’ and be ‘creative’, and so have the possibility of not being just pragmatic.

I was at a (very good) lecture by Alice Roberts last night on the evolution of technology. She noted that many creatures use tools, but humans seem to be unique in that at some critical population mass the manufacture and use of tools becomes sustained through teaching, copying and co-operation. It occurred to me that much of this could be pragmatic. After all, until recently development has been very slow, and so may well have been driven by specific practical problems rather than continual searching for improvements. Also, the more recent upswing of innovation seems to have been associated with an increased mixing of cultures and decreased intolerance for people who think for themselves.

In biological evolution, mutations can lead to innovation, so evolution is not entirely pragmatic; but their impact is normally limited by the need to fit the current niche, so evolution typically appears to be pragmatic. The role of mutations is more to increase the diversity of behaviours within the niche than to innovate as such.

In social evolution there will probably always have been mavericks and misfits, but the social pressure has been towards conformity. I conjecture that such an environment has favoured a habit of pragmatism. These days, it seems to me, a better approach would be more open-minded, inclusive and exploratory; but possibly we do have a biologically-conditioned tendency to be overly pragmatic: to confuse conventions with facts and heuristics with laws of nature, and not to challenge widely-held beliefs.

The financial crash of 2008 was blamed by some on mathematics. This seems ridiculous. But the post-Cold War world was largely one of growth, with the threat of nuclear devastation much diminished, so it might be expected that pragmatism would be favoured. Thus powerful tools (mathematical or otherwise) could be taken up and exploited pragmatically, without enough consideration of the potential dangers. It seems to me that this problem is much broader than economics, but I wonder what the cure is, apart from better education and more enlightened public debate?

Dave Marsay


Traffic bunching

In heavy traffic, such as on motorways in rush-hour, there is often oscillation in speed and there can even be mysterious ‘emergent’ halts. The use of variable speed limits can result in everyone getting along a given stretch of road quicker.

Soros (worth reading) has written an article that suggests that this is all to do with the humanity and ‘thinking’ of the drivers, and that something similar is the case for economic and financial booms and busts. This might seem to indicate that ‘mathematical models’ were a part of our problems, not solutions. So I suggest the following thought experiment:

Suppose a huge number of identical driverless cars with deterministic control functions all try to go along the same road, seeking to optimise performance in terms of ‘progress’ and fuel economy. Will they necessarily succeed, or might there be some ‘tragedy of the commons’ that can only be resolved by some overall regulation? What are the critical factors? Is the nature of the ‘brains’ one of them?
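
One way to start on the thought experiment is the Nagel–Schreckenberg cellular-automaton model of single-lane traffic. The sketch below (all parameters illustrative) contrasts fully deterministic drivers with drivers who occasionally over-brake: with even a little random braking, flow typically degrades and stop-and-go waves can appear with no external cause.

```python
import random

ROAD, CARS, VMAX, STEPS = 200, 50, 5, 200  # ring road of 200 cells, 50 cars

def step(pos, vel, p_brake):
    """One parallel Nagel-Schreckenberg update on a ring road."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % ROAD     # empty cells to the car in front
        v = min(vel[i] + 1, VMAX, gap)             # accelerate, but never hit the car ahead
        if v > 0 and random.random() < p_brake:    # random over-braking ('driver' noise)
            v -= 1
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % ROAD
    return new_pos, new_vel

def mean_speed(p_brake):
    random.seed(1)
    pos = random.sample(range(ROAD), CARS)
    vel = [0] * CARS
    for _ in range(STEPS):
        pos, vel = step(pos, vel, p_brake)
    return sum(vel) / CARS

print("deterministic drivers:", mean_speed(0.0))  # typically settles into smooth flow
print("noisy drivers:        ", mean_speed(0.3))  # typically lower: phantom jams emerge
```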

Are these problems the preserve of psychologists, or does mathematics have anything useful to say?

Dave Marsay
