Mathematics and Action

NOTE: Under construction.


As a mathematician collaborating with people from a wide range of areas and with a broad range of views, I have often found that the relationship between mathematics and reasonable action is far from straightforward. Many scientists and practitioners are much better trained and experienced in the use of specific mathematics as tools than I am, including developments of calculus and applications of statistics. I tend to get engaged when, despite their (often world-class) expertise, things are going wrong somewhere. Often this seems to be because of some snag in communication between disciplines.

Previous generations thought of mathematics as dealing with abstractions from ‘reality’. Thus Euclidean geometry was thought of as a mathematical theory of space. This proved untenable, so we now distinguish between mathematical geometries (including a mathematization of conventional geometry) and physics: geometry may be absolutely correct as mathematics, absolutely false as physics, yet useful to engineers and others ‘of action’.

In most cases where I have had to get involved, people have failed to make this distinction. It is as if methods that an applied mathematician had developed and used to great effect on a small scale were being applied on progressively larger scales, without anyone considering whether scale might matter (e.g., due to the curvature of the earth).

The Issue

The question of the utility of a particular method (no matter how mathematical) in a particular real case is not itself a mathematical one, but mathematics may have something to say about it.

If one can somehow identify a recognizable real context within which a method has consistently and repeatedly proved useful, despite all challenges, then one can reasonably apply an inductive heuristic and suppose that the method may prove useful again, unless there are new challenges. Thus scientists in the laboratory do seem good at creating a consistent context for their experiments and at protecting them from challenging outside interference. This experienced success is the basis for the ‘hard sciences’. ‘Softer sciences’ and disciplines with scientific ambitions, such as economics, have experienced repeated failures, which calls into question their ability to establish or recognize consistent contexts and to protect their subjects from challenge. Thus the inductive heuristic of the hard sciences seems much less reliable there.

Sometimes I have been asked to look at problems that involve relatively hard sciences, but in a softer context. Problems seem to have arisen where hard scientists have followed their inductive habits and presented their findings as if they were from a definite context, whereas (it seemed to me) the softer aspects were important, creating a previously unrecognized uncertainty.

Formally or not, explicitly or not, many of the issues came down to applications of probability and statistics. These are underpinned by (mathematical) measure theory. The hard sciences, at least as I understand them, have long seemed to suppose that all ‘real’ things of interest are measurable, and in particular that it is ‘pragmatic’ to think in terms of unconditional probabilities. My main contribution has often been to point to some logical flaw in this thinking, allowing a sounder, if somewhat ad hoc, approach to be developed.

An Approach

I now speculate on the possibilities for a more structured approach.

In the hard sciences it seems to be a scientific fact that certain methods can be used to solve problems. In other areas methods can only provide information relative to some assumed context, so no method can be relied upon to give ‘the’ unconditional solution. But as with geometry, the language in which probability and statistics are couched does seem to suggest to the unwary that their results are good enough to be acted upon more or less directly. It is all very well pointing out that all unconditional uses of probability and statistics are ill-founded and require justification by reference to experience, but the cases in which I have been involved are all novel, so what are people to do?

Conventional mathematical models are notoriously misleading, and part of the solution is to consider alternatives, to challenge them and to address any anomalies by doing more modelling. This is, in essence, how the hard sciences work. It is just that outsiders typically see only the results, not the process.

Scientists and mathematicians are often accused of seeing things in terms of ‘machines’, and this does seem to be a problem. Yet working with engineers I find that the best of them do not actually think of machines and other constructs in the stereotypical way that seems to be a problem. I offer the following analogies:

  • A driver might see his car as something that can’t cope with snow. Yet a better driver, with a better understanding of the mechanism and its potentials, may be able to manipulate the controls skilfully to be able to cope.
  • We might have often observed a lorry manoeuvring, including reversing, without ever realising that it may have more than one reverse gear.

So I think the problem is that often we see a real machine in terms of some conceptual machine which can only work in ways we have experienced or been told about. This, it seems to me, applies as much to living creatures as to inert machines.

Observations of States?

In terms of science, we conventionally model things in terms of state spaces and operations defined on them. (See, for example, Wikipedia.) I think this may be what some people mean by ‘a mathematical model’. But mathematics as such is fundamentally different: it takes an axiomatic approach, as in geometry. A particular mathematical structure that satisfies a set of axioms is called a ‘model’. In the hard sciences it sometimes seems to be assumed that one can always do more experiments to identify and add more axioms until one reaches a situation in which all models have the same real implications. But actually all one can really say is this:

One can sometimes reach ‘experimental closure’, whereby given a range of challenges that can be combined experimentally, one reaches a (possibly probabilistic) mathematical model that seems impervious to such challenges.

That is, one can reach a position where the inductive habit of the hard sciences seems effective, at least until one comes up with a new challenge (such as a higher-energy probe or a more accurate instrument).

But it seems to me that in the cases where I have been involved this is ‘obviously’ (to me) not the case. Yet people seem to think it ‘rational’ to act as if one’s best theory were a ‘well-tested’ theory in the above sense and to ignore the caveats and uncertainties. It seems better, to me, to admit the limitations of the theory, to seek to establish sounder grounds from which to reason, and to proceed cautiously, typically ‘hedging’ across alternative possibilities rather than optimising against one’s best guess.
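By way of illustration, here is a minimal sketch (with invented payoffs, actions and model names — nothing here comes from any real case) of the difference between optimising against one’s best guess and hedging across the live alternatives:

```python
# Sketch (invented numbers): 'optimise against the best guess' versus
# 'hedge across the models one cannot yet rule out'.
payoffs = {
    "act_A": {"model_1": 10, "model_2": 9, "model_3": -50},
    "act_B": {"model_1": 6,  "model_2": 5, "model_3": 4},
}
best_guess = "model_1"  # the model that currently fits best

# Optimising against the best guess picks the fragile action...
opt = max(payoffs, key=lambda a: payoffs[a][best_guess])

# ...while hedging (here, crudely: maximin over all live models)
# prefers the action whose worst case across models is least bad.
hedged = max(payoffs, key=lambda a: min(payoffs[a].values()))

print(opt)     # act_A: excellent if model_1 holds, disastrous under model_3
print(hedged)  # act_B: modest but robust
```

Maximin is only the crudest hedging rule; the point is merely that the robust choice need not coincide with the best-guess optimum.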

The above works for me, but seems to rely on a ‘mathematical intuition’ that is either lacking or too easily overridden by some psychological, organisational or cultural sense of ‘rationality’. Hence my blog.

Observations as Processes

But now I speculate further:

If what we know is ultimately based on observation then maybe we should base our models on observations as processes, rather than being ‘of’ states.

Normally when I see things I can point them out to others, who will confirm my observations. But this is not always true of art, for example, and is reliably false for rainbows: if I see a rainbow that ends at a friend’s house I don’t expect them to share my experience.

Thus I should initially regard all things as possible. My ‘axioms’ are that an inability (given the current conditions and ‘challenge capabilities’) to discriminate between certain things is well-tested, either directly or in prescribed combinations.

We would like to have a simple logic in which we can deduce that we will not be able to discriminate between certain combinations that we have not yet tested. In general there is no reason why this should be so, and the inability to discriminate should only be a conjecture. Yet in some cases we do find that there are certain methods for generating such conjectures, and the notion that such conjectures hold may come to be well-tested. In which case this too may be an axiom.
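As a toy sketch (my own, assuming purely for illustration that the conjecture-generating rule is transitivity of indiscriminability), one can separate the tested axioms from the deduced, as-yet-untested conjectures:

```python
# Toy sketch (my illustration): tested indiscriminabilities as 'axioms',
# with transitivity as a conjecture-generating rule. Pairs deduced by the
# rule remain mere conjectures until they are tested themselves.
from itertools import combinations

tested = {("a", "b"), ("b", "c"), ("d", "e")}  # pairs found indiscernible so far

def conjectured(pairs):
    """Symmetric-transitive closure of the tested pairs, via union-find."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for x, y in pairs:
        parent[find(x)] = find(y)
    items = sorted(parent)
    return {(x, y) for x, y in combinations(items, 2) if find(x) == find(y)}

# The rule deduces pairs beyond what has been tested:
new = conjectured(tested) - {tuple(sorted(p)) for p in tested}
print(new)  # {('a', 'c')} -- deduced, not yet tested, hence only a conjecture
```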

Some points to note:

  1. The axioms are about ‘things’ that are not conventional mathematical structures.
  2. Not all axioms are ‘at the same level’.
  3. An axiom only describes the result of testing so far: as with geometry, we may need to revise it to take account of further tests.
  4. ‘Well-testedness’ relates to our ability to discriminate, which is subjective, subject to change and even to some extent subject to our choice.

At any one time our model may be an n-category theory, subject to revision at any level.

  • Looked at in this way one of the most significant things would seem to be establishing ‘the level of the challenge’. There is no point focussing on lower levels as such if the higher levels are in need of revision. But if we are limited – as we seem to be – in the levels at which we can directly test, then it may be fruitless to try to challenge too high a level unless and until we are forced to in order to resolve lower-level issues.
  • The axioms may or may not determine models that are unique up to some equivalence, and hence may or may not determine conventional mathematical structures.
  • There may be a lowest level that is unconditionally probabilistic, but this would be subject to revision by the level above, and challenges at the same level may necessitate the introduction of a level below.
  • There may be a highest level, but challenges may result in the need for a higher level.


(Compare the earlier analogies: the car in snow, the lorry’s reverse gears.)

Probability, Expectation

Probabilities are conditional on nothing changing at the higher level and on the lower levels continuing to be noise-like, and hence on it being reasonable to treat them as essentially static in their characteristics. They only reflect how things have been. We can extrapolate (assuming a reasonable quantity of data), but such extrapolations are always caveated. We can form mathematical expectations in the usual way, but should we really ‘expect’ them to be borne out? Or should we ‘expect the unexpected’? It depends on the situation, and in particular on the propensity for change at the higher level. (See, for example, Turing’s Morphogenesis.)
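A toy simulation (invented numbers, nothing from any real case) of why such extrapolations must be caveated — the sample expectation is borne out only while the higher level stays put:

```python
# Sketch (invented numbers): a sample expectation extrapolates well while
# the higher level is stable, and silently fails after a regime change.
import random
random.seed(0)

stable = [random.gauss(5.0, 1.0) for _ in range(1000)]
expectation = sum(stable) / len(stable)   # close to 5.0: a fair extrapolation

# Now suppose something changes at the higher level:
shifted = [random.gauss(9.0, 1.0) for _ in range(1000)]
error = abs(sum(shifted) / len(shifted) - expectation)

print(round(expectation, 1))  # close to 5
print(round(error, 1))        # close to 4: the 'expected' is not borne out
```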

In the hard sciences, under laboratory conditions, we expect the expected; less so in other domains.

Uncertainty Principle


The Heisenberg uncertainty principle has it that the more certain you can be about a particle’s position the less certain you can be about its momentum, and similarly for other pairs of variables.

Thinking in terms of categories, the more effort we put into analysing one category, the less we can put into analysing the higher or lower levels: we can only reasonably extrapolate at a given level if the lower level is noise-like and the higher level is stable.

For example, there are often trade-offs between exploiting a situation as it is and exploring alternative possibilities.
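This trade-off can be sketched as a toy two-armed ‘bandit’ (the ε-greedy rule and all the numbers are my own illustration, not anything from the cases discussed):

```python
# Toy epsilon-greedy bandit (my illustration): a purely exploiting agent
# risks locking on to whichever arm happened to pay off first; a little
# exploration keeps its estimates of the alternatives alive.
import random
random.seed(1)

true_means = [0.3, 0.7]  # the arms' real payoff rates, unknown to the agent

def run(epsilon, steps=5000):
    counts, sums = [0, 0], [0.0, 0.0]
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon or 0 in counts:
            arm = random.randrange(2)  # explore (or initialise both arms)
        else:
            arm = max((0, 1), key=lambda a: sums[a] / counts[a])  # exploit
        reward = 1.0 if random.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total / steps

avg = run(0.1)          # spend 10% of the time exploring
print(round(avg, 2))    # long-run average reward, near the better arm's rate
```

With ε = 0 the agent never revisits its early, possibly misleading estimates; the small exploration budget is the ‘adaptive potential value’ of diversity in miniature.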


Many systems of interest are adaptive: that is, their immediate functioning is conditioned by some higher-level process. If we model the system as a category-of-categories then it too must be subject to an uncertainty principle.

Note that we are assuming that the other system is constrained in much the same way as our logical thinking is: we rule out any kind of magic. But of course our model may be very wrong, so in fact the system of interest need not be adaptive in quite the way we suppose. But if we think our model reasonable, it is reasonable to suppose it adaptive in a comprehensible way.

A consequence of this is that if we subject a system of interest to constant enough challenge then it will be unable to adapt: to do so requires some pause and reflection.


In so far as we only have a categorical model, there will normally be a particular level of exploitation, with a possibility for exploration at the lower levels. The more we have explored the more options we will have for future adaptation, as and when required. Thus diversity has an ‘adaptive potential value’ that need not unduly compromise current exploitation.

Similarly, we should regard other systems of interest as being more viable when they have such diversity, and are not unduly optimized.

Diversity and adaptability can be promoted by identifying aspects that are not essential to current exploitation and challenging them (without challenging the essentials).


So far, it seems to me that some kind of logically consistent ‘process thinking’ might be an improvement on narrowly ‘mechanistic’, reductionist, stochastic or culturally conditioned reasoning, and that it might be possible to guard it against cultural corruption, at least in so far as such sources have been identified, and this might be ‘a good thing’. So far it seems that notions of ‘principled relativism’, discrimination and communication are vital and could be incorporated into a logical model to provide an alternative to conventional ‘mathematical models’. Of course one would need to check for undue cultural influence and unwarranted assumptions, and this would be an on-going process. But maybe one could at least come up with something that doesn’t conflict with our experience so far.

Dave Marsay
