The limits of pragmatism

This is a personal attempt to identify and articulate a fruitful form of pragmatism, as distinct from what seem to me the many dangerous forms. My starting point is the Wikipedia account of pragmatism, and my notion that the distinctions it notes can sometimes matter.

Doubt, like belief, requires justification. Genuine doubt irritates and inhibits, in the sense that belief is that upon which one is prepared to act.[2] It arises from confrontation with some specific recalcitrant matter of fact (which Dewey called a “situation”), which unsettles our belief in some specific proposition. Inquiry is then the rationally self-controlled process of attempting to return to a settled state of belief about the matter. Note that anti-skepticism is a reaction to modern academic skepticism in the wake of Descartes. The pragmatist insistence that all knowledge is tentative is quite congenial to the older skeptical tradition.

My own contribution to things scientific has been on some very specific issues, which I attempt to generalise here:

  • It sometimes seems much too late to wait to act on doubt until there is something that pragmatic folk recognize as a ‘specific recalcitrant matter of fact’. I would rather say (with the skeptics) that we should always be in some doubt, but that our actions require justification, and we should only invest in proportion to that justification. Requiring ‘facts’ seems too high a hurdle to act at all.
  • Psychologically, people do seek ‘settled states of belief’, but I would rather say (with the skeptics) that the degree of settledness ought to extend only so far as is justified. Relatively settled belief, but not fundamentalist dogma!
  • It is often supposed that ‘facts’ and ‘beliefs’ should concern the ‘state’ of some supposed ‘real world’. There is some evidence that it is ‘better’, in some sense, to think of the world as one in which certain processes are appropriate. In this case, as in category theory, the apparent state arises as a consequence of sufficient constraints on the processes. This can make an important difference when one considers uncertainties, although in ‘small worlds’ there are no such uncertainties.

It seems to me that the notion of ‘small worlds’ is helpful. A small world would be one which could be conceived of or ‘mentally modelled’. Pragmatists (of differing varieties) seem to believe that often we can conceive of a small world representation of the actual world, and act on that representation ‘as if’ the world were really small. So far, I find this plausible, even if not my own habit of thinking. The contentious point, I think, is the claim that in every situation we should do our best to form a small world representation and then act as if it were true unless and until we are confronted with some ‘specific recalcitrant matter of fact’. This can be too late.

But let us take the notion of a ‘small world’ as far as we can. It is accepted that the small world might be violated. If it could be violated as a consequence of something that we might inadvertently do, then it hardly seems a ‘pragmatic’ notion in terms of ordinary usage, and might reasonably be said to be dangerous in so far as it lulls us into a false sense of security.

One common interpretation of ‘pragmatism’ seems to be that we may as well act on our beliefs, as there seems no alternative. But I shall refute this by presenting one. Another interpretation is that there is no ‘practical’ alternative. That is to say, whatever we do could not affect the potential violation of the small world. But if this is the case, it seems to me that there must be some insulation between ourselves and the small world. Thus the small world is actually embedded in some larger closed world. But do we just suppose that we are so insulated, or do we have some specific closed world in mind?

It seems to me that doubt is more justified the less our belief in insulation is justified. Even when we have specific insulation in mind, we surely need to keep an open mind and monitor the situation for any changes, or any reduction in justification for our belief.

From this, it seems to me that (as in my own work) what matters is not having some small world belief, but taking a view on the insulations between what you seek to change and what you seek to rely on as unchanging; and, from these, identifying not just a single credible world in which to anchor one’s justifications for action, but seeking out credible possible small worlds in the hope that at least one may remain credible as things proceed.

Dave Marsay

See also my earlier thoughts on pragmatism, from a different starting point.

Which pragmatism as a guide to life?

Much debate on practical matters ends up in distracting metaphysics. If only we could all agree on what was ‘pragmatic’. My blog is mostly negative, in so far as it rubbishes various suggestions, but ‘the best is the enemy of the good’, and we do need to do something.

Unfortunately, different ‘schools’ start from a huge variety of different places, so it is difficult to compare and contrast approaches. But it is about time I had a go. (In part inspired by a recent public engagement talk on mathematics).

Suppose you have a method Π that you regard as pragmatic, in the sense that you can always act on it. To justify this, I think (like Popper) that you should have criteria, Γ, which if falsified would lead you to reconsider Π. So your pragmatic process is actually

If Γ then Π, else reconsider.

But this is hardly reasonable if we try to arrange things so that Γ will never appear to be falsified. So an improvement is:

Spend some effort in monitoring Γ. If it is not falsified then Π.

In practice, if one thinks that Γ can be relied on, one may not think it worth spending much effort on checking it, but surely one should at least be open to suggestions that it could be wrong. The proper balance between monitoring Γ and acting on Π seems impossible to establish with any confidence, but ignoring all evidence against Γ seems risky, to say the least.
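A minimal sketch of this monitored process, in Python (my own framing; act, gamma_holds and reconsider are hypothetical placeholders, and the monitoring rate is purely illustrative):

```python
import random

def pragmatic_process(act, gamma_holds, reconsider,
                      monitor_rate=0.1, max_steps=1000):
    """Sketch of: 'Spend some effort in monitoring Gamma.
    If it is not falsified then Pi.'"""
    for _ in range(max_steps):
        # Spend a fraction of our effort checking the caveat Gamma...
        if random.random() < monitor_rate and not gamma_holds():
            return reconsider()  # Gamma falsified: stop and reconsider Pi
        act()                    # ...otherwise act on Pi
```

Here monitor_rate stands for the ‘proper balance’ just mentioned, which cannot be fixed with any confidence; the point of the sketch is only that it should not be zero.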

Some argue that if you have no alternative to Π then it is pointless considering Γ. This may be a reasonable argument when applied to concepts, but not to actions in the real world. Whatever evidence we may have for Π, it will never uniquely prove it. It may be that it rules out all the alternatives that we have thought of, or which we consider credible or otherwise acceptable, but then we should think again. Logically, there are always alternatives.

The above clearly applies to science. No theory is ever regarded as absolute and settled for ever. Scientists make their careers by identifying alternative theories to explain the experimental results and then devising new experiments to try to falsify the current theory. This process could only ever end when we were all sure that we had performed every possible experiment using every possible means in every possible circumstance, which implies the end of evolution and inventiveness. We aren’t there yet.

My proposal, then, is that very generally (not just in science) we ought to expect any ‘pragmatic’ Π to include a specific ‘caveat’, Γ(Π). If it doesn’t, we ought to develop one. This caveat will include its own falsification rules and tests, and we ought to regard more severe tests (in some sense) as better. We then seek to develop alternatives that might be less precise (and hence less ‘useful’) than Π but which might survive a falsification of Π.

Much of my blog has some ideas on how to do this in particular cases: a work in progress. But an example may appeal:

Faced with what looks like a coin being tossed, we might act ‘as if’ we believe it to be fair and to correspond to the axioms of mathematical probability theory, but keep an eye out for evidence to the contrary. Perhaps we inspect it and toss it a few times. Perhaps we watch whoever tosses it carefully. We do what we can, but still, if someone tosses it and over a very long run gets an excess of ‘Heads’ that our statistical friends tell us is hugely significant, we may be suspicious and reconsider.

In this case we may decline to gamble on coin tosses even if we lack a specific ‘theory of the coin’, but it might be better if we had an alternative theory. Perhaps it is an ingenious fake coin? Perhaps the person tossing it has a cunning technique to bias it? Perhaps the person tossing it is a magician, and is actually faking the results?
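As a minimal sketch of the statistical check (a two-sided test under a normal approximation; the significance threshold is my choice, not part of the argument):

```python
from math import erf, sqrt

def fair_coin_suspicious(heads, tosses, p_threshold=1e-6):
    """Return True if this excess of heads would be 'hugely significant'
    under the small-world theory that the coin is fair."""
    mean, sd = tosses / 2, sqrt(tosses) / 2            # Binomial(n, 1/2) moments
    z = abs(heads - mean) / sd                         # normal approximation
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # two-sided tail area
    return p_value < p_threshold

print(fair_coin_suspicious(5300, 10000))  # a 6-sigma excess: True, so reconsider
```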

This seems to me like a good approach, surely better than acting ‘pragmatically’ but without such falsifying criteria. Can it be improved upon? (Suggestions, please!)

Dave Marsay

The search for MH370: uncertainty

There is an interesting podcast about the search for MH370 by a former colleague. I think it illustrates in a relatively accessible form some aspects of uncertainty.

According to the familiar theory, if one has an initial probability distribution over the globe for the location of MH370’s flight recorder, say, then one can update it using Bayes’ rule to get a refined distribution. Conventionally, one should search where there is a higher probability density (all else being equal). But in this case it is fairly obvious that there is no principled way of deriving an initial distribution, and even Bayes’ rule is problematic. Conventionally, one should do the best one can, and search accordingly.
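To illustrate the familiar theory, here is a toy one-dimensional version (the grid, the detection probability and the searched cells are all invented for illustration):

```python
import numpy as np

prior = np.full(100, 1 / 100)   # initial distribution over 100 search cells
p_detect = 0.8                  # chance a search finds the object if it is there

def update_after_miss(prior, searched):
    """Bayes' rule after an unsuccessful search of the given cells."""
    likelihood = np.ones_like(prior)
    likelihood[searched] = 1 - p_detect   # it could be there but undetected
    posterior = likelihood * prior
    return posterior / posterior.sum()

posterior = update_after_miss(prior, searched=np.arange(40, 60))
# Mass shifts away from the searched cells; conventionally one then
# searches wherever the posterior density is now highest.
```

The difficulty noted above is not with this arithmetic, but with where the initial distribution comes from.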

The podcaster (Simon) gives examples of some hypotheses (such as the pilot being well, well-motivated and unhindered throughout) for which the probabilistic approach is more reasonable. One can then split one’s effort over such credible hypotheses as are not ruled out by evidence.

A conventional probabilist would note that any ‘rational’ search would be equivalent to some initial probability distribution over hypotheses, and hence some overall distribution. This may be so, but it is clear from Simon’s account that this would hardly be helpful.

I have been involved in similar situations, and have found it easier to explain the issues to non-mathematicians when there is some severe resource constraint, such as time. For example, we are looking for a person. The conventional approach is to maximise our estimated probability of finding them, based on our estimated probabilities of them having acted in various ways (e.g., run for it, hunkered down). An alternative is to consider the ways they may ‘reasonably’ be thought to have acted and then to seek to maximise the worst-case probability of finding them. Then again, we may have a ranking of ways that they may have acted, and seek to maximise the number of ways for which the probability of our success exceeds some acceptable amount (e.g. 90%). The key point here is that there are many reasonable objectives one might have, for only one of which the conventional assumptions are valid. The relevant mathematics does still apply, though!
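A sketch of the contrast, with invented numbers (the 0.6 threshold replaces the 90% of the example, chosen so that the three objectives visibly disagree):

```python
import numpy as np

# Rows: ways the person may 'reasonably' have acted (run, hunker down, ...).
# Columns: search plans we might adopt. Entries: P(find | behaviour, plan).
P = np.array([[0.95, 0.55, 0.65],
              [0.05, 0.50, 0.95],
              [0.05, 0.50, 0.30]])
behaviour_probs = np.array([0.7, 0.2, 0.1])  # estimates, for the conventional view

expected = behaviour_probs @ P        # conventional: maximise estimated P(find)
worst_case = P.min(axis=0)            # alternative: maximise the worst case
satisficing = (P >= 0.6).sum(axis=0)  # alternative: count behaviours covered

for name, scores in (("expected", expected),
                     ("worst case", worst_case),
                     ("satisficing", satisficing)):
    print(name, "-> plan", int(scores.argmax()), scores)
# Each objective picks a different plan (0, 1 and 2 respectively), even
# though the same mathematics of probability is applied throughout.
```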

Dave Marsay

More to Uncertainty than Probability!

I have just had my paper published, based on the discussion paper referred to in a previous post. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

(Actually, my paper was published in Jan 2016, but somehow this request for comments got stuck in limbo somewhere. Better late than never?)

Which rationality?

We often suppose that rationality is ‘a good thing’, but is it always?

Rationality is variously defined as being in accord with reason, logic or ‘the facts’. Here ‘reason’ may mean one’s espoused or actual reasons, or it may mean in accord with some external standards. Thus in its broadest interpretation, it seems that anything that has a reason for being the way that it is may be considered broadly rational. But the notion of rationality derives from ‘reason’, one aspect of which is ‘sound judgement, good sense’. This suggests some external standard.

If we use the term ‘simple’ to denote a situation in which there are definite ‘objective’ standards of soundness and goodness, then rationality in simple situations is behaviour that accords with those standards. Philosophers can argue endlessly about whether any such situations exist, so it seems sensible to define rationality more generally as being relative to some set of standards. The question then being: What standards?

My natural inclination as a mathematician is that those standards should always include the best relevant logics, including mathematics. Yet I have witnessed many occasions on which the use of mathematics has tended to promote disasters, and the advocates of such approaches (apart from those few who think like me) have seemed culpable. I have a great deal of respect and sympathy for the view that mathematics is harmful in complex situations. Yet intellectually it seems quite wrong, and I cannot accept it.

In each case there seems to be some specific failing, which many of my colleagues have attributed to some human factor, such as hubris or the need to keep a job or preserve an institution. But the perpetrators do not seem to me to be much different from the rest of us, and I have long thought that there is some more fundamental common standard that is incompatible with the use of reason. The financial crises of 2007/8/9 are cases where it is hard to believe that most of those pushing the ‘mathematical’ view that turned out to be harmful were either irrational or rationally harmful.

Here I want to suggest a possible explanation.

From a theoretical perspective, there are no given ‘facts’, ‘logics’ or ‘reasons’ that we can rely on. This certainly seems to be true of finance and economics. For example, in economics the models used may be mathematical, and in this sense beyond criticism, but the issue of their relevance to a particular situation is never purely logical, and ought to be questioned. Yet it seems that many institutions, including businesses, rely on having absolute beliefs: questioning them would be wasteful in the short run. So individual actors tend not only to be rational, but to be narrowly rational ‘in the short run’, which normally goes with acting ‘as if’ one had narrow facts.

For example, it seems to me to be a fact that, according to the best contemporary scientific theories, the earth is not stationary. It is generally expedient for me to act ‘as if’ I knew that the earth moved. But unless we can be absolutely sure that the earth moves, the tendency to suppose that it is a fact that the earth moves could be dangerous. (You could try substituting other ‘facts’, such as that economies always tend to a healthy equilibrium.)

In a healthy society there would be a competition of ideas, such that society as a whole could be said to be more broadly rational, even while its actors were being only narrowly rational. For example, a science would consist of various schools, each of which would be developing its own theories, consistent with the facts, which between them would be exploring and developing the space of all such credible theories. At a practical level, an engineer would appreciate the difference between building a bridge based on a theory that had been tested on similar bridges, and building a novel type of bridge where the existing heuristics could not be relied upon.

I do not think that human society as a whole is healthy in this sense. Why not? In evolutionary theory separate niches, such as islands, promote the development of healthy diversity. Perhaps the rise of global communications and trade, and even the spread of the use of English, is eliminating the niches in which ideas can be explored and so is having a long-run negative impact that needs to be compensated for?

Thus I think we need to distinguish between short and long-run rationalities, and to understand and foster both. It seems to me that most of the time, for most areas of life, short-run rationality is adequate, and it is this that is familiar. But this needs to be accompanied by an understanding of the long-run issues, and an appropriate balance achieved. Perhaps too much (short-run) rationality can be harmful (in the long-run). And not only in economies.

Dave Marsay

Decision-making under uncertainty: ‘after Keynes’

I have a new discussion paper. I am happy to take comments here, on LinkedIn, at the more formal Economics e-journal site or by email (if you have it!), but I wish to record substantive comments on the journal site, while continuing to build up here a collection of whatever any school of thought may think relevant, with my comments.

Please do comment somewhere.

Clarifications

I refer to Keynes’ ‘weights of argument’ mostly as something to be taken into account in addition to probability. For example, if one has two urns, each with a mix of 100 otherwise identical black and white balls, where the first urn is known to have equal numbers of each colour but the mix for the other urn is unknown, then conventionally one has an equal probability of drawing a black ball from each urn, but the weight of argument is greater for the first than for the second.
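A toy illustration of the contrast (my own construction, not Keynes’): the point probability is the same for both urns, but the set of probabilities consistent with what we know is not:

```python
# First urn: the mix is known, so only one probability is consistent.
known_urn = {0.5}

# Second urn: any mix of 100 balls is possible, so 101 probabilities are.
unknown_urn = {k / 100 for k in range(101)}

# By symmetry, the 'balance' of the second set is again 0.5 ...
print(sum(unknown_urn) / len(unknown_urn))    # 0.5
# ... but the spread of consistent probabilities is very different.
print(min(known_urn), max(known_urn))         # 0.5 0.5
print(min(unknown_urn), max(unknown_urn))     # 0.0 1.0
```

The weight of argument is greater where the set of probabilities consistent with the evidence is tighter.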

Keynes does not fully develop his notion of weights, and it seems not to be well understood; I wanted my overview of Keynes’ views to be non-contentious. But from some off-line comments I see that I should clarify.

Ch. VI para 8 is worth reading, followed by Ch. III para 8. Whatever the weight may be, it is ‘strengthened by’:

  • Being more numerous.
  • Having been obtained with a greater variety of conditions.
  • Concerning a greater generalisation.

Keynes argues that this weight cannot be reduced to a single number, and so weights can be incomparable. He uses the term ‘strength’ to indicate that something is increased while recognizing that it may not be measurable. This can be confusing, as in Ch. III para 7, where he refers to ‘the strength of argument’. In simple cases this would just be the probability, not to be confused with the weight.

It seems to me that Keynes’ concerns relate to Mayo’s:

Severity Principle: Data x provides good evidence for hypothesis H if and only if x results from a test procedure T which, taken as a whole, constitutes H having passed a severe test – that is, a procedure which would have, with very high probability, uncovered the discrepancies from H, and yet no such error is detected.

In cases where one has performed a test, severity seems roughly to correspond to having a strong weight, at least in simpler cases. Keynes’ notion applies more broadly. Currently, it seems to me, care needs taking in applying either to particular cases. But that is no reason to ignore them.
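As a rough illustration in the simplest case (my construction, not Mayo’s or Keynes’: for a one-sided binomial test, severity against a given discrepancy behaves like the power of the test, and grows with the amount of evidence):

```python
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def rough_severity(n, p0=0.5, p1=0.6):
    """Probability that an n-toss, 5%-level test of H: p = p0 would have
    uncovered a discrepancy as large as p1 - p0 (i.e. the test's power)."""
    cutoff = p0 + 1.645 * sqrt(p0 * (1 - p0) / n)  # 5% one-sided rejection point
    z = (cutoff - p1) / sqrt(p1 * (1 - p1) / n)
    return 1 - normal_cdf(z)

print(rough_severity(100), rough_severity(1000))   # ~0.64 vs ~0.999998
# The larger test is far more severe, so passing it carries more weight.
```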

Dave Marsay

(Pseudo-)Mathiness

Paul Romer has recently attracted attention by his criticism of what he terms ‘mathiness’ in economic growth theory. As a mathematician, I would have thought that economics could benefit from more mathiness, not less. But what he seems to be denigrating is not mathematics as I understand it, but what Keynes called ‘pseudomathematics’. In his main example the problem is not inappropriate mathematics as such, but a succession of symbols masquerading as mathematics, which Paul unmasks using – mathematics. Thus, it seems to me the paper that he is criticising would have benefited from more (genuine) mathiness and less pseudomathiness.

I do agree with Paul, in effect, that bad (pseudo) mathematics has been crowding out the good, and that this should be resisted and reversed. But, as a mathematician, I guess I would think that.

I also agree with Paul that:

We will make faster scientific progress if we can continue to rely on the clarity and precision that math brings to our shared vocabulary, and if, in our analysis of data and observations, we keep using and refining the powerful abstractions that mathematical theory highlights … .

But more broadly, some of Paul’s remarks suggest to me that we should be much clearer about the general theoretical stance and the role of mathematics within it. Even if an economics paper makes proper use of some proper mathematics, this only ever goes so far in supporting economic conclusions, and I have the impression that Paul is expecting too much, such that any attempt to meet his requirement with mathematics would necessarily be pseudo-mathematics. It seems to me that economics can never be a science like the hard sciences, and as such it needs to develop an appropriate logical framework. This would be genuinely mathsy but not entirely mathematical. I have similar views about other disciplines, but the need is perhaps greatest for economics.

Media

Bloomberg (and others) agree that (pseudo)-mathiness is rife in macro-economics and that (perhaps in consequence) there has been a shift away from theory to (naïve) empiricism.

Tim Harford, in the FT, discusses the related misuse of statistics.

… the antidote to mathiness isn’t to stop using mathematics. It is to use better maths. … Statistical claims should be robust, match everyday language as much as possible, and be transparent about methods.

… Mathematics offers precision that English cannot. But it also offers a cloak for the muddle-headed and the unscrupulous. There is a profound difference between good maths and bad maths, between careful statistics and junk statistics. Alas, on the surface, the good and the bad can look very much the same.

Thus, contrary to what is happening, we might look for a reform and reinvigoration of theory, particularly macroeconomic.

Addendum

Romer adds an analogy between his mathiness, which has actual formulae on the one hand and a description on the other, and computer code, which typically has both the actual code and some comments. Romer’s mathiness is like code that is obscure and whose comments are wrong, as when the code does a bubble sort but the comment says it does a prime number sieve. He gives the impression that in economics this may often be deliberate. But a similar phenomenon arises when the coder made the comment in good faith, so that the code appears to do what the comment says, but there is some subtle, technical, flaw. A form of pseudo-mathiness is when one is heedless of such a possibility. The cure is more genuine mathiness. Even in computer code, it is possible to write code that is more or less obscure, and the less obscure code is typically more reliable. Similarly in economics, it would be better for economists to use mathematics that is within their competence, and to strive to make it clear. Maybe the word Romer is looking for is obscurantism?
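To make the analogy concrete, here is a toy illustration (mine, not Romer’s) of a comment made in good faith that masks a subtle technical flaw, the software analogue of the pseudo-mathiness described above:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""  # written in good faith
    is_prime = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5)):       # SUBTLE FLAW: should be int(n**0.5) + 1,
        if is_prime[p]:                      # so for n = 25 the factor 5 is never used
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [k for k in range(2, n + 1) if is_prime[k]]

print(primes_up_to(25))   # 25 is wrongly reported as prime
```

The comment honestly describes what the code appears to do, and yet a reader heedless of the possibility of such flaws will be misled; the cure is clearer, more scrutable code, just as the cure for pseudo-mathiness is more genuine mathiness.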

Dave Marsay