AI pros and cons

Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher, The Metamorphosis, The Atlantic, August 2019.

AI will bring many wonders. It may also destabilize everything from nuclear détente to human friendships. We need to think much harder about how to adapt.

The authors are looking for comments. My initial reaction is here. I hope to say more. Meanwhile, I’d appreciate your reactions.


Dave Marsay

What logical term or concept ought to be more widely known?

Various, What scientific term or concept ought to be more widely known?, Edge, 2017.

INTRODUCTION: SCIENTIA

Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. …

Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.

Contributions

As against others on:

(This is as far as I’ve got.)

Comment

I’ve grouped the contributions according to whether or not I think they give due weight to the notion of uncertainty as expressed in my blog. Interestingly, Steven Pinker seems not to give due weight in his article, whereas Nicholas G. Carr credits him with some profound insights (in the first of the second batch). So maybe I am not reading them right.

My own suggestion would be Turing’s theory of ‘morphogenesis’. The particular predictions seem to have been confirmed ‘scientifically’, but it is essentially a logical / mathematical theory. If, as the introduction suggests, science is “reliable methods for obtaining knowledge”, then it seems to me that logic and mathematics are more reliable than empirical methods, and deserve some special recognition. I must concede, though, that it may be hard to tell logic from pseudo-logic, and that unless you can do so my distinction is potentially dangerous.

Morphogenesis

The second law of thermodynamics, and much common-sense rationality, assume a situation in which the law of large numbers applies. But Turing adds to the second law’s notion of random dissipation a notion of relative structuring (as in gravity), and shows that ‘critical instabilities’ are then inevitable. These are inconsistent with the law of large numbers, so the assumptions of the second law of thermodynamics (and of much else) cannot always hold: the universe cannot be ‘closed’ in the sense the second law requires.
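Turing’s mechanism can be captured in a few lines of linear algebra. The following sketch is my own illustration, with invented parameter values rather than Turing’s own worked example: it checks that a two-species reaction which is stable when well mixed acquires a band of growing spatial modes once the species diffuse at unequal rates.

    # Diffusion-driven ('Turing') instability: minimal linear analysis.
    # The reaction below is stable when well mixed (all eigenvalues of the
    # Jacobian J have negative real part), yet unequal diffusion rates give
    # a band of spatial modes that grow: structure emerges from near-uniform
    # randomness instead of dissipating.
    import numpy as np

    # Jacobian of an activator-inhibitor reaction at its uniform steady
    # state (illustrative numbers chosen to satisfy the Turing conditions).
    J = np.array([[1.0, -2.0],    # u activates itself; v inhibits u
                  [2.0, -3.0]])   # u activates v; v decays
    D = np.diag([1.0, 10.0])      # the inhibitor diffuses much faster

    assert max(np.linalg.eigvals(J).real) < 0   # stable without diffusion

    # The growth rate of a spatial mode with wavenumber k is the largest
    # real part among the eigenvalues of J - k^2 D.
    ks = np.linspace(0.0, 1.5, 301)
    growth = [max(np.linalg.eigvals(J - k**2 * D).real) for k in ks]

    unstable = [k for k, g in zip(ks, growth) if g > 0]
    print(f"unstable wavenumbers: {unstable[0]:.2f} to {unstable[-1]:.2f}")

The uniform state is thus stable to overall averaging (the k = 0 mode) yet unstable to a band of structured perturbations, which is the nub of the point above.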

Implications

The assumptions of the second law would seem to leave no room for free will, hence no reason to believe in our agency, and hence no point in any of the contributions to Edge: things are what they are and we do what we do. But Pinker does not go so far: he simply notes that if things inevitably degrade we do not need to beat ourselves up, or look for scapegoats, when things go wrong. But this can be true even if the second law does not apply. If we take Turing seriously then a seemingly permanent status quo can contain the reasons for its own destruction, so that turning a blind eye and doing nothing can mean sleep-walking to disaster. Pinker concludes:

[An] underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.

This would seem to follow more clearly from the theory of morphogenesis than from the second law. Turing’s theory also goes some way to suggesting, or even explaining, the items in the second batch. So I commend it.

Dave Marsay


How can economics be a science?

This note is prompted by Thaler’s Nobel prize, the reaction to it, and attempts by mathematicians to explain both what they do do and what they could do. Briefly, mathematicians are increasingly employed to assist practitioners (such as financiers) to sharpen their tools and improve their results, in some pre-defined sense (such as making more profit). They are less often used to sharpen core ideas, much less to challenge assumptions. This is unfortunate when tools are misused and mathematicians blamed. It is no good saying that mathematicians should not go along with such misuse, since the misuse is often not obvious without some (expensive) investigation, and in any case whistleblowers are likely to get shown the door (even if only for being inefficient).

Mainstream economics aspires to be a science in the sense of being able to make predictions, at least probabilistically. Some (mostly before 2007/8) claimed that it achieved this, because its methods were scientific. But are they? Keynes coined the term ‘pseudo-mathematical’ for the then mainstream practices, whereby mathematics was applied without due regard for the soundness of the application. Then, as now, the mathematics in itself is as much beyond doubt as anything can be. The problem is a ‘halo effect’ whereby the application is regarded as ‘true’ just because the mathematics is. It is like physics before Einstein, when some (such as Locke) thought that classical geometry must be ‘true’ as physics, largely because it was so true as mathematics and they couldn’t envisage an alternative.

From a logical perspective, all that the use of scientific methods can do is to make probabilistic predictions that are contingent on there being no fundamental change. In some domains (such as particle physics and cosmology) there have never been any fundamental changes (at least since soon after the big bang) and we may not expect any. But economics, like life more generally, seems full of changes.

Popper famously noted that proper science is in principle falsifiable. Many practitioners in science and science-like fields regard the aim of their domain as being to produce ‘scientific’ predictions. They have had to change their theories in the past, and may have to do so again. But many still suppose that there is some ultimate ‘true’ theory, to which their theories are tending. According to Popper this is not a ‘proper’ scientific belief. Following Keynes we may call it an example of ‘pseudo-science’: something that masquerades as a science but goes beyond its bounds.

One approach to mainstream economics, then, is to disregard the pseudo-scientific ideology and just take its scientific content. Thus we may regard its predictions as mere extrapolations, and look out for circumstances in which they may not be valid. (As Eddington did for cosmology.)

Mainstream economics depends heavily on two notions:

  1. That there is some pre-ordained state space.
  2. That transitions evolve according to fixed conditional probabilities.

For most of us, most of the time, fortunately, these seem credible locally and in the short term, but not globally in space-time. (At the time of writing it seems hard to believe that just after the big bang there were in any meaningful sense state spaces and conditional probabilities that are now being realised.) We might adjust the usual assumptions:

The ‘real’ state of nature is unknowable, but one can make reasonable observations and extrapolations that will be ‘good enough’ most of the time for most routine purposes.

This is true for hard and soft sciences, and for economics. What varies is the balance between the routine and the exceptional.
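To make the two notions concrete, here is a toy sketch of my own, with invented numbers: a model that estimates fixed transition probabilities from past data extrapolates well while the regime lasts, and remains internally consistent – but wrong – after an unmodelled structural change.

    # A toy two-state 'economy' (boom/bust). We estimate a fixed transition
    # matrix from a long sample; then the true dynamics change, and the
    # fitted model's extrapolations no longer describe the world.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(P, start, n):
        """Run a Markov chain with transition matrix P for n steps."""
        x, path = start, []
        for _ in range(n):
            x = rng.choice(2, p=P[x])
            path.append(x)
        return path

    P_old = np.array([[0.9, 0.1],   # state 0 (boom) tends to persist
                      [0.5, 0.5]])
    P_new = np.array([[0.6, 0.4],   # after a structural change,
                      [0.1, 0.9]])  # state 1 (bust) tends to persist

    history = simulate(P_old, 0, 5000)

    # The 'scientific' step: estimate transition probabilities from history.
    counts = np.zeros((2, 2))
    for a, b in zip(history, history[1:]):
        counts[a, b] += 1
    P_hat = counts / counts.sum(axis=1, keepdims=True)

    # Extrapolate the long-run share of time in 'bust' under the fitted
    # model, then compare with reality after the (unmodelled) change.
    predicted = np.linalg.matrix_power(P_hat, 1000)[0, 1]
    future = simulate(P_new, history[-1], 5000)
    actual = np.mean(np.array(future) == 1)
    print(f"predicted bust share: {predicted:.2f}, actual: {actual:.2f}")

Nothing inside the fitted model signals the change; the failure shows up only when the extrapolation is compared with how things actually turn out.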

Keynes observed that some economic structures work because people expect them to. For example, gold tends to rise in price because people think of it as being relatively sound. Thus anything that has a huge effect on expectations can undermine any prior extrapolations. This might be a new product or service, an independence movement, a conflict or a cyber failing. These all have a structural impact on economies that can cascade. But will the effect dissipate as it spreads, or may it result in a noticeable shift? A mainstream economist would argue that all such impacts are probabilistic, and hence that all that was happening was that we were observing new parts of the existing state space and new transitions. Even if we suppose for a moment that this is true, it is not a scientific belief, and it hardly seems a useful way of thinking about potential and actual crises.

Mainstream economists suppose that people are ‘rational’, by which they mean that they act as if they are maximizing some utility, which is something to do with value and probability. But, even if the world is probabilistic, being rational is not necessarily scientific. For example, when a levee is built to withstand a ‘100-year storm’, this is scientific if it is clear that the claim is based on past storm data. But it is unscientific if there is an implicit claim that the climate cannot change. When building a levee it may be ‘rational’ to build it to withstand all but very improbable storms, but it is more sensible to add a margin and make contingency arrangements (as engineers normally do). In much of life it is common experience that the ‘scientific’ results aren’t entirely reliable, so it is ‘unscientific’ (or at least unreasonable) to rely on them totally.
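The levee arithmetic is worth doing explicitly; a back-of-envelope sketch of my own, with illustrative figures:

    # Probability that a levee designed for the '100-year storm' is
    # overtopped at least once during a 50-year design life.
    def p_exceed(annual_p: float, years: int) -> float:
        """1 - (1 - p)^years: chance of at least one exceedance, assuming
        independent years with a constant annual exceedance probability."""
        return 1 - (1 - annual_p) ** years

    print(round(p_exceed(0.01, 50), 3))   # stationary climate: 0.395
    print(round(p_exceed(0.02, 50), 3))   # if the rate doubles: 0.636

Even the ‘scientific’ 39% is hardly negligible, and the whole calculation is conditional on the annual rate staying put – just the kind of condition that margins and contingency arrangements are there to hedge.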

Much of this is bread-and-butter in disciplines other than economics, and I am not sure that what economists mostly need is to improve their mathematics: they need to improve their sciencey-ness, and then use mathematics better. But I do think that they need somehow to come to a better appreciation of the mathematics of uncertainty, beyond basic probability theory and its ramifications.

Dave Marsay


Uncertainty is not just probability

I have just had my paper published, based on the discussion paper referred to in a previous post. On Facebook it is described as:

An understanding of Keynesian uncertainties can be relevant to many contemporary challenges. Keynes was arguably the first person to put probability theory on a sound mathematical footing. …

So it is not just for economists. I could be tempted to discuss the wider implications.

Comments are welcome here, at the publisher’s web site or on Facebook. I’m told that it is also discussed on Google+, Twitter and LinkedIn, but I couldn’t find it – maybe I’ll try again later.

Dave Marsay

Evolution of Pragmatism?

A common ‘pragmatic’ approach is to keep doing what you normally do until you hit a snag, and (only) then to reconsider. Whereas Lamarckian evolution would lead to ‘survival of the fittest’, with everyone adapting to the current niche and tending to yield a homogeneous population, Darwinian evolution preserves the maximal variety of those who can survive, with characteristics dying out only when they are not viable. This evolution of diversity makes for greater resilience, which is maybe why ‘pragmatic’ Darwinian evolution has itself evolved.
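A toy contrast may make the point; this is my own illustration with invented fitness numbers, not a serious biological model. Reproducing strictly in proportion to fitness homogenises the population, while reproducing whenever viable preserves variety – and variety is what gets a population through an environmental shift.

    # Two selection regimes on three heritable types. 'fittest': offspring
    # in proportion to fitness, which typically fixes the single best type.
    # 'viable': every viable type reproduces equally, so variety persists.
    import random

    random.seed(1)

    fitness_old = {0: 3, 1: 2, 2: 1}   # type 0 fittest; all types viable
    fitness_new = {0: 0, 1: 1, 2: 3}   # after a shift, type 0 is not viable

    def step(pop, fitness, mode, size=300):
        """One generation of reproduction with replacement."""
        if not pop:
            return []
        if mode == "fittest":
            weights = [fitness[t] for t in pop]
        else:  # "viable"
            weights = [1 if fitness[t] > 0 else 0 for t in pop]
        if sum(weights) == 0:
            return []                  # nothing viable: extinction
        return random.choices(pop, weights=weights, k=size)

    for mode in ("fittest", "viable"):
        pop = [random.choice([0, 1, 2]) for _ in range(300)]
        for _ in range(60):
            pop = step(pop, fitness_old, mode)
        before = sorted(set(pop))
        for _ in range(10):
            pop = step(pop, fitness_new, mode)
        print(mode, "- before shift:", before,
              "after shift:", sorted(set(pop)) or "extinct")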

The products of evolution are generally also pragmatic, in that they have virtually pre-programmed behaviours which ‘unfold’ in the environment. Plants grow and procreate, while animals have a richer variety of behaviours, but still tend just to do what they do. But humans can ‘think for themselves’ and be ‘creative’, and so have the possibility of not being just pragmatic.

I was at a (very good) lecture by Alice Roberts last night on the evolution of technology. She noted that many creatures use tools, but humans seem to be unique in that at some critical population mass the manufacture and use of tools becomes sustained through teaching, copying and co-operation. It occurred to me that much of this could be pragmatic. After all, until recently development has been very slow, and so may well have been driven by specific practical problems rather than continual searching for improvements. Also, the more recent upswing of innovation seems to have been associated with an increased mixing of cultures and decreased intolerance for people who think for themselves.

In biological evolution mutations can lead to innovation, so evolution is not entirely pragmatic, but their impact is normally limited by the need to fit the current niche, so evolution typically appears to be pragmatic. The role of mutations is more to increase the diversity of behaviours within the niche than to innovate as such.

In social evolution there will probably always have been mavericks and misfits, but the social pressure has been towards conformity. I conjecture that such an environment has favoured a habit of pragmatism. These days, it seems to me, a better approach would be more open-minded, inclusive and exploratory, but possibly we do have a biologically-conditioned tendency to be overly pragmatic: to mistake conventions for facts and heuristics for laws of nature, and not to challenge widely-held beliefs.

The financial crash of 2008 was blamed by some on mathematics. This seems ridiculous. But the post-Cold War world was largely one of growth, with the threat of nuclear devastation much diminished, so it might be expected that pragmatism would be favoured. Thus powerful tools (mathematical or otherwise) could be taken up and exploited pragmatically, without enough consideration of the potential dangers. It seems to me that this problem is much broader than economics, but I wonder what the cure is, apart from better education and more enlightened public debate.

Dave Marsay


Traffic bunching

In heavy traffic, such as on motorways in rush-hour, there is often oscillation in speed, and there can even be mysterious ‘emergent’ halts. The use of variable speed limits can result in everyone getting along a given stretch of road more quickly.

Soros (worth reading) has written an article that suggests that this is all to do with the humanity and ‘thinking’ of the drivers, and that something similar is the case for economic and financial booms and busts. This might seem to indicate that ‘mathematical models’ were a part of our problems, not solutions. So I suggest the following thought experiment:

Suppose a huge number of identical driverless cars with deterministic control functions all try to go along the same road, seeking to optimise performance in terms of ‘progress’ and fuel economy. Will they necessarily succeed, or might there be some ‘tragedy of the commons’ that can only be resolved by some overall regulation? What are the critical factors? Is the nature of the ‘brains’ one of them?

Are these problems the preserve of psychologists, or does mathematics have anything useful to say?
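As a gesture towards the thought experiment, here is a deterministic toy of my own, using the ‘optimal velocity’ car-following model of Bando et al. rather than any actual driverless-car controller. Identical cars on a ring road each adjust speed towards a preferred speed for their current headway; a tiny initial ripple grows into stop-and-go waves.

    # Optimal velocity model: dv/dt = a * (V(headway) - v) on a ring road.
    # With sensitivity a below the critical value, uniform flow is linearly
    # unstable and jams emerge, although every car follows the same
    # deterministic rule.
    import math

    N, L = 100, 200.0                # cars and ring length: mean headway 2.0
    a, dt, steps = 1.0, 0.1, 4000    # a = 1.0 is below the critical value 2.0

    def V(headway):                  # preferred speed for a given gap
        return math.tanh(headway - 2.0) + math.tanh(2.0)

    x = [i * L / N + (0.1 if i == 0 else 0.0) for i in range(N)]  # ripple
    v = [V(L / N)] * N               # start in (almost) uniform flow

    for _ in range(steps):
        gaps = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
        v = [vi + a * (V(g) - vi) * dt for vi, g in zip(v, gaps)]
        x = [(xi + vi * dt) % L for xi, vi in zip(x, v)]

    print("speeds now range from", round(min(v), 3), "to", round(max(v), 3))
    # Typically the minimum is near zero: some cars are all but halted.

So nothing distinctively human is needed for bunching, and mathematics does have something to say: linear analysis of such models tells you which control parameters keep uniform flow stable, which is one argument for overall regulation such as variable speed limits.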

Dave Marsay

Haldane’s The dog and the Frisbee

Andrew Haldane, The dog and the Frisbee, Bank of England speech, 2012.

Haldane argues in favour of simplified regulation. I find the conclusions reasonable, but have some quibbles about the details of the argument. My own view is that many of our financial problems have been due – at least in part – to a misrepresentation of the associated mathematics, and so I am keen to ensure that we avoid similar misunderstandings in the future. I see this as a primary responsibility of ‘regulators’, viewed in the round.

The paper starts with a variation on Ashby’s ball-catching observation, involving a dog and a Frisbee instead of a man and a ball: you don’t need to estimate the position of the Frisbee or be an expert in aerodynamics; a simple, natural heuristic will do. He applies this analogy to financial regulation, but it is somewhat flawed: when catching a Frisbee one relies on the Frisbee behaving normally, whereas in financial regulation one is concerned with what had seemed to be abnormal, such as the crisis period of 2007/8.

It is noted of game theory that

John von Neumann and Oskar Morgenstern established that optimal decision-making involved probabilistically-weighting all possible future outcomes.

In apparent contrast,

Many of the dominant figures in 20th century economics – from Keynes to Hayek, from Simon to Friedman – placed imperfections in information and knowledge centre-stage. Uncertainty was for them the normal state of decision-making affairs.

“It is not what we know, but what we do not know which we must always address, to avoid major failures, catastrophes and panics.”

The game-theoretic thinking is characterised as ignoring the possibility of uncertainty, which – from a mathematical point of view – seems an absurd misreading. Theories can only ever have conditional conclusions: any unconditional interpretation goes beyond the proper bounds. The paper – rightly – rejects the conclusions of two-player zero-sum static game theory. But its critique of such a theory is much less thorough than von Neumann and Morgenstern’s own (e.g. their 4.3.3) and fails to identify which conditions are violated by economics. More worryingly, it seems to invite the reader to accept those conclusions, as here:

The choice of optimal decision-making strategy depends importantly on the degree of uncertainty about the environment – in statistical terms, model uncertainty. A key factor determining that uncertainty is the length of the sample over which the model is estimated. Other things equal, the smaller the sample, the greater the model uncertainty and the better the performance of simple, heuristic strategies.

This seems to suggest that – contra game theory – we could ‘in principle’ establish a sound model, if only we had enough data. Yet:

Einstein wrote that: “The problems that exist in the world today cannot be solved by the level of thinking that created them”.

There seems to be a non sequitur here: if new thinking is repeatedly being applied, then surely the nature of the system will continually be changing? Or is it proposed that the ‘new thinking’ will yield a final solution, eliminating uncertainty? If ‘new thinking’ is repeatedly being applied then the regularity conditions of basic game theory (e.g. at 4.6.3 and 11.1.1) are not met (as discussed at 2.2.3). It is certainly not an unconditional conclusion that the methods of game theory apply to economies beyond the short run, and experience would seem to show that such an assumption would be false.
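The quoted sample-size claim is at least demonstrable in the stationary case. The sketch below is my own, with invented data: on a short noisy sample a crude rule out-predicts a flexible model, though nothing in it addresses game-changers.

    # With little data, a flexible model chases noise (model uncertainty)
    # and a crude heuristic wins out of sample; with plenty of data the
    # ranking reverses. This illustrates the quoted claim - and only that.
    import numpy as np

    rng = np.random.default_rng(42)

    def true_f(x):
        return 0.5 * x               # the (unknown) underlying regularity

    def sample(n):
        x = rng.uniform(-3, 3, n)
        return x, true_f(x) + rng.normal(0, 1, n)

    x_train, y_train = sample(8)     # a small sample
    x_test, y_test = sample(10_000)  # 'the future', same regime

    # Flexible model: degree-7 polynomial. Heuristic: predict the mean.
    poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=7)
    mean = y_train.mean()

    def mse(pred):
        return float(np.mean((pred - y_test) ** 2))

    print("flexible model :", round(mse(poly(x_test)), 2))
    print("crude heuristic:", round(mse(np.full_like(y_test, mean)), 2))
    # Typically the polynomial's error is far larger at n = 8.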

The paper recommends the use of heuristics, by which it presumably means what Gigerenzer means: methods that ignore some of the data. Thus, for example, all formal methods are heuristics, since they ignore intuition. But a dog catching a Frisbee has only its own experience, all of which it is using, and so presumably – by this definition – is not actually using a heuristic. In 2006 most financial and economic methods were heuristics in the sense that they ignored the lessons identified by von Neumann and Morgenstern. Gigerenzer’s definition seems hardly helpful. The dictionary definition relates to learning on one’s own, ignoring others. The economic problem, it seems to me, was paying too much attention to the wrong people, and too little to those such as von Neumann and Morgenstern – and Keynes.

The implication of the paper, and of Gigerenzer, is, I think, that a heuristic is a set method that is used rather than solving a problem from first principles. This is clearly a good idea, provided that the method incorporates a check that whatever principles it relies upon do in fact hold in the case at hand. (This is what economists have often neglected to do.) If set methods are used as meta-heuristics to identify the appropriate heuristics for particular cases, then one has something like recognition-primed decision-making. It could be argued that the financial community had such meta-heuristics, and that these led to the crash: the adoption of heuristics as such seems not to be a solution. Instead one needs to appreciate what kinds of heuristic are appropriate when.

Game theory shows us that probabilistic heuristics are ill-founded when there is significant innovation, as there was before, through and immediately after 2007/8. In so far as economics and finance are games, some events are game-changers. The problem is not the proper application of mathematical game theory, but the ‘pragmatic’ application of a simplistic version: playing the game as it appears to be unless and until it changes. An unstated possible deduction from the paper is surely that such ‘pragmatic’ approaches are inadequate. For mutable games, strategy needs to take place at a higher level than it does for fixed games: it is not just that different strategies are required, but that ‘strategy’ has a different meaning: it should at least recognise the possibility of a change to a seemingly established status quo.

If we take an analogy with a dog and a Frisbee, and consider Frisbee-catching to be a statistically regular problem, then the conditions of simple game theory may be met, and it is also possible to establish statistically that a heuristic (method) is adequate. But if there is innovation in the situation then we cannot rely on any simplistic theory or on any learnt methods. Instead we need a more principled approach, such as that of Keynes or Ashby, considering the conditionality and looking out for potential game-changers. The key is not just simpler regulation, but regulation that is less reliant on conditions that we expect to hold but which, on maturer reflection, are not totally reliable. In practice this may necessitate a mature, on-going debate to adjust the regime to potential game-changers as they emerge.

See Also

Ariel Rubinstein opines that:

classical game theory deals with situations where people are fully rational.

Yet von Neumann and Morgenstern (4.1.2) note that:

the rules of rational behaviour must provide definitely for the possibility of irrational conduct on the part of others.

Indeed, in a paradigmatic zero-sum two-person game, if the other person plays rationally (according to game theory) then your expected return is the same irrespective of how you play, as the sketch below illustrates. Thus it is of the essence that you consider potential non-rational plays. I take it, then, that game theory as reflected in economics is a very simplified – indeed an over-simplified – version. It is presumably this distorted version that Haldane’s criticisms properly apply to.
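The invariance claim is easy to check; a minimal sketch of my own, using matching pennies as the paradigm case:

    # Matching pennies: you win 1 if the coins match, lose 1 otherwise.
    # Against an opponent playing the minimax mixed strategy (heads with
    # probability 1/2), your expected return is 0 whatever you play; any
    # actual edge must come from modelling possible irrationality.
    payoff = {("H", "H"): 1, ("T", "T"): 1, ("H", "T"): -1, ("T", "H"): -1}

    def expected_return(p_yours: float, p_theirs: float) -> float:
        """Expected payoff to you, given each player's P(heads)."""
        yours = {"H": p_yours, "T": 1 - p_yours}
        theirs = {"H": p_theirs, "T": 1 - p_theirs}
        return sum(payoff[a, b] * yours[a] * theirs[b]
                   for a in "HT" for b in "HT")

    for p in (0.0, 0.3, 0.5, 1.0):
        print(p, expected_return(p, 0.5))       # 0.0 in every case

    print("vs an always-heads opponent:", expected_return(1.0, 1.0))  # +1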

Dave Marsay