AI pros and cons

Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher, The Metamorphosis, The Atlantic, August 2019.

AI will bring many wonders. It may also destabilize everything from nuclear détente to human friendships. We need to think much harder about how to adapt.

The authors are looking for comments. My initial reaction is here. I hope to say more. Meanwhile, I’d appreciate your reactions.

 

Dave Marsay


Probability as a guide to life

‘Probability is the very guide to life.’

Cicero may have been right, but ‘probability’ means something quite different nowadays to what it did millennia ago. So what kind of probability is a suitable guide to life, and when?

Suppose that we are told that ‘P(X) = p’. Often there is some implied real or virtual population, P, a proportion ‘p’ of which has the property ‘X’. To interpret such a probability statement we need to know what the relevant population is. Such statements are then normally reliable. More controversial are conditional probabilities, such as ‘P(X|Y) = p’. If you satisfy Y, does P(X)=p ‘for you’?

Suppose that:

  1. All the properties of interest (such as X and Y) can be expressed as unions of elements of some disjoint basis, B.
  2. For each basis property B, P(X|B) is known.
  3. The conditional probabilities of interest are derived from the basis properties in the usual way. (E.g. P(X|B1∪B2) = (P(B1).P(X|B1) + P(B2).P(X|B2))/P(B1∪B2).)
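Assumption 3 can be sketched in a few lines of Python (the numbers are invented for illustration):

```python
# A sketch of assumption 3: deriving P(X|Z) when Z is a union of
# disjoint basis properties B1, B2, ... (numbers invented).
def p_x_given_union(basis):
    """basis: list of (P(Bi), P(X|Bi)) pairs for the Bi whose union is Z."""
    p_z = sum(p_b for p_b, _ in basis)                  # P(Z) = sum of P(Bi)
    p_x_and_z = sum(p_b * p_xb for p_b, p_xb in basis)  # P(X and Z)
    return p_x_and_z / p_z                              # P(X|Z)

# Z = B1 ∪ B2 with P(B1) = 0.2, P(X|B1) = 0.9, P(B2) = 0.3, P(X|B2) = 0.4:
print(f"{p_x_given_union([(0.2, 0.9), (0.3, 0.4)]):.2f}")  # 0.60
```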

The conditional probabilities constructed in this way are meaningful, but if we are interested in some other set, Z, the conditional probability P(X|Z) could take a range of values. But then we need to reconsider decision-making. Instead of maximising a probability (or utility), the following heuristics may apply:

  • If the range makes significant difference, try to get more precise data. This may be by taking more samples, or by refining the properties considered.
  • Consider the best outcome for the worst-case probabilities.
  • If the above is not acceptable, make some reasonable assumptions until an acceptable result is possible.

For example, suppose that we have some urns, each containing a mix of balls, some of which are white. We can choose an urn and then pick a ball at random. We want white balls. What should we do? The conventional rule is to assess the proportion of white balls in each urn and pick the urn with the highest. This is uncontroversial if our assessments are reliable. But suppose we are faced with an urn with an unknown mix? Conventionally, our assessment should not depend on whether we want to obtain or avoid a white ball. But if we want white balls the worst-case proportion is no white balls, and we avoid this urn; whereas if we want to avoid white balls the worst-case proportion is all white balls, and we again avoid this urn.

If our assessments are unbiased then we would expect to do better with the conventional rule most of the time and in the long run. For example, if the non-white balls are black, and urns are equally likely to be filled with black as with white balls, then assessing that an urn with unknown contents has half white balls is justified. But in other cases we just don’t know, and choosing this urn we could do consistently badly. There is a difference between an urn whose contents are unknown but for which you have good grounds for estimating the proportion, and an urn for which you have no grounds for assessing the proportion.
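The urn argument can be sketched in Python by representing each urn as an interval of possible white-ball proportions (the known 40% urn is an invented comparison; a known urn has equal bounds, the unknown urn is [0, 1]):

```python
# Worst-case urn choice over interval-valued proportions (numbers invented).
def best_urn(urns, want_white=True):
    """urns: list of (low, high) bounds on each urn's white-ball proportion.
    Pick the index with the best worst-case proportion."""
    if want_white:
        # worst case is the lower bound; maximise it
        return max(range(len(urns)), key=lambda i: urns[i][0])
    # to avoid white, worst case is the upper bound; minimise it
    return min(range(len(urns)), key=lambda i: urns[i][1])

urns = [(0.4, 0.4), (0.0, 1.0)]  # a known 40% urn and an unknown urn
print(best_urn(urns, want_white=True))   # 0: worst cases are 0.4 vs 0.0
print(best_urn(urns, want_white=False))  # 0: worst cases are 0.4 vs 1.0
```

Either way the rule avoids the unknown urn, as in the text, even though the conventional point assessment of the unknown urn does not depend on what we want.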

If precise probabilities are to be the very guide to life, it had better be a dull life. For more interesting lives imprecise probabilities can be used to reduce the possibilities. It is often informative to identify worst-case options, but one can be left with genuine choices. Conventional rationality is the only way to reduce living to a formula: but is it such a good idea?

Dave Marsay

Haldane’s Tails of the Unexpected

A. Haldane and B. Nelson, Tails of the Unexpected, The Credit Crisis Five Years On: Unpacking the Crisis conference, University of Edinburgh Business School, 8-9 June 2012.

The credit crisis is blamed on a simplistic belief in ‘the Normal Distribution’ and its ‘thin tails’, understating risk. Complexity and chaos theories point to greater risks, as does the work of Taleb.

Modern weather forecasting is pointed to as good relevant practice, where one can spot trouble brewing. Robust and resilient regulatory mechanisms need to be employed. It is no good relying on statistics like VaR (Value at Risk) that assume a normal distribution. The Bank of England is developing an approach based on these ideas.

Comment

Risk arises when the statistical distribution of the future can be calculated or is known. Uncertainty arises when this distribution is incalculable, perhaps unknown.

While the paper acknowledges Keynes’ economics and Knightian uncertainty, it overlooks Keynes’ Treatise on Probability, which underpins his economics.

Much of modern econometric theory is … underpinned by the assumption of randomness in variables and estimated error terms.

Keynes was critical of this assumption, and of this model:

Economics … shift[ed] from models of Classical determinism to statistical laws. … Evgeny Slutsky (1927) and Ragnar Frisch (1933) … divided the dynamics of the economy into two elements: an irregular random element or impulse and a regular systematic element or propagation mechanism. This impulse/propagation paradigm remains the centrepiece of macro-economics to this day.

Keynes pointed out that such assumptions could only be validated empirically; in the Treatise he cited Lexis’s falsification, as the current paper also does.

The paper cites a game of paper/scissors/stone which Sotheby’s thought was a simple game of chance but which Christie’s saw as an opportunity for strategizing – and won millions of dollars. Apparently Christie’s consulted some 11-year-old girls, but they might equally well have been familiar with Shannon‘s machine for defeating strategy-impaired humans. With this in mind, it is not clear why the paper characterises uncertainty as merely being about unknown probability distributions, as distinct from Keynes’ more radical position, that there is no such distribution.

The paper is critical of nerds, who apparently ‘like to show off’. But to me the problem is not the show-offs, but those who don’t know as much as they think they know. They pay too little attention to the theory, not too much. The girls and Shannon seem okay to me: it is those nerds who see everything as the product of randomness or a game of chance who are the problem.

If we compare the Slutsky-Frisch model with Kuhn’s description of the development of science, then economics is assumed to develop in much the same way as normal science, but without ever undergoing anything like a (systemic) paradigm shift. Thus, while the model may be correct most of the time, violations, such as in 2007/8, matter.

Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.

One can understand this reasoning by analogy with science: the more dominant a school which protects its core myths, the greater the reaction and impact when the myths are exposed. But in finance it may not be just ‘risk control’ that causes a problem. Any optimisation that is blind to the possibility of systemic change may tend to increase the chance of change (for good or ill) [E.g. Bohr Atomic Physics and Human Knowledge. Ox Bow Press 1958].

See Also

Previous posts on articles by or about Haldane, along similar lines:

My notes on:

Dave Marsay

Anyone for Tennis?

An example of Knightian uncertainty?

Sam, a Norwegian statistician, and Gina, a Moldovan game-theorist, have just met on holiday and are playing tennis. Sam knows that in previous games Gina has taken 70% of the opportunities to ‘go to the net’, and that out of 10 opportunities in their games so far, she has gone to the net 7 times.

What is the probability that Gina will go to the net at the next opportunity? (And what is your reasoning? You may consult my notes on probability.)
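For what it is worth, here is a minimal Python sketch (not the puzzle’s intended answer) of one conventional line of reasoning, treating the 7 net approaches in 10 opportunities as a Bernoulli sample and looking at a whole Beta posterior rather than a single number:

```python
import random

random.seed(0)

# One conventional line of reasoning (a sketch, not the puzzle's answer):
# with a uniform prior, 7 successes in 10 trials give a Beta(1+7, 1+3)
# posterior for Gina's net-approach rate.
point_estimate = 7 / 10

# Approximate the posterior's central 95% interval by simulation.
draws = sorted(random.betavariate(8, 4) for _ in range(100_000))
low, high = draws[2_500], draws[97_500]

print(f"point estimate: {point_estimate:.2f}")
print(f"95% interval:   [{low:.2f}, {high:.2f}]")
```

Even this interval assumes the opportunities are exchangeable; if Gina, being a game-theorist, is adapting her play to Sam, no such distribution need exist, which is the Knightian point.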

More, similar, puzzles here.

Dave Marsay

Making your mind up (NS)

Difficult choices to make? A heavy dose of irrationality may be just what you need.

Comment on a New Scientist article, 12 Nov. 2011, pg 39.

The on-line version is Decision time: How subtle forces shape your choices: Struggling to make your mind up? Interpret your gut instincts to help you make the right choice.

The article talks a lot about decision theory and rationality. No definitions are given, but it seems to be assumed that all decisions are analogous to decisions about games of chance. It is clearly supposed, without motivation, that the objective is always to maximize expected utility. This might make sense for gamblers who expect to live forever without ever running out of funds, but more generally it is unmotivated.

Well-known alternatives include:

  • taking account of the chances of going broke (short-term) and never getting to the ‘expected’ (long-term) returns.
  • taking account of uncertainty, as in Ellsberg’s approach.
  • taking account of the cost of evaluating options, as in March’s ‘bounded rationality’.
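The first alternative can be illustrated with a toy bet (all the numbers are invented): a repeated gamble with positive expected value can still ruin a player with a small bank before the long run ever arrives.

```python
import random

random.seed(1)

# A toy bet with positive expectation (numbers invented): stake 1 unit to
# win 2 with probability 0.4, so the expected gain per bet is
# 0.4 * 2 - 0.6 * 1 = +0.2. A gambler starting with a bank of 3 can still
# easily go broke before the favourable long run arrives.
def goes_broke(start=3, p_win=0.4, win=2, lose=1, max_bets=1000):
    bank = start
    for _ in range(max_bets):
        if bank < lose:
            return True  # cannot cover the stake: ruined
        bank += win if random.random() < p_win else -lose
    return False

ruined = sum(goes_broke() for _ in range(10_000)) / 10_000
print(f"ruin frequency despite positive expectation: {ruined:.2f}")
```

On these numbers more than half of such gamblers go broke, which is why maximising expected value alone can be a poor guide for anyone with finite funds.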

The logic of inconsistency

A box claims that ‘intransitive preferences’ give mathematicians a head-ache. But as a mathematician I find that some people’s assumptions about rationality give me a headache, especially if they try to force them on to me.

Suppose that I prefer apples to plums to pears, but I prefer a mixture to having just apples. If I am given the choice between apples and plums I will pick apples. If I am then given the choice between plums and pears I will pick plums. If I am now given the choice between apples and pears I will pick pears, to have a good spread of fruit. According to the article I am inconsistent and illogical: I should have chosen apples. But what kind of logic is it in which I would end up with all meat and no gravy? Or all bananas and no custard?

Another reason I might pick pears is if I wanted to acquire things that appear scarce. Thus being offered a choice of apples or plums suggests that neither is scarce, so what I really want is pears. In this case, if I were subsequently given a choice of plums or pears I would choose pears, even though I actually prefer plums. A question imparts information, and is not just a means of eliciting information.

In criticising rationality one needs to consider exactly what the notion of ‘utility’ is, and whether or not it is appropriate.

Human factors

On the last page it becomes clear that ‘utility’ is even narrower than one might suppose. Most games of chance have an expected monetary loss for the gambler and thus – it seems – such gamblers are ‘irrational’. But maybe there is something about the experience that they value. They may, for example, be developing friendships that will stand them in good stead. Perhaps if we counted such expected benefits, gambling might be rational. Could buying a lottery ticket be rational if it gave people hope and something to talk about with friends?

If we expect that co-operation or conformity has a benefit, then could not such behaviours be rational? The example is given of someone who donates anonymously to charity. “In purely evolutionary terms, it is a bad choice.” But why? What if we feel better about ourselves and are able to act more confidently in social situations where others may be donors?

Retirement

“Governments wanting us to save up for retirement need to understand why we are so bad at making long-term decisions.”

But are we so very bad? This could do with much more analysis. With the article’s view of rationality, under-saving could be caused by a combination of:

  • poor expected returns on savings (especially at the moment)
  • pessimism about life expectancy
  • heavy discounting of future value
  • an anticipation of a need to access the funds before retirement
    (e.g., due to redundancy or emigration).

The article suggests that there might also be some biases. These should be considered, although they are really just departures from a normative notion of rationality that may not be appropriate. But I think one would really want to consider broader factors on expected utility. Maybe, for example, investing in one’s children’s future may seem a more sensible investment. Similarly, in some cultures, investing in one’s aura of success (sports car, smart suits, …) might be a rational gamble. Is it that ‘we’ as individuals are bad at making long-term decisions, or that society as a whole has led to a situation in which for many people it is ‘rational’ to save less than governments think we ought to have? The notion of rationality in the article hardly seems appropriate to address this question.

Conclusion

The article raises some important issues but takes much too limited a view of even mathematical decision theory and seems – uncritically – to suppose that it is universally normatively correct. Maybe what we need is not so much irrationality as the right rationality, at least as a guide.

See also

Kahneman: anomalies paper, Review, Judgment. Uncertainty: Cosmides and Tooby, Ellsberg. Examples. Inferences from utterances.

Dave Marsay

How to live in a world that we don’t understand, and enjoy it (Taleb)

N Taleb How to live in a world that we don’t understand, and enjoy it  Goldstone Lecture 2011 (U Penn, Wharton)

Notes from the talk

Taleb returns to his alma mater. This talk supersedes his previous work (e.g. Black Swan). His main points are:

  • We don’t have a word for the opposite of fragile.
      Fragile systems have small probability of huge negative payoff
      Robust systems have consistent payoffs
      ? has a small probability of a large pay-off
  • Fragile systems eventually fail. ? systems eventually come good.
  • Financial statistics have a kurtosis that cannot in practice be measured, and tend to hugely under-estimate risk.
      Often more than 80% of kurtosis over a few years is contributed by a single (memorable) day.
  • We should try to create ? systems.
      He calls them convex systems, where the expected return exceeds the return given the expected environment.
      Fragile systems are concave, where the expected return is less than the return from the expected situation.
      He also talks about ‘creating optionality’.
  • He notes an ‘action bias’, where whenever there is a game like the stock market then we want to get involved and win. It may be better not to play.
  • He gives some examples.
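The kurtosis point can be illustrated with simulated fat-tailed returns. This sketch (mine, not from the talk) uses a Student-t distribution with 3 degrees of freedom, whose fourth moment is infinite, so the sample statistic underlying kurtosis is dominated by its largest observation; the exact share varies from run to run.

```python
import math
import random

random.seed(2)

# Simulate fat-tailed daily returns as Student-t with 3 degrees of freedom,
# built from a standard normal over the root of a scaled chi-square.
def t3():
    z = random.gauss(0, 1)
    chi2 = random.gammavariate(1.5, 2)  # chi-square with 3 d.o.f.
    return z / math.sqrt(chi2 / 3)

returns = [t3() for _ in range(1250)]  # roughly 5 years of trading days

# The sample fourth moment is a sum of r**4 terms; see how much of that
# sum comes from the single most extreme day.
fourth_powers = sorted(r ** 4 for r in returns)
share_of_biggest_day = fourth_powers[-1] / sum(fourth_powers)
print(f"share of fourth moment from the single biggest day: "
      f"{share_of_biggest_day:.0%}")
```

Because the population fourth moment does not exist, adding more data never stabilises the sample kurtosis, which is the sense in which it “cannot in practice be measured”.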

Comments

Taleb is dismissive of economists who talk about Knightian uncertainty, which goes back to Keynes’ Treatise on Probability. Their corresponding story is that:

  • Fragile systems are vulnerable to ‘true uncertainty’
  • Fragile systems eventually fail
  • Practical numeric measures of risk ignore ‘true uncertainty’.
  • We should try to create systems that are robust to or exploit true uncertainty.
  • Rather than trying to be the best at playing the game, we should try to change the rules of the game or play a ‘higher’ game.
  • Keynes gives examples.

The difference is that Taleb implicitly supposes that financial systems etc. are stochastic, but have too much kurtosis for us to be able to estimate their parameters. Rare events are regarded as being generated stochastically. Keynes (and Whitehead) suppose that it may be possible to approximate such systems by a stochastic model for a while, but the rare events denote a change to a new model, so that – for example – there is not a universal economic theory. Instead, we occasionally have new economics, calling for new stochastic models. Practically, there seems little to choose between them, so far.

From a scientific viewpoint, one can only assess definite stochastic models. Thus, as Keynes and Whitehead note, one can only say that a given model fitted the data up to a certain date, and then it didn’t. The notion that there is a true universal stochastic model is not provable scientifically, but neither is it falsifiable. Hence according to Popper one should not entertain it as a view. This is possibly too harsh on Taleb, but the point is this:

Taleb’s explanation has pedagogic appeal, but this shouldn’t detract from an appreciation of alternative explanations based on non-stochastic uncertainty.

In particular:

  • Taleb (in this talk) seems to regard rare crises as ‘acts of fate’, whereas Keynes regards them as arising from misperceptions on the part of regulators and major ‘players’. This suggests that we might be able to ameliorate them.
  • Taleb implicitly uses the language of probability theory, as if this were rational. Yet his argument (like Keynes’) undermines the notion of probability as derived from rational decision theory.
      Not playing is better whenever there is Knightian uncertainty.
      Maybe we need to be able to talk about systems that thrive on uncertainty, in addition to convex systems.
  • Taleb also views the up-side as good fortune, whereas we might view it as an innovation, by whatever combination of luck, inspiration, understanding and hard work.

See also

On fat tails versus epochs.

Dave Marsay

Uncertainty, utility and paradox


Allais

Allais devised two choices:

  1. between a definite £1M and a gamble whose expected return was much greater, but which could give nothing
  2. between two gambles

He showed that most people made choices that were inconsistent with expected utility theory, and hence paradoxical.

In the first choice, one option has a certain payoff and so is reasonably preferred. In the second choice both options have similarly uncertain outcomes, and so it is reasonable to choose based on expected utility. In general, uncertainty reasonably detracts from expected utility.
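For concreteness, here are the expected monetary values in the classic form of Allais’s two choices; the payoffs and odds below are the textbook ones, since the post does not state them.

```python
# Expected monetary values for the classic form of Allais's choices
# (textbook payoffs and odds, not taken from the post above).
def expected_value(lottery):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in lottery)

choice_1 = {
    "A: £1M for sure": [(1.00, 1_000_000)],
    "B: gamble":       [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)],
}
choice_2 = {
    "C: gamble on £1M": [(0.11, 1_000_000), (0.89, 0)],
    "D: gamble on £5M": [(0.10, 5_000_000), (0.90, 0)],
}

for name, lottery in {**choice_1, **choice_2}.items():
    print(f"{name}: £{expected_value(lottery):,.0f}")
```

Most people choose A over B but D over C; since C and D are just A and B with the common 0.89 chance of £1M removed, no single expected-utility function reproduces both choices, which is the paradox.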

Ellsberg

Ellsberg devised a similar paradox; again, people consistently prefer the alternative with the least uncertainty.
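Ellsberg’s single-urn version can be sketched as a worst-case comparison (the urn composition below is the textbook one, not from the post): an urn holds 30 red balls and 60 balls that are black or yellow in an unknown mix, so a bet on red wins with known probability 1/3 while a bet on black wins with probability somewhere in [0, 2/3].

```python
# Ellsberg-style choice as intervals of possible winning probabilities
# (textbook urn: 30 red balls, 60 black-or-yellow in an unknown mix).
red = (1 / 3, 1 / 3)    # (worst case, best case) for betting on red
black = (0.0, 2 / 3)    # (worst case, best case) for betting on black

# An ambiguity-averse chooser compares worst cases, and so prefers red,
# even though black's midpoint probability is also 1/3.
choice = "red" if red[0] > black[0] else "black"
print(choice)  # red
```

No single probability assignment for the black balls makes the typical pattern of choices consistent with expected utility, which is why the example is read as evidence for non-probabilistic uncertainty.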

See also

mathematics, illustrations, examples.

Dave Marsay