McNamara’s Implicit Frequency Dependence

John M. McNamara, Implicit frequency dependence and kin selection in fluctuating environments, Evolutionary Ecology, 1995, 9, 185–203.

The paper is about biology. I seem to recall Ken Binmore applying the results more widely, but I can't find his paper now. These are my notes on its broader implications for rationality and utility maximization.

Utility maximization and risk-aversion

The most straightforward setting in which to consider rationality is that of a decision-maker who faces a series of decisions, on each of which he is scored (typically based on outcomes), with the aim of maximizing his total score in the long run. Each score depends on the decision via some known probability distribution. The law of large numbers applies, so that in the long run the total score will almost always approximate its mathematically expected value, and the best – rational – strategy is to maximize the expected score on each decision. Utility is the score.
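
As a toy sketch of this additive setting (my own, not from the paper): when scores simply add up, always picking the option with the higher expected score almost surely comes out ahead over many decisions.

```python
import random

random.seed(1)

# Option A pays 1 for certain; option B pays 3 or 0 with equal chance, so B has
# the higher expected score (1.5) and the expected-score maximizer always picks B.
n_decisions = 10_000
total_a = n_decisions * 1
total_b = sum(random.choice([3, 0]) for _ in range(n_decisions))

print(total_a, total_b)  # total_b is almost always close to 15,000, beating total_a
```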

Real life is rarely like this, but it is usual to assume that utility maximization is nevertheless rational.

McNamara considers reproductive success. The natural score is the size of the population, but in many ecological settings the population at the end of each period is the population at the start multiplied by the reproductive success in that period, so that the long-run score is a product of growth factors. Thus, to maximize the long-run population one should maximize the short-run expected log population (equivalently, the expected log of the growth factor), not the expected growth itself. The effect of this is to give more ‘weight’ to avoiding poor success. In this sense, it pays to be ‘risk averse’.
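
A minimal numerical illustration (my own, not McNamara's): a growth factor of 1.6 or 0.5 with equal probability has expected value 1.05, yet its expected log is negative, so a population facing it almost surely shrinks in the long run.

```python
import math, random

random.seed(2)

factors = [1.6, 0.5]                                # equally likely growth factors
mean = sum(factors) / 2                             # arithmetic mean = 1.05 > 1
mean_log = sum(math.log(f) for f in factors) / 2    # ≈ -0.11 < 0

pop = 1.0
for _ in range(1_000):                              # 1,000 generations
    pop *= random.choice(factors)

print(mean, round(mean_log, 3), pop)                # pop is almost surely minuscule
```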

A similar situation would be investing a lump sum in the stock market to build up a pot (e.g., for one’s children’s weddings, or one’s own pension). One wishes to maximize growth (the value at the end divided by the value at the start). The value at the end of each year is the value at the start of that year multiplied by the growth within the year, so – as in McNamara’s example:

Maximizing expected growth in each year does not maximize the growth one will almost certainly achieve in the long run: one needs a more risk-averse strategy.
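
Here is a hedged sketch of the investment version, with invented numbers: each year the pot is split between a safe asset and a risky one, and the risk-neutral split (everything in the risky asset, which has the higher expected growth) is compared with the split that maximizes the expected log of the yearly growth, in the spirit of the Kelly criterion.

```python
import math, random

random.seed(3)

SAFE, RISKY_UP, RISKY_DOWN = 1.0, 2.0, 0.3   # yearly growth factors (illustrative)

def yearly_growth(frac_risky, risky_factor):
    """Growth of the whole pot when frac_risky of it is in the risky asset."""
    return (1 - frac_risky) * SAFE + frac_risky * risky_factor

def expected_log_growth(frac_risky):
    return 0.5 * (math.log(yearly_growth(frac_risky, RISKY_UP))
                  + math.log(yearly_growth(frac_risky, RISKY_DOWN)))

# The expectation-maximizer puts everything in the risky asset (expected factor 1.15);
# the log-maximizer holds back, choosing roughly 21% risky with these numbers.
fracs = [i / 100 for i in range(101)]
log_optimal_frac = max(fracs, key=expected_log_growth)

pot_neutral, pot_log = 1.0, 1.0
for _ in range(1_000):
    r = random.choice([RISKY_UP, RISKY_DOWN])
    pot_neutral *= yearly_growth(1.0, r)
    pot_log *= yearly_growth(log_optimal_frac, r)

print(log_optimal_frac, pot_neutral, pot_log)  # the risk-averse split almost always wins
```

The 21% figure is an artefact of the invented payoffs; the point is only that the log criterion holds something back where the expectation criterion bets the whole pot.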

It is recognized that if you put your funds in the hands of a fund manager who is rewarded for growth but not punished for losses, then they will be risk-seeking, which is not in your interest. But even if the fund manager has ‘skin in the game’ and is rewarded in proportion to growth, they will be what is called ‘risk-neutral’ in the short run, which is riskier than you would wish.

Complex situations

McNamara’s discussion is of an environment whose characteristics are independent of the organisms, such as when a small population is seeking to exploit a large niche. Here one seeks to maximize:

Σ_s log(r(s))·f(s), where s ranges over the possible states, r is the score (reproductive success for McNamara) and f( ) is a probability distribution over the states.

In practice the distribution f( ) will often be estimated from experience. But we may be in a world where it could suddenly change (for example, through invasion by another species, or a financial crash). We might try to model this probabilistically, but even without doing so it is clear that we should take robustness or resilience in the face of such possibilities even more seriously than maximizing the expected log score, recognizing that the ‘expectation’ is only an expectation on the assumption that the distribution remains f( ), which it may not. This takes us towards a maximin strategy.
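
One crude way of making this concrete (my own sketch, not anything in the paper): score each candidate strategy by its expected log score under each of several plausible distributions, and choose the strategy whose worst case is best.

```python
import math

# Illustrative states, strategies and distributions (all numbers invented).
states = ["good", "bad"]

# r[strategy][state]: short-run score (growth factor) for each strategy in each state.
r = {
    "aggressive":   {"good": 2.0, "bad": 0.4},
    "conservative": {"good": 1.2, "bad": 0.9},
}

# Candidate beliefs about the environment: the estimated distribution f,
# plus a plausible alternative in case the world changes.
candidate_f = {
    "estimated":   {"good": 0.7, "bad": 0.3},
    "after_shock": {"good": 0.3, "bad": 0.7},
}

def expected_log_score(strategy, f):
    return sum(f[s] * math.log(r[strategy][s]) for s in states)

# Maximin: pick the strategy with the best worst-case expected log score.
maximin_choice = max(r, key=lambda strat: min(expected_log_score(strat, f)
                                              for f in candidate_f.values()))

print(maximin_choice)  # 'conservative' here: it is robust to the shift in f
```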

Next, reproductive success (biological or financial) may lead to changes in the environment. For example, an expanding deer population might destroy the trees that they feed on, leading to a collapse of the population, so short-run maximization may not be sustainable. Thus one should include consideration of these critical factors in the score, so that the strategy is balanced between exploiting the environment and not destroying it.
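
A toy simulation of this feedback (all parameters invented): a herd that grazes hard strips the trees below the level at which they can regrow and then collapses, while a more restrained grazing strategy leaves both herd and forest intact.

```python
def simulate(harvest_fraction, years=200):
    """Toy deer-and-trees model: the herd eats a fixed fraction of the standing trees."""
    trees, herd = 100.0, 10.0
    for _ in range(years):
        eaten = harvest_fraction * trees          # how hard the herd grazes the stock
        herd *= min(1.5, 0.5 + eaten / herd)      # well-fed herds grow, starved ones shrink
        trees = max(trees - eaten, 0.0)
        if trees > 5.0:                           # forest regrows only above a viability threshold
            trees = min(trees * 1.10, 200.0)
    return round(herd, 1), round(trees, 1)

print(simulate(harvest_fraction=0.5))   # greedy grazing: trees stripped, herd collapses
print(simulate(harvest_fraction=0.05))  # restrained grazing: herd and forest both persist
```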

Reflexivity

As Keynes pointed out, there is an additional difficulty in social systems. The environment is created by those who inhabit it. One has something like:

Σ_s log(r(s))·f(r, s), where the distribution over states now depends on the strategy r being played.
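
A minimal sketch of what this reflexivity does (my own construction, with invented numbers): when each strategy, widely adopted, induces its own distribution of states, a strategy that looks best against the current environment can look worst against the environment it would create.

```python
import math

states = ["calm", "crisis"]

# Scores r[strategy][state] and the state distribution each strategy induces
# if widely adopted, f[strategy][state].
r = {
    "aggressive": {"calm": 1.8, "crisis": 0.3},
    "prudent":    {"calm": 1.2, "crisis": 0.8},
}
f = {
    "aggressive": {"calm": 0.6, "crisis": 0.4},   # aggressive play makes crises more likely
    "prudent":    {"calm": 0.9, "crisis": 0.1},
}

def expected_log(strategy, dist):
    return sum(dist[s] * math.log(r[strategy][s]) for s in states)

baseline = f["prudent"]   # the environment as it stands before strategies shift
for strat in r:
    print(strat,
          round(expected_log(strat, baseline), 3),   # scored against the current environment
          round(expected_log(strat, f[strat]), 3))   # scored against the environment it creates
```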

Thus influential players seek strategies that create environments in which those strategies are effective, a kind of resilience. There may be many such concrete strategies, but the primary interest here is to understand the extent to which ‘maximization of expected utility’ needs to be modified. One version of pragmatism has it that one only needs to optimize in the short run. Optimizing expected log utility is clearly better than maximizing expected raw utility, but is it good enough?

Utility

Conventionally, one is assumed to have a fixed utility function that one seeks to maximize. In the approach above (as in Whitehead’s process logic) utilities only make sense in the short term. It is important to assess when they may change. One needs to monitor, and ideally influence, external factors. One also needs to monitor reactions to one’s own actions, and adjust accordingly. One needs to extrapolate from the current state in the conventional way, to recognize when one has a clash or critical instability, to consider possible futures, and to make strategic choices that are not simply maximizing ‘objective’ functions. This will involve a lot of activity (such as networking, developing relationships and collaborating) that may not make any sense from a narrow utility-maximization point of view.

Dave Marsay
