Probability as a guide to life

‘Probability is the very guide to life.’

Cicero may have been right, but ‘probability’ means something quite different nowadays to what it did millennia ago. So what kind of probability is a suitable guide to life, and when?

Suppose that we are told that ‘P(X) = p’. Often there is some implied real or virtual population, a proportion ‘p’ of which has the property ‘X’. To interpret such a probability statement we need to know what the relevant population is. Such statements are then normally reliable. More controversial are conditional probabilities, such as ‘P(X|Y) = p’. If you satisfy Y, does P(X) = p hold ‘for you’?

Suppose that:

  1. All the properties of interest (such as X and Y) can be expressed as unions of elements of some disjoint basis, B.
  2. For all such basis properties, B, P(X|B) is known.
  3. The conditional probabilities of interest are derived from the basis properties in the usual way. (E.g., P(X|B1∪B2) = (P(B1).P(X|B1) + P(B2).P(X|B2))/P(B1∪B2).)
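To make assumption 3 concrete, here is a minimal sketch with made-up numbers (the figures and property names are illustrative only, not drawn from any real population):

```python
# Combining known conditional probabilities over a disjoint basis (assumption 3).
p_B = {"B1": 0.3, "B2": 0.2}          # P(B1), P(B2): disjoint basis properties
p_X_given_B = {"B1": 0.9, "B2": 0.1}  # P(X|B1), P(X|B2): known (assumption 2)

p_union = p_B["B1"] + p_B["B2"]       # P(B1 u B2); disjoint, so probabilities add
p_X_given_union = sum(p_B[b] * p_X_given_B[b] for b in p_B) / p_union
print(p_X_given_union)                # (0.3*0.9 + 0.2*0.1) / 0.5 = 0.58
```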

The conditional probabilities constructed in this way are meaningful, but if we are interested in some other set, Z, the conditional probability P(X|Z) could take a range of values. But then we need to reconsider decision-making. Instead of maximising a probability (or utility), the following heuristics may apply:

  • If the range makes significant difference, try to get more precise data. This may be by taking more samples, or by refining the properties considered.
  • Consider the best outcome for the worst-case probabilities.
  • If the above is not acceptable, make some reasonable assumptions until an acceptable result is possible.

For example, suppose that there are some urns, each containing a mix of balls, some of which are white. We can choose an urn and then pick a ball at random. We want white balls. What should we do? The conventional rule consists of assessing the proportion of white balls in each urn and picking the urn with the highest proportion. This is uncontroversial if our assessments are reliable. But suppose we are faced with an urn with an unknown mix? Conventionally our assessment should not depend on whether we want to obtain or avoid a white ball. But if we want white balls the worst-case proportion is no white balls, and we avoid this urn, whereas if we want to avoid white balls the worst-case proportion is all white balls, and we again avoid this urn.

If our assessments are not biased then we would expect to do better with the conventional rule most of the time and in the long run. For example, if the non-white balls are black, and urns are equally likely to be filled with black balls as with white, then assessing that an urn with unknown contents has half white balls is justified. But in other cases we just don’t know, and by choosing this urn we could do consistently badly. There is a difference between an urn whose contents are unknown but for which you have good grounds for estimating the proportion, and an urn for which you have no grounds for assessing the proportion.
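A rough sketch of the worst-case heuristic for this example, assuming each urn’s proportion of white balls is only known to lie in some interval (the urns and numbers below are made up):

```python
# Each urn's proportion of white balls as an interval [low, high];
# an urn with an unknown mix is [0.0, 1.0].
urns = {"known_good": (0.6, 0.6), "known_poor": (0.2, 0.2), "unknown": (0.0, 1.0)}

def best_urn(urns, want_white=True):
    """Pick the urn whose worst-case proportion is best for our goal."""
    if want_white:
        return max(urns, key=lambda u: urns[u][0])  # maximise the lower bound
    return min(urns, key=lambda u: urns[u][1])      # minimise the upper bound

print(best_urn(urns, want_white=True))   # known_good: the unknown urn might have no white balls
print(best_urn(urns, want_white=False))  # known_poor: the unknown urn might be all white
```

Either way the unknown urn is avoided, as in the argument above.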

If precise probabilities are to be the very guide to life, it had better be a dull life. For more interesting lives imprecise probabilities can be used to reduce the possibilities. It is often informative to identify worst-case options, but one can be left with genuine choices. Conventional rationality is the only way to reduce living to a formula: but is it such a good idea?

Dave Marsay

Disease

“You are suffering from a disease that, according to your manifest symptoms, is either A or B. For a variety of demographic reasons disease A happens to be nineteen times as common as B. The two diseases are equally fatal if untreated, but it is dangerous to combine the respectively appropriate treatments. Your physician orders a certain test which, through the operation of a fairly well understood causal process, always gives a unique diagnosis in such cases, and this diagnosis has been tried out on equal numbers of A- and B-patients and is known to be correct on 80% of those occasions. The test reports that you are suffering from disease B. Should you nevertheless opt for the treatment appropriate to A … ?”

My thoughts below …

.

.

.

.

.

.

.

.

If, following Good, we use

P(A|B:C) to denote the probability of A, conditional on B in the context C,
Odds(A1/A2|B:C) to denote the odds P(A1|B:C)/P(A2|B:C), and
LR(B|A1/A2:C) to denote the likelihood ratio, P(B|A1:C)/P(B|A2:C),

then we want

Odds(A/B | diagnosis of B : you), given
Odds(A/B : population) and
P(diagnosis of B | B : test), and similarly for A.

This looks like a job for Bayes’ rule! In Odds form this is

Odds(A1/A2|B:C) = LR(B|A1/A2:C).Odds(A1/A2:C).

If we ignore the dependence on context, this would yield

Odds(A/B | diagnosis of B ) = LR(diagnosis of B | A/B ).Odds(A/B).
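Taking the quoted figures at face value (and so assuming, for the moment, that the population odds and the test’s error rates both carry over to ‘you’), a minimal sketch of the calculation:

```python
prior_odds_A_vs_B = 19.0      # A is nineteen times as common as B
p_diag_B_given_A = 0.20       # test wrongly reports B for an A-patient
p_diag_B_given_B = 0.80       # test correctly reports B for a B-patient

lr = p_diag_B_given_A / p_diag_B_given_B        # LR(diagnosis of B | A/B)
posterior_odds = lr * prior_odds_A_vs_B         # Bayes' rule in odds form: 4.75
p_A = posterior_odds / (1 + posterior_odds)     # ~0.83

print(posterior_odds, p_A)  # despite the diagnosis of B, A remains the better bet
```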

But are we justified in ignoring the differences? For simplicity, suppose that the tests were conducted on a representative sample of the population, so that we have Odds(A/B | diagnosis of B : population), but still need Odds(A/B | diagnosis of B : you). According to Blackburn’s population indifference principle (PIP) you ‘should’ use the whole population statistics, but his reasons seem doubtful. Suppose that:

  • You thought yourself in every way typical of the population as a whole.
  • The prevalence of diseases among those you know was consistent with the whole population data.

Then PIP seems more reasonable. But if you are of a minority ethnicity – for example – with many relatives, neighbours and friends who share your distinguishing characteristic, then it might be more reasonable to use an informal estimate based on a more appropriate population, rather than a better-quality estimate based on a less appropriate population. (This is a kind of converse to the availability heuristic.)

See Also

My notes on Cohen for a discussion of alternatives.

Other, similar, Puzzles.

My notes on probability.

Dave Marsay

Cab accident

“In a certain town blue and green cabs operate in a ratio of 85 to 15, respectively. A witness identifies a cab in a crash as green, and the court is told [based on a test] that in the relevant light conditions he can distinguish blue cabs from green ones in 80% of cases. [What] is the probability (expressed as a percentage) that the cab involved in the accident was blue?” (See my notes on Cohen for a discussion of alternatives.)

For bonus points … if you were involved, what questions might you reasonably ask before estimating the required percentage? Does your first answer imply some assumptions about the answers, and are they reasonable?

My thoughts below:

.

.

.

.

.

.

If, following Good, we use

P(A|B:C) to denote the probability of A, conditional on B in the context C,
Odds(A1/A2|B:C) to denote the odds P(A1|B:C)/P(A2|B:C), and
LR(B|A1/A2:C) to denote the likelihood ratio, P(B|A1:C)/P(B|A2:C).

Then we want P(blue| witness: accident), which can be derived by normalisation from Odds(blue/green| witness : accident).
We have Odds(blue/green: town) and the statement that the witness “can distinguish blue cabs from green ones in 80% of cases”.

Let us suppose (as I think is the intention) that this means that we know LR(witness| blue/green: test) under the test conditions. This looks like a job for Bayes’ rule! In Odds form this is

Odds(A1/A2|B:C) = LR(B|A1/A2:C).Odds(A1/A2:C),

as can be verified from the identity P(A|B:C) = P(A&B:C)/P(B:C) whenever P(B:C)≠0.
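For completeness, a sketch of that verification in the notation above: Odds(A1/A2|B:C) = P(A1|B:C)/P(A2|B:C) = P(A1&B:C)/P(A2&B:C) (dividing numerator and denominator by P(B:C)); and since P(Ai&B:C) = P(B|Ai:C).P(Ai:C), this ratio equals LR(B|A1/A2:C).Odds(A1/A2:C), as required.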

If we ignore the contexts, this would yield:

Odds(blue/green| witness) = LR(witness| blue/green).Odds(blue/green),

as required. But this would only be valid if the context made no difference. For example, suppose that:

  • Green cabs have many more accidents than blue ones.
  • The accident was in an area where green cabs were more common.
  •  The witness knew that blue cabs were much more common than green and yet was still confident that it was a green cab.

In each case, one would wish to re-assess the required odds. Would it be reasonable to assume that none of the above applied, if one didn’t ask?
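For reference, the face-value answer, assuming that none of the above complications apply and that the test conditions carry over to the accident:

```python
prior_odds_blue_vs_green = 85 / 15   # Odds(blue/green : town)
p_says_green_given_blue = 0.20       # witness misidentifies a blue cab as green
p_says_green_given_green = 0.80      # witness correctly identifies a green cab

lr = p_says_green_given_blue / p_says_green_given_green  # LR(witness | blue/green)
posterior_odds = lr * prior_odds_blue_vs_green           # ~1.42
p_blue = posterior_odds / (1 + posterior_odds)           # ~0.59

print(round(100 * p_blue))  # ~59%: the cab is still more likely to have been blue
```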

See Also

Other Puzzles.

My notes on probability.

Dave Marsay

Are more intelligent people more biased?

It has been claimed that:

U.S. intelligence agents may be more prone to irrational inconsistencies in decision making compared to college students and post-college adults … .

This is scary, if unsurprising to many. Perhaps more surprisingly:

Participants who had graduated college seemed to occupy a middle ground between college students and the intelligence agents, suggesting that people with more “advanced” reasoning skills are also more likely to show reasoning biases.

It seems as if there is some serious mis-education in the US. But what is it?

The above conclusions are based on responses to the following two questions:

1. The U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Do you: (a) Save 200 people for sure, or (b) choose the option with 1/3 probability that 600 will be saved and a 2/3 probability no one will be saved?

2. In the same scenario, do you (a) pick the option where 400 will surely die, or instead (b) a 2/3 probability that all 600 will die and a 1/3 probability no one dies?

You might like to think about your answers to the above, before reading on.

.

.

.

.

.

The paper claims that:

Notably, the different scenarios resulted in the same potential outcomes — the first option in both scenarios, for example, has a net result of saving 200 people and losing 400.

Is this what you thought? You might like to re-read the questions and reconsider your answer, before reading on.

.

.

.

.

.

The questions may appear to contain statements of fact that we are entitled to treat as ‘given’. But in real-life situations we should treat such questions as utterances, and use the appropriate logics. This may give the same result as taking them at face value – or it may not.

It is (sadly) probably true that if this were a UK school examination question then the appropriate logic would be (1) to treat the statements ‘at face value’ and (2) to assume that if 200 people will be saved ‘for sure’ then exactly 200 people will be saved, no more. On the other hand, this is just the kind of question that I ask mathematics graduates to check that they have an adequate understanding of the issues before advising decision-takers. In the questions as set, the (b) options are the same, but (1a) is preferable to (2a), unless one is in the very rare situation of knowing exactly how many will die. With this interpretation, the more education and the more experience, the better the decisions – even in the US 😉
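A small sketch of the two readings (the readings themselves are my assumption about the natural-language interpretation, not something stated in the questions):

```python
# Outcomes as (worst, best) numbers of lives saved out of 600.
exam_reading_1a = (200, 200)   # 'save 200 for sure' read as exactly 200 saved
loose_reading_1a = (200, 600)  # 'save 200 for sure' read as at least 200 saved
option_2a = (200, 200)         # '400 will surely die' means exactly 200 saved

print(exam_reading_1a == option_2a)   # True: on the exam reading, (1a) and (2a) coincide
print(loose_reading_1a[0] >= option_2a[0] and loose_reading_1a[1] > option_2a[1])
# True: on the looser reading, (1a) weakly dominates (2a), so (1a) is preferable
```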

It would be interesting to repeat the experiment with less ambiguous wording. Meanwhile, I hope that intelligence agents are not being re-educated. Or have I missed something?

Also

Kahneman’s Thinking, fast and slow has a similar example, in which we are given ‘exact scientific estimates’ of probable outcomes, avoiding the above ambiguity. This might be a good candidate experimental question.

Kahneman’s question is not without its own subtleties, though. It concerns the efficacy of ‘programs to combat disease’. It seems to me that if I was told that a vaccine would save 1/3 of the lives, I would suppose that it had been widely tested, and that the ‘scientific’ estimate was well founded. On the other hand, if I was told that there was a 2/3 chance of the vaccine being ineffective I would suppose that it hadn’t been tested adequately, and the ‘scientific’ estimate was really just an informed guess. In this case, I would expect the estimate of efficacy to be revised in the light of new information. It could even be that while some scientist has made an honest estimate based on the information that they have, some other scientist (or technician) already knows that the vaccine is ineffective. A program based on such a vaccine would be more complicated and ‘risky’ than one based on a well-founded estimate, and so I would be reluctant to recommend it. (Ideally, I would want to know a lot more about how the estimates were arrived at, but if pressed for a quick decision, this is what I would do.)

Could the framing make a difference? In one case, we are told that ‘scientifically’, 200 people will be saved. But scientific conclusions always depend on assumptions, so really one should say ‘if … then 200 will be saved’. My experience is that otherwise the stated outcome should not be expected, and that saving 200 is the best that can reasonably be hoped for. In the other case we are told that ‘400 will die’. This seems to me to be a very odd thing to say. From a logical perspective one would like to understand the circumstances in which someone would put it like this. I would be suspicious, and might well (‘irrationally’) avoid a program described in that way.

Addenda

The example also shows a common failing, in assuming that the utility is proportional to lives saved or lost. Suppose that when we are told that lives will be ‘saved’ we assume that we will get credit; then we might take the utility from saving lives to be the number of lives saved, but with the ‘kudos’ capped at 250 lives saved. In this case, it is rational to save 200 ‘for sure’, as the expected credit from taking a risk is very much lower. On the other hand, if we are told that 400 lives will be ‘lost’ we might assume that we will be blamed, and take the utility to be minus the lives lost, but floored at -10. In this case it is rational to take a risk, as we have some chance of avoiding the worst-case utility, whereas if we went for the sure option we would be certain to suffer the worst case.
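A rough sketch of this argument; the cap of 250 on ‘kudos’ and the floor of -10 on ‘blame’ are the illustrative assumptions made above, not data:

```python
def expected(lottery):
    # lottery: list of (probability, utility) pairs
    return sum(p * u for p, u in lottery)

credit = lambda saved: min(saved, 250)   # frame 1: utility of lives 'saved', capped
blame = lambda lost: max(-lost, -10)     # frame 2: minus lives 'lost', floored at -10

# Frame 1: sure option saves 200; gamble saves 600 with probability 1/3, else 0.
print(credit(200), expected([(1/3, credit(600)), (2/3, credit(0))]))   # 200 vs ~83.3

# Frame 2: sure option loses 400; gamble loses 0 with probability 1/3, else 600.
print(blame(400), expected([(1/3, blame(0)), (2/3, blame(600))]))      # -10 vs ~-6.7
```

So with these utilities the sure option wins in the first frame and the gamble wins in the second, without any inconsistency.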

These kinds of asymmetric utilities may be just the kind that experts experience. More study required?


Dave Marsay

Mathematics, psychology, decisions

I attended a conference on the mathematics of finance last week. It seems that things would have gone better in 2007/8 if only policy makers had employed some mathematicians to critique the then dominant dogmas. But I am not so sure. I think one would need to understand why people went along with the dogmas. Psychology, such as behavioural economics, doesn’t seem to help much, since although it challenges some aspects of the dogmas it fails to challenge (and perhaps even promotes) other aspects, so that it is not at all clear how it could have helped.

Here I speculate on an answer.

Finance and economics are either empirical subjects or they are quasi-religious, based on dogmas. The problems seem to arise when they are the latter but we mistake them for the former. If they are empirical then they have models whose justification is based on evidence.

Naïve inductivism boils down to the view that whatever has always (never) been the case will continue always (never) to be the case. Logically it is untenable, because one often gets clashes, where two different applications of naïve induction are incompatible. But pragmatically, it is attractive.

According to naïve inductivism we might suppose that if the evidence has always fitted the models, then actions based on the supposition that they will continue to do so will be justified. (Hence, ‘it is rational to act as if the model is true’). But for something as complex as an economy the models are necessarily incomplete, so that one can only say that the evidence fitted the models within the context as it was at the time. Thus all that naïve inductivism could tell you is that ‘it is rational’ to act as if the model is true, unless and until the context should change. But many of the papers at the mathematics of finance conference were pointing out specific cases in which the actions ‘obviously’ changed the context, so that naïve inductivism should not have been applied.

It seems to me that one could take a number of attitudes:

  1. It is always rational to act on naïve inductivism.
  2. It is always rational to act on naïve inductivism, unless there is some clear reason why not.
  3. It is always rational to act on naïve inductivism, as long as one has made a reasonable effort to rule out any contra-indications (e.g., by considering ‘the whole’).
  4. It is only reasonable to act on naïve inductivism when one has ruled out any possible changes to the context, particularly reactions to our actions, by considering an adequate experience base.

In addition, one might regard the models as conditionally valid, and hedge accordingly. (‘Unless and until there is a reaction’.) Current psychology seems to suppose (1) and hence has little to help us understand why people tend to lean too strongly on naïve inductivism. It may be that a belief in (1) is not really psychological, but simply a consequence of education (i.e., cultural).

See Also

Russell’s Human Knowledge. My media for the conference.

Dave Marsay

Making your mind up (NS)

Difficult choices to make? A heavy dose of irrationality may be just what you need.

Comment on a New Scientist article, 12 Nov. 2011, pg 39.

The on-line version is Decision time: How subtle forces shape your choices: Struggling to make your mind up? Interpret your gut instincts to help you make the right choice.

The article talks a lot about decision theory and rationality. No definitions are given, but it seems to be assumed that all decisions are analogous to decisions about games of chance. It is clearly supposed, without motivation, that the objective is always to maximize expected utility. This might make sense for gamblers who expect to live forever without ever running out of funds, but more generally is unmotivated.

Well-known alternatives include:

  • taking account of the chances of going broke (short-term) and never getting to the ‘expected’ (long-term) returns.
  • taking account of uncertainty, as in Ellsberg’s approach.
  • taking account of the cost of evaluating options, as in March’s ‘bounded rationality’.

The logic of inconsistency

A box claims that ‘intransitive preferences’ give mathematicians a headache. But as a mathematician I find that some people’s assumptions about rationality give me a headache, especially if they try to force them on to me.

Suppose that I prefer apples to plums to pears, but I prefer a mixture to having just apples. If I am given the choice between apples and plums I will pick apples. If I am then given the choice between plums and pears I will pick plums. If I am now given the choice between apples and pears I will pick pears, to have a good spread of fruit. According to the article I am inconsistent and illogical: I should have chosen apples. But what kind of logic is it in which I would end up with all meat and no gravy? Or all bananas and no custard?

Another reason I might pick pears would be if I wanted to acquire things that appeared scarce. Thus being offered a choice of apples or plums suggests that neither is scarce, so what I really want is pears. In this case, if I were subsequently given a choice of plums or pears I would choose pears, even though I actually prefer plums. A question imparts information, and is not just a means of eliciting information.

In criticising rationality one needs to consider exactly what the notion of ‘utility’ is, and whether or not it is appropriate.

Human factors

On the last page it becomes clear that ‘utility’ is even narrower than one might suppose. Most games of chance have an expected monetary loss for the gambler and thus – it seems – such gamblers are ‘irrational’. But maybe there is something about the experience that they value. They may, for example, be developing friendships that will stand them in good stead. Perhaps if we counted such expected benefits, gambling might be rational. Could buying a lottery ticket be rational if it gave people hope and something to talk about with friends?

If we expect that co-operation or conformity has a benefit, then could not such behaviours be rational? The example is given of someone who donates anonymously to charity. “In purely evolutionary terms, it is a bad choice.” But why? What if we feel better about ourselves and are able to act more confidently in social situations where others may be donors?

Retirement

“Governments wanting us to save up for retirement need to understand why we are so bad at making long-term decisions.”

But are we so very bad? This could do with much more analysis. With the article’s view of rationality, under-saving could be caused by a combination of:

  • poor expected returns on savings (especially at the moment)
  • pessimism about life expectancy
  • heavy discounting of future value
  • an anticipation of a need to access the funds before retirement
    (e.g., due to redundancy or emigration).

The article suggests that there might also be some biases. These should be considered, although they are really just departures from a normative notion of rationality that may not be appropriate. But I think one would really want to consider broader factors bearing on expected utility. Maybe, for example, investing in one’s children’s future may seem a more sensible investment. Similarly, in some cultures, investing in one’s aura of success (sports car, smart suits, …) might be a rational gamble. Is it that ‘we’ as individuals are bad at making long-term decisions, or that society as a whole has led to a situation in which for many people it is ‘rational’ to save less than governments think we ought to? The notion of rationality in the article hardly seems appropriate to address this question.
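For what it is worth, a rough sketch of how the factors listed above might combine within the article’s own expected-utility framework (all numbers are illustrative assumptions):

```python
years = 30                 # time to retirement
real_return = 0.00         # poor expected returns on savings
discount_rate = 0.04       # heavy discounting of future value
p_survive = 0.85           # pessimism about reaching retirement
p_no_early_need = 0.90     # chance of not needing (penalised) early access

value_at_retirement = (1 + real_return) ** years
present_value = (p_survive * p_no_early_need * value_at_retirement
                 / (1 + discount_rate) ** years)
print(round(present_value, 2))  # ~0.24: each unit saved is 'worth' far less than spending it now
```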

Conclusion

The article raises some important issues but takes much too limited a view of even mathematical decision theory and seems – uncritically – to suppose that it is universally normatively correct. Maybe what we need is not so much irrationality as the right rationality, at least as a guide.

See also

Kahneman: anomalies paper, Review, Judgment. Uncertainty: Cosmides and Tooby, Ellsberg. Examples. Inferences from utterances.

Dave Marsay

Kahneman et al’s Anomalies

Daniel Kahneman, Jack L. Knetsch, Richard H. Thaler Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias The Journal of Economic Perspectives, 5(1), pp. 193-206, Winter 1991

[Some] “behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences … . An empirical result qualifies as an anomaly if it is difficult to “rationalize,” or if implausible assumptions are necessary to explain it within the paradigm.”

The first candidate anomaly is:

“A wine-loving economist we know purchased some nice Bordeaux wines … . The wines have greatly appreciated in value, so that a bottle that cost only $10 when purchased would now fetch $200 at auction. This economist now drinks some of this wine occasionally, but would neither be willing to sell the wine at the auction price nor buy an additional bottle at that price.”

This is an example of the effects in the title. Is it anomalous? Suppose that the economist can spare $120 but not $200 on self-indulgences, of which wine is her favourite. Would this not explain why she might buy a crate cheaply but not pay a lot for a bottle or sell one at a profit? Is it an anomaly? The anomalies seem to be relative to expected utility theory. (However, some of the other examples may be genuine psychological effects.)

See also

Kahneman’s review, Keynes’ General Theory

Dave Marsay

The Logic of Scientific Discovery

K.R. Popper, The Logic of Scientific Discovery, Routledge, 1980. A review. (The last edition has some useful clarifications.) See new location.

Dave Marsay

All watched over by machines of loving grace

What?

An Adam Curtis documentary shown on the BBC May/June 2011.

Comment

The trailers (above link) give a good feel for the series, which is entertaining, with some good video, music, pseudo-history and comment. The details shouldn’t be taken too seriously, but it is thought-provoking, on some topics that need thought.

Thoughts

The series ends:

The idea that human beings are helpless chunks of hardware controlled by software programs written in their genetic codes [remains powerfully influential in our society]. The question is, have we embraced that idea because it is a comfort in a world where everything that we do, either good or bad, seems to have terrible unforeseen consequences? …

We have embraced a fatalistic philosophy of us as helpless computing machines, to both excuse and explain our political failure to change the world.

This thesis has three parts:

  1. that everything we do has terrible unforeseen consequences
  2. that we are fatalistic in the face of such uncertainty
  3. that we have adopted a machine metaphor as ‘cover’ for our fatalism.

Uncertainty

The program demonizes unforeseen consequences. Certainly we should be troubled by them, and their implications for rationalism and pragmatism. But if there were no uncertainties then we could be rational and ‘should’ behave like machines. Reasoning in a complex, dynamic world calls for more than narrowly rational machine-like calculation, and gives purpose to being human.

Fatalism

It seems reasonable to suppose that most of the time most people can do little to influence the factors that shape their lives, but I think this is true even when people can perfectly well see the likely consequences of what is being done in their name. What is at issue here is not so much ordinary fatalism, which seems justified, as the charge that those who are making big decisions on our behalf are also fatalistic.

In democracies, no-one makes a free decision anymore. Everyone is held accountable and expected to abide by generally accepted norms and procedures. In principle whenever one has a novel situation the extant rules should be at least briefly reviewed, lest they lead to ‘unforeseen consequences’. A fatalist would presumably not do this. Perhaps the failure, then, is not to challenge assumptions or ‘kick against’ constraints.

The machine metaphor

Computers and mathematicians played a big role in the documentary. Humans are seen as being programmed by a genetic code that has evolved to self-replicate. But evolution leads to ‘punctuated equilibrium’ and epochs. Reasoning in epochs is not like reasoning in stable situations, the preserve of rule-driven machines. The mathematics of Whitehead and Turing supports the machine metaphor, but only within an epoch. How would a genetically programmed person fare if they moved to a different culture or had to cope with new technologies radically transforming their daily lives? One might suppose that we are encoded for ‘general ways of living and learning’, but then we would seem to require a grasp of uncertainty beyond that which we currently associate with machines.

Notes

  • The program had a discussion on altruism and other traits in which behaviours might disbenefit the individual but advantage those who are genetically similar over others. This would seem to justify much terrorism and even suicide-bombing. The machine metaphor would seem undesirable for reasons other than its tendency to fatalism.
  • An alternative to absolute fatalism would be fatalism about long-term consequences. This would lead to a short-termism that might provide a better explanation for real-world events.
  • The financial crash of 2007/8 was preceded by a kind of fatalism, in that it was supposed that free markets could never crash. This was associated with machine trading, but neither a belief in the machine metaphor nor a fear of unintended consequences seems to have been at the root of the problem. A belief in the potency of markets was perhaps reasonable (in the short term) once the high-tech bubble had burst. The problem seems to be that people got hooked on the bubble drug, and went into denial.
  • Mathematicians came in for some implicit criticism in the program. But the only subject of mathematics is mathematics. In applying mathematics to real systems the error is surely in substituting myth for science. If some people mis-use mathematics, the mathematics is no more at fault than their pencils. (Although maybe mathematicians ought to be more vigorous in uncovering abuse, rather than just doing mathematics.)

Conclusion

Entertaining, thought-provoking.

Dave Marsay

Critique of Pure Reason

I. Kant’s Critique of Pure Reason, 2nd Ed., 1787.

See new location.

David Marsay

Critical phenomena in complex networks

Source

Critical phenomena in complex networks at arXiv is a 2007 review of activity mediated by complex networks, including the co-evolution of activity and networks.

Abstract

The combination of the compactness of networks, featuring small diameters, and their complex architectures results in a variety of critical effects dramatically different from those in cooperative systems on lattices. In the last few years, important steps have been made toward understanding the qualitatively new critical phenomena in complex networks. The results, concepts, and methods of this rapidly developing field are reviewed. Two closely related classes of these critical phenomena are considered, namely, structural phase transitions in the network architectures and transitions in cooperative models on networks as substrates. Systems where a network and interacting agents on it influence each other are also discussed. [i.e. co-evolution] A wide range of critical phenomena in equilibrium and growing networks including the birth of the giant connected component, percolation, k-core percolation, phenomena near epidemic thresholds, condensation transitions, critical phenomena in spin models placed on networks, synchronization, and self-organized criticality effects in interacting systems on networks are mentioned. Strong finite-size effects in these systems and open problems and perspectives are also discussed.

Notes

The summary notes:

Real-life networks are finite, loopy (clustered) and correlated. Most of them are out of equilibrium. A solid theory of correlation phenomena in complex networks must take into account finite-size effects, loops, degree correlations, and other structural peculiarities. We described two successful analytical approaches to cooperative phenomena in infinite networks. The first was based on the tree ansatz, and the second was the generalization of the Landau theory of phase transitions. What is beyond these approaches?

Thus we can distinguish between:

  • very complex: where the existing analytic approaches do not work.
  • moderately complex: where the analytic approaches do work pragmatically, at least in the short-term, even though their assumptions aren’t strictly true.
  • not very complex: where analytic approaches work in theory and practice. Complicated? 

This blog is focussed on the very complex. The paper notes that in these cases:

  • evolution is more than just a continuous change in a parameter of some over-arching model.
  • fluctuations are typically scale-free (and in particular non-Gaussian, taking one outside the realm assumed by elementary statistics).
  • the scale-free exponent is small.

The latter implies that:

  • many familiar statistics are undefined (see the sketch after this list).
  • the influence of network heterogeneity is ‘dramatic’.
  • mean-field notions, essential to classical Physics, are not valid.
  • notions such as ‘prior probability’ and ‘information’ are poorly defined, or perhaps nonsense.
  • synchronization across the network is robust but not optimisable.
  • one gets an infinite series of nested k-cores. (Thus while one lacks classical structure, there is something there which is analogous to the ‘structure’ of water: hard to comprehend.)
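As a toy illustration of the first point (not taken from the paper; the exponent is an assumption chosen to make the effect obvious), here is a sample from a heavy-tailed distribution whose mean is undefined:

```python
import random

random.seed(1)
alpha = 0.8  # tail exponent below 1, so the distribution has no finite mean
samples = [random.paretovariate(alpha) for _ in range(1_000_000)]

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(n, sum(samples[:n]) / n)  # the running 'mean' never settles down
```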

So what?

Such complex activity is inherently robust and (according to Cybernetic theory) cannot be controlled. The other regions are not robust and can be controlled (and hence subverted). From an evolutionary perspective, then, if this theory is an adequate representation of real systems, we should expect that in the long term real networks will tend to end up as very complex rather than as one of the lesser complexities. It also suggests that we should try to learn to live with the complexity rather than ‘tame’ it. Attempts to over-organize would seem doomed.

Verifying the theory

(Opinion.) In so far as the theory leads to the conclusion that we need to understand and learn to live with full complexity, it seems to me that it only needs to be interpreted into domains such as epidemics, confrontation and conflict, economics and development to be recognized as having a truth. But in so far as our experience is limited by the narrow range of approaches that we have tried to such problems, we must beware of the usual paradox: acting on the theory would violate the logical grounds for believing in it. More practically, we may note that the old approaches, in essence, assumed that the future would be like the past. Our new insights would allow us to transcend our current epoch and step into the next. But it may not be enough to take one such step at a time: we may need a more sustainable strategy. (Keynes, Whitehead, Smuts, Turing, …)

Application

An appreciation of the truly complex might usefully inform strategies and collaborations of all kinds.

I will separate out some of this into more appropriate places.

See Also

Peter Allen, Fat tails and epochs, … .

Dave Marsay

Reasoning in a complex, dynamic, world

I’m publishing what might seem an eclectic mix of stuff here, so I’ve started with something of a Rosetta Stone:

[Figure: ‘How models may relate to reality’ (Peter Allen) – conventional models tend to ignore potential long-term changes.]

This is almost self-explanatory: most of our reasoning habits are okay in the short term, but may not be appropriate for the long-term.

A point to note is that sometimes ‘the long term’ can rush upon us. It is sometimes said that the best we can do is to reason for the short term, and respond and adapt to the long-term changes as they arise. But is this good enough? Can we do better?

More on this in some of the other pages. For now, trust me: they are related.

Dave Marsay