Soros’ Fallibility

George Soros, Fallibility, Reflexivity, and the Human Uncertainty Principle, 2014

Fallibility and reflexivity

“My conceptual framework is built on two relatively simple propositions. [Fallibility and reflexivity.]

The two principles are tied together like Siamese twins, but fallibility is the firstborn: without fallibility there would be no reflexivity. Both principles can be observed operating in the real world.”

“I lump together the two concepts as the human uncertainty principle.”


“The complexity of the world in which we live exceeds our capacity to comprehend it. Confronted by a reality of extreme complexity, we are obliged to resort to various methods of simplification: generalizations, dichotomies, metaphors, decision rules, and moral precepts, just to mention a few. These mental constructs take on a (subjective) existence of their own, further complicating the situation.”


“The concept of reflexivity needs some further explication. It applies exclusively to situations that have thinking participants.”


“… The participants’ views influence but do not determine the course of events, and the course of events influences but does not determine the participants’ views. The influence is continuous and circular; that is what turns it into a feedback loop.”

Popper’s theory of scientific method


“This is a brilliant construct that makes science both empirical and rational. According to Popper, it is empirical because we test our theories by observing whether the predictions we derive from them are true, and it is rational because we use deductive logic in doing so.”

Human uncertainty as an impediment to scientific method

“… the facts produced by social processes … are influenced by theories held by participants. This makes social theories themselves subject to reflexivity. In other words, they serve not only a cognitive but also a manipulative function.”

A spectrum between physical and social sciences

“In my argument, I have drawn a sharp distinction between the social and natural sciences. But such dichotomies are usually not found in reality; rather we introduce them in our efforts to make some sense out of an otherwise confusing reality. Indeed, while the dichotomy between physics and social sciences seems clear cut, there are other sciences, such as biology and the study of animal societies, that occupy intermediate positions.”

The limits and promise of social science

“Any valid methodology of social science must explicitly recognize both fallibility and reflexivity and the Knightian uncertainty they create. Empirical testing ought to remain a decisive criterion for judging whether a theory qualifies as scientific, but in light of the human uncertainty principle in social systems it cannot always be as rigorous as Popper’s scheme requires. Nor can universally and timelessly valid theories be expected to yield determinate predictions because future events are contingent on future decisions, which are based on imperfect knowledge. Time- and context-bound generalizations may yield more specific explanations and predictions than timeless and universal generalizations.”

Footnote 1 (added post-publication):

 “There are many similarities between human and non-human complex systems, which could be obfuscated by the proposed convention. Instead of denying the unity of science we ought to redefine scientific method so that it is not confined to Popper’s model.”

Boom–bust processes

“A boom–bust process is set in motion when a trend and a misconception positively reinforce each other. The process is liable to be tested by negative feedback along the way, giving rise to climaxes which may or may not turn out to be genuine. If a trend is strong enough to survive the test, both the trend and the misconception will be further reinforced. Eventually, market expectations become so far removed from reality that people are forced to recognize that a misconception is involved. A twilight period ensues during which doubts grow and more people lose faith, but the prevailing trend is sustained by inertia. … Eventually, a point is reached when the trend is reversed; it then becomes self-reinforcing in the opposite direction. Boom–bust processes tend to be asymmetrical: booms are slow to develop and take a long time to become unsustainable, busts tend to be more abrupt … .”

“Figure 4. A typical market boom–bust. In the initial stage (AB), a new positive earning trend is not yet recognized. Then comes a period of acceleration (BC) when the trend is recognized and reinforced by expectations. A period of testing may intervene when either earnings or expectations waver (CD). If the positive trend and bias survive the testing, both emerge stronger. Conviction develops and is no longer shaken by a setback in earnings (DE). The gap between expectations and reality becomes wider (EF) until the moment of truth arrives when reality can no longer sustain the exaggerated expectations and the bias is recognized as such (F). A twilight period ensues when people continue to play the game although they no longer believe in it (FG). Eventually a crossover point (G) is reached when the trend turns down and prices lose their last prop. This leads to a catastrophic downward acceleration (GH) commonly known as the crash. The pessimism becomes overdone, earnings stabilize, and prices recover somewhat (HI).”

Fat tails


“Reality feeds the participants so much information that they need to introduce dichotomies and other simplifying devices to make some sense of it. The simplest way to introduce order is binary division; hence, the tendency to use dichotomies. When markets switch from one side of a dichotomy to another the transition can be quite violent. The tipping point is difficult to predict but it is associated with a sharp increase in volatility, which manifests itself in fat tails.”
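The link between regime switches and fat tails can be illustrated with a toy model (my own sketch with hypothetical parameters, not anything from the paper): returns are calm most of the time, but occasional flips to the other side of a dichotomy inject bursts of volatility, which show up as excess kurtosis.

```python
import random
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis: values above 0 indicate fatter tails
    than a Gaussian distribution."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 4 for x in xs]) - 3.0

def regime_returns(n=20000, seed=2):
    """Returns drawn from a calm regime most of the time, with rare,
    violent transitions to a turbulent regime (parameters are
    illustrative, not calibrated)."""
    random.seed(seed)
    out = []
    calm = True
    for _ in range(n):
        if random.random() < 0.02:      # rare flip of the dichotomy
            calm = not calm
        sigma = 0.5 if calm else 3.0    # volatility jumps at the flip
        out.append(random.gauss(0, sigma))
    return out
```

Even though each regime is Gaussian on its own, the mixture has markedly positive excess kurtosis, while a single-regime Gaussian sample of the same size does not.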

Toward a new paradigm

“… As a market participant, I formulate conjectures and expose them to refutation. I also assume that other market participants are doing the same thing whether they realize it or not. Their expectations are usefully aggregated in market prices. I can therefore compare my own expectations with prevailing prices. When I see a divergence, I see a profit opportunity. The bigger the divergence, the bigger the opportunity.

This works well in markets that are efficient in the sense that transaction costs are minimal; it does not work in private equity investments that are not readily marketable.”
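Soros’s rule of thumb in the quote — the bigger the divergence between one’s own expectation and the prevailing price, the bigger the opportunity — could be sketched as a position-sizing rule (my own illustrative framing; the function name, scaling and cap are assumptions, not from the paper):

```python
def position(my_expectation, market_price, scale=1.0, max_size=10.0):
    """Size a position in proportion to the divergence between one's
    own expectation and the prevailing market price.
    Positive = long, negative = short; capped to limit risk."""
    divergence = my_expectation - market_price
    size = scale * divergence
    return max(-max_size, min(max_size, size))
```

The cap reflects the point made later in the comments: the rule presumes one can take either side of the gamble, and even a large divergence does not justify unbounded exposure, since one’s own expectation is itself fallible.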


Frydman, R., & Goldberg, M. D. (2013). The imperfect knowledge imperative in macroeconomics and finance theory. In R. Frydman & E. S. Phelps (Eds.), Rethinking expectations: The way forward for macroeconomics (pp. 130–168, Chapter 4). Princeton, NJ: Princeton University Press.

My comments

Some pedantry

  • Even if one had a ‘correct’ view of the current situation, regarded as being ‘objective’, there are circumstances in which it is not possible to form a reliable view of some impending future state [Turing … ].
  • A view of a current situation can only be regarded as ‘correct’ in so far as it makes certain distinctions. There is always the possibility that some previously unmade distinction will suddenly become significant.
  • There is always some unavoidable limit to what can be ‘known’: but Soros’ point is that it can have practical consequences if one is more fallible than one needs to be, or more fallible than one thinks.
  • Soros’ new paradigm seems most appropriate when you can take either side of a gamble (as when choosing to go ‘long’ or ‘short’).

The Frydman & Goldberg reference is interesting, but very limited in its scope.

Broader implications


Soros’ ‘new paradigm’ seems a useful advance on the conventional notion of rationality that he critiques. If the overall system is stable, with some ‘normal’ behaviour, and if current activity is consistent with that norm, then it is reasonable to ‘forecast’ by extrapolating from the norm. That is, it seems rational to act on one’s normal expectations. But if an individual sees an inconsistency, then it is reasonable to anticipate further change. This gives rise to ‘reflexive probability’, because the change in anticipation tends to lead to changes in behaviour, which tend to affect what actually happens.


It is perhaps worth noting the following:

Whenever one has a positive feedback cycle one is liable to see self-reinforcing positive or negative changes tending to lead to outcomes that are extremes. There may be some intermediate situation that might be sustained by some outside influence, but taking the feedback loop in isolation, such a potential outcome would be unstable. Moreover, there seems to be no fixed policy or strategy that could infallibly sustain such a situation: one would need a system that would be open to challenges and innovative (or even inventive) in responding to them. In particular, it needs to be able to make whatever distinctions may become necessary to appreciate the nature of the challenge.
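The instability of the intermediate position can be seen in the simplest possible positive feedback iteration (a sketch of my own; the `gain` and `limit` values are arbitrary):

```python
def iterate(x0, gain=0.2, steps=30, limit=1.0):
    """Positive feedback x -> x + gain*x: the interior fixed point
    x = 0 is unstable, so any perturbation is amplified toward an
    extreme (here capped at +/- limit)."""
    x = x0
    for _ in range(steps):
        x = x + gain * x                # self-reinforcement
        x = max(-limit, min(limit, x))  # saturate at the extremes
    return x
```

Starting exactly at the equilibrium, `iterate(0.0)` stays put; the slightest nudge in either direction is amplified all the way to the corresponding extreme, which is the sense in which no fixed policy can infallibly hold the intermediate state.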

Scientific method

Soros notes the common view that science produces reliable knowledge because it has been tested, contrasts it with his own view that all knowledge is fallible, and seeks a reconciliation.

Suppose that instead of considering how well a theory corresponds to some unknowable ‘reality’ we simply consider how useful and well-tested it seems to us. Scientific theories are regarded as unusually well tested but not infallible. Moreover, for many areas of science it seems to me reasonable to regard such theories as actually well tested, in so far as is currently practical. This is as good as it gets outside of pure logic. Economic theories are harder to test, and do not seem to me nearly so well tested. Indeed, many of them seem to preclude much important economic behaviour, such as crashes.

Thus we might reasonably extend Soros’ notion of ‘human fallibility’ to scientists, and also note that any theory is only as good as the tests it has survived. Thus we might reformulate Soros’ new paradigm by saying that we should consider how well the prevailing theory has been tested, and the extent to which the current circumstances might be considered a routine variation on the test conditions as against a novel test. If there is reason to doubt the prevailing theory then one might bet against it, even in the absence of a specific theory or ‘expectation’. Better, though, to explore, develop, maintain and test alternative theories. For example, one might have a theory of the current economy that allows one to extract maximum value, backed by a more general, less rewarding but also less risky theory to fall back on if the more precise theory seems unreliable. Alternatively we might envisage an ecology in which individuals (or groups) follow different theories but can learn from each other, particularly when some theories are falsified.

Dave Marsay
