Beinhocker’s Reflexivity …
Eric D. Beinhocker (2013) Reflexivity, complexity, and the nature of social science, Journal of Economic Methodology, 20:4, 330-342, DOI: 10.1080/1350178X.2013.859403
This seeks to define Soros’ notion of reflexivity and relate it to contemporary complexity theories and social sciences. It is usefully less ambiguous than Soros’ own work, and has been endorsed by him. It seems to me more meaningful, and hence open to (positive) criticism.
It starts using language that suggests that – as with Soros’ original view – the essence of the problem is something mysterious to do with human nature, but goes on to develop a more general view of ‘complex reflexive systems’, a sub-set of complex adaptive systems – that are not just human, but have ‘reflexivity’.
2.1 Necessary conditions
I would propose that in order for a system to be ‘reflexive’ it must have the following elements:
- Environment: …
- Agent: There must be at least one agent interacting with that environment and possibly … with each other … .
- Goal: The agents must have some goal or goals they are pursuing in that environment.
- Cognitive function: The agents must have some way of receiving information about their environment, perceiving the state of that environment, comparing that perceived state against the goal state, and identifying gaps between the perceived state and the goal state; Soros calls this the ‘cognitive function.’
- Manipulative function: … some way of interacting with their environment … in pursuit of their goals.
- Internal model: Each agent contains an internal model that connects its cognitive and manipulative functions; that model contains a mapping between states of the environment and possible actions and consequences.
[The] inevitable flaws and shortcomings in any such model lead to the particular dynamics of reflexive systems.
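The elements quoted above amount to a small architecture, which can be sketched in code. The following is my own minimal illustration (all names hypothetical, not from the paper): an environment, an agent with a goal, a cognitive function that perceives the state and identifies the gap to the goal, a manipulative function that acts, and an internal model (here just a single guessed gain) connecting the two.

```python
import random

class Environment:
    def __init__(self):
        self.state = 0.0

    def apply(self, action):
        # The environment responds to actions, with some noise.
        self.state += action + random.uniform(-0.1, 0.1)

class Agent:
    def __init__(self, goal):
        self.goal = goal
        # Internal model: a guessed gain relating action to effect.
        self.model_gain = 1.0

    def cognitive(self, env):
        # Perceive the state and identify the gap to the goal state.
        return self.goal - env.state

    def manipulative(self, env, gap):
        # Choose an action via the internal model, then act.
        env.apply(gap / self.model_gain)

random.seed(0)
env, agent = Environment(), Agent(goal=10.0)
for _ in range(20):
    gap = agent.cognitive(env)
    agent.manipulative(env, gap)
print(abs(agent.cognitive(env)) < 0.5)  # → True: the agent closes on its goal
```

Note that the agent succeeds here only because its internal model happens to match the environment; Beinhocker's point is precisely that in interesting cases it will not.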
2.2 Distinguishing characteristics: internal model updating and complexity
There are two additional elements that I would argue distinguish a reflexive system from a dynamic feedback system:
- Internal model updating: The internal decision model of the agents is not fixed, but itself can change in response to interactions between the agent and its environment; such interactions can change the agent’s internal model.
- Complexity: The system in which the agent is embedded is complex in two senses: the system has interactive complexity due to multiple interactions between heterogeneous agents, and the system has dynamic complexity due to nonlinearity in feedbacks in the system.
I would further interpret Soros’s definition to add that not only can models update through changes in model parameters (e.g. Bayesian updating), but also the rules or structure of the model itself might change … . … an agent might ‘learn’ and its internal model might improve its performance in mapping perceptions and actions toward achieving the agent’s goals.
3. Limits to knowledge and fallibility
[Constructing] such an accurate internal model and improving its performance through learning in a complex environment runs into fundamental limits to knowledge issues.
3.1 Flawed models in a complex world
Mathematicians and philosophers have discovered a number of results that fundamentally limit the knowledge that agents situated in complex systems can attain:
- Difficulty discovering the correct model from finite data: … .
- Lack of knowledge of initial conditions and parameters: ….
- Inability to predict with finite computing time: … (e.g. NP-complete problems).
- No ‘God’s-eye view’: … – this is related to Gödel’s famous incompleteness theorem.
… We thus have the recursive loop that is at the center of Soros’s concept: fallible agents try to understand and act in an environment of fallible agents trying to understand and act in an environment of fallible agents trying to understand. … [It] is precisely these limits to knowledge and the fallibility that they imply that the rational expectations hypothesis (REH) in economics assumes away.
3.2 Complex reflexive systems
Complex adaptive and complex reflexive systems differ as follows:
First, complex adaptive systems are generally thought of as multi-agent systems, but it is possible to imagine a reflexive system with one agent. Second, as noted, in reflexive systems internal model updating often involves not only changes in model parameters or weights, but changes in rules and model structure as well. Systems where agents have fixed rules but simply adjust rule parameters or weights in response to environmental feedback are often considered adaptive, but I would claim they are not necessarily reflexive in Soros’s use of the term.
4.3 Common epistemological challenges
If one accepts the spectrum of complexity argument then it has important implications for the nature of social science. What defines the epistemological challenge of understanding a particular phenomenon is where it sits on the spectrum of complexity, not its domain. Understanding and explaining two people playing a simple game theory problem with an easily calculated unique Nash equilibrium has more in common with a simple mechanical equilibrium system than it does with trying to understand the effect of contagion in a banking crisis. But understanding the effect of contagion in a banking crisis has some striking similarities to understanding contagion in epidemiology, or the collapse of a food web in ecology (Haldane & May, 2011).
Soros is thus correct that reflexive systems are very challenging to model and understand scientifically. I would argue that they are challenging not because of some fundamental difference between physical and social systems, but because they are extremely complex – whether it is an ecosystem of human beings trading in a stock market, an ecosystem of sophisticated stock trading computer algorithms, or an ecosystem of interacting species (Farmer, 2002). [Soros appears to accept this last point, see the link above.]
4.4 Evolution, good enough models, and muddling through
The epistemological challenges may be high for understanding reflexive systems, but fallible models, limits to knowledge, and the inherent indeterminacy of reflexive systems do not necessarily imply that all is lost and that we cannot gain insight into such systems. Biological systems, human systems, and various artificial systems all manage to function despite fallibility and limits to knowledge. The internal models of agents and their cognitive and manipulative functions may be ‘good enough’ for them to muddle through and make progress toward their goals. As the statistician George E.P. Box once said, ‘Essentially, all models are wrong, but some are useful’ (Box, 1987, p. 424).
Indeed, biological evolution depends on ‘useful enough’ models muddling through without the ability to forecast. Evolution picks up on regularities and through experimentation, selection, and amplification finds heuristics that are ‘good enough for now’ until something better comes along or something selects against them.
I should also briefly note that while complex reflexive systems present a challenge due to their Knightian uncertainty, there is also an upside – their inherent indeterminacy creates space for novelty and creativity. … [Reflexivity] makes free will both possible and necessary.
4.5 Model-dependent realism: reconciling Soros and Popper
[There] may or may not be an objective reality independent from us and the models we create. But whether there is or not does not matter because the only way we can access and perceive our world is via the models we create … .
[Explanations] are always mediated by our models and observations and thus cannot claim to be objectively and perpetually true (model dependency).
5. A way forward for economics
… Following Soros and categorizing economies as complex reflexive systems would end the false certainty of neoclassical theory and enable economists to embrace the inherent fallibility and Knightian uncertainty that characterizes real-world economic systems.
[We] can be hopeful that although our ability to understand such systems may always be limited, our creativity in trying to will not be.
It seems to me that this conception is a great leap forward, and it may be that it signposts an adequate direction of travel for economics and for the social sciences generally. A key aspect of this is in showing the need to consider Knightian uncertainty in addition to mere probabilistic variability. But since fallibility is the key concept, I will not pass by this opportunity to critique the paper.
State-determined systems and Pragmatism
- I do not see how to reconcile the important notion of model-dependent realism, as advocated here, with the notions of ‘goal’ or ‘state’, much less ‘goal state’.
- The use of ‘state’ here seems uninformed by Keynes’ critique, which is bound up with his own thoughts on uncertainty, which in turn seem vital to his notion of economics, which still seems to have a vital contribution to make, along similar lines to this paper. At least, we ignored Keynes to our cost (2001-2008).
- Contrary to what some psychologists seem to suppose, I am not clear that I would fit the description of an ‘agent’. On the contrary, it seems to me that when there are what Turing calls ‘critical instabilities’ my behaviour is more fluid and creative. Whatever I have that might be called a goal relates to an understanding of the situation that is itself highly fluid and (potentially) creative, and so it seems to me that I have no set ‘goals’ (although I may perhaps have principles). It seems to me that I am not alone in this.
- I tend to regard too narrow a ‘pragmatism’ as something that is often efficient, but dangerous. The paper’s description appears to be of agents that are pragmatic in just this dangerous sense. In which case I would agree that the interactions of such pragmatic agents create problems, but hope that we could educate them, and at least ameliorate the problems.
The paper is (rightly) critical of ‘simple’ game theory. From a game theory perspective, the key characteristic is learning, and innovative learning, not just adaptation. This is not simple game theory, and would seem applicable. It would at least provide an even more concrete and testable interpretation of Soros’ ideas. In the terms of such theory, one would say that pragmatic approaches proceed as if some framework or set of rules, regulations or behavioural regularities could be relied upon unless and until they were clearly broken. But it might be better to invest some effort in looking out for potential critical instabilities, where the rules could change, and considering how one could influence the situation. In such situations the notion of a classical state becomes questionable, and goals will need to be reviewed and often revised, perhaps critically. The way to avoid losing is to avoid playing the wrong game.
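My suggestion of ‘looking out for potential critical instabilities’ can itself be given a simple operational form. The sketch below is my own interpretation, not anything in the paper: instead of assuming the observed regularity will hold, the agent monitors cumulative surprise (a CUSUM-style detector) and flags when the implicit rules appear to have changed.

```python
def rule_break_step(observations, expected=0.0, drift=0.5, threshold=5.0):
    """One-sided CUSUM-style detector: accumulate surprise above a drift
    allowance; a large accumulation signals the old rule is broken.
    Returns the step at which the break is flagged, or None."""
    surprise = 0.0
    for step, x in enumerate(observations):
        surprise = max(0.0, surprise + (x - expected) - drift)
        if surprise > threshold:
            return step
    return None

# The 'game' quietly changes at step 30: the mean jumps from 0 to 2.
obs = [0.0] * 30 + [2.0] * 30
print(rule_break_step(obs))  # → 33: flagged a few steps after the shift
```

A purely pragmatic agent, in the sense criticised above, would keep playing the old game until losses forced the issue; the detector at least gives early notice that the goals themselves may need to be reviewed.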