Good Inferences from Utterances?

The problem of interpretation

What people say isn’t necessarily literally true. On the other hand, we may be able to infer more from what people say than its literal content, even where that content is true. This is something of an open problem in, for example, human-machine interaction or human-trained human interaction.

A Good approach?

Here I speculate that we could simply treat utterances as events to be interpreted, rather than try to develop any special theory. I start with the approach of Keynes as developed by Turing and Good. At its simplest, in a context C we believe that if a hypothetical state H obtained we would have a likelihood P(E|H:C) of an evidential state E. In a deterministic situation this will be 1 for a unique E. The inverse will be a set of possible H. For convenience, we normally encapsulate these into a single, canonical, disjunction. The probabilistic and possibilistic cases are similar.
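
To make this concrete, here is a minimal sketch of the inversion in Python. The context, hypotheses, likelihoods and priors are all illustrative inventions; only the shape of P(E|H:C) comes from the text above.

    # Illustrative inversion of P(E|H:C): from an observed utterance E,
    # recover a distribution over the hypothetical states H that might
    # have produced it. All numbers are made up.

    def posterior(priors, likelihoods, evidence):
        """Invert P(E|H:C) into a distribution over hypotheses H given E.

        priors      -- hypothesis -> prior probability (within context C)
        likelihoods -- hypothesis -> {evidence: P(E|H:C)}
        evidence    -- the observed evidential state E (here, an utterance)
        """
        joint = {h: priors[h] * likelihoods[h].get(evidence, 0.0) for h in priors}
        total = sum(joint.values())
        if total == 0.0:
            raise ValueError("evidence impossible under every hypothesis")
        return {h: p / total for h, p in joint.items()}

    # Context C: a colleague reporting back on a meeting.
    priors = {"went well": 0.5, "went badly": 0.5}
    likelihoods = {
        "went well":  {"it was fine": 0.9, "don't ask": 0.1},
        "went badly": {"it was fine": 0.4, "don't ask": 0.6},
    }
    print(posterior(priors, likelihoods, "it was fine"))
    # {'went well': 0.692..., 'went badly': 0.307...}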

The interpretation of utterances can thus be broken into two parts:

  1. What is the context?
  2. What are they likely to say for different hypothetical situations within that context?

For humans we can analyse these further: what is their culture and role? What are they attending to? … There seems to be no need to develop any novel theories or mechanisms.

Probability

When someone says ‘X, with probability 1’, it is commonplace to think that we may be justified in supposing that X holds with probability 1, particularly when the source is authoritative or trusted.

Yet suppose that the source is an academic with little experience of the real-world situation to which X pertains, basing the claim on a somewhat artificial experiment. Then all the statement can mean is that the academic has not been able to conceive of a situation in which X is not probably true. But we, having regard to the academic’s lack of experience, might not regard X as probably true. If we are advising someone else who might have been misled by the statement, a good comment would be to describe a possible situation in which X might not be true; better still, one in which X seems probably false.
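
One way to make this quantitative, as a hedged sketch only: treat the academic’s assertion as evidence about X rather than as X’s probability, and blend the reported figure with a background rate according to how far we trust the assessment to transfer from the artificial experiment to the real-world situation. The weights below are invented for illustration.

    # Hypothetical discounting of a reported 'probability 1'.
    def discounted_probability(reported, reliability, background):
        """Blend a reported probability with a background rate.

        reliability -- our credence that the source's assessment transfers
                       to the situation at hand (0..1)
        """
        return reliability * reported + (1.0 - reliability) * background

    # The academic asserts P(X) = 1, but the experiment was artificial,
    # so the assessment gets little weight outside the lab.
    print(discounted_probability(reported=1.0, reliability=0.3, background=0.2))
    # 0.44 -- far from certain, despite the confident utterance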

Reflexivity

If we believe that others interpret our utterances using an interpretation process I(•:C), then if we want them to believe H we should utter some U such that I(U:C) = H, an inverse problem. More generally, we may say whatever is likely to lead to them believing H. Thus the utterance and interpretation processes depend reflexively on each other. If they are uncertain then one has what Keynes calls reflexive probability, as in house prices, where the more people expect the price to go up, the more it does, fuelling further expectations.
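
A toy rendering of the inverse problem, under the same kind of illustrative assumptions as before: search the candidate utterances for the one whose interpretation I(U:C) gives the most weight to the hypothesis we want believed.

    # Hypothetical sketch: choosing an utterance U so that I(U:C) favours H.
    def interpret(utterance, priors, likelihoods):
        """The listener's interpretation I(U:C): a posterior over hypotheses."""
        joint = {h: priors[h] * likelihoods[h].get(utterance, 0.0) for h in priors}
        total = sum(joint.values()) or 1.0
        return {h: p / total for h, p in joint.items()}

    def best_utterance(target, candidates, priors, likelihoods):
        """The candidate that maximises the listener's belief in the target."""
        return max(candidates, key=lambda u: interpret(u, priors, likelihoods)[target])

    priors = {"danger": 0.2, "safe": 0.8}
    likelihoods = {
        "danger": {"look out!": 0.9, "all clear": 0.05},
        "safe":   {"look out!": 0.1, "all clear": 0.80},
    }
    print(best_utterance("danger", ["look out!", "all clear"], priors, likelihoods))
    # 'look out!' -- it shifts the listener's belief in 'danger' from 0.2 to ~0.69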

If utterance and interpretation are seen as enduring habits (not just one-offs) then they are strategies, and there is a game-theoretic relationship between them. But none of this is any different from the problem of entities who interact via other actions. Since overt actions and utterances often go together (as in ‘look at that’) it seems reasonable to treat them uniformly.

Notes

It is not implied that people actually do think like this, or that this is consistent with how people think they think. For example, people may have an unconscious emotional reaction which determines whether they take others literally, cautiously or suspiciously. All that is claimed is that this approach provides a ‘yardstick’, showing the best that could be done.

Contexts

A context has to contain anything that might influence utterances and interpretations. It may be ‘socially constructed’, as when a meeting room is arranged subliminally to indicate the likely tone of the meeting. The context does not need to be naively or explainably ‘real’.

Implications

There is some current interest in ‘theories of mind’, which seem to introduce new complexities over and above Good’s account. But we could simply include an account of the ongoing social and inter-personal interactions in the context, and our theory about how the interlocutor reacts to the context in the likelihood function. The result may not be a very good account of how people actually do interpret situations, but rather of how they ‘should’.

For example, if we ‘share a context’ and there is no reason to doubt someone, we may normally take their words literally. If a specialist draws our attention to something within his specialism, we do not necessarily suppose that it is the most important thing that we need to attend to. If a native person says that they see big silver birds flying high, we consider other possibilities. If an adversary calls our attention to one thing, we might also look elsewhere, and so on.

Examples

‘Your cheque is in the post’, ‘Does my bum look big in this?’. Such utterances are not always intended to be taken literally, and perhaps neither is our reply. I think at least some people imagine the possible situations that may have given rise to such utterances, and either seek to narrow down the options or hedge across them, without necessarily considering themselves to be illogical or dishonest.

Alternatives

The New Scientist (3 December 2011) has an article ‘Time to think like a computer’ or ‘Do thoughts have a language of their own?’ (On-line). This claims that we do not treat statements using ‘traditional logic’ but that what we do is more like ‘computational logic’. This latter logic is not defined, but Prolog is given as an example.

My view is that we can apply ‘traditional logic’ as long as we treat an utterance as an utterance, as is done here, and not as a ‘fact’. Computational logic, in my experience, is best regarded as a framework for developing specific logics. The article mentions default logics. If we think that someone is in a situation where particular default assumptions would be appropriate, then we would naturally employ default reasoning as a special case of the ‘good’ approach. But the good approach is clearly much more general.

The article has this example:
   If Mary has an essay to write, then she will study late in the library.
   Mary has an essay to write.

This looks like ‘modus ponens’, from which we ‘should’ conclude that Mary will study late in the library. Actually, we will probably have a few caveats, which can be explained in a variety of ways. The article then adds:
If the library is open, then Mary will study late in the library.

As the article says, taking this literally implies that Mary will study late even when she has no essay to write. To get around this problem the article proposes that there is a separate ‘language of thought’. This language is not specified, but seems to include default reasoning. But it seems to me that the example is just the kind of ‘bad logic’ that many people use much of the time, and that, knowing this, we can generally decode it using traditional logic, treating statements as data, not ‘facts’.
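
As a sketch of that decoding, in Python: if we treat the two statements as data about the speaker’s intended rule, the natural reading is that the second utterance qualifies the first, and ordinary two-valued logic then gives the expected answers. The encoding is mine, not the article’s.

    # Literal readings of the two utterances:
    #   U1: essay -> study_late
    #   U2: open  -> study_late  (literally: Mary studies late whenever open!)
    # Decoded as data about one intended rule: (essay and open) -> study_late.

    def study_late(essay, library_open):
        """The decoded rule: U2 is read as a caveat on U1."""
        return essay and library_open

    for essay in (True, False):
        for library_open in (True, False):
            print(f"essay={essay}, open={library_open}: "
                  f"study late = {study_late(essay, library_open)}")
    # Only essay=True, open=True gives True; no separate 'language of
    # thought' is needed once the statements are treated as data.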

Conclusion

We can conceptualize our thinking about acting and reacting as part of a generic problem of interaction, perhaps drawing on ideas about evolutionary learning in games. Our understanding of particular actors (e.g., humans) then determines what we think is relevant within contexts, and how this may influence actors.

The advantages over a more customised approach would be:

  • it includes all types of activity, in conjunction
  • it can draw on insights from these other areas
  • it can be theoretically underpinned, and so be less ad hoc
  • by treating ‘models of mind’ as a variable, it is less likely to be culturally specific
  • it may be more fruitful in suggesting likely variations in ‘style’.

This is speculative.

See also

Thomas McCarthy, Translator’s Introduction to Habermas, Legitimation crisis, Heinemann 1976:

According to Habermas a smoothly functioning language game rests on a background consensus formed from the mutual recognition of at least four different types of validity claims … that are involved in the exchange of speech acts:

  1. claims that the utterance is understandable,
  2. that its propositional content is true,
  3. that the speaker is sincere in uttering it, and
  4. that it is right or appropriate for the speaker to be performing the speech act.

[It] is possible for one or more of them to become problematic in a fundamental way.

From the point of view of ‘Good’s approach’, it is sufficient that the context is sufficiently understood that the likelihoods P(E|H:C) are understood (1), for some set of hypotheses, {H}. Typically, a speaker cannot know that what they are saying is actually ‘true’ (2), but for a speaker to be sincere (3) it should be the best hypothesis out of those that they have considered. It may be that the listener is considering a broader range of hypotheses, in which case they may know that the proposition is false, and yet it may still be useful.
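
A small sketch of (2) and (3), with invented numbers: a sincere speaker asserts the best hypothesis among those they have considered, while a listener with a broader hypothesis set may judge that assertion false yet still find it informative.

    # Hypothetical: sincerity as 'best hypothesis considered', not truth.
    def best_hypothesis(evidence, hypotheses, likelihood):
        """The hypothesis making the evidence most likely, from a given set."""
        return max(hypotheses, key=lambda h: likelihood(evidence, h))

    def likelihood(evidence, h):
        # stands in for P(E|H:C); the numbers are illustrative
        table = {
            ("high silver shapes", "big birds"): 0.3,
            ("high silver shapes", "aircraft"):  0.9,
        }
        return table.get((evidence, h), 0.0)

    speaker_set  = ["big birds"]               # the only hypothesis considered
    listener_set = ["big birds", "aircraft"]   # a broader range

    print(best_hypothesis("high silver shapes", speaker_set, likelihood))   # big birds
    print(best_hypothesis("high silver shapes", listener_set, likelihood))  # aircraft
    # The speaker is sincere (3) even though the listener knows better (2).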

Even in a situation where the speaker is motivated to lie, we may understand the situation (1), regard the lie as ‘true to the situation’, and regard the speaker as sincere and the act appropriate within that context (3, 4). Thus the key does seem to be understanding why they said what they said, rather than the apparent ‘content’ of the proposition.

David Marsay

Quantum Evolution

NewScientist No. 2794 8 Jan. 2011, p 28.

This notes the effect of epigenetics on the variability/uncertainty of inheritance, notes the benefits of this for populations in surviving sudden changes in the environment, and speculates (as did Smuts, below) that this could have come about through natural selection. That is, under natural selection organisms avoid over-adaptation as long as they are subject to harsh enough and frequent enough shocks.
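
A toy simulation of that speculation, entirely my own construction with arbitrary parameters: a lineage that inherits variability in its offspring tracks a shifting optimum through repeated shocks, while an over-adapted one fails at the first shock.

    # Illustrative only: 'adapted to adapt' under environmental shocks.
    def survive_generations(spread, generations=200, shock_every=20):
        """Generations survived while tracking a shifting optimum.

        spread -- how widely offspring traits scatter around the parent
        """
        optimum, trait = 0.0, 0.0
        for g in range(1, generations + 1):
            if g % shock_every == 0:
                optimum += 3.0                 # sudden environmental shock
            # offspring spread around the parent; the fittest survives
            offspring = [trait + d * spread for d in (-2, -1, 0, 1, 2)]
            trait = min(offspring, key=lambda t: abs(t - optimum))
            if abs(trait - optimum) > 2.0:     # too maladapted to survive
                return g
        return generations

    print(survive_generations(spread=0.05))  # 20 -- over-adapted, dies at first shock
    print(survive_generations(spread=1.0))   # 200 -- variable offspring keep up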

See Also

Smuts’ Holism and Evolution, Peter Allen

David Marsay

Holism and Evolution

Holism and Evolution, 1927. Smuts’ notoriously inaccessible theory of evolution, building on and show-casing Keynes’ notion of uncertainty. Smuts made significant revisions and additions in later editions to reflect some of the details of the then-current understanding. Not all of these now appear to be an improvement. Although Smuts and Whitehead worked independently, they recognized that their theories were equivalent. The book is of most interest for its general approach, rather than its detail. Smuts went on to become the centenary president of the British Association for the Advancement of Science, drawing on these ideas to characterise ‘modern science’.

Holism is a term introduced by Smuts, in contrast to individualism and wholism. In the context of evolution it emphasises co-evolution between parts and wholes, with neither being dominant. The best explanation I have found is:

“Back in the days of those Ancient Greeks, Aristotle (384-322 BCE) gave us:

The whole is greater than the sum of its parts; (the composition law)
The part is more than a fraction of the whole. (the decomposition law)”

(From Derek Hitchins’ Systems World.)

Smuts also develops Lloyd Morgan’s concept of emergence. For example, the evolutionary ‘fitness function’ may emerge from a co-adaptation rather than be fixed.

The book covers evolution from physics to personality. Smuts intended a sequel covering, for example, social and political evolution, but was distracted by the Second World War.

Smuts noted that according to the popular view of evolution, one would expect organisms to become more and more adapted to their environmental niches, whereas they were more ‘adapted to adapt’, particularly mankind. There seemed to be inheritance of variability in offspring as well as the more familiar inheritance of manifest characteristics, which suggested more sudden changes in the environment than had been assumed. This led Smuts to support research into the Wegener hypothesis (concerning continental drift) and the geographic origins of life-forms.

See also

Ian Stewart, Peter Allen

David Marsay

Life’s Other Secret

Ian Stewart Life’s Other Secret: The new mathematics of the living world, 1998.

This updates D’Arcy Thompson’s classic On growth and form, ending with a manifesto for a ‘new’ mathematics, and a good explanation of the relationship between mathematics and scientific ‘knowledge’.

Like most post-80s writings, its main failing is that it sees science as having achieved some great new insights in the 80s, ignoring the work of Whitehead et al, as explained by Smuts, for example.

Ian repeatedly notes the tendency for models to assume fixed rules, and hence only to apply within a fixed Whitehead-epoch, whereas (as Smuts also noted) life bears the imprint of having been formed during (catastrophic) changes of epoch.

The discussion provides some supporting evidence for the following, but does not develop the ideas:

The manifesto is for a model combining the strengths of cellular automata with Turing’s reaction-diffusion approach, and more. Thus it is similar to Smuts’ thoughts on Whitehead et al, as developed in SMUTS. Stewart also notes the inadequacy of the conventional interpretation of Shannon’s ‘information’.

See also

Mathematics and real systems. Evolution and uncertainty, epochs.

Dave Marsay

Reasoning under uncertainty methods

In reasoning about or under uncertainty it is sometimes not enough to use the best method, or an accredited method: one needs to understand the limitations of method.

The limits of method

Strictly, rational reasoning implies that everything can be assigned a value and probability, so that the overall utility can be maximised. So when faced with greater uncertainties, the appearance of rationality can only be achieved by faking it, which is not always effective.
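
The ‘rational’ recipe, sketched in Python with invented numbers: assign probabilities and utilities, then maximise expected utility. The point is what the recipe presupposes: a precise P(‘boom’) of the kind that greater uncertainty denies us.

    # A minimal expected-utility maximiser; everything here is illustrative.
    def expected_utility(action, probs, utility):
        """Sum of P(state) * U(action, state) over the possible states."""
        return sum(p * utility[action][state] for state, p in probs.items())

    probs = {"boom": 0.6, "bust": 0.4}       # the recipe demands these numbers
    utility = {
        "invest": {"boom": 100, "bust": -80},
        "hold":   {"boom": 10,  "bust": 10},
    }
    best = max(utility, key=lambda a: expected_utility(a, probs, utility))
    print(best)  # 'invest' (EU 28 vs 10) -- but only if P('boom') is really known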

Pragmatism is more general than rationalism. The key feature is that one uses a fixed model until it is no longer credible. But if the situation is complex or uncertain it may not be possible to use a definite model without making unwarranted assumptions.

Turing (a grand-student of Whitehead) demonstrated some of the limitations of definite methods more generally. We cannot be too restrictive in what we consider to be ‘methodical’.

One approach to method is to link overall decisions to ‘objective’ sub-decisions, made by accredited specialist decision-makers. This relies on some conceptual linkage between the specialists and generalists, and between all collaborators on a decision. This can deal with complicatedness across specialisms and complexity and uncertainty within specialisms, in so far as they are understood across the specialisms.

The difficulty with this approach is that complexity and uncertainty often span the whole domain. One is thus left with the problem of handling complexity and uncertainty within a collaboration.

Conceptualization

It follows from the Conant-Ashby theorem that people who are good at dealing with complexity and uncertainty without having dominance must, in some sense, understand these topics, even if they have had no exposure to the relevant theories. This raises these questions:

  • How do we recognize people whose track records are such that we can be sure that they have the appropriate understanding, and weren’t reliant on others or lucky?
  • To what extent is an understanding of complexity and uncertainty developed in one situation relevant to another? Are all complexities and uncertainties in some sense similar, or amenable to the same approaches?
  • How can such understanding and methods be communicated?

Way ahead?

The following have been found helpful in confrontation, crisis and conflict anticipation and management:

  • Developing the broadest possible theoretical base for complexity and uncertainty, using Keynes’ Treatise as a former.
  • Identifying and engaging as broadly and fully as possible with all parties to the situations (of whatever nationality etc) who show an understanding or ability to handle complexity and uncertainty.

More broadly, seeking to take an overview and to engage with practitioners who are dealing with the more challenging kinds of complexity and uncertainty, with the aim of developing practical aids to comprehension:

  • To assist those who are or may be engaged.
  • To help develop understanding among those who could support those who are or may be engaged.
  • To help establish some common language etc. so that the broadest possible community can have the fullest possible visibility and understanding of the process, and – where appropriate – involvement.

The core of all this would seem to be a set of resources that address complexity and uncertainty rather than complicatedness and probability, with understanding to be developed via collaborative ‘war-games’.

Pedagogic resources

The aids included in the following have proved helpful.

  • Peter Allen’s overview, which allows an appreciation of most of the fields.
  • Everyday and other metaphors.
  • A Whitehall report on collaboration highlights complexity and uncertainty, including in the ‘collaborative partnership model’.
  • SMUTS, supporting exploration of key factors.

See Also

How much uncertainty?, Heuristics, Knightian uncertainty, Kant’s critique.

David Marsay

Decoding Reality

A well-recommended book, but it doesn’t explain its key assumptions, and so I feel that its conclusions should be limited to ‘the understood aspects of reality’ rather than the whole of reality.

Overview

Decoding Reality: the universe as quantum information, Vlatko Vedral, 2010, has been recommended to me by quite a few physicists with an interest in quantum mechanics and information.

The theme is ‘information is Physical’, but I am never clear if:

  1. our perception of the world is constructed from our information
  2. things are the way that they really are because of information.

The language seems to wander from one to the other. Also, (p 28) Vedral follows Aristotle in linking information to probability, and (p 31) credits Shannon with this. Yet Shannon only deals with the ‘technical’ problem, making assumptions that – as Keynes noted – appear not to be universally valid. Weaver’s introduction to Shannon covers this.
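
For reference, the ‘technical’ sense that Shannon did address, sketched below: surprisal and entropy are defined only once a probability distribution is assumed, which is precisely the assumption Keynes questioned.

    import math

    def surprisal(p):
        """Information content, in bits, of an event with probability p."""
        return -math.log2(p)

    def entropy(dist):
        """Expected surprisal (Shannon entropy, in bits)."""
        return sum(p * surprisal(p) for p in dist if p > 0)

    print(surprisal(0.5))        # 1.0 bit
    print(entropy([0.5, 0.5]))   # 1.0 bit -- a fair coin
    print(entropy([0.9, 0.1]))   # ~0.47 bits
    # The formalism presupposes the probabilities; whether such numbers are
    # always available is the point at issue.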

A key metaphor is of sculpting (p 205). As science is developed, what remains to be explained reduces towards nothing, although a new viewpoint can focus on previously unappreciated possibilities. This is quite different from a view in which attempts to apply science create new complexities to be understood. While it may be true that this ‘completely and faithfully embodies the spirit of how science operates’, there is no discussion of the alternatives, or of which may be more effective. Much of my difficulty with the book seems to flow from this assumption. For example,

  • While it is recognized (p 193) that ‘all quantum information is ultimately context dependent’, the discussions on probability and information neglect the role of context (e.g. p 189).
  • The approach is explicitly pragmatic, in the sense of using a model until it clearly fails. Is this wise?
  • The book uses the word ‘reality’ for our conception of it. Thus in 2006 the financial boom was ‘really’ going to continue forever: we had no way to talk about the crash until it happened. Thus this type of ‘science’ cannot handle uncertainty without introducing paradoxes.
  • Specifically (p 170) there is only randomness and determinacy, yet the notion of randomness is narrower than uncertainty.
  • No mention is made of Turing, whom one might have thought central.
  • It interprets Occam’s razor as favouring the theory with the shortest description (p 166), rather than the one with the fewest assumptions.

Comment

Much of what is said is valid within an epoch, but unhelpful more generally. Perhaps one could compromise on “the understood part of the universe as the understood part of quantum information”. Alternatively, given Vedral’s usage of ‘real’ perhaps the title could stand. But then the ‘unreal possible’ might surprise us.

See Also

Allen, induction, pragmatism, metaphors – see quantum.

David Marsay