Good Inferences from Utterances?
March 19, 2011
The problem of interpretation
What people say isn’t necessarily literally true. On the other hand, we may be able to deduce more from what people say than its literal content, even where that content is true. This is something of an open problem in, for example, human-machine interaction or human-trained human interaction.
A Good approach?
Here I speculate that we could simply treat utterances as events to be interpreted, rather than try to develop any special theory. I start with the approach of Keynes as developed by Turing and Good. At its simplest, in a context C we believe that if a hypothetical state H obtained we would have a likelihood P(E|H:C) of an evidential state E. In a deterministic situation this will be 1 for a unique E. The inverse will be a set of possible H. For convenience, we normally encapsulate these into a single, canonical, conjunction. The probabilistic and possibilistic cases are similar.
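This inversion can be sketched in code. A minimal illustration, assuming discrete hypothesis and evidence states; the states and numbers are my own illustrative assumptions, not from the text:

```python
# Given likelihoods P(E|H:C) and priors P(H:C), invert an observed
# evidential state E into a distribution over the hypothetical states H
# that could have produced it.

def invert(likelihoods, priors, evidence):
    """Return P(H | E : C) for each hypothesis H.

    likelihoods: dict H -> (dict E -> P(E | H : C))
    priors:      dict H -> P(H : C)
    """
    joint = {h: priors[h] * likelihoods[h].get(evidence, 0.0)
             for h in priors}
    total = sum(joint.values())
    if total == 0.0:
        raise ValueError("evidence impossible under every hypothesis")
    return {h: p / total for h, p in joint.items()}

# Deterministic case: each H yields a unique E with likelihood 1, so the
# inverse of E is just the set of H consistent with it.
likelihoods = {"raining": {"take an umbrella": 1.0},
               "sunny":   {"take a hat": 1.0}}
priors = {"raining": 0.3, "sunny": 0.7}
posterior = invert(likelihoods, priors, "take an umbrella")
# here all the posterior weight falls on "raining"
```

In the deterministic case the posterior simply renormalises the prior over the consistent hypotheses; the probabilistic case uses the same code with fractional likelihoods.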
The interpretation of utterances can thus be broken into two parts:
- What is the context?
- What are they likely to say for different hypothetical situations within that context?
For humans we can analyse these further: What is their culture and role? What are they attending to? … There seems to be no need to develop any novel theories or mechanisms.
If we believe that others interpret our utterances using an interpretation process I(•:C), then if we want them to believe H we should utter some U such that I(U:C) = H, an inverse problem. More generally, we may say whatever is likely to lead to them believing H. Thus the utterance and interpretation processes depend reflexively on each other. If they are uncertain then one has what Keynes calls reflexive probability, as in house prices, where the more people expect the price to go up, the more it does, fuelling further expectations.
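The inverse problem can be sketched the same way: model the listener's interpretation I(•:C) as the Bayesian inversion described earlier, then search candidate utterances for the one whose interpretation puts the most weight on the hypothesis we want believed. All names and numbers below are illustrative assumptions:

```python
def interpret(utterance, listener_likelihoods, priors):
    """The listener's I(U:C): a posterior over hypotheses given U."""
    joint = {h: priors[h] * listener_likelihoods[h].get(utterance, 0.0)
             for h in priors}
    total = sum(joint.values()) or 1.0
    return {h: p / total for h, p in joint.items()}

def best_utterance(target, candidates, listener_likelihoods, priors):
    """Choose the U most likely to lead the listener to believe target."""
    return max(candidates,
               key=lambda u: interpret(u, listener_likelihoods, priors)[target])

# The speaker's model of the listener: for each hypothesis H, how likely
# the listener thinks each utterance U would be, P(U | H : C).
listener = {"late":    {"stuck in traffic": 0.8, "on my way": 0.4},
            "on time": {"stuck in traffic": 0.0, "on my way": 0.6}}
priors = {"late": 0.5, "on time": 0.5}
chosen = best_utterance("late", ["stuck in traffic", "on my way"],
                        listener, priors)
# "stuck in traffic" is chosen: under this listener model it is only
# plausible when the speaker really is late
```

The reflexive dependence appears when the listener, in turn, models the speaker as choosing utterances this way, and so on.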
If utterance and interpretation are seen as enduring habits (not just one-offs) then they are strategies and there is a game-theory like relationship between them. But none of this is any different from the problem of entities who interact via other actions. Since overt actions and utterances often go together (as in ‘look at that’) it seems reasonable to treat them uniformly.
It is not implied that people actually do think like this, or that this is consistent with how people think they think. For example, people may have an unconscious emotional reaction which determines whether they take others literally, cautiously or suspiciously. All that is claimed is that this approach provides a ‘yardstick’, showing the best that could be done.
A context has to contain anything that might influence utterances and interpretations. It may be ‘socially constructed’, as when a meeting room is arranged subliminally to indicate the likely tone of the meeting. The context does not need to be naively or explainably ‘real’.
Some current interest is in ‘theories of mind’, which seem to introduce new complexities over and above Good’s account, for example. But we could just include an account of the ongoing social and inter-personal interactions in the context, and our theory about how the interlocutor reacts to the context in the likelihood function. The result may not be a very good account of how people actually do interpret situations, but of how they ‘should’.
For example, if we ‘share a context’ and there is no reason to doubt someone, we may normally take their words literally. If a specialist draws our attention to something within his specialism, we do not necessarily suppose that it is the most important thing that we need to attend to. If a native person says that they see big silver birds flying high, we consider other possibilities. If an adversary calls our attention to one thing, we might also look elsewhere, and so on.
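These cases can all be expressed as the same inversion with different speaker models in the context. A sketch under illustrative assumptions, using the adversary case:

```python
def interpret(utterance, likelihoods, priors):
    """Posterior over hypotheses given an utterance and a speaker model."""
    joint = {h: priors[h] * likelihoods[h].get(utterance, 0.0)
             for h in priors}
    total = sum(joint.values()) or 1.0
    return {h: p / total for h, p in joint.items()}

priors = {"threat there": 0.5, "threat elsewhere": 0.5}
utterance = "look over there"

# A trusted speaker mostly says this when the threat really is there.
trusted = {"threat there":     {utterance: 0.9},
           "threat elsewhere": {utterance: 0.1}}
# An adversary is more likely to say it as misdirection.
adversary = {"threat there":     {utterance: 0.3},
             "threat elsewhere": {utterance: 0.7}}

literal = interpret(utterance, trusted, priors)
wary = interpret(utterance, adversary, priors)
# with the trusted model we take the words (almost) literally; with the
# adversary model the same words raise the probability that we should
# also look elsewhere
```

The utterance is unchanged; only the speaker model in the context differs, which is the point of treating interpretation as ordinary inference.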
‘Your cheque is in the post’, ‘Does my bum look big in this?’. Such utterances are not always intended to be taken literally, and perhaps neither is our reply. I think at least some people imagine the possible situations that may have given rise to such utterances, and either seek to narrow down the options or hedge across them, without necessarily considering themselves to be illogical or dishonest.
The New Scientist (3 December 2011) has an article ‘Time to think like a computer’ or ‘Do thoughts have a language of their own?’ (On-line). This claims that we do not treat statements using ‘traditional logic’ but that what we do is more like ‘computational logic’. This latter logic is not defined, but Prolog is given as an example.
My view is that we can apply ‘traditional logic’ as long as we treat an utterance as an utterance, as is done here, and not as a ‘fact’. Computational logic, in my experience, is best regarded as a framework for developing specific logics. The article mentions default logics. If we think that someone is in a situation where particular default assumptions would be appropriate, then we would naturally employ default reasoning as a special case of the ‘good’ approach. But the good approach is clearly much more general.
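A sketch of default reasoning as such a special case, using the standard ‘birds fly’ illustration (my example, not the article’s): the default conclusion is drawn unless the context contains a known exception.

```python
def flies(context):
    """Default: things known to be birds fly, unless the context
    supplies a recognised exception."""
    exceptions = {"penguin", "ostrich", "injured"}
    return "bird" in context and not (exceptions & context)

assert flies({"bird"}) is True              # default applies
assert flies({"bird", "penguin"}) is False  # exception defeats it
```

In the ‘good’ approach the exception set is just part of the context, and the default itself is part of our likelihood model of the situation.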
The article has this example:
If Mary has an essay to write, then she will study late in the library.
Mary has an essay to write.
This looks like ‘modus ponens’, from which we ‘should’ conclude that Mary will study late in the library. Actually, we will probably have a few caveats, which can be explained in a variety of ways. The article then adds:
If the library is open, then Mary will study late in the library.
As the article says, taking this literally implies that Mary will study late even when she has no essay to write. To get around this problem the article proposes that there is a separate ‘language of thought’. This language is not specified, but seems to include default reasoning. But it seems to me that the example is just the kind of ‘bad logic’ that many people use much of the time, and that, knowing that, we can generally decode it using traditional logic, treating statements as data, not ‘facts’.
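A sketch of that decoding, treating the article’s two statements as data about Mary rather than as independent facts: read jointly, they say she studies late when she has an essay and the library is open.

```python
def mary_studies_late(has_essay, library_open):
    # Joint reading of the two uttered conditionals, treated as data
    # about Mary rather than as separate logical facts.
    return has_essay and library_open

# The joint reading avoids the problem case the article raises:
assert mary_studies_late(True, True) is True
assert mary_studies_late(False, True) is False  # no essay, no late study
assert mary_studies_late(True, False) is False  # library shut
```

No new ‘language of thought’ is needed for this example; the work is done by treating each utterance as evidence about the situation rather than as a standalone fact.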
We can conceptualize our thinking about acting and re-acting as part of a generic problem of interaction, perhaps drawing on ideas about evolutionary learning in games. Our understanding of particular actors (e.g., humans) then determines what we think is relevant within contexts, and how this may influence actors.
The advantages over a more customised approach would be that it:
- includes all types of activity, in conjunction
- can draw on insights from these other areas
- can be theoretically underpinned, and so be less ad hoc
- by treating ‘models of mind’ as a variable, is less likely to be culturally specific
- may be more fruitful in suggesting likely variations in ‘style’.
This is speculative.