P[ : C]: Possibility Theory

Possible probabilities, P<A|B:C>, are a generalization of the more conventional precise probabilities, P(A|B). They cover the full range of probability values consistent with the given constraints and the context, C, including its assumptions and heuristics.

Possibility Theory is an established variant of probability theory that I do not claim to fully understand. Nevertheless, I attempt here to relate it to possible probabilities, considering sets of propositions.

Possibilities from Imprecision

It is standard that one can derive a canonical possibility distribution, Π(_) (Wikipedia’s Pos(_)), from an imprecise probability distribution, P<_:C>:

Π(U) = sup {p(U) : p ∈ P<U:C>}.

(Using the possible probabilities notation.)

Given a space, Γ, of possible contexts, C, we can define the unconditional or total possible probabilities:

P<U> ≡ P<U:Γ> ≡ ∪ {P<U:C> : C ∈ Γ}.

Thus we have possibility distributions for either particular contexts or the totality of contexts.
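As a rough illustration, here is a minimal sketch in Python, assuming a finite outcome space and finitely many candidate distributions; the contexts and values are made up for illustration.

    # Sketch: a canonical possibility distribution from an imprecise probability.
    # P<_:C> is represented as a finite list of candidate distributions (dicts);
    # Pi(U:C) = sup over p in P<U:C> of p(U). Names and values are illustrative.

    def possibility(event, candidates):
        return max(p.get(event, 0.0) for p in candidates)

    # One context: a coin whose bias is only known to lie between 0.4 and 0.8.
    P_C = [{"Heads": b / 10, "Tails": 1 - b / 10} for b in range(4, 9)]
    print(possibility("Heads", P_C))   # Pi(Heads:C) = 0.8
    print(possibility("Tails", P_C))   # Pi(Tails:C) = 0.6

    # A space Gamma of contexts: the total possible probabilities P<U> are the
    # union of the per-context sets P<U:C>.
    Gamma = {"C1": P_C, "C2": [{"Heads": 1.0, "Tails": 0.0}]}
    P_total = [p for dists in Gamma.values() for p in dists]
    print(possibility("Heads", P_total))  # possibility over the totality of contexts = 1.0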

Extending Possibility Notation

It is convenient to follow possible probabilities in adapting Jack Good’s notation for context-dependent probabilities, P(_|_:C):

Π(U:C) = sup {p(U) : p ∈ P<U:C>}.

Thus, by default:

Π(U) ≡ Π(U:Γ) ≡ sup {p(U) : p ∈ P<U>}.

This enables one to distinguish between short-run and long-run possibilities.
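For instance (another small Python sketch, with made-up contexts), the context-indexed notation separates the short-run possibility, where the context is taken as fixed, from the long-run possibility over the whole space Γ:

    # Sketch: Pi(U:C) for particular contexts versus Pi(U) = Pi(U:Gamma).
    # Contexts and values are made up for illustration.

    def possibility(event, candidates):
        return max(p.get(event, 0.0) for p in candidates)

    Gamma = {
        "C1": [{"Heads": 0.5, "Tails": 0.5}],   # a fair coin
        "C2": [{"Heads": 1.0, "Tails": 0.0}],   # a double-headed coin
    }

    # Short-run: the context persists, so possibilities are conditioned on it.
    print(possibility("Tails", Gamma["C1"]))    # Pi(Tails:C1) = 0.5
    print(possibility("Tails", Gamma["C2"]))    # Pi(Tails:C2) = 0.0

    # Long-run: P<U> pools all contexts, so Pi(Tails) = Pi(Tails:Gamma) = 0.5,
    # which is also the maximum of the per-context possibilities.
    print(possibility("Tails", [p for d in Gamma.values() for p in d]))  # 0.5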

Limitations of Possibility Theory

Given a possibility distribution,  Π(U:C), it is standard that there is a corresponding set of possible probabilities:

P<_:C> = {probability distributions p | p(U) ≤ Π(U:C) for all U}.

(Using the notation of possible probabilities.)
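A small Python sketch of the membership test, assuming a finite outcome space so that events can be enumerated as subsets (the names are illustrative, and note that the test must check every event, not just the individual outcomes):

    # Sketch: testing whether a probability distribution p belongs to the
    # canonical set P<_:C> induced by a possibility distribution Pi, i.e.
    # whether p(U) <= Pi(U:C) for every event U. Names are illustrative.
    from itertools import combinations

    def events(outcomes):
        for r in range(1, len(outcomes) + 1):
            for subset in combinations(outcomes, r):
                yield subset

    def Pi_event(event, Pi):   # possibility of an event: max over its outcomes
        return max(Pi[o] for o in event)

    def p_event(event, p):     # probability of an event: sum over its outcomes
        return sum(p[o] for o in event)

    def in_canonical_set(p, Pi):
        outcomes = list(Pi)
        return all(p_event(U, p) <= Pi_event(U, Pi) for U in events(outcomes))

    Pi = {"Heads": 1.0, "Tails": 1.0}                           # Pi(Heads:C) = Pi(Tails:C) = 1
    print(in_canonical_set({"Heads": 0.3, "Tails": 0.7}, Pi))   # True: any bias is allowed

    Pi2 = {"Heads": 1.0, "Tails": 0.2}
    print(in_canonical_set({"Heads": 0.5, "Tails": 0.5}, Pi2))  # False: p(Tails) > Pi(Tails)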

For example, if, for a coin, Π(Heads:C) = Π(Tails:C) = 1, then the canonical set of possible probabilities corresponds to coins of unknown bias. But double-sided coins (both Heads, or both Tails) have the same possibility distribution. The difficulty is that, from a decision perspective, these are very different situations. Suppose that you pick a coin, toss it once, and get Heads. In the first case (C), both Heads and Tails are possible on the next toss. In the second case (C’), only Heads is possible.

A work-around would be to consider possibilities for sequences. In the first case Π(U,V:C) > 0 for all combinations, U,V, of Heads or Tails; in the second case Π(Heads,Tails:C’) = 0.
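The contrast can be checked with a small Python sketch (illustrative names; the unknown bias is approximated by a grid, and tosses within a context are treated as independent given the bias):

    # Sketch: single-toss possibilities coincide for the two contexts, but
    # two-toss sequence possibilities separate them. Candidate coins are given
    # by their bias b; within a context, Pi is the sup over candidate biases.

    def p(outcome, b):
        return b if outcome == "Heads" else 1 - b

    def single_possibility(u, biases):
        """Pi(u:C) = sup over candidate biases of P(toss = u)."""
        return max(p(u, b) for b in biases)

    def seq_possibility(u, v, biases):
        """Pi(u,v:C) = sup over candidate biases of P(first = u, second = v)."""
        return max(p(u, b) * p(v, b) for b in biases)

    C       = [b / 100 for b in range(101)]   # unknown bias, approximated by a grid
    C_prime = [0.0, 1.0]                      # double-sided coins only

    # Single tosses: both contexts give Pi(Heads) = Pi(Tails) = 1.
    print([single_possibility(u, C) for u in ("Heads", "Tails")])        # [1.0, 1.0]
    print([single_possibility(u, C_prime) for u in ("Heads", "Tails")])  # [1.0, 1.0]

    # Sequences: mixed sequences have positive possibility under C but zero under C'.
    print(seq_possibility("Heads", "Tails", C))        # 0.25 > 0
    print(seq_possibility("Heads", "Tails", C_prime))  # 0.0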

Modus ponens

In ordinary logic, modus ponens is the rule that from U and U ⇒ V one can deduce V. In possibility theory the corresponding rule is

Π(V) ≥ min{Π(U), Π(U ⇒ V)}.

(Where ‘U ⇒ V’ is read as ‘V∨¬U’.)

Now, U∧(U ⇒ V) = U∧V, which entails V, so Π(V) ≥ Π(U∧(U ⇒ V)). The rule then follows from the min rule for conjunctions,

Π(U∧V) = min{Π(U), Π(V)},

which holds when the conjuncts are non-interactive; in general one only has Π(U∧V) ≤ min{Π(U), Π(V)}. Thus possibility theory allows one to derive bounds on the possibilities of modus ponens deductions.
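A small Python sketch, with a made-up possibility distribution over four ‘worlds’ (truth assignments to U and V), illustrates these bounds:

    # Sketch: possibility of compound propositions, computed from an illustrative
    # possibility distribution pi over four "worlds" (truth assignments to U, V).
    # Pi(A) is the max of pi(w) over the worlds where A holds.

    worlds = [
        # (U,     V,     pi(world))  -- made-up values for illustration
        (True,  True,  0.4),
        (True,  False, 1.0),
        (False, True,  0.7),
        (False, False, 0.2),
    ]

    def Pi(holds):
        vals = [pi for (u, v, pi) in worlds if holds(u, v)]
        return max(vals) if vals else 0.0

    U       = lambda u, v: u
    V       = lambda u, v: v
    U_and_V = lambda u, v: u and v
    U_imp_V = lambda u, v: v or not u        # 'U => V' read as 'V or not U'

    # Conjunction: in general Pi(U and V) <= min(Pi(U), Pi(V)).
    print(Pi(U_and_V), "<=", min(Pi(U), Pi(V)))                       # 0.4 <= 0.7

    # Modus ponens: U and (U => V) together entail V, so Pi(V) >= Pi(U and (U => V)).
    print(Pi(V), ">=", Pi(lambda u, v: U(u, v) and U_imp_V(u, v)))    # 0.7 >= 0.4

    # The bound in the text, Pi(V) >= min(Pi(U), Pi(U => V)), holds here as well.
    print(Pi(V), ">=", min(Pi(U), Pi(U_imp_V)))                       # 0.7 >= 0.7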

A standard example where a possibility distribution is more natural than a probability distribution is:

  1. All birds can fly.
  2. Penguins are birds.
  3. Penguins cannot fly.

In classical logic this yields a contradiction. In possibilistic logic it is resolved by assigning possibility values to each statement: (3) only prevails when its value is high enough relative to the other two.
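As a rough sketch of how such values might resolve the conflict (Python, with made-up weights; this simplifies the possibilistic-logic treatment to ‘drop the least certain statement in the inconsistent set’):

    # Sketch: resolving the birds/penguins conflict by weights, simplifying the
    # possibilistic-logic treatment to "drop the weakest statement in the
    # jointly inconsistent set". Weights are made up for illustration.

    def resolve(statements):
        """statements: dict name -> weight for a jointly inconsistent set.
        Returns the statements retained after dropping the weakest."""
        weakest = min(statements, key=statements.get)
        return {name: w for name, w in statements.items() if name != weakest}

    conflict = {
        "All birds can fly":   0.6,
        "Penguins are birds":  0.9,
        "Penguins cannot fly": 0.8,
    }

    print(resolve(conflict))
    # 'All birds can fly' is dropped, so 'Penguins cannot fly' stands.
    # With a lower weight on 'Penguins cannot fly' (say 0.5) it would be
    # dropped instead: (3) only stands when it is not the weakest statement
    # in the conflict.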

It seems to me that taking account of the context for possibility judgements is useful here. In this case, perhaps (1) was decided without considering penguins (a context issue), and perhaps (3) has been decided based on only a cursory inspection (an issue of imprecision). Thus even here the notation Π(U:C) is useful.

What matters is perhaps not so much P(fly|bird), but what the estimate depends on. We are interested in:

Π(fly|bird:C) ≈ 1 versus Π(fly|bird:C’) << 1.

Our extended possibility theory seems appropriate here. But if we include in our context ‘using possibility judgements and then taking the canonical probability distributions’, then using (extended) imprecise probabilities or possible probabilities seems harmless. Further, (2) and (3) seem the sort of statements that one might be able to assign probabilities to, in which case it seems natural to make use of such estimates.

Conclusions

Possibility Theory, in its simplest form, fails to capture some important aspects of uncertainty. Following Jack Good’s lead, it can be extended to make explicit the dependence on context. One might also look at non-canonical possible probabilities by considering possibility distributions for sequences.

Possibility theory, used this way, seems like an alternative representation of imprecise probabilities or possible probabilities. Either of these seems, to me, more intuitive. Possibility theory would be justified where it is not derived from probabilistic considerations.

It seems to me that using possibility distributions for sequences would risk confusion, since their possibility is only meaningful if the context persists. If one had a probability distribution over contexts, then the interpretation of the probability of a sequence might confuse a Bayesian. In the light of this, and of the lack of compelling advantages, I do not find the notation attractive. But having established a connection between possibility theory, imprecise probabilities, various probability theories and possible probabilities, the body of theory and examples does seem insightful.

See Also

My notes on:

 Dave Marsay
