Crossley’s Logic

J.N. Crossley and others, What is Mathematical Logic?, OUP, 1972.


[We hope that this book] will give some idea of the exciting aspects of mathematical logic to those who have no mathematical training.

1 Historical Survey

[Mathematics and logic] start to converge in the 19th century, … about 1850, when we have logicians such as Boole and Frege attempting to give a final and definitive form to what formal deduction actually was. …. Frege arrived at predicate calculus which turned out to be an adequate logical basis for all of today’s mathematics.

Russell … really showed that mathematics was logic and set theory.

[The] first concern of mathematicians was to clean up the basis of set theory so that paradoxes could not arise. … Fraenkel and Skolem in 1922 [developed] an axiomatic set theory which seemed to avoid the problems.

But when we have just a collection of symbols, they are susceptible to many different interpretations (or models as they are called). [That] there are definitely unintended interpretations was proved in 1915 by Löwenheim and … Skolem.

[In particular] every theory has countable models. … This is possible because, if you can only say countably many things about a certain domain, then only countably many objects are needed to satisfy those statements.

Since the axioms of set theory yield a proof that uncountable sets exist, how can they be satisfied in a countable domain? This state of affairs is called the Skolem paradox.

Gödel showed that, no matter what formal system you start with, … there are always sentences of arithmetic that cannot be proved or disproved relative to the original system.

Church showed that there was no mechanical method that would decide truth or falsity in the predicate calculus.

Neither the Axiom of Choice nor the Continuum Hypothesis can be disproved, and neither can their negations: both are independent of the axioms of set theory.

2 The Completeness of Predicate Calculus

[The] statements that we can prove in the predicate calculus are exactly the ones that are really true.

3 Model Theory

The predicate letter E is called the identity predicate and predicate calculus with E is called the predicate calculus with identity.

One has:

  • P(b) & E(j,b) → P(j).

The following axioms for equality are true in every normal interpretation:

  1. ∀x E(x,x). (Reflexive.)
  2. ∀x ∀y (E(x,y) → E(y,x)). (Symmetric.)
  3. ∀x ∀y ∀z (E(x,y) & E(y,z) → E(x,z)). (Transitive.)
  4. For any formula Φ, ∀x ∀y (E(x,y) → (Φ(x,x) → Φ(x,y))). (Leibniz’s Law.)

[A] formula Φ is true in every normal interpretation if and only if Φ is provable from the logical axioms together with the axioms for equality.

[One] can always get a normal model for a set of sentences including the axioms for equality, given that the set has a model at all.

[A] relation … which is reflexive, symmetric and transitive [and which satisfies Leibniz’s Law] … is not necessarily equality, for consider the relation ‘same age as’ … That two people are the same age does not imply that they are the same person.
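The ‘same age as’ example can be checked mechanically. The following sketch (with invented names and ages, purely illustrative) verifies reflexivity, symmetry and transitivity, and exhibits the failure of identity:

```python
# Check that 'same age as' is an equivalence relation but not identity.
# The people and ages below are invented purely for illustration.
ages = {"alice": 30, "bob": 30, "carol": 45}
people = list(ages)

def E(x, y):
    """The relation 'x is the same age as y'."""
    return ages[x] == ages[y]

# Reflexive: E(x, x) for every x.
assert all(E(x, x) for x in people)
# Symmetric: E(x, y) implies E(y, x).
assert all(E(y, x) for x in people for y in people if E(x, y))
# Transitive: E(x, y) and E(y, z) imply E(x, z).
assert all(E(x, z) for x in people for y in people for z in people
           if E(x, y) and E(y, z))
# Not identity: alice and bob are the same age but different people.
assert E("alice", "bob") and "alice" != "bob"
```

Leibniz’s Law is also satisfied here, but only with respect to predicates expressible in terms of age; that restriction is exactly why the relation can fall short of identity.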

The relation ‘<’ is also axiomatized, and completeness and compactness results are presented.

4 Turing Machines and Recursive Functions

Turing machine computation

Since the [Turing-Post] notion of computation is an intuitive one, any precise formulation of this notion will have to rest on evidence rather than mathematical proof.
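As an illustration of the Turing–Post notion, here is a minimal simulator together with an invented two-instruction machine (my own example, not the book’s) that computes the successor function on unary numerals:

```python
# A minimal Turing machine simulator (a sketch; the machine below is
# invented for illustration: it appends one '1' to a unary numeral,
# i.e. computes the successor n -> n + 1 in unary).
def run(program, tape, state="start", pos=0, blank="_", max_steps=1000):
    tape = dict(enumerate(tape))           # sparse tape as a dictionary
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        # program maps (state, symbol) -> (symbol to write, move, next state)
        new_symbol, move, state = program[(state, symbol)]
        tape[pos] = new_symbol
        pos += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

successor = {
    ("start", "1"): ("1", "R", "start"),   # scan right over the 1s
    ("start", "_"): ("1", "R", "halt"),    # write one more 1, then halt
}

print(run(successor, "111"))  # prints "1111": 3 + 1 = 4 in unary
```

The simulator is of course far simpler than a full formal definition, but it captures the essentials: a finite table of instructions, a tape, a head position, and a current state.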

Partial recursive functions

Connected with the notion of computability is the notion of solvability of problems by an algorithm … where a uniform method is sought to answer an infinite class of questions.
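How one uniform method can answer an infinite class of questions, and how partiality arises, can be sketched with unbounded minimisation (the μ-operator); the example function is my own illustration:

```python
# Unbounded minimisation (the mu-operator): mu(p)(x) returns the least y
# with p(x, y) true, searching y = 0, 1, 2, ... The search never ends if
# no such y exists -- this is exactly what makes such functions *partial*.
def mu(p):
    def f(x):
        y = 0
        while not p(x, y):
            y += 1
        return y
    return f

# One uniform method answering infinitely many questions: the least y
# with y * y >= x, i.e. the ceiling of the square root of x.
isqrt = mu(lambda x, y: y * y >= x)
print(isqrt(49))  # prints 7
```

Had the predicate been, say, `y * y == x`, the function would be genuinely partial: `isqrt(2)` would search forever.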

Representations of word transformations in predicate calculus

[There] is no algorithm which will enable us to decide whether or not … any given formula of predicate calculus … is deducible from the axioms of predicate calculus.

By the completeness theorem … no algorithm exists for deciding whether any given formula is logically valid.

Gödel’s incompleteness theorem

[The] numbers 0, 1, 2, … are represented in the formal system by the numerals 0, s0, ss0, … respectively.
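A small sketch of the correspondence between numbers and numerals (the functions are illustrative, not from the book):

```python
# Converting between natural numbers and the formal numerals 0, s0, ss0, ...
def numeral(n):
    """The numeral for n: the successor symbol 's' applied n times to '0'."""
    return "s" * n + "0"

def value(num):
    """The number a numeral denotes: count the occurrences of 's'."""
    assert num.endswith("0")
    return len(num) - 1

print(numeral(3))  # prints "sss0"
```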

[There] is a [numeric] formula such that neither it nor its negation is provable. [Hence] there is a formula that is true but not provable in arithmetic.

[Also:] Arithmetic is not sufficient to prove its own consistency. And indeed any extension of arithmetic has this same deficiency … .

Set Theory

Suppose that I have a property that applies to all sets. Then there is a set of all things with this property. [But] from [this] we can derive Russell’s paradox.
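Written out, the derivation alluded to here is: take the property ‘x is not a member of x’ and form the corresponding set; asking whether that set is a member of itself yields a contradiction.

```latex
\[
R = \{\, x : x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R .
\]
```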

[If] two sets have exactly the same members, then I cannot distinguish between them, in other words they are equal. And this is … the Axiom of Extensionality.

[The] Comprehension schema is [that] given any set and given any formula with a free variable, then there is another set [whose] members are exactly those members of [the given set] which have the given property.

[The] Axiom of foundation (or Regularity) is this: any descending membership chain is finite.

The Replacement Schema replaces the members of a set by things which they correspond to according to some function-like formula. (This renders the Comprehension schema redundant.)

Consistency of the Axiom of Choice

[Let] us define a relation between conditions p and sentences ψ which ought to hold just when the information encoded in [the condition] p causes what [the sentence] ψ says about [the model] to come true. This relation is … ‘p forces ψ’.

We want p to force ¬ψ only when, no matter how much additional information we have, ψ is still not forced.

A set of conditions is called generic if it has the properties:

  • [For every sentence, either it or its negation is forced by some condition.]
  • [For no sentence are both it and its negation forced.]

Cohen’s Truth Lemma: [For any generic set, there exists a canonical interpretation under which the true sentences are exactly those which are forced.]
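A toy illustration of conditions and forcing, for atomic sentences only (a drastic simplification of Cohen’s construction, with my own encoding): a condition is a finite partial function from the natural numbers to {0, 1}, read as partial information about a ‘generic’ set G of natural numbers.

```python
# Toy forcing conditions: a condition p is a finite partial function
# from N to {0, 1} (here a dict), giving partial information about
# membership in a 'generic' set G.
def forces_in(p, n):
    """p forces 'n is in G' when p already says so."""
    return p.get(n) == 1

def forces_not_in(p, n):
    """p forces 'n is not in G' when p already says so."""
    return p.get(n) == 0

def extend_to_decide(p, n, answer=1):
    """Any condition extends to one deciding 'n in G' either way, so
    the conditions deciding a given atomic sentence are 'dense'."""
    if n in p:
        return dict(p)                  # already decided
    return {**p, n: answer}

p = {0: 1, 3: 0}                        # G contains 0, excludes 3
q = extend_to_decide(p, 7)              # q also decides membership of 7
print(forces_in(q, 7), forces_not_in(q, 3))  # prints: True True
```

For atomic sentences this matches the requirement quoted above: a condition forces ‘n ∉ G’ exactly when it maps n to 0, and no extension of such a condition can then force ‘n ∈ G’. The hard part of Cohen’s work, entirely absent here, is extending the definition to all sentences of set theory.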

Any such model suffices to establish the independence of the Continuum Hypothesis from the [standard] set theory.

My Comments

Applied logics

All sciences rest on the assumption that their methods of ‘proof’ tend to produce credible results. But few scientists seem concerned, let alone excited, by the logical aspects. If their methods of proof were analogous to those of mathematical logic, then this ‘pragmatic’ attitude might be reasonable. But mathematics needs to rule out various pathological situations, so to be analogous we would wish, similarly, to rule out the corresponding situations. Most working applied scientists seem to assume that this doesn’t matter, or perhaps that it has already been done.

Most sciences tend to follow physics, with quantum physics being at the bleeding edge of foundational concerns. Here the notion of wave-particle duality is still respectable, if paradoxical. There are also seeming inconsistencies between quantum physics and relativity, which motivates a search for a unified theory.

Ideally, whatever emerges from physics should have a logic that is at least as sound as mathematical logic. So far the focus seems to be on developing the theory rather than the logic, and sometimes seeming in disregard for logic. I am not sure why.


Much of pure mathematics concerns the existence of suitable models. Scientists (among others), on the other hand, often use ‘mathematical models’ with no explicit reference to this issue: maybe they assume that it has already been resolved. I would prefer the term ‘mathematical model’ to be reserved for models whose logics are known to be mathematical. The term ‘computational model’ might be used for models that use mathematics but which may lack a model in the sense of mathematical logic. Sadly this is not common usage. At least it ought to be possible to interpret a ‘mathematical model’ mathematically, but it is not obvious that this is always possible.

If we interpret a theoretical claim of equality, such as those made in science, as merely a logical claim, then there would always have to be the rider ‘in so far as the science can tell’. But this is problematic, since science advances. For example, any claimed probability can – logically – only be conditional, and might be overturned by new ‘data’. Routine data might reasonably lead to updates using Bayes’ rule. But any insights that change the nature of ‘perceived equality’ will change probabilities more radically. To put it another way, probabilities depend on the ability to discriminate, so any advance in that ability should lead to a review of all probabilities which goes beyond ‘Bayesian updating’.
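The routine case can be made concrete: Bayesian updating holds the likelihoods (the model of how evidence discriminates) fixed and only moves the prior. A toy example with invented numbers:

```python
# Routine Bayesian updating within a fixed model (numbers invented):
# a diagnostic test for a condition with known error rates.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of the hypothesis after one observation."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Prior 1%; the test detects 95% of true cases, 5% false-positive rate.
posterior = bayes_update(0.01, 0.95, 0.05)
print(round(posterior, 3))  # prints 0.161
```

An advance in the ability to discriminate would change the likelihood functions themselves, not merely the prior; that is the kind of revision which no sequence of updates of this form can produce.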

Logically, we need to distinguish between data that leads us to update our models in a routine fashion, within a fixed conceptual framework, and situations where we are prompted to seek alternative models, outside those we had previously considered possible.


Algorithms, such as those used by computers, rely on computational models. There are two problem areas:

  • The implementation of the algorithm.
  • The formulation of the algorithm.

Much computer science is concerned with the former. But, in my experience, there may be problems of formulation:

  • The subject matter models (e.g. physics) may not be computable, so some extra assumptions may be needed
    • It may be that the subject matter and computer specialists between them are unable, even with the best collaboration, to identify any reasonable assumptions.
    • In some cases, the subject matter models may be logically incompatible with the computational modelling (e.g., if they involve changing differentia).
  • In ordinary usage the subject matter models may have been reliable when used in conjunction with ‘common sense’, but may become unreliable when formalised.
    • The tiniest uncertainty, ambiguity or imprecision can be magnified by long chains of reasoning to the point of manifest error.
    • Common sense would block some lines of reasoning. This would need to be reflected in a customised logic, as well as in the domain model.
  • Even where in some technical sense the computed inferences are ‘true’ they may be misinterpreted, particularly where the translation from subject domain to computer model is at all subtle.
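The magnification of tiny imprecision by long chains of computation can be demonstrated directly. Here the logistic map (a standard chaotic example, my choice of illustration) is iterated from two starting points differing by one part in 10¹²:

```python
# 'The tiniest imprecision magnified by long chains of reasoning':
# iterate the logistic map from two almost-identical starting points.
def iterate(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = iterate(0.3, 50)
b = iterate(0.3 + 1e-12, 50)   # perturbation of one part in 10^12
print(abs(a - b))              # no longer tiny: the trajectories diverge
```

After fifty steps the two trajectories bear no useful relation to each other, even though every individual step is exact arithmetic on nearly equal inputs.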

In particular, I expect problems if the subject matter logics are not logical in the mathematical sense, and many aren’t. A common approach is to identify a computational model that seems compatible. A more mathematical approach is to seek to identify the broadest range of sets of axioms that are consistent with the subject matter models and logics. This often involves non-trivial interpretations of subject matter differentia in terms of computer model (‘mathematical’) differentia, particularly where they may change. (For example, the international-relations notion of what counts as a country is not straightforward to represent.)

The process of creating a computational representation is often termed ‘mathematization’. But a ‘mathematization’ that forces a representation into some pre-existing and inappropriate class of ‘mathematical models’ is not totally ‘mathematical’ if it ignores logic. For example, a mathematization of the biological notion of species is problematic (exercise for the reader?), yet biologists seem able to reason usefully.


Computational models are formally based on the notion of sets (and classes etc.) Yet many things of interest are not obviously sets, and many subject matter logics are liable to paradoxes, if left unguarded.

  • The Axiom of Extensionality is problematic, in that if the ability to distinguish changes then so should the sets, yet computer representations take them as foundational.
  • The Comprehension schema is problematic in that the relevant properties are often not given or even fixed, but often adapting.
  • The Axiom of foundation (or Regularity) is problematic in that any given set has a limited ‘depth’, yet in many subjects (e.g. international relations) there is no natural limit to depth.

Thus one often cannot simply map subject matter models to computational models. Instead one has to build appropriate computational schemas (e.g. programming languages, applications) as a logic engine with its own representations that are not necessarily set-like. This can be challenging.


The notion of causality is problematic in the sciences. It is often assumed that it must be some kind of deductive causality, but perhaps it is merely forced. If a conjectured effect is not actually forced then, by analogy with the logic, we may do well to look for additional conditions under which either the conjectured effect or its negation is forced. Thus a good answer to ‘is ψ the case?’ may be ‘yes, if Φ is’, rather than estimating a ‘probability’.


In any domain, it has always been common to develop technical terminology, theories and models to support reasoning. The nature of the models, etc., needs to be matched to the type of reasoning (logic, even) that will be applied to them. Thus in computing it is common to develop some mediating computational engine between the generic engine (which assumes a conventional computational logic) and the application ‘logic’. Failure to do this can cause confusion and error. Moreover, whereas algorithms most naturally deal in definite putative ‘facts’, most applications deal with ‘data’ that is not entirely definite or fact-like, and which concerns a reality which may not be entirely fact-like. Thus this mediating engine has to make a significant change in the type of data, for which an appropriate theory, in addition to ‘the logic of definite statements and definite methods’, is required.

There seems no consensus on what such a computational system might be like.

Dave Marsay
