
Author:
Michał Walicki



Introduction to Logic
Michał Walicki
2006



Contents

The History of Logic  1
  A. Logic – patterns of reasoning  1
    A.1. Reductio ad absurdum  1
    A.2. Aristotle  2
    A.3. Other patterns and later developments  4
  B. Logic – a language about something  5
    B.1. Early semantic observations and problems  5
    B.2. The Scholastic theory of supposition  6
    B.3. Intension vs. extension  6
    B.4. Modalities  7
  C. Logic – a symbolic language  7
    C.1. The "universally characteristic language"  8
  D. 19th and 20th Century – mathematization of logic  9
    D.1. George Boole  10
    D.2. Gottlob Frege  12
    D.3. Set theory  14
    D.4. 20th century logic  15
  E. Modern Symbolic Logic  16
    E.1. Formal logical systems: syntax  16
    E.2. Formal semantics  19
    E.3. Computability and Decidability  21
  F. Summary  23
  Bibliography  23

Part I. Basic Set Theory  28
  1. Sets, Functions, Relations  28
    1.1. Sets and Functions  28
    1.2. Relations  32
    1.3. Ordering Relations  33
    1.4. Infinities  35
  2. Induction  42
    2.1. Well-Founded Orderings  42
      2.1.1. Inductive Proofs on Well-founded Orderings  43
    2.2. Inductive Definitions  47
      2.2.1. "1-1" Definitions  49
      2.2.2. Inductive Definitions and Recursive Programming  50
      2.2.3. Proofs by Structural Induction  53
    2.3. Transfinite Induction [optional]  56

Part II. Turing Machines  59
  3. Turing Machines  59
    3.1. Alphabets and Languages  59
    3.2. Turing Machines  60
      3.2.1. Composing Turing machines  64
      3.2.2. Alternative representation of TMs [optional]  65
    3.3. Universal Turing Machine  66
    3.4. Decidability and the Halting Problem  69

Part III. Statement Logic  73
  4. Syntax and Proof Systems  73
    4.1. Axiomatic Systems  73
    4.2. Syntax of SL  77
    4.3. The axiomatic system of Hilbert's  77
    4.4. Natural Deduction system  79
    4.5. Hilbert vs. ND  81
    4.6. Provable Equivalence of formulae  82
    4.7. Consistency  83
    4.8. The axiomatic system of Gentzen's  84
      4.8.1. Decidability of the axiomatic systems for SL  84
      4.8.2. Gentzen's rules for abbreviated connectives  85
    4.9. Some proof techniques  86
  5. Semantics of SL  88
    5.1. Semantics of SL  88
    5.2. Semantic properties of formulae  92
    5.3. Abbreviations  93
    5.4. Sets, Propositions and Boolean Algebras  94
      5.4.1. Laws  94
      5.4.2. Sets and SL  94
      5.4.3. Boolean Algebras [optional]  96
  6. Soundness and Completeness  101
    6.1. Adequate Sets of Connectives  101
    6.2. DNF, CNF  102
    6.3. Soundness  104
    6.4. Completeness  106
      6.4.1. Some Applications of Soundness and Completeness  108

Part IV. Predicate Logic  113
  7. Syntax and Proof System of FOL  113
    7.1. Syntax of FOL  114
      7.1.1. Abbreviations  116
    7.2. Scope of Quantifiers, Free Variables, Substitution  116
      7.2.1. Some examples  117
      7.2.2. Substitution  119
    7.3. Proof System  120
      7.3.1. Deduction Theorem in FOL  121
    7.4. Gentzen's system for FOL  122
  8. Semantics  126
    8.1. Semantics of FOL  126
    8.2. Semantic properties of formulae  129
    8.3. Open vs. closed formulae  130
      8.3.1. Deduction Theorem in G and N  133
  9. More Semantics  136
    9.1. Prenex operations  136
    9.2. A few bits of Model Theory  138
      9.2.1. Substructures  139
      9.2.2. Σ-Π classification  139
    9.3. "Syntactic" semantics and Computations  141
      9.3.1. Reachable structures and Term structures  141
      9.3.2. Herbrand's theorem  144
      9.3.3. Horn clauses and logic programming  144
  10. Soundness, Completeness  151
    10.1. Soundness  151
    10.2. Completeness  151
      10.2.1. Some Applications  155
  11. Identity and Some Consequences  159
    11.1. FOL with Identity  159
      11.1.1. Axioms for Identity  160
      11.1.2. Some examples  161
      11.1.3. Soundness and Completeness of FOL=  162
    11.2. A few more bits of Model Theory  164
      11.2.1. Compactness  164
      11.2.2. Skolem-Löwenheim  165
    11.3. Semi-Decidability and Undecidability of FOL  165
    11.4. Why is First-Order Logic "First-Order"?  166
  12. Summary  170
    12.1. Functions, Sets, Cardinality  170
    12.2. Relations, Orderings, Induction  171
    12.3. Turing Machines  171
    12.4. Formal Systems in general  172
      12.4.1. Axiomatic System – the syntactic part  172
      12.4.2. Semantics  173
      12.4.3. Syntax vs. Semantics  174
    12.5. Statement Logic  175
    12.6. First Order Logic  176
    12.7. First Order Logic with identity  177


The History of Logic

The term "logic" may be, very roughly and vaguely, associated with something like "correct thinking". Aristotle defined a syllogism as "discourse in which, certain things being stated, something other than what is stated follows of necessity from their being so." And, in fact, this intuition not only lies at the origin of logic, ca. 500 BC, but has been the main force motivating its development from that time until the last century.

There was a medieval tradition according to which the Greek philosopher Parmenides (5th century BC) invented logic while living on a rock in Egypt. The story is pure legend, but it does reflect the fact that Parmenides was the first philosopher to use an extended argument for his views, rather than merely proposing a vision of reality. But using arguments is not the same as studying them, and Parmenides never systematically formulated or studied principles of argumentation in their own right. Indeed, there is no evidence that he was even aware of the implicit rules of inference used in presenting his doctrine. Perhaps Parmenides' use of argument was inspired by the practice of early Greek mathematics among the Pythagoreans. It is thus significant that Parmenides is reported to have had a Pythagorean teacher. But the history of Pythagoreanism in this early period is shrouded in mystery, and it is hard to separate fact from legend.

We will sketch the development of logic along three axes which reflect the three main domains of the field.

1. The foremost is the interest in the correctness of reasoning, which involves the study of correct arguments, their form or pattern, and the possibilities of manipulating such forms in order to arrive at new correct arguments. The other two aspects are intimately connected with this one.

2. In order to construct valid forms of arguments, one has to know what such forms can be built from, that is, determine the ultimate "building blocks". One has to identify the basic terms, their kinds, means of combination and, not least, their meaning.

3. Finally, there is the question of how to represent these patterns. Although apparently of secondary importance, it is the answer to this question that puts purely symbolic manipulation into focus. It can be considered the beginning of modern mathematical logic, which led to the development of the devices for symbolic manipulation known as computers.

The first three sections sketch the development along these respective lines until the Renaissance. In section D, we indicate the development in the modern era, with particular emphasis on the last two centuries. Section E indicates some basic aspects of modern mathematical logic and its relation to computers.

A. Logic – patterns of reasoning

A.1. Reductio ad absurdum

If Parmenides was not aware of the general rules underlying his arguments, the same is perhaps not true of his disciple Zeno of Elea (5th century BC). Parmenides taught that there is no real change in the world and that all things remain, eventually, the same one being. In defense of this heavily criticized thesis, Zeno designed a series of ingenious arguments, known as "Zeno's paradoxes", which demonstrated that the contrary assumption must lead to absurdity. Among the best known is the story of Achilles and the tortoise competing in a race. The tortoise, being the slower runner, starts some time t before Achilles. In this time t, the tortoise covers some distance w towards the goal. Now Achilles starts running, but in order to catch up with the tortoise he first has to run the distance w, which takes him some time t1. In this time, the tortoise again walks some distance w1 away from the point w and closer to the goal. Then again, Achilles must first run the distance w1 in order to catch the tortoise, but the tortoise will in the same time walk some distance w2 away. In short, Achilles will never catch the tortoise, which is obviously absurd. Roughly, this means that the thesis that the two are really changing their positions cannot be true.

The point of the story is not what may be wrong with this way of thinking, but that the same form of reasoning was applied by Zeno in many other stories: assuming a thesis T, we can analyze it and arrive at a conclusion C; but C turns out to be absurd, therefore T cannot be true. This pattern has been given the name "reductio ad absurdum" and is still frequently used in both informal and formal arguments.
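Seen with modern eyes, the flaw in the paradox is that infinitely many catch-up stages can sum to a finite time: the stage durations form a convergent geometric series. A small sketch makes this concrete (the speeds and head start below are illustrative values chosen by us, not taken from the text):

```python
# Achilles runs at speed va, the tortoise at vt < va, with a head start
# of w.  Stage k takes time (w/va) * (vt/va)**k, so the total catch-up
# time is a geometric series with ratio vt/va < 1, which converges.
va, vt, w = 10.0, 1.0, 9.0

total, gap = 0.0, w
for _ in range(60):              # sum the first 60 stages
    stage = gap / va             # time Achilles needs to cover the current gap
    total += stage
    gap = vt * stage             # distance the tortoise gains meanwhile

closed = w / (va - vt)           # closed form: (w/va) * 1/(1 - vt/va)
print(total, closed)             # both ≈ 1.0: the catch-up time is finite
```

So Zeno's decomposition into infinitely many stages is correct; only the conclusion that infinitely many stages must take infinitely long fails.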

A.2. Aristotle

Various ways of arguing in political and philosophical debates were advanced by various thinkers. The sophists, often discredited by the "serious" philosophers, certainly deserve the credit for promoting the idea of "correct arguing" no matter what the argument is concerned with. Horrified by the immorality of the sophists' arguing, Plato attempted to combat them by plunging into ethical and metaphysical discussions and claiming that these indeed had a strong methodological logic – the logic of discourse, "dialectic". In terms of the development of modern logic there is, however, close to nothing one can learn from that. The development of "correct reasoning" culminated in ancient Greece with Aristotle's (384-322 BC) teaching of categorical forms and syllogisms.

A.2.1. Categorical forms

Most of Aristotle's logic was concerned with certain kinds of propositions that can be analyzed as consisting of five basic building blocks: (1) usually a quantifier ("every", "some", or the universal negative quantifier "no"), (2) a subject, (3) a copula, (4) perhaps a negation ("not"), (5) a predicate. Propositions analyzable in this way were later called "categorical propositions" and fall into one or another of the following forms:

(quantifier) subject copula (negation) predicate

1. Every β is an α      : Universal affirmative
2. Every β is not an α  : Universal negative
3. Some β is an α       : Particular affirmative
4. Some β is not an α   : Particular negative
5. x is an α            : Singular affirmative
6. x is not an α        : Singular negative

In the singular judgements x stands for an individual, e.g. "Socrates is (not) a man."

A.2.2. Conversions

Sometimes Aristotle adopted alternative but equivalent formulations. Instead of saying, for example, "Every β is an α", he would say "α belongs to every β" or "α is predicated of every β." More significantly, he might use equivalent formulations of the categorical forms themselves; for example, instead of 2 he might say "No β is an α":

1. "Every β is an α" is equivalent to "α belongs to every β", or "α is predicated of every β."
2. "Every β is not an α" is equivalent to "No β is an α."

Aristotle formulated several rules later known collectively as the theory of conversion. To "convert" a proposition in this sense is to interchange its subject and predicate. Aristotle observed that propositions of forms 2 and 3 can be validly converted in this way: if "no β is an α", then also "no α is a β", and if "some β is an α", then also "some α is a β". In later terminology, such propositions were said to be converted "simply" (simpliciter). But propositions of form 1 cannot be converted in this way; if "every β is an α", it does not follow that "every α is a β". It does follow, however, that "some α is a β". Such propositions, which can be converted provided that not only are their subjects and predicates interchanged but also the universal quantifier is weakened to an existential


(or particular) quantifier "some", were later said to be converted "accidentally" (per accidens). Propositions of form 4 cannot be converted at all; from the fact that some animal is not a dog, it does not follow that some dog is not an animal. Aristotle used these laws of conversion to reduce other syllogisms to syllogisms in the first figure, as described below.

Conversions represent the first form of formal manipulation. They provide rules for replacing an occurrence of one (categorical) form of a statement by another – without affecting the proposition! What "affecting the proposition" means is another subtle matter. The whole point of such a manipulation is that one, in one sense or another, changes the concrete appearance of a sentence without changing its value. For Aristotle this meant simply that the pairs he determined could be exchanged. The intuition might have been that they "essentially mean the same". In a more abstract, later formulation, one would say that "not to affect a proposition" is "not to change its truth value" – either both are false or both are true. Thus one obtains the idea that

Two statements are equivalent (interchangeable) if they have the same truth value.

This was not exactly Aristotle's point, but we may ascribe to him much intuition in this direction. From now on, this will be a constantly recurring theme in logic. Looking at propositions as thus determining a truth value gives rise to some questions (and severe problems, as we will see). Since we allow using some "placeholders" – variables – a proposition need not have a unique truth value. "All α are β" depends on what we substitute for α and β. In general, a proposition P may be:

1. a tautology – P is always true, no matter what we choose to substitute for the "placeholders" (e.g., "All α are α"; in particular, a proposition without any "placeholders", e.g. "all animals are animals", may be a tautology);

2. a contradiction – P is never true (e.g., "no α is an α");

3. contingent – P is sometimes true and sometimes false ("all α are β" is true, for instance, if we substitute "animals" for both α and β, while it is false if we substitute "birds" for α and "pigeons" for β).

A.2.3. Syllogisms

Aristotelian logic is best known for the theory of syllogisms, which remained practically unchanged and unchallenged for approximately 2000 years. Aristotle defined a syllogism as a "discourse in which, certain things being stated, something other than what is stated follows of necessity from their being so." In practice, however, he confined the term to arguments containing two premises and a conclusion, each of which is a categorical proposition. The subject and predicate of the conclusion each occur in one of the premises, together with a third term (the middle) that is found in both premises but not in the conclusion. A syllogism thus argues that because α and γ are related in certain ways to β (the middle) in the premises, they are related in a certain way to one another in the conclusion.

The predicate of the conclusion is called the major term, and the premise in which it occurs is called the major premise. The subject of the conclusion is called the minor term, and the premise in which it occurs is called the minor premise. This way of describing major and minor terms conforms to Aristotle's actual practice and was proposed as a definition by the 6th-century Greek commentator John Philoponus. But in one passage Aristotle put it differently: the minor term is said to be "included" in the middle and the middle "included" in the major term. This remark, which appears to have been intended to apply only to the first figure (see below), has caused much confusion among some of Aristotle's commentators, who interpreted it as applying to all three figures.
Aristotle distinguished three different “figures” of syllogisms, according to how the middle is related to the other two terms in the premises. In one passage, he says that if one wants to prove α of γ syllogistically, one finds a middle term β such that either I. α is predicated of β and β of γ (i.e., β is α and γ is β), or


II. β is predicated of both α and γ (i.e., α is β and γ is β), or else

III. both α and γ are predicated of β (i.e., β is α and β is γ).

All syllogisms must, according to Aristotle, fall into one or another of these figures. Each of these figures can be combined with various categorical forms, yielding a large taxonomy of possible syllogisms. Aristotle identified 19 among them which were valid ("universally correct"). The following is an example of a syllogism of figure I and categorical forms S(ome), E(very), S(ome). "Worm" is here the middle term.

(A.i)   Some of my Friends are Worms.
        Every Worm is Ugly.
        Some of my Friends are Ugly.

The table below gives examples of syllogisms of all three figures, with the middle term in each case being W.

figure I – premises [F is W], [W is U]; conclusion [F is U]:
   S,E,S : Some [F is W];  Every [W is U];  therefore Some [F is U]
   E,E,E : Every [F is W]; Every [W is U];  therefore Every [F is U]

figure II – premises [M is W], [U is W]; conclusion [M is U]:
   N,E,N : No [M is W];    Every [U is W];  therefore No [M is U]

figure III – premises [W is U], [W is N]; conclusion [N is U]:
   E,E,S : Every [W is U]; Every [W is N];  therefore Some [N is U]
   E,E,E : Every [W is U]; Every [W is N];  therefore Every [N is U]  (invalid)

Validity of an argument means here that no matter what concrete terms we substitute for α, β, γ, if only the premises are true then the conclusion is guaranteed to be true as well. For instance, the first four examples above are valid while the last one is not. To see this last point, we find a counterexample. Substituting "women" for W, "female" for U and "human" for N, the premises hold while the conclusion states that every human is female.

Note that a correct application of a valid syllogism does not guarantee the truth of the conclusion. (A.i) is such an application, but the conclusion need not be true. This correct application, namely, uses a false premise (none of my friends is a worm), and in such cases no guarantees about the truth value of the conclusion can be given. We see again that the main idea is truth preservation in the reasoning process. An obvious, yet nonetheless crucially important, assumption is:

The contradiction principle
For any proposition P it is never the case that both P and not-P are true.

This principle seemed (and to many still seems) intuitively obvious enough to be accepted without any discussion. If it were violated, there would be little point in constructing any "truth preserving" arguments.
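Validity in this sense can be checked mechanically: interpret the terms as sets and search every assignment of sets over a small universe for a counterexample. The following Python sketch is our own illustration (the helper names and the choice of a three-element universe are ours, not the book's); note that "every" is read with existential import (a non-empty subject), as Aristotle's accidental conversion presupposes:

```python
from itertools import combinations, product

def every(a, b):
    # "Every A is a B", with existential import: A is non-empty and A ⊆ B.
    return bool(a) and a <= b

def some(a, b):
    # "Some A is a B": A and B overlap.
    return bool(a & b)

def no(a, b):
    # "No A is a B": A and B are disjoint.
    return not (a & b)

def valid(argument, n_terms, universe_size=3):
    """argument(*sets) returns (premises_hold, conclusion_holds); the
    syllogism is valid iff no assignment of subsets of a small universe
    to its terms makes the premises true and the conclusion false."""
    universe = range(universe_size)
    subsets = [set(c) for r in range(universe_size + 1)
               for c in combinations(universe, r)]
    for terms in product(subsets, repeat=n_terms):
        premises_hold, conclusion_holds = argument(*terms)
        if premises_hold and not conclusion_holds:
            return False          # found a counterexample
    return True

# Figure III, E,E,S: Every W is U; Every W is N; therefore Some N is U.
ees = lambda w, u, n: (every(w, u) and every(w, n), some(n, u))
# Figure III, E,E,E: Every W is U; Every W is N; therefore Every N is U.
eee = lambda w, u, n: (every(w, u) and every(w, n), every(n, u))
# Figure II, N,E,N: No M is W; Every U is W; therefore No M is U.
nen = lambda m, w, u: (no(m, w) and every(u, w), no(m, u))

print(valid(ees, 3), valid(eee, 3), valid(nen, 3))  # True False True
```

The search confirms the text's verdicts: the E,E,E syllogism of figure III fails on assignments of exactly the women/female/human kind, while the other two survive every assignment.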

A.3. Other patterns and later developments

Aristotle's syllogisms dominated logic until the late Middle Ages. A lot of variations were invented, as well as ways of reducing some valid patterns to others (cf. A.2.2). The claim that all valid arguments can be obtained by conversion and, possibly, indirect proof (reductio ad absurdum) from the three figures


has been challenged and discussed ad nauseam. Early developments (already in Aristotle) attempted to extend the syllogisms to modalities, i.e., by considering, instead of the categorical forms as above, propositions of the form "it is possible/necessary that some α are β". Early followers of Aristotle (Theophrastus of Eresus (371-286), the school of Megarians with Euclid (430-360), Diodorus Cronus (4th century BC)) elaborated on the modal syllogisms and introduced another form of proposition, the conditional "if (α is β) then (γ is δ)". These were further developed by the Stoics, who also made another significant step. Instead of considering only "patterns of terms", where α, β, etc. are placeholders for some objects, they started to investigate logic with "patterns of propositions". Such patterns would use variables standing for propositions instead of terms. For instance, from two propositions, "the first" and "the second", we may form new propositions, e.g., "the first or the second", "if the first then the second", etc. The terms "the first", "the second" were used by the Stoics as variables instead of α, β, etc. The truth of such compound propositions may be determined from the truth of their constituents. We thus get new patterns of arguments. The Stoics gave the following list of five patterns:

1. If 1 then 2; but 1; therefore 2.
2. If 1 then 2; but not 2; therefore not 1.
3. Not both 1 and 2; but 1; therefore not 2.
4. Either 1 or 2; but 1; therefore not 2.
5. Either 1 or 2; but not 2; therefore 1.

Chrysippus (c. 279-208 BC) derived many other schemata. The Stoics claimed (wrongly, as it seems) that all valid arguments could be derived from these patterns. At the time, the two approaches seemed different, and a lot of discussion centered around the question of which is "the right one". Although the Stoics' "propositional patterns" had fallen into oblivion for a long time, they re-emerged as the basic tools of modern mathematical propositional logic. Medieval logic was dominated by Aristotelian syllogisms, elaborating on them but without contributing significantly to this aspect of reasoning. However, Scholasticism developed very sophisticated theories concerning other central aspects of logic.
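Since the truth value of such a compound is determined by the truth values of its constituents, each of the five Stoic patterns can be verified by inspecting all four combinations of truth values for "the first" (p) and "the second" (q). A small sketch of such a check, our own illustration; note that "either ... or" is read here as exclusive disjunction, which is what pattern 4 requires to come out valid:

```python
from itertools import product

def valid(premises, conclusion):
    """A pattern is valid iff in every row of the truth table where all
    premises are true, the conclusion is true as well."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda p, q: (not p) or q   # "if 1 then 2"
xor     = lambda p, q: p != q         # "either 1 or 2", read exclusively

patterns = [
    ([implies, lambda p, q: p],               lambda p, q: q),      # 1
    ([implies, lambda p, q: not q],           lambda p, q: not p),  # 2
    ([lambda p, q: not (p and q),
      lambda p, q: p],                        lambda p, q: not q),  # 3
    ([xor, lambda p, q: p],                   lambda p, q: not q),  # 4
    ([xor, lambda p, q: not q],               lambda p, q: p),      # 5
]

print([valid(prems, concl) for prems, concl in patterns])
# [True, True, True, True, True] – all five patterns are valid
```

Patterns 1 and 2 are what modern logic calls modus ponens and modus tollens; with an inclusive "or", pattern 5 would remain valid but pattern 4 would fail.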

B. Logic – a language about something

The pattern of a valid argument is the first and, through the centuries, fundamental issue in the study of logic. But there were (and are) many related issues. For instance, the two statements

1. "all horses are animals", and
2. "all birds can fly"

are not exactly of the same form. More precisely, this depends on what a form is. The first says that one class (horses) is included in another (animals), while the second says that all members of a class (birds) have some property (can fly). Is this grammatical difference essential or not? Or else, can the two be covered by one and the same pattern or not? Can we replace a noun by an adjective in a valid pattern and still obtain a valid pattern or not? In fact, the first categorical form subsumes both sentences above, i.e., from the point of view of our logic, they are considered as having the same form. Such questions indicate, however, that forms of statements and patterns of reasoning, like syllogisms, require further analysis of "what can be plugged where" which, in turn, depends on which words or phrases can be considered as "having a similar function", perhaps even as "having the same meaning". What are the objects referred to by various kinds of words? What are the objects referred to by propositions?


B.1. Early semantic observations and problems

Certain particular teachings of the sophists and rhetoricians are significant for the early history of (this aspect of) logic. For example, the arch-sophist Protagoras (5th century BC) is reported to have been the first to distinguish different kinds of sentences: questions, answers, prayers, and injunctions. Prodicus appears to have maintained that no two words can mean exactly the same thing. Accordingly, he devoted much attention to carefully distinguishing and defining the meanings of apparent synonyms, including many ethical terms. The categorical forms from A.2.1, too, were classified according to such organizing principles.

Since logic studies statements, their form as well as the patterns in which they can be arranged to form valid arguments, one of the basic questions concerns the meaning of a proposition. As we indicated earlier, two propositions can be considered equivalent if they have the same truth value. This indicates another law, besides the contradiction principle, namely

The law of excluded middle
Each proposition P is either true or false.

There is surprisingly much to be said against this apparently simple claim. There are modal statements (see B.4) which do not seem to have any definite truth value. Among many early counterexamples, there is the most famous one, produced by the Megarians, which is still disturbing and discussed by modern logicians:

The "liar paradox"
The sentence "This sentence is false" does not seem to have any content – it is false if and only if it is true!

Such paradoxes indicated the need for a closer analysis of the fundamental notions of the logical enterprise.

B.2. The Scholastic theory of supposition

The character and meaning of the various "building blocks" of a logical language were thoroughly investigated by the Scholastics. Their theory of supposition was meant to answer the question: "To what does a given occurrence of a term refer in a given proposition?" Roughly, one distinguished three kinds of supposition/reference:

1. personal: in the sentence "Every horse is an animal", the term "horse" refers to individual horses;
2. simple: in the sentence "Horse is a species", the term "horse" refers to a universal (the concept 'horse');
3. material: in the sentence "Horse is a monosyllable", the term "horse" refers to the spoken or written word.

We notice here the distinction based on the fundamental duality of individuals and universals, which had been one of the most debated issues in Scholasticism. The third point indicates an important development, namely, the increasing attention paid to language as such, which slowly becomes the object of study itself.

B.3. Intension vs. extension

In addition to supposition and its satellite theories, several logicians during the 14th century developed a sophisticated theory of connotation. The term "black" does not merely denote all black things – it also connotes the quality, blackness, which all such things possess. This has become one of the central distinctions in the later development of logic and in the discussions about the entities referred to by the words we are using. One began to call connotation "intension": saying "black" I intend blackness. Denotation is closer to "extension": the collection of all the


objects referred to by the term “black”. One has arrived at the understanding of a term which can be represented pictorially as follows: a term “intends” its intension and “refers to” its extension, while the intension “can be ascribed to” the extension.

The crux of many problems is that different intensions may refer to (denote) the same extension. The “Morning Star” and the “Evening Star” have different intensions and for centuries were considered to refer to two different stars. As it turned out, these are actually two appearances of one and the same planet Venus, i.e., the two terms have the same extension. One might expect logic to be occupied with concepts, that is, connotations – after all, it tries to capture correct reasoning. Many attempts have been made to design a “universal language of thought” in which one could speak directly about the concepts and their interrelations. Unfortunately, the concept of concept is not that obvious, and one had to wait a while until a somewhat tractable way of speaking of/modeling/representing concepts became available. The emergence of modern mathematical logic coincides with the successful coupling of logical language with the precise statement of its meaning in terms of extension. This by no means solved all the problems, and modern logic still has branches of intensional logic – we will return to this point later on.

B.4. Modalities Also these disputes started with Aristotle. In chapter 9 of De Interpretatione, he discusses the assertion “There will be a sea battle tomorrow”. The problem with this assertion is that, at the moment when it is made, it does not seem to have any definite truth value – whether it is true or false will become clear tomorrow, but until then it is possible that it will be the one as well as the other. This is another example (besides the “liar paradox”) indicating that adopting the principle of “excluded middle”, i.e., considering the propositions as always having only one of two possible truth values, may be insufficient. Medieval logicians continued the tradition of modal syllogistic inherited from Aristotle. In addition, modal factors were incorporated into the theory of supposition. But the most important developments in modal logic occurred in three other contexts: 1. whether propositions about future contingent events are now true or false (the question raised by Aristotle), 2. whether a future contingent event can be known in advance, and 3. whether God (who, the tradition says, cannot be acted upon causally) can know future contingent events. All these issues link logical modality with time. Thus, Peter Aureoli (c. 1280-1322) held that if something is in fact P (P is some predicate) but can be not-P, then it is capable of changing from being P to being not-P. However here, as in the case of categorical propositions, important issues could hardly be settled before one had a clearer idea concerning the kinds of objects or states of affairs modalities are supposed to describe. Duns Scotus in the late 13th century was the first to sever the link between time and modality. He proposed a notion of possibility that was not linked with time but based purely on the notion of semantic consistency. “Possible” means here logically possible, that is, not involving contradiction.
This radically new conception had a tremendous influence on later generations down to the 20th century. Shortly afterward, Ockham developed an influential theory of modality and time that reconciles the claim that every proposition is either true or false with the claim that certain propositions about the future are genuinely contingent. Duns Scotus’ ideas were revived in the 20th century, starting with the work of Jan Łukasiewicz who, once again, studied Aristotle’s example and introduced 3-valued logic – a proposition may be true, or false, or else it may have the third, “undetermined” truth value.
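Łukasiewicz’ system is usually presented arithmetically, with 0 for “false”, 1 for “true”, and 1/2 for the “undetermined” value. The following sketch uses the standard Ł3 tables (as commonly given in the literature, not taken from this text) to show that the law of excluded middle is no longer a tautology:

```python
# Lukasiewicz's three-valued logic L3: truth values 0 (false), 1/2 (undetermined), 1 (true).
from fractions import Fraction

HALF = Fraction(1, 2)

def neg(x):      # negation:    ~x      = 1 - x
    return 1 - x

def disj(x, y):  # disjunction: x v y   = max(x, y)
    return max(x, y)

def impl(x, y):  # implication: x -> y  = min(1, 1 - x + y)
    return min(Fraction(1), 1 - x + y)

# The law of excluded middle, P v ~P, fails for the third value:
assert disj(HALF, neg(HALF)) == HALF               # undetermined, not true
# but it still holds for the two classical values:
assert all(disj(x, neg(x)) == 1 for x in (Fraction(0), Fraction(1)))
```

For the two classical values the tables coincide with ordinary two-valued logic; only the new middle value behaves differently.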


C. Logic – a symbolic language Logic’s preoccupation with concepts and reasoning began gradually to put more and more severe demands on the appropriate and precise representation of the terms used. We saw that syllogisms used fixed forms of categorical statements with variables – α, β, etc. – which represented arbitrary terms (or objects). The use of variables was an indisputable contribution of Aristotle to the logical, and more generally mathematical, notation. We also saw that the Stoics introduced analogous variables standing for propositions. Such notational tricks facilitated a more concise, more general and more precise statement of various logical facts. Following the Scholastic discussions of connotation vs. denotation, logicians of the 16th century felt the increased need for a more general logical language. One of the goals was the development of an ideal logical language that naturally expressed ideal thought and was more precise than natural language. An important motivation underlying the attempts in this direction was the idea of manipulation, in fact, symbolic or even mechanical manipulation of arguments represented in such a language. Aristotelian logic had seen itself as a tool for training “natural” abilities at reasoning. Now one would like to develop methods of thinking that would accelerate or improve human thought or would even allow its replacement by mechanical devices. Among the initial attempts was the work of the Spanish soldier, priest and mystic Ramon Lull (1235-1315) who tried to symbolize concepts and derive propositions from various combinations of possibilities. The work of some of his followers, Juan Vives (1492-1540) and Johann Alsted (1588-1638), represents perhaps the first systematic effort at a logical symbolism. Some philosophical ideas in this direction occurred within the Port-Royal Logic – the work of a group of anticlerical Jansenists located in Port-Royal outside Paris, whose most prominent member was Blaise Pascal.
They elaborated on the Scholastic distinction comprehension vs. extension. Most importantly, Pascal introduced the distinction between real and nominal definitions. Real definitions were descriptive and stated the essential properties in a concept, while nominal definitions were creative and stipulated the conventions by which a linguistic term was to be used. (“Man is a rational animal.” attempts to give a real definition of the concept ‘man’. “By monoid we will understand a set with a unary operation.” is a nominal definition assigning a concept to the word “monoid”.) Although the Port-Royal logic itself contained no symbolism, the philosophical foundation for using symbols by nominal definitions was nevertheless laid.

C.1. The “universally characteristic language” Lingua characteristica universalis was Gottfried Leibniz’ ideal that would, first, notationally represent concepts by displaying the more basic concepts of which they were composed, and second, naturally represent (in the manner of graphs or pictures, “iconically”) the concept in a way that could be easily grasped by readers, no matter what their native tongue. Leibniz studied and was impressed by the method of the Egyptians and Chinese in using picturelike expressions for concepts. The goal of a universal language had already been suggested by Descartes for mathematics as a “universal mathematics”; it had also been discussed extensively by the English philologist George Dalgarno (c. 1626-87) and, for mathematical language and communication, by the French algebraist François Viète (1540-1603). C.1.1. “Calculus of reason” Another and distinct goal Leibniz proposed for logic was a “calculus of reason” (calculus ratiocinator). This would naturally first require a symbolism but would then involve explicit manipulations of the symbols according to established rules by which either new truths could be discovered or proposed conclusions could be checked to see if they could indeed be derived from the premises. Reasoning could then take place in the way large sums are done – that is, mechanically or algorithmically – and thus not be subject to individual mistakes and failures of ingenuity. Such derivations could be checked by others or performed by machines, a possibility that Leibniz seriously contemplated. Leibniz’ suggestion that machines could be constructed to draw valid inferences or to check the deductions of others was followed up by Charles Babbage, William Stanley Jevons, and Charles Sanders Peirce and his student Allan Marquand in the 19th century, and with wide success


on modern computers after World War II. (See chapter 7 in C. Sobel, The Cognitive Sciences, an Interdisciplinary Approach, for more detailed examples.) The symbolic calculus that Leibniz devised seems to have been more of a calculus of reason than a “characteristic” language. It was motivated by his view that most concepts were “composite”: they were collections or conjunctions of other more basic concepts. Symbols (letters, lines, or circles) were then used to stand for concepts and their relationships. This resulted in an intensional rather than an extensional logic – one whose terms stand for properties or concepts rather than for the things having these properties. Leibniz’ basic notion of the truth of a judgment was that the concepts making up the predicate were “included in” the concept of the subject. For instance, the judgment ‘A zebra is striped and a mammal.’ is true because the concepts forming the predicate ‘striped-and-mammal’ are, in fact, “included in” the concept (all possible predicates) of the subject ‘zebra’. What Leibniz symbolized as A∞B, or what we would write today as A = B, was that all the concepts making up concept A also are contained in concept B, and vice versa. Leibniz used two further notions to expand the basic logical calculus. In his notation, A⊕B∞C indicates that the concepts in A and those in B wholly constitute those in C. We might write this as A + B = C or A ∨ B = C – if we keep in mind that A, B, and C stand for concepts or properties, not for individual things. Leibniz also used the juxtaposition of terms in the following way: AB∞C, which we might write as A × B = C or A ∧ B = C, signifies in his system that all the concepts in both A and B wholly constitute the concept C. A universal affirmative judgment, such as “All A’s are B’s,” becomes in Leibniz’ notation A∞AB. This equation states that the concepts included in the concepts of both A and B are the same as those in A.
A syllogism: “All A’s are B’s; all B’s are C’s; therefore all A’s are C’s,” becomes the sequence of equations: A = AB; B = BC; therefore A = AC. Notice that this conclusion can be derived from the premises by two simple algebraic substitutions and the associativity of logical multiplication.

(C.i)
1.      A = AB      Every A is B
2.      B = BC      Every B is C
(1+2)   A = ABC
        A = AC      therefore: Every A is C
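Leibniz’ intensional reading can be imitated with modern sets: a concept is modelled as the set of its component basic concepts, and juxtaposition AB as the union of the two concept-sets, so that “Every A is B” (A = AB) says that B’s concepts are among A’s. The concept names below are invented purely for illustration:

```python
# Concepts modelled intensionally as sets of more basic concepts (an illustration,
# not Leibniz's own formalism). Juxtaposition AB becomes the union A | B.
C = frozenset({"animal"})
B = C | {"mammal"}               # Every B is C:  B = BC
A = B | {"horse-shaped"}         # Every A is B:  A = AB

assert A == A | B                # premise 1:  A = AB
assert B == B | C                # premise 2:  B = BC

# Two substitutions and associativity of | yield the conclusion of (C.i):
assert A == A | (B | C)          # substitute BC for B in premise 1
assert A == (A | B) | C          # associativity
assert A == A | C                # substitute A for AB:  Every A is C
```

The derivation is literally the chain of equalities from (C.i), executed on sets instead of Leibniz’ concept symbols.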

Like many early symbolic logics, including many developed in the 19th century, Leibniz’ system had difficulties with particular and negative statements, and it included little discussion of propositional logic and no formal treatment of quantified relational statements. (Leibniz later became keenly aware of the importance of relations and relational inferences.) Although Leibniz might seem to deserve to be credited with great originality in his symbolic logic – especially in his equational, algebraic logic – it turns out that such insights were relatively common to mathematicians of the 17th and 18th centuries who had a knowledge of traditional syllogistic logic. In 1685 Jakob Bernoulli published a pamphlet on the parallels of logic and algebra and gave some algebraic renderings of categorical statements. Later the symbolic work of Lambert, Ploucquet, Euler, and even Boole – all apparently uninfluenced by Leibniz’ or even Bernoulli’s work – seems to show the extent to which these ideas were apparent to the best mathematical minds of the day.

D. 19th and 20th Century – mathematization of logic Leibniz’ system and calculus mark the appearance of a formalized, symbolic language which is amenable to mathematical (algebraic or other) manipulation. A bit ironically, the emergence of mathematical logic also marks this logic’s, if not divorce, then at least separation from philosophy. Of course, the discussions of logic have continued both among logicians and philosophers, but from now on these groups form two increasingly distinct camps. Not all questions of philosophical logic are important for mathematicians, and most results of mathematical logic have a rather technical character which is not always of interest for philosophers. (There are, of course, exceptions like,


for instance, the extremist camp of analytical philosophers who at the beginning of the 20th century attempted to design a philosophy based exclusively on the principles of mathematical logic.) In this short presentation we have to ignore some developments which did take place between the 17th and 19th centuries. It was only in the last century that the substantial contributions were made which created modern logic. The first issue concerned the intensional vs. extensional dispute – the work of George Boole, based on a purely extensional interpretation, was a real breakthrough. It did not settle the issue once and for all – for instance Frege, “the father of first-order logic”, was still in favor of concepts and intensions; and in modern logic there is still a branch of intensional logic. However, Boole’s approach was so convincingly precise and intuitive that it was later taken up and became the basis of modern – extensional or set-theoretical – semantics.

D.1. George Boole The two most important contributors to British logic in the first half of the 19th century were undoubtedly George Boole and Augustus De Morgan. Their work took place against a more general background of logical work in English by figures such as Whately, George Bentham, Sir William Hamilton, and others. Although Boole cannot be credited with the very first symbolic logic, he was the first major formulator of a symbolic extensional logic that is familiar today as a logic or algebra of classes. (A correspondent of Lambert, Georg von Holland, had experimented with an extensional theory, and in 1839 the English writer Thomas Solly presented an extensional logic in A Syllabus of Logic, though not an algebraic one.) Boole published two major works, The Mathematical Analysis of Logic in 1847 and An Investigation of the Laws of Thought in 1854. It was the first of these two works that had the deeper impact on his contemporaries and on the history of logic. The Mathematical Analysis of Logic arose as the result of two broad streams of influence. The first was the English logic-textbook tradition. The second was the rapid growth in the early 19th century of sophisticated discussions of algebra and anticipations of nonstandard algebras. The British mathematicians D.F. Gregory and George Peacock were major figures in this theoretical appreciation of algebra. Such conceptions gradually evolved into “nonstandard” abstract algebras such as quaternions, vectors, linear algebra, and Boolean algebra itself. Boole used capital letters to stand for the extensions of terms; they are referred to (in 1854) as classes of “things” but should not be understood as modern sets. Nevertheless, this extensional perspective made the Boolean algebra a very intuitive and simple structure which, at the same time, seems to capture many essential intuitions. 
The universal class or term – which he called simply “the Universe” – was represented by the numeral “1”, and the empty class by “0”. The juxtaposition of terms (for example, “AB”) created a term referring to the intersection of two classes or terms. The addition sign signified the non-overlapping union; that is, “A + B” referred to the entities in A or in B; in cases where the extensions of terms A and B overlapped, the expression was held to be “undefined.” For designating a proper subclass of a class A, Boole used the notation “vA”. Finally, he used subtraction to indicate the removing of terms from classes. For example, “1 − x” would indicate what one would obtain by removing the elements of x from the universal class – that is, obtaining the complement of x (relative to the universe, 1). Basic equations included:

1A = A                          0A = 0
A + 0 = A                       for A = 0 : A + 1 = 1
AB = BA                         A + B = B + A
AA = A
A(BC) = (AB)C                   (associativity)
A(B + C) = AB + AC              (distributivity)
A + (BC) = (A + B)(A + C)       (distributivity)
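These identities are easy to check mechanically under the extensional reading, with juxtaposition as intersection and “+” as inclusive union (the reading later adopted by Jevons and Peirce; Boole’s own “+” was restricted to non-overlapping classes). A minimal sketch, enumerating all subsets of a three-element universe:

```python
# Verify the basic Boolean identities over every triple of subsets of a small
# "Universe" (Boole's 1), reading AB as intersection and A + B as inclusive union.
from itertools import combinations, product

U = frozenset(range(3))          # Boole's 1
EMPTY = frozenset()              # Boole's 0

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

for A, B, C in product(subsets(U), repeat=3):
    assert U & A == A                        # 1A = A
    assert EMPTY & A == EMPTY                # 0A = 0
    assert A | EMPTY == A                    # A + 0 = A
    assert A & B == B & A and A | B == B | A # commutativity
    assert A & A == A                        # AA = A
    assert A & (B & C) == (A & B) & C        # associativity
    assert A & (B | C) == (A & B) | (A & C)  # distributivity
    assert A | (B & C) == (A | B) & (A | C)  # distributivity
```

The exhaustive check over a small universe is, of course, only evidence for the general laws; the laws themselves hold for arbitrary sets.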

Boole offered a relatively systematic, but not rigorously axiomatic, presentation. For a universal affirmative statement such as “All A’s are B’s,” Boole used three alternative notations: A = AB (somewhat in the manner of Leibniz), A(1 − B) = 0, or A = vB (the class of A’s is equal to some proper subclass of the B’s). The first and second interpretations allowed one to derive syllogisms by algebraic substitution; the third one required manipulation of the subclass (“v”) symbols.


Derivation (C.i) now becomes explicitly controlled by the applied axioms.

(D.i)
A = AB                  assumption
B = BC                  assumption
A = A(BC)               substitution of BC for B
  = (AB)C               associativity
  = AC                  substitution of A for AB

In contrast to earlier symbolisms, Boole’s was extensively developed, with a thorough exploration of a large number of equations and techniques. The formal logic was separately applied to the interpretation of propositional logic, which became an interpretation of the class or term logic – with terms standing for occasions or times rather than for concrete individual things. Following the English textbook tradition, deductive logic is but one half of the subject matter of the book, with inductive logic and probability theory constituting the other half of both his 1847 and 1854 works. Seen in historical perspective, Boole’s logic was a remarkably smooth blend of the new “algebraic” perspective and the English-logic textbook tradition. His 1847 work begins with a slogan that could have served as the motto of abstract algebra: “. . . the validity of the processes of analysis does not depend upon the interpretation of the symbols which are employed, but solely upon the laws of combination.” D.1.1. Further developments of Boole’s algebra; De Morgan Modifications to Boole’s system were swift in coming: in the 1860s Peirce and Jevons both proposed replacing Boole’s “+” with a simple inclusive union or summation: the expression “A + B” was to be interpreted as designating the class of things in A, in B, or in both. This results in accepting the equation “1 + 1 = 1”, which is certainly not true of ordinary numerical algebra and at which Boole apparently balked. Interestingly, one defect in Boole’s theory, its failure to detail relational inferences, was dealt with almost simultaneously with the publication of his first major work. In 1847 Augustus De Morgan published his Formal Logic; or, the Calculus of Inference, Necessary and Probable. Unlike Boole and most other logicians in the United Kingdom, De Morgan knew the medieval theory of logic and semantics and also knew the Continental, Leibnizian symbolic tradition of Lambert, Ploucquet, and Gergonne.
The symbolic system that De Morgan introduced in his work and used in subsequent publications is, however, clumsy and does not show the appreciation of abstract algebras that Boole’s did. De Morgan did introduce the enormously influential notion of a possibly arbitrary and stipulated “universe of discourse” that was used by later Booleans. (Boole’s original universe referred simply to “all things.”) This view influenced 20th-century logical semantics. The notion of a stipulated “universe of discourse” means that, instead of talking about “The Universe”, one can choose this universe depending on the context, i.e., “1” may sometimes stand for “the universe of all animals”, and at other times for a merely two-element set, say “the true” and “the false”. In the former case, the derivation (D.i) of A = AC from A = AB; B = BC represents the classical syllogism “All A’s are B’s; all B’s are C’s; therefore all A’s are C’s”. In the latter case, the equations of Boolean algebra yield the laws of propositional logic where “A + B” is taken to mean disjunction “A or B”, and juxtaposition “AB” conjunction “A and B”. With this reading, the derivation (D.i) represents another reading of the syllogism, namely: “If A implies B and B implies C, then A implies C”. Negation of A is simply its complement 1 − A. De Morgan is known to all the students of elementary logic through the so-called ‘De Morgan laws’: 1 − AB = (1 − A) + (1 − B) and, dually, (1 − A)(1 − B) = 1 − (A + B). Using these laws, as well as some additional, today standard, facts, like B(1 − B) = 0 and 1 − (1 − B) = B, we can derive the following reformulation of the


reductio ad absurdum: “If every A is B then every not-B is not-A”:

A = AB                               assumption
A − AB = 0                           subtracting AB
A(1 − B) = 0                         distributivity over −
(1 − A) + B = 1                      De Morgan
(1 − B)((1 − A) + B) = 1 − B         multiplying by 1 − B
(1 − B)(1 − A) + (1 − B)B = 1 − B    distributivity
(1 − B)(1 − A) + 0 = 1 − B           B(1 − B) = 0
(1 − B)(1 − A) = 1 − B               A + 0 = A

I.e., if “Every A is B”, A = AB, then “every not-B is not-A”, 1 − B = (1 − B)(1 − A). Or: if “A implies B” then “if B is false (absurd) then so is A”. De Morgan’s other essays on logic were published in a series of papers from 1846 to 1862 (and an unpublished essay of 1868) entitled simply On the Syllogism. The first series of four papers found its way into the middle of the Formal Logic of 1847. The second series, published in 1850, is of considerable significance in the history of logic, for it marks the first extensive discussion of quantified relations since late medieval logic and Jung’s massive Logica hamburgensis of 1638. In fact, De Morgan made the point, later to be exhaustively repeated by Peirce and implicitly endorsed by Frege, that relational inferences are the core of mathematical inference and scientific reasoning of all sorts; relational inferences are thus not just one type of reasoning but rather are the most important type of deductive reasoning. Often attributed to De Morgan – not precisely correctly but in the right spirit – was the observation that all of Aristotelian logic was helpless to show the validity of the inference, (D.ii)
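Both the set-theoretic and the propositional readings of these laws can be verified directly. In this sketch the particular sets are invented for illustration, with the complement 1 − X written as a small helper over a stipulated universe of discourse:

```python
# De Morgan laws and the derived contrapositive, checked on sets, then the
# propositional reading of the syllogism checked as a truth-table tautology.
from itertools import product

U = frozenset(range(6))              # stipulated universe of discourse
def comp(X):                         # complement: 1 - X
    return U - X

A = frozenset({0, 1})
B = frozenset({0, 1, 2, 3})          # Every A is B:  A = AB
assert A == A & B

# De Morgan laws:
assert comp(A & B) == comp(A) | comp(B)
assert comp(A | B) == comp(A) & comp(B)

# The derived contrapositive: every not-B is not-A, i.e. 1-B = (1-B)(1-A):
assert comp(B) == comp(B) & comp(A)

# The propositional reading over the two-element universe: the syllogism becomes
# "if A implies B and B implies C then A implies C", a tautology:
def implies(p, q):
    return (not p) or q

assert all(implies(implies(a, b) and implies(b, c), implies(a, c))
           for a, b, c in product([False, True], repeat=3))
```

The same equations thus serve two interpretations, exactly as the stipulated universe of discourse allows.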

All horses are animals; therefore, every head of a horse is the head of an animal.

The title of this series of papers, De Morgan’s devotion to the history of logic, his reluctance to mathematize logic in any serious way, and even his clumsy notation – apparently designed to represent as well as possible the traditional theory of the syllogism – show De Morgan to be a deeply traditional logician.

D.2. Gottlob Frege In 1879 the young German mathematician Gottlob Frege – whose mathematical speciality, like Boole’s, had actually been calculus – published perhaps the finest single book on symbolic logic in the 19th century, Begriffsschrift (“Conceptual Notation”). The title was taken from Trendelenburg’s translation of Leibniz’ notion of a characteristic language. Frege’s small volume is a rigorous presentation of what would now be called the first-order predicate logic. It contains a careful use of quantifiers and predicates (although predicates are described as functions, suggestive of the technique of Lambert). It shows no trace of the influence of Boole and little trace of the older German tradition of symbolic logic. One might surmise that Frege was familiar with Trendelenburg’s discussion of Leibniz, had probably encountered works by Drobisch and Hermann Grassmann, and possibly had a passing familiarity with the works of Boole and Lambert, but was otherwise ignorant of the history of logic. He later characterized his system as inspired by Leibniz’ goal of a characteristic language but not of a calculus of reason. Frege’s notation was unique and problematically two-dimensional; this alone caused it to be little read. Frege was well aware of the importance of functions in mathematics, and these form the basis of his notation for predicates; he never showed an awareness of the work of De Morgan and Peirce on relations or of older medieval treatments. The work was reviewed (by Schröder, among others), but never very positively, and the reviews always chided him for his failure to acknowledge the Boolean and older German symbolic tradition; reviews written by philosophers chided him for various sins against reigning idealist dogmas. Frege stubbornly ignored the critiques of his notation and persisted in publishing all his later works using it, including his little-read magnum opus, Grundgesetze der Arithmetik (1893-1903; “The Basic Laws of Arithmetic”).
Although notationally cumbersome, Frege’s system contained a precise and adequate (in the sense adopted later) treatment of several basic notions. The universal affirmative “All A’s are


B’s” meant for Frege that the concept A implies the concept B, or that “to be A implies also to be B”. Moreover, this applies to an arbitrary x which happens to be A. Thus the statement becomes: “∀x : A(x) → B(x)”, where the quantifier ∀x stands for “for all x” and the arrow “→” for implication. The analysis of this, and one other statement, can be represented as follows:

Every horse                     is      an animal
Every x which is a horse        is      an animal
Every x, if it is a horse,      then    it is an animal
∀x :    H(x)                    →       A(x)

Some    animals                 are     horses
Some x’s which are animals      are     horses
Some x’s are animals            and     horses
∃x :    A(x)                    &       H(x)
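Such quantified forms can be evaluated mechanically over a finite domain. In this sketch the domain and the extensions of H (“is a horse”) and A (“is an animal”) are invented for illustration (only the name “Hugo” echoes the text’s example):

```python
# Evaluating the two quantified forms over a small finite domain. The quantifier
# "forall" becomes all(...), "exists" becomes any(...), and H(x) -> A(x) becomes
# (x not in H) or (x in A).
domain = ["Hugo", "Rex", "Tweety"]
H = {"Hugo"}                 # horses
A = {"Hugo", "Rex"}          # animals

# forall x : H(x) -> A(x)    ("Every horse is an animal")
every_horse_is_an_animal = all((x not in H) or (x in A) for x in domain)

# exists x : A(x) & H(x)     ("Some animals are horses")
some_animals_are_horses = any((x in A) and (x in H) for x in domain)

assert every_horse_is_an_animal
assert some_animals_are_horses
```

Replacing the sets H and A by different extensions changes the truth values, which is precisely the later extensional (model-theoretic) reading of Frege’s forms.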

This was not the way Frege would write it, but it was the way he would put it and think of it, and this is his main contribution. The syllogism “All A’s are B’s; all B’s are C’s; therefore: all A’s are C’s” will be written today in first-order logic as:

[ (∀x : A(x) → B(x)) & (∀x : B(x) → C(x)) ] → (∀x : A(x) → C(x))

and will be read as: “If any x which is A is also B, and any x which is B is also C; then any x which is A is also C”. Particular judgments (concerning individuals) can be obtained from the universal ones by substitution. For instance:

(D.iii)
Hugo is a horse;    and     Every horse is an animal;       Hence:      Hugo is an animal.
H(Hugo)             &       (∀x : H(x) → A(x))              →           A(Hugo)
                            H(Hugo) → A(Hugo)

The relational arguments, like (D.ii) about horse-heads and animal-heads, can be derived after we have represented the involved statements as follows:

1. y is a head of some horse    =   there is a horse                    and     y is its head
                                =   there is an x which is a horse      and     y is the head of x
                                =   ∃x : H(x)                           &       Hd(y, x)
2. y is a head of some animal   =   ∃x : A(x)                           &       Hd(y, x)

Now, “All horses are animals; therefore: Every horse-head is an animal-head.” will be given the form as in the first line and (very informally) the treatment as follows:

∀v : H(v) → A(v)    →    ∀y : ∃x : H(x) & Hd(y, x) → ∃z : A(z) & Hd(y, z)

take an arbitrary horse-head a :        ∃x : H(x) & Hd(a, x) → ∃z : A(z) & Hd(a, z)
then there is a horse h :               H(h) & Hd(a, h) → ∃z : A(z) & Hd(a, z)
but h is an animal by (D.iii), so       A(h) & Hd(a, h)
and h itself witnesses the conclusion:  ∃z : A(z) & Hd(a, z)
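The inference can also be checked in a small finite model. The individuals and the extension of Hd are invented for illustration, and the check concerns this one model only (validity would require checking all models):

```python
# A finite model for the horse-head inference. Hd(y, x) reads "y is the head of x".
things = {"h", "c", "head_h", "head_c"}
H  = {"h"}                       # horses
A  = {"h", "c"}                  # animals
Hd = {("head_h", "h"), ("head_c", "c")}

# premise: forall v : H(v) -> A(v)
premise = all((v not in H) or (v in A) for v in things)

# conclusion: forall y : (exists x : H(x) & Hd(y, x)) -> (exists z : A(z) & Hd(y, z))
conclusion = all(
    (not any(x in H and (y, x) in Hd for x in things))
    or any(z in A and (y, z) in Hd for z in things)
    for y in things
)

assert premise and conclusion
```

The informal derivation above is exactly what guarantees that the conclusion holds in every model in which the premise does, not just in this one.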

Frege’s first writings after the Begriffsschrift were bitter attacks on Boolean methods (showing no awareness of the improvements by Peirce, Jevons, Schröder, and others) and a defense of his own system. His main complaint against Boole was the artificiality of mimicking notation better suited for numerical analysis rather than developing a notation for logical analysis alone. This work was followed by the Die Grundlagen der Arithmetik (1884; “The Foundations of Arithmetic”) and then by a series of extremely important papers on precise mathematical and logical topics. After 1879 Frege carefully developed his position that all of mathematics could be derived from, or reduced to, basic logical laws – a position later to be known as logicism in the philosophy of mathematics. His view paralleled similar ideas about the reducibility of mathematics to set theory from roughly the same time – although Frege always stressed that his was an intensional logic of concepts, not of extensions and classes. His views are often marked by hostility to British extensional logic and to the general English-speaking tendencies toward nominalism and empiricism that he found in authors such as J.S. Mill. Frege’s work was much admired in the period 1900-10 by Bertrand Russell who promoted Frege’s logicist research program – first in the Introduction to Mathematical


Logic (1903), and then with Alfred North Whitehead, in Principia Mathematica (1910-13) – but who used a Peirce-Schröder-Peano system of notation rather than Frege’s; Russell’s development of relations and functions was very similar to Schröder’s and Peirce’s. Nevertheless, Russell’s formulation of what is now called the “set-theoretic” paradoxes was taken by Frege himself, perhaps too readily, as a shattering blow to his goal of founding mathematics and science in an intensional, “conceptual” logic. Almost all progress in symbolic logic in the first half of the 20th century was accomplished using set theories and extensional logics and thus mainly relied upon work by Peirce, Schröder, Peano, and Georg Cantor. Frege’s care and rigour were, however, admired by many German logicians and mathematicians, including David Hilbert and Ludwig Wittgenstein. Although he did not formulate his theories in an axiomatic form, Frege’s derivations were so careful and painstaking that he is sometimes regarded as a founder of this axiomatic tradition in logic. Since the 1960s Frege’s works have been translated extensively into English and reprinted in German, and they have had an enormous impact on a new generation of mathematical and philosophical logicians.

D.3. Set theory A development in Germany originally completely distinct from logic but later to merge with it was Georg Cantor’s development of set theory. As mentioned before, the extensional view of concepts began gradually winning the stage with the advance of Boolean algebra. Eventually, even Frege’s analyses became incorporated and merged with the set-theoretical approach to the semantics of logical formalism. In work originating from discussions on the foundations of the infinitesimal and derivative calculus by Baron Augustin-Louis Cauchy and Karl Weierstrass, Cantor and Richard Dedekind developed methods of dealing with the large, and in fact infinite, sets of the integers and points on the real number line. Although the Booleans had used the notion of a class, they rarely developed tools for dealing with infinite classes, and no one systematically considered the possibility of classes whose elements were themselves classes, which is a crucial feature of Cantorian set theory. The conception of “real” or “actual” infinities of things, as opposed to merely unlimited possibilities, was a medieval problem that had also troubled 19th-century German mathematicians, especially the great Carl Friedrich Gauss. The Bohemian mathematician and priest Bernhard Bolzano emphasized the difficulties posed by infinities in his Paradoxien des Unendlichen (1851; “Paradoxes of the Infinite”); in 1837 he had written an anti-Kantian and pro-Leibnizian nonsymbolic logic that was later widely studied. De Morgan and Peirce had given technically correct characterizations of infinite domains; these were not especially useful in set theory and went unnoticed in the German mathematical world. And the decisive development happened in this world. First Dedekind and then Cantor used Bolzano’s tool of measuring sets by one-to-one mappings: two sets are “equinumerous” iff there is a one-to-one mapping between them. Using this technique, Dedekind gave in Was sind und was sollen die Zahlen?
(1888; “What Are and Should Be the Numbers?”) a precise definition of an infinite set: a set is infinite if and only if the whole set can be put into one-to-one correspondence with a proper subset of itself. This looks like a contradiction because, as long as we think of finite sets, it indeed is. But take the set of all natural numbers, N = {0, 1, 2, 3, 4, ...}, and remove from it 0, getting N1 = {1, 2, 3, 4, ...}. The functions f : N1 → N, given by f(x) = x − 1, and f1 : N → N1, given by f1(x) = x + 1, are mutually inverse and establish a one-to-one correspondence between N and its proper subset N1. A set A is said to be “countable” iff it is equinumerous with N. One of the main results of Cantor was the demonstration that there are uncountable infinite sets, in fact, sets “arbitrarily infinite”. (For instance, the set R of real numbers was shown by Cantor to be “genuinely larger” than N.) Although Cantor developed the basic outlines of a set theory, especially in his treatment of infinite sets and the real number line, he did not worry about rigorous foundations for such a theory – thus, for example, he did not give axioms of set theory – nor about the precise conditions governing the concept of a set and the formation of sets. Although there are some hints in Cantor’s
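The pair of functions from the text can be checked directly on an initial segment of N (the full claim, of course, concerns all of N):

```python
# Dedekind's criterion: N is infinite because it maps one-to-one onto its proper
# subset N1 = N \ {0}. The two mutually inverse functions from the text:
def f(x):        # f  : N1 -> N
    return x - 1

def f1(x):       # f1 : N  -> N1
    return x + 1

# mutual inverses, checked on an initial segment:
assert all(f1(f(n)) == n for n in range(1, 1000))   # f1 after f is identity on N1
assert all(f(f1(n)) == n for n in range(0, 1000))   # f after f1 is identity on N
assert all(f1(n) != 0 for n in range(0, 1000))      # f1 really lands in N1
```

No such pair of mutually inverse functions exists between a finite set and any of its proper subsets, which is exactly why the criterion characterizes infinity.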
writing of an awareness of problems in this area (such as hints of what later came to be known as the class/set distinction), these difficulties were forcefully posed by the paradoxes of Russell and the Italian mathematician Cesare Burali-Forti and were first overcome in what has come to be known as Zermelo-Fraenkel set theory.
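Dedekind’s correspondence between N and its proper subset N1, discussed above, can be checked mechanically on an initial segment; a minimal Python sketch (the sample bound is our own choice):

```python
# Dedekind's example: N = {0, 1, 2, ...} is equinumerous with its proper
# subset N1 = {1, 2, 3, ...} via f(x) = x - 1 and f1(x) = x + 1.
def f(x):   # from N1 onto N
    return x - 1

def f1(x):  # from N onto N1
    return x + 1

# the two maps are mutually inverse, hence a one-to-one correspondence
assert all(f(f1(n)) == n for n in range(1000))      # on N
assert all(f1(f(n)) == n for n in range(1, 1001))   # on N1
print("f and f1 are mutually inverse on the sampled segment")
```

Of course, a finite check only illustrates the definition; the mathematical claim concerns the whole infinite set.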

D.4. 20th century logic
In 1900 logic was poised on the brink of the most active period in its history. The late 19th-century work of Frege, Peano, and Cantor, as well as Peirce’s and Schröder’s extensions of Boole’s insights, had broken new ground, raised considerable interest, established international lines of communication, and formed a new alliance between logic and mathematics. Five projects internal to late 19th-century logic coalesced in the early 20th century, especially in works such as Russell and Whitehead’s Principia Mathematica. These were:
1. the development of a consistent set or property theory (originating in the work of Cantor and Frege),
2. the application of the axiomatic method,
3. the development of quantificational logic,
4. the use of logic to understand mathematical objects, and
5. the nature of mathematical proof.
The five projects were unified by a general effort to use symbolic techniques, sometimes called mathematical, or formal, techniques. Logic became increasingly “mathematical,” then, in two senses.
• First, it attempted to use symbolic methods like those that had come to dominate mathematics.
• Second, an often dominant purpose of logic came to be its use as a tool for understanding the nature of mathematics – such as in defining mathematical concepts, precisely characterizing mathematical systems, or describing the nature of a mathematical proof.

D.4.1. Logic and philosophy of mathematics
An outgrowth of the theory of Russell and Whitehead, and of most modern set theories, was a better articulation of logicism, the philosophy of mathematics claiming that operations and objects spoken about in mathematics are really purely logical constructions. This has focused increased attention on what exactly “pure” logic is and whether, for example, set theory is really logic in a narrow sense.
There seems little doubt that set theory is not “just” logic in the way in which, for example, Frege viewed logic – i.e., as a formal theory of functions and properties. Because set theory engenders a large number of interestingly distinct kinds of nonphysical, nonperceived abstract objects, it has also been regarded by some philosophers and logicians as suspiciously (or endearingly) Platonistic. Others, such as Quine, have “pragmatically” endorsed set theory as a convenient way – perhaps the only such way – of organizing the whole world around us, especially if this world contains the richness of transfinite mathematics.

For most of the first half of the 20th century, new work in logic saw logic’s goal as being primarily to provide a foundation for, or at least to play an organizing role in, mathematics. Even for those researchers who did not endorse the logicist program, logic’s goal was closely allied with techniques and goals in mathematics, such as giving an account of formal systems (“formalism”) or of the ideal nature of nonempirical proof and demonstration. Interest in the logicist and formalist programs waned after Gödel’s demonstration that logic could not provide exactly the sort of foundation for mathematics, or account of its formal systems, that had been sought. Namely, Gödel proved a mathematical theorem which, interpreted in natural language, says something like:

Gödel’s incompleteness theorem
A logical theory satisfying reasonable and rather weak conditions cannot be both consistent and complete – that is, it cannot prove all the true statements expressible in its language.


Thus mathematics could not be reduced to a provably complete and consistent logical theory. An interesting fact is that what Gödel did in the proof of this theorem was to construct a sentence which looked very much like the Liar paradox. He showed that in any formal theory satisfying his conditions one can write the sentence “I am not provable in this theory”, which cannot be provable unless the theory is inconsistent.

In spite of this negative result, logic has remained closely allied with mathematical foundations and principles. Traditionally, logic had set itself the task of understanding valid arguments of all sorts, not just mathematical ones. It had developed the concepts and operations needed for describing concepts, propositions, and arguments – especially in terms of patterns, or “logical forms” – insofar as such tools could conceivably affect the assessment of any argument’s quality or ideal persuasiveness. It is this general ideal that many logicians have developed and endorsed, and that some, such as Hegel, have rejected as impossible or useless. For the first decades of the 20th century, however, logic threatened to become exclusively preoccupied with a new and historically somewhat foreign role: serving in the analysis of arguments in only one field of study, mathematics. The philosophical-linguistic task of developing tools for analyzing statements and arguments that can be expressed in some natural language about some field of inquiry, or even for analyzing propositions as they are actually (and perhaps necessarily) thought or conceived by human beings, was almost completely lost. There were scattered efforts to eliminate this gap by reducing basic principles in all disciplines – including physics, biology, and even music – to axioms, particularly axioms in set theory or first-order logic. But these attempts, beyond showing that it could be done, at least in principle, did not seem especially enlightening.
Thus, such efforts, at their zenith in the 1950s and ’60s, had all but disappeared by the ’70s: one did not better and more usefully understand an atom or a plant by being told it was a certain kind of set. Logic, having become a formal discipline, also led to an understanding of mechanical reasoning. Although this seems to involve a serious severing of its relation to human reasoning, it found a wide range of applications in the field based on purely formal and mechanical manipulations: computer science. The close connections between these two fields will be sketched in the following section.

E. Modern Symbolic Logic
Euclid, and also Aristotle, were already well aware of the notion of a rigorous logical theory, in the sense of a specification, often axiomatic, of the theorems of a theory. In fact, one might feel tempted to credit the crises in geometry of the 19th century with focusing attention on the need for very careful presentations of these theories and other aspects of formal systems.

As is well known, Euclid designed his Elements around 10 axioms and postulates which one could not resist accepting as obvious (e.g., “an interval can be prolonged indefinitely”, “all right angles are equal”). From the assumption of their truth, he deduced some 465 theorems. The famous postulate of the parallels was

The fifth postulate
If a straight line falling on two straight lines makes the interior angles on the same side less than the two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than the two right angles.

With time it was pointed out that the fifth postulate (even if reformulated) was somehow less intuitive and more complicated than the others. For hundreds of years mathematicians unsuccessfully attempted to derive the fifth postulate from the others until, in the 19th century, they started to reach the conclusion that it must be independent of the rest. This meant that one might as well drop it! That was done independently by the Hungarian Bolyai and the Russian Lobachevsky in 1832. What was left was another axiomatic system, the first system of non-Euclidean geometry. The discovery revealed the importance of admitting the possibility of manipulating the axioms, which, perhaps, need not be given by God and intuition but may be chosen with some freedom. Dropping the fifth postulate raised the question of what this new (sub)set of axioms might possibly describe. New models were created which satisfied all the axioms but the fifth. This was the first exercise in what later came to be called “model theory”.


E.1. Formal logical systems: syntax.
Although set theory and the type theory of Russell and Whitehead were considered to be “logic” for the purposes of the logicist program, a narrower sense of logic re-emerged in the mid-20th century as what is usually called the “underlying logic” of these systems: whatever concerns only rules for propositional connectives, quantifiers, and nonspecific terms for individuals and predicates. (An interesting issue is whether the privileged relation of identity, typically denoted by the symbol “=”, is a part of logic: most researchers have assumed that it is.) In the early 20th century, and especially after Tarski’s work in the 1920s and ’30s, a formal logical system was regarded as being composed of three parts, all of which could be rigorously described:
1. the syntax (or notation);
2. the rules of inference (or the patterns of reasoning);
3. the semantics (or the meaning of the syntactic symbols).
One of the fundamental contributions of Tarski was his analysis of the concept of ‘truth’, which, in the above three-fold setting, is given a precise treatment as a relation between syntax (linguistic expressions) and semantics (contexts, world). The Euclidean, and then non-Euclidean, geometries were, as a matter of fact, built as axiomatic-deductive systems (point 2). The other two aspects of a formal system identified by Tarski were present too, but much less emphasized: notation was very informal, relying often on drawings; the semantics was rather intuitive and obvious. Tarski’s work initiated the rigorous study of all three aspects.

E.1.1. The language
First, there is the notation: the rules of formation for terms and for well-formed formulas in the logical system. This theory of notation itself became subject to exacting treatment in the concatenation theory, or theory of strings, of Tarski, and in the work of the American Alonzo Church.
A formal language is simply a set of words (well-formed formulae, wff), that is, strings over some given alphabet (set of symbols), and is typically specified by rules of formation. For instance:
• the alphabet Σ = {2, 4, →, −, (, )}
• the rules for forming words (formulae, elements) of the language L:
– 2, 4 ∈ L
– if A, B ∈ L then also −A ∈ L and (A → B) ∈ L.
This specification allows us to conclude that, for instance, 4, −2, (4 → −2), −(2 → −4) all belong to L, while 24, () or 2 → do not. Previously, notation was often a haphazard affair in which it was unclear what could be formulated or asserted in a logical theory and whether expressions were finite or were schemata standing for infinitely long wffs. Issues that arose out of notational questions include definability of one wff by another (addressed in Beth’s and Craig’s theorems, and in other results), creativity, and replaceability, as well as the expressive power and complexity of different logical languages.

E.1.2. Reasoning system
The second part of a logical system consists of the axioms and rules of inference, or other ways of identifying what counts as a theorem. This is what is usually meant by the logical “theory” proper: a (typically recursive) description of the theorems of the theory, including axioms and every wff derivable from axioms by admitted rules. Using the language L, one might, for instance, define the following theory T:


Axioms:

  i)   2
  ii)  (4 → −2)
  iii) (A → −−A)
  iv)  (−−A → A)

Upper case letters denote variables for which we can substitute arbitrary formulae of our language L.

Rules:

  1)  (A → B) ; (B → C)        if A then B ; if B then C
      -----------------        -------------------------
           (A → C)                   if A then C

  2)  (A → B) ; A              if A then B ; but A
      -----------              -------------------
           B                            B

  3)  (A → B) ; −B             if A then B ; but not B
      ------------             -----------------------
          −A                          not A

We can now perform symbolic derivations whose correctness can be checked mechanically. For instance:

  (E.i)
  1. 2                axiom i)
  2. (2 → −−2)        axiom iii)
  3. −−2              rule 2) applied to 2. and 1.
  4. (4 → −2)         axiom ii)
  5. −4               rule 3) applied to 4. and 3.
  6. (−4 → −−−4)      axiom iii)
  7. −−−4             rule 2) applied to 6. and 5.
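The claim that correctness of such derivations can be checked mechanically is easy to make concrete. Below is a minimal Python sketch of a well-formedness test for L and a proof checker for T; the encoding is our own (→ written as "->", − as "-"), and the list of steps mirrors the derivation (E.i) in linear form:

```python
# Toy syntax and proof checker for the theory T above (our own encoding:
# the arrow is written "->" and negation "-").

def neg(a):                    # -A
    return "-" + a

def imp(a, b):                 # (A -> B)
    return "(" + a + "->" + b + ")"

def split_imp(f):
    """If f has the form (A->B), return (A, B); otherwise (None, None)."""
    if not (f.startswith("(") and f.endswith(")")):
        return None, None
    body, depth = f[1:-1], 0
    for i in range(len(body) - 1):
        if body[i] == "(":
            depth += 1
        elif body[i] == ")":
            depth -= 1
        elif depth == 0 and body[i:i + 2] == "->":
            return body[:i], body[i + 2:]
    return None, None

def is_wff(f):
    """Membership in the language L of section E.1.1."""
    if f in ("2", "4"):
        return True
    if f.startswith("-"):
        return is_wff(f[1:])
    a, b = split_imp(f)
    return a is not None and is_wff(a) and is_wff(b)

def is_axiom(f):
    """Axioms i) 2, ii) (4->-2), iii) (A->--A), iv) (--A->A)."""
    if f == "2" or f == imp("4", neg("2")):
        return True
    a, b = split_imp(f)
    if a is None:
        return False
    return b == neg(neg(a)) or a == neg(neg(b))

def follows(f, proved):
    """Does f follow from already proved formulas by one of rules 1)-3)?"""
    for p in proved:
        a, b = split_imp(p)
        if a is None:
            continue
        if b == f and a in proved:            # rule 2): (A->B), A |- B
            return True
        if f == neg(a) and neg(b) in proved:  # rule 3): (A->B), -B |- -A
            return True
        fa, fc = split_imp(f)                 # rule 1): (A->B), (B->C) |- (A->C)
        if fa == a and any(split_imp(q) == (b, fc) for q in proved):
            return True
    return False

def check(derivation):
    """Verify that every step is an axiom or follows from earlier steps."""
    proved = set()
    for f in derivation:
        if not is_wff(f) or not (is_axiom(f) or follows(f, proved)):
            return False
        proved.add(f)
    return True

# the derivation (E.i), written as a linear sequence of steps
steps = ["2", "(2->--2)", "--2", "(4->-2)", "-4", "(-4->---4)", "---4"]
print(check(steps))             # True
print(check(["(2->4)", "4"]))   # False: (2->4) is neither axiom nor derived
```

Note that the checker never interprets the symbols; it compares strings, which is exactly the "blind rule following" discussed later in section E.3.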

Thus, −−−4 is a theorem of our theory, and so is −4, which is obtained by the subderivation ending with the application of rule 3). Although the axiomatic method of characterizing such theories, with axioms or postulates or both and a small number of rules of inference, had a very old history (going back to Euclid or further), two new methods arose in the 1930s and ’40s.
1. First, in 1934, there was the German mathematician Gerhard Gentzen’s method of succinct Sequenzen (rules of consequents), which was especially useful for deriving metalogical results. This method originated with Paul Hertz in 1932, and a related method was described by Stanisław Jaśkowski in 1934.
2. Next to appear was the similarly axiomless method of “natural deduction,” which used only rules of inference; it originated in a suggestion by Russell in 1925 but was developed by Quine and the American logicians Frederick Fitch and George David Wharton Berry. The natural deduction technique is widely used in the teaching of logic, although it makes the demonstration of metalogical results somewhat difficult, partly because historically these arose in axiomatic and consequent formulations.
A formal description of a language, together with a specification of a theory’s theorems (derivable propositions), is often called the “syntax” of the theory. (This is somewhat misleading when one compares the practice in linguistics, which would limit syntax to the narrower issue of grammaticality.) The term “calculus” is sometimes chosen to emphasize the purely syntactic, uninterpreted nature of a reasoning system.

E.1.3. Semantics
The last component of a logical system is the semantics for such a theory and language: a declaration of what the terms of a theory refer to, and how the basic operations and connectives are to be interpreted in a domain of discourse, including truth conditions for the formulae in this domain. Consider, as an example, the rule 1) from the theory T above:

  (A → B) ; (B → C)
  -----------------
       (A → C)


It is merely a “piece of text” and its symbols allow almost unlimited interpretations. We may, for instance, take A, B, C, ... to denote propositions and → an implication. But we may likewise let A, B, C, ... stand for sets and → for set-inclusion. The following are then examples of applications of this rule under these two interpretations:

  If it’s nice   then we’ll leave
  If we leave    then we’ll see a movie
  If it’s nice   then we’ll see a movie

respectively

  {1, 2} ⊆ {1, 2, 3}
  {1, 2, 3} ⊆ {1, 2, 3, 5}
  {1, 2} ⊆ {1, 2, 3, 5}

The rule is “sound” with respect to these interpretations – when applied to these domains in the prescribed way, it represents a valid argument. A specification of a domain of objects (De Morgan’s “universe of discourse”), and of the rules for interpreting the symbols of a logical language in this domain such that all the theorems of the logical theory are true, is said to be a “model” of the theory. The two suggested interpretations are models of this rule. (To make them models of the whole theory T would require more work, in particular, finding an appropriate interpretation of 2, 4 and −, such that the axioms become true and all rules sound. For the propositional case, one could for instance let − denote negation, 2 denote ‘true’ and 4 ‘false’.) If we chose to interpret the formulae of L as events and A → B as, say, “A is independent of B”, the rule would not be sound. Such an interpretation would not give a model of the theory or, what amounts to the same, if the theory were applied to this part of the world, we could not trust its results. We devote the next subsection to some further concepts arising with formal semantics.
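The parenthetical claim – that reading − as negation, 2 as ‘true’ and 4 as ‘false’ yields a model of T – can itself be checked mechanically; a small Python sketch (encoding ours):

```python
# Interpret the toy theory T: 2 as True, 4 as False, "-" as negation,
# "->" as implication; check that the axioms hold and rule 1) is sound.
def implies(a, b):
    return (not a) or b

TWO, FOUR = True, False

assert TWO                                  # axiom i)   2
assert implies(FOUR, not TWO)               # axiom ii)  (4 -> -2)
assert all(implies(a, not not a) and       # axiom iii) (A -> --A)
           implies(not not a, a)            # axiom iv)  (--A -> A)
           for a in (True, False))

# rule 1): whenever both premises are true, so is the conclusion
assert all(implies(implies(a, b) and implies(b, c), implies(a, c))
           for a in (True, False)
           for b in (True, False)
           for c in (True, False))
print("the propositional interpretation is a model of T")
```

Soundness of rules 2) and 3) can be verified by the same exhaustive enumeration of truth values.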

E.2. Formal semantics
What is known as formal semantics, or model theory, has a more complicated history than does logical syntax; indeed, one could say that the history of the emergence of semantic conceptions of logic in the late 19th and early 20th centuries is poorly understood even today. Certainly, Frege’s notion that propositions refer to (German: bedeuten) “The True” or “The False” – and this for complex propositions as a function of the truth values of simple propositions – counts as semantics. As we mentioned earlier, this has often been the intuition since Aristotle, although modal propositions and paradoxes like the “liar paradox” pose some problems for this understanding. Nevertheless, this view dominates most of logic, in particular such basic fields as propositional and first-order logic. Earlier medieval theories of supposition also incorporated useful semantic observations. So, too, do the techniques of letters referring to the values 1 and 0 that are seen from Boole through Peirce and Schröder. Both Peirce and Schröder occasionally gave brief demonstrations of the independence of certain logical postulates using models in which some postulates were true, but not others. This was also the technique used by the inventors of non-Euclidean geometry. The first clear and significant general result in model theory is usually accepted to be a result discovered by Löwenheim in 1915 and strengthened in work by Skolem from the 1920s.

Löwenheim-Skolem theorem
A theory that has a model at all has a countable model.

That is to say, if there exists some model of a theory (i.e., an application of it to some domain of objects), then there is sure to be one with a domain no larger than the natural numbers.
Although Löwenheim and Skolem understood their results perfectly well, they did not explicitly use the modern language of “theories” being true in “models.” The Löwenheim-Skolem theorem is in some ways a shocking result, since it implies that any consistent formal theory of anything – no matter how hard it tries to address the phenomena unique to a field such as biology, physics, or even sets or just real (decimal) numbers – can, judged from its formalisms alone, just as well be understood as being about natural numbers.

Consistency
The second major result in formal semantics, Gödel’s completeness theorem of 1930, required, even for its description, let alone its proof, a more careful development of precise concepts about logical
systems – metalogical concepts – than existed in earlier decades. One question for all logicians since Boole, and certainly since Frege, had been: is the theory consistent? In its purely syntactic analysis, this amounts to the question: is a contradictory sentence (of the form “A and not-A”) a theorem? In most cases, the equivalent semantic counterpart of this is the question: does the theory have a model at all? For a logical theory, consistency means that a contradictory theorem cannot be derived in the theory. But since logic was intended to be a theory of necessarily true statements, the goal was stronger: a theory is Post-consistent (named for the Polish-American logician Emil Post) if every theorem is valid – that is, if no theorem is a contradictory or a contingent statement. (In nonclassical logical systems, one may define many other interestingly distinct notions of consistency; these notions were not distinguished until the 1930s.)

Consistency was quickly acknowledged as a desired feature of formal systems: it was widely and correctly assumed that various earlier theories of propositional and first-order logic were consistent. Zermelo was, as has been observed, concerned with demonstrating that ZF was consistent; Hilbert had even observed that there was no proof that the Peano postulates were consistent. These questions received an answer that was not what had been hoped for in a later result – Gödel’s incompleteness theorem. A clear proof of the consistency of propositional logic was first given by Post in 1921. Its tardiness in the history of symbolic logic is a commentary not so much on the difficulty of the problem as on the slow emergence of the semantic and syntactic notions necessary to characterize consistency precisely. The first clear proof of the consistency of first-order predicate logic is found in the work of Hilbert and Wilhelm Ackermann from 1928.
Here the problem was not only the precise awareness of consistency as a property of formal theories but also a rigorous statement of first-order predicate logic as a formal theory.

Completeness
In 1928 Hilbert and Ackermann also posed the question of whether a logical system, and in particular first-order predicate logic, was (as it is now called) “complete”. This is the question of whether every valid proposition – that is, every proposition that is true in all intended models – is provable in the theory. In other words, does the formal theory describe all the noncontingent truths of a subject matter? Although some sort of completeness had clearly been a guiding principle of formal logical theories dating back to Boole, and even to Aristotle (and to Euclid in geometry) – otherwise they would not have sought numerous axioms or postulates, risking nonindependence and even inconsistency – earlier writers seemed to have lacked the semantic terminology to specify what their theory was about and wherein “aboutness” consists. Specifically, they lacked a precise notion of a proposition being “valid” – that is, “true in all (intended) models” – and hence lacked a way of precisely characterizing completeness. Even the language of Hilbert and Ackermann from 1928 is not perfectly clear by modern standards. Gödel proved the completeness of first-order predicate logic in his doctoral dissertation of 1930; Post had shown the completeness of propositional logic in 1921.

In many ways, however, explicit consideration of issues in semantics, along with the development of many of the concepts now widely used in formal semantics and model theory (including the term metalanguage), first appeared in a paper by Alfred Tarski, The Concept of Truth in Formalized Languages, published in Polish in 1933; it became widely known through a German translation of 1936.
Although the theory of truth Tarski advocated has had a complex and debated legacy, there is little doubt that the concepts developed there (and in later papers from the 1930s) for discussing what it is for a sentence to be “true in” a model marked the beginning of model theory in its modern phase. Although the outlines of how to model propositional logic had been clear to the Booleans and to Frege, one of Tarski’s most important contributions was an application of his general theory of semantics in a proposal for the semantics of first-order predicate logic (now termed the set-theoretic, or Tarskian, interpretation).


Tarski’s techniques and language for precisely discussing semantic concepts, as well as properties of formal systems described using his concepts – such as consistency, completeness, and independence – rapidly and almost imperceptibly entered the literature in the late 1930s and after. This influence accelerated with the publication of many of his works in German and then in English, and with his move to the United States in 1939.

E.3. Computability and Decidability
The underlying theme of the whole development we have sketched is the attempt to formalize logical reasoning, hopefully to the level at which it can be performed mechanically. The idea of “mechanical reasoning” has always been present, if not always explicitly, in logical investigations and could almost be taken as their primary, if only ideal, goal. Intuitively, “mechanical” involves some blind following of rules, and such blind rule-following is the essence of a symbolic system as described in E.1.2. This “mechanical blindness” follows from the fact that the language and the rules are unambiguously defined. Consequently, correctness of the application of a rule to an actual formula can be verified mechanically. You can easily check that all applications of rules in the derivation (E.i) are correct, and equally easily see that, for instance,

  (2 → 4) ; 4
  -----------
       2

is not a correct application of any rule from T. Logic was supposed to capture correct reasoning, and correctness amounts to conformance to some accepted rules. A symbolic reasoning system is an ultimately precise expression of this view of correctness, which also makes its verification a purely mechanical procedure. Such a mechanism is possible because all legal moves and restrictions are expressed in the syntax: the language, axioms and rules. In other words, it is exactly the uninterpreted nature of symbolic systems which leads to the mechanisation of reasoning. Naturally enough, once the symbolic systems were defined and one became familiar with them, i.e., in the beginning of the 20th century, questions about mechanical computability were raised by logicians. The answers led to what is today called the “information revolution”, centering around the design and use of computers – devices for symbolic, that is, uninterpreted, manipulation.

Computability
What does it mean that something can be computed mechanically?
In the 1930s this question acquired an ultimately precise, mathematical meaning. In the proof of the incompleteness theorem Gödel introduced special schemata for so-called “recursive functions” working on natural numbers. Some time later Church proposed the thesis

Church’s thesis
A function is computable if and only if it can be defined using only recursive functions.

This may sound astonishing – why should just recursive functions have such a special significance? The answer comes from the work of Alan Turing, who introduced “devices” which came to be known as Turing machines. Although defined as conceptual entities, one could easily imagine that such devices could actually be built as physical machines performing exactly the operations suggested by Turing. The machines could, for instance, recognize whether a string had some specific form and, generally, compute functions (see pp. 206ff in C. Sobel, The Cognitive Sciences: An Interdisciplinary Approach). The functions which could be computed on Turing machines were shown to be exactly the recursive functions! Even more significant for us may be the fact that there is a well-defined sublogic of first-order logic in which proving a theorem amounts to computing a recursive function, that is, which can code all possible computer programs.1 Thus, in the wide field of logic, there is a small subdomain providing sufficient means to study the issues of computability. (Such connections are much deeper and more intricate, but we cannot address them all here.)

1 Incidentally, or perhaps not, this subset comprises the formulae of the form “If A1 and A2 and ... and An then C”. Such formulae give the syntax of the elegant programming language Prolog. In the terminology of P. Thagard, Mind, chapter 3, they correspond to rules, which are there claimed to have more “psychological plausibility” than general logical representations.
It might seem surprising that a computable sublogic should be more psychologically plausible than full first-order logic. This, however, might be due to the relative simplicity of such formulae as compared to the general format of formulae in first-order logic. Due to its simplicity, propositional logic too could probably be claimed, and even shown, to be more “psychologically plausible” than first-order logic.
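Turing’s devices are easy to simulate in software. The following is a minimal, self-contained Python sketch; the particular machine, which merely flips the bits of its input, is our own toy example, not one from the text:

```python
# A minimal Turing machine simulator. transitions maps a (state, symbol)
# pair to (new state, written symbol, head move), with move in {"L", "R"}.

def run_tm(tape, transitions, start, accept, blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            # read off the non-blank tape contents, left to right
            return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)
        key = (state, cells.get(head, blank))
        if key not in transitions:
            return None                # the machine halts without accepting
        state, cells[head], move = transitions[key]
        head += 1 if move == "R" else -1
    return None                        # give up: the machine may run forever

# a one-state machine that flips every bit and accepts at the first blank
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run_tm("0110", flip, "q0", "halt"))   # -> 1001
```

The `max_steps` bound is a practical concession only: as discussed below, no simulator can decide in general whether a machine that has not yet halted ever will.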


The Church thesis remains only a thesis. Nevertheless, so far nobody has proposed a notion of computability which would exceed the capacities of Turing machines (and hence of recursive functions). A modern computer, with all its sophistication, is, as far as its power and possibilities are concerned, nothing more than a Turing machine! Thus logical results, in particular the negative theorems stating the limitations of logical formalisms, also determine the ultimate limits of computers’ capabilities, and we give a few examples below.

Decidability
By the 1930s almost all work in the foundations of mathematics and in symbolic logic was being done in a standard first-order predicate logic, often extended with axioms or axiom schemata of set- or type-theory. This underlying logic consisted of a theory of “classical” truth-functional connectives, such as “and”, “not” and “if ... then” (propositional logic), and first-order quantification permitting propositions that “all” and “at least one” individual satisfy a certain formula. Only gradually in the 1920s and ’30s did a conception of a “first-order” logic, and of more expressive alternatives, arise. Formal theories can be classified according to their expressive or representational power, depending on their language (notation) and reasoning system (inference rules). Propositional logic allows merely the manipulation of simple propositions combined with operators like “or”, “and”. First-order logic allows explicit reference to, and quantification over, individuals, such as numbers or sets, but not quantification over properties of these individuals. For instance, the statement “for all x: if x is a man then x is human” is first-order.
But the following one is second-order, involving quantification over properties P, R: “for every x and any properties P, R: if P implies R and x is P then x is R.”2 (Likewise, the fifth postulate of Euclid is not finitely axiomatizable in the first-order language but is rather a schema or second-order formulation.) The question “why should one bother with less expressive formalisms, when more expressive ones are available?” should appear quite natural. The answer lies in the fact that increasing the expressive power of a formalism clashes with another desired feature, namely

decidability
there exists a finite mechanical procedure for determining whether a proposition is, or is not, a theorem of the theory.

The germ of this idea is present in the law of excluded middle, claiming that every proposition is either true or false. But decidability adds to it a requirement which can be expressed only with the precise definition of a finite mechanical procedure, of computability: not only must the proposition be provable or not – there must be a terminating algorithm which can be run (on a computer) to decide which is the case. (In E.1.2 we have shown that, for instance, −4 is a theorem of the theory T defined there. But if you were now asked whether (−−4 → (−2 → 2)) is a theorem, you might have a hard time trying to find a derivation, and an even harder one trying to prove that no derivation of this formula exists. Decidability of a theory means that such questions can be answered by a computer program.) The decidability of propositional logic, through the use of truth tables, was known to Frege and Peirce; a proof of its decidability is attributable to Jan Łukasiewicz and Emil Post independently in 1921.
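The truth-table method just mentioned is itself a simple terminating algorithm: with n propositional variables there are only 2^n valuations to check. A minimal Python sketch (the encoding is our own: a formula is a Python boolean expression over named variables):

```python
# Deciding propositional validity by truth tables: enumerate every
# valuation; the loop always terminates, so the logic is decidable.
from itertools import product

def is_tautology(formula, variables):
    """formula: a boolean expression over the given variable names."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not eval(formula, {}, env):   # one falsifying valuation refutes it
            return False
    return True                          # true under all 2^n valuations

print(is_tautology("p or not p", ["p"]))    # True  (excluded middle)
print(is_tautology("p or q", ["p", "q"]))   # False (take p = q = False)
# modus ponens as a tautology: ((p -> q) and p) -> q, with -> unfolded
print(is_tautology("not ((not p or q) and p) or q", ["p", "q"]))   # True
```

The exhaustive enumeration is exactly what fails for first-order logic: a quantifier ranges over a possibly infinite domain, so no analogous finite check exists, in line with the undecidability results below.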
Löwenheim showed in 1915 that first-order predicate logic with only single-place predicates was decidable and that the full theory was decidable if the first-order predicate calculus with only two-place predicates was decidable; further developments were made by Thoralf Skolem, Heinrich Behmann, Jacques Herbrand, and Willard Quine. Herbrand showed the existence of an algorithm which, if a formula of first-order predicate logic is valid, will determine it to be so; the difficulty, then, was in designing an algorithm that in a finite amount of time would determine that propositions were invalid. (We can easily imagine a machine which, starting with the specified axioms, generates all possible theorems by simply generating all possible derivations – sequences of correct rule applications. If the formula is provable, the machine will, sooner or later, find a proof. But if the formula is not provable, the machine will keep running forever, since the number of proofs

2 Note a vague analogy of the distinction between first-order quantification over individuals and second-order quantification over properties to the distinction between extensional and intensional aspects from B.3.


is, typically, infinite.) As early as the 1880s, Peirce seemed to be aware that propositional logic was decidable but that the full first-order predicate logic with relations was undecidable. That first-order predicate logic (in any general formulation) is undecidable was first shown definitively by Alan Turing and Alonzo Church, independently, in 1936. Together with Gödel's (second) incompleteness theorem and the earlier Löwenheim-Skolem theorem, the Church-Turing theorem on the undecidability of first-order predicate logic is one of the most important, even if "negative", results of 20th-century logic. Among the consequences of these negative results are many facts about the limits of computers. For instance, it is not (and never will be!) possible to write a computer program which, given an arbitrary first-order theory T and some formula f, is guaranteed to terminate giving the answer "Yes" if f is a theorem of T and "No" if it is not. A more mundane example is the following. As we know, one can easily write a computer program which for some inputs will not terminate. It might therefore be desirable to have a program U which could take as input another program P (a piece of text, just like the "usual" input to any program) and a description of its input d, and decide whether P run on d would terminate or not. Such a program U, however, will never be written, as the problem described – the halting problem – is undecidable.
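The proof-enumerating machine just described can be sketched in a few lines of code. The toy "theory" below is our own illustration (it is not the theory T of E.1.2): the axiom is the string "A", and from any theorem x one may derive x + "B" or "C" + x. If the goal is a theorem, the breadth-first search finds a derivation; if it is not, a true enumerator would run forever, so we cut it off after a bound and answer "don't know" – precisely the gap that only a decision procedure could close.

```python
from collections import deque

def derivable(goal, max_steps=10_000):
    """Breadth-first enumeration of the theorems of a toy theory.

    Toy theory (purely illustrative): the axiom is "A"; from a theorem x
    one may derive x + "B" or "C" + x.  If `goal` is a theorem the search
    finds a derivation; if it is not, a real enumerator would run forever,
    so we give up after `max_steps` expansions and return None ("don't
    know") -- the gap that only a decision procedure could close.
    """
    seen = {"A"}
    queue = deque(["A"])
    steps = 0
    while queue and steps < max_steps:
        x = queue.popleft()
        steps += 1
        if x == goal:
            return True
        for y in (x + "B", "C" + x):
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return None

assert derivable("CAB") is True    # A, then CA, then CAB
assert derivable("BA") is None     # not derivable; the search just gives up
```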

F. Summary The idea of correct thinking is probably as old as thinking itself. With Aristotle there begins the process of explicit formulation of the rules, the patterns of reasoning, conformance to which would guarantee correctness. This idea of correctness has been gradually made precise and unambiguous, leading to the formulation of (the general schema for defining) symbolic languages, the rules of their manipulation and hence criteria of correct "reasoning". It is, however, far from obvious that the result indeed captures natural reasoning as performed by humans. The need for precision led to a complete separation of the reasoning aspect (syntactic manipulation) from its possible meaning. The completely uninterpreted nature of symbolic systems makes their relation to the real world highly problematic. Moreover, once one has arrived at the general schema of defining formal systems, no unique system has arisen as the right one, and their variety seems surpassed only by the range of possible application domains. The discussions about which rules actually represent human thinking can probably continue indefinitely. Nevertheless, this purely syntactic character of formal reasoning systems provided the basis for a precise definition of the old theme of logical investigations: mechanical computability. The question whether the human mind and thinking can be reduced to such mechanical computation and simulated by a computer is still discussed by philosophers and cognitive scientists. Also, much successful research is driven by the idea, if not the explicit goal, of obtaining such a reduction. The "negative" results, such as those quoted at the end of the last section, established by the human mind and demonstrating limitations of the power of logic and computers, suggest that human cognition may not be reducible to, and hence not simulated by, mechanical computation.
In particular, reduction to mechanical computation would imply that all human thinking could be expressed as applications of simple rules like those described in footnote 1, p. 21. Its possibility has not been disproved, but it certainly does not appear plausible. Yet, as computable functions correspond only to a small part of logic, even if this reduction turns out impossible, the question of the reduction of thinking to logic at large would still remain open. Most researchers do not seem to believe in such reductions, but this is not the place to speculate on their (im)possibility. Some elements of this broad discussion can be found in P. Thagard, Mind, as well as in most other books on cognitive science.

Bibliography General and introductory texts 1. D. GABBAY and F. GUENTHNER (eds.), Handbook of Philosophical Logic, 4 vol. (1983-89). 2. GERALD J. MASSEY, Understanding Symbolic Logic (1970). 3. THIERRY SCHEURER, Foundations of Computing, Addison-Wesley (1994).


4. V. SPERSCHNEIDER, G. ANTONIOU, Logic: A Foundation for Computer Science, Addison-Wesley (1991). 5. WILLARD V. QUINE, Methods of Logic, Harvard University Press, 1st ed. (1950, reprinted 1982). 6. JENS ERIK FENSTAD, DAG NORMANN, Algorithms and Logic, Matematisk Institutt, Universitetet i Oslo (1984). 7. GERALD J. MASSEY, Understanding Symbolic Logic (1970). 8. ELLIOTT MENDELSON, Introduction to Mathematical Logic, 3rd ed. (1987). 9. CHIN-LIANG CHANG, RICHARD CHAR-TUNG LEE, Symbolic Logic and Mechanical Theorem Proving, Academic Press Inc. (1973). 10. R.D. DOWSING, V.J. RAYWARD-SMITH, C.D. WALTER, A First Course in Formal Logic and its Applications in Computer Science, Alfred Waller Ltd., 2nd ed. (1994). 11. RAMIN YASID, Logic and Programming in Logic, Immediate Publishing (1997). 12. ROBERT E. BUTTS and JAAKKO HINTIKKA, Logic, Foundations of Mathematics, and Computability Theory (1977), a collection of conference papers. History of logic 1. WILLIAM KNEALE and MARTHA KNEALE, The Development of Logic (1962, reprinted 1984). 2. The Encyclopedia of Philosophy, ed. by PAUL EDWARDS, 8 vol. (1967). 3. New Catholic Encyclopedia, 18 vol. (1967-89). 4. I.M. BOCHEŃSKI, Ancient Formal Logic (1951, reprinted 1968). On Aristotle: 5. JAN ŁUKASIEWICZ, Aristotle's Syllogistic from the Standpoint of Modern Formal Logic, 2nd ed., enlarged (1957, reprinted 1987). 6. GÜNTHER PATZIG, Aristotle's Theory of the Syllogism (1968; originally published in German, 2nd ed., 1959). 7. OTTO A. BIRD, Syllogistic and Its Extensions (1964). 8. STORRS McCALL, Aristotle's Modal Syllogisms (1963). 9. I.M. BOCHEŃSKI, La Logique de Théophraste (1947, reprinted 1987). On Stoics: 10. BENSON MATES, Stoic Logic (1953, reprinted 1973). 11. MICHAEL FREDE, Die stoische Logik (1974). Medieval logic: 12. NORMAN KRETZMANN, ANTHONY KENNY, and JAN PINBORG (eds.), The Cambridge History of Later Medieval Philosophy: From the Rediscovery of Aristotle to the Disintegration of Scholasticism, 1100-1600 (1982). 13.
NORMAN KRETZMANN and ELEONORE STUMP (eds.), Logic and the Philosophy of Language (1988). 14. For Boethius, see: MARGARET GIBSON (ed.), Boethius, His Life, Thought, and Influence (1981). Arabic logic: 15. NICHOLAS RESCHER, The Development of Arabic Logic (1964). 16. L.M. DE RIJK, Logica Modernorum: A Contribution to the History of Early Terminist Logic, 2 vol. in 3 (1962-1967). 17. NORMAN KRETZMANN (ed.), Meaning and Inference in Medieval Philosophy (1988). Surveys of modern logic:


18. WILHELM RISSE, Die Logik der Neuzeit, 2 vol. (1964-70). 19. ROBERT ADAMSON, A Short History of Logic (1911, reprinted 1965). 20. C.I. LEWIS, A Survey of Symbolic Logic (1918, reissued 1960). 21. JØRGEN JØRGENSEN, A Treatise of Formal Logic: Its Evolution and Main Branches with Its Relations to Mathematics and Philosophy, 3 vol. (1931, reissued 1962). 22. ALONZO CHURCH, Introduction to Mathematical Logic (1956). 23. I.M. BOCHEŃSKI, A History of Formal Logic, 2nd ed. (1970; originally published in German, 1962). 24. HEINRICH SCHOLZ, Concise History of Logic (1961; originally published in German, 1959). 25. ALICE M. HILTON, Logic, Computing Machines, and Automation (1963); 26. N.I. STYAZHKIN, History of Mathematical Logic from Leibniz to Peano (1969; originally published in Russian, 1964); 27. CARL B. BOYER, A History of Mathematics, 2nd ed., rev. by UTA C. MERZBACH (1991); 28. E.M. BARTH, The Logic of the Articles in Traditional Philosophy: A Contribution to the Study of Conceptual Structures (1974; originally published in Dutch, 1971); 29. MARTIN GARDNER, Logic Machines and Diagrams, 2nd ed. (1982); and 30. E.J. ASHWORTH, Studies in Post-Medieval Semantics (1985). Developments in the science of logic in the 20th century are reflected mostly in periodical literature. 31. WARREN D. GOLDFARB, "Logic in the Twenties: The Nature of the Quantifier," The Journal of Symbolic Logic 44:351-368 (September 1979); 32. R.L. VAUGHT, "Model Theory Before 1945," and C.C. CHANG, "Model Theory 1945-1971," both in LEON HENKIN et al. (eds.), Proceedings of the Tarski Symposium (1974), pp. 153-172 and 173-186, respectively; and 33. IAN HACKING, "What is Logic?" The Journal of Philosophy 76:285-319 (June 1979). 34. Other journals devoted to the subject include History and Philosophy of Logic (biannual); Notre Dame Journal of Formal Logic (quarterly); and Modern Logic (quarterly). Formal logic 1.
MICHAEL DUMMETT, Elements of Intuitionism (1977), offers a clear presentation of the philosophic approach that demands constructibility in logical proofs. 2. G.E. HUGHES and M.J. CRESSWELL, An Introduction to Modal Logic (1968, reprinted 1989), treats operators acting on sentences in first-order logic (or predicate calculus) so that, instead of being interpreted as statements of fact, they become necessarily or possibly true or true at all or some times in the past, or they denote obligatory or permissible actions, and so on. 3. JON BARWISE et al. (eds.), Handbook of Mathematical Logic (1977), provides a technical survey of work in the foundations of mathematics (set theory) and in proof theory (theories with infinitely long expressions). 4. ELLIOTT MENDELSON, Introduction to Mathematical Logic, 3rd ed. (1987), is the standard text; and 5. G. KREISEL and J.L. KRIVINE, Elements of Mathematical Logic: Model Theory (1967, reprinted 1971; originally published in French, 1967), covers all standard topics at an advanced level. 6. A.S. TROELSTRA, Choice Sequences: A Chapter of Intuitionistic Mathematics (1977), offers an advanced analysis of the philosophical position regarding what are legitimate proofs and logical truths; and 7. A.S. TROELSTRA and D. VAN DALEN, Constructivism in Mathematics, 2 vol. (1988), applies intuitionist strictures to the problem of the foundations of mathematics.


Metalogic 1. JON BARWISE and S. FEFERMAN (eds.), Model-Theoretic Logics (1985), emphasizes semantics of models. 2. J.L. BELL and A.B. SLOMSON, Models and Ultraproducts: An Introduction, 3rd rev. ed. (1974), explores technical semantics. 3. RICHARD MONTAGUE, Formal Philosophy: Selected Papers of Richard Montague, ed. by RICHMOND H. THOMASON (1974), uses modern logic to deal with the semantics of natural languages. 4. MARTIN DAVIS, Computability and Unsolvability (1958, reprinted with a new preface and appendix, 1982), is an early classic on important work arising from Gödel's theorem, and the same author's The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems, and Computable Functions (1965), is a collection of seminal papers on issues of computability. 5. ROLF HERKEN (ed.), The Universal Turing Machine: A Half-Century Survey (1988), takes a look at where Gödel's theorem on undecidable sentences has led researchers. 6. HANS HERMES, Enumerability, Decidability, Computability, 2nd rev. ed. (1969, originally published in German, 1961), offers an excellent mathematical introduction to the theory of computability and Turing machines. 7. A classic treatment of computability is presented in HARTLEY ROGERS, JR., Theory of Recursive Functions and Effective Computability (1967, reissued 1987). 8. M.E. SZABO, Algebra of Proofs (1978), is an advanced treatment of syntactical proof theory. 9. P.T. JOHNSTONE, Topos Theory (1977), explores the theory of structures that can serve as interpretations of various theories stated in predicate calculus. 10. H.J. KEISLER, "Logic with the Quantifier 'There Exist Uncountably Many'," Annals of Mathematical Logic 1:1-93 (January 1970), reports on a seminal investigation that opened the way for Barwise (1977), cited earlier, and 11.
CAROL RUTH KARP, Language with Expressions of Infinite Length (1964), which expands the syntax of the language of predicate calculus so that expressions of infinite length can be constructed. 12. C.C. CHANG and H.J. KEISLER, Model Theory, 3rd rev. ed. (1990), is the single most important text on semantics. 13. F.W. LAWVERE, C. MAURER, and G.C. WRAITH (eds.), Model Theory and Topoi (1975), is an advanced, mathematically sophisticated treatment of the semantics of theories expressed in predicate calculus with identity. 14. MICHAEL MAKKAI and GONZALO REYES, First Order Categorical Logic: ModelTheoretical Methods in the Theory of Topoi and Related Categories (1977), analyzes the semantics of theories expressed in predicate calculus. 15. SAHARON SHELAH, ”Stability, the F.C.P., and Superstability: Model-Theoretic Properties of Formulas in First Order Theory,” Annals of Mathematical Logic 3:271-362 (October 1971), explores advanced semantics. Applied logic 1. Applications of logic in unexpected areas of philosophy are studied in EVANDRO AGAZZI (ed.), Modern Logic–A Survey: Historical, Philosophical, and Mathematical Aspects of Modern Logic and Its Applications (1981). 2. WILLIAM L. HARPER, ROBERT STALNAKER, and GLENN PEARCE (eds.), IFs: Conditionals, Belief, Decision, Chance, and Time (1981), surveys hypothetical reasoning and inductive reasoning. 3. On the applied logic in philosophy of language, see EDWARD L. KEENAN (ed.), Formal Semantics of Natural Language (1975);


4. JOHAN VAN BENTHEM, Language in Action: Categories, Lambdas, and Dynamic Logic (1991), also discussing the temporal stages in the working out of computer programs, and the same author's Essays in Logical Semantics (1986), emphasizing grammars of natural languages. 5. DAVID HAREL, First-Order Dynamic Logic (1979); and J.W. LLOYD, Foundations of Logic Programming, 2nd extended ed. (1987), study the logic of computer programming. 6. Important topics in artificial intelligence, or computer reasoning, are studied in PETER GÄRDENFORS, Knowledge in Flux: Modeling the Dynamics of Epistemic States (1988), including the problem of changing one's premises during the course of an argument. 7. For more on nonmonotonic logic, see JOHN McCARTHY, "Circumscription: A Form of Non-Monotonic Reasoning," Artificial Intelligence 13(1-2):27-39 (April 1980); 8. DREW McDERMOTT and JON DOYLE, "Non-Monotonic Logic I," Artificial Intelligence 13(1-2):41-72 (April 1980); 9. DREW McDERMOTT, "Nonmonotonic Logic II: Nonmonotonic Modal Theories," Journal of the Association for Computing Machinery 29(1):33-57 (January 1982); and 10. YOAV SHOHAM, Reasoning About Change: Time and Causation from the Standpoint of Artificial Intelligence (1988).


Basic Set Theory

Chapter 1
Sets, Functions, Relations
• Sets and Functions
  – Set building operations
  – Some equational laws
• Relations and Sets with Structures
  – Properties of relations
  – Ordering relations
• Infinities
  – Countability vs. uncountability

1: Sets and Functions a Background Story ♦ ♦ A set is an arbitrary collection of arbitrary objects, called its members. One should take these two occurrences of "arbitrary" seriously. Firstly, sets may be finite, e.g., the set C of cars on the parking lot outside the building, or infinite, e.g., the set N of numbers greater than 5. Secondly, any objects can be members of sets. We can talk about sets of cars, blood cells, numbers, Roman emperors, etc. We can also talk about the set X whose elements are: my car, your mother and the number 6. (Not that such a set is necessarily useful for any purpose, but it is possible to collect these various elements into one set.) In particular, sets themselves can be members of other sets. I can, for instance, form the set whose elements are: my favorite pen, my four best friends and the set N. This set will have 6 elements, even though the set N itself is infinite. A set with only one element is called a singleton, e.g., the set containing only the planet Earth. There is one special and very important set – the empty set – which has no members. If it seems startling, you may think of the set of all square circles, or of all numbers x such that x < x. This set is mainly a mathematical convenience – defining a set by describing the properties of its members in an involved way, we may not know from the very beginning what its members are. Eventually, we may find that no such objects exist, that is, that we defined an empty set. It also makes many formulations simpler since, without the assumption of its existence, one would often have to take special precautions for the case a set happened to contain no elements. It may be legitimate to speak about a set even if we do not know exactly its members. The set of people born in 1964 may be hard to determine exactly, but it is a well defined object because, at least in principle, we can determine the membership of any object in this set.
Similarly, we will say that the set R of red objects is well defined even if we certainly do not know all its members. But confronted with a new object, we can determine whether it belongs to R or not (assuming that we do not dispute the meaning of the word "red"). There are four basic means of specifying a set. 1. If a set is finite and small, we may list all its elements, e.g., S = {1, 2, 3, 4} is a set with four elements. 2. A set can be specified by determining a property which makes objects qualify as its elements. The set R of red objects is specified in this way. The set S can be described as 'the set of natural numbers greater than 0 and less than 5'.


3. A set may be obtained from other sets. For instance, given the set S and the set S′ = {3, 4, 5, 6} we can form a new set S″ = {3, 4} which is the intersection of S and S′. Given the sets of odd numbers {1, 3, 5, 7, 9, ...} and even numbers {0, 2, 4, 6, 8, ...} we can form a new set N by taking their union. 4. Finally, a set can be determined by describing the rules by which its elements may be generated. For instance, the set N of natural numbers {0, 1, 2, 3, 4, ...} can be described as follows: 0 belongs to N; if n belongs to N, then also n + 1 belongs to N; and, finally, nothing else belongs to N. In this chapter we will use mainly the first three ways of describing sets. In particular, we will use various set building operations as in point 3. In the later chapters, we will constantly encounter sets described by the last method. One important point is that the properties of a set are entirely independent of the way the set is described. Whether we just say 'the set of natural numbers' or define the set N as in point 3. or 4., we get the same set. Another thing is that studying and proving properties of a set may be easier when the set is described in one way rather than another. ♦

♦
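The four means of specifying a set translate directly into a modern programming language. The following Python sketch (the names are ours) mirrors points 1.-4.; the generation rule of point 4. is necessarily cut off at a finite bound, since a computer can hold only a finite fragment of an infinite set.

```python
# 1. Listing the elements (finite and small):
S = {1, 2, 3, 4}

# 2. By a property of the elements (a comprehension):
S_by_property = {n for n in range(100) if 0 < n < 5}

# 3. From other sets, by set building operations:
S2 = {3, 4, 5, 6}
S_intersection = S & S2                  # the intersection {3, 4}
odds = set(range(1, 10, 2))
evens = set(range(0, 10, 2))
N_fragment = odds | evens                # the union {0, 1, ..., 9}

# 4. By generation rules: 0 is in N, and n + 1 is in N whenever n is;
#    the rule is applied repeatedly up to a bound.
def generate_N(bound):
    N = {0}
    while True:
        new = {n + 1 for n in N if n + 1 < bound} - N
        if not new:
            return N
        N |= new

assert S == S_by_property                # the same set, described differently
assert S_intersection == {3, 4}
assert generate_N(10) == N_fragment
```

Note that the assertions confirm the point made above: the properties of a set are independent of the way the set is described.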

Definition 1.1 Given some sets S and T we write:
x ∈ S – x is a member (element) of S
S ⊆ T – S is a subset of T : for all x, if x ∈ S then x ∈ T
S ⊂ T – S ⊆ T and S ≠ T : for all x, if x ∈ S then x ∈ T, and there is an x such that x ∈ T and x ∉ S
Set building operations:
∅ – the empty set: for any x, x ∉ ∅
S ∪ T – the union of S and T : x ∈ S ∪ T iff x ∈ S or x ∈ T
S ∩ T – the intersection of S and T : x ∈ S ∩ T iff x ∈ S and x ∈ T
S \ T – the difference of S and T : x ∈ S \ T iff x ∈ S and x ∉ T
S′ – the complement of S; assuming a universe U of all elements, S′ = U \ S
S × T – the Cartesian product of S and T : x ∈ S × T iff x = ⟨s, t⟩ where s ∈ S and t ∈ T
℘(S) – the power set of S : x ∈ ℘(S) iff x ⊆ S
{x ∈ S : Prop(x)} – the set of those x ∈ S which have the specified property Prop
Remark. Notice that sets may be members of other sets. For instance, {∅} is the set with one element – which is the empty set ∅. In fact, {∅} = ℘(∅). It is different from the set ∅ which has no elements. {{a, b}, a} is a set with two elements: a and the set {a, b}. Also {a, {a}} has two different elements: a and {a}. In particular, the power set will contain many sets as elements: ℘({a, {a, b}}) = {∅, {a}, {{a, b}}, {a, {a, b}}}. In the definition of the Cartesian product, we used the notation ⟨s, t⟩ to denote an ordered pair whose first element is s and second t. In set theory, all possible objects are modelled as sets. An ordered pair ⟨s, t⟩ is then represented as the set with two elements – both being sets – {{s}, {s, t}}. Why not {{s}, {t}} or, even simpler, {s, t}? Because elements of a set are not ordered. Thus {s, t} and {t, s} denote the same set. Also, {{s}, {t}} and {{t}, {s}} denote the same set (but different from the set {s, t}). In ordered pairs, on the other hand, the order does matter – ⟨s, t⟩ and ⟨t, s⟩ are different pairs.
This ordering is captured by the representation {{s}, {s, t}}. We have here a set with two elements {A, B}, where A = {s} and B = {s, t}. The relationship between these two elements tells us which is the first and which the second: A ⊂ B identifies the member of A as the first element of the pair, and then the element of B \ A as the second one. Thus ⟨s, t⟩ = {{s}, {s, t}} ≠ {{t}, {s, t}} = ⟨t, s⟩. □
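The encoding of ordered pairs can be tried out directly. In the Python sketch below (the function names are ours), frozenset plays the role of a set that can itself be a member of another set:

```python
def pair(s, t):
    """The Kuratowski encoding of the ordered pair <s, t> as {{s}, {s, t}}.

    frozenset is used because only hashable (immutable) sets can be
    members of other sets in Python.
    """
    return frozenset({frozenset({s}), frozenset({s, t})})

def first(p):
    """Recover the first component: the sole member of the smallest element."""
    (a,) = min(p, key=len)
    return a

# Order matters for the encoding, while plain sets are unordered:
assert pair(1, 2) != pair(2, 1)
assert {1, 2} == {2, 1}
# The degenerate pair <s, s> collapses to {{s}} and is still decodable:
assert pair(3, 3) == frozenset({frozenset({3})})
assert first(pair(1, 2)) == 1 and first(pair(3, 3)) == 3
```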

30

Basic Set Theory

The set operations ∪, ∩, ′ and \ obey some well known laws:
1. Idempotency: A ∪ A = A; A ∩ A = A
2. Associativity: (A ∪ B) ∪ C = A ∪ (B ∪ C); (A ∩ B) ∩ C = A ∩ (B ∩ C)
3. Commutativity: A ∪ B = B ∪ A; A ∩ B = B ∩ A
4. Distributivity: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C); A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
5. DeMorgan: (A ∪ B)′ = A′ ∩ B′; (A ∩ B)′ = A′ ∪ B′
6. Complement: A ∩ A′ = ∅; A \ B = A ∩ B′
7. Emptyset: ∅ ∪ A = A; ∅ ∩ A = ∅
8. Consistency principles: a) A ⊆ B iff A ∪ B = B; b) A ⊆ B iff A ∩ B = A
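Such laws can, of course, be checked mechanically for any fixed finite universe. The following Python sketch (our own check, not a proof of the laws) tests DeMorgan, Complement and the Consistency principles over all pairs of subsets of a six-element universe:

```python
from itertools import chain, combinations

U = set(range(6))                      # a small universe

def subsets(X):
    """All subsets of X, via combinations of every size."""
    xs = list(X)
    return [set(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def comp(A):                           # the complement A' relative to U
    return U - A

for A in subsets(U):
    for B in subsets(U):
        # 5. DeMorgan
        assert comp(A | B) == comp(A) & comp(B)
        assert comp(A & B) == comp(A) | comp(B)
        # 6. Complement: A \ B = A ∩ B'
        assert A - B == A & comp(B)
        # 8. Consistency principles
        assert (A <= B) == (A | B == B) == (A & B == A)
```

An exhaustive check over one universe is of course weaker than the equational derivations discussed next, but it catches any mis-stated law instantly.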

Remark 1.2 [Venn's diagrams] It is very common to represent sets and set relations by means of Venn's diagrams – overlapping figures, typically circles or rectangles. On the left in the figure below, we have two sets A and B in some universe U. Their intersection A ∩ B is marked as the area covered by both vertical and horizontal lines. If we take A to represent Armenians and B bachelors, the darkest region in the middle represents Armenian bachelors. The region covered by only vertical, but not horizontal, lines is the set difference A \ B – Armenians who are not bachelors. The whole region covered by either vertical or horizontal lines represents all those who are either Armenians or bachelors.

[Figure: two Venn diagrams of the sets A and B in the universe U, illustrating (A ∪ B)′ = A′ ∩ B′.]

Now, the white region is the complement of the set A ∪ B (in the universe U) – all those who are neither Armenians nor bachelors. The diagram to the right is essentially the same but was constructed in a different way. Here, the region covered with vertical lines is the complement of A – all non-Armenians. The region covered with horizontal lines represents all non-bachelors. The region covered with both horizontal and vertical lines is the intersection of these two complements – all those who are neither Armenians nor bachelors. The two diagrams illustrate the first DeMorgan law, since the white area on the left, (A ∪ B)′, is exactly the same as the area covered with both horizontal and vertical lines on the right, A′ ∩ B′. □

Venn's diagrams may be a handy tool to visualize simple set operations. However, the equalities above can also be seen as a (not yet quite, but almost) formal system allowing one to derive various other set equalities. The rule for performing such derivations is 'substitution of equals for equals', known also from elementary arithmetic. For instance, the fact that, for an arbitrary set A, A ⊆ A amounts to a single application of rule 8.a): A ⊆ A iff A ∪ A = A, where the last equality holds by 1. A bit longer derivation shows that (A ∪ B) ∪ C = (C ∪ A) ∪ B :

(A ∪ B) ∪ C = C ∪ (A ∪ B) = (C ∪ A) ∪ B,

where the first equality holds by commutativity (3.) and the second by associativity (2.). In exercises we will encounter more elaborate examples. In addition to the set building operations from the above definition, one often encounters also the disjoint union of sets A and B, written A ⊎ B and defined as A ⊎ B = (A × {0}) ∪ (B × {1}). The idea is to use 0, resp. 1, as indices to distinguish the elements originating from A and from B. If A ∩ B = ∅, this would not be necessary, but otherwise the "disjointness" of this union requires that the common elements be duplicated. E.g., for A = {a, b, c} and B = {b, c, d}, we have A ∪ B = {a, b, c, d} while A ⊎ B = {⟨a, 0⟩, ⟨b, 0⟩, ⟨c, 0⟩, ⟨b, 1⟩, ⟨c, 1⟩, ⟨d, 1⟩}, which can be thought of as {a₀, b₀, c₀, b₁, c₁, d₁}.
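The tagging construction for the disjoint union is easy to reproduce; in the sketch below (the function name is ours) pairs play the role of indexed elements:

```python
def disjoint_union(A, B):
    """A ⊎ B = (A x {0}) ∪ (B x {1}): tag every element by its origin."""
    return {(a, 0) for a in A} | {(b, 1) for b in B}

A, B = {"a", "b", "c"}, {"b", "c", "d"}
assert len(A | B) == 4                   # the common b, c are merged...
assert len(disjoint_union(A, B)) == 6    # ...but duplicated in A ⊎ B
```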


Definition 1.3 Given two sets S and T, a function f from S to T, f : S → T, is a subset of S × T such that • whenever ⟨s, t⟩ ∈ f and ⟨s, t′⟩ ∈ f, then t = t′, and • for each s ∈ S there is some t ∈ T such that ⟨s, t⟩ ∈ f. A subset of S × T that satisfies the first condition above, but not necessarily the second, is called a partial function from S to T. For a function f : S → T, the set S is called the source or domain of the function, and the set T its target or codomain. The second point of this definition means that a function is total – for each argument (element s ∈ S), the function has some value, i.e., an element t ∈ T such that ⟨s, t⟩ ∈ f. Sometimes this requirement is dropped and one speaks about partial functions which may have no value for some arguments, but we will be for the most part concerned with total functions. Example 1.4 Let N denote the set of natural numbers {0, 1, 2, 3, ...}. The mapping f : N → N defined by f(n) = 2n is a function. It is the set of all pairs f = {⟨n, 2n⟩ : n ∈ N}. If we let M denote the set of all people, then the set of all pairs father = {⟨m, m's father⟩ : m ∈ M} is a function assigning to each person his/her father. A mapping 'children', assigning to each person his/her children, is not a function M → M for two reasons. First, a person may have no children, while saying "function" we mean a total function. Second, a person may have more than one child. These problems may be overcome if we consider it instead as a function M → ℘(M), assigning to each person the set (possibly empty) of all his/her children. □
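For finite sets, Definition 1.3 can be checked mechanically: a subset of S × T is a function iff it is single-valued and total. A Python sketch (the names are ours), representing a function literally as a set of pairs:

```python
def is_function(f, S, T):
    """Definition 1.3 for finite sets: f ⊆ S x T, single-valued, total on S."""
    graph_ok = all(s in S and t in T for (s, t) in f)
    single_valued = all(t == t2
                        for (s, t) in f for (s2, t2) in f if s == s2)
    total = all(any(s == s2 for (s2, _) in f) for s in S)
    return graph_ok and single_valued and total

N10 = set(range(10))
doubling = {(n, 2 * n) for n in range(5)}            # f(n) = 2n on {0,...,4}
assert is_function(doubling, set(range(5)), N10)
assert not is_function(doubling, N10, N10)           # not total on {0,...,9}
assert not is_function({(0, 1), (0, 2)}, {0}, N10)   # not single-valued
```

Dropping the `total` condition in the last line of the conjunction yields exactly the notion of a partial function.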

Notice that although intuitively we think of a function as a mapping assigning to each argument some value, the definition states that it is actually a set (a subset of S × T is a set). The restrictions put on this set are exactly what makes it possible to think of this set as a mapping. Nevertheless, functions – being sets – can be elements of other sets. We may encounter situations involving sets of functions, e.g., the set T^S of all functions from the set S to the set T, which is just the set of all those subsets of S × T satisfying the conditions of Definition 1.3. Remark 1.5 [Notation] A function f associates with each element s ∈ S a unique element t ∈ T. We write this t as f(s) – the value of f at point s. When S is finite (and small) we may sometimes write a function as a set {⟨s₁, t₁⟩, ⟨s₂, t₂⟩, ..., ⟨sₙ, tₙ⟩} or else as {s₁ ↦ t₁, s₂ ↦ t₂, ..., sₙ ↦ tₙ}. If f is given, then by f[s ↦ p] we denote the function f′ which is the same as f for all arguments x ≠ s, that is f′(x) = f(x), while f′(s) = p. □

Definition 1.6 A function f : S → T is injective iff whenever f(s) = f(s′) then s = s′; surjective iff for all t ∈ T there exists an s ∈ S such that f(s) = t; bijective, or a set-isomorphism, iff it is both injective and surjective. Injectivity means that no two distinct elements of the source set are mapped to the same element of the target set; surjectivity that each element of the target is an image of some element of the source. Example 1.7 The function father : M → M is injective – everybody has exactly one (biological) father – but it is not surjective – not everybody is a father of somebody. The following drawing gives some examples:

[Figure: two diagrams of functions h and f from a source S = {s₁, s₂, s₃} to a target T = {t₁, t₂, t₃, t₄}, illustrating failures of injectivity and surjectivity.]
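For finite sets, the properties of Definition 1.6 are directly testable. In the Python sketch below (the names are ours), a function is represented as a dictionary:

```python
def injective(f, S):
    """No two distinct arguments share a value."""
    return all(a == b for a in S for b in S if f[a] == f[b])

def surjective(f, S, T):
    """Every element of the target is the image of some argument."""
    return all(any(f[s] == t for s in S) for t in T)

def bijective(f, S, T):
    return injective(f, S) and surjective(f, S, T)

S, T = {1, 2, 3}, {"x", "y", "z"}
f = {1: "x", 2: "y", 3: "z"}
g = {1: "x", 2: "x", 3: "y"}                # g(1) = g(2), and "z" is missed
assert bijective(f, S, T)
assert not injective(g, S) and not surjective(g, S, T)
```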

From certain assumptions ("axioms") about sets it can be proven that the relation ≤ on cardinalities has the properties of a weak TO, i.e., it is reflexive (obvious), transitive (fairly obvious), antisymmetric (not so obvious) and total (less obvious). . . . . . . . . . . . . . . . . . . . . . . . . [optional] As an example of how intricate reasoning may be needed to establish such "not quite, but almost obvious" facts, we show that ≤ is antisymmetric. Theorem 1.23 [Schröder-Bernstein] For arbitrary sets X, Y, if there are injections i : X → Y and j : Y → X, then there is a bijection f : X → Y (i.e., if |X| ≤ |Y| and |Y| ≤ |X| then |X| = |Y|).


Proof If the injection i : X → Y is surjective, i.e., i(X) = Y, then i is a bijection and we are done. Otherwise, we have Y0 = Y \ i(X) ≠ ∅ and we apply j and i repeatedly as follows:

Y0 = Y \ i(X)        X0 = j(Y0)
Yn+1 = i(Xn)         Xn+1 = j(Yn+1)
Y∗ = ⋃n=0..ω Yn      X∗ = ⋃n=0..ω Xn

[Figure: a chain diagram – j maps each Yn down into Xn, while i maps X and each Xn up into Yn+1.]

Thus we can divide both sets into disjoint components:

X = (X \ X∗) ∪ X∗   and   Y = (Y \ Y∗) ∪ Y∗,

with i leading from X \ X∗ into Y \ Y∗, and j from Y∗ into X∗. We show that the respective restrictions of j and i are bijections. First, j : Y∗ → X∗ is a bijection (it is injective, and the following equation shows that it is surjective):

j(Y∗) = j(⋃n=0..ω Yn) = ⋃n=0..ω j(Yn) = ⋃n=0..ω Xn = X∗.

By lemma 1.8, j⁻ : X∗ → Y∗, defined by j⁻(x) = y where j(y) = x, is a bijection too. Furthermore:

(F.i)  i(X∗) = i(⋃n=0..ω Xn) = ⋃n=0..ω i(Xn) = ⋃n=0..ω Yn+1 = ⋃n=1..ω Yn = Y∗ \ Y0.

Now, the first of the following equalities holds since i is injective, the second by (F.i) and since i(X) = Y \ Y0 (definition of Y0), and the last since Y0 ⊆ Y∗:

i(X \ X∗) = i(X) \ i(X∗) = (Y \ Y0) \ (Y∗ \ Y0) = Y \ Y∗,

i.e., i : (X \ X∗) → (Y \ Y∗) is a bijection. We obtain a bijection f : X → Y defined by

f(x) = i(x) if x ∈ X \ X∗, and f(x) = j⁻(x) if x ∈ X∗.
QED (1.23)

A more abstract proof. The construction of the sets X∗ and Y∗ in the above proof can be subsumed under a more abstract formulation implied by Claims 1. and 2. below. In particular, Claim 1. has a very general form.
Claim 1. For any set X, if h : ℘(X) → ℘(X) is monotonic, i.e., such that whenever A ⊆ B ⊆ X then h(A) ⊆ h(B), then there is a set T ⊆ X with h(T) = T. We show that T = ⋃{A ⊆ X : A ⊆ h(A)} is such a set.
a) T ⊆ h(T) : for each t ∈ T there is an A : t ∈ A ⊆ T and A ⊆ h(A). But then A ⊆ T implies h(A) ⊆ h(T), and so t ∈ h(T).
b) h(T) ⊆ T : from a) T ⊆ h(T), so h(T) ⊆ h(h(T)), which means that h(T) ⊆ T by definition of T.
Claim 2. Given injections i, j, define ∗ : ℘(X) → ℘(X) by A∗ = X \ j(Y \ i(A)). If A ⊆ B ⊆ X then A∗ ⊆ B∗. This follows trivially from injectivity of i and j: A ⊆ B, so i(A) ⊆ i(B), so Y \ i(A) ⊇ Y \ i(B), so j(Y \ i(A)) ⊇ j(Y \ i(B)), and hence X \ j(Y \ i(A)) ⊆ X \ j(Y \ i(B)).
3. Claims 1. and 2. imply that there is a T ⊆ X such that T∗ = T, i.e., T = X \ j(Y \ i(T)). Then f : X → Y defined by f(x) = i(x) if x ∈ T and f(x) = j⁻¹(x) if x ∉ T is a bijection. We have X = j(Y \ i(T)) ∪ T and Y = (Y \ i(T)) ∪ i(T), and obviously j⁻¹ is a bijection between j(Y \ i(T)) and Y \ i(T), while i is a bijection between T and i(T). □

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . [end optional]
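Claim 1. is a fixed-point theorem for monotonic operations on ℘(X), and for a small finite X the set T = ⋃{A ⊆ X : A ⊆ h(A)} can be computed outright. The Python sketch below (the operation h is our own example, not one taken from the proof) verifies that h(T) = T:

```python
from itertools import chain, combinations

def subsets(X):
    """All subsets of a finite X, as frozensets."""
    xs = list(X)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def fixed_point(h, X):
    """T = union of all A ⊆ X with A ⊆ h(A); by Claim 1, h(T) = T."""
    T = set()
    for A in subsets(X):
        if A <= h(A):
            T |= A
    return T

X = set(range(6))
h = lambda A: set(A) & {0, 1, 2}    # monotonic: A ⊆ B implies h(A) ⊆ h(B)
T = fixed_point(h, X)
assert T == {0, 1, 2} and h(T) == T
```

For this h, the sets A with A ⊆ h(A) are exactly the subsets of {0, 1, 2}, so their union {0, 1, 2} is indeed a fixed point.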


Each finite set has a cardinality which is a natural number. The apparently empty Definition 1.20 becomes more significant when we look at infinite sets.

Definition 1.24 A set S is infinite iff there exists a proper subset T ⊂ S such that S ≅ T.

Example 1.25 For simplicity, let us consider the set of natural numbers, N. We denote its cardinality |N| = ℵ0. (Sometimes it is also written ω, although axiomatic set theory distinguishes between the cardinal number ℵ0 and the ordinal number ω. The ordinal number is a more fine-grained notion than the cardinal number, but we shall not worry about this.) We have, for instance, that |N| = |N \ {0}| by the simple bijection n ↦ n + 1:

    0   1   2   3   ...
    ↕   ↕   ↕   ↕
    1   2   3   4   ...

In fact, the cardinality of N is the same as the cardinality of the even natural numbers! It is easy to see that the pair of functions f(n) = 2n and f⁻¹(2n) = n is a bijection:

    0   1   2   3   4   5   ...
    ↕   ↕   ↕   ↕   ↕   ↕
    0   2   4   6   8   10  ...

In fact, when |S| = |T| = ℵ0 and |P| = n < ℵ0, we have

1. |S ∪ T| = ℵ0
2. |S \ P| = ℵ0
3. |S × T| = ℵ0

We illustrate a possible set-isomorphism N ≅ N × N below:

[diagram: the pairs xy ∈ N × N arranged in a grid – 00, 01, 02, 03, ... in one direction and 10, 20, 30, 40, 50, ... in the other – with arrows enumerating all pairs by zigzagging back and forth along the successive diagonals x + y = 0, 1, 2, ..., so that every pair is reached after finitely many steps] □
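One standard way to realize such an enumeration concretely is Cantor's pairing function, which numbers the pairs diagonal by diagonal (running through each diagonal in a fixed direction rather than zigzagging, but the principle is the same). A Python sketch:

```python
def pair(x, y):
    # Position of (x, y): all pairs on the earlier diagonals
    # x + y = 0, ..., d - 1 come first (there are d(d+1)/2 of them),
    # and (x, y) is the y-th pair on its own diagonal d = x + y.
    d = x + y
    return d * (d + 1) // 2 + y

def unpair(n):
    # Inverse: find the diagonal d of the n-th pair, then its offset.
    d = 0
    while (d + 1) * (d + 2) // 2 <= n:
        d += 1
    y = n - d * (d + 1) // 2
    return d - y, y
```

Since pair and unpair are mutually inverse, together they witness the set-isomorphism N ≅ N × N.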

A set-isomorphism S ≅ N amounts to an enumeration of the elements of S. Thus, if |S| ≤ ℵ0 we say that S is enumerable or countable; in case of equality, we say that it is countably infinite. Now, the question "are there any uncountable sets?" was answered by the founder of modern set theory, Georg Cantor.

Theorem 1.26 For any set A : |A| < |℘(A)|.

Proof. The construction applied here shows that the contrary assumption – A ≅ ℘(A) – leads to a contradiction. Obviously,

(F.ii)  |A| ≤ |℘(A)|

since f(a) = {a} defines an injective function f : A → ℘(A). So, assume that equality holds in (F.ii), i.e., that there is a corresponding F : A → ℘(A) which is both injective and surjective. Define the subset B = {a ∈ A : a ∉ F(a)} of A. Since B ⊆ A, we have B ∈ ℘(A) and, since F is surjective, there is a b ∈ A such that F(b) = B. Is b in B or not? Each of the two possible answers yields a contradiction:

39

I.1. Sets, Functions, Relations

1. b ∈ F(b) means b ∈ {a ∈ A : a ∉ F(a)}, which means b ∉ F(b);
2. b ∉ F(b) means b ∉ {a ∈ A : a ∉ F(a)}, which means b ∈ F(b).

QED (1.26)
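The diagonal set B from the proof is easy to compute for any concrete (finite) attempted surjection F; a Python sketch, with an F of our own choosing:

```python
def diagonal_witness(F):
    # B = {a in A : a not in F(a)}. If F(b) = B for some b, then
    # b in B <=> b not in F(b) = B -- a contradiction -- so B is
    # never in the image of F.
    return frozenset(a for a in F if a not in F[a])

F = {0: frozenset({0}), 1: frozenset(), 2: frozenset({0, 2})}
B = diagonal_witness(F)        # here B = {1}
assert all(F[a] != B for a in F)
```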

Corollary 1.27 For each cardinal number λ there is a cardinal number κ > λ. In particular, ℵ0 = |N| < |℘(N)| < |℘(℘(N))| < ...

Theorem 1.26 proves that there exist uncountable sets, but are they of any interest? Another theorem of Cantor shows that such sets have been around in mathematics for quite a while.

Theorem 1.28 The set R of real numbers is uncountable.

Proof. Since N ⊂ R, we know that |N| ≤ |R|. The diagonalisation technique, introduced here by Cantor, reduces the assumption that |N| = |R| ad absurdum. If R ≅ N then, certainly, we can enumerate any subset of R. Consider only the closed interval [0, 1] ⊂ R. If it is countable, we can list all its members, writing them in decimal expansion (each rij is a digit):

n1 = 0. r11 r12 r13 r14 r15 r16 ...
n2 = 0. r21 r22 r23 r24 r25 r26 ...
n3 = 0. r31 r32 r33 r34 r35 r36 ...
n4 = 0. r41 r42 r43 r44 r45 r46 ...
n5 = 0. r51 r52 r53 r54 r55 r56 ...
n6 = 0. r61 r62 r63 r64 r65 r66 ...
.
.
Form a new real number r by replacing each rii (marked in bold) with another digit; for instance, let r = 0.r1 r2 r3 r4 ..., where

ri = rii + 1 if rii < 9,   and   ri = 0 if rii = 9.

Then r cannot be any of the enumerated numbers n1, n2, n3, ...: each number ni in this list has a digit rii at its i-th position which is different from the digit ri at the i-th position of r. QED (1.28)
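The same digit-twiddling can be run on any concrete listing of decimal expansions; a Python sketch (the listing below is an arbitrary illustration):

```python
def diagonal(listing):
    # listing[i][j] is the j-th decimal digit of the i-th listed real.
    # Apply the rule from the proof: bump r_ii by one, wrapping 9 to 0,
    # so the result differs from real i at digit i.
    return [listing[i][i] + 1 if listing[i][i] < 9 else 0
            for i in range(len(listing))]

listing = [[1, 4, 1, 5],
           [7, 0, 7, 1],
           [3, 3, 3, 3],
           [9, 9, 9, 9]]
r = diagonal(listing)          # r = [2, 1, 4, 0]
assert all(r[i] != listing[i][i] for i in range(4))
```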

“Sets” which are not Sets

In Definition 1.1 we introduced several set-building operations. The power set operation ℘( ) has proven particularly powerful. However, the most peculiar one is the comprehension operation, namely, the one allowing us to form a set of elements satisfying some property: {x : Prop(x)}. Although apparently very natural, its unrestricted use leads to severe problems.

Russell’s Paradox Define U as the set {x : x ∉ x}. Now the question is: Is U ∈ U? Each possible answer leads to absurdity:
1. U ∈ U means that U is one of the x’s in U, i.e., U ∈ {x : x ∉ x}, so U ∉ U;
2. U ∉ U means that U ∈ {x : x ∉ x}, so U ∈ U. □

The problem arises because in the definition of U we did not specify what kind of x’s we are gathering. Among the many solutions to this paradox, the most commonly accepted one is to exclude such definitions by requiring that the x’s which are to satisfy a given property, when collected into a new set, must already belong to some other set. This is the formulation we used in Definition 1.1, where we said that if S is a set then {x ∈ S : Prop(x)} is a set too. The “definition” of U = {x : x ∉ x} does not conform to this format and hence is not considered a valid description of a set.


Exercises (week 1)

exercise 1.1+ Given the following sets:

S1 = {{∅}, A, {A}}     S6 = ∅
S2 = A                 S7 = {∅}
S3 = {A}               S8 = {{∅}}
S4 = {{A}}             S9 = {∅, {∅}}
S5 = {A, {A}}

Which of the sets S1–S9
1. are members of S1?
2. are subsets of S1?
3. are members of S9?
4. are subsets of S9?
5. are members of S4?
6. are subsets of S4?

exercise 1.2+ Let A = {a, b, c}, B = {c, d}, C = {d, e, f}.
1. Write the sets: A ∪ B, A ∩ B, A ∪ (B ∩ C).
2. Is a a member of {A, B}? Of A ∪ B?
3. Write the sets A × B and B × A.

exercise 1.3 Using the set-theoretic equalities (page 30), show that:
1. A ∩ (B \ A) = ∅
2. ((A ∪ C) ∩ (B ∪ C′)) ⊆ (A ∪ B)
(Show first some lemmas:
• A ∩ B ⊆ A
• if A ⊆ B then A ⊆ B ∪ C
• if A1 ⊆ X and A2 ⊆ X then A1 ∪ A2 ⊆ X
Then expand the expression (A ∪ C) ∩ (B ∪ C′) to one of the form X1 ∪ X2 ∪ X3 ∪ X4, show that each Xi ⊆ A ∪ B, and use the last lemma.)

exercise 1.4+ Let S = {0, 1, 2} and T = {0, 1, {0, 1}}. Construct ℘(S) and ℘(T).

exercise 1.5 Let S = {5, 10, 15, 20, 25, ...} and T = {3, 4, 7, 8, 11, 12, 15, 16, 19, 20, ...}.
1. Specify each of these sets by defining the properties PS and PT such that S = {x ∈ N : PS(x)} and T = {x ∈ N : PT(x)}.
2. For each of these sets specify two other properties PS1, PS2 and PT1, PT2, such that S = {x ∈ N : PS1(x)} ∪ {x ∈ N : PS2(x)}, and similarly for T.

exercise 1.6 Construct examples of injective (but not surjective) and surjective (but not injective) functions S → T which do not induce the inverse function in the way a bijection does (Lemma 1.8).

exercise 1.7 Prove the claims cited below Definition 1.13, that
1. every sPO (irreflexive, transitive relation) is asymmetric;
2. every asymmetric relation is antisymmetric;
3. if R is connected, symmetric and reflexive, then R(s, t) for every pair s, t.
What about a relation that is both connected, symmetric and transitive? In what way does this depend on the cardinality of S?

exercise 1.8+ Let C be a collection of sets. Show that equality = and existence of a set-isomorphism ≅ are equivalence relations on C × C, as claimed under Definition 1.13. Give an example of two sets S and T such that S ≅ T but S ≠ T (they are set-isomorphic but not equal).


exercise 1.9 Given an arbitrary (non-empty) collection of sets C, show that
1. the inclusion relation ⊆ is a wPO on C;
2. ⊂ is its strict version;
3. ⊆ is not (necessarily) a TO on C.

exercise 1.10+ If |S| = n for some natural number n, what will be the cardinality of ℘(S)?

exercise 1.11 Let A be a countable set.
1. If also B is countable, show that:
(a) the disjoint union A ⊎ B is countable (specify its enumeration, assuming the existence of the enumerations of A and B);
(b) the union A ∪ B is countable (specify an injection into A ⊎ B).
2. If B is uncountable, can A × B ever be countable?
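For experimenting with exercises like 1.4 and 1.10, it may help to generate power sets mechanically and count them; a Python sketch (not part of the exercises themselves):

```python
from itertools import combinations

def powerset(S):
    # All subsets of S, obtained by choosing r elements for each r = 0, ..., |S|.
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

for n in range(5):
    print(n, len(powerset(range(n))))   # the count doubles with each new element
```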


Chapter 2
Induction

• Well-founded Orderings – General notion of Inductive proof
• Inductive Definitions – Structural Induction

1: Well-Founded Orderings

a Background Story ♦ ♦

Ancients had many ideas about the basic structure and limits of the world. According to one of them, our world – the earth – rested on a huge tortoise. The tortoise itself couldn’t just be suspended in a vacuum – it stood on the backs of several elephants. The elephants all stood on a huge disk which, in turn, was perhaps resting on the backs of some camels. And the camels? Well, the story obviously had to stop somewhere because, as we notice, one could produce new sub-levels of animals resting on other objects resting on yet other animals, resting on ... indefinitely. The idea is not well founded because such a hierarchy has no well-defined beginning; it hangs in a vacuum. Any attempt to provide the last, the most fundamental level is immediately met with the question “And what is beyond that?”

The same problem of the lacking foundation can be encountered when one tries to think about the beginning of time. When was it? Physicists may say that it was the Big Bang. But then one immediately asks “OK, but what was before?”. Some early opponents of the Biblical story of the creation of the world – and thus, of time as well – asked “What did God do before He created time?”. St. Augustine, realising the need for a definite answer which, however, couldn’t be given in the same spirit as the question, answered “He prepared the hell for those asking such questions.”

One should be wary here of the distinction between the beginning and the end, or else, between moving backward and forward. For sure, we imagine that things, the world, may continue to exist indefinitely in the future – this idea does not cause much trouble. But our intuition is uneasy with things which do not have any beginning, with chains of events extending indefinitely backwards, whether it is a backward movement along the dimension of time or of causality. Such non-well-founded chains are hard to imagine and even harder to do anything with – all our thinking, activity, constructions have to start from some beginning. Having an idea of a beginning, one will often be able to develop it into a description of the ensuing process. One will typically say: since the beginning was so-and-so, such-and-such had to follow, since it is implied by the properties of the beginning. Then the properties of this second stage imply some more, and so on. But having nothing to start with, we are left without a foundation to perform any intelligible acts.

Mathematics has no problems with chains extending infinitely in both directions. Yet it has a particular liking for chains which do have a beginning, for orderings which are well-founded. As with our intuition and activity otherwise, the possibility of ordering a set in a way which identifies its least, first, starting elements gives a mathematician a lot of powerful tools. We will study in this chapter some fundamental tools of this kind. As we will see later, almost all our presentation will be based on well-founded orderings. ♦ ♦

Definition 2.1 Let ⟨S, ≤⟩ be a PO and T ⊆ S.

43

I.2. Induction

• x ∈ T is a minimal element of T iff there is no element of T smaller than x, i.e., for no y ∈ T : y < x.
• ⟨S, ≤⟩ is well-founded iff each non-empty T ⊆ S has a minimal element.

The set of natural numbers with the standard ordering, ⟨N, ≤⟩, is well-founded, but the set of all integers with the natural extension of this ordering, ⟨Z, ≤⟩, is not – the subset of all negative integers does not have a ≤-minimal element. Intuitively, well-foundedness means that the ordering has a “basis”, a set of minimal “starting points”. This is captured by the following lemma.

Lemma 2.2 A PO ⟨S, ≤⟩ is well-founded iff there is no infinite decreasing sequence, i.e., no sequence {an}n∈N of elements of S such that an > an+1.

Proof. ⇐) If ⟨S, ≤⟩ is not well-founded, then let T ⊆ S be a subset without a minimal element. Let a1 ∈ T – since it is not minimal, we can find a2 ∈ T such that a1 > a2. Again, a2 is not minimal, so we can find a3 ∈ T such that a2 > a3. Continuing this process we obtain an infinite descending sequence a1 > a2 > a3 > ....
⇒) Suppose that there is such a sequence a1 > a2 > a3 > .... Then, obviously, the set {an : n ∈ N} ⊆ S has no minimal element. QED (2.2)

Example 2.3 Consider again the orderings on finite strings defined in Example 1.17.
1. The relation ≺Q is a well-founded sPO; there is no way to construct an infinite sequence of strings with ever decreasing lengths!
2. The relation ≺P is a well-founded PO: any subset of strings will contain element(s) such that none of their prefixes (except the strings themselves) are in the set. For instance, a and bc are the ≺P-minimal elements in S = {ab, abc, a, bcaa, bca, bc}.
3. The relation ≺L is not well-founded, since there exist infinite descending sequences like . . . ≺L aaaab ≺L aaab ≺L aab ≺L ab ≺L b. In order to construct any such descending sequence, however, one has to introduce ever longer strings as one proceeds towards infinity. Hence the alternative ordering below is also of interest.
4. The relation ≺Q was defined in Example 1.17. Now define s ≺L′ p iff s ≺Q p or (length(s) = length(p) and s ≺L p). Strings are thus ordered primarily by length, secondarily by the previous lexicographic order. The ordering ≺L′ is indeed well-founded and, in addition, connected, i.e., a well-founded TO. □
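The well-founded total ordering ≺L′ – by length first, then lexicographically among strings of equal length – is easy to implement; a small Python sketch:

```python
def shortlex_less(s, p):
    # s precedes p iff s is shorter, or the two are equally long
    # and s is lexicographically smaller.
    return (len(s), s) < (len(p), p)

assert shortlex_less("b", "ab")        # shorter strings come first
assert shortlex_less("aab", "aba")     # equal length: lexicographic order
assert not shortlex_less("aaaab", "b") # no infinite descent below "b" now
```

Sorting any finite set of strings with the key `lambda s: (len(s), s)` enumerates it in this order, starting from its unique minimal element.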

Definition 2.4 ⟨S, ≤⟩ is a well-ordering, WO, iff it is a TO and is well-founded.

Notice that a well-founded ordering is not the same as a well-ordering: the former can still be a PO which is not a TO. The requirement that a WO ⟨S, ≤⟩ is a TO implies that each non-empty (sub)set of S has not only a minimal element but a unique minimal element.

Example 2.5 The set of natural numbers with the “less than” relation, ⟨N, <⟩ [...]

Induction :: Let z′ > 1 be arbitrary, i.e., z′ = z + 1 for some z > 0. Then

1 + 3 + ... + (2z′ − 1) = 1 + 3 + ... + (2z − 1) + (2(z + 1) − 1)
                        = z² + 2z + 1                 (by IH, since z < z′ = z + 1)
                        = (z + 1)² = (z′)²

The proof for z < 0 is entirely analogous, but now we have to reverse the ordering: we start with z = −1 and proceed along the negative integers, considering z ≺ z′ iff |z| < |z′|, where |z| denotes the absolute value of z (i.e., |z| = −z for z < 0). Thus, for z, z′ < 0, we actually have z ≺ z′ iff z > z′.

Basis :: For z = −1, we have −1 = −(−1)².
Induction :: Let z′ < −1 be arbitrary, i.e., z′ = z − 1 for some z < 0. Then

−1 − 3 + ... + (2z′ + 1) = −1 − 3 + ... + (2z + 1) + (2(z − 1) + 1)
                         = −|z|² − 2|z| − 1           (by IH, since z ≺ z′)
                         = −(|z|² + 2|z| + 1) = −(|z| + 1)² = −(z − 1)² = −(z′)²

The second part of the proof makes it clear that the well-founded ordering on the whole of Z we have been using was not the usual ≤. The claim can be stated uniformly: for all z ∈ Z,

1 + 3 + 5 + ... + (2|z| − 1) = z²          if z > 0
−(1 + 3 + 5 + ... + (2|z| − 1)) = −(z²)    if z < 0

Thus formulated, it becomes obvious that we only have one statement to prove: we show the first claim (in the same way as we did it in point 1.) and then apply trivial arithmetic to conclude from x = y that also −x = −y. This, indeed, is a smarter way to prove the claim. Is it induction? Yes it is. We prove it first for all positive integers – by induction on the natural ordering of the positive integers. Then, we take an arbitrary negative integer z and observe (assume the induction hypothesis!) that we have already proved 1 + 3 + ... + (2|z| − 1) = |z|². The well-founded ordering on Z we are using in this case orders first all positive integers along < (for proving the first part of the claim) and then, for any z < 0, puts z after |z| but unrelated to other n > |z| > 0 – the induction hypothesis for proving the claim for such a z < 0 is, namely, that the claim holds for |z|. The ordering is shown on the left:

[diagrams: on the left, the chain 1 → 2 → 3 → 4 → 5 → · · · with each negative integer −n placed directly after its absolute value n, but unrelated to the other positive integers; on the right, the chain 1 → 2 → 3 → · · · followed by all the negative integers −1, −2, −3, −4, . . .]


As a matter of fact, the structure of this proof allows us to view the used ordering in yet another way. We first prove the claim for all positive integers. Thus, when proving it for an arbitrary negative integer z < 0, we can assume a stronger statement than the one we actually need, i.e., that the claim holds for all positive integers. This ordering puts all negative integers after all positive ones, as shown on the right in the figure above. Notice that none of the orderings we encountered in this example was total. □
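The statement proved by induction above can also be checked numerically; a quick Python sketch (finitely many checks are no substitute for the inductive proof, but they make a useful sanity test):

```python
def sum_odds(z):
    # 1 + 3 + ... + (2|z| - 1), taken with the sign of z.
    s = sum(2 * k - 1 for k in range(1, abs(z) + 1))
    return s if z > 0 else -s

assert all(sum_odds(z) == z * z for z in range(1, 100))
assert all(sum_odds(z) == -(z * z) for z in range(-99, 0))
```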

2: Inductive Definitions

We have introduced the general idea of inductive proof over an arbitrary well-founded ordering defined on an arbitrary set. The idea of induction – a kind of stepwise construction of the whole from a “basis” by repetitive applications of given rules – can be applied not only for constructing proofs but also for constructing, that is, defining, sets. We now illustrate this technique of definition by induction and then (subsection 2.3) proceed to show how it gives rise to the possibility of using a special case of the inductive proof strategy – structural induction – on sets defined in this way.

a Background Story ♦ ♦

Suppose I make a simple statement, for instance, (1) ‘John is a nice person’. Its truth may be debatable – some people may think that, on the contrary, he is not nice at all. Pointing this out, they might say – “No, he is not, you only think that he is”. So, to make my statement less definite I might instead say (2) ‘I think that ‘John is a nice person’’. In the philosophical tradition one would say that (2) expresses a reflection over (1) – it expresses the act of reflecting over the first statement. But now, (2) is a new statement, and so I can reflect over it again: (3) ‘I think that ‘I think that ‘John is nice’’’. It isn’t perhaps obvious why I should make this kind of statement, but I certainly can make it and, with some effort, perhaps even attach some meaning to it. Then I can just continue: (4) ‘I think that (3)’, (5) ‘I think that (4)’, etc. The further (or higher) we go, the less idea we have what one might possibly intend with such expressions. Philosophers used to spend time analysing their possible meaning – the possible meaning of such repetitive acts of reflection over reflection over reflection ... over something. In general, they agree that such an infinite regress does not yield anything intuitively meaningful and should be avoided.
In terms of normal language usage, we hardly ever attempt to carry such a process beyond level (2) – the statements at the higher levels do not make any meaningful contribution to a conversation. Yet they are possible for purely linguistic reasons – each statement obtained in this way is grammatically correct. And what is ‘this way’? Simply:

Basis :: Start with some statement, e.g., (1) ‘John is nice’.
Step :: Whenever you have produced some statement (n) – at first it is just (1), but after a few steps you have some higher statement (n) – you may produce a new statement by prepending (n) with ‘I think that ...’. Thus you obtain a new, (n+1), statement ‘I think that (n)’.

Anything you obtain according to this rule happens to be grammatically correct, and the whole infinite chain of such statements constitutes what philosophers call an infinite regress. The crucial point here is that we do not start with some set which we analyse, for instance, by looking for some ordering on it. We are defining a new set – the set of statements {(1), (2), (3), ...} – in a very particular way. We are applying the idea of induction – stepwise construction from a “basis” – not for proving properties of a given set with some well-founded ordering but for defining a new set. ♦ ♦

One may often encounter sets described by means of abbreviations like E = {0, 2, 4, 6, 8, ...} or T = {1, 4, 7, 10, 13, 16, ...}. The abbreviation ... indicates that the author assumes that you have figured out what the subsequent elements will be – and that there will be infinitely many of them.


It is assumed that you have figured out the rule by which to generate all the elements. The same sets may be defined more precisely with explicit reference to the respective rule:

(F.iv)  E = {2 ∗ n : n ∈ N}   and   T = {3 ∗ n + 1 : n ∈ N}

Another way to describe these two rules is as follows. The set E is defined by:
Basis :: 0 ∈ E.
Step :: Whenever x ∈ E, then also x + 2 ∈ E.
Closure :: Nothing else belongs to E.
The other set is defined similarly:
Basis :: 1 ∈ T.
Step :: Whenever x ∈ T, then also x + 3 ∈ T.
Closure :: Nothing else belongs to T.
Here we are not so much defining the whole set by one static formula (as we did in (F.iv)) as we are specifying the rules for generating new elements from some elements which we have already included in the set. Not all formulae (static rules, as those used in (F.iv)) allow an equivalent formulation in terms of such generation rules. Yet quite many sets of interest can be defined by means of such generation rules – quite many sets can be introduced by means of inductive definitions. Inductively defined sets will play a central role in all the subsequent chapters.

Idea 2.13 [Inductive definition of a set] An inductive definition of a set S consists of
Basis :: List some (at least one) elements B ⊆ S.
Induction :: Give one or more rules to construct new elements of S from already existing elements.
Closure :: State that S consists exactly of the elements obtained by the basis and induction steps. (This is typically assumed rather than stated explicitly.)

Example 2.14 The finite strings Σ∗ over an alphabet Σ from Example 1.17 can be defined inductively, starting with the empty string ε, i.e., the string of length 0, as follows:
Basis :: ε ∈ Σ∗
Induction :: if s ∈ Σ∗ then xs ∈ Σ∗ for all x ∈ Σ
The constructors are the empty string ε and, for each x ∈ Σ, the operation of prepending x in front of a string. Notice that a 1-element string x will here be represented as xε. □

Example 2.15 The finite non-empty strings Σ+ over an alphabet Σ are defined by starting with a different basis:
Basis :: x ∈ Σ+ for all x ∈ Σ
Induction :: if s ∈ Σ+ then xs ∈ Σ+ for all x ∈ Σ □

Often, one is not interested in all possible strings over a given alphabet but only in some subsets. Such subsets are called languages and, typically, are defined by induction.

Example 2.16 Define the set of strings N over Σ = {0, s}:
Basis :: 0 ∈ N
Induction :: If n ∈ N then sn ∈ N
This language is the basis of the formal definition of natural numbers. The constructors are 0 and the operation of appending the symbol ‘s’ to the left. (The ‘s’ signifies the “successor” function corresponding to n + 1.) Notice that we do not obtain the set {0, 1, 2, 3, ...} but {0, s0, ss0, sss0, ...}, which is a kind of unary representation of the natural numbers. Notice also that, for instance, the strings 00s, s0s0 ∉ N, i.e., N ≠ Σ∗. □

Example 2.17


1. Let Σ = {a, b} and let us define the language L ⊆ Σ∗ consisting only of the strings starting with a number of a’s followed by an equal number of b’s, i.e., {aⁿbⁿ : n ∈ N}.
Basis :: ε ∈ L
Induction :: if s ∈ L then asb ∈ L
The constructors of L are ε and the operation adding an a at the beginning and a b at the end of a string s ∈ L.
2. Here is a more complicated language over Σ = {a, b, c, (, ), ¬, →} with two rules of generation.
Basis :: a, b, c ∈ L
Induction :: if s ∈ L then ¬s ∈ L; if s, r ∈ L then (s → r) ∈ L
By the closure property, we can see that, for instance, ‘(’ ∉ L and (¬b) ∉ L. □
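The closure clause is what makes membership in such a language decidable by running the inductive definition backwards: peel off one application of the generation rule at a time and see whether the basis is eventually reached. A Python sketch for L = {aⁿbⁿ : n ∈ N}:

```python
def in_L(s):
    # Undo the rule "if s in L then asb in L" as long as possible;
    # s is in L iff this process ends in the basis case, the empty string.
    while s.startswith("a") and s.endswith("b"):
        s = s[1:-1]
    return s == ""

assert in_L("") and in_L("ab") and in_L("aaabbb")
assert not in_L("aab") and not in_L("abab")
```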

In the examples from section 1 we saw that a given set may be endowed with various well-founded orderings. Having succeeded in this, we can then use the powerful technique of proof by induction according to Theorem 2.7. The usefulness of inductive definitions is related to the fact that such an ordering may be obtained for free – the resulting set implicitly obtains a well-founded ordering induced by the very definition, as follows.³

Idea 2.18 [Induced wf Order] For an inductively defined set S, define a function f : S → N as follows:
Basis :: Let S0 = B and for all b ∈ S0 : f(b) = 0.
Induction :: Given Si, let Si+1 be the union of Si and all the elements x ∈ S \ Si which can be obtained according to one of the rules from some elements y1, . . . , yn of Si. For each such new x ∈ Si+1 \ Si, let f(x) = i + 1.
Closure :: The actual ordering is then x ≺ y iff f(x) < f(y).

The function f essentially counts the minimal number of steps – consecutive applications of the rules allowed by the induction step of Definition 2.13 – needed to obtain a given element of the inductive set.

Example 2.19 Refer to Example 2.14. Since the induction step there amounts to increasing the length of a string by 1, following the above idea we obtain the ordering on strings s ≺ p iff length(s) < length(p). □

2.1: “1-1” Definitions A common feature of the above examples of inductively defined sets is the impossibility of deriving an element in more than one way. For instance, according to example 2.15 the only way to derive the string abc is to start with c and then add b and a to the left in sequence. One apparently tiny modification to the example changes this state of affairs: Example 2.20 The finite non-empty strings over alphabet Σ can also be defined inductively as follows. Basis :: x ∈ Σ+ for all x ∈ Σ Induction :: if s ∈ Σ+ and p ∈ Σ+ then sp ∈ Σ+ 2

According to this example, abc can be derived by concatenating either a and bc, or ab and c. We often say that the former definitions are 1-1, while the latter is not. Given a 1-1 inductive definition of a set S, there is an easy way to define new functions on S – again by induction.

³ In fact, an inductive definition imposes at least two such orderings of interest, but here we consider just one.


Idea 2.21 [Inductive function definition] Suppose S is defined inductively from basis B and a certain set of construction rules. To define a function f on the elements of S do the following:
Basis :: Identify the value f(x) for each x in B.
Induction :: For each way an x ∈ S can be constructed from one or more y1, . . . , yn ∈ S, show how to obtain f(x) from the values f(y1), . . . , f(yn).
Closure :: If you managed to do this, then the closure property of S guarantees that f is defined for all elements of S.

The next few examples illustrate this method.

Example 2.22 We define the length function on finite strings by induction on the definition in Example 2.14 as follows:
Basis :: length(ε) = 0
Induction :: length(xs) = length(s) + 1 □

Example 2.23 We define the concatenation of finite strings by induction on the definition from Example 2.14:
Basis :: ε · t = t
Induction :: xs · t = x(s · t) □

Example 2.24 In Example 2.14 strings were defined by a left append (prepend) operation which we wrote as juxtaposition xs. A corresponding right append operation ` can now be defined inductively:
Basis :: ε ` y = y
Induction :: xs ` y = x(s ` y)
and so can the operation of reversing a string:
Basis :: εᴿ = ε
Induction :: (xs)ᴿ = sᴿ ` x □
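The inductive function definitions of Examples 2.22–2.24 transcribe directly into recursive code; a Python sketch, treating a string xs as head s[0] and tail s[1:]:

```python
def length(s):
    # length(ε) = 0;  length(xs) = length(s) + 1
    return 0 if s == "" else 1 + length(s[1:])

def concat(s, t):
    # ε · t = t;  (xs) · t = x(s · t)
    return t if s == "" else s[0] + concat(s[1:], t)

def reverse(s):
    # ε reversed is ε;  (xs) reversed is (s reversed) with x appended on the right
    return "" if s == "" else reverse(s[1:]) + s[0]

assert length("abc") == 3
assert concat("ab", "cd") == "abcd"
assert reverse("abc") == "cba"
```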

The right append operation ` does not quite fit the format of Idea 2.21 since it takes two arguments – a symbol as well as a string. It is possible to give a more general version that covers such cases as well, but we shall not do so here. The definition below also apparently goes beyond the format of Idea 2.21, but in order to make it fit we merely have to think of addition, for instance in m + n, as an application of the one-argument function add n to the argument m.

Example 2.25 Using the definition of N from Example 2.16, we can define the plus operation for all n, m ∈ N:
Basis :: 0 + n = n
Induction :: s(m) + n = s(m + n)
It is not immediate that this is the usual plus – we cannot see, for example, that n + m = m + n. We shall see in an exercise that this is actually the case. We can use this definition to calculate the sum of two arbitrary natural numbers represented as elements of N. For instance, 2 + 3 would be processed as follows:
ss0 + sss0 ↦ s(s0 + sss0) ↦ ss(0 + sss0) ↦ ss(sss0) = sssss0 □
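Example 2.25 can likewise be executed directly on the unary representation of Example 2.16, with numbers encoded as strings of s's ending in 0; a Python sketch:

```python
def plus(m, n):
    # Basis: 0 + n = n;  Induction: s(m) + n = s(m + n)
    return n if m == "0" else "s" + plus(m[1:], n)

assert plus("ss0", "sss0") == "sssss0"   # 2 + 3 = 5
```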

Note carefully that the method of inductive function definition (Idea 2.21) is guaranteed to work only when the set is given by a 1-1 definition. Imagine that we tried to define a version of the length function from Example 2.22 by induction on the definition in Example 2.20, as follows: len(x) = 1 for x ∈ Σ, while len(ps) = len(p) + 1. This would provide us with alternative (and hence mutually contradictory) values for len(abc), depending on which way we choose to derive abc. Note also that the two equations len(x) = 1 and len(ps) = len(p) + len(s) provide a working definition, but in this case it takes some reasoning to check that this is indeed the case.


2.2: Inductive Definitions and Recursive Programming

If you are not familiar with the basics of programming you may skip this subsection and go directly to subsection 2.3. Here we do not introduce any new concepts but merely illustrate the relation between the two areas from the title. All basic structures known from computer science are defined inductively – any instance of a List, Stack, Tree, etc., is generated in finitely many steps by applying some basic constructor operations. These operations themselves may vary from one programming language to another, or from one application to another, but they always capture the inductive structure of these data types. We give here but two simple examples which illustrate the inductive nature of two basic data structures and show how this leads to the elegant technique of recursive programming.

1. Lists
A List (to simplify matters, we assume that we store only integers as data elements) is a sequence of 0 or more integers. The idea of a ‘sequence’ or, more generally, of a linked structure is captured by pointers between objects storing data. Thus, one would define objects of the form:

List
  int x;
  List next;

The list 3, 7, 2, 5 would then contain 4 List objects (plus an additional null object at the end):

[diagram: four List objects holding 3, 7, 2, 5, each pointing via next to the following one, the last one pointing to null]

The declaration of the List objects tells us that a List is:
1. either a null object (which is a default, always possible value for pointers),
2. or an integer (stored in the current List object) followed by another List object.
But this is exactly an inductive definition, namely the one from Example 2.14, the only difference being that of the language used.

1a. From inductive definition to recursion
This is also a 1-1 definition and thus gives rise to natural recursive programming over lists. The idea of recursive programming is to traverse a structure in the order opposite to the way we imagine it built along its inductive definition. We start at some point of the structure and proceed to its subparts until we reach the basis case. For instance, the function computing the length of a list is programmed recursively as follows:

int length(List L)
  IF (L is null) return 0;
  ELSE return (1 + length(L.next));

int sum(List L)
  IF (L is null) return 0;
  ELSE return (L.x + sum(L.next));

It should be easy to see that the first pseudo-code function is nothing more than the inductive definition of the length function from Example 2.22. Instead of the mathematical formulation used there, it uses the operational language of programming to specify:
1. the value of the function in the basis case (which also terminates the recursion), and
2. the way to compute the value in the non-basis case from the value for some subcase which brings the recursion “closer to” the basis.
The same schema is applied in the second function, sum, which computes the sum of all integers in the list. Notice that, abstractly, Lists can be viewed simply as finite strings. You may rewrite the definition of concatenation from Example 2.23 for Lists as represented here.

1b. Equality of Lists
An inductive 1-1 definition of a set (here, of the set of List objects given by their declaration) also gives rise to an obvious recursive function for comparing objects for equality. Two lists are equal iff
i) they have the same structure (i.e., the same number of elements), and


ii) respective elements in both lists are equal The corresponding pseudo-code for recursive function definition is as follows. The first two lines check point i) and ensure termination upon reaching the basis case. The third line checks point ii). If everything is ok so far, the recursion proceeds to check the rest of the lists: boolean equal(List L, R) IF (L is null AND R is null) return TRUE; ELSE IF (L is null OR R is null) return FALSE; ELSE IF (L.x 6= R.x) return FALSE; ELSE return equal(L.next, R.next); 2. Trees Another very common data structure is binary tree BT. (Again, we simplify presentation by assuming that we only store integers as data.) Unlike in a list, each node (except the null ones) has two successors (called “children”) left and right: BT int x; BT left; BT right;

[Figure: an example binary tree with root 5 and further left/right children; the original diagram could not be recovered.]

We now construct a machine computing, from given functions F and B, the function G where G(y) is the least x such that B(x, F(x, y)) is true; if no such x exists, G(y) is undefined. We consider the alphabet Σ = {#, 1, Y, N} and functions over the positive natural numbers (without 0) N, with unary representation as strings of 1's. Let our given functions be F : N × N → N and B : N × N → {Y, N}, and the corresponding Turing machines MF, resp. MB. More precisely, sF is the initial state of MF which, starting in a configuration of the form C1, halts – iff z = F(x, y) is defined – in the final state eF in a configuration of the form C2:
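Before descending to the tape level, the search this construction performs can be sketched abstractly in Python (a sketch of the minimalization idea, not the book's construction; F's partiality is modeled by returning None, and the `bound` parameter only makes the sketch testable):

```python
# G(y) = the least x >= 1 with B(x, F(x, y)) true; undefined if the search
# never succeeds, or reaches an x where F(x, y) is itself undefined.
def mu(B, F, y, bound=10**6):
    for x in range(1, bound):
        z = F(x, y)
        if z is None:      # F(x, y) undefined: the real machine runs forever
            return None
        if B(x, z):
            return x
    return None            # search exhausted the (artificial) bound

# Toy example with total functions F(x, y) = x*y and B(x, z): z > 10.
print(mu(lambda x, z: z > 10, lambda x, y: x * y, 4))   # 3, since 3*4 > 10
```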

(writing 1^n for a block of n consecutive 1's)

C1 (MF started in state sF):   ··· # 1^y # 1^x # ···

C2 (MF halted in state eF):    ··· # 1^y # 1^x # 1^z # ···

If, for some pair x, y, F(x, y) is undefined, MF(y, x) may go on forever. B, on the other hand, is total, and MB always halts in its final state eB when started from a configuration of the form C2 in the initial state sB, yielding a configuration of the form C3:

C2 (MB started in state sB):   ··· # 1^y # 1^x # 1^z # ···

C3 (MB halted in state eB):    ··· # 1^y # 1^x # 1^z u # ···

where u = Y iff B(x, z) = Y (true) and u = N iff B(x, z) = N (false). Using MF and MB, we design a TM MG which starts in its initial state sG in a configuration


II.1. Turing Machines

C0, and halts in its final state eG iff G(y) = x is defined, with the tape as shown in T:

C0 (MG started in state sG):   ··· # 1^y # ···

T  (MG halted in state eG):    ··· # 1^y # 1^x # ···

MG will add a single 1 (= x) past the leftmost # to the right of the input y, and run MF and MB on these two numbers. If MB halts with Y, we only have to clean up F(x, y). If MB halts with N, MG erases F(x, y), extends x with a single 1 and continues:

[State diagram of MG: states 1–6 around the sub-machines MF and MB; the original figure could not be recovered.]

In case of success, MB exits along the Y and states 5–6 erase the sequence of 1's representing F(x, y). MG stops in state 6 to the right of x. If MB got N, this N is erased and states 3–4 erase the current F(x, y). The first blank # encountered in state 3 is the blank immediately to the right of the last 1 of x. This # is replaced with 1 – increasing x to x + 1 – and MF is started in a configuration of the form C1. MG(y) will go on forever if no x exists such that B(x, F(x, y)) = Y. However, it may also go on forever if such an x exists but F(x′, y) is undefined for some 0 < x′ < x! Then the function G(y) computed by MG is undefined. In the theory of recursive functions, such a schema is called µ-recursion ("µ" for minimal) – here it is the function G : N → N given by

G(y) = the least x ∈ N such that B(x, F(x, y)) = Y and F(x′, y) is defined for all x′ ≤ x.

The fact that if G(y) = x (i.e., when it is defined) then also F(x′, y) must be defined for all x′ ≤ x captures the idea of mechanical computability – MG simply checks all consecutive values of x′ until the correct one is found.

2.2: Alternative representation of TMs

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . [optional]

We give an alternative, equivalent representation of an arbitrary TM, defining it directly by a set of transitions between situations rather than by "more abstract" instructions. Definition 3.5 of a TM embodies the abstract character of an algorithm which operates on any possible actual input. The following definition 3.10 gives a "more concrete" representation of a computation on some given input, in that it takes into account the "global state" of the computation expressed by the contents of the tape to the left and to the right of the reading head.

Definition 3.10 A situation of a TM is a quadruple ⟨l, q, c, r⟩ where q is the current state, c the symbol currently under the reading head, l the tape to the left and r the tape to the right of the current symbol. For instance

··· # a b b # ···
          ↑ qi

corresponds to the situation ⟨ab, qi, b, ε⟩. Notice that l represents only the part of the tape to the left up to the beginning of the infinite sequence of blanks (resp. r to the right). A computation of a TM M is a sequence of transitions between situations

(F.vii)   S0 ↦M S1 ↦M S2 ↦M ...

where S0 is an initial situation and each S ↦M S′ is an execution of a single instruction. The reflexive transitive closure of the relation ↦M is written ↦*M.


In example 3.8 we saw a machine accepting (halting on) each sequence of an even number of 1's. Its computation starting on the input 11, written as a sequence of transitions between the subsequent situations in the notation of definition 3.10, will be

⟨ε, q0, 1, 1⟩ ↦M ⟨1, q1, 1, ε⟩ ↦M ⟨11, q0, #, ε⟩

(the head starts on the first 1 in state q0, moves right over each 1 alternating between q0 and q1, and halts on the blank after the 1's in state q0).

In order to capture the manipulation of whole situations, we need some means of manipulating the strings to the left and right of the reading head. Given a string s and a symbol x ∈ Σ, let xs denote the application of a function prepending the symbol x in front of s (cf. example 2.24). Furthermore, we consider the functions hd and tl returning, resp., the first symbol and the rest of a non-empty string. (E.g., hd(abc) = a and tl(abc) = bc.) Since the infinite string of only blanks corresponds to empty input, we identify such a string with the empty string, #ω = ε. Consequently, we let #ε = ε. The functions hd and tl must be adjusted accordingly:

Basis :: hd(ε) = # and tl(ε) = ε (with #ω = ε)
Ind :: hd(sx) = x and tl(sx) = s.

We imagine the reading head some place on the tape and two (infinite) strings starting to the left, resp., right of the head. (Thus, to ease readability, the prepending operation on the left string is written sx rather than xs, so the symbol nearest the head is written last.) Each instruction of a TM can be equivalently expressed as a set of transitions between situations. That is, given a TM M according to definition 3.5, we can construct an equivalent representation of M as a set of transitions between situations. Each write-instruction ⟨q, a⟩ ↦ ⟨p, b⟩, for a, b ∈ Σ, corresponds to the transition:

w : ⟨l, q, a, r⟩ ⊢ ⟨l, p, b, r⟩

A move-right instruction ⟨q, x⟩ ↦ ⟨p, R⟩ corresponds to

R : ⟨l, q, x, r⟩ ⊢ ⟨lx, p, hd(r), tl(r)⟩

and, analogously, ⟨q, x⟩ ↦ ⟨p, L⟩ to

L : ⟨l, q, x, r⟩ ⊢ ⟨tl(l), p, hd(l), xr⟩

Notice that, for instance, for L, if l = ε and x = #, the equations we imposed earlier yield ⟨ε, q, #, r⟩ ⊢ ⟨ε, p, #, #r⟩. Thus a TM M can be represented as a quadruple ⟨K, Σ, q0, ⊢M⟩, where ⊢M is a relation (function, actually) on the set of situations, ⊢M ⊆ Sit × Sit. (Here, Sit are represented using the functions on strings as above.) For instance, the machine M from example 3.8 will now look as follows:

1. ⟨l, q0, 1, r⟩ ⊢ ⟨l1, q1, hd(r), tl(r)⟩
2. ⟨l, q1, 1, r⟩ ⊢ ⟨l1, q0, hd(r), tl(r)⟩
3. ⟨l, q1, #, r⟩ ⊢ ⟨l, q1, #, r⟩

A computation of a TM M according to this representation is a sequence of transitions

(F.viii)   S0 ⊢M S1 ⊢M S2 ⊢M ...

where each S ⊢M S′ corresponds to one of the specified transitions between situations. The reflexive transitive closure of this relation is denoted ⊢*M. It is easily shown (exercise 3.7) that the two representations are equivalent, i.e., a machine obtained by such a transformation has exactly the same computations (on the same inputs) as the original machine.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . [end optional]
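The situation-based representation lends itself directly to simulation. The following Python sketch (ours, not the book's) stores the transitions in a dict and runs the machine of example 3.8; the left string keeps its head-adjacent symbol last, as in the sx notation above, and the step budget only makes the sketch testable:

```python
BLANK = "#"

def hd(s):            # first symbol of the right string; hd(ε) = # (all-blank tape)
    return s[0] if s else BLANK

def tl(s):
    return s[1:] if s else ""

def hd_l(s):          # symbol of the left string nearest the head (written last)
    return s[-1] if s else BLANK

def tl_l(s):
    return s[:-1] if s else ""

def step(trans, sit):
    l, q, c, r = sit
    if (q, c) not in trans:
        return None                      # no instruction applies: the machine halts
    p, a = trans[(q, c)]
    if a == "R":                         # ⟨l,q,c,r⟩ ⊢ ⟨lc, p, hd(r), tl(r)⟩
        return (l + c, p, hd(r), tl(r))
    if a == "L":                         # ⟨l,q,c,r⟩ ⊢ ⟨tl(l), p, hd(l), c·r⟩
        return (tl_l(l), p, hd_l(l), c + r)
    return (l, p, a, r)                  # write instruction: ⟨l,q,c,r⟩ ⊢ ⟨l,p,a,r⟩

def run(trans, q0, tape, max_steps=100):
    sit = ("", q0, hd(tape), tl(tape))   # head on the first symbol of the input
    for _ in range(max_steps):
        nxt = step(trans, sit)
        if nxt is None:
            return sit                   # halted: return the final situation
        sit = nxt
    return None                          # step budget exhausted (may diverge)

# The machine of example 3.8: alternates q0/q1 on each 1, loops in q1 on #.
even = {("q0", "1"): ("q1", "R"),
        ("q1", "1"): ("q0", "R"),
        ("q1", BLANK): ("q1", BLANK)}

print(run(even, "q0", "11"))   # halts in state q0 (even number of 1's)
print(run(even, "q0", "1"))    # None: loops forever in q1
```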

3: Universal Turing Machine
Informally, we might say that one Turing machine M′ simulates another one M if M′ is able to perform all the computations which can be performed by M or, more precisely, if any input w for M can be represented as an input w′ for M′ and the result M′(w′) represents the result M(w).


This may happen in various ways, the most trivial one being the case when M′ is strictly more powerful than M. If M is a multiplication machine (returning n ∗ m for any two natural numbers), while M′ can do both multiplication and addition, then, augmenting the input w for M with an indication of multiplication, we can use M′ to do the same thing as M would do. Another possibility might be some encoding of the instructions of M such that M′, using this encoding as a part of its input, can act as if it were M. This is what happens in a computer, since a computer program is a description of an algorithm, while an algorithm is just a mechanical procedure for performing computations of some specific type – i.e., it is a Turing machine. A program in a high-level language is a Turing machine M – compiling it into machine code amounts to constructing a machine M′ which can simulate M. Execution of M(w) proceeds by representing the high-level input w as an input w′ acceptable for M′, running M′(w′), and converting the result back to the high-level representation. We won't define formally the notions of representation and simulation, relying instead on their intuitive understanding and on the example of a Universal Turing machine which we will present. A Universal Turing machine is a Turing machine which can simulate any other Turing machine. It is a conceptual prototype and paradigm of the programmable computers as we know them.

Idea 3.11 [A Universal TM] To build a UTM which can simulate an arbitrary TM M:
1. Choose a coding of Turing machines so that they can be represented on an input tape for the UTM.
2. Represent the input of M on the input tape for the UTM.
3. Choose a way of representing the state of the simulated machine M (the current state and position of the head) on the tape of the UTM.
4. Design the set of instructions for the UTM.

To simplify the task, without losing generality, we will assume that the simulated machines work only on the default alphabet Σ = {∗, #}. At the same time, the UTM will use an extended alphabet Π, which is the union of the following sets:
• Σ – the alphabet of M
• {S, N, R, L} – additional symbols to represent instructions of M
• {X, Y, 0, 1} – symbols used to keep track of the current state and position
• {(, A, B} – auxiliary symbols for bookkeeping
We will code the machine M together with its original input as follows:

( | instructions of M | current state | input and head position

1: A possible coding of TMs
1. Get the set of instructions from the description of a TM M = ⟨K, Σ, q1, τ⟩.
2. Each instruction t ∈ τ is a four-tuple t : ⟨qi, a⟩ ↦ ⟨qj, b⟩ where qi, qj ∈ K, a is # or ∗, and b ∈ Σ ∪ {L, R}. We assume that states are numbered from 1 up to n > 0. Represent t as

Ct :  S ··· S a b N ··· N
      (i S's)     (j N's)

i.e., first i S-symbols representing the initial state qi, then the read symbol a, then the action b – either the symbol to be written or R, L – and finally j N-symbols for the final state qj.
3. String the representations of all the instructions together, with no extra spaces, in increasing order of state numbers. If for a state i there are two instructions, ti∗ for input symbol ∗ and ti# for input symbol #, put ti∗ before ti#.
4. Put the "end" symbol '(' to the left:

( Ct1 Ct2 ··· Ctz


Example 3.12 Let M = ⟨{q1, q2, q3}, {∗, #}, q1, τ⟩, where τ is given in the left column of the table, with the coding of each instruction in the right column:

⟨q1, ∗⟩ ↦ ⟨q1, R⟩     S∗RN
⟨q1, #⟩ ↦ ⟨q2, ∗⟩     S#∗NN
⟨q2, ∗⟩ ↦ ⟨q2, L⟩     SS∗LNN
⟨q2, #⟩ ↦ ⟨q3, R⟩     SS#RNNN

The whole machine will be coded as:

( S ∗ R N S # ∗ N N S S ∗ L N N S S # R N N N ···

It is not necessary to perform the above conversion but – can you tell what M does?
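The coding scheme of steps 1–4 can be sketched in Python (helper names are ours, not the text's), applied here to the τ of example 3.12:

```python
# An instruction ⟨qi, a⟩ ↦ ⟨qj, b⟩ becomes i S's, the read symbol a,
# the action b, and j N's, per the coding described in the text.
def code_instruction(i, a, b, j):
    return "S" * i + a + b + "N" * j

def code_machine(instructions):
    # instructions: list of (i, a, b, j), already in the required order;
    # the codes are strung together with no separators, after a '(' marker.
    return "(" + "".join(code_instruction(*t) for t in instructions)

tau = [(1, "*", "R", 1),   # ⟨q1,∗⟩ ↦ ⟨q1,R⟩
       (1, "#", "*", 2),   # ⟨q1,#⟩ ↦ ⟨q2,∗⟩
       (2, "*", "L", 2),   # ⟨q2,∗⟩ ↦ ⟨q2,L⟩
       (2, "#", "R", 3)]   # ⟨q2,#⟩ ↦ ⟨q3,R⟩
print(code_machine(tau))   # (S*RNS#*NNSS*LNNSS#RNNN
```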

2: Input representation
We included the alphabet of the original machines, Σ = {∗, #}, in the alphabet of the UTM. There is no need to code this part of the simulated machines.

3: Current state
After the representation of the instruction set of M, we will reserve part of the tape for the representation of the current state. There are n states of M, so we reserve n + 1 fields for a unary representation of the number of the current state. The i-th state is represented by i X's followed by (n + 1 − i) Y's: if M is in state i, this part of the tape will be:

instructions | X ··· X Y ··· Y | input
              (i X's, n+1−i Y's)

We use n + 1 positions so that there is always at least one Y to the right of the sequence of X's representing the current state. To "remember" the current position of the head, we will use the two extra symbols 0 and 1, corresponding, respectively, to # and ∗. The current symbol under the head will always be changed to 0, resp. 1. When the head is moved away, these symbols will be restored back to the original #, resp. ∗. For instance, if M's head on the input tape ∗∗##∗#∗ is in the 4th place, the input part of the UTM tape will be ∗∗#0∗#∗.

4: Instructions for UTM
We will let the UTM start execution with its head at the rightmost X in the bookkeeping section of the tape. After completing the simulation of one step of M's computation, the head will again be placed at the rightmost X. The simulation of each step of the computation of M will involve several things:
1. Locate the instruction to be used next.
2. Execute this instruction, i.e., either print a new symbol or move the head on M's tape.
3. Write down the new state in the bookkeeping section.
4. Get ready for the next step: clean up any mess and move the head to the rightmost X.
We indicate the working of the UTM at these stages:

1: Find instruction
In a loop we erase one X at a time, replacing it by Y, and pass through all the instructions, converting one S to A in each instruction. If there are too few S's in an instruction, we convert all the N's to B's in that instruction. When all the X's have been replaced by Y's, the instructions corresponding to the actual state have only A's instead of S's. Now we eliminate the instructions which still contain an S by going through all the instructions: if there is some S not converted to A in an instruction, we replace all N's by B's in that instruction. Now, there remain at most 2 N-lists associated with the instruction(s) for the current state. We go and read the current symbol on M's tape and replace N's by B's in the instruction (if any) which does not correspond to what we read. The instruction to be executed is now the one with N's – the rest have only B's.

2: Execute instruction
The UTM now starts looking for a sequence of N's. If none is found, then M – and the UTM – stops. Otherwise, we check what to do by looking at the symbol just to the left of the leftmost N. If it is R or L, we go to M's tape and move its head, restoring the current symbol to its Σ form and replacing the new current symbol by 1, resp. 0. If the instruction tells us to write a new symbol, we just write the appropriate thing.

3: Write new state
We find again the sequence of N's and write the same number of X's in the bookkeeping section, indicating the next state.

4: Clean up
Finally, convert all A's and B's back to S's and N's, and move the head to the rightmost X.
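The bookkeeping convention for the current state can be sketched in a few lines of Python (helper names are ours, not the text's):

```python
# State i of an n-state machine is written as i X's followed by (n+1-i) Y's,
# so at least one Y always follows the X's.
def encode_state(i, n):
    return "X" * i + "Y" * (n + 1 - i)

def decode_state(cells):
    return cells.count("X")

print(encode_state(2, 3))   # XXYY
```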

4: Decidability and the Halting Problem
The Turing machine is a possible formal expression of the idea of mechanical computability – we are willing to say that a function is computable iff there is a Turing machine which computes its values for all possible arguments. (Such functions are also called recursive.) Notice that if a function is not defined on some arguments (for instance, division by 0), this would require us to assign some special, perhaps new, values for such arguments. For partial functions one uses a slightly different notion. A function F is
computable iff there is a TM which halts with F(x) for all inputs x;
semi-computable iff there is a TM which halts with F(x) whenever F is defined on x, but does not halt when F is undefined on x.
A problem P of YES–NO type (like "is x a member of the set S?") gives rise to a special case of function FP (a predicate) which returns one of only two values. We get here a third notion. A problem P is
decidable iff FP is computable – the machine computing FP always halts, returning the correct answer YES or NO;
semi-decidable iff FP is semi-computable – the machine computing FP halts with the correct answer YES, but may not halt when the answer is NO;
co-semi-decidable iff not-FP is semi-computable – the machine computing FP halts with the correct answer NO, but may not halt when the answer is YES.
Thus a problem is decidable iff it is both semi- and co-semi-decidable. Set membership is a special case of a YES–NO problem, but one uses a different terminology. A set S is
recursive iff the membership problem x ∈ S is decidable;
recursively enumerable iff the membership problem x ∈ S is semi-decidable;
co-recursively enumerable iff the membership problem x ∈ S is co-semi-decidable.
Again, a set is recursive iff it is both recursively and co-recursively enumerable. One of the most fundamental results about Turing Machines concerns the undecidability of the Halting Problem. Following our strategy for encoding TMs and their inputs for simulation by a UTM, we assume that the encoding of the instruction set of a machine M is E(M), while the encoding of an input w for M is just w itself.

Problem 3.13 [The Halting Problem] Is there a Turing machine MU such that, for any machine M and any input w, MU(E(M), w) always halts and

MU(E(M), w) = Y(es) if M(w) halts
MU(E(M), w) = N(o) if M(w) does not halt


Notice that the problem is trivially semi-decidable: given M and w, simply run M(w) and see what happens. If the computation halts, we get the correct YES answer to our problem. If it does not halt, then we may wait forever. Unfortunately, the following theorem ensures that, in general, there is not much else to do than wait and see what happens.

Theorem 3.14 [Undecidability of the Halting Problem] There is no Turing machine which decides the halting problem.

Proof Assume, on the contrary, that there is such a machine MU.
1. We can easily design a machine M1 that is undefined (does not halt) on input Y and defined everywhere else, e.g., a machine with one state q0 and the single instruction ⟨q0, Y⟩ ↦ ⟨q0, Y⟩.
2. Now, construct a machine M1′ which on the input (E(M), w) computes M1(MU(E(M), w)). It has the property that M1′(E(M), w) halts iff M(w) does not halt. In particular: M1′(E(M), E(M)) halts iff M(E(M)) does not halt.
3. Let M∗ be a machine which, for an input w, first computes (w, w) and then M1′(w, w). In particular, M∗(E(M∗)) = M1′(E(M∗), E(M∗)). This one has the property that:

M∗(E(M∗)) halts iff M1′(E(M∗), E(M∗)) halts iff M∗(E(M∗)) does not halt.

This is clearly a contradiction, from which the theorem follows. QED (3.14)

Thus the set {⟨M, w⟩ : M halts on input w} is semi-recursive but not recursive. In terms of programming, the undecidability of the Halting Problem means that it is impossible to write a program which could 1) take as input an arbitrary program M and its possible input w, and 2) determine whether M run on w will terminate or not. The theorem gives rise to a series of corollaries identifying other undecidable problems. The usual strategy for such proofs is to show that if a given problem were decidable then we could use it to decide the (halting) problem already known to be undecidable.

Corollary 3.15 There is no Turing machine
1. MD which, for any machine M, always halts on MD(E(M)), with 0 iff M is total (always halts) and with 1 iff M is undefined for some input;
2. ME which, for any given two machines M1, M2, always halts, with 1 iff the two halt on the same inputs and with 0 otherwise.

Proof
1. Assume that we have an MD. Given an M and some input w, we may easily construct a machine Mw which, for any input x, computes M(w). In particular, Mw is total iff M(w) halts. Then we can run MD(E(Mw)) to decide the halting problem. Hence there is no such MD.
2. Assume that we have an ME. Take as M1 a machine which does nothing but halts immediately on any input. Then we can use ME and M1 to construct an MD, which does not exist by the previous point. QED (3.15)
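The semi-decidability remark above – "run M(w) and see what happens" – can be sketched in Python (ours, not the book's construction); a computation is represented by a step function returning the next state, or None when the machine halts, and a true semi-decision procedure would loop without any bound, the `fuel` parameter only making the sketch testable:

```python
def semi_decide_halting(step, state, fuel=10**6):
    for _ in range(fuel):
        nxt = step(state)
        if nxt is None:
            return "Y"            # the computation halted: correct YES answer
        state = nxt
    return None                   # no answer yet: the run may go on forever

# A computation that counts down halts; one that counts up never does.
print(semi_decide_halting(lambda n: n - 1 if n > 0 else None, 5))   # Y
print(semi_decide_halting(lambda n: n + 1, 0, fuel=1000))           # None
```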

Exercises (week 3)

exercise 3.1+ Suppose that we want to encode the alphabet consisting of 26 (Latin) letters and 10 digits using strings – of fixed length – of symbols from the alphabet ∆ = {−, •}. What is the minimal length of ∆-strings allowing us to do that? What is the maximal number of distinct symbols which can be represented using the ∆-strings of this length?
The Morse code is an example of such an encoding, although it actually uses an additional symbol – corresponding to # – to separate the representations, and it uses strings of different lengths. For instance, Morse represents A as •− and B as −•••, while 0 is −−−−−. (The more frequently a letter is used, the shorter its representation in Morse.) Thus the sequence •−#−••• is distinct from •−−•••.


exercise 3.2+ The questions at the end of Examples 3.8 and 3.9 (run the respective machines on the suggested inputs).

exercise 3.3+ Let Σ = {1, #} – a sequence of 1's on the input represents a natural number. Design a TM which, starting at the leftmost 1 of the input x, performs the operation x + 1 by appending a 1 at the end of x, and returns the head to the leftmost 1.

exercise 3.4 Consider the alphabet Σ = {a, b} and the language from example 2.17.1, i.e., L = {aⁿbⁿ : n ∈ N}.
1. Build a TM M1 which, given a string s over Σ (possibly with additional blank symbols #), halts iff s ∈ L and goes on forever iff s ∉ L. If you find it necessary, you may allow M1 to modify the input string.
2. Modify M1 to an M2 which does a similar thing but always halts in the same state indicating the answer. For instance, the answer 'YES' may be indicated by M2 just halting, and 'NO' by M2 writing some specific string (e.g., 'NO') and halting.

exercise 3.5 The correct ()-expressions are defined inductively (relative to a given set S of other expressions):
Basis :: Each s ∈ S and the empty word are correct ()-expressions.
Induction :: If s and t are correct ()-expressions then so are: (s) and st.
1. Use induction on the length of ()-expressions to show that: s is correct iff 1) the numbers of left '(' and right ')' parentheses in s are equal, say n, and 2) for each 1 ≤ i ≤ n the i-th '(' comes before the i-th ')' (the leftmost '(' comes before the leftmost ')', the second leftmost '(' before the second leftmost ')', etc.).
2. The following machine will read a ()-expression starting on its leftmost symbol – it will halt in state 3 iff the input was incorrect and in state 7 iff the input was correct. The alphabet Σ for the machine consists of two disjoint sets, Σ1 ∩ Σ2 = ∅, where Σ1 is some set of symbols (for writing S-expressions) and Σ2 = {X, Y, (, ), #}. In the diagram we use the abbreviation '?' to indicate 'any other symbol from Σ not mentioned explicitly among the transitions from this state'. For instance, when in state 2 and reading #, the machine goes to state 3 and writes #; reading ), it writes Y and goes to state 4 – while reading any other symbol ?, it moves the head to the right, remaining in state 2.

[State diagram of the ()-checker, with states 0–7 and transitions such as (,X and ),Y between them; the original figure could not be recovered.]
Run the machine on a couple of your own tapes with ()-expressions (correct and incorrect!). Can you, using the claim from 1., justify that this machine does the right thing, i.e., decides the correctness of ()-expressions?

exercise 3.6 Let Σ = {a, b, c} and ∆ = {0, 1} (Example 3.3). Specify an encoding of Σ in ∆∗ and build two Turing machines:
1. Mc which, given a string over Σ, converts it to a string over ∆
2. Md which, given a string over ∆, converts it to a string over Σ


The two should act so that their composition gives the identity, i.e., for all s ∈ Σ∗: Md(Mc(s)) = s and, for all d ∈ ∆∗: Mc(Md(d)) = d. Choose the initial and final position of the head for both machines so that executing one after the other will actually reproduce the initial string. Run each of the machines on some example tapes. Then run the two machines in sequence to check whether the final tape is the same as the initial one.

exercise 3.7 Use induction on the length of computations to show that applying the schema from subsection 2.2 to the instruction representation of an arbitrary TM M over the alphabet Σ = {#, 1} yields the same machine M. I.e., for any input (initial situation) S0, the two computations given by (F.vii) of definition 3.10 and (F.viii) from subsection 2.2 are identical:

S0 ↦M S1 ↦M S2 ↦M ... = S0 ⊢M S1 ⊢M S2 ⊢M ...

The following (optional) exercises concern construction of a UTM. exercise 3.8 Following the strategy from 1: A possible coding of TMs, and the Example 3.12, code the machine which you designed in exercise 3.3. exercise 3.9 Complete the construction of UTM. 1. Design four TMs to be used in a UTM as described in the four stages of simulation in 4: Instructions for UTM. 2. Indicate for each (sub)machine the assumptions about its initial and final situation. 3. Put the four pieces together and run your UTM on the coding from the previous Exercise with some actual inputs.

III.1. Syntax and Proof Systems


Chapter 4
Syntax and Proof Systems

• Axiomatic Systems in general
• Syntax of SL
• Proof Systems
  – Hilbert
  – ND
• Provable equivalence, Syntactic consistency and compactness
• Gentzen proof system
  – Decidability of axiomatic systems for SL

1: Axiomatic Systems

♦ A Background Story ♦

One of the fundamental goals of all scientific inquiry is to achieve precision and clarity of a body of knowledge. This "precision and clarity" means, among other things:
• all assumptions of a given theory are stated explicitly;
• the language of the theory is designed carefully, by choosing some basic, primitive notions and defining others in terms of these;
• the theory contains some basic principles – all other claims of the theory follow from its basic principles by applications of definitions and some explicit laws.
Axiomatization in a formal system is the ultimate expression of these postulates. Axioms play the role of basic principles – explicitly stated fundamental assumptions which may be disputable but, once assumed, imply the other claims, called theorems. Theorems follow from the axioms not by some unclear arguments but by formal deductions according to well-defined rules.
The most famous example of an axiomatisation (and the one which, in more than one way, gave origin to the modern axiomatic systems) was Euclidean geometry. Euclid systematised geometry by showing how many geometrical statements could be logically derived from a small set of axioms and principles. The axioms he postulated were supposed to be intuitively obvious:
A1. Given two points, there is an interval that joins them.
A2. An interval can be prolonged indefinitely.
A3. A circle can be constructed when its center, and a point on it, are given.
A4. All right angles are equal.
There was also the famous fifth axiom – we will return to it shortly. Another part of the system were the "common notions", which may perhaps more adequately be called inference rules about equality:
CN1. Things equal to the same thing are equal.
CN2. If equals are added to equals, the wholes are equal.
CN3. If equals are subtracted from equals, the remainders are equal.
CN4. Things that coincide with one another are equal.


Statement Logic

CN5. The whole is greater than a part.
Presenting a theory, in this case geometry, as an axiomatic system has tremendous advantages. First, it is economical – instead of long lists of facts and claims, we can store only axioms and deduction rules, since the rest is derivable from them. In a sense, axioms and rules "code" the knowledge of the whole field. More importantly, it systematises knowledge by displaying the fundamental assumptions and basic facts which form a logical basis of the field. In a sense, Euclid uncovered "the essence of geometry" by identifying axioms and rules which are sufficient and necessary for deriving all geometrical theorems. Finally, having such a compact presentation of a complicated field makes it possible to relate not only to particular theorems but also to the whole field as such. This possibility is reflected in our speaking about Euclidean geometry vs. non-Euclidean ones. The differences between them concern precisely changes of some basic principles – inclusion or removal of the fifth postulate.
As an example of a proof in Euclid's system, we show how, using the above axioms and rules, he deduced the following proposition ("Elements", Book 1, Proposition 4):

Proposition 4.1 If two triangles have two sides equal to two sides respectively, and have the angles contained by the equal straight lines equal, then they also have the base equal to the base, the triangle equals the triangle, and the remaining angles equal the remaining angles respectively, namely those opposite the equal sides.

Proof Let ABC and DEF be two triangles having the two sides AB and AC equal to the two sides DE and DF respectively, namely AB equal to DE and AC equal to DF, and the angle BAC equal to the angle EDF.

[Figure: the two triangles ABC and DEF.]

I say that the base BC also equals the base EF, the triangle ABC equals the triangle DEF, and the remaining angles equal the remaining angles respectively, namely those opposite the equal sides, that is, the angle ABC equals the angle DEF, and the angle ACB equals the angle DFE.
If the triangle ABC is superposed on the triangle DEF, and if the point A is placed on the point D and the straight line AB on DE, then the point B also coincides with E, because AB equals DE. Again, AB coinciding with DE, the straight line AC also coincides with DF, because the angle BAC equals the angle EDF. Hence the point C also coincides with the point F, because AC again equals DF. But B also coincides with E, hence the base BC coincides with the base EF and – by CN4 – equals it. Thus the whole triangle ABC coincides with the whole triangle DEF and – by CN4 – equals it. QED (4.1)

The proof is allowed to use only the given assumptions, the axioms and the deduction rules. Yet, the Euclidean proofs are not exactly what we mean by a formal proof in an axiomatic system. Why? Because Euclid presupposed a particular model, namely, the abstract set of points, lines and figures in an infinite, homogeneous space. This presupposition need not be wrong (although, according to modern physics, it is), but it has an important bearing on the notion of proof. For instance, it is intuitively obvious what Euclid means by "superposing one triangle on another". Yet, this operation hides some further assumptions, for instance, that length does not change during such a process. This implicit assumption comes most clearly forth when considering the language of Euclid's geometry. Here are just a few definitions from "Elements":


D1. A point is that which has no part.
D2. A line is breadthless length.
D3. The ends of a line are points.
D4. A straight line is a line which lies evenly with the points on itself.
D23. Parallel straight lines are straight lines which, being in the same plane and being produced indefinitely in both directions, do not meet one another in either direction.
These are certainly smart formulations, but one can wonder if, for instance, D1 really defines anything or, perhaps, merely states a property of something intended by the name "point". Or else, does D2 define anything if one does not presuppose some intuition of what length is? To make a genuinely formal system, one would have to identify some basic notions as truly primitive – that is, with no intended interpretation. For these notions one may postulate some properties. For instance, one might say that we have the primitive notions of P, L and IL (for point, line and indefinitely prolonged line). P has no parts; L has two ends, both being P's; any two P's determine an L (whose ends they are – this reminds us of A1); any L determines uniquely an IL (cf. A2), and so on. Then, one may identify derived notions, which are defined in terms of the primitive ones. Thus, for instance, the notion of parallel lines can be defined from the primitives as was done in D23.
The difference may seem negligible but, in fact, it is of the utmost importance. By insisting on the uninterpreted character of the primitive notions, it opens an entirely new perspective. On the one hand, we have our primitive, uninterpreted notions. These can be manipulated according to the axioms and rules we have postulated. On the other hand, there are various possibilities of interpreting these primitive notions. All such interpretations will have to satisfy the axioms and conform to the rules, but otherwise they may be vastly different. This was the insight which led, first, to non-Euclidean geometry and, then, to the formal systems.
We will now illustrate this first stage of development. The famous fifth axiom, the "Parallel Postulate", has a somewhat more involved formulation than the first four:

A5. If a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than the two right angles.

A stronger and clearer formulation of this axiom is the following:

A5. Given a line L and a point p not on L, exactly one line L′ can be drawn through p parallel to L (i.e., not intersecting L no matter how far extended).

This axiom seems much less intuitive than the other ones, and mathematicians spent centuries trying to derive it from them. Failing to do that, they started to ask: "But does this postulate have to be true? What if it isn't?" Well, it may seem that it is true – but how can we check? It may be hard to prolong any line indefinitely. Thus we encounter the other aspect of formal systems, which we will study in the following chapters, namely, the meaning or semantics of such a system. Designing an axiomatic system, one has to specify precisely what its primitive terms are and how these terms may interact in the derivation of theorems. On the other hand, one specifies what these terms are supposed to denote. In fact, terms of a formal system may denote anything which conforms to the rules specified for their interaction. Euclidean geometry was designed with a particular model in mind – the abstract set of points, lines and figures that can be constructed with compass and straightedge in an infinite space. But now, allowing for the primitive character of the basic notions, we can consider other interpretations.
We can consider as our space a finite circle C, interpret a P as any point within C, an L as any closed interval within C and an IL as an open-ended chord of the circle, i.e., a straight line within the circle which approaches indefinitely closely, but never touches the circumference. (Thus one can “prolong a line indefinitely” without ever meeting the circumference.) Such an interpretation does not satisfy the fifth postulate.

[Figure: the circle C as the space of the interpretation – a line L, a point p not on L, and two chords through p, drawn via points x and y on L, each prolonged indefinitely within C without ever meeting L.]
We start with a line L and a point p not on L. We can then choose two other points x and y and, by A1, obtain two lines xp and yp, which can be prolonged indefinitely according to A2. As we see, neither of these indefinitely prolonged lines intersects L. Thus both are parallel to L according to the very same old definition D23. Failing to satisfy the fifth postulate, this interpretation is not a model of Euclidean geometry. But it is a model of the first non-Euclidean geometry – the Bolyai-Lobachevsky geometry, which keeps all the definitions, postulates and rules except the fifth postulate. Later, many other non-Euclidean geometries were developed – perhaps the most famous one, by Hermann Minkowski, as the four-dimensional space-time universe of relativity theory. And now we can observe another advantage of using axiomatic systems. Since non-Euclidean geometry preserves all Euclid's postulates except the fifth one, all the theorems and results which were derived without the use of the fifth postulate remain valid. For instance, proposition 4.1 needs no new proof in the new geometries. Thus, it should also be noted that axiomatic systems deserve a separate study. Such a study may reveal consequences (theorems) of various sets of postulates. Studying some particular phenomena, one will first ask which postulates they satisfy. An answer to this question will then immediately yield all the theorems which have been proven in the corresponding system. What is of fundamental importance, and should be constantly kept in mind, is that axiomatic systems, their primitive terms and proofs, are purely syntactic – that is, they do not presuppose any particular interpretation. Some fundamental axiomatic systems will be studied in this chapter. Of course, the eventual usefulness of such a system will depend on whether we can find interesting interpretations for its terms and rules, but this is another story.
In the following chapters we will look at possible interpretations of the axiomatic systems introduced here. ♦


Recall that an inductive definition of a set consists of a Basis, an Induction part, and an implicit Closure condition. When the set defined is a language, i.e., a set of strings, we often talk about an axiomatic system. In this case, the elements of the basis are called axioms, while the induction part is given by a set of proof rules. The set defined is called the set of theorems. A special symbol ⊢ is used to denote the set of theorems. Thus A ∈ ⊢ iff A is a theorem. The statement A ∈ ⊢ is usually written ⊢ A. Usually ⊢ is identified as a subset of some other language L ⊆ Σ*, thus ⊢ ⊆ L ⊆ Σ*.

Definition 4.2 Given an L ⊆ Σ*, an axiomatic system ⊢ takes the following form.
Axioms :: a set Ax ⊆ ⊢ ⊆ L, and
Proof Rules :: of the form "if A1 ∈ ⊢, ..., An ∈ ⊢ then C ∈ ⊢", written

         ⊢A1 ; ... ; ⊢An
    R :  ───────────────
               ⊢C

(The Ai are the premisses and C the conclusion of the rule R. A rule is just an element R ∈ Lⁿ × L.) The rules are always designed so that C is in L if A1, ..., An are; thus ⊢ is guaranteed to be a subset of L.

Definition 4.3 A proof in an axiomatic system is a finite sequence A1, ..., An of strings from L, such that for each Ai
• either Ai ∈ Ax, or else
• there are Ai1, ..., Aik in the sequence, with all i1, ..., ik < i, and an application of a proof rule

         ⊢Ai1 ; ... ; ⊢Aik
    R :  ─────────────────
                ⊢Ai

(i.e., such that ⟨Ai1, ..., Aik, Ai⟩ = R).

A proof of A is a proof in which A is the final string.

Remark. Clearly A is a theorem of the system iff there is a proof of A in the system. Notice that for a given language L there may be several axiomatic systems which all define the same subset of L, albeit by means of very different rules. There are also variations, which we will consider, where the predicate ⊢ is defined on various sets built over L, for instance ℘(L) × L. □

2: Syntax of SL

The basic logical system, originating with Boole's algebra, is Propositional Logic, also called Statement Logic (SL). The names reflect the fact that the expressions of the language are "intended as" propositions. This interpretation will be part of the semantics of SL, to be discussed in the following chapters. Here we introduce the syntax and the associated axiomatic proof system of SL.

Definition 4.4 The language of well-formed formulae of SL is defined as follows:
1. An alphabet for an SL language consists of a set of propositional variables Σ = {a, b, c, ...}, together with the (formula building) connectives ¬ and →, and the auxiliary symbols (, ).
2. The well-formed formulae, WFF_SL^Σ, are defined inductively:
   Basis :: Σ ⊆ WFF_SL^Σ;
   Ind :: 1) if A ∈ WFF_SL^Σ then ¬A ∈ WFF_SL^Σ;
          2) if A, B ∈ WFF_SL^Σ then (A → B) ∈ WFF_SL^Σ.
3. The propositional variables are called atomic formulae; the formulae of the form A or ¬A, where A is atomic, are called literals.

Remark 4.5 [Some conventions]
1) Compare this definition to exercise 2.2.
2) The outermost pair of parentheses is often suppressed; hence A → (B → C) stands for the formula (A → (B → C)), while (A → B) → C stands for the formula ((A → B) → C).
3) Note that the well-formed formulae are strings over the symbols in Σ ∪ {), (, →, ¬}, i.e., WFF_SL^Σ ⊆ (Σ ∪ {), (, →, ¬})*. We use lower case letters for the propositional variables of a particular alphabet Σ, while upper case letters stand for arbitrary formulae. The sets WFF_SL^Σ over Σ = {a, b} and over Σ1 = {c, d} are disjoint (though in one-to-one correspondence). Thus the definition yields a different set of formulae for different Σ's. Writing WFF_SL we mean well-formed SL formulae over an arbitrary alphabet, and most of our discussion is concerned with this general case, irrespective of any particular alphabet Σ.
4) It is always implicitly assumed that Σ ≠ ∅.
5) For reasons which we will explain later, we may occasionally use the abbreviations ⊥ for ¬(B → B) and ⊤ for B → B, for arbitrary B. □
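The inductive clauses of definition 4.4 translate directly into a recursive membership test. The sketch below is not from the text; it uses the ASCII stand-ins ~ for ¬ and > for →, and checks the strict definition (so the outermost parentheses of an implication are required, per convention 2 above):

```python
def wff(s, sigma):
    """Check s against the inductive clauses of definition 4.4."""
    if s in sigma:                      # Basis: a propositional variable
        return True
    if s.startswith('~'):               # Ind 1): ~A
        return wff(s[1:], sigma)
    if s.startswith('(') and s.endswith(')'):   # Ind 2): (A > B)
        depth = 0
        for i, c in enumerate(s):
            if c == '(':
                depth += 1
            elif c == ')':
                depth -= 1
            elif c == '>' and depth == 1:       # the main arrow
                return wff(s[1:i], sigma) and wff(s[i + 1:-1], sigma)
    return False

assert wff('(a>(b>a))', {'a', 'b'})     # shaped like axiom A1
assert wff('~~b', {'a', 'b'})
assert not wff('a>b', {'a', 'b'})       # outer parentheses are required
```

In a well-formed implication the two sides are balanced, so the first `>` at nesting depth 1 is necessarily the main connective; splitting there and recursing mirrors the Closure condition of the inductive definition.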

In the following we will always – unless explicitly stated otherwise – assume that the formulae involved are well-formed.

3: Hilbert's axiomatic system

Hilbert's system H for SL is defined with respect to a unary relation (predicate) ⊢H ⊆ WFF_SL, which we write as ⊢H B rather than as B ∈ ⊢H. It reads "B is provable in H".

Definition 4.6 The predicate ⊢H of Hilbert's system for SL is defined inductively by:
Axiom Schemata::
A1: ⊢H A → (B → A);
A2: ⊢H (A → (B → C)) → ((A → B) → (A → C));
A3: ⊢H (¬B → ¬A) → (A → B);

Proof Rule :: Modus Ponens:

    ⊢H A ; ⊢H A → B
    ───────────────
         ⊢H B
Remark [Axioms vs. axiom schemata] A1–A3 are in fact axiom schemata; the actual axioms comprise all formulae of the indicated form, with the letters A, B, C instantiated to arbitrary formulae. For each particular alphabet Σ, there will be a different (infinite) collection of actual axioms. Similar instantiations are performed in the proof rule. For instance, for Σ = {a, b, c, d}, all the following formulae are instances of axiom schemata:
A1 : b → (a → b), (b → d) → (¬a → (b → d)), a → (a → a),
A3 : (¬¬d → ¬b) → (b → ¬d).
The following formulae are not instances of the axioms: a → (b → b), (¬b → a) → (¬a → b). Thus an axiom schema, like A1, is actually a predicate – for any given Σ, we get the set of (Σ-)instances A1^Σ = {x → (y → x) : x, y ∈ WFF_SL^Σ} ⊂ WFF_SL^Σ. Also proof rules, like Modus Ponens (MP), are written as schemata using variables (A, B) standing for arbitrary formulae. The MP proof rule schema comprises, for a given alphabet, infinitely many rules of the same form, e.g.,

    ⊢H a ; ⊢H a → b      ⊢H a → ¬b ; ⊢H (a → ¬b) → (b → c)      ⊢H ¬(a → b) ; ⊢H ¬(a → b) → c
    ───────────────  ,   ─────────────────────────────────  ,   ─────────────────────────────  ,  ...
         ⊢H b                       ⊢H b → c                                ⊢H c

Thus, in general, a proof rule R with n premisses (in an axiomatic system over a language L) is a schema – a relation R ⊆ Lⁿ × L. For a given Σ, Hilbert's Modus Ponens schema yields a set of (legal Σ-)applications MP^Σ = {⟨x, x → y, y⟩ : x, y ∈ WFF_SL^Σ} ⊂ WFF_SL^Σ × WFF_SL^Σ × WFF_SL^Σ. A proof rule as in definition 4.2 is just one element of this relation. Rules are almost always given in the form of such schemata – an element of the respective relation is then called an "application of the rule". The above three examples are applications of MP, i.e., elements of MP^Σ. Notice that the sets Ai^Σ and MP^Σ are recursive (provided that Σ is, which it always is by assumption). Recursivity of the last set means that we can always decide whether a given triple of formulae is a (legal) application of the rule.
Recursivity of the set of axioms means that we can always decide whether a given formula is an axiom or not. Axiomatic systems which do not satisfy these conditions (i.e., where either the axioms or the applications of rules are undecidable) are of little interest, and we will not consider them at all. □

That both the axioms and the applications of MP form recursive sets does not (necessarily) imply that so is ⊢H. It only means that, given a sequence of formulae, we can decide whether it is a proof or not. To decide if a given formula belongs to ⊢H would require a procedure for deciding if such a proof exists – probably, a procedure for constructing a proof. We will see several examples illustrating that, even if such a procedure for ⊢H exists, it is by no means simple.

Lemma 4.7 For an arbitrary B ∈ WFF_SL: ⊢H B → B

Proof
1: ⊢H (B → ((B → B) → B)) → ((B → (B → B)) → (B → B))    A2
2: ⊢H B → ((B → B) → B)                                   A1
3: ⊢H (B → (B → B)) → (B → B)                             MP(2, 1)
4: ⊢H B → (B → B)                                         A1
5: ⊢H B → B                                               MP(4, 3)
                                                          QED (4.7)
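Since the axioms and the applications of MP are recursive, checking that a given sequence is a proof is mechanical. A minimal sketch of such a checker (not from the text; formulae are encoded as nested tuples, with atoms as strings and ('imp', A, B) for A → B) verifying the five-line proof of lemma 4.7:

```python
def imp(a, b): return ('imp', a, b)
def is_imp(f): return isinstance(f, tuple) and f[0] == 'imp'
def is_not(f): return isinstance(f, tuple) and f[0] == 'not'

def is_axiom(f):
    """Is f an instance of one of the schemata A1-A3?"""
    if not is_imp(f):
        return False
    a, r = f[1], f[2]
    if is_imp(r) and r[2] == a:                         # A1: A -> (B -> A)
        return True
    if (is_imp(a) and is_imp(a[2])                      # A2
            and r == imp(imp(a[1], a[2][1]), imp(a[1], a[2][2]))):
        return True
    if (is_imp(a) and is_imp(r) and is_not(a[1]) and is_not(a[2])
            and a[1][1] == r[2] and a[2][1] == r[1]):   # A3: (~B -> ~A) -> (A -> B)
        return True
    return False

def is_proof(lines):
    """Definition 4.3 for H: each line is an axiom or follows by MP."""
    for i, f in enumerate(lines):
        if not (is_axiom(f)
                or any(imp(lines[j], f) in lines[:i] for j in range(i))):
            return False
    return True

b = 'b'; bb = imp(b, b)
lemma_4_7 = [imp(imp(b, imp(bb, b)), imp(imp(b, bb), bb)),  # A2
             imp(b, imp(bb, b)),                            # A1
             imp(imp(b, bb), bb),                           # MP(2, 1)
             imp(b, bb),                                    # A1
             bb]                                            # MP(4, 3)
assert is_proof(lemma_4_7)
assert not is_proof([imp('a', 'b')])   # not an axiom, and no MP source
```

Note how the checker decides "is this a proof?" line by line, while saying nothing about how to find a proof – exactly the gap between recursivity of proofs and decidability of ⊢H discussed above.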

The phrase "for an arbitrary B ∈ WFF_SL" indicates that any actual formula of the above form (i.e., for any actual alphabet Σ and any well-formed formula substituted for B) will be derivable, e.g. ⊢H a → a, ⊢H (a → ¬b) → (a → ¬b), etc. All the results concerning SL will be stated in this way. But we cannot substitute different formulae for the two occurrences of B. If we try to apply the above proof to deduce ⊢H A → B, it will fail – identify the place(s) where it would require invalid transitions.

In addition to the provability of single formulae, derivations too can be "stored" for future use. The above lemma means that we can always, for an arbitrary formula B, use ⊢H B → B as a step in a proof. More generally, we can "store" derivations in the form of admissible rules.

Definition 4.8 Let C be an axiomatic system. A rule

    ⊢C A1 ; ... ; ⊢C An
    ───────────────────
           ⊢C C

is admissible in C if, whenever there are proofs in C of all the premisses, i.e., ⊢C Ai for all 1 ≤ i ≤ n, then there is a proof in C of the conclusion ⊢C C.

Lemma 4.9 The following rules are admissible in H:

        ⊢H A → B ; ⊢H B → C              ⊢H B
    1.  ───────────────────         2.  ────────
             ⊢H A → C                   ⊢H A → B

Proof
1.  1: ⊢H (A → (B → C)) → ((A → B) → (A → C))    A2
    2: ⊢H (B → C) → (A → (B → C))                A1
    3: ⊢H B → C                                  assumption
    4: ⊢H A → (B → C)                            MP(3, 2)
    5: ⊢H (A → B) → (A → C)                      MP(4, 1)
    6: ⊢H A → B                                  assumption
    7: ⊢H A → C                                  MP(6, 5)

2.  1: ⊢H B                                      assumption
    2: ⊢H B → (A → B)                            A1
    3: ⊢H A → B                                  MP(1, 2)
                                                 QED (4.9)

Lemma 4.10
1. ⊢H ¬¬B → B
2. ⊢H B → ¬¬B

Proof
1.  1: ⊢H ¬¬B → (¬¬¬¬B → ¬¬B)                                A1
    2: ⊢H (¬¬¬¬B → ¬¬B) → (¬B → ¬¬¬B)                        A3
    3: ⊢H ¬¬B → (¬B → ¬¬¬B)                                  L.4.9.1(1, 2)
    4: ⊢H (¬B → ¬¬¬B) → (¬¬B → B)                            A3
    5: ⊢H ¬¬B → (¬¬B → B)                                    L.4.9.1(3, 4)
    6: ⊢H (¬¬B → (¬¬B → B)) → ((¬¬B → ¬¬B) → (¬¬B → B))      A2
    7: ⊢H (¬¬B → ¬¬B) → (¬¬B → B)                            MP(5, 6)
    8: ⊢H ¬¬B → B                                            MP(L.4.7, 7)

2.  1: ⊢H ¬¬¬B → ¬B                                          point 1.
    2: ⊢H (¬¬¬B → ¬B) → (B → ¬¬B)                            A3
    3: ⊢H B → ¬¬B                                            MP(1, 2)
                                                             QED (4.10)

4: Natural Deduction system

In a Natural Deduction system for SL, instead of the unary predicate ⊢H, we use a binary relation ⊢N ⊆ ℘(WFF_SL) × WFF_SL, which, for Γ ∈ ℘(WFF_SL) and B ∈ WFF_SL, we write as Γ ⊢N B. This relation reads "B is provable in N from the assumptions Γ".

Definition 4.11 The axioms and rules of Natural Deduction are as in Hilbert's system, with the additional axiom schema A0:
Axiom Schemata::
A0: Γ ⊢N B, whenever B ∈ Γ;
A1: Γ ⊢N A → (B → A);
A2: Γ ⊢N (A → (B → C)) → ((A → B) → (A → C));
A3: Γ ⊢N (¬B → ¬A) → (A → B);

Proof Rule :: Modus Ponens:

    Γ ⊢N A ; Γ ⊢N A → B
    ───────────────────
          Γ ⊢N B

Remark. As for the Hilbert system, the "axioms" are actually axiom schemata. The real set of axioms is the infinite set of actual formulae obtained from the axiom schemata by substituting actual formulae for the upper case letters. Similarly for the proof rule. □

The next lemma corresponds exactly to lemma 4.9. In fact, the proof of that lemma (and any other in H) can be taken over line for line, with hardly any modification (just replace ⊢H by Γ ⊢N), to serve as a proof of this lemma.

Lemma 4.12 The following rules are admissible in N:

        Γ ⊢N A → B ; Γ ⊢N B → C              Γ ⊢N B
    1.  ───────────────────────         2.  ──────────
              Γ ⊢N A → C                    Γ ⊢N A → B

The system N is not exactly what is usually called Natural Deduction. We have adopted N because it corresponds so closely to the Hilbert system. The common feature of N and Natural Deduction is that both provide the means of reasoning from assumptions Γ, and not only, like H, for deriving single formulae. Furthermore, both satisfy the crucial theorem which we prove next. (The expression "Γ, A" is short for "Γ ∪ {A}".)

Theorem 4.13 [Deduction Theorem] If Γ, A ⊢N B, then Γ ⊢N A → B.

Proof By induction on the length l of a proof of Γ, A ⊢N B.
Basis, l = 1, means that the proof consists merely of an instance of an axiom, and there are two cases depending on which axiom was involved:
A1-A3 :: If B is one of these axioms, then we also have Γ ⊢N B, and lemma 4.12.2 gives the conclusion.
A0 :: If B results from this axiom, we have two subcases:
1. If B = A then, by lemma 4.7, we know that Γ ⊢N B → B.
2. If B ≠ A then B ∈ Γ, and so Γ ⊢N B. By lemma 4.12.2 we get Γ ⊢N A → B.
MP :: B was obtained by MP, i.e.:

    Γ, A ⊢N C ; Γ, A ⊢N C → B
    ─────────────────────────
            Γ, A ⊢N B

By the induction hypothesis, we have the first two lines of the following proof:
1: Γ ⊢N A → C                                      IH
2: Γ ⊢N A → (C → B)                                IH
3: Γ ⊢N (A → (C → B)) → ((A → C) → (A → B))        A2
4: Γ ⊢N (A → C) → (A → B)                          MP(2, 3)
5: Γ ⊢N A → B                                      MP(1, 4)
                                                   QED (4.13)

Example 4.14 Using the deduction theorem significantly shortens proofs. The tedious example from lemma 4.7 can now be recast as:
1: B ⊢N B        A0
2: ⊢N B → B      DT
□
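The induction in the proof of theorem 4.13 is effective: it tells us how to rewrite, case by case, a proof that uses the assumption A into a proof of the corresponding implications. The sketch below is not from the text; formulae are nested tuples with ('imp', A, B) for A → B, and the three branches mirror the three cases of the induction:

```python
def imp(a, b): return ('imp', a, b)

def self_proof(a):
    """Lemma 4.7's five-line proof of a -> a."""
    aa = imp(a, a)
    return [imp(imp(a, imp(aa, a)), imp(imp(a, aa), aa)),   # A2
            imp(a, imp(aa, a)),                             # A1
            imp(imp(a, aa), aa),                            # MP
            imp(a, aa),                                     # A1
            aa]                                             # MP

def discharge(proof, a):
    """Rewrite a proof using assumption a into a proof whose lines are
    a -> (old line), following the induction of theorem 4.13."""
    out = []
    for i, b in enumerate(proof):
        if b == a:                                    # subcase B = A
            out += self_proof(a)
        else:
            mp = [c for c in proof[:i] if imp(c, b) in proof[:i]]
            if mp:                                    # case MP: b from c and c -> b
                c = mp[0]
                out += [imp(imp(a, imp(c, b)),
                            imp(imp(a, c), imp(a, b))),     # A2
                        imp(imp(a, c), imp(a, b)),          # MP
                        imp(a, b)]                          # MP
            else:                                     # b is an axiom or in Gamma
                out += [b, imp(b, imp(a, b)), imp(a, b)]    # b, A1, MP
    return out

# Gamma = {a}, assumption A = a -> b, proof of b; discharging A yields
# a proof ending in (a -> b) -> b:
p = discharge(['a', imp('a', 'b'), 'b'], imp('a', 'b'))
assert p[-1] == imp(imp('a', 'b'), 'b')
```

In the MP case, the lines imp(a, c) and imp(a, imp(c, b)) that the A2 step needs were already emitted when the earlier lines c and imp(c, b) were processed – exactly the appeal to the induction hypothesis in the theorem's proof.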

The deduction theorem is a kind of dual to MP: each gives one implication of the following

Corollary 4.15 Γ, A ⊢N B iff Γ ⊢N A → B.

Proof ⇒) is the deduction theorem 4.13.

⇐) By exercise 4.5, the assumption may be strengthened to Γ, A ⊢N A → B. But then also Γ, A ⊢N A, and by MP, Γ, A ⊢N B. QED (4.15)

We can now easily show the following:

Corollary 4.16 The following rule is admissible in N:

    Γ ⊢N A → (B → C)
    ────────────────
    Γ ⊢N B → (A → C)

Proof Follows trivially from 4.15 above: Γ ⊢N A → (B → C) iff Γ, A ⊢N B → C iff Γ, A, B ⊢N C – as Γ, A, B abbreviates the set Γ ∪ {A, B}, this is equivalent to – Γ, B ⊢N A → C iff Γ ⊢N B → (A → C). QED (4.16)

Lemma 4.17 ⊢N (A → B) → (¬B → ¬A)

Proof
1: A → B ⊢N (¬¬A → ¬¬B) → (¬B → ¬A)      A3, C.4.15
2: A → B ⊢N ¬¬A → A                      L.4.10.1
3: A → B ⊢N A → B                        A0
4: A → B ⊢N ¬¬A → B                      L.4.12.1(2, 3)
5: A → B ⊢N B → ¬¬B                      L.4.10.2
6: A → B ⊢N ¬¬A → ¬¬B                    L.4.12.1(4, 5)
7: A → B ⊢N ¬B → ¬A                      MP(6, 1)
8: ⊢N (A → B) → (¬B → ¬A)                DT
                                         QED (4.17)

5: Hilbert vs. ND

In H we prove only single formulae, while in N we work "from assumptions", proving their consequences. Since the axiom schemata and rules of H are special cases of their counterparts in N, it is obvious that, for any formula B, if ⊢H B then ∅ ⊢N B. In fact, this can be strengthened to an equivalence. (Below we follow the convention of writing "⊢N B" for "∅ ⊢N B".)

Lemma 4.18 For any formula B we have ⊢H B iff ⊢N B.

Proof One direction is noted above. In fact, any proof of ⊢H B itself qualifies as a proof of ⊢N B. The other direction is almost as obvious, since there is no way to make any real use of A0 in a proof of ⊢N B. More precisely, take any proof of ⊢N B and delete all lines (if any) of the form Γ ⊢N A for Γ ≠ ∅. The result is still a proof of ⊢N B, and now also of ⊢H B. To proceed more formally, the lemma can be proved by induction on the length of a proof of ⊢N B: since Γ = ∅, the last step of the proof could have used either an axiom A1, A2, A3 or MP. The same step can then be made in H – for MP, the proofs of ⊢N A and ⊢N A → B for the appropriate A are shorter, and hence by the IH have counterparts in H. QED (4.18)

The next lemma is a further generalization of this result.

Lemma 4.19 ⊢H G1 → (G2 → ...(Gn → B)...) iff {G1, G2, ..., Gn} ⊢N B.

Proof We prove the lemma by induction on n:
Basis :: The special case corresponding to n = 0 is just the previous lemma.
Induction :: Suppose ⊢H G1 → (G2 → ...(Gn → B)...) iff {G1, G2, ..., Gn} ⊢N B for any B. Then, taking (Gn+1 → B) for B, we have by IH ⊢H G1 → (G2 → ...(Gn → (Gn+1 → B))...) iff {G1, G2, ..., Gn} ⊢N Gn+1 → B, which by corollary 4.15 holds iff {G1, G2, ..., Gn, Gn+1} ⊢N B. QED (4.19)

Lemma 4.18 states the equivalence of N and H with respect to the single formulae of H. This lemma essentially states the general equivalence of the two systems: for any finite N-expression B ∈ ⊢N there is a corresponding H-formula B′ ∈ ⊢H, and vice versa.
Observe, however, that this equivalence is restricted to finite Γ in N-expressions. The significant difference between the two systems is that N also allows us to consider consequences

of infinite sets of assumptions, for which there are no corresponding formulae in H, since every formula must be finite.

6: Provable Equivalence of formulae

Equational reasoning is based on the simple principle of substitution of equals for equals. E.g., having the arithmetical expression 2 + (7 + 3) and knowing that 7 + 3 = 10, we also obtain 2 + (7 + 3) = 2 + 10. The rule applied in such cases may be written as

       a = b
    ───────────
    F[a] = F[b]

where F[ ] is an expression "with a hole" (a variable or a placeholder) into which we may substitute other expressions. We now illustrate a logical counterpart of this idea.

Lemma 4.7 showed that any formula of the form (B → B) is derivable in H and, by lemma 4.18, in N. It allows us to use, for instance, 1) ⊢N a → a, 2) ⊢N (a → b) → (a → b), 3) ... as a step in any proof. Putting it a bit differently, the lemma says that 1) is provable iff 2) is provable iff ... Recall that remark 4.5 introduced the abbreviation ⊤ for an arbitrary formula of this form! It also introduced the abbreviation ⊥ for an arbitrary formula of the form ¬(B → B). These abbreviations indicate that all the formulae of the respective form are equivalent in the following sense.

Definition 4.20 Formulae A and B are provably equivalent in an axiomatic system C for SL if both ⊢C A → B and ⊢C B → A. If this is the case, we write ⊢C A ↔ B.⁴

Lemma 4.10 provides an example, namely

(F.ix)    ⊢H B ↔ ¬¬B

Another example follows from axiom A3 and lemma 4.17:

(F.x)    ⊢N (A → B) ↔ (¬B → ¬A)

It is also easy to show (exc. 4.3) that all ⊤ formulae are provably equivalent, i.e.,

(F.xi)    ⊢N (A → A) ↔ (B → B).

To show the analogous equivalence of all ⊥ formulae,

(F.xii)    ⊢N ¬(A → A) ↔ ¬(B → B),

we have to proceed differently, since we do not have ⊢N ¬(B → B).⁵ We can use the above fact and lemma 4.17:
1: ⊢N (A → A) → (B → B)                                  (F.xi)
2: ⊢N ((A → A) → (B → B)) → (¬(B → B) → ¬(A → A))        L.4.17
3: ⊢N ¬(B → B) → ¬(A → A)                                MP(1, 2)
and the opposite implication is again an instance of this one.

Provable equivalence A ↔ B means – and this is its main importance – that the formulae are interchangeable. Whenever we have a proof of a formula F[A] containing A (as a subformula, possibly with several occurrences), we can replace A by B – the result will be provable too. This fact is a powerful tool in simplifying proofs and is expressed in the following theorem. (The analogous version holds for H.)

Theorem 4.21 The following rule is admissible in N, for any formula F[A]:

       ⊢N A ↔ B
    ───────────────
    ⊢N F[A] ↔ F[B]

Proof By induction on the complexity of F[ ], viewed as a formula "with a hole" (where there may be several occurrences of the "hole", i.e., F[ ] may have the form ¬[ ] → G, or else [ ] → (¬G → ([ ] → H)), etc.).
[ ] :: i.e., F[A] = A and F[B] = B – the conclusion is then the same as the premise.

⁴ In view of lemmata 4.7 and 4.9.1, and their generalizations to N, the relation Im ⊆ WFF_SL^Σ × WFF_SL^Σ given by Im(A, B) ⇔ ⊢N A → B is reflexive and transitive. This definition amounts to adding the requirement of symmetry, making ↔ the greatest equivalence contained in Im.
⁵ In fact, this is not true, as we will see later on.

¬G[ ] :: The IH allows us to assume the claim for G[ ]:

    ⊢N A ↔ B
    ──────────────
    ⊢N G[A] ↔ G[B]

1: ⊢N A → B                                  assumption
2: ⊢N G[A] → G[B]                            IH
3: ⊢N (G[A] → G[B]) → (¬G[B] → ¬G[A])        L.4.17
4: ⊢N ¬G[B] → ¬G[A]                          MP(2, 3)

The same for ⊢N ¬G[A] → ¬G[B], starting with the assumption ⊢N B → A.

G[ ] → H[ ] :: Assuming ⊢N A ↔ B, the IH gives us the following assumptions: ⊢N G[A] → G[B], ⊢N G[B] → G[A], ⊢N H[A] → H[B] and ⊢N H[B] → H[A]. We show that ⊢N F[A] → F[B]:
1: ⊢N H[A] → H[B]                            IH
2: G[A] → H[A] ⊢N H[A] → H[B]                exc. 4.5
3: G[A] → H[A] ⊢N G[A] → H[A]                A0
4: G[A] → H[A] ⊢N G[A] → H[B]                L.4.12.1(3, 2)
5: ⊢N G[B] → G[A]                            IH
6: G[A] → H[A] ⊢N G[B] → G[A]                exc. 4.5
7: G[A] → H[A] ⊢N G[B] → H[B]                L.4.12.1(6, 4)
8: ⊢N (G[A] → H[A]) → (G[B] → H[B])          DT(7)

An entirely symmetric proof yields the other implication, ⊢N F[B] → F[A]. QED (4.21)

The theorem, together with the preceding observations about the equivalence of all ⊤ and all ⊥ formulae, justifies the use of these abbreviations: in a proof, any formula of the form ⊥, resp. ⊤, can be replaced by any other formula of the same form. As a simple consequence of the theorem, we obtain:

Corollary 4.22 For any formula F[A], the following rule is admissible:

    ⊢N F[A] ; ⊢N A ↔ B
    ──────────────────
         ⊢N F[B]

Proof If ⊢N A ↔ B, theorem 4.21 gives us ⊢N F[A] ↔ F[B] which, in particular, implies ⊢N F[A] → F[B]. MP applied to this and the premise ⊢N F[A] gives ⊢N F[B]. QED (4.22)

7: Consistency

Lemma 4.7 and the discussion of provable equivalence above show that for any Γ (also for Γ = ∅) we have Γ ⊢N ⊤, where ⊤ is an arbitrary instance of B → B. The following notion indicates that the similar fact Γ ⊢N ⊥ need not always hold.

Definition 4.23 A set of formulae Γ is consistent iff Γ ⊬N ⊥.

An equivalent formulation says that Γ is consistent iff there is a formula A such that Γ ⊬N A. In fact, if Γ ⊢N A for all A then, in particular, Γ ⊢N ⊥. The equivalence then follows by the next lemma.

Lemma 4.24 If Γ ⊢N ⊥, then Γ ⊢N A for all A.

Proof (Observe how corollary 4.22 simplifies the proof.)
1: Γ ⊢N ¬(B → B)                   assumption
2: Γ ⊢N B → B                      L.4.7
3: Γ ⊢N ¬A → (B → B)               2 + L.4.12.2
4: Γ ⊢N ¬(B → B) → ¬¬A             C.4.22 (F.x)
5: Γ ⊢N ¬¬A                        MP(1, 4)
6: Γ ⊢N A                          C.4.22 (F.ix)
                                   QED (4.24)

This lemma is the (syntactic) reason why inconsistent sets of "assumptions" Γ are uninteresting. Given such a set, we do not need the machinery of the proof system in order to check whether something is a theorem or not – we merely have to check whether the formula is well-formed. Similarly, an axiomatic system, like H, is inconsistent if its rules and axioms allow us to derive ⊢H ⊥. Notice that the definition requires that ⊥ is not derivable. In other words, to decide whether Γ is consistent it does not suffice to run enough proofs and see what can be derived from Γ. One must

show that, no matter what, one will never be able to derive ⊥. This, in general, may be an infinite task, requiring a search through all the proofs. If ⊥ is derivable, we will eventually construct a proof of it; but if it is not, we will never reach any conclusion. That is, in general, only the inconsistency of a given system is semi-decidable; its consistency need not be. (Fortunately, consistency of H, as well as of N for an arbitrary Γ, is decidable – as a consequence of the fact that "being a theorem" is decidable for these systems – and we will comment on this in subsection 8.1.) In some cases, the following theorem may be used to ease the process of deciding whether a given Γ is (in)consistent.

Theorem 4.25 [Compactness] Γ is consistent iff each finite subset ∆ ⊆ Γ is consistent.

Proof
⇒ If Γ ⊬N ⊥ then, obviously, there is no such proof from any subset of Γ.
⇐ Contrapositively, assume that Γ is inconsistent. The proof of ⊥ must be finite and, in particular, uses only a finite number of assumptions ∆ ⊆ Γ. This means that the proof Γ ⊢N ⊥ can be carried out from a finite subset ∆ of Γ, i.e., ∆ ⊢N ⊥. QED (4.25)

8: Gentzen's axiomatic system

By now you should be convinced that it is rather cumbersome to design proofs in H or N. From the mere form of the axioms and rules of these systems it is by no means clear that they define recursive sets of formulae. (As usual, it is easy to see – and a bit more tedious to prove – that these sets are semi-recursive.) We give yet another axiomatic system for SL, in which proofs can be constructed mechanically. The relation ⊢G ⊆ ℘(WFF_SL) × ℘(WFF_SL) contains expressions, called sequents, of the form Γ ⊢G ∆, where Γ, ∆ ⊆ WFF_SL are finite sets of formulae. It is defined inductively as follows:

Axioms :: Γ ⊢G ∆, whenever Γ ∩ ∆ ≠ ∅

Rules ::

          Γ ⊢G ∆, A                     Γ, A ⊢G ∆
    ¬⊢ : ────────────            ⊢¬ : ────────────
          Γ, ¬A ⊢G ∆                    Γ ⊢G ∆, ¬A

          Γ ⊢G ∆, A ; Γ, B ⊢G ∆         Γ, A ⊢G ∆, B
    →⊢ : ───────────────────────  ⊢→ : ──────────────
          Γ, A → B ⊢G ∆                 Γ ⊢G ∆, A → B

The power of the system is the same whether we assume that Γ’s and ∆’s in the axioms contain only atomic formulae, or else arbitrary formulae. We comment now on the “mechanical” character of G and the way one can use it.

8.1: Decidability of the axiomatic systems for SL

Gentzen's system defines a set ⊢G ⊆ ℘(WFF_SL) × ℘(WFF_SL). Unlike for H or N, it is (almost) obvious that this set is recursive – we do not give a formal proof, but indicate its main steps.

Theorem 4.26 The relation ⊢G is decidable.

Proof Given an arbitrary sequent Γ ⊢G ∆ = A1, ..., An ⊢G B1, ..., Bm, we can start processing the formulae (bottom-up!) in an arbitrary order, for instance from left to right. For instance, B → A, ¬A ⊢G ¬B is shown by building the proof starting at the bottom line:

    5:  B ⊢G A, B            axiom
    4:  ⊢G ¬B, A, B          ⊢¬ (5)
    3:  A ⊢G A, ¬B           axiom
    2:  B → A ⊢G ¬B, A       →⊢ (4, 3)
    1:  ¬A, B → A ⊢G ¬B      ¬⊢ (2)

In general, the proof in G proceeds as follows:
• If Ai is atomic, we continue with Ai+1, and then with the B's.

• If a formula is not atomic, it is either ¬C or C → D. In either case there is only one rule which can be applied (remember, we go bottom-up). The premise(s) of this rule are uniquely determined by the conclusion (the formula we are processing at the moment), and its application will remove the main connective, i.e., reduce the number of ¬'s, resp. →'s!
• Thus, eventually, we will arrive at a sequent Γ′ ⊢G ∆′ which contains only atomic formulae. We then only have to check whether Γ′ ∩ ∆′ = ∅, which is obviously a decidable problem, since both sets are finite.

Notice that the rule →⊢ will "split" the proof into two branches, but each of them will contain fewer connectives. We have to process both branches but, again, for each we will eventually arrive at sequents with only atomic formulae. The initial sequent is derivable in G iff all such branches terminate with axioms. And it is not derivable iff at least one branch terminates with a non-axiom (i.e., Γ′ ⊢G ∆′ where Γ′ ∩ ∆′ = ∅). Since all branches are guaranteed to terminate, ⊢G is decidable. QED (4.26)

Now, notice that the expressions used in N are special cases of sequents, namely the ones with exactly one formula on the right of ⊢N. If we restrict our attention in G to such sequents, the above theorem still tells us that the respective restriction of ⊢G is decidable. We now indicate the main steps involved in showing that this restricted relation is the same as ⊢N. As a consequence, we obtain that ⊢N is decidable, too. That is, we want to show that Γ ⊢N B iff Γ ⊢G B.

1) In exercise 4.4 you are asked to prove a part of the implication "if ⊢N B then ⊢G B", by showing that all axioms of N are derivable in G. It is not too difficult to show that the MP rule, too, is admissible in G. It is there called the (cut) rule, whose simplest form is:

           Γ ⊢G A ; Γ, A ⊢G B
    (cut)  ──────────────────
                 Γ ⊢G B

and MP is easily derivable from it.
(If Γ ⊢G A → B, then it must have been derived using the rule ⊢→, i.e., earlier ("above") in the proof we must have had the right premise Γ, A ⊢G B. Thus we could have applied (cut) at this earlier stage and obtained Γ ⊢G B, without bothering to derive Γ ⊢G A → B at all.)

2) To complete the proof, we would have to show also the opposite implication, "if ⊢G B then ⊢N B", namely, that G does not prove more formulae than N does. (If it did, the problem would still be open, since we would have a decision procedure for ⊢G but not for ⊢N ⊂ ⊢G; i.e., for some formula B ∉ ⊢N we might still get a positive answer, which would merely mean that B ∈ ⊢G.) This part of the proof is more involved, since Gentzen's rules for ¬ do not produce N-expressions, i.e., a proof in G may go through intermediary steps involving expressions not derivable (not existing, or "illegal") in N.

3) Finally, if N is decidable, then lemma 4.18 implies that H is decidable too – according to this lemma, to decide whether ⊢H B, it suffices to decide whether ⊢N B.
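The bottom-up procedure from the proof of theorem 4.26 is short enough to write out in full. The sketch below is not from the text; formulae are nested tuples, with atoms as strings, ('not', A) for ¬A and ('imp', A, B) for A → B. Each recursive call removes one connective, so termination is guaranteed:

```python
def provable(gamma, delta):
    """Decide the sequent gamma |-G delta, working bottom-up."""
    gamma, delta = frozenset(gamma), frozenset(delta)
    for f in gamma:                      # rules for the left-hand side
        if isinstance(f, tuple):
            rest = gamma - {f}
            if f[0] == 'not':            # rule not-|-
                return provable(rest, delta | {f[1]})
            # f = ('imp', a, b): rule ->-|- splits into two branches
            return (provable(rest, delta | {f[1]})
                    and provable(rest | {f[2]}, delta))
    for f in delta:                      # rules for the right-hand side
        if isinstance(f, tuple):
            rest = delta - {f}
            if f[0] == 'not':            # rule |--not
                return provable(gamma | {f[1]}, rest)
            return provable(gamma | {f[1]}, rest | {f[2]})  # rule |--->
    return bool(gamma & delta)           # only atoms left: axiom check

imp = lambda a, b: ('imp', a, b)
neg = lambda a: ('not', a)

# the worked example: B -> A, not A |-G not B
assert provable({imp('b', 'a'), neg('a')}, {neg('b')})
# axiom schema A1 is derivable; its converse is not
assert provable(set(), {imp('a', imp('b', 'a'))})
assert not provable(set(), {imp(imp('b', 'a'), 'a')})
```

As a further check, provable({neg(imp('b', 'b'))}, {'a'}) returns True – the ex falso quodlibet of lemma 4.24, re-proved inside G.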

8.2: Gentzen's rules for abbreviated connectives

Gentzen's rules form a very well-structured system: for each connective, → and ¬, there are two rules – one treating its occurrence on the left, and one on the right, of ⊢G. As we will soon see, it often makes things easier if one is allowed to work with abbreviations for frequently occurring combinations of symbols. For instance, assume that in the course of some proof we run again and again into formulae of the form ¬A → B. Processing such a formula requires the application of at least two rules. One may therefore be tempted to define a new connective A ∨ B =def ¬A → B, and a new rule for its treatment. In fact, in Gentzen's system we should obtain two rules for the occurrence of this new symbol, on the left, resp. on the right, of ⊢G. Looking back at the original rules


from the beginning of this section, we can see how such a connective should be treated – for A ∨ B on the left, resp. on the right, of `G (each sequent is derived from the one(s) above it):

    Γ, A `G ∆                       Γ `G A, B, ∆
    Γ `G ¬A, ∆  ;  Γ, B `G ∆        Γ, ¬A `G B, ∆
    Γ, ¬A → B `G ∆                  Γ `G ¬A → B, ∆
    Γ, A ∨ B `G ∆                   Γ `G A ∨ B, ∆

Abbreviating these two derivations yields the following two rules:

(F.xiii)
         Γ `G A, B, ∆                 Γ, A `G ∆ ; Γ, B `G ∆
(`∨)    ───────────────      (∨`)    ───────────────────────
         Γ `G A ∨ B, ∆                    Γ, A ∨ B `G ∆

In a similar fashion, we may construct the rules for another very common abbreviation, A ∧ B =def ¬(A → ¬B):

(F.xiv)
         Γ `G A, ∆ ; Γ `G B, ∆             Γ, A, B `G ∆
(`∧)    ────────────────────────   (∧`)   ──────────────
            Γ `G A ∧ B, ∆                  Γ, A ∧ B `G ∆

It is hard to imagine how to perform a similar construction in the systems H or N . We will meet the above abbreviations in the following chapters.

9: Some proof techniques
In the next chapter we will see that formulae of SL may be interpreted as propositions – statements possessing a truth value, true or false. The connective ¬ may then be interpreted as negation of the argument proposition, while → as (a kind of) implication. With this intuition, we may recognize some of the provable facts (either formulae or admissible rules) as giving rise to particular strategies of proof which can be – and are – utilized at all levels, in fact, throughout the whole of mathematics, as well as in much informal reasoning. Most facts from and about SL can be viewed in this way, but here we give only a few of the most common examples.
• As a trivial example, the provable equivalence `N B ↔ ¬¬B from (F.ix) means that in order to show a double negation ¬¬B, it suffices to show B. One will hardly try to say "I am not unmarried." – "I am married." is both more convenient and natural.
• Let G, D stand, respectively, for the statements 'Γ `N ⊥' and '∆ `N ⊥ for some ∆ ⊆ Γ' from the proof of theorem 4.25. In the second point, we showed ¬D → ¬G contrapositively, i.e., by showing G → D. That this is a legal and sufficient way of proving the first statement can be, at the present point, justified by appealing to (F.x) – (A → B) ↔ (¬B → ¬A) says precisely that proving one is the same as (equivalent to) proving the other.
• Another proof technique is expressed in corollary 4.15: A `N B iff `N A → B. Treating formulae on the left of `N as assumptions, this tells us that in order to prove that A implies B, A → B, we may prove A `N B, i.e., assume that A is true and show that then also B must be true.
• In exercise 4.2 you are asked to show admissibility of the following rule: A `N ⊥ / `N ¬A. Interpreting ⊥ as something which can never be true – a contradiction or, perhaps more generously, an absurdity – this rule actually expresses reductio ad absurdum, which we have seen in the chapter on the history of logic (Zeno's argument about Achilles and the tortoise): if A can be used to derive an absurdity, then A cannot be true, i.e. (applying the law of excluded middle), its negation must be.

Compulsory Exercises I (week 4)
exercise 4.1 Let A be a countable set. Show that A has countably many finite subsets.
1. Show first that for any n ∈ N, the set ℘n (A) of finite subsets – with exactly n elements – of A is countable.
2. Using a technique similar to the one from the drawing in example 1.25 on page 38, show that the union ⋃n∈N ℘n (A) of all these sets is countable.

exercise 4.2 Prove the following statements in N :


1. `N ¬A → (A → B)
(Hint: Complete the following proof:
1: `N ¬A → (¬B → ¬A)          A1
2: ¬A `N ¬B → ¬A              C.4.15
3:                            A3
4:                            MP(2, 3)
5:                            DT(4) )

2. ¬B, A `N ¬(A → B)
(Hint: Start as follows:
1: A, A → B `N A              A0
2: A, A → B `N A → B          A0
3: A, A → B `N B              MP(1, 2)
4: A `N (A → B) → B           DT(3)
..
Apply then lemma 4.17; you will also need corollary 4.15.)

3. `N A → (¬B → ¬(A → B))
4. `N (A → ⊥) → ¬A
5. Show now admissibility in N of the rules:

        `N A → ⊥                A `N ⊥
(a)    ──────────      (b)    ──────────
         `N ¬A                  `N ¬A

(Hint: for (a) use 4 and MP, and for (b) use (a) and the Deduction Theorem)

6. Prove the first formula in H, i.e.: 1′. `H ¬A → (A → B).
exercise 4.3+ Show the claim (F.xi), i.e., `N (A → A) ↔ (B → B). (Hint: use lemma 4.7 and then lemma 4.12.2.)

exercise 4.4+ Consider Gentzen's system G from section 8.
1. Show that all axioms of the N system are derivable in G. (Hint: Instead of pondering over the axioms to start with, apply the bottom-up strategy from 8.1.)

2. Using the same bottom-up strategy, prove the formulae 1., 2. and 3. from exercise 4.2 in G.

exercise 4.5+ Lemma 4.12 generalized lemma 4.9 to expressions involving assumptions Γ `N . . . We can, however, reformulate the rules in a different way, namely, by placing the antecedents of → to the left of `N . Show the admissibility in N of the rules:

        Γ `N B                  Γ, A `N B ; Γ, B `N C
1.    ───────────      2.      ───────────────────────
      Γ, A `N B                       Γ, A `N C

(1. must be shown directly by induction on the length of the proof of Γ `N B, without using corollary 4.15 – why? For 2. you can then use 4.15.)

exercise 4.6 Show that the following definition of consistency is equivalent to 4.23: Γ is consistent iff there is no formula A such that both Γ ` A and Γ ` ¬A. Hint: You should show that for arbitrary Γ one has that:

Γ ⊬N ⊥   iff   for no A : Γ `N A and Γ `N ¬A,

which is the same as showing that:

Γ `N ⊥   ⇔   for some A : Γ `N A and Γ `N ¬A.

The implication ⇒) follows easily from the assumption Γ `N ¬(B → B) and lemma 4.7. For the opposite one start as follows (use corollary 4.22 on 3: and then MP):
1: Γ `N A                         ass.
2: Γ `N ¬A                        ass.
3: Γ `N ¬¬(A → A) → ¬A            L.4.12.2 (2)
..


Chapter 5
Semantics of SL

• Semantics of SL
• Semantic properties of formulae
• Abbreviations
• Sets, SL and Boolean Algebras

In this chapter we are leaving aside the proofs and axioms from the previous chapter. For the time being, none of the concepts and discussion below should be related to any earlier results on axiomatic systems. (The connection to the proof systems will be studied in the following chapters.) We are now studying exclusively the language of SL – definition 4.4 – and the standard way of assigning meaning to its expressions.

1: Semantics of SL
a Background Story ♦ ♦
There is a huge field of Proof Theory which studies axiomatic systems per se, i.e., without reference to their possible meanings. This was the kind of study we were carrying out in the preceding chapter. As we emphasised at the beginning of that chapter, an axiomatic system may be given very different interpretations, and we will in this chapter see a few possibilities for interpreting the system of Statement Logic. Yet, axiomatic systems are typically introduced for the purpose of studying particular areas of interest or particular phenomena therein. They provide syntactic means for such a study: a language which is used to refer to various objects of some domain, and a proof calculus which, hopefully, captures some of the essential relationships between various aspects of the domain. As you should have gathered from the presentation of the history of logic, its original intention was to capture the patterns of correct reasoning which we otherwise carry out in natural language. Statement Logic, in particular, was conceived as a logic of statements or propositions: propositional variables may be interpreted as arbitrary statements, while the connectives are the means of constructing new statements from others. Thus, for instance, let us make the following reasoning:

           If it is raining, we will go to cinema.
           If we go to cinema, we will see a Kurosawa film.
and hence  If it is raining, we will see a Kurosawa film.

If we agree to represent the implication if ... then ... by the syntactic symbol →, this reasoning is represented by interpreting A as It will rain, B as We will go to cinema, C as We will see a Kurosawa film, and by the deduction

A → B ; B → C
──────────────
    A → C

As we have seen in lemma 4.9, this is a valid rule in the system `H . Thus, we might say that the system `H (as well as `N ) captures this aspect of our natural reasoning. However, one has to be very – indeed, extremely – careful with these kinds of analogies. They are never complete, and any formal system runs, sooner or later, into problems when confronted with the richness and sophistication of natural language. Consider the following argument:

           If I am in Paris then I am in France.
           If I am in Rome then I am in Italy.
and hence  If I am in Paris then I am in Italy,
           or else if I am in Rome then I am in France.


It does not look plausible, does it? Now, let us translate it into statement logic: P for being in Paris, F for being in France, R in Rome and I in Italy. Using Gentzen's rules with the standard reading of ∧ as 'and' and ∨ as 'or', we obtain (bottom-up):

R → I, P, R `G I, F, P   ;   F, R → I, P, R `G I, F
         P → F, R → I, P, R `G I, F
         P → F, R → I, P `G I, R → F
         P → F, R → I `G P → I, R → F
         P → F, R → I `G P → I ∨ R → F
         P → F ∧ R → I `G P → I ∨ R → F
         `G (P → F ∧ R → I) → (P → I ∨ R → F)

Our argument – the implication from (P → F and R → I) to (P → I or R → F ) – turns out to be provable in `G . (It is so in the other systems as well.) Logicians happen to have an answer to this particular problem (we will return to it in exercise 6.1). But there are other strange things which cannot be so easily answered. Typically, any formal system attempting to capture some area of discourse will capture only some part of it. Attempting to apply it beyond this area leads inevitably to counterintuitive phenomena. Statement logic attempts to capture some simple patterns of reasoning at the level of propositions. A proposition can be thought of as a declarative sentence which may be assigned a unique truth value. The sentence "It is raining" is either true or false. Thus, the intended and possible meanings of propositions are truth values: true or false. Now, the meaning of the proposition If it rains, we will go to a cinema, A → B, can be construed as: if 'it is true that it will rain' then 'it is true that we will go to a cinema'. The implication A → B says that if A is true then B must be true as well. Now, since this implication is itself a proposition, it will have to be given a truth value as its meaning. And this truth value will depend on the truth values of its constituents: the propositions A and B. If A is true (it is raining) but B is false (we are not going to a cinema), the whole implication A → B is false. And now comes the question: what if A is false? Did the implication A → B assert anything about this situation? No, it did not. If A is false (it is not raining), we may go to a cinema or we may stay at home – I haven't said anything about that case. Yet, the proposition has to have a meaning for all possible values of its parts. In this case – when the antecedent A is false – the whole implication A → B is declared true, irrespective of the truth value of B.
You should notice that here something special is happening which does not necessarily correspond so closely to our intuition. And indeed, it is something very strange! If I am a woman, then you are Dalai Lama. Since I am not a woman, the implication happens to be true! But, as you know, this does not mean that you are Dalai Lama. This example, too, can be explained by the same argument as the above one (to be indicated in exercise 6.1). However, the following implication is true, too, and there is no formal way of excusing it being so or explaining it away: If it is not true that when I am a man then I am a man, then you are Dalai Lama, ¬(M → M ) → D. It is correct, it is true and ... it seems to be entirely meaningless. In short, formal correctness and accuracy do not always correspond to something meaningful in natural language, even if such a correspondence was the original motivation. The discrepancy indicated above concerned, primarily, the discrepancy between our intuition about the meaning of sentences and their representation in a syntactic system. But the same problem occurs at yet another level – the same or analogous discrepancies occur between our intuitive understanding of the world and the formal semantic model of the world. Thinking about axiomatic systems as tools for modelling the world, we might be tempted to look at the relation as illustrated on the left side of the following figure: an axiomatic system modelling the world. In truth, however, the relation is more complicated, as illustrated on the right of the figure.


[Figure: two diagrams. On the left, an axiomatic system modelling "The World" directly; on the right, an axiomatic system connected to "The World" only through an intermediate formal semantics.]

An axiomatic system never addresses the world directly. What it addresses is a possible semantic model which tries to give a formal representation of the world. As we have mentioned several times, a given axiomatic system may be given various interpretations – all such interpretations will be possible formal semantic models of the system. To what extent these semantic models capture our intuition about the world is a different question. It is the question about "correctness" or "incorrectness" of modelling – an axiomatic system in itself is neither, because it can be endowed with different interpretations. The problems indicated above were really problems with the semantic model of natural language which was implicitly introduced by assuming that statements are to be interpreted as truth values. We will now endeavour to study the semantics – meaning – of the syntactic expressions of SL. We will see some alternative semantics, starting with the standard one based on the so-called "truth functions" (which we will call "boolean functions"). To avoid confusion and surprises, one should always keep in mind that we are not talking about the world but are defining a formal model of SL which, at best, can provide an imperfect link between the syntax of SL and the world. The formality of the model, as always, will introduce some discrepancies like those described above, and many things may turn out not exactly as we would expect them to be in the real world. ♦

♦

Let B be a set with two elements. Any such set would do but, for convenience, we will typically let B = {1, 0}. Whenever one tries to capture the meaning of propositions as their truth value, and uses Statement Logic with this intention, one interprets B as the set {true, false}. Since this gives too strong associations and often leads to incorrect intuitions without improving anything, we will avoid the words true and false. Instead we will talk about "boolean values" (1 and 0) and "boolean functions". If the word "truth" appears, it may be safely replaced with "boolean".
For any n ≥ 0 we can imagine various functions mapping B^n → B. For instance, for n = 2, a function f : B × B → B can be defined by f(1, 1) =def 1, f(1, 0) =def 0, f(0, 1) =def 1 and f(0, 0) =def 1. It can be written more concisely as the table:

x  y  f(x, y)
1  1  1
1  0  0
0  1  1
0  0  1

The first n columns contain all the possible combinations of the arguments (giving 2^n distinct rows), and the last column specifies the value of the function for each combination of the arguments. For each of the 2^n rows a function takes one of the two possible values, so for any n there are exactly 2^(2^n) different functions B^n → B. For n = 0, there are only two (constant) functions, for n = 1 there will be four distinct functions (which ones?) and so on. Surprisingly (or not), the language of SL is designed exactly for describing such functions!

Definition 5.1 An SL structure consists of:
1. A domain with two elements, called boolean values, {1, 0}


2. Interpretation of the connectives, ¬ : B → B and → : B^2 → B, given by the boolean tables:⁶

x  ¬x          x  y  x → y
1  0           1  1  1
0  1           1  0  0
               0  1  1
               0  0  1

Given an alphabet Σ, an SL structure for Σ is an SL structure with
3. an assignment of boolean values to all propositional variables, i.e., a function V : Σ → {1, 0}. (Such a V is also called a valuation of Σ.)
Thus connectives are interpreted as functions on the set {1, 0}. To distinguish the two, we use the simple symbols ¬ and → when talking about syntax, and the underlined ones when talking about the semantic interpretation as boolean functions. ¬ is interpreted as the function ¬ : {1, 0} → {1, 0} defined by ¬(1) =def 0 and ¬(0) =def 1. → is binary and represents one of the functions from {1, 0}^2 into {1, 0}.

Example 5.2 Let Σ = {a, b}. V = {a ↦ 1, b ↦ 1} is a Σ-structure (i.e., a structure interpreting all symbols from Σ) assigning 1 (true) to both variables. V = {a ↦ 1, b ↦ 0} is another Σ-structure.
Let Σ = {'John smokes', 'Mary sings'}. Here 'John smokes' is a propositional variable (with a rather lengthy name). V = {'John smokes' ↦ 1, 'Mary sings' ↦ 0} is a Σ-structure in which both "John smokes" and "Mary does not sing". □

The domain of interpretation has two boolean values 1 and 0, and so we can imagine various functions, in addition to those interpreting the connectives. As remarked above, for arbitrary n ≥ 0 there are 2^(2^n) distinct functions mapping {1, 0}^n into {1, 0}.

Example 5.3 Here is an example of a (somewhat involved) boolean function F : {1, 0}^3 → {1, 0}:

x  y  z  F(x, y, z)
1  1  1  1
1  1  0  1
1  0  1  1
1  0  0  0
0  1  1  1
0  1  0  1
0  0  1  1
0  0  0  1
                                                   □

Notice that in Definition 5.1 only the valuation differs from structure to structure. The interpretation of the connectives is always the same – for any Σ, it is fixed once and for all as the specific boolean functions. Hence, given a valuation V , there is a canonical way of extending it to an interpretation of all formulae – a valuation of propositional variables induces a valuation of all well-formed formulae. We sometimes write V̄ for this extended valuation. This is given in the following definition which, intuitively, corresponds to the fact that if we know that 'John smokes' and 'Mary does not sing', then we also know that 'John smokes and Mary does not sing', or else that it is not true that 'John does not smoke'.

Definition 5.4 Any valuation V : Σ → {1, 0} induces a unique valuation V̄ : WFF_SL^Σ → {1, 0} as follows:
1. for A ∈ Σ : V̄(A) = V(A)
2. for A = ¬B : V̄(A) = ¬(V̄(B))
3. for A = (B → C) : V̄(A) = V̄(B) → V̄(C)

⁶ The standard name for such tables is "truth tables" but since we are trying not to misuse the word "truth", we stay consistent by replacing it here, too, with "boolean".
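Definition 5.4 is, in effect, a recursive program. A minimal sketch (the encoding is ours, not the book's): a propositional variable is a string, negation is the tuple ('not', B) and implication is ('imp', B, C).

```python
def value(V, A):
    """The induced valuation of Definition 5.4: V maps variables to 0/1."""
    if isinstance(A, str):
        return V[A]                                   # clause 1
    if A[0] == 'not':
        return 1 - value(V, A[1])                     # clause 2: table of negation
    B, C = A[1], A[2]
    return 0 if value(V, B) == 1 and value(V, C) == 0 else 1   # clause 3: table of ->

V = {'a': 1, 'b': 0}
print(value(V, ('imp', 'a', 'b')))            # 0
print(value(V, ('not', ('imp', 'a', 'b'))))   # 1
```

The recursion terminates because each call descends to a proper subformula.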


For the purposes of this section it is convenient to assume that some total ordering has been selected for the propositional variables, so that for instance a "comes before" b, which again "comes before" c.

Example 5.5 Given the alphabet Σ = {a, b, c}, we use the fixed interpretation of the connectives to determine the boolean value of, for instance, a → (¬b → c) as follows:

a  b  ¬b  c  ¬b → c  a → (¬b → c)
1  1  0   1  1       1
1  1  0   0  1       1
1  0  1   1  1       1
1  0  1   0  0       0
0  1  0   1  1       1
0  1  0   0  1       1
0  0  1   1  1       1
0  0  1   0  0       1
                                                   □

Ignoring the intermediary columns, this table displays exactly the same dependence of the entries in the last column on the entries in the first three as the function F from Example 5.3. We say that the formula a → (¬b → c) determines the function F . The general definition is given below.

Definition 5.6 For any formula B, let {b1 , . . . , bn } be the propositional variables in B, listed in increasing order. Each assignment V : {b1 , . . . , bn } → {1, 0} determines a unique boolean value V̄(B). Hence, each formula B determines a function B : {1, 0}^n → {1, 0}, given by the equation B(x1 , . . . , xn ) = V̄(B) for the valuation V = {b1 ↦ x1 , . . . , bn ↦ xn }.

Example 5.7 Suppose a and b are in Σ, and a comes before b in the ordering. Then (a → b) determines the function →, while (b → a) determines the function ← with the boolean table shown below.

x  y  x → y  x ← y
1  1  1      1
1  0  0      1
0  1  1      0
0  0  1      1
                                                   □

Observe that although for a given n there are exactly 2^(2^n) boolean functions, there are infinitely many formulae over n propositional variables. Thus, different formulae will often determine the same boolean function. Deciding which formulae determine the same functions is an important problem which we will soon encounter.
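The function determined by a formula can be tabulated mechanically by running over all 2^n valuations. A self-contained sketch (encoding and evaluator as in our earlier illustration, not from the book):

```python
from itertools import product

def value(V, A):
    """Evaluate a formula (strings, ('not', A), ('imp', A, B)) under valuation V."""
    if isinstance(A, str):
        return V[A]
    if A[0] == 'not':
        return 1 - value(V, A[1])
    return 0 if value(V, A[1]) == 1 and value(V, A[2]) == 0 else 1

def determined_function(formula, variables):
    # Rows in the book's order: variables listed increasingly, value 1 before 0.
    return [bits + (value(dict(zip(variables, bits)), formula),)
            for bits in product((1, 0), repeat=len(variables))]

# a -> (not b -> c) determines the function F of Example 5.3:
rows = determined_function(('imp', 'a', ('imp', ('not', 'b'), 'c')), ['a', 'b', 'c'])
print([r[-1] for r in rows])   # [1, 1, 1, 0, 1, 1, 1, 1]
```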

2: Semantic properties of formulae
A formula determines a boolean function, and we now list some semantic properties of formulae, i.e., properties which are actually properties of the induced functions.

Definition 5.8 Let A, B ∈ WFFSL , and V be a valuation.

A is ...                             iff (condition holds)            notation
satisfied in V                       V(A) = 1                         V |= A
not satisfied in V                   V(A) = 0                         V ⊭ A
valid / tautology                    for all V : V |= A               |= A
not valid                            there is a V : V ⊭ A             ⊭ A
satisfiable                          there is a V : V |= A
unsatisfiable / contradiction        for all V : V ⊭ A
(tauto)logical consequence of B      B → A is valid                   B ⇒ A
(tauto)logically equivalent to B     A ⇒ B and B ⇒ A                  A ⇔ B


If A is satisfied in V , we say that V satisfies A. Otherwise V falsifies A. (Sometimes, one also says that A is valid in V , when A is satisfied in V . But notice that validity of A in V does not mean or imply that A is valid (in general), only that it is satisfiable.) Valid formulae – those satisfied in all structures – are also called tautologies; the unsatisfiable ones contradictions, while the not valid ones are said to be falsifiable. Those which are both falsifiable and satisfiable, i.e., which are neither tautologies nor contradictions, are called contingent. A valuation which satisfies a formula A is also called a model of A.
Sets of formulae are sometimes called theories. Many of the properties defined for formulae are defined for theories as well. Thus a valuation is said to satisfy a theory iff it satisfies every formula in the theory. Such a valuation is also said to be a model of the theory. The class of all models of a given theory Γ is denoted Mod(Γ). Like a single formula, a set of formulae Γ is satisfiable iff it has a model, i.e., iff Mod(Γ) ≠ ∅.

Example 5.9 a → b is not a tautology – assign V (a) = 1 and V (b) = 0. Hence a ⇒ b does not hold. However, it is satisfiable, since it is true, for instance, under the valuation {a ↦ 1, b ↦ 1}. The formula is contingent. B → B evaluates to 1 for any valuation (and any B ∈ WFFSL ), and so B ⇒ B. As a last example, we have that B ⇔ ¬¬B.

B  B → B          B  ¬B  ¬¬B  B → ¬¬B and ¬¬B → B
0  1              1  0   1    1 and 1
1  1              0  1   0    1 and 1
                                                   □
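All the properties of Definition 5.8 are decidable by brute force: just run over all valuations of a formula's variables. A small sketch (formula encoding ours, as in the earlier illustrations):

```python
from itertools import product

def value(V, A):
    if isinstance(A, str):
        return V[A]
    if A[0] == 'not':
        return 1 - value(V, A[1])
    return 0 if value(V, A[1]) == 1 and value(V, A[2]) == 0 else 1

def variables(A):
    return {A} if isinstance(A, str) else set().union(*(variables(s) for s in A[1:]))

def valuations(A):
    vs = sorted(variables(A))
    return [dict(zip(vs, bits)) for bits in product((1, 0), repeat=len(vs))]

def is_tautology(A):   return all(value(V, A) == 1 for V in valuations(A))
def is_satisfiable(A): return any(value(V, A) == 1 for V in valuations(A))

# Example 5.9: a -> b is contingent, B -> B is a tautology.
a_imp_b = ('imp', 'a', 'b')
print(is_tautology(a_imp_b), is_satisfiable(a_imp_b))   # False True
print(is_tautology(('imp', 'a', 'a')))                  # True
```

Of course this check takes 2^n steps for n variables; whether it can be done essentially faster is the famous SAT question, which the book does not address here.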

The operators ⇒ and ⇔ are meta-connectives, stating that a corresponding relation (→ and ↔, respectively) between the two formulae holds for all boolean assignments. These operators are therefore used only at the outermost level, as in A ⇒ B – we avoid something like A ⇔ (A ⇒ B) or A → (A ⇔ B).

Fact 5.10 We have the obvious relations between the sets of Sat(isfiable), Fal(sifiable), Taut(ological), Contr(adictory) and All formulae:
• Contr ⊂ Fal
• Taut ⊂ Sat
• Fal ∩ Sat ≠ ∅
• All = Taut ∪ Contr ∪ (Fal ∩ Sat)

3: Abbreviations
Intuitively, ¬ is supposed to express negation and we read ¬B as "not B". → corresponds to implication: A → B is similar to "if A then B". These formal symbols and their semantics are not exact counterparts of the natural language expressions, but they do try to mimic the latter as far as possible. In natural language there are several other connectives but, as we will see in chapter 5, the two we have introduced for SL are all that is needed. We will, however, try to make our formulae shorter – and more readable – by using the following abbreviations:

Definition 5.11 We define the following abbreviations:
• A ∨ B =def ¬A → B, read as "A or B"
• A ∧ B =def ¬(A → ¬B), read as "A and B"
• A ↔ B =def (A → B) ∧ (B → A), read as "A if and only if B" (!Not to be confused with the provable equivalence from definition 4.20!)

Example 5.12 Some intuitive justification for the reading of these abbreviations comes from the boolean tables for the functions they denote. For instance, the table for ∧ is constructed according to its definition:

x  y  ¬y  x → ¬y  x ∧ y =def ¬(x → ¬y)
1  1  0   0       1
1  0  1   1       0
0  1  0   1       0
0  0  1   1       0


Thus A ∧ B evaluates to 1 (true) iff both components are true. (In exercise 5.1 you are asked to do the analogous thing for ∨.) □
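The abbreviations of Definition 5.11 can be read as function definitions: each derived connective is computed from the tables of ¬ and → alone. A sketch (the names are ours):

```python
neg = lambda x: 1 - x
imp = lambda x, y: 0 if x == 1 and y == 0 else 1

disj = lambda x, y: imp(neg(x), y)              # x or y   =  not x -> y
conj = lambda x, y: neg(imp(x, neg(y)))         # x and y  =  not (x -> not y)
iff  = lambda x, y: conj(imp(x, y), imp(y, x))  # x iff y

# Reproduces the table of Example 5.12: conjunction is 1 only on (1, 1).
print([conj(x, y) for x in (1, 0) for y in (1, 0)])   # [1, 0, 0, 0]
```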

4: Sets, Propositions and Boolean Algebras
We have defined the semantics of SL by interpreting the connectives as functions over B. Some consequences, in the form of laws which follow from this definition, are listed in subsection 4.1. In subsection 4.2 we observe a close relationship to the laws obeyed by the set operations, and then define an alternative semantics of the language of SL based on the set-interpretation. Finally, in subsection 4.3, we gather these similarities in the common concept of a boolean algebra.

4.1: Laws
The definitions of the semantics of the connectives, together with the introduced abbreviations, entitle us to conclude the validity of some laws for SL.

1. Idempotency
   A ∨ A ⇔ A
   A ∧ A ⇔ A
2. Associativity
   (A ∨ B) ∨ C ⇔ A ∨ (B ∨ C)
   (A ∧ B) ∧ C ⇔ A ∧ (B ∧ C)
3. Commutativity
   A ∨ B ⇔ B ∨ A
   A ∧ B ⇔ B ∧ A
4. Distributivity
   A ∨ (B ∧ C) ⇔ (A ∨ B) ∧ (A ∨ C)
   A ∧ (B ∨ C) ⇔ (A ∧ B) ∨ (A ∧ C)
5. DeMorgan
   ¬(A ∨ B) ⇔ ¬A ∧ ¬B
   ¬(A ∧ B) ⇔ ¬A ∨ ¬B
6. Conditional
   A → B ⇔ ¬A ∨ B
   A → B ⇔ ¬B → ¬A

For instance, idempotency of ∧ is verified directly from the definition of ∧, as follows:

A  ¬A  A → ¬A  ¬(A → ¬A) =def A ∧ A
1  0   0       1
0  1   1       0

The other laws can (and should) be verified in a similar manner. A ⇔ B means that for all valuations (of the propositional variables occurring in A and B) the truth values of both formulae are the same. This means almost that they determine the same function, with one restriction which is discussed in exercise 5.10. For any A, B, C the two formulae (A ∧ B) ∧ C and A ∧ (B ∧ C) are distinct. However, as they are tautologically equivalent it is not always a very urgent matter to distinguish between them. In general, there are a great many ways to insert the missing parentheses in an expression like A1 ∧ A2 ∧ . . . ∧ An , but since they all yield equivalent formulae we usually do not care where these parentheses go. Hence for a sequence A1 , A2 , . . . , An of formulae we may just talk about their conjunction and mean any formula obtained by supplying missing parentheses to the expression A1 ∧ A2 ∧ . . . ∧ An . Analogously, the disjunction of A1 , A2 , . . . , An is any formula obtained by supplying missing parentheses to the expression A1 ∨ A2 ∨ . . . ∨ An . Moreover, the laws of commutativity and idempotency tell us that order and repetition don’t matter either. Hence we may talk about the conjunction of the formulae in some finite set, and mean any conjunction formed by the elements in some order or other. Similarly for disjunction. The elements A1 , . . . , An of a conjunction A1 ∧ . . . ∧ An are called the conjuncts. The term disjunct is used analogously.
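The remaining verifications are purely mechanical, so they can also be delegated to a few lines of code looping over all valuations – a sketch, with the connectives defined from ¬ and → as in Definition 5.11:

```python
from itertools import product

neg = lambda x: 1 - x
imp = lambda x, y: 0 if x == 1 and y == 0 else 1
disj = lambda x, y: imp(neg(x), y)              # abbreviation for "or"
conj = lambda x, y: neg(imp(x, neg(y)))         # abbreviation for "and"

for a, b, c in product((0, 1), repeat=3):
    assert conj(a, a) == a                                        # idempotency
    assert disj(a, conj(b, c)) == conj(disj(a, b), disj(a, c))    # distributivity
    assert neg(disj(a, b)) == conj(neg(a), neg(b))                # DeMorgan
    assert imp(a, b) == disj(neg(a), b)                           # conditional
    assert imp(a, b) == imp(neg(b), neg(a))                       # contraposition
print("all laws hold")
```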

4.2: Sets and SL Compare the set laws 1.– 5. from page 1 with the tautological equivalences from the previous subsection. It is easy to see that they have “corresponding form” and can be obtained from each


other by the following translations.

set-expression                     −    statement
set variable a, b, ...             −    propositional variable a, b, ...
′ (complement)                     −    ¬
∩                                  −    ∧
∪                                  −    ∨
=                                  −    ⇔

One also translates:
U                                  −    ⊤
∅                                  −    ⊥

Remark 5.13 [Formula- vs. set-operations] Although there is some sense of connection between the subset relation ⊆ and implication →, the two have very different functions. The latter allows us to construct new propositions. The former, ⊆, is not a set-building operation: A ⊆ B does not denote a (new) set but states a relation between two sets. The consistency principles are not translated because they are not so much laws as definitions introducing a new relation ⊆ which holds only under the specified conditions. In order to find a set operation corresponding to →, we should reformulate the syntactic definition 5.11 and verify that A → B ⇔ ¬A ∨ B. The corresponding set-building operation . would then be defined by A . B =def A′ ∪ B. We do not have a propositional counterpart of the set minus \ operation, and so the second complement law A \ B = A ∩ B′ has no propositional form. However, this law says that in the propositional case we can merely use the expression A ∧ ¬B corresponding to A ∩ B′ . We may translate the remaining set laws, e.g., 3. A ∩ A′ = Ø as A ∧ ¬A ⇔ ⊥, etc. Using the definition of ∧, we then get ⊥ ⇔ A ∧ ¬A ⇔ ¬A ∧ A ⇔ ¬(¬A → ¬A), which is an instance of the formula ¬(B → B). □

Let us see if we can discover the reason for this exact match of laws. For the time being let us ignore the superficial differences of syntax, and settle for the logical symbols on the right. Expressions built up from Σ with the use of these we call boolean expressions, BEΣ . As an alternative to a valuation V : Σ → {0, 1} we may consider a set-valuation SV : Σ → ℘(U ), where U is any non-empty set. Thus, instead of the boolean-value semantics in the set B, we are defining a set-valued semantics in an arbitrary set U . Such an SV can be extended to an interpretation of all boolean expressions (which we again write SV ) according to the rules:

SV (a) = SV (a) for all a ∈ Σ
SV (⊤) = U
SV (⊥) = ∅
SV (¬A) = U \ SV (A)
SV (A ∧ B) = SV (A) ∩ SV (B)
SV (A ∨ B) = SV (A) ∪ SV (B)

Lemma 5.14 Let x ∈ U be arbitrary, and let V : Σ → {1, 0} and SV : Σ → ℘(U ) be such that for all a ∈ Σ we have x ∈ SV (a) iff V (a) = 1. Then for all A ∈ BEΣ we have x ∈ SV (A) iff V (A) = 1.

Proof By induction on the complexity of A. Everything follows from the boolean tables of ⊤, ⊥, ¬, ∧, ∨ and the observations below.

x ∈ U          always
x ∈ ∅          never
x ∈ P′         iff   x ∉ P
x ∈ P ∩ Q      iff   x ∈ P and x ∈ Q
x ∈ P ∪ Q      iff   x ∈ P or x ∈ Q
                                                   QED (5.14)

Example 5.15 Let Σ = {a, b, c}, U = {4, 5, 6, 7} and choose x ∈ U to be 4. The upper part of the table shows an example


of a valuation and set-valuation satisfying the conditions of the lemma, and the lower part the values of some formulae (boolean expressions) under these valuations.

{1, 0}  ←V    Σ          SV→   ℘({4, 5, 6, 7})
1       ←     a           →    {4, 5}
1       ←     b           →    {4, 6}
0       ←     c           →    {5, 7}

{1, 0}  ←V    BESL        SV→   ℘({4, 5, 6, 7})
1       ←     a ∧ b        →    {4}
0       ←     ¬a           →    {6, 7}
1       ←     a ∨ c        →    {4, 5, 7}
0       ←     ¬(a ∨ c)     →    {6}

The four formulae illustrate the general fact that, for any A ∈ BESL , we have V (A) = 1 ⇔ 4 ∈ SV (A). □
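The set-valued semantics SV and the claim of Example 5.15 can be checked with a few lines of Python. A sketch under our own encoding (here ∨ and ∧ are taken as primitive, since SV interprets them directly):

```python
U = {4, 5, 6, 7}
SV0 = {'a': {4, 5}, 'b': {4, 6}, 'c': {5, 7}}   # the set-valuation of Example 5.15

def SV(A):
    """Extend the set-valuation to boolean expressions, per the rules above."""
    if isinstance(A, str):
        return set(SV0[A])
    if A[0] == 'not':
        return U - SV(A[1])
    if A[0] == 'or':
        return SV(A[1]) | SV(A[2])
    return SV(A[1]) & SV(A[2])                  # 'and'

# Each line prints: True (the table value), then the membership test "4 in SV(A)",
# which by Lemma 5.14 equals V(A) = 1.
for A, expected in [(('and', 'a', 'b'), {4}),
                    (('not', 'a'), {6, 7}),
                    (('or', 'a', 'c'), {4, 5, 7}),
                    (('not', ('or', 'a', 'c')), {6})]:
    print(SV(A) == expected, 4 in SV(A))
```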

The set-identities on page 1 say that the BE's on each side of an identity are interpreted identically by any set-valuation. Hence the corollary below expresses the correspondence between the set-identities and tautological equivalences.

Corollary 5.16 Let A, B ∈ BE. Then SV (A) = SV (B) for all set-valuations SV iff A ⇔ B.

Proof The idea is to show that for every set-valuation that interprets A and B differently, there is some valuation that interprets them differently, and conversely.
⇐) First suppose SV (A) ≠ SV (B). Then there is some x ∈ U that is contained in one but not the other. Let Vx be the valuation such that for all a ∈ Σ, Vx (a) = 1 iff x ∈ SV (a). Then Vx (A) ≠ Vx (B) follows from lemma 5.14.
⇒) Now suppose V (A) ≠ V (B). Let Vset be the set-valuation into ℘({1}) such that for all a ∈ Σ, 1 ∈ Vset (a) iff V (a) = 1. Again lemma 5.14 applies, and Vset (A) ≠ Vset (B) follows.

QED (5.16)

This corollary provides an explanation for the validity of essentially the same laws for statement logic and for sets. These laws were universal, i.e., they stated equality of some set expressions for all possible sets and, on the other hand, logical equivalence of the corresponding logical formulae for all possible valuations. We can now rewrite any valid equality A = B between set expressions as A′ ⇔ B′ , where primed symbols indicate the corresponding logical formulae, and vice versa. The corollary says that one is valid if and only if the other one is.
Let us reflect briefly on this result, which is quite significant. First, observe that the semantics with which we started, namely the one interpreting connectives and formulae over the set B, turns out to be a special case of the set-based semantics. We said that B may be an arbitrary two-element set. Now, take U = {•}; then ℘(U ) = {∅, {•}} has two elements. Using • as the "designated" element x (x from lemma 5.14), the set-based semantics over this set will coincide with the propositional semantics which identifies ∅ with 0 and {•} with 1. Reinterpreting the corollary with this in mind, i.e., substituting ℘({•}) for B, tells us that A = B is valid (in all possible ℘(U ) for all possible assignments) iff it is valid in ℘({•})! In other words, to check if some set equality holds under all possible interpretations of the involved set variables, it is enough to check if it holds under all possible interpretations of these variables in the structure ℘({•}). (One says that this structure is a canonical representative of all such set-based interpretations of propositional logic.) We have thus reduced a problem which might seem to involve infinitely many possibilities (all possible sets standing for each variable) to the simple task of checking the solutions by substituting only {•} or ∅ for the involved variables.
4.3: Boolean Algebras

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . [optional]

III.2. Semantics of SL

The discussion in subsection 4.2 shows the concrete connection between the set interpretation and the standard interpretation of the language of SL. The fact that both set operations and

(functions interpreting the) propositional connectives obey essentially the same laws can, however, be stated more abstractly – they are both examples of yet other, general structures called “boolean algebras”.

Definition 5.17 The language of boolean algebra is given by 1) the set of boolean expressions, BEΣ, relative to a given alphabet Σ of variables:

Basis :: 0, 1 ∈ BEΣ and Σ ⊂ BEΣ
Induction :: If t ∈ BEΣ then −t ∈ BEΣ
          :: If s, t ∈ BEΣ then (s + t) ∈ BEΣ and (s ∗ t) ∈ BEΣ

and by 2) the formulae, which are equations s ≡ t where s, t ∈ BEΣ. A boolean algebra is any set X with interpretation
• of 0, 1 as constants 0, 1 ∈ X (“bottom” and “top”);
• of − as a unary operation − : X → X (“complement”);
• of +, ∗ as binary operations +, ∗ : X² → X (“join” and “meet”); and
• of ≡ as identity, =,

satisfying the following axioms:

1. Neutral elements :  x + 0 ≡ x        x ∗ 1 ≡ x
2. Associativity    :  (x + y) + z ≡ x + (y + z)        (x ∗ y) ∗ z ≡ x ∗ (y ∗ z)
3. Commutativity    :  x + y ≡ y + x        x ∗ y ≡ y ∗ x
4. Distributivity   :  x + (y ∗ z) ≡ (x + y) ∗ (x + z)        x ∗ (y + z) ≡ (x ∗ y) + (x ∗ z)
5. Complement       :  x ∗ (−x) ≡ 0        x + (−x) ≡ 1
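The axioms can be checked concretely in a powerset algebra. The sketch below (our code, not the book's) verifies a representative half of each axiom pair 1–5 in ℘(U) for a small U, with ∪, ∩ and set complement interpreting +, ∗ and −:

```python
from itertools import combinations, product

# Sketch: P(U) with union/intersection/complement satisfies the
# boolean-algebra axioms. One representative identity per axiom is checked.
U = frozenset({1, 2})
powerset = [frozenset(c) for r in range(len(U) + 1)
            for c in combinations(U, r)]

join = lambda x, y: x | y        # +
meet = lambda x, y: x & y        # *
neg  = lambda x: U - x           # -
ZERO, ONE = frozenset(), U       # 0, 1

axioms = [
    lambda x, y, z: join(x, ZERO) == x and meet(x, ONE) == x,              # 1
    lambda x, y, z: join(join(x, y), z) == join(x, join(y, z)),            # 2
    lambda x, y, z: join(x, y) == join(y, x) and meet(x, y) == meet(y, x), # 3
    lambda x, y, z: join(x, meet(y, z)) == meet(join(x, y), join(x, z)),   # 4
    lambda x, y, z: meet(x, neg(x)) == ZERO and join(x, neg(x)) == ONE,    # 5
]
assert all(ax(x, y, z) for ax in axioms
           for x, y, z in product(powerset, repeat=3))
print("P(U) is a boolean algebra for U =", set(U))
```

The same loop with a larger U (or with min/max over {0, 1}) checks the other instances of this claim.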

Be wary of confusing the meaning of the symbols “0, 1, +, −, ∗” above with the usual meaning of arithmetic zero, one, plus, etc. – they have nothing in common except the superficial syntax! Roughly speaking, the word “algebra” stands here for the fact that the only formulae are equalities and reasoning happens by using the properties of equality:

reflexivity  : x ≡ x
symmetry     : from x ≡ y conclude y ≡ x
transitivity : from x ≡ y and y ≡ z conclude x ≡ z

and by “substituting equals for equals”, according to the rule:

(F.xv)  from g[x] ≡ z and x ≡ y, derive g[y] ≡ z

(Compare this to the provable equivalence from theorem 4.21, in particular the rule from 4.22.) Other laws, which we listed before, are derivable in this manner from the above axioms. For instance:

• Idempotency of ∗:
(F.xvi)  x ≡ x ∗ x
is shown as follows, where ≡(n) indicates the axiom used:
x ≡(1) x ∗ 1 ≡(5) x ∗ (x + (−x)) ≡(4) (x ∗ x) + (x ∗ (−x)) ≡(5) (x ∗ x) + 0 ≡(1) x ∗ x
(Similarly, x ≡ x + x.)

• Another fact is a form of absorption:
(F.xvii)  0 ∗ x ≡ 0  and  x + 1 ≡ 1
0 ∗ x ≡(5) (x ∗ (−x)) ∗ x ≡(3) ((−x) ∗ x) ∗ x ≡(2) (−x) ∗ (x ∗ x) ≡(F.xvi) (−x) ∗ x ≡(5) 0
x + 1 ≡(5) x + (x + (−x)) ≡(2) (x + x) + (−x) ≡(F.xvi) x + (−x) ≡(5) 1

• The complement of any x is determined uniquely by the two properties from 5: any y satisfying both these properties is necessarily x’s complement:

(F.xviii)  if a) x + y ≡ 1 and b) y ∗ x ≡ 0, then y ≡ −x

y ≡(1) y ∗ 1 ≡(5) y ∗ (x + (−x)) ≡(4) (y ∗ x) + (y ∗ (−x)) ≡(b) 0 + (y ∗ (−x)) ≡(5) (x ∗ (−x)) + (y ∗ (−x)) ≡(3,4) (x + y) ∗ (−x) ≡(a) 1 ∗ (−x) ≡(3,1) −x


Statement Logic

• Involution:
(F.xix)  −(−x) ≡ x
follows from (F.xviii). By 5 we have x ∗ (−x) ≡ 0 and x + (−x) ≡ 1 which, by (F.xviii), imply that x ≡ −(−x).

The new notation used in definition 5.17 was meant to emphasize the fact that boolean algebras are more general structures. It should be obvious, however, that the intended interpretation of these new symbols is as follows:

    sets ℘(U)        boolean algebra        SL {1, 0}
    x ∈ ℘(U)    ←    x                 →    x ∈ {1, 0}
    x ∪ y       ←    x + y             →    x ∨ y
    x ∩ y       ←    x ∗ y             →    x ∧ y
    x′          ←    −x                →    ¬x
    ∅           ←    0                 →    0
    U           ←    1                 →    1
    =           ←    ≡                 →    ⇔

The fact that any set ℘(U) obeys the set laws from page 30, and that the set B = {1, 0} obeys the SL-laws from 4.1, amounts to the statement that these structures are, in fact, boolean algebras under the above interpretation of the boolean operations. (Not all the axioms of boolean algebras were included there, so one has to verify, for instance, the laws for neutral elements and complement, but this is an easy task.) Thus, all the above formulae (F.xvi)–(F.xix) are valid for these structures under the interpretation from the table above, i.e.:

    ℘(U)-law         boolean algebra law        SL-law
    A ∩ A = A    ←   x ∗ x = x             →    A ∧ A ⇔ A        (F.xvi)
    ∅ ∩ A = ∅    ←   0 ∗ x = 0             →    ⊥ ∧ A ⇔ ⊥        (F.xvii)
    U ∪ A = U    ←   1 + x = 1             →    ⊤ ∨ A ⇔ ⊤        (F.xvii)
    (A′)′ = A    ←   −(−x) = x             →    ¬(¬A) ⇔ A        (F.xix)
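The SL column of the table can be spot-checked in the two-element boolean algebra B = {0, 1}; a sketch, with our own operation names:

```python
# Sketch: checking the derived laws (F.xvi)-(F.xix) in B = {0, 1},
# where + is max (∨), * is min (∧) and - is 1 - x (¬).
B = [0, 1]
p = lambda x, y: max(x, y)   # +
m = lambda x, y: min(x, y)   # *
n = lambda x: 1 - x          # -

for x in B:
    assert m(x, x) == x      # (F.xvi)  x * x = x
    assert m(0, x) == 0      # (F.xvii) 0 * x = 0
    assert p(1, x) == 1      # (F.xvii) 1 + x = 1
    assert n(n(x)) == x      # (F.xix)  -(-x) = x
print("derived laws hold in B")
```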

The last fact for SL was, for instance, verified at the end of example 5.9. Thus the two possible semantics for our WFF_SL, namely, the set {1, 0} with the logical interpretation of the (boolean) connectives as ¬, ∧, etc. on the one hand, and an arbitrary ℘(U) with the interpretation of the (boolean) connectives as ′, ∩, etc. on the other, are both boolean algebras. Now, we said that boolean algebras come with a reasoning system – equational logic – which allows us to prove equations A ≡ B, where A, B ∈ BE. On the other hand, the axiomatic systems for SL, e.g., Hilbert’s system ⊢H, proved only simple boolean expressions: ⊢H A. Are these two reasoning systems related in some way? They are, indeed, but we will not study the precise relationship in detail. At this point we only state the following fact: if ⊢H A then also the equation A ≡ 1 is provable in equational logic, where A ≡ 1 is obtained by replacing all subformulae x → y by the respective expressions −x + y (recall x → y ⇔ ¬x ∨ y). For instance, the equation corresponding to the first axiom of ⊢H, namely A → (B → A), is obtained by translating → to the equivalent boolean expression: (−A + −B + A) = 1. You may easily verify the provability of this equation from axioms 1.–5., as well as that it holds under the set interpretation – for any sets A, B ⊆ U : A′ ∪ B′ ∪ A = U. . . . . . . . . . . . . [end optional]
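The translation of → just described can be machine-checked on the axiom. A sketch (the helper name `imp` is ours): we rewrite x → y as −x + y and confirm that the expression for A → (B → A) equals 1 under every valuation.

```python
from itertools import product

# Sketch: the translation x -> y  ~>  -x + y, applied to Hilbert's axiom
# A -> (B -> A); the result should equal 1 (True) under every valuation.
def imp(x, y):
    return (not x) or y      # -x + y

assert all(imp(a, imp(b, a))             # -A + (-B + A)
           for a, b in product([True, False], repeat=2))
print("A -> (B -> A) evaluates to 1 under all valuations")
```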

Exercises (week 5)

exercise 5.1+ Recall example 5.12 and set up the boolean table for the formula a ∨ b, where ∨ is intended to represent “or”. Consider whether it is possible – and if so, how – to represent the following statements using your definition:
1. x < y or x = y.
2. John is ill or Paul is ill.
3. Either we go to the cinema or we stay at home.

exercise 5.2+ Write the boolean tables for the following formulae and decide to which among the four classes from Fact 5.10 (Definition 5.8) they belong:


1. a → (b → a)
2. (a → (b → c)) → ((a → b) → (a → c))
3. (¬b → ¬a) → (a → b)

exercise 5.3+ Rewrite the laws 1. Neutral elements and 5. Complement of boolean algebra into their propositional form and verify their validity using boolean tables. (Use ⊤ for 1 and ⊥ for 0.)

exercise 5.4+ Verify whether (a → b) → b is a tautology. Is the following proof correct? If not, what is wrong with it?

1: A, A → B ⊢N A → B        A0
2: A, A → B ⊢N A            A0
3: A → B ⊢N B               MP(2, 1)
4: ⊢N (A → B) → B           DT

exercise 5.5+ Verify the following facts:
1. A1 → (A2 → (A3 → B)) ⇔ (A1 ∧ A2 ∧ A3) → B.
2. (A ∧ (A → B)) ⇒ B
Use boolean tables to show that the following formulae are contradictions:
3. ¬(B → B)
4. ¬(B ∨ C) ∧ C
Determine now what sets are denoted by these two expressions – for the set-interpretation of → recall remark 5.13.

exercise 5.6+ Show which of the following pairs are equivalent:
1. A → (B → C)   ?   (A → B) → C
2. A → (B → C)   ?   B → (A → C)
3. A ∧ ¬B        ?   ¬(A → B)

exercise 5.7+ Prove a result analogous to corollary 5.16 for ⊆ and ⇒ instead of = and ⇔. exercise 5.8+ Use point 3. from exercise 5.6 to verify that (C ∧ D) → (A ∧ ¬B) and (C ∧ D) → ¬(A → B) are equivalent. How does this earlier exercise simplify the work here? exercise 5.9 (Compositionality and substitutivity) Let F [A] be an arbitrary formula with a subformula A, and V be a valuation of all propositional variables occurring in F [A]. It induces the valuation V (F [A]) of the whole formula. Let B be another formula and assume that for all valuations V , V (A) = V (B). Use induction on the complexity of F [ ] to show that then V (F [A]) = V (F [B]), for all valuations V . (Hint: The structure of the proof will be similar to that of theorem 4.21. Observe, however, that here you are proving a completely different fact concerning not the provability relation but the semantic interpretation of the formulae – not their provable but tautological equivalence.)

exercise 5.10 Tautological equivalence A ⇔ B almost amounts to the fact that A and B have the same interpretation. We have to make the meaning of this “almost” more precise. 1. Show that neither of the two relations A ⇔ B and A = B implies the other, i.e., give examples of A and B such that (a) A ⇔ B but A ≠ B and (b) A = B but not A ⇔ B. (Hint: Use extra/different propositional variables not affecting the truth of the formula.)

2. Explain why the two relations are the same whenever A and B contain the same variables. 3. Finally explain that if A = B then there exists some formula C obtained from B by “renaming” the propositional variables, such that A ⇔ C.


exercise 5.11 Let Φ be an arbitrary, possibly infinite, set of formulae. The following conventions generalize the notion of (satisfaction of) binary conjunction/disjunction to such arbitrary sets. Given a valuation V, we say that Φ’s:
• conjunction is true under V, V(⋀Φ) = 1, iff for all formulae A: if A ∈ Φ then V(A) = 1.
• disjunction is true under V, V(⋁Φ) = 1, iff there exists an A such that A ∈ Φ and V(A) = 1.
Let now Φ be a set containing zero or one formulae. What would be the most natural interpretations of the expressions “the conjunction of formulae in Φ” and “the disjunction of formulae in Φ”?
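As a side note on these conventions: Python's built-ins `all` and `any` implement exactly these generalized definitions, including the edge cases for the empty set of formulae (a hint toward, not the official answer to, the exercise):

```python
# Sketch: all()/any() match the generalized conjunction and disjunction
# of exercise 5.11, including the empty-set edge case.
phi = []                            # zero formulae (truth values under some V)
assert all(phi) is True             # the empty conjunction is true
assert any(phi) is False            # the empty disjunction is false
assert all([True]) and any([True])  # a singleton behaves as the formula itself
print("empty conjunction:", all(phi), "| empty disjunction:", any(phi))
```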


III.3. Soundness and Completeness

Chapter 6 Soundness and Completeness

• Adequate Sets of Connectives
• Normal Forms
  – DNF
  – CNF
• Soundness of N and H
• Completeness of N and H

This chapter focuses on the relations between the syntax and axiomatic systems for SL and their semantic counterpart. Before discussing the central concepts of soundness and completeness, we first ask about the ‘expressive power’ of the language we have introduced. The expressive power of a language for statement logic can be identified with the possibilities it provides for defining various boolean functions. In section 1 we show that all boolean functions can be defined by the formulae of our language. Section 2 explores a useful consequence of this fact, showing that each formula can be written equivalently in a special normal form. The rest of the chapter then studies soundness and completeness of our axiomatic systems.

1: Adequate Sets of Connectives

This and the next section study the relation established in definition 5.6 between formulae of SL and boolean functions (on our two-element set B). According to this definition, any SL formula defines a boolean function. The question now is the opposite: can every boolean function be defined by some formula of SL? Introducing the abbreviations ∧, ∨ and others in chapter 5, section 3, we remarked that they are not necessary but merely convenient. Their being “not necessary” means that any function which can be defined by a formula containing these connectives can also be defined by a formula which does not contain them. E.g., a function defined using ∨ can also be defined using ¬ and →. Concerning our main question we need a stronger notion, namely, the notion of a complete, or adequate, set of connectives which is sufficient to define all boolean functions.

Definition 6.1 A set AS of connectives is adequate if for every n > 0, every boolean function f : {1, 0}^n → {1, 0} is determined by some formula containing only the connectives from the set AS.

Certainly, not every set is adequate. If we take, for example, the set with only negation, {¬}, it is easy to see that it cannot be adequate. Negation is a unary operation, so it will never give rise to, for instance, a function with two arguments. But it cannot even define all unary functions. It can be used to define only two functions B → B – inverse (i.e., ¬ itself) and identity (¬¬):

    x    ¬(x)    ¬¬(x)
    1     0       1
    0     1       0

(The proof-theoretic counterpart of the fact that ¬¬ is identity was lemma 4.10, showing the provable equivalence of B and ¬¬B.) Any further applications of ¬ will yield one of these two functions. The constant functions (f(x) = 1 or f(x) = 0) cannot be defined using exclusively this single connective. The following theorem identifies the first adequate set.

Theorem 6.2 {¬, ∧, ∨} is an adequate set.


Proof Let f : {1, 0}^n → {1, 0} be an arbitrary boolean function of n arguments (for some n > 0) with a given boolean table. If f always produces 0 then the contradiction (a1 ∧ ¬a1) ∨ . . . ∨ (an ∧ ¬an) determines f. For the case when f produces 1 for at least one tuple of arguments, we give the proof, illustrating each step with an example.

Let a1, a2, . . . , an be distinct propositional variables listed in increasing order. The boolean table for f has 2^n rows; let a^r_c denote the entry in the c-th column and r-th row.

Example:
    a1   a2   f(a1, a2)
    1    1    0
    1    0    1
    0    1    1
    0    0    0

For each 1 ≤ r ≤ 2^n and 1 ≤ c ≤ n let
    L^r_c = a_c    if a^r_c = 1
    L^r_c = ¬a_c   if a^r_c = 0

Example: L^1_1 = a1, L^1_2 = a2; L^2_1 = a1, L^2_2 = ¬a2; L^3_1 = ¬a1, L^3_2 = a2; L^4_1 = ¬a1, L^4_2 = ¬a2.

For each row 1 ≤ r ≤ 2^n form the conjunction
    C^r = L^r_1 ∧ L^r_2 ∧ . . . ∧ L^r_n
This means that for all rows r and p ≠ r: C^r(a^r_1, . . . , a^r_n) = 1 and C^r(a^p_1, . . . , a^p_n) = 0.

Example: C^1 = a1 ∧ a2, C^2 = a1 ∧ ¬a2, C^3 = ¬a1 ∧ a2, C^4 = ¬a1 ∧ ¬a2.

Let D be the disjunction of those C^r for which f(a^r_1, . . . , a^r_n) = 1.

Example: D = C^2 ∨ C^3 = (a1 ∧ ¬a2) ∨ (¬a1 ∧ a2)

The claim is: the boolean function determined by D is f, i.e., D = f. If f(a^r_1, . . . , a^r_n) = 1 then D contains the corresponding disjunct C^r which, since C^r(a^r_1, . . . , a^r_n) = 1, makes D(a^r_1, . . . , a^r_n) = 1. If f(a^r_1, . . . , a^r_n) = 0, then D does not contain the corresponding disjunct C^r. But for all p ≠ r we have C^p(a^r_1, . . . , a^r_n) = 0, so none of the disjuncts in D will be 1 for these arguments, and hence D(a^r_1, . . . , a^r_n) = 0.  QED (6.2)

Corollary 6.3 The following sets of connectives are adequate: 1. {¬, ∨} 2. {¬, ∧} 3. {¬, →}

Proof
1. By De Morgan’s law, A ∧ B ⇔ ¬(¬A ∨ ¬B), so we can express each conjunction by negations and disjunction. Using the distributive and associative laws, this allows us to rewrite the formula obtained in the proof of theorem 6.2 into an equivalent one without conjunction.
2. The same argument as above.
3. According to definition 5.11, A ∨ B =def ¬A → B. This, however, was a merely syntactic definition of a new symbol ‘∨’. Here we have to show that the boolean functions ¬ and → can be used to define the boolean function ∨. But this was done in exercise 5.2.(1), where the semantics (boolean table) for ∨ was given according to definition 5.11, i.e., where the equivalence A ∨ B ⇔ ¬A → B required here was shown. So the claim follows from point 1.  QED (6.3)

Remark. Note that the definition of “adequate” does not require that any formula determine the functions from {1, 0}^0 into {1, 0}. {1, 0}^0 is the singleton set {⟨⟩}. There are two functions from {⟨⟩} into {1, 0}, namely {⟨⟩ ↦ 1} and {⟨⟩ ↦ 0}. These functions are not determined by any formula in the connectives ∧, ∨, →, ¬. The best approximations are tautologies and contradictions like (a → a) and ¬(a → a), which we in fact took to be the special formulae ⊤ and ⊥. Note, however, that these determine the functions {1 ↦ 1, 0 ↦ 1} and {1 ↦ 0, 0 ↦ 0}, which in a strict set-theoretic sense are distinct from the functions above. To obtain a set of connectives that is adequate in this stricter sense, one would have to introduce ⊤ or ⊥ as a special formula (in fact, a 0-argument connective) that is not considered to contain any propositional variables. □
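The construction in the proof of theorem 6.2 can be sketched directly in code. The function below (names are ours) returns, for each row of f's table where f is 1, the literals making up the conjunction C^r; the disjunction of these rows is D.

```python
from itertools import product

# Sketch of the DNF construction from the proof of theorem 6.2.
def dnf_rows(f, n):
    """Rows (as literal tuples) whose conjunctions C^r make up D."""
    rows = []
    for args in product([1, 0], repeat=n):
        if f(*args) == 1:
            # literal a_c if the entry is 1, '-a_c' (negated) if it is 0
            rows.append(tuple(('a%d' % c) if v else ('-a%d' % c)
                              for c, v in enumerate(args, 1)))
    return rows

# The example from the proof: f(a1, a2) = 1 exactly on rows (1,0) and (0,1)
f = lambda a1, a2: 1 if a1 != a2 else 0
print(dnf_rows(f, 2))   # [('a1', '-a2'), ('-a1', 'a2')]
```

The output corresponds to D = (a1 ∧ ¬a2) ∨ (¬a1 ∧ a2), as in the example.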


2: DNF, CNF

The fact that, for instance, {¬, →} is an adequate set vastly reduces the need for elaborate syntax when studying propositional logic. We can (as we indeed have done) restrict the syntax of WFF_SL to the necessary minimum. This simplifies many proofs concerned with the syntax and the axiomatic systems, since such proofs typically involve induction on a definition (of WFF_SL, of ⊢, etc.). Adequacy of the set means that any entity (any function defined by a formula) has some specific, “normal” form using only the connectives from the adequate set. Now we show that even more “normalization” can be achieved. Not only can every boolean function be defined by some formula using only the connectives from one adequate set – every such function can be defined by a formula which, in addition, has a very specific form.

Definition 6.4 A formula B is in
1. disjunctive normal form, DNF, iff B = C1 ∨ ... ∨ Cn, where each Ci is a conjunction of literals;
2. conjunctive normal form, CNF, iff B = D1 ∧ ... ∧ Dn, where each Di is a disjunction of literals.

Example 6.5 Let Σ = {a, b, c}.
• (a ∧ b) ∨ (¬a ∧ ¬b) and (a ∧ b ∧ ¬c) ∨ (¬a ∧ c) are both in DNF
• a ∨ b and a ∧ b are both in DNF and CNF
• (a ∨ (b ∧ c)) ∧ (¬b ∨ a) is neither in DNF nor in CNF
• (a ∨ b) ∧ c ∧ (¬a ∨ ¬b ∨ ¬c) is in CNF but not in DNF
• (a ∧ b) ∨ (¬a ∨ ¬b) is in DNF but not in CNF.
The last formula can be transformed into CNF by applying laws like those from 5.4.1 on p. 94. The distributivity and associativity laws yield:
    (a ∧ b) ∨ (¬a ∨ ¬b) ⇔ (a ∨ ¬a ∨ ¬b) ∧ (b ∨ ¬a ∨ ¬b)
and the formula on the right-hand side is in CNF.

□
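The distributivity step in example 6.5 can be confirmed with a truth table; a quick sketch (function names are ours):

```python
from itertools import product

# Sketch: the DNF formula and its CNF rewriting from example 6.5 agree
# under every valuation of a and b.
dnf = lambda a, b: (a and b) or ((not a) or (not b))
cnf = lambda a, b: (a or (not a) or (not b)) and (b or (not a) or (not b))

assert all(dnf(a, b) == cnf(a, b)
           for a, b in product([True, False], repeat=2))
print("the two forms are tautologically equivalent")
```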

Recall the form of the formula constructed in the proof of theorem 6.2 – it was in DNF! Thus, that proof tells us not only that the set {¬, ∧, ∨} is adequate but also:

Corollary 6.6 Each formula B is logically equivalent to a formula B_D in DNF.
Proof For any B there is a D in DNF determining the same boolean function, B = D. By “renaming” the propositional variables of D (see exercise 5.10) one obtains a new formula B_D in DNF such that B ⇔ B_D.  QED (6.6)

We now use this corollary to show the next one.

Corollary 6.7 Each formula B is logically equivalent to a formula B_C in CNF.
Proof By corollary 6.3, we may assume that the only connectives in B are ¬ and ∧. By corollary 6.6 we may also assume that any formula is equivalent to a DNF formula. We proceed by induction on the complexity of B:
a :: A propositional variable is a conjunction over one literal, and hence is in CNF.
¬A :: By corollary 6.6, A is equivalent to a formula A_D in DNF. Exercise 6.10 then allows us to conclude that B is equivalent to a B_C in CNF.
C ∧ A :: By IH, both C and A have CNFs: C_C, A_C. Then C_C ∧ A_C is easily transformed into CNF (using the associative laws), i.e., we obtain an equivalent B_C in CNF.  QED (6.7)


3: Soundness

a Background Story ♦ ♦
The library offers its customers the possibility of ordering books on the internet. From the main page one may ask the system to find the book one wishes to borrow. (We assume that the appropriate search engine will always find the book one is looking for, or else give a message that it could not be identified. In the sequel we consider only the case when the book you asked for was found.) The book (found by the system) may happen to be immediately available for loan. In this case, you may just reserve it and our story ends here. But the most frequent case is that the book is on loan or else must be borrowed from another library. In such a case, the system gives you the possibility to order it: you mark the book and the system will send you a message as soon as the book becomes available. (You need no message as long as the book is not available, and the system need not inform you about that.) Simplicity of this scenario notwithstanding, this is actually our whole story. There are two distinct assumptions which make us rely on the system when we order a book. The first is that when you get the message that the book is available, it really is. The system will not play the fool with you, saying “Hi, the book is here” while it is still on loan to another user. We trust that what the system says (“The book is here”) is true. This property is what we call “soundness” of the system – it never provides us with false information. But there is also another important aspect making up our trust in the system. Suppose that the book actually becomes available, but you do not get the appropriate message. The system is still sound – it does not give you any wrong information – but only because it does not give you any information whatsoever. It keeps silent although it should have said that the book is there and you can borrow it.
The other aspect of our trust is that whenever there is a fact to be reported (‘the book became available’), the system will do it – this is what we call “completeness”. Just as a system may be sound without being complete (keep silent even though the book arrived at the library), it may be complete without being sound. If it constantly sent messages that the book you ordered was available, it would, sooner or later (namely, when the book eventually became available), report the true fact. However, in the meantime it would provide you with a stream of incorrect information – it would be unsound. Thus, soundness of a system means that whatever it says is correct: it says “The book is here” only if it is here. Completeness means that everything that is correct will be said by the system: it says “The book is here” if (always when) the book is here. In the latter case, we should pay attention to the phrase “everything that is correct”. It makes sense because our setting is very limited. We have one command, ‘order the book ...’, and one possible response of the system: the message that the book became available. “Everything that is correct” means here simply that the book you ordered actually is available. It is only this limited context (i.e., a limited and well-defined amount of true facts) which makes the notion of completeness meaningful. In connection with axiomatic systems one often resorts to another analogy. The axioms and the deduction rules together define the scope of the system’s knowledge about the world. If all aspects of this knowledge (all the theorems) are true about the world, the system is sound. This idea has enough intuitive content to be grasped with reference to vague notions of ‘knowledge’, ‘the world’, etc., and our illustration with the system saying “The book is here” only when it actually is merely makes it more specific.
Completeness, on the other hand, would mean that everything that is true about the world (and expressible in the actual language) is also reflected in the system’s knowledge (theorems). Here it becomes less clear what the intuitive content of ‘completeness’ might be. What can one possibly mean by “everything that is true”? In our library example, the user and the system use only a very limited language, allowing the user to ‘order the book ...’ and the system to state that it is available. Thus, the possible meaning of “everything” is limited to the book being available or not. One should keep this difference between


‘real world’ and ‘availability of a book’ in mind, because the notion of completeness is as unnatural in the context of natural language and the real world as it is adequate in the context of the bounded, sharply delineated worlds of formal semantics. The limited expressiveness of a formal language plays the crucial role here of limiting the discourse to a well-defined set of expressible facts. The library system should be both sound and complete to be useful. For axiomatic systems, the minimal requirement is that they are sound – completeness is a desirable feature which, typically, is much harder to prove. Also, it is known that there are axiomatic systems which are sound but inherently incomplete. We will not study such systems but merely mention one in the last chapter, theorem 11.15.⁷ ♦

♦

Definition 5.8 introduced, among other concepts, the validity relation |= A, stating that A is satisfied by all structures. On the other hand, we studied the syntactic notion of a proof in a given axiomatic system, which we wrote as ⊢C A. We also saw a generalization of the provability predicate ⊢H in Hilbert’s system to the relation Γ ⊢N A, where Γ is a theory – a set of formulae. We now define the semantic relation Γ |= A of “A being a (tauto)logical consequence of Γ”.

Definition 6.8 Given a set Γ ⊆ WFF_SL and A ∈ WFF_SL, we write:
• V |= Γ iff V |= G for all G ∈ Γ, where V is some valuation, called a model of Γ
• Mod(Γ) = {V : V |= Γ} – the set of all models of Γ
• |= Γ iff for all V : V |= Γ
• Γ |= A iff for all V ∈ Mod(Γ) : V |= A, i.e., ∀V : V |= Γ ⇒ V |= A
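Since there are only finitely many valuations of finitely many variables, Γ |= A can be decided by brute force. A sketch of definition 6.8 (all names are ours; formulae are represented as functions of a valuation dictionary):

```python
from itertools import product

# Sketch: Gamma |= A checked by enumerating all valuations and testing
# every model of Gamma against A.
def entails(gamma, a, variables):
    for bits in product([1, 0], repeat=len(variables)):
        v = dict(zip(variables, bits))
        if all(g(v) for g in gamma):   # v is a model of Gamma ...
            if not a(v):               # ... but falsifies A
                return False
    return True

# {p, p -> q} |= q  (modus ponens, semantically)
gamma = [lambda v: v['p'], lambda v: (not v['p']) or v['q']]
print(entails(gamma, lambda v: v['q'], ['p', 'q']))  # True
```

With Γ = ∅ the same function decides validity |= A, since every valuation is then a model of Γ.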

The analogy between the symbols |= and ⊢ is not accidental. The former refers to a semantic, the latter to a syntactic notion and, ideally, these two notions should be equivalent in some sense. The following table gives the picture of the intended “equivalences”:

    Syntactic    Semantic
    ⊢H A         |= A
    Γ ⊢N A       Γ |= A

We will see that for H and N there is such an equivalence. The equivalence we desire is

(F.xx)    Γ ⊢ A ⇔ Γ |= A

The implication Γ ⊢C A ⇒ Γ |= A is called soundness of the proof system C: whatever we can prove in C from the assumptions Γ is true in every structure satisfying Γ. This is usually easy to establish, as we will see shortly. The problematic implication is the other one – completeness – stating that any formula which is true in all models of Γ is provable from the assumptions Γ. (Γ = ∅ is a special case: the theorems ⊢C A are tautologies and the formulae |= A are those satisfied by all possible structures, since any structure V satisfies the empty set of assumptions.)

Remark 6.9 [Soundness and Completeness] Another way of viewing these two implications is as follows. Given an axiomatic system C and a theory Γ, the relation Γ ⊢C defines the set of formulae – the theorems – ThC(Γ) = {A : Γ ⊢C A}. On the other hand, given the definition of Γ |= , we obtain a (possibly different) set of formulae, namely the set Γ* = {B : Γ |= B} of (tauto)logical consequences of Γ. Soundness of C, i.e., the implication Γ ⊢C A ⇒ Γ |= A, means that any provable consequence is also a (tauto)logical consequence and amounts to the inclusion ThC(Γ) ⊆ Γ*. Completeness means that any (tauto)logical consequence of Γ is also provable and amounts to the opposite inclusion Γ* ⊆ ThC(Γ). □

⁷ Thanks to Eivind Kolflaath for the library analogy.

For proving soundness of a system consisting of axioms and proof rules (like H or N), one has to show that the axioms of the system are valid and that the rules preserve truth: whenever the assumptions of a rule are satisfied in a model M, then so is the conclusion. (Since H treats only tautologies, this claim reduces there to preservation of validity: whenever the assumptions of the rule are valid, so is the conclusion.) When these two facts are established, a straightforward induction proof shows that all theorems of the system must be valid. We show soundness and completeness of N.

Theorem 6.10 [Soundness] Let Γ ⊆ WFF_SL and A ∈ WFF_SL. Then Γ ⊢N A ⇒ Γ |= A.
Proof From the above remarks, we have to show that all axioms are valid and that MP preserves validity:
A1–A3 :: In exercise 5.2 we have seen that all axioms of H are valid, i.e., satisfied by any structure. In particular, the axioms are satisfied by all models of Γ, for any Γ.
A0 :: The axiom schema A0 allows us to conclude Γ ⊢N B for any B ∈ Γ. This is obviously sound: any model V of Γ must satisfy all the formulae of Γ and, in particular, B.
MP :: Suppose Γ |= A and Γ |= A → B. Then, for an arbitrary V ∈ Mod(Γ), we have V(A) = 1 and V(A → B) = 1. Consulting the boolean table for →: the first assumption reduces the possibilities for V to the two rows in which V(A) = 1, and the second assumption then leaves the only such row in which V(A → B) = 1. In this row V(B) = 1, so V |= B. Since V was an arbitrary model of Γ, we conclude that Γ |= B.

QED (6.10)

You should not have any problems simplifying this proof to obtain the soundness of H.

Corollary 6.11 Every satisfiable theory is consistent.
Proof We show the equivalent statement that every inconsistent theory is unsatisfiable: if Γ ⊢N ⊥ then Γ |= ⊥ by theorem 6.10, hence Γ is not satisfiable (since V(¬(x → x)) = 0 for every V).  QED (6.11)

Remark 6.12 [Equivalence of the two soundness notions] Soundness is often expressed in the form of corollary 6.11. In fact, the two formulations are equivalent:
6.10. Γ ⊢N A ⇒ Γ |= A
6.11. (there exists V : V |= Γ) ⇒ Γ ⊬N ⊥
The implication 6.10 ⇒ 6.11 is given in the proof of corollary 6.11. For the opposite: if Γ ⊢N A then Γ ∪ {¬A} is inconsistent (exercise 4.6) and hence, by 6.11, unsatisfiable, i.e., for any V : V |= Γ ⇒ V ⊭ ¬A. But if V ⊭ ¬A then V |= A, and so, since V was arbitrary, Γ |= A. □

4: Completeness

The proof of completeness involves several lemmas which we now proceed to establish. Just as there are two equivalent ways of expressing soundness (remark 6.12), there are two equivalent ways of expressing completeness. One (corresponding to corollary 6.11) says that every consistent theory is satisfiable, and the other that any valid formula is provable.

Lemma 6.13 The following two formulations of completeness are equivalent:
1. Γ ⊬N ⊥ ⇒ Mod(Γ) ≠ ∅
2. Γ |= A ⇒ Γ ⊢N A
Proof
1. ⇒ 2.) Assume 1. and Γ |= A, i.e., for any V : V |= Γ ⇒ V |= A. Then Γ ∪ {¬A} has no model and, by 1., Γ, ¬A ⊢N ⊥. By the deduction theorem Γ ⊢N ¬A → ⊥, and so Γ ⊢N A by exercise 4.2.5 and lemma 4.10.1.
2. ⇒ 1.) Assume 2. and Γ ⊬N ⊥. By (the observation before) lemma 4.24 this means that there is an A such that Γ ⊬N A and, by 2., that Γ ⊭ A. This means that there is a structure V such that V ⊭ A and V |= Γ. Thus 1. holds.  QED (6.13)

We prove the first of the above formulations: we take an arbitrary Γ and, assuming that it is consistent, i.e., Γ ⊬N ⊥, we show that Mod(Γ) ≠ ∅ by constructing a particular structure which we prove to be a model of Γ. This proof is not the simplest possible for SL. However, we choose to do it this way because it illustrates the general strategy used later in the completeness proof for FOL. Our proof uses the notion of a maximal consistent theory:


Definition 6.14 A theory Γ ⊂ WFF^Σ_SL is maximal consistent iff it is consistent and, for any formula A ∈ WFF^Σ_SL, Γ ⊢N A or Γ ⊢N ¬A.

From exercise 4.6 we know that if Γ is consistent then for any A at most one of Γ ⊢N A and Γ ⊢N ¬A is the case – Γ cannot prove too much. Put a bit differently, if Γ is consistent then the following holds for any formula A:

(F.xxi)    Γ ⊢N A ⇒ Γ ⊬N ¬A,    or equivalently:    Γ ⊬N A or Γ ⊬N ¬A

– if Γ proves something (A) then there is something else (namely ¬A) which Γ does not prove. Maximality is a kind of opposite – Γ cannot prove too little: if Γ does not prove something (¬A) then there must be something else it proves (namely A):

(F.xxii)    Γ ⊬N ¬A ⇒ Γ ⊢N A,    or equivalently:    Γ ⊢N A or Γ ⊢N ¬A

If Γ is maximal consistent it satisfies both (F.xxi) and (F.xxii) and hence, for any formula A, exactly one of Γ ⊢N A and Γ ⊢N ¬A is the case. For instance, given Σ = {a, b}, the theory Γ = {a → b} is consistent. However, it is not maximal consistent because, for instance, Γ ⊬N a and Γ ⊬N ¬a. (The same holds if we replace a by b.) In fact, we have an alternative, equivalent, and easier to check formulation of maximal consistency for SL.

Fact 6.15 A theory Γ ⊂ WFF^Σ_SL is maximal consistent iff it is consistent and for all a ∈ Σ : Γ ⊢N a or Γ ⊢N ¬a.
Proof The ‘only if’ part, i.e. ⇒, is trivial from definition 6.14, which ensures that Γ ⊢N A or Γ ⊢N ¬A for all formulae, in particular all atomic ones. The opposite implication is shown by induction on the complexity of A. A is:
a ∈ Σ :: This basis case is trivial, since it is exactly what is given.
¬B :: By IH, we have that Γ ⊢N B or Γ ⊢N ¬B. In the latter case we are done (Γ ⊢N A), while in the former we obtain Γ ⊢N ¬A, i.e., Γ ⊢N ¬¬B, from lemma 4.10.
C → D :: By IH we have that either Γ ⊢N D or Γ ⊢N ¬D. In the former case, we obtain Γ ⊢N C → D by lemma 4.9. In the latter case, we have to consider two subcases – by IH either Γ ⊢N C or Γ ⊢N ¬C. If Γ ⊢N ¬C then, by lemma 4.9, Γ ⊢N ¬D → ¬C. Applying MP to this and axiom A3, we obtain Γ ⊢N C → D. So, finally, assume Γ ⊢N C (and Γ ⊢N ¬D). But then Γ ⊢N ¬(C → D) by exercise 4.2.3.

QED (6.15)
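Since soundness and completeness (established in this chapter) let us replace the syntactic test Γ ⊢N A by the semantic test Γ ⊨ A, Fact 6.15's criterion can be checked mechanically over a finite alphabet. A minimal sketch, not from the book: formulae are represented as Python functions on valuations, and consistency must still be checked separately.

```python
from itertools import product

def entails(gamma, formula, variables):
    """Gamma |= formula: every valuation satisfying all of Gamma satisfies formula."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(f(v) for f in gamma) and not formula(v):
            return False
    return True

def is_maximal(gamma, variables):
    """Fact 6.15's criterion: Gamma decides every propositional variable."""
    return all(entails(gamma, (lambda v, x=x: v[x]), variables) or
               entails(gamma, (lambda v, x=x: not v[x]), variables)
               for x in variables)

variables = ["a", "b"]
gamma = [lambda v: (not v["a"]) or v["b"]]        # the theory {a -> b}

print(is_maximal(gamma, variables))                       # False: neither a nor ¬a follows
print(is_maximal(gamma + [lambda v: v["a"]], variables))  # True: {a -> b, a} decides a and b
```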

The maximality property of a maximal consistent theory makes it easier to construct a model for it. We prove first this special case of the completeness theorem:

Lemma 6.16 Every maximal consistent theory is satisfiable.

Proof Let Γ be any maximal consistent theory, and let Σ be the set of propositional variables. We define the valuation V : Σ → {1, 0} by the equivalence V(a) = 1 iff Γ ⊢N a, for every a ∈ Σ. (Hence also V(a) = 0 iff Γ ⊬N a.) We now show that V is a model of Γ, i.e., for any formula B: if B ∈ Γ then V(B) = 1. In fact, we prove the stronger result that for any formula B: V(B) = 1 iff Γ ⊢N B. The proof goes by induction on (the complexity of) B.

B is:
a :: Immediate from the definition of V.
¬C :: V(¬C) = 1 iff V(C) = 0. By IH, the latter holds iff Γ ⊬N C, i.e., by maximality, iff Γ ⊢N ¬C.
C → D :: We consider two cases:

108

Statement Logic

– V(C → D) = 1 implies V(C) = 0 or V(D) = 1. By the IH, this implies Γ ⊬N C or Γ ⊢N D, i.e., Γ ⊢N ¬C or Γ ⊢N D. In the former case exercise 4.2.1, and in the latter lemma 4.12.2, gives Γ ⊢N C → D.
– V(C → D) = 0 implies V(C) = 1 and V(D) = 0, which by the IH imply Γ ⊢N C and Γ ⊬N D, i.e., Γ ⊢N C and Γ ⊢N ¬D. These, by exercise 4.2.3 and two applications of MP, imply Γ ⊢N ¬(C → D) and hence, by consistency, Γ ⊬N C → D.

QED (6.16)

Next we use this result to show that every consistent theory is satisfiable. What we need is a result stating that every consistent theory is a subset of some maximal consistent theory.

Lemma 6.17 Every consistent theory can be extended to a maximal consistent theory.

Proof Let Γ be a consistent theory, and let {a0, . . . , an} be the set of propositional variables used in Γ. [The case when n = ω (the set is countably infinite) is treated in the small font within the square brackets.] Let

• Γ0 = Γ
• Γi+1 = Γi, ai if this is consistent, and Γi, ¬ai otherwise
• Γ′ = Γn+1 [ = ⋃i<ω Γi ]
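For a finite alphabet, this construction can be carried out mechanically, testing consistency of each finite Γi semantically (a finite theory is consistent iff some valuation satisfies it). An illustrative sketch, not from the book:

```python
from itertools import product

def satisfiable(gamma, variables):
    """A finite theory is consistent iff some valuation satisfies all its formulae."""
    return any(all(f(dict(zip(variables, vals))) for f in gamma)
               for vals in product([False, True], repeat=len(variables)))

def lindenbaum(gamma, variables):
    """Decide each variable in turn: add a_i if the result stays consistent,
    otherwise add its negation (the construction of lemma 6.17)."""
    extension = list(gamma)
    for x in variables:
        pos = lambda v, x=x: v[x]
        neg = lambda v, x=x: not v[x]
        extension.append(pos if satisfiable(extension + [pos], variables) else neg)
    return extension

variables = ["a", "b"]
gamma = [lambda v: (not v["a"]) or v["b"]]   # {a -> b}
ext = lindenbaum(gamma, variables)

print(satisfiable(ext, variables))                           # True: still consistent
print(satisfiable(ext + [lambda v: not v["a"]], variables))  # False: ext decides a
```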

Consider the formula (∀x x > 0) → (∃y y = 1). We can apply the prenex operations in two ways:

(∀x x > 0) → (∃y y = 1)
  ⇔ ∃x (x > 0 → (∃y y = 1)) ⇔ ∃x ∃y (x > 0 → y = 1)
  ⇔ ∃y ((∀x x > 0) → y = 1) ⇔ ∃y ∃x (x > 0 → y = 1)

Obviously, since the order of the quantifiers of the same kind does not matter (exercise 7.2.1), the two resulting formulae are equivalent. However, the quantifiers may also be of different kinds:

(∃x x > 0) → (∃y y = 1)
  ⇔ ∀x (x > 0 → (∃y y = 1)) ⇔ ∀x ∃y (x > 0 → y = 1)
  ⇔ ∃y ((∃x x > 0) → y = 1) ⇔ ∃y ∀x (x > 0 → y = 1)

Although it is not true in general that ∀x∃y A ⇔ ∃y∀x A, the prenex operations preserve equivalence – thanks to the renaming of bound variables, which avoids name clashes with variables in other subformulae – so the results (like the two formulae above) are equivalent. □
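The prenex steps used above – pulling a quantifier out of the antecedent flips its kind, pulling one out of the consequent does not – can be sketched on a toy abstract syntax. This is an illustrative sketch, not the book's algorithm; it assumes bound variables have already been renamed apart, so no capture can occur:

```python
# Formulas: ("forall", var, body), ("exists", var, body),
# ("imp", lhs, rhs), or an atomic formula given as a string.
def prenex_imp(f):
    """Pull quantifiers out of an implication, antecedent first,
    flipping the antecedent's quantifiers: (Qx A) -> B  ==>  Q'x (A -> B)."""
    op, lhs, rhs = f
    assert op == "imp"
    prefix = []
    while isinstance(lhs, tuple) and lhs[0] in ("forall", "exists"):
        q, var, body = lhs
        prefix.append(("exists" if q == "forall" else "forall", var))  # flip
        lhs = body
    while isinstance(rhs, tuple) and rhs[0] in ("forall", "exists"):
        q, var, body = rhs
        prefix.append((q, var))   # quantifier in the consequent keeps its kind
        rhs = body
    result = ("imp", lhs, rhs)
    for q, var in reversed(prefix):
        result = (q, var, result)
    return result

# (∀x x>0) → (∃y y=1)  becomes  ∃x ∃y (x>0 → y=1)
f = ("imp", ("forall", "x", "x>0"), ("exists", "y", "y=1"))
print(prenex_imp(f))   # ('exists', 'x', ('exists', 'y', ('imp', 'x>0', 'y=1')))
```

Applying it to the second example, (∃x x > 0) → (∃y y = 1), yields the ∀x ∃y form from the text, since the existential quantifier in the antecedent flips to a universal one.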


2: A few bits of Model Theory

Roughly and approximately, model theory studies the properties of model classes. Notice that a model class is not just an arbitrary collection K of FOL-structures – it is a collection of models of some set Γ of formulae, i.e., such that K = Mod(Γ) for some Γ. The important point is that the syntactic form of the formulae in Γ may have a heavy influence on the properties of its model class (as we illustrate in theorem 9.11). On the other hand, knowing some properties of a given class of structures, model theory may sometimes tell us which syntactic forms of axioms are necessary or sufficient for axiomatizing this class. In general, there exist non-axiomatizable classes K, i.e., classes such that K = Mod(Γ) for no FOL-theory Γ.

2.1: Substructures

As an elementary example of a property of a class of structures we will consider (in 2.2) closure under substructures and superstructures. Here we only define these notions.

Definition 9.7 Let Σ be a FOL alphabet and let M and N be Σ-structures. N is a substructure of M (and M a superstructure, or extension, of N), written N ⊑ M, iff:
• N ⊆ M
• for all a ∈ I: [[a]]N = [[a]]M
• for all f ∈ F and a1, . . . , an ∈ N: [[f]]N(a1, . . . , an) = [[f]]M(a1, . . . , an) ∈ N
• for all R ∈ R and a1, . . . , an ∈ N: ⟨a1, . . . , an⟩ ∈ [[R]]N ⇔ ⟨a1, . . . , an⟩ ∈ [[R]]M

Let K be an arbitrary class of structures. We say that K is:
• closed under substructures if whenever M ∈ K and N ⊑ M, then also N ∈ K
• closed under superstructures if whenever N ∈ K and N ⊑ M, then also M ∈ K

Thus N ⊑ M iff N has a more restricted interpretation domain than M, but all constant, function and relation symbols are interpreted identically within this restricted domain. Obviously, every structure is its own substructure, M ⊑ M. If N ⊑ M and N ≠ M, which means that N is a proper subset of M, then we say that N is a proper substructure of M.

Example 9.8 Let Σ contain one individual constant c and one binary function symbol f. The structure Z with Z = Z, the integers, [[c]]Z = 0 and [[f]]Z(x, y) = x + y is a Σ-structure. The structure N with N = N, only the natural numbers with zero, [[c]]N = 0 and [[f]]N(x, y) = x + y is obviously a substructure, N ⊑ Z. Restricting the domain further to the even numbers, i.e., taking P with P the even numbers greater than or equal to zero, [[c]]P = 0 and [[f]]P(x, y) = x + y, yields again a substructure, P ⊑ N. The class K = {Z, N, P} is not closed under substructures: one can easily find other Σ-substructures not belonging to K (for instance, the negative numbers with zero, under addition, form a substructure of Z). □
Notice that, in general, to obtain a substructure it is not enough to select an arbitrary subset of the underlying set. If we restrict N to the set {0, 1, 2, 3}, the result is not a substructure of N – because, for instance, [[f]]N(1, 3) = 4 and this element is not in our set. Any structure, and hence a substructure in particular, must be “closed under all operations”: applying any operation to elements of (the underlying set of) the structure must produce an element of the structure. On the other hand, a subset of the underlying set may fail to be a substructure if the operations are interpreted in a different way. Let M be like Z except that [[f]]M(x, y) = x − y. Neither N nor P is a substructure of M since, in general, for x, y ∈ N (or ∈ P): [[f]]N(x, y) = x + y ≠ x − y = [[f]]M(x, y). Modifying N to N′ with [[f]]N′(x, y) = x − y does not yield a substructure of M either, because this does not define [[f]]N′ for x < y. No matter how we define this operation for such cases (for instance, to return 0), we won't obtain a substructure of M – the result will differ from that in M. □

Remark 9.9 Given a FOL alphabet Σ, we may consider the class of all Σ-structures, Str(Σ). Obviously, this class is closed under Σ-substructures. With the substructure relation, ⟨Str(Σ), ⊑⟩ is a weak partial ordering: ⊑ is obviously reflexive (any structure is its own substructure), transitive (a substructure of a substructure of Y is itself a substructure of Y) and antisymmetric (if both X ⊑ Y and Y ⊑ X then X = Y). □
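For finite structures, the closure condition of definition 9.7 can be checked mechanically. The sketch below is not from the book; to keep everything finite it uses Z12, the integers with addition modulo 12, in place of the infinite Z, but the failure of {0, 1, 2, 3} to be closed is exactly the phenomenon described above:

```python
def is_substructure(sub, M_domain, ops, consts):
    """Check definition 9.7 for a candidate subset 'sub' of a finite structure:
    it must contain the constants and be closed under the operations (which are
    interpreted identically in the substructure, simply restricted to 'sub')."""
    if not sub <= M_domain:
        return False
    if not all(c in sub for c in consts):
        return False
    return all(op(a, b) in sub for op in ops for a in sub for b in sub)

Z12 = set(range(12))
plus = lambda x, y: (x + y) % 12

print(is_substructure({0, 3, 6, 9}, Z12, [plus], [0]))   # True: closed under +
print(is_substructure({0, 1, 2, 3}, Z12, [plus], [0]))   # False: 2 + 3 = 5 is missing
```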


2.2: Σ-Π classification

A consequence of theorem 9.4 is that any axiomatizable class K can be axiomatized by formulae in PNF. This fact has a model theoretic flavour, but the theory studies, in general, more specific phenomena. Since what is studied is the relation between classes of structures, on the one hand, and the syntactic form of formulae, on the other, one often introduces various syntactic classifications of formulae. We give here only one example.

The existence of PNF allows us to “measure the complexity” of formulae. Comparing the prefixes, we would say that A1 = ∃x∀y∃z B is “more complex” than A2 = ∃x∃y∃z B. Roughly, a formula is the more complex, the more alternations of quantifiers occur in its prefix.

Definition 9.10 A formula A is ∆0 iff it has no quantifiers. It is:
• Σ1 iff A ⇔ ∃x1 . . . ∃xn B, where B is ∆0
• Π1 iff A ⇔ ∀x1 . . . ∀xn B, where B is ∆0
• Σi+1 iff A ⇔ ∃x1 . . . ∃xn B, where B is Πi
• Πi+1 iff A ⇔ ∀x1 . . . ∀xn B, where B is Σi

[Diagram: the Σ-Π hierarchy – ∆0 at the bottom; Σ1 (reached by ∃) and Π1 (reached by ∀) above it; Σ2 and Π2 above those, each level connected to the next by prefixing ∃ or ∀ quantifiers.]

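Determining the level of a given prefix amounts to counting the blocks of like quantifiers. A small illustrative sketch, not from the book, writing E for ∃ and A for ∀; note that it classifies one particular prefix, while the actual level of a formula is the least over all its PNFs:

```python
def classify(prefix):
    """Level of a PNF prefix, given as a string of quantifiers, e.g. "EAE".
    n blocks of identical quantifiers starting with 'E' give Sigma_n,
    starting with 'A' give Pi_n; an empty prefix is Delta_0."""
    if not prefix:
        return "Delta_0"
    blocks = 1
    for prev, cur in zip(prefix, prefix[1:]):
        if prev != cur:
            blocks += 1    # a change of quantifier kind starts a new block
    return ("Sigma_" if prefix[0] == "E" else "Pi_") + str(blocks)

print(classify("EEE"))  # Sigma_1: the prefix of A2 = ∃x∃y∃z B
print(classify("EAE"))  # Sigma_3: the prefix of A1 = ∃x∀y∃z B
print(classify("AE"))   # Pi_2:    the prefix of ∀x∃y B
```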
Since PNF is not unique, a formula can belong to several levels, and we have to consider all possible PNFs of a formula in order to determine its complexity. Typically, saying that a formula is Σi, resp. Πi, one means that this is the least such i. A formula may be both Σi and Πi – in Example 9.6 we saw (the second) formula equivalent both to ∀x∃y B and to ∃y∀x B, i.e., one that is both Π2 and Σ2. Such formulae are called ∆i.

We only consider the following (simple) example of a model theoretic result. Point 1 says that the validity of an existential formula is preserved when passing to superstructures – the model class of existential sentences is closed under superstructures. Dually, point 2 implies that the model class of universal sentences is closed under substructures.

Theorem 9.11 Let A, B be closed formulae over some alphabet Σ, and assume A is Σ1 and B is Π1. Let M, N be Σ-structures with N ⊑ M. Then:
1. if N |= A then M |= A
2. if M |= B then N |= B.

Proof 1. A is closed Σ1, i.e., it is (equivalent to) ∃x1 . . . ∃xn A′, where A′ has no quantifiers nor variables other than x1, . . . , xn. If N |= A then there exist a1, . . . , an ∈ N such that N |= A′ under the assignment x1 ↦ a1, . . . , xn ↦ an. Since N ⊑ M, we have N ⊆ M and the interpretation of all symbols is the same in M as in N. Hence M |= A′ under the same assignment, i.e., M |= A.
2. This is a dual argument. Since M |= ∀x1 . . . ∀xn B′, B′ is true for all elements of M; as N ⊆ M, B′ is true for all elements of this subset as well. QED (9.11)

The theorem can be applied in at least two different ways, which we illustrate in the following two examples. We consider only case 2, i.e., when the formulae of interest are Π1 (universal).

Example 9.12 [Constructing new structures for Π1 axioms] First, given a set of Π1 axioms and an arbitrary structure satisfying them, the theorem allows us to conclude that any substructure will also satisfy the axioms. Let Σ contain only one binary relation symbol R.
Recall definition 1.15 – a strict partial ordering is axiomatized by two formulae:
1. ∀x∀y∀z : R(x, y) ∧ R(y, z) → R(x, z) – transitivity, and
2. ∀x : ¬R(x, x) – irreflexivity.
Let N be an arbitrary strict partial ordering, i.e., an arbitrary Σ-structure satisfying these axioms. For instance, let N = ⟨N, <⟩.
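On finite structures, case 2 of theorem 9.11 can be tested directly. The sketch below (illustrative, not from the book) checks the two Π1 axioms of a strict partial ordering on the finite fragment ⟨{0, 1, 2, 3}, <⟩; since Σ here contains only a relation symbol, every subset of the domain yields a substructure:

```python
from itertools import combinations

def is_strict_partial_order(dom, R):
    """Check the two Pi_1 axioms: transitivity and irreflexivity."""
    transitive = all((x, z) in R
                     for (x, y) in R for (y2, z) in R if y == y2)
    irreflexive = all((x, x) not in R for x in dom)
    return transitive and irreflexive

def restrict(sub, R):
    """The substructure's relation: R restricted to the subdomain."""
    return {(x, y) for (x, y) in R if x in sub and y in sub}

M_dom = {0, 1, 2, 3}
less = {(x, y) for x in M_dom for y in M_dom if x < y}
assert is_strict_partial_order(M_dom, less)

# every subset yields a substructure still satisfying the universal axioms
subs = [set(c) for r in range(5) for c in combinations(sorted(M_dom), r)]
print(all(is_strict_partial_order(s, restrict(s, less)) for s in subs))  # True
```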


Contents

The History of Logic  1
  A. Logic – patterns of reasoning  1
    A.1 Reductio ad absurdum  1
    A.2 Aristotle  2
    A.3 Other patterns and later developments  4
  B. Logic – a language about something  5
    B.1 Early semantic observations and problems  5
    B.2 The Scholastic theory of supposition  6
    B.3 Intension vs. extension  6
    B.4 Modalities  7
  C. Logic – a symbolic language  7
    C.1 The “universally characteristic language”  8
  D. 19th and 20th Century – mathematization of logic  9
    D.1 George Boole  10
    D.2 Gottlob Frege  12
    D.3 Set theory  14
    D.4 20th century logic  15
  E. Modern Symbolic Logic  16
    E.1 Formal logical systems: syntax  16
    E.2 Formal semantics  19
    E.3 Computability and Decidability  21
  F. Summary  23
Bibliography  23

Part I. Basic Set Theory  28
1. Sets, Functions, Relations  28
  1.1. Sets and Functions  28
  1.2. Relations  32
  1.3. Ordering Relations  33
  1.4. Infinities  35
2. Induction  42
  2.1. Well-Founded Orderings  42
    2.1.1. Inductive Proofs on Well-founded Orderings  43
  2.2. Inductive Definitions  47
    2.2.1. “1-1” Definitions  49
    2.2.2. Inductive Definitions and Recursive Programming  50
    2.2.3. Proofs by Structural Induction  53
  2.3. Transfinite Induction [optional]  56

Part II. Turing Machines  59
3. Turing Machines  59
  3.1. Alphabets and Languages  59
  3.2. Turing Machines  60
    3.2.1. Composing Turing machines  64
    3.2.2. Alternative representation of TMs [optional]  65
  3.3. Universal Turing Machine  66
  3.4. Decidability and the Halting Problem  69

Part III. Statement Logic  73
4. Syntax and Proof Systems  73
  4.1. Axiomatic Systems  73
  4.2. Syntax of SL  77
  4.3. The axiomatic system of Hilbert's  77
  4.4. Natural Deduction system  79
  4.5. Hilbert vs. ND  81
  4.6. Provable Equivalence of formulae  82
  4.7. Consistency  83
  4.8. The axiomatic system of Gentzen's  84
    4.8.1. Decidability of the axiomatic systems for SL  84
    4.8.2. Gentzen's rules for abbreviated connectives  85
  4.9. Some proof techniques  86
5. Semantics of SL  88
  5.1. Semantics of SL  88
  5.2. Semantic properties of formulae  92
  5.3. Abbreviations  93
  5.4. Sets, Propositions and Boolean Algebras  94
    5.4.1. Laws  94
    5.4.2. Sets and SL  94
    5.4.3. Boolean Algebras [optional]  96
6. Soundness and Completeness  101
  6.1. Adequate Sets of Connectives  101
  6.2. DNF, CNF  102
  6.3. Soundness  104
  6.4. Completeness  106
    6.4.1. Some Applications of Soundness and Completeness  108

Part IV. Predicate Logic  113
7. Syntax and Proof System of FOL  113
  7.1. Syntax of FOL  114
    7.1.1. Abbreviations  116
  7.2. Scope of Quantifiers, Free Variables, Substitution  116
    7.2.1. Some examples  117
    7.2.2. Substitution  119
  7.3. Proof System  120
    7.3.1. Deduction Theorem in FOL  121
  7.4. Gentzen's system for FOL  122
8. Semantics  126
  8.1. Semantics of FOL  126
  8.2. Semantic properties of formulae  129
  8.3. Open vs. closed formulae  130
    8.3.1. Deduction Theorem in G and N  133
9. More Semantics  136
  9.1. Prenex operations  136
  9.2. A few bits of Model Theory  138
    9.2.1. Substructures  139
    9.2.2. Σ-Π classification  139
  9.3. “Syntactic” semantic and Computations  141
    9.3.1. Reachable structures and Term structures  141
    9.3.2. Herbrand's theorem  144
    9.3.3. Horn clauses and logic programming  144
10. Soundness, Completeness  151
  10.1. Soundness  151
  10.2. Completeness  151
    10.2.1. Some Applications  155
11. Identity and Some Consequences  159
  11.1. FOL with Identity  159
    11.1.1. Axioms for Identity  160
    11.1.2. Some examples  161
    11.1.3. Soundness and Completeness of FOL=  162
  11.2. A few more bits of Model Theory  164
    11.2.1. Compactness  164
    11.2.2. Skolem-Löwenheim  165
  11.3. Semi-Decidability and Undecidability of FOL  165
  11.4. Why is First-Order Logic “First-Order”?  166
12. Summary  170
  12.1. Functions, Sets, Cardinality  170
  12.2. Relations, Orderings, Induction  171
  12.3. Turing Machines  171
  12.4. Formal Systems in general  172
    12.4.1. Axiomatic System – the syntactic part  172
    12.4.2. Semantics  173
    12.4.3. Syntax vs. Semantics  174
  12.5. Statement Logic  175
  12.6. First Order Logic  176
  12.7. First Order Logic with identity  177

The History of Logic

The term “logic” may be, very roughly and vaguely, associated with something like “correct thinking”. Aristotle defined a syllogism as “discourse in which, certain things being stated, something other than what is stated follows of necessity from their being so.” And, in fact, this intuition not only lies at the origin of logic, ca. 500 BC, but has been the main force motivating its development from that time until the last century.

There was a medieval tradition according to which the Greek philosopher Parmenides (5th century BC) invented logic while living on a rock in Egypt. The story is pure legend, but it does reflect the fact that Parmenides was the first philosopher to use an extended argument for his views, rather than merely proposing a vision of reality. But using arguments is not the same as studying them, and Parmenides never systematically formulated or studied principles of argumentation in their own right. Indeed, there is no evidence that he was even aware of the implicit rules of inference used in presenting his doctrine. Perhaps Parmenides' use of argument was inspired by the practice of early Greek mathematics among the Pythagoreans. Thus it is significant that Parmenides is reported to have had a Pythagorean teacher. But the history of Pythagoreanism in this early period is shrouded in mystery, and it is hard to separate fact from legend.

We will sketch the development of logic along three axes which reflect the three main domains of the field.
1. The foremost is the interest in the correctness of reasoning: the study of correct arguments, their form or pattern, and the possibilities of manipulating such forms in order to arrive at new correct arguments. The other two aspects are very intimately connected with this one.
2. In order to construct valid forms of arguments, one has to know what such forms can be built from, that is, determine the ultimate “building blocks”. One has to identify the basic terms, their kinds, their means of combination and, not least, their meaning.
3. Finally, there is the question of how to represent these patterns. Although apparently of secondary importance, it is the answer to this question that puts purely symbolic manipulation in focus. It can be considered the beginning of modern mathematical logic, which led to the development of the devices for symbolic manipulation known as computers.

The first three sections sketch the development along the respective lines until the Renaissance. In section D, we indicate the development in the modern era, with particular emphasis on the last two centuries. Section E indicates some basic aspects of modern mathematical logic and its relation to computers.

A. Logic – patterns of reasoning

A.1. Reductio ad absurdum

If Parmenides was not aware of the general rules underlying his arguments, the same is perhaps not true for his disciple Zeno of Elea (5th century BC). Parmenides taught that there is no real change in the world and that all things remain, eventually, the same one being. In defense of this heavily criticized thesis, Zeno designed a series of ingenious arguments, known as “Zeno's paradoxes”, which demonstrated that the contrary assumption must lead to absurdity. One of the best known is the story of Achilles and the tortoise, who compete in a race. The tortoise, being the slower runner, starts some time t before Achilles. In this time t, the tortoise covers some distance w towards the goal. Now Achilles starts running, but in order to catch up with the tortoise he must first run the distance w, which takes him some time t1. In this time, the tortoise again walks some distance w1 beyond the point w, closer to the goal. Again, Achilles must first run the distance w1 in order to catch the tortoise, but the tortoise will in the same time walk some distance w2 further. In short, Achilles will never catch the tortoise, which is obviously absurd. Roughly, this means that the thesis that the two are really changing their positions cannot be true.

The point of the story is not what is possibly wrong with this way of thinking, but that the same form of reasoning was applied by Zeno in many other stories: assuming a thesis T, we can analyze it and arrive at a conclusion C; but C turns out to be absurd – therefore T cannot be true. This pattern has been given the name “reductio ad absurdum” and is still frequently used in both informal and formal arguments.

A.2. Aristotle

Various ways of arguing in political and philosophical debates were advanced by various thinkers. The sophists, often discredited by the “serious” philosophers, certainly deserve the credit for promoting the idea of “correct arguing”, no matter what the argument is concerned with. Horrified by the immorality of the sophists' arguing, Plato attempted to combat them by plunging into ethical and metaphysical discussions, claiming that these indeed had a strong methodological logic – the logic of discourse, “dialectic”. In terms of the development of modern logic there is, however, close to nothing one can learn from that. The development of “correct reasoning” culminated in ancient Greece with Aristotle's (384–322 BC) teaching of categorical forms and syllogisms.

A.2.1. Categorical forms

Most of Aristotle's logic was concerned with certain kinds of propositions that can be analyzed as consisting of five basic building blocks: (1) usually a quantifier (“every”, “some”, or the universal negative quantifier “no”), (2) a subject, (3) a copula, (4) perhaps a negation (“not”), (5) a predicate. Propositions analyzable in this way were later called “categorical propositions” and fall into one or another of the following forms:

(quantifier) subject copula (negation) predicate

1. Every β is an α : Universal affirmative
2. Every β is not an α : Universal negative
3. Some β is an α : Particular affirmative
4. Some β is not an α : Particular negative
5. x is an α : Singular affirmative
6. x is not an α : Singular negative

In the singular judgements x stands for an individual, e.g. “Socrates is (not) a man.”

A.2.2. Conversions

Sometimes Aristotle adopted alternative but equivalent formulations. Instead of saying, for example, “Every β is an α”, he would say, “α belongs to every β” or “α is predicated of every β.” More significantly, he might use equivalent formulations; for example, instead of 2, he might say “No β is an α.”

1. “Every β is an α” is equivalent to “α belongs to every β”, or “α is predicated of every β.”
2. “Every β is not an α” is equivalent to “No β is an α.”

Aristotle formulated several rules, later known collectively as the theory of conversion. To “convert” a proposition in this sense is to interchange its subject and predicate. Aristotle observed that propositions of forms 2 and 3 can be validly converted in this way: if “no β is an α”, then also “no α is a β”, and if “some β is an α”, then also “some α is a β”. In later terminology, such propositions were said to be converted “simply” (simpliciter). But propositions of form 1 cannot be converted in this way: if “every β is an α”, it does not follow that “every α is a β”. It does follow, however, that “some α is a β”. Such propositions, which can be converted provided that not only are their subjects and predicates interchanged but also the universal quantifier is weakened to an existential (or particular) quantifier “some”, were later said to be converted “accidentally” (per accidens). Propositions of form 4 cannot be converted at all: from the fact that some animal is not a dog, it does not follow that some dog is not an animal. Aristotle used these laws of conversion to reduce other syllogisms to syllogisms in the first figure, as described below.

Conversions represent the first form of formal manipulation. They provide rules for replacing an occurrence of one (categorical) form of a statement by another – without affecting the proposition! What “affecting the proposition” means is another subtle matter. The whole point of such a manipulation is that one, in one sense or another, changes the concrete appearance of a sentence without changing its value. For Aristotle this meant simply that the pairs he determined could be exchanged; the intuition might have been that they “essentially mean the same”. In a more abstract, later formulation, one would say that “not to affect a proposition” is “not to change its truth value” – either both are false or both are true. Thus one obtains the idea that

Two statements are equivalent (interchangeable) if they have the same truth value.

This wasn't exactly Aristotle's point, but we may ascribe him a lot of intuition in this direction. From now on, this will be a constantly recurring theme in logic. Looking at propositions as thus determining a truth value gives rise to some questions (and severe problems, as we will see). Since we allow using some “placeholders” – variables – a proposition need not have a unique truth value. “All α are β” depends on what we substitute for α and β. In general, a proposition P may be:

1. a tautology – P is always true, no matter what we choose to substitute for the “placeholders” (e.g., “All α are α”; in particular, a proposition without any “placeholders”, e.g., “all animals are animals”, may be a tautology);
2.
a contradiction – P is never true (e.g., “no α is α”);
3. contingent – P is sometimes true and sometimes false (“all α are β” is true, for instance, if we substitute “animals” for both α and β, while it is false if we substitute “birds” for α and “pigeons” for β).

A.2.3. Syllogisms

Aristotelian logic is best known for the theory of syllogisms, which remained practically unchanged and unchallenged for approximately 2000 years. Aristotle defined a syllogism as a “discourse in which, certain things being stated, something other than what is stated follows of necessity from their being so.” In practice, however, he confined the term to arguments containing two premises and a conclusion, each of which is a categorical proposition. The subject and predicate of the conclusion each occur in one of the premises, together with a third term (the middle) that is found in both premises but not in the conclusion. A syllogism thus argues that because α and γ are related in certain ways to β (the middle) in the premises, they are related in a certain way to one another in the conclusion.

The predicate of the conclusion is called the major term, and the premise in which it occurs is called the major premise. The subject of the conclusion is called the minor term, and the premise in which it occurs is called the minor premise. This way of describing major and minor terms conforms to Aristotle's actual practice and was proposed as a definition by the 6th-century Greek commentator John Philoponus. But in one passage Aristotle put it differently: the minor term is said to be “included” in the middle and the middle “included” in the major term. This remark, which appears to have been intended to apply only to the first figure, has caused much confusion among some of Aristotle's commentators, who interpreted it as applying to all three figures.

Aristotle distinguished three different “figures” of syllogisms, according to how the middle is related to the other two terms in the premises. In one passage, he says that if one wants to prove α of γ syllogistically, one finds a middle term β such that either

I. α is predicated of β and β of γ (i.e., β is α and γ is β), or


II. β is predicated of both α and γ (i.e., α is β and γ is β), or else III. both α and γ are predicated of β (i.e., β is α and β is γ). All syllogisms must, according to Aristotle, fall into one or another of these figures. Each of these figures can be combined with various categorical forms, yielding a large taxonomy of possible syllogisms. Aristotle identified 19 among them which were valid (“universally correct”). The following is an example of a syllogism of figure I and categorical forms S(ome), E(very), S(ome). “Worm” is here the middle term.

(A.i)
    Some of my Friends are Worms.
    Every Worm is Ugly.
    Some of my Friends are Ugly.

The table below gives examples of syllogisms of all three figures; the middle term (W in each case) occurs in both premises.

    figure I:      [F is W]          [W is U]          [F is U]
       S,E,S:      Some [F is W]     Every [W is U]    Some [F is U]
       E,E,E:      Every [F is W]    Every [W is U]    Every [F is U]

    figure II:     [M is W]          [U is W]          [M is U]
       N,E,N:      No [M is W]       Every [U is W]    No [M is U]

    figure III:    [W is U]          [W is N]          [N is U]
       E,E,S:      Every [W is U]    Every [W is N]    Some [N is U]
       E,E,E:      Every [W is U]    Every [W is N]    Every [N is U]   –

Validity of an argument means here that no matter what concrete terms we substitute for α, β, γ, whenever the premises are true, the conclusion is guaranteed to be true as well. For instance, the first four examples above are valid while the last one is not. To see this last point, we find a counterexample. Substituting women for W, female for U and human for N, the premises hold while the conclusion states that every human is female. Note that a correct application of a valid syllogism does not guarantee truth of the conclusion. (A.i) is such an application, but the conclusion need not be true. This correct application, namely, uses a false assumption (none of my friends is a worm), and in such cases no guarantees about the truth value of the conclusion can be given. We see again that the main idea is truth preservation in the reasoning process. An obvious, yet nonetheless crucially important, assumption is:

The contradiction principle
    For any proposition P it is never the case that both P and not-P are true.

This principle seemed (and to many still seems) intuitively obvious enough to accept it without any discussion. If it were violated, there would be little point in constructing any “truth preserving” arguments.
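This notion of validity – truth under every substitution that makes the premises true – can be tested by brute force over a small domain. The following sketch is our illustration; the set-theoretic encoding and the restriction to non-empty classes (Aristotle's existential import) are assumptions, not part of the text.

```python
from itertools import product

# Categorical forms, read extensionally over non-empty classes.
Every = lambda a, b: a <= b          # "Every a is b": inclusion
Some  = lambda a, b: bool(a & b)     # "Some a is b": non-empty intersection

# Every non-empty subset of a small universe is a candidate substitution.
classes = [{1}, {2}, {3}, {1, 2}, {2, 3}, {1, 3}, {1, 2, 3}]

def figure1_valid(f1, f2, f3):
    """Figure I: [F is W], [W is U] |- [F is U]."""
    return all(f3(F, U) for F, W, U in product(classes, repeat=3)
               if f1(F, W) and f2(W, U))

def figure3_valid(f1, f2, f3):
    """Figure III: [W is U], [W is N] |- [N is U]."""
    return all(f3(N, U) for W, U, N in product(classes, repeat=3)
               if f1(W, U) and f2(W, N))

assert figure1_valid(Some, Every, Some)        # S,E,S of figure I: valid
assert figure3_valid(Every, Every, Some)       # E,E,S of figure III: valid
assert not figure3_valid(Every, Every, Every)  # E,E,E of figure III: invalid
```

The failing pattern fails exactly for the reason given in the text: some triple of classes (playing the roles of women, female, human) satisfies both premises but not the conclusion.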

A.3. Other patterns and later developments

Aristotle's syllogisms dominated logic until the late Middle Ages. A lot of variations were invented, as well as ways of reducing some valid patterns to others (cf. A.2.2). The claim that all valid arguments can be obtained by conversion and, possibly, indirect proof (reductio ad absurdum) from the three figures


has been challenged and discussed ad nauseam. Early developments (already in Aristotle) attempted to extend the syllogisms to modalities, i.e., by considering, instead of the categorical forms as above, propositions of the form “it is possible/necessary that some α are β”. Early followers of Aristotle (Theophrastus of Eresus (371-286), the school of Megarians with Euclid (430-360), Diodorus Cronus (4th century BC)) elaborated on the modal syllogisms and introduced another form of a proposition, the conditional “if (α is β) then (γ is δ)”. These were further developed by the Stoics, who also made another significant step. Instead of considering only “patterns of terms”, where α, β, etc. are placeholders for some objects, they started to investigate logic with “patterns of propositions”. Such patterns would use variables standing for propositions instead of terms. For instance, from two propositions, “the first” and “the second”, we may form new propositions, e.g., “the first or the second”, “if the first then the second”, etc. The terms “the first”, “the second” were used by the Stoics as variables instead of α, β, etc. The truth of such compound propositions may be determined from the truth of their constituents. We thus get new patterns of arguments. The Stoics gave the following list of five patterns:

    1. If 1 then 2;       but 1;       therefore 2.
    2. If 1 then 2;       but not 2;   therefore not 1.
    3. Not both 1 and 2;  but 1;       therefore not 2.
    4. Either 1 or 2;     but 1;       therefore not 2.
    5. Either 1 or 2;     but not 2;   therefore 1.
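Each of the five patterns can be verified by checking all four truth-value combinations of the two constituent propositions. In the sketch below (a modern reconstruction, not from the text), “either . . . or” in patterns 4 and 5 is read exclusively – only on that reading is pattern 4 valid.

```python
from itertools import product

def valid(premises, conclusion):
    """Valid: the conclusion is true under every valuation
    making all the premises true."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b   # "if a then b"
either  = lambda a, b: a != b         # exclusive "either a or b"

patterns = [
    ([lambda p, q: implies(p, q), lambda p, q: p],     lambda p, q: q),      # 1
    ([lambda p, q: implies(p, q), lambda p, q: not q], lambda p, q: not p),  # 2
    ([lambda p, q: not (p and q), lambda p, q: p],     lambda p, q: not q),  # 3
    ([lambda p, q: either(p, q),  lambda p, q: p],     lambda p, q: not q),  # 4
    ([lambda p, q: either(p, q),  lambda p, q: not q], lambda p, q: p),      # 5
]

for i, (premises, conclusion) in enumerate(patterns, 1):
    assert valid(premises, conclusion), f"pattern {i}"
print("all five Stoic patterns are valid")
```

The first two patterns are what we today call modus ponens and modus tollens.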

Chrysippus (c.279-208 BC) derived many other schemata. The Stoics claimed (wrongly, as it seems) that all valid arguments could be derived from these patterns. At the time, the two approaches seemed different, and a lot of discussions centered around the question of which was “the right one”. Although the Stoics' “propositional patterns” fell into oblivion for a long time, they re-emerged as the basic tools of modern mathematical propositional logic. Medieval logic was dominated by Aristotelian syllogisms; it elaborated on them, but without contributing significantly to this aspect of reasoning. However, Scholasticism developed very sophisticated theories concerning other central aspects of logic.

B. Logic – a language about something

The pattern of a valid argument is the first and, through the centuries, fundamental issue in the study of logic. But there were (and are) a lot of related issues. For instance, the two statements
1. “all horses are animals”, and
2. “all birds can fly”
are not exactly of the same form. More precisely, this depends on what a form is. The first says that one class (horses) is included in another (animals), while the second says that all members of a class (birds) have some property (can fly). Is this grammatical difference essential or not? Or else, can both be covered by one and the same pattern or not? Can we replace a noun by an adjective in a valid pattern and still obtain a valid pattern or not? In fact, the first categorical form subsumes both sentences above, i.e., from the point of view of our logic, they are considered as having the same form. Such questions indicate, however, that forms of statements and patterns of reasoning, like syllogisms, require further analysis of “what can be plugged where”, which, in turn, depends on which words or phrases can be considered as “having a similar function”, perhaps even as “having the same meaning”. What are the objects referred to by various kinds of words? What are the objects referred to by propositions?


B.1. Early semantic observations and problems

Certain particular teachings of the sophists and rhetoricians are significant for the early history of (this aspect of) logic. For example, the arch-sophist Protagoras (500 BC) is reported to have been the first to distinguish different kinds of sentences: questions, answers, prayers, and injunctions. Prodicus appears to have maintained that no two words can mean exactly the same thing. Accordingly, he devoted much attention to carefully distinguishing and defining the meanings of apparent synonyms, including many ethical terms. The categorical forms from A.2.1 were also classified according to such organizing principles. Since logic studies statements, their form as well as the patterns in which they can be arranged to form valid arguments, one of its basic questions concerns the meaning of a proposition. As we indicated earlier, two propositions can be considered equivalent if they have the same truth value. This indicates another law, besides the contradiction principle, namely:

The law of excluded middle
    Each proposition P is either true or false.

There is surprisingly much to be said against this apparently simple claim. There are modal statements (see B.4) which do not seem to have any definite truth value. Among many early counterexamples, there is the most famous one, produced by the Megarians, which is still disturbing and discussed by modern logicians:

The “liar paradox”
    The sentence “This sentence is false” does not seem to have any content – it is false if and only if it is true!

Such paradoxes indicated the need for closer analysis of the fundamental notions of the logical enterprise.

B.2. The Scholastic theory of supposition

The character and meaning of various “building blocks” of a logical language were thoroughly investigated by the Scholastics. The theory of supposition was meant to answer the question: “To what does a given occurrence of a term refer in a given proposition?” Roughly, one distinguished three kinds of supposition/reference:
1. personal: in the sentence “Every horse is an animal”, the term “horse” refers to individual horses;
2. simple: in the sentence “Horse is a species”, the term “horse” refers to a universal (the concept ‘horse’);
3. material: in the sentence “Horse is a monosyllable”, the term “horse” refers to the spoken or written word.
We can notice here the distinction based on the fundamental duality of individuals and universals, which had been one of the most debated issues in Scholasticism. The third point indicates an important development, namely the increasing attention paid to language as such, which slowly became an object of study in its own right.

B.3. Intension vs. extension

In addition to supposition and its satellite theories, several logicians during the 14th century developed a sophisticated theory of connotation. The term “black” does not merely denote all black things – it also connotes the quality, blackness, which all such things possess. This has become one of the central distinctions in the later development of logic and in the discussions about the entities referred to by the words we use. One began to call connotation “intension” – in saying “black” I intend blackness. Denotation is closer to “extension” – the collection of all the


objects referred to by the term “black”. One has arrived at the understanding of a term which can be represented pictorially as follows:

                        term
                       /    \
               intends       refers to
                     /        \
                    v          v
            intension --can be ascribed to--> extension

The crux of many problems is that different intensions may refer to (denote) the same extension. The “Morning Star” and the “Evening Star” have different intensions and for centuries were considered to refer to two different stars. As it turned out, these are actually two appearances of one and the same planet, Venus, i.e., the two terms have the same extension. One might expect logic to be occupied with concepts, that is, connotations – after all, it tries to capture correct reasoning. Many attempts have been made to design a “universal language of thought” in which one could speak directly about the concepts and their interrelations. Unfortunately, the concept of a concept is not that obvious, and one had to wait a while until a somewhat tractable way of speaking of/modeling/representing concepts became available. The emergence of modern mathematical logic coincides with the successful coupling of a logical language with the precise statement of its meaning in terms of extension. This by no means solved all the problems, and modern logic still has branches of intensional logic – we will return to this point later on.

B.4. Modalities

These disputes, too, started with Aristotle. In chapter 9 of De Interpretatione, he discusses the assertion “There will be a sea battle tomorrow”. The problem with this assertion is that, at the moment when it is made, it does not seem to have any definite truth value – whether it is true or false will become clear tomorrow, but until then it is possible that it will be the one as well as the other. This is another example (besides the “liar paradox”) indicating that adopting the principle of “excluded middle”, i.e., considering propositions as always having one of only two possible truth values, may be insufficient. Medieval logicians continued the tradition of modal syllogistic inherited from Aristotle. In addition, modal factors were incorporated into the theory of supposition. But the most important developments in modal logic occurred in three other contexts:
1. whether propositions about future contingent events are now true or false (the question raised by Aristotle),
2. whether a future contingent event can be known in advance, and
3. whether God (who, the tradition says, cannot be acted upon causally) can know future contingent events.
All these issues link logical modality with time. Thus, Peter Aureoli (c. 1280-1322) held that if something is in fact P (P is some predicate) but can be not-P, then it is capable of changing from being P to being not-P. However, here as in the case of categorical propositions, important issues could hardly be settled before one had a clearer idea concerning the kinds of objects or states of affairs modalities are supposed to describe. Duns Scotus in the late 13th century was the first to sever the link between time and modality. He proposed a notion of possibility that was not linked with time but based purely on the notion of semantic consistency. “Possible” means here logically possible, that is, not involving contradiction.
This radically new conception had a tremendous influence on later generations down to the 20th century. Shortly afterward, Ockham developed an influential theory of modality and time that reconciles the claim that every proposition is either true or false with the claim that certain propositions about the future are genuinely contingent. Duns Scotus' ideas were revived in the 20th century, starting with the work of Jan Łukasiewicz who, once again, studied Aristotle's example and introduced 3-valued logic: a proposition may be true, or false, or else it may have a third, “undetermined” truth value.
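Łukasiewicz's system can be sketched numerically. The truth functions below are Łukasiewicz's standard three-valued connectives (1 for true, 0 for false, 1/2 for undetermined); the text itself does not spell them out, so this is a reconstruction.

```python
from fractions import Fraction

U = Fraction(1, 2)                    # the third, "undetermined" value

def neg(a):      return 1 - a
def conj(a, b):  return min(a, b)
def disj(a, b):  return max(a, b)
def impl(a, b):  return min(Fraction(1), 1 - a + b)   # Łukasiewicz implication

# "There will be a sea battle tomorrow" is undetermined today:
p = U
assert disj(p, neg(p)) == U    # excluded middle: p or not-p is only 1/2
assert impl(p, p) == 1         # yet "if p then p" remains fully true
```

The point of the example: with the third value, the law of excluded middle is no longer a tautology, exactly the feature that Aristotle's sea-battle example seems to call for.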


C. Logic – a symbolic language

Logic's preoccupation with concepts and reasoning gradually began to put more and more severe demands on the appropriate and precise representation of the terms used. We saw that syllogisms used fixed forms of categorical statements with variables – α, β, etc. – which represented arbitrary terms (or objects). The use of variables was an indisputable contribution of Aristotle to logical, and more generally mathematical, notation. We also saw that the Stoics introduced analogous variables standing for propositions. Such notational tricks facilitated more concise, more general and more precise statements of various logical facts. Following the Scholastic discussions of connotation vs. denotation, logicians of the 16th century felt the increased need for a more general logical language. One of the goals was the development of an ideal logical language that naturally expressed ideal thought and was more precise than natural language. An important motivation underlying the attempts in this direction was the idea of manipulation – in fact, symbolic or even mechanical manipulation of arguments represented in such a language. Aristotelian logic had seen itself as a tool for training “natural” abilities at reasoning. Now one would like to develop methods of thinking that would accelerate or improve human thought, or would even allow its replacement by mechanical devices. Among the initial attempts was the work of the Spanish soldier, priest and mystic Ramon Lull (1235-1315), who tried to symbolize concepts and derive propositions from various combinations of possibilities. The work of some of his followers, Juan Vives (1492-1540) and Johann Alsted (1588-1638), represents perhaps the first systematic effort at a logical symbolism. Some philosophical ideas in this direction occurred within the Port-Royal Logic – the work of a group of anticlerical Jansenists located at Port-Royal outside Paris, whose most prominent member was Blaise Pascal.
They elaborated on the Scholastic distinction between comprehension and extension. Most importantly, Pascal introduced the distinction between real and nominal definitions. Real definitions were descriptive and stated the essential properties in a concept, while nominal definitions were creative and stipulated the conventions by which a linguistic term was to be used. (“Man is a rational animal.” attempts to give a real definition of the concept ‘man’. “By monoid we will understand a set with a unary operation.” is a nominal definition assigning a concept to the word “monoid”.) Although the Port-Royal logic itself contained no symbolism, the philosophical foundation for using symbols by way of nominal definitions was nevertheless laid.

C.1. The “universally characteristic language”

Lingua characteristica universalis was Gottfried Leibniz' ideal of a language that would, first, notationally represent concepts by displaying the more basic concepts of which they were composed, and second, naturally represent (in the manner of graphs or pictures, “iconically”) the concept in a way that could be easily grasped by readers, no matter what their native tongue. Leibniz studied and was impressed by the method of the Egyptians and Chinese in using picture-like expressions for concepts. The goal of a universal language had already been suggested by Descartes for mathematics as a “universal mathematics”; it had also been discussed extensively by the English philologist George Dalgarno (c. 1626-87) and, for mathematical language and communication, by the French algebraist François Viète (1540-1603).

C.1.1. “Calculus of reason”

Another and distinct goal Leibniz proposed for logic was a “calculus of reason” (calculus ratiocinator). This would naturally first require a symbolism but would then involve explicit manipulations of the symbols according to established rules, by which either new truths could be discovered or proposed conclusions could be checked to see if they could indeed be derived from the premises. Reasoning could then take place in the way large sums are done – that is, mechanically or algorithmically – and thus not be subject to individual mistakes and failures of ingenuity. Such derivations could be checked by others or performed by machines, a possibility that Leibniz seriously contemplated. Leibniz' suggestion that machines could be constructed to draw valid inferences or to check the deductions of others was followed up by Charles Babbage, William Stanley Jevons, and Charles Sanders Peirce and his student Allan Marquand in the 19th century, and with wide success


on modern computers after World War II. (See chapter 7 in C. Sobel, The Cognitive Sciences: An Interdisciplinary Approach, for more detailed examples.) The symbolic calculus that Leibniz devised seems to have been more of a calculus of reason than a “characteristic” language. It was motivated by his view that most concepts were “composite”: they were collections or conjunctions of other, more basic concepts. Symbols (letters, lines, or circles) were then used to stand for concepts and their relationships. This resulted in an intensional rather than an extensional logic – one whose terms stand for properties or concepts rather than for the things having these properties. Leibniz' basic notion of the truth of a judgment was that the concepts making up the predicate were “included in” the concept of the subject. For instance, the judgment ‘A zebra is striped and a mammal.’ is true because the concepts forming the predicate ‘striped-and-mammal’ are, in fact, “included in” the concept (all possible predicates) of the subject ‘zebra’. What Leibniz symbolized as A∞B, or what we would write today as A = B, was that all the concepts making up concept A also are contained in concept B, and vice versa.

Leibniz used two further notions to expand the basic logical calculus. In his notation, A⊕B∞C indicates that the concepts in A and those in B wholly constitute those in C. We might write this as A + B = C or A ∨ B = C – if we keep in mind that A, B, and C stand for concepts or properties, not for individual things. Leibniz also used the juxtaposition of terms in the following way: AB∞C, which we might write as A × B = C or A ∧ B = C, signifies in his system that all the concepts in both A and B wholly constitute the concept C. A universal affirmative judgment, such as “All A's are B's,” becomes in Leibniz' notation A∞AB. This equation states that the concepts included in the concepts of both A and B are the same as those in A.
A syllogism “All A's are B's; all B's are C's; therefore all A's are C's” becomes the sequence of equations: A = AB; B = BC; therefore A = AC. Notice that this conclusion can be derived from the premises by two simple algebraic substitutions and the associativity of logical multiplication:

(C.i)
    1.  A = AB       Every A is B
    2.  B = BC       Every B is C
  (1+2)  A = ABC
         A = AC      therefore: Every A is C
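Read extensionally – juxtaposition as intersection of classes, and “Every A is B” as A = AB – the two substitutions and the use of associativity can be checked mechanically. This is a sketch with made-up classes; Leibniz himself read the terms intensionally.

```python
# Sample classes chosen so that both premises of (C.i) hold:
A = {1, 2}
B = {1, 2, 3}
C = {1, 2, 3, 4}

assert A == A & B      # premise 1: Every A is B   (A = AB)
assert B == B & C      # premise 2: Every B is C   (B = BC)

# Substituting BC for B in premise 1, regrouping by associativity, and
# substituting A back for AB (premise 1) yields A = AC, i.e. Every A is C:
assert A == A & (B & C) == (A & B) & C == A & C
```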

Like many early symbolic logics, including many developed in the 19th century, Leibniz' system had difficulties with particular and negative statements, and it included little discussion of propositional logic and no formal treatment of quantified relational statements. (Leibniz later became keenly aware of the importance of relations and relational inferences.) Although Leibniz might seem to deserve credit for great originality in his symbolic logic – especially in his equational, algebraic logic – it turns out that such insights were relatively common to mathematicians of the 17th and 18th centuries who had knowledge of traditional syllogistic logic. In 1685 Jakob Bernoulli published a pamphlet on the parallels of logic and algebra and gave some algebraic renderings of categorical statements. Later the symbolic work of Lambert, Ploucquet, Euler, and even Boole – all apparently uninfluenced by Leibniz' or even Bernoulli's work – seems to show the extent to which these ideas were apparent to the best mathematical minds of the day.

D. 19th and 20th Century – mathematization of logic

Leibniz' system and calculus mark the appearance of a formalized, symbolic language which is amenable to mathematical (algebraic or other) manipulation. A bit ironically, the emergence of mathematical logic also marks this logic's separation, if not divorce, from philosophy. Of course, the discussions of logic have continued both among logicians and philosophers, but from now on these groups form two increasingly distinct camps. Not all questions of philosophical logic are important for mathematicians, and most results of mathematical logic have a rather technical character which is not always of interest for philosophers. (There are, of course, exceptions like,


for instance, the extremist camp of analytical philosophers who at the beginning of the 20th century attempted to design a philosophy based exclusively on the principles of mathematical logic.) In this short presentation we have to ignore some developments which did take place between the 17th and 19th centuries. It was only in the last century that the substantial contributions were made which created modern logic. The first issue concerned the intensional vs. extensional dispute – the work of George Boole, based on a purely extensional interpretation, was a real breakthrough. It did not settle the issue once and for all – for instance Frege, “the father of first-order logic”, was still in favor of concepts and intensions, and in modern logic there is still a branch of intensional logic. However, Boole's approach was so convincingly precise and intuitive that it was later taken up and became the basis of modern – extensional or set-theoretical – semantics.

D.1. George Boole The two most important contributors to British logic in the first half of the 19th century were undoubtedly George Boole and Augustus De Morgan. Their work took place against a more general background of logical work in English by figures such as Whately, George Bentham, Sir William Hamilton, and others. Although Boole cannot be credited with the very first symbolic logic, he was the first major formulator of a symbolic extensional logic that is familiar today as a logic or algebra of classes. (A correspondent of Lambert, Georg von Holland, had experimented with an extensional theory, and in 1839 the English writer Thomas Solly presented an extensional logic in A Syllabus of Logic, though not an algebraic one.) Boole published two major works, The Mathematical Analysis of Logic in 1847 and An Investigation of the Laws of Thought in 1854. It was the first of these two works that had the deeper impact on his contemporaries and on the history of logic. The Mathematical Analysis of Logic arose as the result of two broad streams of influence. The first was the English logic-textbook tradition. The second was the rapid growth in the early 19th century of sophisticated discussions of algebra and anticipations of nonstandard algebras. The British mathematicians D.F. Gregory and George Peacock were major figures in this theoretical appreciation of algebra. Such conceptions gradually evolved into “nonstandard” abstract algebras such as quaternions, vectors, linear algebra, and Boolean algebra itself. Boole used capital letters to stand for the extensions of terms; they are referred to (in 1854) as classes of “things” but should not be understood as modern sets. Nevertheless, this extensional perspective made the Boolean algebra a very intuitive and simple structure which, at the same time, seems to capture many essential intuitions. 
The universal class or term – which he called simply “the Universe” – was represented by the numeral “1”, and the empty class by “0”. The juxtaposition of terms (for example, “AB”) created a term referring to the intersection of two classes or terms. The addition sign signified the non-overlapping union; that is, “A + B” referred to the entities in A or in B; in cases where the extensions of terms A and B overlapped, the expression was held to be “undefined.” For designating a proper subclass of a class A, Boole used the notation “vA”. Finally, he used subtraction to indicate the removing of terms from classes. For example, “1 − x” would indicate what one would obtain by removing the elements of x from the universal class – that is, obtaining the complement of x (relative to the universe, 1). Basic equations included:

    1A = A                          0A = 0
    A + 0 = A                       A + 1 = 1 (for A = 0)
    AB = BA                         A + B = B + A
    AA = A
    A(BC) = (AB)C                   (associativity)
    A(B + C) = AB + AC
    A + (BC) = (A + B)(A + C)       (distributivity)

Boole offered a relatively systematic, but not rigorously axiomatic, presentation. For a universal affirmative statement such as “All A’s are B’s,” Boole used three alternative notations: A = AB (somewhat in the manner of Leibniz), A(1 − B) = 0, or A = vB (the class of A’s is equal to some proper subclass of the B’s). The first and second interpretations allowed one to derive syllogisms by algebraic substitution; the third one required manipulation of the subclass (“v”) symbols.
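These laws can be verified by brute force over all subsets of a small universe. In the sketch below (our illustration), juxtaposition is intersection and, to keep “+” total, we use the inclusive union that Jevons and Peirce later substituted for Boole's partial operation (see D.1.1).

```python
from itertools import product

U = {1, 2, 3}                     # Boole's "Universe", written 1
EMPTY = set()                     # the empty class, written 0

def subsets(universe):
    """All subsets of a finite universe."""
    elems = sorted(universe)
    return [{e for e, bit in zip(elems, bits) if bit}
            for bits in product([0, 1], repeat=len(elems))]

for A, B, C in product(subsets(U), repeat=3):
    assert U & A == A and EMPTY & A == EMPTY     # 1A = A,  0A = 0
    assert A | EMPTY == A                        # A + 0 = A
    assert A & B == B & A and A | B == B | A     # commutativity
    assert A & A == A                            # AA = A
    assert A & (B & C) == (A & B) & C            # associativity
    assert A & (B | C) == (A & B) | (A & C)      # A(B + C) = AB + AC
    assert A | (B & C) == (A | B) & (A | C)      # A + (BC) = (A + B)(A + C)
```

With only eight subsets of a three-element universe, all 512 triples are checked in a fraction of a second.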


Derivation (C.i) now becomes explicitly controlled by the applied axioms:

(D.i)
    A = AB                  assumption
    B = BC                  assumption
    A = A(BC)               substitution BC for B
      = (AB)C               associativity
      = AC                  substitution A for AB

In contrast to earlier symbolisms, Boole's was extensively developed, with a thorough exploration of a large number of equations and techniques. The formal logic was separately applied to the interpretation of propositional logic, which became an interpretation of the class or term logic – with terms standing for occasions or times rather than for concrete individual things. Following the English textbook tradition, deductive logic is but one half of the subject matter of the book, with inductive logic and probability theory constituting the other half of both his 1847 and 1854 works. Seen in historical perspective, Boole's logic was a remarkably smooth blend of the new “algebraic” perspective and the English-logic textbook tradition. His 1847 work begins with a slogan that could have served as the motto of abstract algebra: “. . . the validity of the processes of analysis does not depend upon the interpretation of the symbols which are employed, but solely upon the laws of combination.”

D.1.1. Further developments of Boole's algebra; De Morgan

Modifications to Boole's system were swift in coming: in the 1860s Peirce and Jevons both proposed replacing Boole's “+” with a simple inclusive union or summation: the expression “A + B” was to be interpreted as designating the class of things in A, in B, or in both. This results in accepting the equation “1 + 1 = 1”, which is certainly not true of ordinary numerical algebra and at which Boole apparently balked. Interestingly, one defect in Boole's theory, its failure to detail relational inferences, was dealt with almost simultaneously with the publication of his first major work. In 1847 Augustus De Morgan published his Formal Logic; or, the Calculus of Inference, Necessary and Probable. Unlike Boole and most other logicians in the United Kingdom, De Morgan knew the medieval theory of logic and semantics and also knew the Continental, Leibnizian symbolic tradition of Lambert, Ploucquet, and Gergonne.
The symbolic system that De Morgan introduced in his work and used in subsequent publications is, however, clumsy and does not show the appreciation of abstract algebras that Boole's did. De Morgan did introduce the enormously influential notion of a possibly arbitrary and stipulated “universe of discourse” that was used by later Booleans. (Boole's original universe referred simply to “all things.”) This view influenced 20th-century logical semantics. The notion of a stipulated “universe of discourse” means that, instead of talking about “the Universe”, one can choose this universe depending on the context, i.e., “1” may sometimes stand for “the universe of all animals”, and at other times for a mere two-element set, say {“the true”, “the false”}. In the former case, the derivation (D.i) of A = AC from A = AB; B = BC represents the classical syllogism “All A's are B's; all B's are C's; therefore all A's are C's”. In the latter case, the equations of Boolean algebra yield the laws of propositional logic, where “A + B” is taken to mean the disjunction “A or B”, and juxtaposition “AB” the conjunction “A and B”. With this reading, the derivation (D.i) represents another reading of the syllogism, namely: “If A implies B and B implies C, then A implies C”. Negation of A is simply its complement 1 − A, which may also be written as Ā. De Morgan is known to all students of elementary logic through the so-called ‘De Morgan laws’: 1 − AB = Ā + B̄ and, dually, (Ā)(B̄) = 1 − (A + B). Using these laws, as well as some additional, today standard, facts, like B̄B = 0 and 1 − B̄ = B, we can derive the following reformulation of the


reductio ad absurdum, “If every A is B then every not-B is not-A”:

    A = AB                  assumption
    A − AB = 0              − AB
    A(1 − B) = 0            distributivity over −
    AB̄ = 0                  B̄ = 1 − B
    Ā + B = 1               De Morgan
    B̄(Ā + B) = B̄            · B̄
    B̄Ā + B̄B = B̄             distributivity
    B̄Ā + 0 = B̄              B̄B = 0
    B̄Ā = B̄                  A + 0 = A

I.e., if “Every A is B”, A = AB, then “every not-B is not-A”, B̄ = (B̄)(Ā). Or: if “A implies B” then “if B is false (absurd) then so is A”.

De Morgan's other essays on logic were published in a series of papers from 1846 to 1862 (and an unpublished essay of 1868) entitled simply On the Syllogism. The first series of four papers found its way into the middle of the Formal Logic of 1847. The second series, published in 1850, is of considerable significance in the history of logic, for it marks the first extensive discussion of quantified relations since late medieval logic and Jung's massive Logica hamburgensis of 1638. In fact, De Morgan made the point, later to be exhaustively repeated by Peirce and implicitly endorsed by Frege, that relational inferences are the core of mathematical inference and scientific reasoning of all sorts; relational inferences are thus not just one type of reasoning but rather the most important type of deductive reasoning. Often attributed to De Morgan – not precisely correctly but in the right spirit – was the observation that all of Aristotelian logic was helpless to show the validity of the inference,

(D.ii)

All horses are animals; therefore, every head of a horse is the head of an animal.

The title of this series of papers, De Morgan’s devotion to the history of logic, his reluctance to mathematize logic in any serious way, and even his clumsy notation – apparently designed to represent as well as possible the traditional theory of the syllogism – show De Morgan to be a deeply traditional logician.

D.2. Gottlob Frege
In 1879 the young German mathematician Gottlob Frege – whose mathematical speciality, like Boole’s, had actually been calculus – published perhaps the finest single book on symbolic logic in the 19th century, Begriffsschrift (“Conceptual Notation”). The title was taken from Trendelenburg’s translation of Leibniz’ notion of a characteristic language. Frege’s small volume is a rigorous presentation of what would now be called the first-order predicate logic. It contains a careful use of quantifiers and predicates (although predicates are described as functions, suggestive of the technique of Lambert). It shows no trace of the influence of Boole and little trace of the older German tradition of symbolic logic. One might surmise that Frege was familiar with Trendelenburg’s discussion of Leibniz, had probably encountered works by Drobisch and Hermann Grassmann, and possibly had a passing familiarity with the works of Boole and Lambert, but was otherwise ignorant of the history of logic. He later characterized his system as inspired by Leibniz’ goal of a characteristic language but not of a calculus of reason. Frege’s notation was unique and problematically two-dimensional; this alone caused it to be little read. Frege was well aware of the importance of functions in mathematics, and these form the basis of his notation for predicates; he never showed an awareness of the work of De Morgan and Peirce on relations or of older medieval treatments. The work was reviewed (by Schröder, among others), but never very positively, and the reviews always chided him for his failure to acknowledge the Boolean and older German symbolic tradition; reviews written by philosophers chided him for various sins against reigning idealist dogmas. Frege stubbornly ignored the critiques of his notation and persisted in publishing all his later works using it, including his little-read magnum opus, Grundgesetze der Arithmetik (1893-1903; “The Basic Laws of Arithmetic”).
Although notationally cumbersome, Frege’s system contained a precise and adequate (in the sense adopted later) treatment of several basic notions. The universal affirmative “All A’s are


B’s” meant for Frege that the concept A implies the concept B, or that “to be A implies also to be B”. Moreover, this applies to arbitrary x which happens to be A. Thus the statement becomes: “∀x : A(x) → B(x)”, where the quantifier ∀x stands for “for all x” and the arrow “→” for implication. The analysis of this, and one other statement, can be represented as follows:

   Every      horse               is     an animal
   Every x    which is a horse    is     an animal
   Every x    if it is a horse    then   it is an animal
   ∀x :       H(x)                →      A(x)

   Some       animals             are    horses
   Some x’s   which are animals   are    horses
   Some x’s   are animals         and    horses
   ∃x :       A(x)                &      H(x)

This was not the way Frege would write it, but it was the way he would put it and think of it, and this is his main contribution. The syllogism “All A’s are B’s; all B’s are C’s; therefore: all A’s are C’s” will be written today in first-order logic as:

   [ (∀x : A(x) → B(x)) & (∀x : B(x) → C(x)) ] → (∀x : A(x) → C(x))

and will be read as: “If any x which is A is also B, and any x which is B is also C, then any x which is A is also C”. Particular judgments (concerning individuals) can be obtained from the universal ones by substitution. For instance:

(D.iii)
   Hugo is a horse;   and   every horse is an animal;   hence:   Hugo is an animal.
   H(Hugo)            &     (∀x : H(x) → A(x))                   A(Hugo)

Substituting Hugo for x in the universal premise gives H(Hugo) → A(Hugo), from which, together with H(Hugo), the conclusion A(Hugo) follows.
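The validity of the first-order form of the syllogism can itself be confirmed by brute force over a small finite domain, checking every interpretation of A, B and C as subsets. A sketch of our own (the function name syllogism_valid is an illustration, not Frege’s notation):

```python
# Check that [(∀x: A(x)→B(x)) & (∀x: B(x)→C(x))] → (∀x: A(x)→C(x)) holds
# in every interpretation of A, B, C as subsets of a small finite domain.
from itertools import product

def syllogism_valid(n=3):
    domain = range(n)
    # enumerate all 2^(3n) assignments of membership in A, B and C
    for bits in product((False, True), repeat=3 * n):
        A = {x for x in domain if bits[3 * x]}
        B = {x for x in domain if bits[3 * x + 1]}
        C = {x for x in domain if bits[3 * x + 2]}
        premises = A <= B and B <= C    # ∀x: A(x)→B(x) and ∀x: B(x)→C(x)
        conclusion = A <= C             # ∀x: A(x)→C(x)
        if premises and not conclusion:
            return False                # a countermodel would refute validity
    return True

print(syllogism_valid())   # → True
```

Note that “∀x : A(x) → B(x)” is exactly the subset relation A ⊆ B, so the validity here is just the transitivity of inclusion, in line with De Morgan’s set reading of the syllogism.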

The relational arguments, like (D.ii) about horse-heads and animal-heads, can be derived after we have represented the involved statements as follows:

   1. y is a head of some horse   =   there is        a horse             and   y is its head
                                  =   there is an x   which is a horse    and   y is the head of x
                                  =   ∃x :            H(x)                &     Hd(y, x)

   2. y is a head of some animal  =   ∃x :            A(x)                &     Hd(y, x)

Now, “All horses are animals; therefore: every horse-head is an animal-head” will be given the form as in the first line below and the (very informal) treatment that follows:

   ∀v : H(v) → A(v)   →   ( ∀y : ∃x : H(x) & Hd(y, x) → ∃z : A(z) & Hd(y, z) )

   take an arbitrary horse-head a :   ∃x : H(x) & Hd(a, x)   →   ∃z : A(z) & Hd(a, z)
   then there is a horse h :          H(h) & Hd(a, h)        →   ∃z : A(z) & Hd(a, z)
   but h is an animal by (D.iii), so  A(h) & Hd(a, h)

Frege’s first writings after the Begriffsschrift were bitter attacks on Boolean methods (showing no awareness of the improvements by Peirce, Jevons, Schröder, and others) and a defense of his own system. His main complaint against Boole was the artificiality of mimicking notation better suited for numerical analysis rather than developing a notation for logical analysis alone. This work was followed by the Die Grundlagen der Arithmetik (1884; “The Foundations of Arithmetic”) and then by a series of extremely important papers on precise mathematical and logical topics. After 1879 Frege carefully developed his position that all of mathematics could be derived from, or reduced to, basic logical laws – a position later to be known as logicism in the philosophy of mathematics. His view paralleled similar ideas about the reducibility of mathematics to set theory from roughly the same time – although Frege always stressed that his was an intensional logic of concepts, not of extensions and classes. His views are often marked by hostility to British extensional logic and to the general English-speaking tendencies toward nominalism and empiricism that he found in authors such as J.S. Mill. Frege’s work was much admired in the period 1900-10 by Bertrand Russell who promoted Frege’s logicist research program – first in the Introduction to Mathematical


Logic (1903), and then with Alfred North Whitehead, in Principia Mathematica (1910-13) – but who used a Peirce-Schröder-Peano system of notation rather than Frege’s; Russell’s development of relations and functions was very similar to Schröder’s and Peirce’s. Nevertheless, Russell’s formulation of what is now called the “set-theoretic” paradoxes was taken by Frege himself, perhaps too readily, as a shattering blow to his goal of founding mathematics and science in an intensional, “conceptual” logic. Almost all progress in symbolic logic in the first half of the 20th century was accomplished using set theories and extensional logics and thus mainly relied upon work by Peirce, Schröder, Peano, and Georg Cantor. Frege’s care and rigour were, however, admired by many German logicians and mathematicians, including David Hilbert and Ludwig Wittgenstein. Although he did not formulate his theories in an axiomatic form, Frege’s derivations were so careful and painstaking that he is sometimes regarded as a founder of this axiomatic tradition in logic. Since the 1960s Frege’s works have been translated extensively into English and reprinted in German, and they have had an enormous impact on a new generation of mathematical and philosophical logicians.

D.3. Set theory
A development in Germany originally completely distinct from logic but later to merge with it was Georg Cantor’s development of set theory. As mentioned before, the extensional view of concepts began gradually gaining ground with the advance of Boolean algebra. Eventually, even Frege’s analyses became incorporated and merged with the set-theoretical approach to the semantics of logical formalism. In work originating from discussions on the foundations of the infinitesimal and derivative calculus by Baron Augustin-Louis Cauchy and Karl Weierstrass, Cantor and Richard Dedekind developed methods of dealing with the large, and in fact infinite, sets of the integers and points on the real number line. Although the Booleans had used the notion of a class, they rarely developed tools for dealing with infinite classes, and no one systematically considered the possibility of classes whose elements were themselves classes, which is a crucial feature of Cantorian set theory. The conception of “real” or “actual” infinities of things, as opposed to merely unlimited possibilities, was a medieval problem that had also troubled 19th-century German mathematicians, especially the great Carl Friedrich Gauss. The Bohemian mathematician and priest Bernhard Bolzano emphasized the difficulties posed by infinities in his Paradoxien des Unendlichen (1851; “Paradoxes of the Infinite”); in 1837 he had written an anti-Kantian and pro-Leibnizian nonsymbolic logic that was later widely studied. De Morgan and Peirce had given technically correct characterizations of infinite domains; these were not especially useful in set theory and went unnoticed in the German mathematical world. And the decisive development happened in this world. First Dedekind and then Cantor used Bolzano’s tool of measuring sets by one-to-one mappings: two sets are “equinumerous” iff there is a one-to-one mapping between them. Using this technique, Dedekind gave in Was sind und was sollen die Zahlen?
(1888; “What Are and Should Be the Numbers?”) a precise definition of an infinite set: a set is infinite if and only if the whole set can be put into one-to-one correspondence with a proper subset of itself. This looks like a contradiction because, as long as we think of finite sets, it indeed is. But take the set of all natural numbers, N = {0, 1, 2, 3, 4, ...}, and remove from it 0, getting N1 = {1, 2, 3, 4, ...}. The functions f : N1 → N, given by f(x) = x − 1, and f1 : N → N1, given by f1(x) = x + 1, are mutually inverse and establish a one-to-one correspondence between N and its proper subset N1. A set A is said to be “countable” iff it is equinumerous with N. One of the main results of Cantor was the demonstration that there are uncountable infinite sets, in fact, sets “arbitrarily infinite”. (For instance, the set R of real numbers was shown by Cantor to be “genuinely larger” than N.) Although Cantor developed the basic outlines of a set theory, especially in his treatment of infinite sets and the real number line, he did not worry about rigorous foundations for such a theory – thus, for example, he did not give axioms of set theory – nor about the precise conditions governing the concept of a set and the formation of sets. Although there are some hints in Cantor’s


writing of an awareness of problems in this area (such as hints of what later came to be known as the class/set distinction), these difficulties were forcefully posed by the paradoxes of Russell and the Italian mathematician Cesare Burali-Forti and were first overcome in what has come to be known as Zermelo-Fraenkel set theory.
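Dedekind’s definition of infinity can be illustrated directly with the two maps mentioned above, f(x) = x − 1 and f1(x) = x + 1. A small sketch of our own, checking on an initial segment that they are mutually inverse:

```python
# f : N1 → N and f1 : N → N1 from the text, where N1 = N \ {0}; being
# mutually inverse, they put N in one-to-one correspondence with its
# proper subset N1, witnessing that N is infinite in Dedekind's sense.
def f(x):  return x - 1
def f1(x): return x + 1

for n in range(10_000):
    assert f(f1(n)) == n            # f ∘ f1 is the identity on N
    assert f1(f(n + 1)) == n + 1    # f1 ∘ f is the identity on N1
```

No finite check can of course establish a fact about all of N; the loop merely illustrates, on an initial segment, the correspondence that the two formulas define on the whole set.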

D.4. 20th century logic
In 1900 logic was poised on the brink of the most active period in its history. The late 19th-century work of Frege, Peano, and Cantor, as well as Peirce’s and Schröder’s extensions of Boole’s insights, had broken new ground, raised considerable interest, established international lines of communication, and formed a new alliance between logic and mathematics. Five projects internal to late 19th-century logic coalesced in the early 20th century, especially in works such as Russell and Whitehead’s Principia Mathematica. These were
1. the development of a consistent set or property theory (originating in the work of Cantor and Frege),
2. the application of the axiomatic method,
3. the development of quantificational logic,
4. the use of logic to understand mathematical objects,
5. and the nature of mathematical proof.
The five projects were unified by a general effort to use symbolic techniques, sometimes called mathematical, or formal, techniques. Logic became increasingly “mathematical,” then, in two senses.
• First, it attempted to use symbolic methods like those that had come to dominate mathematics.
• Second, an often dominant purpose of logic came to be its use as a tool for understanding the nature of mathematics – such as in defining mathematical concepts, precisely characterizing mathematical systems, or describing the nature of a mathematical proof.
D.4.1. Logic and philosophy of mathematics
An outgrowth of the theory of Russell and Whitehead, and of most modern set theories, was a better articulation of logicism, the philosophy of mathematics claiming that operations and objects spoken about in mathematics are really purely logical constructions. This has focused increased attention on what exactly “pure” logic is and whether, for example, set theory is really logic in a narrow sense.
There seems little doubt that set theory is not “just” logic in the way in which, for example, Frege viewed logic – i.e., as a formal theory of functions and properties. Because set theory engenders a large number of interestingly distinct kinds of nonphysical, nonperceived abstract objects, it has also been regarded by some philosophers and logicians as suspiciously (or endearingly) Platonistic. Others, such as Quine, have “pragmatically” endorsed set theory as a convenient way – perhaps the only such way – of organizing the whole world around us, especially if this world contains the richness of transfinite mathematics. For most of the first half of the 20th century, new work in logic saw logic’s goal as being primarily to provide a foundation for, or at least to play an organizing role in, mathematics. Even for those researchers who did not endorse the logicist program, logic’s goal was closely allied with techniques and goals in mathematics, such as giving an account of formal systems (“formalism”) or of the ideal nature of nonempirical proof and demonstration. Interest in the logicist and formalist program waned after Gödel’s demonstration that logic could not provide exactly the sort of foundation for mathematics or account of its formal systems that had been sought. Namely, Gödel proved a mathematical theorem which, interpreted in a natural language, says something like:

Gödel’s incompleteness theorem: Any logical theory, satisfying reasonable and rather weak conditions, cannot be consistent and, at the same time, prove all its logical consequences.


Thus mathematics could not be reduced to a provably complete and consistent logical theory. An interesting fact is that what Gödel did in the proof of this theorem was to construct a sentence which looked very much like the Liar paradox. He showed that in any formal theory satisfying his conditions one can write the sentence “I am not provable in this theory”, which cannot be proved unless the theory is inconsistent. In spite of this negative result, logic has still remained closely allied with mathematical foundations and principles. Traditionally, logic had set itself the task of understanding valid arguments of all sorts, not just mathematical ones. It had developed the concepts and operations needed for describing concepts, propositions, and arguments – especially in terms of patterns, or “logical forms” – insofar as such tools could conceivably affect the assessment of any argument’s quality or ideal persuasiveness. It is this general ideal that many logicians have developed and endorsed, and that some, such as Hegel, have rejected as impossible or useless. For the first decades of the 20th century, logic threatened to become exclusively preoccupied with a new and historically somewhat foreign role of serving in the analysis of arguments in only one field of study, mathematics. The philosophical-linguistic task of developing tools for analyzing statements and arguments that can be expressed in some natural language about some field of inquiry, or even for analyzing propositions as they are actually (and perhaps necessarily) thought or conceived by human beings, was almost completely lost. There were scattered efforts to eliminate this gap by reducing basic principles in all disciplines – including physics, biology, and even music – to axioms, particularly axioms in set theory or first-order logic. But these attempts, beyond showing that it could be done, at least in principle, did not seem especially enlightening.
Thus, such efforts, at their zenith in the 1950s and ’60s, had all but disappeared in the ’70s: one did not better and more usefully understand an atom or a plant by being told it was a certain kind of set. Logic, having become a formal discipline, also led to an understanding of mechanical reasoning. Although this seems to involve a serious severing of its relation to human reasoning, it found a wide range of applications in a field based on purely formal and mechanical manipulations: computer science. The close connections between these two fields will be sketched in the following section.

E. Modern Symbolic Logic
Already Euclid, and also Aristotle, were well aware of the notion of a rigorous logical theory, in the sense of a specification, often axiomatic, of the theorems of a theory. In fact, one might feel tempted to credit the crises in geometry of the 19th century for focusing on the need for very careful presentations of these theories and other aspects of formal systems. As one should know, Euclid designed his Elements around 10 axioms and postulates which one could not resist accepting as obvious (e.g., “an interval can be prolonged indefinitely”, “all right angles are equal”). From the assumption of their truth, he deduced some 465 theorems. The famous postulate of the parallels was:

The fifth postulate: If a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than the two right angles.

With time one began to point out that the fifth postulate (even if reformulated) was somehow less intuitive and more complicated than the others. Through hundreds of years mathematicians had unsuccessfully attempted to derive the fifth postulate from the others until, in the 19th century, they started to reach the conclusion that it must be independent from the rest. This meant that one might as well drop it! That was done independently by the Hungarian Bolyai and the Russian Lobachevsky in 1832. What was left was another axiomatic system, the first system of non-Euclidean geometry. The discovery revealed the importance of admitting the possibility of manipulating the axioms which, perhaps, need not be given by God and intuition but may be chosen with some freedom. Dropping the fifth postulate raised the question about what this new (sub)set of axioms might possibly describe. New models were created which satisfied all the axioms but the fifth. This was the first exercise in what later became called “model theory”.


E.1. Formal logical systems: syntax
Although set theory and the type theory of Russell and Whitehead were considered to be “logic” for the purposes of the logicist program, a narrower sense of logic re-emerged in the mid-20th century as what is usually called the “underlying logic” of these systems: whatever concerns only rules for propositional connectives, quantifiers, and nonspecific terms for individuals and predicates. (An interesting issue is whether the privileged relation of identity, typically denoted by the symbol “=”, is a part of logic: most researchers have assumed that it is.) In the early 20th century and especially after Tarski’s work in the 1920s and ’30s, a formal logical system was regarded as being composed of three parts, all of which could be rigorously described:
1. the syntax (or notation);
2. the rules of inference (or the patterns of reasoning);
3. the semantics (or the meaning of the syntactic symbols).
One of the fundamental contributions of Tarski was his analysis of the concept of ‘truth’ which, in the above three-fold setting, is given a precise treatment as a relation between syntax (linguistic expressions) and semantics (contexts, world). The Euclidean, and then non-Euclidean, geometry were, as a matter of fact, built as axiomatic-deductive systems (point 2.). The other two aspects of a formal system identified by Tarski were present too, but much less emphasized: notation was very informal, relying often on drawings; the semantics was rather intuitive and obvious. Tarski’s work initiated rigorous study of all three aspects.
E.1.1. The language
First, there is the notation: the rules of formation for terms and for well-formed formulas in the logical system. This theory of notation itself became subject to exacting treatment in the concatenation theory, or theory of strings, of Tarski, and in the work of the American Alonzo Church.
A formal language is simply a set of words (well-formed formulae, wff), that is, strings over some given alphabet (set of symbols), and is typically specified by the rules of formation. For instance:
• the alphabet Σ = {2, 4, →, −, (, )}
• the rules for forming words (formulae, elements) of the language L:
– 2, 4 ∈ L
– if A, B ∈ L then also −A ∈ L and (A → B) ∈ L.
This specification allows us to conclude that, for instance, 4, −2, (4 → −2), −(2 → −4) all belong to L, while 24, () or 2 → do not. Previously, notation was often a haphazard affair in which it was unclear what could be formulated or asserted in a logical theory and whether expressions were finite or were schemata standing for infinitely long wffs (well-formed formulas). Issues that arose out of notational questions include definability of one wff by another (addressed in Beth’s and Craig’s theorems, and in other results), creativity, and replaceability, as well as the expressive power and complexity of different logical languages.
E.1.2. Reasoning system
The second part of a logical system consists of the axioms and rules of inference, or other ways of identifying what counts as a theorem. This is what is usually meant by the logical “theory” proper: a (typically recursive) description of the theorems of the theory, including axioms and every wff derivable from axioms by admitted rules. Using the language L, one might, for instance, define the following theory T :


Axioms:
   i)   2
   ii)  (4 → −2)
   iii) (A → −−A)
   iv)  (−−A → A)

Upper case letters denote variables for which we can substitute arbitrary formulae of our language L.

Rules:
        (A → B) ; (B → C)        if A then B; if B then C
   1)   -----------------        ------------------------
             (A → C)                   if A then C

        (A → B) ; A              if A then B; but A
   2)   -----------              ------------------
              B                           B

        (A → B) ; −B             if A then B; but not B
   3)   ------------             ----------------------
              −A                         not A

We can now perform symbolic derivations whose correctness can be checked mechanically. For instance (the original tree-shaped derivation is here linearized, one conclusion per step):

(E.i)
   1. 2                  axiom i)
   2. (2 → −−2)          axiom iii), with 2 substituted for A
   3. −−2                rule 2) applied to 2. and 1.
   4. (4 → −2)           axiom ii)
   5. −4                 rule 3) applied to 4. and 3.
   6. (−4 → −−−4)        axiom iii), with −4 substituted for A
   7. −−−4               rule 2) applied to 6. and 5.
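The claim that such derivations can be checked mechanically is easy to make concrete. Below is a minimal proof checker for T, a sketch of our own: formulae are ASCII strings with '-' standing for − and '>' for →, and rule 1) is omitted since (E.i) does not use it.

```python
# A minimal, illustrative proof checker for the theory T above.
# Formulae are strings: atoms '2', '4'; '-A' for negation; '(A>B)' for A → B.

def split_arrow(f):
    """Return (A, B) if f = '(A>B)' with a top-level arrow, else None."""
    if not (f.startswith("(") and f.endswith(")")):
        return None
    depth, body = 0, f[1:-1]
    for i, c in enumerate(body):
        if c == "(":   depth += 1
        elif c == ")": depth -= 1
        elif c == ">" and depth == 0:
            return body[:i], body[i + 1:]
    return None

def is_axiom(f):
    if f in ("2", "(4>-2)"):                   # axioms i) and ii)
        return True
    parts = split_arrow(f)
    if parts:
        a, b = parts
        return b == "--" + a or a == "--" + b  # schemas iii) and iv)
    return False

def rule2(p1, p2, concl):                      # from (A>B) and A infer B
    return p1 == "(" + p2 + ">" + concl + ")"

def rule3(p1, p2, concl):                      # from (A>B) and -B infer -A
    return (concl.startswith("-") and p2.startswith("-")
            and p1 == "(" + concl[1:] + ">" + p2[1:] + ")")

def check(proof):
    """proof: list of (formula, justification, indices of earlier premises)."""
    for f, just, prems in proof:
        if just == "ax":
            if not is_axiom(f):
                return False
        else:
            p1, p2 = (proof[j][0] for j in prems)
            if not {"r2": rule2, "r3": rule3}[just](p1, p2, f):
                return False
    return True

# The derivation (E.i) of ---4, linearized:
E_i = [
    ("2",         "ax", ()),      # axiom i)
    ("(2>--2)",   "ax", ()),      # axiom iii), A := 2
    ("--2",       "r2", (1, 0)),  # rule 2)
    ("(4>-2)",    "ax", ()),      # axiom ii)
    ("-4",        "r3", (3, 2)),  # rule 3)
    ("(-4>---4)", "ax", ()),      # axiom iii), A := -4
    ("---4",      "r2", (5, 4)),  # rule 2)
]
assert check(E_i)                         # (E.i) is a correct derivation
assert not check([("(2>4)", "ax", ())])   # (2>4) is not an axiom of T
```

The checker knows nothing about what '2', '4' or '-' mean; it compares strings, which is precisely the point made below about the uninterpreted, mechanical nature of symbolic systems.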

Thus, −−−4 is a theorem of our theory, and so is −4, which is obtained by the subderivation ending with the application of rule 3). Although the axiomatic method of characterizing such theories with axioms or postulates or both and a small number of rules of inference had a very old history (going back to Euclid or further), two new methods arose in the 1930s and ’40s.
1. First, in 1934, there was the German mathematician Gerhard Gentzen’s method of succinct Sequenzen (rules of consequents), which were especially useful for deriving metalogical results. This method originated with Paul Hertz in 1932, and a related method was described by Stanisław Jaśkowski in 1934.
2. Next to appear was the similarly axiomless method of “natural deduction,” which used only rules of inference; it originated in a suggestion by Russell in 1925 but was developed by Quine and the American logicians Frederick Fitch and George David Wharton Berry.
The natural deduction technique is widely used in the teaching of logic, although it makes the demonstration of metalogical results somewhat difficult, partly because historically these arose in axiomatic and consequent formulations. A formal description of a language, together with a specification of a theory’s theorems (derivable propositions), are often called the “syntax” of the theory. (This is somewhat misleading when one compares the practice in linguistics, which would limit syntax to the narrower issue of grammaticality.) The term “calculus” is sometimes chosen to emphasize the purely syntactic, uninterpreted nature of a reasoning system.
E.1.3. Semantics
The last component of a logical system is the semantics for such a theory and language: a declaration of what the terms of a theory refer to, and how the basic operations and connectives are to be interpreted in a domain of discourse, including truth conditions for the formulae in this domain. Consider, as an example, the rule 1) from the theory T above:

   (A → B) ; (B → C)
   -----------------
        (A → C)


It is merely a “piece of text” and its symbols allow almost unlimited interpretations. We may, for instance, take A, B, C, ... to denote propositions and → an implication. But we may likewise let A, B, C, ... stand for sets and → for set-inclusion. The following are then examples of applications of this rule under these two interpretations, respectively:

   If it’s nice   then   we’ll leave                  {1, 2} ⊆ {1, 2, 3}
   If we leave    then   we’ll see a movie            {1, 2, 3} ⊆ {1, 2, 3, 5}
   If it’s nice   then   we’ll see a movie            {1, 2} ⊆ {1, 2, 3, 5}

The rule is “sound” with respect to these interpretations – when applied to these domains in the prescribed way, it represents a valid argument. A specification of a domain of objects (De Morgan’s “universe of discourse”), and of the rules for interpreting the symbols of a logical language in this domain such that all the theorems of the logical theory are true, is said to be a “model” of the theory. The two suggested interpretations are models of this rule. (To make them models of the whole theory T would require more work, in particular, finding an appropriate interpretation of 2, 4 and −, such that the axioms become true and all rules sound. For the propositional case, one could for instance let − denote negation, 2 denote ‘true’ and 4 ‘false’.) If we chose to interpret the formulae of L as events and A → B as, say, “A is independent from B”, the rule would not be sound. Such an interpretation would not give a model of the theory or, what amounts to the same, if the theory were applied to this part of the world, we could not trust its results. We devote the next subsection to some further concepts arising with the formal semantics.
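The soundness of rule 1) under the set-inclusion interpretation can also be verified exhaustively for a small universe; a sketch of our own, taking the universe {1, 2, 3, 5} from the example above:

```python
# Exhaustive check that rule 1) is sound under the set-inclusion reading
# of '→': over all subsets A, B, C of {1, 2, 3, 5}, whenever both premises
# A ⊆ B and B ⊆ C hold, the conclusion A ⊆ C holds as well.
from itertools import combinations

universe = (1, 2, 3, 5)
subsets = [set(c) for r in range(len(universe) + 1)
           for c in combinations(universe, r)]

for A in subsets:
    for B in subsets:
        for C in subsets:
            if A <= B and B <= C:   # both premises true in this interpretation
                assert A <= C       # the conclusion is then true too
```

The same loop with '→' read as, say, independence of events would trip the assertion, which is the sense in which such an interpretation fails to be a model.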

E.2. Formal semantics
What is known as formal semantics, or model theory, has a more complicated history than does logical syntax; indeed, one could say that the history of the emergence of semantic conceptions of logic in the late 19th and early 20th centuries is poorly understood even today. Certainly, Frege’s notion that propositions refer to (German: bedeuten) “The True” or “The False” – and this for complex propositions as a function of the truth values of simple propositions – counts as semantics. As we mentioned earlier, this has often been the intuition since Aristotle, although modal propositions and paradoxes like the “liar paradox” pose some problems for this understanding. Nevertheless, this view dominates most of logic, in particular such basic fields as propositional and first-order logic. Also, earlier medieval theories of supposition incorporated useful semantic observations. So, too, do the techniques of letters referring to the values 1 and 0 that are seen from Boole through Peirce and Schröder. Both Peirce and Schröder occasionally gave brief demonstrations of the independence of certain logical postulates using models in which some postulates were true, but not others. This was also the technique used by the inventors of non-Euclidean geometry. The first clear and significant general result in model theory is usually accepted to be a result discovered by Löwenheim in 1915 and strengthened in work by Skolem from the 1920s.

Löwenheim-Skolem theorem: A theory that has a model at all, has a countable model.

That is to say, if there exists some model of a theory (i.e., an application of it to some domain of objects), then there is sure to be one with a domain no larger than the natural numbers.
Although Löwenheim and Skolem understood their results perfectly well, they did not explicitly use the modern language of “theories” being true in “models.” The Löwenheim-Skolem theorem is in some ways a shocking result, since it implies that any consistent formal theory of anything – no matter how hard it tries to address the phenomena unique to a field such as biology, physics, or even sets or just real (decimal) numbers – can just as well be understood from its formalisms alone as being about natural numbers.

Consistency
The second major result in formal semantics, Gödel’s completeness theorem of 1930, required even for its description, let alone its proof, more careful development of precise concepts about logical


systems – metalogical concepts – than existed in earlier decades. One question for all logicians since Boole, and certainly since Frege, had been: Is the theory consistent? In its purely syntactic analysis, this amounts to the question: Is a contradictory sentence (of the form “A and not-A”) a theorem? In most cases, the equivalent semantic counterpart of this is the question: Does the theory have a model at all? For a logical theory, consistency means that a contradictory theorem cannot be derived in the theory. But since logic was intended to be a theory of necessarily true statements, the goal was stronger: a theory is Post-consistent (named for the Polish-American logician Emil Post) if every theorem is valid – that is, if no theorem is a contradictory or a contingent statement. (In nonclassical logical systems, one may define many other interestingly distinct notions of consistency; these notions were not distinguished until the 1930s.) Consistency was quickly acknowledged as a desired feature of formal systems: it was widely and correctly assumed that various earlier theories of propositional and first-order logic were consistent. Zermelo was, as has been observed, concerned with demonstrating that ZF was consistent; Hilbert had even observed that there was no proof that the Peano postulates were consistent. These questions received an answer that was not what was hoped for in a later result – Gödel’s incompleteness theorem. A clear proof of the consistency of propositional logic was first given by Post in 1921. Its tardiness in the history of symbolic logic is a commentary not so much on the difficulty of the problem as it is on the slow emergence of the semantic and syntactic notions necessary to characterize consistency precisely. The first clear proof of the consistency of the first-order predicate logic is found in the work of Hilbert and Wilhelm Ackermann from 1928.
Here the problem was not only the precise awareness of consistency as a property of formal theories but also of a rigorous statement of first-order predicate logic as a formal theory.

Completeness
In 1928 Hilbert and Ackermann also posed the question of whether a logical system, and, in particular, first-order predicate logic, was (as it is now called) “complete”. This is the question of whether every valid proposition – that is, every proposition that is true in all intended models – is provable in the theory. In other words, does the formal theory describe all the noncontingent truths of a subject matter? Although some sort of completeness had clearly been a guiding principle of formal logical theories dating back to Boole, and even to Aristotle (and to Euclid in geometry) – otherwise they would not have sought numerous axioms or postulates, risking nonindependence and even inconsistency – earlier writers seemed to have lacked the semantic terminology to specify what their theory was about and wherein “aboutness” consists. Specifically, they lacked a precise notion of a proposition being “valid” – that is, “true in all (intended) models” – and hence lacked a way of precisely characterizing completeness. Even the language of Hilbert and Ackermann from 1928 is not perfectly clear by modern standards. Gödel proved the completeness of first-order predicate logic in his doctoral dissertation of 1930; Post had shown the completeness of propositional logic in 1921. In many ways, however, explicit consideration of issues in semantics, along with the development of many of the concepts now widely used in formal semantics and model theory (including the term metalanguage), first appeared in a paper by Alfred Tarski, The Concept of Truth in Formalized Languages, published in Polish in 1933; it became widely known through a German translation of 1936.
Although the theory of truth Tarski advocated has had a complex and debated legacy, there is little doubt that the concepts developed there (and in later papers from the 1930s) for discussing what it is for a sentence to be "true in" a model marked the beginning of model theory in its modern phase. Although the outlines of how to model propositional logic had been clear to the Booleans and to Frege, one of Tarski's most important contributions was an application of his general theory of semantics in a proposal for the semantics of first-order predicate logic (now termed the set-theoretic, or Tarskian, interpretation).


Tarski’s techniques and language for precisely discussing semantic concepts, as well as properties of formal systems described using his concepts – such as consistency, completeness, and independence – rapidly and almost imperceptibly entered the literature in the late 1930s and after. This influence accelerated with the publication of many of his works in German and then in English, and with his move to the United States in 1939.

E.3. Computability and Decidability

The underlying theme of the whole development we have sketched is the attempt to formalize logical reasoning, hopefully to the level at which it can be performed mechanically. The idea of "mechanical reasoning" has always been present, if not always explicitly, in logical investigations and could almost be taken as their primary, if only ideal, goal. Intuitively, "mechanical" involves some blind following of rules, and such blind rule-following is the essence of a symbolic system as described in E.1.2. This "mechanical blindness" follows from the fact that the language and the rules are unambiguously defined. Consequently, the correctness of the application of a rule to an actual formula can be verified mechanically. You can easily check that all applications of rules in the derivation (E.i) are correct, and equally easily see that, for instance, passing from (2→4) and 4 to 2 is not a correct application of any rule from T. Logic was supposed to capture correct reasoning, and correctness amounts to conformance to some accepted rules. A symbolic reasoning system is an ultimately precise expression of this view of correctness, which also makes its verification a purely mechanical procedure. Such a mechanism is possible because all legal moves and restrictions are expressed in the syntax: the language, axioms and rules. In other words, it is exactly the uninterpreted nature of symbolic systems which leads to the mechanization of reasoning. Naturally enough, once symbolic systems were defined and one became familiar with them, i.e., at the beginning of the 20th century, questions about mechanical computability were raised by logicians. The answers led to what is today called the "information revolution", centering around the design and use of computers – devices for symbolic, that is, uninterpreted, manipulation.

Computability

What does it mean that something can be computed mechanically?
In the 1930s this question acquired an ultimately precise, mathematical meaning. In the proof of the incompleteness theorem, Gödel introduced special schemata for so-called "recursive functions" working on natural numbers. Some time later Church proposed the thesis

Church's thesis: A function is computable if and only if it can be defined using only recursive functions.

This may sound astonishing – why should just recursive functions have such a special significance? The answer comes from the work of Alan Turing, who introduced "devices" which came to be known as Turing machines. Although defined as conceptual entities, one could easily imagine that such devices could actually be built as physical machines performing exactly the operations suggested by Turing. The machines could, for instance, recognize whether a string had some specific form and, generally, compute functions (see pp. 206ff in C. Sobel, The Cognitive Sciences: An Interdisciplinary Approach). The functions which could be computed on Turing machines were shown to be exactly the recursive functions! Even more significant for us may be the fact that there is a well-defined sublogic of first-order logic in which proving a theorem amounts to computing a recursive function, that is, which can code all possible computer programs.¹ Thus, in the wide field of logic, there is a small subdomain providing sufficient means to study the issues of computability. (Such connections are much deeper and more intricate, but we cannot address them all here.)

¹ Incidentally, or perhaps not, this subset comprises the formulae of the form "If A₁ and A₂ and ... and Aₙ then C". Such formulae give the syntax of an elegant programming language, Prolog. In the terminology of P. Thagard, Mind, chapter 3, they correspond to rules, which are there claimed to have more "psychological plausibility" than general logical representations.
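The flavour of Gödel's recursive-function schemata can be conveyed by a minimal sketch in Python (the book uses no programming language; the names zero, succ, add and mult below are ours, for illustration only): everything is built from the constant zero, the successor operation, and definition by primitive recursion.

```python
# Arithmetic built from the basic schemata: zero, successor, and
# definition by primitive recursion. All names here are illustrative.

def zero():
    return 0

def succ(n):
    return n + 1

def add(m, n):
    # Primitive recursion on n:  add(m, 0) = m;  add(m, n+1) = succ(add(m, n))
    if n == 0:
        return m
    return succ(add(m, n - 1))

def mult(m, n):
    # mult(m, 0) = 0;  mult(m, n+1) = add(mult(m, n), m)
    if n == 0:
        return zero()
    return add(mult(m, n - 1), m)

print(add(2, 3))   # 5
print(mult(3, 4))  # 12
```

The point of the schemata is that each function is reduced, step by step, to nothing but successor applications – exactly the kind of blind, mechanical computation discussed above.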
It may seem surprising that a computable sublogic should be more psychologically plausible than full first-order logic. This, however, might be due to the relative simplicity of such formulae as compared to the general format of formulae in first-order logic. Due to its simplicity, propositional logic, too, could probably be claimed and even shown to be more "psychologically plausible" than first-order logic.
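The rules of footnote 1 – "If A₁ and A₂ and ... and Aₙ then C" – can be applied mechanically by a few lines of code. The following sketch (our own invented facts and rules, not Prolog itself) forward-chains such rules until nothing new follows:

```python
# A minimal forward-chaining evaluator for rules of the form
# "If A1 and A2 and ... and An then C". Facts and rules are invented
# for illustration.

rules = [
    ({"man"}, "human"),             # if man then human
    ({"human", "alive"}, "mortal")  # if human and alive then mortal
]

def closure(facts, rules):
    """Repeatedly apply every rule whose premises all hold, until stable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(closure({"man", "alive"}, rules))  # contains "human" and "mortal"
```

Each step is a blind syntactic match of premises against the current facts, which is precisely what makes this fragment mechanically computable.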


Church's thesis remains only a thesis. Nevertheless, so far nobody has proposed a notion of computability which would exceed the capacities of Turing machines (and hence of recursive functions). A modern computer with all its sophistication is, as far as its power and possibilities are concerned, nothing more than a Turing machine! Thus logical results, in particular the negative theorems stating the limitations of logical formalisms, also determine the ultimate limits of computers' capabilities, and we give a few examples below.

Decidability

By the 1930s almost all work in the foundations of mathematics and in symbolic logic was being done in a standard first-order predicate logic, often extended with axioms or axiom schemata of set- or type-theory. This underlying logic consisted of a theory of "classical" truth-functional connectives, such as "and", "not" and "if . . . then" (propositional logic) and first-order quantification permitting propositions that "all" and "at least one" individual satisfy a certain formula. Only gradually in the 1920s and '30s did a conception of a "first-order" logic, and of more expressive alternatives, arise. Formal theories can be classified according to their expressive or representational power, depending on their language (notation) and reasoning system (inference rules). Propositional logic allows merely manipulation of simple propositions combined with operators like "or", "and". First-order logic allows explicit reference to, and quantification over, individuals, such as numbers or sets, but not quantification over properties of these individuals. For instance, the statement "for all x: if x is a man then x is human" is first-order.
But the following one is second-order, involving quantification over properties P, R: "for every x and any properties P, R: if P implies R and x is P then x is R."² (Likewise, the fifth postulate of Euclid is not finitely axiomatizable in the first-order language but requires rather a schema or a second-order formulation.) The question "why should one bother with less expressive formalisms, when more expressive ones are available?" should appear quite natural. The answer lies in the fact that increasing the expressive power of a formalism clashes with another desired feature, namely:

decidability: there exists a finite mechanical procedure for determining whether a proposition is, or is not, a theorem of the theory.

The germ of this idea is present in the law of excluded middle, claiming that every proposition is either true or false. But decidability adds to it a requirement which can be expressed only with the precise definition of a finite mechanical procedure, of computability. This is the requirement that not only must the proposition be true/provable or not: there must be a terminating algorithm which can be run (on a computer) to decide which is the case. (In E.1.2 we have shown that, for instance, −4 is a theorem of the theory T defined there. But if you were now to tell whether (− − 4 → (−2 → 2)) is a theorem, you might have a hard time trying to find a derivation, and an even harder one trying to prove that no derivation of this formula exists. Decidability of a theory means that such questions can be answered by a computer program.) The decidability of propositional logic, through the use of truth tables, was known to Frege and Peirce; a proof of its decidability is attributable to Jan Łukasiewicz and Emil Post independently in 1921.
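The truth-table decision procedure mentioned above is easy to sketch in code. In the following illustration of ours, a propositional formula is represented as a Python function of its variables (a representation chosen purely for brevity), and the procedure simply checks every assignment of truth values:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Decide validity by checking the formula under every one of the
    2^num_vars assignments of truth values -- the truth-table method."""
    return all(formula(*vals) for vals in product([False, True], repeat=num_vars))

implies = lambda p, q: (not p) or q

print(is_tautology(lambda p: p or not p, 1))              # True: excluded middle
print(is_tautology(lambda p, q: implies(p and q, p), 2))  # True
print(is_tautology(lambda p, q: implies(p or q, p), 2))   # False
```

Because the table is finite, this procedure always terminates – which is exactly what makes propositional logic decidable, in contrast to full first-order logic.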
Löwenheim showed in 1915 that first-order predicate logic with only single-place predicates was decidable and that the full theory was decidable if the first-order predicate calculus with only two-place predicates was decidable; further developments were made by Thoralf Skolem, Heinrich Behmann, Jacques Herbrand, and Willard Quine. Herbrand showed the existence of an algorithm which, if a theorem of first-order predicate logic is valid, will determine it to be so; the difficulty, then, was in designing an algorithm that in a finite amount of time would determine that propositions were invalid. (We can easily imagine a machine which, starting with the specified axioms, generates all possible theorems by simply generating all possible derivations – sequences of correct rule applications. If the formula is provable, the machine will, sooner or later, find a proof. But if the formula is not provable, the machine will keep running forever, since the number of proofs

² Note a vague analogy of the distinction between first-order quantification over individuals and second-order quantification over properties to the distinction between extensional and intensional aspects from B.3.


is, typically, infinite.) As early as the 1880s, Peirce seemed to be aware that propositional logic was decidable but that the full first-order predicate logic with relations was undecidable. The undecidability of first-order predicate logic (in any general formulation) was first shown definitively by Alan Turing and Alonzo Church independently in 1936. Together with Gödel's (second) incompleteness theorem and the earlier Löwenheim-Skolem theorem, the Church-Turing theorem of the undecidability of first-order predicate logic is one of the most important, even if "negative", results of 20th-century logic. Among the consequences of these negative results are many facts about the limits of computers. For instance, it is not (and never will be!) possible to write a computer program which, given an arbitrary first-order theory T and some formula f, is guaranteed to terminate giving the answer "Yes" if f is a theorem of T and "No" if it is not. A more mundane example is the following. As we know, one can easily write a computer program which for some inputs will not terminate. It might therefore be desirable to have a program U which could take as input another program P (a piece of text, just like the "usual" input to any program) and a description d of its input, and decide whether P run on d would terminate or not. Such a program U, however, will never be written, as the problem described is undecidable.
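The proof-searching machine imagined in the parenthetical above – generate all derivations; succeed if the formula is provable, run forever if not – can be sketched for a toy string-rewriting system of our own invention (the axiom, rules, and step bound below are hypothetical illustrations, not the system T of the text):

```python
from collections import deque

# A hypothetical toy system: axiom "A"; rule 1 appends "B", rule 2 prepends "C".
# Derivable strings are exactly those of the form C...CAB...B.
AXIOM = "A"
RULES = [lambda s: s + "B", lambda s: "C" + s]

def prove(target, bound=10000):
    """Enumerate all derivable strings breadth-first; return True if the
    target is found. The bound stands in for 'running forever': without
    it, the search would never terminate on underivable strings."""
    seen = {AXIOM}
    queue = deque([AXIOM])
    steps = 0
    while queue and steps < bound:
        s = queue.popleft()
        steps += 1
        if s == target:
            return True
        for rule in RULES:
            t = rule(s)
            # Both rules only lengthen strings, so longer candidates are hopeless.
            if len(t) <= len(target) and t not in seen:
                seen.add(t)
                queue.append(t)
    return False  # not found within the bound -- in general, no verdict

print(prove("CAB"))  # True: A -> AB -> CAB
print(prove("BA"))   # False: underivable
```

In this toy system the length-pruning happens to make the search terminate, so the system is decidable; for full first-order logic no such pruning exists, which is the content of the Church-Turing theorem.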

F. Summary

The idea of correct thinking is probably as old as thinking itself. With Aristotle there begins the process of explicit formulation of the rules, patterns of reasoning, conformance to which would guarantee correctness. This idea of correctness has been gradually made precise and unambiguous, leading to the formulation of (the general schema for defining) symbolic languages, the rules of their manipulation, and hence criteria of correct "reasoning". It is, however, far from obvious that the result indeed captures natural reasoning as performed by humans. The need for precision led to a complete separation of the reasoning aspect (syntactic manipulation) from its possible meaning. The completely uninterpreted nature of symbolic systems makes their relation to the real world highly problematic. Moreover, once one has arrived at the general schema of defining formal systems, no unique system has arisen as the right one, and their variety seems surpassed only by the range of possible application domains. The discussions about which rules actually represent human thinking can probably continue indefinitely. Nevertheless, this purely syntactic character of formal reasoning systems provided the basis for a precise definition of the old theme of logical investigations: mechanical computability. The question whether the human mind and thinking can be reduced to such mechanical computation and simulated by a computer is still discussed by philosophers and cognitive scientists. Also, much successful research is driven by the idea, if not the explicit goal, of obtaining such a reduction. The "negative" results, like those quoted at the end of the last section, established by the human mind and demonstrating limitations of the power of logic and computers, suggest that human cognition may not be reducible to, and hence not simulated by, mechanical computation.
In particular, reduction to mechanical computation would imply that all human thinking could be expressed as applications of simple rules like those described in footnote 1, p. 21. This possibility has not been disproved, but it certainly does not appear plausible. Yet, as computable functions correspond only to a small part of logic, even if this reduction turns out to be impossible, the question of the reduction of thinking to logic at large would still remain open. Most researchers do not seem to believe in such reductions, but this is not the place to speculate on their (im)possibility. Some elements of this broad discussion can be found in P. Thagard, Mind, as well as in most other books on cognitive science.

Bibliography

General and introductory texts
1. D. GABBAY and F. GUENTHNER (eds.), Handbook of Philosophical Logic, 4 vol. (1983-89).
2. GERALD J. MASSEY, Understanding Symbolic Logic (1970).
3. THIERRY SCHEURER, Foundations of Computing, Addison-Wesley (1994).


4. V. SPERSCHNEIDER, G. ANTONIOU, Logic: A Foundation for Computer Science, Addison-Wesley (1991).
5. WILLARD V. QUINE, Methods of Logic, Harvard University Press, 1st ed. (1950, reprinted 1982).
6. JENS ERIK FENSTAD, DAG NORMANN, Algorithms and Logic, Matematisk Institutt, Universitetet i Oslo (1984).
7. ELLIOTT MENDELSON, Introduction to Mathematical Logic, 3rd ed. (1987).
8. CHIN-LIANG CHANG, RICHARD CHAR-TUNG LEE, Symbolic Logic and Mechanical Theorem Proving, Academic Press Inc. (1973).
9. R.D. DOWSING, V.J. RAYWARD-SMITH, C.D. WALTER, A First Course in Formal Logic and its Applications in Computer Science, Alfred Waller Ltd., 2nd ed. (1994).
10. RAMIN YASID, Logic and Programming in Logic, Immediate Publishing (1997).
11. ROBERT E. BUTTS and JAAKKO HINTIKKA, Logic, Foundations of Mathematics, and Computability Theory (1977), a collection of conference papers.

History of logic
1. WILLIAM KNEALE and MARTHA KNEALE, The Development of Logic (1962, reprinted 1984).
2. The Encyclopedia of Philosophy, ed. by PAUL EDWARDS, 8 vol. (1967).
3. New Catholic Encyclopedia, 18 vol. (1967-89).
4. I.M. BOCHEŃSKI, Ancient Formal Logic (1951, reprinted 1968).
On Aristotle:
5. JAN ŁUKASIEWICZ, Aristotle's Syllogistic from the Standpoint of Modern Formal Logic, 2nd ed., enlarged (1957, reprinted 1987).
6. GÜNTHER PATZIG, Aristotle's Theory of the Syllogism (1968; originally published in German, 2nd ed., 1959).
7. OTTO A. BIRD, Syllogistic and Its Extensions (1964).
8. STORRS McCALL, Aristotle's Modal Syllogisms (1963).
9. I.M. BOCHEŃSKI, La Logique de Théophraste (1947, reprinted 1987).
On Stoics:
10. BENSON MATES, Stoic Logic (1953, reprinted 1973).
11. MICHAEL FREDE, Die stoische Logik (1974).
Medieval logic:
12. NORMAN KRETZMANN, ANTHONY KENNY, and JAN PINBORG (eds.), The Cambridge History of Later Medieval Philosophy: From the Rediscovery of Aristotle to the Disintegration of Scholasticism, 1100-1600 (1982).
13.
NORMAN KRETZMANN and ELEONORE STUMP (eds.), Logic and the Philosophy of Language (1988).
14. For Boethius, see: MARGARET GIBSON (ed.), Boethius, His Life, Thought, and Influence (1981).
Arabic logic:
15. NICHOLAS RESCHER, The Development of Arabic Logic (1964).
16. L.M. DE RIJK, Logica Modernorum: A Contribution to the History of Early Terminist Logic, 2 vol. in 3 (1962-1967).
17. NORMAN KRETZMANN (ed.), Meaning and Inference in Medieval Philosophy (1988).
Surveys of modern logic:


18. WILHELM RISSE, Die Logik der Neuzeit, 2 vol. (1964-70).
19. ROBERT ADAMSON, A Short History of Logic (1911, reprinted 1965).
20. C.I. LEWIS, A Survey of Symbolic Logic (1918, reissued 1960).
21. JØRGEN JØRGENSEN, A Treatise of Formal Logic: Its Evolution and Main Branches with Its Relations to Mathematics and Philosophy, 3 vol. (1931, reissued 1962).
22. ALONZO CHURCH, Introduction to Mathematical Logic (1956).
23. I.M. BOCHEŃSKI, A History of Formal Logic, 2nd ed. (1970; originally published in German, 1962).
24. HEINRICH SCHOLZ, Concise History of Logic (1961; originally published in German, 1959).
25. ALICE M. HILTON, Logic, Computing Machines, and Automation (1963).
26. N.I. STYAZHKIN, History of Mathematical Logic from Leibniz to Peano (1969; originally published in Russian, 1964).
27. CARL B. BOYER, A History of Mathematics, 2nd ed., rev. by UTA C. MERZBACH (1991).
28. E.M. BARTH, The Logic of the Articles in Traditional Philosophy: A Contribution to the Study of Conceptual Structures (1974; originally published in Dutch, 1971).
29. MARTIN GARDNER, Logic Machines and Diagrams, 2nd ed. (1982).
30. E.J. ASHWORTH, Studies in Post-Medieval Semantics (1985).
Developments in the science of logic in the 20th century are reflected mostly in periodical literature.
31. WARREN D. GOLDFARB, "Logic in the Twenties: The Nature of the Quantifier," The Journal of Symbolic Logic 44:351-368 (September 1979).
32. R.L. VAUGHT, "Model Theory Before 1945," and C.C. CHANG, "Model Theory 1945-1971," both in LEON HENKIN et al. (eds.), Proceedings of the Tarski Symposium (1974), pp. 153-172 and 173-186, respectively.
33. IAN HACKING, "What is Logic?" The Journal of Philosophy 76:285-319 (June 1979).
34. Other journals devoted to the subject include History and Philosophy of Logic (biannual); Notre Dame Journal of Formal Logic (quarterly); and Modern Logic (quarterly).

Formal logic
1.
MICHAEL DUMMETT, Elements of Intuitionism (1977), offers a clear presentation of the philosophic approach that demands constructibility in logical proofs.
2. G.E. HUGHES and M.J. CRESSWELL, An Introduction to Modal Logic (1968, reprinted 1989), treats operators acting on sentences in first-order logic (or predicate calculus) so that, instead of being interpreted as statements of fact, they become necessarily or possibly true or true at all or some times in the past, or they denote obligatory or permissible actions, and so on.
3. JON BARWISE et al. (eds.), Handbook of Mathematical Logic (1977), provides a technical survey of work in the foundations of mathematics (set theory) and in proof theory (theories with infinitely long expressions).
4. ELLIOTT MENDELSON, Introduction to Mathematical Logic, 3rd ed. (1987), is the standard text.
5. G. KREISEL and J.L. KRIVINE, Elements of Mathematical Logic: Model Theory (1967, reprinted 1971; originally published in French, 1967), covers all standard topics at an advanced level.
6. A.S. TROELSTRA, Choice Sequences: A Chapter of Intuitionistic Mathematics (1977), offers an advanced analysis of the philosophical position regarding what are legitimate proofs and logical truths.
7. A.S. TROELSTRA and D. VAN DALEN, Constructivism in Mathematics, 2 vol. (1988), applies intuitionist strictures to the problem of the foundations of mathematics.


Metalogic
1. JON BARWISE and S. FEFERMAN (eds.), Model-Theoretic Logics (1985), emphasizes semantics of models.
2. J.L. BELL and A.B. SLOMSON, Models and Ultraproducts: An Introduction, 3rd rev. ed. (1974), explores technical semantics.
3. RICHARD MONTAGUE, Formal Philosophy: Selected Papers of Richard Montague, ed. by RICHMOND H. THOMASON (1974), uses modern logic to deal with the semantics of natural languages.
4. MARTIN DAVIS, Computability and Unsolvability (1958, reprinted with a new preface and appendix, 1982), is an early classic on important work arising from Gödel's theorem, and the same author's The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems, and Computable Functions (1965) is a collection of seminal papers on issues of computability.
5. ROLF HERKEN (ed.), The Universal Turing Machine: A Half-Century Survey (1988), takes a look at where Gödel's theorem on undecidable sentences has led researchers.
6. HANS HERMES, Enumerability, Decidability, Computability, 2nd rev. ed. (1969, originally published in German, 1961), offers an excellent mathematical introduction to the theory of computability and Turing machines.
7. A classic treatment of computability is presented in HARTLEY ROGERS, JR., Theory of Recursive Functions and Effective Computability (1967, reissued 1987).
8. M.E. SZABO, Algebra of Proofs (1978), is an advanced treatment of syntactical proof theory.
9. P.T. JOHNSTONE, Topos Theory (1977), explores the theory of structures that can serve as interpretations of various theories stated in predicate calculus.
10. H.J. KEISLER, "Logic with the Quantifier 'There Exist Uncountably Many'," Annals of Mathematical Logic 1:1-93 (January 1970), reports on a seminal investigation that opened the way for Barwise (1977), cited earlier, and
11.
CAROL RUTH KARP, Language with Expressions of Infinite Length (1964), which expands the syntax of the language of predicate calculus so that expressions of infinite length can be constructed.
12. C.C. CHANG and H.J. KEISLER, Model Theory, 3rd rev. ed. (1990), is the single most important text on semantics.
13. F.W. LAWVERE, C. MAURER, and G.C. WRAITH (eds.), Model Theory and Topoi (1975), is an advanced, mathematically sophisticated treatment of the semantics of theories expressed in predicate calculus with identity.
14. MICHAEL MAKKAI and GONZALO REYES, First Order Categorical Logic: Model-Theoretical Methods in the Theory of Topoi and Related Categories (1977), analyzes the semantics of theories expressed in predicate calculus.
15. SAHARON SHELAH, "Stability, the F.C.P., and Superstability: Model-Theoretic Properties of Formulas in First Order Theory," Annals of Mathematical Logic 3:271-362 (October 1971), explores advanced semantics.

Applied logic
1. Applications of logic in unexpected areas of philosophy are studied in EVANDRO AGAZZI (ed.), Modern Logic - A Survey: Historical, Philosophical, and Mathematical Aspects of Modern Logic and Its Applications (1981).
2. WILLIAM L. HARPER, ROBERT STALNAKER, and GLENN PEARCE (eds.), IFs: Conditionals, Belief, Decision, Chance, and Time (1981), surveys hypothetical reasoning and inductive reasoning.
3. On applied logic in the philosophy of language, see EDWARD L. KEENAN (ed.), Formal Semantics of Natural Language (1975);


4. JOHAN VAN BENTHEM, Language in Action: Categories, Lambdas, and Dynamic Logic (1991), also discussing the temporal stages in the working out of computer programs, and the same author's Essays in Logical Semantics (1986), emphasizing grammars of natural languages.
5. DAVID HAREL, First-Order Dynamic Logic (1979); and J.W. LLOYD, Foundations of Logic Programming, 2nd extended ed. (1987), study the logic of computer programming.
6. Important topics in artificial intelligence, or computer reasoning, are studied in PETER GÄRDENFORS, Knowledge in Flux: Modeling the Dynamics of Epistemic States (1988), including the problem of changing one's premises during the course of an argument.
7. For more on nonmonotonic logic, see JOHN McCARTHY, "Circumscription: A Form of Non-Monotonic Reasoning," Artificial Intelligence 13(1-2):27-39 (April 1980);
8. DREW McDERMOTT and JON DOYLE, "Non-Monotonic Logic I," Artificial Intelligence 13(1-2):41-72 (April 1980);
9. DREW McDERMOTT, "Nonmonotonic Logic II: Nonmonotonic Modal Theories," Journal of the Association for Computing Machinery 29(1):33-57 (January 1982); and
10. YOAV SHOHAM, Reasoning About Change: Time and Causation from the Standpoint of Artificial Intelligence (1988).


Basic Set Theory

Chapter 1 Sets, Functions, Relations
• Sets and Functions
  – Set building operations
  – Some equational laws
• Relations and Sets with Structures
  – Properties of relations
  – Ordering relations
• Infinities
  – Countability vs. uncountability

1: Sets and Functions

♦ Background Story ♦

A set is an arbitrary collection of arbitrary objects, called its members. One should take these two occurrences of "arbitrary" seriously. Firstly, sets may be finite, e.g., the set C of cars on the parking lot outside the building, or infinite, e.g., the set N of numbers greater than 5. Secondly, any objects can be members of sets. We can talk about sets of cars, blood cells, numbers, Roman emperors, etc. We can also talk about the set X whose elements are: my car, your mother and the number 6. (Not that such a set is necessarily useful for any purpose, but it is possible to collect these various elements into one set.) In particular, sets themselves can be members of other sets. I can, for instance, form the set whose elements are: my favorite pen, my four best friends and the set N. This set will have 6 elements, even though the set N itself is infinite. A set with only one element is called a singleton, e.g., the set containing only the planet Earth. There is one special and very important set – the empty set – which has no members. If it seems startling, you may think of the set of all square circles, or of all numbers x such that x < x. This set is mainly a mathematical convenience – defining a set by describing the properties of its members in an involved way, we may not know from the very beginning what its members are. Eventually, we may find that no such objects exist, that is, that we defined an empty set. It also makes many formulations simpler since, without the assumption of its existence, one would often have to take special precautions for the case a set happened to contain no elements. It may be legitimate to speak about a set even if we do not know exactly its members. The set of people born in 1964 may be hard to determine exactly, but it is a well defined object because, at least in principle, we can determine the membership of any object in this set.
Similarly, we will say that the set R of red objects is well defined even if we certainly do not know all its members. But confronted with a new object, we can determine whether it belongs to R or not (assuming that we do not dispute the meaning of the word "red"). There are four basic means of specifying a set.
1. If a set is finite and small, we may list all its elements, e.g., S = {1, 2, 3, 4} is a set with four elements.
2. A set can be specified by determining a property which makes objects qualify as its elements. The set R of red objects is specified in this way. The set S can be described as 'the set of natural numbers greater than 0 and less than 5'.


3. A set may be obtained from other sets. For instance, given the set S and the set S′ = {3, 4, 5, 6} we can form a new set S″ = {3, 4} which is the intersection of S and S′. Given the sets of odd {1, 3, 5, 7, 9, ...} and even numbers {0, 2, 4, 6, 8, ...} we can form a new set N by taking their union.
4. Finally, a set can be determined by describing the rules by which its elements may be generated. For instance, the set N of natural numbers {0, 1, 2, 3, 4, ...} can be described as follows: 0 belongs to N; if n belongs to N, then also n + 1 belongs to N; and, finally, nothing else belongs to N.
In this chapter we will use mainly the first three ways of describing sets. In particular, we will use various set building operations as in point 3. In the later chapters, we will constantly encounter sets described by the last method. One important point is that the properties of a set are entirely independent of the way the set is described. Whether we just say 'the set of natural numbers' or define the set N as in point 2. or 4., we get the same set. Another thing is that studying and proving properties of a set may be easier when the set is described in one way rather than another. ♦

♦
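The first three ways of specifying a set have direct counterparts in Python's set notation; a small sketch of ours (not part of the text's formalism) illustrates that the resulting set is independent of how it was described:

```python
# 1. Listing the elements:
S = {1, 2, 3, 4}

# 2. By a property (a comprehension over a larger collection):
S_by_property = {n for n in range(100) if 0 < n < 5}

# 3. From other sets, by set operations:
S_prime = {3, 4, 5, 6}
S_double_prime = S & S_prime     # intersection, here {3, 4}

# The set does not depend on the description used:
print(S == S_by_property)        # True
print(S_double_prime == {3, 4})  # True
```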

Definition 1.1 Given some sets S and T we write:
x ∈ S - x is a member (element) of S
S ⊆ T - S is a subset of T, i.e., for all x: if x ∈ S then x ∈ T
S ⊂ T - S ⊆ T and S ≠ T, i.e., for all x: if x ∈ S then x ∈ T, and there is an x: x ∈ T and x ∉ S

Set building operations:
∅ - empty set; for any x: x ∉ ∅
S ∪ T - union of S and T; x ∈ S ∪ T iff x ∈ S or x ∈ T
S ∩ T - intersection of S and T; x ∈ S ∩ T iff x ∈ S and x ∈ T
S \ T - difference of S and T; x ∈ S \ T iff x ∈ S and x ∉ T
S′ - complement of S; assuming a universe U of all elements, S′ = U \ S
S × T - Cartesian product of S and T; x ∈ S × T iff x = ⟨s, t⟩ with s ∈ S and t ∈ T
℘(S) - the power set of S; x ∈ ℘(S) iff x ⊆ S
{x ∈ S : Prop(x)} - the set of those x ∈ S which have the specified property Prop

Remark. Notice that sets may be members of other sets. For instance, {∅} is the set with one element, which is the empty set ∅. In fact, {∅} = ℘(∅). It is different from the set ∅, which has no elements. {{a, b}, a} is a set with two elements: a and the set {a, b}. Also {a, {a}} has two different elements: a and {a}. In particular, the power set will contain many sets as elements: ℘({a, {a, b}}) = {∅, {a}, {{a, b}}, {a, {a, b}}}. In the definition of Cartesian product, we used the notation ⟨s, t⟩ to denote an ordered pair whose first element is s and second t. In set theory, all possible objects are modelled as sets. An ordered pair ⟨s, t⟩ is then represented as the set with two elements – both being sets – {{s}, {s, t}}. Why not {{s}, {t}} or, even simpler, {s, t}? Because elements of a set are not ordered. Thus {s, t} and {t, s} denote the same set. Also, {{s}, {t}} and {{t}, {s}} denote the same set (but different from the set {s, t}). In ordered pairs, on the other hand, the order does matter – ⟨s, t⟩ and ⟨t, s⟩ are different pairs.
This ordering is captured by the representation {{s}, {s, t}}. We have here a set with two elements {A, B} where A = {s} and B = {s, t}. The relationship between these two elements tells us which is the first and which the second: A ⊂ B identifies the member of A as the first element of the pair, and then the element of B \ A as the second one. Thus ⟨s, t⟩ = {{s}, {s, t}} ≠ {{t}, {s, t}} = ⟨t, s⟩. □
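This encoding of pairs can be tried out directly; the following sketch uses Python frozensets (an implementation detail of ours, since ordinary Python sets cannot contain other sets):

```python
# The Kuratowski encoding of ordered pairs: <s, t> = {{s}, {s, t}}.

def pair(s, t):
    return frozenset({frozenset({s}), frozenset({s, t})})

# Order matters for pairs ...
print(pair(1, 2) == pair(2, 1))   # False
# ... even though it does not matter for sets:
print({1, 2} == {2, 1})           # True
# A pair with equal components collapses to {{s}}, since {s, s} = {s}:
print(pair(3, 3) == frozenset({frozenset({3})}))  # True
```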


The set operations ∪, ∩, ′ and \ obey some well-known laws: 1. Idempotency A ∪ A = A A ∩ A = A

2. Associativity (A ∪ B) ∪ C = A ∪ (B ∪ C) (A ∩ B) ∩ C = A ∩ (B ∩ C)

3. Commutativity A∪B = B∪A A∩B = B∩A

4. Distributivity A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

5. DeMorgan (A ∪ B)′ = A′ ∩ B′ (A ∩ B)′ = A′ ∪ B′

6. Complement A ∩ A′ = ∅ A \ B = A ∩ B′

7. Emptyset ∅∪A = A

∅∩A = ∅

8. Consistency principles a) A ⊆ B iff A ∪ B = B

b) A ⊆ B iff A ∩ B = A
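These laws can be spot-checked on small concrete sets. The sketch below (our own, with a finite universe U chosen so that the complement is defined) verifies a few of them with Python's built-in set operators:

```python
# Checking some of the equational laws on concrete finite sets.
U = set(range(10))          # a universe, chosen for this example
A = {1, 2, 3}
B = {2, 3, 4}
C = {3, 4, 5}

comp = lambda X: U - X      # X' = U \ X

# 4. Distributivity
assert A | (B & C) == (A | B) & (A | C)
# 5. DeMorgan
assert comp(A | B) == comp(A) & comp(B)
assert comp(A & B) == comp(A) | comp(B)
# 6. Complement
assert A & comp(A) == set()
assert A - B == A & comp(B)
# 8. Consistency principles, for {1, 2} <= A
assert ({1, 2} <= A) and ({1, 2} | A == A) and ({1, 2} & A == {1, 2})

print("all checked laws hold on this example")
```

Of course, passing on one example proves nothing in general; the derivations by 'substitution of equals for equals' discussed below are what establish the laws for all sets.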

Remark 1.2 [Venn diagrams] It is very common to represent sets and set relations by means of Venn diagrams – overlapping figures, typically circles or rectangles. On the left in the figure below, we have two sets A and B in some universe U. Their intersection A ∩ B is marked as the area covered by both vertical and horizontal lines. If we take A to represent Armenians and B bachelors, the darkest region in the middle represents Armenian bachelors. The region covered by only vertical, but not horizontal, lines is the set difference A \ B – Armenians who are not bachelors. The whole region covered by either vertical or horizontal lines represents all those who are either Armenians or bachelors.

[Figure: two Venn diagrams of sets A and B in the universe U, illustrating (A ∪ B)′ = A′ ∩ B′]

Now, the white region is the complement of the set A ∪ B (in the universe U) – all those who are neither Armenians nor bachelors. The diagram to the right is essentially the same but was constructed in a different way. Here, the region covered with vertical lines is the complement of A – all non-Armenians. The region covered with horizontal lines represents all non-bachelors. The region covered with both horizontal and vertical lines is the intersection of these two complements – all those who are neither Armenians nor bachelors. The two diagrams illustrate the first DeMorgan law, since the white area on the left, (A ∪ B)′, is exactly the same as the area covered with both horizontal and vertical lines on the right, A′ ∩ B′. □

Venn’s diagrams may be a handy tool to visualize simple set operations. However, the equalities above can also be seen as a (not yet quite, but almost) formal system allowing one to derive various other set equalities. The rule for performing such derivations is ‘substitution of equals for equals’, known also from elementary arithmetic. For instance, the fact that A ⊆ A for an arbitrary set A amounts to a single application of rule 8.a): A ⊆ A iff A ∪ A = A, where the last equality holds by 1. A slightly longer derivation shows that (A ∪ B) ∪ C = (C ∪ A) ∪ B:

(A ∪ B) ∪ C = C ∪ (A ∪ B)    (by 3.)
            = (C ∪ A) ∪ B    (by 2.)

In the exercises we will encounter more elaborate examples. In addition to the set building operations from the above definition, one often encounters also the disjoint union of sets A and B, written A ⊎ B and defined as A ⊎ B = (A × {0}) ∪ (B × {1}). The idea is to use 0, resp. 1, as indices to distinguish the elements originating from A and from B. If A ∩ B = ∅, this would not be necessary, but otherwise the “disjointness” of this union requires that the common elements be duplicated. E.g., for A = {a, b, c} and B = {b, c, d}, we have A ∪ B = {a, b, c, d} while A ⊎ B = {⟨a, 0⟩, ⟨b, 0⟩, ⟨c, 0⟩, ⟨b, 1⟩, ⟨c, 1⟩, ⟨d, 1⟩}, which can be thought of as {a₀, b₀, c₀, b₁, c₁, d₁}.
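The disjoint union is easy to compute; the following sketch (our own illustration, using the example sets A and B from above) shows how tagging with 0 and 1 duplicates the overlap:

```python
# A ⊎ B = (A × {0}) ∪ (B × {1}): tag each element with its origin.
def disjoint_union(A, B):
    return {(a, 0) for a in A} | {(b, 1) for b in B}

A = {'a', 'b', 'c'}
B = {'b', 'c', 'd'}
assert A | B == {'a', 'b', 'c', 'd'}                  # plain union: 4 elements
assert len(disjoint_union(A, B)) == len(A) + len(B)   # disjoint union: 6 elements
# The common elements b, c appear twice, once with each tag:
assert ('b', 0) in disjoint_union(A, B) and ('b', 1) in disjoint_union(A, B)
```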


I.1. Sets, Functions, Relations

Definition 1.3 Given two sets S and T, a function f from S to T, written f : S → T, is a subset of S × T such that
• whenever ⟨s, t⟩ ∈ f and ⟨s, t′⟩ ∈ f, then t = t′, and
• for each s ∈ S there is some t ∈ T such that ⟨s, t⟩ ∈ f.
A subset of S × T that satisfies the first condition above, but not necessarily the second, is called a partial function from S to T. For a function f : S → T, the set S is called the source or domain of the function, and the set T its target or codomain.

The second point of this definition means that a function is total – for each argument (element s ∈ S), the function has some value, i.e., an element t ∈ T such that ⟨s, t⟩ ∈ f. Sometimes this requirement is dropped and one speaks about partial functions, which may have no value for some arguments, but we will be for the most part concerned with total functions.

Example 1.4 Let N denote the set of natural numbers {0, 1, 2, 3, ...}. The mapping f : N → N defined by f(n) = 2n is a function. It is the set of all pairs f = {⟨n, 2n⟩ : n ∈ N}. If we let M denote the set of all people, then the set of all pairs father = {⟨m, m’s father⟩ : m ∈ M} is a function assigning to each person his/her father. A mapping ‘children’, assigning to each person his/her children, is not a function M → M for two reasons. First, a person may have no children, while by “function” we mean a total function. Second, a person may have more than one child. These problems can be overcome if we consider it instead as a function M → ℘(M) assigning to each person the (possibly empty) set of all his/her children. □

Notice that although intuitively we think of a function as a mapping assigning to each argument some value, the definition states that it is actually a set (a subset of S × T is a set). The restrictions put on this set are exactly what makes it possible to think of it as a mapping. Nevertheless, functions – being sets – can be elements of other sets. We may encounter situations involving sets of functions, e.g. the set T^S of all functions from the set S to the set T, which consists of all those subsets of S × T satisfying the conditions of Definition 1.3.

Remark 1.5 [Notation] A function f associates with each element s ∈ S a unique element t ∈ T. We write this t as f(s) – the value of f at point s. When S is finite (and small) we may sometimes write a function as a set {⟨s1, t1⟩, ⟨s2, t2⟩, ..., ⟨sn, tn⟩} or else as {s1 ↦ t1, s2 ↦ t2, ..., sn ↦ tn}. If f is given, then by f[s ↦ p] we denote the function f′ which is the same as f for all arguments x ≠ s, i.e. f′(x) = f(x), while f′(s) = p. □
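The view of a function as a set of pairs, and the update notation f[s ↦ p] of Remark 1.5, can be rendered quite literally (an illustrative sketch of ours; the predicate names are assumptions, not the book's notation):

```python
# A (total) function S → T as a set of pairs: exactly one pair per s ∈ S.
def is_function(f, S, T):
    return (all(any(s == s2 for (s2, _) in f) for s in S)                  # totality
            and all(t == t2 for (s, t) in f for (s2, t2) in f if s == s2)  # single-valuedness
            and all(s in S and t in T for (s, t) in f))                    # f ⊆ S × T

S, T = {0, 1, 2}, {0, 2, 4}
double = {(n, 2 * n) for n in S}          # the function n ↦ 2n as a set
assert is_function(double, S, T)
assert not is_function({(0, 0), (0, 2)}, S, T)   # two values at 0
assert not is_function({(0, 0)}, S, T)           # undefined at 1 and 2: only partial

# The update f[s ↦ p] of Remark 1.5, phrased on sets of pairs:
def update(f, s, p):
    return {(x, y) for (x, y) in f if x != s} | {(s, p)}

assert update(double, 1, 0) == {(0, 0), (1, 0), (2, 4)}
```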

Definition 1.6 A function f : S → T is
• injective iff whenever f(s) = f(s′) then s = s′;
• surjective iff for all t ∈ T there exists an s ∈ S such that f(s) = t;
• bijective, or a set-isomorphism, iff it is both injective and surjective.

Injectivity means that no two distinct elements from the source set are mapped to the same element in the target set; surjectivity, that each element in the target is an image of some element from the source.

Example 1.7 The function father : M → M is injective – everybody has exactly one (biological) father – but it is not surjective – not everybody is a father of somebody. The following drawing gives some examples:

[Figure: arrow diagrams of functions h, f : S → T between four-element sets, one injective but not surjective, one surjective but not injective.]
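For finite functions the three properties of Definition 1.6 are directly checkable; the following sketch (our illustration) represents a function as a Python dict:

```python
# Definition 1.6 for finite functions given as dicts (keys = source set).
def injective(f):
    # no two distinct arguments share a value
    return len(set(f.values())) == len(f)

def surjective(f, T):
    # every element of the target is hit
    return set(f.values()) == set(T)

def bijective(f, T):
    return injective(f) and surjective(f, T)

f = {1: 'a', 2: 'b', 3: 'c'}
assert bijective(f, {'a', 'b', 'c'})
g = {1: 'a', 2: 'b'}
assert injective(g) and not surjective(g, {'a', 'b', 'c'})
assert not injective({1: 'a', 2: 'a'})
```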

From certain assumptions (“axioms”) about sets it can be proven that the relation ≤ on cardinalities has the properties of a weak TO, i.e., it is reflexive (obvious), transitive (fairly obvious), antisymmetric (not so obvious) and total (less obvious).
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . [optional]
As an example of how intricate reasoning may be needed to establish such “not quite but almost obvious” facts, we show that ≤ is antisymmetric.

Theorem 1.23 [Schröder-Bernstein] For arbitrary sets X, Y, if there are injections i : X → Y and j : Y → X, then there is a bijection f : X → Y (i.e., if |X| ≤ |Y| and |Y| ≤ |X| then |X| = |Y|).


Proof If the injection i : X → Y is surjective, i.e., i(X) = Y, then i is a bijection and we are done. Otherwise, we have Y0 = Y \ i(X) ≠ ∅ and we apply j and i repeatedly as follows:

Y0 = Y \ i(X)      X0 = j(Y0)
Yn+1 = i(Xn)       Xn+1 = j(Yn+1)

Y* = ⋃_{n=0}^{ω} Yn      X* = ⋃_{n=0}^{ω} Xn

[Figure: the chain Y0, Y1, Y2, ..., with each Yn mapped by j to Xn, and each Xn mapped by i to Yn+1.]

Thus we can divide both sets into disjoint components:

Y = (Y \ Y*) ∪ Y*
X = (X \ X*) ∪ X*

with j restricting to Y* → X* and i restricting to (X \ X*) → (Y \ Y*).

We show that the respective restrictions of j and i are bijections. First, j : Y* → X* is a bijection (it is injective, and the following equation shows that it is surjective):

j(Y*) = j(⋃_{n=0}^{ω} Yn) = ⋃_{n=0}^{ω} j(Yn) = ⋃_{n=0}^{ω} Xn = X*

By Lemma 1.8, j⁻ : X* → Y*, defined by j⁻(x) = y : j(y) = x, is a bijection too. Furthermore:

(F.i)  i(X*) = i(⋃_{n=0}^{ω} Xn) = ⋃_{n=0}^{ω} i(Xn) = ⋃_{n=0}^{ω} Yn+1 = ⋃_{n=1}^{ω} Yn = Y* \ Y0.

Now, the first of the following equalities holds since i is injective, the second by (F.i) and since i(X) = Y \ Y0 (definition of Y0), and the last since Y0 ⊆ Y*:

i(X \ X*) = i(X) \ i(X*) = (Y \ Y0) \ (Y* \ Y0) = Y \ Y*,

i.e., i : (X \ X*) → (Y \ Y*) is a bijection. We obtain a bijection f : X → Y defined by

f(x) = i(x)   if x ∈ X \ X*
f(x) = j⁻(x)  if x ∈ X*
                                                          QED (1.23)

A more abstract proof. The construction of the sets X* and Y* in the above proof can be subsumed under a more abstract formulation, given by Claims 1 and 2 below. In particular, Claim 1 has a very general form.

Claim 1. For any set X, if h : ℘(X) → ℘(X) is monotonic, i.e. such that whenever A ⊆ B ⊆ X then h(A) ⊆ h(B), then there is a set T ⊆ X with h(T) = T. We show that T = ⋃{A ⊆ X : A ⊆ h(A)} is such a set:
a) T ⊆ h(T) : for each t ∈ T there is an A with t ∈ A ⊆ T and A ⊆ h(A). But then A ⊆ T implies h(A) ⊆ h(T), and so t ∈ A ⊆ h(A) ⊆ h(T).
b) h(T) ⊆ T : from a), T ⊆ h(T), so h(T) ⊆ h(h(T)), which means that h(T) is itself one of the sets A with A ⊆ h(A), and hence h(T) ⊆ T by the definition of T.

Claim 2. Given injections i, j, define * : ℘(X) → ℘(X) by A* = X \ j(Y \ i(A)). If A ⊆ B ⊆ X then A* ⊆ B*. This follows trivially from the injectivity of i and j: A ⊆ B, so i(A) ⊆ i(B), so Y \ i(A) ⊇ Y \ i(B), so j(Y \ i(A)) ⊇ j(Y \ i(B)), and hence X \ j(Y \ i(A)) ⊆ X \ j(Y \ i(B)).

3. Claims 1 and 2 imply that there is a T ⊆ X such that T* = T, i.e., T = X \ j(Y \ i(T)). Then f : X → Y, defined by f(x) = i(x) if x ∈ T and f(x) = j⁻¹(x) if x ∉ T, is a bijection. We have X = j(Y \ i(T)) ∪ T and Y = (Y \ i(T)) ∪ i(T), and obviously j⁻¹ is a bijection between j(Y \ i(T)) and Y \ i(T), while i is a bijection between T and i(T). □

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . [end optional]
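Claim 1 is, in effect, a fixpoint theorem for monotonic operators on ℘(X). For a finite X a fixpoint can even be computed by iteration, as in the sketch below (our own illustration; the graph-reachability operator is ours, not the book's example):

```python
# Claim 1 in miniature: a monotonic h : ℘(X) → ℘(X) on a finite X has a
# fixpoint, and iterating h from a set A with A ⊆ h(A) reaches one.
def fixpoint(h, start):
    A = frozenset(start)
    while True:
        B = frozenset(h(A))
        if B == A:              # h(A) = A: a fixpoint, as promised by Claim 1
            return A
        A = B

# A monotonic operator (our example): "0, plus all successors of A" in a graph.
edges = {0: {1, 2}, 1: {3}, 2: {3}, 3: set(), 4: {0}}
def h(A):
    return {0} | {m for n in A for m in edges[n]}

T = fixpoint(h, {0})            # {0} ⊆ h({0}), so the iteration only grows
assert T == frozenset({0, 1, 2, 3})   # node 4 is not reachable from 0
assert frozenset(h(T)) == T           # h(T) = T
```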


Each finite set has a cardinality which is a natural number. The apparently empty Definition 1.20 becomes more significant when we look at infinite sets.

Definition 1.24 A set S is infinite iff there exists a proper subset T ⊂ S such that S ≅ T.

Example 1.25 For simplicity, let us consider the set of natural numbers, N. We denote its cardinality |N| =def ℵ0. (Sometimes it is also written ω, although axiomatic set theory distinguishes between the cardinal number ℵ0 and the ordinal number ω. Ordinal number is a more fine-grained notion than cardinal number, but we shall not worry about this.) We have, for instance, that |N| = |N \ {0}| by the simple bijection:

0  1  2  3  ...
↕  ↕  ↕  ↕
1  2  3  4  ...

In fact, the cardinality of N is the same as the cardinality of the even natural numbers! It is easy to see that the pair of functions f(n) = 2n and f⁻¹(2n) = n forms a bijection:

0  1  2  3  4  5   ...   (f downwards, f⁻¹ upwards)
↕  ↕  ↕  ↕  ↕  ↕
0  2  4  6  8  10  ...

In fact, when |S| = |T| = ℵ0 and |P| = n < ℵ0, we have:

|S ∪ T| = ℵ0
|S \ P| = ℵ0
|S × T| = ℵ0

We illustrate a possible set-isomorphism N ≅ N × N below:

[Figure: the pairs ⟨m, n⟩ arranged in an infinite grid and enumerated diagonal by diagonal, starting from ⟨0, 0⟩ and zigzagging along the diagonals m + n = 1, m + n = 2, m + n = 3, ...] □
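One concrete bijection of this diagonal kind is the Cantor pairing function, sketched below (our illustration; its traversal direction along each diagonal may differ from the figure's zigzag):

```python
# Cantor pairing: number the pairs of naturals diagonal by diagonal.
def pair(m, n):
    d = m + n                       # index of the diagonal containing (m, n)
    return d * (d + 1) // 2 + n     # pairs on earlier diagonals, plus offset

def unpair(k):
    # invert pair(): find the diagonal, then the offset along it
    d = 0
    while (d + 1) * (d + 2) // 2 <= k:
        d += 1
    n = k - d * (d + 1) // 2
    return (d - n, n)

# Every natural number codes exactly one pair: a bijection N ≅ N × N.
assert [pair(*unpair(k)) for k in range(100)] == list(range(100))
assert pair(0, 0) == 0 and pair(1, 0) == 1 and pair(0, 1) == 2
```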

A set-isomorphism S ≅ N amounts to an enumeration of the elements of S. Thus, if |S| ≤ ℵ0 we say that S is enumerable or countable; in case of equality, we say that it is countably infinite. Now, the question “are there any uncountable sets?” was answered by the founder of modern set theory, Georg Cantor.

Theorem 1.26 For any set A : |A| < |℘(A)|.

Proof The construction applied here shows that the contrary assumption – A ≅ ℘(A) – leads to a contradiction. Obviously,

(F.ii)  |A| ≤ |℘(A)|

since the mapping defined by f(a) = {a} is an injective function f : A → ℘(A). So, assume that equality holds in (F.ii), i.e., that there is a corresponding F : A → ℘(A) which is both injective and surjective. Define the subset B of A by B =def {a ∈ A : a ∉ F(a)}. Since B ⊆ A, we have B ∈ ℘(A) and, since F is surjective, there is a b ∈ A such that F(b) = B. Is b in B or not? Each of the two possible answers yields a contradiction:


1. b ∈ F(b) means b ∈ {a ∈ A : a ∉ F(a)}, which means b ∉ F(b);
2. b ∉ F(b) means b ∉ {a ∈ A : a ∉ F(a)}, which means b ∈ F(b).

QED (1.26)

Corollary 1.27 For each cardinal number λ there is a cardinal number κ > λ. In particular, ℵ0 = |N| < |℘(N)| < |℘(℘(N))| < ...

Theorem 1.26 proves that there exist uncountable sets, but are they of any interest? Another theorem of Cantor shows that such sets have been around in mathematics for quite a while.

Theorem 1.28 The set R of real numbers is uncountable.

Proof Since N ⊂ R, we know that |N| ≤ |R|. The diagonalisation technique, introduced here by Cantor, reduces the assumption that |N| = |R| ad absurdum. If R ≅ N then, certainly, we can enumerate any subset of R. Consider only the closed interval [0, 1] ⊂ R. If it is countable, we can list all its members, writing them in decimal expansion (each rij is a digit):

n1 = 0. r11 r12 r13 r14 r15 r16 ....
n2 = 0. r21 r22 r23 r24 r25 r26 ....
n3 = 0. r31 r32 r33 r34 r35 r36 ....
n4 = 0. r41 r42 r43 r44 r45 r46 ....
n5 = 0. r51 r52 r53 r54 r55 r56 ....
n6 = 0. r61 r62 r63 r64 r65 r66 ....
...

Form a new real number r by replacing each diagonal digit rii with another digit; for instance, let r = 0.r1 r2 r3 r4 ..., where ri = rii + 1 if rii < 9, and ri = 0 if rii = 9. Then r cannot be any of the enumerated numbers n1, n2, n3, ...: each number ni in the list has at its i-th position the digit rii, which is different from the digit ri at the i-th position in r. QED (1.28)
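The heart of the argument can be rehearsed on a finite square of digits: given any n digit strings of length n, the diagonal construction produces a string absent from the list. A sketch (our illustration):

```python
# Cantor's diagonal construction on a finite list of digit strings:
# change the i-th digit of the i-th string (9 wraps around to 0).
def diagonal(rows):
    return ''.join('0' if row[i] == '9' else str(int(row[i]) + 1)
                   for i, row in enumerate(rows))

rows = ['141592', '718281', '999999', '414213', '302775', '173205']
r = diagonal(rows)
assert len(r) == len(rows)
assert all(r[i] != rows[i][i] for i in range(len(rows)))  # differs on the diagonal
assert r not in rows                                      # hence not in the list
```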

“Sets” which are not Sets
In Definition 1.1 we introduced several set building operations. The power set operation ℘( ) has proven particularly powerful. The most peculiar one, however, is the comprehension operation, namely, the one allowing us to form a set of elements satisfying some property, {x : Prop(x)}. Although apparently very natural, its unrestricted use leads to severe problems.

Russell’s Paradox Define U as the set {x : x ∉ x}. Now the question is: Is U ∈ U ? Each possible answer leads to absurdity:
1. U ∈ U means that U is one of the x’s in U, i.e., U ∈ {x : x ∉ x}, so U ∉ U;
2. U ∉ U means that U ∈ {x : x ∉ x}, so U ∈ U. □

The problem arises because in the definition of U we did not specify what kind of x’s we are gathering. Among the many solutions to this paradox, the most commonly accepted one is to exclude such definitions by requiring that the x’s which are to satisfy a given property, when collected into a new set, must already belong to some other set. This is the formulation we used in Definition 1.1, where we said that if S is a set then {x ∈ S : Prop(x)} is a set too. The “definition” of U = {x : x ∉ x} does not conform to this format and hence is not considered a valid description of a set.


Exercises (week 1)

exercise 1.1+ Given the following sets:

S1 = {{∅}, A, {A}}     S6 = ∅
S2 = A                 S7 = {∅}
S3 = {A}               S8 = {{∅}}
S4 = {{A}}             S9 = {∅, {∅}}
S5 = {A, {A}}

Of the sets S1-S9, which
1. are members of S1 ?
2. are subsets of S1 ?
3. are members of S9 ?
4. are subsets of S9 ?
5. are members of S4 ?
6. are subsets of S4 ?

exercise 1.2+ Let A = {a, b, c}, B = {c, d}, C = {d, e, f}.
1. Write the sets: A ∪ B, A ∩ B, A ∪ (B ∩ C).
2. Is a a member of {A, B}? Of A ∪ B?
3. Write the sets A × B and B × A.

exercise 1.3 Using the set theoretic equalities (page 30), show that:
1. A ∩ (B \ A) = ∅
2. ((A ∪ C) ∩ (B ∪ C′)) ⊆ (A ∪ B)
(Show first some lemmas:
• A ∩ B ⊆ A
• if A ⊆ B then A ⊆ B ∪ C
• if A1 ⊆ X and A2 ⊆ X then A1 ∪ A2 ⊆ X
Then expand the expression (A ∪ C) ∩ (B ∪ C′) to one of the form X1 ∪ X2 ∪ X3 ∪ X4, show that each Xi ⊆ A ∪ B, and use the last lemma.)

exercise 1.4+ Let S = {0, 1, 2} and T = {0, 1, {0, 1}}. Construct ℘(S) and ℘(T).

exercise 1.5 Let S = {5, 10, 15, 20, 25, ...} and T = {3, 4, 7, 8, 11, 12, 15, 16, 19, 20, ...}.
1. Specify each of these sets by defining the properties PS and PT such that S = {x ∈ N : PS(x)} and T = {x ∈ N : PT(x)}.
2. For each of these sets specify two other properties PS1, PS2 and PT1, PT2, such that S = {x ∈ N : PS1(x)} ∪ {x ∈ N : PS2(x)}, and similarly for T.

exercise 1.6 Construct examples of injective (but not surjective) and surjective (but not injective) functions S → T which do not induce the inverse function in the way a bijection does (Lemma 1.8).

exercise 1.7 Prove the claims cited below Definition 1.13, that:
1. Every sPO (irreflexive, transitive relation) is asymmetric.
2. Every asymmetric relation is antisymmetric.
3. If R is connected, symmetric and reflexive, then R(s, t) for every pair s, t. What about a relation that is connected, symmetric and transitive? In what way does this depend on the cardinality of S?

exercise 1.8+ Let C be a collection of sets. Show that equality = and existence of a set-isomorphism ≅ are equivalence relations on C × C, as claimed under Definition 1.13. Give an example of two sets S and T such that S ≅ T but S ≠ T (they are set-isomorphic but not equal).


exercise 1.9 Let C be an arbitrary (non-empty) collection of sets. Show that:
1. the inclusion relation ⊆ is a wPO on C;
2. ⊂ is its strict version;
3. ⊆ is not (necessarily) a TO on C.

exercise 1.10+ If |S| = n for some natural number n, what will be the cardinality of ℘(S)?

exercise 1.11 Let A be a countable set.
1. If also B is countable, show that:
(a) the disjoint union A ⊎ B is countable (specify its enumeration, assuming the existence of enumerations of A and B);
(b) the union A ∪ B is countable (specify an injection into A ⊎ B).
2. If B is uncountable, can A × B ever be countable?


Chapter 2
Induction

• Well-founded Orderings – General notion of Inductive proof
• Inductive Definitions – Structural Induction

1: Well-Founded Orderings

a Background Story ♦ ♦
Ancients had many ideas about the basic structure and limits of the world. According to one of them our world – the earth – rested on a huge tortoise. The tortoise itself couldn’t just be suspended in a vacuum – it stood on the backs of several elephants. The elephants all stood on a huge disk which, in turn, was perhaps resting on the backs of some camels. And the camels? Well, the story obviously had to stop somewhere because, as we notice, one could produce new sub-levels of animals resting on other objects resting on yet other animals, resting on ... indefinitely. The idea is not well founded because such a hierarchy has no well defined beginning; it hangs in a vacuum. Any attempt to provide the last, most fundamental level is immediately met with the question “And what is beyond that?” The same problem of the lacking foundation is encountered when one tries to think about the beginning of time. When was it? Physicists may say that it was the Big Bang. But then one immediately asks “OK, but what was before?”. Some early opponents of the Biblical story of the creation of the world – and thus, of time as well – asked “What did God do before He created time?”. St. Augustine, realising the need for a definite answer which, however, couldn’t be given in the same spirit as the question, answered “He prepared the hell for those asking such questions.” One should be wary here of the distinction between the beginning and the end, or else, between moving backward and forward. For sure, we imagine that things, the world, may continue to exist indefinitely in the future – this idea does not cause much trouble. But our intuition is uneasy with things which do not have any beginning, with chains of events extending indefinitely backwards, whether it is a backward movement along the dimension of time or of causality.
Such non well-founded chains are hard to imagine and even harder to do anything with – all our thinking, activity and constructions have to start from some beginning. Having an idea of a beginning, one will often be able to develop it into a description of the ensuing process. One will typically say: since the beginning was so-and-so, such-and-such had to follow, since it is implied by the properties of the beginning. Then the properties of this second stage imply some more, and so on. But having nothing to start with, we are left without a foundation for performing any intelligible acts. Mathematics has no problems with chains extending infinitely in both directions. Yet, it has a particular liking for chains which do have a beginning – for orderings which are well-founded. As with our intuition and activity otherwise, the possibility of ordering a set in a way which identifies its least, first, starting elements gives a mathematician a lot of powerful tools. We will study in this chapter some fundamental tools of this kind. As we will see later, almost all our presentation will be based on well-founded orderings. ♦ ♦

Definition 2.1 Let ⟨S, ≤⟩ be a PO and T ⊆ S.



I.2. Induction

• x ∈ T is a minimal element of T iff there is no element smaller than x, i.e., for no y ∈ T : y < x.
• ⟨S, ≤⟩ is well-founded iff each non-empty T ⊆ S has a minimal element.

The set of natural numbers with the standard ordering, ⟨N, ≤⟩, is well-founded, but the set of all integers with the natural extension of this ordering, ⟨Z, ≤⟩, is not – the subset of all negative integers does not have a ≤-minimal element. Intuitively, well-foundedness means that the ordering has a “basis”, a set of minimal “starting points”. This is captured by the following lemma.

Lemma 2.2 A PO ⟨S, ≤⟩ is well-founded iff there is no infinite decreasing sequence, i.e., no sequence {an}n∈N of elements of S such that an > an+1.

Proof ⇐) If ⟨S, ≤⟩ is not well-founded, then let T ⊆ S be a subset without a minimal element. Let a1 ∈ T – since it is not minimal, we can find a2 ∈ T such that a1 > a2. Again, a2 is not minimal, so we can find a3 ∈ T such that a2 > a3. Continuing this process we obtain an infinite descending sequence a1 > a2 > a3 > ....
⇒) Suppose that there is such a sequence a1 > a2 > a3 > .... Then, obviously, the set {an : n ∈ N} ⊆ S has no minimal element. QED (2.2)

Example 2.3 Consider again the orderings on finite strings defined in Example 1.17.
1. The relation ≺Q is a well-founded sPO; there is no way to construct an infinite sequence of strings with ever decreasing lengths!
2. The relation ≺P is a well-founded PO: any subset of strings will contain element(s) such that none of their prefixes (except the strings themselves) are in the set. For instance, a and bc are the ≺P-minimal elements of S = {ab, abc, a, bcaa, bca, bc}.
3. The relation ≺L is not well-founded, since there exist infinite descending sequences like . . . ≺L aaaab ≺L aaab ≺L aab ≺L ab ≺L b. In order to construct any such descending sequence, however, one has to introduce ever longer strings as one proceeds towards infinity. Hence the alternative ordering below is also of interest.
4. The relation ≺Q was defined in Example 1.17. Now define s ≺L′ p iff s ≺Q p, or length(s) = length(p) and s ≺L p. Sequences are thus ordered primarily by length and secondarily by the previous lexicographic order. The ordering ≺L′ is indeed well-founded and, in addition, connected, i.e., a well-founded TO. □
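The ≺P-minimal elements claimed in point 2. can be computed directly; the following sketch (our illustration) checks the example set S under the proper-prefix ordering:

```python
# s ≺_P p iff s is a proper prefix of p; minimal = nothing below it in the set.
def prefix_lt(s, p):
    return s != p and p.startswith(s)

def minimal(S, lt):
    return {x for x in S if not any(lt(y, x) for y in S)}

S = {'ab', 'abc', 'a', 'bcaa', 'bca', 'bc'}
assert minimal(S, prefix_lt) == {'a', 'bc'}   # as claimed in the example
```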

Definition 2.4 ⟨S, ≤⟩ is a well-ordering, WO, iff it is a TO and is well-founded.

Notice that a well-founded ordering is not the same as a well-ordering. The former can still be a PO which is not a TO. The requirement that a WO ⟨S, ≤⟩ is a TO implies that each (sub)set of S has not only a minimal element but a unique minimal element.

Example 2.5 The set of natural numbers with the “less than” relation, ⟨N, <⟩, [...]

Induction :: Let z′ > 1 be arbitrary, i.e., z′ = z + 1 for some z > 0.

1 + 3 + ... + (2z′ − 1) = 1 + 3 + ... + (2z − 1) + (2(z + 1) − 1)
                        = z² + 2z + 1        (by IH since z < z′ = z + 1)
                        = (z + 1)² = (z′)²

The proof for z < 0 is entirely analogous, but now we have to reverse the ordering: we start with z = −1 and proceed along the negative integers, only considering z ≺ z′ iff |z| < |z′|, where |z| denotes the absolute value of z (i.e., |z| = −z for z < 0). Thus, for z, z′ < 0, we actually have z ≺ z′ iff z > z′.
Basis :: For z = −1, we have −1 = −(−1)².
Induction :: Let z′ < −1 be arbitrary, i.e., z′ = z − 1 for some z < 0.

−1 − 3 − ... + (2z′ + 1) = −1 − 3 − ... + (2z + 1) + (2(z − 1) + 1)
                         = −|z|² − 2|z| − 1        (by IH since z ≺ z′)
                         = −(|z|² + 2|z| + 1) = −(|z| + 1)² = −(z − 1)² = −(z′)²

The second part of the proof makes it clear that the well-founded ordering on the whole of Z we have been using was not the usual ordering of the integers. We have in fact proved, for all z ∈ Z:

1 + 3 + 5 + ... + (2|z| − 1) = z²            if z > 0
−(1 + 3 + 5 + ... + (2|z| − 1)) = −(z²)      if z < 0

Thus formulated, it becomes obvious that we only have one statement to prove: we show the first claim (in the same way as we did in point 1.) and then apply trivial arithmetic to conclude from x = y that also −x = −y. This, indeed, is a smarter way to prove the claim. Is it induction? Yes it is. We prove it first for all positive integers – by induction on the natural ordering of the positive integers. Then, we take an arbitrary negative integer z and observe (assuming the induction hypothesis!) that we have already proved 1 + 3 + ... + (2|z| − 1) = |z|². The well-founded ordering on Z we are using in this case orders first all positive integers along < (for proving the first part of the claim) and then, for any z < 0, puts z after |z| but unrelated to the other n > |z| > 0 – the induction hypothesis for proving the claim for such a z < 0 is, namely, that the claim holds for |z|. The ordering is shown on the left:

[Figure: two well-founded orderings on Z. On the left, the chain 1 → 2 → 3 → 4 → ..., with each negative integer −n placed directly above n. On the right, the chain 1 → 2 → 3 → ..., followed by all the negative integers −1, −2, −3, ...]


As a matter of fact, the structure of this proof allows us to view the ordering used in yet another way. We first prove the claim for all positive integers. Thus, when proving it for an arbitrary negative integer z < 0, we can assume a statement stronger than the one we actually use, namely, that the claim holds for all positive integers. This ordering puts all negative integers after all positive ones, as shown on the right in the figure above. Notice that none of the orderings we encountered in this example was total. □

2: Inductive Definitions

We have introduced the general idea of inductive proof over an arbitrary well-founded ordering defined on an arbitrary set. The idea of induction – a kind of stepwise construction of the whole from a “basis” by repetitive applications of given rules – can be applied not only for constructing proofs but also for constructing, that is, defining, sets. We now illustrate this technique of definition by induction and then (subsection 2.3) proceed to show how it gives rise to the possibility of using a special case of the inductive proof strategy – structural induction – on sets defined in this way.

a Background Story ♦ ♦
Suppose I make a simple statement, for instance, (1) ‘John is a nice person’. Its truth may be debatable – some people may think that, on the contrary, he is not nice at all. Pointing this out, they might say – “No, he is not, you only think that he is”. So, to make my statement less definite I might instead say (2) ‘I think that ‘John is a nice person’’. In the philosophical tradition one would say that (2) expresses a reflection over (1) – it expresses the act of reflecting over the first statement. But now (2) is a new statement, and so I can reflect over it again: (3) ‘I think that ‘I think that ‘John is nice’’’. It isn’t perhaps obvious why I should make this kind of statement, but I certainly can make it and, with some effort, perhaps even attach some meaning to it. Then I can just continue: (4) ‘I think that (3)’, (5) ‘I think that (4)’, etc. The further (or higher) we go, the less idea we have of what one might possibly intend with such expressions. Philosophers used to spend time analysing their possible meaning – the possible meaning of such repetitive acts of reflection over reflection over reflection ... over something. In general, they agree that such an infinite regress does not yield anything intuitively meaningful and should be avoided.
In terms of normal language usage, we hardly ever attempt to carry such a process beyond level (2) – the statements at the higher levels do not make any meaningful contribution to a conversation. Yet they are possible for purely linguistic reasons – each statement obtained in this way is grammatically correct. And what is ‘this way’? Simply:
Basis :: Start with some statement, e.g., (1) ‘John is nice’.
Step :: Whenever you have produced some statement (n) – at first it is just (1), but after a few steps you have some higher statement (n) – you may produce a new statement by prepending (n) with ‘I think that ...’. Thus you obtain a new, (n+1)-st, statement ‘I think that (n)’.
Anything you obtain according to this rule happens to be grammatically correct, and the whole infinite chain of such statements constitutes what philosophers call an infinite regress. The crucial point here is that we do not start with some set which we analyse, for instance, by looking for some ordering on it. We are defining a new set – the set of statements {(1), (2), (3), ...} – in a very particular way. We are applying the idea of induction – stepwise construction from a “basis” – not for proving properties of a given set with some well-founded ordering, but for defining a new set. ♦

♦

One may often encounter sets described by means of abbreviations like E = {0, 2, 4, 6, 8, ...} or T = {1, 4, 7, 10, 13, 16, ...}. The abbreviation ... indicates that the author assumes that you have figured out what the subsequent elements will be – and that there will be infinitely many of them.

48

Basic Set Theory

It is assumed that you have figured out the rule by which to generate all the elements. The same sets may be defined more precisely with explicit reference to the respective rule:

(F.iv)  E = {2·n : n ∈ N} and T = {3·n + 1 : n ∈ N}

Another way to describe these two rules is as follows. The set E is defined by:
Basis :: 0 ∈ E.
Step :: Whenever x ∈ E, then also x + 2 ∈ E.
Closure :: Nothing else belongs to E.
The other set is defined similarly:
Basis :: 1 ∈ T.
Step :: Whenever x ∈ T, then also x + 3 ∈ T.
Closure :: Nothing else belongs to T.
Here we are not so much defining the whole set by one static formula (as we did in (F.iv)) as specifying the rules for generating new elements from elements which we have already included in the set. Not all formulae (static rules, as those used in (F.iv)) allow an equivalent formulation in terms of such generation rules. Yet quite many sets of interest can be defined by means of such generation rules – quite many sets can be introduced by means of inductive definitions. Inductively defined sets will play a central role in all the subsequent chapters.

Idea 2.13 [Inductive definition of a set] An inductive definition of a set S consists of:
Basis :: List some (at least one) elements B ⊆ S.
Induction :: Give one or more rules to construct new elements of S from already existing elements.
Closure :: State that S consists exactly of the elements obtained by the basis and induction steps. (This is typically assumed rather than stated explicitly.)

Example 2.14 The finite strings Σ* over an alphabet Σ from Example 1.17 can be defined inductively, starting with the empty string ε, i.e., the string of length 0, as follows:
Basis :: ε ∈ Σ*
Induction :: if s ∈ Σ* then xs ∈ Σ* for all x ∈ Σ
The constructors are the empty string and the operations s ↦ xs, prepending an element in front of a string, for all x ∈ Σ. Notice that a 1-element string like x will here be represented as xε. □

Example 2.15 The finite non-empty strings Σ+ over an alphabet Σ are defined by starting with a different basis:
Basis :: x ∈ Σ+ for all x ∈ Σ
Induction :: if s ∈ Σ+ then xs ∈ Σ+ for all x ∈ Σ □
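Run for finitely many steps, the Basis/Step/Closure definitions of E and T above can be executed; the sketch below (our illustration) generates the elements up to a bound:

```python
# Close a basis under a step rule, keeping only elements up to a bound.
def generate(basis, step, bound):
    S = set(basis)
    frontier = set(basis)
    while frontier:
        frontier = {step(x) for x in frontier if step(x) <= bound} - S
        S |= frontier
    return S

E = generate({0}, lambda x: x + 2, 20)    # Basis 0, Step x ↦ x + 2
T = generate({1}, lambda x: x + 3, 20)    # Basis 1, Step x ↦ x + 3
assert E == {2 * n for n in range(11)}    # {0, 2, 4, ..., 20}, matching (F.iv)
assert T == {3 * n + 1 for n in range(7)} # {1, 4, 7, ..., 19}, matching (F.iv)
```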

Often, one is not interested in all possible strings over a given alphabet but only in some subsets. Such subsets are called languages and, typically, are defined by induction.

Example 2.16 Define the set of strings N over Σ = {0, s}:
Basis :: 0 ∈ N
Induction :: If n ∈ N then sn ∈ N
This language is the basis of the formal definition of natural numbers. The constructors are 0 and the operation of appending the symbol ‘s’ to the left. (The ‘s’ signifies the “successor” function corresponding to n + 1.) Notice that we do not obtain the set {0, 1, 2, 3, ...} but {0, s0, ss0, sss0, ...}, which is a kind of unary representation of the natural numbers. Notice also that, for instance, the strings 00s, s0s0 ∉ N, i.e., N ≠ Σ*. □

Example 2.17


I.2. Induction

1. Let Σ = {a, b} and let us define the language L ⊆ Σ∗ consisting only of the strings starting with a number of a's followed by the equal number of b's, i.e., {a^n b^n : n ∈ N}.

Basis :: ε ∈ L
Induction :: if s ∈ L then asb ∈ L

Constructors of L are ε and the operation adding an a at the beginning and a b at the end of a string s ∈ L.

2. Here is a more complicated language over Σ = {a, b, c, (, ), ¬, →} with two rules of generation.

Basis :: a, b, c ∈ L
Induction :: if s ∈ L then ¬s ∈ L
             if s, r ∈ L then (s → r) ∈ L

By the closure property, we can see that, for instance, '(' ∉ L and (¬b) ∉ L. 2
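Membership in the language L = {a^n b^n} of point 1 can be decided by running its inductive definition backwards: peel an a from the front and a b from the end until the empty string remains. A Python sketch:

```python
def in_L(s):
    """s ∈ L iff s = ε (basis) or s = a·t·b with t ∈ L (induction)."""
    if s == "":
        return True
    return s.startswith("a") and s.endswith("b") and in_L(s[1:-1])

assert in_L("") and in_L("aabb") and in_L("aaabbb")
assert not in_L("abab") and not in_L("ba")
```

The closure clause is what licenses answering "no" when no rule applies.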

In the examples from section 1 we saw that a given set may be endowed with various well-founded orderings. Having succeeded in this, we can then use the powerful technique of proof by induction according to theorem 2.7. The usefulness of inductive definitions is related to the fact that such an ordering may be obtained for free – the resulting set obtains implicitly a well-founded ordering induced by the very definition as follows.3

Idea 2.18 [Induced wf Order] For an inductively defined set S, define a function f : S → N as follows:

Basis :: Let S0 = B and for all b ∈ S0 : f (b) =def 0.
Induction :: Given Si , let Si+1 be the union of Si and all the elements x ∈ S \ Si which can be obtained according to one of the rules from some elements y1 , . . . , yn of Si . For each such new x ∈ Si+1 \ Si , let f (x) =def i + 1.
Closure :: The actual ordering is then x ≺ y iff f (x) < f (y).

The function f is essentially counting the minimal number of steps – consecutive applications of the rules allowed by the induction step of Definition 2.13 – needed to obtain a given element of the inductive set.

Example 2.19 Refer to Example 2.14. Since the induction step there amounts to increasing the length of a string by 1, following the above idea, we would obtain the ordering on strings s ≺ p iff length(s) < length(p). 2
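The stage function f of Idea 2.18 can be computed directly by generating the set level by level; a Python sketch for one-premise rules (the hard stop at 10 000 stages is only a safeguard for the sketch):

```python
def rank(basis, step, target):
    """Stage at which `target` first appears when the set is generated
    from `basis` by `step` (the function f of Idea 2.18)."""
    stage, current = 0, set(basis)
    while target not in current:
        current |= {step(x) for x in current}
        stage += 1
        if stage > 10_000:               # safety bound for the sketch
            raise ValueError("target not generated")
    return stage

# for E = {0, 2, 4, ...}: f(x) = x / 2
assert rank({0}, lambda x: x + 2, 8) == 4
```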

2.1: “1-1” Definitions A common feature of the above examples of inductively defined sets is the impossibility of deriving an element in more than one way. For instance, according to example 2.15 the only way to derive the string abc is to start with c and then add b and a to the left in sequence. One apparently tiny modification to the example changes this state of affairs: Example 2.20 The finite non-empty strings over alphabet Σ can also be defined inductively as follows. Basis :: x ∈ Σ+ for all x ∈ Σ Induction :: if s ∈ Σ+ and p ∈ Σ+ then sp ∈ Σ+ 2

According to this example, abc can be derived by concatenating either a and bc, or ab and c. We often say that the former definitions are 1-1, while the latter is not. Given a 1-1 inductive definition of a set S, there is an easy way to define new functions on S – again by induction.

3 In fact, an inductive definition imposes at least two such orderings of interest, but here we consider just one.


Basic Set Theory

Idea 2.21 [Inductive function definition] Suppose S is defined inductively from basis B and a certain set of construction rules. To define a function f on elements of S do the following:

Basis :: Identify the value f (x) for each x in B.
Induction :: For each way an x ∈ S can be constructed from one or more y1 , . . . , yn ∈ S, show how to obtain f (x) from the values f (y1 ), . . . , f (yn ).
Closure :: If you managed to do this, then the closure property of S guarantees that f is defined for all elements of S.

The next few examples illustrate this method.

Example 2.22 We define the length function on finite strings by induction on the definition in example 2.14 as follows:

Basis :: length(ε) = 0
Induction :: length(xs) = length(s) + 1 2

Example 2.23 We define the concatenation of finite strings by induction on the definition from example 2.14:

Basis :: ε · t = t
Induction :: xs · t = x(s · t) 2

Example 2.24 In Example 2.14 strings were defined by a left append (prepend) operation which we wrote as juxtaposition xs. A corresponding right append operation can now be defined inductively:

Basis :: ε ` y = y
Induction :: xs ` y = x(s ` y)

and the operation of reversing a string:

Basis :: εR = ε
Induction :: (xs)R = sR ` x 2
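Examples 2.22–2.24 read directly as programs; a Python sketch using ordinary strings, where "" plays the role of ε and s[0], s[1:] decompose xs into x and s:

```python
def length(s):
    return 0 if s == "" else 1 + length(s[1:])        # Example 2.22

def concat(s, t):
    return t if s == "" else s[0] + concat(s[1:], t)  # Example 2.23

def reverse(s):
    # (xs)^R = s^R with x appended on the right (Example 2.24)
    return "" if s == "" else reverse(s[1:]) + s[0]

assert length("abc") == 3
assert concat("ab", "cd") == "abcd"
assert reverse("abc") == "cba"
```

Each function has exactly one clause per constructor of the 1-1 definition, which is why the values are uniquely determined.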

The right append operation ` does not quite fit the format of idea 2.21 since it takes two arguments – a symbol as well as a string. It is possible to give a more general version that covers such cases as well, but we shall not do so here. The definition below also apparently goes beyond the format of idea 2.21, but in order to make it fit we merely have to think of addition, for instance in m + n, as an application of the one-argument function add_n to the argument m.

Example 2.25 Using the definition of N from Example 2.16, we can define the plus operation for all n, m ∈ N:

Basis :: 0 + n = n
Induction :: s(m) + n = s(m + n)

It is not immediate that this is the usual plus – we cannot see, for example, that n + m = m + n. We shall see in an exercise that this is actually the case. We can use this definition to calculate the sum of two arbitrary natural numbers represented as elements of N. For instance, 2 + 3 would be processed as follows:

ss0 + sss0 ↦ s(s0 + sss0) ↦ ss(0 + sss0) ↦ ss(sss0) = sssss0

2
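Example 2.25 can be executed literally on the unary representation; a Python sketch where a numeral is a string of s's ending in 0:

```python
def plus(m, n):
    """Add unary numerals: 0 + n = n,  s(m) + n = s(m + n)."""
    if m == "0":
        return n                     # basis
    return "s" + plus(m[1:], n)      # induction step

assert plus("ss0", "sss0") == "sssss0"   # 2 + 3 = 5
```

The rewriting steps of the calculation above correspond exactly to the recursive calls.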

Note carefully that the method of inductive function definition 2.21 is guaranteed to work only when the set is given by a 1-1 definition. Imagine that we tried to define a version of the length function in example 2.22 by induction on the definition in example 2.20 as follows: len(x) = 1 for x ∈ Σ, while len(ps) = len(p) + 1. This would provide us with alternative (and hence mutually contradictory) values for len(abc), depending on which way we choose to derive abc. Note also that the two equations len(x) = 1 and len(ps) = len(p) + len(s) provide a working definition, but in this case it takes some reasoning to check that this is indeed the case.
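The ill-definedness can be observed concretely by evaluating both candidate definitions over two different derivations of abc (represented here as nested pairs, a hypothetical encoding not used in the text):

```python
# two derivations of "abc" by the non-1-1 rule of example 2.20
d1 = ("a", ("b", "c"))   # concatenating a and bc
d2 = (("a", "b"), "c")   # concatenating ab and c

def len_bad(d):          # len(ps) = len(p) + 1 -- depends on the derivation
    return 1 if isinstance(d, str) else len_bad(d[0]) + 1

def len_good(d):         # len(ps) = len(p) + len(s) -- derivation-independent
    return 1 if isinstance(d, str) else len_good(d[0]) + len_good(d[1])

assert len_bad(d1) == 2 and len_bad(d2) == 3     # two different "lengths"!
assert len_good(d1) == len_good(d2) == 3
```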



2.2: Inductive Definitions and Recursive Programming If you are not familiar with the basics of programming you may skip this subsection and go directly to subsection 2.3. Here we do not introduce any new concepts but merely illustrate the relation between the two areas from the title.

All basic structures known from computer science are defined inductively – any instance of a List, Stack, Tree, etc., is generated in finitely many steps by applying some basic constructor operations. These operations themselves may vary from one programming language to another, or from one application to another, but they always capture the inductive structure of these data types. We give here but two simple examples which illustrate the inductive nature of two basic data structures and show how this leads to the elegant technique of recursive programming.

1. Lists A List (to simplify matters, we assume that we store only integers as data elements) is a sequence of 0 or more integers. The idea of a 'sequence' or, more generally, of a linked structure is captured by pointers between objects storing data. Thus, one would define objects of the form given below – the list 3, 7, 2, 5 would then contain 4 List objects (plus an additional null object at the end):

List
  int x;
  List next;

[3 •]--next-->[7 •]--next-->[2 •]--next-->[5 •]--next--> null

The declaration of the List objects tells us that a List is:
1. either a null object (which is a default, always possible value for pointers)
2. or an integer (stored in the current List object) followed by another List object.
But this is exactly an inductive definition, namely, the one from example 2.14, the only difference being that of the language used.

1a. From inductive definition to recursion This is also a 1-1 definition and thus gives rise to natural recursive programming over lists. The idea of recursive programming is to traverse a structure in the order opposite to the way we imagine it built along its inductive definition. We start at some point of the structure and proceed to its subparts until we reach the basis case. For instance, the function computing the length of a list is programmed recursively on the left:

int length(List L)
  IF (L is null) return 0;
  ELSE return (1 + length(L.next));

int sum(List L)
  IF (L is null) return 0;
  ELSE return (L.x + sum(L.next));

It should be easy to see that the pseudo-code on the left is nothing more than the inductive definition of the function from example 2.22. Instead of the mathematical formulation used there, it uses the operational language of programming to specify: 1. the value of the function in the basis case (which also terminates the recursion) and then 2. the way to compute the value in the non-basis case from the value for some subcase which brings the recursion "closer to" the basis. The same schema is applied in the function on the right which computes the sum of all integers in the list. Notice that, abstractly, Lists can be viewed simply as finite strings. You may rewrite the definition of concatenation from Example 2.23 for Lists as represented here.

1b. Equality of Lists An inductive 1-1 definition of a set (here of the set of List objects given by their declaration) also gives rise to the obvious recursive function for comparing objects for equality. Two lists are equal iff i) they have the same structure (i.e., the same number of elements) and



ii) respective elements in both lists are equal.
The corresponding pseudo-code for a recursive function definition is as follows. The first two lines check point i) and ensure termination upon reaching the basis case. The third line checks point ii). If everything is ok so far, the recursion proceeds to check the rest of the lists:

boolean equal(List L, R)
  IF (L is null AND R is null) return TRUE;
  ELSE IF (L is null OR R is null) return FALSE;
  ELSE IF (L.x ≠ R.x) return FALSE;
  ELSE return equal(L.next, R.next);

2. Trees Another very common data structure is the binary tree BT. (Again, we simplify the presentation by assuming that we only store integers as data.) Unlike in a list, each node (except the null ones) has two successors (called "children") left and right:

BT
  int x;
  BT left;
  BT right;

(diagram of an example binary tree, with root 5 and children 2 and 7, omitted)

... the least x > 0 such that B(x, F (x, y)) is true. If no such x exists, G(y) is undefined. We consider the alphabet Σ = {#, 1, Y, N } and functions over the positive natural numbers (without 0) N, with unary representation as strings of 1's. Let our given functions be F : N × N → N and B : N × N → {Y, N }, and the corresponding Turing machines MF , resp. MB . More precisely, sF is the initial state of MF which, starting in a configuration of the form C1, halts iff z = F (x, y) is defined, in the final state eF , in a configuration of the form C2:

C1:   ···# 1^y # 1^x #···              (state sF)
          |
          MF
          ↓
C2:   ···# 1^y # 1^x # 1^z #···        (state eF)

(here 1^n denotes a block of n consecutive 1's)

If, for some pair x, y, F (x, y) is undefined, MF (y, x) may go forever. B, on the other hand, is total and MB always halts in its final state eB when started from a configuration of the form C2 in its initial state sB , yielding a configuration of the form C3:

C2:   ···# 1^y # 1^x # 1^z #···        (state sB)
          |
          MB
          ↓
C3:   ···# 1^y # 1^x # 1^z u #···      (state eB)

where u = Y iff B(x, z) = Y (true) and u = N iff B(x, z) = N (false). Using MF and MB , we design a TM MG which starts in its initial state sG in a configuration


II.1. Turing Machines

C0, and halts in its final state eG iff G(y) = x is defined, with the tape as shown in T:

C0:   ···# 1^y #···                    (state sG)
          |
          MG
          ↓
T:    ···# 1^y # 1^x #···              (state eG)

MG will add a single 1 (= x) past the leftmost # to the right of the input y, and run MF and MB on these two numbers. If MB halts with Y , we only have to clean up F (x, y). If MB halts with N , MG erases F (x, y), extends x with a single 1 and continues:

(state diagram of MG , with states 1–6 built around the sub-machines MF and MB , omitted)

In case of success, MB exits along the Y and states 5–6 erase the sequence of 1's representing F (x, y). MG stops in state 6 to the right of x. If MB got N , this N is erased and states 3–4 erase the current F (x, y). The first blank # encountered in state 3 is the first blank to the right of x. This # is replaced with 1 – increasing x to x + 1 – and MF is started in a configuration of the form C1. MG (y) will go forever if no x exists such that B(x, F (x, y)) = Y . However, it may also go forever if such x exists but F (x′ , y) is undefined for some 0 < x′ < x! Then the function G(y) computed by MG is undefined. In the theory of recursive functions, such a schema is called µ-recursion ("µ" for minimal) – here it is the function G : N → N

G(y) = the least x ∈ N such that B(x, F (x, y)) = Y and F (x′ , y) is defined for all x′ ≤ x

The fact that if G(y) = x (i.e., when it is defined) then also F (x′ , y) must be defined for all x′ ≤ x captures the idea of mechanical computability – MG simply checks all consecutive values of x′ until the correct one is found.

2.2: Alternative representation of TMs

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . [optional]

We give an alternative, equivalent representation of an arbitrary TM, defining it directly by a set of transitions between situations rather than by "more abstract" instructions. The definition 3.5 of a TM embodies the abstract character of an algorithm which operates on any possible actual input. The following definition 3.10 gives a "more concrete" representation of a computation on some given input in that it takes into account the "global state" of the computation expressed by the contents of the tape to the left and to the right of the reading head.

Definition 3.10 A situation of a TM is a quadruple ⟨l, q, c, r⟩ where q is the current state, c the symbol currently under the reading head, l the tape to the left and r the tape to the right of the current symbol. For instance

··· # a b [b] # ···     (head on the last b, state qi)

corresponds to the situation ⟨ab, qi , b, ε⟩. Notice that l represents only the part of the tape to the left up to the beginning of the infinite sequence of only blanks (resp. r to the right). A computation of a TM M is a sequence of transitions between situations

(F.vii)    S0 ↦M S1 ↦M S2 ↦M ...

where S0 is an initial situation and each S ↦M S′ is an execution of a single instruction. The reflexive transitive closure of the relation ↦M is written ↦M∗ .



In example 3.8 we saw a machine accepting (halting on) each sequence of an even number of 1's. Its computation starting on the input 11, expressed as a sequence of transitions between the subsequent situations, will be

··· # [1] 1 # ···   ↦M   ··· # 1 [1] # ···   ↦M   ··· # 1 1 [#] ···
      q0                       q1                        q0

(the currently scanned symbol is marked by [ ], with the state written beneath it)

In order to capture the manipulation of whole situations, we need some means of manipulating the strings (to the left and right of the reading head). Given a string s and a symbol x ∈ Σ, let xs denote the application of a function prepending the symbol x in front of s (cf. the append operations of example 2.24). Furthermore, we consider the functions hd and tl returning, resp., the first symbol and the rest of a non-empty string. (E.g., hd(abc) = a and tl(abc) = bc.) Since the infinite string of only blanks corresponds to empty input, we will identify such a string with the empty string, #ω = ε. Consequently, we let #ε = ε. The functions hd and tl must be adjusted accordingly:

Basis :: hd(ε) = # and tl(ε) = ε (with #ω = ε)
Ind :: hd(sx) = x and tl(sx) = s.

We imagine the reading head some place on the tape and two (infinite) strings starting to the left, resp., right of the head. (Thus, to ease readability, the prepending operation on the left string will be written sx rather than xs.) Each instruction of a TM can be equivalently expressed as a set of transitions between situations. That is, given a TM M according to definition 3.5, we can construct an equivalent representation of M as a set of transitions between situations. Each write-instruction ⟨q, a⟩ ↦ ⟨p, b⟩, for a, b ∈ Σ, corresponds to the transition:

w : ⟨l, q, a, r⟩ ⊢ ⟨l, p, b, r⟩

A move-right instruction ⟨q, x⟩ ↦ ⟨p, R⟩ corresponds to

R : ⟨l, q, x, r⟩ ⊢ ⟨lx, p, hd(r), tl(r)⟩

and, analogously, ⟨q, x⟩ ↦ ⟨p, L⟩ to

L : ⟨l, q, x, r⟩ ⊢ ⟨tl(l), p, hd(l), xr⟩

Notice that, for instance, for L, if l = ε and x = #, the equations we have imposed earlier will yield ⟨ε, q, #, r⟩ ⊢ ⟨ε, p, #, #r⟩. Thus a TM M can be represented as a quadruple ⟨K, Σ, q0 , ⊢M ⟩, where ⊢M is a relation (function, actually) on the set of situations, ⊢M ⊆ Sit × Sit. (Here, Sit are represented using the functions on strings as above.) For instance, the machine M from example 3.8 will now look as follows:

1. ⟨l, q0 , 1, r⟩ ⊢ ⟨l1, q1 , hd(r), tl(r)⟩
2. ⟨l, q1 , 1, r⟩ ⊢ ⟨l1, q0 , hd(r), tl(r)⟩
3. ⟨l, q1 , #, r⟩ ⊢ ⟨l, q1 , #, r⟩

A computation of a TM M according to this representation is a sequence of transitions

(F.viii)    S0 ⊢M S1 ⊢M S2 ⊢M ...

where each S ⊢M S′ corresponds to one of the specified transitions between situations. The reflexive transitive closure of this relation is denoted ⊢M∗ . It is easily shown (exercise 3.7) that the two representations are equivalent, i.e., a machine obtained by such a transformation will have exactly the same computations (on the same inputs) as the original machine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . [end optional]
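The situation representation is directly executable; a minimal Python sketch of the parity machine above, with hd and tl as just defined (the step bound is only there to cut off the non-halting loop of transition 3):

```python
def hd(s): return s[0] if s else "#"   # head of a tape string; hd(ε) = #
def tl(s): return s[1:] if s else ""   # tail of a tape string; tl(ε) = ε

def step(l, q, c, r):
    """One transition of the machine from example 3.8, or None if it halts."""
    if q == "q0" and c == "1":                 # transition 1: move right
        return (l + "1", "q1", hd(r), tl(r))
    if q == "q1" and c == "1":                 # transition 2: move right
        return (l + "1", "q0", hd(r), tl(r))
    if q == "q1" and c == "#":                 # transition 3: loop forever
        return (l, "q1", "#", r)
    return None                                # no transition applies: halt

def run(tape, bound=100):
    sit = ("", "q0", hd(tape), tl(tape))       # initial situation S0
    for _ in range(bound):
        nxt = step(*sit)
        if nxt is None:
            return sit                         # halted
        sit = nxt
    return None                                # did not halt within bound

assert run("11")[1] == "q0"    # even number of 1's: halts in q0
assert run("111") is None      # odd: stuck in the loop of transition 3
```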

3: Universal Turing Machine Informally, we might say that one Turing machine M ′ simulates another one M if M ′ is able to perform all the computations which can be performed by M or, more precisely, if any input w for M can be represented as an input w′ for M ′ and the result M ′ (w′ ) represents the result M (w).



This may happen in various ways, the most trivial one being the case when M ′ is strictly more powerful than M . If M is a multiplication machine (returning n ∗ m for any two natural numbers), while M ′ can do both multiplication and addition, then, augmenting the input w for M with the indication of multiplication, we can use M ′ to do the same thing as M would do. Another possibility might be some encoding of the instructions of M in such a way that M ′ , using this encoding as a part of its input, can act as if it was M . This is what happens in a computer, since a computer program is a description of an algorithm, while an algorithm is just a mechanical procedure for performing computations of some specific type – i.e., it is a Turing machine. A program in a high level language is a Turing machine M – compiling it into machine code amounts to constructing a machine M ′ which can simulate M . Execution of M (w) proceeds by representing the high level input w as an input w′ acceptable for M ′ , running M ′ (w′ ) and converting the result back to the high level representation. We won't define formally the notions of representation and simulation, relying instead on their intuitive understanding and the example of a Universal Turing machine we will present. A Universal Turing machine is a Turing machine which can simulate any other Turing machine. It is a conceptual prototype and paradigm of the programmable computers as we know them.

Idea 3.11 [A Universal TM] To build a UTM which can simulate an arbitrary TM M :
1. Choose a coding of Turing machines so that they can be represented on an input tape for UTM.
2. Represent the input of M on the input tape for UTM.
3. Choose a way of representing the state of the simulated machine M (the current state and position of the head) on the tape of UTM.
4. Design the set of instructions for the UTM.

To simplify the task, without losing generality, we will assume that the simulated machines work only on the default alphabet Σ = {∗, #}. At the same time, the UTM will use an extended alphabet with several symbols, namely Π, which is the union of the following sets:
• Σ – the alphabet of M
• {S, N, R, L} – additional symbols to represent instructions of M
• {X, Y, 0, 1} – symbols used to keep track of the current state and position
• {(, A, B} – auxiliary symbols for bookkeeping

We will code machine M together with its original input as follows:

( | instructions of M | current state | input and head position

1: A possible coding of TMs
1. Get the set of instructions from the description of a TM M = ⟨K, Σ, q1 , τ ⟩.
2. Each instruction t ∈ τ is a four-tuple t : ⟨qi , a⟩ ↦ ⟨qj , b⟩ where qi , qj ∈ K, a is # or ∗, and b ∈ Σ ∪ {L, R}. We assume that states are numbered from 1 up to n > 0. Represent t as

Ct : S ... S a b N ... N     (i S's and j N 's)

i.e., first i S-symbols representing the initial state qi , then the read symbol a, then the action – either the symbol to be written or R, L – and finally j N -symbols for the final state qj .
3. String the representations of all the instructions, with no extra spaces, in increasing order of state numbers. If for a state i there are two instructions, t#i for input symbol # and t∗i for input symbol ∗, put t∗i before t#i .
4. Put the "end" symbol '(' to the left:

( Ct1 Ct2 · · · Ctz ···


Turing Machines

2

Example 3.12 Let M = ⟨{q1 , q2 , q3 }, {∗, #}, q1 , τ ⟩, where τ is given in the left column of the table below (the machine's state diagram is omitted):

⟨q1 , ∗⟩ ↦ ⟨q1 , R⟩     S∗RN
⟨q1 , #⟩ ↦ ⟨q2 , ∗⟩     S#∗NN
⟨q2 , ∗⟩ ↦ ⟨q2 , L⟩     SS∗LNN
⟨q2 , #⟩ ↦ ⟨q3 , R⟩     SS#RNNN

The coding of the instructions is given in the right column of the table. The whole machine will be coded as:

( S ∗ R N S # ∗ N N S S ∗ L N N S S # R N N N ···

It is not necessary to perform the above conversion but – can you tell what M does? 2
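The coding scheme is easy to mechanize; a Python sketch (instruction tuples (i, a, b, j) stand for ⟨qi , a⟩ ↦ ⟨qj , b⟩):

```python
def code_instruction(i, a, b, j):
    """i S's, the read symbol, the action, then j N's."""
    return "S" * i + a + b + "N" * j

def code_machine(instructions):
    return "(" + "".join(code_instruction(*t) for t in instructions)

tau = [(1, "*", "R", 1),   # <q1,*> -> <q1,R>
       (1, "#", "*", 2),   # <q1,#> -> <q2,*>
       (2, "*", "L", 2),   # <q2,*> -> <q2,L>
       (2, "#", "R", 3)]   # <q2,#> -> <q3,R>

assert code_machine(tau) == "(S*RNS#*NNSS*LNNSS#RNNN"
```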

2: Input representation We included the alphabet of the original machines Σ = {∗, #} in the alphabet of the UTM. There is no need to code this part of the simulated machines. 2

3: Current state After the representation of the instruction set of M , we will reserve part of the tape for the representation of the current state. There are n states of M , so we reserve n + 1 fields for a unary representation of the number of the current state. The i-th state is represented by i X's followed by (n + 1 − i) Y 's: if M is in the state i, this part of the tape will be:

instructions | X ··· X Y ··· Y | input      (X's in positions 1, . . . , i; Y 's up to position n + 1)

We use n + 1 positions so that there is always at least one Y to the right of the sequence of X’s representing the current state. To “remember” the current position of the head, we will use the two extra symbols 0 and 1 corresponding, respectively, to # and ∗. The current symbol under the head will be always changed to 0, resp., 1. When the head is moved away, these symbols will be restored back to the original ones #, resp., ∗. For instance, if M ’s head on the input tape ∗ ∗ ## ∗ #∗ is in the 4-th place, the input part of the UTM tape will be ∗ ∗ #0 ∗ #∗. 2 4: Instructions for UTM We will let UTM start execution with its head at the rightmost X in the bookkeeping section of the tape. After completing the simulation of one step of M ’s computation, the head will again be placed at the rightmost X. The simulation of each step of computation of M will involve several things: 1. Locate the instruction to be used next. 2. Execute this instruction, i.e., either print a new symbol or move the head on M ’s tape. 3. Write down the new state in the bookkeeping section. 4. Get ready for the next step: clear up any mess and move the head to the rightmost X. We indicate the working of UTM at these stages: 1: Find instruction In a loop we erase one X at a time, replacing it by Y , and pass through all the instructions converting one S to A in each instruction. If there are too few S’s in an instruction, we convert all the N ’s to B’s in that instruction. When all the X’s have been replaced by Y ’s, the instructions corresponding to the actual state have only A’s instead of S. Now we eliminate the instructions which still contain S by going through all the instructions: if there is some S not converted to A in an instruction, we replace all N ’s by B’s in that instruction. Now, there remain at most 2 N -lists associated with the instruction(s) for the current state. We go and read the current symbol on M ’s tape and replace N ’s by B’s at the instruction (if any)


which does not correspond to what we read. The instruction to be executed is now the one with N 's – the rest have only B's.

2: Execute instruction UTM now starts looking for a sequence of N 's. If none is found, then M – and UTM – stops. Otherwise, we check what to do by looking at the symbol just to the left of the leftmost N . If it is R or L, we go to M 's tape and move its head, restoring the current symbol to its Σ form and replacing the new one by 1, resp. 0. If the instruction tells us to write a new symbol, we just write the appropriate thing.

3: Write new state We find again the sequence of N 's and write the same number of X's in the bookkeeping section, indicating the next state.

4: Clean up Finally, convert all A's and B's back to S's and N 's, and move the head to the rightmost X. 2
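The bookkeeping conventions (the X/Y state counter and the 0/1 head marker) can be sketched as plain string manipulations; a hypothetical Python rendering, not the UTM itself:

```python
def state_field(i, n):
    """State i of an n-state machine: i X's padded with Y's to n+1 fields."""
    return "X" * i + "Y" * (n + 1 - i)

def mark_head(tape, pos):
    """Replace the scanned symbol by its marked form: '#'->'0', '*'->'1'."""
    marked = {"#": "0", "*": "1"}[tape[pos]]
    return tape[:pos] + marked + tape[pos + 1:]

assert state_field(2, 3) == "XXYY"
assert mark_head("**##*#*", 3) == "**#0*#*"    # head in the 4-th place
```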

4: Decidability and the Halting Problem The Turing machine is a possible formal expression of the idea of mechanical computability – we are willing to say that a function is computable iff there is a Turing machine which computes its values for all possible arguments. (Such functions are also called recursive.) Notice that if a function is not defined on some arguments (for instance, division by 0) this would require us to assign some special, perhaps new, values for such arguments. For partial functions one uses a slightly different notion. A function F is
computable iff there is a TM which halts with F (x) for all inputs x
semi-computable iff there is a TM which halts with F (x) whenever F is defined on x but does not halt when F is undefined on x
A problem P of YES-NO type (like "is x a member of set S?") gives rise to a special case of function FP (a predicate) which returns one of only two values. We get here a third notion. A problem P is
decidable iff FP is computable – the machine computing FP always halts returning the correct answer YES or NO
semi-decidable iff FP is semi-computable – the machine computing FP halts with the correct answer YES, but may not halt when the answer is NO
co-semi-decidable iff not-FP is semi-computable – the machine computing FP halts with the correct answer NO, but may not halt when the answer is YES
Thus a problem is decidable iff it is both semi- and co-semi-decidable. Set membership is a special case of a YES-NO problem but one uses a different terminology: a set S is
recursive iff the membership problem x ∈ S is decidable
recursively enumerable iff the membership problem x ∈ S is semi-decidable
co-recursively enumerable iff the membership problem x ∈ S is co-semi-decidable
Again, a set is recursive iff it is both recursively and co-recursively enumerable. One of the most fundamental results about Turing Machines concerns the undecidability of the Halting Problem.
Following our strategy for encoding TMs and their inputs for simulation by a UTM, we assume that the encoding of the instruction set of a machine M is E(M ), while the encoding of input w for M is just w itself.

Problem 3.13 [The Halting problem] Is there a Turing machine MU such that for any machine M and any input w, MU (E(M ), w) always halts and

MU (E(M ), w) =  Y (es)  if M (w) halts
                N (o)   if M (w) does not halt


Notice that the problem is trivially semi-decidable: given an M and w, simply run M (w) and see what happens. If the computation halts, we get the correct YES answer to our problem. If it does not halt, then we may wait forever. Unfortunately, the following theorem ensures that, in general, there is not much else to do than wait and see what happens.

Theorem 3.14 [Undecidability of the Halting Problem] There is no Turing machine which decides the halting problem.

Proof Assume, on the contrary, that there is such a machine MU .
1. We can easily design a machine M1 that is undefined (does not halt) on input Y and defined everywhere else, e.g., a machine with one state q0 and instruction ⟨q0 , Y ⟩ ↦ ⟨q0 , Y ⟩.
2. Now, construct a machine M1′ which on the input (E(M ), w) gives M1 (MU (E(M ), w)). It has the property that M1′ (E(M ), w) halts iff M (w) does not halt. In particular: M1′ (E(M ), E(M )) halts iff M (E(M )) does not halt.
3. Let M ∗ be a machine which for an input w first computes (w, w) and then M1′ (w, w). In particular, M ∗ (E(M ∗ )) = M1′ (E(M ∗ ), E(M ∗ )). This one has the property that:

M ∗ (E(M ∗ )) halts iff M1′ (E(M ∗ ), E(M ∗ )) halts iff M ∗ (E(M ∗ )) does not halt

This is clearly a contradiction, from which the theorem follows. QED (3.14)

Thus the set {⟨M, w⟩ : M halts on input w} is semi-recursive but not recursive. In terms of programming, the undecidability of the Halting Problem means that it is impossible to write a program which could 1) take as input an arbitrary program M and its possible input w and 2) determine whether M run on w will terminate or not. The theorem gives rise to a series of corollaries identifying other undecidable problems. The usual strategy for such proofs is to show that if a given problem were decidable then we could use it to decide the (halting) problem already known to be undecidable.

Corollary 3.15 There is no Turing machine
1.
MD which, for any machine M , always halts on MD (E(M )) with 0 iff M is total (always halts) and with 1 iff M is undefined for some input;
2. ME which, for given two machines M1 , M2 , always halts with 1 iff the two halt on the same inputs and with 0 otherwise.

Proof
1. Assume that we have an MD . Given an M and some input w, we may easily construct a machine Mw which, for any input x, computes M (w). In particular, Mw is total iff M (w) halts. Then MD (E(Mw )) = 0 iff M (w) halts, so MD could be used to decide the halting problem. Hence there is no such MD .
2. Assume that we have an ME . Take as M1 a machine which does nothing but halts immediately on any input. Then we can use ME and M1 to construct an MD , which does not exist by the previous point. QED (3.15)
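The diagonal construction of Theorem 3.14 can be mimicked against any concrete candidate decider; a Python sketch, where halts_candidate is a deliberately naive stand-in (by the theorem, no correct one can exist):

```python
def halts_candidate(f):
    # a purported total halting decider; any fixed answer works for the demo
    return False

def diagonal():
    # M*: ask the decider about ourselves and do the opposite
    if halts_candidate(diagonal):
        while True:          # predicted to halt, so loop forever
            pass
    return "halted"          # predicted to loop, so halt immediately

# the candidate predicts that diagonal() does not halt -- yet it does:
assert halts_candidate(diagonal) is False
assert diagonal() == "halted"
```

Whatever answer halts_candidate gives on diagonal, diagonal does the opposite, which is exactly the contradiction in step 3 of the proof.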

Exercises (week 3)

exercise 3.1+ Suppose that we want to encode the alphabet consisting of 26 (Latin) letters and 10 digits using strings – of fixed length – of symbols from the alphabet ∆ = {−, •}. What is the minimal length of ∆-strings allowing us to do that? What is the maximal number of distinct symbols which can be represented using the ∆-strings of this length?

The Morse code. The Morse code is an example of such an encoding, although it actually uses an additional symbol – corresponding to # – to separate the representations, and it uses strings of different lengths. For instance, Morse represents A as •− and B as − • ••, while 0 as − − − − −. (The more frequently a letter is used, the shorter its representation in Morse.) Thus the sequence • − # − • • • is distinct from • − − • ••. 2


exercise 3.2+ The questions at the end of Examples 3.8 and 3.9 (run the respective machines on the suggested inputs). exercise 3.3+ Let Σ = {1, #} – a sequence of 1’s on the input represents a natural number. Design a TM which starting at the leftmost 1 of the input x performs the operation x + 1 by appending a 1 at the end of x, and returns the head to the leftmost 1. exercise 3.4 Consider the alphabet Σ = {a, b} and the language from example 2.17.1, i.e., L = {an bn : n ∈ N}. 1. Build a TM M1 which given a string s over Σ (possibly with additional blank symbol #) halts iff s ∈ L and goes forever iff s 6∈ L. If you find it necessary, you may allow M1 to modify the input string. 2. Modify M1 to an M2 which does a similar thing but always halts in the same state indicating the answer. For instance, the answer ‘YES’ may be indicated by M2 just halting, and ‘NO’ by M2 writing some specific string (e.g., ‘NO’) and halting. exercise 3.5 The correct ()-expressions are defined inductively (relatively to a given set S of other expressions): Basis :: Each s ∈ S and empty word are correct ()-expressions Induction :: If s and t are correct ()-expressions then so are: (s) and st. 1. Use induction on the length of ()-expressions to show that: s is correct iff 1) the numbers of left ‘(’ and right ‘)’ parantheses in s are equal, say n, and 2) for each 1 ≤ i ≤ n the i-th ‘(’ comes before the i-th ‘)’. (the leftmost ‘(’ comes before the leftmost ‘)’, the second leftmost ‘(’ before the second leftmost ‘)’, etc.) 2. The following machine will read a ()-expression starting on its leftmost symbol – it will halt in state 3 iff the input was incorrect and in state 7 iff the input was correct. The alphabet Σ for the machine consists of two disjoint sets Σ1 ∩ Σ2 = ∅, where Σ1 is some set of symbols (for writing S-expressions) and Σ2 = {X, Y, (, ), #}. 
In the diagram we use the abbreviation '?' to indicate 'any other symbol from Σ not mentioned explicitly among the transitions from this state'. For instance, when in state 2 and reading # the machine goes to state 3 and writes #; reading ) it writes Y and goes to state 4 – while reading any other symbol ?, it moves the head to the right, remaining in state 2.

[State diagram of the machine: states 0–7, with 3 the halting state for incorrect input and 7 the halting state for correct input. Among the visible transitions: (,X and X,( mark and restore left parentheses, ),Y and Y,) do the same for right parentheses, and #-transitions switch between the scanning phases.]

Run the machine on a couple of your own tapes with ()-expressions (correct and incorrect!). Can you, using the claim from 1., justify that this machine does the right thing, i.e., decides the correctness of ()-expressions?

exercise 3.6 Let Σ = {a, b, c} and ∆ = {0, 1} (Example 3.3). Specify an encoding of Σ in ∆∗ and build two Turing machines:
1. Mc which given a string over Σ converts it to a string over ∆
2. Md which given a string over ∆ converts it to a string over Σ
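The correctness criterion from exercise 3.5.1 is also easy to check with ordinary code, which gives a reference against which to test the machine's verdicts. A sketch (condition 2) is used in its equivalent prefix form: no prefix may contain more ')' than '('):

```python
def correct_parens(s: str) -> bool:
    """Check conditions 1)+2) of exercise 3.5.1: equally many '(' and ')',
    and the i-th '(' precedes the i-th ')' for every i -- equivalently,
    no prefix of s contains more ')' than '('."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:      # some ')' precedes its matching '('
                return False
        # any other symbol is an S-symbol and is ignored
    return depth == 0          # equal numbers of '(' and ')'

print(correct_parens("(ab)(c)"))   # → True
print(correct_parens(")("))        # → False
```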


The two should act so that their composition gives the identity, i.e., for all s ∈ Σ∗: Md(Mc(s)) = s and, for all d ∈ ∆∗: Mc(Md(d)) = d. Choose the initial and final positions of the head for both machines so that executing one after the other will actually reproduce the initial string. Run each of the machines on some example tapes. Then run the two machines in sequence to check whether the final tape is the same as the initial one.

exercise 3.7 Use induction on the length of computations to show that applying the schema from subsection 2.2 for transforming an instruction representation of an arbitrary TM M over the alphabet Σ = {#, 1} yields the same machine M. I.e., for any input (initial situation) S0 the two computations given by (F.vii) of definition 3.10 and (F.viii) from subsection 2.2 are identical: S0 ↦M S1 ↦M S2 ↦M ... = S0 ⊢M S1 ⊢M S2 ⊢M ...
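Returning to exercise 3.3, one possible successor machine, together with a tiny simulator to run it, can be sketched as follows. The state names, the transition-table encoding and the machine itself are mine, not a solution given in the text:

```python
def run_tm(tape, pos, state, delta, halt_states):
    """Run a TM given as delta[(state, sym)] = (write, move, new_state),
    with move in {'L', 'R'}; the tape is a dict from positions to symbols,
    every unwritten cell holding the blank '#'."""
    while state not in halt_states:
        sym = tape.get(pos, '#')
        write, move, state = delta[(state, sym)]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    return tape, pos

# Successor machine for exercise 3.3: scan right to the first '#',
# replace it by '1', then return to the leftmost '1'.
succ = {
    ('r', '1'): ('1', 'R', 'r'),   # skip right over the 1's
    ('r', '#'): ('1', 'L', 'l'),   # append a 1, turn around
    ('l', '1'): ('1', 'L', 'l'),   # go back left over the 1's
    ('l', '#'): ('#', 'R', 'h'),   # one step too far: step right and halt
}
tape = {i: '1' for i in range(3)}            # input x = 3, head at position 0
tape, pos = run_tm(tape, 0, 'r', succ, {'h'})
print(sum(1 for v in tape.values() if v == '1'), pos)   # → 4 0
```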

The following (optional) exercises concern the construction of a UTM.

exercise 3.8 Following the strategy from 1: A possible coding of TMs, and Example 3.12, code the machine which you designed in exercise 3.3.

exercise 3.9 Complete the construction of the UTM.
1. Design four TMs to be used in a UTM as described in the four stages of simulation in 4: Instructions for UTM.
2. Indicate for each (sub)machine the assumptions about its initial and final situation.
3. Put the four pieces together and run your UTM on the coding from the previous exercise with some actual inputs.

III.1. Syntax and Proof Systems


Chapter 4
Syntax and Proof Systems

• Axiomatic Systems in general
• Syntax of SL
• Proof Systems
  – Hilbert
  – ND
• Provable equivalence, Syntactic consistency and compactness
• Gentzen proof system
  – Decidability of axiomatic systems for SL

1: Axiomatic Systems

a Background Story ♦ ♦

One of the fundamental goals of all scientific inquiry is to achieve precision and clarity of a body of knowledge. This "precision and clarity" means, among other things:
• all assumptions of a given theory are stated explicitly;
• the language of the theory is designed carefully by choosing some basic, primitive notions and defining others in terms of these ones;
• the theory contains some basic principles – all other claims of the theory follow from its basic principles by applications of definitions and some explicit laws.

Axiomatization in a formal system is the ultimate expression of these postulates. Axioms play the role of basic principles – explicitly stated fundamental assumptions, which may be disputable but, once assumed, imply the other claims, called theorems. Theorems follow from the axioms not by some unclear arguments but by formal deductions according to well-defined rules.

The most famous example of an axiomatisation (and the one which, in more than one way, gave origin to the modern axiomatic systems) was Euclidean geometry. Euclid systematised geometry by showing how many geometrical statements could be logically derived from a small set of axioms and principles. The axioms he postulated were supposed to be intuitively obvious:
A1. Given two points, there is an interval that joins them.
A2. An interval can be prolonged indefinitely.
A3. A circle can be constructed when its center, and a point on it, are given.
A4. All right angles are equal.
There was also the famous fifth axiom – we will return to it shortly. Another part of the system were "common notions", which may perhaps be more adequately called inference rules, about equality:
CN1. Things equal to the same thing are equal.
CN2. If equals are added to equals, the wholes are equal.
CN3. If equals are subtracted from equals, the remainders are equal.
CN4. Things that coincide with one another are equal.


Statement Logic

CN5. The whole is greater than a part.

Presenting a theory, in this case geometry, as an axiomatic system has tremendous advantages. First, it is economical – instead of long lists of facts and claims, we can store only axioms and deduction rules, since the rest is derivable from them. In a sense, axioms and rules "code" the knowledge of the whole field. More importantly, it systematises knowledge by displaying the fundamental assumptions and basic facts which form a logical basis of the field. In a sense, Euclid uncovered "the essence of geometry" by identifying axioms and rules which are sufficient and necessary for deriving all geometrical theorems. Finally, having such a compact presentation of a complicated field makes it possible to relate not only to particular theorems but also to the whole field as such. This possibility is reflected in us speaking about Euclidean geometry vs. non-Euclidean ones. The differences between them concern precisely changes of some basic principles – inclusion or removal of the fifth postulate.

As an example of a proof in Euclid's system, we show how, using the above axioms and rules, he deduced the following proposition ("Elements", Book 1, Proposition 4):

Proposition 4.1 If two triangles have two sides equal to two sides respectively, and have the angles contained by the equal straight lines equal, then they also have the base equal to the base, the triangle equals the triangle, and the remaining angles equal the remaining angles respectively, namely those opposite the equal sides.

Proof Let ABC and DEF be two triangles having the two sides AB and AC equal to the two sides DE and DF respectively, namely AB equal to DE and AC equal to DF, and the angle BAC equal to the angle EDF.

[Figure: the two triangles ABC and DEF.]

I say that the base BC also equals the base EF, the triangle ABC equals the triangle DEF, and the remaining angles equal the remaining angles respectively, namely those opposite the equal sides, that is, the angle ABC equals the angle DEF, and the angle ACB equals the angle DFE.

If the triangle ABC is superposed on the triangle DEF, and if the point A is placed on the point D and the straight line AB on DE, then the point B also coincides with E, because AB equals DE. Again, AB coinciding with DE, the straight line AC also coincides with DF, because the angle BAC equals the angle EDF. Hence the point C also coincides with the point F, because AC again equals DF. But B also coincides with E, hence the base BC coincides with the base EF and – by CN4 – equals it. Thus the whole triangle ABC coincides with the whole triangle DEF and – by CN4 – equals it. QED (4.1)

The proof is allowed to use only the given assumptions, the axioms and the deduction rules. Yet the Euclidean proofs are not exactly what we mean by a formal proof in an axiomatic system. Why? Because Euclid presupposed a particular model, namely, the abstract set of points, lines and figures in an infinite, homogeneous space. This presupposition need not be wrong (although, according to modern physics, it is), but it has an important bearing on the notion of proof. For instance, it is intuitively obvious what Euclid means by "superposing one triangle on another". Yet this operation hides some further assumptions, for instance, that length does not change during such a process. This implicit assumption comes forth most clearly when considering the language of Euclid's geometry. Here are just a few definitions from "Elements":


D1. A point is that which has no part.
D2. A line is breadthless length.
D3. The ends of a line are points.
D4. A straight line is a line which lies evenly with the points on itself.
D23. Parallel straight lines are straight lines which, being in the same plane and being produced indefinitely in both directions, do not meet one another in either direction.

These are certainly smart formulations but one can wonder if, for instance, D1 really defines anything or, perhaps, merely states a property of something intended by the name "point". Or else, does D2 define anything if one does not presuppose some intuition of what length is? To make a genuinely formal system, one would have to identify some basic notions as truly primitive – that is, with no intended interpretation. For these notions one may postulate some properties. For instance, one might say that we have the primitive notions of P, L and IL (for point, line and indefinitely prolonged line). P has no parts; L has two ends, both being P's; any two P's determine an L (whose ends they are – this is reminiscent of A1); any L determines uniquely an IL (cf. A2), and so on. Then, one may identify derived notions which are defined in terms of the primitive ones. Thus, for instance, the notion of parallel lines can be defined from the primitives as it was done in D23.

The difference may seem negligible but, in fact, it is of the utmost importance. By insisting on the uninterpreted character of the primitive notions, it opens an entirely new perspective. On the one hand, we have our primitive, uninterpreted notions. These can be manipulated according to the axioms and rules we have postulated. On the other hand, there are various possibilities of interpreting these primitive notions. All such interpretations will have to satisfy the axioms and conform to the rules, but otherwise they may be vastly different. This was the insight which led, first, to non-Euclidean geometry and, then, to the formal systems.
We will now illustrate this first stage of development. The famous fifth axiom, the "Parallel Postulate", has a somewhat more involved formulation than the former four:

A5. If a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than the two right angles.

A stronger and clearer formulation of this axiom is as follows:

A5. Given a line L and a point p not on line L, exactly one line L′ can be drawn through p parallel to L (i.e., not intersecting L no matter how far extended).

This axiom seems to be much less intuitive than the other ones, and mathematicians spent centuries trying to derive it from them. Failing to do that, they started to ask the question: "But does this postulate have to be true? What if it isn't?" Well, it may seem that it is true – but how can we check? It may be hard to prolong any line indefinitely. Thus we encounter the other aspect of formal systems, which we will study in the following chapters, namely, what is the meaning or semantics of such a system. Designing an axiomatic system, one has to specify precisely what its primitive terms are and how these terms may interact in the derivation of theorems. On the other hand, one specifies what these terms are supposed to denote. In fact, terms of a formal system may denote anything which conforms to the rules specified for their interaction. Euclidean geometry was designed with a particular model in mind – the abstract set of points, lines and figures that can be constructed with compass and straightedge in an infinite space. But now, allowing for the primitive character of the basic notions, we can consider other interpretations.
We can consider as our space a finite circle C, interpret a P as any point within C, an L as any closed interval within C and an IL as an open-ended chord of the circle, i.e., a straight line within the circle which approaches indefinitely closely, but never touches the circumference. (Thus one can “prolong a line indefinitely” without ever meeting the circumference.) Such an interpretation does not satisfy the fifth postulate.


[Figure: inside the circle C, a chord L (an IL) and a point p not on L; two points x and y on L determine the lines xp and yp, both prolonged indefinitely within C without meeting L.]

We start with a line L and a point p not on L. We can then choose two other points x and y and, by A1, obtain two lines xp and yp which can be prolonged indefinitely according to A2. As we see, neither of these indefinitely prolonged lines intersects L. Thus both are parallel to L according to the very same, old definition D23. Failing to satisfy the fifth postulate, this interpretation is not a model of Euclidean geometry. But it is a model of the first non-Euclidean geometry – the Bolyai-Lobachevsky geometry, which keeps all the definitions, postulates and rules except the fifth postulate. Later, many other non-Euclidean geometries were developed – perhaps the most famous one, by Hermann Minkowski, as the four-dimensional space-time universe of relativity theory.

And now we can observe another advantage of using axiomatic systems. Since non-Euclidean geometry preserves all Euclid's postulates except the fifth one, all the theorems and results which were derived without the use of the fifth postulate remain valid. For instance, proposition 4.1 needs no new proof in the new geometries.

It should also be noted that axiomatic systems deserve a separate study. Such a study may reveal consequences (theorems) of various sets of postulates. Studying then some particular phenomena, one will first ask which postulates are satisfied by them. An answer to this question will then immediately yield all the theorems which have been proven in the corresponding system.

What is of fundamental importance, and should be constantly kept in mind, is that axiomatic systems, their primitive terms and proofs, are purely syntactic, that is, they do not presuppose any particular interpretation. Some fundamental axiomatic systems will be studied in this chapter. Of course, the eventual usefulness of such a system will depend on whether we can find interesting interpretations for its terms and rules, but this is another story.
In the following chapters we will look at possible interpretations of the axiomatic systems introduced here. ♦

♦

Recall that an inductive definition of a set consists of a Basis, an Induction part, and an implicit Closure condition. When the set defined is a language, i.e., a set of strings, we often talk about an axiomatic system. In this case, the elements of the basis are called axioms, while the induction part is given by a set of proof rules. The set defined is called the set of theorems. A special symbol ⊢ is used to denote the set of theorems. Thus A ∈ ⊢ iff A is a theorem. The statement A ∈ ⊢ is usually written ⊢ A. Usually ⊢ is identified as a subset of some other language L ⊆ Σ∗, thus ⊢ ⊆ L ⊆ Σ∗.

Definition 4.2 Given an L ⊆ Σ∗, an axiomatic system ⊢ takes the following form.
Axioms :: A set Ax ⊆ ⊢ ⊆ L, and
Proof Rules :: of the form: "if A1 ∈ ⊢, ..., An ∈ ⊢ then C ∈ ⊢", written R : (⊢A1 ; . . . ; ⊢An) / ⊢C
(Ai are premisses and C the conclusion of the rule R. A rule is just an element R ∈ Lⁿ × L.)
The rules are always designed so that C is in L if A1, ..., An are; thus ⊢ is guaranteed to be a subset of L.

Definition 4.3 A proof in an axiomatic system is a finite sequence A1, ..., An of strings from L, such that for each Ai
• either Ai ∈ Ax, or else
• there are Ai1, ..., Aik in the sequence, with all i1, ..., ik < i, and an application of a proof rule R : (⊢Ai1 ; . . . ; ⊢Aik) / ⊢Ai. (I.e., such that ⟨Ai1, ..., Aik, Ai⟩ = R.)


A proof of A is a proof in which A is the final string.

Remark. Clearly, A is a theorem of the system iff there is a proof of A in the system. Notice that for a given language L there may be several axiomatic systems which all define the same subset of L, albeit by means of very different rules. There are also variations, which we will consider, where the predicate ⊢ is defined on various sets built over L, for instance ℘(L) × L. □
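Definitions 4.2 and 4.3 can be read as a checking procedure: a sequence is a proof iff every line is an axiom or the conclusion of a rule application all of whose premisses occur earlier. A minimal sketch (the representation and the toy system are my own, not the book's):

```python
def is_proof(seq, axioms, rules):
    """Check Definition 4.3: every line is an axiom, or follows from
    earlier lines by an application of some rule. `axioms` is a set of
    strings; `rules` is a set of tuples (premise_1, ..., premise_n,
    conclusion) -- each tuple one element of L^n x L, as in Definition 4.2."""
    for i, a in enumerate(seq):
        if a in axioms:
            continue
        earlier = set(seq[:i])
        if any(r[-1] == a and all(p in earlier for p in r[:-1]) for r in rules):
            continue
        return False
    return True

# Toy system over strings of 'x': one axiom 'x', and two applications
# of a "doubling" rule schema (from w infer ww).
axioms = {'x'}
rules = {('x', 'xx'), ('xx', 'xxxx')}
print(is_proof(['x', 'xx', 'xxxx'], axioms, rules))   # → True
print(is_proof(['xx'], axioms, rules))                # → False (no earlier premise)
```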

2: Syntax of SL

The basic logical system, originating with Boole's algebra, is Propositional Logic, also called Statement Logic (SL). The names reflect the fact that the expressions of the language are "intended as" propositions. This interpretation will be part of the semantics of SL, to be discussed in the following chapters. Here we introduce the syntax and the associated axiomatic proof system of SL.

Definition 4.4 The language of well-formed formulae of SL is defined as follows:
1. An alphabet for an SL language consists of a set of propositional variables Σ = {a, b, c, ...} – together with the (formula building) connectives ¬ and →, and the auxiliary symbols (, ).
2. The well-formed formulae, WFF^Σ_SL, are defined inductively:
Basis :: Σ ⊆ WFF^Σ_SL;
Ind :: 1) if A ∈ WFF^Σ_SL then ¬A ∈ WFF^Σ_SL;
2) if A, B ∈ WFF^Σ_SL then (A → B) ∈ WFF^Σ_SL.
3. The propositional variables are called atomic formulae; the formulae of the form A or ¬A, where A is atomic, are called literals.

Remark 4.5 [Some conventions]
1) Compare this definition to exercise 2.2.
2) The outermost pair of parentheses is often suppressed; hence A → (B → C) stands for the formula (A → (B → C)), while (A → B) → C stands for the formula ((A → B) → C).
3) Note that the well-formed formulae are strings over the symbols in Σ ∪ {), (, →, ¬}, i.e., WFF^Σ_SL ⊆ (Σ ∪ {), (, →, ¬})∗. We use lower case letters for the propositional variables of a particular alphabet Σ, while upper case letters stand for arbitrary formulae. The sets WFF^Σ_SL over Σ = {a, b} and over Σ1 = {c, d} are disjoint (though in one-to-one correspondence). Thus, the definition yields a different set of formulae for different Σ's. Writing WFFSL we mean well-formed SL formulae over an arbitrary alphabet, and most of our discussion is concerned with this general case, irrespective of a particular alphabet Σ.
4) It is always implicitly assumed that Σ ≠ ∅.
5) For reasons which we will explain later, we may occasionally use the abbreviations ⊥ for ¬(B → B) and ⊤ for B → B, for arbitrary B. □

In the following we will always – unless explicitly stated otherwise – assume that the formulae involved are well-formed.
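The inductive clauses of Definition 4.4 translate directly into a recognizer for well-formed formulae. A sketch, assuming single-character propositional variables and fully parenthesized implications, exactly as the definition produces them:

```python
def is_wff(s: str, sigma: set) -> bool:
    """Decide membership in WFF^Σ_SL, following Definition 4.4:
    a variable (Basis), ¬A for a wff A, or (A → B) for wffs A and B."""
    if s in sigma:                            # Basis
        return True
    if s.startswith('¬'):                     # Ind 1)
        return is_wff(s[1:], sigma)
    if s.startswith('(') and s.endswith(')'):  # Ind 2): find the main '→'
        depth = 0
        for i, ch in enumerate(s):
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
            elif ch == '→' and depth == 1:    # arrow at the outermost level
                return is_wff(s[1:i], sigma) and is_wff(s[i + 1:-1], sigma)
    return False

sigma = {'a', 'b', 'c'}
print(is_wff('(a→¬(b→c))', sigma))   # → True
print(is_wff('a→b', sigma))          # → False: outer parentheses are required
```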

3: The axiomatic system of Hilbert

Hilbert's system H for SL is defined with respect to a unary relation (predicate) ⊢H ⊆ WFFSL, which we write as ⊢H B rather than as B ∈ ⊢H. It reads as "B is provable in H".

Definition 4.6 The predicate ⊢H of Hilbert's system for SL is defined inductively by:
Axiom Schemata::
A1: ⊢H A → (B → A);
A2: ⊢H (A → (B → C)) → ((A → B) → (A → C));
A3: ⊢H (¬B → ¬A) → (A → B);


Proof Rule :: called Modus Ponens: (⊢H A ; ⊢H A → B) / ⊢H B.

Remark [Axioms vs. axiom schemata]
A1–A3 are in fact axiom schemata; the actual axioms comprise all formulae of the indicated form with the letters A, B, C instantiated to arbitrary formulae. For each particular alphabet Σ, there will be a different (infinite) collection of actual axioms. Similar instantiations are performed in the proof rule. For instance, for Σ = {a, b, c, d}, all the following formulae are instances of the axiom schemata:
A1: b → (a → b), (b → d) → (¬a → (b → d)), a → (a → a),
A3: (¬¬d → ¬b) → (b → ¬d).
The following formulae are not instances of the axioms: a → (b → b), (¬b → a) → (¬a → b).
Thus an axiom schema, like A1, is actually a predicate – for any given Σ, we get the set of (Σ-)instances A1^Σ = {x → (y → x) : x, y ∈ WFF^Σ_SL} ⊂ WFF^Σ_SL. Also proof rules, like Modus Ponens (MP), are written as schemata using variables (A, B) standing for arbitrary formulae. The MP proof rule schema comprises, for a given alphabet, infinitely many rules of the same form, e.g.,
(⊢H a ; ⊢H a → b) / ⊢H b,
(⊢H a → ¬b ; ⊢H (a → ¬b) → (b → c)) / ⊢H b → c,
(⊢H ¬(a → b) ; ⊢H ¬(a → b) → c) / ⊢H c, ...
Thus, in general, a proof rule R with n premisses (in an axiomatic system over a language L) is a schema – a relation R ⊆ Lⁿ × L. For a given Σ, Hilbert's Modus Ponens schema yields a set of (legal Σ-)applications MP^Σ = {⟨x, x → y, y⟩ : x, y ∈ WFF^Σ_SL} ⊂ WFF^Σ_SL × WFF^Σ_SL × WFF^Σ_SL. A proof rule as in definition 4.2 is just one element of this relation. Rules are almost always given in the form of such schemata – an element of the respective relation is then called an "application of the rule". The above three examples are applications of MP, i.e., elements of MP^Σ.
Notice that the sets Ai^Σ and MP^Σ are recursive (provided that Σ is, which it always is by assumption). Recursivity of the last set means that we can always decide whether a given triple of formulae is a (legal) application of the rule.
Recursivity of the set of axioms means that we can always decide whether a given formula is an axiom or not. Axiomatic systems which do not satisfy these conditions (i.e., where either axioms or applications of rules are undecidable) are of little interest and we will not consider them at all. □

That both the axioms and the applications of MP form recursive sets does not (necessarily) imply that so does ⊢H. This only means that, given a sequence of formulae, we can decide whether it is a proof or not. To decide if a given formula belongs to ⊢H would require a procedure for deciding if such a proof exists – probably, a procedure for constructing a proof. We will see several examples illustrating that, even if such a procedure for ⊢H exists, it is by no means simple.

Lemma 4.7 For an arbitrary B ∈ WFFSL: ⊢H B → B

Proof
1: ⊢H (B → ((B → B) → B)) → ((B → (B → B)) → (B → B))   A2
2: ⊢H B → ((B → B) → B)                                   A1
3: ⊢H (B → (B → B)) → (B → B)                             MP(2,1)
4: ⊢H B → (B → B)                                         A1
5: ⊢H B → B                                               MP(4,3)
QED (4.7)
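Since MP^Σ is recursive, checking whether a triple of formulae is a legal application of MP is a one-line syntactic test, and with it a Hilbert-style proof such as the one above can be verified mechanically. A sketch with formulae as fully parenthesized strings (my own encoding, not the book's):

```python
def imp(a: str, b: str) -> str:
    """Build the formula (a → b) as a string."""
    return f'({a}→{b})'

def is_mp_application(x: str, y: str, z: str) -> bool:
    """⟨x, y, z⟩ ∈ MP^Σ  iff  y is literally the formula (x → z)."""
    return y == imp(x, z)

B = 'b'
# Lines 1-5 of the proof of Lemma 4.7, with B instantiated to 'b':
l2 = imp(B, imp(imp(B, B), B))                    # A1
l1 = imp(l2, imp(imp(B, imp(B, B)), imp(B, B)))   # A2
l3 = imp(imp(B, imp(B, B)), imp(B, B))            # MP(2,1)
l4 = imp(B, imp(B, B))                            # A1
l5 = imp(B, B)                                    # MP(4,3)
print(is_mp_application(l2, l1, l3))   # → True
print(is_mp_application(l4, l3, l5))   # → True
```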

The phrase "for an arbitrary B ∈ WFFSL" indicates that any actual formula of the above form (i.e., for any actual alphabet Σ and any well-formed formula substituted for B) will be derivable, e.g. ⊢H a → a, ⊢H (a → ¬b) → (a → ¬b), etc. All the results concerning SL will be stated in this way. But we cannot substitute different formulae for the two occurrences of B. If we try to apply the above proof to deduce ⊢H A → B, it will fail – identify the place(s) where it would require invalid transitions.

In addition to provability of single formulae, derivations too can be "stored" for future use. The above lemma means that we can always, for an arbitrary formula B, use ⊢H B → B as a step in a proof. More generally, we can "store" derivations in the form of admissible rules.

Definition 4.8 Let C be an axiomatic system. A rule

(⊢C A1 ; . . . ; ⊢C An) / ⊢C C

is admissible in C if, whenever there are proofs in C of all the premisses, i.e., ⊢C Ai for all 1 ≤ i ≤ n, then there is a proof in C of the conclusion ⊢C C.

Lemma 4.9 The following rules are admissible in H:
1. (⊢H A → B ; ⊢H B → C) / ⊢H A → C
2. (⊢H B) / ⊢H A → B

Proof
1.
1: ⊢H (A → (B → C)) → ((A → B) → (A → C))   A2
2: ⊢H (B → C) → (A → (B → C))               A1
3: ⊢H B → C                                  assumption
4: ⊢H A → (B → C)                            MP(3,2)
5: ⊢H (A → B) → (A → C)                      MP(4,1)
6: ⊢H A → B                                  assumption
7: ⊢H A → C                                  MP(6,5)
2.
1: ⊢H B                                      assumption
2: ⊢H B → (A → B)                            A1
3: ⊢H A → B                                  MP(1,2)
QED (4.9)

Lemma 4.10
1. ⊢H ¬¬B → B
2. ⊢H B → ¬¬B

Proof
1.
1: ⊢H ¬¬B → (¬¬¬¬B → ¬¬B)                                A1
2: ⊢H (¬¬¬¬B → ¬¬B) → (¬B → ¬¬¬B)                        A3
3: ⊢H ¬¬B → (¬B → ¬¬¬B)                                  L.4.9.1(1,2)
4: ⊢H (¬B → ¬¬¬B) → (¬¬B → B)                            A3
5: ⊢H ¬¬B → (¬¬B → B)                                    L.4.9.1(3,4)
6: ⊢H (¬¬B → (¬¬B → B)) → ((¬¬B → ¬¬B) → (¬¬B → B))      A2
7: ⊢H (¬¬B → ¬¬B) → (¬¬B → B)                            MP(5,6)
8: ⊢H ¬¬B → B                                            MP(L.4.7,7)
2.
1: ⊢H ¬¬¬B → ¬B                                          point 1.
2: ⊢H (¬¬¬B → ¬B) → (B → ¬¬B)                            A3
3: ⊢H B → ¬¬B                                            MP(1,2)
QED (4.10)

4: Natural Deduction system

In a Natural Deduction system for SL, instead of the unary predicate ⊢H, we use a binary relation ⊢N ⊆ ℘(WFFSL) × WFFSL which, for Γ ∈ ℘(WFFSL) and B ∈ WFFSL, we write as Γ ⊢N B. This relation reads as "B is provable in N from the assumptions Γ".

Definition 4.11 The axioms and rules of Natural Deduction are as in Hilbert's system, with the additional axiom schema A0:
Axiom Schemata::
A0: Γ ⊢N B, whenever B ∈ Γ;
A1: Γ ⊢N A → (B → A);
A2: Γ ⊢N (A → (B → C)) → ((A → B) → (A → C));
A3: Γ ⊢N (¬B → ¬A) → (A → B);
Proof Rule :: Modus Ponens: (Γ ⊢N A ; Γ ⊢N A → B) / Γ ⊢N B.


Remark. As for the Hilbert system, the "axioms" are actually axiom schemata. The real set of axioms is the infinite set of actual formulae obtained from the axiom schemata by substitution of actual formulae for the upper case letters. Similarly for the proof rule. □

The next lemma corresponds exactly to lemma 4.9. In fact, the proof of that lemma (and of any other in H) can be taken over line for line, with hardly any modification (just replace ⊢H by Γ ⊢N), to serve as a proof of this lemma.

Lemma 4.12 The following rules are admissible in N:
1. (Γ ⊢N A → B ; Γ ⊢N B → C) / Γ ⊢N A → C
2. (Γ ⊢N B) / Γ ⊢N A → B

The system N is not exactly what is usually called Natural Deduction. We have adopted N because it corresponds so closely to the Hilbert system. The common features of N and Natural Deduction are that they both provide the means of reasoning from assumptions Γ, and not only, like H, for deriving single formulae, and that they both satisfy the crucial theorem which we prove next. (The expression "Γ, A" is short for "Γ ∪ {A}".)

Theorem 4.13 [Deduction Theorem] If Γ, A ⊢N B, then Γ ⊢N A → B.

Proof By induction on the length l of a proof of Γ, A ⊢N B.
Basis, l = 1, means that the proof consists merely of an instance of an axiom, and it has two cases depending on which axiom was involved:
A1-A3 :: If B is one of these axioms, then we also have Γ ⊢N B and lemma 4.12.2 gives the conclusion.
A0 :: If B results from this axiom, we have two subcases:
1. If B = A then, by lemma 4.7, we know that Γ ⊢N B → B.
2. If B ≠ A, then B ∈ Γ, and so Γ ⊢N B. By lemma 4.12.2 we get Γ ⊢N A → B.
MP :: B was obtained by MP, i.e.: (Γ, A ⊢N C ; Γ, A ⊢N C → B) / Γ, A ⊢N B.
By the induction hypothesis, we have the first two lines of the following proof:

1: Γ ⊢N A → C                                      IH
2: Γ ⊢N A → (C → B)                                IH
3: Γ ⊢N (A → (C → B)) → ((A → C) → (A → B))        A2
4: Γ ⊢N (A → C) → (A → B)                          MP(2,3)
5: Γ ⊢N A → B                                      MP(1,4)
QED (4.13)

Example 4.14 Using the deduction theorem significantly shortens proofs. The tedious example from lemma 4.7 can now be recast as:
1: B ⊢N B      A0
2: ⊢N B → B    DT
□

The deduction theorem is a kind of dual to MP: each gives one implication of the following
Corollary 4.15 Γ, A ⊢N B iff Γ ⊢N A → B.
Proof ⇒) is the deduction theorem 4.13.

The deduction theorem is a kind of dual to MP: each gives one implication of the following Corollary 4.15 Γ, A `N B iff Γ `N A → B. Proof ⇒) is the deduction theorem 4.13.


⇐) By exercise 4.5, the assumption may be strengthened to Γ, A ⊢N A → B. But then also Γ, A ⊢N A, and by MP, Γ, A ⊢N B. QED (4.15)

We can now easily show the following:
Corollary 4.16 The following rule is admissible in N: (Γ ⊢N A → (B → C)) / Γ ⊢N B → (A → C)
Proof Follows trivially from the above 4.15: Γ ⊢N A → (B → C) iff Γ, A ⊢N B → C iff Γ, A, B ⊢N C which – as Γ, A, B abbreviates the set Γ ∪ {A, B} – is equivalent to Γ, B ⊢N A → C iff Γ ⊢N B → (A → C). QED (4.16)

Lemma 4.17 ⊢N (A → B) → (¬B → ¬A)

Proof
1: A → B ⊢N (¬¬A → ¬¬B) → (¬B → ¬A)   A3, C.4.15
2: A → B ⊢N ¬¬A → A                    L.4.10.1
3: A → B ⊢N A → B                      A0
4: A → B ⊢N ¬¬A → B                    L.4.12.1(2,3)
5: A → B ⊢N B → ¬¬B                    L.4.10.2
6: A → B ⊢N ¬¬A → ¬¬B                  L.4.12.1(4,5)
7: A → B ⊢N ¬B → ¬A                    MP(6,1)
8: ⊢N (A → B) → (¬B → ¬A)              DT
QED (4.17)

5: Hilbert vs. ND

In H we prove only single formulae, while in N we work "from the assumptions", proving their consequences. Since the axiom schemata and rules of H are special cases of their counterparts in N, it is obvious that for any formula B, if ⊢H B then ∅ ⊢N B. In fact this can be strengthened to an equivalence. (Below we follow the convention of writing "⊢N B" for "∅ ⊢N B".)

Lemma 4.18 For any formula B we have ⊢H B iff ⊢N B.

Proof One direction is noted above. In fact, any proof of ⊢H B itself qualifies as a proof of ⊢N B. The other direction is almost as obvious, since there is no way to make any real use of A0 in a proof of ⊢N B. More precisely, take any proof of ⊢N B and delete all lines (if any) of the form Γ ⊢N A for Γ ≠ ∅. The result is still a proof of ⊢N B, and now also of ⊢H B. To proceed more formally, the lemma can be proved by induction on the length of a proof of ⊢N B: since Γ = ∅, the last step of the proof could have used either an axiom A1, A2, A3 or MP. The same step can then be done in H – for MP, the proofs of ⊢N A and ⊢N A → B for the appropriate A are shorter and hence, by the IH, have counterparts in H. QED (4.18)

The next lemma is a further generalization of this result.

Lemma 4.19 ⊢H G1 → (G2 → ...(Gn → B)...) iff {G1, G2, ..., Gn} ⊢N B.

Proof We prove the lemma by induction on n:
Basis :: The special case corresponding to n = 0 is just the previous lemma.
Induction :: Suppose ⊢H G1 → (G2 → ...(Gn → B)...) iff {G1, G2, ..., Gn} ⊢N B for any B. Then, taking (Gn+1 → B) for B, we have by IH ⊢H G1 → (G2 → ...(Gn → (Gn+1 → B))...) iff {G1, G2, ..., Gn} ⊢N (Gn+1 → B), which by corollary 4.15 holds iff {G1, G2, ..., Gn, Gn+1} ⊢N B. QED (4.19)

Lemma 4.18 states the equivalence of N and H with respect to the simple formulae of H. This lemma states essentially the general equivalence of these two systems: for any finite N-expression B ∈ ⊢N there is a corresponding H-formula B′ ∈ ⊢H and vice versa.
Observe, however, that this equivalence is restricted to finite Γ in N-expressions. The significant difference between the two systems is that N also allows us to consider consequences of infinite sets of assumptions, for which there are no corresponding formulae in H, since every formula must be finite.

6: Provable Equivalence of formulae

Equational reasoning is based on the simple principle of substitution of equals for equals. E.g., having the arithmetical expression 2 + (7 + 3) and knowing that 7 + 3 = 10, we also obtain 2 + (7 + 3) = 2 + 10. The rule applied in such cases may be written as (a = b) / (F[a] = F[b]), where F[ ] is an expression "with a hole" (a variable or a placeholder) into which we may substitute other expressions. We now illustrate a logical counterpart of this idea.

Lemma 4.7 showed that any formula of the form (B → B) is derivable in H and, by lemma 4.18, in N. It allows us to use, for instance, 1) ⊢N a → a, 2) ⊢N (a → b) → (a → b), 3) ... as a step in any proof. Putting it a bit differently, the lemma says that 1) is provable iff 2) is provable iff ... Recall that remark 4.5 introduced an abbreviation ⊤ for an arbitrary formula of this form! It also introduced an abbreviation ⊥ for an arbitrary formula of the form ¬(B → B). These abbreviations indicate that all the formulae of the respective form are equivalent in the following sense.

Definition 4.20 Formulae A and B are provably equivalent in an axiomatic system C for SL if both ⊢C A → B and ⊢C B → A. If this is the case, we write ⊢C A ↔ B.⁴

Lemma 4.10 provides an example, namely
(F.ix) ⊢H B ↔ ¬¬B
Another example follows from axiom A3 and lemma 4.17:
(F.x) ⊢N (A → B) ↔ (¬B → ¬A)
It is also easy to show (exc. 4.3) that all formulae ⊤ are provably equivalent, i.e.,
(F.xi) ⊢N (A → A) ↔ (B → B).
To show the analogous equivalence of all ⊥ formulae,
(F.xii) ⊢N ¬(A → A) ↔ ¬(B → B),
we have to proceed differently, since we do not have ⊢N ¬(B → B).⁵ We can use the above fact and lemma 4.17:
1: ⊢N (A → A) → (B → B)                             (F.xi)
2: ⊢N ((A → A) → (B → B)) → (¬(B → B) → ¬(A → A))   L.4.17
3: ⊢N ¬(B → B) → ¬(A → A)                           MP(1,2)

and the opposite implication is again an instance of this one.

Provable equivalence A ↔ B means – and this is its main importance – that the formulae are interchangeable. Whenever we have a proof of a formula F[A] containing A (as a subformula, possibly with several occurrences), we can replace A by B – the result will be provable too. This fact is a powerful tool for simplifying proofs and is expressed in the following theorem. (The analogous version holds for H.)

Theorem 4.21 The following rule is admissible in N:

`N A ↔ B , for any formula F [A]. `N F [A] ↔ F [B]

Proof By induction on the complexity of F [ ] viewed as a formula “with a hole” (where there may be several occurences of the “hole”, i.e., F [ ] may have the form ¬[ ] → G or else [ ] → (¬G → ([ ] → H)), etc.). [ ] :: i.e., F [A] = A and F [B] = B – the conclusion is then the same as the premise. 4 In the view of lemma 4.7 and 4.9.1, and their generalizations to N , the relation Im ⊆ WFF Σ × WFFΣ given SL SL by Im(A, B) ⇔ `N A → B is reflexive and transitive. This definition amounts to adding the requirement of symmetricity making ↔ the greatest equivalence contained in Im. 5 In fact, this is not true, as we will see later on.


III.1. Syntax and Proof Systems

¬G[ ] :: IH allows us to assume the claim for G[ ]:

   `N A ↔ B
-----------------
`N G[A] ↔ G[B]

1:  `N A → B                                    assumption
2:  `N G[A] → G[B]                              IH
3:  `N (G[A] → G[B]) → (¬G[B] → ¬G[A])          L.4.17
4:  `N ¬G[B] → ¬G[A]                            MP(2, 3)

The same for `N ¬G[A] → ¬G[B], starting with the assumption `N B → A.
G[ ] → H[ ] :: Assuming `N A ↔ B, IH gives us the following assumptions: `N G[A] → G[B], `N G[B] → G[A], `N H[A] → H[B] and `N H[B] → H[A]. We show that `N F [A] → F [B]:

1:  `N H[A] → H[B]                              IH
2:  G[A] → H[A] `N H[A] → H[B]                  exc. 4.5
3:  G[A] → H[A] `N G[A] → H[A]                  A0
4:  G[A] → H[A] `N G[A] → H[B]                  L.4.12.1(3, 2)
5:  `N G[B] → G[A]                              IH
6:  G[A] → H[A] `N G[B] → G[A]                  exc. 4.5
7:  G[A] → H[A] `N G[B] → H[B]                  L.4.12.1(6, 4)
8:  `N (G[A] → H[A]) → (G[B] → H[B])            DT(7)

An entirely symmetric proof yields the other implication `N F [B] → F [A].

QED (4.21)
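The substitution of a subformula A by B inside F [ ], which the theorem shows to preserve provability, is itself a simple mechanical operation. It can be sketched in a few lines of Python; the tuple representation of formulae and the function name are our own illustrative choices, not part of the book's systems:

```python
# Formulae as nested tuples: a propositional variable is a string,
# ('not', A) stands for ¬A and ('imp', A, B) for A → B.

def subst(F, A, B):
    """Replace every occurrence of the subformula A in F by B."""
    if F == A:
        return B
    if isinstance(F, tuple):
        # Keep the connective, substitute recursively in the arguments.
        return (F[0],) + tuple(subst(G, A, B) for G in F[1:])
    return F  # a different propositional variable: leave it alone

# F[a] = ¬a → (a → c); replacing a by ¬¬a yields F[¬¬a],
# with both occurrences of a replaced at once.
F = ('imp', ('not', 'a'), ('imp', 'a', 'c'))
print(subst(F, 'a', ('not', ('not', 'a'))))
```

By theorem 4.21 and (F.ix), if the first formula is provable then so is the second.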

The theorem, together with the preceding observations about the equivalence of all > and all ⊥ formulae, justifies the use of these abbreviations: in a proof, any formula of the form ⊥, resp. >, can be replaced by any other formula of the same form. As a simple consequence of the theorem, we obtain:

Corollary 4.22 For any formula F [A], the following rule is admissible:

`N F [A] ; `N A ↔ B
---------------------
      `N F [B]

Proof If `N A ↔ B, theorem 4.21 gives us `N F [A] ↔ F [B] which, in particular, implies `N F [A] → F [B]. MP applied to this and the premise `N F [A] gives `N F [B].    QED (4.22)

7: Consistency

Lemma 4.7, and the discussion of provable equivalence above, show that for any Γ (also for Γ = Ø) we have Γ `N >, where > is an arbitrary instance of B → B. The following notion indicates that the similar fact, namely Γ `N ⊥, need not always hold.

Definition 4.23 A set of formulae Γ is consistent iff Γ 6`N ⊥.

An equivalent formulation says that Γ is consistent iff there is a formula A such that Γ 6`N A. In fact, if Γ `N A for all A then, in particular, Γ `N ⊥. The equivalence follows then by the next lemma.

Lemma 4.24 If Γ `N ⊥, then Γ `N A for all A.

Proof (Observe how corollary 4.22 simplifies the proof.)

1:  Γ `N ¬(B → B)                  assumption
2:  Γ `N B → B                     L.4.7
3:  Γ `N ¬A → (B → B)              2 + L.4.12.2
4:  Γ `N ¬(B → B) → ¬¬A            C.4.22 (F.x)
5:  Γ `N ¬¬A                       MP(1, 4)
6:  Γ `N A                         C.4.22 (F.ix)

QED (4.24)

This lemma is the (syntactic) reason why inconsistent sets of "assumptions" Γ are uninteresting. Given such a set, we do not need the machinery of the proof system in order to check whether something is a theorem or not – we merely have to check if the formula is well-formed. Similarly, an axiomatic system, like H, is inconsistent if its rules and axioms allow us to derive `H ⊥. Notice that the definition requires that ⊥ is not derivable. In other words, to decide if Γ is consistent it does not suffice to run enough proofs and see what can be derived from Γ. One must


Statement Logic

show that, no matter what, one will never be able to derive ⊥. This, in general, may be an infinite task requiring searching through all the proofs. If ⊥ is derivable, we will eventually construct a proof of it, but if it is not, we will never reach any conclusion. That is, in general, consistency of a given system may be only semi-decidable. (Fortunately, consistency of H as well as of N for an arbitrary Γ is decidable (as a consequence of the fact that "being a theorem" is decidable for these systems) and we will comment on this in subsection 8.1.) In some cases, the following theorem may be used to ease the process of deciding that a given Γ is (in)consistent.

Theorem 4.25 [Compactness] Γ is consistent iff each finite subset ∆ ⊆ Γ is consistent.

Proof ⇒ If Γ 6`N ⊥ then, obviously, there is no such proof from any subset of Γ. ⇐ Contrapositively, assume that Γ is inconsistent. The proof of ⊥ must be finite and, in particular, uses only a finite number of assumptions ∆ ⊆ Γ. This means that the proof Γ `N ⊥ can be carried out from a finite subset ∆ of Γ, i.e., ∆ `N ⊥.    QED (4.25)

8: Gentzen's axiomatic system

By now you should be convinced that it is rather cumbersome to design proofs in H or N. From the mere form of the axioms and rules of these systems it is by no means clear that they define recursive sets of formulae. (As usual, it is easy to see (a bit more tedious to prove) that these sets are semi-recursive.) We give yet another axiomatic system for SL in which proofs can be constructed mechanically. The relation `G ⊆ ℘(WFFSL) × ℘(WFFSL) contains expressions, called sequents, of the form Γ `G ∆, where Γ, ∆ ⊆ WFFSL are finite sets of formulae. It is defined inductively as follows:

Axioms :: Γ `G ∆, whenever Γ ∩ ∆ ≠ ∅

Rules ::

¬` :   Γ `G ∆, A            `¬ :   Γ, A `G ∆
      -------------               -------------
       Γ, ¬A `G ∆                  Γ `G ∆, ¬A

→` :   Γ `G ∆, A ; Γ, B `G ∆            `→ :   Γ, A `G ∆, B
      -------------------------               ---------------
           Γ, A → B `G ∆                       Γ `G ∆, A → B

The power of the system is the same whether we assume that Γ’s and ∆’s in the axioms contain only atomic formulae, or else arbitrary formulae. We comment now on the “mechanical” character of G and the way one can use it.

8.1: Decidability of the axiomatic systems for SL

Gentzen's system defines a set `G ⊆ ℘(WFFSL) × ℘(WFFSL). Unlike for H or N, it is (almost) obvious that this set is recursive – we do not give a formal proof but indicate its main steps.

Theorem 4.26 The relation `G is decidable.

Proof Given an arbitrary sequent Γ `G ∆ = A1, ..., An `G B1, ..., Bm, we can start processing the formulae (bottom-up!) in an arbitrary order, for instance, from left to right. For instance, B → A, ¬A `G ¬B is shown by building the proof starting at the bottom line:

5:  B `G A, B                         axiom
4:  `G ¬B, A, B                       `¬ (5)
3:  A `G A, ¬B                        axiom
2:  B → A `G ¬B, A                    →` (4 ; 3)
1:  ¬A, B → A `G ¬B                   ¬` (2)

In general, the proof in G proceeds as follows:
• If Ai is atomic, we continue with Ai+1, and then with the B's.


• If a formula is not atomic, it is either ¬C or C → D. In either case there is only one rule which can be applied (remember, we go bottom-up). The premise(s) of this rule are uniquely determined by the conclusion (the formula we are processing at the moment) and its application will remove the main connective, i.e., reduce the number of ¬, resp. →!
• Thus, eventually, we will arrive at a sequent Γ′ `G ∆′ which contains only atomic formulae. We then only have to check whether Γ′ ∩ ∆′ ≠ ∅, i.e., whether it is an axiom – obviously a decidable problem since both sets are finite.

Notice that the rule →` will "split" the proof into two branches, but each of them will contain fewer connectives. We have to process both branches but, again, for each we will eventually arrive at sequents with only atomic formulae. The initial sequent is derivable in G iff all such branches terminate with axioms. And it is not derivable iff at least one terminates with a non-axiom (i.e., Γ′ `G ∆′ where Γ′ ∩ ∆′ = ∅). Since all branches are guaranteed to terminate, `G is decidable.    QED (4.26)

Now, notice that the expressions used in N are special cases of sequents, namely, the ones with exactly one formula on the right of `N. If we restrict our attention in G to such sequents, the above theorem still tells us that the respective restriction of `G is decidable. We now indicate the main steps involved in showing that this restricted relation is the same as `N. As a consequence, we obtain that `N is decidable, too. That is, we want to show that Γ `N B iff Γ `G B.
1) In exercise 4.4, you are asked to prove a part of the implication "if `N B then `G B", by showing that all axioms of N are derivable in G. It is not too difficult to show that also the MP rule is admissible in G. It is there called the (cut)-rule, whose simplest form is:

(cut)   Γ `G A ; Γ, A `G B
        -------------------
             Γ `G B

and MP is easily derivable from it.
(If Γ `G A → B, then it must have been derived using the rule `→, i.e., we must have had earlier ("above") in the proof the right premise Γ, A `G B. Thus we could have applied (cut) at this earlier stage and obtained Γ `G B, without bothering to derive Γ `G A → B at all.)
2) To complete the proof we would have to show also the opposite implication "if `G B then `N B", namely that G does not prove more formulae than N does. (If it did, the problem would still be open, since we would have a decision procedure for `G but not for `N ⊂ `G. I.e., for some formula B ∉ `N we might still get the positive answer, which would merely mean that B ∈ `G.) This part of the proof is more involved since Gentzen's rules for ¬ do not produce N-expressions, i.e., a proof in G may go through intermediary steps involving expressions not derivable (not existing, or "illegal") in N.
3) Finally, if N is decidable, then lemma 4.18 implies that also H is decidable – according to this lemma, to decide if `H B, it suffices to decide if `N B.
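The bottom-up procedure of theorem 4.26 is mechanical enough to be sketched in a few lines of Python. This is only an illustration, not part of the book's systems; the representation of formulae (strings for variables, tuples for ¬ and →) and the function name are our own assumptions:

```python
# A propositional variable is a string; ('not', A) is ¬A, ('imp', A, B) is A → B.

def provable(gamma, delta):
    """Decide bottom-up whether the sequent Gamma |-G Delta is derivable."""
    gamma, delta = list(gamma), list(delta)
    # Process the first non-atomic formula on the left, if any. The order of
    # processing does not matter, so we may commit to the first one found.
    for i, f in enumerate(gamma):
        if isinstance(f, tuple):
            rest = gamma[:i] + gamma[i + 1:]
            if f[0] == 'not':                      # rule ¬|- : move A to the right
                return provable(rest, delta + [f[1]])
            return (provable(rest, delta + [f[1]])  # rule →|- : two premises,
                    and provable(rest + [f[2]], delta))  # both must be derivable
    # Then the first non-atomic formula on the right, if any.
    for i, f in enumerate(delta):
        if isinstance(f, tuple):
            rest = delta[:i] + delta[i + 1:]
            if f[0] == 'not':                      # rule |-¬ : move A to the left
                return provable(gamma + [f[1]], rest)
            return provable(gamma + [f[1]], rest + [f[2]])  # rule |-→
    # Only atomic formulae left: it is an axiom iff the two sides intersect.
    return bool(set(gamma) & set(delta))

# The example from the text: B → A, ¬A |-G ¬B
print(provable([('imp', 'B', 'A'), ('not', 'A')], [('not', 'B')]))   # True
print(provable([], ['a']))                                           # False
```

Every recursive call removes one connective, so the search always terminates – exactly the argument used in the proof above.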

8.2: Gentzen’s rules for abbreviated connectives The rules of Gentzen’s form a very well-structured system. For each connective, →, ¬ there are two rules – one treating its occurrence on the left, and one on the right of `G . As we will soon see, it makes often things easier if one is allowed to work with some abbreviations for frequently occurring sets of symbols. For instance, assume that in the course of some proof, we run again and again in the sequence of the form ¬A → B. Processing it requires application of at least two def rules. One may be therefore tempted to define a new connective A ∨ B = ¬A → B, and a new rule for its treatement. In fact, in Gentzen’s system we should obtain two rules for the occurrence of this new symbol on the left, resp. on the right of `G . Now looking back at the original rules


from the beginning of this section, we can see how such a connective should be treated:

Γ `G A, B, ∆                        Γ, A `G ∆
Γ, ¬A `G B, ∆                       Γ `G ¬A, ∆ ; Γ, B `G ∆
Γ `G ¬A → B, ∆                      Γ, ¬A → B `G ∆
Γ `G A ∨ B, ∆                       Γ, A ∨ B `G ∆

Abbreviating these two derivations yields the following two rules:

(F.xiii)   `∨ :   Γ `G A, B, ∆          ∨` :   Γ, A `G ∆ ; Γ, B `G ∆
                 ----------------             ------------------------
                  Γ `G A ∨ B, ∆                    Γ, A ∨ B `G ∆

In a similar fashion, we may construct the rules for another, very common abbreviation, A ∧ B =def ¬(A → ¬B):

(F.xiv)    `∧ :   Γ `G A, ∆ ; Γ `G B, ∆          ∧` :   Γ, A, B `G ∆
                 ------------------------              ---------------
                      Γ `G A ∧ B, ∆                     Γ, A ∧ B `G ∆

It is hard to imagine how to perform a similar construction in the systems H or N. We will meet the above abbreviations in the following chapters.
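The uniform left/right pattern also means that a mechanical prover can be taught the new connectives directly, by adding the derived rules (F.xiii) and (F.xiv) as extra cases instead of expanding ∨ and ∧ into ¬ and →. The sketch below uses our own tuple encoding of formulae (('or', A, B) and ('and', A, B) are our assumptions, not the book's notation):

```python
# Variables are strings; ('not', A), ('imp', A, B), ('or', A, B), ('and', A, B).

def provable(gamma, delta):
    """Bottom-up proof search in G, with the derived rules for ∨ and ∧."""
    gamma, delta = list(gamma), list(delta)
    for i, f in enumerate(gamma):
        if isinstance(f, tuple):
            rest = gamma[:i] + gamma[i + 1:]
            op, args = f[0], f[1:]
            if op == 'not':    # ¬|-
                return provable(rest, delta + [args[0]])
            if op == 'imp':    # →|- : two premises
                return (provable(rest, delta + [args[0]])
                        and provable(rest + [args[1]], delta))
            if op == 'or':     # ∨|- (F.xiii): one premise per disjunct
                return all(provable(rest + [a], delta) for a in args)
            if op == 'and':    # ∧|- (F.xiv): both conjuncts to the left
                return provable(rest + list(args), delta)
    for i, f in enumerate(delta):
        if isinstance(f, tuple):
            rest = delta[:i] + delta[i + 1:]
            op, args = f[0], f[1:]
            if op == 'not':    # |-¬
                return provable(gamma + [args[0]], rest)
            if op == 'imp':    # |-→
                return provable(gamma + [args[0]], rest + [args[1]])
            if op == 'or':     # |-∨ (F.xiii): both disjuncts to the right
                return provable(gamma, rest + list(args))
            if op == 'and':    # |-∧ (F.xiv): one premise per conjunct
                return all(provable(gamma, rest + [a]) for a in args)
    return bool(set(gamma) & set(delta))

print(provable([], [('or', 'a', ('not', 'a'))]))    # |-G a ∨ ¬a : True
print(provable([('and', 'a', 'b')], ['a']))         # a ∧ b |-G a : True
```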

9: Some proof techniques

In the next chapter we will see that formulae of SL may be interpreted as propositions – statements possessing a truth value, true or false. The connective ¬ may then be interpreted as negation of the argument proposition, while → as (a kind of) implication. With this intuition, we may recognize some of the provable facts (either formulae or admissible rules) as giving rise to particular strategies of proof which can be – and are – utilized at all levels, in fact, throughout the whole of mathematics, as well as in much of informal reasoning. Most facts from and about SL can be viewed in this way, but here we give only a few of the most common examples.

• As a trivial example, the provable equivalence `N B ↔ ¬¬B from (F.ix) means that in order to show a double negation ¬¬B, it suffices to show B. One will hardly try to say "I am not unmarried." – "I am married." is both more convenient and natural.

• Let G, D stand, respectively, for the statements 'Γ `N ⊥' and '∆ `N ⊥ for some ∆ ⊆ Γ' from the proof of theorem 4.25. In the second point, we showed ¬D → ¬G contrapositively, i.e., by showing G → D. That this is a legal and sufficient way of proving the first statement can be, at the present point, justified by appealing to (F.x) – (A → B) ↔ (¬B → ¬A) says precisely that proving one is the same as (equivalent to) proving the other.

• Another proof technique is expressed in corollary 4.15: A `N B iff `N A → B. Treating formulae on the left of `N as assumptions, this tells us that in order to prove that A implies B, A → B, we may prove A `N B, i.e., assume that A is true and show that then also B must be true.

• In exercise 4.2 you are asked to show admissibility of the following rule:

A `N ⊥
--------
`N ¬A

Interpreting ⊥ as something which never can be true, a contradiction or, perhaps more generously, as an absurdity, this rule actually expresses reductio ad absurdum, which we have seen in the chapter on the history of logic (Zeno's argument about Achilles and the tortoise): if A can be used to derive an absurdity, then A cannot be true, i.e., (applying the law of excluded middle) its negation must be.

Compulsory Exercises I (week 4)

exercise 4.1 Let A be a countable set. Show that A has countably many finite subsets.
1. Show first that for any n ∈ N, the set ℘n(A) of finite subsets – with exactly n elements – of A is countable.
2. Using a technique similar to the one from the drawing in example 1.25 on page 38, show that the union ⋃n∈N ℘n(A) of all these sets is countable.

exercise 4.2 Prove the following statements in N :

87

III.1. Syntax and Proof Systems

1. `N ¬A → (A → B)
(Hint: Complete the following proof:

1:  `N ¬A → (¬B → ¬A)          A1
2:  ¬A `N ¬B → ¬A              C.4.15
3:  . . .                      A3
4:  . . .                      MP(2, 3)
5:  . . .                      DT(4)   )

2. ¬B, A `N ¬(A → B)
(Hint: Start as follows:

1:  A, A → B `N A              A0
2:  A, A → B `N A → B          A0
3:  A, A → B `N B              MP(1, 2)
4:  A `N (A → B) → B           DT(3)
...

Apply then lemma 4.17; you will also need corollary 4.15.)

3. `N A → (¬B → ¬(A → B))
4. `N (A → ⊥) → ¬A
5. Show now admissibility in N of the rules

(a)  `N A → ⊥          (b)  A `N ⊥
     ----------             --------
      `N ¬A                 `N ¬A

(Hint: for (a) use 4 and MP, and for (b) use (a) and the Deduction Theorem.)

6. Prove the first formula in H, i.e.: 1′. `H ¬A → (A → B).

exercise 4.3+ Show the claim (F.xi), i.e., `N (A → A) ↔ (B → B). (Hint: use lemma 4.7 and then lemma 4.12.2.)

exercise 4.4+ Consider Gentzen's system G from section 8.
1. Show that all axioms of the N system are derivable in G. (Hint: Instead of pondering over the axioms to start with, apply the bottom-up strategy from 8.1.)
2. Using the same bottom-up strategy, prove the formulae 1., 2. and 3. from exercise 4.2 in G.

exercise 4.5+ Lemma 4.12 generalized lemma 4.9 to the expressions involving assumptions Γ `N ... We can, however, reformulate the rules in a different way, namely, by placing the antecedents of → to the left of `N. Show the admissibility in N of the rules:

1.    Γ `N B           2.   Γ, A `N B ; Γ, B `N C
     -----------           -------------------------
     Γ, A `N B                   Γ, A `N C

(1. must be shown directly by induction on the length of the proof of Γ `N B, without using corollary 4.15 – why? For 2. you can then use 4.15.)

exercise 4.6 Show that the following definition of consistency is equivalent to 4.23: Γ is consistent iff there is no formula A such that both Γ `N A and Γ `N ¬A.
Hint: You should show that for arbitrary Γ one has that:

Γ 6`N ⊥   iff   for no A : Γ `N A and Γ `N ¬A,

which is the same as showing that:

Γ `N ⊥   ⇔   for some A : Γ `N A and Γ `N ¬A.

The implication ⇒) follows easily from the assumption Γ `N ¬(B → B) and lemma 4.7. For the opposite one, start as follows (use corollary 4.22 on 3: and then MP):

1:  Γ `N A                          ass.
2:  Γ `N ¬A                         ass.
3:  Γ `N ¬¬(A → A) → ¬A             L.4.12.2 (2)
...


Chapter 5
Semantics of SL

• Semantics of SL
• Semantic properties of formulae
• Abbreviations
• Sets, SL and Boolean Algebras

In this chapter we leave the proofs and axioms from the previous chapter aside. For the time being, none of the concepts and discussions below should be related to any earlier results on axiomatic systems. (The connection to the proof systems will be studied in the following chapters.) We now study exclusively the language of SL – definition 4.4 – and the standard way of assigning meaning to its expressions.

1: Semantics of SL

a Background Story ♦ ♦

There is a huge field of Proof Theory which studies axiomatic systems per se, i.e., without reference to their possible meanings. This was the kind of study we were carrying out in the preceding chapter. As we emphasised at the beginning of that chapter, an axiomatic system may be given very different interpretations and we will in this chapter see a few possibilities for interpreting the system of Statement Logic.
Yet, axiomatic systems are typically introduced for the purpose of studying particular areas of interest or particular phenomena therein. They provide syntactic means for such a study: a language which is used to refer to various objects of some domain and a proof calculus which, hopefully, captures some of the essential relationships between various aspects of the domain. As you should have gathered from the presentation of the history of logic, its original intention was to capture the patterns of correct reasoning which we otherwise carry out in natural language. Statement Logic, in particular, was conceived as a logic of statements or propositions: propositional variables may be interpreted as arbitrary statements, while the connectives as the means of constructing new statements from others. Thus, for instance, let us make the following reasoning:

        If it is raining, we will go to cinema.
        If we go to cinema, we will see a Kurosawa film.
hence:  If it is raining, we will see a Kurosawa film.

If we agree to represent the implication if ... then ... by the syntactic symbol →, this reasoning is represented by interpreting A as It will rain, B as We will go to cinema, C as We will see a Kurosawa film and by the deduction

A → B ; B → C
--------------
    A → C

As we have seen in lemma 4.9, this is a valid rule in the system `H. Thus, we might say that the system `H (as well as `N) captures this aspect of our natural reasoning. However, one has to be very – indeed, extremely – careful with these kinds of analogies. They are never complete and any formal system runs, sooner or later, into problems when confronted with the richness and sophistication of natural language. Consider the following argument:

        If I am in Paris then I am in France.
        If I am in Rome then I am in Italy.
hence:  If I am in Paris then I am in Italy or else
        if I am in Rome then I am in France.


III.2. Semantics of SL

It does not look plausible, does it? Now, let us translate it into statement logic: P for being in Paris, F for being in France, R in Rome and I in Italy. Using Gentzen's rules with the standard reading of ∧ as 'and' and ∨ as 'or', we obtain:

R → I, P, R `G I, F, P   ;   F, R → I, P, R `G I, F
P → F, R → I, P, R `G I, F
P → F, R → I, P `G I, R → F
P → F, R → I `G P → I, R → F
P → F, R → I `G P → I ∨ R → F
P → F ∧ R → I `G P → I ∨ R → F
`G (P → F ∧ R → I) → (P → I ∨ R → F)

Our argument – the implication from (P → F and R → I) to (P → I or R → F) – turns out to be provable in `G. (It is so in the other systems as well.) Logicians happen to have an answer to this particular problem (we will return to it in exercise 6.1). But there are other strange things which cannot be easily answered. Typically, any formal system attempting to capture some area of discourse will capture only some part of it. Attempting to apply it beyond this area leads inevitably to counterintuitive phenomena.
Statement logic attempts to capture some simple patterns of reasoning at the level of propositions. A proposition can be thought of as a declarative sentence which may be assigned a unique truth value. The sentence "It is raining" is either true or false. Thus, the intended and possible meanings of propositions are truth values: true or false. Now, the meaning of the proposition If it rains, we will go to a cinema, A → B, can be construed as: if 'it is true that it will rain' then 'it is true that we will go to a cinema'. The implication A → B says that if A is true then B must be true as well. Now, since this implication is itself a proposition, it will have to be given a truth value as its meaning. And this truth value will depend on the truth values of its constituents: the propositions A and B. If A is true (it is raining) but B is false (we are not going to a cinema), the whole implication A → B is false. And now comes the question: what if A is false? Did the implication A → B assert anything about this situation? No, it did not. If A is false (it is not raining), we may go to a cinema or we may stay at home – I haven't said anything about that case. Yet, the proposition has to have a meaning for all possible values of its parts. In this case – when the antecedent A is false – the whole implication A → B is declared true irrespective of the truth value of B.
You should notice that here something special is happening which does not necessarily correspond so closely to our intuition. And indeed, it is something very strange! If I am a woman, then you are Dalai Lama. Since I am not a woman, the implication happens to be true! But, as you know, this does not mean that you are Dalai Lama. This example, too, can be explained by the same argument as the above one (to be indicated in exercise 6.1). However, the following implication is true, too, and there is no formal way of excusing it being so or explaining it away: If it is not true that when I am a man then I am a man, then you are Dalai Lama, ¬(M → M) → D. It is correct, it is true and ... it seems to be entirely meaningless. In short, formal correctness and accuracy do not always correspond to something meaningful in natural language, even if such a correspondence was the original motivation.
A possible discrepancy indicated above concerned, primarily, the discrepancy between our intuition about the meaning of sentences and their representation in a syntactic system. But the same problem occurs at yet another level – the same or analogous discrepancies occur between our intuitive understanding of the world and the formal semantic model of the world. Thinking about axiomatic systems as tools for modelling the world, we might be tempted to look at the relation as illustrated on the left side of the following figure: an axiomatic system modelling the world. In truth, however, the relation is more complicated, as illustrated on the right of the figure.


[Figure: two diagrams. On the left, an axiomatic system modelling "The World" directly; on the right, an axiomatic system related to "The World" only via an intermediate formal semantics.]

An axiomatic system never addresses the world directly. What it addresses is a possible semantic model which tries to give a formal representation of the world. As we have mentioned several times, a given axiomatic system may be given various interpretations – all such interpretations will be possible formal semantic models of the system. To what extent these semantic models capture our intuition about the world is a different question. It is the question about "correctness" or "incorrectness" of modelling – an axiomatic system in itself is neither, because it can be endowed with different interpretations. The problems indicated above were really the problems with the semantic model of natural language which was implicitly introduced by assuming that statements are to be interpreted as truth values.
We will now endeavour to study the semantics – meaning – of the syntactic expressions from SL. We will see some alternative semantics, starting with the standard one based on the so-called "truth functions" (which we will call "boolean functions"). To avoid confusion and surprises, one should always keep in mind that we are not talking about the world but are defining a formal model of SL which, at best, can provide an imperfect link between the syntax of SL and the world. The formality of the model, as always, will introduce some discrepancies like those described above and many things may turn out not exactly as we would expect them to be in the real world. ♦

♦

Let B be a set with two elements. Any such set would do but, for convenience, we will typically let B = {1, 0}. Whenever one tries to capture the meaning of propositions as their truth value, and uses Statement Logic with this intention, one interprets B as the set {true, false}. Since this gives too strong associations and often leads to incorrect intuitions without improving anything, we will avoid the words true and false. Instead we will talk about "boolean values" (1 and 0) and "boolean functions". If the word "truth" appears, it may be safely replaced with "boolean".
For any n ≥ 0 we can imagine various functions mapping B^n → B. For instance, for n = 2, a function f : B × B → B can be defined by f(1, 1) = 1, f(1, 0) = 0, f(0, 1) = 1 and f(0, 0) = 1. It can be written more concisely as the table:

x  y  f(x, y)
1  1  1
1  0  0
0  1  1
0  0  1

The first n columns contain all the possible combinations of the arguments (giving 2^n distinct rows), and the last column specifies the value of the function for this combination of the arguments. For each of the 2^n rows a function takes one of the two possible values, so for any n there are exactly 2^(2^n) different functions B^n → B. For n = 0, there are only two (constant) functions, for n = 1 there will be four distinct functions (which ones?) and so on. Surprisingly (or not), the language of SL is designed exactly for describing such functions!

Definition 5.1 An SL structure consists of:
1. A domain with two elements, called boolean values, {1, 0}


2. Interpretation of the connectives, ¬ : B → B and → : B^2 → B, given by the boolean tables:6

x  ¬x        x  y  x → y
1  0         1  1  1
0  1         1  0  0
             0  1  1
             0  0  1

Given an alphabet Σ, an SL structure for Σ is an SL structure with
3. an assignment of boolean values to all propositional variables, i.e., a function V : Σ → {1, 0}. (Such a V is also called a valuation of Σ.)

Thus connectives are interpreted as functions on the set {1, 0}. To distinguish the two, we use the simple symbols ¬ and → when talking about syntax, and the underlined ones when we are talking about the semantic interpretation as boolean functions. ¬ is interpreted as the function ¬ : {1, 0} → {1, 0}, defined by ¬(1) = 0 and ¬(0) = 1. → is binary and represents one of the functions from {1, 0}^2 into {1, 0}.

Example 5.2 Let Σ = {a, b}. V = {a ↦ 1, b ↦ 1} is a Σ-structure (i.e., a structure interpreting all symbols from Σ) assigning 1 (true) to both variables. V = {a ↦ 1, b ↦ 0} is another Σ-structure. Let Σ = {'John smokes', 'Mary sings'}. Here 'John smokes' is a propositional variable (with a rather lengthy name). V = {'John smokes' ↦ 1, 'Mary sings' ↦ 0} is a Σ-structure in which both "John smokes" and "Mary does not sing".    2

The domain of interpretation has two boolean values 1 and 0, and so we can imagine various functions, in addition to those interpreting the connectives. As remarked above, for arbitrary n ≥ 0 there are 2^(2^n) distinct functions mapping {1, 0}^n into {1, 0}.

Example 5.3 Here is an example of a (somewhat involved) boolean function F : {1, 0}^3 → {1, 0}:

x  y  z  F(x, y, z)
1  1  1  1
1  1  0  1
1  0  1  1
1  0  0  0
0  1  1  1
0  1  0  1
0  0  1  1
0  0  0  1
                                                              2

Notice that in Definition 5.1 only the valuation differs from structure to structure. The interpretation of the connectives is always the same – for any Σ, it is fixed once and for all as the specific boolean functions. Hence, given a valuation V, there is a canonical way of extending it to the interpretation of all formulae – a valuation of propositional variables induces a valuation of all well-formed formulae. We sometimes write V̄ for this extended valuation. This is given in the following definition which, intuitively, corresponds to the fact that if we know that 'John smokes' and 'Mary does not sing', then we also know that 'John smokes and Mary does not sing', or else that it is not true that 'John does not smoke'.

Definition 5.4 Any valuation V : Σ → {1, 0} induces a unique valuation V̄ : WFFΣSL → {1, 0} as follows:
1. for A ∈ Σ : V̄(A) = V(A)
2. for A = ¬B : V̄(A) = ¬(V̄(B))
3. for A = (B → C) : V̄(A) = V̄(B) → V̄(C)

6 The standard name for such tables is "truth tables" but since we are trying not to misuse the word "truth", we stay consistent by replacing it here, too, with "boolean".
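Definition 5.4 is a straightforward recursion over the structure of the formula, and can be sketched directly in Python. The tuple representation of formulae and the name v_bar are our own illustrative choices, not the book's notation:

```python
def v_bar(V, A):
    """The induced valuation of Definition 5.4. V maps variables to 0/1;
    A is a formula: a string (variable), ('not', B) or ('imp', B, C)."""
    if isinstance(A, str):                  # clause 1: propositional variable
        return V[A]
    if A[0] == 'not':                       # clause 2: the boolean function for ¬
        return 1 - v_bar(V, A[1])
    left, right = v_bar(V, A[1]), v_bar(V, A[2])   # clause 3: the function for →
    return 0 if (left == 1 and right == 0) else 1

V = {'a': 1, 'b': 0}
print(v_bar(V, ('imp', 'a', 'b')))          # 0: a → b fails when a = 1, b = 0
print(v_bar(V, ('not', ('not', 'a'))))      # 1
```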


For the purposes of this section it is convenient to assume that some total ordering has been selected for the propositional variables, so that for instance a "comes before" b, which again "comes before" c.

Example 5.5 Given the alphabet Σ = {a, b, c}, we use the fixed interpretation of the connectives to determine the boolean value of, for instance, a → (¬b → c) as follows:

a  b  ¬b  c  ¬b → c  a → (¬b → c)
1  1  0   1  1       1
1  1  0   0  1       1
1  0  1   1  1       1
1  0  1   0  0       0
0  1  0   1  1       1
0  1  0   0  1       1
0  0  1   1  1       1
0  0  1   0  0       1
                                                              2

Ignoring the intermediary columns, this table displays exactly the same dependence of the entries in the last column on the entries in the first three as the function F from Example 5.3. We say that the formula a → (¬b → c) determines the function F. The general definition is given below.

Definition 5.6 For any formula B, let {b1, . . . , bn} be the propositional variables in B, listed in increasing order. Each assignment V : {b1, . . . , bn} → {1, 0} determines a unique boolean value V̄(B). Hence, each formula B determines a function B : {1, 0}^n → {1, 0}, given by the equation B(x1, . . . , xn) = {b1 ↦ x1, . . . , bn ↦ xn}(B).

Example 5.7 Suppose a and b are in Σ, and a comes before b in the ordering. Then (a → b) determines the function →, while (b → a) determines the function ← with the boolean table shown below.

x  y  x → y  x ← y
1  1  1      1
1  0  0      1
0  1  1      0
0  0  1      1
                                                              2

Observe that although for a given n there are exactly 2^(2^n) boolean functions, there are infinitely many formulae over n propositional variables. Thus, different formulae will often determine the same boolean function. Deciding which formulae determine the same functions is an important problem which we will soon encounter.
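Whether two formulae determine the same boolean function can be checked by brute force: evaluate both under all 2^n assignments. A sketch, again under our own tuple representation of formulae (not the book's notation):

```python
from itertools import product

def v_bar(V, A):
    """Induced valuation (cf. Definition 5.4); formulae as strings/tuples."""
    if isinstance(A, str):
        return V[A]
    if A[0] == 'not':
        return 1 - v_bar(V, A[1])
    return 0 if (v_bar(V, A[1]) == 1 and v_bar(V, A[2]) == 0) else 1

def same_function(A, B, variables):
    """Do A and B determine the same boolean function of the given variables?"""
    return all(
        v_bar(dict(zip(variables, bits)), A) == v_bar(dict(zip(variables, bits)), B)
        for bits in product([1, 0], repeat=len(variables)))

print(same_function('a', ('not', ('not', 'a')), ['a']))   # True: a and ¬¬a
print(same_function('a', ('not', 'a'), ['a']))            # False
```

Note that this runs through all 2^n rows of the table, which is exponential in the number of variables – a first hint that deciding such questions is not cheap.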

2: Semantic properties of formulae

A formula determines a boolean function, and we now list some semantic properties of formulae, i.e., properties which are actually the properties of such induced functions.

Definition 5.8 Let A, B ∈ WFFSL, and V be a valuation.

A is ...                             iff   condition holds                notation:
satisfied in V                       iff   V̄(A) = 1                      V |= A
not satisfied in V                   iff   V̄(A) = 0                      V 6|= A
valid/tautology                      iff   for all V : V |= A             |= A
not valid                            iff   there is a V : V 6|= A         6|= A
satisfiable                          iff   there is a V : V |= A
unsatisfiable/contradiction          iff   for all V : V 6|= A
(tauto)logical consequence of B      iff   B → A is valid                 B ⇒ A
(tauto)logically equivalent to B     iff   A ⇒ B and B ⇒ A                A ⇔ B


If A is satisfied in V, we say that V satisfies A; otherwise V falsifies A. (Sometimes one also says that A is valid in V when A is satisfied in V. But notice that validity of A in V does not mean or imply that A is valid (in general), only that it is satisfiable.) Valid formulae – those satisfied in all structures – are also called tautologies; the unsatisfiable ones contradictions, while the not valid ones are said to be falsifiable. Those which are both falsifiable and satisfiable, i.e., which are neither tautologies nor contradictions, are called contingent. A valuation which satisfies a formula A is also called a model of A.

Sets of formulae are sometimes called theories. Many of the properties defined for formulae are defined for theories as well. Thus a valuation is said to satisfy a theory iff it satisfies every formula in the theory. Such a valuation is also said to be a model of the theory. The class of all models of a given theory Γ is denoted Mod(Γ). Like a single formula, a set of formulae Γ is satisfiable iff it has a model, i.e., iff Mod(Γ) ≠ ∅.

Example 5.9 a → b is not a tautology – assign V(a) = 1 and V(b) = 0. Hence a ⇒ b does not hold. However, it is satisfiable, since it is true, for instance, under the valuation {a ↦ 1, b ↦ 1}. The formula is contingent. B → B evaluates to 1 for any valuation (and any B ∈ WFF_SL), and so B ⇒ B. As a last example, we have that B ⇔ ¬¬B.

B    B → B
0      1
1      1

B    ¬B    ¬¬B    B → ¬¬B    ¬¬B → B
0     1      0        1          1
1     0      1        1          1

2
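The classification of definition 5.8 is decidable by brute force: a formula over n variables is simply evaluated under all 2^n valuations. A small sketch (not from the book; the function and variable names are mine, and formulae are represented as Python functions on valuation dictionaries):

```python
from itertools import product

def implies(x, y):
    # boolean table of ->: 1 -> 0 is the only falsifying row
    return 0 if x == 1 and y == 0 else 1

def classify(formula, variables):
    """Classify a formula (given as a function of a valuation dict)
    by evaluating it under every valuation of its variables."""
    values = [formula(dict(zip(variables, row)))
              for row in product([0, 1], repeat=len(variables))]
    if all(v == 1 for v in values):
        return "tautology"
    if all(v == 0 for v in values):
        return "contradiction"
    return "contingent"
```

As in example 5.9, `classify(lambda v: implies(v["a"], v["b"]), ["a", "b"])` yields `"contingent"`, while B → B comes out as a tautology.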

The operators ⇒ and ⇔ are meta-connectives stating that the corresponding relation (→ and ↔, respectively) between the two formulae holds for all boolean assignments. These operators are therefore used only at the outermost level, as in A ⇒ B – we avoid writing something like A ⇔ (A ⇒ B) or A → (A ⇔ B).

Fact 5.10 We have the obvious relations between the sets of Sat(isfiable), Fal(sifiable), Taut(ological), Contr(adictory) and All formulae:
• Contr ⊂ Fal
• Taut ⊂ Sat
• Fal ∩ Sat ≠ ∅
• All = Taut ∪ Contr ∪ (Fal ∩ Sat)

3: Abbreviations

Intuitively, ¬ is supposed to express negation, and we read ¬B as "not B". → corresponds to implication: A → B is similar to "if A then B". These formal symbols and their semantics are not exact counterparts of the natural language expressions, but they do try to mimic the latter as far as possible. In natural language there are several other connectives but, as we will see in chapter 5, the two we have introduced for SL are all that is needed. We will, however, try to make our formulae shorter – and more readable – by using the following abbreviations:

Definition 5.11 We define the following abbreviations:
• A ∨ B =def ¬A → B, read as "A or B"
• A ∧ B =def ¬(A → ¬B), read as "A and B"
• A ↔ B =def (A → B) ∧ (B → A), read as "A if and only if B" (not to be confused with the provable equivalence from definition 4.20!)

Example 5.12 Some intuitive justification for the reading of these abbreviations comes from the boolean tables for the functions they denote. For instance, the table for ∧ is constructed according to its definition:

x    y    ¬y    x → ¬y    x ∧ y = ¬(x → ¬y)
1    1     0       0              1
1    0     1       1              0
0    1     0       1              0
0    0     1       1              0


Statement Logic

Thus A ∧ B evaluates to 1 (true) iff both components are true. (In exercise 5.1 you are asked to do the analogous thing for ∨.) 2
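Definition 5.11 can be checked mechanically: defining ∨ and ∧ from ¬ and → alone reproduces exactly the familiar truth tables. A sketch (the function names are mine, not the book's):

```python
def impl(x, y):
    # the boolean table of ->
    return 0 if x == 1 and y == 0 else 1

def neg(x):
    return 1 - x

def disj(x, y):
    # A v B  :=  !A -> B   (definition 5.11)
    return impl(neg(x), y)

def conj(x, y):
    # A ^ B  :=  !(A -> !B)   (definition 5.11)
    return neg(impl(x, neg(y)))
```

Over {0, 1} the derived `conj` coincides with `min` and `disj` with `max`, which is the intended reading of "and" and "or".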

4: Sets, Propositions and Boolean Algebras We have defined the semantics of SL by interpreting the connectives as functions over B. Some consequences, in the form of laws which follow from this definition, are listed in subsection 4.1. In subsection 4.2 we observe a close relationship to the laws obeyed by the set operations, and then define an alternative semantics of the language of SL based on set-interpretation. Finally, in subsection 4.3, we gather these similarities in the common concept of a boolean algebra.

4.1: Laws The definitions of semantics of the connectives together with the introduced abbreviations entitle us to conclude validity of some laws for SL. 1. Idempotency A∨A ⇔ A A∧A ⇔ A

2. Associativity (A ∨ B) ∨ C ⇔ A ∨ (B ∨ C) (A ∧ B) ∧ C ⇔ A ∧ (B ∧ C)

3. Commutativity A∨B ⇔ B∨A A∧B ⇔ B∧A

4. Distributivity A ∨ (B ∧ C) ⇔ (A ∨ B) ∧ (A ∨ C) A ∧ (B ∨ C) ⇔ (A ∧ B) ∨ (A ∧ C)

5. DeMorgan ¬(A ∨ B) ⇔ ¬A ∧ ¬B ¬(A ∧ B) ⇔ ¬A ∨ ¬B

6. Conditional A → B ⇔ ¬A ∨ B A → B ⇔ ¬B → ¬A

For instance, idempotency of ∧ is verified directly from the definition of ∧, as follows:

A    ¬A    A → ¬A    ¬(A → ¬A) = A ∧ A
1     0       0               1
0     1       1               0

The other laws can (and should) be verified in a similar manner. A ⇔ B means that for all valuations (of the propositional variables occurring in A and B) the truth values of both formulae are the same. This means almost that they determine the same function, with one restriction which is discussed in exercise 5.10. For any A, B, C the two formulae (A ∧ B) ∧ C and A ∧ (B ∧ C) are distinct. However, as they are tautologically equivalent it is not always a very urgent matter to distinguish between them. In general, there are a great many ways to insert the missing parentheses in an expression like A1 ∧ A2 ∧ . . . ∧ An , but since they all yield equivalent formulae we usually do not care where these parentheses go. Hence for a sequence A1 , A2 , . . . , An of formulae we may just talk about their conjunction and mean any formula obtained by supplying missing parentheses to the expression A1 ∧ A2 ∧ . . . ∧ An . Analogously, the disjunction of A1 , A2 , . . . , An is any formula obtained by supplying missing parentheses to the expression A1 ∨ A2 ∨ . . . ∨ An . Moreover, the laws of commutativity and idempotency tell us that order and repetition don’t matter either. Hence we may talk about the conjunction of the formulae in some finite set, and mean any conjunction formed by the elements in some order or other. Similarly for disjunction. The elements A1 , . . . , An of a conjunction A1 ∧ . . . ∧ An are called the conjuncts. The term disjunct is used analogously.
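Each of the laws 1.–6. can be verified the same way as idempotency above: compare the two sides under every valuation. A brute-force sketch (not from the book; names are mine):

```python
from itertools import product

def impl(x, y):
    return 0 if x == 1 and y == 0 else 1

def equivalent(f, g, n):
    """Tautological equivalence: f and g agree under all 2**n valuations."""
    return all(f(*v) == g(*v) for v in product([0, 1], repeat=n))

# De Morgan:      !(A v B)  <=>  !A ^ !B
de_morgan = equivalent(lambda a, b: 1 - (a | b),
                       lambda a, b: (1 - a) & (1 - b), 2)
# Conditional:    A -> B    <=>  !B -> !A
contrapos = equivalent(lambda a, b: impl(a, b),
                       lambda a, b: impl(1 - b, 1 - a), 2)
```

Both checks return `True`; a false law, e.g. A ∨ B ⇔ A ∧ B, would fail already on the valuation a = 1, b = 0.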

4.2: Sets and SL Compare the set laws 1.– 5. from page 1 with the tautological equivalences from the previous subsection. It is easy to see that they have “corresponding form” and can be obtained from each


other by the following translations:

set expression               –   statement
set variable a, b, ...       –   propositional variable a, b, ...
′ (complement)               –   ¬
∩                            –   ∧
∪                            –   ∨
=                            –   ⇔

One also translates:

U                            –   ⊤
∅                            –   ⊥

Remark 5.13 [Formula- vs. set-operations] Although there is some sense of a connection between the subset relation ⊆ and implication →, the two have very different functions. The latter allows us to construct new propositions. The former, ⊆, is not a set-building operation: A ⊆ B does not denote a (new) set but states a relation between two sets. The consistency principles are not translated because they are not so much laws as definitions introducing a new relation ⊆ which holds only under the specified conditions. In order to find a set operation corresponding to →, we should reformulate the syntactic definition 5.11 and verify that A → B ⇔ ¬A ∨ B. The corresponding set-building operation – call it ⊃ – would then be defined by A ⊃ B =def A′ ∪ B. We do not have a propositional counterpart of the set difference operation \, and so the second complement law A \ B = A ∩ B′ has no direct propositional form. However, this law says that in the propositional case we can simply use the expression A ∧ ¬B corresponding to A ∩ B′. We may translate the remaining set laws, e.g., 3. A ∩ A′ = ∅ as A ∧ ¬A ⇔ ⊥, etc. Using the definition of ∧, we then get ⊥ ⇔ A ∧ ¬A ⇔ ¬A ∧ A ⇔ ¬(¬A → ¬A), which is an instance of the formula ¬(B → B). 2

Let us see if we can discover the reason for this exact match of laws. For the time being, let us ignore the superficial differences of syntax and settle for the logical symbols on the right. Expressions built up from Σ with the use of these we call boolean expressions, BE_Σ. As an alternative to a valuation V : Σ → {0, 1} we may consider a set-valuation SV : Σ → ℘(U), where U is any non-empty set. Thus, instead of the boolean-value semantics in the set B, we are defining a set-valued semantics in an arbitrary set U. Such an SV can be extended to SV : BE_Σ → ℘(U) according to the rules:

SV(a) = SV(a) for all a ∈ Σ
SV(⊤) = U
SV(⊥) = ∅
SV(¬A) = U \ SV(A)
SV(A ∧ B) = SV(A) ∩ SV(B)
SV(A ∨ B) = SV(A) ∪ SV(B)

Lemma 5.14 Let x ∈ U be arbitrary, and let V : Σ → {1, 0} and SV : Σ → ℘(U) be such that for all a ∈ Σ we have x ∈ SV(a) iff V(a) = 1. Then for all A ∈ BE_Σ we have x ∈ SV(A) iff V(A) = 1.

Proof By induction on the complexity of A. Everything follows from the boolean tables of ⊤, ⊥, ¬, ∧, ∨ and the observations below:

x ∈ U        –   always
x ∈ ∅        –   never
x ∈ P′       iff  x ∉ P
x ∈ P ∩ Q    iff  x ∈ P and x ∈ Q
x ∈ P ∪ Q    iff  x ∈ P or x ∈ Q

QED (5.14)

Example 5.15 Let Σ = {a, b, c}, U = {4, 5, 6, 7} and choose x ∈ U to be 4. The upper table shows an example of a valuation and a set-valuation satisfying the conditions of the lemma, and the lower table the values of some formulae (boolean expressions) under these valuations.

Σ    V    SV
a    1    {4, 5}
b    1    {4, 6}
c    0    {5, 7}

BE_SL       V    SV
a ∧ b       1    {4}
¬a          0    {6, 7}
a ∨ c       1    {4, 5, 7}
¬(a ∨ c)    0    {6}

The four formulae illustrate the general fact that, for any A ∈ BE_SL, we have V(A) = 1 ⇔ 4 ∈ SV(A). 2
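Example 5.15 can be replayed directly with Python sets; the checks at the end restate the lemma's conclusion for x = 4 (a sketch, not from the book):

```python
U = {4, 5, 6, 7}
SV = {"a": {4, 5}, "b": {4, 6}, "c": {5, 7}}
x = 4
# the induced valuation: a, b -> 1, c -> 0
V = {p: int(x in s) for p, s in SV.items()}

# the four boolean expressions of example 5.15, interpreted as sets
values = {
    "a&b":    SV["a"] & SV["b"],
    "!a":     U - SV["a"],
    "a|c":    SV["a"] | SV["c"],
    "!(a|c)": U - (SV["a"] | SV["c"]),
}
```

Each resulting set contains 4 exactly when the corresponding V-value is 1, as lemma 5.14 requires.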

The set-identities on page 1 say that the BE's on each side of an identity are interpreted identically by any set-valuation. Hence the corollary below expresses the correspondence between the set-identities and the tautological equivalences.

Corollary 5.16 Let A, B ∈ BE. Then SV(A) = SV(B) for all set-valuations SV iff A ⇔ B.

Proof The idea is to show that for every set-valuation that interprets A and B differently, there is some valuation that interprets them differently, and conversely.
⇐) First suppose SV(A) ≠ SV(B). Then there is some x ∈ U that is contained in one but not the other. Let V_x be the valuation such that for all a ∈ Σ, V_x(a) = 1 iff x ∈ SV(a). Then V_x(A) ≠ V_x(B) follows from lemma 5.14.
⇒) Now suppose V(A) ≠ V(B). Let V_set be the set-valuation into ℘({1}) such that for all a ∈ Σ, 1 ∈ V_set(a) iff V(a) = 1. Again lemma 5.14 applies, and V_set(A) ≠ V_set(B) follows.

QED (5.16)
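Corollary 5.16 can be illustrated by brute force for a particular identity: De Morgan checked both as a set identity over all subsets of a small U and as a tautological equivalence over {0, 1}. A sketch (not from the book):

```python
from itertools import product

U = frozenset({1, 2})
SUBSETS = [frozenset(s) for s in [(), (1,), (2,), (1, 2)]]

# set side: (A u B)' = A' n B' for all A, B in the power set of U
set_side = all(U - (A | B) == (U - A) & (U - B)
               for A, B in product(SUBSETS, repeat=2))

# propositional side: !(a v b) <=> !a ^ !b for all valuations
sl_side = all(1 - (a | b) == (1 - a) & (1 - b)
              for a, b in product([0, 1], repeat=2))
```

Both sides come out valid together, exactly as the corollary predicts; a non-law such as A ∪ B = A ∩ B fails on both sides.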

This corollary provides an explanation for the validity of essentially the same laws for statement logic and for sets. These laws were universal, i.e., they stated equality of some set expressions for all possible sets and, on the other hand, logical equivalence of the corresponding logical formulae for all possible valuations. We can now rewrite any valid equality A = B between set expressions as A′ ⇔ B′, where the primed symbols indicate the corresponding logical formulae, and vice versa. The corollary says that one is valid if and only if the other one is. Let us reflect briefly on this quite significant result. First, observe that the semantics with which we started, namely, the one interpreting connectives and formulae over the set B, turns out to be a special case of the set-based semantics. We said that B may be an arbitrary two-element set. Now, take U = {•}; then ℘(U) = {∅, {•}} has two elements. Using • as the "designated" element x (the x from lemma 5.14), the set-based semantics over this set will coincide with the propositional semantics which identifies ∅ with 0 and {•} with 1. Reinterpreting the corollary with this in mind, i.e., substituting ℘({•}) for B, tells us that A = B is valid (in all possible ℘(U) for all possible assignments) iff it is valid in ℘({•})! In other words, to check if some set equality holds under all possible interpretations of the involved set variables, it is enough to check if it holds under all possible interpretations of these variables in the structure ℘({•}). (One says that this structure is a canonical representative of all such set-based interpretations of propositional logic.) We have thus reduced a problem which might seem to involve infinitely many possibilities (all possible sets standing for each variable) to the simple task of checking the equality when substituting only {•} or ∅ for the involved variables.

4.3: Boolean Algebras

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . [optional]

The discussion in subsection 4.2 shows the concrete connection between the set interpretation and the standard interpretation of the language of SL. The fact that both set operations and


(functions interpreting the) propositional connectives obey essentially the same laws can, however, be stated more abstractly – they are both examples of yet other, general structures called "boolean algebras".

Definition 5.17 The language of boolean algebra is given by 1) the set of boolean expressions, BE_Σ, relative to a given alphabet Σ of variables:

Basis :: 0, 1 ∈ BE_Σ and Σ ⊂ BE_Σ
Induction :: If t ∈ BE_Σ then −t ∈ BE_Σ
          :: If s, t ∈ BE_Σ then (s + t) ∈ BE_Σ and (s ∗ t) ∈ BE_Σ

and by 2) the formulae, which are equations s ≡ t where s, t ∈ BE_Σ. A boolean algebra is any set X with an interpretation

• of 0, 1 as constants 0, 1 ∈ X ("bottom" and "top");
• of − as a unary operation − : X → X ("complement");
• of +, ∗ as binary operations +, ∗ : X² → X ("join" and "meet");
• of ≡ as identity, =,

satisfying the following axioms:

1. Neutral elements:   x + 0 ≡ x                          x ∗ 1 ≡ x
2. Associativity:      (x + y) + z ≡ x + (y + z)          (x ∗ y) ∗ z ≡ x ∗ (y ∗ z)
3. Commutativity:      x + y ≡ y + x                      x ∗ y ≡ y ∗ x
4. Distributivity:     x + (y ∗ z) ≡ (x + y) ∗ (x + z)    x ∗ (y + z) ≡ (x ∗ y) + (x ∗ z)
5. Complement:         x ∗ (−x) ≡ 0                       x + (−x) ≡ 1
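That ℘(U) with ∪, ∩ and complement satisfies axioms 1.–5. (under + as ∪, ∗ as ∩, − as ′, 0 as ∅, 1 as U) can be confirmed by exhausting a small universe. A sketch assuming U = {1, 2} (not from the book):

```python
from itertools import product

U = frozenset({1, 2})
PU = [frozenset(s) for s in [(), (1,), (2,), (1, 2)]]   # power set of U

join = lambda x, y: x | y        # + as union
meet = lambda x, y: x & y        # * as intersection
comp = lambda x: U - x           # - as complement w.r.t. U
bot, top = frozenset(), U        # 0 and 1

axioms_hold = all(
    join(x, bot) == x and meet(x, top) == x                    # 1. neutral elements
    and join(join(x, y), z) == join(x, join(y, z))             # 2. associativity
    and join(x, y) == join(y, x) and meet(x, y) == meet(y, x)  # 3. commutativity
    and join(x, meet(y, z)) == meet(join(x, y), join(x, z))    # 4. distributivity
    and meet(x, comp(x)) == bot and join(x, comp(x)) == top    # 5. complement
    for x, y, z in product(PU, repeat=3)
)
```

Exhausting all 4³ triples of subsets confirms `axioms_hold`; of course this checks only one particular U, whereas the set laws hold for every U.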

Be wary of confusing the meaning of the symbols "0, 1, +, −, ∗" above with the usual meaning of arithmetic zero, one, plus, etc. – they have nothing in common, except for the superficial syntax! Roughly speaking, the word "algebra" stands here for the fact that the only formulae are equalities and reasoning happens by using the properties of equality: reflexivity (x ≡ x), symmetry (from x ≡ y conclude y ≡ x), transitivity (from x ≡ y and y ≡ z conclude x ≡ z), and by "substituting equals for equals", according to the rule:

(F.xv)   from g[x] ≡ z and x ≡ y, derive g[y] ≡ z.

(Compare this to the provable equivalence from theorem 4.21, in particular, the rule from 4.22.) Other laws, which we listed before, are derivable in this manner from the above axioms. For instance,

• Idempotency of ∗, i.e.

(F.xvi)   x ≡ x ∗ x

is shown as follows (the axiom used at each step is given in brackets):

x ≡[1] x ∗ 1 ≡[5] x ∗ (x + (−x)) ≡[4] (x ∗ x) + (x ∗ (−x)) ≡[5] (x ∗ x) + 0 ≡[1] x ∗ x

(Similarly, x ≡ x + x.)

• Another fact is a form of absorption:

(F.xvii)   0 ∗ x ≡ 0   and   x + 1 ≡ 1

0 ∗ x ≡[5] (x ∗ (−x)) ∗ x ≡[3] ((−x) ∗ x) ∗ x ≡[2] (−x) ∗ (x ∗ x) ≡[F.xvi] (−x) ∗ x ≡[5] 0
x + 1 ≡[5] x + (x + (−x)) ≡[2] (x + x) + (−x) ≡[F.xvi] x + (−x) ≡[5] 1

• The complement of any x is determined uniquely by the two properties from 5.; namely, any y satisfying both these properties is necessarily x's complement:

(F.xviii)   if a) x + y ≡ 1 and b) y ∗ x ≡ 0, then y ≡ −x

y ≡[1] y ∗ 1 ≡[5] y ∗ (x + (−x)) ≡[4] (y ∗ x) + (y ∗ (−x)) ≡[b] 0 + (y ∗ (−x)) ≡[5] (x ∗ (−x)) + (y ∗ (−x)) ≡[3,4] (x + y) ∗ (−x) ≡[a] 1 ∗ (−x) ≡[3,1] −x


• Involution:

(F.xix)   −(−x) ≡ x

follows from (F.xviii). By 5. we have x ∗ (−x) ≡ 0 and x + (−x) ≡ 1 which, by (F.xviii) (taking −x for x and x for y), imply that x ≡ −(−x).

The new notation used in definition 5.17 was meant to emphasize the fact that boolean algebras are more general structures. It should be obvious, however, that the intended interpretation of these new symbols was as follows:

boolean algebra    sets ℘(U)      SL
x                  x ∈ ℘(U)       x ∈ {1, 0}
x + y              x ∪ y          x ∨ y
x ∗ y              x ∩ y          x ∧ y
−x                 x′             ¬x
0                  ∅              0
1                  U              1
≡                  =              ⇔

The fact that any set ℘(U) obeys the set laws from page 30, and that the set B = {1, 0} obeys the SL-laws from 4.1, amounts to the statement that these structures are, in fact, boolean algebras under the above interpretation of the boolean operations. (Not all the axioms of boolean algebras were included, so one has to verify, for instance, the laws for neutral elements and complement, but this is an easy task.) Thus, all the above formulae (F.xvi)–(F.xix) will be valid for these structures under the interpretation from the table above, i.e.,

℘(U) law       boolean algebra law    SL law
A ∩ A = A      x ∗ x ≡ x              A ∧ A ⇔ A      (F.xvi)
∅ ∩ A = ∅      0 ∗ x ≡ 0              ⊥ ∧ A ⇔ ⊥      (F.xvii)
U ∪ A = U      1 + x ≡ 1              ⊤ ∨ A ⇔ ⊤      (F.xvii)
(A′)′ = A      −(−x) ≡ x              ¬(¬A) ⇔ A      (F.xix)

The last fact for SL was, for instance, verified at the end of example 5.9. Thus the two possible semantics for our WFF_SL – namely, the set {1, 0} with the logical interpretation of the (boolean) connectives as ¬, ∧, etc. on the one hand, and an arbitrary ℘(U) with the interpretation of the (boolean) connectives as ′, ∩, etc. on the other – are both boolean algebras. Now, we said that boolean algebras come with a reasoning system – equational logic – which allows us to prove equations A ≡ B, where A, B ∈ BE. On the other hand, the axiomatic systems for SL, e.g., Hilbert's system ⊢H, proved only single formulae: ⊢H A. Are these two reasoning systems related in some way? They are, indeed, but we will not study the precise relationship in detail. At this point we only state the following fact: if ⊢H A, then the equation A ≡ 1 is also provable in equational logic, where A ≡ 1 is obtained by replacing all subformulae x → y by the respective expressions −x + y (recall x → y ⇔ ¬x ∨ y). For instance, the equation corresponding to the first axiom of ⊢H, A → (B → A), is obtained by translating → to the equivalent boolean expression: (−A + −B + A) ≡ 1. You may easily verify provability of this equation from axioms 1.–5., as well as that it holds under the set interpretation – for any sets A, B ⊆ U: A′ ∪ B′ ∪ A = U. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . [end optional]
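The last claim of the optional subsection – that the translation of the Hilbert axiom A → (B → A) yields an equation valid in the two-element boolean algebra and under the set interpretation – is easy to confirm by enumeration. A sketch (not from the book):

```python
from itertools import product

# In B = {0, 1}: + is max, - is 1-x; the translated axiom is -A + (-B + A).
in_B = all(max(1 - a, max(1 - b, a)) == 1
           for a, b in product([0, 1], repeat=2))

# Set interpretation over U = {1, 2}: A' u B' u A = U for all A, B subsets of U.
U = frozenset({1, 2})
PU = [frozenset(s) for s in [(), (1,), (2,), (1, 2)]]
in_sets = all((U - A) | (U - B) | A == U for A, B in product(PU, repeat=2))
```

Both enumerations succeed; of course this only checks the semantic validity, not the equational provability from axioms 1.–5.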

Exercises (week 5) exercise 5.1+ Recall example 5.12 and set up the boolean table for the formula a ∨ b where ∨ is trying to represent “or”. Consider if it is possible – and if so, then how – to represent the following statements using your definition: 1. x < y or x = y. 2. John is ill or Paul is ill. 3. Either we go to cinema or we stay at home. exercise 5.2+ Write the boolean tables for the following formulae and decide to which among the four classes from Fact 5.10 (Definition 5.8) they belong:


1. a → (b → a)
2. (a → (b → c)) → ((a → b) → (a → c))
3. (¬b → ¬a) → (a → b)

exercise 5.3+ Rewrite the laws 1. Neutral elements and 5. Complement of boolean algebra to their propositional form and verify their validity using boolean tables. (Use ⊤ for 1 and ⊥ for 0.)

exercise 5.4+ Verify whether (a → b) → b is a tautology. Is the following proof correct? If not, what is wrong with it?

1: A, A → B ⊢N A → B       A0
2: A, A → B ⊢N A           A0
3: A → B ⊢N B              MP(2, 1)
4: ⊢N (A → B) → B          DT

exercise 5.5+ Verify the following facts:
1. A1 → (A2 → (A3 → B)) ⇔ (A1 ∧ A2 ∧ A3) → B.
2. (A ∧ (A → B)) ⇒ B
Use boolean tables to show that the following formulae are contradictions:
3. ¬(B → B)
4. ¬(B ∨ C) ∧ C
Determine now what sets are denoted by these two expressions – for the set-interpretation of → recall remark 5.13.

exercise 5.6+ Show which of the following pairs are equivalent:
1. A → (B → C)  ?  (A → B) → C
2. A → (B → C)  ?  B → (A → C)
3. A ∧ ¬B  ?  ¬(A → B)

exercise 5.7+ Prove a result analogous to corollary 5.16 for ⊆ and ⇒ instead of = and ⇔. exercise 5.8+ Use point 3. from exercise 5.6 to verify that (C ∧ D) → (A ∧ ¬B) and (C ∧ D) → ¬(A → B) are equivalent. How does this earlier exercise simplify the work here? exercise 5.9 (Compositionality and substitutivity) Let F [A] be an arbitrary formula with a subformula A, and V be a valuation of all propositional variables occurring in F [A]. It induces the valuation V (F [A]) of the whole formula. Let B be another formula and assume that for all valuations V , V (A) = V (B). Use induction on the complexity of F [ ] to show that then V (F [A]) = V (F [B]), for all valuations V . (Hint: The structure of the proof will be similar to that of theorem 4.21. Observe, however, that here you are proving a completely different fact concerning not the provability relation but the semantic interpretation of the formulae – not their provable but tautological equivalence.)

exercise 5.10 Tautological equivalence A ⇔ B amounts almost to the fact that A and B have the same interpretation. We have to make the meaning of this “almost” more precise. 1. Show that neither of the two relations A ⇔ B and A = B imply the other, i.e., give examples of A and B such that (a) A ⇔ B but A 6= B and (b) A = B but not A ⇔ B. (Hint: Use extra/different propositional variables not affecting the truth of the formula.)

2. Explain why the two relations are the same whenever A and B contain the same variables. 3. Finally explain that if A = B then there exists some formula C obtained from B by “renaming” the propositional variables, such that A ⇔ C.


exercise 5.11 Let Φ be an arbitrary, possibly infinite, set of formulae. The following conventions generalize the notion of (satisfaction of) binary conjunction/disjunction to such arbitrary sets. Given a valuation V, we say that Φ's:
• conjunction is true under V, V(⋀Φ) = 1, iff for all formulae A: if A ∈ Φ then V(A) = 1.
• disjunction is true under V, V(⋁Φ) = 1, iff there exists an A such that A ∈ Φ and V(A) = 1.
Let now Φ be a set containing zero or one formulae. What would be the most natural interpretations of the expressions "the conjunction of formulae in Φ" and "the disjunction of formulae in Φ"?


III.3. Soundness and Completeness

Chapter 6 Soundness and Completeness

• Adequate Sets of Connectives
• Normal Forms – DNF – CNF
• Soundness of N and H
• Completeness of N and H

This chapter focuses on the relations between the syntax and axiomatic systems for SL and their semantic counterpart. Before we discuss the central concepts of soundness and completeness, we first ask about the 'expressive power' of the language we have introduced. The expressive power of a language for statement logic can be identified with the possibilities it provides for defining various boolean functions. In section 1 we show that all boolean functions can be defined by the formulae of our language. Section 2 explores a useful consequence of this fact, showing that each formula can be written equivalently in a special normal form. The rest of the chapter then studies soundness and completeness of our axiomatic systems.

1: Adequate Sets of Connectives

This and the next section study the relation we have established in definition 5.6 between formulae of SL and boolean functions (on our two-element set B). According to this definition, any SL formula defines a boolean function. The question now is the opposite: can any boolean function be defined by some formula of SL? Introducing the abbreviations ∧, ∨ and others in chapter 5, section 3, we remarked that they are not necessary but merely convenient. The meaning of their being "not necessary" is that any function which can be defined by a formula containing these connectives can also be defined by a formula which does not contain them. E.g., a function defined using ∨ can also be defined using ¬ and →. Concerning our main question, we need a stronger notion, namely the notion of a complete, or adequate, set of connectives, which is sufficient to define all boolean functions.

Definition 6.1 A set AS of connectives is adequate if for every n > 0, every boolean function f : {1, 0}^n → {1, 0} is determined by some formula containing only the connectives from the set AS.

Certainly, not every set is adequate. If we take, for example, the set with only negation, {¬}, it is easy to see that it cannot be adequate. Negation is a unary operation, so it will never give rise to, for instance, a function with two arguments. But it won't even be able to define all unary functions. It can be used to define only two functions B → B – inverse (i.e., ¬ itself) and identity (¬¬):

x    ¬(x)    ¬¬(x)
1     0        1
0     1        0

(The proof-theoretic counterpart of the fact that ¬¬ is identity was lemma 4.10 showing provable equivalence of B and ¬¬B.) Any further applications of ¬ will yield one of these two functions. The constant functions (f (x) = 1 or f (x) = 0) can not be defined using exclusively this single connective. The following theorem identifies the first adequate set. Theorem 6.2 {¬, ∧, ∨} is an adequate set.


Proof Let f : {1, 0}^n → {1, 0} be an arbitrary boolean function of n arguments (for some n > 0) with a given boolean table. If f always produces 0, then the contradiction (a1 ∧ ¬a1) ∨ ... ∨ (an ∧ ¬an) determines f. For the case when f produces 1 for at least one tuple of arguments, we give the proof interleaved with a running example.

Let a1, a2, ..., an be distinct propositional variables. The boolean table for f has 2^n rows; let a^r_c denote the entry in the c-th column and r-th row. Example (n = 2):

a1   a2   f(a1, a2)
 1    1       0
 1    0       1
 0    1       1
 0    0       0

For each 1 ≤ r ≤ 2^n and 1 ≤ c ≤ n let

L^r_c = a_c if a^r_c = 1, and L^r_c = ¬a_c if a^r_c = 0.

In the example: L¹₁ = a1, L¹₂ = a2; L²₁ = a1, L²₂ = ¬a2; L³₁ = ¬a1, L³₂ = a2; L⁴₁ = ¬a1, L⁴₂ = ¬a2.

For each row 1 ≤ r ≤ 2^n form the conjunction

C^r = L^r_1 ∧ L^r_2 ∧ ... ∧ L^r_n.

This means that for all rows r and p ≠ r: C^r(a^r_1, ..., a^r_n) = 1 and C^r(a^p_1, ..., a^p_n) = 0. In the example: C¹ = a1 ∧ a2, C² = a1 ∧ ¬a2, C³ = ¬a1 ∧ a2, C⁴ = ¬a1 ∧ ¬a2.

Let D be the disjunction of those C^r for which f(a^r_1, ..., a^r_n) = 1. In the example: D = C² ∨ C³ = (a1 ∧ ¬a2) ∨ (¬a1 ∧ a2).

The claim is: the boolean function determined by D is f, i.e., D = f. If f(a^r_1, ..., a^r_n) = 1, then D contains the corresponding disjunct C^r which, since C^r(a^r_1, ..., a^r_n) = 1, makes D(a^r_1, ..., a^r_n) = 1. If f(a^r_1, ..., a^r_n) = 0, then D does not contain the corresponding disjunct C^r. But for all p ≠ r we have C^p(a^r_1, ..., a^r_n) = 0, so none of the disjuncts in D will be 1 for these arguments, and hence D(a^r_1, ..., a^r_n) = 0. QED (6.2)

Corollary 6.3 The following sets of connectives are adequate: 1. {¬, ∨} 2. {¬, ∧} 3. {¬, →}

Proof 1. By De Morgan's law, A ∧ B ⇔ ¬(¬A ∨ ¬B). Thus we can express each conjunction by negations and disjunction. Using the distributive and associative laws, this allows us to rewrite the formula obtained in the proof of theorem 6.2 into an equivalent one without conjunction.

2. The same argument as above.

3. According to definition 5.11, A ∨ B =def ¬A → B. This, however, was a merely syntactic definition of a new symbol '∨'. Here we have to show that the boolean functions ¬ and → can be used to define the boolean function ∨. But this was done in exercise 5.1, where the semantics (boolean table) for ∨ was given according to definition 5.11, i.e., where A ∨ B ⇔ ¬A → B, required here, was shown. So the claim follows from point 1. QED (6.3)

Remark. Note that the definition of "adequate" does not require that any formula determines the functions from {1, 0}⁰ into {1, 0}. {1, 0}⁰ is a singleton set. There are two functions from it into {1, 0}, one constantly 1 and one constantly 0. These functions are not determined by any formula in the connectives ∧, ∨, →, ¬. The best approximations are tautologies and contradictions like (a → a) and ¬(a → a), which we in fact took to be the special formulae ⊤ and ⊥. Note, however, that these determine the functions {1 ↦ 1, 0 ↦ 1} and {1 ↦ 0, 0 ↦ 0}, which in a strict set-theoretic sense are distinct from the functions above. To obtain a set of connectives that is adequate in a stricter sense, one would have to introduce ⊤ or ⊥ as a special formula (in fact, a 0-argument connective) that is not considered to contain any propositional variables. 2
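The construction in the proof of theorem 6.2 is effectively an algorithm: read the table of f, keep the rows where f is 1, and output the disjunction of the corresponding conjunctions. A sketch (the function and variable names are mine, not the book's):

```python
from itertools import product

def dnf_for(f, names):
    """The DNF of theorem 6.2: one conjunction C^r (a list of literals)
    per row r of f's boolean table with value 1."""
    terms = []
    for row in product([1, 0], repeat=len(names)):
        if f(*row) == 1:
            terms.append([a if bit == 1 else "!" + a
                          for a, bit in zip(names, row)])
    return terms

def eval_dnf(terms, valuation):
    """Evaluate a DNF (list of lists of literals) under a valuation dict."""
    sat = lambda lit: valuation[lit.lstrip("!")] == (0 if lit.startswith("!") else 1)
    return int(any(all(sat(l) for l in term) for term in terms))
```

For the function of the proof's example (table 0, 1, 1, 0, i.e. exclusive or), `dnf_for` produces the two conjunctions a1 ∧ ¬a2 and ¬a1 ∧ a2, and the resulting DNF agrees with f on every row.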


2: DNF, CNF

The fact that, for instance, {¬, →} is an adequate set vastly reduces the need for elaborate syntax when studying propositional logic. We can (as we indeed have done) restrict the syntax of WFF_SL to the necessary minimum. This simplifies many proofs concerned with the syntax and the axiomatic systems, since such proofs typically involve induction on the definition (of WFF_SL, of ⊢, etc.). Adequacy of the set means that any entity (any function defined by a formula) has some specific, "normal" form using only the connectives from the adequate set. Now we will show that even more "normalization" can be achieved. Not only can every (boolean) function be defined by some formula using only the connectives from one adequate set – every such function can be defined by such a formula which, in addition, has a very specific form.

Definition 6.4 A formula B is in
1. disjunctive normal form, DNF, iff B = C1 ∨ ... ∨ Cn, where each Ci is a conjunction of literals;
2. conjunctive normal form, CNF, iff B = D1 ∧ ... ∧ Dn, where each Di is a disjunction of literals.

Example 6.5 Let Σ = {a, b, c}.
• (a ∧ b) ∨ (¬a ∧ ¬b) and (a ∧ b ∧ ¬c) ∨ (¬a ∧ c) are both in DNF
• a ∨ b and a ∧ b are both in DNF and CNF
• (a ∨ (b ∧ c)) ∧ (¬b ∨ a) is neither in DNF nor in CNF
• (a ∨ b) ∧ c ∧ (¬a ∨ ¬b ∨ ¬c) is in CNF but not in DNF
• (a ∧ b) ∨ (¬a ∨ ¬b) is in DNF but not in CNF.
The last formula can be transformed into CNF by applying laws like those from 5.4.1 on p. 94. The distributivity and associativity laws yield: (a ∧ b) ∨ (¬a ∨ ¬b) ⇔ (a ∨ ¬a ∨ ¬b) ∧ (b ∨ ¬a ∨ ¬b), and the formula on the right-hand side is in CNF.

2

Recall the form of the formula constructed in the proof of theorem 6.2 – it was in DNF! Thus, this proof tells us not only that the set {¬, ∧, ∨} is adequate but also

Corollary 6.6 Each formula B is logically equivalent to a formula B_D in DNF.

Proof For any B there is a D in DNF such that B = D (i.e., determining the same boolean function). By "renaming" the propositional variables of D (see exercise 5.10) one obtains a new formula B_D in DNF such that B ⇔ B_D. QED (6.6)

We now use this corollary to show the next one.

Corollary 6.7 Each formula B is logically equivalent to a formula B_C in CNF.

Proof By corollary 6.3, we may assume that the only connectives in B are ¬ and ∧. By corollary 6.6, we may also assume that any formula is equivalent to a DNF formula. We now proceed by induction on the complexity of B:

a :: A propositional variable is a conjunction over one literal, and hence is in CNF.
¬A :: By corollary 6.6, A is equivalent to a formula A_D in DNF. Exercise 6.10 allows us to conclude that B is equivalent to a B_C in CNF.
C ∧ A :: By the IH, both C and A have CNFs: C_C, A_C. Then C_C ∧ A_C is easily transformed into CNF (using the associative laws), i.e., we obtain an equivalent B_C in CNF. QED (6.7)


3: Soundness

♦ A Background Story ♦

The library offers its customers the possibility of ordering books on the internet. From the main page one may ask the system to find the book one wishes to borrow. (We assume that an appropriate search engine will always find the book one is looking for, or else give a message that it could not be identified. In the sequel we consider only the case when the book you asked for was found.) The book (found by the system) may happen to be immediately available for loan. In this case, you may just reserve it and our story ends here. But the most frequent case is that the book is on loan or else must be borrowed from another library. In such a case, the system gives you the possibility to order it: you mark the book and the system will send you a message as soon as the book becomes available. (You need no message as long as the book is not available, and the system need not inform you about that.) Simplicity of this scenario notwithstanding, this is actually our whole story. There are two distinct assumptions which make us rely on the system when we order a book. The first is that when you get the message that the book is available, it really is. The system will not play games with you, saying "Hi, the book is here" while it is still on loan to another user. We trust that what the system says ("The book is here") is true. This property is what we call "soundness" of the system – it never provides us with false information. But there is also another important aspect making up our trust in the system. Suppose that the book actually becomes available, but you do not get the appropriate message. The system is still sound – it does not give you any wrong information – but only because it does not give you any information whatsoever. It keeps silent although it should have said that the book is there and you can borrow it.
The other aspect of our trust is that whenever there is a fact to be reported ('the book became available'), the system will do it – this is what we call "completeness". Just like a system may be sound without being complete (keep silent even though the book arrived at the library), it may be complete without being sound. If it constantly sent messages that the book you ordered was available, it would, sooner or later (namely, when the book eventually became available), report the true fact. However, in the meantime, it would provide you with a series of incorrect messages – it would be unsound. Thus, soundness of a system means that whatever it says is correct: it says "The book is here" only if it is here. Completeness means that everything that is correct will be said by the system: it says "The book is here" if (always when) the book is here. In the latter case, we should pay attention to the phrase "everything that is correct". It makes sense because our setting is very limited. We have one command 'order the book ...', and one possible response of the system: the message that the book became available. "Everything that is correct" means here simply that the book you ordered actually is available. It is only this limited context (i.e., a limited and well-defined amount of true facts) which makes the notion of completeness meaningful. In connection with axiomatic systems one often resorts to another analogy. The axioms and the deduction rules together define the scope of the system's knowledge about the world. If all aspects of this knowledge (all the theorems) are true about the world, the system is sound. This idea has enough intuitive content to be grasped with reference to vague notions of 'knowledge', 'the world', etc., and our illustration with the system saying "The book is here" only when it actually is merely makes it more specific.
Completeness, on the other hand, would mean that everything that is true about the world (and expressible in the actual language) is also reflected in the system's knowledge (theorems). Here it becomes less clear what the intuitive content of 'completeness' might be. What can one possibly mean by "everything that is true"? In our library example, the user and the system use only a very limited language, allowing the user to 'order the book ...' and the system to state that it is available. Thus, the possible meaning of "everything" is limited to the book being available or not. One should keep this difference between


III.3. Soundness and Completeness

‘real world’ and ‘availability of a book’ in mind because the notion of completeness is as unnatural in the context of natural language and real world, as it is adequate in the context of bounded, sharply delineated worlds of formal semantics. The limited expressiveness of a formal language plays here crucial role of limiting the discourse to a well-defined set of expressible facts. The library system should be both sound and complete to be useful. For axiomatic systems, the minimal requirement is that that they are sound – completeness is a desirable feature which, typically, is much harder to prove. Also, it is known that there are axiomatic systems which are sound but inherently incomplete. We will not study such systems but merely mention one in the last chapter, theorem 11.15. 7 ♦

♦

Definition 5.8 introduced, among other concepts, the validity relation ⊨ A, stating that A is satisfied by all structures. On the other hand, we studied the syntactic notion of a proof in a given axiomatic system, which we wrote as ⊢C A. We also saw a generalization of the provability predicate ⊢H in Hilbert's system to the relation Γ ⊢N A, where Γ is a theory – a set of formulae. We now define the semantic relation Γ ⊨ A of "A being a (tauto)logical consequence of Γ".

Definition 6.8 Given a set Γ ⊆ WFF_SL and A ∈ WFF_SL we write:
• V ⊨ Γ iff V ⊨ G for all G ∈ Γ; such a valuation V is called a model of Γ
• Mod(Γ) = {V : V ⊨ Γ} – the set of all models of Γ
• ⊨ Γ iff V ⊨ Γ for all V
• Γ ⊨ A iff V ⊨ A for all V ∈ Mod(Γ), i.e., ∀V : V ⊨ Γ ⇒ V ⊨ A
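For a finite Σ, Definition 6.8 can be checked by brute force: enumerate all valuations over the variables, keep those in Mod(Γ), and test whether every one of them satisfies A. A minimal sketch in Python (the tuple encoding of formulae is hypothetical, chosen only for this illustration):

```python
from itertools import product

# Formulae: a variable name such as 'a', or ('not', F), or ('imp', F, G).
def value(F, V):
    """[[F]] under a valuation V, given as a dict from variables to 0/1."""
    if isinstance(F, str):
        return V[F]
    if F[0] == 'not':
        return 1 - value(F[1], V)
    return max(1 - value(F[1], V), value(F[2], V))   # F = ('imp', G, H)

def mod(Gamma, variables):
    """Mod(Gamma): every valuation satisfying all G in Gamma."""
    vals = [dict(zip(variables, bits))
            for bits in product((0, 1), repeat=len(variables))]
    return [V for V in vals if all(value(G, V) == 1 for G in Gamma)]

def entails(Gamma, A, variables):
    """Gamma |= A: A takes value 1 in every model of Gamma."""
    return all(value(A, V) == 1 for V in mod(Gamma, variables))
```

For instance, entails([('imp', 'a', 'b'), 'a'], 'b', ['a', 'b']) holds, while dropping the premise 'a' makes it fail; with Γ = Ø the function decides tautologies.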

The analogy between the symbols ⊨ and ⊢ is not accidental. The former refers to a semantic, the latter to a syntactic notion and, ideally, these two notions should be equivalent in some sense. The following table gives the picture of the intended "equivalences":

Syntactic   Semantic
⊢H A        ⊨ A
Γ ⊢N A      Γ ⊨ A

We will see that for H and N there is such an equivalence. The equivalence we desire is (F.xx)

Γ ⊢ A ⇔ Γ ⊨ A

The implication Γ ⊢C A ⇒ Γ ⊨ A is called soundness of the proof system C: whatever we can prove in C from the assumptions Γ is true in every structure satisfying Γ. This is usually easy to establish, as we will see shortly. The problematic implication is the other one – completeness – stating that any formula which is true in all models of Γ is provable from the assumptions Γ. (Γ = Ø is a special case: the theorems ⊢C A are the tautologies, and the formulae ⊨ A are those satisfied by all possible structures, since any structure V satisfies the empty set of assumptions.)

Remark 6.9 [Soundness and Completeness] Another way of viewing these two implications is as follows. Given an axiomatic system C and a theory Γ, the relation Γ ⊢C defines the set of formulae – the theorems – Th_C(Γ) = {A : Γ ⊢C A}. On the other hand, given the definition of Γ ⊨, we obtain a (possibly different) set of formulae, namely the set Γ* = {B : Γ ⊨ B} of (tauto)logical consequences of Γ. Soundness of C, i.e., the implication Γ ⊢C A ⇒ Γ ⊨ A, means that any provable consequence is also a (tauto)logical consequence, and amounts to the inclusion Th_C(Γ) ⊆ Γ*. Completeness means that any (tauto)logical consequence of Γ is also provable, and amounts to the opposite inclusion Γ* ⊆ Th_C(Γ). □

For proving soundness of a system consisting of axioms and proof rules (like H or N), one has to show that the axioms of the system are valid and that the rules preserve truth: whenever the assumptions of a rule are satisfied in a model M, then so is the conclusion. (Since H treats only tautologies, this claim reduces there to preservation of validity: whenever the assumptions of

⁷ Thanks to Eivind Kolflaath for the library analogy.


Statement Logic

the rule are valid, so is the conclusion.) When these two facts are established, a straightforward induction shows that all theorems of the system must be valid. We show soundness and completeness of N.

Theorem 6.10 [Soundness] Let Γ ⊆ WFF_SL and A ∈ WFF_SL. Then Γ ⊢N A ⇒ Γ ⊨ A.

Proof From the above remarks, we have to show that all axioms are valid and that MP preserves validity:
A1–A3 :: In exercise 5.2 we have seen that all axioms of H are valid, i.e., satisfied by any structure. In particular, the axioms are satisfied by all models of Γ, for any Γ.
A0 :: The axiom schema A0 allows us to conclude Γ ⊢N B for any B ∈ Γ. This is obviously sound: any model V of Γ must satisfy all the formulae of Γ and, in particular, B.
MP :: Suppose Γ ⊨ A and Γ ⊨ A → B. Then, for an arbitrary V ∈ Mod(Γ), we have V(A) = 1 and V(A → B) = 1. Consulting the boolean table for →: the first assumption reduces the possibilities for V to the two rows in which V(A) = 1, and the second assumption then leaves the only row in which V(A → B) = 1. In this row V(B) = 1, so V ⊨ B. Since V was an arbitrary model of Γ, we conclude that Γ ⊨ B.

QED (6.10)
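The MP case of the proof can be replayed mechanically: of the four rows of the boolean table for →, the assumption V(A) = 1 removes two, and V(A → B) = 1 removes one more, leaving the single row – in which V(B) = 1. A small sketch of this table argument (not the formal proof itself):

```python
from itertools import product

def imp(x, y):
    """Boolean table for ->: value 0 only when x = 1 and y = 0."""
    return max(1 - x, y)

rows = list(product((0, 1), repeat=2))                 # all (V(A), V(B)) rows
kept = [(a, b) for a, b in rows if a == 1 and imp(a, b) == 1]

# Exactly one row survives both premises, and in it V(B) = 1:
assert kept == [(1, 1)]
```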

You should not have any problems simplifying this proof to obtain the soundness of H.

Corollary 6.11 Every satisfiable theory is consistent.

Proof We show the equivalent statement that every inconsistent theory is unsatisfiable: if Γ ⊢N ⊥ then Γ ⊨ ⊥ by theorem 6.10, hence Γ is not satisfiable (since V(¬(x → x)) = 0 for every V). QED (6.11)

Remark 6.12 [Equivalence of two soundness notions] Soundness is often expressed in the form of corollary 6.11. In fact, the two formulations are equivalent:
6.10. Γ ⊢N A ⇒ Γ ⊨ A
6.11. (∃V : V ⊨ Γ) ⇒ Γ ⊬N ⊥
The implication 6.10 ⇒ 6.11 is given in the proof of corollary 6.11. For the opposite direction: if Γ ⊢N A then Γ ∪ {¬A} is inconsistent (exercise 4.6) and hence (by 6.11) unsatisfiable, i.e., for any V : V ⊨ Γ ⇒ V ⊭ ¬A. But if V ⊭ ¬A then V ⊨ A, and so, since V was arbitrary, Γ ⊨ A. □

4: Completeness

The proof of completeness involves several lemmas which we now proceed to establish. Just as there are two equivalent ways of expressing soundness (remark 6.12), there are two equivalent ways of expressing completeness. One (corresponding to corollary 6.11) says that every consistent theory is satisfiable; the other that any valid formula is provable.

Lemma 6.13 The following two formulations of completeness are equivalent:
1. Γ ⊬N ⊥ ⇒ Mod(Γ) ≠ Ø
2. Γ ⊨ A ⇒ Γ ⊢N A

Proof 1. ⇒ 2.) Assume 1. and Γ ⊨ A, i.e., for any V : V ⊨ Γ ⇒ V ⊨ A. Then Γ ∪ {¬A} has no model and, by 1., Γ, ¬A ⊢N ⊥. By the deduction theorem Γ ⊢N ¬A → ⊥, and so Γ ⊢N A by exercise 4.2.5 and lemma 4.10.1.
2. ⇒ 1.) Assume 2. and Γ ⊬N ⊥. By (the observation before) lemma 4.24 this means that there is an A such that Γ ⊬N A and, by 2., that Γ ⊭ A. This means that there is a structure V such that V ⊭ A and V ⊨ Γ. Thus 1. holds. QED (6.13)

We prove the first of the above formulations: we take an arbitrary Γ and, assuming that it is consistent, i.e., Γ ⊬N ⊥, we show that Mod(Γ) ≠ Ø by constructing a particular structure which we prove to be a model of Γ. This proof is not the simplest possible for SL. However, we choose to do it this way because it illustrates the general strategy used later in the completeness proof for FOL. Our proof uses the notion of a maximal consistent theory:



Definition 6.14 A theory Γ ⊆ WFF^Σ_SL is maximal consistent iff it is consistent and, for any formula A ∈ WFF^Σ_SL, Γ ⊢N A or Γ ⊢N ¬A.

From exercise 4.6 we know that if Γ is consistent then, for any A, at most one of Γ ⊢N A and Γ ⊢N ¬A is the case – Γ cannot prove too much. Put a bit differently, if Γ is consistent then the following holds for any formula A:

(F.xxi)   Γ ⊢N A ⇒ Γ ⊬N ¬A,   or equivalently,   Γ ⊬N A or Γ ⊬N ¬A

– if Γ proves something (A), then there is something else (namely ¬A) which Γ does not prove. Maximality is a kind of opposite – Γ cannot prove too little: if Γ does not prove something (¬A), then there must be something else it proves (namely A):

(F.xxii)   Γ ⊢N A ⇐ Γ ⊬N ¬A,   or equivalently,   Γ ⊢N A or Γ ⊢N ¬A

If Γ is maximal consistent, it satisfies both (F.xxi) and (F.xxii) and hence, for any formula A, exactly one of Γ ⊢N A and Γ ⊢N ¬A is the case. For instance, given Σ = {a, b}, the theory Γ = {a → b} is consistent. However, it is not maximal consistent because, for instance, Γ ⊬N a and Γ ⊬N ¬a. (The same holds if we replace a by b.) In fact, we have an alternative, equivalent, and easier to check formulation of maximal consistency for SL.

Fact 6.15 A theory Γ ⊆ WFF^Σ_SL is maximal consistent iff it is consistent and, for all a ∈ Σ: Γ ⊢N a or Γ ⊢N ¬a.

Proof The 'only if' part, i.e. ⇒, is trivial from definition 6.14, which ensures that Γ ⊢N A or Γ ⊢N ¬A for all formulae, in particular all atomic ones. The opposite implication is shown by induction on the complexity of A. A is:
a ∈ Σ :: This basis case is trivial, since it is exactly what is given.
¬B :: By IH, we have that Γ ⊢N B or Γ ⊢N ¬B. In the latter case we are done (Γ ⊢N A), while in the former we obtain Γ ⊢N ¬A, i.e., Γ ⊢N ¬¬B, from lemma 4.10.
C → D :: By IH we have that either Γ ⊢N D or Γ ⊢N ¬D. In the former case, we obtain Γ ⊢N C → D by lemma 4.9. In the latter case, we have to consider two subcases – by IH either Γ ⊢N C or Γ ⊢N ¬C. If Γ ⊢N ¬C then, by lemma 4.9, Γ ⊢N ¬D → ¬C. Applying MP to this and axiom A3, we obtain Γ ⊢N C → D. So, finally, assume Γ ⊢N C (and Γ ⊢N ¬D). But then Γ ⊢N ¬(C → D) by exercise 4.2.3.

QED (6.15)
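Fact 6.15 says that a consistent theory deciding every atom decides every formula. Semantically (anticipating the valuation built in the next lemma), the atomic decisions induce a valuation, and that valuation settles each compound formula one way or the other. A sketch, with the same hypothetical tuple encoding of formulae used for the earlier illustration:

```python
def value(F, V):
    """Evaluate 'a' | ('not', F) | ('imp', F, G) under valuation V."""
    if isinstance(F, str):
        return V[F]
    if F[0] == 'not':
        return 1 - value(F[1], V)
    return max(1 - value(F[1], V), value(F[2], V))

# Atomic decisions of a maximal consistent theory over Sigma = {a, b},
# say Gamma |- a and Gamma |- ¬b, read as a valuation:
V = {'a': 1, 'b': 0}

def decided(A):
    """Exactly one of A, ¬A receives value 1 under V."""
    return value(A, V) + value(('not', A), V) == 1
```

Every formula over Σ passes the test – e.g. both decided(('imp', 'a', 'b')) and decided(('not', 'a')) hold – the semantic counterpart of "exactly one of Γ ⊢N A, Γ ⊢N ¬A".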

The maximality property of a maximal consistent theory makes it easier to construct a model for it. We prove first this special case of the completeness theorem:

Lemma 6.16 Every maximal consistent theory is satisfiable.

Proof Let Γ be any maximal consistent theory, and let Σ be the set of propositional variables. We define the valuation V : Σ → {1, 0} by the equivalence V(a) = 1 iff Γ ⊢N a, for every a ∈ Σ. (Hence also V(a) = 0 iff Γ ⊬N a.) We now show that V is a model of Γ, i.e., for any formula B: if B ∈ Γ then V(B) = 1. In fact, we prove the stronger result that for any formula B, V(B) = 1 iff Γ ⊢N B. The proof goes by induction on (the complexity of) B. B is:
a :: Immediate from the definition of V.
¬C :: V(¬C) = 1 iff V(C) = 0. By IH, the latter holds iff Γ ⊬N C, i.e., iff Γ ⊢N ¬C.
C → D :: We consider two cases:



– V(C → D) = 1 implies (V(C) = 0 or V(D) = 1). By the IH, this implies (Γ ⊬N C or Γ ⊢N D), i.e., (Γ ⊢N ¬C or Γ ⊢N D). In the former case exercise 4.2.1, and in the latter lemma 4.12.2, gives that Γ ⊢N C → D.
– V(C → D) = 0 implies V(C) = 1 and V(D) = 0, which by the IH imply Γ ⊢N C and Γ ⊬N D, i.e., Γ ⊢N C and Γ ⊢N ¬D, which by exercise 4.2.3 and two applications of MP imply Γ ⊢N ¬(C → D), i.e., Γ ⊬N C → D. QED (6.16)

Next we use this result to show that every consistent theory is satisfiable. What we need is a result stating that every consistent theory is a subset of some maximal consistent theory.

Lemma 6.17 Every consistent theory can be extended to a maximal consistent theory.

Proof Let Γ be a consistent theory, and let {a0, ..., an} be the set of propositional variables used in Γ. [The case when n = ω (i.e., the set is countably infinite) is treated in the small font within the square brackets.] Let
• Γ0 = Γ
• Γi+1 = Γi, ai – if this is consistent; Γi, ¬ai – otherwise
• Γ̄ = Γn+1 [= ⋃i Γi]
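This step-by-step construction can be simulated for SL, with brute-force satisfiability standing in for consistency (legitimate once soundness and completeness are both available). Each step adds ai or ¬ai, whichever keeps the growing theory satisfiable – a sketch with the hypothetical tuple encoding used earlier:

```python
from itertools import product

def value(F, V):
    """Evaluate 'a' | ('not', F) | ('imp', F, G) under valuation V."""
    if isinstance(F, str):
        return V[F]
    if F[0] == 'not':
        return 1 - value(F[1], V)
    return max(1 - value(F[1], V), value(F[2], V))

def satisfiable(Gamma, variables):
    """Some valuation over `variables` satisfies all of Gamma."""
    return any(all(value(G, dict(zip(variables, bits))) == 1 for G in Gamma)
               for bits in product((0, 1), repeat=len(variables)))

def extend(Gamma, variables):
    """Decide each atom in turn, keeping the growing theory satisfiable."""
    G = list(Gamma)
    for a in variables:
        G.append(a if satisfiable(G + [a], variables) else ('not', a))
    return G
```

For instance, extend([('imp', 'a', 'b')], ['a', 'b']) returns a theory that contains a decision on every atom and is still satisfiable – a maximal consistent extension of {a → b}.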

Example 9.6 Consider the formula (∀x x > 0) → (∃y y = 1). We can apply the prenex operations in two ways:

(∀x x > 0) → (∃y y = 1) ⇔ ∃x (x > 0 → (∃y y = 1)) ⇔ ∃x ∃y (x > 0 → y = 1)
(∀x x > 0) → (∃y y = 1) ⇔ ∃y ((∀x x > 0) → y = 1) ⇔ ∃y ∃x (x > 0 → y = 1)

Obviously, since the order of the quantifiers of the same kind does not matter (exercise 7.2.1), the two resulting formulae are equivalent. However, the quantifiers may also be of different kinds:

(∃x x > 0) → (∃y y = 1) ⇔ ∀x (x > 0 → (∃y y = 1)) ⇔ ∀x ∃y (x > 0 → y = 1)
(∃x x > 0) → (∃y y = 1) ⇔ ∃y ((∃x x > 0) → y = 1) ⇔ ∃y ∀x (x > 0 → y = 1)

Although it is not true in general that ∀x∃y A ⇔ ∃y∀x A, the equivalence-preserving prenex operations ensure – due to the renaming of bound variables, which avoids name clashes with variables in other subformulae – that the results (like the two formulae above) are equivalent. □
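Since the formulae in this example are sentences about numbers, their equivalence can be spot-checked by brute force over small finite domains, reading each quantifier as a loop. The sketch below checks the second example formula against its two prenex forms (it checks the example, not the general theorem):

```python
def implies(p, q):          # the boolean reading of ->
    return (not p) or q

def check(D):
    """Compare (∃x x>0) → (∃y y=1) with its two prenex forms over domain D."""
    orig = implies(any(x > 0 for x in D), any(y == 1 for y in D))
    ae = all(any(implies(x > 0, y == 1) for y in D) for x in D)   # ∀x ∃y (x>0 → y=1)
    ea = any(all(implies(x > 0, y == 1) for x in D) for y in D)   # ∃y ∀x (x>0 → y=1)
    return orig == ae == ea

# FOL assumes non-empty domains, so only non-empty D are tried:
assert all(check(D) for D in (range(3), range(1, 5), [7], [1], [0, 2]))
```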

IV.3. More Semantics


2: A few bits of Model Theory

Roughly and approximately, model theory studies the properties of model classes. Notice that a model class is not just an arbitrary collection K of FOL-structures – it is a collection of models of some set Γ of formulae, i.e., such that K = Mod(Γ) for some Γ. The important point is that the syntactic form of the formulae in Γ may have a heavy influence on the properties of its model class (as we illustrate in theorem 9.11). On the other hand, knowing some properties of a given class of structures, model theory may sometimes tell us what syntactic forms of axioms are necessary or sufficient for axiomatizing this class. In general, there exist non-axiomatizable classes K, i.e., such that for no FOL-theory Γ can one get K = Mod(Γ).

2.1: Substructures

As an elementary example of a property of a class of structures we will consider (in 2.2) closure under substructures and superstructures. Here we only define these notions.

Definition 9.7 Let Σ be a FOL alphabet and let M and N be Σ-structures. N is a substructure of M (or M is a superstructure (or extension) of N), written N ⊑ M, iff:
• N ⊆ M
• for all a ∈ I : [[a]]N = [[a]]M
• for all f ∈ F and a1, ..., an ∈ N : [[f]]N(a1, ..., an) = [[f]]M(a1, ..., an) ∈ N
• for all R ∈ R and a1, ..., an ∈ N : ⟨a1, ..., an⟩ ∈ [[R]]N ⇔ ⟨a1, ..., an⟩ ∈ [[R]]M

Let K be an arbitrary class of structures. We say that K is:
• closed under substructures if whenever M ∈ K and N ⊑ M, then also N ∈ K
• closed under superstructures if whenever N ∈ K and N ⊑ M, then also M ∈ K

Thus N ⊑ M iff N has a more restricted interpretation domain than M, but all constant, function and relation symbols are interpreted identically within this restricted domain. Obviously, every structure is its own substructure, M ⊑ M. If N ⊑ M and N ≠ M, which means that N is a proper subset of M, then we say that N is a proper substructure of M.

Example 9.8 Let Σ contain one individual constant c and one binary function symbol ◦. The structure Z with Z = Z being the integers, [[c]]Z = 0 and [[◦]]Z(x, y) = x + y is a Σ-structure. The structure N with N = N being only the natural numbers with zero, [[c]]N = 0 and [[◦]]N(x, y) = x + y is obviously a substructure, N ⊑ Z. Restricting the domain further to the even numbers, i.e., taking P with P being the even numbers greater than or equal to zero, [[c]]P = 0 and [[◦]]P(x, y) = x + y, yields again a substructure, P ⊑ N. The class K = {Z, N, P} is not closed under substructures. One can easily find other Σ-substructures not belonging to K (for instance, all negative numbers with zero, under addition, form a substructure of Z).
Notice that, in general, to obtain a substructure it is not enough to select an arbitrary subset of the underlying set. If we restrict N to the set {0, 1, 2, 3}, this will not yield a substructure of N – because, for instance, [[◦]]N(1, 3) = 4 and this element is not in our set. Any structure, and hence a substructure in particular, must be "closed under all operations", i.e., applying any operation to elements of (the underlying set of) the structure must produce an element in the structure. On the other hand, a subset of the underlying set may fail to be a substructure if the operations are interpreted in a different way. Let M be like Z except that now we let [[◦]]M(x, y) = x − y. Neither N nor P is a substructure of M since, in general, for x, y ∈ N (or ∈ P): [[◦]]N(x, y) = x + y ≠ x − y = [[◦]]M(x, y). Modifying N to N′ so that [[◦]]N′(x, y) = x − y does not yield a substructure of M either, because this does not define [[◦]]N′ for x < y. No matter how we define this operation for such cases (for instance, to return 0), we won't obtain a substructure of M – the result will be different from that in M. □

Remark 9.9 Given a FOL alphabet Σ, we may consider all Σ-structures, Str(Σ). Obviously, this class is closed under Σ-substructures. With the substructure relation, ⟨Str(Σ), ⊑⟩ is a weak partial ordering: ⊑ is obviously reflexive (any structure is its own substructure), transitive (a substructure X of a substructure of Y is itself a substructure of Y) and antisymmetric (if both X ⊑ Y and Y ⊑ X then X = Y). □
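The closure requirement in the last example is easy to test mechanically for finite candidate domains: a subset yields a substructure only if applying the operation never leads outside it. A sketch (the encoding is ad hoc, for this illustration only):

```python
def closed_under(dom, op):
    """Definition 9.7 requires op to map dom x dom back into dom."""
    return all(op(a, b) in dom for a in dom for b in dom)

add = lambda x, y: x + y

# {0,1,2,3} with ordinary addition is not closed: 1 + 3 = 4 falls outside,
# so it does not carve a substructure out of N.
assert not closed_under({0, 1, 2, 3}, add)

# {0} is closed (0 + 0 = 0), while the finite fragment {0, 2} of the even
# numbers is not: 2 + 2 = 4.
assert closed_under({0}, add)
assert not closed_under({0, 2}, add)
```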


Predicate Logic

2.2: Σ-Π classification

A consequence of theorem 9.4 is that any axiomatizable class K can be axiomatized by formulae in PNF. This fact has a model theoretic flavour, but the theory studies, in general, more specific phenomena. Since it is the relation between classes of structures, on the one hand, and the syntactic form of formulae, on the other, that is of interest, one often introduces various syntactic classifications of formulae. We give here only one example. The existence of PNF allows us to "measure the complexity" of formulae. Comparing the prefixes, we would say that A1 = ∃x∀y∃z B is "more complex" than A2 = ∃x∃y∃z B. Roughly, a formula is the more complex, the more alternations of quantifiers there are in its prefix.

Definition 9.10 A formula A is ∆0 iff it has no quantifiers. It is:
• Σ1 iff A ⇔ ∃x1 ... ∃xn B, where B is ∆0
• Π1 iff A ⇔ ∀x1 ... ∀xn B, where B is ∆0
• Σi+1 iff A ⇔ ∃x1 ... ∃xn B, where B is Πi
• Πi+1 iff A ⇔ ∀x1 ... ∀xn B, where B is Σi

[Diagram: the Σ–Π hierarchy – ∆0 at the bottom; Σ1 (reached by prefixing ∃) and Π1 (by prefixing ∀) one level up; Σ2 and Π2 above them, with ∃ and ∀ labelling the respective steps.]
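For a formula already in PNF, the level of that PNF can be read off its quantifier prefix by counting blocks of like quantifiers. A small sketch (writing the prefix as a string over 'E'/'A'; it classifies one particular PNF, hence yields only an upper bound for the formula itself, since other PNFs may do better):

```python
def classify(prefix):
    """Level of a PNF prefix: ('Delta', 0), ('Sigma', i) or ('Pi', i)."""
    if not prefix:
        return ('Delta', 0)
    # one block, plus one more for every change of quantifier kind
    blocks = 1 + sum(1 for p, q in zip(prefix, prefix[1:]) if p != q)
    return ('Sigma' if prefix[0] == 'E' else 'Pi', blocks)
```

For instance, classify('EAE') gives ('Sigma', 3) for A1 = ∃x∀y∃z B, while classify('EEE') gives ('Sigma', 1) for A2 = ∃x∃y∃z B, matching the comparison of the two prefixes above.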

Since PNF is not unique, a formula can belong to several levels, and we have to consider all possible PNFs for a formula in order to determine its complexity. Typically, saying that a formula is Σi, resp. Πi, one means that this is the least such i. A formula may be both Σi and Πi – in Example 9.6 we saw (the second) formula equivalent both to ∀x∃y B and to ∃y∀x B, i.e., one that is both Π2 and Σ2. Such formulae are called ∆i. We only consider the following (simple) example of a model theoretic result. Point 1 says that the validity of an existential formula is preserved when passing to superstructures – the model class of existential sentences is closed under superstructures. Dually, 2 implies that the model class of universal sentences is closed under substructures.

Theorem 9.11 Let A, B be closed formulae over some alphabet Σ, and assume A is Σ1 and B is Π1. Let M, N be Σ-structures with N ⊑ M. If
1. N ⊨ A then M ⊨ A
2. M ⊨ B then N ⊨ B.

Proof 1. A is closed Σ1, i.e., it is (equivalent to) ∃x1 ... ∃xn A′, where A′ has no quantifiers nor variables other than x1, ..., xn. If N ⊨ A then there exist a1, ..., an ∈ N such that N ⊨x1↦a1,...,xn↦an A′. Since N ⊑ M, we have N ⊆ M and the interpretation of all symbols is the same in M as in N. Hence M ⊨x1↦a1,...,xn↦an A′, i.e., M ⊨ A.
2. This is a dual argument. Since M ⊨ ∀x1 ... ∀xn B′, B′ is true for all elements of M and N ⊆ M, so B′ will be true for all elements of this subset as well. QED (9.11)

The theorem can be applied in at least two different ways, which we illustrate in the following two examples. We consider only case 2., i.e., when the formulae of interest are Π1 (universal).

Example 9.12 [Constructing new structures for Π1 axioms] First, given a set of Π1 axioms and an arbitrary structure satisfying them, the theorem allows us to conclude that any substructure will also satisfy the axioms. Let Σ contain only one binary relation symbol R.
Recall definition 1.15 – a strict partial ordering is axiomatized by two formulae:
1. ∀x∀y∀z : R(x, y) ∧ R(y, z) → R(x, z) – transitivity, and
2. ∀x : ¬R(x, x) – irreflexivity.
Let N be an arbitrary strict partial ordering, i.e., an arbitrary Σ-structure satisfying these axioms. For instance, let N = ⟨N,
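Point 2 of theorem 9.11 can be spot-checked on finite structures: restrict a finite strict partial ordering to any subset of its domain and both universal axioms still hold. A brute-force sketch (the pair encoding of structures is hypothetical, chosen for this illustration):

```python
from itertools import chain, combinations

def irreflexive(dom, R):
    return all((x, x) not in R for x in dom)            # ∀x ¬R(x,x)

def transitive(dom, R):
    # ∀x∀y∀z : R(x,y) ∧ R(y,z) → R(x,z)
    return all((x, z) in R
               for (x, y) in R for (y2, z) in R if y == y2)

def substructure(dom, R, sub):
    """Definition 9.7 for one relation symbol: restrict R to the subset."""
    return sub, {(a, b) for (a, b) in R if a in sub and b in sub}

# M: the strict ordering < on {0,1,2,3}
dom = {0, 1, 2, 3}
R = {(a, b) for a in dom for b in dom if a < b}
assert irreflexive(dom, R) and transitive(dom, R)

# Every subset of the domain again satisfies both Pi_1 axioms:
subsets = chain.from_iterable(combinations(dom, k) for k in range(len(dom) + 1))
assert all(irreflexive(*substructure(dom, R, set(s))) and
           transitive(*substructure(dom, R, set(s))) for s in subsets)
```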
