Proof Theory and Algebra in Logic [1st ed. 2019] 978-981-13-7996-3, 978-981-13-7997-0

This book offers a concise introduction to both proof theory and algebraic methods, the cores of the syntactic and semantical approaches to logic, respectively.


Language: English. Pages: VIII, 160 [164]. Year: 2019.


Table of contents :
Front Matter (pages i-viii)
Front Matter (pages 1-2)
Sequent Systems (Hiroakira Ono) (pages 3-22)
Cut Elimination for Sequent Systems (Hiroakira Ono) (pages 23-33)
Proof-Theoretic Analysis of Logical Properties (Hiroakira Ono) (pages 35-46)
Modal and Substructural Logics (Hiroakira Ono) (pages 47-60)
Deducibility and Axiomatic Extensions (Hiroakira Ono) (pages 61-73)
Front Matter (pages 75-76)
From Algebra to Logic (Hiroakira Ono) (pages 77-95)
Basics of Algebraic Logic (Hiroakira Ono) (pages 97-111)
Logics and Varieties (Hiroakira Ono) (pages 113-128)
Residuated Structures (Hiroakira Ono) (pages 129-138)
Modal Algebras (Hiroakira Ono) (pages 139-149)
Back Matter (pages 151-160)


Short Textbooks in Logic

Hiroakira Ono

Proof Theory and Algebra in Logic

Short Textbooks in Logic

Series Editors
Fenrong Liu, Department of Philosophy, Tsinghua University, Beijing, China
Hiroakira Ono, School of Information Science, Japan Advanced Institute of Science and Technology, Nomi City, Ishikawa, Japan
Eric Pacuit, Department of Philosophy, University of Maryland, College Park, MD, USA
Jeremy Seligman, University of Auckland, Auckland, New Zealand

This is a systematically designed book series comprising textbooks on various topics in logic. Though each book can be read independently, the series as a whole gives readers a comprehensive view of present-day logic. Each book in the series is written clearly and concisely, and at the same time supplies plenty of well-planned, to-the-point examples and exercises. The series also aims at providing readers with adequate explanations of the scope and motivation of its topics. The topics discussed in the series range from mathematical and philosophical logic to logical methods applied to computer science. Some books are introductory and others are more advanced, but not overly specialized. The target readers are advanced undergraduate as well as graduate students in philosophy, mathematics, computer science, and related fields. The series is also suitable for self-study.

More information about this series at http://www.springer.com/series/15706

Hiroakira Ono

Proof Theory and Algebra in Logic


Hiroakira Ono Japan Advanced Institute of Science and Technology Nomi, Japan

ISSN 2522-5480    ISSN 2522-5499 (electronic)
Short Textbooks in Logic
ISBN 978-981-13-7996-3    ISBN 978-981-13-7997-0 (eBook)
https://doi.org/10.1007/978-981-13-7997-0

© Springer Nature Singapore Pte Ltd. 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

This is an introductory textbook on proof theory and algebra in logic, focusing on the interrelated and complementary features of these two topics. Proof theory and algebra are two major mathematical branches of study in nonclassical logics, forming the cores of the syntactic and semantical approaches, respectively. The syntactic study of logic is based on combinatorial arguments and symbolic manipulation, which produce deep and concrete results. In contrast, standard mathematical arguments are fully used in the semantical study, from which general and abstract results can often be derived. Though these two approaches have often been discussed separately, recent developments, in particular in modal and substructural logics, have shown that intrinsic connections exist between them which are much closer than previously thought. Thus, the present book is intended to cover these two approaches together, but concisely, and to foster understanding between them. This, we hope, will make it possible for readers to get things in proper perspective. The book can be used as an introductory textbook on logic for a course at the undergraduate or graduate level. We have tried to make it as elementary as possible and to minimize mathematical prerequisites. Many examples are taken from various nonclassical logics to show how these techniques are applied to the field. Hence, the book can also be used as a first introduction to nonclassical logic for a course at the undergraduate or graduate level, covering modal, many-valued, superintuitionistic, and substructural logics. Though the present textbook is intended to explain both the proof theory and the algebraic study of nonclassical logics, we have organized it so that readers can read Part I and Part II independently without much effort. In particular, after a brief look at the preliminaries in Sect. 1.1, one may start reading the book from Part II in order to study the basics of algebraic logic and universal algebra, with examples of their applications in logic. To look up notions and definitions given in Part I easily when necessary, we have prepared a detailed index at the end of the book. As this is an introductory textbook, we have not given detailed references, in order not to disturb smooth reading. Instead, we prepare a list of major books on the topics at the end of

each Part as a guide for further reading, and also a list of basic materials and primary sources in the references at the end. The present book originated partly from my survey papers [Ono98, Ono10b] on proof theory in nonclassical logic and on algebraic logic, respectively, and also from notes of my talks for undergraduate and graduate students at various places in recent years. These include the Tbilisi Summer School in 2011, the State University of Campinas, the University of Tehran, National Taiwan University, Southwest University at Chongqing, and Tsinghua University and Peking University in Beijing. I am deeply grateful to the people who offered me opportunities for giving these talks. I am indebted to many friends and colleagues for their valuable discussions, suggestions, and encouragement on various occasions. I would like to thank Tomasz Kowalski for his support and encouragement over many years. I also thank Majid Alizadeh, Katsuhiko Sano, and the anonymous reviewers for their helpful suggestions and important comments on an early version of the manuscript, and Ryo Hatano for his technical help, in particular in drawing diagrams. As the present Short Textbooks in Logic series was initiated and promoted in cooperation with Fenrong Liu, I would like to take this opportunity to thank her. Finally, I would like to express gratitude to my wife Kazue for her constant support.

Kanazawa, Japan
February 2019

Hiroakira Ono

Contents

Part I  Proof Theory

1  Sequent Systems  3
   1.1  Prologue  3
   1.2  Sequent Systems LK for Classical Logic  7
   1.3  Completeness and Cut Elimination  17
   1.4  Sequent System LJ for Intuitionistic Logic  19

2  Cut Elimination for Sequent Systems  23
   2.1  Basic Idea  23
   2.2  Cut Elimination  26
   2.3  Subformula Property  32

3  Proof-Theoretic Analysis of Logical Properties  35
   3.1  Decidability of Intuitionistic Logic  35
   3.2  Disjunction Property  38
   3.3  Craig's Interpolation Property  39
   3.4  Glivenko's Theorem  46

4  Modal and Substructural Logics  47
   4.1  Standard Sequent Systems for Normal Modal Logics  48
   4.2  Roles of Structural Rules  52
   4.3  Sequent Systems for Basic Substructural Logics  54

5  Deducibility and Axiomatic Extensions  61
   5.1  Deducibility and Deduction Theorem  61
   5.2  Local Deduction Theorems  64
   5.3  Axiomatic Extensions  66
   5.4  Framework for Substructural Logics and Modal Logics  69
   5.5  A View of Substructural Logics  71

Part II  Algebra in Logic

6  From Algebra to Logic  77
   6.1  Lattices and Boolean Algebras  77
   6.2  Subalgebras, Homomorphisms and Direct Products  83
   6.3  Representations of Boolean Algebras  86
   6.4  Algebraic Completeness of Classical Logic  88
   6.5  Many-Valued Chains and the Law of Residuation  89

7  Basics of Algebraic Logic  97
   7.1  Heyting Algebras  97
   7.2  Lindenbaum-Tarski Algebras  99
   7.3  Locally Finite Algebras  102
   7.4  Finite Embeddability Property and Finite Model Property  105
   7.5  Canonical Extensions of Heyting Algebras  107

8  Logics and Varieties  113
   8.1  Lattice Structure of Superintuitionistic Logics  113
   8.2  The Variety HA of All Heyting Algebras  116
   8.3  Subvarieties of HA and Superintuitionistic Logics  119
   8.4  Subdirect Representation Theorem  123
   8.5  Algebraic Aspects of Logical Properties  125

9  Residuated Structures  129
   9.1  Residuated Lattices and FL-Algebras  129
   9.2  FL-Algebras and Substructural Logics  133
   9.3  Residuations Over the Unit Interval  135

10  Modal Algebras  139
   10.1  Modal Algebras  139
   10.2  Canonical Extensions and Jónsson-Tarski Theorem  142
   10.3  Kripke Semantics from Algebraic Viewpoint  143
   10.4  Gödel Translation  147

References  151
Further Reading for Part I  155
Further Reading for Part II  156
Index  157

Part I

Proof Theory

Proof theory is the syntactic core of the mathematical study of proofs in formal systems. The main concern of proof theory is to study and analyze structures of proofs. A typical question in proof theory is "what kind of proofs will a given formula have, as long as it is provable," and in particular "whether it has a normal (or standard) proof or not." In Part I of the present textbook, we will introduce sequent systems for various logics, from classical and intuitionistic logics to modal and substructural logics. Though there exist alternative approaches to formalizing these logics, like natural deduction and tableau calculi, we will focus our attention on sequent systems and will develop proof theory based on them. The reason is that sequent systems are carefully designed especially for the analysis of proofs, and hence logical properties specific to the logic represented by a given sequent system are sharply reflected in proofs, in particular when cut elimination holds. Thus, formalization by sequent systems can be regarded as the most illuminating and informative way to develop the proof-theoretic study of nonclassical logics. Part I will be devoted to presenting sequent systems for various nonclassical logics and to developing proof-theoretic analysis in these systems. As the central issue of Part I, we will discuss the cut elimination theorem in detail, which was proved by G. Gentzen. The theorem says that when classical logic is formalized in a sequent system called LK, every formula has a normal proof (in fact, a proof without any application of cut rule) as long as it is provable. We present a proof of cut elimination in a slightly simplified way. An important consequence of cut elimination is that we can extract interesting logical properties by analyzing structures of normal proofs, since such proofs often implicitly contain the necessary information on these logical properties.
Cut elimination holds not only for LK of classical logic but also for many other sequent systems for basic modal and substructural logics, including intuitionistic logic, and consequently we can obtain many interesting and useful logical properties of these logics as consequences of cut elimination. As this is an introductory textbook, we do not treat further extensions of sequent systems which have been actively studied in recent years, such as display calculi, nested sequent systems, hypersequent systems, and labeled sequent systems.


Proof-theoretic methods are quite powerful, but they can be applied to a rather limited number of logics. On the other hand, the algebraic methods discussed in Part II can be applied to a wide class of logics. Thus, these two methods are complementary to each other. To establish a link with Part II, in the last chapter of Part I we introduce the notion of axiomatic extensions of a given logic, which will offer a sound and uniform logical ground for the many different classes of logics discussed in Part II.

Chapter 1

Sequent Systems

After giving preliminary remarks and a brief explanation of the scope of this book in Sect. 1.1, we will introduce two sequent systems, LK and LJ, for classical logic and intuitionistic logic, respectively. Both are sequent systems fundamental to the proof-theoretic study in Part I, and were originally introduced by G. Gentzen in his dissertation (Gentzen 1935). Basic concepts of proofs and provability are introduced. Then an elementary proof of completeness and cut elimination for LK is given, by making use of an invertible sequent system LK∗ for classical logic which is an alternative to LK. This will be a concise example showing how proof-theoretic arguments go.

1.1 Prologue

Basic Notions and Notations

We will use ∧, ∨, →, ¬ to express the basic logical connectives conjunction, disjunction, implication, and negation, respectively. When necessary, for instance when we discuss modal logics and substructural logics, we will introduce some additional logical connectives and logical constants. We take a fixed countable set of propositional variables from the outset. Formulas are defined inductively as usual, starting with propositional variables and logical constants, and applying logical connectives repeatedly. Parentheses will be omitted in the usual way as long as no confusion occurs. To simplify expressions, we adopt the convention that ¬ binds more tightly than the other logical connectives. Thus, for instance, the formula p ∨ ¬q means p ∨ (¬q). We will use small Greek letters to express formulas. When a formula β appears in the inductive definition of a formula α, the formula β is called a subformula of α. Every subformula of α except α itself is called a proper subformula of


α. For example, p, q, ¬q and p ∨ ¬q are all subformulas of p ∨ ¬q, and the first three formulas are proper subformulas of p ∨ ¬q. Sometimes we use an inductive argument along the formation of formulas, as follows. That is, to confirm that every formula has a given property P, we show (1) that every propositional variable (and every constant) has P, and (2) that for each β, if every proper subformula γ of β has P then β also has P. The induction of this form can be expressed alternatively as induction on the length of a formula. Here, the length of a given formula α means the total number of logical symbols in α. It can happen that a formula β appears more than once in a given formula α as a subformula, but in different places. For example, the formula p appears twice in the formula p ∨ ¬p. If we need to distinguish one occurrence from another, we use the phrase formula occurrences. Then, we can say that p ∨ ¬p has two formula occurrences of p. In general, we use the word occurrences in other contexts when we are concerned not only with an expression itself but also with the place at which it appears. In this book, we sometimes use iff (as a mathematical jargon) as an abbreviation of if and only if.

Syntactical Versus Semantical Approach to Logic

In the following, we give a brief explanation of the background of our topics, in particular of the differences and connections between Part I and Part II of the present textbook. A reader who is familiar with them may skip over it. A standard way of introducing a logic is to describe it as a formal system. A formal system usually consists of axioms and rules of inference, which determine the provability of formulas in the system. A given formula α is provable in the system if α can be derived in the system by starting with its axioms and applying its rules repeatedly. A proof (or a derivation) of α is a finite figure which shows how α is derived in the system.
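The inductive definitions above (formulas, subformulas, and the length of a formula) can be sketched in code. The following is a minimal illustration only; the class and function names are ours, not the book's, and the set-based `subformulas` deliberately collapses distinct formula occurrences into one, as the text's remark on occurrences warns.

```python
from dataclasses import dataclass
from typing import Union

# Formulas: propositional variables, negations, and binary compounds.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class Bin:
    op: str          # "and", "or", "imp"
    left: "Formula"
    right: "Formula"

Formula = Union[Var, Not, Bin]

def subformulas(a: Formula) -> set:
    """All subformulas of a, including a itself (as a set, so
    repeated occurrences of the same formula are identified)."""
    if isinstance(a, Var):
        return {a}
    if isinstance(a, Not):
        return {a} | subformulas(a.sub)
    return {a} | subformulas(a.left) | subformulas(a.right)

def length(a: Formula) -> int:
    """Total number of logical symbols (connective occurrences) in a."""
    if isinstance(a, Var):
        return 0
    if isinstance(a, Not):
        return 1 + length(a.sub)
    return 1 + length(a.left) + length(a.right)

# The text's example: the subformulas of p ∨ ¬q are p, q, ¬q and p ∨ ¬q.
p, q = Var("p"), Var("q")
f = Bin("or", p, Not(q))
assert subformulas(f) == {p, q, Not(q), f}
assert length(f) == 2   # the two logical symbols ∨ and ¬
```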
The notions of proof and provability are thus syntactic concepts, composed of symbolic and mechanical procedures. As an example of formal systems, we give a standard Hilbert-style system HK for classical logic below. It consists of modus ponens as its single rule, and the axiom schemes given below. Here, modus ponens says that a formula β can be derived from both α and α → β. Also, each axiom scheme is not a single formula but expresses all formulas having the same syntactic form represented by the scheme. Thus each instance of an axiom scheme is regarded as an axiom. (For simplicity's sake, ¬α is regarded as the abbreviation of α → 0 in the rest of this section, where 0 is a logical constant which expresses a contradiction.)[1] Now, our list of axiom schemes of HK is as follows, which looks slightly different from the standard one. But it is chosen so as to make it easy to see a connection with the sequent system LK for classical logic introduced in the next section.

1. α → (β → α) (left-weakening axioms),
2. (α → (α → γ)) → (α → γ) (contraction axioms),
3. (α → (β → γ)) → (β → (α → γ)) (exchange axioms),
4. 0 → α,
5. (α → β) → ((γ → α) → (γ → β)),
6. (α → γ) → ((β → γ) → ((α ∨ β) → γ)),
7. α → (α ∨ β),
8. β → (α ∨ β),
9. (γ → α) → ((γ → β) → (γ → (α ∧ β))),
10. (α ∧ β) → α,
11. (α ∧ β) → β,
12. ¬¬α → α (the law of double negation).

[Footnote 1: To avoid too many complications in our notation, we use the same symbols for expressing formulas and formula schemes.]

An alternative way of presenting a Hilbert-style system is to take both modus ponens and the substitution rule as rules, and to take axioms instead of axiom schemes. Here, the substitution rule is a rule by which we can infer σ(α) from α for any uniform substitution σ.[2] In this case, the weakening axiom is for instance given as a single axiom p → (q → p), where p and q are distinct propositional variables. Other weakening axioms are obtained from this axiom by applying the substitution rule.

Example 1.1
1. The formula (γ → α) → (¬α → ¬γ) is provable in HK. In fact, by applying modus ponens to the 5th axiom and (a substitution instance of) the 3rd axiom, the formula (γ → α) → ((α → β) → (γ → β)) is provable. Then, by taking the logical constant 0 for β, we get the above formula.
2. The formula ¬(α ∨ ¬α) → α is provable in HK. To see this, by taking ¬α for β in the 8th axiom and then using the first formula in 1 above, we have ¬(α ∨ ¬α) → ¬¬α. Then, we have the above formula by using the law of double negation and the second formula in 1 above. If moreover we use the 7th axiom, we can show also that the formula ¬(α ∨ ¬α) → (α ∨ ¬α) is provable.
3. The formula (¬δ → δ) → δ is provable in HK. To see this, by using the second formula in 1 above, we have (¬δ → δ) → (¬δ → (¬δ → 0)) (recall that ¬δ is the abbreviation of δ → 0), and therefore we also have (¬δ → δ) → (¬δ → 0) by applying a contraction axiom. Now, we have the required result by using the law of double negation.
4. Taking α ∨ ¬α for δ in the formula in 3 and using the last formula in 2, we have that the law of excluded middle α ∨ ¬α is provable in HK.

A characteristic feature of Hilbert-style formal systems is modularity in their presentation.
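The uniform substitution σ(α) used by the substitution rule can be sketched as a recursive function over formula trees. The encoding below (restricted to → and ¬, with class and function names of our own choosing) is an illustration, not anything from the book; the dict-based substitution replaces all variables simultaneously, as required.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class Imp:
    left: "Formula"
    right: "Formula"

Formula = Union[Var, Not, Imp]

def substitute(a: Formula, sigma: dict) -> Formula:
    """sigma(a): replace every variable p occurring in a
    simultaneously by sigma[p] (variables not in sigma are kept)."""
    if isinstance(a, Var):
        return sigma.get(a.name, a)
    if isinstance(a, Not):
        return Not(substitute(a.sub, sigma))
    return Imp(substitute(a.left, sigma), substitute(a.right, sigma))

# From the single weakening axiom p → (q → p) we recover instances of the
# scheme α → (β → α), e.g. with α = r and β = ¬r:
p, q, r = Var("p"), Var("q"), Var("r")
axiom = Imp(p, Imp(q, p))
instance = substitute(axiom, {"p": r, "q": Not(r)})
assert instance == Imp(r, Imp(Not(r), r))
```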
[Footnote 2: Here, σ is a mapping from a finite set {p1, . . . , pm} of propositional variables to a set of formulas, and σ(α) denotes the formula obtained from α by replacing every pi in α simultaneously by σ(pi) for i = 1, . . . , m. Such a formula σ(α) is called a substitution instance (or simply, an instance) of α.]

Though various types of formal systems have been proposed and studied, such as natural deduction, sequent systems, and tableau systems, we will focus our attention on the syntactic study of logics based on sequent systems in this textbook, since they are quite effective and powerful for developing the proof-theoretic study of logic, as shown in Part I. Another approach is to discuss logic from a semantical point of view. In the semantical study, we are concerned not only with when a formula is true or false under a given

interpretation, but also with when a formula is valid, i.e., when it is always true under every interpretation. From a mathematical point of view, each interpretation is determined by an assignment of truth values to propositional variables. In the case of classical logic, the set of truth values is {1, 0}, where 1 and 0 denote truth and falsehood, respectively. Thus, an assignment is a mapping from the set of all propositional variables to the set of truth values {1, 0}. Depending on each assignment of truth values, the truth value of complex formulas under this assignment is inductively determined by using the following truth tables for the logical connectives ∨, ∧ and →, respectively.

     a ∨ b            a ∧ b            a → b
    a\b  1  0        a\b  1  0        a\b  1  0
     1   1  1         1   1  0         1   1  0
     0   1  0         0   0  0         0   1  1
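The three truth tables can be encoded as functions over {0, 1} and checked cell by cell; this small sketch is ours (the function names are not the book's):

```python
# The three truth tables over {0, 1}, encoded as Python functions.
def lor(a, b):   # a ∨ b
    return max(a, b)

def land(a, b):  # a ∧ b
    return min(a, b)

def imp(a, b):   # a → b: equals 1 iff a = 0 or b = 1
    return 1 if (a == 0 or b == 1) else 0

# Each list below reproduces one row-by-row reading of the table.
assert [lor(1, 1), lor(1, 0), lor(0, 1), lor(0, 0)] == [1, 1, 1, 0]
assert [land(1, 1), land(1, 0), land(0, 1), land(0, 0)] == [1, 0, 0, 0]
assert [imp(1, 1), imp(1, 0), imp(0, 1), imp(0, 0)] == [1, 0, 1, 1]
```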

In the following, the same symbol is used to denote both a logical connective and the corresponding algebraic operation (and also 0 for both a logical constant and a truth value), by abuse of symbols, when no confusion will occur. For instance, the symbol → expresses not only implication as a logical connective, but also the algebraic operation over the set {0, 1} described in the rightmost truth table. Thus, the table for → says that a → b = 1 if and only if either a = 0 or b = 1. Now, take the formula p → (q → p) and consider any mapping g satisfying g(p) = 0 and g(q) = 1. By using the truth table for →, we can calculate that g(q → p) = 1 → 0 = 0, and hence g(p → (q → p)) = 0 → 0, which is equal to 1. It is easy to see that every assignment h can be extended uniquely to a mapping h* from the set of all formulas to the set {1, 0} by these truth tables. Thus we use the same h also to denote the extended mapping h*. We can easily see that the truth value h(ϕ) of a formula ϕ consisting of propositional variables p1, . . . , pm is determined only by the values h(pi) for i = 1, . . . , m. Hence, in such a case it is enough to consider only mappings from the set {p1, . . . , pm} to {0, 1} as assignments. A formula ϕ is said to be valid if it always takes the value 1 under every assignment for ϕ. A valid formula (in the present sense) is also called a tautology. When a formula ϕ contains m distinct propositional variables p1, . . . , pm, there exist 2^m distinct assignments for it, since each pi takes either 0 or 1 for i = 1, . . . , m. Therefore, we can decide whether ϕ is a tautology or not by checking the truth value of ϕ for these finitely many assignments. The following result shows a relation between our syntactic and semantical approaches. It is obtained by checking that (1) every axiom of HK is a tautology and (2) if both α and α → β are tautologies then β is also a tautology.
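The decision procedure just described, checking all 2^m assignments, can be sketched directly. The nested-tuple encoding of formulas and all function names below are our own; as a small check in the direction of Lemma 1.1, the left-weakening axiom and the law of double negation come out as tautologies.

```python
from itertools import product

# A formula as a nested tuple:
#   ("var", p) | ("not", a) | ("and", a, b) | ("or", a, b) | ("imp", a, b)
def ev(f, h):
    """Truth value of f under the assignment h (a dict into {0, 1})."""
    tag = f[0]
    if tag == "var":
        return h[f[1]]
    if tag == "not":
        return 1 - ev(f[1], h)
    a, b = ev(f[1], h), ev(f[2], h)
    if tag == "and":
        return min(a, b)
    if tag == "or":
        return max(a, b)
    return 1 if (a == 0 or b == 1) else 0   # tag == "imp"

def variables(f):
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(s) for s in f[1:]))

def is_tautology(f):
    """Check all 2^m assignments for the m variables occurring in f."""
    vs = sorted(variables(f))
    return all(ev(f, dict(zip(vs, bits))) == 1
               for bits in product([0, 1], repeat=len(vs)))

p, q = ("var", "p"), ("var", "q")
assert is_tautology(("imp", p, ("imp", q, p)))        # left-weakening axiom
assert is_tautology(("imp", ("not", ("not", p)), p))  # ¬¬α → α
assert not is_tautology(("imp", p, q))
```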
Lemma 1.1 Every formula which is provable in HK is a tautology.

The converse direction of the above statement will be shown in the next section. In this way, classical logic, formalized syntactically as the system HK, is shown to be characterized semantically as the set of all valid formulas in two-valued semantics. In Part II, we will pursue the semantical study of nonclassical logics further, where truth values and truth tables over them will be extended from two-valued semantics to


many-valued semantics, and moreover to algebraic semantics defined over general algebraic structures, so as to grasp provability in logic in terms of algebra. Once such a link is established between logic and algebra, we can expect that important logical consequences will follow from concepts and results in algebra, and vice versa. This is the main goal of Part II.

1.2 Sequent Systems LK for Classical Logic

We will introduce a sequent system LK for classical logic. Different from Hilbert-style systems, the basic expressions of sequent systems are sequents. Every sequent of LK is an expression of the following form, where each αi (i ≤ m) and each βj (j ≤ n) is a formula and m, n ≥ 0. Here, commas ',' and the arrow '⇒' are metalogical symbols.

    α1, . . . , αm ⇒ β1, . . . , βn

Thus, every sequent is a finite sequence of formulas separated by commas, divided by ⇒ into the antecedents α1, . . . , αm and the succedents β1, . . . , βn of the sequent. Roughly, antecedents and succedents are understood as assumptions and conclusions, respectively. But one must be careful here, as antecedents are understood conjunctive-like while succedents are disjunctive-like. That is, the above sequent intuitively means that (β1 ∨ . . . ∨ βn) follows from the assumptions α1, . . . , αm, or equivalently from the assumption (α1 ∧ . . . ∧ αm). When the succedents are empty, the sequent α1, . . . , αm ⇒ means 'a contradiction follows from the assumptions α1, . . . , αm'. On the other hand, when the antecedents are empty, the sequent ⇒ β1, . . . , βn is understood as '(β1 ∨ . . . ∨ βn) follows (without any assumption)'. In sequent systems, the provability and the validity of sequents, from the syntactical and semantical points of view respectively, are discussed instead of those of formulas. Each sequent system consists of initial sequents and rules. The former correspond to axiom schemes in Hilbert-style systems. Each rule of a given sequent system determines when and how a sequent can be inferred from other (usually, a finite number of) sequents which have already been inferred. In our system LK, each rule consists of one or two upper sequents and one lower sequent as shown below, which means that the lower sequent can be inferred from these upper sequents.
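The intuitive reading of a sequent can be turned into a semantic validity check by brute force over assignments, in the sense of Sect. 1.1. The sketch below is our own (encoding and names included); it treats empty antecedents as 'true' and empty succedents as 'a contradiction', exactly as the reading above prescribes.

```python
from itertools import product

# Formulas as nested tuples: ("var", p) | ("not", a) | ("and"/"or"/"imp", a, b).
def ev(f, h):
    tag = f[0]
    if tag == "var":
        return h[f[1]]
    if tag == "not":
        return 1 - ev(f[1], h)
    a, b = ev(f[1], h), ev(f[2], h)
    return {"and": min(a, b), "or": max(a, b),
            "imp": 1 if (a == 0 or b == 1) else 0}[tag]

def variables(f):
    return {f[1]} if f[0] == "var" else set().union(*(variables(s) for s in f[1:]))

def sequent_valid(ante, succ):
    """α1,...,αm ⇒ β1,...,βn is valid iff under every assignment,
    whenever all αi are true, some βj is true."""
    vs = sorted(set().union(set(), *(variables(f) for f in ante + succ)))
    for bits in product([0, 1], repeat=len(vs)):
        h = dict(zip(vs, bits))
        if all(ev(f, h) == 1 for f in ante) and not any(ev(f, h) == 1 for f in succ):
            return False
    return True

p = ("var", "p")
assert sequent_valid([p], [p])                      # initial sequent α ⇒ α
assert sequent_valid([], [("or", p, ("not", p))])   # ⇒ p ∨ ¬p
assert not sequent_valid([], [p])
```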
In the following, capital Greek letters denote finite (possibly empty) sequences of formulas. LK has the following three kinds of rules:[3]

1. (left and right) rules for the logical connectives ∨, ∧, → and ¬,
2. cut rule,
3. (left and right) structural rules.

[Footnote 3: Cut rule is often regarded as one of the structural rules. But in the present textbook we will separate cut rule from the structural rules, as this will be a more convenient way of introducing substructural logics.]


LK has the following initial sequents and rules.

0. Initial sequents: each initial sequent of LK is a sequent of the form α ⇒ α.

1. Rules for logical connectives:

   α, Γ ⇒ Π    β, Γ ⇒ Π
   --------------------- (∨ ⇒)
       α ∨ β, Γ ⇒ Π

   Γ ⇒ Λ, α
   ------------- (⇒ ∨1)
   Γ ⇒ Λ, α ∨ β

   Γ ⇒ Λ, β
   ------------- (⇒ ∨2)
   Γ ⇒ Λ, α ∨ β

   α, Γ ⇒ Π
   ------------- (∧1 ⇒)
   α ∧ β, Γ ⇒ Π

   β, Γ ⇒ Π
   ------------- (∧2 ⇒)
   α ∧ β, Γ ⇒ Π

   Γ ⇒ Λ, α    Γ ⇒ Λ, β
   --------------------- (⇒ ∧)
       Γ ⇒ Λ, α ∧ β

   Γ ⇒ Λ, α    β, Δ ⇒ Π
   --------------------- (→ ⇒)
    α → β, Γ, Δ ⇒ Λ, Π

   α, Γ ⇒ Λ, β
   ------------- (⇒ →)
   Γ ⇒ Λ, α → β

   Γ ⇒ Λ, α
   ----------- (¬ ⇒)
   ¬α, Γ ⇒ Λ

   α, Γ ⇒ Λ
   ----------- (⇒ ¬)
   Γ ⇒ Λ, ¬α

2. Cut rule:

   Γ ⇒ Λ, α    α, Δ ⇒ Π
   --------------------- (cut)
       Γ, Δ ⇒ Λ, Π

3. Structural rules:

   exchange rules:

   Γ, α, β, Δ ⇒ Π
   ---------------- (e ⇒)
   Γ, β, α, Δ ⇒ Π

   Γ ⇒ Π, α, β, Λ
   ---------------- (⇒ e)
   Γ ⇒ Π, β, α, Λ

   contraction rules:

   α, α, Γ ⇒ Π
   ------------- (c ⇒)
   α, Γ ⇒ Π

   Γ ⇒ Π, α, α
   ------------- (⇒ c)
   Γ ⇒ Π, α

   weakening rules:

   Γ ⇒ Π
   ----------- (w ⇒)
   α, Γ ⇒ Π

   Γ ⇒ Π
   ----------- (⇒ w)
   Γ ⇒ Π, α
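Since the structural rules never look inside formulas, they can be sketched as pure list manipulations on sequents. The representation (a sequent as a pair of tuples) and the function names below are our own illustration, not the book's notation.

```python
# A sequent as a pair of tuples (antecedents, succedents); formulas can be
# any values here, since structural rules never inspect their structure.
def exchange_left(seq, i):
    """(e ⇒): swap the formulas at positions i and i+1 in the antecedents."""
    ante, succ = seq
    a = list(ante)
    a[i], a[i + 1] = a[i + 1], a[i]
    return (tuple(a), succ)

def contract_left(seq):
    """(c ⇒): from α, α, Γ ⇒ Π infer α, Γ ⇒ Π."""
    ante, succ = seq
    assert ante[0] == ante[1], "contraction needs a duplicated leftmost formula"
    return (ante[1:], succ)

def weaken_left(seq, alpha):
    """(w ⇒): from Γ ⇒ Π infer α, Γ ⇒ Π."""
    ante, succ = seq
    return ((alpha,) + ante, succ)

# Applying the rules to a tiny example, with formulas stood in for by strings:
s = (("b", "a", "c"), ("d",))
s = exchange_left(s, 0)          # b, a, c ⇒ d  becomes  a, b, c ⇒ d
assert s == (("a", "b", "c"), ("d",))
s = weaken_left(s, "a")          # a, a, b, c ⇒ d
s = contract_left(s)             # back to a, b, c ⇒ d
assert s == (("a", "b", "c"), ("d",))
```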

We attach a name like (∨ ⇒) to each rule, for convenience. In each rule, the formulas denoted by the small Greek letters α and β are called active formulas. Also, in each rule the formulas denoted by small Greek letters explicitly displayed in the lower sequent are called principal formulas of the rule. For example, α ∨ β is the principal formula of the rule (∨ ⇒), and α is the principal formula of the rule (w ⇒). Other formulas are called side formulas. The active formula of cut rule is called the cut formula. A rule for logical connectives whose principal formula, or a structural rule whose active formulas, are in the antecedents (the succedents) is called a left rule (a right rule, respectively), and is given a name of the form (. . . ⇒) ((⇒ . . .), respectively). Each rule says that when its upper sequent is provable (or both of its upper sequents are provable), its lower sequent is also provable. For instance, the rule (∧1 ⇒) says that the sequent α ∧ β, Γ ⇒ Π is provable whenever α, Γ ⇒ Π is so. Similarly, the rule (⇒ ∧) says (assuming that Λ is empty for simplicity's sake) that Γ ⇒ α ∧ β is provable whenever both Γ ⇒ α and Γ ⇒ β are provable. In this way, the rules for each logical connective describe exactly the function of the connective. Cut rule expresses a fundamental principle in deductive reasoning. In fact, it says (assuming again that Λ is empty for simplicity's sake) that if Γ ⇒ α is provable and moreover α, Δ ⇒ Π is provable, then Γ, Δ ⇒ Π is also provable. Structural rules as a whole control the order, duplication, and omission of formulas in the antecedents and succedents of a given sequent.
In particular, each of the left structural rules can be understood intuitively as follows:

• the exchange rule allows us to use the formulas in the antecedent in whatever order they appear,
• the contraction rule says that each formula occurrence in the antecedent may be used more than once in inferring the conclusion,
• the left weakening rule allows us to add any formula to the antecedent, even one not used in inferring the conclusion.

Definition 1.1 (Proofs and provability) A proof P of a sequent Γ ⇒ Δ (in LK) is a finite tree-like figure, defined inductively in the following way, which shows how this sequent is derived from initial sequents in LK.

1. every uppermost sequent in the proof P is an initial sequent,
2. every sequent in the proof P other than the uppermost ones is obtained by an application of one of the rules,
3. Γ ⇒ Δ is the single lowest sequent, which is called the end sequent of the proof P.

A sequent Γ ⇒ Δ is provable in LK iff there is a proof of the sequent Γ ⇒ Δ. Also, a formula α is said to be provable in LK when the sequent ⇒ α is provable in it.4 We note that a provable sequent may have several different proofs.

4 Henceforth, we sometimes use the word ‘iff’ as an abbreviation of ‘if and only if’.

For a given proof P of a sequent Γ ⇒ Δ, suppose that a sequent Σ ⇒ Θ appears in P. Then, the subfigure Q of the proof P consisting of all sequents over Σ ⇒ Θ

(including itself) also forms a proof, whose end sequent is Σ ⇒ Θ. Such a proof Q is called a subproof of P.

Before giving examples of proofs, we introduce a convention in order to simplify the description of proofs. Note that in our list of rules of LK, an active formula is always placed leftmost in the antecedent or rightmost in the succedent of each upper sequent, except in the exchange rules. This restriction serves only to simplify the presentation of the rules, and is not essential. In fact, each rule can be applied as long as its active formula appears anywhere in the antecedent or succedent: we first apply exchange rules to the upper sequents to move the active formula to the leftmost position of the antecedent or the rightmost position of the succedent, then apply the rule concerned, and afterwards move the resulting formula back by applying exchange rules again. To clarify this adjustment, consider for example the rule (∧1 ⇒) in the following form:

Σ, α, Γ ⇒ Δ / Σ, α ∧ β, Γ ⇒ Δ

The lower sequent is obtained from the upper sequent in the following way, where the dots express some number of applications of the exchange rule:

Σ, α, Γ ⇒ Δ
. . .
α, Σ, Γ ⇒ Δ
α ∧ β, Σ, Γ ⇒ Δ   (∧1 ⇒)
. . .
Σ, α ∧ β, Γ ⇒ Δ

In this way, we will allow each rule to be applied irrespective of where its active formula occurs.

Example 1.2 A proof of the distributive law α ∧ (β ∨ γ) ⇒ (α ∧ β) ∨ (α ∧ γ) in LK is given as follows, written here as a linear list of sequents, each obtained from earlier ones by the rule indicated. (It is not always necessary to write these names in a proof.)

α ⇒ α, β ⇒ β, γ ⇒ γ   (initial sequents)
α, β ⇒ α and α, β ⇒ β; α, γ ⇒ α and α, γ ⇒ γ   by (w ⇒)
α, β ⇒ α ∧ β and α, γ ⇒ α ∧ γ   by (⇒ ∧)
α, β ⇒ (α ∧ β) ∨ (α ∧ γ) and α, γ ⇒ (α ∧ β) ∨ (α ∧ γ)   by (⇒ ∨1) and (⇒ ∨2)
α, β ∨ γ ⇒ (α ∧ β) ∨ (α ∧ γ)   by (∨ ⇒)
α ∧ (β ∨ γ), β ∨ γ ⇒ (α ∧ β) ∨ (α ∧ γ)   by (∧1 ⇒)
α ∧ (β ∨ γ), α ∧ (β ∨ γ) ⇒ (α ∧ β) ∨ (α ∧ γ)   by (∧2 ⇒)
α ∧ (β ∨ γ) ⇒ (α ∧ β) ∨ (α ∧ γ)   by (c ⇒)

Exercise 1.1 Give proofs of the following sequents in LK.
1. ⇒ α → (β → α),

2. α → (β → γ) ⇒ (α → β) → (α → γ).

Exercise 1.2 Give proofs of the following sequents in LK. (Check whether or not your proof contains a sequent with at least two formulas in the succedent.)
1. ⇒ α ∨ ¬α,
2. ¬(α ∧ β) ⇒ ¬α ∨ ¬β,
3. (α → β) → α ⇒ α.

Remark 1.3 (Logical constant 0, the falsum) The negation ¬α of a formula α intuitively means that assuming α leads to a contradiction. It is sometimes convenient to introduce explicitly the logical constant 0 (the falsum, or the falsehood) into our sequent system to express an arbitrary contradiction, and to define the negation ¬α of α as an abbreviation of the formula α → 0, as we did for HK in Sect. 1.1. In such a case, we take a new initial sequent 0 ⇒, which means that the falsum implies anything, and delete both rules for negation. In this new sequent system, both the left and right rules for negation are derivable, i.e., for either rule, the lower sequent can be obtained from the upper one by using only initial sequents and rules of the new system. (Here we understand a rule schematically, not as a particular instance of the rule.) To see this, consider the rule (⇒ ¬), and let us show that it is derivable in the new system. Assume that α, Γ ⇒ is provable. Then, by applying the right weakening rule, we get α, Γ ⇒ 0. Hence the sequent Γ ⇒ α → 0, which means Γ ⇒ ¬α, is inferred by using (⇒→). In the following, however, we take ¬ as a primitive logical connective of LK without using the logical constant 0, unless otherwise noted. We note that every formula of the form β ∧ ¬β, for an arbitrary formula β, plays essentially the same role in LK as the constant 0.

A Modified Presentation of LK Using Multisets of Formulas

To make the above convention more explicit, we will introduce a slightly modified version of LK.
In our new system, each sequent is an expression of the form Γ ⇒ Δ where Γ and Δ are finite multisets of formulas.5 In contrast to sets of formulas, two multisets are identified if and only if the multiplicity of each member, i.e., the number of occurrences of each formula in them, is the same. For example, the multiset {α, β, α} is identified with the multiset {β, α, α}, while it is distinct from the multiset {α, β}, as the multiplicity of α in the first two is 2 while that in the last is 1. (Formally, finite multisets of elements of a given set S can be defined as follows. Consider the set S∗ of all finite sequences of elements of S. Define an equivalence relation ∼ on S∗ by the condition that for all Φ1 and Φ2 in S∗, Φ1 ∼ Φ2 if and only if the multiplicity of d in Φ1 is equal to that in Φ2 for every d ∈ S. Now let M be the quotient set S∗/∼. Then M is the set of all finite multisets of elements of S.) In our new system, initial sequents and rules are exactly the same as those of LK mentioned before, as long as the sequents in them are understood in the present sense.

5 In the rest of the present textbook, by multisets we always mean finite multisets.
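As an aside for readers who wish to experiment, finite multisets in exactly this sense are modelled by Python's collections.Counter, whose equality compares multiplicities; this illustration is ours, not part of the text's formal development:

```python
from collections import Counter

# A finite multiset, represented as element -> multiplicity.
m1 = Counter(["alpha", "beta", "alpha"])   # {alpha, beta, alpha}
m2 = Counter(["beta", "alpha", "alpha"])   # {beta, alpha, alpha}
m3 = Counter(["alpha", "beta"])            # {alpha, beta}

# The order of listing is irrelevant; only multiplicities matter.
print(m1 == m2)        # True
print(m1 == m3)        # False: alpha occurs twice in m1 but once in m3
print(m1["alpha"])     # 2
```

Equality of Counters is precisely the multiplicity condition defining the equivalence relation ∼ above.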

But, obviously, the exchange rules become redundant in the new system. This modification will not affect anything in the proof-theoretic arguments developed hereafter. Thus, from now on, by LK we mean this modified system. As mentioned at the beginning of this section, commas in the antecedent of a sequent can be understood as conjunctions, and commas in the succedent as disjunctions. In fact, we can easily show the following.

Lemma 1.2 A sequent α1, . . . , αm ⇒ β1, . . . , βn is provable in LK, iff α1 ∧ . . . ∧ αm ⇒ β1 ∨ . . . ∨ βn is provable in LK, iff ⇒ (α1 ∧ . . . ∧ αm) → (β1 ∨ . . . ∨ βn) is provable in LK.

The formula (α1 ∧ . . . ∧ αm) → (β1 ∨ . . . ∨ βn) in the succedent of the third sequent in the above lemma is called a corresponding formula of the sequent α1, . . . , αm ⇒ β1, . . . , βn. (There may be several corresponding formulas of a given sequent, according to the ordering of the αi and βj, but this is inessential as they are logically equivalent.) When m = 0 (n = 0), the corresponding formula is understood as β1 ∨ . . . ∨ βn (¬(α1 ∧ . . . ∧ αm), respectively). When m = n = 0, it is understood as the falsum, e.g. p ∧ ¬p.

Exercise 1.3 Show the above lemma when m = n = 2.

Exercise 1.4 Let Θ be any non-empty multiset of formulas. Does the following form of cut rule always hold?

Γ ⇒ Λ, Θ ; Θ, Δ ⇒ Π / Γ, Δ ⇒ Λ, Π

Induction on the Length of a Given Proof

Proof-theoretic study consists of analyses of the structure of proofs. These analyses are often carried out by induction on the length of a given proof. Here, the length ℓ(P) of a proof P is the total number of (occurrences of) sequents in P. In the rest of this section, we show that our LK is a semantically adequate sequent system for classical logic. First we show the soundness of LK.

Definition 1.2 (Tautology) A sequent Γ ⇒ Δ is a tautology iff its corresponding formula is a tautology.
In particular, for any formula ϕ, the sequent ⇒ ϕ is a tautology iff ϕ is a tautology.6

Lemma 1.3 (Soundness of LK) If a sequent α1, . . . , αm ⇒ β1, . . . , βn is provable in LK then it is a tautology.

This lemma is shown by induction on the length of a given proof of α1, . . . , αm ⇒ β1, . . . , βn. More plainly, it is enough to check the following.

6 In a sequent system for classical logic, a sequent is sometimes said to be valid when it is a tautology. But a generalized notion of validity will be introduced in Part II. To avoid confusion, we will not adopt this terminology.
• every initial sequent is a tautology,
• for each rule of LK, if (each of) the upper sequent(s) is a tautology then the lower sequent is a tautology.

Exercise 1.5 Give a detailed proof of Lemma 1.3.

In Sect. 1.3, we will show the converse of Lemma 1.3 in a stronger form. In fact, it holds that if a sequent is a tautology then it is cut-free provable in LK. Here we say that a sequent S is cut-free provable in LK when S has a proof in LK which contains no application of cut rule; such a proof is called a cut-free proof. For this purpose, we first introduce two auxiliary sequent systems LK1 and LK2, and yet another system LK∗ for classical logic, so as to make the whole proof clear and intelligible. Using induction on the length of proofs together with case analysis, which is typical in proof theory, we will show that for each of these systems, a sequent is provable in it if and only if it is provable in LK.

Consider first a system LK1 which is obtained from LK by (1) restricting initial sequents to the form p ⇒ p for each propositional variable p, and (2) replacing the left and right weakening rules by the weakening rule (w) of the following form, where Σ and Θ are arbitrary multisets of formulas:

Γ ⇒ Δ / Σ, Γ ⇒ Δ, Θ   (w)

Clearly, the left and right weakening rules of LK can be regarded as special cases of this rule (w), and conversely every application of (w) in LK1 can be replaced by repeated applications of the left and right weakening rules. Moreover, the restriction on the form of initial sequents does not affect provability, as stated in the following Exercise 1.6, which is shown by induction on the length of the formula α.

Exercise 1.6 Show that α ⇒ α is provable in the restricted system LK1 for every formula α.

In this way, every proof of a given sequent in LK can be transformed into a proof of this sequent in LK1, and vice versa.
Moreover, if such a proof does not contain any application of cut rule then neither does the transformed proof. Thus, we have the following.

Lemma 1.4 A sequent is (cut-free) provable in LK1 iff it is (cut-free) provable in LK.

Next, consider the second system LK2 which is obtained from LK by dropping the weakening rules and taking only sequents of the form p, Γ ⇒ Δ, p as initial sequents of LK2, where p is an arbitrary propositional variable.7 It is clear that each sequent which is provable in LK2 is provable also in LK. To show the converse, it

7 Our modified LK and LK2, except for cut rule, are nearly the same as the systems G1 and G2 in Kleene (1952) (and also as G1c and G2c in Troelstra and Schwichtenberg 2000), respectively.

suffices to prove that each sequent which is provable in LK1 is provable also in LK2, because of Lemma 1.4. The idea is to confirm that each application of the weakening rule (w) in a proof of LK1 can be shifted upwards, i.e., any application of (w) just below a rule J of LK1 can be replaced by applications of (w) to the upper sequents followed by an application of J. For example, consider the case where J is (∨ ⇒). Then the proof ends as follows:

α, Γ ⇒ Π ; β, Γ ⇒ Π / α ∨ β, Γ ⇒ Π   (∨ ⇒)
α ∨ β, Γ ⇒ Π / Σ, α ∨ β, Γ ⇒ Π, Θ   (w)

This can be replaced by the following:

α, Γ ⇒ Π / Σ, α, Γ ⇒ Π, Θ   (w)
β, Γ ⇒ Π / Σ, β, Γ ⇒ Π, Θ   (w)
Σ, α, Γ ⇒ Π, Θ ; Σ, β, Γ ⇒ Π, Θ / Σ, α ∨ β, Γ ⇒ Π, Θ   (∨ ⇒)

By repeating this procedure, we eventually obtain a proof in LK1 in which (w) is applied only to initial sequents of the form p ⇒ p. But such an application of (w) becomes unnecessary in LK2, as its conclusion is already of the form of an initial sequent of LK2. In this way, we can get a proof of a given sequent in LK2. Hence we have the following, which says that the weakening rules can be eliminated as long as we take every sequent of the form p, Γ ⇒ Δ, p as an initial sequent. We note that cut rule is not used at all in the above replacement.

Lemma 1.5 A sequent is (cut-free) provable in LK2 iff it is (cut-free) provable in LK.

Exercise 1.7 Give the details of the above proof of Lemma 1.5, using induction on the length of proofs.

Invertible System LK∗

We next introduce another sequent system LK∗ which consists only of invertible rules. Recall that in proving Lemma 1.3, we used the fact that for each rule of LK, if (each of) the upper sequent(s) is a tautology then the lower sequent is a tautology. But the converse does not hold in general. For instance, consider an application of the rule (⇒ ∨2) whose lower sequent is p ⇒ p ∨ q. Though this sequent is a tautology, p ⇒ q is not.
We say that a given rule is invertible when (each of) the upper sequent(s) is a tautology if and only if the lower sequent is a tautology.8 Among the rules for logical connectives of LK, the left rules for conjunction and implication, and also the right rules for disjunction, are non-invertible. In the same way as in our (modified) LK, sequents of LK∗ are expressions of the form Γ ⇒ Δ where both Γ and Δ are multisets of formulas. The system LK∗ has the following initial sequents and rules. (It should be noted that it has neither explicit structural rules nor cut rule.)

0. Initial sequents of LK∗: each initial sequent is a sequent of the form p, Γ0 ⇒ Δ0, p, where p is an arbitrary propositional variable, and both Γ0 and Δ0 are arbitrary (possibly empty) multisets of propositional variables.

1. Rules for logical connectives of LK∗:

Γ ⇒ Δ, α, β / Γ ⇒ Δ, α ∨ β   (⇒ ∨)
α, Γ ⇒ Δ ; β, Γ ⇒ Δ / α ∨ β, Γ ⇒ Δ   (∨ ⇒)
α, β, Γ ⇒ Δ / α ∧ β, Γ ⇒ Δ   (∧ ⇒)
Γ ⇒ Δ, α ; Γ ⇒ Δ, β / Γ ⇒ Δ, α ∧ β   (⇒ ∧)
Γ ⇒ Δ, α ; β, Γ ⇒ Δ / α → β, Γ ⇒ Δ   (→⇒)
α, Γ ⇒ Δ, β / Γ ⇒ Δ, α → β   (⇒→)
Γ ⇒ Δ, α / ¬α, Γ ⇒ Δ   (¬ ⇒)
α, Γ ⇒ Δ / Γ ⇒ Δ, ¬α   (⇒ ¬)

8 Though we define here the invertibility of a rule in terms of tautologies for the sake of our present purpose, the notion is sometimes defined in terms of provability. An invertible sequent system was discussed first by Ketonen (1944), and similar systems G3 and LC were introduced later by Kleene (1952) and Kanger (1957), respectively.
Tautology for sequents of LK∗ is defined in the same way as for LK. The following lemma can be shown similarly to Lemma 1.3, but it says moreover that every rule of LK∗ is invertible. As a matter of fact, it can be shown in a slightly stronger form: for each assignment h and each rule of LK∗, (each of) the upper sequent(s) is true with respect to h iff the lower sequent is true with respect to h.

Lemma 1.6 (Invertibility) For each rule of LK∗, (each of) the upper sequent(s) is a tautology iff the lower sequent is a tautology.

Exercise 1.8 Give a proof of the above lemma.

For a given sequent S, a proof search tree of S is a proof-like tree constructed by applying the rules of LK∗ in the converse direction, until every topmost sequent is elementary. Here, we say that a sequent is elementary if it is of the form p1, . . . , pm ⇒ q1, . . . , qn with m, n ≥ 0, where all the pi's and qj's are propositional variables. To find such a proof search tree of S is called proof search for S. A proof search tree of S is successful if all of its topmost sequents are initial sequents of LK∗; otherwise, it is said to be failed. It is easily seen that the construction of a proof search tree of S always terminates, since for each rule of LK∗ the total number of logical connectives in the formulas of (each of) the upper sequent(s) is strictly smaller than that of the lower sequent. Thus for any sequent S there always exists a proof search tree of S. Moreover, if a proof search tree of S is successful then it is in fact a proof of S in LK∗. Here are some examples of proof search trees.

Example 1.4 The sequent (p → q) → p ⇒ p has a successful proof search tree; in fact, it is the only proof search tree of this sequent, and it is clearly a proof of (p → q) → p ⇒ p in LK∗. Read from the end sequent upwards:

(p → q) → p ⇒ p   splits by the converse of (→⇒) into ⇒ p, p → q and p ⇒ p
⇒ p, p → q   reduces by the converse of (⇒→) to p ⇒ p, q

Both topmost sequents, p ⇒ p, q and p ⇒ p, are initial sequents of LK∗, so the search is successful. Similarly we can construct a successful proof search tree of the sequent (p → q) → p ⇒ p ∨ r. In this case, there exist several proof search trees, depending on when the converse of the rule (⇒ ∨) is applied.

Example 1.5 The single proof search tree of the sequent p ⇒ (q → r) ∨ s reduces, by the converses of (⇒ ∨) and (⇒→), to the single topmost sequent p, q ⇒ r, s. When the sets {p, q} and {r, s} contain at least one propositional variable in common, the tree is successful; otherwise, it is failed.

From Lemma 1.6, we immediately get the following.

Corollary 1.7 A given sequent S of LK∗ is a tautology iff every topmost sequent of each proof search tree of S is a tautology.

On the other hand, clearly we have the following.

Lemma 1.8 An elementary sequent is a tautology iff it is an initial sequent of LK∗.

Exercise 1.9 Give a proof of the above lemma.

Corollary 1.9 (Soundness and completeness of LK∗) A given sequent S is a tautology iff it is provable in LK∗.

Proof The if-part follows from Lemma 1.6. To show the only-if part, suppose that a given sequent S is a tautology. By Corollary 1.7, every topmost sequent of each proof search tree of S is a tautology. Since any topmost sequent of a proof search tree must be elementary, by Lemma 1.8 this is equivalent to saying that every topmost sequent of each proof search tree of S is an initial sequent, i.e., each proof search tree of S is successful. As any successful proof search tree of S is nothing other than a proof of S in LK∗, the sequent S is provable in LK∗.

We note that if there exists at least one successful proof search tree of S, then S is provable and hence is a tautology. Hence, every proof search tree of S must be successful in this case.
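The proof search procedure just described is directly implementable. The following Python sketch is our own illustration (the tuple representation of formulas and the function name are not from the text): it applies the rules of LK∗ in the converse direction until all sequents are elementary, then applies the initial-sequent test of Lemma 1.8; by Corollary 1.9, the result decides whether the sequent is a tautology, i.e., provable in LK∗.

```python
# Formulas as nested tuples, e.g. ('imp', ('var', 'p'), ('var', 'q'))
# stands for p -> q. A sequent Gamma => Delta is a pair of lists
# (standing for multisets) of formulas.

def is_tautology(gamma, delta):
    """Backward proof search in LK*.

    Each call removes one connective occurrence, so the recursion
    terminates; at an elementary sequent, Lemma 1.8 applies.
    """
    # Left rules: pick a non-atomic formula in the antecedent.
    for i, f in enumerate(gamma):
        if f[0] != 'var':
            rest = gamma[:i] + gamma[i + 1:]
            if f[0] == 'and':    # (and =>): alpha, beta, Gamma => Delta
                return is_tautology(rest + [f[1], f[2]], delta)
            if f[0] == 'or':     # (or =>): two upper sequents
                return (is_tautology(rest + [f[1]], delta)
                        and is_tautology(rest + [f[2]], delta))
            if f[0] == 'imp':    # (-> =>): two upper sequents
                return (is_tautology(rest, delta + [f[1]])
                        and is_tautology(rest + [f[2]], delta))
            if f[0] == 'not':    # (not =>)
                return is_tautology(rest, delta + [f[1]])
    # Right rules: pick a non-atomic formula in the succedent.
    for i, f in enumerate(delta):
        if f[0] != 'var':
            rest = delta[:i] + delta[i + 1:]
            if f[0] == 'and':    # (=> and): two upper sequents
                return (is_tautology(gamma, rest + [f[1]])
                        and is_tautology(gamma, rest + [f[2]]))
            if f[0] == 'or':     # (=> or): Gamma => Delta, alpha, beta
                return is_tautology(gamma, rest + [f[1], f[2]])
            if f[0] == 'imp':    # (=> ->)
                return is_tautology(gamma + [f[1]], rest + [f[2]])
            if f[0] == 'not':    # (=> not)
                return is_tautology(gamma + [f[1]], rest)
    # Elementary sequent: an initial sequent of LK* iff some variable
    # occurs on both sides (Lemma 1.8).
    return any(f in delta for f in gamma)

p, q = ('var', 'p'), ('var', 'q')
peirce = ('imp', ('imp', p, q), p)                 # (p -> q) -> p
print(is_tautology([peirce], [p]))                 # True: Example 1.4
print(is_tautology([], [('or', p, ('not', p))]))   # True: excluded middle
print(is_tautology([p], [q]))                      # False: p => q fails
```

The branching rules (∨ ⇒), (⇒ ∧) and (→⇒) produce two recursive calls, matching their two upper sequents; the invertibility of every rule (Lemma 1.6) is what makes it sound to apply any applicable rule without backtracking.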

1.3 Completeness and Cut Elimination

Next we prove the completeness of LK. Suppose that a given sequent S is a tautology. Then it has a proof in LK∗, by Corollary 1.9. This proof can be transformed into a proof of S in LK2 by supplementing structural rules. In fact, all rules of LK∗ except (⇒ ∨), (∧ ⇒) and (→⇒) are rules of LK2, and the remaining three rules can be simulated in LK2 by using structural rules (but without using cut rule) (see Exercise 1.10).

Exercise 1.10 Show that for each of the rules (⇒ ∨), (∧ ⇒) and (→⇒) of LK∗, its lower sequent can be obtained from (both of) its upper sequent(s) by using rules of LK2 but without using cut rule.

As LK∗ does not have cut rule, this transformation gives us a cut-free proof of S in LK2. Combining this with Lemma 1.5, we have the following completeness of LK without cut rule.

Lemma 1.10 If a given sequent is a tautology, it is provable in LK without using cut rule.

Example 1.6 The successful proof search tree of (p → q) → p ⇒ p in Example 1.4 can be transformed into the following cut-free proof in LK, written here as a linear list of sequents:

p ⇒ p   (initial sequent)
p ⇒ p, q   by (⇒ w)
⇒ p, p → q   by (⇒→)
(p → q) → p ⇒ p, p   from the above and the initial sequent p ⇒ p, by (→⇒)
(p → q) → p ⇒ p   by (⇒ c)

To sum up, we have the following result.

Theorem 1.11 (Completeness and cut elimination of LK) For any sequent S, the following three conditions are mutually equivalent.
1. S is provable in LK,
2. S is a tautology,
3. S is provable in LK without using cut rule.

Proof By Lemma 1.3, statement 1 implies 2. Statement 2 implies 3 by Lemma 1.10. Statement 3 obviously implies 1.

Consequently, every sequent which is a tautology is provable in LK (completeness of LK), and every sequent provable in LK has a proof without any application of cut rule (cut elimination for LK).

Exercise 1.11 Show first that both ¬α → α ⇒ α and (α → β) → (α → γ) ⇒ α → (β → γ) are tautologies. Then give a cut-free proof of each of these sequents in LK.

Now we can give a proof of the following result on the connection between LK and the Hilbert-style system HK mentioned in Sect. 1.1. The proof of the if-part is given with the help of completeness; a purely syntactic proof is also possible, but would be a bit complicated.

Corollary 1.12 (LK and Hilbert-style system HK) For each sequent, it is provable in LK iff its corresponding formula is provable in HK.

Proof We can show that for each rule of LK, if the corresponding formula(s) of (each of) the upper sequent(s) is provable in HK then the corresponding formula of the lower sequent is also provable in HK. Consequently, if a given sequent Γ ⇒ Δ is provable in LK then its corresponding formula is provable in HK. Conversely, by Lemma 1.1, if the corresponding formula of Γ ⇒ Δ is provable then it is a tautology, and hence Γ ⇒ Δ is a tautology. Therefore, by Theorem 1.11, it is provable in LK.9

The proof of Theorem 1.11 as a whole implies the following.

Corollary 1.13 (Decidability) There exists an algorithm which decides whether a given sequent S is provable in LK or not. In fact, this algorithm can give us a cut-free proof of S when it is provable, and also an assignment which makes S false when it is not.

Proof We explain here only how the latter half of the second statement holds, as the rest has already been shown. Suppose that S is not provable. Then there exists a failed proof search tree such that at least one of its topmost sequents is of the form p1, . . . , pm ⇒ q1, . . . , qn for some propositional variables p1, . . . , pm, q1, . . . , qn such that no variable appears in both {p1, . . . , pm} and {q1, . . . , qn}. In such a case, we can consistently define an assignment g by g(pi) = 1 for each 1 ≤ i ≤ m and g(qj) = 0 for each 1 ≤ j ≤ n. Clearly, under this assignment g, the sequent p1, . . . , pm ⇒ q1, . . . , qn is false.
Hence, S is false with respect to this assignment g, by the note just above Lemma 1.6.

As we have seen, the sequent system LK∗ has many nice features as a sequent system specific to classical logic. For instance, it can give us a practical system for automated theorem proving in classical logic. On the other hand, from a theoretical point of view, the system LK can be regarded as a core system because of its generality, and is useful when we compare it with other sequent systems, such as those for substructural logics. When cut elimination holds for a given sequent system, the system is said to be cut-free. So LK is a cut-free sequent system. Our proof of the cut elimination theorem given above tells us that every application of cut rule is always eliminable, but it does not clarify how each application of cut rule can be eliminated. On the other hand, the proof by G. Gentzen consists of a concrete procedure for eliminating applications of cut rule. This proof-theoretic approach will be discussed in detail in Chap. 2.

9 Corresponding formulas, defined just below Lemma 1.2, may contain the negation ¬. Thus, precisely speaking, we need to consider here the formula obtained from a corresponding formula (in the original sense) by replacing every subformula of the form ¬β by β → 0.
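The assignment g constructed in the proof of Corollary 1.13 is entirely mechanical. A small Python sketch of this countermodel extraction (the function name is ours, for illustration only):

```python
def falsifying_assignment(antecedent_vars, succedent_vars):
    """Given a failed elementary sequent p1,...,pm => q1,...,qn with no
    variable in common, return the assignment g of Corollary 1.13:
    g(pi) = 1 and g(qj) = 0, which makes the sequent false."""
    if set(antecedent_vars) & set(succedent_vars):
        raise ValueError("this is an initial sequent, hence provable")
    g = {p: 1 for p in antecedent_vars}
    g.update({q: 0 for q in succedent_vars})
    return g

# The failed leaf p, q => r, s of Example 1.5 (p, q, r, s distinct):
print(falsifying_assignment(["p", "q"], ["r", "s"]))
# {'p': 1, 'q': 1, 'r': 0, 's': 0}
```

By the invertibility of the rules of LK∗ (Lemma 1.6), this assignment falsifies not only the failed leaf but the end sequent of the search tree as well.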

1.4 Sequent System LJ for Intuitionistic Logic

We next introduce a sequent system LJ for intuitionistic logic, beginning with a brief explanation of intuitionistic logic itself. The discovery of Russell's paradox in set theory in 1901 caused disputes over logical arguments in mathematics. L.E.J. Brouwer insisted that mathematics should be limited to constructive concepts and arguments. His attitude toward mathematics is called the intuitionist's (or constructive) point of view, which can be summed up informally as follows.

• To infer α ∧ β, it is required to give a proof of α and a proof of β,
• To infer α ∨ β, it is required to give either a proof of α or a proof of β,
• To infer α → β, it is required to give an algorithm which transforms any given proof of α into a proof of β,
• To infer ∀xα(x), it is required to give an algorithm which provides a proof of α(u) for any given individual element u,
• To infer ∃xα(x), it is required to give an algorithm which provides both an individual element u and a proof of α(u).

From the intuitionist's point of view, the truth of formulas must be established by giving a proof. For instance, a proof of a disjunctive formula α ∨ β is obtained by giving a proof of either α or β and showing which one holds. This principle is called the disjunction property. Thus, the law of excluded middle α ∨ ¬α is not acceptable in general: to assert it, we would need a proof of either α or of ¬α, and this is not always available. We can observe that the principle called reductio ad absurdum is often used in standard mathematical arguments. That is, when we want to show α, we assume to the contrary that α is not the case, and then try to derive a contradiction from this assumption. If we succeed in deriving a contradiction, we conclude that α holds, by the law of excluded middle. This is not acceptable from the intuitionist's point of view.
To clarify the point, consider in particular the case where α is a statement of the form “there exists x such that P(x)”. Such statements are called existence theorems in mathematics. Intuitionists criticize the above argument on the grounds that what it shows is simply that “the non-existence of an x such that P(x) is contradictory”, and that it says nothing about a concrete element u for which P(u) holds; the latter is what their standpoint requires in order to infer the existence statement. Thus, the law of double negation ¬¬α → α is not acceptable either.

Example 1.7 Here is a non-constructive proof of the following existence theorem: there exist two irrational numbers s and t such that s^t is rational. The proof goes as follows. Let a = b = √2, and consider the number a^b. Suppose first that a^b is rational. Then it is enough to take √2 for both s and t. Suppose otherwise. In that case, both a^b and b (= √2) are irrational, but (a^b)^b is rational, since (a^b)^b = a^(b·b) = a^2 = 2. Hence, we can take a^b = √2^√2 for s and √2 for t. Thus, the above statement holds in either case. However, this proof will not tell us

which is in fact the case, and hence the above argument is not acceptable from the intuitionist's point of view. We will study various kinds of logics from a mathematical point of view in this book, and will not be concerned further with these background viewpoints. Heyting (1930) formalized the pure logical part of intuitionistic arguments, which is now called intuitionistic logic. In Hilbert-style formulation, a system HJ for intuitionistic logic is obtained from the system HK for classical logic given in Sect. 1.1 simply by dropping the axiom scheme of double negation: ¬¬α → α. Interestingly enough, the system LJ for intuitionistic logic introduced by Gentzen is obtained from LK simply by imposing a restriction on the number of formulas in the succedents of sequents, as shown below. A sequent of LJ is an expression of the following form with m ≥ 0, where each αi (i ≤ m) is a formula and β is either a formula or empty:

α1, . . . , αm ⇒ β.

Thus, each sequent of LJ has a single (possibly empty) succedent, instead of a multi-formula succedent. This sequent is again understood as ‘β follows from assumptions α1, . . . , αm’. The rules of the sequent system LJ must be modified according to this restriction. As a matter of fact, the system LJ can be obtained from LK as presented in the previous section by taking a single formula or the empty sequence for Π, and the empty sequence for Λ. Consequently, LJ consists of the following initial sequents and rules.

0. Initial sequents: each initial sequent of LJ is a sequent of the form α ⇒ α.

1. Rules for logical connectives:

α, Γ ⇒ ϕ ; β, Γ ⇒ ϕ / α ∨ β, Γ ⇒ ϕ   (∨ ⇒)
Γ ⇒ α / Γ ⇒ α ∨ β   (⇒ ∨1)
Γ ⇒ β / Γ ⇒ α ∨ β   (⇒ ∨2)
α, Γ ⇒ ϕ / α ∧ β, Γ ⇒ ϕ   (∧1 ⇒)
β, Γ ⇒ ϕ / α ∧ β, Γ ⇒ ϕ   (∧2 ⇒)
Γ ⇒ α ; Γ ⇒ β / Γ ⇒ α ∧ β   (⇒ ∧)
Γ ⇒ α ; β, Δ ⇒ ϕ / α → β, Γ, Δ ⇒ ϕ   (→⇒)
α, Γ ⇒ β / Γ ⇒ α → β   (⇒→)
Γ ⇒ α / ¬α, Γ ⇒   (¬ ⇒)
α, Γ ⇒ / Γ ⇒ ¬α   (⇒ ¬)

2. Cut rule:

Γ ⇒ α ; α, Δ ⇒ ϕ / Γ, Δ ⇒ ϕ   (cut)

3. Structural rules:

• exchange rule: Γ, α, β, Δ ⇒ ϕ / Γ, β, α, Δ ⇒ ϕ   (e ⇒)
• contraction rule: α, α, Γ ⇒ ϕ / α, Γ ⇒ ϕ   (c ⇒)
• left weakening rule: Γ ⇒ ϕ / α, Γ ⇒ ϕ   (w ⇒)
• right weakening rule: Γ ⇒ / Γ ⇒ α   (⇒ w)

For the same reason as before, in the following we will modify the system LJ by assuming that every sequent of LJ is of the form Γ ⇒ β with a multiset Γ of formulas, and hence deleting the exchange rule. Proofs of sequents in LJ and provability of sequents in LJ are defined in the same way as in LK. Similarly to Lemma 1.2, we can show the following.

Lemma 1.14 A sequent α1, . . . , αm ⇒ β is provable in LJ iff α1 ∧ . . . ∧ αm ⇒ β is provable in LJ.

It is easy to see that any proof of a sequent of the form Γ ⇒ α in LJ can also be regarded as a proof of this sequent in LK. Obviously, the following holds.

Lemma 1.15 If a sequent of LJ is provable in LJ then it is provable in LK.

Suppose that a sequent Γ ⇒ β has a proof P in LK such that every sequent in P has a single or empty succedent. Then P can be regarded as a proof in LJ, and hence Γ ⇒ β is provable in LJ. Thus, for instance, the distributive law in Example 1.2 is already provable in LJ.

Exercise 1.12 Check that the two sequents in Exercise 1.1 are in fact provable in LJ.

Exercise 1.13 Show that ⇒ ¬¬(α ∨ ¬α) is provable in LJ.

The converse of Lemma 1.15 does not always hold. For instance, the sequents in Exercise 1.2 are in general not provable in LJ. (Later, we will give a way of checking whether a given sequent is provable in LJ or not.)

Example 1.8 The sequent ¬α ∨ ¬β ⇒ ¬(α ∧ β) is provable in LJ. But ¬(α ∧ β) ⇒ ¬α ∨ ¬β is not always provable in LJ.

Remark 1.9 The sequent system LJ is a single-succedent sequent system, while LK is a multi-succedent one. It is possible to give a multi-succedent sequent system for intuitionistic logic by imposing restrictions on some rules of LK. Let LJ′ be the multi-succedent system obtained from LK by restricting only the rules (⇒→) and (⇒ ¬) to the following form, while keeping all other rules:

α, Γ ⇒ β / Γ ⇒ α → β   (⇒→)
α, Γ ⇒ / Γ ⇒ ¬α   (⇒ ¬)

Then LJ′ is a sequent system for intuitionistic logic (see Exercise 1.14 below).10

Exercise 1.14 Show that every sequent of the form α1, . . . , αm ⇒ β is provable in LJ′ iff it is provable in LJ.

Exercise 1.15 Suppose that a formula α is provable in LK but not in LJ. Show that α contains either → or ¬. (Look at the form of each sequent in Exercise 1.2, for example.)

10 Such a multi-succedent sequent system for intuitionistic logic as LJ′ was discussed by several researchers, including Kleene (1952), Maehara (1954) and Umezawa (1955).

Chapter 2

Cut Elimination for Sequent Systems

Cut elimination for a given sequent system L means that if a sequent is provable in L then it is also provable in L without using cut rule. A proof P in L is said to be cut-free when P contains no application of cut rule. Intuitively, a cut-free proof is a kind of proof without detours, or a direct proof, though it may be longer than a proof with cut. On the other hand, such a direct proof has many ‘good’ properties. Because of this, it is one of the most important goals of the syntactic study of logic to formalize a given logic in a sequent system and to show cut elimination for that system, although cut elimination holds only for a rather limited number of sequent systems. We have already given a semantical proof of cut elimination for LK in Theorem 1.11. But that proof does not give any concrete effective procedure for eliminating cut rules from a given proof with cut rule. Moreover, since the proof in the previous chapter relies heavily on the invertibility of the system LK∗, we cannot expect a similar proof to work well for other sequent systems. In this chapter, we will give a general syntactic proof of cut elimination, which works for many sequent systems including LJ. We note that the proof presented here (essentially due to Ono and Komori 1985) is a simplified version of Gentzen's original proof. In the following, sequents are expressions of the form Γ ⇒ Δ (or Γ ⇒ α for LJ) where Γ and Δ are multisets of formulas and α is a formula or empty.1

2.1 Basic Idea

To convey the basic idea of our procedure of cut elimination, we first give an intuitive explanation of the background idea in this section, and then give a precise proof of cut elimination in the next section.

1 This is simply for brevity’s sake, and our argument works well even for the case where sequents are defined by sequences of formulas instead of multisets of formulas.

© Springer Nature Singapore Pte Ltd. 2019 H. Ono, Proof Theory and Algebra in Logic, Short Textbooks in Logic, https://doi.org/10.1007/978-981-13-7997-0_2

The process of cut elimination consists of a detailed case analysis and an effective procedure for eliminating applications of cut rule, depending on how an application of cut rule appears in a given proof. For the sake of brevity, we will explain it for the sequent system LJ for intuitionistic logic, but the argument applies essentially not only to LK but also to many other sequent systems.

Let us consider an arbitrary proof Q which contains at least one application of cut rule. We take one of the uppermost applications of cut rule in this proof Q, with lower sequent S. Let P be the subproof of Q which is obtained by taking all sequents over S, including S itself. The last part of P must be of the following form, and thus S is equal to Γ, Δ ⇒ ϕ in this case.

    Γ ⇒ α    α, Δ ⇒ ϕ
    ────────────────── (cut)
    Γ, Δ ⇒ ϕ

Our aim is to eliminate this application of cut rule. But as a matter of fact, it is in general impossible to do so in a single step. Instead, we transform P into a proof P′ with the same end sequent in which every application of cut rule is simpler than the application of cut rule in P. By repeating such transformations, we can expect to get a cut-free proof of this end sequent eventually.

There are two basic ways of getting simpler proofs. The first way is to give a proof P′ containing an application of cut rule such that the length of its cut formula, called the grade, is smaller than that of the cut formula in P. Let us consider the following example, where the cut formula α is equal to β ∧ γ. Moreover, we assume that Γ ⇒ β ∧ γ and β ∧ γ, Δ ⇒ ϕ are obtained by applying (⇒ ∧) and (∧1 ⇒), respectively, each with principal formula β ∧ γ.

    Γ ⇒ β    Γ ⇒ γ           β, Δ ⇒ ϕ
    ─────────────── (⇒ ∧)    ───────────── (∧1 ⇒)
    Γ ⇒ β ∧ γ                β ∧ γ, Δ ⇒ ϕ
    ───────────────────────────────────── (cut)
    Γ, Δ ⇒ ϕ

We will transform it into the following proof P′, in which the cut formula β is clearly shorter than β ∧ γ in length.
    Γ ⇒ β    β, Δ ⇒ ϕ
    ────────────────── (cut)
    Γ, Δ ⇒ ϕ

Another way of getting a simpler proof P′ is to give a proof P′ in which cut rule is applied earlier than the application of cut rule in P. To measure simplicity of this kind, we introduce the notion of height. That is, the height of a given application J of cut rule is the total number of (occurrences of) sequents above the lower sequent of J (including this lower sequent). The height decreases if we can push the application of cut rule upward. For example, look at the following application of cut rule, where the left upper sequent β ∧ γ, Γ′ ⇒ α of cut rule is obtained by applying (∧1 ⇒).

    β, Γ′ ⇒ α
    ──────────────── (∧1 ⇒)
    β ∧ γ, Γ′ ⇒ α    α, Δ ⇒ ϕ
    ────────────────────────── (cut)
    β ∧ γ, Γ′, Δ ⇒ ϕ

Consider the following proof P′ of β ∧ γ, Γ′, Δ ⇒ ϕ.

    β, Γ′ ⇒ α    α, Δ ⇒ ϕ
    ────────────────────── (cut)
    β, Γ′, Δ ⇒ ϕ
    ──────────────── (∧1 ⇒)
    β ∧ γ, Γ′, Δ ⇒ ϕ

Clearly, the height of the application of cut rule in P′ is smaller than that in P by 1. In this way, we will transform a given proof P into another proof P′ with the same end sequent so that for every application of cut rule in P′, either the grade or the height becomes smaller than that of the last application of cut rule in P. (It should be noted that in general P′ will contain more than one application of cut rule.)

In fact, we can show that a given proof P can be transformed into a simpler P′ by using these two ways in almost all cases. But trouble arises when one of the upper sequents of cut rule in P is the lower sequent of an application of contraction rule whose principal formula is the cut formula α. In LJ, this happens when α, Δ ⇒ ϕ is obtained from the upper sequent α, α, Δ ⇒ ϕ by applying (left) contraction rule. That is:

    α, α, Δ ⇒ ϕ
    ───────────── (c ⇒)
    Γ ⇒ α    α, Δ ⇒ ϕ
    ────────────────── (cut)
    Γ, Δ ⇒ ϕ

A possible way of dealing with this case would be to push the application of cut rule up in the following way.

    Γ ⇒ α    α, α, Δ ⇒ ϕ
    ───────────────────── (cut1)
    Γ ⇒ α    α, Γ, Δ ⇒ ϕ
    ───────────────────── (cut2)
    Γ, Γ, Δ ⇒ ϕ
    ───────────── some (c ⇒)
    Γ, Δ ⇒ ϕ

The height of the first application (cut1) of cut rule is smaller than the height of the original one, but the height of the lower application (cut2) is not. Here our primary idea reaches a deadlock, for LJ and also for LK. On the other hand, the above transformation works perfectly well for sequent systems having no contraction rules; these are discussed in Sect. 4.3.

2.2 Cut Elimination

Extended Cut Rule

To remedy this drawback, we introduce the following extended cut rule (briefly, e-cut rule), which is a generalization of cut rule.2

    Γ ⇒ α    α, Δ ⇒ ϕ
    ────────────────── (e-cut)
    Γ, Δα ⇒ ϕ

Here, Δα represents a multiset of formulas obtained from Δ by deleting some occurrences of α (possibly none, and not necessarily all occurrences) in it. We call the above α also the cut formula of this application of e-cut rule. It is easily seen that each application of cut rule can be regarded as a particular application of e-cut rule in which Δα is Δ. Conversely, each application of e-cut rule can be replaced by some repeated applications of cut rule.

In the following, we will give a detailed proof of cut elimination for LJ. For this purpose, we introduce a tentative system LJe which is obtained from LJ by replacing cut rule by extended cut rule, and show extended cut elimination for LJe. This is enough to derive cut elimination for LJ. To see this, suppose that a sequent S has a proof P in LJ. Then the proof P can be regarded also as a proof in LJe, since each application of cut rule in it can be regarded as an application of e-cut rule. Then by extended cut elimination for LJe, we get a proof R of S in LJe without any application of e-cut rule. But this proof is none other than a cut-free proof of S in LJ.

Now, consider the following statement (∗).

(∗) Every proof P containing only one application of e-cut rule, as its last inference, can be transformed into a proof P∗ containing no applications of e-cut rule, without changing its end sequent.

Extended cut elimination follows from the statement (∗). To show this, suppose that a sequent Σ ⇒ ϕ is provable in LJe with a proof Q containing m (> 0) applications of e-cut rule. We take one of the uppermost applications of e-cut rule in this proof Q. Let S be the lower sequent of this application of e-cut rule, and P be the subproof of Q which is obtained by taking all sequents over S, including S itself.
From our assumption (∗), it follows that the sequent S has a proof P′ without e-cut rule. We replace the subproof P in Q by P′. The proof thus obtained is also a proof of Σ ⇒ ϕ, and it contains m − 1 applications of e-cut rule. By induction on the number of applications of e-cut rule, we can conclude that Σ ⇒ ϕ is provable in LJ without using any e-cut rule. Thus, extended cut elimination follows.

Procedure for Eliminating Extended Cut Rule

Our proof of the statement (∗) goes almost the same way as the basic idea described before, but a slight modification is necessary as cut rule is replaced by e-cut rule. We will give a detailed proof below. We assume that the last part of P is of the following form:

    Γ ⇒ α    α, Δ ⇒ ϕ
    ────────────────── (e-cut)
    Γ, Δα ⇒ ϕ

We must pay attention to the following two points:
1. all the possible proof structures of P should be examined,
2. it should be confirmed that our procedure of simplifying proofs will eventually terminate and reach an e-cut-free proof in the end.

To attain the second point, we use double induction on the grade and the height of an application of e-cut rule.

Definition 2.1 (Grade and height) Let J be an application of extended cut rule in a proof P of LJe. Then
1. the grade of J is the total number of (occurrences of) logical symbols in the cut formula α of J,
2. the height of J is the total number of sequents appearing in the subproof of P whose end sequent is the lower sequent of J, i.e., the length of that subproof.

We attach a pair (k, n) to each application J of e-cut rule in a proof if the grade and the height of J are k and n, respectively. Now, we introduce the following order ≺ on the Cartesian product of the set of all non-negative integers with itself:

    (k, n) ≺ (k′, n′)  iff  (1) k < k′, or (2) k = k′ and n < n′.

That is, a pair (k, n) is smaller (with respect to ≺) than another pair (k′, n′) if and only if either the grade of the first pair is smaller, or the grades are equal but the height of the first pair is smaller. It can be shown that the order ≺ is a well-order, which means that any descending chain of these ordered pairs is always finite.

Our proof using the double induction is outlined as follows. We assume that a proof P contains an application of e-cut rule as its last inference. Depending on how this e-cut rule appears, P is transformed in various ways into another proof P′ without changing its end sequent. The proof P′ may contain some (possibly no) applications of e-cut rule.

2 In order to overcome this difficulty, G. Gentzen introduced mix rule in his thesis Gentzen (1935). We take here a slightly different approach. The difference between Gentzen’s original proof and ours is clarified in Remark 2.1.
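The order ≺ just defined is the lexicographic order on pairs of non-negative integers. As a small illustration (ours, not from the text; the function name `precedes` is arbitrary), its two clauses translate directly into code:

```python
def precedes(p, q):
    """Lexicographic order on (grade, height) pairs: p precedes q iff
    the grade is strictly smaller, or the grades are equal and the
    height is strictly smaller."""
    (k, n), (k2, n2) = p, q
    return k < k2 or (k == k2 and n < n2)

# Each transformation step replaces the pair attached to an e-cut by a
# strictly smaller one, so every chain of transformations is finite
# (the order is well-founded), although there is no uniform bound on
# how long a descending chain starting from a given pair may be.
assert precedes((1, 100), (2, 0))    # smaller grade wins, whatever the heights
assert precedes((2, 3), (2, 4))      # equal grades: compare heights
assert not precedes((2, 4), (2, 4))  # the order is strict
```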
No matter how P is transformed, it can be shown that every pair attached to an application of e-cut rule in P′ is smaller (with respect to ≺) than the pair attached to the original e-cut rule in P. If this is the case, then by repeating these transformations our procedure must eventually terminate, since ≺ is a well-order. This implies that after finitely many transformation steps we get a proof P∗ without any application of e-cut rule. Now suppose that the e-cut rule at the end of P has the upper sequents Γ ⇒ α and α, Δ ⇒ ϕ. Depending on how these upper sequents are introduced, we consider the following four cases.

Case 1. The case where either of the upper sequents is an initial sequent.

Consider the case where Γ ⇒ α is the initial sequent α ⇒ α. The upper sequent on the right side is α, Δ ⇒ ϕ, which is provable without using any e-cut rule by our assumption. The lower sequent is α, Δα ⇒ ϕ in this case. To get this sequent, it is enough to apply contraction rule to α, Δ ⇒ ϕ as many times as necessary. Moreover, the resulting proof of α, Δα ⇒ ϕ contains no applications of e-cut rule. This settles the present case. The case where α, Δ ⇒ ϕ is an initial sequent is obvious.

Case 2. The case where either of the upper sequents is the lower sequent of a structural rule.

Let I denote the application of this structural rule, which is either a weakening rule or a contraction rule. If the principal formula of I is not equal to the cut formula α, the upper sequent of I must contain α. In such a case, we can push the e-cut rule up and then apply the same structural rule, as explained in the previous section. This makes the height of this application of e-cut rule smaller while the grade remains the same.

Next we consider the case where the principal formula of I is equal to the cut formula α. First, suppose that α is introduced by an application I of a weakening rule. For instance, consider the case where α, Δ ⇒ ϕ is obtained by (w ⇒) in the following way:

    Δ ⇒ ϕ
    ──────────── (w ⇒)
    Γ ⇒ α    α, Δ ⇒ ϕ
    ────────────────── (e-cut)
    Γ, Δα ⇒ ϕ

When Δα is equal to Δ, the lowest sequent is obtained from Δ ⇒ ϕ by repeated applications of (w ⇒), without any application of e-cut rule. Otherwise, Δ must contain at least one α. Then we can replace the above by the following, in which the application of e-cut rule has a smaller height:

    Γ ⇒ α    Δ ⇒ ϕ
    ──────────────── (e-cut)
    Γ, Δα ⇒ ϕ

Lastly, suppose that α is the principal formula of an application I of a contraction rule. For LJ, this happens only for left contraction rule. This is in fact the case where we met a difficulty in our informal argument before.
    α, α, Δ ⇒ ϕ
    ───────────── (c ⇒)
    Γ ⇒ α    α, Δ ⇒ ϕ
    ────────────────── (e-cut)
    Γ, Δα ⇒ ϕ

Now that we have e-cut rule, we can treat this as follows, where the application of e-cut rule has a smaller height:

    Γ ⇒ α    α, α, Δ ⇒ ϕ
    ───────────────────── (e-cut)
    Γ, Δα ⇒ ϕ

Case 3. The case where either of the upper sequents is the lower sequent of a rule for logical connectives whose principal formula is not the cut formula.

For instance, consider the case where Γ ⇒ α is obtained by applying (∧1 ⇒) as follows.

    β, Γ′ ⇒ α
    ──────────────── (∧1 ⇒)
    β ∧ γ, Γ′ ⇒ α    α, Δ ⇒ ϕ
    ────────────────────────── (e-cut)
    β ∧ γ, Γ′, Δα ⇒ ϕ

We can replace it by the following, in almost the same way as in the previous section, in which the application of e-cut rule has a smaller height:

    β, Γ′ ⇒ α    α, Δ ⇒ ϕ
    ────────────────────── (e-cut)
    β, Γ′, Δα ⇒ ϕ
    ───────────────── (∧1 ⇒)
    β ∧ γ, Γ′, Δα ⇒ ϕ

Similar arguments work also for the other rules for logical connectives.

Case 4. The remaining case is the one where both of the upper sequents are lower sequents of rules for logical connectives, each of whose principal formula is the cut formula.

Similarly to the discussion in the previous section, we consider the case where the cut formula α is of the form β ∧ γ, and assume that Γ ⇒ β ∧ γ and β ∧ γ, Δ ⇒ ϕ are obtained by applying (⇒ ∧) and (∧1 ⇒), respectively, each with principal formula β ∧ γ. Thus, the proof must end as follows.

    Γ ⇒ β    Γ ⇒ γ           β, Δ ⇒ ϕ
    ─────────────── (⇒ ∧)    ───────────── (∧1 ⇒)
    Γ ⇒ β ∧ γ                β ∧ γ, Δ ⇒ ϕ
    ───────────────────────────────────── (e-cut)
    Γ, Δβ∧γ ⇒ ϕ

If Δ contains no β ∧ γ, then Δβ∧γ is equal to Δ. In this case we can replace it by the following, where the application of e-cut rule has a smaller grade.

    Γ ⇒ β    β, Δ ⇒ ϕ
    ────────────────── (e-cut)
    Γ, Δ ⇒ ϕ

Otherwise, Δ contains at least one β ∧ γ. Then the proof can be replaced by the following.

    Γ ⇒ β ∧ γ    β, Δ ⇒ ϕ
    ────────────────────── (e-cut)
    Γ ⇒ β    β, Γ, Δβ∧γ ⇒ ϕ
    ──────────────────────── (e-cut)
    Γ, Γ, Δβ∧γ ⇒ ϕ
    ──────────────── some (c ⇒)
    Γ, Δβ∧γ ⇒ ϕ

The upper application of e-cut rule has the same grade but a smaller height, and hence β, Γ, Δβ∧γ ⇒ ϕ has a proof without e-cut rule. On the other hand, the lower

one has a smaller grade. Therefore, Γ, Δβ∧γ ⇒ ϕ also has a proof without e-cut rule. Similar arguments work well for the other rules for logical connectives.

Now, all possibilities are exhausted by these four cases. Thus, by the hypothesis of induction we can conclude that Γ, Δα ⇒ ϕ has a proof without e-cut rule. This completes our proof of (extended) cut elimination.

Theorem 2.1 Cut elimination holds for the sequent system LJ for intuitionistic logic.

Exercise 2.1 Replace the following application of e-cut rule by a simpler application of e-cut rule.

    Γ ⇒ β    γ, Σ ⇒ α
    ────────────────── (→⇒)
    β → γ, Γ, Σ ⇒ α    α, Δ ⇒ δ
    ──────────────────────────── (e-cut)
    β → γ, Γ, Σ, Δα ⇒ δ

Exercise 2.2 Replace the following application of e-cut rule by simpler applications of e-cut rule.

    β, Σ ⇒ γ              Γ ⇒ β    γ, Δ ⇒ δ
    ───────── (⇒→)        ─────────────────── (→⇒)
    Σ ⇒ β → γ             β → γ, Γ, Δ ⇒ δ
    ──────────────────────────────────────── (e-cut)
    Σ, Γβ→γ, Δβ→γ ⇒ δ

We have already given a semantical proof of cut elimination for the sequent system LK in Theorem 1.11. In the same way as above, we can also show it syntactically. In this case, e-cut rule has the following form.

    Γ ⇒ Λ, α    α, Δ ⇒ Π
    ───────────────────── (e-cut)
    Γ, Δα ⇒ Λα, Π

Here, Λα is defined in the same way as Δα.

Theorem 2.2 Cut elimination holds for the sequent system LK for classical logic.

Remark 2.1 (Comparison with Gentzen’s proof) We will make a brief comment on how our proof differs from Gentzen’s original proof in Gentzen (1935). As we mentioned before, in the standard proof of cut elimination due to Gentzen, the following mix rule is used (for the case of LK), instead of extended cut rule:

    Γ ⇒ Θ    Σ ⇒ Π
    ──────────────── (mix)
    Γ, Σα∗ ⇒ Θα∗, Π

Here, both Σ and Θ are multisets of formulas containing at least one α, and Σα∗ and Θα∗ denote the multisets of formulas obtained from Σ and Θ, respectively, by deleting all occurrences of α in them. Obviously, mix rule can be regarded as a special application of extended cut rule. Mix rule works just as well for showing cut elimination. But Gentzen needed to introduce a different measure called the rank, instead of the height, since

in some cases the height will not decrease after replacing an application of mix rule by some other applications of mix rule. This caused some complications in the proof. Extended cut rule and the notion of height make our proof easier and more accessible than Gentzen’s.

In Exercise 1.14, we introduced a sequent system LJ′ for intuitionistic logic which is obtained from LK by restricting the two rules (⇒→) and (⇒ ¬) so that each of their lower sequents has a single formula on the right-hand side. Though cut elimination for LJ′ can be shown basically in the same way as above, some careful considerations are necessary for Case 3, where the right upper sequent of an e-cut rule is the lower sequent of either (⇒→) or (⇒ ¬). For instance, consider the following (sub)proof P where the left upper sequent of e-cut rule is introduced by (⇒ ∨).

    Γ ⇒ Λ, γ                 α, γ ∨ δ, Δ ⇒ β
    ───────────── (⇒ ∨)      ─────────────────── (⇒→)
    Γ ⇒ Λ, γ ∨ δ             γ ∨ δ, Δ ⇒ α → β
    ──────────────────────────────────────── (e-cut)
    Γ, Δγ∨δ ⇒ Λγ∨δ, α → β

A naive idea would be to push the e-cut rule up in the following way, in order to decrease the height of e-cut rule.

    Γ ⇒ Λ, γ
    ───────────── (⇒ ∨)
    Γ ⇒ Λ, γ ∨ δ    α, γ ∨ δ, Δ ⇒ β
    ──────────────────────────────── (e-cut)
    α, Γ, Δγ∨δ ⇒ Λγ∨δ, β
    ───────────────────────── (⇒→)
    Γ, Δγ∨δ ⇒ Λγ∨δ, α → β

But when Λγ∨δ is non-empty, the last application of (⇒→) is not allowed in LJ′, so at this point the above idea does not work. Nevertheless, a careful observation and the modification described below make it possible to show the following.

Theorem 2.3 Cut elimination holds for the sequent system LJ′ for intuitionistic logic.

Proof We will show how to deal with the above case. So we suppose that Λγ∨δ is non-empty. Let us consider the following proof.

    γ ⇒ γ
    ─────────── (⇒ ∨)
    γ ⇒ γ ∨ δ    α, γ ∨ δ, Δ ⇒ β
    ───────────────────────────── (e-cut)
    α, γ, Δγ∨δ ⇒ β
    ─────────────────── (⇒→)
    γ, Δγ∨δ ⇒ α → β

By the hypothesis of induction, the sequent γ, Δγ∨δ ⇒ α → β is provable without e-cut rule, as the height of the e-cut rule above is smaller than that in P. (Indeed, Γ ⇒ Λ, γ in P cannot be an initial sequent, as Λγ∨δ is non-empty and hence so is Λ.) Now we consider the following two subcases, depending on whether Λ = Λγ∨δ or not.
First, suppose that Λ = Λγ∨δ. Then, applying e-cut rule to Γ ⇒ Λ, γ and the above sequent γ, Δγ∨δ ⇒ α → β, we get Γ, Δγ∨δ ⇒ Λγ∨δ, α → β, and the grade of this application is smaller than that in P. Otherwise, Λ ≠ Λγ∨δ. This means that Λ contains at least one more occurrence of γ ∨ δ than Λγ∨δ. Consider the following proof with the cut formula γ ∨ δ, in which the height of e-cut rule is smaller than that in P.

    α, γ ∨ δ, Δ ⇒ β
    ─────────────────── (⇒→)
    Γ ⇒ Λ, γ    γ ∨ δ, Δ ⇒ α → β
    ───────────────────────────── (e-cut)
    Γ, Δγ∨δ ⇒ Λγ∨δ, α → β, γ

Then, apply e-cut rule to Γ, Δγ∨δ ⇒ Λγ∨δ, α → β, γ and γ, Δγ∨δ ⇒ α → β with the cut formula γ; the grade of this application is smaller than that in P. By contracting Δγ∨δ, Δγ∨δ to Δγ∨δ, we obtain Γ, Δγ∨δ ⇒ Λγ∨δ, α → β in this case as well. Thus, by the hypothesis of induction, this sequent is provable in LJ′ without cut rule. Other cases can be treated similarly. □

Exercise 2.3 Check the case in the above proof where the left upper sequent of e-cut rule is introduced by (⇒ ∧).

2.3 Subformula Property

One of the most important consequences of cut elimination is the subformula property.

Definition 2.2 (Subformula property) A proof P of a sequent S in LK (LJ) has the subformula property if every formula appearing in P is a subformula of some formula in S. A sequent system L has the subformula property when every sequent which is provable in L has a proof in L with the subformula property.

Theorem 2.4 Each of the sequent systems LK and LJ has the subformula property.

Proof Suppose that a sequent S is provable in LK (LJ). Then it has a cut-free proof P by Theorem 2.2 (Theorem 2.1, respectively). It is easily seen that for each rule J of LK (LJ, respectively) except cut rule, every formula in an upper sequent appears as a subformula of some formula in the lower sequent. Thus, by induction on the length of the proof P, we can show that every formula appearing in P is a subformula of some formula in S. This completes our proof. □

Exercise 2.4 Show that LK is consistent, i.e., that the empty sequent ⇒ is not provable in LK.

It is shown in the next chapter that many important logical properties follow from the subformula property. We note that in any application of cut rule, the cut formula may not appear in the lower sequent although it appears in both of the upper sequents. One of the immediate consequences of the subformula property is shown in Corollary 2.5 below. This makes a sharp contrast to proofs in Hilbert-style systems, in which ‘implication’ plays a special and multiple role.

Corollary 2.5 Let ⊗ be any one of the logical connectives. For any sequent S which does not contain ⊗, if S is provable in LK (LJ) then S has a proof in LK (LJ, respectively) in which no rules for ⊗ are used.

Example 2.2 The proof P of the distributive law α ∧ (β ∨ γ) ⇒ (α ∧ β) ∨ (α ∧ γ) given in Example 1.2 is in fact cut-free. Since the sequent contains only ∨ and ∧ as logical connectives, the proof P consists only of applications of rules for ∨ and ∧, in addition to structural rules.

Exercise 2.5 Give a cut-free proof of the sequent (α ∨ β) ∧ (α ∨ γ) ⇒ α ∨ (β ∧ γ), by referring to the proof P in Example 1.2. (Pay attention to the duality between the rules for ∨ and ∧ of LK when antecedents and succedents of sequents are interchanged.)

Remark 2.3 (Conservative extension) Let LK0 and LJ0 be the sequent systems obtained from LK and LJ, respectively, by adding the logical constant 0 with the initial sequent 0 ⇒ (cf. Remark 1.3). Just as in the previous section, we can show cut elimination for both LK0 and LJ0. Then, similarly to Corollary 2.5, we can show that for any sequent S which does not contain the logical constant 0, if S is provable in LK0 (LJ0) then S is also provable in LK (LJ, respectively). So, LK0 (LJ0) is a conservative extension of LK (LJ, respectively).
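The set of subformulas that drives Definition 2.2 is easy to compute recursively. Below is a minimal sketch (our own illustration, not code from the text), where a formula is either a string (a propositional variable) or a tuple whose first entry names the connective:

```python
def subformulas(f):
    """Return the set of all subformulas of f, including f itself.
    Formulas: a variable is a string; compound formulas are tuples such
    as ('and', a, b), ('or', a, b), ('imp', a, b) or ('not', a)."""
    subs = {f}
    if isinstance(f, tuple):  # compound formula: recurse into its arguments
        for part in f[1:]:
            subs |= subformulas(part)
    return subs

# For instance, a ∧ (b ∨ c) has exactly the five subformulas appearing
# in its construction tree; a cut-free proof of a sequent built from it
# never needs any formula outside this finite set.
f = ('and', 'a', ('or', 'b', 'c'))
assert subformulas(f) == {f, 'a', ('or', 'b', 'c'), 'b', 'c'}
```

This finiteness is precisely what makes the backward proof search of the next chapter a search over a finite space.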

Chapter 3

Proof-Theoretic Analysis of Logical Properties

In the following, we show how proof-theoretic arguments work in the study of logical properties. Distinguishing features of the proof-theoretic approach lie in its concrete and combinatorial aspects, which often yield much more information than semantical approaches. Two major instruments for developing proof-theoretic study are cut elimination and inductive arguments using the length of proofs. In fact, cut elimination and the subformula property play an essential role in showing the logical properties discussed below, i.e., the decision problem, the disjunction property of intuitionistic logic, and Craig’s interpolation property. In the last section, we show that inductive arguments also work well even if a given proof contains applications of cut rule.

3.1 Decidability of Intuitionistic Logic

The decision problem for a given logic L is the problem of whether there exists an algorithm which decides, for any given formula (or sequent), whether or not it is provable in L. If such an algorithm exists, then L is said to be decidable; otherwise, it is said to be undecidable. The decidability of a logic L often follows from cut elimination. We will first explain an outline of the decision algorithm for intuitionistic logic, based on the sequent system LJ.

Our algorithm consists of a proof search for a given sequent S in LJ, similar to proof search in the sequent system LK∗. That is, we try to find a proof-like tree obtained by applying rules of LJ in the converse direction, until all of its topmost sequents are initial sequents of LJ. Because of cut elimination for LJ, it is not necessary to consider applications of cut rule. Also, by the subformula property of cut-free proofs, it is enough to search for a proof-like tree constructed from sequents which consist only of subformulas of formulas appearing in S. If there exists a successful proof search tree of S, i.e., a proof search tree of S all of whose topmost

sequents are initial, we can conclude that S is provable in LJ, and this proof search tree is in fact a proof of S. On the other hand, to conclude that S is not provable in LJ, it is necessary to show by exhaustive search that every proof search fails.

There are two important differences between proof search in LJ and that in LK∗. The first point is that every rule of LK∗ is invertible, while some rules of LJ, like the rules (∧1 ⇒) and (∧2 ⇒), are not always so. This means that even when the lower sequent is provable, the upper sequents may not be provable if we make a wrong choice. So, proof search in LJ inevitably involves trial and error, and must have a sound backtracking mechanism: a mechanism which records the history of the proof search so far, and which tells us from which case the search should be restarted when the search under way turns out to fail. In fact, this makes the procedure more complicated in practice, but it can still be managed.

Another issue, which is much more serious, is the presence of contraction rule in LJ. Apparently, the upper sequent of each application of contraction rule is by no means simpler or shorter than the lower sequent. Because of this, we cannot be sure that the proof search described above will eventually terminate. It should be noted here that contraction rules in their explicit form do not appear in LK∗, as they are merged into the rules for logical connectives. In fact, contraction rules are admissible in it: for example, if α, α, Γ ⇒ Π is provable in LK∗ then α, Γ ⇒ Π is always provable in it.

Here we explain the idea of showing decidability due to Gentzen (1935). For each n > 0, we say that a sequent is n-reduced if every formula appears at most n times in the antecedent.1 For example, α, β, α, α → γ ⇒ α is 2-reduced, and hence it is also n-reduced for every n > 2.
For a given sequent Γ ⇒ ϕ, we say that a sequent Δ ⇒ ϕ is a 1-reduced contraction of Γ ⇒ ϕ iff it is a 1-reduced sequent which is obtained from Γ ⇒ ϕ by applying contraction rule repeatedly. For example, β, α ⇒ α is a 1-reduced contraction of α, β, α ⇒ α. Moreover, the 1-reduced contraction of a given sequent is uniquely determined, as we take a multiset for Δ. From a given sequent, we can get its 1-reduced contraction simply by applying contraction rule as far as possible, and conversely, from the 1-reduced contraction we can recover the original sequent by repeated applications of weakening rule. A proof of a sequent in LJ is said to be reduced iff it contains only 3-reduced sequents. We will show the following.

Lemma 3.1 For each sequent Γ ⇒ ϕ, the following two conditions are mutually equivalent.
1. Γ ⇒ ϕ is provable in LJ,
2. the 1-reduced contraction of Γ ⇒ ϕ has a cut-free reduced proof in LJ.

Proof It is obvious that the second condition implies the first, by using weakening rule repeatedly. We will show that the first condition implies the second. Suppose that Γ ⇒ ϕ has a cut-free proof P in LJ. For each sequent Δ ⇒ ψ in P, we show that its 1-reduced contraction Δ∗ ⇒ ψ has a cut-free reduced proof, by induction on the length of the proof of Δ ⇒ ψ. This holds trivially when Δ ⇒ ψ is an initial sequent. Next, suppose that Δ ⇒ ψ is obtained by applying a rule (R) for logical connectives. Consider the case, for example, where (R) is (→⇒) whose upper sequents are Δ1 ⇒ α and β, Δ2 ⇒ ψ. In this case, Δ must be α → β, Δ1, Δ2. By the hypothesis of induction, the 1-reduced contractions Δ1∗ ⇒ α of Δ1 ⇒ α and {β, Δ2}∗ ⇒ ψ of β, Δ2 ⇒ ψ both have cut-free reduced proofs. Here, the sequent {β, Δ2}∗ ⇒ ψ is either of the form β, Δ2∗ ⇒ ψ when β does not appear in Δ2∗, or of the form Δ2∗ ⇒ ψ otherwise. In the latter case, by applying weakening rule we also get β, Δ2∗ ⇒ ψ, which is 2-reduced. Now, applying (→⇒), we have α → β, Δ1∗, Δ2∗ ⇒ ψ, which is at most 3-reduced. As the following example shows, it may indeed be 3-reduced.

    α → β, γ ⇒ α    β, α → β, δ ⇒ ψ
    ──────────────────────────────── (→⇒)
    α → β, α → β, γ, α → β, δ ⇒ ψ

By applying contraction rule to the above sequent as necessary, we get a reduced proof of the 1-reduced contraction of α → β, Δ1∗, Δ2∗ ⇒ ψ, which is also equal to the 1-reduced contraction of α → β, Δ ⇒ ψ. When (R) is any other rule, the proof can be given in a similar way. We note that no applications of cut rule come into this procedure of getting reduced proofs. □

We say that a proof P of a sequent S has a redundancy (or, P is redundant) when there exists a “path” in P on which the same sequent appears in different places, as shown below:

    P0
    Σ ⇒ ψ
      ⋮
    Σ ⇒ ψ
    P1
    S

Such a redundancy can be removed by replacing it as follows:

    P0
    Σ ⇒ ψ
    P1
    S

By repeating such replacements, we can remove all redundancies in a given proof without changing its end sequent. Combining these results all together, we have the following.

1 We need to modify this definition slightly when we consider a sequent system with multiple formulas in succedents, like LK. In such a case, a sequent is n-reduced if every formula appears at most n times both in the antecedent and in the succedent.

Lemma 3.2 A sequent S is provable in LJ iff there exists a cut-free reduced proof P of a 1-reduced contraction of S such that
1. P has the subformula property,
2. P has no redundancies.

Suppose that a sequent S is given. Let S′ be the 1-reduced contraction of S, and let Ψ be the set of all subformulas of formulas appearing in S. Since Ψ is finite, the total number of 3-reduced sequents consisting of formulas in Ψ is also finite. Thus, the number of all cut-free proof-like trees without redundancies consisting of such 3-reduced sequents is also finite. By exhaustively searching them, we can decide whether or not S′ in fact has a proof among them satisfying the two conditions of the above lemma. Consequently, we can decide whether S is provable in LJ or not. In this way, we get a decision procedure for the provability of a given sequent in LJ by combining the backward proof search mentioned before with loop-checking for eliminating redundancies in proofs. Thus we can conclude that intuitionistic logic is decidable.

As for a decision procedure for classical logic, we have already mentioned a simple one using the invertible system LK∗. Alternatively, we may use essentially the same procedure as above for LK. Thus we have the following.

Theorem 3.3 Both classical logic and intuitionistic logic are decidable. Moreover, there exists an algorithm which gives a cut-free proof of a given sequent in LK and LJ, respectively, whenever it is provable.

Remark 3.1 In implementing the above decision procedure, we need to introduce various strategies for improving its efficiency. For instance, in backward proof search we must give the highest priority to the rules (⇒→), (⇒ ¬), (⇒ ∧) and (∨ ⇒), and apply them (backwards) first, as far as possible, for they are invertible rules. Also, it is necessary to incorporate a loop-checking mechanism for systems with contraction rules, like LK and LJ. This obviously makes proof search inefficient.
So, it would be preferable if we could find a system without “explicit contraction rules”. For instance, LK∗ is such a system for classical logic, where contraction rules are incorporated into each rule for logical connectives. Some sequent systems without “explicit contraction rules” are known for intuitionistic logic.
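The two bookkeeping devices used in the argument above, 1-reduced contractions and redundancy (loop) checking, are straightforward to implement. A minimal sketch of our own (antecedents are lists of formulas; a branch of a backward proof search is a list of sequents):

```python
def one_reduced_contraction(antecedent):
    """Contract an antecedent so that every formula occurs exactly once.
    Antecedents are multisets, so keeping the first occurrence of each
    formula is one canonical choice."""
    seen = []
    for f in antecedent:
        if f not in seen:
            seen.append(f)
    return seen

def is_n_reduced(antecedent, n):
    """A sequent is n-reduced if no formula occurs more than n times
    in its antecedent."""
    return all(antecedent.count(f) <= n for f in antecedent)

def has_redundancy(branch):
    """A branch of a proof search is redundant if the same sequent
    occurs twice on it; such branches can safely be cut off."""
    return len(branch) != len(set(branch))

assert one_reduced_contraction(['a', 'b', 'a']) == ['a', 'b']
assert is_n_reduced(['a', 'b', 'a'], 2) and not is_n_reduced(['a', 'a', 'a'], 2)
assert has_redundancy(['s1', 's2', 's1']) and not has_redundancy(['s1', 's2'])
```

Restricting the search to 3-reduced sequents over a finite set of subformulas, and pruning redundant branches, is exactly what makes the exhaustive search of Lemma 3.2 finite.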

3.2 Disjunction Property

In this and the next sections, we show that, in addition to decidability, two important logical properties can be derived as consequences of cut elimination. Our first example is the disjunction property of intuitionistic logic.

Definition 3.1 (Disjunction property) A logic L has the disjunction property iff for all formulas α and β, either α or β is provable in L whenever α ∨ β is provable in L.


Classical logic does not have the disjunction property, since the formula p ∨ ¬p is provable while neither p nor ¬p is provable, where p is an arbitrary propositional variable. On the other hand, we have the following.

Theorem 3.4 Intuitionistic logic has the disjunction property.

Proof Suppose that the formula α ∨ β is provable in intuitionistic logic. Then there is a cut-free proof of the sequent ⇒ α ∨ β in LJ. Clearly, this sequent is not an initial sequent. So, consider the last rule J in this proof. As it cannot be cut rule, it must be among the following three rules, i.e., right weakening rule, (⇒∨1) and (⇒∨2), because of the form of this sequent. In the first case, the upper sequent is ⇒ . But this is impossible. For, if it had a cut-free proof Q, then every formula in any one of the initial sequents in Q would appear in its end sequent as a subformula, by the subformula property. But this cannot be the case. Hence, J is one of the remaining two rules. This implies that the upper sequent is either ⇒ α or ⇒ β, which means that either α or β is provable in intuitionistic logic. ∎

Exercise 3.1 Obviously, the sequent ⇒ p ∨ ¬p is provable in LK without cut. Check what the last rule in such a proof is.

Definition 3.2 (Halldén-completeness) A logic L is Halldén-complete iff for all formulas α and β such that no propositional variables appear in common, either α or β is provable in L whenever α ∨ β is provable in L.²

It is clear that the disjunction property implies Halldén-completeness. Algebraic aspects of the disjunction property and Halldén-completeness will be discussed in Chap. 8.

Exercise 3.2 Show that classical logic is Halldén-complete although it does not have the disjunction property. ([Hint] Use the fact that a formula is provable in classical logic iff it is a tautology.)

3.3 Craig's Interpolation Property

Definition 3.3 (Craig's interpolation property) A logic L has the Craig's interpolation property (CIP), if for all formulas α, β such that α → β is provable in L, there exists a formula γ, called an interpolant of α → β (in L), such that
• both α → γ and γ → β are provable in L,
• V ar(γ) ⊆ V ar(α) ∩ V ar(β).

² The notion was introduced by Halldén (1951).


3 Proof-Theoretic Analysis of Logical Properties

Here, V ar(δ) denotes the set of all propositional variables appearing in a formula δ. Craig (1957) proved that the CIP holds for classical logic, using a model-theoretic method. It should be noted that one must be cautious about the second condition on interpolants, as it may happen that formulas α and β contain no propositional variables in common. As long as our language contains some logical constants, the above definition tells us that any interpolant σ of α → β must be a formula constructed only from logical constants in such a case. Otherwise, we need to make some modifications to the statement of our theorem. As a matter of fact, the CIP is sensitive to the choice of language. In the following, we first assume that the languages of both classical logic and intuitionistic logic contain the logical constant 0. In this case, we can also use the logical constant 1, which is defined by 0 → 0. Afterwards, we will explain how to modify the statement when no logical constants are contained in our language.

Exercise 3.3 Find an interpolant of each of the following formulas in classical logic.
1. [p ∧ ¬(p ∧ q)] → (q → r)
2. [p ∧ ¬(p ∧ q) ∧ s] → [q → (r ∨ s)]

We remark that interpolants are not necessarily unique, i.e., there may be interpolants which are not logically equivalent to each other (in classical logic). Here, we say that a formula α is logically equivalent to a formula β in classical logic if both α → β and β → α are provable in classical logic. For instance, both s and ¬q are interpolants of the second formula in the above exercise.

Exercise 3.4 Suppose that both γ and δ are interpolants of a formula α → β in classical logic. Show that both γ ∧ δ and γ ∨ δ are also interpolants of α → β.

(1) An elementary proof for classical logic
In the following, we first give an elementary semantical proof of the CIP for classical logic. Then we explain a syntactical proof of the CIP due to S. Maehara, which is based on cut elimination.
Suppose that a given formula α → β is provable in classical logic. Our goal is to give an interpolant of α → β. Let p1, ..., pm, r1, ..., rk (p, r for short) and q1, ..., qn, r1, ..., rk (q, r for short) be enumerations of all propositional variables (without repetitions) appearing in α and β, respectively, where r1, ..., rk (= r) are all propositional variables appearing in both of α and β. Sometimes, we use the expressions α(p, r) and β(q, r) in place of α and β, respectively, to emphasize the propositional variables appearing in them. Since α → β is provable in classical logic, it must be a tautology. That is, for any assignment h, h(α → β) always takes the value 1 (truth). This implies that for any replacement of each variable in p by either of the logical constants 0 and 1 in α → β, the resulting formula α(e, r) → β(q, r) is again a tautology, where e ∈ {0, 1}^m, i.e., e is any m-tuple consisting of 0 and 1,³ and α(e, r) is the formula obtained from α(p, r) by replacing the variables p by e. Hence the formula [⋁_{e ∈ {0,1}^m} α(e, r)] → β(q, r) is also a tautology. On the other hand, the formula α(p, r) → [⋁_{e ∈ {0,1}^m} α(e, r)] is a

³ We use the symbols 0 and 1 not only for truth values but also for logical constants, by abuse of symbols.


tautology, since for a given assignment h, h(pi) takes a value in {0, 1} for each i and hence h(α(p, r)) = h(α(e′, r)) for some e′ ∈ {0, 1}^m. Let α∗[r] be the formula ⋁_{e ∈ {0,1}^m} α(e, r). Then, from these two facts, it follows that α∗[r] is an interpolant of α → β, and moreover that α∗[r] is determined only by α and r, independently of β. This implies that if δ is any formula such that α → δ is provable and any propositional variable appearing in both α and δ is in r, then α∗[r] is also an interpolant of α → δ. Now suppose that γ is any interpolant of α → β. Then α → γ is provable, and moreover any propositional variable appearing in both α and γ belongs to r, since γ is a formula consisting only of propositional variables in r. Hence α∗[r] is an interpolant also of α → γ by the above argument. This implies that α∗[r] → γ is provable. Therefore, α∗[r] is the strongest interpolant (up to logical equivalence) among interpolants of α → β. The formula α∗[r] is called the post-interpolant of α with respect to r.

Example 3.2 Consider the second formula in Exercise 3.3. Variables q and s appear in both p ∧ ¬(p ∧ q) ∧ s and q → (r ∨ s). Since 1 ∧ ¬(1 ∧ q) ∧ s is equivalent to ¬q ∧ s and 0 ∧ ¬(0 ∧ q) ∧ s is equivalent to 0, the post-interpolant of p ∧ ¬(p ∧ q) ∧ s with respect to q, s is ¬q ∧ s.

By a similar argument, but taking the formula β instead, we can show that α(p, r) → β(e′, r) is a tautology for each e′ ∈ {0, 1}^n. Hence, we conclude that both α(p, r) → [⋀_{e′ ∈ {0,1}^n} β(e′, r)] and [⋀_{e′ ∈ {0,1}^n} β(e′, r)] → β(q, r) are tautologies. Thus, the formula β∗[r], defined by ⋀_{e′ ∈ {0,1}^n} β(e′, r), is another interpolant of α → β. Moreover, similarly as before, we can show that γ → β∗[r] is provable for any interpolant γ of α → β. Thus, β∗[r] is the weakest interpolant (up to logical equivalence) of α → β. The formula β∗[r], which is determined only by β and r, is called the pre-interpolant of β with respect to r.
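The post-interpolant construction is completely mechanical, so it can be checked by brute force. The following sketch is not from the book (the tuple encoding of formulas and all function names are my own); it recomputes Example 3.2 and confirms that the post-interpolant of p ∧ ¬(p ∧ q) ∧ s with respect to q, s is equivalent to ¬q ∧ s.

```python
# Computing the post-interpolant of Example 3.2 by brute force.
# Formulas are nested tuples; 0 and 1 double as logical constants,
# mirroring the abuse of notation in the text.
from itertools import product

def ev(f, h):
    """Evaluate formula f under assignment h (a dict); 0/1 are constants."""
    if f in (0, 1):
        return f
    op = f[0]
    if op == 'var':
        return h[f[1]]
    if op == 'not':
        return 1 - ev(f[1], h)
    if op == 'and':
        return min(ev(f[1], h), ev(f[2], h))
    if op == 'or':
        return max(ev(f[1], h), ev(f[2], h))
    if op == 'imp':
        return max(1 - ev(f[1], h), ev(f[2], h))

def subst(f, s):
    """Replace the variables named in s by the constants s assigns to them."""
    if f in (0, 1):
        return f
    if f[0] == 'var':
        return s.get(f[1], f)
    return (f[0],) + tuple(subst(g, s) for g in f[1:])

def equivalent(f, g, variables):
    """Truth-table equivalence over the listed variables."""
    return all(ev(f, dict(zip(variables, v))) == ev(g, dict(zip(variables, v)))
               for v in product((0, 1), repeat=len(variables)))

def post_interpolant(alpha, private_vars):
    """Disjunction of alpha over all 0/1 substitutions for its private variables."""
    disjuncts = [subst(alpha, dict(zip(private_vars, e)))
                 for e in product((0, 1), repeat=len(private_vars))]
    out = disjuncts[0]
    for d in disjuncts[1:]:
        out = ('or', out, d)
    return out

p, q, s = ('var', 'p'), ('var', 'q'), ('var', 's')
alpha = ('and', ('and', p, ('not', ('and', p, q))), s)   # p ∧ ¬(p∧q) ∧ s
star = post_interpolant(alpha, ['p'])                     # p is private to alpha
print(equivalent(star, ('and', ('not', q), s), ['q', 's']))  # True: α*[q,s] ≡ ¬q ∧ s
```

The pre-interpolant can be computed in exactly the same way, folding the substitution instances of β with 'and' instead of 'or'.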
Example 3.3 Consider once again the second formula in Exercise 3.3. The pre-interpolant of q → (r ∨ s) with respect to q, s is [q → (1 ∨ s)] ∧ [q → (0 ∨ s)], which is equivalent to q → s, and hence to ¬q ∨ s.

In this way, the Craig's interpolation property of classical logic can be shown in a stronger form. We say that a logic L has the uniform interpolation property whenever both the post-interpolant and the pre-interpolant exist for every formula and every finite set of propositional variables.

Theorem 3.5 The uniform interpolation property holds for classical logic.

(2) Maehara's method
The proof of the CIP given above is simple, but it relies heavily on the two-valued semantics for classical logic. Hence, we cannot expect to apply the same idea to other logics. Maehara (1960) introduced a syntactic way of showing the CIP as a consequence of cut elimination. Here is an outline of Maehara's method applied to the CIP for LK0, the sequent system LK with the logical constant 0 introduced at the end of the previous chapter. Let us recall that each sequent of LK0 is of the


form Γ ⇒ Δ with multisets of formulas Γ and Δ. A partition of a given multiset Γ of formulas is a pair ⟨Γ1 : Γ2⟩ of multisets Γ1 and Γ2 such that their multiset union is equal to Γ.⁴ (Sometimes, we express the multiset union of two multisets Σ and Π as Σ, Π, and also as Σ, α1, ..., αm when Π is {α1, ..., αm}.) A partition of a given sequent Γ ⇒ Δ is a pair of the form (⟨Γ1 : Δ1⟩, ⟨Γ2 : Δ2⟩) where the pairs ⟨Γ1 : Γ2⟩ and ⟨Δ1 : Δ2⟩ are partitions of Γ and Δ, respectively. In the following, V ar(Σ) for a multiset Σ denotes the set of all propositional variables appearing in some formula in Σ. We now prove Craig's interpolation theorem in the following stronger form.

Theorem 3.6 Suppose that a sequent Γ ⇒ Δ is provable in LK0 and (⟨Γ1 : Δ1⟩, ⟨Γ2 : Δ2⟩) is any of its partitions. Then there exists a formula σ such that
• both Γ1 ⇒ Δ1, σ and σ, Γ2 ⇒ Δ2 are provable in LK0,
• V ar(σ) ⊆ V ar(Γ1, Δ1) ∩ V ar(Γ2, Δ2).

Proof Our theorem can be shown by using induction on the length of a given cut-free proof of Γ ⇒ Δ. We call such a formula σ an interpolant of Γ ⇒ Δ with respect to the partition (⟨Γ1 : Δ1⟩, ⟨Γ2 : Δ2⟩). Suppose first that Γ ⇒ Δ is the initial sequent ϕ ⇒ ϕ. We need to consider the following four cases, depending on to which Γi and Δj (i, j ∈ {1, 2}) the formula ϕ belongs. Thus it is necessary to find a formula σ satisfying the second condition of interpolants on variables in each of the following:

• both ϕ ⇒ ϕ, σ and σ ⇒ are provable,
• both ϕ ⇒ σ and σ ⇒ ϕ are provable,
• both ⇒ ϕ, σ and σ, ϕ ⇒ are provable,
• both ⇒ σ and σ, ϕ ⇒ ϕ are provable.

Then, for each case it is enough to take 0, ϕ, ¬ϕ and 1 for σ, respectively. When Γ ⇒ Δ is the initial sequent 0 ⇒ , as an interpolant we can take 0 for the partition (⟨{0} : ∅⟩, ⟨∅ : ∅⟩), and 1 for the partition (⟨∅ : ∅⟩, ⟨{0} : ∅⟩), respectively. Suppose next that Γ ⇒ Δ is not an initial sequent. Let J be the last rule applied. What we need to show is that Γ ⇒ Δ has an interpolant with respect to any of its partitions, by assuming that the upper sequents of J have interpolants with respect to any of their partitions. Here, we give a detailed proof for the cases where J is one of (⇒∨), (∨⇒), (⇒→) and (→⇒) in the following. Consider first the case where J is (⇒∨1):

Γ ⇒ Π, α
―――――――――――― (⇒∨1)
Γ ⇒ Π, α ∨ β

Take a partition (⟨Γ1 : Π1, α ∨ β⟩, ⟨Γ2 : Π2⟩). By the hypothesis of induction, with respect to the partition (⟨Γ1 : Π1, α⟩, ⟨Γ2 : Π2⟩) of Γ ⇒ Π, α there exists an interpolant σ such that

⁴ More precisely, when Γ1 and Γ2 are the multisets {α1, ..., αm} and {β1, ..., βn}, respectively, the multiset union of them is the multiset {α1, ..., αm, β1, ..., βn}.


• both Γ1 ⇒ Π1, α, σ and σ, Γ2 ⇒ Π2 are provable,
• V ar(σ) ⊆ V ar(Γ1, Π1, α) ∩ V ar(Γ2, Π2).

Clearly, both Γ1 ⇒ Π1, α ∨ β, σ and σ, Γ2 ⇒ Π2 are provable, and also the condition that V ar(σ) ⊆ V ar(Γ1, Π1, α ∨ β) ∩ V ar(Γ2, Π2) is satisfied. Similarly, we can show the existence of an interpolant of Γ ⇒ Δ with respect to the partition (⟨Γ1 : Π1⟩, ⟨Γ2 : Π2, α ∨ β⟩), and also for the case where J is (⇒∨2).

Consider next the case where J is (∨⇒):

α, Σ ⇒ Δ    β, Σ ⇒ Δ
――――――――――――――――――――― (∨⇒)
α ∨ β, Σ ⇒ Δ

First, take a partition (⟨α ∨ β, Σ1 : Δ1⟩, ⟨Σ2 : Δ2⟩) of α ∨ β, Σ ⇒ Δ. Then, taking the partitions (⟨α, Σ1 : Δ1⟩, ⟨Σ2 : Δ2⟩) and (⟨β, Σ1 : Δ1⟩, ⟨Σ2 : Δ2⟩) of the upper sequents, respectively, and using the hypothesis of induction, we get formulas σ and δ such that
• both α, Σ1 ⇒ Δ1, σ and σ, Σ2 ⇒ Δ2 are provable, and V ar(σ) ⊆ V ar(α, Σ1, Δ1) ∩ V ar(Σ2, Δ2),
• both β, Σ1 ⇒ Δ1, δ and δ, Σ2 ⇒ Δ2 are provable, and V ar(δ) ⊆ V ar(β, Σ1, Δ1) ∩ V ar(Σ2, Δ2).

Thus we have that both α ∨ β, Σ1 ⇒ Δ1, σ ∨ δ and σ ∨ δ, Σ2 ⇒ Δ2 are provable, and moreover that V ar(σ ∨ δ) ⊆ V ar(α ∨ β, Σ1, Δ1) ∩ V ar(Σ2, Δ2). Hence, the formula σ ∨ δ is an interpolant of Γ ⇒ Δ with respect to the above partition.

Next, consider a partition (⟨Σ1 : Δ1⟩, ⟨α ∨ β, Σ2 : Δ2⟩) of α ∨ β, Σ ⇒ Δ. This time, we take the partitions (⟨Σ1 : Δ1⟩, ⟨α, Σ2 : Δ2⟩) and (⟨Σ1 : Δ1⟩, ⟨β, Σ2 : Δ2⟩) of the upper sequents, respectively, and by the hypothesis of induction, we get formulas σ and δ such that
• both Σ1 ⇒ Δ1, σ and σ, α, Σ2 ⇒ Δ2 are provable, and V ar(σ) ⊆ V ar(Σ1, Δ1) ∩ V ar(α, Σ2, Δ2),
• both Σ1 ⇒ Δ1, δ and δ, β, Σ2 ⇒ Δ2 are provable, and V ar(δ) ⊆ V ar(Σ1, Δ1) ∩ V ar(β, Σ2, Δ2).

From these it follows that both Σ1 ⇒ Δ1, σ ∧ δ and σ ∧ δ, α ∨ β, Σ2 ⇒ Δ2 are provable, and V ar(σ ∧ δ) ⊆ V ar(Σ1, Δ1) ∩ V ar(α ∨ β, Σ2, Δ2) holds. Therefore, the formula σ ∧ δ is an interpolant in this case.
We consider next the case where J is (⇒→):

α, Γ ⇒ Π, β
―――――――――――― (⇒→)
Γ ⇒ Π, α → β

We first take a partition (⟨Γ1 : Π1, α → β⟩, ⟨Γ2 : Π2⟩). By the hypothesis of induction, with respect to the partition (⟨α, Γ1 : Π1, β⟩, ⟨Γ2 : Π2⟩) of α, Γ ⇒ Π, β there exists an interpolant σ such that
• both α, Γ1 ⇒ Π1, β, σ and σ, Γ2 ⇒ Π2 are provable,


• V ar(σ) ⊆ V ar(α, Γ1, Π1, β) ∩ V ar(Γ2, Π2).

Clearly, both Γ1 ⇒ Π1, α → β, σ and σ, Γ2 ⇒ Π2 are provable, and moreover the condition V ar(σ) ⊆ V ar(Γ1, Π1, α → β) ∩ V ar(Γ2, Π2) holds. Similarly, we can show the existence of an interpolant of Γ ⇒ Δ with respect to the partition (⟨Γ1 : Π1⟩, ⟨Γ2 : Π2, α → β⟩).

When J is (→⇒):

Σ ⇒ Π, α    β, Ψ ⇒ Θ
――――――――――――――――――――― (→⇒)
α → β, Σ, Ψ ⇒ Π, Θ

it is necessary to consider partitions of the form (⟨α → β, Σ1, Ψ1 : Π1, Θ1⟩, ⟨Σ2, Ψ2 : Π2, Θ2⟩) and of the form (⟨Σ1, Ψ1 : Π1, Θ1⟩, ⟨α → β, Σ2, Ψ2 : Π2, Θ2⟩). For the first case, let σ be an interpolant of Σ ⇒ Π, α with respect to (⟨Σ2 : Π2⟩, ⟨Σ1 : Π1, α⟩), and let δ be an interpolant of β, Ψ ⇒ Θ with respect to (⟨β, Ψ1 : Θ1⟩, ⟨Ψ2 : Θ2⟩). Then, the following statements hold.
• both Σ2 ⇒ Π2, σ and σ, Σ1 ⇒ Π1, α are provable, and V ar(σ) ⊆ V ar(Σ2, Π2) ∩ V ar(Σ1, Π1, α),
• both β, Ψ1 ⇒ Θ1, δ and δ, Ψ2 ⇒ Θ2 are provable, and V ar(δ) ⊆ V ar(β, Ψ1, Θ1) ∩ V ar(Ψ2, Θ2).

Then, the sequents σ, α → β, Σ1, Ψ1 ⇒ Π1, Θ1, δ and σ → δ, Σ2, Ψ2 ⇒ Π2, Θ2 are provable. Using the former sequent, the sequent α → β, Σ1, Ψ1 ⇒ Π1, Θ1, σ → δ is also provable. Moreover, it is easy to see that the formula σ → δ satisfies the required condition on variables to be an interpolant. Thus, it is an interpolant of α → β, Σ, Ψ ⇒ Π, Θ with respect to the partition (⟨α → β, Σ1, Ψ1 : Π1, Θ1⟩, ⟨Σ2, Ψ2 : Π2, Θ2⟩).

For the second case, let σ be an interpolant of Σ ⇒ Π, α with respect to (⟨Σ1 : Π1⟩, ⟨Σ2 : Π2, α⟩), and δ be an interpolant of β, Ψ ⇒ Θ with respect to (⟨Ψ1 : Θ1⟩, ⟨β, Ψ2 : Θ2⟩). Then, the following statements hold.
• both Σ1 ⇒ Π1, σ and σ, Σ2 ⇒ Π2, α are provable, and V ar(σ) ⊆ V ar(Σ1, Π1) ∩ V ar(Σ2, Π2, α),
• both Ψ1 ⇒ Θ1, δ and δ, β, Ψ2 ⇒ Θ2 are provable, and V ar(δ) ⊆ V ar(Ψ1, Θ1) ∩ V ar(β, Ψ2, Θ2).

Then, it is easily seen that both Σ1, Ψ1 ⇒ Π1, Θ1, σ ∧ δ and σ ∧ δ, α → β, Σ2, Ψ2 ⇒ Π2, Θ2 are provable.
Moreover, as σ ∧ δ satisfies the required condition on variables to be an interpolant, we can conclude that σ ∧ δ is an interpolant of α → β, Σ, Ψ ⇒ Π, Θ with respect to the partition (⟨Σ1, Ψ1 : Π1, Θ1⟩, ⟨α → β, Σ2, Ψ2 : Π2, Θ2⟩). ∎

Exercise 3.5 Complete the above proof by checking it when J is any other rule.

A similar result holds for LJ0. In this case, by a partition of a given sequent Γ ⇒ ϕ of LJ0, we mean a partition ⟨Γ1 : Γ2⟩ of Γ. Similarly to Theorem 3.6, we can show the following.


Theorem 3.7 Suppose that a sequent Γ ⇒ ϕ is provable in LJ0 and that ⟨Γ1 : Γ2⟩ is an arbitrary partition of Γ. Then there exists a formula σ such that
• both Γ1 ⇒ σ and σ, Γ2 ⇒ ϕ are provable in LJ0,
• V ar(σ) ⊆ V ar(Γ1) ∩ V ar(Γ2, ϕ).

Craig's interpolation for classical logic follows immediately from Theorem 3.6 by taking α ⇒ β for Γ ⇒ Δ and considering the partition (⟨α : ∅⟩, ⟨∅ : β⟩). Similarly, Craig's interpolation for intuitionistic logic follows from Theorem 3.7.

Corollary 3.8 (Craig's interpolation) The Craig's interpolation property holds for both classical logic and intuitionistic logic.

So far, we have assumed the existence of the logical constant 0 in our language. Now, we consider how to modify the form of the Craig's interpolation property when our language L does not contain any logical constant. For comparison, let L0 be the language L supplemented with the logical constant 0. Suppose that α ⇒ β is provable in LK, where α and β are formulas in the language L. Obviously, it is provable also in LK0. By Theorem 3.6, there exists an interpolant σ of α ⇒ β (in the language L0) such that both α ⇒ σ and σ ⇒ β are provable in LK0 and V ar(σ) ⊆ V ar(α) ∩ V ar(β). We note that the formula σ may contain the logical constant 0. Suppose first that V ar(α) ∩ V ar(β) ≠ ∅. Let σ∗ be the formula obtained from σ by replacing each occurrence of 0 in it by the formula p ∧ ¬p, where p is an arbitrary propositional variable in the set V ar(α) ∩ V ar(β). As p ∧ ¬p is logically equivalent to 0, σ∗ is logically equivalent to σ in LK0. Hence, the formula σ∗ is also an interpolant of α ⇒ β in LK0. Since σ∗ is a formula in the language L and LK0 is a conservative extension of LK, both α ⇒ σ∗ and σ∗ ⇒ β are provable in LK. Moreover, as V ar(σ∗) ⊆ V ar(α) ∩ V ar(β) because of the choice of p, the formula σ∗ is an interpolant of α ⇒ β. Next suppose that V ar(α) ∩ V ar(β) = ∅.
Then such an interpolant σ of α ⇒ β must be a formula made up only of 0 and logical connectives. It is easy to see that such a formula is logically equivalent in LK0 to either 0 or 1 (= 0 → 0). When it is equivalent to 0, the formula ¬α, which is logically equivalent to α → 0, must be provable, as 0 is an interpolant of α ⇒ β in this case. On the other hand, when it is equivalent to 1, the formula β, which is logically equivalent to 1 → β, must be provable in LK0. Thus, either ¬α or β is provable in LK0, and hence in LK. (See Remark 2.3.) A similar argument works also between LJ0 and LJ. Thus, we have the following.

Corollary 3.9 (Craig's interpolation without logical constants) The Craig's interpolation property of the following form holds for both classical logic and intuitionistic logic in the language without logical constants. Suppose that α → β is provable. When V ar(α) ∩ V ar(β) ≠ ∅, there exists a formula σ such that
• both α → σ and σ → β are provable,
• V ar(σ) ⊆ V ar(α) ∩ V ar(β).
Otherwise, when V ar(α) ∩ V ar(β) = ∅, either ¬α or β is provable.
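Two small instances of the degenerate case of Corollary 3.9 may be helpful (these examples are my own, not from the book):

```latex
% Two instances of Corollary 3.9 with Var(α) ∩ Var(β) = ∅.
% For (p ∧ ¬p) → q, the antecedent is refutable: ¬α, i.e. ¬(p ∧ ¬p), is provable.
% For p → (q → q), the consequent is valid: β, i.e. q → q, is provable.
\begin{align*}
 (p \land \lnot p) \to q &: \quad \vdash \lnot(p \land \lnot p)\\
 p \to (q \to q) &: \quad \vdash q \to q
\end{align*}
```

Both of the provable formulas here are provable intuitionistically as well, so the corollary applies in the same way to LJ.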


3.4 Glivenko's Theorem

In the previous sections, we have shown how proof-theoretic arguments work and how cut elimination plays a key role in them. But inductive proofs using the length of a proof will sometimes provide us with an interesting result of a constructive character, even if the proof contains applications of cut rules. Here is such an example. It has been shown that every sequent which is provable in LJ is provable also in LK, but the converse statement does not hold. Thus, we can say that classical logic is properly stronger than intuitionistic logic. One may naturally ask what the connections are between the formulas which are provable in classical logic and those provable in intuitionistic logic. Glivenko (1929) gave an answer to this question by showing the following theorem.

Theorem 3.10 (Glivenko) A formula α is provable in classical logic iff ¬¬α is provable in intuitionistic logic.

Thus, to check whether a formula α is provable in classical logic, it is enough to check whether ¬¬α is provable in intuitionistic logic. Hence, one can say that intuitionistic logic has enough ability to check the provability of formulas in classical logic, though it is weaker than classical logic.

Exercise 3.6 Give a proof of ¬¬((¬α → α) → α) in LJ. (Cf. Exercise 1.11.)

Though Glivenko obtained the result by using an algebraic method, we will give here a syntactic proof of Glivenko's theorem in the following generalized form. Here ¬Δ denotes the multiset of formulas ¬α1, ..., ¬αn when Δ is α1, ..., αn.

Theorem 3.11 For all multisets of formulas Γ and Δ, the sequent Γ ⇒ Δ is provable in LK iff the sequent ¬Δ, Γ ⇒ is provable in LJ.

The if-part of Theorem 3.11 is almost obvious. The only-if-part can be shown by using induction on the length of a given proof P of Γ ⇒ Δ. In this way, we get a proof Q of ¬Δ, Γ ⇒ constructively from P.

Exercise 3.7 Give full details of the proof of Theorem 3.11.
We note that Glivenko's result stated in Theorem 3.10 follows from Theorem 3.11 by taking the empty multiset for Γ and the singleton multiset of α for Δ. For, it is easily seen that ¬α ⇒ is provable in LJ iff ⇒ ¬¬α is provable in it.
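As an illustration of Theorem 3.10, consider the law of excluded middle (this example is not worked out in the text). The sequent ⇒ p ∨ ¬p is not provable in LJ, since by the disjunction property (Theorem 3.4) either p or ¬p would then be provable; yet ⇒ ¬¬(p ∨ ¬p) has, for instance, the following cut-free proof in LJ, written line by line:

```latex
% A cut-free LJ derivation of  ⇒ ¬¬(p ∨ ¬p),  with rule names as in Chapter 1.
\begin{array}{ll}
 1.\; p \Rightarrow p                                            & \text{(initial sequent)}\\
 2.\; p \Rightarrow p \lor \lnot p                               & (\Rightarrow\lor 1)\ \text{from 1}\\
 3.\; \lnot(p \lor \lnot p),\, p \Rightarrow                     & (\lnot\Rightarrow)\ \text{from 2}\\
 4.\; \lnot(p \lor \lnot p) \Rightarrow \lnot p                  & (\Rightarrow\lnot)\ \text{from 3}\\
 5.\; \lnot(p \lor \lnot p) \Rightarrow p \lor \lnot p           & (\Rightarrow\lor 2)\ \text{from 4}\\
 6.\; \lnot(p \lor \lnot p),\, \lnot(p \lor \lnot p) \Rightarrow & (\lnot\Rightarrow)\ \text{from 5}\\
 7.\; \lnot(p \lor \lnot p) \Rightarrow                          & \text{(contraction) from 6}\\
 8.\; \Rightarrow \lnot\lnot(p \lor \lnot p)                     & (\Rightarrow\lnot)\ \text{from 7}
\end{array}
```

Note the essential use of contraction in step 7; this is one of the places where the proof-theoretic behavior of LJ differs sharply from contraction-free systems.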

Chapter 4

Modal and Substructural Logics

In this chapter, we will give a brief introduction to proof theory for two important branches of nonclassical logics, that is, modal logics and substructural logics. They are important because both of them include vast varieties of logics that have been actively studied. Modal logics are usually defined over classical logic by adding various kinds of axiom schemes on modalities. To treat them, Kripke semantics has played a key role in the development of the semantical study which started at the beginning of the 1960s. On the syntactical side, cut elimination has been shown for sequent systems for some modal logics, which are obtained from LK by adding rules for the modal operator(s). But for many other basic modal logics, it is not easy to find sequent systems for which cut elimination holds. So various attempts have been made to introduce different kinds of cut-free extensions of standard sequent systems for these logics, including hypersequent systems, labelled sequent systems, nested sequent systems, and display systems. As these topics are beyond the scope of the present book, we will confine ourselves here to the basics of standard sequent systems for normal modal logics.

Another interesting class of nonclassical logics is the class of substructural logics. Substructural logics are not a new class of nonclassical logics. Rather, their study proposes a new view of understanding various existing nonclassical logics from the substructural perspective, which is induced by the presence or absence of structural rules when these logics are formulated in sequent systems. Basic substructural logics are defined to be the logics obtained from either classical logic or intuitionistic logic by deleting some of the structural rules, and substructural logics are in general defined to be extensions of some of the basic substructural logics.
It then turns out that they include many known classes of nonclassical logics, like Łukasiewicz's many-valued logics, relevant logics and linear logic, though these have been introduced and studied from different backgrounds and motivations. In fact, Łukasiewicz's many-valued logics lack contraction rules, relevant logics lack weakening rules, and linear logic has neither of them. This may be a rather unexpected observation, and it may not be so clear why the formalization of logics in sequent systems matters.

© Springer Nature Singapore Pte Ltd. 2019
H. Ono, Proof Theory and Algebra in Logic, Short Textbooks in Logic,
https://doi.org/10.1007/978-981-13-7997-0_4

The subject will


be discussed and explicated in Chap. 9 of Part II from an algebraic point of view. In Sect. 4.3 of the present chapter, sequent systems for basic substructural logics will be introduced and proof-theoretic analyses of them will be carried out.

4.1 Standard Sequent Systems for Normal Modal Logics

We will introduce standard sequent systems for some normal modal logics by adding rules for the modal operator □ to LK. The modal operator ♦ is defined by ♦ϕ ≡ ¬□¬ϕ for any ϕ. The Hilbert-style system for modal logic K is obtained from the Hilbert-style system for classical logic HK by adding the following axiom scheme K: □(α → β) → (□α → □β) with modus ponens, and also the rule of necessitation, which says that □α is inferred from α for every formula α. A modal logic is a normal extension of K if it contains all axiom schemes of K and has both modus ponens and the rule of necessitation. We discuss only normal modal logics in this book. Let us consider the following axiom schemes for standard modal logics.

D: □α → ♦α,  T: □α → α,  4: □α → □□α,  B: α → □♦α,  5: ♦α → □♦α.

Modal logics KD, KT, K4, KB are obtained from the modal logic K by adding each axiom scheme D, T, 4, B, respectively. A modal logic obtained from K by adding any combination of these axiom schemes can be expressed in the same way. For example, KTB is the normal extension of K with axiom schemes T and B. Modal logics KT4 and KT5 are known also as S4 and S5.

Exercise 4.1
1. Show that every axiom of the form 4 is provable in KB5.
2. Show that conversely every axiom 5 is provable in KB4. (Hence, KB4 determines the same logic as KB5.)
3. Show that both D and B are provable in S5.
4. Show that T is provable in KDB4.
5. Conclude that both KDB4 and KDB5 determine the same logic as S5.

We will next introduce sequent systems for these modal logics. Let us consider the following rules for □.¹ Here □Γ denotes the multiset of formulas □α1, ..., □αn when Γ is α1, ..., αn.

Γ ⇒ α              Γ ⇒              Γ, □Γ ⇒
――――――― (K)        ―――――― (D)       ――――――――― (4D)
□Γ ⇒ □α            □Γ ⇒             □Γ ⇒

Γ, □Γ ⇒ α          α, Γ ⇒ Δ
――――――――― (4)      ―――――――――― (T)
□Γ ⇒ □α            □α, Γ ⇒ Δ

□Γ ⇒ α             □Γ ⇒ □Δ, α
――――――― (S4)       ――――――――――― (S5)
□Γ ⇒ □α            □Γ ⇒ □Δ, □α

¹ The formula □α in the lower sequent of rules (K), (4), (T) and (S4), and also all formulas in □Γ in the lower sequent of rules (K) and (D), must be regarded as principal formulas (see Sect. 1.2) of the corresponding rules when we prove cut elimination (see Theorem 4.1).
It is easily seen that any formula of the form □(α → β) → (□α → □β) is provable in the system LK with the rule (K). In the above rules, both Γ and Δ may be empty. So, (D) is obtained by applying (T) repeatedly, and (S4) is also a special case of (S5) with the empty multiset Δ. It is easily seen that (D) follows from (4D), and also that both (K) and (S4) follow from (4) with the help of weakening rule. If a system has the rule (T), the rule (4) follows from (S4) together with contraction rule.

Sequent systems GK, GKD, GKT, GK4, GK4D, GS4, GS5 for the modal logics K, KD, KT, K4, K4D, S4, S5, respectively, are introduced as follows. Here, for example, LK + (R1) + (R2) means the sequent system obtained from LK by adding the rules (R1) and (R2).

GK: LK + (K),
GKD: LK + (K) + (D),
GKT: LK + (K) + (T),
GK4: LK + (4),
GK4D: LK + (4) + (4D),
GS4: LK + (T) + (S4),
GS5: LK + (T) + (S5).

Exercise 4.2 Show that the sequent □((α ∧ β) → γ) ⇒ □(α ∧ β) → □γ is provable in GK.

Exercise 4.3 Show that the rule (K) is redundant, i.e., derivable, in GK4, and also in GS4.

Exercise 4.4
1. Show that the sequent □α ⇒ ♦α is provable in GKD.
2. Show that the sequent □α ⇒ □□α is provable in GK4.
3. Show that the rule (S4) is derivable in the system GK with the initial sequents of the form □β ⇒ □□β. Here, the derivability of (S4) means that □Γ ⇒ □α is provable in this system whenever □Γ ⇒ α is provable in it.
4. Show that the sequent ¬□α ⇒ □¬□α is provable in GS5.
5. Show that the rule (S5) is derivable in the system GS4 with the initial sequents of the form ¬□α ⇒ □¬□α.
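The first claim in this passage, that every instance of the axiom scheme K is provable once the rule (K) is added to LK, can be spelled out as follows (a sketch written line by line; rule names as in Chapter 1):

```latex
% The axiom scheme K is derivable in LK + (K).
\begin{array}{ll}
 1.\; \alpha \Rightarrow \alpha,\quad \beta \Rightarrow \beta          & \text{(initial sequents)}\\
 2.\; \alpha\to\beta,\ \alpha \Rightarrow \beta                         & (\to\Rightarrow)\ \text{from 1}\\
 3.\; \Box(\alpha\to\beta),\ \Box\alpha \Rightarrow \Box\beta           & (\mathrm{K})\ \text{from 2, with } \Gamma = \{\alpha\to\beta,\ \alpha\}\\
 4.\; \Box(\alpha\to\beta) \Rightarrow \Box\alpha\to\Box\beta           & (\Rightarrow\to)\ \text{from 3}\\
 5.\; \Rightarrow \Box(\alpha\to\beta)\to(\Box\alpha\to\Box\beta)       & (\Rightarrow\to)\ \text{from 4}
\end{array}
```

The key step is line 3, where (K) boxes the whole antecedent and the succedent in one application.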


In the same way as in Chap. 2, we can show cut elimination for the following systems.

Theorem 4.1 Cut elimination holds for GK, GK4, GKD, GKT, GK4D and GS4.

Exercise 4.5 Give a detailed proof of cut elimination for GS4.

In the same way as for classical logic and intuitionistic logic, we can derive the decidability and Craig's interpolation property of modal logics from the cut elimination theorem (see Sects. 3.1 and 3.3). For Craig's interpolation property, what is necessary is to check that for each rule for modality, an interpolant of the lower sequent exists for a given partition if interpolants of the upper sequents always exist for any partition of them. We have the following.

Theorem 4.2 Modal logics K, K4, KD, KT, K4D and S4 are decidable.

Theorem 4.3 Craig's interpolation property holds for modal logics K, K4, KD, KT, K4D and S4.

Exercise 4.6 Give a detailed proof of Craig's interpolation property for S4, using Maehara's method.

Analytic Cut Property
On the other hand, cut elimination does not hold in GS5. For example, the sequent p ⇒ □¬□¬p is provable in GS5 for a propositional variable p, as shown below,

□¬p ⇒ □¬p            p ⇒ p
――――――――――― (⇒¬)     ―――――――― (¬⇒)
⇒ ¬□¬p, □¬p          ¬p, p ⇒
――――――――――― (S5)     ―――――――― (T)
⇒ □¬□¬p, □¬p         □¬p, p ⇒
――――――――――――――――――――――――――――― (cut)
p ⇒ □¬□¬p

but it is not provable in GS5 without using cut rule. To see this, suppose that p ⇒ □¬□¬p has a cut-free proof. As this is not an initial sequent, it must be the lower sequent of an application of a rule J which is different from cut rule. By examining all possible cases, we can show that no such rule exists. As a matter of fact, no simple sequent system is known for the modal logic S5 for which cut elimination holds. Note that the above sequent expresses essentially the axiom B. Many modal logics having B as an axiom scheme face similar troubles with cut-free sequent systems.

In the above example, the cut formula □¬p of the application of cut rule is a subformula of a formula in the lower sequent. Thus, the above proof containing cut rule still has the subformula property.
Then, can we say that every sequent which is provable in GS5 always has a proof with the subformula property? Here we note that both rules (T) and (S5) for modality of GS5 are acceptable, i.e., every formula in the upper sequent of either rule appears as a subformula of a formula in the lower sequent. We say that an application of cut rule is analytic if the cut formula is a subformula of


some formula in the lower sequent. Otherwise, it is said to be non-analytic. Also, we say that a sequent system has the analytic cut property whenever every sequent which is provable in the system always has a proof in which every application of cut rule is analytic. We can show the following theorem by a proof-theoretic method, similarly to cut elimination. But we omit the proof here, as it is a bit complicated.²

Theorem 4.4 Sequent system GS5 has the analytic cut property.

If a given system has the analytic cut property, then the subformula property will follow, as long as every application of each logical rule in it is acceptable. Then, many important logical properties, e.g. the decidability and Craig's interpolation property, can be derived from the subformula property. In the following, by using Maehara's method and Theorem 4.4, we show Craig's interpolation property for GS5. By Theorem 4.4, it is enough for us to consider a proof in which every application of cut rule is analytic. It is not hard to see that for both rules of modality (T) and (S5), an interpolant of the lower sequent for any given partition exists if we assume the existence of interpolants of the upper sequent for every partition. Thus, what remains for us is an application of analytic cut rule, as shown below. Here we assume that α appears as a subformula in Γ, Δ ⇒ Λ, Π.

Γ ⇒ Λ, α    α, Δ ⇒ Π
――――――――――――――――――――― (cut)
Γ, Δ ⇒ Λ, Π

Take an arbitrary partition (⟨Γ1, Δ1 : Λ1, Π1⟩, ⟨Γ2, Δ2 : Λ2, Π2⟩) of the sequent Γ, Δ ⇒ Λ, Π. Without loss of generality, we can assume that α is a subformula of a formula in Γ1, Δ1, Λ1, Π1. Consider the partition (⟨Γ1 : Λ1, α⟩, ⟨Γ2 : Λ2⟩) of the sequent Γ ⇒ Λ, α, and also the partition (⟨α, Δ1 : Π1⟩, ⟨Δ2 : Π2⟩) of the sequent α, Δ ⇒ Π. By the hypothesis of induction, there exist an interpolant β of Γ ⇒ Λ, α with respect to the first partition above and an interpolant γ of α, Δ ⇒ Π with respect to the second partition. Thus, the following statements hold.
• both Γ1 ⇒ Λ1, α, β and β, Γ2 ⇒ Λ2 are provable, and Var(β) ⊆ Var(Γ1, Λ1, α) ∩ Var(Γ2, Λ2),
• both α, Δ1 ⇒ Π1, γ and γ, Δ2 ⇒ Π2 are provable, and Var(γ) ⊆ Var(α, Δ1, Π1) ∩ Var(Δ2, Π2).

Thus, both sequents Γ1, Δ1 ⇒ Λ1, Π1, β, γ and β ∨ γ, Γ2, Δ2 ⇒ Λ2, Π2 are provable in GS5. From the first, it follows that the sequent Γ1, Δ1 ⇒ Λ1, Π1, β ∨ γ is also provable. Moreover, we can show that Var(β ∨ γ) ⊆ Var(Γ1, Λ1, α) ∪ Var(α, Δ1, Π1) = Var(Γ1, Δ1, Λ1, Π1), because α is a subformula of a formula in Γ1, Δ1, Λ1, Π1. Also we can show that Var(β ∨ γ) ⊆ Var(Γ2, Δ2, Λ2, Π2). This means that the formula β ∨ γ is an interpolant of Γ, Δ ⇒ Λ, Π with respect to the partition (Γ1, Δ1 : Λ1, Π1, Γ2, Δ2 : Λ2, Π2). Thus, we can conclude the following.

Theorem 4.5 Craig's interpolation property holds for the modal logic S5.

² The analytic cut property of GS5 was proved by Takano (1992). In addition, he also introduced sequent systems for the modal logics KB, KDB, KTB and KB4, in which every rule for the modality □ is acceptable, and showed that each of them has the analytic cut property.
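Maehara's method reads an interpolant off a proof. In classical propositional logic one can also obtain an interpolant semantically, by existentially quantifying away the variables of the antecedent formula that the succedent formula does not share. The following Python sketch is our own illustration of the interpolation property itself, not the proof-theoretic GS5 construction above; all names in it are ours.

```python
from itertools import product

def interpolant(A, vars_a, vars_b):
    """A is a map from assignment dicts to bool. Returns C with
    Var(C) ⊆ vars_a ∩ vars_b such that A implies C and, whenever
    A implies B for a B over vars_b, C implies B as well."""
    private = [v for v in vars_a if v not in vars_b]

    def C(asg):
        # C = ∃(private variables). A  — project A onto the shared vocabulary
        return any(A({**asg, **dict(zip(private, bits))})
                   for bits in product([False, True], repeat=len(private)))
    return C

# Example: A = p ∧ q and B = q ∨ r share only q, and C works out to q.
A = lambda v: v['p'] and v['q']
B = lambda v: v['q'] or v['r']
C = interpolant(A, ['p', 'q'], ['q', 'r'])

assert all(not A({'p': p, 'q': q}) or C({'q': q})        # A implies C
           for p, q in product([False, True], repeat=2))
assert all(not C({'q': q}) or B({'q': q, 'r': r})        # C implies B
           for q, r in product([False, True], repeat=2))
```

The projection ∃(private). A is in fact the strongest interpolant; the proof-theoretic method instead extracts an interpolant from the shape of a concrete derivation.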

4.2 Roles of Structural Rules

We will examine the role of structural rules in more detail. For this purpose, in the rest of this chapter we assume that each sequent is an expression of the following form, where α1, . . . , αm is a sequence of formulas (not a multiset of formulas), and β is a formula or empty:

    α1, . . . , αm ⇒ β    (4.1)

which is understood intuitively as "β follows from the assumptions α1, . . . , αm". We consider the following structural rules, as we are concerned only with single-succedent sequents.

(e) exchange rule:

    Γ, α, β, Δ ⇒ ϕ
    ────────────────
    Γ, β, α, Δ ⇒ ϕ

(c) contraction rule:

    Γ, α, α, Δ ⇒ ϕ
    ────────────────
    Γ, α, Δ ⇒ ϕ

(i) left weakening rule:

    Γ, Δ ⇒ ϕ
    ──────────────
    Γ, α, Δ ⇒ ϕ

(o) right weakening rule:

    Γ ⇒
    ──────
    Γ ⇒ α
These rules are usually considered to be auxiliary rules, which are necessary to determine the meaning of the expression "α1, . . . , αm" in the sequent (4.1), in particular the role of the commas in sequents, independently of the meaning of logical connectives. In fact, each structural rule expresses a way in which an assumption (i.e., an antecedent) can be used for deriving a consequence (i.e., a succedent). That is, exchange rule allows us to use formulas in the assumption in an arbitrary order, contraction rule to


use any formula in the assumptions more than once, and left-weakening rule allows us to ignore some formulas in the assumption. (See also Sect. 1.2.) But this does not imply that structural rules are nonessential, supplementary rules. On the contrary, the presence or absence of each structural rule will sometimes crucially affect the logical properties of a given sequent system. For instance, let us look at the proof of the distributive law in Example 1.2. There we can see that both contraction rule and weakening rule play a key role. As a matter of fact, it can be shown that the distributive law is not provable if we delete either one of these structural rules from LJ. In this section, we will explain that logical properties are sometimes quite sensitive to the existence of structural rules. To clarify this connection is one of the initial motives for the study of substructural logics.

In the sequent system LJ for intuitionistic logic, it can be shown, with the help of both contraction rule and left-weakening rule, that the sequent (4.1) is provable if and only if the following sequent is provable:

    α1 ∧ . . . ∧ αm ⇒ β    (4.2)

Thus in LJ, any comma in the antecedent of a sequent can be regarded as an outer expression of conjunction. But this is not always the case. If a system lacks either contraction rule or left-weakening rule, we cannot expect that commas can be understood as conjunction. To clarify the situation, we will introduce a new logical connective, fusion (· in symbol), and take the following rules for it.

    Γ ⇒ α    Δ ⇒ β
    ──────────────── (⇒ ·)
    Γ, Δ ⇒ α · β

    Σ, α, β, Δ ⇒ ϕ
    ──────────────── (· ⇒)
    Σ, α · β, Δ ⇒ ϕ

Sometimes, fusion is instead called multiplicative conjunction. In such a case, the usual conjunction is called additive conjunction to avoid confusion.

Remark 4.1 (Associativity of fusion) It is easy to see that the sequents (α · β) · γ ⇒ α · (β · γ) and α · (β · γ) ⇒ (α · β) · γ are both provable by using the rules for fusion. This means that the associativity of fusion holds. Thus we will often omit parentheses and express either of them simply as α · β · γ.

By these two rules together with cut rule (but without any structural rule), we can easily show that the sequent (4.1) is provable if and only if the following sequent is provable:

    α1 · . . . · αm ⇒ β    (4.3)

Hence, we can say that every comma in sequents is always expressed by a fusion.

Exercise 4.7 Let L be the sequent system consisting only of initial sequents, cut rule of LJ and the above two rules for fusion. Show that the sequent (4.1) is provable in L if and only if the sequent (4.3) is provable in L.
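The reading of the rules (e), (c) and (i) as ways of using assumptions can be made concrete by viewing a single-succedent sequent as a pair of an antecedent list and a succedent, and each structural rule as an operation producing the lower sequent from the upper one. A minimal Python sketch; the representation and function names are ours, for illustration only.

```python
# A sequent α1, ..., αm ⇒ β represented as (antecedent list, succedent).

def exchange(seq, i):
    """(e): swap the adjacent antecedent formulas at positions i and i+1."""
    gamma, phi = seq
    g = list(gamma)
    g[i], g[i + 1] = g[i + 1], g[i]
    return (g, phi)

def contraction(seq, i):
    """(c): contract two adjacent occurrences of the same formula into one."""
    gamma, phi = seq
    assert gamma[i] == gamma[i + 1], "contraction needs a duplicated formula"
    return (gamma[:i] + gamma[i + 1:], phi)

def left_weakening(seq, i, alpha):
    """(i): insert an unused extra assumption alpha at position i."""
    gamma, phi = seq
    return (gamma[:i] + [alpha] + gamma[i:], phi)

# With all three rules available (as in LJ) the antecedent behaves like a
# set of assumptions; dropping any of them makes the order or the
# multiplicity of the assumptions matter.
upper = (['α', 'α', 'α→β'], 'β')
print(contraction(upper, 0))   # (['α', 'α→β'], 'β')
```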


Remark 4.2 To see the differences between fusion and conjunction in a system lacking some of the structural rules, consider the sequents α ⇒ α ∧ α and α ∧ β ⇒ α. Both are provable as long as the usual rules for conjunction are assumed. On the other hand, though α, α ⇒ α · α follows by using (⇒ ·), contraction rule will be necessary to deduce α ⇒ α · α from it. Similarly, α · β ⇒ α is provable if and only if so is α, β ⇒ α. But to deduce the latter sequent from the initial sequent α ⇒ α, left-weakening rule will be needed.

To gain an intuitive understanding of the difference between fusion and conjunction, the following example will be helpful, which is a modified version of an example given by Girard (1989). Let α, β and γ be the following statements, respectively.

(α) one has $25.
(β) one can get this paperback.
(γ) one can have lunch at that restaurant.

We suppose moreover that each of 'this paperback' and 'lunch at that restaurant' costs $25. Then, both α ⇒ β and α ⇒ γ hold. By applying the rules for fusion, α · α ⇒ β · γ can be deduced, but α ⇒ β · γ cannot. On the other hand, α ⇒ β ∧ γ follows by applying the rules for conjunction. How, then, should fusion be interpreted so as to see the difference? A possible way is to understand these statements in the context of consumption of resources. In the present case, $25 is a resource which may be consumed by buying this paperback, by paying for lunch, or by something else. Once a resource is consumed, it cannot be used any more. Under this interpretation, the statement α · α expresses "one has $50", and hence α · α ⇒ β · γ says that one can get this paperback and at the same time can have lunch. Thus, α · α ⇒ β · γ holds, while α ⇒ β · γ does not, since $25 is not enough to have both of them. On the other hand, α ⇒ β ∧ γ says that if one has $25 then one can get this paperback and also can have lunch.
Someone may then wonder whether conjunction under this interpretation does not sound just like disjunction, as in the latter case it says that if one has $25 then either one can get this paperback or one can have lunch. But this is not the case. For, if the price of lunch at that restaurant rises to $30, then α ⇒ β ∧ γ no longer holds while α ⇒ β ∨ γ still holds. The exact meaning of the former is that either choice is possible, though not both at the same time.
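The dollar example can even be played through mechanically. In the following Python sketch we assign each statement the amount of money it costs to realize, read fusion as adding costs, additive conjunction as the cost of the dearer option, and disjunction as the cost of the cheaper one. This arithmetic reading is only an informal illustration of the discussion above, not a semantics defined in this book.

```python
# Toy "resource" reading of fusion vs. additive conjunction/disjunction.
PAPERBACK, LUNCH = 25, 25   # β costs $25, γ costs $25

def fusion(c1, c2):         # β · γ : realize both at once, so costs add
    return c1 + c2

def additive_and(c1, c2):   # β ∧ γ : either may be chosen, budget the dearer
    return max(c1, c2)

def additive_or(c1, c2):    # β ∨ γ : one of them suffices, budget the cheaper
    return min(c1, c2)

def holds(money, cost):     # "money ⇒ F": the resource covers the cost
    return money >= cost

print(holds(50, fusion(PAPERBACK, LUNCH)))         # α·α ⇒ β·γ : True
print(holds(25, fusion(PAPERBACK, LUNCH)))         # α ⇒ β·γ  : False
print(holds(25, additive_and(PAPERBACK, LUNCH)))   # α ⇒ β∧γ  : True
print(holds(25, additive_and(PAPERBACK, 30)))      # lunch now $30: False
print(holds(25, additive_or(PAPERBACK, 30)))       # α ⇒ β∨γ  : still True
```

The last two lines reproduce the price-rise scenario: β ∧ γ fails once one option exceeds the budget, while β ∨ γ survives.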

4.3 Sequent Systems for Basic Substructural Logics

We introduce several sequent systems that are obtained from LJ by deleting some or all of the structural rules. They constitute systems for basic substructural logics. We first introduce the sequent system FL (full Lambek calculus), which has no structural rules. Sequents of FL are single-succedent, and thus are of the form Γ ⇒ ϕ with a finite sequence of formulas Γ and a formula ϕ. In FL, fusion is used as a logical connective. As the system FL lacks exchange rule, the order of formulas in the antecedent of sequents becomes essential. Thus, a sequent α, β ⇒ ϕ must


be distinguished from the sequent β, α ⇒ ϕ. This will affect the definition of the logical connective "implication". In fact, we are obliged to introduce two kinds of "implication", "left-implication" and "right-implication". Alternatively, they are called left-division \ and right-division /, borrowing the term and notation from algebra. Also the word residuation can be used for division. For residuation, we assume the following rules.

Rules for residuation:

    α, Γ ⇒ β
    ────────── (⇒ \)
    Γ ⇒ α\β

    Γ ⇒ α    Σ, β, Δ ⇒ θ
    ────────────────────── (\ ⇒)
    Σ, Γ, α\β, Δ ⇒ θ

    Γ, α ⇒ β
    ────────── (⇒ /)
    Γ ⇒ β/α

    Γ ⇒ α    Σ, β, Δ ⇒ θ
    ────────────────────── (/ ⇒)
    Σ, β/α, Γ, Δ ⇒ θ
Take notice of the place of the formula α\β (resp. the formula β/α) and the sequence Γ of formulas in the lower sequent of the rule (\ ⇒) (resp. (/ ⇒)). It is easily seen that, as long as we assume exchange rule, each of the left-division and the right-division is equivalent to standard implication. That is, α\β and β/α are equivalent, and both are expressed as α → β.

Example 4.3 An example of a proof in FL, presented as a linear derivation:

    1. β ⇒ β and γ ⇒ γ            (initial sequents)
    2. β/γ, γ ⇒ β                 from 1 by (/ ⇒)
    3. α ⇒ α                      (initial sequent)
    4. α, α\(β/γ), γ ⇒ β          from 3 and 2 by (\ ⇒)
    5. α\(β/γ), γ ⇒ α\β           from 4 by (⇒ \)
    6. α\(β/γ) ⇒ (α\β)/γ          from 5 by (⇒ /)

It is easy to see the following.

Lemma 4.6
1. α, Γ ⇒ β is provable in FL iff Γ ⇒ α\β is provable in FL,
2. Γ, α ⇒ β is provable in FL iff Γ ⇒ β/α is provable in FL.

Remark 4.4 One may notice a resemblance between the above logical equivalences and the following equivalences in ordered groups.

• x · y ≤ z if and only if y ≤ x⁻¹ · z,
• x · y ≤ z if and only if x ≤ z · y⁻¹.

We take two logical constants 0 and 1 for FL, with initial sequents and rules for them as follows.

(i) 0 ⇒    (ii) ⇒ 1

    Γ ⇒
    ────── (0w)
    Γ ⇒ 0

    Γ, Δ ⇒ ϕ
    ──────────── (1w)
    Γ, 1, Δ ⇒ ϕ


These initial sequents and rules say that 0 (resp. 1) can be identified with the empty right-hand side (resp. left-hand side) of sequents. Obviously, the rule (0w) (resp. (1w)) becomes redundant when a given system has the right weakening rule (resp. the left weakening rule). As FL has two implications (or two divisions), it has two negations, ∼ and ¬, which are defined by ∼α = α\0 and ¬α = 0/α, respectively.

To sum up, the sequent system FL is precisely described as follows. The language consists of the standard logical connectives ∨, ∧, three new logical connectives ·, \ and /, and two logical constants 0 and 1. Its initial sequents are (i) and (ii) in addition to the standard ones, i.e., sequents of the form α ⇒ α. Its rules comprise the rules for ∨, ∧, the rules for ·, \ and /, and also (0w), (1w) for the logical constants, but no structural rule.

We regard FL as the fundamental system for substructural logics. That is, we will define a substructural logic as any axiomatic extension of FL. But this is rather an intuitive explanation of substructural logics; the notion of axiomatic extensions will be defined in the next section.

Exercise 4.8 Give a proof of each of the following sequents in FL.
1. α, β/γ ⇒ (α · β)/γ.
2. γ/β, β/α ⇒ γ/α.
3. (β\α) ∧ (γ\α) ⇒ (β ∨ γ)\α.
4. β/α ⇒ β/((β/α)\β).
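The algebraic analogy of Remark 4.4 can also be tested concretely. On a finite Łukasiewicz chain, fusion interpreted as the t-norm max(0, x + y − 1) has the residual min(1, 1 − x + z), and since this fusion is commutative the two divisions coincide. The brute-force check below is an illustration of our own (residuated structures are treated properly in Part II); it confirms the residuation equivalence x · y ≤ z iff y ≤ x\z on the chain.

```python
from fractions import Fraction

# Finite Łukasiewicz chain {0, 1/n, 2/n, ..., 1}.
n = 6
chain = [Fraction(i, n) for i in range(n + 1)]

def fuse(x, y):         # fusion: x · y = max(0, x + y - 1)
    return max(Fraction(0), x + y - 1)

def residual(x, z):     # division: x\z = min(1, 1 - x + z)
    return min(Fraction(1), 1 - x + z)

# Residuation law, checked over all triples of the chain.
assert all((fuse(x, y) <= z) == (y <= residual(x, z))
           for x in chain for y in chain for z in chain)
print("x · y ≤ z iff y ≤ x\\z holds on the chain")
```

Note how fusion differs from the lattice conjunction here: fuse(1/2, 1/2) is 0, not 1/2, mirroring the failure of α ⇒ α · α without contraction rule.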

We next introduce several sequent systems which are obtained by adding some structural rules to FL. These will be regarded as systems for basic substructural logics. Let FLe be the sequent system obtained from FL by adding (left) exchange rule. Also, FLw and FLc are the sequent systems obtained from FL by adding both left and right weakening rules, and (left) contraction rule, respectively. (See the structural rules in Sect. 4.2.) Similarly, let FLew (resp. FLec) be the system obtained from FLe by adding left and right weakening rules (resp. (left) contraction rule).³

We note that in FLe, both α\β ⇒ β/α and β/α ⇒ α\β are provable. That is, the left-division and the right-division (of β by α) are logically equivalent. Thus, we can use the standard symbol → of implication instead of left- and right-divisions, and also the symbol ¬ of negation instead of left- and right-negations. When a given sequent system has both the right and left weakening rules, by applying the right (resp. the left) weakening rule to the initial sequent (i) (resp. (ii)), the sequents 0 ⇒ α and α ⇒ 1 are provable in it for every formula α. Thus, 0 (resp. 1) can be regarded as the weakest (resp. the strongest) formula with respect to provability. Sometimes, they are denoted by the symbols ⊥ and ⊤, instead.

Exercise 4.9 Show that a formula α1 → (α2 → (· · · → (αm → β) · · · )) is provable in FLe if and only if the formula (α1 · α2 · · · · · αm) → β is provable in it, for all formulas α1, α2, · · · , αm and β.

³ The name 'full Lambek calculus' and the series of sequent systems prefixed by FL were first introduced in Ono (1990).


Cut Elimination and Decision Problem

By using the same procedure of cut elimination described in Chap. 2, we can show the following theorem. Recall that in that procedure cut rule is replaced by e-cut rule in order to get rid of a difficulty caused by the presence of contraction rule(s). Note that this replacement is not necessary for FL, FLe, FLw and FLew, as these systems lack contraction rule: in these cases, every application of cut rule can be directly eliminated by the procedure described in Sect. 2.1. On the other hand, for FLec, cut elimination can be derived from e-cut elimination. Consequently, we have the following.

Theorem 4.7 Cut elimination holds in FL, FLe, FLw, FLew and FLec. Hence, the subformula property holds for them.

We can show that the decidability and Craig's interpolation property for these logics follow from cut elimination, by using essentially the same arguments described in the previous chapter. As for the decidability, the termination of backward proof searching in each system among FL, FLe, FLw and FLew follows immediately from the fact that each of its upper sequents is shorter than its lower sequent in every rule, as the system lacks contraction rule.

Theorem 4.8 The sequent systems FL, FLe, FLw and FLew are decidable.

It should be noted here that the existence of the additional connectives fusion, left-division and right-division and the logical constants 0 and 1 does not cause any difficulty in the decision problem for FL and FLw.

Example 4.5 Let us examine whether the sequent p → q, r → p ⇒ r → q is provable in FLe or not, by using backward proof searching. Here, p, q, r are distinct propositional variables.

• Using the invertibility of the rule (⇒→), the above problem can be transformed into the provability of the sequent p → q, r → p, r ⇒ q.
• Apparently, this is not an initial sequent. Then the possible last rule is only (→⇒) (if we neglect exchange rule and consider the antecedent of this sequent as a multiset).
• There are the following four cases of upper sequents of this (→⇒) when r → p is its principal formula: (i) p → q, r ⇒ r and p ⇒ q, (ii) p → q ⇒ r and r, p ⇒ q, (iii) r ⇒ r and p → q, p ⇒ q, and (iv) ⇒ r and p, p → q, r ⇒ q.
• Obviously, none of the second sequents of (i) and (ii), nor the first sequent of (iv), is provable. Thus, we abandon these cases and consider only the case (iii).
• Possible upper sequents of p → q, p ⇒ q in (iii) are (a) ⇒ p and p, q ⇒ q, and (b) p ⇒ p and q ⇒ q. Both sequents in (b) are initial sequents. Thus, the sequent under consideration is provable, and the cut-free proof obtained from this backward proof search is, as a linear derivation:

    1. p ⇒ p and q ⇒ q         (initial sequents)
    2. p → q, p ⇒ q             from 1 by (→⇒)
    3. r ⇒ r                    (initial sequent)
    4. p → q, r → p, r ⇒ q      from 3 and 2 by (→⇒)
    5. p → q, r → p ⇒ r → q     from 4 by (⇒→)
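This backward search is entirely mechanical. The following Python sketch is our own illustration of the decision procedure behind Theorem 4.8, restricted to the implicational fragment of FLe: antecedents are treated as multisets, (⇒→) is applied by invertibility, and (→⇒) tries every principal formula and every split of the remaining antecedent. Every premise is smaller than its conclusion, so the search terminates; the representation and names are ours.

```python
from itertools import combinations

def imp(a, b):
    return ('->', a, b)

def splits(gamma):
    """All ways to split a multiset (as a tuple) into two multisets."""
    idx = range(len(gamma))
    for r in range(len(gamma) + 1):
        for chosen in combinations(idx, r):
            yield (tuple(gamma[i] for i in idx if i in chosen),
                   tuple(gamma[i] for i in idx if i not in chosen))

def provable(gamma, phi):
    gamma = tuple(gamma)
    # initial sequent  α ⇒ α  (no weakening: no extra context allowed)
    if gamma == (phi,):
        return True
    # (⇒→) is invertible: to prove Γ ⇒ α→β, prove Γ, α ⇒ β
    if isinstance(phi, tuple) and phi[0] == '->':
        return provable(gamma + (phi[1],), phi[2])
    # (→⇒): choose a principal α→β in Γ and split the rest into Γ1, Γ2;
    # premises are Γ1 ⇒ α and β, Γ2 ⇒ φ
    for i, f in enumerate(gamma):
        if isinstance(f, tuple) and f[0] == '->':
            rest = gamma[:i] + gamma[i + 1:]
            a, b = f[1], f[2]
            if any(provable(g1, a) and provable(g2 + (b,), phi)
                   for g1, g2 in splits(rest)):
                return True
    return False

p, q, r = 'p', 'q', 'r'
print(provable((imp(p, q), imp(r, p)), imp(r, q)))   # Example 4.5: True
print(provable((p, imp(p, imp(p, q))), q))           # needs contraction: False
```

The second query fails precisely because, without contraction rule, the single copy of p in the antecedent cannot be used twice; with contraction (as in LJ) the sequent would be provable.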


• Also, if p → q is the principal formula, the single possible case is when the upper sequents are r → p, r ⇒ p and q ⇒ q. In this case, the following cut-free proof is obtained:

    1. r ⇒ r and p ⇒ p          (initial sequents)
    2. r → p, r ⇒ p              from 1 by (→⇒)
    3. q ⇒ q                     (initial sequent)
    4. p → q, r → p, r ⇒ q       from 2 and 3 by (→⇒)
    5. p → q, r → p ⇒ r → q      from 4 by (⇒→)

Exercise 4.10 Check whether each of the following sequents is provable in FLew or not, by using backward proof searching.
• ⇒ p ∨ ¬p,
• p, p → q ⇒ p ∧ q.

Exercise 4.11 Show that ¬¬α is provable in FLe if and only if α is provable in FLe for any formula α. ([Hint] Note that ¬¬α is the abbreviation of (α → 0) → 0. Now consider any cut-free proof of ¬¬α in FLe.)

On the other hand, such a decision algorithm as above does not work well for FLec, as it has contraction rule. Besides, differently from LJ, the provability of a given sequent is not always equivalent to the provability of its 1-reduced contraction (see Lemma 3.1), because of the lack of weakening rule. The decision problem of the implicational fragment of FLec was solved positively by Kripke (1959) by using an ingenious combinatorial method which can be extended also to the full system FLec. Thus we have the following, though we omit the details of the proof.⁴

Theorem 4.9 FLec is decidable.

It is known that cut elimination does not hold for our system FLc (see Bayu Surarso and Ono 1996). This result does not deny the existence of a cut-free sequent system which determines the same set of provable sequents as that of FLc. But such an attempt would be useless for deriving the decidability, as the following was shown recently by Chvalovský and Horčík (2016).

Theorem 4.10 FLc is undecidable.

Involutive Substructural Logics

The basic substructural logics discussed so far are obtained from the sequent system LJ by deleting some structural rules. A natural question is the following: what will happen if we delete some of the structural rules from the sequent system LK? We will discuss this problem briefly in the following.
⁴ The result was shown in Kiriyama and Ono (1991) by using the technique developed in Meyer (1966).

In the following, we consider sequents with multi-succedents, and thus they are expressions of the following form:

    α1, . . . , αm ⇒ β1, . . . , βn    (4.4)


where m, n ≥ 0. To avoid unnecessary complications, we will keep (left- and right-) exchange rules in the following. Thus, we assume that both α1, . . . , αm and β1, . . . , βn are multisets of formulas. We suppose that our language consists of the logical connectives ∨, ∧, → and · (fusion), and also the logical constants 0 and 1. The negation ¬α of α is defined by α → 0. We will take the following as initial sequents of our sequent system InFLe.

(0) α ⇒ α,    (i) 0 ⇒ ,    (ii) ⇒ 1.

The rules of InFLe comprise the rules for ∨, ∧ and →, cut rule, and also the following rules for fusion:

    α, β, Γ ⇒ Λ
    ───────────── (· ⇒)
    α · β, Γ ⇒ Λ

    Γ ⇒ Λ, α    Δ ⇒ Π, β
    ────────────────────── (⇒ ·)
    Γ, Δ ⇒ Λ, Π, α · β

as well as the following rules for the logical constants:

    Γ ⇒ Λ
    ────────── (0w)
    Γ ⇒ Λ, 0

    Γ ⇒ Λ
    ────────── (1w)
    1, Γ ⇒ Λ
Sequent systems InFLew and InFLec are obtained from InFLe by adding left- and right-weakening rules, and left- and right-contraction rules, respectively, which are the same as those of LK. It is easily seen that for all Γ and Δ, if Γ ⇒ Δ is provable in InFLe then Γ, ¬Δ ⇒ is also provable in it. Here ¬Δ is the sequence of formulas ¬β1, . . . , ¬βn when Δ is β1, . . . , βn.

Exercise 4.12
1. Show that the constant 1 can be expressed as ¬0, i.e., both 1 ⇒ ¬0 and ¬0 ⇒ 1 are provable in InFLe.
2. Show that ¬¬α ⇒ α is provable in InFLe for each α.

As shown in Exercise 4.12, the law of double negation ¬¬α ⇒ α is provable in the systems InFLe, InFLew and InFLec. Because of this, these systems are said to be involutive, and hence their names are prefixed with "In".⁵ It is easy to see that no instance of the law of double negation of the form ¬¬p ⇒ p is provable in FLe for any propositional variable p, as FLe is weaker than LJ. Similarly to Theorems 4.7 and 4.8, we can show the following.

Theorem 4.11 Cut elimination holds for InFLe, InFLew and InFLec. Each of them is decidable.

Let FLe⁺ be the sequent system obtained from FLe by adding sequents of the form ¬¬β ⇒ β as new initial sequents. (We note here that only sequents of the form Σ ⇒ β are allowed in FLe⁺.) Then, we have the following.

⁵ Precisely speaking, they should be called involutive commutative substructural logics. Here, a commutative logic means a logic having exchange rule(s).


Theorem 4.12 For all multisets of formulas Γ and Δ, the sequent Γ ⇒ Δ is provable in InFLe iff the sequent ¬Δ, Γ ⇒ is provable in FLe⁺.

Proof It is obvious that every sequent (of FLe⁺) which is provable in FLe⁺ is provable also in InFLe, as every sequent of the form ¬¬β ⇒ β, which is an extra initial sequent of FLe⁺, is also provable in InFLe. Therefore, if ¬Δ, Γ ⇒ is provable in FLe⁺, it is provable also in InFLe, and hence Γ ⇒ Δ is provable in InFLe. The converse direction can be shown by induction on the length of any given proof of Γ ⇒ Δ in InFLe, in almost the same way as for Glivenko's theorem (Theorem 3.11). □

From this, we immediately have the following.

Corollary 4.13 For any formula α, the formula α is provable in InFLe if and only if it is provable in FLe⁺.

The same relation as in Corollary 4.13 holds between FLew and InFLew, and also between FLec and InFLec. We notice here that a Glivenko-type theorem does not hold between InFLe and FLe. For, any formula of the form ¬¬β → β is provable in InFLe, while ¬¬(¬¬β → β) is not always provable in FLe (see Exercise 4.11).

Exercise 4.13 Suppose that Γ ⇒ α is any sequent containing no logical constants. Show the following.
1. The sequent Γ ⇒ α is provable in InFLew if and only if it is provable in FLew.
2. The sequent Γ ⇒ α is provable in InFLec if and only if it is provable in FLec.
([Hint] The only-if part is essential. Take any cut-free proof P of Γ ⇒ α in InFLew, and suppose that P contains a sequent of the form Δ ⇒ Σ such that Σ consists of more than one formula. Then, without using contraction rules, how can P reach the end sequent of the form Γ ⇒ α? Similarly for InFLec.)

Note that such a relation as in the above exercise does not hold between LK and LJ. For instance, the sequent (p → q) → p ⇒ p is provable in LK but not in LJ when p and q are distinct propositional variables.
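The LK half of this last remark can be checked mechanically: LK proves exactly the classically valid sequents, and Peirce's law ((p → q) → p) → p is classically valid, as a brute-force truth-table computation confirms. (Its failure in LJ cannot be seen this way; that needs, e.g., Kripke models or the algebraic semantics of Part II.) A quick Python check, purely illustrative:

```python
from itertools import product

def implies(a, b):
    # classical material implication on truth values
    return (not a) or b

# Peirce's law holds under every classical truth assignment.
assert all(implies(implies(implies(p, q), p), p)
           for p, q in product([False, True], repeat=2))
print("((p → q) → p) → p is classically valid")
```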

Chapter 5

Deducibility and Axiomatic Extensions

Throughout Part I, we have been discussing sequent systems for particular logics, like classical logic and intuitionistic logic, and logical properties of these logics by proof-theoretic analysis of sequent systems for them. These results, often obtained as consequences of cut elimination, are sharp and deep. On the other hand, cut elimination holds only for a limited number of sequent systems. In contrast, the algebraic methods developed in Part II will give us general results on a wide family of logics. Algebraic methods in fact enable us to answer general questions like 'how many logics are there over intuitionistic logic' and 'how many logics among them have Craig's interpolation property'. But before raising such questions, it is necessary to give a precise definition of logics, or more specifically, a definition of logics over a given particular logic.

To prepare a sound, not ad hoc, basis for the definition of logics, we first introduce the notion of deducibility from a set of formulas in a given logic L, by extending the provability in it. Various forms of the deduction theorem show relations between deducibility and provability. By using deducibility, we will introduce many well-known classes of nonclassical logics, including the class of substructural logics, the class of superintuitionistic logics, and the class of normal modal logics, in a uniform and precise way. Thus, this chapter is intended to build a bridge between Part I and Part II.

© Springer Nature Singapore Pte Ltd. 2019. H. Ono, Proof Theory and Algebra in Logic, Short Textbooks in Logic, https://doi.org/10.1007/978-981-13-7997-0_5

5.1 Deducibility and Deduction Theorem

Let S be a set of formulas, and Γ ⇒ Δ be a sequent of LK. A deduction of Γ ⇒ Δ from a set S in the sequent system LK is a proof-figure with the end sequent Γ ⇒ Δ, in which every sequent of the form ⇒ σ with σ ∈ S is allowed as an extra initial sequent. It should be noted that a sequent ⇒ σ* is not necessarily allowed as an initial sequent where σ* is a substitution instance of σ ∈ S, unless σ* itself belongs to


S. Thus, in a deduction only members of S are regarded as its additional assumptions. When S is empty, a deduction of Γ ⇒ Δ from S is nothing but a usual proof of the sequent Γ ⇒ Δ in LK. We say that a sequent Γ ⇒ Δ is deducible from S in LK iff there exists a deduction of Γ ⇒ Δ from S in LK. In particular, when a sequent ⇒ α is deducible from S, we simply say that the formula α is deducible from S.

The notion of deducibility can be defined also when classical logic is formulated in a Hilbert-style system. In any case, the deducibility of α from S says that α is provable in classical logic taking all members of S as extra axioms (but not as axiom schemes), and it can be defined independently of how classical logic Cl is formulated. Thus, we can say that a formula α is deducible from S in classical logic Cl if α is deducible from S in the sequent system LK, and write this as S ⊢Cl α, using the deducibility relation ⊢Cl. The deducibility relation ⊢Int for intuitionistic logic Int can be defined in the same way, simply by replacing LK by LJ.

We now give a definition of consequence relations, which is a basic notion in algebraic logic.

Definition 5.1 (Consequence relation) For a given set X, let ⊢ be a relation between ℘(X) and X. Then ⊢ is called a consequence relation over X if it satisfies the following: for all subsets S and T of X,
1. S ⊢ x for every x ∈ S,
2. for every x ∈ X, if S ⊢ y for every y ∈ T and also T ⊢ x, then S ⊢ x.

Exercise 5.1 Show that the following monotonicity holds for every consequence relation ⊢ over a set X: for all subsets S and T of X and all x ∈ X, if S ⊆ T and S ⊢ x then T ⊢ x.
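To see what Definition 5.1 requires, here is a toy consequence relation over a three-element set, generated by the single rule "from a and b, infer c"; both conditions of the definition and the monotonicity of Exercise 5.1 can then be verified by brute force over all subsets. The example and names are ours, purely for illustration.

```python
from itertools import chain, combinations

X = {'a', 'b', 'c'}

def cl(S):
    # closure of S under the single rule: {a, b} yields c
    return set(S) | ({'c'} if {'a', 'b'} <= set(S) else set())

def entails(S, x):
    # S ⊢ x  iff  x is in the closure of S
    return x in cl(S)

def subsets(X):
    xs = sorted(X)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# condition 1 (reflexivity): S ⊢ x for every x ∈ S
assert all(entails(S, x) for S in subsets(X) for x in S)
# condition 2 (cut): if S ⊢ y for every y ∈ T and T ⊢ x, then S ⊢ x
assert all(entails(S, x)
           for S in subsets(X) for T in subsets(X) for x in X
           if all(entails(S, y) for y in T) and entails(T, x))
# monotonicity (Exercise 5.1): S ⊆ T and S ⊢ x imply T ⊢ x
assert all(entails(T, x)
           for S in subsets(X) for T in subsets(X) for x in X
           if set(S) <= set(T) and entails(S, x))
print("Definition 5.1 and monotonicity hold for this toy relation")
```

Both conditions hold here because `cl` is a closure operator (extensive, monotone and idempotent); membership in a closure always yields a consequence relation.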

Definition 5.2 A consequence relation ⊢ over X is finitary (or compact) when, for every subset S of X and every x ∈ X, if S ⊢ x then there exists a finite subset S′ of S such that S′ ⊢ x. A consequence relation ⊢ over X is substitution invariant (or structural) when, for every subset S of X and every x ∈ X, if S ⊢ x then σ(S) ⊢ σ(x) for every substitution σ. Here, σ(S) means the set {σ(y) : y ∈ S}.

Exercise 5.2 Let ⊢ be either ⊢Cl or ⊢Int and Φ be the set of all formulas. Show that ⊢ is a finitary and substitution invariant consequence relation over Φ.

The following result holds.

Theorem 5.1 (Deduction theorem) Let ⊢ denote either ⊢Cl or ⊢Int. Then, S ∪ {α} ⊢ β iff S ⊢ α → β, for an arbitrary set of formulas S and arbitrary formulas α and β.

Proof Suppose that S ⊢ α → β holds. Note that the sequent α, α → β ⇒ β is provable in both LJ and LK. Take any deduction of ⇒ α → β from S and apply cut rule (with the cut formula α → β) to these two sequents. Then we get a deduction of α ⇒ β from S. Then, by applying cut rule to the sequents ⇒ α and α ⇒ β, we get a deduction of ⇒ β from S ∪ {α}.


Conversely, suppose that S ∪ {α} ⊢ β. Let P be a deduction of ⇒ β from the assumptions S ∪ {α}. Now, for each sequent Γ ⇒ Δ in P, we show that α, Γ ⇒ Δ is deducible from S, by induction on the length k of the deduction of Γ ⇒ Δ in P. Suppose first that k = 1, which means that Γ ⇒ Δ is one of the following:

• an initial sequent (of LK or LJ),
• ⇒ δ for some δ ∈ S (i.e., Γ is empty and Δ is δ),
• ⇒ α (i.e., Γ is empty and Δ is α).

For the first two cases, α, Γ ⇒ Δ is shown to be deducible from S by applying left weakening rule. In the third case, α, Γ ⇒ Δ is α ⇒ α, which is an initial sequent of LK, and hence is deducible from S.

Suppose next that the above statement holds for every k ≤ n, and consider the case where k = n + 1. Let J be the last rule. For instance, suppose that J is (→⇒) of LK and the upper sequents are of the form Σ1 ⇒ Λ1, γ and σ, Σ2 ⇒ Λ2, respectively. Then, Γ ⇒ Δ must be equal to γ → σ, Σ1, Σ2 ⇒ Λ1, Λ2. By the induction hypothesis, both α, Σ1 ⇒ Λ1, γ and α, σ, Σ2 ⇒ Λ2 are deducible from S. Then, by applying (→⇒) to these two sequents, we have α, α, γ → σ, Σ1, Σ2 ⇒ Λ1, Λ2. Now, applying contraction rule to this sequent, we get a deduction of α, γ → σ, Σ1, Σ2 ⇒ Λ1, Λ2 from S. Other cases can be treated in a similar way.

As a consequence, α ⇒ β is deducible from S, by taking the end sequent ⇒ β of P for Γ ⇒ Δ. Hence ⇒ α → β is also deducible from S. □

By using the deduction theorem repeatedly, we have the following result.

Corollary 5.2 The following conditions are mutually equivalent:
1. the formula β is deducible from the set {α1, α2, . . . , αm} in classical logic,
2. the formula α1 → (α2 → (· · · → (αm → β) · · · )) is provable in classical logic,
3. the formula (α1 ∧ α2 ∧ · · · ∧ αm) → β is provable in classical logic,
4. the sequent α1, α2, · · · , αm ⇒ β is provable in LK.

These equivalences hold also for intuitionistic logic and LJ.

In symbols, we can express the above corollary as: {α1, α2, . . . , αm} ⊢ β iff ⊢ α1 → (α2 → (· · · → (αm → β) · · · )) iff ⊢ (α1 ∧ α2 ∧ · · · ∧ αm) → β. Roughly speaking, the corollary says that for both classical logic and intuitionistic logic, deducibility can be reduced to provability in sequent systems, as long as we identify the set {α1, α2, . . . , αm} of formulas which are the assumptions of a deduction with the multiset α1, α2, · · · , αm of formulas in the antecedent of a sequent.
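At the level of classical truth tables, the equivalence of conditions 2 and 3 in Corollary 5.2 is just currying, and it can be confirmed by brute force over all assignments (here for m = 2; the general case follows by iterating the same step). A small illustrative check:

```python
from itertools import product

def implies(a, b):
    # classical material implication on truth values
    return (not a) or b

# α1 → (α2 → β) and (α1 ∧ α2) → β agree under every truth assignment.
assert all(implies(a1, implies(a2, b)) == implies(a1 and a2, b)
           for a1, a2, b in product([False, True], repeat=3))
print("nested implication and conjunctive implication agree classically")
```

The proof-theoretic content of the corollary is stronger, of course: it relates deducibility and provability for intuitionistic logic as well, where truth tables are not available.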


5.2 Local Deduction Theorems

The deducibility relation ⊢L can be defined for a substructural logic or a modal logic L when a sequent system for L is given. It is clear that ⊢L is a consequence relation over the set of all formulas (of the corresponding language). But we may not expect that a deduction theorem like Theorem 5.1 holds for ⊢L, as our proof of Theorem 5.1 relies on the existence of both contraction rules and weakening rules. In fact, the following example shows that the deduction theorem fails for FLe.

Example 5.1 It is easy to see that the sequent α, α, α → (α → β) ⇒ β is provable in FLe while α, α → (α → β) ⇒ β is not always provable, since FLe lacks contraction rule. On the other hand, α, α → (α → β) ⊢FLe β holds. As the following deduction shows, in a deduction we can use an assumption α as many times (possibly none) as we want:

    1. α, α, α → (α → β) ⇒ β        (provable in FLe)
    2. ⇒ α                           (assumption)
    3. α, α → (α → β) ⇒ β            from 2 and 1 by cut
    4. ⇒ α                           (assumption)
    5. α → (α → β) ⇒ β               from 4 and 3 by cut
    6. ⇒ α → (α → β)                 (assumption)
    7. ⇒ β                           from 6 and 5 by cut

Local Deduction Theorems for Substructural Logics

Still, we can show the following weaker result for FLe (and in fact for any axiomatic extension of FLe, introduced in the next section). Here, for each formula γ and each positive integer m, the formula γ^m is defined inductively by: γ^1 = γ and γ^(k+1) = γ · γ^k, where · denotes fusion.

Theorem 5.3 (Local deduction theorem for FLe) For an arbitrary set of formulas S and arbitrary formulas α and β, S ∪ {α} ⊢FLe β iff S ⊢FLe (α ∧ 1)^m → β for some m > 0.

Proof The proof goes essentially in the same way as the proof of Theorem 5.1. The right-hand side implies the left-hand side, since S ∪ {α} ⊢FLe α ∧ 1. For the converse direction, we show that for each sequent Γ ⇒ Δ in a given deduction P of ⇒ β from the assumptions S ∪ {α}, the sequent α ∧ 1, . . . , α ∧ 1, Γ ⇒ Δ is deducible from S for some (positive) number of occurrences of α ∧ 1. This can be proved by induction on the length k of the deduction of Γ ⇒ Δ.
Notice that in the proof of Theorem 5.1, for k = 1 we used weakening rule for deriving α, Γ ⇒ Δ from Γ ⇒ Δ in the first two cases. This cannot be done in the present case, as FLe lacks weakening rules. Still, it is possible to derive α ∧ 1, Γ ⇒ Δ from Γ ⇒ Δ: by applying the rule (1w) to Γ ⇒ Δ we get the sequent 1, Γ ⇒ Δ, and then by using (∧2 ⇒), we have α ∧ 1, Γ ⇒ Δ. Also, in the proof of Theorem 5.1 we used contraction rule for the case where k > 0 in order to contract multiple occurrences of the formula α into a single one. But this is not necessary in the present case, as we leave multiple occurrences of the formula α ∧ 1 as they are. Thus we can complete our inductive argument. Applying this result to the end sequent ⇒ β of P, the sequent


α ∧ 1, . . . , α ∧ 1 ⇒ β is deducible for some number of occurrences of α ∧ 1, and hence the formula (α ∧ 1)^m → β is deducible from S for some positive m.

The above theorem is called the local deduction theorem, as the condition of the right-hand side depends on an indeterminate number m. By a slight modification of our proof of Theorem 5.3, we have the following two results as corollaries.

Corollary 5.4 (Deduction theorem for FLec) For an arbitrary set of formulas S and arbitrary formulas α and β, S ∪ {α} ⊢FLec β iff S ⊢FLec (α ∧ 1) → β.

Corollary 5.5 (Local deduction theorem for FLew) For an arbitrary set of formulas S and arbitrary formulas α and β, S ∪ {α} ⊢FLew β iff S ⊢FLew α^m → β for some m.

Exercise 5.3 Confirm the above two results by giving details of their proofs.

As for FL, a weaker form of local deduction theorem, called a parameterized local deduction theorem, still holds. But we will omit the details as it is slightly complicated. (For further details see Galatos and Ono 2006.)

We say that the deducibility problem of a logic L is decidable if there is an effective procedure for deciding whether or not S ⊢L α holds for each finite set of formulas S and each formula α. The provability problem, or the decision problem, of a logic L is regarded as the special case of the deducibility problem where S is empty. We can show the following Theorem 5.6. The first result was mentioned already in Theorem 4.8. The second result is essentially due to the undecidability of linear logic with exponentials shown in Lincoln et al. (1992), which was obtained by reducing the halting problem of Minsky machines to it (see e.g. Troelstra 1992).

Theorem 5.6
1. The provability problem of FLe is decidable.
2. The deducibility problem of FLe is undecidable.

As a consequence of the second result in Theorem 5.6, there exists no algorithm for deciding a number m in Theorem 5.3 from given S, α, β. This can be confirmed as follows. Suppose to the contrary that for given formulas α₁, . . . , α_k, β there exists an algorithm for deciding numbers m₁, . . . , m_k such that {α₁, . . . , α_k} ⊢FLe β holds iff (α₁ ∧ 1)^{m₁}, . . . , (α_k ∧ 1)^{m_k} ⇒ β is provable in FLe. By Theorem 4.8, there exists an effective procedure for deciding whether a given sequent is provable in FLe or not. Thus, by combining these two procedures, we get an effective procedure for deciding whether {α₁, . . . , α_k} ⊢FLe β holds or not. But this contradicts the second result in Theorem 5.6.

Local Deduction Theorems in Modal Logic

An argument similar to our proof of Theorem 5.1 works also for modal logic K. We show that a local deduction theorem holds for K, and hence for any normal extension of K. In the following, for any m ≥ 0 and any formula α, the formula □^m α is defined inductively as follows: □^0 α = α and □^{k+1} α = □(□^k α).
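The two inductive definitions used above, γ^m via fusion and □^m α for the modal case, are easy to render as code. The following is a small illustrative sketch; the function names and the string encoding of formulas are mine, not the book's:

```python
def fusion_power(gamma: str, m: int) -> str:
    """gamma^1 = gamma and gamma^(k+1) = gamma · gamma^k, with · as fusion."""
    if m == 1:
        return gamma
    return f"({gamma} · {fusion_power(gamma, m - 1)})"

def box_power(alpha: str, m: int) -> str:
    """box^0 alpha = alpha and box^(k+1) alpha = box(box^k alpha)."""
    if m == 0:
        return alpha
    return f"□{box_power(alpha, m - 1)}"
```

For instance, `fusion_power("(α ∧ 1)", 3)` unfolds to the threefold fusion appearing on the right-hand side of Theorem 5.3 for m = 3.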


Theorem 5.7 (Local deduction theorem for K) For an arbitrary set of formulas S and arbitrary formulas α and β, S ∪ {α} ⊢K β iff S ⊢K (α ∧ □α ∧ . . . ∧ □^m α) → β for some m ≥ 0.

Proof It is easy to show that the right-hand side implies the left-hand side, since ⇒ □^k α is deducible from {α} for any k by applying the rule (□) repeatedly to ⇒ α. For the converse direction, suppose that a deduction P of ⇒ β from assumptions S ∪ {α} is given. By induction, it can be shown that for each sequent Γ ⇒ Δ in P, the sequent α, □α, . . . , □^m α, Γ ⇒ Δ is deducible from S for some m ≥ 0. The crucial case is when the last rule J of P is (□). Suppose that the upper sequent of J is Σ ⇒ Λ and the lower sequent is □Σ ⇒ □Λ, which is equal to Γ ⇒ Δ. By the hypothesis of induction, there exists a deduction of α, □α, . . . , □^m α, Σ ⇒ Λ from S for some m. Applying the rule (□) to it, we get a deduction of □α, □²α, . . . , □^{m+1} α, □Σ ⇒ □Λ from S, and hence also a deduction of α, □α, □²α, . . . , □^{m+1} α, Γ ⇒ Δ.

As □^k α ⇒ □^{k+1} α is provable in K4 for k > 0 while □^{h+1} α ⇒ □^h α is provable in KT for h ≥ 0, we have the following results as corollaries of Theorem 5.7.

Corollary 5.8 (Deduction theorem for K4 and S4) S ∪ {α} ⊢K4 β iff S ⊢K4 (α ∧ □α) → β. Consequently, S ∪ {α} ⊢S4 β iff S ⊢S4 □α → β.

Corollary 5.9 (Local deduction theorem for KT) S ∪ {α} ⊢KT β iff S ⊢KT □^m α → β for some m ≥ 0.

5.3 Axiomatic Extensions

Until now, we have been concerned mostly with particular logics, beginning with classical logic and intuitionistic logic and then some modal and substructural logics, which are introduced as sequent systems. We have discussed their logical properties, like decidability, Craig's interpolation property and the disjunction property, using proof-theoretic methods, and have compared these logics with one another. Meanwhile, we are naturally attracted to more general questions like 'under what conditions does a logic have a given logical property', or 'does a given logical property hold for most logics or not'. The following are some concrete examples of such questions:

• Is there a logic greater than Int which has the disjunction property?
• Does a logic usually have Craig's interpolation property? Or, is there a logic which does not have Craig's interpolation property?
• Is there any connection between the disjunction property and Craig's interpolation property?
• Is there a logic greater than Int but smaller than Cl? If so, then how many?

Here, we say that a logic L1 is smaller than another logic L2 if any formula which is provable in L1 is also provable in L2, but the converse does not hold. A logic L2 is said to be greater than L1 when L1 is smaller than L2.


To answer these questions, we need to clarify how to define logics in general in a reasonable way, in order to expand our scope of logics. This can be compared with the situation of algebra in the early twentieth century, when various abstract notions like semigroups, groups, rings and fields were introduced as generalizations of concrete algebraic structures. Hereafter we are concerned with logics which are axiomatic extensions of a given logic, and will also give a formal justification of the notion of axiomatic extensions in the following.¹ Then, in Chap. 8 a close connection of axiomatic extensions to varieties of algebras will be clarified.

For our explanation, we will first consider axiomatic extensions of intuitionistic logic Int. Then, by generalizing the idea, we will introduce axiomatic extensions of other logics including FL and K. Let us recall here that a formula ϕ is deducible from a set T of formulas in intuitionistic logic (T ⊢Int ϕ, in symbols) if and only if it is deducible from T in the sequent system LJ. For a given (possibly infinite) set S of formulas, let S* be the set of all substitution instances of formulas in S. We define Int[S] to be the set {ϕ : S* ⊢Int ϕ}, i.e., the set of all formulas which are deducible from S* in LJ. We call Int[S] the axiomatic extension of Int with the set of axioms S.

Exercise 5.4 Suppose that θ is a formula and p is a propositional variable. For any formula ψ, let ψ̃ be the formula obtained from ψ by replacing every occurrence of p in ψ by the formula θ. Show that if a formula α is deducible from S in Int then the formula α̃ is deducible from S̃ in Int, where S̃ = {ψ̃ : ψ ∈ S}, i.e., if S ⊢Int α then S̃ ⊢Int α̃.

Remark 5.2 Suppose that a formula α** is a substitution instance of a formula α*, which, in turn, is a substitution instance of a formula α. Then, it can be shown that α** is a substitution instance of α, as the composition of substitutions is also a substitution.
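The substitution ψ ↦ ψ̃ of Exercise 5.4 can be sketched concretely. Below, formulas are encoded as nested tuples, an encoding of my own choosing rather than the book's, with ("var", p) for propositional variables and a connective symbol at the head of each compound formula:

```python
def substitute(formula, p, theta):
    """Replace every occurrence of the variable p in formula by theta."""
    if formula[0] == "var":
        return theta if formula[1] == p else formula
    # compound formula: apply the substitution to every immediate subformula
    return (formula[0],) + tuple(substitute(sub, p, theta) for sub in formula[1:])
```

Composing two such substitutions yields another substitution, which is the fact used in Remark 5.2.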
Therefore, (S*)* = S* for any set S of formulas. This implies that Int[S*] is equal to Int[S]. We can easily show the following.

Lemma 5.10
1. Int[S] is closed under substitution, i.e., if α ∈ Int[S] then α* ∈ Int[S] for any substitution instance α* of α.
2. Int[S] is closed under deducibility in Int, i.e., if α₁, . . . , α_m ⊢Int β and moreover αᵢ ∈ Int[S] for each i ≤ m, then β ∈ Int[S].

Exercise 5.5 Give a proof of each statement in Lemma 5.10.

From the second statement of Lemma 5.10, it follows that Int[S] always contains every formula which is provable in Int (by considering the case where m = 0). The observations mentioned above lead us to the following abstract definition of logics over Int.

Definition 5.3 (Logics over Int) A set of formulas is a logic over Int if it is closed under both substitution and deducibility in Int.

¹ One may skip the rest of this section on a first reading, and go directly to Sect. 5.4.

The above Lemma 5.10 says that every axiomatic extension of Int is a logic over Int. The converse also holds.

Lemma 5.11 A set of formulas is a logic over Int if and only if it is an axiomatic extension of Int.

Proof It is enough to show that every logic S over Int can be expressed as Int[S]. If a formula β belongs to S then obviously S* ⊢Int β. Hence S ⊆ Int[S]. Next suppose that S* ⊢Int β. Then there exist formulas αᵢ ∈ S such that α₁*, . . . , α_m* ⊢Int β, where each αᵢ* is a substitution instance of αᵢ. Since S is closed under substitution, each αᵢ* also belongs to S. Moreover, as S is closed under deducibility in Int, the formula β must be in S. Thus, Int[S] ⊆ S.

In this way, by a logic over Int we understand a set S of formulas which is closed under substitution and deducibility in Int, which is also equal to the set of formulas which are deducible from S in the sequent system LJ. Thus, in our terminology, a logic means a set of formulas satisfying some conditions. For instance, intuitionistic logic Int is now understood as the set of all formulas which are provable in the sequent system LJ. From now on, we use boldface letters to denote logics, and we also say that a formula ϕ is provable in a logic L when ϕ ∈ L.

Logics over Int are also called superintuitionistic logics. The set of all superintuitionistic logics is partially ordered by set inclusion ⊆. Obviously, Int is the smallest and the set Φ of all formulas is the greatest among them. It is shown later (in Lemma 8.1) that every superintuitionistic logic except Φ is included in classical logic Cl.²

When a logic L is expressed as Int[S] with a set S of formulas, it is said to be axiomatized by S over Int. If S is a finite set, L is said to be finitely axiomatizable (over Int). In particular, when the set S is equal to {α₁, . . . , α_m}, the set Int[S] is also expressed as Int[α₁, . . . , α_m].
Classical logic Cl is a finitely axiomatizable logic over Int, since it can be expressed as Int[p ∨ ¬p] and also as Int[¬¬q → q].³

We can develop the same arguments as above for a given logic L in general by the deducibility relation ⊢L, which is defined by using a sequent system GL for the logic L. Thus, an axiomatic extension of a logic L is a set of formulas of the form L[S] = {ϕ : S* ⊢L ϕ}, or equivalently {ϕ : ⇒ ϕ is deducible from S* in GL}.

Definition 5.4 (Logics over L in general) A set of formulas is a logic over a logic L if it is closed under both substitution and deducibility in L.

In the same way as Lemma 5.11, we have the following.

Lemma 5.12 For any given logic L, a set of formulas is a logic over L if and only if it is an axiomatic extension of L.

² Superintuitionistic logics other than Φ are called intermediate logics, as they are intermediate between intuitionistic logic and classical logic.
³ If we take a Hilbert-style system HJ for Int mentioned in Sect. 1.1, we can see that HJ itself consists of finitely many axiom schemes. Thus, every finitely axiomatizable logic over Int can be formalized in a Hilbert-style system with finitely many axiom schemes.


By taking sequent systems FL, FLe for substructural logics and GK for modal logic, we can introduce logics over FL, FLe and K.⁴ They are conventionally called as follows.

• superintuitionistic logics for logics over Int,
• substructural logics for logics over FL,
• commutative substructural logics for logics over FLe,
• normal modal logics for logics over modal logic K (in the modal language).

By our definition, superintuitionistic logics are logics over Int and substructural logics are logics over FL. What we expect is that a superintuitionistic logic is exactly a substructural logic including Int. To confirm this, we introduce the deducibility relation ⊢L[S] for any logic L[S] over L as follows. For any set of formulas T and any formula α,

T ⊢L[S] α if and only if T ∪ S* ⊢L α.   (5.1)

Lemma 5.13 Suppose that a logic L1 is an axiomatic extension of L0. Then, L is an axiomatic extension of L1 if and only if it is an axiomatic extension of L0 including L1.

Proof Let L1 = L0[T] for a set T. First suppose that L is an axiomatic extension of L1. Then L = L1[S] for a set of formulas S. It is clear that L includes L1. Using (5.1) twice, we can show that L = L0[S ∪ T]. This means that L is an axiomatic extension of L0. Conversely, suppose that L is an axiomatic extension of L0 which includes L1. Then L is expressed as L0[U] for some set U. We will show that L is equal to L1[U]. Clearly, L = L0[U] ⊆ L1[U]. Suppose that ϕ ∈ L1[U]. Then T* ∪ U* ⊢L0 ϕ. For any ψ ∈ T*, ψ ∈ L0[T] = L1 ⊆ L = L0[U] and hence U* ⊢L0 ψ. A fortiori U* ⊢L0 ψ for every ψ ∈ T* ∪ U*. From these facts, U* ⊢L0 ϕ follows by using the second condition of consequence relations in Definition 5.1. This means that ϕ ∈ L. Thus, we have L = L1[U]. (It should be noted here that ⊢L0 is a consequence relation, which can be shown similarly to Exercise 5.2.)

5.4 Framework for Substructural Logics and Modal Logics

The above definitions of logics and axiomatic extensions give us a uniform way of defining various kinds of logics. On the other hand, it may be cumbersome to check the condition on closure under deducibility. Thus we will give here alternative conditions which are more explicit and informative. They can be obtained by using (local) deduction theorems.

Theorem 5.14 A set of formulas L is a superintuitionistic logic, i.e., a logic over Int, if and only if it satisfies the following conditions:

⁴ Here FL, FLe etc. denote not only sequent systems, but also the substructural logics determined by them, to ease a notational burden.


• every formula provable in Int belongs to L,
• L is closed under substitution,
• L is closed under modus ponens, i.e., if both α and α → β belong to L then β belongs to L.

Proof If L is a superintuitionistic logic then obviously it satisfies the first and the second conditions. Moreover, since {α, α → β} ⊢Int β holds, the third condition holds. Conversely, suppose that {α₁, α₂, . . . , α_m} ⊢Int β holds where every αᵢ ∈ L. Then by Corollary 5.2, the formula α₁ → (α₂ → (· · · → (α_m → β) · · · )) belongs to L. Thus, if L is closed under modus ponens, then we have β ∈ L by applying this condition repeatedly.

Similarly, we have the following result for commutative substructural logics by using Theorem 5.3.

Theorem 5.15 A set of formulas L is a commutative substructural logic if and only if it satisfies the following conditions:

• every formula provable in FLe belongs to L,
• it is closed under substitution,
• it is closed under modus ponens,
• if α ∈ L then α ∧ 1 ∈ L.

We note that the formula (α · β) → γ is logically equivalent to β → (α → γ) in FLe. Hence, it is not necessary to assume that L is closed under fusion. Also, the fourth condition in the above theorem can be replaced by the following stronger condition: if α and β belong to L then α ∧ β ∈ L, which is sometimes called closure under adjunction. In fact, this condition implies the above fourth condition since 1 ∈ FLe ⊆ L. Conversely, suppose that L satisfies these four conditions, and moreover that α and β belong to L. Then both α ∧ 1 and 1 → β belong to L. In fact, the latter is obtained by using the fact that β → (1 → β) is provable in FLe. Now, since (1 → β) → ((α ∧ 1) → (α ∧ β)) is provable in FLe and hence belongs to L, it follows that α ∧ β ∈ L from the assumption that L is closed under modus ponens. We note here that to define logics over FLew, it suffices to replace FLe in the first condition by FLew and to omit the fourth condition, since α ∧ 1 is logically equivalent to α in FLew.

In the same way, we have the following, by using the parameterized local deduction theorem for FL, though we omit the details.

Theorem 5.16 A set of formulas L is a substructural logic iff it satisfies the following conditions:

• every formula provable in FL belongs to L,
• it is closed under substitution,
• if α and α\β belong to L, then β belongs to L,
• if α ∈ L then α ∧ 1 ∈ L,
• if α ∈ L and γ is any formula, then both γ\(α · γ) and (γ · α)/γ belong to L.


Finally, by applying a similar argument to modal logics and using the local deduction theorem for K (Theorem 5.7), we have the following result. The fourth condition below is usually called closure under necessitation.

Theorem 5.17 A set of formulas L (of the modal language) is a logic over K iff it satisfies the following conditions:

• every formula provable in K belongs to L,
• it is closed under substitution,
• it is closed under modus ponens,
• if α ∈ L then □α ∈ L.

Exercise 5.6 Give a proof of Theorem 5.17. A general study of logics in these classes and their logical properties will be developed in Part II from an algebraic point of view.
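The closure conditions in Theorems 5.14–5.17 can be operationalized on finite fragments. The following is a small sketch of my own (names and the tuple encoding are not the book's) that closes a finite set of formulas under modus ponens, with implications encoded as nested tuples ("→", a, b); analogous one-step rules would handle adjunction or necessitation:

```python
def mp_closure(formulas):
    """Least fixpoint: repeatedly add b whenever both a and ("→", a, b) are present."""
    fs = set(formulas)
    changed = True
    while changed:
        changed = False
        for f in list(fs):
            if f[0] == "→" and f[1] in fs and f[2] not in fs:
                fs.add(f[2])
                changed = True
    return fs
```

Of course a logic is an infinite set closed under substitution as well, so this only illustrates the modus ponens condition on finite sets of formulas.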

5.5 A View of Substructural Logics

We have given a general framework for substructural logics and modal logics above. Many known classes of nonclassical logics fall within this framework. We will not touch on modal logics here, as there are already several standard textbooks on them. In the following, we list some important subclasses of substructural logics with short historical remarks. For general information, consult Galatos et al. 2007 and also refer to the list of books and survey papers given in Further Reading of Part I.

Lambek Calculus
J. Lambek introduced the calculus in Lambek (1958), as a calculus for categorial grammar, which was introduced originally by K. Ajdukiewicz and Y. Bar-Hillel. The calculus introduced by Lambek is essentially equal to the logic FL, though its language in the original form contains neither disjunction nor conjunction. Moreover, sequents are restricted to those of the form Γ ⇒ α with a non-empty sequence of formulas Γ. This restriction comes from linguistic reasons. In the early 80s, the calculus was rediscovered, and researchers including J. van Benthem and W. Buszkowski have contributed to reviving the subject. Our FL was named after Lambek, as it can be seen as a generalized form of the original Lambek calculus.

Linear Logic
Linear logic was introduced by J.-Y. Girard in his influential paper (Girard 1987). Linear logic moreover has exponentials in the language, which are a kind of modal operators. Because of the presence of exponentials, linear logic is shown to be undecidable, while its exponential-free fragment, i.e., the multiplicative and additive fragment MALL, is decidable, as it is equal to our InFLe. Also the exponential-free


fragment of his intuitionistic linear logic is equal to FLe. Linear logic is sometimes regarded as a logic of resources. Extending this idea to other substructural logics, substructural logics have sometimes been regarded as resource-sensitive logics.

Relevant Logics
Implication in classical logic has sometimes been criticized for not reflecting the relevance between assumptions and consequences; for example, 'falsehood implies any proposition' is admitted. Relevant logics were introduced to understand the logical features of relevant implication, which avoids paradoxes of material implication as above. A feature common to relevant logics is the rejection of the weakening rules. A. Anderson, N. Belnap Jr., R.K. Meyer and M. Dunn made especially important contributions to the development of this study. A. Urquhart showed the undecidability of the relevant propositional logic R in Urquhart (1984). Here, R is the extension of FLec with the law of double negation and the distributive law, i.e., InFLec[(α ∧ (β ∨ γ)) → ((α ∧ β) ∨ (α ∧ γ))]. For general information, see e.g. Dunn (1986).

Logics Without the Contraction Rule
These are axiomatic extensions of FLew. In his book (Wang 1963), H. Wang mentioned that classical predicate logic without contraction rules is decidable. V. Grishin noticed that contraction rules are used essentially in deriving Russell's paradox (Grišin 1982). A cut-free sequent calculus for residuated lattices was introduced by Tamura (1974), whose aim was to show the decidability of the equational theory of residuated lattices. (A close connection between logics and equational theories will be discussed in Part II.) Ono and Komori (1985) developed the syntactic and semantical study of logics without contraction rules in a systematic way. For further study in this direction, see e.g. Ono (1993, 2010a).

Fuzzy Logics
In 1965, L. Zadeh introduced the notion of fuzzy sets (Zadeh 1965). He developed fuzzy set theory, which is a theory of sets with unsharp boundaries.
The theory has been quite influential in engineering, especially in control theory, and a new research field, called soft computing, has emerged. At the same time, the theory has been controversial, since it seemed that its logical and philosophical bases were not well founded. Then P. Hájek developed a logical approach to fuzzy set theory, which is called 'fuzzy logic in the narrow sense' (a term due to Zadeh). What Hájek did was to introduce an axiom system for basic fuzzy logic BL, which is in fact an extension of FLew, and to develop fuzzy logics as its extensions (Hájek 1998). They will be briefly discussed in Sect. 9.3.

Łukasiewicz's Many-Valued Logics
J. Łukasiewicz introduced many-valued semantics by extending the two-valued semantics for classical logic (see e.g. Łukasiewicz (1920) for three-valued logic). Thus, these many-valued logics were introduced and studied semantically. On the other hand, from a syntactic point of view, they can be regarded as extensions of FLew, as


the contraction axiom does not hold in general. In fact, the class of Łukasiewicz's many-valued logics forms an important subclass of fuzzy logics. They will be discussed in Sect. 6.5. They are also considered a subclass of logics without contraction rules. For further reading, see Cignoli et al. (2000).

Johansson's Minimal Logic
This is a logic without the right-weakening rule, introduced by Johansson (1936). It rejects the principle that every formula follows from a contradiction. It is equal to the logic FLec with the left-weakening rule.

Superintuitionistic Logics
As we have mentioned already, superintuitionistic logics are logics over Int. The existence of infinitely many superintuitionistic logics was already noticed in Gödel (1932), while their systematic study started in the middle of the 1950s (see Umezawa (1955) for instance). Since then, comprehensive studies have been developed (see e.g. Zakharyaschev et al. 2001). It is known that the set of all superintuitionistic logics has close connections with the set of all normal modal logics over S4. Superintuitionistic logics will be discussed from an algebraic point of view in Chap. 8.

Part II

Algebra in Logic

Algebraic methods are important basic tools in the study of nonclassical propositional logics. While a formal system for a given logic L determines the set of all formulas which are provable in the system, a class K of algebras for L determines the set of all formulas which are valid in every algebra in K. We say that a logic L is algebraically complete (with respect to a class K of algebras) whenever the set of all formulas which are provable in the system for L is exactly equal to the set of all formulas which are valid (in every algebra in K). In such a case, for any formula ϕ which is not provable in L, there always exists an algebra (in K) in which ϕ is falsified, and hence we can in principle find evidence of the non-provability of ϕ in L. As a matter of fact, every logic under consideration in our book is shown to be algebraically complete (Theorem 8.7). Moreover, for some logics including intuitionistic logic, for each non-provable formula ϕ in the logic it is always possible to find a finite algebra in which ϕ is not true.

A notable aspect of the algebraic approach is seen in the fact that close connections can often be established between logical properties and algebraic properties. For instance, it may happen that for a given logical property P there exists an algebraic condition Q for a class of algebras such that, for any logic L, the logical property P holds in L if and only if the algebraic condition Q holds for the class of algebras corresponding to L; and conversely, for a given algebraic condition Q there may exist a logical property P for which such a connection holds. It is quite useful if this is the case, as algebraic results will then resolve logical problems and vice versa. As we have seen in Part I, proof-theoretic methods can yield sharp and informative results on logical properties, but often they are limited to logics having cut-free sequent systems.
In contrast, algebraic methods can be applied to extensive classes of logics with the help of results from algebraic logic and universal algebra, and hence provide us with quite general results. Therefore, we can say that algebraic methods and proof-theoretic methods are complementary to each other.

In Part II, beginning by explaining basic concepts and results on algebra in Chap. 6, we will introduce fundamental results in algebraic logic in Chap. 7. Then, Chap. 8 will be devoted to an introduction to the universal algebraic approach to logic. To make it intelligible and not too abstract, we take superintuitionistic logics and varieties of Heyting algebras as examples and focus mainly on them. These expositions will


clarify what algebraic approaches are, how algebraic notions and properties are linked to logical ones, and in which ways they can be applied to solving problems in logic. In Chap. 9, we briefly mention recent developments in the study of substructural logics, in which algebraic methods have been remarkably successful. Also, we discuss the algebraic approach to modal logics and its connection to relational semantics in Chap. 10.

Chapter 6

From Algebra to Logic

Syntactic or symbolic approaches to logic began in the middle of the nineteenth century. G. Boole attempted to express logical inference as algebraic calculation in his book (Boole 1854). It took several decades before Hilbert-style formal systems were introduced. Though it was not in a complete form, Boole developed an axiomatic foundation for the algebra of logic by introducing some algebraic equations (as logical axioms) and rules for deriving algebraic equations from other equations (as rules of inference). It should be remarked that around that time, abstract algebraic structures such as groups, rings and fields had been introduced and the study of these structures had begun. Until that time, mathematicians had paid attention mostly to concrete algebraic structures like the set of integers, the set of rational numbers, the set of real numbers and so on. The Boolean algebras introduced in the present chapter are abstract algebraic structures, introduced essentially by Boole. Obviously, the definition came from the algebraic description of the behavior of the logical connectives of classical logic.

After giving definitions of some basic algebraic structures, including lattices and Boolean algebras, we will introduce three algebraic notions in Sect. 6.2, which are primary in our arguments throughout Part II. In the last two sections of this chapter, we give first examples of a link between logics and algebras, which is the main theme of Part II.

6.1 Lattices and Boolean Algebras

We will first introduce some basic notions of algebraic structures and show their properties in the following.¹

¹ Davey and Priestley 2002 will be a useful guide to the topics in this chapter. Anyone familiar with the book can skip the present chapter except Sect. 6.5.

© Springer Nature Singapore Pte Ltd. 2019 H. Ono, Proof Theory and Algebra in Logic, Short Textbooks in Logic, https://doi.org/10.1007/978-981-13-7997-0_6


Definition 6.1 (Partial orders) A partial order ≤ on a set A is a binary relation on A which satisfies the following: for all x, y, z ∈ A,
1. x ≤ x (reflexivity),
2. if x ≤ y and y ≤ z then x ≤ z (transitivity),
3. if x ≤ y and y ≤ x then x = y (antisymmetry).

A partially ordered set ⟨A, ≤⟩ is a pair of a set A and a partial order ≤ on it. A partial order ≤ on a set A is a total order (or a linear order) if either x ≤ y or y ≤ x always holds for all x, y ∈ A. In this case, ⟨A, ≤⟩ is said to be a totally ordered set, or a chain. The relation x < y is defined by the condition that x ≤ y but not y ≤ x, and is called a strict order (induced by the partial order ≤). Clearly, x ≤ y holds if and only if x < y or x = y holds. Each of the following three pictures (Hasse diagrams) represents a partially ordered set. Only the left-most one is a chain.
[Three Hasse diagrams omitted.]
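The axioms of Definition 6.1 can be checked mechanically. The following brute-force sketch (my own, not the book's) verifies the three axioms for the divisibility relation of Example 6.1 below on a finite carrier, and also that divisibility is not a total order:

```python
A = range(1, 13)

def divides(x, y):
    return y % x == 0

# reflexivity: x | x
assert all(divides(x, x) for x in A)
# transitivity: x | y and y | z imply x | z
assert all(divides(x, z)
           for x in A for y in A for z in A
           if divides(x, y) and divides(y, z))
# antisymmetry: x | y and y | x imply x = y
assert all(x == y for x in A for y in A if divides(x, y) and divides(y, x))
# not a total order: 5 and 12 are incomparable
assert not divides(5, 12) and not divides(12, 5)
```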
Example 6.1
1. Let N be the set of all natural numbers, i.e., the set of all positive integers, and let ≤ be the usual order among natural numbers. Clearly, ⟨N, ≤⟩ is a chain.
2. On the other hand, define a binary relation | on N by the condition that x|y iff y is divisible by x. For instance, 3|12 but not 5|12. This divisibility relation | is a partial order on N, but is not a total order. For example, neither 5|12 nor 12|5 holds.²

Lattices

Definition 6.2 (Lattices) A partially ordered set ⟨A, ≤⟩ is a lattice iff for all x, y ∈ A there exist x ∨ y (join) and x ∧ y (meet) such that:
1. x ≤ x ∨ y and y ≤ x ∨ y,
2. if x ≤ z and y ≤ z then x ∨ y ≤ z for all z ∈ A,
3. x ∧ y ≤ x and x ∧ y ≤ y,
4. if z ≤ x and z ≤ y then z ≤ x ∧ y for all z ∈ A.

The first condition in the above definition says that x ∨ y is an upper bound of x and y, i.e., an element which is greater than or equal to both x and y. The second condition says that x ∨ y is smaller than or equal to any upper bound of x and y. Therefore, the join x ∨ y of x and y is the least upper bound of x and y. Similarly, the third condition says that the meet x ∧ y of x and y is a lower bound of x and y, and the fourth condition together with the third says that the meet x ∧ y of x and y is the greatest lower bound of x and y. Obviously, the following equivalences always hold in every lattice: x ∨ y = y iff x ≤ y iff x ∧ y = x. From this it follows that the partial order ≤ of a given lattice is determined uniquely either by ∨ or by ∧. Hence, we can safely say that the triple ⟨A, ∨, ∧⟩ is a lattice, instead of ⟨A, ≤⟩. A given lattice is bounded if there exist both a greatest and a least element in it. Often, they are denoted as ⊤ and ⊥, respectively. Among the following three partially ordered sets, the left and the middle ones are lattices while the right one is not.

² We sometimes use the word 'iff' as an abbreviation of 'if and only if' in the following, as we have already stated in Part I.
[Three Hasse diagrams omitted.]
Example 6.2
1. Every totally ordered set forms a lattice by defining x ∨ y = max{x, y} and x ∧ y = min{x, y}. The totally ordered set ⟨N, ≤⟩ has the least element 1 but does not have any greatest element. Thus it is not bounded.
2. The partially ordered set ⟨N, |⟩ of the second example in Example 6.1 is a lattice. In fact, for given natural numbers x and y, the join x ∨ y is the least common multiple and the meet x ∧ y is the greatest common divisor of x and y, respectively. For example, both 6 ∨ 14 = 42 and 6 ∧ 14 = 2 hold in this partially ordered set.

Exercise 6.1 Show that in any lattice, x ≤ y implies both x ∨ z ≤ y ∨ z and x ∧ z ≤ y ∧ z for each z.

We will give basic equalities for lattices.

Lemma 6.1 The following equalities hold in any lattice. For all x, y, z,

(1a) x ∨ x = x,                    (1b) x ∧ x = x,
(2a) x ∨ y = y ∨ x,                (2b) x ∧ y = y ∧ x,
(3a) x ∨ (y ∨ z) = (x ∨ y) ∨ z,    (3b) x ∧ (y ∧ z) = (x ∧ y) ∧ z,
(4a) x ∨ (x ∧ y) = x,              (4b) x ∧ (x ∨ y) = x.

Exercise 6.2 Give a proof of both (3a) and (4a) of Lemma 6.1.

Remark 6.3 (An alternative definition of lattices) According to Definition 6.2, a lattice is a partially ordered set in which both join and meet exist for every pair of its members. Lemma 6.1 then says that the eight equalities given there hold in every lattice. Now, an alternative way of defining a lattice is to take an algebra of the form ⟨A, ∨, ∧⟩ with two binary operations ∨ and ∧ in which all of these eight equalities hold. To show this equivalence, it is necessary to introduce a partial order on any such algebra so that the operations ∨ and ∧ express join and meet, respectively, with respect to this partial order. As a matter of fact, we can show the following three statements for any such algebra ⟨A, ∨, ∧⟩ in which the eight equalities of Lemma 6.1 hold.


(1) x ∨ y = y iff x ∧ y = x for all x, y.
(2) Define a binary relation x ≤ y on A by x ∧ y = x for all x, y. Then ≤ is a partial order. (We call it the partial order induced by the lattice operations ∨ and ∧.)
(3) With respect to this ≤, x ∨ y and x ∧ y are the join and the meet of x and y, respectively, for all x, y.

Exercise 6.3 Show the above three statements for any algebra ⟨A, ∨, ∧⟩ in which all of the eight equalities of Lemma 6.1 hold.

Definition 6.3 (Distributive lattices) A lattice ⟨A, ∨, ∧⟩ is distributive if the equality x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z) (distributive law) holds for all x, y, z ∈ A.

Remark 6.4 The following three statements hold.
1. Every totally ordered lattice is distributive.
2. The inequality x ∧ (y ∨ z) ≥ (x ∧ y) ∨ (x ∧ z) holds for all x, y, z in any lattice.
3. The distributive law in Definition 6.3 can be replaced by the following dual form: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z) for all x, y, z, and vice versa.

We will outline a proof of the third statement. First we show that the dual form follows from distributivity. In the following, we use the equalities for lattices in Lemma 6.1. By the distributivity of Definition 6.3, (x ∨ y) ∧ (x ∨ z) = ((x ∨ y) ∧ x) ∨ ((x ∨ y) ∧ z). Now, (x ∨ y) ∧ x = x by (2b) and (4b), and (x ∨ y) ∧ z = (x ∧ z) ∨ (y ∧ z) by (2b) and distributivity. Hence the right-hand side of the above equality is equal to x ∨ ((x ∧ z) ∨ (y ∧ z)), which is equal to (x ∨ (x ∧ z)) ∨ (y ∧ z) by (3a), and hence to x ∨ (y ∧ z) by (4a). Thus we get the dual form. It remains to show the converse, that is, that distributivity follows from this dual form. But this is almost obvious once we notice that the equalities in Lemma 6.1 still hold after interchanging ∧ and ∨. This means that we get the required proof of the converse simply by interchanging ∧ and ∨ everywhere in the above proof.
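Readers who wish to experiment can machine-check the above equalities on a concrete lattice. The following Python sketch (ours, not part of the original text; the names `join` and `meet` are illustrative) verifies all eight equalities of Lemma 6.1 and the distributive law on an initial segment of the divisibility lattice ⟨N, |⟩ of Example 6.2, where join is the least common multiple and meet is the greatest common divisor:

```python
from math import gcd
from itertools import product

def join(x, y):          # least common multiple: join in the divisibility order
    return x * y // gcd(x, y)

def meet(x, y):          # greatest common divisor: meet in the divisibility order
    return gcd(x, y)

sample = range(1, 31)
for x, y, z in product(sample, repeat=3):
    # (1a)-(4b): idempotency, commutativity, associativity, absorption
    assert join(x, x) == x and meet(x, x) == x
    assert join(x, y) == join(y, x) and meet(x, y) == meet(y, x)
    assert join(x, join(y, z)) == join(join(x, y), z)
    assert meet(x, meet(y, z)) == meet(meet(x, y), z)
    assert join(x, meet(x, y)) == x and meet(x, join(x, y)) == x
    # the distributive law of Definition 6.3
    assert meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
print("all lattice equalities verified on the sample")
```

Of course, a finite check over a sample is no substitute for the proofs asked for in Exercises 6.2 and 6.3; it only illustrates the equalities at work.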
Exercise 6.4 Let J be a partially ordered set {0, a, b, c, 1} with the strict order < such that 0 < a < c, 0 < b < c and c < 1, while a and b are mutually incomparable. (See Fig. 6.1 (1).) Then we can show that ⟨J, ≤⟩ forms a lattice. Answer the following questions.
1. Calculate the value of c ∧ (a ∨ b).
2. Show that the lattice ⟨J, ≤⟩ is distributive. ([Hint] By Remark 6.4, it is enough to show that x ∧ (y ∨ z) ≤ (x ∧ y) ∨ (x ∧ z) for all x, y, z ∈ J. Note that this inequality always holds when x is either 0 or 1, and also when either y ≤ z or z ≤ y.)

Exercise 6.5 Let D be a partially ordered set {0, d, e, f, 1} with the strict order < such that 0 < d < e < 1 and 0 < f < 1, while both d and e are incomparable with f (see Fig. 6.1 (2)). Show that the lattice ⟨D, ≤⟩ is not distributive.

Definition 6.4 (Complete lattices) A lattice ⟨A, ∨, ∧⟩ is complete if for any (possibly empty) subset S of A, both the least upper bound of S (denoted by ⋁S) and the greatest lower bound of S (denoted by ⋀S) exist. Here, an element a of A is the least upper bound of a set S when


Fig. 6.1 Distributive lattice J and non-distributive lattice D. [Hasse diagrams omitted: lattice J has 0 < a, b < c < 1 with a and b incomparable; lattice D has 0 < d < e < 1 and 0 < f < 1 with f incomparable to d and e.]
• x ≤ a for all x ∈ S (i.e., a is an upper bound of S),
• for every c ∈ A, if x ≤ c for all x ∈ S then a ≤ c (i.e., a ≤ c holds for any upper bound c of S).

The greatest lower bound can be defined dually. Thus, it is a lower bound of S which is greater than or equal to any other lower bound of S. By the definition, ⋁∅ is the least element ⊥ of A and ⋀∅ is the greatest element ⊤ of A. Thus, every complete lattice must be bounded.

Example 6.5 Let C be an arbitrary set. The power set ℘(C) of C, i.e., the set of all subsets of C, is partially ordered by the set inclusion relation ⊆. Its greatest element is C and its least element is the empty set ∅. The partially ordered set ⟨℘(C), ⊆⟩ forms a lattice, in fact a complete lattice, in which the join (and the meet) of sets X, Y ∈ ℘(C) is given by the union X ∪ Y (and the intersection X ∩ Y, respectively).

Exercise 6.6 Show that the lattice ⟨℘(C), ∪, ∩⟩ in Example 6.5 is distributive for any C.

Example 6.6 Let Q and R be the set of all rational numbers and the set of all real numbers, respectively. Define the set S = {r ∈ Q : r² ≤ 2}, which is a subset of Q and hence of R. The least upper bound of S exists in R, namely √2, but does not exist in Q. Thus ⟨Q, ≤⟩ is not a complete lattice, while ⟨R, ≤⟩ is known to be complete.

Boolean Algebras

Let us recall the truth tables for the logical connectives ∨, ∧ and → of classical logic in the Introduction, where 0 and 1 represent falsehood and truth, respectively.

   a ∨ b          a ∧ b          a → b
a\b | 1  0     a\b | 1  0     a\b | 1  0
 1  | 1  1      1  | 1  0      1  | 1  0
 0  | 1  0      0  | 0  0      0  | 1  1

Now define a partial order ≤ on the set {0, 1} in the natural way, i.e., 0 < 1. Then, the left and the middle truth tables above say that disjunction a ∨ b and conjunction a ∧ b are join and meet, respectively (with respect to the partial


order ≤) of a and b, where a, b ∈ {0, 1}. We note that joins and meets in this case can also be expressed as a ∨ b = max{a, b} and a ∧ b = min{a, b}. Using the terminology of algebra, we can say that ⟨{0, 1}, ∨, ∧, →, 0⟩ is an algebra in which ⟨{0, 1}, ∨, ∧⟩ is a lattice with the least element 0, with an additional operation → satisfying a → b = 1 if a ≤ b, and a → b = 0 otherwise, i.e., when a = 1 and b = 0. Using the least element 0, we define ¬a = a → 0. Clearly, ¬a = 1 if and only if a = 0. It is easy to see that ¬¬a = a for all a ∈ {0, 1}. By generalizing this, we introduce algebras for classical logic as follows.

Definition 6.5 (Boolean algebras) An algebra A = ⟨A, ∨, ∧, →, 0⟩ is a Boolean algebra iff
1. ⟨A, ∨, ∧⟩ is a lattice with the least element 0,
2. the law of residuation holds, i.e., a ∧ b ≤ c iff a ≤ b → c, for all a, b, c ∈ A,
3. the law of double negation holds, i.e., ¬¬a = a for all a ∈ A, where ¬a is defined as a → 0.

The algebra ⟨{0, 1}, ∨, ∧, →, 0⟩ determined by the above truth tables is a Boolean algebra. To confirm this, it is enough to show that the law of residuation holds in it. In fact, when a = 1 we can see that 1 ≤ b → c iff b ≤ c iff 1 ∧ b ≤ c, and when a = 0 both 0 ≤ b → c and 0 ∧ b ≤ c always hold. This Boolean algebra, consisting of the two elements 0 and 1, is called the two-valued (or 2-valued) Boolean algebra and is denoted by 2.

The law of residuation says that for each b, c the set U = {x : x ∧ b ≤ c} always has a greatest element, which is equal to b → c. In fact, by the law of residuation, if x ∧ b ≤ c then x ≤ b → c for all x. This means that b → c is an upper bound of U. On the other hand, since b → c ≤ b → c, we have (b → c) ∧ b ≤ c by the converse direction of the law of residuation. This means that b → c ∈ U. Therefore b → c = max{x ∈ A : x ∧ b ≤ c}. Now consider, in particular, the case b = c. As x ∧ b ≤ b always holds, the element b → b must be the greatest element of A, which is denoted by 1.
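The law of residuation in 2 can also be checked exhaustively. The following Python sketch (ours; `impl` and `neg` are illustrative names) runs through all eight triples of truth values:

```python
def impl(a, b):                  # a -> b in the two-valued algebra 2
    return 1 if a <= b else 0

def neg(a):                      # ¬a defined as a -> 0
    return impl(a, 0)

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            # law of residuation: a ∧ b ≤ c  iff  a ≤ b → c
            assert (min(a, b) <= c) == (a <= impl(b, c))

for a in (0, 1):
    assert neg(neg(a)) == a      # law of double negation
    assert impl(a, a) == 1       # b → b is the greatest element 1
```

Since both conditions of Definition 6.5 pass, this reproduces the verification in the text that 2 is a Boolean algebra.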
This implies that every Boolean algebra contains both the least element 0 and the greatest element 1. A pathological example is a Boolean algebra in which 0 = 1 holds. It consists of the single element 0 and is called the degenerate Boolean algebra. In the following, by a Boolean algebra we mean a non-degenerate Boolean algebra, unless otherwise noted. The following figure shows the 8-valued Boolean algebra.

[Hasse diagram of the 8-valued Boolean algebra omitted: its elements are 0, a, b, ¬a, ¬b, a ∨ b, ¬a ∨ ¬b and 1.]


We can show the following Lemma 6.2 for Boolean algebras without using the law of double negation. Thus, the result in fact holds for Heyting algebras, the algebras obtained from Boolean algebras by dropping the law of double negation. These algebras will be discussed in detail in Chap. 7.

Lemma 6.2 The following statements hold for all x, y, z in any Boolean algebra.
1. x ∧ (x → y) ≤ y always holds, and hence x ∧ ¬x = 0 in particular.
2. x ≤ y implies both z → x ≤ z → y and y → z ≤ x → z. Thus, x ≤ y implies ¬y ≤ ¬x.
3. The distributive law x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z) holds.

Proof We give here only a proof of the third statement. As shown in Remark 6.4, x ∧ (y ∨ z) is always an upper bound of x ∧ y and x ∧ z in any lattice. So it is enough to show that x ∧ (y ∨ z) is the least upper bound of them. Suppose that u is any upper bound of x ∧ y and x ∧ z. Then x ∧ y ≤ u and x ∧ z ≤ u. By the law of residuation, both y ≤ x → u and z ≤ x → u hold. Hence y ∨ z ≤ x → u. Using the law of residuation once again, but in the converse direction, we have x ∧ (y ∨ z) ≤ u. Thus we have the required result. ∎

Lemma 6.3 The following statements hold in every Boolean algebra.
1. x ∨ y = ¬(¬x ∧ ¬y) for all x, y.
2. x → y = ¬x ∨ y for all x, y. In particular, x ∨ ¬x = 1 for all x.

Proof We show the second statement. It is easy to see that x → y is an upper bound of ¬x and y. Suppose that w is any upper bound of ¬x and y. From ¬x ≤ w, it follows that ¬w ≤ ¬¬x = x, and hence the inequalities x → y ≤ ¬w → y ≤ ¬w → w hold by the second statement of Lemma 6.2. On the other hand, as ¬w ∧ (¬w → w) ≤ ¬w ∧ w = 0, we have ¬w → w ≤ ¬w → 0 = ¬¬w = w. Combining these two inequalities, we have x → y ≤ w. This means that x → y is the least upper bound of ¬x and y, and hence x → y = ¬x ∨ y. ∎

We notice that the equality x ∨ ¬x = 1 in Lemma 6.3 expresses an algebraic counterpart of the law of excluded middle (see Sect. 1.1).
Lemma 6.3 above says that ∨ and → can both be defined from ∧ and ¬, and so are redundant in any Boolean algebra. Nevertheless, to make the comparison with other algebras introduced in later chapters clearer, we keep our definition of Boolean algebras as a quintuple of the form ⟨A, ∨, ∧, →, 0⟩.

Exercise 6.7 Show that the following hold in every Boolean algebra.
1. (x → y) ∨ (y → x) = 1 for all x, y.
2. (x → y) → x ≤ x for all x, y.
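The identities of Lemma 6.3 and Exercise 6.7 can be illustrated in a small powerset Boolean algebra, where ≤ is inclusion, ¬X is the complement −X, and X → Y = −X ∪ Y (this algebra is treated in Sect. 6.3). The following Python sketch (ours; names are illustrative) checks them over all subsets of a three-element set:

```python
from itertools import chain, combinations

C = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(C), k) for k in range(len(C) + 1))]

def neg(x):
    return C - x              # ¬X is the complement −X

def impl(x, y):
    return neg(x) | y         # X → Y = −X ∪ Y

for x in subsets:
    assert x | neg(x) == C                       # x ∨ ¬x = 1 (excluded middle)
    for y in subsets:
        assert x | y == neg(neg(x) & neg(y))     # Lemma 6.3.1
        assert impl(x, y) | impl(y, x) == C      # Exercise 6.7.1
        assert impl(impl(x, y), x) <= x          # Exercise 6.7.2 (≤ is ⊆ here)
```

As before, such a check in one finite algebra does not replace the equational proofs asked for in Exercise 6.7.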

6.2 Subalgebras, Homomorphisms and Direct Products

For further development of our algebraic study, we introduce three basic notions concerning algebras. As these notions can be defined for various kinds of algebras, we define them in a general setting. A language L of algebras is a set of operation symbols,


each of which has a fixed arity, a non-negative integer. Operation symbols of arity 0 are called constant symbols. In the following, we assume that L is a finite ordered set ⟨f_1, f_2, . . . , f_m⟩, following the convention that n_1 ≥ n_2 ≥ · · · ≥ n_m, where n_i is the arity of f_i. An algebra A of type L is a structure of the form ⟨A, f_1^A, f_2^A, . . . , f_m^A⟩, where A is a non-empty set, called the universe (or underlying set) of A, and f_i^A is an n_i-ary operation on A for each 1 ≤ i ≤ m. Each f_i^A is understood as an interpretation of the operation symbol f_i of L in the algebra A. For brevity's sake, we sometimes omit the superscript A of f_i^A when no confusion may occur. For instance, in the previous section we took the language ⟨∨, ∧, →, 0⟩ to describe the class of all Boolean algebras, whose operation symbols have arities 2, 2, 2 and 0, respectively. In the following three definitions, we assume that the algebras A and B are of the same type.

Definition 6.6 (Subalgebras) An algebra B is a subalgebra of A if B is a subset of A and f_i^B is the restriction of f_i^A to B for each i. That is, f_i^B(b_1, . . . , b_{n_i}) = f_i^A(b_1, . . . , b_{n_i}) for all b_1, . . . , b_{n_i} ∈ B and all i.

Example 6.7 Suppose that A = ⟨A, ∨, ∧⟩ is a lattice and B is a non-empty subset of A. Then B = ⟨B, ∨′, ∧′⟩ is a subalgebra of A (conventionally called a sublattice of the lattice A) if and only if b_1 ∨′ b_2 = b_1 ∨ b_2 and b_1 ∧′ b_2 = b_1 ∧ b_2 for all b_1, b_2 ∈ B, which, in turn, is equivalent to the condition that both b_1 ∨ b_2 and b_1 ∧ b_2 belong to B for all b_1, b_2 ∈ B. Let us consider the lattice ⟨J, ≤⟩ of Exercise 6.4. The subset {0, a, b, c} of J forms a sublattice. On the other hand, ⟨{0, a, b, 1}, ≤⟩ is not a sublattice of ⟨J, ≤⟩, though it is a lattice.

[Hasse diagrams omitted: the sublattice {0, a, b, c} of J, and the subset {0, a, b, 1}, which forms a lattice but not a sublattice of J.]

Definition 6.7 (Homomorphisms and homomorphic images) A mapping h : A −→ B is a homomorphism from A to B if h(f_i^A(a_1, . . . , a_{n_i})) = f_i^B(h(a_1), . . . , h(a_{n_i})) for all a_1, . . . , a_{n_i} ∈ A and all i. When h is injective (or one-to-one), h is called an embedding (of A into B). In this case, A is said to be embedded into B by h. If h is surjective (or onto), i.e., each element b ∈ B is expressed as h(a) for some a ∈ A, then B is called a homomorphic image of A (by the homomorphism h). When h is bijective (i.e., one-to-one and onto), h is called an isomorphism and A is said to be isomorphic to B (by the isomorphism h).

Remark 6.8 Suppose that h is a homomorphism from a lattice A to a lattice B. Then h is order-preserving, i.e., for all a, a′ ∈ A, a ≤_A a′ implies h(a) ≤_B h(a′), where ≤_A (≤_B) is the order of A (B, respectively) induced by its lattice operations. For, if


a ≤_A a′ holds, then a ∧_A a′ = a, and hence h(a) ∧_B h(a′) = h(a ∧_A a′) = h(a). This means that h(a) ≤_B h(a′).

Exercise 6.8 Suppose that h is a homomorphism from a lattice A to a lattice B and A′ is a subalgebra of A. Define the image h(A′) of A′ under h to be {h(d) : d ∈ A′} (with the lattice operations of B restricted to this set). Show that both h(d) ∨_B h(d′) and h(d) ∧_B h(d′) belong to h(A′) for all d, d′ ∈ A′, and hence that h(A′) is a subalgebra of B.

Definition 6.8 (Direct products) For given algebras A and B, define the direct product A × B to be the algebra ⟨A × B, f_1^{A×B}, . . . , f_m^{A×B}⟩, where A × B = {(a, b) : a ∈ A and b ∈ B} and, for each i, f_i^{A×B}((a_1, b_1), . . . , (a_{n_i}, b_{n_i})) = (f_i^A(a_1, . . . , a_{n_i}), f_i^B(b_1, . . . , b_{n_i})), that is, f_i^{A×B} is defined component-wise. In general, the direct product of algebras A_j (j ∈ J) (where J may be infinite) is defined to be the algebra A (written ∏_{j∈J} A_j) whose universe A is the direct product ∏_{j∈J} A_j of the sets A_j (j ∈ J), and where the j-th component f_i^A(a_1, . . . , a_{n_i})(j) of f_i^A(a_1, . . . , a_{n_i}) is given by f_i^{A_j}(a_1(j), . . . , a_{n_i}(j)) for each j ∈ J and each a_k ∈ A. Here, a_k(j) is the element of A_j which is the j-th component of a_k.

Example 6.9
1. Let N = ⟨N, +, ×⟩ and let E be the set of all even natural numbers. Then E = ⟨E, +, ×⟩ is a subalgebra of N. On the other hand, the set O of all odd natural numbers with + and × is not a subalgebra of N, for the sum of two odd numbers is never odd.
2. Let M = ⟨M, +′, ×′⟩, where M = {0, 1, 2, 3, 4} and +′ and ×′ are addition and multiplication modulo 5. Thus, for example, 2 +′ 4 = 1 and 2 ×′ 4 = 3. Define a mapping h from the above N to M by h(n) = m, where m is the remainder when n is divided by 5. Then h is a homomorphism. For instance, h(12 × 4) = h(48) = h(9 × 5 + 3) ≡ 3, and h(12) ×′ h(4) = h(2 × 5 + 2) ×′ h(4) ≡ 2 × 4 ≡ 3. Here, ≡ denotes congruence modulo 5.
3.
The direct product N × M of N and M above is the algebra ⟨N × M, +*, ×*⟩, where N × M is the set of all ordered pairs (m, n) with m ∈ N and n ∈ M, and +* and ×* are defined component-wise, i.e., (m, n) ◦* (k, j) = (m ◦ k, n ◦′ j) for ◦ ∈ {+, ×}, where ◦′ is the corresponding operation of M.

Exercise 6.9 Let f and g be lattice homomorphisms from A to C and from B to D, respectively. Define a mapping h from the direct product A × B to C × D by h(a, b) = (f(a), g(b)) for all a ∈ A and b ∈ B. Show that h is a lattice homomorphism.

In the case of the class of all Boolean algebras, B is a subalgebra of a Boolean algebra A if and only if B is a subset of A which contains 0^A and is closed under ∨, ∧ and →. It is obvious that every non-degenerate Boolean algebra has a subalgebra which is isomorphic to the two-valued Boolean algebra 2. When both A and B are Boolean algebras, a mapping h : A −→ B is a homomorphism from A to B if and only if h(a *^A a′) = h(a) *^B h(a′) for every * ∈ {∨, ∧, →} and every a, a′ ∈ A, and moreover h(0^A) = 0^B.
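The homomorphism of Example 6.9.2 is easy to test exhaustively on an initial segment of N. The following Python sketch (ours; `h`, `add5` and `mul5` are illustrative names) checks that taking remainders modulo 5 commutes with both operations:

```python
def h(n):
    """The remainder of n on division by 5, as in Example 6.9.2."""
    return n % 5

def add5(a, b):               # addition in M = {0, 1, 2, 3, 4}, modulo 5
    return (a + b) % 5

def mul5(a, b):               # multiplication in M, modulo 5
    return (a * b) % 5

# h preserves both operations, so it is a homomorphism from N to M
for m in range(60):
    for n in range(60):
        assert h(m + n) == add5(h(m), h(n))
        assert h(m * n) == mul5(h(m), h(n))

# the instance computed in the text: h(12 × 4) = 3 = h(12) ×' h(4)
assert h(12 * 4) == 3 and mul5(h(12), h(4)) == 3
```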


For any Boolean algebra A, every subalgebra of A is also a Boolean algebra. Also, if B is a homomorphic image of A by a homomorphism h, then B is also a Boolean algebra. To show the latter, we check that B satisfies the three conditions for Boolean algebras. It is easily shown that B satisfies all the conditions for lattices. For the third condition, take any element a′ ∈ B. Then a′ is of the form h(a) for some element a ∈ A. Hence ¬¬a′ = ¬¬h(a) = h(¬¬a) = h(a) = a′, as A is a Boolean algebra. As for the law of residuation, for given a′, b′, c′ ∈ B, take a, b, c ∈ A such that a′ = h(a), b′ = h(b) and c′ = h(c), which is possible as B is a homomorphic image of A. Suppose first that a′ ∧ b′ ≤ c′, which is equivalent to the condition h(a ∧ b) ≤ h(c). Since b ≤ a → (a ∧ b) holds in A, the inequality h(b) ≤ h(a) → h(a ∧ b) ≤ h(a) → h(c) holds, as h is order-preserving. Thus b′ ≤ a′ → c′. Conversely, suppose that b′ ≤ a′ → c′. Then h(b) ≤ h(a → c), and hence h(a) ∧ h(b) ≤ h(a) ∧ h(a → c) ≤ h(c), as a ∧ (a → c) ≤ c holds in A. Thus a′ ∧ b′ ≤ c′ holds.

If B is a direct product ∏_{j∈J} A_j of Boolean algebras A_j (j ∈ J), then B is also a Boolean algebra, for each operation of B is defined component-wise. To sum up, we state these results in the following way. (This topic will be discussed in a general setting again in Sect. 8.2.)

Theorem 6.4 The class of all Boolean algebras is closed under subalgebras, homomorphic images and direct products.

6.3 Representations of Boolean Algebras

In Example 6.5 and Exercise 6.6, it was shown that the power set ℘(C) of an arbitrary set C forms a distributive lattice ⟨℘(C), ∪, ∩⟩ with respect to union ∪ and intersection ∩, in which ∅ is the least element. Let −X be the complement of X for each subset X of C, and define X → Y = −X ∪ Y for X, Y ⊆ C. Then ℘(C) (= ⟨℘(C), ∪, ∩, →, ∅⟩) forms a Boolean algebra. Boolean algebras of this form are called powerset Boolean algebras.

Exercise 6.10 Check that the law of residuation holds in ℘(C), i.e., X ∩ Y ⊆ Z if and only if X ⊆ −Y ∪ Z, for all X, Y, Z ⊆ C.

Exercise 6.11 Take a powerset Boolean algebra ℘(C) and let a be a fixed element of C. Define a mapping h from ℘(C) to the two-valued Boolean algebra 2 by h(X) = 1 if a ∈ X, and h(X) = 0 otherwise, for each subset X of C. Show that h is a homomorphism.

Finite Boolean Algebras

Let C be any finite set with m elements, say {a_k : 1 ≤ k ≤ m}. Then the powerset Boolean algebra ℘(C) is a 2^m-valued Boolean algebra. If D is another set with m elements, then ℘(D) is obviously isomorphic to ℘(C). We note that any 2^m-valued


Boolean algebra can also be understood as the m-fold direct product 2^m of the two-valued Boolean algebra 2, each of whose elements is of the form (j_1, j_2, . . . , j_m) with j_i ∈ {0, 1} for each i; each operation is defined component-wise, and the least element is (0, 0, . . . , 0). The isomorphism g from this 2^m to the above ℘(C) is defined by g((j_1, j_2, . . . , j_m)) = {a_i : j_i = 1}. It can be shown that every finite Boolean algebra is isomorphic to the 2^m-valued Boolean algebra 2^m for some m. Here is an example of the 4-valued Boolean algebra ℘(C), where m = 2 and C = {a, b}.

[Hasse diagram omitted: ∅ at the bottom, {a} and {b} incomparable above it, and {a, b} at the top.]

Infinite Boolean Algebras

In the same way as above, the powerset Boolean algebra ℘(C) is shown to be isomorphic to 2^C, a direct product of copies of the Boolean algebra 2, even if C is infinite. On the other hand, there is an infinite Boolean algebra which is not isomorphic to any powerset Boolean algebra. Here is such an example. Let N be the set of all natural numbers, and let F(N) be the collection of subsets of N consisting of all finite subsets and all cofinite subsets (i.e., those whose complement is finite) of N. For example, the set {m ∈ N : m > 1000} is a cofinite subset of N, while the set of all even natural numbers is neither a finite nor a cofinite subset of N. Obviously, ∅ belongs to F(N). Now consider the structure F(N) with the operations ∪, ∩, → of ℘(N), restricted to members of this set. We can show that the set F(N) is closed under each of these three operations. Thus it is a subalgebra of the powerset Boolean algebra ℘(N). Since all members of F(N) can be enumerated, it is a countable Boolean algebra. On the other hand, since the cardinality of any power set is either finite or uncountable, F(N) cannot be isomorphic to any powerset Boolean algebra.

Exercise 6.12
1. Show that F(N) is closed under the operation →, where X → Y is defined as −X ∪ Y for X, Y ⊆ N.
2. Give a way of enumerating all members of F(N).

On the other hand, we can show the following theorem, which can be regarded as a particular case of Theorem 7.14, whose proof is given in Sect. 7.5.

Theorem 6.5 (Stone's representation theorem for Boolean algebras) Every Boolean algebra can be embedded into a powerset Boolean algebra.

We say that a Boolean algebra A = ⟨A, ∨, ∧, →, 0⟩ is complete when its lattice reduct (i.e., its lattice part) ⟨A, ∨, ∧⟩ is complete. Thus, every powerset Boolean algebra is complete, as was already noticed in Example 6.5.
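The closure of F(N) under the Boolean operations can be made concrete by representing each member by a tag together with a finite set: either the finite set itself, or the finite complement of a cofinite set. The following Python sketch (ours; the representation and all names are illustrative, not from the text) implements complement and union in this representation; since every result again carries one of the two tags, closure under → = −X ∪ Y is visible by construction, which is the content of Exercise 6.12.1:

```python
# An element of F(N) is written ('fin', S) for the finite set S itself,
# or ('cofin', S) for the cofinite set N - S, where S is a finite frozenset.

def comp(x):                      # complement: swaps the two kinds
    tag, s = x
    return ('cofin' if tag == 'fin' else 'fin', s)

def union(x, y):                  # X ∪ Y, by cases on the two kinds
    (tx, sx), (ty, sy) = x, y
    if tx == 'fin' and ty == 'fin':
        return ('fin', sx | sy)
    if tx == 'cofin' and ty == 'cofin':
        return ('cofin', sx & sy)        # -(X ∪ Y) = -X ∩ -Y
    if tx == 'cofin':                    # make x the finite one
        sx, sy = sy, sx
    return ('cofin', sy - sx)            # complement of the cofinite set, minus X

def impl(x, y):                   # X → Y = -X ∪ Y
    return union(comp(x), y)

X = ('fin', frozenset({1, 2}))
Y = ('cofin', frozenset({3}))
assert impl(X, Y) == ('cofin', frozenset())   # -X ∪ Y is all of N here
```

The even numbers admit no such representation, which is exactly why they fall outside F(N).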


6.4 Algebraic Completeness of Classical Logic

Each assignment in the two-valued semantics discussed in Sect. 1.1 determines an interpretation of formulas. The notions of assignment and validity can be naturally extended to arbitrary Boolean algebras. Let A be a Boolean algebra. An assignment h on A is any mapping from the set of all propositional variables to A.³ The assignment h can be naturally extended to a mapping from the set of all formulas to the set A by defining h(α ∨ β) = h(α) ∨_A h(β), h(α ∧ β) = h(α) ∧_A h(β), h(α → β) = h(α) →_A h(β) and h(0) = 0_A. (We use the same symbol h for the extended mapping, by abuse of symbols. The symbols ∨, ∧, → and 0 above denote logical connectives and a logical constant, while ∨_A, ∧_A, →_A and 0_A denote the corresponding algebraic operations of A and the least element of A, respectively, to avoid confusion.) A formula ϕ is valid in a Boolean algebra A if it always takes the value 1_A, i.e., h(ϕ) = 1_A for every assignment h on A. The notion of validity of formulas can be extended naturally to validity of sequents. Obviously, the tautologies discussed in Chap. 1 are exactly the formulas and sequents which are valid in the two-valued Boolean algebra 2.⁴

Theorem 6.6 (Algebraic completeness) For any given non-degenerate Boolean algebra B, the following three conditions are mutually equivalent for every formula ϕ.
1. ϕ is provable in classical logic.
2. ϕ is valid in all Boolean algebras.
3. ϕ is valid in the Boolean algebra B.

Proof We take the Hilbert-style system HK (see Sect. 1.1), for instance, as a system for classical logic. To show that 1 implies 2, it is enough to confirm that for every Boolean algebra A, (1) each axiom of HK is valid in A, and (2) modus ponens preserves validity in A, i.e., if both α and α → β are valid then β is also valid. The latter follows from the fact that 1 → b = 1 implies b = 1 in any Boolean algebra. It is trivial that 2 implies 3.
To show that 3 implies 1, it is enough to show that if ϕ is valid in B then it is valid in the two-valued Boolean algebra 2, as validity in 2 is equivalent to provability in HK (see Corollary 1.12 and Theorem 1.11). Suppose that ϕ is not valid in 2. Then there exists an assignment h on 2 such that h(ϕ) < 1. On the other hand, since the non-degenerate Boolean algebra B has a subalgebra which is isomorphic to 2, as noted in Sect. 6.2, h can be regarded also as an assignment on B. But this contradicts our assumption that ϕ is valid in B. ∎

Theorem 6.6 tells us that even if we consider complicated Boolean algebras, we get nothing new as concerns the validity of formulas. In contrast, similar results do not hold in general for other logics and the algebras corresponding to them, as shown in later chapters.

³ Sometimes assignments are called valuations. But in our book we use the word 'valuations' only in the context of Kripke semantics (see Chap. 10).
⁴ In Part II, the word 'validity' is used in this general sense.
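The homomorphic extension of an assignment, and validity in 2, are mechanical enough to program directly. The following Python sketch (ours; the tuple encoding of formulas and all names are illustrative) evaluates formulas in 2 under every assignment:

```python
from itertools import product

def ev(phi, h):
    """Extend the assignment h homomorphically to the formula phi.
    A formula is a variable name (str), the constant 0, or a tuple
    ('or' | 'and' | 'imp', left, right)."""
    if phi == 0:
        return 0
    if isinstance(phi, str):
        return h[phi]
    op, left, right = phi
    x, y = ev(left, h), ev(right, h)
    if op == 'or':
        return max(x, y)
    if op == 'and':
        return min(x, y)
    return 1 if x <= y else 0          # 'imp' in the two-valued algebra 2

def valid_in_2(phi, variables):
    """Brute-force: phi takes the value 1 under every assignment on 2."""
    return all(ev(phi, dict(zip(variables, vals))) == 1
               for vals in product((0, 1), repeat=len(variables)))

excluded_middle = ('or', 'p', ('imp', 'p', 0))         # p ∨ ¬p, with ¬p = p → 0
peirce = ('imp', ('imp', ('imp', 'p', 'q'), 'p'), 'p')
assert valid_in_2(excluded_middle, ['p'])
assert valid_in_2(peirce, ['p', 'q'])
assert not valid_in_2(('imp', 'p', 'q'), ['p', 'q'])
```

By Theorem 6.6, a formula passing this check in 2 is provable in HK and valid in every Boolean algebra.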


6.5 Many-Valued Chains and the Law of Residuation

From an algebraic point of view, it is natural to ask how our two-valued semantics, i.e., algebraic semantics based on the two-valued Boolean algebra, can be generalized to a many-valued one. In the present section, we consider two possible ways of extending two-valued semantics to many-valued semantics, depending on which form of the law of residuation we retain. We introduce two types of many-valued chains, i.e., algebras with residuation defined over totally ordered sets: Gödel chains and Łukasiewicz chains. The algebraic methods introduced in earlier sections will be applied to each of them, and basic results on two distinct classes of many-valued logics will be shown. In later chapters, the idea will be generalized further to algebras with residuation defined over lattices, which will eventually lead us to the notions of Heyting algebras and of residuated lattices.

As a generalization of two-valued semantics, we consider how to introduce a many-valued semantics for the same language as classical logic. We restrict our attention here to the case where the set A of truth values is a totally ordered (possibly infinite) set, i.e., a chain, with the least element 0 and the greatest element 1. For a three-valued semantics, we can take {0, 1/2, 1} for A, for example. In this case, the value 1/2 will be regarded as halfway truth, if we understand truth values as degrees of truth. As we assume that A is totally ordered, a ∨ b and a ∧ b are expressed as max{a, b} and min{a, b}, respectively. So the main question is how to define the implication → on this set. One idea is simply to keep the law of residuation between conjunction and implication.

Suppose first that → satisfies the law of residuation. Then → must be defined so that for given a and b in A, d ∧ a ≤ b iff d ≤ a → b for all d ∈ A. This is equivalent to saying that min{d, a} ≤ b iff d ≤ a → b for all d ∈ A.
If this is the case, we can show that a → b = 1 if a ≤ b, and a → b = b otherwise. We note here that if a ≤ b does not hold then a > b holds, as ≤ is a total order. Now, if a ≤ b then min{d, a} ≤ b holds for every d, and hence d ≤ a → b must hold for every d. This means a → b = 1. Next, if a > b then min{b, a} = b, while min{d, a} > b for all d such that d > b. Thus a → b must be equal to b.

Conversely, suppose that the operation → satisfies (1) a → b = 1 when a ≤ b, and (2) a → b = b otherwise. We show that the law of residuation then holds. For, if a ≤ b, then the inequalities min{d, a} ≤ a ≤ b always hold for every d on the one hand, while d ≤ 1 = a → b always holds for every d on the other hand. When a > b, min{d, a} ≤ b iff d ≤ b iff d ≤ a → b by our assumption. Consequently, we have shown the following.

Lemma 6.7 Suppose that ⟨A, ∨, ∧⟩ is a chain with the greatest element 1. Then the law of residuation holds between ∧ and → if and only if, for all a, b ∈ A, a → b = 1 if a ≤ b, and a → b = b otherwise.
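Lemma 6.7 can be tested exhaustively on any finite chain. The following Python sketch (ours; `goedel_imp` is an illustrative name, anticipating the next subsection) checks both directions of the law of residuation, and also that a → b is the greatest d with min{d, a} ≤ b, on a five-element chain of fractions:

```python
from fractions import Fraction
from itertools import product

chain = [Fraction(k, 4) for k in range(5)]   # the chain 0 < 1/4 < 2/4 < 3/4 < 1

def goedel_imp(a, b):
    # the operation forced by Lemma 6.7: 1 if a <= b, and b otherwise
    return Fraction(1) if a <= b else b

for a, b, d in product(chain, repeat=3):
    # law of residuation: min{d, a} <= b  iff  d <= a -> b
    assert (min(d, a) <= b) == (d <= goedel_imp(a, b))

for a, b in product(chain, repeat=2):
    # a -> b is the greatest d satisfying min{d, a} <= b
    assert goedel_imp(a, b) == max(d for d in chain if min(d, a) <= b)
```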


Gödel Chains and Gödel Logics

Lemma 6.7 suggests the following definition. Suppose that A is an arbitrary chain with the least element 0 and the greatest element 1. Define three operations ∨, ∧ and → on A as follows:

• a ∨ b = max{a, b},
• a ∧ b = min{a, b},
• a → b = 1 if a ≤ b, and a → b = b otherwise.

Then the algebra A = ⟨A, ∨, ∧, →, 0⟩ is called a Gödel chain.⁵ When A has exactly n + 1 elements, A is called an (n + 1)-valued Gödel chain, for n > 0. It is easy to see that any two (n + 1)-valued Gödel chains are isomorphic. A standard presentation of (n + 1)-valued Gödel chains is given by taking for A the set G_{n+1} = {0, 1/n, 2/n, . . . , (n − 1)/n, 1}. On the other hand, infinite Gödel chains are not always isomorphic to each other, as there exists a Gödel chain of arbitrary cardinality.

Let A be any Gödel chain. We can define assignments on A in the same way as we did for Boolean algebras. We say that a formula α is valid in A iff f(α) = 1 for every assignment f on A. The set of all formulas valid in A is denoted by L(A). The set L(A) is called the m-valued Gödel logic if A is isomorphic to G_m for some number m, and is called an infinite Gödel logic (determined by the chain A) when A is infinite. It is clear that L(G_2) is equal to classical logic Cl. These Gödel logics are in fact superintuitionistic logics in the sense of Sect. 5.4.

Exercise 6.13
1. Show that the law of double negation ¬¬p → p is not valid in G_3.
2. Show that the formula (p → q) ∨ (q → p) is valid in any Gödel chain.

Next we will see which inclusion relations hold among Gödel logics. The following lemma holds in general and will be discussed again in a general setting in Chap. 8. (To understand the following proof, one may temporarily regard both A and B as Gödel chains.)

Lemma 6.8 If B is a subalgebra of A, then L(A) ⊆ L(B).

Proof Suppose that a formula ϕ is valid in A. Let f be any assignment on B. Then f can also be considered to be an assignment on A, as B is a subset of A.
Since B is a subalgebra of A, we can see that for any formula ψ, the value f(ψ) under the assignment f on B is equal to the value f(ψ) under the assignment f on A. If we take ϕ for ψ, then f(ϕ) must be equal to 1 (in B) for every f, as ϕ is valid in A. This means that ϕ is valid in B. ∎

In general, we can show that every universal formula in the language of algebras which is valid in an algebra A is also valid in any subalgebra of A.

⁵ These chains were discussed in Gödel (1932).
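Both parts of Exercise 6.13 can be machine-checked on G_3 before attempting a proof. The following Python sketch (ours; names are illustrative) finds the unique point of G_3 at which ¬¬p → p fails, and confirms prelinearity at every pair:

```python
from fractions import Fraction

G3 = [Fraction(0), Fraction(1, 2), Fraction(1)]

def imp(a, b):                    # Gödel implication on the chain
    return Fraction(1) if a <= b else b

def neg(a):                       # ¬a = a → 0
    return imp(a, Fraction(0))

# ¬¬p → p fails exactly at 1/2: ¬¬(1/2) = 1 but 1 → 1/2 = 1/2
counterexamples = [a for a in G3 if imp(neg(neg(a)), a) != 1]
assert counterexamples == [Fraction(1, 2)]

# prelinearity (p → q) ∨ (q → p) takes the value 1 at every pair
assert all(max(imp(a, b), imp(b, a)) == 1 for a in G3 for b in G3)
```

The second check only covers the finite chain G_3; validity in an arbitrary Gödel chain still requires the case analysis asked for in the exercise.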


Theorem 6.9
1. If a formula is valid in G_{m+1} then it is valid in G_m for m ≥ 2, but the converse does not always hold. Thus L(G_{m+1}) ⊂ L(G_m).
2. Suppose that A is any infinite Gödel chain. If a formula is valid in A then it is valid in G_m for every m ≥ 2, i.e., L(A) ⊆ L(G_m).
3. In fact, for every infinite Gödel chain A, the logic L(A) is equal to the intersection ⋂_k L(G_k) of all logics L(G_k) for k ≥ 2. Hence the infinite Gödel logic is determined uniquely.

Proof We apply Lemma 6.8 to Gödel chains to show the inclusion relation in the first statement. We note that any subset U of G_{m+1} containing both 0 and 1 forms a subalgebra of G_{m+1}, since a → b always belongs to U as long as a, b ∈ U, by the definition. Thus, for example, if we take for U the set {0, 1/m, 2/m, . . . , (m − 2)/m, 1} with m elements, then U forms a subalgebra isomorphic to G_m. Thus the inclusion relation claimed in the first statement of Theorem 6.9 holds.

[Hasse diagrams omitted: the chains G_3 (elements 0, 1/2, 1) and G_4 (elements 0, 1/3, 2/3, 1).]

It remains to show that L(G_{m+1}) ≠ L(G_m). We define formulas π_k inductively for any k ≥ 1 as follows. Let p_1, p_2, . . . be distinct propositional variables. Then

    π_1 = p_1,
    π_{k+1} = p_{k+1} ∨ (p_{k+1} → π_k)   for k ≥ 1.     (6.1)

We notice that π_2 is the law of excluded middle. It is easy to see that the formula π_m → π_n is provable in intuitionistic logic when 1 ≤ m ≤ n. We can show the following. (The proof is left to the reader; see Exercise 6.14.)

Lemma 6.10 The formula π_n is not valid in G_m if and only if there exist n elements a_1, . . . , a_n in G_m such that a_1 < a_2 < . . . < a_n < 1. Thus, π_n is valid in G_m if and only if m ≤ n.

We continue the proof of Theorem 6.9. From Lemma 6.10 it follows in particular that π_m belongs to L(G_m) but not to L(G_{m+1}). The inclusion relation in the second statement of Theorem 6.9 can be shown similarly to the first. Finally, to show the equality in the third statement of Theorem 6.9, it is enough to show that any formula α which is not valid in A is also not valid in some G_k. Suppose that a formula α is not valid in A under an assignment g, and moreover that it contains m propositional variables p_1, . . . , p_m. Let V be the finite subset {g(p_1), . . . , g(p_m)} ∪ {0, 1} of A, and let k be the number of distinct elements of V. It is easy to see that V forms a subalgebra of A which is isomorphic to G_k. Then,

Fig. 6.2 Łukasiewicz implication (left) and Gödel implication (right), where e denotes 1/2:

  Łukasiewicz a → b         Gödel a → b
   a \ b | 1   e   0         a \ b | 1   e   0
     1   | 1   e   0           1   | 1   e   0
     e   | 1   1   e           e   | 1   1   0
     0   | 1   1   1           0   | 1   1   1

α is not valid in Gk, as g can be regarded also as an assignment on Gk. Consequently, α ∉ L(A) implies α ∉ L(Gk) for some k. (Note that k depends on α.) Combining this with the second statement, we have L(A) = ⋂k L(Gk). □

Exercise 6.14 Give a proof of Lemma 6.10 by using induction on n.

The logic determined by ⋂k L(Gk) is called the Gödel-Dummett logic and is henceforth expressed as GD.6 A Hilbert-style axiom system of GD is obtained from HJ for intuitionistic logic by adding the following prelinearity axiom scheme: (α → β) ∨ (β → α).

Łukasiewicz Chains

J. Łukasiewicz introduced a many-valued semantics over chains in his paper (Łukasiewicz 1920), which is different from the one in the previous section. He introduced a three-valued semantics first, and this was later extended to many-valued ones. This time, the third value 1/2 between 0 and 1 of the three-valued case is understood as undetermined or indeterminate. As it is a chain, a ∨ b and a ∧ b are equal to max{a, b} and min{a, b}, respectively, as before. The truth table of Łukasiewicz implication on {0, 1/2, 1} is given in comparison with that of Gödel implication in Fig. 6.2, where e denotes 1/2. In either chain, the negation ¬a is defined by a → 0. Thus, in the three-valued case, the difference lies only in the value of ¬e (= e → 0). For the Łukasiewicz negation, ¬e = e and hence the law of double negation always holds, while for the Gödel negation, ¬e = 0 and ¬¬e = 1, and hence the law of double negation fails (Fig. 6.2).

In general, the (n + 1)-valued Łukasiewicz chain Łn+1 is the set Ln+1 = {0, 1/n, 2/n, . . . , (n − 1)/n, 1} with the following three operations ∨, ∧ and → for n ≥ 1.
• a ∨ b = max{a, b},
• a ∧ b = min{a, b},
• a → b = min{1, 1 − a + b}, i.e., a → b = 1 if a ≤ b, and a → b = 1 − a + b otherwise.
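These operations are easy to experiment with over exact rationals. The sketch below (helper names are mine) reproduces both the validity of double negation on a Łukasiewicz chain and the failure of contraction discussed in Example 6.10 below.

```python
from fractions import Fraction

ONE, HALF, ZERO = Fraction(1), Fraction(1, 2), Fraction(0)

def luk_imp(a, b):
    # Łukasiewicz implication: a -> b = min{1, 1 - a + b}
    return min(ONE, 1 - a + b)

def luk_neg(a):
    # ¬a = a -> 0 = 1 - a
    return luk_imp(a, ZERO)

# the law of double negation holds on the chain: ¬¬a = a
for a in (ZERO, HALF, ONE):
    assert luk_neg(luk_neg(a)) == a

# contraction fails at a = 1/2, b = 0 (cf. Example 6.10)
a, b = HALF, ZERO
value = luk_imp(luk_imp(a, luk_imp(a, b)), luk_imp(a, b))
assert value == HALF        # the value is 1/2, not 1
```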

Similarly, the standard Łukasiewicz chain Ł is defined by replacing the universe Ln+1 by the unit interval [0, 1] (i.e., the set of all real numbers r such that 0 ≤ r ≤ 1), while keeping the same definition of the three operations. We can define assignments on Łukasiewicz chains as before and say that a formula is valid in a Łukasiewicz chain when it always takes the value 1 for every assignment on it. For example, the law of double negation ¬¬α → α is valid in every Łukasiewicz chain, as ¬a = 1 − a for each element a by the definition.

Footnote 6: The logic is named after K. Gödel and also M. Dummett (1959).

Example 6.10 There exists a formula which is provable in classical logic but is not valid in Ł3. For instance, consider the contraction axiom (p → (p → q)) → (p → q) of HK for classical logic. Let a and b denote 1/2 and 0, respectively, and consider the value of (a → (a → b)) → (a → b). Since a → b = a → 0 = 1 − 1/2 = a, the above value is equal to the value of (a → a) → a, which is equal to 1 → 1/2 = 1/2 ≠ 1. Thus, the formula (p → (p → q)) → (p → q) is not valid in the three-valued Łukasiewicz chain.

Exercise 6.15 Check whether each of the following equalities always holds in Ł3.
• x ∨ ¬x = 1 for all x,
• x → (y → x) = 1 for all x,
• (x → y) ∨ (y → x) = 1 for all x, y,
• ¬(x ∧ y) = ¬x ∨ ¬y for all x, y.

Exercise 6.16 Show that the equality x ∨ y = (x → y) → y holds for all x, y in Ł. (Thus, (x → y) → y = (y → x) → x always holds.)

Exercise 6.17 Give an example of a triple a, b, c such that the law of residuation in the following form fails in Ł: a ∧ b ≤ c iff a ≤ b → c.

We define a binary operation ·, called fusion, on Ł by a · b = max{0, a + b − 1}. Obviously, · is commutative, i.e., x · y = y · x. Then the following lemma holds.

Lemma 6.11 The following law of residuation between · and → holds in Ł, i.e., for all a, b, c ∈ [0, 1],

a · b ≤ c iff a ≤ b → c.    (6.2)

Exercise 6.18 1. Show that the statement (6.2) holds in Ł.
2. Show that both equalities x · y = ¬(x → ¬y) and x ∧ y = x · (x → y) hold in Ł.
3. Show that the fusion · on Ł is associative, i.e., x · (y · z) = (x · y) · z holds.

Thus, another way of extending two-valued semantics to a many-valued one is to introduce Łukasiewicz chains. In this case, the law of double negation is preserved, while the law of residuation between conjunction and implication is not. On the other hand, the law still holds between fusion and implication. In general, an algebraic structure which contains implication-like operations is said to be residuated when the law of residuation holds between them and some operations, not necessarily conjunction. See Chap. 9 for further details.
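Lemma 6.11 and Exercise 6.17 can be spot-checked numerically on a finite grid of rationals. A grid check is of course not a proof; the function names below are mine.

```python
from fractions import Fraction
from itertools import product

def imp(a, b):                     # a -> b = min{1, 1 - a + b}
    return min(Fraction(1), 1 - a + b)

def fusion(a, b):                  # a · b = max{0, a + b - 1}
    return max(Fraction(0), a + b - 1)

grid = [Fraction(i, 10) for i in range(11)]

# residuation between · and -> (Lemma 6.11) holds at every grid point
for a, b, c in product(grid, repeat=3):
    assert (fusion(a, b) <= c) == (a <= imp(b, c))

# ...but between ∧ and -> it fails, e.g. at a = b = 1/2, c = 0
a, b, c = Fraction(1, 2), Fraction(1, 2), Fraction(0)
assert a <= imp(b, c) and not (min(a, b) <= c)
```

The last two lines exhibit one concrete answer to Exercise 6.17.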

Łukasiewicz's Many-Valued Logics

By taking any one of these Łukasiewicz chains, we can introduce a Łukasiewicz many-valued logic. In the following, we call Łukasiewicz's many-valued logics simply Łukasiewicz logics. For each Łukasiewicz chain A, let L(A) denote the set of all formulas valid in A. The logics L(Łn+1) and L(Ł) are called the (n + 1)-valued Łukasiewicz logic and the infinite-valued Łukasiewicz logic, respectively. As Ł2 is equal to the two-valued Boolean algebra 2, the set L(Ł2) is none other than the set of all tautologies. We note that these Łukasiewicz logics are indeed substructural logics over FLew in the sense of Sect. 5.4.

Next we will see what inclusion relations hold among these Łukasiewicz logics. Using Lemma 6.8, we can show the following result, as Ł2 is a subalgebra of Łn+1, which in turn is a subalgebra of Ł as Łukasiewicz chains.

Lemma 6.12 L(Ł) ⊆ L(Łn+1) ⊆ L(Ł2) for all n ≥ 1.

Thus every finite-valued Łukasiewicz logic includes L(Ł), and the two-valued Łukasiewicz logic, which is classical logic, is the greatest among them. Now let us compare the two algebras Łm+1 and Łn+1 for given m and n.

Example 6.11 Suppose that m = 3 and n = 6. The algebra L4 = {0, 1/3, 2/3, 1}, while L7 = {0, 1/6, 2/6, 3/6, 4/6, 5/6, 1}. Clearly, L4 is a subset of L7, as 3 is a divisor of 6. Moreover, L4 is closed under the three operations max{a, b}, min{a, b} and min{1, 1 − a + b}. Thus, Ł4 is a subalgebra of Ł7. On the other hand, the set {0, 1/6, 1/3, 2/3, 1} is a subset of L7, but it is not closed under →. For, the value 1/3 → 1/6 (= min{1, 1 − 1/3 + 1/6} = 5/6) does not belong to that set.

[Hasse diagrams of Ł4 (elements 0 < 1/3 < 2/3 < 1) and Ł7 (elements 0 < 1/6 < 2/6 < 3/6 < 4/6 < 5/6 < 1) appear here.]
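The closure computation in Example 6.11 can be automated. The sketch below (names are mine) tests, for small m and n, whether Lm+1 is a subset of Ln+1 closed under →; on these small cases the result matches the divisor criterion of Theorem 6.13 below.

```python
from fractions import Fraction
from itertools import product

def imp(a, b):
    # Łukasiewicz implication
    return min(Fraction(1), 1 - a + b)

def chain(n):
    # universe L_{n+1} of the (n+1)-valued Łukasiewicz chain
    return {Fraction(i, n) for i in range(n + 1)}

def is_subuniverse(m, n):
    # is L_{m+1} a subset of L_{n+1} closed under ->?  (∨ and ∧ are
    # max and min, under which any subset of a chain is closed)
    S = chain(m)
    return S <= chain(n) and all(imp(a, b) in S
                                 for a, b in product(S, repeat=2))

# matches the divisor criterion on small cases
for m in range(1, 7):
    for n in range(1, 13):
        assert is_subuniverse(m, n) == (n % m == 0)
```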

Exercise 6.19 By generalizing the argument of Example 6.11 above and using Lemma 6.8, show that if the number m is a divisor of the number n then Łm+1 is a subalgebra of Łn+1.

The following theorem, which contains the converse of the above statement and is due to A. Lindenbaum, can be shown, but we will omit the proof.7

Theorem 6.13 L(Łn+1) ⊆ L(Łm+1) iff m is a divisor of n.

Footnote 7: His result was reported in Łukasiewicz and Tarski (1930).

Let us consider which finite-valued Łukasiewicz logics come just below classical logic. We say that L(Łp+1) (p > 1) is maximal (among finite-valued Łukasiewicz logics) if L(Łp+1) ⊆ L(Łk+1) implies k = p for all k > 1. As an immediate corollary of Theorem 6.13, we can show that L(Łp+1) is maximal when p is a prime number. Thus, L(Ł3), L(Ł4), L(Ł6), L(Ł8), . . . are exactly the finite-valued Łukasiewicz logics just below classical logic. Obviously, they are mutually incomparable. Hence, there exist infinitely many such logics. A complete characterization of the class of all logics over L(Ł) is given by Komori (1981). The class also contains logics which are not determined by finite Łukasiewicz chains. Still, there are only countably many of them in total. Moreover, each of these logics is finitely axiomatizable and decidable. A Hilbert-style axiom system of L(Ł) is obtained from the axiom system HK of classical logic by first deleting the contraction axiom scheme and then adding the following axiom scheme: ((α → β) → β) → ((β → α) → α). (See Example 6.10 and Exercise 6.16. We notice that the law of double negation follows from this axiom scheme.) The infinite-valued Łukasiewicz logic and the Gödel-Dummett logic are mutually incomparable. In fact, the law of double negation holds in the former but not in the latter, while the contraction axiom scheme holds in the latter but not in the former (see Exercise 6.13 and Example 6.10).
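The algebraic fact behind this axiom scheme can be spot-checked on a grid (again not a proof; the names are mine): (x → y) → y coincides with x ∨ y on Ł (Exercise 6.16), so the axiom always takes the value 1.

```python
from fractions import Fraction
from itertools import product

def imp(a, b):
    # Łukasiewicz implication on Ł
    return min(Fraction(1), 1 - a + b)

grid = [Fraction(i, 12) for i in range(13)]

for a, b in product(grid, repeat=2):
    # (a -> b) -> b coincides with the join a ∨ b (Exercise 6.16) ...
    assert imp(imp(a, b), b) == max(a, b)
    # ... hence ((α → β) → β) → ((β → α) → α) takes the value 1
    assert imp(imp(imp(a, b), b), imp(imp(b, a), a)) == 1
```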

Chapter 7

Basics of Algebraic Logic

The main goal of this chapter is to introduce several basic concepts in algebraic logic, namely Lindenbaum-Tarski algebras, locally finite algebras, the finite embeddability property and canonical extensions. They are important algebraic tools for developing the algebraic approach to logic. Though these concepts are quite general, we will confine ourselves to discussing them in the context of Heyting algebras and intuitionistic logic, to keep the arguments concrete and not to lose our way in full generality. It will be shown that the Lindenbaum-Tarski algebra gives a way of proving the algebraic completeness of intuitionistic logic with respect to the class of Heyting algebras. Also, by using the non-local finiteness and the finite embeddability property of the class of Heyting algebras, we can show that intuitionistic logic is complete with respect to the set of all finite Heyting algebras but not with respect to any single finite algebra. In the last section, we present a basic representation theorem for Heyting algebras due to M. Stone, which comes from a duality between Heyting algebras and partially ordered sets.

7.1 Heyting Algebras

As mentioned already in the previous chapter, Heyting algebras can be defined by dropping the law of double negation from the definition of Boolean algebras.1

Definition 7.1 (Heyting algebras) An algebra A = ⟨A, ∨, ∧, →, 0⟩ is a Heyting algebra iff
1. ⟨A, ∨, ∧⟩ is a lattice with the least element 0,
2. the law of residuation holds, i.e., a ∧ b ≤ c iff a ≤ b → c for all a, b, c ∈ A.

Footnote 1: The notion was originally introduced by Heyting (1930).

© Springer Nature Singapore Pte Ltd. 2019 H. Ono, Proof Theory and Algebra in Logic, Short Textbooks in Logic, https://doi.org/10.1007/978-981-13-7997-0_7


Every Heyting algebra also has the greatest element 1, where 1 is equal to a → a for any a. Also, ¬a is defined by a → 0. It is obvious that every Boolean algebra is a Heyting algebra. Gödel chains are none other than totally ordered Heyting algebras. Some basic properties of Heyting algebras are already shown in Lemma 6.2 in Chap. 6. Recall here that the law of residuation means that the set {x : x ∧ b ≤ c} has the greatest element b → c for all b, c. In the following, unless otherwise noted, we consider only non-degenerate Heyting algebras, i.e., Heyting algebras containing at least two distinct elements, the least element 0 and the greatest element 1. As before, a formula α is valid in a Heyting algebra A iff f(α) = 1 for every assignment f on A. The set of all valid formulas in A is denoted by L(A). Any Heyting algebra in which the prelinearity axiom scheme (α → β) ∨ (β → α) is valid is called a Gödel algebra. Clearly, every Boolean algebra is a Gödel algebra. But Boolean algebras are not totally ordered except for the two-valued Boolean algebra. This implies that Gödel algebras are not always Gödel chains.

Exercise 7.1 Let A be any Heyting algebra. Show the following.
1. x ≤ ¬¬x for every x ∈ A.
2. If x ∨ ¬x = 1 holds for all x ∈ A then ¬¬x ≤ x holds for all x ∈ A. (Hence a Heyting algebra A is a Boolean algebra iff x ∨ ¬x = 1 holds for all x ∈ A. Cf. Lemma 6.3.)

By Lemma 6.2, every Heyting algebra is distributive. The converse holds when a given distributive lattice is finite, as the following lemma shows.

Lemma 7.1 (Finite distributive lattices) Let D be any finite distributive lattice. Then, for all b, c ∈ D there always exists a maximum element of the set {x ∈ D : x ∧ b ≤ c}. Thus, every finite distributive lattice can be regarded as a Heyting algebra if b → c is defined by max{x ∈ D : x ∧ b ≤ c} for all b, c ∈ D.

Proof For given elements b, c of a finite distributive lattice D, define U to be the set {x ∈ D : x ∧ b ≤ c}.
As c belongs to U, the set U is non-empty. We enumerate all elements of U as {xi : 1 ≤ i ≤ m}. Let w = x1 ∨ . . . ∨ xm, which is the least upper bound of U. Then w ∧ b = (x1 ∨ . . . ∨ xm) ∧ b = (x1 ∧ b) ∨ . . . ∨ (xm ∧ b) by the distributivity. As xi ∧ b ≤ c for each i, the inequality w ∧ b ≤ c holds. This means that w also belongs to U, and hence w = max U. Thus, b → c always exists for all b, c. Hence, D forms a Heyting algebra. □

Exercise 7.2 Consider the finite distributive lattice J with the underlying set {0, a, b, c, 1} in Exercise 6.4. By Lemma 7.1, J = ⟨J, ∨, ∧, →, 0⟩ is regarded as a Heyting algebra. Answer the following questions.
1. Calculate the values of a → b and c → b.
2. Show that the subset {0, c, 1} of J forms a Heyting subalgebra of J.
3. Show that the prelinearity axiom scheme is not valid in J.
4. Show that L(J) is a proper subset of L(G3).
5. Show that the formula π3 (in the proof of Theorem 6.9) is valid in J.
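Lemma 7.1 can be tried out on a concrete finite distributive lattice. The sketch below uses my own example (not from the text): the divisor lattice of 12, where x ∧ y is gcd, x ∨ y is lcm, and x ≤ y means x divides y. Following the proof, b → c is computed as the join of all x with x ∧ b ≤ c, which remains in that set by distributivity.

```python
from math import gcd
from functools import reduce
from itertools import product

divs = [1, 2, 3, 4, 6, 12]              # divisor lattice of 12 (distributive)
def lcm(a, b): return a * b // gcd(a, b)

def imp(b, c):
    # Lemma 7.1: b -> c = max{x : x ∧ b ≤ c}; here x ∧ b = gcd(x, b) and
    # x ≤ y means x divides y.  The max is the join (lcm) of all such x.
    return reduce(lcm, [x for x in divs if c % gcd(x, b) == 0])

# the law of residuation a ∧ b ≤ c iff a ≤ b -> c now holds
for a, b, c in product(divs, repeat=3):
    assert (c % gcd(a, b) == 0) == (imp(b, c) % a == 0)
```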

Remark 7.1 There exists a bounded distributive lattice D which is not a Heyting algebra. To show this, it is enough to show that max{x ∈ D : x ∧ b ≤ c} does not exist for some elements b, c ∈ D, since the existence of such an element is necessary for defining b → c. Here is an example of such a D, whose universe D is the set {(i, m) : i ∈ {0, 1} and m is a non-negative integer} ∪ {ω}.

[A Hasse diagram of D appears here: the two infinite ascending chains (0, 0) < (0, 1) < (0, 2) < . . . and (1, 0) < (1, 1) < (1, 2) < . . ., with ω on top.]

Define a partial order ≤ on D so that (1) ω is the greatest element and (2) the restriction of ≤ to the set D\{ω} is the product order, i.e., (i, m) ≤ (j, n) holds iff both i ≤ j and m ≤ n hold for (i, m), (j, n) ∈ D\{ω}. It can be shown that D = ⟨D, ≤⟩ forms a bounded distributive lattice with the least element (0, 0), where the lattice operations ∨ and ∧ are defined component-wise on D\{ω}. Now, let us consider the set M = {x ∈ D : x ∧ (1, 0) ≤ (0, 0)}. It is easy to see that x ∈ M iff x is of the form (0, m) for some non-negative integer m. It is also easy to see that ω is the only upper bound of M in D. On the other hand, ω ∉ M since ω ∧ (1, 0) = (1, 0) ≰ (0, 0). Thus M does not have a maximum element.

7.2 Lindenbaum-Tarski Algebras

We will show the algebraic completeness of intuitionistic logic in the following Theorem 7.2. We first introduce several basic algebraic notions and then explain the details of the proof in the subsequent sections, through Sect. 7.4. Here, we say that a formula α is provable in intuitionistic logic when it is provable in LJ.

Theorem 7.2 (Algebraic completeness of intuitionistic logic) The following are mutually equivalent for each formula α.
1. α is provable in intuitionistic logic.
2. α is valid in every Heyting algebra.
3. α is valid in every finite Heyting algebra.

In this section we show that condition 1 is equivalent to condition 2. To show that condition 1 implies condition 2, we prove a slightly more general statement below, by using induction on the length of a given proof of the sequent γ1, . . . , γm ⇒ ϕ. It goes similarly to Lemma 1.3 if the validity is understood as the

validity in a given Heyting algebra. We note that the corresponding formula of a given sequent γ1, . . . , γm ⇒ ϕ of LJ is the formula (γ1 ∧ . . . ∧ γm) → ϕ (see Sect. 1.2).

Lemma 7.3 If a given sequent of LJ is provable then its corresponding formula is valid in any Heyting algebra.

Proof We note that by the law of residuation, x → y = 1 iff x ≤ y for all x, y. Therefore, to show that (γ1 ∧ . . . ∧ γm) → ϕ is valid, it is enough to show that g(γ1 ∧ . . . ∧ γm) ≤ g(ϕ) for each assignment g on a given Heyting algebra. This is shown by induction on the length of the proof of the given sequent γ1, . . . , γm ⇒ ϕ. The claim is immediate when it is an initial sequent. So, suppose that γ1, . . . , γm ⇒ ϕ is obtained by applying a rule R of LJ. We consider here only the case where R is (→⇒) with upper sequents Γ ⇒ α and β, Δ ⇒ ϕ. Hence, γ1, . . . , γm ⇒ ϕ is expressed as α → β, Γ, Δ ⇒ ϕ. Let f be any assignment, and assume that f(α) = a, f(β) = b, f(γ) = g, f(δ) = d and f(ϕ) = e, where γ and δ are the conjunctions of all formulas in Γ and Δ, respectively. By the induction hypothesis, we have both g ≤ a and b ∧ d ≤ e. Then, (a → b) ∧ g ∧ d ≤ (a → b) ∧ a ∧ d ≤ b ∧ d ≤ e. Thus, the corresponding formula, which is equal to ((α → β) ∧ γ ∧ δ) → ϕ, is valid, as f is an arbitrary assignment. □

By taking the sequent ⇒ α for γ1, . . . , γm ⇒ ϕ in the above Lemma 7.3, we obtain that condition 1 implies condition 2 of Theorem 7.2. The converse direction of Lemma 7.3 also holds, as shown below, and hence they are equivalent.

Theorem 7.4 (Completeness) A formula α is provable in intuitionistic logic if and only if it is valid in every Heyting algebra A.

It is enough to give a proof of the if-part of the above theorem. We introduce here a canonical way of showing the algebraic completeness of a wide class of logics, using the Lindenbaum-Tarski algebra (or Lindenbaum algebra).
In fact, it works well for many nonclassical propositional logics.2

Footnote 2: The algebra was named after A. Lindenbaum and A. Tarski. It was first introduced in Tarski (1935, 1936).

Below, we describe an outline of the construction of the Lindenbaum-Tarski algebra for the case of intuitionistic logic. We will construct an abstract Heyting algebra FInt, called the Lindenbaum-Tarski algebra of intuitionistic logic, which is comprised of syntactic objects, and in which every formula not provable in intuitionistic logic is false, i.e., takes a value smaller than 1, under the canonical assignment. Here is an outline of a proof of Theorem 7.4. Let Φ be the set of all formulas. We define a binary relation ≡ on Φ by putting α ≡ β iff the formulas α → β and β → α are both provable in intuitionistic logic, or equivalently, the sequents α ⇒ β and β ⇒ α are both provable in LJ. When α ≡ β holds, we say that α and β are logically equivalent (in intuitionistic logic). It is easy to see that the relation ≡ is an equivalence relation on Φ, i.e., it satisfies the following three conditions:

• α ≡ α for all α,
• α ≡ β implies β ≡ α for all α and β,
• if α ≡ β and β ≡ γ then α ≡ γ for all α, β and γ.

In fact, the first and the second statements are almost trivial by the definition of logical equivalence. The third statement is shown by using the cut rule of LJ. It can be shown moreover that ≡ is a congruence relation and is also fully invariant, i.e., it satisfies the following first and second conditions, respectively:
• if α ≡ α′ and β ≡ β′ then α ∗ β ≡ α′ ∗ β′ for ∗ ∈ {∨, ∧, →},
• if α ≡ β then σ(α) ≡ σ(β) for every substitution σ.

Exercise 7.3 1. Show that α ≡ α′ and β ≡ β′ imply α → β ≡ α′ → β′.
2. Suppose that a proof P of the sequent α ⇒ β in LJ is given and that σ is an arbitrary substitution. Explain how to get a proof of the sequent σ(α) ⇒ σ(β) from P.

The relation α ≡ β means roughly that α and β are indistinguishable from each other in intuitionistic logic with respect to provability, and hence any replacement of α by β in any context will not make any difference, as far as provability is concerned. By identifying indistinguishable formulas, the set Φ is partitioned into equivalence classes. The equivalence class to which a formula α belongs is expressed as [α]. Formally, [α] is defined to be the set {ξ : α ≡ ξ} of formulas. The whole set of equivalence classes is denoted by Φ/≡.

Example 7.2 For instance, the two formulas (p ∨ q) → r and (p → r) ∧ (q → r) are shown to be logically equivalent in LJ. In this case, the full invariance means that (α ∨ β) → γ and (α → γ) ∧ (β → γ) are logically equivalent for all α, β and γ.

We define an algebra F whose underlying set is the set Φ/≡ of equivalence classes with respect to ≡. Algebraic operations ∨≡, ∧≡, →≡ on Φ/≡ are defined by [α] ∗≡ [β] = [α ∗ β] for each ∗ ∈ {∨, ∧, →}. Though this definition seems to depend on the choice of "representatives" α and β of the equivalence classes [α] and [β], respectively, the operation ∗≡ is well-defined, i.e., it is determined unambiguously.
This is because the relation ≡ is a congruence relation. We can show that the algebra F = ⟨Φ/≡, ∨≡, ∧≡, →≡, [0]⟩ is a Heyting algebra with the greatest element [1], the equivalence class which consists of all formulas provable in intuitionistic logic.

Now we show the if-part of Theorem 7.4. Suppose that a formula α is not provable in intuitionistic logic. Let g be the assignment defined by g(p) = [p] for each propositional variable p, which is called the canonical assignment. Then, by induction we can show that g(ϕ) = [ϕ] for any formula ϕ. In particular, we have g(α) = [α]. As α is not provable, g(α) (= [α]) cannot be equal to [1]. Thus, α is not valid in the Heyting algebra F. This completes our proof of Theorem 7.4.

Remark 7.3 The lattice operations of the Heyting algebra F determine a partial order ≤ on F, which satisfies [α] ≤ [β] if and only if [α] ∧≡ [β] = [α] for all α, β. Then it is easily seen that [α] ≤ [β] if and only if the formula α → β is provable in intuitionistic logic.
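As an analogy only: in classical logic, logical equivalence coincides with truth-table equivalence, so the quotient construction can be simulated concretely for formulas in one variable p (for intuitionistic logic this would require genuine proof search, and the quotient is infinite, as Sect. 7.3 shows). The representation below is my own, not from the text.

```python
from itertools import product

# a class [α] is represented by the truth table of α at p = 0 and p = 1;
# operating pointwise on tables is exactly why ≡ must be a congruence
def join(s, t): return tuple(x | y for x, y in zip(s, t))
def meet(s, t): return tuple(x & y for x, y in zip(s, t))
def imp(s, t):  return tuple((1 - x) | y for x, y in zip(s, t))

BOT, P, NOT_P, TOP = (0, 0), (0, 1), (1, 0), (1, 1)
classes = {BOT, P, NOT_P, TOP}

# the four classes are closed under all operations: the classical
# Lindenbaum-Tarski algebra over one variable is a 4-element Boolean algebra
for s, t in product(classes, repeat=2):
    assert {join(s, t), meet(s, t), imp(s, t)} <= classes
assert join(P, NOT_P) == TOP and meet(P, NOT_P) == BOT
```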

The above Heyting algebra F is called the Lindenbaum-Tarski algebra of intuitionistic logic and is denoted by FInt. We note that the definition of the assignment g which falsifies α does not depend on the given formula α. In this sense, g is said to be canonical. While it looks complicated at first sight, the above argument is a standard way of showing algebraic completeness, and in fact it can be applied to a wide class of logics. To apply this argument to a given logic L, it is enough to replace logical equivalence in intuitionistic logic by logical equivalence in L. What is needed is to confirm that this equivalence is a fully invariant congruence relation. Thus, once we understand how to use the Lindenbaum-Tarski algebra, it is rather routine work to show the algebraic completeness of most propositional logics. This topic will be discussed again in Sect. 8.2.

7.3 Locally Finite Algebras

The provability of a given formula in classical logic can be checked by examining whether it is a tautology or not. A natural question arises as to whether there exists a finite Heyting algebra by which intuitionistic logic can be characterized, just as classical logic is characterized by the two-valued Boolean algebra. If there were such an algebra, we could decide whether or not a given formula is provable in intuitionistic logic in the same way as for classical logic, since checking the validity of a given formula in a finite algebra can be carried out in finitely many steps. But, as observed by Gödel (1932), no single finite Heyting algebra can characterize intuitionistic logic. We give the following definition to make these statements precise.

Definition 7.2 A logic L is characterized by a class C of algebras when the following holds: a formula α is provable in L if and only if α is valid in all algebras in C, for any formula α. In particular, when L is characterized by a class of finite algebras, L is said to have the finite model property (FMP). When C is a singleton set of an algebra A, we say simply that L is characterized by A.

Lemma 7.5 For any finite Heyting algebra A, there exists a formula α which is valid in A but is not provable in intuitionistic logic. Thus, intuitionistic logic cannot be characterized by a single finite Heyting algebra.

Proof Let us consider the formula χm = ⋁0≤i<j≤m (pi ↔ pj). Here, {ph : 0 ≤ h ≤ m} is a set of m + 1 distinct propositional variables, and p ↔ q means (p → q) ∧ (q → p). If a given Heyting algebra A contains exactly k elements then χm is valid in it for every m ≥ k. For, g(pi) = g(pj) must hold for some i, j with 0 ≤ i < j ≤ m since m + 1 > k, and hence g(pi ↔ pj) = 1 for any assignment g on A. Thus, in particular, χk is valid in A. On the other hand, take the (k + 1)-valued Gödel chain Gk+1, and consider the assignment h such that h(pi) = i/k for each i such that

0 ≤ i ≤ k. Then, h(pi ↔ pj) = i/k whenever i < j. Thus, h(χk) = max{i/k : 0 ≤ i < k} = (k − 1)/k < 1. Hence, χk is not valid in Gk+1 and therefore is not provable in intuitionistic logic by Theorem 7.4. □

Though every formula which is not provable in intuitionistic logic is falsified uniformly by the canonical assignment on the Lindenbaum-Tarski algebra, the above lemma denies the existence of a finite Heyting algebra with this property. On the other hand, it will be shown in Theorem 7.8 in the next section that for each formula α, if it is not provable in intuitionistic logic then there exists a finite Heyting algebra (depending on the formula α) in which α is falsified. This is equivalent to saying that intuitionistic logic has the finite model property. We note here that the third statement of Theorem 6.9 expresses the finite model property of the Gödel-Dummett logic GD. As the proof shows, such a finite Gödel chain is given depending on each formula α. In the rest of the present section, we will make some preparations for showing the finite model property of intuitionistic logic in an algebraic way.

Lemma 7.6 Let A be a Heyting algebra and X a subset of A. Then there exists the smallest subalgebra C of A which contains X.

Proof Let S be the set of all subalgebras of A which include X. As A itself is such a subalgebra, S is non-empty. Define C = ⋂{D : D ∈ S}. Obviously, C is closed under all operations of Heyting algebras and contains 0, since each member D of S is. Thus, C is also a member of S, which is in fact its smallest member by its definition. □

Lemma 7.6 holds in fact for arbitrary algebras, as shown by essentially the same argument as above. We call C the subalgebra of A generated by the set X, and X a set of generators of C.3 When C has a finite set of generators, it is said to be finitely generated. A more intuitive and instructive way of describing the subalgebra C generated by a set X is as follows.
Let us take a Heyting algebra A for example. We first define subsets {Xi : i ∈ N} inductively as follows.
• X0 = X ∪ {0},
• Xn+1 = Xn ∪ {a ∗ b : a, b ∈ Xn, ∗ ∈ {∨, ∧, →}} for n ≥ 0.

Clearly, Xn ⊆ Xn+1 for each n. Let C = ⋃n Xn. Then, C is closed under all operations of Heyting algebras and includes X ∪ {0}. Moreover, it can be shown that if D is any subalgebra of A which includes X then Xn ⊆ D for each n, and hence C ⊆ D. This means that C is the smallest algebra in S.

Definition 7.3 (Locally finite algebras) An algebra is locally finite if every finitely generated subalgebra is finite. A class of algebras is locally finite if every algebra in the class is locally finite.

Footnote 3: We exclude "empty" algebras. For Heyting algebras, the subalgebra C generated by any subset X of A is always non-empty, since 0 must be an element of C. But the same statement does not necessarily hold in general, as some algebras may not have constants in their definition. In such a case, we need to assume that a set X of generators is non-empty. Such an example of a class of algebras is treated in Remark 7.4 below.
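On a finite algebra the Xn iteration above terminates, so it can be run directly. The following sketch (function names are mine) computes the subalgebra of the Gödel chain G5 = {0, 1/4, 2/4, 3/4, 1} generated by {1/4}.

```python
from fractions import Fraction

# ambient algebra: the Gödel chain G5; a -> b = 1 if a <= b, else b
def imp(a, b): return Fraction(1) if a <= b else b

def generated_subalgebra(X):
    # the X_n iteration from the text: start from X ∪ {0} and close
    # under ∨ (max), ∧ (min) and → until a fixed point is reached
    C = set(X) | {Fraction(0)}
    while True:
        new = C | {op(a, b) for a in C for b in C
                   for op in (max, min, imp)}
        if new == C:
            return C
        C = new

sub = generated_subalgebra({Fraction(1, 4)})
assert sub == {Fraction(0), Fraction(1, 4), Fraction(1)}   # a copy of G3
```

This also illustrates the local finiteness of Gödel chains mentioned below: each singleton generates only a three-element subalgebra.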

Remark 7.4 We will show that the class of all distributive lattices is locally finite. Let L be an arbitrary distributive lattice, and X be any non-empty finite subset of L. Our goal is to show that the sublattice generated by X is finite.4 Let L∗ be the set of all elements of L of the form ⋁i≤k (⋀j≤ni xij) with xij ∈ X for all i, j. Since y ∧ y = y and y ∨ y = y hold for all y in any lattice, we may assume that each set {xij : j ≤ ni} consists of distinct elements for each i, and also that the set {⋀j≤ni xij : i ≤ k} consists of distinct elements. Then, it can be shown that when X consists of m elements, the number of elements of L∗ is no more than 2^(2^m). We show that L∗ = ⟨L∗, ∨, ∧⟩ is a sublattice of L. For this purpose, it is enough to check that L∗ is closed under both ∨ and ∧. The former is obvious, because of the form of elements of L∗. To show the latter, suppose that a and b are arbitrary elements of L∗, of the forms ⋁i≤k (⋀j≤ni uij) and ⋁d≤s (⋀e≤md vde) with uij, vde ∈ X, respectively. Then, a ∧ b = (⋁i≤k (⋀j≤ni uij)) ∧ (⋁d≤s (⋀e≤md vde)) = ⋁i≤k ⋁d≤s ((⋀j≤ni uij) ∧ (⋀e≤md vde)), using the distributive law. Thus, a ∧ b is of the form of elements of L∗. Next, if L′ is any sublattice of L which includes X, then clearly L′ contains every element of L∗. Hence L∗ is the smallest sublattice of L including X, which is moreover finite.

Exercise 7.4 Show that every Boolean algebra is locally finite. ([Hint] Imitate the proof sketched in Remark 7.4, and use the fact that every element of the subalgebra generated by a given set X is expressed in disjunctive normal form, i.e., the form ⋁i≤k (⋀j≤ni yij) where yij ∈ X or yij = ¬xij with xij ∈ X for every i, j.)

As shown in Sect. 6.5, the class of all Gödel chains, i.e., all totally ordered Heyting algebras, is locally finite. On the other hand, there exists a Heyting algebra which is not locally finite.
As a matter of fact, the Lindenbaum-Tarski algebra FInt is such an example. For any propositional variable p, consider the subalgebra of FInt generated by the single equivalence class [p]. Then, this subalgebra, called the Rieger-Nishimura lattice, is shown to be infinite.5 In other words, there exist infinitely many formulas consisting only of the variable p such that no two distinct formulas among them are logically equivalent in intuitionistic logic. (To simplify the figure, each equivalence class is denoted by its representative; see Fig. 7.1.)

◦ ¬¬p ∨ (¬¬p → p) ¬¬p → p

◦ ◦



p ∨ ¬p

◦ ◦

¬p ∨ ¬¬p ◦ ¬¬p



¬p ◦

(¬¬p → p) → (p ∨ ¬p)

◦ p ◦ 0

Footnote 4: As we mentioned in Example 6.7, the word sublattice is used instead of subalgebra, as we are discussing lattices.
Footnote 5: The fact was discovered independently by Rieger (1949) and Nishimura (1960).

7.4 Finite Embeddability Property and Finite Model Property

To complete our proof of Theorem 7.2, we need to show the equivalence of condition 2 and condition 3. It remains to show that condition 3 implies condition 2, as the converse direction holds trivially. This can be derived from Lemma 7.7 below, which was shown by McKinsey and Tarski (1946). We introduce some basic notions.

Definition 7.4 A partial algebra B is a set B with partial operations. Thus, when f is such an n-ary partial operation, f(b1, . . . , bn) may not be defined for some b1, . . . , bn ∈ B. A partial algebra B is a partial subalgebra of an algebra A when the underlying set B of B is a subset of the underlying set A of A and, for each operation f, if fB(b1, . . . , bn) is defined for b1, . . . , bn ∈ B then fB(b1, . . . , bn) = fA(b1, . . . , bn).

Definition 7.5 A given class K of algebras of the same type has the finite embeddability property when for any finite partial subalgebra B of an algebra A in K, there exists a finite algebra D in K into which B is embedded.

We note that when K is locally finite, it obviously has the finite embeddability property, as it is enough to take for D the smallest subalgebra of A generated by B.

Lemma 7.7 (Finite embeddability property) The class of all Heyting algebras has the finite embeddability property.

Proof Suppose that a finite partial subalgebra B of a Heyting algebra A is given. Let D be the sublattice of A (regarded as a distributive lattice) generated by the set B ∪ {0, 1}. Since every distributive lattice is locally finite (see Remark 7.4), D is a finite distributive lattice having 0 and 1 as its least and greatest elements, respectively. By Lemma 7.1, D forms a finite Heyting algebra. It remains to show that the inclusion mapping h of B into D (i.e., h(b) = b for all b ∈ B) preserves all existing operations in B.
As D is defined to be a sublattice of A, for all b, c ∈ B such that b ∨B c exists, it is equal to b ∨A c since B is a partial subalgebra of A, and hence is equal to b ∨D c as D is a sublattice of A. Similarly, b ∧B c = b ∧D c when the left side is defined. As stated in Lemma 7.1, for elements b, c ∈ B (⊆ D), the element b →D c is defined to be max{x ∈ D : x ∧ b ≤ c}, while the implication b →A c in A is equal to max{x ∈ A : x ∧ b ≤ c}. Thus, b →D c ≤ b →A c in general. But when b →A c ∈ B, it belongs to D and hence b →A c ≤ max{x ∈ D : x ∧ b ≤ c} = b →D c. As B is a partial subalgebra of A, the equalities b →B c = b →A c = b →D c hold whenever b →B c is defined. □

Theorem 7.8 (Finite model property of intuitionistic logic) If a formula α is not provable in intuitionistic logic, then there always exists a finite Heyting algebra in which α is not valid.

Proof Suppose that α is not provable in intuitionistic logic. Then, by Theorem 7.4, there exist a Heyting algebra A and an assignment g on A such that g(α) < 1.


Let S be the set of all subformulas of α, and B be the set {g(β) : β ∈ S} ∪ {0, 1}, which is a finite subset of A. Then, the set B itself can be regarded as a finite partial subalgebra of A. By Lemma 7.7, there exists a finite Heyting algebra D into which B is embedded. We define an assignment h on D by h(p) = g(p) for every propositional variable p ∈ S. (For any propositional variable q ∉ S, it is enough to define h(q) in an arbitrary way.) Then, it is shown by induction that h(β) = g(β) holds for any β ∈ S. To see this, let us consider the case where β is of the form γ → δ. Obviously, both γ, δ ∈ S. By the induction hypothesis, h(γ) = g(γ) and h(δ) = g(δ). Hence h(β) = h(γ) →D h(δ) = g(γ) →D g(δ) = g(γ) →B g(δ) by Lemma 7.7, which is also equal to g(γ) →A g(δ) (= g(β)), as g(γ) →B g(δ) is defined. Since α ∈ S, in particular we have that h(α) = g(α) < 1. Thus, α is not valid in this finite Heyting algebra D. □

We are interested in the finite model property, since the property is useful in obtaining the decidability of a given logic, as the following theorem shows. In Chap. 1, a finitely axiomatized Hilbert-style system for intuitionistic logic is given.6 Hence, once its finite model property is shown, we can conclude that intuitionistic logic is decidable, although the decidability result itself was already shown by proof-theoretic methods in Chap. 3.

Theorem 7.9 (Harrop's theorem) A logic is decidable if it is finitely axiomatizable and also has the finite model property.

Proof We give here a brief outline of a proof of Harrop's theorem (Harrop 1958) for the case of intuitionistic logic, though it can be easily generalized. Our goal is to describe an algorithm for deciding whether a given formula α is provable in intuitionistic logic. The algorithm consists of the following two independent procedures:
• The first procedure is to try to find a derivation of α in our finitely axiomatizable system.
We generate formulas which are provable in intuitionistic logic one by one in an exhaustive way, and check whether α is among them, until α is found.
• The second procedure is to try to find a countermodel of α. We generate finite Heyting algebras one after another in an exhaustive way and check whether α is falsified by some assignment on the algebra at hand. Since a given Heyting algebra is finite, there are only finitely many assignments. If α is not falsified by any assignment on this algebra, we generate the next finite Heyting algebra and repeat the procedure until a finite algebra is found in which α is false.
We run these two procedures in parallel. Each formula α is either provable in intuitionistic logic, or else there exists a finite Heyting algebra which is a countermodel of it, due to Theorem 7.8. Hence, one of these procedures will eventually terminate and tell us whether or not α is provable, depending on which procedure terminates, the first one or the second. □

6 See also footnote 1 in Sect. 5.3.
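The interleaving in Harrop's argument can be sketched in a few lines of code. The two enumerators below are toy stand-ins (even numbers playing the role of provable formulas, odd numbers that of refutable ones), since actually enumerating derivations and finite Heyting algebras is beside the point here; only the parallel-search pattern is illustrated.

```python
def decide(target, proofs, countermodels):
    """Interleave two semi-decision procedures in lockstep; by assumption
    every target eventually shows up in exactly one of the two streams."""
    for proved, refuted in zip(proofs, countermodels):
        if proved == target:
            return True       # a derivation of target was found
        if refuted == target:
            return False      # a finite countermodel of target was found

# Toy stand-ins (hypothetical): evens play "provable", odds play "refutable".
proofs = (n for n in range(1000) if n % 2 == 0)
countermodels = (n for n in range(1000) if n % 2 == 1)
print(decide(7, proofs, countermodels))    # prints False
```

Under the stated assumption that every item appears in exactly one stream, the loop is guaranteed to terminate, mirroring the totality claim in the proof.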


The theorem gives an important theoretical criterion for decidability. From a practical point of view, however, the above algorithm is extremely inefficient and hardly usable.

7.5 Canonical Extensions of Heyting Algebras

A primary method of constructing a Boolean algebra is to take the power set of any given set. As shown in Theorem 6.5, in general every Boolean algebra can be embedded into a Boolean algebra of this type, i.e., a powerset Boolean algebra. It is natural to search for a similar representation for Heyting algebras.

Heyting Algebras Constructed from Partially Ordered Sets
Suppose that a partially ordered set S = ⟨S, ≤⟩ is given. A subset D of S is upward closed if and only if for all a, b ∈ S such that a ≤ b, a ∈ D implies b ∈ D. Let U(S) be the set of all upward closed subsets of S. Obviously, the empty set ∅ and S itself are upward closed subsets of S. It is also easy to see that if both D1 and D2 are upward closed subsets of S then both D1 ∪ D2 and D1 ∩ D2 are upward closed. Now, for all subsets D1, D2 of S, define a subset D1 ⇒ D2 of S by

D1 ⇒ D2 = {a ∈ S : for all c such that a ≤ c, if c ∈ D1 then c ∈ D2}.   (7.1)

Clearly, D1 ⇒ D2 is upward closed by the definition. We define an algebra U(S) by ⟨U(S), ∪, ∩, ⇒, ∅⟩. We will show that U(S) is a Heyting algebra. It is clear that ⟨U(S), ∪, ∩⟩ is a lattice with the least element ∅ and the greatest element S (with respect to set inclusion). So it suffices to show that it satisfies the law of residuation. Let D, E and F be upward closed subsets of S. Suppose first that D ∩ E ⊆ F holds. To show that D ⊆ E ⇒ F, let a ∈ D and take any c ∈ E such that a ≤ c. Since D is upward closed, c ∈ D and hence c ∈ D ∩ E. Thus, c is in F by the assumption. Therefore D ⊆ E ⇒ F. Conversely, suppose that D ⊆ E ⇒ F and also that a ∈ D ∩ E. Then a ∈ E ⇒ F, and as a ∈ E, we have that a ∈ F. Thus, D ∩ E ⊆ F.

This Heyting algebra U(S) is called the dual of the partially ordered set S. In particular, when the order ≤ of S is the identity relation =, the set U(S) is equal to the power set ℘(S). In this case, the algebra U(S) is none other than the powerset Boolean algebra ℘(S). We note that in such a case, D1 ⇒ D2 = (D1)c ∪ D2, where (D1)c is the complement of D1 with respect to S.

Exercise 7.5 Let S be a partially ordered set ⟨{a, b, c}, ≤⟩ and let < be the strict order induced by the partial order ≤.
1. Suppose first that x < y holds if and only if x = b and y = a. Find all elements of U(S), and describe the Heyting algebra U(S).
2. Repeat the above exercise, when we suppose that x < y holds if and only if x is either b or c, and y = a.
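For finite posets, the construction of U(S) and of the operation ⇒ is directly computable. The sketch below (with ad hoc names) builds U(S) for the poset of Exercise 7.5, part 1, and checks the law of residuation by brute force.

```python
from itertools import chain, combinations

def upsets(elems, leq):
    """All upward closed subsets of a finite poset, as frozensets."""
    def is_up(s):
        return all(y in s for x in s for y in elems if leq(x, y))
    power = chain.from_iterable(combinations(elems, r)
                                for r in range(len(elems) + 1))
    return [frozenset(s) for s in power if is_up(frozenset(s))]

def implies(elems, leq, d1, d2):
    """D1 ⇒ D2 = {a : for all c with a ≤ c, c ∈ D1 implies c ∈ D2}."""
    return frozenset(a for a in elems
                     if all(c not in d1 or c in d2
                            for c in elems if leq(a, c)))

# Exercise 7.5(1): S = {a, b, c} with b < a the only strict inequality.
elems = ['a', 'b', 'c']
leq = lambda x, y: x == y or (x == 'b' and y == 'a')
U = upsets(elems, leq)
print(len(U))                                        # prints 6
# The law of residuation: D ∩ E ⊆ F iff D ⊆ E ⇒ F, for all D, E, F.
assert all((D & E <= F) == (D <= implies(elems, leq, E, F))
           for D in U for E in U for F in U)
```

The six upsets are ∅, {a}, {c}, {a, c}, {a, b} and {a, b, c}: every subset containing b must also contain a.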


Next, let us consider whether a given Heyting algebra A can be represented as U(S) for some partially ordered set S, and, if so, how such an S can be obtained.

Definition 7.6 (Filters) Let A be a Heyting algebra ⟨A, ∨, ∧, →, 0⟩. A subset F of A is a filter of A, if
1. F ≠ ∅,
2. if x, y ∈ F then x ∧ y ∈ F,
3. if x ∈ F and x ≤ y then y ∈ F.

Exercise 7.6 Show that a subset F of A is a filter if and only if F satisfies that
• 1 ∈ F,
• if x ∈ F and x → y ∈ F then y ∈ F.

Clearly, the set A itself is a filter of A. For each element a ∈ A, let Fa be the set {x ∈ A : a ≤ x}. Then, Fa is a filter, which in fact is the smallest filter containing the element a. This filter Fa is called the principal filter generated by a. For each nonempty subset U of A, consider the set G = {x ∈ A : a1 ∧ . . . ∧ am ≤ x for some a1, . . . , am ∈ U}. Then, G is a filter, which is in fact the smallest filter containing U. This filter G is called the filter generated by U. When U = {b1, . . . , bn}, the filter generated by U is shown to be equal to the principal filter generated by the element b1 ∧ . . . ∧ bn. Thus, every filter of a finite Heyting algebra is principal.

Exercise 7.7 1. Confirm that G in the above paragraph is in fact the smallest filter containing a given nonempty subset U of A.
2. Suppose that F is a filter of a Heyting algebra A and a is an element of A. Show that the filter generated by the set F ∪ {a} is equal to the set {x ∈ A : a ∧ u ≤ x for some u ∈ F}.

A filter F of A is said to be proper if F ≠ A. A filter F of A is prime if it is proper and satisfies that for all x, y, if x ∨ y ∈ F then either x ∈ F or y ∈ F. A proper filter F of A is maximal if F ⊆ G implies G = F for any proper filter G; that is, if F ⊆ G and G ≠ F imply G = A for any filter G. A proper filter F of A is an ultrafilter if either x ∈ F or ¬x ∈ F for all x ∈ A.

Exercise 7.8 For any filter F of a Heyting algebra A, show that for all a, b ∈ A:
1.
a ∧ b ∈ F if and only if a ∈ F and b ∈ F, 2. a ∨ b ∈ F if and only if a ∈ F or b ∈ F, whenever F is prime. Example 7.5 Let us consider the lattice C in the following figure. It is distributive, and hence forms a Heyting algebra. Both filters Fa and Fb are maximal filters of C. The filter Fc is prime but not maximal, while the filter Fd is not prime.

7.5 Canonical Extensions of Heyting Algebras

109

[Figure: Hasse diagram of the lattice C, with least element 0, greatest element 1, and intermediate elements a, b, c, d, e, f.]

Lemma 7.10 Let A be any Heyting algebra. Then the following statements hold for any filter F of A.
1. F is maximal if and only if F is an ultrafilter,
2. if F is maximal then F is prime.

Proof 1. Suppose first that F is maximal, and a ∉ F. As F is maximal, the filter generated by F ∪ {a} must be equal to A. Hence, in particular there exists an element c ∈ F such that a ∧ c ≤ 0. Thus, c ≤ ¬a and hence ¬a ∈ F. Therefore, F is an ultrafilter. Conversely, suppose that a given ultrafilter F is properly included in another filter G. Take an element b ∈ G\F. Since b ∉ F, ¬b must be a member of F and hence of G. As both b and ¬b belong to G, the element 0 (= b ∧ ¬b) is in G. Hence G = A. This means that F is maximal.
2. Suppose that F is maximal. To show that F is prime, suppose that a ∨ b ∈ F and b ∉ F for a, b ∈ A. Since F is an ultrafilter due to the first statement of our lemma, ¬b ∈ F and hence a ∨ ¬b ∈ F. As both a ∨ b and a ∨ ¬b belong to the filter F, we have a = a ∨ 0 = a ∨ (b ∧ ¬b) = (a ∨ b) ∧ (a ∨ ¬b) ∈ F. Hence F is prime. □

Corollary 7.11 For any Boolean algebra A, the following three conditions are equivalent.
1. F is a maximal filter,
2. F is a prime filter,
3. F is an ultrafilter.

Proof Due to Lemma 7.10, it is enough to show that any prime filter is an ultrafilter when A is a Boolean algebra. Obviously, a ∨ ¬a = 1 ∈ F for all a in a Boolean algebra A. If F is prime, then either a ∈ F or ¬a ∈ F for all a ∈ A, which means that F is an ultrafilter. □

Example 7.6 Let us consider the Heyting algebra J in Exercise 7.2 (see also Exercise 6.4). Proper filters of J are the sets Fa, Fb, Fc and {1}, where Fu = {x ∈ J : u ≤ x} for u ∈ {a, b, c}. Among them, only Fc is not prime. In fact, a ∨ b = c ∈ Fc, while neither a nor b is an element of Fc.

We can show the following by using Zorn's lemma (see e.g. Davey and Priestley 2002), but we omit the proof.
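The classification in Example 7.6 can also be checked by brute force. The five-element lattice used below is a reconstruction assumed from the description given there (atoms a and b with a ∨ b = c < 1); the original definition of J is in Exercise 7.2, so treat this encoding as hypothetical.

```python
from itertools import chain, combinations

# Assumed reconstruction of J: 0 < a, b < c < 1, with a, b incomparable.
elems = ['0', 'a', 'b', 'c', '1']
order = {('0', 'a'), ('0', 'b'), ('0', 'c'), ('0', '1'),
         ('a', 'c'), ('a', '1'), ('b', 'c'), ('b', '1'), ('c', '1')}
leq = lambda x, y: x == y or (x, y) in order

def join(x, y):
    ubs = [z for z in elems if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))

def meet(x, y):
    lbs = [z for z in elems if leq(z, x) and leq(z, y)]
    return next(z for z in lbs if all(leq(w, z) for w in lbs))

def is_filter(F):           # nonempty, closed under meets, upward closed
    return (all(meet(x, y) in F for x in F for y in F)
            and all(y in F for x in F for y in elems if leq(x, y)))

def is_prime(F):            # proper, and x ∨ y ∈ F forces x ∈ F or y ∈ F
    return (set(F) != set(elems)
            and all(x in F or y in F
                    for x in elems for y in elems if join(x, y) in F))

subsets = chain.from_iterable(combinations(elems, r)
                              for r in range(1, len(elems) + 1))
filters = [frozenset(s) for s in subsets if is_filter(frozenset(s))]
proper = [F for F in filters if F != frozenset(elems)]
print(len(proper))                                   # prints 4
print(sorted(''.join(sorted(F)) for F in proper if is_prime(F)))
```

The four proper filters are {1}, Fc, Fa and Fb, and exactly Fc fails primeness, as in Example 7.6.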
Theorem 7.12 (Prime filter theorem) Let F be a filter of a Heyting algebra A such that a ∉ F for an element a of A. Then there exists a prime filter G of A such that a ∉ G and F ⊆ G.


Corollary 7.13 Let F be a filter of A such that a → b ∉ F. Then there exists a prime filter G such that F ⊆ G, a ∈ G but b ∉ G.

Proof Let F∗ be the filter generated by the set F ∪ {a}. If b ∈ F∗, then a ∧ u ≤ b for some u ∈ F, and hence u ≤ a → b by the law of residuation. Therefore a → b must be an element of F. But this contradicts our assumption. Thus, b ∉ F∗. Now, applying the prime filter theorem to F∗ and b, we get a prime filter G such that F∗ ⊆ G and b ∉ G. Obviously, a ∈ G and F ⊆ G. □

Canonical Embedding Algebras
For a given Heyting algebra A, define D(A) to be the set of all prime filters of A. Obviously, D(A) = ⟨D(A), ⊆⟩ is a partially ordered set, which is called the dual (intuitionistic) frame of the Heyting algebra A. Now, for a given Heyting algebra A, consider the Heyting algebra U(D(A)), which consists of all upward closed subsets of D(A). This algebra is called the canonical extension (also the canonical embedding algebra) of A and is expressed also as Aδ. Let us consider a mapping σ from A to Aδ defined by

σ(a) = {F ∈ D(A) : a ∈ F}   (7.2)

for each a ∈ A. We note that the set {F ∈ D(A) : a ∈ F} is an upward closed subset of D(A). In fact, for all prime filters F and G of A, if a ∈ F and F ⊆ G then a ∈ G. We show that σ is an embedding of A into Aδ. First, it is obvious that σ(0) = ∅, as 0 ∉ F for any prime filter F. Next, from Exercise 7.8, it follows that σ(a ∧ b) = σ(a) ∩ σ(b) and σ(a ∨ b) = σ(a) ∪ σ(b). To show that σ(a → b) = σ(a) ⇒ σ(b) (as for the definition of ⇒, see Eq. 7.1), we need to check that a → b ∈ F if and only if for any prime filter G ⊇ F, a ∈ G implies b ∈ G. The only-if-part is easily shown, and the if-part follows from Corollary 7.13. Thus, σ is a homomorphism. It remains to show that σ is one-to-one. Suppose that a ≠ b. Without loss of generality, we can assume that b ≰ a. Consider the principal filter Fb generated by the element b. Then a ∉ Fb by our assumption.
By the prime filter theorem, there exists a prime filter G of A such that a ∉ G and Fb ⊆ G. Since a ∉ G, the filter G does not belong to σ(a), while G ∈ σ(b) as b ∈ Fb ⊆ G. Thus, σ(a) ≠ σ(b). This mapping σ is called a canonical embedding. In conclusion, we have the following theorem by Stone (1936, 1937).

Theorem 7.14 (Stone's representation theorem for Heyting algebras) Every Heyting algebra A can be embedded into the Heyting algebra Aδ.

As shown in Corollary 7.11, every prime filter is maximal in any Boolean algebra. Thus the partial order of D(A) for a Boolean algebra A is the identity relation. Then, obviously every subset of D(A) is upward closed with respect to this order, and thus Aδ is equal to the powerset Boolean algebra ℘(D(A)). Hence, Theorem 6.5 can be regarded as a particular case of the above theorem.


Example 7.7 Let us take the Heyting algebra J in Example 7.6 again. As mentioned there, the partially ordered set D(J) consists of three elements Fa, Fb and {1} (= F1). Here we abbreviate them as ā, b̄ and 1̄, respectively. They are ordered by set inclusion as 1̄ ⊂ ā and 1̄ ⊂ b̄. Hence, all upward closed subsets of D(J) are ∅, {ā}, {b̄}, {ā, b̄} and {ā, b̄, 1̄}, which compose the universe of Jδ. It can be shown that Jδ forms a Heyting algebra, which is in fact isomorphic to J.

The lattice operations of U(S) for a partially ordered set S are set union and intersection, and hence the lattice reduct of the Heyting algebra U(S) is always complete, i.e., closed under arbitrary joins and meets. On the other hand, one can easily find a Heyting algebra (even a Gödel algebra) whose lattice reduct is not complete. This implies that the embeddings in Stone's representation theorem are not always surjective. But for finite Heyting algebras we have the following, which can be shown by generalizing the argument in Example 7.7.

Corollary 7.15 Every finite Heyting algebra A is isomorphic to its canonical extension Aδ.

Proof We show that the canonical embedding σ from A to Aδ is surjective when A is finite. Let U be any given upward closed subset of prime filters of A, ordered by set inclusion. Our goal is to show that for each such set U there always exists an element a ∈ A such that for any prime filter F, F ∈ U if and only if a ∈ F. As A is finite, U is a finite set of prime filters of the form {b̄1, . . . , b̄m}. (Here we borrow the notation in Example 7.7, and thus b̄ denotes the principal filter generated by b.) The elements b̄1, . . . , b̄m are partially ordered by set inclusion ⊆. Note that b̄i ⊆ b̄j iff bj ≤ bi. We say that a filter c̄ in U is minimal in U when d̄ ⊆ c̄ implies d̄ = c̄ for any d̄ ∈ U, i.e., when c ≤ d implies d = c for any d̄ ∈ U. Let c̄1, . . . , c̄k be all the minimal members of U.
By the definition, any b¯ ∈ U includes some minimal c¯i . In such a case, b ≤ ci holds in A. Now we define the element a ∈ A by a = c1 ∨ . . . ∨ ck , and show that a is the element which we are looking for. Suppose first that F ∈ U for a prime filter F of A. Then there exists i (≤ k) such that c¯i ⊆ F, which implies ci ∈ F, and hence a ∈ F. Conversely, suppose that a ∈ F. Then for some i (≤ k), ci must be in F, as F is prime. This implies that c¯i ⊆ F. Since c¯i ∈ U and U is an upward closed set, F is also a member of U .  Canonical extensions of Heyting algebras will be discussed in connection with completeness of logics with respect to Kripke semantics in Chap. 10.
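The counting in Example 7.7, and hence the finite case of Corollary 7.15, can be replayed mechanically: D(J) is the three-element poset with 1̄ below both ā and b̄, and its upward closed subsets number exactly five, matching the five elements of J. The names below are ad hoc encodings of that poset.

```python
from itertools import chain, combinations

# The poset D(J) of Example 7.7: three prime filters, with 1bar below both
# abar and bbar, which are themselves incomparable.
pts = ['abar', 'bbar', 'onebar']
leq = lambda x, y: x == y or x == 'onebar'

def is_upset(s):
    return all(y in s for x in s for y in pts if leq(x, y))

power = chain.from_iterable(combinations(pts, r)
                            for r in range(len(pts) + 1))
upsets = [frozenset(s) for s in power if is_upset(frozenset(s))]
print(len(upsets))    # prints 5: the universe of J^delta has 5 elements
```

Any upset containing 1̄ must contain both ā and b̄, which rules out three of the eight subsets and leaves exactly five.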

Chapter 8

Logics and Varieties

Until now, we have discussed connections between particular logics and corresponding algebras, e.g., between classical logic and Boolean algebras, and also between intuitionistic logic and Heyting algebras. We pursue this question further in a more general setting, by developing a general study of connections between various logics and corresponding classes of algebras. Once these connections are established, significant logical consequences can be obtained through algebraic characterizations of logical properties, as shown in Sect. 8.5. The universal algebraic approach is quite powerful for attaining our goal. To make our explanation concrete and clear, we focus our attention mainly on the connection between superintuitionistic logics and classes of Heyting algebras in the following. These arguments can be easily generalized and thus apply to other classes of logics, including modal logics and substructural logics, and their corresponding classes of algebras. Readers who are not familiar with algebra may feel that some of the following discussions are too abstract and complicated. In such a case, readers may skip such paragraphs in their first reading, in particular those marked with (∗). For general information on universal algebra, consult Burris and Sankappanavar (1981).

© Springer Nature Singapore Pte Ltd. 2019
H. Ono, Proof Theory and Algebra in Logic, Short Textbooks in Logic, https://doi.org/10.1007/978-981-13-7997-0_8

8.1 Lattice Structure of Superintuitionistic Logics

Superintuitionistic logics are defined to be axiomatic extensions of Int in Chap. 5. Theorem 5.14 in that chapter says that the necessary and sufficient conditions for a set of formulas L to be a superintuitionistic logic are the following:
1. L contains all formulas that are provable in Int,
2. L is closed under substitution, i.e., if β ∈ L and β′ is any substitution instance of β then β′ ∈ L,
3. L is closed under modus ponens, i.e., if both α and α → β are in L then β ∈ L.

In the following, SUP denotes the set of all superintuitionistic logics. (Since each superintuitionistic logic is a subset of the set Φ of all formulas, SUP is a subset of the set ℘(Φ).) Hereafter, both Int and Cl denote also the sets of formulas which are provable in intuitionistic logic and classical logic, respectively. By analogy, we say that a formula ϕ is provable in L when ϕ is a member of a superintuitionistic logic L. The set SUP of all superintuitionistic logics is partially ordered by set inclusion ⊆ as sets of formulas. The smallest logic in SUP is Int and the greatest is the set Φ itself, which is called the inconsistent logic. A logic L ∈ SUP is consistent when L is a proper subset of Φ. The following lemma says that classical logic Cl is the second greatest logic in SUP.

Lemma 8.1 Every consistent superintuitionistic logic is included in classical logic Cl.

Proof First we note that the calculation of truth tables of the two-valued Boolean algebra 2 can be carried out in intuitionistic logic. More precisely, we can show the following. Let 0̂ and 1̂ be abbreviations of the formulas p ∧ ¬p and p ∨ ¬p, respectively, where p is a fixed propositional variable. To avoid confusion, only in this proof we use the symbols ∨, ∧, → for logical connectives, while we use ∨2, ∧2, →2 for the corresponding Boolean operations on the algebra 2. Then, for all a, b ∈ {0, 1} and for each logical connective ∗, we can show that the formula â ∗ b̂ is logically equivalent in Int to the formula corresponding to the value a ∗2 b. For example, the truth tables of →2 and ∧2 tell us that both 1 →2 0 = 0 and 0 ∧2 1 = 0 hold. Corresponding to them, both formulas (1̂ → 0̂) ↔ 0̂ and (0̂ ∧ 1̂) ↔ 0̂ are provable in Int.

Now our lemma can be shown as follows. Suppose that L ⊈ Cl for a given consistent superintuitionistic logic L. Then there is a formula α ∈ L which is not a tautology.
Thus, there is an assignment g on 2 such that g(α) = 0. Let p1, . . . , pm be an enumeration of all distinct propositional variables in α. For each subformula β of α, let β† be the substitution instance of β obtained by replacing each pi in β by the formula 0̂ or 1̂ according as g(pi) = 0 or 1, for each i = 1, . . . , m. Then, from what we proved above, it can be shown by induction that β† is logically equivalent in Int to 0̂ or 1̂ according as g(β) = 0 or 1. In particular, α† ↔ 0̂ is provable. Since L is closed under both substitution and modus ponens, the formula α†, and hence 0̂, belongs to L. Moreover, 0̂ → γ is provable in Int for any γ. Therefore L contains every formula γ, which means that L = Φ. This contradicts our assumption that L is consistent. □

Example 8.1 Take (p ∨ q) → p for α, for example, in the first paragraph of the above proof. It is easy to see that α is false for an assignment g such that g(p) = 0 and g(q) = 1. Then, α† is (0̂ ∨ 1̂) → 0̂ in this case, which is shown to be logically equivalent to 0̂ in Int.

Due to our definition, each superintuitionistic logic is a set of formulas containing the set Int which is moreover closed under modus ponens and substitution. Thus, if both L1 and L2 are members of SUP then the set intersection L1 ∩ L2 is also in SUP. Obviously, this argument works well also for the set intersection of infinitely many superintuitionistic logics.

Next, take any set S of formulas. Consider the set Λ(S) of all superintuitionistic logics which contain S. Notice that the set Λ(S) is non-empty, as Φ always belongs to it. Let LS be the intersection of all logics in Λ(S), i.e., LS = ⋂Λ(S) (that is, the intersection of all L ∈ Λ(S)). Then, LS is shown to be a superintuitionistic logic which contains S, and hence belongs also to Λ(S). Thus, LS is the smallest superintuitionistic logic containing the set S. On the other hand, since the axiomatic extension Int[S] of Int with the set of axioms S is obviously the smallest superintuitionistic logic containing the set S, the logic LS must be equal to Int[S].

Though the set intersection of logics in SUP belongs also to SUP, the set union L1 ∪ L2 of L1 and L2 in SUP is not always a member of SUP, since the union may not be closed under modus ponens. So define L1 ∨ L2 to be the smallest superintuitionistic logic containing L1 ∪ L2. It is easy to see the following.

Lemma 8.2 The set SUP of all superintuitionistic logics forms a lattice with respect to ∩ and ∨. It is in fact a complete lattice.

Exercise 8.1 Show that the lattice SUP of all superintuitionistic logics is distributive.

As we noted before, if a superintuitionistic logic L is finitely axiomatizable, it can be axiomatized by a single axiom and hence can be represented in the form Int[α] for a formula α. When L is axiomatized by an axiom α, it is also axiomatized by an axiom α′, where α′ is obtained from α by renaming propositional variables appearing in it. Here, by renaming we mean a replacement of distinct propositional variables in α with other distinct propositional variables. Note that a formula δ belongs to Int[α], i.e., δ is provable in Int[α], iff a formula (α1 ∧ . . . ∧ αm) → δ is provable in Int for some substitution instances α1, . . . , αm of α.
This follows from the deduction theorem for superintuitionistic logics (see Corollary 5.2 in Chap. 5). Now we have the following.

Lemma 8.3 If both L1 and L2 are finitely axiomatizable superintuitionistic logics, then L1 ∩ L2 and L1 ∨ L2 are also finitely axiomatizable. More precisely, when L1 and L2 are axiomatized by axioms α and β, respectively, then L1 ∩ L2 and L1 ∨ L2 are axiomatized by the axiom α ∨ β and α ∧ β, respectively. (Without loss of generality, we assume in the former case that no variables appear in common in both α and β.)

Proof We give here a proof of the case for L1 ∩ L2. We can assume that no variables appear in common in both α and β, by renaming propositional variables if necessary. Since the formula α ∨ β belongs to L1 ∩ L2, the inclusion Int[α ∨ β] ⊆ L1 ∩ L2 holds obviously. Conversely, if a formula δ belongs to both L1 and L2, then by the deduction theorem both formulas (α1 ∧ . . . ∧ αm) → δ and (β1 ∧ . . . ∧ βn) → δ are provable in Int for some substitution instances α1, . . . , αm of α and some substitution instances β1, . . . , βn of β. Hence ((α1 ∧ . . . ∧ αm) ∨ (β1 ∧ . . . ∧ βn)) → δ, and therefore ⋀i≤m ⋀j≤n (αi ∨ βj) → δ, are provable in Int. But each αi ∨ βj is a substitution instance of α ∨ β, as no variables appear in common in both α and β. Thus we have L1 ∩ L2 ⊆ Int[α ∨ β]. □


8.2 The Variety HA of All Heyting Algebras

In Chap. 7, L(A) is defined to be the set of all formulas which are valid in a given Heyting algebra A. Similarly, for a given class C of Heyting algebras, L(C) denotes the set of all formulas which are valid in C, i.e., which are valid in all Heyting algebras in C. We have the following theorem.

Theorem 8.4 For any Heyting algebra A, L(A) is a superintuitionistic logic. In general, for any class C of Heyting algebras, L(C) is a superintuitionistic logic.

Proof It is enough to check that L(A) satisfies the three conditions for superintuitionistic logics mentioned at the beginning of the previous section. It is obvious that Int ⊆ L(A) by Theorem 7.2. Suppose that both α and α → β are valid in A. Then for any assignment f, both f(α) = 1 and f(α) ≤ f(β) hold, as f(α → β) = f(α) → f(β) = 1. Thus we have f(β) = 1. As f is an arbitrary assignment, β is valid in A. Next take any substitution σ. We suppose that σ replaces each propositional variable pi in α by a formula γi for each i ∈ I. Let g be an arbitrary assignment, and suppose that g(γi) takes a value bi (∈ A) for each i. We take an assignment f such that f(pi) = bi for each i. Then by induction we can show that g(σ(ϕ)) = f(ϕ) for any formula ϕ. Now suppose that a formula α is valid in A. Then g(σ(α)) = f(α) = 1 holds. As this holds for every assignment g, the formula σ(α) is valid. Hence, the set L(A) is a superintuitionistic logic. The second statement is immediate from the above argument and the fact that any intersection of superintuitionistic logics is a superintuitionistic logic. □

Due to Definition 7.2, a superintuitionistic logic L(A) (and L(C)) is said to be characterized by a Heyting algebra A (and by a class C of Heyting algebras, respectively). The following lemma shows a connection between basic algebraic operations on Heyting algebras and inclusion relations of superintuitionistic logics.
(We note that the same connections hold in the general case.)

Lemma 8.5 1. If B is a subalgebra of a Heyting algebra A then L(A) ⊆ L(B).
2. If B is a homomorphic image of a Heyting algebra A then L(A) ⊆ L(B).
3. If B is a direct product of Heyting algebras Aj for j ∈ J then L(B) = ⋂j∈J L(Aj).

Proof 1. This was proved essentially in Lemma 6.8.
2. Let B be the homomorphic image of A by a homomorphism h. Taking the contraposition, suppose that a formula α is not valid in B. Then there exists an assignment g on B such that g(α) < 1. We define an assignment f on A so that f satisfies f(p) ∈ h⁻¹(g(p)) for each propositional variable p. (Here, h⁻¹(g(p)) denotes the set {a ∈ A : h(a) = g(p)}, which is always non-empty since h is surjective. So, it suffices to take any element a of this set and define f(p) = a.) We show that h(f(β)) = g(β) for every formula β, using induction on the length of the formula β. In fact, this is true for any propositional variable by the definition of f. Also, when β is of the form δ ∨ γ, h(f(β)) = h(f(δ) ∨ f(γ)) as f is an assignment, which is equal to h(f(δ)) ∨ h(f(γ)) because h is a homomorphism, and hence is equal to


g(δ) ∨ g(γ ), which is g(β). The same argument works as well for other connectives. Now, by taking α for β in particular, we have h( f (α)) = g(α) < 1. Hence, f (α) cannot be equal to 1 as h(1) = 1, and therefore α is not valid in A. 3. Just for the simplicity’s sake, we suppose that B is of the form C × D. For any assignment f on B, define assignments g and h on C and D, respectively, by the condition that g( p) = c and h( p) = d whenever f ( p) = (c, d) for every c ∈ C, d ∈ D and every propositional variable p. Conversely, if assignments g and h on C and D, respectively, are given, we can define an assignment f on B by f ( p) = (g( p), h( p)). Now we suppose that f ( p) = (g( p), h( p)) holds for any propositional variable p, where f, g and h are assignments on B, C and D, respectively. Then, by using the induction, we can show that f (α) = (g(α), h(α)) for any formula α. Thus, f (α) = 1B = (1C , 1D ) if and only if both g(α) = 1C and h(α) = 1D . Hence, α is valid in B if and only if it is valid in both C and D. That is, L(B) = L(C) ∩ L(D).  For a given class K of algebras of the same type, let H (K ), S(K ) and P(K ) be the class of all homomorphic images of algebras from K , the class of all subalgebras of algebras from K and the class of all direct products of algebras from K . A class K is said to be closed under H when H (K ) ⊆ K . This is defined similarly for S and for P. Definition 8.1 (Varieties) A class of algebras K of the same type is called an variety if K is closed under all of H, S and P. For each superintuitionistic logic L, define VL to be the class of all Haying algebras A such that L ⊆ L(A), that is, the class of all Heyting algebras in which every formula provable in L is valid. We will call each algebra in VL , an L-Heyting algebra. The following result is an immediate corollary of Lemma 8.5. Corollary 8.6 For a given superintuitionistic logic L, the class VL of all L-Heying algebras is a variety. 
If L is intuitionistic logic Int (classical logic Cl), then VL is the class of all Heyting algebras (the class of all Boolean algebras, respectively). Thus each of them is a variety, which is denoted by HA and BA, respectively (see Theorem 6.4). Remark 8.2 For a given superintuitionistic logic L, the following three conditions to characterize L by algebras are shown to be mutually equivalent. 1. L is characterized by a single Heyting algebra. 2. L is characterized by a class of Heyting algebras. 3. L is characterized by a set of at most countably many Heyting algebras. It is obvious that 1 implies 2. Suppose that L is characterized by a class C of Heyting algebras. It is clear that every algebra A in C satisfies that L ⊆ L(A). As we assume that our language is countable, the set of all formulas is also countable. Let {α j : j ∈ J } be any enumeration of formulas which are non-members of L.


Then the index set J is at most countable. Now, by our assumption, for each j ∈ J, there exists an algebra Aj in C such that αj ∉ L(Aj). Then it is easy to see that L is characterized also by the set {Aj : j ∈ J} of Heyting algebras. Lastly, to show that 3 implies 1, suppose that L is characterized by a set (but not a class) of Heyting algebras {Aj : j ∈ J}. We define a Heyting algebra A to be the direct product ∏j∈J Aj. Then L is characterized also by the single Heyting algebra A, by Lemma 8.5.

Theorem 8.4 says that L(A) is always a superintuitionistic logic for a Heyting algebra A. Then, can every superintuitionistic logic L be characterized by some Heyting algebra? This question can be answered positively by taking the Lindenbaum-Tarski algebra of L for A, which is defined in just the same way as that of intuitionistic logic. In the present case, it is enough to introduce the congruence relation ≡ in the proof of Theorem 7.4 by using provability in L in place of provability in intuitionistic logic. Actually, it can be shown that our argument works well for other axiomatic extensions discussed in Sect. 5.3, simply by restating the arguments in a general setting. Thus, we can show the following. Here the notion of algebras for substructural logics will be introduced in Chap. 9.

Theorem 8.7 (Algebraic completeness in general) Every superintuitionistic logic is algebraically complete. More precisely, for each superintuitionistic logic L and each formula ϕ, the formula ϕ is provable in L if and only if ϕ is valid in all Heyting algebras in VL. This holds for every normal modal logic and also for every substructural logic.

Proof From the definition of L-Heyting algebras, it follows that every formula which is provable in L is valid in all L-Heyting algebras. To show the converse implication, assume that a formula ϕ is not provable in L. In the same way as in Sect.
7.2, we can show that ϕ does not take the value [1] in the Lindenbaum-Tarski algebra FL of L under the canonical assignment. It remains to show that FL is an L-Heyting algebra, that is, h(α) = [1] for every axiom α of L and every assignment h on FL . Let us express α as α( p1 , . . . , pm ) by listing explicitly all propositional variables p1 , . . . , pm appearing in α. For a given assignment h on FL , let h( pi ) = [θi ] (∈ FL ) for a formula θi for each i ≤ m. Then, h(α) = α FL (h( p1 ), . . . , h( pm )) = α FL ([θ1 ], . . . , [θm ]) = [α(θ1 , . . . , θm )] holds.1 Since the formula α(θ1 , . . . , θm ) is a substitution instance of α, it is provable in L and hence h(α) = [α(θ1 , . . . , θm )] = [1]. Hence FL is an L-Heyting algebra. Consequently, ϕ is not valid in the algebra FL ∈ VL .

Free algebras and universal mapping property (∗)

From an algebraic point of view, the Lindenbaum-Tarski algebra of a logic L is understood as a free algebra in the variety VL . Let us give here a brief explanation of its key property. In the construction of the Lindenbaum-Tarski algebra of a logic L, if we take any set X of variables of arbitrary cardinality, instead of a fixed countable set of variables, and consider the set of all formulas composed of variables in X , then we get a free algebra over X in VL , which is expressed as FL (X ). We mention the following key lemma on free algebras without proof.

1 For example, if α is p → (q ∨ r ), h( p) = [γ ], h(q) = [δ] and h(r ) = [σ ], then h(α) = [γ ] →A ([δ] ∨A [σ ]) = [γ → (δ ∨ σ )], where A is FL .

8.2 The Variety HA of All Heyting Algebras


Lemma 8.8 (Free algebras) For any set X of variables, the algebra FL (X ) is a member of VL which has the following universal mapping property over X with respect to VL : For any member A of VL and any mapping f from X to A, f can be extended to a homomorphism from FL (X ) to A. When a class K of algebras is a variety, FL(K ) (X ) is in fact a member of K .

8.3 Subvarieties of HA and Superintuitionistic Logics

For a given class of algebras K of the same type, let V (K ) be the smallest variety containing K , which is called the variety generated by K . Thus, K is a variety if and only if V (K ) = K . We mention without proof a basic theorem due to Tarski (1946).

Theorem 8.9 For any class of algebras K of the same type, V (K ) = H S P(K ). That is, every algebra in V (K ) is a homomorphic image of a subalgebra of a direct product of algebras in K .

From Lemma 8.5, the following holds.

Lemma 8.10 For any class K of algebras, each of L(K ), L(H (K )), L(S(K )) and L(P(K )) determines the same set of formulas. Hence, L(V (K )) is equal to L(K ).

Exercise 8.2 Describe the details of a proof of Lemma 8.10.

A subclass K of a given variety V is a subvariety of V if and only if K forms a variety. Subvarieties of V are ordered by the inclusion relation. The following can be easily shown. The last statement follows from Theorem 8.7.

Lemma 8.11 Let v be the mapping from SUP to subvarieties of the variety HA defined by v(L) = VL . Then v is an order-reversing isomorphism, i.e., an isomorphism satisfying that L ⊂ L′ implies v(L′) ⊂ v(L) for all L and L′. Moreover, L = L(v(L)) (= L(VL )) holds for each L ∈ SUP.

Exercise 8.3 Describe the details of a proof of Lemma 8.11.

Corollary 8.6 says that for each superintuitionistic logic L, the class VL is a variety and hence is a subvariety of the variety HA of all Heyting algebras. The biggest subvariety is obviously HA itself, and the smallest is the variety consisting only of the degenerate Boolean algebra, i.e., the Boolean algebra with a single element 0 (= 1), which corresponds to the inconsistent logic. The following two theorems show fundamental connections between subvarieties of HA and superintuitionistic logics.
Theorem 8.12 (Subvarieties of HA and superintuitionistic logics (∗)) For any class K of Heyting algebras, K is a subvariety of HA if and only if it is of the form VL for a superintuitionistic logic L. In fact, V = v(L(V )) holds for every subvariety V of Heyting algebras.


Proof The if-part was shown already in Corollary 8.6. For the only-if-part, in fact K = VL(K ) holds for every subvariety K of HA. Here we give an outline of the proof of this result. First suppose that A is in K . If a formula α is valid in K , then obviously α is valid in A. In other words, every formula in L(K ) is valid in A. Thus, K ⊆ VL(K ) . Now, let K′ = VL(K ) . Since K ⊆ K′, L(K′) ⊆ L(K ). On the other hand, suppose that α ∈ L(K ), i.e., α is valid in K . Then, for any B in K′, α must be valid in B as K′ = VL(K ) . This means that α is valid in K′. Hence, L(K ) ⊆ L(K′), and therefore L(K′) = L(K ) holds. Now, suppose that a given Heyting algebra C belongs to VL(K ) (= K′). Take any set X of variables whose cardinality is equal to or greater than the cardinality of C, so that a surjective mapping f from X to C exists. By the universal mapping property over X with respect to K′ in Lemma 8.8, f is extended to a surjective homomorphism from FL(K′) (X ) to C. Since L(K′) = L(K ), f is also a surjective homomorphism from FL(K ) (X ) to C. Thus, C is a homomorphic image of FL(K ) (X ). But, since K is a variety by our assumption, FL(K ) (X ) belongs to K by Lemma 8.8, and hence C belongs to K . Thus, K′ ⊆ K holds. Consequently, K = K′ = VL(K ) .

We have shown in the above that for classes K and K′ of algebras of the same type, if K ⊆ K′ then L(K′) ⊆ L(K ). Conversely, suppose that L(K′) ⊆ L(K ). Then VL(K ) ⊆ VL(K′) by Lemma 8.11. Hence, K ⊆ K′ follows from the argument in the proof of the above theorem, whenever both K and K′ are varieties. Thus, the mapping L from all subvarieties of HA to the set SUP of all superintuitionistic logics is an order-reversing surjective isomorphism. Moreover, it can be shown that all subvarieties of HA form a lattice. We can show the following stronger result.
Theorem 8.13 The mappings v : L −→ VL and L : V −→ L(V ) are dual lattice isomorphisms between the lattice of all superintuitionistic logics and the lattice of all subvarieties of HA.

From Theorem 8.13 with Lemma 8.1, it follows that the variety BA is the second smallest among subvarieties of HA. Theorem 6.6 (algebraic completeness of classical logic) says in particular that Cl = L(BA) = L({2}). Since L({2}) = L(V ({2})) by Lemma 8.10, we can show that BA = V ({2}) by using the above argument. That is, the variety BA is generated by the Boolean algebra 2. Thus, every Boolean algebra is expressed as a homomorphic image of a subalgebra of a direct product of 2. This fact was already shown, in a more explicit form, as Stone's representation theorem for Boolean algebras (Theorem 6.5).2 Similarly, using Theorem 7.8 we can derive that the variety HA of Heyting algebras is generated by the set of all finite Heyting algebras.

2 It should be noted that every power set Boolean algebra is isomorphic to a direct product of 2 and vice versa.

Equational Classes and Birkhoff's Theorem (∗)

Our Theorem 8.12 follows in fact from Birkhoff's theorem (Theorem 8.15) stated below. But we have avoided referring directly to that theorem, in order to make our presentation simpler. We will give here a brief sketch of the theorem for readers who are interested in this fundamental result in universal algebra. Suppose that a language L and a fixed countable set of variables are given. L-terms over the set of variables are defined inductively, by starting from variables and then applying operation symbols in L repeatedly. Expressions of the form s ≈ t for L-terms s and t are called equations (or identities) in L. For instance, take for L the language of lattices, consisting of the two binary operations ∨ and ∧ (see Remark 6.3). Then, u ∧ (v ∨ w) ≈ (u ∧ v) ∨ (u ∧ w) is an example of an equation in L, where u, v and w are variables. Let A be an arbitrary algebra of type L. An assignment (for terms) h on A is any mapping from the set of variables to the universe A of A. Similarly to assignments for formulas, the assignment h can be naturally extended to a mapping from the set of all L-terms to A, inductively as follows. (As usual, we use the same symbol h also for the extended mapping.) For a given L-term s, if s is a variable v then h(s) = h(v). Otherwise, s must be of the form f i (t1 , . . . , tni ) with L-terms t1 , . . . , tni . Then let h(s) = f iA (h(t1 ), . . . , h(tni )).3 For each term s, let us express it explicitly in the form s(v1 , . . . , vn ) as a function of all distinct variables v1 , . . . , vn appearing in s. Then, for a given assignment h on A, the value h(s) will be represented as s A (a1 , . . . , an ) (or even simply as s A (ā) with ā = (a1 , . . . , an )) when h(vi ) = ai (∈ A) for each i. A given equation s ≈ t in L is valid in an algebra A of type L (in symbols A |= s ≈ t) if and only if the equality h(s) = h(t) holds in A for any assignment h on A, i.e., s A (ā) = t A (ā) for all n-tuples ā of A. If s ≈ t is valid in A then A is called a model of s ≈ t.
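This notion of validity is effectively checkable on a finite algebra: an equation is valid iff it holds under each of the finitely many assignments. The following sketch is our own illustration, not from the text; the three-element chain and the diamond M3 are standard examples. It tests the distributivity equation u ∧ (v ∨ w) ≈ (u ∧ v) ∨ (u ∧ w) by plain enumeration.

```python
from itertools import product

# An equation s ≈ t is valid in a finite algebra iff h(s) = h(t) for every
# assignment h, which we can check by enumerating all assignments.
def distributes(meet, join, elems):
    """Validity of u ∧ (v ∨ w) ≈ (u ∧ v) ∨ (u ∧ w) under all assignments."""
    return all(meet(u, join(v, w)) == join(meet(u, v), meet(u, w))
               for u, v, w in product(elems, repeat=3))

# A three-element chain 0 < 1 < 2 (meet = min, join = max) is a model:
chain = [0, 1, 2]
print(distributes(min, max, chain))          # True

# The diamond M3 (0 < a, b, c < 1 with a, b, c pairwise incomparable) is not:
M3 = ["0", "a", "b", "c", "1"]
def leq(x, y): return x == y or x == "0" or y == "1"
def m3_meet(x, y): return x if leq(x, y) else (y if leq(y, x) else "0")
def m3_join(x, y): return y if leq(x, y) else (x if leq(y, x) else "1")
print(distributes(m3_meet, m3_join, M3))     # False
```

The counterexample in M3 is the classical one: a ∧ (b ∨ c) = a while (a ∧ b) ∨ (a ∧ c) = 0.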
For any set of equations E in L, the class of all algebras of type L which are models of every equation in E is expressed by Mod(E), and is called the class of all models of E.

Definition 8.2 (Equational classes) A class of algebras K of a type L is called an equational class (in L) if there exists a set of equations E such that K is equal to the class Mod(E) of all models of E.

Remark 8.3 An alternative definition of lattices is given in Remark 6.3, which says that an algebra A = ⟨A, ∨, ∧⟩ is a lattice iff the eight equalities in Lemma 6.1 always hold in it. Now, for each of these equalities, define an equation by replacing = by the symbol ≈ and also x, y, z by the variables u, v, w, respectively. For example, the equation corresponding to (2a) is u ∨ v ≈ v ∨ u.4 Let us call these eight equations lattice equations. Then, the above definition can be restated as follows: an algebra A is a lattice iff all lattice equations are valid in A, i.e., A is a model of (a given set of) lattice equations. Thus, the class of all lattices is an equational class. Consequently, the class of all distributive lattices is also an equational class, by Definition 6.3.

We can observe a close parallelism between formulas in logics and terms in algebras, and also between logics and equational classes. To see this parallelism explicitly, we adopt the identification of formulas with terms and also with equations in the same language. Let us take the language of Boolean algebras, for example. We can see that the validity of a formula ϕ in a Boolean algebra A is equivalent to the validity of the equation ϕ ≈ 1 in A (if the formula ϕ is identified with a term ϕ). Conversely, the validity of an equation s ≈ t can be shown to be equivalent to the validity of the formula s ↔ t, i.e., (s → t) ∧ (t → s) (if the equation s ≈ t is identified with a formula s ↔ t this time). Now, by combining these two identifications, a formula ϕ is translated into an equation ϕ ≈ 1, which in turn is translated back into a formula ϕ ↔ 1. In fact, ϕ is (logically) equivalent to ϕ ↔ 1. Similarly, an equation s ≈ t is translated into a formula s ↔ t, which is then translated into an equation (s ↔ t) ≈ 1. We can confirm (under suitable assumptions on the equational calculus) that s ≈ t is (equationally) equivalent to (s ↔ t) ≈ 1. This identification/translation is quite helpful in pursuing the algebraic study of logic, as it makes it possible to incorporate important results on algebra into logic and vice versa. The following results come out immediately from Lemma 8.5 by this identification.

Lemma 8.14 Suppose that A, B and A j for all j ∈ J are algebras of the same type, and that E is a set of equations.

1. If A is a model of E and B is a subalgebra of A, then B is also a model of E.
2. If A is a model of E and B is a homomorphic image of A, then B is also a model of E.
3. If every A j is a model of E for j ∈ J and B is the direct product of the A j ( j ∈ J ), then B is also a model of E.

The following result was shown by Birkhoff (1935).

Theorem 8.15 (Birkhoff's theorem) A class of algebras is a variety if and only if it is an equational class.

Here we consider the class HA of Heyting algebras. It follows from Corollary 8.6 that HA is a variety, and hence is an equational class by Theorem 8.15.

3 Recall that f i is an operation symbol of L, while f iA is the corresponding operation in A which determines an interpretation of f i . See Sect. 6.2.
4 By abuse of symbols, here we use ∨ for both an algebraic operation and an operation symbol.
On the other hand, let us look at Definition 7.1 of Heyting algebras. While its first condition can be expressed by the set of lattice equations in Remark 8.3, the second condition, the law of residuation, is not presented as a set of equations as it stands. But in fact the law of residuation can be expressed by equations, as the following lemma shows. For terms s and t, let us define s ⪯ t as an abbreviation of the equation s ∨ t ≈ t, and call it an inequation. It is trivial by the definition that an inequation s(v̄) ⪯ t (v̄) is valid in a lattice A iff the inequality s A (ā) ≤ t A (ā) holds for all ā ∈ An (when v̄ is an n-tuple of variables). We stress here that every inequation is in fact an equation.

Lemma 8.16 Let A = ⟨A, ∨, ∧, →⟩ be an arbitrary algebra such that ⟨A, ∨, ∧⟩ is a lattice and → is a binary operation on A. Then, the law of residuation always holds between ∧ and → in A if and only if the two inequations (1) u ∧ v ∧ (u → w) ⪯ w and (2) v ⪯ u → ((u ∧ v) ∨ w) are valid in A. Thus, the law of residuation in the definition of Heyting algebras can be replaced by these two inequations, and consequently the class HA is an equational class.


Proof It is easy to see that both inequalities (i) a ∧ b ∧ (a → c) ≤ c and (ii) b ≤ a → ((a ∧ b) ∨ c) always hold if we assume the law of residuation. Conversely, suppose that both inequalities (i) and (ii) hold. Assume first that a ∧ b ≤ c. Then by (ii), we have b ≤ a → ((a ∧ b) ∨ c) = a → c. Next assume that b ≤ a → c. By using (i), a ∧ b = a ∧ b ∧ (a → c) ≤ c. Thus, the law of residuation holds.

From the algebraic completeness of Int, it follows that a formula is provable in Int if and only if the corresponding equation is valid in every algebra in HA. This relation can easily be extended to one between superintuitionistic logics and equational classes which are subclasses of HA. By Birkhoff's theorem, the latter are exactly the subvarieties of HA. In this way, we can show Theorem 8.12.
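The equivalence established in Lemma 8.16 can be confirmed exhaustively on any finite example. The sketch below is our own illustration: on a five-element Gödel chain (meet = min, with the Gödel implication), it checks both inequations (1) and (2) as well as the law of residuation itself.

```python
from fractions import Fraction
from itertools import product

# Five-element Gödel chain: meet = min, join = max, and the Gödel
# implication a → b = 1 if a ≤ b, and b otherwise.
G5 = [Fraction(i, 4) for i in range(5)]
def imp(a, b): return Fraction(1) if a <= b else b

# Inequation (1) of Lemma 8.16: u ∧ v ∧ (u → w) ⪯ w.
ineq1 = all(min(u, v, imp(u, w)) <= w for u, v, w in product(G5, repeat=3))
# Inequation (2) of Lemma 8.16: v ⪯ u → ((u ∧ v) ∨ w).
ineq2 = all(v <= imp(u, max(min(u, v), w)) for u, v, w in product(G5, repeat=3))
# The law of residuation itself: a ∧ b ≤ c iff b ≤ a → c.
resid = all((min(a, b) <= c) == (b <= imp(a, c))
            for a, b, c in product(G5, repeat=3))

print(ineq1, ineq2, resid)    # all three hold, as Lemma 8.16 predicts
```

Of course the enumeration only verifies one algebra; the lemma's proof is what makes the equivalence hold in general.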

8.4 Subdirect Representation Theorem

An important result on the algebraic representation of superintuitionistic logics can be obtained from another general result on algebra by Birkhoff (1944), called the subdirect representation theorem.

Theorem 8.17 (Subdirect representation theorem) Every algebra A is a subdirect product of subdirectly irreducible algebras, which are moreover homomorphic images of A.

Roughly speaking, a subdirect product is a subalgebra of a direct product with an additional condition, and a subdirectly irreducible (s.i.) algebra is an algebra which cannot be expressed as a direct product in a non-trivial way.5 The importance of the subdirect representation theorem can be understood by comparing it to the prime factorization theorem in number theory, which says that every positive integer has a unique prime factorization; e.g. the number 180 can be expressed uniquely as 2^2 × 3^2 × 5, a product of the prime numbers 2, 3 and 5. In the subdirect representation theorem, subdirectly irreducible algebras play the same role as prime numbers do in number theory. So roughly speaking, every algebra can be "factorized" into subdirectly irreducible algebras. When A is finite, these s.i. algebras are finite, as they are homomorphic images of A, and hence the number of these s.i. algebras is also finite (identifying an algebra with its isomorphic copies). Thus the following holds.

5 For a precise definition, see e.g. Burris and Sankappanavar (1981).

Corollary 8.18 Every finite algebra is a subdirect product of finitely many finite subdirectly irreducible algebras.

We will not go into further details of the theorem itself, as the topic is beyond the scope of the present textbook. Instead, we will discuss a specific case of the theorem applied to Heyting algebras, and show its logical consequences for superintuitionistic logics. In the case of Heyting algebras, a subdirectly irreducible algebra is exactly a Heyting algebra A that has a second greatest element, i.e., an element a (< 1) such that x ≤ a for any x (< 1) in A. For example, among all Boolean algebras, the two-valued Boolean algebra 2 is the single s.i. Boolean algebra. Also, every Gödel chain is s.i. as long as it is finite. But the Gödel chain whose universe is the unit interval [0, 1] of reals is not s.i., since it has no second greatest element. By applying the subdirect representation theorem to Heyting algebras, we have the following.

Corollary 8.19 For each superintuitionistic logic L there exist subdirectly irreducible Heyting algebras Ai (i ∈ I ) such that L = ⋂i∈I L(Ai ).

Remark 8.4 A partially ordered set S = ⟨S, ≤⟩ is said to be rooted if there exists a least element in S. Then, we can show the following.

1. A partially ordered set S is rooted if and only if the dual Heyting algebra U (S) is subdirectly irreducible.
2. A Heyting algebra A is subdirectly irreducible if and only if the dual frame D(A) is rooted.

Exercise 8.4 Give a proof of each statement in the above remark.

In the following, we give two logical consequences of Corollary 8.19. We recall here that classical logic is the greatest among consistent superintuitionistic logics, as shown in Lemma 8.1.

Lemma 8.20 The three-valued Gödel logic L(G3 ) is the second greatest among consistent superintuitionistic logics.

Proof Let us assume that L (∈ SUP) is consistent, i.e., L ⊆ Cl, and moreover that it is not equal to Cl. By Corollary 8.19, L can be expressed as ⋂i∈I L(Ai ) with some s.i. Heyting algebras Ai (i ∈ I ). If there are Boolean algebras among the Ai 's, we delete them from the outset. This does not affect the representation of L, since L is properly smaller than Cl. Thus, we can suppose that none of the Ai 's are Boolean algebras. Since each Ai is s.i.
and is not the two-valued Boolean algebra, it must contain at least three elements, namely 0, 1, and in addition the second greatest element. Therefore, each Ai has a subalgebra which is isomorphic to the three-valued Gödel chain G3 . So L(Ai ) ⊆ L(G3 ), and hence L ⊆ L(G3 ). Thus L(G3 ) is the second greatest among consistent superintuitionistic logics.

In Chap. 6, we proved that the m-valued Gödel logics L(Gm ) (m > 1) are among the extensions of Gödel-Dummett logic GD. Now we show that they are in fact its only proper consistent extensions.

Theorem 8.21 Any consistent extension of the logic Int[( p → q) ∨ (q → p)] is either of the form L(Gm ) for some m > 1 or GD (= ⋂m L(Gm )). In particular, Int[( p → q) ∨ (q → p)] = GD.


Proof Let L be any extension of Int[( p → q) ∨ (q → p)], and suppose that L is represented as ⋂i∈I L(Ai ) for some s.i. Heyting algebras Ai (i ∈ I ). It is easy to see that for any s.i. Heyting algebra C, ( p → q) ∨ (q → p) is valid in C iff C is totally ordered. Hence, every Ai must be a Gödel chain. Without loss of generality, we can assume that for i, j ∈ I , if i ≠ j then Ai is not isomorphic to A j . Now, suppose first that either (1) one of the Ai is infinite, or (2) all Ai are finite but I is infinite. The latter case implies that for each k > 0 there exist m > k and i ∈ I such that Ai is isomorphic to Gm . In either case, we can show that L = ⋂m L(Gm ) = GD (see Theorem 6.9). Otherwise, each Ai is finite and I is finite. Thus, each Ai must be a finite Gödel chain Gmi for some m i , and moreover there exists a maximum number m among {m i : i ∈ I }. Then L = L(Gm ) holds in this case. As Int[( p → q) ∨ (q → p)] is the smallest among them, it must be equal to GD.
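The "second greatest element" criterion for subdirect irreducibility used in this section is easy to test mechanically on finite algebras. A small sketch with examples of our own choosing: a finite Gödel chain, which is s.i., and the Boolean algebra 2 × 2, which is not.

```python
from itertools import product

# A finite Heyting algebra is subdirectly irreducible iff it has a second
# greatest element: some a < 1 with x ≤ a for every x < 1.
def has_second_greatest(elems, leq, top):
    below = [x for x in elems if x != top]
    return any(all(leq(x, a) for x in below) for a in below)

# A finite Gödel chain (here G4, modelled as 0 < 1 < 2 < 3) is s.i.:
G4 = [0, 1, 2, 3]
print(has_second_greatest(G4, lambda x, y: x <= y, 3))       # True

# The Boolean algebra 2 × 2 is not: its coatoms (0,1), (1,0) are incomparable.
B4 = list(product([0, 1], repeat=2))
def leq22(x, y): return x[0] <= y[0] and x[1] <= y[1]
print(has_second_greatest(B4, leq22, (1, 1)))                # False
```

The same test applied to the real-interval Gödel chain would fail for a different reason: [0, 1] has no second greatest element at all, matching the remark above.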

8.5 Algebraic Aspects of Logical Properties

In pursuing the algebraic study of logics, a question arises as to how logical properties of a given logic are reflected in algebraic properties of the class of algebras corresponding to the logic, and conversely how algebraic properties of the class are reflected in logical properties of the logic. This is in fact the main issue of the algebraic study of nonclassical logics. In this section, we will take both Halldén-completeness and the disjunction property as examples, and discuss their algebraic characterizations. Recall that a logic L is Halldén-complete if for all formulas α and β with no propositional variables in common, either α or β is provable in L whenever α ∨ β is provable in L (see Definition 3.2). We need some preparations for stating the next theorem.

Corollary 8.19 says that every superintuitionistic logic L can be expressed as ⋂i∈I L(Ai ) for some subdirectly irreducible Heyting algebras Ai (i ∈ I ). So it will be interesting to see when a given logic L can be expressed in the form L(A) with a single subdirectly irreducible Heyting algebra A. A Heyting algebra A is well-connected when x ∨ y = 1 implies either x = 1 or y = 1 for all x, y ∈ A. Obviously, every subdirectly irreducible Heyting algebra is well-connected. On the other hand, the Gödel chain whose universe is the unit interval [0, 1] of reals is well-connected but not subdirectly irreducible. The next definition concerns a global property of a superintuitionistic logic, which says how a given logic is located in the lattice SUP.

Definition 8.3 A superintuitionistic logic L is meet irreducible if it cannot be expressed as the intersection L1 ∩ L2 of two incomparable logics L1 and L2 in the lattice SUP. Here we say that L1 and L2 are incomparable when neither L1 ⊆ L2 nor L2 ⊆ L1 holds. In other words, L is meet irreducible when L = L1 ∩ L2 implies either L = L1 or L = L2 for all L1 and L2 .
The following theorem is obtained by combining results of Lemmon (1966) and Wroński (1976).


Theorem 8.22 For every superintuitionistic logic L, the following four conditions are mutually equivalent:

1. L is Halldén-complete,
2. L = L(A) for a subdirectly irreducible Heyting algebra A,
3. L = L(A) for a well-connected Heyting algebra A,
4. L is meet irreducible.

Proof It is trivial that the second implies the third. Now let us suppose that L = L(A) for a well-connected Heyting algebra A. We will show that L is meet irreducible. Suppose to the contrary that L = L1 ∩ L2 for incomparable logics L1 and L2. Then, there exist formulas α and β such that α ∈ L1 \L2 and β ∈ L2 \L1 . Without loss of generality, we can assume that α and β have no propositional variables in common (by renaming variables if necessary). Since neither of them is a member of L (= L(A)), there exists an assignment f on A such that f (α) < 1 and f (β) < 1. (This is possible since α and β have no propositional variables in common.) As A is well-connected, f (α ∨ β) = f (α) ∨ f (β) < 1 and hence α ∨ β ∉ L. On the other hand, since α ∈ L1 and β ∈ L2 , the formula α ∨ β must belong to both L1 and L2 , and hence α ∨ β ∈ L1 ∩ L2 = L. This is a contradiction. Next, suppose that L is meet irreducible but is not Halldén-complete. Then there exist formulas ϕ, ψ with no variables in common such that ϕ ∨ ψ ∈ L while neither ϕ nor ψ is a member of L. Let L1 and L2 be the axiomatic extensions L[ϕ] and L[ψ], respectively, of L. Obviously, both of them are properly stronger than L. By the definition, L ⊆ L1 ∩ L2 holds, and at the same time L1 ∩ L2 can be expressed as L[ϕ ∨ ψ] by Lemma 8.3. Then L[ϕ ∨ ψ] must be equal to L as ϕ ∨ ψ ∈ L. Thus, L = L1 ∩ L2 . But this contradicts the meet irreducibility of L. We must omit the proof that the first statement implies the second, as certain preparations are necessary.

An interesting point of this theorem is that it states the equivalence of three different features of a logic. That is, the first is a syntactic property, the second and third are algebraic characterizations of a logic which are local, while the last statement is about a global property of a logic in the lattice SUP.

Example 8.5 Let us consider the intersection L of the logics L(G4 ) and L(J). Here J is the Heyting algebra discussed in Exercise 7.2.
From Lemma 6.10 and Exercise 7.2 (3), (4), it follows that these two logics are mutually incomparable. Thus, L is not meet irreducible and hence is not Halldén-complete.

Next, we will discuss an algebraic characterization of the disjunction property given by Maksimova (1986). As we explained in Part I, the disjunction property often follows from cut elimination. But cut-free sequent systems can be introduced only for a limited number of logics. Because of this, an algebraic characterization like the following is useful in showing the disjunction property of a wide class of superintuitionistic logics.
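Well-connectedness, which appears in both Theorem 8.22 and the next theorem, can likewise be checked by brute force on finite algebras. A sketch with examples of our own choosing: the chain G3 is well-connected, while a proper direct product such as 2 × 2 is not.

```python
from itertools import product

# A Heyting algebra is well-connected when x ∨ y = 1 implies x = 1 or y = 1.
def well_connected(elems, join, top):
    return all(x == top or y == top
               for x, y in product(elems, repeat=2) if join(x, y) == top)

# Every chain is well-connected; here the three-element Gödel chain G3.
G3 = [0, 1, 2]
print(well_connected(G3, max, 2))                            # True

# A proper direct product is not: in 2 × 2, (1,0) ∨ (0,1) = (1,1) = top.
B4 = list(product([0, 1], repeat=2))
def join22(x, y): return (max(x[0], y[0]), max(x[1], y[1]))
print(well_connected(B4, join22, (1, 1)))                    # False
```

This matches the discussion above: the product witnesses a join reaching the top with both joinands strictly below it.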


Theorem 8.23 For any superintuitionistic logic L, the following are equivalent:

1. L has the disjunction property,
2. For all A, B ∈ VL there exist a well-connected Heyting algebra C ∈ VL and a surjective homomorphism from C to the direct product A × B.

Proof Suppose first that L has the disjunction property. For any given A, B ∈ VL , their direct product A × B also belongs to VL . By Lemma 8.8, for a set of variables X which is big enough, there exists a surjective homomorphism from the free algebra FL (X ) over X in VL to A × B, by the universal mapping property of FL (X ). Similarly to the Lindenbaum-Tarski algebra, FL (X ) is shown to be well-connected. Thus, it is enough to take FL (X ) for C.

Conversely, suppose that L does not have the disjunction property. Then, there exist formulas α and β neither of which is provable in L but such that α ∨ β is provable. By algebraic completeness, there exist Heyting algebras A, B ∈ VL and assignments f, g on each of them, respectively, such that f (α) < 1 holds in A and g(β) < 1 holds in B. Now, assume that the second statement of Theorem 8.23 holds. Then, there must exist a well-connected algebra C ∈ VL and a surjective homomorphism h from C to the direct product A × B. We take any assignment k on C such that k( p) ∈ h⁻¹(⟨ f ( p), g( p)⟩) for every propositional variable p. Thus, h(k( p)) = ⟨ f ( p), g( p)⟩. Then, using induction, we can show that h(k(ϕ)) = ⟨ f (ϕ), g(ϕ)⟩ holds for each formula ϕ. Since α ∨ β is provable in L, k(α) ∨ k(β) = k(α ∨ β) = 1 holds in C. As C is well-connected, either k(α) or k(β) is equal to 1, which implies that either h(k(α)) or h(k(β)) is equal to 1. This means that either f (α) = 1 or g(β) = 1. But this is a contradiction.

Remark 8.6 We can show that any superintuitionistic logic characterized by a finite non-degenerate Heyting algebra does not have the disjunction property.
In fact, suppose that a superintuitionistic logic L is characterized by a finite Heyting algebra A with k elements, where k > 1. Then, Lemma 7.5 shows that the formula χk , which is ⋁0≤i< j≤k ( pi ↔ p j ), is valid in A. If L has the disjunction property, pi ↔ p j must be provable in L for some i, j such that i < j, and hence its substitution instance ( p → p) ↔ α is provable in L. Consequently, α is also provable in L for any formula α. This means that L is the inconsistent logic. But this contradicts our assumption that L is characterized by a non-degenerate algebra A. We notice here that L is still Halldén-complete as long as A is subdirectly irreducible, by Theorem 8.22.

Before closing the present chapter, we give a few remarks on results obtained by using algebraic methods in superintuitionistic logics. In Chap. 6, we pointed out that there exist at least countably many superintuitionistic logics. As a matter of fact, Jankov (1968) showed that there exist uncountably many superintuitionistic logics. This implies that there exist uncountably many non-finitely axiomatizable superintuitionistic logics, as the set of all finitely axiomatizable superintuitionistic logics can be countably enumerated. As for the disjunction property, there exist uncountably many superintuitionistic logics with the disjunction property and also uncountably many superintuitionistic logics without the disjunction property.

The most notable example demonstrating the power of algebraic methods was given by Maksimova (1977). First, she gave an algebraic characterization of Craig's interpolation property, and then obtained the following result by using it.

Theorem 8.24 (Craig's interpolation property) There exist exactly seven consistent superintuitionistic logics having Craig's interpolation property. They are Cl, L(G3 ), L(J), Int[π3 ], GD, Int[¬ p ∨ ¬¬ p] and Int.6

6 For the algebra J, see Exercise 7.2, and for the formula π3 see Eq. (6.1) in Sect. 6.5.

Chapter 9

Residuated Structures

In this chapter, we give a short introduction to residuated structures, which are the algebraic structures for substructural logics.1 Boolean algebras and also Heyting algebras are defined to be lattice structures with a binary operation → which satisfies the law of residuation between ∧ and →, i.e., a ∧ b ≤ c iff a ≤ b → c, for all a, b, c. On the other hand, the Łukasiewicz implication introduced in Chap. 6 does not always satisfy this law. But it still satisfies the law of residuation between the fusion · and →. In this chapter, we discuss algebraic structures in which the law of residuation in its general form is satisfied, and focus our attention especially on residuated lattices. This notion is a key to understanding substructural logics from an algebraic point of view. In the last section, we consider particular residuated lattices having the unit interval [0, 1] as their underlying set. It is shown that the many-valued chains discussed in Chap. 6 are subalgebras of these residuated lattices.

9.1 Residuated Lattices and FL-Algebras

Let us consider how to introduce an implication on the unit interval [0, 1]. As we mentioned in Sect. 6.5, the Gödel implication is defined by

    a → b = 1 if a ≤ b, and a → b = b otherwise,    (9.1)

while the Łukasiewicz implication on the unit interval is defined by

    a → b = 1 if a ≤ b, and a → b = 1 − a + b otherwise.    (9.2)

1 It is recommended to have a brief look at Sect. 5.5 before starting to read the present chapter.

© Springer Nature Singapore Pte Ltd. 2019 H. Ono, Proof Theory and Algebra in Logic, Short Textbooks in Logic, https://doi.org/10.1007/978-981-13-7997-0_9
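Equations (9.1) and (9.2), together with the fusion a · b = max{0, a + b − 1} of Eq. (6.2), can be tested numerically against the law of residuation. The sketch below is our own; it samples a rational grid in [0, 1] rather than proving anything, and uses exact fractions to avoid floating-point artifacts.

```python
from fractions import Fraction
from itertools import product

grid = [Fraction(i, 10) for i in range(11)]      # rational sample of [0, 1]

def goedel_imp(a, b):                            # Eq. (9.1)
    return Fraction(1) if a <= b else b

def luk_imp(a, b):                               # Eq. (9.2)
    return Fraction(1) if a <= b else 1 - a + b

def fusion(a, b):                                # Łukasiewicz fusion, Eq. (6.2)
    return max(Fraction(0), a + b - 1)

# Gödel implication is residuated w.r.t. ∧ (= min): a ∧ b ≤ c iff b ≤ a → c.
goedel_resid = all((min(a, b) <= c) == (b <= goedel_imp(a, c))
                   for a, b, c in product(grid, repeat=3))

# Łukasiewicz implication is residuated w.r.t. the fusion, not w.r.t. min.
luk_resid = all((fusion(a, b) <= c) == (b <= luk_imp(a, c))
                for a, b, c in product(grid, repeat=3))

print(goedel_resid, luk_resid)    # both residuation laws hold on the grid
```

Replacing `fusion` by `min` in the second check makes it fail, which is exactly the point made in the text: Łukasiewicz implication is not the residual of ∧.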

It was shown there that the law of residuation holds in Gödel chains between the conjunction ∧ and the Gödel implication. Heyting algebras are defined in Chap. 7 by extending this idea. Thus, they satisfy the law of residuation between conjunction and Heyting implication. On the other hand, in Łukasiewicz chains the law of residuation still holds, but between the fusion · and the Łukasiewicz implication →, when the fusion · is defined by a · b = max{0, a + b − 1} (Eq. (6.2)). We now consider the law of residuation for → in a more general setting. For that purpose, we introduce several basic notions of algebraic structures.

Definition 9.1 (Semigroups, Monoids)

1. An algebra A = ⟨A, ·⟩ is a semigroup if · is an associative binary operation on A, i.e., x · (y · z) = (x · y) · z holds for all x, y, z.
2. An algebra A = ⟨A, ·, 1⟩ is a monoid if ⟨A, ·⟩ is a semigroup and 1 is an element of A satisfying x · 1 = 1 · x = x for all x ∈ A. It can be shown that such an element 1 is uniquely determined if it exists; it is called the unit element of the monoid A.

A semigroup (monoid) A is commutative when x · y = y · x holds for all x, y ∈ A.

Example 9.1

1. Let N be the set of all positive integers. Then, both ⟨N, +⟩ and ⟨N, ×⟩ are commutative semigroups. Moreover, ⟨N, ×, 1⟩ forms a monoid.
2. Let Σ be an arbitrary non-empty set and Σ∗ be the set of all finite (possibly empty) sequences of elements of Σ. Thus, each member of Σ∗ is of the form ⟨c1 , c2 , . . . , ck ⟩ for some k ≥ 0 with c1 , c2 , . . . , ck ∈ Σ. The empty sequence is denoted by ε. We define an operation ∗ (called concatenation) on Σ∗ by ⟨c1 , c2 , . . . , ck ⟩ ∗ ⟨d1 , d2 , . . . , dm ⟩ = ⟨c1 , c2 , . . . , ck , d1 , d2 , . . . , dm ⟩. Then ⟨Σ∗ , ∗, ε⟩ forms a monoid which is not commutative.

Definition 9.2 (Partially ordered semigroups (monoids)) A structure ⟨A, ·, ≤⟩ is a partially ordered semigroup (abbreviated as p.o. semigroup) iff

1. ⟨A, ≤⟩ is a partially ordered set,
2.
A, · is a semigroup satisfying the following monotonicity; x ≤ y implies both x · z ≤ y · z and z · x ≤ z · y for all x, y, z ∈ A. When A, · has the unit element and hence is a monoid, the structure A, ·, ≤ is called partially ordered monoid (abbreviated as p.o. monoid). Example 9.2 Let ≤ be the natural order of positive integers. 1. The structure N, ×, 1, ≤ is a p.o. monoid. 2. Let us take N for Σ in the second example of Example 9.1. We define a binary relation ≤∗ on Σ ∗ by; (c1 , c2 , . . . , ck ) ≤∗ (d1 , d2 , . . . , dm ) iff (1) k = m and (2) ci ≤ di for every i ≤ k. Then Σ ∗ , ∗, ε, ≤∗  is a p.o. monoid. Exercise 9.1 Confirm two statements in Example 9.2.
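The monoid and order conditions of Examples 9.1 and 9.2 can be checked mechanically on small samples. A minimal Python sketch, where tuples play the role of sequences in Σ∗ (the sample set and function names are our choices, not the book's):

```python
from itertools import product

# A sample of Sigma* over Sigma = {1, 2, 3}: finite sequences as tuples.
seqs = [(), (1,), (2,), (1, 2), (2, 1), (3, 1, 2)]
eps = ()                 # the empty sequence, unit of the monoid

def conc(s, t):          # the concatenation operation * of Example 9.1.2
    return s + t

def leq(s, t):           # the order <=* of Example 9.2.2
    return len(s) == len(t) and all(a <= b for a, b in zip(s, t))

# Monoid laws: associativity and the unit law.
assert all(conc(conc(s, t), u) == conc(s, conc(t, u))
           for s, t, u in product(seqs, repeat=3))
assert all(conc(s, eps) == s == conc(eps, s) for s in seqs)

# Non-commutativity: a single counterexample suffices.
assert conc((1,), (2,)) != conc((2,), (1,))

# Monotonicity of Definition 9.2: s <=* t implies s*u <=* t*u and u*s <=* u*t.
assert all(not leq(s, t) or (leq(conc(s, u), conc(t, u)) and
                             leq(conc(u, s), conc(u, t)))
           for s, t, u in product(seqs, repeat=3))
print("monoid and monotonicity checks passed")
```

Of course this only tests a finite sample; the full statements of Exercise 9.1 still require a short proof.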

9.1 Residuated Lattices and FL-Algebras

Definition 9.3 (Residuated p.o. semigroups (monoids)) A p.o. semigroup A with a semigroup operation · is residuated if there exist two operations \ and / that satisfy the following law of residuation:
• x · y ≤ z iff y ≤ x\z iff x ≤ z/y.
The operations \ and / are called the left and right residual (also, left and right division) of the operation ·, respectively. Residuated p.o. monoids are defined similarly. When the semigroup operation · is moreover commutative, it is called a residuated commutative p.o. semigroup. In such a case the left and right residuals coincide, i.e., x\y = y/x for all x, y, so the symbol → is used to express both of them, and → is called simply the residual of the operation ·. Recall that the logical connective which corresponds to the semigroup operation is called fusion.

It is clear that in Heyting algebras, the meet ∧ with the greatest element 1 forms a commutative p.o. monoid with respect to the order ≤, whose unit element is 1. Moreover, the operation → is the residual of ∧. Also, in Łukasiewicz chains, the operation ·, defined by a · b = max{0, a + b − 1} for all a, b, with the greatest element 1 forms a commutative p.o. monoid with the unit element 1. As we have shown in Lemma 6.11, the Łukasiewicz implication is the residual of this fusion ·. Thus, both can be regarded as residuated commutative p.o. monoids.

Remark 9.3 (Intuitive meaning of residuals) We give here an intuitive explanation of residuals. Let I be the set of all integers and + be addition on I. It is easy to see that the structure ⟨I, +, 0, ≤⟩ forms a commutative p.o. monoid. In fact, it is moreover a residuated commutative p.o. monoid, in which the operation − (subtraction) is the residual of +, as the following relation holds:
• x + y ≤ z iff y ≤ z − x.
Consider next the structure R+ = ⟨R+, ×, 1, ≤⟩, where R+ denotes the set of all positive real numbers. We can show that R+ is a commutative p.o. monoid. For x, z ∈ R+, the real number z/x denotes the result of dividing z by x. It is clear that z/x always exists and belongs to R+. As the following relation shows, R+ with the division operation forms a residuated commutative p.o. monoid.
• x × y ≤ z iff y ≤ z/x.
Thus, the residual can be understood as the inverse operation of a given monoid operation.

Definition 9.4 (Residuated lattices) When the partially ordered set ⟨A, ≤⟩ of a given residuated p.o. monoid A is a lattice, A is called a residuated lattice. In other words, an algebra A = ⟨A, ∨, ∧, ·, \, /, 1⟩ is a residuated lattice if
• ⟨A, ∨, ∧⟩ is a lattice,
• ⟨A, ·, 1⟩ is a monoid,
• x · y ≤ z iff y ≤ x\z iff x ≤ z/y for all x, y, z ∈ A.
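The law of residuation of Definition 9.3 can be verified exhaustively on a finite Łukasiewicz chain. A Python sketch (the chain {0, 1/n, . . . , 1} with n = 10 and the function names are our choices):

```python
from fractions import Fraction
from itertools import product

n = 10
chain = [Fraction(i, n) for i in range(n + 1)]   # the chain 0, 1/n, ..., 1

def fusion(a, b):        # Lukasiewicz fusion: a.b = max{0, a + b - 1}
    return max(Fraction(0), a + b - 1)

def imp(a, b):           # Lukasiewicz implication: min{1, 1 - a + b}
    return min(Fraction(1), 1 - a + b)

# Law of residuation (commutative case): x.y <= z  iff  y <= x -> z.
for x, y, z in product(chain, repeat=3):
    assert (fusion(x, y) <= z) == (y <= imp(x, z))
print("law of residuation holds on the", n + 1, "element Lukasiewicz chain")
```

Exact rational arithmetic (`Fraction`) is used so that the comparison is not disturbed by floating-point rounding.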


We note that ⟨A, ∨, ∧⟩ is not necessarily a bounded lattice and that the unit element 1 is not always the greatest element of A even if the greatest element exists. We note also that though we do not assume the monotonicity (cf. Definition 9.2) of the monoid operation · in Definition 9.4, it follows from the law of residuation. For, suppose that x ≤ y. If y · z ≤ u then y ≤ u/z, so x ≤ y ≤ u/z and hence x · z ≤ u. Since this holds for any u, taking y · z for u we get x · z ≤ y · z. The other monotonicity condition can be shown in the same way.

Exercise 9.2 Show that the class of all residuated lattices is an equational class (cf. Lemma 8.16).

Any Heyting algebra is an example of a commutative residuated lattice in which the meet ∧ is taken for the monoid operation. Also, any residuated p.o. monoid with a total order is obviously a residuated lattice.

Exercise 9.3 Show that in every residuated commutative p.o. monoid, the following statements hold.
1. If x ≤ y then both y → z ≤ x → z and z → x ≤ z → y hold for all x, y, z.
2. 1 → x = x and 1 ≤ x → x for all x.
3. x → (y → z) = (x · y) → z for all x, y, z.
4. x ≤ (x → y) → y for all x, y. As a special case, x = (x → x) → x for all x.
5. x · (z → y) ≤ z → (x · y) for all x, y, z.

Exercise 9.4 Show that in every residuated lattice, the following statements hold.
1. x → (y ∧ z) = (x → y) ∧ (x → z) for all x, y, z.
2. (y ∨ z) → x = (y → x) ∧ (z → x) for all x, y, z.
3. x · (y ∨ z) = (x · y) ∨ (x · z) for all x, y, z.

A residuated lattice A = ⟨A, ∨, ∧, ·, \, /, 1⟩ is commutative if the monoid ⟨A, ·⟩ is commutative. It is integral if it has a greatest element which is at the same time the unit element; in other words, x ≤ 1 for all x ∈ A. This condition can also be expressed as x · y ≤ x ∧ y. For, suppose that x ≤ 1 for all x. Then, by the monotonicity, x · y ≤ x · 1 = x. Similarly, x · y ≤ y holds. Thus, x · y ≤ x ∧ y. Conversely, if x · y ≤ x ∧ y holds for all x, y, then x · y ≤ x ∧ y ≤ y for all x, y. Taking 1 for y, we get x = x · 1 ≤ 1.

A residuated lattice A = ⟨A, ∨, ∧, ·, \, /, 1⟩ is contractive (or square-increasing) if x ≤ x · x for all x. This condition can also be expressed as x ∧ y ≤ x · y. In fact, suppose that x ≤ x · x for all x. As both x ∧ y ≤ x and x ∧ y ≤ y hold, x ∧ y ≤ (x ∧ y) · (x ∧ y) ≤ x · y. For the converse direction, if we take x for y in x ∧ y ≤ x · y, we get x = x ∧ x ≤ x · x. So, if a residuated lattice A is both integral and contractive, then x · y = x ∧ y for all x, y. This implies that A is commutative, since the meet is commutative. It is easy to see that each of the classes of commutative, integral, and contractive residuated lattices is an equational class and hence is a variety (see Exercise 9.2).

Exercise 9.5 Let A be a residuated lattice. Show that A is commutative if and only if x\y = y/x for all x, y ∈ A.
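Several of the identities in Exercises 9.3 and 9.4 can be confirmed by brute force on a concrete finite residuated lattice. A Python sketch using the 6-element Łukasiewicz chain (chain size and names are our choices):

```python
from fractions import Fraction
from itertools import product

chain = [Fraction(i, 5) for i in range(6)]

def dot(a, b):                          # fusion
    return max(Fraction(0), a + b - 1)

def imp(a, b):                          # its residual
    return min(Fraction(1), 1 - a + b)

for x, y, z in product(chain, repeat=3):
    # Exercise 9.3.3: x -> (y -> z) = (x . y) -> z
    assert imp(x, imp(y, z)) == imp(dot(x, y), z)
    # Exercise 9.4.1: x -> (y /\ z) = (x -> y) /\ (x -> z)
    assert imp(x, min(y, z)) == min(imp(x, y), imp(x, z))
    # Exercise 9.4.3: x . (y \/ z) = (x . y) \/ (x . z)
    assert dot(x, max(y, z)) == max(dot(x, y), dot(x, z))
print("identities verified on the 6-element chain")
```

Since the chain is totally ordered, ∧ and ∨ are simply min and max here.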

9.2 FL-Algebras and Substructural Logics


The notion of residuated lattices was introduced in the 1930s in ring theory (in Ward and Dilworth 1939). It was then rediscovered in the 1990s as the algebraic structure of substructural logics. In applying them to logic, we make a slight modification so as to introduce an algebraic counterpart of the logical connective 'negation'. Thus we have the following notion of full Lambek algebras (Ono 1993).

Definition 9.5 (Full Lambek algebras) An algebra A = ⟨A, ∨, ∧, ·, \, /, 1, 0⟩ is a full Lambek algebra (FL-algebra, for short) if ⟨A, ∨, ∧, ·, \, /, 1⟩ is a residuated lattice and 0 is an arbitrary element of A (called its zero element). Sometimes, FL-algebras are called pointed residuated lattices.

It should be noted again that 1 is the unit element of the monoid ⟨A, ·, 1⟩ and is not necessarily equal to the greatest element. In fact, an FL-algebra may have neither a greatest nor a least element. The constant 0 is used to define two negations, i.e., ∼x = x\0 and −x = 0/x, in each FL-algebra. Every commutative FL-algebra is called an FLe-algebra. In any FLe-algebra, these two negations coincide, and hence we use the ordinary symbol ¬ for them, i.e., ¬x = x → 0. An FLe-algebra is involutive iff ¬¬x = x holds for all x. An FL-algebra satisfying 0 ≤ x ≤ 1 for all x, which is equivalently an integral FL-algebra satisfying 0 ≤ x for all x (zero bounded), is called an FLw-algebra. A contractive FL-algebra is called an FLc-algebra. By taking any combination of the subscripts e, w, c, we can define a subclass of FL-algebras with the corresponding properties. For instance, an FLew-algebra is an FL-algebra which is commutative, integral and zero bounded.

We will show next that FL-algebras are exactly the algebras for substructural logics. We interpret the logical connective 'fusion' as a monoid operation in a given FL-algebra.
As usual, we use the same symbol · for both fusion and the monoid operation as long as no confusion arises. We define the notions of assignments and validity in a given FL-algebra in the same way as before, but some modifications are necessary, as the greatest element, even if it exists, may not be equal to the unit element 1. Now we say that a formula α is valid in a given FL-algebra A if and only if 1 ≤ f(α) holds for every assignment f on A. Also, in Chap. 4, it is shown that a sequent γ1, . . . , γm ⇒ ϕ is provable in FL if and only if the sequent (γ1 · . . . · γm) ⇒ ϕ is provable in FL (see Eq. (4.3)). Thus we have the following definition.

Definition 9.6 A sequent γ1, . . . , γm ⇒ ϕ is said to be valid in an FL-algebra A if and only if the formula (γ1 · . . . · γm)\ϕ is valid in it, or equivalently, f(γ1 · . . . · γm) ≤ f(ϕ) holds for every assignment f on A. Note that we may replace the above condition by the condition that ϕ/(γ1 · . . . · γm) is valid.

Similarly to Lemma 7.2, we have the following.

Lemma 9.1 If a sequent is provable in the sequent system FL, it is valid in every FL-algebra. Also, if a sequent is provable in the sequent system FLx, it is valid in every FLx-algebra for any x ∈ {e, w, c, ew, ec}.
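The notion of validity just defined can be tested concretely: take the three-element Łukasiewicz chain as an FLew-algebra and run through all assignments. A Python sketch (the representation of formulas as Python functions is our own device):

```python
from fractions import Fraction
from itertools import product

chain = [Fraction(0), Fraction(1, 2), Fraction(1)]
ONE = Fraction(1)

def dot(a, b):
    return max(Fraction(0), a + b - 1)

def imp(a, b):
    return min(ONE, 1 - a + b)

def valid(formula, nvars):
    """A formula (given as a function of its variable values) is valid iff
    1 <= f(alpha) for every assignment f; here 1 is the top, so 1 == f(alpha)."""
    return all(formula(*vs) == ONE for vs in product(chain, repeat=nvars))

# Prelinearity (a -> b) \/ (b -> a) is valid: the chain is totally ordered.
assert valid(lambda a, b: max(imp(a, b), imp(b, a)), 2)

# Contraction a -> a.a is NOT valid: it fails at a = 1/2.
assert not valid(lambda a: imp(a, dot(a, a)), 1)
print("prelinearity valid, contraction not valid on the 3-element chain")
```

This is exactly the kind of countermodel computation that distinguishes FLew from FLewc.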


Here, as an example, we show that the left weakening rule preserves validity in each integral FL-algebra A, i.e., if Γ, Δ ⇒ ϕ is valid in A then Γ, α, Δ ⇒ ϕ is also valid in it. Take any assignment f on A. Let f(α) = a, f(γ1 · . . . · γm) = g, f(δ1 · . . . · δn) = d and f(ϕ) = c, where Γ is γ1, . . . , γm and Δ is δ1, . . . , δn. By our assumption, g · d = f(γ1 · . . . · γm · δ1 · . . . · δn) ≤ f(ϕ) = c. On the other hand, by the integrality, g · a · d ≤ g · 1 · d = g · d. Hence, g · a · d ≤ c. Thus, Γ, α, Δ ⇒ ϕ is shown to be valid in A. Similarly, we can confirm that zero-boundedness, commutativity and contractivity validate the right weakening, exchange and contraction rules, respectively.

Exercise 9.6 Give a detailed proof of Lemma 9.1 for the case of FL.

Arguments on algebraic completeness (see Theorem 8.7) using Lindenbaum-Tarski algebras work well in the present case too. If a given formula ϕ is not provable in FL (and hence 1\ϕ is not provable either), then [1] ≰ [ϕ] holds in the Lindenbaum-Tarski algebra FFL of FL. Hence, under the canonical assignment, the value of ϕ is not greater than or equal to 1, and thus ϕ is not valid in FFL. Combining this with Lemma 9.1, we have the following.

Theorem 9.2 (Algebraic completeness of basic substructural logics) A formula α is provable in FL if and only if it is valid in every FL-algebra. The result holds for the other basic substructural logics FLe, FLw, FLc, FLew and FLec. In fact, algebraic completeness holds for any substructural logic.

Quite similarly, we can show algebraic completeness of each of the basic involutive substructural logics with respect to the corresponding class of involutive FLe-algebras. Similarly to the discussions in Chap. 8, we can show fundamental connections between varieties of FL-algebras and substructural logics, though we do not repeat the details. It can easily be shown that the class VFL of all FL-algebras is an equational class and hence is a variety by Theorem 8.15.
Let FL denote this variety. For each substructural logic L, the class VL is a variety and hence is a subvariety of the variety FL.

Theorem 9.3 (Subvarieties of FL and substructural logics) A class K of algebras of the same type is a subvariety of FL if and only if K is of the form VL for some substructural logic L. This connection holds also between subvarieties of the variety FLx of all FLx-algebras and substructural logics over FLx for any x ∈ {e, w, c, ew, ec}.

Residuation and Sequent Formulation

The above result reveals two sides of substructural logics. One side says that substructural logics are logics lacking some of the structural rules when they are formalized in sequent systems. This was in fact the common understanding of substructural logics in the early stage of their study, from which the name 'substructural logics' originated. Meanwhile, the other side came from algebra, which says that substructural


logics are logics for residuated structures where implications are residuals of fusions, as Theorem 9.2 shows. From a syntactic point of view, the law of residuation in FLe, for example, can be understood as the following equivalence: α, γ ⇒ β is provable in FLe iff γ ⇒ α → β is provable in FLe. By introducing the logical connective 'fusion', the left-hand side can be expressed as "α · γ ⇒ β is provable in FLe". But, when FLe is replaced by LJ, this is equivalent to saying that "α ∧ γ ⇒ β is provable". In this way, we get a syntactic form of the law of residuation between ∧ and → in LJ. This observation says that fusion is an explicit presentation, as a logical connective, of the comma in sequent systems. In other words, formalizing a logic in a sequent system can sometimes reveal the existence of a hidden connective 'fusion', with which 'implication' forms a residuation relation. (See Ono 2003 for further discussions.)

9.3 Residuations Over the Unit Interval

As a case study, we focus on the topic of residuated lattices over the unit interval [0, 1], which is interesting from both mathematical and logical points of view. Let us consider any commutative p.o. monoid ⟨U, ·, 1, ≤⟩ where U is the unit interval [0, 1], i.e., {x ∈ R : 0 ≤ x ≤ 1}, and ≤ is the natural total order of real numbers. Obviously, ⟨U, ≤⟩ forms a bounded lattice. We note that 1 is the greatest element as well as the monoidal unit. Such a monoid operation · on U is called a triangular norm (t-norm, in short) in fuzzy logic. More precisely, a t-norm is a binary operation on U which satisfies (1) associativity, (2) commutativity, (3) x · 1 = x for all x, and (4) monotonicity. The following are three typical examples of t-norms. The first two were already discussed in Chap. 6. The symbol × in the product t-norm below denotes the multiplication of real numbers.
• x · y = x ∧ y (= min{x, y}) (Gödel t-norm),
• x · y = max{x + y − 1, 0} (Łukasiewicz t-norm),
• x · y = x × y (product t-norm).
A particular feature of these commutative p.o. monoids determined by t-norms is that the set ⟨U, ≤⟩ is totally ordered and moreover forms a complete lattice. As the underlying set U is a set of real numbers, we can introduce some notions which are familiar in elementary analysis. Because of this, we use sup D and inf D, instead of ⋁D and ⋀D for each subset D of U, to denote the least upper bound and the greatest lower bound of D, respectively. Let g(x, y) be an arbitrary binary mapping defined over U. The mapping g is left-continuous if g preserves every supremum in both arguments, i.e.,


• g(x, sup C) = sup{g(x, z) : z ∈ C} for all x ∈ U and all C ⊆ U,
• g(sup C, y) = sup{g(z, y) : z ∈ C} for all y ∈ U and all C ⊆ U.
Similarly, g is defined to be right-continuous if g preserves every infimum in both arguments. When g satisfies g(x, y) = g(y, x) for all x, y, obviously one of the two conditions above suffices for g to be left-continuous. A mapping g is continuous if g is both left- and right-continuous. As each t-norm can be regarded as a binary mapping on U, we can say, for example, that a given t-norm is left- (or right-) continuous, or continuous.

Exercise 9.7 Show that each of the three t-norms above is continuous.

An interesting question is to see when a commutative p.o. monoid determined by a given t-norm · is residuated. This is answered as follows.

Lemma 9.4 A commutative p.o. monoid ⟨U, ·, 1, ≤⟩ with a t-norm · is residuated if and only if · is left-continuous.

Proof Suppose first that ⟨U, ·, 1, ≤⟩ is residuated. Let → be the residual of ·. Then, a · c ≤ b iff c ≤ a → b for all a, b, c ∈ U. To show that · is left-continuous, take any a ∈ U and any C ⊆ U. By the completeness of the real numbers, sup C always exists, as C is bounded. By the monotonicity of ·, the inequality a · c ≤ a · sup C holds for all c ∈ C. Thus, a · sup C is an upper bound of the set {a · z : z ∈ C}. Next suppose that b is any upper bound of this set. Then a · z ≤ b for all z ∈ C. By the law of residuation, this is equivalent to saying that z ≤ a → b for all z ∈ C. This means that a → b is an upper bound of C. Hence, sup C ≤ a → b, and thus a · sup C ≤ b by the law of residuation. Therefore, a · sup C is the least upper bound of {a · z : z ∈ C}, i.e., a · sup C = sup{a · z : z ∈ C}. This means that · is left-continuous, as · is commutative.

Conversely, suppose that · is left-continuous. For given a, b ∈ U, let D = {z ∈ U : a · z ≤ b}. Thus a · z ≤ b for each z ∈ D, which implies that b is an upper bound of the set {a · z : z ∈ D}.
By the left-continuity, a · sup D is equal to sup{a · z : z ∈ D}, the supremum of the set {a · z : z ∈ D}. Hence a · sup D ≤ b, and thus sup D itself belongs to D. Therefore sup D is in fact the maximum element of D, so it can be expressed as max D. As max D is determined only by a and b, let us express it as a → b. It remains to show that → is the residual of ·. So suppose that a · z ≤ b for an element z. Then z ∈ D by the definition, and hence z ≤ sup D = max D = a → b. Conversely, suppose that z ≤ a → b = max D. Then by the monotonicity, a · z ≤ a · max D ≤ b. Hence, the law of residuation holds between · and →, and the residual → is expressed explicitly as a → b = max{z ∈ U : a · z ≤ b}. □

The above lemma can easily be extended to a more general setting. A p.o. semigroup ⟨A, ·, ≤⟩ is said to be a lattice-ordered semigroup (abbreviated as ℓ.o. semigroup) when ⟨A, ≤⟩ forms a lattice. If a residuated ℓ.o. semigroup is moreover a monoid, it is none other than a residuated lattice. An ℓ.o. semigroup is said to be complete when its lattice part is complete, i.e., both ⋁C and ⋀C always exist for every subset C of the underlying set. Then we can show the following, similarly to Lemma 9.4.


Theorem 9.5 A complete ℓ.o. semigroup ⟨A, ·, ≤⟩ is residuated iff the infinite distributivity (⋁C) · x = ⋁{z · x : z ∈ C} and x · (⋁C) = ⋁{x · z : z ∈ C} holds for every C ⊆ A and every x ∈ A.

Let us take an arbitrary finite lattice A and take the meet ∧ for ·. By identifying A with ⟨A, ∧, ≤⟩, A can be regarded as a complete ℓ.o. semigroup. As A is finite, the distributivity in Theorem 9.5 is just the usual distributive law. Thus, in this setting, Theorem 9.5 says that a finite lattice is residuated iff it is distributive. This is exactly what we showed in Lemma 7.1, which says that every finite distributive lattice forms a Heyting algebra.

Remark 9.4 (Residual of the product t-norm) We know already that the residual of the Gödel t-norm (Łukasiewicz t-norm) is Gödel implication (Łukasiewicz implication, respectively). We consider here the residual of the product t-norm, which we call product implication. Recall first that the second example in Remark 9.3 showed that the residual of the product (or multiplication) in the structure R+ is none other than the division /. Thus, we may expect that the product implication will also be division, but some modifications are needed, as explained below. First, since the product t-norm is defined over the unit interval U, the real number b/a cannot be taken for the value a → b when a < b, as b/a is then greater than 1. Also, as 0 ∈ U, the value 0 → b must be defined. As indicated in the proof of Lemma 9.4, the product implication must satisfy a → b = max{z ∈ U : a × z ≤ b}. Therefore, the product implication can be expressed as follows:

a → b = 1 (if a ≤ b),  b/a (otherwise).  (9.3)
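The maximum formula a → b = max{z ∈ U : a · z ≤ b} from the proof of Lemma 9.4 can be compared against the known closed forms on a finite grid. A Python sketch (the grid size and names are ours; since the grid is not closed under multiplication, the product case is checked via the law of residuation rather than by enumeration):

```python
from fractions import Fraction
from itertools import product as cartesian

grid = [Fraction(i, 8) for i in range(9)]      # finite subchain of [0, 1]

def residual(tnorm, a, b):
    """max{z in grid : tnorm(a, z) <= b}; nonempty since tnorm(a, 0) = 0 <= b."""
    return max(z for z in grid if tnorm(a, z) <= b)

godel = min
luka = lambda x, y: max(Fraction(0), x + y - 1)

for a, b in cartesian(grid, repeat=2):
    # Goedel implication: 1 if a <= b, else b.
    assert residual(godel, a, b) == (Fraction(1) if a <= b else b)
    # Lukasiewicz implication: min{1, 1 - a + b}.
    assert residual(luka, a, b) == min(Fraction(1), 1 - a + b)

# For the product t-norm, check the law of residuation directly against
# its closed form: 1 if a <= b, else b/a.
def prod_imp(a, b):
    return Fraction(1) if a <= b else b / a

for a, b, z in cartesian(grid, repeat=3):
    assert (a * z <= b) == (z <= prod_imp(a, b))
print("residual computations agree with the closed forms")
```

The Gödel and Łukasiewicz residuals happen to land back on the grid, which is why plain enumeration suffices for them.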

Suppose that a given commutative p.o. monoid ⟨U, ·, 1, ≤⟩ with a t-norm · is left-continuous. Since it is residuated by Lemma 9.4, we can naturally introduce an FLe-algebra ⟨U, max, min, ·, →, 1, 0⟩. This is in fact an FLew-algebra, as 0 ≤ x ≤ 1 holds for all x. In other words, every left-continuous t-norm determines an FLew-algebra, and hence a substructural logic over FLew. This is a basic idea of mathematical fuzzy logic developed by P. Hájek, who tried to give a sound mathematical basis to the fuzzy set theory introduced by L.A. Zadeh. Along this line, Hájek introduced the basic logic BL, and then Esteva and Godo introduced a weaker logic MTL (monoidal t-norm logic) for fuzzy logic (Hájek 1998; Esteva and Godo 2001). The logic MTL is an extension of FLew obtained by adding the axiom of prelinearity: (α → β) ∨ (β → α), and BL is obtained from MTL by adding the axiom of divisibility: (α ∧ β) → (α · (α → β)).

Exercise 9.8 Show that the axiom of divisibility is valid in the FLew-algebra determined by each of the three t-norms given at the beginning of the present section.

It is known that the following completeness results hold for them.


Theorem 9.6
1. A formula α is provable in BL iff it is valid in every FLew-algebra determined by a continuous t-norm.
2. A formula α is provable in MTL iff it is valid in every FLew-algebra determined by a left-continuous t-norm.

Gödel-Dummett logic GD (infinite-valued Łukasiewicz logic Ł) is determined by the Gödel t-norm (the Łukasiewicz t-norm, respectively). Similarly, define product logic Π to be the substructural logic determined by the product t-norm. Each of these three logics is finitely axiomatizable over BL: GD is obtained by adding α → (β → α) (weakening axiom), Ł by adding ¬¬α → α (the law of double negation), and Π by adding ¬¬α → ((α → (α · β)) → (β · ¬¬β)). Extensions of GD and Ł are described in Theorem 8.21 and at the end of Sect. 6.5, respectively. Both have infinitely many extensions, while it is shown that classical logic is the single consistent proper extension of Π.
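In the spirit of Exercise 9.8, both BL axioms can be checked pointwise for the three t-norms on a finite chain: prelinearity as max{a → b, b → a} = 1 and divisibility in its algebraic form min{a, b} = a · (a → b). A Python sketch (chain size and names are ours):

```python
from fractions import Fraction
from itertools import product as cartesian

chain = [Fraction(i, 12) for i in range(13)]
one, zero = Fraction(1), Fraction(0)

# Each entry: (fusion, its residual as derived earlier in this section).
tnorms = {
    "Goedel": (min,
               lambda a, b: one if a <= b else b),
    "Lukasiewicz": (lambda a, b: max(zero, a + b - 1),
                    lambda a, b: min(one, 1 - a + b)),
    "product": (lambda a, b: a * b,
                lambda a, b: one if a <= b else b / a),
}

for name, (dot, imp) in tnorms.items():
    for a, b in cartesian(chain, repeat=2):
        # Prelinearity: (a -> b) \/ (b -> a) takes the value 1.
        assert max(imp(a, b), imp(b, a)) == one
        # Divisibility: a /\ b = a . (a -> b).
        assert min(a, b) == dot(a, imp(a, b))
    print(name, "satisfies prelinearity and divisibility on the chain")
```

Of course, a finite check is only evidence; the full statement of Exercise 9.8 concerns all of [0, 1].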

Chapter 10

Modal Algebras

The semantical study of modal logics has already been developed successfully by using Kripke semantics. In the present chapter, we discuss an algebraic approach to modal logics. Since this algebraic approach has many parallels with what we explained in the previous chapters of Part II, it will be explained rather briefly. After introducing modal algebras, we present the Jónsson-Tarski theorem, which is an extension of Stone's representation theorem to modal algebras. The Jónsson-Tarski theorem explains how Kripke semantics can be viewed from an algebraic point of view, and hence clarifies a link between algebraic semantics and Kripke semantics. In the last section, we give an algebraic proof of the Gödel translation of superintuitionistic logics into modal logics over S4.

10.1 Modal Algebras

A modal algebra is defined to be a Boolean algebra with a unary operator □, which gives an interpretation of the modal operator. We use the symbol □ for both an algebraic operation and a logical connective, as before. For further information on modal algebras, see e.g. Blackburn et al. (2001), Kracht (1999).

Definition 10.1 (Modal algebras) An algebra A = ⟨A, ∨, ∧, →, □, 0⟩ is a modal algebra iff
1. ⟨A, ∨, ∧, →, 0⟩ is a Boolean algebra,
2. □ is a unary operator on A satisfying (1) □1 = 1, and (2) □(x ∧ y) = □x ∧ □y for all x, y ∈ A.

It is obvious that the class of modal algebras forms an equational class, i.e., a class defined by a set of equations.

© Springer Nature Singapore Pte Ltd. 2019 H. Ono, Proof Theory and Algebra in Logic, Short Textbooks in Logic, https://doi.org/10.1007/978-981-13-7997-0_10



Exercise 10.1
1. Show that in every modal algebra the following monotonicity holds: for all x, y, if x ≤ y then □x ≤ □y.
2. Show that the equality (2) in Definition 10.1 can be replaced by the inequality: for all x, y, □(x → y) ≤ □x → □y.

The equality (1) in the second condition of Definition 10.1 is an algebraic expression of the rule of necessitation, i.e., if α is provable then □α is also provable. Also, the equality (2) is an algebraic expression of K, i.e., □(α → β) → (□α → □β). This equality (2) can also be expressed by its dual form (2'): ♦(x ∨ y) = ♦x ∨ ♦y for all x, y, where the operator ♦ is defined by ♦z = ¬□¬z for any element z. As the axiom scheme K is always valid in our modal algebras, they can be regarded as algebras for normal modal logics, i.e., extensions of the modal logic K.

Now it is an easy task to generalize this idea and to introduce a class of modal algebras for each standard normal logic. For example, consider the axiom schemes introduced in Chap. 4, that is, D: □α → ♦α, T: □α → α, 4: □α → □□α, B: α → □♦α, and 5: ♦α → □♦α. They are expressed by the following inequalities of modal algebras; for all x,

d: □x ≤ ♦x,
t: □x ≤ x,
4: □x ≤ □□x,
b: x ≤ □♦x,
5: ♦x ≤ □♦x,

respectively. We notice that these inequalities can be replaced by their dual forms (note that d is self-dual):

t′: x ≤ ♦x,
4′: ♦♦x ≤ ♦x,
b′: ♦□x ≤ x,
5′: ♦□x ≤ □x,

respectively. To see this, let us consider b, for example. Taking ¬y for x, we have ¬y ≤ □♦¬y, and hence ¬□♦¬y ≤ ¬¬y = y. On the other hand, ♦□y = ¬□♦¬y holds by a property of Boolean algebras (see Lemma 6.2). Thus, we have ♦□y ≤ y, which is b′. The converse direction can be shown in a similar way.

Exercise 10.2 Let A be an arbitrary modal algebra in which both b and 5 hold. Show that 4 also holds in A.

Example 10.1 If □x ≤ x always holds in a modal algebra then □□x ≤ □x also holds in it, because of the monotonicity (see Exercise 10.1). But the converse does not always hold. Consider the algebra A = ⟨{0, 1}, ∨, ∧, →, □, 0⟩ whose non-modal part ⟨{0, 1}, ∨, ∧, →, 0⟩ is the two-valued Boolean algebra, in which □0 = □1 = 1. Clearly, A is a modal algebra in which □□x ≤ □x always holds, as □a = 1 for all a. On the other hand, □0 = 1 ≰ 0.

Exercise 10.3 Give a modal algebra in which □□x ≤ □x does not always hold.
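Exercise 10.2 can be sanity-checked by exhaustive search on a small Boolean algebra: enumerate every unary operator on the four-element powerset algebra satisfying the conditions of Definition 10.1 together with b and 5, and confirm that each one also satisfies 4. A Python sketch (the frozenset encoding is our own):

```python
from itertools import product

elems = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
top = frozenset({0, 1})

def neg(x):
    return top - x

def boxes():
    # All unary maps on the 4-element Boolean algebra with box 1 = 1 and
    # box(x /\ y) = box x /\ box y (the conditions of Definition 10.1).
    for vals in product(elems, repeat=4):
        box = dict(zip(elems, vals))
        if box[top] == top and all(box[x & y] == box[x] & box[y]
                                   for x in elems for y in elems):
            yield box

def dia(box, x):
    return neg(box[neg(x)])

count = 0
for box in boxes():
    b = all(x <= box[dia(box, x)] for x in elems)               # b: x <= box dia x
    five = all(dia(box, x) <= box[dia(box, x)] for x in elems)  # 5: dia x <= box dia x
    if b and five:
        count += 1
        assert all(box[x] <= box[box[x]] for x in elems)        # 4: box x <= box box x
print(count, "modal operators satisfy b and 5; every one of them satisfies 4")
```

This is only evidence on one small algebra; the exercise itself asks for an algebraic derivation valid in every modal algebra.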


In Chap. 4, we introduced standard modal logics like KD, KT, K4, KB, S4 and S5. Corresponding to each logic L among them, we can introduce a subclass of modal algebras, called the class of L-modal algebras (or, simply, L-algebras), by collecting all modal algebras in which the algebraic inequations associated with the axiom schemes of L are valid. For example, KD-algebras are modal algebras in which the inequation □v ≤ ♦v is valid, and S4-algebras are modal algebras in which both □v ≤ v and □v ≤ □□v are valid, as S4 is KT4. Of course, the notion of L-modal algebras can be introduced for any other axiomatic extension L of the modal logic K. That is, for a given normal modal logic L, a modal algebra A is an L-modal algebra if every formula provable in L is valid in A. We define VL to be the class of all L-modal algebras, which is in fact a variety. Algebraic completeness of L with respect to the class VL, and the connection between subvarieties of VL and normal modal logics over L, can be shown similarly as before.

Example 10.2 (Topological spaces and S4-algebras) Let X = ⟨X, τ⟩ be a topological space. Here X is a set and τ is a collection of subsets of X satisfying that (1) both ∅ and X belong to τ, (2) every union of members of τ belongs to τ, and (3) every finite intersection of members of τ belongs to τ. Each member of τ is called an open set. Obviously, ℘(X) = ⟨℘(X), ∪, ∩, →, ∅⟩ forms a powerset Boolean algebra. For each subset W of X, define I(W) = ⋃{U ∈ τ : U ⊆ W}, which is in fact the largest open subset of W. The operator I is called an interior operator. Then, ℘(X) with the operator I forms an S4-algebra. Because of this, S4-algebras are sometimes called interior algebras and also topological Boolean algebras.

Exercise 10.4 Show that ℘(X) with the operator I introduced in Example 10.2 is in fact an S4-algebra.

Finite Embeddability Property of S4-Algebras

We will show that the class of all S4-algebras has the finite embeddability property.
Our proof is based essentially on J.C.C. McKinsey's proof in McKinsey (1941). As a consequence, the finite model property of S4 follows. Although a simpler proof of the finite model property of S4 is well known, obtained by applying the filtration method to Kripke models, it is interesting to see how well algebraic methods work for showing the finite model property.

Theorem 10.1 The class of all S4-algebras has the finite embeddability property.

Proof Suppose that a finite partial S4-algebra B of an S4-algebra A (= ⟨A, ∨, ∧, →, □, 0⟩) is given. Let E be a finite subset of A defined by E = {0, 1} ∪ B, and let D be the Boolean subalgebra of the Boolean reduct (i.e., the Boolean part) of A generated by E. The algebra D is also finite. Note here that for some d ∈ D, □d may not belong to D. We define a new operator □′ on D by □′d = ⋁{z ∈ D : □z = z and z ≤ d}. Clearly, the set {z ∈ D : □z = z and z ≤ d} is non-empty, as it contains 0, and □′d ≤ d holds for any d ∈ D by the definition.


We now confirm that □′d = □d as long as □d ∈ D. For each z ∈ D, if □z = z and z ≤ d hold, then z = □z ≤ □d holds, and hence □′d ≤ □d. On the other hand, □d is a member of {z ∈ D : □z = z and z ≤ d} when □d belongs to D. For, □□d = □d and □d ≤ d hold because □ is an S4 modal operator, i.e., a modal operator satisfying t and 4, on A. Therefore □d ≤ □′d, and hence □′d = □d.

To show that ⟨D, ∨, ∧, →, □′, 0⟩ is a modal algebra, we need to prove that □′(d ∧ e) = □′d ∧ □′e for all d, e ∈ D. Since □′ is a monotone operator by the definition, □′(d ∧ e) ≤ □′d ∧ □′e holds. For the converse direction, □′d ∧ □′e = ⋁{z ∧ w ∈ D : □z = z, □w = w, z ≤ d and w ≤ e} ≤ ⋁{u ∈ D : □u = u and u ≤ d ∧ e}, as □(z ∧ w) = z ∧ w and z ∧ w ≤ d ∧ e. Since the right-hand side of the inequality is □′(d ∧ e), we have □′d ∧ □′e ≤ □′(d ∧ e).

As we have already noted that □′d ≤ d, it remains to show that □′d ≤ □′□′d holds for any d ∈ D, by which we can conclude that □′ satisfies the conditions of an S4 modal operator. Suppose that □z = z and z ≤ d for z ∈ D. Then z ≤ □′d by the definition of □′. This means that the set {z ∈ D : □z = z and z ≤ d} is included in the set {z ∈ D : □z = z and z ≤ □′d}. Hence, □′d = ⋁{z ∈ D : □z = z and z ≤ d} ≤ ⋁{z ∈ D : □z = z and z ≤ □′d} = □′□′d. □

10.2 Canonical Extensions and Jónsson-Tarski Theorem

We will show an extension of Stone's representation theorem for Boolean algebras (Theorem 6.5) to modal algebras, called the Jónsson-Tarski theorem. The proof goes similarly to the proof of Theorem 7.14. A (modal) frame F is a pair ⟨W, R⟩ of a nonempty set W and a binary relation R on W. Consider the powerset Boolean algebra ℘(F) = ⟨℘(W), ∪, ∩, →, ∅⟩. For each subset U of W, define □U = {x ∈ W : y ∈ U for every y such that xRy}. It is easy to see that □ is a modal operator on ℘(F). The modal algebra constructed from a frame F in this way is called the dual modal algebra of F and is denoted by F+.

Exercise 10.5 Let F+ be the dual modal algebra of a frame F = ⟨W, R⟩. Show the following statements.
1. R is reflexive, i.e., xRx holds for all x ∈ W, if and only if t holds in F+.
2. R is transitive, i.e., xRy and yRz imply xRz for all x, y, z ∈ W, if and only if 4 holds in F+.
3. R is symmetric, i.e., xRy implies yRx for all x, y ∈ W, if and only if b holds in F+.

Next, for a given modal algebra A, let D(A) be the set of all maximal filters of A. (We notice that all prime filters are maximal in the present case, as the non-modal part of A is a Boolean algebra.) We define a binary relation RA on D(A) by:

F RA G if and only if for all x ∈ A, □x ∈ F implies x ∈ G.


Then the pair ⟨D(A), RA⟩ forms a frame, which is called the dual (modal) frame of the modal algebra A and is denoted by A+. Consider now the modal algebra (A+)+, i.e., the dual modal algebra of the dual frame of A, for any given modal algebra A. This algebra is called the canonical extension (also the canonical embedding algebra) of the modal algebra A and is denoted by Aδ. Similarly to the case of Heyting algebras, take the mapping σ from A to Aδ defined by

σ(a) = {F ∈ D(A) : a ∈ F}  (10.1)

for each a ∈ A. By referring to the proof of Theorem 7.14, to conclude that σ is an embedding it remains to show that σ(□a) = □σ(a). Suppose that F ∈ σ(□a), which means that □a ∈ F. If F R_A G then a ∈ G, and hence G ∈ σ(a), for each G ∈ D(A). Thus, F ∈ □σ(a). Hence we have σ(□a) ⊆ □σ(a). Conversely, suppose that F ∈ □σ(a). Define U = {x ∈ A : □x ∈ F}. As 1 ∈ U, the set U is nonempty. Let H be the filter generated by U. Obviously, F R_A H holds. Thus, a ∈ H by our assumption. It means that there exist elements b1, …, bn ∈ U such that b1 ∧ … ∧ bn ≤ a. Therefore □b1 ∧ … ∧ □bn ≤ □a. Since □b1, …, □bn ∈ F, □a must also be in F. Thus, F ∈ σ(□a), and hence □σ(a) ⊆ σ(□a) holds. Consequently, we have the following theorem by Jónsson and Tarski (1951).

Theorem 10.3 (Jónsson-Tarski theorem) Every modal algebra A can be embedded into its canonical extension Aδ.

Similarly to Corollary 7.15, we can show the following.

Lemma 10.4 Every finite modal algebra A is isomorphic to its canonical extension Aδ.

Exercise 10.6 Give a proof of the above lemma.
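For a finite powerset modal algebra, the maximal filters are exactly the principal filters F_w = {U : w ∈ U} generated by the points, so the dual frame can be computed directly. The following sketch (our own small example, in the spirit of Lemma 10.4 and Exercise 10.6) identifies each maximal filter with its generating point and checks that the dual relation R_A recovers the original relation, so that σ is an isomorphism here.

```python
from itertools import combinations

W = {0, 1, 2}
R = {(0, 0), (1, 1), (2, 2), (0, 1)}  # reflexive and transitive

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

A = subsets(W)

def box(U):
    return frozenset(x for x in W if all(y in U for (a, y) in R if a == x))

# Maximal filters of the finite powerset algebra: one per point w, namely
# F_w = {U : w in U}.  We identify each filter with its point.
def R_A(v, w):
    # F_v R_A F_w iff for all U: box U in F_v implies U in F_w
    return all(w in U for U in A if v in box(U))

# On this finite algebra the dual relation coincides with R ...
assert {(v, w) for v in W for w in W if R_A(v, w)} == R

# ... and sigma(U) = {w : U in F_w} is just U itself, so sigma commutes with box:
sigma = lambda U: frozenset(w for w in W if w in U)
for U in A:
    assert sigma(box(U)) == box(sigma(U)) == box(U)
```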

10.3 Kripke Semantics from Algebraic Viewpoint

Till the beginning of the 1960s, the semantical study of nonclassical logics was mostly limited to algebraic methods. But after works on relational semantics, often called Kripke semantics due to important contributions by S. Kripke, relational semantics became the mainstream of semantics for nonclassical logics, and has been quite successful in particular for modal logics and superintuitionistic logics (Kripke 1963a, b, 1965). There are obvious merits of Kripke semantics: it is much more intuitively understandable and philosophically persuasive than algebraic semantics, and it is also mathematically tractable. As a matter of fact, each Kripke frame consists of a quite simple structure, i.e., a set (of possible worlds) with binary relations, called accessibility relations. These merits have promoted a rapid development of the study of extensions and applications of modal logic, like temporal logic and epistemic logic.


Stone's representation theorem and the Jónsson-Tarski theorem on canonical extensions of Heyting algebras and modal algebras discussed so far will give us a clear view of the connections between Kripke semantics and algebraic semantics. We will explain these connections first for modal logics and then briefly for superintuitionistic logics.

Kripke Frames for Modal Logics

Kripke semantics for modal logics consists of Kripke frames, i.e., modal frames, and interpretations of formulas in them. Here, we give a brief sketch of Kripke semantics. As touched on in the previous section, each Kripke frame (for modal logics) is of the form ⟨W, R⟩ where W is a nonempty set and R a binary relation on W. Intuitively, W and R are understood as the set of all possible worlds and an accessibility relation between worlds, respectively. To define the validity of given formulas in a Kripke frame ⟨W, R⟩, we use valuations on it. A valuation V is a mapping from the set of all propositional variables to ℘(W). Each valuation V determines a binary relation |= between the set of worlds and the set of formulas, called the truth relation determined by V (on ⟨W, R⟩), which is defined inductively as follows.

• w |= p iff w ∈ V(p) for each propositional variable p,
• w |= α ∨ β iff w |= α or w |= β,
• w |= α ∧ β iff w |= α and w |= β,
• w |= □α iff v |= α for each v such that wRv,
• w ⊭ 0.
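The inductive clauses above can be implemented directly. The following sketch (the formula encoding and the sample frame are our own choices) evaluates modal formulas at a world of a two-element frame.

```python
# Formulas as nested tuples: ('p', name), ('or', a, b), ('and', a, b),
# ('box', a), ('bot',)  -- following the truth clauses above.
def holds(w, phi, R, V):
    op = phi[0]
    if op == 'p':
        return w in V[phi[1]]
    if op == 'or':
        return holds(w, phi[1], R, V) or holds(w, phi[2], R, V)
    if op == 'and':
        return holds(w, phi[1], R, V) and holds(w, phi[2], R, V)
    if op == 'box':
        return all(holds(v, phi[1], R, V) for (a, v) in R if a == w)
    if op == 'bot':
        return False
    raise ValueError(op)

W = {0, 1}
R = {(0, 0), (0, 1), (1, 1)}
V = {'p': {1}}

assert not holds(0, ('box', ('p', 'p')), R, V)  # 0 sees 0, where p fails
assert holds(1, ('box', ('p', 'p')), R, V)      # 1 sees only 1, where p holds
```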

When w |= α holds, we read it as "the formula α is true at the world w." It can be easily shown that the truth relation |= is determined uniquely by V. Thus, we can safely define V(ϕ) by {w ∈ W : w |= ϕ} for each formula ϕ, by extending the original valuation V in a consistent way. A Kripke model is a triple ⟨W, R, V⟩ such that ⟨W, R⟩ is a Kripke frame and V a valuation on it. A formula ϕ is true in a Kripke model ⟨W, R, V⟩ if w |= ϕ for every w ∈ W, where |= is the truth relation determined by V. That is, ϕ is true in ⟨W, R, V⟩ if and only if V(ϕ) = W. Moreover, when the formula ϕ is true in a Kripke model ⟨W, R, V⟩ for every valuation V on the Kripke frame ⟨W, R⟩, it is said to be valid in ⟨W, R⟩. Now let L∗(F) be the set of all formulas which are valid in a given Kripke frame F. The following lemma says that the validity of a formula ϕ in a Kripke frame F for modal logics can be expressed as the validity of ϕ in its dual modal algebra F⁺.

Lemma 10.5 L∗(F) = L(F⁺) holds for each Kripke frame F for modal logics.

Proof Let F be a Kripke frame ⟨W, R⟩. For any valuation V on F, V(p) is a subset of W, i.e., V(p) ∈ ℘(W) for each propositional variable p. On the other hand, for any assignment f on the modal algebra F⁺, f(p) is an element of ℘(W) for each propositional variable p. Now suppose that f(p) = V(p) holds for each propositional variable p, where f is an assignment on F⁺ and V is a valuation on the Kripke frame F. Then, using induction on the length of a formula, we can show that f(ϕ) = V(ϕ)


for every formula ϕ. To confirm this, consider the case where ϕ is of the form □ψ, for instance. An element w belongs to f(□ψ), which is equal to □f(ψ), iff v ∈ f(ψ) for all v such that wRv (by the definition of □ on F⁺), iff v ∈ V(ψ) for all v such that wRv (by the hypothesis of induction), iff v |= ψ for all v such that wRv (by the definition of V), iff w |= □ψ, which means w ∈ V(□ψ).

To show our lemma, suppose first that α is valid in the modal algebra F⁺. Let V be an arbitrary valuation on the Kripke frame F. Define an assignment f on F⁺ by f(p) = V(p) for each propositional variable p. Then f(ϕ) = V(ϕ) for every formula ϕ, as shown above. Since α is valid in F⁺, we have V(α) = f(α) = W. This means that w |= α holds for any w ∈ W. Hence, α is true in the Kripke model ⟨W, R, V⟩, where V is an arbitrary valuation. Thus α is valid in the Kripke frame F. Next, suppose that α is valid in F. For each assignment g on F⁺, let V be a valuation on F satisfying V(p) = g(p) for each propositional variable p. Using the same argument as above, we have g(α) = W for any assignment g. As W is the greatest element of the algebra F⁺, α is valid in it. ∎

On the other hand, the dual equality L(A) = L∗(A₊) does not always hold between a given modal algebra A and its dual frame A₊. We can prove only the following inclusion.

Lemma 10.6 The inclusion relation L∗(A₊) ⊆ L(A) holds for any modal algebra A. Therefore, L(Aδ) ⊆ L(A) holds. The equality L(Aδ) = L(A) holds when A is a finite modal algebra.

Proof Let f be any assignment on A. For each propositional variable p, we define a valuation V on A₊ by V(p) = {F ∈ D(A) : f(p) ∈ F}, i.e., F |= p iff f(p) ∈ F for each F ∈ D(A). Then, we can show that F |= ϕ iff f(ϕ) ∈ F for each formula ϕ and each F ∈ D(A), by using induction and also the proof of the Jónsson-Tarski theorem, in particular the equality σ(□f(ψ)) = □σ(f(ψ)). (As for σ, see Eq. (10.1).)
Now, suppose that a formula α is not valid in A and f is an assignment on A such that f(α) ≠ 1. Then by taking f(α) for a and {1} for F in Theorem 7.12 (as the non-modal part of A is a Boolean algebra), we can show that there exists a maximal filter G of A such that f(α) ∉ G. Using the above relation between |= and f, we can derive that G ⊭ α. Hence, α is not valid in the Kripke frame A₊. The second statement follows immediately from this with Lemma 10.5, and the third statement is a consequence of Lemma 10.4. ∎

The converse inclusion would hold if we could find, for each subset U of D(A), an element a ∈ A such that U = {F ∈ D(A) : a ∈ F}. But we cannot expect that this happens in general.

Definition 10.2 A modal logic L is Kripke complete with respect to a class C of Kripke frames, when a formula ϕ is provable in L if and only if it is valid in every Kripke frame in C. A modal logic L is Kripke complete if it is Kripke complete with respect to some class of Kripke frames.

By Lemmas 10.4 and 10.6, the following holds.


Corollary 10.7 If a modal logic has the finite model property then it is Kripke complete.

Here, we give the following remark. Suppose that L is Kripke complete with respect to a set of Kripke frames {⟨Wi, Ri⟩ : i ∈ I}. Then L can be expressed as ⋂_{i∈I} L∗(⟨Wi, Ri⟩). Now, let W be the disjoint union ⋃_{i∈I} Wi of the sets {Wi : i ∈ I}, and let R be the binary relation on W satisfying that xRy holds iff there exists i ∈ I such that x, y ∈ Wi and x Ri y. It is easy to see that L∗(⟨W, R⟩) = ⋂_{i∈I} L∗(⟨Wi, Ri⟩) holds for this ⟨W, R⟩. Hence, L can be characterized by a single Kripke frame ⟨W, R⟩.

Recall that VL denotes the class of all L-algebras for any normal modal logic L, which is in fact a variety.

Definition 10.3 A normal modal logic L is canonical if VL is closed under canonical extensions. That is, Aδ belongs to VL whenever A belongs to it.

Corollary 10.8 Every canonical modal logic is Kripke complete.

Proof Suppose that a modal logic L is canonical and that a given formula α is not provable in L. By the algebraic completeness of normal modal logics mentioned in Sect. 10.1, there exists an algebra A ∈ VL in which α is not valid. By Lemma 10.6, α is not valid in the Kripke frame A₊. It remains to show that every formula provable in L is valid in the Kripke frame A₊. As L is canonical, the algebra Aδ (= (A₊)⁺) belongs to VL. From this with Lemma 10.5, it follows that every formula provable in L is valid in the Kripke frame A₊. ∎

On the other hand, there exist Kripke incomplete modal logics (Thomason 1974; Fine 1974), and in fact uncountably many Kripke incomplete modal logics over S4 (Rybakov 1977).

Kripke Frames for Superintuitionistic Logics

What we have mentioned above holds also for algebraic semantics and Kripke semantics of superintuitionistic logics. So, instead of repeating arguments, we will give a short remark. A Kripke frame (for superintuitionistic logics) is any partially ordered set ⟨S, ≤⟩.
A valuation V on a given Kripke frame for superintuitionistic logics is again a mapping from the set of all propositional variables to ℘(S), but in this case V(p) must be an upward closed subset of S for each propositional variable p. Each valuation V determines a binary relation |= between the set of worlds and the set of formulas, which is defined inductively as follows.

• w |= p iff w ∈ V(p) for each propositional variable p,
• w |= α ∨ β iff w |= α or w |= β,
• w |= α ∧ β iff w |= α and w |= β,
• w |= α → β iff for each v such that w ≤ v, if v |= α then v |= β,
• w ⊭ 0.


It can be shown by using induction that V(ϕ) is always upward closed for any formula ϕ. Truth in a given Kripke model and validity in a given Kripke frame can be defined in the same way as before. Then, the result for Kripke frames for superintuitionistic logics which corresponds to Lemma 10.5 holds between any partially ordered set and its dual Heyting algebra. Also, the result corresponding to Lemma 10.6 holds between any Heyting algebra and its dual intuitionistic frame. To verify these results, we use Stone's representation theorem for Heyting algebras (Theorem 7.14) instead of the Jónsson-Tarski theorem.
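The claim that V(ϕ) is always upward closed can be observed on a small poset. In the following sketch (our own example), the truth set of the law of excluded middle p ∨ ¬p is upward closed, yet fails at the bottom world, illustrating how intuitionistic Kripke models refute classically valid formulas.

```python
# A poset given as a set of pairs (v, w) meaning v <= w (reflexive, transitive).
S = {0, 1, 2}
LEQ = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}  # 0 below the incomparable 1, 2

def holds(w, phi, V):
    op = phi[0]
    if op == 'p':
        return w in V[phi[1]]
    if op == 'or':
        return holds(w, phi[1], V) or holds(w, phi[2], V)
    if op == 'and':
        return holds(w, phi[1], V) and holds(w, phi[2], V)
    if op == 'imp':  # w |= a -> b iff every v >= w with v |= a has v |= b
        return all((not holds(v, phi[1], V)) or holds(v, phi[2], V)
                   for (a, v) in LEQ if a == w)
    if op == 'bot':
        return False
    raise ValueError(op)

def upward_closed(U):
    return all(w in U for (v, w) in LEQ if v in U)

V = {'p': {1}}          # upward closed: only 1 lies above 1
assert upward_closed(V['p'])

neg_p = ('imp', ('p', 'p'), ('bot',))
lem = ('or', ('p', 'p'), neg_p)
truth_set = {w for w in S if holds(w, lem, V)}
assert upward_closed(truth_set)
assert 0 not in truth_set   # excluded middle fails at the bottom world
```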

10.4 Gödel Translation

As shown above, there are certain connections between formulas provable in intuitionistic logic and formulas provable in modal logic S4, and also between Heyting algebras and modal algebras from an algebraic point of view. Actually, there exists a translation of intuitionistic formulas into modal formulas such that a formula is provable in intuitionistic logic if and only if its translation is provable in S4. This translation is called the Gödel translation or the Gödel-McKinsey-Tarski translation, as it was discovered by K. Gödel and also by J.C.C. McKinsey and A. Tarski (in Gödel 1933; McKinsey and Tarski 1948). The existence of such a translation will partly clarify the similarity mentioned above.

Definition 10.4 (Gödel translation) The Gödel translation T of intuitionistic formulas into modal formulas is defined inductively as follows.

• T(p) = □p for every propositional variable p,
• T(0) = 0,
• T(α ∨ β) = T(α) ∨ T(β),
• T(α ∧ β) = T(α) ∧ T(β),
• T(α → β) = □(¬T(α) ∨ T(β)).
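The clauses of the translation can be transcribed directly into a recursive function. The following sketch (our own encoding of formulas as nested tuples) implements T and applies it to p ∨ ¬p, where ¬p abbreviates p → 0.

```python
# Formulas as tuples: ('p', name), ('bot',), ('or', a, b), ('and', a, b),
# ('imp', a, b); modal formulas additionally use ('box', a) and ('not', a).
def T(phi):
    op = phi[0]
    if op == 'p':
        return ('box', phi)
    if op == 'bot':
        return phi
    if op in ('or', 'and'):
        return (op, T(phi[1]), T(phi[2]))
    if op == 'imp':
        return ('box', ('or', ('not', T(phi[1])), T(phi[2])))
    raise ValueError(op)

p = ('p', 'p')
neg_p = ('imp', p, ('bot',))

# T(p or not p) = box p  or  box(not box p or 0):
assert T(('or', p, neg_p)) == \
    ('or', ('box', p), ('box', ('or', ('not', ('box', p)), ('bot',))))
```

Since ¬□p ∨ 0 is Boolean-equivalent to ¬□p, the result is the formula □p ∨ □¬□p discussed at the end of this section.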

Theorem 10.9 A formula α is provable in intuitionistic logic if and only if its translation T(α) is provable in modal logic S4, for every intuitionistic formula α.

The rest of this section will be devoted to an algebraic proof of the above theorem. We will show the theorem in the following form: α is valid in all Heyting algebras if and only if T(α) is valid in all S4 algebras. Suppose first that an S4 algebra A = ⟨A, ∨, ∧, →, □, 0⟩ is given. An element a of A is called open if and only if □a = a. It is obvious that both 0 and 1 are open elements. Let H_A be the set of all open elements of A. It is easy to show that a ∨ b and a ∧ b are open whenever both a and b are open. In fact, a ∨ b is open, because □(a ∨ b) ≤ a ∨ b as A is an S4 algebra, and also □(a ∨ b) ≥ □a ∨ □b = a ∨ b by the monotonicity of □. Trivially, a ∧ b is open. For all a and b in H_A, define a ⊃ b = □(¬a ∨ b). By the definition, a ⊃ b is always open. We can show the following.


Lemma 10.10 For a given S4 algebra A = ⟨A, ∨, ∧, →, □, 0⟩, define HA to be ⟨H_A, ∨, ∧, ⊃, 0⟩, where H_A is the set of all open elements of A and ⊃ is defined by a ⊃ b = □(¬a ∨ b). Then, HA is a Heyting algebra. Moreover, if h and g are assignments on HA and A, respectively, satisfying h(p) = g(T(p)) for every propositional variable p, then h(ψ) = g(T(ψ)) holds for every intuitionistic formula ψ.

Proof To show that HA is a Heyting algebra, it is enough to see that the law of residuation holds between ∧ and ⊃, i.e., for all a, b, c ∈ H_A, a ∧ c ≤ b if and only if c ≤ a ⊃ b. Suppose first that c ≤ a ⊃ b. Then, a ∧ c ≤ a ∧ (a ⊃ b) = a ∧ □(¬a ∨ b) = □a ∧ □(¬a ∨ b) = □(a ∧ (¬a ∨ b)) = □(a ∧ b) ≤ □b = b. Conversely, suppose that a ∧ c ≤ b. As ¬a ∧ c ≤ ¬a holds always, we have c = (¬a ∧ c) ∨ (a ∧ c) ≤ ¬a ∨ b. Thus c = □c ≤ □(¬a ∨ b) = a ⊃ b.

Next suppose that h(p) = g(T(p)) holds for every propositional variable p. We show that h(ψ) = g(T(ψ)) holds for every intuitionistic formula ψ by using induction on the length of ψ. This is trivial by the definition when ψ is a propositional variable. When ψ is α ∨ β, h(α ∨ β) = h(α) ∨ h(β) = g(T(α)) ∨ g(T(β)) = g(T(α ∨ β)), using the definition of T. Similarly, this holds when ψ is α ∧ β. If ψ is α → β then h(α → β) = h(α) ⊃ h(β) = g(T(α)) ⊃ g(T(β)) = □(¬g(T(α)) ∨ g(T(β))) = g(□(¬T(α) ∨ T(β))) = g(T(α → β)). ∎

Now we have the following.

Corollary 10.11 For any intuitionistic formula ϕ, if ϕ is valid in every Heyting algebra then T(ϕ) is valid in every S4 algebra.

Proof Taking the contraposition, suppose that T(ϕ) is not valid in some S4 algebra A. Let g be an assignment on A such that g(T(ϕ)) < 1. We define an assignment h on HA by h(p) = g(T(p)) for every propositional variable p. Then by Lemma 10.10, h(ϕ) = g(T(ϕ)) < 1. Thus, ϕ is not valid in HA. ∎

Exercise 10.7 Give a syntactic proof of the above corollary by showing the following: For all intuitionistic formulas α1, …
, αm ⇒ β is provable in LJ, then the sequent T(α1), …, T(αm) ⇒ T(β) is provable in GS4.

As for the converse of Corollary 10.11, we show the following.

Lemma 10.12 For each Heyting algebra H there exists an S4 algebra C such that H is embedded into the Heyting algebra HC, the Heyting algebra comprised of all open elements of C.

Proof By Stone's representation theorem for Heyting algebras (Theorem 7.14), every Heyting algebra can be embedded into its canonical extension Hδ, which is equal to U(D(H)). Here, D(H) = ⟨D(H), ⊆⟩ is the set of all prime filters of H ordered by set inclusion, and U(D(H)) consists of all upward closed subsets of D(H). The structure ⟨D(H), ⊆⟩ can be regarded also as a modal frame whose relation ⊆ is both reflexive and transitive. Thus, D(H) determines a modal algebra which consists of the powerset Boolean algebra ℘(D(H)) with a modal operator □ satisfying □W =


{F ∈ D(H) : G ∈ W for every G such that F ⊆ G} for any subset W of D(H). By using the results in Exercise 10.5, we can see that □ is an S4 modal operator on ℘(D(H)). We can also show that a subset W of D(H) is an open element if and only if W is an upward closed subset, i.e., W ∈ U(D(H)). Remember that the implication ⇒ of the Heyting algebra U(D(H)) is defined by W ⇒ V = {F ∈ D(H) : for all G such that F ⊆ G, if G ∈ W then G ∈ V} (see Eq. (7.11)), which is equal to □(¬W ∪ V) = W ⊃ V. Thus, Hδ, which is defined by U(D(H)), is equal to the Heyting algebra consisting of all open elements of the Boolean algebra ℘(D(H)). Hence our lemma holds by taking ℘(D(H)) for C. ∎

Corollary 10.13 For any intuitionistic formula ϕ, if T(ϕ) is valid in every S4 algebra then ϕ is valid in every Heyting algebra.

Proof Suppose that ϕ is not valid in a Heyting algebra H. Then by Lemma 10.12 there exists a Heyting algebra HC, consisting of all open elements of an S4 algebra C, in which ϕ is not valid. Let h be an assignment on HC such that h(ϕ) < 1. Define an assignment g on C by g(p) = h(p) for every propositional variable p. As h(p) is an open element of C, g(T(p)) = g(□p) = □g(p) = □h(p) = h(p). Then by using Lemma 10.10, g(T(ϕ)) = h(ϕ) < 1. Thus, T(ϕ) is not valid in C. But this contradicts our assumption. ∎

From this with Corollary 10.11, we have Theorem 10.9. We note that S4 is not the only modal logic into which intuitionistic logic is embedded by the Gödel translation. It is known that the embedding of intuitionistic logic by the Gödel translation works for any axiomatic extension of S4 as long as it is included in the logic Grz. Here, Grz is the extension of S4 with the axiom scheme □(□(p → □p) → p) → p. Also, we can extend the embedding by the Gödel translation to an embedding of every superintuitionistic logic, i.e., every axiomatic extension of intuitionistic logic. For example, we can show the following.
Theorem 10.14 A formula α is provable in classical logic if and only if its translation T(α) is provable in modal logic S5, for every intuitionistic formula α.

Note that the Gödel translation of the law of excluded middle p ∨ ¬p is □p ∨ □¬□p. It is easy to see that the extension of S4 with this axiom scheme is equal to the extension of S4 with the axiom scheme ♦p → □♦p, which is none other than S5.

Exercise 10.8 Show that the class of all S4 algebras is not locally finite. ([Hint] Recall that the class of Heyting algebras is not locally finite. Try to reduce the present question to this result.)
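The formula □p ∨ □¬□p can be refuted in a small S4 frame, showing concretely that it is not S4-valid (as it must not be, since p ∨ ¬p is not intuitionistically provable). The following self-contained sketch (our own frame and encoding) evaluates it in the two-point chain.

```python
# Evaluating box p  or  box not box p in the two-point S4 frame 0 <= 1,
# with V(p) = {1}.
W = {0, 1}
R = {(0, 0), (0, 1), (1, 1)}  # reflexive and transitive
V = {'p': {1}}

def holds(w, phi):
    op = phi[0]
    if op == 'p':
        return w in V[phi[1]]
    if op == 'not':
        return not holds(w, phi[1])
    if op == 'or':
        return holds(w, phi[1]) or holds(w, phi[2])
    if op == 'box':
        return all(holds(v, phi[1]) for (a, v) in R if a == w)
    raise ValueError(op)

box_p = ('box', ('p', 'p'))
translated_lem = ('or', box_p, ('box', ('not', box_p)))

assert not holds(0, translated_lem)  # refuted at 0, hence not valid in S4
assert holds(1, translated_lem)
```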

References

Anderson, A.R., and N.D. Belnap Jr. 1975. Entailment: The logic of relevance and necessity. Princeton University Press. Anderson, A.R., N.D. Belnap Jr., and J.M. Dunn. 1992. Entailment: The logic of relevance and necessity II. Princeton University Press. Bayu Surarso, and H. Ono. 1996. Cut elimination in noncommutative substructural logics. Reports on Mathematical Logic 30: 13–29. Bimbo, K. 2014. Proof theory: Sequent calculi and related formalisms. Discrete Mathematics and its Applications. Chapman and Hall/CRC. Birkhoff, G. 1935. On the structure of abstract algebras. Proceedings of the Cambridge Philosophical Society 31: 433–451. Birkhoff, G. 1944. Subdirect unions in universal algebras. Bulletin of the American Mathematical Society 50: 764–768. Blackburn, P., M. de Rijke, and Y. Venema. 2001. Modal logic. Cambridge Tracts in Theoretical Computer Science 53. Cambridge University Press. Blok, W.J., and D. Pigozzi. 1989. Algebraizable logics. Memoirs of the AMS. American Mathematical Society. Boole, G. 1854. An investigation of the laws of thought. Walton and Maberly. Bull, R., and K. Segerberg. 1984. Basic modal logic. In Handbook of philosophical logic 2, ed. D. Gabbay and F. Guenthner, Reidel, pp. 1–88. Burris, S., and H.P. Sankappanavar. 1981. A course in universal algebra. Graduate Texts in Mathematics. Springer. Available online. Chagrov, A., and M. Zakharyaschev. 1997. Modal logic. Clarendon Press. Chvalovský, K., and R. Horčík. 2016. Full Lambek calculus with contraction is undecidable. Journal of Symbolic Logic 81: 524–540. Cignoli, R., I. D'Ottaviano, and D. Mundici. 2000. Algebraic foundations of many-valued reasoning. Trends in Logic 7. Kluwer. Cintula, P., P. Hájek, and C. Noguera, eds. 2011. Handbook of mathematical fuzzy logic, Vol. 1. Mathematical Logic and Foundations 37. College Publications. Cintula, P., P. Hájek, and C. Noguera, eds. 2011. Handbook of mathematical fuzzy logic, Vol. 2. Mathematical Logic and Foundations 38. College Publications. Craig, W. 1957.
Three uses of the Herbrand-Gentzen theorem in relating model theory and proof theory. Journal of Symbolic Logic 22: 269–285.

© Springer Nature Singapore Pte Ltd. 2019 H. Ono, Proof Theory and Algebra in Logic, Short Textbooks in Logic, https://doi.org/10.1007/978-981-13-7997-0



Davey, B.A., and H.A. Priestley. 2002. Introduction to lattices and order, 2nd ed. Cambridge: Cambridge University Press. Dummett, M. 1959. A propositional calculus with denumerable matrix. Journal of Symbolic Logic 24: 97–106. Dunn, J.M. 1986. Relevance logic and entailment. In Handbook of philosophical logic 3, ed. D. Gabbay and F. Guenther, Reidel, pp. 117–229. Dunn, J.M., and G. Hardegree. 2001. Algebraic methods in philosophical logic. Oxford Logic Guides 41. Oxford University Press. Esteva, F., and L. Godo. 2001. Monoidal t-norm based logic: Towards a logic of left-continuous t-norms. Fuzzy Sets and Systems 124: 271–288. Fine, K. 1974. An incomplete logic containing S4. Theoria 40: 23–29. Font, J.M. 2016. Abstract algebraic logic: An introductory textbook. Mathematical Logic and Foundations 60. College Publications. Galatos, N., P. Jipsen, T. Kowalski, and H. Ono. 2007. Residuated lattices: An algebraic glimpse at substructural logics. Studies in Logic and the Foundations of Mathematics 151. Elsevier. Galatos, N., and H. Ono. 2006. Algebraization, parametrized local deduction theorem and interpolation for substructural logics over FL. Studia Logica 83: 279–308. Gentzen, G. 1935. Untersuchungen über das logische Schliessen I, II. Mathematische Zeitschrift 39: 176–210, 405–431. Girard, J.-Y. 1989. Towards a geometry of interaction. Contemporary Mathematics 92: 69–108. Glivenko, V. 1929. Sur quelques points de la logique de M. Brouwer. Bulletin de l'Académie des Sciences de Belgique 15: 183–188. Gödel, K. 1932. Zum intuitionistischen Aussagenkalkül. Anzeiger der Akademie der Wissenschaften in Wien 69: 65–66. Gödel, K. 1933. Eine Interpretation des intuitionistischen Aussagenkalküls. Ergebnisse eines mathematischen Kolloquiums 4: 39–40. Grišin, V. 1982. Predicate and set-theoretic calculi based on logic without contraction. Mathematical USSR Izvestiya 18: 41–59. Halldén, S. 1951.
On the semantic non-completeness of certain Lewis calculus. Journal of Symbolic Logic 16: 127–129. Halmos, P., and S. Givant. 1998. Logic as algebra, Dolciani Mathematical Expositions 21, The Mathematical Association of America. Hájek, P. 1998. Metamathematics of fuzzy logic, trends in logic 4. Kluwer. Harrop, R. 1958. On the existence of finite models and decision procedures for propositional calculi. Proceedings of the Cambridge Philosophical Society 54: 1–13. Heyting, A. 1930. Die formalen Regeln der intuitionistischen Logik I, II, III, Sitzungsberichte der preussischen Akademie von Wissenschaften, pp. 42–56, 57–71, 158–169. Jankov, V.A. 1968. The construction of a sequence of strongly independent superintuitionistic propositional calculi. Soviet Mathematics Doklady 9: 806–807. Johansson, I. 1936. Der Minimalkalkül ein reduzierter Intuitionistischer Formalismus. Cmpositio Mathematica 4: 119–136. Jónsson, B., and A. Tarski. 1951. Boolean algebras with operators. Part I, American Journal Mathematics 73: 891–939. Kanger, S. 1957. Provability in logic, Stockholm Studies in Philosophy 1, Ålmqvist & Wiksell. Ketonen, O. 1944. Untersuchungen zum Prädikatenkalkül, Annales Academiae Scientiarum Fennicae, ser. A, I. Mathematica-physica, 23. Kiriyama, E., and H. Ono. 1991. The contraction rule and decision problems for logics without structural rules. Studia Logica 50: 299–319. Kleene, S.C. 1952. Introduction to Metamathematics, D. Van Nostrand Co. Komori, Y. 1981. Super-Łukasiewicz propositional logics. Nagoya Mathematical Journal 84: 119– 133. Kracht, M. 1999. Tools and techniques in modal logic, studies in logic and the foundations of mathematics 142. Elsevier.


Kripke, S.A. 1959. The problem of entailment (abstract). Journal of Symbolic Logic 24: 324. Kripke, S.A. 1963. Semantical analysis of modal logic I. Normal modal propositional calculi. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 9: 67–96. Kripke, S.A. 1963. Semantical considerations on modal logic. Acta Philosophica Fennica 16: 83–94. Kripke, S.A. 1965. Semantical analysis of intuitionistic logic I. In Formal systems and recursive functions, ed. J.N. Crossley and M.A.E. Dummett, North-Holland, pp. 92–130. Lambek, J. 1958. The mathematics of sentence structure. American Mathematical Monthly 65: 154–170. Lemmon, E.J. 1966. A note on Halldén-incompleteness. Notre Dame Journal of Formal Logic 7: 296–300. Lincoln, P., J. Mitchell, A. Scedrov, and N. Shankar. 1992. Decision problems for propositional linear logic. Annals of Pure and Applied Logic 56: 239–311. Łukasiewicz, J. 1920. O logice trójwartościowej. In Polish logic 1920–1939, 1967, ed. S. McCall. Clarendon Press. Łukasiewicz, J., and A. Tarski. 1930. Untersuchungen über den Aussagenkalkül. Comptes rendus des séances de la Société des Sciences et des Lettres de Varsovie, Classe III 23: 30–50. Maehara, S. 1954. Eine Darstellung der intuitionistischen Logik in der klassischen. Nagoya Mathematical Journal 7: 45–64. Maehara, S. 1960. On the interpolation theorem (in Japanese). Sugaku 12: 235–237. Maksimova, L.L. 1977. Craig's theorem in superintuitionistic logics and amalgamable varieties of pseudo-Boolean algebras. Algebra i Logika 16: 643–681. Maksimova, L.L. 1986. On maximal intermediate logics with the disjunction property. Studia Logica 45: 69–75. McKinsey, J.C.C. 1941. A solution of the decision problem for the Lewis systems S2 and S4, with an application to topology. Journal of Symbolic Logic 6: 117–134. McKinsey, J.C.C., and A. Tarski. 1946. On closed elements in closure algebras. Annals of Mathematics 47: 122–162. McKinsey, J.C.C., and A. Tarski. 1948.
Some theorems about the sentential calculi of Lewis and Heyting. Journal of Symbolic Logic 13: 1–15. Meyer, R.K. 1966. Topics in modal and many-valued logic, Doctoral dissertation, University of Pittsburgh. Negri, S., and J. von Plato. 2008. Structural proof theory. Cambridge University Press. Nishimura, I. 1960. On formulas of one variable in intuitionistic propositional calculus. Journal of Symbolic Logic 25: 327–331. Ono, H. 1990. Structural rules and a logical hierarchy. In Mathematical logic, proceedings of the summer school and the conference ’Heyting ‘88’, ed. Petokov P.P. Plenum Press, pp. 95–104. Ono, H. 1993. Semantics for substructural logics. In Substructural logics, ed. Došen, K., and P. Schroeder-Heister, Oxford University Press, pp. 259–291. Ono, H. 1998. Proof-theoretic methods for nonclassical logic—An introduction. In Theories of types and proofs (MSJ memoirs 2), ed. Takahashi, M., M. Okada and M. Dezani-Ciancaglini, Mathematical Society of Japan, pp. 207–254. Ono, H. 2003. Substructural logics and residuated lattices—An introduction. In 50 Years of Studia Logica, Trends in Logic 21, ed. Hendricks, V.F. and J. Malinowski, Kluwer Academic Publishers, pp. 193–228. Ono, H. 2010. Logics without contraction rule and residuated lattices. Australasian Journal of Logic 8: 1–32. Ono, H. 2011. Algebraic logic, logic and philosophy today, Journal of Indian Council of Philosophical Research 27, ed. Gupta, A., and J. van Benthem, 2010, pp. 221–246. Also In Logic and philosophy today vol. 1, Studies in Logic 29, College Publications, ed. Gupta, A., and J. van Benthem, 2011, pp. 219–244. Ono, H., and Y. Komori. 1985. Logics without the contraction rule. Journal of Symbolic Logic 50: 169–201.


Rasiowa, H. 1974. An algebraic introduction to non-classical logics. Studies in Logic and the Foundations of Mathematics 78. North-Holland, PWN. Rieger, L. 1949. On the lattice of Brouwerian propositional logics. Acta Universitatis Carolinae. Mathematica et Physica 189. Rybakov, V.V. 1977. Noncompact extensions of the logic S4. Algebra and Logic 16: 321–334. Stone, M.H. 1937. Topological representations of distributive lattices and Brouwerian logics. Casopis pro Pestovani Matematiky a Fysiky 67: 1–25. Takano, M. 1992. Subformula property as a substitute for cut-elimination in modal propositional logics. Mathematica Japonica 37: 1129–1145. Takeuti, G. 1975. Proof theory. Studies in Logic and the Foundations of Mathematics 81, North-Holland; 2nd ed. 2013, Dover Books on Mathematics. Tamura, S. 1974. On a decision procedure for free o-algebraic system, Technical Report of Mathematics, Yamaguchi University 9. Tarski, A. 1935. Grundzüge des Systemenkalküls. Erster Teil. Fundamenta Mathematicae 25: 503–526. Tarski, A. 1936. Grundzüge des Systemenkalküls. Zweiter Teil. Fundamenta Mathematicae 26: 283–301. Tarski, A. 1946. A remark on functionally free algebras. Annals of Mathematics 47: 163–165. Thomason, S.K. 1974. An incompleteness theorem in modal logic. Theoria 40: 30–34. Troelstra, A.S. 1992. Lectures on linear logic. Lecture Notes 29. CSLI. Troelstra, A.S., and H. Schwichtenberg. 2000. Basic proof theory, 2nd ed. Cambridge Tracts in Theoretical Computer Science 43. Cambridge University Press. Umezawa, T. 1955. Über Zwischensysteme der Aussagenlogik. Nagoya Mathematical Journal 9: 181–189. Urquhart, A. 1984. The undecidability of entailment and relevant implication.
Journal of Symbolic Logic 49: 1059–1073. van Dalen, D. 2002. Intuitionistic logic. In Handbook of philosophical logic 5, 2nd ed., ed. D. Gabbay and F. Guenther, Kluwer Academic Publishers, pp. 1–114. Wang, H. 1963. A survey of mathematical logic. Studies in Logic and the Foundations of Mathematics. North-Holland. Ward, M., and R.P. Dilworth. 1939. Residuated lattices. Transactions of the American Mathematical Society 45: 335–354. Wroński, A. 1976. Remarks on Halldén-completeness of modal and intermediate logics. Bulletin of the Section of Logic 5: 126–129. Zadeh, L.A. 1965. Fuzzy sets. Information and Control 8: 338–353. Zakharyaschev, M., F. Wolter, and A. Chagrov. 2001. Advanced modal logic. In Handbook of philosophical logic 3, 2nd ed., ed. D. Gabbay and F. Guenthner, Kluwer Academic Publishers, pp. 83–266.


Further Reading for Part I

Here is a short list of major books and papers in which readers can find further information and advanced topics discussed in Part I.

Proof Theory

Bimbo, K. 2014. Proof theory: Sequent calculi and related formalisms. Discrete Mathematics and its Applications. Chapman and Hall/CRC.
Negri, S., and J. von Plato. 2008. Structural proof theory. Cambridge University Press.
Ono, H. 1998. Proof-theoretic methods for nonclassical logic—an introduction. In Theories of types and proofs (MSJ Memoirs 2), ed. M. Takahashi, M. Okada and M. Dezani-Ciancaglini, Mathematical Society of Japan, pp. 207–254.
Takeuti, G. 1975. Proof theory. Studies in Logic and the Foundations of Mathematics 81, North-Holland; 2nd ed. 2013, Dover Books on Mathematics.
Troelstra, A.S., and H. Schwichtenberg. 2000. Basic proof theory, 2nd ed. Cambridge Tracts in Theoretical Computer Science 43. Cambridge University Press.

Nonclassical Logics

Anderson, A.R., and N.D. Belnap. 1975. Entailment: The Logic of Relevance and Necessity. Princeton University Press.
Blackburn, P., M. de Rijke, and Y. Venema. 2001. Modal Logic, Cambridge Tracts in Theoretical Computer Science 53. Cambridge University Press.
Bull, R., and K. Segerberg. 1984. Basic Modal Logic. In Handbook of Philosophical Logic 2, ed. D. Gabbay and F. Guenthner, 1–88. Reidel.
Chagrov, A., and M. Zakharyaschev. 1997. Modal Logic. Clarendon Press.
Dunn, J.M. 1986. Relevance Logic and Entailment. In Handbook of Philosophical Logic 3, ed. D. Gabbay and F. Guenthner, 117–229. Reidel.
Galatos, N., P. Jipsen, T. Kowalski, and H. Ono. 2007. Residuated Lattices: An Algebraic Glimpse at Substructural Logics, Studies in Logic and the Foundations of Mathematics 151. Elsevier.
Hájek, P. 1998. Metamathematics of Fuzzy Logic, Trends in Logic 4. Kluwer.
Troelstra, A.S. 1992. Lectures on Linear Logic, CSLI Lecture Notes 29. CSLI.
van Dalen, D. 2002. Intuitionistic Logic. In Handbook of Philosophical Logic 5, 2nd ed., ed. D. Gabbay and F. Guenthner, 1–114. Kluwer Academic Publishers.
Zakharyaschev, M., F. Wolter, and A. Chagrov. 2001. Advanced Modal Logic. In Handbook of Philosophical Logic 3, 2nd ed., ed. D. Gabbay and F. Guenthner, 83–266. Kluwer Academic Publishers.



Further Reading for Part II

Here is a short list of major books and papers in which readers can find further information and advanced topics discussed in Part II.

Blok, W.J., and D. Pigozzi. 1989. Algebraizable Logics, Memoirs of the AMS. American Mathematical Society.
Burris, S., and H.P. Sankappanavar. 1981. A Course in Universal Algebra, Graduate Texts in Mathematics. Springer. Available online.
Cignoli, R., I. D'Ottaviano, and D. Mundici. 2000. Algebraic Foundations of Many-valued Reasoning, Trends in Logic 7. Kluwer.
Cintula, P., P. Hájek, and C. Noguera, eds. 2011. Handbook of Mathematical Fuzzy Logic, Volume 1, Mathematical Logic and Foundations 37. College Publications.
Davey, B.A., and H.A. Priestley. 2002. Introduction to Lattices and Order, 2nd ed. Cambridge University Press.
Dunn, J.M., and G. Hardegree. 2001. Algebraic Methods in Philosophical Logic, Oxford Logic Guides 41. Oxford University Press.
Font, J.M. 2016. Abstract Algebraic Logic: An Introductory Textbook, Mathematical Logic and Foundations 60. College Publications.
Kracht, M. 1999. Tools and Techniques in Modal Logic, Studies in Logic and the Foundations of Mathematics 142. Elsevier.
Rasiowa, H. 1974. An Algebraic Introduction to Non-Classical Logics, Studies in Logic and the Foundations of Mathematics 78. North-Holland, PWN.

Index

A
Adjunction, 70
Algebraic completeness, 88, 118
Analytic cut, 51
Antecedent, 7
Anti-symmetricity, 78
Arrow, 7
Assignment, 6, 88
Axiom, 4
  scheme, 4
Axiomatic extension, 67
Axiomatized, 68

B
BA, 117
Birkhoff's theorem, 122
Boolean algebra, 82
  complete, 87
  degenerate, 82
  powerset, 86
  two-valued, 82
Boolean reduct, 141

C
Canonical assignment, 101
Canonical embedding, 110
  algebra, 110, 143
Canonical extension, 110, 143
Chain, 78
Characterized, 102
Cl, 62
Classical logic, 4
Commutative semigroup, 130
Completeness, 16
  algebraic, 88

Congruence relation, 101
  fully invariant, 101
Consequence relation, 62
  compact, 62
  finitary, 62
  substitution invariant, 62
Conservative extension, 33
Consistent, 32, 114
Constant symbol, 84
Continuous, 136
  left-, 135
Contraction
  axiom, 4
  rule, 8
Corresponding formula, 12
Cut elimination, 17, 23
  extended, 26
Cut formula, 9, 26
Cut-free proof, 13, 23
  provable, 13
  sequent system, 18
Cut rule, 8
  extended, 26

D
Decidability, 18
Decidable, 35
Decision problem, 35
Deducibility, 61
  problem, 65
  relation, 62
Deduction, 61
Deduction theorem, 62
  local, 64
Derivable rule, 11

© Springer Nature Singapore Pte Ltd. 2019 H. Ono, Proof Theory and Algebra in Logic, Short Textbooks in Logic, https://doi.org/10.1007/978-981-13-7997-0


Derivation, 4
Direct product, 85
Disjunction property, 38, 127
Distributive law, 10, 80
Divisibility, 137
Division
  left-, 55
  right-, 55
Double induction, 27
Dual algebra, 107, 142
Dual frame, 110, 143

E
Embedding, 84
  canonical, 110
Equation, 121
Equational class, 121
Equivalence relation, 100
Exchange
  axiom, 4
  rule, 8
Extension
  axiomatic, 67
  canonical, 110
  conservative, 33
  of FL, 56

F
Falsehood, 6
Falsum, 11
Filter, 108
  maximal, 108
  prime, 108
  principal, 108
  proper, 108
  ultra, 108
Finite embeddability property, 105
Finitely axiomatizable, 68
Finite model property, 102
FL, 54
FL-algebra, 133
FLe-algebra, 133
  involutive, 133
FL, 134
Formula, 3
  active, 9
  principal, 9, 48
  side, 9
Free algebra, 118
Full Lambek algebra, 133
Fusion, 53, 93

Fuzzy logic, 72

G
Generated, 103, 108
  finitely, 103
Generator, 103
Glivenko's theorem, 46
Gödel chain, 90
Gödel implication, 92
Gödel logic, 90
Gödel translation, 147
Gödel-Dummett logic, 92
Grade, 27

H
HA, 117
Halldén-complete, 39, 126
Harrop's theorem, 106
Height, 27
Heyting algebra, 97
Hilbert-style system, 4
HJ, 20
HK, 4
Homomorphic image, 84
Homomorphism, 84

I
iff, 9, 78
Inconsistent, 114
Inequation, 122
Int, 62
Intermediate logic, 68
Interpolant, 39
Interpolation property
  Craig's, 39, 128
  uniform, 41
Int[S], 67
Intuitionistic logic, 20
Invertible, 14
Involutive
  FLe-algebra, 133
  substructural logic, 59
Isomorphism, 84

J
Join, 78
Jónsson-Tarski theorem, 142

K
Kripke complete, 146

Kripke frame, 144, 146
Kripke model, 144
K, 48

L
Lambek calculus, 71
Language, 83
Lattice, 78
  bounded, 79
  complete, 80
  distributive, 80
  reduct, 87
  residuated, 131
Law
  of double negation, 5, 82
  of excluded middle, 5
  of residuation, 82
Length
  of formula, 4
  of proof, 12
Lindenbaum-Tarski algebra, 100
Linear logic, 71
LJ, 19
LJ , 22
LK, 7
  modified, 12
LK∗ , 14
Locally finite, 103
Logic, 68
Logical
  connective, 3
  constant, 3
Logically equivalent, 40, 100
Lower bound, 78
  greatest, 78, 81
Łukasiewicz chain, 92
Łukasiewicz implication, 92
Łukasiewicz logic, 94
  infinite-valued, 94
Łukasiewicz's many-valued logic, 72, 94

M
Many-valued chain, 89
Meet, 78
Meet irreducible, 125
Minimal logic, 73
Mix rule, 30
Modal algebra, 139
Modal frame, 142
Modal logic, 47
  canonical, 146

  normal, 48
Modus ponens, 4
Monoid, 130
  partially ordered, 130
Multiplicative conjunction, 53
Multiset, 11
  union, 42

O
Occurrence, 4
Operation symbol, 83
Order
  induced, 80
  partial, 78
  -preserving, 84
  total, 78

P
Partition
  of multiset, 42
  of sequent, 42
p.o. monoid, 130
  residuated, 131
p.o. semigroup, 130
  residuated, 131
Power set, 81
Prelinearity, 92
Prime filter theorem, 109
Product implication, 137
Product logic, 138
Proof, 4, 9
Proof search, 15
Provable, 4, 9

R
Reduced proof, 36
Reduct
  Boolean, 141
  lattice, 87
Reductio ad absurdum, 19
Reflexivity, 78
Relevant implication, 72
Relevant logic, 72
Residual, 131
Residuated, 93
Residuated lattice, 131
  commutative, 132
  contractive, 132
  integral, 132
Residuation, 55

Rule
  acceptable, 50
  for logical connective, 8
  left, 9
  of inference, 4
  of necessitation, 48
  right, 9
  structural, 8

S
S4, 48
S5, 48
Semigroup, 130
  partially ordered, 130
Sequent, 7
  elementary, 15
  end, 9
  initial, 8
  lower, 7
  upper, 7
  with single succedent, 20
Sequent system, 7
Soundness, 12
Stone's representation theorem
  Boolean algebra, 87
  Heyting algebra, 110
Strict order, 78
Structural rule, 8
Subalgebra, 84
  partial, 105
Subdirectly irreducible, 123
Subdirect representation, 123
Subformula, 3
  property, 32
Sublattice, 84
Subproof, 10
Substitution, 5
  instance, 5
Substructural logic, 47
  commutative, 69
  involutive, 59
Subvariety, 119
Succedent, 7
SUP, 114

Superintuitionistic logic, 68

T
Tautology, 6
  sequent, 12
t-norm, 135
Transitivity, 78
Truth, 6
Truth relation, 144
Truth table, 6
Truth value, 6
Two-valued semantics, 6

U
Undecidable, 35
Underlying set, 84
Unit element, 130
Universal mapping property, 119
Universe, 84
Upper bound, 78
  least, 78, 80
Upward closed, 107

V
Valid, 6, 88
  in FL-algebras, 133
  in Kripke frames, 144
Valuation, 144, 146
Variety, 117
VL , 117

W
Weakening
  axiom, 4
  rule, 8
Well-connected, 125

Z
Zero bounded, 133
Zero element, 133