
Table of Contents
Preface
Contents
Introduction
Preliminaries
1. Basic Concepts
2. Lattices
3. Unary and Binary Operations
4. Fundamental Algebraic Results
5. Unique Factorization
Bibliography
Table of Notation
Index of Names
Index of Terms


ALGEBRAS, LATTICES, VARIETIES

VOLUME I

The Wadsworth & Brooks/Cole Mathematics Series

Series Editors: Raoul H. Bott, Harvard University; David Eisenbud, Brandeis University; Hugh L. Montgomery, University of Michigan; Paul J. Sally, Jr., University of Chicago; Barry Simon, California Institute of Technology; Richard P. Stanley, Massachusetts Institute of Technology

M. Adams, V. Guillemin, Measure Theory and Probability
W. Beckner, A. Calderón, R. Fefferman, P. Jones, Conference on Harmonic Analysis in Honor of Antoni Zygmund
G. Chartrand and L. Lesniak, Graphs and Digraphs, Second Edition
J. Cochran, Applied Mathematics: Principles, Techniques, and Applications
W. Derrick, Complex Analysis and Applications, Second Edition
J. Dieudonné, History of Algebraic Geometry
R. Durrett, Brownian Motion and Martingales in Analysis
S. Fisher, Complex Variables
A. Garsia, Topics in Almost Everywhere Convergence
R. McKenzie, G. McNulty, W. Taylor, Algebras, Lattices, Varieties, Volume I
E. Mendelson, Introduction to Mathematical Logic, Third Edition
R. Salem, Algebraic Numbers and Fourier Analysis, and L. Carleson, Selected Problems on Exceptional Sets
R. Stanley, Enumerative Combinatorics, Volume 1
K. Stromberg, An Introduction to Classical Real Analysis

ALGEBRAS, LATTICES, VARIETIES
VOLUME I

Ralph N. McKenzie, University of California, Berkeley
George F. McNulty, University of South Carolina
Walter F. Taylor, University of Colorado

Wadsworth & Brooks/Cole Advanced Books & Software Monterey, California

Wadsworth & Brooks/Cole Advanced Books & Software A Division of Wadsworth, Inc.

© 1987 by Wadsworth, Inc., Belmont, California 94002. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transcribed, in any form or by any means-electronic, mechanical, photocopying, recording or otherwise-without the prior written permission of the publisher, Brooks/Cole Publishing Company, Monterey, California 93940, a division of Wadsworth, Inc.

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

Library of Congress Cataloging-in-Publication Data
McKenzie, Ralph, [date]
Algebras, lattices, varieties.
Bibliography: v. 1, p. Includes index.
1. Algebra. 2. Lattice theory. 3. Varieties (Universal algebra). I. McNulty, George F., [date]. II. Taylor, Walter F., [date]. III. Title.
QA51.M43 1987 512 86-23239
ISBN 0-534-07651-3 (v. 1)

Sponsoring Editor: John Kimmel
Editorial Assistant: Maria Rosillo Alsadi
Production Editor: S. M. Bailey
Manuscript Editor: David Hoyt
Art Coordinator: Lisa Torri
Interior Illustration: Lori Heckelman
Typesetting: Asco Trade Typesetting Ltd., Hong Kong
Printing and Binding: The Maple-Vail Book Manufacturing Group, York, Pennsylvania


Preface

This is the first of four volumes devoted to the general theory of algebras and the closely related subject of lattice theory. This area of mathematics has grown very rapidly in the past twenty years. Not only has the literature expanded rapidly, but also the problems have become more sophisticated and the results deeper. The tendency toward specialization and fragmentation accompanying this growth has been countered by the emergence of new research themes (such as congruence class geometry, Maltsev classification, and congruence classification of varieties) and powerful new theories (such as general commutator theory and tame congruence theory), giving the field a degree of unity it never had before. Young mathematicians entering this field today are indeed fortunate, for there are hard and interesting problems to be attacked and sophisticated tools to be used. Even a casual reader of these volumes should gain an insight into the present-day vigor of general algebra.

We regard an algebra as a nonempty set equipped with a system of finitary operations. This concept is broad enough to embrace many familiar mathematical structures yet retains a concrete character. The general theory of algebras borrows techniques and ideas from lattice theory, logic, and category theory and derives inspiration from older, more specialized branches of algebra such as the theories of groups, rings, and modules.

The connections between lattice theory and the general theory of algebras are particularly strong. The most productive avenues to understanding the structure of algebras, in all their diversity, generally involve the study of appropriate lattices. The lattice of congruence relations and the lattice of subalgebras of an individual algebra often contain (in a highly distilled form) much information about the internal structure of the algebra and the essential relations among its elements.

In order to compare algebras, it is very useful to group them into varieties, which are classes defined by equations. Varieties can in turn be organized in various ways into lattices (e.g., the lattice of varieties, the lattice of interpretability types). The study of such lattices reveals an extraordinarily rich structure in varieties and helps to organize our knowledge about individual algebras and important families of algebras. Varieties themselves are elementary classes in the sense of logic, which affords an entry to model-theoretic ideas and techniques.


Volume 1 is a leisurely paced introduction to general algebra and lattice theory. Besides the fundamental concepts and elementary results, it contains several harder (but basic) results that will be required in later volumes and a final chapter on the beautiful topic of unique factorization. This volume is essentially self-contained. We sometimes omit proofs, but-except in rare cases-only those we believe the reader can easily supply with the lemmas and other materials that are readily at hand. It is explicitly stated when a proof has been omitted for other reasons, such as being outside the scope of the book.

We believe that this volume can be used in several ways as the text for a course. The first three chapters introduce basic concepts, giving numerous examples. They can serve as the text for a one-semester undergraduate course in abstract algebra for honors students. (The instructor will probably wish to supplement the text by supplying more detail on groups and rings than we have done.) A talented graduate student of mathematics with no prior exposure to our subject should find these chapters easy reading. Stiff resistance will be encountered only in §2.4-the proof of the Direct Join Decomposition Theorem for modular lattices of finite height-a tightly reasoned argument occupying several pages. In Chapter 4, the exposition becomes more abstract and the pace somewhat faster. All the basic results of the general theory of algebras are proved in this chapter. (There is one exception: The Homomorphism Theorem can be found in Chapter 1.) An important nonelementary result, the decomposition of a complemented modular algebraic lattice into a product of projective geometries, is proved in §4.8. Chapter 4 can stand by itself as the basis for a one-semester graduate course. (Nevertheless, we would advise spending several weeks in the earlier chapters at the beginning of the course.) The reader who has mastered Chapters 1-4 can confidently go on to Volume 2 without further preliminaries, since the mastery of Chapter 5 is not a requirement for the later material.

Chapter 5 deals with the possible uniqueness of the factorization of an algebra into a direct product of directly indecomposable algebras. As examples, integers, finite groups, and finite lattices admit a unique factorization. The Jordan normal form of a matrix results from the unique decomposition of the representation module of the matrix. This chapter contains many deep and beautiful results. Our favorite is Bjarni Jónsson's theorem giving the unique factorization of finite algebras having a modular congruence lattice and a one-element subalgebra (Theorem 5.4). Since this chapter is essentially self-contained, relying only on the Direct Join Decomposition Theorem in Chapter 2, a one-semester graduate course could be based upon it. We believe that it would be possible to get through the whole volume in a year's course at the graduate level, although none of us has yet had the opportunity to try this experiment.

Volume 2 contains an introduction to first-order logic and model theory (all that is needed for our subject) and extensive treatments of equational logic, equational theories, and the theory of clones. Also included in Volume 2 are many of the deepest results about finite algebras and a very extensive survey of the results on classifying varieties by their Maltsev properties. Later volumes will deal with such advanced topics as commutator theory for congruence modular varieties, tame congruence theory for locally finite varieties, and the fine structure of lattices of equational theories.



Within each volume, chapters and sections within a chapter are numbered by arabic numerals; thus, §4.6 is the sixth section of Chapter 4 (in Volume 1). Important results, definitions, and exercise sets are numbered in one sequence throughout a chapter; for example, Lemma 4.50, Theorem 4.51, and Definition 4.52 occur consecutively in Chapter 4 (actually in §4.6). A major theorem may have satellite lemmas, corollaries, and examples clustered around it and numbered 1, 2, 3, .... A second sequence of numbers, set in the left-hand margins, is used for a catch-all category of statements, claims, minor definitions, equations, etc. (with the counter reset to 1 at the start of each chapter). Exercises that we regard as difficult are marked with an asterisk. (Difficult exercises are sometimes accompanied by copious hints, which may make them much easier.)

The beautiful edifice that we strive to portray in these volumes is the product of many hundreds of workers who, for over fifty years, have been tirelessly striving to uncover and understand the fundamental structures of general algebra. In the course of our writing, we have returned again and again to the literature, especially to the books of Birkhoff [1967], Burris and Sankappanavar [1981], Crawley and Dilworth [1972], Grätzer [1978, 1979], Jónsson [1972], Maltsev [1973], and Pierce [1968].

We wish to thank all of our friends, colleagues, and students who offered support, encouragement, and constructive criticism during the years when this volume was taking shape. It is our pleasure to specifically thank Clifford Bergman, Joel Berman, Stanley Burris, Wanda Van Buskirk, Ralph Freese, Tom Harrison, David Hobby, Bjarni Jónsson, Keith Kearnes, Renato Lewin, Jan Mycielski, Richard Pierce, Ivo Rosenberg, and Constantin Tsinakis. Thanks to Deberah Craig and Burt Rashbaum for their excellent typing. Our editor at Wadsworth & Brooks/Cole, John Kimmel, and Production Editor S. M. Bailey, Designer Victoria Van Deventer, and Art Coordinator Lisa Torri at Brooks/Cole have all taken friendly care of the authors and the manuscript and contributed greatly to the quality of the book. Don Pigozzi's contribution to the many long sessions in which the plan for these volumes was forged is greatly appreciated. We regret that he was not able to join us when it came time to write the first volume; nevertheless, his collaboration in the task of bringing this work to press has been extremely valuable to us.

We gladly acknowledge the support given us during the writing of this volume by the National Science Foundation, the Alexander von Humboldt Foundation, and the Philippine-American Educational Foundation through a Fulbright-Hays grant. Apart from our home institutions, the University of Hawaii, the University of the Philippines, and the Technische Hochschule Darmstadt have each provided facilities and hospitality while this project was underway.

Finally, we are deeply grateful for the solid support offered by our wives and children over the past five years.

Ralph N. McKenzie
George F. McNulty
Walter F. Taylor


Contents

Chapter 4  Fundamental Algebraic Results
4.1 Algebras and Clones
4.2 Isomorphism Theorems
4.3 Congruences
4.4 Direct and Subdirect Representations
4.5 The Subdirect Representation Theorem
4.6 Algebraic Lattices
4.7 Permuting Congruences
4.8 Projective Geometries
4.9 Distributive Congruence Lattices
4.10 Class Operators and Varieties
4.11 Free Algebras and the HSP Theorem
4.12 Equivalence and Interpretation of Varieties
4.13 Commutator Theory

Chapter 5  Unique Factorization
5.1 Introduction and Examples
5.2 Direct Factorization and Isotopy
5.3 Consequences of Ore's Theorem
5.4 Algebras with a Zero Element
5.5 The Center of an Algebra with Zero
5.6 Some Refinement Theorems
5.7 Cancellation and Absorption

Bibliography
Table of Notation
Index of Names
Index of Terms

ALGEBRAS, LATTICES, VARIETIES
VOLUME I

Introduction

The mathematician at the beginning of the twentieth century was confronted with a host of algebraic systems ranging from the system of natural numbers (which had only recently been given an axiomatic presentation by Giuseppe Peano) to Schröder's algebra of binary relations. Included among these algebraic systems were groups (with a rapidly developing theory), Hamilton's quaternions, Lie algebras, number rings, algebraic number fields, vector spaces, Grassmann's calculus of extensions, Boolean algebras, and others, most destined for vigorous futures in the twentieth century.

A. N. Whitehead [1898] undertook the task of placing these diverse algebraic systems within a common framework. Whitehead chose "A Treatise on Universal Algebra" as the title of his work. Whitehead's attention was focused on systems of formal axiomatic reasoning about equations, and it was to such formal systems that he applied the term "universal algebra." In content, Whitehead's treatise is a survey of algebra and geometry tied together by this focus on formal deductive systems.

B. L. van der Waerden's influential Moderne Algebra [1931] brought the axiomatic method of modern algebra into instruction at the graduate level. It deals with a number of algebraic systems, such as groups, rings, vector spaces, and fields, within a common framework. Some of the basic concepts of general algebra are implicit in van der Waerden's book, but at the time of its publication there were no known results that belong, in the proper sense, to the general theory of algebras. So, despite their broad conception, both Whitehead's treatise and van der Waerden's book appeared before the true beginnings of our subject.

The origins of the concept of lattice can be traced back to the work on the formalization of the logic of propositions by Augustus De Morgan [1847] and George Boole [1854]. The concept of a lattice, as separate from that of a Boolean algebra, was enunciated in the work of C. S. Peirce [1880] and Ernst Schröder [1890-1905]. Independently, Richard Dedekind ([1897] and [1900]) arrived at the concept by way of ideal theory. Dedekind also introduced the modular law for lattices, a weakened form of the distributive law for Boolean algebras. A series of papers by Garrett Birkhoff, V. Glivenko, Karl Menger, John von Neumann, and Oystein Ore, published in the mid-thirties (shortly after the appearance of


van der Waerden's book), showed that lattices had fundamental applications to modern algebra, projective geometry, point-set theory, and functional analysis.

Birkhoff's papers, appearing in 1933-1935, also marked the beginning of the general theory of algebras. In Birkhoff [1933], one finds both the notion of an algebra and the notion of subalgebra explicitly stated-though somewhat more broadly than the notions we shall use. Birkhoff [1935a] delineates the concepts of a congruence, a free algebra, and a variety and presents several of the most fundamental theorems concerning algebras in general. In these papers, the insight that lattices offer an important tool for understanding mathematical (especially algebraic) structures is put forward and amply illustrated. Indeed, the general theory of algebras is proposed in these papers as a highly appropriate arena in which to exploit the concepts and theorems of lattice theory. Birkhoff's further development of this theme can be followed in the successive and very different editions of his book Lattice Theory ([1940], [1948], and [1967]). In the preface to the 1967 edition, Birkhoff expressed his view in these words: "lattice-theoretic concepts pervade the whole of modern algebra, though many textbooks on algebra fail to make this apparent.... thus lattices and groups provide two of the most basic tools of 'universal algebra,' and in particular the structure of algebraic systems is usually most clearly revealed through the analysis of appropriate lattices." Since these words were written, lattices have come to play an ever more central role in the analysis of general algebraic systems.

In the same edition of his book (p. 132), Birkhoff defined universal algebra as that field of mathematics that "provides general theorems about algebras with single-valued, universally defined, finitary operations." This excellent definition of our subject is not to be taken too literally, for in order to analyze algebras, the researcher is inevitably led to the study of other systems that are not algebras-ordered sets, partial algebras, and relational structures, for instance. Neither should the dictum to prove general theorems be regarded as binding. The concept of an algebra is broad, and truly important results applying to all algebras are rare. In mathematics, deep theorems are usually established under strong hypotheses, and the general theory of algebras is no exception. In fact, much effort has been expended in the last fifty years on the discovery of appropriate conditions (such as modularity of the congruence lattice) under which far-reaching conclusions can be drawn.

The evolution of the general theory of algebras, from its origin in the early writings of G. Birkhoff up to the present day, divides naturally into four periods. The first era, which lasted until about 1950, saw the publication of a handful of papers working out the first ramifications of the ideas Birkhoff had introduced. Free algebras, the isomorphism theorems, congruence lattices, and subalgebra lattices were discussed. Birkhoff [1944] established the subdirect representation theorem, destined to become a principal result in the subject. Ore ([1935] and [1936]) gave a purely lattice-theoretic account of the Krull-Schmidt direct decomposition theorem from group theory and noted that this considerably broadened the scope of the theorem. Post [1941] presented a detailed analysis of the lattice of clones on a two-element set. The notion of a relational structure


emerged during this time in investigations of mathematical logic (cf. Tarski [1931] and [1935]). The initial work of Jónsson and Tarski [1947] on unique direct factorization appeared. This period also saw the flowering of lattice theory in the works of Birkhoff, Dilworth, Frink, von Neumann, Ore, and Whitman. It was also in this period that M. H. Stone published his representation and duality theory for Boolean algebras and A. I. Maltsev [1936] established the compactness theorem (for arbitrary first-order languages). Both of these results led to significant later developments in general algebra.

The second era, which was to last until about 1963, was distinguished by the predominance of ideas and methods derived from mathematical logic, especially model theory. The period began with Alfred Tarski's 1950 address to the International Congress of Mathematicians [1952]. In his lecture, Tarski defined very clearly a new branch of mathematics (which he called the theory of arithmetical classes), outlined its early results, and described its goals, methods, and prospects. Today this branch is called the model theory of first-order languages, or simply model theory. Tarski's address marked the recognition of model theory as a legitimate mathematical theory with potentially important applications. His fame attracted talented young mathematicians to the field, and it developed very rapidly during the next fifteen years. Significantly, Tarski conceived the theory of arithmetical classes to be "a chapter of universal algebra," and he credited Birkhoff for an important early result in this theory. In [1935a], Birkhoff had characterized varieties by the form of the first-order sentences axiomatizing them (i.e., equations), and this result now became a paradigm for numerous results in model theory, of a type called "preservation theorems." During the 1950s, Tarski influenced several generations of logicians at Berkeley in the direction of the general theory of algebras; in Russia, Maltsev exerted a comparable influence. One of the many fruits of this era was the concept of ultraproduct, put forward by Łoś [1955] and extensively developed in the Berkeley school. Independently of these schools, in the late fifties a group of Polish algebraists (most prominently, E. Marczewski) published more than fifty papers on the algebraic theory of free algebras. It was also during this period that A. L. Foster at Berkeley focused the attention of an active group of students on the possibility of placing Stone's representation theorem for Boolean algebras in a general setting.

By 1960, the focus of research in model theory had moved far from the general theory of algebras. At the same time, universal algebraists had rediscovered an interest in problems that could not be handled by purely model-theoretic methods. There now began a third era, which was to last into the late 1970s. An early result of this period, which still exerts considerable influence, appeared in a paper by G. Grätzer and E. T. Schmidt [1963]. The authors presented an abstract characterization of congruence lattices of arbitrary algebras, thus solving a difficult problem of Birkhoff left open from the first period. This was followed shortly by a paper of B. Jónsson [1967], which provided elegant and powerful tools for the study of a diverse family of varieties-the congruence-distributive varieties. A new theme of classifying varieties according to the behavior of congruences in their algebras, and more generally by Maltsev properties, received much attention in the 1970s. A 1954


paper by Maltsev had shown the first connection of the type that was now to be intensively studied, between a congruence property and a pair of equations holding in a variety. This third period also witnessed the formulation of the notion of Boolean product, the first deep results concerning the lattices of varieties, and a resumption of efforts toward understanding the lattices of clones over finite sets. The quantity of published papers increased astronomically during the 1960s and 1970s, and three of the hardest long-standing open problems in lattice theory were solved near the end of the period. P. Pudlák and J. Tůma [1980] proved that every finite lattice embeds into a finite partition lattice (which Whitman had conjectured in 1946). R. Freese [1980] proved that the word problem for free modular lattices is recursively unsolvable. J. B. Nation [1982] proved, as Jónsson had conjectured in 1960, that the finite sublattices of free lattices are identical with finite lattices satisfying three of the elementary properties that characterize free lattices.

In the midst of this period appeared four books devoted to the exposition of the general theory of algebras. Cohn [1965] was the first to appear. The books of Grätzer and Pierce came out in 1968, but the publication of Maltsev [1973] was delayed by the author's death in 1967. In essence, each of these books gives an account of the findings of the two earlier periods. Of the four, Grätzer's book is the most comprehensive and soon became the standard reference. More sharply focused books, like Jónsson [1972], were also published. A very readable (but all too slender) book by Burris and Sankappanavar [1981] appeared at the close of this period. This period also saw the founding of Algebra Universalis (a journal devoted to lattice theory and the general theory of algebras), the growth of strong centers of activity in Winnipeg, Darmstadt, Szeged, Honolulu, and elsewhere, and the regular organization of conferences at the national and the international levels, emphasizing universal algebra and lattice theory.

The appearance of general commutator theory (in the papers by J. D. H. Smith [1976] and J. Hagemann and C. Herrmann [1979]) heralded the start of the present era. In just a few years, the application of commutator theory has greatly transformed and deepened the theory of varieties. Commutator theory finds its strongest applications in varieties that satisfy rather strong Maltsev properties, such as congruence modularity. Another product of this era, tame congruence theory, has found broad applications in the study of locally finite varieties without Maltsev conditions. In another direction, D. Pigozzi (with W. Blok and P. Köhler [1982], [1984]) has shown that the congruence distributive varieties fall naturally into an interesting hierarchy with many levels. One of the principal features of this period will certainly be a much deeper understanding of the significant natural families of varieties and the different approaches suitable for uncovering the properties of these families.

Preliminaries

We assume that the reader has a working knowledge of the most basic notions of set theory and has had a modest exposure to classical algebraic structures such as groups and rings. The approach to set theory is informal, and no particular set of axioms need be specified. (If the reader is unfamiliar with the material reviewed below, we recommend consulting Halmos [1960] or Vaught [1985], both of which contain excellent elementary introductions to set theory.)

We use classes as well as sets. Informally speaking, a class is a collection so large that subjecting it to the operations admissible for sets would lead to logical contradictions. For example, we speak of the set of all sets of natural numbers but the class of all Abelian groups. We often use the term family in reference to a set whose members are sets. (In pure formal set theory, every member of any set is itself a set.)

In dealing with sets we use these standard notations: membership (∈), set-builder notation ({-: -}), the empty set (∅), inclusion (⊆), proper inclusion (⊂), union (∪), intersection (∩), and difference (−), (ordered) n-tuples (⟨x_0, ..., x_{n-1}⟩), direct (or Cartesian) products of sets (A × B, ∏_{i∈I} A_i), and direct powers of sets (A^I). We shall not distinguish between (ordered) pairs and 2-tuples. The ordered pair of x and y will be denoted by ⟨x, y⟩ and sometimes by (x, y). There follows a series of remarks introducing further notations and basic definitions.

1. The power set of a set A is the set {B: B ⊆ A} of all subsets of A. It is denoted by 𝒫(A).

2. A^n is the set {⟨x_0, ..., x_{n-1}⟩: {x_0, ..., x_{n-1}} ⊆ A} of all n-tuples each of whose terms belongs to A. For the denotation of n-tuples we use bars over letters. Thus

   A^n = {x̄: x̄ = ⟨x_0, ..., x_{n-1}⟩ where {x_0, ..., x_{n-1}} ⊆ A}.

3. Concerning relations:
   a. An n-ary relation on a set A is a subset of A^n.
   b. A 2-ary relation on A is called a binary relation.
   c. The converse r˘ of a binary relation r on A is given by ⟨a, b⟩ ∈ r˘ iff ⟨b, a⟩ ∈ r. ("iff" is an abbreviation for "if and only if".) r˘ may also be denoted r⁻¹.
   d. The relational product of two binary relations r, s on A is defined by: ⟨a, b⟩ ∈ r ∘ s iff for some c, ⟨a, c⟩ ∈ r and ⟨c, b⟩ ∈ s.
   e. The transitive closure of a binary relation r on A is the binary relation r ∪ (r ∘ r) ∪ (r ∘ r ∘ r) ∪ ⋯. Thus ⟨x, y⟩ belongs to this transitive closure iff for some integer n ≥ 1 there exist elements x_0 = x, x_1, ..., x_n = y satisfying ⟨x_i, x_{i+1}⟩ ∈ r for all i < n.


4. Concerning functions:
   a. A function f from a set A to a set B is a subset of B × A such that for each a ∈ A there is exactly one b ∈ B with ⟨b, a⟩ ∈ f. Synonyms for function are mapping, map, and system. If f is a function from A to B, then we write f: A → B; and if ⟨b, a⟩ ∈ f, then we write f(a) = b, or fa = b, or f_a = b, or f: a ↦ b. Thus if f: A → B, then f = {⟨f(a), a⟩: a ∈ A}.
   b. The set of all functions from A to B is denoted by B^A.
   c. If f ∈ B^A and g ∈ C^B, then f and g are relations on A ∪ B ∪ C. We write gf for their relational product g ∘ f; thus gf ∈ C^A and gf(a) = g(f(a)).
   d. If f ∈ B^A, then ker f, the kernel of f, is the binary relation {⟨a_0, a_1⟩ ∈ A²: f(a_0) = f(a_1)}. f is called injective (or one-to-one) iff ⟨x, y⟩ ∈ ker f implies x = y (for all x, y ∈ A).
   e. If f ∈ B^A, X ⊆ A, and Y ⊆ B, then f(X) = {f(x): x ∈ X} (the f-image of X) and f⁻¹(Y) = {x ∈ A: f(x) ∈ Y} (the f-inverse image of Y). f: A → B is called surjective (or said to map A onto B) iff f(A) = B.
   f. The function f ∈ B^A is bijective iff it is both injective and surjective.
   g. If f ∈ B^A, then we say that the domain of f is A, the co-domain of f is B, and the range of f is the set f(A).
   h. Function-builder notation ⟨-: -⟩ is quite analogous to set-builder notation. If X is a set and r(x) is an expression that completely prescribes an object for each x ∈ X, then ⟨r(x): x ∈ X⟩ designates the function f with domain X satisfying f(x) = r(x) for all x ∈ X. For example, if g ∈ Y^X and h ∈ Z^Y, then f = hg (see paragraph (4c)) is the same function as ⟨h(g(x)): x ∈ X⟩. If the domain of a function that we wish to introduce is clearly implied by the context (say it is the set of real numbers), then we may specify the function with a simple phrase like "let f be the function x ↦ x²."
   i. An (indexed) system ⟨F_i: i ∈ I⟩, indexed by a set I, is just a function whose domain is I.
   j. If ⟨A_i: i ∈ I⟩ is a system of sets (A_i is a set for all i ∈ I), then ∏⟨A_i: i ∈ I⟩, or ∏_{i∈I} A_i, denotes the set of all functions f with domain I such that f(i) ∈ A_i for all i ∈ I. It is called the (direct or Cartesian) product of the A_i, i ∈ I. For any sets A and I, the set A^I of all functions from I to A is a direct power of A.

5. The Axiom of Choice is the statement that if ⟨A_i: i ∈ I⟩ is any system of sets with A_i ≠ ∅ for all i ∈ I, then ∏_{i∈I} A_i ≠ ∅. We assume the validity of this axiom.

6. ℤ, ℚ, ℝ, ℂ denote respectively the set of all integers 0, ±1, ±2, ±3, ..., the set of all rational numbers, the set of all real numbers, and the set of all complex numbers.

7. The union of a family (or set) F of sets, ⋃F, is defined by x ∈ ⋃F iff x ∈ B for some B ∈ F. The intersection of a family F ⊆ 𝒫(A), written ⋂F, is defined (dually to the union) to be the set {x ∈ A: x ∈ B for all B ∈ F}. (If F is nonempty (F ≠ ∅), then this is independent of A.)

8. An order over a set A is a binary relation ≤ on A such that
   a. ≤ is reflexive over A; i.e., ⟨x, x⟩ ∈ ≤ for all x ∈ A.
   b. ≤ is anti-symmetric; i.e., if ⟨x, y⟩ ∈ ≤ and ⟨y, x⟩ ∈ ≤, then x = y.
   c. ≤ is transitive; i.e., ⟨x, y⟩ ∈ ≤ and ⟨y, z⟩ ∈ ≤ always imply ⟨x, z⟩ ∈ ≤.
   For orders (and binary relations more generally), we often prefer to write x ≤ y in place of ⟨x, y⟩ ∈ ≤. Given an order ≤ over a nonempty set A, the pair ⟨A, ≤⟩ is called an ordered set. (Ordered sets are frequently called "partially ordered" sets in the literature.)

9. By a chain in an ordered set ⟨A, ≤⟩ is meant a set C ⊆ A such that for all x, y ∈ C either x ≤ y or y ≤ x. An upper bound of C is an element u ∈ A for which c ≤ u for all c ∈ C.

10. Zorn's Lemma is the statement that if ⟨A, ≤⟩ is an ordered set in which every chain has an upper bound, then ⟨A, ≤⟩ has a maximal element m (i.e., m ∈ A and m ≤ x ∈ A implies m = x). We take this statement as an axiom.

11. The Hausdorff Maximality Principle is the statement that every ordered set has a maximal chain. More precisely, let ⟨A, ≤⟩ be an ordered set and let L denote the family of all chains in ⟨A, ≤⟩. The principle states that the ordered set ⟨L, ⊆⟩ has a maximal element. The Hausdorff Maximality Principle is equivalent to Zorn's Lemma.

12. A linearly ordered set (also called a chain) is an ordered set ⟨A, ≤⟩ in which every pair of elements x and y satisfy either x ≤ y or y ≤ x. A well-ordered set is a linearly ordered set ⟨A, ≤⟩ such that every nonempty subset B ⊆ A has a least element l (i.e., l ∈ B and l ≤ x for all x ∈ B).

13. Concerning ordinals:
   a. The ordinals are generated from the empty set ∅ using the operations of successor (the successor of x is S(x) = x ∪ {x}) and union (the union of any set of ordinals is an ordinal).
   b. 0 = ∅ (the empty set), 1 = S(0), 2 = S(1), .... The finite ordinals are 0, 1, 2, ..., also called natural numbers or non-negative integers.
   c. Every set of ordinals is well-ordered by setting α ≤ β (α and β are ordinals) iff α = β or α ∈ β. Then 0 ≤ 1 ≤ 2 ≤ ⋯, and we have n = {0, 1, ..., n − 1} for each finite ordinal n ≥ 1.
   d. The least infinite ordinal is ω = {0, 1, 2, ...}, which is the set of all finite ordinals.

   e. It is useful to know that n-tuples are functions having domain {0, 1, ..., n − 1}.

14. Concerning cardinals:
   a. Two sets A and B have the same cardinality iff there is a bijection from A to B.
   b. The cardinals are those ordinals κ such that no ordinal β < κ has the same cardinality as κ. The finite cardinals are just the finite ordinals, and ω is the smallest infinite cardinal.
   c. The Well Ordering Theorem is the statement that every set has the same cardinality as some ordinal. We take it as an axiom. (Actually, the Well Ordering Theorem, the Axiom of Choice, Zorn's Lemma, and the Hausdorff Maximality Principle (see (5), (10), and (11)) can be proved mutually equivalent in a rather direct fashion.)
   d. The cardinality of a set A is the (unique) cardinal κ such that A and κ have the same cardinality. The cardinality of A is written |A|.
   e. The power set 𝒫(A) of a set A has the same cardinality as 2^A (hence the term "power set").
   f. Operations of addition, multiplication, and exponentiation of cardinals are defined so that for any sets A and B, |A|·|B| = |A × B|, |A| + |B| = |A ∪ B| if A and B are disjoint (i.e., if A ∩ B = ∅), and |A|^|B| = |A^B|. Addition and multiplication are rather trivial where infinite cardinals are involved. For instance, if κ, λ are cardinals, 0 < κ ≤ λ, and ω ≤ λ, then κ + λ = κ·λ = λ. The cardinal 2^ω is the cardinality of the set of real numbers.
   g. There is a unique one-to-one order-preserving function (denoted by the Hebrew letter aleph, ℵ) from the class of all ordinal numbers onto the class of all infinite cardinal numbers. The first few infinite cardinal numbers, in ascending order, are ℵ₀ (= ω), ℵ₁, ℵ₂, etc. The least cardinal larger than an infinite cardinal κ is denoted by κ⁺.

15. Concerning equivalence relations:
   a. An equivalence relation over a set A is a binary relation ∼ on A that is reflexive over A, transitive (see (8)), and symmetric (i.e., ⟨x, y⟩ ∈ ∼ iff ⟨y, x⟩ ∈ ∼).
   b. For an equivalence relation ∼ over A and for x ∈ A, the equivalence class of x modulo ∼ is the set x/∼ = {y ∈ A: x ∼ y}. The factor set of A modulo ∼ is A/∼ = {x/∼: x ∈ A}.
   c. For an equivalence relation ∼ over A, A/∼ is a partition of A. That is, A/∼ is a set of nonempty subsets of A, A = ⋃A/∼, and each pair of distinct sets U and V in A/∼ are disjoint. Every partition of A arises in this fashion from an equivalence relation (which is uniquely determined by the partition). (A small computational sketch of this correspondence appears just after these remarks.)
   d. The set of all equivalence relations over A is denoted by Eqv A. Individual equivalence relations are usually denoted by lowercase Greek letters α, β, ρ, etc. Instead of ⟨x, y⟩ ∈ ρ (where ρ is an equivalence relation) we may write, variously, xρy, x ≡ y (mod ρ), or x ≡_ρ y. (In diagrams, the fact that

⟨x, y⟩ ∈ ρ may be indicated by connecting points labeled x and y with a line segment and labeling the segment with ρ.)
   e. ⟨Eqv A, ⊆⟩ is an ordered set having greatest lower bounds and least upper bounds for any subset of its elements. The greatest lower bound of a set S ⊆ Eqv A is ⋂S. The least upper bound of S is the transitive closure of ⋃S (see (3e)).

16. The equality symbol (=) is used to assert that two expressions name the same object. The formal equality symbol (≈) is used to build equations, such as the commutative law x·y ≈ y·x, which can only become true or false after we assign specific values to their symbols and ask whether the two sides name the same object. (See §4.11 in Chapter 4.)
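As a small illustration of remarks 4(d) and 15(b)-(c), the sketch below computes the kernel of a function on a finite set and the partition it induces. It is only an illustrative computation in Python; the particular five-element set and the function used are arbitrary choices made for the example.

```python
# Illustration of remarks 4(d) and 15(b)-(c): the kernel of a function f on a
# finite set A is an equivalence relation, and its equivalence classes form a
# partition of A.

A = {0, 1, 2, 3, 4}            # a hypothetical five-element set
f = lambda x: x % 2            # a hypothetical function from A into {0, 1}

# ker f = {<a0, a1> in A^2 : f(a0) = f(a1)}
ker_f = {(a0, a1) for a0 in A for a1 in A if f(a0) == f(a1)}

def equivalence_class(theta, x):
    """x/theta = {y : <x, y> in theta}."""
    return frozenset(y for (u, y) in theta if u == x)

# The factor set A/ker f is a partition of A: nonempty blocks, pairwise
# disjoint, with union A.
blocks = {equivalence_class(ker_f, x) for x in A}
assert all(blocks), "every block is nonempty"
assert set().union(*blocks) == A, "the blocks cover A"
assert all(b1 == b2 or b1.isdisjoint(b2) for b1 in blocks for b2 in blocks)

print(sorted(sorted(b) for b in blocks))   # [[0, 2, 4], [1, 3]]
```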

CHAPTER 1

Basic Concepts

1.1 ALGEBRAS AND OPERATIONS

An algebra is a set endowed with operations. Algebras are the fundamental objects with which we shall deal, so our first step is to make the preceding sentence precise.

Let A be a set and n be a natural number. An operation of rank n on A is a function from A^n into A. Here we have used A^n to denote the n-fold direct power of A, that is, the set of all n-tuples of elements of A. By a (finitary) operation on A we mean an operation of rank n on A for some natural number n. Because virtually every operation taken up in this book will be finitary, we will generally omit the word "finitary" and use "operation" to mean finitary operation. If A is nonempty, then each operation on A has a unique rank. Operations of rank 0 on a nonempty set are functions that have only one value; that is, they are constant functions of a rather special kind. We call operations of rank 0 constants and identify them with their unique values. Similarly, we call operations of rank 1 on A unary operations and identify them with the functions from A into A. Binary and ternary operations are operations of rank 2 and 3, respectively. We use n-ary operation, operation of rank n, and operation of arity n interchangeably. It is important to realize that the domain of an operation of rank n on A is the whole set A^n. Functions from a subset of A^n into A are called partial operations of rank n on A. Subtraction is a binary operation on the set ℤ of integers, but it is only a partial operation on the set ω of natural numbers.

The fundamental operations most frequently encountered in mathematics have very small ranks. A list of these important operations certainly includes addition, subtraction, multiplication, division, exponentiation, negation, conjugation, etc., on appropriate sets (usually sets of numbers, vectors, or matrices). This list should also include such operations as forming the greatest common divisor of two natural numbers, the composition of two functions, and the union of two sets. Of course, one is almost immediately confronted with operations of higher rank that are compounded from these. Operations of higher finite rank whose mathematical significance does not depend on how they are built up from operations of smaller rank seem, at first, to be uncommon. Such operations will emerge later in this work, especially in Chapter 4 and in later volumes. However,


even then most of the operations have ranks no larger than 5. While there is some evidence that operations of such small rank provide adequate scope for the development of a general theory of algebras, why this might be so remains a puzzle.

To form algebras, we plan to endow sets with operations. There are several ways to accomplish this. We have selected the one that, for most of our purposes, leads to clear and elegant formulations of concepts and theorems.

DEFINITION 1.1. An algebra is an ordered pair ⟨A, F⟩ such that A is a nonempty set and F = ⟨F_i: i ∈ I⟩ where F_i is a finitary operation on A for each i ∈ I. A is called the universe of ⟨A, F⟩, F_i is referred to as a fundamental or basic operation of ⟨A, F⟩ for each i ∈ I, and I is called the index set or the set of operation symbols of ⟨A, F⟩.

The reason we have endowed our algebras with indexed systems of operations rather than with mere sets of operations is so that we have a built-in means to keep the operations straight. From the customary viewpoint, rings have two basic binary operations labeled "addition" (or +) and "multiplication" (or ·). For the development of ring theory, it is essential to distinguish these operations from each other. In effect, most expositions do this by consistent use of the symbols + and ·: the actual binary operations in any given ring are indexed by these symbols. This is why we have chosen to call the index set of ⟨A, F⟩ its set of operation symbols. The distinction between operation symbols and operations is important, and we will have much to say regarding it in §4.11.

The notation implicit in the definition above is unwieldy in most situations. Quite often the set of operation symbols is small. For example, ring theory is accommodated by the operation symbols +, ·, and − (this last symbol is intended to name the unary operation of negation). But surely

⟨Z, ⟨F_i: i ∈ {+, ·, −}⟩⟩

is an uncomfortable way to display the ring of integers. In this situation and others like it, we find

⟨Z, +, ·, −⟩

much more acceptable. Notice that in this last display +, ·, and − are no longer operation symbols but operations; exactly which operations they are is clear from context.

As a general convention, we use uppercase boldface letters A, B, C, ... to denote algebras and the corresponding uppercase letters A, B, C, ... to denote their universes, attaching subscripts as needed. Thus, in most uses, A is the universe of A, B_1 is the universe of B_1, and so on. If Q is an operation symbol of A, then we use Q^A to stand for the fundamental operation of A indexed by Q; we say that Q denotes Q^A or that Q^A is the interpretation of Q in A. Whenever the cause of clarity or the momentum of customary usage dictates, we will depart from these conventions.

Given an algebra A with index set I, there is a function ρ called the rank function from I into the set ω of natural numbers defined by:

ρ(Q) is the rank of Q^A for all Q ∈ I.

The rank function of an algebra is also referred to as its similarity type or, more briefly, its type. Algebras A and B are said to be similar if and only if they have the same rank function. The similarity relation between algebras is an equivalence relation whose equivalence classes will be called similarity classes. Most of the time (with some important exceptions), only algebras of the same similarity type will be under consideration. In fact, this hypothesis that all algebras at hand are similar is so prevalent that we have left it unsaid, even in the statement of some theorems.

The rank functions are partially ordered by set inclusion (that is, by extension of functions). This ordering can be imposed on the similarity classes as well. For individual algebras, we say that A is a reduct of B (and that B is an expansion of A) if and only if A and B have the same universe, the rank function of A is a subset of the rank function of B, and Q^A = Q^B for all operation symbols Q of A. In essence, this means that B can be obtained by adjoining more basic operations to A. For example, each ring is an expansion of some Abelian group.

We close this section with a series of examples of algebras. Besides illustrating the notions just introduced, these examples specify how we formalize various familiar kinds of algebras and serve as resources for later reference. In formulating the examples, we use the following operation symbols:

Constant symbols: e, 1, 0, and 1′
Unary operation symbols: −, ⁻¹, ¯, ˘, and f_r for each r ∈ R
Binary operation symbols: +, ·, ∧, ∨
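Definition 1.1 and the rank function can be mirrored in a short computational sketch. The Python fragment below is only an illustration under simplifying assumptions (finite universes, operations given as Python functions, operation symbols given as strings); the helper names it introduces are hypothetical.

```python
# A finite algebra in the sense of Definition 1.1: a nonempty universe together
# with an indexed system of finitary operations.  The index set I doubles as
# the set of operation symbols.

from inspect import signature

def make_algebra(universe, operations):
    """operations: dict mapping an operation symbol to an operation on universe."""
    assert universe, "the universe of an algebra must be nonempty"
    return {"universe": frozenset(universe), "operations": dict(operations)}

def rank_function(algebra):
    """The rank function: each symbol is sent to the rank of its operation."""
    return {symbol: len(signature(op).parameters)
            for symbol, op in algebra["operations"].items()}

def similar(algebra1, algebra2):
    """Two algebras are similar iff they have the same rank function."""
    return rank_function(algebra1) == rank_function(algebra2)

# The ring of integers modulo 3, displayed in the style <Z_3, +, ., ->.
Z3 = make_algebra(range(3), {
    "+": lambda x, y: (x + y) % 3,
    ".": lambda x, y: (x * y) % 3,
    "-": lambda x: (-x) % 3,
})

print(rank_function(Z3))   # {'+': 2, '.': 2, '-': 1}
print(similar(Z3, Z3))     # True: an algebra is similar to itself
```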

Semigroups

A semigroup is an algebra A = ⟨A, ·^A⟩ such that:

(a ·^A b) ·^A c = a ·^A (b ·^A c)  for all a, b, c ∈ A.

Thus a semigroup is a nonempty set endowed with an associative binary operation. A typical example of a semigroup is the collection of all functions from X into X, where X is any set, with the operation being composition of functions. A more sophisticated example is the collection of all n × n matrices of integers endowed with matrix multiplication.

Monoids

A monoid is an algebra A = ⟨A, ·^A, e^A⟩ such that ⟨A, ·^A⟩ is a semigroup and

a ·^A e^A = a = e^A ·^A a  for all a ∈ A.

To obtain some concrete examples, we can let e denote the identity function in our first example of a semigroup and the identity matrix in the second example. Although every monoid is an expansion of a semigroup, not every semigroup can be expanded to a monoid.

Groups

A group is an algebra A = ⟨A, ·, ⁻¹, e⟩ such that ⟨A, ·, e⟩ is a monoid and

a · a⁻¹ = e = a⁻¹ · a  for all a ∈ A.

(The reader will have observed that the superscript A has grown tiresome and been dropped.) A typical example of a group is the collection of all one-to-one functions from X onto X (such functions are called permutations of X), where X is an arbitrary set and the operations are composition of functions, inversion of functions, and the identity function. A more intricate example is the collection of isometries (also called distance-preserving functions) of the surface of the unit ball in ordinary Euclidean three-dimensional space endowed with the same sorts of basic operations.

Groups have been construed as algebras of several different similarity types. Most of the popular renditions of group theory define groups as certain special kinds of semigroups. Our choice of fundamental operations was motivated by the desire to have the class of groups turn out to be a variety. Still, there are a number of quite satisfactory ways to present the intuitive notion of a group. For instance, there is no real need to devote a basic operation to the unit element. It is even possible to make do with a single binary operation, though this cannot be ·. The sense in which such formulations are equivalent, not only for groups but for algebras in general, is made precise in §4.12. Some interesting aspects of semigroups and groups are explored in Chapter 3.
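The typical example just mentioned, the permutations of a set under composition, inversion, and the identity map, can be checked mechanically on a small set. The following sketch is illustrative only; the three-element set it works over is an arbitrary choice.

```python
# Check the monoid and group laws for the permutations of a small set X, with
# composition as the binary operation, functional inverse as the unary
# operation, and the identity function as the distinguished element.

from itertools import permutations

X = (0, 1, 2)                                             # an arbitrary three-element set
G = [dict(zip(X, image)) for image in permutations(X)]    # all permutations of X

compose = lambda f, g: {x: f[g[x]] for x in X}            # (f . g)(x) = f(g(x))
inverse = lambda f: {f[x]: x for x in X}
identity = {x: x for x in X}

# Associativity, the identity law, and the inverse law from the text.
assert all(compose(compose(f, g), h) == compose(f, compose(g, h))
           for f in G for g in G for h in G)
assert all(compose(f, identity) == f == compose(identity, f) for f in G)
assert all(compose(f, inverse(f)) == identity == compose(inverse(f), f) for f in G)

print(len(G), "permutations; all group equalities hold")
```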

Rings

A ring is an algebra ⟨A, +, ·, −, 0⟩ such that ⟨A, +, −, 0⟩ is an Abelian group, ⟨A, ·⟩ is a semigroup, and

a · (b + c) = (a · b) + (a · c)  and  (b + c) · a = (b · a) + (c · a)  for all a, b, c ∈ A.

A ring with unit is an algebra ⟨A, +, ·, −, 0, 1⟩ such that ⟨A, +, ·, −, 0⟩ is a ring and ⟨A, ·, 1⟩ is a monoid. A familiar example of a ring (with unit) is the integers endowed with the familiar operations. Another example is the set of n × n matrices with real entries endowed with the obvious operations. We regard fields as special kinds of rings.

Vector Spaces and Modules

In the familiar treatments, vector spaces and modules are equipped with a binary operation called addition and a scalar multiplication subject to certain conditions. As ordinarily conceived, the scalar multiplication is not what we have called an operation. An easy way around this trouble is to regard scalar multiplication as a schema of unary operations, one for each scalar. Actually, this is in accord with geometric intuition by which these operations amount to stretchings or shrinkings.

Let R = ⟨R, +, ·, −, 0, 1⟩ be a ring with unit. An R-module (sometimes called a left unitary R-module) is an algebra ⟨M, +, −, 0, f_r⟩_{r∈R} such that ⟨M, +, −, 0⟩ is an Abelian group and for all a, b ∈ M and for all r, s, and t ∈ R the following equalities hold:

f_r(f_s(a)) = f_t(a)  where r · s = t in R
f_r(a + b) = f_r(a) + f_r(b)
f_r(a) + f_s(a) = f_t(a)  where r + s = t in R
f_1(a) = a.

In essence, what these conditions say is that f_r is an endomorphism of the Abelian group ⟨M, +, −, 0⟩ for each r ∈ R, that this collection of endomorphisms is itself a ring with unit, and that the mapping r ↦ f_r is a homomorphism from R onto this ring. Although we will soon formulate such notions as homomorphism in our general setting, this last sentence is meaningful as it stands in the special context of ring theory. Part of the importance of modules lies in the fact that every ring is, up to isomorphism, a ring of endomorphisms of some Abelian group. This fact is analogous to the more familiar theorem of Cayley to the effect that every group is isomorphic to a group of permutations of some set. In the event that R is a field, we call the R-modules vector spaces over R.
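Viewing scalar multiplication as a schema of unary operations f_r is easy to make concrete. The sketch below treats the integers modulo 5 as a module over the ring of integers, one unary operation per scalar, and spot-checks the four module equalities over a small range of scalars; both the particular module and the finite test range are merely illustrative choices.

```python
# The Abelian group Z_5 = <{0,...,4}, +, -, 0> viewed as a module over the ring
# Z of integers: scalar multiplication by r is packaged as a unary operation f_r.

M = range(5)                                   # universe of the Abelian group
add = lambda a, b: (a + b) % 5
f = lambda r: (lambda a: (r * a) % 5)          # the unary operation f_r

scalars = range(-6, 7)                         # a small test range of scalars

for r in scalars:
    for s in scalars:
        for a in M:
            assert f(r)(f(s)(a)) == f(r * s)(a)            # f_r(f_s(a)) = f_{r.s}(a)
            assert add(f(r)(a), f(s)(a)) == f(r + s)(a)    # f_r(a) + f_s(a) = f_{r+s}(a)

for r in scalars:
    for a in M:
        for b in M:
            assert f(r)(add(a, b)) == add(f(r)(a), f(r)(b))  # f_r(a + b) = f_r(a) + f_r(b)

assert all(f(1)(a) == a for a in M)            # f_1(a) = a
print("module equalities verified on the test range")
```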

Bilinear Algebras over a Field

Let F = ⟨F, +, ·, −, 0, 1⟩ be a field. An algebra A = ⟨A, +, ·, −, 0, f_r⟩_{r∈F} is a bilinear algebra over F if ⟨A, +, −, 0, f_r⟩_{r∈F} is a vector space over F and for all a, b, c ∈ A and all r ∈ F:

(a + b) · c = (a · c) + (b · c)
c · (a + b) = (c · a) + (c · b)
f_r(a · b) = (f_r(a)) · b = a · f_r(b).

If, in addition, (a · b) · c = a · (b · c) for all a, b, c ∈ A, then A is called an associative algebra over F. Thus an associative algebra over a field has both a vector space reduct and a ring reduct. An example of an associative algebra can be constructed from the linear transformations of any vector space into itself. A concrete example of this kind is obtained by letting A be the set of all 2 × 2 matrices over the field of real numbers and taking the natural matrix operations. Lie algebras, Jordan algebras, and alternative algebras provide important examples of nonassociative bilinear algebras that have arisen in connection with physics and analysis. A Lie algebra is a bilinear algebra that satisfies two further equalities:

a · a = 0  for all a ∈ A
((a · b) · c) + ((b · c) · a) + ((c · a) · b) = 0  for all a, b, c ∈ A.

Suppose ⟨A, +, ·, −, 0, f_r⟩_{r∈F} is an associative algebra over F. Define

a * b = (a · b) − (b · a)  for all a, b ∈ A.

It is not difficult to verify that ⟨A, +, *, −, 0, f_r⟩_{r∈F} is a Lie algebra. A good but brief introduction to bilinear algebras is available in Jacobson [1985]. Pierce [1982] offers an excellent account of associative algebras over fields.

We remark that a common usage of the word "algebra" in the mathematical literature is to refer to those mathematical structures we have called "bilinear algebras over fields." The objects we have called "algebras" are then referred to as "universal algebras" (although there is nothing especially universal about, say, the three-element group, which is one of these objects) or as "Ω-algebras" (perhaps because Ω is a kind of Greek abbreviation for "operation").

The establishment of the theories of groups, rings, fields, vector spaces, modules, and various kinds of bilinear algebras over fields is a sterling accomplishment of nineteenth-century mathematics. This line of mathematical research can be said to have reached its maturity at the hands of such mathematicians as Hilbert, Burnside, Frobenius, Wedderburn, Noether, van der Waerden, and E. Artin by the 1930s. It has continued to grow in depth and beauty, being today one of the most vigorous mathematical enterprises.

There is another important series of examples of algebras, different in character from those described above.

Semilattices

A semilattice is a semigroup ⟨A, ∧⟩ with the properties

a ∧ b = b ∧ a  for all a, b ∈ A

and

a ∧ a = a  for all a ∈ A.

A typical example of a semilattice is formed by taking A to be the collection of all subsets of an arbitrary set with the operation being intersection. Another example is formed by taking A to be the compact convex sets on the Euclidean plane and the operation to be the formation of the closed convex hull of the union of two compact convex sets.

Lattices

A lattice is an algebra ⟨A, ∧, ∨⟩ such that both ⟨A, ∧⟩ and ⟨A, ∨⟩ are semilattices and the following two equalities hold:

a ∨ (a ∧ b) = a  for all a, b ∈ A

and

a ∧ (a ∨ b) = a  for all a, b ∈ A.

A typical example of a lattice is formed by taking A to be the collection of all equivalence relations on an arbitrary set, ∧ to be intersection, and ∨ to be the transitive closure of the union of two given equivalence relations. Another example is formed by taking A to be the set of natural numbers, ∨ to be the formation of least common multiples, and ∧ to be the formation of greatest common divisors. Lattices have a fundamental role to play in our work. Chapter 2 is devoted to the elements of lattice theory. The operation ∧ is referred to as meet, and the operation ∨ is called join.
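The first example above, the equivalence relations on a set with intersection as meet and the transitive closure of the union as join, lends itself to a direct computation (and it also anticipates Exercise 7 below). The following sketch is an illustration on an arbitrary four-element set.

```python
# The lattice of equivalence relations on a small set A: the meet of two
# equivalence relations is their intersection, and the join is the transitive
# closure of their union.

A = {0, 1, 2, 3}                      # an arbitrary four-element set

def closure_as_equivalence(pairs):
    """Reflexive-symmetric-transitive closure of a set of pairs on A."""
    rel = {(a, a) for a in A} | set(pairs) | {(b, a) for (a, b) in pairs}
    while True:
        step = {(a, c) for (a, b) in rel for (b2, c) in rel if b == b2}
        if step <= rel:
            return frozenset(rel)
        rel |= step

# Two equivalence relations, generated from hypothetical pairings.
alpha = closure_as_equivalence({(0, 1)})          # blocks {0,1}, {2}, {3}
beta  = closure_as_equivalence({(1, 2)})          # blocks {1,2}, {0}, {3}

meet = alpha & beta                               # intersection
join = closure_as_equivalence(alpha | beta)       # transitive closure of union

print(sorted(meet))    # only the diagonal pairs: the meet splits A into singletons
print((0, 2) in join)  # True: 0 ~ 1 under alpha and 1 ~ 2 under beta
```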

Boolean Algebras

A Boolean algebra is an algebra ⟨A, ∧, ∨, ¯⟩ such that ⟨A, ∧, ∨⟩ is a lattice and for all a, b, c ∈ A the following equalities hold:

a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)
a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c)
(a ∨ b)¯ = a¯ ∧ b¯
(a ∧ b)¯ = a¯ ∨ b¯
a¯¯ = a
(a¯ ∧ a) ∨ b = b
(a¯ ∨ a) ∧ b = b.

Thus a Boolean algebra is a distributive lattice with a unary operation of complementation (denoted here by ¯) adjoined. As an example, take A to be the collection of all subsets of an arbitrary set X, let the join ∨ be union, the meet ∧ be intersection, and the complementation ¯ be set complementation relative to X. Another example is afforded by the clopen (simultaneously open and closed) subsets of a topological space under the same operations as above.

Relation Algebras

A relation algebra is an algebra ⟨A, ∧, ∨, ¯, ·, ˘, 1′⟩ such that ⟨A, ∧, ∨, ¯⟩ is a Boolean algebra, ⟨A, ·, 1′⟩ is a monoid, and the following equalities hold for all a, b, c ∈ A:

a · (b ∨ c) = (a · b) ∨ (a · c)
(a · b)˘ = b˘ · a˘
(a˘)˘ = a
(a¯)˘ = (a˘)¯
(1′)˘ = 1′
(a ∨ b)˘ = a˘ ∨ b˘
(a˘ · (a · b)¯) ∧ b = a¯ ∧ a.

An example of a relation algebra can be formed by taking A to be the collection of all binary relations on an arbitrary set X, giving A the Boolean operations by regarding A as the power set of X², and defining the remaining operations so that

R · S = {⟨x, y⟩: ⟨x, z⟩ ∈ R and ⟨z, y⟩ ∈ S for some z ∈ X}
R˘ = {⟨x, y⟩: ⟨y, x⟩ ∈ R}
1′ = {⟨x, x⟩: x ∈ X}.

The relation algebra obtained in this way from the set X is sometimes denoted by Rel X. Jónsson [1982] provides a thorough and very readable overview. Relation algebras have a rich theory, indeed rich enough to offer a reasonable algebraic context for the investigation of set theory. The essential idea for such an investigation is to regard set theory as the theory of membership. Membership is a binary relation. Adjoining an additional constant (nullary operation) e to the type of relation algebras to stand for membership opens the possibility of developing set theory by distinguishing e from other binary relations by means of the algebraic apparatus of relation algebras. It turns out to be possible, for example, to render the content of the Zermelo-Fraenkel axioms for set theory entirely as equations in this setting. The deep connections these algebras have with the foundations of mathematics emerge in the monograph of Tarski and Givant [1987].

Our list of examples of algebras ends here, having merely touched on a small selection. Further kinds of algebras will be introduced from time to time to serve as examples and counterexamples.
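The operations of Rel X are easy to experiment with when X is small. The sketch below builds relational product, converse, complementation, and the identity relation 1′ on a three-element set and spot-checks a few of the displayed equalities; the two relations R and S are arbitrary choices made for the example.

```python
# The concrete relation algebra Rel X on a small set X: Boolean operations on
# subsets of X*X, together with relational product, converse, and the identity
# relation 1'.

from itertools import product

X = {0, 1, 2}                                    # an arbitrary small set
top = set(product(X, X))                         # the largest relation, X x X

identity = {(x, x) for x in X}                                   # 1'
converse = lambda R: {(y, x) for (x, y) in R}                    # R breve
compose  = lambda R, S: {(x, y) for (x, z) in R for (z2, y) in S if z == z2}
complement = lambda R: top - R                                   # Boolean complement

# Two hypothetical relations on X.
R = {(0, 1), (1, 2)}
S = {(1, 1), (2, 0)}

# Spot checks of equalities from the text.
assert compose(R, identity) == R == compose(identity, R)
assert converse(compose(R, S)) == compose(converse(S), converse(R))
assert converse(converse(R)) == R
assert compose(R, S | identity) == compose(R, S) | compose(R, identity)
assert compose(converse(R), complement(compose(R, S))) & S == set()

print(sorted(compose(R, S)))   # [(0, 1), (1, 0)]
```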

Exercises 1.2
1. Let A be a nonempty set and Q be a finitary operation on A. Prove that the rank of Q is unique.
2. Let A be a nonempty set. Describe the operations on A of rank 0 in set-theoretic terms.
3. Construct a semigroup that cannot be expanded to a monoid.
4. Construct a semigroup that is not the multiplicative semigroup of any ring.
5. Prove that every ring can be embedded in a ring that can be expanded to a ring with unit.
6. Let (A, +, −, 0, ·, f_r)_{r∈F} be an associative algebra over the field F and define

x * y = x · y − y · x for all x, y ∈ A.

Prove that (A, +, *, −, 0, f_r)_{r∈F} is a Lie algebra.
7. Let A be a set and denote by Eqv A the set of all equivalence relations on A. For R, S ∈ Eqv A define

R ∧ S = R ∩ S
R ∨ S = R ∪ R∘S ∪ R∘S∘R ∪ R∘S∘R∘S ∪ ⋯

where ∘ stands for relational product (that is, a(R∘S)b means that there is some c such that both aRc and cSb). Prove that (Eqv A, ∧, ∨) is a lattice.


1.2 SUBALGEBRAS, HOMOMORPHISMS, AND DIRECT PRODUCTS

One of the hallmarks of algebraic practice is the prominent role played by relationships holding among algebras. Some of the subtleties of complicated algebras can be more readily understood if some tractable way can be found to regard them as having been assembled from less complicated, more thoroughly understood algebras. The chief tools we will use to assemble new algebras from those already on hand are the formation of subalgebras, the formation of homomorphic images, and the formation of direct products. The reader is probably familiar with these notions in the settings of groups, rings, and vector spaces. They fit comfortably into our general setting.

Let F be an operation of rank r on the nonempty set A, and let X be a subset of A. We say that X is closed with respect to F (also that F preserves X and that X is invariant under F) if and only if

F(a_0, a_1, …, a_{r−1}) ∈ X  for all a_0, a_1, …, a_{r−1} ∈ X.

In the event that F is a constant, this means that X is closed with respect to F if and only if F ∈ X. Thus the empty set is closed with respect to every operation on A of positive rank, but it is not closed with respect to any operation of rank 0. Taking A to be the set of integers, we see that the set of odd integers is closed with respect to multiplication but not with respect to addition.
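As a concrete illustration of our own (not from the text), the following Python sketch tests whether a finite subset is closed under a binary operation; working modulo 16 keeps the universe finite while mimicking the odd-integers example.

```python
def is_closed(subset, op):
    """Return True if op applied to elements of subset always lands in subset."""
    return all(op(a, b) in subset for a in subset for b in subset)

# Work modulo 16 so that the universe is finite.
odds = {x for x in range(16) if x % 2 == 1}
print(is_closed(odds, lambda a, b: (a * b) % 16))  # True: closed under multiplication
print(is_closed(odds, lambda a, b: (a + b) % 16))  # False: not closed under addition
```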

DEFINITION 1.3. Let A be an algebra. A subset of the universe A of A, which is closed with respect to each fundamental operation of A, is called a subuniverse of A. The algebra B is said to be a subalgebra of A if and only if A and B are similar, the universe B of B is a subuniverse of A, and Q^B is the restriction to B of Q^A, for each operation symbol Q of A. Sub A denotes the set of all subuniverses of A.

The ring of integers is a subalgebra of the ring of complex numbers. "B is an extension of A" means that A is a subalgebra of B; we render this in symbols as A ⊆ B. This convenient abuse of symbols should not lead to ambiguity; if nothing else, the boldface characters convey the algebraic intent.

Our system of conventions exposes us, from time to time, to the minor annoyance of subuniverses that are not universes of subalgebras. We insisted that the universes of algebras be nonempty, but we have also insisted on empty subuniverses (exactly when there are no operations of rank 0). By accepting this incongruity we avoid the need to single out many special cases in the statements of definitions and theorems.

The notion of subalgebra defined above occasionally conflicts, at least in spirit, with common usage. Consider the case of fields. We have regarded fields as rather special sorts of rings, but the possibility of putting the function that sends each nonzero element to its multiplicative inverse on the same distinguished footing as the ring operations is certainly enticing. We have not taken this step, since the function involved is only a partial operation, and the resulting


mathematical system would not fall within our definition of an algebra. This deviation from our definition may seem small (from many viewpoints it is), but to widen the definition so as to allow partial operations would result in a havoc of technical complications and substantially alter the character of the ensuing mathematics. We have the option of declaring that 0⁻¹ = 0 in order to force the operation to be defined everywhere, but this invalidates many equalities one ordinarily thinks of as holding in fields. For example,

x · x⁻¹ ≈ 1

would not hold when x = 0 unless the field had only one element. In any case, fields generally have subalgebras that are not fields. The integers form a subalgebra of the field of complex numbers that is not a subfield. In connection with the examples of algebras that concluded the previous section, the situation is very pleasant: Every subalgebra of a group, a ring, a vector space, etc., is again an algebra of the same sort.

Now consider similar algebras A and B and let Q be an operation symbol of rank r. A function h from A into B is said to respect the interpretation of Q if and only if

h(Q^A(a_0, …, a_{r−1})) = Q^B(h(a_0), …, h(a_{r−1}))

for all a_0, …, a_{r−1} ∈ A.
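For instance (a sketch of our own, with invented names), one can check mechanically whether a map respects a binary operation; the example below confirms that reduction modulo 3 respects addition on a finite sample of integers.

```python
def respects(h, op_a, op_b, elements):
    """Check h(op_a(x, y)) == op_b(h(x), h(y)) for all x, y in elements."""
    return all(h(op_a(x, y)) == op_b(h(x), h(y))
               for x in elements for y in elements)

h = lambda x: x % 3                      # reduction mod 3
add = lambda x, y: x + y                 # addition of integers
add3 = lambda x, y: (x + y) % 3          # addition in Z_3
print(respects(h, add, add3, range(12)))  # True
```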

DEFINITION 1.4. Let A and B be similar algebras. A function h from A into B is called a homomorphism from A into B if and only if h respects the interpretation of every operation symbol of A. hom(A, B) denotes the set of all homomorphisms from A into B.

We distinguish several kinds of homomorphisms and employ notation for them as follows. Let A and B be similar algebras. Each of

h: A → B,    A →ʰ B,    h ∈ hom(A, B)

denotes that h is a homomorphism from A into B. By attaching a tail to the arrow, we express the condition of one-to-oneness of h; by attaching a second head to the arrow, we express the condition that h is onto B. Thus both

h: A ↣ B    and    A ↣ʰ B

denote that h is a one-to-one homomorphism from A into B. We call such homomorphisms embeddings. Likewise, both

h: A ↠ B    and    A ↠ʰ B

denote that h is a homomorphism from A onto B, and in this case we say that B is the homomorphic image of A under h. Further, each of

h: A ⤖ B    and    A ⤖ʰ B

denotes that h is a one-to-one homomorphism from A onto B. We call such homomorphisms isomorphisms. A and B are said to be isomorphic, which we denote by A ≅ B, iff there is an isomorphism from A onto B. A homomorphism from A into A is called an endomorphism of A, and an isomorphism from A onto A is called an automorphism of A. End A and Aut A denote, respectively, the set of all endomorphisms of A and the set of all automorphisms of A. The identity map 1_A belongs to each of these sets; moreover, each of these sets is closed with respect to composition of functions. In addition, each automorphism of A is an invertible function and its inverse is also an automorphism. Thus (End A, ∘, 1_A) is a monoid, which we shall designate by End A, and (Aut A, ∘, ⁻¹, 1_A) is a group, which we shall designate by Aut A.

(R⁺, ·) and (R, +) are isomorphic, where R is the set of real numbers and R⁺ is the set of positive real numbers. Indeed, the natural logarithm function is an isomorphism that illustrates this fact.

An isomorphism is a one-to-one correspondence between the elements of two algebras that respects the interpretation of each operation symbol. This means that with regard to a host of properties, isomorphic algebras are indistinguishable from each other. This applies to most of the properties with which we shall deal; if they are true in a given algebra, then they are true for all isomorphic images of that algebra as well. Such properties have been called "algebraic properties." On the other hand, algebras that are isomorphic can be quite different from each other. For example, the set of all twice continuously differentiable real-valued functions of a real variable that are solutions to the differential equation

can be given the structure of a vector space over the field of real numbers. This vector space is isomorphic to the two-dimensional space familiar from Euclidean plane geometry. Roughly speaking, the distinction between these two isomorphic vector spaces can be traced to the "internal" structure of their elements: on the one hand, functions, and on the other, geometric points. The notion that functions can be regarded as points with a geometric character is a key insight, not only for differential equations but also for functional analysis. Because the


internal structure of the elements of an algebra can be used to establish algebraic properties, some of the most subtle and powerful theorems of algebra are those that assert the existence of isomorphisms.

Isomorphism is an equivalence relation between algebras, and the equivalence classes are called isomorphism types. Isomorphism is a finer equivalence relation than similarity, in the sense that if two algebras are isomorphic, then they are also similar. In fact, among equivalence relations holding between algebras, isomorphism is probably the finest we will encounter; similarity is one of the coarsest.

The formation of subalgebras and of homomorphic images seems to lead to algebras that are no more complicated than those with which the constructions started. By themselves, these constructions do not offer the means to form larger, more elaborate algebras. The direct product construction allows us to construct seemingly more elaborate and certainly larger algebras from systems of smaller ones.

Let I be any set and let A_i be a set for each i ∈ I. The system A = (A_i : i ∈ I) is called a system of sets indexed by I. By a choice function for A we mean a function f with domain I such that f(i) ∈ A_i for all i ∈ I. The direct product of the system A is the set of all choice functions for A. The direct product of A can be designated in any of the following ways:

∏A,    ∏_{i∈I} A_i,    ∏(A_i : i ∈ I).

Each set A_i for i ∈ I is called a factor of the direct product. For each i ∈ I, the ith projection function, denoted by p_i, is the function with domain ∏A such that p_i(f) = f(i), for all f ∈ ∏A. Sometimes we refer to members of ∏A as I-tuples from A, and we write f_i in place of f(i). Observe that if A_i is empty for some i ∈ I, then ∏A is empty. Also note that if I is empty, then ∏A has exactly one element: the empty function. If A_i = B for all i ∈ I, then ∏A is also denoted by B^I and referred to as a direct power of B. In the event that I = {0, 1}, we use A_0 × A_1 to denote ∏A.

Now let I be a set and let A_i be an algebra for each i ∈ I. Moreover, suppose that A_i and A_j are similar whenever i, j ∈ I. So (A_i : i ∈ I) is a system of similar algebras indexed by I. We create the direct product of this system of algebras by imposing operations on ∏A coordinatewise. This is the unique choice of operations on the product set for which each projection function is a homomorphism.
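Concretely (in a sketch of our own devising), the coordinatewise definition for a product of two algebras, each with one binary operation, looks like this; the projections are then automatically homomorphisms.

```python
def product_operation(op0, op1):
    """Coordinatewise operation on pairs: op0 in the first factor, op1 in the second."""
    return lambda a, b: (op0(a[0], b[0]), op1(a[1], b[1]))

add2 = lambda x, y: (x + y) % 2       # operation of Z_2
add3 = lambda x, y: (x + y) % 3       # operation of Z_3
prod = product_operation(add2, add3)  # operation of Z_2 x Z_3

p0 = lambda f: f[0]                   # first projection
a, b = (1, 2), (1, 2)
print(prod(a, b))                              # (0, 1)
print(p0(prod(a, b)) == add2(p0(a), p0(b)))    # True: p0 is a homomorphism
```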

DEFINITION 1.5. Let (A_i : i ∈ I) be a system of similar algebras. The direct product of (A_i : i ∈ I) is the algebra, denoted by ∏A, with the same similarity type, with universe ∏A, such that for each operation symbol Q and all f⁰, f¹, …, f^{r−1} ∈ ∏A, where r is the rank of Q,

(Q^{∏A}(f⁰, f¹, …, f^{r−1}))_i = Q^{A_i}(f⁰_i, f¹_i, …, f^{r−1}_i)

for all i ∈ I. Here are some alternatives for denoting direct products:

∏(A_i : i ∈ I),    ∏_{i∈I} A_i.


If B = A_i for all i ∈ I, we write B^I for the direct product and call it a direct power of B. In case I = {0, 1}, we write A_0 × A_1 in place of ∏A.

Throughout this book we have frequent need to write expressions like

Q^A(a_0, a_1, …, a_{r−1})

where Q is an operation symbol (or a more complicated expression) of the algebra A with rank r and a_0, a_1, …, a_{r−1} ∈ A. Very often the exact rank of Q is of little significance and the expression above is needlessly complex. We replace it by

Q^A(ā)

where ā stands for a tuple of elements of A of the correct length.

The formation of homomorphic images, of subalgebras, and of direct products are the principal tools we will use to manipulate algebras. Frequently, these tools are used in conjunction with each other. For example, let R denote the ring of real numbers and let I denote the unit interval. Then R^I is the ring of all real-valued functions on the unit interval. Going a step further, we can obtain the ring of all continuous real-valued functions on the unit interval as a subalgebra of R^I.

Let K be a class of similar algebras. We use the following notation:

H(K) is the class of all homomorphic images of members of K.
S(K) is the class of all isomorphic images of subalgebras of members of K.
P(K) is the class of all isomorphic images of direct products of systems of algebras belonging to K.

We say that K is closed under the formation of homomorphic images, under the formation of subalgebras, and under the formation of direct products provided, respectively, that H(K) ⊆ K, S(K) ⊆ K, and P(K) ⊆ K. Observe that if K is closed with respect to direct products, then K contains all the one-element algebras of the similarity type, since, in particular, K must contain the direct product of the empty system of algebras.

Let K be a class of similar algebras. We call K a variety if and only if K is closed under the formation of homomorphic images, of subalgebras, and of direct products (i.e., H(K) ⊆ K, S(K) ⊆ K, and P(K) ⊆ K). All of the classes described at the close of the last section are varieties. Varieties offer us a means to classify algebras (that is, to organize them into classes) that is compatible with our chief means for manipulating algebras. The notion of a variety will become one of the central themes of these volumes.

Exercises 1.6
1. Let A and B be algebras. Prove that hom(A, B) = Sub(B × A) ∩ {h : h is a function from A into B}.
2. Let (A_i : i ∈ I) be a system of similar algebras. Prove that p_i is a homomorphism from ∏A onto A_i for each i ∈ I.


3. Let (A_i : i ∈ I) be a system of similar algebras and assume that B is an algebra of the same type with B = ∏A. Prove that if p_i is a homomorphism from B onto A_i for each i ∈ I, then B = ∏A.
4. Let (A_i : i ∈ I) be a system of similar algebras. Let B be an algebra of the same type and h_i be a homomorphism from B into A_i for each i ∈ I. Prove that there is a homomorphism g from B into ∏A such that h_i = p_i ∘ g for each i ∈ I.
5. Prove that every semigroup is isomorphic to a semigroup of functions from X into X, where the operation is composition of functions and X is some set.
6. Prove that every ring is isomorphic to a ring of endomorphisms of some Abelian group. [Hint: Let A be an Abelian group. The sum h of endomorphisms f and g of A is defined so that h(a) = f(a) + g(a) for all a ∈ A, where + is the basic binary operation of A. The product of a pair of endomorphisms is their composition.]
7. Describe all the three-element homomorphic images of (ω, +), where ω is the set of natural numbers.

1.3 GENERATION OF SUBALGEBRAS

Let A be an algebra and let X be an arbitrary subset of the universe A of A. X is unlikely to be a subuniverse of A, since quite possibly there is a basic operation that, when applied to certain elements of X, produces a value outside X. So X may fail to be a subuniverse because it lacks certain elements. As a first step toward extending X to a subuniverse, one might gather into a set Y all those elements that result from applying the operations to elements of X. Then X ∪ Y is no longer deficient in the way X was. The new elements from Y, however, may now be taken as arguments for the basic operations, and X ∪ Y may not be closed under all these operations. But by repeating this process, perhaps countably infinitely many times, X can be extended to a subuniverse of A. With respect to the subset relation, this subuniverse will be the smallest subuniverse of A that includes X, since only those elements required by the closure conditions in the definition of subuniverses are introduced in the process. The subuniverse obtained in this way must be included in every subuniverse of which X is a subset. Thus it may be obtained as the intersection of all such subuniverses. The finitary character of the fundamental operations of the algebra ensures that this iterative process succeeds after only countably many steps.
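The iterative construction just described is easy to carry out mechanically on a finite algebra. The following Python sketch (our own, with invented names) closes a subset under a family of finitary operations, mirroring the stages of the construction.

```python
from itertools import product

def generated_subuniverse(x, operations):
    """Close the set x under each (rank, function) pair in operations,
    mirroring the stages X = X_0, X_1, X_2, ... described above."""
    current = set(x)
    while True:
        new_elements = {
            f(*args)
            for rank, f in operations
            for args in product(current, repeat=rank)
        }
        if new_elements <= current:
            return current
        current |= new_elements

# The subuniverse of (Z_12, +) generated by {4}:
ops = [(2, lambda a, b: (a + b) % 12)]
print(sorted(generated_subuniverse({4}, ops)))   # [0, 4, 8]
```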

THEOREM 1.7. Let A be an algebra and let S be any nonempty collection of subuniverses of A. Then ⋂S is a subuniverse of A.

Proof. Evidently ⋂S is a subset of A. Let F be any basic operation of A and suppose that r is the rank of F. To see that ⋂S is closed under F, pick any a_0, a_1, …, a_{r−1} ∈ ⋂S. For all B ∈ S we know that a_0, a_1, …, a_{r−1} ∈ B; but then F(a_0, a_1, …, a_{r−1}) ∈ B, since B is a subuniverse. Therefore F(a_0, a_1, …, a_{r−1}) ∈ ⋂S, and ⋂S is closed under F. ∎

DEFINITION 1.8. Let A be an algebra and let X ⊆ A. The subuniverse of A generated by X is the set ⋂{B : X ⊆ B and B is a subuniverse of A}. sg^A(X) denotes the subuniverse of A generated by X.

Since X ⊆ A and A is a subuniverse of A, Theorem 1.7 justifies calling sg^A(X) a subuniverse of A. Now we can formalize the discussion that opened this section.

THEOREM 1.9. Let A be an algebra and X ⊆ A. Define X_n by the following recursion:

X_0 = X
X_{n+1} = X_n ∪ {F(ā) : F is a basic operation of A and ā is a tuple from X_n}.

Then sg^A(X) = ⋃{X_n : n ∈ ω}.

Proof. The proof consists of two claims.

CLAIM 0: sg^A(X) ⊆ ⋃{X_n : n ∈ ω}.

Since X ⊆ ⋃{X_n : n ∈ ω}, we need only show that ⋃{X_n : n ∈ ω} is a subuniverse. Let F be a basic operation and let ā be a tuple from ⋃{X_n : n ∈ ω}. Since F has some finite rank and X_0 ⊆ X_1 ⊆ ⋯, we can easily see that ā is a tuple from X_m for some large enough m. But then F(ā) ∈ X_{m+1} ⊆ ⋃{X_n : n ∈ ω}. So this latter set is a subuniverse, as desired.

CLAIM 1: ⋃{X_n : n ∈ ω} ⊆ sg^A(X).

Since sg^A(X) is the intersection of all subuniverses that include X, it suffices to show that X_n ⊆ B for every subuniverse B that includes X and for every natural number n. This can be immediately accomplished by induction on n. ∎

COROLLARY 1.10. Let A be an algebra and X ⊆ A. If a ∈ sg^A(X), then there is a finite set Y such that Y ⊆ X and a ∈ sg^A(Y).

Proof. We will prove by induction on n that

(*) if a ∈ X_n, then a ∈ sg^A(Y) for some finite Y ⊆ X.

Initial Step: n = 0. Take Y = {a}.
Inductive Step: n = m + 1, and we assume without loss of generality that a = F(b̄), where F is a basic operation and b̄ is a tuple from X_m. Letting Y be the union of the finite sets obtained by applying the inductive hypothesis to each element of b̄, we see that b̄ is a tuple from sg^A(Y). Thus a ∈ sg^A(Y), as desired. ∎


COROLLARY 1.11. Let A be an algebra and X, Y ⊆ A. Then

i. X ⊆ sg^A(X).
ii. sg^A(sg^A(X)) = sg^A(X).
iii. If X ⊆ Y, then sg^A(X) ⊆ sg^A(Y).
iv. sg^A(X) = ⋃{sg^A(Z) : Z ⊆ X and Z is finite}.

The properties of sg^A, considered as a unary operation on the power set of A, which have been gathered in this last corollary, are so frequently used that usually no reference will be given. Subuniverses of the form sg^A(Z), where Z is finite, are said to be finitely generated. Part (iv) of this corollary entails that the universe of any algebra is the union of its finitely generated subuniverses.

The set-inclusion relation is a partial order on the collection of all subuniverses of A. This order induces lattice operations of join and meet on the collection of all subuniverses. Some of the fundamental facts concerning this order are easily deduced. They have been gathered in the next corollary.

COROLLARY 1.12. Let A be any algebra, S be any nonempty collection of subuniverses of A, and B be any subuniverse of A. Then

i. With respect to set-inclusion, ⋂S is the largest subuniverse included in each member of S.
ii. With respect to set-inclusion, sg^A(⋃S) is the smallest subuniverse including each member of S.
iii. B is finitely generated if and only if whenever B ⊆ sg^A(⋃T) for any set T of subuniverses of A, then B ⊆ sg^A(⋃T′) for some finite T′ ⊆ T.
iv. Suppose that for all B, C ∈ S there is D ∈ S such that B ∪ C ⊆ D. Then ⋃S is a subuniverse of A. ∎

Parts (i) and (ii) describe, respectively, the meet and join in the lattice of subuniverses. In fact, they describe how to form meets and joins of arbitrary collections of subuniverses rather than just the meets and joins of subuniverses two at a time. The import of (iii) is that the notion of finite generation, which on its face appears to be something "internal" to a subuniverse, can be characterized in terms of the order-theoretic properties of the set of all subuniverses. This last corollary can be deduced from the preceding material with the help of only the following fact: The subuniverses of A are precisely those subsets X of A such that X = sg^A(X).

Suppose B and C are any two subuniverses of A. We define the join of B and C (denoted B ∨ C) by

B ∨ C = sg^A(B ∪ C)

and the meet of B and C (denoted B ∧ C) by

B ∧ C = B ∩ C.


It is not hard to prove, using the last corollary, that the collection of all subuniverses of A endowed with these two operations is a lattice. We call this lattice the lattice of subuniverses of A and denote it by Sub A.
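For a finite algebra, Sub A can be enumerated directly by testing every subset for closure; the following short Python sketch (our own illustration) does this for (Z_6, +).

```python
from itertools import chain, combinations

def subuniverses(universe, op):
    """All subsets of the finite universe closed under the binary operation op
    (the empty set counts here, since there are no constants)."""
    subsets = chain.from_iterable(
        combinations(universe, k) for k in range(len(universe) + 1)
    )
    return [
        set(s) for s in subsets
        if all(op(a, b) in s for a in s for b in s)
    ]

# The subuniverses of (Z_6, +): {}, {0}, {0, 3}, {0, 2, 4}, and Z_6 itself.
print(subuniverses(range(6), lambda a, b: (a + b) % 6))
```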

Exercises 1.13
1. Prove that every subuniverse of (ω, +) is finitely generated, where ω is the set {0, 1, 2, …} of natural numbers.
2. Supply proofs for Corollaries 1.11 and 1.12.
3. A collection C of sets is said to be directed iff for all B, D ∈ C there is E ∈ C such that B ⊆ E and D ⊆ E. C is called a chain of sets provided ⊆ is a linear ordering of C. Let A be an algebra. Prove that the following statements are equivalent:
i. B is a finitely generated subuniverse of A.
ii. If C is any nonempty directed collection of subuniverses of A and B ⊆ ⋃C, then there is D ∈ C such that B ⊆ D.
iii. If C is any nonempty chain of subuniverses of A and B ⊆ ⋃C, then there is D ∈ C such that B ⊆ D.
iv. If C is any nonempty directed collection of subuniverses of A and B = ⋃C, then B ∈ C.
v. If C is any nonempty chain of subuniverses of A and B = ⋃C, then B ∈ C.
(HINT: It may help to prove first that every infinite set M is the union of a chain of its subsets, each of which has cardinality less than the cardinality |M| of M. Zorn's Lemma or some other variant of the Axiom of Choice would be useful at this point.)
4. An algebra A is called mono-unary if it has only one basic operation and that operation is unary. Prove that any infinite mono-unary algebra has a proper subalgebra.

1.4 CONGRUENCE RELATIONS AND QUOTIENT ALGEBRAS

Unlike the formation of subalgebras, the formation of homomorphic images of an algebra apparently involves external considerations. But there is a notion of quotient algebra that captures all homomorphic images, at least up to isomorphism. The constructions using normal subgroups and ideals familiar from the theories of groups and rings provide a clue as to how to proceed in the general setting.

Let h be a homomorphism from A onto B. Define

θ = {(a, a′) : a, a′ ∈ A and h(a) = h(a′)}.

So θ is a binary relation on the universe of A. It is convenient to write a θ a′ in place of (a, a′) ∈ θ. Now θ is easily seen to be an equivalence relation on A, since h is a function with domain A. Because h is a homomorphism, θ has an additional property called the substitution property for A:

7

Preliminaries

ni,,

with A, # @ for al1 iE 1, then A, # @. We assume the validity of this axiorn. Z, Q, R, C denote respectively the set of al1 integers O, 1, 2, 3, the set of al1 rational numbers, the set of al1 real numbers, and the set of al1 complex numbers. The union of a family (or set) F of sets, UF, is defined by x E UF iff x E B for some B EF. The intersection of a family F c 9(A), written OF, is defined (dually to the union) to be the set { X EA: X E B for al1 B EF}. (If F is nonempty (F # @), then this is independent of A.) An order over a set A is a binary relation I on A such that a. I is reflexive over A; Le., (x, x) E I for al1 x E A. b. I is anti-symmetric; i.e., if (x, y) E I and (y, x) E 1, then x = y. c. I is transitive; i.e., (x, y) E I and (y, z) E 5 always imply (x, Z) E < . For orders (and binary relations more generally), we often prefer to write x 4 y in place of (x, y) E I. Given an order < over a nonempty set A, the pair (A, I) is called an ordered set. (Ordered sets are frequently called "partially ordered" sets in the literature.) By a chain in an ordered set (A, 5 ) is meant a set C G A such that for al1 x, y E C either x < y or y I x. An upper bound of C is an element u E A for which c i u for al1 c E C. Zorn's Lemma is the statement that if (A, 5) is an ordered set in which ) has a maximal element m (i.e., every chain has an upper bound, then (A, I m~ A and m 5 X'E A implies m = x). We take this statement as an axiom. The Hausdorff Maximality Principle is the statement that every ordered set has a maximal chain. More precisely, let (A, I )be an ordered set and let L denote the family of al1 chains in (A, 5 ) . The principle states that the ordered set (L, S) has a maximal element. The Hausdorff Maximality Principle is equivalent to Zorn's Lemma. A linearly ordered set (also called a chain) is an ordered set (A, I ) in which every pair of elements x and y satisfy either x I y or y i x. A well-ordered set is a linearly ordered set (A, I ) such that every nonempty subset B c A has a least element 1 (i.e., 1 E B and 1 I x for al1 x E B). Concerning ordinals: a. The ordinals are generated from the empty set @ using the operations of successor (the successor of x is S(x) = x U {x}) and union (the union of any set of ordinals is an ordinal). b. O = @ (the empty set), 1 = S(O), 2 = S(l), . The finite ordinals are 0, 1,2, - , also called natural numbers or non-negative integers. c. Every set of ordinals is well-ordered by setting a I /3 (a and /3 are ordinals) iff a = /3 or a ~ / 3Then . O 5 1I 2 and we have n = {O, 1;--,n - 1) for each finite ordinal n 2 1. d. The least infinite ordinal is w = {O, 1,2, which is the set of al1 finite ordinals.

+ + +

- m - ,

a},

s.,


on A and g be the quotient map from A onto A/θ. Then

i. The kernel of h is a congruence relation on A.
ii. The quotient map g: A ↠ A/θ is a homomorphism from A onto A/θ.
iii. If θ = ker h, then the unique function f from A/θ onto B satisfying f ∘ g = h is an isomorphism from A/θ onto B.

Figure 1.1  (ker h represented as a partition of A, with the quotient A/θ.)

Proof. As the various definitions were virtually designed to make (i) and (ii) true, we will look only at (iii). Since we want f ∘ g = h and since g is the quotient map, the only option is to define f by f(a/θ) = h(a) for all a ∈ A. To see that this definition is sound, suppose that a θ a′. Then h(a) = h(a′), since θ = ker h. Thus f(a/θ) = h(a) = h(a′) = f(a′/θ), as desired. f is one-to-one, since f(a/θ) = f(a′/θ) implies h(a) = h(a′) and, as ker h = θ, we have a/θ = a′/θ. Finally, to demonstrate that f is a homomorphism, let Q be an operation symbol and ā be a tuple from A. Then

f(Q^{A/θ}(ā/θ)) = f(Q^{A/θ}(g(ā)))
              = f(g(Q^A(ā)))
              = h(Q^A(ā))
              = Q^B(h(ā))
              = Q^B(f(g(ā)))
              = Q^B(f(ā/θ)).

Thus, f is a homomorphism and hence an isomorphism from A/θ onto B. ∎


So the congruence relations on the ring of integers are exactly the relations ≡_q, where q is a non-negative integer. The homomorphic images of Z are, up to isomorphism, Z itself, the one-element ring, and the rings of residues modulo integers greater than 1.
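A finite spot-check (not a proof) of the substitution property of ≡_q for the ring operations is easy to write down; the Python sketch below, with names and sample pairs of our own choosing, checks it for q = 6.

```python
def congruent(a, b, q):
    """a is congruent to b modulo q: q divides a - b."""
    return (a - b) % q == 0

def has_substitution_property(q, pairs):
    """Check the substitution property of congruence mod q for + and * on sample pairs."""
    ok = True
    for (a, a2) in pairs:
        for (b, b2) in pairs:
            if congruent(a, a2, q) and congruent(b, b2, q):
                ok = ok and congruent(a + b, a2 + b2, q)
                ok = ok and congruent(a * b, a2 * b2, q)
    return ok

sample = [(7, 19), (-3, 9), (4, 4), (5, 17)]
print(has_substitution_property(6, sample))   # True on this sample
```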

2. Let R = (R, ∧, ∨) where R is the set of real numbers and

r ∧ s = min{r, s}   and   r ∨ s = max{r, s}   for all r, s ∈ R.

R is a lattice. We wish to describe all the congruence relations on R. So let θ be a congruence relation. Suppose r θ s and r ≤ t ≤ s. Then r = (r ∧ t) θ (s ∧ t) = t, and so r θ t. This means that the congruence classes of θ are convex; that is, they are intervals, perhaps infinite or even degenerate. Pick an arbitrary element from each congruence class. It is evident that the set of selected elements forms a subalgebra of R isomorphic to R/θ.

Now let θ be any equivalence relation on R such that each θ-equivalence class is a convex set of real numbers. To verify the substitution property, let r θ r′ and s θ s′.

CASE 0: r θ s.

The substitution property is immediate, since r ∧ s, r ∨ s, r′ ∧ s′, and r′ ∨ s′ all belong to {r, s, r′, s′} ⊆ r/θ.

CASE 1: r and s lie in different equivalence classes modulo θ.

The two θ-classes, which are intervals, cannot overlap. Thus, without loss of generality, we assume that every element of r/θ is less than every element of s/θ. Hence

r ∧ s = r θ r′ = r′ ∧ s′   and   r ∨ s = s θ s′ = r′ ∨ s′.

Therefore (r ∧ s) θ (r′ ∧ s′) and (r ∨ s) θ (r′ ∨ s′), as desired, so θ is a congruence relation. Thus the congruence relations of R are exactly those equivalence relations whose equivalence classes are convex sets of real numbers. Since any proper convex subset of R is a congruence class of infinitely many congruence relations, R provides a strong contrast with the behavior of congruence relations on groups.

For most algebras, the task of describing all the congruence relations is hopelessly difficult, so these two examples have a rather special character. The collection of all congruence relations of an algebra is a rich source of information about the algebra; discovering the properties of this collection often leads to a deeper understanding of the algebra.


Just as the subuniverses of an algebra form a lattice, so do the congruence relations. Roughly the same analysis can be used. Let A be an algebra and let X be a binary relation on A. Now X may fail to be a congruence relation, either because it is not an equivalence relation on A or because it does not have the substitution property. In either case, the failure can be traced to the existence of an ordered pair that fails to belong to X but that must belong to any congruence relation that includes X. All such necessary ordered pairs can be gathered into a set Y, and X ∪ Y is at least not subject to the same deficiencies as X. Yet X ∪ Y may fail transitivity or the substitution property. But by repeating the process, perhaps countably infinitely often, a congruence relation will be built; with respect to set inclusion, it will be the smallest congruence relation on A that includes X.
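On a finite algebra this closure can again be computed directly. The Python sketch below (our own illustration, with invented names) generates the congruence containing a given set of pairs on a finite algebra with one binary operation by alternately closing under reflexivity, symmetry, transitivity, and the substitution property.

```python
def generated_congruence(universe, op, pairs):
    """Smallest congruence of (universe, op) containing the given pairs."""
    theta = {(a, a) for a in universe}
    theta |= set(pairs) | {(b, a) for (a, b) in pairs}
    while True:
        new = set(theta)
        # transitivity
        new |= {(a, c) for (a, b1) in theta for (b2, c) in theta if b1 == b2}
        # substitution property for the binary operation
        new |= {(op(a, c), op(b, d)) for (a, b) in theta for (c, d) in theta}
        if new == theta:
            return theta
        theta = new

# The congruence of (Z_6, +) generated by identifying 0 with 2:
theta = generated_congruence(range(6), lambda a, b: (a + b) % 6, [(0, 2)])
blocks = {frozenset(b for b in range(6) if (a, b) in theta) for a in range(6)}
print(sorted(sorted(block) for block in blocks))   # [[0, 2, 4], [1, 3, 5]]
```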

THEOREM 1.18. Let A be an algebra and C be any nonempty collection of congruence relations on A. Then ⋂C is a congruence relation on A. ∎

The routine proof of this theorem is left as an exercise. This theorem allows us to proceed as we did with subuniverses.

DEFINITION 1.19. Let A be an algebra and X ⊆ A × A. The congruence relation on A generated by X is the set

⋂{θ : X ⊆ θ and θ is a congruence relation on A}.

cg^A(X) denotes the congruence relation on A generated by X.

As we did with subuniverses, we can formalize the discussion above to obtain a description of how to extend a binary relation to obtain the congruence relation it generates. This is complicated by the necessity of arriving at an equivalence relation. For the purposes of convenience in dealing with congruences, both here and in general, we introduce some notation. Let A be an algebra and θ be a binary relation on A. Let ā and ā′ be tuples from A of the same length, say r. So

ā = (a_0, a_1, …, a_{r−1})   and   ā′ = (a′_0, a′_1, …, a′_{r−1}).

We use

ā θ ā′

in place of the more elaborate

a_0 θ a′_0, a_1 θ a′_1, …, a_{r−1} θ a′_{r−1}.

Thus ā θ ā′ stands for "a_i θ a′_i for all i < r." Using this notation, we can rephrase the substitution property as: ā θ ā′ implies F(ā) θ F(ā′), for all basic operations F and all tuples ā and ā′.


THEOREM 1.20. Let A be an algebra and X ⊆ A × A. Define X_n by the following recursion:

X_0 = X ∪ {(a, a′) : (a′, a) ∈ X} ∪ {(a, a) : a ∈ A}
X_{n+1} = X_n ∪ T_n ∪ Q_n, where
Q_n = {(F(ā), F(ā′)) : F is a basic operation and ā and ā′ are tuples such that ā X_n ā′} and
T_n = {(a, c) : a X_n b X_n c for some b ∈ A}.

Then cg^A(X) = ⋃{X_n : n ∈ ω}. ∎

The proof of this theorem is much like the proof of Theorem 1.9, the only new element being an easy argument about transitive closures.

COROLLARY 1.21. Let A be an algebra and X ⊆ A × A. If (a, a′) ∈ cg^A(X), then there is a finite set Y ⊆ X such that (a, a′) ∈ cg^A(Y). ∎

COROLLARY 1.22. Let A be an algebra and X, Y ⊆ A × A. Then

i. X ⊆ cg^A(X).
ii. cg^A(cg^A(X)) = cg^A(X).
iii. If X ⊆ Y, then cg^A(X) ⊆ cg^A(Y).
iv. cg^A(X) = ⋃{cg^A(Z) : Z ⊆ X and Z is finite}.

Congruence relations of the form cg^A(Z), where Z is finite, are said to be finitely generated. Those of the form cg^A({(a, a′)}) are called principal congruence relations. This last piece of notation is obviously awkward. We replace it by cg^A(a, a′). Evidently, cg^A(a, a′) is the smallest congruence relation that places a and a′ in the same congruence class, or, as we shall say more suggestively, cg^A(a, a′) is the smallest congruence collapsing (a, a′).

COROLLARY 1.23. Let A be an algebra, C be any nonempty collection of congruence relations on A, and θ be any congruence relation on A. Then

i. With respect to set-inclusion, ⋂C is the largest congruence relation on A included in each member of C.
ii. With respect to set-inclusion, cg^A(⋃C) is the smallest congruence relation including each member of C.
iii. θ is finitely generated if and only if whenever D is a set of congruences on A and θ ⊆ cg^A(⋃D), then θ ⊆ cg^A(⋃D′) for some finite D′ ⊆ D.
iv. Suppose that for each φ, ψ ∈ C there is η ∈ C such that φ ∪ ψ ⊆ η. Then ⋃C is a congruence relation on A. ∎

The proofs of these three corollaries do not differ significantly from the proofs of the three corollaries to Theorem 1.9.

Suppose φ and ψ are congruence relations on the algebra A. We can define the join (designated by ∨) and the meet (designated by ∧) of φ and ψ so that


φ ∨ ψ = cg^A(φ ∪ ψ)   and   φ ∧ ψ = φ ∩ ψ.

With these operations, the collection of all congruence relations on A becomes a lattice, which we shall call the congruence lattice of A and denote by Con A. Every congruence relation of A is an equivalence relation on A and, as we saw in Exercise 1.2(7), the equivalence relations on A constitute a lattice Eqv A. In fact, Con A is a sublattice of Eqv A. The meet operation in both Con A and Eqv A is just set-theoretic intersection. To establish the contention that Con A is a sublattice of Eqv A, it is necessary to prove that the join operation in Con A is the restriction of the join operation in Eqv A. This is the import of the next theorem, and it even applies to joins of arbitrary subsets of Con A. Note that we write Con A for the set of all congruence relations on A (the universe of the lattice Con A). By the same token, we write Sub A for the set of all subuniverses of A, and Eqv A for the set of all equivalence relations over the set A.

THEOREM 1.24. Let A be an algebra and let C ⊆ Con A.

i. Con A = Sub(A × A) ∩ Eqv A.
ii. cg^A(⋃C) = ⋂{R : ⋃C ⊆ R and R ∈ Eqv A}.

Proof. i. This is just a restatement of the definition of a congruence relation, using the language of subuniverses and direct products.

ii. Let θ = ⋂{R : ⋃C ⊆ R and R ∈ Eqv A}. Thus θ is the smallest equivalence relation on A that includes ⋃C. Since it is clear that ⋃C ⊆ θ ⊆ cg^A(⋃C), we need only establish that θ is a congruence on A. In view of part (i) of this theorem, it only remains to establish that θ is a subuniverse of A × A.

The transitive closure of relations was described in the Preliminaries. A more detailed analysis is used here. Let D be any collection of relations on A, that is, subsets of A × A. Let D* denote the set of all those relations that can be obtained as compositions (i.e., relational products) of finite nonempty sequences of relations belonging to D. Thus φ_0 ∘ φ_1 ∘ ⋯ ∘ φ_{n−1}, where φ_i ∈ D for each i < n, is a typical element of D*. Checking that ⋃D* is actually the transitive closure of ⋃D presents no difficulties. This analysis of the transitive closure leads immediately to the following conclusion: If D consists entirely of symmetric reflexive relations on A, then the transitive closure of ⋃D is also symmetric and reflexive and is, therefore, the smallest equivalence relation including ⋃D. In particular, θ is the transitive closure ⋃C* of ⋃C.

Now consider any two subuniverses φ and ψ of A × A. φ ∘ ψ must be a subuniverse of A × A as well, since if F is any basic operation of A and n is the rank of F and if a_i φ b_i ψ c_i for all i < n, then

F(ā) φ F(b̄) ψ F(c̄),   and hence F(ā) (φ ∘ ψ) F(c̄).

By the obvious inductive extension of this fact, if D consists entirely of subuniverses of A × A, then so does D*. Moreover, if every member of D is


reflexive, then D* is directed upward by set-inclusion (see Exercise 1.13(3)). In particular, this means that C* is a collection of subuniverses of A × A and that for any φ and ψ in C* there is γ in C* such that φ ∪ ψ ⊆ γ. Hence, by Corollary 1.12(iv), ⋃C* is a subuniverse of A × A. That is, θ is a subuniverse of A × A, as desired. ∎

Let A be any algebra. With A, we can associate four other algebras: End A, which is the monoid of all endomorphisms of A; Aut A, which is the group of all automorphisms of A; Sub A, which is the lattice of all subuniverses of A; and Con A, which is the lattice of all congruence relations on A. Chapter 2 is devoted to the rudiments of the abstract theory of lattices, and Chapter 3 takes up some aspects of the theories of monoids and of groups. These four algebras related to A contain significant information about A.

Exercises 1.25
1. Let h ∈ hom(A, B). Prove that ker h is a congruence relation on A.
2. Let θ ∈ Con A. Prove that the quotient map from A onto A/θ is a homomorphism from A onto A/θ and that its kernel is θ.
3. Let G be a group, θ ∈ Con G, and N be a normal subgroup of G. Prove that e/θ is a normal subgroup of G, where e is the unit of the group. Prove that {(a, b) : a · b⁻¹ ∈ N} is a congruence relation on G. Finally, prove that if φ ∈ Con G, then φ = θ iff e/φ = e/θ.
4. Verify that ≡_q is a congruence relation on the ring Z of integers for every natural number q.
5. Suppose θ ∈ Con A. Prove that θ = ⋃{cg^A(a, a′) : a θ a′}. Is every subuniverse the join of 1-generated subuniverses?
6. Let A be an algebra and h ∈ hom(A, A). Prove that h is one-to-one iff ker h = 0_A, where 0_A denotes the least congruence relation on A, namely the identity relation {(a, a) : a ∈ A}.
7. Let A be an algebra. Define F to be the function with domain End A such that

F(h) = h⁻¹ ∘ h   for all h ∈ End A.

Prove that F maps End A into Con A.
*8. (Burris and Kwatinetz) Let A be an algebra that is countable (i.e., finite or countably infinite) and has only countably many basic operations. Prove each of the following:
i. |Aut A| ≤ ω or |Aut A| = 2^ω
ii. |Sub A| ≤ ω or |Sub A| = 2^ω
iii. |End A| ≤ ω or |End A| = 2^ω
iv. |Con A| ≤ ω or |Con A| = 2^ω
where ω is the cardinality of the set of natural numbers and 2^ω is the cardinality of the set of real numbers.

CHAPTER TWO

Lattices

2.1 FUNDAMENTAL CONCEPTS

Lattices arise often in algebraic investigations. We have already seen that Sub A and Con A are lattices for every algebra A. Knowing the significance of normal subgroups in group theory and ideals in ring theory, we should not be surprised that lattices of congruences have an important role to play. By itself, this would justify a detailed development of lattice theory. It turns out that, in addition to congruence lattices, many other lattices prove useful in developing a general theory of algebras. This chapter is an introduction to lattice theory, focusing on results that will be put to use in these volumes. §§4.6 and 4.8 further elaborate some aspects of lattice theory introduced here. Lattice theory is a rich subject in its own right. We can highly recommend Birkhoff [1967], Crawley and Dilworth [1973], and Grätzer [1978] for fuller expositions of lattice theory.

Lattices were defined in Chapter 1 as algebras of the form (L, ∧, ∨), with two binary operations called meet (designated by ∧) and join (designated by ∨), for which the following equalities are true for all a, b, c ∈ L:

a ∧ b = b ∧ a            a ∨ b = b ∨ a
a ∧ (b ∧ c) = (a ∧ b) ∧ c    a ∨ (b ∨ c) = (a ∨ b) ∨ c
a ∧ a = a                a ∨ a = a
a ∧ (a ∨ b) = a            a ∨ (a ∧ b) = a

The equalities on the first line express commutativity, those on the second line associativity, those on the third line idempotency, and those on the last line absorption. These equalities are called the axioms of lattice theory. An alternative system of notation in common use denotes join by "+" and meet by "·" (or simply by juxtaposition).

Lattices can also be viewed as special ordered sets. Let L be a nonempty set and ≤ be a binary relation on L. The relation ≤ is said to be an order on L if and only if for all a, b, c ∈ L

9

Preliminaries

(x, y ) ~p may be indicated by connecting points labeled x and y with a line segment and labeling the segment with p.) e. (Eqv A, c.) is an ordered set having greatest lower bounds and least upper bounds for any subset of its elements. The greatest lower bound of a set S c Eqv A is The least upper bound of S is the transitive closure of U S (see (3e)). 16. The equality symbol (=) is used to assert that two expressions name the same object. The formal equality symbol (Ñ) is used to build equations, such as the commutative law x y Ñ y x, which can only become true or false after we assign specific values to their symbols and ask whether the two sides name the same object. (See $4.11 in Chapter 4.)

OS.


3. Let L be a lattice and let a, b, c, d E L. Prove that if a < b and c < d, then a~c = (m(a7 p(a7 b),q(a7 b)),m(d7 c, 4 ) = m((a,4,(p(a, b),c ) , (q(a7 b),c ) ) e m(@, 4,( p ( a ,b),c ) , (q(a7 b),c ) ) = (m(b7 p(a, b),i ( a ,b)),m(d,c7 4) = P(b,a),q(b7 a)),m(d7 c, 4 ) = (b,c). Using the claim, it is easy to prove that 8, E Con Lo. In a similar way, we define 8, E Con L,. Now to see that h(0,, 0,) = 0, just follow the implications below:

(a_0, a_1) h(θ_0, θ_1) (b_0, b_1)
⟹ a_0 θ_0 b_0 and a_1 θ_1 b_1
⟹ (a_0, c) θ (b_0, c) and (d, a_1) θ (d, b_1) for all d ∈ L_0 and all c ∈ L_1
⟹ (a_0, a_1) θ (b_0, a_1) θ (b_0, b_1)
⟹ (a_0, a_1) θ (b_0, b_1).

Hence h(θ_0, θ_1) ⊆ θ. Now suppose (a_0, a_1) θ (b_0, b_1). So

= = =

b1

)7

~ (

( ~ ( ~ 0 7 ),(q(b07 ~ 0 q(bo> , U,)),

al

))

(bo,a,).

Thus a_0 θ_0 b_0. Similarly, a_1 θ_1 b_1. Therefore h(θ_0, θ_1) = θ. ∎


2.6 Congruence Relations on Lattices

This theorem holds for a wider class of algebras than lattices. In fact, the crucial requirement is the existence of several expressions [m(x, y, z), p(x, y), and q(x, y)] built from variables and the fundamental operations for which certain equalities hold in both the factor algebras. This theorem supplies us with a strategy for describing Con L for certain lattices L. In the first step, we try to write L, up to isomorphism, as a direct product of lattices that cannot themselves be written as direct products of other lattices. The second step then consists of analyzing Con L in the case that L cannot be factored further. Let A be an algebra. A is directly indecomposable iff A has more than one element and if A ≅ B × C then either B has only one element or C has only one element. Carrying out the first step in our strategy is easy, if we impose some kind of finiteness condition on L.

THEOREM 2.71. Every lattice of finite height is isomorphic to a direct product of finitely many directly indecomposable lattices. ∎

This theorem can be proven by a straightforward induction on height. The proof is left in the hands of the reader. The second step in the strategy requires more work and additional hypotheses to obtain useful conclusions. We call an algebra A simple iff Con A has exactly two elements. Every simple algebra is directly indecomposable (just consider the kernels of the projection functions), but it is not hard to invent finite modular lattices that are directly indecomposable (by virtue of having a prime number of elements) but fail to be simple.

THEOREM 2.72. (R. P. Dilworth [1950].) Every directly indecomposable relatively complemented lattice of finite height is simple.

Proof. Let L be a directly indecomposable relatively complemented lattice of finite height. Let θ be a congruence on L different from the identity relation. Pick u < v so that u θ v. Let u* be a complement of u in I[0, v]. Now observe that

0 = u ∧ u* θ v ∧ u* = (u ∨ u*) ∧ u* = u* ≠ 0.

Thus, {x : 0 θ x ≠ 0} is not empty. Since L has finite length, in view of Theorem 2.6 pick a to be a maximal element of {x : 0 θ x}. Since this set is an ideal, it is easy to see that a is the largest element in the ideal. Theorem 2.57 suggests that L might be decomposable as I(a] × I[a). That theorem requires that a have a complement and that L be distributive. We are still able to carry out roughly the same argument, the difficult point being to establish enough "distributivity."

CLAIM 0: x θ y iff (x ∧ y) ∨ a′ = x ∨ y for some a′ ≤ a.

Suppose that x θ y. Let a′ be a complement of x ∧ y in I[0, x ∨ y]. Then a′ = a′ ∧ (x ∨ y) θ a′ ∧ (x ∧ y) = 0. So a′ ≤ a by the maximality of a. The converse is immediate.

CLAIM 1: a ∨ (x ∧ y) = (a ∨ x) ∧ (a ∨ y) for all x, y ∈ L.

Observe that


x = 0 ∨ x θ a ∨ x and y = 0 ∨ y θ a ∨ y, and so x ∧ y θ (a ∨ x) ∧ (a ∨ y). By Claim 0, pick a′ ≤ a so that

((x ∧ y) ∧ ((a ∨ x) ∧ (a ∨ y))) ∨ a′ = (x ∧ y) ∨ ((a ∨ x) ∧ (a ∨ y)).

Using the absorption axioms, we obtain (x ∧ y) ∨ a′ = (a ∨ x) ∧ (a ∨ y). Finally, using this equation, we obtain

(a ∨ x) ∧ (a ∨ y) = (x ∧ y) ∨ a′ ≤ (x ∧ y) ∨ a ≤ (a ∨ x) ∧ (a ∨ y),

and therefore a ∨ (x ∧ y) = (a ∨ x) ∧ (a ∨ y). Thus, a possesses some distributivity in L. We need to know that a satisfies the dual of Claim 1 as well. To accomplish this, we will characterize the property in Claim 1 in a self-dual way. Call an element b distributive iff b ∨ (x ∧ y) = (b ∨ x) ∧ (b ∨ y) for all x, y ∈ L.

CLAIM 2: b is distributive iff no nontrivial subinterval of I(b] is projective with a nontrivial subinterval of I[b).

Suppose first that b is distributive and that I[x, y] is a subinterval of I(b] and I[u, v] is a subinterval of I[b) such that I[x, y] and I[u, v] are projective. Define φ = {(z, w) : z ∨ b = w ∨ b}. Because b is distributive, φ is a congruence relation on L. Now x φ y, so u φ v. But this means that u = u ∨ b = v ∨ b = v. Therefore, I[u, v] is trivial and since I[x, y] is projective with I[u, v], it follows that I[x, y] is trivial, too.

For the converse, suppose that b is not distributive. Pick x and y so that b ∨ (x ∧ y) < (b ∨ x) ∧ (b ∨ y). Let ψ = cg^L(0, b). Then b ∨ (x ∧ y) ψ (b ∨ x) ∧ (b ∨ y). Use Theorem 2.66 to pick a sequence

b ∨ (x ∧ y) = e_0 ≤ e_1 ≤ ⋯ ≤ e_n = (b ∨ x) ∧ (b ∨ y)

such that I[e_i, e_{i+1}] is weakly projective into I(b] for each i < n. Since b ∨ (x ∧ y) < (b ∨ x) ∧ (b ∨ y), choose j < n with e_j < e_{j+1}. So I[e_j, e_{j+1}] is a nontrivial subinterval of I[b) that is weakly projective into I(b]. By Theorem 2.68, I[e_j, e_{j+1}] is projective with a subinterval of I(b], as desired.

CLAIM 3: a ∧ (x ∨ y) = (a ∧ x) ∨ (a ∧ y) for all x, y ∈ L.

CLAIM 4: If a ∧ x = a ∧ y and a ∨ x = a ∨ y, then x = y.


Observe that

x ∧ y = 0 ∨ (x ∧ y) θ a ∨ (x ∧ y) = (a ∨ x) ∧ (a ∨ y) = a ∨ y θ 0 ∨ y = y.

By Claim 0, pick a′ ≤ a so that

(x ∧ y) ∨ a′ = y.

Therefore a′ ≤ y and a′ ≤ y ∧ a = x ∧ a ≤ x. This means that a′ ≤ x ∧ y. Consequently, x ∧ y = y. By a similar argument, x ∧ y = x. Therefore x = y.

Now define h: L → I(a] × I[a) by h(x) = (x ∧ a, x ∨ a). Easy computations using Claims 1 and 3 reveal that h is a homomorphism. Claim 4 is virtually the statement that h is one-to-one. The fact that h is onto I(a] × I[a) follows as in Theorem 2.57, using Claims 1 and 3 above in place of the distributivity of L. Since L is directly indecomposable and a ≠ 0, we deduce that I[a) has only one element. Therefore a = 1 and θ is L × L. Hence L has just two congruences; it is simple. ∎

COROLLARY 2.73. Every relatively complemented lattice of finite height is isomorphic to a direct product of simple relatively complemented lattices of finite height. ∎

G. Birkhoff [1935b] and K. Menger [1936] had earlier established that every complemented finite dimensional lattice is isomorphic to a direct product of simple lattices. The complemented finite dimensional simple modular lattices turn out to be the two-element chain and the subspace lattices of nondegenerate finite dimensional projective geometries. See §4.8 for an account of this important result. Combined with Theorem 2.70, Corollary 2.73 yields the following corollary.

COROLLARY 2.74. (R. P. Dilworth [1950].) If L is a relatively complemented lattice of finite height, then Con L is a Boolean lattice. ∎

The corollary can be drawn as well from the following theorem.

THEOREM 2.75. If L is a lattice with the finite chain condition and the projectivity property, then Con L is a Boolean lattice.

Proof. First suppose a ≺ b in L. Let c < d and (c, d) ∈ cg^L(a, b). According to Theorem 2.66, pick a finite sequence c = e_0 ≤ e_1 ≤ ⋯ ≤ e_n = d such that each I[e_i, e_{i+1}] is projective with a subinterval of I[a, b]. Since a ≺ b, there are no proper subintervals, and therefore I[a, b] is projective with some subinterval I[e_i, e_{i+1}] of I[c, d]. Again by Theorem 2.66, we have (a, b) ∈ cg^L(c, d), so cg^L(a, b) is an atom in Con L. Since L has the finite chain condition, there is a finite maximal chain from 0 to 1; let 0 = d_0 ≺ d_1 ≺ ⋯ ≺ d_n = 1 be one such chain. Apparently,

cg^L(0, 1) = cg^L(d_0, d_1) ∨ cg^L(d_1, d_2) ∨ ⋯ ∨ cg^L(d_{n−1}, d_n).

This means that Con L is a bounded distributive lattice in which the top element is the join of finitely many atoms. By Theorem 2.40, Con L is relatively complemented. Therefore Con L is a Boolean lattice. ∎


COROLLARY 2.76. If L is a modular lattice of finite height, then Con L is Boolean. ∎

Exercises 2.77
1. Prove that if a ≺ b in the lattice L and θ ∈ Con L, then either a/θ = b/θ or a/θ ≺ b/θ in L/θ.
2. Prove that every lattice of finite height is isomorphic to a direct product of finitely many directly indecomposable lattices of finite height. Does a similar assertion hold for lattices satisfying the ascending chain condition?
3. Give an example of a finite directly indecomposable modular lattice that is not simple.
4. Let L be a lattice. For each ideal I of L, define

Θ(I) = {(a, b) : a ∨ c = b ∨ c for some c ∈ I}

and for each congruence θ ∈ Con L, define

I(θ) = {a : a/θ ≤ b/θ in L/θ for all b ∈ L}.

a. Prove that if L is distributive and I is an ideal of L, then Θ(I) ∈ Con L.
b. Prove that I(θ) ∈ Idl L, for all θ ∈ Con L.
c. Prove that the following are equivalent:
   i. L is distributive.
   ii. For each ideal I of L, Θ(I) ∈ Con L and I(Θ(I)) = I.
   iii. Every ideal of L has the form I(θ) for some θ ∈ Con L.
5. Let L be a lattice and a ∈ L. a is said to be standard iff

Recall that a is said to be distributive iff

Thus an element is called distributive, standard, or neutral, depending on whether a certain part of the distributive law is true when that element is present. a. Prove that a is distributive iff { ( b , c ) : a v b = a v c} is a congruence relation on L. b. Prove that a is standard iff a is distributive and for al1 b, c E L, if a A b = a A c and a v b = a v c, then b = c. c. Prove that a is neutral iff for any b, c E L the sublattice generated by {a,b, c} is distributive. d. Prove that every neutral element is standard and that every standard element is distributive.


CHAPTER THREE

Unary and Binary Operations

3.1 INTRODUCTION

In the two previous chapters, we introduced the reader to some of the fundamental kinds of algebras with which we will be dealing throughout this work, such as lattices, semilattices, and Boolean algebras. Before turning to a formal presentation of the basics in Chapter 4, we devote this chapter to some other kinds of naturally occurring algebras. Such examples are central and important, because they help provide the diversity that is necessary for the discovery of general results in algebra. The idea of an algebra allows arbitrarily many operations of arbitrary rank, and yet, as we remarked at the beginning of Chapter 1, the surprising fact is that all of the classical algebras are built with unary and binary operations, especially binary ones. Moreover, except for R-modules, the classical algebras require only one or two binary operations and usually no unary operations. (For each r in the ring R of scalars, R-modules have one operation f_r of multiplication by r.) The same is true for the lesser-known algebras we will describe in this chapter. Experience has thus shown that almost all of the diversity of the theory of algebras already occurs for one or two binary operations. In this chapter, we provide a representative sampling of this diversity and at the same time describe some special kinds of algebras that will be useful for our later work.

Before leaving this introduction, we will briefly address the question, which we mentioned in Chapter 1, of whether there is any mathematical basis for thinking of binary operations as special. (We will address this question more systematically when we come to study the algebraic representation of lattices, monoids, and groups in later volumes.) We may especially ask whether a single binary operation is inherently different from some other finite collection of operations, such as a single ternary operation or two binary operations, and whether a finite collection of operations is inherently different from an infinite collection. To understand the relationships between operations of various arities, it helps to have the notion of a term operation of an algebra A = (A, F_0, F_1, …),


which we now present somewhat informally. (For a more formal presentation, see Definition 4.2 in §4.1.) The set of term operations is the smallest set that contains each F_i, that contains, for 0 ≤ i < n, the coordinate projection operation

p_i^n(x_0, …, x_{n−1}) = x_i,

and that is closed under composition. (Another way of saying this is that every operation formed by composition from projections and A-operations is a term operation of A, and nothing else is a term operation of A.) Thus, for example, if G is an n-ary term operation and H_1, …, H_n are m-ary term operations, and if H is defined by H(x̄) = G(H_1(x̄), …, H_n(x̄)), then H is also an m-ary term operation of A. A term operation F of A may also be called a derived operation of A, and we sometimes say that F is derived from the operations F_0, F_1, and so on.

The easiest distinction is that between unary operations and those of higher rank. Obviously, composition of unary operations yields only unary operations. It is not much more difficult to check that every operation F derived from unary operations depends on only one variable; i.e., for each such F there exists j < k and a unary operation f such that F(x_0, …, x_{k−1}) = f(x_j) for all x_0, …, x_{k−1}. Obviously, a cancellative binary operation (such as a group operation) on a set of more than one element cannot depend on only one variable and hence cannot be derived from unary operations. For another intrinsic difference between unary operations and those of higher rank, we mention the obvious fact that the subuniverse lattice of every unary algebra is distributive (see Exercise 1 below). On the other hand, the subgroups of even a very small group such as Z_2 × Z_2 form a nondistributive lattice. When we consider how operations are derived from binary operations, however, the facts are altogether different, for we have the following Theorem of W. Sierpiński [1945].

S

,

,

S ,

THEOREM 3.1. For every set A, every operation F on A is derivable from on A. ( A , F,, F,, - ) for some binary operations F,, F,, Proof. Given a set A and F : Ak -+A, we need to show how to build F from binary operations. There are two constructions of F, depending on whether A is finite or infinite. The one for infinite A is straightforward, and we give the reader some hints for finding it in Exercise 2 below. For finite A, a number of proofs are known. It is even possible to show that for each finite set A there exists a single binary operation B such that every F can be built from B (Exercise 3 below), but as far as we know the proof of this fact is fairly complicated. At the cost of using two binary operations and 1 Al unary operations instead of a single binary operation, we will provide an entirely elementary proof of Sierpinski's Theorem. Obviously, if the result holds for one set of N elements, then it holds for al1 sets of N elements, and so we may assume that A is the set N = (0,1, N - 11, with + and defined on A by ordinary addition and multiplication modulo N. We then take our algebra to be (A, +, t,, (a E A)) where, for each a E N, t, is s.,


1.4 Congruence Relations and Quotient Algebras

So the congruence relations on the ring of integers are exactly the relations ≡_q, where q is a non-negative integer. The homomorphic images of Z are, up to isomorphism, Z itself, the one-element ring, and the rings of residues modulo integers greater than 1.

2. Let R = (R, ∧, ∨), where R is the set of real numbers and ∧ and ∨ are binary minimum and maximum with respect to the usual ordering, so that

R is a lattice. We wish to describe all the congruence relations on R. So let θ be a congruence relation. Suppose r θ s and r ≤ t ≤ s. Then r = (r ∧ t) θ (s ∧ t) = t, and so r θ t. This means that the congruence classes of θ are convex; that is, they are intervals, perhaps infinite or even degenerate. Pick an arbitrary element from each congruence class. It is evident that the set of selected elements forms a subalgebra of R isomorphic to R/θ. Now let θ be any equivalence relation on R such that each θ-equivalence class is a convex set of real numbers. To verify the substitution property, let r θ r' and s θ s'.

CASE 0: r θ s.

The substitution property is immediate, since r ∧ s, r ∨ s, r' ∧ s', and r' ∨ s' all belong to r/θ, because {r, s, r', s'} ⊆ r/θ.

CASE 1: r and s lie in different equivalence classes modulo θ. The two θ-classes, which are intervals, cannot overlap. Thus, without loss of generality, we assume that every element of r/θ is less than every element of s/θ. Hence

Therefore (r ∧ s) θ (r' ∧ s') and (r ∨ s) θ (r' ∨ s'), as desired, so θ is a congruence relation. Thus the congruence relations of R are exactly those equivalence relations whose equivalence classes are convex sets of real numbers. Since any proper convex subset of R is a congruence class of infinitely many congruence relations, R provides a strong contrast with the behavior of congruence relations on groups. For most algebras, the task of describing all the congruence relations is hopelessly difficult, so these two examples have a rather special character. The collection of all congruence relations of an algebra is a rich source of information about the algebra; discovering the properties of this collection often leads to a deeper understanding of the algebra.


more "algebraic" questions always seem to revolve around operations of rank two or more. (Consider, for instance, the limitations involved in writing an equation with unary operations. The only possible forms are flf2 fm(x)Ñ g g2 . gn(x),and the same thing with the second x changed to y.) Nevertheless, unary algebras merit a brief study in their own right, for many important features of an algebra A (such as Con A) are determined by its family of unary polynomial operations. (See Theorem 4.18.) As we will see, algebras with one unary operation are much simpler than those with more than one, so we will begin our study with these. Algebras with one unary operation and no other operations, otherwise known as mono-unary algebras, are particularly easy to describe informally. We simply malce a diagram which has an arrow going from a to f(a) for each a E A. For example,

denotes the successor function on ω. In the finite case, such a diagram can in principle contain all the points of A and can therefore constitute a complete and precise definition of f: A → A. If A is infinite, however, or even finite but large, such a diagram can supplant a formal description of f only if there is a clear pattern discernible in the fragment that we are able to draw. A typical function on a finite set might be given by the diagram

In attempting to classify such diagrams, the first significant thing to notice is that A obviously can be divided into disjoint component diagrams, each corresponding, of course, to a subalgebra of (A, f). For an official definition, we say that x and y are connected under f, or in the same component of (A, f), iff f^m(x) = f^n(y) for some m, n ≥ 0 (where we take f^0(x) to mean x). It is not hard to check that this defines an equivalence relation on A; we will refer to the blocks (equivalence classes) of this relation as connected components of (A, f). Again, it is not difficult to see that each connected component is a subuniverse of A on which the diagram of f is connected, in the sense that each two points x and y are connected by a path of arrows. (That is, there exist a_0, ..., a_n such that x = a_0, y = a_n, and for each i < n, either f(a_i) = a_{i+1} or f(a_{i+1}) = a_i.) Equivalently, as the reader may verify, a mono-unary algebra is connected iff any two nonempty subuniverses have a point in common. Now since each mono-unary algebra is seen to be a disjoint union of connected subuniverses, our description will be complete if we describe all connected mono-unary algebras. Obviously, every one-generated mono-unary algebra is connected, and things will be easier for us if we first describe this easy special case. If a generates (A, f), then A = {f^m(a): m ∈ ω}. If f^m(a) ≠ f^n(a) for m ≠ n, then (A, f) has the form


that is, it is isomorphic to (ω, σ), where σ is the successor operation. Otherwise, we may assume that f^m(a) = f^{m+k}(a) for some m and some k > 0. Let us suppose that in fact m is the smallest integer for which such an equation holds, and that, for this value of m, k is the smallest positive integer for which the equation holds. Now one easily observes that, when restricted to the set {a, f(a), f^2(a), ..., f^{m+k-1}(a)}, the operation f has the form

Evidently this diagram describes a subuniverse of (A, f), which must be all of (A, f), since this algebra is generated by a. Thus we see that every one-generated mono-unary algebra has the form (A_ω) or (A_{m,k}). The algebras that are depicted as (A_ω) and (A_{m,k}) will be denoted A_ω and A_{m,k}. We now define a core of a connected mono-unary algebra (A, f) to be any nonempty subalgebra on which f is a one-to-one function. Examining the forms (A_ω) and (A_{m,k}) above, we see that every one-generated algebra has a core isomorphic to A_ω or A_{0,k}, for some k. (It is entirely possible to have k = 1, in which case the core is simply a fixed point of f.) Hence every connected mono-unary algebra has at least one core of type A_ω or A_{0,k}.

LEMMA. Assume that (A, f) is connected. If (A, f) has any finite core C, then C must be of type (A_{0,k}), that is, a finite cycle for f, and, moreover, C is the only core of (A, f).

Proof. Let C be a finite core of (A, f). By the above, (A, f) has some core D of type (A_ω) or of type (A_{0,k}). C ∩ D is a finite nonempty subuniverse of the core D, and hence type (A_ω) is impossible. We will now prove that every core E of (A, f), whether finite or infinite, is equal to D. It is evident that this will complete the proof of the lemma. Since D is a finite cycle that has E ∩ D as a nonempty subuniverse, it follows that E ∩ D = D (i.e., that D ⊆ E). We will now prove by contradiction that E = D. By connectedness, if D were a proper subuniverse of E, we would have f^n(e) = d ∈ D for some e ∈ E - D and some n. Taking n as small as possible will assure us that in fact f^{n-1}(e) ∉ D; replacing e by f^{n-1}(e) will yield an e ∈ E such that e ∉ D but f(e) = d ∈ D. Now it is easily seen that f fails to be one-to-one on the subset {e, f^{k-1}(d)} of E, in contradiction to the fact that E is a core. This contradiction completes the proof that E = D. ∎

Of course, the lemma implies that if one core is infinite, then they are all infinite, but the reader may easily observe that (ω, σ) has many subalgebras


isomorphic to itself, so the core is not unique (although any two cores intersect in a core). Now for our description of all connected mono-unary algebras, we let C be any core of (A, f), and observe that (by the lemma) there are no finite f-cycles in A - C. Therefore A - C may be ordered by defining a < b iff f^m(b) = a for some m > 0. The set M of minimal elements in this order consists of those a ∉ C such that f(a) ∈ C. Since (by connectedness) each one-generated subuniverse {b, f(b), f^2(b), ...} meets the core C, it is clear that each element of A - C must lie above some a ∈ M. Now obviously, for each b ∈ A - C, the elements of A - C that are ≤ b form a finite chain, b ≥ f(b) ≥ f^2(b) ≥ ⋯ ≥ f^m(b) = a ∈ M. Let us define a rooted tree to be an ordered set T with a least element 0 (called the root of T) such that the interval I[0, t] is a finite chain for each t ∈ T. Now it should be clear that the ordered set A - C is a disjoint union of rooted trees (whose roots form the set M). We have now supplied every detail of our description of connected mono-unary algebras. We collect these details into the following result.


THEOREM 3.3. Every mono-unary algebra is a disjoint union of connected mono-unary algebras, in a unique way. Every connected mono-unary algebra (A, f) has a subuniverse C with (C, f) isomorphic either to a finite cycle, or to ω with the successor function. The set A - C can be given the order of a disjoint union of rooted trees so that f acts on A - C by mapping the root of each tree into C and mapping each other element of A - C into its unique predecessor.

Two examples of connected mono-unary algebras are depicted below. A_Z can be formally described as (Z, f), where Z is the set of integers and f(n) = n + 1. The other connected mono-unary algebra depicted here has a core consisting of a single (fixed) point and thus consists wholly of a single rooted tree.
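For finite algebras, the decomposition described in Theorem 3.3 is easy to compute. The following Python sketch is a toy illustration (the particular function f is made up): it splits a finite mono-unary algebra (A, f) into its connected components and finds the core cycle of each component.

# A small mono-unary algebra <A, f>, given as a dict (this example is invented).
A = range(10)
f = {0: 1, 1: 2, 2: 0,          # a 3-cycle: the core of one component
     3: 1, 4: 3,                # a tree hanging off that cycle
     5: 5,                      # a fixed point (a 1-cycle core)
     6: 5, 7: 6,                # a tree hanging off the fixed point
     8: 9, 9: 8}                # a 2-cycle forming its own component

def component(a):
    """x and y are connected iff f^m(x) = f^n(y) for some m, n >= 0.
    For a finite algebra we can just grow the component to a fixed point."""
    comp = {a}
    while True:
        bigger = comp | {f[x] for x in comp} | {x for x in A if f[x] in comp}
        if bigger == comp:
            return frozenset(comp)
        comp = bigger

def core_cycle(a):
    """Iterate f from a until a value repeats; the repeating part is the
    finite cycle that Theorem 3.3 guarantees inside a finite component."""
    seen, x = [], a
    while x not in seen:
        seen.append(x)
        x = f[x]
    return seen[seen.index(x):]

for comp in sorted({component(a) for a in A}, key=min):
    print(sorted(comp), "core cycle:", core_cycle(min(comp)))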

In case f happens to be a permutation π of A, the description given in Theorem 3.3 above takes a particularly simple form. The finite components of (A, π) obviously consist of various algebras A_{0,k} (i.e., finite cycles), since the addition of any rooted tree would result in π's not being one-to-one. As the reader may easily check, an infinite component of (A, π) must be isomorphic to the algebra A_Z introduced just above. This algebra may be thought of as the core algebra A_ω to which has been appended an infinite linear rooted tree. On the other hand, we may also observe that A_Z is a core of itself. In fact, with the introduction of A_Z, we now have all isomorphism types of cores. (In Exercise 4


below, it is established that every core is isomorphic either to A_Z or to A_ω or to A_{0,k} for some k.) Therefore, we have the following corollary to Theorem 3.3.

COROLLARY. If π is a permutation of A, then (A, π) is uniquely a disjoint union of algebras isomorphic to A_Z or to A_{0,k} for some k. ∎

The individual components of (A, π), for π a permutation of A, are closely related to the well-known cyclic decomposition of π. Before stating this connection in detail, let us recall a few things about permutations. We call a permutation π of A cyclic iff (A, π) has one component isomorphic to A_Z or to some A_{0,k}, and all other components trivial (i.e., consisting of fixed points). In the finite case, this is equivalent to saying that the elements of A that are moved by π can be arranged in a finite sequence a_0, a_1, ..., a_{k-1} such that π(a_j) = a_{j+1} for 0 ≤ j ≤ k - 2, and π(a_{k-1}) = a_0. We will follow the usual practice of denoting such a permutation as

(This sequence denoting π is in fact uniquely determined except for the possibility of starting with a different a_0. For example, the above permutation could just as well be denoted (a_{k-1}, a_0, a_1, ..., a_{k-2}).) A cyclic permutation may be called a cycle for short. Cycles (a_0, ..., a_{k-1}) and (b_0, ..., b_{l-1}) are called disjoint if no a_i is equal to any b_j. We multiply permutations by composing them as functions; i.e., πρ(x) = π(ρ(x)). It is not difficult to check that two cycles permute with each other if and only if they are disjoint or one is a power of the other. For the cyclic decomposition of a permutation π, we will assume that (A, π) is the disjoint union of its connected subalgebras (A_0, π|A_0), ..., (A_{s-1}, π|A_{s-1}). We then define π_j (0 ≤ j < s) to be the permutation of A that acts like π on A_j and like the identity elsewhere; i.e.,

π_j(x) = π(x) if x ∈ A_j, and π_j(x) = x otherwise.

It is easy to check that each π_j is a cyclic permutation of A, that the permutations π_j commute with one another, and that their product is π. Moreover, these conditions determine the permutations π_j uniquely up to a permutation of their indices j. We will in fact follow the common and useful practice of writing permutations as products of disjoint cycles. For instance, if ρ = (24531) and σ = (256) (as, say, permutations of ω), then we may express the product ρσ by its cyclic decomposition as (123)(456). Turning now to unary algebras with more than one operation, we may observe that the above unique decomposition into connected algebras remains valid, if we take "(A, f_0, f_1, ...) is connected" to mean that A is not a disjoint union of proper subuniverses. Nevertheless, there is no way to capture anything like the remainder of Theorem 3.3, even for algebras with just two unary operations. Our attention will therefore be focused on one special kind of unary algebra with more than one unary operation.
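Before turning to that special kind, here is a small computational aside. The Python sketch below (illustrative only) computes the cyclic decomposition of a finite permutation and recovers the example just given; fixed points are omitted from the output, as usual.

def disjoint_cycles(perm):
    """Cycle decomposition of a permutation of range(len(perm)),
    given as a list with perm[i] = pi(i).  Fixed points are omitted."""
    seen, cycles = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            cycle.append(x)
            seen.add(x)
            x = perm[x]
        if len(cycle) > 1:
            cycles.append(tuple(cycle))
    return cycles

# The example from the text: rho = (2 4 5 3 1) and sigma = (2 5 6), acting on
# {0,...,6} with all unmentioned points fixed; rho.sigma means x |-> rho(sigma(x)).
rho   = [0, 2, 4, 1, 5, 3, 6]
sigma = [0, 1, 5, 3, 4, 6, 2]
prod  = [rho[sigma[x]] for x in range(7)]
print(disjoint_cycles(prod))        # -> [(1, 2, 3), (4, 5, 6)]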


If each unary operation f_i is a permutation of A, then (A, f_0, f_1, ...) admits a description in terms of a group of permutations on A, and thus our understanding of such unary algebras is complete, modulo our understanding of permutation groups. For such algebras, the family of all operations generates a subgroup S of the group Sym A of all permutations of A. Unless our similarity type is already dictated by other considerations, the nicest way to describe such an algebra will be to take each permutation f ∈ S as a fundamental operation. Nevertheless, we will find it useful to consider such algebras from a slightly more general point of view, which we will now describe. Let G be an arbitrary but fixed group. A G-set is a unary algebra (A, f_g (g ∈ G)) which obeys the following axioms:

i. f_e(x) ≈ x, for e the unit element of G;
ii. f_g(f_h(x)) ≈ f_{gh}(x), for g, h ∈ G.

Evidently, if the axioms are satisfied, each f_g has the two-sided inverse f_{g^{-1}}, and hence is a permutation of A. Thus the map g ↦ f_g is a homomorphism from G to the group S of permutations of A, which was mentioned in the previous paragraph; in fact, this assertion is equivalent to axioms (i) and (ii). (In the previous paragraph, the map g ↦ f_g was one-to-one; the "greater generality" to which we alluded just above lies in allowing this map to be an arbitrary homomorphism.) Although we asserted above that unary identities can take on only a limited number of forms, let us note here that they are certainly interesting in the case at hand, since equations (i) and (ii) will serve as a definition of (a group isomorphic to) G. As with mono-unary algebras, our description of a G-set begins with its decomposition into components. The connected components of a G-set are often called its orbits; it is not hard to check that x and y are in the same orbit if and only if y = f_g(x) for some g ∈ G. A connected G-set (i.e., one consisting of a single orbit) is often called a transitive G-set. To complete our description of G-sets, we will presently prove that every transitive G-set is isomorphic to a special kind of G-set that is determined by a subgroup of G. Here, the description is not quite so completely pictorial as the one for mono-unary algebras, but it is algebraically easier to work with. For any subgroup K of G, we define the quotient G-set G/K to be the G-set ({hK: h ∈ G}, f_g (g ∈ G)), whose universe consists of all left cosets hK and whose operations are defined via f_g(hK) = (gh)K. We leave it to the reader (Exercise 9 below) to check that the conjugacy class of the subgroup K is determined by the isomorphism class of the quotient G-set G/K. If (A, f_g (g ∈ G)) is any transitive G-set, then, for some b ∈ A (in fact for any b ∈ A), we have A = {f_g(b): g ∈ G}. For each b ∈ A, we define the stabilizer or isotropy group of b to be the subgroup H_b = {g ∈ G: f_g(b) = b}. To establish an isomorphism of (A, f_g (g ∈ G)) with the quotient G-set G/H_b, we may define φ: G/H_b → A via φ(gH_b) = f_g(b). We leave it to the reader


to check that φ is well defined, one-to-one, onto, and a homomorphism of G-sets; thus φ establishes an isomorphism of our given G-set with G/H_b. We now collect the facts we have proved about G-sets.

THEOREM 3.4. Every G-set is isomorphic to a disjoint union of subalgebras, each of which is isomorphic to the quotient G-set G/K for some subgroup K of G. The partition into such subalgebras is unique, and for each such subalgebra the subgroup K is unique up to an inner automorphism of G.

COROLLARY.

If G is a finite group and A is a transitive G-set, then |A| is a divisor of |G|. ∎
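The corollary is easy to check by machine for small examples. The Python sketch below is our own illustration: the symmetric group on {0, 1, 2} acts on ordered pairs, and each orbit size times the corresponding stabilizer size equals |G|, so orbit sizes divide |G|.

from itertools import permutations

# G = Sym{0,1,2}, acting on the set A of ordered pairs (i, j) with i, j in {0,1,2}
# by g.(i, j) = (g(i), g(j)).  Each g in G gives a unary operation f_g on A.
G = [dict(zip(range(3), p)) for p in permutations(range(3))]
A = [(i, j) for i in range(3) for j in range(3)]

def act(g, a):
    return (g[a[0]], g[a[1]])

def orbit(a):
    return {act(g, a) for g in G}

def stabilizer(a):
    return [g for g in G if act(g, a) == a]

for a in [(0, 0), (0, 1)]:
    O, H = orbit(a), stabilizer(a)
    # The orbit is a transitive G-set isomorphic to G/H_a, so |orbit| = |G| / |H_a|;
    # in particular the orbit size divides |G| (the corollary above).
    assert len(O) * len(H) == len(G)
    print(a, "orbit size", len(O), "stabilizer size", len(H))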

We remark that for each monoid M (semigroup with unit), there is a variety of M-sets that is defined similarly to that of G-sets for a group G; nevertheless, there is no analog of Theorem 3.4 for M-sets. In fact, every unary algebra can be conceived of as an M-set for an appropriate M. (See Exercise 3.8(13) of §3.3 below.) We close this section on unary algebras with an indication of the kind of unusual behavior that can be found in algebras with only two unary operations f and g. An algebra A is called a Jónsson algebra iff it is infinite, has only countably many operations, and every proper subalgebra of A has power less than |A|. Jónsson algebras of power ℵ_0 are not very difficult to come by, but in higher powers they are relatively rare. Here we present a simple example, due to F. Galvin, of a bi-unary Jónsson algebra of power ℵ_1. (In Exercise 13 below, we will see that one unary operation will not suffice for a Jónsson algebra of power ℵ_1.) For the construction, we first recall from the Preliminaries that ω_1 denotes the first uncountable ordinal, which may be construed as the collection of all countable ordinals. We define two unary operations f and g on ω_1 as follows. g is the successor operation on ω_1. For any countable limit ordinal λ, f maps the set {λ + 2k: k ∈ ω}


onto the (countable) set λ + ω [= {γ: γ < λ + ω}], and f(λ + 2k + 1) = λ for all k ∈ ω. It is easy to check that the subuniverses of (ω_1, f, g) are precisely the limit ordinals λ < ω_1 (i.e., the sets of the form {x: x < λ}), and thus that this is a Jónsson algebra. Since no other opportunity arises naturally in this volume, we will briefly discuss the subject of (not necessarily unary) Jónsson algebras. By and large, this subject remains mysterious and caught up in axiomatic questions of set theory. For instance, the set-theoretic axiom V = L implies that Jónsson algebras exist in all powers, but, on the other hand, no Jónsson algebra can have measurable cardinality. In Exercise 16 below, we will outline a result (of Galvin, Rowbottom, Erdős, Hajnal, and Chang) that, for each cardinal κ, if there exists a Jónsson groupoid of power κ, then there also exists a Jónsson groupoid of power κ⁺. For


more detailed references and more results, see either pages 469-470 of Chang and Keisler [1973] or pages 120-135 of Jónsson [1972]. Very little is known about what Jónsson algebras can occur in most of the familiar varieties, although, for example, T. Whaley proved [1969] that no lattice is a Jónsson algebra. There exist Jónsson groups, but for years the only known examples were the groups Z_{p^∞}. (For each prime p, Z_{p^∞} is the group isomorphic

to the group of all complex numbers e^{2πik/p^n} (k, n ∈ ω) under multiplication.) O. J. Johnson realized in 1930 that these are the only commutative Jónsson groups. In [1980] A. Ju. Ol'shanskiĭ proved the existence of a countably infinite torsion group G with the following subuniverse lattice:

The atoms of this lattice correspond to the one-generated subgroups of G, which are all finite, and hence it is obvious that G is a Jónsson group. (We have heard that E. Rips constructed a group with these properties plus the additional property that every element of G has order p for a fixed prime p. This is a spectacular (but not the first) contradiction to the conjecture of Burnside, which held that finitely generated groups obeying a law x^n ≈ e (n ≥ 1) are always finite. We will return briefly to the subject of Burnside's conjecture in the chapter on equational theories and free algebras in Volume 2.) A Jónsson group of power ℵ_1 was constructed by Shelah [1980].

Exercises 3.5

1. For any algebra A, prove that there exists a smallest equivalence relation θ on A such that each block of θ is a subuniverse of A. Moreover, if A is unary, then the blocks of θ are the connected components of A.
2. Prove that a unary algebra is connected iff it is not a disjoint union of two proper subuniverses.
3. Prove that for each unary algebra A, there is a unique decomposition of A into a disjoint union of connected subuniverses.
4. Prove that every core is isomorphic to A_Z or to A_ω or to A_{0,k} for some k.
5. Describe, in terms of our pictorial representations, how many elements are required to generate a mono-unary algebra A.
6. Give pictorial representations for the product algebras A_ω × A_{1,2}, A_ω × ___, and A_{1,2} × A_ω.
7. Show that if a unary algebra A has at least n components that have more than one element (and possibly some one-point components as well), then its congruence lattice contains a sublattice isomorphic to the Boolean algebra of 2^n elements.


8. Establish a natural one-to-one correspondence between mono-unary algebras and Z⁺-sets, where Z⁺ stands for the monoid of non-negative integers under addition.
9. Prove that if H and K are subgroups of G, then the quotient G-sets G/H and G/K are isomorphic iff H and K are conjugate, i.e., iff K = g⁻¹Hg for some g ∈ G.
10. Prove that if A is the largest quotient G-set, namely G/{e}, then Con A is isomorphic to the lattice of all subuniverses of G. What about Con G/K for K an arbitrary subgroup of G?
11. Prove that for every unary algebra A there exists an infinite algebra B in HSP(A) with all operations trivial (i.e., each operation f of B satisfies either f(x) ≈ x or f(x) ≈ f(y)). What is the congruence lattice of such a B?
12. Prove the following assertion, which is a little stronger than an assertion made in the text of this section. If α and β are cyclic permutations on X and αβ = βα, then either α and β are disjoint or α = β^k for some k.
13. Prove that there is no mono-unary Jónsson algebra of power ℵ_1.
*14. Let κ be a cardinal. Prove that if (κ, F) is a Jónsson algebra, with F binary, then the ternary operation G defined as follows on κ⁺ makes (κ⁺, G) into a Jónsson algebra. The ordinal κ⁺ has the well-known property that if κ ≤ λ < κ⁺, then |λ| = |κ| (κ and λ have the same cardinality). Therefore, for each such λ, there exists a binary operation F_λ such that (λ, F_λ) is a Jónsson algebra. Now for (λ, α, β) ∈ (κ⁺)³, we define G(λ, α, β) to be F_λ(α, β) if this is defined (i.e., if κ ≤ λ and α, β < λ), and to be 0 otherwise.
*15. Prove that the term z(x, y, z) =


is universal in the following sense. For every infinite set A and for every ternary operation G defined on A, there exists a binary operation · on A that satisfies the equation G(x, y, z) ≈ z(x, y, z). (It is possible to base an elementary proof of this fact on naive set theory; in Volume 2 we will return to the subject of universality.)
16. Prove that for every ternary operation G on an infinite set A, there exists a binary operation F on A such that Sub(A, F) ⊆ Sub(A, G). Conclude that if (A, G) is a Jónsson algebra, then so is (A, F). (It thus follows from Exercise 14 that if (κ, F) is a Jónsson algebra of type (2), then there exists a binary operation F' on κ⁺ such that (κ⁺, F') is a Jónsson algebra.)
*17. Generalize the result of Exercise 16 to algebras of arbitrary countable similarity type. That is, if (A, F_0, F_1, ...) is a Jónsson algebra with operations F_i for i ∈ ω, then there exists a single binary operation F defined on A such that (A, F) is a Jónsson algebra. (For this, one will need to develop the analog, for an infinite collection of terms z(x_0, x_1, ...), of the universality result of Exercise 15. This subject will be covered in Volume 2. For the moment, for a proof of the necessary universality result, we refer the reader to J. Łoś [1950].)


3.3 SEMIGROUPS

As we mentioned in Chapter 1, a semigroup is an algebra S = (S, ·) consisting of a set S together with an associative multiplication · on S. We follow tradition in writing multiplication in semigroups (and in groups, and so forth) with infix notation; that is, we normally write x·y or simply xy instead of ·(x, y). An associative operation is, of course, one that obeys the associative law

x·(y·z) ≈ (x·y)·z.

Our main purpose in this section will be to introduce a few special kinds of semigroups that are of interest in the study of universal algebra. A semigroup with unit is a semigroup S obeying the axiom

∃y ∀x (x·y ≈ y·x ≈ x).

Even without associativity, it follows easily that if such a y exists, it is unique. The unique such y is called the unit of S and is usually denoted e or 1. A monoid is a semigroup that has a unit element explicitly mentioned in the similarity type; i.e., an algebra (M, ·, e) of type (2, 0) that obeys the associative law for · and moreover obeys the equations x·e ≈ e·x ≈ x.

For a single algebra, the notion of monoid is more or less equivalent to that of semigroup with unit, but as soon as we consider two or more algebras together, there are important distinctions between the two notions. For instance, a subalgebra of a monoid is always a monoid, but the same is not true for semigroups with unit. The prototypical example of a monoid is of course the collection of all selfmaps of a set A (i.e., all functions f: A → A, with e the identity function and f·g the composition of functions f ∘ g). We will denote this monoid by End A, since its universe is the set End A introduced in §1.2 (if we think of A as an algebra with no operations). It is almost trivial to verify the associative law for the composition of functions; we leave this to the reader. Interestingly, this fact can be seen as the source of associativity for all semigroups, since, as we will see in §3.5, every monoid is isomorphic to a submonoid of End A for some A. The submonoids of End A most important to our subject are those of the form End A, for A an algebra with universe A. As we said in Chapter 1, the monoid End A, also known as the endomorphism monoid of A, is defined to be the submonoid of End A whose universe is the set End A consisting of all homomorphisms from A to A. An important part of our subject has been concerned with the possibility of representing an arbitrary monoid as isomorphic to an endomorphism monoid End A, with A to be found in certain restricted classes of algebras. We will present one elementary result on this topic in §3.5, saving a more systematic study for a later volume. A related monoid, equally important but less frequently encountered, is the monoid Part A of all partial functions from A to A (i.e., all functions f: B → A with B ⊆ A). (Here, f·g is the partial function defined as follows. If x is in the


domain of g and g(x) is in the domain of f, then we include x in the domain of f·g and define f·g(x) to be f(g(x)). Moreover, the domain of f·g consists of these x and no others.) Clearly, we have End A ⊆ Part A. An even larger monoid is the monoid Bin A of all binary relations on A, i.e., all subsets of A × A. Here R·S is defined to be the relational product

R ∘ S = {(x, z): ∃y [(x, y) ∈ R and (y, z) ∈ S]}.
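A quick computational illustration (our own toy example) of the relational product, and of how composition of partial functions turns into relational product under the representation by pairs (f(x), x) described in the parenthetical remark that follows:

def rel_product(R, S):
    """R o S = {(x, z) : there is a y with (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def as_relation(f):
    """Embed a partial function (a dict) into Bin A via the pairs (f(x), x)."""
    return {(f[x], x) for x in f}

# Two partial functions on A = {0, 1, 2, 3}  (invented examples).
f = {0: 1, 1: 2}            # defined only on {0, 1}
g = {1: 0, 2: 3, 3: 3}      # defined only on {1, 2, 3}

# f.g in Part A: x is in the domain iff g(x) is defined and lies in dom(f).
fg = {x: f[g[x]] for x in g if g[x] in f}

# The embedding turns composition of partial functions into relational product.
assert as_relation(fg) == rel_product(as_relation(f), as_relation(g))
print("f.g =", fg)          # -> f.g = {1: 1}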

(To embed Part A in Bin A, we need to map each partial function f to the set of ordered pairs {(f(x), x): x ∈ A}; the more traditional representation of f via pairs (x, f(x)) causes ∘ to take on different meanings for functions and relations.) Notice that Bin A is a reduct of the relation algebra Rel A that we described near the end of §1.1. The relational product will be introduced again in §4.4; see Definition 4.28. Another easily defined and useful semigroup is the free semigroup on a set A. We will in fact define the free monoid on A, observing that the free semigroup is formed from the free monoid by simply deleting the unit element. The elements of the free monoid on A are all finite sequences of elements of A, including the empty sequence. We define the product α·β of sequences α = (a_0, ..., a_{n-1}) and β = (b_0, ..., b_{m-1}) to be the sequence γ = (c_0, ..., c_{m+n-1}), where

Usually, we drop the formal apparatus of finite sequences and write members of the free monoid as "words" a_0 a_1 ⋯ a_{n-1}. Thus, for example, the product of abc and bcd is abcbcd. This form of writing two words together is often referred to as concatenation of two words, and the product we defined above for sequences is called the concatenation of two sequences. It is obvious that both concatenations of a word α with the empty word are equal to α, and so the empty word is the unit element of the free monoid. The associativity of the free monoid is more or less obvious from the informal idea of concatenation of words. This intuition can easily be made into a formal proof that invokes our definition for the product of two sequences; we leave the details to the reader. In Chapter 4 we will present the general notion of free algebra. Here we only mention that the free monoid, as we have described it, satisfies one of the defining properties of free algebras. (This is the next theorem.) It is in fact rare for a free algebra to be so simply describable as is the free monoid.


THEOREM 3.6. If M = (M, ·, e) is any monoid and f: A → M is any function, then there exists a homomorphism φ from the free monoid on A into M such that φ maps the empty word to e and φ(a) = f(a) for all a ∈ A.

Proof. If α = (a_0, ..., a_{n-1}), we simply define φ(α) to be f(a_0)·f(a_1)· ⋯ ·f(a_{n-1}). We leave to the reader the details of checking that this definition yields a homomorphism with the desired properties. ∎
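In computational terms, the homomorphism of Theorem 3.6 is simply a fold over the letters of a word. A short Python sketch (the target monoid below is our own choice):

# Words over A are Python tuples; concatenation (tuple +) is the free-monoid product.
def phi(word, f, unit, mult):
    """Extend f : A -> M to the homomorphism of Theorem 3.6:
    phi(a0 a1 ... a_{n-1}) = f(a0) * f(a1) * ... * f(a_{n-1}),  phi(empty) = e."""
    result = unit
    for a in word:
        result = mult(result, f(a))
    return result

# An example target monoid: M = (N, +, 0), with f sending a letter to its length.
f = len
unit, mult = 0, lambda x, y: x + y

u, v = ("ab", "c"), ("dd",)
# phi is a homomorphism: phi of the concatenation equals phi(u) + phi(v).
assert phi(u + v, f, unit, mult) == mult(phi(u, f, unit, mult), phi(v, f, unit, mult))
print(phi(u + v, f, unit, mult))    # -> 5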


Chapter 1 Basic concepts

φ ∨ ψ = Cg^A(φ ∪ ψ) and φ ∧ ψ = φ ∩ ψ. With these operations, the collection of all congruence relations on A becomes a lattice, which we shall call the congruence lattice of A and denote by Con A. Every congruence relation of A is an equivalence relation on A and, as we saw in Exercise 1.2(7), the equivalence relations on A constitute a lattice Eqv A. In fact, Con A is a sublattice of Eqv A. The meet operation in both Con A and Eqv A is just set-theoretic intersection. To establish the contention that Con A is a sublattice of Eqv A, it is necessary to prove that the join operation in Con A is the restriction of the join operation in Eqv A. This is the import of the next theorem, and it even applies to joins of arbitrary subsets of Con A. Note that we write Con A for the set of all congruence relations on A, the universe of the lattice Con A. By the same token, we write Sub A for the set of all subuniverses of A, and Eqv A for the set of all equivalence relations over the set A.

THEOREM 1.24. Let A be an algebra and let C ⊆ Con A.

i. Con A = Sub(A × A) ∩ Eqv A.
ii. Cg^A(∪C) = ∩{R: ∪C ⊆ R and R ∈ Eqv A}.

Proof. i. This is just a restatement of the definition of a congruence relation, using the language of subuniverses and direct products.


ii. Let θ = ∩{R: ∪C ⊆ R and R ∈ Eqv A}. Thus θ is the smallest equivalence relation on A that includes ∪C. Since it is clear that ∪C ⊆ θ ⊆ Cg^A(∪C), we need only establish that θ is a congruence on A. In view of part (i) of this theorem, it only remains to establish that θ is a subuniverse of A × A. The transitive closure of relations was described in the Preliminaries. A more detailed analysis is used here. Let D be any collection of relations on A, that is, subsets of A × A. Let D* denote the set of all those relations that can be obtained as compositions (i.e., relational products) of finite nonempty sequences of relations belonging to D. Thus φ_0 ∘ φ_1 ∘ ⋯ ∘ φ_{n-1}, where φ_i ∈ D for each i < n, is a typical element of D*. Checking that ∪D* is actually the transitive closure of ∪D presents no difficulties. This analysis of the transitive closure leads immediately to the following conclusion: If D consists entirely of symmetric reflexive relations on A, then the transitive closure of ∪D is also symmetric and reflexive and is, therefore, the smallest equivalence relation including ∪D. In particular, θ = ∪C*, the transitive closure of ∪C. Now consider any two subuniverses φ and ψ of A × A. φ ∘ ψ must be a subuniverse of A × A as well, since if F is any basic operation of A and n is the rank of F and if a_i φ b_i ψ c_i for all i < n, then

By the obvious inductive extension of this fact, if D consists entirely of subuniverses of A x A, then so does D*. Moreover, if every member of D is


morphism, we first observe that, in any meet-semilattice S, by definition of "greatest lower bound" we have, for all x ∈ S,

From this we readily calculate:

φ(s ∧ t) = {x ∈ S: x ≤ s ∧ t} = {x ∈ S: x ≤ s and x ≤ t} = {x ∈ S: x ≤ s} ∩ {x ∈ S: x ≤ t} = φ(s) ∩ φ(t).

If L = (L, ∧, ∨) is a lattice, then one can obviously use the order relation ≤ to infer the lattice structure from the semilattice (L, ∧). Nevertheless, from the point of view of considering more than one algebra at a time, the semilattice structure (L, ∧) cannot be regarded as an adequate substitute for the full lattice L. (Technically, as we will put it in Volume 2, "the variety of lattices is not interpretable in the variety of semilattices.") For instance, as we will see in Exercises 8 and 9 below, the H and S operators act very differently on (L, ∧) and (L, ∧, ∨). The reader may at this time be interested in writing his own version of Theorem 3.6 for the free semilattice on a set A. Suffice it for now to say that the statement and proof are very much like those above, except that the free semilattice on A is the set of all cofinite proper subsets of A, and the operation is intersection. Each of the two laws defining semilattices among semigroups, namely the commutativity and idempotency laws, is interesting in its own right. For some interesting information about commutative semigroups, see Exercises 16 and 17 of §4.5 and Exercise 4 of §5.1. Idempotent semigroups are sometimes also known as bands. The very simplest bands are those obeying the law xy ≈ y (right-zero semigroups) or its dual, xy ≈ x (left-zero semigroups). These semigroups are really completely trivial, since the "multiplication" is nothing more than either the first or second coordinate projection. Somewhat less trivial are the rectangular bands, which are idempotent semigroups obeying the additional law xyz ≈ xz. It is not hard to prove that every product of a left-zero semigroup with a right-zero semigroup is a rectangular band, and in Exercise 10 we will ask the reader to prove the converse, namely that every rectangular band is isomorphic to a product of two semigroups, one of which is a right-zero semigroup and the other a left-zero semigroup. (This also follows from, and is a special case of, the idea of a decomposition operation, which will be presented in §4.4; see Definition 4.32 and the lemma that follows.) In particular, a rectangular band of prime order must be either a left-zero or a right-zero semigroup. As one further example of a variety of semigroups, we consider a variety 𝒱 described by J. A. Gerhard in [1970] and [1971]. It is defined by the laws

x(yz) ≈ (xy)z
xx ≈ x
xyx ≈ xy.


It is not difficult to check that 𝒱 contains the three-element semigroup S_0 that has the following multiplication table:

The semigroup S_0 is useful to us, in that it can show us how terms act in 𝒱. The semigroup terms (i.e., the formal expressions in semigroup theory) are of course the finite formal products x_{i_1} x_{i_2} ⋯ x_{i_m} of variables. The identities of 𝒱 allow us to shorten any term that has a repeated variable, as follows. If two occurrences of a variable stand together, one of them can be removed by applying the law xx ≈ x. If two occurrences are separated, as in x u_1 ⋯ u_k x, the second occurrence can be removed with the law xyx ≈ xy, by taking y to be u_1 ⋯ u_k. Thus every semigroup term can be reduced, using the laws that define 𝒱, to a product of distinct variables. Now the semigroup S_0 can be used to show that no further reductions of terms are possible for this variety. We ask the reader to show, in Exercise 11 below, that if α is a semigroup word that contains no variable twice, and β is also such a word, then either α = β or it is possible to substitute values 0, 1, and 2 for the variables appearing in α and β such that upon this substitution, α and β evaluate in S_0 to two different elements of {0, 1, 2}. For another look at this variety, see Exercise 28 of §4.11. The product of two algebras can be thought of as defining a binary operation on the class of all algebras of a given similarity type. If we identify two algebras that are isomorphic to each other, then this operation is commutative and associative, since A × (B × C) is isomorphic to (A × B) × C, and A × B is isomorphic to B × A. The one-element algebra clearly serves as a unit element, so we have a commutative monoid of isomorphism classes of algebras under direct product. The structure of this monoid is actually very complicated; it contains every countable commutative monoid as a submonoid. (See §5.1 for a reference to a proof of this result of J. Ketonen.) Nevertheless, some simplifications are possible (this will be a major focus of Chapter 5) if we look at some special submonoids of the monoid of isomorphism classes. For instance, the submonoid of isomorphism classes of finite algebras of a given type is isomorphic to a submonoid of (ω, ·, 1)^ω. For some even smaller classes of algebras, including some very natural generalizations of finite groups to universal algebra, the monoid of isomorphism types has unique factorization and is thus isomorphic to a submonoid of (ω, ·, 1).
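Returning to the reduction of terms described above, the normal form is easy to compute mechanically. A small Python sketch (illustrative only):

def reduce_word(word):
    """Reduce a semigroup word, written as a string of variables, using the
    laws xx = x and xyx = xy of the variety above.  Each application deletes
    a later occurrence of a repeated variable, so the normal form keeps only
    the first occurrence of each variable, in order."""
    seen, normal = set(), []
    for v in word:
        if v not in seen:        # any later occurrence can be removed:
            seen.add(v)          # if adjacent to an earlier one, use xx = x;
            normal.append(v)     # otherwise use xyx = xy with y = the letters between.
    return "".join(normal)

assert reduce_word("xyxzzy") == "xyz"
assert reduce_word("aa") == "a"
print(reduce_word("xyxzzy"))     # -> xyz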


Exercises 3.8

1. Describe all one-generated semigroups and all one-generated monoids.
*2. Prove that for finite n, End n is generated by the following set of three selfmaps of n: first, the permutation (2-cycle) (0, 1), which interchanges 0 and 1; next,


the cyclic permutation (0, 1, ..., n - 1); and finally, the map f(x) = x ∨ 1, i.e., f(0) = 1 and f(x) = x for x ≥ 1.
3. Prove that for 3 ≤ n < ω, End n is not generated by any two of its elements.
4. Prove that for infinite A, End A satisfies

(where f²g²fg is of course an abbreviation for f ∘ f ∘ g ∘ g ∘ f ∘ g).
5. Prove that End ω does not satisfy

(Hint: Try h = the successor function on ω.)
6. Find a distributive lattice (D, ∧, ∨) for which the semilattices (D, ∧) and (D, ∨) are not isomorphic.
7. Prove that if (B, ∧, ∨, ¬, 0, 1) is a Boolean algebra, then (B, ∧) and (B, ∨) are isomorphic semilattices.
8. Give an example of two distributive lattices D_1 = (D_1, ∧_1, ∨_1) and D_2 = (D_2, ∧_2, ∨_2), such that (D_1, ∧_1) is a subsemilattice of (D_2, ∧_2) but (D_1, ∨_1) is not a subsemilattice of (D_2, ∨_2). (Thus D_1 is not a sublattice of D_2. Hint: There is a solution with a very small finite number of elements.) Prove that in this situation we will at least have x ∨_1 y ≥ x ∨_2 y for all x, y ∈ D_1, and that equality holds whenever we happen to have x ∨_2 y ∈ D_1.
9. With the notation of the previous exercise, find distributive lattices D_1 and D_2 such that D_2 is not a homomorphic image of D_1, but (D_2, ∧_2) is a homomorphic image of (D_1, ∧_1).
10. Prove that every rectangular band A is isomorphic to a product B × C, with B a left-zero semigroup and C a right-zero semigroup. (Hint: Define a relation θ_0 on A by saying a_1 θ_0 a_2 iff a_1 x = a_2 x for every x ∈ A. Prove that θ_0 ∈ Con A, define a symmetrically related congruence θ_1, and try to establish directly an isomorphism between A/θ_0 × A/θ_1 and A.)
11. Let α and β be distinct semigroup terms with no variable occurring twice in either α or β. Show how to substitute either 0, 1, or 2 for each of the variables so that α and β evaluate differently in the three-element semigroup S_0 defined on the previous page.
12. To get a preview of some of the pathology that can occur in the monoid of isomorphism classes under direct product, find four directly indecomposable mono-unary algebras A, B, C, and D with A × B ≅ C × D, such that A is isomorphic neither to C nor to D. (An algebra A is directly indecomposable iff |A| > 1 and A ≅ E × F implies |E| = 1 or |F| = 1. Hint: The pictorial representation of mono-unary algebras that we developed in §3.2 should be helpful here. The reader may also look ahead to §5.1, where we present this exercise again with some more detailed suggestions. See, for example, Exercises 3 and 5 of §5.1. In that same exercise set, we also present some failures of unique factorization that involve G-sets and commutative semigroups.)
13. Every unary algebra can be expanded to an F-set for an appropriate free monoid F. More precisely, let (A, F_i)_{i∈I} be any unary algebra, and define F


to be the free monoid on {x_i: i ∈ I}. Show how to define an F-set (A, f_w (w ∈ F)) such that f_{x_i} = F_i for all i ∈ I.

3.4 GROUPS, QUASIGROUPS, AND LATIN SQUARES

Groups are frequently thought of as special kinds of semigroups. This familiar conception of group theory differs from our main definition (introduced in Chapter 1) in ways that are subtle but important. For this alternate presentation of group theory, we define a multiplication group to be a semigroup (G, ·) that has unique solutions: When any two of the variables in the equation x·y = z are given fixed values, there is exactly one value of the third variable that satisfies the equation. One easy consequence of the definition comes if we take x and z to be equal; we thus have e such that x·e = x. It is a routine exercise to check that e does not depend on x, and that e·x = x as well. Moreover, for each x there is a unique element x⁻¹ such that x·x⁻¹ = x⁻¹·x = e. In our work, the primary meaning of "group" is the one defined in Chapter 1. Just as we formed monoids from semigroups with unit by promoting the unit element e to the status of a nullary operation, we may turn any multiplication group into a group by defining e and x⁻¹ as above. Thus multiplication groups are precisely the semigroups that can be formed by taking a group and deleting the operations e and ⁻¹. Thus, as for monoids and semigroups with unit, the two group notions are more or less interchangeable, especially if only one group is under consideration at a time. As before, however, distinctions arise when we consider two or more algebras together. For instance, a subalgebra of a group is always a group, but the same is not true for multiplication groups. Traditionally, writers on group theory have blurred this distinction by using the expression "subgroup": (H, ·) is defined to be a subgroup of (G, ·) iff the corresponding (H, ·, ⁻¹, e) is a subalgebra (in our sense) of (G, ·, ⁻¹, e). Not wishing to burden our readers with fussy terminology, we will often allow ourselves to refer to a "group" and let the context provide the distinction between a group and a multiplication group. (Our primary terminology is of course that of Chapter 1.) "Subgroup," however, will always take the meaning we described just above, and in keeping with this usage, when we refer to the subgroup generated by a set X ⊆ G or write Sg(X), we always mean the subuniverse Sg^G(X) that is generated by X in the algebra (G, ·, ⁻¹, e). For congruence relations on groups, however, this distinction between group theories makes no difference. As the reader will verify in Exercise 1 below, if (G, ·, ⁻¹, e) is a group, then Con(G, ·, ⁻¹, e) = Con(G, ·). Instead of considering θ ∈ Con(G, ·), it is traditional in group theory to consider in its place the normal subgroup


N_θ = {x ∈ G: (e, x) ∈ θ}.

We leave to the reader the proof that this is indeed a normal subgroup of G.


(Recall that a subgroup N of G is called normal iff gng⁻¹ ∈ N for all n ∈ N and all g ∈ G.) Conversely, if N is any normal subgroup of G, then, as the reader may check, the relation x θ_N y

iff x·y⁻¹ ∈ N

defines a congruence relation θ_N on G. These two correspondences are inverse to each other (i.e., θ = θ_N iff N = N_θ), and it is easily seen that θ ⊆ φ implies N_θ ⊆ N_φ. Thus we have a one-to-one correspondence between congruences on (G, ·) and normal subgroups of G, which is in fact an isomorphism between Con(G, ·) and the lattice of normal subgroups of G.

|H| > 1 and |K| > 1. Also, let us call G 2-divisible iff for each a ∈ G there exists b ∈ G such that b² = a.
11. If A is a unary algebra with Aut A 2-divisible, then no two distinct connected components of A are isomorphic to each other. If Aut A is also directly indecomposable, then all but one of the connected components of A have trivial automorphism groups.
*12. (G. Fuhrken [1973]) The group (Q, +) of rational numbers is not isomorphic to Aut(A, f) for any mono-unary algebra (A, f). (Hint: By the last exercise, we may assume that (A, f) is connected. Turning to the descrip-


tions in §3.2, first determine the action of a rational number q on the core of (A, f). Then look at the action of q on whatever trees may be attached to the core. Make a second use of 2-divisibility.)
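Returning for a moment to §3.4, the correspondence between normal subgroups and congruences is easy to check mechanically on a small group. The Python sketch below uses Z_6 and the subgroup {0, 3}; the choice of example is ours.

n = 6
G = list(range(n))                        # the group Z_6 under addition mod 6
mult = lambda x, y: (x + y) % n
inv  = lambda x: (-x) % n
N = {0, 3}                                # a (normal) subgroup of Z_6

# theta_N:  x ~ y  iff  x * y^{-1} lies in N.
related = lambda x, y: mult(x, inv(y)) in N

# Substitution property: if x ~ x2 and y ~ y2, then x*y ~ x2*y2.
assert all(related(mult(x, y), mult(x2, y2))
           for x in G for x2 in G for y in G for y2 in G
           if related(x, x2) and related(y, y2))

# The blocks of theta_N are exactly the cosets of N.
blocks = {frozenset(y for y in G if related(x, y)) for x in G}
cosets = {frozenset(mult(x, k) for k in N) for x in G}
assert blocks == cosets
print(sorted(sorted(b) for b in blocks))  # -> [[0, 3], [1, 4], [2, 5]]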

3.6 CATEGORIES

A category is a special kind of partial semigroup. More precisely, a category C consists of a class C together with an operation · such that x·y is defined for some but not necessarily all pairs (x, y) ∈ C², and such that the following axioms hold for ·. In Axioms (iv) and (v), a unit element of C means an element e of C such that e·x = x whenever e·x is defined and x·e = x whenever x·e is defined.

ii. If y·z and x·(y·z) are defined, then so are x·y and (x·y)·z, and x·(y·z) = (x·y)·z.
iii. If x·y and y·z are defined, then so is x·(y·z).
iv. For each x there exists a unit e such that x·e is defined.
v. For each x there exists a unit f such that f·x is defined.

It is easy to prove (see Exercise 3.16(1) below) that the unit elements e and f of (iv) and (v) are unique; e is called the domain of x, and f is called its codomain. For historical and motivational reasons (see the discussion of homomorphism categories below), the elements of a category C are sometimes called morphisms, and unit elements are called objects of C. If A and B are objects of C, then hom(A, B) (or hom_C(A, B)) stands for the collection of all x ∈ C such that the domain of x is A and the codomain of x is B. Such an x is sometimes called a morphism from A to B. We sometimes write x: A → B to indicate that x is a morphism from A to B. We will use the terminology of morphisms and objects whenever it seems useful or appropriate. There exists an alternative axiomatization of category theory in terms of objects and morphisms (see Exercise 3.16(2) below). For categories C and D, a functor from C to D is a function f from C to D such that if x·y is defined in C then f(x)·f(y) is defined in D and f(x)·f(y) = f(x·y), and such that f(e) is a unit of D for each unit e of C. This notion is a natural generalization of the notion of homomorphism to this class of "algebras" whose operations are not everywhere defined. We may write F: C → D to denote the fact that F is a functor from C to D. If C ⊆ D and the inclusion map (restriction to C of the identity map on D) is a functor from C to D, then we say that C is a subcategory of D. We say that C is a full subcategory of D iff C is a subcategory that satisfies:

∀d ∈ D [(domain(d) ∈ C) & (codomain(d) ∈ C) → d ∈ C].
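Readers who like to experiment may find it helpful to see a finite category encoded directly as a set of morphisms with a partial composition. The Python sketch below is our own toy encoding, not anything from the text: the objects are the sets {0, 1} and {0, 1, 2}, the morphisms are all functions between them, written (codomain, graph, domain), and axioms (ii)-(v) are checked by brute force.

from itertools import product

sets = [(0, 1), (0, 1, 2)]
# A morphism is a triple (codomain, graph, domain); units are the identity maps.
morphisms = [(B, tuple((x, f[i]) for i, x in enumerate(A)), A)
             for A in sets for B in sets
             for f in product(B, repeat=len(A))]

def compose(f, g):
    """f . g is defined only when domain(f) = codomain(g)."""
    (C, gf, B1), (B2, gg, A) = f, g
    if B1 != B2:
        return None
    lookup = dict(gf)
    return (C, tuple((x, lookup[y]) for x, y in gg), A)

def is_unit(e):
    return e[0] == e[2] and all(x == y for x, y in e[1])

# Axioms (iv) and (v): every morphism composes with a unit on each side.
assert all(any(is_unit(e) and compose(f, e) == f for e in morphisms) and
           any(is_unit(e) and compose(e, f) == f for e in morphisms)
           for f in morphisms)
# Associativity where defined (axiom (ii)):
assert all(compose(f, compose(g, h)) == compose(compose(f, g), h)
           for f in morphisms for g in morphisms for h in morphisms
           if compose(g, h) is not None and compose(f, g) is not None)
print(len(morphisms), "morphisms; the axioms check out")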


2.1 Fundamental Concepts

On the other hand, if h is a one-to-one isotone map from L onto L', and h⁻¹ is also isotone, then h is an isomorphism from the lattice L onto the lattice L'. It is also important to realize that while ≤ may be a lattice order on L, and its restriction to a subset L' of L may again induce a lattice order, it can happen that L' is not a subuniverse of the lattice (L, ∧, ∨). This phenomenon can be traced to the global nature of the definition of join and meet in terms of the ordering. Figure 2.4 is an example. L' consists of the points denoted by *.

Figure 2.4

Exercises 2.3

1. Draw a Hasse diagram for the lattice of all subgroups of the symmetric group on {0, 1, 2}.
2. Draw the Hasse diagram of the lattice of all subsets of the set {0, 1, 2, 3}.
3. Prove that every isotone one-to-one function from a lattice L onto a lattice L' that preserves incomparability is an isomorphism. Provide an example to show that an isotone one-to-one function from a lattice L into a lattice L' need not be a homomorphism.

In the study of lattices, it is helpful to single out individual elements of lattices that have special properties with respect to the ordering or the basic operations.

DEFINITION 2.4. Let L be a lattice and a ∈ L. a is join irreducible iff a = b ∨ c always implies a = b or a = c. Dually, a is said to be meet irreducible iff a = b ∧ c always implies a = b or a = c. J(L) denotes the set of all join irreducible elements of L and M(L) denotes the set of all meet irreducible elements of L. a is said to be join prime iff a ≤ b ∨ c always implies a ≤ b or a ≤ c. Dually, a is meet prime iff a ≥ b ∧ c always implies a ≥ b or a ≥ c.

In the lattice N_5 (see Figure 2.1), every element is either join prime or meet prime. In M_3 only 1 is meet prime and only 0 is join prime, but every element is either join irreducible or meet irreducible. These four properties of an element are preserved in passing to a sublattice that contains the element. Every join prime element is join irreducible and every meet prime element is meet irreducible.


algebras of type ρ can be viewed as a class of objects in Alg_ρ and as such determines a full subcategory CAT 𝒱 of Alg_ρ. The universe functor U: Alg_ρ → SETS takes (B, f, A) to (B, f, A); that is, an algebra goes to its universe, and a homomorphism goes to its underlying set-map. The universe functor is easily seen to be faithful, and thus the pair (Alg_ρ, U) is an example of a concrete category, i.e., a pair (C, W) with C a category and W a faithful functor from C to SETS. Many of the categories of general importance in mathematics are concrete categories (with an obvious W), such as the category of continuous functions between topological spaces, monotone maps between ordered sets, and so on. There also exist important categories for which there is no such W, such as the category of homotopy classes of continuous maps between topological spaces. Many important aspects of set theory can be described within the category of sets, in terms of pure category theory. When mathematics is translated in this way into category theory, the elements of sets are ignored, and even the sets themselves play a minor role; the basic notions are those of functions and their composition. In principle, set theory can indeed be translated into this language with very little loss of detail, although many basic notions become considerably more complicated to define. This book is not the place to pursue this path, but we do wish to mention a few cases where the category-theoretic version of a set-theoretic concept turns out to be simple to state and understand, and some other cases where the abstraction of a set-theoretic idea in categorical terms yields a new notion of significant interest. We will put some of these ideas to use in §5.7, when we prove the Lovász Isotopy Lemma (5.24). That proof refers to the notions of product and monomorphism in a special category of relational structures. A morphism f of a category C (i.e., an element of C) is called a monomorphism iff it is left cancellable: if f·g and f·h are defined and equal, then g = h. A morphism f of C is called an epimorphism iff it is right cancellable. (In SETS, for example, epimorphisms and monomorphisms are the onto and one-to-one functions, respectively.) An isomorphism is a morphism that has a two-sided inverse, i.e., an element f ∈ hom(A, B) such that f·g = B and g·f = A for some g ∈ hom(B, A). Objects A and B of C are isomorphic, written A ≅ B, iff there exists an isomorphism in hom_C(A, B). An object D is called a product of objects A_0 and A_1 in C, with associated projection morphisms π_0 and π_1, provided that π_i ∈ hom(D, A_i) (i = 0, 1), and for every object E and every pair of morphisms f_i ∈ hom(E, A_i) (i = 0, 1), there is a unique h ∈ hom(E, D) such that π_i·h = f_i (i = 0, 1). The product D is unique up to isomorphism (see Exercise 3.16(13) below), but only up to isomorphism, since D may clearly be replaced by any isomorphic copy of D. For any index set I and any family (A_i: i ∈ I) of objects of C, there is an analogous definition of the product of the objects A_i. The reader may easily check that in the category Alg_ρ defined just above, or even in one of its full subcategories CAT 𝒱 arising from a variety 𝒱, the usual product A × B is a product in the sense of this paragraph. Categories C and D are called equivalent iff there exists a functor F: C → D (called an equivalence) such that F: hom_C(A, B) → hom_D(F(A), F(B)) is a bijection


for every pair of objects A, B ∈ C, and such that for every object E of D there exists an object A of C with F(A) ≅ E. In fact, equivalences between categories seem to occur naturally more often than isomorphisms. For instance, the double-dual operation defines an equivalence from the category of finite-dimensional vector spaces over a field K to itself. When we study duality in later volumes, we will see a number of category equivalences. We saw above that in Alg_ρ, the full subcategory determined by a single object (A, 1_A, A) is essentially the same as the endomorphism monoid End A. Therefore, for A an object of an arbitrary category C, we will use the notation End_C A to stand for the monoid that is the full subcategory of C determined by A. (The universe of this monoid is of course hom_C(A, A).) We can likewise extend the notion of automorphism group to such an object A by defining Aut_C A to be the (multiplication) group of all invertible elements in the monoid End_C A. This monoid End_C A has a natural extension to a larger category called the clone of A, which has infinitely many objects and hence cannot be described simply as a group or monoid. Clone_C A (or simply Clone A when the context permits) is defined to consist of the full subcategory of C determined by A and its finite powers (and thus the definition requires all finite powers of A to exist in C). More formally, Clone A is a full subcategory of C with distinct distinguished objects A_i and distinguished morphisms π_j^i ∈ hom(A_i, A) (i ≥ 1, 0 ≤ j < i) such that each A_i is a product of i copies of A via the morphisms π_0^i, ..., π_{i-1}^i, and such that there are no objects besides the objects A_i. (We remark that Clone A is determined up to isomorphism by this definition and does not really depend on the choice of the objects A_i and the morphisms π_j^i.) An (abstract) clone (in the sense of category theory) is defined to be a category C with distinguished objects A_i and morphisms π_j^i (indexed as above) such that, with these objects and morphisms, C = Clone_C A_1. A subclone of a clone C is a subcategory D of C such that D contains all the objects A_i and morphisms π_j^i, and such that each A_i is the product in D of i copies of A_1 via the morphisms π_0^i, ..., π_{i-1}^i. For one natural example, consider an arbitrary set A as an object of SETS. Then Clone_SETS A is most readily constructed by taking each A_i to be the usual set-theoretic product A^i, i.e., A × A × ⋯ × A (i factors). Then for each n ≥ 1, hom_SETS(A^n, A) consists of all n-ary operations on the set A. One can easily check that if C is any subclone of Clone_SETS A, then

F = ⋃_{n≥1} hom_C(A^n, A)

is a family of functions that is closed under composition and contains the coordinate projection functions. Conversely, for every such family F of operations on a set A, there exists a subclone C of Clone_SETS A such that the above equation holds. Such families of operations are known as concrete clones, or clones of operations, or, if the context permits, simply as clones. Starting at the beginning of the next chapter, we will consider this concrete notion of clone in detail, reserving our study of abstract clones for Volume 2 (but see Exercises 16-18 and 21-23 of 3.16 below). In Volume 2 we will say something more about the history of and relation between these notions. Suffice it for the moment to say that the word "clone" was coined in the 1940s by P. Hall, and


that the category-theoretic definition was invented in 1963 by F. W. Lawvere, who used the term "algebraic theory" for what we are calling a clone. As we will see in Volume 2, there is an alternative axiomatization of a clone as an algebra whose universe is a disjoint union A = A_1 ∪ A_2 ∪ ⋯ and whose m-ary operations (for various m) are defined, not on all of A^m, but on certain subsets of the form A_{i_1} × A_{i_2} × ⋯ × A_{i_m}. (A typical model of these axioms is the set F of operations mentioned just above, which has an obvious natural division into sets A_k; for each non-negative k and n, this model has an operation corresponding to the construction of an n-ary operation from one k-ary operation and k n-ary operations, as described below at the beginning of §4.1.) The two definitions are equivalent in a sense that we will make precise in Volume 2. We treat them both on an equal footing, since they are easily interchangeable and since each has its own conceptual advantages. Our working foundation for algebra is (an applied form of) one of the standard set theories; nevertheless, we find category theory useful as an alternative point of view. Let us close this chapter with three illustrations of how category theory provides a valuable viewpoint for general algebra. Firstly, notice that the monoid of isomorphism types (under direct product), which we mentioned near the end of §3.3, can be constructed directly from the category Alg_ρ of all algebras of the appropriate type. Secondly, Clone A and its subcategories End A, Aut A, and so on, are all definable up to isomorphism as subcategories of Alg_ρ. (For Clone A, the direct powers A^i are not definable within category theory except up to isomorphism, but nevertheless any choice of direct powers in the sense of category theory yields the same Clone A up to isomorphism.) Finally, we will devote a good bit of space in later volumes to a study of representation problems for groups and monoids: given a group (or monoid), is it isomorphic to Aut A (or End A) for some A in a given class 𝒦 of algebras? (For instance, in Theorem 3.14 we solved this for all groups and all monoids with 𝒦 the class of multi-unary algebras.) Of course, these problems have the following more general form: Given a monoid M and a category C, find an object A of C such that End_C A ≅ M. From this perspective, it is apparent that representation problems really apply to all categories and need not be limited to monoids and groups. That is, given a small category C and another category D, we may ask whether C embeds as a full subcategory of D. The book of A. Pultr and V. Trnková [1980] is devoted to this and closely related topics. For instance, as a generalization of Cayley's Representation Theorem, along the lines of our Theorem 3.14, they prove that if D = Alg_ρ for ρ a unary type with at least 1 + |C| operations, then C does embed as a full subcategory of D. In fact, there exist categories D into which every small C can be fully embedded regardless of |C|, such as the category of all rings with unit (i.e., CAT ℛ for ℛ the variety of rings with unit). See A. Pultr and V. Trnková [1980] for details.
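As a small illustration of the concrete notion described above (our own example, not from the text): the order-preserving operations on the two-element chain {0, 1} contain the projections and are closed under composition, so they form a concrete clone. The Python sketch below spot-checks this for binary operations.

from itertools import product

A = (0, 1)

def monotone(table, n):
    """table lists the values of an n-ary operation on {0,1} in lexicographic
    order of arguments; monotone means order-preserving in each coordinate."""
    args = list(product(A, repeat=n))
    val = dict(zip(args, table))
    return all(val[x] <= val[y] for x in args for y in args
               if all(a <= b for a, b in zip(x, y)))

# All binary monotone operations on {0,1} (there are 6 of them).
binary = [t for t in product(A, repeat=4) if monotone(t, 2)]

def apply2(table, x, y):
    return table[2 * x + y]

# The two binary projections belong to the family ...
assert (0, 0, 1, 1) in binary and (0, 1, 0, 1) in binary
# ... and composing members gives members again (closure under composition,
# spot-checked for binary operations composed with binary operations).
for G in binary:
    for H1 in binary:
        for H2 in binary:
            comp = tuple(apply2(G, apply2(H1, x, y), apply2(H2, x, y))
                         for x, y in product(A, repeat=2))
            assert comp in binary
print(len(binary), "binary monotone operations form part of a concrete clone")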

Exercises 3.16
1. The domain of x is unique. That is, if C is a category, x, e, f ∈ C, e and f are units, and e·x and f·x are both defined, then e = f.


2. An alternative axiomatization of category theory. Consider a structure (X, Hom, e, m) consisting of the following things: (1) A class X, called the class of objects. (2) For each A, B ∈ X, a class Hom(A, B). (3) For each A ∈ X, an element e_A ∈ Hom(A, A). (4) For each A, B, C ∈ X, a function m_{ABC}:

Hom(B, C) × Hom(A, B) → Hom(A, C).

We now define a CATEGORY to be a structure of this type that obeys the following axioms: (α) If (A, B) ≠ (C, D), then Hom(A, B) ∩ Hom(C, D) = ∅. (β) If z ∈ Hom(A, B), y ∈ Hom(B, C), and x ∈ Hom(C, D), then

m_{ABD}(m_{BCD}(x, y), z) = m_{ACD}(x, m_{ABC}(y, z)).

(γ) If x ∈ Hom(A, B), then m_{ABB}(e_B, x) = x = m_{AAB}(x, e_A).

The exercise here is to prove that the theory of categories is equivalent to the theory of CATEGORIES in the following sense. There are mutually inverse definitions (to supply these fairly obvious definitions is part of the exercise) of a CATEGORY in terms of a category, and of a category in terms of a CATEGORY. (The definition here of "CATEGORY" is more frequently encountered than is the definition of "category" that we have adopted in this chapter. Usually one finds one extra postulate, namely that Hom(A, B) should always be a set. This postulate in fact holds for every category that we plan to discuss in this and future volumes.)
3. Prove that the notion of concrete category as we defined it in this section is equivalent to the following alternative notion. A CONCRETE CATEGORY consists of a class P (whose elements are called objects), together with a function U from P to the class of all sets, and a function HOM from P × P to the class of all sets, satisfying the following axioms. (α) HOM(x, y) ⊆ U(y)^{U(x)}. (β) The identity function 1_{U(x)} ∈ HOM(x, x). (γ) If f ∈ HOM(x, y) and g ∈ HOM(y, z), then g ∘ f ∈ HOM(x, z).

That is, there exist mutually inverse definitions of a CONCRETE CATEGORY in terms of a concrete category, and of a concrete category in terms of a CONCRETE CATEGORY.
4. Let ≤ be an order relation on a set X (i.e., a reflexive, transitive, and antisymmetric subset of X²). Then ≤ forms a small category if we take (a, b)·(c, d) to be (a, d) if b = c, and undefined otherwise. Conversely, if C is a small category in which there is at most one morphism between any two objects, and if no two distinct objects of C are isomorphic, then C derives in this way from an order relation on its set of objects.

5. In the previous exercise, the order is that of a ∨-semilattice (i.e., least upper bounds of pairs of elements exist) if and only if, in the corresponding category, every two objects have a product.
6. Every small category is isomorphic to a subcategory of SETS.
7. Prove that if f is an isomorphism in any category, then f is both a monomorphism and an epimorphism. (The converse is false; see Exercise 9 below.)
8. A morphism (B, f, A) of SETS is a monomorphism iff f is one-to-one, and an epimorphism iff f maps A onto B. For a morphism (B, f, A) in Alg_ρ (or in one of its full subcategories CAT 𝒱 whose objects are determined by a variety 𝒱), we have that (B, f, A) is a monomorphism iff f is one-to-one, and if f maps A onto B, then (B, f, A) is an epimorphism.
9. Let DIST be CAT 𝒟 for 𝒟 the variety of all distributive lattices (i.e., the category whose elements consist of all homomorphisms between two distributive lattices). Prove that there exist a distributive lattice D and a proper sublattice D_0 of D such that the embedding α: D_0 → D is an epimorphism in DIST. This α is therefore a morphism that is a monomorphism and an epimorphism but not an isomorphism.
10. Although we are generally using a different meaning in this work, a groupoid is often taken to be a category in which every morphism is an isomorphism. A connected groupoid is a groupoid in which every two objects are isomorphic. One natural example is the path groupoid P(A) of a topological space A. The elements of P(A) are homotopy classes [γ] of continuous maps γ: [0, 1] → A with respect to homotopies that fix the endpoints. (I.e., [γ] denotes a ~-equivalence class, where γ ~ γ′ iff there exists a continuous F: [0, 1] × [0, 1] → A such that F(x, 0) = γ(x), F(x, 1) = γ′(x), F(0, t) = γ(0), and F(1, t) = γ(1).) If [γ] and [δ] are such homotopy classes, then [γ]·[δ] is defined iff γ(1) = δ(0), in which case [γ]·[δ] is defined to be the homotopy class of γ·δ, where (γ·δ)(t) = γ(2t) for 0 ≤ t ≤ 1/2 and (γ·δ)(t) = δ(2t − 1) for 1/2 ≤ t ≤ 1. Prove that this is a groupoid, and a connected groupoid iff A is a path-connected topological space. What are the objects of this category? If a is an object in this category, what is End a? (It is something very familiar.)
11. A group can be thought of as a groupoid (previous exercise) that has only one object.
12. Prove that the notion of product of objects A_i actually makes sense for i ranging over any index set I. What happens to your definition if I is the empty set?
13. Prove the assertion in the text that the product (in the sense of category theory) is uniquely defined up to isomorphism. More precisely, let A_i (i ∈ I) be some family of objects of a category C. Suppose that D is a product of the objects A_i with projection morphisms π_i, and that E is a product of the objects A_i with projection morphisms ψ_i. Prove that there exists an isomorphism λ ∈ hom(D, E) such that π_i = ψ_i ∘ λ for each i. Conversely, if λ is an
isomorphism in hom(D, E), then the formula π_i = ψ_i ∘ λ defines projection morphisms that make D into a product of the objects A_i.
14. Prove the assertion in the text that the isomorphism type of Clone_C A does not depend on the choice of a full subcategory containing all finite products of A. More precisely, let A be an object of C, and let D and E be full subcategories of C with the following properties. The objects of D are precisely A_1, A_2, ..., and for each i ≥ 1 there exist morphisms π_j^i (0 ≤ j < i) from A_i to A such that A_i is a product of i copies of A via the projection morphisms π_0^i, ..., π_{i-1}^i. Similarly, the objects of E are B_1, B_2, ..., and each B_i is a product of i copies of A via the morphisms ψ_j^i. Then there exists a category isomorphism Λ from D to E such that Λ(π_j^i) = ψ_j^i for all i and j.
15. Prove the assertion in the text that for every clone of operations F on a set A, there exists a subclone C of Clone_SETS A such that

F = ∪_{n ≥ 1} hom_C(A^n, A).

*16. Let C be the category of ordered sets and monotone functions. (Its elements are triples (B, f, A), where A and B are ordered sets and f: A → B is a function such that a ≤ a′ implies f(a) ≤ f(a′).) Let A be the object corresponding to the two-element ordered set {0, 1} with the usual ordering. Describe Clone A.
*17. Let GROUPS be CAT 𝒢 for 𝒢 the variety of groups (i.e., the category whose morphisms consist of all homomorphisms between two groups). Describe Clone_GROUPS A for A an object that corresponds to an Abelian group and for A an object that corresponds to a finite simple group.
18. If D = Clone_C A exists, then Clone_D A exists and Clone_D A = D. As in our definition of the opposite of a groupoid in Exercise 3.15(3), the opposite of a category (C, ·) is the category (C, *), where x * y is defined iff y · x is defined, in which case its value is defined to be y · x. The opposite of C may be denoted C^op.
19. If A is a product of objects A_i (i ∈ I) in C^op, what does this mean in terms of the category C? (I.e., what morphisms have to exist in C, and what properties must they have?) In this situation, A is said to be a coproduct of the objects A_i in C.
20. In GROUPS (defined in Exercise 17 above), the free group on n generators, F_𝒢(x_1, ..., x_n), is the coproduct of n groups, each isomorphic to the free group on one generator. (It is most convenient to represent these n groups as F_𝒢(x_1), F_𝒢(x_2), and so on. We will be using this notation in Exercise 22 below.)
21. Let G be the opposite of GROUPS (defined in Exercise 17 above), and let F_G(1) be the object of G corresponding to the free group on one generator. Prove that Clone_G F_G(1) exists, and describe it.
If C = (C, ·, A_i, π_j^i) and D = (D, ∘, B_i, ρ_j^i) are clones, define a clone-map from C to D to be a functor F: C → D such that F(A_i) = B_i and F(π_j^i) = ρ_j^i for all i and j.


2.1 Fundamental Concepts

is nonempty, for all z ∈ X. This allows us to define a function g from the set of natural numbers into X by the following recursion:

g(0) = F(X)
g(n + 1) = F({x: x ∈ X and x < g(n)}) for all natural numbers n.

Evidently, {g(n): n is a natural number} is a subset of L ordered like the negative integers. ∎

The theorem above, which is actually very straightforward, nevertheless invokes the Axiom of Choice. One more or less immediate consequence of this theorem is that the descending chain condition is sufficient for the representation of elements of a lattice as joins of finitely many join irreducible elements.

T H E O R E M 2.7. If L is a lattice with the ascending chain condition, then every element of L is a meet of finitely many meet irreducible elements; dually, if L is a lattice with the descending chain condition, then every element of L is a join of finitely many join irreducible elements.


Proof. Suppose that L has the descending chain condition. Let X be the set of all elements of L that cannot be written as the join of finitely many join irreducible elements. If X is nonempty, then it would have a minimal element x. In this case, x cannot be join irreducible, so there are elements y and z, each properly less than x, such that x = y ∨ z. Since y and z are both properly less than x, they are not in X. Consequently, y and z can be expressed as joins of join irreducible elements. Thus x can be expressed in the same way. But this means that x cannot belong to X. Thus the supposition that X is nonempty is not tenable. So every element of L can be expressed as the join of join irreducible elements. ∎

This theorem resembles the familiar theorem of arithmetic that every natural number can be written as the product of prime numbers. Here, multiplication is replaced by join and primeness is replaced by join irreducibility. Actually, the connection is closer than it appears at first. The set of natural numbers, endowed with the operations of forming greatest common divisors and least common multiples, is a lattice with the descending chain condition. The join irreducible elements in this lattice are the powers of prime numbers. Of course, a powerful aspect of factorization of numbers into primes is the uniqueness of the factorization. For lattices in general, there may be elements that can be expressed as the join of join irreducible elements in many different ways. Later in this chapter, we will return to this topic and demonstrate that uniqueness can be obtained for some interesting classes of lattices.
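For instance (an illustration of our own, not spelled out in the text): in this lattice of natural numbers, 360 = lcm(8, 9, 5), so 360 is the join of the join irreducible elements 8 = 2³, 9 = 3², and 5, exactly as 360 factors into prime powers.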

Exercises 2.8
1. Construct a lattice that has the two-element lattice as a homomorphic image but has no join prime elements.

C H A P T E R   F O U R

Fundamental Algebraic Results

This chapter is a complete, in-depth presentation of basic universal algebra including concepts, results, perspectives, and intuitions shared by workers in the field. Much of the material implicitly underlies such specialized and highly developed branches of algebra as group theory, ring theory, algebraic geometry, and number theory. It is the starting point for research in universal algebra.

4.1 ALGEBRAS AND CLONES

An algebra is a set endowed with operations. The concept has been formally defined already (see the first definition in Chapter 1). According to our definition, an algebra is an ordered pair A = (A, F) in which A is a nonempty set and F = (F_i: i ∈ I) is a system of finitary operations on the set A. Algebras will frequently be denoted in the form A = (A, F_i (i ∈ I)), or A = (A, Q^A (Q ∈ I)), or more simply as A = (A, ·), B = (B, ·), and so on. The listed operations are called basic operations of the algebra, to distinguish them from other, derived, operations to be discussed in this section. The set of elements of an algebra is called its universe, or sometimes its base set. The cardinality of an algebra is the same as that of its base set, so a finite algebra is one with a finite universe. (It may have an infinite set of basic operations.) An algebra is said to have finite type if its index set is finite. (In the examples above, the set I is the index set of A.) By composition of operations is meant the construction of an n-ary operation h from k given n-ary operations f_0, ..., f_{k-1} and a k-ary operation g, by the rule

h(a_0, ..., a_{n-1}) = g(f_0(a_0, ..., a_{n-1}), ..., f_{k-1}(a_0, ..., a_{n-1})).

(All of these must be operations on the same set A. The non-negative integers k and n can be arbitrary.) We shall write h = g(f_0, ..., f_{k-1}) for the above defined operation h. The projection operations on a set A are the trivial operations p_i^n (with 0 ≤ i < n) satisfying p_i^n(x_0, ..., x_{n-1}) = x_i.
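To fix this bookkeeping computationally, here is a small sketch (ours, not the text's) of projections and composition for operations on a finite set, written in Python; the names proj and compose, and the (arity, function) representation, are our own choices.

    # An n-ary operation on a set A is modeled as a pair (n, f), where f takes n arguments.
    def proj(n, i):
        # The projection p_i^n(x_0, ..., x_{n-1}) = x_i.
        return (n, lambda *xs: xs[i])

    def compose(g, fs):
        # Given a k-ary g and k n-ary operations fs (k >= 1), return h = g(f_0, ..., f_{k-1}).
        k, gfun = g
        assert len(fs) == k and k >= 1
        n = fs[0][0]
        assert all(m == n for (m, _) in fs)
        return (n, lambda *xs: gfun(*(ffun(*xs) for (_, ffun) in fs)))

    # Example on A = {0, 1}: addition mod 2 composed with the two binary projections.
    add = (2, lambda x, y: (x + y) % 2)
    h = compose(add, [proj(2, 0), proj(2, 1)])   # h(x, y) = x + y (mod 2)
    assert h[1](1, 1) == 0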



DEFINITION 4.1. A clone on a nonvoid set A is a set of operations on A that contains the projection operations and is closed under all compositions. The clone of all operations on A will be denoted by Clo A, while the set of all n-ary operations on A (n = 0, 1, 2, ...) will be written as Clo_n A.

Clones furnish important examples of what is called a partial algebra, that is, a set endowed with partial (or partially defined) operations. The composition of a binary operation with two ternary operations, for example, is a 3-ary partial operation on the set Clo A, defined not for all ordered triples of elements but only for those triples (f_0, f_1, f_2) where f_0 belongs to Clo_2 A, and f_1 and f_2 belong to Clo_3 A. Given any algebra, we can construct, from its basic operations, many more by forming compositions. These derived operations constitute a clone. They are frequently more useful than the given basic operations for purposes of understanding the qualities of an algebra and how it is put together.

DEFINITION 4.2. The clone of term operations of an algebra A, denoted by Clo A, is the smallest clone on the base set A that contains the basic operations of A. The set of n-ary operations in Clo A is denoted by Clo_n A. The members of Clo A are called term operations of the algebra A.

This usage is borrowed from logic. The term operations of an algebra are the same as the operations determined by terms built from variables and symbols denoting the basic operations of the algebra. For example, x² + xy and x(x + y) are terms that determine binary operations in any ring; in fact, in each ring they determine the same binary term operation. In §4.11, we shall deal systematically with terms and their connection with term operations.

Our definition of the clone of an algebra is worthless from a computational standpoint. For example, the set of binary term operations of a finite algebra is a finite set, and we ought to be able to determine that set, given the algebra. But in order to apply the definition directly, we would have to construct the (infinite) set of all term operations before we could be sure which binary operations are term operations. The next theorem supplies an effective process for determining the n-ary term operations of any finite algebra with finitely many basic operations, and the process is effective for many other algebras as well. We will need this definition: If f is any k-ary operation on a set A and C is any set of n-ary operations on A, we say that C is closed under composition by f iff g_0, ..., g_{k-1} ∈ C always implies f(g_0, ..., g_{k-1}) ∈ C.


THEOREM 4.3. Let A = (A, F^A (F ∈ I)) be an algebra and let n ≥ 0. Then Clo_n A is the smallest set Γ_n of n-ary operations on A including the n-ary projections and closed under composition by every basic operation F^A of A.

Proof. For each n ≥ 0 let Γ_n be the set defined in the statement of this theorem. Define

Γ = ∪{Γ_n: n ≥ 0}.


We can prove the theorem by proving that Γ = Clo A. It is clear from Definition 4.2 that Clo_n A includes the n-ary projection operations and is closed under composition by each basic operation of A; i.e., we have that Clo_n A ⊇ Γ_n for each n, and thus Clo A ⊇ Γ. To prove that Clo A ⊆ Γ, it will obviously suffice to establish this statement: Let Γ′ be the set of all operations g on A such that for all n, Γ_n is closed under composition by g. Then Γ′ is a clone, and Clo A ⊆ Γ′ ⊆ Γ.

We first establish that Γ′ is a clone. Let k, m, and n be arbitrary, and let h be a k-ary operation, h_0, ..., h_{k-1} be m-ary operations, and f_0, ..., f_{m-1} be n-ary operations on A. We ask the reader to verify the associative law for compositions:

(1)    h(h_0, ..., h_{k-1})(f_0, ..., f_{m-1}) = h(h_0(f_0, ..., f_{m-1}), ..., h_{k-1}(f_0, ..., f_{m-1})).

Now (1) easily implies that Γ′ is closed under compositions. (Choose any k-ary h in Γ′ and any m-ary h_0, ..., h_{k-1} ∈ Γ′. If f_0, ..., f_{m-1} ∈ Γ_n, then h(h_0, ..., h_{k-1})(f_0, ..., f_{m-1}) ∈ Γ_n by (1).) Since Γ′ trivially contains all the projection operations, it is a clone. By the definition of the Γ_n, we see that Γ′ contains the basic operations of our algebra, so we have that Clo A ⊆ Γ′. To see that Γ′ ⊆ Γ, let g be any member of Γ′, say m-ary. Note that

g = g(p_0^m, ..., p_{m-1}^m).

From this it follows that g ∈ Γ_m, since Γ_m is closed under composition by g. The proof is complete. ∎


To illustrate the concept of the clone of an algebra, let A = ({0, 1}, +), the two-element group, with 1 + 1 = 0. Using Theorem 4.3, it is easy to show that Clo_2 A consists of the four operations

f(x, y) = x,  y,  x + y,  and  0 (= x + x).

It is a simple matter to characterize all the term operations of this algebra. For another example, let Q = (Q, +, ·, −, 0, 1) be the ring (or field) of rational numbers. The term operations of Q are easily determined, using Theorem 4.3. They are just the operations defined by the familiar ring polynomials with integer coefficients. Thus, a binary operation f(x, y) on rationals is a term operation of Q iff it can be expressed in the form of a finite sum of integers times powers of x times powers of y. The polynomials with rational coefficients define a larger class of operations on Q. This larger class of operations is an example for the following definition.

DEFINITION 4.4. The clone of polynomial operations of an algebra A, denoted by Pol A, is the smallest clone on the universe A that contains the basic


operations of A and all the constant 0-ary operations. The set of n-ary members of Pol A (n-ary polynomial operations of A) is denoted by Pol_n A.

We remark that our terminology for the derived operations of an algebra is not universally used, although it is gaining wide acceptance. Be aware that in the literature of universal algebra prior to about 1982, the term operations and polynomial operations of an algebra were often called "polynomial functions" and "algebraic functions," respectively.

THEOREM 4.5. Let A = (A, F^A (F ∈ I)) be an algebra and let n ≥ 0. Then Pol_n A is the smallest set Γ_n of n-ary operations on A including the n-ary projections p_i^n (0 ≤ i < n) and all the n-ary constant operations, and closed under composition by all the basic operations of A.


Proof. Comparing Definitions 4.2 and 4.4, we see that Pol A is identical with Clo B, where B has the same base set as A and its basic operations are those of A together with all the constant 0-ary operations. Then Theorem 4.3 can easily be applied to yield this theorem. ∎

There is a direct way to construct all the polynomial operations of an algebra from its term operations. We first give an example and then state the result as a theorem. The function defined by

is a polynomial operation of the ring Q. To see this, let 1 = p_0^1 be the (unary) identity function, let S and M be the (binary) addition and multiplication operations of the ring, let a_0, a_1, a_2 be the 0-ary operations whose values are the constant coefficients of f, and let b_0, b_1, b_2 be the unary constant operations with the same respective values. Then we have b_i = a_i(∅) (i = 0, 1, 2), where ∅ is the empty sequence of unary operations. The function f defined by (2) is obtained from the basic operations of Q, and the projection operations and constants, through compositions in this way:

The operation f can also be obtained by substituting constants directly into a term operation of Q. Namely, we can use

Clearly, g is a 4-ary term operation of Q, and f(x) is obtained by substituting the three constants of (2) for the first three variables of g; or, what amounts to the same thing, f = g(b_0, b_1, b_2, 1), where b_0, b_1, b_2 are the unary constant operations defined above.
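To give one fully explicit instance, with numbers of our own choosing rather than those of the display above: the unary operation f(x) = x² + 3x + 2 on Q arises from the 3-ary term operation t(u, v, x) = x·x + u·x + v by substituting the constants 3 and 2 for the first two variables, f(x) = t(3, 2, x); equivalently, f = t(b, b′, 1), where b and b′ are the unary constant operations with values 3 and 2 and 1 is the identity function p_0^1.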


THEOREM 4.6. Let A be any algebra. If t(x_0, ..., x_{m-1}, u_0, ..., u_{n-1}) is an (m + n)-ary term operation of A and ā = (a_0, ..., a_{m-1}) is an m-tuple of elements of A, then the formula

(3)    p(u_0, ..., u_{n-1}) = t(a_0, ..., a_{m-1}, u_0, ..., u_{n-1}) = t(ā, ū)    (where ū = (u_0, ..., u_{n-1}))

defines an n-ary polynomial operation p of A. Conversely, if p is an n-ary polynomial operation of A (for some n), then there exist m ≥ 0, an (m + n)-ary term operation t of A, and an m-tuple ā of elements of A satisfying (3).

Proof. The proof is quite easy, so we merely describe it. Let Γ be the set of all operations on A (n-ary for any n) for which t ∈ Clo A and ā exist satisfying (3). Show that Γ ⊆ Pol A (using that Pol A is a clone containing Clo A and the constant operations). Show that Γ is closed under compositions and contains the term operations of A and the constant operations. So Γ = Pol A. ∎

DEFINITION 4.7. Let f(x_0, ..., x_{n-1}) be any operation on a set A, and let i < n. We say that f depends on x_i iff there exist n-tuples ā = (a_0, ..., a_{n-1}) and b̄ = (b_0, ..., b_{n-1}) in A^n such that a_j = b_j for all j ≠ i, 0 ≤ j < n, while f(ā) ≠ f(b̄). We say that f is independent of x_i iff f does not depend on x_i. We call f essentially k-ary iff f depends on exactly k of its variables; and we say that f is essentially at most k-ary iff it depends on at most k variables.

THEOREM 4.8. Let k, n ≥ 1 and let f be any n-ary operation on a set A. Then f is essentially at most k-ary iff there is a k-ary operation g on A and a sequence of n-ary projection operations g_0, ..., g_{k-1} such that f = g(g_0, ..., g_{k-1}). Moreover, if f is essentially at most k-ary, then we can take g = f(f_0, ..., f_{n-1}) for some k-ary projection operations f_0, ..., f_{n-1}.


Proof. First suppose that f = g(g_0, ..., g_{k-1}) and the g_j's are projections. There are at most k non-negative integers i < n such that for some j < k, g_j = p_i^n. If i < n is not among these, then each g_j (j < k) is independent of x_i, and from this it clearly follows that f is independent of x_i. Hence f is essentially at most k-ary.

Now suppose that f is essentially at most k-ary. Let i_0 < i_1 < ··· < i_{l-1} < n (l ≤ k) be a list of non-negative integers including all the i such that f depends on x_i. We define k-ary projection operations f_0, ..., f_{n-1} by taking f_{i_j} = p_j^k for 0 ≤ j < l, and f_i = p_0^k for all i < n with i ∉ {i_0, ..., i_{l-1}}. Now we put g = f(f_0, ..., f_{n-1}). Otherwise expressed, we have g(x_0, ..., x_{k-1}) = f(y_0, ..., y_{n-1}), where x_j is occurring at the i_j-th place in f (for 0 ≤ j < l, i.e., y_{i_j} = x_j) and x_0 occupies all other places in f. Since f depends on at most x_{i_0}, ..., x_{i_{l-1}}, it is easy to see that

f(x_0, ..., x_{n-1}) = g(x_{i_0}, ..., x_{i_{l-1}}, x_{i_0}, ..., x_{i_0}).


In other words, if we take

g_j = p_{i_j}^n for 0 ≤ j < l,  and  g_j = p_{i_0}^n for l ≤ j < k,

then we have f = g(g_0, ..., g_{k-1}) and g = f(f_0, ..., f_{n-1}). ∎
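Definition 4.7 can be checked mechanically on a finite set. The following Python sketch (ours, not from the text; the name depends_on is our choice) searches for a pair of tuples differing only in the i-th coordinate on which f takes different values.

    from itertools import product

    def depends_on(f, n, universe, i):
        # Does the n-ary operation f on `universe` depend on its i-th variable (Definition 4.7)?
        for a in product(universe, repeat=n):
            for c in universe:
                b = a[:i] + (c,) + a[i+1:]
                if f(*a) != f(*b):
                    return True
        return False

    # Example: f(x, y, z) = x + y (mod 2) is independent of z, so it is essentially binary.
    f = lambda x, y, z: (x + y) % 2
    print([depends_on(f, 3, [0, 1], i) for i in range(3)])   # [True, True, False]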



The second volume of this work contains much material on the clones of finite algebras and on the abstract clones, which constitute an important generalization of the notion of a semigroup. The set of all clones on a set is lattice-ordered by inclusion. The associated lattice of clones is an algebraic lattice (see Definition 2.15). For a finite set of three or more elements, the cardinality of this lattice of clones is equal to that of the continuum. (A proof of this fact is outlined in the ninth exercise below.) But for A = {0, 1}, there are just countably many clones on A, and in fact every clone on A is the clone of term operations of a two-element algebra with finitely many basic operations. In principle, therefore, one can arrange all of the two-element algebras in a denumerable list (algebras whose sets of term operations are identical being regarded as equal). These results were proved in E. Post [1941]. Using modern results of the theory of finite algebras, we shall present a streamlined version of Post's results in Volume 2.

The concept of the Galois connection associated to a binary relation was introduced in §2.2 (Theorem 2.21 and Examples 2.22). The most basic Galois connection in algebra is the one associated to the binary relation of preservation between operations and relations. (Nearly all of the most basic concepts in algebra can be defined in terms of this relation.) When Q is an n-ary operation on a set A, and R is a k-ary relation over A (for some n and k), then Q preserves R just signifies that R is a subuniverse of the direct power algebra (A, Q)^k. The polarities associated with the relation of preservation (see §2.2) are denoted by △ and ▽. Thus, if Σ is a set of relations over A, then Σ^△ is the set of all operations on A that preserve each and every relation of Σ. And if Γ is a set of operations on A, then Γ^▽ is the set of all subuniverses of finite direct powers of the algebra (A, Γ). Every set of the form Σ^△ is a clone. Observe that the automorphisms, endomorphisms, subuniverses, and congruences of an algebra are defined by restricting the preservation relation to special types of relations. The congruences of an algebra, for example, are the equivalence relations that are preserved by the basic operations of the algebra.
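The preservation relation itself is easy to test on small examples. Here is a Python sketch (our own; the name preserves is our choice) that checks whether an n-ary operation Q preserves a k-ary relation R, i.e., whether R is a subuniverse of (A, Q)^k.

    from itertools import product

    def preserves(Q, n, R, k):
        # Q: n-ary operation; R: a set of k-tuples. Apply Q coordinatewise to any
        # n rows chosen from R and check that the resulting k-tuple is again in R.
        for rows in product(R, repeat=n):
            result = tuple(Q(*(rows[j][i] for j in range(n))) for i in range(k))
            if result not in R:
                return False
        return True

    # Example on A = {0, 1}: min (the lattice meet) preserves the order relation,
    # while addition mod 2 does not.
    leq = {(0, 0), (0, 1), (1, 1)}
    print(preserves(lambda x, y: min(x, y), 2, leq, 2))        # True
    print(preserves(lambda x, y: (x + y) % 2, 2, leq, 2))      # False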

Exercises 4.9
1. Let A be an algebra and n be a non-negative integer. Let B = A^{A^n} be the direct power of the algebra A (defined in the paragraph following Definition 1.5). Show that B = Clo_n A (defined in Definition 4.1). Let F be a k-ary basic operation symbol of A. Show that for any g_0, ..., g_{k-1} ∈ B, we have

F^B(g_0, ..., g_{k-1}) = F^A(g_0, ..., g_{k-1}),


that is, it is the composition of these operations. Now sharpen Theorem 4.3 by proving that Clo_n A is identical to the subuniverse of B generated by the n projections p_0^n, p_1^n, ..., p_{n-1}^n. Thus the n-ary term operations of A form an algebra, which we shall denote by Clo_n A.
2. In the setting of the last exercise, show that Pol_n A is identical to the subuniverse of A^{A^n} generated by the projections and the constant n-ary operations on A.
3. Supply a detailed proof of Theorem 4.6.
4. Let R be any ring with unit. Verify that the term operations of R are precisely the operations that can be defined by a polynomial (in the sense of ring theory) with integer coefficients. Verify that the polynomial operations of R (our Definition 4.4) are the operations on R defined by polynomials with coefficients from R.
5. Construct an algebra with one essentially binary basic operation that has no term operation that depends on more than two variables.
6. Construct an algebra all of whose term operations of rank less than 9 are projections, and which has a 9-ary basic operation that is not a projection.
7. Prove that every subuniverse of an algebra is closed under each of the term operations of the algebra. Then show that the subuniverse generated by a subset S of an algebra A is identical to the set of values of term operations of A applied to elements of S. Show, in particular, that if S = {x_0, ..., x_{n-1}}, then

Sg^A(S) = {t(x_0, ..., x_{n-1}): t ∈ Clo_n A}.

8. Prove that every term operation of an algebra preserves all of its subuniverses and congruences. (See Definitions 1.3 and 1.14.)
*9. This exercise is to construct a one-to-one mapping from sets of positive integers to clones on the set N = {0, 1, 2}. (The existence of such a mapping was proved by Ju. I. Janov and A. A. Mučnik [1959] and independently by A. Hulanicki and S. Świerczkowski [1960].) For any n ≥ 1 let f_n be the n-ary operation defined by letting f_n(x_0, ..., x_{n-1}) be 1 if x_i ≥ 1 for all i and x_i = 1 for precisely one value of i, and letting the value of the operation be 0 in all other cases. For every set S of positive integers define C_S to be the clone generated by {f_n: n ∈ S}. Now show that C_S = C_{S'} iff S = S'; in fact, for any positive k, f_k ∈ C_S iff k ∈ S.
*10. (Every algebra can be constructed from a semigroup.) Let A = (A, F_i (i ∈ I)) be any algebra. Prove that there exists a semigroup S = (S, ·) with A ⊆ S and having elements q_i for i ∈ I such that the following equation holds (for all i ∈ I and, if F_i is n-ary, for all a_0, ..., a_{n-1} in A):

F_i(a_0, ..., a_{n-1}) = q_i · a_0 · a_1 ⋯ a_{n-1}.

*11. An algebra A is called primal iff Clo_n A = Clo_n A for all n ≥ 1, i.e., iff every n-ary operation on the universe is a term operation of A for all n ≥ 1. If an algebra is primal, then the subuniverses of its direct powers are quite special. In particular, the algebra can have no congruences, subuniverses, automorphisms, or endomorphisms other than the trivial ones possessed by every algebra. Prove that the n-valued Post algebra ({0, 1, ..., n − 1}, ∧, ∨, 0, 1, ′)


is primal. ({0, 1, ..., n − 1}, ∧, ∨) is the lattice determined by the ordering 0 < n − 1 < n − 2 < ··· < 1, and x′ = x + 1 for x ≠ n − 1, (n − 1)′ = 0.
12. Let f: A^n → A and g: A^m → A be operations. Show that f is a homomorphism (A, g)^n → (A, g) iff g is a homomorphism (A, f)^m → (A, f) iff f preserves g. (Recall that an n-ary operation is an n + 1-ary relation.) f and g are said to commute if these equivalent conditions hold.
13. Let C be a clone on A (see Definition 4.1). Define C′ to be the set of all operations on A that commute with all members of C. (See the previous exercise.) Show that C′ is a clone and that (C′)′ ⊇ C. C′ is called the centralizer of C in Clo A. Show that if A = (A, ···) is an algebra, and if Clone A is the category defined near the end of §3.6, with objects the powers A^n, then the set of maps of Clone A is identical with the centralizer of Clo A in Clo A.
*14. Let C be the clone on A = {0, 1} that is the clone of term operations of a two-element lattice. Determine the centralizer C′ (defined in the previous exercise) and the double centralizer (C′)′. Is (C′)′ = C?


4.2 ISOMORPHISM THEOREMS

A type (or similarity type) of algebras is a set I and a mapping ρ of I into the set ω of non-negative integers. The elements of I are called operation symbols, of type ρ. An algebra of type ρ is an algebra A = (A, Q^A (Q ∈ I)) in which Q^A is a ρ(Q)-ary operation for every Q ∈ I. The class of all algebras of type ρ (called a similarity class and denoted by Alg_ρ) is closed under the constructions of subalgebras, homomorphic images, and direct products introduced in Chapter 1. Whenever these constructions are discussed, it is always assumed that all algebras mentioned belong to the same similarity class.

A similarity class is a category. If we ignore the elements and the operations of algebras, then we are left with algebras as "objects" and homomorphisms as "morphisms," and these constitute a category as defined in §3.6. Most of the concepts and results presented in this section can be formulated in purely categorical language, but we prefer to use algebraic language in most situations.

The Homomorphism Theorem of §1.4 (Theorem 1.16) is the first of several results relating homomorphisms, congruences, and subalgebras. Collectively, they are called the Isomorphism Theorems. When f: A → B and g: A → C, the existence of a homomorphism h: B → C satisfying g = h ∘ f implies that ker f ⊆ ker g. (Whenever f(x) = f(y) we must have g(x) = g(y).) The converse of this, which holds when f is surjective, is a most useful result. It implies, for instance, that if φ ⊆ γ are congruences of an algebra A, then there is a natural homomorphism of A/φ onto A/γ.

THE SECOND ISOMORPHISM THEOREM (4.10).
i.

Let f: A → B and g: A → C be homomorphisms such that ker f ⊆ ker g and f is onto B. Then there is a unique homomorphism h: B → C satisfying g(x) = h(f(x)) for all x in A. h is an embedding iff ker f = ker g.

ii. Let φ and γ be congruences of A with φ ⊆ γ. Then

γ/φ = {(x/φ, y/φ): (x, y) ∈ γ}

is a congruence of A/φ, and the formula ψ((x/φ)/(γ/φ)) = x/γ defines an isomorphism ψ: (A/φ)/(γ/φ) ≅ A/γ. (See Figure 4.1.)

Figure 4.1 (solid lines for γ-classes, dashed lines for φ-classes)
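To see statement (ii) in a familiar setting (an illustration of our own): let A be the group ℤ of integers, let φ be congruence modulo 12, and let γ be congruence modulo 4, so that φ ⊆ γ. Then γ/φ identifies two residue classes modulo 12 exactly when they agree modulo 4, and the theorem gives (ℤ/φ)/(γ/φ) ≅ ℤ/γ, i.e., a quotient of the twelve-element cyclic group isomorphic to the four-element cyclic group.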

Proof. To prove (i) we notice that since f(x) = f(y) implies g(x) = g(y), and since f is onto B, the relation

h = {(f(x), g(x)): x ∈ A}

is a function from B to C. Clearly g = h ∘ f. To see that h is a homomorphism, let Q be one of the operation symbols, say it is n-ary. Let b_0, ..., b_{n-1} be elements of B, and choose a_i ∈ A with f(a_i) = b_i for 0 ≤ i < n. Then we have

h(Q^B(b_0, ..., b_{n-1})) = h(f(Q^A(a_0, ..., a_{n-1}))) = g(Q^A(a_0, ..., a_{n-1})) = Q^C(g(a_0), ..., g(a_{n-1})) = Q^C(h(b_0), ..., h(b_{n-1})),

which shows that h respects the interpretation of Q. It is clear from the definition of h that h is one-to-one iff g(x) = g(y) implies f(x) = f(y).

To prove (ii), let f: A → A/φ and g: A → A/γ be the quotient homomorphisms. Now the hypotheses of (i) are satisfied. The homomorphism h with g = h ∘ f has ker h = γ/φ. Moreover, for any a, b ∈ A, we have (a/φ, b/φ) ∈ γ/φ iff (a, b) ∈ γ. By the Homomorphism Theorem of §1.4, (A/φ)/(γ/φ) ≅ A/γ, and it is easy to check that ψ is the mapping that accomplishes the isomorphism. ∎

COROLLARY. Every homomorphism f: A → B factors as f = i ∘ π, where π is an onto homomorphism and i is an embedding.


Proof. Let π be the quotient homomorphism of A onto A/ker f. Then ker f = ker π, so this corollary follows from statement (i) in the theorem. ∎

The next theorem conveys a precise image of the relation between the congruence lattice of an algebra and the congruence lattices of its quotient algebras.

DEFINITION 4.11. The least and largest congruences of an algebra (A, ···) are denoted by 0_A and 1_A. (Occasionally, they are denoted simply by 0 and 1.) When elements of a lattice L satisfy a ≤ b, we write I[a, b] for the interval sublattice of L formed of all the elements in the closed interval I[a, b] = {x: a ≤ x ≤ b}.

THE CORRESPONDENCE THEOREM (4.12). Let A be an algebra and let ζ ∈ Con A. The mapping F defined on I[ζ, 1_A] by F(γ) = γ/ζ is an isomorphism of I[ζ, 1_A] with Con A/ζ. (See Figure 4.2.)

Figure 4.2 (the interval I[ζ, 1_A] in Con A and the lattice Con A/ζ)
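As a concrete illustration (our own, not in the text): take A to be the group ℤ of integers and ζ the congruence of equality modulo 12. The congruences of ℤ above ζ are the congruences modulo d for the divisors d of 12, and Theorem 4.12 identifies this interval with Con(ℤ/ζ), the congruence lattice of the twelve-element cyclic group; both are isomorphic to the lattice of divisors of 12.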

Proof. Let π_ζ denote the natural homomorphism of A onto A/ζ. This homomorphism is onto, and its kernel is ζ. From this, one can easily show that the map defined on Con A/ζ by α ↦ π_ζ^{-1}(α), where

(4)    π_ζ^{-1}(α) = {(x, y) ∈ A²: (π_ζ(x), π_ζ(y)) ∈ α},

is a one-to-one order-preserving mapping of Con A/ζ into the interval I[ζ, 1_A]. We observed in the proof of Theorem 4.10 that for any γ ∈ I[ζ, 1_A] we have γ = π_ζ^{-1}(F(γ)). It is also easy to see that F(π_ζ^{-1}(α)) = α for all α ∈ Con A/ζ. The two maps are mutually inverse order-preserving bijections between the lattices in question. Thus each of them is an isomorphism of lattices. ∎

DEFINITION 4.13. Suppose that B is a subset of A and θ is a congruence of A. Denote by B^θ the set {x ∈ A: x θ y for some y ∈ B}. Also define θ|_B to be θ ∩ B², the restriction of θ to B. We ask the reader to verify that if B is a subalgebra of A and θ ∈ Con A, then B^θ is a subuniverse of A and θ|_B is a congruence on B. We denote by B^θ the subalgebra of A with universe B^θ.


THE THIRD ISOMORPHISM THEOREM (4.14). If B is a subalgebra of A and θ ∈ Con A, then B/(θ|_B) ≅ B^θ/(θ|_{B^θ}).

Proof. Verify that f(b/θ|_B) = b/θ|_{B^θ} defines the required isomorphism. ∎

When f is a mapping from a set A into a set B, there are associated mappings from relations over A to relations over B and, conversely, from relations over B to relations over A. The image under f of an n-ary relation R over A is the relation

f(R) = {(f(a_0), ..., f(a_{n-1})): (a_0, ..., a_{n-1}) ∈ R}.

The inverse image under f of an n-ary relation S over B is the relation

f^{-1}(S) = {(a_0, ..., a_{n-1}) ∈ A^n: (f(a_0), ..., f(a_{n-1})) ∈ S}.

".i

Let f be a homomorphism A → B. The image and inverse image under f of subuniverses are subuniverses. The inverse image map f^{-1} from Sub B to Sub A is order preserving; it preserves the meet operation of the lattices but does not in general preserve the join. (See the exercises.) The direct image map from Sub A to Sub B is order preserving; it preserves the join operation of the lattices but does not in general preserve the meet. The inverse image under f of a congruence is a congruence; the direct image of a congruence need not be a congruence. In Theorem 4.12, we proved that if f: A ↠ B is surjective, then the inverse image map and the direct image map, restricted to congruences above the kernel of f, are mutually inverse lattice isomorphisms between Con B and a principal filter in Con A. The notations Sg^A(X) (for the subuniverse of A generated by X ⊆ A) and Cg^A(X) (for the congruence on A generated by X ⊆ A²) were introduced in Definitions 1.8 and 1.19.


THEOREM 4.15.
i. Suppose that f: A → B and that X ⊆ A. Then f(Sg^A(X)) = Sg^B(f(X)).
ii. Suppose that f: A ↠ B is surjective and that X ⊆ A². Then f(ker f ∨ Cg^A(X)) = Cg^B(f(X)).

Proof. To prove (i), note that f(Sg^A(X)) is a subuniverse of B including f(X), and thus Sg^B(f(X)) ⊆ f(Sg^A(X)). On the other hand, f^{-1}(Sg^B(f(X))) is a subuniverse of A including X, and thus also including Sg^A(X). So f(Sg^A(X)) ⊆ Sg^B(f(X)). To prove (ii), let α = (ker f) ∨ Cg^A(X) and β = Cg^B(f(X)). Since f^{-1}(β) ⊇ X ∪ ker f and is a congruence, we have that f^{-1}(β) ⊇ α. On the other hand, in the isomorphism between I[ker f, 1_A] and Con B, α corresponds to a congruence f(α) that includes f(X), and so f(α) ⊇ β. ∎


Exercises 4.16
1. Let X be a subset of an algebra A such that Sg^A(X) = A. (See Exercise 4.9(7).) Let f: A → B and g: A → B be homomorphisms. Prove that if f(x) = g(x) for all x ∈ X, then f = g.


2. Find an example of a homomorphism for which the direct image map does not preserve intersections of subuniverses, and an example for which the inverse image map does not preserve joins of subuniverses.
3. Find an example of an onto homomorphism f: A → B and an α ∈ Con A such that f(α) ∉ Con B. Show that A cannot be a group or a ring. (See Theorem 4.68 in §4.7.)
4. Suppose that f: A ≅ B and B ≤ C (subalgebra). Prove that there exists an algebra D ≥ A and an isomorphism g: D ≅ C with f ⊆ g.
5. In a category, an epimorphism is a map f such that for any two maps g and h, g ∘ f = h ∘ f implies g = h. Prove that in the category of all algebras of a type, epimorphisms are onto homomorphisms.
6. Let A be an algebra and n be a natural number. According to Exercises 4.9(1-2), the sets of n-ary term operations and polynomial operations of A constitute subalgebras Clo_n A and Pol_n A of A^{A^n}. Let f: A → B be an onto homomorphism. Prove that f induces onto homomorphisms f′: Clo_n A → Clo_n B and f″: Pol_n A → Pol_n B.


4.3 CONGRUENCES


In every branch of algebra, the congruences demand attention, and the deepest results often require a systematic study of congruences. The first congruences you met in the study of algebra were most likely the congruences (mod n) on the ring of integers. In groups, congruences are conveniently replaced by normal subgroups, the congruence classes of the unit element. In rings, they are replaced by ideals. In more general algebras, there is no possibility of such a replacement. Congruences of an algebra A are at once subuniverses of A² and equivalence relations over the universe A of the algebra. More exactly, we have the equation

(7)    Con A = Sub A² ∩ Eqv A

in which Eqv A denotes the set of all equivalence relations on A. The sets Con A, Sub A², and Eqv A are lattice ordered by set-inclusion, and we have here three algebraic lattices. (Algebraic lattices were introduced in §2.2, Definition 2.15, and will be discussed at length in §4.6.) The lattice Con A is a sublattice of the full lattice of equivalence relations, Eqv A. It is a complete sublattice of Eqv A; infinite joins and meets of elements in Con A are the same as those computed in Eqv A. The join is the transitive closure of the union, and the meet is the set intersection. In general, Con A fails to be a sublattice of Sub A². (This holds, however, if A is a group or a ring.) Nevertheless, the congruence lattice of any algebra is equal to the subuniverse lattice of another algebra.
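Since the join in Eqv A is the transitive closure of the union and the meet is the intersection, both are easy to compute on a finite set. The following Python sketch (ours, not the text's) computes the join of two equivalence relations given as sets of ordered pairs.

    def join_equivalences(rel1, rel2):
        # Join in Eqv A: the transitive closure of the union of two equivalence relations.
        join = set(rel1) | set(rel2)
        changed = True
        while changed:
            changed = False
            for (x, y) in list(join):
                for (u, v) in list(join):
                    if y == u and (x, v) not in join:
                        join.add((x, v))
                        changed = True
        return join

    # Example on A = {0, 1, 2}: the partitions {0,1 | 2} and {0 | 1,2} join to the
    # all relation, while their meet (set intersection) is the identity relation.
    theta = {(a, a) for a in range(3)} | {(0, 1), (1, 0)}
    psi = {(a, a) for a in range(3)} | {(1, 2), (2, 1)}
    print(len(join_equivalences(theta, psi)))   # 9: every pair of elements is related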


THEOREM 4.17. Let A be an algebra and let B = (A × A, ···) be an algebra whose basic operations are those of A² plus 0-ary operations C_a = (a, a) for each a ∈ A, and a 1-ary operation S and a 2-ary operation T satisfying

S((x, y)) = (y, x),
T((x, y), (u, v)) = (x, v) if y = u, and (x, y) otherwise.

Then Con A = Sub B.

Proof. A subset of A² is a subuniverse of B iff it is a subuniverse of A², is a reflexive and symmetric relation, and is transitive. This theorem follows from (7). ∎

The two following theorems supply useful tools for dealing with congruences. This is an appropriate place to introduce nonindexed algebras. Algebras, as we have known them so far, are indexed; basic operations are given as the values of a function mapping operation symbols to operations. A nonindexed algebra is an ordered pair (A, Γ) in which A, the universe, is a nonempty set and Γ is a set of operations over A, called the basic operations. The definitions of congruences, subuniverses, automorphisms, and endomorphisms of an algebra go over without change to nonindexed algebras. When the context permits, we will allow ourselves to omit the adjective "nonindexed." The next theorem shows that the unary polynomial operations of an algebra determine its congruences.

THEOREM 4.18. For any algebra A, the algebras A, (A, Clo A), (A, Pol A), and (A, Pol_1 A) all possess exactly the same congruences.

Proof. We work with the polarities ▽ and △ between sets of operations on A and sets of equivalence relations over A. (See the final paragraphs of §4.1. For a set Γ of operations, Γ^▽ is the set of all equivalence relations that are preserved by all members of Γ. For a set Σ of equivalence relations, Σ^△ is the set of all operations that preserve all members of Σ.) Let Γ be the set of basic operations of A. We have Con A = Γ^▽; moreover, Γ^▽△ is a clone of operations on A containing the basic operations of A and the constant operations. Therefore Pol A ⊆ (Con A)^△, which means that Con A ⊆ Con(A, Pol A). But Con A ⊇ Con(A, Clo A) ⊇ Con(A, Pol A) because of the reverse inclusions between the sets of basic operations of these algebras. These three algebras are thus shown to have the same congruences.

Clearly, Con A ⊆ Con(A, Pol_1 A). To obtain the reverse inclusion, let β be any equivalence relation belonging to (Pol_1 A)^▽. For an n-ary operation symbol Q and (a_0, ..., a_{n-1}), (b_0, ..., b_{n-1}) ∈ A^n satisfying a_i ≡ b_i (mod β) (0 ≤ i < n), we have to show that Q^A(a_0, ..., a_{n-1}) ≡ Q^A(b_0, ..., b_{n-1}) (mod β). This would follow from the transitivity of β if modulo β the following congruences hold:

-

m ,

e ,

~ * ( a , , . . - , abi+,;.-, ~, bn-,)

= QA (a,;-.,a,-,,

There are clearly unary polynomials of A,

b i , - . -bn-,) , (0 _< i < n).

fi (O L: i < n), such

that the ith

47

2.2 Complete Lattices and Closure Systems

Proof. Let C be an algebraic closure operator on A. We first argue that the compact elements of the lattice of closed sets coincide with the finitely generated closed sets. So suppose that Z is a finite subset of A and let H = C(Z). Let G be any collection of closed sets such that H c VG. Since C is algebraic and Z is finite, there is a finite set Y c UGsuch that Z S C(Y). Thus H 5 C(Y) c C ( ~ G=) V G . Now pick G' c G so that G' is finite and Y c UG'. So H c VG'. Therefore, every finitely generated closed set is compact. Now let H be any compact member of the lattice of closed sets. Evidently, H c V{C(Z): Z

c H and Z is finite).

So there are finitely many finite subsets Zo7Z,,

By letting Y

= Zo U Z,

U

m ,

Zk of H such that

U Zk7we see that

Since H is a closed set, we conclude that H = C(Y). Therefore, every compact member of the lattice of closed sets is finitely generated. The lattice of closed sets is algebraic, since (for any closure operator) every closed set is the union of the closures of its finite subsets. For the converse, suppose that L is an algebraic lattice, and let A be the set of al1 compact elements of L. For each b E L define D, = {a: a ~ A a n ad I b}. It is easy to check that F = {D,: b E L ) is a closed set system and that the map f from L onto P defined by f(b) = D,, for al1 b, is an isomorphism of L onto the lattice F of closed sets (that f is one-to-one follows from the fact that L is algebraic). To see that the associated closure operator is algebraic, we apply Theorem 2.14. Let G = {D,: b EX}be a collection of closed sets directed by E. Notice that for any elements b, c~ L, we have D, E D, iff c 5 b. So the set X is directed by 5 . Let g = V X . Now observe

VX VX'

a E Dgiff a is compact and a iff a is compact and a I for some finite X' iff a is compact and a I b for some b E X iff aED, for some EX iff a E UG. l

cX

So D, = UG, making UG a closed set. Thus the closure operator associated with F is algebraic. ¤



Con Γ. We leave it to the reader to verify that the congruences of Γ are the equivalence relations corresponding to partitioning G into left cosets of a subgroup; that this sets up an isomorphism of Con Γ with Sub G; and that under this isomorphism, the kernel corresponds to the subgroup Stab_A(a). The lemma follows from these facts. ∎

Our next lemma appeared in a paper of P. P. Pálfy and P. Pudlák [1980], where it was used in a proof of the equivalence of two statements: (1) every finite lattice is isomorphic to an interval in the subgroup lattice of a finite group; (2) every finite lattice is isomorphic to the congruence lattice of some finite algebra. At the time of this writing, it is unknown whether these equivalent statements are actually true. A major portion of the proof of their equivalence is sketched in Exercises 7, 8, and 9, where we ask the reader to supply the details.

DEFINITION 4.21. Let A be an algebra and X be a nonvoid subset of A. If f is an n-ary operation on A such that X is closed under f, then by f|_X we denote the restriction of f to X^n, an n-ary operation on X. We denote by (Pol A)|_X the set of all restrictions to X of the polynomial operations of A under which X is closed. We write A|_X for the nonindexed algebra (X, (Pol A)|_X), and call it the algebra induced by A on X.

LEMMA 4.22. Let A be any algebra, let e be a unary polynomial operation of A satisfying e = e ∘ e, and set U = e(A). Then the restriction to U constitutes a complete lattice homomorphism of Con A onto Con A|_U.

lU

ConA -» ConAl, Proof. The restriction mapping is clearly a mapping into the designated lattice, and it trivially preserves al1 intersections. Let y be any congruence of A 1 ., Define

The reader can verify that y^ is a congruence of A. (It is preserved by al1 unary polynomial operations of A-use Theorem 4.18.) It is also easy to check that 91, = y, and that for every congruence a on A we have a c y^ al, G y. From these facts, it follows in particular that 1, maps Con A onto Con Al,, and that y^ is the largest congruence mapping to y, for each y E Con Al,. To see that 1, preserves complete joins, let Os (SE S) be a family of congruences on A and put

(where the second join is computed in Con Al,). From the above remarks, we deduce that Os G y^ for al1 s E S. Thus 0 G y^,and this means that O 1, c y. Trivially y O / , , and so we have shown that which ends the proof.

157

4.3 Congruentes

The congruence lattice of an algebra determines an important property, which we now define.

DEFINITION 4.23. An algebra A is called simple iff it has exactly two congruences, 0_A and 1_A.

EXAMPLES l.

The lattice M, is a simple algebra. Every congruence other than O must contain a pair of elements, one covering the other. By taking meets and joins of these elements with suitably chosen third elements, we can successively derive that every such covering pair belongs to the congruence. Thus al1 elements are identified in the congruence. The lattice M,, is also simple.

,

Con M, :

2.

If R is the ring of al1 k x k-matrices with entries from a division ring F, then R is a simple algebra.

Exercises 4.24 l. Let U be a direct product of two nontrivial unary algebras. (The basic operations are l-ary; each of the two factor algebras has at least two elements.) Prove that Con U is not a sublattice of Sub U2. 2. Prove that if R is a group or a ring, then Con R is a sublattice of Sub R2. 3. If G = (G, -',e) is a group, then Con G = Con(G, 4. If A = (A, A , v , -) is a Boolean algebra, then Con A = Con(A, A , v ). 5. Let S = (S, A ) be a semilattice and S E S. The relation R defined by ( x , y ) ~ R i f f xA S = S - y A S =sisacongruenceofS. 6. Let L be any lattice and suppose that a, b, c, d E L satisfy a < b (i.e., b covers a) and c 4 d. Prove that (c, d) E cgL(a,b) iff there exists f E Pol, L such that { f(a),f (b)} = {c,d}. 7. Prove the converse of Lemma 4.20, that every interval in a subgroup lattice is isomorphic to the congruence lattice of a transitive G-set for some group G. (See Exercise 10 in $3.2.) S ,

S ) .

158

Chapter 4 Fundamental Algebraic Results

"8. (Pálfy and Pudlák [1980].) Let F be a finite algebra whose congruence lattice L has the following properties: L is a simple lattice, and the atoms of L join to 1. (An atom in a lattice is an element that covers the zero element. This assumes, of course, that the lattice has a least, or zero, element.) Show that L is isomorphic to the congruence lattice of a fínite G-set for some group G. Here are some hints for working the exercise. Define C. to be the collection of al1 sets p(F) with p any nonconstant unary polynomial of P. Let M be a minimal under inclusion member of E. Next prove that there exists a unary polynomial e of F satisfying e(F) = M and e = e o e. To accomplish this, define a binary relation p on F by

Show that p is a congruence of F and p # 1, (the largest congruence). Since the atoms of L join to l,, there is an atom a with a $ p, and so a í l p = 0,. Now use this to show that there are p, q E PollF with p(F) = M = q(F) = pq(F). Then the desired e can be found as a power of p. Now Lemma 4.22 implies that L r Con F 1., Show that every nonconstant unary polynomial of FIMis a permutation of M. "9. Find some special property of a lattice such that if the lattice L in the previous exercise has the property, then this implies that the group of nonconstant unary polynomials of FI, acts transitively on M. 10. Verify Example 2. 11. Let X be a (possibly infinite) set of at least 5 elements. Define Ax to be the group of al1 permutations o of X such that for some finite set F c X, o(x) = x for x E X - F and o is the product of a finite even number of 2-cycles. Prove that Ax is a simple group. (See the result of Exercise 10 at the end of $3.5.) 12. Prove that if A is a set of at least two elements, then the lattice Eqv A of al1 equivalence relations on A is a simple algebra. Here are hints. Let R be a congruence on Eqv A different from the identity relation. It must be shown that R = (Eqv A) x (Eqv A). First show that OARafor some atom a. Then if IA( 2 3, use intervals at the bottom of Eqv A isomorphic to Eqv{O, 1,2) to transfer this result to al1 atoms; conclude that OARPfor every atom P. The proof is essentially finished if A is finite. Assume that A is infinite. Partition A into two disjoint sets X and Y of equal cardinality, and let f be a one-to-one function of X onto Y. Let ,u be the equivalence relation such that X is the only equivalence class of ,u having more than one element, and let v be the similar equivalence relation corresponding to Y. Let y be the equivalence relation defined by (x, y) E y iff x = y or f (x) = y or f(y) = x. Show that (,u v v)RIA,then consider the sublattice generated by ,u, v, and y. *13. Prove that if A is finite, then Eqv A is generated by a set of four of its elements. Show that the two-element lattice, and M,, are the only simple lattices generated by three or fewer elements. "14. (Maurer and Rhodes [1965].) Let G be a finite non-Abelian simple group. Then Po1 G = Clo G-Le., every finitary operation on G is a polynomial operation of G. Here is a sketch of a proof. The exercise is to supply the

159

4.4 Direct and Subdirect Representations

details. Write [x, y] for the commutator x-'y-'xy, put xY= y-lxy, and use e to denote the unit element of G. a. For every u # e and v in G there are y,, y, in G (for some n) such that v = uY1uYz.. . uYn. b. For every u # e # v in G there exists y such that [u, vY] # e. c. Let b # e. For every n there is a polynomial f (x,, ,x,) such that f (b, b, - ,b) = b and f (y,, - ,y,) = e whenever for some i, y, = e. (You can take f (x, , - - ,x,) = hui for some elements a,, where h(x,, ,x,) = [x, ,x?, ,xin] for some elements cj.) d. Let b # e. There is a unary polynomial g such that g(e) = b and g(u) = e for al1 u # e. (Say u,, u, are al1 the elements different from e. Choose (by (a)) unary polynomials f , such that f,(e) = e and f,(ui)= b. Choose f (x,, ,x,) as in statement (c). Take g(x) = f (bf,(x)-l, ,bf,(x)-l).) e. Let n 2 1 and a,, a,, b E G. There exists f E Pol, G such that a,) = b and f (x,, ,x,) = e whenever xi # a, for some i. f (a,, *15. (R. Wille [1977].) If L is a finite simple lattice whose atoms join to 1, then every finitary operation on L that preserves the lattice ordering of L is a polynomial operation of L. m ,

n,,,

S ,

m ,

m ,

4.4 DIRECT AND SUBDIRECT REPRESENTATIONS

We introduced the direct product construction in §1.2 and remarked that it can be used to build complicated algebras from simpler ones. Conversely, one can sometimes represent big and seemingly complicated algebras as products of simpler ones. A related construction, the subdirect product, is frequently more useful in this respect than the direct product. In this section we discuss the representations of an algebra as direct or subdirect products, focusing mainly on the direct product. The first thing to note is that homomorphisms into a direct product take a particularly simple form.

ni,,

B,. There is a LEMMA. Let Bi (i E 1)and A be similar algebras and B = hom(A, B,). In fact, for every system natural bgection between hom(A, B) and (f,: i E 1)E JJ i,, hom(A, B,) there is a unique homomorphism f E hom(A, B) satisfying f , = p, o f for a11 i E I, where p, is the projection of B onto Bi.

ni,,

Proof. The function f +t

(p, o f : i E 1) is a mapping of hom(A, B) into

ni,, hom(A, B,). Verify that it is one-to-one. For any systemf, hom(A, B,) (i I) let n (A: iE 1) f denote the function from A to B defined by f(x) E

=

E

=

(f,(x): i E I). Verify that f is a homomorphism and pi o f = fi for al1 i E I.

ni,,

DEFINITION 4.25. An isomorphism f : A z Ai (or the associated system of homomorphisms (f,: i~ 1)) is called a direct representation of A with factors A,. By a subdirect representation of A with factors A, we mean an A, (or the associated system (A: i E 1)) such that f , is embedding f : A 4

ni,,

160

Chapter 4 Fundamental Algebraic Results

onto Ai for al1 iEI. By a subdirect product of (A,: ~ E ' Iwe ) mean an algebra BE A, such that p, maps B onto A, for al1 i E l . (We say that B projects onto each factor.)

ni.,

ni,,

ni

A, and A 5 Bi We shall cal1 two subdirect representations A --* isomorphic if there exist isomorphisms h,: A, r B, (i E I) such that hif, = g, for al1 i~ 1. By Theorem 4.10, f and g are isomorphic iff kerf, = kerg, for al1 i. A subdirect representation f is determined within isomorphism by the system of congruences (kerf,: iEI). Therefore every subdirect representation is isomorphic to one in the form (8)

f : ~ + n ~ / 4 ,' isI

with f(x)

=

( ~ 1 4~ ~ :E Ifor ) a11 x E A.

In al1 work with subdirect products, we shall feel free to deal directly either with an embedding f, with the correlated homomorphisms f,,or with the congruences 4i = kerf,, whichever is most convenient for the work at hand. DEFINITION 4.26. A system (4i: i E 1) of congruences of an algebra A (or a system (f,: i E 1) of homomorphisms from A) is said to separate points of A iff 4i = O,, the identity relation on A (or E ker f, = O,).

ni

ni,,

THEOREM 4.27. A system (φ_i: i ∈ I) of congruences of an algebra A (or a system (f_i: i ∈ I) of homomorphisms from A) gives a subdirect representation iff it separates the points of A.

Proof. The kernel of the mapping (8) is precisely ⋂{φ_i: i ∈ I}; hence this mapping is an embedding iff the system of congruences separates points. (Note that if (f_i: i ∈ I) separates points and f_i: A → A_i, then we have a subdirect representation with factors f_i(A), but it may be that f_i(A) ≠ A_i.) ∎

Theorem 4.27 shows that the subdirect representations of an algebra can be determined (up to isomorphic representations) by examining its congruence lattice. In this sense, the concept of subdirect representation is purely lattice-theoretic. Every subset of the congruence lattice that intersects to zero determines a subdirect representation. Thus an algebra is likely to have many rather different subdirect representations, with entirely different sets of congruences involved. The concept of direct representation, on the other hand, is not a purely lattice-theoretic one; it requires congruences that permute with one another. To see what is involved, we focus on representations with two factors.

DEFINITION 4.28.

1. The relational product of two binary relations α and β is the relation α ∘ β = {(x, y): there exists z with (x, z) ∈ α and (z, y) ∈ β}. Two binary relations α and β are said to permute if α ∘ β = β ∘ α.


2. Let α and β be equivalence relations on a set A. We write 0_A = α × β and call (α, β) a pair of complementary factor relations iff α ∧ β = 0_A and α ∨ β = 1_A, and moreover α and β permute.
3. Let α and β be congruences of an algebra A. Then (α, β) is a pair of complementary factor congruences of A iff 0_A = α × β.
4. By a factor congruence of A we mean a congruence α such that there exists a congruence β satisfying 0_A = α × β.

Thus if α and β are congruences of A, then 0_A = α × β signifies that α and β are complements of one another in Con A and that they permute. We remark that equivalence relations α and β permute iff α ∨ β = α ∘ β.

THEOREM 4.29. A pair (φ_0, φ_1) of congruences of an algebra gives a direct representation iff 0_A = φ_0 × φ_1.

Proof. The reader may easily verify that the homomorphism x ↦ (x/φ_0, x/φ_1) is injective iff φ_0 ∧ φ_1 = 0_A, and maps onto A/φ_0 × A/φ_1 iff φ_0 ∘ φ_1 = 1_A = φ_1 ∘ φ_0. (Either one of these equalities implies the other.) ∎

It is easy to see that a finite system (φ_0, …, φ_{n−1}) of congruences of A is a direct representation iff the φ_i are pairwise permuting factor congruences and for each i < n, (φ_i, ⋂{φ_j: j ≠ i}) is a pair of complementary factor congruences. Here is a characterization of arbitrary direct representations in terms of congruence relations.
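The two-factor situation of Theorem 4.29 can be checked directly on a small algebra. The sketch below (ours, not from the text) takes the mod-2 and mod-3 congruences of the additive group Z_6, verifies that their intersection is the identity relation, that their relational product is everything, and that they permute, so Z_6 ≅ Z_2 × Z_3.

```python
# A small sketch (assumption: congruences are represented as sets of pairs on Z_6).
from itertools import product

A = range(6)
phi0 = {(x, y) for x, y in product(A, A) if x % 2 == y % 2}
phi1 = {(x, y) for x, y in product(A, A) if x % 3 == y % 3}
identity = {(x, x) for x in A}
all_pairs = set(product(A, A))

def compose(r, s):
    # Relational product r o s = {(x, y): exists z with (x, z) in r and (z, y) in s}.
    return {(x, y) for (x, z1) in r for (z2, y) in s if z1 == z2}

assert phi0 & phi1 == identity                        # meet is 0_A
assert compose(phi0, phi1) == all_pairs               # composition already gives 1_A ...
assert compose(phi0, phi1) == compose(phi1, phi0)     # ... and the two congruences permute
print("Z_6 is directly represented with factors Z_2 and Z_3")
```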

DEFINITION 4.30. For any system (φ_i: i ∈ I) of congruences of an algebra A we write 0_A = ∏_{i∈I} φ_i iff 0_A = ⋂{φ_i: i ∈ I} and for every x̄ = (x_i: i ∈ I) ∈ A^I there is an element x ∈ A such that (x, x_i) ∈ φ_i for all i ∈ I.

THEOREM 4.31. A system (φ_i: i ∈ I) of congruences of an algebra A constitutes a direct representation iff 0_A = ∏_{i∈I} φ_i.

Proof. It is easily verified that the two conditions of Definition 4.30 are equivalent to the injectivity and surjectivity of the mapping defined by formula (8). ∎

To every direct representation of an algebra with n factors is naturally associated a certain n-ary operation on the universe. The factor congruences of the representation and this operation are interchangeable, each definable directly from the other. The operations associated with direct representations, called decomposition operations, play a significant role in clone theory and in the theory of varieties; they will become quite familiar to the reader of these volumes. We deal here only with binary decomposition operations, saving the extension to n factors for an exercise.


DEFINITION 4.32. A (binary) decomposition operation of an algebra A is a homomorphism f: A² → A satisfying the following equations (for all x, y, z ∈ A):

(9)    f(x, x) = x,    f(f(x, y), z) = f(x, z) = f(x, f(y, z)).

Any operation f on a set A satisfying (9) will be called a decomposition operation on A. Given an algebra A = B × C, we define the canonical decomposition operation by setting f((b, c), (b', c')) = (b, c'). Clearly, this is a homomorphism of A² onto A, and it satisfies equations (9). Conversely, every decomposition operation on a set gives rise to a direct representation in which the operation takes this simple form.

LEMMA. Suppose that f: A² → A satisfies (9).

i. For all x, y, u, v ∈ A we have f(x, u) = f(y, u) ↔ f(x, v) = f(y, v), and f(u, x) = f(u, y) ↔ f(v, x) = f(v, y).
ii. The relations γ_0 = {(x, y): f(x, y) = y} and γ_1 = {(x, y): f(x, y) = x} are equivalence relations on A satisfying 0_A = γ_0 × γ_1. For all (x, y) ∈ A², f(x, y) is the unique element z ∈ A satisfying x γ_0 z γ_1 y.
iii. (γ_0, γ_1) is a pair of complementary factor relations of the algebra ⟨A, f⟩.
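The canonical decomposition operation and the factor relations it induces can be verified mechanically on a small product. The Python sketch below is ours, not from the text; the relation names gamma0 and gamma1 correspond to the γ_0, γ_1 of the lemma above.

```python
# A small sketch (assumption: A = B x C with B = {0,1}, C = {0,1,2}).
from itertools import product

B, C = range(2), range(3)
A = list(product(B, C))

def f(x, y):
    # The canonical decomposition operation f((b,c),(b',c')) = (b, c').
    return (x[0], y[1])

# The decomposition-operation identities (9).
for x, y, z in product(A, A, A):
    assert f(x, x) == x
    assert f(f(x, y), z) == f(x, z) == f(x, f(y, z))

# gamma0 identifies elements with the same B-coordinate, gamma1 those with the same
# C-coordinate, and f(x, y) is the unique z with x gamma0 z gamma1 y.
gamma0 = {(x, y) for x, y in product(A, A) if f(x, y) == y}
gamma1 = {(x, y) for x, y in product(A, A) if f(x, y) == x}
for x, y in product(A, A):
    z = f(x, y)
    assert (x, z) in gamma0 and (z, y) in gamma1
print("canonical decomposition operation verified on a 6-element product")
```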

(where n > 1) is commutative and can be subdirectly embedded into a product of finite fields that obey the equation. The Subdirect Representation Theorem easily implies that Jacobson's theorem is equivalent to the assertion that every subdirectly irreducible ring obeying xⁿ ≈ x (where n > 1) is a finite field. Each finite ring obeying the equation is directly decomposable as the product of fields. (This follows from Jacobson's theorem together with a lemma that precedes Theorem 4.76 in §4.7.) Jacobson's theorem follows from three assertions, two of which we shall prove. Let n be a fixed integer greater than 1. By a division ring we mean a ring having a unit element whose nonzero elements form a group under multiplication. Note that a commutative division ring is a field. (Division rings are sometimes called skew fields.) Here are the assertions. (1) Every subdirectly irreducible ring obeying xⁿ ≈ x is a division ring. (2) A noncommutative division ring obeying xⁿ ≈ x contains a finite noncommutative division ring. (3) Every finite division ring is commutative. If the three assertions are true, then every subdirectly irreducible ring obeying xⁿ ≈ x is a field. The finiteness of such a field is due to the fact that the polynomial xⁿ − x can have at most n roots in a field. The first two assertions are proved in the lemmas and corollary below. The third is a well-known theorem of Wedderburn [1905], which is proved in most textbooks on modern algebra above the elementary level. (For example, a proof of Wedderburn's theorem will be found in Jacobson [1985], p. 453.)

DEFINITION 4.45. By an ideal in a ring (R, +, ·, −, 0) is meant a subset J of R satisfying:

1. 0 ∈ J, and for all x, y ∈ J the elements x + y and −x belong to J. (J is a subgroup of (R, +).)
2. For all x ∈ J and y ∈ R, both x·y and y·x belong to J.

LEMMA 1. For any ring R there is a one-to-one order-preserving correspondence between the ideals of R and the congruences of R. An ideal J and the corresponding congruence ξ satisfy:

i. x ∈ J ↔ (0, x) ∈ ξ.
ii. (x, y) ∈ ξ ↔ x − y ∈ J.


Proof. The correspondence between homomorphisms from a ring and ideals in the ring is a basic result proved in any undergraduate course in modern algebra. The correspondence between homomorphisms and congruences in general algebras has been worked out in §1.4 and throughout this chapter. Clearly, these two lines of thought must link up; there must be some correspondence between ideals and congruences in a ring. We leave it to the reader to verify that this is it. ∎
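For a finite ring the correspondence of Lemma 1 can be checked exhaustively. The sketch below (ours, not from the text) pairs the ideal {0, 4, 8} of Z_12 with the relation "x − y lies in the ideal" and verifies conditions (i) and (ii) together with compatibility with the ring operations.

```python
# A small sketch (assumptions: the ring is Z_12 and J is the ideal generated by 4).
n = 12
R = range(n)
J = {0, 4, 8}

xi = {(x, y) for x in R for y in R if (x - y) % n in J}

# (i): membership in J is the same as being congruent to 0.
assert all((x in J) == ((0, x) in xi) for x in R)

# xi is a congruence: it is compatible with + and * of the ring.
for x, y in xi:
    for u, v in xi:
        assert ((x + u) % n, (y + v) % n) in xi
        assert ((x * u) % n, (y * v) % n) in xi
print("ideal/congruence correspondence verified in Z_12")
```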

LEMMA 2. Every subdirectly irreducible ring in which xⁿ = x holds for all x, for some integer n > 1, is a division ring.

Proof. Let R be a subdirectly irreducible ring in which xⁿ = x holds for all x, where n is a fixed integer greater than 1. The set of all elements x ∈ R such that x·y = y·x for all y ∈ R is called the center of R. Elements of the center are also called central elements. An element e ∈ R is said to be idempotent iff e·e = e. We prove a series of statements.

i. For each x ∈ R, e = x^{n−1} is an idempotent and e·x = x·e = x.
ii. For all x, y ∈ R, x·y = 0 iff y·x = 0.

The proof of (i) is easy. We have e² = x^{n−2}·xⁿ = x^{n−2}·x = x^{n−1} = e and ex = xe = xⁿ = x. To prove (ii), suppose that xy = 0. Then yx = (yx)ⁿ = y·(xy)^{n−1}·x = 0.

iii. Every idempotent of R is central.

To prove (iii), let e be idempotent and x be any element. Now e·(x − ex) = 0. Hence (by (ii)), xe − exe = (x − ex)·e = 0. Thus xe = exe, and we can obviously prove in the same way that ex = exe. Since R is subdirectly irreducible, it is (a, b)-irreducible for some a ≠ b in R. (See Definition 4.41, Theorem 4.40.) Taking u = a − b, we easily see that R is (0, u)-irreducible. Then replacing u by u^{n−1} = e, we find that R is (0, e)-irreducible. (Each of these pairs of elements generates the same congruence.) Translating this concept from congruences to ideals, using the lemma, we have:

iv. e ≠ 0, ee = e, and e ∈ J whenever J ≠ {0} is a nonzero ideal.

Now since e lies in the center of R by (iii), the set ann(e) = {x: xe = 0} can be checked to be an ideal of R. Clearly, e ∉ ann(e), so ann(e) = {0} by (iv). In particular we must have ex − x = 0 = xe − x for all x. Thus

v. e is a two-sided unit element of R.

Let us redenote this element by 1. We conclude by proving the following statement, showing that x^{n−2} is a multiplicative inverse of x for all x ≠ 0 in R, and hence that R is, in fact, a division ring.

vi. For all x ≠ 0 in R, x^{n−1} = 1.

To prove it, let x ≠ 0 and f = x^{n−1}. Since f² = f, f·(1 − f) = 0. Thus ann(1 − f)

1

177

4.5 The Subdirect Representation Theorern

is a nonzero ideal of R (it contains f ). By (iv), 1 (=e) belongs to this ideal. So we have O = 1 (1 - f ), or equivalently, 1 = f. ¤

COROLLARY 4. Every ring obeying un equation xn Ñ x for un integer n > 1 is a subdirect product of finite fields that obey the equation.

I

i

Proof. Every homomorphic image of a ring that obeys the equation must also obey it. Theorem 4.44 reduces the work to proving that every subdirectly irreducible ring obeying the equation is a finite field. By the last lemma, we can assume that R is a division ring obeying xn Ñ x, where n is a fixed integer greater than 1. We wish to prove that R is commutative. (By an earlier remark, if R is commutative, then it must be a finite field.) Our strategy will be to show that if R is not commutative, then it contains a finite noncommutative division ring, which contradicts Wedderburn's theorem. In order to read this proof, the reader needs to know a modest amount of the elementary theory of fields. We shall assume that n > 2, for if n = 2, then it is elementary to prove that R is commutative (see Exercise 12 below). Observe that every subring S of R that contains a nonzero element is a division ring, because x " - ~is the multiplicative inverse of x when x # O. Also, if S is commutative, then S has at most n elements, because every element of S is a root of the polynomial xn - x. Now let A be a maximal commutative subring of R. Thus A is a field and 1 Al I n. Also, for every x E R - A there is an element a € A such that xa # ax, since A is a maximal commutative subring. We shall now assume that A # R and derive from this assumption a contradiction. Since A is a finite field, there is a prime integer p and an integer k 2 1 such that [Al = pk and pl = O in A. Then it follows that px = O for al1 X E R. The multiplicative group of the field A is cyclic; thus let a be an element satisfying A = {O, 1, a', apkW2}. Notice that R is naturally a vector space over A. The product ux of u E A and x E R is just the product of these elements in the ring R. Let EndAR denote the ring of linear transformations of this vector space. We define a mapping D: R -+R, namely, for x E R we put D(x) = xa - ax. Then it is clear that D E EndAR. Let also I denote the identity map-i.e., the unit element of the ring EndAR. Now EndAR is naturally a vector space over A, and we have that the element D commutes with u1 for al1 U E A. Let A[z] be the ring of polynomials over A in the indeterminate z. Thus there exists a homomorphism p mapping A[z] onto the commutative subring of EndAR generated by D and the uI, with p(z) = D and p(u) = uI. We shall need the following expansions for the powers of D, the truth of which can easily be verified by induction. If m 2 1 and x E R, then S

Taking m

=p

,

in the above, and noting that

(P)

is divisible by p for O < i < p,

we get that DP(x)= xaP - apx, and then DP'(X)= xap' - ap'x. Since apk= a, it follows from the above that DP"= D. In other words, D is

178

Chapter 4 Fundamental Algebraic Results

a zero of the polynomial f (2) = zpk - z E A[z]. Now f (z) has pk distinct roots in A (in fact every elenient of A is a root) and so has the factorization

in A[z]. It follows that in End, R we have

Notice that ker D = {x E R: D(x) = O} is precisely A (because A is a maximal commutative subring and is generated by a). Now let us choose any x E R - A and define x, = D(x), x, = (D - 1)(x,), x, = (D - al) (x,), and so on, running through al1 the maps D - ul (u E A). Since x $ A, we have x, # O. Then the preceding displayed equation implies that there must exist j 2 O such that ( = xj # O and xj+, = O = (D - uI)(c) for a certain u E A - {O}. This means that

where c E A. Notice that since u # O and # 0, the above equation implies that D(() # O-i.e., $A. Now every nonzero element of A is of the form w = aj, and then from the equation above we also have cw = cj(. These facts imply that the set T = {g(c):g(z) E A [z] has degree < n} is a subring of R. Since A U 15) c T, then T is a noncommutative division ring. But T is obviously finite. This contradicts Wedderburn's theorem. The concept defined below helps to illuminate al1 of the preceding examples.

DEFINITION 4.46. Let A be an algebra and κ be a cardinal number.

1. We say that A is residually less than κ iff for each pair of distinct elements a and b in A, there is an algebra B satisfying |B| < κ and a homomorphism f: A → B with f(a) ≠ f(b). We say that A is residually finite iff for every pair of elements a ≠ b in A there exists a homomorphism f of A into a finite algebra with f(a) ≠ f(b).
2. A class 𝒦 of algebras is called residually less than κ, or residually finite, iff every member of 𝒦 has the same property.

COROLLARY 5. Suppose that 𝒦 = H(𝒦). Then 𝒦 is residually less than κ iff every subdirectly irreducible member of 𝒦 has cardinality less than κ, and 𝒦 is residually finite iff every subdirectly irreducible member of 𝒦 is finite.

Proof. We assume that 𝒦 = H(𝒦), i.e., that every homomorphic image of an algebra in 𝒦 belongs to 𝒦. Thus by the Subdirect Representation Theorem, every algebra in 𝒦 is a subdirect product of subdirectly irreducible algebras in 𝒦. Thus in each of the assertions, the sufficiency of the condition (in order that 𝒦 have the residual property) follows from the Subdirect Representation Theorem. Now for the necessity, suppose first that 𝒦 is residually less than κ and let A be a subdirectly irreducible member of 𝒦. Choose a and b such that A is (a, b)-irreducible. There exists a homomorphism f of A into an algebra of cardinality less than κ satisfying f(a) ≠ f(b). This inequality implies that ker f = 0_A, since A is (a, b)-irreducible. Thus f is one-to-one, and it follows that the cardinality of A is likewise less than κ, as desired. The proof of the statement regarding residual finiteness is just the same. ∎

DEFINITION 4.47. A class of algebras is residually small iff it is residually less than some cardinal. A class of algebras is residually large iff it is not residually small.

The classes of Boolean algebras, semilattices, and distributive lattices are residually < 3 (Examples 1 and 2 and Corollary 2). The class of rings satisfying x⁵ ≈ x is residually < 6, since every field satisfying this identity has just 2, 3, or 5 elements (Example 4 and Corollary 4). The class of monadic algebras is residually large, since there exist simple monadic algebras having any infinite cardinality (Example 3 and Lemma 2). The class of finite lattices is not residually less than any finite cardinal. (This follows by Example (d) after Definition 4.41.) The class of all lattices is residually large, for there exist simple lattices of cardinality larger than any given cardinal number. (See Exercise 4.24(12) for an example.) The class of all groups is residually large, for the same reason. (See Exercise 4.24(11).) One of our main concerns throughout these volumes will be the study of varieties (classes of similar algebras closed under the formation of homomorphic images, subalgebras, and direct products). Residually small varieties tend to have a rather nice structure theory for their members, which residually large varieties usually do not possess. The class of monadic algebras supplies an example of the phenomenon. On the face of it, monadic algebras would appear to be not much more complicated than Boolean algebras, since every monadic algebra is a subdirect product of special algebras, which are just Boolean algebras with an additional, but trivial, operation. Yet it is possible to build any finite graph into an infinite monadic algebra, in such a way that the graph (its vertices and edges) can be recovered from the algebra through algebraic definitions. Such constructions are impossible in Boolean algebras, indicating a considerable difference in the complexity of these two varieties.
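The claim about rings satisfying x⁵ ≈ x is easy to spot-check. The Python sketch below is ours, not from the text; it only examines prime fields Z_p, for which the identity holds exactly when p − 1 divides 4.

```python
# A quick check (assumption: we test only the prime fields Z_p).
def satisfies_x5_eq_x(p):
    return all(pow(x, 5, p) == x for x in range(p))

for p in [2, 3, 5, 7, 11, 13]:
    print(p, satisfies_x5_eq_x(p))   # True for 2, 3, 5; False for the larger primes
```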


Exercises 4.48

1. An algebra A is called semi-simple iff it can be subdirectly embedded into a product of simple algebras. Prove that an algebra is semi-simple iff 0_A is the intersection of a set of maximal members in Con A.
2. Verify that the quasi-cyclic groups Z_{p^∞} (Example (e) following Definition 4.41) are subdirectly irreducible. Determine the structure of their congruence lattices.


3. (D. Monk) An algebra A is called pseudo-simple iff |A| > 1 and for every θ < 1_A in Con A, A/θ ≅ A. (The quasi-cyclic groups are pseudo-simple.) Prove that if A is pseudo-simple, then Con A is a well ordered chain.
4. Verify that the two lattices of Examples (c) and (d) following Definition 4.41 do have the congruence lattices that were pictured.
5. Represent the three-element lattice as a subdirect product of two two-element lattices.
6. Show that any linearly ordered set of at least two elements is a directly indecomposable lattice.
*7. Prove that every subdirectly irreducible Abelian group is cyclic or quasi-cyclic.
8. Let A be the set of functions from ω to {0, 1}. Define A = ⟨A, f, g⟩ to be the bi-unary algebra with f(a)(i) = a(i + 1) and g(a)(i) = a(0). Show that A is subdirectly irreducible.
9. For each infinite cardinal κ, construct a subdirectly irreducible algebra with κ unary operations having cardinality 2^κ.
10. Prove that every algebra having at most κ basic operations, all of them unary, is residually ≤ 2^κ (where κ is an infinite cardinal).
11. Describe all subdirectly irreducible algebras with one unary operation. (They are all countable.)
12. Prove directly (without invoking Corollary 4) that a ring obeying the equation x² ≈ x obeys also x + x ≈ 0 and xy ≈ yx. (This class of rings is discussed in §4.12.)
*13. Prove directly that a ring obeying x³ ≈ x obeys also xy ≈ yx.
14. Let A be the algebra ⟨{0, 1, 2, 3}, ∘⟩ in which a ∘ b = a if 1 ≤ a, b ≤ 3 and |a − b| ≤ 1, and all other products are 0. Show that A is simple. Show that there exist simple homomorphic images of subdirect powers of A having any prescribed cardinality ≥ 4. (The variety generated by A is not residually small.)
15. Let S be the semigroup ⟨{0, 1, 2}, ∘⟩ in which 2 ∘ 2 = 1 and all other products are 0. Show that there exist subdirectly irreducible homomorphic images of subdirect powers of S having any prescribed cardinality ≥ 3.
16. Describe all simple commutative semigroups and prove that they are all finite.
17. Prove that no commutative semigroup has the congruence lattice pictured below.


18. A Heyting algebra is an algebra (H, ∧, ∨, →, 0, 1) such that (H, ∧, ∨) is a lattice, 0 and 1 are the least and largest elements of this lattice, and this condition holds for all x, y, and z: x → y ≥ z iff z ∧ x ≤ y.
i. Prove that the lattice in a Heyting algebra is distributive.
ii. Prove that Heyting algebras can be defined by a finite set of equations.
iii. Prove that a Heyting algebra is subdirectly irreducible iff it has a largest element b < 1. (Show that for each a ∈ H, … is a congruence.)
iv. Prove that the only simple Heyting algebra is the two-element one.

4.6 ALGEBRAIC LATTICES

The class of algebraic lattices derives its name from the fact that these are precisely the lattices normally encountered in algebra. A lattice is algebraic if and only if it is isomorphic to the subuniverse lattice of some algebra. It is a much deeper fact that a lattice is algebraic if and only if it is isomorphic to the congruence lattice of some algebra. We prove the easier fact in this section. We begin by reviewing the definition of algebraic lattices, from §2.2. An element c of a lattice L is said to be compact iff for all X ⊆ L, if ⋁X exists and c ≤ ⋁X, then c ≤ ⋁X′ for some finite set X′ ⊆ X. A lattice L is said to be algebraic iff L is complete and every element of L is the join of a set of compact elements of L. The congruence lattice of any algebra is an algebraic lattice. In 1963, G. Grätzer and E. T. Schmidt proved that the converse is also true: every algebraic lattice is isomorphic to the congruence lattice of a suitably constructed algebra. A proof of the Grätzer-Schmidt Theorem along the lines of the original argument will be found in Grätzer's book [1979] (Chapter 2). A streamlined proof has been given by P. Pudlák [1976]. Beyond the Grätzer-Schmidt Theorem, the natural question of which algebraic lattices may be represented as congruence lattices in specific classes of algebras has produced some deep results, although many hard and interesting problems remain unsolved. (We discuss this subject thoroughly in a later volume.) We remark that although every finite lattice is algebraic, it remains unknown whether every finite lattice is isomorphic to the congruence lattice of a finite algebra. (This question about the congruence lattices of finite algebras has been discussed before, just prior to Definition 4.21.) In this section we introduce a number of interesting facts about algebraic lattices, whose proofs do not require a lot of effort or sophisticated machinery. An important class of modular algebraic lattices will be examined in §4.8. Our first lemma supplies some basic properties of algebraic lattices that are often useful. Recall that for elements u and v in a lattice L, I[u, v] designates the interval {x: u ≤ x ≤ v} and 𝐈[u, v] designates the sublattice of L having the


interval as its universe. We say that v covers u, written u ≺ v, iff I[u, v] is a two-element set.

LEMMA 4.49. Let L be any algebraic lattice.

i. If u and c are elements of L such that u < c and c is compact, then there exists an element m such that u ≤ m ≺ c.
ii. If u < v in L, then the interval sublattice 𝐈[u, v] is an algebraic lattice.
iii. If x and y are elements of L such that x < y, then there exist a, b ∈ L satisfying x ≤ a ≺ b ≤ y.

Proof. To prove (i), we use Zorn's Lemma. The set {x: u ≤ x < c} is closed under the formation of joins of chains, since c is compact. Therefore it has a maximal element m. Such an element will be covered by c. To prove (ii), note that 𝐈[u, v] is complete, since L is, and that for every compact element c ≤ v of L, u ∨ c satisfies the condition to be a compact element of 𝐈[u, v]. Then it is clear that every element of 𝐈[u, v] is a join of compact elements of 𝐈[u, v]. To prove (iii), apply (ii) and (i). The lattice 𝐈[x, y] must have a compact element b > x. By (i), this lattice has an element a covered by b. ∎

There is a close connection between algebraic lattices and algebraic closure operators. In fact (Theorem 2.16), a lattice is algebraic iff it is isomorphic to the lattice of closed sets for some algebraic closure operator. The next easy lemma gives us an opportunity to recall the definition of an algebraic closure operator.

LEMMA 4.50. Let A be any algebra. Sg^A is an algebraic closure operator whose lattice of closed sets is identical with Sub A, the lattice of subuniverses of A.

Proof. According to Definition 1.8, Sg^A(X) is the smallest subuniverse of A containing the set X. Since a set X ⊆ A is a subuniverse of A iff X = Sg^A(X), the subuniverses of A are precisely the closed sets for Sg^A. That C = Sg^A is an algebraic closure operator is equivalent to the satisfaction of the following properties (by Definitions 2.10 and 2.13):

(12i) X ⊆ C(X) for all X ⊆ A;
(12ii) C(C(X)) = C(X) for all X ⊆ A;
(12iii) if X ⊆ Y, then C(X) ⊆ C(Y); and
(12iv) C(X) = ⋃{C(Z): Z ⊆ X and Z is finite} for all X ⊆ A.

The verifications are entirely straightforward. (This result is the same as Corollary 1.11.) ∎

THEOREM 4.51. If C is an algebraic closure operator on a set A, then there is an algebra A with universe A such that C = Sg^A.


Proof. Let C be an algebraic closure operator on A. We define

T = {ā: for some n ≥ 0, ā = (a_0, …, a_{n−1}, a_n) and a_n ∈ C({a_0, …, a_{n−1}})}.

We endow A with an operation f_ā corresponding to each ā = (a_0, …, a_n) in T. f_ā is the n-ary operation defined by

f_ā(x_0, …, x_{n−1}) = a_n if x_i = a_i for all i < n, and f_ā(x_0, …, x_{n−1}) = x_0 otherwise.

Let A = ⟨A, f_ā (ā ∈ T)⟩. Note that A has 0-ary operations iff C(∅) ≠ ∅. Now let X ⊆ A. To prove that Sg^A(X) ⊆ C(X), we recall that X ⊆ C(X) and we show that C(X) is a subuniverse of A. Suppose that ā = (a_0, …, a_n) ∈ T, and that {b_0, …, b_{n−1}} ⊆ C(X). If a_i = b_i for all i < n, then C({a_0, …, a_{n−1}}) ⊆ C(C(X)) = C(X), implying that f_ā(b_0, …, b_{n−1}) = a_n ∈ C(X). In the remaining case, f_ā(b_0, …, b_{n−1}) = b_0, which certainly belongs to C(X). This argument shows that C(X) is a subuniverse of A. To prove that C(X) ⊆ Sg^A(X), we use the fact that Sg^A(X) is closed under the operations of A, and the fact that C satisfies (12iv). Thus it will suffice to show that C(Z) ⊆ Sg^A(X) for every finite subset Z ⊆ X. Let Z be such a finite set. Say Z = {c_0, …, c_{k−1}}, and let c be any element of C(Z). Clearly, c = f_c̄(c_0, …, c_{k−1}) where c̄ = (c_0, …, c_{k−1}, c), implying that c ∈ Sg^A(X). That concludes the proof. ∎

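The construction in the proof of Theorem 4.51 is concrete enough to run on a small example. The Python sketch below is ours, not from the text; it uses the downward-closure operator on {0, …, 4} as the algebraic closure operator C, builds the operations f_ā (with tuples of length at most 2 for the demonstration), and checks that generated subuniverses agree with closed sets.

```python
# A small sketch (assumptions: A = {0,...,4}, C(X) = downward closure of X, and only
# tuples abar of length <= 2 are used, which already suffices for this C).
from itertools import product, chain, combinations

A = range(5)

def C(X):
    return {y for y in A for x in X if y <= x}

T = [t + (b,) for n in range(1, 3) for t in product(A, repeat=n) for b in C(set(t))]

def f(abar, args):
    *prefix, last = abar
    return last if list(args) == prefix else args[0]

def Sg(X):
    # Close X under all of the operations f_abar.
    S = set(X)
    changed = True
    while changed:
        changed = False
        for abar in T:
            for args in product(list(S), repeat=len(abar) - 1):
                v = f(abar, args)
                if v not in S:
                    S.add(v)
                    changed = True
    return S

for X in map(set, chain.from_iterable(combinations(A, r) for r in range(1, 3))):
    assert Sg(X) == C(X)
print("generated subuniverses agree with the closed sets")
```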

COROLLARY. (Birkhoff and Frink [1948].) For every algebraic lattice L there exists an algebra A with L ≅ Sub A.

Proof. It is immediate from Theorems 2.16 and 4.51. ∎

If the operations of an algebra are of bounded rank (or arity), what, if anything, can be inferred about its lattice of subuniverses? This question leads immediately to the definition of an n-ary closure operator, and to several results.

DEFINITION 4.52. A closure operator C on a set A is of rank n + 1, or n-ary (where n ≥ 0), iff for every X ⊆ A, if C(Z) ⊆ X for every Z ⊆ X with |Z| ≤ n, then C(X) = X.

THEOREM 4.53. Let C be a closure operator on a set A. C is of rank n + 1 iff there exists an algebra A such that C = Sg^A and every basic operation of A is of arity ≤ n.

Proof. It is quite easy to see that Sg^A is of rank n + 1, where A is any algebra whose operations are of arity ≤ n; we leave this verification to the reader. On the other hand, suppose that C is a closure operator of rank n + 1 on a set A. Let A be the algebra corresponding to C constructed in the proof of Theorem

P

2.3 Modular Lattices: The Rudiments

Figure 2.5

Not every lattice is modular. Consider Figure 2.5. Notice that a 5 c, but a v (b A c) = a v O = a, whereas (a v b) A c = 1 A c = c. So N, is not modular. There are many statements equivalent to the modular law. Some are included in the next theorem, but others can be found in the next set of exercises.

THEOREM 2.25. (Dedekind [1900].) For any lattice L the following statements are equivalent: i. ii. iii. iv. v.

L is modular. For any a, b, c E L, if c < a, then a A (b v c) 5 (a A b) v c. ((a A C) v b) A c = (a A c) v (b A c) for all a, b, c E L. Foranya,b,c~L,ifa 0; (p ok z)" = (pu)ok (zu) for al1 odd k > 0.

=

p on a, ending the proof that (b) implies (a) for n even.


If α and β are congruence relations of a finite algebra A, then the expression for their join given by part (ii) of the lemma involves only finitely many distinct relations α ∘ⁿ β. Therefore α and β certainly n-permute if n is sufficiently large. In fact, it can easily be shown that all congruences of an n-element algebra n-permute. For a simple example of an n-element algebra and two congruences that fail to (n − 1)-permute, see Exercise 1 below. Suppose that L is a sublattice of Eqv A for some set A (i.e., a lattice of equivalence relations on A). Part (iii) of the lemma implies that all members of L n-permute iff α ∨ β = α ∘ⁿ β for all α, β ∈ L. For this reason, lattices of n-permuting equivalence relations are sometimes said to have type n − 1 joins. We remark that our proof of Theorem 4.62 shows that every lattice is isomorphic to a lattice of 4-permuting equivalence relations. That this is a sharp result is a consequence of the next theorem.
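The iterated relational products are easy to compute on a small example. The Python sketch below is ours, not from the text; the two equivalence relations and the helper names are chosen only for illustration, and they show a pair that fails to 2-permute but whose join is reached after three factors.

```python
# A small sketch (assumption: equivalence relations on {0,1,2} are given by their blocks).
from itertools import product

A = range(3)
def from_blocks(blocks):
    return {(x, y) for b in blocks for x in b for y in b}

alpha = from_blocks([{0, 1}, {2}])
beta  = from_blocks([{0}, {1, 2}])

def compose(r, s):
    return {(x, y) for (x, z) in r for (w, y) in s if z == w}

def alternate(r, s, n):
    # The n-fold product r o s o r o ... used in the definition of n-permutability.
    out = r
    for i in range(1, n):
        out = compose(out, s if i % 2 else r)
    return out

print(compose(alpha, beta) == compose(beta, alpha))     # False: alpha and beta do not permute
print(alternate(alpha, beta, 3) == set(product(A, A)))  # True: three factors give the join
```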

THEOREM 4.67. Every lattice of 3-permuting equivalence relations is modular. If an algebra A has 3-permuting congruences, then Con A is a modular lattice.

Proof. The proof uses the following version of the modular law of relational arithmetic.

(15) Suppose that α, β, and γ are binary relations on the set A, and that α˘ = α ⊇ (α ∘ α) ∪ β. Then α ∩ (β ∘ γ ∘ β) = β ∘ (α ∩ γ) ∘ β.

The law is proved by considering Figure 4.6. Suppose that (x, y) ∈ α ∩ (β ∘ γ ∘ β). Thus there exist elements a and b that complete the picture; that is, (a, b) ∈ γ and (x, a), (b, y) ∈ β. Then, by considering the path (a, x), (x, y), (y, b), we see that (a, b) ∈ β˘ ∘ α ∘ β˘. Since β ⊆ α and α is assumed to be symmetric and transitive, we can deduce that (a, b) ∈ α. It follows that (x, y) ∈ β ∘ (α ∩ γ) ∘ β, as desired. The other inclusion implied in (15) is trivial.

Figure 4.6

Now suppose that L is a lattice of 3-permuting equivalence relations and that α, β, γ ∈ L with β ≤ α. It must be shown that α ∧ (β ∨ γ) = β ∨ (α ∧ γ). (See §2.3.) But the meet operation in L is identical with set intersection, and by Lemma 4.66(iii), the join operation is identical with the operation u * v = u ∘ v ∘ u. Thus the modular law of relational arithmetic implies the modularity of L. ∎

In §4.2, we observed that the image of a congruence of an algebra A under an onto homomorphism f: A → B need not be a congruence of B. This will be true, however, if A has 3-permuting congruences. In fact, the next theorem shows that the property is equivalent to congruence 3-permutability.


THEOREM 4.68. Suppose that f: A → B is an onto homomorphism. Let α = ker f, let β be any congruence of A, and let f(β) = {(f(x), f(y)): (x, y) ∈ β}. Then f(β) is a congruence of B iff β ∘ α ∘ β ⊆ α ∘ β ∘ α.

f (P) is transitive for al1 x, Y, ZEB if (X,Y), (y,z) €f(P) then ( x , z ) Ef(P)for al1 (a, b), (c, d) E p, if f(b) = f(c) then there exists (r, S) E p with f(r) = f(a) and f(s) = f(d)for al1 a, b, c, d E A if (a, b) E P, (b, C) E a and (c, d) E p, then there exists (r, S ) E P with (a, r) E a and (S, d) E a PoaoP G aopoa.

-

M

We have seen that any algebra possessing a term operation that obeys the equations in statement (14) has permuting congruences; that permuting congruences 3-permute; and that algebras with 3-permuting congruences possess modular congruence lattices. Before presenting some consequences of congruente permutability that cannot be derived from congruence 3-permutability, we present an important property of congruences that is implied by (14) and cannot be derived from congruence permutability. We require a simple auxiliary result, which will find some applications in later chapters.

THEOREM 4.69. Let A be any algebra and Y be any subuniverse of A" (where n 2 1). The following are equivalent.

i. ii.

Y is preserved by every polynomial operation of A. ( a , . . . , a ) ~ Y f o r a l E a ~.A

Proof. To see that (i) * (ii), let Y satisfy (i) and let a E A. The constant O-ary operation c, with value a is a polynomial operation of A. To assert that c, preserves Y is to assert that when we apply c, coordinatewise to the empty sequence of members of Y, the resulting n-tuple belongs to Y. This resulting n-tuple is none other than the constant sequence (a, ,a). Thus this sequence belongs to Y. To see that (ii) * (i), assume that Y satisfies (ii) and let f be any polynomial operation of A, say f is k-ary. By Theorem 4.6, for some m 2 O there is a k + m-ary term operation t(xo,- xm-, ,u,, u,-,) of A and some elements a,, , a,-, of A such that f is defined by the equality S ,

S ,

. m .

To finish showing that f preserves Y, we let yo, yk-l be any members of Y ). Denote by ¿ i j ( j = 0, , m 1) the constant sequence with yi = , s.,

200

Chapter 4 Fundamental Algebraic Results

¿ij = (aj, ,aj) belonging to A" (and to Y since Y satisfies (ii)).Now f applied coordinatewise to yo, . . yk-l gives the same result as t applied coordinatewise to the elements ¿iO,, ¿im-l, yo, . , yk-l of Y. Since Y is a subuniverse, it is preserved under the coordinatewise application of any operation derived by composing the basic operations of A (i.e., by any term operation). In particular, Y is preserved by t, and hence it is preserved by f. m S ,

The subuniverses of A" satisfying 4.69(ii) are called diagonal subuniverses.

THEOREM 4.70. Suppose that the algebra A has a polynornial operation d(x, y, z ) satisfying d(a, b, b) = a and d(a, a, b) = b for al1 a, b E A. i. For every subuniverse p of A2 the following are equivalent. a. p is a reflexive relation over A. b. p is a congruence relation of A. ii. For al1 a, b, c, d E A we have (a, b) E cgA(c,d) if there exists a unary polynomial f (x) of A with f(c) = a and f (d) = b. iii. Let X G A2 and (a, b) E A2. We have (a, b) E c g A ( x )if there is a finite sequence of elements (x,, y,), (x,-, ,y,-, ) E X and a polynomial f E Pol, A such that f (x,, - x,-,) = a and f (y,, . y,-,) = b. S ,

a ,

S ,

Proof. Recall that a relation p is a congruence of A iff it is a reflexive, symmetric, and' transitive subuniverse of A2. (This is formula (7).) By the theorem just proved, every reflexive subuniverse of A2 is preserved by the polynomial operation d. In order to prove statement (i), let p be any subuniverse of A2. Clearly, what we have to do is prove that if p is reflexive, then it is also symmetric and transitive. So assume that p is reflexive. Thus d preserves p (by 4.69). Let (a, b) E p. Then we apply d to the triple (a, a), (a, b), (b, b) of members of p, obtaining

, it follows that p is symmetric. To see that p is transitive, Thus (b, a) ~ p and assume that (a, b), (b, c) E p and apply d to the triple (a, b), (b, b), (b, c). The resulting pair is just (a, c), showing that p is transitive. Notice that statement (ii) is the special case of (iii) that results when X = { (c, d)}. The argument for (iii) (with most of the details left for the reader to supply) goes like this. The relation p = cgA(x),which is just the smallest congruence relation of A that includes the set X, is by (i) the smallest reflexive subuniverse of A2 that includes X. In other words, p is the subuniverse of A2 generated by X U O, (where O, = ((a, a): a E A)). Using Theorem 4.69, argue, in parallel with Exercise 4.9(7), that p is identical with the set of values of polynomial operations of A applied coordinatewise in A2 to finite sequences of m elements of X. Verify that this statement is precisely the required result.

201

4.7 Permuting Congruences

COROLLARY. Among these properties of un algebra, ( i ) => ( i i ) * (iii). i. A satisfies (1 4). ii. ConA is asublattice of SubA2. iii. The congruences of A permute. Proof. Assume that (i) holds. Then by Theorem 4.70(i), congruences of A are the same as diagonal subuniverses of A2. These diagonal subuniverses constitute the principal filter in SubA2 consisting of the subuniverses that contain the identity relation. Thus (ii) holds. Now assume that (ii) holds. Let a and P be abitrary congruence relations of A. We have where a v p is the equivalence relation join of a and P. By (ii), a v P is identical with the join of a and /? in Sub A2. Moreover, we can easily check that n 0 B is a subuniverse of A2. Thus the displayed inclusions imply that a o p = a v 8. Reversing the roles of a and P in the argument, we get that P o a = P v a = a o b, so (iii) holds. When the congruences of A permute, the concept of a finite direct representation of A becomes equivalent to a purely lattice-theoretical concept. This permits the application of results in modular lattice theory to derive results about direct representations. Recall from 54.4 (Definition 4.30, Theorem 4.31) that a system of congruences (g5i: i~ 1 ) is a direct representation of A iff the mapping f : A -+ A/4i is an isomorphism, and that we write O, = i,, 4i to denote that the system is a direct representation. The concept of a directly join independent subset of a lattice, and that of a directly join irreducible element, were defined in $2.4, Definition 2.43. The dual concepts of a directly meet independent subset and a directly meet irreducible element are the relevant ones for the latticetheoretical discussion of direct products.

ni,,

n

LEMMA. Let a, B and &, 4,, - #, be congruences of un algebra A with 4i # (for i # j). If al1 congruences of A permute, then S ,

Zj

i. O, = a x P iff a and P are complements of one another in Con A; ii. O, = i,,, iff {#i: i 5 n ) is a directly meet independent subset of Con A whose meet is O,; iii. Ala is directly indecomposable iff a is a directly meet irreducible element of Con A.

n mi

Proof. We assume that A has permuting congruences. Then statement (i) follows immediately from Definition 4.28 and Theorem 4.29. To prove the second statement, assume first that O, = i5n gbi. Thus O, = A{q5i: i n). To conclude that {bi: i 5 n} is directly meet independent, we have to show that for each i S n we have $i v v{bj: j # i } = 1,. Without losing

n

202

Chapter 4 Fundamental Algebraic Results

generality, we can assume that i = n. Let (a, b) be any pair of elements of A. Let then (xj: j 5 n) be the sequence with xj = b for al1 j < n and x, = a. Since 0, = #,, by Definition 4.30 there is an element x E A with x = xj (mod #j) for al1 j I n-in other words, with (a, x) E 4, and (x, b) E {#j:j < n}. Since (a, b) was arbitrarily chosen, this proves what we desired: #, and j < n) join to 1, in Con A. Continuing with the proof of (ii), let us now assume that the 6,constitute a x, be directly meet independent set of congruences that meet to O,. Let x,, any elements of A. We wish to show that there exists an element y such that y E xi (mod 4,) for every i 5 n. For i I n let Ji = A(#j: j # i]. Define elements y, (i 5 n) by induction on i. Let y, = x,. Suppose that y, has been defined and that y, 7 x,(mod 4,) for al1 k 5 i. If i < n define y,+, as follows. Since Ji+,0 #,+, = 1, and #i+l n$+, = O, there is a uniquely determined element a E A satisfying y, a (mod #i+l) and a = x,+, (mod bi+,). Let y,+, be this element a. Clearly y,+, E y, (mod 4,) for each k I i (since Ji+,c &) and thus y,+, S x, (mod A). Therefore y,+, r x, (mod #,) for al1 k I i + 1. This inductive definition produces an element y = y, that satisfies al1 of the required congruences. The proof of (ii) is now finished. The proof of (iii) is left as an exercise.

n,(,

S ,

Direct representations with an infinite number of factors admit no characterization in purely lattice theoretical language, even if we assume that congruentes permute. Exercise 4.38(17) shows why this is so.

THEOREM 4.71. Let A be any algebra whose congruences permute and whose congruence lattice has jinite height. A is isomorphic to a product of jinitely many directly indecomposable algebras. Moreover, for every two direct representations Ci E A E n j , , D j with directly indecomposable factors, the sets I and J are jinite and 111 = IJI.

ni,,

Proof. The congruence lattice of A is a modular lattice by Theorem 4.67. A has no infinite direct representations because Con A has finite height. According to the lemma, the finite systems of congruences (#,, . . ,4,) giving direct representations of A are the same as the finite directly meet independent systems that meet to O, and A/#, is directly indecomposable iff 4, is directly meet irreducible. This theorem follows therefore from the Direct Meet Decomposition Theorem (i.e., Theorem 2.47 applied to the dual lattice of Con A). Theorem 4.71 is a rather weak application of Theorem 2.47. With slight changes in the hypotheses of the theorem, it is possible to prove that A is a direct product of directly indecomposable algebras in essentially only one way. (The algebras occurring as the factors in a direct decomposition of A into indecomposable factors are determined up to isomorphism and rearrangement of their order.) This conclusion holds for example if ConA is a finite height lattice of permuting equivalence relations and A has a 1-element subalgebra, or if Con A

is modular, A is finite, and A has a one-element subalgebra. These results are proved in Chapter 5 using a variant of Theorem 2.47. For subdirect representations there is a result somewhat analogous to Theorem 4.71. It is a corollary of the Kurosh-Ore Theorem (Theorem 2.33) and requires only that the algebra have a modular congruence lattice. As we saw in $4.4, there is no essential difference between a subdirect representation of an algebra and a system of congruences that intersect to zero. Systems of congruentes that intersect irredundantly to zero correspond to irredundant subdirect representations (in which no factor can be removed without losing the one-to-one property of the subdirect embedding). Let us define these notions more precisely.

/

l

DEFINITION 4.72. A subset M in a complete lattice L is called meet irredundant iff for al1 proper subsets N of M we have /\M < &v. A subdirect representation f : A -* A, is called irredundant iff for every proper subset J of 1 the natural map Ai -+ Aj is not one-to-one on the f-image of A.

l.

ni,, ni,, n,, ,

2.

THEOREM 4.73. Let A be un algebra whose congruence lattice is modular. Then al1 irredundant subdirect representations of A with a Jinite number of subdirectly irreducible factors have the sume number of factors. For any two such subdirect bn) and (S/, . . S/,), representations given by systems of congruences (4,, it is possible to renumber the $'S so that for every k 2 n the system (S/, S/2, - Sk,4k+l,. #,,) giues un irredundant subdirect representation of A. m ,

S ,

S

S ,

,

Proof. Suppose that (4,,

m ,

4,)

and (S/, ,

,SI,) are finite systems of congrux - x B,

entes associated with irredundant subdirect representations f : A -+ B,

and g : A -. C, x x C, where B, and Cj are subdirectly irreducible. Then clearly the 4's are distinct meet irreducible elements of ConA and {4i: i = 1, m ) is meet irredundant, and the same conclusion holds for the $'s. The desired conclusion immediately follows upon an application of Theorem 2.33 to the dual lattice of L = Con A. ¤ S

,

A subdirect product of two algebras in which the kernels of the projections permute has an especially simple structure. In the terminology of category theory, such an algebra (or its inclusion map into the full product) is an equalizer of two morphisms. THEOREM 4.74. (Fleischer J19.551.) Suppose that A S A, x A, is a subdirect product and that yo o y, = y, o y, where y, is the kernel of the map A: A -+ A, for i = 0, 1. There exists un algebra C and surjective homomorphisms a,: A, -+C such t hat

2.4 Modular Lattices with Finite Chain Condition

61

THEOREM 2.34. Every complemented modular lattice is relatively complemented. Proof. Let M be a complemented modular lattice and let a I x 5 b hold in M . Let y be a complement of x in M. So x v y = 1 and x A y 5 O. But just notice:

a = O v a = ( O ~ b ) v a = ( ( x ~ y ) ~ b ) v a = ( x ~ ( y ~ b ) ) v a = x A ( ( y A b) v a) and dually,

b = l ~ b = ( vl a ) ~ b = ( ( x v y ) v ab )=~( x v ( y v a ) ) b~ = x v ( ( y v a) A b). But since(y v a) A b = (a v y)

A

b = a v ( y A b) = ( y A b) v a andsince

aq~,then~~y,-~$~ Now Bi = B/yi E HS(Ai)since /?¡i y¡. A is a quotient of the algebra B/(y, A y ,) (since y, A y, 5 /?), and this algebra is isomorphic to a subdirect product of B, and B,. What remains to be proved is that each of the algebras B/yi is a one-element algebra or is subdirectly irreducible-in other words, that y, = 1, or vi is a strictly meet irreducible element of L. (See Lemma 4.43.) If y, = l,, then y, 5 P, SO q1 = /? (by the maximality), and since /? is strictly meet irreducible, we are done. Likewise, if q, = 1,, we are done. So we shall assume that neither of these equalities holds, or equivalently, that y, $ fl (i = 0,l). Now observe . that y, A (y, v (/? A y,)) 5 fl by modularity. Thus the maximality gives y, = y, v (p A y,), so that p A y, 5 y,. This argument, which works also with y, and y, interchanged, shows that y, A y, = /? A q, = /? A y,. The situation is pictured in Figure 4.8.

Figure 4.8

Chapter 2 Lattices

For the converse, we need the following extension of the dimension formula occurring as (iii) in Definition 2.41: For any n distinct elements a,, a,, a,-, of L m ,

d(a, v a, v

v a,,-,)

= d(a,) -

pI

+ d(a,) + - - + d(an-,)

[d(a, A (a, v v a,-,)) d(a, A (a, v v a,-,))

+

+ d(an-2 A

+ -

a,-,)l.

This formula can be established by induction. Now observe that the formula above depends on the order in which the aiYshave been indexed. Plainly, we have one such formula for each of the n! ways of indexing available. Suppose that M is a set with n elements such that

Let N be any subset of M, say with k elements, and pick c E N, setting N' = N (c}. We must argue that c A = O. Now let M = (a,, ai;-.,an-,} SO that c = un-, and N' = {an-,+, , ,un-,}. According to the extended dimension , conclude that formula above and the condition just imposed on ~ ( V M )we

VN'

d(a,

A

(a, v

-

v un-,))

+ + d(c A VN') + + d(an-, A a,,-,)

= O.

Since d only produces non-negative values, we conclude that al1 the terms of this sum are 0. In particular, d(c A V N ' ) = O. But this implies that c A = O, as desired. Hence, M is directly join independent.

VN'

Before turning to the Direct Join Decomposition Theorem, we gather in the next theorem the fundamental properties of directly join independent sets in modular lattices of finite height that we shall use. Most of these properties follow very easily from the definitions and Theorem 2.45. However, the ten properties listed are more than useful tools. In fact, they constitute al1 the conditions on the family IND(L) necessary to establish the Direct Join Decomposition Theorem for the finite dimensional lattice L. As a consequence, any family I of subsets of L that has al1 the properties attributed below to IND(L) will give rise to a variant of the Direct Join Decomposition Theorem. We could have introduced an abstract concept of "join independence family," using the ten properties below as a definition, and then established a more general theorem. Observe that the notion of direct join irreducibility depends on the notion of direct join independence. Direct join isotopy and the direct join operation are also derivative notions. To obtain a variant of the Direct Join Decomposition Theorem for a "join independence family" I, these notions must both be modified by referring them to I in place of IND(L). The specific notion of direct join independence introduced above would then be one example of a "join independence family." In 55.3, we will invoke the Direct Join Decomposition Theorem for a slightly different notion of join independence. That notion and the one defined in 2.43 are the only kinds of "join independence families" in this volume.

r

226

Chapter 4 Fundamental Algebraic Results

*2. Prove that the variety of monadic algebras is congruence distributive and locally finite. (Use Corollary 3 of Theorem 4.44,94.5.) 3. Prove that the variety of monadic algebras is not finitely generated. (Use the description of subdirectly irreducible monadic algebras in 94.5, and Theorem 4.104.) 4. Show that the variety of semilattices is not congruence modular. 5. Show that every congruence modular variety -tr of semigroups is a variety of groups, that is, for some n > 1 the equations xny Ñ y Ñ yxnare valid in V . 6. Complete the proof of Lemma 4.92. 7. Prove Lemma 4.102. 8. Suppose that P, y,, y, E ConB and y, A y, I P. Prove that B / ~ E HE(l31yo B/v 1 9. Prove this fact about modular algebraic lattices. If b > b, A A bnin L and b is strictly meet irreducible, then there exist elementc ci E I [b,, 11 such that b > c, A A c,, and this inclusion fails if any ci is replaced by a larger element. Moreover, when this holds, then every ci satisfies: ci = 1 or ci is strictly meet irreducible. Use the fact to formulate and prove a version of Theorem 4.105 valid for n factors. 10. Show that SH # HS, PS # SP, and PH # HP. 11. Show that restricted to classes of commutative semigroups, the operators SPHS, SHPS, and HSP are distinct. In fact, if X is the class of finite cyclic groups considered as semigroups (i.e., multiplication groups), then the three operators applied to X give different classes. 12. An ordered monoid is a system (M, ,e, I ) such that (M, e) is a monoid, (M, I ) is an ordered set, and I is a subuniverse of (M, -,e)'. Prove that if M is an ordered monoid generated by elements h, S,p satisfying e I h = h2, eI s = s2, e I p = p2, sh I hs, ps I sp, and ph I hp, then M is a homomorphic image of the ordered monoid of class operators diagrammed in Figure 4.9. If also hsp # shps and hp $ sphs, then show that M is isomorphic to the ordered monoid of class operators. 13. Verify that if X is a class of Abelian groups then HS(X) = SH(X). Formulate a property of varieties involving the behavior of congruences such that if -tr has the property and X E -Ir, then H S ( X ) = SH(X). 14. Prove that if X is a class of Boolean algebras, or distributive lattices, then HSP(X) = SP(X). 3

S ,

4.11 FREE ALGEBRAS AND THE HSP THEOREM In this section we introduce free algebras, terms, algebras of terms, and equations, and prove that every variety of algebras is defined by a set of equations (the HSP Theorem). These are essential tools for the study of varieties. We begin with a definition of free algebras and a series of easy results about them.

4.11 Free Algebras and the HSP Theorem

227

DEFINITION 4.107. Let X be a class of algebras of one type and let U be an algebra of the same type. Let X be any subset of U. We say that U has the universal mapping property for X over X iff for every A E X and for every mapping a: X -,A, there is a homomorphism P : U -,A that extends a (i.e., P(x) = a(x) for x EX). We say that U is free for X over X iff U is generated by X and U has the universal mapping property for X over X. We say that U is free in X over X iff U E X and U is free for X over X. If U is free in X over X, then X is called a free generating set for U, and U is said to be freely generated by X. LEMMA 4.108. Suppose that U is free for X over X and that A E X . Then hom(U, A) is in one-to-one correspondence with the set AXof mappings. For every a: X + A, there is a unique extension of a to a homomorphism P: U -+A. Proof. A homomorphism is completely determined by how it maps a set of m generators; see Exercise 4.16(1). LEMMA 4.109. i. ii.

l

Let & c X,. If U is free for X, over X, then U is free for Xoover X . If U is free for X over X, then U is free for HSP(X) over X.

Proof. Statement (i) is trivial. Ta prove (ii), one shows that when U is free for X over X, then U is also free over X for each of the classes H(X), S(X), and P(X). We prove this now for H(X), and leave the rest for the reader to work out. Suppose that U is free for X over X. Let A E H(X); say B E X , and we have a surjective homomorphism 6: B + A. Let a be any mapping of X into A. We have to show that a extends to a homomorphism of U into A. By the Axiom of Choice, there is a mapping &: X -,B such that &(x)E 6-' {a(x)} for al1 x E X. By the universal mapping property of U for X over X, there is a homomorphism ):U -+B extending 2. Let P = 6 0 /?.This is a homomorphism of U into A. For X E Xwe have

1

Thus P is the required extension of a.

1

LEMMA 4.110. Suppose that U, and U, are free in X over X, and X,, respectively. If (X,1 = 1 X, 1 then U, U,.

I

i

Proof. Let 1 X, 1 = 1 X, 1, say f is a bijective mapping of X, onto X,.Then there exist homomorphisms P: U, -+ U, and y : U, -,U, extending f, and f - l . y o extends f - ' o f = idxl; by Lemma 4.108, it can only be the identity homomorphism of U, onto itself. Likewise, we conclude that P o y is the identity on U,. It follows that P is an isomorphism of the two algebras and y is the inverse isomorphism. m

F

2.4 Modular Lattices with Finite Chain Condition

, ,

THEOREM 2.46. Let L be a modular lattice of finite height.

1

i. ii. iii. iv. v. vi. vii. viii. ix. x.

t 1

If N s M E IND(L), then N E IND(L). If M E IND(L),then M U {O)E IND(L). If a @ b E M E IND(L),then ( M - ( a @ b ) )U {a,b ) E IND(L). If a, b E M E I N D ( L ) and a # b, then ( M - {a,b )) U {a @ b ) E IND(L). If M E I N D ( L ) and f : M -+ L such that f ( x ) 5 x for al1 x E M , then { f ( x ) :x E M ) E IND(L). If a @ a' = b @ b' = a v b' = a' v b, then a @ b' = a' b = a a'. If {a,a ' ) E I N D ( L ) with a # a' and b < a, then b @ a' < a @ a'. If b 5 a @ a', b $ a, and ( a A (a' v b),b ) E IND(L),then {a,b ) E IND(L). If a = a @ b, then b = 0. If a @ b is directly join isotopic with c, then there are a' and b' such that c = a' @ b', and a is directly join isotopic with a' and b is directly join isotopic with b'.

Proof.

i. This is completely straightforward. ii. This is also immediate. iii. Suppose that a O b, a,, , a,-, is a list of al1 the distinct elements of M. To see that (M - {a O b } )U (a,b ) is directly join independent, we invoke Theorem 2.45.

iv. The argument just given for (iii) can be easily rearranged to prove this part. v. This follows easily from the definition of direct join independence. vi. We assume that none of a, a', b, b' is O, since otherwise the desired result is immediate from the definition of direct join independence. Hence a # a' and b # b'. The hypotheses now give the following dimension equations: d(a) + d ( a f )= d(b) + d(bl) d(a v b') = d(af v b) d(a) + d(a') = d(a v b').

In turn, these equation yield

+ d ( a f )= d(a) + d(bl) d(a A b') + d ( b f )= d ( a f )+ d(b) d(af A b). From these equations we obtain d(a A b') + d(af A b) = O. Therefore both d(a) d(b)

d(a A b') = O and d(af A b) = O. Hence a

-

A

b'

= O = a' A

b. Thus both

l

4.11 Free Algebras and the HSP Theorem

In the next few paragraphs, o denotes a similarity type (as defined a t the beginning of §4.2), 1 is the correlated set of operation symbols, and X, is the class of al1 algebras of type o. Let X be a set disjoint from I. (Later we can remove this assumption, but in the beginning it is crucial.) For O < n < o,let I, = {Q E 1: o(Q) = n) (the set of n-ary operation symbols in I). By a sequence from X U I we shall mean a finite sequence (S,, ,S,-,} whose ith member si belongs to X U I for every i < n. Such a sequence will also be called a word on the alphabet X U I and written as s,s, . S,-, . The product of two words a = a,. . a,-, and b = b, - . bm-, is the word ab = a, a,-, b, . bm-, (i.e., the sequence (e,, e,-,) in which k = n + m and ci = ai for i < n, and = bi for i < m). Note that in this usage, when we write "the word u" (where u E X U 1), we mean the sequence (u). e ,

DEFINITION 4.113. The set T,(X) of terms of type o over X is the smallest set T of words on the alphabet X U I such 'that

l. 2.

XUI,ET. If p,,

m ,

p,-,

E

T and Q E I,, then the word Qp,p,

p,-,

E

T.

We adopt some conventions for informally denoting terms in print. If p,-, are terms (of type o over X) and Q is an n-ary operation symbol p,, (of type rr), then we often denote the term Qp, p,-, by Q(p,,. ,p,-,). If is a binary operation symbol, we often denote (p,, p,) by p, p, . Parentheses will be used freely to make terms more readable. For example, if x, y, z E X and E I,, then x yz, xyz, and * xy zx are terms that we may write as x (y z), (x y) z, (x * y) (z x), respectively. Note that T,(X) is the empty set iff X U I, is empty. e ,

DEFINITION 4.114. If T,(X) # @, then the term algebra of type o over X, denoted T,(X), is the algebra of type o that has for its universe the set T,(X) and whose fundamental operations satisfy

I

for Q E 1, and p, E T,(X), O 5 i < n.

I

LEMMA 4.115.

The term algebra T,(X) is free in jya over the set X of words (or more precisely, over the set ((x): x EX)). This will follow from the fact that given any term p there is exactly one way to reach p by starting with elements of X and applying repeatedly the operations. The fact is called the unique readability of terms. The next lemma gives a precise formulation of this fact.

i. T,(X) is generated by the set X of words. ii. For Q E I,, T,(X) - X.

~ ' 0 ' ~ )

is a one-to-one function of (T,(X))" onto a subset of

230

Chapter 4 Fundamental Algebraic Results

Proof. Statement (i) is immediate from Definition 4.113. From (i) it follows that every member of the algebra either belongs to X or belongs to the range of at least one of its basic operations. It is also immediately clear that the range of each basic operation is disjoint from X. (Here we need the assumption that X ∩ I = ∅.) What now remains to be shown is equivalent to this statement:

(26) If Q_0, Q_1 ∈ I and Q_0^{T_σ(X)}(p_0, …, p_{n−1}) = Q_1^{T_σ(X)}(q_0, …, q_{m−1}), then Q_0 = Q_1 (so m = n) and p_i = q_i for all i < n.
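As a concrete illustration of unique readability, the following sketch (ours, not the book's; words are represented as tuples, as in the earlier sketch) reads a prefix word back in the only way possible and accepts it exactly when it is a term:

    # Attempt to parse one term starting at position pos of a prefix word.
    # arity maps operation symbols to ranks; any other symbol is treated as
    # an element of X.  Returns the parsed term and the next unread position.
    def parse(word, arity, pos=0):
        s = word[pos]
        if s not in arity:                       # a generator: the word <x>
            return s, pos + 1
        args, q = [], pos + 1
        for _ in range(arity[s]):                # read exactly arity(s) subterms
            sub, q = parse(word, arity, q)
            args.append(sub)
        return (s, tuple(args)), q

    def is_term(word, arity):
        try:
            _, q = parse(word, arity)
            return q == len(word)                # the whole word must be consumed
        except IndexError:
            return False

    assert is_term(('*', 'x', '*', 'y', 'z'), {'*': 2})     # x * (y * z)
    assert not is_term(('*', 'x'), {'*': 2})                # too few arguments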

A detailed proof of (26), and a simple algorithm to recognize whether a word is a term, are outlined in Exercises 1-4 at the end of this section. ∎

LEMMA 4.116. T_σ(X) has the universal mapping property for 𝒦_σ over X.

Proof. Given a term p, define l(p) to be the length of p. (Recall that a term is a sequence; the length of a sequence ⟨s_0, …, s_{n−1}⟩ is n.) Let A be any algebra of type σ and α be any mapping of X into A. Define β(p) for p ∈ T_σ(X) by recursion on l(p). If p ∈ X, then put β(p) = α(p). Now if p ∈ T_σ(X) − X, then, by the previous lemma, there are Q ∈ I and p_0, …, p_{n−1} ∈ T_σ(X) (where n = σ(Q)) such that p = Q(p_0, …, p_{n−1}); moreover, Q, p_0, …, p_{n−1} are all uniquely determined. We define β(p) = Q^A(β(p_0), …, β(p_{n−1})), and note that since l(p_i) < l(p), this serves to define β(p) recursively for all p. β is clearly a homomorphism extending α. ∎
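The recursion in this proof is easy to animate. A minimal sketch (ours; the two-element algebra and the assignment are hypothetical) evaluates a parsed term, in the representation used in the sketches above, by recursion on its length:

    # Evaluate a parsed term in an algebra whose operations are given by ops,
    # extending the assignment alpha of generators -- the map beta of the proof.
    def evaluate(term, ops, alpha):
        if not isinstance(term, tuple):          # a generator x: beta(x) = alpha(x)
            return alpha[term]
        Q, args = term
        return ops[Q](*(evaluate(t, ops, alpha) for t in args))

    ops = {'*': min}                             # a two-element semilattice ({0,1}, min)
    alpha = {'x': 1, 'y': 0, 'z': 1}
    term = ('*', ('x', ('*', ('y', 'z'))))       # the term x * (y * z)
    assert evaluate(term, ops, alpha) == 0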

We can now state our principal result on the existence of free algebras.

THEOREM 4.117. Let σ be a similarity type of algebras and I be the associated set of operation symbols. Let X be a set such that |X| ≥ 1 if I_0 = ∅. Suppose that 𝒱 is a variety of algebras of type σ.

i. If X ∩ I = ∅, then the term algebra T_σ(X) is an absolutely free algebra of type σ, freely generated by the set {⟨x⟩: x ∈ X}.
ii. If X ∩ I = ∅, then T_σ(X)/θ(𝒱) is free in 𝒱 over the set {⟨x⟩/θ(𝒱): x ∈ X}.
iii. If 𝒱 has a nontrivial member, then there exists an algebra free in 𝒱 over the set X.

Proof. Statement (i) follows from the preceding lemma. Then statement (ii) follows from Lemmas 4.109 and 4.112. Now suppose that 𝒱 has a nontrivial member. Then the map x ↦ ⟨x⟩/θ(𝒱) is one-to-one. In set theory, we can find a set F and a one-to-one map f of F onto T_σ(X)/θ(𝒱) such that X ⊆ F and f(x) = ⟨x⟩/θ(𝒱) for x ∈ X. Since f is a bijection, there are unique operations on F giving us an algebra F isomorphic to T_σ(X)/θ(𝒱) under f. Then F is free in 𝒱 over X. ∎

DEFINITION 4.118. F_𝒦(X) denotes a free algebra in 𝒦 with free generating set X. F_𝒦(κ), where κ is a cardinal number, denotes an algebra F_𝒦(X) where |X| = κ.


Note that F_𝒦(X), if it exists, is determined only up to an isomorphism that is the identity on X. F_𝒦(κ) is determined only up to isomorphism. F_𝒦(0) exists iff the type has 0-ary operations. If κ ≠ 0, then F_𝒦(κ) exists iff 𝒦 has nontrivial members or κ = 1.

COROLLARY 4.119. Let X be a set and 𝒦 be a class of algebras of type σ such that F_𝒦(X) exists. Then F_𝒦(X) belongs to SP(𝒦) and F_{V(𝒦)}(X) ≅ F_𝒦(X) ≅ T_σ(X)/θ_𝒦(X).

Proof. This is a corollary of Lemmas 4.110 and 4.112 and Theorem 4.117. ∎


Absolutely free algebras (isomorphic to term algebras) have a rather transparent structure, as we have seen. Explicit constructions of free algebras in the variety of all groups and in the variety of all monoids are given in Chapter 3 (§§3.3 and 3.4). Free groups and free monoids are not very complicated; their elements and operations are easily visualized. Free algebras in some classes, especially in certain varieties of groups or of lattices, can be quite complicated, or impossible, to describe. Several free algebras that are relatively straightforward to construct are presented in examples and exercises at the end of this section. Free algebras can be very useful, even where they cannot be readily described. Many applications require only that they exist, satisfy the universal mapping property, and bear a certain relation to the equations holding in a variety, which we shall examine below. We can now discuss the correspondence between terms and term operations. Every term of type σ can be regarded as a name for one or more term operations in every algebra of type σ. We could define term quite broadly, to mean any element of an absolutely free algebra. There are, however, good and traditional reasons for adopting a more restrictive definition, according to which the terms of type σ are the elements of one fixed absolutely free algebra with a denumerably infinite free generating set. We wish to choose once and for all the set of free generators, independently of the type. To this end we adopt a technical convention: Henceforth we assume that the set I of operation symbols of any type σ is chosen so that I ∩ ω = ∅ (where ω = {0, 1, …, n, …}). Then the following definition makes sense for all types.

DEFINITION 4.120. Let σ be a type. By a term (of type σ) is meant an element of the term algebra T_σ(ω). We put v_n = ⟨n⟩, and the terms v_n (n ∈ ω) are called variables. Terms of type σ are elements of the absolutely free algebra T_σ(ω) generated by the set of variables {v_0, v_1, …, v_n, …}. For each n ≥ 1, the term algebra T_σ(n) is the subalgebra of T_σ(ω) generated by v_0, …, v_{n−1}. Note that T_σ(ω) = ⋃{T_σ(n): 1 ≤ n < ω}.

[…] |A| > 1, and A ≅ B × C implies |B| = 1 or |C| = 1.

LEMMA 2. For all θ ∈ Con A, A/θ is directly indecomposable iff θ ≠ 1, and θ = θ_0 × θ_1 implies θ_0 = 1 or θ_1 = 1.

Thus we will call a congruence θ indecomposable iff it satisfies these conditions: θ ≠ 1, and θ = θ_0 × θ_1 implies θ_0 = 1 or θ_1 = 1. Lemma 2 will, of course, be our key to results involving indecomposable factors. We continue with a lemma that contains some useful results on product congruences. In omitting the proofs, we emphasize that the relation θ = θ_0 × θ_1 is not lattice-theoretical, since the existence of θ_0 × θ_1 cannot be determined in the (abstract) lattice Con A. (See Exercise 1 below.)

LEMMA 3.

i. If ∏Γ exists, then ∏Γ' exists for all Γ' ⊆ Γ.
ii. Let us be given an index set I and index sets J_i for each i ∈ I; moreover, let us be given θ_{ij} ∈ Con A for each i ∈ I and j ∈ J_i. If ∏_{j ∈ J_i} θ_{ij} exists and equals θ_i for each i ∈ I, and if ∏_{i ∈ I} θ_i = θ, then we also have ∏{θ_{ij}: i ∈ I, j ∈ J_i} = θ.
iii. The product is completely commutative and associative; i.e., both its existence and its value are independent of any bracketings or permutations of the factors, including infinite permutations.
iv. If ∏_i δ_i exists and δ_i' ≥ δ_i for each i, then ∏_i δ_i' also exists. ∎
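Because the existence of θ_0 × θ_1 depends on more than the shape of Con A, it helps to compute with the underlying relations directly. A minimal sketch (ours; the toy set and relations are hypothetical) checks, for equivalence relations β and γ on a finite set A, that β ∩ γ = 0 and β ∘ γ = A × A, which is exactly what makes a ↦ (a/β, a/γ) a bijection from A onto A/β × A/γ:

    # Congruences are represented as sets of ordered pairs.
    def compose(r, s):
        return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

    def is_factor_pair(A, beta, gamma):
        zero = {(a, a) for a in A}
        one = {(a, b) for a in A for b in A}
        return (beta & gamma) == zero and compose(beta, gamma) == one

    # Z_6 split by congruence mod 2 and congruence mod 3:
    A = range(6)
    beta = {(a, b) for a in A for b in A if a % 2 == b % 2}
    gamma = {(a, b) for a in A for b in A if a % 3 == b % 3}
    assert is_factor_pair(A, beta, gamma)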

Our next lemma permits a special form of reasoning about × for congruences on finite algebras. It is a direct analog for finite algebras of assertion (vi) of Theorem 2.46 in §2.4, and it will be used in a similar way. It will form assertion (vi) of Lemma 16 in §5.3 below, which plays a role similar to that of Theorem 2.46.

LEMMA 4. If A is finite and

α × α' = β × β' = α ∧ β' = α' ∧ β

in Con A, then also α × β' exists and equals α × α'.

Proof. Since we may replace A by A/(α × α'), it is enough to prove the lemma for the case that α × α' = 0. Let m = |A/α|, n = |A/α'|, p = |A/β|, and q = |A/β'|. Our assumptions tell us that A is isomorphic to A/α × A/α' and to A/β × A/β', and moreover embeddable both in A/α × A/β' and in A/α' × A/β. From these isomorphisms and embeddings, we obviously have the following equalities and inequalities:

|A| = mn = pq,  |A| ≤ mq,  |A| ≤ np.

These easily imply that |A| = mq (indeed, |A|² ≤ (mq)(np) = (mn)(pq) = |A|², so both inequalities are equalities), so the given embedding of A into A/α × A/β' is an isomorphism. ∎

Closely associated with the notions of direct product and factor congruence are the notions of isotopic algebras and isotopic congruences; we begin with the first of these. Algebras A and B (of the same similarity type) are said to be isotopic over the algebra C (of the same type), written A ∼_C B, iff there exists an isomorphism φ: A × C → B × C such that the second coordinate of φ(a, c) is c for all a ∈ A and for all c ∈ C. We say that A and B are isotopic, and write A ∼ B, iff A ∼_C B for some C. Obviously, isomorphic algebras are isotopic, but the first two exercises of §5.1 give simple examples of isotopic algebras that are not isomorphic. (In Exercise 2 there, we have A_1 ∼ A_2 but A_1 ≇ A_2, even though A_1 has a one-element subuniverse and is congruence modular.) In §5.7 we will prove the surprising and nontrivial result (of L. Lovász) that if A × C ≅ B × C, with A, B, and C all finite (with no restriction on the form taken by the isomorphism between A × C and B × C), then A ∼_C B. (Exercise 8 of §5.1 shows that this is false without the finiteness condition.)



LEMMA 5. Isotopy is an equivalence relation on the class of all algebras of a given similarity type.

Proof. It is obvious that ∼ is reflexive and symmetric, so we need only prove transitivity. Suppose that A ∼ B via φ: A × D ≅ B × D, and that B ∼ C via ψ: B × E ≅ C × E. Define Φ: A × D × E → B × D × E via Φ(a, d, e) = (F(a, d), d, e), where φ(a, d) = (F(a, d), d), and define Ψ: B × D × E → C × D × E via Ψ(b, d, e) = (G(b, e), d, e), where ψ(b, e) = (G(b, e), e). It is clear that Φ and Ψ are isomorphisms, so Ψ ∘ Φ is an isomorphism that effects the isotopy of A with C over D × E. ∎

LEMMA 6. If A ∼ B, then |A| = |B|.

We call congruences α and β on an algebra E isotopic over the congruence γ on E, written α ∼_γ β, iff α × γ = β × γ. We say that α and β are isotopic in one step, written α ∼_1 β, iff α ∼_γ β for some γ. Finally, we define ∼ to be the transitive closure of ∼_1; that is, we say that α and β are isotopic, written α ∼ β, iff there exist congruences δ_1, …, δ_k such that α = δ_1, β = δ_k, and δ_1 ∼_1 δ_2 ∼_1 ⋯ ∼_1 δ_k. More specifically, if δ_1 ∼_{γ_1} δ_2 ∼_{γ_2} ⋯ ∼_{γ_{k−1}} δ_k, then we will say that α and β are isotopic over γ_1, γ_2, …, γ_{k−1}. The connection between isotopy of algebras and isotopy of congruences is as follows.

LEMMA 7. Algebras A and B are isotopic iff there exist an algebra E and isotopic congruences α and β on E such that A ≅ E/α and B ≅ E/β. In more detail, if A ∼_C B, then there are congruences α, β, and γ on E = A × C such that E/α ≅ A, E/β ≅ B, E/γ ≅ C, and α ∼_γ β. And if α, β, γ ∈ Con E, with α ∼_γ β, then E/α ∼_{E/γ} E/β.

Proof. Suppose first that we are given A ∼_C B via an isomorphism φ: A × C → B × C. Take E to be A × C, and let p_0, p_1 be the projection maps from this product onto A and C. Now define α = ker p_0, β = ker(p_0 ∘ φ), and γ = ker p_1 = ker(p_1 ∘ φ). We leave it to the reader to check that α, β, and γ are as required. For the final statement of the theorem, we take α, β, and γ as given there, and define a map E/α × E/γ → E/β × E/γ as follows: (x/α, y/γ) is sent to (w/β, w/γ), where x α w γ y. We leave it to the reader to prove that this map is a well-defined isomorphism of E/α × E/γ with E/β × E/γ, which commutes with the second coordinate projection. The first sentence follows immediately from the other things proved here, together with the fact (Lemma 5) that isotopy of algebras is a transitive relation. ∎

Given congruences α and β on E that are one-step isotopic over γ, we will define the γ-projectivity map from I[α, 1] ⊆ Con E to I[β, 1] ⊆ Con E to be the projectivity map (see Definition 2.26 in §2.3) defined by δ ↦ (δ ∧ γ) ∨ β. It is the composition of two perspectivities: meet with γ, and then join with β. More generally, if α and β are isotopic over γ_1, …, γ_{k−1}, then by the associated projectivity map from I[α, 1] to I[β, 1] we mean the composition of the γ-projectivity maps for γ = γ_1, γ_2, …, γ_{k−1}. Of course, if Con E is modular, then the associated projectivity map is an isomorphism between I[α, 1] and I[β, 1], but in general all we know is that it is an order-preserving map. As in the Correspondence Theorem (4.12), for δ ∈ I[α, 1], we will take δ/α to denote the congruence on E/α that corresponds naturally to δ via the stipulation that (a/α, a'/α) ∈ δ/α iff (a, a') ∈ δ.

LEMMA 8. If α and β are isotopic congruences on E (over γ_1, …, γ_{k−1}), then there exists a bijection φ: E/α → E/β such that

(a/α, a'/α) ∈ δ/α implies (φ(a/α), φ(a'/α)) ∈ f(δ)/β

for all a, a' ∈ E, for all δ ∈ I[α, 1], and for f the associated projectivity map from I[α, 1] to I[β, 1]. Similarly,

(b/β, b'/β) ∈ λ/β implies (φ⁻¹(b/β), φ⁻¹(b'/β)) ∈ g(λ)/α

for all b, b' ∈ E, for all λ ∈ I[β, 1], and for g the associated projectivity map from I[β, 1] to I[α, 1].

Proof. We will assume that k = 2, i.e., that α and β are one-step isotopic (over γ), that f is the γ-projectivity map from I[α, 1] to I[β, 1], and g is the γ-projectivity map from I[β, 1] to I[α, 1]. Clearly the general case follows easily from this one. Fix an element e of E. Define φ(a/α) to be b/β for any b such that

a α b γ e.

The existence of such an element b comes from the fact that α × γ is defined, and hence that α ∘ γ = 1. Moreover, if a is changed to a' in the same α-class (i.e., (a, a') ∈ α), and if b' is any element satisfying a' α b' γ e, then we clearly have (b, b') ∈ α ∧ γ ≤ β, and so φ is well defined as a map from E/α to E/β. To see that φ is a bijection, we will show that the map ψ constructed symmetrically to φ is a two-sided inverse to φ. That is, we define ψ: E/β → E/α by defining ψ(b/β) to be the α-class of a, where

b β a γ e.

(It follows as before that a/α is determined uniquely by the class b/β.) There is an obvious symmetry to this situation. Therefore, to see that ψ is a two-sided inverse of φ, it will be enough to show that if φ(a/α) = b/β and ψ(b/β) = a'/α, then a α a'. Since by definition we have

a α b γ e  and  b β a' γ e,

we clearly also have

(a, a') ∈ α ∨ (β ∧ γ) = α ∨ (α ∧ γ) = α,

as was to be proved. Now to verify the implication involving the γ-projectivity map f, we will assume that (a/α, a'/α) ∈ δ/α, and prove that (b/β, b'/β) ∈ f(δ)/β, where b and b' are defined via

a α b γ e  and  a' α b' γ e.

From these relations, and the fact that α ≤ δ, it is evident that (b, b') ∈ δ ∧ γ, and thus that (b, b') ∈ (δ ∧ γ) ∨ β = f(δ). Therefore, (b/β, b'/β) ∈ f(δ)/β, as desired. This proves that if (a/α, a'/α) ∈ δ/α, then (φ(a/α), φ(a'/α)) ∈ f(δ)/β. The final sentence of the lemma is proved similarly. ∎

LEMMA 9. In the situation of Lemma 8, if the algebra E has a one-element subuniverse, then φ can be taken to be an isomorphism φ: E/α ≅ E/β.

Proof. We continue the proof of Lemma 8, while assuming in addition that {e} is a subuniverse of E. To show that φ is a homomorphism, we consider an n-ary operation F of E and elements a_0, …, a_{n−1} ∈ E. Let us further suppose that

a_i α b_i γ e,

so that φ(a_i/α) = b_i/β for each i. Now, since α and γ are congruences, we have

F(a_0, …, a_{n−1}) α F(b_0, …, b_{n−1}) γ F(e, …, e),

which reduces, since {e} is a subuniverse, to

F(a_0, …, a_{n−1}) α F(b_0, …, b_{n−1}) γ e.

These last congruences imply that φ maps F(a_0, …, a_{n−1})/α to F(b_0, …, b_{n−1})/β. Therefore, since α and β are congruences on E, we also know that φ maps F(a_0/α, …, a_{n−1}/α) to F(b_0/β, …, b_{n−1}/β) = F(φ(a_0/α), …, φ(a_{n−1}/α)), and thus φ is a homomorphism. ∎

We are able to make the most effective use of isotopy when it occurs in combination with the modular law. We therefore define algebras A and B to be modular-isotopic over C, and write A ∼^mod_C B, iff A ∼_C B and Con(A × C) is a modular lattice. We say that A and B are modular-isotopic in one step, and write A ∼^mod_1 B, iff A ∼^mod_C B for some C. Finally, we define ∼^mod to be the transitive closure of ∼^mod_1. That is, we say that algebras A and B are modular-isotopic, and write A ∼^mod B, iff there exist algebras D_1, …, D_k such that A = D_1, B = D_k, and D_1 ∼^mod_1 D_2 ∼^mod_1 ⋯ ∼^mod_1 D_k. We have the following counterpart to Lemma 7.



Proof. The existence of the set N is guaranteed by Theorem 2.7, and uniqueness is just a restatement of Theorem 2.55. ∎

Theorem 2.56 has a much stronger conclusion than the Kurosh-Ore Theorem (2.33), which holds more generally for all modular lattices. This simple result has some use in algebraic geometry, and we will use it in the investigation of subdirect representations of algebras with distributive congruence lattices. As shown by M_3, elements of modular lattices can have several complements. This cannot happen in distributive lattices. In view of Theorem 2.51(v), an element of a distributive lattice can have at most one complement relative to any bounded interval. Thus complements and relative complements in distributive lattices are unique whenever they exist. According to Dilworth [1945], there are nonmodular lattices in which every element has a unique complement. The construction is very elaborate and is not included here. On the other hand, every lattice in which relative complements are unique must be distributive, by Theorem 2.51. See Exercise 2.63(7) regarding uniquely complemented modular lattices. The complemented elements in a distributive lattice can be used to decompose the lattice.

THEOREM 2.57. Let L be a bounded distributive lattice and let a, a* ∈ L, where a and a* are complements of each other. Then L ≅ I(a] × I[a).

Proof. Define f: L → I(a] × I[a) by f(x) = (x ∧ a, x ∨ a) for all x ∈ L. This f is the desired isomorphism. Given distributivity, the demonstration that f is a homomorphism presents no difficulty, so we omit it. To see that f is one-to-one, suppose f(c) = f(b). This means that c ∧ a = b ∧ a and c ∨ a = b ∨ a. But then, by Theorem 2.51(v), c = b. Finally, f is onto I(a] × I[a): suppose c ≤ a ≤ b. Just observe that

((b ∧ a*) ∨ c) ∨ a = b  and  ((b ∧ a*) ∨ c) ∧ a = c.

So f((b ∧ a*) ∨ c) = (c, b). ∎
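For a concrete check of this decomposition (our illustration, not the book's): in the distributive lattice of divisors of 30, the elements 5 and 6 are complements, and the map x ↦ (x ∧ a, x ∨ a) of the proof is a bijection onto I(a] × I[a):

    # Theorem 2.57 in the divisor lattice of 30 (meet = gcd, join = lcm), a = 5, a* = 6.
    from math import gcd
    L = [d for d in range(1, 31) if 30 % d == 0]
    meet = gcd
    join = lambda x, y: x * y // gcd(x, y)
    a, astar = 5, 6
    assert meet(a, astar) == 1 and join(a, astar) == 30          # complements
    f = {x: (meet(x, a), join(x, a)) for x in L}
    down_a = [x for x in L if meet(x, a) == x]                   # I(a]
    up_a = [x for x in L if join(x, a) == x]                     # I[a)
    assert sorted(f.values()) == sorted((c, b) for c in down_a for b in up_a)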



The converse of this theorem holds in the following sense. Suppose that L = Lo x L, where Lo and L, are bounded lattices. Then the elements (1,O) and (0,l) are complements of each other in L and Lo r I((1, O)] while L, r IC(1,O)). Distributivity plays no role here. Even in the proof of the theorem, the full power of distributivity is not needed to obtain the decomposition of L into the direct product of other lattices. Later we will see how to decompose relatively complemented lattices of finite length. A complemented distributive lattice is called a Boolean lattice. In Chapter 1, we defined Boolean algebras in such a way that complementation was a basic unary operation. Thus the relation between Boolean lattices and Boolean algebras is like that between groups treated as algebras with one operation-the group

5.2 Direct Factorization and Isotopy

We first prove the simple fact, for A, p 2 a, that if A o p = 1, then f (A)o f (p) = 1. To see this, we will assume that A 0 p = 1 and take arbitrary c, d E E. We need (A) f ( ~ ) to show that c -f - y --- d for some y E E. Choose a, b E E such that 4(a/a) = c/P and d(b/ct) = d/P. Since A o p = 1, we know that a x Y b for some x. Choose y so that d(x/a) = y/P. Now, by the definition of the relation v a , we have (ala, xla) E Ala, so by our assumption on 4, we have (4(a/a),4(x/a))E f (A)/P. By our previous choices of a and y, this means that (c/P, y/P) E f (A)/P, and so, by the definition of the relation f(A)/P we have (c,y) E f(A). A similar argument shows that (y, d) E f (p), SO y is as required-that is, we have shown that if A O p = 1, then f(A) o f(p) = 1. In fact, the reverse implication holds as well, by an almost identical argument, so we have A o p = 1t,f (A) o f (p) = 1. The conclusions of the lemma are now almost immediate. For instance, if 6, x S, = 6,, then we have f (6,) A f (6,) = f (6,) from the fact that f is an isomorphism of lattices, and f (6,) o f (6,) = f (6,) o f (6,) = 1, from what we have just proved. These two facts together te11 us that f(6,) x f(6,) = f(6,). M As a final preparation for the results of $5.3, we ask the reader to take a second look at the lemma before Theorem 4.71 and the remarks immediately preceding that lemma.

Exercises 5.2 1. Find an algebra A with congruence lattice

2. 3.

such that A is not isomorphic to A/O, x A/O,. (This supports our claim that the relation 4 = O, x O, is not definable within lattice theory.) Prove Lemma 3. Let O,, O, be congruences on a finite algebra A. O, x - - - x O, exists iff, for v each i = 2, - . n, (O, A - - - A Oi-,) permutes with Oi and (O1 A . - A Oi = 1. (For an analog in modular lattice theory, see the next exercise.) In a modular lattice L of finite height ($2.4), {a,, a,) is a directly join independent subset of L iff, for each i = 2, - n, (a, v - -. v a,-,) A ai = 0. Let A be a five-element algebra with no operations. Find congruences O,, O, O, on A such that each B, permutes with each Oj, each quotient A/Oi has exactly two elements, 6, A 13, A 83 = 0, and m ,

S ,

4.

S ,

a,

5.

(O1 A 8,) v O,

= (O2 A

O,) v O1 = (O3 A 8,) v O,

=

1.

(Clearly, we cannot have O = O, x O, x 83, so this exercise places some limits on possible weakenings of the hypotheses for Exercise 3.)

In this section, we will prove the following two unique factorization results, which follow from Ore's Theorem and its generalization, the Direct Join Decomposition Theorem (2.47). The first of these comes from the second [1948] edition of Birkhoff's Lattice Theory; since it relies so heavily on Ore's Theorem, it is generally called the Birkhoff-Ore Theorem.

THEOREM 5.3. (G. Birkhoff [1948].) If A has permuting congruence relations, Con A has finite height, and A has a one-element subuniverse, then A is uniquely factorable.

Our other unique factorization result for this section is Bjarni Jónsson's modification of the Birkhoff-Ore Theorem to hold for finite congruence modular algebras.

THEOREM 5.4. (B. Jónsson [1966].) If A is finite, Con A is modular, and A has a one-element subuniverse, then A is uniquely factorable.

It remains open whether the natural common generalization of Theorems 5.3 and 5.4 is true.

PROBLEM. If ConA is a modular lattice of finite height, and A has a oneelement subalgebra, then must A be uniquely factorable? REMARK: Our proof of Theorem 5.4 requires finiteness of A in one place only, namely for proving Lemma 4 above, which enters into our proof as assertion (vi) of Lemma 16 below. If Lemma 4 should turn out to hold for Con A modular and of finite height, then we would have a positive answer to this problem. Exercises 1-6 of 55.1 show that none of the three hypotheses (congruence modularity, finite height, one-element subalgebra) can be completely removed, either from the problem or from 5.3 or 5.4. Even in the absence of a one-element subalgebra, we still have modular-isotopy versions of Theorems 5.3 and 5.4. (Recall that modular isotopy of algebras, denoted by -"Od, was introduced in $5.2.)

THEOREM 5.5. Suppose that Con A has finite height and the congruences of A permute. Then A has a product representation with directly indecomposablefactors, and al1 suchfactorizations have only finitely many factors. Moreover, if A r B, x

x B,

r C,

x

x C,,

with al1 .factors directly indecomposable, then m = n and, after renumbering, Bi -"Od Ci for 1 5 i 5 n. Consequently, for each i, 1 Bil = 1 Gil and Con Bi r Con Ci.

THEOREM 5.6. If A is finite and Con A is modular, and if

1



multiplication-and groups as we have introduced thein in Chapter 1. Homomorphic images of Boolean lattices are again Boolean lattices and direct products of systems of Boolean lattices are also Boolean lattices. However, subalgebras of Boolean lattices are not generally Boolean lattices. For example, the only chains that are Boolean are those with one or two elements, but long chains are quite common in most Boolean lattices. Theorem 2.57 can obviously be applied to finite Boolean lattices to obtain the following result. COROLLARY 2.58. Every finite Boolean lattice is isomorphic to a direct power ¤ of the two-element chain. Another way to formulate this corollary is the following: Every finite Boolean lattice is isomorphic to the lattice of al1 subsets of some set, where the join of two subsets is just their union and the meet of two subsets is just their intersection. This reformulation of the corollary is a simple consequence of the connection between sets and characteristic functions. Indeed, the members of the kth direct power of the two-element chain with elements O and 1 can be regarded as the characteristic functions defined on a k-element set. The correlation of subsets with their characteristic functions is an isomorphism from the lattice of subsets onto the direct power. Neither the corollary nor its reformulation hold for arbitrary finite distributive lattices or for arbitrary infinite Boolean lattices in place of finite Boolean lattices. But it is possible to accommodate these lattices by giving up only a little of the power of the conclusion. It turns out that every finite distributive lattice is isomorphic, in a rather special way, to a sublattice of a direct power of the two-element chain (or, in the language of the reformulation, to a sublattice of the lattice of al1 subsets of some set, consisting of certain kinds of subsets.) Let J = (J, 5 ) be any ordered set. An order ideal of J is just a subset of J that is closed downward. That is, I c J is an order ideal of J iff for al1 a and b in J , if a E I and b 2 a, then b E 1. Let L be a lattice and let J(L) be the set of al1 nonzero join irreducible elements of L. The ordered set obtained by restricting the ordering of L to J(L) is denoted by J(L), and the set of order ideals of J(L) is denoted by Ord J(L). Evidently, (Ord J(L), fl, U ) is a distributive lattice. Ord J(L) will denote this lattice. Also let I S O ( J ~ , C,) stand for the set of al1 isotone maps from the ordered set J~into the two element chain C,. (Recall that the superscript a indicates the operation of forming the dual of an ordered set.) Evidently ISO(J~,C,) is a sublattice of the direct power C: of the two-element chain. The connection between Ord J(L) and ISO(J~,C,), where J = J(L), is that the characteristic functions of the order ideals are exactly the isotone maps, and this correlation establishes an isomorphism. The next theorem is due to Birkhoff [1933]. THE REPRESENTATION THEOREM FOR FINITE DISTRIBUTIVE LATTICES (2.59). Let L be a finite distributive lattice and let J be the ordered set of nonzero join irreducible elements of L. Then L E Ord J(L) lso(Ja, C,). Moreover, each projection function on C: maps ISO(J~,C,) onto C,.
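As a quick illustration of the Representation Theorem (2.59) (ours, with a hypothetical example): in the divisor lattice of 12, the nonzero join irreducibles are 2, 3, 4, and the order ideals of {2, 3, 4} under divisibility match the six elements of the lattice:

    # J(L) and Ord J(L) for the finite distributive lattice of divisors of 12.
    from itertools import combinations
    from math import gcd
    L = [1, 2, 3, 4, 6, 12]
    join = lambda x, y: x * y // gcd(x, y)
    def join_irreducible(p):
        return p != 1 and all(join(x, y) != p for x in L for y in L
                              if x != p and y != p)
    J = [p for p in L if join_irreducible(p)]                     # [2, 3, 4]
    ideals = [S for r in range(len(J) + 1) for S in combinations(J, r)
              if all(q in S for p in S for q in J if p % q == 0)] # order ideals of J
    assert len(ideals) == len(L)                                  # Ord J(L) has |L| elements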



p v y). There is one special case, however, in which modularity

permits us to deduce that two congruences permute. We present this special case in the next lemma. In Lemma 15 we will apply it to obtain a product decomposition for one special case, which will enter the main argument stream as part (viii) of Lemma 16 below.

LEMMA 14. (B. Jónsson.) If ConA is modular and a x a' then a A p permutes with a'. Proof. Suppose (a, b) E (a A P) o a',-that is, (a, x) E (a some x E A. Thus we have the following diagram

A

=8

I P in Con A,

P) and (x, b) E a' for

where the existence of y follows from condition (1)in Lemma 1 of $5.2. To obtain (a, b) E a' o (a A p), and thus complete the proof, it will be enough to prove that (y, b) E p. To see this, we note that (y, b) E a A (a' v (a A p)), and then use modularity to compute a

A

(a' v (a

A

p)) = (a A a') v (a A p) = O v ( a ~ p)IPv(a~p)=P;

and hence (y, b) E p.

¤

LEMMA 15. Let a, a', 8, and y be elernents of Con A, and assume that Con A is modular. Suppose that a x d and (d v (a A P)) x y exist, and that y 2 a A p and p 2 a A a'. Then a' x y also exists. Proof. We need to prove that a' o y = 1 and y o a' = 1 in Con A. The two proofs are very similar, and so we present only the first. Given any a, b E A, we need to find x E A such that (a, x) E a' and (x, b) E y. By hypothesis, (a' v (a A P)) 0 y = 1, so there exists y E A such that

Now a' permutes with a

Since a

A

p

A

p, by Lemma 14, so we have

y, we readily obtain a

which is the desired conclusion.

a'

x

-

b,

i 1

i

1

j 1

1 I I

!

5.3 Consequences of Ore's Theorem

In order to prove Theorem 5.6, we will modify slightly the proof used for Theorem 5.5. The main change is that we need to invoke a version of the Direct Join Decomposition Theorem (2.47) that applies to the isotopy relation for congruences, introduced in $5.2, rather than to isotopy in the sense of lattice theory, introduced in $2.4. Let us recall that in proving the Direct Join Decomposition Theorem, we based our argument on only those facts about (latticetheoretic) join independence, and its associated notion of isotopy, that are contained in the ten conditions of Theorem 2.46. Therefore, to have a version of the Direct Join Decomposition Theorem that applies to isotopy of congruences, we need only prove the counterpart of Theorem 2.46 for the direct product notion of independence and its associated notion of isotopy. We now define a set r G Con A to be independent iff the product of r exists, in the sense of $5.2, that is, iff condition (1) of Lemma 1 ($5.2)holds. We will let IND(A) denote the family of al1 independent sets of congruences on A. We now prove such a counterpart, which, for convenience, we have stated in a form dual to that of 2.46. The finiteness assumption is actually required only for assertion (vi).

-

LEMMA 16. Let A be a finite algebra with Con A modular. i. ii. iii. iv. v. vi. vii. viii. ix. X.

If N E M E IND(A), then N E IND(A). If M E IND(A), then M U (1) E IND(A). If a x B E M E IND(A), then (M - {a x P) ) U {a,P) E IND(A). If a, p E M E IND(A) and a # /?, then (M - {a,P)) U {a x /?) E IND(A). If M E IND(A) and f : M -+ Con A such that f (x) 2 x for al1 x, then { f (x): x E M) E IND(A). I f a x a ' = p x p l = a r \ p l = a f ~ p , t h e n ax p = a ' x / ? = a x a'. If {a,a') E IND(A) with a # a' and P > a, then fi x a' > a x d. If p 2 a x a', p 2 a, and (a v (a' A p), p) E IND(A), then {a,P) E IND(A). If a = a x p, then p = 1. If a x P y, then there are a' and P' such that y = a' x j', a a', and P P'.

-

-

-

Proof. The ninth assertion, and the first four, are obvious from the definitions involved and from the elementary remarks in $5.2. The fifth assertion is a restatement of clause (iv) of Lemma 3, and the sixth is immediate from Lemma 4. The seventh assertion holds in al1 algebras (see Exercise 1 below). The eighth assertion is immediate from Lemma 15, and the tenth follows immediately from Lemma 13. THE DIRECT JOIN DECOMPOSITION THEOREM. (Second version.) Let A be a finite algebra with Con A modular. If P,, - Pm,y,, - . , y,, E Con A and 8, x - x pm y, x .- x y,,, with each pi and each yj directly indecomposable, then m = n, and, after renumbering the y,, we have Pi y, for each i. S ,

N

-

,

280

Chapter 5 Unique Factorization

Broof. The proof is exactly the same as the one we gave for the original Direct Join Decomposition Theorem (2.47),with the following modifications. The original proof must be dualized, the notions of IND and "isotopic" from Chapter 2 must be replaced by the corresponding notions from this chapter, and finally, each allusion in the original proof to one of the ten assertions of Theorem 2.46 must be changed to refer to the corresponding assertion of Lemma 16 here.

m Proof of Theorem 5.6. By Lemmas 6 and 11, the final sentence of the theorem follows immediately from the modular isotopy of B, and Ci, so we need only prove the main conclusion, namely that B, wmodC, for each i. E, -yd . -,"Od By definition of wmod,we have a chain of algebras E, -,"Od E,, where E, is B, x - x B, and E, is C, x x C,. By Lemma 11, we know that Con E, z Con A for each i, and by Lemma 6, we know that each Ei is finite. Therefore, we may assume that each E, is a finite algebra with a modular congruence lattice, and has been written as a product of indecomposables. Clearly, if we prove the conclusions of the theorem for each pair E,, E,,,, we will then have the full conclusion for Bi and C,. In other words, it will suffice to prove the theorem under the simpler assumption that B, x x B, -ydC, x x C,. Under this assumption we have, by Lemmas 10 and 12, a finite congruence modular algebra E and congruences P, y on E such that E/P z B, x x B, and E/y r C, x x C, and P y. By Lemma 1, we have P = P, x x P,, with Bi r E//&for each i, and likewise y = y, x - - x y,, with Cj z E/yj for each j. We therefore have P, x - x P, x y,, so the conclusion of the y, x theorem is immediate from the Direct Join Decomposition Theorem (and m Lemma 10). -

a

-

-

-

Proof of Theorem 5.4. We look over the previous proof under two assumptions: that A has a one-element subuniverse, and that we actually have A r B, x - x B, r C, x x C,. In this case, we may take E = A, and p = y = O. Thus, as in the previous proof, we have yi Pi for each i. Therefore, A/Pi r Aly, for each m i, by Lemma 9, which is to say that B, r Ci for each i.

-

REMARK ON THE PROOF: We have perhaps made it look as if Theorem 5.4 (of B. Jónsson) were a simple corollary of the lattice theoretic results of O. Ore. If we have given such an impression, it is a misleading one. In fact, Jónsson's proof required a very careful treatment of Ore's Theorem, including some substantial improvements. In order to present a unified development, we decided to incorporate those improvements into our statement of the Direct Join Decomposition Theorem in Chapter 2.

Exercises 5.7 l. Prove assertion (vii) of Lemma 16: If a, a', and P are congruences on an algebra A such that a x a' exists and /? > a, then fl A a' > a A a'. (Lemma 16 assumes finiteness and modularity, but they are not needed for this part, which holds

281

5.3 Consequences of Ore's Theorem

for any algebra A, whether finite or infinite, and with no assumption of modularity.) In Exercises 2-8 below, we consider a finite dimensional vector space V = ( V; + ,r,(L E F) ) over an algebraically closed field F, together with a linear transformation T: V -+ V. We analyze T by considering the algebra V, = ( V ; +, rA(LE F), T). Clearly, every finite product of such algebras is again an algebra of the same type (i.e., a finite dimensional vector space W over F, expanded by a single linear endomorphism of W), and a similar result holds for direct factors. These algebras V, clearly satisfy al1 hypotheses of the BirkhoffOre Theorem, so each V, has a product representation with directly indecomposable factor algebras, which are unique up to rearrangement and isomorphism. As we will see in these exercises, the indecomposable factors of V, correspond exactly to the elementary Jordan matrices into which the matrix of T can be decomposed. Let us call T (or the corresponding algebra V,) m-nilpotent iff Tm= 0, and nilpotent iff it is m-nilpotent for some m. We call T (or V,) multicyclic iff V has a basis {u!: 1 i i 5 k, 1 5 j 5 m(i)} such that T has the following form when restricted to elements of this basis: ...r,v:5,; 5 0 ...1,v;r,v;r,o ..T*v;5v; 5 0 . Finally, let us call T (or V,) cyclic iff it is multicyclic with k = l. 2. If T is multicyclic, then k and m(i) (1 i i i k) are determined by the isomorphism class of V,, and conversely, these integers determine the isomorphism type of V,. 3. The product of two multicyclic algebras is multicyclic (with the path lengths m(i) of the product easily derivable from those in the two factors). 4. Every multicyclic algebra is a product of cyclic algebras. 5. A cyclic algebra of more than one element is directly indecomposable. Hence a multicyclic algebra V, is directly indecomposable iff 1 V )# 1 and V, is cyclic. *6. V, is nilpotent iff V, is multicyclic (and, in fact, the least m such that Tm= O is the largest of the m,). (Hint: Multicyclic -+ nilpotent, almost trivially. For the converse, suppose that Tm= 0, but T"-' # O. Start by choosing u,: vy, - so that vl/ker(Tm-'), v2/ker(Tm-l), form a basis of the quotient space V/ker(~"-l). Define u:-' = T(vy), u,"-' = T(vy), and so on, and prove that v;"-'/ker(~"-~),vY-'/ker(~"-~),. are linearly independent in k e r ( ~ " - ' ) / k e r ( ~ " - ~Take ). as many further elements u?-' as are required to make v ? - ' / k e r ( ~ ~ - ~~,"-'/ker(T"-~), ), form a basis of the space k e r ( ~ " - ' ) / k e r ( ~ " - ~ )Continue . this procedure to define al1 vi. It is n necessary to prove that the collection of al1 vf forms a basis of V.) m

282

Chapter 5 Unique Factorization

7. If VT is directly indecomposable, then T = S + 1for some cyclic S and some A E F. (Hint: By finite dimensionality, some nontrivial linear relation must hold between the various powers I = TO, 7; T2, T 3 , . Let p(x) be a nonzero polynomial of smallest degree in F[x] such that p(T) = O. Taking A to be a zero of p(x) in F, we have p(x) = (x - A)Mq(~) with M 2 1 and with A not a zero of q(x). Since F[x] is a principal ideal domain, we have 1 = r(x)(x - A)M+ s(x)q(x). From this we easily deduce that V, is isomorphic x (ker q(T)),. Since q(x) has degree smaller than that of to (ker(T p(x), we know that q(T) # O (i.e., ker q(T) # V). Hence, if indecomposable, V, is isomorphic to (ker ( T - A)M),.)

Therefore, every finite dimensional VT is isomorphic to a direct product of algebras on which T is A plus a cyclic transformation (for some A). From this it is easy to deduce the Jordan normal form for T.

8. (Jordan normal form.) If V is a finite dimensional vector space, and T is an endomorphism of V, then there exists a linear basis for V under which T takes the form

where each A iis a square matrix of the form
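In connection with Exercise 8, here is a small numerical sketch (ours; the eigenvalue and block sizes are arbitrary choices) of an elementary Jordan matrix λI + N, where N is the nilpotent shift supplied by a cyclic transformation, together with a block diagonal matrix built from such blocks:

    # An elementary Jordan matrix lam*I + N for lam = 2 and block size 3,
    # where N is the nilpotent shift of a cyclic transformation.
    import numpy as np
    lam, m = 2, 3
    N = np.diag(np.ones(m - 1), k=1)              # shift: e_1 -> 0, e_2 -> e_1, ...
    J_block = lam * np.eye(m) + N                 # [[2,1,0],[0,2,1],[0,0,2]]
    assert np.allclose(np.linalg.matrix_power(N, m), np.zeros((m, m)))
    # A full Jordan form is block diagonal in such blocks, one per indecomposable factor:
    T = np.block([[J_block, np.zeros((m, 2))],
                  [np.zeros((2, m)), 5 * np.eye(2) + np.diag(np.ones(1), k=1)]])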

5.4 ALGEBRAS WITH A ZERO ELEMENT Earlier in this chapter, we saw how some hypotheses on the congruence lattice could yield unique factorization results. In this section, we consider instead the endomorphism monoid. If A is an Abelian group, then EndA has structure beyond that of an Abelian group, as mentioned in Exercise 1.6(6);it is in fact a ring. It turns out that a useful fragment of this ring structure persists under much weaker assumptions on A: it suffices to assume that A has a binary operation and a one-element subuniverse (O) such that the equations O + x Ñ x + O Ñ x hold in A. Throughout this section and the next, we will consider only algebras

+

283

5.4 Algebras with a Zero Element

A = (A, +,O, - - ) of this type. For short, we will cal1 such an algebra A an algebra with zero. Our principal result in this section will be the theorem ofSB.Jónsson and A. Tarski that finite algebras with zero are uniquely factorable. (Some elaborations of this result will come in the next section.) Their starting point was a simple way to identify direct factorizations within the endomorphism "ring" of an algebra A with zero; this is a straightforward generalization of the well-known technique of viewing a direct product G x H of two Abelian groups as the "direct sum" of its two subgroups G x {O) and (0) x H. To see the general case, let us be given an algebra A with zero and a direct product decomposition

in Con A. Now, for i = 1, n, define fi: A -+A as follows: f,(a) is the unique u, E A with (u,, a) E ai and (ui,O) E aj for each j i. The reader should establish (see Exercises 1 and 2 below) that each fi is an endomorphism of A, and that together the fi's satisfy the following conditions: S ,

+

(Here, multiplication of functions means composition, and functions are added pointwise: (f + g)(x) is by definition equal to f(x) g(x). The identity function is denoted 1, and O denotes the constantly O function.) Conversely, given f,, f, satisfying (2), then, as the reader may establish,

+

m ,

1

(3)

O

= kerf,

x - - - x ker f,,

and our two procedures are reciprocal, thus yielding a bijection

Since the order of occurrence of f,, ,f,makes no difference in (3), the same must be true for (2). By analogy with Abelian group theory, we will abbreviate (2) as

1

l
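A toy instance of such a direct sum decomposition of the identity (our illustration, not the book's): for the group Z_6, viewed as an algebra with zero, the endomorphisms a ↦ 3a and a ↦ 4a play the roles of f_1 and f_2:

    # f1, f2 are idempotent endomorphisms of (Z_6, +, 0) with f_i f_j = 0 for i != j
    # and f1 + f2 = 1, the conditions the text abbreviates as 1 = f1 (+) f2;
    # their kernels give the factorization 0 = ker f1 x ker f2 of (3).
    A = range(6)
    f1 = lambda a: (3 * a) % 6
    f2 = lambda a: (4 * a) % 6
    assert all(f1(f1(a)) == f1(a) and f2(f2(a)) == f2(a) for a in A)   # idempotent
    assert all(f1(f2(a)) == 0 and f2(f1(a)) == 0 for a in A)           # f_i f_j = 0
    assert all((f1(a) + f2(a)) % 6 == a for a in A)                    # f1 + f2 = 1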

HISTORICAL NOTE: The original 1947 treatment by Jónsson and Tarski was concerned with the ranges Bi = L(A) of the endomorphisms L. Here A is the inner direct sum of its subalgebras Bi, written

+ in the sense that each a E A is uniquely expressible as a sum ( . (b, + b,) + b, with b, E B, for each i. We leave it to the reader to examine the equivalence of this notion with our notion concerning endomorphisms. (We outline the ideas and precise definitions in Exercises 4-6 below.) For our purposes, it will be more convenient to deal with the endomorphisms directly. (This apparently was also true for the later writings of Jónsson on this subject; by 1964 he was writing in terms of the endomorphisms.) e )

284

Chapter 5 Unique factor iza ti^^

We may also ask for conditions in terms of endomorphisms that are equivalent to indecomposability of factors. Now the general notion of the decomposability of a, namely a = P x y from 55.2, is not amenable to a treatment via endomorphisms, since a may not be the kernel of any endomorphism. But if a is itself a direct factor (i.e., if a = ker f, where f @ f ' = l), then we do have an endomorphism theoretical characterization of decompositions of a. LEMMA l. Let f @ f 1 = 1 . If 1 = q 5 @ $ @ f 1 and # + S / = f, then kerf = ker 4 x ker $. In fact, the correspondence (4, $) ++ (ker 4, ker $) is a bijection between

and

((B, Y):k e r f

=

P

x Y}.

Proof. See Exercise 7 below.

¤

Our ringlike structure on EndA arises from the obvious binary operation on endomorphisms formed by pointwise addition (as we had it above): (f + g)(x) = f(x) + g(x). Unfortunately, End A is not usually closed under this addition. Moreover, + does not usually obey any laws like associativity or commutativity. Our next lemma will describe some special situations where the sum of two endomorphisms is again an endomorphism, and some situations where associativity and commutativity hold. The lemma is phrased in terms of functions A -+ A, so that one can see these results as mirroring a fragment of ring theory. In many applications, we will apply them to situations where a, b, and c are constant functions. Thus we will often think of a, b, and c in the next lemma as elements of A. The next three lemmas will be in this ring theoretical spirit. Remember that in this case we are writing f 0 g simply as fg. As usual in unique factorization work, we ultimately intend to compare two direct factorizations of a given algebra A (with zero). Thus, for the remainder of this section, we will assume that

LEMMA 2. Let a, b, c and d be any functions A + A, and let S, t, f, f ', g, g', h, h' be endomorphisms of A with 1 = f @ f ' = g @ g' = h O h'. Then

(4)

a(bc)=(ab)c, al=la=a, Oa = O, (a + b)c = ac + bc, a+O=O+a=a, s(a+b)=sa+sb, so = o.

(5) (6)

fa fs

= fb

+ f 't

and f 'a = f 'b imply a = b. is an endomorphism of A.

5.4

(7) (8) (9)

Algebras with a Zero Element

+ hfgf ' t is an endomorphism of a + hfgf ' b = hfgf'b + a.

s

A.

(a+hfgf'b)+c=a+(hfgf'b+c). (10) There exists a function x : A -+A such that x hfgf ' b = 0. (11) If c = hfgf 'd, or if c is any finite sum of maps of that form, then c is cancellative-i.e., a + c = b + c implies that a = b.

+

(12) fg'fg = f g f f g . (13) fgfg'f = fg'fgf, i.e., fgf commutes with fg'f.

/

j

1

Proof. Assertions (4) and (5) are obvious. For (6),let F be any n-ary A-operation, and select x,, x,EA. We need to see that a = ( f s f't)(F(x,;..,x,)) and /3 = F( fs(x,) f ' t ( x , ) , ,fs(x,) f 't(x,)) are equal to each other. For this, we make the following calculation, using the fact that f and s are endomorphisms of A: .

.

S

,

+

+

+

't(x,), ,fs(xn) + f 't(xn)) f 2 ~ ( ~ +n )(ffrt(xn)) = F ( f 2 ~ ( ~+1( f f ' t ( x i ) ., = F(fs(x,),. ..,fs(xn)) = fsF(x,,.. ,x,) = ( f 2 s ( f f f t ) F ( x 1- ,., x,) = f(a).

f (P>= Jz'(fs(x1) + f

1

/

1I

+

A similar calculation yields f ' ( a )= f ' ( j ) ,and so a = P, by (5). We now give an argument that will yield (7), (8), and (9);it begins with a long calculation. (Here a is introduced as an abbreviation for the expression that appears in the previous line; similarly, when we introduce j a few lines below, it is meant as an abbreviation for the expression in the line that precedes it, and likewise for y and 6 farther on.)

+ ( f ' b + f e ) = fCfa + ( f ' b + fc)l + f'Cfa + ( f ' b + fc)l

fa

+ + + + + + + + +

+ +

+

[ f ' a ((ff'b f2c)] [ f ' f a ( f t 2 b f'fc)] [ f a (O fc)] [O ( f ' b O)] = ( f a fc) f'b = a. = =

Similar1y, fa

+ ( g f ' b + fc) = gCfa + ( g f ' b + fc)l + s'Cfa + (gf ' b + fc)l = ga

i I

=

a I

P,

and

1

1

+ g'f(a + c)

+ ( f g f ' b + c) = f [a + ( f g f ' b + 4 1 + f =f

j + ff(a+c)

' [ a + (fgf'b + 4 1

286

Chapter 5 Unique Factorization

and finally, a

+ (hfgf'b + c) = h[a + (hfgf'b + c)] + hf[a + (hfgf'b + c)] = hy + hf(a + c) = 6.

Now to prove (7), we examine these calculations for a = S , b = t, and c = 0. Successive applications of (6) yield first that a is an endomorphism, then that P is an endomorphism, then y, and finally that 6 is an endomorphism, thereby proving (7). To prove (8) and (9), we first note that almost identical calculations will establish that (f'b

+ fa) + fc = a

(9f 'b + fa) + f c = P (fgf'b a) c = y (hfgf'b + a) + c = 6.

+ +

Our two different formulas for 6 yield a

1

+ (hfgf'b + c) = (hfgf'b + a) + c.

Taking c to be O yields the special case (8) of commutativity. Finally, (8) can be applied to this last equation to yield equation (9). For (lo), we simply take x = hfg'f'b and calculate x

+ hfgf

'b

(g + g')f 'b = hf lf'b = hff'b = hOb = 0. = hfg'f'b

+ hfgf

'b

= hf

E;==,

For (ll), we will assume that c = hihgiLfbi(and, by (8) and (9) it does not matter precisely how this "sum" is associated). By (lo), for each i we have an additive inverse xi for hifigif,'bi. Taking y = (. (x, + x,) + - - - ) + x,, one easily checks, using (8) and (9), that

for al1 a. It is now immediate that c is cancellative. For (12), we calculate

and

Since fg'f'g is cancellative by (1l), we obtain fg'fg Finally, (13) follows easily from (12):

= fgf'g.

The next lemma was proved for groups by H. Fitting in 1934 and later extended to algebras with zero by J. LOS.Recall that we are still assuming that

1 j

287

5.4 Algebras with a Zero Element J

l

f and g are endomorphisms of A such that 1 = f @ f ' 4: A + A is called idempotent iff 42 = 4.

= g @ g'.

A function

LEMMA 3. ( n 2 1) If 4 = ( f g f ) " is idempotent, then there exists an endomorphismSof Asuchthat f = 4 + S a n d 1 = $ @ S @ f r . Proof. Define

,

(Notice that by (9) we do not need further parentheses.) By (7), q is an endomorphism. We now claim that q + 4 = f. In the following calculation, we make free use of (8) and (9):

This is the same expression we began with, except that the top exponent has been reduced by l . We may continue this procedure until we obtain A similar calculation yields 4 + q = f. We now take the desired to be q2 and claim that bS/ = $4 = O. To see this, we first observe that dS/ = $4 follows from 4q = q4, which in turn follows from (13). Then for 4S/ = O we calculate Now 4q is cancellative by ( l l ) ,so we obtain #S/ = 0. Now q42 = 4 by hypothesis. To obtain S/2 = S/, we first calculate q3 = o + q3 = 4q2

+ qq2 = (4 + q)q2 = fq2 = q2.

Hence, $2

j ,1

l

l

1 1

= q4 = q2 =

S/.

It is now easy to obtain 4f' = S/f' = f ' 4 = f 'S/ = O. For example, 4f' = ( 4 f )f ' = 4 ( f f 1 )= 4 0 = 0, and similarly in the other cases. Al1 that remains to be proved is that 4 + S/ = f. We first observe that and so

4 + S/ = 4 + q2 = (4 + $4) + q2 = 4 + (4q + q 2 ) = (4 + 44) + (44 + q 2 ) = (4 + d 2 = f 2 = f .

(by (9)) m

LEMMA 4. 1 = f @ f ' = g @ g'; ( n > 1). If (fgf)" = f , then there is an endomorphism S/ such that (gfg)" + S/ = g, and the following two equations hold:

Chapter S Unique Factorization

1 = (gfg)" o $

o 9'

1 = (fg)" o C(f9f )"-'9'

+ 1If

'a

Proof. One immediately calculates that (gfg)" is idempotent, so the facts about $ follow from the previous lemma (with the roles of f and g reversed). For the final equation, the idempotence of (fg)" is immediate from (fgf)" = f, and the idempotence of the second summand is easily calculated from the fact that it has the form (f h + 1)f '; we omit the details. For the products of the two summands, we have first the obvious equation

and then we calculate

Finally, we need to see that the two summands in the second equation really do add to 1:

+ [(fsf )"-'9' + llf' C(f9)" + (fsf )"-'slf'l + f ' = (fg)"-'Cfs + fslf'l + f '

(fs)"

=

C(f9f + fsf ') + fslf'l + f ' = (fg)"-'Cfsf + (fsf' + f9lf')l + f ' = (f!J)"-'Cfgf + 01 + f ' =f + f1=1.

(by (9))

= (f9)"-'

(by (9)) 1

l

1

i

We now turn away from the pure "ring" theory of endomorphisms and look instead at how endomorphisms interact with the algebra A and its congruence relations. Thus, we will continue to have 1 = f f ' = g @gr. Moreover, we adopt the notational convention that a = ker f, a' = ker f ', P = ker g, and P' = ker g'.

LEMMA 5.

If (fgf )" = f,then there are congruentes y and y # on A such that P = y x y # , and

O

=y

x a'.

In fact, we muy take y to be ker fg.

Proof. This will follow immediately from Lemma 4 (using Lemrna l), as soon as we establish that ker fg = ker(fg)" = ker(gfg)" and that ker[( fgf)"-'g' + 11f ' = a'. The first assertions are immediate from (fgf)" = f. For the final assertion, we ask the reader to verify the simple result that ker(f h + 1)f ' = ker f ' for any endomorphism h (regardless of whether f h + 1 is an endomorphism). Our next lemma is not particularly related to the theory of algebras with zero; it is rather a general set-theoretical result. We have not had much occasion

289

5.4 Algebras with a Zero Element

to mention it in this book, but, in fact, ker4 makes sense (as an equivalence relation on the domain of 4) for any function 4 whatever. Among equivalence relations, O as usual denotes the smallest one, the kernel of the identity function. LEMMA 6. Let A be any set, and suppose that 4, $: A + A with ker 4 í l ker $ = O. If 4 o S/ = $ o 4, then ker 4" ílker $" = O for every positive integer n.

1

l l

Proof. The proof is by induction on n, with the case n = 1 of course being one of our assumptions on 4 and $. Assuming inductively that ker 4"-' í l ker $"-' = O, let us suppose that a, b E A, with bn(a)= #"(b) and $"(a) = $"(b). To see that a = b, let us first consider u = 4"-' ($ (a)) and v = 4"-' ($"-' (b)).Since 4 o $ = $ o 4, our assumptions on a and b easily yield #(u) = #(v) and $(u) = $(u), and so u = u by our hypothesis that ker 4 í l ker $ = O. In other words, ($"-'(a), $"-'(b))~ker 4"-', and clearly our assumption on a and b yields ($"-'(a), $"-'(b)) E ker $"-' By induction, we therefore have $"-'(a) = $"-'(b). A similar argument yields $"-'(a) = $"-l(b). Finally, one more use of induction yields a = b.

"-'

LEMMA7. For al1 n 2 1, ker(fgf)"ílker(fg'f)"

1

1

1

= kerf.

Proof. Clearly, ker f c ker(fgf)" 17 ker(fg'f)". For the reverse inclusion, we will take 4 = fgf and $ = fg'f, considered as maps of f(A) into itself. Since $ + 4 = 1 on f(A), we know that ker 4 í l ker S/ = 0; moreover, 4 commutes with S/, by (13). Therefore, Lemma 6 tells us that ker 4" ker S/" = O. Thus if (x, y) E ker(fgf)" í l ker(fg'f)", then (f (x),f (y))E ker 4" ílker S/" = O, which means that f (x) = f (y), or (x, y) E ker f. The next lemma is the counterpart, for this theory, to the difficult lemma that appeared in 92.4, just before the Direct Join Decomposition Theorem. It will play a role in the proof of the Jónsson-Tarski Theorem much like the role of that lemma in the proof of the Direct Join Decomposition Theorem. LEMMA 8. If A is un algebra with zero, and O = a x a' = P x P' for a, a', P, p' E Con A, with a indecomposable and Ala finite, then there exist y, y E Con A such that O = y x a' and either /3 = y x y # or = y x y#.

Proof. Of course, as in the rest of this section, we will take 1 = f @ f ' = g @ g', with a = kerf, and so on. The range of f is in one-to-one correspondence with Ala, and so by hypothesis is finite. Therefore, as maps from the range of f into itself, the sequence of maps (( fgf)": n = 1,2, must contain a repetition. But in fact, fgf maps A into the range of f, and so the sequence of maps (( fgf)"": n = 1,2, ...) contains a repetition, even when we consider these as maps defíned on al1 of A. Therefore we have (fgf )" = (fgf)m+kfor some m and k with k > 1. A similar argument yields (fg'f)' = (fg'f)'" for some r and s with s 2 1. Taking n larger than both m and r, and divisible by both k and S, we obviously have (fgf )2n= (fgf )" and (fg'f)2" = (fs'f)" Since a < 1,we know from m)

290

Chapter 5 Unique Factorization

Lemma 7 that either ker(fgf )" < 1 or ker(fg'f)" < 1. These two alternatives give rise to the two alternative conclusions of the lemma. Since the arguments are similar, we shall continue our proof only for the case that ker(fgf)" < 1. By Lemma 3, we have 1 = (fgf )" @ $ @ f ' for some $ with (fgf)" + $ = f, and therefore a = ker f = ker(fgf)" x ker $, by Lemma 1. Since a is indecomposable and ker(fgf )" < 1, we must have a = ker(fgf )". We next claim that (fgf >" = f. To see this, consider any x E A, and observe that, by idempotence, (x, (fgf )"(x))E ker(fgf )" = ker f. By definition of ker f, we immediately have f (x) = f (fgf )"(x), and this last term is equal to (fgf )"(x), by the idempotence of f. Thus we have established our claim that (fgf )" = f. Now the conclusion of the lemma is immediate from Lemma 5. ¤ THE JÓNSSON-TARSKI UNIQUE FACTORIZATION THEOREM (5.8). Every finite algebra with zero is uniquely factorable. Proof. We will prove, by induction on the smaller of m and n, that if A is a finite algebra with zero, and

with each Ai and each Bj directly indecomposable, then m = n and, after renumbering, A, r Bi for each i. We may assume, without loss of generality, that in fact n 5 m. The theorem obviously holds (by the definition of "directly indecomposable") for n = 1. Therefore we move to the inductive step and assume that unique factorization holds for al1 smaller values of n. By 55.2, there exist directly indecomposable congruences a,, . , a, and P,, . Pn on A such that Ala, z Ai for each i and A/Pj r Bj for each j, and such that S ,

We apply Lemma 8 to-the algebra A and the congruences a' = a, x

a = a,,

P=Pl

x ... x Pn-1,

m - .

X

a,,

P'=Pn,

thereby obtaining congruences y, y # on A such that y x a' = 0, and such that or = y x We will consider these two alternatives sepaeither /?= y x rately. Before attending to those two cases, let us first note that from y x a' = a, x a' we have y a,, and hence Aly r A/a,, by Lemma 9 of $5.2. Therefore, we also know that y is directly indecomposable.

-

CASE 1: fl = y x y #. Let y # = y, x - x y,-, be a factorization of y # into a product of directly indecomposable congruences. We have

with al1 factors directly indecomposable. Correspondingly, we have

with al1 factors directly indecomposable. Therefore, by induction, s = n, and we

291

5.4 Algebras with a Zero Elernent 1

may permute the indices of the algebras Bj so that B, r A/y and Bi r A/yi for 2Si1n-1. Now we also know that y x a' = O. In other words,

and so

,

with al1 factors directly indecomposable, which implies (by Lemma 9 of $5.2) that A, x

1

-- x

A,

2 A/y,

x

-

x A/y,-, x B,,

I

with al1 factors directly indecomposable. By induction, m = n and we may permute the indices of the algebras A,, - A, so that A, r B, and Ai r A/yi for 2 5 i < n - 1. To see the full conclusions of the theorem, we note that A, S A/y r B,, that for 2 5 i 5 n - 1 we have A, r A/yi r B,, and finally, as we remarked above, that A, r B,. This completes the proof of the theorem in Case 1. a,

CASE 2: jY = y x y # (i.e., pn = y x y#). The equation O = y x a' tells us that y cannot be 1; therefore the direct indecomposability of 8, tells us that y # = 1 and p, = y. Therefore A, r Aly = A/p, r B,. Moreover, we have and so p

I

l

1 1

I

-

a'. In other words,

Pl

x

---

x

Pn-,

N

a, x

m

.- x a,,

with al1 factors directly indecomposable, which implies (by Lemma 9 of $5.2) that Bl x

x Bn-l z A2 x

- x A,,

with al1 factors directly indecomposable. By induction, m = n, and we may permute the indices of the algebras Ai so that B, r A, and Bi r Ai for 2 5 i 5 n - 1. Finally, if we interchange the indices of A, and A,, then we will have Ai Bi for al1 i. •

l

I

1

The next theorem looks ahead to the theory of cancellation, which will be developed in $5.7, although in a different framework. In 55.7 we will be able to prove this result for A, B, and C arbitrary finite algebras, so long as C has a one-element subuniverse. Here we allow A and B to be infinite, but they (and C) must be algebras with O. Exercise 8 of $5.1 shows that we cannot omit the finiteness condition on C.

I

THEOREM 5.9. (Jónsson and Tarski.) Every finite algebra with zero is cancellable in the class of al1 algebras with zero (of the sume type). In other words, if

292

Chapter 5 Unique Factorization

AxCrBxCrD for D un algebra with zero and C finite, then A r B.

Proof. We may assume that C is directly indecomposable, for in the general case indecomposable factors of C may be cancelled one after another. Now D has congruences a, a', p, p' with O = a x a' = p x fi' and

Obviously a is directly indecomposable and D/a is finite, and so by Lemma 8 there exist congruences y and y # on D such that O = y x a' and either P = y x y # or P' = y x y # . CASE 1: P = y x y#. Notice that the theorem is trivial for a' = 0, and so we may assume that a' > O; hence y < 1. Since /?is directly indecomposable, we have p = y, and so O = /? x a' = P x pl. Therefore, a' P', and so by Lemma 9 of 55.2 we have A r D/af r D/Pt r B.

-

CASE 2: P'= y x y#. Therefore, a' x y = 0 = P x a' P x y#, which entails

-

On the other hand, y x a' x y # , yields

p' = y

=O =a

x a' and so y

B'=P

-

x y x y # , and so

a, which, together with

Now our isomorphisms for A and B show that A r B, thereby completing the proof of this case, and hence completing the proof of the theorem. ¤

Exercises 5.10 The first six exercises use the notation of the beginning of this section without further notice. In particular, they al1 concern A = (A, + ,O, ),an algebra with zero. l. Prove that fi defined at the beginning of this section is an endomorphism. 2. Verify equations (2) from the beginning of this section. 3. Verify equation (3) and verify that the bijection mentioned there is really a bijection.

Exercises 4, 5, and 6 al1 concern the connection between direct product decompositions A r C, x x C, and inner direct sum decompositions A = B, @ @ B,. As intermediate links for this connection, we have chosen to use both product decompositions of congruences (e.g., O = a, x x a,) and direct sum decompositions of functions (e.g., 1 = f, @ @ f,). But al1 four of the

293

5.4 Algebras with a Zero Element

subjects are closely interrelated, so the reader may choose instead to find a different path connecting direct products and inner direct sums. The next three exercises are therefore only a suggestion of a possible way to proceed.

4. Prove that if 1 = f1@ - @ f,, and Bi = &(A) (1 < i 5 n), then each a E A can be written uniquely as a sum -(bl + b,) + + b,,, with each bi E Bi. This last sum is equal to any of its rearrangements by commuting or associating its summands. Finally, if F is an m-ary operation of A and b i j Bi ~ (1 < i < n , l < j < m ) , t h e n ( e

a )

5. Conversely, if the subuniverses Bi have the property that each a E A can be -(bl + b,) + bn, and if (14) holds for al1 written uniquely as a sum F and al1 b(, then there exist endomorphisms fi such that 1 = f , @ - @ f, and B, = f,(A) for each i. In such a case, we may write ( m -

m..)

+

where Bi is the subalgebra of A with universe Bi. 6. If A = B l @ - . . @ B n , then A E B , x x B,. Conversely, if A r x C,, then A has subalgebras Bi such that Bi E Ci (1 < i < n) and C1 x A = B1 @ - . . @ B , . 7. Prove Lemma 1. 8. Prove that Lemma 6 fails if we remove the assumption that 4 o S/ = $ o 4. 9. Prove that the congruence y = ker fg appearing in Lemma 5 is in fact equal t o p v (a A P'). We now present some exercises that will allow the reader to step through the main proofs in this section with some matrix calculations that go smoothly but are nonetheless not completely trivial. In particular, these examples show that there is no way to limit the exponent n appearing in Lemmas 3 and 4 and in the proof of Lemma 8. These exercises ask the reader for some of the high points, but in fact each detail of the proofs in this section can be examined by seeing what it means for these examples. 10. An illustration of Lemma 3. Let R be a commutative ring with unit, and define A to be a three-dimensional free module over R. Thinking of elements of A as column vectors, 3 x 3 matrices over R act as endomorphisms of A. Thus we have the following endomorphisms of A:

Chapter 5 Unique Factorization

Prove that 1 = f @ f ' = g O 9'. Evaluate fgf, and notice that fgf = 2h, where h is idempotent. (Thus, in fact, the behavior of the powers (fgf)" depends on the behavior of the sequence (2") in the ring R. In particular, if al1 powers 2" are distinct, then no (fgf)" will be idempotent, which shows that Lemma 3 is not generally applicable (e.g., as in proving Lemma 8), without some finiteness assumption.) Examine the special case R = Z, distinguishing between three cases: m = 2k,m odd, and the mixed case-m = 2kq with k 2 1 and q an odd number 2 3. In these various cases, find the 4 and $ of Lemma 3. For example, in the ring h,, we have

Verify directly that 1 = 4 @ S/ O f '. 11. Verify that equations (12)and (13)hold for f,f ',g, and g' taken from Exercise 1o. 12. An illustration of the proof of Lemma 8. Let A be the algebra of Exercise 10, with one extra unary operation: multiplication by the matrix

(Equivalently, we consider our algebra to be a module over the ring Q of al1 matrices of the form

with a, b E R.) Certainly, every matrix which commutes with E acts as an endomorphism of A. First prove that f, f', g, and g' (defined below) are endomorphisms of A:

and then prove that 1 = f @ f ' = g @ g'. Next prove under certain assumptions on R (say, R is a field) that a = kerf, a' = ker f ', P = ker g, and P' = ker g' are al1 directly indecomposable. Calculate fgf and work out a simple formula for (fgf )" (as matrices). Prove that the smallest n for which (fgf )" is idempotent is precisely the characteristic of the ring R. (If R has characteristic zero, then no (fgf)" will be idempotent.) Verify for this smallest n that (fgf )" = f, and also check the assertions of Exercise 9 for this example.

l

1I

f

j

5.5 The Center of an Algebra with Zero

i

295

13. Obtain al1 the details of Lemmas 4 and 5 for the example given in Exercise 12. For instance, evaluate (fg)" and [( fgf)"-'g' + 11f ' as matrices. Get a direct calculation of S/ by applying Lemma 3. 14. In the example of Exercise 12, describe the Q-submodules B = f(A), B' = fl(A), C = g(A) and C' = gl(A). In the framework of Exercises 4-6, prove that A = B @ B' = C @ C' = C @ B'.

5.5

THE CENTER OF AN ALGEBRA WITH ZERO

In the preceding section of this chapter, our only assumption about the binary was that it obeys the laws x + O Ñ O + x Ñ x. Thus, perhaps the operation most surprising single feature of the proofs in that section was the discovery of a family of elements y such that x + y = y + x and x + (y + z) = (x + y) + z for al1 x and z. In particular, (8) and (9) te11 us that any y of the form fgf '(w) has this property for any w and any endomorphisms f,f ',g, and g' such that 1 = f @ f ' = g @ gr. (Of course, it is possible that al1 such y are 0, as happens, for instance, in a centerless group, but in that case we obtain the law fgf '(w) Ñ O, which has very strong consequences, as we shall see in Corollary 2 to Theorem 5.17 in $5.6 below.) In group theory, the set of al1 such y is easily seen to form a normal subgroup, commonly known as the center. With a little more care, we can define the center C of any algebra A with zero. Every J E C will have the properties stated above (and more); fgf' (as above) will map A into C; and C will be a subuniverse of A. In fact, for a finite algebra A, C is the O-block of O for 0 the center congruence Z(A), which we defined in $4.13.(We will show in a later volume that this also holds if A lies in a congruence modular variety.) Our objective in this section will be to define this center and then strengthen the Jónsson-Tarski Unique Factorization Theorem (5.8) to hold for any algebra whose center is finite (Theorem 5.14 below). This stronger result was also achieved by Jónsson and Tarski [1947]; they used the center right from the start, rather than giving our development of the preceding section. We will also prepare for a proof in $5.6 that if D is an algebra with center = (O), then its direct factorizations are unique in the sense that if O = a, x - x a, = p, x - x P,, with al1 factors directly indecomposable, then m = n and, renumbering if necessary, al = P1, a2 = P2, and so on. For this result and a parallel result about algebras with distributive congruence lattice, see Corollaries 1 and 2 to Theorem 5.17 of $5.6. We remarked above that a little more care is required in defining the center in the general case than was required to define the center of a group. The main reason for this is that there seems to be no simple criterion for a single element c to be central. Instead, we must work with centrality of an entire subuniverse.

+

i

DEFINITION 5.11. B is a central subuniverse of A iff B is a subuniverse and

296

Chapter 5 Unique Factorization

for every A-operation F. An element b~ A is central iff b lies in some central subuniverse. Let d be a central element and a an arbitrary element of A. By (16),we have

+ d ) + (a + 0 ) = (O + a) + (d + O), from which we see that d + a = a + d-i.e., a central element commutes with every element of A. The reader may similarly show that a central element d associates with every two elements of A, in the sense that (a + d ) + c = a + (d + c) for (O

arbitrary a, c E A. (See Exercise 6 below.) The next two lemmas follow readily from the definitions, and the reader may work out the details.

L E M M A l . Any two central subuniverses together generate a central subuniverse, so the set of al1 central elements forms a central subuniverse, which is in U fact the largest central subuniverse. DEFINITION 5.12. The center of A, denoted C(A),or simply C, is this largest central subuniverse.

+

+

L E M M A 2. Every central element c is cancellable in the sense that a c = b c implies a = b, for any a, b E A. The center is an afine subuniverse in the sense that ( C , +,O) is un Abelian group and each A-operation F acts linearly on C. I.e., F(c,,c,,.-.) for some endomorphisms a,, a,,

-

= al(cl)

+ a,(c,) +

a

-

.

of ( C , + ,O).

The next lemma shows the relevance of the center to direct factorization theory. In a sense, it contains a large part of the information of Lemma 2 in 55.4. Here f , f ', g, and g' al1 refer of course to endomorphisms of an algebra A with zero.

L E M M A 3. l f 1 = f @ f' = g @ g', then f maps the center of A into itself and fgf' maps al1 of A into the center of A. Proof. Clearly f ( C )is a subuniverse, so for the first assertion it will be enough to check that f (b)satisfies (15)for b E C, and also that (16)holds for arbitrary a,, a,, . . E A and with b, ,b, . replaced by f (b,),f (b,), for arbitrary b,, b,, . . E C. For the first of these, we know by the centrality of b that b + c = O for some c, so f (b) f (c) = O. For the second one, we must prove the equality of a = F(a1 + f(b,),a, + f(b,), - - . ) with p = F(a,,a,, ...) F(f(b,),f(b,), ...). The centrality of the b,'s yields f(a) = f(P), and clearly f ' ( a )= f ' ( P ) (without any centrality assumption). Therefore a = P. Next let us show that the range of fgf' is contained in the center of A. We

+

+

297

5.5 The Center of an Algebra with Zero

will first show that the subuniverse B to say that

= fgf

'(A) satisfies condition (16), which is

for al1 a,, b,, a,, b2, . E A. Denoting the two sides of this equation by y and 6, we note that obviously f '(y) = f1(6), and so it only remains to prove that f(y) = f (6). Working toward this equation, we first get since the two sides are equal under application of f and of f '. We next observe that

since both sides are equal under application of g and of g'. Finally, application of f to both sides of this equation yields the desired equation f(y) = f (a), which leads, as we remarked above, to the conclusion that B = fgf'(A) satisfies condition (16). Likewise, B' = fg'ff(A) satisfies condition (16), and so one easily checks that D = B + B' is a subuniverse satisfying (16). Finally, since each member of B has an additive inverse in B', and vice versa. Then one easily checks that D obeys (15) and thus is a central subuniverse. Before going on to obtain unique factorization for algebras with finite center, we will pause to study the connection between this center C = C(A) (of Jónsson and Tarski) and the center congruence Z(A) of $4.13 above, which is defined for al1 algebras A (not just for algebras with zero). If A is an algebra with zero, then the congruence block O/Z(A) is obviously a subuniverse of A. If A is a group, then C(A) = O/Z(A), since both are equal to the center of A in the usual sense of group theory. When we return in Volume 3 to the study of the commutator in congruence modular varieties, we shall see that the equation C(A) = O/Z(A) holds for A an algebra with zero in any congruence modular variety. For now, we prove a related result that does not require modularity. As we will see in Exercise 1 below, we cannot remove the finiteness assumption, since, e.g., if A = ( o , +) (usual addition of non-negative integers), then C(A) = {O} and O/Z(A) = o .

THEOREM 5.13. If A is un algebra with zero, then C(A) holding if O/Z(A) is finite.

O/Z(A), with equality

Proof. We first need to see, quite simply, that if b is a central element of A (in the sense of this section), then (O, b) E Z(A). Since b is central, we have

+

t ( b , c 1 , . - - , ~ ,= ) t(b,O;-.,O) t(O,cl;--,c,) t(b, dl;..,dn) = t(b, O,... ,O) + t(0, d l , - a *d,) ,

298

Chapter 5 Unique Factorization

for every n > 1, for every term operation t E Clo,+,A, and for al1 c,, - c,, d,, - d, E A. Since t(b, O,. O) is central, and hence cancellable, it is clear that s.,

m ,

m ,

and hence, by Definition 4.146, that (O, b) E Z(A). We will next prove that, regardless whether O/Z(A) is finite or infinite, it forms a subuniverse B that satisfies condition (16) in the definition of centrality. To see this, we first take any A-operation F, of arity n, and define t to be the 3n-ary term operation

Now we easily check that

for arbitrary a,, - - a, E A. Now let b,, b,, . be arbitrary elements of O/Z(A). By definition of Z(A), we may change the final O on both sides of our last equation to b,, yielding a ,

Continuing to make changes of this sort, we ultimately obtain

Referring back to our definition of the term t, we may rewrite this last equation as F(O, ..,O) + F(a1

+ bl,

- ,a,

+ b,) = F(a,,. - - ,a,) + F(O + b,,

,O + b,),

which in turn is easily reduced to Since a,, - ,a, were arbitrary elements of A and b, , ,b, were arbitrary elements of B = O/Z(A), we have now established condition (16) for the subuniverse B. As we remarked above, it is not hard to prove from condition (16) that + is a commutative and associative operation on B. We will next observe that for b E B, and c, d arbitrary elements of A, the cancellation law b + c = b + d -,c = d holds. This follows immediately from

which is a special case of the defining condition (17) for (0, b) EZ(A),which we stated in the first paragraph of this proof (and which carne originally from Definition 4.146). Therefore, in particular, + obeys the cancellation law when restricted to B. We now invoke the hypothesis that B = O/Z(A) is finite. It is well known

5.5 The Center of an Algebra with Zero

299

+

that if is an associative cancellative operation on a finite set B, then (B, + ) is a (multiplication) group, so this must be true for the case at hand. Now condition (15) of the definition of centrality is immediate, since it obviously holds for groups. This completes the proof that B = O/Z(A) is a central subuniverse of A, and hence that O/Z(A) C(A). ¤ Now that we have established this connection between the Tarski-Jónsson center C(A) and our other center Z(A), we will continue, in this chapter, to refer to C simply as the center of A. We will not need to mention the other center, although the reader may compare the examples in Exercises 2-5 below with various examples of Z(A) presented in 84.13. We conclude this section with its main result, which is a strengthening of the main result of the previous section.

THEOREM 5.14. (Jónsson and Tarski.) Every algebra with zero whose center is finite is uniquelyfactorable, provided that each of its directfactors is decomposable into a finite product of indecomposable algebras. Proof. If we look over the proof of the Jónsson-Tarski Unique Factorization Theorem (5.8) and its lemmas, in the previous section, we see that finiteness was invoked in one place only. In the proof of Lemma 8 (just before Theorem 5.8), we used the finiteness of A to establish that (fgf )" = (fgf )m+kfor some m and k with k 2 1 (and to establish a similar result for fg'f). Here we will prove this same fact under the weaker assumption that the center is finite, and then we may invoke the entire earlier proof of Theorem 5.8. Of course, we adopt al1 the notation that was in force in the earlier proof, especially 1 = f @ f ' = g Q gr. For m 2 3, we define

which, by Lemma 3, may be considered as a map of the center into itself. Since the center is finite, we must have h, = h,+, (as maps defined on the center) for some m and some k 2 1. Since the range of gfg' is contained in the center (again by Lemma 3), we have hm!?f~'f =hm+k~f~'f on the entire algebra. Now, we calculate

Iterating this calculation k times (with k as determined above), we obtain

Since hmgfg'f = h,+,gfg'f, and since this is moreover a cancellable element, we finally obtain (fgf)" = (fgf)"+" A similar calculation yields (fg'f)' = (fg'f)'+"

300

Chapter 5 Unique Factorization

for some r and S with s 2 l . From this point on, we simply follow the remainder of the proof of Lemma 8 of the previous section and then follow the proof of the Jónsson-Tarski Unique Factorization Theorem (5.8). m

Exercises 5.15

+

l. Let A be (co, )- that is, the cancellative commutative monoid consisting of al1 non-negative integers under the operation of addition. Prove that C(A) = (O}, but O/Z(A) = co (Le., A is Abelian, in the sense of 54.13). 2. The center of a group C is its center in the classical sense, namely (xE G: xy = yx for al1 y E G}. 3. The center of a ring R is the set of x E R such that xy = yx = O for al1 y E R. *4. If A is an algebra with zero that lies in some congruence distributive variety, then C = (O}. Thus, of course, a lattice with O has a trivial center (although we will see a stronger fact in the next exercise). (For some hints, refer to Exercise 4.156(7) in 54.13.) 5. If A is an algebra with zero such that the implication

6. 7. 8.

9.

*lo. 11.

holds for al1 a and b in A, then C = {O}. This applies, for instance, to A = (S, v ,O), a join semilattice with zero. (In the next section we will see that the hypothesis C = (O} has very strong consequences, which go beyond unique factorization theorems as we have known them up until now.) If d is central, then (a + d) + c = a + (d + c) = d + (a + c) for al1 elements a and c. Prove Lemma 1 and Lemma 2. If A is a subdirect product of algebras Bi(with zero), then the center of A is contained in the product of the centers of the algebras B,.In particular, if each Bihas center (O}, then the same is true of A. Prove that Theorem 5.9 extends to the case where C is a finite product of indecomposable algebras, each with finite center. Let X be the class of algebras with zero of a given similarity type. Prove that every indecomposable algebra in Y with finite center is prime in X . (For the definition of X-primality, see page 263,§5.1.) The previous exercise is false without the assumption of finite center.

5.6 SOME REFINEMENT THEOREMS We will say that an algebra A has the refinement property (R) (for direct factorizations) iff

implies the existence of algebras Dij(i E 1,j E J) such that, for al1 i and j,

92

Chapter 2 Lattices

While this characterization of principal congruences in lattices is useful, it can be sharpened in certain classes of lattices. A lattice L is said to have the projectivity property iff whenever I [a, b] is weakly projective into I [c, d], then I [a, b] is actually projective with a subinterval of I [c, d]. For lattices with this property, principal congruences can be characterized as in Theorem 2.66, except that each I [ei, e,,,] is projective with a subinterval of I [c, d].

EXAMPLE 2.67. A lattice without the projectivity property. Proof. Let L be the lattice diagrammed in Figure 2.15. The interval I [b, a] transposes weakly down to I [O, c], which transposes up to I [f, e]. So I [b, a] is weakly projective into I [f, e]. But b is meet irreducible, so it is impossible for I[b,a] to transpose up to any other interval. Since c is the only element that gives a when joined with b, I [d, c] is the only interval to which I [b, a] transposes. Similarly, since c is join irreducible and b is the only element that gives d when met with c, we find that I[b, a] is the only interval to which I[d, c] transposes. Hence I[b, a] is not projective with I[ f, e] (or any of its subintervals), even though it is weakly projective into I [f, e]. •

Figure 2.15

THEOREM 2.68. Every modular lattice and every relatively cornplemented lattice has the projectivity property. Proof. First let L be relatively complemented. Suppose that I [a', b'] transposes a bI b'. We argue that I [a, b] is projective up to I [c, d] weakly and that a' I with a subinterval of I[c,d]. Let a* be a complement of a in I[al, b]. Now just note that

so I [a, b] L I [a', a*] and since a' = b' A c and a' I a* I b'. So I [a', a*] r I[c, a* v c]. Therefore I [a, b] is projective with I[c, a* v c], a subinterval of I[c, d]. The proof can now be

302

Chapter 5 Unique Factorization

later results of this section) with our first unique factorization results that do not require a one-element subuniverse. Our main tool in refining decompositions is the notion of binary decomposition operation, which was defined in Definition 4.32 in 54.4. A decomposition operation of A is a homomorphism f : A2 -+ A that satisfies the equations

If f is a decomposition operation, then for each u, u E A we define functions f, and f::A + A as follows: fv(x) = f (x, u), and f,'(y) = f (u,y). (In 54.4, we wrote f u where we now have f:.) If a x a' = O for congruences a, a' on A, we define f.,..:A2 -+ A as follows:f.,,.(x, y) is the unique w such that (x, w)E a and (w, y) E a'. The following lemma recapitulates various results from 54.4.

LEMMA 1. ker fv and kerf,' are congruences on A that do not depend on the choice of u and v. The correspondences

form a bijection, and its inuerse, between the set of pairs (a, a') E (Con A)2 with a x a' = O and the set of decomposition operations on A. ¤ In case A is an algebra with zero and u = v = O, then f, and f:are the f and f ' of the previous two sections. More generally, if A happens to have a oneelement subuniverse (e}, then fe and fé will be endomorphisms of A, but in the general case f, and f: will not be endomorphisms. Recall from Definition 4.28 in 54.4 that a factor congruence of A is a congruente a on A such that a x a' = O for some a' E Con A. As we will see in our next theorem, the strict refinement property is equivalent to the assertion that the factor congruences form a Boolean sublattice of Con A. First, a lemma showing that certain subsets of the factor congruences always form a Boolean lattice.

LEMMA2. If O = n i & ,

then themap

KH nai I-K

is a (O, 1)-homomorphismfrom the (Boolean) lattice of subsets of I into Con A.

Proof. It is enough to prove that if O = a x

Pxy

x 6, then

( a x P ) ~ ( a x y ) = a x p x yand (a x P) v (a x y) = a. The first equation is immediate, and for the second, we certainly have a 2 (a x p) v (a x y). For the opposite inclusion, we will take (x, y) E a, and show that (x, y) E (a x p) o (a x y). Since a x p x y exists, we know (from Lemma 1 of 55.2)

303

5.6 Some Refinement Theorems

that there exists u E A with (u, x) E a, (U,X)E p, and (u, y) E y. Since (u, x), (x, y) E a, we have (u, y) E a by transitivity; thus (u, y) E a A y. Combined with (u, x) E a A P, this yields (x, y) E (a A P) o (a A y). ¤ Our next theorem gives various characterizations of the strict refinement property. This is the theorem we promised in the last paragraph of 54.4. For our applications, the most useful conditions will be those involving the functions f,. Notice that if f is a decomposition operation, and we define h(x, y) to be f (y, x), then h is also a decomposition operation, and fi = h, for every u. Therefore, conditions (v) and (vi) below also apply to f: and g:.

THEOREM 5.17. For any algebra A, the following conditions are equivalent:

i. A has the strict refinement property. ii. A has the strict refinement property for finite index sets I and J. iii. The set of factor congruentes of A forms a Boolean lattice (i.e., it is a sublattice of Con A, and the distributive laws hold on this sublattice). iv. If O = a x a' = p x pl for a, a', P, p' E Con A, then (a v p) A a' r P. v. f,g, = g, f, for al1 decomposition operations f, g, and al1 v E A. vi. There exists V EA such that f,g, = g, f, for al1 decomposition operations f and g. REMARK: In (iii), we did not have to mention complementation explicitly, since, of course, every factor congruence has a complement that is itself a factor congruence. Proof. Lemma 2 immediately yields (ii) --+ (iii), and the implications (i) --+ (ii), (iii) + (iv), and (v) -+(vi) hold a fortiori. It remains to show (iv) -+(v) and (vi) -+ (i); we begin with the latter. Assuming (vi), and given O = nBi = y,, we define S, to be the congruence Pi v y j for each i and j. To prove (i), it remains to show that di, = Di for each i, and the 6,'s are as required for strict refinement, i.e., that 6, = y, for each j. By Lemma 1, for each i there is a map fi: A + A such that Di = kerfi and fi has the form f, for some decomposition function f. Likewise, we have y, = ker gj We wil¡ for some g, of the same form. By (vi), every fi commutes with every now prove that 6, = kerfigj = kergjfi for each i and j. It is immediate that ker fi 5 ker gjfi = ker figj and that ker gj 2 ker figj, and thus that 6, = ker A v ker gj 5 kerfig,. For the reverse inclusion, it is clear that for any (x, y) E kerfigj we have

n

n

n

ker gj

x ---- gj(x)

ker fi

ker fi

f i ~ j ( x= ) fi~j(~)

gj(y) JE!x- Y

and so kerfigj 5 kerfi v kerg, = dij. Certainly A(x) = fi(y) iff gjfi(x) = gjfi(y) for al1 j, and so 6, = Pi for each i. To verify that this intersection is actually a product, we need only show (for fixed i) that for any a E A' there exists b E A with fig,(b) = figj(aj) for al1j E J. This follows immediately from the existence of b E A with gj(b) = gj(aj) for al1j,

,.

304

Chapter 5 Unique Factorization

which in turn is immediate from the existence of n y , . Therefore, the product 8, exists and is equal to Pi. A similar argument shows that 6, exists and is equal to yj. This completes the proof of (i) from (vi). For (iv) (v), we will assume (iv) and then prove f,g,(x) = g, f,(x) for any two decomposition operations f, g, and any u, x E A. In order to apply (iv), we will take a = kerf,, a' = kerf:, p = kerg,, and p' = kergú; thus O = a x a' = p x P'. We will prove the equality fug,(x) = guf,(x) in two steps: first, (fug,(x), g, f,(x)) E p, and then the same for p'. To begin, we have

n,

ni

-

and so (fu(x),f,g,(x)) E (a v p) A a'. Therefore, by (iv) we have (f,(x), fugu(x))E P. Moreover, (f,(x), g,f,(x)) E p, and so by transitivity (f,g,(x), g,f,(x)) E P. For the corresponding p'-relation, we have

and so by (iv)(with p' in place of P) we have (u,fugu(x))E P'. Moreover, (u, g, fu(x))E I(', and so by transitivity (fugu(x),gufu(x))E P'. Thus fig,(x) = g, f,(x), and the M proof of (v) is complete. This theorem has two especially important corollaries closely related to the main results of 555.3-5.5 (the unique factorization theorems of Ore, Birkhoff, and Jónsson, and of Jónsson and Tarski). Notice that they do not require finiteness and that the first one does not require a one-element subuniverse. The first corollary is immediate from (iv) + (i) of the theorem but can also be seen from (iii) (i) together with Exercise 16 of $4.4.

-

COROLLARY l. If Cona is distributive, then A has the strict refinement property. m COROLLARY 2. If A is un algebra with zero and the center of A is {O}, then A has the strict refinement property. Proof. In the notation of 5$5.4 and 5.5, we need only see that fg = gf whenever 1 = f @ f ' = g @ gr. By Lemma 3 in 55.5, fgf' and f 'gf both map al1 of A into the center of A, so we must have fgf' = f 'gf. Hence, fg = f s ( f + f ' ) = fsf + fsf' = fsf + f 'sf = ( f + f ')sf

= sf.

As we remarked above, it follows from these corollaries that an algebra with distributive congruences or with zero center has the refinement property. Hence, if it has any factorization into indecomposables, then it has the unique factorization property, and even more: by Theorem 5.16, if

I

305

5.6 Some Refinement Theorems

with each Diand yj indecomposable, then the 13,'s are exactly the same as the y~s. Corollary 1 tells us that al1 lattices have the strict refinement property, and Corollary 2 tells us that the same is true of any join semilattice S with O. Let us look at another particularly simple proof for a lattice L with O and 1, which mentions neither the center nor congruence distributivity. If L = M x M', and if we let f denote the decomposition operation associated to this factorization of L, then fo(a, a') = (a, O) = (a, a') A (1, O). This general form for fo is isomorphisminvariant, so we may conclude that for each decomposition operation f there exists b~ L such that f,(y) = y A b. Now the semilattice laws te11 us that fo (go(x)) = go(fo(x)), so condition (vi) of the theorem is immediately verified for this example. One interesting feature of this alternate line of reasoning is that it made almost no use of the fact that L is a lattice, only that (x, x') A (y, y') exists in any direct product ordering when x I y and y' x'. Thus the argument extends automatically to ordered sets with O and 1: every ordered set with O and 1 has the strict refinement property. As we remarked in the introduction ($5.1), unique factorization does not hold for al1 finite ordered sets. Therefore, we cannot completely remove our assumption, in this last result, of the existence of O and 1. Nevertheless, as J. Hashimoto proved in [1951] by a more sophisticated argument, refinement holds for every connected ordered set. (See below for a definition of connectedness.) In our treatment of the subject, Hashimoto's Theorem will follow readily from an important lemma proved by R. McKenzie in [1971]. As we shall see, McKenzie's Lemma permits the deduction of many interesting conclusions, including some that are significantly stronger than Hashimoto's Theorem. For instance, as an easy corollary of McKenzie's Lemma we have C. C. Chang's theorem that if A = (A, R) is any connected reflexive binary relational structure with at least one antisymmetric element u (i.e., u satisfying Vx[(R(x, u) & R(u, x)) + x = u]), then A has the refinement property (Corollary to Theorem 5.19 below). McKenzie's work followed a small but important group of articles by C. C. Chang, B. Jónsson, and A. Tarski during 1961-1967. These works pioneered the notions of refinement and strict refinement and developed the techniques of calculating with the maps f,and fd. (Our Theorem 5.17 was extracted from those works.) They also formed a foundation for the lemma of McKenzie, from which some of the results of Chang, Jónsson, and Tarski can easily be derived. McKenzie's Lemma will be stated and proved for a relational structure (Exercise 7 of 54.4) A = (A, R) with a single relation R, which is binary. For short, we will cal1 such a structure a binary (relational) structure, and we will abbreviate (a, b ) R~ by a R b. We say that (A, R) is reflexive iff it satisfies Vx[R(x, x)], and connected iff the conditions BUC=A,

BiiC=@,

R E B ~ U C ~

entail that B = A or C = A. To every binary relational structure (A, R) we ., 5 , and E,which are defined as follows: associate four binary relations I , I

Chapter 5

Unique Factorization

We now begin a sequence of seven lemmas, leading up to McKenzie7s Lemma, which refer to a binary relational structure A = (A, R). Recall, from our construction in 51.1 of a relation algebra consisting of al1 binary relations R on A, that Ru denotes the relation converse to R, which is given by the formula RU = ((Y,x): (x, Y) E R). LEMMA l. Let A = (A, R) be rejexive. A is connected iff, for,each a, b E A there exist wo, z, , - zn-, , wn-, E A such that a R u w oRz, R u w l R.-.Rz,-, Ruwn-, Rb. In other words, it is connected iff A2 = R1 U R2 U R3 U (R" O R) 0.- o (R" o R), with n factors (R" o R). is un equivalence relation on A such that, if a LEMMA 2. then a R b-a1Rb'.

-

-

e ,

-

where Rn =

a' and b = b',

Therefore, we may define a quotient structure A/ = (Al=, S) where (a/ = ,b/ r ) E S iff (a, b) E R. We cal1 A thin iff r is the equality relation on A. Equivalently, A is thin iff A r A/ . The theory of products for relational structures is similar in almost al1 details to the corresponding theory for algebras, so we will summarize what we need of it, with very little in the way of proof. For simplicity, we will state our definitions and lemmas only for the case of major interest, namely for binary structures. The general case can easily be guessed from this case. The product of binary structures A = (A, R) and B = (B,S), denoted A x B, is the binary structure ( A x B, T) where, by definition, (a, b) T(a', b') iff Ai of any family (A,: i E I ) of binary structures a R a' and b S b'. The product is defined analogously. A homomorphism f : A -+ B is a function f : A -+ B such that if a R b, then f(a) S f(b). If A = (A, R) is a binary structure and 8 is an equivalence relation on A, then the quotient structure A/0 = (Ale, R') is defined so that R' is the smallest relation rnaking the natural map A -+ A/8 a homomorphism. If A = (A, R) is a binary structure, then we may define a decomposition operation of A to be a homomorphism f : A2 -t A that obeys conditions (18) and (19) near the beginning of this section. A decomposition operation arises naturally on each product P x Q via f((po, qo),(p,, q,)) = (p,, q,), and, within isomorphism, al1 decomposition operations arise in this way. This fact is contained (and made precise) in the following lemma. As with algebras, if f is a decomposition operation of A, and u, v E A, then f,and f: are the selfmaps of A defined by f,(x) = f (x, v) and fd(y) = f (u, y).

ni,,

307

5.6 Sorne Refinernent Theorerns

LEMMA 3. If f is a decomposition operation of A = (A, R), then ker f, is independent of u E A and ker fi is independent of u E A. Moreover, the natural map #: A -,Alker fu x Alker fi is an isomorphism of A with Alker fu x Alker fi. Given any isomorphism #: A + B x C, one muy define a decomposition operation f on A via f (a, a') = #-'((p,(#(a)), p1 (#(a1)))),where p, and p , are the coordinate projections of B x C onto B and onto C. Moreouer, for this f and #, we have B z A/ker fu and C z Alker fi, provided that the binary relations of B and C are both nonempty. m Regarding the final proviso of Lemma 3, the reader may easily observe that if, say, the binary relation of C happens to be empty, then the binary relation of A will also be empty, regardless of what is the binary relation of B. Therefore, under these circumstances it will generally not be true that B z Alker f,. In view of Lemma 3, we will define a factor relation of A to be any equivalence relation ker f, for f any decomposition function and f, defined as above. The connection between factor relations and direct factorizations goes exactly as it did in the purely algebraic case (55.2).In particular, we have the following corollary to the last lemma. Let us first observe that the relation pi = O was defined (both in 554.4 and 5.2) purely in terms of the equivalence relations pi, with no real regard to the fact that in that context the Di were congruence relations. Therefore, we may use that concept and notation in the present context with no change.

n

If A = (A, R) is a binary relational structure with R # @, and then there exist factor relations B, of A such that B, S A/Oi for m each i and such that O = COROLLARY.

if A S

nGI a.

Now refinement and strict refinernent may 'be defined for binary structures A exactly as they were for algebras. Strict refinement implies refinement as it did before, and Theorem 5.17 continues to hold in this new context (after rewording (iii) and (iv)so as not to mention Con A). The next lemma states and reviews some elementary facts about a decomposition operation J: in succeeding lemmas, g is a decomposition operator as well.

LEMMA 4. i*

f (f,(x),fú(y)) = f (x,Y).

ii* fv(fw(x)) = f,(x). iii. fú( f,(x)) = f,( fi(x)) = fUf(v)= f,(u). In other words, for each u and u, f: commutes with f,, and f; o f, is a constant function. iv. If x R y and u R u, then f (x, u) R f (y, u). v. If w R y and x R f,(y), then f (x, w) R y. vi. If (A, R) is reflexive, and if a 5, b and a' 5 b', then f (a, a') 5 f (b, b'), and similarly for 5, 5 , and E . vii. If (A, R) is reflexive, and if f,(x) = f,(y) and fi(x) = fi(y), then x y.

,

-

308

Chapter 5 Unique Factorization

Proof. The first four are straightforward consequences of the defining conditions (18)and (19).(v)follows immediately from (iv) and the fact that f ( f,(y), y) = y, which holds by (ii). Assertion (vii) follows readily from (vi), which we now prove. To prove the conclusion of (vi), namely that f(a, a') 5 , f(b, b'), we will assume that w R f(a, a') and prove that w k f(b, b'). By reflexivity, we have a R a and a' R a'; therefore f (w, a) R f (f(a, a'), a) = a, and f (a', w) R f (a', f (a, a')) = a'. Since a 5, b and a' 5, b', we have f (w, a) R b and f (a', w) R b'. Therefore, w = f (f (w, a),f (a', w))R f(b, b'), as required.

Proof. By Lemma 4(v), we have f(x, w) R y and f(x, w) R z. Therefore f (x, w) R g(y, z). Again using x R f,(y), we have

and this obviously reduces to x R f,(g(y, 2)).

LEMMA 6. If (A, R ) is rejlexive and connected, and if x, y, z, v E A, then

Proof. This assertion is trivial if y = z. Otherwise, by connectedness and , y;-,, wn-, , y; = z E A such that Lemma 1, there exist y; = y, wó, y;, w;, wf R yf,

wf R y;,,

(i < n).

Now define wi = f(x, w,')for i < n, y, = f(z, yf)for 1 5 i < n, yo = y, and y, = z. By Lemma 4(v), it follows from w&R y and x R f,(y), that wo R y. It likewise follows from Lemma 4(v) and x R f,(z) that w,-, R y, = z. It is easy to check that w, R y, for (1 5 i < n) and that wi R y,+, for (O 5 i < n - 1). Finally, it is not hard to see (say by Lemma 4(iii)) that f (w,, X)= x for each i and that f (y,, f,(z)) = f,(yi) for 1 5 i < n. Therefore, from w, R y, and x R f,(z) we deduce x R f,(yi) for (1 5 i < n). (Moreover, this last relation was already known to hold for i = O and i = n.) In summary, what we now have is:

To complete the proof of this lemma, we will prove, by induction on n > O, that (21) implies the conclusion of (20), namely, that x R f,(g(y, 2)). The case n = 1 is covered by Lemma 5 (with w, taken for w). For the inductive step, we now assume that n > 1 in (21), and that the corresponding statement with n replaced by n - 1 implies x R f,(g(y, y,-,)). Two applications of this inductive assumption give us that Setting y = g(y, y,-,), T = g(yl, z), and w, = g(wo,w,-,), it is not difficult to check that y, Z and w0 satisfy (21) (with n = 1), so by induction we have x R f,(g(y, 5)).Now one easily checks that g(y, T ) = g(y, z). Hence x R f,(g(y, z)), E thereby completing the proof of the lemma.

309

5.6 Some Refinement Theorems

LEMMA 7. Suppose that ( A , R) is connected and reflexive, and let x, y, u, w E A. If f,(x) = f,(y), then f,(gw(x)) = f,(g,(y)).

-

Proof. We first recall that the relation is the intersection of four relations, namely 5, and 5 and their converse relations 5; and 5:. By the symmetry of the assumptions, it will be enough to deduce only that f,(g,(x)) 5, f,(g,(y)), since the corresponding fact for 5,