
GRADUATE STUDIES IN MATHEMATICS 193

A Tour of Representation Theory

Martin Lorenz


EDITORIAL COMMITTEE
Daniel S. Freed (Chair)
Bjorn Poonen
Gigliola Staffilani
Jeff A. Viaclovsky

2010 Mathematics Subject Classification. Primary 16Gxx, 16Txx, 17Bxx, 20Cxx, 20Gxx.

For additional information and updates on this book, visit www.ams.org/bookpages/gsm-193

Library of Congress Cataloging-in-Publication Data

Names: Lorenz, Martin, 1951- author.
Title: A tour of representation theory / Martin Lorenz.
Description: Providence, Rhode Island : American Mathematical Society, [2018] | Series: Graduate studies in mathematics ; volume 193 | Includes bibliographical references and indexes.
Identifiers: LCCN 2018016461 | ISBN 9781470436803 (alk. paper)
Subjects: LCSH: Representations of groups. | Representations of algebras. | Representations of Lie algebras. | Vector spaces. | Categories (Mathematics)
Classification: LCC QA176 .L67 2018 | DDC 515/.7223-dc23
LC record available at https://lccn.loc.gov/2018016461

Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy select pages for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given. Republication, systematic copying, or multiple reproduction of any material in this publication is permitted only under license from the American Mathematical Society. Requests for permission to reuse portions of AMS publication content are handled by the Copyright Clearance Center. For more information, please visit www.ams.org/publications/pubpermissions. Send requests for translation rights and licensed reprints to [email protected].

© 2018 by the American Mathematical Society. All rights reserved.
Printed in the United States of America.
The paper used in this book is acid-free and falls within the guidelines established to ensure permanence and durability.
Visit the AMS home page at https://www.ams.org/

For Maria

Contents

Preface xi
Conventions xvii

Part I. Algebras

Chapter 1. Representations of Algebras 3
 1.1. Algebras 3
 1.2. Representations 24
 1.3. Primitive Ideals 41
 1.4. Semisimplicity 50
 1.5. Characters 65

Chapter 2. Further Topics on Algebras 79
 2.1. Projectives 79
 2.2. Frobenius and Symmetric Algebras 96

Part II. Groups

Chapter 3. Groups and Group Algebras 113
 3.1. Generalities 113
 3.2. First Examples 124
 3.3. More Structure 131
 3.4. Semisimple Group Algebras 143
 3.5. Further Examples 150
 3.6. Some Classical Theorems 159
 3.7. Characters, Symmetric Polynomials, and Invariant Theory 170
 3.8. Decomposing Tensor Powers 179

Chapter 4. Symmetric Groups 187
 4.1. Gelfand-Zetlin Algebras 189
 4.2. The Branching Graph 192
 4.3. The Young Graph 197
 4.4. Proof of the Graph Isomorphism Theorem 205
 4.5. The Irreducible Representations 217
 4.6. The Murnaghan-Nakayama Rule 222
 4.7. Schur-Weyl Duality 235

Part III. Lie Algebras

Chapter 5. Lie Algebras and Enveloping Algebras 245
 5.1. Lie Algebra Basics 246
 5.2. Types of Lie Algebras 253
 5.3. Three Theorems about Linear Lie Algebras 257
 5.4. Enveloping Algebras 266
 5.5. Generalities on Representations of Lie Algebras 278
 5.6. The Nullstellensatz for Enveloping Algebras 287
 5.7. Representations of sl2 300

Chapter 6. Semisimple Lie Algebras 315
 6.1. Characterizations of Semisimplicity 316
 6.2. Complete Reducibility 320
 6.3. Cartan Subalgebras and the Root Space Decomposition 325
 6.4. The Classical Lie Algebras 334

Chapter 7. Root Systems 341
 7.1. Abstract Root Systems 342
 7.2. Bases of a Root System 349
 7.3. Classification 356
 7.4. Lattices Associated to a Root System 361

Chapter 8. Representations of Semisimple Lie Algebras 373
 8.1. Reminders 374
 8.2. Finite-Dimensional Representations 377
 8.3. Highest Weight Representations 379
 8.4. Finite-Dimensional Irreducible Representations 385
 8.5. The Representation Ring 390
 8.6. The Center of the Enveloping Algebra 393
 8.7. Weyl's Character Formula 408
 8.8. Schur Functors and Representations of sl(V) 418

Part IV. Hopf Algebras

Chapter 9. Coalgebras, Bialgebras, and Hopf Algebras 427
 9.1. Coalgebras 427
 9.2. Comodules 441
 9.3. Bialgebras and Hopf Algebras 447

Chapter 10. Representations and Actions 465
 10.1. Representations of Hopf Algebras 466
 10.2. First Applications 476
 10.3. The Representation Ring of a Hopf Algebra 485
 10.4. Actions and Coactions of Hopf Algebras on Algebras 492

Chapter 11. Affine Algebraic Groups 503
 11.1. Affine Group Schemes 503
 11.2. Affine Algebraic Groups 508
 11.3. Representations and Actions 512
 11.4. Linearity 515
 11.5. Irreducibility and Connectedness 520
 11.6. The Lie Algebra of an Affine Algebraic Group 526
 11.7. Algebraic Group Actions on Prime Spectra 530

Chapter 12. Finite-Dimensional Hopf Algebras 541
 12.1. Frobenius Structure 541
 12.2. The Antipode 549
 12.3. Semisimplicity 552
 12.4. Divisibility Theorems 559
 12.5. Frobenius-Schur Indicators 567

Appendices

Appendix A. The Language of Categories and Functors 575
 A.1. Categories 575
 A.2. Functors 578
 A.3. Naturality 579
 A.4. Adjointness 583

Appendix B. Background from Linear Algebra 587
 B.1. Tensor Products 587
 B.2. Hom-⊗ Relations 593
 B.3. Vector Spaces 594

Appendix C. Some Commutative Algebra 599
 C.1. The Nullstellensatz 599
 C.2. The Generic Flatness Lemma 601
 C.3. The Zariski Topology on a Vector Space 602

Appendix D. The Diamond Lemma 605
 D.1. The Goal 605
 D.2. The Method 606
 D.3. First Applications 608
 D.4. A Simplification 611
 D.5. The Poincaré-Birkhoff-Witt Theorem 612

Appendix E. The Symmetric Ring of Quotients 615
 E.1. Definition and Basic Properties 615
 E.2. The Extended Center 617
 E.3. Comparison with Other Rings of Quotients 619

Bibliography 623
Subject Index 633
Index of Names 645
Notation 649

Preface

In brief, the objective of representation theory is to investigate the different ways in which a given algebraic object, such as an algebra, a group, or a Lie algebra, can act on a vector space. The benefits of such an action are at least twofold: the structure of the acting object gives rise to symmetries of the vector space on which it acts; and, in the other direction, the highly developed machinery of linear algebra can be brought to bear on the acting object itself to help uncover some of its hidden properties. Besides being a subject of great intrinsic beauty, representation theory enjoys the additional benefit of having applications in myriad contexts other than algebra, ranging from number theory, geometry, and combinatorics to probability and statistics [58], general physics [200], quantum field theory [212], the study of molecules in chemistry [49], and, more recently, machine learning [127].

This book has evolved from my lecture notes for a two-semester graduate course titled Representation Theory that I gave at Temple University during the academic years 2012/13 and 2015/16. Some traces of the informality of my original notes and the style of my lectures have remained intact: the text makes rather copious use of pictures and expansively displayed formulae; definitions are not numbered and neither are certain key results, such as Schur's Lemma or Wedderburn's Structure Theorem, which are referred to by name rather than number throughout the book. However, due to the restrictions imposed by having to set forth the material on the page in a linear fashion, the general format of this book does not in fact duplicate my actual lectures and it only locally reflects their content. I will comment more on this below.
The title A Tour of Representation Theory (ToR) is meant to convey the panoramic view of the subject that I have aimed for.¹

¹The choice of title is also a nod to the Tour de France, and "Tor" in German is "gate" as well as "goal" (scored) and "fool".

Rather than offering an


in-depth treatment of one particular area, ToR gives an introduction to three distinct flavors of representation theory (representations of groups, Lie algebras, and Hopf algebras), and all three are presented as incarnations of algebra representations. The book loops repeatedly through these topics, emphasizing similarities and connections. Group representations, in particular, are revisited frequently after their initial treatment in Part II. For example, Schur-Weyl duality is first discussed in Section 4.7 and later again in Section 8.8; Frobenius-Schur indicators are introduced in §3.6.3 in connection with the Brauer-Fowler Theorem and they are treated in their proper generality in Section 12.5; and Chapter 11, on affine algebraic groups, brings together groups, Lie algebras, and Hopf algebras. This mode of exposition owes much to the "holistic" viewpoint of the monograph [72] by Etingof et al., although ToR forgoes the delightful historical intermezzos that punctuate [72] and it omits quivers in favor of Hopf algebras. Our tour does not venture very far into any of the areas it passes through, but I hope that ToR will engender in some readers the desire to pursue the subject and that it will provide a platform for further explorations.

Overview of the Contents. The topics covered in ToR and the methods employed are resolutely algebraic. Lie groups, C*-algebras, and other areas of representation theory requiring analysis are not covered. On the other hand, in keeping with the widely acknowledged truth that algebraic representation theory benefits from a geometric perspective, the discourse involves a modicum of algebraic geometry on occasion and I have also tried my hand at depicting various noncommutative geometric spaces throughout the book. No prior knowledge of algebraic geometry is assumed, however.

Representations of algebras form the unifying thread running through ToR. Therefore, Part I is entirely written in the setting of associative algebras.
Chapter 1 develops the basic themes of representation theory: irreducibility, complete reducibility, spaces of primitive ideals, characters, . . . ; the chapter establishes notation to be used throughout the remainder of the book; and it furnishes the fundamental general results of representation theory, such as Wedderburn’s Structure Theorem. Chapter 2 covers topics that are somewhat more peripheral to the main thrust of ToR: projective modules (Section 2.1) and Frobenius algebras (Section 2.2). Readers whose main interest is in groups or Lie algebras may skip this chapter at a ﬁrst reading. However, Section 2.2 deploys some tools that are indispensable for the discussion of ﬁnite-dimensional Hopf algebras in Chapter 12. Parts II and III are respectively devoted to representations of groups and Lie algebras. To some degree, these two parts can be tackled in any order. However, I have made a deliberate eﬀort at presenting the material on group representations in a palatable manner, oﬀering it as an entryway to representation theory, while the part on Lie algebras is written in a slightly terser style demanding greater mathematical maturity from the reader. Most of Part II deals with ﬁnite-dimensional representations of ﬁnite groups, usually over a base ﬁeld whose characteristic does


not divide the order of the group in question. Chapter 3 covers standard territory, with the possible exception of some brief excursions into classical invariant theory (§§3.7.4, 3.8.4). Chapter 4, however, presents the representation theory of the symmetric groups in characteristic 0 via an elegant novel approach devised by Okounkov and Vershik rather than following the route taken by the originators of the theory, Frobenius, Schur, and Young. Much of this chapter elaborates on Chapter 2 of Kleshchev's monograph [125], providing full details and some additional background.

My presentation of the material on Lie algebras and their representations in Part III owes a large debt to the classics by Dixmier [63] and Humphreys [105] and also to Fulton and Harris [83] as well as the more recent monograph [69] by Erdmann and Wildon. The notation and terminology in this part are largely adopted from [105] and its Afterword (1994). Departing from tradition, the discussion of the Nullstellensatz and the Dixmier-Mœglin equivalence for enveloping algebras of Lie algebras in Section 5.6 relies on the symmetric ring of quotients rather than the classical ring of quotients; this minimizes the requisite background material from noncommutative ring theory, which is fully provided in Appendix E.

Hopf algebra structures are another recurring theme throughout the book: they are first introduced for the special case of group algebras in Section 3.3; an analogous discussion for enveloping algebras of Lie algebras follows in §5.4.4; and Hopf algebras are finally tackled in full generality in Part IV. While this part of ToR is relatively dry in comparison with the rest of the book, the reader familiar with the earlier special cases will be amply prepared and hopefully willing to face up to what may initially seem like a profusion of technicalities.
The effort is worthwhile: many facets of the representation theory of groups or Lie algebras, especially those dealing with tensor products of representations, take their most natural form when viewed through the lens of Hopf algebras and, of all parts of ToR, it is Part IV that leads closest to the frontier of current research. On the other hand, I believe that students planning to embark on the investigation of Hopf algebras will profit from a grounding in the more classical representation theories of groups and Lie algebras, which is what ToR aims to provide.

Prerequisites. The various parts of ToR differ rather significantly with regard to their scope and difficulty. However, much of the book was written for a readership having nothing but a first-year graduate algebra course under their belts: the basics of groups, rings, modules, fields, and Galois theory, but not necessarily anything beyond that level. Thus, I had no qualms assuming a solid working knowledge of linear algebra; after all, representation theory is essentially linear algebra with (quite a lot of) extra structure. Appendix B summarizes some formal points of linear algebra, notably the properties of tensor products. The prospective reader should also be well acquainted with elementary group theory: the isomorphism theorems, Sylow's Theorem, and abelian, nilpotent, and


solvable groups. The lead-in to group representations is rather swift; group algebras and group representations are introduced in quick succession and group representation theory is developed in detail from there. On the other hand, no prior knowledge of Lie algebras is expected; the rudiments of Lie algebras are presented in full, albeit at a pace that assumes some familiarity with parallel group-theoretic lines of reasoning.

While no serious use of category theory is made in this book, I have frequently availed myself of the convenient language and unified way of looking at things that categories afford. When introducing new algebraic objects, such as group algebras or enveloping algebras of Lie algebras, I have emphasized their "functorial" properties; this highlights some fundamental similarities of the roles these objects play in representation theory that may otherwise not be apparent. The main players in ToR are the category Vectk of vector spaces over a field k and the categories Repfin A of all finite-dimensional representations of various k-algebras A. Readers having had no prior exposure to categories and functors may wish to peruse Appendix A before delving into the main body of the text.

Using this Book. ToR is intended as a textbook for a graduate course on representation theory, which could immediately follow the standard abstract algebra course, and I hope that the book will also be useful for subsequent reading courses and for readers wishing to learn more about the subject by self-study. Indeed, the more advanced material included in ToR places higher demands on its readers than would probably be adequate for an introductory course on representation theory and it is unrealistic to aim for full coverage of the book in a single course, even if it spans two semesters. Thus, a careful selection of topics has to be made by the instructor.
When teaching Abstract Algebra over the years, I found that ﬁnite groups have tended to be quite popular among my students—starting from minimal prerequisites, one quickly arrives at results of satisfying depth and usefulness. Therefore, I usually start the follow-up course Representation Theory by diving right into representations of groups (Part II), covering all of Chapter 3 and some of Chapter 4 in the ﬁrst semester. Along the way, I add just enough material about algebras from Chapter 1 to explain the general underpinnings, often relegating proofs to reading assignments. In the second semester, I turn to representations of Lie algebras and try to cover as much of Part III as possible. Section 5.6 is generally only presented in a brief “outlook” format and Sections 8.6–8.8 had to be left uncovered so far for lack of time. Instead, in one or two lectures at the end of the second semester of Representation Theory or sometimes in a mini-course consisting of four or ﬁve lectures in our Algebra Seminar, I try to give the briefest of glimpses into the theory of Hopf algebras and their representations (Part IV). Alternatively, one could conceivably begin with a quick pass through the representation-theoretic fundamentals of algebras, groups, Lie algebras, and Hopf algebras before spiraling back to cover each or some of these topics in greater


depth. Teaching a one-semester course will most likely entail a focus on just one of Parts II, III, or IV depending on the instructor's predilections and the students' background. In order to enable the instructor or reader to pick and choose topics from various parts of the book, I have included numerous cross references and frequent reminders throughout the text. The exercises vary greatly in difficulty and purpose: some merely serve to unburden various proofs of unsightly routine verifications, while others present substantial results that are not proved but occasionally alluded to in the text. I have written out solutions for the majority of exercises and I'd be happy to make them available to instructors upon request.

Acknowledgments. My original exposure to representation theory occurred in lectures by my advisor Gerhard Michler, who introduced me to the group-theoretic side of the subject, and by Rudolf Rentschler, who did the same with Lie algebras. My heartfelt thanks to both of them. I also owe an enormous debt to Don Passman, Lance Small, and Bob Guralnick who shared their mathematical insights with me and have been good friends over many years. While working on this book, I was supported by grants from the National Security Agency and from Temple University.

For corrections, suggestions, encouragement, and other contributions during the writing process, I express my gratitude to Zachary Cline, Vasily Dolgushev, Karin Erdmann, Kenneth Goodearl, Darij Grinberg, Istvan Heckenberger, Birge Huisgen-Zimmermann, James Humphreys, Yohan Kang, Alexander Kleshchev, Donald Passman, Brian Rider, Louis Rowen, Hans-Jürgen Schneider, Paul Smith, Philipp Steger, Xingting Wang, and Sarah Witherspoon. I should also like to thank the publishing staff of the American Mathematical Society, especially Barbara Beeton for expert LaTeX support, Sergei Gelfand for seeing this project through to completion, and Meaghan Healy and Mary Letourneau for careful copyediting.
My greatest debt is to my family: my children Aidan, Dalia, Esther, and Gabriel and my wife Maria, to whom this book is dedicated.

Philadelphia, June 2018

Martin Lorenz

Conventions

Functions and actions will generally be written on the left. In particular, unless otherwise specified, modules are left modules. Rings need not be commutative. Every ring R is understood to have an identity element, denoted by 1R or simply 1, and ring homomorphisms f : R → S are assumed to satisfy f(1R) = 1S. We work over a commutative base field k. Any specific assumptions on k will be explicitly stated, usually at the beginning of a section or chapter. Here is some general notation frequently occurring in the text; a more comprehensive list appears at the end of the book.

Z+, R+, . . . : nonnegative integers, reals, . . .
N : natural numbers, {1, 2, . . . }
[n] : the set {1, 2, . . . , n} for n ∈ N
⊔ : disjoint union of sets
#X : number of elements if X is a finite set; otherwise ∞
X^I : set of functions f : I → X
X^(I) : the subset of X^I consisting of all finitely supported functions: f(i) = 0 for all but finitely many i ∈ I (for an abelian group X)
G ↷ X : short for G × X → X, a left action of the group G on X
G\X : the set of orbits for an action G ↷ X or, alternatively, a transversal for these orbits
Vectk : category of k-vector spaces
V* : dual space of V ∈ Vectk, that is, Homk(V, k)
⟨· , ·⟩ : evaluation pairing V* × V → k
V^⊕I : direct sum of copies of V labeled by I
V^⊗n : nth tensor power of V
GL(V) : group of invertible linear endomorphisms of V

Part I

Algebras

Chapter 1

Representations of Algebras

This chapter develops the basic themes of representation theory in the setting of algebras. We establish notation to be used throughout the remainder of the book and prove some fundamental results of representation theory such as Wedderburn’s Structure Theorem. The focus will be on irreducible and completely reducible representations. The reader is referred to Appendices A and B for a brief introduction to the language of categories and for the requisite background material from linear algebra. Throughout this chapter, k denotes an arbitrary ﬁeld. The category of k-vector spaces and k-linear maps is denoted by Vectk and ⊗ will stand for ⊗k .

1.1. Algebras

In the language of rings, a k-algebra can be defined as a ring A (with 1) together with a given ring homomorphism k → A that has image in the center Z A of A. Below, we recast this definition in an equivalent form, starting over from scratch in the setting of k-vector spaces. The basics laid out in the following apply to any commutative ring k, working in kMod (with the conventions of §B.1.3) rather than in Vectk. However, we will consider this more general setting only very occasionally, and hence k is understood to be a field.

1.1.1. The Category of k-Algebras

A k-algebra can equivalently be defined as a vector space A ∈ Vectk that is equipped with two k-linear maps, the multiplication m = mA : A ⊗ A → A and the unit u = uA : k → A, such that the following diagrams commute (the associativity and unit diagrams, recorded here by the identities they express):

(1.1)   m ◦ (m ⊗ Id) = m ◦ (Id ⊗ m) : A ⊗ A ⊗ A → A
        m ◦ (u ⊗ Id) : k ⊗ A → A  and  m ◦ (Id ⊗ u) : A ⊗ k → A  are the canonical isomorphisms  k ⊗ A ∼ A  and  A ⊗ k ∼ A

Here, Id = IdA denotes the identity map of A. The isomorphism k ⊗ A ∼ A in (1.1) is the standard one, given by the scalar multiplication, λ ⊗ a ↦ λa for λ ∈ k and a ∈ A; similarly for A ⊗ k ∼ A. Multiplication will generally be written as juxtaposition: m(a ⊗ b) = ab for a, b ∈ A. Thus, ab depends k-linearly on both a and b. The algebra A is said to be commutative if ab = ba for all a, b ∈ A or, equivalently, m = m ◦ τ, where τ ∈ Endk(A ⊗ A) is given by τ(a ⊗ b) = b ⊗ a. The first diagram in (1.1) amounts to the associative law: (ab)c = a(bc) for all a, b, c ∈ A. The second diagram expresses the unit laws: u(1k) a = a = a u(1k) for all a ∈ A; so A has the identity element u(1k) = 1A. If u = 0, then it follows that A = {0}; otherwise, the unit map u is injective and it is often notationally suppressed, viewing it as an inclusion k ⊆ A. Then 1A = 1k, the scalar operation of k on A becomes multiplication in A, and k ⊆ Z A.

Given k-algebras A and B, a homomorphism from A to B is a k-linear map f : A → B that respects multiplications and units in the sense that the following diagrams commute:

(1.2)   f ◦ mA = mB ◦ (f ⊗ f) : A ⊗ A → B   and   f ◦ uA = uB : k → B

These diagrams are equivalent to the equations f(aa′) = f(a) f(a′) for all a, a′ ∈ A and f(1A) = 1B. The category whose objects are the k-algebras and whose morphisms are the homomorphisms between k-algebras will be denoted by Algk.
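As a concrete illustration of these two equations (not taken from the text): evaluation of polynomials at a fixed scalar t is an algebra map k[x] → k, and both conditions can be checked numerically. A minimal Python sketch; the helper names polymul and evaluate are ours, not the book's:

```python
# Illustration (not from the text): the evaluation map k[x] -> k at a fixed
# scalar t is an algebra homomorphism: it is k-linear, respects products,
# and sends 1 to 1. We model k[x] by coefficient lists (index = degree).

def polymul(p, q):
    """Product of polynomials given as coefficient lists."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def evaluate(p, t):
    """The homomorphism f : k[x] -> k, f(p) = p(t), via Horner's rule."""
    acc = 0.0
    for c in reversed(p):
        acc = acc * t + c
    return acc

p = [1.0, 2.0, 3.0]    # 1 + 2x + 3x^2
q = [5.0, 0.0, -1.0]   # 5 - x^2
t = 2.0

# f(pq) = f(p) f(q): the first condition
assert evaluate(polymul(p, q), t) == evaluate(p, t) * evaluate(q, t)
# f(1) = 1: the second condition
assert evaluate([1.0], t) == 1.0
```

The same check fails for a map that is merely linear, e.g. p ↦ p(1) + p(2), which is one quick way to see that the multiplicative condition is a genuine restriction.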

Thus, HomAlgk(A, B) is the set of all k-algebra homomorphisms f : A → B. Algebra homomorphisms are often simply called algebra maps. The variants isomorphism and monomorphism have the same meaning as in Vectk: algebra homomorphisms that are bijective and injective, respectively; similarly for automorphism and endomorphism.¹

¹Every surjective algebra map is an epimorphism in Algk in the categorical sense, but the converse does not hold [143, Section I.5].

A subalgebra of a given k-algebra A is a k-subspace B of A that is


a k-algebra in its own right in such a way that the inclusion B ↪ A is an algebra map.

Tensor Products of Algebras. The tensor product of two algebras A, B ∈ Algk is obtained by endowing A ⊗ B ∈ Vectk with the multiplication

(1.3)   (a ⊗ b)(a′ ⊗ b′) := aa′ ⊗ bb′

for a, a′ ∈ A and b, b′ ∈ B. It is easy to check that this multiplication is well-defined. Taking uA ⊗ uB : k ∼ k ⊗ k → A ⊗ B as unit map, the vector space A ⊗ B turns into a k-algebra. Observe that the switch map a ⊗ b ↦ b ⊗ a is an isomorphism A ⊗ B ∼ B ⊗ A in Algk. Exercise 1.1.11 spells out some functorial properties of this construction and explores some examples.

Extending the Base Field. A k-algebra that is a field is also called a k-field. For any A ∈ Algk and any k-field K, we may regard K ⊗ A as a k-algebra as in the preceding paragraph and also as a K-vector space as in §B.3.4. The multiplication (K ⊗ A) ⊗ (K ⊗ A) → K ⊗ A in (1.3), given by (λ ⊗ a)(λ′ ⊗ a′) = λλ′ ⊗ aa′ for λ, λ′ ∈ K and a, a′ ∈ A, passes down to a K-linear map

(K ⊗ A) ⊗K (K ⊗ A) ∼ A ⊗ (K ⊗K K) ⊗ A ∼ K ⊗ (A ⊗ A) −→ K ⊗ A,

where the last map is K ⊗ mA. With this multiplication and with K ⊗ uA : K ∼ K ⊗ k → K ⊗ A as unit map, K ⊗ A becomes a K-algebra. The above construction is functorial: any map f : A → B in Algk gives rise to the map K ⊗ f : K ⊗ A → K ⊗ B in AlgK. Thus, the field extension functor K ⊗ · : Vectk → VectK of §B.3.4 restricts to a functor K ⊗ · : Algk −→ AlgK.

1.1.2. Some Important Algebras

We now describe a selection of algebras that will play prominent roles later on in this book, taking the opportunity to mention some standard concepts from the theory of algebras and from category theory along the way.

Endomorphism Algebras. The archetypal algebra from the viewpoint of representation theory is the algebra Endk(V) of all k-linear endomorphisms of a vector space V ∈ Vectk. Multiplication in Endk(V) is given by composition of endomorphisms and the unit map sends each λ ∈ k to the scalar transformation λ IdV. If dimk V = n < ∞, then any choice of k-basis for V gives rise to a k-linear isomorphism V ∼ k^⊕n and to an isomorphism of k-algebras Endk(V) ∼ Matn(k), the n × n matrix algebra over k.
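The multiplication (1.3) has a familiar concrete model, not spelled out in the text: for matrix algebras, a ⊗ b can be realized as the Kronecker product of matrices, and the defining formula (a ⊗ b)(a′ ⊗ b′) = aa′ ⊗ bb′ becomes an identity of block matrices. A small self-contained sketch; matmul and kron are hypothetical helper names:

```python
# Illustration (not from the text): for matrix algebras, the tensor product
# of (1.3) can be realized via the Kronecker product, which satisfies
# kron(a, b) kron(a', b') = kron(aa', bb').

def matmul(x, y):
    """Product of matrices given as lists of rows."""
    return [[sum(x[i][t] * y[t][j] for t in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def kron(a, b):
    """Kronecker product: the block matrix whose (i, j) block is a[i][j] * b."""
    m, n = len(b), len(b[0])
    return [[a[i // m][j // n] * b[i % m][j % n]
             for j in range(len(a[0]) * n)] for i in range(len(a) * m)]

a  = [[1, 2], [3, 4]];  a2 = [[0, 1], [1, 0]]
b  = [[2, 0], [1, 1]];  b2 = [[1, 1], [0, 1]]

# (a (x) b)(a' (x) b') = aa' (x) bb': the defining formula (1.3)
assert matmul(kron(a, b), kron(a2, b2)) == kron(matmul(a, a2), matmul(b, b2))
```

This reflects the algebra isomorphism Matm(k) ⊗ Matn(k) ∼ Matmn(k), a fact revisited once Wedderburn's Structure Theorem is available.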
The matrix algebra Matn(k) and the endomorphism algebra Endk(V) of a finite-dimensional vector space V are examples of finite-dimensional algebras, that is,


algebras that are finite dimensional over the base field k. Such algebras are also occasionally simply called "finite".

Free and Tensor Algebras. We will also on occasion work with the free k-algebra that is generated by a given set X; this algebra will be denoted by k⟨X⟩. One can think of k⟨X⟩ as a noncommutative polynomial algebra over k with the elements of X as noncommuting variables. Assuming X to be indexed, say X = (xi)i∈I, a k-basis of k⟨X⟩ is given by the collection of all finite products xi1 xi2 · · · xik, where (i1, i2, . . . , ik) is a finite (possibly empty) sequence of indices from I. These products are also called monomials or words in the alphabet X; the order of the symbols xij in words does matter. Multiplication in k⟨X⟩ is defined by concatenation of words. The empty word is the identity element 1 ∈ k⟨X⟩.

Formally, k⟨X⟩ can be constructed as the tensor algebra T(kX), where kX is the k-vector space of all formal k-linear combinations of the elements of X (Example A.5). Here, the tensor algebra of an arbitrary vector space V ∈ Vectk is defined as the direct sum

TV := ⊕_{k ∈ Z+} V^⊗k ,

where V ⊗k is the k th tensor power of V as in (B.10); so (B.9) gives dimk V ⊗k = (dimk V ) k . The unit map of TV is given by the canonical embedding k = V ⊗0 → TV , and multiplication in TV comes from the associativity isomorphisms (B.11) for tensor powers: (v1 ⊗ · · · ⊗ vk )(v1 ⊗ · · · ⊗ vl ) = v1 ⊗ · · · ⊗ vk ⊗ v1 ⊗ · · · ⊗ vl for vi, v j ∈ V . This multiplication is distributively extended to deﬁne products of arbitrary elements of TV . In this way, TV becomes a k-algebra. Note that the subspace V = V ⊗1 ⊆ TV generates the algebra TV in the sense that the only ksubalgebra of TV containing V is TV itself. Equivalently, every element of TV is a k-linear combination of ﬁnite products with factors from V . In fact, any generating set of the vector space V will serve as a set of generators for the algebra TV . The importance of tensor algebras stems from their functorial properties, which we shall now explain in some detail. Associating to a given k-vector space V the k-algebra TV , we obtain a functor T : Vectk −→ Algk .
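As a quick computational illustration (a sketch, not from the book; the function name is ours): the words of length k over an alphabet X form a k-basis of the degree-k component of k⟨X⟩ ≅ T(kX), so that component has dimension |X|^k, in line with dim_k V^{⊗k} = (dim_k V)^k.

```python
# Enumerate the monomial basis of the degree-k component of the free
# algebra k<X>: all words of length k in the alphabet X, where the
# order of letters matters (noncommuting variables).
from itertools import product

def words(alphabet, k):
    """All words of length k over the alphabet; a basis of degree k."""
    return [''.join(w) for w in product(alphabet, repeat=k)]

# Degree-2 basis of k<x, y>; note xy and yx are distinct basis words.
assert words(['x', 'y'], 2) == ['xx', 'xy', 'yx', 'yy']
```

The empty word, `words(alphabet, 0) == ['']`, is the identity element 1.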


As for morphisms, let f ∈ Hom_k(V, W) be a homomorphism of k-vector spaces. Then we have morphisms f^{⊗k} ∈ Hom_k(V^{⊗k}, W^{⊗k}) for each k ∈ Z_+ as in §B.1.3: f^{⊗k}(v_1 ⊗ ··· ⊗ v_k) = f(v_1) ⊗ ··· ⊗ f(v_k). The k-linear map

    Tf := ⊕_{k∈Z_+} f^{⊗k} : TV = ⊕_{k∈Z_+} V^{⊗k} −→ ⊕_{k∈Z_+} W^{⊗k} = TW

is easily seen to be a k-algebra map, and it is equally straightforward to check that T satisfies all requirements of a functor.

The property of the tensor algebra that is expressed in the following proposition is sometimes referred to as the universal property of the tensor algebra; it determines the tensor algebra up to isomorphism (Exercise 1.1.1).

Proposition 1.1. For V ∈ Vect_k and A ∈ Alg_k, there is the following natural bijection of sets:

    Hom_{Alg_k}(TV, A) ≅ Hom_k(V, A_{Vect_k}),    f ↦ f|_V .

Here, f|_V denotes the restriction of f to V = V^{⊗1} ⊆ TV. The notation A_{Vect_k} indicates that the algebra A is viewed merely as a k-vector space, with all other algebra structure being ignored. We use the symbol ≅ for an isomorphism in any category (Section A.1); in Sets, this is a bijection. The bijection in Proposition 1.1 behaves well with respect to varying the input data V and A; this is what "naturality" of the bijection is meant to convey. Technically, the functor T : Vect_k → Alg_k and the forgetful functor ·_{Vect_k} : Alg_k → Vect_k are a pair of adjoint functors. The reader wishing to see the specifics spelled out is referred to Section A.4. We also mention that any two left adjoint functors of a given functor are naturally isomorphic [143, p. 85]; see also Exercise 1.1.1.

Proof of Proposition 1.1. The map in the proposition is injective, because V generates the algebra TV. For surjectivity, let φ ∈ Hom_k(V, A_{Vect_k}) be given. Then the map V^k → A, (v_1, v_2, ..., v_k) ↦ φ(v_1)φ(v_2)···φ(v_k), is k-multilinear, and hence it gives rise to a unique k-linear map φ_k : V^{⊗k} → A, v_1 ⊗ ··· ⊗ v_k ↦ φ(v_1)···φ(v_k), by (B.13). The maps φ_k yield a unique k-linear map f : TV → A such that f|_{V^{⊗k}} = φ_k for all k. In particular, f|_V = φ as needed, and it is also immediate that f is in fact a k-algebra map. This establishes surjectivity. □

Grading. Tensor algebras are examples of graded algebras, that is, algebras that are equipped with a meaningful notion of "degree" for their nonzero elements. In


detail, a k-algebra A is said to be graded if

    A = ⊕_{k∈Z_+} A^k

for k-subspaces A^k ⊆ A such that A^k A^{k'} ⊆ A^{k+k'} for all k, k' ∈ Z_+. More precisely, such algebras are called Z_+-graded; gradings by monoids other than (Z_+, +) are also often considered. The nonzero elements of A^k are called homogeneous of degree k.² An algebra map f : A → B between graded algebras A and B is called a homomorphism of graded algebras if f respects the gradings in the sense that f(A^k) ⊆ B^k for all degrees k. All this applies to the tensor algebra TV, with V^{⊗k} being the component of degree k. The algebra maps Tf : TV → TW constructed above are in fact homomorphisms of graded algebras. Numerous algebras that we shall encounter in later chapters carry a natural grading. See Exercise 1.1.12 for more background on gradings.

Returning to the case where V = kX is the vector space with basis X = (x_i)_{i∈I}, the k-th tensor power (kX)^{⊗k} has a basis given by the tensors x_{i_1} ⊗ x_{i_2} ⊗ ··· ⊗ x_{i_k} for all sequences of indices (i_1, i_2, ..., i_k) of length k. Sending the above k-tensor to the corresponding word x_{i_1} x_{i_2} ··· x_{i_k}, we obtain an isomorphism of T(kX) with the free algebra k⟨X⟩. The grading of T(kX) by the tensor powers (kX)^{⊗k} makes k⟨X⟩ a graded algebra as well: the homogeneous component of degree k is the k-subspace of k⟨X⟩ that is spanned by the words of length k. This grading is often referred to as the grading by "total degree".

Proposition 1.1 in conjunction with the (natural) bijection Hom_k(kX, A_{Vect_k}) ≅ Hom_{Sets}(X, A_{Sets}) from (A.4) gives the following natural bijection, for any k-algebra A:

(1.4)    Hom_{Alg_k}(k⟨X⟩, A) ≅ Hom_{Sets}(X, A_{Sets}),    f ↦ f|_X .

Thus, an algebra map f : k⟨X⟩ → A is determined by the values f(x) ∈ A for the generators x ∈ X, and these values can be freely assigned in order to define f. If X = {x_1, x_2, ..., x_n} is finite, then we will also write k⟨x_1, x_2, ..., x_n⟩ for k⟨X⟩. In

²It will be clear from the context whether A^k denotes the k-th homogeneous component or the k-fold cartesian product A × ··· × A of A.


this case, (1.4) becomes:

(1.5)    Hom_{Alg_k}(k⟨x_1, x_2, ..., x_n⟩, A) ≅ A^n,    f ↦ (f(x_i))_i .

Algebras having a finite set of generators are called affine. They are exactly the homomorphic images of free algebras k⟨X⟩ generated by a finite set X or, equivalently, the homomorphic images of tensor algebras TV with V finite dimensional.

Polynomial and Symmetric Algebras

Our next example is the familiar commutative polynomial algebra k[x_1, x_2, ..., x_n], with unit map sending k to the constant polynomials. Formally, the polynomial algebra can be defined by

    k[x_1, x_2, ..., x_n] := k⟨x_1, x_2, ..., x_n⟩/(x_i x_j − x_j x_i | 1 ≤ i < j ≤ n),

where (...) denotes the ideal that is generated by the indicated elements. Since these elements are all homogeneous (of degree 2), the total degree grading of the free algebra k⟨x_1, x_2, ..., x_n⟩ passes down to a grading of k[x_1, x_2, ..., x_n] (Exercise 1.1.12); the grading thus obtained is the usual total degree grading of the polynomial algebra.

The universal property (1.5) of the free algebra yields a corresponding universal property for k[x_1, x_2, ..., x_n]. Indeed, for any k-algebra A, the set Hom_{Alg_k}(k[x_1, x_2, ..., x_n], A) can be identified with the set of all algebra maps f : k⟨x_1, x_2, ..., x_n⟩ → A such that f(x_i x_j − x_j x_i) = 0 or, equivalently, f(x_i)f(x_j) = f(x_j)f(x_i) for all i, j. Thus, for any k-algebra A, sending an algebra map f to the n-tuple (f(x_i)) yields a natural bijection of sets

(1.6)    Hom_{Alg_k}(k[x_1, x_2, ..., x_n], A) ≅ {(a_i) ∈ A^n | a_i a_j = a_j a_i ∀ i, j} .

Letting CommAlg_k denote the full subcategory of Alg_k consisting of all commutative k-algebras, this becomes the following natural bijection, for any A ∈ CommAlg_k:

(1.7)    Hom_{CommAlg_k}(k[x_1, x_2, ..., x_n], A) ≅ A^n,    f ↦ (f(x_i))_i .

Since this bijection is analogous to (1.5), but in the world of commutative algebras, k[x_1, x_2, ..., x_n] is also called the free commutative k-algebra generated by the x_i. Exactly as the tensor algebra TV of a k-vector space V can be thought of as a more general, basis-free version of the free algebra k⟨x_1, x_2, ..., x_n⟩, the symmetric


algebra of V generalizes the polynomial algebra k[x_1, x_2, ..., x_n]; it is defined by

    Sym V := (TV)/I    with    I = I(V) = (v ⊗ v' − v' ⊗ v | v, v' ∈ V) .

Since the ideal I is generated by homogeneous elements of TV, it follows that I = ⊕_k (I ∩ V^{⊗k}), thereby making Sym V a graded algebra (Exercise 1.1.12):

    Sym V = ⊕_{k∈Z_+} Sym^k V    with    Sym^k V = V^{⊗k}/(I ∩ V^{⊗k}) .

Since the nonzero generators of I have degree > 1, it follows that I ∩ V = 0. Thus, we may again view V ⊆ Sym V, and we can write the image of v_1 ⊗ ··· ⊗ v_k ∈ V^{⊗k} in Sym V as v_1 v_2 ··· v_k ∈ Sym^k V. The foregoing yields a functor

    Sym : Vect_k −→ CommAlg_k .

Indeed, Sym V is clearly a commutative k-algebra for every V ∈ Vect_k. Moreover, if f ∈ Hom_k(V, W) is a homomorphism of vector spaces, then the image of a typical generator v ⊗ v' − v' ⊗ v ∈ I(V) under the map Tf ∈ Hom_{Alg_k}(TV, TW) is the element f(v) ⊗ f(v') − f(v') ⊗ f(v) ∈ I(W). Thus Tf maps I(V) to I(W), and hence Tf passes down to an algebra map Sym f : Sym V → Sym W. This is in fact a homomorphism of graded algebras:

    Sym f = ⊕_k Sym^k f    with    Sym^k f : Sym^k V → Sym^k W,    v_1 v_2 ··· v_k ↦ f(v_1) f(v_2) ··· f(v_k) .

For any commutative k-algebra A, there is the following natural bijection:

(1.8)    Hom_{CommAlg_k}(Sym V, A) ≅ Hom_k(V, A_{Vect_k}),    f ↦ f|_V .

This follows from Proposition 1.1 exactly as (1.7) was derived from (1.5) earlier. As in Proposition 1.1, the bijection (1.8) states, more formally, that the functor Sym is left adjoint to the forgetful functor ·_{Vect_k} : CommAlg_k → Vect_k. If V = kX for a set X, then Hom_k(kX, A_{Vect_k}) ≅ Hom_{Sets}(X, A_{Sets}) by (A.4), and so (1.8) gives the following natural bijection, for any A ∈ CommAlg_k:

(1.9)    Hom_{CommAlg_k}(Sym kX, A) ≅ Hom_{Sets}(X, A_{Sets}),    f ↦ f|_X .
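To make the bijection (1.7) concrete (a sketch in our own conventions, not from the book): with A = k = Q, an algebra map k[x_1, ..., x_n] → k is exactly evaluation at a point, since the generators x_i may be sent to arbitrary (automatically commuting) scalars.

```python
# A polynomial in k[x_1,...,x_n] is stored as {exponent tuple: coefficient};
# the algebra map determined by x_i -> point[i] is evaluation at that point.
from fractions import Fraction

def evaluate(poly, point):
    """Apply the algebra map k[x_1,...,x_n] -> Q sending x_i to point[i]."""
    total = Fraction(0)
    for exps, coeff in poly.items():
        term = Fraction(coeff)
        for base, e in zip(point, exps):
            term *= Fraction(base) ** e
        total += term
    return total

p = {(2, 1): 3, (0, 0): 1}            # 3*x1^2*x2 + 1
assert evaluate(p, (2, 5)) == 61      # 3*4*5 + 1
```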


If X = {x_1, x_2, ..., x_n}, then Hom_{Sets}(X, A_{Sets}) ≅ A^n. Comparing the above bijection with (1.7), it follows that (Exercise 1.1.1)

    Sym kX ≅ k[x_1, x_2, ..., x_n] .

As is well known (see also Exercise 1.1.13), a k-basis of the homogeneous component of degree k of the polynomial algebra k[x_1, x_2, ..., x_n] is given by the so-called standard monomials of degree k,

(1.10)    x_1^{k_1} x_2^{k_2} ··· x_n^{k_n}    with k_i ∈ Z_+ and Σ_i k_i = k .

In particular,

    dim_k Sym^k V = \binom{k+n−1}{n−1}    (n = dim_k V),

as can be seen by identifying each standard monomial with a pattern consisting of k stars and n − 1 bars: the monomial (1.10) corresponds to k_1 stars, a bar, k_2 stars, a bar, ..., a bar, k_n stars.
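The stars-and-bars count can be checked by brute force (a sketch, not from the book; function names are ours):

```python
# Enumerate the standard monomials x_1^{k_1}...x_n^{k_n} of total degree k
# as exponent tuples, and compare their number with binom(k+n-1, n-1).
from itertools import combinations_with_replacement
from math import comb

def standard_monomials(n, k):
    """Exponent tuples (k_1,...,k_n) with k_1 + ... + k_n = k."""
    mons = set()
    for choice in combinations_with_replacement(range(n), k):
        exps = [0] * n            # distribute k degree units among n slots
        for i in choice:
            exps[i] += 1
        mons.add(tuple(exps))
    return sorted(mons)

n, k = 3, 4
assert len(standard_monomials(n, k)) == comb(k + n - 1, n - 1)   # 15
```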

Exterior Algebras

The exterior algebra ⋀V of a k-vector space V is defined by

    ⋀V := (TV)/J    with    J = J(V) = (v ⊗ v | v ∈ V) .

Exactly as for the ideal I of Sym V, one sees that J = ⊕_k (J ∩ V^{⊗k}) and J ∩ V = 0. Thus, we may again view V ⊆ ⋀V, and ⋀V is a graded algebra:

    ⋀V = ⊕_{k∈Z_+} ⋀^k V    with    ⋀^k V = V^{⊗k}/(J ∩ V^{⊗k}) .

Writing the canonical map V^{⊗k} → ⋀^k V as v_1 ⊗ ··· ⊗ v_k ↦ v_1 ∧ ··· ∧ v_k, multiplication in ⋀V becomes

    (v_1 ∧ ··· ∧ v_k)(v'_1 ∧ ··· ∧ v'_l) = v_1 ∧ ··· ∧ v_k ∧ v'_1 ∧ ··· ∧ v'_l .

Following the reasoning for Sym, one obtains a functor

    ⋀ : Vect_k −→ Alg_k ,

and the map ⋀f : ⋀V → ⋀W for f ∈ Hom_k(V, W) is in fact a homomorphism of graded algebras:

    ⋀f = ⊕_k ⋀^k f    with    ⋀^k f : ⋀^k V → ⋀^k W,    v_1 ∧ ··· ∧ v_k ↦ f(v_1) ∧ ··· ∧ f(v_k) .

The defining relations of the exterior algebra state that v ∧ v = 0 for all v ∈ V; in words, the exterior product is alternating on elements of V. Expanding the product (v + v') ∧ (v + v') = 0 and using v ∧ v = v' ∧ v' = 0, one obtains the


rule v ∧ v' = −v' ∧ v for all v, v' ∈ V. Conversely, with v' = v, this rule gives v ∧ v = −v ∧ v for all v ∈ V, which in turn forces v ∧ v = 0 in case char k ≠ 2. Thus, in this case, ⋀V = (TV)/(v ⊗ v' + v' ⊗ v | v, v' ∈ V). In general, the rule v ∧ v' = −v' ∧ v for v, v' ∈ V implies by induction that ab = (−1)^{kl} ba for all a ∈ ⋀^k V and b ∈ ⋀^l V. Using |·| to denote degrees of homogeneous elements, the latter relation gives the following property, which is referred to as anticommutativity or graded-commutativity of the exterior algebra:

(1.11)    ab = (−1)^{|a||b|} ba .

It follows that, for any given collection of elements v_1, v_2, ..., v_k ∈ V and any permutation s of the indices {1, 2, ..., k},

(1.12)    v_{s(1)} ∧ v_{s(2)} ∧ ··· ∧ v_{s(k)} = sgn(s) v_1 ∧ v_2 ∧ ··· ∧ v_k ,

where sgn(s) denotes the sign of the permutation s. Indeed, (1.12) is clear from anticommutativity in case s is a transposition interchanging two adjacent indices; the general case is a consequence of the standard fact that these transpositions generate the symmetric group S_k (Example 7.10). Anticommutativity implies that if V has basis (x_i)_{i∈I}, then the elements

(1.13)    x_{i_1} ∧ x_{i_2} ∧ ··· ∧ x_{i_k}    with i_1 < i_2 < ··· < i_k

generate the k-vector space ⋀^k V. These elements do in fact form a basis of ⋀^k V; see Exercise 1.1.13. Therefore, if dim_k V = n < ∞, then

    dim_k ⋀^k V = \binom{n}{k}    and    dim_k ⋀V = 2^n .

In particular, ⋀^n V is 1-dimensional and, for any f ∈ End_k(V), the endomorphism ⋀^n f ∈ End_k(⋀^n V) = k is given by the determinant (see also Lemma 3.33):

(1.14)    ⋀^n f = det f .
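In a chosen basis, (1.14) can be tested numerically. The matrix of ⋀^k f in the basis (1.13) has the k × k minors of the matrix of f as entries (the "k-th compound matrix"); the following sketch (not from the book; names ours) computes it and recovers det f for k = n.

```python
# Matrix of ∧^k f on the basis e_I, I an increasing k-subset of {0,...,n-1}:
# the (I, J)-entry is the minor det(M[I, J]).  For k = n this is [[det M]].
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def wedge_power(m, k):
    """The k-th compound matrix of m, i.e. the matrix of ∧^k f."""
    idx = list(combinations(range(len(m)), k))
    return [[det([[m[r][c] for c in J] for r in I]) for J in idx]
            for I in idx]

M = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
assert wedge_power(M, 3) == [[det(M)]]     # (1.14): ∧^3 f = det f
assert len(wedge_power(M, 2)) == 3         # dim ∧^2 k^3 = C(3, 2)
```

Functoriality of ⋀^k, i.e. ⋀^k(f ∘ g) = ⋀^k f ∘ ⋀^k g, amounts here to the Cauchy–Binet formula for compound matrices.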

The Weyl Algebra

The following algebra is called the (first) Weyl algebra over k:

(1.15)    A_1(k) := k⟨x, y⟩/(yx − xy − 1) .

Committing a slight abuse of notation, let us keep x and y to denote their images in A_1(k); so yx = xy + 1 in A_1(k). This relation allows us to write each finite product in A_1(k) with factors x or y as a k-linear combination of ordered products of the form x^i y^j (i, j ∈ Z_+). These standard monomials therefore generate A_1(k) as a k-vector space. One can show that they are in fact linearly independent (Exercise 1.1.15; see also Examples 1.8 and D.3); hence the standard monomials form a k-basis of the Weyl algebra.

If f : A_1(k) → A is any k-algebra map and a = f(x), b = f(y), then we must have ba − ab − 1 = 0 in A. However, this relation is the only restriction, because


it guarantees that the homomorphism k⟨x, y⟩ → A that corresponds to the pair (a, b) ∈ A² in (1.5) does in fact factor through A_1(k). Thus, we have a bijection, natural in A ∈ Alg_k,

(1.16)    Hom_{Alg_k}(A_1(k), A) ≅ {(a, b) ∈ A² | ba − ab − 1 = 0} .
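The normal-ordering procedure behind the standard monomials can be sketched in code (not from the book; the representation is ours): repeatedly replace a factor yx by xy + 1 until every term has the form x^i y^j.

```python
# Elements of A_1(k) are stored as {(i, j): coefficient}, meaning a sum of
# coeff * x^i y^j; words in the letters x, y are reduced with the single
# rewriting rule  yx -> xy + 1  (each step shortens a word or moves an x left).
def _add(d, key, c):
    d[key] = d.get(key, 0) + c
    if d[key] == 0:
        del d[key]

def normal_order(word):
    """Rewrite a word in 'x', 'y' as a combination of standard monomials."""
    pending = {word: 1}          # unreduced words with coefficients
    result = {}
    while pending:
        w, c = pending.popitem()
        pos = w.find('yx')
        if pos == -1:            # already of the form x^i y^j
            _add(result, (w.count('x'), w.count('y')), c)
        else:                    # yx -> xy  plus the shorter word from "+1"
            _add(pending, w[:pos] + 'xy' + w[pos + 2:], c)
            _add(pending, w[:pos] + w[pos + 2:], c)
    return result

assert normal_order('yx') == {(1, 1): 1, (0, 0): 1}    # yx = xy + 1
assert normal_order('yxx') == {(2, 1): 1, (1, 0): 2}   # yx^2 = x^2 y + 2x
```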

1.1.3. Modules

In order to pave the way for the dual concept of a "comodule", to be introduced later in §9.2.1, we now review the basic definitions concerning modules over k-algebras in the diagrammatic style of §1.1.1, working in the category Vect_k. We will also briefly discuss some issues related to switching sides.

Left Modules

Let A = (A, m, u) be a k-algebra. A left module over A, by definition, is an abelian group (V, +) that is equipped with a left action of A, that is, a biadditive map A × V → V, (a, v) ↦ a.v, satisfying the conditions

    a.(b.v) = (ab).v    and    1_A.v = v

for all a, b ∈ A and v ∈ V. Putting λv := u(λ).v for λ ∈ k, the group V becomes a k-vector space. The action map is easily seen to be k-bilinear, and hence it corresponds to a k-linear map A ⊗ V → V by (B.12). Thus, a left A-module may equivalently be defined as a vector space V ∈ Vect_k together with a linear map μ = μ_V : A ⊗ V → V such that the following two diagrams commute (stated as equations):

(1.17)    μ ∘ (m ⊗ Id_V) = μ ∘ (Id_A ⊗ μ) : A ⊗ A ⊗ V −→ V    and    μ ∘ (u ⊗ Id_V) = the canonical isomorphism k ⊗ V ≅ V .
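For a concrete check of the two conditions in (1.17) (a sketch, not from the book): take A = Mat_2(Q) acting on V = Q² in the usual way.

```python
# Verify a.(b.v) = (ab).v and 1.v = v for A = Mat_2(Q) acting on Q^2.
from fractions import Fraction as F

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def act(a, v):
    """The module action mu : A (x) V -> V, here matrix-times-vector."""
    return [sum(a[i][k] * v[k] for k in range(2)) for i in range(2)]

a = [[F(1), F(2)], [F(0), F(1)]]
b = [[F(3), F(0)], [F(1), F(2)]]
v = [F(5), F(-1)]
one = [[F(1), F(0)], [F(0), F(1)]]

assert act(a, act(b, v)) == act(mat_mul(a, b), v)   # associativity diagram
assert act(one, v) == v                             # unit diagram
```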

We will generally suppress μ, writing μ(a ⊗ v) = a.v as above, or else use simple juxtaposition, μ(a ⊗ v) = av. Given left A-modules V and W, a homomorphism from V to W is the same as a k-linear map f : V → W such that the following diagram commutes (stated as an equation):

(1.18)    f ∘ μ_V = μ_W ∘ (Id_A ⊗ f) : A ⊗ V −→ W .

In terms of elements, this states that f(a.v) = a.f(v) for all a ∈ A and v ∈ V. As in Appendices A and B, the set of all A-module homomorphisms f : V → W will


be denoted by Hom_A(V, W) and the resulting category of left A-modules by AMod. Thus, AMod is a subcategory of Vect_k. Note also that End_A(V) := Hom_A(V, V) is always a k-subalgebra of End_k(V).

We refrain from reminding the reader in tedious detail of the fundamental module-theoretic notions such as submodule, factor module, etc., and we shall also assume familiarity with the isomorphism theorems and other standard facts. We will however remark that, by virtue of the bijection Hom_k(A ⊗ V, V) ≅ Hom_k(A, End_k(V)) that is given by Hom-⊗ adjunction (B.15), a left module action μ : A ⊗ V → V corresponds to an algebra map ρ : A → End_k(V). In detail, for a given ρ, we may define an action μ by μ(a ⊗ v) := ρ(a)(v) for a ∈ A and v ∈ V. Conversely, from a given action μ, we obtain ρ by defining ρ(a) := (v ↦ μ(a ⊗ v)).

Changing Sides: Opposite Algebras

Naturally, the category Mod_A of all right modules over a given algebra A (as in Appendices A and B) can also be described by diagrams in Vect_k analogous to (1.17) and (1.18). However, it turns out that right A-modules are essentially the same as left modules over a related algebra, the so-called opposite algebra A^op of A. As a k-vector space, A^op is identical to A, but A^op is equipped with a new multiplication ∗ that is given by a ∗ b = ba for a, b ∈ A. Alternatively, we may realize A^op as a vector space isomorphic to A via ·^op : A ≅ A^op, with multiplication given by a^op b^op = (ba)^op. Clearly, (A^op)^op = A in Alg_k.

Now suppose that V is a right A-module with right action μ : V ⊗ A → V. Then we obtain a left A^op-module structure on V by defining μ^op : A^op ⊗ V → V, μ^op(a^op ⊗ v) = μ(v ⊗ a). Likewise, any left A-module action μ : A ⊗ V → V gives rise to a right A^op-action via μ^op : V ⊗ A^op → V, μ^op(v ⊗ a^op) = μ(a ⊗ v). Left A^op-modules become right modules over (A^op)^op = A in this way. Therefore, we obtain an equivalence of categories (§A.3.3)

    Mod_A ≡ A^opMod .

Alternatively, in terms of algebra maps, it is straightforward to check as above that a right A-module action V ⊗ A → V corresponds to an algebra map A → End_k(V)^op. Such a map in turn clearly corresponds to an algebra map A^op → (End_k(V)^op)^op = End_k(V), and hence to a left A^op-module action on V.
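A concrete instance of this side-switching (a sketch, not from the book): row vectors carry a natural right Mat_n(k)-module structure v.X, and the very same action makes them a left module over Mat_n(k)^op, whose product is X ∗ Y = YX.

```python
# Row vectors Q^(1 x n) are a right Mat_n(Q)-module via v.X; defining
# mu_op(X ⊗ v) := v.X makes them a LEFT module over the opposite algebra.
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def right_act(v, x):
    """Row vector times matrix: the right module action."""
    n = len(v)
    return [sum(v[k] * x[k][j] for k in range(n)) for j in range(n)]

def op_act(x, v):
    """mu_op(x ⊗ v) = mu(v ⊗ x): the induced left A^op-action."""
    return right_act(v, x)

X = [[1, 2], [3, 4]]
Y = [[0, 1], [1, 1]]
v = [2, -1]

# left-module axiom over A^op: x.(y.v) = (x * y).v, where x * y = yx
assert op_act(X, op_act(Y, v)) == op_act(mat_mul(Y, X), v)
```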


Bimodules: Tensor Products of Algebras

We will almost exclusively work in the context of left modules, but occasionally we shall also encounter modules that arise naturally as right modules or even as bimodules (§B.1.2). If A and B are k-algebras, then an (A, B)-bimodule is the same as a k-vector space V that is both a left A-module and a right B-module, with module actions μ : A ⊗ V → V and μ' : V ⊗ B → V, such that the following diagram commutes (stated as an equation):

(1.19)    μ ∘ (Id_A ⊗ μ') = μ' ∘ (μ ⊗ Id_B) : A ⊗ V ⊗ B −→ V .

Defining morphisms between (A, B)-bimodules to be the same as k-linear maps that are left A-module as well as right B-module maps, we once again obtain a category, AModB. As with right modules, (A, B)-bimodules are in fact left modules over some algebra, the algebra in question being the tensor product A ⊗ B^op (§1.1.1). Indeed, suppose that V is an (A, B)-bimodule. As we have remarked above, the module actions correspond to algebra maps α : A → End_k(V) and β : B^op → End_k(V). Condition (1.19) can be expressed by stating that the images of these maps commute elementwise. The "universal property" of the tensor product of algebras (Exercise 1.1.11) therefore provides us with a unique algebra map A ⊗ B^op → End_k(V), a ⊗ b^op ↦ α(a)β(b^op), and this algebra map in turn corresponds to a left A ⊗ B^op-module structure on V. In short, we have an equivalence of categories,

    AModB ≡ A⊗B^opMod .

Example 1.2 (The regular bimodule). Every algebra A carries a natural (A, A)-bimodule structure, with left and right A-module actions given by left and right multiplication, respectively. Commutativity of (1.19) for these actions is equivalent to the associative law of A. The resulting left, right, and bimodule structures will be referred to as the regular structures. We will be primarily concerned with the left regular module structure; it will be denoted by A_reg ∈ AMod so as to avoid any confusion with the algebra A ∈ Alg_k. By the foregoing, we may view the regular (A, A)-bimodule A as a left module over the algebra A ⊗ A^op.

Example 1.3 (Bimodule structures on Hom-spaces). For A, B ∈ Alg_k and given modules V ∈ AMod and W ∈ BMod, the k-vector space Hom_k(W, V) becomes an (A, B)-bimodule by defining

    (a.f.b)(w) := a.f(b.w)


for a ∈ A, b ∈ B, f ∈ Hom_k(W, V), and w ∈ W. Thus, Hom_k(W, V) becomes a left A ⊗ B^op-module. We may also regard V as a left module over the endomorphism algebra End_A(V), and likewise for W. If A = B, then the above bimodule action equips Hom_A(W, V) with an (End_A(V), End_A(W))-bimodule structure, with actions given by composition in AMod.

1.1.4. Endomorphism Algebras and Matrices

This subsection provides some technicalities for later use; it may be skipped at a first reading and referred to as the need arises. Throughout, A denotes an arbitrary k-algebra.

Direct Sums

Our first goal is to describe the endomorphism algebra of a finite direct sum ⊕_{i=1}^n V_i with V_i ∈ AMod. If all V_i = V, then we will write ⊕_{i=1}^n V_i = V^{⊕n}. In general, the various embeddings and projections are module maps

    μ_i : V_i ↪ ⊕_{i=1}^n V_i    and    π_i : ⊕_{i=1}^n V_i ↠ V_i .

Explicitly, π_i(v_1, v_2, ..., v_n) = v_i and μ_i(v) = (0, ..., 0, v, 0, ..., 0), with v occupying the i-th component on the right. Consider the generalized n × n matrix algebra

    (Hom_A(V_j, V_i))_{i,j} =
    ⎡ Hom_A(V_1, V_1)  ···  Hom_A(V_n, V_1) ⎤
    ⎢        ⋮                     ⋮        ⎥
    ⎣ Hom_A(V_1, V_n)  ···  Hom_A(V_n, V_n) ⎦ .

The k-vector space structure of this set is "entrywise", using the standard k-linear structure on each Hom_A(V_j, V_i) ⊆ Hom_k(V_j, V_i) and identifying the generalized matrix algebra with the direct sum of vector spaces ⊕_{i,j} Hom_A(V_j, V_i). Multiplication comes from composition:

    (f_{ij})(g_{ij}) = (Σ_k f_{ik} ∘ g_{kj})_{i,j} .

Note the reversal of indices: f_{ij} ∈ Hom_A(V_j, V_i). The identity element of the generalized matrix algebra is the diagonal matrix with entries Id_{V_i}.

Lemma 1.4. (a) For V_1, ..., V_n ∈ AMod, there is the following isomorphism in Alg_k:

    End_A(⊕_{i=1}^n V_i) ≅ (Hom_A(V_j, V_i))_{i,j} ,    f ↦ (π_i ∘ f ∘ μ_j)_{i,j} .


(b) Let V ∈ AMod. Then V^{⊕n} becomes a left module over Mat_n(A) via matrix multiplication, and there is the following isomorphism in Alg_k:

    End_A(V) ≅ End_{Mat_n(A)}(V^{⊕n}) ,    f ↦ f^{⊕n} = Σ_i μ_i ∘ f ∘ π_i .

Proof. (a) Let us put V = ⊕_{i=1}^n V_i and denote the map in (a) by α; it is clearly k-linear. In fact, α is an isomorphism by (B.14). In order to show that α is an algebra map, we note the relations Σ_k μ_k ∘ π_k = Id_V and π_i ∘ μ_j = δ_{i,j} Id_{V_i} (Kronecker δ). Using this, we compute

    α(f ∘ g) = (π_i ∘ f ∘ g ∘ μ_j) = (π_i ∘ f ∘ (Σ_k μ_k ∘ π_k) ∘ g ∘ μ_j) = (Σ_k (π_i ∘ f ∘ μ_k) ∘ (π_k ∘ g ∘ μ_j)) = α(f)α(g) .

Similarly, α(1) = 1. This shows that α is a k-algebra homomorphism, proving (a).

(b) In componentwise notation, the map f^{⊕n} is given by (v_i) ↦ (f(v_i)) and the "matrix multiplication" action of Mat_n(A) on V^{⊕n} by (a_{ij}).(v_j) = (Σ_j a_{ij}.v_j). It is straightforward to check that f^{⊕n} ∈ End_{Mat_n(A)}(V^{⊕n}) and that f ↦ f^{⊕n} is a k-algebra map. The inverse map is

    End_{Mat_n(A)}(V^{⊕n}) −→ End_A(V) ,    g ↦ π_1 ∘ g ∘ μ_1 .

For example, in order to check that Σ_i μ_i ∘ π_1 ∘ g ∘ μ_1 ∘ π_i = g, observe that g commutes with the operators μ_i ∘ π_j : V^{⊕n} → V^{⊕n}, because μ_i ∘ π_j is given by the action of the matrix e_{i,j} ∈ Mat_n(A), with 1 in the (i, j)-position and 0s elsewhere. Therefore,

    Σ_i μ_i ∘ π_1 ∘ g ∘ μ_1 ∘ π_i = Σ_i μ_i ∘ π_1 ∘ μ_1 ∘ π_i ∘ g = Id_{V^{⊕n}} ∘ g = g . □

Free Modules

We now turn to a generalization of the familiar fact from linear algebra that the n × n matrix algebra Mat_n(k) is the endomorphism algebra of the vector space k^{⊕n}. In place of k^{⊕n}, we consider the n-fold direct sum A_reg^{⊕n} of the regular module (Example 1.2). Left A-modules isomorphic to A_reg^{⊕n} for some n ∈ Z_+ are called finitely generated free (Example A.5).

Lemma 1.5.

(a) Mat_n(A)^op ≅ Mat_n(A^op) in Alg_k, via the matrix transpose.

(b) There is the following isomorphism in Alg_k, given by right matrix multiplication:

    Mat_n(A)^op ≅ End_A(A_reg^{⊕n}) ,    (x_{ij}) ↦ ((a_i) ↦ (Σ_i a_i x_{ij})_j) .


Proof. (a) We will identify opposite algebras with the originals, but with multiplication ∗. Consider the map ·^T : Mat_n(A)^op → Mat_n(A^op) sending each matrix to its transpose; this is clearly a k-linear bijection fixing the identity matrix 1_{n×n}. We need to check that, for X = (x_{ij}) and Y = (y_{ij}) ∈ Mat_n(A)^op, the equation (X ∗ Y)^T = X^T Y^T holds in Mat_n(A^op). But the matrix (X ∗ Y)^T = (YX)^T has (i, j)-entry Σ_l y_{jl} x_{li}, whereas the (i, j)-entry of X^T Y^T equals Σ_l x_{li} ∗ y_{jl}. By definition of the multiplication in A^op, these two entries are identical.

(b) Right multiplication by x ∈ A gives the map r_x = ·x ∈ End_A(A_reg). Since r_x ∘ r_y = r_{yx} = r_{x∗y} for x, y ∈ A, the assignment x ↦ r_x is an algebra map A^op → End_A(A_reg). This map has inverse End_A(A_reg) → A^op, f ↦ f(1). Hence End_A(A_reg) ≅ A^op as k-algebras, and so

    End_A(A_reg^{⊕n}) ≅ Mat_n(End_A(A_reg)) ≅ Mat_n(A^op) ≅ Mat_n(A)^op ,

using Lemma 1.4(a) for the first isomorphism and part (a) for the last. It is readily checked that this isomorphism is explicitly given as in the lemma. □
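The anti-homomorphism x ↦ r_x underlying part (b) can be checked numerically (a sketch, not from the book), here with A = Mat_2(Z), so that right multiplications are honest endomorphisms of the left regular module:

```python
# r_x(a) = a·x is a LEFT A-module endomorphism of A_reg (it commutes with
# left multiplications), and composition reverses order: r_x ∘ r_y = r_{yx}.
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def r(x):
    """Right multiplication by x, an endomorphism of A_reg."""
    return lambda a: mat_mul(a, x)

x = [[1, 1], [0, 1]]
y = [[2, 0], [1, 1]]
a = [[0, 1], [1, 0]]
b = [[3, 1], [2, 2]]

assert r(x)(r(y)(a)) == r(mat_mul(y, x))(a)        # r_x ∘ r_y = r_{yx}
assert mat_mul(b, r(x)(a)) == r(x)(mat_mul(b, a))  # r_x is left A-linear
```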

Exercises for Section 1.1

In these exercises, A denotes a k-algebra.

1.1.1 (Universal properties). (a) Let T'V ∈ Alg_k be equipped with a k-linear map t' : V → (T'V)_{Vect_k} such that the map t'^* : Hom_{Alg_k}(T'V, A) → Hom_k(V, A_{Vect_k}) is a bijection for any A ∈ Alg_k. Show that T'V ≅ TV.
(b) Deduce from (1.7) and (1.9) that Sym k[n] ≅ k[x_1, ..., x_n], where [n] = {1, 2, ..., n}.

1.1.2 (Splitting maps). (a) Let f : U → V and g : V → W be maps in AMod. Show that g ∘ f is an isomorphism if and only if f is mono, g is epi, and V = Im f ⊕ Ker g. If U = W and g ∘ f = Id_W, then one says that the maps f and g split each other.
(b) Let 0 → U → V → W → 0 be a short exact sequence in AMod (§B.1.1), with maps f : U → V and g : V → W, and put S := Im f = Ker g. Show that the following conditions are equivalent; if they hold, the given short exact sequence is said to be split:
(i) f' ∘ f = Id_U for some f' ∈ Hom_A(V, U);
(ii) g ∘ g' = Id_W for some g' ∈ Hom_A(W, V);
(iii) S has a complement, that is, V = S ⊕ C for some A-submodule C ⊆ V.

1.1.3 (Generators of a module). Let V ∈ AMod. A subset Γ ⊆ V is said to generate V if the only submodule of V containing Γ is V itself. Modules that have a finite generating set are called finitely generated; modules that are generated by one element are called cyclic.


(a) Let V be finitely generated. Use Zorn's Lemma to show that every proper submodule U ⊊ V is contained in a maximal proper submodule M, that is, M ⊊ V and M ⊆ M' ⊊ V implies M' = M.
(b) Let 0 → U → V → W → 0 be a short exact sequence in AMod. Show that if both U and W are finitely generated, then V is finitely generated as well. Conversely, assuming V to be finitely generated, show that W is finitely generated, but that this need not hold for U. (Give an example to that effect.)
(c) Show that the following are equivalent:
(i) V has a generating set consisting of n elements;
(ii) V is a homomorphic image of the free left A-module A_reg^{⊕n};

(iii) V^{⊕n} is a cyclic left Mat_n(A)-module (Lemma 1.4).

1.1.4 (Chain conditions). A partially ordered set (X, ≤) is said to satisfy the Ascending Chain Condition (ACC) if every ascending chain in X stabilizes: if x_1 ≤ x_2 ≤ x_3 ≤ ··· are elements of X, then x_n = x_{n+1} = ··· for some n. Assuming the Axiom of Choice, show:
(a) ACC is equivalent to the Maximum Condition: every nonempty Y ⊆ X has at least one maximal member, that is, there exists y ∈ Y such that y ≤ y' with y' ∈ Y implies y' = y.
(b) The Descending Chain Condition (DCC) and the Minimum Condition, both defined analogously, are also equivalent.

1.1.5 (Noetherian and artinian modules). V ∈ AMod is said to be noetherian if ACC holds for its submodules: every ascending chain U_1 ⊆ U_2 ⊆ U_3 ⊆ ··· of submodules of V stabilizes; equivalently, every nonempty collection of submodules of V has at least one maximal member (Exercise 1.1.4). Modules satisfying DCC or, equivalently, the minimum condition for submodules are called artinian. Show:
(a) V is noetherian if and only if all submodules of V are finitely generated.
(b) If 0 → U → V → W → 0 is a short exact sequence in AMod, then V is noetherian if and only if both U and W are noetherian; likewise for artinian.

1.1.6 (Noetherian and artinian algebras). The algebra A is called left noetherian if A_reg ∈ AMod is noetherian, that is, if A satisfies ACC on left ideals. Right noetherian algebras are defined likewise using right ideals. Algebras that are both right and left noetherian are simply called noetherian. Artinian (left, right) algebras are defined similarly using DCC.
(a) Assuming A to be left noetherian, show that all finitely generated left A-modules are noetherian; likewise for artinian.
(b) Let B be a subalgebra of A such that the k-algebra A is generated by B and an element x such that Bx + B = xB + B. Adapt the proof of the Hilbert Basis Theorem to show that if B is left (or right) noetherian, then so is A.


1.1.7 (Skew polynomial algebras). Like the ordinary polynomial algebra A[x], a skew polynomial algebra over A is a k-algebra B containing A as a subalgebra, together with an additional element x ∈ B whose powers form a basis of B as a left A-module. Thus, as in A[x], every element of B can be uniquely written as a finite sum Σ_i a_i x^i with a_i ∈ A. However, we now only insist on the inclusion xA ⊆ Ax + A; so all products xa with a ∈ A can be written in the form xa = σ(a)x + δ(a) with unique σ(a), δ(a) ∈ A.
(a) Show that the above rule leads to a k-algebra multiplication on B if and only if σ ∈ End_{Alg_k}(A) and δ is a k-linear endomorphism of A satisfying

    δ(aa') = σ(a)δ(a') + δ(a)a'    (a, a' ∈ A).

Maps δ as above are called left σ-derivations of A, and the resulting algebra B is denoted by A[x; σ, δ]. If σ = Id_A, then one simply speaks of a derivation δ and writes A[x; δ] for A[x; Id_A, δ]. Similarly, A[x; σ] = A[x; σ, 0]. If σ ∈ Aut_{Alg_k}(A), then we may define the skew Laurent polynomial algebra A[x^{±1}; σ] as above, except that negative powers of the variable x are permitted: A[x^{±1}; σ] = ⊕_{i∈Z} A x^i and x^i a = σ^i(a) x^i for a ∈ A. Assuming σ ∈ Aut_{Alg_k}(A), show:
(b) If A is a domain, that is, A ≠ 0 and products of nonzero elements of A are nonzero, then A[x; σ, δ] is likewise.
(c) If A is left (or right) noetherian, then so is A[x; σ, δ]. (Use Exercise 1.1.6.)

1.1.8 (Artin–Tate Lemma). Let A be affine and let B ⊆ A be a subalgebra such that

A is finitely generated as a left B-module, say A = Σ_{i=1}^m B a_i. Show:
(a) There exists an affine k-subalgebra B' ⊆ B such that A = Σ_{i=1}^m B' a_i.
(b) If B is commutative, then B is affine. (Use (a) and Hilbert's Basis Theorem.)

1.1.9 (Affine algebras and finitely generated modules). Let A be affine and let M ∈ AMod be finitely generated. Show that if N is an A-submodule of M such that dim_k M/N < ∞, then N is finitely generated.

1.1.10 (Subalgebras as direct summands). Let B be a subalgebra of A such that A is free as a left B-module. Show that A = B ⊕ C for some left B-submodule C ⊆ A.

1.1.11 (Tensor product of algebras). Let A, B ∈ Alg_k. Prove:
(a) The tensor product A ⊗ B ∈ Alg_k has the following universal property: the maps a : A → A ⊗ B, x ↦ x ⊗ 1, and b : B → A ⊗ B, y ↦ 1 ⊗ y, are k-algebra maps such that Im a commutes elementwise with Im b. Moreover, if α : A → C and β : B → C are any k-algebra maps such that Im α commutes elementwise with Im β, then there exists a unique k-algebra map t : A ⊗ B → C satisfying t ∘ a = α and t ∘ b = β. In particular, the tensor product gives a bifunctor · ⊗ · : Alg_k × Alg_k → Alg_k.


    A --a--> A ⊗ B <--b-- B
      \        |         /
       α     ∃! t      β
        \      ↓      /
         `---> C <---'

(b) Z ( A ⊗ B) Z A ⊗ Z B, where Z · denotes centers. (c) C ⊗R H Mat2 (C) as C-algebras. (d) A ⊗ k[x 1, . . . , x n ] A[x 1, . . . , x n ] as k-algebras. In particular, k[x 1, . . . , x n ] ⊗ k[x 1, . . . , x m ] k[x 1, . . . , x n+m ]. (e) A ⊗ Matn (k) Matn ( A) as k-algebras. In particular, Matn (k) ⊗ Matm (k) Matnm (k). 1.1.12 (Graded vector spaces, algebras, and modules). Let Δ be a monoid, with binary operation denoted by juxtaposition and with identity element 1. A Δ-grading of a k-vector space V is given by a direct sum decomposition Vk V= k ∈Δ k

with k-subspaces V . The nonzero elements of V k are said to be homogeneous of degree k. If V and W are Δ-graded, then a morphism f : V → W of Δ-graded vector spaces, by deﬁnition, is a k-linear map that preserves degrees in the sense that f (V k ) ⊆ W k for all k ∈ Δ. In this way, Δ-graded k-vector spaces form a category, Δ Δ Vectk . For any V, W ∈ Vectk , the tensor product V ⊗ W inherits a Δ-grading with Vi ⊗ W j . (V ⊗ W ) k = i j=k

A k-algebra A is said to be Δ-graded if the underlying k-vector space of A is Δ-graded and multiplication A ⊗ A → A as well as the unit map k → A are morphisms of graded vector spaces. Here, k has the trivial grading: k = k^1. Explicitly, this means that A = ⊕_{k ∈ Δ} A^k for k-subspaces A^k satisfying A^k A^{k′} ⊆ A^{kk′} for k, k′ ∈ Δ and 1_A ∈ A^1. In particular, A^1 is a k-subalgebra of A. Taking as morphisms the k-algebra maps that preserve the Δ-grading, we obtain a category, Alg_k^Δ. Let A ∈ Alg_k^Δ. A module V ∈ AMod is called Δ-graded if the underlying k-vector space of V is Δ-graded and the action map A ⊗ V → V is a morphism of graded vector spaces: V = ⊕_{k ∈ Δ} V^k for k-subspaces V^k such that A^k V^{k′} ⊆ V^{kk′} for all k, k′ ∈ Δ.

(a) Let A ∈ Alg_k be such that the underlying k-vector space of A is Δ-graded. Assuming that k ≠ 1 implies kk′ ≠ k′ for all k′ ∈ Δ, show that 1 ∈ A^1 is in fact automatic if multiplication of A is a map of graded vector spaces.

22

1. Representations of Algebras

(b) Let A ∈ Alg_k^Δ, let V ∈ AMod be Δ-graded, and let U be an A-submodule of V. Show that U = ⊕_k (U ∩ V^k) if and only if U is generated, as an A-module, by homogeneous elements. In this case, the A-module V/U is graded with homogeneous components (V/U)^k = V^k/(U ∩ V^k).

(c) Let A ∈ Alg_k^Δ and let I be an ideal of A. Show that I = ⊕_k (I ∩ A^k) if and only if I is generated, as an ideal of A, by homogeneous elements. In this case, the algebra A/I is graded with homogeneous components (A/I)^k = A^k/(I ∩ A^k).

1.1.13 (Some properties of symmetric and exterior algebras). Let Alg_k^Z denote the category of Z-graded k-algebras as in Exercise 1.1.12, with Z = (ℤ, +).

(a) For any V, W ∈ Vect_k, show that Sym(V ⊕ W) ≅ Sym V ⊗ Sym W in Alg_k^Z. (Use Exercise 1.1.11(a) and the universal property (1.8) of the symmetric algebra.)

(b) An algebra A ∈ Alg_k^Z is called anticommutative or graded commutative if ab = (−1)^{|a||b|} ba for all homogeneous a, b ∈ A as in (1.11). If, in addition, a² = 0 for all homogeneous a ∈ A of odd degree, then A is called alternating. (Anticommutative algebras are automatically alternating if char k ≠ 2.) Show that the exterior algebra ⋀V is alternating and that, for any alternating k-algebra A, there is the following natural bijection of sets:

Hom_{Alg_k^Z}(⋀V, A) ⥲ Hom_k(V, A^1),   f ↦ f|_V.

(c) Let A, B ∈ Alg_k^Z be alternating. Define A ⊗ᵍ B to be the usual tensor product A ⊗ B as a k-vector space, with the Z-grading of Exercise 1.1.12. However, multiplication is not given by (1.3) but rather by the Koszul sign rule:

(a ⊗ b)(a′ ⊗ b′) := (−1)^{|b||a′|} aa′ ⊗ bb′.

Show that this makes A ⊗ᵍ B an alternating k-algebra.

(d) Conclude from (b) and (c) that ⋀(V ⊕ W) ≅ ⋀V ⊗ᵍ ⋀W as graded k-algebras.

(e) Deduce the bases of Sym V and ⋀V as stated in (1.10) and (1.13) from the isomorphisms in (a) and (d).

1.1.14 (Central simple algebras). A k-algebra A ≠ 0 is called simple if 0 and A are the only ideals of A. In this case, the center Z A is easily seen to be a k-field. A simple algebra A is called central simple if Z A = k, viewing the unit map k → A as an embedding.³

³In the literature, central simple k-algebras are often also understood to be finite dimensional, but we will not assume this here.


(a) Show that if A ∈ Alg_k is central simple and B ∈ Alg_k is arbitrary, then the ideals of the algebra A ⊗ B are exactly the subspaces of the form A ⊗ I, where I is an ideal of B.

(b) Let V ∈ Vect_k. Show that End_k(V) is central simple if and only if dim_k V < ∞. Conclude from (a) and Exercise 1.1.11(e) that the ideals of the matrix algebra Mat_n(B) are exactly the subspaces Mat_n(I), where I is an ideal of B.

(c) Conclude from (a) and Exercise 1.1.11(b) that the tensor product of any two central simple algebras is again central simple.

1.1.15 (Weyl algebras). Let A_1(k) denote the Weyl algebra, with standard algebra generators x and y and defining relation yx = xy + 1 as in (1.15).

(a) Consider the skew polynomial algebra B = A[η; δ] (Exercise 1.1.7) with A = k[ξ] the ordinary polynomial algebra and with derivation δ = d/dξ. Show that ηξ = ξη + 1 holds in B and conclude from (1.16) that there is a unique algebra map f: A_1(k) → B with f(x) = ξ and f(y) = η. Conclude further that f is an isomorphism and that the standard monomials x^i y^j form a k-basis of A_1(k). Finally, conclude from Exercise 1.1.7 that A_1(k) is a noetherian domain.

(b) Assuming char k = 0, show that A_1(k) is central simple in the sense of Exercise 1.1.14. Conclude from Exercise 1.1.14 that the algebra A_n(k) := A_1(k)^{⊗n} is central simple for every positive integer n; this algebra is called the nth Weyl algebra over k.

(c) Now let char k = p > 0 and put Z := Z(A_1(k)). Show that Z = k[x^p, y^p] is a polynomial algebra over k and that A_1(k) ≅ Z^{⊕p²} as a Z-module: the standard monomials x^i y^j with 0 ≤ i, j < p form a Z-basis of A_1(k).

1.1.16 (Quantum plane and quantum torus). Fix a scalar q ∈ k^× and consider the following algebra, called the quantum plane:

O_q(k²) := k⟨x, y⟩/(xy − qyx).

As in the case of the Weyl algebra A_1(k), denote the images of x and y in O_q(k²) by x and y as well; so xy = qyx holds in O_q(k²).

(a) Adapt the method of Exercise 1.1.15(a) to show that the quantum plane can be realized as the skew polynomial algebra O_q(k²) ≅ k[x][y; σ], where k[x] is the ordinary polynomial algebra and σ ∈ Aut_{Alg_k}(k[x]) is given by σ(x) = q⁻¹x. Conclude from Exercise 1.1.7 that O_q(k²) is a noetherian domain.

(b) Observe that σ extends to an automorphism of the Laurent polynomial algebra k[x^{±1}]. Using this fact, show that there is a tower of k-algebras,

O_q(k²) ≅ k[x][y; σ] ⊆ k[x^{±1}][y; σ] ⊆ O_q((k^×)²) := k[x^{±1}][y^{±1}; σ],

where the last algebra is a skew Laurent polynomial algebra (Exercise 1.1.7). The algebra O_q((k^×)²) is called a quantum torus.


(c) Show: if the parameter q is not a root of unity, then O_q((k^×)²) is a central simple k-algebra. Conclude that every nonzero ideal of the quantum plane O_q(k²) contains some standard monomial x^i y^j in this case.

(d) If q is a root of unity of order n, then show that Z := Z(O_q((k^×)²)) is a Laurent polynomial algebra in the two variables x^{±n}, y^{±n} and that the standard monomials x^i y^j with 0 ≤ i, j < n form a basis of O_q((k^×)²) as a module over Z.
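The relation xy = qyx in parts (c) and (d) is easy to probe numerically. The sketch below is an illustration, not part of the exercise: for q a primitive n-th root of unity, the standard "clock and shift" matrices give an n-dimensional representation of the quantum plane in which the central elements x^n and y^n of part (d) act as the identity.

```python
import numpy as np

# Illustrative numerical check (not part of the exercise): for q a primitive
# n-th root of unity, the clock matrix X and shift matrix Y satisfy the
# quantum-plane relation XY = qYX, and X^n = Y^n = 1, matching part (d),
# where the n-th powers of x and y are central.
n = 5
q = np.exp(2j * np.pi / n)                 # primitive n-th root of unity
X = np.diag([q**k for k in range(n)])      # "clock": X e_j = q^j e_j
Y = np.roll(np.eye(n), 1, axis=0)          # "shift": Y e_j = e_{j+1 mod n}

assert np.allclose(X @ Y, q * (Y @ X))     # defining relation xy = qyx
assert np.allclose(np.linalg.matrix_power(X, n), np.eye(n))
assert np.allclose(np.linalg.matrix_power(Y, n), np.eye(n))
```

When q is not a root of unity, the quantum torus is simple by part (c), so no analogous finite-dimensional representation of it can exist.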

1.2. Representations

By definition, a representation of a k-algebra A is an algebra homomorphism ρ: A → End_k(V) with V ∈ Vect_k. If dim_k V = n < ∞, then the representation is called finite-dimensional and n is referred to as its dimension or sometimes its degree. We will usually denote the operator ρ(a) by a_V; so:

(1.20)  ρ: A → End_k(V),   a ↦ a_V.

The map ρ is often de-emphasized and the vector space V is referred to as a representation of A. For example, instead of Ker ρ, we will usually write

Ker V := {a ∈ A | a_V = 0}.

Representations with kernel 0 are called faithful. The image ρ(A) of a representation (1.20) will be written as A_V; so A/Ker V ≅ A_V ⊆ End_k(V). Throughout the remainder of this section, A will denote an arbitrary k-algebra unless explicitly specified otherwise.

1.2.1. The Category Rep A and First Examples

As was explained in §1.1.3, representations of A are essentially the same as left A-modules: every representation A → End_k(V) gives rise to a left A-module action A ⊗ V → V and conversely. This connection enables us to transfer familiar notions from the theory of modules into the context of representations. Thus, we may speak of subrepresentations, quotients, and direct sums of representations and also of homomorphisms, isomorphisms, etc., of representations by simply using the corresponding definitions for modules. For example, a homomorphism from a representation ρ: A → End_k(V) to a representation ρ′: A → End_k(V′) is given by an A-module homomorphism f: V → V′, that is, a k-linear map satisfying (1.21)

ρ′(a) ∘ f = f ∘ ρ(a)   (a ∈ A).


This condition is sometimes stated as "f intertwines ρ and ρ′". Thus, the representations of an algebra A form a category, Rep A, that is equivalent to the category of left A-modules:

Rep A ≡ AMod.

An isomorphism ρ ≅ ρ′ in Rep A is given by an isomorphism f: V ⥲ V′ in Vect_k satisfying the intertwining condition (1.21), which amounts to commutativity of the following diagram in Alg_k:

(1.22)  $\begin{array}{ccc} \operatorname{End}_k(V) & \xrightarrow[\sim]{\;f \circ\, \cdot\, \circ f^{-1}\;} & \operatorname{End}_k(V') \\ {\scriptstyle \rho}\,\nwarrow & & \nearrow\,{\scriptstyle \rho'} \\ & A & \end{array}$

Here, f ∘ · ∘ f⁻¹ = f_* ∘ (f⁻¹)^* in the notation of §B.2.1. Isomorphic representations are also called equivalent, and the symbol ≅ is used for equivalence or isomorphism of representations. In the following, we shall freely use module-theoretic terminology and notation for representations. For example, ρ(a)(v) = a_V(v) will usually be written as a.v or av.

Example 1.6 (Regular representations). The representation of A that corresponds to the module A_reg ∈ AMod (Example 1.2) is called the regular representation of A; it is given by the algebra map ρ_reg: A → End_k(A) with

ρ_reg(a) = a_A := (b ↦ ab)   (a, b ∈ A).

As in Example 1.2, we may also consider the right regular A-module as well as the regular (A, A)-bimodule; these correspond to representations A^op → End_k(A) and A ⊗ A^op → End_k(A), respectively.

Example 1.7 (The polynomial algebra). Let A = k[t] be the ordinary polynomial algebra. By (1.6), representations ρ: k[t] → End_k(V) for a given V ∈ Vect_k are in bijection with linear operators τ ∈ End_k(V) via ρ(t) = τ. For a fixed positive integer n, we may describe the equivalence classes of n-dimensional representations of k[t] as follows. For any such representation, given by V ∈ Vect_k and an operator τ ∈ End_k(V), we may choose an isomorphism V ≅ k^{⊕n} in Vect_k. This isomorphism and the resulting isomorphism End_k(V) ≅ Mat_n(k) allow us to replace V by k^{⊕n} and τ by a matrix T ∈ Mat_n(k) without altering the isomorphism type. By (1.22), two representations of k[t] that are given by T, T′ ∈ Mat_n(k) are equivalent if and only if the matrices T and T′ are conjugate to each other, that is, T′ = gTg⁻¹ for some g ∈ GL_n(k) = Mat_n(k)^×. Thus, letting GL_n(k)\Mat_n(k) denote the set of orbits for the conjugation action GL_n(k) ↷ Mat_n(k), we have a


bijection of sets,

(1.23)  {equivalence classes of n-dimensional representations of k[t]} ⥲ GL_n(k)\Mat_n(k).

From linear algebra, we further know that a full representative set of the GL_n(k)-orbits in Mat_n(k) is given by the matrices in rational canonical form or in Jordan canonical form over some algebraic closure of k, up to a permutation of the Jordan blocks; see [67, Chapter 12, Theorems 16 and 23].

Example 1.8 (Representations of the Weyl algebra). In view of (1.16), a representation of the Weyl algebra A_1(k) = k⟨x, y⟩/(yx − xy − 1) is given by a V ∈ Vect_k and a pair (a, b) ∈ End_k(V)² satisfying the relation ba = ab + Id_V. As in Example 1.7, it follows from (1.22) that two such pairs (a, b), (a′, b′) ∈ End_k(V)² yield equivalent representations if and only if (a′, b′) = g.(a, b) = (gag⁻¹, gbg⁻¹) for some g ∈ GL(V) = End_k(V)^×, the group of invertible linear transformations of V. If char k = 0, then A_1(k) has no nonzero finite-dimensional representations V. Indeed, if dim_k V < ∞, then we may take the trace of both sides of the equation ba = ab + Id_V to obtain that trace(ba) = trace(ab) + dim_k V. Since trace(ba) = trace(ab) for any a, b ∈ End_k(V), this forces dim_k V = 0. The standard representation of A_1(k), for any base field k, is constructed by taking V = k[t], the polynomial algebra, and the two k-linear endomorphisms of k[t] that are given by multiplication with the variable t and by formal differentiation, d/dt. Denoting the former by just t, the product rule gives the relation (d/dt) t = t (d/dt) + Id_{k[t]}. Thus, we obtain a representation A_1(k) → End_k(k[t]) with x ↦ t and y ↦ d/dt. It is elementary to see that the operators t^i (d/dt)^j ∈ End_k(k[t]) (i, j ∈ ℤ₊) are k-linearly independent if char k = 0 (Exercise 1.2.9). It follows that the standard monomials x^i y^j form a k-basis of A_1(k) when char k = 0. This also holds for general k (Exercise 1.1.15 or Example D.3).

1.2.2. Changing the Algebra or the Base Field

In studying the representations of a given k-algebra A, it is often useful to extend the base field k—things tend to become simpler over an algebraically closed or at least sufficiently large field—and to take advantage of any available information concerning the representations of certain related algebras such as subalgebras or homomorphic images of A. Here we describe some standard ways to go about doing this. This material may appear dry and technical at first; it can be skipped or only briefly skimmed at a first reading and referred to later as needed.

Pulling Back: Restriction. Suppose we are given a k-algebra map φ: A → B. Then any representation ρ: B → End_k(V), b ↦ b_V, gives rise to a representation φ^*(ρ) := ρ ∘ φ: A → End_k(V); so a_V = φ(a)_V for a ∈ A. We will refer to this


process as pulling back the representation ρ along φ; the representation φ^*(ρ) of A is also called the restriction of ρ from B to A. The "restriction" terminology is especially intuitive in case A is a subalgebra of B and φ is the embedding, or if φ is at least a monomorphism, but it is also used for general φ. If φ is surjective, then φ^*(ρ) is sometimes referred to as the inflation of ρ along φ. In keeping with the general tendency to emphasize V over the map ρ, the pullback φ^*(ρ) is often denoted by φ^*V. When φ is understood, we will also write V↓_A or Res^B_A V. The process of restricting representations along a given algebra map clearly is functorial: any morphism ρ → ρ′ in Rep B gives rise to a homomorphism φ^*(ρ) → φ^*(ρ′) in Rep A, because the intertwining condition (1.21) for ρ and ρ′ is clearly inherited by φ^*(ρ) and φ^*(ρ′). In this way, we obtain the restriction functor,

φ^* = Res^B_A : Rep B → Rep A.

Pushing Forward: Induction and Coinduction. In the other direction, we may also "push forward" representations along an algebra map φ: A → B. In fact, there are two principal ways to do this. First, for any V ∈ Rep A, the induced representation of B is defined by

Ind^B_A V := B ⊗_A V.

On the right, B carries the (B, A)-bimodule structure that comes from the regular (B, B)-bimodule structure via b.b′.a := b b′ φ(a). As in §B.1.2, this allows us to form the tensor product B ⊗_A V and equip it with the left B-module action b.(b′ ⊗ v) := bb′ ⊗ v. This makes Ind^B_A V a representation of B. Alternative notation for Ind^B_A V includes φ_*V and V↑^B. Again, this construction is functorial: if f: V → V′ is a morphism in Rep A, then Ind^B_A f := Id_B ⊗ f: Ind^B_A V → Ind^B_A V′ is a morphism in Rep B. All this behaves well with respect to composition and identity morphisms; so induction gives a functor,

φ_* = Ind^B_A : Rep A → Rep B.

Similarly, we may use the (A, B)-bimodule structure of B that is given by a.b′.b := φ(a) b′ b to form Hom_A(B, V) and view it as a left B-module as in §B.2.1:

(b.f)(b′) = f(b′b).

The resulting representation of B is called the coinduced representation:

Coind^B_A V := Hom_A(B, V).


If f: V → V′ is a morphism in Rep A, then Coind^B_A f := f_* : Coind^B_A V → Coind^B_A V′, g ↦ f ∘ g, is a morphism in Rep B. The reader will have no difficulty confirming that this gives a functor,

Coind^B_A : Rep A → Rep B.

We shall primarily work with induction hereafter. In some situations that we shall encounter, there is an isomorphism of functors Coind^B_A ≅ Ind^B_A; see Exercise 2.2.7 and Proposition 3.4.

Adjointness Relations. It turns out that the functors Ind^B_A and Coind^B_A are left and right adjoint to Res^B_A, respectively, in the sense of Section A.4. These abstract relations have very useful consequences; see Exercises 1.2.7 and 1.2.8, for example. The isomorphism in part (a) of the proposition below, and various consequences thereof, are often referred to as Frobenius reciprocity.

Proposition 1.9. Let φ: A → B be a map in Alg_k. Then, for any V ∈ Rep A and W ∈ Rep B, there are natural isomorphisms in Vect_k:

(a) Hom_B(Ind^B_A V, W) ≅ Hom_A(V, Res^B_A W) and

(b) Hom_B(W, Coind^B_A V) ≅ Hom_A(Res^B_A W, V).

Proof. Both parts follow from Hom-⊗ adjunction (B.16). For (a), we use the (B, A)-bimodule structure of B that was explained above to form Hom_B(B, W) and equip it with the left A-action (a.f)(b) := f(bφ(a)). In particular, (a.f)(1) = f(φ(a)) = φ(a).f(1); so the map f ↦ f(1) is an isomorphism Hom_B(B, W) ≅ Res^B_A W in Rep A. Therefore,

Hom_B(Ind^B_A V, W) = Hom_B(B ⊗_A V, W) ≅ Hom_A(V, Hom_B(B, W))   by (B.16)
≅ Hom_A(V, Res^B_A W).

Tracking a homomorphism f ∈ Hom_A(V, Res^B_A W) through the above isomorphisms, one obtains the map in Hom_B(Ind^B_A V, W) that is given by b ⊗ v ↦ b.f(v) for b ∈ B and v ∈ V.

Part (b) uses the above (A, B)-bimodule structure of B and the standard B-module isomorphism B ⊗_B W ≅ W. This isomorphism restricts to an isomorphism B ⊗_B W ≅ Res^B_A W in Rep A, giving

Hom_B(W, Coind^B_A V) = Hom_B(W, Hom_A(B, V)) ≅ Hom_A(B ⊗_B W, V)   by (B.16)
≅ Hom_A(Res^B_A W, V).


Twisting. For a given V ∈ Rep A, we may use restriction or induction along some α ∈ Aut_{Alg_k}(A) to obtain a new representation of A, called a twist of V. For restriction, each a ∈ A acts via α(a)_V on α^*V = V. Using induction instead, we have α_*V = A ⊗_A V ≅ V, with 1 ⊗ v ↔ v, and

a.(1 ⊗ v) = a.1 ⊗ v = a ⊗ v = 1.α⁻¹(a) ⊗ v = 1 ⊗ α⁻¹(a).v .

Thus, identifying α_*V with V as above, each a ∈ A acts via α⁻¹(a)_V. Alternatively, putting ^αV := α_*V and ^αv := 1 ⊗ v, we obtain an isomorphism ^α(·): V ⥲ ^αV in Vect_k and the result of the above calculation can be restated as a.^αv = ^α(α⁻¹(a).v) or, equivalently,

(1.24)  ^α(a.v) = α(a).^αv   (a ∈ A, v ∈ V).

Extending the Base Field. For any field extension K/k and any representation ρ: A → End_k(V) of A, we may consider the representation of the K-algebra K ⊗ A that is obtained from ρ by "extension of scalars". The resulting representation may be described as the representation Ind^{K⊗A}_A V = (K ⊗ A) ⊗_A V ≅ K ⊗ V that comes from the k-algebra map A → K ⊗ A, a ↦ 1 ⊗ a. However, we view K ⊗ A as a K-algebra as in §1.1.1, moving from Alg_k to Alg_K in the process. Explicitly, the action of K ⊗ A on K ⊗ V is given by (λ ⊗ a).(λ′ ⊗ v) = λλ′ ⊗ a.v for λ, λ′ ∈ K, a ∈ A, and v ∈ V; equivalently, in terms of algebra homomorphisms:

(1.25)  $K \otimes \rho \colon\; K \otimes A \xrightarrow{\ \operatorname{Id}_K \otimes\, \rho\ } K \otimes \operatorname{End}_k(V) \xrightarrow{\ \text{can.}\ } \operatorname{End}_K(K \otimes V),$

where the canonical map sends λ ⊗ f to ρ_reg(λ) ⊗ f, that is, to the K-linear endomorphism λ′ ⊗ v ↦ λλ′ ⊗ f(v).

The "canonical" map in (1.25) is a special case of (B.27); this map is always injective, and it is an isomorphism if V is finite dimensional or the field extension K/k is finite.

Example 1.10 (The polynomial algebra). Recall from Example 1.7 that the equivalence classes of n-dimensional representations of k[t] are in bijection with the set of orbits for the conjugation action GL_n(k) ↷ Mat_n(k). It is a standard fact from linear algebra (e.g., [67, Chapter 12, Corollary 18]) that if two matrices T, T′ ∈ Mat_n(k) belong to the same GL_n(K)-orbit for some field extension K/k, then T, T′ also belong to the same GL_n(k)-orbit. In other words, if V and V′ are finite-dimensional representations of k[t] such that K ⊗ V ≅ K ⊗ V′ in Rep K[t] for some field extension K/k, then we must have V ≅ V′ to start with. This does in fact hold for any k-algebra in place of k[t], by the Noether-Deuring Theorem (Exercise 1.2.5).
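A tiny illustration of the quoted fact, with matrices of my own choosing (checked with sympy): the two rotation matrices below have the same eigenvalues ±i over ℂ, and, as the Noether-Deuring Theorem predicts, they already lie in the same GL_2(ℚ)-orbit.

```python
import sympy as sp

# T and Tp (my own example) are conjugate over C, since both have eigenvalues
# +i and -i; Noether-Deuring predicts they are conjugate over the base field Q
# as well, and indeed g = diag(1, -1) does the job.
T  = sp.Matrix([[0, -1], [1, 0]])
Tp = sp.Matrix([[0, 1], [-1, 0]])
g  = sp.diag(1, -1)

s = sp.symbols('s')
assert T.charpoly(s).as_expr() == s**2 + 1    # same characteristic polynomial
assert Tp.charpoly(s).as_expr() == s**2 + 1
assert g * T * g.inv() == Tp                  # same GL_2(Q)-orbit
```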


1.2.3. Irreducible Representations

A representation ρ: A → End_k(V), a ↦ a_V, is said to be irreducible⁴ if V is an irreducible A-module. Explicitly, this means that V ≠ 0 and no k-subspace of V other than 0 and V is stable under all operators a_V with a ∈ A; equivalently, it is impossible to find a k-basis of V such that the matrices of all operators a_V have block upper triangular form

$\begin{pmatrix} * & * \\ 0 & * \end{pmatrix}$.

Example 1.11 (Division algebras). Recall that a division k-algebra is a k-algebra D ≠ 0 whose nonzero elements are all invertible: D^× = D \ {0}. Representations of D are the same as left D-vector spaces, and a representation V is irreducible if and only if dim_D V = 1. Thus, up to equivalence, the regular representation of D is the only irreducible representation of D.

Example 1.12 (Tautological representation of End_D(V)). For any k-vector space V ≠ 0, the representation of the algebra End_k(V) that is given by the identity map End_k(V) → End_k(V) is irreducible. For, if u, v ∈ V are given, with u ≠ 0, then there exists f ∈ End_k(V) such that f(u) = v. Therefore, any nonzero subspace of V that is stable under all f ∈ End_k(V) must contain all of V. The foregoing applies verbatim to any nonzero representation V of a division k-algebra D: the embedding End_D(V) → End_k(V) is an irreducible representation of the algebra End_D(V). If dim_D V < ∞, then this representation is in fact the only irreducible representation of End_D(V) up to equivalence; this is a consequence of Wedderburn's Structure Theorem (§1.4.4).

Example 1.13 (The standard representation of the Weyl algebra). Recall from Example 1.8 that the standard representation of A_1(k) is the algebra homomorphism A_1(k) → End_k(V) with V = k[t] that is given by x_V = t· and y_V = d/dt. If char k = 0, then the standard representation is irreducible. To see this, let U ⊆ V be any nonzero subrepresentation of V and let 0 ≠ f ∈ U be a polynomial of minimal degree among all nonzero polynomials in U. Then f ∈ k^×; for, if deg f > 0, then 0 ≠ (d/dt)f = y.f ∈ U and (d/dt)f has smaller degree than f. Therefore, k ⊆ U, and repeated application of x_V gives that kt^n ⊆ U for all n. This shows that U = V, proving irreducibility of the standard representation for char k = 0. There are many more irreducible representations of A_1(k) in characteristic 0; see Block [17]. For irreducible representations of A_1(k) in positive characteristics, see Exercise 1.2.9.

4Irreducible representations are also called simple and they are often informally referred to as “irreps”.
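The two computational steps behind Examples 1.8 and 1.13 can be replayed in a few lines (an illustrative sketch using sympy; the helper names x_act and y_act are mine): the standard representation satisfies the defining relation ba = ab + Id, and in characteristic 0 repeated differentiation lowers degree down to a nonzero constant, which is the key step of the irreducibility argument.

```python
import sympy as sp

t = sp.symbols('t')
x_act = lambda f: sp.expand(t * f)    # x acts as multiplication by t
y_act = lambda f: sp.diff(f, t)       # y acts as formal differentiation d/dt

# Defining relation of A_1(k) in the standard representation:
# (yx - xy).f = f for every polynomial f, i.e. ba = ab + Id.
f = 3*t**4 - 2*t + 7
assert sp.expand(y_act(x_act(f)) - x_act(y_act(f))) == f

# Step from the irreducibility proof (char 0): repeatedly applying y = d/dt
# to a nonzero polynomial eventually produces a nonzero constant.
g = f
while sp.degree(g, t) > 0:
    g = y_act(g)
assert g != 0 and sp.degree(g, t) == 0
```

In characteristic p this degree-lowering step fails, since d/dt kills t^p; compare Exercise 1.2.9.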


One of the principal, albeit often unachievable, goals of representation theory is to provide, for a given k-algebra A, a good description of the following set:

Irr A := the set of equivalence classes of irreducible representations of A.

Of course, Irr A can also be thought of as the set of isomorphism classes of irreducible left A-modules. We will generally use Irr A to denote a full set of representatives of the equivalence classes, and S ∈ Irr A will indicate that S is an irreducible representation of A. To see that Irr A is indeed a set, we observe that every irreducible representation of A is a homomorphic image of the regular representation A_reg. To wit:

Lemma 1.14. A full representative set for Irr A is furnished by the nonequivalent factors A_reg/L, where L is a maximal left ideal of A. In particular, dim_k S ≤ dim_k A for all S ∈ Irr A.

Proof. If S is an irreducible representation of A, then any 0 ≠ s ∈ S gives rise to a homomorphism of representations f: A_reg → S, a ↦ as. Since Im f is a nonzero subrepresentation of S, it must be equal to S. Thus, f is an epimorphism and S ≅ A_reg/L with L = Ker f; this is a maximal left ideal of A by irreducibility of S. Conversely, all factors of the form A_reg/L, where L is a maximal left ideal of A, are irreducible left A-modules, and hence we may select our equivalence classes of irreducible representations of A from the set of these factors. The last assertion of the lemma is now clear.

Example 1.15 (The polynomial algebra). By Example 1.7, a representation V ∈ Rep k[t] corresponds to an endomorphism τ = t_V ∈ End_k(V). Lemma 1.14 further tells us that irreducible representations of k[t] have the form V ≅ k[t]_reg/L, where L is a maximal ideal of k[t]; so L = (m(t)) for a unique monic irreducible polynomial m(t) ∈ k[t]. Note that m(t) is the characteristic polynomial of τ = t_V. Thus, an irreducible representation of k[t] is given by a finite-dimensional V ∈ Vect_k and an endomorphism τ ∈ End_k(V) whose characteristic polynomial is irreducible. In particular, if k is algebraically closed, then all irreducible representations of k[t] are 1-dimensional.
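Over a field that is not algebraically closed, Example 1.15 can be made concrete (an illustrative sketch over k = ℚ; the matrix and names are mine): the companion matrix of the irreducible polynomial m(t) = t² + 1 defines a 2-dimensional irreducible representation of ℚ[t], and irreducibility shows up in the fact that every nonzero vector is cyclic.

```python
import sympy as sp

s = sp.symbols('s')
m = s**2 + 1
# m has no root in Q, and more precisely is irreducible over Q.
assert sp.Poly(m, s, domain='QQ').is_irreducible

tau = sp.Matrix([[0, -1], [1, 0]])    # companion matrix of t^2 + 1
assert tau.charpoly(s).as_expr() == m

# No proper nonzero k[t]-stable subspace: for a nonzero v, the pair
# (v, tau.v) already spans the 2-dimensional space.
v = sp.Matrix([2, 3])
assert sp.Matrix.hstack(v, tau * v).det() != 0
```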
In sharp contrast to the polynomial algebra, the Weyl algebra A_1(k) has no nonzero finite-dimensional representations at all if char k = 0 (Example 1.8). In general, "most" irreducible representations of a typical infinite-dimensional noncommutative algebra will tend to be infinite dimensional, but the finite-dimensional ones, insofar as they exist, are of particular interest. Therefore, for any k-algebra A, we denote the full subcategory of Rep A whose objects are the finite-dimensional representations of A by Rep_fin A


and we also define

Irr_fin A := {S ∈ Irr A | dim_k S < ∞}.

1.2.4. Composition Series

Every nonzero V ∈ Rep_fin A can be assembled from irreducible pieces in the following way. To start, pick some irreducible subrepresentation V_1 ⊆ V; any nonzero subrepresentation of minimal dimension will do. If V_1 ≠ V, then we may similarly choose an irreducible subrepresentation of V/V_1, which will have the form V_2/V_1 for some subrepresentation V_2 ⊆ V. If V_2 ≠ V, then we continue in the same manner. Since V is finite dimensional, the process must stop after finitely many steps, resulting in a finite chain

(1.26)

0 = V0 ⊂ V1 ⊂ · · · ⊂ Vl = V

of subrepresentations V_i such that all V_i/V_{i−1} are irreducible. An analogous construction can sometimes be carried out even when the representation V ∈ Rep A is infinite dimensional (Exercises 1.2.10, 1.2.11). Any chain of the form (1.26), with irreducible factors V̄_i = V_i/V_{i−1}, is called a composition series of V and the number l is called the length of the series. If a composition series (1.26) is given and a k-basis of V is assembled from bases of the V̄_i, then the matrices of all operators a_V (a ∈ A) have block upper triangular form, with (possibly infinite) diagonal blocks coming from the irreducible representations V̄_i:

$\begin{pmatrix} a_{\bar V_1} & & * \\ & \ddots & \\ 0 & & a_{\bar V_l} \end{pmatrix}$

Example 1.16 (The polynomial algebra). Let V ∈ Rep_fin k[t] and assume that k is algebraically closed. Then, in view of Example 1.15, fixing a composition series for V amounts to the familiar process of choosing a k-basis of V such that the matrix of the endomorphism t_V ∈ End_k(V) is upper triangular. The eigenvalues of t_V occupy the diagonal of the matrix.
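Example 1.16 is easy to replay computationally (an illustrative sketch with a 3 × 3 matrix of my own choosing): the Jordan form is an upper triangular matrix realizing a composition series, and the eigenvalues of t_V, which label the 1-dimensional composition factors, sit on the diagonal.

```python
import sympy as sp

# Illustration of Example 1.16 (my own matrix): triangularizing t_V amounts
# to choosing a composition series; the diagonal entries of the triangular
# form are the eigenvalues, i.e. the 1-dimensional composition factors.
T = sp.Matrix([[5, 1, 0],
               [0, 5, 2],
               [0, 0, 3]])
P, J = T.jordan_form()                  # J = P^{-1} T P is upper triangular

assert P.inv() * T * P == J
assert J.is_upper
diag = [J[i, i] for i in range(J.rows)]
assert sorted(diag) == sorted(T.eigenvals(multiple=True))
```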

Example 1.17 (Composition series need not exist). If A is any domain (not necessarily commutative) that is not a division algebra, then the regular representation A_reg does not have a composition series; in fact, A_reg does not even contain any irreducible subrepresentations. To see this, observe that subrepresentations of A_reg are the same as left ideals of A. Moreover, if L is any nonzero left ideal of A, then there exists some 0 ≠ a ∈ L with a ∉ A^×. Then L ⊇ Aa ⊋ Aa² ≠ 0, showing that L is not irreducible.


The Jordan-Hölder Theorem

Representations that admit a composition series are said to be of finite length. The reason for this terminology will be clearer shortly. For now, we just remark that finite-length representations of a division algebra D are the same as finite-dimensional left D-vector spaces (Example 1.11). For any algebra A, the class of all finite-length representations V ∈ Rep A behaves quite well in several respects. Most importantly, all composition series of any such V are very much alike; this is the content of the Jordan-Hölder Theorem, which is stated as part (b) of the theorem below. Part (a) shows that the property of having finite length also transfers well in short exact sequences in Rep A, that is, sequences of morphisms in Rep A of the form

(1.27)  $0 \to U \xrightarrow{\ f\ } V \xrightarrow{\ g\ } W \to 0$

with f injective, g surjective, and Im f = Ker g (as in §B.1.1).

Theorem 1.18. (a) Given a short exact sequence (1.27) in Rep A, the representation V has finite length if and only if both U and W do.

(b) Let 0 = V_0 ⊂ V_1 ⊂ · · · ⊂ V_l = V and 0 = V′_0 ⊂ V′_1 ⊂ · · · ⊂ V′_{l′} = V be two composition series of V ∈ Rep A. Then l = l′ and there exists a permutation s of {1, . . . , l} such that V_i/V_{i−1} ≅ V′_{s(i)}/V′_{s(i)−1} for all i.

Proof. (a) First, assume that U and W have finite length and fix composition series 0 = U_0 ⊂ U_1 ⊂ · · · ⊂ U_r = U and 0 = W_0 ⊂ W_1 ⊂ · · · ⊂ W_s = W. These series can be spliced together to obtain a composition series for V as follows. Put X_i = f(U_i) and Y_j = g⁻¹(W_j). Then X_i/X_{i−1} ≅ U_i/U_{i−1} via f and Y_j/Y_{j−1} ≅ W_j/W_{j−1} via g. Thus, the following is a composition series of V:

(1.28)

0 = X0 ⊂ X1 ⊂ · · · ⊂ Xr = Y0 ⊂ Y1 ⊂ · · · ⊂ Ys = V .

Conversely, assume that V has a composition series (1.26). Put U_i = f⁻¹(V_i) and observe that U_i/U_{i−1} ↪ V_i/V_{i−1} via f; so each factor U_i/U_{i−1} is either 0 or irreducible (in fact, isomorphic to V_i/V_{i−1}). Therefore, deleting repetitions from the chain 0 = U_0 ⊆ U_1 ⊆ · · · ⊆ U_l = U if necessary, we obtain a composition series for U. Similarly, putting W_i = g(V_i), each factor W_i/W_{i−1} is a homomorphic image of V_i/V_{i−1}, and so we may again conclude that W_i/W_{i−1} is either 0 or irreducible. Thus, we obtain the desired composition series of W by deleting superfluous members from the chain 0 = W_0 ⊆ W_1 ⊆ · · · ⊆ W_l = W. This proves (a). In preparation for the proof of (b), let us also observe that if U ≠ 0 or, equivalently, Ker g ≠ 0, then some factor W_i/W_{i−1} will definitely be 0 in the above construction. Indeed, there is an i such that Ker g ⊆ V_i but Ker g ⊄ V_{i−1}. Irreducibility of V_i/V_{i−1} forces V_i = V_{i−1} + Ker g and so W_i = W_{i−1}. Therefore, W has a composition series of shorter length than the given composition series of V.


(b) We will argue by induction on ℓ(V), which we define to be the minimum length of any composition series of V. If ℓ(V) = 0, then V = 0 and the theorem is clear. From now on assume that V ≠ 0. For each subrepresentation 0 ≠ U ⊆ V, the factor V/U also has a composition series by part (a), and the observation in the preceding paragraph tells us that ℓ(V/U) < ℓ(V). Thus, by induction, the theorem holds for all factors V/U with U ≠ 0. Now consider two composition series as in the theorem. If V_1 = V′_1, then

(1.29)

0 = V1 /V1 ⊂ V2 /V1 ⊂ · · · ⊂ Vl /V1 = V /V1

and 0 = V′_1/V_1 ⊂ V′_2/V_1 ⊂ · · · ⊂ V′_{l′}/V_1 = V/V_1 are two composition series of V/V_1 with factors isomorphic to V_i/V_{i−1} (i = 2, . . . , l) and V′_j/V′_{j−1} (j = 2, . . . , l′), respectively. Thus, the result follows in this case, because the theorem holds for V/V_1. So assume that V_1 ≠ V′_1 and note that this implies V_1 ∩ V′_1 = 0 by irreducibility of V_1 and V′_1. First, let us consider composition series for V/V_1. One is already provided by (1.29). To build another, put U = V_1 ⊕ V′_1 ⊆ V and fix a composition series for V/U, say 0 ⊂ U_1/U ⊂ · · · ⊂ U_s/U = V/U. Then we obtain the following composition series for V/V_1:

(1.30)

0 ⊂ U/V1 ⊂ U1 /V1 ⊂ · · · ⊂ Us /V1 = V /V1 .

The first factor of this series is U/V_1 ≅ V′_1 and the remaining factors are isomorphic to U_i/U_{i−1} (i = 1, . . . , s), with U_0 := U. Since the theorem holds for V/V_1, the collections of factors in (1.29) and (1.30), with multiplicities, are the same up to isomorphism. Adding V_1 to both collections, we conclude that there is a bijective correspondence between the following two families of irreducible representations, with corresponding representations being isomorphic:

V_i/V_{i−1} (i = 1, . . . , l)   and   V_1, V′_1, U_i/U_{i−1} (i = 1, . . . , s).

Considering V/V′_1 in place of V/V_1, we similarly obtain a bijection between the family on the right and V′_j/V′_{j−1} (j = 1, . . . , l′), which implies the theorem.

Length

In light of the Jordan-Hölder Theorem, we may define the length of any finite-length representation V ∈ Rep A by

length V := the common length of all composition series of V.

If V has no composition series, then we put length V = ∞. Thus, length V = 0 means that V = 0 and length V = 1 says that V is irreducible. If A = D is a division algebra, then length V = dim_D V. In general, for any short exact sequence

1.2. Representations

35

0 → U → V → W → 0 in Rep A, we have the following generalization of a standard dimension formula for vector spaces (with the usual rules regarding ∞):

(1.31) length V = length U + length W.

To see this, just recall that any two composition series of U and W can be spliced together to obtain a composition series for V as in (1.28). The Jordan-Hölder Theorem also tells us that, up to isomorphism, the collection of factors V_i/V_{i−1} ∈ Irr A occurring in (1.26) is independent of the particular choice of composition series of V. These factors are called the composition factors of V. The number of occurrences, again up to isomorphism, of a given S ∈ Irr A as a composition factor in any composition series of V is also independent of the choice of series; it is called the multiplicity of S in V. We will write

μ(S, V) := multiplicity of S in V.

For any finite-length representation V ∈ Rep A, we evidently have

length V = ∑_{S ∈ Irr A} μ(S, V).

Finally, by the same argument as above, (1.31) can be refined to the statement that multiplicities are additive in short exact sequences 0 → U → V → W → 0 in Rep A: for every S ∈ Irr A,

(1.32) μ(S, V) = μ(S, U) + μ(S, W).
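By the Structure Theorem for Modules over PIDs (invoked again in §1.2.6 below), the composition factors of V = k[t]/(f) are the simples k[t]/(p), one for each monic irreducible divisor p of f, and μ(k[t]/(p), V) is the exponent of p in f. A minimal computational sketch of this bookkeeping, using SymPy over k = ℚ (the helper name is ours, not the book's):

```python
# Composition multiplicities of the k[t]-module V = k[t]/(f) over k = Q:
# mu(k[t]/(p), V) is the exponent of the irreducible p in f, and
# length V is the sum of these exponents, as in the formula above.
from sympy import Poly, factor_list, symbols

t = symbols('t')

def composition_multiplicities(f):
    """Map each monic irreducible divisor p of f to its multiplicity."""
    _, factors = factor_list(Poly(f, t))
    return {p.as_expr(): e for p, e in factors}

f = t**2 * (t - 1)**3
mults = composition_multiplicities(f)   # {t: 2, t - 1: 3}
length = sum(mults.values())            # length k[t]/(f) = 5
```

In particular, additivity (1.32) is visible here: multiplying coprime polynomials adds the exponent vectors of their factorizations.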

1.2.5. Endomorphism Algebras and Schur's Lemma

The following general lemma describes the endomorphism algebras of irreducible representations. Although very easy, it will be of great importance in the following.

Schur's Lemma. Let S ∈ Irr A. Then every nonzero morphism S → V in Rep A is injective and every nonzero morphism V → S is surjective. In particular, End_A(S) is a division k-algebra. If S ∈ Irr_fin A, then End_A(S) is algebraic over k.

Proof. If f: S → V is nonzero, then Ker f is a subrepresentation of S with Ker f ≠ S. Since S is irreducible, it follows that Ker f = 0 and so f is injective. Similarly, for any 0 ≠ f ∈ Hom_A(V, S), we must have Im f = S, because Im f is a nonzero subrepresentation of S. It follows that any nonzero morphism between irreducible representations of A is injective as well as surjective, and hence it is an isomorphism. In particular, all nonzero elements of the algebra End_A(S) have an inverse, proving that End_A(S) is a division k-algebra. Finally, if S is finite dimensional over k, then so is End_k(S). Hence, for each f ∈ End_k(S), the powers f^i (i ∈ Z_+) are linearly dependent and so f satisfies a nonzero polynomial over k. Consequently, the division algebra End_A(S) is algebraic over k. □

We will refer to End_A(S) as the Schur division algebra of the irreducible representation S and write

D(S) := End_A(S).

The Weak Nullstellensatz. Algebras A such that D(S) is algebraic over k for all S ∈ Irr A are said to satisfy the weak Nullstellensatz. See the discussion in §5.6.1 and in Appendix C for the origin of this terminology. Thus, finite-dimensional algebras certainly satisfy the weak Nullstellensatz, because all their irreducible representations are finite dimensional (Lemma 1.14). The weak Nullstellensatz will later also be established, in a more laborious manner, for certain infinite-dimensional algebras (Section 5.6). Exercise 1.2.12 discusses a "quick and dirty" way to obtain the weak Nullstellensatz under the assumption that the cardinality of the base field k is larger than dim_k A. In particular, if k is uncountable, then any affine k-algebra satisfies the weak Nullstellensatz.

Splitting Fields. We will say that the base field k of a k-algebra A is a splitting field for A if D(S) = k for all S ∈ Irr_fin A. By Schur's Lemma, this certainly holds if k is algebraically closed, but often much less is required; see Corollary 4.16 below for an important example. We will elaborate on the significance of the condition D(S) = k in the next paragraph and again in Proposition 1.36.

Centralizers and Double Centralizers. The endomorphism algebra End_A(V) of an arbitrary V ∈ Rep A is the centralizer of A_V in End_k(V):

(1.21) End_A(V) = { f ∈ End_k(V) | a_V ∘ f = f ∘ a_V for all a ∈ A } = C_{End_k(V)}(A_V).

The centralizer of End_A(V) in End_k(V) is called the bicommutant or double centralizer of the representation V; it may also be described as the endomorphism algebra of V, viewed as a representation of End_A(V) via the inclusion End_A(V) → End_k(V). Thus, we define

BiEnd_A(V) := C_{End_k(V)}(End_A(V)) = End_{End_A(V)}(V).
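For a finite-dimensional V specified by matrices, the centralizer description of End_A(V) is a finite linear system: X must commute with each matrix through which A acts. The following sketch (NumPy; the function name and the vectorization trick are ours, not notation from the text) computes dim_k End_A(V) and illustrates Schur's Lemma for V = k² under the matrix units e_12 and e_21, which generate all of M_2(k):

```python
# End_A(V) as a centralizer: solve X m = m X for all acting matrices m.
# With column-stacking vec: vec(Xm - mX) = (m^T (x) I - I (x) m) vec(X).
import numpy as np

def endomorphism_dim(mats):
    """dim_k End_A(V) for V = k^n with A acting through the matrices mats."""
    n = mats[0].shape[0]
    blocks = [np.kron(m.T, np.eye(n)) - np.kron(np.eye(n), m) for m in mats]
    system = np.vstack(blocks)
    return n * n - np.linalg.matrix_rank(system)

e12 = np.array([[0.0, 1.0], [0.0, 0.0]])
e21 = np.array([[0.0, 0.0], [1.0, 0.0]])

# e12 and e21 generate all of M_2(k), so V = k^2 is irreducible and the
# centralizer is the scalars k*Id, as Schur's Lemma predicts:
dim_irr = endomorphism_dim([e12, e21])
# A single diagonalizable matrix with distinct eigenvalues: the
# centralizer is the diagonal matrices, of dimension 2.
dim_diag = endomorphism_dim([np.diag([1.0, 2.0])])
```

The first computation returns 1, i.e. End_A(V) = k Id_V, which by (1.34) is exactly the condition BiEnd_A(V) = End_k(V).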

Evidently, A_V ⊆ BiEnd_A(V); so we may think of any representation of A as an algebra map:

(1.33) ρ: A → BiEnd_A(V) ⊆ End_k(V), a ↦ a_V.

Moreover, BiEnd_A(V) = End_k(V) if and only if End_A(V) ⊆ Z(End_k(V)). Thus,

(1.34) BiEnd_A(V) = End_k(V) ⟺ End_A(V) = k Id_V.

Consequently, k is a splitting field for A if and only if BiEnd_A(S) = End_k(S) for all S ∈ Irr_fin A.

1.2.6. Indecomposable Representations

A nonzero V ∈ Rep A is said to be indecomposable if V cannot be written as a direct sum of nonzero subrepresentations. Irreducible representations are evidently indecomposable, but the converse is far from true. For example, A_reg is indecomposable for any commutative domain A, because any two nonzero subrepresentations (ideals) of A intersect nontrivially; the same holds for the field of fractions of A, because nonzero A-submodules of Fract A have nonzero intersection with A.

Example 1.19 (The polynomial algebra). The Structure Theorem for Modules over PIDs ([67, Chapter 12] or [114, Chapter 3]) yields all indecomposable representations of k[t] that are finitely generated: the only infinite-dimensional such representation, up to isomorphism, is k[t]_reg, and every finite-dimensional such V is isomorphic to k[t]_reg/(p^r) for a unique monic irreducible polynomial p ∈ k[t]. The Structure Theorem also tells us that an arbitrary V ∈ Rep_fin k[t] is the direct sum of indecomposable subrepresentations corresponding to the elementary divisors p^r of V. This decomposition of V is unique up to the isomorphism type of the summands and their order in the sum.

In this subsection, we will see that the existence and uniqueness of the decomposition in Example 1.19 holds for any k-algebra A. To start with, it is clear that any V ∈ Rep_fin A can be decomposed into a finite direct sum of indecomposable subrepresentations. Indeed, V = 0 is a direct sum with zero indecomposable summands; and any 0 ≠ V ∈ Rep_fin A is either already indecomposable or else V = V_1 ⊕ V_2 for nonzero subrepresentations V_i which both have a decomposition of the desired form by induction on the dimension. More interestingly, the decomposition of V thus obtained is essentially unique. This is the content of the following classical theorem, which is usually attributed to Krull and Schmidt.⁵
⁵ Various generalizations of the theorem also have the names of Remak and/or Azumaya attached.

Krull-Schmidt Theorem. Any finite-dimensional representation of an algebra can be decomposed into a finite direct sum of indecomposable subrepresentations, and this decomposition is unique up to the order of the summands and up to isomorphism.

More explicitly, the uniqueness statement asserts that if ⊕_{i=1}^r V_i ≅ ⊕_{j=1}^s W_j for indecomposable V_i, W_j ∈ Rep_fin A, then r = s and there is a permutation σ of the indices such that V_i ≅ W_{σ(i)} for all i. The proof will depend on the following lemma.

Lemma 1.20. Let V ∈ Rep_fin A be indecomposable. Then each φ ∈ End_A(V) is either an automorphism or nilpotent. Furthermore, the nilpotent endomorphisms form an ideal of End_A(V).

Proof. Viewing V as a representation of the polynomial algebra k[t] with t_V = φ, we know from the Structure Theorem for Modules over PIDs that V is the direct sum of its primary components,

V(p) = { v ∈ V | p(φ)^r(v) = 0 for some r ∈ Z_+ },

where p ∈ k[t] runs over the monic irreducible factors of the minimal polynomial of φ. Each V(p) is an A-subrepresentation of V. Since V is assumed indecomposable, there can only be one nonzero component. Thus, p(φ)^r = 0 for some monic irreducible p ∈ k[t] and some r ∈ Z_+. If p = t, then φ^r = 0. Otherwise, 1 = ta + p^r b for suitable a, b ∈ k[t] and it follows that a(φ) = φ^{-1}. This proves the first assertion. For the second assertion, consider φ, ψ ∈ End_A(V). If φ ∘ ψ is bijective, then so are both φ and ψ. Thus, we only need to show that if φ, ψ are nilpotent, then θ = φ + ψ is nilpotent as well. But otherwise θ is an automorphism and Id_V − θ^{-1} ∘ φ = θ^{-1} ∘ ψ. The right-hand side is nilpotent, whereas the left-hand side

has inverse ∑_{i≥0} (θ^{-1} ∘ φ)^i, giving the desired contradiction. □

The ideal N = {φ ∈ End_A(V) | φ is nilpotent} in Lemma 1.20 clearly contains all proper left and right ideals of End_A(V), and End_A(V)/N is a division algebra. Thus, the algebra End_A(V) is local.

Proof of the Krull-Schmidt Theorem. Only uniqueness remains to be addressed. So let V := ⊕_{i=1}^r V_i and W := ⊕_{j=1}^s W_j be given with indecomposable V_i, W_j ∈ Rep_fin A and assume that φ: V −∼→ W is an isomorphism. Let μ_i: V_i → V and π_i: V ↠ V_i be the standard embedding and projection maps as in §1.1.4; so

π_j ∘ μ_i = δ_{i,j} Id_{V_i} and ∑_i μ_i ∘ π_i = Id_V. Similarly, we also have μ_j′: W_j → W and π_j′: W ↠ W_j. The maps α_j := π_j′ ∘ φ ∘ μ_1: V_1 → W_j and β_j := π_1 ∘ φ^{-1} ∘ μ_j′: W_j → V_1

satisfy ∑_j β_j ∘ α_j = Id_{V_1}. It follows from Lemma 1.20 that some β_j ∘ α_j must be an automorphism of V_1; after renumbering if necessary, we may assume that j = 1.

Since W_1 is indecomposable, it further follows that α_1 and β_1 are isomorphisms (Exercise 1.1.2); so V_1 ≅ W_1. Finally, consider the map

ψ: V_{>1} := ⊕_{i>1} V_i --μ_{>1}--> V --φ--> W --π_{>1}′--> W_{>1} := ⊕_{j>1} W_j,

where μ_{>1} and π_{>1}′ again are the standard embedding and projection maps. It suffices to show that ψ is injective. For, then ψ must be an isomorphism for dimension reasons and an induction finishes the proof. So let v ∈ Ker ψ. Then φ ∘ μ_{>1}(v) = μ_1′(w) for some w ∈ W_1 and β_1(w) = π_1 ∘ φ^{-1} ∘ φ ∘ μ_{>1}(v) = 0. Since β_1 is mono, it follows that w = 0, and since φ ∘ μ_{>1} is mono as well, it further follows that v = 0, as desired. □
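The device in the proof of Lemma 1.20, decomposing V into the primary components V(p), is easy to compute for a concrete matrix φ: factor the characteristic polynomial and take kernels of p(φ)^n. A small sketch using SymPy over ℚ (helper names are ours); here φ has two nonzero primary components, so the corresponding k[t]-module is decomposable:

```python
# Primary components of a k[t]-module: V(p) = Ker p(phi)^n for each
# monic irreducible factor p of the characteristic polynomial of phi
# (exponent n = dim V is large enough to absorb any nilpotency).
from sympy import Matrix, Poly, eye, factor_list, symbols, zeros

t = symbols('t')
phi = Matrix([[0, 1, 0],
              [0, 0, 0],
              [0, 0, 2]])
n = phi.shape[0]

def evaluate(p, m):
    """p(m) for a polynomial p in t and a square matrix m (Horner scheme)."""
    result = zeros(*m.shape)
    for c in Poly(p, t).all_coeffs():   # leading coefficient first
        result = result * m + c * eye(m.shape[0])
    return result

charpoly = phi.charpoly(t).as_expr()    # t**2 * (t - 2) for this phi
_, factors = factor_list(charpoly)
dims = {p.as_expr(): len((evaluate(p, phi)**n).nullspace())
        for p, _ in factors}
# dims == {t: 2, t - 2: 1}: a 2-dimensional component on which phi is
# nilpotent and a 1-dimensional one on which it is invertible.
```

Two nonzero components means this module is not indecomposable, matching the dichotomy of Lemma 1.20: an endomorphism of an indecomposable module has a single primary component.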

Exercises for Section 1.2 Unless mentioned otherwise, A ∈ Algk is arbitrary in these exercises. 1.2.1 (Kernels). Given a map φ : A → B in Algk , consider the functors φ∗ = ResBA : Rep B → Rep A and φ∗ = IndBA : Rep A → Rep B (§1.2.2). Show: (a) Ker(φ∗V ) = φ−1 (Ker V ) for V ∈ Rep B.

(b) Assume that B is free as a right A-module via φ. Then, for any W ∈ Rep A, Ker(φ_*W) = {b ∈ B | bB ⊆ Bφ(Ker W)}, the largest ideal of B that is contained in the left ideal Bφ(Ker W).

1.2.2 (Faithfulness). Let V ∈ Rep A be such that Res^A_{ZA} V is finitely generated. Show that V is faithful if and only if A_reg embeds into V^{⊕n} for some n ∈ N.

1.2.3 (Twisting representations). For V ∈ Rep A and α ∈ Aut_Alg_k(A), consider the twisted representation ^αV as in (1.24). Show:
(a) ^α(^βV) ≅ ^{α∘β}V for all α, β ∈ Aut_Alg_k(A).
(b) If α is an inner automorphism, that is, α(a) = uau^{-1} for some u ∈ A^×, then ^αV ≅ V in Rep A via ^αv ↔ u.v.
(c) The map V −∼→ ^αV yields a bijection between the subrepresentations of V and ^αV. In particular, ^αV is irreducible, completely reducible, has finite length, etc. if and only if this holds for V.
(d) ^α(A_reg) ≅ A_reg via ^αa ↔ α(a).

1.2.4 (Extension of scalars for homomorphisms). For given representations V, W ∈ Rep A and a given field extension K/k, show that the K-linear map (B.27) restricts to a K-linear map K ⊗ Hom_A(V, W) → Hom_{K⊗A}(K⊗V, K⊗W), λ ⊗ f ↦ ρ_reg(λ) ⊗ f. Use the facts stated in §B.3.4 to show that this map is always injective and that it is bijective if V is finite dimensional or the field extension K/k is finite.

1.2.5 (Noether-Deuring Theorem). Let V, W ∈ Rep_fin A and let K/k be a field extension. The Noether-Deuring Theorem states that K⊗V ≅ K⊗W in Rep(K⊗A) if and only if V ≅ W in Rep A. To prove the nontrivial direction, assume that K⊗V ≅ K⊗W in Rep(K⊗A) and complete the following steps.
(a) Fix a k-basis (φ_i)_{i=1}^t of Hom_A(V, W) and identify Hom_{K⊗A}(K⊗V, K⊗W)

with ⊕_{i=1}^t K ⊗ φ_i (Exercise 1.2.4). Observe that det(∑_i λ_i ⊗ φ_i) is a homogeneous polynomial f(λ_1, ..., λ_t) of degree n = dim_k V = dim_k W over k and that f(λ_1, ..., λ_t) ≠ 0 for some (λ_i) ∈ K^t.
(b) If |k| ≥ n, conclude that f(λ_1, ..., λ_t) ≠ 0 for some (λ_i) ∈ k^t (Exercise C.3.2). Deduce that ∑_i λ_i φ_i ∈ Hom_A(V, W) is an isomorphism.
(c) If |k| < n, then choose some finite field extension F/k with |F| > n and elements μ_i ∈ F with f(μ_1, ..., μ_t) ≠ 0 to obtain F⊗V ≅ F⊗W. Conclude that V^{⊕d} ≅ W^{⊕d} in Rep A, with d = [F : k]. Invoke the Krull-Schmidt Theorem (§1.2.6) to further conclude that V ≅ W in Rep A.

1.2.6 (Reynolds operators). Let B be a subalgebra of A. A Reynolds operator for the extension B ⊆ A, by definition, is a map π: A → B in _B Mod_B such that π|_B = Id_B.⁶ Assuming that such a map π exists, prove:
(a) If A is left (or right) noetherian, then so is B. Likewise for left (or right) artinian A. (See Exercise 1.1.6.)
(b) Let W ∈ Rep B. The composite of π ⊗_B Id_W: Ind_B^A W → Ind_B^B W with the canonical isomorphism Ind_B^B W ≅ W is an epimorphism Res_B^A Ind_B^A W ↠ W in Rep B that is split by the map σ: W → Res_B^A Ind_B^A W, w ↦ 1 ⊗ w. (See Exercise 1.1.2.) Conclude that W is isomorphic to a direct summand of Res_B^A Ind_B^A W.
(c) Similarly, the composite of the canonical isomorphism W ≅ Coind_B^B W with π^*: Coind_B^B W → Coind_B^A W is a monomorphism ψ_π: W → Res_B^A Coind_B^A W in Rep B that splits the map τ: Res_B^A Coind_B^A W → W, f ↦ f(1). Thus, W is isomorphic to a direct summand of Res_B^A Coind_B^A W. The unique lift of ψ_π to a map Ψ_π: Ind_B^A W → Coind_B^A W in Rep A (Proposition 1.9) satisfies τ ∘ Ψ_π ∘ σ = Id_W.

1.2.7 (Cofinite subalgebras). Let B be a subalgebra of A such that A is finitely generated as a left B-module, say A = Ba_1 + ··· + Ba_m.
(a) Show that, for any W ∈ Rep B, there is a k-linear embedding Coind_B^A W → W^{⊕m} given by f ↦ (f(a_i)).
(b) Let 0 ≠ V ∈ Rep A be finitely generated. Use Exercise 1.1.3(a) to show that, for some W ∈ Irr B, there is an epimorphism Res_B^A V ↠ W in Rep B.
(c) Conclude from (a), (b), and Proposition 1.9 that, for every V ∈ Irr A, there exists some W ∈ Irr B such that V embeds into W^{⊕m} as a k-vector space.

1.2.8 (Commutative cofinite subalgebras). Let A be an affine k-algebra having a commutative subalgebra B ⊆ A such that A is finitely generated as a left B-module.

⁶ Reynolds operators are also referred to as conditional expectations in the theory of operator algebras.

Use the weak Nullstellensatz (Section C.1), the Artin-Tate Lemma (Exercise 1.1.8), and Exercise 1.2.7(c) to show that all V ∈ Irr A are finite dimensional.

1.2.9 (Representations of the Weyl algebra). Let A = A_1(k) denote the Weyl algebra and let V = k[t] be the standard representation of A (Examples 1.8 and 1.13).
(a) Show that V ≇ A_reg in Rep A. Show also that V is faithful if char k = 0 (and recall from Example 1.13 that V is also irreducible in this case), but V is neither irreducible nor faithful if char k = p > 0; determine Ker V in this case.
(b) Assuming k to be algebraically closed with char k = p > 0, show that all S ∈ Irr A have dimension p. (Use Exercises 1.1.15(c) and 1.2.8.)

1.2.10 (Finite length and chain conditions). Show that V ∈ Rep A has finite length if and only if V is artinian and noetherian. Deduce Theorem 1.18(a) from this fact. (See Exercise 1.1.5.)

1.2.11 (Finite length and filtrations). A filtration of length l of V ∈ Rep A, by definition, is any chain of subrepresentations F: 0 = V_0 ⊂ V_1 ⊂ ··· ⊂ V_l = V. If all V_i also occur in another filtration of V, then the latter filtration is called a refinement of F; the refinement is said to be proper if it has larger length than F. Thus, a composition series of V is the same as a filtration of finite length that admits no proper refinement. Prove:
(a) If V has finite length, then any filtration F can be refined to a composition series of V.
(b) V has finite length if and only if there is a bound on the lengths of all finite-length filtrations of V.

1.2.12 (Weak Nullstellensatz for large base fields). Consider the Schur division algebra D(S) for S ∈ Irr A.
(a) Show that dim_k D(S) ≤ dim_k S ≤ dim_k A.
(b) Show that, for any division k-algebra D and any d ∈ D that is not algebraic over k, the set {(d − λ)^{-1} | λ ∈ k} is linearly independent over k.
(c) Conclude from (a) and (b) that if the cardinality |k| is strictly larger than dim_k A, then D(S) is algebraic over k.

1.3. Primitive Ideals

The investigation of the set Irr A of irreducible representations of a given algebra A, in many cases of interest, benefits from an ideal-theoretic perspective. The link between representations and ideals of A is provided by the notion of the kernel of a representation V ∈ Rep A,

Ker V = {a ∈ A | a.v = 0 for all v ∈ V}.

Isomorphic representations evidently have the same kernel, but the converse is generally far from true. For example, the standard representation and the regular representation of the Weyl algebra are not isomorphic (in any characteristic), even though they are both faithful in characteristic 0 (Exercise 1.2.9). The kernels of irreducible representations of A are called the primitive ideals⁷ of A. If S ∈ Irr A is written in the form S ≅ A/L for some maximal left ideal L of A as in Lemma 1.14, then Ker S = {a ∈ A | aA ⊆ L}; this set can also be described as the largest ideal of A that is contained in L. We shall denote the collection of all primitive ideals of A by Prim A. Thus, there always is the following surjection:

(1.35) Irr A ↠ Prim A, S ↦ Ker S.

While this map is not bijective in general, its fibers do at least afford us a rough classification of the irreducible representations of A.

1.3.1. One-dimensional Representations

Representations of dimension 1 of any algebra A are clearly irreducible. They are given by homomorphisms φ ∈ Hom_Alg_k(A, k), since End_k(V) = k if dim_k V = 1. For any such φ, we will use the notation k_φ to denote the field k with A-action a.λ = φ(a)λ for a ∈ A and λ ∈ k. The primitive ideal that is associated to the irreducible representation k_φ is Ker k_φ = Ker φ; this is an ideal of codimension 1 in A, and all codimension-1 ideals have the form Ker φ with φ ∈ Hom_Alg_k(A, k). Assuming A ≠ 0 (otherwise Irr A = ∅) and viewing k ⊆ A via the unit map, we have A = k ⊕ Ker φ, and φ(a) is the projection of a ∈ A onto the first summand. Thus, we can recover φ from Ker φ. Consequently, restricting (1.35) to 1-dimensional representations, we obtain the following bijections of sets:

(1.36) Hom_Alg_k(A, k) −∼→ {codimension-1 ideals of A}, φ ↦ Ker φ,

and

Hom_Alg_k(A, k) −∼→ {equivalence classes of 1-dimensional representations of A} ⊆ Irr A, φ ↦ k_φ.

⁷ Strictly speaking, primitive ideals should be called left primitive, since irreducible representations are irreducible left modules. Right primitive ideals, defined as the annihilators of irreducible right modules, do not always coincide with primitive ideals in the above sense [14].

1.3.2. Commutative Algebras

If A is a commutative k-algebra, then maximal left ideals are the same as maximal ideals of A. Thus, denoting the collection of all maximal ideals of A by MaxSpec A, we know from Lemma 1.14 that each S ∈ Irr A has the form S ≅ A/P for some P ∈ MaxSpec A. Since Ker(A/I) = I holds for every ideal I of A, we obtain that P = Ker S and S ≅ A/Ker S. This shows that the primitive ideals of A are exactly the maximal ideals and that (1.35) is a bijection for commutative A:

(1.37) Irr A −∼→ Prim A = MaxSpec A, A/P ↔ P.

Thus, for commutative A, the problem of describing Irr A reduces to the description of MaxSpec A. Now assume that A is affine commutative and that the base field k is algebraically closed. Then all irreducible representations A/P are 1-dimensional by Hilbert's Nullstellensatz (Section C.1). Hence, for any ideal P of A, the following are equivalent:

(1.38) P is primitive ⟺ P is maximal ⟺ A/P = k.

In view of (1.36) we obtain a bijection of sets

(1.39) Irr A −∼→ Hom_Alg_k(A, k).

With this identification, Irr A can be thought of geometrically as the set of closed points of an affine algebraic variety over k. For example, (1.7) tells us that, for the polynomial algebra k[x_1, x_2, ..., x_n], the variety in question is affine n-space k^n:

Irr k[x_1, x_2, ..., x_n] −∼→ k^n.

The pullback of an irreducible representation of A along a k-algebra map φ : B → A (Section 1.2.2) is a 1-dimensional representation of B; so we obtain a map φ∗ = ResBA : Irr A → Irr B. If B is also aﬃne commutative, then this is a morphism of aﬃne algebraic varieties [100]. These remarks place the study of irreducible representations of aﬃne commutative algebras over an algebraically closed base ﬁeld in the realm of algebraic geometry. A proper treatment of this topic is outside the scope of this book, but the geometric context sketched above provides the background for some of the material on primitive ideals to be discussed later in this section. We end our excursion on commutative algebras with a simple example.

Example 1.21. As was mentioned, the irreducible representations of the polynomial algebra A = k[x, y] over an algebraically closed field k correspond to the points of the affine plane k². Let us consider the subalgebra B = k[x², y², xy] and let φ: B → A denote the inclusion map. It is not hard to see that B ≅ k[x_1, x_2, x_3]/(x_1x_2 − x_3²); so the irreducible representations of B correspond to the points of the cone x_3² = x_1x_2 in k³. The restriction map φ^* = Res_B^A: Irr A → Irr B sends a point (λ, μ) ∈ Irr k[x, y] = k² to (λ², μ², λμ) ∈ Irr k[x², y², xy]; this map is surjective.
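Over k = ℂ, the surjectivity of φ^* in Example 1.21 amounts to the statement that every point (a, b, c) of the cone c² = ab lifts to some (λ, μ) ∈ k² with (λ², μ², λμ) = (a, b, c). A quick sketch of such a lift (plain Python; the function names are ours):

```python
# phi^*: Irr k[x,y] -> Irr k[x^2, y^2, xy] from Example 1.21, over k = C.
import cmath

def restrict(lam, mu):
    """Image of (lam, mu) under phi^*: evaluate x^2, y^2, xy at the point."""
    return (lam**2, mu**2, lam * mu)

def lift(a, b, c):
    """Some preimage of a point (a, b, c) on the cone c^2 = a*b."""
    lam = cmath.sqrt(a)
    # If lam != 0, mu is forced up to the choice of square root: mu = c/lam,
    # and then mu^2 = c^2/a = b on the cone.  If a = 0, the cone forces c = 0.
    mu = c / lam if lam != 0 else cmath.sqrt(b)
    return (lam, mu)

point = (4, 9, 6)            # lies on the cone, since 6**2 == 4*9
assert restrict(*lift(*point)) == point
```

The same two-case computation, read as algebra, is the usual proof that φ^* is onto the cone.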

1.3.3. Connections with Prime and Maximal Ideals

For a general A ∈ Alg_k, primitive ideals are sandwiched between maximal and prime ideals of A:

(1.40) MaxSpec A ⊆ Prim A ⊆ Spec A.

Here, MaxSpec A is the set of all maximal ideals of A as in §1.3.2 and Spec A denotes the set of all prime ideals of A. Recall that an ideal P of A is prime if P ≠ A and if IJ ⊆ P for ideals I, J of A implies that I ⊆ P or J ⊆ P. To see that primitive ideals are prime, assume that P = Ker S for S ∈ Irr A and let I, J be ideals of A such that I ⊈ P and J ⊈ P. Then I.S = S = J.S by irreducibility, and hence IJ.S = S. Therefore, IJ ⊈ P, as desired. For the first inclusion in (1.40), let P ∈ MaxSpec A and let L be any maximal left ideal of A containing P. Then A/L is irreducible and Ker(A/L) = P. Thus, all maximal ideals of A are primitive, thereby establishing the inclusions in (1.40). As we shall see, these inclusions are in fact equalities if the algebra A is finite dimensional (Theorem 1.38). However, in general, all inclusions in (1.40) are strict; see Example 1.24 below and many others later on in this book. We also remind the reader that an ideal I of A is called semiprime if, for any ideal J of A and any nonnegative integer n, the inclusion J^n ⊆ I implies that J ⊆ I. Prime ideals are clearly semiprime and intersections of semiprime ideals are evidently semiprime again. Thus, the intersection of any collection of primes is a semiprime ideal. It is a standard ring-theoretic fact that all semiprime ideals arise in this manner: semiprime ideals are exactly the intersections of collections of primes (e.g., [128, 10.11]).

1.3.4. The Jacobson-Zariski Topology

The set Spec A of all prime ideals of an arbitrary algebra A carries a useful topology, the Jacobson-Zariski topology. This topology is defined by declaring the subsets of the form

V(I) := {P ∈ Spec A | P ⊇ I}

to be closed, where I can be any subset of A. Evidently, V(∅) = Spec A, V({1}) = ∅, and V(⋃_α I_α) = ⋂_α V(I_α) for any family of subsets I_α ⊆ A. Moreover, we may clearly replace a subset I ⊆ A by the ideal of A that is generated by I without changing V(I). Thus, the closed subsets of Spec A can also be described as the sets of the form V(I), where I is an ideal of A. The defining property of prime ideals implies that V(I) ∪ V(J) = V(IJ) for ideals I and J. Thus, finite unions of closed sets are again closed, thereby verifying the topology axioms. The Jacobson-Zariski topology on Spec A induces a topology on the subset Prim A, the closed subsets being those of the form V(I) ∩ Prim A; likewise for MaxSpec A. The Jacobson-Zariski topology is related to the standard Zariski topology on a finite-dimensional k-vector space V (Section C.3). Indeed, let O(V) = Sym V^* denote the algebra of polynomial functions on V. If k is algebraically closed, then the weak Nullstellensatz (Section C.1) yields the following bijection:

V −∼→ MaxSpec O(V), v ↦ m_v := { f ∈ O(V) | f(v) = 0 }.

Viewing this as an identiﬁcation, the Zariski topology on V is readily seen to coincide with the Jacobson-Zariski topology on MaxSpec O(V ). In comparison with the more familiar topological spaces from analysis, say, the topological space Spec A generally has rather bad separation properties. Indeed, a “point” P ∈ Spec A is closed exactly if the prime ideal P is in fact maximal. Exercise 1.3.1 explores the Jacobson-Zariski topology in more detail. Here, we content ourselves by illustrating it with three examples. Further examples will follow later. Example 1.22 (The polynomial algebra k[x]). Starting with k[x], we have

Spec k[x] = {(0)} ⊔ MaxSpec k[x] = {(0)} ⊔ { (f) | f ∈ k[x] irreducible }.

If k is algebraically closed, then MaxSpec k[x] is in bijection with k via (x − λ) ↔ λ. Therefore, one often visualizes Spec k[x] as a "line", with the points on the line corresponding to the maximal ideals and the line itself corresponding to the ideal (0). The latter ideal is a generic point for the topological space Spec k[x]: the closure of (0) is all of Spec k[x]. Figure 1.1 renders Spec k[x] in three ways, with red dots representing maximal or, equivalently, primitive ideals in each case. The solid gray lines in the top picture represent inclusions. The large black area in the other two pictures represents the generic point (0). The third picture also aims to convey the fact that (0) is determined by the maximal ideals, being their intersection, and that the topological space Spec k[x] is quasicompact (Exercise 1.3.1).

Figure 1.1. Spec k[x]

Example 1.23 (The polynomial algebra k[x, y]). The topology for k[x, y] is slightly more difficult to visualize than for k[x]. As a set,

Spec k[x, y] = {(0)} ⊔ { (f) | f ∈ k[x, y] irreducible } ⊔ MaxSpec k[x, y].

Assuming k to be algebraically closed, maximal ideals of k[x, y] are in bijection with points of the plane k² via (x − λ, y − μ) ↔ (λ, μ). Figure 1.2 depicts the topological space Spec k[x, y], the generic point (0) again being represented by a large black region. The two curves in the plane are representative for the infinitely many primes that are generated by irreducible polynomials f ∈ k[x, y]; and finally, we have sprinkled a few red points throughout the plane to represent MaxSpec k[x, y]. A point lies on a curve exactly if the corresponding maximal ideal contains the principal ideal (f) giving the curve.

Figure 1.2. Spec k[x, y]

Example 1.24 (The quantum plane). Fix a scalar q ∈ k^× that is not a root of unity. The quantum plane is the algebra

A := O_q(k²) = k⟨x, y⟩/(xy − qyx).

Our goal is to describe Spec A, paying attention to which primes are primitive or maximal. First note that the zero ideal of A is certainly prime, because A is a domain (Exercise 1.1.16). It remains to describe the nonzero primes of A. We refer to Exercise 1.1.16 for the fact that every nonzero ideal of A contains some standard monomial x^i y^j. Observe that both x and y are normal elements of A in the sense that (x) = xA = Ax, and likewise for y. Therefore, if x^i y^j ∈ P for some P ∈ Spec A, then x^i y^j A = (x)^i (y)^j ⊆ P, and hence x ∈ P or y ∈ P. In the former case, P/(x) is a prime ideal of A/(x) ≅ k[y], and hence P/(x) is either the zero ideal of k[y] or else P/(x) is generated by some irreducible polynomial g(y). Thus, if x ∈ P, then either P = (x) or P = (x, g(y)), which is maximal. Similarly, if y ∈ P, then either P = (y) or P is the maximal ideal (y, f(x)) for some irreducible f(x) ∈ k[x]. Only (x, y) occurs in both collections of primes, corresponding to g(y) = y or f(x) = x. Therefore Spec A can be pictured as shown in Figure 1.3. Solid gray lines represent inclusions, as in Figure 1.1, and primitive ideals are marked in red. The maximal ideals on top of the diagram in Figure 1.3 are all primitive by (1.40). On the other hand, neither (x) nor (y) is primitive by (1.37), because they correspond to nonmaximal ideals of commutative (in fact, polynomial) algebras. It is less clear why the zero ideal should be primitive. The reader is asked to verify this in Exercise 1.3.4, but we will later see (Exercise 5.6.5) that primitivity of (0) also follows from the fact that the intersection of all nonzero primes is nonzero, which is clear from Figure 1.3: (x) ∩ (y) ≠ (0). Note that, in this example, all inclusions in (1.40) are strict.

Figure 1.3. Spec O_q(k²) (q not a root of unity)
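The standard monomials x^i y^j used in Example 1.24 form a k-basis of O_q(k²) (Exercise 1.1.16), and every word in x and y is brought into the form q^k x^i y^j by the rewriting rule yx = q^{-1}xy. A tiny sketch that tracks only the resulting power of q (plain Python; the function is ours):

```python
# Normal form in O_q(k^2): using y x = q^{-1} x y, a word w in {x, y}
# equals q^k x^i y^j, where i, j count the letters and k is minus the
# number of (y, x) inversions in w.
def normal_form(word):
    """Return (k, i, j) with word = q**k * x**i * y**j in O_q(k^2)."""
    k = 0
    for pos, letter in enumerate(word):
        if letter == 'y':
            k -= word[pos + 1:].count('x')   # each y passing an x costs q^{-1}
    return (k, word.count('x'), word.count('y'))

assert normal_form('yx') == (-1, 1, 1)       # y x = q^{-1} x y
assert normal_form('xyxy') == (-1, 2, 2)     # x y x y = q^{-1} x^2 y^2
```

Since distinct words reduce to scalar multiples of standard monomials, this rewriting is also the reason why ideal membership in Example 1.24 comes down to which monomials x^i y^j an ideal contains.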

We finish our discussion of the quantum plane by offering another visualization which emphasizes the fact that (0) is a generic point for the topological space Spec A. We assume k to be algebraically closed. The maximal ideals (x, g(y)) = (x, y − η) with η ∈ k are represented by points on the y-axis, the axis itself being the generic point (x); similarly for the points on the x-axis, with generic point (y).

1.3.5. The Jacobson Radical

The intersection of all primitive ideals of an arbitrary algebra A will play an important role in the following; it is called the Jacobson radical of A:

rad A := ⋂_{P ∈ Prim A} P = { a ∈ A | a.S = 0 for all S ∈ Irr A }.

Being an intersection of primes, the Jacobson radical is a semiprime ideal of A. Algebras with vanishing Jacobson radical are called semiprimitive. We put

A_s.p. := A/rad A.

Since (rad A).S = 0 holds for all S ∈ Irr A, inflation along the canonical surjection A ↠ A_s.p. as in §1.2.2 yields a bijection

(1.41) Irr A_s.p. −∼→ Irr A,

and P ↔ P/rad A gives a bijection Prim A −∼→ Prim A_s.p.. Thus, A_s.p. is semiprimitive:

(1.42) rad A_s.p. = 0.

In describing Irr A, we may therefore assume that A is semiprimitive. We finish this section by giving, for a finite-dimensional algebra A, a purely ring-theoretic description of the Jacobson radical that makes no mention of representations: rad A is the largest nilpotent ideal of A. Recall that an ideal I of an algebra A is called nilpotent if I^n = 0 for some n; likewise for left or right ideals.

Proposition 1.25. The Jacobson radical rad A of any algebra A contains all nilpotent left and right ideals of A. Moreover, for each finite-length V ∈ Rep A,

(rad A)^{length V}.V = 0.

If A is finite dimensional, then rad A is itself nilpotent.

Proof. We have already pointed out that rad A is semiprime. Now, any semiprime ideal I of A contains all left ideals L of A such that L^n ⊆ I. To see this, note that LA is an ideal of A that satisfies (LA)^n = L^n A ⊆ I. By the defining property of semiprime ideals, it follows that LA ⊆ I and hence L ⊆ I. A similar argument applies to right ideals. In particular, every semiprime ideal contains all nilpotent left and right ideals of A. This proves the first statement. Now assume that 0 = V_0 ⊂ V_1 ⊂ ··· ⊂ V_l = V is a composition series of V. Since rad A annihilates all irreducible factors V_i/V_{i−1}, it follows that (rad A).V_i ⊆ V_{i−1} and so (rad A)^l.V = 0. For finite-dimensional A, this applies to the regular representation V = A_reg, giving (rad A)^l.A_reg = 0 and hence (rad A)^l = 0. □
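Proposition 1.25 can be checked by hand for the algebra A of upper triangular 3×3 matrices: here rad A is the ideal N of strictly upper triangular matrices (a standard fact, since A/N ≅ k³ and N is nilpotent), and any product of three elements of N already vanishes. A quick numerical confirmation (NumPy; the setup is ours, not the book's):

```python
# For A = upper triangular 3x3 matrices, rad A is the set N of strictly
# upper triangular matrices: N is an ideal of A with N^3 = 0, so rad A
# is nilpotent, as Proposition 1.25 guarantees for finite-dimensional A.
import numpy as np

rng = np.random.default_rng(0)

def strictly_upper(n):
    """A random strictly upper triangular n x n matrix (an element of N)."""
    return np.triu(rng.integers(-5, 6, size=(n, n)).astype(float), k=1)

n1, n2, n3 = (strictly_upper(3) for _ in range(3))
product = n1 @ n2 @ n3
assert np.allclose(product, 0)                 # N^3 = 0

# N is stable under multiplication by A: a*n1 is again strictly upper,
# i.e. its lower triangle (including the diagonal) vanishes.
a = np.triu(rng.integers(-5, 6, size=(3, 3)).astype(float))
assert np.allclose(np.tril(a @ n1, k=0), 0)
```

The exponent 3 here is smaller than length A_reg, consistent with the proposition: (rad A)^{length V}.V = 0 is an upper bound, not the exact nilpotency index.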

Exercises for Section 1.3

In these exercises, A denotes an arbitrary k-algebra unless otherwise specified.

1.3.1 (Jacobson-Zariski topology). For any subset X ⊆ Spec A and any ideal I of A, put

I(X) := ⋂_{P ∈ X} P and √I := I(V(I)) = ⋂_{P ∈ Spec A, P ⊇ I} P.

These are semiprime ideals of A and all semiprime ideals are of this form. The ideal √I is called the semiprime radical of I; it is clearly the smallest semiprime ideal of A containing I. Consider the Jacobson-Zariski topology of Spec A.

(a) Show that the closure of X is given by X̄ = V(I(X)).

(b) Conclude that the following are inclusion-reversing bijections that are inverse to each other:

  {closed subsets of Spec A}  ⇄  {semiprime ideals of A},

with I(·) mapping left to right and V(·) mapping right to left. Thus, the Jacobson-Zariski topology on Spec A determines all semiprime ideals of A and their inclusion relations among each other.

(c) Show that I(X̄) = I(X) and V(√I) = V(I).

(d) A topological space is said to be irreducible if it cannot be written as the union of two proper closed subsets. Show that, under the bijection in (b), the irreducible closed subsets of Spec A correspond to the prime ideals of A.

(e) Show that Spec A is quasicompact: if Spec A = ∪_{l∈L} U_l with open subsets U_l, then Spec A = ∪_{l∈L'} U_l for some finite L' ⊆ L.


1.3.2 (Maximum condition on semiprime ideals). Assume that A satisfies the maximum condition on semiprime ideals: every nonempty collection of semiprime ideals of A has at least one maximal member.⁸

(a) Show that every semiprime ideal of A is an intersection of finitely many primes of A.

(b) Conclude that every closed subset of Spec A, for the Jacobson-Zariski topology, is a finite union of irreducible closed sets (Exercise 1.3.1). Moreover, the topology of Spec A is determined by the inclusion relations among the primes of A.

1.3.3 (Characterization of the Jacobson radical). Show that the following subsets of A are all equal to rad A: (i) the intersection of all maximal left ideals of A; (ii) the intersection of all maximal right ideals of A; (iii) the set {a ∈ A | 1 + xay ∈ A^× for all x, y ∈ A}.

1.3.4 (Quantum plane). Let A = O_q(k²) be the quantum plane, with q ∈ k^× not a root of unity.

(a) Show that V = A/A(xy − 1) is a faithful irreducible representation of A. Thus, (0) is a primitive ideal of A.

(b) Assuming k to be algebraically closed, show that the following account for all closed subsets of Spec A: all finite subsets of MaxSpec A (including ∅); V(x) ∪ X for any finite subset X ⊂ {(x − ξ, y) | ξ ∈ k^×}; V(y) ∪ Y for any finite subset Y ⊂ {(x, y − η) | η ∈ k^×}; V(x) ∪ V(y); and Spec A. Here, we have written V(f) = V({f}) for f ∈ A.

1.3.5 (Centralizing homomorphisms). An algebra map φ: A → B is said to be centralizing if φ(A) and the centralizer C_B(φ(A)) = {b ∈ B | bφ(a) = φ(a)b for all a ∈ A} together generate the algebra B. Surjective algebra maps are clearly centralizing, but there are many others, e.g., the standard embedding A ↪ A[x].

(a) Show that composites of centralizing homomorphisms are centralizing.

(b) Let φ: A → B be centralizing. Show that φ(Z A) ⊆ Z B. For every ideal I of A, show that Bφ(I) = φ(I)B. Deduce the existence of a map Spec B → Spec A, P ↦ φ^{−1}(P).⁹

⁸ Clearly, every right or left noetherian algebra satisfies this condition. Furthermore, affine PI-algebras are also known to satisfy the maximum condition on semiprime ideals (e.g., Rowen [184, 6.3.36']).

⁹ This fails for the standard embedding of A into the power series algebra A⟦x⟧: there are examples, due to G. Bergman, of primes P ∈ Spec A⟦x⟧ such that P ∩ A is not even semiprime [168, Example 4.2].

1.4. Semisimplicity

In some circumstances, a given representation of an algebra can be broken down into irreducible building blocks in a better way than choosing a composition series, namely as a direct sum of irreducible subrepresentations. Representations allowing


such a decomposition are called completely reducible.¹⁰ It turns out that completely reducible representations share some useful features with vector spaces, notably the existence of complements for subrepresentations. In this section, we give several equivalent characterizations of complete reducibility (Theorem 1.28); we describe a standard decomposition of completely reducible representations, the decomposition into homogeneous components (§1.4.2); and we determine the structure of the algebras A having the property that all V ∈ Rep A are completely reducible (Wedderburn's Structure Theorem). Algebras with this property are called semisimple. Unless explicitly stipulated otherwise, A will continue to denote an arbitrary k-algebra in this section.

1.4.1. Completely Reducible Representations

Recall that V ∈ Rep A is said to be completely reducible if

  V = ⊕_{i∈I} S_i

with irreducible subrepresentations S_i ⊆ V. Thus, each v ∈ V can be uniquely written as a sum v = Σ_{i∈I} v_i with v_i ∈ S_i and v_i = 0 for all but finitely many i ∈ I. The case V = 0 is included here, corresponding to the empty sum.

Example 1.26 (Division algebras). Every representation V of a division algebra is completely reducible. Indeed, V is a vector space and any choice of basis for V yields a decomposition of V as a direct sum of irreducible subrepresentations.

Example 1.27 (Polynomial algebras). By (1.6), representations of the polynomial algebra A = k[x_1, x_2, …, x_n] are given by a k-vector space V and a collection of n pairwise commuting operators ξ_i = (x_i)_V ∈ End_k(V). Assuming k to be algebraically closed, V is irreducible if and only if dim_k V = 1 by (1.39). A completely reducible representation V of A is thus given by n simultaneously diagonalizable operators ξ_i ∈ End_k(V); in other words, V has a k-basis consisting of eigenvectors for all ξ_i. If V is finite dimensional, then we know from linear algebra that such a basis exists if and only if the operators ξ_i commute pairwise and the minimal polynomial of each ξ_i is separable, that is, it has no multiple roots.

Characterizations of Complete Reducibility

Recall the following familiar facts from linear algebra: all bases of a vector space have the same cardinality; every generating set of a vector space contains a basis; and every subspace of a vector space has a complement. The theorem below extends these facts to completely reducible representations of arbitrary algebras. Given a representation V and a subrepresentation U ⊆ V, a complement for U in V is a subrepresentation C ⊆ V such that V = U ⊕ C.

¹⁰ Completely reducible representations are also referred to as semisimple.
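Example 1.27 lends itself to a quick numerical check. The sketch below (Python/NumPy; the matrices are ad hoc illustrations, not taken from the text) builds two commuting diagonalizable operators with a shared eigenbasis, i.e., a completely reducible representation of k[x₁, x₂] on V = C³:

```python
import numpy as np

# Two commuting, individually diagonalizable operators on V = C^3 define a
# completely reducible representation of k[x1, x2] (Example 1.27).
# We build them with a shared eigenbasis P, so commutativity is automatic.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # columns = common eigenbasis (invertible)
D1 = np.diag([2.0, 3.0, 5.0])
D2 = np.diag([1.0, 4.0, 9.0])
Pinv = np.linalg.inv(P)

xi1 = P @ D1 @ Pinv   # action of x1
xi2 = P @ D2 @ Pinv   # action of x2

# The operators commute, as required for a representation of k[x1, x2].
assert np.allclose(xi1 @ xi2, xi2 @ xi1)

# Conjugating by P diagonalizes both simultaneously: V is a direct sum of
# three 1-dimensional (hence irreducible) subrepresentations.
assert np.allclose(Pinv @ xi1 @ P, D1)
assert np.allclose(Pinv @ xi2 @ P, D2)
```

The columns of P span the three 1-dimensional subrepresentations S_i in the decomposition V = S_1 ⊕ S_2 ⊕ S_3.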


Theorem 1.28. (a) Let V ∈ Rep A be completely reducible, say V = ⊕_{i∈I} S_i for irreducible subrepresentations S_i. Then the following are equivalent: (i) I is finite; (ii) V has finite length; (iii) V is finitely generated. In this case, |I| = length V (as in §1.2.4). In general, if ⊕_{i∈I} S_i ≅ ⊕_{j∈J} T_j with irreducible S_i, T_j ∈ Rep A, then |I| = |J|.

(b) The following are equivalent for any V ∈ Rep A: (i) V is completely reducible; (ii) V is a sum (not necessarily direct) of irreducible subrepresentations; (iii) every subrepresentation U ⊆ V has a complement.

Proof. (a) First assume that I is finite, say I = {1, 2, …, l}, and put V_i = ⊕_{j≤i} S_j. Then 0 = V_0 ⊂ V_1 ⊂ ··· ⊂ V_l = V is a composition series of V, with factors V_i/V_{i−1} ≅ S_i. Thus length V = l, proving the implication (i) ⇒ (ii). Furthermore, since irreducible modules are finitely generated (in fact, cyclic), (ii) always implies (iii), even when V is not necessarily completely reducible (Exercise 1.1.3). Now assume that V is finitely generated, say V = Av_1 + Av_2 + ··· + Av_t. For each j, we have v_j ∈ ⊕_{i∈I_j} S_i for some finite subset I_j ⊆ I. It follows that I = ∪_{j=1}^t I_j is finite. This proves the equivalence of (i)–(iii) as well as the equality |I| = length V for finite I. Since property (ii) and the value of length V only depend on the isomorphism type of V and not on the given decomposition V = ⊕_{i∈I} S_i, we also obtain that |I| = |J| if I is finite and V ≅ ⊕_{j∈J} T_j with irreducible T_j ∈ Rep A.

It remains to show that |I| = |J| also holds if I and J are infinite. Replacing each T_j by its image in V under the isomorphism V ≅ ⊕_{j∈J} T_j, we may assume that T_j ⊆ V and V = ⊕_{j∈J} T_j. Select elements 0 ≠ v_j ∈ T_j. Then T_j = Av_j and so {v_j}_{j∈J} is a generating set for V. Exactly as above, we obtain that I = ∪_{j∈J} I_j for suitable finite subsets I_j ⊆ I. Since J is infinite, the union ∪_{j∈J} I_j has cardinality at most |J|; see [26, Corollary 3 on p. E III.49]. Therefore, |I| ≤ |J|. By symmetry, equality must hold.

(b) The implication (i) ⇒ (ii) being trivial, let us assume (ii) and write V = Σ_{i∈I} S_i with irreducible subrepresentations S_i ⊆ V.

Claim. Given a subrepresentation U ⊆ V, there exists a subset J ⊆ I such that V = U ⊕ ⊕_{i∈J} S_i.

This will prove (iii), with C = ⊕_{i∈J} S_i, and the case U = 0 also gives (i). To prove the claim, choose a subset J ⊆ I that is maximal with respect to the property that the sum V' := U + Σ_{i∈J} S_i is direct. The existence of J is clear if I is finite; in general, it follows by a straightforward Zorn's Lemma argument. We aim to show that V = V'. If not, then S_k ⊄ V' for some k ∈ I. Since S_k is irreducible, this


forces S_k ∩ V' = 0, which in turn implies that the sum V' + S_k = U + Σ_{i∈J∪{k}} S_i is direct, contradicting the maximality of J. Therefore, V = V', proving the claim.

Finally, let us derive (ii) from (iii). To this end, let S denote the sum of all irreducible subrepresentations of V. Our goal is to show that S = V. If not, then V = S ⊕ C for some nonzero subrepresentation C ⊆ V by virtue of (iii). To reach a contradiction, it suffices to show that every nonzero subrepresentation C ⊆ V contains an irreducible subrepresentation. For this, we may clearly replace C by Av for any 0 ≠ v ∈ C, and hence we may assume that C is cyclic. Thus, there is some subrepresentation D ⊆ C such that C/D is irreducible (Exercise 1.1.3). Using (iii) to write V = D ⊕ E, we obtain C = D ⊕ (E ∩ C). Hence, E ∩ C ≅ C/D is the desired irreducible subrepresentation of C. This proves (ii), finishing the proof of the theorem.

Corollary 1.29. Subrepresentations and homomorphic images of completely reducible representations are completely reducible. More precisely, if V = ⊕_{i∈I} S_i for irreducible subrepresentations S_i ⊆ V, then all subrepresentations and all homomorphic images of V are equivalent to direct sums ⊕_{i∈J} S_i for suitable J ⊆ I.

Proof. Consider an epimorphism f: V = ⊕_{i∈I} S_i ↠ W in Rep A. The claim in the proof of Theorem 1.28 gives V = Ker f ⊕ ⊕_{i∈J} S_i for some J ⊆ I. Hence, W ≅ V/Ker f ≅ ⊕_{i∈J} S_i, proving the statement about homomorphic images. Every subrepresentation U ⊆ V is in fact also a homomorphic image of V: choosing a complement C for U in V, we obtain a projection map V = U ⊕ C ↠ U.

Theorem 1.28(a) allows us to define the length of any completely reducible representation V = ⊕_{i∈I} S_i by

  length V := |I|.

In the finite case, this agrees with our definition of length in §1.2.4, and it refines the earlier definition for completely reducible representations V of infinite length. We shall mostly be interested in completely reducible representations V having finite length. The set-theoretic calisthenics in the proof of Theorem 1.28 are then unnecessary.

1.4.2. Socle and Homogeneous Components

The sum of all irreducible subrepresentations of an arbitrary V ∈ Rep A, which already featured in the proof of Theorem 1.28, is called the socle of V. For a fixed S ∈ Irr A, we will also consider the sum of all subrepresentations of V that are equivalent to S; this sum is called the S-homogeneous component of V:


  soc V := the sum of all irreducible subrepresentations of V,
  V(S) := the sum of all subrepresentations U ⊆ V such that U ≅ S.

Thus, V is completely reducible if and only if V = soc V. In general, soc V is the unique largest completely reducible subrepresentation of V, and it is the sum of the various homogeneous components V(S). We will see below that this sum is in fact direct. Of course, it may happen that soc V = 0; for example, this holds for the regular representation of any domain that is not a division algebra (Example 1.17).

Example 1.30 (Weight spaces and eigenspaces). Let S = k_φ be a 1-dimensional representation of an algebra A, with φ ∈ Hom_{Alg_k}(A, k) as in §1.3.1. Then the S-homogeneous component V(k_φ) will be written as V_φ. Explicitly,

  V_φ := {v ∈ V | a.v = φ(a)v for all a ∈ A}.

If V_φ ≠ 0, then φ is called a weight of the representation V and V_φ is called the corresponding weight space. In the special case where A = k[t] is the polynomial algebra, the map φ is determined by the scalar λ = φ(t) ∈ k and V_φ is the usual eigenspace for the eigenvalue λ of the endomorphism t_V ∈ End_k(V). The following proposition generalizes some familiar facts about eigenspaces.

Proposition 1.31. Let V ∈ Rep A. Then:

(a) soc V = ⊕_{S∈Irr A} V(S).

(b) If f : V → W is a map in Rep A, then f (V (S)) ⊆ W (S) for all S ∈ Irr A . (c) For any subrepresentation U ⊆ V , we have soc U = U ∩ soc V and U (S) = U ∩ V (S) for all S ∈ Irr A .

Proof. (a) We only need to show that the sum of all homogeneous components is direct, that is,

  V(S) ∩ Σ_{T∈Irr A, T≇S} V(T) = 0

for all S ∈ Irr A. Denoting the intersection on the left by X, we know by Corollary 1.29 that X is completely reducible and that each irreducible subrepresentation of X is equivalent to S and also to one of the representations T ∈ Irr A with T ≇ S. Since there are no such irreducible representations, we must have X = 0. This proves (a). For (b) and (c), note that Corollary 1.29 also tells us that f(V(S)) and U ∩ V(S) are both equivalent to direct sums of copies of the representation S, which implies


the inclusions f(V(S)) ⊆ W(S) and U ∩ V(S) ⊆ U(S). Since the inclusion U(S) ⊆ U ∩ V(S) is obvious, the proposition is proved.

Multiplicities. For any V ∈ Rep A and any S ∈ Irr A, we put

  m(S, V) := length V(S).

Thus,

(1.43)  V(S) ≅ S^{⊕m(S,V)},

where the right-hand side denotes the direct sum of m(S, V) many copies of S, and Proposition 1.31(a) implies that

(1.44)  soc V ≅ ⊕_{S∈Irr A} S^{⊕m(S,V)}.

The foregoing will be most important in the case of a completely reducible representation V. In this case, (1.44) shows that V is determined, up to equivalence, by the cardinalities m(S, V). Any S ∈ Irr A such that m(S, V) ≠ 0 is called an irreducible constituent of V. If V is completely reducible of finite length, then each m(S, V) is identical to the multiplicity μ(S, V) of S in V as defined in §1.2.4. Therefore, m(S, V) is also referred to as the multiplicity of S in V, even when V has infinite length. If V is a finite-length representation, not necessarily completely reducible, then m(S, V) ≤ μ(S, V) for all S ∈ Irr A.

The following proposition expresses multiplicities as dimensions. Recall that, for any V, W ∈ Rep A, the vector space Hom_A(V, W) is an (End_A(W), End_A(V))-bimodule via composition (Example 1.3 or §B.2.1). In particular, for any S ∈ Irr A, we may regard Hom_A(V, S) as a left vector space over the Schur division algebra D(S) and Hom_A(S, V) as a right vector space over D(S).

Proposition 1.32. Let V ∈ Rep A be completely reducible of finite length. Then, for any S ∈ Irr A,

  m(S, V) = dim_{D(S)} Hom_A(V, S) = dim_{D(S)} Hom_A(S, V).

Proof. The functors Hom_A(·, S) and Hom_A(S, ·) commute with finite direct sums; see (B.14). Moreover, by Schur's Lemma, Hom_A(V(T), S) = 0 for distinct T, S ∈ Irr A. Therefore,

  Hom_A(V, S) ≅ Hom_A(V(S), S) ≅ Hom_A(S^{⊕m(S,V)}, S) ≅ D(S)^{⊕m(S,V)},

where the first isomorphism uses (1.43).


Consequently, m(S, V) = dim_{D(S)} Hom_A(V, S). The verification of the second equality is analogous; an explicit isomorphism is given by

  Hom_A(S, V) ⊗_{D(S)} S ∼→ V(S),  f ⊗ s ↦ f(s).
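Proposition 1.32 can be made concrete for a small group algebra. In the sketch below (Python/NumPy; the representation chosen is an ad hoc illustration), A = C[Z/2] acts on V = C³ through the involution M = diag(1, 1, −1), and the multiplicity of each 1-dimensional irreducible k_{±1} is computed as the dimension of the intertwiner space Hom_A(V, S), i.e., the space of row vectors T with TM = ±T:

```python
import numpy as np

# A = C[Z/2] acting on V = C^3 via the involution M (an ad hoc example).
# The two irreducibles are 1-dimensional: the generator acts by s = +1 or -1.
# Hom_A(V, k_s) = {row vectors T : T M = s T} = left null space of M - s*I,
# and its dimension is the multiplicity m(k_s, V) (Proposition 1.32).
M = np.diag([1.0, 1.0, -1.0])

def multiplicity(s):
    """dim Hom_A(V, k_s), via rank-nullity for the left null space."""
    B = M - s * np.eye(3)
    return 3 - np.linalg.matrix_rank(B)

m_plus = multiplicity(+1.0)    # multiplicity of the trivial representation
m_minus = multiplicity(-1.0)   # multiplicity of the sign representation

assert m_plus == 2 and m_minus == 1
# Consistency check: the multiplicities add up to length V = dim V = 3.
assert m_plus + m_minus == 3
```

Here the multiplicities are just the eigenspace dimensions of M, matching the weight-space point of view of Example 1.30.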

1.4.3. Endomorphism Algebras

In this subsection, we give a description of the endomorphism algebra End_A(V) of any completely reducible V ∈ Rep A having finite length. Recall from (1.44) that V can be uniquely written as

(1.45)  V ≅ S_1^{⊕m_1} ⊕ S_2^{⊕m_2} ⊕ ··· ⊕ S_t^{⊕m_t}

for pairwise distinct S_i ∈ Irr A and positive integers m_i. The description of End_A(V) will be in terms of the direct product of matrix algebras over the Schur division algebras D(S_i). Here, the direct product of algebras A_1, …, A_t is the cartesian product,

  ∏_{i=1}^t A_i = A_1 × ··· × A_t,

with the componentwise algebra structure; e.g., multiplication is given by

  (x_1, x_2, …, x_t)(y_1, y_2, …, y_t) = (x_1 y_1, x_2 y_2, …, x_t y_t).

Proposition 1.33. Let V be as in (1.45) and let D(S_i) = End_A(S_i) denote the Schur division algebra of S_i. Then

  End_A(V) ≅ ∏_{i=1}^t Mat_{m_i}(D(S_i)).

Proof. By Schur’s Lemma, Hom A (Si, S j ) = 0 for i j. Therefore, putting Di = D(Si ), Lemma 1.4(a) gives an algebra isomorphism Matm1 (D1 ) Mat m (D2 ) 2 End A (V ) This is exactly what the proposition asserts.

0 ..

.

0

Mat m (Dt ) t

.


1.4.4. Semisimple Algebras

The algebra A is called semisimple if the following equivalent conditions are satisfied: (i) the regular representation A_reg is completely reducible; (ii) all V ∈ Rep A are completely reducible.

Condition (ii) certainly implies (i). For the converse, note that every V ∈ Rep A is a homomorphic image of a suitable direct sum of copies of the regular representation A_reg: any family (v_i)_{i∈I} of generators of V gives rise to an epimorphism

  A_reg^{⊕I} ↠ V,  (a_i) ↦ Σ_i a_i v_i.

Now A_reg^{⊕I} is completely reducible by (i), being a direct sum of completely reducible representations, and it follows from Corollary 1.29 that V is completely reducible as well; in fact, V is isomorphic to a direct sum of certain irreducible constituents of A_reg, possibly with multiplicities greater than 1. Thus, (i) and (ii) are indeed equivalent. Since property (ii) evidently passes to homomorphic images of A, we obtain in particular that all homomorphic images of semisimple algebras are again semisimple.

As we have seen, division algebras are semisimple (Example 1.26). The main result of this section, Wedderburn's Structure Theorem, gives a complete description of all semisimple algebras: they are exactly the finite direct products of matrix algebras over various division algebras. In detail:

Wedderburn's Structure Theorem. The k-algebra A is semisimple if and only if

  A ≅ ∏_{i=1}^t Mat_{m_i}(D_i)

for division k-algebras D_i. The data on the right are determined by A as follows:

• t = # Irr A, say Irr A = {S_1, S_2, …, S_t};
• D_i ≅ D(S_i)^op;
• m_i = m(S_i, A_reg) = dim_{D(S_i)} S_i;
• Mat_{m_i}(D_i) ≅ BiEnd_A(S_i).

Proof. First assume that A is semisimple. Since the regular representation A_reg has finite length, being generated by the identity element of A, it follows that

  A_reg ≅ S_1^{⊕m_1} ⊕ S_2^{⊕m_2} ⊕ ··· ⊕ S_t^{⊕m_t}


with pairwise distinct S_i ∈ Irr A and positive integers m_i as in (1.45). We obtain algebra isomorphisms

  A ≅ End_A(A_reg)^op                      (Lemma 1.5(b))
    ≅ (∏_{i=1}^t Mat_{m_i}(D(S_i)))^op     (Proposition 1.33)
    ≅ ∏_{i=1}^t Mat_{m_i}(D_i)             (Lemma 1.5(a))

with D_i = D(S_i)^op. Here, we have tacitly used the obvious isomorphism (A^op)^op ≅ A and that (·)^op commutes with direct products. Since opposite algebras of division algebras are clearly division algebras as well, it follows that all D_i are division algebras (Schur's Lemma), and so we have proved the asserted structure of A.

Conversely, assume that A ≅ ∏_{i=1}^t A_i with A_i = Mat_{m_i}(D_i) for division k-algebras D_i and positive integers m_i. The direct product structure of A implies that

  A_reg ≅ ⊕_{i=1}^t π_i^*((A_i)_reg),

where π_i^*: Rep A_i → Rep A denotes the inflation functor along the standard projection π_i: A ↠ A_i (§1.2.2). Inflation injects each Irr A_i into Irr A and yields a bijection Irr A ≅ ⊔_{i=1}^t Irr A_i. Viewing any V ∈ Rep A_i as a representation of A by inflation, the subalgebras A_V and (A_i)_V of End_k(V) coincide. In particular, V is completely reducible in Rep A if and only if this holds in Rep A_i. Moreover, End_A(V) = End_{A_i}(V) and BiEnd_A(V) = BiEnd_{A_i}(V).

In light of these remarks, we may assume that A = Mat_m(D) for some division k-algebra D and we must show: A_reg is completely reducible; # Irr A = 1, say Irr A = {S}; m = m(S, A_reg) = dim_{D(S)} S; D ≅ D(S)^op; and A ≅ BiEnd_A(S).

Let L_j ⊆ A denote the set of all matrices whose nonzero entries can only occur in the j-th column. Then each L_j is a left ideal of A and A_reg = ⊕_j L_j. Moreover, L_j ≅ S := D_reg^{⊕m} as a left module over A, with A acting on D_reg^{⊕m} by matrix multiplication as in Lemma 1.4(b). Therefore, A_reg ≅ S^{⊕m}. Moreover, since the regular representation of D is irreducible, it is easy to see that S ∈ Irr A (Exercise 1.4.2). This shows that A_reg is completely reducible, with m = m(S, A_reg), and so A is semisimple. Since every representation of a semisimple algebra is isomorphic to a direct sum of irreducible constituents of the regular representation, as we have remarked at the beginning of this subsection, we also obtain that Irr A = {S}. As for the Schur division algebra of S, we have

  D(S) = End_A(D_reg^{⊕m}) ≅ End_D(D_reg) ≅ D^op,

where the first isomorphism is Lemma 1.4 and the second is Lemma 1.5.


Consequently, D ≅ D(S)^op and dim_{D(S)} S is equal to the dimension of D^{⊕m} as a right vector space over D, which is m. Finally, Lemma 1.5 implies that End_{D^op}(D) ≅ D and so we obtain

  BiEnd_A(S) = End_{D(S)}(S) ≅ End_{D^op}(D_reg^{⊕m}) ≅ Mat_m(End_{D^op}(D)) ≅ Mat_m(D) = A,

where the middle isomorphism is Lemma 1.4. This finishes the proof of Wedderburn's Structure Theorem.
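The decomposition A_reg = ⊕_j L_j used in the proof can be watched in coordinates. The sketch below (Python/NumPy, an ad hoc illustration) checks for A = Mat_2(C) that the column spaces L_1, L_2 are left ideals, i.e., stable under left multiplication by arbitrary elements of A, in line with A_reg ≅ S^{⊕2} for S = C²:

```python
import numpy as np

rng = np.random.default_rng(0)

# A = Mat_2(C); L_j = matrices whose nonzero entries sit in column j.
# Each L_j is a left ideal, and A_reg = L_1 (+) L_2 with L_j isomorphic
# to the column space S = C^2, so A_reg decomposes as S^{(+)2}.

def in_column_ideal(x, j, tol=1e-12):
    """Check that the matrix x has nonzero entries only in column j."""
    other = np.delete(x, j, axis=1)
    return np.max(np.abs(other)) < tol

for _ in range(5):
    a = rng.standard_normal((2, 2))         # arbitrary element of A
    for j in range(2):
        ell = np.zeros((2, 2))
        ell[:, j] = rng.standard_normal(2)  # arbitrary element of L_j
        assert in_column_ideal(a @ ell, j)  # a.L_j stays inside L_j
```

Each L_j carries the standard column action of Mat_2, which is irreducible, matching m = dim_D S = 2 in Wedderburn's Structure Theorem.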

1.4.5. Consequences of Wedderburn's Structure Theorem

First, for future reference, let us restate the isomorphism in Wedderburn's Structure Theorem.

Corollary 1.34. If A is semisimple, then there is the following isomorphism of algebras:

  A ∼→ ∏_{S∈Irr A} BiEnd_A(S),  a ↦ (a_S).

Split Semisimple Algebras. A semisimple algebra A is called split semisimple if A is finite dimensional and k is a splitting field for A. Then BiEnd_A(S) = End_k(S) for all S ∈ Irr A by (1.34), and so the isomorphism in Corollary 1.34 takes the form

(1.46)  A ≅ ∏_{S∈Irr A} End_k(S) ≅ ∏_{S∈Irr A} Mat_{dim_k S}(k).

Our next corollary records some important numerology resulting from this isomorphism. For any algebra A and any a, b ∈ A, the expression [a, b] := ab − ba will be called a Lie commutator and the k-subspace of A that is generated by the Lie commutators will be denoted by [A, A].

Corollary 1.35. Let A be a split semisimple k-algebra. Then:

(a) # Irr A = dim_k A/[A, A];
(b) dim_k A = Σ_{S∈Irr A} (dim_k S)²;
(c) m(S, A_reg) = dim_k S for all S ∈ Irr A.

Proof. Under the isomorphism (1.46), the subspace [A, A] ⊆ A corresponds to ⊕_{S∈Irr A} [Mat_{d_S}(k), Mat_{d_S}(k)], where we have put d_S = dim_k S. Each of the subspaces [Mat_{d_S}(k), Mat_{d_S}(k)] coincides with the kernel of the matrix trace, Mat_{d_S}(k) ↠ k. Indeed, any Lie commutator of matrices has trace zero and, on the


other hand, using the matrices e_{i,j} having a 1 in position (i, j) and 0s elsewhere, we can form the Lie commutators

  [e_{i,i}, e_{i,j}] = e_{i,j}  (i ≠ j)   and   [e_{i,i+1}, e_{i+1,i}] = e_{i,i} − e_{i+1,i+1},

which together span a subspace of codimension 1 in Mat_{d_S}(k). Thus, each [Mat_{d_S}(k), Mat_{d_S}(k)] has codimension 1, and hence dim_k A/[A, A] is equal to the number of matrix components in (1.46), which in turn equals # Irr A by Wedderburn's Structure Theorem. This proves (a). Part (b) is clear from (1.46). Finally, (c) follows from (1.46) and the statement about multiplicities in Wedderburn's Structure Theorem.

Primitive Central Idempotents. For any semisimple algebra A, we let e(S) ∈ A denote the element corresponding to (0, …, 0, Id_S, 0, …, 0) ∈ ∏_{S'∈Irr A} BiEnd_A(S') under the isomorphism of Corollary 1.34. Thus,

(1.47)  e(S)_{S'} = δ_{S,S'} Id_{S'}

for S, S' ∈ Irr A. All e(S) belong to the center Z A and they satisfy

(1.48)  e(S)e(S') = δ_{S,S'} e(S)   and   Σ_{S∈Irr A} e(S) = 1.

The elements e(S) are called the primitive central idempotents of A. For any V ∈ Rep A, it follows from (1.47) that the operator e(S)_V is the identity on the S-homogeneous component V(S) and it annihilates all other homogeneous components of V. Thus, the idempotent e(S)_V is the projection of V = ⊕_{S'∈Irr A} V(S') onto V(S):

(1.49)  e(S)_V : V = ⊕_{S'∈Irr A} V(S') ↠ V(S).
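For the group algebra C[Z/3], a split semisimple algebra with three 1-dimensional irreducibles, the primitive central idempotents are e_j = (1/3) Σ_k ω^{−jk} g^k with ω = e^{2πi/3} (a standard formula, not derived in the text above). The sketch below (Python/NumPy; realizing C[Z/3] inside Mat_3(C) via the regular representation is an illustrative choice) verifies the relations (1.48):

```python
import numpy as np

# C[Z/3], realized inside Mat_3(C) via the regular representation:
# the generator g acts as the cyclic permutation matrix G.
G = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)
omega = np.exp(2j * np.pi / 3)

# Primitive central idempotents: e_j = (1/3) * sum_k omega^(-j*k) g^k.
def idem(j):
    return sum(omega ** (-j * k) * np.linalg.matrix_power(G, k)
               for k in range(3)) / 3

E = [idem(j) for j in range(3)]

# Orthogonality e(S)e(S') = delta_{S,S'} e(S), cf. (1.48).
for i in range(3):
    for j in range(3):
        expected = E[i] if i == j else np.zeros((3, 3))
        assert np.allclose(E[i] @ E[j], expected)

# The idempotents sum to 1, cf. (1.48).
assert np.allclose(sum(E), np.eye(3))
```

Each e_j projects C³ onto the corresponding 1-dimensional homogeneous component, as in (1.49).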

1.4.6. Finite-Dimensional Irreducible Representations

In this subsection, we record some applications of the foregoing to representations of arbitrary algebras A, not necessarily semisimple. Recall from (1.33) that the image A_V = ρ(A) of every representation ρ: A → End_k(V) is contained in the double centralizer BiEnd_A(V) ⊆ End_k(V). Our focus will be on finite-dimensional representations.

Burnside's Theorem. Let A ∈ Alg_k and let V ∈ Rep_fin A. Then V is irreducible if and only if End_A(V) is a division algebra and A_V = BiEnd_A(V). In this case, A_V is isomorphic to a matrix algebra over the division algebra D(V)^op.

Proof. First, assume that V is irreducible. Then V is a finite-dimensional left vector space over the Schur division algebra D(V) = End_A(V). Therefore, Lemma 1.5 implies that BiEnd_A(V) = End_{D(V)}(V) is a matrix algebra over D(V)^op. In order to show that A_V = BiEnd_A(V), we may replace A by A' := A/Ker V, because


A'_V = A_V and BiEnd_{A'}(V) = BiEnd_A(V). It suffices to show that A' is semisimple; for, then Corollary 1.34 will tell us that A'_V = BiEnd_{A'}(V). Fix a k-basis (v_i)_1^n of V. Then {a ∈ A' | a.v_i = 0 for all i} = Ker_{A'}(V) = 0, and hence we have the following embedding:

  A'_reg ↪ V^{⊕n},  a ↦ (a.v_i)_1^n.

Since V^{⊕n} is completely reducible, it follows from Corollary 1.29 that A'_reg is completely reducible as well, proving that A' is semisimple as desired.

Conversely, assume that D = End_A(V) is a division algebra and that A_V = BiEnd_A(V). Recall that BiEnd_A(V) = End_D(V). Thus, A_V = End_D(V) and it follows from Example 1.12 that V is an irreducible representation of A_V. Hence V ∈ Irr A, completing the proof of Burnside's Theorem.

Absolute Irreducibility

Recall that the base field k is said to be a splitting field for the k-algebra A if D(S) = k for all S ∈ Irr_fin A. We now discuss the relevance of this condition in connection with extensions of the base field (§1.2.2). Specifically, the representation V ∈ Rep A is called absolutely irreducible if K ⊗ V is an irreducible representation of K ⊗ A for every field extension K/k. Note that irreducibility of K ⊗ V for even one given field extension K/k certainly forces V to be irreducible, because any subrepresentation 0 ≠ U ⊊ V would give rise to a subrepresentation 0 ≠ K ⊗ U ⊊ K ⊗ V.

Proposition 1.36. Let A ∈ Alg_k and let S ∈ Irr_fin A. Then S is absolutely irreducible if and only if D(S) = k.

Proof. First assume that D(S) = k. Then A_S = End_k(S) by Burnside's Theorem. For any field extension K/k, the canonical map K ⊗ End_k(S) → End_K(K ⊗ S) is surjective; in fact, it is an isomorphism by (B.27). Hence, K ⊗ ρ maps K ⊗ A onto End_K(K ⊗ S) and so K ⊗ S is irreducible by Example 1.12. Conversely, if S is absolutely irreducible and k̄ is an algebraic closure of k, then k̄ ⊗ S is a finite-dimensional irreducible representation of k̄ ⊗ A. Hence Schur's Lemma implies that D(k̄ ⊗ S) = k̄. Since D(k̄ ⊗ S) ≅ k̄ ⊗ D(S) (Exercise 1.2.4), we conclude that D(S) = k.

Corollary 1.37 (Frobenius reciprocity). Let φ: A → B be a homomorphism of semisimple k-algebras and let S ∈ Irr_fin A and T ∈ Irr_fin B be absolutely irreducible. Then

  m(S, Res^B_A T) = m(T, Ind^B_A S).


Proof. Observe that V := Ind^B_A S = φ_* S ∈ Rep B is completely reducible of finite length, being a finitely generated representation of a semisimple algebra. Thus, Proposition 1.32 gives m(T, V) = dim_k Hom_B(V, T), because D(T) = k. Similarly, putting W = Res^B_A T = φ^* T, we obtain m(S, W) = dim_k Hom_A(S, W). Finally, Hom_B(V, T) ≅ Hom_A(S, W) as k-vector spaces by Proposition 1.9.

Kernels

An ideal I of an arbitrary k-algebra A will be called cofinite if dim_k A/I < ∞. We put

  Spec_cofin A := {P ∈ Spec A | dim_k A/P < ∞}

and likewise for Prim_cofin A and MaxSpec_cofin A. The next theorem shows that all three sets coincide and that they are in bijection with Irr_fin A. Of course, if A is finite dimensional, then Irr_fin A = Irr A, Spec_cofin A = Spec A, etc.

Theorem 1.38. All cofinite prime ideals of any A ∈ Alg_k are maximal; so

  MaxSpec_cofin A = Prim_cofin A = Spec_cofin A.

Moreover, there is the following bijection:

  Irr_fin A ∼→ Spec_cofin A,  S ↦ Ker S.

Proof. In view of the general inclusions (1.40), the equalities MaxSpec_cofin A = Prim_cofin A = Spec_cofin A will follow if we can show that any P ∈ Spec_cofin A is in fact maximal. For this, after replacing A by A/P, we may assume that A is a finite-dimensional algebra that is prime (i.e., the product of any two nonzero ideals of A is nonzero) and we must show that A is simple. Choose a minimal nonzero left ideal L ⊆ A. Then L ∈ Irr_fin A. Furthermore, since A is prime and I = LA is a nonzero ideal of A with (Ker L)I = 0, we must have Ker L = 0. Therefore, A ≅ A_L and so Burnside's Theorem implies that A is isomorphic to a matrix algebra over some division algebra, whence A is simple (Exercise 1.1.14).

For the asserted bijection with Irr_fin A, note that an irreducible representation S is finite dimensional if and only if Ker S is cofinite. Therefore, the surjection Irr A ↠ Prim A in (1.35) restricts to a surjection Irr_fin A ↠ Spec_cofin A. In order to show that this map is also injective, let S, S' ∈ Irr_fin A be such that Ker S = Ker S' and let A' denote the quotient of A modulo this ideal. Then S, S' ∈ Irr A' and A' is isomorphic to a matrix algebra over some division algebra by Burnside's Theorem. Since such algebras have only one irreducible representation up to equivalence by Wedderburn's Structure Theorem, we must have S ≅ S' in Rep A' and hence in Rep A as well.
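Burnside's Theorem, used twice in the proof above, can be observed numerically: for the standard 2-dimensional irreducible representation of the symmetric group S_3 over C, the image A_V of the group algebra is all of End(C²) = Mat_2(C). The sketch below (Python/NumPy; the rotation/reflection matrices are the usual realization of this representation) checks that the six group-element matrices span the 4-dimensional space Mat_2(C):

```python
import numpy as np

# The 2-dimensional irreducible representation of S_3: generated by a
# rotation R through 120 degrees and a reflection F.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])
F = np.array([[1.0, 0.0], [0.0, -1.0]])

# The six group elements of S_3 in this representation.
group = [np.eye(2), R, R @ R, F, R @ F, R @ R @ F]

# Flatten each 2x2 matrix to a vector in C^4 and compute the rank of the span.
span_rank = np.linalg.matrix_rank(np.array([g.flatten() for g in group]))

# Burnside: since the representation is irreducible over C (a splitting
# field), its image spans all of Mat_2(C), which has dimension 4.
assert span_rank == 4
```

By contrast, a reducible representation would span a proper subalgebra of strictly smaller dimension.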


1.4.7. Finite-Dimensional Algebras

The following theorem gives an ideal-theoretic characterization of semisimplicity for finite-dimensional algebras.

Theorem 1.39. The following are equivalent for a finite-dimensional algebra A: (i) A is semisimple; (ii) rad A = 0; (iii) A has no nonzero nilpotent right or left ideals.

Proof. If A is semisimple, then A_reg is a sum of irreducible representations. Since (rad A).S = 0 for all S ∈ Irr A, it follows that (rad A).A_reg = 0 and so rad A = 0. Thus (i) implies (ii). In view of Proposition 1.25, (ii) and (iii) are equivalent. It remains to show that (ii) implies (i). So assume that rad A = ∩_{S∈Irr A} Ker S = 0. Since A is finite dimensional, some finite intersection ∩_{i=1}^r Ker S_i must be 0, the Ker S_i being pairwise distinct maximal ideals of A (Theorem 1.38). The Chinese Remainder Theorem (e.g., [25, Proposition 9 on p. A I.104]) yields an isomorphism of algebras A ≅ ∏_{i=1}^r A/Ker S_i, and Burnside's Theorem (§1.4.6) further tells us that each A/Ker S_i is a matrix algebra over a division algebra. Semisimplicity of A now follows from Wedderburn's Structure Theorem. This proves the theorem.

Condition (iii), for an arbitrary algebra A, is equivalent to semiprimeness of the zero ideal of A; see the proof of Proposition 1.25. Such algebras are called semiprime. Similarly, any algebra whose zero ideal is prime is called prime, and likewise for "primitive". For a finite-dimensional algebra A, the properties of being prime, primitive, or simple are all equivalent by Theorem 1.38, and Theorem 1.39 gives the same conclusion for the properties of being semiprime, semiprimitive, or semisimple. We will refer to the algebra A_{s.p.} = A/rad A, which is always semiprimitive by (1.42), as the semisimplification of A when A is finite dimensional.
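Theorem 1.39 gives an easy way to see that a group algebra can fail to be semisimple in positive characteristic: in F_2[Z/2], the element 1 + g spans a nonzero nilpotent ideal, since (1 + g)² = 1 + 2g + g² = 2 + 2g = 0. A minimal sketch (Python, modeling F_2[Z/2] elements as coefficient pairs; this example is standard but not taken from the text above):

```python
# Model F_2[Z/2]: an element a + b*g is the pair (a, b) with a, b in {0, 1}.
# Multiplication uses g^2 = 1 and arithmetic mod 2.

def mult(x, y):
    """(a + b*g)(c + d*g) = (ac + bd) + (ad + bc)g over F_2."""
    a, b = x
    c, d = y
    return ((a * c + b * d) % 2, (a * d + b * c) % 2)

one_plus_g = (1, 1)

# (1 + g)^2 = 0: the ideal spanned by 1 + g is nonzero and nilpotent,
# so F_2[Z/2] is not semisimple by Theorem 1.39(iii).
assert mult(one_plus_g, one_plus_g) == (0, 0)

# This nilpotent ideal is central: x(1 + g) = (1 + g)x for all x.
for x in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    assert mult(x, one_plus_g) == mult(one_plus_g, x)
```

Over C, by contrast, C[Z/2] is semisimple, in line with Maschke-type behavior when the characteristic does not divide the group order.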

Exercises for Section 1.4

In these exercises, A denotes a k-algebra.

1.4.1 (Radical of a representation). The radical of a representation V ∈ Rep A is defined by

  rad V := ∩ {all maximal subrepresentations of V}.

Here, a subrepresentation M ⊆ V is called maximal if V/M is irreducible. The empty intersection is understood to be equal to V. Thus, the Jacobson radical rad A is the same as rad A_reg; see Exercise 1.3.3. Prove:

(a) If V is finitely generated, then rad V ≠ V. (Use Exercise 1.1.3.)

(b) If V is completely reducible, then rad V = 0. Give an example showing that the converse need not hold.


1. Representations of Algebras

(c) If U ⊆ V is a subrepresentation such that V/U is completely reducible, then U ⊇ rad V. If V is artinian (see Exercise 1.1.5), then the converse holds.
(d) (rad A).V ⊆ rad V; equality holds if A is finite dimensional.

1.4.2 (Matrix algebras). Let S ∈ Irr A. Viewing S^⊕n as a representation of Matn(A) as in Lemma 1.4(b), prove that S^⊕n is irreducible.

1.4.3 (Semisimple algebras). (a) Show that the algebra A is semisimple if and only if all short exact sequences in Rep A are split. (See Exercise 1.1.2.)
(b) Let A ≅ ∏_{i=1}^{t} Mat_{mi}(Di) be semisimple. Describe the ideals I of A and the factors A/I. Show that A has exactly t prime ideals and that there is a bijection Irr A ∼→ Spec A, S ↔ Ker S. Moreover, all ideals I of A are idempotent: I² = I.

1.4.4 (Faithful completely reducible representations). Assume that V ∈ Rep A is faithful and completely reducible. Show:
(a) The algebra A is semiprime. In particular, if A is finite dimensional, then A is semisimple.
(b) If V is finite dimensional, then A is finite dimensional and semisimple. Moreover, Irr A is the set of distinct irreducible constituents of V.
(c) The conclusion of (b) fails if V is not finite dimensional: A need not be semisimple.

1.4.5 (PI-algebras). Let A be an affine PI-algebra. It is known that all irreducible representations of A are finite dimensional and rad A is nilpotent; see [184, Theorems 6.3.3 and 6.3.39]. Deduce that Prim A is finite if and only if A is finite dimensional.

1.4.6 (Galois descent). Let K be a field and let Γ be a subgroup of Aut(K). Let k = K^Γ be the field of Γ-invariants in K. For a given V ∈ Vectk, consider the action of Γ on K ⊗ V with γ ∈ Γ acting by γ(λ ⊗ v) = (γ ⊗ IdV)(λ ⊗ v) = γ(λ) ⊗ v. View V as a k-subspace of K ⊗ V via the embedding V ↪ K ⊗ V, v ↦ 1 ⊗ v.
(a) Show that V = (K ⊗ V)^Γ = { x ∈ K ⊗ V | γ(x) = x for all γ ∈ Γ }, the space of Γ-invariants in K ⊗ V.
(b) Let W ⊆ K ⊗ V be any Γ-stable K-subspace. Show that W = K ⊗ W^Γ, where W^Γ = W ∩ V is the space of Γ-invariants in W.

1.4.7 (Galois action). Let K/k be a finite Galois extension with Galois group Γ = Gal(K/k). Show:
(a) The kernel of any f ∈ Hom_Algk(A, K) belongs to MaxSpec A.
(b) For f, f′ ∈ Hom_Algk(A, K), we have Ker f = Ker f′ if and only if f′ = γ ∘ f for some γ ∈ Γ. (You can use Exercise 1.4.6 for this.)

1.4.8 (Extension of scalars and complete reducibility). Let V ∈ Rep A. For a given field extension K/k, consider the representation K ⊗ V ∈ Rep(K ⊗ A) (§1.2.2). Prove:


(a) If K ⊗ V is completely reducible, then so is V.
(b) If V is irreducible and K/k is finite separable, then K ⊗ V is completely reducible of finite length. (Reduce to the case where K/k is Galois and use Exercise 1.4.6.) Give an example showing that K ⊗ V need not be irreducible.
(c) If the field k is perfect and V is finite dimensional and completely reducible, then K ⊗ V is completely reducible for every field extension K/k.

1.4.9 (Extension of scalars and composition factors). Let V, W ∈ Repfin A and let K/k be a field extension. Prove:
(a) If V and W are irreducible and nonequivalent, then the representations K ⊗ V and K ⊗ W of K ⊗ A have no common composition factor.
(b) Conclude from (a) that, in general, K ⊗ V and K ⊗ W have a common composition factor if and only if V and W have a common composition factor.

1.4.10 (Splitting fields). Assume that A is finite dimensional and defined over some subfield k0 ⊆ k, that is, A ≅ k ⊗_{k0} A0 for some k0-algebra A0. Assume further that k is a splitting field for A. For a given field F with k0 ⊆ F ⊆ k, consider the F-algebra B = F ⊗_{k0} A0. Show that F is a splitting field for B if and only if each S ∈ Irr A is defined over F, that is, S ≅ k ⊗_F T for some T ∈ Irr B.

1.4.11 (Separable algebras). The algebra A is called separable if there exists an element e = Σ_i xi ⊗ yi ∈ A ⊗ A satisfying m(e) = 1 and ae = ea for all a ∈ A. Here, m : A ⊗ A → A is the multiplication map and A ⊗ A is viewed as an (A, A)-bimodule using left multiplication on the first factor and right multiplication on the second; so the conditions on e are: Σ_i xi yi = 1 and Σ_i axi ⊗ yi = Σ_i xi ⊗ yi a.
(a) Assuming A to be separable, show that K ⊗ A is separable for every field extension K/k. Conversely, if K ⊗ A is separable for some K/k, then A is separable.
(b) With e as above, show that the products xi yj generate A as a k-vector space. Thus, separable k-algebras are finite dimensional.
(c) For V, W ∈ Rep A, consider Homk(V, W) ∈ Rep(A ⊗ A^op) as in Example 1.3 and view e ∈ A ⊗ A^op. Show that e.f ∈ Hom_A(V, W) for all f ∈ Homk(V, W). Furthermore, assuming W to be a subrepresentation of V and f|_W = Id_W, show that (e.f)|_W = Id_W. Conclude that separable algebras are semisimple.
(d) Conclude from (a)–(c) that A is separable if and only if A is finite dimensional and K ⊗ A is semisimple for every field extension K/k.
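For Exercise 1.4.11, the standard separability element of A = Matn(k) is e = Σ_i e_{i,1} ⊗ e_{1,i} (a well-known example, not given in the text). The sketch below checks both defining conditions, modelling x ⊗ y by the Kronecker product kron(x, y), so that a.(x ⊗ y) becomes kron(ax, y) and (x ⊗ y).a becomes kron(x, ya):

```python
def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def kron(a, b):
    """Kronecker product, modelling the simple tensor a (x) b."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m] for j in range(n * m)] for i in range(n * m)]

def madd(ms):
    """Entrywise sum of a list of equal-size square matrices."""
    d = len(ms[0])
    return [[sum(m[i][j] for m in ms) for j in range(d)] for i in range(d)]

n = 3
E = lambda i, j: [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]
I = [[1 if r == c else 0 for c in range(n)] for r in range(n)]

pairs = [(E(i, 0), E(0, i)) for i in range(n)]              # e = sum_i E_{i1} (x) E_{1i}
a = [[r * n + c + 1 for c in range(n)] for r in range(n)]   # an arbitrary test matrix

m_e = madd([mul(x, y) for x, y in pairs])                   # m(e) = sum_i E_{i1} E_{1i}
assert m_e == I                                             # m(e) = 1

ae = madd([kron(mul(a, x), y) for x, y in pairs])           # a.e
ea = madd([kron(x, mul(y, a)) for x, y in pairs])           # e.a
assert ae == ea                                             # ae = ea for this a
print("separability element verified for Mat_%d" % n)
```

By bilinearity, checking the identity on one generic matrix a per basis direction suffices; the exact formula Σ_i aE_{i1} ⊗ E_{1i} = Σ_i E_{i1} ⊗ E_{1i}a holds for all a.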

1.5. Characters The focus in this section continues to be on the ﬁnite-dimensional representations of a given k-algebra A. Analyzing a typical V ∈ Repﬁn A can be a daunting task, especially if the dimension of V is large. In this case, explicit computations with the operators aV (a ∈ A) involve prohibitively complex matrix operations. Fortunately,


it turns out that a surprising amount of information can be gathered from just the traces of all aV; these traces form the so-called character of V. For example, we shall see that if V is completely reducible and char k = 0, then the representation V is determined up to equivalence by its character (Theorem 1.45). Before proceeding, the reader may wish to have a quick look at Appendix B for the basics concerning traces of linear operators on a finite-dimensional vector space. Throughout this section, A denotes an arbitrary k-algebra.

1.5.1. Definition and Basic Properties The character of V ∈ Repfin A is the linear form on A that is defined by:

(1.50)    χV : A → k,  a ↦ trace(aV).

Characters tend to be most useful if char k = 0; the following example gives a first illustration of this fact.

Example 1.40 (The regular character). If the algebra A is finite dimensional, then we can consider the character of the regular representation Areg,

χreg := χ_Areg : A → k.

The regular character χreg is also denoted by T_{A/k} when A or k need to be made explicit. For the matrix algebra A = Matn(k), one readily checks (Exercise 1.5.2) that χreg = n trace : Matn(k) → k. In particular, the regular character χreg of Matn(k) vanishes iff char k divides n. Now let K/k be a finite field extension and view K as a k-algebra. All finite-dimensional representations of K are equivalent to Kreg^⊕n for suitable n. It is a standard fact from field theory that χreg ≠ 0 if and only if the extension K/k is separable (Exercise 1.5.5). Thus, by the lemma below, all characters χV of the k-algebra K vanish if K/k is not separable.

Additivity. The following lemma states a basic property of characters: additivity on short exact sequences of representations.

Lemma 1.41. If 0 → U → V → W → 0 is a short exact sequence in Repfin A, then χV = χU + χW.

Proof. First note that if f : U ∼→ V is an isomorphism of finite-dimensional representations, then χV = χU. Indeed, aV = f ∘ aU ∘ f⁻¹ holds for all a ∈ A by (1.22), and hence trace(aV) = trace(f ∘ aU ∘ f⁻¹) = trace(aU ∘ f⁻¹ ∘ f) = trace(aU).


Now let 0 → U →f V →g W → 0 be a short exact sequence of finite-dimensional representations. Thus, Im f is an A-submodule of V such that U ≅ Im f and W ≅ V/Im f as A-modules. In view of the first paragraph, we may assume that U is an A-submodule of V and W = V/U. Extending a k-basis of U to a k-basis of V, the matrix of each aV has block upper triangular form:

( aU    ∗
   0   aV/U ).

Taking traces, we obtain trace(aV) = trace(aU) + trace(aV/U), as desired. □

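The computation behind Example 1.40's claim χreg = n·trace for Matn(k) can be replayed mechanically by flattening matrices and building left-multiplication operators; a sketch for n = 2 (the test matrix a is our own arbitrary choice):

```python
n = 2
dim = n * n  # dim_k Mat_n(k)

def vec(m):
    """Flatten a matrix to a coordinate vector w.r.t. the basis E_{ij}."""
    return [m[i][j] for i in range(n) for j in range(n)]

def basis(t):
    """t-th basis matrix E_{ij} with (i, j) = divmod(t, n)."""
    i, j = divmod(t, n)
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def L(a):
    """Matrix of left multiplication by a on the flattened regular representation;
    column t is vec(a * E_{ij})."""
    cols = [vec(mul(a, basis(t))) for t in range(dim)]
    return [[cols[t][r] for t in range(dim)] for r in range(dim)]

a = [[5, 7], [-1, 3]]
chi_reg = sum(L(a)[t][t] for t in range(dim))     # trace of the regular action
assert chi_reg == n * (a[0][0] + a[1][1])         # chi_reg(a) = n * trace(a)
print(chi_reg)  # 16
```

The general argument is the same: the coefficient of E_{ij} in a·E_{ij} is a_{ii}, so summing over the n² basis matrices gives n·trace(a).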
Multiplicativity. Let V ∈ Rep A and W ∈ Rep B; so we have algebra maps A → Endk(V) and B → Endk(W). By bifunctoriality of the tensor product of algebras (Exercise 1.1.11), we obtain an algebra map A ⊗ B → Endk(V) ⊗ Endk(W). Composing this map with the canonical map Endk(V) ⊗ Endk(W) → Endk(V ⊗ W) in (B.17), which is also a map in Algk, we obtain the following algebra map:

(1.51)    A ⊗ B → Endk(V ⊗ W),  a ⊗ b ↦ aV ⊗ bW.

This makes V ⊗ W a representation of A ⊗ B, called the outer tensor product of V and W; it is sometimes denoted by V ⊠ W. If V and W are finite dimensional, then (B.26) gives

(1.52)    χ_{V⊗W}(a ⊗ b) = χV(a) χW(b).

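Under the identification of aV ⊗ bW with a Kronecker product of matrices, (1.52) reduces to the familiar rule trace(a ⊗ b) = trace(a)·trace(b); a quick numerical check with arbitrarily chosen matrices:

```python
def kron(a, b):
    """Kronecker product of a (p x p) and b (q x q), representing a (x) b."""
    p, q = len(a), len(b)
    return [[a[i // q][j // q] * b[i % q][j % q] for j in range(p * q)] for i in range(p * q)]

def tr(m):
    return sum(m[i][i] for i in range(len(m)))

a = [[2, -1], [4, 3]]                    # the operator a_V on a 2-dimensional V
b = [[1, 0, 2], [0, 5, 1], [3, 0, -2]]   # the operator b_W on a 3-dimensional W

# trace(a (x) b) = trace(a) * trace(b): here 5 * 4 = 20
assert tr(kron(a, b)) == tr(a) * tr(b)
print(tr(kron(a, b)))  # 20
```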
1.5.2. Spaces of Trace Forms Each character χV (V ∈ Repfin A) is a linear form on A, but more can be said:
(i) By virtue of the standard trace identity, trace(f ∘ g) = trace(g ∘ f) for f, g ∈ Endk(V), all characters vanish on the k-subspace [A, A] ⊆ A that is spanned by the Lie commutators [a, a′] = aa′ − a′a with a, a′ ∈ A.
(ii) The character χV certainly also vanishes on Ker V; note that this is a cofinite ideal of A.
(iii) In fact, χV vanishes on the semiprime radical √(Ker V) (as defined in Exercise 1.3.1). To see this, note that √(Ker V) coincides with the preimage of rad(A/Ker V) in A by Theorem 1.38, and so some power of √(Ker V) is contained in Ker V by Proposition 1.25. Therefore, all elements of √(Ker V) act as nilpotent endomorphisms on V. Since nilpotent endomorphisms have trace 0, it follows that √(Ker V) ⊆ Ker χV.
Below, we will formalize these observations.


Universal Trace and Trace Forms. By (i), each character factors through the canonical map:

(1.53)    Tr : A → Tr A := A/[A, A],  a ↦ a + [A, A].

The map Tr satisfies the trace identity Tr(aa′) = Tr(a′a) for a, a′ ∈ A; it will be referred to as the universal trace of A. Note that Tr gives a functor Algk → Vectk, because any k-algebra map φ : A → B satisfies φ([A, A]) ⊆ [B, B], and hence φ passes down to a k-linear homomorphism Tr φ : Tr A → Tr B.

We will identify the linear dual (Tr A)* with the subspace of A* consisting of all linear forms on A that vanish on [A, A]. This subspace will be referred to as the space of trace forms on A and denoted by A*trace. Thus, all characters are trace forms. If φ : A → B is a k-algebra map, then the restriction of the dual map φ* : B* → A* in Vectk to the space of trace forms on B gives a map φ*trace : B*trace → A*trace. In this way, we obtain a contravariant functor, (·)*trace : Algk → Vectk.

Finite Trace Forms. By (ii) and (iii), all characters belong to the following subspaces of A*trace:

A°trace := { f ∈ A*trace | f vanishes on some cofinite ideal of A },
C(A)    := { t ∈ A*trace | t vanishes on some cofinite semiprime ideal of A } ⊆ A°trace.

To see that these are indeed subspaces of A*trace, observe that the intersection of any two cofinite (semiprime) ideals is again cofinite (semiprime). We will call A°trace the space of finite trace forms on A in reference to the so-called finite dual, A° = { f ∈ A* | f vanishes on some cofinite ideal of A }, which will play a prominent role in Part IV. As above, it is easy to see that (·)° and (·)°trace are contravariant functors Algk → Vectk. To summarize, for any V ∈ Repfin A, we have

(1.54)    χV ∈ C(A) ⊆ A°trace ⊆ A*trace ⊆ A*.

We will see shortly that, if k is a splitting ﬁeld for A, then C( A) is the subspace of A∗ that is spanned by the characters of all ﬁnite-dimensional representations of A (Theorem 1.44).


The Case of Finite-Dimensional Algebras. Now assume that A is finite dimensional. Then all ideals are cofinite and the Jacobson radical rad A is the unique smallest semiprime ideal of A (Proposition 1.25). Therefore, C(A) is exactly the subspace of A*trace = A°trace consisting of all trace forms that vanish on rad A. The latter space may be identified with (As.p.)*trace, where As.p. = A/rad A is the semisimplification of A. So

(1.55)    C(A) ≅ (A/([A, A] + rad A))* ≅ (Tr As.p.)* ≅ (As.p.)*trace.

1.5.3. Algebras in Positive Characteristics This subsection focuses on the case where char k = p > 0. Part (a) of the following lemma shows that the p-th power map gives an additive endomorphism of Tr A = A/[A, A], often called the Frobenius endomorphism. For part (b), we put

T(A) := { a ∈ A | a^{p^n} ∈ [A, A] for some n ∈ Z+ }.

Lemma 1.42. Assume that char k = p > 0. Then:
(a) (a + b)^p ≡ a^p + b^p mod [A, A] for all a, b ∈ A. Furthermore, a ∈ [A, A] implies a^p ∈ [A, A].
(b) T(A) is a k-subspace of A containing [A, A]. If I is a nilpotent ideal of A, then T(A)/I ≅ T(A/I) via the canonical map A ↠ A/I.

Proof. (a) Expanding (a + b)^p, we obtain the sum of all 2^p products of length p with factors equal to a or b. The cyclic group Cp acts on the set of these products by cyclic permutations of the factors. There are two fixed points, namely the products a^p and b^p; all other orbits have size p. Modulo [A, A], the elements of each orbit are identical, because x1 x2 ⋯ xs ≡ xs x1 ⋯ x_{s−1} mod [A, A]. Since char k = p, the orbits of size p contribute 0 to the sum modulo [A, A] and we are left with (a + b)^p ≡ a^p + b^p mod [A, A]. This proves the first assertion of (a). Next, we show that a ∈ [A, A] implies a^p ∈ [A, A]. By the foregoing, we may assume that a = xy − yx for some x, y ∈ A. Calculating modulo [A, A], we have a^p ≡ (xy)^p − (yx)^p = [x, z] with z = y(xy)^{p−1}; so a^p ≡ 0 mod [A, A], as desired.

(b) Let φ = (·)^p denote the Frobenius endomorphism of Tr A as provided by part (a). Then 0 = Ker φ⁰ ⊆ Ker φ ⊆ ⋯ ⊆ Ker φ^n ⊆ ⋯ is a chain of subgroups of Tr A and each Ker φ^n is also stable under scalar multiplication, because φ^n(λx) = λ^{p^n} φ^n(x) for x ∈ Tr A and λ ∈ k. Therefore, the union of all Ker φ^n is a k-subspace of Tr A and the preimage of this subspace under the canonical map A ↠ Tr A is a k-subspace of A. This subspace is T(A). Finally, let I be a nilpotent ideal of A. Then I is clearly contained in T(A) and T(A) maps to T(A/I) under the canonical map. For surjectivity, let z = a + I ∈ T(A/I). Then z^{p^n} = a^{p^n} + I ∈ [A/I, A/I] for some n and so a^{p^n} ∈ [A, A] + I. Part (a) gives


([A, A] + I)^{p^m} ⊆ [A, A] + I^{p^m} for all m ∈ Z+. Choosing m large enough, we have I^{p^m} = 0. Hence a^{p^{n+m}} ∈ [A, A], proving that a ∈ T(A), as desired. □

Returning to our discussion of trace forms, we now give a description of C(A) for a finite-dimensional algebra A over a field k with char k = p > 0. We also assume that k is a splitting field for A in the sense of §1.2.5.

Proposition 1.43. Let A be finite dimensional and assume that char k = p > 0 and that k is a splitting field for A. Then C(A) ≅ (A/T(A))*.

Proof. In light of (1.55), we need to show that A/T(A) ≅ Tr As.p.. But rad A is nilpotent (Proposition 1.25), and so T(A)/rad A ≅ T(As.p.) by Lemma 1.42. Therefore, it suffices to show that T(As.p.) = [As.p., As.p.]. By our assumption on k, the Wedderburn decomposition As.p. ≅ ∏_{S ∈ Irr A} A_S has components A_S that are matrix algebras over k. Clearly, T(As.p.) ≅ ∏_S T(A_S) and [As.p., As.p.] ≅ ∏_S [A_S, A_S]. Thus, it suffices to consider a matrix algebra Matd(k). We have seen in the proof of Corollary 1.35 that the space [Matd(k), Matd(k)] consists of all trace-0 matrices and hence has codimension 1 in Matd(k). Since the idempotent matrix e_{1,1} does not belong to T(Matd(k)), it follows that T(Matd(k)) = [Matd(k), Matd(k)], finishing the proof. □

1.5.4. Irreducible Characters Characters of finite-dimensional irreducible representations are referred to as irreducible characters. Since every finite-dimensional representation has a composition series, all characters are sums of irreducible characters by Lemma 1.41. The following theorem lists some important properties of the collection of irreducible characters of an arbitrary algebra A ∈ Algk.

Theorem 1.44. (a) The irreducible characters χ_S for S ∈ Irrfin A such that char k does not divide dimk D(S) are linearly independent.
(b) # Irrfin A ≤ dimk C(A).
(c) If k is a splitting field for A, then the irreducible characters of A form a k-basis of C(A). In particular, # Irrfin A = dimk C(A) holds in this case.

Proof. (a) Suppose that Σ_{i=1}^{r} λi χ_{Si} = 0 with λi ∈ k and distinct Si ∈ Irrfin A such that char k does not divide dimk D(Si). We need to show that all λi = 0. By Theorem 1.38, the annihilators Ker Si are distinct maximal ideals of A, and A/Ker Si ≅ Bi := Mat_{mi}(Di) by Burnside's Theorem (§1.4.6), with Di = D(Si)^op. The Chinese Remainder Theorem yields the following surjective homomorphism


of algebras:

(1.56)    A ↠ A/⋂_{i=1}^{r} Ker Si ∼→ B := ∏_{i=1}^{r} Bi,  a ↦ (aS1, …, aSr).

Let ej ∈ B be the element corresponding to the r-tuple with the matrix e_{1,1} ∈ Mat_{mj}(Dj) in the j-th component and the 0-matrix in all other components. It is easy to see (Exercise 1.5.2(a)) that χ_{Si}(ej) = dimk(Di) 1k δ_{i,j}, because A acts on each Si via the standard action of Mat_{mi}(Di) on Di^{⊕mi}. We conclude that

0 = Σ_i λi χ_{Si}(ej) = λj dimk(Dj) 1k for all j. Finally, our hypothesis implies that dimk(Dj) 1k ≠ 0, giving the desired conclusion λj = 0.

(b) Let S1, …, Sr be nonequivalent finite-dimensional irreducible representations of A and consider the epimorphism (1.56). Since B is finite dimensional and semiprime, we have B*trace ↪ C(A). Moreover, clearly, B*trace ≅ ⊕_i (Bi)*trace. Thus, it suffices to show that every finite-dimensional k-algebra B has a nonzero trace form. To see this, consider the algebra B̄ = B ⊗ k̄, where k̄ is an algebraic closure of k, and fix some S ∈ Irr B̄. The character χ_S is nonzero by (a) and its restriction to B is nonzero as well, because the canonical image of B generates B̄ as a k̄-vector space. Composing χ_S with a suitable k-linear projection of k̄ onto k yields the desired trace form.

(c) Since D(S) = k for all S ∈ Irrfin A by hypothesis, the irreducible characters χ_S are linearly independent by (a). It suffices to show that the χ_S span C(A). But any t ∈ C(A) vanishes on some cofinite semiprime ideal I of A. The algebra A/I is split semisimple by Theorem 1.39 and the trace form t can be viewed as an element of C(A/I) = (A/I)*trace. Since # Irr A/I = dimk (A/I)*trace by Corollary 1.35(a), we know that t is a linear combination of irreducible characters of A/I. Viewing these characters as (irreducible) characters of A by inflation, we have written t as a linear combination of irreducible characters of A. This completes the proof. □

Characters of Completely Reducible Representations. It is a fundamental fact of representation theory that, under some restrictions on char k, finite-dimensional completely reducible representations are determined up to equivalence by their character. In detail:

Theorem 1.45. Let V, W ∈ Repfin A be completely reducible and assume that char k = 0 or char k > max{dimk V, dimk W}. Then V ≅ W if and only if χV = χW.

Proof.
Since V ≅ W clearly always implies χV = χW (Lemma 1.41), let us assume χV = χW and prove that V ≅ W. To this end, write V ≅ ⊕_{S ∈ Irrfin A} S^{⊕mS} and W ≅ ⊕_{S ∈ Irrfin A} S^{⊕nS} with mS = m(S, V) and nS = m(S, W). Lemma 1.41 gives χV = Σ_S mS χ_S and χW = Σ_S nS χ_S, and we need to show that mS = nS for all S.


But mS ≠ 0 or nS ≠ 0 implies that dimk S is bounded above by dimk V or dimk W. Thus, our hypothesis on k implies that char k does not divide dimk D(S), because dimk D(S) is a divisor of dimk S. Therefore, Theorem 1.44(a) allows us to deduce from the equality 0 = χV − χW = Σ_S (mS − nS) χ_S that (mS − nS) 1k = 0 for all S. Since mS, nS ≤ max{dimk V, dimk W}, our hypothesis on k gives mS = nS, as desired. □

1.5.5. The Grothendieck Group R(A) Certain aspects of the representation theory of a given k-algebra A, especially those related to characters, can be conveniently packaged with the aid of the Grothendieck group of finite-dimensional representations of A. This group will be denoted¹¹ by R(A). By definition, R(A) is the abelian group with generators [V] for V ∈ Repfin A and with relations [V] = [U] + [W] for each short exact sequence 0 → U → V → W → 0 in Repfin A. Formally, R(A) is the factor of the free abelian group on the set of all isomorphism classes (V) of finite-dimensional representations V of A—these isomorphism classes do indeed form a set—modulo the subgroup that is generated by the elements (V) − (U) − (W) arising from short exact sequences 0 → U → V → W → 0 in Repfin A. The generator [V] is the image of (V) in R(A). The point of this construction is as follows. Suppose we have a rule assigning to each V ∈ Repfin A a value f(V) in some abelian group (G, +) in such a way that the assignment is additive on short exact sequences in Repfin A, in the sense that exactness of 0 → U → V → W → 0 implies that f(V) = f(U) + f(W) holds in G. Then we obtain a well-defined group homomorphism f : R(A) → G, [V] ↦ f(V). We will use the above construction for certain classes of representations other than the objects of Repfin A in §2.1.3; for a discussion of Grothendieck groups in greater generality, the reader may wish to consult [31, §11].

Group-Theoretical Structure. If 0 = V0 ⊆ V1 ⊆ ⋯ ⊆ Vl = V is any chain of finite-dimensional representations of A, then the relations of R(A) and a straightforward induction imply that

[V] = Σ_{i=1}^{l} [Vi/V_{i−1}].

¹¹Other notation is also used in the literature; for example, R(A) is denoted by G0^k(A) in Swan [201] and by Rk(A) in Bourbaki [31].

In particular, taking a composition series for V, we see that R(A) is generated by


the elements [S] with S ∈ Irrfin A. In fact, these generators are Z-independent:

Proposition 1.46. The group R(A) is isomorphic to the free abelian group with basis the set Irrfin A of isomorphism classes of finite-dimensional irreducible representations of A. An explicit isomorphism is given by multiplicities:

μ : R(A) ∼→ Z^⊕Irrfin A,  [V] ↦ (μ(S, V))_S.

In particular, for V, W ∈ Repfin A, the equality [V] = [W] holds in R(A) if and only if μ(S, V) = μ(S, W) for all S ∈ Irrfin A.

Proof. The map μ yields a well-defined group homomorphism by virtue of the fact that multiplicities are additive on short exact sequences by (1.32). For S ∈ Irrfin A, one has μ([S]) = (δ_{S,T})_T. These elements form the standard Z-basis of Z^⊕Irrfin A. Therefore, the generators [S] are Z-independent and μ is an isomorphism. □

Functoriality. Pulling back representations along a given algebra map φ : A → B (§1.2.2) turns short exact sequences in Repfin B into short exact sequences in Repfin A. Therefore, we obtain the following group homomorphism:

R(φ) : R(B) → R(A),  [V] ↦ [φ*V].

In this way, we may view R as a contravariant functor from Algk to the category AbGroups ≡ ZMod of all abelian groups.

Lemma 1.47. Let φ : A ↠ B be a surjective algebra map. Then R(φ) is a split injection coming from the inclusion φ* : Irrfin B ↪ Irrfin A:

R(B) ---R(φ)---> R(A)
  |≅               |≅
  v                v
Z^⊕Irrfin B ⊂---> Z^⊕Irrfin A

If Ker φ ⊆ rad A, then R(φ) is an isomorphism.

Proof. The first assertion is immediate from Proposition 1.46, since inflation φ* clearly gives inclusions Irr B ↪ Irr A and Irrfin B ↪ Irrfin A. If Ker φ ⊆ rad A, then these inclusions are in fact bijections by (1.41). □


Extension of the Base Field. Let K/k be a field extension. For any A ∈ Algk and any V ∈ Repfin A, consider the algebra K ⊗ A ∈ AlgK and the representation K ⊗ V ∈ Repfin(K ⊗ A) as in (1.25). By exactness of the scalar extension functor K ⊗ ·, this process leads to the following well-defined group homomorphism:

(1.57)    K ⊗ · : R(A) → R(K ⊗ A),  [V] ↦ [K ⊗ V].

Lemma 1.48. The scalar extension map (1.57) is injective.

Proof. In view of Proposition 1.46, it suffices to show that, for nonequivalent S, T ∈ Irrfin A, the representations K ⊗ S, K ⊗ T ∈ Repfin(K ⊗ A) have no common composition factor. To prove this, we may replace A by A/(Ker S ∩ Ker T), thereby reducing to the case where the algebra A is semisimple. The central primitive idempotent e(S) ∈ A acts as the identity on S and as 0 on T; see (1.47). Viewed as an element of K ⊗ A, we have the same actions of e(S) on K ⊗ S and on K ⊗ T, whence these representations cannot have a common composition factor. □

The Character Map. Since characters are additive on short exact sequences in Repfin A by Lemma 1.41, they give rise to a well-defined group homomorphism

(1.58)    χ : R(A) → C(A) ⊆ A°trace,  [V] ↦ χV,

with C(A) ⊆ A°trace as in (1.54). The character map χ is natural in A; that is, for any morphism φ : A → B in Algk, the following diagram clearly commutes:

(1.59)
R(B) ----χ----> B°trace
  |                |
R(φ)            φ°trace
  v                v
R(A) ----χ----> A°trace

Since C(A) is a k-vector space, χ lifts uniquely to a k-linear map

χk : Rk(A) := R(A) ⊗Z k → C(A).

Proposition 1.49. The map χk is injective. If k is a splitting field for A, then χk is an isomorphism.

Proof. First assume that k is a splitting field for A. By Proposition 1.46, the classes [S] ⊗ 1 with S ∈ Irrfin A form a k-basis of Rk(A), and, by Theorem 1.44, the


irreducible characters χ_S form a k-basis of C(A). Since χk([S] ⊗ 1) = χ_S, the proposition follows in this case. If k is arbitrary, then fix an algebraic closure k̄ and consider the algebra Ā = k̄ ⊗ A. Every trace form A → k extends uniquely to a trace form Ā → k̄, giving a map A*trace → Ā*trace (which is in fact an embedding). The following diagram is evidently commutative:

(1.60)
Rk(A) ----χk----> A*trace
  |                  |
k̄ ⊗ ·               |
  v                  v
Rk̄(Ā) ---χk̄---> Ā*trace

Here, k̄ ⊗ · and χk̄ are injective by Lemma 1.48 and the first paragraph of this proof, respectively, whence χk must be injective as well. □

Positive Structure and Dimension Augmentation. The following subset is called the positive cone of R(A):

R(A)+ := { [V] ∈ R(A) | V ∈ Repfin A }.

This is a submonoid of the group R(A), because 0 = [0] ∈ R(A)+ and [V] + [V′] = [V ⊕ V′] ∈ R(A)+ for V, V′ ∈ Repfin A. Under the isomorphism R(A) ≅ Z^⊕Irrfin A (Proposition 1.46), R(A)+ corresponds to Z+^⊕Irrfin A. Thus, every element of R(A) is a difference of two elements of R(A)+ and x = 0 is the only element of R(A)+ such that −x ∈ R(A)+. This also follows from the fact that R(A) is equipped with a group homomorphism, called the dimension augmentation,

dim : R(A) → (Z, +),  [V] ↦ dimk V,

and dim x > 0 for 0 ≠ x ∈ R(A)+. Defining

x ≤ y :⟺ y − x ∈ R(A)+,

we obtain a translation-invariant partial order on the group R(A).
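Proposition 1.46 suggests a toy computational model of R(A): identify [V] with its multiplicity vector over Irrfin A, so that addition, the positive cone, the dimension augmentation, and the partial order all become componentwise. The two irreducibles S1, S2 and their dimensions below are hypothetical:

```python
from collections import Counter

dims = {"S1": 1, "S2": 2}  # hypothetical dimensions of the irreducibles

def dim(x):
    """Dimension augmentation: dim[V] = sum_S mu(S, V) * dim_k S."""
    return sum(m * dims[s] for s, m in x.items())

def leq(x, y):
    """x <= y  iff  y - x lies in the positive cone R(A)+."""
    return all(y[s] - x[s] >= 0 for s in set(x) | set(y))

V = Counter(S1=2, S2=1)   # [V] for a V with composition factors S1, S1, S2
W = Counter(S1=1)         # [W] = [S1]

assert dim(V) == 4 and dim(W) == 1
assert V + W == Counter(S1=3, S2=1)   # [V] + [W] = [V (+) W]
assert leq(W, V) and not leq(V, W)    # partial order via the positive cone
print(dim(V + W))  # 5
```

Missing keys in a Counter default to 0, so `leq` correctly treats absent irreducibles as multiplicity zero.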

Exercises for Section 1.5 Unless otherwise specified, we consider an arbitrary A ∈ Algk below.

1.5.1 (Idempotents). Let e = e² ∈ A be an idempotent and let V ∈ Rep A. Show:
(a) U ∩ e.V = e.U holds for every subrepresentation U ⊆ V.
(b) If V is finite dimensional, then χV(e) = (dimk e.V) 1k.
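For part (b) of Exercise 1.5.1, here is one concrete instance (the idempotent is our own choice) with A = Mat2(k) acting on V = k²: an idempotent acts as a projection, so its trace equals the dimension of its image:

```python
from fractions import Fraction as F

e = [[F(1), F(1)], [F(0), F(0)]]  # e^2 = e: projection onto span{(1,0)} along span{(-1,1)}

def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

assert mul(e, e) == e  # e is idempotent

# e.V = image of e; in general its dimension is rank(e), computable by Gaussian
# elimination -- here the image is visibly spanned by (1, 0).
images = [[sum(e[r][c] * v[c] for c in range(2)) for r in range(2)]
          for v in ([F(1), F(0)], [F(0), F(1)])]
assert all(w[1] == 0 for w in images)  # both images lie on the line spanned by (1, 0)

chi_V_e = e[0][0] + e[1][1]            # chi_V(e) = trace(e)
assert chi_V_e == 1                    # equals dim_k e.V = 1
print(chi_V_e)
```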


1.5.2 (Matrices). (a) Let V ∈ Repfin A. Viewing V^⊕n as a representation of Matn(A) as in Lemma 1.4(b), show that χ_{V^⊕n}(a) = Σ_i χV(ai,i) for a = (ai,j) ∈ Matn(A).
(b) Let trace : Matn(k) → k be the ordinary matrix trace. Show that all characters of Matn(k) are of the form m trace with m ∈ Z+ and that χreg = n trace.
(c) Show that Matn(A)/[Matn(A), Matn(A)] ≅ A/[A, A] in Vectk via the map Matn(A) → A/[A, A], (ai,j) ↦ Σ_i Tr(ai,i), where Tr is the universal trace (1.53).

1.5.3 (Regular character and field extensions). Assume that dimk A < ∞ and let K be a field with k ⊆ K ⊆ Z(A). Thus, we may also view A ∈ AlgK. Let T_{A/k}, T_{A/K}, and T_{K/k} denote the regular characters of A ∈ Algk, A ∈ AlgK, and K ∈ Algk, respectively. Show that T_{A/k} = T_{K/k} ∘ T_{A/K}.

1.5.4 (Finite-dimensional central simple algebras). This exercise uses definitions and results from Exercises 1.1.14 and 1.4.11. We assume dimk A < ∞.
(a) Show that A is central simple iff k̄ ⊗ A ≅ Matn(k̄), where k̄ denotes an algebraic closure of k and n² = dimk A. Consequently, finite-dimensional central simple algebras are separable.
(b) If A is central simple, show that the regular character χreg = T_{A/k} vanishes if and only if char k divides dimk A.

1.5.5 (Separable field extensions). Let K/k be a finite field extension. Recall from field theory that K/k is separable iff there are [K : k] distinct k-algebra embeddings σi : K → k̄, where k̄ is an algebraic closure of k (e.g., [130, Sections V.4 and V.6]). If K/k is not separable, then char k = p > 0 and there is an intermediate field k ⊆ F ⊊ K such that the minimal polynomial of every α ∈ K over F has the form x^{p^r} − a for some r ∈ N and a ∈ F. Show that the following are equivalent:
(i) K/k is a separable field extension;
(ii) K ⊗ k̄ ≅ k̄^{[K:k]} as k̄-algebras;

(iii) K is a separable k-algebra in the sense of Exercise 1.4.11;
(iv) the regular character T_{K/k} (Exercise 1.5.3) is nonzero.

1.5.6 (Separable algebras, again). (a) Using Exercise 1.4.11 and the preceding two exercises, show that A is separable if and only if A is finite dimensional, semisimple, and Z(D(S))/k is a separable field extension for all S ∈ Irr A.
(b) Assume dimk A < ∞ and that A ≅ k ⊗_{k0} A0 for some perfect subfield k0 ⊆ k and some A0 ∈ Algk0. Show that rad A ≅ k ⊗_{k0} rad A0.

1.5.7 (Independence of nonzero irreducible characters). For S ∈ Irrfin A, put K(S) = Z(D(S)). Show:
(a) χ_S ≠ 0 if and only if char k does not divide dim_{K(S)} D(S) and K(S)/k is separable.
(b) The nonzero irreducible characters χ_S are k-linearly independent.


1.5.8 (Algebras that are defined over finite fields). Assume dimk A < ∞ and that A ≅ k ⊗_{k0} A0 for some finite subfield k0 ⊆ k and some A0 ∈ Algk0.
(a) Let S ∈ Irr A be absolutely irreducible, let F(S) be the subfield of k that is generated by k0 and χ_S(A0), and let B(S) = F(S) ⊗_{k0} A0. Show that F(S) is finite and that S ≅ k ⊗_{F(S)} T for some T ∈ Irr B(S). (Use Wedderburn's Theorem on finite division algebras.)
(b) Assume that k is a splitting field for A and let F be the (finite) subfield of k that is generated by k0 and the subfields F(S) with S ∈ Irr A. Show that F is a splitting field for the algebra B = F ⊗_{k0} A0. (Use Exercise 1.4.10.)

1.5.9 (Irreducible representations of tensor products). (a) Let S ∈ Repfin A and T ∈ Repfin B be absolutely irreducible. Use Burnside's Theorem (§1.4.6) to show that the outer tensor product S ⊠ T is an absolutely irreducible representation of the algebra A ⊗ B.
(b) Assuming A and B are split semisimple, show that A ⊗ B is split semisimple as well and all its irreducible representations arise as in (a).

Chapter 2

Further Topics on Algebras

This short chapter concludes our coverage of general algebras and their representations by introducing two topics: projective modules and Frobenius algebras. Both topics are more advanced than the bulk of the material in Chapter 1 and neither will be needed in Part III. Frobenius algebras, or rather the special case of symmetric algebras, will play a small role in Part II and our treatment of Hopf algebras in Part IV will make signiﬁcant use of Frobenius algebras and also, to a lesser degree, of projective modules. Therefore, at a ﬁrst pass through this book, this chapter can be skipped and later referred to as the need arises. To the reader wishing to learn more about the representation theory of algebras, we recommend the classic [52] by Curtis and Reiner along with its sequels [53], [54] and the monograph [10] by Auslander, Reiten, and Smalø, which focuses on artinian algebras.

2.1. Projectives So far in this book, the focus has been on irreducible and completely reducible representations. For algebras that are not necessarily semisimple, another class of representations also plays an important role: the projective modules. We will also refer to them as the projectives of the algebra in question, for short; we shall however refrain from calling them "projective representations", since this term has a different specific meaning in group representation theory, being reserved for group homomorphisms of the form G → PGL(V) for some V ∈ Vectk. In this section, with the exception of §2.1.4, A denotes an arbitrary k-algebra.


2.1.1. Definition and Basic Properties A module P ∈ AMod is called projective if P is isomorphic to a direct summand of some free module: P′ ⊕ Q = Areg^⊕I for suitable P′, Q ∈ AMod with P′ ≅ P and some set I. Projective modules, like free modules, can be thought of as approximate "vector spaces over A", but projectives are a much more ample and stable class of modules than free modules.

Proposition 2.1. The following are equivalent for P ∈ AMod:
(i) P is projective;
(ii) given an epimorphism f : M ↠ N and an arbitrary p : P → N in AMod, there exists a "lift" p̃ : P → M in AMod such that f ∘ p̃ = p;

(iii) every epimorphism f : M P in AMod splits: there exists a homomorphism s : P → M in AMod such that f ◦ s = Id P . ⊕I and P P. Identifying P with Proof. (i)⇒(ii): Say P ⊕ Q = F with F = Areg P , as we may, the embedding μ : P → F and the projection π : F P along Q satisfy π ◦ μ = Id P . Consider the module map q = p ◦ π : F → N. In order to construct a lift for q, ﬁx an A-basis ( f i )i ∈I for F. Since the map f is surjective, we may choose elements mi ∈ M such that f (mi ) = q( f i ) for all q i ∈ I. Now deﬁne the desired lift q : F → M by q ( f i ) = mi ; this determines unambiguously on F and we have

f ◦ q = q, as one checks by evaluating both functions on the basis ( f i )i ∈I . Putting p = q ◦ μ : P → M, we obtain the desired equality, f ◦ p= f ◦ q ◦ μ = q ◦ μ = p ◦ π ◦ μ = p ◦ Id P = p. (ii)⇒(iii): Taking N = P and p = Id P , the lift p : P → M from (ii) will serve as the desired splitting map s. (iii)⇒(i): As we have observed before (§1.4.4), any generating family (x i )i ∈I

⊕I of P gives rise to an epimorphism f : F = Areg P, (ai ) → i ai x i . If s is the splitting provided by (iii), then F = P ⊕ Q with P = Im s P and Q = Ker f (Exercise 1.1.2), proving (i).


Description in Terms of Matrices. Any ﬁnitely generated projective P ∈ AMod can be realized by means of an idempotent matrix over A. Indeed, there are A-module maps μ : P → Areg^⊕n and π : Areg^⊕n ↠ P for some n with π ∘ μ = Id_P. Then μ ∘ π ∈ End_A(Areg^⊕n) ≅ Matn(A)^op (Lemma 1.5) gives an idempotent matrix e = e² ∈ Matn(A) with

(2.1)  P ≅ Areg^⊕n e.
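As a quick sanity check of the realization (2.1), one can experiment computationally; the following sketch (not from the text) works over the commutative algebra A = Q[t] using the sympy library, with a hand-picked illustrative idempotent e cutting out a direct summand of Areg^⊕2.

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
# An idempotent 2x2 matrix over the commutative algebra A = Q[t]:
e = sp.Matrix([[1 - t, t],
               [1 - t, t]])
assert sp.simplify(e * e - e) == sp.zeros(2, 2)  # e is idempotent

# Areg^{(+)2} = (A^2)e (+) (A^2)(1 - e): every row vector v splits accordingly,
# so P = A^2 e is a direct summand of the free module of rank 2.
v = sp.Matrix([[a, b]])
assert sp.simplify(v * e + v * (sp.eye(2) - e) - v) == sp.zeros(1, 2)

# Over a commutative A the trace of e is a natural invariant of P
# (here the constant 1, as befits a "rank-1" summand).
rank_P = sp.simplify(e.trace())
print(rank_P)  # 1
```

The trace of e reappears below as the Hattori-Stallings rank of P; for commutative A it is an honest element of A since [A, A] = 0.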

Functorial Aspects. By §B.2.1, each M ∈ AMod gives rise to a functor Hom_A(M, ·) : AMod → Vectk. This functor is left exact: if 0 → X → Y → Z → 0 is a short exact sequence in AMod, with maps f : X → Y and g : Y → Z, then

0 → Hom_A(M, X) →(f∗) Hom_A(M, Y) →(g∗) Hom_A(M, Z)

is exact in Vectk. However, Hom_A(M, ·) is generally not exact, because g∗ need not be epi. Characterization (ii) in Proposition 2.1 can be reformulated as follows:

(2.2)  M ∈ AMod is projective if and only if Hom_A(M, ·) is exact.

In lieu of Hom_A(M, ·), we can of course equally well consider the (contravariant) functor Hom_A(·, M) : AMod → Vectk. The A-modules M for which the latter functor is exact are called injective; see Exercise 2.1.1.

Dual Bases. Let us write ·∨ = Hom_A(·, Areg) and, for any M ∈ AMod, let ⟨·, ·⟩ : M∨ × M → A denote the evaluation pairing: ⟨f, m⟩ = f(m). A family (x_i, x^i)_{i∈I} in M × M∨ is said to form a pair of dual bases for M if, for each x ∈ M,

(i) ⟨x^i, x⟩ = 0 for almost all i ∈ I, and

(ii) x = Σ_{i∈I} ⟨x^i, x⟩.x_i.

We equip M∨ with a right A-module structure by deﬁning ⟨f a, m⟩ = ⟨f, m⟩a for f ∈ M∨, a ∈ A, and m ∈ M. Then we have the following map in Vectk:

(2.3)  M∨ ⊗_A M → End_A(M),  f ⊗ m ↦ (x ↦ ⟨f, x⟩m).

For A = k, part (b) of the following lemma reduces to the standard isomorphism Endk(V) ≅ V ⊗ V∗ ≅ V∗ ⊗ V for a ﬁnite-dimensional V ∈ Vectk; see (B.19).

Lemma 2.2. Let M ∈ AMod.

(a) A pair of dual bases (x_i, x^i)_{i∈I} for M exists if and only if M is projective. In this case, the family (x_i)_{i∈I} generates M, and any generating family of M can be chosen for (x_i)_{i∈I}.

(b) The map (2.3) is an isomorphism if and only if M is ﬁnitely generated projective.


Proof. (a) For given generators (x_i)_{i∈I} of M, consider the epimorphism Areg^⊕I ↠ M, (a_i)_i ↦ Σ_i a_i x_i. If M is projective, then we may ﬁx a splitting s : M → Areg^⊕I. Now let π_i : Areg^⊕I → A, (a_j)_j ↦ a_i, be the projection onto the i-th component and deﬁne x^i = π_i ∘ s ∈ Hom_A(M, Areg) to obtain the desired pair of dual bases. Conversely, if (x_i, x^i)_{i∈I} are dual bases for M, then condition (ii) implies that (x_i)_{i∈I} generates M and the map M → Areg^⊕I, x ↦ (⟨x^i, x⟩)_i, splits the epimorphism Areg^⊕I ↠ M, (a_i)_i ↦ Σ_i a_i x_i. Therefore, M is isomorphic to a direct summand of Areg^⊕I, and hence M is projective.

(b) Let μ = μ_M denote the map (2.3) and note that φ ∘ μ(f ⊗ m) = μ(f ⊗ φ(m)) and μ(f ⊗ m) ∘ φ = μ((f ∘ φ) ⊗ m) for φ ∈ End_A(M). Therefore, Im μ is an ideal of End_A(M), and hence surjectivity of μ is equivalent to the condition Id_M ∈ Im μ. Furthermore, μ(Σ_{i=1}^n f_i ⊗ m_i) = Id_M says exactly that (m_i)_{i=1}^n, (f_i)_{i=1}^n form a pair of dual bases for M. Hence we know by (a) that Id_M ∈ Im μ if and only if M is ﬁnitely generated projective. Consequently, μ is surjective if and only if M is ﬁnitely generated projective.

It remains to show that μ is also injective in this case. Fixing A-module maps μ : M → F = Areg^⊕n and π : F ↠ M with π ∘ μ = Id_M, we obtain a commutative diagram

M∨ ⊗_A M ──μ_M──> End_A(M)
   │π∨ ⊗ μ            │μ ∘ · ∘ π
   ↓                  ↓
F∨ ⊗_A F ──μ_F──> End_A(F)

The vertical maps are injective, because they have left inverses μ∨ ⊗ π and π ∘ · ∘ μ. Let x^i ∈ F∨ be the i-th coordinate map and let x_i ∈ F be the standard i-th basis element. Then F∨ = ⊕_i x^i A, F = ⊕_i Ax_i, and

μ_F(Σ_{i,j} x^i a_{ij} ⊗ x_j) = ((a_i)_i ↦ (Σ_i a_i a_{ij})_j).

Therefore, μ_F is a bijection by Lemma 1.5(b), and hence μ_M is injective. ∎
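The Dual Bases Lemma can be made concrete in a small commutative case. The following sketch (an illustration, not part of the text) takes the row module P = A²e over A = Q[t] for an idempotent e; the rows of e serve as generators x_i, and the coordinate functionals restricted to P serve as the x^i.

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
e = sp.Matrix([[1 - t, t],
               [1 - t, t]])   # idempotent over A = Q[t]; P = A^2 e
assert sp.simplify(e * e - e) == sp.zeros(2, 2)

# Dual bases for P: generators x_i = i-th row of e,
# functionals x^i = i-th coordinate map A^2 -> A restricted to P.
x_gens = [e.row(0), e.row(1)]

# A generic element x = w*e of P:
w = sp.Matrix([[a, b]])
x = w * e

# Check the dual-bases identity  x = sum_i <x^i, x>.x_i :
recon = sum((x[i] * x_gens[i] for i in range(2)), sp.zeros(1, 2))
assert sp.simplify(recon - x) == sp.zeros(1, 2)
print("dual-bases identity verified")
```

Since x = we satisfies xe = x, the identity Σ_i ⟨x^i, x⟩.x_i = x is exactly the statement "x·e = x" written out entrywise, which is how the proof of Lemma 2.2(a) produces dual bases from a splitting.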

Categories of Projectives. We shall consider the following (full) subcategories of AMod: the categories consisting of all projectives, the ﬁnitely generated projectives, and the ﬁnite-dimensional projectives of A; they will respectively be denoted by

AProj,  Aproj,  and  Aprojﬁn.

Our primary concern will be with the latter two. Induction along an algebra homomorphism α : A → B gives a functor α∗ = Ind_A^B : AProj → BProj. Indeed, for any P ∈ AProj, the functor Hom_A(P, Res_A^B ·) : BMod → Vectk is exact, being the composite of exact functors by (2.2), and Hom_A(P, Res_A^B ·) ≅ Hom_B(Ind_A^B P, ·) by Proposition 1.9. Thus, Hom_B(Ind_A^B P, ·) is exact, showing that Ind_A^B P is projective. Alternatively, P is isomorphic to a direct summand of Areg^⊕I for some set I, and so Ind_A^B P = B ⊗_A P is isomorphic to a direct summand of Breg^⊕I = Ind_A^B Areg^⊕I, because Ind_A^B commutes with direct sums. If P is ﬁnitely generated, then I may be chosen ﬁnite. Thus, induction restricts to a functor Ind_A^B : Aproj → Bproj. Using the functor Matn : Algk → Algk and (2.1), we obtain

Ind_A^B P ≅ Breg^⊕n Matn(α)(e).

2.1.2. Hattori-Stallings Traces

For P ∈ Aproj, the Dual Bases Lemma (Lemma 2.2) allows us to consider the following map:¹

(2.4)  Tr_P : End_A(P) ≅ P∨ ⊗_A P → Tr A = A/[A, A],  f ⊗ p ↦ ⟨f, p⟩ + [A, A].

Observe that this map is k-linear in both f and p and, for any a ∈ A, ⟨f a, p⟩ = ⟨f, p⟩a ≡ a⟨f, p⟩ = ⟨f, ap⟩ mod [A, A]. Therefore, Tr_P is a well-deﬁned k-linear map, which generalizes the standard trace map (B.23) for A = k. Extending the notion of dimension over k, the rank of P is deﬁned by

(2.5)  rank P := Tr_P(Id_P) ∈ Tr A.

Using dual bases (x_i, x^i)_{i=1}^n for P, the rank of P may be expressed as follows:

(2.6)  rank P = Σ_i ⟨x^i, x_i⟩ + [A, A].

Alternatively, writing P ≅ Areg^⊕n e for some idempotent matrix e = e² ∈ Matn(A) as in (2.1), one obtains rank P = Σ_i e_ii + [A, A] (Exercise 2.1.5). In particular, rank Areg^⊕n = n + [A, A], and so (2.5) extends the standard deﬁnition of the rank of a ﬁnitely generated free module over a commutative ring (B.9).

Lemma 2.3. Let A be a k-algebra and let P, P′ ∈ Aproj.

(a) Tr_P(φ ∘ ψ) = Tr_P(ψ ∘ φ) for all φ, ψ ∈ End_A(P).

(b) Tr_{P⊕P′}(φ ⊕ φ′) = Tr_P(φ) + Tr_{P′}(φ′) for φ ∈ End_A(P) and φ′ ∈ End_A(P′). In particular, rank(P ⊕ P′) = rank P + rank P′.

¹This map was introduced independently by Hattori [101] and by Stallings [198].


Proof. (a) Viewing the isomorphism End_A(P) ≅ P∨ ⊗_A P as an identiﬁcation, it suﬃces to establish the trace property Tr_P(φ ∘ ψ) = Tr_P(ψ ∘ φ) for φ = f ⊗ p and ψ = f′ ⊗ p′ with f, f′ ∈ P∨ and p, p′ ∈ P. Note that composition in End_A(P) takes the form φ ∘ ψ = (f ⊗ p) ∘ (f′ ⊗ p′) = f′⟨f, p′⟩ ⊗ p. Therefore,

Tr_P(φ ∘ ψ) = ⟨f′, p⟩⟨f, p′⟩ + [A, A] = ⟨f, p′⟩⟨f′, p⟩ + [A, A] = Tr_P(ψ ∘ φ).

(b) Put Q = P ⊕ P′ and Φ = φ ⊕ 0_{P′}, Φ′ = 0_P ⊕ φ′ ∈ End_A(Q). Then φ ⊕ φ′ = Φ + Φ′. It is easy to see that Tr_Q(Φ) = Tr_P(φ) and likewise for Φ′. Indeed, viewing End_A(P) ≅ P∨ ⊗_A P as a direct summand of End_A(Q) ≅ Q∨ ⊗_A Q in the canonical way, we have Φ = φ. Thus, the desired formula Tr_{P⊕P′}(φ ⊕ φ′) = Tr_P(φ) + Tr_{P′}(φ′) follows by linearity of Tr_Q. Since Id_Q = Id_P ⊕ Id_{P′}, we also obtain the rank formula rank Q = rank P + rank P′. ∎

Functoriality. We brieﬂy address the issue of changing the algebra. So let α : A → B be a map in Algk. Then we have the k-linear map Tr α : Tr A → Tr B (§1.5.2) and the functor α∗ = Ind_A^B : Aproj → Bproj from the previous paragraph. For any P ∈ Aproj and any φ ∈ End_A(P), we have the formula

(2.7)  Tr_{α∗P}(α∗φ) = (Tr α)(Tr_P(φ)).

Indeed, with φ = f ⊗ p as above, we have α∗φ = (Id_B ⊗ f) ⊗ (1_B ⊗ p) and ⟨Id_B ⊗ f, 1_B ⊗ p⟩ = α(⟨f, p⟩), proving (2.7). With φ = Id_P, (2.7) gives

(2.8)  rank(α∗P) = (Tr α)(rank P).

In terms of matrices, this can also be seen as follows. Write P ≅ Areg^⊕n e as in (2.1). Then rank P = Σ_i e_ii + [A, A] and α∗P ≅ Breg^⊕n Matn(α)(e), and so rank(α∗P) = Σ_i α(e_ii) + [B, B] = (Tr α)(rank P).

2.1.3. The Grothendieck Groups K0(A) and P(A)

Finitely Generated Projectives. Using Aproj in place of Repﬁn A, one can construct an abelian group along the exact same lines as the construction of R(A) in §1.5.5. The group in question has generators [P] for P ∈ Aproj and relations [Q] = [P] + [R] for each short exact sequence 0 → P → Q → R → 0 in Aproj. Since all these sequences split by Proposition 2.1, this means that we have a relation [Q] = [P] + [R] whenever Q ≅ P ⊕ R in Aproj. The resulting abelian group is commonly denoted by K0(A). Given an abelian group (G, +) and a rule assigning to each P ∈ Aproj a value f(P) ∈ G in such a way that f(Q) = f(P) + f(R) whenever Q ≅ P ⊕ R in Aproj, we obtain a well-deﬁned group homomorphism K0(A) → G, [P] ↦ f(P).

The construction of K0 is functorial: for any map α : A → B in Algk, the induction functor α∗ = Ind_A^B : Aproj → Bproj commutes with direct sums, and hence it yields a well-deﬁned homomorphism of abelian groups:

K0(α) : K0(A) → K0(B),  [P] ↦ [Ind_A^B P].

In this way, we obtain a functor K0 : Algk → AbGroups. We remark that, for P, Q ∈ Aproj, the equality [P] = [Q] in K0(A) means that P and Q are stably isomorphic in the sense that P ⊕ Areg^⊕r ≅ Q ⊕ Areg^⊕r for some r ∈ Z+ (Exercise 2.1.9). We could of course conceivably perform an analogous construction using the category AProj of all projectives of A in place of Aproj. However, the resulting group would be trivial in this case (Exercise 2.1.6).

Finite-Dimensional Projectives. For the purposes of representation theory, we shall often be concerned with the full subcategory Aprojﬁn of Aproj consisting of all ﬁnite-dimensional projectives of A. The corresponding Grothendieck group, constructed from Aprojﬁn exactly as we did for Aproj, will be denoted by P(A). We start by sorting out the group-theoretical structure of P(A) in a manner analogous to Proposition 1.46, where the same was done for R(A). While the latter result was a consequence of the Jordan-Hölder Theorem (Theorem 1.18), the operative fact in the next proposition is the Krull-Schmidt Theorem (§1.2.6).

Proposition 2.4. (a) P(A) is isomorphic to the free abelian group with basis the set of isomorphism classes of ﬁnite-dimensional indecomposable projectives of A.

(b) For P, Q ∈ Aprojﬁn, we have [P] = [Q] in P(A) if and only if P ≅ Q.

Proof. By the Krull-Schmidt Theorem, any P ∈ Aprojﬁn can be decomposed into a ﬁnite direct sum of indecomposable summands and this decomposition is unique up to the order of the summands and their isomorphism type. Thus, letting I denote a full set of representatives for the isomorphism classes of ﬁnite-dimensional indecomposable projectives of A, we have

P ≅ ⊕_{I∈I} I^⊕n_I(P)

for unique n_I(P) ∈ Z+, almost all of which are zero. Evidently, n_I(Q) = n_I(P) + n_I(R) if Q ≅ P ⊕ R in Aprojﬁn; so we obtain the following well-deﬁned group homomorphism:

P(A) → Z^⊕I,  [P] ↦ (n_I(P))_I.


The map sending the standard Z-basis element e_I = (δ_{I,J})_J ∈ Z^⊕I to [I] ∈ P(A) is inverse to the above homomorphism, and so we have in fact constructed an isomorphism. This proves (a), and (b) is an immediate consequence as well. ∎

The Cartan Homomorphism. Since any P ∈ Aprojﬁn is of course also an object of Aproj and of Repﬁn A, the symbol [P] can be interpreted in P(A) as well as in K0(A) and in R(A). In fact, it is clear from the deﬁnition of these groups that there is a group homomorphism

(2.9)  c : P(A) → R(A),  [P] ↦ [P],

and an analogous homomorphism P(A) → K0(A). The map (2.9) is particularly important; it is called the Cartan homomorphism. Despite the deceptively simple looking expression above, c need not be injective, whereas the homomorphism P(A) → K0(A) is in fact always mono (Exercises 2.1.10, 2.1.14).

A Pairing Between K0(A) and R(A). For any ﬁnitely generated V ∈ AMod and any W ∈ Repﬁn A, the k-vector space Hom_A(V, W) is ﬁnite dimensional, because Areg^⊕n ↠ V for some n and so Hom_A(V, W) ↪ Hom_A(Areg^⊕n, W) ≅ W^⊕n. Thus, we may deﬁne

(2.10)  ⟨V, W⟩ := dimk Hom_A(V, W).

Now let P ∈ Aproj. Then the functor Hom_A(P, ·) is exact by (2.2) and we obtain a group homomorphism ⟨P, ·⟩ : R(A) → Z. For any V ∈ Repﬁn A, the functor Hom_A(·, V) does at least commute with ﬁnite direct sums. Thus, we also have a group homomorphism ⟨·, V⟩ : P(A) → Z. The value ⟨P, V⟩ only depends on the classes [P] ∈ P(A) and [V] ∈ R(A), giving the following biadditive pairing:

K0(A) × R(A) → Z,  ([P], [V]) ↦ ⟨P, V⟩.

Under suitable hypotheses, this pairing gives a "duality" between K0(A) and R(A) that will play an important role later (e.g., §3.4.2).
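For P = Ae with e an idempotent, one has the standard identiﬁcation Hom_A(Ae, V) ≅ e.V, φ ↦ φ(e) (a well-known fact, not proved in this section), so the pairing ⟨Ae, V⟩ can be computed as dimk e.V. The following numerical sketch (not from the text) does this for A the upper triangular 2×2 matrices acting on V = k²; the helper `pairing_dim` is an illustrative name.

```python
import numpy as np

# A = upper triangular 2x2 matrices over k, acting on V = k^2 (column vectors).
# e = E_11 and e = E_22 are idempotents; P = Ae is a projective left ideal.
# Under Hom_A(Ae, V) ~ e.V (phi -> phi(e)), the pairing <Ae, V> is dim_k e.V.
E11 = np.array([[1, 0], [0, 0]])
E22 = np.array([[0, 0], [0, 1]])

def pairing_dim(e, V_basis):
    """dim_k e.V, computed as the rank of the matrix [e v : v in basis]."""
    cols = np.column_stack([e @ v for v in V_basis])
    return np.linalg.matrix_rank(cols)

V_basis = [np.array([1, 0]), np.array([0, 1])]
print(pairing_dim(E11, V_basis), pairing_dim(E22, V_basis))  # 1 1
```

Both values equal 1: the natural module k² has each of the two simples of A exactly once as a composition factor, and their endomorphism algebras are k, consistent with the multiplicity formula proved later in this section.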


Hattori-Stallings Ranks and Characters. By Lemma 2.3(b), the Hattori-Stallings rank gives a well-deﬁned group homomorphism:

(2.11)  rank : K0(A) → Tr A,  [P] ↦ rank P.

If α : A → B is a homomorphism of k-algebras, then the following diagram commutes by (2.8):

(2.12)
K0(A) ──K0(α)──> K0(B)
  │rank             │rank
  ↓                 ↓
 Tr A ───Tr α───> Tr B

The following proposition, due to Bass [11], further hints at the aforementioned duality between K0(A) and R(A) by relating the pairing (2.10) to the evaluation pairing between Tr A and A∗trace ≅ (Tr A)∗. Recall that χ : R(A) → A∗trace is the character homomorphism (1.58).

Proposition 2.5. The following diagram commutes:

K0(A) × R(A) ─────⟨·,·⟩─────> Z
     │rank × χ                │can.
     ↓                        ↓
Tr A × A∗trace ──evaluation──> k

Proof. The proposition states that, for P ∈ Aproj and V ∈ Repﬁn A,

⟨χ_V, rank P⟩ = (dimk Hom_A(P, V)) 1k.

This is clear if P ≅ Areg^⊕n; for, then Hom_A(P, V) ≅ V^⊕n and ⟨χ_V, rank P⟩ = ⟨χ_V, n + [A, A]⟩ = n (dimk V) 1k. The general case elaborates on this observation. In detail, ﬁx A-module maps μ : P → F = Areg^⊕n and π : F ↠ P for some n with π ∘ μ = Id_P. The functor Hom_A(·, V) then yields k-linear maps

π∗ = · ∘ π : Hom_A(P, V) → Hom_A(F, V) ≅ V^⊕n  and  μ∗ = · ∘ μ : Hom_A(F, V) → Hom_A(P, V)

with μ∗ ∘ π∗ = Id_{Hom_A(P,V)}. Thus, h := π∗ ∘ μ∗ ∈ Endk(Hom_A(F, V)) is an idempotent with Im h ≅ Hom_A(P, V). Therefore, by standard linear algebra (Exercise 1.5.1(b)),

(dimk Hom_A(P, V)) 1k = trace h.


Let (e_i, e^i)_{i=1}^n be dual bases for F = Areg^⊕n. Then we obtain dual bases (x_i, x^i)_{i=1}^n for P by putting x_i = π(e_i) and x^i = e^i ∘ μ. Chasing the idempotent h through the isomorphism

Endk(Hom_A(F, V)) ≅ Endk(V^⊕n) ≅ Matn(Endk(V))  (Lemma 1.4)

coming from Hom_A(F, V) ≅ V^⊕n, f ↦ (f(e_i))_i, one sees that h corresponds to the matrix (h_{i,j}) ∈ Matn(Endk(V)) that is given by h_{i,j}(v) = ⟨x^j, x_i⟩v for v ∈ V. Therefore,

trace h = Σ_i trace h_{i,i} = Σ_i trace(⟨x^i, x_i⟩_V) = ⟨χ_V, Σ_i ⟨x^i, x_i⟩⟩ = ⟨χ_V, rank P⟩

by (2.6), and the proof is complete. ∎

2.1.4. Finite-Dimensional Algebras

We now turn our attention to the case where the algebra A is ﬁnite dimensional. Then the categories Aproj and Aprojﬁn coincide and so K0(A) = P(A). Our ﬁrst goal will be to describe the indecomposable projectives of A. This will result in more explicit descriptions of the group P(A) and of the Cartan homomorphism c : P(A) → R(A) than previously oﬀered in Proposition 2.4 and in (2.9).

Lifting Idempotents

We start with a purely ring-theoretic lemma, for which the algebra A need not be ﬁnite dimensional. An ideal I of A is called nil if all elements of I are nilpotent. A family (e_i)_{i∈I} of idempotents of A is called orthogonal if e_i e_j = δ_{i,j} e_i for i, j ∈ I.

Lemma 2.6. Let I be a nil ideal of A and let f_1, . . . , f_n be orthogonal idempotents of A/I. Then there exist orthogonal idempotents e_i ∈ A such that e_i + I = f_i.

Proof. First, consider the case n = 1 and write f = f_1 ∈ A/I. Let a ↦ ā denote the canonical map A ↠ A/I and ﬁx any a ∈ A such that ā = f. Then the element b = 1 − a ∈ A satisﬁes ab = ba = a − a² ∈ I, and hence (ab)^m = 0 for some m ∈ N. Therefore, by the Binomial Theorem, 1 = (a + b)^2m = e + e′ with

e = Σ_{i=0}^{m} (2m choose i) a^{2m−i} b^i  and  e′ = Σ_{i=m+1}^{2m} (2m choose i) a^{2m−i} b^i.

By our choice of m, we have ee′ = e′e = 0 and so e = e(e + e′) = e² is an idempotent. Finally, e ≡ a^2m ≡ a mod I, whence ē = ā = f as desired. Note also that e is a polynomial in a with integer coeﬃcients and zero constant term.

Now let n > 1 and assume that we have already constructed e_1, . . . , e_{n−1} ∈ A as in the lemma. Then x = Σ_{i=1}^{n−1} e_i is an idempotent of A such that e_i x = x e_i = e_i for 1 ≤ i ≤ n − 1. Fix any a ∈ A such that ā = f_n and put a′ = (1 − x)a(1 − x) ∈ A. Then xa′ = a′x = 0. Furthermore, since x̄ = Σ_{i=1}^{n−1} f_i and ā = f_n are orthogonal


idempotents of A/I, we have ā′ = f_n. Now construct the idempotent e_n ∈ A with ē_n = f_n from a′ as in the ﬁrst paragraph. Then x e_n = e_n x = 0, since e_n is a polynomial in a′ with integer coeﬃcients and zero constant term. Therefore, e_i e_n = e_i x e_n = 0 and, similarly, e_n e_i = 0 for i ≠ n, completing the proof. ∎

Projective Covers

Let us now assume that A ∈ Algk is ﬁnite dimensional. We have already repeatedly used the fact that, for any V ∈ Rep A, there exists an epimorphism P ↠ V with P projective or even free. It turns out that now there is a "minimal" choice for such an epimorphism, which is essentially unique. To describe this choice, consider the completely reducible factor

head V := V/(rad A).V.

This construction is functorial: head · ≅ As.p. ⊗_A ·, where As.p. = A/rad A is the semisimpliﬁcation of A.

Theorem 2.7. Let A ∈ Algk be ﬁnite dimensional. Then, for any V ∈ Rep A, there exist a P ∈ AProj and an epimorphism φ : P ↠ V satisfying the following equivalent conditions:

(i) Ker φ ⊆ (rad A).P;

(ii) head φ : head P ≅ head V;

(iii) every epimorphism φ′ : P′ ↠ V with P′ ∈ AProj factors as φ′ = φ ∘ π for some epimorphism π : P′ ↠ P.

In particular, P is determined by V up to isomorphism.

Proof. We start by proving the existence of an epimorphism φ satisfying (i). First assume that V is irreducible. Then V is a direct summand of the regular representation of As.p. = A/rad A, and hence V ≅ As.p. f for some idempotent f ∈ As.p.. Since rad A is nil, even nilpotent, Lemma 2.6 guarantees the existence of an idempotent e ∈ A so that ē = f under the canonical map A ↠ As.p.. Putting P = Ae and letting φ be the restriction of the canonical map, we obtain a projective P ∈ AProj and an epimorphism φ : P ↠ V satisfying Ker φ = Ae ∩ rad A = (rad A)e = (rad A).P, as required.

Next assume that V is completely reducible and write V ≅ ⊕_{S∈Irr A} S^⊕m(S,V) as in (1.44). For each S, let φ_S : P_S ↠ S be the epimorphism constructed in the previous paragraph. Then the following map satisﬁes the requirements of (i):

φ = ⊕_{S∈Irr A} φ_S^⊕m(S,V) : P = ⊕_{S∈Irr A} P_S^⊕m(S,V) ↠ ⊕_{S∈Irr A} S^⊕m(S,V) ≅ V.


For general V, consider the epimorphism φ₀ : P ↠ head V constructed in the previous paragraph. Proposition 2.1, applied to the canonical epimorphism can : V ↠ head V, yields a morphism φ : P → V with can ∘ φ = φ₀. Since φ₀ = can ∘ φ is surjective, Im φ + (rad A).V = V. Iterating this equality, we obtain Im φ + (rad A)^i.V = V for all i. Hence, Im φ = V, because rad A is nilpotent. Moreover, Ker φ ⊆ Ker φ₀ ⊆ (rad A).P. This completes the proof of the existence claim in the theorem.

In order to prove the equivalence of (i)–(iii), note that φ : P ↠ V gives rise to an epimorphism head φ : head P ↠ head V with

Ker(head φ) = φ^{−1}((rad A).V)/(rad A).P = (Ker φ + (rad A).P)/(rad A).P.

Therefore, head φ is an isomorphism if and only if Ker φ ⊆ (rad A).P, proving the equivalence of (i) and (ii). Now assume that φ satisﬁes (i) and let φ′ : P′ ↠ V be as in (iii). Then Proposition 2.1 yields a morphism π : P′ → P with φ ∘ π = φ′. As above, it follows from surjectivity of φ′ that P = Im π + Ker φ = Im π + (rad A).P, and iteration of this equality gives P = Im π. This shows that (i) implies (iii).

For the converse, assume that φ satisﬁes (iii) and pick some epimorphism φ′ : P′ ↠ V with P′ ∈ AProj and Ker φ′ ⊆ (rad A).P′. By (iii), there exists an epimorphism π : P′ ↠ P with φ′ = φ ∘ π. Therefore, Ker φ = π(Ker φ′) ⊆ (rad A).π(P′) = (rad A).P and so φ satisﬁes (i). This establishes the equivalence of (i)–(iii).

Finally, for uniqueness, let φ : P ↠ V and φ′ : P′ ↠ V both satisfy (i)–(iii). Then there are epimorphisms π : P′ ↠ P and π′ : P ↠ P′ such that φ ∘ π = φ′ and φ′ ∘ π′ = φ. Consequently, Ker π′ ⊆ Ker φ ⊆ (rad A).P. On the other hand, P = Q ⊕ Ker π′ for some Q, because the epimorphism π′ splits. Projecting onto the summand Ker π′ therefore gives Ker π′ = (rad A).Ker π′, which forces Ker π′ = 0 by nilpotency of rad A. Hence π′ is an isomorphism and the proof of the theorem is complete. ∎

The projective constructed in the theorem above for a given V ∈ Rep A is called the projective cover of V; it will be denoted by P_V.


Thus, we have an epimorphism P_V ↠ V, and P_V is minimal in the sense that P_V is isomorphic to a direct summand of every P ∈ AProj such that P ↠ V, by (iii) of Theorem 2.7. Moreover, (ii) states that

(2.13)  head P_V ≅ head V.

Exercise 2.1.7 explores some further properties of the operator V ↦ P_V.

Principal Indecomposable Representations

Since A is assumed ﬁnite dimensional, the regular representation Areg decomposes into a ﬁnite direct sum of indecomposable representations. A full representative set of the isomorphism types of the summands occurring in this decomposition is called a set of principal indecomposable representations of A. All principal indecomposables evidently belong to Aprojﬁn and they are unique up to isomorphism by the Krull-Schmidt Theorem (§1.2.6). The following proposition lists some further properties of the principal indecomposable representations. Recall that, for any V ∈ Repﬁn A and any S ∈ Irr A, the multiplicity of S as a composition factor of V is denoted by μ(S, V); see the Jordan-Hölder Theorem (Theorem 1.18).

Proposition 2.8. Let A ∈ Algk be ﬁnite dimensional. Then:

(a) The principal indecomposable representations of A are exactly the projective covers P_S with S ∈ Irr A; they are a full representative set of the isomorphism classes of all indecomposable projectives of A.

(b) Areg ≅ ⊕_{S∈Irr A} (P_S)^⊕ dim_{D(S)} S.

(c) ⟨P_S, V⟩ = μ(S, V) dimk D(S) for any V ∈ Repﬁn A and S ∈ Irr A.

Proof. Since head P_S ≅ S by (2.13), the various P_S are pairwise nonisomorphic and they are all indecomposable. Now let P ∈ AProj be an arbitrary indecomposable projective. Since P is a submodule of Areg^⊕I for some set I, there exists a nonzero homomorphism P → Areg, and hence there certainly exists an epimorphism P ↠ S for some S ∈ Irr A. But then P_S is isomorphic to a direct summand of P by Theorem 2.7(iii), and hence P ≅ P_S. Thus, the collection P_S with S ∈ Irr A forms a full set of nonisomorphic indecomposable projectives for A. To see that this collection also coincides with the principal indecomposables, observe that P_{(As.p.)reg} ≅ Areg, because the canonical map A ↠ As.p. has kernel rad A. Since Wedderburn's Structure Theorem gives the decomposition (As.p.)reg ≅ ⊕_{S∈Irr A} S^⊕ dim_{D(S)} S, the isomorphism in (b) now follows by additivity of the operator V ↦ P_V on direct sums (Exercise 2.1.7). This proves (a) as well as (b).

For (c), note that the function ⟨P_S, ·⟩ = dimk Hom_A(P_S, ·) is additive on short exact sequences in Repﬁn A by exactness of the functor Hom_A(P_S, ·), and so is the multiplicity μ(S, ·) by (1.32). Therefore, by considering a composition series of V, one reduces (c) to the case where V ∈ Irr A. But then μ(S, V) = δ_{S,V}


and

Hom_A(P_S, V) ≅ Hom_A(head P_S, V) ≅ Hom_A(S, V) = δ_{S,V} D(S)

by (2.13) and Schur's Lemma. The formula in (c) follows from this. ∎

As a special case of the multiplicity formula in Proposition 2.8(c), we note the so-called orthogonality relations:

(2.14)  ⟨P_S, S′⟩ = δ_{S,S′} dimk D(S)  (S, S′ ∈ Irr A).

The multiplicity formula in Proposition 2.8(c) and the orthogonality relations (2.14) have a particularly appealing form when the base ﬁeld k is a splitting ﬁeld for A; for, then dimk D(S) = 1 for all S ∈ Irr A.

The Cartan Matrix

In our current ﬁnite-dimensional setting, the Grothendieck groups P(A) = K0(A) and R(A) are both free abelian of ﬁnite rank equal to the size of Irr A. Indeed, the classes [S] ∈ R(A) with S ∈ Irr A provide a Z-basis of R(A) (Proposition 1.46) and the classes [P_S] ∈ P(A) form a Z-basis of P(A) (Propositions 2.4(a) and 2.8(a)). In terms of these bases, the Cartan homomorphism (2.9) has the following description:

c : P(A) ≅ Z^⊕Irr A → R(A) ≅ Z^⊕Irr A,  [P_S] ↦ Σ_{S′∈Irr A} μ(S′, P_S)[S′].

Thus, the Cartan homomorphism can be described by the following integer matrix:

(2.15)  C = (μ(S′, P_S))_{S′,S∈Irr A}.

This matrix is called the Cartan matrix of A. Note that all entries of C belong to Z+ and that the diagonal entries are strictly positive. If k is a splitting ﬁeld for A, then the Cartan matrix takes the following form by Proposition 2.8(c):

(2.16)  C = (⟨P_{S′}, P_S⟩)_{S′,S∈Irr A}.
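A small worked example may help here (it is not in the text; it is Exercise 2.1.13(b) in the case n = 2): for the algebra A of upper triangular 2×2 matrices over k, the Cartan matrix can be read off directly from the left ideals Ae₁₁ and Ae₂₂.

```latex
% A = upper triangular 2x2 matrices over k; Irr A = \{S_1, S_2\}, both 1-dimensional.
% P_{S_1} = A e_{11} = k e_{11} is simple projective, while
% P_{S_2} = A e_{22} = k e_{12} \oplus k e_{22} has submodule k e_{12} \cong S_1
% with quotient \cong S_2.  Hence
C \;=\; \bigl( \mu(S', P_S) \bigr)_{S',\,S}
  \;=\; \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},
% in accordance with Exercise 2.1.13(b) below.
```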

Characters of Projectives

In this paragraph, we will show that the Hattori-Stallings rank of P ∈ Aprojﬁn determines the character χ_P. The reader is reminded that the character map χ : R(A) → A∗trace = (Tr A)∗ has image in the subspace C(A) ≅ (Tr As.p.)∗ of (Tr A)∗; see (1.55). Let A_a, b_A ∈ Endk(A) denote right and left multiplication with a, b ∈ A, respectively (so A_a : x ↦ xa and b_A : x ↦ bx), and deﬁne a k-linear map t = t_A : A → A∗ by

(2.17)  ⟨t(a), b⟩ := trace(b_A ∘ A_a).


Note that if a or b belongs to [A, A], then b_A ∘ A_a ∈ [Endk(A), Endk(A)] and so trace(b_A ∘ A_a) = 0. Moreover, if a or b belongs to rad A, then the operator b_A ∘ A_a is nilpotent, and so trace(b_A ∘ A_a) = 0 again. Therefore, the map t can be reﬁned as in the following commutative diagram, with t denoting all reﬁnements:

(2.18)
    A ────────t───────> A∗
    │can.               ↑
    ↓                   │
  Tr A ───────t───────> A∗trace = (Tr A)∗
    │can.               ↑
    ↓                   │
Tr As.p. ─────t───────> (Tr As.p.)∗ ≅ C(A)
The following proposition is due to Bass [11].

Proposition 2.9. Let A ∈ Algk be ﬁnite dimensional. The following diagram commutes:

P(A) ───c───> R(A)
  │rank        │χ
  ↓            ↓
Tr A ───t───> (Tr A)∗

Proof. We need to check the equality ⟨χ_P, a⟩ = trace(a_A ∘ A_{rank P}) for P ∈ Aprojﬁn and a ∈ A. Fix dual bases (b_i, b^i)_i for A ∈ Vectk and let (x_j, x^j)_j be dual bases for P as an A-module. Then (b_i.x_j, b^i ∘ x^j)_{i,j} are dual bases for P ∈ Vectk; this follows from the calculation, for p ∈ P,

p = Σ_j ⟨x^j, p⟩.x_j = Σ_{i,j} ⟨b^i, ⟨x^j, p⟩⟩ b_i.x_j = Σ_{i,j} ⟨b^i ∘ x^j, p⟩ b_i.x_j.

Thus, Id_P corresponds to Σ_{i,j} b_i.x_j ⊗ (b^i ∘ x^j) ∈ P ⊗ P∗ under the standard isomorphism Endk(P) ≅ P ⊗ P∗ and, for any a ∈ A, the endomorphism a_P ∈ Endk(P) corresponds to Σ_{i,j} a.b_i.x_j ⊗ (b^i ∘ x^j). Therefore,

⟨χ_P, a⟩ = trace a_P = Σ_{i,j} ⟨b^i ∘ x^j, a.b_i.x_j⟩ = Σ_i ⟨b^i, a b_i r⟩ with r = Σ_j ⟨x^j, x_j⟩,

where r is a representative of rank P by (2.6), and Σ_i ⟨b^i, a b_i r⟩ = trace(a_A ∘ A_r) = trace(a_A ∘ A_{rank P}), as claimed. ∎

The Hattori-Stallings Rank Map

If k is a splitting ﬁeld for A, then the character map yields an isomorphism of vector spaces (Proposition 1.49),

R_k(A) := R(A) ⊗_Z k ≅ C(A) ≅ (Tr As.p.)∗.


Our goal in this paragraph is to prove a version of this result for P(A), with the Hattori-Stallings ranks replacing characters. This will further highlight the duality between P(A) and R(A). Let

rank_k : P_k(A) := P(A) ⊗_Z k → Tr A

denote the k-linear extension of the Hattori-Stallings rank map.

Theorem 2.10. Let A ∈ Algk be ﬁnite dimensional and let τ : Tr A ↠ Tr As.p. denote the canonical epimorphism.

(a) If char k = 0, then τ ∘ rank is a group monomorphism P(A) → Tr As.p..

(b) If k is a splitting ﬁeld for A, then we have a k-linear isomorphism τ ∘ rank_k : P_k(A) ≅ Tr As.p.. Thus, the images of the principal indecomposable representations P_S with S ∈ Irr A form a k-basis of Tr As.p..

Proof. Put ρ = τ ∘ rank_k : P_k(A) → Tr As.p.. Then, for S, S′ ∈ Irr A, we have

⟨χ_{S′}, ρ[P_S]⟩ = ⟨χ_{S′}, rank P_S⟩ = ⟨P_S, S′⟩ 1k = δ_{S,S′} dimk D(S) 1k,

where the second equality holds by Proposition 2.5 and the third by (2.14). Thus, the images ρ[P_S] with dimk D(S) 1k ≠ 0 form a k-linearly independent subset of Tr As.p.. If char k = 0 or if k is a splitting ﬁeld for A, then this holds for all S ∈ Irr A. Since P_k(A) is generated by the classes [P_S] ⊗ 1 (Propositions 2.4(a) and 2.8(a)), we obtain a k-linear embedding ρ : P_k(A) ↪ Tr As.p. in these cases. If char k = 0, then the canonical map P(A) ≅ Z^⊕Irr A → P_k(A) ≅ k^⊕Irr A is an embedding, proving (a). For a splitting ﬁeld k, we have dimk Tr As.p. = dimk C(A) = # Irr A (Theorem 1.44) and (b) follows. ∎

Exercises for Section 2.1

Without any mention to the contrary, A ∈ Algk is arbitrary in these exercises.

2.1.1 (Injectives). A module I ∈ AMod is called injective if I satisﬁes the following equivalent conditions:

(i) given a monomorphism f : M → N and an arbitrary g : M → I in AMod, there exists a "lift" g̃ : N → I in AMod such that g̃ ∘ f = g;


(ii) every monomorphism f : I → M in AMod splits: there exists s : M → I such that s ∘ f = Id_I;

(iii) the functor Hom_A(·, I) : AMod → Vectk is exact.

(a) Prove the equivalence of the above conditions.

(b) Let A → B be an algebra map. Show that Coind_A^B : AMod → BMod (§1.2.2) sends injectives of A to injectives of B.

(c) Let (M_i)_i be a family of A-modules. Show that the direct product Π_i M_i is injective if and only if all M_i are injective.

2.1.2 (Semisimplicity). Show that the following are equivalent: (i) A is semisimple; (ii) all V ∈ AMod are projective; (iii) all V ∈ AMod are injective.

2.1.3 (Morita contexts). A Morita context consists of the following data: algebras A, B ∈ Algk, bimodules V ∈ AModB and W ∈ BModA, and bimodule homomorphisms f : V ⊗_B W → A and g : W ⊗_A V → B, with A, B being the regular bimodules. Writing f(v ⊗ w) = vw and g(w ⊗ v) = wv, the maps f and g are required to satisfy the associativity conditions (vw)v′ = v(wv′) and (wv)w′ = w(vw′) for all v, v′ ∈ V and w, w′ ∈ W. Under the assumption that g is surjective, prove:

(a) g is an isomorphism.

(b) Every left B-module is a homomorphic image of a direct sum of copies of W and every right B-module is an image of a direct sum of copies of V.

(c) V and W are ﬁnitely generated projective as A-modules.

2.1.4 (Morita contexts and ﬁniteness conditions). This problem assumes familiarity with Exercise 2.1.3 and uses the same notation. Let (A, B, V, W, f, g) be a Morita context such that A is right noetherian and g is surjective. Prove:

(a) B is right noetherian and V is ﬁnitely generated as a right B-module.

(b) If A is also aﬃne, then B is aﬃne as well.

2.1.5 (Hattori-Stallings rank). Let e = (e_{ij}) ∈ Matn(A) be an idempotent matrix and let P = Areg^⊕n e as in (2.1). Show that rank P = Σ_i e_ii + [A, A].

2.1.6 ("Eilenberg swindle"). Let P ∈ AProj be arbitrary and let F be a free A-module such that F = P′ ⊕ Q with P′ ≅ P. Show that F^⊕N = F ⊕ F ⊕ F ⊕ · · · is a free A-module satisfying P ⊕ F^⊕N ≅ F^⊕N. Conclude that if K0^∞(A) is constructed exactly as K0(A) but using arbitrary projectives of A, then K0^∞(A) = {0}.

2.1.7 (Properties of projective covers). Assume that A is ﬁnite dimensional. Let V, W ∈ Rep A and let α : P_V ↠ V, β : P_W ↠ W be the projective covers. Prove:

(a) If φ : V → W is a homomorphism in Rep A, then there exists a lift φ̃ : P_V → P_W with φ ∘ α = β ∘ φ̃. Furthermore, any such φ̃ is surjective if and only if φ is surjective.

(b) P_{head V} ≅ P_V.

(c) P_{V⊕W} ≅ P_V ⊕ P_W.


2. Further Topics on Algebras

2.1.8 (Local algebras). Assume that A is finite dimensional and local, that is, A_{s.p.} = A/rad A is a division algebra. Show that every P ∈ Aproj_fin is free.

2.1.9 (Equality in K_0(A) and stable isomorphism). (a) For P, Q ∈ Aproj, show that [P] = [Q] holds in K_0(A) if and only if P and Q are stably isomorphic, that is, P ⊕ A_reg^{⊕r} ≅ Q ⊕ A_reg^{⊕r} for some r ∈ Z_+.
(b) Assume that Mat_n(A) is directly finite for all n, that is, xy = 1_{n×n} implies yx = 1_{n×n} for x, y ∈ Mat_n(A); this holds, for example, whenever A is commutative or (right or left) noetherian; see [87, Chapter 5]. Show that [P] ≠ 0 in K_0(A) for any 0 ≠ P ∈ Aproj.

2.1.10 (K_0(A) and P(A)). Show that the map P(A) → K_0(A), [P] ↦ [P], is a monomorphism of groups.

2.1.11 (K_0 of the opposite algebra). Let ·^∨ : AMod → Mod_A ≡ A^{op}Mod be the contravariant functor introduced just before Lemma 2.2. Show that this functor yields an isomorphism K_0(A) ≅ K_0(A^{op}).

2.1.12 (Grothendieck groups of A and A_{s.p.}). Assume that dim_k A < ∞ and let α : A ↠ A_{s.p.} be the canonical map. Recall that R(α) : R(A_{s.p.}) ≅ R(A) (Lemma 1.47). Show that induction along α gives an isomorphism K_0(α) : K_0(A) = P(A) ≅ K_0(A_{s.p.}) = P(A_{s.p.}) = R(A_{s.p.}), [P] ↦ [head P].

2.1.13 (Some Cartan matrices). (a) Let A = k[x]/(x^n) with n ∈ N. Show that the Cartan matrix of A is the 1×1 matrix C = (n).
(b) Let A be the algebra of upper triangular n×n matrices over k. Show that the Cartan matrix of A is the upper triangular n×n matrix C all of whose entries on and above the diagonal are equal to 1.

2.1.14 (Cartan matrix of the Sweedler algebra). Let char k ≠ 2. The algebra A = k⟨x, y⟩/(x², y² − 1, xy + yx) is called the Sweedler algebra.
(a) Realize A as a homomorphic image of the quantum plane O_q(k²) with q = −1 (Exercise 1.1.16) and use this to show that dim_k A = 4.
(b) Show that rad A = (x) and A_{s.p.} ≅ k × k. There are two irreducible A-modules, k_±, with x.1 = 0 and y.1 = ±1.
(c) Show that e_± = ½(1 ± y) ∈ A are idempotents with A = Ae_+ ⊕ Ae_− and xe_± = e_∓x. Conclude that P_{k_±} = Ae_± and that the Cartan matrix of A is the 2×2 matrix C all of whose entries are equal to 1.
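The idempotent computations in Exercise 2.1.14(c) are easy to do by hand, but they can also be checked mechanically. The following Python sketch (an illustration, not part of the exercise) encodes the Sweedler algebra over Q by structure constants on the basis 1, x, y, xy and confirms that e_±² = e_±, e_+ + e_− = 1, and xe_± = e_∓x:

```python
from fractions import Fraction as F

# Basis of the Sweedler algebra: 1, x, y, xy (we work over Q, so char k != 2 holds).
# The structure constants below encode x^2 = 0, y^2 = 1, xy = -yx.
MUL = {  # MUL[(i, j)] = product of basis elements i and j as a coefficient vector
    (0, 0): [1, 0, 0, 0], (0, 1): [0, 1, 0, 0], (0, 2): [0, 0, 1, 0], (0, 3): [0, 0, 0, 1],
    (1, 0): [0, 1, 0, 0], (1, 1): [0, 0, 0, 0], (1, 2): [0, 0, 0, 1], (1, 3): [0, 0, 0, 0],
    (2, 0): [0, 0, 1, 0], (2, 1): [0, 0, 0, -1], (2, 2): [1, 0, 0, 0], (2, 3): [0, -1, 0, 0],
    (3, 0): [0, 0, 0, 1], (3, 1): [0, 0, 0, 0], (3, 2): [0, 1, 0, 0], (3, 3): [0, 0, 0, 0],
}

def mul(a, b):
    out = [F(0)] * 4
    for i in range(4):
        for j in range(4):
            c = a[i] * b[j]
            if c:
                for t in range(4):
                    out[t] += c * MUL[(i, j)][t]
    return out

one = [F(1), F(0), F(0), F(0)]
x = [F(0), F(1), F(0), F(0)]
h = F(1, 2)
e_plus = [h, F(0), h, F(0)]     # (1 + y)/2
e_minus = [h, F(0), -h, F(0)]   # (1 - y)/2

assert mul(e_plus, e_plus) == e_plus and mul(e_minus, e_minus) == e_minus
assert [a + b for a, b in zip(e_plus, e_minus)] == one
assert mul(x, e_plus) == mul(e_minus, x)    # x e_+ = e_- x
assert mul(x, e_minus) == mul(e_plus, x)    # x e_- = e_+ x
```

The same structure-constant setup can be reused to verify the relations of part (a) or to compute in Ae_± directly.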

2.2. Frobenius and Symmetric Algebras

This section features a special class of finite-dimensional algebras, called Frobenius algebras, with particular emphasis on the subclass of symmetric algebras. As we will see, all finite-dimensional semisimple algebras are symmetric and it is in fact


often quite useful to think of semisimple algebras in this larger context. We will learn later that Frobenius algebras encompass all group algebras of finite groups and, more generally, all finite-dimensional Hopf algebras. The material in this section is rather technical and focused on explicit formulae. The tools deployed here will see some heavy use in Chapter 12, but they will not play an essential role elsewhere in this book.

2.2.1. Definition of Frobenius and Symmetric Algebras

Recall that every A ∈ Alg_k carries the regular (A, A)-bimodule structure (Example 1.2): the left and right actions of a ∈ A on A are given by left multiplication, a_A, and by right multiplication, A_a, respectively. This structure gives rise to a bimodule structure on the linear dual, A* = Hom_k(A, k), for which it is customary to use the following notation:

a⇀f↼b = f ∘ b_A ∘ A_a    (a, b ∈ A, f ∈ A*).

Using ⟨·, ·⟩ : A* × A → k to denote the evaluation pairing, the (A, A)-bimodule action becomes

(2.19)    ⟨a⇀f↼b, c⟩ = ⟨f, bca⟩    (a, b, c ∈ A, f ∈ A*).

The algebra A is said to be Frobenius if A*, viewed as a left A-module, is isomorphic to the left regular A-module A_reg = _AA. We will see in Lemma 2.11 below that this condition is equivalent to the corresponding right A-module condition. If A* and A are in fact isomorphic as (A, A)-bimodules (this is not automatic from the existence of a one-sided module isomorphism), then the algebra A is called symmetric. Note that even a mere isomorphism A* ≅ A in Vect_k forces A to be finite dimensional (Appendix B); so Frobenius algebras will necessarily have to be finite dimensional.

2.2.2. Frobenius Data

For any finite-dimensional algebra A, a left A-module isomorphism A* ≅ A amounts to the existence of an element λ ∈ A* such that A* = A⇀λ; likewise for the right side. The next lemma shows in particular that any left A-module generator λ ∈ A* also generates A* as a right A-module, and conversely. In the lemma, we dispense with the summation symbol, and we shall continue to do so hereafter: summation over indices occurring twice is implied throughout Section 2.2.

Lemma 2.11. Let A ∈ Alg_k be finite dimensional. Then the following are equivalent for any λ ∈ A*:
(i) A* = A⇀λ;


(ii) there exist elements x_i, y_i ∈ A (i = 1, ..., dim_k A) satisfying the following equivalent conditions:

(2.20)    a = x_i ⟨λ, a y_i⟩    for all a ∈ A;
(2.21)    ⟨λ, x_i y_j⟩ = δ_{i,j};
(2.22)    a = y_i ⟨λ, x_i a⟩    for all a ∈ A;

(iii) A* = λ↼A.

Proof. Conditions (2.20) and (2.21) both state that (x_i)_i is a k-basis of A and (y_i⇀λ)_i is the corresponding dual basis of A*. Thus, (2.20) and (2.21) are equivalent, and they certainly imply that λ generates A* as a left A-module; so (i) follows. Conversely, if (i) is satisfied, then for any k-basis (x_i)_i of A, the dual basis of A* has the form (y_i⇀λ)_i for suitable y_i ∈ A, giving (2.20), (2.21). Similarly, (2.22) and (2.21) both state that (y_i)_i and (λ↼x_i)_i are dual bases of A and A*, and the existence of such bases is equivalent to (iii).

Frobenius Form. To summarize the foregoing, a finite-dimensional algebra A is a Frobenius algebra if and only if there is a linear form λ ∈ A* satisfying the equivalent conditions of Lemma 2.11; any such λ is called a Frobenius form. Note that the equality A* = A⇀λ is equivalent to the condition that 0 ≠ a ∈ A implies ⟨λ, Aa⟩ ≠ 0, which in turn is equivalent to the corresponding condition for aA. Thus, a Frobenius form is a linear form λ ∈ A* such that Ker λ contains no nonzero left ideal or, equivalently, no nonzero right ideal of A. We will think of a Frobenius algebra as the pair (A, λ). A homomorphism of Frobenius algebras f : (A, λ) → (B, μ) is a map in Alg_k such that μ ∘ f = λ.

Dual Bases. The family (x_i, y_i)_i in Lemma 2.11 is called a pair of dual bases for (A, λ). The identities (2.20) and (2.22) can be expressed by the following diagram:

(2.23)    End_k(A) ≅ A ⊗ A*  (canonical isomorphism),    Id_A ↦ x_i ⊗ (y_i⇀λ) = y_i ⊗ (λ↼x_i).
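For a concrete instance of Lemma 2.11, consider the commutative algebra A = k[x]/(x^n) with the linear form λ that reads off the coefficient of x^{n−1}; this is a standard Frobenius form, with dual bases x_i = x^i and y_i = x^{n−1−i}. The following Python sketch (illustrative only; n = 3 over Q) verifies (2.20)–(2.22):

```python
from fractions import Fraction as F
from itertools import product

N = 3  # A = Q[x]/(x^3); elements are coefficient triples (a0, a1, a2)

def mul(a, b):
    # multiply polynomials and truncate modulo x^N
    out = [F(0)] * N
    for i in range(N):
        for j in range(N):
            if i + j < N:
                out[i + j] += a[i] * b[j]
    return out

def lam(a):
    return a[N - 1]  # λ = coefficient of x^{N-1}

def basis(i):
    return [F(1) if t == i else F(0) for t in range(N)]

xs = [basis(i) for i in range(N)]           # x_i = x^i
ys = [basis(N - 1 - i) for i in range(N)]   # y_i = x^{N-1-i}

# (2.21): <λ, x_i y_j> = δ_{i,j}
for i, j in product(range(N), repeat=2):
    assert lam(mul(xs[i], ys[j])) == (1 if i == j else 0)

# (2.20) and (2.22): a = Σ_i x_i <λ, a y_i> = Σ_i y_i <λ, x_i a>
a = [F(2), F(-1), F(5)]  # an arbitrary test element
lhs = [sum(xs[i][t] * lam(mul(a, ys[i])) for i in range(N)) for t in range(N)]
rhs = [sum(ys[i][t] * lam(mul(xs[i], a)) for i in range(N)) for t in range(N)]
assert lhs == a and rhs == a
```

Since A is commutative, λ is automatically a trace form, so this example is in fact symmetric; compare Exercise 2.1.13(a).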

Nakayama Automorphism. For a given Frobenius form λ ∈ A*, Lemma 2.11 implies that

(2.24)    λ↼a = ν_λ(a)⇀λ    (a ∈ A)

for a unique ν_λ(a) ∈ A. Thus, ⟨λ, ab⟩ = ⟨λ, bν_λ(a)⟩ for a, b ∈ A. This determines an automorphism ν_λ ∈ Aut_{Alg_k}(A), which is called the Nakayama automorphism of (A, λ). In terms of dual bases (x_i, y_i), the Nakayama automorphism is given by

(2.25)    ν_λ(a) = y_i ⟨λ, x_i ν_λ(a)⟩ = y_i ⟨λ, a x_i⟩,

using (2.22) for the first equality and (2.24) for the second.

Changing the Frobenius Form. The data associated to A that we have assembled above, starting with a choice of Frobenius form λ ∈ A*, are unique up to units. Indeed, for each unit u ∈ A^×, the form uλ := u⇀λ ∈ A* is also a Frobenius form and all Frobenius forms of A arise in this way, because they are just the possible generators of the left A-module A* ≅ _AA. The reader will easily check that if (x_i, y_i) are dual bases for (A, λ), then (x_i, y_i u^{-1}) are dual bases for (A, uλ) and the Nakayama automorphisms are related by

(2.26)    ν_{uλ}(a) = u ν_λ(a) u^{-1}    (a ∈ A).

2.2.3. Casimir Elements

Let (A, λ) be a Frobenius algebra. The elements of A ⊗ A that correspond to Id_A under the two isomorphisms End_k(A) ≅ A ⊗ A* ≅ A ⊗ A that are obtained by identifying A* and A via ·⇀λ and λ↼·, respectively, will be referred to as the Casimir elements of (A, λ); they will be denoted by c_λ and c′_λ. (In later chapters, we will consider similar elements, also called Casimir elements, for semisimple Lie algebras; see (5.54) and §6.2.1.) By (2.23), the Casimir elements are given by

(2.27)    c_λ = x_i ⊗ y_i    and    c′_λ = y_i ⊗ x_i.

Thus c′_λ = τ(c_λ), where τ ∈ Aut_{Alg_k}(A ⊗ A) is the switch map, τ(a ⊗ b) = b ⊗ a. The Casimir elements do depend on λ but not on the choice of dual bases (x_i, y_i).

Lemma 2.12. Let (A, λ) be a Frobenius algebra with Nakayama automorphism ν_λ ∈ Aut_{Alg_k}(A). The following identities hold in the algebra A ⊗ A, with a, b ∈ A:
(a) c_λ = (Id ⊗ ν_λ)(c′_λ) = (ν_λ ⊗ ν_λ)(c_λ).
(b) c′_λ = (ν_λ ⊗ Id)(c_λ) = (ν_λ ⊗ ν_λ)(c′_λ).
(c) (a ⊗ b)c_λ = c_λ(b ⊗ ν_λ(a)).
(d) (a ⊗ b)c′_λ = c′_λ(ν_λ(b) ⊗ a).

Proof. The identities in (b) and (d) follow from those in (a) and (c) by applying τ; so we will focus on the latter. For (a), we calculate, using (2.23) and then (2.24),

x_i ⊗ (y_i⇀λ) = y_i ⊗ (λ↼x_i) = y_i ⊗ (ν_λ(x_i)⇀λ).

This gives x_i ⊗ y_i = y_i ⊗ ν_λ(x_i), or c_λ = (Id ⊗ ν_λ)(c′_λ) by (2.27). Applying ν_λ ⊗ Id to this identity, we obtain (ν_λ ⊗ Id)(c_λ) = (ν_λ ⊗ ν_λ)(c′_λ), and then τ yields (Id ⊗ ν_λ)(c′_λ) = (ν_λ ⊗ ν_λ)(c_λ). This proves (a). Part (c) follows from the computations

a x_i ⊗ y_i = x_j ⟨λ, a x_i y_j⟩ ⊗ y_i = x_j ⊗ y_i ⟨λ, x_i y_j ν_λ(a)⟩ = x_j ⊗ y_j ν_λ(a),

using (2.20), (2.24), and (2.22), and

x_i ⊗ b y_i = x_i ⊗ y_j ⟨λ, x_j b y_i⟩ = x_i ⟨λ, x_j b y_i⟩ ⊗ y_j = x_j b ⊗ y_j,

using (2.22) and (2.20).

Casimir Operator and Higman Trace. We now discuss two closely related operators that were originally introduced by D. G. Higman [102]. Continuing with the notation of (2.23), (2.27), they are defined by:

(2.28)    γ_λ : A → A, a ↦ x_i a y_i    and    γ′_λ : A → A, a ↦ y_i a x_i.

The operator γ_λ will be called the Higman trace and γ′_λ will be referred to as the Casimir operator. The following lemma justifies the claims, implicit in (2.28), that the Higman trace does indeed factor through the universal trace Tr : A ↠ Tr A = A/[A, A] and that the Casimir operator has values in the center Z A.

Lemma 2.13. Let (A, λ) be a Frobenius algebra with Nakayama automorphism ν_λ ∈ Aut_{Alg_k}(A). Then, for all a, b, c ∈ A,

a γ_λ(bc) = γ_λ(cb) ν_λ(a)    and    a γ′_λ(bc) = γ′_λ(ν_λ(c)b) a.

Proof. The identity in Lemma 2.12(c) states that a x_i ⊗ b y_i = x_i b ⊗ y_i ν_λ(a). Multiplying this on the right with c ⊗ 1 and then applying the multiplication map of A gives a x_i c b y_i = x_i b c y_i ν_λ(a) or, equivalently, a γ_λ(cb) = γ_λ(bc) ν_λ(a). The formula for γ′_λ follows in the same way from Lemma 2.12(d).

2.2.4. Traces

In this subsection, we use the Frobenius structure to derive some trace formulae that will be useful later on. To start with, the left and right regular representations of any Frobenius algebra A have the same character. Indeed, for any a ∈ A, we compute

trace(a_A) = trace(a_{A*}) = trace((A_a)*) = trace(A_a),

where the first equality uses the fact that A ≅ A* in AMod, the second is due to the switch in sides when passing from A to A* in (2.19), and the third holds by (B.25).

Lemma 2.14. Let (A, λ) be a Frobenius algebra with dual bases (x_i, y_i). Then, for any f ∈ End_k(A),

trace(f) = ⟨λ, f(x_i) y_i⟩ = ⟨λ, x_i f(y_i)⟩.


Proof. By (2.23), the canonical isomorphism End_k(A) ≅ A ⊗ A* sends f to

f(x_i) ⊗ (y_i⇀λ) = f(y_i) ⊗ (λ↼x_i).

Since the trace function on End_k(A) becomes the evaluation pairing on A ⊗ A*, we obtain the formula in the lemma.

With f = b_A ∘ A_a for a, b ∈ A, Lemma 2.14 yields the following expression for the map t : A → A* from (2.17) in terms of the Higman trace:

(2.29)    trace(b_A ∘ A_a) = ⟨λ, b γ_λ(a)⟩ = ⟨λ, γ_λ(b) a⟩.

Equation (2.29) shows in particular that the left and right regular representations of A have the same character, as was already shown above:

(2.30)    χ_reg(a) = trace(a_A) = trace(A_a) = ⟨λ, γ_λ(a)⟩ = ⟨λ, γ_λ(1)a⟩ = ⟨λ, a γ_λ(1)⟩.

2.2.5. Symmetric Algebras

Recall that the algebra A is symmetric if there is an isomorphism A ≅ A* in AMod_A. In this case, the image of 1 ∈ A will be a Frobenius form λ ∈ A* such that a⇀λ = λ↼a holds for all a ∈ A. Thus:

(2.31)    ν_λ = Id_A    or, equivalently,    λ ∈ A*_trace.

Recall also that Frobenius forms λ ∈ A* are characterized by the condition that Ker λ contains no nonzero left or right ideal of A. For λ ∈ A*_trace, this is equivalent to saying that Ker λ contains no nonzero two-sided ideal of A, because ⟨λ, Aa⟩ = ⟨λ, AaA⟩ = ⟨λ, aA⟩ for a ∈ A. Thus, a finite-dimensional algebra A is symmetric if and only if there is a trace form λ ∈ A*_trace such that Ker λ contains no nonzero ideal of A. In light of (2.26), a symmetric algebra is also the same as a Frobenius algebra A possessing a Frobenius form λ ∈ A* such that the Nakayama automorphism ν_λ is an inner automorphism of A, in which case the same holds for any Frobenius form of A. When dealing with a symmetric algebra A, it will be convenient to always fix a Frobenius form as in (2.31); this then determines λ up to a central unit of A.

Casimir Element and Trace. Let us note some consequences of (2.31). First, c_λ = c′_λ by Lemma 2.12(a). Therefore, the Casimir operator is the same as the Higman trace: γ_λ = γ′_λ. We will simply write c_λ and γ_λ, respectively, and refer to γ_λ as the Casimir trace of (A, λ). Fixing dual bases (x_i, y_i) for (A, λ) as in (2.27) and (2.28), the Casimir element and trace are given by:

(2.32)    c_λ = x_i ⊗ y_i = y_i ⊗ x_i


and

(2.33)    γ_λ : A → Z A,    a ↦ x_i a y_i = y_i a x_i,

a map that also factors through Tr A.

The square of the Casimir element, c_λ², belongs to Z(A ⊗ A) = Z A ⊗ Z A, because (a ⊗ b)c_λ = c_λ(b ⊗ a) for all a, b ∈ A by Lemma 2.12(c).

Example 2.15 (Matrix algebras). Let A = Mat_n(k) be the n×n matrix algebra. Then we can take the ordinary matrix trace as Frobenius form: λ = trace. Dual bases for this form are provided by the standard matrices e_{j,k}, with 1 in the (j, k)-position and 0s elsewhere: trace(e_{j,k} e_{k′,j′}) = δ_{j,j′} δ_{k,k′}. Thus, with implied summation over both j and k, the Casimir element is

c_trace = e_{j,k} ⊗ e_{k,j}

and its square is c_trace² = 1_{n×n} ⊗ 1_{n×n}. By (2.33), the Casimir trace of a matrix a = (a_{l,m}) ∈ A is e_{j,k} a e_{k,j} = a_{k,k} e_{j,j}; so

γ_trace(a) = trace(a) 1_{n×n}    (a ∈ A).

Now (2.30) gives the formula χ_reg(a) = n trace(a) for the regular character. (This was already observed in Exercise 1.5.2.)

2.2.6. Semisimple Algebras as Symmetric Algebras

Proposition 2.16. Every finite-dimensional semisimple k-algebra is symmetric.

Proof. Let A ∈ Alg_k be finite dimensional and semisimple. Wedderburn's Structure Theorem allows us to assume that A is in fact simple, because a finite direct product of algebras is symmetric if all its components are (Exercise 2.2.1). By Theorem 1.44(b), we also know that A*_trace ≠ 0. If λ is any nonzero trace form, then Ker λ contains no nonzero ideal of A, by simplicity. Thus, λ serves as a Frobenius form for A.

Now let A be split semisimple. The algebra structure of A is completely determined by the dimensions of the irreducible representations of A in view of the Wedderburn isomorphism (1.46):

A ≅ ∏_{S ∈ Irr A} End_k(S) ≅ ∏_{S ∈ Irr A} Mat_{dim_k S}(k),    a ↦ (a_S)_{S ∈ Irr A}.
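The formulas of Example 2.15 are easy to confirm by direct computation. The following Python sketch (illustrative only; n = 3 over the rationals) checks the dual-basis relation trace(e_{j,k} e_{k′,j′}) = δ_{j,j′} δ_{k,k′} and the identity γ_trace(a) = trace(a)·1_{n×n}:

```python
from fractions import Fraction as F

n = 3

def matmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(n))

def unit(j, k):
    # standard matrix e_{j,k}
    return [[F(1) if (r, c) == (j, k) else F(0) for c in range(n)] for r in range(n)]

# dual bases for λ = trace: the pair (e_{j,k}, e_{k,j})
for j in range(n):
    for k in range(n):
        for j2 in range(n):
            for k2 in range(n):
                expected = 1 if (j, k) == (j2, k2) else 0
                assert trace(matmul(unit(j, k), unit(k2, j2))) == expected

# Casimir trace: γ_trace(a) = Σ_{j,k} e_{j,k} a e_{k,j} = trace(a)·1
a = [[F(i * n + j + 1) for j in range(n)] for i in range(n)]
gamma = [[F(0)] * n for _ in range(n)]
for j in range(n):
    for k in range(n):
        t = matmul(matmul(unit(j, k), a), unit(k, j))
        for r in range(n):
            for c in range(n):
                gamma[r][c] += t[r][c]
assert gamma == [[trace(a) if r == c else F(0) for c in range(n)] for r in range(n)]
```

In particular, the computed γ_trace(a) is a scalar matrix, illustrating that the Casimir trace takes values in the center, here k·1_{n×n}.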


The determination of all dim_k S for a given A can be a formidable task, however, while it is comparatively easy to come up with a Frobenius form λ ∈ A*_trace and assemble the Frobenius data of A. Therefore, it is of interest to figure out what information concerning the dimensions dim_k S can be gleaned from these data. We shall in particular exploit the Casimir square c_λ² ∈ Z A ⊗ Z A and the value γ_λ(1) ∈ Z A of the Casimir trace γ_λ. Note that the operator c_S ∈ End_k(S) is a scalar for c ∈ Z A; so we may view γ_λ(1)_S ∈ k for all S ∈ Irr A. Recall that the primitive central idempotent e(S) ∈ Z A is the element corresponding to (0, ..., 0, Id_S, 0, ..., 0) ∈ ∏_{S ∈ Irr A} End_k(S) under the above isomorphism; so Z A = ⊕_{S ∈ Irr A} k e(S) and

e(S)_T = δ_{S,T} Id_S    (S, T ∈ Irr A).

Our first goal is to give a formula for e(S) in terms of Frobenius data of A. We will also describe the image of the Casimir square c_λ² under the following isomorphism coming from the Wedderburn isomorphism:

(2.34)    A ⊗ A ≅ ∏_{S,T ∈ Irr A} End_k(S) ⊗ End_k(T),    a ⊗ b ↦ ((a ⊗ b)_{S,T} := a_S ⊗ b_T).

Theorem 2.17. Let A ∈ Alg_k be split semisimple with Frobenius form λ ∈ A*_trace. Then, for all S, T ∈ Irr A:
(a) e(S) γ_λ(1)_S = (dim_k S) (χ_S ⊗ Id_A)(c_λ) = (dim_k S) (Id_A ⊗ χ_S)(c_λ), and γ_λ(1)_S = 0 if and only if (dim_k S) 1_k = 0.
(b) (c_λ)_{S,T} = 0 if S ≠ T, and (dim_k S)² (c_λ²)_{S,S} = γ_λ(1)_S².

Proof. (a) The equality (χ_S ⊗ Id_A)(c_λ) = (Id_A ⊗ χ_S)(c_λ) follows from (2.32). In order to prove the equality e(S) γ_λ(1)_S = (dim_k S)(Id_A ⊗ χ_S)(c_λ), use (2.20) to write e(S) = x_i ⟨λ, e(S) y_i⟩. We need to show that ⟨λ, e(S) y_i⟩ γ_λ(1)_S = (dim_k S) χ_S(y_i) for all i or, equivalently,

(2.35)    ⟨λ, a e(S)⟩ γ_λ(1)_S = (dim_k S) χ_S(a)    (a ∈ A).

For this, we use the regular character; by (2.30),

χ_reg(a e(S)) = ⟨λ, a e(S) γ_λ(1)⟩ = ⟨λ, a e(S)⟩ γ_λ(1)_S.

On the other hand, the isomorphism A_reg ≅ ⊕_{T ∈ Irr A} T^{⊕ dim_k T} from Wedderburn's Structure Theorem gives χ_reg = Σ_{T ∈ Irr A} (dim_k T) χ_T. Since e(S) χ_T = δ_{S,T} χ_S, we obtain

(2.36)    e(S) χ_reg = (dim_k S) χ_S.


Thus, χ_reg(a e(S)) = (dim_k S) χ_S(a), proving (2.35). Since χ_S and ⟨λ, · e(S)⟩ are both nonzero, (2.35) also shows that γ_λ(1)_S = 0 if and only if (dim_k S) 1_k = 0.

(b) For S ≠ T, the identity (a ⊗ b)c_λ = c_λ(b ⊗ a) from Lemma 2.12 gives

(c_λ)_{S,T} = ((e(S) ⊗ e(T))c_λ)_{S,T} = (c_λ(e(T) ⊗ e(S)))_{S,T} = (c_λ)_{S,T}(0_S ⊗ 0_T) = 0.

It remains to consider the case S = T. Here, the identity in Lemma 2.12 gives

(2.37)    c_λ² = c_λ(x_i ⊗ y_i) = (y_i ⊗ 1)c_λ(x_i ⊗ 1) = (γ_λ ⊗ Id)(c_λ).

For c ∈ Z A, the operator c_S ∈ End_k(S) is a scalar and so χ_S(c) = (dim_k S) c_S. Therefore, writing a_S = ρ_S(a) for a ∈ A, we calculate

(2.38)    (dim_k S)(ρ_S ∘ γ_λ)(a) = (χ_S ∘ γ_λ)(a) = χ_S(x_i a y_i) = χ_S(a y_i x_i) = χ_S(a γ_λ(1)) = χ_S(a) γ_λ(1)_S

and further

(dim_k S)² (c_λ²)_{S,S} = (dim_k S)² (ρ_S ⊗ ρ_S)((γ_λ ⊗ Id)(c_λ))    [by (2.37)]
= (dim_k S)² ((ρ_S ∘ γ_λ) ⊗ ρ_S)(c_λ)
= (dim_k S) ((χ_S ⊗ ρ_S)(c_λ)) γ_λ(1)_S    [by (2.38)]
= (Id_k ⊗ ρ_S)((dim_k S)(χ_S ⊗ Id)(c_λ)) γ_λ(1)_S
= ρ_S(e(S) γ_λ(1)_S) γ_λ(1)_S    [by (a)]
= γ_λ(1)_S².

This completes the proof of the theorem.

2.2.7. Integrality and Divisibility

Theorem 2.17 is a useful tool in proving certain divisibility results for the dimensions of irreducible representations. For this, we recall some standard facts about integrality; proofs can be found in most textbooks on commutative algebra or algebraic number theory. Let R be a ring and let S be a subring of the center Z R. An element r ∈ R is said to be integral over S if f(r) = 0 for some monic polynomial f ∈ S[x]. The following facts will be referred to repeatedly, in later sections as well:
• An element r ∈ R is integral over S if and only if r ∈ R′ for some subring R′ ⊆ R such that R′ contains S and is finitely generated as an S-module.
• If R is commutative, then the elements of R that are integral over S form a subring of R containing S; it is called the integral closure of S in R.
• An element of Q that is integral over Z must belong to Z.
The last fact above reduces the problem of showing that a given nonzero s ∈ Z divides another t ∈ Z to proving that the fraction t/s is merely integral over Z.
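The third fact admits a very concrete explanation: if t/s is in lowest terms with s > 1 and f is monic of degree d with integer coefficients, then s^d · f(t/s) ≡ t^d (mod s), which is nonzero since gcd(t, s) = 1; so f(t/s) ≠ 0. The following Python sketch (illustrative only) tests this congruence for all monic cubics with small coefficients:

```python
from fractions import Fraction as F
from itertools import product

# A rational t/s in lowest terms with s > 1 is never a root of a monic
# integer polynomial: s^d * f(t/s) ≡ t^d (mod s), and gcd(t, s) = 1.
def check(t, s, coeffs):
    # f = x^d + coeffs[0] x^{d-1} + ... + coeffs[-1], all coefficients integers
    d = len(coeffs)
    val = F(t, s) ** d + sum(c * F(t, s) ** (d - 1 - i) for i, c in enumerate(coeffs))
    scaled = val * s ** d
    assert scaled.denominator == 1          # s^d * f(t/s) is an integer
    assert scaled.numerator % s == pow(t, d, s)
    assert val != 0                         # hence t/s is not a root

for coeffs in product(range(-3, 4), repeat=3):  # all monic cubics, |coefficients| <= 3
    check(3, 2, list(coeffs))
    check(-5, 4, list(coeffs))
```

This is exactly the rational root argument in disguise, specialized to monic polynomials.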


Corollary 2.18. Let A be a split semisimple algebra over a field k of characteristic 0 and let λ ∈ A*_trace be a Frobenius form such that γ_λ(1) ∈ Z. Then the following are equivalent:
(i) the dimension of every irreducible representation of A divides γ_λ(1);
(ii) the Casimir element c_λ is integral over Z.

Proof. Theorem 2.17 gives the formula

(2.39)    (c_λ²)_{S,S} = (γ_λ(1)/dim_k S)².

If (i) holds, then the isomorphism (2.34) sends Z[c_λ²] to ∏_{S ∈ Irr A} Z, because (c_λ)_{S,T} = 0 for S ≠ T (Theorem 2.17). Thus, Z[c_λ] is a finitely generated Z-module and (ii) follows. Conversely, (ii) implies that c_λ² also satisfies a monic polynomial over Z and all (c_λ²)_{S,S} satisfy the same polynomial. Therefore, the fractions γ_λ(1)/dim_k S must be integers, proving (i).
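Group algebras are only introduced in Chapter 3, but they furnish a natural preview of Corollary 2.18. For a finite group G, the form λ reading off the coefficient of the identity is a trace form with dual bases (g, g^{-1})_{g ∈ G}, so γ_λ(1) = |G|; the corollary then points at the classical fact that irreducible dimensions divide |G| in characteristic 0. These facts about kG are assumptions here, not yet proved in the text; the Python sketch below verifies the dual-basis relation, γ_λ(1) = |G|, and the centrality of Casimir-trace values for G = S_3:

```python
from itertools import permutations

# G = S_3, elements as permutation tuples; composition (p*q)(i) = p[q[i]]
G = list(permutations(range(3)))
e = tuple(range(3))

def comp(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    out = [0, 0, 0]
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

# Group algebra QG: dict mapping g to its coefficient; λ = coefficient of e.
def lam(a):
    return a.get(e, 0)

# Dual bases (x_g, y_g) = (g, g^{-1}): λ(g h^{-1}) = δ_{g,h}
for g in G:
    for h in G:
        assert lam({comp(g, inv(h)): 1}) == (1 if g == h else 0)

def gamma(a):
    # Casimir trace γ_λ(a) = Σ_g g a g^{-1}
    out = {}
    for g in G:
        for h, c in a.items():
            key = comp(comp(g, h), inv(g))
            out[key] = out.get(key, 0) + c
    return {k: v for k, v in out.items() if v}

assert gamma({e: 1}) == {e: 6}      # γ_λ(1) = |G| = 6
t = (1, 0, 2)                       # a transposition
ct = gamma({t: 1})                  # a multiple of a class sum
for g in G:                         # γ_λ(t) is central: g·ct = ct·g
    left = {comp(g, h): c for h, c in ct.items()}
    right = {comp(h, g): c for h, c in ct.items()}
    assert left == right

# S_3 has irreducible representations of dimensions 1, 1, 2 (a standard fact),
# each dividing γ_λ(1) = 6, consistent with Corollary 2.18.
assert all(6 % d == 0 for d in (1, 1, 2))
```

The dimensions 1, 1, 2 are quoted, not computed; computing them is the subject of Chapter 3 and, for symmetric groups, Chapter 4.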

Corollary 2.19. Let A be a split semisimple k-algebra, with char k = 0, and let λ ∈ A*_trace be a Frobenius form for A. Furthermore, let φ : (A, λ) → (B, μ) be a homomorphism of Frobenius k-algebras and assume that γ_μ(1) ∈ k. Then, for all S ∈ Irr A,

γ_μ(1)/dim_k Ind_A^B S = γ_λ(1)_S/dim_k S.

If the Casimir element c_λ is integral over Z, then so is the scalar γ_μ(1)/dim_k Ind_A^B S ∈ k.

Proof. Putting e := e(S), we have S^{⊕ dim_k S} ≅ Ae and so (Ind_A^B S)^{⊕ dim_k S} ≅ Bφ(e). Since φ(e) ∈ B is an idempotent, we have dim_k Bφ(e) = trace(B_{φ(e)}) (Exercise 1.5.1). Therefore, noting that γ_λ(1)_S ≠ 0 by Theorem 2.17(a), since char k = 0,

(dim_k Ind_A^B S)(dim_k S) = dim_k Bφ(e) = trace(B_{φ(e)}) = ⟨μ, φ(e) γ_μ(1)⟩ = ⟨μ, φ(e)⟩ γ_μ(1) = ⟨λ, e⟩ γ_μ(1) = ((dim_k S)²/γ_λ(1)_S) γ_μ(1),

using (2.30) for the third equality, μ ∘ φ = λ for the fifth, and (2.35) with a = 1 for the last. The claimed equality γ_μ(1)/dim_k Ind_A^B S = γ_λ(1)_S/dim_k S is immediate from this. Finally, Theorem 2.17 gives (γ_μ(1)/dim_k Ind_A^B S)² = (c_λ²)_{S,S}, which is integral over Z if c_λ is.

2.2.8. Separability

A finite-dimensional k-algebra A is called separable if K ⊗ A is semisimple for every field extension K/k. One can show that this condition is equivalent to semisimplicity of A plus separability of the field extension Z(D(S))/k for every S ∈ Irr A. The reader wishing to see some details on this connection and other properties of separable algebras is referred to Exercises 1.4.11 and 1.5.6. Here, we give a characterization of separability in terms of the Casimir operator, which is due to D. G. Higman [102].

Proposition 2.20. Let (A, λ) be a Frobenius algebra. Then γ_λ and γ′_λ both vanish on rad A and their images are contained in the socle of A_reg.

Proof. As we have already observed in (2.18), the operator b_A ∘ A_a ∈ End_k(A) is nilpotent if a or b belongs to rad A, and hence trace(b_A ∘ A_a) = 0. Consequently, (2.29) gives ⟨λ, A γ_λ(rad A)⟩ = 0 and ⟨λ, rad A · γ_λ(A)⟩ = 0. Since the Frobenius form λ does not vanish on nonzero left ideals, we must have γ_λ(rad A) = 0 and rad A · γ_λ(A) = 0. This shows that rad A ⊆ Ker γ_λ and Im γ_λ ⊆ soc A_reg. For γ′_λ, we first compute

⟨λ, b γ′_λ(a)⟩ = ⟨λ, b y_i a x_i⟩ = ⟨λ, x_i ν_λ(b y_i a)⟩ = trace(ν_λ ∘ b_A ∘ A_a),

using (2.24) for the second equality and Lemma 2.14 for the third. The operator ν_λ ∘ b_A ∘ A_a ∈ End_k(A) is again nilpotent if a or b belongs to rad A, because its nth power has image in (rad A)^n in this case. We can now repeat the above reasoning verbatim to obtain the same conclusions for γ′_λ.

For any Frobenius algebra (A, λ), the Casimir operator γ′_λ : A → Z A is Z A-linear. Hence, the image γ′_λ(A) is an ideal of Z A. This ideal does not depend on the choice of Frobenius form λ; indeed, if λ′ ∈ A* is another Frobenius form, then γ′_{λ′}(a) = γ′_λ(ua) for some unit u ∈ A^× (§2.2.2). Thus, we may define

Γ_A := γ′_λ(A).

Theorem 2.21. The following are equivalent for a finite-dimensional A ∈ Alg_k:
(i) A is separable;
(ii) A is symmetric and Γ_A = Z A;
(iii) A is Frobenius and Γ_A = Z A.

Proof. The proof of (i) ⇒ (ii) elaborates on the proof of Proposition 2.16; we need to make sure that the current stronger separability hypothesis on A also gives Γ_A = Z A. As in the earlier proof, Exercise 2.2.1 allows us to assume that A is simple. Thus, F := Z A is a field and F/k is a finite separable field extension (Exercise 1.5.6). It suffices to show that Γ_A ≠ 0. For this, let F̄ denote an algebraic closure of F. Then A ⊗_F F̄ ≅ Mat_n(F̄) for some n (Exercise 1.1.14). The ordinary trace map trace : Mat_n(F̄) → F̄ is nonzero on A, since A generates Mat_n(F̄) as an F̄-vector space. It is less clear that the restriction of the trace map to A has values in F, but this is in fact the case (e.g., Reiner [179, (9.3)]), giving a trace form tr : A ↠ F. (This map is called the reduced trace of the central simple F-algebra A.) Since F/k is a finite separable field extension, we also have the field trace T_{F/k} : F ↠ k; this is the same as the regular character χ_reg of


the k-algebra F (Exercise 1.5.5). The composite λ := T_{F/k} ∘ tr : A ↠ k gives a nonzero trace form for the k-algebra A, which we may take as our Frobenius form. If (a_i, b_i) are dual F-bases for (A, tr) and (e_j, f_j) are dual k-bases for (F, T_{F/k}), then (e_j a_i, b_i f_j) are dual k-bases for (A, λ): ⟨λ, e_j a_i b_{i′} f_{j′}⟩ = δ_{(i,j),(i′,j′)}. Moreover, γ_tr = tr by Example 2.15, and so

Γ_A = e_j a_i A b_i f_j = (γ_{T_{F/k}} ∘ γ_tr)(A) = (γ_{T_{F/k}} ∘ tr)(A) = γ_{T_{F/k}}(F).

This is nonzero, because T_{F/k} ∘ γ_{T_{F/k}} = χ_reg = T_{F/k} ≠ 0 by (2.30).

The implication (ii) ⇒ (iii) being trivial, let us turn to the proof of (iii) ⇒ (i). Here, we can be completely self-contained. Note that the properties in (iii) are preserved under any field extension K/k: if λ ∈ A* is a Frobenius form for A such that γ′_λ(a) = 1 for some a ∈ A, then λ_K = Id_K ⊗ λ is a Frobenius form for A_K = K ⊗ A (any pair of dual bases (x_i, y_i) for (A, λ) also works for (A_K, λ_K)) and γ′_{λ_K}(a) = γ′_λ(a) = 1. Thus, it suffices to show that (iii) implies that A is semisimple. But 1 = γ′_λ(a) ∈ soc A by Proposition 2.20. Hence soc A = A, proving that A is semisimple.

2.2.9. Projectives and Injectives

In this subsection, we assume that the reader is familiar with the material in Section 2.1, including Exercise 2.1.1. We start with some remarks on duality, for which A ∈ Alg_k can be arbitrary. For any M ∈ AMod, the action in (2.19) makes the linear dual M* = Hom_k(M, k) a right A-module: ⟨f↼a, m⟩ = ⟨f, am⟩. Likewise, the dual N* of any N ∈ Mod_A becomes a left A-module via ⟨a⇀f, n⟩ = ⟨f, na⟩. Moreover, for any map φ : M → M′ in AMod, the dual map φ* : (M′)* → M* is a map in Mod_A, and similarly for maps in Mod_A. In this way, the familiar contravariant and exact functor ·* : Vect_k → Vect_k (§B.3.2) restricts to functors, also contravariant and exact, ·* : AMod → Mod_A and ·* : Mod_A → AMod. We will focus on finite-dimensional modules M ∈ AMod in the proposition below; see Exercise 2.2.8 for the general statement. In this case, the canonical isomorphism (B.22) in Vect_k is in fact an isomorphism M ≅ M** in AMod.

Proposition 2.22. Let A ∈ Alg_k be Frobenius. Then finite-dimensional A-modules are projective if and only if they are injective.

Proof. First, let A be an arbitrary finite-dimensional k-algebra and let M ∈ AMod be finite dimensional. We claim that

(2.40)    M is projective ⟺ M* is injective.

To prove this, assume that M is projective and let f : M* → N be a monomorphism in Mod_A. We must produce a map g : N → M* such that g ∘ f = Id_{M*} (Exercise 2.1.1). But the dual map f* : N* ↠ M** ≅ M splits, since M is projective;


so there is a map s : M → N* in AMod with f* ∘ s = Id_M (Proposition 2.1). Then s* ∘ f** = Id_{M*} and, under the identifications M* ≅ M*** and N ⊆ N**, the map f becomes f**; so we may take g to be the restriction of s* to N. The proof of the converse is analogous, and the foregoing does of course also apply to right modules.

Now assume that A is Frobenius and let M be injective. Then M* ∈ Mod_A is projective by (2.40) and so M* is isomorphic to a direct summand of (A_A)^{⊕r} for some r. It follows that M ≅ M** is isomorphic to a direct summand of ((A_A)*)^{⊕r} ≅ (_AA)^{⊕r}, proving that M is projective. Conversely, if M is projective, then M* is injective by (2.40). Since the Frobenius property is right-left symmetric, the preceding argument applies to right modules as well, giving that M* is projective. Therefore, M ≅ M** is injective by (2.40).

Next, we turn our attention to the indecomposable projectives. By Proposition 2.8, we know that they are given, up to isomorphism, by the projective covers P_S with S ∈ Irr A, and (2.13) further tells us that head P_S ≅ S. All this is true for any finite-dimensional algebra. When A is Frobenius, the socle of P_S is irreducible as well. To state the precise result, we recall that the Nakayama automorphism ν_λ of A is determined up to an inner automorphism of A; see (2.26). Since twisting any M ∈ AMod by an inner automorphism does not change the isomorphism type of M (Exercise 1.2.3), the various ν_λ-twists of M for specific choices of the Frobenius form λ are all isomorphic. Therefore, we may unambiguously speak of the Nakayama twist ^νM of M. If A is symmetric, then ^νM ≅ M.

Proposition 2.23. Let A ∈ Alg_k be Frobenius and let S ∈ Irr A. Then S ≅ ^ν(soc P_S).

Proof. Put P = P_S and T = soc P. The inclusion T ↪ P gives rise to an epimorphism P* ↠ T* in Mod_A. Here, T* is completely reducible, because the functor ·* commutes with finite direct sums and preserves irreducibility of A-modules. Moreover, P* is projective by Proposition 2.22 and (2.40), and P* is indecomposable, because P is. Therefore, P* has an irreducible head, whence T* must be irreducible. Thus, T is irreducible as well. In order to describe T more precisely, write P ≅ Ae for some idempotent e = e² ∈ A (Proposition 2.8) and identify T with an irreducible left ideal of A satisfying T = Te. Fixing a Frobenius form λ ∈ A* for A, the image ν_λ(T) is an irreducible left ideal of A such that

0 ≠ ⟨λ, Te⟩ = ⟨λ, e ν_λ(T)⟩,

by (2.24). Therefore, ex ≠ 0 for some x ∈ ν_λ(T), and ae ↦ aex gives an epimorphism P ≅ Ae ↠ ν_λ(T). Since head P ≅ S, it follows that ν_λ(T) ≅ S, which proves the proposition.

The main result of this subsection concerns the Cartan matrix of A. If k is a splitting field for A, then the Cartan matrix has entries ⟨P_S, P_{S′}⟩, where ⟨V, W⟩ = dim_k Hom_A(V, W) for V, W ∈ Rep_fin A; see (2.16).


Theorem 2.24. Let A ∈ Alg_k be Frobenius and let S, S′ ∈ Irr A. Then ⟨P_S, P_{S′}⟩ = ⟨P_{S′}, ^νP_S⟩. In particular, if A is symmetric and k is a splitting field for A, then the Cartan matrix of A is symmetric.

Proof. Note that P(^νV) ≅ ^ν(P_V). If A is symmetric, then ^νV ≅ V; so it suffices to prove the first assertion. Putting P = P_S and P′ = P_{S′}, we need to show that ⟨P, P′⟩ = ⟨P′, ^νP⟩. Fix a composition series 0 = V_0 ⊂ V_1 ⊂ ··· ⊂ V_l = P′ of P′ and put V̄_i = V_i/V_{i−1} ∈ Irr A. Since P is projective, ⟨P, ·⟩ is additive on short exact sequences in Rep_fin A. Therefore, by (2.14),

⟨P, P′⟩ = Σ_i ⟨P, V̄_i⟩ = Σ_{i : V̄_i ≅ S} dim_k D(S) = μ(S, P′) dim_k D(S).

Next, ^νP is injective (Proposition 2.22) and soc ^νP ≅ ^ν(soc P) ≅ S (Proposition 2.23). Therefore, ⟨·, ^νP⟩ is additive on short exact sequences in Rep_fin A and ⟨T, ^νP⟩ = δ_{S,T} dim_k D(S) for T ∈ Irr A. The following calculation now gives the desired equality:

⟨P′, ^νP⟩ = Σ_i ⟨V̄_i, ^νP⟩ = Σ_{i : V̄_i ≅ S} dim_k D(S) = μ(S, P′) dim_k D(S).

Exercises for Section 2.2

2.2.1 (Direct and tensor products, matrix rings, corners). Prove:
(a) The direct product A × B is Frobenius if and only if both A and B are Frobenius; likewise for symmetric. In this case, Γ(A × B) = ΓA × ΓB.
(b) If (A, λ) and (B, μ) are Frobenius, then so is (A ⊗ B, λ ⊗ μ); similarly for symmetric. Furthermore, Γ(A ⊗ B) = ΓA ⊗ ΓB.
(c) If (A, λ) is Frobenius (or symmetric), then so is (Mat_n(A), λ_n), with ⟨λ_n, (a_{i,j})⟩ := Σ_i ⟨λ, a_{i,i}⟩, and Γ(Mat_n(A)) = (ΓA) 1_{n×n}.
(d) Let (A, λ) be symmetric with λ ∈ A*_trace, and let 0 ≠ e = e² ∈ A. Then (eAe, λ|_{eAe}) is also symmetric.

2.2.2 (Center and twisted trace forms). Let (A, λ) be a Frobenius algebra. Show that the isomorphism λ↼· : A ≅ A* in Mod_A restricts to an isomorphism Z A ≅ {f ∈ A* | f↼a = ν_λ(a)⇀f for all a ∈ A} in Vect_k. In particular, if A is symmetric, then Z A ≅ A*_trace.

2.2.3 (Sweedler algebra). Assume that char k ≠ 2 and let A be the Sweedler k-algebra (Exercise 2.1.14). Thus, A has k-basis 1, x, y, xy and multiplication is


given by x² = 0, y² = 1, and xy = −yx. Define λ ∈ A* by ⟨λ, x⟩ = 1 and ⟨λ, 1⟩ = ⟨λ, y⟩ = ⟨λ, xy⟩ = 0. Show:
(a) (A, λ) is Frobenius, but A is not symmetric.
(b) The Nakayama automorphism ν_λ is given by ν_λ(x) = x, ν_λ(y) = −y.
(c) The Casimir operator γ′_λ vanishes and the Higman trace γ_λ is given by γ_λ(1) = 4x and γ_λ(x) = γ_λ(y) = γ_λ(xy) = 0.

2.2.4 (Annihilators). Let (A, λ) be a Frobenius algebra. For any subset X ⊆ A, put l.ann_A X = {a ∈ A | aX = 0} and r.ann_A X = {a ∈ A | Xa = 0}. Show:
(a) If I is an ideal of A, then r.ann_A I = l.ann_A ν_λ(I).
(b) For any X, we have l.ann_A(r.ann_A X) = AX and r.ann_A(l.ann_A X) = XA.

2.2.5 (Socle series of a Frobenius algebra). The left and right socle series of any finite-dimensional A ∈ Alg_k are defined by l.soc_n A := l.ann_A (rad A)^n and r.soc_n A := r.ann_A (rad A)^n. For a Frobenius algebra (A, λ), use Exercise 2.2.4 to show that l.soc_n A = r.soc_n A for all n.

2.2.6 (Casimir operator and Higman trace). Let (A, λ) be a Frobenius algebra. Show:
(a) ν_λ ∘ γ_λ = γ_λ ∘ ν_λ and ν_λ ∘ γ′_λ = γ′_λ ∘ ν_λ.
(b) γ_λ²(a) = γ_λ(a) γ_λ(1) and γ′_λ²(a) = γ′_λ(a) γ′_λ(1).

(c) The identity trace(a A ◦ A b) = trace(b A ◦ A a) holds for all a, b ∈ A if and only if νλ ◦ γλ = γλ . The identity holds if A is symmetric. 2.2.7 (Frobenius extensions). Let φ : B → A be a k-algebra map and view A as a (B , B)-bimodule via φ (§1.2.2). The extension A/B is called a Frobenius extension if there exists a (B , B)-bimodule map E : A → B and elements (x i , yi )1n of A such that a = yi .E(x i a) = E(ayi ).x i for all a ∈ A. Thus, Frobenius algebras are the same as Frobenius extensions of k, with any Frobenius form playing the role of E. Prove: (a) For any Frobenius extension A/B, there is an isomorphism of functors CoindBA IndBA ; it is given by CoindBA W → IndBA W , f → yi ⊗ f (x i ) for W ∈ Rep B, with inverse given by a ⊗ w → (a → E(a a).w) . (b) Conversely, if CoindBA IndBA , then A/B is a Frobenius extension. 2.2.8 (Projectives and injectives). Let A be a Frobenius algebra. Use the isomorphism of functors CoindkA IndkA (Exercise 2.2.7) and Exercise 2.1.1(b),(c) to show that arbitrary A-modules are projective if and only if they are injective.
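Exercise 2.2.3 is small enough to check by machine. The following sketch (Python standing in for a hand computation; the basis ordering 1, x, y, xy and all helper names are illustrative) encodes the Sweedler algebra by its structure constants, verifies that the pairing ⟨λ, ab⟩ is nondegenerate (so (A, λ) is Frobenius), and confirms the Nakayama automorphism of part (b):

```python
# Basis order: 1, x, y, xy. An element of the Sweedler algebra is a list
# of 4 integer coefficients. Relations: x^2 = 0, y^2 = 1, xy = -yx.
# TABLE[i][j] is the product e_i * e_j as a coefficient vector.
TABLE = [
    [[1,0,0,0], [0,1,0,0], [0,0,1,0],  [0,0,0,1]],   # 1  * e_j
    [[0,1,0,0], [0,0,0,0], [0,0,0,1],  [0,0,0,0]],   # x  * e_j
    [[0,0,1,0], [0,0,0,-1],[1,0,0,0],  [0,-1,0,0]],  # y  * e_j
    [[0,0,0,1], [0,0,0,0], [0,1,0,0],  [0,0,0,0]],   # xy * e_j
]

def mul(a, b):
    c = [0, 0, 0, 0]
    for i in range(4):
        for j in range(4):
            if a[i] and b[j]:
                for k in range(4):
                    c[k] += a[i] * b[j] * TABLE[i][j][k]
    return c

def lam(a):          # the Frobenius form: <lambda, a> = coefficient of x
    return a[1]

def det(m):          # Laplace expansion, exact integer arithmetic
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

E = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
gram = [[lam(mul(E[i], E[j])) for j in range(4)] for i in range(4)]
print(det(gram))     # -1 (nonzero), so the pairing is nondegenerate

# Nakayama automorphism: nu(1) = 1, nu(x) = x, nu(y) = -y, nu(xy) = -xy
NU = [[1,0,0,0], [0,1,0,0], [0,0,-1,0], [0,0,0,-1]]
nu = lambda a: [sum(a[i] * NU[i][k] for i in range(4)) for k in range(4)]
# defining property of the Nakayama automorphism: <lam, ab> = <lam, b nu(a)>
assert all(lam(mul(E[i], E[j])) == lam(mul(E[j], nu(E[i])))
           for i in range(4) for j in range(4))
print("Nakayama check passed")
```

The Gram matrix also makes visible why A is not symmetric: it is not a symmetric matrix, so λ is not a trace form.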

Part II

Groups

Chapter 3

Groups and Group Algebras

The theory of group representations is the archetypal representation theory, alongside the corresponding theory for Lie algebras (Part III). A representation of a group G, by deﬁnition, is a group homomorphism G → GL(V ) , where V is a vector space over a ﬁeld k and GL(V ) = Autk (V ) denotes the group of all k-linear automorphisms of V . More precisely, such a representation is called a linear representation of G over k. Not being part of the deﬁning data of G, the base ﬁeld k can be chosen depending on the purpose at hand. The traditional choice, especially for representations of ﬁnite groups, is the ﬁeld C of complex numbers; such representations are called complex representations of G. One encounters a very diﬀerent ﬂavor of representation theory when the characteristic of k divides the order of G. Representations of this kind are referred to as modular representations of G. Our main focus in this chapter will be on nonmodular representations of ﬁnite groups. Throughout this chapter, k denotes a ﬁeld and G is a group, not necessarily ﬁnite and generally in multiplicative notation. All further hypotheses will be spelled out when they are needed.

3.1. Generalities

This section lays the foundations of the theory of group representations by placing it in the framework of representations of algebras; this is achieved by means of the group algebra of G over k.


3.1.1. Group Algebras
As a k-vector space, the group algebra of the group G over k is the k-vector space kG of all formal k-linear combinations of the elements of G (Example A.5). Thus, elements of kG have the form Σ_{x∈G} α_x x with unique α_x ∈ k that are almost all 0. The multiplication of k and the (multiplicative) group operation of G give rise to a multiplication for kG:

(3.1)  (Σ_{x∈G} α_x x)(Σ_{y∈G} β_y y) := Σ_{x,y∈G} α_x β_y xy = Σ_{z∈G} ( Σ_{x,y∈G, xy=z} α_x β_y ) z.

It is a routine matter to check that this yields an associative k-algebra structure on kG with unit map k → kG, λ ↦ λ1_G, where 1_G is the identity element of the group G. The group algebra kG will also be denoted by k[G] in cases where the group in question requires more complex notation.

Universal Property. Despite being simple and natural, it may not be immediately apparent why the above definition should be worthy of our attention. The principal reason is provided by the following "universal property" of group algebras. For any k-algebra A, let A× denote its group of units, that is, the group of invertible elements of A. Then there is a natural bijection

(3.2)  Hom_Algk(kG, A) ≅ Hom_Groups(G, A×).

The bijection is given by sending an algebra map f : kG → A to its restriction f|_G to the basis G ⊆ kG as in (A.4). Observe that each element of G is a unit of kG and that f|_G is indeed a group homomorphism G → A×. Conversely, if G → A× is any group homomorphism, then its unique k-linear extension from G to kG is in fact a k-algebra map.

Functoriality. Associating to a given k-algebra A its group of units, A×, is a "functorial" process: any algebra map A → B restricts to a group homomorphism A× → B×. The usual requirements on functors with respect to identities and composites are clearly satisfied; so we obtain a functor (·)× : Alg_k → Groups.

Similar things can be said for the rule that associates to a given group G its group algebra kG. Indeed, we have already observed above that G is a subgroup of the group of units (kG) × . Thus, if f : G → H is a group homomorphism, then the composite of f with the inclusion H → (kH)× is a group homomorphism G → (kH) × . By (3.2) there is a unique algebra homomorphism k f : kG → kH


such that the following diagram commutes:

        G ────→ kG
      f │         │ ∃! kf
        ↓         ↓
        H ────→ kH

It is straightforward to ascertain that k· respects identity maps and composites as is required for a functor, and hence we again have a functor k· : Groups → Alg_k. Finally, it is routine to verify that the bijection (3.2) is functorial in both G and A: the group algebra functor k· is left adjoint to the unit group functor (·)× in the sense of Section A.4.

3.1.2. First Examples and Some Variants
Having addressed the basic formal aspects of group algebras in general, let us now look at two explicit examples of group algebras and describe their algebra structure.

Example 3.1 (The group algebra of a lattice). Abelian groups isomorphic to some Z^n are often referred to as lattices. While it is desirable to keep the natural additive notation of Z^n, the group law of Z^n becomes multiplication in the group algebra k[Z^n]. In order to resolve this conflict, we will denote an element m ∈ Z^n by x^m when thinking of it as an element of the group algebra k[Z^n]. This results in the following rule, which governs the multiplication of k[Z^n]:

x^{m+m′} = x^m x^{m′}   (m, m′ ∈ Z^n).

Fixing a Z-basis (e_i)_1^n of Z^n and putting x_i = x^{e_i}, each x^m takes the form x^m = x_1^{m_1} x_2^{m_2} ⋯ x_n^{m_n} with unique m_i ∈ Z. Thus, x^m can be thought of as a monomial in the x_i and their inverses and the group algebra k[Z^n] is isomorphic to a Laurent polynomial algebra over k,

k[Z^n] ≅ k[x_1^{±1}, x_2^{±1}, . . . , x_n^{±1}].

The monomials x^m with m ∈ Z_+^n span a subalgebra of k[Z^n] that is isomorphic to the ordinary polynomial algebra k[x_1, x_2, . . . , x_n].

Example 3.2 (Group algebras of finite abelian groups). Now let G be finite abelian. Then, for suitable positive integers n_i,

(3.3)  G ≅ C_{n_1} × C_{n_2} × ⋯ × C_{n_t},

where C_n denotes the cyclic group of order n. Sending a fixed generator of C_n to the variable x gives an isomorphism of algebras kC_n ≅ k[x]/(x^n − 1). Moreover,


the isomorphism (3.3) yields an isomorphism kG ≅ kC_{n_1} ⊗ kC_{n_2} ⊗ ⋯ ⊗ kC_{n_t} in Alg_k (Exercise 3.1.2). Therefore,

kG ≅ ∏_{i=1}^t k[x]/(x^{n_i} − 1).

By field theory, this algebra is isomorphic to a direct product of fields if and only if char k does not divide any of the integers n_i or, equivalently, char k ∤ |G|. This is a very special case of Maschke's Theorem (§3.4.1).

Monoid Rings. The definition of the product in (3.1) makes sense with any ring R in place of k, resulting in the group ring RG of G over R. The case R = Z will play a role in Sections 8.5 and 10.3. We can also start with an arbitrary monoid Γ rather than a group and obtain the monoid algebra kΓ or the monoid ring RΓ in this way. Finally, (3.1) is also meaningful with possibly infinitely many α_x or β_y being nonzero, provided the monoid Γ satisfies the following condition:

(3.4)  {(x, y) ∈ Γ × Γ | xy = z} is finite for each z ∈ Γ.

In this case, (3.1) defines a multiplication on the R-module R^Γ of all functions Γ → R, not just on the submodule RΓ = R^{(Γ)} of all finitely supported functions. The resulting ring is called the total monoid ring of Γ over R.

Example 3.3 (Power series). Let Γ be the (additive) monoid Z_+^{(I)} of all finitely supported functions n : I → Z_+ for some set I, with pointwise addition: (n + m)(i) = n(i) + m(i). Then, as in Example 3.1, the resulting monoid ring RΓ is isomorphic to the polynomial ring R[x_i | i ∈ I]. Condition (3.4) is easily seen to be satisfied for Γ. The total monoid ring is R[[x_i | i ∈ I]], the ring of formal power series in the commuting variables x_i over R.

3.1.3. Representations of Groups and Group Algebras
Recall that a representation of the group G over k, by definition, is a group homomorphism G → GL(V) = End_k(V)× for some V ∈ Vect_k. The adjoint functor relation (3.2) gives a natural bijection, Hom_Algk(kG, End_k(V)) ≅ Hom_Groups(G, GL(V)). Thus, representations of G over k are in natural 1-1 correspondence with representations of the group algebra kG:

representations of kG  ≡  representations of G over k.

This observation makes the material of Chapter 1 available for the treatment of group representations. In particular, we may view the representations of G over k as a category that is equivalent to Rep kG (or kG Mod) and we may speak of homomorphisms and equivalence of group representations as well as of irreducibility, composition series, etc., by employing the corresponding notions from Chapter 1.
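The convolution product (3.1) is easy to prototype: an element Σ_{x∈G} α_x x of kG becomes a finite map from group elements to coefficients. A minimal sketch for G = S_3 (permutations encoded as tuples; all names are illustrative), checking associativity and that every group element is a unit of kG:

```python
from itertools import permutations

# Elements of kG are dicts {group element: coefficient}; here G = S_3.
G = list(permutations(range(3)))

def compose(s, t):                      # (s*t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(3))

def conv(a, b):                         # the convolution product (3.1)
    c = {}
    for x, ax in a.items():
        for y, by in b.items():
            z = compose(x, y)
            c[z] = c.get(z, 0) + ax * by
    return {z: v for z, v in c.items() if v}   # drop zero coefficients

e = tuple(range(3))                     # identity of S_3 = unit of kS_3
a = {G[1]: 2, G[2]: -1}
b = {G[0]: 1, G[3]: 5}
c = {G[4]: 3}
# associativity of the convolution product
assert conv(conv(a, b), c) == conv(a, conv(b, c))
# every group element is a unit of kG, with inverse the group inverse
for g in G:
    ginv = tuple(sorted(range(3), key=lambda i: g[i]))
    assert conv({g: 1}, {ginv: 1}) == {e: 1}
print("kS_3 convolution checks passed")
```

The same dictionary encoding works verbatim for any finite group, and for a monoid satisfying (3.4) after truncation.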


We will also continue to use the notation of Chapter 1, writing the group homomorphism G → GL(V ) as g → gV and putting g.v = gV (v) for v ∈ V .

Many interesting irreducible representations for specific groups G will be discussed in detail later on. For now, we just mention one that always exists, for every group, although it does not appear to be very exciting at first sight.¹ This is the so-called trivial representation, 1, which arises from the trivial group homomorphism G → GL(k) = k×:

(3.5)  1 : G ⟶ k×,  g ⟼ 1_k.

A Word on the Base Field. In the context of group representations, the base field k is often understood and is notationally suppressed. Thus, for example, Hom_kG(V, W) is frequently written as Hom_G(V, W) in the literature. Generally (except in Chapter 4), we will write kG, acknowledging the base field. We will however say that k is a splitting field for G rather than for kG. Recall that this means that D(S) = k for all S ∈ Irr_fin kG (§1.2.5). Thus, an algebraically closed field k is a splitting field for every group G by Schur's Lemma. Much less than algebraic closure suffices for finite groups; see Exercise 3.1.5 and also Corollary 4.16.

3.1.4. Changing the Group
We have seen that each group homomorphism H → G lifts uniquely to an algebra map kH → kG. Therefore, we have the restriction (or pullback) functor from kG-representations to kH-representations and, in the other direction, the induction and coinduction functors (§1.2.2). In the context of finite groups, which will be our main focus, we may concentrate on induction, because the induction and coinduction functors are isomorphic by Proposition 3.4 below. When H is a subgroup of G, all of the following notation is commonly used:

·↓_H = ·↓^G_H = Res^G_H = Res^{kG}_{kH} : Rep kG ⟶ Rep kH

and

·↑^G = ·↑^G_H = Ind^G_H = Ind^{kG}_{kH} : Rep kH ⟶ Rep kG.

In this chapter and the next, we will predominantly use the up and down arrows, as they are especially economical and suggestive.²

In the language of Exercise 2.2.7, part (b) of the following proposition states that if H is a finite-index subgroup of G, then kH → kG is a Frobenius extension. We let G/H denote the collection of all left cosets gH (g ∈ G) and also a transversal for these cosets.

¹Nonetheless, 1 turns out to have great significance. For example, if 1 is projective as a kG-module, then all kG-modules are projective; see the proof of Maschke's Theorem (§3.4.1).
²Coinduction is often denoted by ·⇑^G = ·⇑^G_H : Rep kH → Rep kG in the literature.


Proposition 3.4. Let H be a subgroup of G and let W ∈ Rep kH. Then:
(a) W↑G = ⊕_{g∈G/H} g.W′ for some subrepresentation W′ ⊆ W↑G↓_H with W′ ≅ W. In particular, dim_k W↑G = |G : H| dim_k W.
(b) If |G : H| < ∞, then there is a natural isomorphism Coind^{kG}_{kH} W ≅ Ind^{kG}_{kH} W in Rep kG.

Proof. (a) The crucial observation is that kG is free as a right kH-module: the partition G = ⊔_{g∈G/H} gH yields the decomposition kG = ⊕_{g∈G/H} g kH. By (B.5), it follows that the elements of W↑G = kG ⊗_{kH} W have the form

Σ_{g∈G/H} g ⊗ w_g

with unique w_g ∈ W. The map μ : W → W↑G↓_H, w ↦ 1 ⊗ w, is a morphism in Rep kH and μ is injective, since we may choose the transversal so that 1 ∈ G/H. Putting W′ = Im μ, we have W′ ≅ W and g.W′ = {g ⊗ w | w ∈ W}. Thus, the above form of elements of W↑G also implies the remaining assertions of (a).
(b) Consider the following projection of kG onto kH:

(3.6)  π_H : kG ⟶ kH,  Σ_{x∈G} α_x x ⟼ Σ_{x∈H} α_x x.

Thus, π_H is the identity on kH and it is easy to see that π_H is a (kH, kH)-bimodule map.³ Moreover, the following identity holds for every a ∈ kG:

(3.7)  a = Σ_{g∈G/H} π_H(ag) g^{−1} = Σ_{g∈G/H} g π_H(g^{−1}a).

By linearity, it suffices to check the equalities for a ∈ G, in which case they are immediate. The map π_H leads to the following map in Rep kH:

φ : W ⥲ Coind^{kH}_{kH} W ⟶ (Coind^{kG}_{kH} W)↓_H ,  w ⟼ (b ↦ b.w) ⟼ (a ↦ π_H(a).w),

where the second arrow is π_H^*. By Proposition 1.9(a), φ gives rise to the map Φ : Ind^{kG}_{kH} W → Coind^{kG}_{kH} W in Rep kG, with Φ(a ⊗ w) = a.φ(w) for a ∈ kG and w ∈ W. In the opposite direction, define Ψ : Coind^{kG}_{kH} W → Ind^{kG}_{kH} W by Ψ(f) = Σ_{g∈G/H} g ⊗ f(g^{−1}). Using (3.7), one verifies without difficulty that Φ and Ψ are inverse to each other. □

³In the terminology of Exercise 1.2.6, the map π_H is a Reynolds operator.
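Proposition 3.4(a) can be made concrete for G = S_3 and H = A_3, with W a nontrivial 1-dimensional representation of H over C: the induced representation has dimension |G : H| = 2, and its character turns out to be the 2-dimensional irreducible character of S_3. A sketch (illustrative names only; matrices are built on the basis {t ⊗ 1 | t a transversal} from the proof):

```python
import cmath
from itertools import permutations

G = list(permutations(range(3)))
e = (0, 1, 2)
def mul(s, t): return tuple(s[t[i]] for i in range(3))
def inv(s): return tuple(sorted(range(3), key=lambda i: s[i]))

w = cmath.exp(2j * cmath.pi / 3)             # primitive cube root of 1
c = (1, 2, 0)                                # a 3-cycle generating A_3
phi = {e: 1, c: w, mul(c, c): w * w}         # a 1-dim rep of H = A_3

T = [e, (1, 0, 2)]                           # transversal of G/H
def ind(g):                                  # matrix of g on W↑G
    M = [[0, 0], [0, 0]]
    for i, ti in enumerate(T):
        for j, tj in enumerate(T):
            h = mul(inv(ti), mul(g, tj))     # g t_j = t_i h with h in H?
            if h in phi:
                M[i][j] = phi[h]
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# ind is a homomorphism, so W↑G really is a representation of S_3
for g1 in G:
    for g2 in G:
        M, N = ind(mul(g1, g2)), matmul(ind(g1), ind(g2))
        assert all(abs(M[i][j] - N[i][j]) < 1e-9
                   for i in range(2) for j in range(2))

trace = lambda M: M[0][0] + M[1][1]
assert abs(trace(ind(e)) - 2) < 1e-9          # dim = |G:H| * dim W = 2
assert abs(trace(ind(c)) - (-1)) < 1e-9       # chi(3-cycle) = w + w^2 = -1
assert abs(trace(ind((1, 0, 2)))) < 1e-9      # chi(transposition) = 0
print("induction from A_3 to S_3 verified")
```

The character values 2, −1, 0 also match the formula of Exercise 3.1.3(c).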


Frobenius Reciprocity. Let H be a subgroup of G. Then Proposition 1.9 states that, for any W ∈ Rep kH and V ∈ Rep kG, we have a natural isomorphism

(3.8)  Hom_kG(W↑G, V) ≅ Hom_kH(W, V↓_H)

in Vect_k. When [G : H] < ∞, Propositions 1.9 and 3.4 together give

(3.9)  Hom_kG(V, W↑G) ≅ Hom_kH(V↓_H, W).

Explicitly, (3.9) matches f ∈ Hom_kH(V↓_H, W) with Σ_{g∈G/H} g ⊗ f(g^{−1} · ) ∈ Hom_kG(V, W↑G). Both (3.8) and (3.9) are referred to as Frobenius reciprocity isomorphisms.⁴

Dimension Bounds. We conclude our first foray into the categorical aspects of group representations by giving some down-to-earth applications to the dimensions of irreducible representations. The argument in the proof of part (b) below can be used to similar effect in the more general context of cofinite subalgebras (Exercise 1.2.7).

Corollary 3.5. Let H be a subgroup of G. Then:
(a) Every W ∈ Irr kH is a subrepresentation of V↓_H for some V ∈ Irr kG. Thus, any upper bound for the dimensions of irreducible representations of kG is also an upper bound for the dimensions of irreducible representations of kH.
(b) Assume that |G : H| < ∞. Then every V ∈ Irr kG is a subrepresentation of W↑G for some W ∈ Irr kH. Consequently, if the dimensions of irreducible representations of kH are bounded above by d, then the irreducible representations of kG have dimension at most |G : H| d.

Proof. (a) We know by Proposition 3.4(a) that W↑G is a cyclic kG-module, because W is cyclic. Hence there is an epimorphism W↑G ↠ V for some V ∈ Irr kG (Exercise 1.1.3). By (3.8), this epimorphism corresponds to a nonzero map of kH-representations W → V↓_H, which must be injective by irreducibility of W. This proves the first assertion of (a). The statement about dimension bounds is clear.
(b) The restriction V↓_H is finitely generated as a kH-module, because V is cyclic and kG is finitely generated as a left kH-module by virtue of our hypothesis on |G : H|. Therefore, there is an epimorphism V↓_H ↠ W for some W ∈ Irr kH, and this corresponds to a nonzero map V → W↑G by (3.9). By irreducibility of V, the latter map must be injective, proving the first assertion of (b). The dimension bound is now a consequence of the dimension formula in Proposition 3.4(a). □

⁴(3.8) and (3.9) are also known as the Nakayama relations.


3.1.5. Characters of Finite-Dimensional Group Representations
For any V ∈ Rep_fin kG, we have the associated character

χ_V ∈ (kG)*_trace ⊆ (kG)*.

Here, (kG)* is the space of linear forms on kG and (kG)*_trace ≅ (kG/[kG, kG])* is the subspace of all trace forms as in (1.54). Linear forms on kG can be identified with functions G → k by (A.4). In particular, each character χ_V and, more generally, each trace form on kG can be thought of as a k-valued function on G. Below, we shall describe the kind of functions arising in this manner.

Relations with Conjugacy Classes. The group G acts on itself by conjugation:

(3.10)  G × G ⟶ G,  (x, y) ⟼ ˣy := xyx^{−1}.

The orbits in G under this action are called the conjugacy classes of G; the conjugacy class of x ∈ G will be denoted by ᴳx. Using bilinearity of the Lie commutator [ · , · ], one computes

[kG, kG] = Σ_{x,y∈G} k(xy − yx) = Σ_{x,y∈G} k(xy − ʸ(xy)) = Σ_{x,y∈G} k(x − ʸx).

Thus, a linear form φ ∈ (kG)* is a trace form if and only if φ(x) = φ(ʸx) for all x, y ∈ G, that is, φ is constant on all conjugacy classes of G. Functions G → k that are constant on conjugacy classes are called (k-valued) class functions; so we will think of characters χ_V as class functions on G. To summarize, we have the following commutative diagram in Vect_k:

(3.11)
        (kG)*        ⥲   {functions G → k}
          ∪                     ∪
      (kG)*_trace    ⥲   cf_k(G) := {class functions G → k}

Proposition 3.6. # Irr_fin kG ≤ # conjugacy classes of G.

Proof. We have an obvious isomorphism cf_k(G) ≅ k^C in Vect_k, where C = C(G) denotes the set of all conjugacy classes of G and k^C is the vector space of all functions C → k. For the proposition, we may assume that C is finite; so dim_k cf_k(G) = #C. Since # Irr_fin kG ≤ dim_k C(kG) by Theorem 1.44 and C(kG) is a subspace of (kG)*_trace ≅ cf_k(G), the bound for # Irr_fin kG follows. □
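Both ingredients of this proof can be checked by brute force for a small group, say G = S_4: the Lie commutators xy − yx span a subspace of kG whose codimension is # C(G), so dim (kG)*_trace = # conjugacy classes. A sketch in exact rational arithmetic (all names are illustrative):

```python
from itertools import permutations
from fractions import Fraction

n = 4
G = list(permutations(range(n)))
idx = {g: i for i, g in enumerate(G)}
def mul(s, t): return tuple(s[t[i]] for i in range(n))
def inv(s): return tuple(sorted(range(n), key=lambda i: s[i]))

# conjugacy classes of S_4, by brute-force orbit computation
classes, seen = [], set()
for x in G:
    if x not in seen:
        orbit = {mul(mul(g, x), inv(g)) for g in G}
        seen |= orbit
        classes.append(orbit)

def rank(rows):                      # Gaussian elimination over Q
    rows = [list(map(Fraction, r)) for r in rows]
    r = 0
    for col in range(len(G)):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# the Lie commutators xy - yx, as coordinate vectors in kG
comms = []
for x in G:
    for y in G:
        v = [0] * len(G)
        v[idx[mul(x, y)]] += 1
        v[idx[mul(y, x)]] -= 1
        if any(v):
            comms.append(v)

# dim (kG)*_trace = dim kG - dim [kG, kG] = # conjugacy classes
assert len(G) - rank(comms) == len(classes)
print(len(classes))  # 5, one class per cycle type / partition of 4
```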


We will apply the proposition to finite groups only, but we remark that there are infinite groups with finitely many conjugacy classes; in fact, every torsion-free group embeds into a group with exactly two conjugacy classes [183, Exercise 11.78]. If the set C(G) of conjugacy classes of G is finite, then the foregoing allows us to identify the space of class functions cf_k(G) ≅ k^{C(G)} with its dual, Tr kG = kG/[kG, kG], by means of the following isomorphism:

(3.12)  Tr kG ⥲ cf_k(G),  Σ_{x∈G} α_x x + [kG, kG] ⟼ ( C ↦ Σ_{x∈C} α_x ,  C ∈ C(G) ).

Character Tables. Important representation-theoretic information for a given finite group G is recorded in the character table of G over k. By definition, this is the matrix whose (i, j)-entry is χ_i(g_j), where {χ_i} are the irreducible characters of kG in some order, traditionally starting with the trivial character, χ_1 = 1, and {g_j} is a set of representatives for the conjugacy classes of G, generally with g_1 = 1. Thus, the first column of the character table gives the dimensions, viewed in k, of the various irreducible representations of G over k. Usually, the sizes of the conjugacy classes of G are also indicated in the character table and other information may be included as well.

    classes         1                ⋯    g_j        ⋯
    sizes           1                ⋯   |ᴳg_j|      ⋯
    1               1_k              ⋯    1_k        ⋯
    ⋮               ⋮                     ⋮
    χ_i = χ_{S_i}   (dim_k S_i)1_k   ⋯    χ_i(g_j)   ⋯
    ⋮               ⋮                     ⋮

Conjugacy Classes of p-Regular Elements. The only substantial result on modular group representations that we shall offer is the theorem below, which is due to Brauer [32]. To state it, let p be a prime number. An element g ∈ G is called p-regular if its order is not divisible by p. Since conjugate elements have the same order, we may also speak of p-regular conjugacy classes of G.

Theorem 3.7. Let G be a finite group and assume that char k = p > 0. Then

# Irr kG ≤ #{p-regular conjugacy classes of G}.

Equality holds if k is a splitting field for G.

Proof. By Proposition 1.46 and Lemma 1.48, it suffices to consider the case where k is a splitting field. Each g ∈ G can be uniquely written as g = g_p g_{p′} = g_{p′} g_p,


where g_{p′} ∈ G is p-regular and the order of g_p is a power of p.⁵ We will prove that the isomorphism (kG)*_trace ≅ cf_k(G) of (3.11) restricts to an isomorphism

(3.13)  C(kG) ≅ {f ∈ cf_k(G) | f(g) = f(g_{p′}) for all g ∈ G},

and show that dim_k C(kG) is equal to the number of p-regular conjugacy classes of G. In view of Theorem 1.44, this will imply the statement about # Irr kG.
Recall that the space [kG, kG] of Lie commutators is stable under the p-th power map and that (a + b)^p ≡ a^p + b^p mod [kG, kG] for all a, b ∈ kG (Lemma 1.42). By Proposition 1.43, C(kG) consists of exactly those linear forms on kG that vanish on the subspace T = {a ∈ kG | a^q ∈ [kG, kG] for some q = p^n, n ∈ Z_+}. Write a given g ∈ G as g = g_p g_{p′} as above, with g_p^q = 1 for q = p^n. Then (g − g_{p′})^q = g^q − g_{p′}^q = 0, since g and g_{p′} commute; so g ≡ g_{p′} mod T. Thus, fixing representatives x_i for the p-regular conjugacy classes of G, the family (x_i + T)_i spans kG/T and it suffices to prove linear independence. So assume that Σ_i λ_i x_i ∈ T with λ_i ∈ k. Then (Σ_i λ_i x_i)^q ∈ [kG, kG] for all sufficiently large q = p^n. Writing |G| = p^t m with (p, m) = 1 and choosing a large n so that q ≡ 1 mod m, we also have x_i^q = x_i for all i. Since the p-th power map yields an additive endomorphism of kG/[kG, kG], we obtain

0 ≡ (Σ_i λ_i x_i)^q ≡ Σ_i λ_i^q x_i^q = Σ_i λ_i^q x_i  mod [kG, kG].

Finally, nonconjugate elements of G are linearly independent modulo [kG, kG] by (3.12), whence λ_i^q = 0 for all i and so λ_i = 0 as desired. □

The result remains true for char k = 0 with the understanding that all conjugacy classes of a finite group G are 0-regular. Indeed, by Proposition 3.6, we already know that # Irr kG is bounded above by the number of conjugacy classes of G, and we shall see in Corollary 3.21 that equality holds if k is a splitting field for G.

3.1.6. Finite Group Algebras as Symmetric Algebras
We mention here in passing that group algebras of finite groups are symmetric algebras. As we shall see later (§3.6.1), certain properties of finite group algebras are in fact most conveniently derived in this more general ring-theoretic setting. In detail, the map π_1 in (3.6) is a trace form for any group algebra kG, even if G is infinite. Denoting π_1 by λ as in Section 2.2 and using ⟨ · , · ⟩ for evaluation of linear forms, we have

(3.14)  ⟨λ, Σ_{x∈G} α_x x⟩ = α_1.

⁵If g has order p^n m with p ∤ m, then write 1 = p^n a + mb with a, b ∈ Z and put g_{p′} = g^{p^n a} and g_p = g^{mb}. The factor g_{p′} is often called the p-regular part of g.
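The decomposition g = g_p g_{p′} of footnote 5 is elementary to compute. The sketch below (illustrative code; p = 2 and g a permutation of order 6) finds an exponent with p^n a ≡ 1 mod m and extracts the p-regular part as a power of g:

```python
def mul(s, t): return tuple(s[t[i]] for i in range(len(s)))

def order(g):
    e = tuple(range(len(g)))
    h, k = g, 1
    while h != e:
        h, k = mul(h, g), k + 1
    return k

def power(g, k):
    r = tuple(range(len(g)))
    for _ in range(k % order(g)):
        r = mul(r, g)
    return r

# g = (0 1)(2 3 4) in S_5 has order 6 = 2 * 3; take p = 2.
g = (1, 0, 3, 4, 2)
p, N = 2, order(g)                 # N = p^n * m with p not dividing m
pn = 1
while N % (pn * p) == 0:
    pn *= p
m = N // pn
# find a with p^n * a = 1 mod m; then m*b = 1 - p^n*a automatically
a = next(a for a in range(m) if (pn * a) % m == 1 % m)
g_preg = power(g, pn * a)          # g_{p'}: the p-regular part, order m
g_p = power(g, 1 - pn * a)         # g_p: order a power of p
assert order(g_preg) % p != 0 and mul(g_p, g_preg) == g
print(order(g_preg), order(g_p))   # 3 2
```

Since g_p and g_{p′} are both powers of g, they commute, as required in the proof of Theorem 3.7.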


If 0 ≠ a = Σ_{x∈G} α_x x ∈ kG, then ⟨λ, ax^{−1}⟩ = α_x ≠ 0 for some x ∈ G. Thus, if the group G is finite, then λ is a Frobenius form for kG. Since ⟨λ, xy⟩ = δ_{x,y^{−1}} for x, y ∈ G, the Casimir element and Casimir trace are given by

(3.15)  c_λ = Σ_{g∈G} g ⊗ g^{−1}  and  γ_λ(a) = Σ_{g∈G} g a g^{−1}.

In particular, γ_λ(1) = |G| 1. Thus, if char k ∤ |G|, then 1 = γ_λ(|G|^{−1} 1) belongs to the Higman ideal Γ(kG) and so kG is semisimple by Theorem 2.21. The converse is also true and is in fact easier. Later in this chapter, we will give a direct proof of both directions that is independent of the material on symmetric algebras; see Maschke's Theorem (§3.4.1).
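The formulas (3.15) are easy to test numerically for a small group such as S_3: the Casimir trace γ_λ(a) = Σ_g gag^{−1} lands in the center of the group algebra, and γ_λ(1) = |G|·1. A sketch using the dict encoding of kG (names are illustrative):

```python
from itertools import permutations

G = list(permutations(range(3)))
e = (0, 1, 2)
def mul(s, t): return tuple(s[t[i]] for i in range(3))
def inv(s): return tuple(sorted(range(3), key=lambda i: s[i]))

def conv(a, b):                      # product in kS_3
    c = {}
    for x, ax in a.items():
        for y, by in b.items():
            z = mul(x, y)
            c[z] = c.get(z, 0) + ax * by
    return {z: v for z, v in c.items() if v}

def gamma(a):                        # Casimir trace: sum_g g a g^{-1}
    out = {}
    for g in G:
        for x, ax in a.items():
            z = mul(mul(g, x), inv(g))
            out[z] = out.get(z, 0) + ax
    return {z: v for z, v in out.items() if v}

a = {G[1]: 1, G[5]: 7}
# gamma(a) lies in the center of kS_3 ...
for g in G:
    assert conv({g: 1}, gamma(a)) == conv(gamma(a), {g: 1})
# ... and gamma(1) = |G|*1, so kS_3 is semisimple when char k does not divide 6
assert gamma({e: 1}) == {e: 6}
print("Casimir trace checks passed")
```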

Exercises for Section 3.1

3.1.1 (Representations are functors). For a given group G, let G denote the category with one object, ∗, and with Hom_G(∗, ∗) = G. The binary operation of G is the composition in Hom_G(∗, ∗) and the identity element of G is the identity morphism 1_∗ : ∗ → ∗. Show:
(a) Any V ∈ Rep kG gives a functor F_V : G → Vect_k and conversely.
(b) A map f : V → W in Rep kG amounts to a natural transformation φ : F_V ⇒ F_W; and f is an isomorphism in Rep kG if and only if φ is an isomorphism of functors (§A.3.2).

3.1.2 (Some group algebra isomorphisms). Establish the following isomorphisms in Alg_k, for arbitrary groups G and H. Here, ⊗ = ⊗_k as usual.
(a) k[G × H] ≅ kG ⊗ kH.
(b) k[G^op] ≅ (kG)^op. Here G^op is the opposite group: G^op = G as sets, but with new group operation ∗ given by x ∗ y = yx.
(c) K ⊗ kG ≅ KG for any field extension K/k. More generally, K can be any k-algebra here.

3.1.3 (Induced representations). Let H be a subgroup of G.
(a) Show that kG is free as a left (and right) module over kH by multiplication: any set of right (resp., left) coset representatives for H in G provides a basis.
(b) Conclude from (a) and Exercise 1.2.1 that, for any W in Rep kH, we have Ker(W↑G) = {a ∈ kG | a kG ⊆ kG Ker W}, the largest ideal of kG that is contained in the left ideal kG Ker W.
(c) Let W be a finite-dimensional representation of kH. Use Proposition 3.4 to show that the character of the induced representation W↑G is given by

χ_{W↑G}(x) = Σ_{g∈G/H, g^{−1}xg∈H} χ_W(g^{−1}xg).


(d) Use Proposition 3.4 to show that the induction functor ↑G is exact: for any short exact sequence 0 → W′ → W → W″ → 0 in Rep kH, the sequence 0 → W′↑G → W↑G → W″↑G → 0 is exact in Rep kG.

3.1.4 (An irreducibility criterion). Let H be a subgroup of G. Assume that V ∈ Rep kG is such that V↓_H = ⊕_{i∈I} W_i for pairwise nonisomorphic W_i ∈ Irr kH such that kG.W_i = V for all i. Show that V is irreducible.

3.1.5 (Splitting fields in positive characteristic). Let G be finite and let e = exp G denote its exponent, that is, the least common multiple of the orders of all elements of G. Let k be a field with char k = p > 0 and assume that k contains all e-th roots of unity in some fixed algebraic closure of k. Use Exercise 1.5.8 to show that k is a splitting field for G.⁶

3.1.6 (Hattori's Lemma). Let G be finite and let P ∈ kG proj_fin. Using the isomorphism (3.12), we may view the Hattori-Stallings rank (2.11) as a function rank : K_0(kG) → Tr(kG) ≅ cf_k(G). Use Proposition 2.9 to establish the formula χ_P(g) = |C_G(g)| rank(P)(g^{−1}) for g ∈ G.

3.2. First Examples

Thus far, we have only mentioned a single example of a group representation: the trivial representation 1 in (3.5). This section adds to our cast of characters, and we will also determine some explicit character tables.

3.2.1. Finite Abelian Groups
Let G be finite abelian. Then the group algebra kG is a finite-dimensional commutative algebra and (1.37) gives the following bijection:

MaxSpec kG ⥲ Irr kG,  P ⟼ kG/P.

The Schur division algebra of the irreducible representation S = kG/P is given by D(S) = End_kG(kG/P) ≅ kG/P. Let e = exp G denote the exponent of G, that is, the smallest positive integer e such that x^e = 1 for all x ∈ G. Fix an algebraic closure k̄ of k and put

μ_e = {ζ ∈ k̄ | ζ^e = 1},  K = k(μ_e) ⊆ k̄,  and  Γ = Gal(K/k).

Consider the group algebra KG. Every Q ∈ MaxSpec KG satisfies KG/Q ≅ K, because the images of all elements of G in the field KG/Q are e-th roots of unity,

⁶The fact stated in this exercise is also true in characteristic 0 by another result of Brauer. For a proof, see [109, Theorem 10.3] for example. (3.16) is an easy special case.


and hence they all belong to K. Thus,

(3.16)  K is a splitting field for G.

For each P ∈ MaxSpec kG, the field kG/P embeds into K; so there is a k-algebra map f : kG → K with Ker f = P. Conversely, if f ∈ H := Hom_Algk(kG, K), then Ker f ∈ MaxSpec kG. Consider the action of Γ on H that is given by γ.f = γ_*(f) = γ ∘ f for γ ∈ Γ and f ∈ H. By general facts about Galois actions (Exercise 1.4.7), Ker f = Ker f′ for f, f′ ∈ H if and only if f and f′ belong to the same Γ-orbit. Thus, we have a bijection

MaxSpec kG ⥲ Γ\Hom_Algk(kG, K).

By (3.2), we may identify Hom_Algk(kG, K) with Hom_Groups(G, K×). Pointwise multiplication equips Hom_Groups(G, K×) with the structure of an abelian group: (φψ)(x) = φ(x)ψ(x) for x ∈ G. The identity element of Hom_Groups(G, K×) is the trivial homomorphism sending all x ↦ 1. In order to further describe this group, put p = char k (≥ 0) and let G_p denote the Sylow p-subgroup of G and G_{p′} the subgroup consisting of the p-regular elements of G, with the understanding that G_0 = 1 and G_{0′} = G. Then G = G_p × G_{p′} and restriction gives a group isomorphism Hom_Groups(G, K×) ⥲ Hom_Groups(G_{p′}, K×), because all homomorphisms G → K× are trivial on G_p. Finally, it is not hard to show that Hom_Groups(G_{p′}, K×) ≅ G_{p′} as groups (noncanonically). Example 3.9 below does this for cyclic groups and you are asked to generalize the example in Exercise 3.2.1 using (3.3). To summarize, Hom_Groups(G, K×) ≅ G_{p′} as groups and we have a bijection

(3.17)  Irr kG ⥲ Γ\Hom_Groups(G, K×).

Under the foregoing bijection, the (singleton) orbit of the identity element of Hom_Groups(G, K×) corresponds to the trivial representation, 1. For future reference, let us record the following fact, which is of course a very special case of Theorem 3.7 if p > 0, but which follows directly from the foregoing in any characteristic.

Proposition 3.8 (Notation as above). Assume that K = k. Then # Irr kG = |G_{p′}|.

Example 3.9 (C_n over Q). Taking k = Q and G = C_n, the cyclic group of order n, we obtain K = Q(ζ_n) with ζ_n = e^{2πi/n} ∈ C, and Γ = (Z/nZ)×. The group Hom_Groups(G, K×) is isomorphic to G: fixing a generator x for G, the group homomorphism φ that is determined by φ(x) = ζ_n will serve as a generator for Hom_Groups(G, K×). Explicitly, Hom_Groups(G, K×) consists of the maps φ_k : x ↦ ζ_n^k with 0 ≤ k ≤ n − 1. Moreover, φ_k and φ_l belong to the same Γ-orbit if and only if the roots of unity ζ_n^k and ζ_n^l have the same order. Thus,

# Irr QCn = # divisors of n .


This could also have been obtained from the Chinese Remainder Theorem together with the familiar decomposition x^n − 1 = ∏_{m|n} Φ_m, where Φ_m is the m-th cyclotomic polynomial (which is irreducible over Q):

QC_n ≅ Q[x]/(x^n − 1) ≅ ∏_{m|n} Q[x]/(Φ_m).
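The orbit count in Example 3.9 can be verified directly: identifying Hom(C_n, K×) with Z/nZ, the Galois group (Z/nZ)× acts by multiplication, and the number of orbits equals the number of divisors of n. A quick sketch (function names are illustrative):

```python
from math import gcd

def num_irr_QCn(n):
    # Gamma = (Z/nZ)^x acts on the characters k in Z/nZ by k -> u*k;
    # count the orbits
    units = [u for u in range(1, n + 1) if gcd(u, n) == 1]
    seen, orbits = set(), 0
    for k in range(n):
        if k not in seen:
            orbits += 1
            seen |= {(u * k) % n for u in units}
    return orbits

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

for n in (1, 2, 6, 12, 30):
    assert num_irr_QCn(n) == num_divisors(n)
print([num_irr_QCn(n) for n in (6, 12)])  # [4, 6]
```

The orbit of k consists exactly of the residues with the same gcd with n, one orbit per divisor, matching the Chinese Remainder decomposition above (one factor per cyclotomic polynomial Φ_m, m | n).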

3.2.2. One-dimensional Representations
Recall from (1.36) that, for any A ∈ Alg_k, the equivalence classes of 1-dimensional representations form a subset of Irr A that is in natural one-to-one correspondence with Hom_Algk(A, k). Since Hom_Algk(kG, k) ≅ Hom_Groups(G, k×) by (3.2), the bijection takes the following form for A = kG, where k_φ = k with G-action g.λ = φ(g)λ for g ∈ G, λ ∈ k as in (1.36):

Hom_Groups(G, k×) ⥲ {equivalence classes of 1-dimensional representations of kG} ⊆ Irr kG,  φ ⟼ k_φ.

Note that k_φ has character χ_{k_φ} = φ. Occasionally, we will simply write φ for k_φ. Recall from §3.2.1 that the group structure of k× makes Hom_Groups(G, k×) an abelian group with identity element 1. The canonical bijection Hom_Groups(G, k×) ≅ Hom_Groups(G^{ab}, k×), where G^{ab} = G/[G, G] denotes the abelianization of G (Example A.4), is in fact an isomorphism of groups. Proposition 3.8 has the following

Corollary 3.10. Assume that G^{ab} is finite with exponent e. If char k ∤ e and k contains all e-th roots of unity (in some algebraic closure of k), then the number of nonequivalent 1-dimensional representations of G is equal to |G^{ab}|.

For representations of finite p-groups in characteristic p > 0, we have the following important fact, which is once again immediate from Theorem 3.7. However, we give a simple direct argument below.

Proposition 3.11. If char k = p > 0 and G is a finite p-group, then 1 is the only irreducible representation of kG up to equivalence.

Proof. The case G = 1 being obvious, assume that G ≠ 1 and let S ∈ Irr kG. Our hypotheses on k and G imply that S is finite dimensional and that 1_k is the only eigenvalue of g_S for all g ∈ G. Choosing 1 ≠ g ∈ Z G, the eigenspace of g_S is a nonzero subrepresentation of S, and hence it must be equal to S. Thus, S is a representation of k[G/⟨g⟩], clearly irreducible, and so S ≅ 1 by induction on the order of G. □
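Corollary 3.10 predicts |G^{ab}| one-dimensional representations when k has enough roots of unity. For G = S_3 this count can be found by computing the commutator subgroup by brute force (a sketch; all names are illustrative):

```python
from itertools import permutations

def commutator_subgroup(G, mul, inv):
    # start from all commutators x y x^{-1} y^{-1} and close under products
    H = {mul(mul(x, y), mul(inv(x), inv(y))) for x in G for y in G}
    while True:
        new = {mul(a, b) for a in H for b in H} - H
        if not new:
            return H
        H |= new

n = 3
G = list(permutations(range(n)))
mul = lambda s, t: tuple(s[t[i]] for i in range(n))
inv = lambda s: tuple(sorted(range(n), key=lambda i: s[i]))
D = commutator_subgroup(G, mul, inv)
# |G^{ab}| = |G| / |[G, G]| counts the 1-dimensional representations
print(len(G) // len(D))  # 2: the trivial and the sign representation
```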


3.2.3. The Dihedral Group D_4
The dihedral group D_n is given by the presentation

(3.18)  D_n := ⟨x, y | x^n = 1 = y², xy = yx^{n−1}⟩.

Geometrically, D_n can be described as the symmetry group of the regular n-gon in R². The order of D_n is 2n and D_n has the structure of a semidirect product: D_n = ⟨x⟩ ⋊ ⟨y⟩ ≅ C_n ⋊ C_2. Since x² = xyx^{−1}y^{−1} ∈ [D_n, D_n], it is easy to see that D_n^{ab} ≅ C_2 for odd n and D_n^{ab} ≅ C_2 × C_2 for even n.

Let us now focus on D_4 and work over a base field k with char k ≠ 2. Since D_4^{ab} ≅ C_2 × C_2, we know by Corollary 3.10 that D_4 has four 1-dimensional representations: they are given by φ_{±,±} : x ↦ ±1, y ↦ ±1; so φ_{+,+} = 1. Another representation arises from the realization of D_4 as the symmetry group of the square: x acts as the counterclockwise rotation by π/2 and y as the reflection across the vertical axis of symmetry. [Figure: the square in R² with basis vectors v_1, v_2.] With respect to the indicated basis v_1, v_2, the matrix of x is (0, −1; 1, 0) and y has matrix (−1, 0; 0, 1). These matrices make sense in Mat_2(k) and they satisfy the defining relations (3.18) of D_4; hence, they yield a representation of kD_4. Let us call this representation S. Since the matrices (0, −1; 1, 0) and (−1, 0; 0, 1) have no common eigenvector, S is irreducible. Furthermore, D(S) = k: only the scalars in Mat_2(k) commute with both matrices. It is easy to check that the two matrices generate the algebra Mat_2(k); so kD_4/Ker S ≅ Mat_2(k); this also follows from Burnside's Theorem (§1.4.6). To summarize, we have constructed five nonequivalent irreducible representations of kD_4. Since D_4 has five conjugacy classes, represented by 1, x², x, y, and xy, the list is complete by Proposition 3.6:

Irr kD_4 = {1, φ_{+,−}, φ_{−,+}, φ_{−,−}, S}.

Alternatively, the kernels are distinct maximal ideals of kD_4, with factors isomorphic to k for all φ_{±,±} and to Mat_2(k) for S. Therefore, the Chinese Remainder Theorem yields a surjective map of k-algebras, kD_4 ↠ k × k × k × k × Mat_2(k), which must be an isomorphism for dimension reasons. Thus,

kD_4 ≅ k × k × k × k × Mat_2(k).
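The defining matrices of S and the χ_S row of Table 3.1 can be verified mechanically; the following sketch works over Z (helper names are illustrative):

```python
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def mpow(A, k):
    R = ((1, 0), (0, 1))
    for _ in range(k):
        R = matmul(R, A)
    return R

X = ((0, -1), (1, 0))          # counterclockwise rotation by pi/2
Y = ((-1, 0), (0, 1))          # reflection across the vertical axis
I = ((1, 0), (0, 1))

# the defining relations (3.18) of D_4: x^4 = 1 = y^2, xy = yx^3
assert mpow(X, 4) == I and matmul(Y, Y) == I
assert matmul(X, Y) == matmul(Y, mpow(X, 3))

trace = lambda A: A[0][0] + A[1][1]
# character values on the class representatives 1, x^2, x, y, xy
print([trace(A) for A in (I, mpow(X, 2), X, Y, matmul(X, Y))])  # [2, -2, 0, 0, 0]
```

The printed values are the χ_S row of Table 3.1, read in k.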

In particular, kD4 is split semisimple. We record the character table of kD4 (Table 3.1); all values in this table have to be interpreted in k.

3.2.4. Some Representations of the Symmetric Group Sn. Let Sn denote the group of all permutations of the set [n] = {1, 2, . . . , n} and assume that n ≥ 2. Then Sn^ab = Sn/An ≅ C2, where An is the alternating group consisting of the even permutations in Sn. Thus, besides the trivial representation


3. Groups and Group Algebras

Table 3.1. Character table of D4 (char k ≠ 2)

classes |  1   x²   x    y    xy
sizes   |  1   1    2    2    2
--------+------------------------
1       |  1   1    1    1    1
φ−,+    |  1   1   −1    1   −1
φ+,−    |  1   1    1   −1   −1
φ−,−    |  1   1   −1   −1    1
χ_S     |  2  −2    0    0    0
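Anticipating the bilinear form (3.37) of §3.4.2, the rows of Table 3.1 can be checked to be orthonormal over Q. The computation below is an illustration, not from the text; it uses the fact that every element of D4 is conjugate to its inverse, so the form can be evaluated classwise with class-size weights.

```python
from fractions import Fraction

sizes = [1, 1, 2, 2, 2]               # classes 1, x^2, x, y, xy of D4
table = {
    '1':     [1,  1,  1,  1,  1],
    'phi-+': [1,  1, -1,  1, -1],
    'phi+-': [1,  1,  1, -1, -1],
    'phi--': [1,  1, -1, -1,  1],
    'chi_S': [2, -2,  0,  0,  0],
}

def form(chi, psi):
    """<chi, psi> = (1/|G|) sum over classes of size * chi * psi, |G| = 8."""
    return sum(Fraction(n * a * b, 8) for n, a, b in zip(sizes, chi, psi))

for s, chi in table.items():
    for t, psi in table.items():
        assert form(chi, psi) == (1 if s == t else 0)

# Corollary 3.21(b): |G| = sum of squares of the dimensions 1,1,1,1,2.
assert sum(chi[0] ** 2 for chi in table.values()) == 8
```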

1, there is only one other 1-dimensional representation, up to equivalence, and only if char k ≠ 2: this is the so-called sign representation, sgn : Sn ↠ Sn^ab ≅ {±1} ⊆ k×. In order to find additional irreducible representations of Sn, we use the action of Sn on the set [n], which we will write as [n] = {b1, b2, . . . , bn} so as to not confuse its elements with scalars from k. Let Mn = k[n] denote the k-vector space with basis [n]. The standard permutation representation of Sn is defined by

(3.19) Mn := ⊕_{i=1}^{n} k b_i  with  s.b_i = b_{s(i)}  (s ∈ Sn).

In terms of the isomorphism GL(Mn) ≅ GLn(k) that is provided by the given basis of Mn, the image of the homomorphism Sn → GLn(k) consists exactly of the permutation matrices, having one entry equal to 1 in each row and column with all other entries being 0. Note that Mn is not irreducible: the 1-dimensional subspace spanned by the vector Σ_i b_i ∈ Mn is a proper subrepresentation of Mn that is equivalent to the trivial representation 1. Also, the map

(3.20) π : Mn → 1,  Σ_i λ_i b_i ↦ Σ_i λ_i

is easily seen to be an epimorphism of representations. Therefore, we obtain a representation of dimension n − 1 by putting

V_{n−1} := Ker π.

This is called the standard representation⁷ of Sn. It is not hard to show that V_{n−1} is irreducible if and only if either n = 2 or n > 2 and char k ∤ n and that

⁷V_{n−1} is also called the deleted permutation representation of Sn.


one always has End_{kSn}(V_{n−1}) = k (Exercise 3.2.3). Consequently, if n > 2 and char k ∤ n, then Burnside's Theorem (§1.4.6) gives a surjective map of algebras kSn ↠ BiEnd_{kSn}(V_{n−1}) ≅ Mat_{n−1}(k).

Example 3.12 (The structure of kS3). Assume that char k ≠ 2, 3. (See Exercise 3.4.1 for characteristics 2 and 3.) Then the foregoing provides us with three nonequivalent irreducible representations for S3 over k: 1, sgn, and V2. Their kernels are three distinct maximal ideals of kS3 with factors k, k, and Mat2(k), respectively. Exactly as for kD4 above, we obtain an isomorphism kS3 ≅ k × k × Mat2(k) in Algk. Thus, kS3 is split semisimple and Irr kS3 = {1, sgn, V2}. Note also that S3 has three conjugacy classes with representatives (1), (1 2), and (1 2 3). With respect to the basis (b1 − b2, b2 − b3) of V2, the operators (1 2)_{V2} and (1 2 3)_{V2} have matrices [ −1 1 ; 0 1 ] and [ 0 −1 ; 1 −1 ], respectively. Here is the character table of kS3:

Table 3.2. Character table of S3 (char k ≠ 2, 3)

classes |  (1)   (1 2)   (1 2 3)
sizes   |   1      3        2
--------+-----------------------
1       |   1      1        1
sgn     |   1     −1        1
χ_{V2}  |   2      0       −1

We remark that S3 is isomorphic to the dihedral group D3, the group of symmetries of an equilateral triangle, by sending (1 2) to the reflection across the vertical line of symmetry and (1 2 3) to the counterclockwise rotation by 2π/3 (picture omitted). If k = R, then we may regard V2 ≅ R² as the Euclidean plane. Using the basis consisting of v1 = √3 (b1 − b2) and v2 = b1 + b2 − 2b3, the matrices of (1 2)_{V2} and (1 2 3)_{V2} are [ −1 0 ; 0 1 ] and [ cos 2π/3 −sin 2π/3 ; sin 2π/3 cos 2π/3 ], respectively. Thus, over R, the representation V2 also arises from the realization of S3 as the group of symmetries of the triangle.
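As a quick sanity check (code not from the book), the two matrices for (1 2) and (1 2 3) on the basis (b1 − b2, b2 − b3) satisfy the expected S3 relations, and their traces reproduce the χ_{V2} row of Table 3.2.

```python
def mul(A, B):
    """Multiply two 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
T = [[-1, 1], [0, 1]]    # (1 2) on the basis (b1 - b2, b2 - b3)
C = [[0, -1], [1, -1]]   # (1 2 3) on the same basis

assert mul(T, T) == I                       # (1 2)^2 = 1
assert mul(C, mul(C, C)) == I               # (1 2 3)^3 = 1
assert mul(T, mul(C, T)) == mul(C, C)       # (1 2)(1 2 3)(1 2) = (1 3 2)

# Traces reproduce the chi_{V2} row of Table 3.2: 2, 0, -1.
trace = lambda A: A[0][0] + A[1][1]
assert (trace(I), trace(T), trace(C)) == (2, 0, -1)
```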

3.2.5. Permutation Representations

Returning to the case of an arbitrary group G, let us now consider a G-set, that is, a set X with a G-action G × X → X, (g, x) ↦ g.x,


satisfying the usual axioms: 1.x = x and g.(g′.x) = (gg′).x for all g, g′ ∈ G and x ∈ X. We will write G ↷ X to indicate a G-action on X. Such an action extends uniquely to an action of G by k-linear automorphisms on the vector space kX of all formal k-linear combinations of the elements of X (Example A.5), thereby giving rise to a representation ρ_X : G → GL(kX). Representations V ∈ Rep kG that are equivalent to a representation of this form are called permutation representations of G; they are characterized by the fact that the action of G on V stabilizes some k-basis of V. If the set X is finite, then we can consider the character χ_{kX} : G → k. Letting Fix_X(g) = {x ∈ X | g.x = x} denote the set of all fixed points of g in X, we evidently have

(3.21) χ_{kX}(g) = # Fix_X(g) 1_k   (g ∈ G).

Examples 3.13 (Some permutation representations). (a) If |X| = 1, then kX ≅ 1 and χ_1 = 1.

(b) Taking X = G, with G acting on itself by left multiplication, we obtain the regular representation ρ_G = ρ_reg of kG. As elsewhere in this book, we will write (kG)_reg for this representation. Evidently, Fix_G(1) = G and Fix_G(g) = ∅ if g ≠ 1. Thus, if G is finite, then the regular character of kG is given by

χ_reg(g) = |G| 1_k for g = 1,  and  χ_reg(g) = 0 otherwise,

or χ_reg(Σ_{g∈G} α_g g) = |G| α_1. Viewing kG as a symmetric algebra as in §3.1.6, this formula is identical to (2.30).

(c) We can also let G act on itself by conjugation as in (3.10). The resulting permutation representation is called the adjoint representation of kG; it will be denoted by (kG)_ad. Now we have Fix_G(g) = C_G(g), the centralizer of g in G. Hence, for finite G, the character of the adjoint representation is given by χ_ad(g) = |C_G(g)| 1_k.

(d) With G = Sn acting as usual on X = [n], we recover the standard permutation representation Mn of Sn. Here, Fix_[n](s) is the set of 1-cycles in the disjoint cycle decomposition of s ∈ Sn. Writing the number of these as # Fix(s), we obtain

χ_{Mn}(s) = # Fix(s) 1_k   (s ∈ Sn).

Recall from (3.20) that there is a short exact sequence 0 → V_{n−1} → Mn → 1 → 0 in Rep kSn. Thus, Lemma 1.41 gives χ_{V_{n−1}}(s) = # Fix(s) 1_k − 1_k.


Exercises for Section 3.2

3.2.1 (Dual group). Let G be finite abelian and assume that char k ∤ |G|. Put e = exp G, μ_e = {ζ ∈ k̄ | ζ^e = 1}, and K = k(μ_e) ⊆ k̄, a fixed algebraic closure of k. Use (3.3) to show that G ≅ Hom_Groups(G, K×) as groups.

3.2.2 (Splitting fields). Show: (a) If k is a splitting field for G, then k is also a splitting field for all homomorphic images of G. (b) Assume that k is a splitting field for G and that G^ab is finite. Show that μ_e ⊆ k, where e = exp(G^ab) and μ_e is as in Exercise 3.2.1. (c) Give an example showing that if k is a splitting field for G, then k need not be a splitting field for all subgroups of G.

3.2.3 (Standard representation of Sn). Let V_{n−1} (n ≥ 2) be the standard representation of the symmetric group Sn. Show: (a) V_{n−1} is irreducible if and only if n = 2 or char k ∤ n. (b) End_{kSn}(V_{n−1}) = k.

3.2.4 (Deleted permutation representation). Let X be a G-set with |X| ≥ 2 and let

π : kX → 1 be defined by π(Σ_{x∈X} λ_x x) = Σ_{x∈X} λ_x. The kernel V = Ker π ∈ Rep kG is called a deleted permutation representation. Assume that G is finite with char k ∤ |G| and that the action G ↷ X is doubly transitive, that is, the G-action on {(x, y) ∈ X × X | x ≠ y} that is given by g.(x, y) = (g.x, g.y) is transitive. Show that V is irreducible.

3.2.5 (The character table does not determine the group). Consider the real quaternions, H = R ⊕ Ri ⊕ Rj ⊕ Rk with i² = j² = k² = ijk = −1, and the quaternion group Q8 = ⟨i, j⟩ = {±1, ±i, ±j, ±k} ≤ H×. Show that Q8 has the same character table over any field k with char k ≠ 2 as the dihedral group D4 (Table 3.1), even though Q8 ≇ D4.

3.3. More Structure

Returning to the general development of the theory of group representations, this section applies the group algebra functor k[·] : Groups → Algk to construct some algebra maps for the group algebra kG that add important structure to kG and its category of representations.


3.3.1. Invariants

The Augmentation. The trivial group homomorphism G → {1} gives rise to the following algebra map, called the augmentation map or counit of kG:

(3.22) ε : kG → k,  Σ_x α_x x ↦ Σ_x α_x.

The map ε is the k-linear extension of the trivial representation 1 in (3.5) from G to kG, and we can also think of it as a map ε : (kG)_reg ↠ 1 in Rep kG. The kernel of ε is called the augmentation ideal of kG; we will use the notation

(3.23) (kG)⁺ := Ker ε.

Clearly, (kG)⁺ is the k-subspace of kG that is generated by the subset {g − 1 | g ∈ G}.

Invariants and Weight Spaces. For any V ∈ Rep kG, the k-subspace of G-invariants in V is defined by

V^G := { v ∈ V | g.v = v for all g ∈ G }.

The invariants can also be described as the common kernel of the operators a_V with a ∈ (kG)⁺ or, alternatively, as the 1-homogeneous component V(1) of V:

V^G = { v ∈ V | (kG)⁺.v = 0 } = { v ∈ V | a.v = ε(a)v for all a ∈ kG }.

More generally, if k_φ is any 1-dimensional representation of G, given by a group homomorphism φ : G → k× (§3.2.2), then the homogeneous component V(k_φ) will be written as V_φ and, if nonzero, referred to as a weight space of V as in Example 1.30:

V_φ := { v ∈ V | g.v = φ(g)v for all g ∈ G }.

The elements of V_φ are called weight vectors or semi-invariants.

Invariants of Permutation Representations. Let X be a G-set and let V = kX be the associated permutation representation of kG (§3.2.5). An element v = Σ_{x∈X} λ_x x ∈ kX belongs to (kX)^G if and only if λ_{g.x} = λ_x for all x ∈ X and g ∈ G; in other words, the function X → k, x ↦ λ_x, is constant on all G-orbits G.x ⊆ X. Since λ_x = 0 for almost all x ∈ X, we conclude that if λ_x ≠ 0, then x must belong to the following G-subset of X:

X_fin = { x ∈ X | the orbit G.x is finite }.


For each orbit O ∈ G\X_fin, we may define the orbit sum

σ_O := Σ_{x∈O} x ∈ (kX)^G.

Since distinct orbits are disjoint, the various orbit sums are k-linearly independent. The orbit sums also span (kX)^G. For, any v = Σ_{x∈X} λ_x x ∈ (kX)^G can be written as v = Σ_{O∈G\X_fin} λ_O σ_O, where λ_O denotes the common value of all λ_x with x ∈ O. To summarize,

(3.24) (kX)^G = ⊕_{O∈G\X_fin} k σ_O.
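A small illustration (assumed setup, not from the text): for the cyclic group G = ⟨s⟩ acting on X = {0, . . . , 5} by the permutation s = (0 1 2)(3 4), the orbit sums are fixed vectors and there is one of them per orbit, matching (3.24).

```python
s = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3, 5: 5}   # the permutation (0 1 2)(3 4)

def orbit(x):
    """The <s>-orbit of x, computed by iterating s."""
    O, y = {x}, s[x]
    while y not in O:
        O.add(y)
        y = s[y]
    return frozenset(O)

orbits = {orbit(x) for x in s}
assert orbits == {frozenset({0, 1, 2}), frozenset({3, 4}), frozenset({5})}

def act(v):
    """s acting on a vector in kX, stored as a dict basis-element -> coefficient."""
    return {s[x]: c for x, c in v.items()}

# Each orbit sum sigma_O is invariant: s permutes the basis elements
# within O and leaves the sum unchanged.
for O in orbits:
    sigma = {x: 1 for x in O}
    assert act(sigma) == sigma

# Any invariant vector is constant on orbits, hence a combination of the
# sigma_O's; here the invariants form a 3-dimensional space, one per orbit.
assert len(orbits) == 3
```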

The foregoing applies verbatim to any commutative ring k rather than a field.

Example 3.14 (Invariants of the adjoint representation). Since G generates the group algebra kG, the invariants of the adjoint representation (Example 3.13) coincide with the center of kG:

(kG)_ad^G = { a ∈ kG | gag^{−1} = a for all g ∈ G } = Z(kG).

The set G_fin for the conjugation action G ↷ G consists of the elements of G whose conjugacy class is finite. The corresponding orbit sums are also called the class sums of G; they form a k-basis of Z(kG) by (3.24). See Exercise 3.3.1 for more on G_fin.

Example 3.15 (Invariants of the regular representation). Applying (3.24) to the regular representation (kG)_reg and noting that X = G consists of just one G-orbit in this case, we obtain

(kG)_reg^G = 0 if G is infinite, and (kG)_reg^G = k σ_G if G is finite, with σ_G = Σ_{g∈G} g.

Focusing on the case where G is finite, we have a² = ε(a)a for any a ∈ (kG)_reg^G and ε(a) ∈ ε(σ_G) k = |G| k. Thus, ε is nonzero on (kG)_reg^G if and only if the group G is finite and char k ∤ |G|. In this case, the unique element e ∈ (kG)_reg^G satisfying ε(e) = 1 or, equivalently, 0 ≠ e = e², is given by

(3.25) e = (1/|G|) σ_G = (1/|G|) Σ_{g∈G} g.

Invariants for Finite Groups: Averaging. For a finite group G and an arbitrary V ∈ Rep kG, the operator (σ_G)_V : V → V, v ↦ Σ_{g∈G} g.v, clearly has image in V^G. If the order of G is invertible in k, then the following proposition shows that all G-invariants in V are obtained in this way.


Proposition 3.16. Let G be finite with char k ∤ |G| and let e be as in (3.25). Then, for every V ∈ Rep kG, the following "averaging operator" is a projection onto V^G:

e_V : V → V,  v ↦ (1/|G|) Σ_{g∈G} g.v.

If V is finite dimensional, then

dim_k V^G · 1_k = χ_V(e) = (1/|G|) Σ_{g∈G} χ_V(g).

Proof. Clearly, Im e_V ⊆ V^G. If v ∈ V^G, then e.v = ε(e)v = v, because ε(e) = 1. Thus, e_V is a projection onto V^G. Now let V be finite dimensional. With respect to a k-basis of V = e.V ⊕ (1 − e).V = V^G ⊕ (1 − e).V that is the union of bases of V^G and (1 − e).V, the matrix of the projection e_V has the block form

[ Id_{V^G}  0 ; 0  0 ].

Therefore, χ_V(e) = trace(e_V) = dim_k V^G · 1_k, which completes the proof. □
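An illustrative computation (my own setup, not from the text): for the rotation group of a square acting on the 2-colorings of its vertices, the averaging formula of Proposition 3.16 applied to the permutation representation counts invariants, and by (3.24) that count equals the number of orbits — the content of the corollary that follows.

```python
from itertools import product

# G = rotations of a square, acting on 2-colorings of the 4 vertices.
rotations = [lambda v, r=r: tuple(v[(i - r) % 4] for i in range(4))
             for r in range(4)]
X = list(product([0, 1], repeat=4))          # all 2^4 colorings

# chi_{kX}(g) = #Fix_X(g) by (3.21); average over G as in Proposition 3.16.
total_fixed = sum(sum(1 for x in X if g(x) == x) for g in rotations)
assert total_fixed % len(rotations) == 0
dim_invariants = total_fixed // len(rotations)

# Direct orbit count for comparison: each orbit is one necklace pattern.
orbits = set()
for x in X:
    orbits.add(frozenset(g(x) for g in rotations))

assert dim_invariants == len(orbits) == 6
```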

The following corollary is variously referred to as Burnside's Lemma or the Cauchy-Frobenius Lemma.

Corollary 3.17. If a finite group G acts on a finite set X, then the number of G-orbits in X is equal to the average number of fixed points of elements of G:

# G\X = (1/|G|) Σ_{g∈G} # Fix_X(g).

Proof. By (3.24) we know that dim_Q (QX)^G = # G\X, and (3.21) tells us that the character of QX is given by χ_{QX}(g) = # Fix_X(g) for g ∈ G. The corollary therefore follows from the dimension formula for invariants in Proposition 3.16. □

3.3.2. Comultiplication and Antipode

In this subsection, we construct two further maps for the group algebra kG: the comultiplication and the antipode. Equipped with these new maps and the augmentation (counit), kG becomes a first example of a Hopf algebra.

Comultiplication. Applying the group algebra functor k[·] : Groups → Algk to the diagonal group homomorphism G → G × G, x ↦ (x, x), and using the isomorphism k[G × G] ≅ kG ⊗ kG that is given by (x, y) ↦ x ⊗ y for x, y ∈ G (Exercise 3.1.2),


we obtain the following algebra map:

Δ : kG → kG ⊗ kG,  Σ_x α_x x ↦ Σ_x α_x (x ⊗ x).

This map is called the comultiplication of kG. The nomenclature "comultiplication" and "counit" derives from the fact that these maps fit into commutative diagrams resembling the diagrams (1.1) for the multiplication and unit maps, except that all arrows now point in the opposite direction. Written as identities of maps, the two diagrams state:

(3.26) (Δ ⊗ Id) ∘ Δ = (Id ⊗ Δ) ∘ Δ  and  (ε ⊗ Id) ∘ Δ = Id_{kG} = (Id ⊗ ε) ∘ Δ,

where the second identity uses the canonical isomorphisms k ⊗ kG ≅ kG ≅ kG ⊗ k. Both diagrams are manifestly commutative. The property of Δ that is expressed by the diagram on the left is called coassociativity. Another notable property of the comultiplication Δ is its cocommutativity: letting τ : kG ⊗ kG → kG ⊗ kG denote the map given by τ(a ⊗ b) = b ⊗ a, we have

(3.27) Δ = τ ∘ Δ.

Again, this concept is "dual" to commutativity: recall that an algebra A with multiplication m : A ⊗ A → A is commutative if and only if m = m ∘ τ.

Antipode. Inversion gives a group isomorphism G ≅ G^op, x ↦ x^{−1}. Here G^op denotes the opposite group: G^op = G as sets, but G^op has a new group operation ∗ given by x ∗ y = yx. We obtain a k-linear map

(3.28) S : kG → kG,  Σ_x α_x x ↦ Σ_x α_x x^{−1},

satisfying S(ab) = S(b) S(a) for all a, b ∈ kG and S² = Id. The map S is called the standard involution or the antipode of kG. We can also think of S as an isomorphism S : kG ≅ k[G^op] = (kG)^op in Algk (Exercise 3.1.2).

3.3.3. A Plethora of Representations

The structure maps in §§3.3.1 and 3.3.2 allow us to construct many new representations of kG from given representations. This is sometimes referred to under the moniker "plethysm".⁸ Analogous constructions will later be carried out also in the

⁸This term originated in the theory of symmetric functions; see Littlewood [136, p. 289].


context of Lie algebras and, more generally, Hopf algebras. We will then refer to some of the explanations given below.

Homomorphisms. For given V, W ∈ Rep kG, the k-vector space Homk(V, W) can be made into a representation of kG by defining

(3.29) (g.f)(v) := g.f(g^{−1}.v)   (g ∈ G, v ∈ V, f ∈ Homk(V, W)).

Even though it is straightforward to verify that this rule does indeed define a representation of G, let us place it in a more conceptual framework. If V and W are representations of arbitrary algebras B and A, respectively, then Homk(V, W) becomes a representation of the algebra A ⊗ B^op as in Example 1.3:

(a ⊗ b^op).f = a_W ∘ f ∘ b_V   (a ∈ A, b ∈ B, f ∈ Homk(V, W)).

Thus, we have a map A ⊗ B^op → Endk(Homk(V, W)) in Algk. For A = B = kG, we also have the map (Id ⊗ S) ∘ Δ : kG → kG ⊗ kG → kG ⊗ (kG)^op. Restricting the composite of these two algebra maps to G leads to (3.29).

The bifunctor Homk for k-vector spaces (§B.3.2) restricts to a bifunctor

Homk(·, ·) : (Rep kG)^op × Rep kG → Rep kG.

Here, we use op for the first variable, because Homk is contravariant in this variable whereas Homk is covariant in the second variable: a map f : V → V′ in Rep kG gives Homk(f, W) = f^* = (·) ∘ f : Homk(V′, W) → Homk(V, W), while a map f : W → W′ gives Homk(V, f) = f_* = f ∘ (·) : Homk(V, W) → Homk(V, W′). It is readily verified that f^* and f_* are indeed morphisms in Rep kG. Recall also that Homk is exact in either argument (§B.3.2).

Evidently, g.f = f holds for all g ∈ G if and only if f(g.v) = g.f(v) for all g ∈ G and v ∈ V, and the latter condition in turn is equivalent to f(a.v) = a.f(v) for all a ∈ kG and v ∈ V. Thus, the G-invariants of Homk(V, W) are exactly the homomorphisms V → W in Rep kG:

(3.30) Homk(V, W)^G = HomkG(V, W).

Example 3.18. The following map is easily seen to be an isomorphism in Rep kG:

Homk(1, V) ≅ V,  f ↦ f(1).

By (3.30), this map restricts to an isomorphism HomkG(1, V) ≅ V^G.


Duality. Taking W = 1 = k_ε in the preceding paragraph, the dual vector space V^* = Homk(V, k) becomes a representation of kG. By (3.29), the G-action on V^* is given by ⟨g.f, v⟩ = ⟨f, g^{−1}.v⟩ for g ∈ G, v ∈ V, and f ∈ V^* or, equivalently,

a.f = f ∘ S(a)_V   (a ∈ kG, f ∈ V^*),

where S is the antipode (3.28). By our remarks about Homk, duality gives an exact contravariant functor

(·)^* : Rep kG → Rep kG.

A representation V is called self-dual if V ≅ V^* in Rep kG. Note that this forces V to be finite dimensional, because otherwise dimk V^* > dimk V (§B.3.2). The lemma below shows that finite-dimensional permutation representations are self-dual; further self-dual representations can be constructed with the aid of Exercise 3.3.11.

Lemma 3.19. The permutation representation kX for a finite G-set X is self-dual.

Proof. Let (δ_x)_{x∈X} ⊆ (kX)^* be the dual basis for the basis X of kX; so ⟨δ_x, y⟩ = δ_{x,y} 1_k for x, y ∈ X. Then x ↦ δ_x defines a k-linear isomorphism δ : kX ≅ (kX)^*. We claim that this is in fact an isomorphism in Rep kG, that is, δ(a.v) = a.δ(v) holds for all a ∈ kG and v ∈ kX. By linearity, we may assume that a = g ∈ G and v = x ∈ X. The following calculation, for any y ∈ X, shows that δ(g.x) = g.δ(x):

⟨δ_{g.x}, y⟩ = δ_{g.x,y} 1_k = δ_{x,g^{−1}.y} 1_k = ⟨δ_x, g^{−1}.y⟩ = ⟨g.δ_x, y⟩. □
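In matrix terms, Lemma 3.19 can be seen as follows (a small sketch, not from the text): on the dual basis, the action of g on (kX)^* has matrix P(g^{−1})^T, and for permutation matrices this equals P(g) itself.

```python
def perm_matrix(s, n):
    """Permutation matrix of s on the basis {0,...,n-1}: column j has a 1 in row s[j]."""
    return [[1 if s[j] == i else 0 for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

n = 5
s = [2, 0, 1, 4, 3]                      # a permutation of {0,...,4}
s_inv = [s.index(i) for i in range(n)]   # its inverse

P = perm_matrix(s, n)
Q = perm_matrix(s_inv, n)

# The dual action matrix P(g^{-1})^T coincides with P(g): the isomorphism
# x -> delta_x of Lemma 3.19 is G-equivariant.
assert transpose(Q) == P
```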

Tensor Products. Let V, W ∈ Rep kG be given. Then the tensor product V ⊗ W becomes a representation of kG via the "diagonal action" g_{V⊗W} = g_V ⊗ g_W for g ∈ G, or

(3.31) g.(v ⊗ w) := g.v ⊗ g.w   (g ∈ G, v ∈ V, w ∈ W).

The switch map gives an isomorphism τ : V ⊗ W ≅ W ⊗ V, v ⊗ w ↦ w ⊗ v, and it is also clear that V ⊗ 1 ≅ V in Rep kG. Finally, the G-action (3.31) is compatible with the standard associativity isomorphism for tensor products; so the tensor product in Rep kG is associative. The tensor product construction makes Rep kG an example of a tensor category or monoidal category; see [71].

Again, let us place the action rule (3.31) in a more general context. Recall from (1.51) that the outer tensor product of representations V ∈ Rep A and W ∈ Rep B for arbitrary algebras A and B is a representation of the algebra A ⊗ B: the algebra map A ⊗ B → Endk(V ⊗ W) is given by a ⊗ b ↦ a_V ⊗ b_W. If A = B = kG, then we also have the comultiplication Δ : kG → kG ⊗ kG. The composite with the previous map is an algebra map kG → Endk(V ⊗ W) that gives the diagonal G-action (3.31).


Tensor, Symmetric, and Exterior Powers. The action (3.31) inductively gives diagonal G-actions on all tensor powers V^{⊗k} of a given V ∈ Rep kG, with g ∈ G acting on V^{⊗k} by the k-linear automorphism g_V^{⊗k} (§B.1.3):

g.(v_1 ⊗ v_2 ⊗ · · · ⊗ v_k) = g.v_1 ⊗ g.v_2 ⊗ · · · ⊗ g.v_k.

Thus, V^{⊗k} ∈ Rep kG. Defining V^{⊗0} to be the trivial representation 1, the tensor algebra TV = ⊕_{k≥0} V^{⊗k} becomes a kG-representation as well. An element g ∈ G acts on TV by the graded k-algebra automorphism T(g_V) = ⊕_k g_V^{⊗k} that comes from the functor T : Vectk → Algk (§1.1.2).

Similarly, the symmetric algebra Sym V and the exterior algebra ⋀V become representations of kG via the functors Sym : Vectk → CommAlgk and ⋀ : Vectk → Algk, with g ∈ G acting by the graded k-algebra automorphisms Sym(g_V) and ⋀(g_V), respectively. Since the homogeneous components Sym^k V and ⋀^k V are stable under these actions, we also obtain Sym^k V, ⋀^k V ∈ Rep kG. The canonical epimorphisms V^{⊗k} ↠ Sym^k V and V^{⊗k} ↠ ⋀^k V are maps in Rep kG. On ⋀^k V, for example, an element g ∈ G acts via the map ⋀^k g_V (§1.1.2):

g.(v_1 ∧ v_2 ∧ · · · ∧ v_k) = g.v_1 ∧ g.v_2 ∧ · · · ∧ g.v_k.

If dimk V = n < ∞, then ⋀^n V is the 1-dimensional representation that is given by the group homomorphism det_V : G → k×, g ↦ det(g_V), by (1.14):

(3.32) ⋀^n V ≅ k_{det_V}.

G-Algebras. As we have seen, the tensor, symmetric, and exterior algebras of a given V ∈ Rep kG all become representations of kG, with G acting by algebra automorphisms. More generally, any A ∈ Algk that is equipped with an action G ↷ A by k-algebra automorphisms is called a G-algebra in the literature (e.g., [205]). Thus, A ∈ Rep kG by virtue of the given G-action. The conditions

g.(ab) = (g.a)(g.b)  and  g.1 = 1   (g ∈ G, a, b ∈ A)

state, respectively, that the multiplication m : A ⊗ A → A and the unit u : k = 1 → A are maps in Rep kG. Thus, G-algebras can be described concisely as "algebras in the category Rep kG": objects A ∈ Rep kG that are equipped with two maps in Rep kG, the multiplication m : A ⊗ A → A and the unit map u : 1 → A, such that the algebra axioms (1.1) are satisfied. (Ordinary k-algebras are algebras in Vectk.) Morphisms of G-algebras, by definition, are maps that are simultaneously maps in Rep kG and Algk, that is, G-equivariant algebra maps. With this, we obtain a category, G-Algk.

We shall later meet some variants and generalizations of the concept of a G-algebra (§§5.5.5 and 10.4.1).
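All of the constructions in this subsection flow from the three structure maps ε, Δ, and S of §§3.3.1 and 3.3.2. As a sanity check (my own encoding, not from the text: elements of QG stored as dicts g ↦ coefficient, with G = Z/6), the counit law, the antipode law, and cocommutativity (3.27) can be verified on a sample element.

```python
from fractions import Fraction

n = 6                                    # G = Z/6
a = {0: Fraction(2), 1: Fraction(-1), 4: Fraction(3)}

def eps(a):                              # counit (3.22): sum of coefficients
    return sum(a.values())

def delta(a):                            # Delta(g) = g (x) g, extended linearly
    return {(g, g): c for g, c in a.items()}

def antipode(a):                         # S(g) = g^{-1}, as in (3.28)
    return {(-g) % n: c for g, c in a.items()}

# Counit law from (3.26): applying eps to the first tensor factor of
# Delta(a) recovers a.
left = {}
for (g, h), c in delta(a).items():
    left[h] = left.get(h, 0) + c
assert left == a

# Antipode law: m (S (x) Id) Delta(a) = eps(a) 1, i.e. S(g) g = 1 for g in G.
prod = {}
for (g, h), c in delta(a).items():
    k = ((-g) % n + h) % n
    prod[k] = prod.get(k, 0) + c
assert prod == {0: eps(a)}

# Cocommutativity (3.27): tau(Delta(a)) = Delta(a), since each term is g (x) g.
assert {(h, g): c for (g, h), c in delta(a).items()} == delta(a)
```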


Canonical Isomorphisms and Characters. The standard maps in Vectk discussed in Appendix B all restrict to morphisms in Rep kG; Exercises 3.3.10 and 3.3.13 ask the reader to check this. Specifically, for U, V, W ∈ Rep kG, the Hom-⊗ adjunction isomorphism (B.15) is an isomorphism in Rep kG:

(3.33) Homk(U ⊗ V, W) ≅ Homk(U, Homk(V, W)).

Similarly, the canonical monomorphisms W ⊗ V^* → Homk(V, W) and V → V^{**} in (B.18) and (B.22) are morphisms in Rep kG, and so is the trace map Endk(V) → k for V ∈ Rep_fin kG when k is viewed as the trivial representation, k = 1. Thus, we have the following isomorphism in Rep kG:

(3.34) W ⊗ V^* ≅ Homk(V, W),

provided at least one of V, W is finite dimensional. In this case, (3.33) and (3.34) give the following isomorphism, for any U ∈ Rep kG:

(3.35) Homk(U ⊗ V, W) ≅ Homk(U, W ⊗ V^*).

Finally, for any V ∈ Rep_fin kG,

(3.36) V ≅ V^{**}.

Lemma 3.20. Let V, W ∈ Rep_fin kG.

(a) The characters of the representations V^*, V ⊗ W, and Homk(V, W) are given, for g ∈ G, by
(i) χ_{V^*}(g) = χ_V(g^{−1}),
(ii) χ_{V⊗W}(g) = χ_V(g) χ_W(g),
(iii) χ_{Homk(V,W)}(g) = χ_W(g) χ_V(g^{−1}).

(b) If G is finite with char k ∤ |G|, then

dimk HomkG(V, W) · 1_k = (1/|G|) Σ_{g∈G} χ_W(g) χ_V(g^{−1}).

Proof. (a) The formula a.f = f ∘ S(a)_V for a ∈ kG and f ∈ V^* can be written as a_{V^*} = S(a)_V^*, where S(a)_V^* is the transpose of the operator S(a)_V (§B.3.2). Since trace(S(a)_V) = trace(S(a)_V^*) by (B.25), we obtain χ_{V^*}(a) = χ_V(S(a)). Formula (i) follows, because S(g) = g^{−1} for g ∈ G. For (ii), recall that g_{V⊗W} = g_V ⊗ g_W. Thus, (ii) is a special case of formula (1.52). Finally, in view of the isomorphism (3.34), formula (iii) follows from (i) and (ii).

(b) By Proposition 3.16, dimk V^G · 1_k = (1/|G|) Σ_{g∈G} χ_V(g). Applying this to the representation Homk(V, W) in place of V and recalling that Homk(V, W)^G = HomkG(V, W) by (3.30), the dimension formula results from (iii). □
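As a worked consequence of Lemma 3.20(b) and Table 3.2 (computation over Q, not from the text), the multiplicities of the irreducible representations of S3 in the permutation representation M3 can be read off from character inner products; every element of S3 is conjugate to its inverse, so the sum can be taken classwise.

```python
from fractions import Fraction

sizes = [1, 3, 2]                    # classes (1), (1 2), (1 2 3) of S_3
chi = {'1': [1, 1, 1], 'sgn': [1, -1, 1], 'V2': [2, 0, -1]}
chi_M3 = [3, 1, 0]                   # chi_{M_3} = #Fix on each class, by (3.21)

def pair(phi, psi):
    """The form of Lemma 3.20(b), evaluated classwise; |S_3| = 6."""
    return sum(Fraction(n * a * b, 6) for n, a, b in zip(sizes, phi, psi))

mults = {name: pair(chi_M3, c) for name, c in chi.items()}
# M_3 = 1 + V_2, matching the short exact sequence coming from (3.20).
assert mults == {'1': 1, 'sgn': 0, 'V2': 1}
```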


Exercises for Section 3.3

3.3.1 (The f.c. center of a group). Let G_fin = {g ∈ G | |G : C_G(g)| < ∞} be the set of elements of G whose conjugacy class is finite as in Example 3.14. Prove: (a) G_fin is a characteristic subgroup of G; it is called the f.c. (finite conjugate) center of G. (b) The elements of finite order of G_fin also form a characteristic subgroup of G. This subgroup contains the commutator subgroup [G_fin, G_fin] and it is generated by the finite normal subgroups of G. (Use the fact that if the center of a group has finite index, then the commutator subgroup is finite; see [182, 10.1.4].)

3.3.2 (The adjoint representation). Consider the adjoint representation (kG)_ad of the group G as in Example 3.13(c). (a) Show that (kG)_ad ≅ ⊕_x k[G/C_G(x)], where x runs over a set of representatives of the conjugacy classes of G, and Ker (kG)_ad = ⋂_{g∈G} kG(kC_G(g))⁺. (Use Exercise 3.3.7.) (b) For G = S3, show that Ker (kG)_ad = 0 if and only if char k ≠ 3.

3.3.3 (Invariants of outer tensor products). Let G and H be arbitrary groups and let V ∈ Rep kG and W ∈ Rep kH. Identifying k[G × H] with kG ⊗ kH (Exercise 3.1.2), consider the outer tensor product V ⊠ W ∈ Rep k[G × H] as in (1.51). Show that (V ⊠ W)^{G×H} ≅ V^G ⊗ W^H.

3.3.4 (Coinvariants). Let V ∈ Rep kG. Dually to the definition of the G-invariants V^G (§3.3.1), the G-coinvariants in V are defined by V_G := V/(kG)⁺.V = V / Σ_{g∈G}(g − 1).V. Thus, V_G ≅ 1 ⊗_{kG} V, where 1 is the right kG-module k with trivial G-action. Let G be finite and let σ_G = Σ_{g∈G} g ∈ kG. Show that the operator (σ_G)_V ∈ Endk(V) yields a well-defined k-linear map V_G → V^G and that this map gives a natural equivalence of functors (·)_G ≅ (·)^G : Rep kG → Vectk if char k ∤ |G|.

3.3.5 (Representations as functors: limits and colimits). This exercise assumes familiarity with limits and colimits of functors; see [143, III.3 and III.4]. Let G denote the category with one object, ∗, and with Hom_G(∗, ∗) = G as in Exercise 3.1.1. Recall that any V ∈ Rep kG gives a functor F_V : G → Vectk and conversely. Show that lim F_V ≅ V^G and colim F_V ≅ V_G.

3.3.6 (Permutation representations). Let G-Sets denote the category with objects the G-sets (§3.2.5) and morphisms the G-equivariant functions, that is, functions f : X → Y for X, Y ∈ G-Sets such that f(g.x) = g.f(x) for g ∈ G, x ∈ X. (a) Show that X ↦ kX gives a functor G-Sets → Rep kG satisfying k[⊔_α X_α] ≅ ⊕_α kX_α for the disjoint union X = ⊔_α X_α of a family of X_α ∈ G-Sets. Furthermore, equipping the cartesian product X × Y of X, Y ∈ G-Sets with the G-action g.(x, y) = (g.x, g.y), we have the isomorphism k[X × Y] ≅ (kX) ⊗ (kY) in Rep kG.


(b) Let H be a subgroup of G. For any X ∈ H-Sets, let H act on the cartesian product G × X by h.(g, x) := (gh^{−1}, h.x) and let G ×_H X := H\(G × X) denote the set of orbits under this action. Writing elements of G ×_H X as [g, x], show that G ×_H X ∈ G-Sets via the G-action g.[g′, x] := [gg′, x]. Moreover, show that k[G ×_H X] ≅ (kX)↑^G_H in Rep kG. Conclude in particular that 1↑^G_H ≅ k[G/H].

(c) Let X ∈ G-Sets be transitive; so X ≅ G/H, where H = {g ∈ G | g.x = x} is the isotropy group of some x ∈ X. Show that a k-basis of End_{kG}(kX) is given by the endomorphisms φ_O that are defined by φ_O(x) = σ_O for O ∈ H\X.

3.3.7 (Relative augmentation ideals). Let H be a subgroup of G. Consider the map

ε↑^G_H : kG = (kH)↑^G_H ↠ 1↑^G_H ≅ k[G/H]  (Exercise 3.3.6).

Show that Ker ε↑^G_H is the left ideal kG(kH)⁺ of kG and that this left ideal is generated by the elements h_i − 1, where {h_i} is any generating set of the subgroup H. If H is normal in G, then show that kG(kH)⁺ = (kH)⁺kG is an ideal of kG.

3.3.8 (Coinvariants of permutation representations). Let X be a G-set. For each x ∈ X, let G.x denote the G-orbit in X containing x and, for each O ∈ G\X_fin, let σ_O ∈ kX denote the orbit sum (3.24). (a) Show that the orbit map kX ↠ k[G\X], x ↦ G.x, descends to (kX)_G and yields an isomorphism (kX)_G ≅ k[G\X] in Vectk. Furthermore, show that the image of (kX)^G under the orbit map consists of the k-linear span of all finite orbits whose size is not divisible by char k. (b) Show that (kX)^G ↪ k[G\X] in Vectk via σ_O ↦ O.

3.3.9 (Complete reducibility of permutation representations). (a) Let X be a G-set. Show that if the permutation representation kX is completely reducible, then all G-orbits in X are finite and have size not divisible by char k. (b) Let char k = 3 and let X denote the collection of all 2-element subsets of {1, 2, . . . , 5} with the natural action by G = S5. So X ≅ G/H as G-sets, where H = S2 × S3 ≤ S5, and |X| = 10. Use Exercise 3.3.6(c) to show that End_{kG}(kX) is a 3-dimensional commutative k-algebra that has nonzero nilpotent elements. Conclude from Proposition 1.33 that kX is not completely reducible. Thus, the converse of (a) fails. (This example was communicated to me by Karin Erdmann.)

3.3.10 (Duality). For V ∈ Rep kG, show: (a) The canonical k-linear injection V → V^{**} in (B.22) is a morphism in Rep kG. In particular, if V is finite dimensional, then V ≅ V^{**} in Rep kG. (b) Conclude from exactness of the contravariant functor (·)^* : Rep kG → Rep kG that irreducibility of V^* forces V to be irreducible. The converse holds if V is finite dimensional but not in general. (c) The dual (V_G)^* → V^* of the canonical map V ↠ V_G (Exercise 3.3.4) gives a natural isomorphism (V_G)^* ≅ (V^*)^G.


3.3.11 (Duality, induction, and coinduction). (a) Let H → G be a group homomorphism and let W ∈ Rep kH. Show that Coind^{kG}_{kH}(W^*) ≅ (Ind^{kG}_{kH} W)^* in Rep kG. (b) Conclude from (a) and Proposition 3.4 that, for any finite-index subgroup H ≤ G, dualizing commutes with induction: (Ind^{kG}_{kH} W)^* ≅ Ind^{kG}_{kH}(W^*) for all W ∈ Rep kH.

3.3.12 (Twisting). Let V ∈ Rep kG. Representations of the form k_φ ⊗ V with φ ∈ Hom_Groups(G, k×) are called twists of V. Prove: (a) The map f ↦ 1 ⊗ f(1) is an isomorphism Homk(k_φ, V) ≅ k_{φ^{−1}} ⊗ V in Rep kG. In particular, (k_φ)^* ≅ k_{φ^{−1}}. (b) (k_φ ⊗ V)^G ≅ V_{φ^{−1}}, the φ^{−1}-weight space of V (§3.3.1). (c) The map G → (kG)×, g ↦ φ(g)g, is a group homomorphism that lifts uniquely to an algebra automorphism φ̃ ∈ Aut_{Algk}(kG).⁹ The φ̃-twist (1.24) of V is isomorphic to k_{φ^{−1}} ⊗ V. (d) Twisting gives an action of the group Hom_Groups(G, k×) on Irr kG, on the set of completely reducible representations of kG, etc. (See Exercise 1.2.3.)

3.3.13 (Hom-Tensor relations). Let U, V, W ∈ Rep kG. Show: (a) The canonical embedding W ⊗ V^* → Homk(V, W) in (B.18) is a morphism in Rep kG. In particular, if at least one of V, W is finite dimensional, then W ⊗ V^* ≅ Homk(V, W) in Rep kG. (b) The trace map Endk(V) ≅ V ⊗ V^* → k in (B.23) is a morphism in Rep kG when k = 1 is viewed as the trivial representation. (c) Hom-⊗ adjunction (B.15) gives an isomorphism Homk(U ⊗ V, W) ≅ Homk(U, Homk(V, W)) in Rep kG. Conclude that if V or W is finite dimensional, then the isomorphisms (B.21) and (B.20) are isomorphisms V^* ⊗ W^* ≅ (W ⊗ V)^* and Homk(U ⊗ V, W) ≅ Homk(U, W ⊗ V^*) in Rep kG.

(c) Hom-⊗ adjunction (B.15) gives an isomorphism Homk (U ⊗ V, W ) Homk (U, Homk (V, W )) in Rep kG. Conclude that if V or W is ﬁnite dimensional, then the isomorphisms (B.21) and (B.20) are isomorphisms V ∗ ⊗ W ∗ (W ⊗ V ) ∗ and Homk (U ⊗ V, W ) Homk (U, W ⊗ V ∗ ) in Rep kG. 3.3.14 (Tensor product formula). Let H be a subgroup of G and let V ∈ Rep kG and W ∈ Rep kH. (a) Show that V ⊗ (W↑G ) (V↓ H ⊗ W )↑G in Rep kG. (b) Conclude from (a) that V↓ H↑G V ⊗ (1↑G H ) in Rep kG. 3.3.15 (Symmetric and exterior powers). Let V, W ∈ Rep kG. Prove: (a) The isomorphisms Sym (V ⊕ W ) Sym V ⊗ Sym W and V ⊗ W of Exercise 1.1.13 are also isomorphisms in Rep kG.

(b) If dimk V = n < ∞, then in Rep kG.

k

V Homk (

n−k

V,

n

(V ⊕ W )

V ) kdet V ⊗ (

9Automorphisms of this form are called winding automorphisms; see also Exercise 10.1.6.

n−k

V )∗


3.4. Semisimple Group Algebras

The material set out in Section 3.3 allows for a quick characterization of semisimple group algebras; this is the content of Maschke's Theorem (§3.4.1). The remainder of this section then concentrates on the main tools of the trade: characters, especially their orthogonality relations.

3.4.1. The Semisimplicity Criterion

The following theorem is a landmark result due to Heinrich Maschke [149] dating back to 1899.

Maschke's Theorem. The group algebra kG is semisimple if and only if G is finite and char k does not divide |G|.

Proof. First assume that kG is semisimple; so (kG)_reg is the direct sum of its various homogeneous components. By Schur's Lemma, the counit ε : (kG)_reg ↠ 1 vanishes on all homogeneous components except for the 1-homogeneous component, (kG)_reg^G. Therefore, ε must be nonzero on (kG)_reg^G, which forces G to be finite with char k ∤ |G| (Example 3.15).

Conversely, assume that G is finite and char k ∤ |G|. Then, as we have already pointed out in §3.1.6, semisimplicity of kG can be established by invoking Theorem 2.21. However, here we offer an alternative argument by showing directly that every kG-representation V is completely reducible: every subrepresentation U ⊆ V has a complement (Theorem 1.28). To this end, we will construct a map π ∈ HomkG(V, U) with π|_U = Id_U; then Ker π will be the desired complement for U (Exercise 1.1.2). In order to construct π, start with a k-linear projection map p : V ↠ U along some vector space complement for U in V; so p ∈ Homk(V, U)

and pU = IdU . With e = |G1 | g ∈G g ∈ kG as in Proposition 3.16, put π = e.p ∈ Homk (V, U) G = HomkG (V, U). (3.30)

For u ∈ U, we have π(u) =

1 |G |

g ∈G

g.p(g−1 .u) = u, because each g−1 .u ∈ U and

so p(g−1 .u) = g−1 .u . Thus, π U = IdU and the proof is complete.
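The averaging step in this proof is easy to see in coordinates. The following sketch is our own toy illustration, not from the text: G = C₃ acts on ℝ³ by cyclically shifting coordinates, U = ℝ·(1,1,1) is the invariant line, and we start from a deliberately non-equivariant projection p onto U.

```python
import numpy as np

# Generator of C3 acting on R^3 by cyclically shifting coordinates.
g = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
G = [np.linalg.matrix_power(g, i) for i in range(3)]

u = np.ones(3)
# A k-linear projection p onto U that is NOT equivariant: p(v) = v[0]*(1,1,1).
p = np.outer(u, [1.0, 0.0, 0.0])

# The averaged map (3.30): pi = (1/|G|) * sum_g  g p g^{-1}
pi = sum(h @ p @ np.linalg.inv(h) for h in G) / len(G)

assert np.allclose(pi @ u, u)                        # pi restricts to Id on U
assert all(np.allclose(pi @ h, h @ pi) for h in G)   # pi is kG-equivariant
assert np.allclose(pi @ pi, pi)                      # pi is again a projection
```

Here Ker π is the plane x₁ + x₂ + x₃ = 0, the G-stable complement of U produced by the proof.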

For a completely different argument proving semisimplicity of kG for a finite group G and a field k of characteristic 0, see Exercise 3.4.2.

The following corollary specializes some earlier general results about split semisimple algebras to group algebras.

Corollary 3.21. Assume that G is finite with char k ∤ |G| and that k is a splitting field for G. Then:

(a) The irreducible characters form a basis of the space cf_k(G) of all k-valued class functions on G. In particular, # Irr kG = #{conjugacy classes of G}.

(b) |G| = ∑_{S∈Irr kG} (dim_k S)².

(c) m(S, (kG)_reg) = dim_k S for all S ∈ Irr kG.

Proof. All parts come straight from the corresponding parts of Corollary 1.35. Part (a) also uses the fact that the irreducible characters form a k-basis of the space (kG)*_trace ≅ cf_k(G) by Theorem 1.44 and (3.11). □

3.4.2. Orthogonality Relations

For the remainder of Section 3.4, we assume G to be finite with char k ∤ |G|.

An Inner Product for Characters. For φ, ψ ∈ cf_k(G), we define

(3.37)  ⟨φ, ψ⟩ := (1/|G|) ∑_{g∈G} φ(g)ψ(g⁻¹).

This gives a symmetric k-bilinear form ⟨· , ·⟩ : cf_k(G) × cf_k(G) → k that is nondegenerate. For, if (δ_C)_C is the basis of cf_k(G) given by the class functions δ_C with δ_C(g) = 1_k if g belongs to the conjugacy class C and δ_C(g) = 0_k otherwise, then ⟨δ_C, δ_D⟩ = (|C|/|G|) δ_{C,D⁻¹}. We may now restate Lemma 3.20(b) as follows:

(3.38)  ⟨χ_V, χ_W⟩ = ⟨χ_W, χ_V⟩ = dim_k Hom_kG(V, W) · 1_k.

In particular, for any subgroup H ≤ G and any W ∈ Rep_fin kH and V ∈ Rep_fin kG,

(3.39)  ⟨χ_{W↑G}, χ_V⟩ = ⟨χ_W, χ_{V↓H}⟩.

This is the original version of Frobenius reciprocity; it follows from (3.38) and the isomorphism (3.8): Hom_kG(W↑G, V) ≅ Hom_kH(W, V↓H).

Orthogonality. We now derive the celebrated orthogonality relations; they also follow from the more general orthogonality relations (2.14).

Orthogonality Relations. Assume that G is finite and that char k ∤ |G|. Then, for S, T ∈ Irr kG,

  ⟨χ_S, χ_T⟩ = dim_k D(S) · 1_k  if S ≅ T,  and  ⟨χ_S, χ_T⟩ = 0_k  if S ≇ T.

Proof. This follows from (3.38) and Schur's Lemma: Hom_kG(S, T) = 0 if S ≇ T and Hom_kG(S, T) ≅ D(S) = End_kG(S) if S ≅ T. □

By the orthogonality relations, the irreducible characters χ_S are pairwise orthogonal for the form ⟨· , ·⟩ and ⟨χ_S, χ_S⟩ = 1_k holds whenever S is absolutely irreducible (Proposition 1.36). Thus, if k is a splitting field for G, then (χ_S)_{S∈Irr kG} is an orthonormal basis of the inner product space cf_k(G) by Corollary 3.21(a).
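These relations are easy to confirm numerically from a known character table. As a quick illustration (our own check, using the character table of S₃ from Example 3.12; over a splitting field dim_k D(S) = 1 for all S), the form (3.37) makes the irreducible characters orthonormal:

```python
from fractions import Fraction

# Character table of S3; columns: classes of (1), (1 2), (1 2 3), sizes 1, 3, 2.
sizes = [1, 3, 2]
chars = {
    '1':   [1,  1,  1],
    'sgn': [1, -1,  1],
    'V2':  [2,  0, -1],
}

def inner(chi, psi):
    # (3.37); every s in S3 is conjugate to its inverse, so psi(g^{-1})
    # may be replaced by psi(g) here.
    return Fraction(sum(z * x * y for z, x, y in zip(sizes, chi, psi)),
                    sum(sizes))

for a in chars:
    for b in chars:
        assert inner(chars[a], chars[b]) == (1 if a == b else 0)
```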

Multiplicities and Irreducibility. The following proposition uses the orthogonality relations to derive information on the multiplicity m(S, V) of S ∈ Irr kG in an arbitrary finite-dimensional representation V and on the dimension of the S-homogeneous component V(S). The proposition also gives a criterion for absolute irreducibility.

Proposition 3.22. Let G be finite with char k ∤ |G| and let V ∈ Rep_fin kG and S ∈ Irr kG. Then:

(a) ⟨χ_V, χ_V⟩ = 1_k if V is absolutely irreducible. The converse holds if char k = 0 or char k ≥ (dim_k V)².

(b) ⟨χ_S, χ_V⟩ = m(S, V) dim_k D(S) · 1_k.

(c) dim_k V(S) · 1_k = dim_{D(S)} S · ⟨χ_S, χ_V⟩.

Proof. (a) The first assertion is clear from the orthogonality relations, as we have already remarked. For the converse, assume that ⟨χ_V, χ_V⟩ = 1_k and char k = 0 or char k ≥ (dim_k V)². The decomposition V ≅ ⊕_{S∈Irr kG} S^{⊕m(S,V)} implies

  χ_V = ∑_{S∈Irr kG} m(S, V) χ_S.

Therefore, 1_k = ⟨χ_V, χ_V⟩ = ∑_{S∈Irr kG} m(S, V)² dim_k D(S) · 1_k by the orthogonality relations. Since dim_k S = dim_{D(S)} S · dim_k D(S) ≥ dim_k D(S), we have ∑_{S∈Irr kG} m(S, V)² dim_k D(S) ≤ (dim_k V)². In view of our hypothesis on k, it follows that the equality 1 = ∑_{S∈Irr kG} m(S, V)² dim_k D(S) does in fact hold in ℤ. Therefore, m(S, V) is nonzero for exactly one S ∈ Irr kG and we must also have dim_k D(S) = 1 = m(S, V). Thus, V ≅ S is absolutely irreducible.

(b) The above expression χ_V = ∑_{S∈Irr kG} m(S, V) χ_S in conjunction with the orthogonality relations gives

  ⟨χ_S, χ_V⟩ = ∑_{T∈Irr kG} m(T, V) ⟨χ_S, χ_T⟩ = m(S, V) dim_k D(S) · 1_k.

(c) From V(S) ≅ S^{⊕m(S,V)} and dim_k S = dim_{D(S)} S · dim_k D(S), we obtain dim_k V(S) = m(S, V) dim_{D(S)} S · dim_k D(S) = dim_{D(S)} S · ⟨χ_S, χ_V⟩. □

3.4.3. The Case of the Complex Numbers

The inner product ⟨· , ·⟩ is often replaced by a modified version when the base field k is the field ℂ of complex numbers. In this case, the formula χ_{V*}(g) = χ_V(g⁻¹) in Lemma 3.20 can also be written as

(3.40)  χ_{V*}(g) = \overline{χ_V(g)}   (g ∈ G)

with the bar denoting complex conjugation. Indeed, the Jordan canonical form of the operator g_V is a diagonal matrix having the eigenvalues ζ_i ∈ ℂ of g_V along the

diagonal. The Jordan form of g_V⁻¹ has the inverses ζ_i⁻¹ on the diagonal. Since all ζ_i are roots of unity, of order dividing the order of g, they satisfy ζ_i⁻¹ = \overline{ζ_i}, which implies (3.40). The inner product of characters χ_V and χ_W can therefore also be written as follows:

(3.41)  ⟨χ_V, χ_W⟩ = (1/|G|) ∑_{g∈G} χ_V(g) \overline{χ_W(g)}.

It is common practice to define a form (· , ·) : cf_ℂ(G) × cf_ℂ(G) → ℂ by

(3.42)  (φ, ψ) := (1/|G|) ∑_{g∈G} φ(g) \overline{ψ(g)}

for φ, ψ ∈ cf_ℂ(G). This form is a Hermitian inner product on cf_ℂ(G), that is, (· , ·) is ℂ-linear in the first variable, it satisfies (φ, ψ) = \overline{(ψ, φ)}, and it is positive definite: (φ, φ) ∈ ℝ_{>0} for all 0 ≠ φ ∈ cf_ℂ(G). While ⟨· , ·⟩ and (· , ·) are of course different on cf_ℂ(G), they do coincide on the subgroup that is spanned by the characters, taking integer values there by (3.38).

3.4.4. Primitive Central Idempotents of Group Algebras

We continue to assume that G is finite with char k ∤ |G|. Recall that kG is a symmetric algebra and, with λ chosen as in (3.14), the Casimir element is c_λ = ∑_{g∈G} g ⊗ g⁻¹ and γ_λ(1) = |G| 1 (§3.1.6). Thus, if k is a splitting field for G, then Theorem 2.17 gives the formula e(S) = (dim_k S/|G|) ∑_{g∈G} χ_S(g⁻¹)g for the central primitive idempotent of S ∈ Irr kG. Here, we give an independent verification of this formula, without assuming k to be a splitting field, although our argument will be identical to the one in the proof of Theorem 2.17. Recall that the primitive central idempotents e(S) of a semisimple algebra A are characterized by the conditions (1.47): e(S)_T = δ_{S,T} Id_S for S, T ∈ Irr A.

Proposition 3.23. Let G be finite with char k ∤ |G| and let S ∈ Irr kG. Then

  e(S) = (dim_{D(S)} S/|G|) ∑_{g∈G} χ_S(g⁻¹)g.

Proof. Writing e(S) = ∑_{g∈G} ε_g g with ε_g ∈ k, our goal is to prove the equality ε_g = (dim_{D(S)} S/|G|) χ_S(g⁻¹). By Example 3.13(b), the character of the regular representation of kG satisfies χ_reg(e(S)g⁻¹) = |G| ε_g; so we need to prove that

  χ_reg(e(S)g⁻¹) = dim_{D(S)} S · χ_S(g⁻¹).

But (kG)_reg ≅ ⊕_{T∈Irr kG} T^{⊕ dim_{D(T)} T} by Wedderburn's Structure Theorem and so

  χ_reg(e(S)g⁻¹) = ∑_{T∈Irr kG} dim_{D(T)} T · χ_T(e(S)g⁻¹).

Finally, (e(S)g⁻¹)_T = δ_{S,T} g_S⁻¹ by (1.47), and hence χ_T(e(S)g⁻¹) = δ_{S,T} χ_S(g⁻¹). Therefore, χ_reg(e(S)g⁻¹) = dim_{D(S)} S · χ_S(g⁻¹), as desired. □

The idempotent e(1) is identical to the idempotent e from Proposition 3.16. The "averaging" projection of a given V ∈ Rep kG onto the G-invariants V^G in Proposition 3.16 generalizes to the projection (1.49) of V = ⊕_{S∈Irr kG} V(S) onto the S-homogeneous component V(S):

(3.43)  e(S)_V : V → V(S),  v ↦ (dim_{D(S)} S/|G|) ∑_{g∈G} χ_S(g⁻¹) g.v .

In particular, if S = k_φ is a 1-dimensional representation, then we obtain the following projection of V onto the weight space V_φ = {v ∈ V | g.v = φ(g)v for all g ∈ G}:

(3.44)  V → V_φ,  v ↦ (1/|G|) ∑_{g∈G} φ(g⁻¹) g.v .
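Proposition 3.23 can be checked by hand in small cases. The sketch below is our own toy verification: it realizes ℚS₃ as dictionaries with convolution as multiplication, builds e(V₂) for the 2-dimensional irreducible representation V₂ (whose character is fix(s) − 1), and confirms that it is a central idempotent.

```python
from fractions import Fraction
from itertools import permutations

# Elements of QS3 are dicts {permutation tuple: coefficient}.
S3 = list(permutations(range(3)))

def mul(s, t):                       # composition: (s∘t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(3))

def inv(s):                          # inverse permutation
    return tuple(sorted(range(3), key=lambda i: s[i]))

def convolve(a, b):                  # multiplication in the group algebra
    c = {}
    for s, x in a.items():
        for t, y in b.items():
            st = mul(s, t)
            c[st] = c.get(st, 0) + x * y
    return c

def chi_V2(s):                       # character of V2: fixed points minus 1
    return sum(s[i] == i for i in range(3)) - 1

# e(S) = (dim S / |G|) * sum_g chi_S(g^{-1}) g, with dim V2 = 2, |S3| = 6
e = {s: Fraction(2 * chi_V2(inv(s)), 6) for s in S3}

ee = convolve(e, e)
assert all(ee.get(s, 0) == e[s] for s in S3)        # e(V2) is idempotent
for g in S3:                                        # e(V2) is central
    left, right = convolve({g: 1}, e), convolve(e, {g: 1})
    assert all(left.get(s, 0) == right.get(s, 0) for s in S3)
```

Applying (3.43) amounts to letting e act by the representation; for the regular representation this convolution recovers the V₂-homogeneous component, of dimension 4 = 2² by Corollary 3.21.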

Exercises for Section 3.4

If not mentioned otherwise, the group G and the field k are arbitrary in these exercises.

3.4.1 (kS₃ in characteristics 2 and 3). We know by Maschke's Theorem that the group algebra kS₃ fails to be semisimple exactly for char k = 2 and 3. Writing S₃ = ⟨x, y | y² = x³ = 1, xy = yx²⟩, show:

(a) If char k = 3, then Irr kS₃ = {1, sgn} and rad kS₃ = kS₃(x − 1).

(b) If char k = 2, then Irr kS₃ = {1, V₂}, where V₂ is the standard representation of S₃ (see Exercise 3.2.3), and rad kS₃ = kσ_{S₃} with σ_{S₃} = ∑_{g∈S₃} g.

3.4.2 (Standard involution and semisimplicity). Write the standard involution (3.28) of kG as a* = S(a) for a ∈ kG and recall that (ab)* = b*a* and a** = a for all a, b ∈ kG. Assuming that k is a subfield of ℝ, prove:

(a) aa* = 0 for a ∈ kG implies a = 0. Also, if aa*a = 0, then a = 0.

(b) All finite-dimensional subalgebras of kG that are stable under ·* are semisimple. (Use Theorem 1.39.)

Use (b) and the fact that finite-dimensional semisimple algebras over a field of characteristic 0 are separable (Exercises 1.4.11 and 1.5.6) to prove:

(c) If G is finite and k is any field with char k = 0, then kG is semisimple.

3.4.3 (Relative trace map and a relative version of Maschke's Theorem). Let H be a finite-index subgroup of G such that char k ∤ |G : H|.

(a) For V ∈ Rep kG, define a k-linear map τ_H^G : V^H → V by

  τ_H^G(v) = |G : H|⁻¹ ∑_{g∈G/H} g.v   (v ∈ V^H).

Show that τ_H^G is independent of the choice of the transversal for G/H, takes values in V^G, and is the identity on V^G. The map τ_H^G is called the relative trace map.

(b) Let 0 → U → V → W → 0 be a short exact sequence in Rep kG. Mimic the proof of Maschke's Theorem to show that if 0 → U↓H → V↓H → W↓H → 0 splits in Rep kH (Exercise 1.1.2), then the given sequence splits in Rep kG.

3.4.4 (Characters and conjugacy). Consider the following statements, with x, y ∈ G:

(i) ᴳx = ᴳy, that is, x and y are conjugate in G;

(ii) χ_V(x) = χ_V(y) for all V ∈ Rep_fin kG;

(iii) χ_S(x) = χ_S(y) for all S ∈ Irr_fin kG.

Show that (i) ⟹ (ii) ⟺ (iii). For G finite and k a splitting field for G with char k ∤ |G|, show that all three statements are equivalent.

3.4.5 (Values of complex characters). Let G be finite. A complex character of G is a character χ = χ_V for some V ∈ Rep_fin ℂG; if V is irreducible, then χ is called an irreducible complex character. For g ∈ G, show:

(a) χ(g) ∈ ℝ for every (irreducible) complex character χ of G if and only if g is conjugate to g⁻¹ in G.

(b) χ(g) ∈ ℚ for every (irreducible) complex character χ if and only if g is conjugate to g^m for every integer m with (m, |G|) = 1.

(c) |χ(g)| ≤ χ(1) for every complex character χ = χ_V and equality occurs precisely if g_V is a scalar operator. (Use the triangle inequality.)

(d) If G is nonabelian simple, then |χ(g)| < χ(1) for every complex character χ = χ_V with V ≇ 1^{⊕d} (d = dim_ℂ V) and every 1 ≠ g ∈ G.

3.4.6 (The p-core of a finite group). Let G be finite and assume that char k = p > 0. The p-core O_p(G) of G, by definition, is the unique largest normal p-subgroup of G or, equivalently, the intersection of all Sylow p-subgroups. Show that O_p(G) = {g ∈ G | g_S = Id_S for all S ∈ Irr kG} = G ∩ (1 + rad kG).

3.4.7 (Column orthogonality relations). Let G be finite and assume that k is a splitting field for G with char k ∤ |G|. Prove:

  ∑_{S∈Irr kG} χ_S(g⁻¹) χ_S(h) = |C_G(g)|  if g, h ∈ G are conjugate,  and  = 0  otherwise.

Here, C_G(g) denotes the centralizer of g in G.

3.4.8 (Generalized orthogonality relations). Let G be a finite group with char k ∤ |G|. Use the fact that the primitive central idempotents satisfy e(S)e(T) = δ_{S,T} e(S) to prove the following relations:

  (1/|G|) ∑_{g∈G} χ_S(gh) χ_T(g⁻¹) = δ_{S,T} (1/dim_{D(S)} S) χ_S(h).

For h = 1, this reduces to the ordinary orthogonality relations.

3.4.9 (Irreducibility and inner products). Give an example of a finite group G with char k ∤ |G| and a nonirreducible V ∈ Rep_fin kG such that ⟨χ_V, χ_V⟩ = 1_k. This shows that the hypothesis on char k in Proposition 3.22(a) cannot be omitted.

3.4.10 (Complete reducibility of the adjoint representation of kG). Consider the adjoint representation (kG)_ad of a finite group G; see Example 3.13(c). Use Exercise 3.3.9 and Maschke's Theorem to show that the following are equivalent:

(i) (kG)_ad is completely reducible;

(ii) char k does not divide the order of G/Z(G);

(iii) char k does not divide the size of any conjugacy class of G.

3.4.11 (Isomorphism of finite G-sets and permutation representations). Let X and Y be finite G-sets. For each subgroup H ≤ G, put X^H = {x ∈ X | h.x = x for all h ∈ H} and likewise for Y^H.

(a) Show that X and Y are isomorphic in the category G-Sets (Exercise 3.3.6) if and only if #X^H = #Y^H for all subgroups H ≤ G.

(b) Assuming that char k = 0, show that kX ≅ kY in Rep kG if and only if #X^H = #Y^H for all cyclic subgroups H ≤ G.

3.4.12 (Hermitian inner products). A Hermitian inner product on a ℂ-vector space V is a map H : V × V → ℂ such that

(i) the map H(·, v) : V → ℂ is a ℂ-linear form for each v ∈ V;

(ii) H(v, w) = \overline{H(w, v)} holds for all v, w ∈ V (bar = complex conjugation); and

(iii) H(v, v) > 0 for all 0 ≠ v ∈ V.

Note that (ii) implies that H(v, v) ∈ ℝ for all v ∈ V; so (iii) makes sense. Now let G be finite and let V ∈ Rep_fin ℂG. Prove:

(a) There exists a Hermitian inner product on V that is G-invariant, that is, H(g.v, g.w) = H(v, w) holds for all g ∈ G and v, w ∈ V.

(b) Let V be irreducible. Then the inner product in (a) is unique up to a positive real factor: if H′ is another Hermitian inner product on V that is preserved by G, then H′ = λH for some λ ∈ ℝ_{>0}.


3.5. Further Examples

In this section, we assume that char k = 0. We will be concerned with certain finite-dimensional representations of the symmetric groups S_n (n ≥ 3) and some of their subgroups. In particular, by Maschke's Theorem, all representations under consideration will be completely reducible and therefore determined, up to equivalence, by their character (Theorem 1.45).

3.5.1. Exterior Powers of the Standard Representation of S_n

We have seen (§3.2.4) that kS_n has two 1-dimensional representations, 1 and sgn, and an irreducible representation of dimension n − 1, the standard representation V_{n−1} (Exercise 3.2.3). Looking for new representations, we may try the "sign twist" of a given representation V:

  V^± := sgn ⊗ V.

Since χ_{V^±} = sgn · χ_V (Lemma 3.20), we know that V^± ≇ V if and only if χ_V(s) ≠ 0 for some odd permutation s ∈ S_n. Furthermore, if V is irreducible, then it is easy to see that V^± is irreducible as well (Exercise 3.3.12). In principle, we could also consider the dual representation V*. However, this yields nothing new for the symmetric groups:

Lemma 3.24. All finite-dimensional representations of kS_n are self-dual.

Proof. This is a consequence of the fact that each s ∈ S_n is conjugate to its inverse, because s and s⁻¹ have the same cycle type. In view of Lemma 3.20, it follows that χ_V = χ_{V*} holds for each V ∈ Rep_fin kS_n and so V ≅ V*. □

Our goal in this subsection is to prove the following proposition by an elementary if somewhat lengthy inner product calculation following [83, §3.2].

Proposition 3.25. The exterior powers ⋀^k V_{n−1} (0 ≤ k ≤ n − 1) of the standard representation V_{n−1} are all (absolutely) irreducible and pairwise nonequivalent.

Before proceeding to prove the proposition in general, let us illustrate the result by discussing some special cases. First, ⋀⁰V_{n−1} ≅ 1 is evidently irreducible and we also know that ⋀¹V_{n−1} = V_{n−1} is irreducible. Next, let M_n = ⊕_{i=1}^n kb_i be the standard permutation representation of S_n with s.b_i = b_{s(i)} for s ∈ S_n, and recall that M_n/V_{n−1} ≅ 1 (§3.2.4). By complete reducibility, we obtain the decomposition M_n ≅ 1 ⊕ V_{n−1}. It is easy to see that det M_n = sgn. Therefore, we also have det V_{n−1} = sgn and so ⋀^{n−1}V_{n−1} ≅ sgn by (3.32), which is clearly irreducible. From Exercise 3.3.15(b) and Lemma 3.24, we further obtain

(3.45)  ⋀^{n−1−k}V_{n−1} ≅ (⋀^k V_{n−1})^±

for all k. In particular, ⋀^{n−2}V_{n−1} ≅ V_{n−1}^±, which is irreducible as well.

Proof of Proposition 3.25. First, we check nonequivalence. The representations ⋀^k V_{n−1} and ⋀^{k′}V_{n−1} have the same dimension, (n−1 choose k) = (n−1 choose k′), if and only if k = k′ or k + k′ = n − 1. In the latter case, ⋀^{k′}V_{n−1} ≅ (⋀^k V_{n−1})^± by (3.45). Therefore, it suffices to show that ⋀^k V_{n−1} ≇ (⋀^k V_{n−1})^± for 2k ≠ n − 1. Put χ^k := χ_{⋀^k V_{n−1}}. We need to check that χ^k(s) ≠ 0 for some odd permutation s ∈ S_n. Let s be a 2-cycle. Then s acts as a reflection on M_n: the operator s_{M_n} has a simple eigenvalue −1 and the remaining eigenvalues are all 1. From the isomorphism M_n ≅ 1 ⊕ V_{n−1}, we see that the same holds for the operator s_{V_{n−1}}. Therefore, we may choose a basis v_1, …, v_{n−1} of V_{n−1} with s.v_i = v_i for 1 ≤ i ≤ n − 2 but s.v_{n−1} = −v_{n−1}. By (1.13), a basis of ⋀^k V_{n−1} is given by the elements ∧v_I = v_{i₁} ∧ v_{i₂} ∧ ⋯ ∧ v_{i_k} with I = {i₁, i₂, …, i_k} a k-element subset of [n − 1] = {1, 2, …, n − 1} in increasing order: i₁ < i₂ < ⋯ < i_k. Since s.∧v_I = ∧v_I if n − 1 ∉ I and s.∧v_I = −∧v_I otherwise, we obtain

  χ^k(s) = 1  if k = 0,  and  χ^k(s) = (n−2 choose k) − (n−2 choose k−1)  if 1 ≤ k ≤ n − 1.

Finally, (n−2 choose k) = (n−2 choose k−1) if and only if 2k = n − 1, which we have ruled out. This proves nonequivalence of the various ⋀^k V_{n−1}.

It remains to prove absolute irreducibility of ⋀^k V_{n−1}. By Proposition 3.22(a), this is equivalent to the condition ⟨χ^k, χ^k⟩ = 1. The case k = 0 being trivial, we will assume that k ≥ 1. Since M_n ≅ 1 ⊕ V_{n−1}, we have ⋀M_n ≅ ⋀1 ⊗ ⋀V_{n−1} in Rep kS_n (Exercise 3.3.15), and hence

  ⋀^k M_n ≅ ⊕_{r+s=k} ⋀^r 1 ⊗ ⋀^s V_{n−1} ≅ ⋀^k V_{n−1} ⊕ ⋀^{k−1}V_{n−1}.

Putting χ := χ_{⋀^k M_n}, we obtain ⟨χ, χ⟩ = ⟨χ^{k−1}, χ^{k−1}⟩ + 2⟨χ^{k−1}, χ^k⟩ + ⟨χ^k, χ^k⟩. Since the first and last terms on the right are positive integers and the middle term is nonnegative, our desired conclusion ⟨χ^k, χ^k⟩ = 1 will follow if we can show that ⟨χ, χ⟩ = 2.

To compute χ, we use the basis (∧b_I)_I of ⋀^k M_n, where ∧b_I = b_{i₁} ∧ ⋯ ∧ b_{i_k} and I = {i₁, …, i_k} is a k-element subset of [n] in increasing order. Each s ∈ S_n permutes the basis (∧b_I)_I up to a ± sign by (1.12). The diagonal (I, I)-entry of the matrix of s_{⋀^k M_n} with respect to this basis is given by

  {s}_I := 0  if s(I) ≠ I,  and  {s}_I := sgn(s|_I)  if s(I) = I.

Thus, χ(s) = ∑_I {s}_I = χ(s⁻¹) and so

  ⟨χ, χ⟩ = (1/n!) ∑_{s∈S_n} ( ∑_I {s}_I )²
         = (1/n!) ∑_{s∈S_n} ∑_{I,J} {s}_I {s}_J
         = (1/n!) ∑_{I,J} ∑_{s∈S_n} {s}_I {s}_J
         = (1/n!) ∑_{I,J} ∑_{s∈Y_{I,J}} sgn(s|_I) sgn(s|_J).

Here I and J run over the k-element subsets of [n] and Y_{I,J} consists of those s ∈ S_n that stabilize both I and J or, equivalently, all pieces of the partition [n] = (I ∪ J)ᶜ ⊔ (I \ J) ⊔ (J \ I) ⊔ (I ∩ J), where ·ᶜ denotes the complement in [n]. Thus, Y_{I,J} is a subgroup¹⁰ of S_n with the following structure:

  Y_{I,J} ≅ S_{(I∪J)ᶜ} × S_{I\J} × S_{J\I} × S_{I∩J}.

Since sgn(s|_I) = sgn(s|_{I\J}) sgn(s|_{I∩J}) for s ∈ Y_{I,J} and likewise for sgn(s|_J), we obtain

  ⟨χ, χ⟩ = (1/n!) ∑_{I,J} ∑_{s∈Y_{I,J}} sgn(s|_{I\J}) sgn(s|_{J\I}) sgn(s|_{I∩J})²
         = (1/n!) ∑_{I,J} ∑_{s∈Y_{I,J}} sgn(s|_{I\J}) sgn(s|_{J\I})
         = (1/n!) ∑_{I,J} |S_{(I∪J)ᶜ}| |S_{I∩J}| ∑_{α∈S_{I\J}} sgn(α) ∑_{β∈S_{J\I}} sgn(β)
(3.46)   = (1/n!) ∑_{I,J} |S_{(I∪J)ᶜ}| |S_{I∩J}| ( ∑_{α∈S_{I\J}} sgn(α) )².

The last equality above uses the fact that I and J have the same number of elements; so S_{I\J} and S_{J\I} are symmetric groups of the same degree and hence ∑_{β∈S_{J\I}} sgn(β) = ∑_{α∈S_{I\J}} sgn(α). If I \ J has at least two elements, then this sum is 0; otherwise the sum equals 1. Therefore, the only nonzero contributions to the last expression in (3.46) come from the following two cases.

Case 1: I = J. Then the (I, J)-summand of the last sum in (3.46) is (n − k)! k!. Since there are a total of (n choose k) summands of this type, their combined contribution is (1/n!) (n − k)! k! (n choose k) = 1.

¹⁰Subgroups of this form are called Young subgroups after Alfred Young (1873–1940); they will be considered more systematically later (§3.8.2).


Case 2: |I ∩ J| = k − 1. Now the (I, J)-summand is (n − k − 1)! (k − 1)! and there are (n choose k)(n − k)k summands of this type: choose I and then fix J by adding one point from outside I and deleting one point from inside I. The combined contribution of these summands is (1/n!) (n − k − 1)! (k − 1)! (n choose k)(n − k)k = 1.

Thus, we finally obtain that ⟨χ, χ⟩ = 2, as was our goal. □
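Proposition 3.25 can also be confirmed by brute force for a fixed small n. The sketch below is our own verification for n = 5: it computes χ_{⋀^k V₄}(s) for every s ∈ S₅ as the sum of principal k × k minors of the matrix of s on V₄ and checks that ⟨χ^k, χ^k⟩ = 1 for k = 0, …, 4.

```python
import numpy as np
from fractions import Fraction
from itertools import permutations, combinations

n = 5
S_n = list(permutations(range(n)))

def std_matrix(s):
    # Matrix of s on V_{n-1}, in the basis v_i = e_i - e_{n-1}
    # (0-based, with v_{n-1} = 0): s.v_i = v_{s(i)} - v_{s(n-1)}.
    A = np.zeros((n - 1, n - 1), dtype=int)
    for i in range(n - 1):
        if s[i] < n - 1:
            A[s[i], i] += 1
        if s[n - 1] < n - 1:
            A[s[n - 1], i] -= 1
    return A

def ext_char(A, k):
    # trace of the k-th exterior power = sum of principal k x k minors
    if k == 0:
        return 1
    m = A.shape[0]
    return sum(round(np.linalg.det(A[np.ix_(I, I)]))
               for I in combinations(range(m), k))

for k in range(n):                    # k = 0, ..., n - 1
    vals = [ext_char(std_matrix(s), k) for s in S_n]
    # <chi^k, chi^k>; every s in S_n is conjugate to its inverse
    assert Fraction(sum(v * v for v in vals), len(S_n)) == 1
```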

3.5.2. The Groups S₄ and S₅

The irreducible representations of kS₃ and the character table have already been determined (Example 3.12). Recall in particular that Irr kS₃ = {1, sgn, V₂}. Now we shall do the same for S₄ and S₅. Before we enter into the specifics, let us remind ourselves of some basic facts concerning the symmetric groups S_n in general.

Conjugacy Classes of S_n. The conjugacy classes of S_n are in one-to-one correspondence with the partitions of n, that is, sequences λ = (λ₁ ≥ λ₂ ≥ ⋯) with λ_i ∈ ℤ₊ and ∑_i λ_i = n. Specifically, the partition λ corresponds to the conjugacy class consisting of all s ∈ S_n whose orbits in [n] have sizes λ₁, λ₂, …; equivalently, s is a product of disjoint cycles of lengths λ₁, λ₂, …. The size of the conjugacy class corresponding to λ is given by

  n! / ∏_i i^{m_λ(i)} m_λ(i)! ,

where m_λ(i) = #{j | λ_j = i} (e.g., [199, Proposition 1.3.2]).

Representations of S₄. We can take advantage of the fact that S₄ has the group-theoretical structure of a semidirect product:

(3.47)  S₄ = V₄ ⋊ S₃

with V₄ = {(1), (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)} ≅ C₂ × C₂, the Klein 4-group, and with S₃ being identified with the stabilizer of 4 in S₄. Thus, there is a group epimorphism f : S₄ ↠ S₃ with Ker f = V₄ and f|_{S₃} = Id. Inflation along the algebra map φ = kf : kS₄ ↠ kS₃ allows us to view Irr kS₃ = {1, sgn, V₂} as a subset of Irr kS₄. Besides the obvious 1-dimensional representations, 1 and sgn, this yields the representation V₂, inflated from S₃ to S₄. By Proposition 3.25, we also have the irreducible representations V₃ and ⋀²V₃ ≅ (V₃)^±; see (3.45). Thus we have found five nonequivalent absolutely irreducible representations of kS₄, having dimensions 1, 1, 2, 3, and 3. Since the squares of these dimensions add up to the order of S₄, we know by Wedderburn's Structure Theorem that there are no further irreducible representations. Alternatively, since S₄ has five conjugacy classes, this could also be deduced from Proposition 3.6. Table 3.3 records the character table; see Example 3.13(d) for the character of the standard representation V₃.
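The class-size formula stated above is straightforward to evaluate. As a quick sanity check (our own), for n = 4 it reproduces the sizes 1, 6, 3, 8, 6 of the classes with cycle types 1⁴, 2·1², 2², 3·1, and 4, which appear as the "sizes" row of Table 3.3:

```python
from math import factorial, prod
from collections import Counter

def class_size(n, lam):
    # |class| = n! / prod_i ( i^{m_i} * m_i! ),  m_i = #{ j : lam_j = i }
    m = Counter(lam)
    return factorial(n) // prod(i**k * factorial(k) for i, k in m.items())

cycle_types = [(1, 1, 1, 1), (2, 1, 1), (2, 2), (3, 1), (4,)]
assert [class_size(4, lam) for lam in cycle_types] == [1, 6, 3, 8, 6]
assert sum(class_size(4, lam) for lam in cycle_types) == factorial(4)
```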

Table 3.3. Character table of S₄ (char k = 0)

  classes     (1)   (1 2)   (1 2 3)   (1 2 3 4)   (1 2)(3 4)
  sizes        1      6        8          6            3
  1            1      1        1          1            1
  sgn          1     −1        1         −1            1
  χ_{V₂}       2      0       −1          0            2
  χ_{V₃}       3      1        0         −1           −1
  χ_{V₃^±}     3     −1        0          1           −1

Representations of S₅. Unfortunately, no mileage is to be gotten from inflation here due to the scarcity of normal subgroups in S₅. However, Proposition 3.25 provides us with the following five nonequivalent absolutely irreducible representations: 1, V₄, ⋀²V₄, ⋀³V₄ ≅ V₄^±, and ⋀⁴V₄ ≅ sgn. Since the sum of the squares of their dimensions is short of the order of S₅, there must be further irreducible representations by Wedderburn's Structure Theorem. We shall later discuss a general result (Theorem 10.13) that, in the case of S₅, guarantees that all irreducible representations must occur as constituents of tensor powers of V₄. So let us investigate V₄^{⊗2}. First, the isomorphism (V₄^{⊗2})^{S₅} ≅ (V₄ ⊗ V₄*)^{S₅} ≅ End_{kS₅}(V₄) ≅ k tells us that 1 is an irreducible constituent of V₄^{⊗2} with multiplicity 1. Next, ⋀²V₄ is also an irreducible constituent of V₄^{⊗2} and χ_{⋀²V₄}(s) = ½(χ_{V₄}(s)² − χ_{V₄}(s²)) for s ∈ S₅. Let us accept these facts for now; they will be proved in much greater generality in (3.63) and (3.67) below. Since χ_{V₄} is known by Example 3.13(d), we obtain the following table of values for the characters of V₄ and ⋀²V₄:

  classes     (1)  (1 2)  (1 2 3)  (1 2 3 4)  (1 2 3 4 5)  (1 2)(3 4)  (1 2 3)(4 5)
  sizes        1    10      20        30          24           15           20
  χ_{V₄}       4     2       1         0          −1            0           −1
  χ_{⋀²V₄}     6     0       0         0           1           −2            0

Using these values and the fact that χ_{V₄^{⊗2}} = χ_{V₄}² (Lemma 3.20), we compute

  ⟨χ_{V₄}, χ_{V₄^{⊗2}}⟩ = (1/120)(1·4³ + 10·2³ + 20·1³ + 24·(−1)³ + 20·(−1)³) = 1.

This shows that V₄ is a constituent of V₄^{⊗2}, with multiplicity 1 (Proposition 3.22). Letting W denote the sum of the other irreducible constituents, we can write

  V₄^{⊗2} ≅ 1 ⊕ V₄ ⊕ ⋀²V₄ ⊕ W.

The character χ_W = χ_{V₄}² − 1 − χ_{V₄} − χ_{⋀²V₄} along with the character of W^± = sgn ⊗ W are given by the following table:

  classes     (1)  (1 2)  (1 2 3)  (1 2 3 4)  (1 2 3 4 5)  (1 2)(3 4)  (1 2 3)(4 5)
  sizes        1    10      20        30          24           15           20
  χ_W          5     1      −1        −1           0            1            1
  χ_{W^±}      5    −1      −1         1           0            1           −1

It is a simple matter to check that ⟨χ_W, χ_W⟩ = ⟨χ_{W^±}, χ_{W^±}⟩ = 1. Hence W and W^± are both absolutely irreducible by Proposition 3.22, and they are not equivalent to each other or to any of the prior irreducible representations. Altogether we have now found seven irreducible representations, which are all in fact absolutely irreducible. Since there are also seven conjugacy classes, we have found all irreducible representations of kS₅ by Proposition 3.6. For completeness, we record the entire character table as Table 3.4.

Table 3.4. Character table of S₅ (char k = 0)

  classes     (1)  (1 2)  (1 2 3)  (1 2 3 4)  (1 2 3 4 5)  (1 2)(3 4)  (1 2 3)(4 5)
  sizes        1    10      20        30          24           15           20
  1            1     1       1         1           1            1            1
  sgn          1    −1       1        −1           1            1           −1
  χ_{V₄}       4     2       1         0          −1            0           −1
  χ_{V₄^±}     4    −2       1         0          −1            0            1
  χ_W          5     1      −1        −1           0            1            1
  χ_{W^±}      5    −1      −1         1           0            1           −1
  χ_{⋀²V₄}     6     0       0         0           1           −2            0
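With Table 3.4 complete, the orthogonality relations give a consistency check. The verification below is our own; since every s ∈ S₅ is conjugate to s⁻¹, the pairing (3.37) reduces to a plain sum of products of table entries weighted by class sizes.

```python
from fractions import Fraction

# Rows of Table 3.4; columns are the classes of
# (1), (1 2), (1 2 3), (1 2 3 4), (1 2 3 4 5), (1 2)(3 4), (1 2 3)(4 5).
sizes = [1, 10, 20, 30, 24, 15, 20]
table = {
    '1':    [1,  1,  1,  1,  1,  1,  1],
    'sgn':  [1, -1,  1, -1,  1,  1, -1],
    'V4':   [4,  2,  1,  0, -1,  0, -1],
    'V4pm': [4, -2,  1,  0, -1,  0,  1],
    'W':    [5,  1, -1, -1,  0,  1,  1],
    'Wpm':  [5, -1, -1,  1,  0,  1, -1],
    'L2V4': [6,  0,  0,  0,  1, -2,  0],
}
assert sum(sizes) == 120
for a, chi in table.items():
    for b, psi in table.items():
        ip = Fraction(sum(z * x * y for z, x, y in zip(sizes, chi, psi)), 120)
        assert ip == (1 if a == b else 0)
```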

3.5.3. The Alternating Groups A₄ and A₅

It is a notable fact that our arbitrary field k of characteristic 0 is a splitting field for kS_n; this was observed above for n ≤ 5 and it is actually true for all n as we shall see in Section 4.5. However, the corresponding fact fails to hold for the alternating groups A_n. Indeed, even the group A₃ ≅ C₃ requires all third roots of unity to be contained in k if kA₃ is to be split (§3.2.1). Therefore, we shall assume in this subsection, in addition to char k = 0, that k is algebraically closed.

Conjugacy Classes of A_n. The conjugacy classes of A_n are quickly sorted out starting from those of S_n. Clearly, for any permutation s ∈ S_n, the A_n-conjugacy class of s is contained in the S_n-conjugacy class of s. By simple general considerations about restricting group actions to subgroups of index 2 (Exercise 3.5.1), there are two possibilities: if C_{S_n}(s) ⊈ A_n, then the A_n-class of s equals its S_n-class; otherwise, the S_n-class of s splits into two A_n-conjugacy classes of equal size.


It is also easy to see that the first case occurs precisely if s has at least one orbit of even size or at least two orbits of the same size, and the second if the orbit sizes of s are all odd and distinct (Exercise 3.5.2).

Representations of A₄. The semidirect product decomposition (3.47) of S₄ yields the following decomposition of the alternating group A₄:

(3.48)  A₄ = V₄ ⋊ C₃

with V₄ = {(1), (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)} and C₃ = A₃ = ⟨(1 2 3)⟩. By inflation from C₃, we obtain three 1-dimensional representations of A₄: the trivial representation 1, the representation φ : A₄ → k^× that sends (1 2 3) to a fixed primitive third root of unity ζ₃ ∈ k, and φ². Since A₄^{ab} ≅ C₃, there are no further 1-dimensional representations. The squares of the dimensions of all irreducible representations need to add up to |A₄| = 12 by Corollary 3.21(b); so we need one more irreducible representation, necessarily of dimension 3. For this, we try the restriction of the standard S₄-representation V₃ to A₄. While there is no a priori guarantee that the restriction V₃↓A₄ remains irreducible, the following inner product computation shows that this is indeed the case; note that only the classes of (1), (1 2 3), and (1 2)(3 4) in Table 3.3 give conjugacy classes of A₄, and χ_{V₃↓A₄} vanishes on the S₄-conjugacy class of (1 2 3), which breaks up into two A₄-classes of size 4:

  ⟨χ_{V₃↓A₄}, χ_{V₃↓A₄}⟩ = (1/12)(1·3² + 4·0² + 4·0² + 3·(−1)²) = 1.

Thus, we have found all irreducible representations of kA₄. The character table (Table 3.5) is easily extracted from the character table of S₄ (Table 3.3).

Table 3.5. Character table of A₄ (char k = 0, ζ₃ ∈ k^× a primitive third root of unity)

  classes        (1)  (1 2 3)  (1 3 2)  (1 2)(3 4)
  sizes           1      4        4         3
  1               1      1        1         1
  φ               1     ζ₃       ζ₃²        1
  φ²              1     ζ₃²      ζ₃         1
  χ_{V₃↓A₄}       3      0        0        −1
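Because the entries of Table 3.5 are not all real, the pairing (3.37) must pair each class with the class of its inverses; the classes of (1 2 3) and (1 3 2) are swapped by inversion. The following check is our own, with ζ₃ realized as exp(2πi/3):

```python
import cmath

z = cmath.exp(2j * cmath.pi / 3)            # primitive third root of unity
sizes = [1, 4, 4, 3]                        # classes (1), (123), (132), (12)(34)
inv_class = [0, 2, 1, 3]                    # inversion swaps the 3-cycle classes
table = [
    [1, 1,     1,     1],                   # trivial
    [1, z,     z * z, 1],                   # phi
    [1, z * z, z,     1],                   # phi^2
    [3, 0,     0,    -1],                   # V3 restricted to A4
]
for i, chi in enumerate(table):
    for j, psi in enumerate(table):
        # (3.37): pair the value at a class with psi on the inverse class
        ip = sum(sizes[c] * chi[c] * psi[inv_class[c]] for c in range(4)) / 12
        assert abs(ip - (1 if i == j else 0)) < 1e-9
```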

Subgroups of Index 2. By Corollary 3.5(a), all irreducible representations of A₅ must arise as constituents of restrictions of suitable irreducible representations of S₅. Since the signed versions of irreducible representations of S₅ have the same restrictions to A₅, we must look at

  1↓A₅,  V₄↓A₅,  W↓A₅,  and  ⋀²V₄↓A₅.


The following lemma gives a simple criterion for deciding which of these restrictions remain irreducible. The process of restricting irreducible representations to normal subgroups will be addressed in greater generality in Clifford's Theorem (§3.6.4).

Lemma 3.26. Let G be arbitrary and let H be a subgroup of G with |G : H| = 2. Then, for every V ∈ Irr_fin kG, the restriction V↓H is either irreducible or a direct sum of two irreducible kH-representations of equal dimension. The former case happens if and only if χ_V does not vanish on G \ H.

Proof. Note that, for any x ∈ G \ H, we have G = H ∪ xH and xH = Hx. Now let W be some irreducible subrepresentation of V↓H. Then V = kG.W = (kH + xkH).W = W + x.W. Since Hx = xH, it follows that x.W is a subrepresentation of V↓H. It is readily seen that x.W is in fact irreducible, because W is so, and x.W clearly has the same dimension as W. We conclude that either V↓H = W is irreducible or V↓H = W ⊕ x.W is the direct sum of two irreducible kH-representations of equal dimension. In the latter case, we have χ_V(x) = 0, because the matrix of x_V with respect to a basis of V that is assembled from bases of W and x.W has two blocks of 0-matrices of size dim_k W along the diagonal. Therefore, χ_V vanishes on G \ H if V↓H is not irreducible. Conversely, if χ_V vanishes on G \ H, then the following computation shows that V↓H is not irreducible:

  ⟨χ_{V↓H}, χ_{V↓H}⟩ = (1/|H|) ∑_{h∈H} χ_V(h) χ_V(h⁻¹) = (2/|G|) ∑_{g∈G} χ_V(g) χ_V(g⁻¹) = 2⟨χ_V, χ_V⟩ = 2. □

Note that the above proof only needs k to be algebraically closed of characteristic not dividing |G|.

Representations of A₅. Observe that the characters of the S₅-representations V₄ and W in Table 3.4 have nonzero value on the transposition (1 2), and hence both representations remain irreducible upon restriction to A₅ by Lemma 3.26. The character of ⋀²V₄, on the other hand, vanishes on S₅ \ A₅; so we must have

  ⋀²V₄↓A₅ = X ⊕ X′

for two 3-dimensional irreducible A₅-representations X and X′. These representations along with 1, V₄↓A₅, and W↓A₅ will form a complete set of irreducible representations of A₅ by Corollary 3.5(a). In order to determine χ_X and χ_{X′}, we extract the following information from the character table of S₅ (Table 3.4):


  classes          (1)  (1 2 3)  (1 2 3 4 5)  (2 1 3 4 5)  (1 2)(3 4)
  sizes             1      20        12           12           15
  χ_{V₄↓A₅}         4       1        −1           −1            0
  χ_{W↓A₅}          5      −1         0            0            1
  χ_{⋀²V₄↓A₅}       6       0         1            1           −2
  χ_X               3       α         β            γ            δ

The orthogonality relations give the following system of equations for the unknowns α, β, γ, and δ:

  0 = ⟨1, χ_X⟩ = (1/60)(3 + 20α + 12β + 12γ + 15δ),
  0 = ⟨χ_{V₄↓A₅}, χ_X⟩ = (1/60)(3·4 + 20α − 12β − 12γ),
  0 = ⟨χ_{W↓A₅}, χ_X⟩ = (1/60)(3·5 − 20α + 15δ),
  1 = ⟨χ_X, χ_X⟩ = (1/60)(3² + 20α² + 12β² + 12γ² + 15δ²).

Here we have used the fact that each element of A₅ is conjugate to its inverse. The system leads to α = 0, β + γ = 1, δ = −1, and β² − β = 1. Thus, β = ½(1 ± √5). The analogous system of equations also holds with χ_{X′} in place of χ_X. Let us choose + in β for χ_X and take − for χ_{X′}; this will guarantee that the required equation χ_X + χ_{X′} = χ_{⋀²V₄↓A₅} is satisfied. The complete character table of A₅

is given in Table 3.6. We remark that, for k = C, the representations X and X arise from identifying A5 with the group of rotational symmetries of the regular icosahedron; see Example 3.35 below. √ √ Table 3.6. Character table of A5 (char k = 0, β = 12 (1 + 5), γ = 12 (1 − 5))

classes    (1)   (1 2 3)   (1 2 3 4 5)   (2 1 3 4 5)   (1 2)(3 4)
sizes       1      20          12            12            15

χ1          1       1           1             1             1
χV4↓A5      4       1          −1            −1             0
χW↓A5       5      −1           0             0             1
χX          3       0           β             γ            −1
χX′         3       0           γ             β            −1


Exercises for Section 3.5

3.5.1 (Restricting group actions to subgroups of index 2). Let G be a group acting on a finite set X (§3.2.5), and let H be a subgroup of G with |G : H| = 2. For any x ∈ X, let Gx = {g ∈ G | g.x = x} be the isotropy group of x. Show:
(a) If Gx ⊈ H, then the G-orbit G.x is identical to the H-orbit H.x.
(b) If Gx ⊆ H, then G.x is the union of two H-orbits of equal size.

3.5.2 (Conjugacy in An). For s ∈ Sn, show that CSn(s) ⊆ An precisely if the orbit sizes of s are all odd and distinct.

3.5.3 (The 5-dimensional representation of S5). The goal of this exercise is to construct a 5-dimensional irreducible representation of S5 over any field k with char k ≠ 2, 3, or 5.
(a) For any field F, the standard action of GL2(F) on F² induces a permutation action of PGL2(F) = GL2(F)/F× on the projective line P¹(F) = (F² \ {0})/F×. Show that this action is faithful, that is, only the identity element of PGL2(F) fixes all elements of P¹(F), and doubly transitive in the sense of Exercise 3.2.4.¹¹
(b) Let F = Fq be the field with q elements and assume that char k does not divide (q − 1)q(q + 1). Conclude from (a) and Exercise 3.2.4 that the deleted permutation representation over k for the permutation action of PGL2(Fq) on P¹(Fq) is irreducible of dimension q.
(c) Conclude from (a) that the action of PGL2(F5) on P¹(F5) gives an embedding of PGL2(F5) as a subgroup of index 6 in S6. The standard permutation action of S6 on the set S6/PGL2(F5) of left cosets of PGL2(F5) gives an automorphism φ ∈ Aut(S6) such that S5 = φ(PGL2(F5)). Thus, the deleted permutation representation in (b) gives a 5-dimensional irreducible representation of S5 if char k ≠ 2, 3, or 5.
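Exercise 3.5.3 can also be explored by brute force. The sketch below (an illustration, not the intended pen-and-paper solution) enumerates the action of PGL2(F5) on the six points of P¹(F5) and confirms both that the action is faithful with |PGL2(F5)| = 120 and that it is sharply 3-transitive, as asserted in footnote 11.

```python
# Illustrative brute-force check that PGL2(F5) acts faithfully and sharply
# 3-transitively on the six points of P1(F5).  Not from the book's text.
from itertools import product

q = 5
# P1(F5): the five affine points (x : 1) and the point at infinity (1 : 0).
points = [(x, 1) for x in range(q)] + [(1, 0)]

def normalize(v):
    x, y = v
    if y % q != 0:
        inv = pow(y, q - 2, q)  # y^(-1) in F5, by Fermat's little theorem
        return ((x * inv) % q, 1)
    return (1, 0)               # y = 0 and x != 0: the point at infinity

perms = set()
for a, b, c, d in product(range(q), repeat=4):
    if (a * d - b * c) % q == 0:
        continue                # not invertible, so not in GL2(F5)
    image = tuple(points.index(normalize(((a * x + b * y) % q,
                                          (c * x + d * y) % q)))
                  for x, y in points)
    perms.add(image)

# Scalar matrices act trivially, so the 480 elements of GL2(F5) give
# exactly |PGL2(F5)| = 480/4 = 120 distinct permutations: faithfulness.
assert len(perms) == 120
# Sharp 3-transitivity: the images of the first three points determine the
# permutation, and all 6*5*4 = 120 ordered triples of distinct points occur.
assert len({p[:3] for p in perms}) == 120
print("PGL2(F5) acts sharply 3-transitively on the 6 points of P1(F5)")
```

In particular, the 120 permutations form a subgroup of index 6 in S6, which is the starting point of part (c).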

3.6. Some Classical Theorems

This section is devoted to some landmarks in group representation theory and some purely group-theoretical theorems whose proofs use representation theory as a tool. The reader is referred to Curtis' Pioneers of Representation Theory: Frobenius, Burnside, Schur, and Brauer [51] for a historical account of the formative stages in the development of representation theory.

3.6.1. Divisibility Theorems of Frobenius, Schur, and Itô

We first consider the dimensions of irreducible representations of a finite group G. Clearly, dimk S ≤ |G| for any S ∈ Irr kG, because S is an image of the regular

¹¹In fact, the action of PGL2(F) on P¹(F) is sharply 3-transitive: given three distinct points z1, z2, z3 ∈ P¹(F) and a second set of distinct points w1, w2, w3, there exists precisely one g ∈ PGL2(F) such that g.zi = wi for i = 1, 2, 3.


representation. In fact, dimk S ≤ |G : A| for any abelian subgroup A ≤ G provided k contains all eth roots of unity, where e = exp A is the exponent of A; this follows from Corollary 3.5 and (3.16). Our goal in this subsection is to show that, for a large enough field k of characteristic 0, the dimensions of all S ∈ Irr kG divide the index |G : A| of any abelian normal subgroup A ≤ G (Itô's Theorem). We shall repeatedly make use of the standard facts about integrality that were stated in §2.2.7.

Our starting point is a result of Frobenius from 1896 [78, §12]. We offer two proofs; the first quickly derives the theorem from a more general result about Casimir elements of Frobenius algebras, while the second, more traditional, proof works from scratch.

Frobenius' Divisibility Theorem. If S is an absolutely irreducible representation of a finite group G over a field k of characteristic 0, then dimk S divides |G|.

First Proof. After a field extension, we may assume that kG is split semisimple.

Choosing λ as in (3.14), we have cλ = Σ_{g∈G} g ⊗ g⁻¹ and γλ(1) = |G| ∈ Z ⊆ k by (3.15). Thus, Corollary 2.18 applies and we need to check that the Casimir element cλ is integral over Z. But cλ belongs to the subring ZG ⊗Z ZG ⊆ kG ⊗ kG, which is finitely generated over Z, and integrality follows.

The second proof uses some ideas that will also be useful later on. Note that the restriction to Z(kG) of the algebra map kG → Endk(S), a ↦ aS, takes values in EndkG(S) = D(S) and D(S) = k by hypothesis on S (Proposition 1.36). Thus, we obtain an algebra map Z(kG) → k, called the central character of S. The relationship to the ordinary character is easy to sort out:

(3.49)    dimk S · cS = χS(c)    (c ∈ Z(kG)).

Recall also that a k-basis of Z(kG) is given by the distinct class sums

σx = Σ_{g ∈ ᴳx} g,

where ᴳx denotes the conjugacy class of x in G (Example 3.14). Since χS is constant on ᴳx, (3.49) gives the formula

(3.50)    (σx)S = |ᴳx| χS(x) / dimk S    (x ∈ G).

Second Proof. By the orthogonality relations, ⟨χS, χS⟩ = 1. Therefore,

|G| / dimk S = (1/dimk S) Σ_{g∈G} χS(g) χS(g⁻¹)
             = (1/dimk S) Σ_x |ᴳx| χS(x) χS(x⁻¹)
             = Σ_x (σx)S χS(x⁻¹)    (by (3.50)),

where x runs over a full set of nonconjugate elements of G. Observe that each σx belongs to the subring ZG ⊆ kG, which is finitely generated over Z. Therefore, σx is integral over Z. Since (σx)S is a root of the same monic polynomial over Z as


σx, it follows that (σx)S ∈ A := {α ∈ k | α is integral over Z}. As was mentioned in §2.2.7, A is a subring of k such that A ∩ Q = Z. Finally, for any g ∈ G, the eigenvalues of the operator gS in some algebraic closure of k are roots of unity of order dividing the exponent of G, and hence they are all integral over Z. Therefore, χS(g) ∈ A for all g ∈ G, and so the above formula shows that |G|/dimk S ∈ A. It follows that |G|/dimk S ∈ A ∩ Q = Z, finishing the proof.
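For a quick sanity check of the theorem (an illustration only, not part of the text), the degree lists read off from the character tables of S5 (Table 3.4) and A5 (Table 3.6) can be tested directly:

```python
# Illustrative check of Frobenius' Divisibility Theorem for S5 and A5:
# over C, each irreducible degree divides the group order, and the sum of
# the squares of the degrees equals the group order.
groups = {
    "S5": (120, [1, 1, 4, 4, 5, 5, 6]),
    "A5": (60, [1, 3, 3, 4, 5]),
}
for name, (order, degrees) in groups.items():
    assert all(order % d == 0 for d in degrees), name
    assert sum(d * d for d in degrees) == order, name
print("Frobenius divisibility holds for S5 and A5")
```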

We remark that Frobenius' Divisibility Theorem and the remaining results in this subsection remain valid in positive characteristic as long as char k ∤ |G|. In fact, the generalized versions follow from the characteristic-0 results proved here (e.g., Serre [189, Section 15.5]). However, Frobenius' Divisibility Theorem generally no longer holds if char k divides |G|, as the following example shows.

Example 3.27 (Failure of Frobenius divisibility). Let k be a field with char k = p > 0 and let G = SL2(Fp), the group of all 2×2-matrices over Fp having determinant 1. Since G is the kernel of det: GL2(Fp) ↠ F×p, we have

|G| = |GL2(Fp)| / (p − 1) = (p² − 1)(p² − p) / (p − 1) = p(p + 1)(p − 1).

For the second equality, observe that there are p² − 1 choices for the first column of an invertible 2×2-matrix, and then p² − p choices for the second column. Via the embedding G ↪ SL2(k), the group G acts naturally on the vector space k², and hence on all its symmetric powers,

V(m) := Symᵐ(k²)    (m ≥ 0).

The dimension of V(m) is m + 1: if x, y is any basis of k², then the monomials x^{m−i} y^i (i = 0, 1, ..., m) form a basis of V(m). Moreover, V(m) is (absolutely) irreducible for m = 0, 1, ..., p − 1 (Exercise 3.6.1). However, dimk V(p − 3) = p − 2 does not divide |G| for p ≥ 7, because |G| ≡ 6 mod p − 2.

The first sharpening of Frobenius' Divisibility Theorem is due to I. Schur [188, Satz VII].

Proposition 3.28. Let S be an absolutely irreducible representation of a finite group G over a field k of characteristic 0. Then dimk S divides |G : Z G|.

Proof (following J. Tate). By hypothesis, the representation map kG → Endk(S) is surjective (Burnside's Theorem (§1.4.6)). Consequently, for each positive integer m, we have a surjective map of algebras

k[G×m] ≅ (kG)⊗m ↠ Endk(S)⊗m ≅ Endk(S⊗m),
(g1, ..., gm) ↦ (g1)S ⊗ ··· ⊗ (gm)S

with gi ∈ G; here the first isomorphism is Exercise 3.1.2 and the last is (B.17). It follows that S⊗m is an absolutely irreducible representation of the group G×m over k. For c ∈ Z := Z G, the operator cS ∈ Endk(S) is a scalar; so each


(c1, ..., cm) ∈ Z×m acts on S⊗m as the scalar (c1)S ··· (cm)S = (c1 ··· cm)S. Therefore, S⊗m is in fact an (absolutely irreducible) representation of the group G×m/C, where we have put C := {(c1, ..., cm) ∈ Z×m | c1 ··· cm = 1}. By Frobenius' Divisibility Theorem, dimk S⊗m = (dimk S)m divides |G×m/C| = |Z| · |G : Z|m. In other words, q = |G : Z| / dimk S satisfies q^m ∈ (1/|Z|)·Z for all m, and so Z[q] ⊆ (1/|Z|)·Z. By the facts about integrality stated in §2.2.7, this implies q ∈ Z, proving the proposition.

The culminating point of the developments described in this subsection is the following result, due to N. Itô from 1951 [110].

Itô's Theorem. Let G be finite, let A be a normal abelian subgroup of G, and let k be a field of characteristic 0. Then the dimension of every absolutely irreducible representation of G over k divides |G : A|.

Proof. Let S be an absolutely irreducible representation of kG. In order to show that dimk S divides the index |G : A|, we may replace k by its algebraic closure, k̄, and S by k̄ ⊗ S. Hence we may assume that k is algebraically closed. The result is clear if G = A, because all irreducible representations of kA have dimension 1. In general, we proceed by induction on |G : A|. The inductive step will be based on Proposition 3.28 and on a special case of Clifford's Theorem (§3.6.4), which we explain here from scratch.

The operators aS with a ∈ A have a common eigenvector. Thus, there is a group homomorphism φ: A → k× with Sφ = {s ∈ S | a.s = φ(a)s for all a ∈ A} ≠ 0. Put H = {g ∈ G | φ(g⁻¹ag) = φ(a) for all a ∈ A} and observe that H is a subgroup of G such that A ⊆ H. Furthermore, h.Sφ ⊆ Sφ for all h ∈ H and the sum Σ_{g∈G/H} g.Sφ is direct, because the various g.Sφ are distinct homogeneous components of S↓A. Since S = kG.Sφ by irreducibility, we must have S = ⊕_{g∈G/H} g.Sφ. Thus, Sφ is a subrepresentation of S↓H and the canonical map Sφ↑G_H → S from (3.8) is an isomorphism. It follows that Sφ is an (absolutely) irreducible kH-representation.

First assume that H ≠ G. Then we know by induction that dimk Sφ divides |H : A| and, hence, dimk S = |G : H| · dimk Sφ divides |G : A|. Finally, if H = G, then Sφ = S and so A acts by scalars on S. Letting ¯ denote the images in GL(S), we have Ā ≤ Z Ḡ. Hence, Proposition 3.28 gives that dimk S divides |Ḡ : Ā| and therefore also |G : A|, proving the theorem.

3.6.2. Burnside's pᵃqᵇ-Theorem

The principal results in this section are purely group theoretical, but the proofs employ representation-theoretic tools. We will work over the field C of complex


numbers and, as in the second proof of Frobenius' Divisibility Theorem, we will write

A := {s ∈ C | s is integral over Z};

this is a subring of C such that A ∩ Q = Z. Now let V be a finite-dimensional complex representation of a finite group G. Recall from the second proof of Frobenius' Divisibility Theorem that all character values χV(g) for g ∈ G are contained in the subring Z[ζm] ⊆ A, where m is the exponent of G and ζm := e^{2πi/m}. The following lemma contains the technicalities needed for the proof of Burnside's pᵃqᵇ-Theorem.

Lemma 3.29. Let G be finite and let S ∈ Irr CG and g ∈ G be such that dimC S and |ᴳg| are relatively prime. Then either χS(g) = 0 or gS is a scalar operator.

Proof. First, let V ∈ Repfin CG and g ∈ G be arbitrary and put s := χV(g)/dimC V ∈ Q(ζm).

Claim. s ∈ A if and only if χV(g) = 0 or gV ∈ EndC(V) is a scalar operator.

Indeed, χV(g) = 0 implies s = 0 ∈ A. Also, if gV ∈ EndC(V) is a scalar operator, necessarily of the form gV = ζmʳ IdV for some r, then again s = ζmʳ ∈ A. Conversely, assume that s ∈ A and χV(g) ≠ 0. Then 0 ≠ γ(s) ∈ A for all γ ∈ Γ := Gal(Q(ζm)/Q). Thus,

0 ≠ s̃ := Π_{γ∈Γ} γ(s) = Π_{γ∈Γ} γ(χV(g)) / dimC V ∈ A ∩ Q = Z

and so |s̃| ≥ 1. On the other hand, since each γ(χV(g)) is a sum of dimC V many mth roots of unity, the triangle inequality implies that |γ(χV(g))| ≤ dimC V for all γ ∈ Γ. It follows that |s| = 1 and |χV(g)| = dimC V, which forces all eigenvalues of gV to be identical. Therefore, gV is a scalar operator, proving the claim.

Now let V = S ∈ Irr CG and consider the class sum σg = Σ_{x ∈ ᴳg} x ∈ Z(CG). As we have argued in the second proof of Frobenius' Divisibility Theorem, the operator (σg)S is a scalar belonging to A. Thus (3.50) shows that |ᴳg|s = |ᴳg|χS(g)/dimC S = (σg)S ∈ A, where we write s = χS(g)/dimC S as above. Finally, if |ᴳg| and dimC S are relatively prime, then it follows that s ∈ A, because |ᴳg|s ∈ A and (dimC S)s = χS(g) ∈ A. We may now invoke the claim to finish the proof.

The following result of Burnside originally appeared in the second edition (1911) of his monograph Theory of Groups of Finite Order [37].

Burnside's pᵃqᵇ-Theorem. Every group of order pᵃqᵇ, where p and q are primes, is solvable.

Before embarking on the argument, let us make some preliminary observations. By considering composition series of the groups in question, the assertion of the theorem can be reformulated as the statement that every simple group G of order pᵃqᵇ is abelian. Assume that a > 0 and let P be a Sylow p-subgroup of G. Then Z P ≠ 1 and, for every g ∈ Z P, the centralizer CG(g) contains P and,


consequently, the size of the conjugacy class of g is a power of the prime q. Therefore, Burnside's pᵃqᵇ-Theorem will be a consequence of the following.

Theorem 3.30. Let G be a finite nonabelian simple group. Then {1} is the only conjugacy class of G having prime power size.

Proof. Assume, for a contradiction, that there is an element 1 ≠ g ∈ G such that |ᴳg| is a power of the prime p. Representation theory enters the argument via the following.

Claim. For 1 ≠ S ∈ Irr CG, we have χS(g) = 0 or p | dimC S.

To prove this, assume that p ∤ dimC S. Then Lemma 3.29 tells us that either χS(g) = 0 or else gS is a scalar operator. However, since S ≠ 1 and G is simple, the representation g ↦ gS is an embedding G ↪ GL(S). Thus the possibility that gS is a scalar would imply that g ∈ Z G, which in turn would force G to be abelian, contrary to our hypothesis. Thus, we are left with the other possibility, χS(g) = 0.

We can now complete the proof of the theorem as follows. Since the regular representation of CG has the form (CG)reg ≅ ⊕_{S∈Irr CG} S^{⊕ dimC S} by Maschke's Theorem (§3.4.1), we can write χreg(g) = 1 + ps with

s := Σ_{1≠S∈Irr CG} (dimC S / p) χS(g).

Note that s ∈ A by the claim and our remarks about character values above. On the other hand, since χreg(g) = 0 by (3.21), we obtain s = −1/p ∈ Q \ Z, contradicting the fact that A ∩ Q = Z and finishing the proof.

3.6.3. The Brauer-Fowler Theorem

The Brauer-Fowler Theorem [33] (1955) is another purely group-theoretical result. It is of historical significance inasmuch as it led to Brauer's program of classifying finite simple groups in terms of the centralizers of their involutions. Indeed, as had been conjectured by Burnside in his aforementioned monograph Theory of Groups of Finite Order ([37, Note M]), all finite nonabelian simple groups are of even order; this was eventually proved by Feit and Thompson in 1963 in their seminal odd-order paper [74]. Thus, any finite nonabelian simple group G must contain an involution, that is, an element 1 ≠ u ∈ G such that u² = 1. The Brauer-Fowler Theorem states that G is "almost" determined by the size of the centralizer CG(u) = {g ∈ G | gu = ug}:

Brauer-Fowler Theorem. Given n, there are at most finitely many finite nonabelian simple groups (up to isomorphism) containing an involution with centralizer of order n. In fact, each such group embeds into the alternating group A_{n²−1}.

In light of this result, Brauer proposed a two-step strategy to tackle the problem of classifying all finite simple groups: investigate the possible group-theoretical structures of the centralizers of their involutions and then, for each group C in the resulting list, determine the finitely many possible finite simple groups G containing


an involution u with CG(u) ≅ C. This program was the start of the classification project for finite simple groups. The project was essentially completed, with D. Gorenstein at the helm, in the early 1980s; some gaps had to be filled in later. In the course of these investigations, it turned out that, with a small number of exceptions, G is in fact uniquely determined by the involution centralizer C. (Exercise 3.6.2 considers the easiest instance of this.) For an overview of the classification project, its history, and the statement of the resulting Classification Theorem, see R. Solomon's survey article [196] or Aschbacher's monograph [7].

To explain the representation-theoretic tools used in the proof of the Brauer-Fowler Theorem, let G be any finite group and consider the following function, for any given positive integer n:

(3.51)    θn : G → Z,    g ↦ #{h ∈ G | hⁿ = g}.

Thus, θ2(1) − 1 is the number of involutions of G. Each θn is clearly a C-valued class function on G, and hence we know that θn is a C-linear combination of the irreducible complex characters of G (Corollary 3.21). In detail:

Lemma 3.31. Let G be a finite group and let θn be as in (3.51). Then:

θn = Σ_{S∈Irr CG} νn(S) χS    with    νn(S) := (1/|G|) Σ_{g∈G} χS(gⁿ).

In particular, Σ_{1≠S∈Irr CG} ν2(S) dimC S is the number of involutions of G.

Proof. Write θn = Σ_{S∈Irr CG} λS χS with λS ∈ C and note that θn(g⁻¹) = θn(g) for g ∈ G. Now use the orthogonality relations to obtain

λS = ⟨χS, θn⟩ = (1/|G|) Σ_{g∈G} χS(g) θn(g) = (1/|G|) Σ_{h∈G} χS(hⁿ) = νn(S).

The involution count formula just expresses θ2(1) − 1.
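Lemma 3.31 is easy to check by hand for S3. The sketch below (an illustration, not part of the text) computes ν2(S) for the three irreducible characters of S3 and recovers the involution count; the standard 2-dimensional representation V2 has character χ(g) = fix(g) − 1.

```python
# Illustrative computation of the indicators ν2(S) for S3 and of the
# involution count of Lemma 3.31.  Not from the book's text.
from itertools import permutations

G = list(permutations(range(3)))
identity = tuple(range(3))

def compose(g, h):
    # (g ∘ h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def sign(g):
    inversions = sum(1 for i in range(3) for j in range(i + 1, 3) if g[i] > g[j])
    return (-1) ** inversions

characters = {
    "1":   lambda g: 1,
    "sgn": sign,
    "V2":  lambda g: sum(1 for i in range(3) if g[i] == i) - 1,
}

def nu2(chi):
    # ν2(S) = (1/|G|) Σ_g χ_S(g²)
    return sum(chi(compose(g, g)) for g in G) / len(G)

# All three irreducible representations of S3 are realizable over R, so
# every indicator equals 1 (cf. Theorem 12.26).
assert all(nu2(chi) == 1 for chi in characters.values())

involutions = sum(1 for g in G if g != identity and compose(g, g) == identity)
count = sum(nu2(chi) * chi(identity) for name, chi in characters.items() if name != "1")
assert involutions == count == 3
print("S3 has", involutions, "involutions; all indicators equal 1")
```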

The complex numbers νn(S) are called the Frobenius-Schur indicators of the representation S; they will be considered in more detail and in greater generality in Section 12.5. In particular, we will show there that ν2(S) can only take the values 0 and ±1 for any S ∈ Irr CG (Theorem 12.26). Granting this fact for now, we can give the

Proof of the Brauer-Fowler Theorem. Let G be a finite nonabelian simple group containing an involution u ∈ G such that |CG(u)| = n, and let t denote the number of involutions of G. It will suffice to prove the following.

Claim. For some 1 ≠ g ∈ G, the size of the conjugacy class ᴳg is at most ((|G| − 1)/t)².


To see how the Brauer-Fowler Theorem follows from this, note that t ≥ |G|/n = |ᴳu|, because the conjugacy class ᴳu consists of involutions. Thus,

|ᴳg| ≤ ((|G| − 1)/t)² ≤ ((|G| − 1)·n/|G|)² < n²

and so the conjugation action of G on ᴳg gives rise to a homomorphism G → S_{n²−1}. Since G is simple and not isomorphic to C2, this map is injective and has image in A_{n²−1}.

It remains to prove the claim, which does in fact hold for any finite group G. Suppose on the contrary that |ᴳg| > ((|G| − 1)/t)² for all 1 ≠ g ∈ G and let k denote the number of conjugacy classes of G. Then |G| − 1 > (k − 1)((|G| − 1)/t)² or, equivalently, t² > (k − 1)(|G| − 1). In order to prove that this is absurd, we use Lemma 3.31 and the fact that all ν2(S) ∈ {0, ±1} to obtain the estimate

t = Σ_{1≠S∈Irr CG} ν2(S) dimC S ≤ Σ_{1≠S∈Irr CG} dimC S.

The inequality¹² (Σ_{i=1}^d xᵢ)² ≤ d Σ_{i=1}^d xᵢ², in conjunction with the equalities k = |Irr CG| and Σ_{S∈Irr CG} (dimC S)² = |G| (Corollary 3.21), then further implies

t² ≤ ( Σ_{1≠S∈Irr CG} dimC S )² ≤ (k − 1) Σ_{1≠S∈Irr CG} (dimC S)² = (k − 1)(|G| − 1).

This contradicts our prior inequality for t², thereby proving the claim and finishing the proof of the Brauer-Fowler Theorem.
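The counting statements above can be illustrated numerically; since the claim holds for any finite group, the sketch below (an illustration, not part of the proof) checks both the inequality t² ≤ (k − 1)(|G| − 1) and the claim for the non-simple group S4, which has t = 9 involutions and k = 5 conjugacy classes.

```python
# Illustrative check of the counting inequality from the Brauer-Fowler
# argument for G = S4.  Not from the book's text.
from itertools import permutations

n = 4
G = list(permutations(range(n)))
identity = tuple(range(n))

def compose(g, h):
    return tuple(g[h[i]] for i in range(n))

def inverse(g):
    inv = [0] * n
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

# t = number of involutions.
t = sum(1 for g in G if g != identity and compose(g, g) == identity)

# Conjugacy classes, computed directly: h g h^(-1) over all h.
classes = []
seen = set()
for g in G:
    if g in seen:
        continue
    cls = {compose(compose(h, g), inverse(h)) for h in G}
    seen |= cls
    classes.append(cls)
k = len(classes)

assert t == 9 and k == 5
assert t ** 2 <= (k - 1) * (len(G) - 1)     # 81 <= 4 * 23
# The Claim: some nontrivial class has size at most ((|G| - 1)/t)^2.
bound = ((len(G) - 1) / t) ** 2
assert any(len(c) <= bound for c in classes if len(c) > 1)
print("t =", t, "and k =", k, ": inequality and claim verified for S4")
```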

3.6.4. Clifford Theory

Originally developed by A. H. Clifford [46] in 1937, Clifford theory studies the interplay between the irreducible representations of a group G, not necessarily finite, over an arbitrary field k and those of a normal subgroup N ⊴ G having finite index in G. Special cases have already been considered in passing earlier (Lemma 3.26 and the proof of Itô's Theorem). We will now address the passage between Rep kG and Rep kN more systematically and in greater generality, the principal tools being restriction ·↓N = Res^{kG}_{kN} and induction ·↑G = Ind^{kG}_{kN}. In fact, it turns out that the theory is not specific to group algebras and can be explained at little extra cost in the more general setting of crossed products.

Crossed Products. First, let N be an arbitrary normal subgroup of G and put Γ = G/N. It is an elementary fact that if {x̄ | x ∈ Γ} ⊆ G is any transversal for the cosets of N in G, then kG = ⊕_{x∈Γ} kN x̄ and kN x̄ = x̄ kN for all x. Thus,

¹²This inequality follows from the Cauchy-Schwarz inequality |x · y| ≤ |x| |y| in Euclidean space Rᵈ by taking y = (1, 1, . . . , 1).


putting B = kG and B_x = kN x̄, the algebra B is Γ-graded (Exercise 1.1.12) and each homogeneous component B_x contains a unit, for example x̄:

(3.52)    B = ⊕_{x∈Γ} B_x,    B_x B_y ⊆ B_{xy},    and    B_x ∩ B× ≠ ∅ for all x ∈ Γ.

In general, any Γ-graded algebra B as in (3.52), with Γ a group and with identity component A = B₁, is called a crossed product of Γ over A and denoted by B = A∗Γ. Thus, the group algebra kG is a crossed product, kG = (kN) ∗ Γ with Γ = G/N. It is also clear that, for any crossed product B = A ∗ Γ and any submonoid Δ ⊆ Γ, the sum A ∗ Δ := ⊕_{x∈Δ} B_x is a subalgebra of B. Furthermore, for any choice of units x̄ ∈ B_x ∩ B×, one easily shows (Exercise 3.6.5) that the homogeneous components of B = A ∗ Γ are given by

(3.53)    B_x = A x̄ = x̄ A.

Therefore, the units x̄ are determined up to a factor in A× and conjugation by x̄ gives an automorphism x̄(·)x̄⁻¹ ∈ AutAlgk(A), which depends on the choice of x̄ only up to an inner automorphism of A.

Twisting. Let B = A ∗ Γ be an arbitrary crossed product. Then each homogeneous component B_x is an (A, A)-bimodule via multiplication in B. Thus, for any W ∈ Rep A, we may define

ˣW := B_x ⊗_A W ∈ Rep A    and    Γ_W := {x ∈ Γ | ˣW ≅ W}.

Lemma 3.32. With the above notation, Γ_W is a subgroup of Γ and ˣW ≅ ʸW if and only if xΓ_W = yΓ_W.

Proof. It follows from (B.5) and (3.53) that multiplication of B gives an isomorphism B_x ⊗_A B_y ≅ B_{xy} as (A, A)-bimodules for any x, y ∈ Γ. By associativity of the tensor product ⊗_A and the canonical isomorphism A ⊗_A W ≅ W in Rep A, we obtain isomorphisms ˣ(ʸW) ≅ ˣʸW and ¹W ≅ W in Rep A. Both assertions of the lemma are immediate consequences of these isomorphisms.

Restriction. The main result of this subsection concerns the behavior of irreducible representations of a crossed product B = A ∗ Γ with Γ finite under restriction to the identity component A. This covers the process of restricting irreducible representations of an arbitrary group G to a normal subgroup N ⊴ G having finite index in G. For any W ∈ Rep A, we will consider the subalgebra

B_W := A ∗ Γ_W ⊆ B.

Clifford's Theorem. Let B = A ∗ Γ be a crossed product with Γ a finite group. Then, for any V ∈ Irr B, the restriction V↓_A is completely reducible of finite length.


More precisely, if S is any irreducible subrepresentation of V↓_A and V(S) is the S-homogeneous component of V↓_A, then

V↓_A ≅ ( ⊕_{x∈Γ/Γ_S} ˣS )^{⊕ length V(S)}.

Furthermore, V(S) is a subrepresentation of V↓_{B_S} and V ≅ V(S)↑^B_{B_S}.

Proof. Since Γ is finite, the restriction V↓_A is finitely generated. Hence there exists a maximal subrepresentation M ⫋ V↓_A (Exercise 1.1.3). All x̄.M with x ∈ Γ are maximal subrepresentations of V↓_A as well and (3.53) implies that x̄.(ȳ.M) = x̄ȳ.M for x, y ∈ Γ. Therefore, ∩_{x∈Γ} x̄.M is a proper subrepresentation of V and, consequently, ∩_{x∈Γ} x̄.M = 0 by irreducibility. This yields an embedding V↓_A ↪ ⊕_{x∈Γ} V/x̄.M, proving that V↓_A is completely reducible of finite length.

Fix an irreducible subrepresentation S ⊆ V↓_A. Then all x̄.S are also irreducible subrepresentations of V↓_A and ˣS ≅ x̄.S via x̄ ⊗ s ↦ x̄.s. The sum Σ_{x∈Γ} x̄.S is a nonzero subrepresentation of V and so we must have Σ_{x∈Γ} x̄.S = V. By Corollary 1.29 and Lemma 3.32, the irreducible constituents of V↓_A are exactly the various twists ˣS with x ∈ Γ/Γ_S. Therefore, V↓_A = ⊕_{x∈Γ/Γ_S} V(ˣS). Furthermore, for x, y ∈ Γ, we certainly have x̄.V(ʸS) ⊆ V(ˣʸS), because x̄.ʸS ≅ ˣʸS. In particular, V(S) is stable under the action of B_S and x̄.V(S) = V(ˣS) for all x ∈ Γ. Consequently, V↓_A = ⊕_{x∈Γ/Γ_S} x̄.V(S) ≅ ( ⊕_{x∈Γ/Γ_S} ˣS )^{⊕ length V(S)}. It follows that V ≅ V(S)↑^B_{B_S} via the canonical map V(S)↑^B_{B_S} → V that corresponds to the inclusion V(S) ↪ V↓_{B_S} in Proposition 1.9. This completes the proof of Clifford's Theorem.

Outlook. More can be said in the special case where B = kG = (kN) ∗ (G/N) for a finite group G and N ⊴ G. If k is a field of characteristic 0 that contains sufficiently many roots of unity, then one can use Schur's theory of projective representations to show that, in the situation of Clifford's Theorem, length V(S) divides the order of Γ_S or, equivalently, dimk V / dimk S divides |Γ|. This allows for a generalization of Itô's Theorem to subnormal abelian subgroups A ≤ G; see [109, Corollaries 11.29 and 11.30]. The monograph [170] by Passman is devoted to the ring-theoretic properties of crossed products B = A ∗ Γ with Γ (mostly) infinite. Crossed products also play a crucial role in the investigation of division algebras, Galois cohomology, and Brauer groups; see [173, Chapter 14] for an introduction to this topic.


Exercises for Section 3.6

3.6.1 (Irreducible representations of SL2(Fp) in characteristic p). Let k be a field with char k = p > 0 and let G = SL2(Fp). To justify a claim made in Example 3.27, show:
(a) The number of p-regular conjugacy classes of G is p.
(b) V(m) = Symᵐ(k²) ∈ Irr kG for m = 0, 1, ..., p − 1. Consequently, the V(m) are a full set of nonequivalent irreducible representations of kG (Theorem 3.7).

3.6.2 (The Brauer program for involution centralizer C2). Let G be a finite group containing an involution u ∈ G such that the centralizer CG(u) has order 2. Use the column orthogonality relations (Exercise 3.4.7) to show that G_ab = G/[G, G] has order 2. Consequently, if G is simple, then G ≅ C2.

3.6.3 (Frobenius-Schur indicators). Let G be a finite group and assume that char k ∤ |G|. Recall that the nth Frobenius-Schur indicator of V ∈ Repfin kG is defined by νn(V) = (1/|G|) Σ_{g∈G} χV(gⁿ) (Lemma 3.31). The goal of this exercise is to show that, if n is relatively prime to |G|, then νn(S) = 0 for all 1 ≠ S ∈ Irr kG. To this end, prove:
(a) The nth power map G → G, g ↦ gⁿ, is a bijection.
(b) For any field k and any 1 ≠ S ∈ Irr kG, the equalities Σ_{g∈G} (gⁿ)S = Σ_{g∈G} gS = 0 hold in Endk(S). Consequently, Σ_{g∈G} χS(gⁿ) = 0.

3.6.4 (Frobenius' formula). Let G be a finite group and let C1, ..., Ck be arbitrary conjugacy classes of G. Put

N(C1, ..., Ck) = #{(g1, ..., gk) ∈ C1 × ··· × Ck | g1 g2 ··· gk = 1}.

For any S ∈ Irr CG, let χ_{S,i} denote the common value of all χS(g) with g ∈ Ci. Prove the equalities (a) and (b) below for the product of the class sums σi = Σ_{g∈Ci} g ∈ Z(CG) to obtain the following formula of Frobenius:

N(C1, ..., Ck) = (|C1| ··· |Ck| / |G|) Σ_{S∈Irr CG} χ_{S,1} ··· χ_{S,k} / (dimC S)^{k−2}.

(a) χreg(σ1 σ2 ··· σk) = |G| · N(C1, ..., Ck).
(b) χreg(σ1 σ2 ··· σk) = |C1| ··· |Ck| Σ_{S∈Irr CG} χ_{S,1} ··· χ_{S,k} / (dimC S)^{k−2}. (Use (3.50).)

3.6.5 (Basic properties of crossed products). Let B = A ∗ Γ be a crossed product of Γ over A and fix units x̄ ∈ B_x ∩ B×. Show that x̄⁻¹ ∈ B_{x⁻¹}, B_x = A x̄ = x̄ A, and B_x B_y = B_{xy} holds for all x, y ∈ Γ.

3.6.6 (Simplicity of crossed products). Let B = A ∗ Γ be a crossed product. Assuming that A is a simple algebra and that all x̄(·)x̄⁻¹ ∈ AutAlgk(A) with 1 ≠ x ∈ Γ are outer automorphisms of A, show that the algebra B is simple.
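Frobenius' formula from Exercise 3.6.4 is easy to test by brute force. The sketch below (an illustration, not a solution to the exercise) compares a direct count with the character-theoretic expression for G = S3, taking C1 = C2 to be the class of transpositions and C3 the class of 3-cycles.

```python
# Illustrative numerical check of Frobenius' formula for G = S3.
# Not from the book's text.
from itertools import permutations, product

G = list(permutations(range(3)))
identity = tuple(range(3))

def compose(g, h):
    return tuple(g[h[i]] for i in range(3))

transpositions = [g for g in G if g != identity and compose(g, g) == identity]
three_cycles = [g for g in G if g != identity and compose(g, g) != identity]
C = [transpositions, transpositions, three_cycles]

# Direct count of solutions of g1 g2 g3 = 1.
N = sum(1 for g1, g2, g3 in product(*C)
        if compose(compose(g1, g2), g3) == identity)

# Character-theoretic count.  Each entry: (degree, value on transpositions,
# value on 3-cycles) for the three irreducible characters of S3.
chars = [
    (1, 1, 1),    # trivial
    (1, -1, 1),   # sign
    (2, 0, -1),   # standard
]
sizes = [len(c) for c in C]   # 3, 3, 2
order = len(G)
k = len(C)
formula = (sizes[0] * sizes[1] * sizes[2] / order) * sum(
    (chi_t * chi_t * chi_c) / d ** (k - 2) for d, chi_t, chi_c in chars)
assert N == formula == 6
print("both counts give N =", N)
```

The direct count agrees with the formula: a pair of distinct transpositions multiplies to a 3-cycle, and there are 3·2 = 6 such pairs, each completed uniquely by g3.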


3.7. Characters, Symmetric Polynomials, and Invariant Theory

This section presents some applications of the material from the previous sections to invariant theory. Along the way, we also discuss some connections between characters and symmetric functions.

3.7.1. Symmetric Polynomials

We start by collecting some basic facts concerning symmetric polynomials.¹³ Throughout, x1, x2, ..., xd denote independent commuting variables over k. A polynomial in x1, x2, ..., xd is called symmetric if it is unchanged under any permutation of the variables: f(x1, ..., xd) = f(x_{s(1)}, ..., x_{s(d)}) for any s ∈ Sd. Foremost among them are the elementary symmetric polynomials, the kth of which is defined by

ek := ek(x1, x2, ..., xd) = Σ_{1≤i1<i2<···<ik≤d} x_{i1} x_{i2} ··· x_{ik} = Σ_{I⊆[d], |I|=k} Π_{i∈I} xi.

[…] n. Then we have the following isomorphisms in Rep kG:

(3.72)    P : Oⁿ(V) ≅ STⁿ(V*) ≅ {symmetric multilinear maps Vⁿ → k},

where the first isomorphism is the one from Lemma 3.36 and the second is (3.71).

In the invariant theory literature, this isomorphism is called polarization and its inverse, restitution; see Weyl [213, p. 5ff] or Procesi [174, p. 40ff]. To find the polarization P f of a given f ∈ Oⁿ(V), choose any preimage t = t_f ∈ (V*)^⊗n for f


and symmetrize it. Letting O_t = Sn.t denote the Sn-orbit of t, the symmetrization (3.66) may be written as

S t = (1/|O_t|) Σ_{t′∈O_t} t′ ∈ STⁿ(V*).

Then P f is the image of S t in MultLin(Vⁿ, k) under the map (3.71).

Examples 3.38 (Some polarizations). Fix a basis x1, ..., xd of V* and identify V with kᵈ via v ↔ (⟨xi, v⟩)i. For simplicity, let us write x = x1 and y = x2. The element x² ∈ O²(V) comes from x ⊗ x ∈ (V*)^⊗2, which is already symmetric. The tensor x ⊗ x corresponds in (3.71) to the symmetric bilinear map V² → k that is given by ((ξ1, ...), (ξ2, ...)) ↦ ξ1ξ2. Denoting this bilinear map by x1x2, we obtain

P(x²) = x1x2.

For xy ∈ O²(V), we need to symmetrize: S(x ⊗ y) = ½(x ⊗ y + y ⊗ x) ∈ ST²(V*), which corresponds to the bilinear map

((ξ1, η1, ...), (ξ2, η2, ...)) ↦ ½(ξ1η2 + η1ξ2).

Using the above notational convention, this yields

P(xy) = ½(x1y2 + y1x2).

Similarly, P(x²y) comes from S(x ⊗ x ⊗ y) = ⅓(x ⊗ x ⊗ y + x ⊗ y ⊗ x + y ⊗ x ⊗ x), whence

P(x²y) = ⅓(x1x2y3 + x1y2x3 + y1x2x3).

Similarly, P (x 2 y) comes from S (x ⊗ x ⊗ y) = 13 (x ⊗ x ⊗ y + x ⊗ y ⊗ x + y ⊗ x ⊗ x), whence 1 P (x 2 y) = (x 1 x 2 y3 + x 1 y2 x 3 + y1 x 2 x 3 ). 3 For future reference, we give an explicit description of the restitution map. By our hypothesis on k, the canonical map O n (V ) → kV is an embedding (Exercise C.3.2); so we may describe elements of O n (V ) as functions on V . Lemma 3.39. The preimage of a symmetric multilinear map l : V n → k under the isomorphism (3.72) is the degree-n form v → l (v, v, . . . , v) ∈ O n (V ). Proof. Let (x i )1d be a basis of V and let (x i )1d be the dual basis of V ∗ . For i i I = (i 1, . . . , i n ) ∈ [d]n , put x I = x i1 ⊗ · · · ⊗ x in ∈ (V ∗ ) ⊗n and let x 11 · · · x nn denote the multilinear map V n → k that corresponds to x I under the isomorphism

i i (3.71). Then l = I λ I x 11 · · · x nn with λ I = l (x i1, . . . , x in ). The preimage of l in

(3.71) is I λ I x I ∈ STn (V ∗ ) and the image of I λ I x I under the isomorphism of

Lemma 3.36 is the degree-n form f l := I λ I x i1 · · · x in ∈ O n (V ). Thus, P f l = l.

3.8. Decomposing Tensor Powers


The following computation shows that l(v, ..., v) = f_l(v), proving the lemma:

l(v, ..., v) = l( Σ_i ⟨xⁱ, v⟩ x_i, ..., Σ_i ⟨xⁱ, v⟩ x_i )
             = Σ_I ⟨x^{i1}, v⟩ ··· ⟨x^{in}, v⟩ l(x_{i1}, ..., x_{in})
             = Σ_I x^{i1} ··· x^{in}(v) λ_I.
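The passage between forms and symmetric multilinear maps can be made concrete. The sketch below (an illustration with V = Q³ in place of a general field k) checks that the polarization P(x²y) from Examples 3.38 is symmetric in its arguments and that restitution, as in Lemma 3.39, recovers the cubic form x²y.

```python
# Illustrative check of polarization and restitution for f = x^2 y in
# O^3(V), V = Q^3, following Examples 3.38 and Lemma 3.39.
from fractions import Fraction
from itertools import permutations, product

def f(v):
    return v[0] * v[0] * v[1]   # the cubic form x^2 y

def l(v1, v2, v3):
    # The polarization P(x^2 y) = (1/3)(x1 x2 y3 + x1 y2 x3 + y1 x2 x3),
    # a symmetric trilinear map V^3 -> k.
    third = Fraction(1, 3)
    return third * (v1[0] * v2[0] * v3[1]
                    + v1[0] * v2[1] * v3[0]
                    + v1[1] * v2[0] * v3[0])

vectors = list(product(range(-2, 3), repeat=3))

# l is symmetric in its three arguments ...
v1, v2, v3 = (1, 2, 3), (-1, 0, 2), (2, -2, 1)
values = {l(*[(v1, v2, v3)[i] for i in s]) for s in permutations(range(3))}
assert len(values) == 1
# ... and restitution recovers the original form: l(v, v, v) = f(v).
assert all(l(v, v, v) == f(v) for v in vectors)
print("restitution of P(x^2 y) recovers x^2 y on all test vectors")
```

Exact rational arithmetic via `Fraction` avoids any floating-point issues with the factor 1/3.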

Exercises for Section 3.8

3.8.1 (Sn-coinvariants). (a) Consider the coinvariants (V^⊗n)_{Sn} (Exercise 3.3.4) for the place permutation action of Sn on V^⊗n. Show that the canonical map V^⊗n ↠ (V^⊗n)_{Sn} is a morphism in Rep kG which has the same kernel as the canonical map V^⊗n ↠ Symⁿ V. Thus, Symⁿ V ≅ (V^⊗n)_{Sn} in Rep kG.
(b) For V in Repfin kG, show that (Symⁿ V)* ≅ Symⁿ(V*) in Rep kG provided char k ∤ n!, and always (∧ⁿ V)* ≅ ∧ⁿ(V*) in Rep kG.

3.8.2 (Groups of odd order). Let G be finite of odd order and let char k = 0. For any 1 ≠ S ∈ Irr kG, show:
(a) ⟨1, χ_{Sym² S}⟩ = ⟨1, χ_{∧² S}⟩. (Use (3.63) and Exercises 3.7.1 and 3.6.3.)
(b) S ≇ S*. (Use (3.67) and part (a).)
(c) The number c of conjugacy classes of G satisfies |G| ≡ c mod 16.

3.8.3 (Generating tensor powers). Prove the following statements by modifying the proof of Proposition 3.37 and referring to Exercise C.3.2:
(a) (V^⊗n)^{Sn} = ⟨v^⊗n | v ∈ V⟩_k provided |k| ≥ n.
(b) Let 0 ≠ f ∈ Oᵐ(V). If |k| ≥ n + m, then (V^⊗n)^{Sn} = ⟨v^⊗n | v ∈ V_f⟩_k.

Chapter 4

Symmetric Groups

This chapter presents the representation theory of the symmetric groups Sn over an algebraically closed base field of characteristic 0. The main focus will be on the sets Irr Sn of all irreducible representations (up to isomorphism) of the various Sn. We will follow Okounkov and Vershik [165], [208] rather than taking the classical approach invented by the originators of the theory, Frobenius, Schur, and especially Young [215], about a century earlier. A remarkable feature of the Okounkov-Vershik method is that the entire chain of groups

(4.1)    1 = S1 ≤ S2 ≤ ··· ≤ Sn−1 ≤ Sn ≤ ···

is considered simultaneously and relations between the irreducible representations of successive groups in this chain are systematically exploited from the outset. Here, Sn−1 is identified with the subgroup of Sn consisting of all permutations of [n] = {1, 2, ..., n} that fix n.

The starting point for our investigation of the sets Irr Sn is the Multiplicity-Freeness Theorem (Section 4.2), which states that the restriction of any V ∈ Irr Sn to Sn−1 decomposes into a direct sum of pairwise nonequivalent irreducible representations. This fact naturally leads to the definition of the so-called branching graph B: the set of vertices is the disjoint union of all Irr Sn and we draw an arrow from W ∈ Irr Sn−1 to V ∈ Irr Sn if W is an irreducible constituent of V↓Sn−1. Another fundamental result in the representation theory of the symmetric groups, the Graph Isomorphism Theorem, establishes an isomorphism between B and a more elementary graph, the so-called Young graph Y. The vertex set of Y is the disjoint union of the sets Pn consisting of all partitions of n, each represented by its Young diagram. An arrow from μ ∈ Pn−1 to λ ∈ Pn signifies that the diagram of λ is obtained by adding a box to the diagram of μ.



The Graph Isomorphism Theorem makes combinatorial tools available for the investigation of Irr Sn. In particular, we will present the elegant "hook-walk" proof, due to Greene, Nijenhuis, and Wilf [94], for the hook-length formula giving the dimensions of the irreducible representations of Sn. The proof of the isomorphism B ≅ Y uses an analysis of the spectrum of the so-called Gelfand-Zetlin algebra GZn, a commutative subalgebra of kSn whose definition is based directly on the chain (4.1). It turns out that each V ∈ Irr Sn has a basis consisting of eigenvectors for GZn, and this basis is unique up to rescaling. For a suitable choice of scalars, the matrices of all operators sV (s ∈ Sn) will be shown to have rational entries; another normalization leads to orthogonal matrices (Young's orthogonal form). Finally, we will present an efficient method for computing the irreducible characters of Sn, the Murnaghan-Nakayama Rule.

The general strategy employed in this chapter emulates traditional methods from the representation theory of semisimple Lie algebras, which will be the subject of Chapters 6–8. The role of the GZ-algebra GZn echoes the one played by the Cartan subalgebra of a semisimple Lie algebra g, and analogs of sl2-triples of g will also occur in the present chapter. More extensive accounts of the representation theory of the symmetric groups along the lines followed here can be found in the original papers of Vershik and Okounkov [165], [208], [207], [209] and in the monographs [125] and [42].

If not explicitly stated otherwise, the base field k is understood to be algebraically closed of characteristic 0 throughout this chapter. Therefore, the group algebra kSn is split semisimple by Maschke's Theorem (§3.4.1). We will often suppress k in our notation, writing Irr Sn = Irr kSn as above and also HomSn = HomkSn, dim = dimk, etc.


4.1. Gelfand-Zetlin Algebras

The chain (4.1) gives rise to a corresponding chain of group algebras,

(4.2)   k = kS1 ⊆ kS2 ⊆ · · · ⊆ kSn−1 ⊆ kSn ⊆ · · · .

The Gelfand-Zetlin (GZ) algebra GZn [85], [86] is defined to be the subalgebra of kSn that is generated by the centers Zk := Z(kSk) for k ≤ n:

GZn := k[Z1, Z2, . . . , Zn] ⊆ kSn.

Note that all Zk commute elementwise with each other: if α ∈ Zk and β ∈ Zl with k ≤ l, say, then α ∈ kSk ⊆ kSl and β ∈ Z(kSl), and hence αβ = βα. Therefore, GZn is certainly commutative; in fact, the same argument works for any chain of algebras in place of (4.2). In order to derive more interesting facts about GZn, we will need to use additional properties of (4.2). For example, we will show that GZn is a maximal commutative subalgebra of kSn and that GZn is semisimple; see Theorem 4.4 below.

4.1.1. Centralizer Subalgebras

Our first goal is to exhibit a more economical set of generators for the algebra GZn. This will be provided by the so-called Jucys-Murphy (JM) elements, which will play an important role throughout this chapter. The nth JM-element, denoted by Xn, is defined as the orbit sum of the transposition (1, n) ∈ Sn under the conjugation action Sn−1 ↷ kSn:

Xn := ∑_{i=1}^{n−1} (i, n) ∈ (kSn)^{Sn−1}.

Here, (kSn)^{Sn−1} denotes the subalgebra of kSn consisting of all Sn−1-invariants. Evidently, (kSn)^{Sn−1} is contained in the invariant algebra (kSn)^{Sk} of the conjugation action Sk ↷ kSn for all k < n, and (kSn)^{Sk} can also be described as the centralizer of the subalgebra kSk in kSn:

(kSn)^{Sk} = {a ∈ kSn | ab = ba for all b ∈ kSk}.

By the foregoing, the JM-elements Xk+1, . . . , Xn all belong to (kSn)^{Sk}, and this algebra clearly also contains the center Zk = (kSk)^{Sk} as well as the subgroup S′n−k ≤ Sn consisting of the permutations of [n] = {1, 2, . . . , n} that fix all elements of [k]. The following theorem is due to Olshanskiĭ [166].

Theorem 4.1. The k-algebra (kSn)^{Sk} is generated by the center Zk = Z(kSk), the subgroup S′n−k ≤ Sn, and the JM-elements Xk+1, . . . , Xn.


Note that, for m ≥ k + 1,

(4.3)   Xm − (k + 1, m) − · · · − (m − 1, m) = (k + 1, m) Xk+1 (k + 1, m).

Since (i, m) ∈ S′n−k for k < i < m, all but one of the JM-elements could be deleted from the list of generators in the theorem. However, our focus later on will be on the JM-elements rather than the other generators.

Before diving into the proof of Olshanskiĭ's Theorem, we remark that the Sk-conjugacy class of any s ∈ Sn can be thought of in terms of "marked cycle shapes". In detail, if s is given as a product of disjoint cycles, possibly including 1-cycles, then we can represent the class of s by the shape that is obtained by keeping each of k + 1, . . . , n in its position in the given product while placing the symbol ∗ in all other positions. For example, the marked cycle shape

(∗, ∗, ∗)(∗, ∗)(∗, ∗)(∗)(12, ∗, 15)(13, ∗, ∗)(14)

represents the S11-conjugacy class consisting of all permutations of [15] that are obtained by filling the positions marked ∗ by the elements of [11] in some order.

Proof of Theorem 4.1. We already know that A := k[Zk, S′n−k, Xk+1, . . . , Xn] ⊆ B := (kSn)^{Sk}. In order to prove the inclusion B ⊆ A, observe that kSn is a permutation representation of Sk. Hence a k-basis of B is given by the Sk-orbit sums (§3.3.1),

σs := ∑_{t ∈ ^{Sk}s} t   (s ∈ Sn),

where ^{Sk}s denotes the Sk-conjugacy class of s. Our goal is to show that σs ∈ A for all s ∈ Sn. To this end, we use a temporary notion of length¹ for elements s ∈ Sn, defining l(s) to be the number of points from [n] that are moved by s or, equivalently, the number of symbols occurring in the disjoint cycle decomposition of s with all 1-cycles omitted. Clearly, l( · ) is constant on conjugacy classes of Sn. Moreover, l(ss′) ≤ l(s) + l(s′) for s, s′ ∈ Sn, and equality holds exactly if s and s′ do not move a common point. Letting Fl ⊆ kSn denote the k-linear span of all s ∈ Sn with l(s) ≤ l, we have F0 = k ⊆ · · · ⊆ Fl−1 ⊆ Fl ⊆ · · · ⊆ Fn = kSn and Fl Fl′ ⊆ Fl+l′. Moreover, all subspaces Fl are Sk-stable. Put

Bl = B ∩ Fl = (Fl)^{Sk}.

A basis of Bl is given by the orbit sums σs with l(s) ≤ l. We will show by induction on l that Bl ⊆ A or, equivalently, σs ∈ A for all s ∈ Sn with l(s) = l. To start, if l = l(s) ≤ 1, then σs = s = 1 ∈ A. More generally, if s = rt with r ∈ Sk and t ∈ S′n−k, then σs = σr t and σr ∈ Zk. Hence, σs ∈ A again. Thus, we may assume that s ∉ Sk × S′n−k; so the disjoint cycle decomposition of s involves a cycle of the form (. . . , i, m) with i ≤ k < m. If l = 2, then s = (i, m) and σs

¹This is not to be confused with another notion of "length", considered in Example 7.10.


is identical to the left-hand side of (4.3), which belongs to A. Therefore, we may assume that l > 2 and that Bl−1 ⊆ A.

Next, assume that s = rt with r, t ≠ 1 and l(r) + l(t) = l. Then σr, σt ∈ A by induction, and hence A ∋ σr σt = ∑_{r′, t′} r′t′, where r′ and t′ run over the Sk-conjugacy classes of r and t, respectively. If r′ and t′ move a common point of [n], then l(r′t′) < l(r′) + l(t′) = l. The sum of all these products r′t′ is an Sk-invariant belonging to Bl−1 ⊆ A. Therefore, it suffices to consider the sum of the nonoverlapping products r′t′. Each of these products has the same marked cycle shape as s, and hence it belongs to the Sk-conjugacy class of s. By Sk-invariance, the sum of the nonoverlapping products r′t′ is a positive integer multiple of σs. This shows that σr σt ≡ z σs mod A for some z ∈ N, and we conclude that σs ∈ A.

It remains to treat the case where s is a cycle, say s = (j1, . . . , jl−2, i, m) with i ≤ k < m. Write s = rt with r = (i, m) and t = (j1, . . . , jl−2, m). Since σr, σt ∈ A by induction, we once again have A ∋ σr σt = ∑_{i′, t′} (i′, m)t′ with i′ ≤ k and t′ running over the Sk-conjugacy class of t. As above, the sum of all these products (i′, m)t′ having length less than l belongs to A by induction. The products of length equal to l all have the form (i′, m)t′ = (i′, m)(j′1, . . . , j′l−2, m) = (j′1, . . . , j′l−2, i′, m), and these products form the Sk-conjugacy class of s. Therefore, we again have σr σt ≡ z σs mod A for some z ∈ N, which finishes the proof. □

by induction, we once again have A σr σt = i,t (i , m)t with i ≤ k and t running over the Sk -conjugacy class of t. As above, the sum of all these products (i , m)t having length less than l belongs to A by induction. The products of length , m) = ( j1, . . . , jl−2 , i , m), equal to l all have the form (i , m)t = (i , m)( j1, . . . , jl−2 and these products form the Sk -conjugacy class of s. Therefore, we again have σr σt ≡ zσs mod A for some z ∈ N, which ﬁnishes the proof. 4.1.2. Generators of the Gelfand-Zetlin Algebra As a consequence of Theorem 4.1, we obtain the promised generating set for the Gelfand-Zetlin algebra: GZ n is generated by the JM-elements Xk with k ≤ n. Even though X1 = 0 is of course not needed as a generator, it will be convenient to keep this element in the list. Corollary 4.2. GZ n = k[X1, X2, . . . , X n ]. Proof. First note that

all transpositions of Sk−1 . Xk = all transpositions of Sk − Since the ﬁrst sum belongs to Zk and the second to Zk−1 , it follows that Xk ∈ GZ n = k[Z1, . . . , Zn ]. For the inclusion GZ n ⊆ k[X1, . . . , X n ], we proceed by induction on n. The case of GZ 1 = k being clear, assume that n > 1 and that GZ n−1 ⊆ k[X1, . . . , X n−1 ]. Since GZ n = k[GZ n−1, Zn ] by deﬁnition, it suﬃces to show that Zn ⊆ k[GZ n−1, X n ]. But Zn = (kSn ) Sn ⊆ (kSn ) Sn−1 as desired.

=

Theorem 4.1

k[Zn−1, X n ] ⊆ k[GZ n−1, X n ]


Exercises for Section 4.1

4.1.1 (Relations between JM-elements and Coxeter generators). The transpositions si = (i, i + 1) (i = 1, . . . , n − 1) are called the Coxeter generators of Sn. Show that the following relations hold for the Coxeter generators and the JM-elements: si Xi + 1 = Xi+1 si and si Xj = Xj si if j ≠ i, i + 1.

4.1.2 (Product of the JM-elements). Show that X2 X3 · · · Xn is the sum of all n-cycles in Sn.

4.1.3 (Semisimplicity of some subalgebras of kSn). Show that any subalgebra of kSn that is generated by some of the JM-elements Xi (i ≤ n), some subgroups of Sn, and the centers of some subgroup algebras of kSn is semisimple. In particular, the centralizer algebras (kSn)^{Sk} and GZn are semisimple. (Use Exercise 3.4.2 and the fact that all these subalgebras are stable under the standard involution of kSn and defined over Q.)
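The commutativity of the JM-elements and Exercise 4.1.2 are easy to check by machine for small n. The following sketch (our own code, not the book's: permutations are tuples of images, and group-algebra elements are dictionaries mapping permutations to coefficients) verifies both facts in kS4:

```python
from itertools import permutations
from collections import Counter

n = 4  # work in the group algebra kS_4

def compose(p, q):
    # (p*q)(i) = p(q(i)); permutations are tuples of images of 0..n-1
    return tuple(p[q[i]] for i in range(n))

def transposition(a, b):
    img = list(range(n))
    img[a], img[b] = img[b], img[a]
    return tuple(img)

def mult(x, y):
    # product in kS_n; elements are dicts {permutation: coefficient}
    z = Counter()
    for p, cp in x.items():
        for q, cq in y.items():
            z[compose(p, q)] += cp * cq
    return {p: c for p, c in z.items() if c}

def jm(k):
    # X_k = (1,k) + (2,k) + ... + (k-1,k)  (labels 1..n; indices 0-based here)
    return dict(Counter(transposition(i, k - 1) for i in range(k - 1)))

X = {k: jm(k) for k in range(2, n + 1)}

# the JM-elements commute pairwise, as they must: they generate GZ_n
assert all(mult(X[a], X[b]) == mult(X[b], X[a]) for a in X for b in X)

# Exercise 4.1.2: X_2 X_3 ... X_n is the sum of all n-cycles
prod = X[2]
for k in range(3, n + 1):
    prod = mult(prod, X[k])

def is_ncycle(p):
    # p is an n-cycle iff the orbit of 0 under p is all of {0, ..., n-1}
    orbit, j = {0}, p[0]
    while j not in orbit:
        orbit.add(j)
        j = p[j]
    return len(orbit) == n

assert prod == {p: 1 for p in permutations(range(n)) if is_ncycle(p)}
```

Raising n in the first line checks the same statements in larger group algebras, at the cost of an n!-sized computation.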

4.2. The Branching Graph

In this section, we define the first of two graphs that will play a major role in this chapter: the branching graph B. This graph efficiently encodes a great deal of information concerning the irreducible representations of the various symmetric groups Sn.

4.2.1. Restricting Irreducible Representations

The developments in this section hinge on the following observation.

Multiplicity-Freeness Theorem. For each V ∈ Irr Sn, the restriction V↓Sn−1 is a direct sum of nonisomorphic irreducible representations of Sn−1.

Proof. Since kSn−1 is split semisimple, we have V↓Sn−1 ≅ ⊕_{W ∈ Irr Sn−1} W^{⊕m(W)} with m(W) ∈ Z+, and so EndSn−1(V) ≅ ∏_W Mat_{m(W)}(k) (Proposition 1.33). The theorem states that m(W) ≤ 1 for all W, which is equivalent to the assertion that the algebra EndSn−1(V) is commutative. Similarly, since kSn is split semisimple, we have the standard isomorphism (1.46) of k-algebras:

(4.4)   kSn ≅ ∏_{V ∈ Irr Sn} Endk(V),   a ↦ (aV).

Under this isomorphism, the conjugation action Sn ↷ kSn translates into the standard Sn-action on each component Endk(V): (ˢa)V = sV ∘ aV ∘ sV⁻¹ = s.aV.


Therefore, the isomorphism (4.4) restricts to an isomorphism of algebras of Sn−1-invariants,

(kSn)^{Sn−1} ≅ ∏_{V ∈ Irr Sn} Endk(V)^{Sn−1} = ∏_{V ∈ Irr Sn} EndSn−1(V),

using (3.30). By Theorem 4.1, (kSn)^{Sn−1} = k[Zn−1, Xn] is a commutative algebra. Consequently, all EndSn−1(V) are commutative as well, as desired. □

4.2.2. The Graph B

Consider the following graph B, called the branching graph of the chain (4.2). The set of vertices of B is defined by

vert B = ⨆_{n≥1} Irr Sn.

For given vertices W ∈ Irr Sn−1 and V ∈ Irr Sn, the graph B has a directed edge W → V if and only if there is a nonzero map W → V↓Sn−1 in Rep Sn−1, that is, W is an irreducible constituent of V↓Sn−1. Thus, the vertices of B are organized into levels, with Irr Sn being the set of level-n vertices, and all arrows in B are directed toward the next higher level. Figure 4.1 shows the first five levels of B (Exercise 4.2.1).

[Figure 4.1. Bottom of the branching graph B (notation of §3.5.2): the levels Irr S1 through Irr S5, from 1S1 at the bottom up through the seven vertices of Irr S5.]

The Multiplicity-Freeness Theorem can now be stated as follows:

(4.5)   V↓Sn−1 ≅ ⊕_{W→V in B} W.


Note that the decomposition (4.5) is canonical: the image of W in V↓Sn−1 is uniquely determined as the W-homogeneous component of V↓Sn−1, and the map W → V↓Sn−1 is a monomorphism in Rep Sn−1 that is uniquely determined up to a scalar multiple by Schur's Lemma.

4.2.3. Gelfand-Zetlin Bases

Let V ∈ Irr Sn be given. The following procedure yields a canonical decomposition of V into 1-dimensional subspaces. Start by decomposing V↓Sn−1 into irreducible constituents as in (4.5); then, for each arrow W → V in B, decompose W↓Sn−2 into irreducibles. Proceeding in this manner all the way down to S1 = 1, we obtain the desired decomposition of the vector space V↓S1 into 1-dimensional subspaces, one for each path T : 1S1 → · · · → V in B. The resulting decomposition of V is uniquely determined, because the various decompositions at each step are unique. Choosing 0 ≠ vT in the subspace of V corresponding to the path T, we obtain a basis (vT) of V, and each vT is determined up to a scalar multiple. This basis is called "the" Gelfand-Zetlin (GZ) basis of V; of course, any rescaling of this basis would also be a GZ-basis. To summarize,

(4.6)   V↓S1 = ⊕_{T : 1S1 → · · · → V in B} k vT.

It follows in particular that

(4.7)   dim V = #{paths 1S1 → · · · → V in B}.

The reader is invited to check that there are five such paths in Figure 4.1 for V = W, and six paths for V = ⋀²V4. This does of course agree with what we know already about the dimensions of these representations (§3.5.2). For a generalization of (4.7), see Exercise 4.2.3.

Example 4.3 (GZ-basis of the standard representation Vn−1). Consider the chain M1 ⊆ M2 ⊆ · · · ⊆ Mn ⊆ · · · , where Mn = ⊕_{i=1}^{n} k bi is the standard permutation representation of Sn (§3.2.4). Working inside ⋃_{n≥1} Mn = ⊕_{i≥1} k bi, we have

Vn−1 = {∑_i λi bi | ∑_i λi = 0 and λi = 0 for i > n}

and so · · · ⊆ Vn−2 ⊆ Vn−1 ⊆ · · · . Thus, Vn−2 provides us with an irreducible component of Vn−1↓Sn−1. The vector

vn−1 = ∑_{i=1}^{n−1} (bi − bn) = ∑_{i=1}^{n−1} bi − (n − 1)bn ∈ Vn−1

is a nonzero Sn−1-invariant that does not belong to Vn−2. For dimension reasons, we conclude that Vn−1↓Sn−1 = k vn−1 ⊕ Vn−2 ≅ 1Sn−1 ⊕ Vn−2 is the decomposition of Vn−1↓Sn−1 into irreducible constituents. Inductively we further deduce that


Vn−1 = ⊕_{j=1}^{n−1} k vj and that (v1, . . . , vn−1) is the GZ-basis of Vn−1. It is straightforward to check that the Coxeter generator si = (i, i + 1) ∈ Sn acts on this basis as follows:

(4.8)   si.vj = vj                              for j ≠ i − 1, i;
        si.vj = (1/i) vi−1 + (1 − 1/i) vi       for j = i − 1;
        si.vj = (1 + 1/i) vi−1 − (1/i) vi       for j = i.

These equations determine the vectors vj ∈ Vn−1 up to a common scalar factor: if (4.8) also holds with wj in place of vj, then vj ↦ wj is an Sn-equivariant endomorphism of Vn−1 and hence an element of D(Vn−1) = k. We shall discuss some rescalings of the GZ-basis (vj) in Examples 4.17 and 4.19.

4.2.4. Properties of GZn

We have seen that the Gelfand-Zetlin algebra GZn is commutative and generated by the JM-elements X1, . . . , Xn. Now we derive further information about GZn from the foregoing.

Theorem 4.4. (a) GZn is the set of all a ∈ kSn such that the GZ-basis of each V ∈ Irr Sn consists of eigenvectors for aV.

(b) GZn is a maximal commutative subalgebra of kSn.
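Formula (4.8) is straightforward to confirm by machine for a fixed small n. A minimal sketch (our own notation: vectors are exact-rational coefficient lists over the basis b1, . . . , bn, and v0 is the zero vector, which makes the boundary case i = 1 of (4.8) come out right):

```python
from fractions import Fraction as F

n = 5  # check (4.8) inside the standard representation V_{n-1} of S_5

def v(j):
    # v_j = b_1 + ... + b_j - j*b_{j+1}, as a coefficient list over b_1..b_n
    # (v_0 comes out as the zero vector)
    c = [F(0)] * n
    for l in range(j):
        c[l] = F(1)
    c[j] = F(-j)
    return c

def s(i, vec):
    # the Coxeter generator s_i = (i, i+1) swaps the basis vectors b_i, b_{i+1}
    w = list(vec)
    w[i - 1], w[i] = w[i], w[i - 1]
    return w

def lin(a, x, b, y):
    # the linear combination a*x + b*y of two coefficient lists
    return [a * xi + b * yi for xi, yi in zip(x, y)]

for i in range(1, n):
    for j in range(1, n):
        if j not in (i - 1, i):
            rhs = v(j)
        elif j == i - 1:
            rhs = lin(F(1, i), v(i - 1), 1 - F(1, i), v(i))
        else:  # j == i
            rhs = lin(1 + F(1, i), v(i - 1), F(-1, i), v(i))
        assert s(i, v(j)) == rhs
```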

(c) GZn is semisimple: GZn ≅ k^{×dn} with dn = ∑_{V ∈ Irr Sn} dim V.

Proof. For each V ∈ Irr kSn, identify Endk(V) with the matrix algebra Mat_{dim V}(k) via the GZ-basis of V. Then the isomorphism (4.4) identifies the group algebra kSn with the direct product of these matrix algebras. Let D denote the subalgebra of kSn that corresponds to the direct product of the algebras of diagonal matrices in each component. Part (a) asserts that, under the identification

(4.9)   kSn ≅ ∏_{V ∈ Irr Sn} Mat_{dim V}(k)   (via GZ-bases),

the subalgebra GZn ⊆ kSn corresponds to D = ∏_{V ∈ Irr Sn} {diagonal matrices}.

The isomorphism GZn ≅ k^{×dn} in (c) is then clear, and so is the maximality assertion in (b). Indeed, the subalgebra of diagonal matrices in any matrix algebra Matd(k) is self-centralizing: the only matrices that commute with all diagonal matrices are themselves diagonal. Therefore, D is a self-centralizing subalgebra of kSn, and hence D is a maximal commutative subalgebra. In particular, in order to prove the equality D = GZn, it suffices to show that D ⊆ GZn, because we already know that GZn is commutative.


To prove the inclusion D ⊆ GZn, let e(V) ∈ Zn denote the primitive central idempotent of kSn corresponding to V ∈ Irr Sn (§1.4.4). Recall that, for any W ∈ Rep Sn, the operator e(V)W projects W onto the V-homogeneous component W(V), annihilating all other homogeneous components of W. Therefore, for any path T : 1S1 = W1 → W2 → · · · → Wn = V in B, the element

e(T) := e(W1)e(W2) · · · e(Wn) ∈ k[Z1, Z2, . . . , Zn] = GZn

acts on V as the projection πT : V ↠ VT = kvT in (4.6), and e(T)V′ = 0 for all V′ ≠ V in Irr Sn. Thus, under the identification (4.9), the element e(T) ∈ kSn corresponds to (0, . . . , πT, . . . , 0) ∈ ∏_{V ∈ Irr Sn} Mat_{dim V}(k). This shows that the idempotents e(T) form the standard basis of the diagonal algebra D, consisting of the diagonal matrices with one entry equal to 1 and all others 0, which proves the desired inclusion D ⊆ GZn. □

4.2.5. The Spectrum of GZn

We now give a description of Spec GZn = MaxSpec GZn ≅ HomAlgk(GZn, k) (§1.3.2) that will play an important role in Section 4.4. For this, we elaborate on some of the properties of GZn stated in Theorem 4.4. First, the fact that the GZ-basis (vT) of any V ∈ Irr Sn consists of eigenvectors for GZn says that each vT is a weight vector for a suitable weight φT ∈ HomAlgk(GZn, k):

(4.10)   a.vT = φT(a) vT   (a ∈ GZn).

Moreover, in view of (4.7), the dimension dn = dim GZn is equal to the total number of paths in B from 1S1 to some vertex ∈ Irr Sn. Finally, the isomorphism GZn ≅ k^{×dn} in (4.9) is given by a ↦ (φT(a))T. Therefore, each φT is a weight of a unique V ∈ Irr Sn, the endpoint of the path T : 1S1 → · · · → V in (4.6), and

(4.11)   HomAlgk(GZn, k) = {φT | T a path 1S1 → · · · in B with endpoint ∈ Irr Sn}.

Since the algebra GZn is generated by the JM-elements X1, . . . , Xn (Corollary 4.2), each weight φT is determined by the n-tuple (φT(Xi))_{i=1}^{n} ∈ kⁿ. Therefore, the spectrum HomAlgk(GZn, k) of GZn is in one-to-one correspondence with the set

Spec(n) := {(φ(X1), φ(X2), . . . , φ(Xn)) ∈ kⁿ | φ ∈ HomAlgk(GZn, k)}.


To summarize, we have the following bijections:

(4.12)   Spec(n) ≅ HomAlgk(GZn, k) ≅ {paths T : 1S1 → · · · in B with endpoint ∈ Irr Sn}   (via GZ-bases),

with (φT(X1), . . . , φT(Xn)) ↔ φT ↔ T.

Exercises for Section 4.2

4.2.1 (Bottom of B). Verify the bottom of the branching graph B as in Figure 4.1.

4.2.2 (Dimension of the Gelfand-Zetlin algebra). Note that dn = dim GZn is also the length of the group algebra kSn. Show that the first five values of the sequence dn are: 1, 2, 4, 10, 26.

4.2.3 (Lengths of homogeneous components). Let V ∈ Irr Sn and W ∈ Irr Sk and assume that k ≤ n. Show that the length of the W-homogeneous component of V↓Sk is equal to the number of paths W → · · · → V in B.

4.2.4 (Orthogonality of GZ-bases). Let V ∈ Irr Sn and let ( · , · ) : V × V → k be any Sn-invariant bilinear form; so (s.v, s.v′) = (v, v′) for all v, v′ ∈ V and s ∈ Sn. Show that the GZ-basis (vT) of V is orthogonal: (vT, vT′) = 0 for T ≠ T′. (Use the fact that representations of symmetric groups are self-dual by Lemma 3.24.)

4.2.5 (Weights of the standard representation). Let Vn−1 be the standard representation of Sn and let vj = b1 + b2 + · · · + bj − j bj+1 as in Example 4.3. Show that vj has weight (0, 1, . . . , j − 1, −1, j, . . . , n − 2).
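Exercise 4.2.5 can likewise be verified by machine for small n. The sketch below (our own helper names) lets each JM-element Xi = (1, i) + · · · + (i − 1, i) act on the vectors vj inside the permutation representation Mn and checks the claimed weight tuple for n = 5:

```python
from fractions import Fraction as F

n = 5

def v(j):
    # v_j = b_1 + ... + b_j - j*b_{j+1}, as a coefficient list over b_1..b_n
    c = [F(0)] * n
    for l in range(j):
        c[l] = F(1)
    c[j] = F(-j)
    return c

def transpose(a, b, vec):
    # action of the transposition (a, b) on M_n: swap the b_a and b_b coefficients
    w = list(vec)
    w[a - 1], w[b - 1] = w[b - 1], w[a - 1]
    return w

def X(i, vec):
    # JM-element X_i = (1,i) + (2,i) + ... + (i-1,i) applied to vec
    out = [F(0)] * n
    for l in range(1, i):
        for p, c in enumerate(transpose(l, i, vec)):
            out[p] += c
    return out

def expected_weight(j):
    # the claimed weight of v_j: (0, 1, ..., j-1, -1, j, ..., n-2)
    return list(range(j)) + [-1] + list(range(j, n - 1))

for j in range(1, n):
    w = expected_weight(j)
    for i in range(1, n + 1):
        assert X(i, v(j)) == [w[i - 1] * c for c in v(j)]
```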

4.3. The Young Graph

We now start afresh, working in purely combinatorial rather than representation-theoretic territory.

4.3.1. Partitions and Young Diagrams

The main player in this section is the following set of nonnegative integer sequences:

P := {(λ1, λ2, . . . ) ∈ Z+^N | λ1 ≥ λ2 ≥ · · · and ∑_i λi < ∞}

For n > 1, we use the following recursion, which is evident from the definition of f^λ as the number of paths □ → · · · → λ in Y:

f^λ = ∑_{μ→λ} f^μ.

By induction, we know that f^μ = (n−1)!/H(μ) for all μ in the above sum. Thus, we need to show that n!/H(λ) = ∑_{μ→λ} (n−1)!/H(μ) or, equivalently, 1 = ∑_{μ→λ} (1/n)·H(λ)/H(μ). Recall that μ → λ means that μ arises from λ by removing one corner box, say c. Denoting the resulting μ by λ \ c, our goal is to show that

1 = ∑_c (1/n)·H(λ)/H(λ \ c),

where c runs over the corner boxes of λ. Comparison with (4.19) shows that it suffices to prove the following equality, for each corner box c:

(4.20)   ∑_{ω ∈ Ec} q(ω) = H(λ)/H(λ \ c).

First, let us consider the right-hand side of (4.20). Note that λ \ c has the same hooks as λ, except that the hook at c, of length 1, is missing and the hooks at all boxes in the region B = B(c), consisting of the boxes ≠ c that lie in the same row or column as c, have lengths shorter by 1 than the corresponding hooks of λ. Therefore, the right-hand side of (4.20) can be written as follows:

H(λ)/H(λ \ c) = ∏_{b ∈ B} h(b)/(h(b) − 1).

Using the notation q(b) = 1/(h(b) − 1) from the hook-walk experiment, we can write the product on the right as ∏_{b ∈ B} (1 + q(b)) = ∑_{S ⊆ B} ∏_{b ∈ S} q(b). Hence,

(4.21)   H(λ)/H(λ \ c) = ∑_{S ⊆ B} ∏_{b ∈ S} q(b).

Now for the left-hand side of (4.20). For each hook walk ω ∈ Ec, let Sω ⊆ B denote the set of boxes that arise as the horizontal and vertical projections of the boxes of ω into B. Note that, while Sω generally does not determine the entire walk ω, the starting point, x, is certainly determined, and if x ∈ B, then ω is determined. We claim that, for each subset S ⊆ B,


(4.22)   ∑_{ω ∈ Ec, Sω = S} q(ω) = ∏_{b ∈ S} q(b).

This will give the following expression for the left-hand side of (4.20):

∑_{ω ∈ Ec} q(ω) = ∑_{S ⊆ B} ∑_{ω ∈ Ec, Sω = S} q(ω) = ∑_{S ⊆ B} ∏_{b ∈ S} q(b).

By (4.21) this is identical to the right-hand side of (4.20), thereby proving (4.20).

We still need to justify the claimed equality (4.22). For this, we argue by induction on |S|. The only hook walk ω ∈ Ec with Sω = ∅ is the walk starting and ending at c without ever moving; so the claim is trivially true for |S| = 0. The claim is also clear if the starting point, x, of ω belongs to B, because then the sum on the left has only one term, which is equal to the right-hand side of (4.22). So assume that x ∉ B ∪ {c}. Then there are two kinds of possible hook walks ω ∈ Ec with Sω = S: those that start with a move to the right and those that start with a move down. Letting η denote the remainder of the walk and letting x′, x′′ ∈ B be the vertical and horizontal projections of x into B, we have Sη = S \ {x′} in the former case and Sη = S \ {x′′} in the latter. Therefore,

∑_{ω ∈ Ec, Sω = S} q(ω) = q(x) ( ∑_{η ∈ Ec, Sη = S \ {x′}} q(η) + ∑_{η ∈ Ec, Sη = S \ {x′′}} q(η) )
= q(x) ( ∏_{b ∈ S \ {x′}} q(b) + ∏_{b ∈ S \ {x′′}} q(b) )   (by induction)
= q(x) ( 1/q(x′) + 1/q(x′′) ) ∏_{b ∈ S} q(b).

To complete the proof of (4.22), it remains to observe that q(x)(1/q(x′) + 1/q(x′′)) = 1 or, equivalently, h(x) + 1 = h(x′) + h(x′′), which is indeed the case.

This finishes the proof of the hook-length formula. □
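The hook-length formula is easy to test computationally. The sketch below (our own code, not the book's) computes f^λ by the path-counting recursion in the Young graph and compares it with n!/H(λ) for all partitions of n ≤ 7:

```python
from math import factorial
from functools import lru_cache

def partitions(n, maxpart=None):
    # all partitions of n as weakly decreasing tuples
    if maxpart is None:
        maxpart = n
    if n == 0:
        return [()]
    out = []
    for k in range(min(n, maxpart), 0, -1):
        out += [(k,) + rest for rest in partitions(n - k, k)]
    return out

def hook_product(lam):
    # H(lam): product of the hook lengths h(b) = arm + leg + 1 over all boxes b
    cols = [sum(1 for r in lam if r > j) for j in range(lam[0])] if lam else []
    H = 1
    for i, row in enumerate(lam):
        for j in range(row):
            H *= (row - j - 1) + (cols[j] - i - 1) + 1
    return H

@lru_cache(maxsize=None)
def f(lam):
    # f^lam: number of paths from the one-box diagram to lam in the Young graph,
    # computed by the recursion f^lam = sum of f^mu over mu -> lam
    n = sum(lam)
    if n <= 1:
        return 1
    total = 0
    for i, row in enumerate(lam):
        if i + 1 == len(lam) or lam[i + 1] < row:  # row i ends in a corner box
            mu = tuple(r - 1 if k == i else r
                       for k, r in enumerate(lam) if not (k == i and r == 1))
            total += f(mu)
    return total

for n in range(1, 8):
    for lam in partitions(n):
        assert f(lam) == factorial(n) // hook_product(lam)
```

The same recursion is exactly how the dimensions in (4.7) are counted once the Graph Isomorphism Theorem identifies B with Y.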


Exercises for Section 4.3

4.3.1 (Up and down operators). Let ZP = ⊕n ZPn denote the Z-module of all formal Z-linear combinations of partitions λ. Here, ZPn = 0 for n < 0, because Pn is empty in this case, and ZP0 ≅ Z, because P0 contains only (0, 0, . . . ), with Young diagram ∅. Thus, we have added ∅ as a root vertex to Y and a unique arrow ∅ → □. Consider the operators U, D ∈ EndZ(ZP) that are defined by U(λ) = ∑_{λ→μ} μ and D(λ) = ∑_{μ→λ} μ. Show that these operators satisfy the Weyl algebra relation DU = UD + 1.

4.3.2 (Automorphisms of Y). (a) Show that each λ ∈ Pn (n > 2) is determined by the set S(λ) := {μ ∈ Pn−1 | μ → λ in Y}. (b) Conclude by induction on n that the graph Y has only two automorphisms: the identity and conjugation.

4.3.3 (Rectangle partitions). Show:
(a) f^(n,n) = (1/(n+1)) (2n choose n), the nth Catalan number.
(b) f^λ > rc for λ = (c^r) := (c, . . . , c) (r rows) with c, r ≥ 2 and rc ≥ 8.

4.3.4 (Dimensions of the irreducible representations of S6). Extend Figure 4.2 up to layer P6 and find the dimensions of all irreducible representations of S6 (assuming the Graph Isomorphism Theorem).

4.3.5 (Hook partitions and exterior powers of the standard representation). Let λ ↦ V^λ be the bijection on vertices that is given by a graph isomorphism Y ≅ B as in §4.3.2. Assume that the isomorphism has been chosen so that the diagram of (2) is sent to 1S2. Show that V^(n−k,1^k) ≅ ⋀^k Vn−1 holds for all n and k = 0, . . . , n − 1, where (n − k, 1^k) is the "hook partition" (n − k, 1, . . . , 1) with k ones.

4.3.6 (Irreducible representations of dimension < n). Let n ≥ 7. Assuming the Graph Isomorphism Theorem, show that 1, sgn, the standard representation Vn−1, and its sign twist V±n−1 are the only irreducible representations of Sn of dimension < n. (Use Exercise 4.3.3(b).)
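The Weyl algebra relation of Exercise 4.3.1 can be checked by brute force on the bottom of Y. The following sketch (our own helper names) computes U and D on all partitions with at most 6 boxes and verifies DU = UD + 1:

```python
from collections import Counter

def is_partition(t):
    return all(t[i] >= t[i + 1] for i in range(len(t) - 1))

def up(lam):
    # U on a single partition: all partitions obtained by adding one box
    out = []
    for i in range(len(lam) + 1):
        mu = list(lam) + [0]
        mu[i] += 1
        mu = tuple(r for r in mu if r > 0)
        if is_partition(mu):
            out.append(mu)
    return out

def down(lam):
    # D on a single partition: all partitions obtained by removing a corner box
    out = []
    for i in range(len(lam)):
        if i + 1 == len(lam) or lam[i + 1] < lam[i]:  # row i ends in a corner
            out.append(tuple(r - 1 if k == i else r
                             for k, r in enumerate(lam) if not (k == i and r == 1)))
    return out

# all partitions with at most 6 boxes, built by repeatedly applying U to the root
levels, all_parts = [[()]], [()]
for _ in range(6):
    nxt = sorted({mu for lam in levels[-1] for mu in up(lam)})
    levels.append(nxt)
    all_parts += nxt

for lam in all_parts:
    DU = Counter(m for mu in up(lam) for m in down(mu))
    UD = Counter(m for nu in down(lam) for m in up(nu))
    UD[lam] += 1  # the "+1" in DU = UD + 1
    assert DU == UD
```

The relation reflects the fact that every partition has exactly one more addable corner than removable corners, which is the combinatorial heart of the exercise.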

4.4. Proof of the Graph Isomorphism Theorem

The main goal of this section is to provide the proof of the Graph Isomorphism Theorem. This will be accomplished in Corollary 4.13 after some technical tools have been deployed earlier in this section. In short, the strategy is to set up, for all n, a bijection

Γn : {paths □ → · · · in Y with endpoint ∈ Pn} ≅ {paths 1S1 → · · · in B with endpoint ∈ Irr Sn}


in such a way that sending the endpoint of a path T in Y to the endpoint of Γn(T) gives a bijection Pn ≅ Irr Sn, and such that Γn(T) with its endpoint deleted is the image under Γn−1 of T without its endpoint. We have already constructed a bijection between the collection of paths 1S1 → · · · in B having endpoint ∈ Irr Sn and a certain subset Spec(n) ⊆ kⁿ (4.12), and we have also identified paths in Y with standard Young tableaux (4.16). We will show in §4.4.1 that standard Young tableaux with n boxes in turn are in one-to-one correspondence with a certain set of n-tuples Cont(n) ⊆ Zⁿ. The main point of the proof is to show that Cont(n) = Spec(n); this will yield the desired bijection

{paths □ → · · · in Y with endpoint ∈ Pn} ≅ Cont(n) = Spec(n) ≅ {paths 1S1 → · · · in B with endpoint ∈ Irr Sn},

the last bijection being (4.12).

4.4.1. Contents

Let T be a standard Young tableau with n boxes. We define the content of T to be the n-tuple cT = (cT,1, cT,2, . . . , cT,n), where cT,i = a means that the box of T containing the number i lies on the line y = x + a; we will call this line the a-diagonal. Thus, cT,i = (column number) − (row number), where i occurs in T. Since 1 must occupy the (1, 1)-box of any standard Young tableau, we always have cT,1 = 0.

Any standard Young tableau T is determined by its content. To see this, consider the fibers

cT⁻¹(a) = {i | cT,i = a}   (a ∈ Z).

The sizes of these fibers tell us the shape of T or, equivalently, the underlying Young diagram: there must be |cT⁻¹(a)| boxes on the a-diagonal. The elements of cT⁻¹(a) give the content of these boxes: we must fill them into the boxes on the a-diagonal in increasing order from top to bottom. In this way, we reconstruct T from cT.

Example 4.6. Suppose we are given the content vector cT = (0, 1, −1, 2, 3, −2, 0, −1, 1). The nonempty fibers are cT⁻¹(−2) = {6}, cT⁻¹(−1) = {3, 8}, cT⁻¹(0) = {1, 7}, cT⁻¹(1) = {2, 9}, cT⁻¹(2) = {4}, and cT⁻¹(3) = {5}. Thus, the resulting standard Young tableau T is

1 2 4 5
3 7 9
6 8
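The reconstruction procedure just described is completely algorithmic. A minimal sketch (function names are ours, not the book's) that rebuilds the tableau of Example 4.6 from its content vector:

```python
from collections import defaultdict

def tableau_from_content(c):
    # rebuild a standard Young tableau from its content vector: the entries of
    # the fiber c^{-1}(a) fill the a-diagonal in increasing order, top to bottom
    fibers = defaultdict(list)
    for i, a in enumerate(c, start=1):
        fibers[a].append(i)  # entries arrive in increasing order
    entries = {}             # (row, col) -> entry, with 0-indexed boxes
    for a, vals in fibers.items():
        # topmost box on the a-diagonal: (0, a) if a >= 0, else (-a, 0)
        r, col = (0, a) if a >= 0 else (-a, 0)
        for val in vals:
            entries[(r, col)] = val
            r, col = r + 1, col + 1
    nrows = 1 + max(r for r, _ in entries)
    rows = []
    for r in range(nrows):
        width = sum(1 for (rr, _) in entries if rr == r)
        rows.append([entries[(r, col)] for col in range(width)])
    return rows

# Example 4.6:
assert tableau_from_content([0, 1, -1, 2, 3, -2, 0, -1, 1]) == [[1, 2, 4, 5], [3, 7, 9], [6, 8]]
```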

Of course, not every n-tuple of integers is the content of a standard Young tableau with n boxes, since there are only finitely many such Young tableaux. We also know already that the first component of any content vector must always be 0. Our next goal will be to give a complete description of the following set:

Cont(n) := {cT | T is a standard Young tableau with n boxes} ⊆ Zⁿ.

Example 4.7. We list all standard Young tableaux with four boxes, each given by its rows (separated by "|"), together with their content vectors; these form the set Cont(4). Note that, quite generally, reflecting a given standard Young tableau T across the diagonal y = x results in another standard Young tableau, T^c, satisfying cT^c = −cT.

(1 2 3 4): (0, 1, 2, 3)
(1 | 2 | 3 | 4): (0, −1, −2, −3)
(1 3 4 | 2): (0, −1, 1, 2)
(1 2 4 | 3): (0, 1, −1, 2)
(1 2 3 | 4): (0, 1, 2, −1)
(1 3 | 2 4): (0, −1, 1, 0)
(1 2 | 3 4): (0, 1, −1, 0)
(1 2 | 3 | 4): (0, 1, −1, −2)
(1 3 | 2 | 4): (0, −1, 1, −2)
(1 4 | 2 | 3): (0, −1, −2, 1)
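The list in Example 4.7 can be reproduced by brute force. A sketch (our own code) enumerates all standard Young tableaux with four boxes, collects their content vectors, and checks the properties noted above:

```python
from itertools import permutations

def standard_tableaux(shape):
    # brute force: all row- and column-increasing fillings of shape with 1..n
    n = sum(shape)
    boxes = [(r, c) for r, row in enumerate(shape) for c in range(row)]
    tabs = []
    for perm in permutations(range(1, n + 1)):
        t = dict(zip(boxes, perm))
        rows_ok = all(t[(r, c)] < t[(r, c + 1)] for (r, c) in boxes if (r, c + 1) in t)
        cols_ok = all(t[(r, c)] < t[(r + 1, c)] for (r, c) in boxes if (r + 1, c) in t)
        if rows_ok and cols_ok:
            tabs.append(t)
    return tabs

def content(t):
    # content vector: (column - row) of the box containing each of 1..n
    pos = {v: b for b, v in t.items()}
    return tuple(pos[i][1] - pos[i][0] for i in range(1, len(t) + 1))

shapes4 = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
cont4 = {content(t) for shape in shapes4 for t in standard_tableaux(shape)}

assert len(cont4) == 10                                  # ten tableaux, ten contents
assert all(c[0] == 0 for c in cont4)                     # first component is always 0
assert {tuple(-a for a in c) for c in cont4} == cont4    # reflection negates contents
```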

An Equivalence Relation. As we have remarked above, the shape of a standard Young tableau T is determined by the multiplicities |cT⁻¹(a)| for a ∈ Z. Thus, two standard Young tableaux T and T′ have the same shape if and only if cT′ = s.cT for some s ∈ Sn, where Sn acts on Zⁿ by place permutations: s.(c1, . . . , cn) := (c_{s⁻¹(1)}, . . . , c_{s⁻¹(n)}). Alternatively, T and T′ have the same shape if and only if T′ = sT for some s ∈ Sn, where sT denotes the tableau that is obtained from T by replacing the entries in all boxes by their images under the permutation s. The reader will readily verify (Exercise 4.4.2) that

(4.23)   c_{sT} = s.cT.

Define an equivalence relation ∼ on Cont(n) by

cT ∼ cT′  :⟺  cT′ = s.cT for some s ∈ Sn
          ⟺  T and T′ have the same shape
          ⟺  T and T′ describe paths □ → · · · in Y having the same endpoint (see (4.16)).


We summarize the correspondences discussed in the foregoing:

(4.24)   Cont(n) ≅ {standard Young tableaux with n boxes} ≅ {paths □ → · · · in Y with endpoint ∈ Pn}.

Passing to equivalence classes (forgetting the places of the entries, emptying the boxes, and remembering only the endpoint of the path, respectively), this induces

Cont(n)/∼ ≅ {Young diagrams with n boxes} ≅ Pn.

Contents and the Young Graph. Contents lead to a more economical but equivalent version of the Young graph Y; see Figure 4.3 for the first five levels. Young diagrams are replaced by simple bullets; instead, each arrow is decorated by a number indicating the diagonal where the new box is to be placed. The content of a standard Young tableau T is obtained from the labels of the arrows in the corresponding path: cT = (0, label of arrow #1, label of arrow #2, . . . ).

[Figure 4.3. Contents and the Young graph Y: the first five levels of Y, with each vertex drawn as a bullet and each arrow labeled by the diagonal of the box being added.]

Admissible Transpositions. Not all place permutations of the content vector cT of a given standard Young tableau T necessarily produce another content vector of a standard Young tableau. For one, the ﬁrst component must always be 0. The case of the transpositions si = (i, i + 1) ∈ Sn will be of particular interest; si swaps the boxes of T containing i and i + 1 while leaving all other boxes untouched and si .(c1, . . . , cn ) = (c1, . . . , ci−1, ci+1, ci, ci+2, . . . , cn ). It is easy to see (Exercise 4.4.2) that si T is another standard Young tableau if and only if i and i + 1 occur in diﬀerent rows and columns of T. In this case, si will be called an admissible transposition for T. Note that the boxes containing i and i + 1 belong to diﬀerent rows and columns of T if and only if these boxes are not direct neighbors in T, sharing a common boundary line, and this is also equivalent to the condition that the boxes

4.4. Proof of the Graph Isomorphism Theorem

209

of T containing i and i + 1 do not lie on adjacent diagonals. To summarize,

(4.25)  si is admissible for T ⟺ i and i + 1 belong to different rows and columns of T ⟺ cT,i+1 ≠ cT,i ± 1.

Alternatively, si is admissible for c = (c1, . . . , cn) ∈ Cont(n) iff ci+1 ≠ ci ± 1. ...
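Criterion (4.25) is easy to operate with computationally. Here is a small sketch (helper names are ours) that tests admissibility of si directly on a content vector, using the condition c_{i+1} ≠ c_i ± 1, and applies the place permutation.

```python
# Sketch of (4.25) on content vectors (helper names ours):
# s_i is admissible for c iff c_{i+1} != c_i + 1 and c_{i+1} != c_i - 1.
def is_admissible(c, i):
    """Is s_i = (i, i+1) admissible for the content vector c?  (1 <= i <= n-1)"""
    return c[i] != c[i - 1] + 1 and c[i] != c[i - 1] - 1

def apply_si(c, i):
    """Swap the i-th and (i+1)-st entries of c (1-based), assuming admissibility."""
    assert is_admissible(c, i)
    c = list(c)
    c[i - 1], c[i] = c[i], c[i - 1]
    return c

c = [0, 1, -1, 2, 0]                        # content of the tableau 1 2 4 / 3 5
assert is_admissible(c, 2)                  # 2 and 3 lie in different rows and columns
assert not is_admissible(c, 1)              # 1 and 2 are horizontal neighbors
assert apply_si(c, 2) == [0, -1, 1, 2, 0]   # content of 1 3 4 / 2 5
```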

For a given partition λ ⊢ n, consider the particular λ-tableau T(λ) that is obtained from the Young diagram of λ by filling 1, 2, . . . , n into the boxes row by row, from left to right and top to bottom. Clearly, for any λ-tableau T, there is a unique s ∈ Sn with T = sT(λ). It is a standard fact that the transpositions s1, . . . , sn−1 generate Sn and that the minimal length of a product representing a given permutation s ∈ Sn in terms of these generators is equal to the number of inversions of s, that is, the number of pairs (i, j) ∈ [n] × [n] with i < j but s(i) > s(j); see Exercise 4.4.3 or Example 7.10. This number, called the length of s, will be denoted by ℓ(s).
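The equality of minimal word length and inversion count can be checked exhaustively for small n. The sketch below (helper names are ours) computes minimal word lengths by breadth-first search over products of the generators s1, . . . , sn−1 and compares them with inversion counts.

```python
from collections import deque
from itertools import permutations

def inversions(s):
    """Number of pairs i < j with s(i) > s(j); s is the tuple (s(1), ..., s(n))."""
    return sum(1 for i in range(len(s)) for j in range(i + 1, len(s)) if s[i] > s[j])

def word_lengths(n):
    """Minimal length of each s in S_n as a word in s_1, ..., s_{n-1}, via BFS."""
    e = tuple(range(1, n + 1))
    dist = {e: 0}
    queue = deque([e])
    while queue:
        s = queue.popleft()
        for i in range(n - 1):            # right-multiply by s_{i+1}: swap places i, i+1
            t = list(s)
            t[i], t[i + 1] = t[i + 1], t[i]
            t = tuple(t)
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

# The minimal word length agrees with the inversion count for every s in S_4.
dist = word_lengths(4)
assert all(dist[s] == inversions(s) for s in permutations(range(1, 5)))
```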

Lemma 4.8. (a) Let λ ⊢ n and let T be a λ-tableau. Then there exists a sequence si1, . . . , sil of admissible transpositions in Sn such that si1 · · · sil T = T(λ) and l = ℓ(si1 · · · sil ). (b) Let c, c′ ∈ Cont(n). Then c ∼ c′ if and only if there exists a finite sequence of admissible transpositions that transforms c into c′.

Proof. (a) Let nT be the number in the box at the end of the last row of T. We argue by induction on n and n − nT. The case n = 1 being trivial, assume that n > 1.

If nT = n, then remove the last box from T and let T′ denote the resulting standard Young tableau, of shape λ′ ⊢ n − 1. By induction, we can transform T′ into T(λ′) by a sequence of admissible transpositions given by Coxeter generators ∈ Sn−1, and the sequence may be chosen to have the desired length. The same sequence will move T to T(λ). Now assume that nT < n. Since the box of T containing nT + 1 cannot occur in the same row or column as the last box, containing nT, the transposition s_{nT} is admissible for T by (4.25). The λ-tableau T′ = s_{nT} T satisfies n_{T′} = nT + 1. By induction, there is a finite sequence si1, . . . , sil of admissible transpositions such that s = si1 · · · sil satisfies sT′ = T(λ) and l = ℓ(s). It follows that s s_{nT} T = T(λ) and l + 1 = ℓ(s s_{nT}), where the latter equality holds because s(nT) < n = s(nT + 1) (Exercise 4.4.3).

(b) Clearly, the existence of a sequence si1, . . . , sit such that si1 · · · sit .c = c′ implies that c ∼ c′. Conversely, if c = cT ∼ c′ = cT′, then T and T′ are λ-tableaux for the same λ. It follows from (a) that there is a sequence si1, . . . , sit of admissible transpositions such that si1 · · · sit T = T′. Hence, si1 · · · sit .c = c′ by (4.23).


Description of Cont(n). The following proposition gives the desired description of the set Cont(n). Since our ultimate goal is to show that Cont(n) is identical to the subset Spec(n) ⊆ kⁿ as defined in §4.2.5, we view Cont(n) ⊆ kⁿ. Note, however, that conditions (i) and (ii) below imply that Cont(n) ⊆ Zⁿ.

Proposition 4.9. Cont(n) is precisely the set of all c = (c1, c2, . . . , cn) ∈ kⁿ satisfying the following conditions:

(i) c1 = 0;

(ii) ci − 1 or ci + 1 ∈ {c1, c2, . . . , ci−1} for all i ≥ 2;

(iii) if ci = cj = a for i < j, then {a + 1, a − 1} ⊆ {ci+1, . . . , cj−1}.

Proof. Let C(n) denote the set of n-tuples c = (c1, c2, . . . , cn) ∈ kⁿ satisfying conditions (i)–(iii). We need to show that Cont(n) = C(n).

We first check that Cont(n) ⊆ C(n). As we have observed earlier, (i) certainly holds if c = cT for some standard Young tableau T, because the number 1 must be in the (1, 1)-box. Any i ≥ 2 must occupy a box of T in position (x, y) with x > 1 or y > 1. In the former case, let j be the entry in the (x − 1, y)-box. Then j < i and cj = y − (x − 1) = ci + 1, whence ci + 1 ∈ {c1, c2, . . . , ci−1}. In an analogous fashion, one shows that ci − 1 ∈ {c1, c2, . . . , ci−1} if y > 1, proving (ii).

Now suppose that ci = cj = a for i < j. Then the entries i and j both lie on the a-diagonal, y = x + a; say i occupies the (x, x + a)-box and j the (x′, x′ + a)-box, with x < x′. Let k and l denote the entries in the boxes at positions (x + 1, x + a) and (x′ − 1, x′ + a), respectively. Then k, l ∈ {i + 1, . . . , j − 1} and ck = (x + a) − (x + 1) = a − 1, cl = (x′ + a) − (x′ − 1) = a + 1. This proves (iii), thereby completing the proof of the inclusion Cont(n) ⊆ C(n).

For the reverse inclusion, Cont(n) ⊇ C(n), we proceed by induction on n. The case n = 1 being clear, with C(1) = {(0)} = Cont(1), assume that n > 1 and that C(n − 1) ⊆ Cont(n − 1). Let c = (c1, c2, . . . , cn) ∈ C(n) be given. Clearly, the truncated c′ = (c1, c2, . . . , cn−1) also satisfies conditions (i)–(iii); so c′ ∈ C(n − 1) ⊆ Cont(n − 1). Therefore, there exists a (unique) standard Young tableau T′ with cT′ = c′. We wish to add a box containing the number n to T′ so as to obtain a standard Young tableau, T, with cT = c. Thus, the new box must be placed on the cn-diagonal, y = x + cn, at the first slot not occupied by any boxes of T′. We need to check that the resulting T has the requisite “flag shape” of a partition; the monotonicity requirement for standard Young tableaux is then automatic, because the new box contains the largest number.


First assume that cn ∉ {c1, c2, . . . , cn−1}; so T′ has no boxes on the cn-diagonal. Since there are no gaps between the diagonals of T′, the values c1, . . . , cn−1, with repetitions omitted, form an interval in Z containing 0. Therefore, if cn > 0, then cn > ci for all i < n, while condition (ii) tells us that cn − 1 ∈ {c1, c2, . . . , cn−1}. Thus, cn = max{ci | 1 ≤ i ≤ n − 1} + 1 and the new box labeled n is added at the end of the first row of T′, at position (1, 1 + cn). Similarly, if cn < 0, then the new box is added at the bottom of the first column of T′. In either case, the resulting T has flag shape.

Finally assume that cn ∈ {c1, c2, . . . , cn−1} and choose i < n maximal with ci = cn =: a. Then the box labeled i is the last box on the a-diagonal of T′. We also know from condition (iii) that there exist r, s ∈ {i + 1, . . . , n − 1} with cr = a − 1 and cs = a + 1. Both r and s are unique. Indeed, if i < r < r′ < n and cr = a − 1 = cr′, then a ∈ {cr+1, . . . , cr′−1} by (iii), contradicting maximality of i. This shows uniqueness of r; the argument for s is analogous. Therefore, T′ has unique boxes on the (a − 1)- and (a + 1)-diagonals with entries > i. Necessarily these boxes are the last ones on their respective diagonals and they must be neighbors of the box containing i. Therefore, the new box labeled n is slotted into the corner formed by the boxes containing i, r, and s, again resulting in the desired flag shape.
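Conditions (i)–(iii) are directly machine-checkable. The following sketch (the helper name `is_content` is ours) tests membership in Cont(n) = C(n) for an integer tuple.

```python
# Membership test for Cont(n) via conditions (i)-(iii) of Proposition 4.9;
# the helper name is ours, not the book's.
def is_content(c):
    n = len(c)
    if n == 0 or c[0] != 0:                                 # (i)  c_1 = 0
        return False
    for i in range(1, n):                                   # (ii), 0-based index
        if c[i] - 1 not in c[:i] and c[i] + 1 not in c[:i]:
            return False
    for i in range(n):                                      # (iii)
        for j in range(i + 1, n):
            if c[i] == c[j]:
                between = c[i + 1:j]
                if c[i] + 1 not in between or c[i] - 1 not in between:
                    return False
    return True

assert is_content((0, 1, -1, 2, 0))       # content of the tableau 1 2 4 / 3 5
assert not is_content((0, 1, 2, 0))       # (iii) fails: no -1 between the two 0s
assert not is_content((1, 0))             # (i) fails
```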

4.4.2. Weights

Returning to the representation-theoretic side of matters, let us begin with a few reminders from §4.2.5. With X1, . . . , Xn ∈ GZn denoting the JM-elements, the spectrum of GZn is in one-to-one correspondence with the set

Spec(n) = { (φ(X1), φ(X2), . . . , φ(Xn)) ∈ kⁿ | φ ∈ HomAlgk(GZn, k) }.

By (4.12) we have the following bijections:

(4.26)  { paths 1_{S1} → · · · in B with endpoint ∈ Irr Sn }  ∼  Spec(n)  ∼  HomAlgk(GZn, k),

where the first bijection (via GZ-bases) sends the path T to αT := (φT(X1), . . . , φT(Xn)) and the second sends αT to φT.

The first component of each αT is 0, because X1 = 0. Recall from (4.10) that each φT is a weight of a unique V ∈ Irr Sn and that the weight space is spanned by the GZ-basis vector vT ∈ V that is given by the path T : 1_{S1} → · · · → V in B:

(4.27)  a.vT = φT(a) vT  (a ∈ GZn).

We will also write elements of Spec(n) simply as n-tuples, α = (a1, a2, . . . , an ) ∈ kn


with a1 = 0. Let V(α) ∈ Irr Sn denote the irreducible representation having weight α and let vα ∈ V(α) be the corresponding GZ-basis vector of V(α). Then (4.27) becomes

(4.28)  Xk.vα = ak vα  for k = 1, . . . , n.

The vector vα ∈ V(α) is determined by these equations up to a scalar multiple. We will scale the weight vectors vα in a consistent way in §4.5.1, but this will not be necessary for now.

Another Equivalence Relation. Define ≈ on Spec(n) by

αT ≈ αT′  :⟺  φT and φT′ are weights of the same representation ∈ Irr Sn
          ⟺  T and T′ are paths 1_{S1} → · · · in B with the same endpoint ∈ Irr Sn.

Alternatively, α ≈ α′ if and only if V(α) = V(α′). From (4.26) we obtain the following bijections:

(4.29)

Spec(n)    ∼  { paths 1_{S1} → · · · in B with endpoint ∈ Irr Sn }   (via GZ-bases)
Spec(n)/≈  ∼  Irr Sn                                                 (remember endpoint)

Example 4.10 (Spec(4)). We need to find the GZ-basis and weights of each V ∈ Irr S4. For the representation 1, this is trivial: for any n, the unique weight of 1_{Sn} is (0, 1, 2, . . . , n − 1), because Xk.1 = (k − 1)1. Example 4.3 provides us with the GZ-bases of V3 and V2. In the case of V2, note that X4 acts via the canonical map kS4 ↠ kS3, which sends X4 ↦ (2, 3) + (1, 3) + (1, 2). Next, for any n and any V ∈ Irr Sn, the sign twist V± = sgn ⊗ V has the “same” GZ-basis as V but with weights multiplied by −1: Xk.(1 ⊗ vα) = −ak (1 ⊗ vα).
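Granting the Graph Isomorphism Theorem, Spec(4) = Cont(4), so the full weight list can be generated by brute force: grow standard Young tableaux box by box along paths ∅ → · · · in the Young graph and record the content added at each step. A sketch (helper names are ours):

```python
# Enumerate Cont(n) by walking paths in the Young graph; helper names ours.
def grow(shape):
    """Partitions obtained from `shape` (a weakly decreasing tuple) by adding a box."""
    out = [shape + (1,)]
    for k in range(len(shape)):
        if k == 0 or shape[k] < shape[k - 1]:
            out.append(shape[:k] + (shape[k] + 1,) + shape[k + 1:])
    return out

def contents(n):
    """Content vectors of all standard Young tableaux with n boxes."""
    paths = [((), ())]            # pairs (shape so far, content vector so far)
    for _ in range(n):
        new = []
        for shape, c in paths:
            for bigger in grow(shape):
                # locate the row of the newly added box
                k = next(k for k in range(len(bigger))
                         if k >= len(shape) or bigger[k] != shape[k])
                new.append((bigger, c + (bigger[k] - 1 - k,)))   # content y - x
        paths = new
    return [c for _, c in paths]

spec4 = contents(4)
assert len(spec4) == 10            # 1 + 3 + 2 + 3 + 1 standard Young tableaux
# closure under the sign twist: negating a weight gives another weight
assert all(tuple(-a for a in c) in set(spec4) for c in spec4)
```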

4.4.3 (Length and inversions). For s ∈ Sn, let ℓ′(s) denote the number of inversions of s. Show that, for the Coxeter generators si = (i, i + 1),

ℓ′(s si) = ℓ′(s) + 1  if s(i) < s(i + 1),   and   ℓ′(s si) = ℓ′(s) − 1  if s(i) > s(i + 1).

Deduce that ℓ′(s) = ℓ(s).

4.4.4 (Some decompositions into irreducibles). Let Vn−1 = V^(n−1,1) be the standard representation of Sn and let Mn be the standard permutation representation. Prove:

(a) Vn−1 ⊗ Mn ≅ 1↑^{Sn}_{Sn−2} ≅ 1 ⊕ Vn−1^{⊕2} ⊕ V^(n−2,1,1) ⊕ V^(n−2,2).

(b) Vn−1 ⊗ Vn−1 ≅ 1 ⊕ Vn−1 ⊕ V^(n−2,1,1) ⊕ V^(n−2,2).
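Since dim V^λ equals the number of standard Young tableaux of shape λ (one GZ-basis vector per λ-tableau), the decomposition in (b) forces an identity of tableau counts, which can be verified numerically. The sketch below counts tableaux via the hook length formula, a standard fact not proved in this chapter; the helper name is ours.

```python
# Consistency check for Exercise 4.4.4(b): compare dimensions on both sides,
# counting standard Young tableaux with the hook length formula (assumed here).
from math import factorial

def num_syt(shape):
    """Number of standard Young tableaux of a given shape (hook length formula)."""
    n = sum(shape)
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    prod = 1
    for i, r in enumerate(shape):
        for j in range(r):
            prod *= (r - j) + (cols[j] - i) - 1   # hook length of box (i, j)
    return factorial(n) // prod

for n in range(4, 9):
    lhs = num_syt((n - 1, 1)) ** 2                # dim (V_{n-1} tensor V_{n-1})
    rhs = 1 + num_syt((n - 1, 1)) + num_syt((n - 2, 1, 1)) + num_syt((n - 2, 2))
    assert lhs == rhs
```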

4.5. The Irreducible Representations

The purpose of this section is to derive some explicit formulae for the action of Sn on the irreducible representations V^λ. In particular, we shall see that the GZ-basis of each V^λ can be scaled so that the matrices of all s ∈ Sn have entries in Q; for a different choice of normalization of the GZ-basis, the matrices will be orthogonal.

4.5.1. Realization over Q

Let λ ⊢ n and let V^λ be the corresponding irreducible representation of Sn as per the Graph Isomorphism Theorem. Since paths 1_{S1} → · · · → V^λ in B are in bijection with paths ∅ → · · · → λ in Y or, equivalently, λ-tableaux, we can rewrite (4.6) in the following form, with uniquely determined 1-dimensional subspaces V^λ_T:

V^λ = ⊕_{T a λ-tableau} V^λ_T.

Specifically, V^λ_T is the GZn-weight space of V^λ for the weight cT, the content of T. We will now select a nonzero vector from each V^λ_T in a coherent manner. To this


end, let πT : V^λ ↠ V^λ_T denote the projection along the sum of the weight spaces V^λ_{T′} with T′ ≠ T, and fix 0 ≠ v(λ) ∈ V^λ_{T(λ)}. Here T(λ) is the special λ-tableau considered in Lemma 4.8. Each λ-tableau T has the form T = sT T(λ) for a unique sT ∈ Sn. Put

(4.32)  vT := πT sT .v(λ).

In the theorem below, we will check that all vT are nonzero and, most importantly, that the action of Sn on the resulting GZ-basis of V^λ is defined over Q. Adopting the notation of Proposition 4.11, we will write

(4.33)  dT,i := (cT,i+1 − cT,i)^{−1},

where cT = (cT,1, . . . , cT,n) is the content of T. Thus, dT,i is a nonzero rational number. Recall also from (4.25) that dT,i ≠ ±1 if and only if the Coxeter generator si ∈ Sn is admissible for T, that is, si T is a λ-tableau.

Theorem 4.15. Let λ ⊢ n. For each λ-tableau T, let sT, vT, and dT,i be as in (4.32), (4.33). Then (vT) is a GZ-basis of V^λ and the action of si = (i, i + 1) ∈ Sn on this basis is as follows:

(i) If dT,i = ±1, then si.vT = dT,i vT.

(ii) If dT,i ≠ ±1, then

si.vT = dT,i vT + v_{si T}                      if sT^{−1}(i) < sT^{−1}(i + 1);
si.vT = dT,i vT + (1 − dT,i²) v_{si T}          if sT^{−1}(i) > sT^{−1}(i + 1).

Proof. Proposition 4.11 in conjunction with (4.23), (4.25) implies that si.V^λ_T ⊆ V^λ_T if si is not an admissible transposition for T, and si.V^λ_T ⊆ V^λ_{si T} + V^λ_T if si is an admissible transposition for T. Consequently, putting ℓ(T) = ℓ(sT) and V^λ =
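The formulas of Theorem 4.15 can be sanity-checked by machine. The sketch below (our own) works out the case n = 3, λ = (2, 1): the two λ-tableaux are T1 with rows (1 2), (3), content (0, 1, −1), and T2 with rows (1 3), (2), content (0, −1, 1); here T(λ) = T1 and s_{T2} = s2, so s_{T2}^{−1}(2) > s_{T2}^{−1}(3) puts v_{T2} in the second case of (ii). We then verify the Coxeter relations of S3 with exact rational arithmetic.

```python
# Matrices of s1, s2 on the basis (v_{T1}, v_{T2}) of V^{(2,1)}, computed from
# Theorem 4.15 for n = 3; we verify the Coxeter relations of S_3.
from fractions import Fraction as F

# d_{T1,1} = (1 - 0)^{-1} = 1 and d_{T2,1} = (-1 - 0)^{-1} = -1: case (i).
S1 = [[F(1), F(0)],
      [F(0), F(-1)]]
# d_{T1,2} = -1/2 (case (ii), first line) and d_{T2,2} = 1/2 (second line):
# s2.v_{T1} = -1/2 v_{T1} + v_{T2},  s2.v_{T2} = 1/2 v_{T2} + (1 - 1/4) v_{T1}.
S2 = [[F(-1, 2), F(3, 4)],
      [F(1),     F(1, 2)]]           # columns are the images of v_{T1}, v_{T2}

def mul(A, B):
    """Product of 2x2 matrices over Q."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[F(1), F(0)], [F(0), F(1)]]
assert mul(S1, S1) == I and mul(S2, S2) == I          # involutions
assert mul(mul(S1, S2), S1) == mul(mul(S2, S1), S2)   # braid relation
```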