
STUDENT MATHEMATICAL LIBRARY Volume 91

An Introduction to Symmetric Functions and Their Combinatorics Eric S. Egge


Editorial Board
Satyan L. Devadoss
Rosa Orellana
John Stillwell (Chair)
Serge Tabachnikov

2010 Mathematics Subject Classification. Primary 05E05, 05A05, 05A19.

For additional information and updates on this book, visit www.ams.org/bookpages/stml-91

Library of Congress Cataloging-in-Publication Data
Cataloging-in-Publication Data has been applied for by the AMS. See http://www.loc.gov/publish/cip/.

Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy select pages for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given. Republication, systematic copying, or multiple reproduction of any material in this publication is permitted only under license from the American Mathematical Society. Requests for permission to reuse portions of AMS publication content are handled by the Copyright Clearance Center. For more information, please visit www.ams.org/publications/pubpermissions. Send requests for translation rights and licensed reprints to reprint-permission@ams.org.

© 2019 by the author. All rights reserved. Printed in the United States of America. The paper used in this book is acid-free and falls within the guidelines established to ensure permanence and durability. Visit the AMS home page at https://www.ams.org/

Contents

Preface  ix
Chapter 1. Symmetric Polynomials, the Monomial Symmetric Polynomials, and Symmetric Functions  1
§1.1. Symmetric Polynomials  2
§1.2. The Monomial Symmetric Polynomials  7
§1.3. Symmetric Functions  10
§1.4. Problems  19
§1.5. Notes  21
Chapter 2. The Elementary, Complete Homogeneous, and Power Sum Symmetric Functions  23
§2.1. The Elementary Symmetric Functions  23
§2.2. The Complete Homogeneous Symmetric Functions  38
§2.3. The Power Sum Symmetric Functions  44
§2.4. Problems  49
Chapter 3. Interlude: Evaluations of Symmetric Functions  53
§3.1. Symmetric Function Identities  53
§3.2. Binomial Coefficients  57
§3.3. Stirling Numbers of the First and Second Kinds  60
§3.4. q-Binomial Coefficients  64
§3.5. Problems  71
§3.6. Notes  73
Chapter 4. Schur Polynomials and Schur Functions  75
§4.1. Schur Functions and Semistandard Tableaux  75
§4.2. Schur Polynomials as Ratios of Determinants  89
§4.3. Problems  111
§4.4. Notes  116
Chapter 5. Interlude: A Rogues' Gallery of Symmetric Functions  119
§5.1. Skew Schur Functions  119
§5.2. Stable Grothendieck Polynomials  129
§5.3. Dual Stable Grothendieck Polynomials  137
§5.4. The Chromatic Symmetric Function  144
§5.5. Problems  153
§5.6. Notes  156
Chapter 6. The Jacobi–Trudi Identities and an Involution on Λ  157
§6.1. The First Jacobi–Trudi Identity  157
§6.2. The Second Jacobi–Trudi Identity  171
§6.3. The Involution ω  178
§6.4. Problems  183
§6.5. Notes  189
Chapter 7. The Hall Inner Product  191
§7.1. Inner Products on Λ^k  191
§7.2. The Hall Inner Product and Cauchy's Formula  196
§7.3. The Hall Inner Product on the Power Sum Symmetric Functions  201
§7.4. Problems  206
§7.5. Notes  207
Chapter 8. The Robinson–Schensted–Knuth Correspondence  209
§8.1. RSK Insertion: Constructing P(π)  210
§8.2. Constructing Q(π)  223
§8.3. Implementing RSK with Growth Diagrams  232
§8.4. Problems  242
§8.5. Notes  245
Chapter 9. Special Products Involving Schur Functions  247
§9.1. The Pieri Rules  248
§9.2. The Murnaghan–Nakayama Rule  256
§9.3. Problems  269
Chapter 10. The Littlewood–Richardson Rule  271
§10.1. Products of Tableaux  272
§10.2. Knuth Equivalence  278
§10.3. The Relationship Between P and word  285
§10.4. The Littlewood–Richardson Rule  291
§10.5. Problems  303
§10.6. Notes  307
Appendix A. Linear Algebra  309
§A.1. Fields and Vector Spaces  309
§A.2. Bases and Linear Transformations  312
§A.3. Inner Products and Dual Bases  316
§A.4. Problems  320
Appendix B. Partitions  323
§B.1. Partitions and a Generating Function  323
§B.2. Problems  325
Appendix C. Permutations  327
§C.1. Permutations as Bijections  327
§C.2. Determinants and Permutations  331
§C.3. Problems  334
Bibliography  337
Index  341

Preface

Many excellent books are devoted entirely or in part to the foundations of the theory of symmetric functions, including the books of Loehr [Loe17], Macdonald [Mac15], Mendes and Remmel [MR15], Sagan [Sag01], and Stanley [Sta99]. These books approach symmetric functions from several different directions, assume various amounts of preparation on the parts of their readers, and are written for a variety of audiences. This book's primary aim is to make the theory of symmetric functions more accessible to undergraduates, taking a combinatorial approach to the subject wherever feasible, and to show the reader both the core of the subject and some of the areas that are active today.

We assume students reading this book have taken an introductory course in linear algebra, where they will have seen bases of vector spaces, transition matrices between bases, linear transformations, and determinants. We also assume readers have taken an introductory course in combinatorics, where they will have seen (integer) partitions and their Ferrers diagrams, binomial coefficients, and ordinary generating functions. For those who would like to refresh their memories of some of these ideas or who think there might be gaps in their linear algebraic or combinatorial background, we have included summaries of the ideas from these areas we will use most often in the appendices. In particular, we have included explanations of dual bases and the relationship between determinants and permutations, since these will play important roles at key moments in our study of symmetric functions, and they may not appear in a first course in linear algebra.

Symmetric functions have deep connections with abstract algebra and, in particular, with the representation theory of the symmetric group, so it is notable that we do not assume the reader has any familiarity with groups. Indeed, we develop the one algebraic fact we need—that each permutation is a product of adjacent transpositions—from scratch in Appendix C. Leaving out the relationship between symmetric functions and representations of the symmetric group makes the book accessible to a broader audience. But to see the subject whole, one must also explore the relationship between symmetric functions and representation theory. So we encourage interested students, after reading this book, to learn about representations of the symmetric group, and how they are related to symmetric functions. Two sources for this material are the books of James [Jam09] and Sagan [Sag01].

As in other areas of mathematics, over the past several decades the study of symmetric functions has benefited from the use of computers and from the dramatic increase in the amount of available computing power. Indeed, many contemporary symmetric functions researchers use software, such as Maple, Mathematica, and SageMath, for large symmetric functions computations. These computations, in turn, often lead to new conjectures, new ideas, and new directions for exploration. We take a dual approach to the use of technology in the study of symmetric functions. On the one hand, we do not assume the reader has any computer technology available at all as they read: a patient reader will be able to work out all of the examples, solve all of the problems, and follow all of the proofs without computer assistance.
On the other hand, we encourage readers to become proﬁcient with some kind of symmetric functions software. Speciﬁc programs and platforms come and go, so we do not recommend any platform or software in particular. But we do recommend that you ﬁnd a way to have your computer do your symmetric functions computations for you and to use it to explore the subject.


Many of the main results in the theory of symmetric functions can be understood in several ways, and they have a variety of proofs. Whenever possible, we have given a proof using combinatorial ideas. In particular, we use families of lattice paths and a tail-swapping involution to prove the Jacobi–Trudi identities, we use the Robinson– Schensted–Knuth (RSK) correspondence to prove Cauchy’s formula, and we use RSK insertion, jeu de taquin, and Knuth equivalence to prove the Littlewood–Richardson rule. The study of symmetric functions can motivate these ideas and constructions, but we ﬁnd (and we think the reader will ﬁnd) they are rich and elegant, and they will reward study for its own sake. The study of symmetric functions is old, dating back at least to the study of certain determinants in the mid- to late-nineteenth century, but it remains an active area of research today. While our primary goal is to introduce readers to the core results in the ﬁeld, we also want to convey a sense of some of the more recent activity in the area. To do this, we spend Chapter 5 looking at three areas of contemporary research interest. In Section 5.1 we introduce skew Schur functions, and we discuss the problem of ﬁnding pairs of skew tableaux with the same skew Schur function. In Sections 5.2 and 5.3 we introduce the stable and dual stable Grothendieck functions, which are analogues of the Schur functions. Finding analogues of results about Schur functions for these symmetric functions is a broad and lively area of current research. Finally, in Section 5.4 we discuss Stanley’s chromatic symmetric functions. These symmetric functions, which are deﬁned for graphs, are the subject of at least two longstanding open questions ﬁrst raised by Stanley. We introduce one of these questions, which is whether there are two nonisomorphic trees with the same chromatic symmetric function. 
One of the best ways to learn mathematics is to do mathematics, so in many cases we have tried to describe not only what a result says and how we prove it, but also how we might find it. In particular, we introduce several topics by raising a natural question, looking at some small examples, and then using the results of those examples to formulate a conjecture. This process often starts with questions arising from linear algebra, which we then use combinatorial ideas to answer, highlighting the way the two subjects interact to produce new mathematics. We could use a different expository approach to cover the material more efficiently, but we hope this approach brings the subject to life in a way a more concise treatment might not.

To get the most out of this book, we suggest reading actively, with a pen and paper at hand. When we generate data to answer a question, try to guess the answer yourself before reading ours. Generate additional data of your own to support (or refute) your conjecture, and to verify patterns you've observed. Similarly, we intend the examples to be practice problems. Trying to solve them yourself before reading our solutions will strengthen your grasp of the core ideas, and prepare you for the ideas to come.

Speaking of doing mathematics, we have also included a variety of problems at the end of each chapter. Some of these problems are designed to test and deepen the reader's understanding of the ideas, objects, and methods introduced in the chapter. Others give the reader a chance to explore subjects related to those in the chapter that we didn't have enough space to cover in detail. A few of the problems of these types ask the reader to prove results which will be used later in the book. Finally, some of the problems are there to tell the reader about bigger results and ideas related to those in the chapter. A creative and persistent reader will be able to solve many of the problems, but those of this last type might require inventing or reproducing entirely new methods and approaches.

This book has benefitted throughout its development from the thoughtful and careful attention, ideas, and suggestions of a variety of readers. First among these are the Carleton students who have used versions of this book as part of a course or a senior capstone project. The first of these students were Amy Becker '11, Lilly Betke-Brunswick '11, Mary Bushman '11, Gabe Davis '11, Alex Evangelides '11, Nate King '11, Aaron Maurer '11, Julie Michelman '11, Sam Tucker '11, and Anna Zink '11, who used this book as part of a senior capstone project in 2010–11.
Back then it wasn't really a book; it was just a skeletal set of lecture notes. Based on their feedback I wrote an updated and more detailed version, which I used as a text for a seminar in the combinatorics of symmetric functions in the fall of 2013. The students in this seminar were Leo Betthauser '14, Ben Breen '14, Cora Brown '14, Greg Michel '14, Dylan Peifer '14, Kailee Rubin '14, Alissa Severson '14, Aaron Suiter '15, Jon Ver Steegh '14, and Tessa Whalen-Wagner '15. Their interest and enthusiasm encouraged me to add even more material and detail, which led to a nearly complete version of the book in late 2018. In the winter and spring of early 2019, Patty Commins '19, Josh Gerstein '19, Kiran Tomlinson '19, and Nick Vetterli '19 used this version as the basis for a senior capstone project. All three groups of students pointed out various typographical errors, generously shared their comments, criticisms, corrections, and suggestions, and suggested several ways in which the material could be presented more clearly and efficiently. I am grateful to all of them for their care, interest, and enthusiasm.

Also crucial in the development of this book were several other readers who shared their ideas, corrections, and suggestions. Jeff Remmel showed me the combinatorial approach I use to prove the Murnaghan–Nakayama rule. Becky Patrias suggested including the results involving stable and dual stable Grothendieck polynomials and elegant tableaux. And several anonymous reviewers provided very thorough and detailed comments, corrections, and suggestions. I am grateful for the time and effort all of these people put in to improve this book. Finally, Eko Hironaka, senior editor of the AMS book program, has been patiently but persistently encouraging me to finish this book for longer than I care to admit. Thank you, Eko, for not giving up on it.

In spite of everyone's best efforts, it is likely some errors remain. These are all mine. There are also undoubtedly still many ways this book could be improved. You can find more information related to the book, along with a list of known errors, at www.ericegge.net/cofsf. I would very much like to hear from you: if you have comments or suggestions, or find an error which does not already appear on the list, please email me at [email protected]. And thanks for reading!

Eric S. Egge
July 2019

Chapter 1

Symmetric Polynomials, the Monomial Symmetric Polynomials, and Symmetric Functions

Symmetric polynomials and symmetric functions arise naturally in a variety of settings. Certain symmetric functions appear as characters of polynomial irreducible representations of the general linear groups. The algebra of symmetric polynomials in n variables is isomorphic to the algebra generated by the characters of the irreducible representations of the symmetric group S_n. Certain graph invariants related to proper colorings of graphs are symmetric functions, and recently knot invariants like the Jones polynomial have been found to be related to symmetric functions. A variety of symmetric functions arise as generating functions for various families of combinatorial objects, and certain generalizations of symmetric polynomials represent cohomology classes of Schubert cycles in flag varieties. In short, symmetric polynomials and symmetric functions are worth studying for the many places they turn up in algebra, geometry, topology, graph theory, combinatorics, and elsewhere. But symmetric functions and symmetric polynomials are also worth studying for their own structure, and for the rich way the algebraic questions they generate have natural and elegant combinatorial answers. Our goal will be to reveal as much of this structure as we can, without worrying too much about the rich connections symmetric functions have elsewhere.

In this chapter we start our study of symmetric functions and symmetric polynomials with the symmetric polynomials. We explain what it means for a polynomial to be symmetric, we describe how to construct symmetric functions, and we explain how symmetric functions are related to symmetric polynomials. We will see that the set of all symmetric functions with coefficients in Q is an infinite-dimensional vector space over Q, and we will use a natural construction to give the first of several bases we will see for this space.

1.1. Symmetric Polynomials

We often think of each permutation π ∈ S_n as a bijection from [n] := {1, 2, . . . , n} to [n], meaning π permutes the elements of [n]. (See Appendix C for background on permutations.) If instead of [n] we have a set of variables x_1, . . . , x_n, then we can also view π as a permutation of these variables, so that π(x_j) = x_{π(j)} for 1 ≤ j ≤ n. Pushing this a bit further, we might eventually want to combine our variables into polynomials, and then try to view π as a function on these polynomials. This turns out to be reasonably easy to do: if f(x_1, x_2, . . . , x_n) is a polynomial in x_1, . . . , x_n, then we define π(f) by setting

π(f) := f(x_{π(1)}, x_{π(2)}, . . . , x_{π(n)}).

That is, π acts on f by permuting (the subscripts of) its variables.

Example 1.1. Suppose that in one-line notation we have π = 2413 and σ = 4312. If we also have f(x_1, x_2, x_3, x_4) = x_1^2 + 3x_2x_4 − 2x_3^2x_4 and g(x_1, x_2, x_3, x_4) = 3x_3x_4^2, then compute π(f), π(g), π(f + g), π(f) + π(g), π(fg), π(f)π(g), σ(f), π(σ(f)), and (πσ)(f).


Solution. By definition we have

π(f) = f(x_{π(1)}, x_{π(2)}, x_{π(3)}, x_{π(4)}) = x_{π(1)}^2 + 3x_{π(2)}x_{π(4)} − 2x_{π(3)}^2 x_{π(4)}.

Since π(1) = 2, π(2) = 4, π(3) = 1, and π(4) = 3, we find π(f) = x_2^2 + 3x_4x_3 − 2x_1^2x_3. Similarly, π(g) = 3x_1x_3^2.

To compute π(f + g), first note that f + g = x_1^2 + 3x_2x_4 − 2x_3^2x_4 + 3x_3x_4^2, so π(f + g) = x_2^2 + 3x_4x_3 − 2x_1^2x_3 + 3x_1x_3^2. On the other hand, we have π(f) = x_2^2 + 3x_4x_3 − 2x_1^2x_3 and π(g) = 3x_1x_3^2, so π(f) + π(g) = x_2^2 + 3x_4x_3 − 2x_1^2x_3 + 3x_1x_3^2.

To compute π(fg), first note that fg = 3x_1^2x_3x_4^2 + 9x_2x_3x_4^3 − 6x_3^3x_4^3, so π(fg) = 3x_1x_2^2x_3^2 + 9x_1x_3^3x_4 − 6x_1^3x_3^3. On the other hand, from our previous computations we find π(f)π(g) = 3x_1x_2^2x_3^2 + 9x_1x_3^3x_4 − 6x_1^3x_3^3.

Computing as we did for π, we find σ(f) = x_4^2 + 3x_2x_3 − 2x_1^2x_2. If we now apply π to σ(f), we find π(σ(f)) = x_3^2 + 3x_1x_4 − 2x_2^2x_4. On the other hand, πσ = 3124, so (πσ)(f) = x_3^2 + 3x_1x_4 − 2x_2^2x_4. □

As we can already see in Example 1.1, we will be considering polynomials in many variables. To make our notation less cumbersome, for all n ≥ 1 we will write X_n to denote the set of variables x_1, . . . , x_n.

The computations in Example 1.1 suggest that the action of the permutations on polynomials is compatible with our usual arithmetic operations on polynomials, as well as with composition of permutations. In our next result we make this observation precise.

Proposition 1.2. Suppose f(X_n) and g(X_n) are polynomials, c is a constant, and π, σ ∈ S_n are permutations. Then
(i) π(cf) = cπ(f);
(ii) π(f + g) = π(f) + π(g);
(iii) π(fg) = π(f)π(g);
(iv) (πσ)(f) = π(σ(f)).


Proof. (i) This follows from the fact that π permutes the subscripts of the variables without changing any coefficients.

(ii) This follows from the fact that if f is a sum of monomials t_j, then π(f) is the sum of the monomials π(t_j).

(iii) Suppose first that f and g are each a single term, so that f(X_n) = a x_1^{a_1} ⋯ x_n^{a_n} and g(X_n) = b x_1^{b_1} ⋯ x_n^{b_n} for constants a, b, a_1, . . . , a_n, b_1, . . . , b_n. Then we have

π(fg) = π(ab x_1^{a_1 + b_1} ⋯ x_n^{a_n + b_n})
      = ab x_{π(1)}^{a_1 + b_1} ⋯ x_{π(n)}^{a_n + b_n}
      = a x_{π(1)}^{a_1} ⋯ x_{π(n)}^{a_n} b x_{π(1)}^{b_1} ⋯ x_{π(n)}^{b_n}
      = π(f)π(g),

and the result holds in this case. To show the result holds in general, suppose f(X_n) = Y_1 + ⋯ + Y_k and g(X_n) = Z_1 + ⋯ + Z_m, where each Y_j and each Z_l is a monomial in x_1, . . . , x_n. Using (ii) and the fact that (iii) holds for monomials, we find

π(fg) = π((∑_{j=1}^{k} Y_j)(∑_{l=1}^{m} Z_l))
      = π(∑_{j=1}^{k} ∑_{l=1}^{m} Y_j Z_l)
      = ∑_{j=1}^{k} ∑_{l=1}^{m} π(Y_j Z_l)
      = ∑_{j=1}^{k} ∑_{l=1}^{m} π(Y_j)π(Z_l)
      = (∑_{j=1}^{k} π(Y_j))(∑_{l=1}^{m} π(Z_l))
      = π(∑_{j=1}^{k} Y_j) π(∑_{l=1}^{m} Z_l)
      = π(f)π(g),

which is what we wanted to prove.

(iv) By definition we have

(πσ)(f) = f(x_{(πσ)(1)}, . . . , x_{(πσ)(n)})
        = f(x_{π(σ(1))}, . . . , x_{π(σ(n))})
        = π(f(x_{σ(1)}, . . . , x_{σ(n)}))
        = π(σ(f)),

which is what we wanted to prove. □
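In the spirit of the preface's suggestion to experiment by computer, the four parts of Proposition 1.2 are easy to spot-check in a few lines of code. The sketch below is our own, not the book's: a polynomial in x_1, . . . , x_n is stored as a dictionary sending an exponent tuple (a_1, . . . , a_n) to its coefficient, and a permutation is stored as a tuple giving its one-line notation.

```python
# A polynomial in x_1, ..., x_n is a dict from exponent tuples
# (a_1, ..., a_n) to coefficients; a permutation pi in S_n is a tuple
# giving its one-line notation.  (This representation is our own choice,
# made for illustration.)

def act(pi, f):
    """pi(f) := f(x_{pi(1)}, ..., x_{pi(n)}), so the exponent carried
    by x_j moves to position pi(j)."""
    out = {}
    for exps, c in f.items():
        new = [0] * len(exps)
        for j, a in enumerate(exps, start=1):
            new[pi[j - 1] - 1] = a
        out[tuple(new)] = out.get(tuple(new), 0) + c
    return {e: c for e, c in out.items() if c}

def add(f, g):
    out = dict(f)
    for e, c in g.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c}

def mul(f, g):
    out = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
    return {e: c for e, c in out.items() if c}

def compose(pi, sigma):
    """(pi sigma)(j) = pi(sigma(j))."""
    return tuple(pi[sigma[j] - 1] for j in range(len(pi)))

# The data of Example 1.1: pi = 2413, sigma = 4312,
# f = x_1^2 + 3 x_2 x_4 - 2 x_3^2 x_4, and g = 3 x_3 x_4^2.
pi, sigma = (2, 4, 1, 3), (4, 3, 1, 2)
f = {(2, 0, 0, 0): 1, (0, 1, 0, 1): 3, (0, 0, 2, 1): -2}
g = {(0, 0, 1, 2): 3}
```

With these definitions, act(pi, f) returns {(0, 2, 0, 0): 1, (0, 0, 1, 1): 3, (2, 0, 1, 0): -2}, which is x_2^2 + 3x_3x_4 − 2x_1^2x_3 as in the solution to Example 1.1, and identities such as act(pi, mul(f, g)) == mul(act(pi, f), act(pi, g)) can be checked directly with ==.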

After computing π(f) for various permutations π and polynomials f, we eventually notice f and π(f) are usually different, but we sometimes get π(f) = f. When π(f) = f, we say f is invariant under π. If f is invariant under π, then we might also say f is fixed by π, or π fixes f.

Example 1.3. Find all permutations σ ∈ S_3 which fix x_1x_2^2x_3 + x_1^2x_2x_3.

Solution. Every polynomial is invariant under the identity permutation. If we check the other five permutations in S_3, then we find x_1x_2^2x_3 + x_1^2x_2x_3 is also invariant under (12), but not under any other permutation. □

Every permutation has its own set of invariant polynomials, but some polynomials are invariant under every permutation. These are the polynomials we plan to study.

Definition 1.4. We say a polynomial f(X_n) is a symmetric polynomial in x_1, . . . , x_n whenever π(f) = f for all π ∈ S_n. We write Λ(X_n) to denote the set of all symmetric polynomials in x_1, . . . , x_n with coefficients in Q.

We leave it as an exercise to use Proposition 1.2 to show Λ(X_n) is a vector space over Q for all n, and that in fact Λ(X_n) is infinite dimensional for all n ≥ 1.

Example 1.5. Which of the following are symmetric polynomials in x_1, x_2, x_3? Which are symmetric polynomials in x_1, x_2, x_3, x_4?
(a) 3x_1x_2x_3 + x_1x_2 + x_2x_3 + x_1x_3 + 5.
(b) x_1^2x_2 + x_2^2x_3 + x_1x_3^2.


Solution. (a) This polynomial is a symmetric polynomial in x_1, x_2, x_3, but it is not invariant under the permutation (14), so it is not a symmetric polynomial in x_1, x_2, x_3, x_4.
(b) The image of this polynomial under the permutation (12) is x_2^2x_1 + x_1^2x_3 + x_3^2x_2, which is not the original polynomial, so it is not a symmetric polynomial in either x_1, x_2, x_3 or x_1, x_2, x_3, x_4. □

When we apply a permutation π to a polynomial f, it may alter the terms of f dramatically. But some properties will always be preserved. For example, no matter how much a term is changed by π, its total degree remains the same. As a result, it is useful to break our symmetric polynomials into pieces in which every term has the same total degree.

Definition 1.6. We say a polynomial f(X_n) in which every term has total degree exactly k is homogeneous of degree k. We write Λ^k(X_n) to denote the set of all symmetric polynomials in x_1, . . . , x_n which are homogeneous of degree k.

Note that for every nonnegative integer k, every term in the polynomial 0 has degree k (since 0 has no terms), so 0 ∈ Λ^k(X_n) for all k ≥ 0 and all n ≥ 1. Note that if f is any polynomial in x_1, . . . , x_n, then we can group the terms of f by their total degree, so f can be written uniquely as a sum f_0 + f_1 + ⋯, where f_k is homogeneous of degree k for each k ≥ 0. If f happens to be a symmetric polynomial in x_1, . . . , x_n, then for every π ∈ S_n, we have

f_0 + f_1 + ⋯ = f = π(f) = π(f_0 + f_1 + ⋯) = π(f_0) + π(f_1) + ⋯.

Since π does not change the total degree of any monomial, we must have π(f_j) = f_j for every j ≥ 0. That is, if f is a symmetric polynomial in x_1, . . . , x_n and f = f_0 + f_1 + ⋯ is the decomposition of f into its homogeneous parts f_j, then each f_j is also a symmetric polynomial in x_1, . . . , x_n. With this in mind, we often restrict our attention to the homogeneous symmetric polynomials of a given total degree.
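Definition 1.4 can also be tested by brute force: apply every permutation in S_n and see whether the polynomial survives. A minimal sketch of this check, using our own dict-of-exponent-tuples representation (not the book's notation), applied to the polynomials of Example 1.5:

```python
from itertools import permutations

# f is symmetric iff pi(f) = f for every pi in S_n (Definition 1.4).
# Polynomials are dicts sending exponent tuples to coefficients.

def act(pi, f):
    """pi(f) := f(x_{pi(1)}, ..., x_{pi(n)})."""
    out = {}
    for exps, c in f.items():
        new = [0] * len(exps)
        for j, a in enumerate(exps, start=1):
            new[pi[j - 1] - 1] = a
        out[tuple(new)] = c   # pi relocates terms; no collisions occur
    return out

def is_symmetric(f):
    n = len(next(iter(f)))
    return all(act(pi, f) == f for pi in permutations(range(1, n + 1)))

# Example 1.5(a) read as a polynomial in x_1, x_2, x_3 ...
a3 = {(1, 1, 1): 3, (1, 1, 0): 1, (0, 1, 1): 1, (1, 0, 1): 1, (0, 0, 0): 5}
# ... and the same polynomial viewed in x_1, x_2, x_3, x_4.
a4 = {e + (0,): c for e, c in a3.items()}
# Example 1.5(b): x_1^2 x_2 + x_2^2 x_3 + x_1 x_3^2.
b3 = {(2, 1, 0): 1, (0, 2, 1): 1, (1, 0, 2): 1}
```

Here is_symmetric(a3) is True while is_symmetric(a4) and is_symmetric(b3) are False, matching the solution: polynomial (b) is invariant under the 3-cycles but not under (12).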


Example 1.7. We saw in Example 1.5 that 3x_1x_2x_3 + x_1x_2 + x_2x_3 + x_1x_3 + 5 ∈ Λ(X_3). Write this symmetric polynomial as a sum f_0 + f_1 + ⋯, where f_k ∈ Λ^k(X_3) for all k ≥ 0.

Solution. In general f_k is the sum of all of the terms of total degree k, so in this case we have f_0 = 5, f_1 = 0, f_2 = x_1x_2 + x_2x_3 + x_1x_3, f_3 = 3x_1x_2x_3, and f_k = 0 for k ≥ 4. □

As we did for Λ(X_n), we leave it as an exercise to show Λ^k(X_n) is a vector space over Q for all k ≥ 0 and all n ≥ 1. In contrast with Λ(X_n), we will see the space Λ^k(X_n) is finite dimensional.
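The decomposition into homogeneous parts is mechanical: group terms by total degree. A short sketch (ours, using the same illustrative dict-of-exponent-tuples representation) that recovers the answer to Example 1.7:

```python
# Split a polynomial (a dict from exponent tuples to coefficients) into
# its homogeneous parts f_0, f_1, ..., grouping terms by total degree.

def homogeneous_parts(f):
    """Return {k: f_k}; degrees with f_k = 0 simply do not appear."""
    parts = {}
    for exps, c in f.items():
        parts.setdefault(sum(exps), {})[exps] = c
    return parts

# The symmetric polynomial of Example 1.7.
f = {(1, 1, 1): 3, (1, 1, 0): 1, (0, 1, 1): 1, (1, 0, 1): 1, (0, 0, 0): 5}
parts = homogeneous_parts(f)
```

Here parts[0], parts[2], and parts[3] recover f_0 = 5, f_2 = x_1x_2 + x_2x_3 + x_1x_3, and f_3 = 3x_1x_2x_3 from the solution, and no key appears for the degrees with f_k = 0.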

1.2. The Monomial Symmetric Polynomials

So far we have seen exactly one symmetric polynomial, so it is natural to ask for more examples. One simple way to construct more symmetric polynomials is to be even more demanding: pick a set of variables x_1, . . . , x_n, pick a monomial in those variables, and try to construct a symmetric polynomial that includes your monomial as a term.

Example 1.8. Find a symmetric polynomial f ∈ Λ(X_3) that includes x_1^3x_2 as one of its terms. Similarly, find a symmetric polynomial g ∈ Λ(X_3) that includes 3x_1^2x_2x_3^2 as one of its terms. Use as few other monomials as possible in both f and g.

Solution. If f is a symmetric polynomial, then it must be invariant under (12). So if one of its terms is x_1^3x_2, then another of its terms must be (12)(x_1^3x_2) = x_1x_2^3. Similarly, f must also include the terms x_2x_3^3, x_1^3x_3, x_1x_3^3, and x_2^3x_3. On the other hand,

x_1^3x_2 + x_1x_2^3 + x_2x_3^3 + x_1^3x_3 + x_1x_3^3 + x_2^3x_3

is a symmetric polynomial, so it must be the one we are looking for.

When we reason in the same way for g as we did for f, we find we get some duplicate terms. Since we do not need these duplicates, we find g = 3x_1^2x_2x_3^2 + 3x_1x_2^2x_3^2 + 3x_1^2x_2^2x_3. □

When we look more closely at our work in Example 1.8, we see we can construct a symmetric polynomial in x_1, . . . , x_n by starting with a monomial in these variables, and then adding the distinct images of this monomial under the permutations in S_n. If one monomial is the image of another under some permutation, then both monomials will give us the same symmetric polynomial under this construction. This means we can rearrange the factors in our monomial to ensure that when we list the variables in the order x_1, . . . , x_n, their exponents form a partition. (See Appendix B for background on partitions.) With this in mind, we set some notation and terminology for the symmetric polynomials we've found.

Definition 1.9. Suppose n ≥ 1 and λ is a partition. Then the monomial symmetric polynomial m_λ(X_n) indexed by λ is the sum of the monomial ∏_{j=1}^{l(λ)} x_j^{λ_j} and all of its distinct images under the elements of S_n. Here we take x_j = 0 for all j > n, so if l(λ) > n, then m_λ(X_n) = 0.

We will often have partitions as subscripts, as we do for the monomial symmetric polynomials. When all of the parts of these partitions are less than 10, we will save some space by omitting the commas and parentheses. So, for example, we will write m_{4431}(X_n) instead of m_{(4,4,3,1)}(X_n).

Example 1.10. Compute the four monomial symmetric polynomials m_{21}(X_2), m_{21}(X_3), m_{3311}(X_3), and m_{3311}(X_4).

Solution. The monomial x_1^2x_2 has only one image (other than itself) under the elements of S_2, namely x_1x_2^2. Therefore, m_{21}(X_2) = x_1^2x_2 + x_1x_2^2. Notice that if we hold λ constant and increase the number of variables, then we get more images of x_1^2x_2, and therefore a new monomial symmetric polynomial, namely

m_{21}(X_3) = x_1^2x_2 + x_1^2x_3 + x_1x_2^2 + x_1x_3^2 + x_2^2x_3 + x_2x_3^2.

The length of the partition (3^2, 1^2) is greater than three, so m_{3311}(X_3) is 0. The requirement that we take only distinct images of our monomial comes into play when we compute m_{3311}(X_4), which turns out to be

x_1^3x_2^3x_3x_4 + x_1^3x_2x_3^3x_4 + x_1^3x_2x_3x_4^3 + x_1x_2^3x_3^3x_4 + x_1x_2^3x_3x_4^3 + x_1x_2x_3^3x_4^3.

□
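Definition 1.9 translates directly into code: the distinct images of the starting monomial are exactly the distinct rearrangements of its exponent tuple, padded with zeros. A sketch (ours, with polynomials again stored as dicts from exponent tuples to coefficients):

```python
from itertools import permutations

# m_lambda(X_n), computed as in Definition 1.9: the sum, with
# coefficient 1, of the distinct rearrangements of the exponent tuple
# (lambda_1, ..., lambda_l, 0, ..., 0).

def m(lam, n):
    if len(lam) > n:        # l(lambda) > n forces m_lambda(X_n) = 0
        return {}
    base = tuple(lam) + (0,) * (n - len(lam))
    # set(...) keeps only the distinct images, as the definition requires
    return {alpha: 1 for alpha in set(permutations(base))}
```

This reproduces Example 1.10: m((2, 1), 2) is {(2, 1): 1, (1, 2): 1}, m((2, 1), 3) has six terms, m((3, 3, 1, 1), 3) is the zero polynomial {}, and m((3, 3, 1, 1), 4) has only six distinct terms even though all 4! = 24 permutations were applied.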


We saw in our solution to Example 1.10 that the polynomials m21 (X2 ), m21 (X3 ), m3311 (X3 ), and m3311 (X4 ) are all symmetric polynomials in their respective variables, and it may already be clear that mλ (Xn ) is a symmetric polynomial in x1 , . . . , xn for all n and all λ. Nevertheless, the following proof of this fact uses ideas and techniques which will be useful later on, so we include it here. Proposition 1.11. For all positive integers n and all partitions λ, the polynomial mλ (Xn ) is a symmetric polynomial in x1 , . . . , xn . Proof. When n < l(λ), we have mλ (Xn ) = 0, in which case the result is clear, so we assume n ≥ l(λ). In view of Proposition C.5, it is suﬃcient to show σj (mλ (Xn )) = mλ (Xn ) for all j with 1 ≤ j ≤ n − 1, where σj is the adjacent transposition (j, j + 1). To do this, it is suﬃcient to show that every term xµ1 1 ⋅ ⋅ ⋅ xµnn has the same coeﬃcient in σj (mλ (Xn )) as it has in mλ (Xn ). Note that the coeﬃcient of xµ1 1 ⋅ ⋅ ⋅ xµnn in mλ (Xn ) is 1 if µ1 , . . . , µn is a reordering of λ1 , . . . , λn and 0 otherwise. On the other hand, the coeﬃcient of xµ1 1 ⋅ ⋅ ⋅ xµnn in σj (mλ (Xn )) is the coeﬃcient µj µ ⋅ ⋅ ⋅ xµnn in mλ (Xn ). Furthermore, this coeﬃcient of xµ1 1 ⋅ ⋅ ⋅ xj j+1 xj+1 is 1 if µ1 , . . . , µj+1 , µj , . . . , µn is a reordering of λ1 , . . . , λn and 0 otherwise. But µ1 , . . . , µn is a reordering of λ1 , . . . , λn if and only if µ1 , . . . , µj+1 , µj , . . . , µn is a reordering of λ1 , . . . , λn , and the result follows. □ Note that if λ ⊢ k, then the monomial symmetric polynomial mλ (Xn ) is homogeneous of degree k, so mλ (Xn ) ∈ Λk (Xn ). In fact, immediately after Example 1.7 we promised we would see that Λk (Xn ) is ﬁnite dimensional. We conclude this section by keeping that promise, showing that if n ≥ k, then the monomial symmetric polynomials form a basis for this space. Proposition 1.12. 
If $n \ge 1$, $k \ge 0$, and $n \ge k$, then the set $\{m_\lambda(X_n) \mid \lambda \vdash k\}$ of monomial symmetric polynomials is a basis for $\Lambda^k(X_n)$. In particular, $\dim \Lambda^k(X_n) = p(k)$, the number of partitions of $k$.


Proof. First observe that for any monomial $x_1^{\alpha_1} \cdots x_n^{\alpha_n}$, there is only one partition $\lambda$ whose parts are a rearrangement of $\alpha_1, \dots, \alpha_n$, and $x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ is a term in $m_\lambda(X_n)$. In particular, if $\lambda$ and $\mu$ are partitions with $\lambda \ne \mu$, then $m_\lambda(X_n)$ and $m_\mu(X_n)$ have no terms in common. Therefore, since each $m_\lambda(X_n)$ is nonzero, $\sum_{\lambda \vdash k} a_\lambda m_\lambda(X_n) = 0$ can only occur if $a_\lambda = 0$ for all $\lambda \vdash k$. In other words, $\{m_\lambda(X_n) \mid \lambda \vdash k\}$ is linearly independent.

To see that this set also spans $\Lambda^k(X_n)$, suppose $f \in \Lambda^k(X_n)$. If $f = 0$, then $f = \sum_{\lambda \vdash k} 0 \cdot m_\lambda(X_n)$, so suppose $f \ne 0$. We argue by induction on the number of terms in $f$. Since $f$ is symmetric, $f$ has a term of the form $\alpha \prod_{j=1}^{n} x_j^{\mu_j}$ for some $\mu \vdash k$ and some constant $\alpha$. Moreover, by the symmetry of $f$ all of the distinct images of this monomial under the action of $S_n$ also appear in $f$. Therefore $f - \alpha m_\mu(X_n) \in \Lambda^k(X_n)$ and $f - \alpha m_\mu(X_n)$ has fewer terms than $f$. By induction $f - \alpha m_\mu(X_n)$ is a linear combination of the elements of $\{m_\lambda(X_n) \mid \lambda \vdash k\}$, and thus $f$ is as well. The fact that $\dim \Lambda^k(X_n) = p(k)$, the number of partitions of $k$, now follows from the fact that there is exactly one monomial symmetric polynomial in $\Lambda^k(X_n)$ for each partition of $k$. □

We note that Proposition 1.12 also tells us the monomial symmetric polynomials form a basis for $\Lambda(X_n)$.
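The spanning argument in the proof of Proposition 1.12 is effectively an algorithm: repeatedly pick a term whose exponents, sorted, give a partition $\mu$, and subtract off the corresponding multiple of $m_\mu(X_n)$. A Python sketch of that loop (names are ours; polynomials are dicts from exponent tuples to coefficients):

```python
from itertools import permutations

def m_poly(lam, n):
    """m_lambda(X_n) as a dict {exponent tuple: coefficient}."""
    if len(lam) > n:
        return {}
    padded = tuple(lam) + (0,) * (n - len(lam))
    return {e: 1 for e in set(permutations(padded))}

def in_m_basis(f, n):
    """Expand a symmetric polynomial f in the m-basis, following the
    induction in the proof of Proposition 1.12."""
    f, coeffs = dict(f), {}
    while f:
        expo, alpha = next(iter(f.items()))
        mu = tuple(x for x in sorted(expo, reverse=True) if x > 0)
        coeffs[mu] = coeffs.get(mu, 0) + alpha
        # remove the whole S_n-orbit at once; symmetry of f guarantees
        # every orbit mate is present with the same coefficient alpha
        for e, c in m_poly(mu, n).items():
            rest = f.get(e, 0) - alpha * c
            if rest:
                f[e] = rest
            else:
                f.pop(e, None)
    return coeffs

# (x1 + x2)^2 = x1^2 + 2 x1 x2 + x2^2 = m_2(X_2) + 2 m_11(X_2)
square = {(2, 0): 1, (1, 1): 2, (0, 2): 1}
print(in_m_basis(square, 2))  # {(2,): 1, (1, 1): 2}
```

Termination relies on the same observation as the proof: subtracting $\alpha m_\mu(X_n)$ strictly decreases the number of terms whenever $f$ is symmetric.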

1.3. Symmetric Functions

In Proposition 1.12 we saw that if $n \ge k$, then the dimension of $\Lambda^k(X_n)$ is independent of $n$. This fact is an example of a more general phenomenon: if we have enough variables, then the algebraic properties of $\Lambda^k(X_n)$ do not depend on exactly how many variables we have. To see another illustration of this central general principle, suppose $n \ge 2$ and consider the product $m_{11}(X_n) m_{21}(X_n)$. When $n = 2$, we have
$$m_{11}(X_2) m_{21}(X_2) = x_1^3 x_2^2 + x_1^2 x_2^3.$$


Table 1.1. The product $m_{11}(X_n) m_{21}(X_n)$ for $2 \le n \le 7$, as a linear combination of monomial symmetric polynomials

n   $m_{11}(X_n) m_{21}(X_n)$
2   $m_{32}$
3   $m_{32} + 2m_{311} + 2m_{221}$
4   $m_{32} + 2m_{311} + 2m_{221} + 3m_{2111}$
5   $m_{32} + 2m_{311} + 2m_{221} + 3m_{2111}$
6   $m_{32} + 2m_{311} + 2m_{221} + 3m_{2111}$
7   $m_{32} + 2m_{311} + 2m_{221} + 3m_{2111}$

When $n = 3$ we have
$$m_{11}(X_3) m_{21}(X_3) = x_1^3 x_2^2 + x_1^2 x_2^3 + x_1^3 x_3^2 + x_1^2 x_3^3 + x_2^3 x_3^2 + x_2^2 x_3^3 + 2x_1^3 x_2 x_3 + 2x_1 x_2^3 x_3 + 2x_1 x_2 x_3^3 + 2x_1^2 x_2^2 x_3 + 2x_1^2 x_2 x_3^2 + 2x_1 x_2^2 x_3^2.$$
Our products are all homogeneous symmetric polynomials of degree 5, so we can write them as linear combinations of the monomial symmetric polynomials. When we do this for $n = 2$ and $n = 3$, we find
$$m_{11}(X_2) m_{21}(X_2) = m_{32}(X_2)$$
and
$$m_{11}(X_3) m_{21}(X_3) = m_{32}(X_3) + 2m_{311}(X_3) + 2m_{221}(X_3).$$
In Table 1.1 we write $m_{11}(X_n) m_{21}(X_n)$ as a linear combination of the monomial symmetric polynomials, suppressing the arguments $X_n$ in the answers. It appears from these data that if $n \ge 4$, then
$$m_{11}(X_n) m_{21}(X_n) = m_{32}(X_n) + 2m_{311}(X_n) + 2m_{221}(X_n) + 3m_{2111}(X_n).$$
In addition, it seems that if $n < 4$, then we have the same sum, except we remove those monomial symmetric polynomials whose indexing partition has more than $n$ parts. We can prove this happens for any product $m_\lambda(X_n) m_\mu(X_n)$ by using a simple relationship between $m_\lambda(x_1, \dots, x_n)$ and $m_\lambda(x_1, \dots, x_n, 0)$.

Proposition 1.13. For any partition $\lambda$, if $n \ge l(\lambda)$, then
$$m_\lambda(x_1, \dots, x_n, 0) = m_\lambda(x_1, \dots, x_n).$$


More generally, if $n \ge l(\lambda)$, then
$$m_\lambda(x_1, \dots, x_n, \underbrace{0, \dots, 0}_{j}) = m_\lambda(x_1, \dots, x_n)$$
for any $j \ge 1$.

Proof. Since $n \ge l(\lambda)$, both of the polynomials $m_\lambda(x_1, \dots, x_n, 0)$ and $m_\lambda(x_1, \dots, x_n)$ include the term $x_1^{\lambda_1} \cdots x_l^{\lambda_l}$, where $l = l(\lambda)$. In particular, neither $m_\lambda(x_1, \dots, x_n, 0)$ nor $m_\lambda(x_1, \dots, x_n)$ is 0. In fact, the terms in $m_\lambda(x_1, \dots, x_n, 0)$ are exactly those for which the exponents of $x_1, \dots, x_n$ are some permutation of $\lambda_1, \dots, \lambda_l$ and $n - l$ zeros. These are exactly the terms in $m_\lambda(x_1, \dots, x_n)$ as well, so $m_\lambda(x_1, \dots, x_n, 0) = m_\lambda(x_1, \dots, x_n)$.

The fact that $m_\lambda(x_1, \dots, x_n, \underbrace{0, \dots, 0}_{j}) = m_\lambda(x_1, \dots, x_n)$ for any $j \ge 1$ follows by induction on $j$, since setting $x_{n+j} = 0$ and then setting $x_{n+j-1} = 0$ in the resulting polynomial gives us the same result as setting $x_{n+j} = 0$ and $x_{n+j-1} = 0$ all at once. □

Now suppose we have $n \ge 1$ and partitions $\lambda$ and $\mu$. In Problem 1.15 you will have a chance to show that for any $n \ge 1$, the product $m_\lambda(X_n) m_\mu(X_n)$ is in $\Lambda(X_n)$. Since the monomial symmetric polynomials are a basis for $\Lambda(X_n)$, we know there are rational numbers $a_{\lambda,\mu}^{\nu}(n)$ such that
$$m_\lambda(X_n)\, m_\mu(X_n) = \sum_{\nu} a_{\lambda,\mu}^{\nu}(n)\, m_\nu(X_n). \tag{1.1}$$
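The coefficients $a_{\lambda,\mu}^{\nu}(n)$ in (1.1) can be computed by brute force. The sketch below (helper names are ours) reproduces the entries of Table 1.1 and exhibits the stabilization for $n \ge l(\lambda) + l(\mu)$:

```python
from itertools import permutations

def m_poly(lam, n):
    """m_lambda(X_n) as a dict {exponent tuple: coefficient}."""
    if len(lam) > n:
        return {}
    padded = tuple(lam) + (0,) * (n - len(lam))
    return {e: 1 for e in set(permutations(padded))}

def mul(f, g):
    """Product of two polynomials in this representation."""
    h = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            e = tuple(a + b for a, b in zip(ef, eg))
            h[e] = h.get(e, 0) + cf * cg
    return h

def a_coeffs(lam, mu, n):
    """The numbers a^nu_{lam,mu}(n) of equation (1.1), keyed by nu."""
    f, out = mul(m_poly(lam, n), m_poly(mu, n)), {}
    while f:
        expo, alpha = next(iter(f.items()))
        nu = tuple(x for x in sorted(expo, reverse=True) if x > 0)
        out[nu] = out.get(nu, 0) + alpha
        for e, c in m_poly(nu, n).items():
            rest = f.get(e, 0) - alpha * c
            if rest:
                f[e] = rest
            else:
                f.pop(e, None)
    return out

for n in (2, 3, 4, 5):
    print(n, a_coeffs((1, 1), (2, 1), n))  # rows of Table 1.1
```

For $n = 4$ and $n = 5$ the output is identical, matching the stabilization visible in Table 1.1.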

In principle $\nu$ could be any partition, but all of the terms in $m_\lambda(X_n)$ have total degree $|\lambda|$, and all of the terms in $m_\mu(X_n)$ have total degree $|\mu|$, so all of the terms in their product have degree $|\lambda| + |\mu|$. This means we can assume $\nu$ is a partition of $|\lambda| + |\mu|$.

We would like to say $a_{\lambda,\mu}^{\nu}(n)$ is independent of $n$ for all $n \ge 1$, but this is not quite true. In particular, if $n = l(\lambda) + l(\mu)$, then one term in $m_\lambda(X_n)$ is $x_1^{\lambda_1} \cdots x_{l(\lambda)}^{\lambda_{l(\lambda)}}$ and one term in $m_\mu(X_n)$ is $x_{l(\lambda)+1}^{\mu_1} \cdots x_{l(\lambda)+l(\mu)}^{\mu_{l(\mu)}}$. This means one term in their product is
$$x_1^{\lambda_1} \cdots x_{l(\lambda)}^{\lambda_{l(\lambda)}}\, x_{l(\lambda)+1}^{\mu_1} \cdots x_{l(\lambda)+l(\mu)}^{\mu_{l(\mu)}}.$$


Therefore, if we write $\lambda \cup \mu$ to denote the partition we obtain by sorting $\lambda_1, \dots, \lambda_{l(\lambda)}, \mu_1, \dots, \mu_{l(\mu)}$ into weakly decreasing order, then we see $a_{\lambda,\mu}^{\lambda \cup \mu}(l(\lambda) + l(\mu)) \ne 0$, even though $a_{\lambda,\mu}^{\lambda \cup \mu}(l(\lambda) + l(\mu) - 1) = 0$. In other words, the best we can hope for is that $a_{\lambda,\mu}^{\nu}(n)$ is independent of $n$ for all $n \ge l(\lambda) + l(\mu)$.

Corollary 1.14. For any partitions $\lambda$, $\mu$, and $\nu \vdash |\lambda| + |\mu|$ and any $n \ge 1$, let the numbers $a_{\lambda,\mu}^{\nu}(n)$ be defined by
$$m_\lambda(X_n)\, m_\mu(X_n) = \sum_{\nu} a_{\lambda,\mu}^{\nu}(n)\, m_\nu(X_n).$$
If $n \ge l(\lambda) + l(\mu)$, then $a_{\lambda,\mu}^{\nu}(n)$ does not depend on $n$.

Proof. Suppose $n = l(\lambda) + l(\mu)$ and fix $j \ge 1$. By definition,
$$m_\lambda(X_{n+j})\, m_\mu(X_{n+j}) = \sum_{\nu} a_{\lambda,\mu}^{\nu}(n+j)\, m_\nu(X_{n+j}).$$
If we set $x_{n+1} = x_{n+2} = \cdots = x_{n+j} = 0$ and use Proposition 1.13, then we find
$$m_\lambda(X_n)\, m_\mu(X_n) = \sum_{\nu} a_{\lambda,\mu}^{\nu}(n+j)\, m_\nu(X_n).$$
Comparing this last line with equation (1.1) and using the fact that the monomial symmetric polynomials are a basis for $\Lambda(X_n)$, we see $a_{\lambda,\mu}^{\nu}(n+j) = a_{\lambda,\mu}^{\nu}(n)$, which is what we wanted to prove. □

As we mentioned above, the moral of Corollary 1.14 is that if we have enough variables (that is, if $n$ is large enough), then the algebraic properties of $\Lambda^k(X_n)$ and our basis of monomial symmetric polynomials do not depend on exactly how many variables we have. However, we also saw that "enough" means different things in different contexts. If we only cared about the product $m_{11}(X_n) m_{21}(X_n)$, then enough would be four. But if we are actually interested in the product $m_{4321}(X_n) m_{77221}(X_n)$, then enough is nine. Instead of worrying about how much is enough in every new situation, we would like to just assume we have infinitely many variables. To do this, we need to lay some formal groundwork involving "polynomials" which are allowed to have infinitely many terms. We start with a formal definition.


Definition 1.15. Let $\mathbb{N}$ be the set of nonnegative integers and set $\mathbb{N}^\infty = \mathbb{N} \times \mathbb{N} \times \mathbb{N} \times \cdots$. A formal power series with coefficients in $\mathbb{Q}$ is a function $f : \mathbb{N}^\infty \to \mathbb{Q}$ such that if $f(a_1, a_2, \ldots) \ne 0$, then only finitely many of $a_1, a_2, \ldots$ are nonzero.

At first glance a formal power series does not seem to be related to a polynomial which may have infinitely many terms. But we connect these ideas by identifying each tuple $(a_1, a_2, \ldots)$ with the monomial $x_1^{a_1} x_2^{a_2} \cdots$, viewing $f(a_1, a_2, \ldots)$ as the coefficient of $x_1^{a_1} x_2^{a_2} \cdots$, and identifying the function $f$ with the sum
$$\sum_{(a_1, a_2, \ldots) \in \mathbb{N}^\infty} f(a_1, a_2, \ldots)\, x_1^{a_1} x_2^{a_2} \cdots. \tag{1.2}$$
For example, if $f$ is given by
$$f(a_1, a_2, \ldots) = \begin{cases} a_1 & \text{if } a_2 = a_3 = \cdots = 0, \\ 0 & \text{otherwise,} \end{cases}$$
then we identify $f$ with the series $x_1 + 2x_1^2 + 3x_1^3 + \cdots = \sum_{j=1}^{\infty} j x_1^j$. For a formal power series $f$, the sum in (1.2) is "formal" in the sense that we generally do not assign it meaning in the usual sense of addition.

A formal power series can have finitely many terms or infinitely many terms, so every polynomial is also a formal power series. In fact, formal power series also inherit some terminology from polynomials. For instance, the total degree (or degree, for short) of a term $f(a_1, a_2, \ldots)\, x_1^{a_1} x_2^{a_2} \cdots$ is $a_1 + a_2 + \cdots$, which is finite by the last condition in Definition 1.15. We say a formal power series is homogeneous of degree $k$ whenever each of its terms has total degree $k$, and we note that every formal power series $f$ can be written uniquely as a sum $f = f_0 + f_1 + \cdots$, where $f_k$ is homogeneous of degree $k$ for all $k \ge 0$.

In some contexts it is useful to consider convergence properties of formal power series, since this opens up the possibility of using tools from complex analysis to draw conclusions about the coefficients in a given series. However, we will not be concerned with questions of convergence. Instead, it is the algebra of formal power series which will be of most interest to us. For instance, we can add formal power series and multiply them by scalars, just as we do for polynomials, so the set of formal power series in $X$ is a vector space over $\mathbb{Q}$. In fact,


as the next two examples suggest, we can also multiply formal power series.

Example 1.16. Compute the product of the formal power series $f(x) = \sum_{j=0}^{\infty} x^j$ and $g(x) = \sum_{j=0}^{\infty} j x^j$.

Solution. To express this product as a formal power series, we need to determine, for each $j$, the coefficient of $x^j$ in the product
$$(1 + x + x^2 + x^3 + \cdots)(x + 2x^2 + 3x^3 + \cdots).$$
Since every term in the second factor has a factor of $x$, the constant term in $f(x)g(x)$ will be 0. Before we combine like terms, the terms in $f(x)g(x)$ are exactly the products of one term from $f(x)$ and one term from $g(x)$. The only way such a term can have the form $ax$ is if we choose 1 from $f(x)$ and $x$ from $g(x)$, so the coefficient of $x$ is 1. There are two ways such a term can have the form $ax^2$: we can choose 1 from $f(x)$ and $2x^2$ from $g(x)$, or we can choose $x$ from $f(x)$ and $x$ from $g(x)$. Therefore the coefficient of $x^2$ in $f(x)g(x)$ is $2 + 1 = 3$. In general, there are $j$ ways a term in $f(x)g(x)$ can have the form $ax^j$: for each $m$ with $1 \le m \le j$, we can choose $x^{m-1}$ from $f(x)$ and $(j - m + 1)x^{j-m+1}$ from $g(x)$. Therefore the coefficient of $x^j$ in $f(x)g(x)$ is $1 + 2 + \cdots + j = \binom{j+1}{2}$, and we have
$$f(x)g(x) = \sum_{j=1}^{\infty} \binom{j+1}{2} x^j.$$

□
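Truncating each series at a fixed order makes Example 1.16 easy to verify numerically; a minimal sketch (the truncation bound `N` is our choice):

```python
from math import comb

N = 12  # truncation order; coefficients of x^j for j < N are exact

f = [1] * N           # 1 + x + x^2 + ...
g = list(range(N))    # x + 2x^2 + 3x^3 + ...

h = [0] * N           # truncated product f(x) g(x)
for i, a in enumerate(f):
    for j, b in enumerate(g):
        if i + j < N:
            h[i + j] += a * b

print(h[:6])                                # [0, 1, 3, 6, 10, 15]
print([comb(j + 1, 2) for j in range(6)])   # the same triangular numbers
```

The double loop is exactly the "one term from each factor" bookkeeping in the solution above.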

Example 1.17. Compute the product of the formal power series $f = \sum_{j=1}^{\infty} x_j x_{j+1}$ and $g = \sum_{j=1}^{\infty} x_j$.

Solution. Working formally, we could write $fg$ as
$$fg = \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} x_j x_{j+1} x_m.$$
Although this expresses $fg$ correctly, some terms appear more than once on the right side, so it gives us less insight into $fg$ as a formal power series than we would like. To gain more insight, we need to determine the coefficient of every monomial in $x_1, x_2, \ldots$ in the product
$$(x_1 x_2 + x_2 x_3 + x_3 x_4 + \cdots)(x_1 + x_2 + x_3 + \cdots).$$


Before we combine like terms, the terms in $fg$ are exactly the products of one term from $f$ and one term from $g$. As a result, the only terms in $fg$ with nonzero coefficients are those of the form $x_m x_j x_{j+1}$, where $m < j - 1$ or $m > j + 2$, those of the form $x_j^2 x_{j+1}$, those of the form $x_j x_{j+1}^2$, and those of the form $x_j x_{j+1} x_{j+2}$. We can only obtain terms of the first form in one way: choose $x_j x_{j+1}$ from $f$ and $x_m$ from $g$. Therefore, each of these terms has coefficient 1. Similarly, we can only obtain terms of the second and third forms in one way, so each of these also has coefficient 1. But we can obtain a term of the form $x_j x_{j+1} x_{j+2}$ in two ways: choose $x_j x_{j+1}$ from $f$ and $x_{j+2}$ from $g$, or choose $x_{j+1} x_{j+2}$ from $f$ and $x_j$ from $g$. Therefore, each of these terms has coefficient 2. Combining these observations, we can express $fg$ as
$$fg = \sum_{j=1}^{\infty} \sum_{\substack{m=1 \\ m \ne j-1 \\ m \ne j+2}}^{\infty} x_j x_{j+1} x_m + 2 \sum_{j=1}^{\infty} x_j x_{j+1} x_{j+2}.$$

□
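The same coefficient bookkeeping works in infinitely many variables once we truncate to $x_1, \dots, x_N$; a sketch of Example 1.17 (`N` is our truncation choice):

```python
from itertools import product
from collections import Counter

N = 8  # work in x_1, ..., x_N; monomials away from the index boundary are exact

# f = sum_j x_j x_{j+1} and g = sum_m x_m, with monomials stored as
# sorted tuples of variable indices (repeats encode exponents)
f_terms = [(j, j + 1) for j in range(1, N)]
g_terms = [(m,) for m in range(1, N + 1)]

fg = Counter()
for t, u in product(f_terms, g_terms):
    fg[tuple(sorted(t + u))] += 1

print(fg[(2, 3, 4)])  # x_2 x_3 x_4 has coefficient 2
print(fg[(2, 2, 3)])  # x_2^2 x_3 has coefficient 1
print(fg[(1, 3, 4)])  # x_1 x_3 x_4 (the case m < j - 1) has coefficient 1
```

The Counter records exactly the "how many ways can this monomial arise" count from the solution.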

In each of these examples we have a product in which each factor has infinitely many terms. However, there are only finitely many ways to obtain any particular monomial in the result, so it is still possible to express each product as a formal power series. Moreover, this is true in general. That is, if $X$ is any set of variables, then any monomial in the variables in $X$ can be expressed as a product of monomials in only finitely many ways. Therefore, for any formal power series $f$ and $g$ in $X$, the product $fg$ is a well-defined formal power series.

Now that we have our usual algebraic operations on formal power series, we would like to extend the action of permutations on polynomials to an action of permutations on formal power series. There are several ways one could do this, but we will use one that is particularly simple: if $\pi \in S_n$ and we have variables $x_1, x_2, \ldots$, then we will define $\pi(x_j) = x_{\pi(j)}$ as usual for $1 \le j \le n$, and we will set $\pi(x_j) = x_j$ for $j > n$. Now for any formal power series $f = f(x_1, x_2, \ldots)$ and any permutation $\pi \in S_n$, we define $\pi(f)$ by $\pi(f) := f(\pi(x_1), \pi(x_2), \ldots)$. With this definition, we have the following natural analogue of Proposition 1.2.

Proposition 1.18. Suppose $X = \{x_j\}_{j=1}^{\infty}$ is a set of variables, $f$ and $g$ are formal power series in $X$, $c$ is a constant, and $\pi, \sigma \in S_n$ are permutations. Then


(i) $\pi(cf) = c\pi(f)$;
(ii) $\pi(f + g) = \pi(f) + \pi(g)$;
(iii) $\pi(fg) = \pi(f)\pi(g)$;
(iv) $(\pi\sigma)(f) = \pi(\sigma(f))$.

Proof. This is similar to the proof of Proposition 1.2.

□

Now that we know how a permutation acts on a formal power series, we are ready to discuss symmetric functions.

Definition 1.19. Suppose $X = \{x_j\}_{j=1}^{\infty}$ is a set of variables and $f$ is a formal power series in $X$. We say $f$ is a symmetric function in $X$ whenever, for all $n \ge 1$ and all $\pi \in S_n$, we have $\pi(f) = f$. We write $\Lambda(X)$ to denote the set of all symmetric functions in $X$, and we write $\Lambda^k(X)$ to denote the set of all symmetric functions in $X$ that are homogeneous of degree $k$. Often $X$ is clear from context, in which case we say $f$ is a symmetric function, we write $\Lambda$ to denote the set of all symmetric functions, and we write $\Lambda^k$ to denote the set of all symmetric functions which are homogeneous of degree $k$.

When we had finitely many variables, we constructed the monomial symmetric functions by adding all of the distinct images of a given monomial. We can do the same thing when we have infinitely many variables, but this time we get a formal power series, which is not a polynomial in general.

Definition 1.20. Suppose $\lambda$ is a partition and $X = \{x_j\}_{j=1}^{\infty}$ is a set of variables. Then the monomial symmetric function in $X$, which we write as $m_\lambda(X)$, is the sum of all monomials $Y$ for which there exists $n \ge l(\lambda)$ and a permutation $\pi \in S_n$ such that $\pi(Y) = x_1^{\lambda_1} x_2^{\lambda_2} \cdots x_n^{\lambda_n}$. When the set $X$ of variables is clear from context, we often omit it, writing $m_\lambda$ instead of $m_\lambda(X)$. Similarly, if we write $m_\lambda$ and no set of variables has been indicated, then we will assume our set of variables is $X = \{x_j\}_{j=1}^{\infty}$.

Example 1.21. Compute $m_{21}$. Is $m_{21} = m_2 m_1$?

Solution. By definition $m_{21}$ is the sum of the images of $x_1^2 x_2$ under all permutations of $x_1, x_2, \ldots$, so it is the sum of all monomials of the


form $x_j^2 x_k$, where $j$ and $k$ are distinct. Formally, we have
$$m_{21} = \sum_{j=1}^{\infty} \sum_{\substack{k=1 \\ k \ne j}}^{\infty} x_j^2 x_k.$$
The product $m_2 m_1 = (x_1^2 + x_2^2 + \cdots)(x_1 + x_2 + \cdots)$ includes terms of the form $x_j^3$, so it is not equal to $m_{21}$. However, we do have
$$m_{21} = (x_1^2 + x_2^2 + \cdots)(x_1 + x_2 + \cdots) - (x_1^3 + x_2^3 + \cdots),$$
so $m_{21} = m_2 m_1 - m_3$.

□
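The identity $m_{21} = m_2 m_1 - m_3$ can be checked in any fixed number of variables; a sketch with $n = 6$ (helper names are ours):

```python
from itertools import permutations

def m_poly(lam, n):
    """m_lambda(X_n) as a dict {exponent tuple: coefficient}."""
    if len(lam) > n:
        return {}
    padded = tuple(lam) + (0,) * (n - len(lam))
    return {e: 1 for e in set(permutations(padded))}

def mul(f, g):
    h = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            e = tuple(a + b for a, b in zip(ef, eg))
            h[e] = h.get(e, 0) + cf * cg
    return h

n = 6
prod = mul(m_poly((2,), n), m_poly((1,), n))   # m_2 * m_1
rhs = {e: c - m_poly((3,), n).get(e, 0) for e, c in prod.items()}
rhs = {e: c for e, c in rhs.items() if c}      # drop cancelled x_j^3 terms
print(rhs == m_poly((2, 1), n))  # True
```

Since both sides are symmetric of degree 3, agreement in six variables is convincing evidence for the identity in $\Lambda$ (and the identity itself is proved in the text).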

As we might expect, the monomial symmetric functions form a basis for $\Lambda^k$, as their polynomial counterparts did for $\Lambda^k(X_n)$.

Proposition 1.22. For all $k \ge 0$, the set $\{m_\lambda \mid \lambda \vdash k\}$ of monomial symmetric functions is a basis for $\Lambda^k$. In particular, $\dim \Lambda^k = p(k)$, the number of partitions of $k$.

Proof. To show $\{m_\lambda \mid \lambda \vdash k\}$ is linearly independent, first note that if we set $x_j = 0$ in $m_\lambda$ for all $j > n$, then we obtain the monomial symmetric polynomial $m_\lambda(X_n)$. Now suppose we have constants $a_\lambda$ for which $\sum_{\lambda \vdash k} a_\lambda m_\lambda = 0$. If we set $x_j = 0$ for all $j > k$, then we find $\sum_{\lambda \vdash k} a_\lambda m_\lambda(X_k) = 0$. But $\{m_\lambda(X_k) \mid \lambda \vdash k\}$ is linearly independent by Proposition 1.12, so $a_\lambda = 0$ for all $\lambda$, as desired.

To show $\{m_\lambda \mid \lambda \vdash k\}$ spans $\Lambda^k$, suppose $f \in \Lambda^k$. We argue by induction on the number of terms in $f$ of the form $a x_1^{a_1} x_2^{a_2} \cdots$, where $a$ is a nonzero constant and $a_1 \ge a_2 \ge \cdots$. Note that for any such term the sequence $a_1, a_2, \ldots$ is a partition of $k$, so there are at most $p(k)$ such terms. If $f$ has exactly one term of the given form, then all of its images under any permutation are also terms in $f$, and we have $f = a m_\lambda$, where $\lambda = (a_1, a_2, \ldots)$. Now suppose $f$ has more than one term of the given form. For any such term $a x_1^{a_1} x_2^{a_2} \cdots$, all of the images of this term must also be terms in $f$. Therefore, $f - a m_\mu \in \Lambda^k$, where $\mu = (a_1, a_2, \ldots)$. Moreover, $f - a m_\mu$ has fewer terms of the given form than $f$, so by induction $f - a m_\mu$ is a linear combination of the elements of $\{m_\lambda \mid \lambda \vdash k\}$. Now the result follows. □


1.4. Problems

1.1. Find all permutations $\pi \in S_4$ for which $\pi(f) = f$, where $f(X_4) = x_1 x_2^2 x_4^4 + x_2^2 x_3^4 x_4 + x_1^4 x_2^2 x_3$.

1.2. Find all permutations $\pi \in S_4$ for which $\pi(f) = f$, where $f(X_4) = x_1 x_2 x_3^2 x_4^2 + x_1 x_2^2 x_3 x_4^2 + x_1^2 x_2 x_3^2 x_4 + x_1^2 x_2^2 x_3 x_4$.

1.3. Find a polynomial $f$ in $x_1, x_2, x_3$ which has $\pi(f) = \operatorname{sgn}(\pi) f$ for all $\pi \in S_3$, and which has $x_1^7 x_3^2$ as one of its terms.

1.4. For any set $P$ of polynomials in $x_1, \dots, x_n$, let $S_n(P)$ be the set of permutations $\pi \in S_n$ such that $\pi(f) = f$ for all $f \in P$. Prove that for every $P$, the following hold.
(a) $S_n(P)$ is nonempty.
(b) $S_n(P)$ is closed under multiplication of permutations. That is, if $\pi, \sigma \in S_n(P)$, then $\pi\sigma \in S_n(P)$.
(c) $S_n(P)$ is closed under taking inverses. That is, if $\pi \in S_n(P)$, then $\pi^{-1} \in S_n(P)$.

1.5. Prove or disprove: if $P$ is a set of polynomials in $x_1, \dots, x_n$, then for any permutation $\pi \in S_n$ and any $\sigma \in S_n(P)$, we have $\pi\sigma\pi^{-1} \in S_n(P)$.

1.6. For any set $T \subseteq S_n$, let $\operatorname{Fix}(T)$ be the set of polynomials $f(X_n)$ such that $\tau(f) = f$ for all $\tau \in T$. Prove that for every $T$, the following hold.
(a) $\operatorname{Fix}(T)$ is a subspace of the vector space of polynomials in $x_1, \dots, x_n$ with coefficients in $\mathbb{Q}$.
(b) $\operatorname{Fix}(T)$ is closed under multiplication. That is, if $f \in \operatorname{Fix}(T)$ and $g \in \operatorname{Fix}(T)$, then $fg \in \operatorname{Fix}(T)$.

1.7. Prove or disprove: if $T \subseteq S_n$, then for any polynomial $f \in \operatorname{Fix}(T)$ and any polynomial $g(X_n)$, we have $fg \in \operatorname{Fix}(T)$.

1.8. Prove, or disprove and salvage: for any $n \ge 1$ and any set $T \subseteq S_n$, we have $T = S_n(\operatorname{Fix}(T))$.

1.9. Prove, or disprove and salvage: for any $n \ge 1$ and any set $P$ of polynomials in $x_1, \dots, x_n$, we have $P = \operatorname{Fix}(S_n(P))$.

1.10. For any $n \ge 1$, any $k \ge 0$, and any $T \subseteq S_n$, let $\operatorname{Fix}^k(T)$ be the set of polynomials in $\operatorname{Fix}(T)$ which are homogeneous of degree

$k$. Show $\operatorname{Fix}^k(T)$ is a subspace of the space of all homogeneous polynomials of degree $k$ in $x_1, \dots, x_n$ with coefficients in $\mathbb{Q}$.

1.11. For $n \ge 1$, let $C_n$ be the set containing just the permutation $(1, 2, \ldots, n)$. Find and prove a formula for $\dim \operatorname{Fix}^2(C_n)$.

1.12. Show that the space $\Lambda(X_n)$ is infinite dimensional for all $n \ge 1$.

1.13. Show that for all $n, k \ge 0$, the space $\Lambda^k(X_n)$ is a subspace of $\Lambda(X_n)$.

1.14. We showed in Proposition 1.12 that if $n \ge k$, then $\Lambda^k(X_n)$ is finite dimensional. Show $\Lambda^k(X_n)$ is also finite dimensional when $0 \le n < k$. More specifically, show $\dim \Lambda^k(X_n)$ is the number of partitions of $k$ with at most $n$ parts by finding a basis whose elements are indexed by these partitions.

1.15. Show that for all $n, k_1, k_2 \ge 0$, if $f \in \Lambda^{k_1}(X_n)$ and $g \in \Lambda^{k_2}(X_n)$, then $fg \in \Lambda^{k_1 + k_2}(X_n)$.

1.16. Suppose $n \ge 1$, $k_1 \ge k_2 \ge 0$, and $k_1 + k_2 = n$. Write the product $m_{k_1} m_{k_2}$ as a linear combination of $\{m_\lambda \mid \lambda \vdash n\}$.

1.17. Write the product $m_4 m_3 m_2 m_1$ as a linear combination of $\{m_\lambda \mid \lambda \vdash 10\}$.

1.18. If we write the product $m_6 m_5 m_4 m_3 m_2 m_1$ as a linear combination of $\{m_\lambda \mid \lambda \vdash 21\}$, what is the coefficient of $m_{(12,9)}$?

1.19. Suppose $n \ge 1$ and $k_1 + k_2 = n$. Write the product $m_{1^{k_1}} m_{k_2}$ as a linear combination of $\{m_\lambda \mid \lambda \vdash n\}$.

1.20. If we write the product $m_{1^{k_1+1}} m_{1^{k_2+1}}$ as a linear combination of monomial symmetric functions, what is the coefficient of $m_{(2,1^{k_1+k_2})}$?

1.21. Write $\prod_{j=1}^{\infty} (1 + x_j)$ and $\prod_{j=1}^{\infty} (1 - x_j)$ in terms of the monomial symmetric functions.

1.22. Write the sum $\sum_\lambda m_\lambda$ as an infinite product in as simple a form as possible. Here the sum is over all partitions.


1.5. Notes

The background on formal power series we have developed here will be enough for our work with symmetric functions. However, more information is available in a variety of sources, including [Loe17, Ch. 7], [Niv69], and [Wil05, Ch. 2].

Chapter 2

The Elementary, Complete Homogeneous, and Power Sum Symmetric Functions

We saw in Chapter 1 how the monomial symmetric functions arise naturally when we group the terms of a symmetric function with their images under the various permutations or, equivalently, when we group terms according to their multisets of exponents. However, several other families of symmetric functions appear in other contexts, and these families give us other bases for $\Lambda^k$. In this chapter we introduce the oldest of these families.

2.1. The Elementary Symmetric Functions

Suppose we have a polynomial $f(z)$ whose leading coefficient is 1 (that is, $f$ is monic), and whose roots are $x_1, \dots, x_n$. Then one way to factor $f$ is as
$$f(z) = (z - x_1)(z - x_2) \cdots (z - x_n).$$
Since multiplication is commutative, we can permute these factors in any way we like, and their product will still be $f$. As a result, $f$ is invariant under every permutation of $x_1, \dots, x_n$. In particular, if we

Table 2.1. The coefficients of $z^k$ in Example 2.1

k   coefficient of $z^k$
0   $x_1 x_2 x_3 x_4$
1   $-(x_1 x_2 x_3 + x_1 x_2 x_4 + x_1 x_3 x_4 + x_2 x_3 x_4)$
2   $x_1 x_2 + x_1 x_3 + x_1 x_4 + x_2 x_3 + x_2 x_4 + x_3 x_4$
3   $-(x_1 + x_2 + x_3 + x_4)$

expand $f$ in powers of $z$, then the coefficient of $z^k$ will be a symmetric polynomial in $x_1, \dots, x_n$.

Example 2.1. For $0 \le k \le 3$, compute the coefficients of $z^k$ in the polynomial $(z - x_1)(z - x_2)(z - x_3)(z - x_4)$.

Solution. When we expand the product and collect like powers of $z$, we find
$$f(z) = x_1 x_2 x_3 x_4 - (x_1 x_2 x_3 + x_1 x_2 x_4 + x_1 x_3 x_4 + x_2 x_3 x_4) z + (x_1 x_2 + x_1 x_3 + x_1 x_4 + x_2 x_3 + x_2 x_4 + x_3 x_4) z^2 - (x_1 + x_2 + x_3 + x_4) z^3 + z^4.$$
Therefore, the coefficients of $z^k$ are as in Table 2.1.

□
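Example 2.1 can be reproduced by expanding the product one factor at a time, tracking for each power of $z$ a table of monomials in the $x$'s; a sketch (the representation choices are ours):

```python
from collections import Counter

n = 4
# poly[k] = coefficient of z^k, stored as a Counter mapping a sorted
# tuple of x-indices (a squarefree monomial) to its integer coefficient
poly = [Counter({(): 1})]          # start with the constant polynomial 1
for j in range(1, n + 1):          # multiply by (z - x_j)
    new = [Counter() for _ in range(len(poly) + 1)]
    for k, coef in enumerate(poly):
        for mono, c in coef.items():
            new[k + 1][mono] += c                      # the z part
            new[k][tuple(sorted(mono + (j,)))] -= c    # the -x_j part
    poly = new

print(sorted(poly[3].items()))  # [((1,), -1), ((2,), -1), ((3,), -1), ((4,), -1)]
print(len(poly[2]), set(poly[2].values()))  # 6 pairs, each with coefficient 1
```

The output matches Table 2.1: the coefficient of $z^{4-k}$ is $(-1)^k$ times the sum of all products of $k$ distinct $x_j$'s.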

Notice in Example 2.1 that the coefficient of $z^k$ is a homogeneous symmetric polynomial of degree $4 - k$. Equivalently, the coefficient of $z^{4-k}$ is a homogeneous symmetric polynomial of degree $k$. In fact, the coefficient of $z^{4-k}$ is the sum of all products of exactly $k$ distinct $x_j$'s. In general, each term in the expansion of $(z - x_1) \cdots (z - x_n)$ is a product of terms, one from each factor $z - x_j$. Therefore, for any $k$ with $0 \le k \le n$, the coefficient of $z^{n-k}$ is the sum of all products of exactly $k$ distinct $x_j$'s, up to a sign. This is the elementary symmetric polynomial of degree $k$ in $x_1, \dots, x_n$.

Definition 2.2. For all positive integers $n$ and $k$, the elementary symmetric polynomial $e_k(X_n)$ is given by
$$e_k(X_n) = \sum_{1 \le j_1 < \cdots < j_k \le n} \; \prod_{m=1}^{k} x_{j_m} = \sum_{\substack{S \subseteq \{1,\dots,n\} \\ |S| = k}} \prod_{j \in S} x_j. \tag{2.1}$$

Similarly, if $\mu_2 > 2$, then the coefficient of $m_\mu$ in $e_{421}$ is also 0. Continuing in this way, we see the coefficient of $m_\mu$ in $e_{421}$ is nonzero only if $\mu$ is $(3, 2, 1^2)$, $(3, 1^4)$, $(2^3, 1)$, $(2^2, 1^3)$, $(2, 1^5)$, or $(1^7)$.

Now that we have ruled out a bunch of partitions for which the coefficient of $m_\mu$ must be 0, we turn our attention to finding the coefficients of the remaining $m_\mu$'s, by counting fillings of the Ferrers diagram of $(4, 2, 1)$. For example, when $\mu = (2^2, 1^3)$, we need to count fillings of the Ferrers diagram of $(4, 2, 1)$ with exactly two 1's, two 2's, one 3, one 4, and one 5, in which the entries in each row are strictly increasing from left to right. In such a filling there are three ways to place the 1's, which we see in Figure 2.3. Reading from left to right, we call these cases A, B, and C. In case A there are three ways to place the two 2's, which we see in Figure 2.4. Reading from left to right, we call these cases A1, A2,

Figure 2.3. The three ways to place the two 1's in the Ferrers diagram of $(4, 2, 1)$

Figure 2.4. The three ways to place the 2's in case A

and A3. In cases A1 and A2 we can choose one of 3, 4, and 5 to occupy the box that is the only empty box in its row, and the other two entries must go in the bottom row in increasing order. In case A3, we must place 3, 4, and 5 in the bottom row in increasing order. Therefore, there are three fillings in each of cases A1 and A2, and one in case A3, for a total of seven fillings in case A. In cases B and C there is only one way to place the 2's, since they must go in different rows. In case B there are three ways to place 3, 4, and 5, and there is one way to place 3, 4, and 5 in case C. Therefore, there are three fillings in case B and one in case C, for a total of eleven in cases A, B, and C. This means the coefficient of $m_{22111}$ in $e_{421}$ is 11. Using a similar analysis for the other partitions $\mu$, we find the coefficients in Table 2.2. □

Our solution to Example 2.7 includes a couple of observations that also hold in the general case. First, for a given $\lambda$, we can characterize a whole family of partitions $\mu$ with $|\mu| = |\lambda|$ for which $M_{\lambda,\mu}(e, m) = 0$. To describe this family, it is convenient to introduce an ordering on the set of partitions.

Table 2.2. The coefficients of $m_\mu$ in $e_{421}$

µ             coefficient of $m_\mu$
$(3, 2, 1^2)$   1
$(3, 1^4)$      4
$(2^3, 1)$      3
$(2^2, 1^3)$    11
$(2, 1^5)$      35
$(1^7)$         105
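The entries of Table 2.2 can be confirmed by expanding $e_4 e_2 e_1$ directly in seven variables; in this sketch (function names are ours) a monomial is a sorted tuple of variable indices with repetition:

```python
from itertools import combinations
from collections import Counter

n = 7  # seven variables suffice, since every mu here is a partition of 7

def e_poly(k):
    """e_k(X_n): one term for each k-element subset of {1, ..., n}."""
    return Counter({s: 1 for s in combinations(range(1, n + 1), k)})

def mul(f, g):
    h = Counter()
    for a, ca in f.items():
        for b, cb in g.items():
            h[tuple(sorted(a + b))] += ca * cb
    return h

e421 = mul(mul(e_poly(4), e_poly(2)), e_poly(1))

def coeff_of_m(mu):
    # the coefficient of m_mu equals the coefficient of x_1^{mu_1} x_2^{mu_2} ...
    mono = tuple(sorted(i for i, p in enumerate(mu, 1) for _ in range(p)))
    return e421[mono]

for mu in [(3, 2, 1, 1), (3, 1, 1, 1, 1), (2, 2, 2, 1),
           (2, 2, 1, 1, 1), (2, 1, 1, 1, 1, 1), (1,) * 7]:
    print(mu, coeff_of_m(mu))  # 1, 4, 3, 11, 35, 105, as in Table 2.2
```

This is a blunt replacement for the filling count in the solution: each term of $e_4 e_2 e_1$ corresponds to a filling of the Ferrers diagram of $(4, 2, 1)$, as Proposition 2.12 makes precise.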


Definition 2.8. Suppose $\lambda$ and $\mu$ are partitions. We say $\lambda$ is greater than $\mu$ in lexicographic order, and we write $\lambda >_{\mathrm{lex}} \mu$, whenever there is a positive integer $m$ such that $\lambda_j = \mu_j$ for $j < m$ and $\lambda_m > \mu_m$. Here we take $\lambda_j = 0$ if $j > l(\lambda)$ and we take $\mu_j = 0$ if $j > l(\mu)$.

As we see in our next example, the lexicographic ordering is essentially the natural alphabetical ordering for partitions.

Example 2.9. Write the partitions of 6 in lexicographic order, from largest to smallest.

Solution. To make a partition large in lexicographic order, we need to make the early parts as large as possible. Thus $(6)$ will be the largest partition, and $(5, 1)$ will come next. Continuing in this way, we have
$$(6) >_{\mathrm{lex}} (5, 1) >_{\mathrm{lex}} (4, 2) >_{\mathrm{lex}} (4, 1^2) >_{\mathrm{lex}} (3^2) >_{\mathrm{lex}} (3, 2, 1) >_{\mathrm{lex}} (3, 1^3) >_{\mathrm{lex}} (2^3) >_{\mathrm{lex}} (2^2, 1^2) >_{\mathrm{lex}} (2, 1^4) >_{\mathrm{lex}} (1^6).$$

□
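Since Python compares tuples lexicographically, Definition 2.8 is one line of code once partitions are padded to a common length; this sketch regenerates the chain in Example 2.9 (the partition generator is ours):

```python
def lex_greater(lam, mu):
    """lam >_lex mu as in Definition 2.8: pad with zeros, compare tuples."""
    L = max(len(lam), len(mu))
    a = tuple(lam) + (0,) * (L - len(lam))
    b = tuple(mu) + (0,) * (L - len(mu))
    return a > b

def partitions(k, largest=None):
    """All partitions of k with parts at most `largest`, largest part first."""
    largest = k if largest is None else largest
    if k == 0:
        yield ()
        return
    for first in range(min(k, largest), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

# For partitions of the same k no tuple is a strict prefix of another,
# so plain tuple order agrees with the padded comparison above.
parts6 = sorted(partitions(6), reverse=True)
print(parts6)  # (6), (5,1), (4,2), ..., (1,1,1,1,1,1), as in Example 2.9
assert all(lex_greater(parts6[i], parts6[i + 1]) for i in range(len(parts6) - 1))
```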

Our solution to Example 2.9 suggests the lexicographic ordering is a linear ordering of the set of partitions of a given $n$. In fact, in Problems 2.5 and 2.6 we will ask you to show $>_{\mathrm{lex}}$ is a linear order of the set of all partitions. Our solution to Example 2.7, on the other hand, suggests using $>_{\mathrm{lex}}$ to compare $\mu$ with the conjugate partition $\lambda'$ to find a collection of partitions $\mu$ with $M_{\lambda,\mu}(e, m) = 0$. In particular, we have the following result.

Proposition 2.10. Suppose $\lambda, \mu$ are partitions with $|\lambda| = |\mu|$. Then
(i) if $\mu >_{\mathrm{lex}} \lambda'$ then $M_{\lambda,\mu}(e, m) = 0$;
(ii) $M_{\lambda,\lambda'}(e, m) = 1$.

Proof. (i) By (2.2) we have
$$e_\lambda = e_{\lambda_1} \cdots e_{\lambda_{l(\lambda)}}, \tag{2.5}$$
and we note that $M_{\lambda,\mu}(e, m)$ is the coefficient of $x_1^{\mu_1} \cdots x_{l(\mu)}^{\mu_{l(\mu)}}$ in this product. If $\mu >_{\mathrm{lex}} \lambda'$, then by definition there exists $m \ge 1$ such that $\mu_m > \lambda'_m$ and $\mu_j = \lambda'_j$ for $1 \le j < m$. Each factor $e_{\lambda_j}$ can contribute at most one factor $x_1$ to our term, so $\mu_1 = \lambda'_1$ implies each factor $e_{\lambda_j}$ in


(2.5) contributes exactly one factor $x_1$. (This corresponds to filling the first column of the Ferrers diagram of $\lambda$ with 1's.) Similarly, only those $e_{\lambda_j}$ with $\lambda_j \ge 2$ can contribute a factor $x_2$, so each such $e_{\lambda_j}$ must contribute exactly one factor $x_2$. (This corresponds to filling the second column of the Ferrers diagram of $\lambda$ with 2's.) Proceeding in this way, we see that only those $e_{\lambda_j}$ with $\lambda_j \ge m$ can contribute a factor $x_m$ to our term, so the exponent on $x_m$ is at most $\lambda'_m$. Since $\mu_m > \lambda'_m$, the term $x_1^{\mu_1} \cdots x_{l(\mu)}^{\mu_{l(\mu)}}$ does not appear in our product, and the result follows.

(ii) Arguing as in the proof of (i), we see the only way to produce the term $x_1^{\lambda'_1} \cdots x_{l(\lambda')}^{\lambda'_{l(\lambda')}}$ is to choose the term $x_1 \cdots x_{\lambda_j}$ from the factor $e_{\lambda_j}$ in (2.5) for all $j$. Now the result follows. □

The converse of Proposition 2.10(i) turns out to be false: there are partitions $\lambda$ and $\mu$ with $|\lambda| = |\mu|$ and $\mu \not>_{\mathrm{lex}} \lambda'$ which still have $M_{\lambda,\mu}(e, m) = 0$. Indeed, in Problem 2.8 we will invite you to find two such partitions. Remarkably, there is a weaker ordering on partitions for which the analogue of Proposition 2.10(i) and its converse both hold. You will get to explore this ordering in Problems 2.9, 2.10, 2.11, 2.12, and 2.13.

Although we will get even more information from our solution to Example 2.7, Proposition 2.10 is already enough to show the elementary symmetric functions of degree $k$ form a basis for $\Lambda^k$.

Corollary 2.11. The set $\{e_\lambda \mid \lambda \vdash k\}$ of elementary symmetric functions is a basis for $\Lambda^k$.

Proof. Let $A$ be the $p(k) \times p(k)$ matrix whose rows and columns are indexed by the partitions of $k$, in lexicographic order from smallest to largest, and whose entries are given by $A_{\lambda\mu} = M_{\lambda',\mu}(e, m)$. By Proposition 2.10, $A$ is a lower triangular matrix whose diagonal entries are all equal to 1, so $\det A = 1$ and $A$ is invertible. Since $e_{\lambda'} = \sum_{\mu \vdash k} A_{\lambda\mu} m_\mu$, each monomial symmetric function $m_\mu$ is a linear combination of elementary symmetric functions, and $\{e_\lambda \mid \lambda \vdash k\}$ spans $\Lambda^k$ by Proposition 1.12. But $\dim \Lambda^k = p(k) = |\{e_\lambda \mid \lambda \vdash k\}|$, so $\{e_\lambda \mid \lambda \vdash k\}$ must also be linearly independent. Therefore $\{e_\lambda \mid \lambda \vdash k\}$ is a basis, which is what we wanted to prove. □
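For $k = 4$ the matrix $A$ in the proof of Corollary 2.11 can be computed directly; the sketch below (helper names are ours) builds each $e_{\lambda'}$ and reads off its $m$-coefficients, confirming that $A$ is lower triangular with 1's on the diagonal:

```python
from itertools import combinations
from collections import Counter

n = k = 4

def e_poly(j):
    """e_j(X_n), with monomials stored as sorted index tuples."""
    return Counter({s: 1 for s in combinations(range(1, n + 1), j)})

def mul(f, g):
    h = Counter()
    for a, ca in f.items():
        for b, cb in g.items():
            h[tuple(sorted(a + b))] += ca * cb
    return h

def conjugate(lam):
    return tuple(sum(1 for p in lam if p >= i) for i in range(1, lam[0] + 1))

def coeff(f, mu):
    mono = tuple(sorted(i for i, p in enumerate(mu, 1) for _ in range(p)))
    return f[mono]

parts = [(1, 1, 1, 1), (2, 1, 1), (2, 2), (3, 1), (4,)]  # lex order, smallest first
A = []
for lam in parts:
    e = Counter({(): 1})
    for part in conjugate(lam):
        e = mul(e, e_poly(part))       # e = e_{lambda'}
    A.append([coeff(e, mu) for mu in parts])

for row in A:
    print(row)  # lower triangular, with every diagonal entry equal to 1
```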


         (1)
(1)    [  1 ]

         (1²)  (2)
(2)    [  1    0 ]
(1²)   [  2    1 ]

         (1³)  (2,1)  (3)
(3)    [  1     0     0 ]
(2,1)  [  3     1     0 ]
(1³)   [  6     3     1 ]

         (1⁴)  (2,1²)  (2²)  (3,1)  (4)
(4)    [  1      0      0     0     0 ]
(3,1)  [  4      1      0     0     0 ]
(2²)   [  6      2      1     0     0 ]
(2,1²) [ 12      5      2     1     0 ]
(1⁴)   [ 24     12      6     4     1 ]

         (1⁵)  (2,1³)  (2²,1)  (3,1²)  (3,2)  (4,1)  (5)
(5)    [   1     0       0       0      0      0     0 ]
(4,1)  [   5     1       0       0      0      0     0 ]
(3,2)  [  10     3       1       0      0      0     0 ]
(3,1²) [  20     7       2       1      0      0     0 ]
(2²,1) [  30    12       5       2      1      0     0 ]
(2,1³) [  60    27      12       7      3      1     0 ]
(1⁵)   [ 120    60      30      20     10      5     1 ]

Figure 2.5. The coefficients $M_{\lambda',\mu}(e, m)$ for $|\lambda| = |\mu| \le 5$

In Figure 2.5 we have the matrices $A_{\lambda\mu} = M_{\lambda',\mu}(e, m)$ from the proof of Corollary 2.11 for $|\lambda| = |\mu| \le 5$. Our solution to Example 2.7 also suggests a combinatorial interpretation of $M_{\lambda,\mu}(e, m)$ involving fillings of the Ferrers diagram of $\lambda$. As we show next, this interpretation holds in general, and we can rephrase it in terms of matrices of 0's and 1's to get another description of $M_{\lambda,\mu}(e, m)$.

Proposition 2.12. The following hold for all partitions $\lambda, \mu \vdash k$.

(i) $M_{\lambda,\mu}(e, m)$ is the number of fillings of the Ferrers diagram of $\lambda$ with positive integers for which the entries in each row are strictly increasing from left to right, and each integer $j$ appears exactly $\mu_j$ times.

(ii) $M_{\lambda,\mu}(e, m)$ is the number of $k \times k$ matrices in which every entry is 0 or 1, the sum of the entries in row $m$ is $\mu_m$ for all $m$, and the sum of the entries in column $j$ is $\lambda_j$ for all $j$.

(iii) $M_{\lambda,\mu}(e, m)$ is the number of ways to place $k$ balls, consisting of $\mu_m$ identical balls of type $m$ for each $m$, into $l(\lambda)$ urns, so that the $j$th urn contains exactly $\lambda_j$ balls, no two of which have the same type.

Proof. (i) As above, we first note that $M_{\lambda,\mu}(e, m)$ is the coefficient of the term $\prod_{m=1}^{l(\mu)} x_m^{\mu_m}$ in $e_\lambda = \prod_{j=1}^{l(\lambda)} e_{\lambda_j}$. With this in mind, suppose that for each $j$ we have a term $t_j$ from $e_{\lambda_j}$ and $\prod_{j=1}^{l(\lambda)} t_j = \prod_{m=1}^{l(\mu)} x_m^{\mu_m}$. Then we can construct a filling of the Ferrers diagram of $\lambda$ of the given type by placing, for $1 \le j \le l(\lambda)$, the subscripts of the variables which appear in $t_j$ in increasing order across the $j$th row of the diagram. We can also invert this process: if we have a filling of the given type, then for each $j$ with $1 \le j \le l(\lambda)$ we can reconstruct $t_j$ as the product $\prod_k x_k$, which is over all $k$ which appear in the $j$th row of the filling. Therefore, we have a bijection between terms of the form $\prod_{m=1}^{l(\mu)} x_m^{\mu_m}$ in the product $e_{\lambda_1} \cdots e_{\lambda_{l(\lambda)}}$ and our fillings of the Ferrers diagram of $\lambda$. Now the result follows.

(ii) Given a filling of the Ferrers diagram of $\lambda$ as in part (i), place a 1 in the $m$th entry of the $j$th column of a $k \times k$ matrix $A$ whenever row $j$ of the given filling contains an $m$, and let the remaining entries of $A$ all be 0. By construction the sum of the entries in the $j$th column of $A$ will be $\lambda_j$. And since $m$ appears exactly $\mu_m$ times in our given filling, the sum of the entries in the $m$th row of $A$ will be $\mu_m$. We can also invert this construction: if we have a $k \times k$ matrix $A$ of the given type, then for each $j$ with $1 \le j \le l(\lambda)$ the entries in the $j$th row of the associated filling will be the numbers of the rows in which the $j$th column of $A$ contains a 1. Now we have a bijection between

2.1. The Elementary Symmetric Functions

35

ﬁllings of the Ferrers diagram of λ of the type given in part (i) and matrices of the type given in part (ii), and the result follows. (iii) Given a ﬁlling of the Ferrers diagram of λ as in part (i), place a ball of type m in urn j whenever m appears in row j. By construction urn j will contain λj balls for each j. And since m appears exactly µm times in our given ﬁlling, we will use exactly µm balls of type m. We can also invert this construction: if we have an urn ﬁlling of the given type, then for each j with 1 ≤ j ≤ l(λ) the entries in the jth row of the associated ﬁlling will be the numbers on the balls in urn j. Now we have a bijection between ﬁllings of the Ferrers diagram of λ of the type given in part (i) and urn ﬁllings of the type given in part (iii), and the result follows. □ 1

Figure 2.6. The filling F in Example 2.13: row 1 of the Ferrers diagram contains 1, 2, 4; row 2 contains 2, 3; row 3 contains 1, 5.

Example 2.13. Let $\lambda = (3,2,2)$, and let $F$ be the filling of the Ferrers diagram for $\lambda$ given in Figure 2.6. Find the corresponding $7 \times 7$ matrix and way of placing seven balls in three urns described in Proposition 2.12.

Solution. The first row of $F$ contains 1, 2, and 4, so the first column of the associated matrix has a 1 in its first, second, and fourth positions, and 0's elsewhere. After constructing the other columns in the same way, we find the associated matrix is
\[
\begin{pmatrix}
1 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.
\]
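The construction in the proof of Proposition 2.12(ii) is easy to sketch in code. The following is my own illustration (the function name `filling_to_matrix` and the list-of-rows representation are not the book's notation), applied to the filling $F$ of this example.

```python
# Sketch of the bijection in Proposition 2.12(ii); the function name and
# representation (a filling as a list of rows) are my own choices, not the
# book's notation.

def filling_to_matrix(rows, k):
    """Column j of the k x k 0-1 matrix records which values appear in
    row j of the filling: A[m-1][j] = 1 exactly when m appears in row j."""
    A = [[0] * k for _ in range(k)]
    for j, row in enumerate(rows):
        for m in row:
            A[m - 1][j] = 1
    return A

# The filling F of Example 2.13: rows (1, 2, 4), (2, 3), (1, 5), with k = 7.
F = [[1, 2, 4], [2, 3], [1, 5]]
A = filling_to_matrix(F, 7)
```

As the proposition predicts, the column sums of `A` recover $\lambda = (3,2,2)$ and the row sums recover $\mu = (2,2,1,1,1)$.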


Figure 2.7. The filling of urns in the solution to Example 2.13: urn 1 contains balls of types 1, 2, and 4; urn 2 contains balls of types 2 and 3; urn 3 contains balls of types 1 and 5.

Similarly, since the first row of $F$ contains 1, 2, and 4, the first urn in our placement of balls in urns will contain balls of types 1, 2, and 4. Continuing in the same way, we find the associated placement of balls in urns is as in Figure 2.7. □

The matrices with entries $M_{\lambda,\mu}(e,m)$ in Figure 2.5 all have a nice symmetry: if you reflect any of them over the diagonal from the lower left corner to the upper right corner, then the matrix is unchanged. We can use one of our interpretations of $M_{\lambda,\mu}(e,m)$ in Proposition 2.12 to give an easy proof that this holds in general.

Corollary 2.14. For all partitions $\lambda, \mu \vdash k$, we have $M_{\lambda,\mu}(e,m) = M_{\mu,\lambda}(e,m)$.

Proof. For any partitions $\lambda, \mu \vdash k$, let $B_{\lambda,\mu}$ be the set of $k \times k$ matrices in which every entry is 0 or 1, the sum of the entries in row $m$ is $\mu_m$ for all $m$, and the sum of the entries in column $j$ is $\lambda_j$ for all $j$. By Proposition 2.12(ii) we have $|B_{\lambda,\mu}| = M_{\lambda,\mu}(e,m)$. The result follows from the fact that the transpose map is a bijection between $B_{\lambda,\mu}$ and $B_{\mu,\lambda}$. □

Corollary 2.11 implies every element of $\Lambda$ is a linear combination of elementary symmetric functions $e_\lambda$ for various partitions $\lambda$, and this expression is unique. But if we also allow ourselves to multiply symmetric functions, then we can construct every element of $\Lambda$ using only the elementary symmetric functions $\{e_n\}_{n=1}^\infty$. Our next result amounts to showing this expression, too, is unique, but we couch it in slightly different (though traditional) terminology.


Definition 2.15. Suppose $a_j \in \Lambda$ for $j \ge 1$. We say the set $\{a_j\}_{j=1}^\infty$ is algebraically independent whenever there is no nonzero polynomial $p(y_1, \ldots, y_n)$ such that $p(a_1, \ldots, a_n) = 0$.

There is much we could say about which sets are algebraically independent and which are not. For example, it is easy to construct a set which is not algebraically independent by starting with a nonzero symmetric function and then including some polynomial function of that symmetric function. For instance, any set including both $e_2$ and $e_2^3 - 6e_2$ is not algebraically independent. But as our next example shows, there are more complicated ways for a set to fail to be algebraically independent.

Example 2.16. Show that the symmetric functions $m_1$, $m_{11} + m_2$, $m_{21} + m_3$, $m_{31} + m_4$, $m_{41} + m_5, \ldots$ are not algebraically independent.

Solution. With some patience we can check that $m_1^3 = e_{111} = 6m_{111} + 3m_{21} + m_3$ and $(m_{11} + m_2)m_1 = 3m_{111} + 2m_{21} + m_3$. Therefore, if $p(y_1, y_2, y_3) = y_3 - 2y_2y_1 + y_1^3$, then $p(m_1, m_{11} + m_2, m_{21} + m_3) = 0$. □
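The algebraic relation found in this solution can be spot-checked by machine. Here is a small numeric sketch of my own (the helper `m` is not from the text); since the relation is an identity of symmetric functions, evaluating at arbitrary rational points in four variables suffices to catch an arithmetic slip.

```python
# A numeric spot check of Example 2.16 (my own sketch; the helper m() is
# not from the text). We evaluate at four arbitrary rational points.

from fractions import Fraction
from itertools import permutations

xs = [Fraction(1, 2), Fraction(2, 3), Fraction(3, 1), Fraction(5, 7)]

def m(lam):
    """Monomial symmetric polynomial m_lambda evaluated at xs."""
    expos = tuple(lam) + (0,) * (len(xs) - len(lam))
    total = Fraction(0)
    for t in set(permutations(expos)):   # each distinct rearrangement once
        prod = Fraction(1)
        for x, e in zip(xs, t):
            prod *= x ** e
        total += prod
    return total

a1 = m([1])
a2 = m([1, 1]) + m([2])
a3 = m([2, 1]) + m([3])
value = a3 - 2 * a2 * a1 + a1 ** 3   # p(a1, a2, a3) from the solution
```

Exact `Fraction` arithmetic is used deliberately, so `value` is exactly zero rather than merely small.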

We now show what we suggested earlier: the elementary symmetric functions $\{e_n\}_{n=1}^\infty$ are a canonical example of an algebraically independent set.

Theorem 2.17. The set $\{e_n\}_{n=1}^\infty$ is algebraically independent.

Proof. Suppose $f(y_1, \ldots, y_n)$ is a polynomial with $f(e_1, \ldots, e_n) = 0$; we show $f(y_1, \ldots, y_n) = 0$. Note that each monomial in $f(e_1, \ldots, e_n)$ is an elementary symmetric function $e_\lambda$ for some partition $\lambda$, so $f(e_1, \ldots, e_n)$ is a linear combination of elementary symmetric functions. By Corollary 2.11 the elementary symmetric functions are linearly independent, so all of the coefficients in $f(y_1, \ldots, y_n)$ are equal to 0. This implies $f(y_1, \ldots, y_n) = 0$, as we claimed. □


Theorem 2.17 essentially says the elementary symmetric functions $\{e_n\}_{n=1}^\infty$ form an "algebraic basis" for $\Lambda$. That is, every element of $\Lambda$ can be uniquely written as a polynomial in $\{e_n\}_{n=1}^\infty$, so we can think of $\Lambda$ as the set of all polynomials in the variables $\{e_n\}_{n=1}^\infty$. We sometimes take advantage of this point of view by substituting values for the "variables" $\{e_n\}_{n=1}^\infty$, rather than for the usual variables $x_1, x_2, \ldots$.

We conclude our discussion of the elementary symmetric functions by returning to our starting point. We first encountered the elementary symmetric functions by considering the expressions we get when we write the coefficients of a generic polynomial in $z$ in terms of its roots $x_1, \ldots, x_n$. These coefficients are (specializations of) the elementary symmetric functions $e_0, e_1, \ldots$, which we used to build all of the other elementary symmetric functions. Since $e_0, e_1, \ldots$ is a sequence, we can't resist studying its generating function. In fact, our construction of the $e_j$'s guarantees this generating function will have a nice product formula.

Proposition 2.18. The ordinary generating function for the sequence $\{e_n\}_{n=0}^\infty$ of elementary symmetric functions is
\[
(2.6) \qquad \sum_{n=0}^\infty e_n t^n = \prod_{j=1}^\infty (1 + x_j t).
\]
We often write $E(t)$ to denote this generating function.

Proof. We can build each elementary symmetric function $e_n$ uniquely by adding the terms which result from deciding, for each $j$, whether to include $x_j$ as a factor or not. This matches our computation of the product on the right side of (2.6): we construct each term by deciding, for each factor $1 + x_j t$, whether to use $1$ or $x_j t$ as a factor. □
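Equation (2.6) is easy to verify by machine in finitely many variables. The following sketch is mine (the helper `e` and the chosen test values are not from the text): it expands $\prod_{j=1}^5 (1 + x_j t)$ as a polynomial in $t$ and compares the coefficients with $e_k(X_5)$.

```python
# A finite-variable check of (2.6) (my own sketch): with five variables,
# the coefficient of t^k in prod_j (1 + x_j t) equals e_k(X_5).

from fractions import Fraction
from itertools import combinations

xs = [Fraction(k, k + 1) for k in range(1, 6)]   # arbitrary test values

def e(k):
    """Elementary symmetric polynomial e_k evaluated at xs."""
    total = Fraction(0)
    for sub in combinations(xs, k):
        prod = Fraction(1)
        for x in sub:
            prod *= x
        total += prod
    return total

coeffs = [Fraction(1)]            # coefficients of the polynomial "1"
for x in xs:                      # multiply by (1 + x t)
    new = coeffs + [Fraction(0)]
    for i in range(len(new) - 1, 0, -1):
        new[i] += x * coeffs[i - 1]
    coeffs = new
```

After the loop, `coeffs[k]` is the coefficient of $t^k$ in the product, and it agrees with `e(k)` for $0 \le k \le 5$.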

2.2. The Complete Homogeneous Symmetric Functions

The elementary symmetric polynomial $e_k(X_n)$ is a sum over all subsets of $[n]$ of size $k$, with no repeated elements allowed. In general, if a problem is interesting when repetition is not allowed, then the analogous problem in which repetition is allowed is also likely to be interesting. This suggests we should also consider a sum over all subsets of $[n]$ in which repeated elements are allowed. To distinguish this sum from the corresponding sum over all subsets of $[n]$, we will write $[[n]]$ to denote the multiset $\{1^\infty, 2^\infty, \ldots, n^\infty\}$ with infinitely many copies of each element of $[n]$, and we will write $J \subseteq [[n]]$ to mean $J$ is a submultiset of $[[n]]$. Similarly, we will write $[\mathbb{P}]$ to denote the multiset consisting of infinitely many copies of each positive integer.

Definition 2.19. For all positive integers $n$ and $k$, the complete homogeneous polynomial $h_k(X_n)$ is given by
\[
h_k(X_n) = \sum_{1 \le j_1 \le \cdots \le j_k \le n} \; \prod_{m=1}^k x_{j_m} = \sum_{\substack{J \subseteq [[n]] \\ |J| = k}} \; \prod_{j \in J} x_j,
\]
while the complete homogeneous symmetric function $h_k$ is given by
\[
h_k = \sum_{j_1 \le \cdots \le j_k} \; \prod_{m=1}^k x_{j_m} = \sum_{\substack{J \subseteq [\mathbb{P}] \\ |J| = k}} \; \prod_{j \in J} x_j.
\]
By convention, $h_0(X_n) = 1$ and $h_0 = 1$. For any partition $\lambda$, the complete homogeneous symmetric polynomial indexed by $\lambda$, written $h_\lambda(X_n)$, is given by
\[
(2.7) \qquad h_\lambda(X_n) = \prod_{j=1}^{l(\lambda)} h_{\lambda_j}(X_n),
\]
and the complete homogeneous symmetric function indexed by $\lambda$, written $h_\lambda$, is given by
\[
(2.8) \qquad h_\lambda = \prod_{j=1}^{l(\lambda)} h_{\lambda_j}.
\]

Since $|\{h_\lambda \mid \lambda \vdash k\}| = p(k) = \dim \Lambda^k$, if they are distinct, then we have exactly the right number of complete homogeneous symmetric functions of each degree to form a basis for $\Lambda^k$. Our proof that the elementary symmetric functions form a basis for $\Lambda^k$ inspires us to consider how the complete homogeneous symmetric functions are written as linear combinations of the monomial symmetric functions. As happened for the elementary symmetric functions, one case is especially simple.


Example 2.20. For all $k \ge 1$, write $h_k$ as a linear combination of monomial symmetric functions.

Solution. For every $\lambda \vdash k$, one of the terms in $h_k$ is $x_1^{\lambda_1} \cdots x_{l(\lambda)}^{\lambda_{l(\lambda)}}$, and this term has coefficient 1. Therefore,
\[
(2.9) \qquad h_k = \sum_{\lambda \vdash k} m_\lambda. \qquad \text{□}
\]

As was the case for the elementary symmetric polynomials, as long as we have enough variables, the algebraic relationships between the complete homogeneous symmetric polynomials and the monomial symmetric polynomials do not depend on exactly how many variables we have. More specifically, for any partitions $\lambda$ and $\mu$ with $|\lambda| = |\mu|$ and any $n \ge 1$, let $M_{\lambda,\mu,n}(h,m)$ be the rational numbers defined by
\[
h_\lambda(X_n) = \sum_{\mu \vdash |\lambda|} M_{\lambda,\mu,n}(h,m)\, m_\mu(X_n).
\]
Similarly, for any partitions $\lambda$ and $\mu$ with $|\lambda| = |\mu|$, let $M_{\lambda,\mu}(h,m)$ be the rational numbers defined by
\[
(2.10) \qquad h_\lambda = \sum_{\mu \vdash |\lambda|} M_{\lambda,\mu}(h,m)\, m_\mu.
\]
Then we have the following analogue of Proposition 2.6.

Proposition 2.21. For any partitions $\lambda$ and $\mu$ with $|\lambda| = |\mu|$, if $n \ge |\lambda|$ then $M_{\lambda,\mu,n}(h,m) = M_{\lambda,\mu}(h,m)$. In particular, if $n \ge |\lambda|$, then $M_{\lambda,\mu,n}(h,m)$ is independent of $n$.

Proof. This is similar to the proof of Proposition 2.6. □

We can also write $h_\lambda$ as a linear combination of the monomial symmetric functions by hand when $|\lambda|$ is small.

Example 2.22. For each $\lambda \vdash 3$, write $h_\lambda$ as a linear combination of the monomial symmetric functions.

Solution. In view of Proposition 2.21, we can compute with $h_\lambda(X_3)$.


When $\lambda = (3)$, we have $h_\lambda = h_3 = m_3 + m_{21} + m_{111}$ by (2.9). When $\lambda = (2,1)$, we have
\[
h_{21}(X_3) = (x_1^2 + x_2^2 + x_3^2 + x_1x_2 + x_1x_3 + x_2x_3)(x_1 + x_2 + x_3) = x_1^3 + x_2^3 + x_3^3 + 2x_1^2x_2 + 2x_1^2x_3 + 2x_1x_2^2 + 2x_2^2x_3 + 2x_1x_3^2 + 2x_2x_3^2 + 3x_1x_2x_3,
\]
so $h_{21} = m_3 + 2m_{21} + 3m_{111}$. Similarly, when $\lambda = (1^3)$, we have
\[
h_{111}(X_3) = (x_1 + x_2 + x_3)^3 = x_1^3 + x_2^3 + x_3^3 + 3x_1^2x_2 + 3x_1^2x_3 + 3x_1x_2^2 + 3x_2^2x_3 + 3x_1x_3^2 + 3x_2x_3^2 + 6x_1x_2x_3,
\]
so $h_{111} = m_3 + 3m_{21} + 6m_{111}$. □
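These expansions can be double-checked mechanically. The sketch below is my own (the helpers `h` and `mul` are not the book's code): it represents polynomials in three variables as dictionaries from exponent vectors to coefficients, multiplies out $h_2 h_1$ and $h_1^3$, and reads off the monomial coefficients.

```python
# Machine check of Example 2.22 (an independent sketch, not the author's
# code): multiply h_2 * h_1 and h_1^3 in three variables and read off
# monomial coefficients.

from collections import defaultdict
from itertools import combinations_with_replacement

N = 3

def h(k):
    """h_k(X_3) as a dict mapping exponent vectors to coefficients."""
    p = defaultdict(int)
    for js in combinations_with_replacement(range(N), k):
        expo = [0] * N
        for j in js:
            expo[j] += 1
        p[tuple(expo)] += 1
    return dict(p)

def mul(p, q):
    """Multiply two polynomials stored as exponent-vector dicts."""
    r = defaultdict(int)
    for a, ca in p.items():
        for b, cb in q.items():
            r[tuple(u + v for u, v in zip(a, b))] += ca * cb
    return dict(r)

h21 = mul(h(2), h(1))
h111 = mul(mul(h(1), h(1)), h(1))
```

The coefficients of $x_1^3$, $x_1^2 x_2$, and $x_1 x_2 x_3$ come out to 1, 2, 3 for $h_{21}$ and 1, 3, 6 for $h_{111}$, matching the solution.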

In Examples 2.20 and 2.22 we found $M_{\lambda,\mu}(h,m)$ for all $\lambda$ and $\mu$ with $|\lambda| = |\mu| = 3$. In Figure 2.8 we summarize the results of Example 2.22 in a matrix, along with the corresponding matrices for $|\lambda| = |\mu| \le 5$. Problem 2.17 has combinatorial interpretations of these numbers analogous to the interpretations of $M_{\lambda,\mu}(e,m)$ in Proposition 2.12.

As the matrices in Figure 2.8 suggest, the matrix we get when we express the complete homogeneous symmetric functions in terms of the monomial symmetric functions is not triangular in general. (But you should take a look at its determinant! See Problem 2.23.) As a result, it is not as easy to show the complete homogeneous symmetric functions form a basis for $\Lambda^k$ as it was to do this for the elementary symmetric functions. But we can do it by using a simple relationship between the ordinary generating function for $\{e_n\}_{n=0}^\infty$ and the ordinary generating function for $\{h_n\}_{n=0}^\infty$.

Proposition 2.23. The ordinary generating function for the sequence $\{h_n\}_{n=0}^\infty$ of complete homogeneous symmetric functions is
\[
(2.11) \qquad \sum_{n=0}^\infty h_n t^n = \prod_{j=1}^\infty \frac{1}{1 - x_j t}.
\]
We often write $H(t)$ to denote this generating function.

Proof. We can build each complete homogeneous symmetric function $h_n$ uniquely by adding the terms which result from deciding, for each $j$, how many factors of $x_j$ to include. This matches our computation of the product on the right side of (2.11): we construct each term by deciding, for each factor $\frac{1}{1-x_jt} = 1 + x_j t + (x_j t)^2 + \cdots$, which power of $x_j t$ to use as a factor. □

Figure 2.8. The coefficients $M_{\lambda,\mu}(h,m)$ for $|\lambda| = |\mu| \le 5$, with rows and columns indexed by partitions in increasing lexicographic order.

For $|\lambda| = 1$, indexed by (1):
  1

For $|\lambda| = 2$, indexed by (1,1), (2):
  2 1
  1 1

For $|\lambda| = 3$, indexed by (1,1,1), (2,1), (3):
  6 3 1
  3 2 1
  1 1 1

For $|\lambda| = 4$, indexed by (1,1,1,1), (2,1,1), (2,2), (3,1), (4):
  24 12  6  4  1
  12  7  4  3  1
   6  4  3  2  1
   4  3  2  2  1
   1  1  1  1  1

For $|\lambda| = 5$, indexed by (1,1,1,1,1), (2,1,1,1), (2,2,1), (3,1,1), (3,2), (4,1), (5):
  120 60 30 20 10  5  1
   60 33 18 13  7  4  1
   30 18 11  8  5  3  1
   20 13  8  7  4  3  1
   10  7  5  4  3  2  1
    5  4  3  3  2  2  1
    1  1  1  1  1  1  1

Comparing equation (2.11) with equation (2.6), we see $E(-t)$ and $H(t)$ are multiplicative inverses. This gives us an elegant relationship between the elementary symmetric functions and the complete homogeneous symmetric functions.


Proposition 2.24. For all $n \ge 1$, we have
\[
(2.12) \qquad \sum_{j=0}^n (-1)^j e_j h_{n-j} = 0.
\]

Proof. By equations (2.6) and (2.11) we have $E(-t)H(t) = 1$. When $n \ge 1$, the coefficient of $t^n$ on the right side of the identity is 0, and on the left side it is $\sum_{j=0}^n (-1)^j e_j h_{n-j}$, so (2.12) must hold. □

This proof is pleasantly short, but it is not especially combinatorial. With a little additional machinery, we can give a natural combinatorial proof of equation (2.12).

Combinatorial Proof of Proposition 2.24. For any set or multiset $J$ of positive integers, define $x^J$ by $x^J := \prod_{j \in J} x_j$. Now note that $e_j = \sum_{J \subseteq \mathbb{P}} x^J$ and $h_j = \sum_{J \subseteq [\mathbb{P}]} x^J$, where the sum for $e_j$ is over all subsets of the positive integers of size $j$ and the sum for $h_j$ is over all submultisets of the positive integers of size $j$. In addition, for any set $J$, define $\mathrm{sgn}(J)$ by $\mathrm{sgn}(J) := (-1)^{|J|}$. With this notation, (2.12) becomes
\[
\sum_{(J_1, J_2)} \mathrm{sgn}(J_1)\, x^{J_1} x^{J_2} = 0,
\]
where the sum is over all ordered pairs $(J_1, J_2)$ in which $J_1$ is a subset of $\mathbb{P}$ of size $j$ and $J_2$ is a submultiset of $[\mathbb{P}]$ of size $n - j$. If we set $\mathrm{sgn}(J_1, J_2) = \mathrm{sgn}(J_1)$ for each ordered pair $(J_1, J_2)$, then we just need to give an involution $I$ on these ordered pairs such that $I$ has no fixed points and $\mathrm{sgn}(I(J_1, J_2)) = -\mathrm{sgn}(J_1, J_2)$.

To construct $I$, suppose $(J_1, J_2)$ is given. Among all of the elements of $J_1 \cup J_2$, let $k$ be the smallest. If $k \in J_1$, then set $I(J_1, J_2) := (J_1 - \{k\}, J_2 \cup \{k\})$, and if $k \notin J_1$, then set $I(J_1, J_2) := (J_1 \cup \{k\}, J_2 - \{k\})$. In words, move one copy of $k$ from $J_2$ to $J_1$ if you can, and otherwise move $k$ from $J_1$ to $J_2$. Moving the smallest element of $J_1 \cup J_2$ between $J_1$ and $J_2$ does not change the fact that it is the smallest element in $J_1 \cup J_2$, and our construction guarantees $J_1$ is always a set (rather than a multiset), so we see $I(I(J_1, J_2)) = (J_1, J_2)$. Furthermore, since $I$ changes the size of $J_1$ by one, it can have no fixed points, and $\mathrm{sgn}(I(J_1, J_2)) = -\mathrm{sgn}(J_1, J_2)$, as desired. □
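Since (2.12) specializes to an identity of polynomials in any finite number of variables ($E(-t)H(t) = \prod_j (1-x_jt) \cdot \prod_j \tfrac{1}{1-x_jt} = 1$ exactly), it is easy to test numerically. This is my own sketch; the helpers and test values are not from the text.

```python
# Numeric check of Proposition 2.24 (my own sketch) in six variables, where
# both sides of (2.12) are honest polynomials: E(-t)H(t) = 1 forces the
# alternating sums to vanish.

from fractions import Fraction
from itertools import combinations, combinations_with_replacement

xs = [Fraction(k, 7) for k in range(1, 7)]

def e(k):
    """Elementary symmetric polynomial e_k evaluated at xs."""
    total = Fraction(0)
    for sub in combinations(xs, k):
        prod = Fraction(1)
        for x in sub:
            prod *= x
        total += prod
    return total

def h(k):
    """Complete homogeneous symmetric polynomial h_k evaluated at xs."""
    total = Fraction(0)
    for sub in combinations_with_replacement(xs, k):
        prod = Fraction(1)
        for x in sub:
            prod *= x
        total += prod
    return total

checks = [sum((-1) ** j * e(j) * h(n - j) for j in range(n + 1))
          for n in range(1, 7)]
```

Every entry of `checks` is exactly zero, as (2.12) asserts for $1 \le n \le 6$.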

Proposition 2.24 gives us a new way to show $\{h_\lambda \mid \lambda \vdash k\}$ spans $\Lambda^k$.

Proposition 2.25. For all $k \ge 0$, the set $\{h_\lambda \mid \lambda \vdash k\}$ is a basis for $\Lambda^k$.

Proof. When $k = 0$, we have $\Lambda^0 = \mathrm{Span}\{1\}$. Since $h_0 = e_0 = 1$, the result is clear in this case. When $k = 1$ we have $\Lambda^1 = \mathrm{Span}\{x_1 + x_2 + \cdots\}$. Since $h_1 = e_1 = x_1 + x_2 + \cdots$, the result is also clear in this case. Now suppose $k \ge 2$; we argue by induction on $k$. Since $|\{h_\lambda \mid \lambda \vdash k\}| = p(k) = \dim \Lambda^k$ and $\mathrm{Span}\{h_\lambda \mid \lambda \vdash k\} \subseteq \Lambda^k$, it is sufficient to show $\Lambda^k \subseteq \mathrm{Span}\{h_\lambda \mid \lambda \vdash k\}$. By Corollary 2.11, we have $\Lambda^k = \mathrm{Span}\{e_\lambda \mid \lambda \vdash k\}$, so it is even enough to show $e_\mu \in \mathrm{Span}\{h_\lambda \mid \lambda \vdash k\}$ for all $\mu \vdash k$. To do this, we consider two cases.

If $\mu \ne (k)$, then $e_\mu$ is a product $e_{\mu_1} e_{\mu_2} \cdots e_{\mu_{l(\mu)}}$ in which, for each $j$, we have $\mu_j < k$ and $e_{\mu_j} \in \Lambda^{\mu_j}$. By induction, $e_{\mu_j} \in \mathrm{Span}\{h_\lambda \mid \lambda \vdash \mu_j\}$ for all $j$, so $e_\mu \in \mathrm{Span}\{h_\lambda \mid \lambda \vdash k\}$. If $\mu = (k)$, then by (2.12) we have
\[
e_k = \sum_{j=0}^{k-1} (-1)^{k+j+1} e_j h_{k-j}.
\]
By induction, $e_j \in \mathrm{Span}\{h_\lambda \mid \lambda \vdash j\}$ for $0 \le j \le k - 1$, so $e_k \in \mathrm{Span}\{h_\lambda \mid \lambda \vdash k\}$, which is what we wanted to prove. □

Proposition 2.25 gives us an analogue of Theorem 2.17 for the complete homogeneous symmetric functions.

Corollary 2.26. The set $\{h_n\}_{n=1}^\infty$ is algebraically independent.

Proof. This is similar to the proof of Theorem 2.17. □

2.3. The Power Sum Symmetric Functions

The elementary symmetric functions $e_k = m_{1^k}$ are at one extreme of the set of monomial symmetric functions in the sense that $(1^k)$ is the partition of $k$ with the largest number of parts. With this in mind,


it seems natural to look at the monomial symmetric functions $m_\mu$ in which $\mu$ has the smallest number of parts, namely $m_k$.

Definition 2.27. For all $n, k \ge 1$ we set
\[
p_k(X_n) = \sum_{j=1}^n x_j^k \qquad \text{and} \qquad p_k = \sum_{j=1}^\infty x_j^k.
\]
For any partition $\lambda$, the power sum symmetric polynomial $p_\lambda(X_n)$ indexed by $\lambda$ is
\[
(2.13) \qquad p_\lambda(X_n) = \prod_{j=1}^{l(\lambda)} p_{\lambda_j}(X_n),
\]
and the power sum symmetric function $p_\lambda$ indexed by $\lambda$ is
\[
(2.14) \qquad p_\lambda = \prod_{j=1}^{l(\lambda)} p_{\lambda_j}.
\]
Note that neither $p_0(X_n)$ nor $p_0$ is defined.

Our work with the elementary and complete homogeneous symmetric functions suggests the next natural step is to explore how the power sum symmetric functions are written as linear combinations of the monomial symmetric functions. To start, for any nonempty partitions $\lambda$ and $\mu$ with $|\lambda| = |\mu|$ and any $n \ge 1$, let $M_{\lambda,\mu,n}(p,m)$ be the rational numbers defined by
\[
p_\lambda(X_n) = \sum_{\mu \vdash |\lambda|} M_{\lambda,\mu,n}(p,m)\, m_\mu(X_n).
\]
Similarly, for any nonempty partitions $\lambda$ and $\mu$ with $|\lambda| = |\mu|$, let $M_{\lambda,\mu}(p,m)$ be the rational numbers defined by
\[
(2.15) \qquad p_\lambda = \sum_{\mu \vdash |\lambda|} M_{\lambda,\mu}(p,m)\, m_\mu.
\]
Then we have the following analogue of Propositions 2.6 and 2.21.

Proposition 2.28. For any nonempty partitions $\lambda$ and $\mu$ with $|\lambda| = |\mu|$, if $n \ge |\lambda|$, then $M_{\lambda,\mu,n}(p,m) = M_{\lambda,\mu}(p,m)$. In particular, if $n \ge |\lambda|$, then $M_{\lambda,\mu,n}(p,m)$ is independent of $n$.

Proof. This is similar to the proof of Proposition 2.6. □

In our first example we confirm we have in fact singled out $m_k$.

Example 2.29. For each $k \ge 1$, write $p_k$ as a linear combination of the monomial symmetric functions.

Solution. Since $p_k$ has only terms of the form $x_j^k$, and each term has coefficient 1, we see $p_k = m_k$. □

Using Proposition 2.28 to reduce our work to computations with polynomials, we find the coefficients $M_{\lambda,\mu}(p,m)$ for $|\lambda| = |\mu| \le 5$ in Figure 2.9. These data suggest that when we write the power sum symmetric functions as linear combinations of the monomial symmetric functions and arrange our partitions in increasing lexicographic order, the resulting matrices are upper triangular with nonzero entries on their diagonals. We prove this next, and we use this result to show $\{p_\lambda \mid \lambda \vdash k\}$ is a basis for $\Lambda^k$.

Proposition 2.30. For all $k \ge 1$ and all nonempty partitions $\lambda, \mu$ with $|\lambda| = |\mu|$, the following hold.

(i) If $\lambda >_{lex} \mu$, then $M_{\lambda,\mu}(p,m) = 0$.

(ii) $M_{\lambda,\lambda}(p,m) > 0$.

Proof. (i) By definition we have $p_\lambda = p_{\lambda_1} \cdots p_{\lambda_{l(\lambda)}}$, and we note that $M_{\lambda,\mu}(p,m)$ is the coefficient of $x_1^{\mu_1} \cdots x_{l(\mu)}^{\mu_{l(\mu)}}$ in this product. Each term in the expansion of this product is the product of one term from each $p_{\lambda_j}$, and each such term has the form $x_m^{\lambda_j}$ for some $m$. If all of these $m$ are distinct, then $\lambda = \mu$. If two of these $m$ are equal, then $\mu$ is obtained from $\lambda$ by merging parts and rearranging the resulting numbers into weakly decreasing order, in which case $\mu >_{lex} \lambda$, and the result follows.

(ii) Note that in the expansion of the product $p_\lambda = p_{\lambda_1} \cdots p_{\lambda_{l(\lambda)}}$, we can obtain the term $x_1^{\lambda_1} \cdots x_{l(\lambda)}^{\lambda_{l(\lambda)}}$ by choosing, for each $j$, the term $x_j^{\lambda_j}$ from the factor $p_{\lambda_j}$. Since the coefficients in $p_{\lambda_j}$ are all positive for all $j$, the coefficient of $x_1^{\lambda_1} \cdots x_{l(\lambda)}^{\lambda_{l(\lambda)}}$ is at least 1, so $M_{\lambda,\lambda}(p,m) > 0$. □

Figure 2.9. The coefficients $M_{\lambda,\mu}(p,m)$ for $|\lambda| = |\mu| \le 5$, with rows and columns indexed by partitions in increasing lexicographic order.

For $|\lambda| = 1$, indexed by (1):
  1

For $|\lambda| = 2$, indexed by (1,1), (2):
  2 1
  0 1

For $|\lambda| = 3$, indexed by (1,1,1), (2,1), (3):
  6 3 1
  0 1 1
  0 0 1

For $|\lambda| = 4$, indexed by (1,1,1,1), (2,1,1), (2,2), (3,1), (4):
  24 12  6  4  1
   0  2  2  2  1
   0  0  2  0  1
   0  0  0  1  1
   0  0  0  0  1

For $|\lambda| = 5$, indexed by (1,1,1,1,1), (2,1,1,1), (2,2,1), (3,1,1), (3,2), (4,1), (5):
  120 60 30 20 10  5  1
    0  6  6  6  4  3  1
    0  0  2  0  2  1  1
    0  0  0  2  0  2  1
    0  0  0  0  1  0  1
    0  0  0  0  0  1  1
    0  0  0  0  0  0  1
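Any row of Figure 2.9 can be recomputed directly by expanding $p_\lambda$ in finitely many variables. The sketch below is mine (the helpers `p`, `mul`, and `M` are not the book's code); it checks the row for $\lambda = (2,1,1)$ and, in passing, the triangularity the figure suggests.

```python
# Independent check of one row of Figure 2.9 (my own sketch): expand
# p_lambda(X_4) and read off monomial coefficients; for lambda = (2,1,1)
# the row should be 0, 2, 2, 2, 1.

from collections import defaultdict

N = 4

def p(k):
    """p_k(X_4) as a dict mapping exponent vectors to coefficients."""
    poly = {}
    for j in range(N):
        expo = [0] * N
        expo[j] = k
        poly[tuple(expo)] = 1
    return poly

def mul(a, b):
    """Multiply two polynomials stored as exponent-vector dicts."""
    r = defaultdict(int)
    for u, cu in a.items():
        for v, cv in b.items():
            r[tuple(s + t for s, t in zip(u, v))] += cu * cv
    return dict(r)

def M(lam, mu):
    """Coefficient of x1^mu1 ... in p_lam, i.e. M_{lam,mu}(p, m)."""
    poly = {(0,) * N: 1}
    for part in lam:
        poly = mul(poly, p(part))
    return poly.get(tuple(mu) + (0,) * (N - len(mu)), 0)

row = [M((2, 1, 1), mu)
       for mu in [(1, 1, 1, 1), (2, 1, 1), (2, 2), (3, 1), (4,)]]
```

The leading zero reflects part (i) of Proposition 2.30: $(2,1,1) >_{lex} (1,1,1,1)$, so that coefficient vanishes.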

Corollary 2.31. For all $k \ge 1$, the set $\{p_\lambda \mid \lambda \vdash k\}$ is a basis for $\Lambda^k$.

Proof. This is similar to the proof of Corollary 2.11. □

Corollary 2.32. The set $\{p_n\}_{n=1}^\infty$ is algebraically independent.

Proof. This is similar to the proof of Theorem 2.17. □

Earlier we found the ordinary generating functions for $\{e_n\}_{n=0}^\infty$ and $\{h_n\}_{n=0}^\infty$, which we used to show that the complete homogeneous symmetric functions form a basis for $\Lambda^k$. Rather than finding the ordinary generating function for $\{p_n\}_{n=1}^\infty$, it turns out to be easier and more useful to find the ordinary generating function for $\{\frac{p_n}{n}\}_{n=1}^\infty$.

Our expression for this generating function involves a logarithm of a formal power series. This construction raises technical issues, but addressing these issues would take us too far afield. So we will proceed under the assumption that logarithms of formal power series behave in analogy with logarithms of polynomials. The interested reader can find more details about this in a variety of sources, including [Loe17, Ch. 7], [Niv69], and [Wil05, Ch. 2].

Proposition 2.33. The ordinary generating function for the sequence $\{\frac{p_n}{n}\}_{n=1}^\infty$ of scaled power sum symmetric functions is
\[
(2.16) \qquad \sum_{n=1}^\infty \frac{p_n}{n}\, t^n = \log\left(\prod_{j=1}^\infty \frac{1}{1 - x_j t}\right).
\]
We often write $P(t)$ to denote this generating function.

Proof. Since $\log\left(\frac{1}{1-x}\right) = \sum_{n=1}^\infty \frac{1}{n} x^n$, we have
\[
P(t) = \sum_{n=1}^\infty \frac{p_n}{n}\, t^n = \sum_{n=1}^\infty \sum_{j=1}^\infty \frac{1}{n}(x_j t)^n = \sum_{j=1}^\infty \log\left(\frac{1}{1 - x_j t}\right) = \log\left(\prod_{j=1}^\infty \frac{1}{1 - x_j t}\right),
\]
which is what we wanted to prove. □
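With finitely many variables and $|x_j t| < 1$, equation (2.16) is a statement about convergent series rather than formal ones, so it can be sanity-checked in floating point. This is my own sketch; the test values are arbitrary.

```python
# A floating-point sanity check of (2.16) (my own sketch). With finitely
# many variables and |x_j t| < 1 the identity is an honest statement about
# convergent series, so a truncated sum should match the logarithm closely.

import math

xs = [0.1, 0.25, 0.4]
t = 0.5

# sum_{n>=1} p_n t^n / n, truncated far past machine precision
lhs = sum(sum(x ** n for x in xs) * t ** n / n for n in range(1, 200))
# log prod_j 1/(1 - x_j t) = -sum_j log(1 - x_j t)
rhs = -sum(math.log(1 - x * t) for x in xs)
```

Here the largest $|x_j t|$ is $0.2$, so truncating at $n = 200$ leaves an error far below double precision.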


2.4. Problems

2.1. Compute $M_{\lambda,\mu}(e,m)$ for $\lambda = (2^2, 1^4)$ and $\mu = (3^2, 1^2)$.

2.2. Find and prove a formula for $M_{\lambda,1^n}(e,m)$, where $\lambda \vdash n$.

2.3. Find and prove a formula for $M_{\lambda,n}(e,m)$, where $\lambda \vdash n$.

2.4. Find and prove a formula for $M_{\lambda,\mu}(e,m)$, where $\lambda \vdash n$ and $\mu = (n-1, 1)$.

2.5. Show $\ge_{lex}$ (which means "$>_{lex}$ or $=$") is a partial ordering on the set of partitions.

2.6. Show that if $\lambda$ and $\mu$ are distinct partitions, then $\lambda >_{lex} \mu$ or $\mu >_{lex} \lambda$. That is, show the lexicographic ordering is a total ordering on the set of partitions.

2.7. Prove or disprove: if $\lambda$ and $\mu$ are partitions with $|\lambda| = |\mu|$ and $\lambda >_{lex} \mu$, then $\mu' >_{lex} \lambda'$.

2.8. Show that the converse of Proposition 2.10(i) is false. In particular, find partitions $\lambda$ and $\mu$ with $|\lambda| = |\mu|$ for which $M_{\lambda,\mu}(e,m) = 0$ even though $\lambda' >_{lex} \mu$.

2.9. Suppose $\lambda$ and $\mu$ are partitions. We say $\lambda$ is greater than or equal to $\mu$ in the dominance order, and we write $\lambda \trianglerighteq \mu$ or $\mu \trianglelefteq \lambda$, whenever $\sum_{j=1}^n \lambda_j \ge \sum_{j=1}^n \mu_j$ for all $n \ge 0$. Show that $M_{\lambda,\mu}(e,m) \ne 0$ if and only if $|\lambda| = |\mu|$ and $\lambda' \trianglerighteq \mu$.

2.10. Show $\trianglelefteq$ is a partial ordering on the set of partitions of $n$ for all $n \ge 0$.

2.11. For each $n \ge 6$, find partitions $\lambda \vdash n$ and $\mu \vdash n$ for which neither $\lambda \trianglelefteq \mu$ nor $\mu \trianglelefteq \lambda$.

2.12. Prove or disprove: if $\lambda$ and $\mu$ are partitions of $n$ and $\lambda \trianglerighteq \mu$, then $\lambda \ge_{lex} \mu$.

2.13. Prove or disprove: if $\lambda$ and $\mu$ are partitions with $|\lambda| = |\mu|$ and $\lambda \trianglerighteq \mu$, then $\lambda' \trianglelefteq \mu'$.

2.14. Suppose $\{f_n\}_{n=1}^\infty$ is an algebraically independent set of symmetric functions, each $f_n$ is homogeneous, and $\deg f_n \le \deg f_{n+1}$ for all $n \ge 1$. Furthermore, suppose every symmetric function can be written as a polynomial in $\{f_n\}_{n=1}^\infty$. Show $\deg f_n = n$ for all $n \ge 1$.


2.15. Write the formal power series
\[
\prod_{j=1}^\infty (1 + x_j + x_j^2)
\]
in terms of the elementary symmetric functions.

2.16. Show that for all $k \ge 1$ and all partitions $\lambda, \mu \vdash k$, the quantity $M_{\lambda,\mu}(h,m)$ defined in (2.10) is the number of fillings of the Ferrers diagram of $\lambda$ with positive integers for which the entries in each row are weakly increasing from left to right, and each integer $j$ appears exactly $\mu_j$ times.

2.17. Show that for all $k \ge 1$ and all partitions $\lambda, \mu \vdash k$, the following hold.

(a) $M_{\lambda,\mu}(h,m)$ is the number of $k \times k$ matrices in which every entry is a nonnegative integer, the sum of the entries in row $j$ is $\mu_j$ for all $j$, and the sum of the entries in column $j$ is $\lambda_j$ for all $j$.

(b) $M_{\lambda,\mu}(h,m)$ is the number of ways to place $k$ balls, consisting of $\mu_m$ identical balls of type $m$ for each $m$, into $l(\lambda)$ urns, so the $j$th urn contains exactly $\lambda_j$ balls.

We will use this problem in our proof of Proposition 7.10.

2.18. Compute $M_{\lambda,\mu}(h,m)$ for $\lambda = (2^2, 1^4)$ and $\mu = (3^2, 1^2)$.

2.19. Find and prove a formula for $M_{\lambda,1^n}(h,m)$, where $\lambda \vdash n$.

2.20. Find and prove a formula for $M_{\lambda,n}(h,m)$, where $\lambda \vdash n$.

2.21. Find and prove a formula for $M_{\lambda,\mu}(h,m)$, where $\lambda \vdash n$ and $\mu = (n-1, 1)$.

2.22. Prove or disprove: for every $\lambda, \mu \vdash k$, we have $M_{\lambda,\mu}(h,m) = M_{\mu,\lambda}(h,m)$.

2.23. For each $n \ge 1$, let $A_n$ be the $p(n) \times p(n)$ matrix whose rows and columns are indexed by partitions in lexicographic order from smallest to largest, and for which the entry in row $\lambda$ and column $\mu$ is $M_{\lambda,\mu}(h,m)$. The matrices $A_n$ for $1 \le n \le 5$ are shown in Figure 2.8. Prove or disprove: $\det(A_n) = 1$ for all $n \ge 1$.

2.24. Show
\[
h_k(X_n) = \sum_{j=1}^n x_j^{n-1+k} \prod_{l \ne j} (x_j - x_l)^{-1}.
\]


2.25. Show that for all $n \ge 0$, we have $e_n = \det\left(h_{1-j+k}\right)_{1 \le j,k \le n}$. Here $h_n = 0$ for $n < 0$.

2.26. Show that for all $n \ge 0$, we have $h_n = \det\left(e_{1-j+k}\right)_{1 \le j,k \le n}$. Here $e_n = 0$ for $n < 0$.

2.27. For all $k \ge 1$ and all partitions $\lambda, \mu \vdash k$, let $M_{\lambda,\mu}(p,m)$ be defined by
\[
p_\lambda = \sum_{\mu \vdash k} M_{\lambda,\mu}(p,m)\, m_\mu.
\]
Show that for all $k \ge 1$ and all partitions $\lambda, \mu \vdash k$, the quantity $M_{\lambda,\mu}(p,m)$ is the number of fillings of the Ferrers diagram of $\lambda$ with positive integers for which the entries in each row are constant, and each integer $j$ appears exactly $\mu_j$ times. We will use this result in our proof of Proposition 7.17.

2.28. Show that for all $k \ge 1$ and all partitions $\lambda, \mu \vdash k$, the following hold.

(a) $M_{\lambda,\mu}(p,m)$ is the number of $k \times k$ matrices in which the sum of the entries in row $m$ is $\mu_m$ for all $m$, and for all $j$ exactly one entry of column $j$ is $\lambda_j$ and all other entries in column $j$ are 0.

(b) $M_{\lambda,\mu}(p,m)$ is the number of ways to place $k$ balls, consisting of $\mu_m$ identical balls of type $m$ for each $m$, into $l(\lambda)$ urns, so that for all $j$ the $j$th urn contains exactly $\lambda_j$ balls, all of which have the same type.

2.29. Compute $M_{\lambda,\mu}(p,m)$ for $\lambda = (2^2, 1^4)$ and $\mu = (3^2, 1^2)$.

2.30. Find and prove a formula for $M_{\lambda,1^n}(p,m)$, where $\lambda \vdash n$.

2.31. Find and prove a formula for $M_{\lambda,n}(p,m)$, where $\lambda \vdash n$.

2.32. Find and prove a formula for $M_{\lambda,\mu}(p,m)$, where $\lambda \vdash n$ and $\mu = (n-1, 1)$.

2.33. Find and prove a formula for $M_{\lambda,\lambda}(p,m)$.

2.34. Prove or disprove: for every $\lambda, \mu \vdash k$, we have $M_{\lambda,\mu}(p,m) = M_{\mu,\lambda}(p,m)$.

Chapter 3

Interlude: Evaluations of Symmetric Functions

Our interest in symmetric functions is based primarily on the relationships among them and how we can describe these relationships combinatorially. Nevertheless, many apparently disparate combinatorial quantities are, in fact, evaluations of symmetric functions at carefully chosen points. This means, in particular, that a variety of combinatorial identities and dualities are, in fact, just special cases of symmetric function identities. In other words, symmetric functions provide a common explanation for several families of combinatorial facts. In this chapter we develop a handful of symmetric function identities, which we then use to tie together binomial coeﬃcients, Stirling numbers, and q-binomial coeﬃcients.

3.1. Symmetric Function Identities

Many combinatorial quantities we can recognize as evaluations of symmetric functions, such as the binomial coefficients or the Stirling numbers, satisfy recurrence relations similar to Pascal's identity. These recurrence relations come directly from the following simple recurrence relations for the elementary and complete homogeneous symmetric polynomials.


Proposition 3.1. For all $k \ge 0$ and all $n \ge 1$, we have
\[
(3.1) \qquad e_k(X_n) = e_k(X_{n-1}) + x_n e_{k-1}(X_{n-1})
\]
and
\[
(3.2) \qquad h_k(X_n) = h_k(X_{n-1}) + x_n h_{k-1}(X_n).
\]

Proof. To prove equation (3.1), note that we have two kinds of terms in $e_k(X_n)$: those with $x_n$ as a factor and those without $x_n$ as a factor. Those without $x_n$ as a factor form $e_k(X_{n-1})$, and when we take out the common factor $x_n$ from those terms with $x_n$ as a factor, we obtain $e_{k-1}(X_{n-1})$.

The proof of equation (3.2) is Problem 3.1. □

Equations (3.1) and (3.2) give us recurrence relations for various specializations of the elementary and complete homogeneous symmetric functions, but these specializations are at the root of many identities because the symmetric functions themselves satisfy a variety of identities. We have already seen one of these identities in (2.12), which says
\[
(3.3) \qquad \sum_{j=0}^n (-1)^j e_j h_{n-j} = 0
\]
for $n \ge 1$. Our next two identities are similar to this, but they give us expressions for $p_k$ which are similar to the sum on the left side of equation (3.3).

Proposition 3.2. For all $k \ge 1$, we have
\[
(3.4) \qquad p_k = \sum_{j=1}^k (-1)^{j-1} j\, e_j h_{k-j}
\]
and
\[
(3.5) \qquad p_k = \sum_{j=1}^k (-1)^{k+j} j\, e_{k-j} h_j.
\]

Proof. To prove equation (3.4), first note that when we expand the products on the right in terms of $x_1, x_2, \ldots$, the resulting terms correspond to a choice of one term from $e_j$, followed by a choice of one of the variables $x_s$ in that term, followed by a choice of one term from

$h_{k-j}$. We can keep track of these terms by starting with a $1 \times j$ tile and a $1 \times (k-j)$ tile, filling the $1 \times j$ tile with distinct positive integers in increasing order, marking one of these positive integers, and then filling the $1 \times (k-j)$ tile with positive integers in weakly increasing order. For example, when $k = 7$ and $j = 4$, the filled tiles in Figure 3.1 correspond to choosing the term $x_1 x_3 x_4 x_7$ from $e_j$, marking the $x_4$, and then choosing the term $x_2^2 x_4$ from $h_{k-j}$.

Figure 3.1. A filling indexing a term on the right side of equation (3.4): the left tile contains 1, 3, 4*, 7 (with the 4 marked) and the right tile contains 2, 2, 4.

We now describe a function $f$ from the set of these marked, filled pairs of tiles to itself. If a marked, filled pair $T$ has $j = 1$ and all of the entries in both tiles are equal, then we set $f(T) = T$. Otherwise, find the smallest number $r$ which appears in either tile and which is not the marked number in the first tile. If $r$ does not appear in the left tile (corresponding with the term from $e_j$), then remove one copy of $r$ from the right tile and insert it into the left tile. If $r$ does appear in the left tile, then remove it from the left tile and insert it in the right tile. When we apply $f$ to the filling in Figure 3.1, we find $r = 1$, and we obtain the filling in Figure 3.2.

Figure 3.2. The image of the filling in Figure 3.1 under $f$: the left tile contains 3, 4*, 7 and the right tile contains 1, 2, 2, 4.

Our function $f$ has several important properties. First, notice that if $f(T) \ne T$ and we apply $f$ to $f(T)$, then we will choose the same number $r$ as we did to compute $f(T)$, and we will put it back in its original position. Therefore, $f(f(T)) = T$ for all marked, filled pairs of tiles. Second, notice that if $f(T) \ne T$, then the lengths of the left tiles of $T$ and $f(T)$ differ by 1, but their associated products of terms are the same. Therefore, if $f(T) \ne T$, then $T$ and $f(T)$


correspond with terms in the right side of (3.4) which are negatives of one another. In short, $f$ is a sign-reversing involution. Because $f$ is a sign-reversing involution, only terms on the right side of (3.4) corresponding with marked, filled pairs $T$ of tiles with $f(T) = T$ remain after cancellation. From our definition of $f$, we see these are exactly the terms in $p_k$.

The proof of equation (3.5) is Problems 3.3 and 3.4. □
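Identity (3.4) also specializes to an identity of polynomials in finitely many variables, so it is easy to verify numerically. The following is my own sketch (helpers and test values are not from the text).

```python
# Numeric verification of (3.4) (my own sketch) in five variables, where
# the identity specializes to an identity of polynomials.

from fractions import Fraction
from itertools import combinations, combinations_with_replacement

xs = [Fraction(k, 9) for k in range(1, 6)]

def e(k):
    """Elementary symmetric polynomial e_k evaluated at xs."""
    total = Fraction(0)
    for sub in combinations(xs, k):
        prod = Fraction(1)
        for x in sub:
            prod *= x
        total += prod
    return total

def h(k):
    """Complete homogeneous symmetric polynomial h_k evaluated at xs."""
    total = Fraction(0)
    for sub in combinations_with_replacement(xs, k):
        prod = Fraction(1)
        for x in sub:
            prod *= x
        total += prod
    return total

def p(k):
    """Power sum p_k evaluated at xs."""
    return sum(x ** k for x in xs)

pairs = [(p(k), sum((-1) ** (j - 1) * j * e(j) * h(k - j)
                    for j in range(1, k + 1)))
         for k in range(1, 6)]
```

Each pair agrees exactly, confirming (3.4) for $1 \le k \le 5$ at these values.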

We also have a pair of identities connecting the power sum symmetric functions with the elementary symmetric functions, and the power sum symmetric functions with the complete homogeneous symmetric functions.

Proposition 3.3. For all k ≥ 1, we have

(3.6)  k h_k = \sum_{j=1}^{k} h_{k-j} p_j

and

(3.7)  k e_k = \sum_{j=1}^{k} (-1)^{j-1} e_{k-j} p_j.

Proof. To prove equation (3.6), first note that the terms in k h_k are indexed by fillings of a 1 × k tile with positive integers in weakly increasing order, in which exactly one entry has been marked. For example, the filling in Figure 3.3 corresponds to the fourth copy of x_1 x_3^4 x_4 x_6^2 x_7 in h_9. On the other hand, the terms on the right side of (3.6) are indexed by pairs of tiles, one 1 × (k − j) and one 1 × j, in which the first tile is filled with positive integers in weakly increasing order, and the second tile is filled with j copies of one positive integer. For example, the filling in Figure 3.4 corresponds to the product of x_1 x_3^2 x_4 x_6^2 x_7 and x_3^2 in the term with j = 2 on the right side of (3.6).

Figure 3.3. A filling corresponding to a term on the left side of equation (3.6): the tile 1 3 3 3* 3 4 6 6 7.

Figure 3.4. A filling corresponding to a term on the right side of equation (3.6): the tiles 1 3 3 4 6 6 7 and 3 3.

To transform a marked filling of the first type into a pair of fillings of the second type, suppose the marked number is r. Remove the marked r and all copies of r to its right, and use them to make a tile filled with r's. To transform a pair of fillings of the second type into a marked filling of the first type, first mark the first entry of the second tile. Then, if the entries in the second tile are all equal to r, insert these entries into the first tile immediately after the rightmost r in the first tile (or in the position where r would appear, if the first tile contains no r). Note that the fillings in Figures 3.3 and 3.4 correspond with each other under these maps. Since these maps are inverses of one another, we have a bijection between the terms on the left side of (3.6) and the right side, and the result follows. The proof of equation (3.7) is Problems 3.6 and 3.8.

□

3.2. Binomial Coefficients

The easiest quantities to get as evaluations of symmetric functions are the binomial coefficients. We could use equations (3.1) and (3.2), along with the fact that \binom{n}{k} is completely determined by \binom{n}{0} = \binom{n}{n} = 1 and Pascal's relation (which says \binom{n}{k} = \binom{n-1}{k} + \binom{n-1}{k-1} for 1 ≤ k ≤ n − 1), to prove our next result by induction. But it is just as easy, and arguably more illuminating, to give a somewhat more combinatorial proof.

Proposition 3.4. For all k ≥ 0 and all n ≥ 1, we have

(3.8)  e_k(1, \ldots, 1) = \binom{n}{k}

and

(3.9)  h_k(1, \ldots, 1) = \binom{n+k-1}{k},

where each evaluation has n ones.


Proof. To prove equation (3.9), first note that h_k(1, \ldots, 1) (with n ones) is the number of terms in h_k(X_n), which in turn is the number of ways of choosing k elements of [n] with repetition allowed. Some people call this a "stars and bars problem", while others call it a "flowershop problem". Whatever your terminology, we can represent each term in h_k(X_n) as a sequence of k stars and n − 1 dividers, or bars, in which the number of stars (which could be 0) in the jth string of consecutive stars from the left is the exponent on x_j. For example, when n = 4 and k = 6, the string ∗∣∣ ∗ ∗ ∗ ∣ ∗ ∗ corresponds to the term x_1 x_3^3 x_4^2. We can construct each string uniquely by choosing k positions out of the n + k − 1 positions for the stars, so the number of terms is \binom{n+k-1}{k}.
The proof of equation (3.8) is similar and is Problem 3.12.

□
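The counts in Proposition 3.4 are easy to check by brute force. The following sketch (ours, not from the book; the helper names are our own) counts the monomials of h_k and e_k in n variables directly and compares them with the binomial coefficients in (3.8) and (3.9).

```python
from itertools import combinations, combinations_with_replacement
from math import comb

def num_terms_h(n, k):
    # one term of h_k(x_1, ..., x_n) per multiset of k indices from [n]
    return sum(1 for _ in combinations_with_replacement(range(1, n + 1), k))

def num_terms_e(n, k):
    # one term of e_k(x_1, ..., x_n) per k-subset of [n]
    return sum(1 for _ in combinations(range(1, n + 1), k))

for n in range(1, 7):
    for k in range(0, 7):
        assert num_terms_e(n, k) == comb(n, k)           # (3.8)
        assert num_terms_h(n, k) == comb(n + k - 1, k)   # (3.9)
print("checked (3.8) and (3.9) for n, k <= 6")
```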

We can now combine equations (3.8) and (3.9) with the identities in the previous section to get a variety of binomial coefficient identities, essentially for free. As we will see, some of these identities will be equivalent to familiar facts.

Proposition 3.5. For all m, n ≥ 1, we have

(3.10)  \sum_{j=0}^{m} (-1)^j \binom{n}{j} \binom{m+n-j-1}{m-j} = 0.

Proof. In (3.3), replace n with m, evaluate the result with x_r = 1 for 1 ≤ r ≤ n and x_r = 0 for r > n, and use equations (3.8) and (3.9) to eliminate e_j and h_{m−j}. □

It is worth noting that we can rearrange (3.10) to find

\sum_{j=0}^{m} (-1)^j \frac{n}{j} \binom{m+n-j-1}{j-1,\; n-j,\; m-j} = 0,

where \binom{a+b+c}{a,b,c} = \frac{(a+b+c)!}{a!\,b!\,c!} is the usual multinomial coefficient.
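Since (3.10) may look unfamiliar, a numerical spot check is reassuring. This sketch (ours, not the book's) evaluates the left side of (3.10) for small m and n; Python's math.comb conveniently returns 0 when the lower index exceeds the upper one.

```python
from math import comb

def lhs_3_10(m, n):
    # left side of equation (3.10)
    return sum((-1) ** j * comb(n, j) * comb(m + n - j - 1, m - j)
               for j in range(m + 1))

for m in range(1, 9):
    for n in range(1, 9):
        assert lhs_3_10(m, n) == 0
print("(3.10) holds for 1 <= m, n <= 8")
```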

Proposition 3.6. For all n, k ≥ 1, we have

(3.11)  \sum_{j=1}^{k} (-1)^{k+j}\, j \binom{n}{k-j} \binom{n+j-1}{j} = n.


Proof. This is similar to the proof of equation (3.10), using (3.5) and the fact that p_k(1, \ldots, 1) = n (with n ones). □

It is worth noting that we can rearrange equation (3.11) to find

\sum_{j=1}^{k} (-1)^{j+k} \binom{n+j-1}{k-j,\; n-k+j,\; j-1} = 1.
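Equation (3.11) can be spot-checked the same way; this sketch (again ours, not the book's) evaluates the left side and confirms it equals n for small n and k.

```python
from math import comb

def lhs_3_11(n, k):
    # left side of equation (3.11)
    return sum((-1) ** (k + j) * j * comb(n, k - j) * comb(n + j - 1, j)
               for j in range(1, k + 1))

for n in range(1, 9):
    for k in range(1, 9):
        assert lhs_3_11(n, k) == n
```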

The identities we have seen so far may not look familiar, but our next one is much more common; some people call it the "hockey stick identity" because the entries of Pascal's triangle that it involves are arranged in the shape of a hockey stick.

Proposition 3.7. For all n, k ≥ 1, we have

(3.12)  \binom{n+k-1}{k-1} = \sum_{j=0}^{k-1} \binom{n+j-1}{j}.

Proof. In (3.6) set x_r = 1 for r ≤ n and x_r = 0 for r > n, and then use (3.9) and the fact that p_j(1, \ldots, 1) = n (with n ones) to find

k \binom{n+k-1}{k} = \sum_{j=1}^{k} n \binom{n+k-j-1}{k-j}.

Now divide both sides by n, use the fact that \frac{k}{n}\binom{n+k-1}{k} = \binom{n+k-1}{k-1}, and replace j with k − j in the sum to obtain (3.12). □

Our last identities are two forms of the binomial theorem, one of the first deep facts we learn about the binomial coefficients.

Proposition 3.8. For all n ≥ 0, we have

(3.13)  (1+t)^n = \sum_{k=0}^{n} \binom{n}{k} t^k.

Proof. Set x_1 = x_2 = \cdots = x_n = 1 and x_j = 0 for j > n in (2.6) and use (3.8) to simplify the result. □

Proposition 3.9. For all n ≥ 0, we have

(3.14)  (1-t)^{-n} = \sum_{k=0}^{\infty} \binom{n+k-1}{k} t^k.


Proof. This is similar to the proof of Proposition 3.8, using equations (2.11) and (3.9). □
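Both the hockey stick identity and the binomial theorem are easy to sanity-check numerically. The sketch below (ours) verifies (3.12) directly, and verifies (3.13) by expanding (1 + t)^n through repeated polynomial multiplication.

```python
from math import comb

# hockey stick identity (3.12)
for n in range(1, 10):
    for k in range(1, 10):
        assert comb(n + k - 1, k - 1) == sum(comb(n + j - 1, j) for j in range(k))

# binomial theorem (3.13): expand (1 + t)^n as a coefficient list
for n in range(0, 8):
    coeffs = [1]
    for _ in range(n):
        # multiply the current polynomial by (1 + t)
        coeffs = [a + b for a, b in zip(coeffs + [0], [0] + coeffs)]
    assert coeffs == [comb(n, k) for k in range(n + 1)]
```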

3.3. Stirling Numbers of the First and Second Kinds

Given a finite set, the binomial coefficients tell us how many ways there are to choose a subset of a given size. In contrast, the Stirling numbers are concerned with how many ways there are to divide that set into a given number of subsets, and possibly put some structure on these subsets. More specifically, we define the Stirling numbers of the first and second kinds in terms of permutations and set partitions, as follows.

Definition 3.10. For all n ≥ 1 and all k ≥ 1, the Stirling number of the first kind {n \brack k} is the number of permutations of [n] with exactly k cycles. (See Appendix C for more information on cycles in permutations.)

Some authors prefer to discuss the signed Stirling numbers of the first kind, which are given by (-1)^{n-k} {n \brack k}. To define the Stirling numbers of the second kind, we first need to recall the notion of a set partition.

Definition 3.11. For any set A, a set partition of A is a set {B_1, \ldots, B_k} of nonempty subsets of A such that A = \bigcup_{j=1}^{k} B_j, and if j ≠ l, then B_j ∩ B_l = ∅. The sets B_j are called the blocks of the partition.

We note that changing the order of the blocks in a set partition does not change the partition.

Definition 3.12. For all n ≥ 1 and all k ≥ 1, the Stirling number of the second kind {n \brace k} is the number of set partitions of [n] with exactly k blocks.

The Stirling numbers of both kinds have much in common with the binomial coefficients. For example, it is useful to arrange them in triangles as in Figures 3.5 and 3.6, as we do with the binomial

Figure 3.5. The top of the Stirling triangle of the first kind:

1
1    1
2    3    1
6    11   6    1
24   50   35   10   1

Figure 3.6. The top of the Stirling triangle of the second kind:

1
1    1
1    3    1
1    7    6    1
1    15   25   10   1

coefficients when we form Pascal's triangle. And like the binomial coefficients, in both Stirling triangles each entry is a certain combination of the entries above and to the left and above and to the right. In particular, the Stirling triangles are determined by the following initial conditions and recurrence relations.

Proposition 3.13. For all n ≥ 1, we have {n \brack 0} = 0 and {n \brack n} = 1, and for all n ≥ 2 and k with 1 ≤ k ≤ n − 1, we have

(3.15)  {n \brack k} = {n-1 \brack k-1} + (n-1) {n-1 \brack k}.

Proof. There is exactly one way for a permutation of [n] to have n cycles, which is for each element to be its own cycle, so {n \brack n} = 1. At the other extreme, every permutation of a positive number of objects must have at least one cycle, so {n \brack 0} = 0. To prove equation (3.15), first note that in a permutation of [n], the n is either alone in its cycle, or it is in a cycle of length 2 or more. We will count the permutations in each of these two sets. Each permutation of the first type with exactly k cycles consists of the cycle (n) and an arbitrary permutation of [n − 1] with exactly k − 1 cycles. Therefore there are {n-1 \brack k-1} of these permutations.


The permutations of the second type with exactly k cycles are more complicated. If we remove the n from one of these permutations, then we get a permutation of [n − 1] with exactly k cycles. But the n can be in different places, so we can get each of these shorter permutations from several different longer permutations. In fact, we can arrange the longer permutation so each of its cycles begins with its minimal element. When we do this, the n will not be the first entry in its cycle, so the n can follow any of the n − 1 entries. Therefore, we can construct each permutation of [n] with exactly k cycles uniquely by choosing a permutation of [n − 1] with k cycles, choosing one of the n − 1 entries of this permutation, and placing the n immediately after and in the same cycle as our chosen entry. As a result, there are (n-1){n-1 \brack k} permutations of [n] with exactly k cycles in which the n is in a cycle of length 2 or more. Combining our results, we find that equation (3.15) holds.

□
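Definition 3.10 and recurrence (3.15) can be checked against each other by brute force. The sketch below (ours, not the book's) counts cycles of permutations directly, compares with the recurrence, and reproduces the bottom row of Figure 3.5.

```python
from itertools import permutations

def cycle_count(perm):
    # number of cycles of a permutation of {0, ..., n-1} in one-line notation
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def stirling1(n, k):
    # recurrence (3.15) with the initial conditions of Proposition 3.13
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

for n in range(1, 7):
    for k in range(1, n + 1):
        direct = sum(1 for p in permutations(range(n)) if cycle_count(p) == k)
        assert direct == stirling1(n, k)

assert [stirling1(5, k) for k in range(1, 6)] == [24, 50, 35, 10, 1]
```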

Proposition 3.14. For all n ≥ 1, we have {n \brace 1} = {n \brace n} = 1, and for all n ≥ 2 and all k with 2 ≤ k ≤ n − 1, we have

(3.16)  {n \brace k} = {n-1 \brace k-1} + k {n-1 \brace k}.

Proof. This is Problem 3.15.

□

By combining Propositions 3.13 and 3.14 with equations (3.1) and (3.2), we can see how the Stirling numbers of the first and second kinds are evaluations of the elementary and complete homogeneous symmetric functions, respectively.

Proposition 3.15. For all n ≥ 1 and all k with 0 ≤ k ≤ n, we have

(3.17)  e_k(1, 2, \ldots, n) = {n+1 \brack n+1-k}

and

(3.18)  h_k(1, 2, \ldots, n) = {n+k \brace n}.

Proof. To prove (3.17), first note that e_0(1, 2, \ldots, n) = 1 = {n+1 \brack n-0+1}, so the result holds for k = 0. Similarly, e_n(1, 2, \ldots, n) = n! = {n+1 \brack n-n+1}, so the result holds for k = n.


Now suppose n ≥ 2 and the result holds for n − 1 and all k with 0 ≤ k ≤ n. Then for any k with 1 ≤ k ≤ n, we can use (3.1), induction on n, and (3.15) to find

e_k(1, 2, \ldots, n) = e_k(1, 2, \ldots, n-1) + n e_{k-1}(1, 2, \ldots, n-1)
                     = {n \brack n-k} + n {n \brack n-k+1}
                     = {n+1 \brack n+1-k},

which is what we wanted to prove. The proof of equation (3.18) is Problem 3.16.

□
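Proposition 3.15 is easy to confirm for small n by evaluating the symmetric polynomials directly. In this sketch (ours) the Stirling numbers are computed from the recurrences (3.15) and (3.16).

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def stirling1(n, k):
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

def stirling2(n, k):
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def e(k, xs):
    # elementary symmetric polynomial e_k evaluated at xs
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    # complete homogeneous symmetric polynomial h_k evaluated at xs
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

for n in range(1, 6):
    xs = range(1, n + 1)
    for k in range(0, n + 1):
        assert e(k, xs) == stirling1(n + 1, n + 1 - k)   # (3.17)
    for k in range(0, 5):
        assert h(k, xs) == stirling2(n + k, n)           # (3.18)
```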

As we did with the binomial coefficients, we can combine (3.17) and (3.18) with our symmetric function identities to obtain a variety of identities involving the Stirling numbers of each kind.

Proposition 3.16. For all m, n ≥ 1, we have

(3.19)  \sum_{j=0}^{m} (-1)^j {n+1 \brack n+1-j} {n+m-j \brace n} = 0.

Proof. In equation (3.3), replace n with m, evaluate the result with x_r = r for 1 ≤ r ≤ n and x_r = 0 for r > n, and use (3.17) and (3.18) to eliminate e_j and h_{m−j}. □

Our symmetric function identities also enable us to express the sum of the kth powers of the first n positive integers as an alternating sum of products of Stirling numbers.

Proposition 3.17. For all k, n ≥ 1, we have

(3.20)  1^k + 2^k + \cdots + n^k = \sum_{j=1}^{k} (-1)^{j-1}\, j\, {n+1 \brack n+1-j} {n+k-j \brace n}.

Proof. Evaluate equation (3.4) with x_r = r for 1 ≤ r ≤ n and x_r = 0 for r > n, and use (3.17) and (3.18) to eliminate e_j and h_{k−j}. Now the result follows from the fact that, by definition, p_k(1, 2, \ldots, n) = 1^k + 2^k + \cdots + n^k. □
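Equation (3.20) can be checked numerically too; note the factor of j, which comes from the j choices of marked entry in the tiles behind equation (3.4). The sketch below (ours) reuses the Stirling recurrences.

```python
def stirling1(n, k):
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

def stirling2(n, k):
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def rhs_3_20(k, n):
    # right side of equation (3.20)
    return sum((-1) ** (j - 1) * j
               * stirling1(n + 1, n + 1 - j) * stirling2(n + k - j, n)
               for j in range(1, k + 1))

for n in range(1, 6):
    for k in range(1, 6):
        assert rhs_3_20(k, n) == sum(i ** k for i in range(1, n + 1))
```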


As we will see in Problem 3.17, in some ways equation (3.20) does not help us find a formula for 1^k + \cdots + n^k nearly as much as it helps us use such a formula to find a formula for {n \brack k} when k is near n.

The binomial theorem tells us the generating function for the nth row of Pascal's triangle factors nicely. We can use our symmetric polynomial identities to show that the same thing happens for the Stirling numbers of the first kind.

Proposition 3.18. For all n ≥ 1, we have

(3.21)  \sum_{k=1}^{n} {n \brack k} t^k = \prod_{j=0}^{n-1} (t + j).

Proof. Set x_j = j for 1 ≤ j ≤ n − 1 and x_j = 0 for j ≥ n in equation (2.6), and use (3.17) to find

\sum_{j=0}^{n-1} {n \brack n-j} t^j = \prod_{j=1}^{n-1} (1 + jt).

Now reindex the sum on the left with k = n − j, replace t with 1/t, and multiply both sides by t^n to obtain (3.21). □

We can use a similar technique to obtain an identity involving the Stirling numbers of the second kind.

Proposition 3.19. For all n ≥ 1, we have

(3.22)  \sum_{j=0}^{\infty} {n+j \brace n} t^j = \prod_{j=1}^{n} \frac{1}{1 - jt}.

Proof. Set xj = j for 1 ≤ j ≤ n and xj = 0 for j > n in (2.11), and use (3.18) to simplify the result. □
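Proposition 3.18 says the row generating functions of Figure 3.5 factor completely. We can confirm this (a sketch of ours) by multiplying out the product ∏(t + j) and comparing coefficients.

```python
def stirling1(n, k):
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

for n in range(1, 8):
    rising = [1]
    for j in range(n):
        rising = poly_mul(rising, [j, 1])   # multiply by (t + j)
    # coefficient of t^k in prod_{j=0}^{n-1} (t + j) should be [n k]
    assert rising == [stirling1(n, k) for k in range(n + 1)]
```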

3.4. q-Binomial Coefficients

We have seen how several interesting families of numbers appear as evaluations of symmetric polynomials, and how we can use symmetric polynomial identities to recover, or maybe even discover, identities involving these numbers. In this section we describe how to do the same thing with a family of polynomial analogues of the binomial coefficients. The story starts with inversions, which give us a simple


way to measure the extent to which a sequence of integers is not in increasing order, and which we recall from Appendix C.

Definition 3.20. Suppose n ≥ 1. For any sequence π = a_1, \ldots, a_n of integers, an inversion is an ordered pair (j, k) with 1 ≤ j, k ≤ n, such that j < k and a_j > a_k. We write inv(π) to denote the number of inversions in π, and we sometimes call inv(π) the inversion number of π.

Example 3.21. Find all of the inversions in the word π = 7192686, and compute inv(π).

Solution. The inversions are the pairs of positions of the entries which are in decreasing order. For π = 7192686 these pairs of positions are (1, 2), (1, 4), (1, 5), (1, 7), (3, 4), (3, 5), (3, 6), (3, 7), and (6, 7), so inv(π) = 9. □

It is rewarding to study the distribution of inv on many different sets. For example, in Problems 3.18 and 3.19 we will glimpse some of the interesting properties of the generating function for S_n with respect to inv. However, the sets we are interested in are sets of sequences of 0's and 1's.

Definition 3.22. For any nonnegative integers k and n with k ≤ n, let B_{n,k} be the set of sequences of length n consisting of k 1's and n − k 0's. The q-binomial coefficient \binom{n}{k}_q is the ordinary generating function for B_{n,k} with respect to inv. That is,

\binom{n}{k}_q = \sum_{\pi \in B_{n,k}} q^{\mathrm{inv}(\pi)}.

We can construct each sequence in B_{n,k} uniquely by choosing k positions out of a total of n for the 1's, so |B_{n,k}| = \binom{n}{k}. As a result, if we set q = 1 in the q-binomial coefficient \binom{n}{k}_q, we find \binom{n}{k}_1 = \binom{n}{k}. So the q-binomial coefficients are polynomial analogues (commonly called q-analogues) of the usual binomial coefficients. We often arrange the binomial coefficients in Pascal's triangle, and we arrange the q-binomial coefficients in an analogous q-Pascal triangle in Figure 3.7.
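Definition 3.22 is directly computable for small n. This brute-force sketch (ours, not the book's) builds the coefficient list of a q-binomial coefficient from the inversion statistic, and checks the inversion count from Example 3.21 as well as the middle entry of the bottom row of Figure 3.7.

```python
from itertools import combinations
from math import comb

def inv(seq):
    # number of inversions: pairs j < k with seq[j] > seq[k]
    return sum(1 for j in range(len(seq)) for k in range(j + 1, len(seq))
               if seq[j] > seq[k])

def q_binomial(n, k):
    # coefficient list (in q) of the q-binomial coefficient, per Definition 3.22
    coeffs = [0] * (k * (n - k) + 1)
    for ones in combinations(range(n), k):
        seq = [1 if j in ones else 0 for j in range(n)]
        coeffs[inv(seq)] += 1
    return coeffs

assert inv([7, 1, 9, 2, 6, 8, 6]) == 9              # Example 3.21
assert q_binomial(4, 2) == [1, 1, 2, 1, 1]          # 1 + q + 2q^2 + q^3 + q^4
for n in range(7):
    for k in range(n + 1):
        assert sum(q_binomial(n, k)) == comb(n, k)  # the q = 1 specialization
```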


Figure 3.7. The top of the q-Pascal triangle of q-binomial coefficients:

1
1    1
1    1+q    1
1    1+q+q^2    1+q+q^2    1
1    1+q+q^2+q^3    1+q+2q^2+q^3+q^4    1+q+q^2+q^3    1

Pascal’s triangle reminds us of Pascal’s relation, so we might ask whether the q-binomial coeﬃcients satisfy some sort of q-Pascal relation. It is clear from the data in Figure 3.7 that the entries in the q-Pascal triangle are not just the sum of the two entries above them, as in Pascal’s triangle. But we can hope (nk)q is the sum of the entry ) , and something else. To see above and to its right, which is (n−1 k q ) , whether this might be the case, we look at the diﬀerences (nk)q −(n−1 k q which we have arranged in a triangle in Figure 3.8. Many of these polynomials have powers of q as factors. When we pull these factors out, and omit the 0’s from the left edge of the triangle, we obtain the triangle in Figure 3.9. It appears from these results that n n−1 n−1 ( ) −( ) = q n−k ( ) . k q k q k−1 q We can prove this by splitting Bn,k into those sequences which begin with 0 and those which begin with 1.

Figure 3.8. The differences \binom{n}{k}_q - \binom{n-1}{k}_q:

1
0    1
0    q    1
0    q^2    q+q^2    1
0    q^3    q^2+q^3+q^4    q+q^2+q^3    1

Figure 3.9. The differences \binom{n}{k}_q - \binom{n-1}{k}_q, rewritten:

1
q(1)    1
q^2(1)    q(1+q)    1
q^3(1)    q^2(1+q+q^2)    q(1+q+q^2)    1

Proposition 3.23. For all n ≥ 0, we have \binom{n}{0}_q = \binom{n}{n}_q = 1, and for all 1 ≤ k ≤ n − 1, we have

(3.23)  \binom{n}{k}_q = \binom{n-1}{k}_q + q^{n-k} \binom{n-1}{k-1}_q.

Proof. Since B_{n,0} = \{0^n\} and B_{n,n} = \{1^n\}, and 0^n and 1^n have no inversions, we have \binom{n}{0}_q = 1 and \binom{n}{n}_q = 1.

Now suppose n ≥ 1 and 1 ≤ k ≤ n − 1. Let B^0_{n,k} be the set of π ∈ B_{n,k} whose leftmost entry is 0, and let B^1_{n,k} be the set of π ∈ B_{n,k} whose leftmost entry is 1. Note that each π ∈ B_{n,k} is in exactly one of B^0_{n,k} and B^1_{n,k}.

If π ∈ B^0_{n,k}, then removing the leftmost entry of π leaves a sequence in B_{n−1,k}, and this map is a bijection from B^0_{n,k} to B_{n−1,k}. Furthermore, if π ∈ B^0_{n,k}, then the leftmost entry of π is not part of any inversions in π. As a result, our removal map does not change the inversion number. Therefore, the ordinary generating function for B^0_{n,k} with respect to inv is \binom{n-1}{k}_q.

If π ∈ B^1_{n,k}, then removing the leftmost entry of π leaves a sequence in B_{n−1,k−1}, and this map is a bijection from B^1_{n,k} to B_{n−1,k−1}. However, if π ∈ B^1_{n,k}, then the leftmost entry of π forms an inversion with every 0 in π. As a result, removing this entry reduces the inversion number by the number of 0's in π, which is n − k. Since this happens for every π ∈ B^1_{n,k}, the ordinary generating function for B^1_{n,k} with respect to inv is q^{n-k} \binom{n-1}{k-1}_q.

Since B_{n,k} is the disjoint union of B^0_{n,k} and B^1_{n,k}, we now see that equation (3.23) holds. □
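The q-Pascal relation (3.23) can be verified coefficientwise with the same brute-force computation (a sketch of ours; the helper names are our own).

```python
from itertools import combinations

def inv(seq):
    return sum(1 for j in range(len(seq)) for k in range(j + 1, len(seq))
               if seq[j] > seq[k])

def q_binomial(n, k):
    coeffs = [0] * (k * (n - k) + 1)
    for ones in combinations(range(n), k):
        seq = [1 if j in ones else 0 for j in range(n)]
        coeffs[inv(seq)] += 1
    return coeffs

def padd(a, b):
    # add two coefficient lists of possibly different lengths
    m = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(m)]

for n in range(2, 8):
    for k in range(1, n):
        shifted = [0] * (n - k) + q_binomial(n - 1, k - 1)  # times q^{n-k}
        assert q_binomial(n, k) == padd(q_binomial(n - 1, k), shifted)
```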


There is no reason to favor the entry above and to the right of \binom{n}{k}_q in the analysis that led us to Proposition 3.23. In fact, looking instead at \binom{n}{k}_q - \binom{n-1}{k-1}_q leads to a different q-Pascal identity, which we ask you to find and prove in Problem 3.21.

In addition to satisfying a natural recurrence relation, the binomial coefficients also have a simple formula in terms of factorials: \binom{n}{k} = \frac{n!}{k!(n-k)!}. It turns out the q-binomial coefficients have an analogous formula, involving a q-analogue of the factorial. To see what this q-analogue of the factorial should be, we look at the entries \binom{n}{1}_q of the q-Pascal triangle in Figure 3.7. We know \binom{n}{1} = n, so we can hope \binom{n}{1}_q is an appropriate q-analogue of n. It appears that \binom{n}{1}_q = 1 + q + \cdots + q^{n-1}, so we make the following definition.

Definition 3.24. For any positive integer n, we define the q-integer [n]_q by setting [n]_q = 1 + q + \cdots + q^{n-1}, and we define the q-factorial [n]_q! by setting [n]_q! = [n]_q [n-1]_q \cdots [2]_q [1]_q. By convention, [0]_q! = 1.

We note that [n]_q = \frac{1-q^n}{1-q}.

As we might hope, we can replace the factorials in our formula for \binom{n}{k} with q-factorials to obtain a formula for \binom{n}{k}_q.

Proposition 3.25. For all integers n and k with 0 ≤ k ≤ n, we have

(3.24)  \binom{n}{k}_q = \frac{[n]_q!}{[k]_q! [n-k]_q!}.

Proof. This is Problem 3.22.

□
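Although the proof of Proposition 3.25 is left as Problem 3.22, the formula itself is easy to test with exact polynomial arithmetic. Instead of dividing, the sketch below (ours) checks that \binom{n}{k}_q \cdot [k]_q! \cdot [n-k]_q! = [n]_q! as polynomials.

```python
from itertools import combinations

def inv(seq):
    return sum(1 for j in range(len(seq)) for k in range(j + 1, len(seq))
               if seq[j] > seq[k])

def q_binomial(n, k):
    coeffs = [0] * (k * (n - k) + 1)
    for ones in combinations(range(n), k):
        seq = [1 if j in ones else 0 for j in range(n)]
        coeffs[inv(seq)] += 1
    return coeffs

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def q_factorial(m):
    # [m]_q! as a coefficient list; [i]_q = 1 + q + ... + q^{i-1}
    p = [1]
    for i in range(1, m + 1):
        p = poly_mul(p, [1] * i)
    return p

for n in range(7):
    for k in range(n + 1):
        lhs = poly_mul(poly_mul(q_binomial(n, k), q_factorial(k)),
                       q_factorial(n - k))
        assert lhs == q_factorial(n)   # equation (3.24), cleared of denominators
```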

Now that we have the q-Pascal relation for the q-binomial coefficients, we can compare it with equations (3.1) and (3.2) to try to find a specialization of x_1, \ldots, x_n which will give us the q-binomial coefficients. Equation (3.1) turns out to be more complicated than we would like, but if we make the guess (in analogy with (3.9)) that h_k(X_n) = \binom{n+k-1}{k}_q, then (3.2) tells us we will have

\binom{n+k-1}{k}_q = \binom{n+k-2}{k}_q + x_n \binom{n+k-2}{k-1}_q.


On the other hand, (3.23) says

\binom{n+k-1}{k}_q = \binom{n+k-2}{k}_q + q^{n-1} \binom{n+k-2}{k-1}_q,

so we can guess x_n = q^{n-1}, and in general x_j = q^{j-1}. In particular, we have the following result.

Proposition 3.26. For all k ≥ 0 and all n ≥ 1, we have

(3.25)  h_k(1, q, q^2, \ldots, q^{n-1}) = \binom{n+k-1}{k}_q.

Proof. We argue by induction on n + k.

If n + k = 1, then n = 1 and k = 0. In this case we have h_0(1) = 1 and \binom{1+0-1}{0}_q = \binom{0}{0}_q = 1, so (3.25) holds in this case.

Now suppose k ≥ 0 and n ≥ 1 are given, and (3.25) holds for all N ≥ 1 and all K ≥ 0 with N + K < n + k. By (3.2), our induction hypothesis, and (3.23) we have

h_k(1, q, q^2, \ldots, q^{n-1}) = h_k(1, q, q^2, \ldots, q^{n-2}) + q^{n-1} h_{k-1}(1, q, q^2, \ldots, q^{n-1})
                               = \binom{n+k-2}{k}_q + q^{n-1} \binom{n+k-2}{k-1}_q
                               = \binom{n+k-1}{k}_q,

which is what we wanted to prove.

□
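Propositions 3.26 and 3.27 can both be confirmed for small n and k by expanding h_k and e_k at x_j = q^{j−1} as exact polynomials in q (a sketch of ours; the function names are our own).

```python
from itertools import combinations, combinations_with_replacement

def inv(seq):
    return sum(1 for j in range(len(seq)) for k in range(j + 1, len(seq))
               if seq[j] > seq[k])

def q_binomial(n, k):
    coeffs = [0] * (k * (n - k) + 1)
    for ones in combinations(range(n), k):
        seq = [1 if j in ones else 0 for j in range(n)]
        coeffs[inv(seq)] += 1
    return coeffs

def h_at_powers(k, n):
    # h_k(1, q, ..., q^{n-1}) as a coefficient list in q
    coeffs = [0] * (k * (n - 1) + 1)
    for c in combinations_with_replacement(range(n), k):
        coeffs[sum(c)] += 1
    return coeffs

def e_at_powers(k, n):
    # e_k(1, q, ..., q^{n-1}) as a coefficient list in q (requires k <= n)
    coeffs = [0] * (sum(range(n - k, n)) + 1)
    for c in combinations(range(n), k):
        coeffs[sum(c)] += 1
    return coeffs

for n in range(1, 6):
    for k in range(0, 5):
        assert h_at_powers(k, n) == q_binomial(n + k - 1, k)        # (3.25)
    for k in range(0, n + 1):
        shift = k * (k - 1) // 2                                    # binom(k, 2)
        assert e_at_powers(k, n) == [0] * shift + q_binomial(n, k)  # (3.26)
```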

Now that we know we want x_j = q^{j-1}, we should also evaluate e_k(1, q, q^2, \ldots, q^{n-1}). We could hope e_k(1, q, q^2, \ldots, q^{n-1}) = \binom{n}{k}_q, but this turns out to be too optimistic. For example, e_3(1, q, q^2, q^3) = q^3 + q^4 + q^5 + q^6 = q^3 \binom{4}{3}_q has an extra power of q which does not appear in \binom{4}{3}_q. In fact, e_k(1, q, q^2, \ldots, q^{n-1}) is always a sum of powers of q, and the smallest power of q which can appear is 1 \cdot q \cdot q^2 \cdots q^{k-1} = q^{\binom{k}{2}}. So the best we can hope for is the following result.

Proposition 3.27. For all k ≥ 0 and all n ≥ 1, we have

(3.26)  e_k(1, q, q^2, \ldots, q^{n-1}) = q^{\binom{k}{2}} \binom{n}{k}_q.

Proof. This is Problem 3.23. □

We can now combine our evaluations of e_k(1, q, q^2, \ldots, q^{n-1}) and h_k(1, q, q^2, \ldots, q^{n-1}) with the identities in Section 3.1 to find several q-binomial coefficient identities. For example, we have the following q-analogue of the hockey stick identity.

Proposition 3.28. For all n ≥ 1 and all k ≥ 0, we have

(3.27)  k \binom{n+k-1}{k}_q = \sum_{j=1}^{k} \binom{n+k-j-1}{k-j}_q \frac{[jn]_q}{[j]_q}.

Proof. Evaluate (3.6) with x_r = q^{r-1} for 1 ≤ r ≤ n and x_r = 0 for r > n. Use (3.25) to eliminate h_k and h_{k-j}, and use the fact that

p_j(1, q, q^2, \ldots, q^{n-1}) = 1 + q^j + q^{2j} + \cdots + q^{j(n-1)} = \frac{1-q^{jn}}{1-q} \cdot \frac{1-q}{1-q^j} = \frac{[jn]_q}{[j]_q}

to eliminate p_j.

□

Equation (3.27) is a fine q-analogue of the hockey stick identity, in the sense that if we set q = 1 and do a little algebra, we really can recover the original hockey stick identity. But there is another, simpler, q-analogue of the hockey stick identity, which we ask you to prove in Problem 3.24.

Many other binomial coefficient identities have q-analogues, including the binomial theorem. Happily, evaluating symmetric polynomials gives us two versions of the q-binomial theorem.

Proposition 3.29. For all n ≥ 0, we have

(3.28)  (1+t)(1+qt) \cdots (1+q^{n-1}t) = \sum_{j=0}^{n} q^{\binom{j}{2}} \binom{n}{j}_q t^j.

Proof. Set x_j = q^{j-1} for 1 ≤ j ≤ n and x_j = 0 for j > n in (2.6), and use (3.26) to simplify the result. □


Proposition 3.30. For all n ≥ 0, we have

(3.29)  \frac{1}{(1-t)(1-qt) \cdots (1-q^{n-1}t)} = \sum_{j=0}^{\infty} \binom{n+j-1}{j}_q t^j.

Proof. This is similar to the proof of (3.28), using (2.11) and (3.25). □

3.5. Problems

3.1. Prove equation (3.2). That is, prove that for all k ≥ 0 and all n ≥ 1, we have h_k(X_n) = h_k(X_{n-1}) + x_n h_{k-1}(X_n).

3.2. Find the images of each of the pairs of tiles in Figure 3.10 under the sign-reversing involution we used to prove (3.4).

Figure 3.10. The pairs of tiles for Problem 3.2: the pairs are (2 3* 5 | 1 1 3 4 6), (2 3* 5 | 2 2 3 4 5), and (2* 3 | 3 3 3 3 3 5).

3.3. Use a sign-reversing involution to prove (3.5), which says that for k ≥ 1, we have

p_k = \sum_{j=1}^{k} (-1)^{k+j}\, j\, e_{k-j} h_j.

3.4. Use equation (3.4) to prove (3.5) directly.

3.5. Use generating functions to prove (3.4), as in our first proof of (2.12).


3.6. Use a sign-reversing involution to prove (3.7), which says

k e_k = \sum_{j=1}^{k} (-1)^{j-1} e_{k-j} p_j.

3.7. Use generating functions to prove (3.6), as in our first proof of (2.12).

3.8. Use generating functions to prove (3.7), as in our first proof of (2.12).

3.9. Prove that for all k ≥ 1, we have

k e_k = \sum_{\substack{k_1+k_2+k_3=k \\ 0 \le k_1,k_2,k_3 \le k}} (-1)^{k_1} k_2\, h_{k_1} e_{k_2} e_{k_3}.

3.10. Prove that for all k ≥ 1, we have

k h_k = \sum_{\substack{k_1+k_2+k_3+k_4+k_5=k \\ 0 \le k_1,\ldots,k_5 \le k}} (-1)^{k_1+k_2} k_3\, e_{k_1} e_{k_2} h_{k_3} h_{k_4} h_{k_5}.

3.11. Prove that for all k ≥ 0 and all n ≥ 1, we have

e_k(x_1+t, \ldots, x_n+t) = \sum_{j=0}^{k} \binom{n-j}{k-j} e_j(X_n) t^{k-j}

and

h_k(x_1+t, \ldots, x_n+t) = \sum_{j=0}^{k} \binom{n-1+k}{k-j} h_j(X_n) t^{k-j}.

3.12. Prove (3.8), which says that for n ≥ 1 and k ≥ 0, we have e_k(1, \ldots, 1) = \binom{n}{k}, where the evaluation has n ones.

3.13. Use symmetric functions to show that for all n, k ≥ 1, we have

\sum_{j=0}^{k-1} (-1)^{k-j-1} \binom{n}{j} = \binom{n-1}{k-1}.

3.14. Use Problem 3.11 to prove the binomial theorem.

3.15. Prove Proposition 3.14.

3.16. Prove equation (3.18).


3.17. (a) Prove that for all n ≥ 1, we have

1 + 2 + \cdots + n = \binom{n+1}{2},

1^2 + 2^2 + \cdots + n^2 = \frac{2n+1}{3} \binom{n+1}{2},

and

1^3 + 2^3 + \cdots + n^3 = \binom{n+1}{2}^2.

(b) Use the results in part (a) to find formulas for {n \brack n-1}, {n \brack n-2}, and {n \brack n-3}.

3.18. Show that for all n ≥ 1, we have

\sum_{\pi \in S_n} q^{\mathrm{inv}(\pi)} = [n]_q!.

3.19. In any sequence a_1, \ldots, a_n of positive integers, a descent is a number j with a_j > a_{j+1}. The major index of the sequence, written maj(a_1, \ldots, a_n), is the sum of the descents. Show that for all n ≥ 1, we have

\sum_{\pi \in S_n} q^{\mathrm{maj}(\pi)} = \sum_{\pi \in S_n} q^{\mathrm{inv}(\pi)}.

3.20. Give a combinatorial interpretation of \binom{n}{k}_q when q = -1.

3.21. Conjecture a formula for \binom{n}{k}_q - \binom{n-1}{k-1}_q, and give a second q-Pascal relation by proving your conjecture.

3.22. Prove Proposition 3.25.

3.23. Prove Proposition 3.27.

3.24. Prove the following q-analogue of the hockey stick identity:

\binom{n+k-1}{k-1}_q = \sum_{j=0}^{k-1} q^j \binom{n+j-1}{j}_q.

3.6. Notes

Some authors prefer to use s(n, k) to denote the signed Stirling numbers of the first kind, c(n, k) to denote |s(n, k)|, and S(n, k) to denote the Stirling number of the second kind. For our purposes, Knuth's notation [Knu92], which is also used by Benjamin and Quinn [BQ03], is more appropriate, because it emphasizes the analogy between the Stirling numbers and the binomial coefficients.

There are many proofs of the q-binomial theorem, some more combinatorial than ours. For example, Bressoud gives a proof using generating functions for (integer) partitions in his book on alternating sign matrices [Bre99].

Chapter 4

Schur Polynomials and Schur Functions

So far we have four diﬀerent bases for the space Λk of homogeneous symmetric functions of degree k: the monomial symmetric functions, the elementary symmetric functions, the complete homogeneous symmetric functions, and the power sum symmetric functions. Each of these bases has its advantages and disadvantages, but it turns out there is a ﬁfth basis which is more elegant than any basis we have found so far. In this chapter we construct this basis in two diﬀerent ways—one combinatorial and the other using a ratio of determinants.

4.1. Schur Functions and Semistandard Tableaux

In 2.16 and 2.17 we described analogous combinatorial interpretations of the coefficients we obtain when we write the complete homogeneous symmetric functions as linear combinations of the monomial symmetric functions. As we show next, we can use these combinatorial interpretations to give new definitions of both the elementary and the complete homogeneous symmetric functions. To state these results, we adopt the convention that for any filling T of a Ferrers diagram with positive integers, we write x^T to denote the

Figure 4.1. A filling of the Ferrers diagram for (4, 2^2) with associated monomial x_1^2 x_2^2 x_3 x_4 x_6^2.

monomial given by x^T = \prod_{j \in T} x_j. For example, the monomial associated with the filling of the Ferrers diagram for (4, 2^2) in Figure 4.1 is x_1^2 x_2^2 x_3 x_4 x_6^2. For any partition λ and any set A of positive integers, we also write StrictRow(λ, A) to denote the set of fillings of the Ferrers diagram of λ with entries in A in which the entries in each row are strictly increasing from left to right.

Proposition 4.1. For any partition λ and any integer n ≥ 1, we have

(4.1)  e_λ(X_n) = \sum_{T \in \mathrm{StrictRow}(λ, [n])} x^T

and

(4.2)  e_λ = \sum_{T \in \mathrm{StrictRow}(λ, P)} x^T.

Proof. By equation (2.2) we have

e_λ(X_n) = \prod_{j=1}^{l(λ)} e_{λ_j}(X_n).

The terms in the expansion of the product on the right are the products \prod_{j=1}^{l(λ)} t_j, where t_j is a term in e_{λ_j}(X_n). For each such product we can construct a filling of the Ferrers diagram of λ of the given type by placing, for 1 ≤ j ≤ l(λ), the subscripts of the variables which appear in t_j in increasing order in the jth row of the diagram. We can also invert this construction: if we have a filling of the given type, then for each j with 1 ≤ j ≤ l(λ) we can reconstruct t_j as the product \prod_k x_k, which is over all k which appear in the jth row of the filling. Therefore, we have a bijection between terms in e_λ(X_n) and our fillings of the Ferrers diagram of λ. Moreover, by construction the image of a filling T under this map is x^T, and equation (4.1) follows.

The proof of (4.2) is similar to the proof of (4.1).

□

Figure 4.2. The fillings of the Ferrers diagram of (3^2, 1) for the term x_1^2 x_2 x_4^3 x_6 in e_{331}: reading rows from bottom to top, one filling has rows 1 2 4, 1 4 6, 4, and the other has rows 1 4 6, 1 2 4, 4.

Example 4.2. Find all fillings of the Ferrers diagram of (3^2, 1) associated with the term x_1^2 x_2 x_4^3 x_6 in e_{331}.

Solution. Since e_{331} = e_3 e_3 e_1, we need to write x_1^2 x_2 x_4^3 x_6 as a product of three factors, of degrees 3, 3, and 1, respectively. No subscripts are repeated within a factor, so each factor must include x_4. Since the last factor has degree 1, it cannot also include x_1, so each of the first two factors must include x_1. Now there are two ways to complete the factorization: x_1^2 x_2 x_4^3 x_6 = (x_1 x_2 x_4)(x_1 x_4 x_6) x_4 and x_1^2 x_2 x_4^3 x_6 = (x_1 x_4 x_6)(x_1 x_2 x_4) x_4. The associated fillings are as in Figure 4.2. □

To state the corresponding result for the complete homogeneous symmetric functions, we need some additional notation. For any partition λ and any set A of positive integers, we write WeakRow(λ, A) to denote the set of fillings of the Ferrers diagram of λ with entries in A in which the entries in each row are weakly increasing from left to right.

Proposition 4.3. For any partition λ and any integer n ≥ 1, we have

(4.3)  h_λ(X_n) = \sum_{T \in \mathrm{WeakRow}(λ, [n])} x^T

and

(4.4)  h_λ = \sum_{T \in \mathrm{WeakRow}(λ, P)} x^T.

Proof. This is similar to the proof of Proposition 4.1.

□

Our characterizations of eλ and hλ in equations (4.2) and (4.4) both involve requirements on the rows of ﬁllings of the Ferrers diagram of λ, but include no restrictions on the columns of the ﬁlling. If we


move one of these requirements from the rows to the columns, then we can deﬁne a new type of ﬁlling of a Ferrers diagram which interpolates between these two old types of ﬁllings. Deﬁnition 4.4. For any partition λ, a semistandard tableau of shape λ is a ﬁlling of the Ferrers diagram of λ in which the entries in the columns are strictly increasing from bottom to top and the entries in the rows are weakly increasing from left to right. If T is a semistandard tableau, then we write sh(T ) to denote the shape of T . We write SST(λ; n) to denote the set of all semistandard tableaux of shape λ with entries in [n], and we write SST(λ) to denote the set of all semistandard tableaux of shape λ with entries in P. When n and ∣λ∣ are small, we can list all of the semistandard tableaux in SST(λ; n) by hand. Example 4.5. Find all semistandard tableaux of shape (22 , 1) with entries in [3]. Solution. Each column contains a strictly increasing sequence of elements of {1, 2, 3}, so the ﬁrst column must contain 1, 2, and 3 from bottom to top. We can place any strictly increasing sequence of elements of {1, 2, 3} in the second column without violating the condition that the rows be nondecreasing, so we have the semistandard tableaux in Figure 4.3. □ 3

 3        3        3
 2 2      2 3      2 3
 1 1      1 1      1 2

Figure 4.3. The semistandard tableaux of shape $(2^2, 1)$ with entries in [3]
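The enumeration in Example 4.5 can also be automated. The sketch below is our own code (the name `ssyt` is ours): it builds each row as a weakly increasing sequence and keeps exactly the choices whose columns strictly increase from bottom to top; it confirms the three tableaux of Figure 4.3.

```python
from itertools import combinations_with_replacement, product

def ssyt(shape, n):
    """SST(shape; n): semistandard tableaux with entries in {1,...,n}.
    A tableau is a tuple of rows, bottom row first, so the column condition
    says each entry strictly exceeds the entry directly below it."""
    rows = [list(combinations_with_replacement(range(1, n + 1), r)) for r in shape]
    return [t for t in product(*rows)
            if all(t[i + 1][j] > t[i][j]
                   for i in range(len(shape) - 1)
                   for j in range(len(t[i + 1])))]
```

For shape $(2^2, 1)$ this also reproduces the counts that appear later in Example 4.10: 20 tableaux with entries in [4] and 75 with entries in [5].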

We will sometimes want to discuss ﬁllings of a Ferrers diagram which have some of the properties of a semistandard tableau, but possibly not all of them. To simplify matters when this happens, we say a ﬁlling of a Ferrers diagram is column-strict whenever the entries in each of its columns are increasing from bottom to top. Similarly,


we say a filling of a Ferrers diagram is row-nondecreasing whenever the entries in each of its rows are weakly increasing from left to right.

Inspired by Propositions 4.1 and 4.3, we can now use our semistandard tableaux to construct some new polynomials and formal power series, which we hope will be symmetric polynomials and symmetric functions, respectively.

Definition 4.6. For any partition λ and any integer n ≥ 1, we write $s_\lambda(X_n)$ to denote the polynomial
$$(4.5)\qquad s_\lambda(X_n) = \sum_{T \in \mathrm{SST}(\lambda;n)} x^T.$$
Similarly, we write $s_\lambda$ to denote the formal power series
$$s_\lambda = \sum_{T \in \mathrm{SST}(\lambda)} x^T.$$
We call $s_\lambda(X_n)$ the Schur polynomial in $x_1, \ldots, x_n$ indexed by λ and we call $s_\lambda$ the Schur function indexed by λ.

Propositions 4.1 and 4.3 give us two families of examples of Schur polynomials and Schur functions.

Example 4.7. Compute $s_{1^k}(X_n)$ and $s_{1^k}$.

Solution. The polynomial $s_{1^k}(X_n)$ is a sum over all fillings of a single column with distinct integers in [n], in which the entries are strictly increasing from bottom to top. If n < k, then there are no such fillings, so our sum has no terms, and $s_{1^k}(X_n) = 0$ in this case. If n ≥ k, then there is exactly one such filling for each subset of [n] of size k, and the monomial corresponding to a subset J is $\prod_{j \in J} x_j$. Therefore, $s_{1^k}(X_n) = e_k(X_n)$ in this case. Similarly, $s_{1^k} = e_k$. □

Example 4.8. Compute $s_k(X_n)$ and $s_k$.

Solution. The polynomial $s_k(X_n)$ is a sum over all fillings of a single row with integers in [n], in which the entries are weakly increasing from left to right. There is exactly one such filling for each submultiset of [n] of size k, and the monomial corresponding to such a submultiset J is $\prod_{j \in J} x_j$. Therefore, $s_k(X_n) = h_k(X_n)$. Similarly, $s_k = h_k$. □
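Definition 4.6 and Examples 4.7 and 4.8 can be checked computationally. In the sketch below, which is our own code with names of our choosing, a polynomial is a dictionary from exponent vectors to coefficients, and $s_\lambda(X_n)$ is computed directly from Definition 4.6; the tests confirm $s_{1^k}(X_n) = e_k(X_n)$ and $s_k(X_n) = h_k(X_n)$ in small cases, including the empty-sum case n < k.

```python
from itertools import combinations, combinations_with_replacement, product

def schur_poly(shape, n):
    """s_shape(X_n) as a dict {exponent vector: coefficient}, summing x^T
    over the semistandard tableaux of Definition 4.6 (rows bottom to top)."""
    rows = [list(combinations_with_replacement(range(1, n + 1), r)) for r in shape]
    poly = {}
    for t in product(*rows):
        # keep the fillings whose columns strictly increase bottom to top
        if all(t[i + 1][j] > t[i][j]
               for i in range(len(shape) - 1) for j in range(len(t[i + 1]))):
            entries = [v for row in t for v in row]
            expo = tuple(entries.count(j) for j in range(1, n + 1))
            poly[expo] = poly.get(expo, 0) + 1
    return poly

def e_k(k, n):
    """e_k(X_n): one squarefree monomial per k-subset of [n]."""
    return {tuple(int(j in s) for j in range(1, n + 1)): 1
            for s in combinations(range(1, n + 1), k)}

def h_k(k, n):
    """h_k(X_n): one monomial per size-k multisubset of [n]."""
    return {tuple(s.count(j) for j in range(1, n + 1)): 1
            for s in combinations_with_replacement(range(1, n + 1), k)}
```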


In Examples 4.7 and 4.8 we saw that the Schur polynomials (resp., functions) interpolate between the elementary symmetric polynomials (resp., functions) and the complete homogeneous symmetric polynomials (resp., functions). In our next few examples we look at some of the Schur polynomials and Schur functions between these extremes.

Example 4.9. Compute $s_{21}(x_1)$, $s_{21}(X_2)$, and $s_{21}(X_3)$. Write those which are symmetric polynomials as linear combinations of the monomial symmetric polynomials.

Solution. The polynomial $s_{21}(x_1)$ is a sum over all semistandard tableaux of shape (2, 1) with entries in [1]. The leftmost column of any such tableau must contain two distinct integers, since its entries are strictly increasing from bottom to top. But we have only one integer available, so there are no semistandard tableaux of the sort we want. This means our sum has no terms, so by convention $s_{21}(x_1) = 0$. To compute $s_{21}(X_2)$, we write down (in Figure 4.4) the two semistandard tableaux of shape (2, 1) with entries in [2]. When we write down the corresponding terms we find $s_{21}(X_2) = x_1^2 x_2 + x_1 x_2^2 = m_{21}(X_2)$.

 2        2
 1 1      1 2

Figure 4.4. The two semistandard tableaux of shape (2, 1) with entries in [2]

Finally, we use the semistandard tableaux of shape (2, 1) with entries in [3], which are shown in Figure 4.5, to find $s_{21}(X_3) = m_{21}(X_3) + 2m_{111}(X_3)$. □

Example 4.10. Compute $s_{221}(X_2)$, $s_{221}(X_3)$, and $s_{221}(X_4)$. Write those which are symmetric polynomials as linear combinations of the monomial symmetric polynomials.

Solution. The polynomial $s_{221}(X_2)$ is a sum over all semistandard tableaux of shape $(2^2, 1)$ with entries in [2]. The leftmost column

 2        2        3        3        2        3        3        3
 1 1      1 2      1 1      1 2      1 3      1 3      2 2      2 3

Figure 4.5. The eight semistandard tableaux of shape (2, 1) with entries in [3]

of any such tableau must contain three distinct integers, since its entries are strictly increasing from bottom to top. But we have only two integers available, so there are no semistandard tableaux of the sort we want. This means our sum has no terms, so by convention $s_{221}(X_2) = 0$. To compute $s_{221}(X_3)$, we notice it is a sum over the set of semistandard tableaux in Example 4.5. Therefore, we have $s_{221}(X_3) = x_1^2 x_2^2 x_3 + x_1^2 x_2 x_3^2 + x_1 x_2^2 x_3^2$, which is $m_{221}(X_3)$. To compute $s_{221}(X_4)$, we write down (in Figure 4.6) all 20 semistandard tableaux of shape $(2^2, 1)$ with entries in [4]. Collecting terms, we find $s_{221}(X_4) = m_{221}(X_4) + 2m_{2111}(X_4)$. We can also write down all 75 semistandard tableaux of shape $(2^2, 1)$ with entries in [5] and collect terms to find $s_{221}(X_5) = m_{221}(X_5) + 2m_{2111}(X_5) + 5m_{11111}(X_5)$.

□
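The coefficients found in Example 4.10 are counts of semistandard tableaux with a fixed content, so they can be verified mechanically. The sketch below is our own code, not the book's; it counts semistandard tableaux of a given shape by content, and the counts 1, 2, and 5 are exactly the coefficients of $m_{221}$, $m_{2111}$, and $m_{11111}$ in the example.

```python
from itertools import combinations_with_replacement, product

def ssyt_count(shape, content):
    """Number of semistandard tableaux of `shape` (rows bottom to top)
    with exactly content[j-1] entries equal to j, entries in 1..len(content)."""
    n = len(content)
    rows = [list(combinations_with_replacement(range(1, n + 1), r)) for r in shape]
    total = 0
    for t in product(*rows):
        # columns must strictly increase from bottom to top
        if not all(t[i + 1][j] > t[i][j]
                   for i in range(len(shape) - 1) for j in range(len(t[i + 1]))):
            continue
        entries = [v for row in t for v in row]
        if all(entries.count(j) == content[j - 1] for j in range(1, n + 1)):
            total += 1
    return total
```

By the symmetry proved in Proposition 4.15, the same counts give the coefficients of every monomial whose exponent vector is a rearrangement of the given content.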

All of the Schur polynomials we’ve seen so far are symmetric polynomials, and all of the Schur functions we’ve seen are symmetric functions. To prove the Schur polynomials are symmetric polynomials and the Schur functions are symmetric functions, it will be helpful to keep track of the number of entries of each type in a given semistandard tableau.

Figure 4.6. The 20 semistandard tableaux of shape $(2^2, 1)$ with entries in [4]

Definition 4.11. For any semistandard tableau T, the content of T is the sequence $\{\mu_j\}_{j=1}^\infty$, where $\mu_j$ is the number of j's in T for each j. When $\mu_j = 0$ for j > n, then we sometimes abbreviate $\{\mu_j\}_{j=1}^\infty$ as $\{\mu_j\}_{j=1}^n$.

Our proof that the Schur polynomials and Schur functions are symmetric uses a collection of combinatorial maps showing these polynomials and formal power series are invariant under certain permutations. We combine these with an algebraic argument that it is enough to check the result for only these permutations. We start with the maps, which are called the Bender–Knuth involutions. For each j ≥ 1, the Bender–Knuth involution $\beta_j$ is a function from the set of semistandard tableaux with shape λ and content $\{\mu_l\}_{l=1}^\infty$ to the set of semistandard tableaux with shape λ and content $\mu_1, \ldots, \mu_{j-1}, \mu_{j+1}, \mu_j, \mu_{j+2}, \ldots$. To describe $\beta_j$, suppose T is a


semistandard tableau with shape λ and content $\{\mu_l\}_{l=1}^\infty$, and consider the columns of T. We only care about j's and (j+1)'s, so for us there are only four types of columns: those containing both a j and a j+1, those containing a j but not a j+1, those containing a j+1 but not a j, and those containing neither a j nor a j+1. We call a j (resp., j+1) in T paired whenever there is a j+1 (resp., j) in its column, and free otherwise.

Now consider a row R of T. Like every row of T, the row R has a number (possibly 0) of j's, followed immediately by a number (also possibly 0) of (j+1)'s. Immediately above the j's in R are some (possibly no) (j+1)'s, and then some (also possibly no) larger entries. Therefore, in R we have a number of paired j's, followed by a number of free j's. Similarly, immediately below the (j+1)'s in R, we have a number (possibly 0) of entries less than j, followed by a number (possibly 0) of j's. Therefore, in R we have a number of free (j+1)'s, followed by a number of paired (j+1)'s.

To construct $\beta_j(T)$, we start with the bottom row of T and move to the top, doing the same thing to every row. Namely, if a row has a free j's followed by b free (j+1)'s, then we replace that sequence of free j's and free (j+1)'s with a sequence of b free j's followed by a free (j+1)'s.

Figure 4.7. The tableau T for Example 4.12

Example 4.12. Find the image of the tableau T in Figure 4.7 under the Bender–Knuth involution β3 . Solution. In Figure 4.8 we have written the free 3’s and 4’s in T in bold, and larger than the other entries. In the bottom row we have

Figure 4.8. The tableau T for Example 4.12 with the free 3's and 4's in bold

no free 3's and one free 4, so we replace the free 4 with a (free) 3. In the second row from the bottom we have one free 3 and two free 4's, so we replace them with two (free) 3's and one (free) 4. And in the third row from the bottom we have four free 3's and two free 4's, which we replace with two (free) 3's and four (free) 4's. When we're done, we have the tableau in Figure 4.9. □

Figure 4.9. The tableau $\beta_3(T)$ for Example 4.12

The Bender–Knuth involutions have several properties which will be important to us, so we develop these next.

Lemma 4.13. If T is a semistandard tableau with shape λ and content $\{\mu_l\}_{l=1}^\infty$ and j ≥ 1, then $\beta_j(T)$ is a semistandard tableau with shape λ and content $\mu_1, \ldots, \mu_{j-1}, \mu_{j+1}, \mu_j, \mu_{j+2}, \ldots$.

Proof. By construction, T and $\beta_j(T)$ have the same shape, and the entries in each row of $\beta_j(T)$ are weakly increasing from left to right. To see $\beta_j(T)$ is column-strict, first consider a box which contains a j in T and a j+1 in $\beta_j(T)$. Since the j changed to a j+1, it must have been free in T, so the entry immediately above it in both T and $\beta_j(T)$


is greater than j + 1. Therefore the entries in that column of $\beta_j(T)$ are strictly increasing from bottom to top. Similarly, only free (j+1)'s in T can change to j's in $\beta_j(T)$, so the entries in a column in which a j+1 changed to a j are also strictly increasing from bottom to top. All of the other columns of $\beta_j(T)$ have the same entries as they do in T, so $\beta_j(T)$ is column-strict. Therefore, $\beta_j(T)$ is a semistandard tableau of shape λ.

To find the content of $\beta_j(T)$, we first notice that if a box in T does not contain a j or a j+1, then it contains the same number in T as it does in $\beta_j(T)$. Therefore if l ≠ j and l ≠ j+1, then $\beta_j(T)$ has $\mu_l$ l's. By construction, the number of j's in $\beta_j(T)$ is the number of paired j's in T plus the number of free (j+1)'s in T. But the number of paired j's in T is equal to the number of paired (j+1)'s in T, since every column of T contains both a paired j and a paired j+1, or neither a paired j nor a paired j+1. Therefore the number of j's in $\beta_j(T)$ is equal to the number of paired (j+1)'s in T plus the number of free (j+1)'s in T, which is $\mu_{j+1}$. Similarly (or using the fact that T and $\beta_j(T)$ have the same total number of entries), the number of (j+1)'s in $\beta_j(T)$ is $\mu_j$. Therefore, the content of $\beta_j(T)$ is $\mu_1, \ldots, \mu_{j-1}, \mu_{j+1}, \mu_j, \mu_{j+2}, \ldots$. □

Lemma 4.14. If T is a semistandard tableau, then for any j ≥ 1 we have $\beta_j(\beta_j(T)) = T$. In particular, $\beta_j$ is a bijection between the set of semistandard tableaux with shape λ and content $\{\mu_l\}_{l=1}^\infty$ and the set of semistandard tableaux with shape λ and content $\mu_1, \ldots, \mu_{j-1}, \mu_{j+1}, \mu_j, \mu_{j+2}, \ldots$.

Proof. First note that applying the action of $\beta_j$ to a single row does not change which j's and (j+1)'s are free in any row. (As an aside, this means we could apply the action of $\beta_j$ to the rows in any order, and we would get the same tableau at the end.) Second, because $\beta_j$ only changes j's (resp., (j+1)'s) with no j+1 (resp., j) in their columns, if a row in T has a free j's and b free (j+1)'s, then in $\beta_j(T)$ it has b free j's and a free (j+1)'s. Therefore, each row returns to its original state if we apply $\beta_j$ again, so $\beta_j(\beta_j(T)) = T$. In particular, $\beta_j = \beta_j^{-1}$, so $\beta_j$ is a bijection. □
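A direct implementation of $\beta_j$ makes Lemmas 4.13 and 4.14 easy to test. The sketch below is our own code (tableaux are tuples of rows, bottom row first, and the function names are ours). It marks an entry free exactly as in the text and rewrites each row's block of free j's and (j+1)'s; the tests check, for every semistandard tableau of shape (3, 2) with entries in [4], that $\beta_j(T)$ is semistandard, that its content swaps $\mu_j$ and $\mu_{j+1}$, and that $\beta_j(\beta_j(T)) = T$.

```python
from itertools import combinations_with_replacement, product

def ssyt(shape, n):
    """SST(shape; n), each tableau a tuple of rows, bottom row first."""
    rows = [list(combinations_with_replacement(range(1, n + 1), r)) for r in shape]
    return [t for t in product(*rows)
            if all(t[i + 1][j] > t[i][j]
                   for i in range(len(shape) - 1) for j in range(len(t[i + 1])))]

def content(tab, n):
    """The content of tab as a tuple (mu_1, ..., mu_n)."""
    entries = [v for row in tab for v in row]
    return tuple(entries.count(j) for j in range(1, n + 1))

def bender_knuth(tab, j):
    """beta_j, applied row by row from the bottom: each row's block of
    a free j's followed by b free (j+1)'s becomes b j's then a (j+1)'s."""
    rows = [list(r) for r in tab]

    def entry(r, c):
        return rows[r][c] if 0 <= r < len(rows) and c < len(rows[r]) else None

    for r, row in enumerate(rows):
        # a j is free if no j+1 sits directly above it;
        # a j+1 is free if no j sits directly below it
        free = [c for c, v in enumerate(row)
                if (v == j and entry(r + 1, c) != j + 1)
                or (v == j + 1 and entry(r - 1, c) != j)]
        b = sum(1 for c in free if row[c] == j + 1)
        for i, c in enumerate(free):
            row[c] = j if i < b else j + 1
    return tuple(tuple(r) for r in rows)
```

As the proof of Lemma 4.14 observes, processing the rows in place from the bottom is safe because rewriting one row never changes which entries of another row are free.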


The Bender–Knuth involutions have additional interesting properties, which we explore in Problems 4.6, 4.7, 4.8, and 4.9. For now, though, we explain how to use the Bender–Knuth involutions to prove the Schur polynomials and Schur functions are symmetric.

Proposition 4.15. Suppose λ ⊢ k and n ≥ 1. Then we have $s_\lambda(X_n)$ ∈ Λk(Xn) and $s_\lambda$ ∈ Λk.

Proof. We first show $s_\lambda(X_n)$ ∈ Λk(Xn). If n < l(λ), then there are no semistandard tableaux of shape λ with entries in [n], so the sum $s_\lambda(X_n)$ has no terms and $s_\lambda(X_n) = 0$ ∈ Λk(Xn). Now suppose n ≥ l(λ). By construction, every term in $s_\lambda(X_n)$ is homogeneous of degree k = |λ|, so we just need to show $\pi(s_\lambda(X_n)) = s_\lambda(X_n)$ for all π ∈ $S_n$.

To start, suppose π = (j, j+1) for some j with 1 ≤ j ≤ n−1, and choose a monomial $x_1^{\mu_1} \cdots x_n^{\mu_n}$ with total degree $\mu_1 + \cdots + \mu_n = |\lambda|$. (Notice these are the only terms that could appear in $s_\lambda(X_n)$ or $\pi(s_\lambda(X_n))$.) By definition the coefficient of $x_1^{\mu_1} \cdots x_n^{\mu_n}$ in $s_\lambda(X_n)$ is the number of semistandard tableaux with shape λ and content µ = $\mu_1, \ldots, \mu_n$. On the other hand, the coefficient of $x_1^{\mu_1} \cdots x_n^{\mu_n}$ in $\pi(s_\lambda(X_n))$ is the coefficient of $x_1^{\mu_1} \cdots x_j^{\mu_{j+1}} x_{j+1}^{\mu_j} \cdots x_n^{\mu_n}$ in $s_\lambda(X_n)$, which is the number of semistandard tableaux with shape λ and content $\mu_1, \ldots, \mu_{j-1}, \mu_{j+1}, \mu_j, \mu_{j+2}, \ldots, \mu_n$. By Lemma 4.14 these coefficients are equal. Therefore, every term $x_1^{\mu_1} \cdots x_n^{\mu_n}$ has the same coefficient in both $s_\lambda(X_n)$ and $\pi(s_\lambda(X_n))$, so we must have $s_\lambda(X_n) = \pi(s_\lambda(X_n))$.

Now suppose π is an arbitrary permutation in $S_n$. By Proposition C.5 there are positive integers $j_1, \ldots, j_l$ such that π = $(j_1, j_1+1)(j_2, j_2+1) \cdots (j_l, j_l+1)$. When we apply the result of the previous paragraph l times, we find
$$\pi(s_\lambda(X_n)) = (j_1, j_1+1)(j_2, j_2+1) \cdots (j_l, j_l+1)(s_\lambda(X_n)) = s_\lambda(X_n),$$
so $s_\lambda(X_n)$ is invariant under π, which is what we wanted to prove. A similar argument shows $s_\lambda$ ∈ Λk.

□
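The conclusion of Proposition 4.15 can be checked numerically for a particular λ and n. The sketch below, our own code with hypothetical names, computes $s_\lambda(X_n)$ as a dictionary of exponent vectors and verifies that it is unchanged by every permutation of the variables, not just the adjacent transpositions used in the proof.

```python
from itertools import combinations_with_replacement, permutations, product

def schur_poly(shape, n):
    """s_shape(X_n) as a dict {exponent vector: coefficient} (Definition 4.6)."""
    rows = [list(combinations_with_replacement(range(1, n + 1), r)) for r in shape]
    poly = {}
    for t in product(*rows):
        if all(t[i + 1][j] > t[i][j]
               for i in range(len(shape) - 1) for j in range(len(t[i + 1]))):
            entries = [v for row in t for v in row]
            expo = tuple(entries.count(j) for j in range(1, n + 1))
            poly[expo] = poly.get(expo, 0) + 1
    return poly

def apply_perm(poly, perm):
    """The action of a permutation on variables: x_i goes to x_{perm[i-1]}."""
    out = {}
    for expo, c in poly.items():
        e = [0] * len(expo)
        for i, v in enumerate(expo):
            e[perm[i] - 1] = v
        out[tuple(e)] = out.get(tuple(e), 0) + c
    return out

def is_symmetric(poly, n):
    """True if poly is fixed by every permutation in S_n."""
    return all(apply_perm(poly, p) == poly for p in permutations(range(1, n + 1)))
```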


Now that we know $s_\lambda(X_n)$ is always a symmetric polynomial and $s_\lambda$ is always a symmetric function, our solution to Example 4.10 raises an interesting question: Are the algebraic relationships between the Schur polynomials and the monomial symmetric polynomials eventually independent of the number of variables? After all, increasing the number of variables in that example added terms to our answer. To make this question more precise, we make the following definition.

Definition 4.16. For any partitions λ and µ, we write $K_{\lambda,\mu}$ to denote the number of semistandard tableaux of shape λ and content µ, and we call the numbers $K_{\lambda,\mu}$ the Kostka numbers. Note that for any partition λ we have
$$(4.6)\qquad s_\lambda = \sum_{\mu \vdash |\lambda|} K_{\lambda,\mu}\, m_\mu.$$
As a result, if we followed our previous convention, then we would write $M_{\lambda,\mu}(s, m)$ instead of $K_{\lambda,\mu}$. However, $K_{\lambda,\mu}$ is the traditional notation for these numbers, so we will stick with it.

Our next result answers our question about the algebraic relationships between the Schur polynomials and the monomial symmetric polynomials.

Proposition 4.17. For any partitions λ and µ with |λ| = |µ| and any n ≥ 1, let $K_{\lambda,\mu,n}$ be the number of semistandard tableaux with shape λ and content µ, with entries in [n]. If n ≥ |λ|, then $K_{\lambda,\mu,n} = K_{\lambda,\mu}$. In particular, if n ≥ |λ|, then $K_{\lambda,\mu,n}$ is independent of n.

Proof. Since µ is a partition and n ≥ |λ| = |µ|, we must have $\mu_j = 0$ for j > n. Therefore, any semistandard tableau with shape λ and content µ contains no entry larger than n. This means $K_{\lambda,\mu,n}$ and $K_{\lambda,\mu}$ are counting the same objects, so $K_{\lambda,\mu,n} = K_{\lambda,\mu}$. □

We would now like to show the Schur functions of degree k are a basis for Λk. As we did for the elementary and power sum symmetric functions, we do this by examining the coefficients we obtain when we write $s_\lambda$ as a linear combination of monomial symmetric functions. That is, we look at matrices of Kostka numbers, as in Figure 4.10. As we might have hoped, when we arrange our partitions in increasing lexicographic order, our matrices of Kostka numbers appear

         (1)
(1)    [  1 ]

          (1²)  (2)
(1²)    [  1    0 ]
(2)     [  1    1 ]

          (1³)  (2,1)  (3)
(1³)    [  1     0     0 ]
(2,1)   [  2     1     0 ]
(3)     [  1     1     1 ]

          (1⁴)  (2,1²)  (2²)  (3,1)  (4)
(1⁴)    [  1      0      0     0     0 ]
(2,1²)  [  3      1      0     0     0 ]
(2²)    [  2      1      1     0     0 ]
(3,1)   [  3      2      1     1     0 ]
(4)     [  1      1      1     1     1 ]

           (1⁵)  (2,1³)  (2²,1)  (3,1²)  (3,2)  (4,1)  (5)
(1⁵)     [  1      0       0       0      0      0     0 ]
(2,1³)   [  4      1       0       0      0      0     0 ]
(2²,1)   [  5      2       1       0      0      0     0 ]
(3,1²)   [  6      3       1       1      0      0     0 ]
(3,2)    [  5      3       2       1      1      0     0 ]
(4,1)    [  4      3       2       2      1      1     0 ]
(5)      [  1      1       1       1      1      1     1 ]

Figure 4.10. The matrices $K_{\lambda,\mu}$ for |λ| = |µ| ≤ 5, with rows indexed by λ and columns by µ

to be lower triangular with 1’s on their diagonals. In our next result we prove this pattern continues. Proposition 4.18. For all k ≥ 0, the following hold for all partitions λ, µ ⊢ k. (i) If µ >lex λ, then Kλ,µ = 0. (ii) Kλ,λ = 1.


Proof. (i) If µ >lex λ, then by definition there exists m such that $\mu_m > \lambda_m$ and $\mu_j = \lambda_j$ for 1 ≤ j ≤ m−1. No semistandard tableau can have a 1 in its second row or higher, a 2 in its third row or higher, or, in general, a j in its (j+1)th row or higher, so a filling T of λ with content µ can be a semistandard tableau only if it has j's in every entry of the jth row for 1 ≤ j ≤ m−1. Since $\mu_m > \lambda_m$, by the pigeonhole principle some column of T must contain two m's, so T cannot be a semistandard tableau.

(ii) As in the proof of (i), a semistandard tableau of shape λ and content λ must have j's in every entry of its jth row for all j. Since there is only one semistandard tableau like this, we must have $K_{\lambda,\lambda} = 1$. □

As we will see in Problem 4.16, the converse of Proposition 4.18(i) is false. Before we get there, though, we note that the Schur functions of degree k really are a basis for Λk.

Proposition 4.19. For all k ≥ 0, the set $\{s_\lambda \mid \lambda \vdash k\}$ is a basis for Λk.

Proof. This is similar to the proof of Corollary 2.11.

□
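The Kostka matrices of Figure 4.10 and the triangularity of Proposition 4.18 can be recomputed from scratch. The sketch below is our own code; it counts semistandard tableaux by shape and content and rebuilds the |λ| = 4 matrix, with rows and columns in increasing lexicographic order.

```python
from itertools import combinations_with_replacement, product

def kostka(lam, mu):
    """K_{lam,mu}: the number of semistandard tableaux (rows bottom to top)
    of shape lam and content mu."""
    n = len(mu)
    rows = [list(combinations_with_replacement(range(1, n + 1), r)) for r in lam]
    total = 0
    for t in product(*rows):
        # columns must strictly increase from bottom to top
        if not all(t[i + 1][j] > t[i][j]
                   for i in range(len(lam) - 1) for j in range(len(t[i + 1]))):
            continue
        entries = [v for row in t for v in row]
        if all(entries.count(j) == mu[j - 1] for j in range(1, n + 1)):
            total += 1
    return total

# the partitions of 4 in increasing lexicographic order, as in Figure 4.10
parts4 = [(1, 1, 1, 1), (2, 1, 1), (2, 2), (3, 1), (4,)]
kostka_matrix = [[kostka(lam, mu) for mu in parts4] for lam in parts4]
```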

4.2. Schur Polynomials as Ratios of Determinants

So far we have used combinatorial methods to produce almost all of our families of symmetric functions. Now we describe an algebraic method for producing symmetric polynomials, which we use to give an alternative definition of the Schur polynomials. We begin with polynomials which are not symmetric, but which still interact with permutations in a natural way.

Definition 4.20. Suppose n ≥ 1. We say a polynomial $f(X_n)$ is alternating in $x_1, \ldots, x_n$ whenever π(f) = sgn(π)f for all π ∈ $S_n$.

Inspired by our construction of the monomial symmetric polynomials, we can produce alternating polynomials by starting with a monomial and adding all of its signed images under the elements of the relevant permutation group.


Example 4.21. Find the alternating polynomial $f(X_3)$ with the fewest terms which contains the monomial $x_1^3 x_2^2 x_3$.

Solution. Since f includes $x_1^3 x_2^2 x_3$ and is alternating, it must also include $\mathrm{sgn}(\pi)\pi(x_1^3 x_2^2 x_3) = -x_1^2 x_2^3 x_3$, where π = (12). Similarly, f must include $-x_1^3 x_2 x_3^2$, $-x_1 x_2^2 x_3^3$, $x_1 x_2^3 x_3^2$, and $x_1^2 x_2 x_3^3$. Since
$$f(X_3) = x_1^3 x_2^2 x_3 - x_1^2 x_2^3 x_3 - x_1^3 x_2 x_3^2 - x_1 x_2^2 x_3^3 + x_1 x_2^3 x_3^2 + x_1^2 x_2 x_3^3$$
is alternating, this is the desired polynomial.

□
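The signed-orbit construction of Example 4.21 can be carried out mechanically. The sketch below, our own code, sums sgn(π) times the permuted monomial over all π ∈ $S_n$; it reproduces the six signed terms of the example, and when two exponents coincide the terms cancel in pairs and the result is the zero polynomial.

```python
from itertools import permutations

def sign(perm):
    """sgn(pi), computed from the parity of the number of inversions."""
    n = len(perm)
    inv = sum(perm[i] > perm[k] for i in range(n) for k in range(i + 1, n))
    return -1 if inv % 2 else 1

def signed_orbit(expo):
    """Sum of sgn(pi) * pi(x^expo) over pi in S_n, as a dict mapping
    exponent vectors to coefficients; terms that cancel are dropped."""
    n = len(expo)
    poly = {}
    for perm in permutations(range(n)):
        e = [0] * n
        for i, p in enumerate(perm):
            e[p] = expo[i]          # variable pi(i+1) receives exponent expo[i]
        key = tuple(e)
        poly[key] = poly.get(key, 0) + sign(perm)
    return {k: c for k, c in poly.items() if c}
```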

In contrast with the situation for symmetric polynomials, this technique does not always produce a new alternating polynomial.

Example 4.22. Show that no alternating polynomial has the monomial $x_1 x_2$ as a term.

Solution. If $x_1 x_2$ is a term in an alternating polynomial and π = (12), then $\mathrm{sgn}(\pi)\pi(x_1 x_2) = -x_1 x_2$ is also a term in that polynomial, which is a contradiction. □

Example 4.22 is one instance of a much more general phenomenon.

Proposition 4.23. If $\mu_1, \ldots, \mu_n$ are nonnegative integers such that $\mu_j = \mu_l$ for some j ≠ l, and $f(X_n)$ is an alternating polynomial in $X_n$, then the coefficient of $x_1^{\mu_1} x_2^{\mu_2} \cdots x_n^{\mu_n}$ in $f(X_n)$ is 0.

Proof. If $a x_1^{\mu_1} x_2^{\mu_2} \cdots x_n^{\mu_n}$ is a term in $f(X_n)$, then $\mathrm{sgn}((jl))(jl)(a x_1^{\mu_1} x_2^{\mu_2} \cdots x_n^{\mu_n}) = -a x_1^{\mu_1} x_2^{\mu_2} \cdots x_n^{\mu_n}$ is also a term in $f(X_n)$. But if $\mu_j = \mu_l$, then these terms are equal, so a = 0. □

Proposition 4.23 tells us we can only construct nonzero alternating polynomials from monomials if we start with a monomial whose exponents are distinct. This includes zero exponents: there is no alternating polynomial in $x_1, x_2, x_3, x_4$ with $x_1^3 x_4$ as a term. However, as we show next, each monomial with distinct exponents does in fact produce an alternating polynomial. In contrast with some of the monomial symmetric polynomials, this alternating polynomial


has exactly one term for each permutation. This, in combination with the signs associated with each term, allows us to write this alternating polynomial as a determinant.

Proposition 4.24. If µ is a sequence with $\mu_1 > \mu_2 > \cdots > \mu_n \ge 0$ and
$$(4.7)\qquad a_\mu(X_n) = \sum_{\pi \in S_n} \mathrm{sgn}(\pi)\, x_{\pi(1)}^{\mu_1} x_{\pi(2)}^{\mu_2} \cdots x_{\pi(n)}^{\mu_n},$$
then $a_\mu(X_n)$ is an alternating polynomial in $x_1, \ldots, x_n$. Moreover, $a_\mu(X_n)$ is homogeneous of degree $\mu_1 + \cdots + \mu_n$, it has n! terms, and
$$(4.8)\qquad a_\mu(X_n) = \det\left(x_l^{\mu_j}\right)_{1 \le j,l \le n}.$$

Proof. To show $a_\mu(X_n)$ is an alternating polynomial in $X_n$, suppose σ ∈ $S_n$. Then by Proposition 1.2(i),(ii),
$$\sigma(a_\mu) = \sum_{\pi \in S_n} \mathrm{sgn}(\pi)\, x_{\sigma\pi(1)}^{\mu_1} \cdots x_{\sigma\pi(n)}^{\mu_n}.$$
If τ = σπ, then π = $\sigma^{-1}\tau$, and as π ranges over $S_n$, so does τ. Therefore,
$$\sigma(a_\mu) = \sum_{\tau \in S_n} \mathrm{sgn}(\sigma^{-1}\tau)\, x_{\tau(1)}^{\mu_1} \cdots x_{\tau(n)}^{\mu_n}.$$
By Problem C.10, we have $\mathrm{sgn}(\sigma^{-1}\tau) = \mathrm{sgn}(\sigma^{-1})\mathrm{sgn}(\tau)$. Moreover, we saw in Problem C.9 that $\mathrm{inv}(\sigma) = \mathrm{inv}(\sigma^{-1})$, so $\mathrm{sgn}(\sigma) = \mathrm{sgn}(\sigma^{-1})$. Thus,
$$\sigma(a_\mu) = \sum_{\tau \in S_n} \mathrm{sgn}(\sigma)\, \mathrm{sgn}(\tau)\, x_{\tau(1)}^{\mu_1} \cdots x_{\tau(n)}^{\mu_n},$$
so $\sigma(a_\mu) = \mathrm{sgn}(\sigma) a_\mu$, and $a_\mu(X_n)$ is an alternating polynomial in $X_n$.

By construction, each term in $a_\mu(X_n)$ has total degree $\mu_1 + \cdots + \mu_n$, so $a_\mu(X_n)$ is homogeneous of degree $\mu_1 + \cdots + \mu_n$. Since $\mu_1, \ldots, \mu_n$ are distinct, the terms of $a_\mu(X_n)$ are also distinct. Therefore, there are n! of them. Finally, equation (4.8) follows immediately from (4.7) and Proposition C.7. □

Our alternating polynomials $a_\mu(X_n)$ are indexed by sequences $\mu_1 > \mu_2 > \cdots > \mu_n \ge 0$, but it will be more convenient to index them with partitions. To see how to do this, first consider how large each $\mu_j$ must be, given that $\mu_1 > \mu_2 > \cdots > \mu_n \ge 0$. For instance, $\mu_{n-1} > \mu_n \ge 0$, so $\mu_{n-1} \ge 1$. But now $\mu_{n-2} > \mu_{n-1} \ge 1$, so $\mu_{n-2} \ge 2$. Arguing by induction, we find $\mu_{n-j} \ge j$ for 0 ≤ j ≤ n−1. In other words,


our "smallest" indexing sequence is the sequence n−1, n−2, ..., 1, 0, which we denote by $\delta_n$. With this in mind, we define a sequence λ by setting $\lambda_j = \mu_j - (n - j)$ for 1 ≤ j ≤ n, and we sometimes abbreviate λ = µ − $\delta_n$ or µ = λ + $\delta_n$. By construction, λ is a partition with at most n parts, and the map taking µ to λ is a bijection, since the inverse of subtracting $\delta_n$ is adding $\delta_n$. As a result, we can view $a_\mu(X_n)$ as $a_{\lambda+\delta_n}(X_n)$, where λ is a partition with at most n parts.

Separating $\delta_n$ from λ in an indexing sequence µ allows us to see more clearly the relationship between a generic $a_{\lambda+\delta_n}(X_n)$ and our minimal alternating polynomial $a_{\delta_n}(X_n)$. Specifically, we can use the fact that $a_{\lambda+\delta_n}(X_n)$ can be written as a determinant to factor $a_{\delta_n}(X_n)$ completely and to show $a_{\delta_n}(X_n)$ always divides $a_{\lambda+\delta_n}(X_n)$.

Proposition 4.25. For all n ≥ 1, we have
$$(4.9)\qquad a_{\delta_n}(X_n) = \prod_{1 \le j < l \le n} (x_j - x_l).$$
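Equations (4.7) and (4.9) can be tested numerically at integer points. The sketch below is our own code (the names and the sample point are ours): it evaluates $a_\mu(X_n)$ from the signed sum (4.7), checks that $a_{\delta_4}$ agrees with the product of differences, and checks that $a_{\delta_4}$ divides $a_{\lambda+\delta_4}$ at the chosen point, here with λ = (2, 1, 1, 0), so µ = λ + $\delta_4$ = (5, 3, 2, 0).

```python
from itertools import permutations
from math import prod

def sign(perm):
    """sgn(pi), from the parity of the number of inversions."""
    n = len(perm)
    inv = sum(perm[i] > perm[k] for i in range(n) for k in range(i + 1, n))
    return -1 if inv % 2 else 1

def a_mu(mu, xs):
    """Evaluate a_mu(X_n), the signed sum of equation (4.7), at the point xs."""
    n = len(mu)
    return sum(sign(p) * prod(xs[p[i]] ** mu[i] for i in range(n))
               for p in permutations(range(n)))

def delta_product(xs):
    """The product of (x_j - x_l) over 1 <= j < l <= n."""
    n = len(xs)
    return prod(xs[j] - xs[l] for j in range(n) for l in range(j + 1, n))

sample = (2, 5, 7, 11)  # an arbitrary integer point of our choosing
```

Since everything here is exact integer arithmetic, the divisibility check below is meaningful, although a check at one point is of course evidence rather than a proof.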