
Promotion Operators in Representation Theory and Algebraic Combinatorics

By

WANG, Qiang
B.E. (Dalian University of Technology, China) 1994

DISSERTATION

Submitted in partial satisfaction of the requirements for the degree of

DOCTOR OF PHILOSOPHY

in

Mathematics

in the

OFFICE OF GRADUATE STUDIES

of the

UNIVERSITY OF CALIFORNIA

DAVIS

Approved:

Prof. Anne Schilling, Chair

Prof. Alexander Soshnikov

Prof. Eric Babson

Committee in Charge

2010


UMI Number: 3427418


Contents

Abstract  iv
Acknowledgments  v
Introduction  1
Chapter 1. Basic Tableaux Combinatorics  4
  1.1. Young tableaux  4
  1.2. The plactic monoid  8
  1.3. Standardization  15
  1.4. Promotion  17
Chapter 2. A promotion operator on rigged configurations of type A  21
  2.1. Introduction  21
  2.2. Preliminaries and the main result  22
  2.3. Outline of the proof of the main result  43
  2.4. Proof of Proposition 2.3.3  51
  2.5. Proof of Proposition 2.3.5  53
  2.6. Proof of Proposition 2.3.7  55
  2.7. Proof of Proposition 2.3.10  59
  2.8. Proof of Proposition 2.3.14  61
  2.9. Proof of Proposition 2.3.15  64
  Appendix 2.A. Proof of Proposition 2.8.5  69
  Appendix 2.B. Several useful facts  71
Chapter 3. Promotion and evacuation on rectangular and staircase tableaux  74
  3.1. Introduction  74
  3.2. Definitions and Preliminaries  76
  3.3. The embedding of SYT(sc_k) into SYT(k^{k+1})  79
  3.4. Descent vectors  81
  3.5. Some comments and questions  86
  Appendix 3.A. Proof of Lemma 3.4.10  89
Chapter 4. The commutativity between the R-matrix and the promotion operator – a combinatorial proof  91
  4.1. A combinatorial algorithm for R  91
  4.2. Interaction between ρ and bumping  97
  4.3. The proof of the commutativity  100
  Appendix 4.A. Proof of Lemma 4.3.7  106
Bibliography  109

WANG, Qiang August 2010 Mathematics

Promotion Operators in Representation Theory and Algebraic Combinatorics

Abstract

This thesis comprises two results on the promotion operator. Schützenberger first defined promotion on Young tableaux in terms of jeu-de-taquin [Sch72, Sch76]. Much later, Mark Shimozono [Shi02] revealed a connection between promotion on Young tableaux and the affine crystal graph of type A. More recently, Brendon Rhoades [Rho10] revealed a connection between the count of fixed points of the promotion action on Young tableaux of rectangular shape and the q-analogues of the hook-length formula.

Chapter 2 of this thesis is based on my published work "Promotion operator on rigged configurations of type A" (joint work with Anne Schilling, [SW10, Wan09]). We prove a conjecture of [Sch06] regarding the analogue of the promotion operator on crystal paths of type A under a generalization of the bijection of Kerov, Kirillov and Reshetikhin between Littlewood–Richardson tableaux and rigged configurations [KKR86]. This proof shows in particular that the bijection between type A_n^{(1)} crystal paths and unrestricted rigged configurations is an affine crystal isomorphism.

Chapter 3 of this thesis is based on the preprint "Promotion and evacuation on standard Young tableaux of rectangle and staircase shape" (joint work with Steve Pon, [PW10]). We demonstrate a promotion- and evacuation-preserving embedding of SYT(sc_k) into SYT(k^{k+1}). This arose from an attempt to demonstrate the cyclic sieving phenomenon [RSW04] of the promotion action on SYT(sc_k). Such an embedding enables us to extend Rhoades' [Rho10] definition of the "extended descent" from rectangular tableaux to staircase tableaux.

Several novel and interesting combinatorial constructions arise as by-products of the above works. In Chapter 4, we document a combinatorial proof of the commutativity between the "combinatorial R-matrix" and the promotion operator. This proof is an alternative to a proof used in Chapter 2.


Acknowledgments I would like to express my gratitude to: My wife, Men Qing, who, in the last six years, gave birth to and raised our two lovely kids, worked a very demanding full time engineering job, and gave up her dream of graduate school so that I could enjoy my study and research. My advisor, Prof. Anne Schilling, for her guidance of my research, and her warm-hearted line-by-line, word-by-word revision of my writing. My colleague and friend, Steve Pon, for collaboration, and possibly future collaboration.



Introduction

This thesis contains two logically independent parts, related by their common study of promotion operators.

The first part (Chapter 2) investigates the correspondence between two classes of combinatorial objects: unrestricted rigged configurations of type A and tensor products of semi-standard Young tableaux of rectangular shape. Each of these two classes provides a combinatorial model of the classical crystal basis structure of type A. When we consider affine type A, in a sense, the rotation action on the Dynkin diagram of affine type A "lifts" to promotion on semi-standard Young tableaux. We study how the rotation "lifts" to the unrestricted rigged configurations.

The second part (Chapter 3) investigates relations between the promotion action on standard Young tableaux of rectangular shape and the promotion action on standard Young tableaux of staircase shape. We will see that there exists a nice embedding of staircase tableaux into rectangular tableaux that preserves the promotion action, and more.

In the following several paragraphs, we give a brief account of the historic development of the main problem of the first part. A thorough survey on this topic can be found in [Sch07].

The study of the correspondence between unrestricted rigged configurations and semi-standard Young tableaux of rectangular shape was initiated by the study of solvable lattice models and their associated spin chain systems. There are two approaches to this physics problem: the Bethe Ansatz [Bet31], and the corner transfer matrix method (CTM) [Bax82]. The solution set from the CTM method can be naturally indexed by Littlewood–Richardson tableaux [SW99]. The solution set from the Bethe Ansatz method is indexed by rigged configurations [KKR86, KR86].

Based on work by Kerov, Kirillov and Reshetikhin [KKR86, KR86], Kirillov, Schilling and Shimozono [KSS02] constructed an explicit statistic-preserving bijective map Φ from Littlewood–Richardson tableaux to rigged configurations. Later, Deka and Schilling [DS06] generalized the bijection Φ to a bijection between tensor products of semi-standard Young tableaux of rectangular shapes and unrestricted rigged configurations. Indeed, the set of unrestricted rigged configurations can be defined as the image of the generalized map, which we still call Φ. On the other hand, tensor products of semi-standard Young tableaux of rectangular shapes can be equipped with the affine crystal structure of type A_n^{(1)}. Thus the bijection carries the crystal structure over to the set of unrestricted rigged configurations. An immediate question is to find an explicit formulation of the crystal structure on the level of unrestricted rigged configurations.


Schilling [Sch06] gave such a construction on unrestricted rigged configurations for the "classical portion" (that is, all non-zero arrows) of the crystal. In addition, Schilling conjectured a construction of the "affinization" (that is, a construction of the zero arrows) that involves a promotion operator on unrestricted rigged configurations. The main result of the first part of this thesis is a proof of Schilling's conjecture. This result implies that the bijection Φ is an affine crystal isomorphism between the set of unrestricted rigged configurations and the set of tensor products of semi-standard tableaux of rectangular shapes.

In the following several paragraphs, we give a brief introduction to the background and motivation of the main result of the second part.

Promotion and evacuation (and their duals) are important bijections on the set of standard Young tableaux. They were first introduced by Schützenberger [Sch72, Sch76, Sch77], and later studied extensively by various people (Edelman and Greene [EG87], Haiman [Hai92], Stembridge [Ste96], among others). In 2008, Stanley gave a terrific survey [Sta09] of previous knowledge on promotion and evacuation and their generalizations.

On the representation theory side, promotion and evacuation fit into Kazhdan–Lusztig theory. For example, evacuation corresponds to the action of the long word on the basis, and promotion corresponds to the action of the long cycle on the basis. On the tableaux combinatorics side, promotion and evacuation are related to important tableaux statistics and counting formulas. For example, we have the following two amazing results:

∑_{w ∈ SYT(λ)} (−1)^{comaj(w)} = #{ t ∈ SYT(λ) | ϵ(t) = t },

and

∑_{w ∈ SYT(r^c)} ζ^{maj(w)} = #{ t ∈ SYT(λ) | ∂(t) = t }.

Here SYT(λ) is the set of standard Young tableaux of a given shape λ, with r^c being some rectangular shape of width r and height c; comaj is a statistic on standard Young tableaux that is closely related to the comajor index; ∂ is the promotion operator, and ϵ is the evacuation operator, which is closely related to promotion; ζ = e^{2πi/N} is an N-th root of unity for N = rc; and maj is a statistic on standard Young tableaux that is closely related to the major index. (Please refer to Chapter 1 for detailed definitions of all terms involved.) The first result is due to Stembridge [Ste96], and the second is due to Rhoades [Rho10]. They are both instances of the so-called cyclic sieving phenomenon (CSP) introduced by Reiner, Stanton and White [RSW04].


The main result of the second part of the thesis comes from an ongoing research project (joint work with Steven Pon) investigating the CSP of promotion on standard Young tableaux of other shapes. Our focus is on the staircase shape, and we discovered a promotion- and evacuation-preserving embedding of staircase tableaux into the rectangular tableaux. We organize this thesis to reflect the independence of the above two parts. Each part has its own sections for introduction, preliminaries, developments and chapter-appendices. The common background knowledge of these two parts is distilled into a separate chapter immediately after this introduction. At the end we have a chapter that documents a different proof of Theorem 2.3.12.


CHAPTER 1

Basic Tableaux Combinatorics

In this chapter, we go over some basic definitions and facts of tableaux combinatorics. Most of the facts in the first four sections of this chapter are classical to this field. There are several sources for more thorough introductions to these facts, for example [Ful97] and [Sta99]; we do not repeat their proofs in this thesis. There are also a couple of easy facts whose appearance in any textbook the author is not aware of. We state them as theorems and include their proofs unless they are too trivial.

In each of the following two chapters, we include a section on chapter-specific definitions and facts. The idea is to keep those chapters as self-contained as possible while avoiding introducing the common foundations twice.

All of our directional references for partitions and tableaux (e.g., north, west, above, below, etc.) follow the "English" notation.

1.1. Young tableaux

In this text, we fix the set of natural numbers N, with its usual ordering, as our alphabet. A partition λ of n ∈ N, written λ ⊢ n, is a weakly decreasing sequence of positive integers λ = (λ1 ≥ λ2 ≥ · · · ≥ λl > 0) such that λ1 + λ2 + · · · + λl = n.


If we drop the "weakly decreasing" requirement in the above definition then we get what is called a composition of n. Alternatively, we can think of partitions as equivalence classes of compositions where we ignore the ordering of compositions. We use |λ|, called the size of the partition, to denote the sum λ1 + λ2 + · · · + λl. We use l(λ) = l to denote the length of the partition. Each positive integer λi is called a part of the partition. By the above definition, the only partition of n = 0 is the empty sequence (). We denote this empty partition by ∅. (In this text, we will be using ∅ to denote several "empty objects": the empty set/partition/word/tableau, etc. Its meaning will never be ambiguous in context.)

EXAMPLE 1.1.3. Let λ = (4, 4, 3, 2, 1); then λ ⊢ 14 (or |λ| = 14), and l(λ) = 5 (or λ has 5 parts).

Each partition (λ1 ≥ λ2 ≥ · · · ≥ λk > 0) ⊢ n can be naturally visualized by its Young diagram, which is a left-adjusted array of n boxes with λi boxes in the i-th row. For example, the Young diagram of the partition (4, 4, 3, 2, 1) is

  □ □ □ □
  □ □ □ □
  □ □ □
  □ □
  □

Such a pictorial representation of a partition also provides a coordinate system to identify boxes in the diagram: the northwest corner box is in the first row and first column, and thus has coordinate (1,1); the box east of it has coordinate (1,2); the one south of it has coordinate (2,1); etc. A corner of a Young diagram is a box that has no neighbor to its east nor to its south. For example, the above diagram has four corners: (2,4), (3,3), (4,2), and (5,1).

DEFINITION 1.1.4. Given two partitions λ = (λ1, λ2, · · ·) and µ = (µ1, µ2, · · ·), we say µ is contained in λ, writing µ ⊂ λ, if for each k = 1, · · ·, l(µ) we have µk ≤ λk. If µ ≠ ∅ is contained in λ, then a skew diagram, or skew partition, λ/µ is obtained by removing the µ diagram from the northwest corner of the λ diagram.

EXAMPLE 1.1.5. Let λ = (4, 4, 3, 2, 1) and µ = (2, 2, 1); then λ/µ is the skew diagram

  . . □ □
  . . □ □
  . □ □
  □ □
  □
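As a quick sanity check on these coordinate conventions, here is a short Python sketch (our own illustration, not part of the thesis) that lists the corners of a Young diagram in the 1-indexed (row, column) convention used above:

```python
def corners(shape):
    """Corners of a Young diagram: boxes with no neighbor to the east or south.

    `shape` is a weakly decreasing list of positive integers; coordinates are
    1-indexed (row, column), following the English convention.
    """
    result = []
    for i, part in enumerate(shape):
        below = shape[i + 1] if i + 1 < len(shape) else 0
        if part > below:          # this row sticks out past the row beneath it
            result.append((i + 1, part))
    return result

print(corners([4, 4, 3, 2, 1]))  # [(2, 4), (3, 3), (4, 2), (5, 1)]
```

The output matches the four corners listed for the diagram of (4, 4, 3, 2, 1).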

We note here that a skew diagram λ/µ can arise from different choices of (λ, µ) pairs. For example, (4, 1, 1)/(2, 1) and (4, 2, 1)/(2, 2) represent the same skew shape. This could be a source of confusion. But for each skew diagram there is a “canonical” choice of (λ, µ) pair, that is, we choose λ to be the minimum (with respect to containment ⊂) among all possible choices. From now on, if we do not explicitly state


otherwise, we refer to the skew diagram λ/µ by its canonical choice (λ, µ). In particular, under this canonical choice, the outside corners of the skew diagram λ/µ are the corners of λ, and the inside corners are the corners of µ. In the above example, the outside corners are (2,4), (3,3), (4,2), and (5,1); the inside corners are (2,2) and (3,1). We often use the term shape for both diagram and skew diagram when we do not want or need to make the distinction.

We will be particularly interested in the following couple of special shapes. A rectangular partition is of the form (r, · · · , r) of length c. We can easily see that its Young diagram is a rectangle of width r and height c. We denote such a partition by r^c. A staircase partition is of the form (k, k − 1, · · · , 1), where from the second part on each part is one less than its previous part. Its Young diagram has the shape of an upside-down staircase, whence its name. We denote such a partition by sc_k.

Any way of associating a letter of the alphabet with each box of a Young diagram is called a filling of the diagram.

DEFINITION 1.1.6. Given a partition λ ⊢ n, a semi-standard Young tableau (or simply Young tableau, or semi-standard tableau) of shape λ is a filling of λ which is (1) weakly increasing across each row, and (2) strictly increasing down each column. We use sh(T) to denote the shape of T. In the above definition, when λ = ∅ there is only one tableau of shape ∅ (the only map with empty domain). We denote this tableau by ∅.

EXAMPLE 1.1.7.

  T1 =
    1 2 2 4
    2 3 3 7
    4 4 5
    5 7
    6

is a semi-standard Young tableau of shape sh(T1) = (4, 4, 3, 2, 1).

DEFINITION 1.1.8. Given a partition λ ⊢ n, a standard Young tableau, or simply standard tableau, of shape λ is a filling of λ with elements of the set [n] = {1, · · · , n} such that the filling is (1) strictly increasing across each row, and (2) strictly increasing down each column. From this definition we see that a standard Young tableau is a special semi-standard Young tableau in which each letter of [n] appears exactly once.
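Definitions 1.1.6 and 1.1.8 are straightforward to check by machine. The following Python sketch (our own, with a tableau stored as a list of rows) validates both conditions:

```python
def is_semistandard(t):
    """Definition 1.1.6: rows weakly increase, columns strictly increase."""
    for row in t:
        if any(a > b for a, b in zip(row, row[1:])):
            return False
    for r in range(len(t) - 1):
        if len(t[r + 1]) > len(t[r]):        # rows must form a partition shape
            return False
        if any(t[r][c] >= t[r + 1][c] for c in range(len(t[r + 1]))):
            return False
    return True

def is_standard(t):
    """Definition 1.1.8: semi-standard with entries exactly 1, ..., n."""
    n = sum(len(row) for row in t)
    entries = sorted(x for row in t for x in row)
    return is_semistandard(t) and entries == list(range(1, n + 1))

T1 = [[1, 2, 2, 4], [2, 3, 3, 7], [4, 4, 5], [5, 7], [6]]
print(is_semistandard(T1), is_standard(T1))  # True False
```

T1 of Example 1.1.7 is semi-standard but not standard, since some letters repeat.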


EXAMPLE 1.1.9. T2 is a standard Young tableau of the same shape as T1. (The entries of T2 are not legible in this reproduction.)

We use SSYT(λ) to denote the set of all semi-standard Young tableaux of shape λ, and SYT(λ) to denote the set of all standard Young tableaux of shape λ. We write SSYT_n = ∪_{λ⊢n} SSYT(λ) and SSYT = ∪_{n∈N} SSYT_n; similarly, SYT_n = ∪_{λ⊢n} SYT(λ) and SYT = ∪_{n∈N} SYT_n.

Given a Young tableau T, we define the row word of T, denoted by row(T), to be a listing of the entries of T, read from left to right along each row, the rows taken from bottom to top. We define the column word of T, denoted by col(T), to be a listing of the entries of T, read first from bottom to top along each column, the columns taken from left to right. The row word and column word are two ways of mapping a Young tableau to a word. Using either of them we can carry the notion of weight (or content) from words to Young tableaux, that is, wt(T) = wt ∘ row(T) (or wt(T) = wt ∘ col(T)). Using this language we can say that a standard Young tableau is a semi-standard Young tableau with weight (1, . . . , 1).

EXAMPLE 1.1.10. Let T1 be as given before; then the row word of T1 is row(T1) = (6, 5, 7, 4, 4, 5, 2, 3, 3, 7, 1, 2, 2, 4); the column word of T1 is col(T1) = (6, 5, 4, 2, 1, 7, 4, 3, 2, 5, 3, 2, 7, 4); and the weight of T1 is wt(T1) = (1, 3, 2, 3, 2, 1, 2).
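The reading words and the weight are easy to compute; the sketch below (our own helper names) reproduces the values of Example 1.1.10:

```python
def row_word(t):
    """row(T): read each row left to right, rows from bottom to top."""
    return [x for row in reversed(t) for x in row]

def col_word(t):
    """col(T): read each column bottom to top, columns from left to right."""
    word = []
    for c in range(len(t[0])):
        col = [row[c] for row in t if len(row) > c]
        word.extend(reversed(col))
    return word

def weight(word):
    """wt: the i-th entry counts the occurrences of the letter i."""
    counts = [0] * max(word)
    for x in word:
        counts[x - 1] += 1
    return tuple(counts)

T1 = [[1, 2, 2, 4], [2, 3, 3, 7], [4, 4, 5], [5, 7], [6]]
print(row_word(T1))          # [6, 5, 7, 4, 4, 5, 2, 3, 3, 7, 1, 2, 2, 4]
print(col_word(T1))          # [6, 5, 4, 2, 1, 7, 4, 3, 2, 5, 3, 2, 7, 4]
print(weight(row_word(T1)))  # (1, 3, 2, 3, 2, 1, 2)
```

Note that the weight components sum to 14 = |sh(T1)|, as they must.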

Given a skew diagram λ/µ, we can define the semi-standard/standard skew Young tableaux of shape λ/µ analogously, and so too the notions of their row/column word and weight.

EXAMPLE 1.1.11.

  T3 =
    . . 1 5
    . . 2 6
    . 1 5
    2 3
    4

is a skew Young tableau of shape (4, 4, 3, 2, 1)/(2, 2, 1) (dots mark the removed boxes of µ). The row word of T3 is row(T3) = (4, 2, 3, 1, 5, 2, 6, 1, 5), the column word of T3 is col(T3) = (4, 2, 3, 1, 5, 2, 1, 6, 5), and the weight of T3 is wt(T3) = (2, 2, 1, 1, 2, 1).


1.2. The plactic monoid

Now we consider maps from words to tableaux. One such map uses the celebrated Schensted insertion, or row insertion algorithm.

ALGORITHM 1.2.1. Let T be a Young tableau, and x ∈ N. The row insertion of x into T, denoted by T ← x, is the result of the following procedure: Find in the first row of T the first entry y such that y > x. If no such y exists, then append x to the end of the first row and stop. Otherwise, bump (replace) y with x in the first row. (Note that the resulting new row is still weakly increasing.) Repeat the process with y on the second row. Keep going until the bumped entry can be placed at the end of the row it is bumped into, or until it is bumped out of the bottom, in which case it forms a new row with one box. It is clear that the result of this procedure is again a Young tableau.

EXAMPLE 1.2.2. When row inserting 1 into T1 (from Example 1.1.7), first 1 bumps the 2 in position (1,2) out of the first row, then 2 bumps the 3 in position (2,2) out of the second row, then 3 bumps the 4 in position (3,1) out of the third row, then 4 bumps the 5 in position (4,1) out of the fourth row, then 5 bumps the 6 in position (5,1) out of the last row, and finally 6 creates a new row. Therefore,

  T1 ← 1 =
    1 1 2 4
    2 2 3 7
    3 4 5
    4 7
    5
    6

EXAMPLE 1.2.3. When row inserting 3 into T1, first 3 bumps the 4 in position (1,4) out of the first row, then 4 bumps the 7 in position (2,4) out of the second row; since 7 is larger than every entry of the third row, it is appended at the end of the third row. Therefore,

  T1 ← 3 =
    1 2 2 3
    2 3 3 4
    4 4 5 7
    5 7
    6

From the definition, it is clear that the row insertion of x into T determines a sequence of positions in sh(T ← x), namely, those whose entries are bumped out from their rows, together with the last box added at the end. Such a sequence is called the bumping route of the row insertion. In the above examples, the bumping route of the row insertion of 1 into T1 is ((1,2),(2,2),(3,1),(4,1),(5,1),(6,1)), and that of the row insertion of 3 into T1 is ((1,4),(2,4),(3,4)). It is clear from the definition of Young tableaux and the construction of row insertion that the row coordinates of a bumping route are strictly increasing and the column coordinates are weakly decreasing.
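Algorithm 1.2.1 and the bumping route can be sketched in Python as follows (a minimal illustration; the function names are ours). Running it on Examples 1.2.2 and 1.2.3 reproduces both the resulting tableaux and the routes:

```python
def row_insert(t, x):
    """Schensted row insertion T <- x; returns (new tableau, bumping route).

    Positions in the route are 1-indexed (row, column), as in the text.
    """
    t = [row[:] for row in t]      # do not mutate the input
    route = []
    for r, row in enumerate(t):
        # find the first entry strictly larger than x
        pos = next((i for i, y in enumerate(row) if y > x), None)
        if pos is None:            # x goes at the end of this row
            row.append(x)
            route.append((r + 1, len(row)))
            return t, route
        row[pos], x = x, row[pos]  # bump
        route.append((r + 1, pos + 1))
    t.append([x])                  # bumped out of the bottom: new row
    route.append((len(t), 1))
    return t, route

T1 = [[1, 2, 2, 4], [2, 3, 3, 7], [4, 4, 5], [5, 7], [6]]
t, route = row_insert(T1, 1)
print(route)  # [(1, 2), (2, 2), (3, 1), (4, 1), (5, 1), (6, 1)]
print(t)      # [[1, 1, 2, 4], [2, 2, 3, 7], [3, 4, 5], [4, 7], [5], [6]]
```

The route's row coordinates strictly increase and its column coordinates weakly decrease, as noted above.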


Now given a word w = (a1, a2, · · · , an), the P-tableau of w, denoted P(w), can be constructed by recursively applying row insertion.

ALGORITHM 1.2.4. In the base step, we let P0 be the empty tableau ∅. Once Pk is constructed, we construct Pk+1 = Pk ← ak+1. At the end, we define P(w) = Pn, the final result of this recursive construction.

EXAMPLE 1.2.5. Let w = (3, 7, 5, 2, 7, 4, 1, 6); then we obtain the following sequence of Young tableaux, the i-th tableau corresponding to Pi in the above construction (rows are separated by slashes):

∅, (3), (3 7), (3 5 / 7), (2 5 / 3 / 7), (2 5 7 / 3 / 7), (2 4 7 / 3 5 / 7), (1 4 7 / 2 5 / 3 / 7), (1 4 6 / 2 5 7 / 3 / 7).

Thus, P(w) is the last Young tableau in the above sequence, and sh(P(w)) = (3, 3, 1, 1).

Now that we have maps back and forth between Young tableaux and words, how are these maps related? It is easy to check that P ∘ row and P ∘ col are each the identity map on Young tableaux. This implies that P induces an equivalence relation on words: w1 is equivalent to w2 if P(w1) = P(w2). In particular, row(T) is equivalent to col(T) for any Young tableau T. It turns out that this equivalence relation can be characterized by Knuth equivalence, which we now describe.

DEFINITION 1.2.6. Knuth equivalence, denoted ≡K, is the transitive and reflexive closure of the elementary Knuth transformations ↦:
(1) Let X < Y ≤ Z be natural numbers. If w1 = uY XZv for some words u and v, and w2 = uY ZXv, then w1 ↦ w2 and w2 ↦ w1.
(2) Let X ≤ Y < Z be natural numbers. If w1 = uXZY v for some words u and v, and w2 = uZXY v, then w1 ↦ w2 and w2 ↦ w1.

EXAMPLE 1.2.7. In this example, we omit the commas between the letters of a word for a cleaner presentation:

73257146 ≡K 73251746 ≡K 73215746 ≡K 73215476,

using the elementary Knuth transformations (714) ↦ (174), then (251) ↦ (215), then (746) ↦ (476).
The important fact about the Knuth equivalence is that P (w1 ) = P (w2 ) if and only if w1 ≡K w2 .
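This fact can be checked on the running examples with a short sketch (ours): the two ends of the chain in Example 1.2.7 share the same P-tableau.

```python
def row_insert(t, x):
    """Schensted row insertion T <- x (bumping route omitted)."""
    t = [row[:] for row in t]
    for row in t:
        pos = next((i for i, y in enumerate(row) if y > x), None)
        if pos is None:
            row.append(x)
            return t
        row[pos], x = x, row[pos]
    t.append([x])
    return t

def P(word):
    """Algorithm 1.2.4: insert the letters of w one by one into ∅."""
    t = []
    for a in word:
        t = row_insert(t, a)
    return t

w = (3, 7, 5, 2, 7, 4, 1, 6)
print(P(w))  # [[1, 4, 6], [2, 5, 7], [3], [7]]

# Knuth-equivalent words (Example 1.2.7) have the same P-tableau:
w1 = [int(c) for c in "73257146"]
w2 = [int(c) for c in "73215476"]
print(P(w1) == P(w2))  # True
```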


The above fact gives a way of indexing Knuth equivalence classes by Young tableaux. Is there a good way to index words within a Knuth equivalence class? The beautiful RSK correspondence answers just this question: for any Young tableau T it gives a constructive bijection between the Knuth equivalence class indexed by T, {w | P(w) = T}, and the set of standard Young tableaux of the same shape as T, SYT(sh(T)).

The RSK correspondence is the result of the following enhancement of the row insertion algorithm. By the construction of the row insertion, we see that the shape of T ← x is obtained by adding one more box to the shape of T. The row insertion algorithm is invertible provided that we know the position of this added box. Thus, in order to recover w from P(w), we just need to know the (reversed) sequence of positions of the added boxes in each step of the construction Pk+1 = Pk ← ak+1. One way to record this sequence is to note down for each box of sh(P(w)) the "time" when it is added. It is notationally neater (and also the convention) to keep the "time" information in a separate tableau, the Q-tableau. More precisely, at each step, as we construct Pk+1 from Pk, we also construct a tableau Qk+1 from Qk (with Q0 = ∅) such that sh(Qk+1) = sh(Pk+1) and the entry inside the newly added box is k + 1. This is best understood by an example:

EXAMPLE 1.2.8. Reusing Example 1.2.5, the sequence of (Pk, Qk) pairs constructed is (rows separated by slashes):

(∅, ∅), ((3), (1)), ((3 7), (1 2)), ((3 5 / 7), (1 2 / 3)), ((2 5 / 3 / 7), (1 2 / 3 / 4)), ((2 5 7 / 3 / 7), (1 2 5 / 3 / 4)), ((2 4 7 / 3 5 / 7), (1 2 5 / 3 6 / 4)), ((1 4 7 / 2 5 / 3 / 7), (1 2 5 / 3 6 / 4 / 7)), ((1 4 6 / 2 5 7 / 3 / 7), (1 2 5 / 3 6 8 / 4 / 7)).

By our discussion above, the map w ↦ (P(w), Q(w)) is invertible; thus it is a correspondence between N^n and ∪_{λ⊢n} SSYT(λ) × SYT(λ).
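A sketch of the enhanced insertion (ours): alongside each row insertion we record the step number in the newly added box of Q, reproducing Example 1.2.8.

```python
def rsk(word):
    """RSK: build (P, Q); Q records in each box the step at which it was added."""
    p, q = [], []
    for step, x in enumerate(word, start=1):
        r = 0
        while True:
            if r == len(p):            # new row at the bottom
                p.append([x]); q.append([step])
                break
            row = p[r]
            pos = next((i for i, y in enumerate(row) if y > x), None)
            if pos is None:            # x lands at the end of row r
                row.append(x); q[r].append(step)
                break
            row[pos], x = x, row[pos]  # bump and continue in the next row
            r += 1
    return p, q

P_tab, Q_tab = rsk((3, 7, 5, 2, 7, 4, 1, 6))
print(P_tab)  # [[1, 4, 6], [2, 5, 7], [3], [7]]
print(Q_tab)  # [[1, 2, 5], [3, 6, 8], [4], [7]]
```

By construction Q is standard and has the same shape as P at every step.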


Elements of Sn, in one-line notation, are just words of weight (1, 1, · · · , 1). Hence, the RSK correspondence provides a bijection between permutations and pairs of standard Young tableaux of the same shape. This bijection has some very beautiful properties. For example, taking the inverse of a permutation corresponds to swapping its P-tableau and Q-tableau under RSK.

Given T1 ∈ SSYT(λ) and T2 ∈ SSYT(µ), we can pick w1 ∈ P^{−1}(T1) and w2 ∈ P^{−1}(T2), and consider the tableau S = P(w1 w2). From our discussion above, we know that S does not depend on the choices of w1 and w2, and thus is a function of T1 and T2. This operation defines a (non-commutative) multiplication on SSYT, given by T1 · T2 = P(row(T1)row(T2)). This multiplication equips the set SSYT with a monoid structure (the empty tableau being the multiplicative identity), which is called the plactic monoid. This structure provides a foundation for the combinatorial treatment of the theory of symmetric functions.

There is a different combinatorial construction of the above tableau multiplication. To describe it, we first describe the forward sliding operation on a skew tableau. The forward sliding operation takes a skew tableau S and an inside corner of it, which can be thought of as a "hole". Consider the south neighbor and the east neighbor of this hole; at least one of these two boxes is in S. There is only one way of moving the entry from one of these boxes (possibly there is only one box, in which case there is trivially only one way) to fill in the hole while preserving the row-weakly-increasing and column-strictly-increasing conditions: move the smaller of the two, and if they are equal, move the one on the left. (As a general rule, if two entries in a tableau are the same, the one on the left is regarded as smaller than the one on the right.) A new hole is thus created at the position of the moved neighbor.

We then consider the south and east neighbor(s) of this new hole, and repeat the above process. The hole moves either to the south or to the east in each step, until it stops at an outside corner, where it has no south neighbor nor east neighbor in S. This completes one sliding operation. The path (sequence of positions) that the hole moves along is called the forward sliding route.
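The forward sliding operation, iterated until no inside corner remains (the "rectification" discussed below), can be sketched in Python (our own representation: a skew tableau is a list of (offset, row) pairs, the offset counting the removed boxes of µ in that row). As a sanity check, on the tableau T3 of Example 1.1.11 the result agrees with P(row(T3)) computed by row insertion:

```python
def row_insert(t, x):
    t = [row[:] for row in t]
    for row in t:
        pos = next((i for i, y in enumerate(row) if y > x), None)
        if pos is None:
            row.append(x)
            return t
        row[pos], x = x, row[pos]
    t.append([x])
    return t

def P(word):
    t = []
    for a in word:
        t = row_insert(t, a)
    return t

def rectify(skew):
    """Repeated forward sliding; `skew` is a list of (offset, row) pairs."""
    cells = {(r, off + j): v
             for r, (off, row) in enumerate(skew) for j, v in enumerate(row)}
    mu = [off for off, _ in skew]                 # inner shape
    lam = [off + len(row) for off, row in skew]   # outer shape
    n = len(skew)
    while any(m > 0 for m in mu):
        # pick an inside corner of mu and turn it into a hole
        r = next(i for i in range(n)
                 if mu[i] > 0 and mu[i] > (mu[i + 1] if i + 1 < n else 0))
        c = mu[r] - 1
        mu[r] -= 1
        while True:  # one forward slide: the hole moves south or east
            south, east = cells.get((r + 1, c)), cells.get((r, c + 1))
            if south is None and east is None:
                break                       # hole reached an outside corner
            if east is None or (south is not None and south <= east):
                cells[(r, c)] = cells.pop((r + 1, c)); r += 1
            else:
                cells[(r, c)] = cells.pop((r, c + 1)); c += 1
        lam[r] -= 1  # the vacated outside corner leaves the shape
    return [[cells[(r, c)] for c in range(lam[r])] for r in range(n) if lam[r]]

# T3 from Example 1.1.11, of shape (4,4,3,2,1)/(2,2,1)
T3 = [(2, [1, 5]), (2, [2, 6]), (1, [1, 5]), (0, [2, 3]), (0, [4])]
word = [x for _, row in reversed(T3) for x in row]   # row word of T3
print(rectify(T3))            # [[1, 1, 5, 5], [2, 2, 6], [3], [4]]
print(rectify(T3) == P(word)) # True
```

The tie-breaking rule (equal neighbors: move the left one, i.e. the south one) is exactly what keeps the columns strictly increasing.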

EXAMPLE 1.2.9. Consider the inside corner (2,2) of the skew tableau T3 from Example 1.1.11, marked with 'X' below:

  . . 1 5
  . X 2 6
  . 1 5
  2 3
  4

The operation of forward sliding on this input data can be demonstrated by the following step-by-step motions:

  . . 1 5
  . 1 2 6
  . X 5
  2 3
  4

then

  . . 1 5
  . 1 2 6
  . 3 5
  2 X
  4

The forward sliding route is the sequence ((2,2),(3,2),(4,2)).

As shown in the above example, after one round of forward sliding we get a skew tableau of a new skew shape λ′/µ′, where λ′ is obtained from λ by removing some corner box, and µ′ is obtained from µ by removing some corner box. As long as µ′ is not already empty, we can choose another inside corner of λ′/µ′ and do the forward sliding again. Eventually, we end up with a (non-skew) tableau. This tableau is called the rectification of the original skew tableau S, denoted by rect(S).

In the above description of rectification, we made a choice of an inside corner for each forward sliding step, so it may seem that the rectification is not well defined. It can be shown, however, that no matter the choice of inside corners, each forward sliding operation preserves the Knuth equivalence class of the row (column) reading word. Therefore, rect(S) is nothing but P(row(S)), and thus is well defined.

How is rectification related to multiplication of tableaux? Given tableaux S and T, we construct a skew tableau, called S ∗ T, by putting T to the north-west of S so that the north-west corner of S and the south-east corner of T touch at one point.

EXAMPLE 1.2.10. Let S = (1 2 / 2 4) and T = (1 3 / 2 4), rows listed top to bottom; then

  S ∗ T =
    . . 1 3
    . . 2 4
    1 2
    2 4

It is clear that row(S ∗ T) = row(S)row(T), thus rect(S ∗ T) = S · T.

The sliding operation is invertible in the following sense. If we are given a tableau S (skew or not) as the result of a sliding operation, and we know the outside corner of the original shape that was removed by this operation (which is just the last hole in the forward sliding route), then we can recover the original skew tableau by backward sliding. The given hole has (at most) two neighbors in S, the north one and the west one. Whenever there are choices, there is only one way of moving one of these neighbors to fill in the hole while preserving the row-weakly-increasing and column-strictly-increasing conditions: move the bigger of the two, and if they are equal, move the one on the right. A new hole is thus created at the


position of the moved neighbor. We then consider its west and north neighbors as candidates to fill this new hole. Repeating this process, we move the hole northwest, and it finally stops at an inside corner. It is not hard to see that the operation described above completely reverses the sliding operation that starts at the last inside corner. In particular, the backward sliding route, which is defined to be the sequence of positions that the hole moves through in the backward sliding operation, is precisely the reverse of the forward sliding route.

We caution the reader that, in contrast to the fact that rectification does not depend on the sequence of choices of inside corners, to reverse a rectification we do need the information of the sequence of outside corners. This can best be seen from the following example.

EXAMPLE 1.2.11. The tableau

  1 2
  3

can be backward slided to

  . 2
  1 3

or

  . 2
  1
  3

by choosing

difference sequences of outside corners. We often omit the direction – forward or backward – when we talk about sliding if this information is clear from the context. Instead, we use the term jeu-de-taquin sliding, or jeu-de-taquin, or just sliding . Similarly, we use the term sliding route if the ordering of the sequence is clear from the context. Before we move on to the next topic, let us consider the the dual construction of row insertion, called the column insertion. A LGORITHM 1.2.12. Let T be a Young tableau, and x ∈ N. The column insertion of x into T , denoted col

by x →col T , is the result of the following procedure: Find in the first column of T the first entry y such that y ≥ x. If no such y exists, then append x to the end of the first column and stop. Otherwise, bump (replace) y with x in the first column. (Note that the resulting new column is still strictly increasing.) Repeat the process with y on the second column. Keep going until the bumped entry can be put at the end of the column it is bumped into, or until it is bumped out of the last column, in which case it forms a new column with one box. It is clear that the result of this procedure is again a Young tableau.

Example 1.2.13. To column insert 3 into

T1 = 1 2 2 4
     2 3 3 7
     4 4 5
     5 7
     6

first 3 bumps the entry 4 from the first column, then 4 bumps the entry 4 from the second column, then 4 bumps the entry 5 from the third column, then 5 bumps the entry 7 from the fourth column, and 7 finally forms a new column added to the right of sh(T1 ). Therefore,

3 →col T1 = 1 2 2 4 7
            2 3 3 5
            3 4 4
            5 7
            6

Similar to that of row insertion, we can construct a map from words to Young tableaux by using column insertion.
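The column insertion of Algorithm 1.2.12 can be sketched in code as follows. This is our own illustration: a tableau is stored as a list of rows, and the name `col_insert` is ours, not one used in the text.

```python
def col_insert(T, x):
    """Column-insert x into the Young tableau T (a list of rows)."""
    T = [row[:] for row in T]  # work on a copy
    j = 0
    while True:
        # entries of column j, read top to bottom
        col = [row[j] for row in T if len(row) > j]
        # first (topmost) entry y with y >= x
        hit = next((i for i, y in enumerate(col) if y >= x), None)
        if hit is None:
            # x can be placed at the end of column j; stop
            if len(col) < len(T):
                T[len(col)].append(x)
            else:
                T.append([x])
            return T
        # bump y out of column j and continue with it in column j+1
        x, T[hit][j] = T[hit][j], x
        j += 1
```

Column inserting 3 into the tableau T1 of Example 1.2.13 reproduces the tableau computed there.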


Given a word w = (a1 , a2 , · · · , an ), the tableau Pcol (w) is constructed recursively as below.

Algorithm 1.2.14. In the base step, we let P0 be the empty tableau ∅. Once Pk is constructed, we construct Pk+1 = a_{n−k} →col Pk ; that is, the letters of w are column inserted from right to left. At the end, we define Pcol (w) = Pn to be the final result of this recursive construction.

Example 1.2.15. Let w = (3, 7, 5, 2, 7, 4, 1, 6). Then we obtain the following sequence of Young tableaux by following the algorithm, the i-th tableau in this sequence corresponding to Pi in the above construction (rows of each tableau are separated by slashes):

∅, (6), (1 6), (1 6 / 4), (1 6 / 4 / 7), (1 4 6 / 2 / 7), (1 4 6 / 2 7 / 5), (1 4 6 / 2 7 / 5 / 7), (1 4 6 / 2 5 7 / 3 / 7).

Thus, Pcol (w) is the last Young tableau in the above sequence. Comparing to Example 1.2.5 we see that Pcol (w) = P (w) for this given word. It turns out that this is not a coincidence: it can be shown that Pcol (w) = P (w) for any word w. Moreover, the RSK correspondence can be constructed in terms of column insertion. Please refer to [Ful97, A.2] for details of this construction.

As with row insertion, column insertion is invertible given the position of the box added to the Young tableau. In particular, if we are given that T is of rectangular shape, then the added box surely forms a new column outside of the rectangle, so its position is obvious. Since we will use the inverse of column insertion on rectangular tableaux in our later discussions, we now explicitly describe it below.

Algorithm 1.2.16. Let T be a rectangular Young tableau, and x ∈ N be such that x is greater than or

equal to the entries in the first row of T . The inverse column insertion of x into T , denoted by T ←col−1 x, is the result of the following procedure: Find in the last column of T the last entry y such that y ≤ x. Such an entry always exists by the given condition on x. Bump (replace) y with x in the last column. Repeat the process with y on the second-to-last column. Since the neighbor to the left of y is less than or equal to y, the entry y will in turn bump some entry out of the next column to the left. Keep going until an entry z is bumped out of the first column. Let us call the new rectangular Young tableau resulting from the above repeated bumping S; then the pair (z, S) is the result of this algorithm.

Example 1.2.17. Let T be the rectangular Young tableau

1 2 2 3
2 3 5 5
4 4 6 6
5 6 7 7

Then 6 is greater than any entry in the first row of T . When performing inverse column insertion of 6 into T , first the entry 6 from the last column is bumped, then the entry 6 from the second-to-last column is bumped, then the entry 6 from the third-to-last column is bumped, and finally the entry 5 from the first column is bumped out of the rectangle. Therefore,

(T ←col−1 6) = (5, 1 2 2 3
                   2 3 5 5
                   4 4 6 6
                   6 6 7 7 ).
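Algorithm 1.2.16 can be sketched in code as follows; tableaux are lists of rows, and the name `inv_col_insert` is ours.

```python
def inv_col_insert(T, x):
    """Inverse column insertion of x into a rectangular tableau T.

    Returns the pair (z, S): z is the entry bumped out of the first
    column and S is the new rectangular tableau.
    """
    T = [row[:] for row in T]
    ncols = len(T[0])
    for j in reversed(range(ncols)):
        col = [row[j] for row in T]
        # last (bottommost) entry y with y <= x
        i = max(k for k, y in enumerate(col) if y <= x)
        T[i][j], x = x, T[i][j]   # bump y with x; continue leftward with y
    return x, T
```

Applied to the tableau T of Example 1.2.17 with x = 6, this returns z = 5 and the tableau S shown there.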

The notion of bumping route of a column insertion is analogous to that of row insertion. For inverse column insertion, since the number z bumped out of the first column is not included as part of the resulting tableau S, we consider the bumping route of the inverse column insertion to consist of only positions in the rectangular shape. For instance, ((3, 4), (3, 3), (4, 2), (4, 1)) is the bumping route of the inverse column insertion in the above example.

1.3. Standardization

If we look carefully at the construction of the jeu-de-taquin sliding, we see that the sliding route depends only on the relative ordering of the entries of the tableau, not on the exact filling. For example, just from the behavior of jeu-de-taquin sliding we cannot tell the difference between the following two tableaux:

2 8        1 5
3 9   and  2 6
4          4

From our earlier discussion of row insertion, we can also see that the bumping route of T ←row x depends only on the relative ordering of the entries in T and the number x, not on the exact content of these entries and x. For example, if we are given two words w and v such that v is obtained by shifting each letter in w up by one, then the relative ordering of the letters in these two words is the same; thus the sequences of Q-tableaux arising from the RSK construction of these two words will be the same. To capture the precise notion of "relative ordering" we define the following map.

Definition 1.3.1. Given a word w = (a1 a2 · · · al ), the standardization of w, denoted by std(w) = (b1 b2 · · · bl ), is such that bk = #{i | ai < ak } + #{i | ai = ak , i ≤ k}. That is, a word w defines a linear ordering on the positions 1, · · · , l such that position i is considered lower than position j in this ordering if either ai < aj , or ai = aj but i < j. Thus std(w) just records the ordering of positions. For two words u and v, we say u ≡std v if and only if std(u) = std(v).

Example 1.3.2. The words w = (3, 7, 5, 2, 7, 4, 1, 6) and v = (3, 5, 4, 2, 5, 3, 1, 4) both have the same standardization, std(w) = std(v) = (3, 7, 5, 2, 8, 4, 1, 6), and thus the same "relative ordering".
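Definition 1.3.1 translates directly into code; a minimal sketch (the name `std` follows the text):

```python
def std(w):
    """Standardization of a word w (Definition 1.3.1):
    b_k = #{i : a_i < a_k} + #{i : a_i = a_k, i <= k}."""
    return tuple(
        sum(1 for a in w if a < x) + sum(1 for a in w[:k + 1] if a == x)
        for k, x in enumerate(w)
    )
```

On the two words of Example 1.3.2 this returns (3, 7, 5, 2, 8, 4, 1, 6) in both cases; on a word with distinct letters (a permutation of 1, …, l) it is the identity.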


It is easy to verify that standardization commutes with the Knuth transformations. Therefore, the standardization of words can be projected down to the Knuth equivalence classes of words, that is, to Young tableaux. More precisely,

Definition 1.3.3. For any Young tableau T , the standardization of T is given by std(T ) = P (std(row(T ))). For Young tableaux T1 and T2 , we say T1 ≡std T2 if and only if std(T1 ) = std(T2 ).

Under this definition, and the fact that jeu-de-taquin sliding preserves the Knuth equivalence of row words, we know that jeu-de-taquin sliding commutes with the standardization of Young tableaux. The commutativity with standardization is the precise formulation that captures the informal idea of "depends only on the relative ordering".

The standardization map is clearly not injective, but with the extra piece of information wt(w), we can recover the original word w from std(w). Similarly, given std(T ) and wt(T ) the Young tableau T can be recovered. It is not the case, however, that a standard Young tableau can be paired with any weight vector to recover a semi-standard Young tableau. The choice of weight vector has to respect the "boundary" of the horizontal strip decomposition of the standard Young tableau, the precise meaning of which is given below.

Definition 1.3.4. A horizontal strip of a shape λ is a collection of boxes of λ such that no two boxes share the same column. A horizontal strip decomposition of T ∈ SY T (λ) is a set-partition of the boxes of T such that:
• each partition block forms a horizontal strip of λ,
• each partition block contains consecutive elements in ⟨N, <⟩.

fi (b1 ⊗ b2 ) = fi b1 ⊗ b2    if ϕi (b1 ) > εi (b2 )
               b1 ⊗ fi b2    if ϕi (b1 ) ≤ εi (b2 )

ei (b1 ⊗ b2 ) = b1 ⊗ ei b2    if ϕi (b1 ) < εi (b2 )
               ei b1 ⊗ b2    if ϕi (b1 ) ≥ εi (b2 )

(Here 0 ⊗ b and b ⊗ 0 are understood to be 0.)

εi (b1 ⊗ b2 ) = max{εi (b1 ), εi (b1 ) + εi (b2 ) − ϕi (b1 )}
ϕi (b1 ⊗ b2 ) = max{ϕi (b2 ), ϕi (b1 ) + ϕi (b2 ) − εi (b1 )}
wt(b1 ⊗ b2 ) = wt(b1 ) + wt(b2 )
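The tensor-product rule above can be sketched generically in code. Here `e`, `f`, `eps`, `phi` stand for the single-factor maps ei , fi , εi , ϕi (with `None` playing the role of 0); all names are ours.

```python
def f_tensor(b1, b2, f, eps, phi):
    """f_i on b1 (x) b2 via the tensor product rule."""
    if phi(b1) > eps(b2):
        fb1 = f(b1)
        return None if fb1 is None else (fb1, b2)
    fb2 = f(b2)
    return None if fb2 is None else (b1, fb2)

def e_tensor(b1, b2, e, eps, phi):
    """e_i on b1 (x) b2 via the tensor product rule."""
    if phi(b1) < eps(b2):
        eb2 = e(b2)
        return None if eb2 is None else (b1, eb2)
    eb1 = e(b1)
    return None if eb1 is None else (eb1, b2)
```

For the two-element crystal {1, 2} with f(1) = 2, ε(1) = 0, ϕ(1) = 1 (a toy example of ours), the rule gives f(1 ⊗ 1) = 2 ⊗ 1, since ϕ(1) > ε(1).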


Corresponding to the notion of a morphism between representations, a morphism between crystal graphs is defined as follows.

Definition 2.2.6. Let B1 and B2 be P -weighted I-crystals. A morphism ψ : B1 −→ B2 is a set map ψ : B1 ⊔ {0} −→ B2 ⊔ {0} satisfying
(1) ψ(0) = 0 and ψ(B1 ) ⊂ B2 ,
(2) for b, b′ ∈ B1 , fi b = b′ implies fi ψ(b) = ψ(b′ ),
(3) wt(ψ(b)) = wt(b) for any b ∈ B1 , and
(4) εi (ψ(b)) = εi (b),

ϕi (ψ(b)) = ϕi (b), for any b ∈ B1 .

In particular, an injective crystal morphism is called a crystal embedding, and a bijective crystal morphism is called a crystal isomorphism.

2.2.3. Inhomogeneous lattice paths. In this subsection, we will use semi-standard Young tableaux to realize the crystal graphs of the finite dimensional irreducible Uq (g)-representations and the crystal graphs of the Kirillov–Reshetikhin U′q (ĝ)-modules, where g is the semisimple Lie algebra of type An . The finite dimensional irreducible representations of Uq (g) are indexed by partitions λ with l(λ) ≤ n. Given such a partition λ, a P -weighted [n]-crystal graph B λ can be constructed as follows. As the set of vertices, B λ consists of the semi-standard Young tableaux of shape λ on the alphabet [n + 1]. The weight function wt : B λ → P is the weight function wt defined on SSY T , with the output interpreted as a weight in the weight lattice expressed in terms of the ambient basis. The operators ei and fi on B λ are defined by the following construction.

Definition 2.2.7. Let u be a word. For a given i ∈ [n], we first place a left parenthesis below each letter i + 1 in u, and a right parenthesis below each letter i in u. Match up all the nested parenthesis pairs. The unmatched parentheses correspond to a subword of the form i^s (i + 1)^t . Now let ēi (u) be obtained from u by changing the leftmost unmatched i + 1 to i; ēi (u) = 0 if no such i + 1 exists (that is, t = 0). Let f̄i (u) be obtained from u by changing the rightmost unmatched i to i + 1; f̄i (u) = 0 if no such i exists (that is, s = 0). This construction of ēi (u) and f̄i (u) is said to follow the signature rule on words. For T ∈ B λ , we define ei (T ) = P (ēi (row(T ))) and fi (T ) = P (f̄i (row(T ))), where P is as defined in Algorithm 1.2.4 with P (0) = 0.
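The signature rule of Definition 2.2.7 on words can be sketched as follows (words as tuples of letters, `None` playing the role of 0; the helper name `_unmatched` is ours):

```python
def _unmatched(word, i):
    """Positions of unmatched i's and unmatched (i+1)'s (signature rule)."""
    open_i1, free_i = [], []
    for k, a in enumerate(word):
        if a == i + 1:
            open_i1.append(k)          # a left parenthesis
        elif a == i:
            if open_i1:
                open_i1.pop()          # matched with a preceding i+1
            else:
                free_i.append(k)       # an unmatched right parenthesis
    return free_i, open_i1

def e_bar(word, i):
    free_i, free_i1 = _unmatched(word, i)
    if not free_i1:
        return None
    w = list(word); w[free_i1[0]] = i      # leftmost unmatched i+1 -> i
    return tuple(w)

def f_bar(word, i):
    free_i, free_i1 = _unmatched(word, i)
    if not free_i:
        return None
    w = list(word); w[free_i[-1]] = i + 1  # rightmost unmatched i -> i+1
    return tuple(w)
```

For example, f̄1 applied to the word (1, 1) changes the rightmost unmatched 1, giving (1, 2), while on the word (2, 1) the parentheses are fully matched and ē1 , f̄1 both return 0.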

This algorithm is well defined; that is, it is not hard to show that ei (T ) and fi (T ) have the same shape as sh(T ). Moreover, it can be shown that if we use any reading word of T that is Knuth equivalent to row(T ), the result agrees with that obtained using row(T ).


The maps εi and ϕi on B λ are defined by εi (T ) = t and ϕi (T ) = s, for t, s and T as in the above algorithm. It can be shown that the axioms in Definition 2.2.4 are satisfied by the above definitions of wt, ei /fi , and εi /ϕi .

Next, we introduce the inhomogeneous lattice paths, which provide a combinatorial model for the crystal graphs of the Kirillov–Reshetikhin (KR) modules of type A_n^{(1)} . In the rest of this chapter we use I = [n] as the index set of the Dynkin diagram of type An . Let H = I × Z>0 and define B to be a finite sequence of pairs of positive integers B = ((r1 , c1 ), . . . , (rK , cK )), with (ri , ci ) ∈ H for 1 ≤ i ≤ K. Each (ri , ci ) should be identified with the rectangular partition (c_i^{r_i}), and thus B represents a sequence of rectangular partitions. We sometimes use the phrase "leftmost rectangle" (resp. "rightmost rectangle") of B to mean the first (resp. last) partition in this sequence. Given B, we will use the following operations for successively "peeling off" boxes from it.

(1) If B = ((1, 1), B ′ ), let lh(B) = (B ′ ). This operation is called left-hat.
(2) If B = ((r, c), B ′ ) with c ≥ 1, let ls(B) = ((r, 1), (r, c − 1), B ′ ). This operation is called left-split. Note that when c = 1, ls is just the identity map.
(3) If B = ((r, 1), B ′ ) with r ≥ 2, let lb(B) = ((1, 1), (r − 1, 1), B ′ ). This operation is called box-split.
These operations are clearly invertible, and their inverses are used to "add back" boxes to a sequence of rectangles.

Definition 2.2.9. Given (r, c) ∈ H, define Pn (r, c) to be the set of semi-standard Young tableaux of (rectangular) shape (c^r ) over the alphabet [n + 1] = {1, 2, . . . , n + 1}. As we have just discussed above, Pn (r, c) is endowed with a type An -crystal structure, with the Kashiwara operators ea , fa for 1 ≤ a ≤ n defined by the signature rule.

Definition 2.2.10. Given a sequence of rectangles B as above, Pn (B) = Pn (r1 , c1 ) ⊗ · · · ⊗ Pn (rK , cK ). As a set, Pn (B) consists of sequences of rectangular semi-standard Young tableaux. It is also endowed with a crystal structure through the tensor product rule.
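The peeling operations of Definition 2.2.8 can be sketched on sequences of (r, c) pairs; a minimal illustration of ours:

```python
def lh(B):
    """left-hat: remove a leading (1, 1) rectangle."""
    assert B[0] == (1, 1)
    return B[1:]

def ls(B):
    """left-split: split off the first column of the leading rectangle."""
    (r, c), rest = B[0], list(B[1:])
    return list(B) if c == 1 else [(r, 1), (r, c - 1)] + rest

def lb(B):
    """box-split: split one box off a leading single-column rectangle."""
    (r, c), rest = B[0], list(B[1:])
    assert c == 1 and r >= 2
    return [(1, 1), (r - 1, 1)] + rest
```

For instance, ls applied to ((2, 2), (1, 2), (3, 1)) yields ((2, 1), (2, 1), (1, 2), (3, 1)), and repeated application of lh, ls, lb reduces any B to the empty sequence, which is what Definition 2.2.27 below exploits.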


Under the tensor product rule, the weight function on Pn (B) is given by wt(p) = wt(b1 ) + wt(b2 ) + · · · + wt(bK ), for p = b1 ⊗ b2 ⊗ · · · ⊗ bK ∈ Pn (B), with the output interpreted as a weight in the weight lattice expressed in terms of the ambient basis. Since lattice paths are built up from Young tableaux, we often also call wt(p) the content of p.

Definition 2.2.11. Let λ be a weight. Define Pn (B, λ) = {p ∈ Pn (B) | wt(p) = λ}.

Example 2.2.12. Let B = ((2, 2), (1, 2), (3, 1)). Then

p = 1 2  ⊗  1 2  ⊗  1
    2 3             2
                    4

is an element of P3 (B) and wt(p) = (3, 4, 1, 1).

We often omit the subscript n, writing P instead of Pn , when n is irrelevant or clear from the discussion. Let λ = (λ1 , λ2 , . . . , λn+1 ) be a partition. Define the set of highest weight paths as \overline{P}n (B, λ) = {p ∈ Pn (B, λ) | ei (p) = 0 for i = 1, 2, . . . , n}. We often refer to a rectangular tableau just as a "rectangle" when there is no ambiguity. For instance, in the above example the leftmost rectangle in p is the tableau

1 2
2 3

For any p ∈ P(B), the row word (resp. column word) of p, denoted by row(p) (resp. col(p)), is the concatenation of the row (resp. column) words of the rectangular tableaux in p from left to right. It can easily be shown that row(p) ≡K col(p).

Example 2.2.13. The row word of p in Example 2.2.12 is

row(p) = row(1 2 / 2 3) · row(1 2) · row(1 / 2 / 4)
       = (2, 3, 1, 2) · (1, 2) · (4, 2, 1)
       = (2, 3, 1, 2, 1, 2, 4, 2, 1)

(rows of each tableau separated by slashes), and similarly the column word is col(p) = (2, 1, 3, 2, 1, 2, 4, 2, 1).

D EFINITION 2.2.14. We say p ∈ P(B) and q ∈ P(B 0 ) are Knuth equivalent, denoted by p ≡K q, if their row words (and hence their column words) are Knuth equivalent.

Example 2.2.15. Let B ′ = ((2, 2), (3, 1), (1, 2)) and

q = 1 2  ⊗  1  ⊗  1 2
    2 3     2
            4

an element of P(B ′ ). Then it can be checked that p ≡K q.

The following maps on P(B) are the counterparts of the maps lh, lb and ls defined on B.

Definition 2.2.16. [DS06, Sections 4.1, 4.2]
(1) Let b = c ⊗ b′ ∈ P((1, 1), B ′ ). Then lh(b) = b′ ∈ P(B ′ ).
(2) Let b = c ⊗ b′ ∈ P((r, s), B ′ ), where c = c1 c2 · · · cs and ci denotes the i-th column of c. Then ls(b) = c1 ⊗ (c2 · · · cs ) ⊗ b′ .
(3) Let b = (b1 / b2 / · · · / br ) ⊗ b′ ∈ P((r, 1), B ′ ), where b1 < · · · < br are the entries of the single-column tableau. Then lb(b) = (br ) ⊗ (b1 / · · · / br−1 ) ⊗ b′ .

The maps lh and ls on P(B) are clearly invertible, while lb is invertible only if the number br is remembered.

2.2.4. Rigged configurations. A general definition of rigged configurations of arbitrary type can be found in [Sch06, Section 3.1]. Here we are only concerned with rigged configurations of type An , and review their definition.

Given a sequence of rectangles B, following the convention of [Sch06] we denote the multiplicity of a given (a, i) ∈ H in B by L_i^{(a)} = #{(r, c) ∈ B | r = a, c = i}.


The (highest-weight) rigged configurations are indexed by a sequence of rectangles B and a dominant weight Λ. The sequence of partitions ν = {ν (a) | a ∈ I} is a (B, Λ)-configuration if

(2.2.1)    ∑_{(a,i)∈H} i m_i^{(a)} α_a = ∑_{(a,i)∈H} i L_i^{(a)} Λ_a − Λ,

where m_i^{(a)} is the number of parts of length i in the partition ν (a) , α_a is the a-th simple root and Λ_a is the

a-th fundamental weight. Denote the set of all (B, Λ)-configurations by \overline{C}(B, Λ). The vacancy number of a configuration is defined as

p_i^{(a)} = ∑_{j≥1} min(i, j) L_j^{(a)} − ∑_{(b,j)∈H} M_{a,b} · min(i, j) m_j^{(b)} .

Here M = (M_{a,b}) is the Cartan matrix (of type An in our case). The (B, Λ)-configuration ν is admissible if p_i^{(a)} ≥ 0 for all (a, i) ∈ H, and the set of admissible (B, Λ)-configurations is denoted by C(B, Λ).

A partition p can be viewed as a linear ordering (p, ⪰) of a finite multiset of positive integers, referred to as parts, where parts of different lengths are ordered by their lengths, and parts of the same length are given an arbitrary ordering. Implicitly, when we draw a Young diagram of p, we are giving such an ordering. Once ⪰ is specified, ≻, ⪯, and ≺ are defined accordingly. A labelling of a partition p is then a map J : (p, ⪰) → Z≥0 satisfying that if i, j ∈ p are of the same length and i ⪰ j, then J(i) ≥ J(j) as integers. A pair (x, J(x)) is referred to as a string; the part x is referred to as the size or length of the string, and J(x) as its label.

Remark 2.2.17. The linear ordering ⪰ on parts of a partition p can be naturally viewed as a linear ordering on the corresponding strings. It follows directly from its definition that ⪰ is a finer ordering than >, which compares the sizes (non-negative integers) of the strings. Another important distinction is that > can be used to compare strings from possibly different partitions. Given two strings s and t, the meaning of the equality s = t is clear from the context in most cases. For example, if s and t are strings from different partitions, then s = t means that they are of the same size; s = t − 1 means that the length of s is one less than that of t. In the case that s and t are from the same partition and ambiguity may arise, we reserve s = t to mean that s and t are the same string, and explicitly write |s| = |t| to mean that s and t are of the same length but possibly distinct strings.

A rigging J of an (admissible) (B, Λ)-configuration ν = (ν (1) , . . . , ν (n) ) is a sequence of maps J = (J (a) ), where each J (a) is a labelling of the partition ν (a) , with the extra requirement that for any part i ∈ ν (a)

0 ≤ J (a) (i) ≤ p_i^{(a)} .
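The vacancy numbers above can be computed directly; a sketch for type An (the partition ν (a) is passed as a list of part lengths, and all names are ours):

```python
def vacancy(nu, B, a, i, n):
    """p_i^(a) = sum_{j>=1} min(i,j) L_j^(a)
                 - sum_{(b,j)} M_{a,b} min(i,j) m_j^(b),
    with M the Cartan matrix of type A_n."""
    # first sum: rectangles (r, c) in B with r = a give the L-multiplicities
    first = sum(min(i, c) for (r, c) in B if r == a)
    # second sum: neighbouring partitions, weighted by the Cartan matrix
    second = 0
    for b in range(1, n + 1):
        M_ab = 2 if b == a else (-1 if abs(b - a) == 1 else 0)
        if M_ab:
            second += M_ab * sum(min(i, j) for j in nu.get(b, []))
    return first - second
```

For instance, for B = ((2, 2), (1, 2), (3, 1)) and ν = ((2), (1), (1)) with n = 3, this gives p_2^{(1)} = −1, p_1^{(2)} = 1, and p_1^{(3)} = 0.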


For each string (i, J (a) (i)), the difference cJ (a) (i) = p_i^{(a)} − J (a) (i) is referred to as the colabel of the string, and cJ = (cJ (a) ), as a sequence of maps defined in this way, is referred to as the corigging of ν. A string is said to be singular if its colabel is 0.

Definition 2.2.18. The pair rc = (ν, J) described above is called a (restricted) rigged configuration. The set of all rigged (B, Λ)-configurations is denoted by RCn (B, Λ). In addition, define RC(B) = ∪_{Λ∈P+} RC(B, Λ), where P + is the set of dominant weights.

Remark 2.2.19. Since J and cJ uniquely determine each other, a rigged configuration rc can be represented either by (ν, J) or by (ν, cJ). In particular, if x is a part of ν (a) then (x, J (a) (x)) and (x, cJ (a) (x)) refer to the same string. We will use these two representations interchangeably depending on which one is more convenient for the ongoing discussion. Nevertheless, in the later part of this chapter, when we say that a string is unchanged/preserved under some construction, we mean that the length and the label of the string are preserved; the colabel may change due to the change of the vacancy number resulting from the construction.

Equation (2.2.1) provides an obvious way of defining a weight function on RC(B). Namely, for rc ∈ RC(B),

(2.2.2)    wt(rc) = ∑_{(a,i)∈H} i L_i^{(a)} Λ_a − ∑_{(a,i)∈H} i m_i^{(a)} α_a .

Remark 2.2.20. From the above definition, it is clear that RC(B) is not sensitive to the ordering of the rectangles in B.

Definition 2.2.21. [Sch06, Section 3.2] Let B be a sequence of rectangles. Define the set of unrestricted rigged configurations \overline{RC}(B) as the closure of RC(B) under the operators fa , ea for a ∈ I, with fa , ea given by:
(1) Define ea (ν, J) by removing a box from a string of length k in (ν, J)(a) , leaving all colabels fixed and increasing the new label by one. Here k is the length of the string with the smallest negative label of smallest length. If no such string exists, ea (ν, J) = 0.
(2) Define fa (ν, J) by adding a box to a string of length k in (ν, J)(a) , leaving all colabels fixed and decreasing the new label by one. Here k is the length of the string with the smallest non-positive label of largest length. If no such string exists, add a new string of length one and label −1. If in the result the new label is greater than the corresponding vacancy number, then fa (ν, J) = 0.

The weight function (2.2.2) defined on RC(B) extends to \overline{RC}(B) without change.


As their names suggest, fa and ea are indeed the Kashiwara operators with respect to the weight function above, and they define a crystal structure on \overline{RC}(B). This was proved in [Sch06]. From the definition of fa , it is clear that the labels of parts in an unrestricted rigged configuration may be negative. It is natural to ask what shapes and labels can appear in an unrestricted rigged configuration. There is an explicit characterization of \overline{RC}(B) which answers this question [DS06, Section 3]. The statement is not directly used in our proof, so we give only a rough outline and refer the interested reader to the original paper for further details: In the definition of RC(B), we required that the vacancy number associated to each part be non-negative. We drop this requirement for \overline{RC}(B). Yet the vacancy numbers in \overline{RC}(B) still serve as upper bounds for the labels, much like the role a vacancy number plays for a restricted rigged configuration. For restricted rigged configurations, the lower bound for the label of a part is uniformly 0. For unrestricted rigged configurations this is not the case; the characterization gives a way to find the lower bound for each part.

Remark 2.2.22. By Remark 2.2.20 and Definition 2.2.21, it is clear that \overline{RC}(B) is not sensitive to the ordering of the rectangles in B.

Example 2.2.23. Here is an example of how we normally visualize a restricted/unrestricted rigged configuration. Let B = ((2, 2), (1, 2), (3, 1)), and let rc be given by

ν (1) : (2) with label −1,    ν (2) : (1) with label 1,    ν (3) : (1) with label −1,

an element of \overline{RC}(B, −Λ1 + 3Λ2 ). In this example, the sequence of partitions ν is ((2), (1), (1)); in the usual picture, the number that follows each part is the label assigned to this part by J. The vacancy numbers associated to these parts are p_2^{(1)} = −1, p_1^{(2)} = 1, and p_1^{(3)} = 0. Note that the labels are all less than or equal to the corresponding vacancy numbers. In the case that they are equal, e.g. for the parts in ν (1) and ν (2) , those parts are called singular, as in the case of restricted rigged configurations. In this example rc ∈ \overline{RC} \ RC.

The following maps on RC(B) are the counterparts of the lh, lb and ls maps defined on B.

Definition 2.2.24. [DS06, Sections 4.1, 4.2]
(1) Let rc = (ν, J) ∈ RC(B). Then lh(rc) ∈ RC(lh(B)) is defined as follows: First set ℓ(0) = 1 and then repeat the following process for a = 1, 2, . . . , n − 1 or until stopped. Find the smallest index i ≥ ℓ(a−1) such that the string (i, J (a) (i)) is singular. If no such i exists, set rk(ν, J) = a and stop. Otherwise set ℓ(a) = i and continue with a + 1. Set all undefined ℓ(a) to ∞. The new rigged configuration (ν̃, J̃) = lh(ν, J) is obtained by removing a box from the selected strings and making the new strings singular again.
(2) Let rc = (ν, J) ∈ RC(B). Then ls(rc) ∈ RC(ls(B)) is the same as (ν, J). Note however that some vacancy numbers change.
(3) Let rc = (ν, J) ∈ RC(B) with B = ((r, 1), B ′ ). Then lb(rc) ∈ RC(lb(B)) is defined by adding singular strings of length 1 to (ν, J)(a) for 1 ≤ a < r. Note that the vacancy numbers remain unchanged under lb.

Remark 2.2.25. Although \overline{RC}(B) does not depend on the ordering of the rectangles in B (see Remark 2.2.22), it is clear that the above maps depend on the ordering in B.

In what follows, it is often easier to work with the inverses of the maps lh, ls and lb. In the following we give explicit descriptions of these inverses; one can easily check that they really are inverses, as their names suggest. See also [KSS02].

Definition 2.2.26.
(1) Let rc ∈ RC(B, λ) for some weight λ, and let r ∈ [n + 1]. The map lh−1 takes rc and r as input, and returns rc′ ∈ RC(lh−1 (B), λ + ϵr ), where ϵr is the r-th ambient basis vector, by the following algorithm: Let d(j) = ∞ for j ≥ r. For k = r − 1, . . . , 1 select the ⪰-maximal singular string in rc(k) of length d(k) (possibly of zero length) such that d(k) ≤ d(k+1) . Then rc′ is obtained from rc by adding a box to each of the selected strings, making them singular again, and leaving all other strings unchanged.
We denote the sequence of strings in rc selected in the above algorithm by Dr = (D(n) , . . . , D(1) ); it is called the lh−1 -sequence of rc with respect to r. For simplicity in future discussions, we append D(0) = (0, 0) to the end of the sequence. In light of Remark 2.2.17, we write D(k) ≤ D(k+1) and say that Dr is a weakly decreasing sequence.
(2) Let rc = (ν, J) ∈ RC(B) where B = ((r, 1), (r, c), B ′ ). Then ls−1 (rc) ∈ RC(ls−1 (B)) is the same as (ν, J). Note that due to the change of the sequence of rectangles, the vacancy numbers for parts in ν (r) of size less than c + 1 all decrease by 1, so the colabels of these parts decrease accordingly. Thus ls−1 is only defined on those rc ∈ RC((r, 1), (r, c), B ′ ) such that the colabels of parts in rc(r) of size less than c + 1 are ≥ 1. All rc that satisfy this condition form Dom(ls−1 ).
(3) Let rc ∈ RC(B) where B = ((1, 1), (r − 1, 1), B ′ ). Then lb−1 (rc) ∈ RC(lb−1 (B)) is defined by removing singular strings of length 1 from rc(a) for 1 ≤ a < r; the labels of all unchanged parts are preserved. Note that the vacancy numbers remain unchanged under lb−1 ; as a result the colabels of all unchanged parts are preserved as well. The collection of all rc ∈ RC((1, 1), (r − 1, 1), B ′ ) such that there is a singular part of size 1 in rc(a) for 1 ≤ a < r forms Dom(lb−1 ).

2.2.5. The bijection between P(B) and RC(B). The map Φ : P(B, λ) → RC(B, λ) is defined recursively by various commutative diagrams. Note that it is possible to go from B = ((r1 , c1 ), (r2 , c2 ), . . . , (rK , cK )) to the empty crystal via successive application of lh, ls and lb. For further details see [DS06, Section 4].

Definition 2.2.27. Define the map Φ : P(B, λ) → RC(B, λ) such that the empty path maps to the empty rigged configuration and such that the following conditions hold:
(1) Suppose B = ((1, 1), B ′ ). Then the following diagram commutes:

    P(B, λ)  --Φ-->  RC(B, λ)
      |lh               |lh
      v                 v
    ∪_{µ∈λ−} P(lh(B), µ)  --Φ-->  ∪_{µ∈λ−} RC(lh(B), µ)

where λ− is the set of all non-negative tuples obtained from λ by decreasing one part.
(2) Suppose B = ((r, c), B ′ ) with c ≥ 2. Then the following diagram commutes:

    P(B, λ)  --Φ-->  RC(B, λ)
      |ls               |ls
      v                 v
    P(ls(B), λ)  --Φ-->  RC(ls(B), λ).

(3) Suppose B = ((r, 1), B ′ ) with r ≥ 2. Then the following diagram commutes:

    P(B, λ)  --Φ-->  RC(B, λ)
      |lb               |lb
      v                 v
    P(lb(B), λ)  --Φ-->  RC(lb(B), λ).

R EMARK 2.2.28. By definition, Φ preserves weight between rigged configurations and lattice paths. When working with rigged configurations, it is most convenient to take the fundamental weights as the basis for the weight lattice. On the other hand, when working with lattice paths we often use the ambient basis. There is a subtlety in the conversion between these two bases arising from the fact that the weight lattice is


not uniquely represented in terms of the ambient basis. In the context of this thesis, for a given B, we do have a "canonical" way to represent the weight lattice in terms of the ambient basis: let λ = (λ1 , . . . , λn+1 ) represent some point in the weight lattice; we say λ is a canonical ambient representation if the sum of the entries of λ is the same as the total area of B:

∑_{i=1}^{n+1} λi = ∑_{(r,c)∈B} r · c.

Note that on the lattice paths side, this canonical ambient representation is the same notion as the content.

2.2.6. Promotion operators. In Chapter 1, we discussed the promotion operator on semi-standard Young tableaux of arbitrary shapes. Now we extend this operation to Pn (B).

Definition 2.2.29. The lifting operator l on Pn (B) lifts p ∈ Pn (B) to l(p) ∈ Pn+1 (B) by adding 1 to each box in each rectangle of p. That is, it applies the lifting operator (as defined in Definition 1.4.8) to each tableau in p.

Definition 2.2.30. Given p ∈ Pn+1 (B), the sliding operator ρ is defined on the ρ-closure of the image of the lifting operator, Img(l):

Dom(ρ) = ∪_{k=0,1,2,...} ρ^k (Img(l)),

as follows. Find in p the rightmost rectangle that contains n + 2, remove one appearance of n + 2, apply jeu-de-taquin on this rectangle to move the empty box to the opposite corner, and fill in 1 in this empty box. If no rectangle contains n + 2, then ρ is the identity map. That is, ρ applies the sliding operator (as defined in Definition 1.4.9) to the rightmost tableau in p containing n + 2.

Definition 2.2.31. For p ∈ Pn (B), pr(p) = ρ^m ◦ l(p), where m is the total number of (n + 1)-entries in p.

Example 2.2.32. Let

p = 1 3  ⊗  1 2  ⊗  1
    4 4             2
                    4

an element of P3 ((2, 2), (1, 2), (3, 1)). There are three 4-entries, so pr(p) = ρ^3 ◦ l(p). After lifting,

l(p) = 2 4  ⊗  2 3  ⊗  2
       5 5             3
                       5

The first sliding acts on the third tableau, the rightmost tableau with a 5-entry. After this sliding we obtain

ρ ◦ l(p) = 2 4  ⊗  2 3  ⊗  1
           5 5             2
                           3

The next sliding acts on the first tableau since it is the only tableau with any 5-entry. After this sliding we obtain


ρ^2 ◦ l(p) = 1 4  ⊗  2 3  ⊗  1
             2 5             2
                             3

The third sliding again acts on the first tableau, after which we obtain

pr(p) = 1 1  ⊗  2 3  ⊗  1
        2 4             2
                        3

The proposed promotion operator \overline{pr} on RCn (B) is defined in [Sch06, Definition 4.8]. To draw the parallel with pr, we will phrase it as a composition of one lifting operator and then several sliding operators defined on RC(B).

Definition 2.2.33. The lifting operator \overline{l} on RCn (B, λ) lifts rc = (ν, J) ∈ RCn (B, λ) to \overline{l}(rc) ∈ RCn+1 (B, λ̂) by setting \overline{l}(rc) = f_1^{λ_1} f_2^{λ_2} · · · f_{n+1}^{λ_{n+1}} (rc), where λ = (λ1 , . . . , λn+1 ) is the canonical ambient representation of the weight of rc (see Remark 2.2.28) and λ̂ = (0, λ1 , . . . , λn+1 ) is the canonical ambient representation of the weight of \overline{l}(rc). Notice that we use the fact that RCn (B) is naturally embedded in RCn+1 (B) by simply treating the (n + 1)-st partition ν (n+1) as ∅.

Definition 2.2.34. Given rc ∈ RCn+1 (B), the sliding operator \overline{ρ} is defined on the \overline{ρ}-closure of the image of the lifting operator, Img(\overline{l}):

Dom(\overline{ρ}) = ∪_{k=0,1,2,...} \overline{ρ}^k (Img(\overline{l})),

as follows. Find the ⪰-minimal singular string in rc(n+1) ; let its length be ℓ(n+1) . Then, for k = n, n − 1, . . . , 1, repeatedly find the ⪰-minimal singular string in rc(k) of length ℓ(k) ≥ ℓ(k+1) . Shorten the selected strings by one and make them singular again. If the ⪰-minimal singular string in rc(n+1) does not exist, then \overline{ρ} is the identity map.

Let I = (I (n+1) , . . . , I (1) , I (0) ), where for k = n + 1, . . . , 1 the entry I (k) is the string chosen from rc(k) in the above algorithm, and I (0) = (∞, 0). We call I the \overline{ρ}-sequence of rc. In light of Remark 2.2.17, we write I (k) ≥ I (k+1) and say that I is a weakly increasing sequence.

Definition 2.2.35. Define \overline{pr}(rc) = \overline{ρ}^m ◦ \overline{l}(rc), where m is the number of boxes in rc(n+1) .

Remark 2.2.36. It is an easy matter to show that \overline{l} = Φ ◦ l ◦ Φ−1 . Indeed, since Φ is an An -crystal isomorphism ([Sch06]), we could have defined l(p) = f_1^{λ_1} f_2^{λ_2} · · · f_{n+1}^{λ_{n+1}} (p), where λ = (λ1 , λ2 , . . . , λn+1 ) is the canonical ambient representation of the weight of p.


There are questions on the well-definedness of Definition 2.2.31 (and Definition 2.2.35): How do we know that we can always apply m sliding operations on l(p) (and l(rc)). The following examples show what could go wrong.

EXAMPLE 2.2.37. Let p = (1 1 / 4 4) ∈ P_3(2, 2), writing tableaux row by row, so that the first row is 1 1 and the second row is 4 4. If we try to remove the last 4 in the second row and push the empty box to the upper left corner, we obtain (· 1 / 1 4), and filling the empty box with a 1 would violate the column-strictness of semistandard Young tableaux.

On the RC side, let

    rc = ∅   0   0   ∈ RC_3(2, 2).

We see that it is not possible to construct the ρ̄-sequence as described in Definition 2.2.34.

However, since we restricted the domain of ρ (resp. ρ̄) to the ρ- (resp. ρ̄-) closure of Img(l) (resp. Img(l̄)), Theorem 1.4.13 (resp. [Sch06, Lemma 4.10]) guarantees the well-definedness of pr (resp. p̄r̄).

REMARK 2.2.38. It is not known at this stage that ρ̄ is fully defined on Φ(Dom(ρ)). In fact, this is a consequence of our proof.

Given a promotion operator pr in type A_n, we can define the affine crystal operators e_0 and f_0 as

    e_0 = pr⁻¹ ∘ e_1 ∘ pr   and   f_0 = pr⁻¹ ∘ f_1 ∘ pr.
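As a concrete toy model (an illustration, not from the text): on the standard crystal {1, . . . , n} of type A_{n−1}, promotion acts as the cycle i ↦ i + 1 (mod n), and conjugating e_1, f_1 by it produces the affine operators.

```python
# Toy model: the standard crystal {1, ..., n} of type A_{n-1}, where
# f_1: 1 -> 2, e_1: 2 -> 1, and promotion is the cycle i -> i+1 (mod n).
# Crystal operators are partial maps, so None means "undefined".
n = 3
pr     = lambda b: b % n + 1
pr_inv = lambda b: n if b == 1 else b - 1
e1     = lambda b: 1 if b == 2 else None
f1     = lambda b: 2 if b == 1 else None

def conjugate(op):
    """Build pr^{-1} . op . pr, propagating undefinedness."""
    def affine(b):
        x = op(pr(b))
        return None if x is None else pr_inv(x)
    return affine

e0, f0 = conjugate(e1), conjugate(f1)
```

With n = 3 one checks f_0(3) = 1 and e_0(1) = 3, the affine arrow of the Kirillov–Reshetikhin crystal B^{1,1}.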

2.2.7. Combinatorial R-matrix and right-split. Let B = ((r_1, c_1), . . . , (r_K, c_K)) be a sequence of rectangles, and let σ ∈ S_K be a permutation of K letters. σ acts on B by reordering the sequence; that is, σ(B) = ((r_{σ⁻¹(1)}, c_{σ⁻¹(1)}), . . . , (r_{σ⁻¹(K)}, c_{σ⁻¹(K)})). The combinatorial R-matrix is the unique affine crystal isomorphism [KKM+92, Shi02]

    R_σ : P(B) → P(σ(B)),

which sends u_1 ⊗ ⋯ ⊗ u_K to u_{σ⁻¹(1)} ⊗ ⋯ ⊗ u_{σ⁻¹(K)}, where u_i ∈ P(r_i, c_i) is the unique tableau of weight (c_i^{r_i}). Combinatorially, R_σ(b_1 ⊗ ⋯ ⊗ b_K) is defined [Shi01] to be the unique sequence of tableaux b′_1 ⊗ ⋯ ⊗ b′_K such that each b′_i is of shape (r_{σ⁻¹(i)}, c_{σ⁻¹(i)}) and b_1 ⊗ ⋯ ⊗ b_K ≡_K b′_1 ⊗ ⋯ ⊗ b′_K.

EXAMPLE 2.2.39. [Sak09b] Let σ = (1, 2) ∈ S_2. Under the combinatorial R-matrix R_σ : P((2, 2), (3, 2)) → P((3, 2), (2, 2)),

    3 4  1 1  1 1  ↦  4 4 ⊗ 4 5 ⊗ 2 4,  2 2  5 6  5 6  3 5

since 5, 6, 4, 5, 3, 4, 2, 4, 1, 1 ≡_K 5, 6, 3, 5, 2, 4, 1, 1.
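The reordering underlying R_σ, namely σ(B)_i = B_{σ⁻¹(i)}, can be sketched as follows (a hypothetical encoding: σ is given in one-line notation as a 0-indexed list).

```python
def act(sigma, B):
    """Compute sigma(B) with sigma(B)[i] = B[sigma^{-1}(i)];
    sigma[i] is the 0-indexed image of position i."""
    K = len(B)
    inv = [0] * K
    for i in range(K):
        inv[sigma[i]] = i            # invert the permutation
    return [B[inv[i]] for i in range(K)]
```

For the transposition σ = (1, 2) of Example 2.2.39, act([1, 0], [(2, 2), (3, 2)]) returns [(3, 2), (2, 2)], matching P((2, 2), (3, 2)) → P((3, 2), (2, 2)).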


There are combinatorial algorithms for computing R when swapping two rectangles, that is, for B = ((r_1, c_1), (r_2, c_2)) [Sak09b, Section 2.3]. These can then be used as simple transpositions to compute R_σ for an arbitrary permutation σ. What is the correspondence of R_σ under Φ on (unrestricted) rigged configurations? It was shown in [KSS02, Lemma 8.5] that for any σ, Φ ∘ R_σ ∘ Φ⁻¹ = id on highest weight elements. Together with the fact that R_σ preserves the A_n-crystal structure, we have the following result.

THEOREM 2.2.40. For any σ, Φ ∘ R_σ ∘ Φ⁻¹ = id on RC(B).

In the remainder of the chapter, we often just write R and omit the subscript σ.

DEFINITION 2.2.41. The maps rs and r̄s̄ are called right-split. rs operates on sequences of rectangles as follows: let B = ((r_1, c_1), . . . , (r_K, c_K)), and suppose c_K > 1 (i.e., the rightmost rectangle is not a single column). Then rs(B) = ((r_1, c_1), . . . , (r_K, c_K − 1), (r_K, 1)); that is, rs splits one column off the rightmost rectangle. r̄s̄ operates on RC(B) as follows: if rc ∈ RC(B), then r̄s̄(rc) ∈ RC(rs(B)) is obtained by increasing the labels by 1 for all parts in rc^{(r_K)} of size less than c_K. Observe that this leaves the colabels of all parts unchanged. Finally, rs operating on P(B) is defined as rs = Φ⁻¹ ∘ r̄s̄ ∘ Φ.
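The action of r̄s̄ on the riggings can be sketched directly from the definition (a toy encoding of the strings of rc^{(r_K)} as (size, label) pairs; vacancy numbers are not modeled here).

```python
def rs_bar(parts, c_K):
    """Sketch of r̄s̄ on the strings of rc^(r_K): increase the label of
    every part of size less than c_K by one.  Since splitting a column off
    the rightmost rectangle raises the vacancy numbers of exactly those
    parts by one, this leaves all colabels unchanged."""
    return [(size, label + 1) if size < c_K else (size, label)
            for (size, label) in parts]
```

For example, with c_K = 2, the part of size 1 gets its label raised while the part of size 3 is untouched.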

2.2.8. The main result. We now state the main result of this chapter.

THEOREM 2.2.42. Let B = ((r_1, c_1), . . . , (r_K, c_K)) be a sequence of rectangles, and let P(B), RC(B), Φ, pr, and p̄r̄ be given as above. Then the following diagram commutes:

                    Φ
(2.2.3)   P(B) −−−−−→ RC(B)
         pr |           | p̄r̄
            ↓           ↓
          P(B) −−−−−→ RC(B).
                    Φ

EXAMPLE 2.2.43. Take p = 2 ⊗ (1 3 / 4 4) ∈ P_3((1), (2, 2)), so that pr(p) = 3 ⊗ (1 1 / 2 4). Under the bijection Φ they map to

    rc = Φ(p) =   0 0   −1 −1 0

and

    Φ(pr(p)) =   −1   −1.


It is not too hard to check that

    l̄(rc) =   0   −1   0   −1 −1   −1

and then, using Definition 2.2.35, that p̄r̄(Φ(p)) = Φ(pr(p)).

Using the fact that the promotion operator on A_n-crystals defines an affine crystal, this also yields the following important corollary.

COROLLARY 2.2.44. The bijection Φ between crystal paths and rigged configurations is an affine crystal isomorphism.

2.3. Outline of the proof of the main result

In this section, we draw the outline of the proof and state all important results needed in the proof, but leave the details of the proofs to later sections. We also illustrate the main ideas with a running example. By Remark 2.2.36, for the proof of Theorem 2.2.42 it suffices to show that the following diagram commutes:

                  Φ
  Dom(ρ) −−−−−→ Φ(Dom(ρ))
   ρ |               | ρ̄
     ↓               ↓
  Dom(ρ) −−−−−→ Φ(Dom(ρ)).
                  Φ

In particular, we need to show that ρ̄ is defined on Φ(Dom(ρ)).

2.3.1. Setup of the running example. As an abbreviation, for any p ∈ Dom(ρ), we use D(p) to denote the following statement: "ρ̄(Φ(p)) is well-defined and the diagram

         Φ
   p −−−−−→ •
  ρ |        | ρ̄
    ↓        ↓
   • −−−−−→ •
         Φ

commutes". For p, q ∈ Dom(ρ) we write D(p) ⇝ D(q) to mean that D(p) reduces to D(q), that is, that D(q) is a sufficient condition for D(p).

We will let n = 3 and use the following p ∈ P_3((2, 2), (3, 2), (2, 2)) as the starting point of the running example:

    p = (2 2 / 4 4) ⊗ (1 2 / 2 3 / 3 4) ⊗ (1 2 / 2 3).

After lifting to P_4 we have

    l(p) = (3 3 / 5 5) ⊗ (2 3 / 3 4 / 4 5) ⊗ (2 3 / 3 4) ∈ Dom(ρ).

Our goal is to show D(l(p)) by a sequence of reductions. Note that the rightmost 5 (which is n + 2 for n = 3) appears in the second rectangle; thus ρ acts on the second rectangle. The first motivation behind our reductions is to get rid of boxes from the left and make ρ act on the leftmost rectangle:

Step 1:
    D( (3 3 / 5 5) ⊗ (2 3 / 3 4 / 4 5) ⊗ (2 3 / 3 4) )  ⇝_ls  D( (3/5) ⊗ (3/5) ⊗ (2 3 / 3 4 / 4 5) ⊗ (2 3 / 3 4) )

This is called an ls-reduction, which is justified by Propositions 2.3.6 and 2.3.7 below.

Step 2:
    D( (3/5) ⊗ (3/5) ⊗ (2 3 / 3 4 / 4 5) ⊗ (2 3 / 3 4) )  ⇝_lb  D( 5 ⊗ 3 ⊗ (3/5) ⊗ (2 3 / 3 4 / 4 5) ⊗ (2 3 / 3 4) )

This is called an lb-reduction, which is justified by Propositions 2.3.4 and 2.3.5 below.

Step 3:
    D( 5 ⊗ 3 ⊗ (3/5) ⊗ (2 3 / 3 4 / 4 5) ⊗ (2 3 / 3 4) )  ⇝_lh  D( 3 ⊗ (3/5) ⊗ (2 3 / 3 4 / 4 5) ⊗ (2 3 / 3 4) )

This is called an lh-reduction, which is justified by Propositions 2.3.2 and 2.3.3 below.

Step 4: Another application of the lh-reduction.
    D( 3 ⊗ (3/5) ⊗ (2 3 / 3 4 / 4 5) ⊗ (2 3 / 3 4) )  ⇝_lh  D( (3/5) ⊗ (2 3 / 3 4 / 4 5) ⊗ (2 3 / 3 4) )

We repeat the above reductions until the rightmost tableau containing 5 becomes the first tableau in the list. After that we want to simplify the list further, if possible, getting rid of boxes from the right by pushing them column-by-column to the left using the R-matrix map R, until we reach a point where we can prove D(•) directly:

Step 8:
    D( (2 3 / 3 4 / 4 5) ⊗ (2 3 / 3 4) )  ⇝_rs  D( (2 3 / 3 4 / 4 5) ⊗ (3/4) ⊗ (2/3) )

This is called an rs-reduction, which is justified by Propositions 2.3.10 and 2.3.11 below.

Step 9:
    D( (2 3 / 3 4 / 4 5) ⊗ (3/4) ⊗ (2/3) )  ⇝_R  D( (3/4) ⊗ (2 3 / 3 4 / 4 5) ⊗ (2/3) )


This is called an R-reduction, which is justified by Proposition 2.3.12. Now, since the rectangle that ρ acts on is no longer the leftmost one, we can go back to Step 1. Repeating the above steps until ρ acts on the leftmost rectangle again, we need one more R-reduction:

Step 13:
    D( (2 3 / 3 4 / 4 5) ⊗ (2/3) )  ⇝_R  D( (3/5) ⊗ (2 2 / 3 3 / 4 4) )

Using these reductions, we will eventually reach one of the following two base cases:
• Base case 1: p is a single rectangle that contains n + 2; or
• Base case 2: p = S ⊗ q, where S is a single column that contains n + 2, and n + 2 does not appear in q.

In certain cases it might be possible to reduce Base case 2 further to Base case 1, but we will prove both base cases in full generality without specifying when this further reduction is possible. In the above example, we have reached the second case.

Step 14: We now have to prove this base case directly:
    D( (3/5) ⊗ (2 2 / 3 3 / 4 4) )
This is justified by Proposition 2.3.15. Base case 1 is proved in Proposition 2.3.14.

2.3.2. The reduction. In this section, we formalize the ideas demonstrated in the previous section.

DEFINITION 2.3.1. Define LM = {p ∈ Dom(ρ) | n + 2, if it appears at all, appears only in the leftmost rectangle of p}.

The next two propositions concern the lh-reduction D(p) ⇝_lh D(lh(p)).
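For illustration only, the choice of reduction at each step can be summarized by a small dispatcher. The encoding is hypothetical and not part of the construction: shapes are (rows, cols) pairs, and target is the index of the rightmost rectangle containing n + 2 (the one ρ acts on), recomputed by the caller after each step.

```python
def next_reduction(shapes, target):
    """Return which reduction (or base case) applies next."""
    if target > 0:                 # not in LM: strip the leftmost rectangle
        rows, cols = shapes[0]
        if cols > 1:
            return "ls"            # split a column off the leftmost rectangle
        if rows > 1:
            return "lb"            # split a box off the leftmost column
        return "lh"                # remove the leftmost single box
    if len(shapes) == 1:
        return "BC1"               # base case 1: a single rectangle
    if shapes[0][1] == 1:
        return "BC2"               # base case 2: leftmost is a single column
    if shapes[-1][1] > 1:
        return "rs"                # split a column off the rightmost rectangle
    return "R"                     # move the rightmost column leftwards
```

On the running example this reproduces the step sequence above: ls at Step 1, lb at Step 2, lh at Steps 3 and 4, rs at Step 8, R at Steps 9 and 13, and base case 2 at Step 14.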

PROPOSITION 2.3.2. Let p ∈ (Dom(ρ) \ LM) ∩ Dom(lh). Then lh(p) ∈ Dom(ρ) and the following diagram commutes:

        lh
   p −−−−−→ •
  ρ |        | ρ
    ↓        ↓
   • −−−−−→ •
        lh

PROOF. By definition, ρ acts on the rightmost rectangle of p that contains the number n + 2. Given p ∈ (Dom(ρ) \ LM) ∩ Dom(lh), the rightmost rectangle that contains n + 2 is not the leftmost one in p, so ρ does not act on the leftmost rectangle of p. But lh, by definition, acts on the leftmost rectangle. It is then clear that lh(p) ∈ Dom(ρ) if p ∈ Dom(ρ), and that the diagram commutes.


PROPOSITION 2.3.3. Let rc ∈ Φ((Dom(ρ) \ LM) ∩ Dom(lh)) and assume that ρ̄(lh(rc)) is well-defined. Then ρ̄(rc) is well-defined and the following diagram commutes:

        lh
  rc −−−−−→ •
  ρ̄ |        | ρ̄
    ↓        ↓
   • −−−−−→ •
        lh

PROOF. See Section 2.4.

To see that the above two propositions suffice for the lh-reduction, we let p and rc be given as above and consider the following diagram, which should be viewed as a "cube": the two squares below are its front and back faces, and each vertex of the front face is joined to the corresponding vertex of the back face by an lh-arrow.

         Φ                            Φ
   p −−−−−→ rc          lh(p) −−−−−→ lh(rc)
  ρ |        | ρ̄          ρ |            | ρ̄
    ↓        ↓              ↓            ↓
   • −−−−−→ •              • −−−−−−−→ •
         Φ                            Φ
    (front face)              (back face)

We observe the following:
(1) The upper and lower faces commute by [DS06].
(2) By Proposition 2.3.2, the left face commutes.
(3) If we assume that the back face commutes (in particular, that ρ̄ on the right edge of the back face is well-defined), then by Proposition 2.3.3 we can conclude that the right face is well-defined and commutes.
Thus, if we assume the commutativity of the back face, the commutativity of the front face follows by induction.

The next two propositions are for the lb-reduction D(p) ⇝_lb D(lb(p)).


PROPOSITION 2.3.4. Let p ∈ (Dom(ρ) \ LM) ∩ Dom(lb). Then lb(p) ∈ Dom(ρ) and the following diagram commutes:

        lb
   p −−−−−→ •
  ρ |        | ρ
    ↓        ↓
   • −−−−−→ •
        lb

PROOF. The proof is similar to the argument for the lh-reduction (see Proposition 2.3.2).

PROPOSITION 2.3.5. Let rc ∈ Φ((Dom(ρ) \ LM) ∩ Dom(lb)) and assume that ρ̄(lb(rc)) is well-defined. Then both ρ̄(rc) and lb(ρ̄(rc)) are well-defined and the following diagram commutes:

        lb
  rc −−−−−→ •
  ρ̄ |        | ρ̄
    ↓        ↓
   • −−−−−→ •
        lb

PROOF. See Section 2.5.

The reason that the above two propositions suffice for the lb-reduction is analogous to the reason for the lh-reduction.

The next two propositions are for the ls-reduction D(p) ⇝_ls D(ls(p)).

PROPOSITION 2.3.6. Let p ∈ (Dom(ρ) \ LM) ∩ Dom(ls). Then ls(p) ∈ Dom(ρ) and the following diagram commutes:

        ls
   p −−−−−→ •
  ρ |        | ρ
    ↓        ↓
   • −−−−−→ •
        ls

PROOF. The proof is similar to the argument for the lh-reduction (see Proposition 2.3.2).

PROPOSITION 2.3.7. Let rc ∈ Φ((Dom(ρ) \ LM) ∩ Dom(ls)) and assume that ρ̄(ls(rc)) is well-defined. Then ρ̄(rc) is well-defined and the following diagram commutes:

        ls
  rc −−−−−→ •
  ρ̄ |        | ρ̄
    ↓        ↓
   • −−−−−→ •
        ls

PROOF. See Section 2.6.

The reason that the above two propositions suffice for the ls-reduction is analogous to the reason for the lh-reduction.


The above lh/lb/ls-reductions make it clear that we have D(p) for any p ∈ (Dom(ρ) \ LM ), thus reducing the problem to proving D(p) for p ∈ LM . For p ∈ LM , D(p) is proved by another round of reductions, until p is in one of the two base cases (not mutually exclusive):

DEFINITION 2.3.8 (Base case 1). BC1 = {p ∈ LM | p is a single rectangle}.

DEFINITION 2.3.9 (Base case 2). BC2 = {p ∈ LM | the leftmost rectangle of p is a single column}.

The next two propositions deal with the rs-reduction D(p) ⇝_rs D(rs(p)).

PROPOSITION 2.3.10. Let p ∈ LM \ BC1. Then rs(p) ∈ LM \ BC1 and the following diagram commutes:

        rs
   p −−−−−→ •
  ρ |        | ρ
    ↓        ↓
   • −−−−−→ •
        rs

PROOF. See Section 2.7.

PROPOSITION 2.3.11. Let rc ∈ Φ(LM \ BC1) and assume that ρ̄(r̄s̄(rc)) is well-defined. Then ρ̄(rc) is well-defined and the following diagram commutes:

        r̄s̄
  rc −−−−−→ •
  ρ̄ |        | ρ̄
    ↓        ↓
   • −−−−−→ •
        r̄s̄

PROOF. By the definition of ρ̄ and by the fact that r̄s̄ preserves the colabels of all parts, it is clear that if the ρ̄-sequence of r̄s̄(rc) exists, then the ρ̄-sequence of rc must exist and coincide with that of r̄s̄(rc). The commutativity follows.
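The key point of the proof, that preserving colabels preserves which strings are singular and hence preserves the ρ̄-sequence, can be checked on a toy encoding (a model for illustration: the strings of rc^{(r_K)} as (size, label, vacancy) triples, with a string singular when its label equals its vacancy number).

```python
def rs_bar_full(parts, c_K):
    """Sketch of r̄s̄ together with the shape change: parts of size < c_K
    get both their vacancy number and their label raised by one, so the
    colabel (vacancy - label) of every part is unchanged."""
    return [(sz, lb + 1, vac + 1) if sz < c_K else (sz, lb, vac)
            for (sz, lb, vac) in parts]

def singular_sizes(parts):
    """Sizes of the singular strings (label == vacancy number)."""
    return sorted(sz for (sz, lb, vac) in parts if lb == vac)
```

Since the set of singular strings is untouched, the selection made by ρ̄ in Definition 2.2.34 is identical before and after r̄s̄.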


To see that the above two propositions suffice for the rs-reduction, we let p and rc be given as above and consider the following cube, with rs- (resp. r̄s̄-) arrows joining each vertex of the front face to the corresponding vertex of the back face:

         Φ                            Φ
   p −−−−−→ rc          rs(p) −−−−−→ r̄s̄(rc)
  ρ |        | ρ̄          ρ |            | ρ̄
    ↓        ↓              ↓            ↓
   • −−−−−→ •              • −−−−−−−→ •
         Φ                            Φ
    (front face)              (back face)

We observe the following:
(1) The upper and lower faces commute by the definition of rs and r̄s̄ as stated in Definition 2.2.41.
(2) The left face commutes by Proposition 2.3.10.
(3) If we assume that the back face commutes (in particular, that ρ̄ on the right edge of the back face is well-defined), then by Proposition 2.3.11 we can conclude that the right face is well-defined and commutes.
Thus, if we assume the commutativity of the back face, the commutativity of the front face follows.

The next proposition is for the R-reduction D(p) ⇝_R D(R(p)).

PROPOSITION 2.3.12. Let p ∈ LM ⊂ P(B), where B = ((r_1, c_1), (r_2, 1)). Then R(p) ∈ Dom(ρ) and the following diagram commutes:

        R
   p −−−−−→ •
  ρ |        | ρ
    ↓        ↓
   • −−−−−→ •
        R

PROOF. It was shown in [SW99, Lemma 5.5, Eq. (5.8)] that R and ρ commute on standardized highest weight paths (the maps are called σ_i and C_p in [SW99], respectively). For a given p, we can always find a q ∈ P(B′) for some B′ such that p ⊗ q is highest weight and q does not contain any n + 2 (basically, q needs to be chosen such that φ_i(q) ≥ ε_i(p) for all i = 1, 2, . . . , n + 1). Since R respects Knuth relations, it is well-behaved with respect to standardization. Similarly, ρ is well-behaved with respect to standardization because jeu-de-taquin is. Since by assumption p ∈ Dom(ρ), this implies the statement of the proposition.

REMARK 2.3.13. The above proof was pointed out to me by Prof. Anne Schilling. This argument proves an apparently stronger statement than Proposition 2.3.12: namely, it works when the second rectangle of B has arbitrary dimensions (not necessarily a column).


Incidentally, in my original work I was not aware of this sleek argument, and had to assume that B is of the form B = ((r_1, c_1), (r_2, 1)); I developed some tedious combinatorial machinery to prove this proposition. Nevertheless, this assumption on B does not weaken the proposition by much, and the argument below shows that it is just strong enough for the R-reduction to work. By definition, ρ acts on the rightmost rectangle that contains n + 2. If p ∈ LM, then the rightmost rectangle that contains n + 2 is also the leftmost (thus the only) rectangle that contains n + 2. This implies that if a permutation σ does not involve swapping the first two rectangles (that is, s_1 does not appear in the reduced word of σ), then ρ clearly commutes with R_σ. Without loss of generality, we can further assume that the second rectangle is a single column: if it is not, we can use right-split to split a single column off the rightmost rectangle (which commutes with ρ by Proposition 2.3.11), and then use R to move this single column into the second position (which commutes with ρ by the above argument). Hence it suffices to consider the case B = ((r_1, c_1), (r_2, 1)). The alternative proof of Proposition 2.3.12 is documented in the appendix of Chapter 4.

To see that the above proposition suffices for the R-reduction, we consider the following diagram:

         Φ                            Φ
   p −−−−−→ rc           R(p) −−−−−→ rc
  ρ |        | ρ̄           ρ |          | ρ̄
    ↓        ↓               ↓          ↓
   • −−−−−→ •               • −−−−−→ •
         Φ                            Φ
    (front face)              (back face)

Here each left vertex of the front face is joined to the corresponding vertex of the back face by an R-arrow, and each right vertex by the identity (by Theorem 2.2.40, R corresponds to the identity map on RC(B)). We observe the following:
(1) The upper and lower faces commute by Theorem 2.2.40.
(2) The left face commutes by Proposition 2.3.12.
(3) The right face commutes trivially.
Thus, if we assume the commutativity of the back face, the commutativity of the front face follows.

Finally, we state the propositions dealing with the base cases:

PROPOSITION 2.3.14 (Base case 1). Let p ∈ BC1. Then D(p).

PROOF. See Section 2.8.


PROPOSITION 2.3.15 (Base case 2). Let p ∈ BC2. Then D(p).

PROOF. See Section 2.9.

2.4. Proof of Proposition 2.3.3

The statement of Proposition 2.3.3 is clearly equivalent to the following statement: let rc ∈ RC be such that ρ̄ is well-defined on rc and lh⁻¹(rc, r) ∈ Φ(Dom(ρ) \ LM) for some r ∈ [n + 2]. Then ρ̄ is well-defined on lh⁻¹(rc, r) and the following diagram commutes:

        lh⁻¹
  rc −−−−−→ •
  ρ̄ |        | ρ̄
    ↓        ↓
   • −−−−−→ •
        lh⁻¹

Indeed, this is the statement we are going to prove.

Let us first consider the case that ρ̄ is the identity map on rc. The map ρ̄ being the identity means that rc^{(n+1)} does not contain any singular string. If r < n + 2, then lh⁻¹(rc, r)^{(n+1)} still does not contain any singular string, since no strings or vacancy numbers in the (n + 1)-st rigged partition change. Thus ρ̄ is the identity map on lh⁻¹(rc, r), and clearly lh⁻¹ and ρ̄ commute.

If r = n + 2, then by Definition 2.2.26 the lh⁻¹-sequence of rc with respect to n + 1 is a sequence of all 0s. Thus for each k, lh⁻¹(rc)^{(k)} has a singular string of size 1. Therefore the ρ̄-sequence of lh⁻¹(rc) exists and is a sequence of all 1s. Combining this with the fact that lh⁻¹ and ρ̄ preserve all unchanged strings, we conclude that lh⁻¹ and ρ̄ commute.

From now on, we shall assume that ρ̄ is not the identity map on rc.

Let D_r be the lh⁻¹-sequence given in Definition 2.2.26, and let I be the ρ̄-sequence given in Definition 2.2.34. We note that by definition I^{(n+1)} ≤ D_r^{(n+1)} and I^{(0)} ≥ D_r^{(0)}. Thus, one of the following two statements must hold:
(1) there is an index N ∈ {1, . . . , n + 1} such that D_r^{(N)} ≻ I^{(N)} and D_r^{(N−1)} ≺ I^{(N−1)};
(2) there is an index N ∈ {1, . . . , n + 1} such that D_r^{(N)} = I^{(N)}.

REMARK 2.4.1. In either case above, we say that D_r and I cross at the position N.

Let rc = (u, U), lh⁻¹(rc) = (v, V), ρ̄(rc) = (w, W), lh⁻¹ ∘ ρ̄(rc) = (x, X), and ρ̄ ∘ lh⁻¹(rc) = (y, Y). We denote by ρ̄(D) the lh⁻¹-sequence of ρ̄(rc), and by lh⁻¹(I) the ρ̄-sequence of lh⁻¹(rc). The reader may want to review Remark 2.2.17 for the notation used in the following proof.

2.4. PROOF OF PROPOSITION 2.3.3

52

2.4.1. Case 1. In this case we must have I^{(N−1)} > D^{(N)} and D^{(N−1)} < I^{(N)}. This then implies that D^{(N−1)} < I^{(N−1)} − 1, from which we can conclude (considering the changes in vacancy numbers) that
• ρ̄(D)^{(k)} = D^{(k)} for k ≠ N, and ρ̄(D)^{(N)} = I^{(N)} − 1;
• lh⁻¹(I)^{(k)} = I^{(k)} for k ≠ N, and lh⁻¹(I)^{(N)} = D^{(N)} + 1.

To show (x, X) = (y, Y), we first argue that x^{(k)} = y^{(k)}; then we show that on corresponding parts of x^{(k)} and y^{(k)}, X^{(k)} and Y^{(k)} either agree on their labels or agree on their colabels. All of the above, together with the fact that (x, X) and (y, Y) have the same sequence of rectangles, implies that X = Y and thus (x, X) = (y, Y). We divide the argument into the following three cases:
• k > N;
• k = N;
• 1 ≤ k < N.

For k > N, since ρ̄(D)^{(k)} = D^{(k)} and lh⁻¹(I)^{(k)} = I^{(k)}, we know x^{(k)} = y^{(k)}; both differ from u^{(k)} by "moving a box from I^{(k)} to D^{(k)}". Furthermore, by the definitions of ρ̄ and lh⁻¹, in both x^{(k)} and y^{(k)} the labels of all unchanged strings are preserved. In the two changed strings, one gets a box removed and one gets a box added, and both are kept singular.

For k = N, lh⁻¹(I)^{(N)} = D^{(N)} + 1 implies that from u^{(N)} to v^{(N)} to y^{(N)} one box is added to D^{(N)} and then removed, so y^{(N)} = u^{(N)}. Similarly, ρ̄(D)^{(N)} = I^{(N)} − 1 implies that from u^{(N)} to w^{(N)} to x^{(N)} one box is removed from I^{(N)} and then added back, so x^{(N)} = u^{(N)}. Hence x^{(N)} = y^{(N)}. From U^{(N)} to V^{(N)} to Y^{(N)} the labels of all strings other than D^{(N)} are unchanged, and for the part D^{(N)}, both lh⁻¹ and ρ̄ preserve its singularity.

For 1 ≤ k < N, an argument similar to the case k > N shows the desired result.

2.4.2. Case 2. Let N = max{k | I^{(k)} = D^{(k)}} and let M = max{k | I^{(k)} > D^{(k)}}. Then clearly M < N; for k > N, D^{(k)} ≻ I^{(k)}; for M < k ≤ N, |I^{(k)}| = |D^{(k)}| (though it may not be the case that I^{(k)} = D^{(k)}); and for k ≤ M, I^{(k)} ≻ D^{(k)}. In particular, for k = M we have D^{(M)} < D^{(M+1)} and I^{(M+1)} < I^{(M)}, so D^{(k)} < I^{(k)} − 1 for k ≤ M. The above discussion implies that
• ρ̄(D)^{(k)} = D^{(k)} and lh⁻¹(I)^{(k)} = I^{(k)} for k > N;
• ρ̄(D)^{(k)} = I^{(k)} − 1 and lh⁻¹(I)^{(k)} = D^{(k)} + 1 for M < k ≤ N;
• ρ̄(D)^{(k)} = D^{(k)} and lh⁻¹(I)^{(k)} = I^{(k)} for k ≤ M.

Following the same strategy as in Case 1, we divide the argument into the following three cases:
• k > N;
• M < k ≤ N;
• 1 ≤ k ≤ M.

For k > N, the argument is the same as the k > N discussion of Case 1.

For M < k ≤ N, lh⁻¹(I)^{(k)} = D^{(k)} + 1 implies that from u^{(k)} to v^{(k)} to y^{(k)} one box is added to D^{(k)} and then removed, so y^{(k)} = u^{(k)}. Similarly, ρ̄(D)^{(k)} = I^{(k)} − 1 implies that from u^{(k)} to w^{(k)} to x^{(k)} one box is removed from I^{(k)} and then added back, so x^{(k)} = u^{(k)}. Hence x^{(k)} = y^{(k)}. From U^{(k)} to V^{(k)} to Y^{(k)} the labels of all parts other than D^{(k)} are unchanged, and for the part D^{(k)}, both lh⁻¹ and ρ̄ preserve its singularity. Moreover, the vacancy number of parts of size |D^{(k)}| is unchanged from U^{(k)} to Y^{(k)}, due to the cancellation of the effects of removing D^{(k−1)} (or of changing the sequence of rectangles in the case N = 1) and adding I^{(k+1)}. Thus the label of D^{(k)} is unchanged from U^{(k)} to Y^{(k)}. Hence U^{(k)} = Y^{(k)}. An analogous argument shows that U^{(k)} = X^{(k)}. Thus X^{(k)} = Y^{(k)}.

For k ≤ M, the argument is similar to the case k > N.

2.4.3. Some remarks. In both cases above we could have defined M = max{k < N | I^{(k)} > I^{(k+1)}}. This agrees with the M defined in Case 2, and with it the proof of Case 2 could conceptually unify the two cases into one argument, though doing so would probably not make the proof more readable. This definition of M does, however, simplify statements like the following.

LEMMA 2.4.2. For any k ∈ [n + 1], lh⁻¹(I)^{(k)} ≥ I^{(k)}. The strict inequality lh⁻¹(I)^{(k)} > I^{(k)} is obtained precisely on (M, N]. In particular, lh⁻¹(I)^{(k)} = I^{(k)} if D^{(k)} > I^{(k)}.

The lemma follows from the proofs in Cases 1 and 2, and will be referred to in later sections.

REMARK 2.4.3. The same idea used in the proofs of this section can be used to prove the following converse of Proposition 2.3.3, which will be used in the proof of Proposition 2.3.5 in Section 2.5.

PROPOSITION 2.4.4. Let rc ∈ RC be such that ρ̄(rc) is well-defined. Then ρ̄ is well-defined on lh(rc) and the following diagram commutes:

        lh
  rc −−−−−→ •
  ρ̄ |        | ρ̄
    ↓        ↓
   • −−−−−→ •
        lh

2.5. Proof of Proposition 2.3.5

In this section we give the proof of the following equivalent statement of Proposition 2.3.5:

Let rc ∈ Dom(lb⁻¹) be such that ρ̄ is well-defined on rc and lb⁻¹(rc) ∈ Φ(Dom(ρ) \ LM). Then ρ̄(rc) ∈ Dom(lb⁻¹), ρ̄ is well-defined on lb⁻¹(rc), and the following diagram commutes:

        lb⁻¹
  rc −−−−−→ •
  ρ̄ |        | ρ̄
    ↓        ↓
   • −−−−−→ •
        lb⁻¹

Without loss of generality we shall assume that ρ̄ is not the identity map.

Let us first argue that ρ̄ is well-defined on lb⁻¹(rc). Given that rc ∈ Dom(lb⁻¹), we know that rc corresponds to the sequence of rectangles ((1, 1), (r − 1, 1), . . .). Moreover, rc = lh⁻¹(lh(rc), t) for some t ≥ r. (Indeed, from the condition that lb⁻¹(rc) ∈ Φ(Dom(ρ) \ LM) we can deduce the stronger conclusion t > r, but we only need the weaker statement in our proof; so we actually prove a stronger result.) Let (D^{(n+1)}, . . . , D^{(1)}) be the lh⁻¹-sequence of lh(rc) with respect to t; we know that D^{(k)} = 0 for k < r. By the definition of lh⁻¹, rc^{(k)} is obtained from lh(rc)^{(k)} by adding a singular string of size 1 for each k < r.

Let (I^{(n+1)}, . . . , I^{(1)}) be the ρ̄-sequence of rc. We observe that I^{(k)} > 1 for each k < r. To see this, let j be the least index such that D^{(j)} > 0 (the existence of such a j follows from Lemma 2.5.2 below). Now rc ∈ Dom(lb⁻¹) implies that j ≥ r. Since all strings in rc^{(j)} of length ≤ D^{(j)} are non-singular, it follows that the smallest singular string of rc^{(j)} has length at least D^{(j)} + 1 > 1. Since (I^{(n+1)}, . . . , I^{(1)}) increases, we obtain I^{(k)} > 1 for each k < r.

Now lb⁻¹ acts on rc by removing a singular part of size 1 from rc^{(k)} for each k < r, leaving the label and colabel of the remaining parts unchanged. By the result of the previous paragraph, the ρ̄-sequence of lb⁻¹(rc) is then exactly the ρ̄-sequence of rc. This shows that lb⁻¹(rc) ∈ Dom(ρ̄).

We extract from the above arguments the following fact, which will be referred to in later sections:

LEMMA 2.5.1. For any k ∈ [n + 1], lb⁻¹(I)^{(k)} = I^{(k)}.

Let us next argue that ρ̄(rc) ∈ Dom(lb⁻¹). We note that rc and ρ̄(rc) correspond to the same sequence of rectangles ((1, 1), (r − 1, 1), . . .), and rc ∈ Dom(lb⁻¹) means that for each k < r, rc^{(k)} has a singular part of size 1. The arguments above show that ρ̄ does not touch these parts; thus for each k < r, ρ̄(rc)^{(k)} has a singular part of size 1. Therefore ρ̄(rc) ∈ Dom(lb⁻¹).

The above arguments also clearly show that ρ̄(lb⁻¹(rc)) = lb⁻¹(ρ̄(rc)).

Now the only thing left is the following lemma:

LEMMA 2.5.2. Let rc, lh(rc) and D_t = (D^{(n+1)}, . . . , D^{(1)}) be given as above. Then there exists a least index j such that D^{(j)} > 0.

PROOF. By the definition of lh⁻¹ (see Definition 2.2.26), the above statement is clearly true for t < n + 2, since D^{(k)} = ∞ for k ≥ t.

In the case t = n + 2, our assumption that lb⁻¹(rc) ∉ Φ(LM) implies that rc ∉ Φ(LM). This in turn implies lh(rc)^{(n+1)} ≠ ∅. By Proposition 2.4.4, ρ̄ is well-defined on lh(rc); in particular, this means that lh(rc)^{(n+1)} contains a singular string. Thus D^{(n+1)} > 0.

2.6. Proof of Proposition 2.3.7

In this section we give the proof of the following equivalent statement of Proposition 2.3.7: let rc ∈ Dom(ls⁻¹) be such that ρ̄ is well-defined on rc and ls⁻¹(rc) ∈ Φ(Dom(ρ) \ LM). Then ρ̄(rc) ∈ Dom(ls⁻¹), ρ̄ is well-defined on ls⁻¹(rc), and the following diagram commutes:

        ls⁻¹
  rc −−−−−→ •
  ρ̄ |        | ρ̄
    ↓        ↓
   • −−−−−→ •
        ls⁻¹


Let p = Φ⁻¹(rc). Then by the condition that rc ∈ Dom(ls⁻¹), we have

    p = S ⊗ T ⊗ q = (b_1 / ⋮ / b_r) ⊗ (T_{1,s} ⋯ T_{1,1} / ⋮ / T_{r,s} ⋯ T_{r,1}) ⊗ q,

where S is a single-column tableau of height r, and T is a tableau of shape (r, s) with s ≥ 0. By the condition ls⁻¹(rc) ∉ Φ(LM), n + 2 appears in q and 1 appears in neither S nor T; in particular, b_1 > 1 and

b_r > r.

We shall induct on s, the number of columns of T. To facilitate the induction, let us write rc_s = Φ(p) with p as above; that is, the subscript s of rc_s indicates the width of the tableau T. Let (I_s^{(n+1)}, . . . , I_s^{(1)}) be the ρ̄-sequence of rc_s.

The hypothesis we want to carry across inductive steps is the logical disjunction of the following two sufficient conditions for the commutativity of the above diagram:

HYPOTHESIS 2.6.1 (Simplified version). For each s ∈ Z_{≥0}, rc_s satisfies one of the following two conditions:
A. I_s^{(r+1)} > s;
B. I_s^{(r+1)} ≤ s, but the colabel of any part of rc_s^{(r)} with size in [I_s^{(r+1)}, s] is ≥ 2.


To see that the first condition is sufficient, we recall that ls⁻¹(rc_s) decreases the colabels of all parts in rc_s^{(r)} of size ≤ s by 1. Now if I_s^{(r+1)} > s, then ls⁻¹ will not affect the choice of I_s^{(r)}. In this case it is also easy to see that ρ̄ will not affect the action of ls⁻¹.

To see that the second condition is sufficient, we notice that if I_s^{(r+1)} ≤ s and the colabel of any part of rc_s^{(r)} with size in [I_s^{(r+1)}, s] is ≥ 2, then when ls⁻¹ decreases the colabels of parts in rc_s^{(r)} of size ≤ s by 1, the parts with size in the interval [I_s^{(r+1)}, s] will still be non-singular. Thus ls⁻¹ will not affect the choice of I_s^{(r)}. In this case, the colabel condition also prevents ρ̄ from affecting ls⁻¹: ρ̄ decreases the colabels of parts in rc_s^{(r)} of size in the interval [I_s^{(r+1)}, I_s^{(r)}) by 1, thus the parts in ρ̄(rc_s)^{(r)} of size ≤ s will still be

non-singular.

Conditions A and B can be viewed as two possible states of rc_s. We will show below, in a more precise setting, that rc_s either stays in its current state or transits from A to B (but never comes back). Intuitively, we can imagine s as time (s starts from 0): rc_s starts in state A. As time goes by, we merge columns of height r on the left of p, corresponding on the RC side to ls⁻¹(rc_s); the condition that after each merge we get a valid tableau (weakly increasing along rows) corresponds on the RC side to the fact that lh⁻¹ and lb⁻¹ affect rc_s^{(k)} for k > r "less and less". As time passes, I_s^{(r+1)} will possibly stabilize, s will possibly catch up with and pass I_s^{(r+1)}, and from that time on rc_s will be stuck in state B forever.

To make all the above statements precise, we first need to set up some notation. For j ≤ r, let rc_{j,s} be the image under Φ of the following path element:

    (b_1 / ⋮ / b_j) ⊗ (T_{1,s} ⋯ T_{1,1} / ⋮ / T_{r,s} ⋯ T_{r,1}) ⊗ q.

Define rc_{0,s} = Φ(T ⊗ q) = ls⁻¹(rc_{s−1}). In particular, rc_{0,0} = Φ(q). With this notation, our previously defined rc_s is denoted by rc_{r,s}.

−1

(1)

Let DTj,s = (DTj,s , . . . , DTj,s ) be the lh

-sequence with respect to Tj,s . To avoid awkward sub-

indices, we denote Dj,s = DTj,s . (n+1)

Let Ij,s = (Ij,s

(1)

, . . . , Ij,s ) be the ρ-sequence of rcj,s . Our previously defined Is , in this notation, is (n+1)

denoted by Ir,c . We denote by I0,s = (I0,s

(1)

, . . . , I0,s ) the ρ-sequence of rc0,s = ls −1

Equipped with this notation, we can give a precise description on how lh

−1

(rcr,c−1 ).

-sequences interact with

ρ-sequences. (k)

For a fixed s, for each k ∈ [n + 1], let us consider the sequence (Ij,s )j∈[r] . By Lemma 2.4.2, we have:

2.6. PROOF OF PROPOSITION 2.3.7

57

(k)

L EMMA 2.6.2. (Ij,s )j∈[r] is weakly increasing with respect to j. More precisely, for any s and any k, (k)

(k)

we have Ij,s ≤ Ij+1,s for j = 0, . . . , r − 1. (k)

(k)

Set Aj,s = {k | Ij,s > I0,s }. Then by Lemmas 2.4.2 and 2.B.1 we have L EMMA 2.6.3. Aj,s = [1, aj,s ] for some 1 ≤ aj,s ≤ n + 1, and a1,s ≤ · · · ≤ ar,c . Moreover, (k)

(k)

Aj,s \ Aj−1,s = {k | Ij,s > Ij−1,s }. An equivalent statement of the above lemma is the following: (k)

(k)

(k)

L EMMA 2.6.4. (Ij,s )j∈[r] satisfies Ij,s = Ij+1,s everywhere except possibly one position Jk,s where (k)

(k)

the inequality is strict IJk,s ,s < IJk,s +1,s . Jk,s is weakly increasing with respect to k and Aj,s \ Aj−1,s = {k | Jk,s = j}. (r+1)

If we specialize Lemma 2.6.2 to the case k = r + 1, then Ij,s −1

j. Now we note that ls (r+1) Ij,s+1

does not affect

(k) rcs

for k ≥ r + 1. Thus

is weakly increasing with respect to

(r+1) I0,s+1

(r+1)

= Ir,c

. Now by the fact that

is weakly increasing with respect to j, we conclude the following lemma: (r+1)

L EMMA 2.6.5. Is

is weakly increasing with respect to s. (k)

R EMARK 2.6.6. The above argument shows that for any fixed k ≥ r + 1, Ij,s is weakly increasing with respect to (s, j) in dictionary order. Indeed this is also the case for any k < r + 1, but this fact is a consequence of Proposition 2.3.7 which we are going to prove. Now we can finally make precise the statement we want to prove. H YPOTHESIS 2.6.7 (Precise version). For each s, rcs satisfies: (r)

IH1. Is

> s + 1.

(r+1)

IH2. Is

(r+1)

= Is−1

(r)

(r+1)

when Ds−1 ≥ Is−1 .

IH3. rcs is in one of the following states: (r+1)

> s, and Ds < Is

(r+1)

> s, and Ds ≥ Is

(r+1)

≤ s, and the colabel of any part of rcs with size in [Is

SA1 Is SA2 Is SB Is

(r)

(r+1)

.

(r)

(r+1)

. (r)

(r+1)

, s] is ≥ 2, and the colabel

of any part with size s + 1 is ≥ 1. We remark here that [IH3] is the core of the hypothesis, where SA1 and SA2 combined correspond to the condition A in Hypothesis 2.6.1, and SB corresponds to the condition SB in Hypothesis 2.6.1. As we have −1

argued before, being in one of these states implies the commutativity of ρ and ls

.

By Lemma 2.6.5, I^{(r+1)}_{s+1} ≥ I^{(r+1)}_s. [IH2] then states that I^{(r+1)}_s stabilizes (stops increasing) once D^{(r)}_s ≥ I^{(r+1)}_s.

2.6.1. Base case: We have s = 0 as our base case.

Let us first verify [IH1], that is, I^{(r)}_0 = I^{(r)}_{r,0} > 1. Suppose not; then it must be that I^{(r)}_{r,0} = 1, and thus I^{(r)}_{j,0} = 1 for all j = 0, . . . , r by the fact that I^{(r)}_{j,0} is weakly increasing with respect to j (see Lemma 2.6.2). But this is impossible, since b_r > r and thus D_{r,0} and I_{r−1,0} cross at position ≥ r (by Lemma 2.6.3, this implies I^{(r)}_{r,0} > I^{(r)}_{0,0}, a contradiction). [IH2] is an empty statement for the base case. Finally, it is clear that rc_0 is either in SA1 or SA2, which verifies [IH3].

2.6.2. Induction: Assume that rc_s satisfies Hypothesis 2.6.7. We consider rc_{s+1}.

Let us first verify [IH1], that is, I^{(r)}_{s+1} > s + 2. By [IH1] on rc_s, I^{(r)}_s > s + 1, so we just need to show I^{(r)}_{s+1} > I^{(r)}_s. Suppose not, so that I^{(r)}_{s+1} ≤ I^{(r)}_s. By [IH3] on rc_s, ls^{-1} does not interfere with the ρ-sequence, so I^{(r)}_{0,s+1} = I^{(r)}_s. Now by Lemma 2.6.2, I^{(r)}_{s+1} ≥ I^{(r)}_{0,s+1}, thus I^{(r)}_{s+1} ≥ I^{(r)}_s. Therefore it must be the case that I^{(r)}_{s+1} = I^{(r)}_s. This then implies that I^{(r)}_{j,s+1} = I^{(r)}_s for all j ∈ [r]. However, this is impossible, since S_{r,1} = b_r > r and thus D_{r,s+1} and I_{r−1,s+1} cross at position ≥ r (by Lemma 2.6.3, this implies I^{(r)}_{s+1} > I^{(r)}_s, a contradiction).

The verification of [IH2] and [IH3] depends on the state of rc_s:

2.6.2.1. rc_s is in state SA1. By assumption, D^{(r)}_s < I^{(r+1)}_s, so [IH2] is vacuously true in this case.

We verify [IH3] by showing that rc_{s+1} is either in SA1 or SA2. For this we just need to argue that I^{(r+1)}_{s+1} > s + 1. Knowing that I^{(r+1)}_s > s (since rc_s is in SA1) and I^{(r+1)}_{s+1} ≥ I^{(r+1)}_s (Lemma 2.6.5), it is clear that we just need to show the impossibility of the case I^{(r+1)}_{s+1} = s + 1. Suppose otherwise. Then it must be the case that I^{(r+1)}_s = s + 1, and that D^{(r)}_s = s (it is clear that D^{(r)}_s ≤ s by the fact that rc_s ∈ Dom(ls^{-1}), and it is also clear that D^{(r)}_s ≥ s since rc_s has a singular string of size s + 1 = I^{(r+1)}_s). We have argued before that I^{(r+1)}_{0,s+1} = I^{(r+1)}_s; using Lemmas 2.B.1, 2.B.2 and 2.6.3, one inductively derives a contradiction from this, so I^{(r+1)}_{s+1} > s + 1 and rc_{s+1} is in SA1 or SA2.

2.6.2.2. rc_s is in state SA2. By assumption, D^{(r)}_s ≥ I^{(r+1)}_s, so [IH2] requires that I^{(r+1)}_{s+1} = I^{(r+1)}_s. Since I^{(r+1)}_{0,s+1} = I^{(r+1)}_s, it suffices to show that I^{(r+1)}_{j,s+1} = I^{(r+1)}_{0,s+1} for all j ∈ [r], which can be shown inductively using Lemmas 2.B.1, 2.B.2 and 2.6.3.

We verify [IH3] by showing that rc_{s+1} is either in state SA2 or SB. Our first observation is that by [IH2] on rc_{s+1}, which we have just verified above, I^{(r+1)}_{s+1} = I^{(r+1)}_s ≤ D^{(r)}_s < D^{(r)}_{s+1} (the last < follows from Lemma 2.B.2). Thus rc_{s+1} cannot be in state SA1.

Next, rc_s being in state SA2 means that D^{(r)}_s ≥ I^{(r+1)}_s > s. There are two possibilities: either I^{(r+1)}_s > s + 1 or I^{(r+1)}_s = s + 1. If I^{(r+1)}_s > s + 1, then by [IH2] on rc_{s+1} we know I^{(r+1)}_{s+1} = I^{(r+1)}_s > s + 1. Hence D^{(r)}_{s+1} ≥ I^{(r+1)}_{s+1} > s + 1, so rc_{s+1} is in SA2. If I^{(r+1)}_s = s + 1, then we want to show that rc_{s+1} is in state SB. For this we need to show that in rc^{(r)}_{s+1} the colabel of any part of size s + 1 is ≥ 2 and the colabel of any part of size s + 2 is ≥ 1. To see this, we note that in rc^{(r)}_s the colabel of any part of size s + 1 is ≥ 1 (otherwise I^{(r)}_s = s + 1, contradicting [IH1] on rc_s). Since ls^{-1} does not change the colabel of any part in rc^{(r)}_s of size s + 1, in ls^{-1}(rc_s)^{(r)} the colabel of any part of size s + 1 is ≥ 1. Moreover, by Lemma 2.B.1, D^{(r)}_{s+1} > s + 1 implies D^{(r)}_{k,s+1} > s + 1 for all k ∈ [r], thus the colabel of any part in rc^{(r)}_s of size s + 1 is weakly increasing along this sequence of applications of lb^{-1} ◦ lh^{-1}. Furthermore, D^{(r)}_{r,s+1} > s + 1 implies that in rc^{(r)}_{s+1} the colabel of any part of size ≤ s + 2 increases by 1. Thus any part of size s + 1 must be of colabel ≥ 2, and any part of size s + 2 must be of colabel ≥ 1.

2.6.2.3. rc_s is in state SB. This implies I^{(r+1)}_s ≤ s, which further implies D^{(r)}_s ≥ I^{(r+1)}_s, so [IH2] on rc_{s+1} is not vacuous in this case. The verification is exactly the same as for the case that rc_s is in state SA2.

Let us verify [IH3] by showing that rc_{s+1} must be in SB. Since I^{(r+1)}_{s+1} = I^{(r+1)}_s ≤ s < s + 1, rc_{s+1} can be in neither state SA1 nor SA2. Moreover, we note that ls^{-1} decreases the colabels of all parts in rc^{(r)}_s of size ≤ s, so the colabels of all parts in ls^{-1}(rc_s)^{(r)} of size ≤ s + 1 are ≥ 1. Then by Lemma 2.B.1, D^{(r)}_{s+1} > s + 1 implies that D^{(r)}_{k,s+1} > s + 1 for all k ∈ [r]. Hence the colabel of any part in rc^{(r)}_s of size ≤ s + 1 is weakly increasing along this sequence of applications of lb^{-1} ◦ lh^{-1}. Furthermore, D^{(r)}_{r,s+1} > s + 1 implies that in rc^{(r)}_{s+1} the colabel of any part of size ≤ s + 2 increases by 1; thus any part of size s + 1 must be of colabel ≥ 2, and any part of size s + 2 must be of colabel ≥ 1. This finishes the induction.

2.7. Proof of Proposition 2.3.10

Let p ∈ LM \ BC1. Then Proposition 2.3.10 states that rs(p) ∈ LM \ BC1 and ρ ◦ rs(p) = rs ◦ ρ(p).

The main idea of the proof is to observe that both ρ and rs are "local" operations whose actions are "far away" from each other, so neither operation interferes with the other. The notions of "local" and "far away" are made precise below.

By assumption, p = T_0 ⊗ · · · ⊗ T_K = T_0 ⊗ M = N ⊗ T_K with n + 2 appearing only in T_0. By definition, ρ acts on T_0 and the changes made to T_0 do not depend on M. In other words, ρ(T_0 ⊗ M) = ρ(T_0) ⊗ M. In this sense, ρ acts "locally" on the left. We claim that rs changes only T_K, and the changes made to T_K do not depend on N. In other words, rs(N ⊗ T_K) = N ⊗ rs(T_K). Thus rs acts "locally" on the right. Given that p ∉ BC1, T_0 and T_K are distinct. Hence ρ and rs act on distinct tensor factors and are therefore "far away" from each other.


Therefore, we are left to show rs(N ⊗ TK ) = N ⊗ rs(TK ). This is clearly implied by the following general statement:

LEMMA 2.7.1. Let p = T_1 ⊗ · · · ⊗ T_K ∈ P where K > 1. Then rs commutes with lh (or lb or ls) for p ∈ Dom(lh) (or p ∈ Dom(lb) or p ∈ Dom(ls), respectively):

             lh/lb/ls
        p --------------> •
        |                 |
      rs|                 |rs
        v                 v
        • --------------> •
             lh/lb/ls

To prove Lemma 2.7.1, we recall that rs is defined to be Φ^{-1} ◦ rs ◦ Φ, so it is natural to consider the following counterpart statement on the RC side:

LEMMA 2.7.2. Let rc ∈ RC(B), where B = ((r_1, s_1), . . . , (r_K, s_K)) with K > 1. Then rs commutes with lh (or lb or ls) for rc ∈ Dom(lh) (or rc ∈ Dom(lb) or rc ∈ Dom(ls), respectively):

              lh/lb/ls
        rc --------------> •
        |                  |
      rs|                  |rs
        v                  v
        • ---------------> •
              lh/lb/ls

PROOF. Recall the action of rs on rc: if rc ∈ RC(B), then rs(rc) ∈ RC(rs(B)) is obtained by increasing the labels by 1 for all parts in rc^{(r_K)} of size less than s_K. In particular, rs leaves the colabels of all parts unchanged. By definition, the action of lh depends only on the colabels of parts, and lh preserves the labels of all unchanged parts; thus lh commutes with rs. lb adds a singular string of length 1 to each rc^{(a)} for 1 ≤ a < r_1 and preserves both labels and colabels of all other parts, while rs preserves colabels and thus the singularity of all parts. So lb and rs commute. The action of ls splits one column from the left of the rectangle (r_1, s_1) and increases the colabel of any part of size < s_1 in rc^{(r_1)} by 1. The action of rs splits one column from the right of the rectangle (r_K, s_K) and increases the label of any part of size < s_K in rc^{(r_K)} by 1. Clearly they commute.


To see that Lemma 2.7.2 implies Lemma 2.7.1, we now consider the following cube-shaped diagram, whose left face is the square of Lemma 2.7.1 (on the P side), whose right face is the square of Lemma 2.7.2 (on the RC side), and whose horizontal arrows are Φ:

               Φ
         p ----------> rc
         |\            |\
         | \ lh/lb/ls  | \ lh/lb/ls
      rs |  v       rs |  v
         |   • ---Φ----|-> •
         v   |         v   |
         • --|---Φ---> •   | rs
          \  | rs       \  |
   lh/lb/ls\ v   lh/lb/ls\ v
            • -----Φ----> •

We observe the following about this diagram:
(1) The upper and lower faces commute by the definition of Φ.
(2) The front and back faces commute by the definition of rs.
(3) The right face commutes by Lemma 2.7.2.
These observations imply that the left face commutes, which is the statement of Lemma 2.7.1. This proves the main statement.

2.8. Proof of Proposition 2.3.14

Let p ∈ BC1. Then the following diagram commutes:

             Φ
        p --------> •
        |           |
       ρ|           |ρ
        v           v
        • --------> •
             Φ

Before we start the proof, we first give an alternative description of Φ for the special case of a single tensor factor.

2.8.1. An algorithm for computing Φ(p) for p ∈ P(r, c).

DEFINITION 2.8.1. Given p ∈ P_n(r, c), define Ψ(p) = (u_1, . . . , u_n), where each u_k is the part of p formed by all the boxes that (1) are in the j-th row for some j ≤ k, and (2) contain a number > k.


EXAMPLE 2.8.2. Let p ∈ P_7(4, 4) be

    1 2 3 4
    2 4 4 5
    3 6 6 6
    5 7 7 8

Then Ψ(p) = (u_1, . . . , u_7), where (drawing the rows of each u_k right-justified, reflecting their positions in p):

    u_1 = 2 3 4

    u_2 =   3 4
          4 4 5

    u_3 =     4
          4 4 5
          6 6 6

    u_4 =       5
            6 6 6
          5 7 7 8

    u_5 = 6 6 6
          7 7 8

    u_6 = 7 7 8

    u_7 = 8

REMARK 2.8.3. Each u_k in the above list has the shape of a Ferrers diagram of some partition rotated by 180°. This follows from the fact that p is a semistandard Young tableau. Thus, we can set Ψ̃(p) to be the list of (rotated) partitions associated to Ψ(p). It is often useful to think of each u_k literally as the "area" of p occupied by the boxes of u_k. For example, the box 2 in u_1 is the (1, 2)-box of p; the box 3 in u_1 is the same as the box containing 3 in u_2, since they are both the (1, 3)-box of p. As another example, the row 6 6 6 in u_3, u_4, and u_5 literally denotes the (3, 2)-, (3, 3)- and (3, 4)-boxes of p in all three cases. (In the print version, this point is stressed by repeating p seven times with the boxes of each u_k highlighted in red.)

There is an alternative description of the map Ψ, which recursively constructs Ψ(p)^{(k)} from Ψ(p)^{(k+1)}. Our proof exploits this construction.

DEFINITION 2.8.4. (1) Ψ(p)^{(n)} is the area of the boxes of p that contain n + 1. This area is clearly a horizontal strip. (2) Ψ(p)^{(k)} is obtained from Ψ(p)^{(k+1)} by adding all boxes of p that contain k + 1 (which form a horizontal strip) and then removing the (k + 1)-st row of p if k + 1 ≤ r.

The equivalence of the above two descriptions is clear. We have the following result relating Ψ̃ and Φ:

PROPOSITION 2.8.5. For p ∈ P(r, c), we have Ψ̃(p) = Φ(p), where all strings of Ψ̃(p) are singular.

PROOF. This is proved in Appendix 2.A.

COROLLARY 2.8.6. For p ∈ P(r, c), the rigged configuration Φ(p) has only singular strings.

From now on, we identify Ψ̃(p) with a rigged configuration as described in Proposition 2.8.5.
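As a sanity check on Definition 2.8.1, the maps Ψ and Ψ̃ can be computed mechanically. The following is a hedged Python sketch (the names psi and tilde_psi are ours, not the thesis'); on the tableau of Example 2.8.2 it reproduces the partition shapes read off there.

```python
# Hypothetical sketch of Definition 2.8.1: for each k, u_k collects the
# boxes of p lying in rows <= k whose entry exceeds k.

def psi(p, n):
    """For k = 1..n, return the set of boxes (i, j) of p (1-indexed)
    with i <= k and p[i][j] > k."""
    out = []
    for k in range(1, n + 1):
        u_k = {(i + 1, j + 1)
               for i, row in enumerate(p) if i + 1 <= k
               for j, entry in enumerate(row) if entry > k}
        out.append(u_k)
    return out

def tilde_psi(p, n):
    """Partition (sorted row lengths) underlying each u_k, as in Remark 2.8.3."""
    shapes = []
    for u_k in psi(p, n):
        rows = {}
        for (i, j) in u_k:
            rows[i] = rows.get(i, 0) + 1
        shapes.append(tuple(sorted(rows.values(), reverse=True)))
    return shapes

p = [[1, 2, 3, 4],
     [2, 4, 4, 5],
     [3, 6, 6, 6],
     [5, 7, 7, 8]]
print(tilde_psi(p, 7))
# -> [(3,), (3, 2), (3, 3, 1), (4, 3, 1), (3, 3), (3,), (1,)]
```

Note that the code only records the underlying partitions; the singular riggings of Proposition 2.8.5 are not modeled here.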


2.8.2. Jeu-de-taquin on Ψ(p). By Remark 2.8.3, each element u_k in Ψ(p) can be viewed as a collection of boxes in p. Thus jeu-de-taquin on p is directly reflected on Ψ(p).

DEFINITION 2.8.7. For p ∈ P_{n+1}(r, c), define ρ(Ψ(p)) = Ψ(ρ(p)).

EXAMPLE 2.8.8. Continuing Example 2.8.2, we have

    ρ(p) = 1 1 3 4
           2 2 4 5
           3 4 6 6
           5 6 7 7

and ρ(Ψ(p)) = Ψ(ρ(p)) is read off from ρ(p) as in Example 2.8.2. (In the print version, ρ(p) is repeated seven times with the boxes of each u_k of Ψ(ρ(p)) highlighted in red.)
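The jeu-de-taquin computation of ρ(p) in this example can be sketched in code. This is a hedged illustration only, covering the special case where the maximal entry n + 2 occupies a single box (as it does here); promote is our name for the sketch, not the thesis' notation.

```python
# Hedged sketch (not the thesis' definition): one inverse jeu-de-taquin
# slide implementing rho on a rectangular SSYT whose maximal entry n + 2
# occupies a single box. The hole vacated by n + 2 slides to the (1,1)
# corner and is filled with 1, matching the worked example.

def promote(p):
    t = [row[:] for row in p]
    m = max(max(row) for row in t)
    # locate the unique box containing the maximum (assumed unique here)
    i, j = next((a, b) for a, row in enumerate(t)
                for b, x in enumerate(row) if x == m)
    while i > 0 or j > 0:
        up = t[i - 1][j] if i > 0 else None
        left = t[i][j - 1] if j > 0 else None
        # slide the larger neighbor into the hole; ties go to the box above,
        # which keeps columns strictly increasing
        if up is None or (left is not None and left > up):
            t[i][j], j = left, j - 1
        else:
            t[i][j], i = up, i - 1
    t[0][0] = 1
    return t

p = [[1, 2, 3, 4],
     [2, 4, 4, 5],
     [3, 6, 6, 6],
     [5, 7, 7, 8]]
print(promote(p))
# -> [[1, 1, 3, 4], [2, 2, 4, 5], [3, 4, 6, 6], [5, 6, 7, 7]]
```

Applied to a single-column tableau with bottom entry n + 2, the same slide returns the column (1, s_1, . . . , s_{t−1}), which is the form used later in Section 2.9.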

Let SR be the sliding route of p ∈ BC1 under ρ (see Definition 2.2.30). We have:

LEMMA 2.8.9. SR intersects Ψ(p)^{(k)} for each k. If (i, j) is the last box in SR (recall that SR is a sequence of boxes of p) that is also in Ψ(p)^{(k)}, then (i, j) is an upper-left corner of Ψ(p)^{(k)}.

PROOF. We shall induct on the recursive definition of Ψ (see Definition 2.8.4) with the following hypothesis:

HYPOTHESIS 2.8.10. For each k ∈ [n + 1]:
IH1. SR intersects Ψ(p)^{(k)}. If (i_k, j_k) is the last box in SR that is also in Ψ(p)^{(k)}, then (i_k, j_k) is an upper-left corner of Ψ(p)^{(k)}.
IH2. i_k ≤ k.

In the base case we consider Ψ(p)^{(n+1)}. Given that p ∈ BC1, there is a horizontal strip of (n + 2)'s in p, which forms Ψ(p)^{(n+1)}. By the definition of the sliding route, an initial segment of SR overlaps with this horizontal strip, thus i_{n+1} = r. It is clear that IH1 and IH2 hold.

Now assume that IH1 and IH2 hold for Ψ(p)^{(k+1)}. From Ψ(p)^{(k+1)} to Ψ(p)^{(k)} a horizontal strip of boxes containing k + 1 is added. We distinguish two cases: (1) there is a box containing k + 1 above the box (i_{k+1}, j_{k+1}) of p; (2) there is no box containing k + 1 above the box (i_{k+1}, j_{k+1}) of p.

In the first case, let (i_k, j_k) = (i_{k+1} − 1, j_{k+1}). Then IH1 and IH2 hold for Ψ(p)^{(k)}. In the second case, let (i_k, j_k) = (i_{k+1}, j_{k+1} − l), where l is the length of the horizontal strip of (k + 1)'s in row i_k (possibly l = 0). To check IH2, we assume to the contrary that i_k = i_{k+1} > k. We also have p_{i_k−1, j_k} ≤ k. Thus p_{i_k−1, j_k} ≤ i_k − 1. This is impossible for p ∈ BC1 ⊂ Dom(ρ). IH2 then implies that the i_k-th row will not be removed while constructing Ψ(p)^{(k)} from Ψ(p)^{(k+1)}, hence IH1 holds.


By the above lemma we see that ρ removes a box from some part of size b_k in Ψ(p)^{(k)} for each k. We call (b_{n+1}, . . . , b_1) the ρ-sequence of Ψ(p) (or equivalently of Φ(p)).

EXAMPLE 2.8.11. Continuing Example 2.8.8, we have (b_7, . . . , b_1) = (1, 3, 3, 3, 3, 3, 3).

2.8.3. Proof of Proposition 2.3.14. As a result of the above two sections, we just need to show that ρ(Ψ(p)) = ρ(Φ(p)) for p ∈ BC1. Let ~a = (a_{n+1}, . . . , a_1) be the ρ-sequence of Φ(p) (see Definition 2.2.34). Let ~b = (b_{n+1}, . . . , b_1) be the ρ-sequence of Ψ(p) defined above. It suffices to show that ~a = ~b.

We shall induct on the recursive definition of Ψ. As the base case, a_{n+1} = b_{n+1} since Ψ(p)^{(n+1)} has only one (singular) part. Now assume a_{k+1} = b_{k+1}. By the definition of the ρ-sequence, a_k is the size of the shortest part (all parts are singular) in Ψ(p)^{(k)} that is no shorter than a_{k+1}. From Ψ(p)^{(k+1)} to Ψ(p)^{(k)} a horizontal strip of boxes containing k + 1 is added. There are three possibilities:
(1) There is no added box containing k + 1 adjacent (above or to the left) to the box removed from Ψ(p)^{(k+1)} by ρ.
(2) There is a box containing k + 1 added above the box removed from Ψ(p)^{(k+1)} by ρ.
(3) There is a box containing k + 1 added to the left of the box removed from Ψ(p)^{(k+1)} by ρ, but no box containing k + 1 added above it.

In the first case, the removed box is already an inside corner of Ψ(p)^{(k)}, thus by the definition of ρ no further Schützenberger slide can be done, hence b_k = b_{k+1}. It is also clear that the part in Ψ(p)^{(k)} that contains the removed box is the shortest part which is no shorter than a_{k+1}, thus a_k = a_{k+1} = b_{k+1} = b_k. In the second case, we need one more Schützenberger slide up to get to the inside corner when constructing ρ(Ψ(p))^{(k)}, thus a_k = a_{k+1} = b_{k+1} = b_k. In the third case, we possibly need several more Schützenberger slides to the left to get to the inside corner when constructing ρ(Ψ(p))^{(k)}. Then b_k is the size of the part that contains the removed box in Ψ(p)^{(k)}, but this part is clearly also the shortest part which is no shorter than a_{k+1}. Therefore, a_k = b_k > a_{k+1} = b_{k+1}.

2.9. Proof of Proposition 2.3.15

In this section we prove that for p ∈ BC2, we have ρ ◦ Φ(p) = Φ ◦ ρ(p). Given p ∈ BC2, we have

    p = S ⊗ q,

where S is the single-column tableau with entries s_1, . . . , s_t from top to bottom, s_1 > 1, s_t = n + 2, and q ∈ P_{n+1}(B) for some B; moreover, n + 2 does not appear anywhere in q.

For k ≤ t, denote by S_k the single-column tableau formed by the first k boxes of S, that is, the column with entries s_1, . . . , s_k. Then (ρ(S))_{k+1} is the single-column tableau with entries 1, s_1, . . . , s_k.

Let us first lay out the road map of the proof. We will describe a combinatorial construction α : RC(((k, 1), B), λ) → RC(((k + 1, 1), B), λ + 1), and inductively argue that Φ(S_k ⊗ q) ↦ Φ((ρ(S))_{k+1} ⊗ q) under α for all k < t. In particular, Φ(S_{t−1} ⊗ q) ↦ Φ(ρ(S) ⊗ q) under α. Then we argue the commutativity of the following diagram:

                          lb^{-1} ◦ lh^{-1}( · , s_t)
    (2.9.1)  Φ(S_{t−1} ⊗ q) ----------------------> Φ(S ⊗ q)
                  |                                     |
                 α|                                     |ρ
                  v                                     v
             Φ(ρ(S) ⊗ q) ========================      •

This then implies Φ(ρ(S ⊗ q)) = Φ(ρ(S) ⊗ q) = ρ(Φ(S ⊗ q)), which finishes the proof.

Recall that by Remark 2.2.19, we can describe an rc ∈ RC either as (v, J) in terms of its labels or as (v, cJ) in terms of its colabels. In the following definition, it is more convenient for us to use colabels.

DEFINITION 2.9.1. Let k < t. For i ∈ [k], let D_i := D_{s_i} be the lh^{-1}-sequence of Φ(S_{i−1} ⊗ q) with respect to s_i (see Definition 2.2.26). By construction, D_i^{(i)} ≤ D_{i−1}^{(i−1)}. For notational convenience, we take D_0^{(0)} = ∞.

Let E_i be the sequence of singular strings in Φ(S_i ⊗ q) that were obtained by the action of lh^{-1}, which adds a box to each string in D_i.

Let (v, cJ) = Φ(S_k ⊗ q). Then (ṽ, c̃J) = α(v, cJ) is defined by the following construction (recall Remark 2.2.17: when we compare parts below, we are comparing their lengths):
(1) for j > k + 1, (ṽ, c̃J)^{(j)} = (v, cJ)^{(j)};
(2) for j = k + 1, ṽ^{(k+1)} = v^{(k+1)}, and c̃J^{(k+1)} is obtained from cJ^{(k+1)} by
    • for strings s ≤ D_k^{(k)}, c̃J^{(k+1)}(s) = cJ^{(k+1)}(s) + 1;
    • for strings s > D_k^{(k)}, c̃J^{(k+1)}(s) = cJ^{(k+1)}(s);
(3) for j = k, ṽ^{(k)} removes one box from the part E_k^{(k)} in v^{(k)}, and c̃J^{(k)} is such that
    • the shortened part has colabel 0;
    • for strings s ≤ D_k^{(k)}, c̃J^{(k)}(s) = cJ^{(k)}(s) − 1;
    • for strings D_k^{(k)} < s ≤ D_{k−1}^{(k−1)}, c̃J^{(k)}(s) = cJ^{(k)}(s) + 1;
    • for strings s > D_{k−1}^{(k−1)}, c̃J^{(k)}(s) = cJ^{(k)}(s);
(4) for 1 ≤ j < k, ṽ^{(j)} removes one box from the part E_j^{(j)} in v^{(j)}, and c̃J^{(j)} is such that
    • the shortened part has colabel 0;
    • for strings s ≤ D_{j+1}^{(j+1)}, c̃J^{(j)}(s) = cJ^{(j)}(s);
    • for strings D_{j+1}^{(j+1)} < s ≤ D_j^{(j)}, c̃J^{(j)}(s) = cJ^{(j)}(s) − 1;
    • for strings D_j^{(j)} < s ≤ D_{j−1}^{(j−1)}, c̃J^{(j)}(s) = cJ^{(j)}(s) + 1;
    • for strings s > D_{j−1}^{(j−1)}, c̃J^{(j)}(s) = cJ^{(j)}(s).

LEMMA 2.9.2. For each k < t, Φ(S_k ⊗ q) ↦ Φ((ρ(S))_{k+1} ⊗ q) under α. Moreover, E_k^{(k)} is the smallest singular string in (Φ(S_k ⊗ q))^{(k)} of size greater than 0; and for 1 ≤ j ≤ k − 1, E_j^{(j)} is the smallest singular string in (Φ(S_j ⊗ q))^{(j)} of size greater than E_{j+1}^{(j+1)}.

PROOF. We proceed by induction.

In the base case k = 1, let (v_1, cJ_1) = Φ(S_1 ⊗ q). By assumption, s_1 > 1, thus D_1^{(1)} < ∞. The definition of α then says that (ṽ_1, c̃J_1) = α(v_1, cJ_1) is such that:
(1) (ṽ_1, c̃J_1)^{(j)} = (v_1, cJ_1)^{(j)} for j > 2;
(2) ṽ_1^{(2)} = v_1^{(2)}, with c̃J_1^{(2)}(s) = cJ_1^{(2)}(s) + 1 for strings s ≤ D_1^{(1)} and c̃J_1^{(2)}(s) = cJ_1^{(2)}(s) for strings s > D_1^{(1)};
(3) ṽ_1^{(1)} removes one box from the part E_1^{(1)} in v_1^{(1)}, and c̃J_1^{(1)} is such that the shortened part has colabel 0, c̃J_1^{(1)}(s) = cJ_1^{(1)}(s) − 1 for strings s ≤ D_1^{(1)}, and c̃J_1^{(1)}(s) = cJ_1^{(1)}(s) + 1 for strings s > D_1^{(1)}.

A direct computation shows that α(v_1, cJ_1) = Φ((ρ(S))_2 ⊗ q). Moreover, the fact that c̃J_1^{(1)}(s) = cJ_1^{(1)}(s) − 1 for strings s ≤ D_1^{(1)} implies that E_1^{(1)} is the smallest singular string of size greater than 0 in (Φ(S_1 ⊗ q))^{(1)}. This proves the base case.

Now let (v_k, cJ_k) = Φ(S_k ⊗ q) and (ṽ_k, c̃J_k) = Φ((ρ(S))_{k+1} ⊗ q), and suppose that (v_k, cJ_k) ↦ (ṽ_k, c̃J_k) under α. Consider the difference between (v_{k+1}, cJ_{k+1}) = lb^{-1} ◦ lh^{-1}(Φ(S_k ⊗ q), s_{k+1}) and (ṽ_{k+1}, c̃J_{k+1}) = lb^{-1} ◦ lh^{-1}(Φ((ρ(S))_{k+1} ⊗ q), s_{k+1}). We will argue that the difference is exactly the effect of α.

Let D_{k+1} and D̃_{k+1} be the lh^{-1}-sequences of (v_k, cJ_k) and (ṽ_k, c̃J_k) with respect to s_{k+1} > k + 1, respectively. E_{k+1} and Ẽ_{k+1} are defined as before, corresponding to D_{k+1} and D̃_{k+1}, respectively.

Consider the difference between (v_{k+1}, cJ_{k+1})^{(a)} and (ṽ_{k+1}, c̃J_{k+1})^{(a)} for a > k + 2. By induction, (v_k, cJ_k)^{(j)} and (ṽ_k, c̃J_k)^{(j)} are the same for j > k + 1, thus D_{k+1}^{(j)} = D̃_{k+1}^{(j)} for j = n + 1 down to k + 2. This then implies that (v_{k+1}, cJ_{k+1})^{(j)} = (ṽ_{k+1}, c̃J_{k+1})^{(j)} for j > k + 2.

Consider the difference between (v_{k+1}, cJ_{k+1})^{(k+2)} and (ṽ_{k+1}, c̃J_{k+1})^{(k+2)}. The arguments from the previous paragraph also show that v_{k+1}^{(k+2)} = ṽ_{k+1}^{(k+2)}. By induction, v_k^{(k+1)} = ṽ_k^{(k+1)}, and for strings s ≤ D_k^{(k)} we have c̃J_k^{(k+1)}(s) = cJ_k^{(k+1)}(s) + 1. Then by the definition of lh^{-1}, we have D_{k+1}^{(k+1)} ≥ D̃_{k+1}^{(k+1)}. By the fact that (ρ(S))_{k+1} is of height k + 1, we have D̃_{k+1}^{(k+1)} = 0. Thus, by the definition of lb^{-1}, v_{k+1}^{(k+1)} has one box added on the string D_{k+1}^{(k+1)} of v_k^{(k+1)}, while ṽ_{k+1}^{(k+1)} = ṽ_k^{(k+1)}. Therefore, ṽ_{k+1}^{(k+1)} can be obtained from v_{k+1}^{(k+1)} by removing one box from the part E_{k+1}^{(k+1)}. All of the above, together with the fact that the sequence of rectangles ((k + 2, 1), B) of (ṽ_{k+1}, c̃J_{k+1}) contributes 1 more to the vacancy number of strings in (ṽ_{k+1}, c̃J_{k+1})^{(k+2)} than the sequence of rectangles ((k + 1, 1), B) of (v_{k+1}, cJ_{k+1}) contributes to the vacancy number of strings in (v_{k+1}, cJ_{k+1})^{(k+2)}, implies that
    • for strings s ≤ D_{k+1}^{(k+1)}, c̃J_{k+1}^{(k+2)}(s) = cJ_{k+1}^{(k+2)}(s) + 1;
    • for strings s > D_{k+1}^{(k+1)}, c̃J_{k+1}^{(k+2)}(s) = cJ_{k+1}^{(k+2)}(s).

Consider the difference between (v_{k+1}, cJ_{k+1})^{(k+1)} and (ṽ_{k+1}, c̃J_{k+1})^{(k+1)}. We have just shown in the previous paragraph that ṽ_{k+1}^{(k+1)} is obtained from v_{k+1}^{(k+1)} by removing one box from the part E_{k+1}^{(k+1)}. By the definition of lb^{-1}, the part with this box removed has colabel 0. By induction, c̃J_k^{(k+1)}(s) = cJ_k^{(k+1)}(s) + 1 for parts s ≤ D_k^{(k)}. Also by induction, ṽ_k^{(k)} can be obtained from v_k^{(k)} by removing one box from the part E_k^{(k)}. All of the above, together with the fact that the sequence of rectangles ((k + 2, 1), B) of (ṽ_{k+1}, c̃J_{k+1}) contributes 1 less to the vacancy number of strings in (ṽ_{k+1}, c̃J_{k+1})^{(k+1)} than the sequence of rectangles ((k + 1, 1), B) of (v_{k+1}, cJ_{k+1}) contributes to the vacancy number of strings in (v_{k+1}, cJ_{k+1})^{(k+1)}, implies that
    • for strings s ≤ D_{k+1}^{(k+1)}, c̃J_{k+1}^{(k+1)}(s) = cJ_{k+1}^{(k+1)}(s) − 1;
    • for strings D_{k+1}^{(k+1)} < s ≤ D_k^{(k)}, c̃J_{k+1}^{(k+1)}(s) = cJ_{k+1}^{(k+1)}(s) + 1;
    • for strings s > D_k^{(k)}, c̃J_{k+1}^{(k+1)}(s) = cJ_{k+1}^{(k+1)}(s).
By the first bullet point above, E_{k+1}^{(k+1)} is the smallest singular string in (v_{k+1}, cJ_{k+1})^{(k+1)} of size ≥ 0.

Consider the difference between (v_{k+1}, cJ_{k+1})^{(k)} and (ṽ_{k+1}, c̃J_{k+1})^{(k)}. By induction, ṽ_k^{(k)} is obtained from v_k^{(k)} by removing one box from the part E_k^{(k)}. By the definitions of lh^{-1} and lb^{-1}, ṽ_{k+1}^{(k)} = ṽ_k^{(k)} and v_{k+1}^{(k)} = v_k^{(k)}. Moreover, we have ṽ_{k+1}^{(j)} = ṽ_k^{(j)} and v_{k+1}^{(j)} = v_k^{(j)} for all j ≤ k. Therefore, the difference between cJ_{k+1}^{(k)} and c̃J_{k+1}^{(k)} is determined by the difference between cJ_k^{(k)} and c̃J_k^{(k)}, the change of the sequence of rectangles by lb^{-1} ◦ lh^{-1}, and the location of the added box E_{k+1}^{(k+1)} \ D_{k+1}^{(k+1)}. We note that the change of the sequence of rectangles decreases the contribution to cJ_{k+1}^{(k)}(s) by 1 for any string s, while the added box E_{k+1}^{(k+1)} \ D_{k+1}^{(k+1)} increases the contribution to cJ_{k+1}^{(k)}(s) for all s of size > D_{k+1}^{(k+1)}. Thus
    • for strings s ≤ D_{k+1}^{(k+1)}, c̃J_{k+1}^{(k)}(s) = cJ_{k+1}^{(k)}(s);
    • for strings D_{k+1}^{(k+1)} < s ≤ D_k^{(k)}, c̃J_{k+1}^{(k)}(s) = cJ_{k+1}^{(k)}(s) − 1;
    • for strings D_k^{(k)} < s ≤ D_{k−1}^{(k−1)}, c̃J_{k+1}^{(k)}(s) = cJ_{k+1}^{(k)}(s) + 1;
    • for strings s > D_{k−1}^{(k−1)}, c̃J_{k+1}^{(k)}(s) = cJ_{k+1}^{(k)}(s).
By the second bullet point above, E_k^{(k)} is the smallest singular string in (v_{k+1}, cJ_{k+1})^{(k)} that is greater than or equal to E_{k+1}^{(k+1)}.

Consider the difference between (v_{k+1}, cJ_{k+1})^{(j)} and (ṽ_{k+1}, c̃J_{k+1})^{(j)} for 1 ≤ j < k. Notice that (v_{k+1}, cJ_{k+1})^{(j)} = (v_k, cJ_k)^{(j)} and (ṽ_{k+1}, c̃J_{k+1})^{(j)} = (ṽ_k, c̃J_k)^{(j)}. Thus by induction we have: ṽ_{k+1}^{(j)} removes one box from the part E_j^{(j)} in v_{k+1}^{(j)}, and c̃J_{k+1}^{(j)} is such that
    • the shortened part has colabel 0;
    • for strings s ≤ D_{j+1}^{(j+1)}, c̃J_{k+1}^{(j)}(s) = cJ_{k+1}^{(j)}(s);
    • for strings D_{j+1}^{(j+1)} < s ≤ D_j^{(j)}, c̃J_{k+1}^{(j)}(s) = cJ_{k+1}^{(j)}(s) − 1;
    • for strings D_j^{(j)} < s ≤ D_{j−1}^{(j−1)}, c̃J_{k+1}^{(j)}(s) = cJ_{k+1}^{(j)}(s) + 1;
    • for strings s > D_{j−1}^{(j−1)}, c̃J_{k+1}^{(j)}(s) = cJ_{k+1}^{(j)}(s).
By the second bullet point above, E_j^{(j)} is the smallest singular string in (v_{k+1}, cJ_{k+1})^{(j)} of size ≥ E_{j+1}^{(j+1)}.

LEMMA 2.9.3. The diagram (2.9.1) commutes.

PROOF. By Lemma 2.9.2, the difference between Φ(S_{t−1} ⊗ q) = (v, cJ) and Φ((ρ(S))_t ⊗ q) = Φ(ρ(S) ⊗ q) = (ṽ, c̃J) is:
(1) for j > t, (ṽ, c̃J)^{(j)} = (v, cJ)^{(j)};
(2) for j = t, ṽ^{(t)} = v^{(t)}, and c̃J^{(t)} is obtained from cJ^{(t)} by
    • for strings s ≤ D_{t−1}^{(t−1)}, c̃J^{(t)}(s) = cJ^{(t)}(s) + 1;
    • for strings s > D_{t−1}^{(t−1)}, c̃J^{(t)}(s) = cJ^{(t)}(s);
(3) for j = t − 1, ṽ^{(t−1)} removes one box from the part E_{t−1}^{(t−1)} in v^{(t−1)}, and c̃J^{(t−1)} is such that
    • the shortened part has colabel 0;
    • for strings s ≤ D_{t−1}^{(t−1)}, c̃J^{(t−1)}(s) = cJ^{(t−1)}(s) − 1;
    • for strings D_{t−1}^{(t−1)} < s ≤ D_{t−2}^{(t−2)}, c̃J^{(t−1)}(s) = cJ^{(t−1)}(s) + 1;
    • for strings s > D_{t−2}^{(t−2)}, c̃J^{(t−1)}(s) = cJ^{(t−1)}(s);
(4) for 1 ≤ j < t − 1, ṽ^{(j)} removes one box from the part E_j^{(j)} in v^{(j)}, and c̃J^{(j)} is such that
    • the shortened part has colabel 0;
    • for strings s ≤ D_{j+1}^{(j+1)}, c̃J^{(j)}(s) = cJ^{(j)}(s);
    • for strings D_{j+1}^{(j+1)} < s ≤ D_j^{(j)}, c̃J^{(j)}(s) = cJ^{(j)}(s) − 1;
    • for strings D_j^{(j)} < s ≤ D_{j−1}^{(j−1)}, c̃J^{(j)}(s) = cJ^{(j)}(s) + 1;
    • for strings s > D_{j−1}^{(j−1)}, c̃J^{(j)}(s) = cJ^{(j)}(s).

Let (u, cI) = Φ(S ⊗ q) = lb^{-1} ◦ lh^{-1}(Φ(S_{t−1} ⊗ q), n + 2). Thus u can be obtained from v by adding a box to D_t^{(j)} for j = n + 1 down to t. Now we use the fact that n + 2 does not appear anywhere in q, which implies that v^{(n+1)} is empty. Hence D_t is the sequence of empty strings. This implies that the colabels of all unchanged strings are preserved when passing from (v, cJ) to (u, cI) (since the vacancy numbers of all unchanged strings are preserved, and their labels are preserved). Then the difference between u and ṽ is given by the following:
(1) for j > t, ṽ^{(j)} is obtained from u^{(j)} by removing a string of size 1;
(2) for j = t, ṽ^{(t)} is obtained from u^{(t)} by removing a box from the part E_t^{(t)};
(3) for j = t − 1, ṽ^{(t−1)} is obtained from u^{(t−1)} by removing a box from the part E_{t−1}^{(t−1)};
(4) for 1 ≤ j < t − 1, ṽ^{(j)} is obtained from u^{(j)} by removing a box from the part E_j^{(j)}.
By Lemma 2.9.2, the sequence of boxes removed is precisely the ρ-sequence. Furthermore, the difference between (v, cJ) and (ṽ, c̃J) mandated by α is precisely such that ρ(u, cI) = (ṽ, c̃J).

Appendix 2.A. Proof of Proposition 2.8.5

The aim here is to prove Proposition 2.8.5. To do this, we will actually prove a stronger statement. Let us first generalize the construction of Ψ in Section 2.8.

Let c ≥ 0 and 0 < t ≤ r. Let p ∈ P_n((t, 1), (r, c)) and write

    p = T ⊗ S,

where T is the single-column tableau with entries T_{1,1}, . . . , T_{t,1} and S = (S_{i,j}) is an r × c rectangular tableau with rows S_{i,1}, S_{i,2}, . . . , S_{i,c}. We require that T_{k,1} ≤ S_{k,1} for k = 1, . . . , t, and write p as the array obtained by gluing the column T to the left of S, so that T_{k,1} sits immediately to the left of S_{k,1} for k = 1, . . . , t.

Define ψ(p) = (u_1, . . . , u_n), where each u_k is the area of p formed by all the boxes that (1) are in the j-th row for some j ≤ k, and (2) contain a number > k.

EXAMPLE 2.A.1. Let p ∈ P_7((2, 1), (4, 4)) be the glued array

    1  1 2 3 4
    2  2 4 4 5
       3 6 6 6
       4 7 7 8

(the first column is T, the rest is S). Then ψ(p) = (u_1, . . . , u_7), where (with rows right-justified as before):

    u_1 = 2 3 4

    u_2 =   3 4
          4 4 5

    u_3 =     4
          4 4 5
          6 6 6

    u_4 =     5
          6 6 6
          7 7 8

    u_5 = 6 6 6
          7 7 8

    u_6 = 7 7 8

    u_7 = 8

Remark 2.8.3 also applies here.

DEFINITION 2.A.2. Define Ψ̃(p) = (ψ(p), J) ∈ RC_n((t, 1), (r, c)), where ψ(p) is viewed as a sequence of partitions, and J is such that it sets the colabels of all parts in u_t of size ≤ c to 1 and sets the colabels of all other parts to 0.

LEMMA 2.A.3. For p given as above, Φ(p) = Ψ̃(p).

PROOF. We prove this statement by induction. For a fixed r ≥ 0, let E_r(c, t) be the following statement, with c ≥ 0 and 0 ≤ t ≤ r as free variables:

    "For any p = T ⊗ S ∈ P_n((t, 1), (r, c)) with T_{k,1} ≤ S_{k,1} for k = 1, . . . , t, we have Φ(p) = Ψ̃(p)."

The induction is on E_r(c, t) with (c, t) in the lattice Z_{≥0} × [r]. For the base case, E_r(0, 0) is true since both Φ(p) and Ψ̃(p) are lists of empty partitions. The induction step has the following two cases:
(1) Assume E_r(c, t) for c ≥ 0 and t < r; show E_r(c, t + 1).
(2) Assume E_r(c, r) for c ≥ 0; show E_r(c + 1, 0).


For the first case, given Φ(p) = Ψ̃(p) for p as above, we would like to compare Φ(p′) and Ψ̃(p′), where p′ is obtained from p by appending to the column T one more box containing a (so that T grows to t + 1 boxes), with S_{t+1,1} ≥ a > T_{t,1}.

The change from Φ(p) ∈ RC((t, 1), (r, c)) to Φ(p′) ∈ RC((t + 1, 1), (r, c)) is caused by lb^{-1} ◦ lh^{-1}, which is described by the following algorithm: Let s^{(a)} = ∞. For k = a − 1 down to t + 1, select the longest singular string in Φ(p)^{(k)} of length s^{(k)} (possibly of zero length) such that s^{(k)} ≤ s^{(k+1)}. Add a box to each of the selected strings, and reset their labels to make them singular with respect to the new vacancy numbers, leaving all other strings unchanged.

By the construction of Ψ̃(p), the inductive assumption that Φ(p) = Ψ̃(p), and S_{t+1,1} ≥ a, we can conclude that s^{(k)} = c for k = a − 1 down to t + 1 in the construction of Φ(p′). The change of the sequence of rectangles from ((t, 1), (r, c)) to ((t + 1, 1), (r, c)) causes the colabels of all parts in Φ(p′)^{(t+1)} of size ≤ c to be set to 1 (increased by 1 from 0), and the colabels of all parts in Φ(p′)^{(t)} of size ≤ c to be set to 0 (decreased by 1 from 1). This is precisely the effect of going from Ψ̃(p) to Ψ̃(p′).

For the second case, going from Φ(p) to Φ(p′) has the effect of ls^{-1}, which decreases the colabels of all parts in Φ(p)^{(r)} of size ≤ c by 1. Again this is precisely the result of going from Ψ̃(p) to Ψ̃(p′).
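The string-selection loop in the algorithm above (for k = a − 1 down to t + 1, choose the longest singular string of length s^{(k)} ≤ s^{(k+1)}) can be sketched schematically. This is an illustrative sketch only, not the thesis' implementation: each rc^{(k)} is modeled as a list of (length, colabel) pairs, "singular" is taken to mean colabel 0, and the vacancy-number and relabeling bookkeeping of the real algorithm is omitted.

```python
# Schematic sketch of the selection loop of the lb^{-1} o lh^{-1} step.
# rc maps level k -> list of (length, colabel) pairs; a string is treated
# as singular when its colabel is 0.

def select_strings(rc, a, t):
    s_prev = float("inf")            # s^(a) = infinity
    chosen = {}
    for k in range(a - 1, t, -1):    # k = a - 1 down to t + 1
        candidates = [length for (length, colabel) in rc.get(k, [])
                      if colabel == 0 and length <= s_prev]
        s_k = max(candidates, default=0)  # zero-length string if none available
        chosen[k] = s_k                   # a box would be added to this string
        s_prev = s_k
    return chosen

# Example: levels 3 and 2, with singular strings of lengths 2 and {3, 1}.
print(select_strings({3: [(2, 0), (4, 1)], 2: [(3, 0), (1, 0)]}, a=4, t=1))
# -> {3: 2, 2: 1}
```

At level 2 the singular string of length 3 is skipped because it exceeds the length selected at level 3, matching the weakly decreasing constraint s^{(k)} ≤ s^{(k+1)}.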

Appendix 2.B. Several useful facts

In this section, several facts that are repeatedly used in Section 2.6 are stated and proved.


For any p ∈ P_n, let rc = Φ(p). Then to each numbered box in p one can associate an lh^{-1}-sequence. In the case

    p = S ⊗ T ⊗ q ∈ P_n((j, 1), (r, s), B′),

where S is the single-column tableau with entries b_1, . . . , b_j and T is an r × s rectangular tableau with rows T_{i,s} · · · T_{i,1} for i = 1, . . . , r, we adopt the definitions of rc_{j,s} and D_{j,s} as given in Section 2.6. Unlike those in Section 2.6, however, the results in this section are general facts about rigged configurations and not just about Φ(Dom(ρ)).

LEMMA 2.B.1. For a fixed column j of T and for 1 ≤ a < b ≤ r, we have D_{b,j}^{(k)} ≤ D_{a,j}^{(k)} for any k ∈ [n].

PROOF. By the definition of lh^{-1} (Definition 2.2.26), we can inductively show D_{a+1,j}^{(k)} ≤ D_{a,j}^{(k−1)} for all k from n down to 1. Combining this with the fact that D_{a,j}^{(k−1)} ≤ D_{a,j}^{(k)}, we have D_{a+1,j}^{(k)} ≤ D_{a,j}^{(k)}.

LEMMA 2.B.2. For a fixed row i of T and for 1 ≤ c < d ≤ s, we have D_{i,d}^{(k)} > D_{i,c}^{(k)} for all k such that D_{i,c}^{(k)} ≠ ∞. In the case D_{i,c}^{(k)} = ∞, D_{i,d}^{(k)} must also be ∞.

PROOF. We use the convention in this proof that ∞ plus any constant is ∞. We proceed by induction on the row index i, with the following inductive hypothesis:

HYPOTHESIS 2.B.3. For any k ∈ [n], rc_{i−1,s+1} has a singular string of size D_{i,s}^{(k)} + 1. Therefore D_{i,s+1}^{(k)} ≥ D_{i,s}^{(k)} + 1.

By the definition of Φ, rc_{i−1,s+1} can be obtained from rc_{i,s} in three steps:
S1. From rc_{i,s} to rc_{r,s} by a sequence of lb^{-1} ◦ lh^{-1} operations:
    rc_{r,s} = lb^{-1} ◦ lh^{-1}(· · · lb^{-1} ◦ lh^{-1}(rc_{i,s}, T_{i+1,s}) · · · , T_{r,s}).
S2. From rc_{r,s} to rc_{0,s+1} by rc_{0,s+1} = ls^{-1}(rc_{r,s}).
S3. (This step is empty for i = 1.) From rc_{0,s+1} to rc_{i−1,s+1} by a sequence of lb^{-1} ◦ lh^{-1} operations:
    rc_{i−1,s+1} = lb^{-1} ◦ lh^{-1}(· · · lb^{-1} ◦ lh^{-1}(rc_{0,s+1}, T_{1,s+1}) · · · , T_{i−1,s+1}).

By the definition of lh^{-1}, for any k ∈ [n], rc_{i,s}^{(k)} has a singular string of size D_{i,s}^{(k)} + 1. By Lemma 2.B.1, the consecutive applications of lb^{-1} ◦ lh^{-1} in S1 do not affect the singularity of these strings. Thus rc_{r,s}^{(k)} has a singular string of size D_{i,s}^{(k)} + 1. In S2, ls^{-1} never makes any singular string non-singular. Thus for any k ∈ [n], rc_{0,s+1}^{(k)} has a singular string of size D_{i,s}^{(k)} + 1. Finally, by induction and Lemma 2.B.1, we see that D_{k,s+1} for any k < i does not affect the singularity of these strings. Thus in rc_{i−1,s+1}^{(k)}, for each k, there is a singular string of size D_{i,s}^{(k)} + 1.

Note that T_{i,s+1} ≤ T_{i,s}. This implies D_{j,s+1} > D_{j,s} for all j ≥ T_{i,s+1}. Using this as the base case, together with the result just proved above, a downward induction shows that D_{i,s+1}^{(k)} ≥ D_{i,s}^{(k)} for all k ∈ [n], and, as long as D_{i,s}^{(k)} < ∞, that D_{i,s+1}^{(k)} > D_{i,s}^{(k)}.

CHAPTER 3

Promotion and evacuation on rectangular and staircase tableaux

This chapter is based on the preprint [PW10] of the same name.

3.1. Introduction

Let X be a finite set and let C = ⟨a⟩ be a cyclic group of order N acting on X. Let X(q) ∈ Z[q] be a polynomial with integer coefficients. We say that the triple (X, C, X(q)) exhibits the cyclic sieving phenomenon (CSP) if for any integer k, we have

(3.1.1)    X(ζᵏ) = #{x ∈ X | aᵏ · x = x},

where ζ = e^{2πi/N} is a primitive N-th root of unity. We will call X(q) a CSP polynomial. Given X and C, a CSP polynomial with non-negative integer coefficients always exists. Indeed, if we view a ∈ Perm(X) and let m_c be the multiplicity of cycle(s) of size c in the cycle notation of a (clearly, m_c > 0 implies that c | N), and let p_c = Σ_{k=0,…,c−1} q^{k·N/c}, then

    p_{a,X} = Σ_{c | N} m_c · p_c

is such a CSP polynomial. The set of all CSP polynomials forms a coset of ⟨q^N − 1⟩ in the ring Z[q], and p_{a,X} is the least-degree representative of this coset. The interesting instances of the CSP ([RSW04], [KM09], [Wes09], [SSW09], [PS09], [PPR09], [BR07], and [EF08], etc.) are those whose CSP polynomials have a “natural” meaning, for example, the q-analogues of some counting formulae for the set X. In nearly all these interesting instances of the CSP, X(q) is also the generating function

    Σ_{x∈X} q^{μ(x)}

of an intrinsic statistic μ : X → Z on X. In the CSP instances where such an intrinsic statistic exists, we use the more explicit triple (X, C, μ) to encode it.
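The construction of p_{a,X} above is easy to make concrete. The following sketch (plain Python; the helper names csp_polynomial and exhibits_csp are ours, not standard) builds p_{a,X} from the cycle type of a and then checks condition (3.1.1) numerically at all N-th roots of unity.

```python
import cmath

def csp_polynomial(cycle_sizes, N):
    # p_{a,X} = sum over cycles of size c of p_c, where
    # p_c = q^0 + q^{N/c} + q^{2N/c} + ... + q^{(c-1)N/c}.
    coeffs = [0] * N
    for c in cycle_sizes:
        assert N % c == 0            # every cycle size divides N
        for k in range(c):
            coeffs[k * N // c] += 1
    return coeffs                    # coeffs[d] = coefficient of q^d

def exhibits_csp(cycle_sizes, N, coeffs):
    # Check (3.1.1): X(zeta^k) = #{x : a^k . x = x}.  The power a^k
    # fixes exactly the elements lying in cycles whose size divides k.
    zeta = cmath.exp(2j * cmath.pi / N)
    for k in range(N):
        value = sum(co * zeta ** (k * d) for d, co in enumerate(coeffs))
        fixed = sum(c for c in cycle_sizes if k % c == 0)
        if abs(value - fixed) > 1e-8:
            return False
    return True
```

For instance, a permutation with cycle type (6, 3, 1) inside a cyclic group of order N = 6 yields p_{a,X} = 3 + q + 2q² + q³ + 2q⁴ + q⁵, and X(1) recovers |X| = 10.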


V. Reiner, D. Stanton, and D. White first formalized the notion of the CSP in [RSW04]. Before them, Stembridge considered the “q = −1” phenomenon [Ste96], which is the special case of the CSP with N = 2 (where ζ = e^{2πi/2} = −1). Promotion ∂ (Definition 1.4.1) and evacuation ε (Definition 3.2.1) are closely related permutations on the set of standard Young tableaux SYT(λ) for any given shape λ. Schützenberger studied them in [Sch72, Sch76, Sch77] as bijections on SYT(λ), and later as permutations on the linear extensions of any finite poset. Edelman and Greene [EG87], and Haiman [Hai92], described some of their important properties; in particular, they showed that the order of promotion on SYT(sck), where sck = (k, k − 1, . . . , 1) is a staircase shape, is k(k + 1). In 2008, Stanley gave a terrific survey [Sta09] of previous results on promotion and evacuation. Important instances of the CSP arise from the actions of promotion and evacuation on standard Young tableaux. For example, Stembridge [Ste96] showed that (SYT(λ), ⟨ε⟩, comaj) exhibits the CSP, where λ is any partition shape, and comaj is a statistic on standard Young tableaux that is closely related to the comajor index. As another example, B. Rhoades [Rho10] showed that (SYT(cʳ), ⟨∂⟩, maj) exhibits the CSP, where cʳ is the rectangular partition with r equal parts of size c, ∂ is promotion on SYT(cʳ), and maj is a statistic on standard Young tableaux that is closely related to the major index. Since the introduction of the CSP in [RSW04], much effort has been made in demonstrating interesting instances of it ([KM09], [Wes09], [SSW09], [PS09], [PPR09], [BR07], and [EF08], etc.), or generalizing it ([BERS]). In this chapter we report our current progress in attacking the following problems.

P ROBLEM 3.1.1. Demonstrate the CSP for the promotion action on staircase tableaux SYT(sck). More specifically, this could mean any one of the following three tasks, with difficulty increasing from difficult to seemingly impossible:
• Find a counting formula for SYT(sck), a q-analogue of which provides a CSP polynomial for the promotion action on SYT(sck).
• Find an intrinsic statistic on SYT(sck), the generating function of which provides a CSP polynomial for the promotion action on SYT(sck).
• Find a statistic on SYT(λ), the generating function of which provides a CSP polynomial for the promotion action on SYT(λ). In particular, this statistic should have the same distribution as maj on SYT(cʳ).


The progress reported here is the construction of an embedding ι : SYT(sck) ↪ SYT(k^(k+1)) that preserves promotion and evacuation. This enables us to extend Rhoades’ definition of the “extended descent” from rectangular tableaux to staircase tableaux.

This chapter is organized in the following way: In Section 3.2, we define the terminology and notation, and review several basic results that are used in later sections. In Section 3.3, we construct the embedding ι and prove our main results about ι: Theorems 3.3.6 and 3.3.7, which state that promotion and evacuation are preserved under the embedding. In Section 3.4, we extend Rhoades’ construction of the “extended descent” on rectangular tableaux to staircase tableaux by using ι; our main results in this section are Theorems 3.4.11, 3.4.12, 3.4.13 and 3.4.14, which state that the extended descent data nicely record the actions of (dual-)promotion and (dual-)evacuation on both rectangular and staircase tableaux. In Section 3.5, we explain how the embedding ι arose and pose some questions about it.

3.2. Definitions and Preliminaries

This section is a review of those notions, notations and facts about Young tableaux that are directly used in the following sections. We assume the reader’s basic knowledge of tableaux theory – partitions, standard Young tableaux, Knuth equivalence, reading words of tableaux, jeu de taquin, the RSK algorithm, etc. For a brief introduction to these topics, please refer back to Chapter 1.

3.2.1. Basic definitions.

D EFINITION 3.2.1. Given T ∈ SYT(λ) for any λ ⊢ n, the evacuation action on T, denoted by ε(T), is described by the following algorithm. Let T₀ = T and λ₀ = λ, and let U be an “empty” tableau of shape λ. We will fill in the entries of U to get ε(T).
(1) Apply sliding to T_k. The last box of the sliding path is an inside corner of λ_k; call this box (i_k, j_k). Fill in the number k + 1 in the (i_k, j_k) box of U.
(2) Remove (i_k, j_k) from λ_k to get λ_{k+1}, and remove the corresponding box and entry from T_k to get T_{k+1}.
(3) Repeat steps (1) and (2) until we reach λ_n = ∅ and U is completely filled. Then define ε(T) = U.


E XAMPLE 3.2.2. The following is a “slow motion” demonstration of the above process on the tableau

T = 1 3 8
    2 4
    5 9
    6 10
    7 .

(In the original display the T_k and U are shown condensed, with bold entries indicating the current fillings of U; the intermediate frames are not reproduced here.) The end result is

ε(T) = 1 3 8
       2 5
       4 6
       7 10
       9 .

R EMARK 3.2.3. The above definition of evacuation follows the convention of Edelman and Greene in [EG87]. Stanley’s “evacuation” [Sta99, A1.2.8] would be our “dual-evacuation” defined below.

D EFINITION 3.2.4. Given T ∈ SYT(λ) for any λ ⊢ n, the dual-evacuation of T, denoted by ε∗(T), is described by the following algorithm. Let T₀ = T and λ₀ = λ, and let U be an “empty” tableau of shape λ. We will fill in the entries of U to get ε∗(T).
(1) Apply dual-sliding to T_k. The last box of the dual-sliding path is an outside corner of λ_k; call this box (i_k, j_k). Fill in the number n − k in the (i_k, j_k) box of U.
(2) Remove (i_k, j_k) from λ_k to get λ_{k+1}, and remove the corresponding box and entry from T_k to get T_{k+1}.
(3) Repeat steps (1) and (2) until we reach λ_n = ∅ and U is completely filled. Then define ε∗(T) = U.

E XAMPLE 3.2.5. The following is a “slow motion” demonstration of the above process on the same tableau T as in Example 3.2.2 (again, the intermediate frames are not reproduced here). The end result is

ε∗(T) = 1 4 9
        2 5
        3 6
        7 10
        8 .

R EMARK 3.2.6. There is an equivalent definition of ε∗ via the RSK algorithm [Sta99, A1.2.10]. (Recall that Stanley’s “evacuation” is our “dual-evacuation”.) For a permutation w = (w₁, w₂, · · · , wₙ) ∈ Sₙ (in one-line notation), let w♯ ∈ Sₙ be given by w♯ = (n + 1 − wₙ, · · · , n + 1 − w₂, n + 1 − w₁). For example, in the case w = (3, 5, 4, 7, 1, 2, 6), w♯ = (2, 6, 7, 1, 4, 3, 5). The operation w ↦ w♯ is conjugation by the longest element of Sₙ. If w corresponds to (P, Q) under RSK, then w♯ corresponds to (ε∗(P), ε∗(Q)) under RSK. We are not aware of any RSK definition of ε for general shape λ.

D EFINITION 3.2.7. For T ∈ SYT(λ), i is a descent of T if i + 1 appears strictly south of i in T. The descent set of T, denoted by Des(T), is the set of all descents of T.

E XAMPLE 3.2.8. In the case that

T = 1 2 3
    4 6 9
    5 7
    8 ,

Des(T) = {3, 4, 6, 7}.

R EMARK 3.2.9. Descent statistics were originally defined on permutations. For π ∈ Sₙ, i is a right descent of π if π(i) > π(i + 1), and i is a left descent of π if i is to the right of i + 1 in the one-line notation of π. It is straightforward to check that left descents are preserved by Knuth equivalence. Therefore the descent set of any tableau T is the set of left descents of any reading word of T.

3.2.2. Basic facts. We list those basic facts about (dual-)promotion and (dual-)evacuation that we will assume. If not specified otherwise, the following facts are about SYT(λ) for general λ ⊢ n.

FACT 3.2.10. ε and ε∗ are involutions.

FACT 3.2.11. ε ∘ ∂ = ∂∗ ∘ ε and ε∗ ∘ ∂ = ∂∗ ∘ ε∗.

FACT 3.2.12. ε ∘ ε∗ = ∂ⁿ.

The above results are due to Schützenberger [Sch63, Sch72]. Alternative proofs are given by Haiman in [Hai92].

FACT 3.2.13. For any R ∈ SYT(cʳ), let n = |cʳ| = r · c. Then ∂ⁿ(R) = R.
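Remark 3.2.6 can be checked by machine. The sketch below (plain Python, our own helper names) implements row-insertion RSK together with the delete-and-slide description of dual-evacuation, and verifies the claim on the permutation w = (3, 5, 4, 7, 1, 2, 6) used above.

```python
def rsk(w):
    # Row-insertion RSK: w -> (P, Q), tableaux as lists of rows.
    P, Q = [], []
    for step, x in enumerate(w, start=1):
        row = 0
        while True:
            if row == len(P):
                P.append([x]); Q.append([step]); break
            bump = next((j for j, y in enumerate(P[row]) if y > x), None)
            if bump is None:
                P[row].append(x); Q[row].append(step); break
            P[row][bump], x = x, P[row][bump]
            row += 1
    return P, Q

def dual_evacuation(T):
    # Delete the smallest entry, slide the hole outward (always pulling
    # in the smaller neighbour), label the freed corners n, n-1, ..., 1.
    T = [row[:] for row in T]
    n = sum(len(r) for r in T)
    U = [[None] * len(r) for r in T]
    for step in range(n):
        i = j = 0                      # the smallest entry sits at (0, 0)
        while True:
            right = T[i][j + 1] if j + 1 < len(T[i]) else None
            below = (T[i + 1][j]
                     if i + 1 < len(T) and j < len(T[i + 1]) else None)
            if right is None and below is None:
                break
            if below is None or (right is not None and right < below):
                T[i][j] = right; j += 1
            else:
                T[i][j] = below; i += 1
        U[i][j] = n - step             # freed corner gets n, n-1, ...
        T[i].pop()
        if not T[i]:
            T.pop()
    return U

w = [3, 5, 4, 7, 1, 2, 6]
wsharp = [8 - x for x in reversed(w)]  # the operation w -> w#
P, Q = rsk(w)
Psharp, Qsharp = rsk(wsharp)
```

Here rsk(w) gives P = [1 2 6 / 3 4 7 / 5] and Q = [1 2 4 / 3 6 7 / 5], and both insertion tableaux of w♯ agree with the dual-evacuations of P and Q.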

The above result is often attributed to Schützenberger.

FACT 3.2.14. On rectangular tableaux, ε = ε∗.

The above result is an easy consequence of Fact 3.2.12 and Fact 3.2.13.

FACT 3.2.15. For any S ∈ SYT(sck), let n = |sck| = (k + 1) · k/2. Then ∂²ⁿ(S) = S and ∂ⁿ(S) = Sᵗ, where Sᵗ is the transpose of S.

The above result is due to Edelman and Greene [EG87].

FACT 3.2.16. For any S ∈ SYT(sck), ε∗(S) = ε(S)ᵗ.

The above result is an easy consequence of Fact 3.2.10, Fact 3.2.12, and Fact 3.2.15.
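Facts 3.2.13 and 3.2.15 are easy to test experimentally. The sketch below implements promotion in the convention that reproduces the examples of this chapter (delete n at its corner, slide the hole to the top-left box, add 1 everywhere, and place 1 there); it is a sketch, not a transcription of Definition 1.4.1, and the helper names are ours.

```python
def promote(T):
    # Promotion: remove n from its corner, slide the hole to the top-left
    # box (the larger of the neighbours above/left moves into it), add 1
    # to every entry, and place 1 in the vacated (1, 1) box.
    G = [row[:] for row in T]
    n = sum(len(r) for r in G)
    i, j = next((a, b) for a, row in enumerate(G)
                for b, v in enumerate(row) if v == n)
    while (i, j) != (0, 0):
        up = G[i - 1][j] if i > 0 else -1
        left = G[i][j - 1] if j > 0 else -1
        if up > left:
            G[i][j] = up; i -= 1
        else:
            G[i][j] = left; j -= 1
    G = [[v + 1 for v in row] for row in G]
    G[0][0] = 1
    return G

def transpose(T):
    return [[T[i][j] for i in range(len(T)) if j < len(T[i])]
            for j in range(len(T[0]))]
```

On the 4 × 3 rectangle, twelve applications of promote return the starting tableau (Fact 3.2.13); on the staircase (3, 2, 1), six applications give the transpose and twelve the identity (Fact 3.2.15).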

3.3. The embedding of SYT(sck) into SYT(k^(k+1))

In this section we describe the embedding ι : SYT(sck) → SYT(k^(k+1)).

D EFINITION 3.3.1. Given S ∈ SYT(sck), let N = (k + 1) · k. Construct R = ι(S) as follows:
• R[i, j] = S[i, j] for i + j ≤ k + 1 (northwest (upper) staircase portion).
• R[i, j] = N + 1 − ε(S)[k + 2 − i, k + 1 − j] for i + j > k + 1 (southeast (lower) staircase portion).
This amounts to the following visualization.

E XAMPLE 3.3.2. Let

S = 1 2 6        then   ε(S) = 1 4 5
    3 5                        2 6
    4                          3 .

Rotating ε(S) by π, we get

3
6 2
5 4 1 .

Now we take the complement of each filling by N + 1 = 13 and get

S′ = 10
     7 11
     8 9 12 .

There is an obvious way to put S and S′ together to create a standard tableau of shape 3⁴, which is

ι(S) = 1 2 6
       3 5 10
       4 7 11
       8 9 12 .

1 2 3 R EMARK 3.3.3. Recall that ∗ (S) = (S)t . Thus we could have computed ∗ (S) = 4 6 , and 5 3 flipped it along the staircase diagonal to get 6 2 , which is the same as rotating (S) by π. This point 5 4 1 of view manifests the fact that n ∈ Des(ι(S)) (Definition 3.4.1) if and only if the corner of n in ∗ (S) is southeast of the corner of n in S.

3.3. THE EMBEDDING OF SY T (SCK ) INTO SY T (K (K+1) )

80

It is also an arbitrary choice to embed SY T (sck ) into SY T (k (k+1) ) instead of into SY T ((k + 1)k ). For example, we could have put together the above S and S 0 to form 1 2 6 10 3 5 7 11 . 4 8 9 12 Our arguments below apply to either choice with little modification. From the construction of ι, we see that ι(S) contains the upper staircase portion, which is just S, and the lower staircase portion, which is essentially (S). Therefore, we can just identify ι(S) with the pair (S, (S)). We would like to understand how the promotion action on ι(S) factors through this identification. It is clear from the construction that promotion on ι(S), when restricted to the lower staircase portion, corresponds to dual-promotion on (S). If the promotion path in ι(S) passes through the box containing n = (k + 1) · k/2 (the largest number in the upper staircase portion of ι(S)), then we know that promotion on ι(S), when restricted to the upper staircase portion, corresponds to promotion on S. The following arguments show that this is indeed the case. L EMMA 3.3.4. Let T ∈ ST Y (λ), and n = |λ|. If the number n is in box (i, j) of T (clearly, it must be an outside corner), then the dual-promotion path of ∗ (T ) ends on box (i, j) of ∗ (T ). P ROOF. It follows from the definition of dual-evacuation using dual-sliding that the position of n in ∗ ◦ ∂(T ) is the same as the position of n in T (because the sliding in the action of promotion and the first application of dual-sliding in the definition of dual-evacuation will “cancel out” with respect to the position of n). By the fact that ∗ (T ) = ∂ ◦ ∗ ◦ ∂(T ) (Fact 3.2.11) and the fact that the dual-promotion path of ∗ (T ) is the reverse of the promotion path of ∗ ◦ ∂(T ) (Remark 1.4.7), the statement follows.

The above lemma, when specialized to staircase-shaped tableaux, implies the following. P ROPOSITION 3.3.5. Let S ∈ SY T (sck ). The promotion path of ι(S) always passes through the box with entry n = (k + 1) · k/2. P ROOF. Suppose n is in box (i, j) of S ∈ SY T (sck ). Since S is of staircase shape, we have ∗ (S) = (S)t (Fact 3.2.16). The above lemma then says the dual-promotion path of (S) ends on the box (j, i) of (S), which is “glued” exactly below the box (i, j) of S by the construction of ι. Now we use the observation that the promotion path of ι(S), when restricted to the lower staircase portion, corresponds to the dualpromotion path of (S). The result follows.

3.4. DESCENT VECTORS

81

This proves our first main result of the embedding ι. T HEOREM 3.3.6. For S ∈ SY T (sck ), ι ◦ ∂(S) = ∂ ◦ ι(S). By the above theorem and the definition of evacuation, we have that T HEOREM 3.3.7. For S ∈ SY T (sck ), ι ◦ (S) = ◦ ι(S). R EMARK 3.3.8. It can be show either independently or as a corollary of Theorem 3.3.6 that ι ◦ ∂ ∗ (S) = ∂ ∗ ◦ ι(S). On the other hand, it is not true that ι ◦ ∗ (S) = ∗ ◦ ι(S). On the contrary, by Fact 3.2.14 we know that ι ◦ (S) = ∗ ◦ ι(S). It is not hard to see that ι ◦ ∗ (S) = ◦ ι(S t ). 3.4. Descent vectors 3.4.1. Descent vectors of rectangular tableaux. Rhoades [Rho10] invented the notion of “extended descent” in order to describe the promotion action on rectangular tableaux. D EFINITION 3.4.1. Let R ∈ SY T (rc ), and n = c · r. We say i is an extended descent of R if either i is a descent of R, or i = n and 1 is a descent of ∂(R). The extended descent set of R, denoted by Dese (R), is the set of all extended descents of R. 1 3 6 E XAMPLE 3.4.2. In the case that R1 = 2 5 7 , Dese (R1 ) = {1, 3, 6, 7, 9, 11}. Here 12 6∈ Dese (R1 ) 4 9 11 8 1012 1 2 7 because 1 is not a descent of ∂(R1 ) = 3 4 8 . 5 6 10 9 1112 1 2 4 In the case that R2 = 3 5 9 , Dese (R2 ) = {2, 4, 5, 6, 9, 11, 12}. Here 12 ∈ Dese (R2 ) because 1 is a 6 8 11 7 1012 1 3 5 descent of ∂(R2 ) = 2 4 6 . 7 9 10 8 1112 It is often convenient to think of Dese (R) as an array of n boxes, where a dot is put at the i-th box of this array if and only if i is an extended descent of R. In this form, we will call Dese (R) the descent vector

3.4. DESCENT VECTORS

82

of R. Furthermore, we identify (“glue together”) the left edge of the left-most box and the right edge of the right-most box so that the array Dese (R) forms a circle. It therefore makes sense to talk about rotating Dese (R) to the right, where the content of the i-th box goes to the (i+1)-st box (mod n), or similarly, rotating to the left. E XAMPLE 3.4.3. Continuing the above example, Dese (R1 ) = •

•

• •

•

•

•

• •.

and Dese (R2 ) =

•

• • •

We would like to point out that the map Dese : SY T (rc ) → (0, 1)n is not injective and that the preimages of Dese are not equinumerous in general. Rhoades [Rho10] showed a nice property of the promotion action on the extended descent set. In the language of descent vectors, it has the following visualization. T HEOREM 3.4.4 (Rhoades, [Rho10]). If R is a standard tableau of rectangular shape, then the promotion ∂ rotates Dese (R) to the right by one position. 1 E XAMPLE 3.4.5. Continuing the above example, if R3 = ∂(R2 ) = 2 7 8 Dese (R3 ) = •

•

• • •

•

3 5 4 6 then 9 10 1112 •.

The action of evacuation on descent vectors is also very nice. (Note that dual-evacuation ∗ is the same as evacuation on rectangular tableaux.) T HEOREM 3.4.6. Let R ∈ SY T (rc ) and n = c · r. Then evacuation rotates Dese (R) to the right by one position and then flips the result of the rotation. More precisely, the i-th box of Dese ((R)) is dotted if and only if the (n − i)-th (mod n) box of Dese (R) is dotted. P ROOF. We first note that (R) = ∗ (R) (Fact 3.2.14). Then we note that Des(R) is the set of left descents of the column reading word wR of R (Remark 3.2.9). Now, i is a left descent of wR if and only if ] n − i is a left descent of wR (Remark 3.2.6). Therefore i ∈ Des(R) if and only if n − i ∈ Des((R)).

If n ∈ Dese (R), then 1 ∈ Des(∂(R)) (Definition 3.4.1), thus n − 1 ∈ Des( ◦ ∂(R)) by the previous paragraph, thus n − 1 ∈ Des(∂ −1 ◦ (R)) (Fact 3.2.11), thus n ∈ Dese ((R)) (Theorem 3.4.4). Since is an involution, the converse is also true.

3.4. DESCENT VECTORS

83

1 E XAMPLE 3.4.7. Continuing the above example, (R3 ) = 3 7 8 Dese ((R3 )) =

•

2 5 4 6 and 9 11 1012

• • •

•

• •.

It is clear that this action is an involution. 3.4.2. Descent vectors of staircase tableaux. For staircase tableaux, we give the following construction of descent vectors. D EFINITION 3.4.8. Let S ∈ SY T (sck ) and n = |sck | = k(k + 1)/2. Then Dese (S) is an array of 2n boxes. The rules of placing dots into these boxes are the following. • If i ∈ Des(S), then put a dot in the i-th box and leave the (n + i)-th box empty. • If i < n and i 6∈ Des(S), then put a dot in the (n + i)-th box and leave the i-th box empty. • If 1 ∈ Des(∂(S)), then leave the n-th box empty and put a dot in the (2n)-th box. • If 1 6∈ Des(∂(S)), then leave the (2n)-th box empty and put a dot in the n-th box. We identify the left edge and the right edge of this array. 1 4 5 E XAMPLE 3.4.9. In the case that S1 = 2 6 , 3 Dese (S1 ) = • •

• •

Dese (S2 ) =

•

• •

.

•

•.

1 2 5 In the case that S2 = 3 6 , 4 • •

•

As in the case of rectangular tableaux, the map Dese is not injective and the pre-images of Dese are not equinumerous in general. From the definition, we see that the first half and the second half of Dese (S) are just complements of each other, that is, for each i ∈ [n] precisely one of the i-th and (n + i)-th boxes is dotted. Thus the second half of Dese (S) is redundant. On the other hand, this redundancy demonstrates the link between Dese (S) and Dese (ι(S)) as stated in Theorem 3.4.11. First, we need a supporting lemma, whose proof is not hard but rather tedious, so we leave it to the appendix. L EMMA 3.4.10. Let S ∈ SY T (sck ) and n = |sck |. If the promotion path of S ends with a vertical (up) move, then the corner of n in ∗ (S) is northeast of the corner of n in S. If the promotion path of S ends with a horizontal (left) move, then the corner of n in ∗ (S) is southwest of the corner of n in S.

3.4. DESCENT VECTORS

84

T HEOREM 3.4.11. For S ∈ SY T (sck ), Dese (S) = Dese (ι(S)). P ROOF. Parsing through the construction of ι, we see that this claim is the conjunction of the following two statements: (1) for i 6= n, i ∈ Des(S) if and only if n − i 6∈ Des((S)); and (2) 1 ∈ Des(∂(S)) if and only if n 6∈ Des(ι(S)). For the first statement, we note that i ∈ Des(S) is equivalent to the statement that i is a left descent of a reading word wS of S (Remark 3.2.9), which is equivalent to the statement that n − i is a left descent of the word ws] (Remark 3.2.6), which is equivalent to the statement that n − i is a descent in ∗ (S), which is equivalent to the statement that n − i is not a descent of (S) (Fact 3.2.16). Now, 1 ∈ Des(∂(S)) is equivalent to the statement that the promotion path of S ends with a vertical (up) move, which is equivalent to the statement that the corner n in ∗ (S) is northeast of the corner of n in S by Lemma 3.4.10 , which is equivalent to the statement that n 6∈ ι(S) (Remark 3.3.3). The above Theorems 3.4.11, 3.3.6 and 3.4.4 imply the following analogy to Theorem 3.4.4 for staircase tableaux. T HEOREM 3.4.12. If S is a standard tableau of staircase shape, then promotion ∂ rotates Dese (S) to the right one position. Note that if we rotate Dese (S) in any direction by n positions we get the complement of Dese (S), which is Dese (S t ). This agrees with Edelman and Greene’s result [EG87] that ∂ n (S) = S t and ∂ n = ∂ −n . Unlike the case of rectangular tableaux, evacuation and dual-evacuation ∗ act differently on staircase tableaux. Their actions on descent vectors are described below. T HEOREM 3.4.13. Let S ∈ SY T (sck ) and n = k(k + 1)/2. Then evacuation rotates Dese (S) to the right by one position and then flips the result of the rotation. More precisely, the i-th box of Dese ((S)) is dotted if and only if the (2(n − i))-th box of Dese (S) is dotted. T HEOREM 3.4.14. Let S ∈ SY T (sck ) and n = k(k+1)/2. 
Then dual-evacuation ∗ rotates Dese (S) to the right by n − 1 positions and then flips the result of the rotation. More precisely, the i-th box of Dese ((S)) is dotted if and only if the (n − i)-th box of Dese (S) is dotted. 1 2 4 1 3 5 1 2 6 E XAMPLE 3.4.15. Let S3 = 3 6 , then (S3 ) = 2 4 and ∗ (S3 ) = 3 4 . Correspondingly, 5 6 5 Dese (S3 ) =

•

•

•

•

• •,

3.4. DESCENT VECTORS

85

Dese ((S3 )) = •

•

•

•

•,

•

and Dese (∗ (S3 )) =

•

•

• •

•

.

•

Note that Dese ((S3 )) is the complement of Dese (∗ (S3 )). This agrees with the fact that ∗ (S3 ) = (S3 )t . P ROOF OF T HEOREM 3.4.13 AND 3.4.14. Theorem 3.4.13 follows directly from Theorem 3.4.11 and Theorem 3.3.7. Theorem 3.4.14 follows from the fact that Dese (S) is the complement of Dese (S t ).

Theorems 3.4.4 and 3.4.12 imply that if T is either a rectangular or staircase tableau, Dese (T ) encodes important information about the promotion cycle that T is in. C OROLLARY 3.4.16. If T , either of rectangular or staircase shape, is in a promotion cycle of size C then Dese (T ) must be periodic with period dividing C. (The period does not have to be exactly C.) 1 E XAMPLE 3.4.17. Let T = 2 3 4

5 6 7 8

9 10 , then 11 12

Dese (T ) = • • •

• • •

• • •

.

We see that Dese (T ) has a period of 4, thus T must be in a promotion cycle of size either 4 or 12. Indeed, the promotion order of T is 4. 1 On the other hand, the promotion order of T = 2 4 6 Dese (T ) = •

•

3 5 7 9 is also 4, while its descent vector 8 11 1012 •

•

•

•

has period 2. Equipped with the above knowledge, we can say more about the promotion action on SY T (sck ). For example, C OROLLARY 3.4.18. In the promotion action on SY T (sck ) there always exists a full cycle, that is, a cycle of the same size as the order of the promotion ∂, in this case k(k + 1). P ROOF. Consider T ∈ SY T (sck ) obtained by filling the numbers 1 to k(k + 1)/2 down columns, from leftmost column to rightmost column. Then Dese (T ) has period of k(k + 1), thus T must be in a full cycle.

3.5. SOME COMMENTS AND QUESTIONS

86

1 4 6 For example, for k = 3 and T = 2 5 , 3 Dese (T ) = • •

•

•

•

•

has period 12. Indeed, computer experiments show that “most” cycles of the promotion action are full cycles. C OROLLARY 3.4.19. In the promotion action on SY T (sck ), let N = k(k + 1). If a cycle of length C appears, then C is a divisor of N , but not a divisor of N/2. P ROOF. The cycle size C is a divisor of N since the order of promotion |∂| is N . On the other hand, C cannot be a divisor of N/2 since by definition Dese (T ) can never have period of length that is a divisor of N/2.

3.5. Some comments and questions

The discovery of ι is a by-product of our attempt to solve an open question posed by Stanley ([Sta09, page 13]) that asks if Rhoades’ CSP result on rectangular tableaux can be extended to other shapes, and if there is a more combinatorial proof of this result. Rhoades’ proof uses Kazhdan-Lusztig theory, requiring special properties of rectangular tableaux. It seems (to us) that there is not an obvious analogous proof for other shapes. So we decided to try our luck in computer exploration using Sage-Combinat ([StSc09], [SCc08]). The first thing we noticed from the computer data was the nice promotion cycle structure of staircase tableaux, which is not a surprise at all due to Fact 3.2.15. Thus we decided to focus on Problem 3.1.1. It was soon clear to us that brute-force computation of the cycle structure could not proceed very far; we could only handle SY T (sck ) for k ≤ 5 on our computer. On the other hand, the promotion cycle structures on rectangular tableaux are extremely easy to compute by Rhoades’ result, as the generating function of maj is just the q−analogue of the hook length formula. So the embedding ι is an effort to study the promotion cycle structure on SY T (sck ) by borrowing information from the promotion action on SY T (k k+1 ). Among the cases of promotion action on SY T (sck ) for which we know the complete cycle structure (that is, k = 3, 4, 5), we have found that each has a CSP polynomial that is a product of cyclotomic polynomials of degree at most k(k + 1): For k = 3, 4, 5, these polynomials are

Φ2 Φ24 Φ6 Φ8 Φ12 ,

3.5. SOME COMMENTS AND QUESTIONS

87

Φ32 Φ3 Φ24 Φ8 Φ210 Φ16 Φ20 , and 3 4 Φ11 2 Φ6 Φ10 Φ11 Φ13 Φ22 Φ24 Φ30 ,

respectively. We note that these polynomials in product form are not unique, for example Φ22 Φ4 Φ6 Φ10 Φ12 gives another CSP polynomial for SY T (sc3 ). The study of this product form continues, with the hope of finding a counting formula the q−analogue of which is a CSP polynomial for the promotion action on SY T (sck ). For the case k > 5, Corollary 3.4.19 gives a necessary condition for what kind of cycles can appear in the promotion action on SY T (sck ). We do not know if this condition is sufficient. The question that interests us the most is if the embedding has any representation-theoretical interpretation. We would like to propose two possible research directions aimed at answering this question, both of which involve considering the action of symmetric groups (or their Hecke algebras) on standard Young tableaux. In the first direction, we use standard Young tableaux to index basis elements of the seminormal representations of the Hecke algebra of a symmetric group [You52, Hoe74, Wen88]. Then the action of Hecke algebra generator Ti on the seminormal basis element vL indexed by the standard Young tableau L is given by qvL Ti . vL = −vL (ai )vL + (q −1 + ai )vsi .L

if i + 1 is west of i in L, if i + 1 is south of i in L, otherwise.

Where si . L is the standard Young tableau obtained from L by switching i + 1 and i, and ai ∈ C(q) is some coefficient that depends on the relative positions of i + 1 and i in L. If we “ignore” the coefficients, the above action of Ti can be thought of as Ti acting on standard Young tableaux: If switching i + 1 and i in L destroys the property of being a standard Young tableau, then Ti just leaves L alone, otherwise Ti switches i + 1 and i in L (and also keeps a copy of the original L). On the other hand, the theory of promotion and evacuation on standard Young tableaux (more generally on linear extensions of arbitrary posets) can be considered as the representation theory of the Coxeter group G with generators and relations [Hai92, MR94, Sta09]: τi2 = 1, 1 ≤ i ≤ n − 1

3.5. SOME COMMENTS AND QUESTIONS

τi τj = τj τi , |i − j| > 1 The action of τi on a standard Young tableau L is given as follows: If switching i + 1 and i in L destroys the property of being a standard Young tableau then τi just leaves L alone, otherwise τi switches i + 1 and i in L. The promotion on Young tableaux of shape λ ` n is then defined as ∂ = τ1 τ2 · · · τn−1 and the evacuation is defined as = τ1 τ2 · · · τn−1 · τ1 τ2 · · · τn−2 · · · τ1 τ2 · τ1 Noting the similarity of the actions of Ti and τi on standard Young tableaux, it is interesting to study the relation between them. For example, can one choose q ∈ C or add extra parameters so that the τi can be “approximated” by Ti ? Gaining knowledge on this front may allow us to deploy powerful tools from the representation theory of Hecke algebras in solving our problem. The second direction was suggested to the author by B. Rhoades through a private communication. The embedding ι can be extended by C-linearity to the map ι : C[SY T (sck )] ,→ C[SY T (k (k+1) )]. We can view these vector spaces as the irreducible symmetric group representations corresponding to the shapes sck and k k+1 , and we can identify standard Young tableaux with the Kazhdan-Lusztig (left) cellular bases of these representations. Now we consider Vk = C[{ι(S) | S ∈ SY T (sck )}] ,→ C[SY T (k (k+1) ]. By the promotion/evacuationpreserving property of ι, we see that Vk is invariant under the (KL representation) actions of the long element and long cycle w0 , c ∈ Sk(k+1) , respectively [Ste96, Rho10]. Thus, we have #{S ∈ SY T (sck ) | ∂ d (S) = S} = traceVk (cd ). To compute the right-hand side, one way is to understand the largest subgroup Gk of Sk(k+1) such that Gk fixes Vk pointwise. For k = 2, it can be seen easily that G2 = S3 × S3 . For any larger k, direct computation becomes not feasible. Besides helping to compute the right-hand side of the above equation, it is itself an interesting problem to characterize Gk .

88

APPENDIX 3.A. PROOF OF LEMMA 3.4.10

Appendix 3.A. Proof of Lemma 3.4.10 To prove Lemma 3.4.10, we first make the observation that the location of the corner that contains n (the n-corner) in S cannot be the same as the n-corner in ∗ (S). This is because, as we had observed in Lemma 3.3.4, the n-corner in S is the same as the n-corner in ∗ ◦ ∂(S); and ∗ (S) = ∂ ◦ ∗ ◦ ∂(S) does not have the same n-corner as that of ∗ ◦ ∂(S). With this observation, Lemma 3.4.10 is a consequence of the following general fact.

Lemma 3.A.1. Let T ∈ SYT(λ) with λ ⊢ n. If the promotion path of T ends with a vertical (up) move, then the whole dual-promotion path of T must be (weakly) northeast of the promotion path. If the promotion path of T ends with a horizontal (left) move, then the whole dual-promotion path of T must be (weakly) southwest of the promotion path.

Proof. Without loss of generality, we argue the case where the promotion path of T ends with a vertical move. Imagine a boy and a girl standing at the northwest-most box of T. The boy will walk along the promotion path in reverse towards the southeast, and the girl will walk along the dual-promotion path towards the southeast; they walk at the same speed. In the first step, the boy goes south by assumption. The girl may go east or south. If she starts by going east, then she is already strictly northeast of the boy. If she starts by going south with the boy, then she must turn east earlier than the boy does. (Suppose the boy turns east at box (i, j). By the definition of the promotion path, this implies that T[i, j] > T[i − 1, j + 1]. If the girl went south at box (i − 1, j), then by the definition of the dual-promotion path this would imply T[i, j] < T[i − 1, j + 1], a contradiction. Therefore, the girl must turn east at box (i − 1, j) or earlier.) So either way, the girl will be strictly northeast of the boy before the boy makes his first east turn. If they never meet again, then we are done. So we assume that their next meeting position is the box (s, t), and argue that they will never cross; by induction this will prove the claim. It is clear that the girl must enter box (s, t) from the north, and the boy must enter box (s, t) from the west. From box (s, t), the girl can either go south or go east. Suppose the girl goes south from (s, t). Then the boy must also go south from (s, t). (For if he went east, it would imply that T[s − 1, t + 1] < T[s, t], which would make the girl go through (s − 1, t + 1) instead of (s, t).) Then we can use our previous argument to show that the girl must make an east turn before the boy, and stay northeast of the boy.


Suppose the girl goes east from (s, t). Then again the boy must go south from (s, t). (For the girl's behavior shows that T[s − 1, t + 1] > T[s, t], while the boy's going east would imply that T[s − 1, t + 1] < T[s, t].) So the girl stays northeast of the boy.


CHAPTER 4

The commutativity between the R-matrix and the promotion operator – a combinatorial proof

In this chapter, we give an alternative proof of Proposition 2.3.12, which states, in the special case of a rectangle and a column, the commutativity of the combinatorial R-matrix map and the sliding operation ρ. However, as we commented in Remark 2.3.13 of Chapter 2, this special case suffices for the R-reduction, and eventually implies the commutativity of the combinatorial R-matrix and the promotion. Let us restate Proposition 2.3.12.

Proposition. Let p ∈ LM ⊂ P(B), where B = ((r, c), (t, 1)). Then R(p) ∈ Dom(ρ) and the following diagram commutes:

          R
    p ────────→ •
    │           │
   ρ│           │ρ
    ↓           ↓
    • ────────→ •
          R

The combinatorial R-matrix is described in Section 2.2.7 of Chapter 2. In order to prove the above proposition, we describe an explicit combinatorial algorithm for the R-matrix in the special case that B = ((r, c), (t, 1)).

4.1. A combinatorial algorithm for R

Let T be a skew tableau. We denote by T_i the i-th row of T and write T = (T_1, T_2, . . .). We denote by T_{i,j} the entry in box (i, j) of T.

In Chapter 1, we described the algorithm of row insertion T ←−^{row} k. In the case that T ∈ SSYT is a rectangular Young tableau with r rows and c columns, and k < T_{1,c}, we can think of the result of the row insertion as a pair (k′, T′), where k′, denoted (T ←−^{row} k)_1, is the number bumped out of the last row, and T′, denoted (T ←−^{row} k)_2, is the resulting tableau (of the same shape as T) of this row insertion. Since in this usage the number k′ bumped out of the last row is not included in the resulting tableau T′, we consider the bumping route of the row insertion to consist only of positions inside the rectangular shape.

4.1. A COMBINATORIAL ALGORITHM FOR R

92

For example, let

T =  1 2 2 3
     2 3 5 5
     4 4 6 6
     5 6 7 7

then

(T ←−^{row} 2) = ( 7 ,  1 2 2 2
                        2 3 3 5
                        4 4 5 6
                        5 6 6 7 ).

The bumping route is ((1,4), (2,3), (3,3), (4,3)).
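This row insertion, together with its bumping route, can be sketched in code as follows (the encoding of a tableau as a list of rows is our own choice; the sketch is checked against the example above):

```python
from bisect import bisect_right

def row_insert(T, k):
    """Row-insert k into the rectangular tableau T (a list of rows), with
    k < T[0][-1].  Returns (k2, T2, route): the entry bumped out of the
    last row, the new tableau of the same shape, and the bumping route
    as 1-based (row, column) positions."""
    T2 = [row[:] for row in T]
    route = []
    x = k
    for i, row in enumerate(T2):
        j = bisect_right(row, x)       # leftmost entry strictly larger than x
        if j == len(row):              # would extend the shape: outside the
            raise ValueError("insertion leaves the rectangle")   # assumed case
        route.append((i + 1, j + 1))
        row[j], x = x, row[j]          # bump, carry the old entry downward
    return x, T2, route
```

On the example above, `row_insert` returns the bumped entry 7, the displayed tableau, and the route ((1,4), (2,3), (3,3), (4,3)).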

In the degenerate case when T is the empty tableau, we define ∅ ←−^{row} k = (k, ∅). We should think of this usage of row insertion as a way of passing a number k from right to left through a rectangular Young tableau T in the case that k < T_{1,c}. Recall that Algorithm 1.2.16 describes inverse column insertion, which is another way of passing a number k from right to left through a rectangular Young tableau T, used in the case k ≥ T_{1,c}. When we do not want to distinguish row insertion from inverse column insertion, we shall write ←−^∗. Frequently, we will row-insert a number into only a portion of a tableau T. The following definitions formalize this notion:

Definition 4.1.1. Let T = (T_1, . . . , T_r) be a skew tableau. Define upp_k(T) = (T_1, . . . , T_k) to be the first k rows of T, and low_k(T) = (T_{k+1}, . . . , T_r) to be the rest of the tableau.

Definition 4.1.2. Let S, T be two rectangular tableaux of shapes (r, c) and (t, c), respectively. Whenever it makes sense, define Stack(S, T) to be the tableau of shape (r + t, c) obtained by stacking S on top of T. Formally, Stack(S, T) = (S_1, . . . , S_r, T_1, . . . , T_t).

Clearly, T = Stack(uppk (T ), lowk (T )).

Definition 4.1.3. T ←−^{row}_k j is the operation of row-inserting j into the low_k(T) portion of T. Formally,

(T ←−^{row}_k j) = ((low_k(T) ←−^{row} j)_1, Stack(upp_k(T), (low_k(T) ←−^{row} j)_2)).

Let us first warm up by describing the proposed algorithm for the R-matrix in the simplest situation, where the second rectangle is a 1 × 1 box, that is, t = 1:
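The operations upp_k, low_k, Stack and the portioned insertion ←−^{row}_k can be sketched as follows (a small self-contained sketch under our own list-of-rows encoding; it is checked against the first step of Example 4.1.9 below):

```python
def upp(T, k):
    """upp_k(T): the first k rows of T."""
    return T[:k]

def low(T, k):
    """low_k(T): the remaining rows of T."""
    return T[k:]

def stack(S, T):
    """Stack(S, T): the tableau obtained by stacking S on top of T."""
    return S + T

def row_insert_low(T, k, j):
    """T <--(row)_k j : row-insert j into the low_k(T) portion; returns
    the pair (bumped entry, Stack(upp_k(T), inserted lower portion))."""
    lower = [row[:] for row in low(T, k)]
    x = j
    for row in lower:
        i = next(c for c, v in enumerate(row) if v > x)  # leftmost entry > x
        row[i], x = x, row[i]
    return x, stack(upp(T, k), lower)
```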

4.1. A COMBINATORIAL ALGORITHM FOR R

93

Algorithm 4.1.4. Let p = S ⊗ T ∈ P((r, c), (1, 1)). If T_{1,1} < S_{1,c}, then R(p) corresponds to row-inserting T_{1,1} into S. For example, if

p =  1 2 2 3
     2 3 5 5   ⊗  2
     4 4 6 6
     5 6 7 7

then T_{1,1} = 2 < 3 = S_{1,c}, and R(p) is obtained by the row insertion S ←−^{row} 2. The number bumped out of the last row is 7, so

R(p) =  7  ⊗  1 2 2 2
              2 3 3 5
              4 4 5 6
              5 6 6 7

If on the other hand T_{1,1} ≥ S_{1,c}, then R corresponds to inverse column-inserting T_{1,1} into S. For example, let

p =  1 2 2 3
     2 3 5 5   ⊗  6
     4 4 6 6
     5 6 7 7

Then T_{1,1} = 6 ≥ 3 = S_{1,c}, and R(p) is obtained by the inverse column insertion S ←−^{col⁻¹} 6. The number bumped out of the first column is 5, so

R(p) =  5  ⊗  1 2 2 3
              2 3 5 5
              4 4 6 6
              6 6 7 7

This algorithm clearly gives the correct R-matrix map, because both row insertion and inverse column insertion preserve Knuth equivalence. We summarize this result in the following proposition.

Proposition 4.1.5. Let p = S ⊗ T ∈ P((r, c), (1, 1)). Then the R-matrix corresponds to row insertion if T_{1,1} < S_{1,c}; otherwise, the R-matrix corresponds to inverse column insertion.

In the case that p = S ⊗ T ∈ P((r, c), (t, 1)) with t > 1, the algorithm for computing R(p) involves the following construction.

Definition 4.1.6. Let p = S ⊗ T ∈ P((r, c), (t, 1)). The association of rows of S with rows of T is a partial map η_p : [r] → [t], defined inductively as follows:
• η_p(1) = min{i ∈ [t] | T_{i,1} ≥ S_{1,c}};
• if η_p(k) is defined, then η_p(k + 1) = min{i ∈ [t] | i > η_p(k), T_{i,1} ≥ S_{k+1,c}}.
In addition, define µ_p to be the set of rows of T that are not associated to a row of S, that is, µ_p = [t] \ Img(η_p). Finally, define

ν_p =  t         if µ_p = ∅,
       max(µ_p)  if µ_p ≠ ∅.
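The association of Definition 4.1.6 can be computed mechanically; the following sketch (our own encoding: S as a list of rows, T as a list of entries of the single column) is checked against Example 4.1.7 below:

```python
def associate(S, T):
    """Compute (eta_p, mu_p, nu_p) of Definition 4.1.6 for the rectangular
    tableau S and the single-column tableau T (given as a list of entries).
    eta is returned as a dict {k: eta(k)} with 1-based indices."""
    c, t = len(S[0]), len(T)
    eta, prev = {}, 0
    for k in range(1, len(S) + 1):
        # smallest unused T-row (past the previous one) at least S_{k,c}
        m = next((i for i in range(prev + 1, t + 1)
                  if T[i - 1] >= S[k - 1][c - 1]), None)
        if m is None:
            break                      # eta(k), and all later values, undefined
        eta[k] = prev = m
    mu = set(range(1, t + 1)) - set(eta.values())
    nu = t if not mu else max(mu)
    return eta, mu, nu
```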


Example 4.1.7. If

S =  3  3        T =  2
     6  7             3
     7  8             4
     10 15            5
                      8
                      9

then the association can be depicted as

η_{S⊗T} =  2
           3 3    3
           4
           5
           6 7    8
           7 8    9
           10 15

In this case, µ_{S⊗T} = {1, 3, 4} and ν_{S⊗T} = 4.

The algorithm for R involves certain operations on triples, as given in the next definition.

Definition 4.1.8. Let 𝒮 be the set of all triples (U, V, W), where U is a rectangular tableau of shape (c^r), V is a tableau of shape (1^t), and W is a stack. (A stack W is a column of numbers with an opening on the top; that is, when a new number k is added to this stack, denoted by W ← k, it is always added to the top.) Define

ω_{U⊗V} =  max{i ∈ [r] | U_{i,c} ≤ V_{t,1}}                if µ_{U⊗V} = ∅,
           min{j ∈ [r] | U_{j,c} > V_{ν_{U⊗V},1}} − 1      if µ_{U⊗V} ≠ ∅.

An operator R′ on 𝒮 is defined by R′(U, V, W) = (U′, V′, W′), where:
• If µ_{U⊗V} ≠ ∅, then
  – U′ = (U ←−^{row}_{ω_{U⊗V}} V_{ν_{U⊗V},1})_2;
  – V′ = (V_1, . . . , V̂_{ν_{U⊗V}}, . . . , V_t);
  – W′ = W ← (U ←−^{row}_{ω_{U⊗V}} V_{ν_{U⊗V},1})_1.
• Otherwise (recall that in this case ν_{U⊗V} = t),
  – U′ = (U ←−^{col⁻¹} V_{t,1})_2;
  – V′ = (V_1, . . . , V̂_t);
  – W′ = W ← (U ←−^{col⁻¹} V_{t,1})_1.

Define 𝒮_0 ⊂ 𝒮 to consist of the elements whose stack W is empty, and define 𝒮_1 ⊂ 𝒮 to be the transitive closure of 𝒮_0 under R′. Now perform the following algorithm, which by Proposition 4.1.10 yields R(S ⊗ T):

(1) Begin with U = S, V = T, and W an empty stack.


(2) Repeatedly apply R′ to (U, V, W) until V is empty.

Example 4.1.9. Continuing Example 4.1.7, we have V_{ν_{S⊗T},1} = V_{4,1} = 5 and ω_{S⊗T} = 1, so we first row-insert 5 into

low_{ω_{S⊗T}}(S) = low_1(S) =  6  7
                               7  8
                               10 15

The number bumped out is 10. Therefore, R′(S, T, ∅) = (U, V, W) yields:

U =  3  3      V =  2      W =  10 .
     5  7           3
     6  8           4
     7  15          8
                    9

The new association looks like:

2
3 3    3
4
5 7    8
6 8    9
7 15

(We are going to omit the subscripts on η, µ, ω, and ν when they are irrelevant or clear from the context.) The next two applications of R′ insert 4 and then 2 into the corresponding "lower tableau" to obtain:

U =  2  3      V =  3      W =  6 .
     3  7           8           7
     4  8           9           10
     5  15

At this stage, µ = ∅, ν = 3, ω = 3, and η can be depicted as:

η =  2  3    3
     3  7    8
     4  8    9
     5  15

The next application of R′ invokes the inverse column insertion

2  3
3  7    ←−^{col⁻¹} 9
4  8
5  15

and we obtain:

U =  2  3      V =  3      W =  5 .
     3  7           8           6
     4  9                       7
     8  15                      10

Two more applications of R′ column-insert 8 and 3 to obtain:

U =  2  3      V = ∅      W =  3 .
     3  8                      4
     7  9                      5
     8  15                     6
                               7
                               10
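The whole algorithm, iterating R′ of Definition 4.1.8, can be sketched in code as follows. The encoding (tableaux as lists of rows, the column V and the stack W as plain lists) is our own, and the sketch assumes well-formed inputs as in this section; it is checked against Example 4.1.9.

```python
def mu_nu(U, V):
    """mu and nu of Definition 4.1.6 for the rectangle U and the column V."""
    c, t = len(U[0]), len(V)
    eta, prev = {}, 0
    for k in range(len(U)):
        m = next((i for i in range(prev + 1, t + 1)
                  if V[i - 1] >= U[k][c - 1]), None)
        if m is None:
            break
        eta[k + 1] = prev = m
    mu = set(range(1, t + 1)) - set(eta.values())
    return mu, (t if not mu else max(mu))

def row_insert_low(U, k, x):
    """Row-insert x into the rows below row k; return (bumped entry, new U)."""
    rows = [row[:] for row in U]
    for row in rows[k:]:
        j = next(i for i, v in enumerate(row) if v > x)  # leftmost entry > x
        row[j], x = x, row[j]
    return x, rows

def inv_col_insert(U, x):
    """Inverse column insertion of x >= U[0][-1]: in each column from right
    to left, x replaces the largest entry <= x, which is passed on leftwards."""
    rows = [row[:] for row in U]
    for j in range(len(rows[0]) - 1, -1, -1):
        i = max(i for i in range(len(rows)) if rows[i][j] <= x)
        rows[i][j], x = x, rows[i][j]
    return x, rows

def R(S, T):
    """R(S (x) T) computed by iterating R'; returns (W, U), where W is the
    resulting column (stack, top first) and U the resulting rectangle."""
    U, V, W = [row[:] for row in S], list(T), []
    c = len(U[0])
    while V:
        mu, nu = mu_nu(U, V)
        if mu:                         # row-insertion step of R'
            x = V.pop(nu - 1)
            omega = next(j for j in range(len(U)) if U[j][c - 1] > x)
            b, U = row_insert_low(U, omega, x)
        else:                          # inverse-column-insertion step (nu = t)
            b, U = inv_col_insert(U, V.pop())
        W.insert(0, b)                 # push the bumped entry on top of W
    return W, U
```

Running it on the data of Example 4.1.7 reproduces the final state of Example 4.1.9.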

Proposition 4.1.10. R(S ⊗ T) = T′ ⊗ S′, where T′ = W and S′ = U, with W and U as given at the termination of the above algorithm.


Proof. We need to verify two things: first, that T′ is indeed a decreasing sequence of length t, and thus is the row (and column) word of a column tableau of the same shape as T; and second, that S · T ≡_K T′ · S′.

For the first, we observe that when µ ≠ ∅ our construction constitutes a sequence of ←−^{row}_k x insertions with k weakly decreasing and x strictly decreasing. It follows from well-known facts (e.g. [Ful97, pp. 9-10]) that the sequence of numbers bumped out by these operations is strictly decreasing. When µ = ∅, our construction constitutes a sequence of ←−^{col⁻¹} x insertions with x strictly decreasing. It is well known that the sequence of numbers bumped out by these operations is strictly decreasing. Finally, when µ switches from being nonempty to being empty, let a be the number bumped out by the last ←−^{row}_k insertion, that is, a = (X ←−^{row}_k x)_1 for some X, k and x, and let b be the number bumped out by the first ←−^{col⁻¹} insertion that immediately follows, that is, b = (((X ←−^{row}_k x)_2) ←−^{col⁻¹} y)_1. Then a must appear in the last row of X, and b appears in the first column of (X ←−^{row}_k x)_2. Let c be the number in the lower-left corner of (X ←−^{row}_k x)_2; then we clearly have b ≤ c < a.

For the second, it suffices to show that the two cases in the definition of R′ preserve Knuth equivalence; that is, if (U′, V′, W′) = R′(U, V, W), then W · U · V ≡_K W′ · U′ · V′. For this, we need the following notation. Let row(η_{U⊗V}) be the bottom-to-top row reading of η_{U⊗V}, and let col(η_{U⊗V}) be the left-to-right column reading of η_{U⊗V}.

Example 4.1.11. Let

η_{S⊗T} =  2
           3 3    3
           4
           5
           6 7    8
           7 8    9
           10 15

be as in Example 4.1.7. Then

row(η_{S⊗T}) = (10, 15, 7, 8, 9, 6, 7, 8, 5, 4, 3, 3, 3, 2),
col(η_{S⊗T}) = (10, 7, 6, 3, 15, 8, 7, 3, 9, 8, 5, 4, 3, 2).

We claim that

(4.1.1)    col(η_{U⊗V}) = col(U ⊗ V),
(4.1.2)    row(η_{U⊗V}) ≡_K col(η_{U⊗V}).

Equation (4.1.1) is clear from the definition, since the column reading does not change. To show Equation (4.1.2): by the definition of η_{U⊗V}, we can construct an explicit sequence of Knuth transformations demonstrating row(η_{U⊗V}) ≡_K row(U ⊗ V); Equation (4.1.2) then follows from the fact that row(U ⊗ V) ≡_K col(U ⊗ V), together with Equation (4.1.1).
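The two readings of the association picture can be made concrete in code (our own encoding of the picture as a list of lines; checked against Example 4.1.11):

```python
def picture(S, T, eta):
    """The pictorial representation of eta_{S(x)T}: one list per line of the
    depiction, top line first.  A line is an S-row with its associated
    T-entry appended, or a lone unassociated T-entry; unmatched S-rows
    come at the bottom."""
    lines, k = [], 0                   # k = number of S-rows already placed
    for i, v in enumerate(T, start=1):
        if eta.get(k + 1) == i:
            lines.append(list(S[k]) + [v])
            k += 1
        else:
            lines.append([v])
    return lines + [list(row) for row in S[k:]]

def row_reading(S, T, eta):
    """row(eta): lines of the picture bottom-to-top, each left-to-right."""
    return [x for line in reversed(picture(S, T, eta)) for x in line]

def col_reading(S, T):
    """col(eta) = col(S (x) T): columns of S left-to-right, each read
    bottom-to-top, followed by the column T read bottom-to-top."""
    return [S[i][j] for j in range(len(S[0]))
            for i in reversed(range(len(S)))] + list(reversed(T))
```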


Let x be the number pushed into W by R′, that is, x = (U ←−^{row}_ω V_{ν,1})_1 or x = (U ←−^{col⁻¹} V_{ν,1})_1. It is clear from the definition of R′ that row(η) ≡_K x · row(η′), where η = η_{U⊗V} and η′ = η_{U′⊗V′}. Combining all the previous arguments, we obtain

col(W ⊗ U ⊗ V) = col(W) · col(U) · col(V)        (by definition)
               = col(W) · col(η)                 (by Equation (4.1.1))
               ≡_K col(W) · row(η)               (by Equation (4.1.2))
               ≡_K col(W) · x · row(η′)
               ≡_K col(W′) · col(U′) · col(V′)
               = col(W′ ⊗ U′ ⊗ V′).
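The Knuth-equivalence claim S · T ≡_K T′ · S′ can also be checked mechanically: two words are Knuth equivalent if and only if they have the same RSK insertion (P-) tableau. The following sketch (our own encoding) verifies this on the data of Example 4.1.9:

```python
from bisect import bisect_right

def p_tableau(word):
    """RSK insertion (P-) tableau of a word; two words are Knuth
    equivalent if and only if their P-tableaux coincide."""
    P = []
    for x in word:
        for row in P:
            j = bisect_right(row, x)
            if j == len(row):
                row.append(x)
                break
            row[j], x = x, row[j]      # bump, carry the entry downward
        else:
            P.append([x])              # x settles in a new bottom row
    return P

def tableau_word(rows):
    """Row (reading) word of a tableau: rows bottom-to-top, left-to-right."""
    return [x for row in reversed(rows) for x in row]

# Data of Example 4.1.9: S (x) T and its image T' (x) S' under R.
S = [[3, 3], [6, 7], [7, 8], [10, 15]]
T = [[2], [3], [4], [5], [8], [9]]
T2 = [[3], [4], [5], [6], [7], [10]]
S2 = [[2, 3], [3, 8], [7, 9], [8, 15]]
w1 = tableau_word(S) + tableau_word(T)
w2 = tableau_word(T2) + tableau_word(S2)
```

Here `p_tableau(w1) == p_tableau(w2)`, confirming S · T ≡_K T′ · S′ for this example.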

Now that we have a way to realize the R-matrix as a sequence of row insertions and inverse column insertions, the next step is to study the interaction between ρ and these insertions.

4.2. Interaction between ρ and bumping

Let S ∈ P_{n+1}(r, c) and x ∈ [n + 1]. We are interested in how the sliding operation ρ and the bumping S ←−^∗ x interact with each other. More precisely, let BP and BP′ be the bumping routes of S ←−^∗ x and of ρ(S) ←−^∗ x, respectively, and let SP and SP′ be the sliding routes of S and of (S ←−^∗ x)_2, respectively. We would like to know how BP and BP′, and SP and SP′, are related. Depending on whether x is less than S_{r,c} or not, we divide this discussion into two cases, treated in the first two subsections below. However, a row insertion involved in the construction of R′ is usually an insertion "into the lower portion of a tableau", so we are really interested in the interaction between ρ and ←−^{row}_k x. This is discussed in the third (last) subsection.

4.2.1. Interaction between ρ and row bumping. In the case x < S_{r,c}, we consider the interaction between ρ and S ←−^{row} x, and BP is the bumping route of this row insertion. There are three possibilities: BP and SP do not intersect, or BP intersects SP at a position in a horizontal segment, or BP intersects SP at a position in a vertical segment. A priori, we do not know whether the second and third possibilities can happen at the same time, but our later analysis will show that they are mutually exclusive.


4.2.1.1. BP and SP do not intersect. If BP ∩ SP = ∅, we can see that for any (u, v) ∈ SP, the row insertion S ←−^{row} x does not affect the relative order of the north neighbor and the west neighbor of (u, v). (There are a couple of cases to consider, but each one is obvious; the key observation is that at each bumping step, the entry bumped out is always greater than the entry bumped in.) Therefore SP′ = SP, and moreover the entries along SP′ are exactly the same as those along SP, position by position. It is also clear that sliding does not affect the row insertion in this case: the bumping routes are the same, BP = BP′, and the numbers bumped in and out are exactly the same in each row. Thus in this case, ρ commutes with ←−^{row} x.

4.2.1.2. BP intersects SP in a horizontal segment. If there is (u, v) ∈ SP ∩ BP with (u, v) on a horizontal segment of SP, then it is clear that any position of BP before (u, v) is strictly northeast of SP, and the position of BP immediately after (u, v) must be strictly southwest of SP; thus (u, v) must be the only intersection point of SP and BP. Furthermore, it is also not hard to argue that for any (u, v) ∈ SP, the row insertion S ←−^{row} x does not affect the relative order of the north and west neighbors of (u, v). Thus we have SP′ = SP, and moreover the entries along SP′ are all the same as those in the corresponding positions of SP, except for the entry at the intersection point of SP and BP. It is also clear that BP′ differs from BP at exactly one location: (u, v + 1) ∈ BP′ vs. (u, v) ∈ BP. Thus the (u, v + 1)-entry of (ρ(S) ←−^{row} x)_2 and that of ρ((S ←−^{row} x)_2) are the same. Thus in this case, ρ commutes with ←−^{row} x.

4.2.1.3. BP intersects SP in a vertical segment: one sub-case. Suppose BP intersects SP at some position on a vertical segment of SP. It is clear that BP and SP can never intersect in two distinct columns. Let (u, v) ∈ SP ∩ BP be such that u is minimal. There are two sub-cases: either v = c is the last column of S, or not. Here we consider the second sub-case. Let t = max{i | (i, v) ∈ SP ∩ BP}. First, by the definitions of SP and BP, if (u + 1, v) ∈ SP, then we must have (u + 1, v) ∈ BP. Together with the fact that BP and SP never intersect in two distinct columns, this gives SP ∩ BP = {(u, v), (u + 1, v), . . . , (t, v)}. Once this is clear, we see that SP′ must be SP \ {(i, v) | u < i ≤ t} ∪ {(i, v + 1) | u ≤ i < t}. On the other hand, it is clear that BP′ and BP agree on the first u − 1 rows. Let y be the entry of ρ(S) at the position of BP′ in the (u − 1)-st row (if u − 1 = 0, then y = x). On the u-th row of ρ(S), the first entry greater than y must be the entry at (u, v + 1). Inductively one shows that for u < i ≤ t, on the i-th row of ρ(S) the first entry greater than ρ(S)_{i−1,v+1} must be the entry at (i, v + 1). Finally, it is also clear that BP′ and BP agree from the (t + 1)-st row on. Therefore BP′ must be BP \ {(i, v) | u ≤ i ≤ t} ∪ {(i, v + 1) | u ≤ i ≤ t}. Thus in this case, ρ commutes with ←−^{row} x.


4.2.1.4. BP intersects SP in a vertical segment: the other sub-case. Here we consider the sub-case v = c. This immediately implies that BP consists of the last column of S, and SP consists of the first row and the last column of S, which implies that S_{1,c−1} ≤ x < S_{1,c} and the only n + 2 entry in S is S_{r,c}. Therefore, (S ←−^{row} x)_1 = n + 2 and ρ is not defined on (S ←−^{row} x)_2. On the other hand, since ρ(S)_{1,c} = S_{1,c−1} ≤ x, ρ(S) ←−^{row} x is not well defined either.

It turns out that if we apply ρ to (S ←−^{row} x)_1 (treating it as a single-box tableau with entry n + 2), and apply the inverse column insertion ρ(S) ←−^{col⁻¹} x, then we still have the commutativity between ρ and ←−^∗ x. A detailed discussion is found in Section 4.3.1.

4.2.2. Interaction between ρ and inverse column bumping. If x ≥ S_{r,c}, we consider the interaction between ρ and S ←−^{col⁻¹} x, and BP = ((r_c, c), ⋯, (r_1, 1)) is the bumping route of this inverse column insertion. Unlike the previous row insertion case, the inverse column insertion bumping route BP must intersect SP at exactly one position, on a vertical segment of SP. To see this, note that the sliding route SP divides S into three areas: (1) the boxes on the sliding path, that is, SP; (2) the boxes northeast of the sliding path, denoted NE (possibly empty); (3) the boxes southwest of the sliding path, denoted SW (possibly empty). Let J ∈ [c] be such that (r_J, J) is the first box of BP that is not in NE. We argue that (r_J, J) ∈ SP. Suppose not; then it must be that (r_J, J) ∈ SW, and by the choice of J we know that either J = c or (r_{J+1}, J + 1) ∈ NE. It is clearly impossible that J = c. If (r_{J+1}, J + 1) ∈ NE, then it must be that r_{J+1} < r_J − 1 (for otherwise (r_{J+1}, J + 1) and (r_J, J) would not be separated by SP). In addition, there must be some r_{J+1} < R < r_J such that (R, J) and (R, J + 1) are both in SP. But this implies S_{R,J} < S_{r_J,J} ≤ S_{r_{J+1},J+1} ≤ S_{R−1,J+1}, contradicting the fact that (R, J) and (R, J + 1) are both in SP while (R − 1, J + 1) is not. Thus (r_J, J) ∈ SP. Moreover, (r_J, J) cannot be on a horizontal segment of SP: if it were, then (r_J, J + 1) ∈ SP and r_{J+1} > r_J, which implies S_{r_J,J} > S_{r_{J+1},J+1}, a contradiction. The fact that (r_J, J) must be on a vertical segment of SP then implies that (r_J, J) is the only intersection. (If (r_J, J) is not a left-turn corner the claim is clear; otherwise S_{r_J+1,J−1} ≤ S_{r_J,J}, and the claim is also clear.)

For any (u, v) ∈ SP, it is clear that ←−^{col⁻¹} x does not change the relative order of its north and west neighbors; thus SP′ = SP, and each entry along SP′ is the same as the corresponding entry along SP except at position (r_J, J), which in SP′ contains S_{r_{J+1},J+1}.


On the other hand, since (r_J, J) is on a vertical segment of SP, it is clear that BP′ = BP \ {(r_J, J)} ∪ {(r_J + 1, J)}. Furthermore, the entries along BP′ are exactly the same as those along BP, column by column. Thus in this case, ρ commutes with ←−^{col⁻¹} x.

4.2.3. Interaction between ρ and ←−^{row}_k x. In Section 4.2.1 we discussed the interaction between ρ and ←−^{row} x, while in Definition 4.1.8 we saw that we really need to consider the interplay between ρ and row insertion "into the lower portion of a tableau": ←−^{row}_k x. Let x be such that there is a row index k with S_{k,c} < x < S_{k+1,c}. There are two possibilities: either (r, c) ∈ BP or not.

4.2.3.1. (r, c) ∉ BP. We first claim that ρ(S)_{k,c} < x < ρ(S)_{k+1,c} in this case. To see this, suppose otherwise; then it must be that x ≥ ρ(S)_{k+1,c}. This could happen only if (k + 1, c), (k + 2, c), . . . , (r, c) are on the sliding route. Given x ≥ ρ(S)_{k+1,c} = max{S_{k,c}, S_{k+1,c−1}}, we have x ≥ S_{k+1,c−1}. Thus the bumping route of S ←−^{row}_k x would be ((k + 1, c), (k + 2, c), . . . , (r, c)), ending at the (r, c)-box, a contradiction. It is also clear, by the same argument as in Section 4.2.1, that SP′ and SP agree on the first k rows. Therefore, we can consider the upper portion upp_k(S) and the lower portion low_k(S) of S separately, and it is clear that ρ and ←−^{row}_k x commute in this case.

4.2.3.2. (r, c) ∈ BP. As in the case of Section 4.2.1.4, ρ((S ←−^{row}_k x)_2) and ρ(S) ←−^{row}_k x are not well defined in this case. Nevertheless, it turns out that if we consider ←−^{row}_k x as just one step of R, then everything works out and R commutes with ρ. A detailed discussion is found in Section 4.3.2.2.

4.3. The proof of the commutativity

Let p = S ⊗ T ∈ LM((r, c), (t, 1)), where S ∈ Dom(ρ) and n + 2 appears in S but not in T. Let us again warm ourselves up with the simplest situation, where t = 1, that is, T is a single box. This case will be used in the general setting.

4.3.1. T is a single box.

Let

p = S ⊗ T =  S_{1,1} S_{1,2} ⋯ S_{1,c}
             S_{2,1} S_{2,2} ⋯ S_{2,c}   ⊗  T_{1,1}
               ⋯       ⋯     ⋯   ⋯
             S_{r,1} S_{r,2} ⋯ S_{r,c}

Due to our assumptions, S_{r,c} = n + 2 and T_{1,1} ≠ n + 2.


Recall that in the case that T is a single box, R(S ⊗ T) is essentially S ←−^∗ T_{1,1}. From the discussion of Section 4.2, we see that in all situations except the one of Section 4.2.1.4, the commutativity between R and ρ is clear. Below we elaborate on the situation of Section 4.2.1.4 and show that commutativity also holds there. As we pointed out before, in this situation the bumping route BP must consist of the rightmost column of S, namely BP = ((1, c), ⋯, (r, c)). This implies:

(4.3.1)    S_{1,c−1} ≤ T_{1,1} < S_{1,c},
           S_{i,c−1} ≤ S_{i−1,c} < S_{i,c}    for 2 ≤ i ≤ r.

In particular,

R(p) = T′ ⊗ S′ =  S_{r,c}  ⊗  S_{1,1} S_{1,2} ⋯ T_{1,1}
                              S_{2,1} S_{2,2} ⋯ S_{1,c}
                                ⋯       ⋯     ⋯   ⋯
                              S_{r,1} S_{r,2} ⋯ S_{r−1,c}

Equation (4.3.1) at i = r implies that n + 2 appears only in box (r, c) of S. Thus in R(p), n + 2 appears only in the single box tableau T 0 . Hence R(p) ∈ Dom(ρ), and by definition of ρ,

ρ(R(p)) = ρ(T′) ⊗ S′ =  1  ⊗  S_{1,1} S_{1,2} ⋯ T_{1,1}
                              S_{2,1} S_{2,2} ⋯ S_{1,c}
                                ⋯       ⋯     ⋯   ⋯
                              S_{r,1} S_{r,2} ⋯ S_{r−1,c}

On the other hand, (4.3.1) and the definition of ρ imply that the sliding route SP of p consists of the first row and the last column of S, namely SP = ((r, c), (r − 1, c), . . . , (1, c), (1, c − 1), . . . , (1, 1)). Thus,

ρ(p) = ρ(S) ⊗ T =  1       S_{1,1} ⋯ S_{1,c−1}
                   S_{2,1} S_{2,2} ⋯ S_{1,c}     ⊗  T_{1,1}
                     ⋯       ⋯     ⋯   ⋯
                   S_{r,1} S_{r,2} ⋯ S_{r−1,c}

Now (4.3.1) implies that R(ρ(p)) corresponds to ρ(S) ←−^{col⁻¹} T_{1,1}. Furthermore, the bumping route of this insertion consists of the first row of ρ(S): ((1, c), . . . , (1, 1)). This is because, by the construction of ρ(S),


ρ(S)_{1,k} = S_{1,k−1} < S_{2,k−1} for k > 1. Hence,

R(ρ(p)) =  1  ⊗  S_{1,1} S_{1,2} ⋯ T_{1,1}
                 S_{2,1} S_{2,2} ⋯ S_{1,c}
                   ⋯       ⋯     ⋯   ⋯
                 S_{r,1} S_{r,2} ⋯ S_{r−1,c}

which is the same as ρ(R(p)).
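The sliding operation ρ used throughout this section is defined in Chapter 2. For concreteness, the following sketch reconstructs it as a single jeu-de-taquin slide; the slide rule, and in particular the tie-breaking choice, is our own reconstruction (an assumption), checked against the tableau S of Example 4.3.1 below, whose unique n + 2 = 9 sits in the corner.

```python
def rho(S):
    """Sketch of rho on a rectangular tableau whose unique maximal entry
    sits in the corner (r, c): remove that entry, slide the hole to (1, 1)
    (the larger of the north and west neighbours moves into the hole,
    north winning ties -- our reconstruction), and fill (1, 1) with 1."""
    T = [row[:] for row in S]
    i, j = len(T) - 1, len(T[0]) - 1       # hole starts at the (r, c) corner
    while (i, j) != (0, 0):
        north = T[i - 1][j] if i > 0 else None
        west = T[i][j - 1] if j > 0 else None
        if west is None or (north is not None and north >= west):
            T[i][j] = north                # north neighbour slides down
            i -= 1
        else:
            T[i][j] = west                 # west neighbour slides right
            j -= 1
    T[0][0] = 1
    return T
```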

4.3.2. T is a single column tableau.

Let

p = S ⊗ T =  S_{1,1} S_{1,2} ⋯ S_{1,c}      T_{1,1}
             S_{2,1} S_{2,2} ⋯ S_{2,c}   ⊗  T_{2,1}
               ⋯       ⋯     ⋯   ⋯            ⋮
             S_{r,1} S_{r,2} ⋯ S_{r,c}      T_{t,1}

If µ_p = ∅, then the association must look like

η_p =  S_{1,1} S_{1,2} ⋯ S_{1,c}    T_{1,1}
       S_{2,1} S_{2,2} ⋯ S_{2,c}    T_{2,1}
         ⋯       ⋯     ⋯   ⋯          ⋮
         ⋯       ⋯     ⋯   ⋯        T_{t,1}
         ⋯       ⋯     ⋯   ⋯
       S_{r,1} S_{r,2} ⋯ S_{r,c}

Then R is a sequence of ←−^{col⁻¹} operations, and a t-fold application of the argument in Section 4.2.2 proves that R commutes with ρ. Therefore, without loss of generality, we may assume that µ_p ≠ ∅. Then the first application of R′ must be a row insertion, and there are two possibilities.

4.3.2.1. The bumping route of S ←−^{row}_ω T_{ν,1} does not end at (r, c). If the first application of R′ does not affect the box (r, c), then later steps will not affect box (r, c) either. Repeated application of the argument of Section 4.2.3.1 reduces the problem to the case µ = ∅, and by the previous argument we are done.

4.3.2.2. The bumping route of S ←−^{row}_ω T_{ν,1} ends at (r, c). Unlike the previous case, R′ and ρ do not commute in general. Notice in particular that after the first row insertion, ρ is no longer defined on U. To deal with this situation, we will define a modified sliding operator ρ̃.


Let us first denote

ℛ_0 = {U ⊗ V ∈ 𝒮_0 ∩ LM((r, c), (t, 1)) | the bumping route of U ←−^{row}_ω V_{ν,1} ends at (r, c)}.

This is the collection of objects under our current consideration.

Example 4.3.1. Let

p = S ⊗ T =  1 2 2 3      2
             2 3 4 5      3
             4 5 5 6   ⊗  4
             6 6 6 8      6
             7 8 8 9

Then

η_p =  2
       1 2 2 3    3
       4
       2 3 4 5    6
       4 5 5 6
       6 6 6 8
       7 8 8 9

We can see that ν_p = 3 and ω_p = 1, and the bumping route of S ←−^{row}_1 T_{3,1} ends at the lower right corner. Thus p ∈ ℛ_0.

Let ℛ_1 be the transitive closure of ℛ_0 under R′. It is clear that ℛ_1 ⊂ 𝒮_1, but we want to make some information in ℛ_1 more explicit, and think of its elements as 4-tuples instead of 3-tuples. In the following recursive construction, we expand each 3-tuple (U, V, W) ∈ ℛ_1 to a 4-tuple (U, V, W, s). The added datum s is an element of [r], and should be thought of as a separator between the s-th and (s + 1)-st rows of U.

Definition 4.3.2.
• (U, V, ∅) ∈ ℛ_0 is expanded to (U, V, ∅, r).
• If p = (U, V, W) ∈ ℛ_1 is expanded to (U, V, W, s), then R′(p) = (U′, V′, W′) is expanded to (U′, V′, W′, s′), where s′ is obtained as follows:
  – If µ_p = ∅, then s′ = min(s, ω_p).
  – If µ_p ≠ ∅ (an easy induction shows that in this case ω_p ≤ s always), and if the bumping route of U ←−^{row}_{ω_p} V_{ν_p,1} passes (s, c), then s′ = ω_p; otherwise s′ = s.

There is a convenient way to represent elements of ℛ_1 (as 4-tuples) graphically. The following example explains it.

Example 4.3.3. Let p = (S, T, ∅) be as in Example 4.3.1. Then the 4-tuple corresponding to p is (S, T, ∅, 5).


Graphically we represent it by:

(∅,  2
     1 2 2 3    3
     4
     2 3 4 5    6
     4 5 5 6
     6 6 6 8
     7 8 8 9 )

Here, the first object ∅ is the empty stack, which corresponds to the third object of the 4-tuple. The second object is the pictorial representation of η_{S⊗T}, which corresponds to S ⊗ T. The red entries highlight the sliding path (from southeast to northwest) on upp_s(S); thus the beginning row of this path determines s. Note that in this example, s = r = 5, so upp_s(S) = S.

After applying R′ to the above p, which corresponds to ←−^{row}_1 4, we get

R′(p) = ( 1 2 2 3      2
          2 3 4 4      3
          4 5 5 5   ,  6   ,  9 ).
          6 6 6 6
          7 8 8 8

The corresponding 4-tuple is then (U′, V′, W′, 2), whose graphical representation is

(9,  2
     1 2 2 3    3
     2 3 4 4    6
     4 5 5 5
     6 6 6 6
     7 8 8 8 )

The fact that s = 2 is shown by the fact that the red entries start at the second row (going from southeast to northwest).

Definition 4.3.4. The modified sliding operator ρ̃ maps ℛ_1 \ ℛ_0 (4-tuples) to 𝒮_1 (3-tuples). It is defined by ρ̃(U, V, W, s) = (U′, V′, W′), where U′ = Stack(ρ(upp_s(U)), low_s(U)); V′ is obtained from V by inserting the number U_{s,c} at the position such that V′ is a single-column tableau; and W′ is obtained from W by removing the number at the bottom of the stack.

Example 4.3.5.

ρ̃( 9,  2                  =  (∅,  2
       1 2 2 3    3               1 1 2 3    3
       2 3 4 4    6               2 2 3 4    4
       4 5 5 5                    4 5 5 5    6
       6 6 6 6                    6 6 6 6
       7 8 8 8 )                  7 8 8 8 )

Remark 4.3.6. For Definition 4.3.4 to make sense, the following two conditions must hold. Firstly, when we remove U_{s,c}, slide the "hole" along the sliding path towards the upper left corner, and place 1 in the final "hole", we must still get a valid tableau. In other words, upp_s(U) ∈ Dom(ρ) if upp_s(U) is regarded as an element of P_m(s, c) for m = U_{s,c} − 1, and a situation like Example 1.4.12 will not


happen. Only after this condition is verified can we be sure that U′ = Stack(ρ(upp_s(U)), low_s(U)) is a valid tableau. This condition can be verified by showing that elements of ℛ_0 satisfy it and that R′ preserves it. Secondly, when we insert the number U_{s,c} into V, there should be a spot for it so that V′ is a valid column tableau. This can also be shown by induction.

Equipped with all the machinery above, we come back to prove the commutativity between R and ρ in the case that the bumping route of S ←−^{row}_{ω_{S⊗T}} T_{ν_{S⊗T},1} ends at (r, c).

We make the following claims:

Lemma 4.3.7. For p ∈ ℛ_0, we have ρ(p) = ρ̃ ∘ R′(p). For any p ∈ ℛ_1 \ ℛ_0, we have ρ̃ ∘ R′ = R′ ∘ ρ̃. For p = (U, ∅, W, 1) ∈ ℛ_1, we have R′(ρ̃(p)) = ρ(p).

We use the following example to demonstrate how the commutativity follows from these claims. The proofs of these claims are found in Appendix 4.A.


Example 4.3.8. Starting from p = (S, T, ∅) of Example 4.3.1, the left column below repeatedly applies R′ to p, and the right column repeatedly applies R′ to ρ(p). Left column:

p = (∅,  2                 p1 = (9,  2                p2 = (7,  1 2 2 2  3
         1 2 2 3  3                  1 2 2 3  3             9,  2 3 3 4  6
         4                           2 3 4 4  6                 4 4 5 5
         2 3 4 5  6                  4 5 5 5                    5 6 6 6
         4 5 5 6                     6 6 6 6                    6 8 8 8 )
         6 6 6 8                     7 8 8 8 )
         7 8 8 9 )

p3 = (6,  1 2 2 2  3       p4 = (2,  1 2 2 3
      7,  2 3 3 4                6,  2 3 3 4
      9,  4 4 5 5                7,  4 4 5 5
          5 6 6 6                9,  5 6 6 6
          6 8 8 8 )                  6 8 8 8 )

Right column:

ρ(p) = (∅,  2              q1 = (7,  1 1 2 2  3       q2 = (6,  1 1 2 2  3
            1 1 2 3  3               2 2 3 3  4             7,  2 2 3 3  4
            2 2 3 4  4               4 4 5 5  6                 4 4 5 5
            4 5 5 5  6               5 6 6 6                    5 6 6 6
            6 6 6 6                  6 8 8 8 )                  6 8 8 8 )
            7 8 8 8 )

q3 = (2,  1 1 2 2  3       q4 = (1,  1 2 2 3
      6,  2 3 3 4                2,  2 3 3 4
      7,  4 4 5 5                6,  4 4 5 5
          5 6 6 6                7,  5 6 6 6
          6 8 8 8 )                  6 8 8 8 )

By the first claim of Lemma 4.3.7, the top of the right column satisfies ρ(p) = ρ̃(R′(p)) = ρ̃(p1). By the second claim, each subsequent entry of the right column is simultaneously R′ of the previous right-column entry and ρ̃ of the corresponding left-column entry: q1 = ρ̃(p2), q2 = ρ̃(p3), q3 = ρ̃(p4). Finally, by the third claim applied to p4 = (U, ∅, W, 1), the bottom entry satisfies q4 = R′(ρ̃(p4)) = ρ(p4), where ρ now acts on the stack W. Thus the bottom object is both R(ρ(p)) (following the right column) and ρ(R(p)) (applying ρ to p4), which proves the commutativity for this p.

2 3 5 6 8

Appendix 4.A. Proof of Lemma 4.3.7

Lemma. For p ∈ ℛ_0, we have ρ(p) = ρ̃ ∘ R′(p). For any p ∈ ℛ_1 \ ℛ_0, we have ρ̃ ∘ R′ = R′ ∘ ρ̃. For p = (U, ∅, W, 1) ∈ ℛ_1, we have R′(ρ̃(p)) = ρ(p).

Proof. For p ∈ ℛ_0, ρ(p) = ρ̃ ∘ R′(p) follows by chasing the definitions of ρ, ρ̃ and R′. Similarly, for p = (U, ∅, W, 1) ∈ ℛ_1, R′(ρ̃(p)) = ρ(p) also follows directly by definition; the only thing to remember is that here ρ acts on W instead of acting on U. The main focus of this proof is to show that for any p ∈ ℛ_1 \ ℛ_0, we have ρ̃ ∘ R′ = R′ ∘ ρ̃.


We aim to prove this statement by cases, depending on how R′ interacts with ρ̃. Before we start, let us introduce some common notation used across these cases. Let p = (U, V, W, s) ∈ ℛ_1 \ ℛ_0. Denote ρ̃(p) = (ρ̃(U), ρ̃(V), ρ̃(W)), R′(p) = (R′(U), R′(V), R′(W), R′(s)), R′ ∘ ρ̃(p) = (R′ ∘ ρ̃(U), R′ ∘ ρ̃(V), R′ ∘ ρ̃(W)), and ρ̃ ∘ R′(p) = (ρ̃ ∘ R′(U), ρ̃ ∘ R′(V), ρ̃ ∘ R′(W)).

4.A.1. Inductive case 1: µp ≠ ∅ and R0 does bump out Us,c. To show the commutativity of ρ̃ and R0 on p, it suffices to show the commutativity on U, V, and W separately.

First consider V. Recall that ρ̃(V) is obtained by inserting Us,c into V in such a way that the column-increasing rule is preserved. Assume that the newly added row is the kth row. In fact, we can give an explicit expression for k: if η(s) is not defined then k = t + 1, otherwise k = η(s) − 1. We claim that νρ̃(p) = k. To see this, it suffices to note the following facts: (1) ηp(s) > ηρ̃(p)(s); (2) ηρ̃(p)(s + 1) is either k + 1 or is undefined. Thus R0 ◦ ρ̃(V) = V. On the other hand, R0(V) is obtained by removing Vν,1 from V. By the definition of R0, R0(U)R0(s),c contains the number Vν,1. Thus ρ̃ ◦ R0(V) is obtained from R0(V) by inserting Vν,1 back, which recovers V. Thus ρ̃ ◦ R0(V) = V.

Next consider U. Let γ = ((s, c), . . . , (1, 2), (1, 1)) be the ρ̃-sliding route of p. From the discussion above we know that R0(ρ̃(U)) = Stack(ρ(upps(U)), (lows(U) ←row Us,c)2). On the other hand, the given condition that R0 bumps out Us,c implies that the initial segment of the sliding route looks like ((s, c), (s − 1, c), . . . , (ω + 1, c)), and R0(U) can be thought of as obtained from U by removing Us,c, row-inserting Us,c into lows(U), sliding the empty box to (ω + 1, c), and then filling the empty box with Vν,1. We note that R0(γ) is exactly the tail of γ, so ρ̃ ◦ R0(U) = R0 ◦ ρ̃(U).

Finally, ρ̃ ◦ R0(W) = R0 ◦ ρ̃(W), since both of them are just W ← (lows(U) ←row Us,c)1.
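The row insertion invoked in the case analysis above is ordinary Schensted bumping. As a concrete illustration (not part of the proof), the following sketch row-inserts a letter into a semistandard tableau and records the bumping route; the function name `row_insert` and the list-of-rows encoding are our own conventions, not notation from the text.

```python
# Minimal sketch of Schensted row insertion with its bumping route.
# A tableau is a list of weakly increasing rows (top row first).
import bisect

def row_insert(tableau, x):
    """Row-insert x; return (new_tableau, route).

    route lists the (row, column) cells touched by the bumping
    sequence, ending at the newly created cell.
    """
    t = [row[:] for row in tableau]
    route = []
    r = 0
    while True:
        if r == len(t):
            # x falls off the bottom: start a new row
            t.append([x])
            route.append((r, 0))
            return t, route
        row = t[r]
        # the leftmost entry strictly larger than x gets bumped
        c = bisect.bisect_right(row, x)
        if c == len(row):
            # nothing to bump: x lands at the end of this row
            row.append(x)
            route.append((r, c))
            return t, route
        route.append((r, c))
        row[c], x = x, row[c]
        r += 1

# Inserting 1 bumps a 2 from row 0, which bumps a 3 from row 1.
t, route = row_insert([[1, 2, 2], [3, 3]], 1)
```

With this encoding, the "initial segment" language of the proof corresponds to a prefix of the returned `route` list.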


4.A.2. Inductive case 2: µp ≠ ∅ and R0 does not bump out Us,c. First consider V. We claim that νρ̃(p) = νp. This can be seen from the fact that, under the current assumption, ηp(j) = ηρ̃(p)(j) for j ≤ s. Thus both ρ̃ ◦ R0(V) and R0 ◦ ρ̃(V) are obtained from V by removing Vν,1 and inserting Us,c while keeping the column-increasing rule. The statements about U and W follow from the discussion in Section 4.2.

4.A.3. Inductive case 3: µp = ∅ and ωp > s. First consider V. Let k be defined as above; the key fact we need is that k < ρ̃(V)ρ̃(ν),1, which is implied by properties C, D, and E. Thus both ρ̃ ◦ R0(V) and R0 ◦ ρ̃(V) are obtained from V by removing Vν,1 and inserting Us,c while keeping the column-increasing rule. For U, we have

ρ̃ ◦ R0(U) = R0 ◦ ρ̃(U) = Stack(ρ(upps(U)), (lows(U) ←col−1 Vt,1)2).

Finally, ρ̃ ◦ R0(W) = R0 ◦ ρ̃(W), since both of them are just W ← (lows(U) ←row Us,c)1.

4.A.4. Inductive case 4: µp = ∅ and ωp ≤ s. Let e = (s − 1, x) be the first entry of γ with e1 = s − 1, and consider the initial segment of γ ending at e, namely γe = ((s, c), . . . , (s, x), (s − 1, x)). By property E we have ν = ω = s − 1. We claim the following facts:

(1) The bumping route of U ←col−1 Vν,1 starts with ((s − 1, c), . . . , (s − 1, x), (y, x − 1)) for some y ≥ s.
(2) ρ̃(V) is obtained from V by appending Us,c at the bottom, and νρ̃(p) = ωρ̃(p) = s(p).
(3) The bumping route of ρ̃(U) ←col−1 Us,c starts with ((s, c), . . . , (s, x), (y, x − 1)).

All of the above statements are straightforward to verify, and together they imply the commutativity in this case.
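The arrow ←col−1 above denotes a specific inverse column-insertion variant defined earlier in the chapter; purely as an illustration of the generic column-bumping mechanism behind such routes, here is a hedged sketch of ordinary column insertion into strictly increasing columns. The name `col_insert` and the list-of-columns encoding are our own conventions, and this sketch does not reproduce the exact convention used in the proof.

```python
# Minimal sketch of column insertion with its bumping route.
# A tableau is stored as a list of strictly increasing columns
# (leftmost column first); cells are (row, column) pairs.
def col_insert(columns, x):
    """Column-insert x; return (new_columns, route).

    route lists the (row, column) cells touched by the bumping
    sequence, ending at the newly created cell.
    """
    cols = [col[:] for col in columns]
    route = []
    c = 0
    while True:
        if c == len(cols):
            # x falls off the right edge: start a new column
            cols.append([x])
            route.append((0, c))
            return cols, route
        col = cols[c]
        # the topmost entry >= x gets bumped (columns strictly increase)
        r = next((i for i, v in enumerate(col) if v >= x), len(col))
        if r == len(col):
            # nothing to bump: x lands at the bottom of this column
            col.append(x)
            route.append((r, c))
            return cols, route
        route.append((r, c))
        col[r], x = x, col[r]
        c += 1

# Inserting 2 bumps the 3 from column 0 into column 1.
cols, route = col_insert([[1, 3], [2]], 2)
```

The claims in case 4 compare exactly such routes: two insertions whose routes agree after their first step force the resulting tableaux to agree.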


Bibliography

[Bax82]

R. J. Baxter, Exactly solved models in statistical mechanics, Academic Press Inc. [Harcourt Brace Jovanovich Publishers], London, 1982.

[BERS]

A. Berget, S. Eu, V. Reiner, and D. Stanton, Bicyclic sieving and plethysm, private communication with Andrew Berget.

[Bet31]

H. Bethe, Zur Theorie der Metalle, Zeitschrift für Physik A Hadrons and Nuclei 71 (1931), 205–226, doi:10.1007/BF01341708.

[BR07]

D. Bessis and V. Reiner, Cyclic sieving of noncrossing partitions for complex reflection groups, 2007, arXiv:math/0701792.

[DS06]

L. Deka and A. Schilling, New fermionic formula for unrestricted Kostka polynomials, J. Combin. Theory Ser. A 113 (2006), no. 7, 1435–1461.

[EF08]

S.-P. Eu and T.-S. Fu, The cyclic sieving phenomenon for faces of generalized cluster complexes, Adv. in Appl. Math. 40 (2008), no. 3, 350–376.

[EG87]

P. Edelman and C. Greene, Balanced tableaux, Adv. in Math. 63 (1987), no. 1, 42–99.

[Ful97]

W. Fulton, Young tableaux, London Mathematical Society Student Texts 35, 1997.

[Hai92]

M. D. Haiman, Dual equivalence with applications, including a conjecture of Proctor, Discrete Math. 99 (1992), no. 1-3, 79–113.

[Hoe74]

P. N. Hoefsmit, Representations of Hecke algebras of finite groups with BN-pairs of classical type, Ph.D. thesis, University of British Columbia, 1974.

[Kas90]

M. Kashiwara, Crystalizing the q-analogue of universal enveloping algebras, Comm. Math. Phys. 133 (1990), no. 2, 249–260.

[Kas95]

, On crystal bases, Representations of groups (Banff, AB, 1994), CMS Conf. Proc., vol. 16, Amer. Math. Soc., Providence, RI, 1995, pp. 155–197.

[KKM+92] S.-J. Kang, M. Kashiwara, K. C. Misra, T. Miwa, T. Nakashima, and A. Nakayashiki, Affine crystals and vertex models, Infinite analysis, Part A, B (Kyoto, 1991), Adv. Ser. Math. Phys., vol. 16, World Sci. Publ., River Edge, NJ, 1992, pp. 449–484.

[KKR86]

S. V. Kerov, A. N. Kirillov, and N. Y. Reshetikhin, Combinatorics, the Bethe ansatz and representations of the symmetric group, Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 155 (1986), no. Differentsialnaya Geometriya, Gruppy Li i Mekh. VIII, 50–64, 193.

[KM09]

C. Krattenthaler and T. W. Müller, Cyclic sieving for generalised non-crossing partitions associated to complex reflection groups of exceptional type – the details, 2009, arXiv:1001.0030.

[KOS+ 06]

A. Kuniba, M. Okado, R. Sakamoto, T. Takagi, and Y. Yamada, Crystal interpretation of Kerov-Kirillov-Reshetikhin bijection, Nuclear Phys. B 740 (2006), no. 3, 299–327.

[KR86]

A. N. Kirillov and N. Y. Reshetikhin, The Bethe ansatz and the combinatorics of Young tableaux, Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 155 (1986), no. Differentsialnaya Geometriya, Gruppy Li i Mekh. VIII, 65–115, 194.


[KS08]

A. Kuniba and R. Sakamoto, Combinatorial Bethe ansatz and generalized periodic box-ball system, Rev. Math. Phys. 20 (2008), no. 5, 493–527.

[KSS02]

A. N. Kirillov, A. Schilling, and M. Shimozono, A bijection between Littlewood-Richardson tableaux and rigged configurations, Selecta Math. (N.S.) 8 (2002), no. 1, 67–135.

[MR94]

C. Malvenuto and C. Reutenauer, Evacuation of labelled graphs, Discrete Math. 132 (1994), no. 1-3, 137–143.

[PPR09]

T. K. Petersen, P. Pylyavskyy, and B. Rhoades, Promotion and cyclic sieving via webs, J. Algebraic Combin. 30 (2009), no. 1, 19–41.

[PS09]

T. K. Petersen and L. Serrano, Cyclic sieving for longest reduced words in the hyperoctahedral group, 2009, arXiv:0905.2650.

[PW10]

S. Pon and Q. Wang, Promotion and evacuation on standard Young tableaux of rectangle and staircase shape, 2010, arXiv:1003.2728.

[Rho10]

B. Rhoades, Cyclic sieving, promotion, and representation theory, J. Combin. Theory Ser. A 117 (2010), no. 1, 38–76.

[RSW04]

V. Reiner, D. Stanton, and D. White, The cyclic sieving phenomenon, J. Combin. Theory Ser. A 108 (2004), no. 1, 17–50.

[Sak08]

R. Sakamoto, Crystal interpretation of Kerov-Kirillov-Reshetikhin bijection. II. Proof for sln case, J. Algebraic Combin. 27 (2008), no. 1, 55–98.

[Sak09a]

R. Sakamoto, Finding rigged configurations from paths, New trends in combinatorial representation theory, RIMS Kôkyûroku Bessatsu, B11, Res. Inst. Math. Sci. (RIMS), Kyoto, 2009, pp. 1–17.

[Sak09b]

, Kirillov-Schilling-Shimozono bijection as energy functions of crystals, Int. Math. Res. Not. IMRN (2009), no. 4, 579–614.

[SCc08]

The Sage-Combinat community, Sage-Combinat: enhancing Sage as a toolbox for computer exploration in algebraic combinatorics, 2008, http://combinat.sagemath.org.

[Sch63]

M. P. Schützenberger, Quelques remarques sur une construction de Schensted, Math. Scand. 12 (1963), 117–128.

[Sch72]

, Promotion des morphismes d'ensembles ordonnés, Discrete Math. 2 (1972), 73–94.

[Sch76]

, Evacuations, Colloquio Internazionale sulle Teorie Combinatorie (Roma, 1973), Tomo I, Atti dei Convegni Lincei, No. 17, Accad. Naz. Lincei, Rome, 1976, pp. 257–264.

[Sch77]

, La correspondance de Robinson, Combinatoire et Représentation du Groupe Symétrique, Lecture Notes in Mathematics, Springer, Berlin/Heidelberg, 1977, pp. 59–113.

[Sch06]

A. Schilling, Crystal structure on rigged configurations, Int. Math. Res. Not. (2006), Art. ID 97376, 27 pp.

[Sch07]

, X = M theorem: fermionic formulas and rigged configurations under review, Combinatorial aspect of integrable systems, MSJ Mem., vol. 17, Math. Soc. Japan, Tokyo, 2007, pp. 75–104.

[Shi01]

M. Shimozono, A cyclage poset structure for Littlewood-Richardson tableaux, European J. Combin. 22 (2001), no. 3, 365–393.

[Shi02]

, Affine type A crystal structure on tensor products of rectangles, Demazure characters, and nilpotent varieties, J. Algebraic Combin. 15 (2002), no. 2, 151–187.

[SSW09]

B. Sagan, J. Shareshian, and M. L. Wachs, Eulerian quasisymmetric functions and cyclic sieving, 2009, arXiv:0909.3143.

[Sta99]

R. Stanley, Enumerative combinatorics, vol. 2, Cambridge University Press, New York/Cambridge, 1999.

[Sta09]

R. P. Stanley, Promotion and evacuation, Electron. J. Combin. 16 (2009), no. 2, Special volume in honor of Anders Björner, Research Paper 9, 24 pp.

[Ste96]

J. R. Stembridge, Canonical bases and self-evacuating tableaux, Duke Math. J. 82 (1996), no. 3, 585–606.


[Ste03]

, A local characterization of simply-laced crystals, Trans. Amer. Math. Soc. 355 (2003), no. 12, 4807–4823 (electronic).

[StSc09]

W. Stein and the Sage community, Sage Mathematics Software, version 4.2, 2009, http://www.sagemath.org/.

[SW99]

A. Schilling and S. O. Warnaar, Inhomogeneous lattice paths, generalized Kostka polynomials and An−1 supernomials, Comm. Math. Phys. 202 (1999), no. 2, 359–401.

[SW10]

A. Schilling and Q. Wang, Promotion operator on rigged configurations of type A, Electron. J. Combin. 17 (2010), no. 1, Research Paper 24, 43.

[Wan09]

Q. Wang, A promotion operator on rigged configurations (extended abstract), Discrete Mathematics and Theoretical Computer Science Proc. AK (2009), 887–898.

[Wen88]

H. Wenzl, Hecke algebras of type An and subfactors, Invent. Math. 92 (1988), no. 2, 349–383.

[Wes09]

B. W. Westbury, Invariant tensors and the cyclic sieving phenomenon, 2009, arXiv:0912.1512.

[You52]

A. Young, On quantitative substitutional analysis. IX, Proc. London Math. Soc. (2) 54 (1952), 219–253.