Clifford Algebras: An Introduction [1 ed.] 9781107096387, 9781107422193

Clifford algebras, built up from quadratic spaces, have applications in many areas of mathematics, as natural generalizations of the complex numbers and the quaternions.


English, 209 pages, 2011




London Mathematical Society Student Texts 78

Clifford Algebras: An Introduction
D. J. H. Garling

LONDON MATHEMATICAL SOCIETY STUDENT TEXTS
Managing Editor: Professor D. Benson, Department of Mathematics, University of Aberdeen, UK

LONDON MATHEMATICAL SOCIETY STUDENT TEXTS 78

Clifford Algebras: An Introduction
D. J. H. GARLING
Emeritus Reader in Mathematical Analysis, University of Cambridge, and Fellow of St John's College, Cambridge

CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Tokyo, Mexico City

Cambridge University Press, The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9781107096387

© D. J. H. Garling 2011

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2011
Printed in the United Kingdom at the University Press, Cambridge

A catalogue record for this publication is available from the British Library

ISBN 978-1-107-09638-7 Hardback
ISBN 978-1-107-42219-3 Paperback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Contents

Introduction

Part One: The algebraic environment

1 Groups and vector spaces
  1.1 Groups
  1.2 Vector spaces
  1.3 Duality of vector spaces

2 Algebras, representations and modules
  2.1 Algebras
  2.2 Group representations
  2.3 The quaternions
  2.4 Representations and modules
  2.5 Module homomorphisms
  2.6 Simple modules
  2.7 Semi-simple modules

3 Multilinear algebra
  3.1 Multilinear mappings
  3.2 Tensor products
  3.3 The trace
  3.4 Alternating mappings and the exterior algebra
  3.5 The symmetric tensor algebra
  3.6 Tensor products of algebras
  3.7 Tensor products of super-algebras

Part Two: Quadratic forms and Clifford algebras

4 Quadratic forms
  4.1 Real quadratic forms
  4.2 Orthogonality
  4.3 Diagonalization
  4.4 Adjoint mappings
  4.5 Isotropy
  4.6 Isometries and the orthogonal group
  4.7 The case d = 2
  4.8 The Cartan-Dieudonné theorem
  4.9 The groups SO(3) and SO(4)
  4.10 Complex quadratic forms
  4.11 Complex inner-product spaces

5 Clifford algebras
  5.1 Clifford algebras
  5.2 Existence
  5.3 Three involutions
  5.4 Centralizers, and the centre
  5.5 Simplicity
  5.6 The trace and quadratic form on A(E, q)
  5.7 The group G(E, q) of invertible elements of A(E, q)

6 Classifying Clifford algebras
  6.1 Frobenius' theorem
  6.2 Clifford algebras A(E, q) with dim E = 2
  6.3 Clifford's theorem
  6.4 Classifying even Clifford algebras
  6.5 Cartan's periodicity law
  6.6 Classifying complex Clifford algebras

7 Representing Clifford algebras
  7.1 Spinors
  7.2 The Clifford algebras A_{k,k}
  7.3 The algebras B_{k,k+1} and A_{k,k+1}
  7.4 The algebras A_{k+1,k} and A_{k+2,k}
  7.5 Clifford algebras A(E, q) with dim E = 3
  7.6 Clifford algebras A(E, q) with dim E = 4
  7.7 Clifford algebras A(E, q) with dim E = 5
  7.8 The algebras A_6, B_7, A_7 and A_8

8 Spin
  8.1 Clifford groups
  8.2 Pin and Spin groups
  8.3 Replacing q by −q
  8.4 The spin group for odd dimensions
  8.5 Spin groups, for d = 2
  8.6 Spin groups, for d = 3
  8.7 Spin groups, for d = 4
  8.8 The group Spin_5
  8.9 Examples of spin groups for d ≥ 6
  8.10 Table of results

Part Three: Some applications

9 Some applications to physics
  9.1 Particles with spin 1/2
  9.2 The Dirac operator
  9.3 Maxwell's equations
  9.4 The Dirac equation

10 Clifford analyticity
  10.1 Clifford analyticity
  10.2 Cauchy's integral formula
  10.3 Poisson kernels and the Dirichlet problem
  10.4 The Hilbert transform
  10.5 Augmented Dirac operators
  10.6 Subharmonicity properties
  10.7 The Riesz transform
  10.8 The Dirac operator on a Riemannian manifold

11 Representations of Spin_d and SO(d)
  11.1 Compact Lie groups and their representations
  11.2 Representations of SU(2)
  11.3 Representations of Spin_d and SO(d) for d ≤ 4

12 Some suggestions for further reading

References
Glossary
Index

Introduction

Clifford algebras find their use in many areas of mathematics: in differential analysis, where operators of Dirac type are used in proofs of the Atiyah-Singer index theorem; in harmonic analysis, where the Riesz transforms provide a higher-dimensional generalization of the Hilbert transform; in geometry, where spin groups illuminate the structure of the classical groups; and in mathematical physics, where Clifford algebras provide a setting for electromagnetic theory, spin 1/2 particles, and the Dirac operator in relativistic quantum mechanics.

This book is intended as a straightforward introduction to Clifford algebras, without going on to study any of the above topics in detail (suggestions for further reading are made at the end). This means that it concentrates on the underlying structure of Clifford algebras, and this inevitably means that it approaches the subject algebraically.

The first part is concerned with the background from algebra that is required. The first chapter describes, without giving details, the knowledge of groups and vector spaces that is needed. Any reader who is not familiar with this material should consult standard texts on algebra, such as Mac Lane and Birkhoff [MaB], Jacobson [Jac] or Cohn [Coh]. Otherwise, skim through it, to familiarize yourself with the notation and terminology that is used.

The second chapter deals with algebras, and modules over algebras. It turns out that the algebra H of quaternions has an important part to play in the theory of Clifford algebras, and fundamental properties of this algebra are developed here. It also turns out that the Clifford algebras that we study are isomorphic to the algebra of D-endomorphisms of D^k, where D is either the real field R, the complex field C or the algebra H; we develop the theory of modules over an algebra far enough to prove Wedderburn's theorem, which explains why this is the case.


Tensor products of various forms are an invaluable tool for constructing Clifford algebras. Many mathematicians are uncomfortable with tensor products; in Chapter 3 we provide a careful account of the multilinear algebra that is needed. This involves finite-dimensional vector spaces, where there is a powerful and effective duality theory, and we make unashamed use of this duality to construct the spaces of tensor products that we need.

The second part is the heart of this book. Clifford algebras are constructed, starting from a vector space equipped with a quadratic form. Chapter 4 is concerned with quadratic forms on finite-dimensional real vector spaces. The reader is probably familiar with the special case of Euclidean space, where the quadratic form is positive definite. The general case is, perhaps surprisingly, considerably more complicated, and we provide complete details. In particular, we prove the Cartan-Dieudonné theorem, which shows that an isometry of a regular quadratic space is the product of simple reflections.

In Chapter 5, we begin the study of Clifford algebras. This can be done at different levels of generality. At one extreme, we could consider Clifford algebras over an arbitrary field; these are important, for example in number theory, but this is too general for our purposes. At the other extreme, we could restrict attention to Clifford algebras over the complex field C; this has the advantage that C is algebraically closed, which leads to considerable simplifications, but in the process it removes many interesting ideas from consideration. In fact, we shall consider Clifford algebras over the real field; this provides enough generality to consider many important ideas, while at the same time providing an appropriate setting for differential analysis, harmonic analysis and mathematical physics.

We shall see that the complex field C is an example of such a Clifford algebra; one of its salient features is the conjugation involution. Universal Clifford algebras also admit such an involutory automorphism, the principal involution. This leads to a Z2 grading of such algebras; they are super-algebras. This is one of the fundamental features of the structure of Clifford algebras. But they are more complicated than that; besides the principal involution there are two important involutory anti-automorphisms.

Much of the charm of Clifford algebras lies in the fact that they have interesting concrete representations, and in Chapters 6 and 7 we calculate many of these. In particular, we use arguments involving Clifford algebras to prove Frobenius' theorem, that the only finite-dimensional real division algebras are the real field R, the complex field C and the


algebra H of quaternions. The calculations involve dimensions up to 5; we also establish a partial periodicity of order 4, and Cartan's periodicity theorem, with periodicity of order 8, which enable all Clifford algebras to be calculated. As a result of these calculations, we find that every Clifford algebra is either isomorphic to a full matrix algebra over R, C or H, or is isomorphic to the direct sum of two copies of one of these. This is a consequence of Wedderburn's theorem: a finite-dimensional simple real algebra is isomorphic to a full matrix algebra over a finite-dimensional real division algebra. Matrices act on vector spaces; at the end of Chapter 7 we introduce spinor spaces, which are vector spaces on which a Clifford algebra acts irreducibly.

A large part of mathematics is concerned with symmetry, and the orthogonal group and special orthogonal group describe the linear symmetries of regular quadratic spaces. An important feature of Clifford algebras is that the group of invertible elements of a Clifford algebra contains a subgroup, the spin group, which provides a double cover of the corresponding special orthogonal group. In Chapter 8, spin groups are defined, and their basic properties are proved. Spin groups, and their actions, are calculated for spaces up to dimension 4, and for 5- and 6-dimensional Euclidean space.

In the third part, we describe some of the applications of Clifford algebras. Our intention here is to provide an introduction to a varied collection of applications; to whet the appetite, so that the reader will wish to pursue his or her interests further.

A great deal of interest in Clifford algebras goes back to 1927 and 1928. In 1927, Pauli introduced the so-called Pauli spin matrices to provide a quantum mechanical framework for particles with spin 1/2, and in 1928, Dirac introduced the Dirac operator (though not with this name, nor in terms of a Clifford algebra) to construct the Dirac equation, which describes the relativistic behaviour of an electron. In Chapter 9, we describe the use of the Pauli spin matrices to represent the angular momentum of particles with spin 1/2. We introduce the Dirac operator, and construct the Dirac equation. We also show that Maxwell's equations for electromagnetic fields can be expressed as a single equation involving the Dirac operator.

Clifford algebras have important applications in differential and harmonic analysis. A fascinating topic in two dimensions is the relationship between harmonic functions and analytic functions, using the Hilbert transform. In Chapter 10, we show how the Dirac operator, and an augmented Dirac operator, can be used to extend the idea of analyticity


to higher dimensions, so that corresponding problems can be considered there; the Hilbert transform is replaced by the system of Riesz transforms. As a particular application, we show how to extend to higher dimensions the celebrated theorem of the brothers Riesz, which shows that if the harmonic Dirichlet extension of a complex measure is analytic, then the measure is absolutely continuous with respect to Lebesgue measure, and is represented by a function in the Hardy space H¹. These results concern functions defined on half a vector space. An even more important use of Dirac operators concerns analysis on a compact Riemannian manifold; this leads to proofs of the Atiyah-Singer index theorem. A full account of this demands a detailed knowledge of Riemannian geometry, and so cannot be given at this introductory level, but we end Chapter 10 by giving a brief description of the set-up in which Dirac operators can be defined.

The spin groups provide a double cover of the special orthogonal groups. In Chapter 11, we show how this can be used to describe the irreducible representations of the special orthogonal groups of Euclidean spaces of dimensions 2, 3 and 4. The use of the double cover goes much further, but again this requires a detailed understanding of the representation theory of compact Lie groups, inappropriate for a book at this elementary level.

These remarks show that this book is only an introduction to a large subject, with many applications. In the final chapter, we make some further comments, and also make some suggestions for further reading.

I would especially like to thank the referees for their dissatisfaction with earlier drafts of this book, which led to improvements both of content and of presentation. I have worked hard to remove errors, but undoubtedly some remain. Corrections and further comments can be found on my personal web-page at www.dpmms.cam.ac.uk. I acknowledge the use of Paul Taylor's 'diagrams' package, which I used for the commutative diagrams in the text; I found the package easy to use.

PART ONE

THE ALGEBRAIC ENVIRONMENT

1 Groups and vector spaces

The material in this chapter should be familiar to the reader, but it is worth reading through it to become familiar with the notation and terminology that is used. We shall not give details; these are given in standard textbooks, such as Mac Lane and Birkhoff [MaB], Jacobson [Jac] or Cohn [Coh].

1.1 Groups

A group is a non-empty set G together with a law of composition, a mapping (g, h) → gh from G × G to G, which satisfies:

1. (gh)j = g(hj) for all g, h, j in G (associativity),
2. there exists e in G such that eg = ge = g for all g ∈ G, and
3. for each g ∈ G there exists g^{-1} ∈ G such that gg^{-1} = g^{-1}g = e.

It then follows that e, the identity element, is unique, and that for each g ∈ G the inverse g^{-1} is unique. A group G is abelian, or commutative, if gh = hg for all g, h ∈ G. If G is abelian, then the law of composition is often written as addition: (g, h) → g + h. In such a case, the identity is denoted by 0, and the inverse of g by −g.

A non-empty subset H of a group G is a subgroup of G if h1h2 ∈ H whenever h1, h2 ∈ H, and h^{-1} ∈ H whenever h ∈ H. H then becomes a group under the law of composition inherited from G. If A is a subset of a group G, there is a smallest subgroup Gp(A) of G which contains A, the subgroup generated by A. If A = {g} is a singleton then we write Gp(g) for Gp(A). Then Gp(g) = {g^n : n an integer}, where g^0 = e, g^n is the product of n copies of g when n > 0, and g^n


is the product of |n| copies of g^{-1} when n < 0. A group G is cyclic if G = Gp(g) for some g ∈ G.

If G has finitely many elements, then the order o(G) of G is the number of elements of G. If G has infinitely many elements, then we set o(G) = ∞. If g ∈ G then the order o(g) of g is the order of the group Gp(g).

A mapping θ : G → H from a group G to a group H is a homomorphism if θ(g1g2) = θ(g1)θ(g2), for g1, g2 ∈ G. It then follows that θ maps the identity in G to the identity in H, and that θ(g^{-1}) = (θ(g))^{-1}, for g ∈ G. A bijective homomorphism is called an isomorphism, and an isomorphism G → G is called an automorphism of G. The set Aut(G) of automorphisms of G forms a group, when composition of mappings is taken as the group law of composition.

A subgroup K of a group G is a normal, or self-conjugate, subgroup if g^{-1}kg ∈ K for all g ∈ G and k ∈ K. If θ : G → H is a homomorphism, then the kernel ker(θ) of θ, the set {g ∈ G : θ(g) = e_H} (where e_H is the identity in H), is a normal subgroup of G. Conversely, suppose that K is a normal subgroup of G. The relation g1 ∼ g2 on G defined by setting g1 ∼ g2 if g1^{-1}g2 ∈ K is an equivalence relation on G. The equivalence classes are called the cosets of K in G. If C is a coset of K then C is of the form Kg = {kg : k ∈ K}, and Kg = gK = {gk : k ∈ K}. If C1 and C2 are cosets of K in G then so is C1C2 = {c1c2 : c1 ∈ C1, c2 ∈ C2}; if C1 = Kg1 and C2 = Kg2 then C1C2 = Kg1g2. With this law of composition, the set G/K of cosets becomes a group, the quotient group. The identity is K and (Kg)^{-1} = Kg^{-1}. The quotient mapping q : G → G/K defined by the equivalence relation is then a homomorphism of G onto G/K, with kernel K, and q(g) = Kg. A group G is simple if it has no normal subgroups other than {e} and G.

We denote the group with one element by 1, or, if we are denoting composition by addition, by 0. Suppose that (G0 = 1, G1, . . .
, Gk, Gk+1 = 1) is a sequence of groups, and that θj : Gj → Gj+1 is a homomorphism, for 0 ≤ j ≤ k. Then the diagram

  1 --θ0--> G1 --θ1--> G2 --θ2--> · · · --θk-1--> Gk --θk--> 1

is an exact sequence if θj-1(Gj-1) is the kernel of θj, for 1 ≤ j ≤ k. If k = 3, the sequence is a short exact sequence. For example, if K is a


normal subgroup of G and q : G → G/K is the quotient mapping, then

  1 --> K --⊆--> G --q--> G/K --> 1

is a short exact sequence.

If A is a subset of a group G then the centralizer CG(A) of A in G, defined as CG(A) = {g ∈ G : ga = ag for all a ∈ A}, is a subgroup of G. The centre Z(G), defined as Z(G) = {g ∈ G : gh = hg for all h ∈ G} (which is CG(G)), is a normal subgroup of G.

The product of two groups G1 × G2 is a group, when composition is defined by (g1, g2)(h1, h2) = (g1h1, g2h2). We identify G1 with the subgroup G1 × {e2} and G2 with the subgroup {e1} × G2.

Let us now list some of the groups that we shall meet later.

1. The real numbers R form an abelian group under addition. The set Z of integers is a subgroup of R. The set R* of non-zero real numbers is a group under multiplication.

2. Any two groups of order 2 are isomorphic. We shall denote the multiplicative subgroup {1, −1} of R* by D2, and the additive group {0, 1} by Z2; Z2 is isomorphic to the quotient group Z/2Z. Small though they are, these groups of order 2 play a fundamental role in the theory of Clifford algebras (and in many other branches of mathematics and physics). Suppose that we have a short exact sequence

  1 --> D2 --j--> G1 --θ--> G2 --> 1.

Then j(D2) is a normal subgroup of G1, from which it follows that j(D2) is contained in the centre of G1. If g ∈ G1 then we write −g for j(−1)g. Then θ(g) = θ(−g), and if h ∈ G2 then θ^{-1}{h} = {g, −g} for some g in G1. In this case, we say that G1 is a double cover of G2. Double covers play a fundamental role in the theory of spin groups; these are considered in Chapter 8.

3. A bijective mapping of a set X onto itself is called a permutation. The set Σ_X of permutations of X is a group under the composition of mappings. Σ_X is not abelian if X has at least three elements. We

denote the group of permutations of the set {1, . . . , n} by Σn. Σn has order n!. A transposition is a permutation which fixes all but 2 elements. Σn has a normal subgroup An of order n!/2, consisting of those permutations that can be expressed as the product of an even number of transpositions. Thus we have a short exact sequence

  1 --> An --⊆--> Σn --ε--> D2 --> 1.

If σ ∈ Σn then ε(σ) is the signature of σ; ε(σ) = 1 if σ ∈ An, and ε(σ) = −1 otherwise.

4. The complex numbers C form an abelian group under addition, and R can be identified as a subgroup of C. The set C* of non-zero complex numbers is a group under multiplication. The set T = {z ∈ C : |z| = 1} is a subgroup of C*. There is a short exact sequence

  0 --> Z --⊆--> R --q--> T --> 1,

where Z is the additive group of integers and q(θ) = e^{2πiθ}.

5. The subset Tn = {e^{2πij/n} : 0 ≤ j < n} = {z ∈ C : z^n = 1} of T is a cyclic subgroup of T of order n. Conversely, if G = Gp(g) is a cyclic group of order n then the mapping g^k → e^{2πik/n} is an isomorphism of G onto Tn.

6. Let D denote the group of isometries of the complex plane C which fix the origin: D = {g : C → C : g(0) = 0 and |g(z) − g(w)| = |z − w| for z, w ∈ C}. D is the full dihedral group. An element of D is either a rotation Rθ (where Rθ(z) = e^{iθ}z) or a reflection Sθ (where Sθ(z) = e^{iθ}z̄). The set Rot of rotations is a subgroup of D, and the mapping R : e^{iθ} → Rθ is an isomorphism of T onto Rot. In particular, Rπ(z) = −z. Since Sθ²(z) = e^{iθ} · conj(e^{iθ}z̄) = e^{iθ}e^{−iθ}z = z, Sθ² is the identity. A similar calculation shows that Sθ^{-1}RφSθ = R_{−φ}, so that Rot is a normal subgroup of D. We have an exact sequence

  1 --> T --R--> D --δ--> D2 --> 1

where δ(Rθ) = 1 and δ(Sθ) = −1, for θ ∈ [0, 2π).

7. If n ≥ 2, let Rn = R(Tn) and let D2n = Rn ∪ RnS0. D2n is a subgroup of D, called the dihedral group of order 2n. (Warning: some authors denote this group by Dn.) The group D4 ≅ D2 × D2, and so D4 is


abelian. If n ≥ 3 then D2n is the group of symmetries of a regular polygon with n vertices, with centre the origin; D2n is a non-abelian subgroup of D of order 2n. If n = 2k is even, then Z(D2n) = {1, r_{−1}}, and we have a short exact sequence

  1 --> D2 --> D2n --> D2k --> 1;

D2n is a double cover of D2k. If n = 2k + 1 is odd, then Z(D2n) = {1}.

8. In particular, the dihedral group D8 is a non-abelian group, which is the group of symmetries of a square with centre the origin. Let us set α = r_i, β = σ_1 and γ = σ_i. Then D8 = {±1, ±α, ±β, ±γ} (where −x denotes r_{−1}x = xr_{−1}), and

  αβ = γ     βγ = α     γα = β
  βα = −γ    γβ = −α    αγ = −β
  α² = −1    β² = 1     γ² = 1.

There is a short exact sequence

  1 --> D2 --> D8 --> D4 --> 1.

9. The quaternionic group Q is a group of order 8, with elements {±1, ±i, ±j, ±k}, with identity element 1, and law of composition defined by

  ij = k     jk = i     ki = j
  ji = −k    kj = −i    ik = −j
  i² = −1    j² = −1    k² = −1,

and (−1)x = x(−1) = −x, (−1)(−x) = (−x)(−1) = x for x = 1, i, j, k. Then Z(Q) = {1, −1}, and there is a short exact sequence

  1 --> D2 --> Q --> D4 --> 1;

Q is a double cover of D4. The groups D8 and Q are of particular importance in the study of Clifford algebras. Although they both provide double covers of D4, they are not isomorphic: Q has six elements of order 4, one of order 2 and one of order 1, while D8 has two elements of order 4, five of order 2 and one of order 1.
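The final claim, that D8 and Q are distinguished by the orders of their elements, can be checked by brute force. The following Python sketch (not part of the book) models Q by unit quaternions and D8 by 2 × 2 integer matrices; the function names are our own.

```python
# A sketch (not from the book): verify that D8 and Q, although both of
# order 8, have different multisets of element orders, hence are not
# isomorphic.

def quat_mul(a, b):
    # Hamilton product; quaternions are 4-tuples (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def mat_mul(a, b):
    # Product of 2x2 matrices stored as ((a, b), (c, d)).
    return ((a[0][0]*b[0][0] + a[0][1]*b[1][0],
             a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0],
             a[1][0]*b[0][1] + a[1][1]*b[1][1]))

def orders(elements, mul, identity):
    # Sorted list of the orders of the elements of a finite group.
    result = []
    for g in elements:
        p, n = g, 1
        while p != identity:
            p, n = mul(p, g), n + 1
        result.append(n)
    return sorted(result)

# Q = {±1, ±i, ±j, ±k}, realized as unit quaternions.
units = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
Q = [tuple(s*c for c in u) for s in (1, -1) for u in units]

# D8, realized as 2x2 integer matrices: alpha rotates by 90 degrees,
# beta reflects in the horizontal axis.
alpha = ((0, -1), (1, 0))
beta = ((1, 0), (0, -1))
I2 = ((1, 0), (0, 1))
D8 = []
g = I2
for _ in range(4):
    D8.append(g)                  # the rotation alpha^n
    D8.append(mat_mul(g, beta))   # the reflection alpha^n beta
    g = mat_mul(g, alpha)

print(orders(Q, quat_mul, (1, 0, 0, 0)))   # [1, 2, 4, 4, 4, 4, 4, 4]
print(orders(D8, mat_mul, I2))             # [1, 2, 2, 2, 2, 2, 4, 4]
```

The two multisets differ, so the two double covers of D4 cannot be isomorphic.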


1.2 Vector spaces

We shall be concerned with real or complex vector spaces. Let K denote either the field R of real numbers or the field C of complex numbers. A vector space E over K is an abelian additive group (E, +), together with a mapping (scalar multiplication) (λ, x) → λx of K × E into E which satisfies

• 1.x = x,
• (λ + µ)x = λx + µx,
• λ(µx) = (λµ)x,
• λ(x + y) = λx + λy,

for λ, µ ∈ K and x, y ∈ E. The elements of E are called vectors and the elements of K are called scalars. It then follows that 0.x = 0 and λ.0 = 0 for x ∈ E and λ ∈ K. (Note that we use the same symbol 0 for the additive identity element in E and the zero element in K.)

A non-empty subset F of a vector space E is a linear subspace if it is a subgroup of E and if λx ∈ F whenever λ ∈ K and x ∈ F. A linear subspace is then a vector space, with the operations inherited from E. If A is a subset of E then the intersection of all the linear subspaces containing A is a linear subspace, the subspace span(A) spanned by A. If E is spanned by a finite set, E is finite-dimensional. All the vector spaces that we shall consider will be finite-dimensional, except when it is stated otherwise.

A subset B of E is linearly independent if whenever λ1, . . . , λk are scalars and b1, . . . , bk are distinct elements of B for which λ1b1 + · · · + λkbk = 0 then λ1 = · · · = λk = 0. A finite subset B of E which is linearly independent and which spans E is called a basis for E. We shall enumerate a basis as (b1, . . . , bd). If (b1, . . . , bd) is a basis for E then every element x ∈ E can be written uniquely as x = x1b1 + · · · + xdbd (where x1, . . . , xd are scalars). Every finite-dimensional vector space E has a basis. Indeed if A is a linearly independent subset of E contained in a subset C of E which spans E then there is a basis B for E with A ⊆ B ⊆ C. Any two bases have the same number of elements; this number is the dimension dim E of E.
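Linear independence can be tested computationally: a finite list of vectors in K^d is linearly independent exactly when the row rank of the matrix they form equals the number of vectors. A small Python sketch (not from the book; `row_rank` is our own name, and exact rational arithmetic stands in for K):

```python
# Illustrative sketch (not from the book): testing linear independence
# by Gaussian elimination over the rationals.
from fractions import Fraction

def row_rank(rows):
    # Row-reduce a list of row vectors and count the pivots.
    rows = [[Fraction(x) for x in r] for r in rows]
    rk, nrows, ncols = 0, len(rows), len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(rk, nrows) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[rk], rows[pivot] = rows[pivot], rows[rk]
        for i in range(nrows):
            if i != rk and rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f*b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

# Three vectors in R^3 (with rational coordinates):
b = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
print(row_rank(b))                # 3: linearly independent, so a basis of R^3
print(row_rank(b + [[2, 2, 2]]))  # 3: the fourth vector lies in span(b)
```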


As an example, let E = K^d, the product of d copies of K, with addition defined co-ordinatewise, and with scalar multiplication λ(x1, . . . , xd) = (λx1, . . . , λxd). Let ej = (0, . . . , 0, 1, 0, . . . , 0), with 1 in the jth position. Then K^d is a vector space, and (e1, . . . , ed) is a basis for K^d, the standard basis.

More generally, let Mm,n = Mm,n(K) denote the set of all K-valued functions on {1, . . . , m} × {1, . . . , n}. Mm,n becomes a vector space over K when addition and scalar multiplication are defined co-ordinatewise. The elements of Mm,n are called matrices. We denote the matrix taking the value 1 at (i, j) and 0 elsewhere by Eij. Then the set of matrices {Eij : 1 ≤ i ≤ m, 1 ≤ j ≤ n} forms a basis for Mm,n, so that Mm,n has dimension mn.

If E1 and E2 are vector spaces, then the product E1 × E2 is a vector space, with addition and scalar multiplication defined by (x1, x2) + (y1, y2) = (x1 + y1, x2 + y2) and λ(x1, x2) = (λx1, λx2). Then dim(E1 × E2) = dim E1 + dim E2.

A mapping T : E → F, where E and F are vector spaces over the same field K, is linear if T(x + y) = T(x) + T(y) and T(λx) = λT(x) for all λ ∈ K, x, y ∈ E. The image T(E) is a linear subspace of F and the null-space N(T) = {x ∈ E : T(x) = 0} is a linear subspace of E. The dimension of T(E) is the rank of T and the dimension of N(T) is the nullity n(T) of T. We have the fundamental rank-nullity formula

  rank(T) + n(T) = dim E.

A bijective linear mapping J : E → F is called an isomorphism. A linear mapping J : E → F is an isomorphism if and only if N(J) = {0} and J(E) = F. If J is an isomorphism, then dim E = dim F. For example, if (f1, . . . , fd) is a basis for F then the linear mapping J : K^d → F defined by J(λ1, . . . , λd) = λ1f1 + · · · + λdfd is an isomorphism of K^d onto F.

We shall occasionally need to consider the topology on a vector space F.
K^d is a complete metric space, with the usual metric d(x, y) = (∑_{j=1}^d |xj − yj|²)^{1/2} given by the Euclidean norm ‖x‖ = (∑_{j=1}^d |xj|²)^{1/2}. We can then define a norm ‖·‖ on F by setting ‖J(x)‖ = ‖x‖. This depends on the choice of basis, but any two such norms are equivalent, and define the same topology.
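The rank-nullity formula can be checked numerically. Here is a minimal sketch using NumPy; the 3 × 4 matrix is a hypothetical example, chosen so that its rows are linearly dependent:

```python
import numpy as np

# Hypothetical 3x4 real matrix, viewed as a linear map T : R^4 -> R^3;
# the third row is the sum of the first two, so the rank is not full.
T = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])

rank = np.linalg.matrix_rank(T)
nullity = T.shape[1] - rank       # dim N(T) = dim E - rank(T)

# The fundamental rank-nullity formula: rank(T) + n(T) = dim E.
assert rank + nullity == T.shape[1]
assert rank == 2 and nullity == 2
```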


If F1 and F2 are subspaces of E for which the linear mapping (x1, x2) → x1 + x2 : F1 × F2 → E is an isomorphism, then E is the direct sum F1 ⊕ F2 of F1 and F2. This happens if and only if F1 ∩ F2 = {0} and E = span(F1 ∪ F2), and if and only if every element x of E can be written uniquely as x = x1 + x2, with x1 ∈ F1 and x2 ∈ F2.

Suppose that (e1, . . . , ed) is a basis for E and that y1, . . . , yd are elements of a vector space F. If x = λ1 e1 + · · · + λd ed ∈ E, let T(x) = λ1 y1 + · · · + λd yd. Then T is a linear mapping of E into F, and is the unique linear mapping from E to F for which T(ej) = yj for 1 ≤ j ≤ d. The process of constructing T in this way is called extension by linearity.

The set L(E, F) of linear mappings from E to F is a vector space, when we define (S + T)(x) = S(x) + T(x) and (λS)(x) = λ(S(x)) for S, T ∈ L(E, F), x ∈ E, λ ∈ K; dim L(E, F) = dim E · dim F. We write L(E) for L(E, E); elements of L(E) are called endomorphisms of E. If T ∈ L(E, F) and S ∈ L(F, G) then the composition ST = S ◦ T is in L(E, G).

Suppose that (e1, . . . , ed) is a basis for E, that (f1, . . . , fc) is a basis for F, and that T ∈ L(E, F). Let T(ej) = ∑_{i=1}^c tij fi. If x = ∑_{j=1}^d xj ej then T(x) = ∑_{i=1}^c (∑_{j=1}^d tij xj) fi. The mapping T → (tij) is then an isomorphism of L(E, F) onto Mc,d, so that dim L(E, F) = cd = dim E · dim F. We say that T is represented by the c × d matrix (tij). If (g1, . . . , gb) is a basis for G, and S ∈ L(F, G) is represented by the matrix (shi), then the product R = ST ∈ L(E, G) is represented by the matrix (rhj), where r_{hj} = ∑_{i=1}^c s_{hi} t_{ij}. This expression defines matrix multiplication.
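The formula r_{hj} = ∑_i s_{hi} t_{ij} can be checked against composition of mappings. A minimal sketch (the dimensions and the random matrices are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
b, c, d = 2, 3, 4                       # dim G, dim F, dim E
t = rng.standard_normal((c, d))         # matrix of T in L(E, F)
s = rng.standard_normal((b, c))         # matrix of S in L(F, G)

# r_hj = sum_i s_hi t_ij, the matrix of the composition R = S o T
r = np.array([[sum(s[h, i] * t[i, j] for i in range(c))
               for j in range(d)]
              for h in range(b)])

x = rng.standard_normal(d)
assert np.allclose(s @ (t @ x), r @ x)  # (S o T)(x) computed two ways
assert np.allclose(r, s @ t)            # and r is the matrix product
```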

1.3 Duality of vector spaces

K is a one-dimensional vector space over K (but C is a two-dimensional vector space over R, with basis {1, i}). The space L(E, K) is called the dual, or dual space, of E, and is denoted by E′. Elements of E′ are called linear functionals on E. Suppose that (e1, . . . , ed) is a basis for E. If x = ∑_{i=1}^d xi ei, let φi(x) = xi, for 1 ≤ i ≤ d. Then φi ∈ E′ and (φ1, . . . , φd) is a basis for E′, the dual basis, or basis dual to (e1, . . . , ed). Thus dim E = dim E′.

If x ∈ E and φ ∈ E′, let j(x)(φ) = φ(x). Then j : E → E′′ is an isomorphism of E onto E′′, the dual of E′. E′′ is called the bidual of E.

Suppose that T ∈ L(E, F). If ψ ∈ F′ and x ∈ E, let (T′(ψ))(x) = ψ(T(x)). Then T′(ψ) ∈ E′, and T′ is a linear mapping of F′ into E′; it is the transposed mapping of T.

If A is a subset of E, then the annihilator A⊥ in E′ of A is the set A⊥ = {φ ∈ E′ : φ(a) = 0 for all a ∈ A}. It is a linear subspace of E′. Similarly, if B is a subset of E′, then the annihilator B⊥ in E of B is the set B⊥ = {x ∈ E : φ(x) = 0 for all φ ∈ B}. It then follows that A⊥⊥ = span(A) and B⊥⊥ = span(B). If F is a linear subspace of E then dim F + dim F⊥ = dim E. If T ∈ L(E, F) then (T(E))⊥ = N(T′) and (N(T))⊥ = T′(F′). (Several of these results depend on the fact that E and F are finite-dimensional.)
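Working in coordinates with respect to dual bases, the matrix of the transposed mapping is the transpose of the matrix of T. A small numerical check (the matrices here are arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.standard_normal((3, 4))   # matrix of T : E -> F, dim E = 4, dim F = 3

psi = rng.standard_normal(3)      # coefficients of a functional psi in F'
x = rng.standard_normal(4)        # coordinates of x in E

# (T'(psi))(x) = psi(T(x)) forces the matrix of T' (with respect to the
# dual bases) to be the transpose of the matrix of T.
lhs = psi @ (t @ x)               # psi(T(x))
rhs = (t.T @ psi) @ x             # (T'(psi))(x)
assert np.isclose(lhs, rhs)

# rank(T') = rank(T), consistent with (T(E))^perp = N(T').
assert np.linalg.matrix_rank(t.T) == np.linalg.matrix_rank(t)
```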

2 Algebras, representations and modules

Clifford algebras are finite-dimensional algebras. Here we consider the properties of finite-dimensional algebras. We also consider how they can be represented as algebras of endomorphisms of a vector space, or equivalently as algebras of matrices. An alternative way of thinking about this is to consider modules over an algebra; this is important in the theory of Clifford algebras, where such modules appear as spaces of spinors.

2.1 Algebras

Again, let K denote either the field R of real numbers or the field C of complex numbers. A finite-dimensional (associative) algebra A over K is a finite-dimensional vector space over K equipped with a law of composition: that is, a mapping (multiplication) (a, b) → ab from A × A into A which satisfies

• (ab)c = a(bc) (associativity),
• a(b + c) = ab + ac,
• (a + b)c = ac + bc,
• λ(ab) = (λa)b = a(λb),

for λ ∈ K and a, b, c ∈ A. (As usual, multiplication is carried out before addition). An algebra A is unital if there exists 1 ∈ A, the identity element, such that 1a = a1 = a for all a ∈ A. We shall principally be concerned with unital algebras. An algebra A is commutative if ab = ba for all a, b ∈ A. A mapping φ from an algebra A over K to an algebra B over K is an algebra homomorphism if it is linear, and if φ(ab) = φ(a)φ(b) for a, b ∈ A. If A and B are unital, φ is a unital homomorphism if, in


addition, φ(1A) = 1B, where 1A is the identity element of A and 1B is the identity element of B. A bijective homomorphism is called an algebra isomorphism. An algebra homomorphism of an algebra A into itself is called an endomorphism. Let us give some examples.

• If E is a vector space over K then the vector space L(E) of endomorphisms of E becomes a unital algebra over K when multiplication is defined to be the composition of mappings. The identity mapping I is the algebra identity. L(E) is commutative if and only if dim E = 1.

• If (e1, . . . , ed) is a basis for E, then an element T of L(E) can be represented by a matrix (tij), and the mapping T → (tij) is an algebra isomorphism of L(E) onto the algebra Md(K) of d × d matrices, where composition is defined as matrix multiplication.

• More generally, if A is an algebra, then the set Md(A) of d × d matrices with entries in A becomes an algebra when addition and scalar multiplication are defined term by term, and composition is defined by matrix multiplication: if a = (aij) and b = (bij) then (ab)ij = ∑_{k=1}^d aik bkj. The algebra Md(A) is unital if A is.

• Let K^S denote the set of mappings from a finite set S into K. K^S becomes a commutative unital algebra when we define addition, scalar multiplication and multiplication pointwise: (f + g)(s) = f(s) + g(s); (λf)(s) = λ(f(s)); (fg)(s) = f(s)g(s). The identity element is the function 1 which takes the constant value 1: 1(s) = 1 for all s ∈ S. If s ∈ S, let δs(s) = 1 and δs(t) = 0 for t ≠ s. Then the set of mappings {δs : s ∈ S} is a basis for K^S, so that dim K^S = |S|, where |S| is the number of elements in S.

• Suppose that G is a finite group, with identity element e. Then K^G is a finite-dimensional vector space, with basis {δg : g ∈ G}. We define multiplication on K^G: if a = ∑_{g∈G} ag δg and b = ∑_{g∈G} bg δg then we set ab = ∑_{g∈G} cg δg, where

cg = ∑_{hj=g} ah bj = ∑_{h∈G} ah b_{h⁻¹g} = ∑_{j∈G} a_{gj⁻¹} bj.
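The convolution formula can be tried out on a small example. The following sketch takes G to be the cyclic group Z/3Z (an illustrative choice, not from the text), representing an element of K^G as a dict from group elements to coefficients:

```python
# Group algebra K^G for the cyclic group G = Z/3Z; an element of K^G is
# stored as a dict g -> coefficient, and multiplication is the convolution
# c_g = sum_{h in G} a_h b_{h^{-1} g}.
n = 3

def conv(a, b):
    c = {g: 0.0 for g in range(n)}
    for g in range(n):
        for h in range(n):
            c[g] += a[h] * b[(g - h) % n]   # h^{-1} g = g - h (mod n)
    return c

def delta(g):
    return {k: (1.0 if k == g else 0.0) for k in range(n)}

# delta_g delta_h = delta_{gh}: here 1 + 2 = 0 (mod 3)
assert conv(delta(1), delta(2)) == delta(0)

# delta_e is the identity element of the group algebra
a = {0: 2.0, 1: -1.0, 2: 0.5}
assert conv(delta(0), a) == a and conv(a, delta(0)) == a
```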

It is straightforward to verify that the axioms are satisfied: K^G is the group algebra. Multiplication is defined in such a way that δg δh = δgh. In particular, K^G is a unital algebra, with identity element δe. K^G is commutative if and only if G is commutative. Note that this multiplication is quite different from the one described in the previous example.

• If A is an algebra, then the opposite algebra A^opp is the algebra obtained by keeping addition and scalar multiplication the same, but by defining a new law of composition ∗ by reversing the original law, so that a ∗ b = ba.

A linear subspace B of an algebra A is a subalgebra of A if b1 b2 ∈ B whenever b1, b2 ∈ B. If A is unital, then a subalgebra B is a unital subalgebra if the identity element of A belongs to B. For example, if A is a unital algebra then the set End(A) of unital endomorphisms of A is a unital subalgebra of L(A).

The centralizer CA(B) of a subset B of an algebra A is CA(B) = {a ∈ A : ab = ba for all b ∈ B}, and the centre Z(A) of A is the centralizer CA(A): Z(A) = {a ∈ A : ab = ba for all b ∈ A}. Z(A) is a commutative subalgebra of A, and is a unital subalgebra if A is a unital algebra. A unital algebra A is central if Z(A) is the one-dimensional subspace span(1).

An element p of an algebra A is an idempotent if p² = p. If p is an idempotent of a unital algebra A then 1 − p is also idempotent, and A = pA ⊕ (1 − p)A, the direct sum of linear subspaces of A. If in addition p ∈ Z(A) then pA and (1 − p)A are subalgebras of A, and are unital algebras, with identity elements p and 1 − p respectively, but are not unital subalgebras of A (unless p = 1 or p = 0). The mapping a → pa is then a unital algebra homomorphism of A onto pA. An idempotent in L(E) or Md(K) is called a projection.

An element j of a unital algebra A is an involution if j² = 1. An element j of A is an involution if and only if (1 + j)/2 is an idempotent. Similarly, an endomorphism θ of an algebra A is an involution if θ² = I. If θ is an involution of a unital algebra A, and p = (I + θ)/2, then we can write A = A+ ⊕ A−, where A+ = p(A) and A− = (I − p)(A). Then p(A) is a subalgebra of A, A+ = {a ∈ A : θ(a) = a} and A− = {a ∈ A : θ(a) = −a}, and

A+ · A+ ⊆ A+,

A− · A− ⊆ A+,    A+ · A− ⊆ A−,    A− · A+ ⊆ A−.        (∗)


Conversely, if A = A+ ⊕ A− is a direct sum decomposition for which (∗) holds, then the mapping which sends a+ + a− to a+ − a− is an involution of A. An algebra for which this holds is called a Z2-graded algebra or a super-algebra. The elements of A+ are called even elements, and the non-zero elements of A− are called odd elements. Any element a of A can be decomposed as a = a+ + a−, the sum of the even and odd parts of a. If a ∈ A+ ∪ A−, we say that a is homogeneous. As an example, the real algebra C is a super-algebra, when we define the involution j as j(x + iy) = x − iy. As we shall see, Clifford algebras are super-algebras.

If a is an element of a unital algebra A then b is a left inverse of a if ba = 1. A right inverse is defined similarly. a is invertible if it has a left inverse and a right inverse; they are then unique, and equal. This unique element is called the inverse of a, and is denoted by a⁻¹. The set of invertible elements is denoted by G(A); it is a group, under multiplication. When A = L(E), then G(A) is denoted by GL(E); it is called the general linear group of E. Similarly we denote G(Md(K)) by GLd(K).

Proposition 2.1.1 Suppose that T ∈ L(E). The following are equivalent.
(i) T is invertible.
(ii) T has a left inverse.
(iii) T has a right inverse.

Proof Clearly (i) implies (ii) and (iii). If T has a left inverse, then T is injective, so that n(T) = 0. It therefore follows from the rank-nullity formula that rank(T) = rank(T) + n(T) = dim E, so that T is surjective. Thus T is bijective. The inverse mapping T⁻¹ is linear, and is the inverse of T in L(E). Thus (ii) implies (i). The proof that (iii) implies (i) is similar; if T has a right inverse, then T is surjective, and it follows from the rank-nullity formula that T is injective.

There is a unique mapping, the determinant, det : Md(K) → K, satisfying

det(ST) = det S · det T,    det I = 1,    det(λT) = λ^d det T,

for S, T ∈ Md(K) and λ ∈ K. Then T ∈ GLd(K) if and only if det T ≠ 0.

A subalgebra J of an algebra A is a left ideal if aj ∈ J whenever a ∈ A and j ∈ J; a right ideal is defined similarly. A subalgebra which is both


a left ideal and a right ideal is called a two-sided ideal, or more simply an ideal. If φ : A → B is an algebra homomorphism, then the null-space N(φ) = {a ∈ A : φ(a) = 0} is an ideal in A. An ideal J in A is proper if J is a proper subset of A. An ideal J in a unital algebra is proper if and only if 1A ∉ J. An algebra A is simple if the only proper ideal in A is the trivial ideal {0}.

Proposition 2.1.2 Md(K) is a simple algebra.

Proof Suppose that J is an ideal in Md(K) which is not equal to {0}. There exists a non-zero element t = (tij) in J, and so there exist indices i₀ and j₀ such that t_{i₀j₀} ≠ 0. Since J is an ideal, the matrix Eii = (1/t_{i₀j₀}) E_{ii₀} t E_{j₀i} ∈ J, for 1 ≤ i ≤ d, and so I = ∑_{i=1}^d Eii ∈ J. Thus J is not a proper ideal in Md(K).

The unital algebra Md(K) is a concrete example of an algebra, as to a lesser extent is L(E). A unital homomorphism π from a unital algebra A into Md(K) or L(E) is called a representation of A. A linear subspace F of E is π-invariant if π(a)(F) ⊆ F for each a ∈ A. The representation is irreducible if {0} and E are the only π-invariant subspaces of E. If π is one-one, the representation is said to be faithful. A faithful representation of A is therefore a unital isomorphism of A onto a subalgebra of Md(K); the elements of A are represented as matrices.

As an important example, a faithful representation, the left regular representation l : A → L(A) of a unital algebra A, is given by setting l(a)(b) = ab. It is straightforward to verify that l is a representation. Since l(a)(1) = a, l is faithful.

Suppose that A is a finite-dimensional real unital algebra, and that |·| is a norm on A. We can define another norm on A by setting ‖a‖ = sup{|ab| : |b| ≤ 1}; then ‖1‖ = 1 and ‖ab‖ ≤ ‖a‖ ‖b‖. It then follows that if a ∈ A then the sum ∑_{j=0}^∞ a^j/j! converges. We denote the sum by e^a. The mapping a → e^a from A to A is called the exponential function. This will be useful in later calculations.

Proposition 2.1.3 Suppose that A is a finite-dimensional real unital algebra and that a, b ∈ A.
(i) The exponential function is continuous.
(ii) If ab = ba then e^{a+b} = e^a e^b.
(iii) e^a is invertible, with inverse e^{−a}.
(iv) The mapping t → e^{ta} is a continuous homomorphism of the group (R, +) into G(A).
(v) If ab = −ba then e^a b = b e^{−a}.


(vi) If a² = −1 then e^{ta} = cos t · I + sin t · a, the mapping e^{it} → e^{ta} is a homeomorphism of the circle group T onto a compact subgroup of G(A), and the mapping t → e^{ta} from [0, π/2] into G(A) is a continuous path from I to a.

(vii) Suppose that a ≠ I and that a² = I. Then e^{ta} = cosh t · I + sinh t · a, and the mapping t → e^{ta} is a homeomorphism of R onto an unbounded subgroup of G(A).

Proof Let us give a sketch of the proof.

(i) Since (a + h)^j − a^j = ((a + h)^{j−1} − a^{j−1})a + (a + h)^{j−1}h, an inductive argument shows that

‖(a + h)^j − a^j‖ ≤ j ‖h‖ (‖a‖ + ‖h‖)^{j−1},

so that

‖∑_{j=0}^n (a + h)^j/j! − ∑_{j=0}^n a^j/j!‖ ≤ ‖h‖ ∑_{j=0}^{n−1} (‖a‖ + ‖h‖)^j/j! ≤ ‖h‖ e^{‖a‖+‖h‖}.

Letting n → ∞,

‖e^{a+h} − e^a‖ ≤ ‖h‖ e^{‖a‖+‖h‖},

which establishes continuity.

P∞ (ii) The fact that j=0 aj /j! ≤ ekak and that a similar inequality holds for b justifies the change in the order of summation in the following equalities:   ∞ ∞ j k l X X X a b  (a + b)  ea+b = = j! k!l! j=0 j=0 k+l=j ! ∞ ! ∞ k X bl X a = = ea eb . k! l! l=0

k=0

The remaining results follow easily from these results. For example if ab = −ba then aj b = b(−a)j , so that a

e b=

∞ X aj b j=0

j!

=

∞ X b(−a)j j=0

j!

= be−a .
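Parts (vi) and (vii) can be illustrated numerically, approximating e^a by partial sums of its defining series. A minimal sketch; the matrices J and U (from Section 2.2) satisfy J² = −I and U² = I:

```python
import math
import numpy as np

def expm(a, terms=40):
    # Partial sums of the defining series e^a = sum_j a^j / j!
    result = np.zeros_like(a)
    power = np.eye(a.shape[0])
    for j in range(terms):
        result = result + power
        power = power @ a / (j + 1)
    return result

t = 0.7
J = np.array([[0.0, -1.0], [1.0, 0.0]])   # J^2 = -I
# (vi): e^{tJ} = cos t . I + sin t . J, a rotation by t
assert np.allclose(expm(t * J), math.cos(t) * np.eye(2) + math.sin(t) * J)

U = np.array([[1.0, 0.0], [0.0, -1.0]])   # U^2 = I, U != I
# (vii): e^{tU} = cosh t . I + sinh t . U
assert np.allclose(expm(t * U), math.cosh(t) * np.eye(2) + math.sinh(t) * U)
```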


2.2 Group representations

We can also consider the representation of groups. A homomorphism π from a group G into GLd(K) is called a representation of G, and π is said to be faithful if it is injective. A faithful representation of G is therefore an isomorphism of G onto a subgroup of GLd(K); the elements of G are represented as invertible matrices. For example, the mapping π : D → M2(R) defined by

π(Rθ) = [cos θ, −sin θ; sin θ, cos θ]  and  π(Sθ) = [cos θ, sin θ; sin θ, −cos θ]

(writing a 2 × 2 matrix row by row, with rows separated by semicolons) is a faithful representation of the full dihedral group as a group of reflections and rotations of R². If we restrict this to D8 we have a faithful representation of D8:

π(1) = I,      π(Rπ/2) = J,      π(Rπ) = −I,      π(R3π/2) = −J,
π(S0) = U,     π(Sπ/2) = Q,      π(Sπ) = −U,      π(S3π/2) = −Q,

where the matrices and their actions are described in the following table. The matrices Q and U play a symmetric role. Let

W = π(Rπ/4) = (1/√2) [1, −1; 1, 1].

Then the mapping g → W⁻¹gW is an automorphism of π(D8) and

W⁻¹QW = U,    W⁻¹UW = −Q,    W⁻¹JW = J.

Thus, with appropriate changes of sign, the matrices Q and U can be interchanged.

Name    Matrix              Action

I       [1, 0; 0, 1]        Identity
−I      [−1, 0; 0, −1]      Rotation by π
J       [0, −1; 1, 0]       Rotation by π/2
−J      [0, 1; −1, 0]       Rotation by −π/2
U       [1, 0; 0, −1]       Reflection in the direction (0, 1) with mirror y = 0
−U      [−1, 0; 0, 1]       Reflection in the direction (1, 0) with mirror x = 0
Q       [0, 1; 1, 0]        Reflection in the direction (1, −1) with mirror y = x
−Q      [0, −1; −1, 0]      Reflection in the direction (1, 1) with mirror y = −x

[Reflections are described more fully in Section 4.8.] These matrices will play a fundamental role in all our constructions. They satisfy the relations

Q² = U² = J⁴ = I,    QU = −UQ = J,    QJ = −JQ = U,    JU = −UJ = Q.
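These relations, and the conjugation by W, can be verified by direct matrix computation. A minimal NumPy sketch:

```python
import numpy as np

I2 = np.eye(2)
J = np.array([[0.0, -1.0], [1.0, 0.0]])
U = np.array([[1.0, 0.0], [0.0, -1.0]])
Q = np.array([[0.0, 1.0], [1.0, 0.0]])

assert np.allclose(Q @ Q, I2) and np.allclose(U @ U, I2)
assert np.allclose(np.linalg.matrix_power(J, 4), I2)
assert np.allclose(Q @ U, J) and np.allclose(U @ Q, -J)
assert np.allclose(Q @ J, U) and np.allclose(J @ Q, -U)
assert np.allclose(J @ U, Q) and np.allclose(U @ J, -Q)

# Conjugation by W = pi(R_{pi/4}) interchanges Q and U, up to sign.
W = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2.0)
Winv = np.linalg.inv(W)
assert np.allclose(Winv @ Q @ W, U)
assert np.allclose(Winv @ U @ W, -Q)
assert np.allclose(Winv @ J @ W, J)
```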


An equally important example is the faithful representation of the quaternionic group Q in M2 (C), which is defined using the associate Pauli matrices τ0 , τ1 , τ2 , τ3 . These and the Pauli spin matrices σ0 , σ1 , σ2 , σ3 are described in the following table.

Name            Matrix              Name            Matrix

σ0 = I          [1, 0; 0, 1]        τ0 = I          [1, 0; 0, 1]
σ1 = σx = Q     [0, 1; 1, 0]        τ1 = τx = iQ    [0, i; i, 0]
σ2 = σy = iJ    [0, −i; i, 0]       τ2 = τy = −J    [0, 1; −1, 0]
σ3 = σz = U     [1, 0; 0, −1]       τ3 = τz = iU    [i, 0; 0, −i]

[Notation and conventions vary; here we follow Dirac [Dir].] The representation is defined by setting

π(1) = τ0,     π(i) = τ1,     π(j) = τ2,     π(k) = τ3,
π(−1) = −τ0,   π(−i) = −τ1,   π(−j) = −τ2,   π(−k) = −τ3.

Exercise 1. Show that the Pauli spin matrices generate a group G of order 16. What is its centre? What is the quotient group G/{I, −I}?
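A brute-force numerical check of the first part of this exercise; the closure algorithm and the rounding-based dictionary key are implementation choices, not from the text:

```python
import numpy as np

Q = np.array([[0, 1], [1, 0]], dtype=complex)
J = np.array([[0, -1], [1, 0]], dtype=complex)
U = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [Q, 1j * J, U]                    # sigma_1, sigma_2, sigma_3

def key(m):
    # Hashable key for a matrix (entries here are exact, so rounding is safe)
    return tuple(np.round(m, 8).flatten())

# Close the generating set under matrix multiplication.
group = {key(np.eye(2, dtype=complex)): np.eye(2, dtype=complex)}
frontier = list(sigma)
while frontier:
    m = frontier.pop()
    if key(m) in group:
        continue
    group[key(m)] = m
    for g in list(group.values()):
        frontier.append(m @ g)
        frontier.append(g @ m)

order = len(group)
assert order == 16
```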

2.3 The quaternions

Both R and C can be considered as finite-dimensional real algebras: R has real dimension 1 and C has real dimension 2. Each is a field: it is commutative, and every non-zero element has a multiplicative inverse. We can however drop the commutativity requirement: a division algebra is an algebra in which every non-zero element has a multiplicative inverse. The algebra of quaternions provides an example of a non-commutative finite-dimensional real division algebra. This algebra was famously invented by Hamilton in 1843, after ten years of struggle; it is denoted by H, in his honour.

There are many ways of constructing this algebra; let us use the associate Pauli matrices τ0, τ1, τ2 and τ3. We can consider M2(C) as an eight-dimensional real algebra. The associate Pauli matrices form a linearly independent subset of M2(C), and so their real linear span is a four-dimensional subspace H. If h = aτ0 + bτ1 + cτ2 + dτ3 ∈ H then

h = [a + id, c + ib; −c + ib, a − id].

Consequently

H = { [z, w; −w̄, z̄] : z, w ∈ C }.

Since composition of the associate Pauli matrices generates the group π(Q), which is contained in H, it follows that H is a four-dimensional unital subalgebra of the eight-dimensional real algebra M2(C). We now define the algebra H of quaternions to be any real algebra which is isomorphic as an algebra to H. Suppose that φ : H → H is an isomorphism. We set 1 = φ(τ0), i = φ(τ1), j = φ(τ2), and k = φ(τ3). Then of course it follows that these elements of H satisfy the relations

i² = j² = k² = −1,
ij = k,     jk = i,     ki = j,
ji = −k,    kj = −i,    ik = −j,

and {±1, ±i, ±j, ±k} is a multiplicative group isomorphic to the quaternionic group Q. Elements of span(1) are called real quaternions and elements of span(i, j, k) are called pure quaternions; the space of pure quaternions is denoted by Pu(H). We can characterize these in the following way.

Proposition 2.3.1 (i) A quaternion x is real if and only if it is in the centre Z(H) of H.
(ii) A quaternion x is pure if and only if x² is real and non-positive.


Proof (i) Certainly the real quaternions are in the centre of H. Conversely, if x = a1 + bi + cj + dk ∈ Z(H) then

ai − b1 + ck − dj = ix = xi = ai − b1 − ck + dj,

so that c = d = 0, and a similar argument, considering jx, shows that b = 0; thus x is real.

(ii) If x = bi + cj + dk is pure, then x² = −b² − c² − d², so that the condition is necessary. On the other hand, if x = a1 + y, where y is pure, then x² = (a² + y²) + 2ay, so that if x² is real and non-positive then ay = 0. If a = 0 then x is pure. If y = 0 then x² = a², so that a = 0, and x = 0.

We now define conjugation. If x = a1 + y = a1 + bi + cj + dk, we set x̄ = a1 − y = a1 − bi − cj − dk. Conjugation is a linear involution ((x̄)‾ = x) and (x1 x2)‾ = x̄2 x̄1. Thus the mapping x → x̄ is an algebra isomorphism of H onto H^opp. We set ∆(x) = x x̄ = x̄ x = a² − y² = a² + b² + c² + d²; ∆ is the quadratic norm on H. We set ‖x‖ = (∆(x))^{1/2}; clearly ‖·‖ is a norm on H, and the mapping (a, b, c, d) → a1 + bi + cj + dk is a linear isometry of R⁴, with its Euclidean norm, onto H. We denote the corresponding inner product on H by ⟨·, ·⟩, and say that x and y are orthogonal if ⟨x, y⟩ = 0. (Euclidean norms, inner products and orthogonality are described in Chapter 4.)

Proposition 2.3.2 If x, y ∈ Pu(H) then x is orthogonal to y if and only if xy = −yx.

Proof For x is orthogonal to y if and only if ∆(x + y) = ∆(x) + ∆(y), if and only if (x + y)² = x² + y², if and only if xy + yx = 0.

Theorem 2.3.1 H is a division algebra.

Proof If x ≠ 0 then ∆(x) > 0, and

x(x̄/∆(x)) = (x̄/∆(x))x = 1,

so that x is invertible, with inverse x̄/∆(x).

Let H∗ = H \ {0}. H∗ is a group, under multiplication.

Proposition 2.3.3 The mapping ∆ : H∗ → R∗ is a homomorphism of H∗ into the multiplicative group R∗ = {λ ∈ R : λ ≠ 0}; its image is the group of positive real numbers.

Proof For ∆(xy) = (xy)(xy)‾ = xy ȳ x̄ = x ∆(y) x̄ = ∆(x)∆(y).
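The quaternion product determined by the relations above, and the multiplicativity of ∆, can be checked directly. A minimal sketch, representing a quaternion a1 + bi + cj + dk as a 4-tuple (a, b, c, d):

```python
def qmul(x, y):
    # Product of quaternions (a, b, c, d) ~ a1 + bi + cj + dk, from the
    # relations i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j.
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(x):
    a, b, c, d = x
    return (a, -b, -c, -d)

def delta(x):
    return qmul(x, conj(x))[0]     # Delta(x) = a^2 + b^2 + c^2 + d^2

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)
assert qmul(i, i) == (-1, 0, 0, 0)

# Delta is multiplicative, and conj(x)/Delta(x) is the inverse of x.
x, y = (1, 2, -1, 3), (2, 0, 1, -2)
assert delta(qmul(x, y)) == delta(x) * delta(y)
inv = tuple(c / delta(x) for c in conj(x))
assert all(abs(u - v) < 1e-9 for u, v in zip(qmul(x, inv), (1, 0, 0, 0)))
```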


The kernel of this homomorphism is H1 = {x ∈ H : ∆(x) = 1}, the unit sphere of H. The set H1 is bounded and closed in the normed space H, and is therefore compact. The group operations are continuous, and so H1 is an example of a compact topological group.

Hamilton invented quaternions because he wished to extend the multiplication of complex numbers to higher dimensions. Can we go further?

Theorem 2.3.2 (Frobenius' theorem) The algebras R, C and H are the only finite-dimensional real division algebras.

Proof We shall defer the proof of this until Section 6.1; the proof uses ideas and results from the theory of Clifford algebras.

Why else is the algebra H of quaternions of interest? We shall see in Section 4.9 that quaternions can be used to describe the groups of rotations in three and four dimensions. We shall see that H is an example of a Clifford algebra, that it plays an essential role in the structure of Clifford algebras, and that Clifford algebras, in their turn, describe groups of rotations in higher dimensions.

When A is a real unital algebra, we shall use the term 'representation' in a more general way than in Section 2.1. The complex algebra Md(C) can be considered as a real algebra of dimension 2d², and the algebra Md(H) can be considered as a real algebra of dimension 4d². We shall also call a unital homomorphism π of A into Md(C) or Md(H) a representation of the algebra A. For example, the mapping π : C → M2(R) defined by

π(x + iy) = xI + yJ = [x, −y; y, x]

is a faithful representation of the real algebra C in M2(R), and the mapping ρ : H → M2(C) defined by

ρ(a1 + bi + cj + dk) = aI + ibQ − cJ + idU = [a + id, c + ib; −c + ib, a − id]

is a faithful representation of the real algebra H in the real algebra M2(C). The reason for considering such representations is that, as we shall see, every Clifford algebra is either isomorphic to Md(D), where D = R, C or H, or is isomorphic to a direct sum Md(D) ⊕ Md(D).
In the rest of this chapter, we shall show that this follows from the fact that Clifford algebras are either simple or the direct sum of two simple subalgebras.


In fact, we establish these isomorphisms directly, and the rest of this chapter can be omitted, at least on a first reading.

2.4 Representations and modules

We introduced the notion of the representation of an algebra in Section 2.1. We can look at representations in a rather different way. As an example, if θ : A → L(E) is a representation of A, then we can think of the endomorphisms θ(a) : E → E as extensions of the scalars acting on E. This leads to the following definition.

Suppose that A is a unital algebra over the real field R. A left A-module M is a real vector space M together with a multiplication mapping (a, m) → am from A × M to M which is bilinear:

(λ1 a1 + λ2 a2)m = λ1(a1 m) + λ2(a2 m),
a(µ1 m1 + µ2 m2) = µ1(am1) + µ2(am2),

and satisfies (ab)m = a(bm) and 1A m = m, for all a, a1, a2, b ∈ A, m, m1, m2 ∈ M and λ1, λ2, µ1, µ2 ∈ R. A right A-module is defined similarly, with a bilinear multiplication M × A → M satisfying m(ab) = (ma)b. Note however that if M is a right A-module then M can be considered as a left A^opp-module, by defining am = ma. For this reason, we shall only consider left A-modules. We shall develop some of the theory of modules: enough for our needs. The general theory applies to a much more general situation.

If M is a left A-module, let θ(a)(m) = am; then θ is a representation of A in L(M). Conversely, if θ is a representation of A in L(E) then E becomes a left A-module when we define ax = θ(a)(x). Thus there is a one-one correspondence between left A-modules and representations of A. In particular, the left regular representation l of A corresponds to considering A as a left A-module, using the algebra multiplication to define the module multiplication.

We can form the direct sum of left A-modules: if M1 and M2 are left A-modules, then the vector space sum M1 ⊕ M2 becomes a left A-module when we define a(m1, m2) = (am1, am2).

Suppose that M is a left A-module. A linear subspace N of M is a


left A-submodule if an ∈ N whenever a ∈ A and n ∈ N. N is then a left A-module in a natural way. If N1 and N2 are left A-submodules, then so is N1 + N2. The intersection of left A-submodules is again a left A-submodule. This means that if C is a non-empty subset of a left A-module M then there is a smallest left A-submodule containing C, which we denote by [C]A. Then [C]A = {a1 c1 + · · · + ak ck : k ≥ 0, ai ∈ A, ci ∈ C}. If C = {c1, . . . , cn}, we write [c1, . . . , cn]A for [C]A; then [c1, . . . , cn]A = {a1 c1 + · · · + an cn : ai ∈ A}. If M is a left A-module and M = [c1, . . . , cn]A, we say that M is a finitely generated left A-module. If M = [c]A = {ac : a ∈ A}, then M is a cyclic left A-module, and c is called a cyclic vector for M.

Let us consider two examples. First, consider A as a left A-module. Then a subset J of A is a left A-submodule if and only if J is a left ideal in A. The left A-module A is cyclic, with cyclic vector 1. More generally, an element c of A is a cyclic vector if and only if c has a left inverse d in A (an element d such that dc = 1). For if [c]A = A then 1 ∈ [c]A, so that c must have a left inverse. Conversely, if d is a left inverse for c, then a = adc ∈ [c]A, for any a ∈ A, and so c is a cyclic vector.

Secondly, suppose that M is a left A-module. Then M^k is also a left A-module. It can however also be considered as a left Mk(A)-module: if α = (aij) ∈ Mk(A) and µ = (mj) ∈ M^k, define αµ by (αµ)i = ∑_{j=1}^k aij mj.
2.5 Module homomorphisms

Suppose that M1 and M2 are both left A-modules. Then a linear mapping T from M1 to M2 is called a module homomorphism, or A-homomorphism, if T(am) = aT(m) for all a ∈ A and all m ∈ M1. A bijective A-homomorphism is called an A-isomorphism. The set of all A-homomorphisms from M1 to M2 is a linear subspace of L(M1, M2), denoted by HomA(M1, M2). If M is a left A-module, we denote HomA(M, M) by EndA(M); it is a subalgebra of L(M). Its elements are called A-endomorphisms.

For example, suppose that N = N1 ⊕ · · · ⊕ Nn and M = M1 ⊕ · · · ⊕ Mm are direct sums of left A-modules. Then a linear mapping θ : N → M


is an A-homomorphism if and only if there are A-homomorphisms θij : Nj → Mi such that

θ(x) = ( ∑_{j=1}^n θij(xj) )_{i=1}^m,    for x = (x1, . . . , xn) ∈ N.

In particular, if each Mi = Nj = L then HomA(N, M) is isomorphic to the space Mm,n(EndA(L)) of m × n matrices with entries in EndA(L).

Proposition 2.5.1 Suppose that A is a unital algebra. Then EndA(A) is naturally isomorphic as an algebra to A^opp.

Proof Define φ(T) = T(1A), for T ∈ EndA(A). If I is the identity in EndA(A), then φ(I) = 1A = 1_{A^opp}, and if S, T ∈ EndA(A) then

φ(ST) = S(T(1A)) = S(φ(T)) = S(φ(T)1A) = φ(T)S(1A) = φ(T)φ(S),

so that φ is an algebra homomorphism of EndA(A) into A^opp. It remains to show that φ is bijective. Suppose that φ(T) = φ(S). If a ∈ A then T(a) = T(a1A) = aT(1A) = aS(1A) = S(a1A) = S(a), so that T = S. Thus φ is injective. Suppose that c ∈ A^opp. Let ρ(c)(b) = bc (this is multiplication in A, not in A^opp!). Then ρ(c)(ab) = abc = a(ρ(c)(b)), so that ρ(c) ∈ EndA(A). Since φ(ρ(c)) = ρ(c)(1A) = c, φ is surjective.

2.6 Simple modules

A left A-module M is said to be simple if it is not {0} and has no left A-submodules other than {0} and M itself. Thus M is simple if and only if the representation π : A → L(M) defined by π(a)(x) = a.x is irreducible.

Proposition 2.6.1 A left A-module M other than {0} is simple if and only if every non-zero element of M is a cyclic vector.

Proof If M is simple and m ∈ M is non-zero then [m]A is a non-zero left A-submodule, and so must be M. Thus M is cyclic and m is a cyclic vector. Suppose conversely that every non-zero element of M is a cyclic vector. If n is a non-zero element of a left A-submodule N, then M = [n]A ⊆ N, so that N = M; M is simple.


Recall that if we consider an algebra A as a left A-module, then the left A-submodules of A are the left ideals of A. Thus a non-zero left ideal is a simple left A-module if and only if it is a minimal (non-zero) left ideal. A is a simple left A-module if and only if it has no left ideals other than {0} and A itself. Note that this is stronger than being a simple algebra.

Proposition 2.6.2 A unital algebra A is a simple left A-module if and only if A is a division algebra; that is, every non-zero element of A has a (two-sided) inverse.

Proof If A is a division algebra, then every non-zero element of A has an inverse, and so is a cyclic vector. Thus A is simple, by Proposition 2.6.1. If conversely A is simple, and a ≠ 0, then a is a cyclic vector, and so has a left inverse b. Then b ≠ 0, and so b has a left inverse c. Then c = c(ba) = (cb)a = a, and so b is a right inverse of a, too.

Thus by Frobenius' theorem, EndA(M) is isomorphic either to R, in which case the only A-endomorphisms of M are scalar multiples of the identity, or to C or H. All these possibilities can occur. Clearly EndR(R) ≅ R. C can be considered as a simple real left C-module, and then EndC(C) ≅ C. Similarly, H is a simple real left H-module, and EndH(H) ≅ H^opp ≅ H. But C and H are not simple R-modules.

A-homomorphisms between simple left A-modules have an important property.

Theorem 2.6.1 (Schur's lemma) Suppose that T ∈ HomA(M1, M2), where M1 and M2 are simple left A-modules. If T ≠ 0 then T is an A-isomorphism of M1 onto M2, and T⁻¹ is also an A-isomorphism.

Proof Suppose that T ≠ 0. Then N = {x ∈ M1 : T(x) = 0} is a left A-submodule of M1 which is not equal to M1; thus N = {0}, and T is injective. Similarly, T(M1) is a non-zero left A-submodule of M2, and so is equal to M2. Thus T is an A-isomorphism. Let T⁻¹ be the inverse mapping. Then if a ∈ A and m ∈ M2,

T⁻¹(am) = T⁻¹(aT(T⁻¹(m))) = T⁻¹(T(aT⁻¹(m))) = aT⁻¹(m),

so that T⁻¹ is an A-isomorphism.

Exercise 1. If z = x + iy ∈ C and h ∈ H, let zh = xh + yih. Show that this makes H a C-module. Is it simple?


2.7 Semi-simple modules

A left A-module M is semi-simple if it can be written as a direct sum M = M1 ⊕ · · · ⊕ Md of simple left A-modules. Since each Mi is cyclic, a semi-simple left A-module is finitely generated.

Theorem 2.7.1 Suppose that M is a finitely generated left A-module. Then M is semi-simple if and only if M is the span of its simple left A-submodules.

Proof The condition is certainly necessary. Suppose that M is spanned by its simple left A-submodules. If M = [m1, . . . , mk]A, then each mi is in the span of finitely many simple left A-submodules. Thus M is spanned by finitely many simple left A-submodules. Let {M1, . . . , Mj} be a minimal family of simple left A-submodules which spans M. Let Li = [∪{Mk : k ≠ i}]A. Suppose that Mi ∩ Li ≠ {0}. Then Mi ∩ Li is a left A-submodule contained in Mi; since Mi is simple, Mi ∩ Li = Mi, so that Mi ⊆ Li. Thus we can discard Mi, contradicting the minimality of the family. Thus Mi ∩ Li = {0} for each i, from which it follows that M = M1 ⊕ · · · ⊕ Mj.

Proposition 2.7.1 Suppose that M = M1 ⊕ · · · ⊕ Mk is the direct sum of simple left A-modules. If N is a non-zero simple submodule of M, then N is isomorphic as a left A-module to Mi for some 1 ≤ i ≤ k.

Proof Let j : N → M be the inclusion mapping from N to M, and let πi denote the projection from M onto Mi for 1 ≤ i ≤ k. Let x be a non-zero element of N. There exists i such that πi(x) ≠ 0. Then πi ◦ j is a non-zero module homomorphism of N into Mi. Since N and Mi are simple, πi ◦ j is a module isomorphism of N onto Mi, by Schur's lemma.

In particular, if M is a finitely generated left H-module, there exist m1, . . . , mk in M such that every element m of M can be written uniquely as m = h1 m1 + · · · + hk mk, where h1, . . . , hk are in H. Then M has real dimension 4k, so that k is uniquely determined; we set dimH M = k.

Theorem 2.7.2 A finite-dimensional simple unital algebra A is semi-simple.

Proof Suppose that J is a non-zero left ideal of minimal dimension.

2.7 Semi-simple modules

33

Then J is a simple left A-module. Let ( k ) X JA = ji ai : k ∈ N, ji ∈ J, ai ∈ A . i=1

Then JA is a non-zero ideal in A; since A is simple, JA = A. Thus there exist j1, . . . , jk in J and a1, . . . , ak in A such that 1A = j1 a1 + · · · + jk ak. We may clearly suppose that each ji ai ≠ 0, so that Jai is a non-zero left ideal. Now the mapping j → jai is a left A-module homomorphism of J onto Jai. By Schur's lemma, it is an isomorphism, and so each Jai is also a simple left A-submodule. Since Ja1, . . . , Jak span A, A is semi-simple.

Corollary 2.7.1 (Wedderburn's theorem) If A is a finite-dimensional simple real unital algebra then there exists k ∈ N and a division algebra D = R, C or H such that A ≅ Mk(D). Thus A ≅ EndD(D^k).

Proof We can write A = J1 ⊕ · · · ⊕ Jk as a direct sum of simple left A-modules (minimal non-zero left ideals). We show that each Ji is module isomorphic to J1. If not, rearranging the terms if necessary, we can write

  A = (J1 ⊕ · · · ⊕ Jl) ⊕ (Jl+1 ⊕ · · · ⊕ Jk) = B ⊕ C,

where Ji is module isomorphic to J1 if 1 ≤ i ≤ l and Ji is not module isomorphic to J1 if l + 1 ≤ i ≤ k. Let πB be the projection of A onto B and let πC be the projection of A onto C. If θ ∈ HomA(A), then

  πC(θ(Ji)) = πC(θ(AJi)) = πC(Aθ(Ji)) = AπC(θ(Ji)),

so that either πC(θ(Ji)) = {0} or πC(θ(Ji)) is a submodule of C isomorphic to Ji. It therefore follows from Proposition 2.7.1 that if 1 ≤ i ≤ l then πC(θ(Ji)) = {0}, so that θ(Ji) ⊆ B. Thus θ(B) ⊆ B, and similarly θ(C) ⊆ C. Since, for a ∈ A, the mapping b → ba : A → A is a left A-module homomorphism, it follows that B and C are ideals in A, contradicting the simplicity of A.

By Proposition 2.6.2 there exists a division algebra D such that HomA(Ji, Jl) ≅ D, for 1 ≤ i, l ≤ k. Consequently

  A^opp ≅ EndA(A) ≅ EndA(J1 ⊕ · · · ⊕ Jk) ≅ Mk(HomA(Ji, Jl)) ≅ Mk(D).

Since Mk(D) ≅ (Mk(D^opp))^opp ≅ (Mk(D))^opp, the result follows.

Corollary 2.7.2 If a is an element of a finite-dimensional simple real


unital algebra A, and if a has a left inverse or a right inverse, then a has a two-sided inverse.

Proof We can consider A as EndD(D^k), and a as an element of the real vector space L(D^k) of real linear mappings from D^k to itself. Each of the conditions implies that a is a bijection, and so a has an inverse a^{-1} in L(D^k). If a^{-1}(x) = y and d ∈ D then a(dy) = d(ay) = dx, so that a^{-1}(dx) = dy = d(a^{-1}(x)); thus a^{-1} ∈ EndD(D^k).

Are there other, essentially different, representations? The next theorem shows that the answer is 'no'.

Theorem 2.7.3 Let D = R, C or H. Suppose that W is a finite-dimensional real vector space, and that π : Mk(D) → L(W) is a real unital representation. Then the mapping (λ, w) → π(λI)(w) from D × W to W makes W a left D-module, dimD(W) = rk for some r, and there is a π-invariant decomposition W = W1 ⊕ · · · ⊕ Wr, where dimD(Ws) = k for each 1 ≤ s ≤ r. For each 1 ≤ s ≤ r there is a basis (e1, . . . , ek) of Ws and an isomorphism πs : Mk(D) → Mk(D) such that

  π(A)(Σ_{j=1}^k xj ej) = Σ_{i=1}^k (Σ_{j=1}^k aij xj) ei,  where (aij) = πs(A).

In other words, any irreducible unital representation of Mk(D) is essentially the same as the natural representation of Mk(D) as HomD(D^k), and any finite-dimensional representation is a direct sum of irreducible representations.

Proof It is immediate that the mapping (λ, w) → π(λI)(w) from D × W to W makes W a left D-module. Let the matrix E(i, j) be defined by setting E(i, j)_{i,j} = 1 and E(i, j)_{l,m} = 0 otherwise, for 1 ≤ i, j, l, m ≤ k, and let Tij = π(E(i, j)). Since E(j, j) is a projection, so is Tjj. Further, Tii Tjj = 0 for i ≠ j, and I = T11 + · · · + Tkk. Let Uj = Tjj(W). Then W = U1 ⊕ · · · ⊕ Uk. Since Tii Tij Tjj = Tij, Tij is an isomorphism of Uj onto Ui, with inverse Tji. Let (f1, . . . , fr) be a basis for U1. We use this to express W as the direct sum W1 ⊕ · · · ⊕ Wr of r π-invariant subspaces, each of dimension k. Suppose that 1 ≤ s ≤ r.

Let e_j^(s) = Tj1(fs), for 1 ≤ j ≤ k. Then e_j^(s) ∈ Uj, and (e_1^(s), . . . , e_k^(s)) is a basis for Ws = span(e_1^(s), . . . , e_k^(s)). Since Tij(e_j^(s)) = e_i^(s) and Tij(e_l^(s)) = 0 for l ≠ j, Ws is π-invariant, and

  π(A)(Σ_{j=1}^k xj e_j^(s)) = Σ_{i=1}^k (Σ_{j=1}^k aij xj) e_i^(s)  for some (aij).

The mapping πs : Mk(D) → Mk(D) defined by πs(A) = (aij) is then an isomorphism of Mk(D) onto Mk(D), by Schur's lemma. Finally, W = W1 ⊕ · · · ⊕ Wr.
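The matrix-unit identities that drive the proof above can be checked numerically. A minimal sketch, assuming NumPy and the illustrative choice k = 3 (neither is from the text); under a unital representation π these identities are inherited by Tij = π(E(i, j)), which is all the proof uses.

```python
import numpy as np

k = 3

def E(i, j):
    """Matrix unit E(i, j): 1 in position (i, j), 0 elsewhere."""
    m = np.zeros((k, k))
    m[i, j] = 1.0
    return m

# E(j, j) is a projection, and the E(j, j) sum to the identity.
assert np.allclose(E(1, 1) @ E(1, 1), E(1, 1))
assert np.allclose(sum(E(j, j) for j in range(k)), np.eye(k))

# E(i, i) E(j, j) = 0 for i != j.
assert np.allclose(E(0, 0) @ E(1, 1), np.zeros((k, k)))

# E(i, i) E(i, j) E(j, j) = E(i, j), and E(i, j) E(j, i) = E(i, i),
# so E(i, j) maps the range of E(j, j) onto the range of E(i, i),
# with inverse E(j, i).
assert np.allclose(E(0, 0) @ E(0, 2) @ E(2, 2), E(0, 2))
assert np.allclose(E(0, 2) @ E(2, 0), E(0, 0))
```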

3 Multilinear algebra

To each finite-dimensional vector space E equipped with a non-singular quadratic form, there is associated a universal Clifford algebra A. If dim E = d then dim A = 2^d; each time we increase the dimension of the vector space by 1, the dimension of the algebra doubles. This suggests strongly, and correctly, that the algebra should be constructed as a tensor product.

Many mathematicians feel uncomfortable with tensor product spaces, since they are constructed as a quotient of an infinite-dimensional space by an infinite-dimensional subspace. Here we avoid this, by making systematic use of the duality theory of finite-dimensional vector spaces. In the same way, we use duality to construct the exterior algebra of a finite-dimensional vector space (which is a particular example of a Clifford algebra) and to construct its symmetric algebra.

In this chapter, K will denote either the field R of real numbers or the field C of complex numbers.

3.1 Multilinear mappings

Suppose that E1, . . . , Ek and F are vector spaces over K. A mapping T : E1 × · · · × Ek → F is multilinear, or k-linear, if it is linear in each variable:

  T(x1, . . . , xj-1, αxj + βyj, xj+1, . . . , xk)
    = αT(x1, . . . , xj-1, xj, xj+1, . . . , xk) + βT(x1, . . . , xj-1, yj, xj+1, . . . , xk),

for α, β ∈ K, xj, yj ∈ Ej and 1 ≤ j ≤ k. Under pointwise addition, the k-linear mappings from E1 × · · · × Ek into F form a vector space, which is denoted by M(E1, . . . , Ek; F). When E1 = · · · = Ek = E, we write M^k(E, F) for M(E1, . . . , Ek; F). We write M(E1, . . . , Ek) for M(E1, . . . , Ek; K), and M^k(E) for M^k(E, K). Elements of M(E1, . . . , Ek) are called multilinear forms, or k-linear forms.

A 2-linear mapping is called a bilinear mapping. The vector space of bilinear mappings from E1 × E2 into F is denoted by B(E1, E2; F). We write B(E1, E2) for B(E1, E2; K) and B(E) for B(E, E). Elements of B(E1, E2) are called bilinear forms.

Proposition 3.1.1 If b ∈ B(E1, E2; F), e1 ∈ E1 and e2 ∈ E2, let lb(e1) : E2 → F be defined by lb(e1)(e2) = b(e1, e2). Then lb(e1) ∈ L(E2, F), and the mapping lb : E1 → L(E2, F) is linear. The mapping l : b → lb is a linear isomorphism of B(E1, E2; F) onto L(E1, L(E2, F)). Similarly, the mapping r defined by rb(e2)(e1) = b(e1, e2) is a linear isomorphism of B(E1, E2; F) onto L(E2, L(E1, F)).

Proof

Straightforward verification.

Corollary 3.1.1 dim B(E1, E2; F) = dim E1 · dim E2 · dim F.

Similarly, M(E1, . . . , Ek; F) is isomorphic to M(E1, . . . , Ek-1; L(Ek, F)), and an inductive argument shows that

  dim M(E1, . . . , Ek; F) = (∏_{j=1}^k dim Ej) · dim F.

Corollary 3.1.2 B(E1, E2) ≅ L(E1, E2′) ≅ L(E2, E1′), and dim B(E1, E2) = dim E1 · dim E2.

Proposition 3.1.2 If b ∈ B(E1, E2) then rank(lb) = rank(rb).

Proof If x2 ∈ E2, then x2 ∈ (lb(E1))⊥ if and only if b(x1, x2) = 0 for all x1 ∈ E1, and this happens if and only if rb(x2) = 0. Thus (lb(E1))⊥ = N(rb), and so

  rank(lb) = dim E2 − dim(lb(E1))⊥ = dim E2 − dim N(rb) = rank(rb),

by the rank-nullity formula.

If b ∈ B(E1, E2), we define the rank of b to be the rank of the element lb of L(E1, E2′); b is non-singular if rank(lb) = dim E1 = dim E2. Thus b is non-singular if and only if lb and rb are invertible.


Exercises

1. Prove Proposition 3.1.1.
2. Suppose that (e1, . . . , ed) is a basis for E1, with dual basis (φ1, . . . , φd), and that (f1, . . . , fc) is a basis for E2, with dual basis (ψ1, . . . , ψc). Suppose that b is a bilinear form on E1 × E2, and that bij = b(ei, fj) for 1 ≤ i ≤ d, 1 ≤ j ≤ c. Explain how the matrix (bij) is related to the matrices which represent lb and rb.

3.2 Tensor products

We now use duality to define the tensor product of vector spaces. Suppose that (x1, . . . , xk) ∈ E1 × · · · × Ek. The evaluation mapping m → m(x1, . . . , xk) from M(E1, . . . , Ek) into K is a linear functional on M(E1, . . . , Ek). We denote it by x1 ⊗ · · · ⊗ xk, and call it an elementary tensor. We denote the linear span of all such elementary tensors by E1 ⊗ · · · ⊗ Ek; E1 ⊗ · · · ⊗ Ek is the tensor product of (E1, . . . , Ek). In fact, the elementary tensors span the dual M′(E1, . . . , Ek) of M(E1, . . . , Ek).

Proposition 3.2.1 E1 ⊗ · · · ⊗ Ek = M′(E1, . . . , Ek).

Proof If m ∈ (E1 ⊗ · · · ⊗ Ek)⊥, then m(x1, . . . , xk) = 0 for all (x1, . . . , xk) ∈ E1 × · · · × Ek, and so m = 0. Thus

  E1 ⊗ · · · ⊗ Ek = (E1 ⊗ · · · ⊗ Ek)⊥⊥ = M′(E1, . . . , Ek).

Corollary 3.2.1 dim(E1 ⊗ · · · ⊗ Ek) = ∏_{j=1}^k dim Ej.

We can obtain a basis for E1 ⊗ · · · ⊗ Ek by taking a basis for each of the spaces E1, . . . , Ek, and taking all tensor products of the form e1 ⊗ · · · ⊗ ek, where each ej is a member of the appropriate basis.

The tensor product has the following universal mapping property, which enables us to replace a multilinear mapping by a linear one.

Proposition 3.2.2 The mapping ⊗k : E1 × · · · × Ek → E1 ⊗ · · · ⊗ Ek defined by ⊗k(x1, . . . , xk) = x1 ⊗ · · · ⊗ xk is k-linear. If m ∈ M(E1, . . . , Ek; F) there exists a unique linear mapping L(m) ∈ L(E1 ⊗ · · · ⊗ Ek, F) such that m = L(m) ∘ ⊗k; that is,

  m(x1, . . . , xk) = L(m)(x1 ⊗ · · · ⊗ xk)  for (x1, . . . , xk) ∈ E1 × · · · × Ek.


The mapping L : m → L(m) is an isomorphism of M(E1, . . . , Ek; F) onto L(E1 ⊗ · · · ⊗ Ek, F). Thus we have the following commutative diagram:

  E1 × · · · × Ek --⊗k--> E1 ⊗ · · · ⊗ Ek --L(m)--> F,  with m = L(m) ∘ ⊗k.

Proof Since

  (x1 ⊗ · · · ⊗ (αxj + βyj) ⊗ · · · ⊗ xk)(m)
    = m(x1, . . . , (αxj + βyj), . . . , xk)
    = αm(x1, . . . , xj, . . . , xk) + βm(x1, . . . , yj, . . . , xk)
    = α(x1 ⊗ · · · ⊗ xj ⊗ · · · ⊗ xk)(m) + β(x1 ⊗ · · · ⊗ yj ⊗ · · · ⊗ xk)(m),

⊗k is k-linear.

The mapping T → T ∘ ⊗k is a linear mapping from L(E1 ⊗ · · · ⊗ Ek, F) into M(E1, . . . , Ek; F). If T ∘ ⊗k = 0, then T(x1 ⊗ · · · ⊗ xk) = 0 for all elementary tensors x1 ⊗ · · · ⊗ xk. Since these span E1 ⊗ · · · ⊗ Ek, T = 0. Thus the mapping T → T ∘ ⊗k is injective. Since L(E1 ⊗ · · · ⊗ Ek, F) and M(E1, . . . , Ek; F) have the same dimension, it is also surjective; this gives the result.

Corollary 3.2.2
(i) There is a unique isomorphism κ : E ⊗ F → L(F′, E) such that κ(x ⊗ y)(φ) = φ(y)x for x ∈ E, y ∈ F, φ ∈ F′.
(ii) There is a unique isomorphism λ : E ⊗ F′ → L(F, E) such that λ(x ⊗ φ)(y) = φ(y)x for x ∈ E, y ∈ F, φ ∈ F′.

Proof (i) If (x, y) ∈ E × F, let m(x, y)(φ) = φ(y)x, for φ ∈ F′. Then m(x, y) ∈ L(F′, E), and m is a bilinear mapping from E × F into L(F′, E). Let κ = L(m) be the corresponding linear mapping from E ⊗ F into L(F′, E). We must show that κ is an isomorphism. Let (e1, . . . , em) be a basis for E, let (f1, . . . , fn) be a basis for F, and let (φ1, . . . , φn) be the basis for F′ dual to (f1, . . . , fn). If t ∈ E ⊗ F, we can write t as Σ_{i=1}^m Σ_{j=1}^n tij ei ⊗ fj. Then κ(t)(φj) = Σ_{i=1}^m tij ei, so that if κ(t) = 0


then tij = 0 for all i and j, and t = 0. Thus κ is injective. Since E ⊗ F and L(F′, E) have the same dimension, it follows that κ is an isomorphism.

(ii) This follows, since F″ ≅ F.

We define the rank of an element t of E ⊗ F to be the rank of κ(t). Properties of the rank are obtained in Exercise 1.

Corollary 3.2.3 (E1 ⊗ · · · ⊗ Ek) ⊗ (Ek+1 ⊗ · · · ⊗ El) is naturally isomorphic to E1 ⊗ · · · ⊗ El.

Proof

For

  (E1 ⊗ · · · ⊗ Ek) ⊗ (Ek+1 ⊗ · · · ⊗ El)
    = B′(E1 ⊗ · · · ⊗ Ek, Ek+1 ⊗ · · · ⊗ El)
    ≅ B′(M′(E1, . . . , Ek), M′(Ek+1, . . . , El))
    ≅ M′(E1, . . . , El) = E1 ⊗ · · · ⊗ El.
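The universal property of Proposition 3.2.2 can be made concrete in coordinates. In the sketch below (the dimensions m, n and the random matrix B are illustrative assumptions, not from the text), E1 = R^m and E2 = R^n, the elementary tensor x ⊗ y is realized as a Kronecker product, and the linear functional L(b) has as coefficient vector the flattened matrix of b.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
B = rng.standard_normal((m, n))       # matrix of a bilinear form b on R^m x R^n
x = rng.standard_normal(m)
y = rng.standard_normal(n)

bilinear = x @ B @ y                  # b(x, y)
# The elementary tensor x (x) y as a Kronecker product, and the linear
# functional L(b) whose coefficient vector is B flattened row-major:
linear = B.ravel() @ np.kron(x, y)    # L(b)(x (x) y)

assert np.isclose(bilinear, linear)   # b = L(b) composed with (x1, x2) -> x1 (x) x2
```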

We denote the tensor product of k copies of a vector space E by ⊗^k E. Then it follows from the preceding corollary that the infinite direct sum

  ⊗*E = K ⊕ E ⊕ (E ⊗ E) ⊕ · · · ⊕ (⊗^k E) ⊕ · · ·

is an infinite-dimensional associative algebra, the tensor algebra of E. In particular, (⊗^k E) ⊗ (⊗^l E) ⊆ ⊗^{k+l} E. The tensor algebra has the following universal property.

Theorem 3.2.1 Suppose that T : E → A is a linear mapping of E into a unital algebra A. Then T extends uniquely to a unital algebra homomorphism T̃ : ⊗*E → A. Thus we have the following commutative diagram:

  E --⊂--> ⊗*E --T̃--> A,  with T̃ extending T.

Proof Let (e1, . . . , ed) be a basis for E. If e_{i1} ⊗ · · · ⊗ e_{ik} is a corresponding basic vector in ⊗^k E, let T̃(e_{i1} ⊗ · · · ⊗ e_{ik}) = T(e_{i1}) · · · T(e_{ik}), the product being taken in A, and extend by linearity to ⊗^k E, and then to ⊗*E. T̃ is then a unital


algebra homomorphism. Since e1, . . . , ed generate ⊗*E, the extension is unique.

We can also consider the tensor product of linear operators. Suppose that Tj ∈ L(Ej, Fj) for 1 ≤ j ≤ k. If m ∈ M(F1, . . . , Fk), let

  T(m)(x1, . . . , xk) = m(T1(x1), . . . , Tk(xk))  for (x1, . . . , xk) ∈ E1 × · · · × Ek.

Then T(m) ∈ M(E1, . . . , Ek), and T ∈ L(M(F1, . . . , Fk), M(E1, . . . , Ek)). Let T′ be its transpose, so that T′ ∈ L(E1 ⊗ · · · ⊗ Ek, F1 ⊗ · · · ⊗ Fk). The mapping (T1, . . . , Tk) → T′ from L(E1, F1) × · · · × L(Ek, Fk) to L(E1 ⊗ · · · ⊗ Ek, F1 ⊗ · · · ⊗ Fk) is k-linear, and so there exists a linear mapping

  j : L(E1, F1) ⊗ · · · ⊗ L(Ek, Fk) → L(E1 ⊗ · · · ⊗ Ek, F1 ⊗ · · · ⊗ Fk)

such that j(T1 ⊗ · · · ⊗ Tk) = T′. By slight abuse of notation, we denote j(T1 ⊗ · · · ⊗ Tk) by T1 ⊗ · · · ⊗ Tk. Thus

  (T1 ⊗ · · · ⊗ Tk)(x1 ⊗ · · · ⊗ xk) = T1(x1) ⊗ · · · ⊗ Tk(xk).

The representation of tensor products of linear mappings by matrices can become complicated. If I1, . . . , Ik are the index sets for bases of E1, . . . , Ek, and J1, . . . , Jk are the index sets for bases of F1, . . . , Fk, then T1 ⊗ · · · ⊗ Tk is represented by an I1 × · · · × Ik by J1 × · · · × Jk matrix. We shall use the following convention, in the case where k = 2. If S ⊗ T maps E1 ⊗ E2 into F1 ⊗ F2, and if S, T also denote matrices representing the mappings, then we represent S ⊗ T by the block matrix

  [ S t11  . . .  S t1n ]
  [  ...    ...    ...  ]
  [ S tm1  . . .  S tmn ]

where each block S tij is the multiple of the matrix S by the entry tij of T. For example, if E1 = E2 = F1 = F2 = R² and U and Q are given by the matrices

  U = [ 1  0 ]      Q = [ 0  1 ]
      [ 0 −1 ],         [ 1  0 ],

then U ⊗ Q is represented by

  [ 0  U ]   [ 0  0  1  0 ]
  [ U  0 ] = [ 0  0  0 −1 ]
             [ 1  0  0  0 ]
             [ 0 −1  0  0 ]


and Q ⊗ U is represented by

  [ Q  0 ]   [ 0  1  0  0 ]
  [ 0 −Q ] = [ 1  0  0  0 ]
             [ 0  0  0 −1 ]
             [ 0  0 −1  0 ].

There are several other ways of defining the tensor product of vector spaces. We can consider the infinite-dimensional vector space of all K-valued functions on E1 × · · · × Ek which take the value 0 on all but finitely many points, and define the tensor product as a quotient of this by an appropriate infinite-dimensional subspace. (This is the only practicable method when considering tensor products of more general objects, such as modules.) With this definition, it is necessary to establish existence and uniqueness properties carefully.

At the other extreme, we can consider a basis Bj for each space Ej, and define the tensor product as the space of all K-valued functions on B1 × · · · × Bk. We set e_{i1}^(1) ⊗ · · · ⊗ e_{ik}^(k) as the indicator function of {(e_{i1}^(1), . . . , e_{ik}^(k))}, define ⊗k(e_{i1}^(1), . . . , e_{ik}^(k)) = e_{i1}^(1) ⊗ · · · ⊗ e_{ik}^(k), and extend by multilinearity. This has the disadvantage that it is co-ordinate based, and it is necessary to determine the effect of changes of bases.

Exercises

1. Suppose that t ∈ E ⊗ F has rank r. Show that t can be written as t = x1 ⊗ y1 + · · · + xr ⊗ yr. Show that in such a representation each of the sequences (x1, . . . , xr) and (y1, . . . , yr) is linearly independent. Conversely show that if t = x1 ⊗ y1 + · · · + xr ⊗ yr, where each of the sequences (x1, . . . , xr) and (y1, . . . , yr) is linearly independent, then t has rank r.
2. In the example above, show that (U ⊗ Q)(Q ⊗ U) = (Q ⊗ U)(U ⊗ Q), and show that ((U ⊗ Q)(Q ⊗ U))² = I.
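The universal property of the tensor algebra (Theorem 3.2.1) can be sketched numerically: take A = Mn(R), give a linear T on a basis of E, and define T̃ on a basic tensor as the corresponding product in A. The dimensions and random matrices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 3
# A linear map T : E -> A = M_n(R), specified on a basis (e_0, ..., e_{d-1}).
T = [rng.standard_normal((n, n)) for _ in range(d)]

def T_tilde(word):
    """Extend T to the basic tensor e_{i1} (x) ... (x) e_{ik}, which is
    sent to the product T(e_{i1}) ... T(e_{ik}) in A."""
    out = np.eye(n)            # the empty word is the unit of (x)* E
    for i in word:
        out = out @ T[i]
    return out

# T~ is multiplicative: concatenation of words (the product in (x)* E)
# goes to the product of the images in A, and the unit goes to the unit.
u, v = (0, 1), (1, 0, 1)
assert np.allclose(T_tilde(u + v), T_tilde(u) @ T_tilde(v))
assert np.allclose(T_tilde(()), np.eye(n))
```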

3.3 The trace

Suppose that E is a d-dimensional vector space over K, with basis (e1, . . . , ed) and dual basis (φ1, . . . , φd). Let us consider the tensor product E ⊗ E′, which has as basis {ei ⊗ φj : 1 ≤ i, j ≤ d}. The mapping (x, ψ) → ψ(x) : E × E′ → K is bilinear, and so there exists t ∈ (E ⊗ E′)′ such that t(x ⊗ ψ) = ψ(x) for all x ∈ E, ψ ∈ E′; t(x ⊗ ψ) is the contraction of the elementary tensor x ⊗ ψ.

Recall (Corollary 3.2.2 (ii)) that there is a natural isomorphism λ of E ⊗ E′ onto L(E). If T ∈ L(E), we set τ(T) = t(λ^{-1}(T)); τ is the trace of T. If T is represented by the matrix (tij), then

  τ(T) = t(Σ_{i,j} tij ei ⊗ φj) = Σ_{i,j} tij φj(ei) = Σ_{i=1}^d tii.
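A quick numerical check of the identity τ(RS) = τ(SR) = Σ_{i,j} rij sji for rectangular matrices; the dimensions and random entries are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d, c = 4, 3
R = rng.standard_normal((c, d))   # R in L(E, F), with dim E = d and dim F = c
S = rng.standard_normal((d, c))   # S in L(F, E)

# tau(RS) = tau(SR), even though RS is c x c while SR is d x d.
assert np.isclose(np.trace(R @ S), np.trace(S @ R))
# Both equal the sum over i, j of r_ij s_ji:
assert np.isclose(np.trace(R @ S), np.sum(R * S.T))
```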

Note though that τ(I) = d; it is therefore often more convenient to work with the normalized trace τn = τ/dim E.

We can use the trace, or the normalized trace, to define a duality between spaces of operators. Suppose that F is another vector space, with basis (f1, . . . , fc). If R ∈ L(E, F) is represented by the matrix (rij) and S ∈ L(F, E) is represented by the matrix (sji), then SR ∈ L(E) is represented by the matrix (Σ_{j=1}^c rij sjk) and RS ∈ L(F) is represented by the matrix (Σ_{i=1}^d sji rik), so that τ(RS) = τ(SR) = Σ_{i,j} rij sji. Thus if we set b(R, S) = τ(RS) = τ(SR) then b is a bilinear form on L(E, F) × L(F, E). In particular, the trace defines a bilinear form, the trace form, b, on L(E).

Suppose now that A is a finite-dimensional unital algebra. The left regular representation l is an algebra isomorphism of A into L(A). We define τ(a) = τ(la) and τn(a) = τn(la); τ is the trace and τn the normalized trace on A. Then τ(ab) = τ(ba) and τn(ab) = τn(ba), for a, b ∈ A. Since l1 is the identity mapping on A, τn(1) = 1. Elements of the null space of τn are called pure elements of A, and elements of span(1) are called scalars. If x ∈ A, let Pu(x) = x − τn(x)1. Then Pu is a projection of A onto the subspace Pu(A) of pure elements of A, and A = span(1) ⊕ Pu(A).

This definition is consistent with the definition of 'pure quaternion' given in Section 2.3, since if x = a1 + bi + cj + dk ∈ H then, with respect to the basis (1, i, j, k), lx is represented by the matrix

  [ a −b −c −d ]
  [ b  a −d  c ]
  [ c  d  a −b ]
  [ d −c  b  a ],

so that τ(x) = 4a and τn(x) = a.

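The left regular representation of H and the trace computation above can be verified directly; a minimal sketch, assuming NumPy (the sample quaternions are arbitrary).

```python
import numpy as np

def l(a, b, c, d):
    """Matrix of left multiplication by x = a1 + bi + cj + dk on H,
    with respect to the basis (1, i, j, k)."""
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

i_mat = l(0.0, 1.0, 0.0, 0.0)
assert np.allclose(i_mat @ i_mat, -np.eye(4))   # i^2 = -1, and l is a homomorphism

x = l(2.0, -1.0, 3.0, 0.5)
assert np.isclose(np.trace(x), 4 * 2.0)         # tau(x) = 4a, so tau_n(x) = a
```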

Exercises

1. Show that the mapping (S, T) → τ(ST) is a non-singular bilinear form on L(E, F) × L(F, E).
2. Suppose that T1 ∈ L(E1) and T2 ∈ L(E2). Show that τ(T1 ⊗ T2) = τ(T1)τ(T2).
3. Suppose that φ is a linear functional on L(E) which satisfies φ(I) = 1 and φ(ST) = φ(TS) for all S, T ∈ L(E). Show that φ = τn.
4. Give an example of a finite-dimensional unital algebra A for which there is more than one linear functional φ on A for which φ(1) = 1 and φ(ab) = φ(ba) for all a, b ∈ A.
5. Suppose that T ∈ L(E). Let f(t) = det(I + tT). Show that τ(T) = (df/dt)(0).
6. Suppose that A is a two-dimensional real unital algebra. Show that A is a super-algebra, with direct sum decomposition R.1 ⊕ Pu(A). Suppose that x is a non-zero pure element of A. Show that if x² > 0 then A ≅ R² and if x² < 0 then A ≅ C. Describe A when x² = 0.

3.4 Alternating mappings and the exterior algebra

Throughout this section, we suppose that E is a d-dimensional vector space over K, with basis (e1, . . . , ed) and dual basis (φ1, . . . , φd), and that F is a vector space over K. A k-linear mapping a : E^k → F is alternating if a(x1, . . . , xk) = 0 whenever there exist distinct i, j for which xi = xj.

Proposition 3.4.1 Suppose that m ∈ M^k(E, F). The following are equivalent.

(i) m is alternating.
(ii) m(x1, . . . , xk) = ε(σ)m(x_{σ(1)}, . . . , x_{σ(k)}) for σ ∈ Σk (where ε(σ) is the signature of the permutation σ).
(iii) If i, j are distinct elements of {1, . . . , k}, and if x′i = xj, x′j = xi and x′l = xl for all other indices l, then m(x1, . . . , xk) = −m(x′1, . . . , x′k).


Proof (ii) implies (iii), since (iii) is just (ii) applied to a transposition, and (iii) implies (ii), since any permutation can be written as a product of transpositions. (iii) clearly implies (i). Let us show that (i) implies (iii). To keep the terminology simple, let us suppose that i = 1 and j = 2; the argument clearly applies to the other cases:

  0 = m(x1 + x2, x1 + x2, x3, . . . , xk)
    = m(x1, x1, x3, . . . , xk) + m(x1, x2, x3, . . . , xk) + m(x2, x1, x3, . . . , xk) + m(x2, x2, x3, . . . , xk)
    = m(x1, x2, x3, . . . , xk) + m(x2, x1, x3, . . . , xk).

We denote the set of alternating k-linear mappings of E^k into F by A^k(E, F); it is clearly a linear subspace of M^k(E, F). We write A^k(E) for A^k(E, K), and denote the dual of A^k(E) by Λ^k(E); Λ^k(E) is the kth exterior product of E.

Suppose that (x1, . . . , xk) ∈ E^k. We denote the evaluation mapping a → a(x1, . . . , xk) from A^k(E) to K by x1 ∧ . . . ∧ xk; x1 ∧ . . . ∧ xk is the alternating product, or wedge product, of x1, . . . , xk. It is a linear functional on A^k(E), and so is an element of Λ^k(E). We denote the mapping (x1, . . . , xk) → x1 ∧ . . . ∧ xk : E^k → Λ^k(E) by ∧k; ∧k is an alternating k-linear mapping.

Proposition 3.4.2 Λ^k(E) = span{x1 ∧ . . . ∧ xk : (x1, . . . , xk) ∈ E^k} = span{e_{j1} ∧ . . . ∧ e_{jk} : 1 ≤ j1 < · · · < jk ≤ d}.

Proof Suppose that a ∈ (span{x1 ∧ . . . ∧ xk : (x1, . . . , xk) ∈ E^k})⊥. Then a(x1, . . . , xk) = (x1 ∧ . . . ∧ xk)(a) = 0 for all (x1, . . . , xk) ∈ E^k, so that a = 0. Thus

  Λ^k(E) = span{x1 ∧ . . . ∧ xk : (x1, . . . , xk) ∈ E^k}.

Expanding x1, . . . , xk in terms of the basis, and using the fact that ∧k is an alternating k-linear mapping, it follows that x1 ∧ . . . ∧ xk ∈ span{e_{j1} ∧ . . . ∧ e_{jk} : 1 ≤ j1 < · · · < jk ≤ d}.

A^k(E) is a linear subspace of M^k(E). On the other hand, the mapping Pk : M^k(E) → A^k(E) defined by

  Pk(m)(x1, . . . , xk) = (1/k!) Σ_{σ∈Σk} ε(σ) m(x_{σ(1)}, . . . , x_{σ(k)})

is a natural projection of M^k(E) onto A^k(E). We denote the transposed inclusion mapping from Λ^k(E) into ⊗^k E by Jk:

  Jk(x1 ∧ . . . ∧ xk) = (1/k!) Σ_{σ∈Σk} ε(σ) x_{σ(1)} ⊗ · · · ⊗ x_{σ(k)}.
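For k = 2 the projection Pk has a familiar matrix form: P2 replaces the matrix M of a bilinear form by its antisymmetric part (M − Mᵀ)/2. A sketch, with the illustrative dimension d = 4, checking that P2 is a projection onto alternating forms, and that the dimensions of the spaces of alternating forms sum to 2^d.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)
d = 4
M = rng.standard_normal((d, d))   # matrix of a bilinear form m on E = R^d

P2 = 0.5 * (M - M.T)              # the alternation P_2(m)
assert np.allclose(0.5 * (P2 - P2.T), P2)   # P_2 is idempotent: a projection
assert np.allclose(P2, -P2.T)               # P_2(m) is alternating

# dim A^k(E) = C(d, k); summing over k = 0, ..., d gives 2^d.
assert sum(comb(d, k) for k in range(d + 1)) == 2 ** d
```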

Proposition 3.4.3 {e_{j1} ∧ . . . ∧ e_{jk} : 1 ≤ j1 < · · · < jk ≤ d} is a basis for Λ^k(E).

Proof We need to show that the set {e_{j1} ∧ . . . ∧ e_{jk} : 1 ≤ j1 < · · · < jk ≤ d} is linearly independent. Suppose that j = (j1, . . . , jk), with 1 ≤ j1 < · · · < jk ≤ d. Let m_j(x1, . . . , xk) = φ_{j1}(x1) . . . φ_{jk}(xk). Then m_j ∈ M^k(E); let a_j = Pk(m_j). Then a_j ∈ A^k(E), and a_j(e_{j1}, . . . , e_{jk}) = 1/k!, while a_j(e_{l1}, . . . , e_{lk}) = 0 if l1 < · · · < lk and {l1, . . . , lk} ≠ {j1, . . . , jk}. Suppose now that

  u = Σ { u_{j1,...,jk} (e_{j1} ∧ . . . ∧ e_{jk}) : 1 ≤ j1 < · · · < jk ≤ d } = 0.

Then u_{j1,...,jk} = k! a_j(u) = 0, for each j = (j1, . . . , jk), and so the set {e_{j1} ∧ . . . ∧ e_{jk} : 1 ≤ j1 < · · · < jk ≤ d} is linearly independent.

Corollary 3.4.1 dim Λ^k(E) = d!/(k!(d − k)!).

In particular, Λ^d(E) is one-dimensional, and Λ^k(E) = {0} for k > d.

If x1 ∧ . . . ∧ xk ∈ Λ^k(E) and x_{k+1} ∧ . . . ∧ x_l ∈ Λ^{l-k}(E) then, as for tensor products, x1 ∧ . . . ∧ x_l ∈ Λ^l(E). Extending by bilinearity, we see that if we set

  Λ*(E) = K ⊕ E ⊕ · · · ⊕ Λ^k(E) ⊕ · · · ⊕ Λ^d(E),

then Λ*(E) is a unital associative algebra of dimension 2^d; we denote multiplication by ∧. This is the exterior algebra Λ*(E) of E. It becomes a super-algebra, when we set

  Λ+(E) = span{Λ^k(E) : k even},  Λ−(E) = span{Λ^k(E) : k odd}.

If T ∈ L(E, F) and 1 ≤ k ≤ d then T defines an element T^(k) : A^k(F) → A^k(E), defined by T^(k)(a)(x1, . . . , xk) = a(T(x1), . . . , T(xk)). We denote the transpose mapping in L(Λ^k(E), Λ^k(F)) by ∧^k(T). Then ∧^k(T)(x1 ∧ . . . ∧ xk) = T(x1) ∧ . . . ∧ T(xk). In particular, if E = F and k = d then ∧^d(T) is a linear mapping from the one-dimensional space Λ^d(E) into itself, and so there exists an element det T of K such that ∧^d(T)(x1 ∧ . . . ∧ xd) = (det T)(x1 ∧ . . . ∧ xd); det T is the determinant of T.

If a, b ∈ Λ*(E), let la(b) = a ∧ b. Then the mapping l : a → la is the left regular representation of the algebra Λ*(E) in L(Λ*(E)); it is a unital isomorphism of Λ*(E) onto a subalgebra of L(Λ*(E)). In particular, if x ∈ E, then we denote the operator lx by mx: mx is called a creation operator. mx maps Λ^k(E) into Λ^{k+1}(E).

We can also define annihilation operators. Before we do so, let us introduce some terminology. If (x1, . . . , x_{k+1}) is a sequence with k + 1 terms, let (x1, . . . , x̌j, . . . , x_{k+1}) denote the sequence with k terms obtained by deleting the term xj. Similarly (x1, . . . , x̌i, . . . , x̌j, . . . , x_{k+1}) denotes the sequence with k − 1 terms obtained by deleting the terms xi and xj. We use a similar notation for wedge products.

Suppose that φ ∈ E′ and a ∈ A^k(E). Define lφ(a) ∈ M^{k+1}(E) by setting lφ(a)(x1, . . . , x_{k+1}) = φ(x1)a(x2, . . . , x_{k+1}). lφ(a) is not alternating; we set Pφ(a) = P_{k+1} lφ(a), so that

  Pφ(a)(x1, . . . , x_{k+1}) = (1/(k+1)!) Σ_{σ∈Σ_{k+1}} ε(σ) φ(x_{σ(1)}) a(x_{σ(2)}, . . . , x_{σ(k+1)}).

An alternative, and useful, expression is given by

  Pφ(a)(x1, . . . , x_{k+1}) = (1/(k+1)) Σ_{j=1}^{k+1} (−1)^{j-1} φ(xj) a(x1, . . . , x̌j, . . . , x_{k+1}).

Then Pφ(a) ∈ A^{k+1}(E), and Pφ ∈ L(A^k(E), A^{k+1}(E)). Let δφ : Λ^{k+1}(E) → Λ^k(E) be the transpose mapping. We set


δφ(λ) = 0, for λ ∈ K. Letting k vary, we consider δφ as an element of L(Λ*(E)); δφ is the annihilation operator defined by φ. Easy calculations then show that

  δφ(x1 ∧ · · · ∧ xk) = (1/k) Σ_{j=1}^k (−1)^{j-1} φ(xj)(x1 ∧ · · · ∧ x̌j ∧ · · · ∧ xk)
                     = (1/k!) Σ_{σ∈Σk} ε(σ) φ(x_{σ(1)})(x_{σ(2)} ∧ · · · ∧ x_{σ(k)}).

Theorem 3.4.1 mx² = δφ² = 0, and mx δφ + δφ mx = φ(x)I.

Proof Since x ∧ x = 0, mx² = 0. If a ∈ K then δφ²(a) = 0. If a ∈ A^k(E) then

  (k + 1)(k + 2)Pφ²(a)(x1, . . . , x_{k+2})
    = (k + 1) Σ_{j=1}^{k+2} (−1)^{j-1} φ(xj) Pφ(a)(x1, . . . , x̌j, . . . , x_{k+2})
    = Σ (−1)^{j-1}(−1)^{i-1} φ(xj)φ(xi) a(x1, . . . , x̌i, . . . , x̌j, . . . , x_{k+2}),

the sum being taken over 1 ≤ i < j ≤ k + 2, together with the corresponding terms with i and j interchanged; these cancel in pairs, so that Pφ² = 0, and hence δφ² = 0.

Suppose that q(c) > 0 and q(d) < 0. Then c and d are linearly independent, and the quadratic polynomial Q(λ) = q(λc + d) = λ²q(c) + 2λb(c, d) + q(d) has at least one real root. If q(λc + d) = 0 then span(λc + d) is a one-dimensional linear subspace of E on which q is identically zero. As a consequence of this, properties of regular quadratic spaces are harder to establish than the corresponding properties for inner product spaces, as we shall see.

Suppose that (E, q) is a quadratic space, with basis (e1, . . . , ed). If the associated symmetric bilinear form b is represented by the matrix B = (bij), then

  q(x) = Σ_{i=1}^d Σ_{j=1}^d bij xi xj.

If (E, q) is a quadratic space, we can use the corresponding bilinear


form b and linear mapping lb : E → E′ to define annihilation operators δx : Λ*(E) → Λ*(E), for x ∈ E; we simply set δx = δ_{lb(x)}. Thus

  δx(x1 ∧ · · · ∧ xk) = (1/k) Σ_{j=1}^k (−1)^{j-1} b(x, xj)(x1 ∧ · · · ∧ x̌j ∧ · · · ∧ xk)
                     = (1/k!) Σ_{σ∈Σk} ε(σ) b(x, x_{σ(1)})(x_{σ(2)} ∧ · · · ∧ x_{σ(k)}).

We therefore have the following version of Theorem 3.4.1.

Theorem 4.1.1 Suppose that (E, q) is a quadratic space, and that b is the corresponding symmetric bilinear form. If x, y ∈ E then

  mx² = δx² = 0, and mx δy + δy mx = b(x, y)I.
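These relations can be checked numerically on the 2^d-dimensional space Λ*(R^d). The sketch below uses the common unnormalized convention for the annihilation operator (it drops the 1/k factor appearing in the formula above; with that rescaling the anticommutation relation is exact as stated). The dimension d = 3 and the diagonal quadratic form are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

d = 3
signs = [1.0, 1.0, -1.0]   # diagonal form: b(e_i, e_j) = signs[i] if i == j, else 0
# Basis of Lambda* R^d: subsets of {0, ..., d-1}, each listed once, sorted.
basis = [s for k in range(d + 1) for s in combinations(range(d), k)]
index = {s: n for n, s in enumerate(basis)}
N = len(basis)   # 2^d

def creation(i):
    """m_{e_i}: wedge with e_i on the left."""
    M = np.zeros((N, N))
    for s in basis:
        if i not in s:
            sign = (-1) ** sum(1 for j in s if j < i)
            M[index[tuple(sorted(s + (i,)))], index[s]] = sign
    return M

def annihilation(i):
    """delta_{e_i}: the (unnormalized) interior product with b(e_i, .)."""
    M = np.zeros((N, N))
    for s in basis:
        if i in s:
            sign = (-1) ** s.index(i)
            M[index[tuple(j for j in s if j != i)], index[s]] = signs[i] * sign
    return M

# m_x delta_y + delta_y m_x = b(x, y) I, for sample x = e_0 + 2e_2, y = e_0 - e_2.
x, y = np.array([1.0, 0.0, 2.0]), np.array([1.0, 0.0, -1.0])
mx = sum(x[i] * creation(i) for i in range(d))
dy = sum(y[i] * annihilation(i) for i in range(d))
b_xy = sum(signs[i] * x[i] * y[i] for i in range(d))
assert np.allclose(mx @ mx, np.zeros((N, N)))
assert np.allclose(dy @ dy, np.zeros((N, N)))
assert np.allclose(mx @ dy + dy @ mx, b_xy * np.eye(N))
```

These are exactly the relations from which the Clifford algebra of (E, q) will be built.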

4.2 Orthogonality

In this section, we shall suppose that (E, q) is a regular quadratic space, with associated bilinear form b. If x and y are in E, we say that x and y are orthogonal, and write x ⊥ y, if b(x, y) = 0. Since b is symmetric, x ⊥ y if and only if y ⊥ x. Note that x ⊥ y if and only if q(x + y) = q(x) + q(y). If A is a subset of E, we define the orthogonal set A⊥ by

  A⊥ = {x : x ⊥ a for all a in A}.

Proposition 4.2.1 Suppose that A is a subset of a regular quadratic space E, and that F is a linear subspace of E.

(i) A⊥ is a linear subspace of E.
(ii) If A ⊆ B then A⊥ ⊇ B⊥.
(iii) A⊥⊥ ⊇ A and A⊥⊥⊥ = A⊥.
(iv) A⊥⊥ = span(A), and F = F⊥⊥.
(v) dim F + dim F⊥ = dim E.

Proof lb is a linear isomorphism of E onto E′, and lb(x)(y) = 0 if and only if b(x, y) = 0, so that all these results follow from the corresponding duality results.

Proposition 4.2.2 Suppose that F is a linear subspace of a regular quadratic space (E, q). Then the following are equivalent.

(i) (F, q) is regular.
(ii) F ∩ F⊥ = {0}.


(iii) E = F ⊕ F⊥.
(iv) (F⊥, q) is regular.

Proof

An easy exercise.

4.3 Diagonalization

Theorem 4.3.1 Suppose that b is a symmetric bilinear form on a real vector space E. There exists a basis (e1, . . . , ed) and non-negative integers p and m, with p + m = r, the rank of b, such that if b is represented by the matrix B = (bij) then bii = 1 for 1 ≤ i ≤ p, bii = −1 for p + 1 ≤ i ≤ p + m, and bij = 0 otherwise.

A basis which satisfies the conclusions of the theorem is called a standard orthogonal basis. If (E, q) is a Euclidean space and b is the associated bilinear form then bii = 1 for 1 ≤ i ≤ d; in this case, the basis is called an orthonormal basis.

Proof The proof is by induction on d, the dimension of E. The result is vacuously true when d = 0. Suppose that the result holds for all spaces of dimension less than d and all symmetric bilinear forms on them. Suppose that (E, q) is a quadratic space of dimension d and that b is the corresponding symmetric bilinear form. We consider three cases. First, it may happen that q(x) = 0 for all x in E. Then, by Proposition 4.1.1, b(x, w) = 0 for all x and w, and any basis will do: p = m = r = 0. Secondly, there exists x with q(x) > 0. In this case we set e1 = x/√q(x), so that q(e1) = 1. Thirdly, q(x) ≤ 0 for all x, but there exists w with q(w) < 0. In this case we set e1 = w/√(−q(w)), so that q(e1) = −1.

We now argue in the same way for the last two cases. Let F = {e1}⊥. Then dim F = d − 1, and E = span(e1) ⊕ F. The restriction of b to F is again a symmetric bilinear form: by the inductive hypothesis there is a standard orthogonal basis (e2, . . . , ed) for F. If j > 1 then ej ∈ F, and so b(e1, ej) = b(ej, e1) = 0. It therefore follows that (e1, . . . , ed) is a standard orthogonal basis for E. It is then clear that p + m is the rank of b.

If (ei) is a standard orthogonal basis, and if x = Σ_{i=1}^d xi ei and y = Σ_{i=1}^d yi ei, then

  b(x, y) = Σ_{i=1}^p xi yi − Σ_{i=p+1}^{p+m} xi yi,  and  q(x) = Σ_{i=1}^p xi² − Σ_{i=p+1}^{p+m} xi².

Theorem 4.3.1 depends on the fact that every positive real number has a square root, and every negative real number does not. For more general fields, there exists a basis (e1, . . . , ed) of E, and scalars (λ1, . . . , λd), for which q(x) = Σ_{j=1}^d λj xj² for all x = Σ_{j=1}^d xj ej ∈ E.

Theorem 4.3.2 (Sylvester's law of inertia) Suppose that (e1, . . . , ed) and (f1, . . . , fd) are standard orthogonal bases for a quadratic space (E, q), with corresponding parameters (p, m) and (p′, m′) respectively. Then p = p′ and m = m′.

Proof Let U = span{e1, . . . , ep} and W = span{f_{p′+1}, . . . , fd}. Then the restriction of q to U is positive definite, while its restriction to W is negative semi-definite. Thus U ∩ W = {0}, and so

  p + (d − p′) = dim U + dim W ≤ d = dim E.

Thus p ≤ p′. Similarly, p′ ≤ p, and so p = p′. Since p + m = p′ + m′ = r, it follows that m = m′.

We call (p, m) the signature s(q) of q. Conventions concerning the signature vary. Many authors write (p, q) for our (p, m) (using some other letter, such as Q, for the quadratic form), so that

  Q(x) = Σ_{i=1}^p xi² − Σ_{i=p+1}^{p+q} xi².

Unfortunately, many other authors interchange p and q, and rearrange the basis, so that we get Q(x) = −

p X i=1

x2i

+

p+q X

x2i .

i=p+1
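Whatever the convention, Sylvester's law says that the diagonal sign pattern is an invariant of the form, not of the chosen basis. A small numerical illustration (not from the text; the parameter t is arbitrary): two different standard orthogonal bases of R1,1 necessarily both have parameters (p, m) = (1, 1).

```python
import math

def b(x, y):
    """The bilinear form of R^{1,1}."""
    return x[0]*y[0] - x[1]*y[1]

t = 0.7                                      # arbitrary parameter
e = [(1.0, 0.0), (0.0, 1.0)]                 # one standard orthogonal basis
f = [(math.cosh(t), math.sinh(t)),           # another, obtained from the first
     (math.sinh(t), math.cosh(t))]           # by a "hyperbolic rotation"

for basis in (e, f):
    gram = [[b(u, v) for v in basis] for u in basis]
    print([round(gram[i][i]) for i in range(2)])   # diagonal signs agree
```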

We use the letters p and m to stand for ‘plus’ and ‘minus’. If (E1 , q1 ) and (E2 , q2 ) are quadratic spaces, then the direct sum E1 ⊕ E2 becomes a quadratic space when we define q(x1 ⊕ x2 ) = q1 (x1 ) + q2 (x2 ). If (E, q) = (E1 , q1 ) ⊕ (E2 , q2 ) then clearly the signature (p, m) of (E, q) is (p1 + p2 , m1 + m2 ), where (p1 , m1 ) is the signature of (E1 , q1 ) and (p2 , m2 ) is the signature of (E2 , q2 ). Thus we have the following corollary.


Quadratic forms

Corollary 4.3.1 If F and G are regular subspaces of (E, q) with the same signature, then F⊥ and G⊥ have the same signature.

Clearly q is positive definite if and only if p = d and m = 0, and is negative definite if and only if p = 0 and m = d. We define the Witt index to be w = min(p, m). If w > 0 we call (E, q) a Minkowski space. In relativity theory a four-dimensional real vector space with one 'time-like' dimension and three 'space-like' dimensions plays a fundamental rôle. Here, conventions vary: many authors consider quadratic forms with p = 1 and m = 3, but many others consider forms with p = 3 and m = 1. We call a regular quadratic space with m = 1 a Lorentz space. One other special case occurs, when p = m and d = p + m = 2p = 2m. In this case we say that (E, q) is a hyperbolic space.

Proposition 4.3.1 Suppose that (E, q) is a hyperbolic space of dimension d = 2p. Then there exists a basis (f1, . . . , fd) such that b(f2i, f2i−1) = b(f2i−1, f2i) = 1 for 1 ≤ i ≤ p, and bij = 0 otherwise.

Proof

Let (e1, . . . , ed) be a standard orthogonal basis for E. Let

f_{2i−1} = (1/√2)(e_i + e_{p+i})  and  f_{2i} = (1/√2)(e_i − e_{p+i})

for 1 ≤ i ≤ p. Then b(f_{2i−1}, f_j) = b(f_{2i}, f_j) = 0 if j < 2i − 1 or j > 2i, and

b(f_{2i−1}, f_{2i−1}) = b(f_{2i}, f_{2i}) = (1/2)(q(e_i) + q(e_{p+i})) = 0,
b(f_{2i}, f_{2i−1}) = b(f_{2i−1}, f_{2i}) = (1/2)(q(e_i) − q(e_{p+i})) = 1.

Such a basis is called a hyperbolic basis. If x = ∑_{i=1}^{d} x_i f_i and y = ∑_{i=1}^{d} y_i f_i then

b(x, y) = (x_1 y_2 + x_2 y_1) + · · · + (x_{d−1} y_d + x_d y_{d−1}),
q(x) = 2(x_1 x_2 + x_3 x_4 + · · · + x_{d−1} x_d).

We therefore have the following special cases:

p = d, m = 0: Euclidean space
min(p, m) > 0: Minkowski space
p = d − 1, m = 1: Lorentz space
p = m, d = 2p: Hyperbolic space
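The construction in the proof of Proposition 4.3.1 can be checked numerically. The sketch below (an illustration, not from the text) works in R2,2 and verifies that the Gram matrix of (f1, f2, f3, f4) consists of 2 × 2 hyperbolic blocks.

```python
import math

def b(x, y):
    """Bilinear form of R^{2,2}: signs (+, +, -, -)."""
    return x[0]*y[0] + x[1]*y[1] - x[2]*y[2] - x[3]*y[3]

p = 2
e = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
f = []
for i in range(p):
    f.append([(e[i][j] + e[p + i][j]) / math.sqrt(2) for j in range(4)])  # f_{2i-1}
    f.append([(e[i][j] - e[p + i][j]) / math.sqrt(2) for j in range(4)])  # f_{2i}

gram = [[round(b(u, v), 10) for v in f] for u in f]
print(gram)   # blocks (0 1; 1 0) down the diagonal
```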

It is convenient to have standard examples of the various regular quadratic spaces that can occur. Suppose that p and m are non-negative integers with p + m = d. Let Rp,m denote Rd equipped with the quadratic form

q(x) = ∑_{i=1}^{p} x_i^2 − ∑_{i=p+1}^{p+m} x_i^2.

Rp,m is called the standard regular quadratic space with dimension d and signature (p, m). We write Rd for Rd,0 , and call it standard Euclidean space; similarly, we use terms such as standard Minkowski space and standard Lorentz space. Finally, we write H2p for R2p with the hyperbolic quadratic form q(x) = 2(x1 x2 + · · · + x2p−1 x2p ); this is standard hyperbolic space. Exercises 1. Suppose that (E, q) is a quadratic space. Let N = {x ∈ E : b(x, y) = 0 for all y ∈ E}, and let F be any subspace of E complementary to N . Show that the restriction of q to F is non-singular. 2. Suppose that (E, q) is a regular quadratic space, that H is a hyperbolic subspace of E of maximal dimension and that G is any subspace of E complementary to H. Show that the restriction of q to G is either positive definite or negative definite. 3. Suppose that Zp is the field with p elements, where p = 4k − 1 is a prime number. Prove a result corresponding to Theorem 4.3.1 for a quadratic form on a finite-dimensional vector space over Zp .


4.4 Adjoint mappings

Suppose that (E, q) is a regular quadratic space, with associated bilinear form b. Then lb is an injective linear mapping of E into E′, and dim E = dim E′, and so lb is an isomorphism of E onto E′. This enables us to define the adjoint of a linear mapping from E into a quadratic space.

Theorem 4.4.1 Suppose that T is a linear mapping from a regular quadratic space (E1, q1) into a quadratic space (E2, q2). Then there exists a unique linear mapping T^a from E2 to E1 such that b2(T(x), y) = b1(x, T^a(y)) for x ∈ E1, y ∈ E2, where b1 and b2 are the associated bilinear forms.

Proof l_{b1} is an isomorphism of E1 onto E1′. Let T^a = l_{b1}^{−1} T′ l_{b2}. Then it is easy to verify that b2(T(x), y) = b1(x, T^a(y)) for x ∈ E1, y ∈ E2. If S is another linear mapping with this property then b1(x, T^a(y)) = b1(x, S(y)) for all x ∈ E1, y ∈ E2. Since b1 is non-singular, T^a(y) = S(y) for all y ∈ E2, and so T^a = S. Thus T^a is unique.

T^a is called the adjoint of T. The mapping T → T^a is a linear mapping from L(E1, E2) into L(E2, E1). If (E2, q2) is also regular then T^{aa} = (T^a)^a = T, and so the mapping T → T^a is an isomorphism of L(E1, E2) onto L(E2, E1). Note also that if S ∈ L(E2, E3) and T ∈ L(E1, E2), where (E1, q1), (E2, q2) and (E3, q3) are regular quadratic spaces, then (ST)^a = T^a S^a.

Suppose that (E, q) is a regular quadratic space with associated bilinear form b, and with standard orthogonal basis (e1, . . . , ed), and suppose that (φ1, . . . , φd) is the dual basis. Then lb(ei) = q(ei)φi, so that care is needed with signs. Thus we have the following proposition.

Proposition 4.4.1 Suppose that (E1, q1) and (E2, q2) are regular quadratic spaces with standard orthogonal bases (e1, . . . , ed) and (f1, . . . , fe) respectively. Suppose that T ∈ L(E1, E2) and that T is represented by the matrix (tij) with respect to these bases. Then T^a is represented by the matrix (t^a_{ji}), where t^a_{ji} = q1(ej) q2(fi) tij.

Corollary 4.4.1 If T ∈ L(E) then det T^a = det T.
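Concretely, with standard orthogonal bases the defining identity b2(T(x), y) = b1(x, T^a(y)) determines T^a entry by entry, as in Proposition 4.4.1. A small check (an illustration, not from the text; the matrix T and the test vectors are arbitrary):

```python
# Diagonal Gram entries q1(e_j) and q2(f_i) of two copies of R^{1,1}:
B1 = [1, -1]
B2 = [1, -1]
T = [[2.0, 3.0],
     [5.0, 7.0]]          # an arbitrary map (E1, q1) -> (E2, q2)

# Proposition 4.4.1: t^a_{ji} = q1(e_j) * q2(f_i) * t_{ij}
Ta = [[B1[j] * B2[i] * T[i][j] for i in range(2)] for j in range(2)]

def b(B, x, y):
    return sum(B[k] * x[k] * y[k] for k in range(2))

def apply(M, x):
    return [sum(M[i][k] * x[k] for k in range(2)) for i in range(2)]

x, y = [1.0, 2.0], [3.0, -1.0]
print(b(B2, apply(T, x), y), b(B1, x, apply(Ta, y)))   # the two sides agree
```

The same data also illustrates Corollary 4.4.1: the determinants of T and T^a coincide.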


The matrix (t^a_{ij}) is called the adjoint matrix of (tij). It is the same as the transposed matrix if and only if (E1, q1) and (E2, q2) are both Euclidean, or q1 and q2 are both negative definite.

Exercise

1. We can use the adjoint to consider the relation between creation and annihilation operators. Let q be any positive definite quadratic form on E, with corresponding bilinear form b and linear mapping lb : E → E′. If x ∈ E, let δ_x = δ_{lb(x)}. Let (e1, . . . , ed) be a standard orthogonal basis for E. We can define a quadratic form q̃ on ⋀E by taking the vectors e_{j1} ∧ · · · ∧ e_{jk}, with 1 ≤ j1 < · · · < jk ≤ d, as an orthonormal basis. Show that m_x^a = δ_x, where m_x is the creation operator. Note that this gives another proof that δ_x^2 = 0.

4.5 Isotropy

In this section we shall suppose that (E, q) is a regular quadratic space of dimension d and signature (p, m), with associated bilinear form b. An element x of (E, q) is isotropic if q(x) = 0 and is anisotropic otherwise. We write Iso(E, q) for the set of isotropic elements of E and An(E, q) for the set of anisotropic elements. Iso(E, q) is a linear subspace of E only in the trivial cases when q = 0 or when (E, q) is positive definite or negative definite; An(E, q) is never a linear subspace of E, since 0 is isotropic. A linear subspace U of E is totally isotropic if it is contained in Iso(E, q).

Proposition 4.5.1 A linear subspace U of a regular quadratic space (E, q) is totally isotropic if and only if U ⊆ U⊥.

Proof The condition is certainly sufficient. If U is totally isotropic then it follows from the polarization formula that the symmetric bilinear form b vanishes identically on U × U, and so U ⊆ U⊥.

Proposition 4.5.2 If U is a totally isotropic subspace of a regular quadratic space E then dim U ≤ w = min(p, m).

Proof Let (e1, . . . , ed) be a standard orthogonal basis for E, and let P = span(e1, . . . , ep). Since q is positive definite on P, P ∩ U = {0}. Thus p + dim U = dim P + dim U ≤ d, and so dim U ≤ d − p = m. Similarly dim U ≤ p.
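The remark that Iso(E, q) is generally not a linear subspace can be checked in two lines (an illustration, not from the text, in R1,1):

```python
def q(x):
    """Quadratic form of R^{1,1}."""
    return x[0]**2 - x[1]**2

u, v = (1, 1), (1, -1)               # both isotropic
s = (u[0] + v[0], u[1] + v[1])       # their sum
print(q(u), q(v), q(s))              # the sum is anisotropic
```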


Recall that w = min(p, m) is the Witt index of q. If (e1, . . . , ed) is a standard orthogonal basis for (E, q), and if

U = span(e1 + ep+1, . . . , ew + ep+w),  W = span(e1 − ep+1, . . . , ew − ep+w),

then U and W are both totally isotropic subspaces of E. Thus the Witt index is the best possible upper bound in Proposition 4.5.2. Note also that U + W = U ⊕ W is a hyperbolic subspace of E. This is quite typical, as the next result shows.

Proposition 4.5.3 Suppose that U is a totally isotropic subspace of a regular quadratic space (E, q). Then there exists a totally isotropic space W, with dim W = dim U, such that U + W = U ⊕ W and U ⊕ W is hyperbolic.

Proof Let (u1, . . . , ur) be a basis for U. We show by induction that there exist vectors (w1, . . . , wr) such that b(uj, wj) = 1 for 1 ≤ j ≤ r, b(ui, wj) = 0 for 1 ≤ i, j ≤ r and i ≠ j, and b(wi, wj) = 0 for 1 ≤ i, j ≤ r. Suppose that we have found (w1, . . . , ws) satisfying these conditions, where 0 ≤ s ≤ r. Let Ws = span(w1, . . . , ws). Suppose that u = ∑_{j=1}^{s} αj wj ∈ U ∩ Ws. Then αj = b(uj, u) = 0 for 1 ≤ j ≤ s, and so u = 0. Thus U ∩ Ws = {0}, so that U + Ws = U ⊕ Ws. Further, if w = ∑_{j=1}^{s} αj wj = 0, then b(uj, w) = αj = 0, so that the vectors w1, . . . , ws are linearly independent, and (w1, . . . , ws) is a basis for Ws.

Now let Us+1 = span(u1, . . . , ǔs+1, . . . , ur), where the term us+1 is omitted: Us+1 is an (r − 1)-dimensional subspace of E. Then Us+1 ⊕ Ws is a proper subspace of U ⊕ Ws, and so (U ⊕ Ws)⊥ is a proper subspace of (Us+1 ⊕ Ws)⊥. Pick v ∈ (Us+1 ⊕ Ws)⊥ \ (U ⊕ Ws)⊥. Then b(v, uj) = 0 for j ≠ s + 1, and b(v, w) = 0 for w ∈ Ws. Since v ∉ (U ⊕ Ws)⊥, b(v, us+1) ≠ 0. Set v′ = v/b(v, us+1), so that b(v′, us+1) = 1. We now take ws+1 = v′ − b(v′, v′)us+1/2. Then

b(ws+1, ws+1) = b(v′, v′) − b(v′, v′)b(v′, us+1) + b(v′, v′)^2 b(us+1, us+1)/4 = 0,

and, by the construction, all the other conditions are also satisfied. Thus the induction is established.

We now take W = Wr. Then U + W = U ⊕ W, W is totally isotropic, and (u1, w1, . . . , ur, wr) is a hyperbolic basis for U ⊕ W.


4.6 Isometries and the orthogonal group

We now consider structure-preserving linear mappings. Suppose that (E1, q1) and (E2, q2) are quadratic spaces, with associated symmetric bilinear forms b1 and b2. A linear mapping T from E1 to E2 is an isometry if

(i) T is injective, and
(ii) q2(T(x)) = q1(x) for all x in E1.

By the polarization formula, (ii) is equivalent to

(iii) b2(T(x), T(y)) = b1(x, y) for all x, y in E1.

Note that an isometry has to be injective. If (E, q) is regular, this is a consequence of (ii) (or (iii)), but if q is singular then this is not the case.

As an easy example, which we shall use later, suppose that (E1, q1) has signature (p1, m1), that (E2, q2) has signature (p2, m2), and that p1 ≤ p2, m1 ≤ m2 and n1 = d1 − p1 − m1 ≤ n2 = d2 − p2 − m2. Let (e1, . . . , ed1) be a standard orthogonal basis for (E1, q1), and let (f1, . . . , fd2) be a standard orthogonal basis for (E2, q2). If x = ∑_{j=1}^{d1} αj ej, let

T(x) = ∑_{j=1}^{p1} α_j f_j + ∑_{j=1}^{m1} α_{p1+j} f_{p2+j} + ∑_{j=1}^{n1} α_{p1+m1+j} f_{p2+m2+j}.

Then T is an isometry. The composition of two isometries is an isometry.

Suppose now that (E, q) is a quadratic space. Then an isometry T of (E, q) into itself is called an orthogonal mapping. It is necessarily surjective, and its inverse T−1 is also an isometry. The orthogonal mappings of E onto itself form a group, the orthogonal group O(E, q). When (E, q) = Rp,m, we write O(p, m) for O(E, q). Since O(E, q) = O(E, −q), the groups O(p, m) and O(m, p) are naturally isomorphic. When (E, q) = Rd, we call it the Euclidean group O(d); otherwise it is called the Minkowski group. Similarly, when m = 1 it is called the Lorentz group, and when p = m it is called the hyperbolic group.

Proposition 4.6.1 Suppose that T ∈ L(E), where (E, q) is a regular quadratic space. Then the following are equivalent: (i) T ∈ O(E, q); (ii) T^a ∈ O(E, q); (iii) T^a T = T T^a = I.

Proof If T ∈ O(E, q) then

b(x, y) = b(T(x), T(y)) = b(x, T^a T(y)) for all x, y ∈ E,

so that T^a T = I. Thus T−1 = T^a, and so T T^a = I. Thus (i) implies (iii). Conversely if (iii) holds then b(x, y) = b(x, T^a T(y)) = b(T(x), T(y)) for all x, y ∈ E, so that (i) and (iii) are equivalent. Applying this equivalence to T^a, and using the fact that T = T^{aa}, we also obtain the equivalence of (ii) and (iii).

Corollary 4.6.1 (Orthogonality relations) Suppose that (E, q) is a regular quadratic space with standard orthogonal basis (e1, . . . , ed). Suppose that T ∈ L(E) and that T is represented by the matrix (tij) with respect to this basis. Then the following are equivalent:

(i) T ∈ O(E, q);
(ii) ∑_{i=1}^{d} q(e_i) t_{ij} t_{ik} = δ_{jk} q(e_j) for 1 ≤ j ≤ d, 1 ≤ k ≤ d;
(iii) ∑_{i=1}^{d} q(e_i) t_{ji} t_{ki} = δ_{jk} q(e_j) for 1 ≤ j ≤ d, 1 ≤ k ≤ d,

where δjk = 1 if j = k and δjk = 0 otherwise.

Corollary 4.6.2 If T ∈ O(E, q) then det T = ±1.
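In matrix terms the orthogonality relations say that Tᵀ G T = G, where G is the diagonal Gram matrix with entries q(e_j). A numerical check (an illustration, not from the text; the boost parameter t is arbitrary) for a hyperbolic rotation in O(1, 1):

```python
import math

G = [[1, 0], [0, -1]]                       # Gram matrix of R^{1,1}
t = 1.3                                     # arbitrary boost parameter
T = [[math.cosh(t), math.sinh(t)],
     [math.sinh(t), math.cosh(t)]]          # an element of SO(1, 1)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Tt = [[T[j][i] for j in range(2)] for i in range(2)]    # transpose
M = mul(mul(Tt, G), T)                      # equals G when T is orthogonal
det = T[0][0]*T[1][1] - T[0][1]*T[1][0]     # +1 or -1, as in Corollary 4.6.2
print([[round(v, 10) for v in row] for row in M], round(det, 10))
```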

The subgroup SO(E, q) = {T ∈ O(E, q) : det T = 1} is called the special orthogonal group. Let T(∑_{j=1}^{d} αj ej) = ∑_{j=1}^{d−1} αj ej − αd ed. Then T ∈ O(E, q) and det T = −1, and so SO(E, q) is a proper subgroup of O(E, q). Since SO(E, q) is the kernel of the homomorphism det, which maps O(E, q) onto {1, −1}, it has index 2 in O(E, q), and we have a short exact sequence

1 → SO(E, q) → O(E, q) → D2 → 1,

where the first map is the inclusion and the second is det. Thus if T ∈ O(E, q) \ SO(E, q) then

O(E, q) = SO(E, q) ∪ T.SO(E, q) = SO(E, q) ∪ SO(E, q).T.

Proposition 4.6.2 Suppose that (E, q) is a regular quadratic space, that T ∈ O(E, q) and that F is a regular subspace of E. Then T is an isometry of F onto T(F) and is an isometry of F⊥ onto (T(F))⊥.

Proof Clearly T is an isometry of F onto T(F) and is an isometry of F⊥ onto T(F⊥). We must show that T(F⊥) = (T(F))⊥. If x ∈ F and y ∈ F⊥ then b(T(x), T(y)) = b(x, y) = 0; since this holds for all x ∈ F, T(y) ∈ (T(F))⊥, and T(F⊥) ⊆ (T(F))⊥. Conversely if z ∈ (T(F))⊥


then z = T(w) for some w ∈ E; if x ∈ F then b(x, w) = b(T(x), z) = 0; since this holds for all x ∈ F, w ∈ F⊥, so that z ∈ T(F⊥), and (T(F))⊥ ⊆ T(F⊥).

Theorem 4.6.1 (Witt's extension theorem) Suppose that F is a linear subspace of a regular quadratic space (E, q), and that T : F → E is an isometry. Then there exists S ∈ O(E, q) which extends T; that is, S(x) = T(x) for all x ∈ F.

Proof First suppose that (F, q) is regular, so that E = F ⊕ F⊥. Then T(F) is regular, and so E = T(F) ⊕ (T(F))⊥. Since F and T(F) have the same signature, F⊥ and (T(F))⊥ also have the same signature. Thus, by the example above, there is an isometry R of F⊥ onto (T(F))⊥. If x = y + z ∈ F ⊕ F⊥, let S(x) = T(y) + R(z). Then S is an isometric extension of T.

Next suppose that F is not regular, and that it has signature (p, m). Let n = d − (p + m), where d = dim F. Let (e1, . . . , ed) be a standard orthogonal basis for F. Then F = G ⊕ H, where G = span(e1, . . . , ep+m) is regular, and H = span(ep+m+1, . . . , ed) is totally isotropic. H is a subspace of the regular space G⊥, and so by Proposition 4.5.3 there exists a totally isotropic subspace K of G⊥ such that H + K = H ⊕ K is hyperbolic. Similarly, there exists a totally isotropic subspace L of (T(G))⊥ such that T(H) + L = T(H) ⊕ L is hyperbolic. Suppose that (h1, . . . , hn) is a basis for H. Then there exists a basis (k1, . . . , kn) for K such that (h1, k1, . . . , hn, kn) is a hyperbolic basis for H ⊕ K. (T(h1), . . . , T(hn)) is a basis for T(H), and so there exists a basis (l1, . . . , ln) for L such that (T(h1), l1, . . . , T(hn), ln) is a hyperbolic basis for T(H) ⊕ L. If

x = y + ∑_{j=1}^{n} αj hj + ∑_{j=1}^{n} βj kj ∈ G ⊕ H ⊕ K = F ⊕ K,

let

T̃(x) = T(y) + ∑_{j=1}^{n} αj T(hj) + ∑_{j=1}^{n} βj lj ∈ T(F) ⊕ L.

Then T̃ is an isometry of F ⊕ K onto T(F) ⊕ L. Since F ⊕ K = G ⊕ (H ⊕ K) is regular, T̃ can be extended to an element S of O(E, q).

4.7 The case d = 2

We now consider O(E, q) and SO(E, q) when E is a two-dimensional regular quadratic space.


The groups SO(2) and O(2)

We begin with SO(2). If T ∈ SO(2) then q(T(e1)) = 1, so that T(e1) = (cos θ, sin θ), for some unique θ ∈ [0, 2π). Since q(T(e2)) = 1, b(T(e1), T(e2)) = 0 and det T = 1, it follows that T(e2) = (−sin θ, cos θ). T is a rotation Rθ of R2 through an angle θ. Simple calculations show that Rθ Rφ = Rψ, where ψ = θ + φ (mod 2π), so that the mapping θ → Rθ is an isomorphism of the circle group T onto SO(2). In particular, this shows that SO(2) is a compact path-connected subset of L(R2). Note that Rπ = −I.

Next, suppose that T ∈ O(2) \ SO(2). As before, T(e1) = (cos θ, sin θ), for some unique θ ∈ [0, 2π), but since det T = −1 it follows that T(e2) = (sin θ, −cos θ). T has eigenvalues 1 and −1, with corresponding eigenvectors (cos θ/2, sin θ/2) and (−sin θ/2, cos θ/2), so that T is an orthogonal reflection Sθ of the plane, leaving the line {(λ cos θ/2, λ sin θ/2) : λ ∈ R} fixed. S0 is the reflection given by the matrix

U = ( 1   0 )
    ( 0  −1 ),

and O(2) = SO(2) ∪ S0(SO(2)). Since Rθ S0 Rθ = S0, it follows that O(2) is isomorphic to the full dihedral group D.

The groups SO(1, 1) and O(1, 1)

The quadratic space R1,1 is the simplest example of a hyperbolic space. Here things are more complicated, essentially because a real hyperbola has two connected components. We can consider either the standard orthogonal basis (e1, e2) or a standard hyperbolic basis (f1, f2). We begin with the former.

Suppose that T ∈ SO(1, 1). Since q(T(e1)) = 1, either T(e1) = (cosh t, sinh t) or T(e1) = (−cosh t, −sinh t) for some t ∈ R. We begin with the former case. Since q(T(e2)) = −1, and det T = 1, T(e2) = (sinh t, cosh t). We denote this operator by Tt. Tt is a linear operator on R2. For λ > 0, it maps each branch

Cλ+ = {(x, y) : x^2 − y^2 = λ, x ≥ 0} and Cλ− = {(x, y) : x^2 − y^2 = λ, x ≤ 0}

of the hyperbola Cλ = {(x, y) : x^2 − y^2 = λ} into itself, and for λ < 0 maps each branch

Cλ+ = {(x, y) : x^2 − y^2 = λ, y ≥ 0} and Cλ− = {(x, y) : x^2 − y^2 = λ, y ≤ 0}

of Cλ into itself. Again, Ts Tt = Ts+t, so that the mapping t → Tt is an isomorphism of (R, +) onto a path-connected subgroup SOc(1, 1) of SO(1, 1).

Next suppose that T ∈ SO(1, 1) and that T(e1) = (−cosh t, −sinh t). Since det T = 1 it follows that T(e2) = (−sinh t, −cosh t), and that T = −Tt; T maps Cλ+ onto Cλ−, and Cλ− onto Cλ+. Thus SO(1, 1) = SOc(1, 1) ∪ −SOc(1, 1), and SO(1, 1) has two connected components, SOc(1, 1) and −SOc(1, 1). Let π(t) = T_{log t} for t > 0 and let π(t) = −T_{log |t|} for t < 0. Then π is an isomorphism of the multiplicative group (R∗, ×) onto SO(1, 1).

We now consider O(1, 1). The mapping S0, which maps (x1, x2) to (x1, −x2), is in O(1, 1) and has determinant −1, so that O(1, 1) = SO(1, 1) ∪ S0.SO(1, 1). Consequently O(1, 1) has four connected components, each homeomorphic to R; SOc(1, 1) is the connected component to which the identity belongs, and is a normal subgroup of O(1, 1). Thus we have the following short exact sequence:

1 → (R, +) → O(1, 1) → D2 × D2 → 1.

Easy calculations show that S0 Tt S0 = T−t, so that (S0 Tt)^2 = I, and similarly (−S0 Tt)^2 = I, so that every element of O(1, 1) \ SO(1, 1) is a reflection. The line {(λ cosh(t/2), −λ sinh(t/2)) : λ ∈ R} is fixed by S0 Tt, and S0 Tt(sinh(t/2), −cosh(t/2)) = −(sinh(t/2), −cosh(t/2)). Since Tt = S0(S0 Tt) and −Tt = S0(S0(−Tt)), every element of SO(1, 1) is the product of two reflections in O(1, 1).

Next, we consider the hyperbolic basis (f1, f2). Then Iso(R1,1) = span(f1) ∪ span(f2). Suppose that T ∈ O(1, 1). Since q(f1) = 0, it follows that q(T(f1)) = 0, so that either T(f1) = af1 or T(f1) = af2, where a ∈ R∗. In the former case, T(f2) = f2/a, since q(T(f2)) = q(f2) = 0 and b(T(f1), T(f2)) = b(f1, f2) = 1. Thus T is represented by the matrix

( a   0  )
( 0  1/a ),

which has determinant 1. Thus in this case T ∈ SO(1, 1). In the latter case, where T(f1) = af2, it follows that T(f2) = f1/a; T is represented by the matrix

( 0  1/a )
( a   0  ),

which has determinant −1, and so T ∈ O(1, 1) \ SO(1, 1). Consequently the mapping which sends a to the diagonal matrix with entries a and 1/a establishes an isomorphism of (R∗, ×) onto SO(1, 1), and O(1, 1) \ SO(1, 1) = T1.SO(1, 1) = SO(1, 1).T1, where T1(x1 f1 + x2 f2) = x2 f1 + x1 f2.

Exercise

1. Let G be the group of homeomorphisms of R generated by the set of dilations {Dλ : λ > 0} (where Dλ(t) = λt), negation N (where N(t) = −t) and inversion σ (where σ(t) = 1/t). Show that G is isomorphic to O(1, 1).
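The isomorphism of (R∗, ×) onto SO(1, 1) is easy to see concretely in the hyperbolic basis. A small sketch (an illustration, not from the text; the vector x and the parameters are arbitrary):

```python
def q(x):
    """Hyperbolic form in the basis (f1, f2): q(x1 f1 + x2 f2) = 2 x1 x2."""
    return 2 * x[0] * x[1]

def T(a, x):
    """The map T(f1) = a f1, T(f2) = f2 / a."""
    return (a * x[0], x[1] / a)

x = (3.0, 5.0)
print(q(x), q(T(2.0, x)))                  # the form is preserved
print(T(2.0, T(4.0, x)) == T(8.0, x))      # parameters multiply: (R*, x) acts
```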

4.8 The Cartan-Dieudonné theorem

We have seen that every element of O(2) and every element of O(1, 1) is either a reflection, or the product of two reflections. Can we extend this result to all regular quadratic spaces? First we must define the reflections that we should consider.

Suppose that x is an anisotropic vector in a regular quadratic space (E, q). Then span(x) is regular, and E = span(x) ⊕ x⊥. Suppose that y ∈ E, and that y = λx + z, with z ∈ x⊥. Let ρx(y) = −λx + z. Then ρx is an injective linear mapping, and q(ρx(y)) = λ^2 q(x) + q(z) = q(y), so that ρx is an isometry. ρx(x) = −x, and ρx(z) = z for z ∈ x⊥. Also ρx^2 = I (ρx is an involution), so that ρx is surjective. The mapping ρx is called the simple reflection in the direction x, with mirror x⊥. Note that det ρx = −1.
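Since λ = b(x, y)/q(x) in the decomposition y = λx + z, the simple reflection has the closed form ρx(y) = y − (2 b(x, y)/q(x)) x. The sketch below (an illustration, not from the text; the space R2,1 and the vectors are arbitrary choices) checks the stated properties numerically:

```python
def b(x, y):
    """Bilinear form of R^{2,1} (an arbitrary regular quadratic space)."""
    return x[0]*y[0] + x[1]*y[1] - x[2]*y[2]

def rho(x, y):
    """Simple reflection in the direction x: y - (2 b(x,y)/q(x)) x."""
    c = 2 * b(x, y) / b(x, x)
    return tuple(y[i] - c * x[i] for i in range(3))

x = (1.0, 1.0, 1.0)          # anisotropic: q(x) = 1
y = (2.0, -1.0, 3.0)         # arbitrary test vector
print(b(y, y), b(rho(x, y), rho(x, y)))   # q is preserved
print(rho(x, rho(x, y)))                  # involution: back to y
print(rho(x, x))                          # x goes to -x
```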


Theorem 4.8.1 (Cartan-Dieudonné) If T is an isometry of a regular quadratic space (E, q), then T is the product of at most dim E simple reflections.

Proof The proof is by induction on the dimension d of E. The result is clearly true when d = 1, and we have seen that it is true when d = 2. Suppose that dim E = d ≥ 3, and that the result is true for spaces of dimension less than d. Let us set S = I − T, and let N(S) be the null-space of S: N(S) is the set of vectors fixed by T. We consider three cases.

Suppose first that N(S) is not totally isotropic, and that x is an anisotropic element of N(S). We can then write E = span(x) ⊕ F, where F = x⊥, and T maps F isometrically onto itself. Let TF be the restriction of T to F. By the inductive hypothesis, we can write TF = σxr . . . σx1, where each σxi is a simple reflection of F in the direction xi and r ≤ dim F = d − 1. But then it is easy to see that T = ρxr . . . ρx1, where each ρxi is the simple reflection of E in the direction xi.

Secondly, suppose that there exists an anisotropic vector x such that y = S(x) is anisotropic. Then b(T(x) + x, T(x) − x) = q(T(x)) − q(x) = 0; since T(x) − x = −y, it follows that T(x) + x ∈ y⊥. Thus

ρy(T(x)) = ρy((T(x) − x)/2) + ρy((T(x) + x)/2) = (T(x) + x)/2 − (T(x) − x)/2 = x,

and so by the first case ρy T is the product of at most d − 1 simple reflections. But then T = ρy(ρy T) is the product of at most d simple reflections. (Notice that if q is positive definite (or negative definite) then at least one of these two cases must occur, and the proof is finished.)

Thirdly, suppose that neither of the two conditions is satisfied. We show that this can only happen in very special circumstances. First we show that S(E) is totally isotropic. By hypothesis, S(x) is isotropic if x is anisotropic; it is therefore sufficient to show that if x is a non-zero isotropic vector then q(S(x)) = 0. Since dim x⊥ = d − 1 > d/2, x⊥ is not totally isotropic, and so there exists an anisotropic vector u ∈ x⊥. Now q(x + u) = q(x − u) = q(u) ≠ 0, and so, since the second condition is not satisfied,

q(S(x + u)) = q(S(x − u)) = q(S(u)) = 0.


Now q(S(x + u)) + q(S(x − u)) = 2q(S(x)) + 2q(S(u)) = 2q(S(x)), and so S(x) is isotropic. Thus S(E) and N(S) are both totally isotropic. Consequently, dim S(E) ≤ w and dim N(S) ≤ w, where w is the Witt index of (E, q). But by the rank-nullity formula, dim(S(E)) + dim(N(S)) = d ≥ 2w. Thus dim S(E) = dim N(S) = w = d/2, so that d is even, and (E, q) is a hyperbolic space. In particular, dim(N(S)⊥) = d − dim N(S) = w = dim N(S), and so N(S) = N(S)⊥.

Now suppose that z = S(x) ∈ S(E) and that y ∈ N(S). Then T(y) = y, and so

b(z, y) = b(x, y) − b(T(x), y) = b(x, y) − b(T(x), T(y)) = 0.

Thus S(E) ⊆ N(S)⊥ = N(S), and so S^2 = 0. Now let (e1, . . . , ew) be a basis for N(S), and extend this to a basis (e1, . . . , ed) of E. With respect to this basis, S is given by a matrix of the form

( 0  M )
( 0  0 ),

and so T = I − S is given by a matrix of the form

( I  −M )
( 0   I ).

This means that det T = 1. Now let y be any anisotropic vector in E. Then ρy T is an isometry, and det ρy T = −1. This means that ρy T is not in this final case, and so must be in one of the first two cases. Thus ρy T is the product of at most d = 2w simple reflections. But since det ρy T = −1, ρy T must be the product of an odd number of simple reflections, and is therefore the product of at most d − 1 simple reflections. Thus T = ρy(ρy T) is the product of at most d simple reflections.

Corollary 4.8.1
(i) If dim E is odd and T ∈ SO(E, q), then there exists a non-zero x with T(x) = x.
(ii) If dim E is odd and T ∈ O(E, q) \ SO(E, q), then there exists a non-zero x with T(x) = −x.
(iii) If dim E is even and T ∈ O(E, q) \ SO(E, q), then there exists a non-zero x with T(x) = x.


Proof (i) T = ρxr . . . ρx1 is the product of an even number r of simple reflections, and r < dim E. Let Mi be the mirror for ρxi; then dim Mi = dim E − 1, and so dim(∩_{i=1}^{r} Mi) ≥ dim E − r > 0. If x is a non-zero element of ∩_{i=1}^{r} Mi, T(x) = x.
(ii) −T ∈ SO(E, q), and so the result follows from (i).
(iii) Since T ∈ O(E, q) \ SO(E, q), T is the product of an odd number s of simple reflections, and s < dim(E). Now argue as in (i).

As we have observed, if T ∈ SO(E, q) then T is the product of an even number of simple reflections. We group these in pairs: we call the product of two simple reflections a simple rotation. Thus every element of SO(E, q) is the product of at most d/2 simple rotations.

Let us consider simple rotations in more detail. Suppose that R = ρx ρy is a simple rotation. Let F = span(x, y). Then F is a regular subspace of E, and E = F ⊕ F⊥. R(z) = z for z ∈ F⊥. If x and y are linearly dependent, then R = I; otherwise the restriction RF of R to F is a rotation of the two-dimensional space F. Let (e1, e2) be an orthogonal basis for F. If (F, q) is positive definite or negative definite then RF = Rθ, a rotation through an angle θ for some 0 < θ < 2π. If (F, q) is hyperbolic, then RF = Tt or −Tt, for some t ∈ R. The next result is an immediate consequence.

Theorem 4.8.2 If (E, q) is a Euclidean space, then SO(E, q) is path-connected.
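In two dimensions the theorem can be checked directly: the reflections in lines through the origin at angles a and b compose to the rotation through 2(a − b). A numerical illustration (not from the text; the angle theta is arbitrary):

```python
import math

def refl(phi):
    """Matrix of the orthogonal reflection in the line at angle phi."""
    c, s = math.cos(2 * phi), math.sin(2 * phi)
    return [[c, s], [s, -c]]

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = 1.1                               # arbitrary rotation angle
P = mul(refl(theta / 2), refl(0.0))       # product of two simple reflections
R = rot(theta)
err = max(abs(P[i][j] - R[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)                        # the product is the rotation R_theta
```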

Exercises

1. Suppose that Sθ(z) = e^{iθ} z̄ is a reflection of C. Determine the set {w ∈ C : Sθ = ρw}, and determine the mirror of this reflection.
2. Suppose that T ∈ O(E, q), and that F is a regular subspace of E such that T(F) ⊆ F. Show that T(F⊥) = F⊥.
3. Suppose that T ∈ L(E), where E is a d-dimensional real vector space.
(a) Show that there is a non-zero polynomial p of minimal degree such that p(T) = 0.
(b) Show that any real polynomial can be written as a product of real polynomials of degree 1 or 2.
(c) Show that there is a linear subspace F of E of dimension 1 or 2 such that T(F) ⊆ F.
(d) Suppose that (E, q) is a Euclidean space, and that T ∈ SO(E, q). Show that there exists an orthogonal basis for E with respect to which T is represented by a block-diagonal matrix diag(T1, . . . , Tk, Ir), where

Tj = ( cos tj  −sin tj )
     ( sin tj   cos tj )   for 1 ≤ j ≤ k,

Ir is an r × r unit matrix and 2k + r = d.
4. Find necessary and sufficient conditions for two simple reflections to commute.

4.9 The groups SO(3) and SO(4)

The quaternions can be used to obtain double covers of the groups SO(3) and SO(4). These were described by Cayley, who ascribed the first of them to Hamilton. These double covers will reappear in Chapter 8, when we consider spin groups.

The subspace (Pu(H), ∆) of H is a Euclidean space, isometrically isomorphic to R3. We can therefore identify SO(Pu(H), ∆) with SO(3).

Theorem 4.9.1 Suppose that x ∈ H∗. Let ρx(y) = xyx−1 for y ∈ Pu(H). Then ρ is a homomorphism of the multiplicative group H∗ onto SO(3), with kernel R∗.

Proof First we show that ρx(y) ∈ Pu(H):

(ρx(y))^2 = x y^2 x−1 = −∆(y) x x−1 = −∆(y) ≤ 0,

so that this follows from Proposition 2.3.1. Thus ρx ∈ L(Pu(H)). Further, ∆(ρx(y)) = ∆(x)∆(y)∆(x−1) = ∆(y)∆(xx−1) = ∆(y), so that ρx ∈ O(3). Thus det ρx = ±1. But H∗ is connected, and the function x → det ρx is continuous on H∗, and so det ρx is constant on H∗. Since det ρ1 = det I = 1, it follows that det ρx = 1 for all x ∈ H∗, and so ρ is a homomorphism of H∗ into SO(3).


If x ∈ H∗ then ρx = I if and only if x ∈ Z(H∗) = R∗, and so R∗ is the kernel of ρ. It remains to show that ρ(H∗) = SO(3). Suppose that x ∈ Pu(H) and that x ≠ 0. Then ρx(x) = x, and if y ∈ x⊥ then ρx(y) = xyx−1 = −yxx−1 = −y. Thus −ρx is a simple reflection in the direction x, with mirror x⊥. If T ∈ SO(3) then T is the product of two simple reflections, and so there exist non-zero x1, x2 in Pu(H) such that T = (−ρx1)(−ρx2) = ρx1x2, and T ∈ ρ(H∗). Thus we have a short exact sequence

1 → R∗ → H∗ → SO(3) → 1.

Since ρx = ρλx for λ ∈ R∗, ρ(x) = ρ(x1) = ρ(−x1), where x1 = x/‖x‖ ∈ H1. We therefore also have the following short exact sequence:

1 → D2 → H1 → SO(3) → 1;

H1 is a double cover of SO(3).

Here is a similar result, concerning SO(4). The quadratic space (H, ∆) is a Euclidean space, isometrically isomorphic to R4, and so we identify SO(H, ∆) and SO(4).

Theorem 4.9.2 Let φ(x,y)(z) = xzy−1, for (x, y) ∈ H1 × H1 and z ∈ H. Then φ is a homomorphism of H1 × H1 onto SO(4), with kernel {(1, 1), (−1, −1)}.

Proof Since ‖xzy−1‖ = ‖x‖·‖z‖·‖y−1‖ = ‖z‖, φ(x,y) ∈ O(4). Thus det φ(x,y) = ±1. The group H1 × H1 is connected, and so the argument of Theorem 4.9.1 shows that φ(x,y) ∈ SO(4). Thus φ is a homomorphism of H1 × H1 into SO(4).

If (x, y) is in the kernel of φ, then xy−1 = φ(x,y)(1) = 1, so that x = y. Then xzx−1 = z for all z ∈ H, so that x ∈ Z(H) = R. Since ‖x‖ = 1, x = 1 or −1. Since φ(1,1) = φ(−1,−1), the kernel of φ is {(1, 1), (−1, −1)}.

It remains to show that φ is surjective. Suppose that T ∈ SO(4). Let b = T(1). Since ‖b‖ = 1, b ∈ H1. Thus φ(1,b)T(1) = 1. Since φ(1,b) ◦ T ∈ SO(4), it follows that φ(1,b)T(Pu(H)) = Pu(H), and the restriction of φ(1,b) ◦ T to Pu(H) is in SO(Pu(H), ∆). By Theorem 4.9.1, there exist x1 and x2 in H1 so that φ(1,b)T = ρx1 ρx2 = φ(x1x2, x1x2). Consequently, T = φ(y,z), where y = x1x2 and z = b−1 x1x2.


Thus we have a short exact sequence

1 → D2 → H1 × H1 → SO(4) → 1;

H1 × H1 is a double cover of SO(4).
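The homomorphism ρ of Theorem 4.9.1 can be checked numerically. The sketch below (an illustration, not from the text; the parameter t is arbitrary) implements quaternion multiplication, builds the matrix of ρx for the unit quaternion x = cos t + (sin t)i, and verifies that it lies in SO(3) and that ρx = ρ−x, as the kernel {1, −1} in H1 requires.

```python
import math

def qmul(a, b):
    """Quaternion product, components (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rho(x, v):
    """Conjugate the pure quaternion v by the unit quaternion x."""
    xbar = (x[0], -x[1], -x[2], -x[3])
    return qmul(qmul(x, (0.0,) + v), xbar)[1:]

t = 0.6                                           # arbitrary parameter
x = (math.cos(t), math.sin(t), 0.0, 0.0)          # x = cos t + (sin t) i
e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
R = [rho(x, v) for v in e]                        # images of the basis vectors

norms = [sum(c * c for c in v) for v in R]        # all 1: rho_x preserves length
det = (R[0][0]*(R[1][1]*R[2][2] - R[1][2]*R[2][1])
     - R[0][1]*(R[1][0]*R[2][2] - R[1][2]*R[2][0])
     + R[0][2]*(R[1][0]*R[2][1] - R[1][1]*R[2][0]))
minus_x = tuple(-c for c in x)
ok = all(abs(rho(x, v)[k] - rho(minus_x, v)[k]) < 1e-12
         for v in e for k in range(3))            # rho_x = rho_{-x}
print(norms, det, ok)
```

For this x the rotation fixes the i-axis and rotates the j-k plane through 2t, which is exactly the double-cover phenomenon: x and −x give the same rotation.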

4.10 Complex quadratic forms

Our principal concern is with real quadratic spaces. Here, we briefly consider what happens in the complex case. Suppose then that q is a quadratic form on a complex vector space E; that is, there exists a symmetric complex bilinear form b on E such that q(x) = b(x, x), for x ∈ E. The polarization equation holds, and so b is uniquely determined by q. Again, we say that (E, q) is regular if b is non-singular. Since every complex number has a square root, following through the proof of Theorem 4.3.1 we obtain the following:

Theorem 4.10.1 Suppose that b is a symmetric bilinear form on a complex vector space E. There exists a basis (e1, . . . , ed) such that if b is represented by the matrix B = (bij) then bii = 1 for 1 ≤ i ≤ r and bij = 0 otherwise, where r is the rank of b.

If d > 1 then (E, q) contains isotropic subspaces, even when q is regular; for span(e1 + ie2) is isotropic. In particular, if (E, q) is regular and d = 2p is even, then

( (e1 + ie2)/√2, (e1 − ie2)/√2, (e3 + ie4)/√2, (e3 − ie4)/√2, . . . , (e2p−1 + ie2p)/√2, (e2p−1 − ie2p)/√2 )

is a hyperbolic basis for (E, q).

The proofs of Proposition 4.5.3 and the Witt extension theorem carry over to the complex case, as does the proof of the Cartan-Dieudonné theorem. Let O(E, q) denote the group of isometries of a regular complex quadratic space. If T ∈ O(E, q) then, by the Cartan-Dieudonné theorem, det T = 1 or −1; again, we denote by SO(E, q) the subgroup {T ∈ O(E, q) : det T = 1}. The product of two simple reflections is


again called a simple rotation. Suppose that R = ρx ρy is a simple rotation, and that x and y are linearly independent. Then F = span(x, y) is a regular two-dimensional R-invariant subspace, and R(z) = z for z ∈ F⊥. Let RF be the restriction of R to F. There are two possibilities. First, it may happen that RF has an anisotropic eigenvector u, with eigenvalue λ. Since q(u) = q(R(u)) = q(λu) = λ²q(u), λ = 1 or −1. Also u⊥ ∩ F is a one-dimensional R-invariant space, and so if v ∈ u⊥ ∩ F then R(v) = v or −v. Since det R = 1, RF = I or −I. The first possibility can be ruled out, since x and y are linearly independent, so that RF = −I. The second possibility is that every eigenvector of RF is isotropic. Let (f1, f2) be a hyperbolic basis for F. Then Iso(F, q) = span(f1) ∪ span(f2), so that f1 and f2 must be eigenvectors for RF. Suppose that RF(f1) = λf1 and that RF(f2) = µf2. Since det RF = 1, it follows that λµ = 1. It is readily verified that if this is the case, then RF is an isometry of F. Note that if λ = ±1 then µ = λ and RF = ±I, contradicting the fact that every eigenvector of RF is isotropic. Thus λ ∈ C∗ \ {1, −1}. Bearing in mind that every element of L(E) has an eigenvector, and that if F is a regular T-invariant subspace of E then F⊥ is a regular T-invariant subspace of E, we have the following.

Theorem 4.10.2 If (E, q) is a regular complex quadratic space and T ∈ O(E, q), then E = F ⊕ G ⊕ H is a direct sum of regular T-invariant subspaces of E, where F = {x ∈ E : T(x) = x},

G = {x ∈ E : T(x) = −x},

H has a hyperbolic basis (h1, h2, . . . , h_{2k−1}, h_{2k}), and there exist scalars λ1, . . . , λk in C∗ \ {1, −1} such that T(h_{2i−1}) = λi h_{2i−1} and T(h_{2i}) = λi⁻¹ h_{2i} for 1 ≤ i ≤ k. T ∈ SO(E, q) if and only if the dimension of G is even.
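The hyperbolic part of Theorem 4.10.2 can be illustrated numerically: on a hyperbolic plane with Gram matrix [0 1; 1 0], the map h1 → λh1, h2 → λ⁻¹h2 is an isometry of determinant 1. A sketch with an illustrative value of λ (not from the text):

```python
import numpy as np

lam = 2.5 + 1.0j                                 # any lambda in C* \ {1, -1}
T = np.diag([lam, 1 / lam])                      # T(h1) = lam h1, T(h2) = lam^{-1} h2
B = np.array([[0, 1], [1, 0]], dtype=complex)    # Gram matrix of a hyperbolic pair

# T is an isometry of (F, q): T^t B T = B (transpose, not conjugate transpose)
assert np.allclose(T.T @ B @ T, B)
assert np.isclose(np.linalg.det(T), 1)
```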

Exercise 1. In Theorem 4.10.2, let g_{2j−1} = (h_{2j−1} + h_{2j})/√2 and g_{2j} = i(h_{2j−1} − h_{2j})/√2, for 1 ≤ j ≤ k. Calculate T(g_i), for 1 ≤ i ≤ 2k.


4.11 Complex inner-product spaces

Besides complex quadratic spaces, we can consider complex inner-product spaces. An inner product on a complex vector space E is a mapping from E × E into C which satisfies:

(i) ⟨α1x1 + α2x2, y⟩ = α1⟨x1, y⟩ + α2⟨x2, y⟩ and ⟨x, β1y1 + β2y2⟩ = β̄1⟨x, y1⟩ + β̄2⟨x, y2⟩, for all x, x1, x2, y, y1, y2 in E and all complex α1, α2, β1, β2;
(ii) ⟨y, x⟩ is the complex conjugate of ⟨x, y⟩, for all x, y in E; and
(iii) ⟨x, x⟩ > 0 for all non-zero x in E.

For example, if E = C^d, we define the usual inner product by setting ⟨z, w⟩ = Σ_{j=1}^d z_j w̄_j for z = (z_j), w = (w_j). A complex vector space E equipped with an inner product is called an inner-product space. An inner product is sesquilinear: linear in the first variable, and conjugate-linear in the second variable. [A word of warning: this is the convention that most pure mathematicians use. Physicists use the other convention: conjugate-linear in the first variable and linear in the second variable.] The quantity ‖x‖ = ⟨x, x⟩^{1/2} is then a norm on E, and the function d : E × E → R defined by d(x, y) = ‖x − y‖ is a metric on E. If E is complete under this metric then E is called a Hilbert space. Any finite-dimensional inner-product space is complete; such a space is called a Hermitian space. Arguing as in Theorem 4.10.1, it follows that if (E, ⟨·, ·⟩) is an inner-product space then there exists an orthonormal basis for E; a basis (e1, . . . , ed) such that ⟨ej, ej⟩ = 1 for 1 ≤ j ≤ d and ⟨ei, ej⟩ = 0 for i ≠ j. In this case,

⟨Σ_{j=1}^d z_j e_j, Σ_{j=1}^d w_j e_j⟩ = Σ_{j=1}^d z_j w̄_j,
so that (E, ⟨·, ·⟩) is isometrically isomorphic to C^d, with its usual inner product ⟨x, y⟩ = Σ_{j=1}^d x_j ȳ_j. Arguing as in Theorem 4.4.1, if T is a linear mapping from a Hermitian space E to a Hermitian space F then there is a unique linear mapping T∗ : F → E such that ⟨T(x), y⟩ = ⟨x, T∗(y)⟩ for all x ∈ E, y ∈ F. T∗ is called the adjoint of T. (More generally, if E and F are Hilbert spaces and T : E → F is a bounded linear operator, then T has a unique


adjoint T∗ : F → E with the property described above.) If (e1, . . . , ed) is an orthonormal basis for E, if (f1, . . . , fc) is an orthonormal basis for F and if T is represented by the matrix (tij), then T∗ is represented by the matrix (t∗ij), where t∗ij = t̄ji. A linear mapping T from a Hilbert space E to itself is Hermitian if T = T∗. If E is finite-dimensional and T is represented by a matrix (tij) with respect to an orthonormal basis then T is Hermitian if and only if tij = t̄ji for all i, j; the matrix (tij) is then said to be Hermitian. If T is Hermitian there exists an orthonormal basis (e1, . . . , ed) and real scalars (λ1, . . . , λd) such that T(ei) = λi ei for 1 ≤ i ≤ d: λ1, . . . , λd are the eigenvalues of T. The group of linear isometries of (E, ⟨·, ·⟩) is denoted by U(E, ⟨·, ·⟩), and is called the unitary group. We write U(d) for U(C^d), where C^d is given its usual inner product. Suppose that T ∈ U(E, ⟨·, ·⟩). If λ is an eigenvalue for T, with eigenspace Eλ, then |λ| = 1, so that λ = e^{iθ} for some θ ∈ [0, 2π). Arguing as in Proposition 4.6.2, we see that T(Eλ⊥) = Eλ⊥; consequently, there exists an orthonormal basis (e1, . . . , ed) for E and a sequence (θ1, . . . , θd) in [0, 2π) such that T(ej) = e^{iθj} ej for 1 ≤ j ≤ d. A unitary mapping T is in the special unitary group SU(d) if det T = 1; in terms of the representation just given, T ∈ SU(d) if and only if θ1 + · · · + θd = 0 (mod 2π). Both of the groups U(d) and SU(d) are compact and path-connected.
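Two of the facts above can be checked numerically (a sketch in numpy, not from the text): the adjoint of a complex matrix is its conjugate transpose for the pure-mathematics inner product, and the eigenvalues of a unitary matrix all have modulus one.

```python
import numpy as np

rng = np.random.default_rng(1)

def ip(u, v):
    # usual inner product on C^d: linear in the first variable
    return np.sum(u * v.conj())

T = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))   # T : C^4 -> C^3
Tstar = T.conj().T                                           # t*_{ij} = conj(t_{ji})
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
assert np.isclose(ip(T @ x, y), ip(x, Tstar @ y))            # <Tx, y> = <x, T*y>

A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(A)                                       # Q is unitary
assert np.allclose(np.abs(np.linalg.eigvals(Q)), 1)          # each lambda = e^{i theta}
assert np.isclose(abs(np.linalg.det(Q)), 1)
```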

5 Clifford algebras

We have seen that if E is a vector space, then E can be embedded in the tensor algebra ⊗∗(E), in the symmetric algebra ⊗∗_s(E), and, most pertinently, in the exterior algebra Λ∗(E). The exterior algebra was introduced by Grassmann in 1844. In 1876, Clifford communicated the abstract of a paper [Cli2] to the London Mathematical Society; this paper, which was unfinished, and not published in his lifetime, showed how to modify the definition of the exterior algebra of an n-dimensional real inner-product space, to take the inner product into account. In this paper, he showed that the resulting algebra (which he called a 'geometric algebra') has dimension 2^n, and that it can be decomposed into odd and even parts. He also observed that properties of the algebra depended on the value of n (mod 4). Clifford's definition extends easily to general quadratic spaces. It also extends to complex quadratic spaces, and indeed to quadratic spaces defined over more general fields. We shall however restrict our attention to real quadratic spaces.

5.1 Clifford algebras

Throughout this chapter, we shall suppose that (E, q) is a d-dimensional real vector space E with quadratic form q, associated bilinear form b and standard orthogonal basis (e1, . . . , ed). Suppose that A is a unital algebra. A Clifford mapping j is an injective linear mapping j : E → A such that (i) 1 ∉ j(E), and (ii) (j(x))² = −q(x)1 = −q(x), for all x ∈ E. If, further,


(iii) j(E) generates A, then A (together with the mapping j) is called a Clifford algebra for (E, q). If j is a Clifford mapping, and x, y ∈ E then

j(x)j(y) + j(y)j(x) = j(x + y)² − j(x)² − j(y)² = −(q(x + y) − q(x) − q(y))1 = −2b(x, y)1.

In particular, if x⊥y then xy = −yx. Conventions vary greatly! Some authors require that (j(x))² = q(x). We can identify R with span(1): these are the scalars in A. We can also identify E with j(E), so that E is a linear subspace of A. Elements of E are then called vectors in A.

Let us give three examples. First, suppose that q = 0. Let A = Λ∗E, and let j(x) = x. Since 1 ∉ j(E), and x ∧ x = 0 = −q(x) for all x ∈ E, j is a Clifford mapping of (E, q) into Λ∗E. Since E generates the algebra Λ∗E, the exterior algebra Λ∗E is a Clifford algebra for (E, q). Secondly, let (E, q) be a one-dimensional inner-product space, with basic element e1 satisfying q(e1) = 1. Let A = C, the algebra of complex numbers, and let j(λe1) = λi. Then (j(λe1))² = −λ² = −q(λe1). Thus j is a Clifford mapping and C is a Clifford algebra for (E, q). (This example explains the sign-convention that we use.) We can think of A = C as a subalgebra of M2(R), with

1 = [1 0; 0 1]  and  i = J = [0 −1; 1 0],

so that a typical element of A is

z = x + iy = xI + yJ = [x −y; y x].
Thirdly, let (E, q) be a one-dimensional quadratic space, with negative-definite form q, and with basic element e1 satisfying q(e1) = −1. Let A = R², with multiplication defined as (x1, y1)(x2, y2) = (x1x2, y1y2), so that A = R ⊕ R is the algebra direct sum of two copies of R, with identity element 1 = (1, 1). Let j(λe1) = (λ, −λ). Then 1 ∉ j(E), and (j(λe1))² = λ²(1, 1) = −q(λe1).1, so that j is a Clifford mapping into A, and A is a Clifford algebra for (E, q). Again we can think of A as a subalgebra of M2(R), with

1 = [1 0; 0 1]  and  j(e1) = U = [1 0; 0 −1],

so that a typical element of A is

xI + yU = [x+y 0; 0 x−y].

Alternatively, we could take

j(λe1) = λQ = [0 λ; λ 0],

so that a typical element of A is

xI + yQ = [x y; y x].
We shall make frequent use of the following elementary result.

Theorem 5.1.1 Suppose that a1, . . . , ad are elements of a unital algebra A. Then there exists a Clifford mapping j : (E, q) → A satisfying j(ei) = ai for 1 ≤ i ≤ d if and only if

a_i² = −q(ei) for 1 ≤ i ≤ d,
a_i a_j + a_j a_i = 0 for 1 ≤ i < j ≤ d,

and 1 ∉ span(a1, . . . , ad). If so, j is unique.

Proof The conditions are certainly necessary, since if j is a Clifford mapping then 1 ∉ span(a1, . . . , ad), a_i² = j(ei)² = −q(ei), and a_i a_j + a_j a_i = j(ei)j(ej) + j(ej)j(ei) = 0, for i ≠ j, since ei⊥ej. Conversely, if they are satisfied, and x = x1e1 + · · · + xd ed ∈ E, set j(x) = x1a1 + · · · + xd ad. Then j is a linear mapping of E into A, 1 ∉ j(E) and

j(x)² = Σ_{i=1}^d x_i² a_i² + Σ_{1≤i<j≤d} x_i x_j (a_i a_j + a_j a_i) = −q(x)1,

so that j is a Clifford mapping. Uniqueness is clear, since (e1, . . . , ed) is a basis for E.

Suppose now that A is a Clifford algebra for (E, q). A Clifford algebra A for (E, q) is universal if, whenever T is an isometry of (E, q) into a quadratic space (F, r) and B is a Clifford algebra for (F, r), T extends to an algebra homomorphism of A into B. If C = {i1 < · · · < ik} is a non-empty subset of Ω = {1, . . . , d}, we set eC = e_{i1} . . . e_{ik}, and we set e∅ = 1. If |C| > 1 then eC depends on the ordering of the set {1, . . . , d}. The element eΩ = e1 . . . ed will be particularly important.

Proposition 5.1.1 Suppose that A is a Clifford algebra for (E, q). Then A = span(P), where P = {eC : C ⊆ Ω}. If P is linearly independent (and therefore a basis for A) then A is universal.

Proof Since e_i² = 0, 1 or −1 and e_i e_j = −e_j e_i for i ≠ j, it follows that if eC, eD ∈ P then either eC eD = 0 or eC eD = ±e_{C∆D}, where C∆D is the symmetric difference (C \ D) ∪ (D \ C). Thus P ∪ (−P) is closed under multiplication, and so spans A. If P is linearly independent and T ∈ L(E, F) is an isometry of (E, q) into (F, r) and B(F, r) is a Clifford algebra for (F, r), then we can extend T to an algebra homomorphism T̃ : A → B by setting T̃(e_{i1} . . . e_{ik}) = T(e_{i1}) . . . T(e_{ik}) and extending by linearity.

Corollary 5.1.1

If dim A = 2^d then A is universal.
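The multiplication rule eC eD = ±e_{C∆D} in Proposition 5.1.1 can be implemented directly from the relations e_i e_j = −e_j e_i (i ≠ j) and e_i² = −q(e_i). The following sketch (our own helper mul, for the regular case q(e_i) = ±1 in R^{p,m}) checks that products of basis elements land in ±P, with index set the symmetric difference:

```python
from itertools import combinations

def mul(C, D, p, m):
    """Return (sign, C Delta D) with e_C e_D = sign * e_{C Delta D} in A(R^{p,m})."""
    seq = sorted(C) + sorted(D)
    sign = 1
    i = 0
    # bubble the concatenation into order, flipping sign per transposition,
    # and cancel equal neighbours using e_i^2 = -q(e_i)
    while i < len(seq) - 1:
        if seq[i] > seq[i + 1]:
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            sign = -sign
            i = max(i - 1, 0)
        elif seq[i] == seq[i + 1]:
            sign *= -1 if seq[i] <= p else 1   # e_i^2 = -1 (i <= p), +1 (i > p)
            del seq[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(seq)

# Closure check in A(R^{2,1}): every product is +/- e_{C Delta D}
p, m = 2, 1
d = p + m
for C in combinations(range(1, d + 1), 2):
    for D in combinations(range(1, d + 1), 2):
        sign, E = mul(C, D, p, m)
        assert sign in (1, -1)
        assert set(E) == set(C).symmetric_difference(set(D))
```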

5.2 Existence

Theorem 5.2.1 If (E, q) is a quadratic space, there exists a universal Clifford algebra A(E, q).

We shall give more than one proof of this important theorem. The proof given here uses creation and annihilation operators.


Proof We show that we can take A(E, q) to be a subalgebra of the algebra L(Λ∗E) of linear operators on Λ∗E. If x ∈ E, let j(x) = m_x − δ_x, where m_x is the creation operator and δ_x is the annihilation operator corresponding to x. Then

j(x)² = m_x² − m_xδ_x − δ_xm_x + δ_x² = −(m_xδ_x + δ_xm_x) = −q(x)1,

by Theorem 4.1.1. Since j(x)1 = x, 1 ∉ j(E). Thus j is a Clifford mapping of (E, q) into L(Λ∗E). We take A(E, q) to be the unital algebra generated by j(E), and identify E with j(E). It remains to show that A(E, q) is universal. We show that the set {eC : C ⊆ Ωd} is linearly independent. First, e∅(1) = 1. Next, we show by induction on k = |C| that if C = {j1 < · · · < jk} then eC(1) = e_{j1} ∧ · · · ∧ e_{jk}. The result is true if k = 1. Suppose that it is true for k and that C = {j0 < · · · < jk}. Then

m_{e_{j0}}(e_{j1} . . . e_{jk})(1) = m_{e_{j0}}(e_{j1} ∧ · · · ∧ e_{jk}) = e_{j0} ∧ e_{j1} ∧ · · · ∧ e_{jk},
δ_{e_{j0}}(e_{j1} . . . e_{jk})(1) = δ_{e_{j0}}(e_{j1} ∧ · · · ∧ e_{jk}) = 0.

Thus the elements {eC(1) : C ⊆ Ωd} are linearly independent in Λ∗E; this implies that the operators {eC : C ⊆ Ωd} are linearly independent in L(Λ∗E).

Corollary 5.2.1 If A is a universal Clifford algebra for (E, q) then dim A = 2^d, where d = dim E.

Thus a Clifford algebra A for (E, q) is universal if and only if dim A = 2^d, and if and only if the set P = {eC : C ⊆ Ω} is a linearly independent subset of A. We denote a universal Clifford algebra for (E, q) by A(E, q). We write A_d for A(R^d) and A_{p,m} for A(R^{p,m}). We can consider A(E, q) as a quotient of ⊗∗E. Let i be the inclusion mapping E → ⊗∗E, and let us denote the Clifford mapping from E to a universal Clifford algebra A(E, q) by jA. By Theorem 3.2.1, there is a unique algebra homomorphism kA from ⊗∗E into A(E, q) such that kA ◦ i = jA; since E generates A(E, q), kA is surjective. Let Iq be the ideal in ⊗∗E generated by the elements x ⊗ x + q(x)1, let C(E, q) = (⊗∗E)/Iq be the quotient algebra, and let π : ⊗∗E → C(E, q) be the quotient mapping. Let us set jB = π ◦ i.
Then Iq is in the null-space of kA, and so there exists a unique algebra homomorphism J : C(E, q) → A(E, q) such that kA = J ◦ π. Then jA = kA ◦ i = J ◦ π ◦ i = J ◦ jB. Again, J is surjective. If x ∈ E, then J(jB(x)) = jA(x), so that 1 ∉ jB(E). Further,

(jB(x))² = π(x ⊗ x) = π(x ⊗ x + q(x)1) − π(q(x)1) = −q(x),


so that jB is a Clifford mapping of E into C(E, q). Since i(E) generates ⊗∗E, jB(E) = π(i(E)) generates C(E, q), and so C(E, q) is a Clifford algebra for (E, q). Since A(E, q) is universal, there exists a unique unital algebra homomorphism ρ : A(E, q) → C(E, q) such that ρ ◦ jA = jB. It follows that J is an algebra isomorphism of C(E, q) onto A(E, q), with inverse ρ. Thus C(E, q) is a universal Clifford algebra for (E, q).

[Commutative diagram: i : E → ⊗∗(E), jA : E → A(E, q) and jB : E → C(E, q), with kA : ⊗∗(E) → A(E, q), π : ⊗∗(E) → C(E, q), and the mutually inverse homomorphisms J : C(E, q) → A(E, q) and ρ : A(E, q) → C(E, q).]
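The creation–annihilation proof of Theorem 5.2.1 can be carried out numerically for small d: build m_x and δ_x as matrices acting on the 2^d-dimensional exterior algebra and check the Clifford relations. A sketch (the sign conventions for the wedge and interior products are the usual ones; all names are ours, not the book's):

```python
from itertools import combinations
import numpy as np

p, m = 1, 2
d = p + m
q = [1.0] * p + [-1.0] * m                    # q(e_i) for signature (p, m)

basis = [frozenset(S) for k in range(d + 1) for S in combinations(range(d), k)]
idx = {S: n for n, S in enumerate(basis)}
N = len(basis)                                # N = 2^d

def creation(i):
    # m_{e_i}: wedge with e_i, with the usual reordering sign
    M = np.zeros((N, N))
    for S in basis:
        if i not in S:
            sign = (-1) ** sum(1 for j in S if j < i)
            M[idx[S | {i}], idx[S]] = sign
    return M

def annihilation(i):
    # delta_{e_i}: interior product with e_i, scaled by q(e_i)
    M = np.zeros((N, N))
    for S in basis:
        if i in S:
            sign = (-1) ** sum(1 for j in S if j < i)
            M[idx[S - {i}], idx[S]] = sign * q[i]
    return M

# j(e_i) = m_{e_i} - delta_{e_i} satisfies the Clifford relations:
for i in range(d):
    ji = creation(i) - annihilation(i)
    assert np.allclose(ji @ ji, -q[i] * np.eye(N))        # j(x)^2 = -q(x) 1
for i in range(d):
    for k in range(i + 1, d):
        a = creation(i) - annihilation(i)
        b = creation(k) - annihilation(k)
        assert np.allclose(a @ b + b @ a, 0)              # anticommutation
```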

5.3 Three involutions

Conjugation plays an important part in complex analysis. The same is true in Clifford algebras; but here the situation is rather more complicated. Suppose that (E, q) is a quadratic space, with universal Clifford algebra A = A(E, q). Let m(x) = −x, for x ∈ E. Then m is an isometry of E, and so by universality m extends to an algebra homomorphism m̃ : A(E, q) → A(E, q).

We write a′ for m̃(a). Clearly a → a′ is an automorphism, and a′′ = a (we have an involution). Also, e′C = (−1)^{|C|} eC. This involution is called the principal involution. We set

A+ = {a : a′ = a},  A− = {a : a′ = −a}.

Then A = A+ ⊕ A− , and A+ is a subalgebra of A, the even Clifford


algebra. Further, A+ .A+ = A− .A− = A+ ,

A+ .A− = A− .A+ = A− ,

so that A is a super-algebra. This super-algebra property is fundamentally important. Note that eC ∈ A+ if |C| is even, and eC ∈ A− if |C| is odd. In particular, j(E) ⊆ A−.

Theorem 5.3.1 Suppose that the quadratic space (E, q) is the orthogonal direct sum (E1, q1) ⊕ (E2, q2) of quadratic spaces. Then A(E, q) ≅ A(E1, q1) ⊗g A(E2, q2).

Proof Let G be the graded product A(E1, q1) ⊗g A(E2, q2). If x1 ∈ E1 and x2 ∈ E2, define j(x1 + x2) = (x1 ⊗g 1) + (1 ⊗g x2). Then j is a one-one linear mapping of E into G, and 1 ∉ j(E). Since x1 ∈ A−(E1, q1) and x2 ∈ A−(E2, q2),

(j(x1 + x2))² = −q1(x1) + x1 ⊗g x2 − x1 ⊗g x2 − q2(x2) = −q(x1 + x2).

Thus j is a Clifford mapping of (E, q) into G. Further, j(E1 + E2) generates G, and dim G = dim A(E1, q1) · dim A(E2, q2) = 2^{d1} 2^{d2} = 2^d, so that G is a universal Clifford algebra for (E, q).

Corollary 5.3.1 If (E, q) is a d-dimensional quadratic space, A(E, q) is isomorphic to the graded tensor product of d two-dimensional graded algebras.

This theorem and its corollary are theoretically interesting, but are not very useful in practice, since the construction of graded tensor products is not a straightforward matter. In practice, it is more useful to construct ordinary tensor products, as we shall see in subsequent chapters. We can however use this theorem to give a second proof of Theorem 5.2.1. This uses induction on d = dim E. The result is true when d = 1, as the examples in the previous section show. Assume that the result is true for spaces of dimension less than dim E = d, where d > 1. We can write E as an orthogonal direct sum E1 ⊕ E2, with dim E1 = d1 < d and dim E2 = d2 < d. Let q1 and q2 be the restrictions of q to E1 and E2. By the inductive hypothesis, there exist universal Clifford algebras A(E1, q1) and A(E2, q2). Let G be the graded product A(E1, q1) ⊗g A(E2, q2). If x1 ∈ E1 and x2 ∈ E2, define j(x1 + x2) = (x1 ⊗g 1) + (1 ⊗g x2). Then j is a


one-one linear mapping of E into G, and 1 ∉ j(E). Since x1 ∈ A−(E1, q1) and x2 ∈ A−(E2, q2), (j(x1 + x2))² = −q1(x1) + x1 ⊗g x2 − x1 ⊗g x2 − q2(x2) = −q(x). Further, j(E1 + E2) generates G, and dim G = dim A(E1, q1) · dim A(E2, q2) = 2^{d1} 2^{d2} = 2^d, so that G is a universal Clifford algebra for (E, q). Note that the theorem shows that A2 ≅ A1 ⊗g A1 ≅ C ⊗g C ≅ H. We can also see this directly. Let j((x1, x2)) = x1i + x2j. Then j is an injective linear mapping of R² into H, 1 ∉ j(R²) and (j((x1, x2)))² = x1²i² + x1x2(ij + ji) + x2²j² = −x1² − x2². Thus j is a Clifford mapping of R² into H, and so H is a Clifford algebra for the Euclidean space R². Since dim H = 4 = 2², H is a universal Clifford algebra. Note that A2+ = span(1, k); this appearance of the quaternions is rather unsatisfactory, since i, j and k appear in an unsymmetric way. Recall that if A is an algebra, then the opposite algebra A^opp is the vector space A with a new multiplication defined by x ◦ y = yx. It is an algebra. If A(E, q) is a universal Clifford algebra for (E, q), with Clifford mapping j, then j : (E, q) → A(E, q)^opp is also a Clifford mapping, and so A(E, q)^opp is also a Clifford algebra for (E, q). It is universal, since it has the same dimension as A(E, q). By universality, the identity mapping Id : E → E therefore extends to an algebra homomorphism Ĩd : A(E, q) → A(E, q)^opp.

We write a∗ for Ĩd(a). Then a∗b∗ = b∗ ◦ a∗ = (ba)∗. Again, a∗∗ = a, and so we have an involutory anti-automorphism of A(E, q), called the principal anti-involution, or reversal. If C = {j1, . . . , jk} with 1 ≤ j1 < · · · < jk ≤ d, then

e∗C = e_{jk} . . . e_{j1} = (−1)^{|C|(|C|−1)/2} eC,


since e∗C can be obtained from eC by making |C|(|C| − 1)/2 transpositions. Finally, we can consider the extension, by universality, of the isometry m to an algebra homomorphism m̃ : A(E, q) → A(E, q)^opp,

where m(x) = −x. We write m̃(a) = ā. Clearly ā = (a′)∗ = (a∗)′, ēC = (−1)^{|C|(|C|+1)/2} eC, and the conjugate of ā is a. Further, the conjugate of ab is b̄ā, so that the mapping a → ā is an anti-automorphism of A, called conjugation. If e1, . . . , ed is a standard orthogonal basis for E, then e′C = (−1)^{α(C)} eC, e∗C = (−1)^{β(C)} eC and ēC = (−1)^{γ(C)} eC, where α, β and γ are given by the following table:

|C| (mod 4)   0   1   2   3
α(C)          0   1   0   1
β(C)          0   0   1   1
γ(C)          0   1   1   0
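The table is easy to verify programmatically from the exponents |C|, |C|(|C| − 1)/2 and |C|(|C| + 1)/2 (a sketch; function names are ours):

```python
def alpha(k): return k % 2                    # principal involution
def beta(k):  return (k * (k - 1) // 2) % 2   # reversal
def gamma(k): return (k * (k + 1) // 2) % 2   # conjugation

assert [alpha(k) for k in range(4)] == [0, 1, 0, 1]
assert [beta(k) for k in range(4)] == [0, 0, 1, 1]
assert [gamma(k) for k in range(4)] == [0, 1, 1, 0]

# consistency: conjugation is the composition of the other two involutions
assert all((alpha(k) + beta(k)) % 2 == gamma(k) for k in range(100))
```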

If F is a linear subspace of (E, q), then the inclusion mapping F → A(E, q) is a Clifford mapping, and therefore extends to an isomorphism of A(F, q) into A(E, q). This mapping is called the canonical inclusion and will be denoted by ı̃. Note that (ı̃(a))′ = ı̃(a′), the conjugate of ı̃(a) is ı̃(ā), and (ı̃(a))∗ = ı̃(a∗).

Exercises

1. Suppose that F is a linear subspace of (E, q). Verify that the canonical inclusion A(F, q) → A(E, q) is injective.
2. Let G denote the group of automorphisms T of A(E, q) for which T(E) ⊆ E. Show that if T ∈ G then the restriction T|E of T to E is an isometry of (E, q), and that the mapping T → T|E is an isomorphism of G onto O(E, q).


5.4 Centralizers, and the centre

In this section, we suppose that (E, q) is a regular quadratic space. Recall that the centralizer CA(B) of a subset B of an algebra A is CA(B) = {a ∈ A : ab = ba for all b ∈ B}, and that the centre Z(A) of A is the centralizer CA(A): Z(A) = {a ∈ A : ab = ba for all b ∈ A}.

Proposition 5.4.1 Let A = A(E, q) be a universal Clifford algebra for a regular quadratic space (E, q).
(i) CA(A+) = span(1, eΩ).
(ii) Z(A) = span(1, eΩ) if d is odd, and Z(A) = span(1) if d is even.
(iii) Z(A+) = span(1) if d is odd, and Z(A+) = span(1, eΩ) if d is even.

Proof (i) Since eiej eΩ = eΩ eiej and the terms eiej generate A+, span(1, eΩ) ⊆ CA(A+). Suppose conversely that z ∈ CA(A+); we can write z = Σ_{C⊆Ω} λC eC. It is sufficient to show that if D is a non-empty proper subset of Ω then λD = 0. There exist i ∈ D and j ∉ D. If C ⊆ Ω then eiej eC eiej = αC eC where αC = ±1, and in particular eiej eD eiej = q(ei)q(ej) eD, so that αD = q(ei)q(ej). Now

−q(ei)q(ej) z = eiej z eiej = Σ_{C⊆Ω} λC eiej eC eiej = Σ_{C⊆Ω} αC λC eC,

so that Σ_{C⊆Ω} (αC + q(ei)q(ej)) λC eC = 0, and (αC + q(ei)q(ej)) λC = 0 for C ⊆ Ω. Since αD = q(ei)q(ej), it follows that λD = 0.

(ii) Z(A) ⊆ CA(A+). If d is odd, then ej eΩ = eΩ ej for each j, so that eΩ ∈ Z(A), and Z(A) = span(1, eΩ). If d is even, then ej eΩ = −eΩ ej for each j, so that eΩ ∉ Z(A), and Z(A) = span(1).

Finally (iii) follows from (i), since eΩ ∈ A+ if and only if d is even.

Corollary 5.4.1 The element eΩ is, up to choice of sign, independent of the choice of standard orthogonal basis. That is, the set {eΩ, −eΩ} is independent of the choice of standard orthogonal basis.

Proof It follows from the proposition that span(1, eΩ) does not depend on the choice of standard orthogonal basis. If x ∈ span(1, eΩ) then x² = ±1 if and only if x = 1, −1, eΩ or −eΩ.

We call eΩ a volume element of A. It depends upon the orientation that we use. If we replace the standard orthogonal basis (e1, . . . , ed) by


the standard orthogonal basis (−e1, e2, . . . , ed) then eΩ is replaced by −eΩ. Note that eΩ² = (−1)^η, where η = d(d − 1)/2 + p. If eΩ² = 1 then the algebra A(eΩ) ≅ R ⊕ R = R², while if eΩ² = −1 then the algebra A(eΩ) ≅ C. When is eΩ² = 1? We consider two cases. First suppose that d = 2k is even and that p = k + t, m = k − t. Then η = k(2k − 1) + k + t = 2k² + t. Thus eΩ² = 1 if and only if t is even; this is the case if and only if p − m = 2t = 0 (mod 4). Secondly, if d = 2k + 1 is odd and p = k + t, m = k + 1 − t then η = k(2k + 1) + k + t = 2k(k + 1) + t. Again, eΩ² = 1 if and only if t is even; this is the case if and only if p − m = 2t − 1 = 3 (mod 4). We therefore have the following table:

p − m (mod 4)    0    1    2    3
CA(A+)           R²   C    C    R²
Z(A(E, q))       R    C    R    R²
Z(A+(E, q))      R²   R    C    R
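The case analysis for eΩ² can be compressed into a single check of the congruence condition (a sketch; the helper name is ours):

```python
def e_omega_squared(p, m):
    # e_Omega^2 = (-1)^eta with eta = d(d-1)/2 + p, d = p + m
    d = p + m
    eta = d * (d - 1) // 2 + p
    return (-1) ** eta

# e_Omega^2 = 1 exactly when p - m = 0 or 3 (mod 4):
for p in range(6):
    for m in range(6):
        if p + m == 0:
            continue
        expected = 1 if (p - m) % 4 in (0, 3) else -1
        assert e_omega_squared(p, m) == expected
```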

This means that A(E, q) can be considered as a complex algebra in a natural way if and only if d is odd and p − m = 1 (mod 4). Similarly, A+ (E, q) can be considered as a complex algebra in a natural way if and only if d is even and p − m = 2 (mod 4).

5.5 Simplicity

Are there any non-universal Clifford algebras? If A is a Clifford algebra for (E, q) then, by universality, the identity mapping Id : E → E extends to an algebra homomorphism Ĩd : A(E, q) → A,


and Ĩd is an algebra homomorphism of A(E, q) onto A. If A(E, q) is simple, then Ĩd is an algebra isomorphism of A(E, q) onto A, and so (E, q) has no non-universal Clifford algebra. If (E, q) is not regular, then eΩ² = 0, and A(E, q)eΩ is a proper ideal in A(E, q). Let B(E, q) be the quotient algebra A(E, q)/A(E, q)eΩ, and let q : A(E, q) → B(E, q) be the quotient mapping. If dim E > 1 and j : (E, q) → A(E, q) is the Clifford mapping, then q ◦ j is a Clifford mapping from (E, q) into B(E, q), so that B(E, q) is a non-universal Clifford algebra for (E, q). Suppose that (E, q) is regular, with signature (p, m). Then A(E, q) ≅ A_{p,m}, and so it is sufficient to consider the algebras A_{p,m}.

Theorem 5.5.1 If p − m ≠ 3 (mod 4) then A_{p,m} is simple, so that R^{p,m} has no non-universal Clifford algebras.

Proof We can suppose that d ≥ 2. Suppose that I is a non-zero ideal in A_{p,m}. Let x be a non-zero element in I with a minimal number of non-zero coefficients in its expansion with respect to the basis {eC : C ⊆ Ω}. By multiplying by an eC and scaling, we can suppose that

x = 1 + Σ_{C∈R} λC eC,

where R is a set of non-empty subsets of Ω. We shall show that R is the empty set. Suppose that B ∈ R and that B ≠ Ω, so that there exist i ∈ B and j ∉ B. Then

eiej x eiej = −q(ei)q(ej) + Σ_{C∈R} µC eC,

where µC = ±q(ei)q(ej)λC, and where µB = q(ei)q(ej)λB. Consequently q(ei)q(ej)x − eiej x eiej is a non-zero element of I with fewer non-zero terms, giving a contradiction. Thus x = 1 + λΩ eΩ. Suppose that d is even. Then e1 x e1 = −q(e1)(1 − λΩ eΩ), so that q(e1)x − e1 x e1 = 2q(e1)1 ∈ I and I = A_{p,m}. If p − m = 1 (mod 4), then eΩ² = −1, so that x(1 − λΩ eΩ) = (1 + λΩ²)1, and again I = A_{p,m}. We have corresponding results for even Clifford algebras. Combining this theorem with Theorem 6.4.1, we have the following.

Corollary 5.5.1

If p ≠ m (mod 4) then A+_{p,m} is simple.

What happens when (E, q) = R^{p,m}, with p − m = 3 (mod 4)? In this case, f = ½(1 + eΩ) and g = ½(1 − eΩ) are idempotents in the centre of the universal Clifford algebra A = A_{p,m} satisfying fg = 0 and f + g = 1,


so that we have a decomposition of A as the direct sum of two ideals Af ⊕ Ag. Af and Ag are unital algebras, with identity elements f and g respectively. The mapping mf : a → af is an algebra homomorphism of A onto Af with null-space Ag. If x is a non-zero element of E then j(x)f = ½(x + xeΩ) ∉ span(f), and (j(x)f)² = j(x)²f = −q(x)f, so that the mapping x → j(x)f is a Clifford mapping of E into Af. j(E)f generates Af, and so Af is a non-universal Clifford algebra for (E, q). Now d = p + m is odd, so that if a ∈ A+ then aeΩ ∈ A−, and so (af)+ = a/2. Thus mf is one-one on A+. Since A+ and Af have the same dimension, the restriction of mf to A+ is an algebra isomorphism of A+ onto Af. Consequently A+ ≅ Af. The same holds for the mapping mg : a → ag from A onto Ag, and so A(E, q) ≅ A+(E, q) ⊕ A+(E, q). Note that since f is neither in A+ nor in A−, the Z2 grading of A does not transfer to Af. Since e′Ω = −eΩ, the principal involution maps Af isomorphically onto Ag, and maps Ag isomorphically onto Af. Suppose that B is a Clifford algebra for (E, q) ≅ R^{p,m}, where p − m = 3 (mod 4). Then, as before, the identity mapping on E extends to an algebra homomorphism Ĩd of A(E, q) = Af ⊕ Ag onto B.

Now Af ≅ A+(E, q), and A+(E, q) is simple, by Corollary 5.5.1, so that either Ĩd(Af) = 0 or Ĩd is injective on Af; a similar result holds for Ag. From this it follows that either B is universal, or B ≅ Af. Summing up, we have the following theorem.

Theorem 5.5.2

If p − m = 3 (mod 4) then A_{p,m} is not simple, and A_{p,m} ≅ A+_{p,m} ⊕ A+_{p,m}.

There exists a non-universal Clifford algebra B_{p,m} for R^{p,m}, and any such algebra is isomorphic to A+_{p,m}.

When p − m = 3 (mod 4), it is sometimes easier to construct the non-universal Clifford algebra B_{p,m} than to construct the universal Clifford algebra A_{p,m}; it is then easy to construct the Clifford algebra A_{p,m} from the Clifford algebra B_{p,m}. Suppose that k : R^{p,m} → B_{p,m} is a Clifford mapping from R^{p,m} into B_{p,m}. If e_Ω^B is the corresponding volume element, then e_Ω^B = ±1. The mapping j(x) = (k(x), −k(x)) from R^{p,m}


into B_{p,m} ⊕ B_{p,m} is then a Clifford mapping, which extends to an algebra homomorphism j : A_{p,m} → B_{p,m} ⊕ B_{p,m}. Let e_Ω^A be the volume element in A_{p,m}. Since d = p + m is odd, j(e_Ω^A) = (e_Ω^B, −e_Ω^B), and so j is injective. Since dim A_{p,m} = 2 dim B_{p,m}, it follows that j is an isomorphism of A_{p,m} onto B_{p,m} ⊕ B_{p,m}. Thus, using the Clifford mapping j, we can take B_{p,m} ⊕ B_{p,m} to be a universal Clifford algebra for R^{p,m}. Note that then

A+_{p,m} = {(b, b) : b ∈ B_{p,m}} ≅ B_{p,m},  A−_{p,m} = {(b, −b) : b ∈ B_{p,m}}.
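The algebra of the central idempotents f = ½(1 + eΩ) and g = ½(1 − eΩ) used above depends only on eΩ² = 1; it can be checked with any matrix stand-in w satisfying w² = 1 (a sketch, not a faithful representation of A_{p,m}):

```python
import numpy as np

I2 = np.eye(2)
w = np.diag([1.0, -1.0])          # w @ w = I2, playing the role of e_Omega
f = (I2 + w) / 2
g = (I2 - w) / 2

assert np.allclose(f @ f, f)      # f is idempotent
assert np.allclose(g @ g, g)      # g is idempotent
assert np.allclose(f @ g, 0 * I2) # f g = 0
assert np.allclose(f + g, I2)     # f + g = 1
```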

Corollary 5.5.2 If A is a Clifford algebra for a regular quadratic space (E, q) and if a ∈ A has a left inverse b, then a is invertible, with inverse b.

Proof If A is simple, this is an immediate consequence of Corollary 2.7.2. If A is not simple, then A is universal, and there exists an isomorphism π : A → A+ ⊕ A+ . Let π(a) = (a1 , a2 ) and let π(b) = (b1 , b2 ). Then bi is a left inverse of ai in A+ , for i = 1, 2. Since A+ is simple, ai is invertible in A+ , with inverse bi , for i = 1, 2. Thus a is invertible, with inverse b.

Exercises

1. Show that the even Clifford algebras A+ (E, q) and A+ (E, −q) are isomorphic. 2. Suppose that p − m = 3 (mod 4). If x ∈ Rp,m , let mΩ (x) = xeΩ . Show that mΩ is a Clifford map from Rp,m into Ap,m , and let mΩ also denote the corresponding algebra homomorphism from Ap,m into Ap,m . What are the fixed points of mΩ ? What is the image of mΩ ? What is the null-space of mΩ ?


3. Suppose that (E, q) has signature (p, m), where p − m = 3 (mod 4). Let i : (E, q) → (E, −q) be the identity mapping, and let nΩ : (E, q) → A(E, −q) be defined by nΩ (x) = i(x)eΩ , where eΩ is a volume element in A(E, −q). Show that nΩ is a Clifford map from (E, q) into A(E, −q). Let nΩ also denote the corresponding algebra homomorphism from A(E, q) into A(E, −q). What is the image of nΩ ? What is the null-space of nΩ ?

5.6 The trace and quadratic form on A(E, q)

Suppose that A = A(E, q) is a universal Clifford algebra for a regular quadratic space (E, q), so that the set {eC : C ⊆ Ωd} is a basis for A. Although the definition of the normalized trace τn on A does not depend on the choice of basis, we can use this basis to obtain properties of τn. If C ≠ ∅ and D ⊆ Ωd then l_{eC}(eD) = eC eD = ±e_{C∆D} ≠ ±eD (where C∆D = (C \ D) ∪ (D \ C)), so that the diagonal terms of the matrix representing l_{eC} all vanish, and τn(eC) = 0. Thus if a = Σ_{C⊆Ωd} αC eC then τn(a) = α∅. Note also that τn(a) = τn(a′) = τn(a∗) = τn(ā). In particular, if a ∈ A−(E, q) then τn(a) = 0.

Theorem 5.6.1 The normalized trace τn is the only linear functional φ on A = A(E, q) which satisfies the conditions (i) φ(1) = 1; (ii) φ(ab) = φ(ba) for all a, b ∈ A; (iii) φ(a) = φ(a′) for all a ∈ A.

Proof We have seen that τn satisfies the conditions. Suppose that φ satisfies the conditions. If |C| is odd then e′C = −eC, so that φ(eC) = −φ(eC), and φ(eC) = 0. Suppose that |C| is even and that C ≠ ∅. Let j be the least element of C and let D = C \ {j}, so that eC = ej eD = −eD ej. Then, using (ii), φ(eC) = φ(ej eD) = φ(eD ej) = −φ(ej eD) = −φ(eC), so that φ(eC) = 0. Thus φ = τn.
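For A = H ≅ A2, the normalized trace can be computed concretely as τn(a) = tr(l_a)/4 in the left regular representation. A sketch (helper names ours) checking that τn picks out the coefficient of 1 and satisfies τn(ab) = τn(ba):

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0])

def left_mult_matrix(a):
    # matrix of l_a : x -> a x with respect to the basis 1, i, j, k
    return np.column_stack([qmul(a, e) for e in np.eye(4)])

def tau_n(a):
    return np.trace(left_mult_matrix(a)) / 4.0

a = np.array([2.0, 1.0, -1.0, 3.0])
b = np.array([0.5, 0.0, 2.0, -1.0])
assert np.isclose(tau_n(a), a[0])                       # tau_n(a) = alpha_emptyset
assert np.isclose(tau_n(qmul(a, b)), tau_n(qmul(b, a))) # tracial property
```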


We now use the normalized trace on A(E, q) to define a quadratic form on A(E, q). If a, b ∈ A, let B(a, b) = τn(ab̄). Then B is a bilinear form on A(E, q). Since τn(c) = τn(c̄) for all c ∈ A, and the conjugate of ab̄ is bā, we have B(a, b) = τn(ab̄) = τn(bā) = B(b, a); B is symmetric, and so it defines a quadratic form Q on A(E, q): Q(a) = τn(aā) = τn(āa). Since eC ēC = ∏_{j∈C} q(ej), it follows that Q(eC) = ∏_{j∈C} q(ej). In particular, if x ∈ E then Q(x) = q(x). If (E, q) is regular, then Q is non-singular, and the set {eC : C ⊆ Ωd} is a standard orthogonal basis for A(E, q). If (E, q) is positive definite, then so is (A, Q). On the other hand, if (E, q) is regular, but not positive definite, and if A ⊂ {1, . . . , d − 1} then Q(e_{A∪{d}}) = Q(eA)q(ed) = −Q(eA); it follows easily from this that (A(E, q), Q) is hyperbolic. The quadratic form Q enables us to decompose A(E, q) as a direct sum of certain linear subspaces. Let (e1, . . . , ed) be a standard orthogonal basis for E, and let Aj = span{eC : |C| = j}, Bj = span{eC : |C| ≤ j}, for 0 ≤ j ≤ d, with the convention that A0 = B0 = R. Then Bj = span{a ∈ A(E, q) : a is the product of at most j vectors}, so that Bj does not depend upon the choice of standard orthogonal basis. But Aj = Bj ∩ B_{j−1}^⊥, and so Aj also does not depend upon the choice of standard orthogonal basis. We have an orthogonal decomposition A(E, q) = A0 ⊕ A1 ⊕ · · · ⊕ Ad: A1 = E and Ad = span(eΩ). Elements of A2 are called bivectors, elements of A3 are called trivectors, and so on. (A word of caution here: some authors use the term bivector to mean the product of two orthogonal vectors.)

5.7 The group G(E, q) of invertible elements of A(E, q)

Suppose that (E, q) is a regular quadratic space. We denote the group of invertible elements of the universal Clifford algebra A(E, q) by G(E, q). The anisotropic elements of E are contained in G(E, q). If a ∈ A(E, q), let ∆(a) = ā a; ∆ is the quadratic norm on A(E, q). Note that Q(a) = τ_n(∆(a)). The element ∆(a) provides a useful test to determine whether a is invertible.


Proposition 5.7.1 If a ∈ A(E, q) then a ∈ G(E, q) if and only if ∆(a) ∈ G(E, q).

Proof If a ∈ G(E, q) then ā ∈ G(E, q), so that ∆(a) ∈ G(E, q). Conversely if ∆(a) ∈ G(E, q) then (∆(a))⁻¹ ā a = 1, so that a has a left inverse. This implies that a ∈ G(E, q), by Corollary 5.5.2.

For example, if x = a1 + bi + cj + dk ∈ H then, as in Section 2.3, ∆(x) = a² + b² + c² + d², and so x ∈ G(R²) if and only if x ≠ 0. If λ1 + x ∈ R ⊕ E then ∆(λ1 + x) = (λ1 − x)(λ1 + x) = λ² + q(x), so that λ1 + x ∈ G(E, q) if and only if q(x) ≠ −λ²; in particular, if (E, q) is Euclidean then the non-zero elements of R ⊕ E are invertible. Suppose that λ1 + x is an invertible element of R ⊕ E. Then if a ∈ A(E, q),

Q((λ1 + x)a) = τ_n(ā(λ1 − x)(λ1 + x)a) = τ_n((λ² + q(x)) ā a) = (λ² + q(x))Q(a),

so that in the left regular representation of A(E, q), l_{λ1+x} is a multiple of an isometry of A(E, q).

We can generalize this. Let

N = N(E, q) = {a : ∆(a) ∈ R},
N_* = {a : ∆(a) ∈ R*} = {a ∈ N : ∆(a) ≠ 0},
N_{±1} = {a ∈ N : ∆(a) = ±1}, N_1 = {a ∈ N : ∆(a) = 1}.

We set N⁺ = N ∩ A⁺(E, q), and use the same convention for the other terms. If a ∈ N then Q(a) = τ_n(∆(a)) = ∆(a). Note that if (E, q) is a Euclidean space then ∆(a) ≥ 0 for a ∈ N, and ∆(a) > 0 for a ∈ N_*.

Proposition 5.7.2 N_* is a subgroup of G(E, q), and ∆ is a character on N_*: ∆(ab) = ∆(a)∆(b). If a ∈ N_* then a′, a* and ā are in N_*, and ∆(a) = ∆(a′) = ∆(a*) = ∆(ā).

Proof If a ∈ N_* then ∆(a) ∈ G(E, q), and so a ∈ G(E, q). If a ∈ N_* and b ∈ A then

∆(ab) = b̄ ā a b = b̄ ∆(a) b = ∆(a) b̄ b = ∆(a)∆(b);

in particular, ∆ is multiplicative on N_*. Further, a(ā/∆(a)) = 1, so that a ā = ∆(a), ā ∈ N_*, ∆(a) = ∆(ā) and
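The quaternion example can be checked numerically. A sketch, using one standard embedding of H into M₂(C) (x = a1 + bi + cj + dk ↦ ( a+bi  c+di ; −c+di  a−bi ), a common convention under which ∆(x) corresponds to the determinant):

```python
import numpy as np

def quat(a, b, c, d):
    """x = a1 + bi + cj + dk as a 2x2 complex matrix (one standard convention)."""
    z, w = complex(a, b), complex(c, d)
    return np.array([[z, w], [-w.conjugate(), z.conjugate()]])

def delta(x):
    """Quadratic norm Delta(x) = conj(x) x; here it is the determinant, a scalar."""
    return x[0, 0] * x[1, 1] - x[0, 1] * x[1, 0]

x = quat(1.0, 2.0, 3.0, 4.0)
y = quat(-2.0, 0.5, 1.0, -1.5)

# Delta(x) = a^2 + b^2 + c^2 + d^2, so x is invertible if and only if x != 0
assert abs(delta(x) - (1 + 4 + 9 + 16)) < 1e-12

# Delta is multiplicative, as in Proposition 5.7.2: Delta(xy) = Delta(x) Delta(y)
assert abs(delta(x @ y) - delta(x) * delta(y)) < 1e-12
```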


a⁻¹ = ā/∆(a) ∈ N_*: N_* is a subgroup of G(E, q). Since ∆(a′) = (ā)′a′ = (āa)′ = (∆(a))′ = ∆(a), a′ ∈ N_*. Finally a* ∈ N_* and ∆(a*) = ∆(a).

Proposition 5.7.3 If a ∈ N and b ∈ A then Q(ab) = ∆(a)Q(b), so that in the left regular representation of A, l(a) is a scalar multiple of an isometry of A. Thus the mapping a → l(a) is an injective homomorphism of N_1(E, q) into the orthogonal group O(A(E, q), Q).

Proof

Q(ab) = τ_n(a b b̄ ā) = τ_n(ā a b b̄) = ∆(a) τ_n(b b̄) = ∆(a)Q(b).

6 Classifying Clifford algebras

From now on, we shall only consider Clifford algebras for regular quadratic spaces.

6.1 Frobenius' theorem

We have seen that H is a division algebra.

Theorem 6.1.1 (Frobenius' theorem) R, C and H are the only finite-dimensional real division algebras.

Proof Suppose that A is a finite-dimensional real division algebra, of dimension greater than 1. We identify R with R.1. We consider the set E = {a ∈ A : a² ∈ R} and its subsets E⁺ = {a ∈ E : a² ≥ 0} and E⁻ = {a ∈ E : a² ≤ 0}.

First observe that E⁺ = R. For if a² = c ≥ 0, then (a − √c)(a + √c) = 0. Since A is a division algebra, either a − √c = 0 or a + √c = 0, so that a ∈ R. Conversely if a ∈ R then a² ≥ 0.

Next we show that A = span(E). Suppose that a ∈ A. Let m_a be its minimal polynomial; that is, the monic real polynomial p of minimal degree for which p(a) = 0. m_a must be irreducible, since if m_a = p₁p₂ then 0 = m_a(a) = p₁(a)p₂(a), so that p₁(a) = 0 or p₂(a) = 0. Consider m_a as a complex polynomial, and let z be a complex root. If z is real, then m_a(x) = x − z, so that a = z.1 ∈ R. If z = g + ih is not real, then z̄ is also a root of m_a, and (x − z)(x − z̄) = (x − g)² + h² is a real polynomial which divides m_a. Thus m_a(x) = (x − g)² + h². Consequently (a − g.1)² = −h².1 and a − g.1 ∈ E⁻. Thus a = (a − g.1) + g.1 ∈ span(E).

We now show that E⁻ is a linear subspace of A. Clearly if a ∈ E⁻


and r ∈ R then ra ∈ E⁻. Suppose that a and b are linearly independent elements of E⁻. We show that a, b and 1 are linearly independent. If not, and λa + µb = ν.1, with λ, µ and ν not all zero, then

λ²a² = (ν.1 − µb)² = (ν².1 + µ²b²) − 2νµb ∈ R.

Since ν².1 + µ²b² − λ²a² ∈ R, 2νµb = 0, so that either ν = 0 (so that a and b are linearly dependent) or µ = 0 (so that a ∈ R); in either case we have a contradiction.

Considering the minimal polynomials of a + b and a − b, we can write (a + b)² = r(a + b) + s.1 and (a − b)² = t(a − b) + u.1, with r, s, t, u ∈ R. Adding, 2a² + 2b² = (r + t)a + (r − t)b + (s + u).1. Since a, b and 1 are linearly independent, r + t = 0 and r − t = 0, so that r = 0, and (a + b)² = s.1, so that a + b ∈ E. If a + b were in E⁺ = R, then a, b and 1 would be linearly dependent, which is impossible. Thus a + b ∈ E⁻, E⁻ is a linear subspace of A, and A = R.1 ⊕ E⁻.

Let β(a, b) = −½(ab + ba), for a, b ∈ E⁻. Then β is a symmetric bilinear mapping of E⁻ × E⁻ into A. But β(a, b) = ½(a² + b² − (a + b)²), so that β takes values in R. Let q(a) = β(a, a) = −a². Then q is a positive definite quadratic form on E⁻, and the inclusion mapping j : E⁻ → A is a Clifford mapping. Since A = span(1, E⁻), A is a Clifford algebra for E⁻. Let n = dim E⁻, so that dim A = n + 1. If A is universal, then dim A = 2ⁿ = n + 1, so that n = 1 and A ≅ C. If A is not universal, then dim A = 2^{n−1} = n + 1, so that n = 3 and A ≅ H.

Corollary 6.1.1 (i) Suppose that d = p + m = 2k + 1 is odd. If p − m = 1 (mod 4) then A_{p,m} ≅ M_{2^k}(C). If p − m = 3 (mod 4) then either A_{p,m} ≅ M_{2^k}(R) ⊕ M_{2^k}(R) or A_{p,m} ≅ M_{2^{k−1}}(H) ⊕ M_{2^{k−1}}(H).
(ii) Suppose that d = p + m = 2k is even. Then A_{p,m} ≅ M_{2^k}(R) or M_{2^{k−1}}(H).

Proof Since the centre of M_n(D) is isomorphic to the centre of D, and since A_{p,m} has real dimension 2^{p+m}, these results follow from Wedderburn's theorem and from Theorems 5.5.1 and 5.5.2.


Although it is of interest to see how Wedderburn's theorem and Frobenius' theorem relate to the structure of Clifford algebras, we shall in fact classify the Clifford algebras without using them.

Exercise

1. Suppose that A is a unital algebra in which every non-zero element has a left inverse. Show that A is a division algebra.

6.2 Clifford algebras A(E, q) with dim E = 2

We have shown in Corollary 5.3.1 that a universal Clifford algebra is isomorphic to the graded tensor product of two-dimensional algebras. Graded tensor products are not easy to work with; in the next section we shall show that a universal Clifford algebra for a regular quadratic space of even dimension 2k is isomorphic to the ordinary tensor product of k four-dimensional algebras. Here we consider the case where k = 1.

The algebra A₂
We have seen in Section 6.3 that the mapping j : R² → H defined by j(x₁e₁ + x₂e₂) = x₁i + x₂j is a Clifford mapping, which extends to an algebra isomorphism of A₂ onto the division algebra H of quaternions. Then j(e_Ω) = i.j = k, and so j(A₂⁺) = span(1, k) ≅ C.

The algebra A_{1,1}
We define a Clifford mapping from R^{1,1} into M₂(R) by setting

j(x₁e₁ + x₂e₂) = x₁J + x₂Q = ( 0  −x₁+x₂ ; x₁+x₂  0 ).

Then j extends to an algebra homomorphism of A_{1,1} into M₂(R). Since A_{1,1} is simple and dim A_{1,1} = dim M₂(R) = 4, it follows that A_{1,1} ≅ M₂(R). Note that e_Ω = JQ = −U. Then

j(x₀1 + x₁e₁ + x₂e₂ + x_Ω e_Ω) = ( x₀−x_Ω  −x₁+x₂ ; x₁+x₂  x₀+x_Ω ).

Thus

A_{1,1}⁺ = { ( a  0 ; 0  d ) : a, d ∈ R } ≅ R ⊕ R.
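A quick numerical check of this construction (a sketch, assuming J = ( 0 −1 ; 1 0 ) and Q = ( 0 1 ; 1 0 ); any matrices with J² = −I, Q² = I and JQ = −QJ give an equivalent representation):

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])  # j(e1), with J @ J = -I
Q = np.array([[0.0, 1.0], [1.0, 0.0]])   # j(e2), with Q @ Q = +I

def j(x1, x2):
    """The Clifford mapping R^{1,1} -> M_2(R): x1 e1 + x2 e2 -> x1 J + x2 Q."""
    return x1 * J + x2 * Q

# Clifford identity x^2 = -q(x) 1, with q(x1 e1 + x2 e2) = x1^2 - x2^2
for x1, x2 in [(1.0, 0.0), (0.0, 1.0), (2.0, 3.0), (-1.5, 0.5)]:
    assert np.allclose(j(x1, x2) @ j(x1, x2), -(x1**2 - x2**2) * np.eye(2))

# e_Omega = e1 e2 maps to JQ, a diagonal matrix, so the even part is diagonal
assert np.allclose(J @ Q, np.diag([-1.0, 1.0]))
```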


The algebra A_{0,2}
Let

j(x₁e₁ + x₂e₂) = x₁Q + x₂U = ( x₂  x₁ ; x₁  −x₂ ).

Then j is a Clifford mapping of R^{0,2} into M₂(R), and so generates a Clifford algebra for R^{0,2}. Since A_{0,2} is simple and since dim A_{0,2} = dim M₂(R) = 4, it follows that A_{0,2} ≅ M₂(R). Note that j(e_Ω) = QU = J. Then

j(x₀1 + x₁e₁ + x₂e₂ + x_Ω e_Ω) = ( x₀+x₂  x₁−x_Ω ; x₁+x_Ω  x₀−x₂ ).

Thus

j(A_{0,2}⁺) = { ( x  −y ; y  x ) : x, y ∈ R } ≅ C.

Exercises

1. Let j : A₂ → H be the isomorphism defined above. Show that if x = x₀1 + x₁e₁ + x₂e₂ + x_Ω e_Ω ∈ A₂ then

j(x′) = x₀1 − x₁i − x₂j + x_Ω k,
j(x*) = x₀1 + x₁i + x₂j − x_Ω k,
j(x̄) = x₀1 − x₁i − x₂j − x_Ω k,

and that x x̄ = x̄ x = (x₀² + x₁² + x₂² + x_Ω²)1.

2. Let j : A_{1,1} → M₂(R) be the isomorphism defined above. Show that if x ∈ A_{1,1} and j(x) = ( a  b ; c  d ) ∈ M₂(R) then

j(x′) = ( a  −b ; −c  d ),  j(x*) = ( d  b ; c  a ),  j(x̄) = ( d  −b ; −c  a ),

and that

x x̄ = x̄ x = det ( a  b ; c  d ) . 1.

3. Let j : A_{0,2} → M₂(R) be the isomorphism defined above. Show that if x ∈ A_{0,2} and j(x) = ( a  b ; c  d ) ∈ M₂(R) then

j(x′) = ( d  −c ; −b  a ),  j(x*) = ( a  c ; b  d ),  j(x̄) = ( d  −b ; −c  a ),

and that

j(x x̄) = j(x̄ x) = det ( a  b ; c  d ) . I.

6.3 Clifford's theorem

In [Cli1], Clifford showed that the even Clifford algebra of a Euclidean space of odd dimension 2k + 1 can be generated by k commuting four-dimensional algebras, each isomorphic to H or M₂(R) (the so-called quaternion algebras). Since A⁺_{2k+1} is isomorphic to A_{2k}, this result also applies to Clifford algebras of Euclidean spaces of even dimension. It extends easily to Clifford algebras of regular quadratic spaces of even dimension, and can conveniently be expressed in terms of tensor products.

Theorem 6.3.1 Suppose that (E, q) is a regular quadratic space of even dimension 2k and that E is an orthogonal direct sum F ⊕ G, where F and G are regular subspaces of dimensions 2k − 2 and 2 respectively. Let ω_F be a volume element in the subalgebra A_F = A(F, q) of A = A(E, q), and let (g₁, g₂) be an orthogonal basis for G. Let c₁ = ω_F g₁ and c₂ = ω_F g₂, and let C be the subalgebra of A generated by {c₁, c₂}. Then C is four-dimensional, and A_F and C commute. Thus A ≅ A_F ⊗ C.
If G is hyperbolic, or if g₁² = g₂² = ω_F², then C ≅ M₂(R).
If g₁² = g₂² = −ω_F², then C ≅ H.

Proof Since dim F is even, c_i = g_i ω_F and c_i x = x c_i for x ∈ F, for i = 1, 2. Thus A_F and C commute, so that A ≅ A_F ⊗ C. Also c_i² = ω_F² g_i² = ±1, for i = 1, 2, and

c₁c₂ = ω_F g₁ ω_F g₂ = ω_F² g₁g₂ = −ω_F² g₂g₁ = −ω_F g₂ ω_F g₁ = −c₂c₁,

so that (c₁c₂)² = (g₁g₂)² = −g₁²g₂² = ±1. Thus C is four-dimensional.


If G is hyperbolic, then c₁² = −c₂² and (c₁c₂)² = 1, so that C ≅ M₂(R). If g₁² = g₂² = ω_F², then c₁² = c₂² = 1 and (c₁c₂)² = −1, so that C ≅ M₂(R). If g₁² = g₂² = −ω_F², then c₁² = c₂² = (c₁c₂)² = −1, so that C ≅ H.

Corollary 6.3.1 Suppose that d = p + m = 2k is even. If p − m = 2 or 4 (mod 8) then A_{p,m} ≅ M_{2^{k−1}}(H), and if p − m = 0 or 6 (mod 8) then A_{p,m} ≅ M_{2^k}(R).

Proof By the theorem, A_{p+1,m+1} ≅ A_{p,m} ⊗ M₂(R) ≅ M₂(A_{p,m}), and so A_{p+j,m+j} ≅ M_{2^j}(A_{p,m}). It is therefore sufficient to prove the result when m = 0 or p = 0. Suppose that m = 0. We prove the result by induction. Suppose that A_{8j} ≅ M_{2^{4j}}(R). Then

A_{8j+2} ≅ M_{2^{4j}}(R) ⊗ H ≅ M_{2^{4j}}(H),
A_{8j+4} ≅ M_{2^{4j}}(H) ⊗ M₂(R) ≅ M_{2^{4j+1}}(H),
A_{8j+6} ≅ M_{2^{4j+1}}(H) ⊗ H ≅ M_{2^{4j+1}}(R) ⊗ (H ⊗ H) ≅ M_{2^{4j+3}}(R),
A_{8j+8} ≅ M_{2^{4j+3}}(R) ⊗ M₂(R) ≅ M_{2^{4j+4}}(R).

A similar proof establishes the result when p = 0.

Proposition 6.3.1 If d = p + m = 2k + 1 and p − m = 1 (mod 4) then A_{p,m} ≅ M_{2^k}(C).

Proof Let F be a regular subspace of R^{p,m} of dimension 2k, and let A_F = A(F, q). If e_Ω is the volume element of A_{p,m} then C = span(1, e_Ω) is a subalgebra of A_{p,m} isomorphic to C, and the subalgebras A_F and C commute. Since A_F and C generate A_{p,m} and dim A_{p,m} = (dim A_F)(dim C), it follows that A_{p,m} ≅ A_F ⊗ C, the complexification of A_F. If A_F ≅ M_{2^k}(R) then clearly A_{p,m} ≅ M_{2^k}(C). If A_F ≅ M_{2^{k−1}}(H) then A_{p,m} ≅ M_{2^{k−1}}(R) ⊗ H ⊗ C ≅ M_{2^{k−1}}(R) ⊗ M₂(C) ≅ M_{2^k}(C).

We shall consider the case where p − m = 3 (mod 4) in the next section.

6.4 Classifying even Clifford algebras

Even Clifford algebras are isomorphic to universal Clifford algebras.

Theorem 6.4.1 A⁺_{p+1,m} ≅ A_{p,m} and A⁺_{p,m+1} ≅ A_{m,p}.


Proof Let (e₁, . . . , e_{p+m+1}) be the standard basis for R^{p+1,m} and let (f₁, . . . , f_{p+m}) be the standard basis for R^{p,m}. We consider R^{p+1,m} as a linear subspace of A_{p+1,m}. Since e_{i+1}e_{j+1} = (e₁e_{i+1})(e₁e_{j+1}), the set {e₁e_{j+1} : 1 ≤ j ≤ p + m} generates A⁺_{p+1,m}. Since (e₁e_{j+1})² = e²_{j+1} = −q(e_{j+1}) = −q(f_j), the mapping f_j → e₁e_{j+1} (1 ≤ j ≤ p + m) from R^{p,m} into A⁺_{p+1,m} extends by linearity to a Clifford mapping l₁ from R^{p,m} to A⁺_{p+1,m}; this then extends to an algebra homomorphism of A_{p,m} onto A⁺_{p+1,m}. Since A_{p,m} and A⁺_{p+1,m} both have dimension 2^{p+m}, this homomorphism is an isomorphism.

Similarly, let (e₁, . . . , e_{p+m+1}) be the standard basis for R^{p,m+1}, and let (g₁, . . . , g_{p+m}) be the standard basis for R^{m,p}. Let d = p + m. We consider R^{p,m+1} as a linear subspace of A_{p,m+1}. As above, the set {e_j e_{d+1} : 1 ≤ j ≤ d} generates A⁺_{p,m+1}. Since (e_j e_{d+1})² = −e_j² = q(e_j), the mapping g_j → e_{d+1−j} e_{d+1} (1 ≤ j ≤ d) from R^{m,p} to A⁺_{p,m+1} extends by linearity to a Clifford mapping r_{d+1} from R^{m,p} to A⁺_{p,m+1}; this then extends to an algebra homomorphism of A_{m,p} onto A⁺_{p,m+1}. Again, A_{m,p} and A⁺_{p,m+1} both have dimension 2^{p+m}, so that this homomorphism is an isomorphism.

Corollary 6.4.1 Suppose that d = 2k + 1 is odd.
(i) If p − m = 3 (mod 8) then A_{p,m} ≅ M_{2^{k−1}}(H) ⊕ M_{2^{k−1}}(H).
(ii) If p − m = 7 (mod 8) then A_{p,m} ≅ M_{2^k}(R) ⊕ M_{2^k}(R).

Proof For B_{p,m} ≅ A⁺_{p,m} ≅ A_{p−1,m} if p > 0, and B_{0,m} ≅ A⁺_{0,m} ≅ A_{m−1} if p = 0.

6.5 Cartan's periodicity law

It follows from Theorem 6.4.1 that A_{p,m+1} ≅ A_{m,p+1}, since each is isomorphic to A⁺_{p+1,m+1}. This result needs treating with caution, since in general A_{p,m+1} is not isomorphic as a super-algebra to A_{m,p+1}. For A⁺_{p,m+1} ≅ A_{m,p} and A⁺_{m,p+1} ≅ A_{p,m}, and in general the algebras A_{m,p} and A_{p,m} are not isomorphic. For example, if p − m = 3 (mod 4) then Z(A_{p,m}) ≅ R ⊕ R and Z(A_{m,p}) ≅ C.

We obtain further results when p or m is at least 4.

Proposition 6.5.1 There is a graded isomorphism of A_{p,m+4} onto A_{p+4,m}.


Proof Let us denote the standard basis of R^{p+4,m} by (e₁, . . . , e_d) and the standard basis of R^{p,m+4} by (g₁, . . . , g_d), where d = p + 4 + m. As usual, we consider R^{p+4,m} as a linear subspace of A_{p+4,m}, and R^{p,m+4} as a linear subspace of A_{p,m+4}. Let f = e_{p+1}e_{p+2}e_{p+3}e_{p+4} ∈ A_{p+4,m}, and let f_j = e_j f for 1 ≤ j ≤ d. Then f² = 1, and f_j² = e_j² = −1 for 1 ≤ j ≤ p, f_j² = −e_j² = 1 for p + 1 ≤ j ≤ p + 4, and f_j² = e_j² = 1 for p + 5 ≤ j ≤ d. Also f_j f_k = −f_k f_j for j ≠ k. Thus the mapping π from R^{p,m+4} to A_{p+4,m} defined by

π(Σ_{j=1}^d x_j g_j) = Σ_{j=1}^d x_j f_j

is a Clifford mapping. This extends to an isomorphism π of A_{p,m+4} onto A_{p+4,m}. Since f_j ∈ A⁻_{p+4,m} for 1 ≤ j ≤ d, π(A⁻_{p,m+4}) = A⁻_{p+4,m} and π(A⁺_{p,m+4}) = A⁺_{p+4,m}.

We have the following consequence.

Theorem 6.5.1 (Cartan's periodicity law) There are graded isomorphisms between the three graded algebras A_{p+8,m}, A_{p,m+8} and M₁₆(A_{p,m}).

Proof

For

A_{p+8,m} ≅ A_{p+4,m+4} ≅ A_{p,m+8} and A_{p+4,m+4} ≅ M₁₆(A_{p,m}),

and the isomorphisms respect the gradings.

It is therefore sufficient to tabulate the results for 0 ≤ p, m ≤ 7. We have the following table (where K^2 denotes K ⊕ K). Note that these results are obtained without appealing to Wedderburn's theorem. Further, Theorem 6.4.1 allows us to classify the corresponding even Clifford algebras.

Universal Clifford algebras A_{p,m}

 m\p |  0        1        2        3        4        5        6        7
-----+-------------------------------------------------------------------------
  0  |  R        C        H        H^2      M2(H)    M4(C)    M8(R)    M8(R^2)
  1  |  R^2      M2(R)    M2(C)    M2(H)    M2(H^2)  M4(H)    M8(C)    M16(R)
  2  |  M2(R)    M2(R^2)  M4(R)    M4(C)    M4(H)    M4(H^2)  M8(H)    M16(C)
  3  |  M2(C)    M4(R)    M4(R^2)  M8(R)    M8(C)    M8(H)    M8(H^2)  M16(H)
  4  |  M2(H)    M4(C)    M8(R)    M8(R^2)  M16(R)   M16(C)   M16(H)   M16(H^2)
  5  |  M2(H^2)  M4(H)    M8(C)    M16(R)   M16(R^2) M32(R)   M32(C)   M32(H)
  6  |  M4(H)    M4(H^2)  M8(H)    M16(C)   M32(R)   M32(R^2) M64(R)   M64(C)
  7  |  M8(C)    M8(H)    M8(H^2)  M16(H)   M32(C)   M64(R)   M64(R^2) M128(R)
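The whole table can also be generated mechanically: the entry for A_{p,m} is determined by (m − p) mod 8 together with the total real dimension 2^{p+m}. The following sketch (a hypothetical helper, not from the text, writing the non-simple double algebras with a trailing '^2' for K ⊕ K) reproduces the entries:

```python
def clifford_algebra(p, m):
    """Universal Clifford algebra A_{p,m}, as a string such as 'M2(H)' or 'H^2'.

    The division algebra and the number of simple summands depend only on
    (m - p) mod 8; the total real dimension is 2^(p + m).
    """
    d, t = p + m, (m - p) % 8
    K, summands = {0: ('R', 1), 1: ('R', 2), 2: ('R', 1), 3: ('C', 1),
                   4: ('H', 1), 5: ('H', 2), 6: ('H', 1), 7: ('C', 1)}[t]
    dim_K = {'R': 1, 'C': 2, 'H': 4}[K]
    n = int(round((2 ** d / (summands * dim_K)) ** 0.5))  # matrix size
    body = K if n == 1 else 'M%d(%s)' % (n, K)
    return body + '^2' if summands == 2 else body

# spot checks against the table
assert clifford_algebra(0, 0) == 'R'
assert clifford_algebra(2, 0) == 'H'            # A_2, the quaternions
assert clifford_algebra(1, 1) == 'M2(R)'
assert clifford_algebra(3, 0) == 'H^2'          # A_3 = H + H, not simple
assert clifford_algebra(0, 3) == 'M2(C)'        # the Pauli representation
assert clifford_algebra(7, 7) == 'M128(R)'
```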


6.6 Classifying complex Clifford algebras

Let us briefly describe what happens in the complex case. This is much less interesting than the real case. Suppose that (E, q) is a regular complex quadratic space, with standard orthogonal basis (e₁, . . . , e_d). We leave the reader to verify the following results.

The notions of Clifford mapping, Clifford algebra and universal Clifford algebra C(E, q) are defined as in the real case. The regular quadratic space (E, q) has a universal Clifford algebra (this is most easily seen from the constructions that we shall make), and for such algebras we can define the principal involution, reversal and conjugation (this conjugation, which is a linear anti-involution, is quite different from the conjugation of complex numbers, which is anti-linear, mapping z to z̄). We can therefore define the even Clifford algebra C(E, q)⁺. Then C(E, q)⁺ is isomorphic to C(F, q), where F = span(e₂, . . . , e_d).

If dim E = 1 then the mapping λe₁ → (iλ, −iλ) is a Clifford mapping of (E, q) into C², and the resulting Clifford algebra is isomorphic to the commutative complex algebra C², and is not simple.

If dim E = 2 then the mapping

λ₁e₁ + λ₂e₂ → λ₁J + iλ₂Q = ( 0  −λ₁−iλ₂ ; λ₁−iλ₂  0 )

is a Clifford mapping of (E, q) into M₂(C), and the resulting Clifford algebra is isomorphic to the algebra M₂(C), and is simple.

If dim E > 2 then we can write E = span(e₁, e₂) ⊕ G, where G = span(e₃, . . . , e_d). The mapping

λ₁e₁ + λ₂e₂ + g → λ₁I ⊗ J + iλ₂I ⊗ Q + iy ⊗ U = ( iy  −λ₁−iλ₂ ; λ₁−iλ₂  −iy ),

where y denotes the image of g in C(G, q), is a Clifford mapping of (E, q) into M₂(C(G, q)), and the resulting Clifford algebra is isomorphic to the algebra M₂(C(G, q)), and is simple if and only if C(G, q) is. As a consequence, we have the following theorem.

Theorem 6.6.1 Suppose that (E, q) is a regular complex quadratic space. If dim E = 2k is even then the universal Clifford algebra C(E, q) is isomorphic to M_{2^k}(C), and is simple.
If dim E = 2k + 1 is odd then C(E, q) is isomorphic to M_{2^k}(C) ⊕ M_{2^k}(C), and is not simple. The even Clifford algebra C(E, q)⁺ is isomorphic to C(F, q′), where (F, q′) is a regular complex quadratic space with dimension one less than the dimension of E.

7 Representing Clifford algebras

We have seen that the Clifford algebras and the even Clifford algebras of regular quadratic spaces are isomorphic either to a full matrix algebra M_k(D) or to a direct sum M_k(D) ⊕ M_k(D) of two full matrix algebras, where D = R, C or H. These algebras act naturally on a real or complex vector space, or on a left H-module, respectively. In this way, we obtain representations of the Clifford algebras. In this chapter, we shall construct some of these representations, using tensor products. These representations give useful information about the algebra, and its relation to its even subalgebra. We then construct some explicit representations of low-dimensional Clifford algebras. These are useful in practice, but it is probably not necessary to consider each of them in detail on a first reading.

7.1 Spinors

When A_{p,m} is simple, we have represented it as M_k(D), where D = R, C or H, so that we can consider A_{p,m} acting on the left D-module D^k. A left R-module is just a real vector space, and a left C-module is a complex vector space. Since H is a division algebra, the notions of linear independence, basis and dimension can be defined as easily for a left H-module as for a vector space, and a left H-module is frequently called a vector space over H. When A_{p,m} is not simple, then A_{p,m} ≅ M_k(D) ⊕ M_k(D), and so we can consider A_{p,m} acting on D^k ⊕ D^k. It follows from Theorem 2.7.3 that there are no essentially different representations.

If A_{p,m} is simple and π is an irreducible representation of A_{p,m} in L(W) then W is called a spinor space, and the elements of W are called


spinors. For example, if we consider the left regular representation of A_{p,m} then the minimal left ideals of A_{p,m} are spinor spaces; if we identify A_{p,m} with M_k(D) then the columns of M_k(D) are spinor spaces. It follows from Theorem 2.7.3 that any two irreducible representations are equivalent. We denote a spinor space for A_{p,m} by S_{p,m}, and consider it as a left A_{p,m}-module.

If A_{p,m} is simple and A_{p,m} ≅ M_k(R) then dim A_{p,m} = 2^d = dim M_k(R) = k², so that d = 2t is even, and k = 2^t. Thus the real dimension of a spinor space is 2^{d/2}.

If A_{p,m} is simple and A_{p,m} ≅ M_k(C) then dim_R A_{p,m} = 2^d = dim_R M_k(C) = 2k², so that d = 2t + 1 is odd, and k = 2^t. Thus the complex dimension of a spinor space is 2^{(d−1)/2}, and its real dimension is 2^{(d+1)/2}.

If A_{p,m} is simple and A_{p,m} ≅ M_k(H) then dim_R A_{p,m} = 2^d = dim_R M_k(H) = 4k², so that d = 2t is even, and k = 2^{t−1}. Thus the quaternionic dimension of a spinor space is 2^{(d−2)/2}, and its real dimension is 2^{(d+2)/2}.

When p − m = 3 (mod 4), so that A_{p,m} is not simple, we consider irreducible representations of the simple non-universal Clifford algebra B_{p,m}; the resulting left B_{p,m}-modules are called semi-spinor spaces. We then have the table on the following page.

Suppose that (E, q) is a Euclidean space. If W is a spinor space for (E, q), we can consider W as a minimal left ideal in A(E, q), and give W the inner product and norm inherited from (A(E, q), Q). If a ∈ N₁(E, q) then the mapping w → aw : W → W is an isometry of W.

Things are quite different if q is not positive definite. Suppose then that x ∈ E with q(x) = −1, and let p = ½(1 + x). Then p² = p, p̄ = 1 − p, and A(E, q) = A(E, q)p ⊕ A(E, q)(1 − p) is the vector space direct sum of two left ideals. If ap ∈ A(E, q)p then Q(ap) = τ_n(a p p̄ ā) = 0, and similarly Q vanishes on A(E, q)p̄. If W is a minimal left ideal of A(E, q) then W is contained in one of A(E, q)p or A(E, q)p̄, and so W is an isotropic subspace of (A(E, q), Q).

Spinor and [semi-spinor] spaces

 m\p |  0      1      2      3      4      5      6      7
-----+-------------------------------------------------------------
  0  |  R      C      H      [H]    H^2    C^4    R^8    [R^8]
  1  |  [R]    R^2    C^2    H^2    [H^2]  H^4    C^8    R^16
  2  |  R^2    [R^2]  R^4    C^4    H^4    [H^4]  H^8    C^16
  3  |  C^2    R^4    [R^4]  R^8    C^8    H^8    [H^8]  H^16
  4  |  H^2    C^4    R^8    [R^8]  R^16   C^16   H^16   [H^16]
  5  |  [H^2]  H^4    C^8    R^16   [R^16] R^32   C^32   H^32
  6  |  H^4    [H^4]  H^8    C^16   R^32   [R^32] R^64   C^64
  7  |  C^8    H^8    [H^8]  H^16   C^32   R^64   [R^64] R^128

7.2 The Clifford algebras A_{k,k}

Recall that A_{k,k} ≅ M_{2^k}(R), which we can identify with the k-fold tensor product ⊗^k M₂(R) of four-dimensional algebras. Since we shall consider different values of k, we shall denote the standard orthogonal basis of R^{k,k} by (e₁^{(k)}, . . . , e_k^{(k)}, f₁^{(k)}, . . . , f_k^{(k)}). Recall the definitions of the matrices I, J, Q and U made in Section 2.2. Let us set

j_k(e_i^{(k)}) = (⊗^{k−i} I) ⊗ J ⊗ (⊗^{i−1} Q) for 1 ≤ i ≤ k,
j_k(f₁^{(k)}) = ⊗^k Q,
j_k(f_i^{(k)}) = (⊗^{i−2} I) ⊗ U ⊗ (⊗^{k+1−i} Q) for 2 ≤ i ≤ k.

For example, when k = 4,

j₄(e₁^{(4)}) = I ⊗ I ⊗ I ⊗ J,
j₄(e₂^{(4)}) = I ⊗ I ⊗ J ⊗ Q,
j₄(e₃^{(4)}) = I ⊗ J ⊗ Q ⊗ Q,
j₄(e₄^{(4)}) = J ⊗ Q ⊗ Q ⊗ Q,
j₄(f₁^{(4)}) = Q ⊗ Q ⊗ Q ⊗ Q,
j₄(f₂^{(4)}) = U ⊗ Q ⊗ Q ⊗ Q,
j₄(f₃^{(4)}) = I ⊗ U ⊗ Q ⊗ Q,
j₄(f₄^{(4)}) = I ⊗ I ⊗ U ⊗ Q.
Then the conditions of Theorem 5.1.1 are satisfied, and so j_k extends to a Clifford mapping of R^{k,k} into M_{2^k}(R), and this in turn extends to an algebra isomorphism of A_{k,k} onto M_{2^k}(R).

If we write M_{2^k}(R) as the tensor product M_{2^{k−1}}(R) ⊗ M₂(R), and if x ∈ R^{k,k}, then the non-zero elements of j_k(x) occur only in the top right-hand and bottom left-hand quadrants. It therefore follows that if a ∈ A⁺_{k,k}, then the non-zero elements of j_k(a) occur only in the top left-hand and bottom right-hand quadrants, and if a ∈ A⁻_{k,k}, then the non-zero elements of j_k(a) occur only in the top right-hand and bottom left-hand quadrants. Schematically we have:

( [A⁺]  [A⁻] ; [A⁻]  [A⁺] ).

We denote the volume element of A_{k,k} by V_{k,k}; easy calculations show that j_k(V_{k,k}) = ±(⊗^{k−1} I) ⊗ U.
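The Clifford relations for j_k can be verified mechanically with Kronecker products. A sketch (assuming the concrete choices J = ( 0 −1 ; 1 0 ), Q = ( 0 1 ; 1 0 ), U = ( 1 0 ; 0 −1 ) for the matrices of Section 2.2; any choice with J² = −I, Q² = U² = I and pairwise anticommutativity gives the same relations):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
J = np.array([[0.0, -1.0], [1.0, 0.0]])
Q = np.array([[0.0, 1.0], [1.0, 0.0]])
U = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron(*ms):
    return reduce(np.kron, ms)

def basis_images(k):
    """The images j_k(e_i) and j_k(f_i) in M_{2^k}(R), following Section 7.2."""
    es = [kron(*([I2] * (k - i) + [J] + [Q] * (i - 1))) for i in range(1, k + 1)]
    fs = [kron(*([Q] * k))]
    fs += [kron(*([I2] * (i - 2) + [U] + [Q] * (k + 1 - i))) for i in range(2, k + 1)]
    return es, fs

k = 3
es, fs = basis_images(k)
gens, n = es + fs, 2 ** k

# e_i^2 = -1 and f_i^2 = +1, matching the signature of R^{k,k} ...
assert all(np.allclose(e @ e, -np.eye(n)) for e in es)
assert all(np.allclose(f @ f, np.eye(n)) for f in fs)

# ... and distinct generators anticommute, so j_k is a Clifford mapping
for a in range(len(gens)):
    for b in range(a + 1, len(gens)):
        assert np.allclose(gens[a] @ gens[b], -gens[b] @ gens[a])
```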


We now consider A⁺_{k+1,k+1}. Let

g_i^{(k+1)} = e₁^{(k+1)} e_{i+1}^{(k+1)} for 1 ≤ i ≤ k,
h_i^{(k+1)} = e₁^{(k+1)} f_i^{(k+1)} for 1 ≤ i ≤ k + 1.

The elements g_i^{(k+1)} and h_i^{(k+1)} are in A⁺_{k+1,k+1}. Then more easy calculations show that

j_{k+1}(g_i^{(k+1)}) = j_k(e_i^{(k)}) ⊗ U and j_{k+1}(h_i^{(k+1)}) = j_k(f_i^{(k)}) ⊗ U,

for 1 ≤ i ≤ k. For example, when k = 3,

j₄(g₁^{(4)}) = (I ⊗ I ⊗ J) ⊗ U,
j₄(g₂^{(4)}) = (I ⊗ J ⊗ Q) ⊗ U,
j₄(g₃^{(4)}) = (J ⊗ Q ⊗ Q) ⊗ U,
j₄(h₁^{(4)}) = (Q ⊗ Q ⊗ Q) ⊗ U,
j₄(h₂^{(4)}) = (U ⊗ Q ⊗ Q) ⊗ U,
j₄(h₃^{(4)}) = (I ⊗ U ⊗ Q) ⊗ U,
j₄(h₄^{(4)}) = (I ⊗ I ⊗ U) ⊗ U.

Let us define φ_k(e_i^{(k)}) = j_{k+1}(g_i^{(k+1)}) and φ_k(f_i^{(k)}) = j_{k+1}(h_i^{(k+1)}) for 1 ≤ i ≤ k, and extend by linearity. Then φ_k is a Clifford mapping of R^{k,k} into j_{k+1}(A⁺_{k+1,k+1}), and therefore extends to an algebra isomorphism of A_{k,k} into j_{k+1}(A⁺_{k+1,k+1}). Then

φ_k(a) = ( j_k(a)  0 ; 0  j_k(a′) ).

Thus the top left-hand quadrant of φ_k(a) is exactly the same as the matrix j_k(a). The algebra j_{k+1}(A⁺_{k+1,k+1}) is then generated by φ_k(A_{k,k}) and j_{k+1}(V_{k+1,k+1}) = ±(⊗^k I) ⊗ U.

Exercise

1. Prove directly by induction that j_k(R^{k,k}) generates M_{2^k}(R). Deduce another proof of the existence of the universal Clifford algebra A_{k,k}. Use this to prove the existence of a universal Clifford algebra A(E, q) for a regular quadratic space (E, q).


7.3 The algebras B_{k,k+1} and A_{k,k+1}

We can use the ideas of the preceding section to construct representations of the non-universal Clifford algebra B_{k,k+1} and the universal Clifford algebra A_{k,k+1}. Let (e₁^{(k)}, . . . , e_k^{(k)}, f₁^{(k)}, . . . , f_{k+1}^{(k)}) denote the standard orthogonal basis of R^{k,k+1}. Let us set

l_k(e_i^{(k)}) = (⊗^{k−i} I) ⊗ J ⊗ (⊗^{i−1} Q) for 1 ≤ i ≤ k,
l_k(f₁^{(k)}) = ⊗^k Q,
l_k(f_i^{(k)}) = (⊗^{i−2} I) ⊗ U ⊗ (⊗^{k+1−i} Q) for 2 ≤ i ≤ k + 1,

and extend by linearity to obtain a linear mapping l_k from R^{k,k+1} into M_{2^k}(R). For example, when k = 3,

l₃(e₁^{(3)}) = I ⊗ I ⊗ J,
l₃(e₂^{(3)}) = I ⊗ J ⊗ Q,
l₃(e₃^{(3)}) = J ⊗ Q ⊗ Q,
l₃(f₁^{(3)}) = Q ⊗ Q ⊗ Q,
l₃(f₂^{(3)}) = U ⊗ Q ⊗ Q,
l₃(f₃^{(3)}) = I ⊗ U ⊗ Q,
l₃(f₄^{(3)}) = I ⊗ I ⊗ U.

Then l_k is a Clifford mapping, and so extends to an algebra homomorphism of A_{k,k+1} into M_{2^k}(R). Since l_k(e_Ω) = −⊗^k I, l_k is not injective; since B_{k,k+1} is the quotient of A_{k,k+1} by the ideal generated by 1 + e_Ω, we obtain an isomorphic representation of B_{k,k+1} onto M_{2^k}(R).

We can easily adjust this to obtain a representation of A_{k,k+1}. Let us set

m_k(e_i^{(k)}) = l_k(e_i^{(k)}) ⊗ U for 1 ≤ i ≤ k,
m_k(f_i^{(k)}) = l_k(f_i^{(k)}) ⊗ U for 1 ≤ i ≤ k + 1,

and extend by linearity to obtain a linear mapping m_k from R^{k,k+1} into M_{2^{k+1}}(R).


For example, when k = 3,

m₃(e₁^{(3)}) = I ⊗ I ⊗ J ⊗ U,
m₃(e₂^{(3)}) = I ⊗ J ⊗ Q ⊗ U,
m₃(e₃^{(3)}) = J ⊗ Q ⊗ Q ⊗ U,
m₃(f₁^{(3)}) = Q ⊗ Q ⊗ Q ⊗ U,
m₃(f₂^{(3)}) = U ⊗ Q ⊗ Q ⊗ U,
m₃(f₃^{(3)}) = I ⊗ U ⊗ Q ⊗ U,
m₃(f₄^{(3)}) = I ⊗ I ⊗ U ⊗ U.

Then m_k is a Clifford mapping, and so extends to an algebra homomorphism of A_{k,k+1} into M_{2^{k+1}}(R). Since m_k(e_Ω) = −(⊗^k I) ⊗ U, m_k is injective. Bearing in mind that the last factor is U, it follows that

m_k(A_{k,k+1}) = { ( a  0 ; 0  b ) : a, b ∈ M_{2^k}(R) } ≅ M_{2^k}(R) ⊕ M_{2^k}(R).
7.4 The algebras A_{k+1,k} and A_{k+2,k}

We can also use the ideas of Section 7.2 to obtain a representation of A_{k+1,k} as M_{2^k}(C) and a representation of A_{k+2,k} as M_{2^k}(H). Let (e₀^{(k)}, . . . , e_k^{(k)}, f₁^{(k)}, . . . , f_k^{(k)}) denote the standard orthogonal basis of R^{k+1,k}. We can consider the mapping j_k of Section 7.2 as a Clifford mapping from R^{k,k} into M_{2^k}(C). We can extend j_k as a mapping from R^{k+1,k} into M_{2^k}(C) by setting j_k(e₀^{(k)}) = −i(⊗^{k−1} I) ⊗ U, and extending by linearity. Then j_k is a Clifford mapping of R^{k+1,k} into M_{2^k}(C) which extends to an algebra isomorphism of A_{k+1,k} onto M_{2^k}(C). In this representation, j_k(e_Ω) = iI.

Similarly, we can take a basis (d₁^{(k)}, d₂^{(k)}, e₁^{(k)}, . . . , e_k^{(k)}, f₁^{(k)}, . . . , f_k^{(k)}) for R^{k+2,k} and consider j_k as a Clifford mapping from R^{k,k} into M_{2^k}(H). We can extend j_k as a mapping from R^{k+2,k} into M_{2^k}(H) by setting j_k(d₁^{(k)}) = i(⊗^{k−1} I) ⊗ U and j_k(d₂^{(k)}) = j(⊗^{k−1} I) ⊗ U, and extending by linearity. Then j_k is a Clifford mapping of R^{k+2,k} into M_{2^k}(H) which extends to an algebra isomorphism of A_{k+2,k} onto M_{2^k}(H). In this representation, j_k(e_Ω) = kI.


7.5 Clifford algebras A(E, q) with dim E = 3

We now consider some explicit representations of low-dimensional Clifford algebras.

The algebra A₃
The algebra A₃ is not simple. Using Theorem 6.4.1,

A₃ ≅ A₃⁺ ⊕ A₃⁺ ≅ A₂ ⊕ A₂ ≅ H ⊕ H.

A Clifford mapping k from R³ to H is obtained by setting k(x₁e₁ + x₂e₂ + x₃e₃) = x₁i + x₂j + x₃k. This extends to an algebra isomorphism of B₃ onto H. We then obtain a Clifford mapping of R³ into H ⊕ H by setting j(x) = (k(x), −k(x)); this extends to an algebra isomorphism of A₃ onto H ⊕ H.

The algebra A_{2,1}
The Clifford algebra A_{2,1} is isomorphic to M₂(C). There are many representations that we can use; we consider two, each of which has interesting properties. First, let

j₁(x₁e₁ + x₂e₂ + x₃e₃) = ix₁Q − x₂J + x₃U = ( x₃  ix₁+x₂ ; ix₁−x₂  −x₃ ).

This is a Clifford mapping of R^{2,1} into M₂(C), which extends to an algebra homomorphism of A_{2,1} into M₂(C). Since A_{2,1} is simple, and both algebras have dimension 8, it is an isomorphism. Since j₁(e₁e₂) = −iU, j₁(e₁e₃) = iJ and j₁(e₂e₃) = −Q,

j₁(A⁺_{2,1}) = { ( z  w ; w̄  z̄ ) : z, w ∈ C }.

In this representation, it is not apparent that A⁺_{2,1} is isomorphic to M₂(R).

The linear subspace span(e₁, e₂) of R^{2,1} is isomorphic to R². Thus

ĩ(A₂) = span(I, iQ, −J, −iU) = { ( z  w ; −w̄  z̄ ) : z, w ∈ C } = H,

where H is the real subalgebra of M₂(C) which was used to define the quaternions in Section 2.3.


The linear subspace span(e₂, e₃) of R^{2,1} is isomorphic to R^{1,1}. Thus ĩ(A_{1,1}) = span(I, −J, U, Q) = M₂(R).

Secondly, let

j₂(x₁e₁ + x₂e₂ + x₃e₃) = ix₁Q + ix₂U − ix₃J = i ( x₂  x₁+x₃ ; x₁−x₃  −x₂ ).

Then j₂ is a Clifford mapping of R^{2,1} into M₂(C), which extends to an algebra homomorphism j₂ : A_{2,1} → M₂(C); as before, this is an isomorphism. Note that j₂(R^{2,1}) = iM₂⁽⁰⁾(R), where

M₂⁽⁰⁾(R) = {T ∈ M₂(R) : τ(T) = 0}.

In this representation, j₂(e₁e₂) = −J, j₂(e₁e₃) = U and j₂(e₂e₃) = −Q, so that j₂(A⁺_{2,1}) = span(I, −J, U, −Q) = M₂(R). Once again,

ĩ(A₂) = { ( z  w ; −w̄  z̄ ) : z, w ∈ C } = H,

while

ĩ(A_{1,1}) = { ( z  w ; w̄  z̄ ) : z, w ∈ C }.

Again, in this representation, it is not apparent that A_{1,1} is isomorphic to M₂(R).

The algebra A_{1,2}
The algebra A_{1,2} is not simple, and A_{1,2} ≅ A_{0,2} ⊕ A_{0,2} ≅ M₂(R) ⊕ M₂(R). A Clifford mapping k : R^{1,2} → M₂(R) is given by setting

k(x₁e₁ + x₂e₂ + x₃e₃) = x₁J + x₂Q + x₃U = ( x₃  −x₁+x₂ ; x₁+x₂  −x₃ ),

and this extends to an algebra homomorphism of A_{1,2} onto M₂(R). Then k(e_Ω) = −I, and we have represented B_{1,2} as M₂(R). Note that k(R^{1,2}) = M₂⁽⁰⁾(R) = {T ∈ M₂(R) : τ(T) = 0}.


If we set j(x) = (k(x), −k(x)), we obtain a Clifford mapping of R1,2 into M2(R) ⊕ M2(R), which extends to an algebra isomorphism of A1,2 onto M2(R) ⊕ M2(R).

The algebra A0,3

The Clifford algebra A0,3 is isomorphic to M2(C). We use the Pauli spin matrices to obtain an explicit representation. Let

j((x1, x2, x3)) = x1σ1 + x2σ2 + x3σ3 = x1Q + ix2J + x3U = ( x3, x1 − ix2 ; x1 + ix2, −x3 ).

This is a Clifford mapping of R0,3 into M2(C), which extends to an isomorphism of A0,3 onto M2(C).

The matrices σ1, σ2 and σ3 play a symmetric role; yet σ1 and σ3 appear to be real, and σ2 to be complex. Why is this? It follows from Proposition 5.4.1 that the centre Z(A0,3) = span(1, eΩ) ≅ C, and in the representation above, eΩ = Q(iJ)U = iI. Thus A0,3 can be considered as a four-dimensional complex algebra, where multiplication by i is multiplication by eΩ.

Note that j(R0,3) is the three-dimensional real subspace of M2(C) consisting of all Hermitian matrices with zero trace:

j(R0,3) = {T ∈ M2(C) : T = T* and τ(T) = 0}.

The elements f1 = e3e2, f2 = e1e3 and f3 = e2e1 generate A+0,3, and j(f1) = τ1, j(f2) = τ2 and j(f3) = τ3, where τ1, τ2 and τ3 are the associate Pauli matrices. If a = xI + vτ1 + uτ2 + yτ3 ∈ j(A+0,3) then

a = ( x + iy, u + iv ; −u + iv, x − iy ) = ( z, w ; −w̄, z̄ ),

where z = x + iy and w = u + iv. Thus

j(A+0,3) = { ( z, w ; −w̄, z̄ ) : z, w ∈ C } = H.
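The properties of the mapping j claimed above can be confirmed numerically. This is my own check, using the standard Pauli matrices: j(x) is Hermitian and traceless, j(x)² = ‖x‖²I, and σ1σ2σ3 = iI, matching eΩ = iI.

```python
import numpy as np

# The Pauli spin matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

x = np.array([0.3, -1.2, 0.5])
jx = x[0] * s1 + x[1] * s2 + x[2] * s3

assert np.allclose(jx, jx.conj().T)              # Hermitian
assert abs(np.trace(jx)) < 1e-12                 # zero trace
assert np.allclose(jx @ jx, (x @ x) * np.eye(2)) # squares to |x|^2 I
assert np.allclose(s1 @ s2 @ s3, 1j * np.eye(2)) # e_Omega represented by iI
```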

Exercises

1. Suppose that

   (S, T) = ( ( a, b ; c, d ), ( α, β ; γ, δ ) ) ∈ M2(R) × M2(R) ≅ A1,2.

   Calculate (S, T)⁰, (S, T)* and \overline{(S, T)}.
2. Let i : R1,1 → R1,2 be defined by i((x, y)) = (x, y, 0). Determine j(ı̃(A1,1)).
3. Let i : R0,2 → R1,2 be defined by i((x, y)) = (0, x, y). Determine j(ı̃(A0,2)).
4. Let i : R0,2 → R0,3 be defined by i((x, y)) = (x, 0, y). Show that j(ı̃(A0,2)) = M2(R).
5. Suppose that

   T = ( a, b ; c, d ) ∈ M2(C) ≅ A2,1.

   Calculate T⁰, T*, T̄ and T̄T.

7.6 Clifford algebras A(E, q) with dim E = 4

The algebras A4 and A0,4

By Proposition 6.5.1 the algebras A4 and A0,4 are isomorphic as graded algebras, and A4 ≅ A0,4 ≅ M2(H). Also A+4 ≅ A+0,4 ≅ A3 ≅ H ⊕ H. We construct three explicit representations of A4. First, define j1 : R4 → M2(H) by setting

j1(e1) = i ⊗ Q = ( 0, i ; i, 0 ),    j1(e2) = j ⊗ Q = ( 0, j ; j, 0 ),
j1(e3) = k ⊗ Q = ( 0, k ; k, 0 ),    j1(e4) = −I ⊗ J = ( 0, 1 ; −1, 0 ),

and extending by linearity. Then j1 is a Clifford mapping, which extends to an algebra isomorphism j1 : A4 → M2(H). j1(A+4) is generated by

j1(e2e3) = ( i, 0 ; 0, i ),    j1(e3e1) = ( j, 0 ; 0, j ),
j1(e1e2) = ( k, 0 ; 0, k ),    j1(eΩ) = ( 1, 0 ; 0, −1 ),

so that in this representation

j1(A+4) = ( H, 0 ; 0, H ).

Since eΩ = I ⊗ U, it follows from Proposition 6.5.1 that we obtain a Clifford mapping π1 from R0,4 to M2(H) for which

π1(e1) = i ⊗ J = ( 0, −i ; i, 0 ),    π1(e2) = j ⊗ J = ( 0, −j ; j, 0 ),
π1(e3) = k ⊗ J = ( 0, −k ; k, 0 ),    π1(e4) = I ⊗ Q = ( 0, 1 ; 1, 0 ).

This then extends to an algebra isomorphism of A0,4 onto M2(H).

Secondly, we can combine the representation j1 with the representation of H as a subalgebra of M2(C), to obtain a representation j2 of A4 in M4(C), where

j2(e1) = τ1 ⊗ Q = iQ ⊗ Q,    j2(e2) = τ2 ⊗ Q = −J ⊗ Q,
j2(e3) = τ3 ⊗ Q = iU ⊗ Q,    j2(e4) = −I ⊗ J.

We shall see in the next section that this representation extends to a representation of A5. Now eΩ = −I ⊗ U, so that using Proposition 6.5.1 we obtain a representation π2 : R0,4 → M4(C) for which

π2(e1) = τ1 ⊗ J = iQ ⊗ J,    π2(e2) = τ2 ⊗ J = −J ⊗ J,
π2(e3) = τ3 ⊗ J = iU ⊗ J,    π2(e4) = −I ⊗ Q.

We obtain a third Clifford mapping j3 : R4 → M4(C) by exchanging U and Q, setting

j3(e1) = τ1 ⊗ U = iQ ⊗ U,    j3(e2) = τ2 ⊗ U = −J ⊗ U,
j3(e3) = τ3 ⊗ U = iU ⊗ U,    j3(e4) = −iI ⊗ Q,

and extending by linearity. We shall relate this to a representation of A+5 in the next section.

The algebra A3,1

The Clifford algebras A3,1 and A1,3 are important for relativistic physics: we shall illustrate this in Chapter 9. The algebra A3,1 is isomorphic to M2(H), and A+3,1 ≅ A2,1 ≅ M2(C). Let us construct an explicit representation. We obtain a Clifford mapping j : R3,1 → M2(H) by setting

j(e1) = I ⊗ J,    j(e2) = i ⊗ U,    j(e3) = j ⊗ U,    j(e4) = I ⊗ Q,

and extending by linearity. It then follows that

j(A+3,1) = { ( a1 + a2k, b1i + b2j ; c1i + c2j, d1 + d2k ) : ai, bi, ci, di ∈ R for i = 1, 2 }.
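The Clifford relations for the representation j2 can be checked by machine. This is my own verification, and I take A ⊗ B to be np.kron(A, B); the opposite kron order gives an equivalent representation satisfying the same relations, so the check is insensitive to that choice.

```python
import numpy as np

I = np.eye(2)
J = np.array([[0, -1], [1, 0]], dtype=complex)
Q = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.array([[1, 0], [0, -1]], dtype=complex)

# j2(e1), j2(e2), j2(e3), j2(e4) as 4x4 complex matrices.
g = [np.kron(1j * Q, Q), np.kron(-J, Q), np.kron(1j * U, Q), np.kron(-I, J)]

I4 = np.eye(4)
for a in range(4):
    assert np.allclose(g[a] @ g[a], -I4)                  # each generator squares to -1
    for b in range(a + 1, 4):
        assert np.allclose(g[a] @ g[b] + g[b] @ g[a], 0 * I4)  # distinct generators anticommute
```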

This clearly does not reflect the fact that A+3,1 is isomorphic to M2(C). Since A3,2 ≅ M4(C), it is natural to represent A3,1 as a subalgebra of M4(C); for this we use the Dirac matrices. These are not only of practical use, but also of historical interest, since they feature in Dirac's work on the Dirac equation, which we shall discuss in Section 9.4. They are defined in terms of the Pauli spin matrices. Let

γ0 = σ0 ⊗ Q = I ⊗ Q = ( 0, I ; I, 0 ),
γ1 = σ1 ⊗ J = Q ⊗ J = ( 0, −Q ; Q, 0 ),
γ2 = σ2 ⊗ J = iJ ⊗ J = ( 0, −iJ ; iJ, 0 ),
γ3 = σ3 ⊗ J = U ⊗ J = ( 0, −U ; U, 0 ).

In order to conform to our sign convention, we write γ4 for γ0. The mapping γ defined by

γ(x1e1 + ··· + x4e4) = x1γ1 + ··· + x4γ4

is then a Clifford mapping of R3,1 into M4(C), which extends to an isomorphism γ of A3,1 into M4(C). The image Γ3,1 = γ(A3,1) is called the Dirac algebra. Note that

γ(R3,1) = { ( 0, 0, a, b ; 0, 0, b̄, d ; d, −b, 0, 0 ; −b̄, a, 0, 0 ) : a, d ∈ R, b ∈ C }
         = { ( 0, S ; adj S, 0 ) : S = S* },

where adj S is the adjugate of S.

Let us now consider γ(A+3,1). Now

γ(e1e3) = −J ⊗ I,     γ(e2e4) = −iJ ⊗ U,    γ(e1e4) = −Q ⊗ U,
γ(e2e3) = −iQ ⊗ I,    γ(e3e4) = −U ⊗ U,     γ(e1e2) = −iU ⊗ I,

and γ(eΩ) = −iI ⊗ U.

Suppose that

x = λ0 I + Σ_{1≤i<j≤4} λij eiej + λΩ eΩ ∈ A+3,1.

Thus

j(Spin+1,1) = { ( a, 0 ; 0, a⁻¹ ) : a ∈ R* } ≅ (R*, ×),

and

j(Spin1,1) = { ( a, 0 ; 0, d ) : ad = ±1 } ≅ (R*, ×) × D2.

8.6 Spin groups, for d = 3

Spin3 ≅ S³ ≅ Spin0,3 ≅ H1 = {h ∈ H : ∆(h) = 1} ≅ SU2

We have seen that B3 ≅ H and that the corresponding Clifford mapping k maps R3 onto the space Pu(H) of pure quaternions. If k : B3 → H is the corresponding algebra isomorphism, then ∆(k(b)) = k(b*b), and so k(Spin3(B)) = {h ∈ H : ∆(h) = 1} = H1. If b ∈ Spin3(B) and x ∈ R3


then k(bxb⁻¹) = k(b)k(x)(k(b))⁻¹, so that the action of Spin3 on R3 is the same as the action of H1 on Pu(H) described in Theorem 4.9.1.

For an alternative approach, let us consider A0,3. We have seen in Section 7.5 that

A+0,3 ≅ { ( z, w ; −w̄, z̄ ) : z, w ∈ C },

and that ∆(a) = |z|² + |w|². Thus there is an isomorphism j of Spin0,3 onto

{ ( z, w ; −w̄, z̄ ) : |z|² + |w|² = 1 } = SU2.

Recall that j(R0,3) = {T ∈ M2(C) : T = T* and τ(T) = 0}. If U ∈ SU2 and T ∈ j(R0,3), then α(U)(T) = UTU*; since (UTU*)* = UT*U* = UTU* and τ(UTU*) = τ(U*UT) = τ(T) = 0, we have direct confirmation that j(R0,3) is invariant under conjugation by SU(2).

Spin+2,1 ≅ Spin+1,2 ≅ SL2(R)

In Section 7.5 we saw that A2,1 ≅ M2(C), that A+2,1 ≅ A1,1 ≅ M2(R), and that, under this isomorphism, if a ∈ A+2,1 then ∆(a) = det(a). Thus Spin+2,1 ≅ Spin+1,2 ≅ SL2(R).

Let us consider Spin+1,2(B). Let k : A1,2 → M2(R) be the homomorphism defined in Section 7.5. Note that

k(e1e2) = −U,    k(e1e3) = Q,    k(e2e3) = J,

so that if x = a·1 + b e1e2 + c e1e3 + d e2e3 ∈ A+1,2 then

k(x) = ( a − b, c − d ; c + d, a + b ),

and det k(x) = (a² + d²) − (b² + c²) = ∆(x). Thus k(Spin+1,2(B)) = SL2(R), and the action of Spin+1,2(B) on R3 corresponds to the conjugation action of SL2(R) on the three-dimensional subspace M2⁽⁰⁾(R) of M2(R).

Exercise


1. If

   T = ( a, b ; c, d ) ∈ SL2(R) and T e3 T⁻¹ = y1e1 + y2e2 + y3e3,

   calculate y1, y2 and y3.
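The SU2 conjugation action described in this section is easy to illustrate numerically. This is my own sketch, not from the book: an element of SU2 in the form ( z, w ; −w̄, z̄ ) with |z|² + |w|² = 1 conjugates the traceless Hermitian matrices j(R0,3) into themselves and preserves det, so it acts as a rotation of R³.

```python
import numpy as np

# An SU2 element in the form of the text: |z|^2 + |w|^2 = 1.
z, w = 0.6 + 0.0j, 0.8j
assert np.isclose(abs(z)**2 + abs(w)**2, 1.0)
Uu = np.array([[z, w], [-np.conj(w), np.conj(z)]])
assert np.allclose(Uu.conj().T @ Uu, np.eye(2))   # unitary
assert np.isclose(np.linalg.det(Uu), 1)           # determinant 1

# A point of j(R_{0,3}): traceless Hermitian.
a = np.array([0.3, -1.1, 0.7])
H = np.array([[a[2], a[0] - 1j * a[1]], [a[0] + 1j * a[1], -a[2]]])

H2 = Uu @ H @ Uu.conj().T
assert np.allclose(H2, H2.conj().T)               # still Hermitian
assert abs(np.trace(H2)) < 1e-12                  # still traceless
assert np.isclose(np.linalg.det(H2), np.linalg.det(H))  # det = -|a|^2 preserved
```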

8.7 Spin groups, for d = 4

Spin4 ≅ Spin0,4 ≅ H1 × H1

We consider the representation j1 : A0,4 → M2(H) defined in Section 7.6. From the results there,

j1(Spin0,4) = ( H1, 0 ; 0, H1 ).

If

j1(y) = ( k1, 0 ; 0, k2 ) ∈ j1(Spin4) and j1(x) = ( 0, −l̄ ; l, 0 ) ∈ j1(R4),

then

j1(yxy⁻¹) = ( 0, −k1 l̄ k̄2 ; k2 l k̄1, 0 ).

Thus we can now recognize that this action is the same as the one described in Theorem 4.9.2.

Spin+3,1 ≅ Spin+1,3 ≅ SL2(C)

Let us consider the representation γ of A3,1 as a subalgebra of M4(C) defined in Section 7.6. In this representation, γ(A+3,1) is isomorphic to M2(C), and if x ∈ A+3,1 then ∆(x) = 1 if and only if det γ̃(x) = 1. Thus Spin+3,1 ≅ SL2(C). Further, we can identify R3,1 with the four-dimensional real subspace {T ∈ M2(C) : T = T*} of M2(C) consisting of Hermitian matrices. Then α(x)(y) = γ̃(x) y (γ̃(x))⁻¹. This representation goes back to Dirac.

Spin+2,2 ≅ SL2(R) × SL2(R)

Let us continue with the notation of Section 7.6. It follows from the results there that if a ∈ A+2,2 and φ(a) = (S, T) then ∆(a) = 1 if and only if det S = det T = 1. Thus Spin+2,2 ≅ SL2(R) × SL2(R). Further, if a ∈ Spin+2,2, with φ(a) = (S, T), and y ∈ R2,2 then

j(α(a)y) = ( S, 0 ; 0, T ) ( 0, u(y) ; v(y), 0 ) ( S⁻¹, 0 ; 0, T⁻¹ )
         = ( 0, S u(y) T⁻¹ ; T v(y) S⁻¹, 0 ).

The mapping u is a linear isomorphism of R2,2 onto M2(R), and so the action of Spin+2,2 is given by the equation u(α(a)y) = S u(y) T⁻¹.
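The block computation behind this action can be verified directly. This is my own quick check: conjugating an anti-diagonal block matrix by diag(S, T) sends the upper block u to S u T⁻¹ and the lower block v to T v S⁻¹.

```python
import numpy as np

rng = np.random.default_rng(2)
S, T = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
u, v = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

Z = np.zeros((2, 2))
D = np.block([[S, Z], [Z, T]])   # diag(S, T)
M = np.block([[Z, u], [v, Z]])   # anti-diagonal, like j(y)

C = D @ M @ np.linalg.inv(D)
assert np.allclose(C[:2, 2:], S @ u @ np.linalg.inv(T))  # upper block: S u T^{-1}
assert np.allclose(C[2:, :2], T @ v @ np.linalg.inv(S))  # lower block: T v S^{-1}
```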

8.8 The group Spin5

As the dimension d becomes larger, the spin groups become more complicated. For d = 5 we shall only consider the Euclidean case; we shall describe Spin5, without describing its action on R5. The group Spin5 is a subgroup of A+5, which is isomorphic to M2(H). We shall describe Spin5 as a subgroup of M2(H); to do this, we need to consider M2(H) more carefully. We define an anti-automorphism, conjugation:

if T = ( a, b ; c, d ) ∈ M2(H) then T̄ = ( ā, c̄ ; b̄, d̄ ).

The hyper-unitary group or symplectic group HU(2) is then

HU(2) = {T ∈ M2(H) : T̄T = I}.

(The term 'symplectic' was introduced by Hermann Weyl for groups which preserve an H-Hermitian form.) We shall show that Spin5 is isomorphic to HU(2). Let l1 be the isomorphism of A4 onto A+5 defined in Theorem 6.4.1, let j1 : A4 → M2(H) be the isomorphism defined in Section 7.6, and let φ = j1 ∘ l1⁻¹. Thus

φ(e1e2) = iQ = ( 0, i ; i, 0 ),    φ(e1e3) = jQ = ( 0, j ; j, 0 ),
φ(e1e4) = kQ = ( 0, k ; k, 0 ),    φ(e1e5) = J = ( 0, −1 ; 1, 0 ).

Now \overline{e1ei} = −e1ei and \overline{φ(e1ei)} = −φ(e1ei) for 2 ≤ i ≤ 5, so that φ(\overline{e1ei}) = \overline{φ(e1ei)}.


If 2 ≤ i < j ≤ 5 then

φ(\overline{eiej}) = φ(e1ei e1ej) = φ(e1ei) φ(e1ej) = \overline{φ(e1ei)} · \overline{φ(e1ej)}
                 = \overline{φ(e1ej) φ(e1ei)} = \overline{φ(e1ej e1ei)} = \overline{φ(−ejei)} = \overline{φ(eiej)}.

As a result, if 1 ≤ i < j < l < m ≤ 5 then

\overline{φ(eiejelem)} = \overline{φ(eiej) φ(elem)} = \overline{φ(elem)} · \overline{φ(eiej)} = φ(\overline{elem}) φ(\overline{eiej})
                     = φ(\overline{elem} · \overline{eiej}) = φ(elem eiej) = φ(eiejelem) = φ(\overline{eiejelem}).

Thus if a ∈ A+5 then φ(ā) = \overline{φ(a)}. Thus a ∈ Spin5 = N1+ if and only if φ(a) ∈ HU(2).

Exercise

1. If h = (h1, h2) and k = (k1, k2) are in H², let ⟨h, k⟩ = h̄1k1 + h̄2k2: the mapping (h, k) → ⟨h, k⟩ is the standard H-Hermitian form on H². Show that T ∈ HU(2) if and only if ⟨T(h), T(k)⟩ = ⟨h, k⟩ for all h, k ∈ H².

8.9 Examples of spin groups for d ≥ 6

When d ≥ 6 it is no longer the case that Spin+p,m = N1+(Rp,m), so that we need to work harder. We shall only consider Spin6 in any detail.

Theorem 8.9.1    Spin6 ≅ SU4.

Proof  Let l1 be the isomorphism of A5 onto A+6 defined in Theorem 6.4.1, let j2 : A5 → M4(C) be the isomorphism defined in Section 7.7, and let φ = j2 ∘ l1⁻¹. Thus

φ(e1e2) = J ⊗ I,    φ(e1e3) = iQ ⊗ I,    φ(e1e4) = iU ⊗ U,
φ(e1e5) = U ⊗ J,    φ(e1e6) = iU ⊗ Q,    φ(eΩ) = i.

We consider the usual conjugation on M4(C): (T*)ij = T̄ji. Now \overline{e1ei} = (e1ei)* = −e1ei, and (φ(e1ei))* = −φ(e1ei), for 2 ≤ i ≤ 6. Arguing as in the previous section, it follows that φ is a *-isomorphism of A+6 onto M4(C). Thus

φ(N+) = {A ∈ M4(C) : A*A = λI with λ ∈ R*},


and φ(N1+) = U4.

First we use a connectedness argument to show that Spin6 ⊆ SU(4). If x and y are unit vectors in R6 then (xy)² = −1, so that det φ(xy) = ±1, and so det φ(Spin6) ⊆ {1, −1}. The set of unit vectors in R6 is connected, and so the set

P2 = {a ∈ A6 : a = xy with x, y unit vectors in R6}

is connected in A6. Thus the function xy → det φ(xy) is constant on P2. But if x is a unit vector then det φ(x²) = det(−I) = 1. Thus φ(P2) ⊆ SU(4). Since Spin6 is generated by P2, it follows that φ(Spin6) ⊆ SU4.

Since φ(Spin6) ⊆ SU4 and φ(N1+) = U4, they are certainly not equal. In order to go further, we need to understand the relation between N1+ and Spin6. We consider e^{teΩ} for t ∈ R. Since eΩ² = −1, the mapping e^{it} → e^{teΩ} is an isomorphism of T onto a subgroup G of N1+. Note that φ(e^{teΩ}) = e^{it}I, so that det φ(e^{teΩ}) = e^{4it}. If x ∈ R6 then e^{teΩ} x e^{−teΩ} = e^{2teΩ} x, so that e^{teΩ} ∈ Spin6 if and only if t = jπ/2 for some j ∈ Z. Further,

e^{teΩ} = 1      if t = 2kπ,
e^{teΩ} = eΩ     if t = 2kπ + π/2,
e^{teΩ} = −1     if t = (2k + 1)π,
e^{teΩ} = −eΩ    if t = 2kπ + 3π/2.
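The exponential identity being used here, e^{tW} = cos t + sin t·W for any element W with W² = −1, can be illustrated with a matrix model of eΩ. This is my own sketch, using the 2×2 rotation generator as the simplest such W.

```python
import numpy as np
from scipy.linalg import expm

# W plays the role of e_Omega: any matrix with W @ W = -I.
W = np.array([[0., -1.], [1., 0.]])
assert np.allclose(W @ W, -np.eye(2))

for t in (0.3, np.pi / 2, 2.0):
    # exp(t W) = cos(t) I + sin(t) W, since W^2 = -I.
    assert np.allclose(expm(t * W), np.cos(t) * np.eye(2) + np.sin(t) * W)
```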

Lemma 8.9.1  If g ∈ N1 and x ∈ R6 then gxg⁻¹ = e^{teΩ} u for some e^{teΩ} ∈ G and u ∈ R6.

Proof  Arguing as in Proposition 8.2.1, we can write gxg⁻¹ = λ + eΩρ, where λ, ρ ∈ R6. Then

(gxg⁻¹)² = gx²g⁻¹ = (λ + eΩρ)² = λ² + ρ² + eΩ(ρλ − λρ) ∈ R,

so that λρ = ρλ. If λ ≠ 0, we can write ρ = αλ + σ, where σ ⊥ λ. Then λσ + σλ = 0 and λσ − σλ = λρ − ρλ = 0, so that λσ = 0 and σ = 0. Thus λ and ρ are linearly dependent, and we can write λ = cos t·u and ρ = sin t·u, for some 0 ≤ t < 2π and u ∈ R6. Then gxg⁻¹ = e^{teΩ} u = e^{teΩ/2} u e^{−teΩ/2}.

Lemma 8.9.2  N1+ = G·Spin6.

Proof  Suppose that g ∈ N1+. Let x be a non-zero element of R6. By the preceding lemma, there exist e^{teΩ} ∈ G and u ∈ R6 such that gxg⁻¹ = e^{teΩ/2} u e^{−teΩ/2}. Let f = e^{−teΩ/2} g; then f x f⁻¹ = u ∈ R6. We shall show that f ∈ Spin6, so that g = e^{teΩ/2} f ∈ G·Spin6.

Suppose that y is linearly independent of x. By the preceding lemma,

f y f⁻¹ = e^{seΩ} v e^{−seΩ} = cos 2s·v + sin 2s·eΩv,

for some s ∈ R and v ∈ R6. We shall show that sin 2s = 0, so that f y f⁻¹ ∈ R6, and f ∈ Spin6. Suppose, if possible, that sin 2s ≠ 0. Now

f(x + y)f⁻¹ = e^{reΩ} w e^{−reΩ} = cos 2r·w + sin 2r·eΩw,

for some r ∈ R and w ∈ R6. Since f(x + y)f⁻¹ = u + f y f⁻¹, it follows that cos 2r·w = u + cos 2s·v and sin 2r·w = sin 2s·v. Thus v and w are linearly dependent. Since u = cos 2r·w − cos 2s·v, u and v are also linearly dependent. Thus f y f⁻¹ = α(cos 2s·u + sin 2s·eΩu), for some α ∈ R*. Let y1 = y − α cos 2s·x. Then f y1 f⁻¹ = α sin 2s·eΩu ∈ eΩR6.

Suppose now that z is an element of R6 linearly independent of x and y. If t = f z f⁻¹ ∉ R6, then, using the same argument as before, t ∈ span(u, eΩu) = f span(x, y) f⁻¹. This is not possible, and so t ∈ R6. But now we can repeat the argument, replacing x by z, to deduce that y1 ∈ span(t, eΩt) ∩ eΩR6 = span(eΩt). Thus t is a multiple of u, which is again impossible. We therefore conclude that sin 2s = 0.

We now complete the proof of the theorem. If g ∈ N1+ and g = e^{teΩ} f, with f ∈ Spin6, then det φ(g) = det φ(e^{teΩ}) det φ(f) = e^{4it}, so that if φ(g) ∈ SU(4) then e^{it} = i, −1, −i or 1, so that e^{teΩ} = eΩ, −1, −eΩ or 1, and g ∈ Spin6. Thus φ(Spin6) = SU(4).

This algebraic proof also gives information about the relationship between Spin6 and N1+. There is, however, a shorter argument, based on dimension, which proves the theorem (but no more). As we have seen, φ(Spin6) is a subgroup of SU4. Since Spin6 is a double cover of SO6, it has the same dimension as SO6, which is 6·5/2 = 15. But U4 has dimension 4² = 16, and so SU4 has dimension 15 as well, so that φ(Spin6) is an open-and-closed subgroup of SU4. But SU4 is connected, and so φ(Spin6) = SU4. ∎


Spin7 is isomorphic to a subgroup of SO8

We know that A+7 ≅ A6 and that A6 ≅ A2,4 ≅ M4(A0,2) ≅ M8(R). Following through the usual calculations, we find that there is an isomorphism k : A+7 → M8(R) such that

k(e1e7) = J ⊗ I ⊗ I,    k(e2e7) = U ⊗ J ⊗ I,    k(e3e7) = Q ⊗ J ⊗ Q,
k(e4e7) = Q ⊗ J ⊗ U,    k(e5e7) = Q ⊗ I ⊗ J,    k(e6e7) = U ⊗ Q ⊗ J.

Since k(\overline{eie7}) = −k(eie7) = (k(eie7))⁰ for 1 ≤ i ≤ 6, it follows that k(ā) = (k(a))⁰ for a ∈ A+7, so that k(N1+) = O(8). The connectedness argument that we used above then shows that k(Spin7) ⊆ SO(8). Note that dim(Spin7) = dim(SO(7)) = 21, while dim(SO(8)) = 28. Thus several constraints need to be imposed on an element of N1+ for it to belong to Spin7.

8.10 Table of results

Let us tabulate the results that have been obtained.

Spin groups

  Spin2 ≅ Spin0,2          T
  Spin+1,1                 (R, +) × D2 ≅ (R*, ×)
  Spin1,1                  (R*, ×) × D2
  Spin3 ≅ Spin0,3          H1 ≅ S³ ≅ SU2
  Spin+2,1 ≅ Spin+1,2      SL2(R)
  Spin4 ≅ Spin0,4          H1 × H1
  Spin+2,2                 SL2(R) × SL2(R)
  Spin5 ≅ Spin0,5          HU(2)
  Spin6 ≅ Spin0,6          SU4
  Spin7 ≅ Spin0,7          A proper subgroup of SO8

PART THREE

SOME APPLICATIONS

9 Some applications to physics

In this chapter, we briefly describe some applications of Clifford algebras to physics. The first of these concerns the spin of an elementary particle, when the spin is 1/2. Pauli introduced the Pauli spin matrices to describe this phenomenon. Physics is concerned with partial differential operators: building on Pauli’s ideas, Dirac introduced a first order differential operator, now called the Dirac operator, in order to formulate the relativistic wave equation for an electron; this in turn led to the discovery of the positron. We shall study the Dirac operator more fully in the next chapter. Here we show how it can be used to formulate Maxwell’s equation for an electromagnetic field in a particularly simple way, and will also describe the Dirac equation.

9.1 Particles with spin 1/2

The Pauli spin matrices were introduced by Pauli to represent the internal angular momentum of particles which have spin 1/2. Let us briefly describe how this can be interpreted in terms of the Clifford algebra A0,3.

In quantum mechanics, an observable corresponds to a Hermitian linear operator T on a Hilbert space H, and, when the possible values of the observable are discrete, these possible values are the eigenvalues of T. The Stern-Gerlach experiment showed that elementary particles have an intrinsic angular momentum, or spin. If x is a unit vector in R3 then the component Jx of the spin in the direction x is an observable. In a non-relativistic setting, this leads to the consideration of a linear mapping x → Jx from the Euclidean space V = R3 into the space Lh(H) of Hermitian operators on an appropriate state space H. Particles are either


bosons, in which case the eigenvalues of Jx are integers, or fermions, in which case the eigenvalues of Jx are of the form (2k + 1)/2. In the case where the particle has spin 1/2, each of the operators Jx has just two eigenvalues, namely 1/2 (spin up) and −1/2 (spin down). Consequently, 4Jx² = I.

This equation suggests that we consider the negative-definite quadratic form q(x) = −‖x‖² on R3, and consider R0,3 as a subspace of A0,3. Now let j : A0,3 → M2(C) be the isomorphism defined in Section 7.5, and let Ji = ½ j(ei) = ½ σi for i = 1, 2, 3. Thus

J1J2 = (i/2)J3,    J2J3 = (i/2)J1,    J3J1 = (i/2)J2.

Now the Pauli spin matrices are Hermitian, and identifying A0,3 with M2(C), we see that we can take the state space H to be the spinor space C². If x = (x1, x2, x3) ∈ R0,3 and q(x) = −1 then

Jx = ½ ( x3, x1 − ix2 ; x1 + ix2, −x3 )

is a Hermitian matrix with eigenvalues 1/2 and −1/2; the corresponding eigenvectors are

( x1 − ix2 ; 1 − x3 )  and  ( x1 − ix2 ; −1 − x3 )

respectively.

9.2 The Dirac operator

We now begin to study the analytic use of Clifford algebras. Some knowledge of analysis is required. In particular, familiarity with the differential calculus for vector-valued functions of several real variables is needed. If U is an open subset of a finite-dimensional real vector space and F is a finite-dimensional vector space then we define C(U, F) to be the vector space of all continuous F-valued functions defined on U, and, for k > 0, we define C^k(U, F) to be the vector space of all k-times continuously differentiable F-valued functions defined on U.

Suppose that U is an open subset of Euclidean space Rd, that F is a finite-dimensional vector space and that f ∈ C²(U, F). Then the second-order linear differential operator ∆ : C²(U, F) → C(U, F) (the


Laplacian) is defined as

∆f = Σ_{j=1}^{d} ∂²f/∂xj².

(Once again, conventions vary: for some authors, ∆f = −Σ_{j=1}^{d} ∂²f/∂xj², and for others, a factor ½ is included in the definition.) A harmonic function is one which satisfies ∆f = 0. The study of the Laplacian, and of harmonic functions, is one of the major topics of analysis.

We can consider other quadratic forms on Rd. Suppose that Rp,m is the standard regular quadratic space with signature (p, m). Then we define the corresponding Laplacian ∆q as

∆q f = Σ_{j=1}^{d} q(ej) ∂²f/∂xj² = Σ_{j=1}^{p} ∂²f/∂xj² − Σ_{j=p+1}^{p+m} ∂²f/∂xj²,

and define q-harmonic functions f as those for which ∆q f = 0.

It was Dirac who first saw that the second-order linear differential operator −∆q can be written as the square of a first-order linear differential operator, provided that we are prepared to place the problem in a non-commutative setting. Suppose that U is an open subset of Rp,m, that F is a finite-dimensional left Ap,m-module, and that f ∈ C¹(U, F). We define the (standard) Dirac operator Dq as

Dq f(x) = Σ_{j=1}^{d} q(ej) ej ∂f/∂xj.

We say that f is Clifford analytic if Dq f = 0.

First we show that this definition is independent of the choice of orthogonal basis.

Theorem 9.2.1  Suppose that (εi) is any orthogonal basis for Rp,m; denote the corresponding co-ordinates by (yi). Then

Dq = Σ_{i=1}^{d} q(εi) εi ∂/∂yi.

Proof  We can express each vector εi in terms of the basis (ej) as εi = Σ_{j=1}^{d} aij ej, and each vector ej in terms of the basis (εi) as ej = Σ_{i=1}^{d} bji εi. Then aij = b(εi, ej) = b(ej, εi) = bji.


Thus

Σ_{i=1}^{d} q(εi) aij aik = Σ_{i=1}^{d} q(εi) bji bki = b( Σ_{i=1}^{d} bji εi, Σ_{i=1}^{d} bki εi ) = b(ej, ek).

Since

∂f/∂yi (x) = lim_{t→0} [ f(x + t Σ_{j=1}^{d} aij ej) − f(x) ] / t,

it follows that

∂/∂yi = Σ_{j=1}^{d} aij ∂/∂xj.

Thus

Σ_{i=1}^{d} q(εi) εi ∂/∂yi = Σ_{i=1}^{d} q(εi) ( Σ_{j=1}^{d} aij ej ) ( Σ_{k=1}^{d} aik ∂/∂xk )
                         = Σ_{j=1}^{d} Σ_{k=1}^{d} ( Σ_{i=1}^{d} q(εi) aij aik ) ej ∂/∂xk
                         = Σ_{j=1}^{d} Σ_{k=1}^{d} b(ej, ek) ej ∂/∂xk
                         = Σ_{j=1}^{d} q(ej) ej ∂/∂xj = Dq.

We now consider Dq² as a mapping from C²(U, F) into C(U, F):

Dq² = ( Σ_{j=1}^{d} q(ej) ej ∂/∂xj ) ( Σ_{j=1}^{d} q(ej) ej ∂/∂xj )
    = Σ_{i=1}^{d} Σ_{j=1}^{d} q(ei) q(ej) ei ej ∂²/∂xi∂xj
    = Σ_{i=1}^{d} q(ei)² ei² ∂²/∂xi² + Σ_{i=1}^{d−1} Σ_{j=i+1}^{d} q(ei) q(ej) (eiej + ejei) ∂²/∂xi∂xj
    = −Σ_{i=1}^{d} q(ei) ∂²/∂xi² = −∆q,

where ∆q is the Laplacian. We have written −∆q as the product of two first-order differential operators; but the price that we pay is that we must work in a non-commutative setting. Thus if we consider

C²(U, F) —Dq→ C¹(U, F) —Dq→ C(U, F),

then

ker Dq = {f ∈ C¹(U, F) : f is Clifford analytic},
ker Dq² = {f ∈ C²(U, F) : f is q-harmonic}.

Further, if f is q-harmonic, then Dq f is Clifford analytic. We shall study the Dirac operator, and Clifford analyticity, in more detail in the next chapter.

9.3 Maxwell's equations

Maxwell's equations for an electromagnetic field can be expressed as a single equation involving the standard Dirac operator. We consider the simplest case of electric and magnetic fields in a vacuum, varying in space and time. We consider an open subset U of R × R3 = R4. We denote points of U by (t, x1, x2, x3): t is the time variable, and x1, x2, x3 are space variables. To begin with, we consider E = (E1, E2, E3), the electric field, and B = (B1, B2, B3), the magnetic field, as continuously differentiable vector-valued functions defined on U and taking values in R3. Given these, there are then a current density, which is a continuous vector-valued function J defined on U and taking values in R3, and a charge density ρ, which is a continuous scalar-valued function defined on U. Then, with a suitable choice of units, Maxwell's equations are

∇·E = ρ,
∇×B − ∂E/∂t = J,
∂B/∂t + ∇×E = 0,
∇·B = 0.

The formulation of these equations is one of the major achievements of nineteenth-century physics. It turned out however that these equations are not invariant under Euclidean changes of co-ordinates; instead, they are invariant under Lorentz transformations. This was one of the driving


forces that led to Einstein's theory of special relativity. Thus, instead of R4 we must consider R1,3, with quadratic form

q(t, x1, x2, x3) = t² − (x1² + x2² + x3²)

and orthogonal basis (e0, e1, e2, e3). (It would also be possible to work with R3,1.) We therefore consider U to be an open subset of R1,3.

We must also consider the vector spaces which contain the images of E, B and J more carefully. We now consider the electric and magnetic fields as continuously differentiable mappings from U into the Clifford algebra A1,3. But instead of being vector-valued, we require them to be bivector-valued. The vector space V of bivectors is a six-dimensional hyperbolic space: we can write it as V = V⁺ ⊕ V⁻, where

V⁺ = span(e2e3, e1e3, e1e2),    V⁻ = span(e0e1, e0e2, e0e3);

the quadratic form Q on A1,3 is positive definite on V⁺ and is negative definite on V⁻. In order to avoid confusion, we shall write E = E1e1 + E2e2 + E3e3 and B = B1e1 + B2e2 + B3e3 as the original R3-valued functions. We set

Ẽ = E1e0e1 + E2e0e2 + E3e0e3 = e0E,
B̃ = B1e2e3 − B2e1e3 + B3e1e2 = Be1e2e3.

Thus B̃ takes values in V⁺ and Ẽ takes values in V⁻. We set F̃ = Ẽ + B̃; Ẽ, B̃ and F̃ are all bivector-valued.

We combine the current density and the charge density: we set J̃ = ρe0 + J, so that J̃ is vector-valued, taking values in the four-dimensional space R1,3.

We now calculate Dq F̃. We can write Dq F̃ = (Dq F̃)₁ + (Dq F̃)₃, where (Dq F̃)₁ takes values in R1,3, and is the vector part of Dq F̃, and (Dq F̃)₃ takes values in R1,3 eΩ, and is the trivector part of Dq F̃. First,

(Dq F̃)₁ = ( Σ_{j=1}^{3} ∂Ej/∂xj ) e0 − Σ_{j=1}^{3} (∂Ej/∂t) ej
          + ( ∂B3/∂x2 − ∂B2/∂x3 ) e1 + ( ∂B1/∂x3 − ∂B3/∂x1 ) e2 + ( ∂B2/∂x1 − ∂B1/∂x2 ) e3
        = ρe0 + J = J̃,


and secondly

(Dq F̃)₃ = T1 + T2 + T3,

where

T1 = ( ∂E3/∂x2 − ∂E2/∂x3 ) e0e2e3 + ( ∂E3/∂x1 − ∂E1/∂x3 ) e0e1e3 + ( ∂E2/∂x1 − ∂E1/∂x2 ) e0e1e2,
T2 = (∂B1/∂t) e0e2e3 − (∂B2/∂t) e0e1e3 + (∂B3/∂t) e0e1e2,
T3 = (∂B1/∂x1) e1e2e3 − (∂B2/∂x2) e2e1e3 + (∂B3/∂x3) e3e1e2.

Thus

(Dq F̃)₃ = −(∇×E) eΩ − (∂B/∂t) eΩ + (∇·B) e1e2e3 = 0.

Hence Maxwell's equations reduce to the single equation

Dq F̃ = J̃.

Note that this implies that if J̃ = 0 then F̃ is Clifford analytic, and that if F̃ is Clifford analytic then E and B are harmonic.

9.4 The Dirac equation

Our next example concerns the Dirac equation. This relativistic quantum mechanical equation was defined by Paul Dirac in 1928; it led to the discovery of the positron, and is one of the great landmarks of theoretical physics. It was here that the Dirac operator was introduced.

We begin with the classical relativistic equation E² = k·k + m², where, with suitable units, E is energy, k is momentum and m is rest-mass. The corresponding equation in quantum mechanics is the Klein-Gordon equation. The underlying Hilbert space H is a suitable space of sufficiently smooth functions, called wave functions, on R4. In this quantum mechanical setting, E² corresponds to

−∂²/∂t²,  and k·k corresponds to  −( ∂²/∂x1² + ∂²/∂x2² + ∂²/∂x3² ).


Thus we are led to the Klein-Gordon equation

□ψ = ∂²ψ/∂t² − ( ∂²ψ/∂x1² + ∂²ψ/∂x2² + ∂²ψ/∂x3² ) = −m²ψ,

where ψ is a wave function.

There are two conventions that we can consider for space-time: either R3,1 or R1,3. We shall consider each of them, and begin with R3,1. We therefore consider a wave function ψ taking values in a left A3,1-module F. If D3,1 is the corresponding Dirac operator, then D3,1² = □. Consequently, we consider the equation D3,1ψ = imψ, where i is an operator with i² = −1. The operator i must commute with e1, e2, e3 and e4. But A3,1 ≅ M2(H), and Z(A3,1) ≅ R. We therefore need to ensure that F is also a C-module; that is, we require that F is a complex vector space.

We follow the path that Dirac took. As in Section 7.6, we can consider A3,1 as a subalgebra of A3,2, and A3,2 is isomorphic to M4(C). A spinor space for A3,2 is isomorphic to C⁴, and so we consider A3,1 acting on C⁴. For this reason, we consider the wave function ψ taking values in C⁴. We can use the Dirac γ-matrices defined in Section 7.6 to represent A3,1 in M4(C); the Dirac equation then becomes

D3,1ψ = −γ0 ∂ψ/∂t + γ1 ∂ψ/∂x1 + γ2 ∂ψ/∂x2 + γ3 ∂ψ/∂x3 = imψ.

Alternatively, we can consider space-time as R1,3, and label the standard orthogonal basis as (e0, e1, e2, e3). In this case, D1,3² = −□, and so we consider the equation D1,3ψ = mψ. Now A1,3 ≅ M4(R), and, as in Section 7.6, but with a change of indices, we obtain a representation by using the matrices

j0 = J ⊗ U,    j1 = Q ⊗ U,    j2 = U ⊗ U,    j3 = I ⊗ Q.

A spinor space for A1,3 is isomorphic to R4, and we could consider ψ taking values in R4. In quantum mechanics, however, wave functions are required to be complex; we therefore take ψ to take values in C⁴. The Dirac equation then becomes

D1,3ψ = j0 ∂ψ/∂t − j1 ∂ψ/∂x1 − j2 ∂ψ/∂x2 − j3 ∂ψ/∂x3 = mψ.

But in this case, the matrices are real-valued, with entries taking values −1, 0 and 1.
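The Clifford relations for these real matrices can be checked directly. This is my own verification, taking A ⊗ B as np.kron(A, B): j0 squares to −I and j1, j2, j3 square to +I, and all pairs anticommute, so they generate a representation of A1,3 in M4(R).

```python
import numpy as np

I = np.eye(2)
J = np.array([[0., -1.], [1., 0.]])
Q = np.array([[0., 1.], [1., 0.]])
U = np.array([[1., 0.], [0., -1.]])

j0, j1, j2, j3 = np.kron(J, U), np.kron(Q, U), np.kron(U, U), np.kron(I, Q)

js, squares = [j0, j1, j2, j3], [-1, 1, 1, 1]
for a, (ja, s) in enumerate(zip(js, squares)):
    assert np.allclose(ja @ ja, s * np.eye(4))        # squares: -1, +1, +1, +1
    for jb in js[a + 1:]:
        assert np.allclose(ja @ jb + jb @ ja, 0 * ja)  # anticommutation
```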


Exercises

1. Let ψ = (ψ1, ψ2, ψ3, ψ4) take values in C⁴. Expand the Dirac equation, using the matrices γ1, γ2, γ3, γ4 defined above, to obtain four first-order linear differential equations.
2. Do the same, where now ψ = (ψ1, ψ2, ψ3, ψ4) takes values in R⁴, using the matrices j0, j1, j2, j3 defined above.

10 Clifford analyticity

We now study the Dirac operator, and Clifford analyticity, in greater detail. We can identify the complex field C with two-dimensional real space R² and can compare analytic functions on C with harmonic functions on R². The relations between these two notions have led to a great deal of profound analysis. What happens in higher dimensions? There are corresponding relations, of a weaker, dimensionally dependent, sort between harmonic functions and Clifford analytic functions, and we shall describe some of these in this chapter. Although we shall describe the two-dimensional background, it will help to have some familiarity with the relationship between analytic and harmonic functions in the plane.

10.1 Clifford analyticity

Let us recall the definitions. Suppose that U is an open subset of Rp,m, that F is a finite-dimensional left Ap,m-module, and that f ∈ C¹(U, F). Then the (standard) Dirac operator Dq is defined as

Dq f(x) = Σ_{j=1}^{d} q(ej) ej ∂f/∂xj,

and f is Clifford analytic if Dq f = 0. Further, Dq² = −∆q, so that if f is Clifford analytic, then f is q-harmonic, and if f ∈ C²(U, F) is q-harmonic, then Dq(f) is Clifford analytic.

Let us first give some examples of q-harmonic functions; the examples in Euclidean space generalize to spaces with other quadratic forms.

Proposition 10.1.1  If q(x) > 0, let f(x) = log q(x) for d = 2 and


let f(x) = 1/q(x)^{(d−2)/2} for d > 2. Then f is q-harmonic on the set {x ∈ Rp,m : q(x) > 0}.

Proof  If d = 2 then

∂f/∂xj = 2q(ej)xj / q(x)  and  ∂²f/∂xj² = 2q(ej)/q(x) − 4xj²/q(x)²,

so that

∆q f(x) = q(e1) ∂²f/∂x1² + q(e2) ∂²f/∂x2² = 4/q(x) − 4( q(e1)x1² + q(e2)x2² )/q(x)² = 0.

If d > 2 then

∂f/∂xj = −(d − 2) q(ej) xj / q(x)^{d/2},

so that

∂²f/∂xj² = −(d − 2) q(ej)/q(x)^{d/2} + d(d − 2) xj²/q(x)^{(d+2)/2},

so that

∆q f(x) = Σ_{j=1}^{d} q(ej) ∂²f/∂xj² = −d(d − 2)/q(x)^{d/2} + d(d − 2) q(x)/q(x)^{(d+2)/2} = 0.

If f is a real-valued function on Rp,m, we can consider it as a function taking values in the scalars in Ap,m.

Corollary 10.1.1  The Rp,m-valued function g(x) = x/q(x)^{d/2} is Clifford analytic on {x : q(x) > 0}.

Proof  If d = 2 then Dq(log q(x)) = 2x/q(x), and if d > 2 then Dq(q(x)^{−(d−2)/2}) = −(d − 2)x/q(x)^{d/2}.
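Proposition 10.1.1 can be confirmed symbolically for a specific case. This is my own check, for d = 4 with signature (2, 2), where f = 1/q is q-harmonic wherever q > 0.

```python
import sympy as sp

x = sp.symbols('x1:5')          # x1, x2, x3, x4
qs = (1, 1, -1, -1)             # signature (2, 2): q(e_j) values

q = sum(s * xi**2 for s, xi in zip(qs, x))
f = 1 / q                       # 1/q^{(d-2)/2} with d = 4

# Delta_q f = sum_j q(e_j) d^2 f / dx_j^2.
lap = sum(s * sp.diff(f, xi, 2) for s, xi in zip(qs, x))
assert sp.simplify(lap) == 0
```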

Let us consider the case where F = Ap,m and where f = Σ_{i=1}^{d} fi ei takes values in Rp,m ⊆ Ap,m. In this case,

Dq f(x) = Σ_{j=1}^{d} q(ej) ej ( Σ_{i=1}^{d} (∂fi/∂xj) ei ) = −Σ_{i=1}^{d} q(ei) ∂fi/∂xi + Σ_{1≤i<j} …

Let H^{d+1} = {(t, x) : t > 0} denote the upper half-space in R^{d+1}, and let λ denote Lebesgue measure on Rd.


The Poisson kernel

P(t, x) = Pt(x) = cd t / (x² + t²)^{(d+1)/2}

(where cd is a normalizing constant, chosen so that ∫_{Rd} Pt(x) dλ(x) = 1) is a harmonic function on H^{d+1} (see the exercises below). Suppose that ν is a complex Borel measure on Rd. We can define a function u on the upper half-space H^{d+1} by setting

u(t, x) = P(ν)(t, x) = Pt(ν)(x) = ∫_{Rd} Pt(x − y) dν(y).

Then u is harmonic on H^{d+1}. Similarly, if 1 ≤ p < ∞ and f ∈ Lp(λ), we set

P(f)(t, x) = Pt(f)(x) = ∫_{Rd} Pt(x − y) f(y) dλ(y).

P(f) is again harmonic. The Poisson kernel is an approximate identity, which solves the Dirichlet problem for the upper half-space.

Theorem 10.3.1  If f ∈ Lp(λ) then P(f) is harmonic on the upper half-space H^{d+1}. Further, Pt(f) → f in Lp-norm, and almost everywhere, and in particular at the points of continuity of f, as t ↘ 0.

Instead of starting with measures and functions on Rd, we can start with harmonic functions on H^{d+1}. This raises the following fundamental question. Suppose that g is a harmonic function on the upper half-space H^{d+1}; let gt(x) = g(t, x). For 1 ≤ p < ∞, we define the space h^p(H^{d+1}) to be the space of harmonic functions g on H^{d+1} for which

‖g‖_{h^p} = sup{‖gt‖_p : t > 0} < ∞.

Then (h^p(H^{d+1}), ‖·‖_{h^p}) is a Banach space. If g ∈ h^p(H^{d+1}), does gt converge to an element of Lp(λ) as t ↘ 0, and if so, does it converge pointwise almost everywhere?

When 1 < p < ∞, the answer is 'yes'. The mapping f → P(f) is a linear isomorphism of Lp(λ) onto h^p(H^{d+1}). When p = 1, the answer is 'no'. In fact, if M(Rd) is the linear space of complex Borel measures on Rd, then the mapping ν → P(ν) is a linear isomorphism of M(Rd) onto h¹(H^{d+1}). L¹(λ) is a closed linear subspace of M(Rd), and the image P(L¹(λ)) is a proper linear subspace of h¹(H^{d+1}).

Let us introduce some spaces which will be of interest later. Suppose that g is a function on the upper half-space H^{d+1}. We define the maximal

P (f ) is again harmonic. Then the Poisson kernel is an approximate identity, which solves the Dirichlet problem for the upper half-space. Theorem 10.3.1 If f ∈ Lp (λ) then P (f ) is harmonic on the upper half-space H d+1 . Further, Pt (f ) → f in Lp -norm, and almost everywhere, and in particular at the points of continuity of f , as t & 0. Instead of starting with measures and functions on Rd , we can start with harmonic functions on H d+1 . This raises the following fundamental question. Suppose that g is a harmonic function on the upper half-space H d+1 ; let gt (x) = g(t, x). For 1 ≤ p < ∞, we define the space hp (H d+1 ) to be the space of harmonic functions g on H d+1 for which kgkhp = sup{kgt kp : t > 0} < ∞. Then (hp (H d+1 ), kgkhp ) is a Banach space. If g ∈ hp (H d+1 ), does gt converge to an element of Lp (λ) as t & 0, and if so, does it converge pointwise almost everywhere? When 1 < p < ∞ then the answer is ‘yes’. The mapping f → P (f ) is a linear isomorphism of Lp (λ) onto hp (H d+1 ). When p = 1, the answer is ‘no’. In fact, if M (Rd ) is the linear space of complex Borel measures on Rd , then the mapping ν → P( ν) is a linear isomorphism of M (Rd ) onto h1 (H d+1 ). L1 (λ) is a closed linear subspace of M (Rd ), and the image P (L1 (λ)) is a proper linear subspace of h1 (H d+1 ). Let us introduce some spaces which will be of interest later. Suppose that g is a function on the upper half-space H d+1 . We define the maximal


function g* on R^d by setting g*(x) = sup_{t>0} |g(t,x)|. For 1 ≤ p < ∞ we define the Hardy space H^p(H^{d+1}) to be

$$H^p(H^{d+1}) = \{g \in h^p(H^{d+1}) : g^* \in L^p(\lambda)\}.$$

Similarly, we define the Hardy space H^p(R^d) to be {f ∈ L^p(λ) : P(f) ∈ H^p(H^{d+1})}. We then have the following result.

Theorem 10.3.2 If 1 < p < ∞ then H^p(R^d) = L^p(R^d). On the other hand, H^1(R^d) is a proper linear subspace of L^1(R^d). If g ∈ H^1(H^{d+1}) then there exists f ∈ H^1(R^d) such that P(f) = g. Thus P(H^1(R^d)) = H^1(H^{d+1}). If χ_B is the characteristic function of the unit ball in R^d, then χ_B ∈ L^1(R^d) \ H^1(R^d).

Exercises

1. If z = x + it ∈ C, show that

$$\frac{i}{\pi z} = \frac{t}{\pi(x^2+t^2)} + \frac{ix}{\pi(x^2+t^2)} = P(x,t) + iQ(x,t) = P_t(x) + iQ_t(x).$$

P is the Poisson kernel, and Q is the conjugate Poisson kernel on R.
2. Deduce that P and Q are harmonic functions on H².
3. Verify that ∫_R P_t(x) dx = 1.
4. Verify that the function P(t,x) = c_d t(x² + t²)^{−(d+1)/2} is harmonic on the upper half-space H^{d+1}.

10.4 The Hilbert transform

We now consider what happens in the case where d = 1. We can identify H² with the upper half-plane C⁺ = {z = x + it ∈ C : t > 0}. If u is harmonic on C⁺ then u is the real part of an analytic function u + iv on C⁺, which is unique up to an additive constant. We can use the conjugate Poisson kernel Q (defined in the exercises in the previous section) to define Q(ν) and Q(f), exactly as above. Then P(ν) + iQ(ν) is an analytic function on C⁺, as is P(f) + iQ(f).

When 1 < p < ∞, things continue to work well. If f ∈ L^p(λ) then


Q(f) ∈ h^p(H²), so that Q_t(f) converges in L^p-norm to a function H(f) as t ↘ 0. H(f) is the Hilbert transform of f, and the mapping f → H(f) is a linear homeomorphism of L^p(λ) onto itself. When p = 2, the Hilbert transform is in fact an isometry.

When p = 1, things do not work so well, since Q_t(L¹) is not contained in L¹. We must consider H^1(R), rather than L¹. If f ∈ H^1(R), then Q(f) ∈ H^1(H²), so that Q_t(f) converges in L¹-norm to a function H(f) as t ↘ 0. Then H(f) ∈ H^1(R), and the Hilbert transform is a linear homeomorphism of H^1(R) onto itself. The following remarkable theorem, due to the brothers Riesz, shows that this is the only sensible course to take.

Theorem 10.4.1 (F. and M. Riesz) Suppose that ν is a bounded complex measure on R for which P(ν) is analytic on C⁺. Then there exists f ∈ H^1(R) such that ν = f dλ.

These results show that the relation between harmonic functions and complex functions is subtle, and not straightforward. Our aim will be to see how Clifford algebras and augmented Dirac operators can be used to extend these results to higher dimensions.

Exercise

1. Let f be the indicator function of [−1, 1]. Calculate Q_t(f)(x) for |x| > 1, and show that Q_t(f) is not in L¹(R).
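The exercise above can be worked explicitly: for f = 1_{[−1,1]}, direct integration of the conjugate Poisson kernel Q_t(x) = x/(π(x² + t²)) gives a closed form for Q_t(f), which behaves like 2/(πx) at infinity and therefore cannot be integrable. A numerical sketch (the sample points and tolerances are arbitrary choices):

```python
import numpy as np

def Q(t, x):
    # Conjugate Poisson kernel on R: Q_t(x) = x / (pi (x^2 + t^2))
    return x / (np.pi * (x**2 + t**2))

def Qt_indicator(t, x):
    # Closed form of Q_t(1_{[-1,1]})(x), obtained by integrating Q_t(x - y)
    # over y in [-1, 1]
    return np.log(((x + 1)**2 + t**2) / ((x - 1)**2 + t**2)) / (2 * np.pi)

# Numerical check of the closed form at one point
y = np.linspace(-1.0, 1.0, 200001)
dy = y[1] - y[0]
num = np.sum(Q(0.5, 3.0 - y)) * dy
exact = Qt_indicator(0.5, 3.0)

# For |x| large, Q_t(f)(x) ~ 2/(pi x): a 1/x tail, hence not in L^1(R)
xs = np.array([10.0, 100.0, 1000.0])
ratios = Qt_indicator(0.01, xs) * (np.pi * xs) / 2   # -> 1
print(num, exact, ratios)
```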

10.5 Augmented Dirac operators Suppose that ν is a bounded signed or complex measure on Rd . Using the d-dimensional Poisson kernel, we have defined a function P (ν) on the upper half-space H d+1 = {(x, t) ∈ Rd+1 : t > 0}. The new variable t is separate from the original variable x. Putting this in a Clifford algebra context, the function u can be considered as a function on a subset of R ⊕ Rd ⊆ Ad . (More generally, an element of the subspace R ⊕ Rp,m of Ap,m is called a paravector.) We denote such a paravector by (t, x1 , . . . , xd ). This means that we need to consider a differential operator more general than the Dirac operator that we have considered in the previous chapter. Suppose that U is an open subset of the space R ⊕ Rp,m of paravectors in Ap,m , that F is a finite-dimensional left Ap,m -module,


and that f : U → F is a smooth function. We define the augmented Dirac operator D to be

$$Df = \frac{\partial f}{\partial t} + \sum_{j=1}^d q(e_j)\, e_j \frac{\partial f}{\partial x_j},$$

and its conjugate D̄ to be

$$\bar{D}f = \frac{\partial f}{\partial t} + \sum_{j=1}^d q(e_j)\, \bar{e}_j \frac{\partial f}{\partial x_j} = \frac{\partial f}{\partial t} - \sum_{j=1}^d q(e_j)\, e_j \frac{\partial f}{\partial x_j}.$$

Then

$$D\bar{D}(f) = \bar{D}D(f) = \frac{\partial^2 f}{\partial t^2} + \sum_{i=1}^d q(e_i)\frac{\partial^2 f}{\partial x_i^2} = \Delta f,$$

where now Δ is the Laplacian on R ⊕ R^{p,m} ≅ R^{p+1,m}. We say that f is an (augmented) Clifford analytic function if D(f) = 0. In particular, if f is harmonic on U then D̄f is analytic.

Frequently, one considers functions u = (u₀, u₁, …, u_d) = u₀ + u₁e₁ + ⋯ + u_de_d taking values in R ⊕ R^{p,m} ⊆ A_{p,m}. Then the condition Du = 0 reduces to the augmented Cauchy-Riemann (ACR) equations:

$$\frac{\partial u_0}{\partial t} = \sum_{i=1}^d q(e_i)\frac{\partial u_i}{\partial x_i},$$
$$q(e_j)\frac{\partial u_0}{\partial x_j} = -\frac{\partial u_j}{\partial t} \qquad 1 \le j \le d,$$
$$q(e_j)\frac{\partial u_i}{\partial x_j} = q(e_i)\frac{\partial u_j}{\partial x_i} \qquad 1 \le i < j \le d.$$

Let us see how the augmented Cauchy-Riemann equations relate to the familiar Cauchy-Riemann equations of complex analysis. Suppose that U is an open subset of R ⊕ R, that f is a smooth mapping of U into A₁ ≅ C, and that f(x, y) = u(x, y) + v(x, y)e₁. We set

$$\frac{d}{dz} = \frac{1}{2}\Big(\frac{\partial}{\partial x} - e_1\frac{\partial}{\partial y}\Big) = \tfrac{1}{2}\bar{D}, \qquad \frac{d}{d\bar{z}} = \frac{1}{2}\Big(\frac{\partial}{\partial x} + e_1\frac{\partial}{\partial y}\Big) = \tfrac{1}{2}D.$$

Then f is analytic if df/dz̄ = 0, and if f is analytic then its derivative


is df/dz. The equation df/dz̄ = 0 expands to give the Cauchy-Riemann equations.

The fact that d/dz relates to D̄ and d/dz̄ relates to D, rather than the other way round, is unfortunate. It would be appropriate to change the signs in the definition of D, so that

$$Df = \frac{\partial f}{\partial t} + \sum_{j=1}^d e_j^{-1}\frac{\partial f}{\partial x_j},$$

but the definitions that we have given conform to the usual practice.
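In the classical case the single equation df/dz̄ = 0 packages the two Cauchy-Riemann equations. A quick symbolic check with sympy (the choice f = z³ is an arbitrary analytic example, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

# Any polynomial in z is analytic; take f = z**3 and split it into
# real and imaginary parts u, v.
f = sp.expand(z**3)
u, v = sp.re(f), sp.im(f)

# d/dzbar = (1/2)(d/dx + i d/dy); analyticity means this annihilates f.
dzbar = sp.simplify((sp.diff(f, x) + sp.I * sp.diff(f, y)) / 2)

# Equivalently the Cauchy-Riemann equations hold: u_x = v_y, u_y = -v_x.
cr1 = sp.simplify(sp.diff(u, x) - sp.diff(v, y))
cr2 = sp.simplify(sp.diff(u, y) + sp.diff(v, x))
print(dzbar, cr1, cr2)   # all zero
```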

10.6 Subharmonicity properties

Why are analytic functions in the plane better behaved than harmonic functions? Many of the deep results concerning analytic functions depend upon their subharmonicity properties. A measurable real-valued function f on an open subset W of R^d, or C, is said to be subharmonic if, whenever U is a ball contained in W with centre x, then

$$f(x) \le \frac{1}{\lambda(U)}\int_U f(y)\, d\lambda(y).$$

If f is smooth then f is subharmonic if and only if Δ(f) ≥ 0. If f is an analytic function on an open subset U of C then log|f| is subharmonic, and so |f|^α is subharmonic, for 0 < α < ∞. Many results concerning analytic functions on an open subset U of C follow from these facts. As we shall now see, there are corresponding results for Clifford analytic functions, but they are less strong, and depend upon the dimension d. We restrict attention to the Euclidean case.

Theorem 10.6.1 Suppose that f is a Clifford analytic function on an open subset U of R^d, taking non-zero values in R^d ⊆ A_d. Then Δ(‖f‖^q) ≥ 0, for q ≥ (d − 2)/(d − 1).

Proof

We need the following lemma.

Lemma 10.6.1 Suppose that M = (m_{jk}) ∈ M_d(R) is symmetric and has trace 0. Then

$$\|M\|^2 \le \frac{d-1}{d}\sum_{j,k} m_{jk}^2,$$

where ‖M‖ is the operator norm of M acting on l²_d.


Proof We diagonalize M: there exists an orthogonal matrix P such that P*MP = diag(λ₁, …, λ_d). Then ‖M‖ = max_j |λ_j| = |λ₁|, say,

$$\sum_{j,k} m_{jk}^2 = \tau(M^*M) = \sum_{j=1}^d \lambda_j^2, \quad\text{and}\quad \tau(M) = \sum_{j=1}^d \lambda_j = 0.$$

Thus

$$\lambda_1^2 = \Big(\sum_{j=2}^d \lambda_j\Big)^2 \le (d-1)\sum_{j=2}^d \lambda_j^2,$$

by the Cauchy-Schwarz inequality, so that

$$d\,\|M\|^2 = d\lambda_1^2 = \lambda_1^2 + (d-1)\lambda_1^2 \le (d-1)\sum_{j=1}^d \lambda_j^2 = (d-1)\sum_{j,k} m_{jk}^2.$$
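The lemma is easy to test numerically: for random symmetric trace-free matrices, the squared operator norm never exceeds (d − 1)/d times the squared Frobenius norm. A sketch (the seed, dimensions and sample counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def lemma_holds(d):
    # Random symmetric matrix, projected onto the trace-zero subspace
    A = rng.standard_normal((d, d))
    M = (A + A.T) / 2
    M -= (np.trace(M) / d) * np.eye(d)
    op_norm_sq = np.linalg.norm(M, 2) ** 2   # operator norm on l^2_d
    frob_sq = np.sum(M ** 2)                 # sum over j,k of m_jk^2
    return op_norm_sq <= (d - 1) / d * frob_sq + 1e-12

ok = all(lemma_holds(d) for d in range(2, 8) for _ in range(200))
print(ok)
```

Equality is attained by diag(d − 1, −1, …, −1), which is the extremal configuration in the Cauchy-Schwarz step of the proof.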

We now prove Theorem 10.6.1. Since f is harmonic, the result is true if q ≥ 1. We can therefore suppose that (d − 2)/(d − 1) ≤ q < 1 (this is the interesting case). ‖f‖^q = ⟨f, f⟩^{q/2}, so that

$$\frac{\partial}{\partial x_j}(\|f\|^q) = q\,\langle f, f\rangle^{(q-2)/2}\Big\langle \frac{\partial f}{\partial x_j}, f\Big\rangle$$

and

$$\frac{\partial^2}{\partial x_j^2}(\|f\|^q) = q(q-2)\langle f, f\rangle^{(q-4)/2}\Big\langle \frac{\partial f}{\partial x_j}, f\Big\rangle^2 + q\,\langle f, f\rangle^{(q-2)/2}\Big(\Big\langle \frac{\partial f}{\partial x_j}, \frac{\partial f}{\partial x_j}\Big\rangle + \Big\langle \frac{\partial^2 f}{\partial x_j^2}, f\Big\rangle\Big).$$

Adding, and using the harmonicity of f,

$$\Delta(\|f\|^q) = q\,\|f\|^{q-4}\Big((q-2)\sum_j \Big\langle \frac{\partial f}{\partial x_j}, f\Big\rangle^2 + \|f\|^2\sum_{k,j}\Big(\frac{\partial f_k}{\partial x_j}\Big)^2\Big).$$

Now

$$\sum_j \Big\langle \frac{\partial f}{\partial x_j}, f\Big\rangle^2 = \sum_j \Big(\sum_k f_k\frac{\partial f_k}{\partial x_j}\Big)^2 = \|M(f)\|^2,$$

where M_{jk} = ∂f_k/∂x_j. But the generalized Cauchy-Riemann equations


of Section 9.2 show that M is symmetric and that τ(M) = 0. Thus by Lemma 10.6.1

$$\sum_j \Big\langle \frac{\partial f}{\partial x_j}, f\Big\rangle^2 \le \frac{d-1}{d}\,\|f\|^2\sum_{k,j}\Big(\frac{\partial f_k}{\partial x_j}\Big)^2,$$

and so, since q < 2,

$$\Delta(\|f\|^q) \ge q\,\|f\|^{q-2}\Big((q-2)\frac{d-1}{d} + 1\Big)\sum_{k,j}\Big(\frac{\partial f_k}{\partial x_j}\Big)^2 = \|f\|^{q-2}\,\frac{q(d-1)}{d}\Big(q - \frac{d-2}{d-1}\Big)\sum_{k,j}\Big(\frac{\partial f_k}{\partial x_j}\Big)^2 \ge 0.$$

We have a corresponding result for augmented Clifford analytic functions.

Theorem 10.6.2 Suppose that f is an augmented Clifford analytic function defined on an open subset U of R ⊕ R^d ⊆ A_d. Then Δ(|f|^q) ≥ 0 for q ≥ (d − 1)/d.

Proof Let V = {(s, x) ∈ R^{d+1} : (−s, x) ∈ U}, and let π(s, x) = (−s, x) for (s, x) ∈ V. Then f ∘ π satisfies the generalized Cauchy-Riemann equations on V, and so Δ(|f ∘ π|^q) ≥ 0 for q ≥ (d − 1)/d. The result follows, since Δ(|f|^q) ∘ π = Δ(|f ∘ π|^q).

Exercise

1. Let f be the Clifford analytic function x/‖x‖^d on R^d \ {0}. Show that |f|^q is not subharmonic for 0 < q < (d − 2)/(d − 1).
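Assuming the function in the exercise is the Kelvin inverse f(x) = x/‖x‖^d (so that |f| = ‖x‖^{1−d}), the failure of subharmonicity below the critical exponent can be read off from the radial Laplacian, Δ(r^α) = α(α + d − 2)r^{α−2}, which is negative exactly when α = q(1 − d) lies strictly between 2 − d and 0. A sympy sketch in dimension d = 3, where (d − 2)/(d − 1) = 1/2:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
d = 3   # dimension 3, where the critical exponent (d-2)/(d-1) = 1/2

def radial_laplacian_sign(q):
    # |f| = r^(1-d) for f = x/|x|^d, so |f|^q = r^alpha with alpha = q(1-d).
    alpha = sp.Rational(q) * (1 - d)
    g = r**alpha
    # Radial Laplacian in R^d: (1/r^(d-1)) d/dr ( r^(d-1) dg/dr )
    lap = sp.simplify(sp.diff(r**(d - 1) * sp.diff(g, r), r) / r**(d - 1))
    # lap = alpha*(alpha + d - 2) r^(alpha - 2); its sign is seen at r = 1
    return lap.subs(r, 1)

print(radial_laplacian_sign('1/4'), radial_laplacian_sign('3/4'))
```

Below the critical exponent the Laplacian is negative, above it non-negative, which is the sharpness claimed by the exercise.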

10.7 The Riesz transform

We are now in a position to see how results concerning the Hilbert transform can be extended to higher dimensions. Suppose that d ≥ 2. We consider the half-space H^{d+1} as a subspace of the space R ⊕ R^d of paravectors in A_d. By Proposition 10.1.1,

$$g(t,x) = \frac{1}{(t^2 + \|x\|^2)^{(d-1)/2}}$$


is harmonic on (R × R^d) \ {(0, 0)}. Let h(t, x) = −c_d g(t, x)/(d − 1), where c_d is a normalizing constant. For t > 0, let

$$P(t,x) = \frac{\partial h}{\partial t}(t,x) = \frac{c_d\, t}{(t^2 + \|x\|^2)^{(d+1)/2}} \quad\text{and}\quad K_j(t,x) = \frac{\partial h}{\partial x_j}(t,x) = \frac{c_d\, x_j}{(t^2 + \|x\|^2)^{(d+1)/2}},$$

for 1 ≤ j ≤ d. P is the d-dimensional Poisson kernel and the functions K_j are the Riesz kernels. The constant c_d is chosen so that

$$\int_{\mathbb{R}^d} P(t,x)\, dx = 1 \quad\text{for } t > 0.$$

For fixed t > 0, the functions K_j(t, ·) are bounded functions on R^d which are in L^p(R^d) for 1 < p < ∞ (but not in L¹(R^d)). Since h is harmonic,

$$Dh(t,x) = P(t,x) - \sum_{j=1}^d e_j K_j(t,x)$$

is an augmented Clifford analytic function on H^{d+1}. Now suppose that ν ∈ M(R^d). We define

$$K^{(j)}(\nu)(t,x) = K^{(j)}_t(\nu)(x) = \int_{\mathbb{R}^d} K_j(t, x-y)\, d\nu(y),$$

for (t, x) ∈ H^{d+1} and for 1 ≤ j ≤ d. Then P(ν) − Σ_{j=1}^d e_j K^{(j)}(ν) is an augmented Clifford analytic function on H^{d+1}. In particular, each K^{(j)}(ν) is a harmonic function on H^{d+1}. We can also define K^{(j)}(f)(t, x), for f ∈ L^p(R^d), for 1 ≤ p < ∞.

When 1 < p < ∞, things continue to work well. If f ∈ L^p(λ) then K^{(j)}(f) ∈ H^p(H^{d+1}), so that K^{(j)}_t(f) converges in L^p-norm to a function K_j(f) as t ↘ 0. K_j(f) is the jth Riesz transform of f. The mapping f → K_j(f) is a bounded linear mapping, and K^{(j)}(f)(t, x) = P(K_j(f))(t, x).

When p = 1, things do not work so well. We must consider H^1(R^d) rather than L¹(R^d). If f ∈ H^1(R^d), then K^{(j)}(f) ∈ H^1(H^{d+1}), so that K^{(j)}_t(f) converges in L¹-norm to a function K_j(f) as t ↘ 0. Then the jth Riesz transform K_j(f) ∈ H^1(R^d).

Nevertheless, the facts that things work well for 1 < p < ∞, and that |f|^α is subharmonic for α > (d − 1)/d if f is an augmented Clifford
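The relations between h, the Poisson kernel and the Riesz kernels can be verified symbolically. A sympy sketch for d = 2 (the normalizing constant is left symbolic; the harmonicity of g is the content of Proposition 10.1.1, taken on trust here):

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2', real=True)
d = 2
cd = sp.Symbol('c_d', positive=True)   # normalizing constant, left symbolic

s = t**2 + x1**2 + x2**2
g = s ** sp.Rational(-(d - 1), 2)
h = -cd * g / (d - 1)

def lap(u):
    return sum(sp.diff(u, v, 2) for v in (t, x1, x2))

P = sp.simplify(sp.diff(h, t))
K1 = sp.simplify(sp.diff(h, x1))

# g is harmonic away from the origin, hence so are the kernels P and K_1
checks = [sp.simplify(lap(u)) for u in (g, P, K1)]

# P agrees with the stated closed form c_d t / (t^2 + |x|^2)^((d+1)/2)
stated = cd * t / s ** sp.Rational(d + 1, 2)
print(checks, sp.simplify(P - stated))
```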


analytic function, lead to a proof of the d-dimensional version of the theorem of the brothers Riesz.

Theorem 10.7.1 If ν is a bounded measure on R^d taking values in R^d, for which P(ν) is Clifford analytic, then ν is absolutely continuous with respect to Lebesgue measure, and there exists f ∈ H^1(R^d) such that ν = f dλ.

Proof

We shall need the following lemma.

Lemma 10.7.1 Suppose that u is a continuous subharmonic function on H^{d+1}, and that u(t, x) → 0 as t → 0 and as (t, x) → ∞. Then u ≤ 0 on H^{d+1}.

Proof If not, u attains its positive supremum at a point (t₀, x₀) of H^{d+1}. Since u takes values arbitrarily close to zero on the ball

$$U = \{(t, x) : (t - t_0)^2 + \|x - x_0\|^2 < t_0^2\},$$

it follows that

$$\frac{1}{\lambda(U)}\int_U u(t,x)\, d\lambda(t,x) < u(t_0, x_0),$$

contradicting the subharmonicity of u.

We now turn to the proof of the theorem. Let u = P(ν). Choose (d − 1)/d < q < 1 and let v = ‖u‖^q; v is subharmonic on H^{d+1}, by Theorem 10.6.2. Let p = 1/q > 1. Then if s > 0 and v_s(x) = v(s, x),

$$\int_{\mathbb{R}^d} |v_s(x)|^p\, dx = \int_{\mathbb{R}^d} \|u_s(x)\|\, dx \le \|\nu\|,$$

so that ‖v_s‖_p ≤ ‖ν‖^q. Let w_s = P(v_s), so that w_s is harmonic in H^{d+1}. The function d_s(t, x) = v(s + t, x) − w_s(t, x) is subharmonic on H^{d+1}. Also d_s(t, x) → v_s(x) − v_s(x) = 0 as t ↘ 0, and d_s(t, x) → 0 as (t, x) → ∞. Thus by Lemma 10.7.1, v(s + t, x) ≤ w_s(t, x), for (t, x) ∈ H^{d+1}.

We now use the fact that if 1 < p < ∞ then the space L^p is reflexive. This means that if (f_n) is a bounded sequence in L^p then there is a subsequence (f_{n_k}) which converges weakly: there exists f ∈ L^p such that ∫ f_{n_k} g dλ → ∫ fg dλ for each g ∈ L^{p′} (where 1/p + 1/p′ = 1). Since {v_s : s > 0} is bounded in L^p(R^d), there exists a sequence (s_k), with s_k ↘ 0 as k → ∞, such that (v_{s_k}) converges weakly in L^p(R^d) to V, say. In particular, w_{s_k}(t, x) = P(v_{s_k})(t, x) → P(V)(t, x) for each


(t, x) ∈ H^{d+1}. Further, v(s_k + t, x) → v(t, x) as k → ∞. Consequently,

$$\|P(\nu)(t,x)\| = \|u(t,x)\| = v(t,x)^p \le (P(V)(t,x))^p \le (P(V)^*(x))^p,$$

and so P(ν)* ∈ L¹(R^d). Thus P(ν) ∈ H^1(H^{d+1}), and so there exists f ∈ H^1(R^d) such that P(ν) = P(f). Thus ν = f dλ.

10.8 The Dirac operator on a Riemannian manifold

So far, we have considered the Dirac operator, and its properties, defined on functions on an open subset of Euclidean space. Even more important are the properties of a Dirac operator defined on the sections of a vector bundle over a Riemannian manifold. These properties play a large part in proofs of the Atiyah-Singer index theorem. It is unfortunate that a substantial amount of knowledge of Riemannian geometry is needed in order to give an accurate and detailed account of this; instead we shall give a brief and superficial description of what is involved, if only to encourage the interested reader to consult fuller accounts. To do so, we assume some rudimentary knowledge of differential geometry.

A smooth differential manifold M is a Riemannian manifold if there is a smoothly varying inner product, defined on the fibres T_x of the tangent bundle TM of M. Using this, it is possible to define a smooth Clifford algebra bundle A(TM) over M; if x ∈ M then the fibre A_x is a universal Clifford algebra for the inner-product space T_x. As usual, we consider T_x as the space of vectors in A_x.

We can now define Dirac bundles over M. A vector bundle S over M is a Dirac bundle over M if, first, each fibre S_x is a left A_x-module. Secondly, we require S to be a Riemannian manifold with a canonical torsion-free connection ∇ which satisfies the following compatibility conditions:

(i) if a is a unit vector in T_x and s ∈ S_x then ⟨a.s, a.s⟩_x = ⟨s, s⟩_x. Thus

$$\langle a.s, s\rangle_x = \langle a^2.s, a.s\rangle_x = -\langle s, a.s\rangle_x,$$

so that a defines a skew-symmetric operator on the fibre S_x;

(ii) if a is a smooth cross-section of A(TM) and s is a smooth cross-section of S, then ∇(a.s) = (∇a).s + a.(∇s).

As an example, the Clifford algebra bundle A(TM) is itself a Dirac bun-


dle over M . It is however not always possible to define a corresponding spinor bundle. The Dirac operator D is now defined as a first-order linear differential operator on the vector space Γ(S) of smooth sections of the Dirac bundle S. If x ∈ M then d X Ds(x) = ej ∇ej s(x), j=1

where (e1 , . . . , ed ) is an orthogonal basis for the fibre Tx ; as in the Euclidean case, this definition does not depend on the choice of orthogonal basis. It then turns out that D2 is an elliptic essentially self-adjoint operator on Γ(S). Further   d d X X D2 s = e i ∇ei  ej ∇ej s i=1

=−

d X i=1

j=1

∇2ei s +

X

ei ej (∇ei ∇ej − ∇ej ∇ei )s.

1≤i 0. If g −1 is as above, and j + k = n then π(g)hj,k (z, w) = (az + bw)j (−¯bz + a ¯w)k so that π(g)(hj,k ) ∈ Vn , and the vector space Vn is G-invariant. Let πn be the restriction of π to πn . Theorem 11.2.1

Each representation πn : G → L(Vn ) is irreducible.

Proof The trivial representation V0 is irreducible. Suppose that n > 0. It is enough to show that if T ∈ L(Vn ) and T πn (g) = πn (g)T for all g ∈ G then T is a multiple of the identity. First, let µ be irrational, and let a = eiµπ , so that all the powers of a, positive and negative, are different. Let     a 0 a 0 ga = = . 0 a−1 0 a ¯ Then ga ∈ SU (2) and if j + k = n then πn (ga )(hj,k )(z, w) = (¯ az)j (aw)k = ak−j hj,k (z, w), so that hj,k is an eigenvector of πn (ga ), with eigenvalue ak−j . Thus


Representations of Spin_d and SO(d)

π_n(g_a) has distinct eigenvalues. Since Tπ_n(g_a) = π_n(g_a)T, it follows that T(h_{j,k}) = λ_{j,k}h_{j,k}, for some λ_{j,k} ∈ C.

Next consider the rotation

$$R = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} \quad\text{with inverse}\quad R^{-1} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}.$$

Then R ∈ SU(2) and

$$\pi_n(R)(h_{n,0})(z,w) = \frac{(z+w)^n}{2^{n/2}} = \frac{1}{2^{n/2}}\sum_{j=0}^n \binom{n}{j} z^j w^{n-j},$$

so that

$$\pi_n(R)(h_{n,0}) = \frac{1}{2^{n/2}}\sum_{j=0}^n \binom{n}{j} h_{j,n-j}$$

and

$$T(\pi_n(R)(h_{n,0})) = \frac{1}{2^{n/2}}\sum_{j=0}^n \binom{n}{j} \lambda_{j,n-j}\, h_{j,n-j}.$$

On the other hand,

$$\pi_n(R)(T(h_{n,0})) = \lambda_{n,0}\,\pi_n(R)(h_{n,0}) = \frac{1}{2^{n/2}}\sum_{j=0}^n \binom{n}{j} \lambda_{n,0}\, h_{j,n-j}.$$

Thus λ_{j,n−j} = λ_{n,0} for all 0 ≤ j ≤ n, and T = λ_{n,0}I.

Thus λj,n−j = λn,0 for all 0 ≤ j ≤ n, and T = λn,0 I. In fact, these are all the irreducible representations of SU (2). Theorem 11.2.2 If π is an irreducible representation of SU (2) then π is equivalent to πn for some n ∈ Z. Proof This requires some basic representation theory. See [BtD], Proposition II.5.3.

11.3 Representations of Spin_d and SO(d) for d ≤ 4

We now consider the irreducible representations of Spin_d and SO(d) for d = 2, 3 and 4.

The groups Spin₂ and SO(2) are both abelian, and are isomorphic to T. Thus the irreducible representations of each are the characters, and in each case the dual group is isomorphic to Z. If γ is a character on SO(2), then γ ∘ α is a character on Spin₂. Since α(e^{te_Ω})x = e^{2te_Ω}x, the mapping γ → γ ∘ α maps the group SO(2)′ of characters on SO(2) onto


the group Spin₂′ of characters on Spin₂ whose elements correspond to the even integers in Z.

The group Spin₃ is isomorphic to SU(2), and so we can identify the irreducible representations of Spin₃ with the irreducible representations {π_n : n ∈ Z⁺} of SU(2). Let us consider this in a little detail. Let us, as in Section 7.5, use the isomorphism j : A_{0,3} → M₂(C) defined by the Pauli spin matrices. If we set f_i = e_i e_Ω, so that j(f_i) = τ_i, the associated Pauli matrix, for 1 ≤ i ≤ 3, and consider the isomorphism j of Spin_{0,3} onto SU(2), we find that

$$j(e^{tf_1}) = \begin{pmatrix} \cos t & i\sin t \\ i\sin t & \cos t \end{pmatrix}, \quad j(e^{tf_2}) = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}, \quad j(e^{tf_3}) = \begin{pmatrix} e^{it} & 0 \\ 0 & e^{-it} \end{pmatrix}.$$

Let us set γ_n = π_n ∘ j : Spin_{0,3} → GL(V_n). Then

$$\gamma_n(e^{tf_1})h_{j,k}(z,w) = (z\cos t - iw\sin t)^j(-iz\sin t + w\cos t)^k,$$
$$\gamma_n(e^{tf_2})h_{j,k}(z,w) = (z\cos t - w\sin t)^j(z\sin t + w\cos t)^k,$$
$$\gamma_n(e^{tf_3})h_{j,k}(z,w) = e^{i(k-j)t}\, z^j w^k.$$

Now γ_n(−I) = (−1)^n γ_n(I), so that we obtain an irreducible representation of SO(3) if and only if n is even. If g ∈ SO(3) and g = α(g̃), let us set γ̃_{2n}(g) = γ_{2n}(g̃).

Now γn (−I) = (−1)n γn (I), so that we obtain an irreducible representation of SO(3) if and only if n is even. If g ∈ SO(3) and g = α(˜ g ), let us set γ˜2n (g) = γ2n (˜ g ). Theorem 11.3.1 For each even integer 2n, γ˜2n is an irreducible representation of SO(3). Every irreducible representation π of SO(3) is equivalent to a representation γ˜2n , for some even integer 2n. Proof 11.1.

This essentially follows from the remarks at the end of Section

Further, if Ri,t is a rotation in SO(3) about the ith axis through an angle t, then Ri,t = α(etfi /2 ), for 1 ≤ i ≤ 3, and so we can use the formulae above to calculate γ˜2n (Ri,t ). While we are about it, let us describe the irreducible representations of O(3) and U (2). We need the following theorem.


Theorem 11.3.2 Suppose that π₁ : G₁ → L(V₁) and π₂ : G₂ → L(V₂) are irreducible representations. Then the representation π₁ ⊗ π₂ of G₁ × G₂ in L(V₁ ⊗ V₂) defined by

$$(\pi_1 \otimes \pi_2)(g_1, g_2)(v_1 \otimes v_2) = \pi_1(g_1)(v_1) \otimes \pi_2(g_2)(v_2)$$

is irreducible. Every irreducible representation of G₁ × G₂ is equivalent to one of these representations.

Proof

See [BtD] Proposition II.4.14.

Theorem 11.3.3 For each even integer 2n there is a unique irreducible representation σ_{2n} : O(3) → L(V_{2n}) such that σ_{2n}(g) = γ̃_{2n}(g) for g ∈ SO(3) and σ_{2n}(g) = σ_{2n}(−g) for g ∈ O(3) \ SO(3), and there is a unique irreducible representation τ_{2n} : O(3) → L(V_{2n}) such that τ_{2n}(g) = γ̃_{2n}(g) for g ∈ SO(3) and τ_{2n}(g) = −τ_{2n}(−g) for g ∈ O(3) \ SO(3); every irreducible representation of O(3) is equivalent to one of these representations.

Proof For O(3) is isomorphic to SO(3) × D₂.

Next, let us consider the irreducible representations of U(2). The mapping β : (e^{iθ}, g) → e^{iθ}g is a surjective homomorphism of T × SU(2) onto U(2), with kernel {(1, I), (−1, −I)}. Thus we have a short exact sequence

Next, let us consider the irreducible representations of U (2). The mapping β : (eiθ , g) → eiθ g is a surjective homomorphism of T × SU (2) onto U (2), with kernel {(1, I), (−1, −I)}. Thus we have a short exact sequence 1

- D2

⊆ - T × SU (2) β- U (2)

- 1.

Every irreducible representation of T × SU (2) is of the form πm,n (eiθ , g) = eimθ πn (g), for some (m, n) ∈ Z×Z+ . Since πm,n ((−1, −I) = (−I)m+n , the irreducible representations of U (2) are derived from the representations {πm,n : m + n even}. Note also that det g = e2iθ . If n = 2k is even, we therefore have a sequence (˜ πm,2k )m∈Z of irreducible m representations defined by π ˜m,2k (g) = (det g) π2k (h). If n = 2k + 1 is odd, then we have a sequence (˜ πm,2k+1 )m∈Z of irreducible representaiθ tions defined by π ˜m,2k+1 (g) = e (det g)m π2k+1 (h). Finally, let us consider the irreducible representations of Spin4 and O(4). The group Spin4 is isomorphic to SU (2)×SU (2), and so it follows from Theorem 11.3.2 that every irreducible representation of Spin4 is isomorphic to a representation γm ⊗ γn , for (m, n) ∈ Z2 . The tensor product Vm ⊗ Vn can be considered as a linear subspace of C(S 3 × S 3 ): if (g, h) ∈ SU (2) ⊗ SU (2) and f ∈ Vm ⊗ Vn ⊆ C(S 3 × S 3 ), then γm ⊗ γn (g, h)(f )(s, t) = f (πm (g −1 )(s), πn (h−1 )(t)).


The representations γ_m ⊗ γ_n and γ_n ⊗ γ_m are equivalent, and so every irreducible representation of Spin₄ is equivalent to some γ_m ⊗ γ_n, with m ≤ n. Since (γ_m ⊗ γ_n)(−1, −1) = 1 if and only if m + n is even, it follows that every irreducible representation of SO(4) comes from an irreducible representation γ_m ⊗ γ_n with m ≤ n and m + n even.

Exercise

1. Suppose that 0 ≤ n₁ ≤ m₁ and 0 ≤ n₂ ≤ m₂ and that m₁n₁ = m₂n₂. Show that the irreducible representations π_{m₁} ⊗ π_{n₁} and π_{m₂} ⊗ π_{n₂} are isomorphic if and only if m₁ = m₂, n₁ = n₂.

12 Some suggestions for further reading

This book is only an introduction to Clifford algebras. Here are some suggestions for further reading; they are not meant to provide a comprehensive bibliography, but rather to indicate where to go next.

The algebraic environment

Further results about multilinear mappings and tensor products are given in standard textbooks, such as Cohn [Coh], Jacobson [Jac] and Mac Lane and Birkhoff [MaB]. The results are presented in the more general setting of modules over a commutative ring. This leads to serious problems which do not arise in the vector space case. The idea of considering representations of algebras in terms of modules extends to infinite-dimensional algebras, such as C*-algebras. A good starting point for this is the book by Lance [Lan].

The proof of the existence of the tensor product of two modules over a commutative ring, as described in the remark at the end of Section 3.2, does not lead to a simple description of the structure of the tensor product, and there are also problems with torsion. Tensor products of vector spaces are so much more straightforward that they deserve to be treated separately.

Quadratic spaces

Lam [Lam] is the standard work on quadratic forms, and is a goldmine of mathematics. It considers quadratic forms over fields not of characteristic 2. Here, extra complications arise; many of these concern the nature of the set F² of elements of a field which are squares, and involve interesting problems in number theory. On the other hand, many of the results of Chapter 3 carry over to this more general situation. For


example, the proof of the Cartan-Dieudonné theorem (Theorem 4.8.1), which is due to Artin, applies in this more general situation.

Clifford algebras

There are many books on Clifford algebras. The works by Artin [Art] and Chevalley [Che] are now principally of historic interest. As Bourguignon writes in his Postface, 'Chevalley's style is as dry and systematic as possible'; Bourguignon's Postface is itself well worth reading.

Of the books that are concerned with real Clifford algebras, let us first mention Lounesto [Lou]. This is a discursive account; it begins by considering many examples, of Clifford algebras, of spin groups and of spinors, and does not give the definition of a general Clifford algebra until the second half of the book. The examples are interesting; the book is neither dry nor systematic.

The two books by Porteous [Por1, Por2] approach Clifford algebras from a geometric viewpoint, and therefore make up for the lack of geometry in the present work. They consider topics not included, or barely touched on, in the present book: the classical groups and their relation to Clifford algebras, the Cayley algebra, conformal groups, and triality. This last topic concerns a group of automorphisms of Spin₈ of order 3 which does not project down to SO(8).

These topics are also studied in the book by Harvey [Har]. This comes in two separate parts; the first is concerned with normed algebras and calibrations, the second with Clifford algebras and spinors. The presentation of Clifford algebras is independent of Part I, but the two parts come together when spinor spaces are equipped with a natural inner product. Harvey suggests that the reader may wish to start the book with Part II. Do so; this is a most enjoyable book. He claims that his book is intended to be a collection of examples; it includes many more calculations than the present book does, but presented in a very different way.

The books by Artin and Chevalley are concerned with Clifford algebras over general fields. It is also possible to construct Clifford algebras based on modules with coefficients in a commutative unital algebra; these are studied in the book by Hahn [Hah] and the recent magisterial book by Helmstetter and Micali [HeM].

Clifford algebras can be constructed for infinite-dimensional spaces. Plymen and Robinson [PlR] start with a real infinite-dimensional inner-product space, and use it to construct a complex Clifford algebra; from this they construct a C*-Clifford algebra and a von Neumann Clifford algebra, and study the properties of these algebras. Similar algebras are


considered by Meyer [Mey], but are approached from a very different direction; Meyer treats the subject as a branch of non-commutative, or quantum, probability theory.

Clifford algebras and harmonic analysis

The book by Brackx, Delanghe and Sommen [BDS] gives a comprehensive account of Clifford analysis. The most useful starting point, however, is the book by Gilbert and Murray [GiM]. This begins by developing the theory of Clifford algebras, and then considers Dirac operators, Riesz transforms and Clifford analyticity, operators of Dirac type and the representation theory of spin groups (particularly in the Euclidean case), and ends with a brief account of the Atiyah-Singer index theorem. The lecture notes by Alan McIntosh, 'Clifford algebras, Fourier series and harmonic functions in a Lipschitz domain', in [Rya], also provide a comprehensive introduction to Clifford analysis (although McIntosh works with complex Clifford algebras), and there is much else of interest in these conference proceedings.

The Riesz transforms are discussed in the classic works by Stein [Ste] and Stein and Weiss [StW], but without placing them in the context of Clifford algebras. Good accounts of the background theory of Hardy spaces are given in the books by Garcia-Cuerva and Rubio de Francia [GRF] and by Duoandikoetxea [Duo].

Clifford analysis on Riemannian manifolds

In their proof of the index theorem [AtS], Atiyah and Singer defined Dirac operators on certain Riemannian manifolds. In the process they presented Clifford algebras in a new way, which has been followed ever since. The paper by Atiyah, Bott and Shapiro [ABS] gives an account of this: it is well worth reading.

The book by Lawson and Michelsohn [LaM] is the standard work on Clifford analysis on Riemannian manifolds; the authors describe their work as a 'modest introduction', whose purpose is 'to give a leisurely and rounded presentation' of the results of Atiyah and Singer; it is much, much more than this. It does however require a good grounding in the theory of Riemannian manifolds.

Subsequent versions of the index theorem relate the Clifford analysis to the asymptotics of the heat kernel on the Riemannian manifold. Short accounts are given by Gilbert and Murray [GiM] and by Roe [Roe]; an encyclopedic account is given by Berline, Getzler and Vergne [BGV]. Al-


though this book includes a preliminary chapter on differential geometry, it makes heavy demands on the reader.

Spin groups and representation theory

Spin groups have long been considered as double covers of classical Lie groups. There is a great difference between the theory of the representations of compact Lie groups and the theory of the representations of more general locally compact Lie groups, and it is well worth starting with the former. The excellent book by Bröcker and tom Dieck [BtD] is the standard text on this, and another good account is given in the recent book by Sepanski [Sep]. Both books explain the use of Clifford algebras and spin groups in representation theory.

Applications to physics

Finally, let us mention some books that link Clifford algebras with physics. The book by Doran and Lasenby [DoL] proceeds at a leisurely pace. Clifford algebras (called here by their alternative name 'geometric algebras') are introduced and studied as fundamental tools for the description of the physical world, rather than as objects in their own right, and much of the attention is concentrated on three-dimensional Euclidean space and four-dimensional space-time. Nevertheless, a great deal of physical theory is expressed in a convincing way in terms of Clifford algebras. Similar remarks apply to the articles in [Bay].

The two volumes by Penrose and Rindler [PeR] use Clifford algebras to explore space-time in a very geometric manner.

Many writers on Clifford algebras confess their lack of knowledge of theoretical physics. I must do the same; I have found the book by Sudbery [Sud] helpful in making connections with the mathematical theory of quantum mechanics.

References

[Art] E. Artin, Geometric Algebra, John Wiley, New York, 1988.
[ABS] M. F. Atiyah, R. Bott and A. Shapiro, Clifford modules, Topology 3 (Supplement 1) (1964), 3-38.
[AtS] M. F. Atiyah and I. M. Singer, The index of elliptic operators on compact manifolds, Bull. Amer. Math. Soc. 69 (1963), 422-433.
[Bay] William E. Baylis (editor), Clifford (Geometric) Algebras, Birkhäuser, New York, 1996.
[BGV] Nicole Berline, Ezra Getzler and Michèle Vergne, Heat Kernels and Dirac Operators, Springer, Berlin, 1996.
[BDS] F. Brackx, R. Delanghe and F. Sommen, Clifford Analysis, Pitman, London, 1983.
[BtD] Theodor Bröcker and Tammo tom Dieck, Representations of Compact Lie Groups, Springer, Berlin, 1995.
[Che] Claude Chevalley, The Algebraic Theory of Spinors and Clifford Algebras, Collected Works, Volume 2, Springer, Berlin, 1997.
[Cli1] W. K. Clifford, Applications of Grassmann's extensive algebra, Amer. J. Math. 1 (1876), 350-358.
[Cli2] W. K. Clifford, On the classification of geometric algebras, Mathematical Papers, William Kingdon Clifford, AMS Chelsea Publishing, 2007, 397-401.
[Coh] P. M. Cohn, Classic Algebra, John Wiley, Chichester, 2000.
[Dir] P. A. M. Dirac, The Principles of Quantum Mechanics, Fourth Edition, Oxford University Press, 1958.
[DoL] Chris Doran and Anthony Lasenby, Geometric Algebra for Physicists, Cambridge University Press, Cambridge, 2003.
[Duo] Javier Duoandikoetxea, Fourier Analysis, Amer. Math. Soc., Providence, RI, 2001.
[GRF] J. Garcia-Cuerva and J. L. Rubio de Francia, Weighted Norm Inequalities and Related Topics, North Holland, Amsterdam, 1985.
[GiM] John E. Gilbert and Margaret M. E. Murray, Clifford Algebras and Dirac Operators in Harmonic Analysis, Cambridge University Press, Cambridge, 1991.
[Hah] Alexander J. Hahn, Quadratic Algebras, Clifford Algebras and Arithmetic Witt Groups, Springer, Berlin, 1994.
[Har] F. Reese Harvey, Spinors and Calibrations, Academic Press, San Diego, CA, 1990.
[HeM] Jacques Helmstetter and Artibano Micali, Quadratic Mappings and Clifford Algebras, Birkhäuser, Basel, 2008.
[Jac] Nathan Jacobson, Basic Algebra I and II, W. H. Freeman, San Francisco, CA, 1974, 1980.
[Lam] T. Y. Lam, Introduction to Quadratic Forms over Fields, Amer. Math. Soc., Providence, RI, 2004.
[Lan] E. C. Lance, Hilbert C*-modules, London Mathematical Society Lecture Notes 210, Cambridge University Press, Cambridge, 1995.
[LaM] H. Blaine Lawson, Jr. and Marie-Louise Michelsohn, Spin Geometry, Princeton University Press, Princeton, NJ, 1989.
[Lou] Pertti Lounesto, Clifford Algebras and Spinors, London Mathematical Society Lecture Notes 286, Cambridge University Press, Cambridge, 2001.
[MaB] Saunders Mac Lane and Garrett Birkhoff, Algebra, Third Edition, AMS Chelsea, 1999.
[Mey] Paul-André Meyer, Quantum Probability for Probabilists, Springer Lecture Notes in Mathematics 1538, Springer, Berlin, 1993.
[PeR] R. Penrose and W. Rindler, Spinors and Space-time I and II, Cambridge University Press, Cambridge, 1984, 1986.
[PlR] R. J. Plymen and P. L. Robinson, Spinors in Hilbert Space, Cambridge University Press, Cambridge, 1994.
[Por1] I. R. Porteous, Topological Geometry, Cambridge University Press, Cambridge, 1981.
[Por2] I. R. Porteous, Clifford Algebras and the Classical Groups, Cambridge University Press, Cambridge, 1995.
[Roe] John Roe, Elliptic Operators, Topology and Asymptotic Methods, Second Edition, Chapman and Hall/CRC, London, 2001.
[Rya] John Ryan (editor), Clifford Algebras in Analysis and Related Topics, CRC Press, Boca Raton, FL, 1996.
[Sep] Mark R. Sepanski, Compact Lie Groups, Springer, Berlin, 2007.
[Ste] Elias M. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton University Press, Princeton, NJ, 1970.
[StW] Elias M. Stein and Guido Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, Princeton, NJ, 1971.
[Sud] Anthony Sudbery, Quantum Mechanics and the Particles of Nature, Cambridge University Press, Cambridge, 1986.

Glossary

Ad'_g Adjoint conjugation (137)
A(E, q), A_{p,m}, A_d Universal Clifford algebras (88, 90)
A^+(E, q), A^+_{p,m}, A^+_d The even subalgebra of a universal Clifford algebra (92)
A^-(E, q), A^-_{p,m}, A^-_d The odd part of a universal Clifford algebra (92)
A^k(E, F), A^k(E) Spaces of alternating linear mappings and forms (45)
A_n The group of even permutations of {1, . . . , n} (9)
An(E, q) The anisotropic elements of (E, q) (69)
A^opp The opposite algebra (18)
Aut(G) The group of automorphisms of the group G (8)
B, B̃ The magnetic field (159)
curl The curl operator (166)
B(E, q), B_{p,m}, B_d Non-universal Clifford algebras (99)
B(E_1, E_2; F), B(E_1, E_2), B(E) Spaces of bilinear mappings and forms (37)
C, C* The complex numbers, the non-zero complex numbers (1, 10)
C_G(A), C_A(B) The centralizer of a set A in a group G, of a set B in an algebra A (9, 18)
C(U, F), C^k(U, F) The space of continuous, k times continuously differentiable, functions on U with values in F (156)
D, D̄ The augmented Dirac operator and its conjugate (170)
d_φ An annihilation operator (47)
D, D_{2n} The full dihedral group, the dihedral group of order 2n (10)
D_2 The multiplicative group {1, −1} (9)
D_q The standard Dirac operator (157)
det The determinant (18, 45)
dim E The dimension of a vector space E (12)
e^a The exponential function (20)
e_Ω A volume element (96)
E, Ẽ The electric field (159)
(E, q) A quadratic space (61)
End(A) The algebra of unital automorphisms of a unital algebra (18)
End_A(M) The algebra of A-automorphisms of an A-module M (29)
G(A) The group of invertible elements of a unital algebra (19)
G(E, q) The group of invertible elements of A(E, q) (101)
GL(E), GL_d(K) The general linear group of operators, of matrices (19)
Gp(A), Gp(g) The group generated by A, by {g} (7)
H, H*, H_1 The quaternions, non-zero quaternions, quaternions of norm 1 (25, 27)
H_{2p} Standard hyperbolic space of dimension 2p (67)
H_d(C) The space of d × d Hermitian matrices (146)
H^{d+1} The upper half-space (167)
H(f) The Hilbert transform of f (170)
H^p(H^{d+1}) A Hardy space (169)
Hom_A(M_1, M_2) The space of A-module homomorphisms from M_1 to M_2 (29)
HU(2) The hyper-unitary, or symplectic, group (147)
ĩ The canonical inclusion (94)
Iso(E, q) The isotropic elements of (E, q) (69)
J A matrix representing a quarter-turn rotation of the plane (23)
J, J̃ The current density (159)
K_j The jth Riesz kernel (175)
L(E, F), L(E) The vector space of linear mappings of E into F, of E into itself (14)
L_h(H) The space of Hermitian operators on a Hilbert space H (155)
m_x A creation operator (47)
M(E_1, . . . , E_k; F), M^k(E; F), M(E_1, . . . , E_k), M^k(E) Spaces of multilinear mappings and forms (36)
M(R^d) The space of signed Borel measures on R^d (168)
M_{m,n}, M_d(A) The vector space of m × n matrices, of d × d matrices with values in an algebra (13, 17)
N(T), n(T) The null-space of a linear mapping, its nullity (13)
o(G), o(g) The order of the group G, of the element g (8)
N, N_*, N_{±1}, N_1 Subgroups of G(E, q) (102)
N^+, N_*^+, N_{±1}^+, N_1^+ Subgroups of G(E, q) ∩ A^+(E, q) (102)
O(E, q), O(p, m), O(d) The orthogonal group (71)
(p, m) The signature of a quadratic form (65)
Pin(E, q) The Pin group (139)
Pu(H), Pu(A) The pure quaternions, the pure elements of an algebra A (25, 43)
Q The quaternionic group (11)
Q A matrix representing a reflection in the plane, or the quadratic form on a universal Clifford algebra (23)
rank(T) The rank of T (13)
R, R* The real numbers, the non-zero real numbers (9)
R^{p,m}, R^d Standard regular quadratic space, standard Euclidean space (67)
R_θ Rotation of R^2 through an angle θ (74)
span(A) The span of A (12)
S^k(E, F), S^k(E) Spaces of symmetric k-linear mappings and forms (49)
S_{p,m} A spinor space (115)
SO(E, q), SO(p, m), SO(d) The special orthogonal group (72)
Spin(E, q) The Spin group (139)
Spin^+(E, q), Spin^-(E, q) Elements of Spin(E, q) with quadratic norm 1, −1, respectively (139)
SU(d) The special unitary group (84)
T The group {z ∈ C : |z| = 1} (10)
TM The tangent bundle (177)
U A matrix representing a reflection in the plane (23)
U(E, ⟨·, ·⟩), U(d) The unitary group (84)
w = min(p, m) The Witt index of a quadratic form (66)
Z The integers (9)
Z_2 The additive group {0, 1} = Z/2Z (9)
Z(G), Z(A) The centre of a group G, of an algebra A (9)
α(g) Adjoint conjugation (137)
γ_0, . . . , γ_3 The Dirac matrices (126)
Γ_{3,1} The Dirac algebra (126)
Γ(E, q) The Clifford group of (E, q) (137)
Γ(S) The space of smooth sections of a bundle S (178)
Δ, Δ_q The Laplacian (157)
Δ The quadratic norm function (26, 101)
ρ_x A simple reflection in the direction x (76)
σ_0, . . . , σ_3 The Pauli spin matrices (24)
Σ_X, Σ_n The group of permutations of X, of {1, . . . , n} (9)
τ, τ_n The trace, the normalized trace (43)
τ_0, . . . , τ_3 The associate Pauli matrices (24)
Ω = Ω_d The set {1, . . . , d} (89)
∇ A connection (177)
a → a', a → a*, a → ā The principal automorphism, the principal anti-automorphism, conjugation, in a universal Clifford algebra (91-93)
(a_1 ⊗ b_1) ⊗_g (a_2 ⊗ b_2) The graded tensor product of a_1 ⊗ b_1 and a_2 ⊗ b_2 (56)
A ⊗_g B The super-algebra product of A and B (56)
A^⊥ The annihilator of A, or the set orthogonal to A (15)
[C]_A, [c_1, . . . , c_n]_A The A-submodule generated by C, by {c_1, . . . , c_n} (29)
E', E'' The dual, the bidual of E (14)
E_1 ⊗ · · · ⊗ E_k The tensor product of E_1, . . . , E_k (38)
⊗^*(E) The tensor algebra of E (40)
∧^k(E) The kth exterior product of E (45)
∧^*(E), ∧^+(E), ∧^-(E) The exterior algebra of E, and its odd and even parts (46)
g^* The maximal function of g (169)
T_1 ⊗ · · · ⊗ T_k The tensor product of linear operators T_1, . . . , T_k (41)
T' The transposed mapping (14)
T^a, (t^a_{ij}) The adjoint mapping, the adjoint matrix (68)
T^* The adjoint mapping, in a complex inner-product space (84)
x_1 ⊗ · · · ⊗ x_k An elementary tensor (38)
x_1 ⊗_s · · · ⊗_s x_k An elementary symmetric tensor (49)
⊗^k_s(E) The space of symmetric k-linear forms on E (49)
⊗^k_s(E), ⊗^*_s(E) The kth symmetric tensor product of E, the symmetric tensor algebra of E (50)
x_1 ∧ · · · ∧ x_k The alternating or wedge product of x_1, . . . , x_k (45)
⟨x, y⟩ The inner product of x and y (62)

Index

adjoint, or adjoint mapping, 68, 84
  matrix, 69
algebra, 16
algebra homomorphism, 16
algebra isomorphism, 17
  opposite, 17, 93
  pure elements, 43
  simple, 20
  unital, 16
  unital homomorphism, 16
  Z2-graded, 19
alternating, 44
  product, 45
angular momentum, intrinsic, 156
anisotropic, 69
annihilation operator, 51, 63, 90
annihilator, 14
anti-involution, principal, 94
Atiyah-Singer index theorem, 177, 188
automorphism, 8
  principal, 84
basis, 12
  hyperbolic, 66
  orthonormal, 64
  standard, 12
  standard orthogonal, 64
bidual, 14
bilinear, 28, 37
  form, 37
  non-singular form, 37
bivector, 101, 160
Borel measure, 168
boson, 156
C*-algebra, 187
  Clifford, 187
  von Neumann, 187
canonical inclusion, 94
Cartan-Dieudonné theorem, 82, 85, 187
Cartan's periodicity law, 111
Cauchy integral formula, 167
Cauchy-Riemann equations, 171
  augmented, 171, 174
  generalized, 166, 174
Cauchy-Schwarz inequality, 173
Cayley, 80
centralizer, of an algebra, 18, 95
  of a group, 9
centre, of an algebra, 18, 95
  of a group, 9
character, 102, 179, 182
Clifford, W. K., 108
Clifford algebra, 86
  complex, 113
  even, 92
  universal, 88
  tabulated, 112
Clifford analytic, 157
Clifford bundle, 177
Clifford group, 138
Clifford mapping, 86
Clifford's theorem, 108
commutative ring, 186
conjugation
  adjoint, 137
  of quaternions, 26
  of quaternionic matrices, 147
  of a Clifford algebra, 94
connection, 177
creation operator, 47, 51, 90
cross-section, smooth, 177
curvature tensor, 177
cyclic vector, 29
determinant, 19, 47
diagonalization, 64
dilation, 82
dimension, 12
Dirac,
  algebra, 126
  bundle, 177
  equation, 126, 161
  matrices, 126
  operator, 156
    augmented, 170
    standard, 157, 164
Dirichlet problem, 168
division algebra, 31
double cover, 9
dual, or dual space, 14
dual basis, 14
Einstein, 160
elementary tensor, 38
electric field, 159
electromagnetic field, 159
endomorphism, 17
Euclidean space, 61, 62
evaluation mapping, 38
even, 19
exact sequence, 8
  short, 8
exponential function, 20
extension by linearity, 14
exterior derivative, 166
exterior product, 45
Fock space, bosonic, 51
  fermionic, 51
Frobenius' theorem, 27, 31, 104
Geometric algebra, 189
graded isomorphism, 110
  tensor multiplication, 56
  vector representation, 138
group, 7
  abelian, or commutative, 7
  cyclic, 8
  dihedral, of order 2n, 10
  Euclidean, 71
  full dihedral, 10
  general linear, 19
  hyperbolic, 71
  hyper-unitary, 147
  Lie, 179
    compact, 189
  Lorentz, 71
  Minkowski, 71
  orthogonal, 71
    special, 72
  Pin, 139
  quaternionic, 11
  quotient, 8
  simple, 8
  Spin, 139
  symplectic, 147
  unitary, 84
    special, 84
Hamilton, 25, 80
Hardy space, 169, 188
harmonic and q-harmonic functions, 157
heat kernel, 188
Hermitian, mapping, or operator, 84, 156
  space, 84
  standard H-Hermitian form, 148
Hilbert space, 83, 155, 161
Hilbert transform, 170
homogeneous, 19
  polynomial, 181
homomorphism, 8
hyperbolic basis, 66
  space, 66
ideal, 20
  left, 19
  proper, 20
  right, 19
  two-sided, 20
idempotent, 18, 98
index theorem, Atiyah-Singer, 177, 194
inner-product space, 62
  complex, 83
intertwining operator, 141, 180
invariant subspace, 20, 180
inverse, 19
invertible, 19
involution, 18, 76, 92
irreducible, 20, 30
isometry, 10, 26, 71
isotropic, 69
  totally, 69
Klein-Gordon equation, 162
Laplacian, 157, 171
  of Laplace type, 178
left inverse, 19
linear mapping, 13
linear subspace, 12
linearly independent, 12
Lorentz space, 66, 67
  standard, 67
  transformation, 159
magnetic field, 159, 160
matrix, matrices, 13
  associate Pauli, 24, 55, 123
  Pauli spin, 24, 123, 126, 155
maximal function, 169
Maxwell's equations, 159
Minkowski space, 66, 67
  standard, 67
mirror, 23, 76
module,
  cyclic, 29
  direct sum, 28
  endomorphism, 29
  finitely generated, 29
  homomorphism, 29
  left A-module, 28
  left A-submodule, 29
  right A-module, 29
  semi-simple, 32
  simple, 30
multilinear, k-linear, 37, 192
  form, 38
negation, 76
nullity, 13
null-space, 13
observable, 155
odd, 19
orthogonality, 63
orthogonal, of quaternions, 26
  mapping, 71
  in a quadratic space, 63
paravector, 170, 174
path-connected, 74, 75, 79, 83
permutation, 9
physics, relativistic, 125
Poisson kernel, 168, 170, 175
  conjugate, 169
projection, 18
quadratic form, 61
  associated bilinear form, 62
  complex, 81
  negative definite, semi-definite, 62
  non-singular, 62
  positive definite, semi-definite, 62
quadratic space, 62
  regular, 63
  standard, 67
quantum mechanics, 155, 161, 189
quaternions, 24, 54
  real, 25
  pure, 25
rank, of a linear mapping, 13
  of a bilinear form, 37
rank-nullity formula, 13
reflection, 10, 23
  simple, 76
reflexive, 176
representation, 20, 22, 27
  equivalent, 180
  faithful, 20
  irreducible, 20
  left regular, 20, 43
reversal, 98
Riemannian manifold, 177
Riesz, brothers, 170, 176
Riesz kernel, 175
Riesz transform, 175
right inverse, 19
rotation, 10, 23
  simple, 79, 82
Schur's lemma, 31
semi-spinors and semi-spinor space, 115, 116
sesquilinear, 83
signature, 65
space-time, 189
special relativity, 160
spin, spin up, spin down, 156
spinors and spinor space, 114, 156, 162
  tabulated, 116
Stern-Gerlach experiment, 155
subalgebra, 18
subgroup, 7
  normal, or self-conjugate, 8
subharmonic, 172
super-algebra, 19
Sylvester's law of inertia, 65
tangent bundle, 177
tensor product, 38
  of linear operators, 41
  symmetric, 49
tensor algebra, 40
  symmetric, 49
torsion, 186
trace, 43
  form, 43
  normalized, 43, 100
transposed mapping, 14
trivector, 101, 140, 160
unitary, 180
vector space, 11
  over H, 114
volume element, 96, 99, 117, 167
wave function, 161, 162
weak convergence, 176
Wedderburn's theorem, 33, 105
wedge product, 45
Witt extension theorem, 73
  index, 66, 70