Tensor products of C-star-algebras and operator spaces 9781108479011, 9781108782081, 9781108749114


English Pages 495 Year 2020


Table of contents :
Introduction......Page 12
1.1 Completely bounded maps on operator spaces......Page 22
1.2 Extension property of B(H)......Page 29
1.3 Completely positive maps......Page 34
1.4 Normal c.p. maps on von Neumann algebras......Page 41
1.5 Injective operator algebras......Page 42
1.6 Factorization of completely bounded (c.b.) maps......Page 44
1.7 Normal c.b. maps on von Neumann algebras......Page 48
1.8 Notes and remarks......Page 50
2.1 Rows and columns: operator Cauchy–Schwarz inequality......Page 52
2.2 Automatic complete boundedness......Page 54
2.3 Complex conjugation......Page 55
2.4 Operator space dual......Page 59
2.5 Bi-infinite matrices with operator entries......Page 61
2.6 Free products of C*-algebras......Page 64
2.7 Universal C*-algebra of an operator space......Page 68
2.8 Completely positive perturbations of completely bounded maps......Page 69
2.9 Notes and remarks......Page 72
3.1 Full (=Maximal) group C*-algebras......Page 74
3.2 Full C*-algebras for free groups......Page 77
3.3 Reduced group C*-algebras: Fell’s absorption principle......Page 82
3.4 Multipliers......Page 84
3.5 Group von Neumann Algebra......Page 88
3.6 Amenable groups......Page 89
3.7 Operator space spanned by the free generators in C*λ(Fn)......Page 94
3.8 Free products of groups......Page 95
3.9 Notes and remarks......Page 96
4.1 C*-norms on tensor products......Page 98
4.2 Nuclear C*-algebras (a brief preliminary introduction)......Page 102
4.3 Tensor products of group C*-algebras......Page 103
4.4 A brief repertoire of examples from group C*-algebras......Page 106
4.5 States on the maximal tensor product......Page 107
4.6 States on the minimal tensor product......Page 110
4.7 Tensor product with a quotient C*-algebra......Page 114
4.8 Notes and remarks......Page 115
5.1 Multiplicative domains......Page 117
5.2 Jordan multiplicative domains......Page 119
5.3 Notes and remarks......Page 123
6.1 The dec-norm......Page 124
6.2 The δ-norm......Page 132
6.3 Decomposable extension property......Page 136
6.4 Examples of decomposable maps......Page 140
6.5 Notes and remarks......Page 146
7.1 (α → β)-tensorizing linear maps......Page 147
7.2 || ||max is projective (i.e. exact) but not injective......Page 152
7.3 max-injective inclusions......Page 155
7.4 || ||min is injective but not projective (i.e. not exact)......Page 161
7.5 min-projective surjections......Page 164
7.6 Generating new C*-norms from old ones......Page 168
7.7 Notes and remarks......Page 171
8.1 Biduals of C*-algebras......Page 172
8.2 The nor-norm and the bin-norm......Page 173
8.3 Nuclearity and injective von Neumann algebras......Page 174
8.4 Local reflexivity of the maximal tensor product......Page 181
8.5 Local reflexivity......Page 185
8.6 Notes and remarks......Page 190
9 Nuclear pairs, WEP, LLP, QWEP......Page 191
9.1 The fundamental nuclear pair (C*(F∞),B(ℓ2))......Page 192
9.2 C*(F) is residually finite dimensional......Page 197
9.3 WEP (Weak Expectation Property)......Page 199
9.4 LLP (Local Lifting Property)......Page 204
9.5 To lift or not to lift (global lifting)......Page 209
9.6 Linear maps with WEP or LLP......Page 213
9.7 QWEP......Page 215
9.8 Notes and remarks......Page 219
10.1 The importance of being exact......Page 221
10.2 Nuclearity, exactness, approximation properties......Page 227
10.3 More on nuclearity and approximation properties......Page 233
10.4 Notes and remarks......Page 235
11.1 Traces......Page 236
11.2 Tracial probability spaces and the space L1(τ)......Page 239
11.3 The space L2(τ)......Page 241
11.4 An example from free probability: semicircular and circular systems......Page 246
11.5 Ultraproducts......Page 249
11.6 Factorization through B(H) and ultraproducts......Page 257
11.7 Hypertraces and injectivity......Page 267
11.8 The factorization property for discrete groups......Page 270
11.9 Notes and remarks......Page 272
12.1 Connes’s question......Page 273
12.2 The approximately finite dimensional (i.e. “hyperfinite”) II1-factor......Page 280
12.3 Hyperlinear groups......Page 282
12.4 Residually finite groups and Sofic groups......Page 284
12.5 Random matrix models......Page 287
12.6 Characterization of nuclear von Neumann algebras......Page 288
12.7 Notes and remarks......Page 290
13.1 LLP ⇒ WEP?......Page 291
13.2 Connection with Grothendieck’s theorem......Page 294
13.3 Notes and remarks......Page 301
14.1 From Connes’s question to Kirchberg’s conjecture......Page 302
14.2 From Kirchberg’s conjecture to Connes’s question......Page 303
14.3 Notes and remarks......Page 307
15.1 Finite representability conjecture......Page 308
15.2 Notes and remarks......Page 310
16.1 Unitary correlation matrices......Page 311
16.2 Correlation matrices with projection valued measures......Page 314
16.3 Strong Kirchberg conjecture......Page 320
16.4 Notes and remarks......Page 321
17 Property (T) and residually finite groups: Thom’s example......Page 322
17.1 Notes and remarks......Page 327
18 The WEP does not imply the LLP......Page 328
18.1 The constant C(n): WEP ⇒ LLP......Page 330
18.2 Proof that C(n) = 2√(n−1) using random unitary matrices......Page 334
18.3 Exactness is not preserved by extensions......Page 338
18.4 A continuum of C*-norms on B⊗ B......Page 340
18.5 Notes and remarks......Page 343
19.1 Quantum coding sequences. Expanders. Spectral gap......Page 344
19.2 Quantum expanders......Page 347
19.3 Property (T)......Page 349
19.4 Quantum spherical codes......Page 352
19.5 Notes and remarks......Page 354
20 Local embeddability into C and nonseparability of (OSn,dcb)......Page 355
20.1 Perturbations of operator spaces......Page 356
20.2 Finite-dimensional subspaces of C......Page 357
20.3 Nonseparability of the metric space OSn of n-dimensional operator spaces......Page 362
20.4 Notes and remarks......Page 368
21.1 WEP as a local extension property......Page 369
21.2 WEP versus approximate injectivity......Page 373
21.3 The (global) lifting property LP......Page 375
21.4 Notes and remarks......Page 376
22.1 Complex interpolation......Page 377
22.2 Complex interpolation, WEP and maximal tensor product......Page 382
22.3 Notes and remarks......Page 393
23.1 Reduction to the σ-finite case......Page 395
23.2 A new characterization of generalized weak expectations and the WEP......Page 396
23.3 A second characterization of the WEP and its consequences......Page 399
23.4 Preliminaries on self-polar forms......Page 401
23.5 max+-injective inclusions and the WEP......Page 406
23.6 Complement......Page 414
23.7 Notes and remarks......Page 419
24.1 Full crossed products......Page 421
24.2 Full crossed products with inner actions......Page 425
24.3 B ⊗min B fails WEP......Page 429
24.4 Proof that C0(3) < 3 (Selberg’s spectral bound)......Page 438
24.5 Other proofs that C0(n) < n......Page 440
24.6 Random permutations......Page 442
24.7 Notes and remarks......Page 443
25 Open problems......Page 445
A.1 Banach space tensor products......Page 449
A.2 A criterion for an extension property......Page 450
A.4 Ultrafilters......Page 452
A.6 Finite representability......Page 454
A.7 Weak and weak* topologies: biduals of Banach spaces......Page 455
A.8 The local reflexivity principle......Page 457
A.9 A variant of Hahn–Banach theorem......Page 458
A.11 C*-algebras: basic facts......Page 459
A.12 Commutative C*-algebras......Page 461
A.13 States and the GNS construction......Page 462
A.14 On *-homomorphisms......Page 463
A.15 Approximate units, ideals, and quotient C*-algebras......Page 465
A.16 von Neumann algebras and their preduals......Page 467
A.17 Bitransposition: biduals of C*-algebras......Page 472
A.18 Isomorphisms between von Neumann algebras......Page 476
A.20 On σ-finite (countably decomposable) von Neumann algebras......Page 477
A.21 Schur’s lemma......Page 478
References......Page 481
Index......Page 493



London Mathematical Society Student Texts 96

Tensor Products of C*-Algebras and Operator Spaces The Connes–Kirchberg Problem GILLES PISIER Texas A&M University

University Printing House, Cambridge CB2 8BS, United Kingdom One Liberty Plaza, 20th Floor, New York, NY 10006, USA 477 Williamstown Road, Port Melbourne, VIC 3207, Australia 314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India 79 Anson Road, #06–04/06, Singapore 079906 Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence. www.cambridge.org Information on this title: www.cambridge.org/9781108479011 DOI: 10.1017/9781108782081 © Gilles Pisier 2020 This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 2020 Printed in the United Kingdom by TJ International Ltd. Padstow Cornwall A catalogue record for this publication is available from the British Library. ISBN 978-1-108-47901-1 Hardback ISBN 978-1-108-74911-4 Paperback Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Contents

Introduction  1
1 Completely bounded and completely positive maps: Basics  11
2 Completely bounded and completely positive maps: A tool kit  41
3 C∗-algebras of discrete groups  63
4 C∗-tensor products  87
5 Multiplicative domains of c.p. maps  106
6 Decomposable maps  113
7 Tensorizing maps and functorial properties  136
8 Biduals, injective von Neumann algebras, and C∗-norms  161
9 Nuclear pairs, WEP, LLP, QWEP  180
10 Exactness and nuclearity  210
11 Traces and ultraproducts  225
12 The Connes embedding problem  262
13 Kirchberg's conjecture  280
14 Equivalence of the two main questions  291
15 Equivalence with finite representability conjecture  297
16 Equivalence with Tsirelson's problem  300
17 Property (T) and residually finite groups: Thom's example  311
18 The WEP does not imply the LLP  317
19 Other proofs that C(n) < n: quantum expanders  333
20 Local embeddability into C and nonseparability of (OSn, dcb)  344
21 WEP as an extension property  358
22 Complex interpolation and maximal tensor product  366
23 Haagerup's characterizations of the WEP  384
24 Full crossed products and failure of WEP for B ⊗min B  410
25 Open problems  434
Appendix: Miscellaneous background  438
A.1 Banach space tensor products  438
A.2 A criterion for an extension property  439
A.3 Uniform convexity of Hilbert space  441
A.4 Ultrafilters  441
A.5 Ultraproducts of Banach spaces  443
A.6 Finite representability  443
A.7 Weak and weak* topologies: biduals of Banach spaces  444
A.8 The local reflexivity principle  446
A.9 A variant of Hahn–Banach theorem  447
A.10 The trace class  448
A.11 C∗-algebras: basic facts  448
A.12 Commutative C∗-algebras  450
A.13 States and the GNS construction  451
A.14 On ∗-homomorphisms  452
A.15 Approximate units, ideals, and quotient C∗-algebras  454
A.16 von Neumann algebras and their preduals  456
A.17 Bitransposition: biduals of C∗-algebras  461
A.18 Isomorphisms between von Neumann algebras  465
A.19 Tensor product of von Neumann algebras  466
A.20 On σ-finite (countably decomposable) von Neumann algebras  466
A.21 Schur's lemma  467
References  470
Index  482

Introduction

These lecture notes are centered around two open problems, one formulated by Alain Connes in his famous 1976 paper [61], the other one by Eberhard Kirchberg in his landmark 1993 paper [155]. At first glance, these two problems seem quite different and the proof of their equivalence described at the end of [155] is not so easy to follow. One of our main goals is to explain in detail the proof of this equivalence in an essentially self-contained way. The Connes problem asks roughly whether traces on “abstract” von Neumann algebras can always be approximated (in a suitable way) by ordinary matrix traces. The Kirchberg problem asks whether there is a unique C ∗ -norm on the algebraic tensor product C ⊗C when C is the full C ∗ -algebra of the free group F∞ with countably many generators. In the remarkable paper where he proved the equivalence, Kirchberg studied more generally the pairs of C ∗ -algebras (A,B) for which there is only one C ∗ -norm on the algebraic tensor product A ⊗ B. We call such pairs “nuclear pairs.” A C ∗ -algebra A is traditionally called nuclear if this holds for any C ∗ -algebra B. Our exposition chooses as its cornerstone Kirchberg’s theorem asserting the nuclearity of what is for us the “fundamental pair,” namely the pair (B,C ) where B = B(2 ) (see Theorem 9.6). Our presentation leads us to highlight two properties of C ∗ -algebras, the Weak Expectation Property (WEP) and the Local Lifting Property (LLP). The first one is a weak sort of extension property (or injectivity) while the second one is a weak sort of lifting property. The connection with the fundamental pair is very clear: A has the WEP (resp. LLP) if and only if the pair (A,C ) (resp. (A,B)) is nuclear. With this terminology, the Kirchberg problem reduces to proving the implication LLP ⇒ WEP, but there are many more interesting reformulations that deserve mention and we will present them in detail. For instance this problem is equivalent to the question whether every (unital) C ∗ -algebra is a quotient of one with the WEP, or equivalently, in short,


is QWEP. In passing, although the P stands for property, we will sometimes write for short that A is WEP (or A is LLP) instead of A has the WEP (resp. LLP). Incidentally, since Kirchberg (unlike Connes) explicitly conjectured a positive answer to all these equivalent questions in [155], we often refer to them as his conjectures. One originality of our treatment (although already present in [155]) is that we try to underline the structural properties (or their failure), such as injectivity or projectivity, in parallel for the minimal and the maximal tensor product of C ∗ -algebras. This preoccupation can be traced back to the “fundamental pair” itself: Indeed, we may view B as “injectively universal” and C as “projectively universal.” The former because any separable C ∗ -algebra A is a subalgebra of B, the latter because any such A is a quotient of C (see Proposition 3.39). In particular, we will emphasize the fact that the minimal tensor product is injective but not projective, while the maximal one is projective but not injective (see §7.4 and 7.2). This is analogous to the situation that prevails for the Banach space tensor products in Grothendieck’s classical work, but unlike Banach space morphisms (i.e. bounded linear maps) the C ∗ -algebraic morphisms are automatically isometric if they are injective (see Proposition A.24). The lack of injectivity of the max-norm is a rephrasing of the fact that if B1 ⊂ B2 is an isometric (or equivalently injective) ∗-homomorphism between C ∗ -algebras and A is another C ∗ -algebra, it is in general not true that the resulting ∗-homomorphism A ⊗max B1 → A ⊗max B2

(1)

is isometric (or equivalently injective). This means that the norm induced by A ⊗max B2 on the algebraic tensor product A ⊗ B1 is not equivalent to the max-norm on A ⊗ B1 . In sharp contrast, this does not happen for the minnorm: A ⊗min B1 → A ⊗min B2 is always injective (=isometric), and this is why one often says that the minimal tensor product is “injective.” This “defect” of the max-tensor product leads us to single out the class of inclusions, B1 ⊂ B2 , for which this defect disappears (i.e. (1) is injective for any A). We choose to call them “max-injective.” We will see that this holds if and only if there is a projection P : B2∗∗ → B1∗∗ with P  = 1. We will also show that if (1) is injective for A = C then it is injective for all A. It turns out that a C ∗ -algebra A is WEP if and only if the embedding A ⊂ B(H ) is max-injective or, equivalently, if and only if there is a projection P : B(H )∗∗ → A∗∗ with P  = 1. All these facts have analogues for the min-tensor product, but now its “defect” is the failure of “projectivity,” meant in the following sense: Let q : B1 → B2 be a surjective ∗-homomorphism and let A be any C ∗ -algebra. Let I = ker(q). Then, although the associated


∗-homomorphism qA : A ⊗min B1 → A ⊗min B2 is clearly surjective (indeed, it suffices for that to have a dense range), its kernel may be strictly larger than A ⊗min I. As a result, the min-norm on the algebraic tensor product A ⊗ B2 (= A ⊗ (B1 /I)) may be much smaller than the norm induced on it by (A ⊗min B1 )/(A ⊗min I). In sharp contrast, this “defect” does not happen for the max-norm and we always have an isometric identification A ⊗max (B1 /I) = (A ⊗max B1 )/(A ⊗max I). Again this defect of the min-norm leads us to single out the quotient maps (i.e. the surjective ∗-homomorphisms) q : B1 → B2 for which the defect does not appear, i.e. the maps such that for any A we have an isometry A ⊗min B2 = (A ⊗min B1 )/(A ⊗min I).

(2)

Here again, we can give a rather neat characterization of such maps, this time as a certain form of lifting property, see §7.5. It turns out that if (2) holds for A = B then it holds for all C ∗ -algebras A. We call such a map q a “minprojective surjection.” The usual terminology to express that (2) holds for any A is that B1 viewed as an extension of B2 by I is a “locally split extension” (we prefer not to use this term). This notion is closely connected with the notion of exact C ∗ -algebra. A C ∗ -algebra A is called exact if (2) holds for any surjective q : B1 → B2 . This “exact” terminology is motivated by the fact that (2) holds if and only if the sequence 0 → A ⊗min I → A ⊗min B1 → A ⊗min B2 → 0 is exact. But actually, for C ∗ -algebras, the exactness of that sequence boils down to the fact that the natural ∗-homomorphism A ⊗min B1 → A ⊗min B2 A ⊗min I is isometric (=injective). Although our main interest is in C ∗ -algebras, it turns out that many results have better formulations (and sometimes better proofs) when phrased using linear subspaces of C ∗ -algebras (the so-called operator spaces) or unital selfadjoint subspaces (the so-called operator systems). It is thus natural to try to describe as best as we can the class of linear transformations that preserve the C ∗ -tensor products. For the minimal norm, it is well known that the associated class is that of completely bounded (c.b.) maps. More precisely, given a linear map u : A → B between C ∗ -algebras we have for any C ∗ -algebra C IdC ⊗ u : C ⊗min A → C ⊗min B ≤ ucb

(3)


where ucb is the c.b. norm of u. Moreover, the sup over all C of the lefthand side of (3) is equal to ucb , and it remains unchanged when restricted to C ∈ {Mn | n ≥ 1}. The space of such maps is denoted by CB(A,B). The mapping u is called completely positive (in short c.p.) if IdC ⊗ u : C ⊗min A → C ⊗min B is positive (=positivity preserving) for any C, and to verify this we may restrict to C = Mn for any n ≥ 1. The cone formed of all such maps is denoted by CP(A,B). For the max tensor product, there is an analogue of (3) but the corresponding class of mappings is smaller than CB(A,B). These are the decomposable maps denoted by D(A,B), defined as linear combinations of maps in CP(A,B). More precisely, for any u as previously we have IdC ⊗ u : C ⊗max A → C ⊗max B ≤ udec,

(4)

where udec is the norm in D(A,B). Moreover, the supremum over all C of the left-hand side of (4) is equal to the dec-norm of u composed with the inclusion B ⊂ B ∗∗ . The dec-norm was introduced by Haagerup in [104]. We make crucial use of several of the properties established by him in the latter paper. See Chapter 6. The third class of maps that we analyze are the maps u : A → B such that for any C IdC ⊗ u : C ⊗min A → C ⊗max B ≤ 1. This holds if and only if u is the pointwise limit of a net of finite rank maps with udec ≤ 1 (see Proposition 6.13). When u is the identity on A this means that A has the c.p. approximation property (CPAP) which, as is by now well known, characterizes nuclear C ∗ -algebras (see Corollary 7.12). More generally, suppose given two C ∗ -norms α and β, defined on A⊗B for any pair (A,B). We denote by A ⊗α B (resp. A ⊗β B) the C ∗ -algebra obtained after completion of A ⊗ B equipped with α (resp. β). Then we say that a linear map u : A → B between C ∗ -algebras is (α → β)tensorizing if for any C ∗ -algebra C IdC ⊗ u : C ⊗α A → C ⊗β B ≤ 1. In §7.1 we describe the factorizations characterizing such maps in all the cases when α and β are either the minimal or the maximal C ∗ -norm. We also include the case when u is only defined on a subspace E ⊂ A using the norm induced on C ⊗ E by C ⊗α A. The main cases of interest are min → max (nuclearity) and max → max (decomposability). For the former, we refer to Chapter 10, where we characterize nuclear C ∗ -algebras in parallel with exactness. The bidual A∗∗ of a C ∗ -algebra A is isomorphic to a von Neumann algebra. In Chapter 8 we study the relations between C ∗ -norms on A and on A∗∗


and we describe the biduals of certain C ∗ -tensor products. The notion of local reflexivity plays an important role in that respect. We prove in §8.3 the equivalence of the injectivity of A∗∗ and the nuclearity of A. In Corollary 7.12 (proved in §10.2) we show that for C ∗ -algebras nuclearity is equivalent to the completely positive approximation property (CPAP). We also show in Theorem 8.12 that injective von Neumann algebras are characterized by a weak* analogue of the CPAP, which is sometimes called “semidiscreteness.” But our main emphasis is on nuclear pairs: in §9.1 we prove the nuclearity of the fundamental pair (B,C ) and in the rest of Chapter 9 we give various equivalent characterizations of C ∗ -algebras with the properties WEP, LLP, and QWEP, that we choose to define using nuclear pairs. The main ones are formulated using the bidual A∗∗ of a C ∗ -algebra A (see §8.1). Let iA : A → A∗∗ be the natural inclusion. For instance: (i) A is nuclear if and only if for some (or any) embedding A∗∗ ⊂ B(H ) there is a projection P : B(H ) → A∗∗ with P cb = 1. (ii) A is WEP if and only if for some (or any) embedding A ⊂ B(H ) there is a projection P : B(H )∗∗ → A∗∗ with P cb = 1. (iii) A is QWEP if and only if for some embedding A∗∗ ⊂ B(H )∗∗ there is a projection P : B(H )∗∗ → A∗∗ with P cb = 1. We then come to the central part of these notes: the Connes embedding problem whether any tracial probability space embeds in an ultraproduct of matricial ones (Chapter 12) and the Kirchberg conjecture (Chapter 13) that C is WEP or that every C ∗ -algebra is QWEP. We show that they are equivalent in Chapter 14. We also show the equivalence with a well-known conjecture from Banach space theory (Chapter 15). The latter essentially asserts that every von Neumann algebra is isometric (as a Banach space) to a quotient of B(H ) for some H . In yet another direction we show in Chapter 16 that all these conjectures are equivalent to one formulated by Tsirelson in the context of quantum information theory. In one of its many equivalent forms, Kirchberg’s conjecture reduces to LLP ⇒ WEP for C ∗ -algebras. Actually, he originally conjectured also the converse implication but in Chapter 18 we show that this fails, by producing tensors t ∈ B ⊗ B for which the min and max norms are different; in other words the pair (B,B) is not nuclear. The proof combines ideas from finite-dimensional operator space theory (indeed t ∈ E ⊗ F for some finite-dimensional subspaces E,F of B) together with estimates of spectral gaps, that allow us to show that a certain constant C(n) defined next is 2 there is a map

T : ℓ∞ⁿ → B(ℓ2) with ‖T‖ ≤ 1 and ‖T‖cb ≥ √(n/2) > 1. Indeed, let (uj)1≤j≤n be a matricial spin system, i.e. a system of unitary self-adjoint N × N matrices that are anticommuting, i.e. satisfying ui uj + uj ui = 0 for all i ≠ j. Let T : ℓ∞ⁿ → MN be defined by T(ej) = uj/(2n)^{1/2}. Then T satisfies the announced bounds. The proof that ‖T‖cb ≥ √(n/2) uses the elementary identity

‖Σ_{j=1}^{n} uj ⊗ ūj‖_min = n,

valid for any n-tuple of unitary matrices (see (18.5)). As for ‖T‖ ≤ 1 we refer to [104, p. 209] for a proof. Haagerup also shows in [104] that ‖T‖ = ‖T‖cb when n = 2, which we will prove in Remark 3.13.

When ‖u‖cb ≤ 1, we say that u is "completely contractive" (or "a complete contraction"). The notion of isometry is replaced by that of "complete isometry": a linear map u : E → F is said to be a complete isometry (or completely isometric) if un : Mn(E) → Mn(F) is an isometry for all n ≥ 1. An invertible mapping u : E → F is said to be a complete isomorphism if both u and u−1 are c.b. Clearly, a completely isometric surjective map is a complete isomorphism.

Remark 1.5 For instance if S : H → Ĥ is an isometry, then the linear map uS : B(Ĥ) → B(H) defined by uS(x) = S∗xS is completely isometric. This is easily checked by observing that S induces an isometry Sn from ℓ2ⁿ(H) to ℓ2ⁿ(Ĥ), such that (uS)n(y) = Sn∗ y Sn for any y ∈ Mn(B(Ĥ)) = B(ℓ2ⁿ(Ĥ)). Thus (uS)n is of the same form as uS. In particular if S : H → Ĥ is a surjective isometry then uS is a completely isometric isomorphism.

Remark 1.6 If E, F are C∗-algebras and u : E → F is a ∗-homomorphism then un : Mn(E) → Mn(F) is also a ∗-homomorphism, and hence (see Proposition A.24) we have ‖un‖ = 1 for all n (unless u = 0), which shows that u is automatically a complete contraction. Moreover, if u is injective then un is obviously also injective. Therefore (see Proposition A.24) un is isometric and u is automatically a complete isometry.

Definition 1.7 Let E ⊂ B(H) and G ⊂ B(K) be operator spaces. We have a natural embedding G ⊗ E ⊂ B(K ⊗2 H) that allows us to define

G ⊗min E = (the closure in norm of G ⊗ E) ⊂ B(K ⊗2 H).

The space G ⊗min E is then called the minimal tensor product of G and E. In particular, in the case G = B(ℓ2ⁿ) = Mn, we have an obvious completely isometric identification

Mn(E) = Mn ⊗min E.   (1.4)

Indeed, by Remark 1.5 this simply follows from the Hilbert space identification n2 (H )  n2 ⊗2 H . Remark 1.8 (Associativity of the minimal tensor product) Let Ej ⊂ B(Hj ) be operator spaces (1 ≤ j ≤ n). We define similarly E1 ⊗min · · · ⊗min En = E1 ⊗ · · · ⊗ En ⊂ B(H1 ⊗2 · · · ⊗2 Hn ). Since we have H1 ⊗2 H2 ⊗2 H3  (H1 ⊗2 H2 ) ⊗2 H3  H1 ⊗2 (H2 ⊗2 H3 ), by Remark 1.5 we also have completely isometrically E1 ⊗min E2 ⊗min E3  (E1 ⊗min E2 ) ⊗min E3  E1 ⊗min (E2 ⊗min E3 ). (1.5) Thus we may also view E1 ⊗min · · ·⊗min En = E1 ⊗ · · · ⊗ En as obtained from successive minimal tensor products of suitable pairs, and we may suppress the parentheses since they become irrelevant. Remark 1.9 (Commutativity of the minimal tensor product) Since we have K ⊗2 H  H ⊗2 K, by Remark 1.5 we also have completely isometrically G ⊗min E  E ⊗min G,

(1.6)

via x ⊗ y → y ⊗ x. Remark 1.10 (Injectivity of the minimal tensor product) From the preceding definition the following property is obvious: Let E1 ⊂ E2 ⊂ B(H ) and G1 ⊂ G2 ⊂ B(K) be operator subspaces, so that E1 ⊗ G1 ⊂ E2 ⊗ G2 . Then for any t ∈ E1 ⊗ G1 we have tE1 ⊗min G1 = tE2 ⊗min G2 .

(1.7)
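For finite matrices the definitions above can be tested directly: the minimal norm of t = Σ ak ⊗ bk ∈ Mp ⊗ Mq is simply the operator norm of the corresponding Kronecker sum, it is unchanged under embeddings into larger matrix algebras as in (1.7), and the identity ‖Σ uj ⊗ ūj‖min = n quoted for the spin-system example can be observed on random unitaries. The following is a minimal numerical sketch (ours, not from the book); it assumes numpy, and the helper names min_norm and corner are illustrative.

```python
# Numerical sketch (ours): ||t||_min for t in M_p (x) M_q is the operator norm of
# the Kronecker-assembled matrix; it is unchanged under corner embeddings
# (Remark 1.10), and ||sum_j u_j (x) conj(u_j)||_min = n for any unitaries u_j.
import numpy as np

def min_norm(pairs):
    """Operator norm of sum_k kron(a_k, b_k)."""
    return np.linalg.norm(sum(np.kron(a, b) for a, b in pairs), 2)

def corner(a, n):
    """Embed a into the top-left corner of an n x n matrix."""
    out = np.zeros((n, n))
    out[:a.shape[0], :a.shape[1]] = a
    return out

rng = np.random.default_rng(0)
pairs = [(rng.standard_normal((2, 2)), rng.standard_normal((3, 3))) for _ in range(2)]
print(min_norm(pairs))
print(min_norm([(corner(a, 5), corner(b, 7)) for a, b in pairs]))  # same value (up to rounding)

n = 4
us = [np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))[0]
      for _ in range(n)]
print(min_norm([(u, u.conj()) for u in us]))  # approximately n = 4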

Proposition 1.11 Let u : E → F be a c.b. map between two operator spaces. Then for any other operator space G the mapping IdG ⊗ u : G ⊗ E → G ⊗ F extends to a bounded mapping uG : G ⊗min E → G ⊗min F and we have ucb = supG uG  = supG uG cb,

(1.8)

where the suprema run over all possible G’s. Proof First observe that the choice of G = Mn shows that sup{uG  : G an operator space } ≥ ucb . To prove the converse assume G ⊂ B(K). For notational simplicity, assume  K = 2 and let t = rk=1 ak ⊗ bk ∈ G ⊗ E. Consider the natural embeddings 2n  [e1, . . . ,en ] ⊂ 2 (n ≥ 1) with respect to some choice {ej : j ≥ 1}


of orthonormal basis for 2 and the corresponding orthogonal projections Pn : K ⊗2 H → n2 ⊗2 H . Then ∪n 2n ⊗ H = K ⊗2 H and hence tmin = supn Pn t|n ⊗H : n2 ⊗2 H → n2 ⊗2 H . 2

Let ak (i,j ) = ei ,ak ej . It is not hard to see that Pn t|n ⊗H can be identified 2  with the matrix tn ∈ Mn (E) given by tn (i,j ) = k ak (i,j )bk . This shows that tG⊗min E = supn tn Mn (E), (1.9)  We have un (tn ) = k ak (i,j )u(bk ) ∈ Mn (F ). Applying (1.9) to uG (t) =  ak ⊗ u(bk ) gives us uG (t)min = supn un (tn )Mn (F ) and hence uG (t)min ≤ ucb supn tn Mn (E) = ucb tmin , which shows uG  ≤ ucb . Thus ucb = supG uG . Then, substituting Mn (G) for G, we easily deduce that supG uG  = supG uG cb .



Corollary 1.12 Let E1,F1 , E2,F2 be operator spaces. Let u1 ∈ CB(E1,F1 ) and u2 ∈ CB(E2,F2 ). Then u1 ⊗ u2 continuously extends by density to a c.b. map u1 ⊗ u2 : E1 ⊗min E2 −→ F1 ⊗min F2 such that u1 ⊗ u2 cb ≤ u1 cb u2 cb .

(1.10)

Proof The argument is based on the obvious identity u1 ⊗ u2 = (u1 ⊗ IdF2)(IdE1 ⊗ u2), which gives us ‖u1 ⊗ u2‖ ≤ ‖u1 ⊗ IdF2‖ ‖IdE1 ⊗ u2‖. By (1.8) we have ‖IdE1 ⊗ u2‖ ≤ ‖u2‖cb and using (1.6) we also find ‖u1 ⊗ IdF2‖ ≤ ‖u1‖cb. This gives us ‖u1 ⊗ u2‖ ≤ ‖u1‖cb ‖u2‖cb. Now replacing u1 by IdMn ⊗ u1, by (1.5) and (1.4) we obtain the announced (1.10) after taking the sup over n. It is an easy exercise to show that (1.10) is actually an equality but we do not use this in the sequel.

We will now generalize (1.9).

Proposition 1.13 For any t = Σ ak ⊗ bk ∈ G ⊗ E we have

‖t‖min = sup{ ‖Σ v(ak) ⊗ bk‖_{Mn(E)} : n ≥ 1, v ∈ CB(G, Mn), ‖v‖cb ≤ 1 }.   (1.11)

Furthermore

‖t‖min = sup ‖Σ v(ai) ⊗ w(bi)‖_{Mnm}   (1.12)

where the supremum runs over n, m ≥ 1 and all pairs of maps v : G → Mn, w : E → Mm with ‖v‖cb ≤ 1 and ‖w‖cb ≤ 1. (We can of course restrict to n = m if we wish.)


Proof By (1.8) and (1.6) we have v ⊗ IdE  ≤ vcb , so the left-hand side of (1.11) is ≥ the right-hand side. But by (1.9) we see that equality holds: indeed just observe that tn = (vn ⊗ IdE )(t) with vn (·) = an∗ · an where an : n2 → H denote the inclusion. To check (1.12) we again invoke (1.6), which allows us to apply (1.11) one more time on the second factor. Remark 1.14 The preceding proposition shows that the min-norm on G ⊗ E depends only on the sequences of norms on Mn (G) and Mn (E) and not on the particular embeddings G ⊂ B(K) and E ⊂ B(H ). Indeed, the latter sequences suffice to determine the norms of the spaces CB(G,Mn ) and CB(E,Mn ) (see Proposition 1.19 for more precision). More generally, the same remark holds for the norm in Mn (G ⊗min E) and hence the whole sequence of the norms  · Mn (G⊗min E) depends only on the sequences of norms on Mn (G) and Mn (E). Corollary 1.15 If an element t ∈ G ⊗min E is such that (v ⊗ w)(t) = 0 for any v ∈ G∗ and w ∈ E ∗ , then t = 0. Proof This is immediate from (1.12). Indeed, the assumption remains obviously true for any v : G → Mn and w : E → Mm . Warning: The reason we emphasize the rather simple fact in Corollary 1.15 is that the analogous fact for the maximal tensor product of two C ∗ -algebras fails in general. Remark 1.16 (Direct sum of operator spaces) Let Ei ⊂ B(Hi ) (i ∈ I ) be  a family of operator spaces. Let E = ⊕



i∈I Ei, i.e. the ℓ∞-direct sum (⊕i∈I Ei)∞. Note that E ⊂ (⊕i∈I B(Hi))∞, and that (⊕i∈I B(Hi))∞ is a C∗-algebra naturally embedded in B(H) with H = (⊕i∈I Hi)2. This allows us to equip E with an operator space structure as a subspace of B(H). Let n ≥ 1. Any matrix a ∈ Mn(E) is determined by a family (ai)i∈I with ai ∈ Mn(Ei) for all i ∈ I. It is then easy to check that for any n ≥ 1 and any a ∈ Mn(E) we have

‖a‖_{Mn(E)} = sup_{i∈I} ‖ai‖_{Mn(Ei)}.   (1.13)

More generally, for any operator space F ⊂ B(K), we have a natural isometric embedding

F ⊗min (⊕i∈I Ei)∞ ⊂ (⊕i∈I F ⊗min Ei)∞,   (1.14)

which is an isomorphism if dim(F ) < ∞ since both sides are then setwise identical.


The equality (1.13) shows furthermore that for any operator space D, a linear map u : D → E is c.b. if and only if the coordinates ui : D → Ei are c.b. with supi∈I ui cb < ∞ and we have ucb = supi∈I ui cb .

(1.15)
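In the simplest case n = 1, (1.13) just says that an element (x1, x2) of an ℓ∞-direct sum acts block-diagonally on the ℓ2-direct sum of the underlying Hilbert spaces, with norm equal to the maximum of the blockwise norms. Here is a tiny numerical illustration (ours, not from the book), assuming numpy:

```python
# Tiny check (ours) of (1.13) for n = 1: the norm of a block-diagonal operator on
# H_1 (+)_2 H_2 equals max(||x1||, ||x2||).
import numpy as np

rng = np.random.default_rng(6)
x1 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
x2 = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

blockdiag = np.block([[x1, np.zeros((3, 5))], [np.zeros((5, 3)), x2]])
print(np.linalg.norm(blockdiag, 2))
print(max(np.linalg.norm(x1, 2), np.linalg.norm(x2, 2)))   # same value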

1.2 Extension property of B(H)

We first recall that the spaces L∞ are the injective objects in the category of Banach spaces.

Theorem 1.17 (Nachbin's Hahn–Banach theorem) Let (Ω, μ) be any measure space, and let E ⊂ X be any subspace of a Banach space X. Then any u ∈ B(E, L∞(μ)) admits an extension ũ ∈ B(X, L∞(μ)) such that ‖ũ‖ = ‖u‖. (In the extension diagram, E ⊂ X, u : E → L∞(μ) and ũ : X → L∞(μ).)

The proof of Nachbin's theorem relies on several identifications. First we note the elementary isometric isomorphisms B(E, F∗) ≅ B(F, E∗) ≅ Bil(E × F) where Bil(E × F) is the Banach space of all bounded bilinear forms on E × F. Then we have an isometric identification

B(E, F∗) ≅ (E ⊗̂ F)∗   (1.16)

where ⊗̂ is the projective tensor product, i.e. the completion of the algebraic tensor product E ⊗ F with respect to the so-called projective norm (see §A.1)

‖t‖∧ = inf{ Σ_{j=1}^{n} ‖aj‖ ‖bj‖ : t = Σ_{j=1}^{n} aj ⊗ bj }.

Note that E ⊗̂ F and F ⊗̂ E can obviously be (isometrically) identified. The duality between tensors t ∈ E ⊗ F and operators u ∈ B(E, F∗) is defined first on rank one tensors by setting ⟨u, a ⊗ b⟩ = ⟨u(a), b⟩; then this can be extended to unambiguously define ⟨u, t⟩ for t ∈ E ⊗ F by linearity. Then by density we define ⟨u, t⟩ for t ∈ E ⊗̂ F, and (1.16) holds for this duality.


By a classical result (due to Grothendieck), when F = L1(Ω, μ) (on some measure space (Ω, μ)), the space E ⊗̂ F (or equivalently F ⊗̂ E) can be identified isometrically with the (Bochner sense) vector-valued L1-space L1(μ; E).

Sketch of Proof of Nachbin's Theorem. Taking F = L1(μ) in the preceding, we find

B(E, L∞(μ)) = L1(μ; E)∗  and  B(X, L∞(μ)) = L1(μ; X)∗.

Then, since we have an isometric inclusion L1(μ; E) ⊂ L1(μ; X), Nachbin's theorem can be deduced from the classical Hahn–Banach theorem.

We will follow the same approach to prove the noncommutative version of Nachbin's theorem, due to Arveson.

Theorem 1.18 (Arveson's Hahn–Banach theorem) Let H be any Hilbert space, let X ⊂ B(𝓗) be any operator space (𝓗 a Hilbert space) and let E ⊂ X be any subspace. Then any u ∈ CB(E, B(H)) admits an extension ũ ∈ CB(X, B(H)) such that ‖ũ‖cb = ‖u‖cb. (In the extension diagram, E ⊂ X, u : E → B(H) and ũ : X → B(H).)

The projective tensor norm ‖·‖∧ on L1(μ) ⊗ E will be replaced by the following one on K ⊗ E ⊗ H, where H, K are Hilbert spaces. For any t ∈ K ⊗ E ⊗ H we define (recall ‖k̄‖ = ‖k‖ for all k ∈ K)

γE(t) = inf{ (Σ_{i=1}^{m} ‖ki‖²)^{1/2} ‖[aij]‖_{Mm×n(E)} (Σ_{j=1}^{n} ‖hj‖²)^{1/2} }

where the infimum runs over all representations of t of the form

t = Σ_{i=1}^{m} Σ_{j=1}^{n} ki ⊗ aij ⊗ hj.

In analogy with Nachbin’s Theorem, we will show that this norm satisfies: (i) The dual space (K ⊗ E ⊗ H,γE )∗ can be identified with CB(E,B(H,K)). (ii) The natural inclusion (K ⊗ E ⊗ H,γE ) ⊂ (K ⊗ X ⊗ H,γX ) is isometric.


Proof of Arveson's Hahn–Banach theorem Using (i) and (ii), the proof of Theorem 1.18 can be completed exactly as in the case of Banach spaces: we simply take K = H and apply the Hahn–Banach theorem to the subspace (K ⊗ E ⊗ H, γE) ⊂ (K ⊗ X ⊗ H, γX). Thus the proof now reduces to the verification of (i) and (ii).

It is easy to check that γE is a norm by arguing as follows. Let t = Σ_{i=1}^{m} Σ_{j=1}^{n} ki ⊗ aij ⊗ hj and t′ = Σ_{p=1}^{m′} Σ_{q=1}^{n′} k′p ⊗ a′pq ⊗ h′q be elements of K ⊗ E ⊗ H. We have obviously (consider the block diagonal matrix with blocks a and a′)

γE(t + t′) ≤ (Σ ‖ki‖² + Σ ‖k′p‖²)^{1/2} max{ ‖[aij]‖_{Mm×n(E)}, ‖[a′pq]‖_{Mm′×n′(E)} } (Σ ‖hj‖² + Σ ‖h′q‖²)^{1/2}.

But by homogeneity, for any ε > 0 there are suitable representations of t, t′ such that

‖[aij]‖_{Mm×n(E)} = 1 and Σ_{i=1}^{m} ‖ki‖² = Σ_{j=1}^{n} ‖hj‖² < γE(t) + ε,

as well as

‖[a′pq]‖_{Mm′×n′(E)} = 1 and Σ_{p=1}^{m′} ‖k′p‖² = Σ_{q=1}^{n′} ‖h′q‖² < γE(t′) + ε.

Then we find for any ε > 0

γE(t + t′) ≤ γE(t) + γE(t′) + 2ε,   (1.17)

which shows that γE is subadditive and hence (since γE (t) dominates the norm of t as a bounded trilinear form on K ∗ × E ∗ × H ∗ ) it is a norm. Consider the C-linear correspondence CB(E,B(H,K))  u → ϕu ∈ (K ⊗ E ⊗ H )∗   defined by ϕu (t) = i,j ki ,u(aij )hj , if t = i,j ki ⊗ aij ⊗ hj . Note that    hj 2 ki ,u(aij )hj  ϕu γE∗ = sup |ϕu (t)| = sup i,j

γE (t) 1 or on B(2 ) or K(2 ) (see also Remark 2.2). Remark 1.30 Consider a linear mapping u : Mn → A into a C ∗ -algebra A. Let a ∈ Mn (A) be the matrix defined by aij = u(eij ). Then u ∈ CP(Mn,A) if and only if a ∈ Mn (A)+ . Indeed, note that a = un (ξ ) with ξ ∈ Mn (Mn )+ defined by ξij = eij . From this the “only if” part follows. Conversely, if a ∈ Mn (A)+ then we have  ∗ xij bkj (x ∈ Mn ) from a = b∗ b for some b ∈ Mn (A), and hence u(x) = k bki which it is easy to deduce that u is of the form (1.19) and hence is c.p. We leave the details as anexercise. Note that if u ∈ CP(Mn,A) we have u =    ∗  bik . u(1) =  aii  =  ik bik Remark 1.31 In the case when u(eij ) = 0 for any i = j , u can be identified with a mapping from n∞ to A, and the associated matrix a is diagonal. Then u : n∞ → A is c.p. if and only if u (or equivalently a) is positive. Remark 1.32 (Positivity with commutative domain) Consider C ∗ -algebras A,B and assume now that A is commutative. Then any positive linear map u : A → B is c.p. A simple way to see this is to observe that the identity of A is the pointwise limit of a net of maps ui : A → A of the form vi

wi

ui : A−→n(i) ∞ −→A where n(i) are integers and vi ,wi are positive contractions (see Remark A.20). By Remark 1.28 vi is c.p. Using this we are reduced to show that uwi : n(i) ∞ → B is c.p., and this case is covered by the end of the preceding Remark 1.31. Remark 1.33 (Positivity for unital forms) Let A a unital C ∗ -algebra and f ∈ A∗ . Then f ≥ 0 ⇔ f (1) = f . Indeed, if A is commutative, say A = C(T ) with T compact, this is a wellknown characterization of positive measures on T . But if we fix x ∈ A+ , and let Ax denote the (commutative) unital C ∗ -algebra generated by x, f (1) = f  ⇒ f (1) = f|Ax  ⇒ f|Ax ≥ 0 ⇒ f (x) ≥ 0, which proves the implication from right to left. The converse direction can be proved easily using Cauchy–Schwarz for the inner product y,x = f (y ∗ x),


applied with y = 1 and x ∈ BA . We actually already proved a more general fact when we proved (1.20). We note in passing that for any a ∈ A a ≥ 0 ⇔ ϕ(a) ≥ 0 ∀ϕ ∈ A∗+ .

(1.23)
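The matrix positivity criteria appearing in this section lend themselves to quick numerical experiments. The sketch below (ours, not from the book; it assumes numpy) checks two of them: the criterion of Remark 1.30, namely that a linear map u : Mn → Mm is c.p. exactly when the block matrix [u(eij)] is positive semidefinite, and the 2×2 criterion of Lemma 1.37(ii) below, namely that the matrix with rows (1, a) and (a∗, 1) is positive exactly when ‖a‖ ≤ 1.

```python
# Hedged numerical sketch (ours): (a) Remark 1.30 for A = M_m -- u : M_n -> M_m is
# c.p. iff the block matrix [u(e_ij)] is positive semidefinite -- and (b) the 2x2
# criterion of Lemma 1.37(ii): [[1, a], [a*, 1]] >= 0 iff ||a|| <= 1.
import numpy as np

def psd(m, tol=1e-10):
    return bool(np.all(np.linalg.eigvalsh((m + m.conj().T) / 2) > -tol))

# (a) x -> v* x v is c.p. (cf. the form (1.19)); the transpose is positive but not c.p.
n = 3
rng = np.random.default_rng(7)
v = rng.standard_normal((n, n))
def choi(u):
    e = np.eye(n)
    return np.block([[u(np.outer(e[i], e[j])) for j in range(n)] for i in range(n)])
print(psd(choi(lambda x: v.T @ x @ v)))   # True
print(psd(choi(lambda x: x.T)))           # False

# (b) the 2x2 criterion
a = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
for s in (0.5, 1.0, 2.0):
    b = s * a / np.linalg.norm(a, 2)                       # ||b|| = s
    m = np.block([[np.eye(4), b], [b.conj().T, np.eye(4)]])
    print(s, psd(m), np.linalg.norm(b, 2) <= 1 + 1e-10)    # the two answers agree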

Remark 1.34 (Positivity for unital maps) More generally let u : A → B be a linear map with values in a C ∗ -algebra B. If u(1) = 1 and u = 1 then u ∗ , the linear form is positive. Indeed, by the preceding remark, for any F ∈ B+ f : x → F (u(x)) satisfies f (1) = f  and hence is positive. Conversely, if u is positive and unital then u = 1 by Corollary A.19. The next statement, which is a sort of recapitulation, introduces an important “bridge” between c.p. and c.b. maps. Theorem 1.35 Let E ⊂ A be a subspace of a unital C ∗ -algebra such that 1 ∈ E. Let u : E → B(H ) be a unital linear map (i.e. such that u(1) = 1). u : A → B(H ). Then ucb = 1 if and only if u extends to a c.p. map  Proof By the injectivity of B(H ) (see Theorem 1.18), if ucb = 1 there is u(1) = 1, the an extension  u : A → B(H ) with  ucb = ucb = 1. Since  preceding remark shows that  u is positive, but since the same can be applied u, we conclude that  u is actually c.p. Conversely, if u admits to  un = IdMn ⊗  a c.p. extension  u then we have  u =  u(1) = 1 and hence u = 1. Again applying this to un : Mn (E) → Mn (B(H )) we obtain un  = 1 ∀n and hence ucb = 1. Definition 1.36 A (not necessarily closed) subspace S ⊂ B(H ) for some Hilbert space H is called an operator system, if it is self-adjoint and unital. Clearly, if S is an operator system, Mn (S) ⊂ Mn (B(H )) also is one. Let S+ = S ∩ B(H )+ . If S ⊂ B(H ) is an operator system, S coincides with the linear span of S+ . Indeed, if a ∈ S and a = a ∗ , then a = a1 − (a1 − a) ∈ S+ − S+ . This explains why c.p. maps can be used efficiently on operator systems. The elementary proof of the next lemma is left to the reader, to whom we √ recall the arithmetic/geometric mean inequality st ≤ (s + t)/2,∀s,t ≥ 0. Lemma 1.37 (i) Let s1,s2,a ∈ B(H ). Assume s1,s2 ≥ 0. Then   s1 a ≥ 0 ⇔ |y,ax| ≤ (y,s1 yx,s2 x)1/2 a ∗ s2 ≤ (y,s1 y + x,s2 x)/2 ∀x,y ∈ H .

(1.24)

When this holds, we have ‖a‖ ≤ (‖s1‖ ‖s2‖)^{1/2} ≤ (‖s1‖ + ‖s2‖)/2.

(ii) In particular,

\begin{pmatrix} 1 & a \\ a^* & 1 \end{pmatrix} ≥ 0 ⟺ ‖a‖ ≤ 1.   (1.25)

Lemma 1.38 Let w : E → B(H ) be a map defined on an operator space E ⊂ B(K).  Let S ⊂ M2 (B(K)) be the operator system consisting of all λ1 a matrices with λ,μ ∈ C, a,b ∈ E, and let W : S → M2 (B(H )) be b∗ μ1 the (unital) mapping defined by     λ1 a λ1 w(a) W = . b∗ μ1 w(b)∗ μ1 Then wcb ≤ 1 if and only if W is c.p. Proof The easy direction is W c.p. ⇒ wcb ≤ 1. Indeed, if W is c.p. we have W cb = W (1) = 1, and a fortiori wcb ≤ 1. We now turn to the Assume wcb ≤ 1. Consider an element   converse. λ a , λ,μ ∈ Mn (C1), a,b ∈ Mn (E). Assume s ≥ 0, s ∈ Mn (S), say s = ∗ b μ then necessarily λ,μ ∈ Mn (C1)+ and a = b. Fix ε > 0 and let λε = λ + ε1 and με = μ + ε1 (invertible perturbations of λ and μ). Let sε = s + ε1. Let us −1/2 −1/2 and let Wn = IdMn ⊗ W , wn = IdMn ⊗ w. We have denote xε = λε aμε       −1/2 −1/2 λε 0 0 λε 1 xε = (1.26) −1/2 sε −1/2 , xε∗ 1 0 με 0 με and hence the left-hand side of the preceding equation is ≥ 0, which implies by part (ii) in Lemma 1.37 that xε  ≤ 1. Therefore if wcb ≤ 1, we have wn (xε ) ≤ 1, which implies that   1 wn (xε ) ≥ 0. 1 wn (xε )∗ 

 1 xε . Now, applying Wn to both But this last matrix is the same as Wn ∗ xε 1 sides of (1.26) and using the linearity of w, we find       −1/2 −1/2 λε λε 0 0 1 xε Wn ∗ = −1/2 Wn (sε ) −1/2 . xε 1 0 με 0 με

Thus

W_n(s_ε) = \begin{pmatrix} λ_ε^{1/2} & 0 \\ 0 & μ_ε^{1/2} \end{pmatrix} \begin{pmatrix} 1 & w_n(x_ε) \\ w_n(x_ε)^* & 1 \end{pmatrix} \begin{pmatrix} λ_ε^{1/2} & 0 \\ 0 & μ_ε^{1/2} \end{pmatrix}.

Since the right-hand side is ≥ 0, we have Wn (sε ) ≥ 0 and letting ε → 0 we conclude that Wn (s) ≥ 0, whence that W is c.p. We now give the c.p. version of Arveson’s extension theorem: Theorem 1.39 (Arveson’s extension theorem/C.P. version) Let E ⊂ A ⊂ B(K) be an operator system included in a unital C ∗ -subalgebra of B(K). Any c.p. map u : E → B(H ) satisfies ucb = u = u(1).

(1.27)

Moreover, u : E → B(H ) extends to a c.p. map  u : A → B(H ) such that  ucb = u(1). Proof We firstestablish (1.27). By part (ii)   in Lemma 1.37, we see that x ≤ u(1) u(x) 1 x ≥ 0 ⇒ ≥ 0. Hence by part (i) in Lemma 1 ⇒ u(x ∗ ) u(1) x∗ 1 1.37, we find that u(x) ≤ u(1) ⇒ u =   u(1). 1 x ≥ 0 ⇒ un (x) ≤ Similarly, x ∈ Mn (S),x ≤ 1 ⇒ x∗ 1 un (1) = u(1). Hence ucb = supn un  ≤ u(1), i.e. ucb = u(1), proving (1.27). Let ϕ ∈ A∗ be any state on A. Since u is positive, we know that u(1) ≥ 0. We may assume that u(1) = 1, so that 0 ≤ u(1) ≤ 1. Consider E ⊕ E ⊂ A ⊕ A ⊂ B(K) ⊕ B(K) ⊂ B(K ⊕ K). Let v : E ⊕ E → B(H ) be the mapping defined by v(x ⊕ y) = u(x) + (1 − u(1))ϕ(y). Then v is clearly c.p. and v(1 ⊕ 1) = 1. By (1.27) vcb = 1. Therefore, by Theorem 1.35 v admits a c.p. extension  v : A ⊕ A → B(H ) with  v cb = v(1⊕1) = 1. Then the map  u defined by  u(x) =  v (x ⊕ 0), is a c.p. extension v cb = 1 = u(1). Thus (since 1 = u(1) =  u(1) ≤ of u and  ucb ≤  ucb = u(1).  ucb ) we obtain  Lemma 1.40 (Schur product of c.p. mappings) Let B,C be C ∗ -algebras (or merely linear subspaces of C ∗ -algebras). Fix a number n ≥ 1. Consider u ∈ CP(A,Mn (B)) and v ∈ CP(B,Mn (C)). Then the mapping x → [vij (uij (x))] is in CP(A,Mn (C)).


Proof We identify Mn(B) with Mn ⊗min B, so that u(x) = Σij eij ⊗ uij(x). Consider the composition

w = (IdMn ⊗ v) ◦ u : A → Mn ⊗min Mn ⊗min C.

Since complete positivity is obviously preserved under composition, w ∈ CP(A, Mn ⊗min Mn ⊗min C). We have

w(x) = Σij eij ⊗ v(uij(x)) = Σij Σkℓ eij ⊗ ekℓ ⊗ vkℓ(uij(x)).

Let S : ℓ2ⁿ → ℓ2ⁿ ⊗2 ℓ2ⁿ be the isometry defined by Sej = ej ⊗ ej. We may assume C ⊂ B(H). Then we have

(S ⊗ IdH)∗ w(x) (S ⊗ IdH) = Σij eij ⊗ vij(uij(x)),

and hence the latter mapping is c.p.
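A familiar scalar instance of Lemma 1.40 is the classical Schur product theorem: taking A = Mn, B = C = ℂ, u = IdMn and v(λ) = λQ with Q ≥ 0 (so that vij(λ) = λ qij), the lemma says that x ↦ [qij xij] is c.p., and in particular the entrywise product of two positive semidefinite matrices is positive semidefinite. The following hedged numerical check (ours, assuming numpy) illustrates this special case.

```python
# Numerical check (ours) of the scalar case of Lemma 1.40 (Schur product theorem):
# the entrywise product of two positive semidefinite matrices is positive semidefinite.
import numpy as np

rng = np.random.default_rng(4)

def random_psd(n):
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return g @ g.conj().T

n = 5
for _ in range(100):
    p, q = random_psd(n), random_psd(n)
    assert np.linalg.eigvalsh(p * q).min() > -1e-9   # p * q is the entrywise product
print("entrywise products of PSD matrices stayed PSD in all trials")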

1.4 Normal c.p. maps on von Neumann algebras Recall that a bounded linear map u : M → N between von Neumann algebras is called “normal” if it is continuous when M and N are both equipped with the weak* topology, or equivalently if u∗ (N∗ ) ⊂ M∗ . We wish to record here the following variant of the extension theorem. Theorem 1.41 Let M ⊂ B(K) be a von Neumann algebra. Let u : M → B(H ) be a normal c.p. map with u = 1. Then there is a normal c.p. map , a  u : B(K) → B(H ) extending u with  u = 1. More precisely, there are H ) and a contraction W : H → H  normal ∗-homomorphism  π : B(K) → B(H ∗ π (x)W for all x ∈ M. Moreover, we can obtain the latter such that u(x) = W   of the form H  = L ⊗2 K and with  π (b) = IdL ⊗ b (b ∈ B(K)) for with H some Hilbert space L. Proof Consider a minimal dilation of u of the form u(x) = V ∗ π(x)V for all x ∈ M as in Remark 1.23. Let ξ,ξ  ∈ π(M)(VH), say ξ = π(m)(Vh), ξ  = π(m )(Vh ) (m,m ∈ M,h,h ∈ H ). We have ξ ,π(x)ξ  = h,u(m ∗ xm)h, which shows that x → ξ ,π(x)ξ  is normal on M. By the density of the  this implies that π : M → B(H ) is normal (see linear span of π(M)(VH) in H Remark A.42). Then the result follows immediately by the special form of the normal ∗-homomorphisms described in Theorem A.61. Indeed, the latter says that we can find a Hilbert space L, a subspace E ⊂ L ⊗2 K (invariant under  → E such that π(x) = U ∗ PE (IdL ⊗ x)|E U for IdL ⊗ M) and a unitary U : H π (b) = IdL ⊗ b and any x ∈ M. Let jE : E → L ⊗2 H be the inclusion, let  u defined by  u(b) = W ∗ π (b)W W = jE UV : H → L⊗2 K. Then the mapping  for any b ∈ B(H ) is a c.p. extension of u with  u = 1.


Remark 1.42 In the preceding situation if u is unital the extension  u is unital and W is an isometry. In the case of the inclusion A ⊂ A∗∗ we also have a simple extension property, as follows. Lemma 1.43 Let A be a C ∗ -algebra, M a von Neumann algebra. Then for any u ∈ CP(A,M) the mapping u¨ : A∗∗ → M is a normal c.p. map extending u with u ¨ = u. Proof We recall that, by density, u¨ can be viewed as the unique (σ (A∗∗,A∗ ), σ (M,M∗ ))-continuous extension of u (see (A.32)). By the weak*-density of BA ∩ A+ in BA∗∗ ∩ (A∗∗ )+ (see (A.36)), it follows that u¨ is positive. Applying that to un and observing (see Proposition A.58) that Mn (A)∗∗ = Mn (A∗∗ ) shows that u¨ n is positive, and hence u¨ is completely positive.

1.5 Injective operator algebras Definition 1.44 A C ∗ -algebra (or an operator space) A is called injective if there exists a completely isometric embedding A ⊂ B(H ) and a projection P : B(H ) → A with P cb = 1. Of course, B(H ) is the fundamental example of an injective C ∗ -algebra. We will focus on C ∗ -algebras (see Remark 1.49 for the case of operator spaces). The next result is classical. Theorem 1.45 (Tomiyama’s theorem [245]) If A ⊂ B is a C ∗ -subalgebra of a C ∗ -algebra B, any linear projection P : B → A with P  = 1 is automatically completely positive and completely bounded with P cb = 1. Moreover, P is a conditional expectation in the sense that ∀a1,a2 ∈ A, ∀b ∈ B

P (a1 ba2 ) = a1 P (b)a2 .

(1.28)

Proof The (nontrivial) proof can be found in Takesaki’s book (see [240, p. 131]) or in [39, p. 13]. We skip it here. Proposition 1.46 (Extension property) Consider a C ∗ -subalgebra A ⊂ B(H ). The following are equivalent. (i) A is injective. (ii) For any completely isometric embedding A ⊂ B into a C ∗ -algebra B there is a projection Q : B → A with Qcb = 1. (iii) For any pair of operator spaces with X1 ⊂ X2 (completely u ∈ CB(X2,A) isometrically) any u ∈ CB(X1,A) admits an extension  with  ucb = ucb .


(iv) For any pair B1,B2 of C ∗ -algebras with B1 ⊂ B2 (C ∗ -subalgebra), any u ∈ CB(B2,A) with  ucb = ucb . u in CB(B1,A) admits an extension  Proof Let j1 : A → B(H ) and j2 : A → B denote completely isometric embeddings. By the extension Theorem 1.18, ∃ j1 ∈ CB(B,B(H )) such that   j1 cb = j1 cb and j1 j2 = j1 . Assume (i). Let P : B(H ) → j1 (A) be a j1 is a projection projection such that P cb = 1. Then Q = j2 j1−1 |j1 (A) P  onto j2 (A) with Qcb = 1. This shows (i) ⇒ (ii). Assume (ii). Consider A ⊂ B(H ) and a projection P : B(H ) → A with P cb = 1. By the extension Theorem 1.18, any u ∈ CB(X1,A) admits an v cb = ucb . Then  u = P v ∈ CB(X2,A) extension v ∈ CB(X2,B(H )) with  satisfies (iii). This shows (ii) ⇒ (iii). (iii) ⇒ (iv) is trivial. Assume (iv). Consider A ⊂ B(H ). Let B1 = A, B2 = B(H ) and u = IdA . Then  u : B(H ) → A is a projection with cb norm 1, and hence (i) holds. Corollary 1.47 Let A be a C ∗ -algebra and A1 ⊂ A a C ∗ -subalgebra. Assume that there is a projection P : A → A1 with P  = 1. If A is injective then A1 is also injective. We end this section with a simple stability property of injective C ∗ -algebras.

Proposition 1.48 Let (Ai)i∈I be a family of C∗-algebras. Then (⊕i∈I Ai)∞ is injective if and only if Ai is injective for all i ∈ I.

Proof If Ai ⊂ B(Hi) and Pi : B(Hi) → Ai are projections with ‖Pi‖cb = 1, then the mapping P : (⊕i∈I B(Hi))∞ → (⊕i∈I Ai)∞ taking (xi) to (Pi xi) is clearly c.b. with ‖P‖cb ≤ 1. Let H = (⊕i∈I Hi)2. If we denote by Vi : Hi → H the natural (isometric) inclusion, and if we define Qi : B(H) → B(Hi) by Qi(T) = Vi∗ T Vi and Q : B(H) → (⊕i∈I B(Hi))∞ by Q(T) = (Qi(T))i∈I, then Q is clearly c.b. with ‖Q‖cb = supi ‖Qi‖cb = 1. We may identify Hi with a subspace of H so that Vi becomes the inclusion. Then (⊕i∈I Ai)∞ can be naturally identified with the C∗-subalgebra of B(H) formed of all operators T ∈ B(H) such that THi ⊂ Hi and T|Hi ∈ Ai for all i ∈ I. With this convention, PQ is a projection from B(H) onto (⊕i∈I Ai)∞ with ‖PQ‖cb = 1. This proves the "if" part. The converse follows easily from the fact that the canonical projection from (⊕i∈I Ai)∞ to Ai (which is a ∗-homomorphism) has cb-norm = 1 for all i ∈ I.

Remark 1.49 For operator spaces, Tomiyama's theorem does not hold. Therefore, it is more natural to say that an operator space E ⊂ B(H) is c-injective if there is a projection P : B(H) → E with ‖P‖cb ≤ c. With this terminology, we call E injective if it is 1-injective. The reader will easily


check that Propositions 1.46 and 1.48 remain valid for injective (i.e. 1-injective) operator spaces. See [185] for an example of a bounded linear map u : A → B(H) defined on a C∗-subalgebra A ⊂ B(H) that does not extend to a bounded map on B(H). We return to injectivity for von Neumann algebras in §8.3. See [80, §6] for more information on injective operator spaces.
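To see the conditional expectation property (1.28) at work in the simplest situation, here is a small numerical sketch (not from the book): the projection of Mn onto its diagonal subalgebra is a norm-one projection and satisfies P(a1 b a2) = a1 P(b) a2 for diagonal a1, a2. The finite-dimensional setting and all names are purely illustrative.

```python
import numpy as np

# Illustration (not from the book): the projection of Mn onto its diagonal
# subalgebra is contractive and satisfies the conditional expectation
# identity (1.28): P(a1 b a2) = a1 P(b) a2 for diagonal a1, a2.
rng = np.random.default_rng(3)
n = 4

def P(b):                      # projection onto the diagonal subalgebra
    return np.diag(np.diag(b))

a1 = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))
a2 = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

assert np.allclose(P(a1 @ b @ a2), a1 @ P(b) @ a2)              # (1.28)
assert np.linalg.norm(P(b), 2) <= np.linalg.norm(b, 2) + 1e-12  # P is contractive
print("conditional expectation identity verified")
```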

1.6 Factorization of completely bounded (c.b.) maps

We now turn to the factorization of c.b. maps. This important result, proved independently by Wittstock, Haagerup, and Paulsen in the early 1980s, can be viewed as the "linearization" of the Stinespring factorization Theorem 1.22.

Theorem 1.50 (Factorization of c.b. maps) Let H,K be Hilbert spaces. Consider an operator space E ⊂ B(K). Let B ⊂ B(K) be a unital C∗-algebra such that E ⊂ B ⊂ B(K). Consider a c.b. map u : E → B(H). Then there are a Hilbert space Ĥ, a unital ∗-homomorphism π : B → B(Ĥ) and operators V1,V2 ∈ B(H,Ĥ) such that ‖V1‖ ‖V2‖ = ‖u‖cb and

∀x ∈ E    u(x) = V2∗ π(x) V1.    (1.29)

Conversely, if (1.29) holds then u is c.b. and

‖u‖cb ≤ ‖V1‖ ‖V2‖.    (1.30)

In addition, if V1 = V2, then u is completely positive. The corresponding diagram is as follows:

    B  --π-->  B(Ĥ)
    ∪              |  b -> V2∗ b V1
    E  --u-->  B(H)

Proof It suffices to prove this when B = B(K). We may assume ‖u‖cb = 1. Consider the operator system S ⊂ M2(B(K)) defined in Lemma 1.38 and let W ∈ CP(S,M2(B(H))) be defined by

W( [λ1, a ; b∗, μ1] ) = [λ1, u(a) ; u(b)∗, μ1].

By (Arveson's extension) Theorem 1.39, there is Ŵ ∈ CP(M2(B(K)),M2(B(H))) extending W, to which we may apply (Stinespring's) Theorem 1.22, with A = M2(B(K)) and, setting ℋ = H ⊕ H, we have M2(B(H)) = B(ℋ). This gives us a Hilbert space Ĥ, an operator V : ℋ → Ĥ and a ∗-homomorphism σ : M2(B(K)) → B(Ĥ) such that for any x ∈ E we have

[0, u(x) ; 0, 0] = V∗ σ( [0, x ; 0, 0] ) V,

or equivalently

[0, u(x) ; 0, 0] = V∗ σ( [x, 0 ; 0, x] ) σ( [0, 1 ; 0, 0] ) V,

and hence if P1 : H ⊕ H → H (resp. P2 : H ⊕ H → H) is the first (resp. second) coordinate projection, we have

u(x) = P1 V∗ σ( [x, 0 ; 0, x] ) σ( [0, 1 ; 0, 0] ) V P2∗,

and we obtain (1.29) with π(x) = σ( [x, 0 ; 0, x] ), V1 = σ( [0, 1 ; 0, 0] ) V P2∗ and V2∗ = P1 V∗. Note that ‖V1‖ ‖V2‖ ≤ ‖V‖2 = 1. Conversely, if (1.29) holds we obviously have 1 = ‖u‖cb ≤ ‖V1‖ ‖V2‖. Moreover, if V1 = V2, u is clearly c.p.

Remark 1.51 In (1.29) we may without loss of generality replace Ĥ by

Ĥ0 = closed span{π(b)V1 h | b ∈ B, h ∈ H} ⊂ Ĥ.

Indeed, since Ĥ0 is an invariant subspace for all the π(b)'s (b ∈ B), the mapping b → PĤ0 π(b)|Ĥ0 is a representation of B on Ĥ0 and we may define W1,W2 ∈ B(H,Ĥ0) by W1 h = V1 h and W2 h = PĤ0 V2 h, so that ‖W1‖ ‖W2‖ ≤ ‖u‖cb and

∀x ∈ E    u(x) = W2∗ π(x) W1.

The point of this remark is that if H,B are both separable, we can ensure that Ĥ also is. Since it is a trivial matter to enlarge Ĥ if necessary, this shows that if H = ℓ2 and B is separable we can always take Ĥ = ℓ2. Note that if E is separable, we may replace B by the separable C∗-algebra generated by E. Thus it suffices to assume E separable.


Similarly, if dim(B) < ∞ and dim(H) = ∞, we can ensure that Ĥ and H have the same Hilbertian dimension and hence by unitary equivalence we can take Ĥ = H.

For emphasis and for later reference, we state as separate corollaries parts of Theorem 1.50 that will be used frequently in the sequel. The first one repeats part of Theorem 1.35.

Corollary 1.52 Let E ⊂ B(H) be an operator space containing I. Consider a map u : E → B(K). If u(I) = I and ‖u‖cb = 1, then there is a Hilbert space Ĥ with K ⊂ Ĥ and a unital representation π : B(H) → B(Ĥ) such that

∀x ∈ E    u(x) = PK π(x)|K.

In particular, u is completely positive.

Proof By Theorem 1.50, we have u(·) = V2∗π(·)V1. By homogeneity, we may assume ‖V1‖ = ‖V2‖ = 1. Since I = u(I) = V2∗π(I)V1 = V2∗V1, we have ⟨V2 h,V1 h⟩ = ‖h‖2 for any h ∈ K. This forces ‖V1 h‖ = ‖V2 h‖ = ‖h‖, and also V1 h = V2 h. Thus V1 is an isometric embedding of K into Ĥ. Identifying K with V1(K), u(·) = V2∗π(·)V1 becomes u(·) = PK π(·)|K.

The second corollary is the decomposability of c.b. maps into B(H) as linear combinations of c.p. maps. We should emphasize that while this holds for c.b. maps with range in B(H), it usually fails for c.b. maps with range in a C∗-subalgebra B ⊂ B(H): in general we cannot get the c.p. maps uj to be B-valued. See Chapter 6 for more on the decomposability theme.

Corollary 1.53 Any c.b. map u : E → B(K) can be decomposed as u = u1 − u2 + i(u3 − u4) where u1,u2,u3,u4 are c.p. maps with ‖uj‖cb ≤ ‖u‖cb. More precisely, we have ‖u1 + u2‖ ≤ ‖u‖cb and ‖u3 + u4‖ ≤ ‖u‖cb.

Proof By Theorem 1.50, we have u(·) = V2∗π(·)V1. By homogeneity, we may assume ‖V1‖ = ‖V2‖ = ‖u‖cb^{1/2} = 1. Let us denote V = V1 and W = V2, so that u(·) = W∗π(·)V. Then the result simply follows from the polarization formula: we define u1,u2,u3,u4 by

u1(·) = 4⁻¹ (V + W)∗ π(·)(V + W),
u2(·) = 4⁻¹ (V − W)∗ π(·)(V − W),
u3(·) = 4⁻¹ (V + iW)∗ π(·)(V + iW),
u4(·) = 4⁻¹ (V − iW)∗ π(·)(V − iW).

Then, by (1.30), ‖uj‖cb ≤ 1 for j = 1,2,3,4 and u = u1 − u2 + i(u3 − u4). Note that actually (u1 + u2)(·) = 2⁻¹(V∗π(·)V + W∗π(·)W) and hence again by (1.30) ‖u1 + u2‖cb ≤ (‖V‖2 + ‖W‖2)/2 ≤ 1. Similarly ‖u3 + u4‖cb ≤ 1.
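To make the polarization formula concrete, here is a minimal numerical sketch (not from the book) in Python/NumPy: we take π to be the identity representation on matrices, pick random V, W, and check that u = u1 − u2 + i(u3 − u4) with the four maps defined as in the proof. The finite-dimensional setting and the names are of course only illustrative.

```python
import numpy as np

# Numerical check of the polarization identity used in Corollary 1.53,
# with pi the identity representation on 3x3 matrices (illustrative only).
rng = np.random.default_rng(0)
n = 3
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def u(x):            # u(x) = W* pi(x) V with pi = id
    return W.conj().T @ x @ V

def cp_piece(T, x):  # maps of the form T* pi(x) T are completely positive
    return T.conj().T @ x @ T

x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u1 = cp_piece(V + W, x) / 4
u2 = cp_piece(V - W, x) / 4
u3 = cp_piece(V + 1j * W, x) / 4
u4 = cp_piece(V - 1j * W, x) / 4

# u = u1 - u2 + i(u3 - u4)
assert np.allclose(u(x), u1 - u2 + 1j * (u3 - u4))
print("polarization identity verified")
```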


Remark 1.54 (GNS and Hahn decomposition) Let A be a C∗-algebra and let f ∈ A∗. Then it is well known that there are a ∗-homomorphism π : A → B(H) and vectors η,ξ ∈ H such that for any x ∈ A

f(x) = ⟨η,π(x)ξ⟩ and ‖ξ‖ ‖η‖ = ‖f‖A∗.

This can be derived from the classical GNS factorization (see (A.16)). We would like to point out to the reader that we have proved it in passing. Indeed, we can view this fact as a particular case of Theorem 1.50 applied with E = A to the linear map f : A → C, since by Proposition 1.3 we have ‖f‖cb = ‖f‖ in this case. Moreover, if f is a self-adjoint form, i.e. if f(x) = \overline{f(x∗)} for any x ∈ A, then we recover the classical Hahn decomposition: there are f+,f− ∈ A∗+ such that f = f+ − f− and ‖f+‖ + ‖f−‖ = ‖f‖. Indeed, we can take f±(x) = 4⁻¹⟨η ± ξ, π(x)(η ± ξ)⟩ and by homogeneity we may assume ‖η‖ = ‖ξ‖ = ‖f‖^{1/2}. Then

‖f+‖ + ‖f−‖ ≤ 4⁻¹(‖η + ξ‖2 + ‖η − ξ‖2) = 2⁻¹(‖η‖2 + ‖ξ‖2) = ‖f‖.

We already saw in (1.3) that every finite rank map u : E → F (between arbitrary operator spaces) is c.b. Let α(n) be the best constant C such that, for any E,F, any map u : E → F of rank n satisfies ‖u‖cb ≤ C‖u‖. To majorize α(n), we will need the following classical lemma.

Lemma 1.55 (Auerbach's lemma) Let E be an arbitrary n-dimensional normed space. There is a biorthogonal system xj ∈ E, ξj ∈ E∗ (j = 1,2,...,n) such that ‖xj‖ = ‖ξj‖ = 1 for all j = 1,...,n.

Proof Choose x1,...,xn in the unit sphere of E on which the function (x1,...,xn) → |det(x1,...,xn)| attains its maximum, supposed equal to C > 0. Then let ξj(y) = C⁻¹ det(x1,...,xj−1,y,xj+1,...,xn). The desired properties are easy to check.

Remark 1.56 We may as well assume dim(F) = n. Then by Auerbach's lemma we can write IdF as the sum of n rank one maps of unit norm. This immediately implies α(n) ≤ n. However, this is not best possible: it is known (due to Éric Ricard, see [208, p. 145]) that α(n) ≤ n/2^{1/4}, but the exact value of α(n), or of lim sup α(n)/n, does not seem to be known, although it is known that the latter limit is ≥ 1/2, by an argument due to Paulsen (see [196] or [208, p. 75]).


To illustrate the use of the factorization theorem, we end with a well-known characterization of complete boundedness for Schur multipliers. Let I be any set. We denote by (ei)i∈I the canonical basis of ℓ2(I) and by eij (i,j ∈ I) the matrix units in B(ℓ2(I)). To any a ∈ B(ℓ2(I)) we associate the matrix [aij] (i,j ∈ I) defined as usual by aij = ⟨ei,a(ej)⟩. Let ϕ : I × I → C be any function. Any linear mapping that takes [aij] to [aij ϕ(i,j)] is commonly called a "Schur multiplier." The next result characterizes the Schur multipliers that are c.b. linear maps from B(ℓ2(I)) to itself.

Proposition 1.57 (Schur multipliers on B(ℓ2)) Let C ≥ 0 be a constant. The following are equivalent:

(i) There is a c.b. map u : B(ℓ2(I)) → B(ℓ2(I)) such that u(eij) = ϕ(i,j)eij for any (i,j) ∈ I × I, with ‖u‖cb ≤ C.
(ii) There are a Hilbert space H and bounded functions y : I → H, x : I → H such that sup_{i∈I} ‖y(i)‖H sup_{j∈I} ‖x(j)‖H ≤ C and

∀(i,j) ∈ I × I    ϕ(i,j) = ⟨y(i),x(j)⟩.

(iii) The Schur multiplier u : B(ℓ2(I)) → B(ℓ2(I)) that takes [aij] to [aij ϕ(i,j)] is c.b. with ‖u‖cb ≤ C.

Proof Assume (i). By Theorem 1.50 there are π : B(ℓ2(I)) → B(Ĥ) and V,W : ℓ2(I) → Ĥ with ‖V‖ ‖W‖ ≤ C such that ϕ(i,j)eij = V∗π(eij)W. This implies ϕ(i,j) = ⟨ei,(V∗π(eij)W)ej⟩. Fix an element o ∈ I. Note eij = eio eoj. Therefore we have ϕ(i,j) = ⟨y(i),x(j)⟩ where x(j) = π(eoj)Wej and y(i) = π(eio)∗Vei, and (ii) follows.

Assume (ii). Define π : B(ℓ2(I)) → B(ℓ2(I) ⊗2 H) by π(a) = a ⊗ IdH and let Vx : ℓ2(I) → ℓ2(I) ⊗2 H be the map taking ei to ei ⊗ x(i) (i ∈ I). Note ‖Vx‖ = sup_{i∈I} ‖x(i)‖H. Let u(·) = Vy∗ π(·)Vx. Then u coincides with the Schur multiplier in (iii) and ‖u‖cb ≤ C. Thus (ii) ⇒ (iii).

(iii) ⇒ (i) is trivial.

Remark 1.58 Actually, it is known that (iii) ⇒ (ii) holds even if we merely assume that u is a bounded Schur multiplier with ‖u‖ ≤ C. In other words the c.b. norm and the norm of a Schur multiplier on B(ℓ2(I)) are equal. We prove this as a consequence of a more general phenomenon in Corollary 2.7.
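Here is a small finite-dimensional sketch (not from the book) of the implication (ii) ⇒ (iii): we build ϕ(i,j) = ⟨y(i),x(j)⟩ from random vectors and check on random matrices that the Schur multiplier is bounded by sup‖y(i)‖ sup‖x(j)‖. All names and parameters below are only illustrative.

```python
import numpy as np

# Sketch of Proposition 1.57, (ii) => (iii): if phi(i, j) = <y(i), x(j)>,
# then a -> [a_ij * phi(i, j)] has norm at most sup_i ||y(i)|| * sup_j ||x(j)||.
rng = np.random.default_rng(1)
n, d = 6, 4                      # size of the index set, dimension of H
Y = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))   # rows = y(i)
X = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))   # rows = x(j)
phi = Y.conj() @ X.T             # phi[i, j] = <y(i), x(j)>
C = np.linalg.norm(Y, axis=1).max() * np.linalg.norm(X, axis=1).max()

for _ in range(100):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    lhs = np.linalg.norm(phi * a, 2)       # operator norm of the Schur product
    rhs = C * np.linalg.norm(a, 2)
    assert lhs <= rhs + 1e-10
print("Schur multiplier bound verified on random matrices")
```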

1.7 Normal c.b. maps on von Neumann algebras It is well known that the Banach space X = L1 (μ) has the property that there is a contractive projection from X∗∗ to X. This corresponds to the decomposition into absolutely continuous and singular parts (with respect to μ) in the abstract L1 -space X∗∗ .


The noncommutative analogue is also true, as follows. Let M be a von Neumann algebra with predual M∗. We will work with M∗∗ which we assume realized as a von Neumann subalgebra of B(K). Since this is a source of mistakes, as a preliminary precaution Remark A.60 is recommended reading. For any linear map u : M → B(H), we will use the notation (see §A.16)

ü = (u∗|B(H)∗)∗ : M∗∗ → B(H).

Note that ü is normal and for any f ∈ B(H)∗ and any z ∈ M∗∗ we have

⟨f, ü(z)⟩ = ⟨u∗(f),z⟩.    (1.31)

If u is a unital ∗-homomorphism, ü is also one. In particular taking u = IdM we find a normal unital ∗-homomorphism π : M∗∗ → M that extends the identity on M. Consider then the set I0 ⊂ M∗∗ that is the annihilator of the predual M∗ (viewed as a subspace of the dual M∗). It is immediate that I0 is a weak* closed two-sided ideal in M∗∗ because I0 = ker(π). It follows (see Remark A.34) that there is a central projection Q0 ∈ M∗∗ such that I0 = Q0 M∗∗.

Lemma 1.59 For any bounded linear map u : M → B(H) we define

∀x ∈ M    uN(x) = ü((1 − Q0)x) and uS(x) = ü(Q0 x).

Then uN : M → B(H) is normal with ‖uN‖ ≤ ‖u‖ and in the c.b. case ‖uN‖cb ≤ ‖u‖cb. If u is normal we have uN = u. Moreover if u is a ∗-homomorphism (resp. is c.p.) so is uN.

Proof If u is a ∗-homomorphism (resp. is c.p.) so is ü, thus (Q0 being central) the last assertion is obvious, as well as the norm inequalities. To show that uN is normal it suffices to show that uN∗(B(H)∗) ⊂ M∗ (preduals on both sides). A priori we only know that uN∗(B(H)∗) lands in the dual of M. By the bipolar criterion (applied to the duality between M∗ and M∗∗) it suffices to show that uN∗(B(H)∗) ⊂ (M∗)⊥⊥ = I0⊥, or equivalently to show that ⟨uN∗(f),z⟩ = 0 for any f ∈ B(H)∗ and any z ∈ I0. This can be checked as follows. Let (zi) be a bounded net in M tending weak* to z ∈ I0. Then (see Remark A.37)

(1 − Q0)zi → (1 − Q0)z = 0 with respect to σ(M∗∗,M∗).

Since uN(zi) tends weak* to uN∗∗(z) we have

⟨uN∗(f),z⟩ = ⟨f,uN∗∗(z)⟩ = lim ⟨f,uN(zi)⟩ = lim ⟨f,ü((1 − Q0)zi)⟩ = lim ⟨u∗(f),(1 − Q0)zi⟩ = 0,

where the last step uses (1.31). This completes the proof that uN is normal.


Assume u normal. We will show that ⟨f,uS(x)⟩ = 0 for any f ∈ B(H)∗ and x ∈ M. Indeed, we have ⟨f,uS(x)⟩ = ⟨u∗(f),Q0 x⟩ and Q0 x ∈ I0 = (M∗)⊥ while u∗(f) ∈ M∗, therefore ⟨f,uS(x)⟩ = 0. Since B(H)∗ separates the points of B(H) this implies uS = 0 and hence uN = u.

Theorem 1.60 Let M ⊂ B(K) be a von Neumann algebra. Let u : M → B(H) be a normal c.b. map. Then there is a normal c.b. map ũ : B(K) → B(H) extending u with ‖ũ‖cb = ‖u‖cb. There are a Hilbert space Ĥ, a normal ∗-homomorphism π : B(K) → B(Ĥ) and operators V1 : H → Ĥ, V2 : H → Ĥ with ‖V1‖ ‖V2‖ = ‖u‖cb such that u(x) = V2∗π(x)V1 for all x ∈ M. Moreover, we can obtain this with Ĥ of the form Ĥ = L ⊗2 K and with π(x) = IdL ⊗ x (x ∈ B(K)) for some Hilbert space L.

Proof By Theorem 1.50 we can find Ĥ, π : M → B(Ĥ) and V1,V2 such that u(x) = V2∗π(x)V1. We clearly have ü(·) = V2∗π̈(·)V1 and hence uN(·) = V2∗πN(·)V1, and since u is normal u = uN. Thus we may replace π by πN and assume that π is normal. Then the proof can be completed by applying Theorem 1.41 to u = π (or, more directly, by invoking Theorem A.61).

We end with a very simple observation for later use.

Lemma 1.61 Let A be a C∗-algebra, M a von Neumann algebra. Then for any u ∈ CB(A,M) the mapping ü : A∗∗ → M is a normal c.b. map extending u with ‖ü‖cb = ‖u‖cb.

Proof Recall that ü is the unique (σ(A∗∗,A∗),σ(M,M∗))-continuous extension of u. Since Mn(A)∗∗ = Mn(A∗∗) (see Proposition A.58), it follows from the weak*-density of BMn(A) in BMn(A∗∗) that ‖ün‖ = ‖un‖.

1.8 Notes and remarks The history of complete positivity starts with Stinespring’s 1955 paper where he proves his factorization theorem for c.p. maps on a C ∗ -algebra A. The case when A is commutative was already known due to Naimark’s work on spectral measures. In two major, very influential Acta Mathematica papers in 1969 and 1972 Arveson [12] considerably expanded on Stinespring’s breakthrough. He made the crucial step of considering complete positivity for maps defined only on operator systems, and he proved his extension theorem. Later on, Choi and Effros [47] made a deep study of injectivity for operator systems, that somehow opened the way for the later development of operator space theory. While it seems a bit surprising in retrospect, the factorization of completely bounded maps emerged only in the early 1980s through independent works by Wittstock, Haagerup (unpublished), and Paulsen. We refer the reader to


Paulsen’s book [196] for details and more proper credit on the genesis of that important result, which is fundamental for operator space theory. The latter was ignited by Ruan’s 1987 Ph.D. thesis where his abstract characterization of operator spaces is proved. This opened the way to the study of duality for operator spaces (see §2.4), which was thoroughly investigated independently by Effros–Ruan and Blecher–Paulsen. The books by Effros and Ruan [80], by Paulsen [196], by Blecher and Le Merdy [27], as well as our own [208], provide multiple complements to our presentation of complete positivity and complete boundedness. See Størmer’s [235] for more on the comparison between positivity and complete positivity. Theorem 1.57 and Remark 1.58 about Schur multipliers have a long history: some essentially equivalent formulation (up to some factor 2) can be traced back to Grothendieck’s [98]. Later on the result was rediscovered independently by J. Gilbert and U. Haagerup (see [210] or [207, p. 100] for details). The appendix of Haagerup’s manuscript dating from 1986 but recently published as [108] contains more results on Schur multipliers. See also Corollary 2.7.

2 Completely bounded and completely positive maps: a tool kit

In this follow-up chapter on c.b. and c.p. maps, we include a variety of related topics that will later allow us to better illustrate the C ∗ -algebraic tensor product theory, and its significance for spaces of linear mappings between operator spaces or C ∗ -algebras. We advise the reader to browse through this chapter on first reading and return to each specific topic whenever needed.

2.1 Rows and columns: operator Cauchy–Schwarz inequality

Let (xj) be an n-tuple in a C∗-algebra A. Let r ∈ Mn(A) be the "row matrix" that has (xj) on its first row and zero everywhere else. In other words, r1j = xj and rij = 0 for any i > 1. Equivalently, in tensor product notation r = Σj e1j ⊗ xj ∈ Mn ⊗ A. Then

‖r‖Mn(A) = ‖Σ xj xj∗‖^{1/2}.

Indeed, this follows from ‖r‖2Mn(A) = ‖rr∗‖Mn(A). Motivated by this, we will frequently use the notation

‖x‖R = ‖Σ xj xj∗‖^{1/2}.

Analogously, if c ∈ Mn(A) is the "column matrix" that has (xj) on its first column and zero everywhere else, or in tensor product notation if c = Σj ej1 ⊗ xj ∈ Mn ⊗ A, then

‖c‖Mn(A) = ‖Σ xj∗ xj‖^{1/2}.

This follows either from ‖c‖2Mn(A) = ‖c∗c‖Mn(A) or from the fact that r = c∗ is a row matrix. We will use the notation


‖x‖C = ‖Σ xj∗ xj‖^{1/2}.
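As a quick numerical sanity check (not from the book), one can verify these two identities for random matrices; the following is a minimal NumPy sketch, with all parameters chosen only for illustration.

```python
import numpy as np

# Numerical check (illustrative only) that the row matrix r = [x1 ... xn]
# has operator norm || sum xj xj* ||^{1/2}, and the column matrix
# c = [x1; ...; xn] has norm || sum xj* xj ||^{1/2}.
rng = np.random.default_rng(2)
d, n = 5, 3
xs = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(n)]

row = np.hstack(xs)            # d x (n*d) "row matrix"
col = np.vstack(xs)            # (n*d) x d "column matrix"

sum_xx_star = sum(x @ x.conj().T for x in xs)
sum_xstar_x = sum(x.conj().T @ x for x in xs)

assert np.isclose(np.linalg.norm(row, 2), np.linalg.norm(sum_xx_star, 2) ** 0.5)
assert np.isclose(np.linalg.norm(col, 2), np.linalg.norm(sum_xstar_x, 2) ** 0.5)
print("row/column norm identities verified")
```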

These simple remarks lead us to the following useful test to check whether a map is c.b. or to evaluate its c.b. norm.

Proposition 2.1 Let E ⊂ B(H), F ⊂ B(K) be operator spaces. Then any u ∈ CB(E,F) satisfies the following inequalities for any n and any n-tuple (xj) in E:

‖Σ u(xj)u(xj)∗‖^{1/2} ≤ ‖u‖cb ‖Σ xj xj∗‖^{1/2},
‖Σ u(xj)∗u(xj)‖^{1/2} ≤ ‖u‖cb ‖Σ xj∗ xj‖^{1/2}.

Proof Just observe that un takes a row (resp. column) matrix to a row (resp. column) matrix.

Remark 2.2 Let Rn = span[e1j] ⊂ Mn and Cn = span[ej1] ⊂ Mn. Using the preceding test the reader can check as an easy exercise that the linear mapping u : Rn → Cn defined by u(e1j) = ej1 (which is isometric) satisfies ‖u‖cb ≥ √n, and in fact ‖u‖cb = √n. Another classical exercise consists in checking that the c.b. norm of transposition viewed as a linear map on Mn is equal to n (hint: compute ‖Σij eij ⊗ eij‖ and ‖Σij eij ⊗ eji‖).

It will be convenient to record here several simple consequences of the classical Cauchy–Schwarz inequality.

Lemma 2.3 Let (ai)i∈I and (bi)i∈I be finitely supported families of operators in B(H). We have for any bounded family (xi) in B(H)

‖Σi∈I ai xi bi‖ ≤ ‖Σi∈I ai ai∗‖^{1/2} (supi ‖xi‖) ‖Σi∈I bi∗ bi‖^{1/2}.    (2.1)

In particular

‖Σi∈I ai bi‖ ≤ ‖Σi∈I ai ai∗‖^{1/2} ‖Σi∈I bi∗ bi‖^{1/2}.    (2.2)

i,j =1

  1/2  1/2      ai ai∗  xMn (A)  bj∗ bj  . ai xij bj  ≤ 

(2.3)

2.2 Automatic complete boundedness

Proof n  

i,j =1

43

    ai xij bj = sup η,ai xij bj ξ  = sup ai∗ η,xij bj ξ  ξ,η∈BH

≤ [xij ]Mn (B(H ))

ξ,η∈BH

 1/2  1/2 sup sup bj ξ 2 ai∗ η2

ξ ∈BH

η∈BH

 1/2  1/2     bj∗ bj   ai ai∗  . ≤ xMn (E) 

In particular, we quote the following variant for future reference. ∗ Lemma 2.5 Let A ⊂ B(H  and let  (aj ), (bj ) be finite   ) be∗ a C -algebra   aj aj ≤ 1 and  bj∗ bj  ≤ 1. For any ϕ ∈ A∗ sequences in A such that let ϕj be defined by ϕj (x) = ϕ(aj xbj ). Then  (2.4) ϕj A∗ ≤ ϕA∗ .   Proof We have ϕj A∗ = sup{| ϕj (xj )|} where the sup runs over all xj ∈ BA . Then by (2.1) we have       aj xj bj ≤ ϕA∗ , sup ϕj (xj ) = ϕ

whence (2.4).

2.2 Automatic complete boundedness Let u : E → B be a bounded linear map from an operator space E to a C ∗ -algebra B. We already saw (see Proposition 1.3) that if B is commutative then u is automatically c.b. and ucb = u. Essentially, this phenomenon reduces to the case B = C. We will now show a very useful generalization of the latter fact involving “cyclicity.” We recall that a ∗-homomorphism π : A → B(H ) on a C ∗ -algebra is called cyclic if there is a vector ξ ∈ H (itself called cyclic) such that π(A)ξ = H . Note that when dim(H ) = 1 any nonzero π is trivially cyclic. Theorem 2.6 ([232]) Let E ⊂ B(H) be an operator space. Let u : E → B(H ) be a bounded linear map. Assume that there are unital C ∗ -subalgebras A1,A2 ⊂ B(H) and ∗-homomorphisms π1 : A1 → B(H ) and π2 : A2 → B(H ) with respect to which E is a bimodule and u is bimodular, meaning that for all aj ∈ Aj and all x ∈ E we have a1 xa2 ∈ E and u(a1 xa2 ) = π1 (a1 )u(x)π2 (a2 ). If π1 and π2 are cyclic, then u is c.b. and ucb = u.

44

Completely bounded and completely positive maps

Proof We may assume u = 1. Let ξj be a cyclic unit vector for πj . Let complete the proof n ≥ 1 and x = [xij ] ∈ Mn (E) with xMn (E) ≤ 1. To  it suffices to show that [u(xij )]Mn (B(H )) ≤ 1, or that ij ki ,u(xij )hj  ≤ 1   n 2 2 for any (ki ),(hj ) ∈ H such that ki  < 1 and hj  < 1. By cyclicity, we may assume that ki = π1 (ai )ξ1 and hj = π2 (bj )ξ2 for some ai ∈ A1,bj ∈  ∗ 1/2  ∗ 1/2 A2 . Assume for the moment that a = ai ai and b = bj bj are invertible. We may then factorize ai ,bj as ai = ai a and bj = bj b where ai = ai a −1 and bj = bj b−1 . Let ξ1 = π1 (a)ξ1 and ξ2 = π2 (b)ξ2 .  ∗   ∗  bj bj = 1 and also that A simple verification shows that ai ai = 1,     2 2 2 2 ξ1  = ki  < 1 and ξ2  = hj  < 1. Therefore using the modular assumptions we have   ki ,u(xij )hj  = ξ  ,π1 (a  ∗ )u(xij )π2 (b )ξ   i j 1 2 ij       ∗ ∗ = ξ1 ,u ai xij bj  ai xij bj ξ2  ≤ u  and hence by (2.3) ≤ u. This proves the result assuming a,b invertible. The general case requires a minor adjustment: fixing ε > 0 we set a = ε1 +

1/2  ∗ 1/2  ai ai and b = ε1 + bj∗ bj . Then a,b are invertible and a simple modification of the preceding argument leads to the same conclusion. Corollary 2.7 (Schur multipliers as module maps) For any Schur multiplier u : B(2 (I )) → B(2 (I )) as in Proposition 1.57 we have ucb = u. Proof We apply Theorem 2.6 taking for A1 and A2 the algebra of diagonal operators on 2 (I ). We may reduce to the case when I is countable, in which case the latter algebra has a cyclic vector.

2.3 Complex conjugation

Let E be a Banach space. We will denote by Ē the complex conjugate of E, i.e. the vector space E with the same norm but with the conjugate multiplication by a complex scalar. We will denote by x → x̄ the identity map from E to Ē. Thus, x and x̄ are the same element but we "declare" that ∀λ ∈ C, λ x̄ = \overline{λ̄ x}. The space Ē is anti-isometric to E. Perhaps we should warn the reader that although this notion is very simple, it is easy to get confused by it.

Remark 2.8 For any Hilbert space H the dual H∗ is a Hilbert space that can be canonically identified with H̄ using the (sesquilinear) scalar product. Let (ej)



(resp. (fi)) be orthonormal bases in a Hilbert space H (resp. K). The spaces H∗ (resp. K∗) can be equipped with the biorthogonal orthonormal bases (that can be identified with (ēj) and (f̄i) in H̄ and K̄). Let a ∈ B(H,K). We define ā : H̄ → K̄ by setting ā(h̄) = \overline{a(h)} for any h ∈ H. It is easy to see that a → ā is an isomorphism from B(H,K) to B(H̄,K̄). We can associate a bi-infinite matrix [aij] in the usual way so that aij = ⟨fi,a(ej)⟩. Then on one hand [āij] is the matrix associated to ā, and on the other hand, the Banach space sense adjoint operator from K∗ to H∗ admits the transposed matrix [aji] as associated matrix. We choose to denote it by ᵀa : K∗ → H∗ to avoid the conflict with the usual Hilbert sense adjoint a∗ : K → H. Note that a → ᵀa is linear while a → a∗ is antilinear. A moment of thought shows that the mapping ᵀa : K∗ → H∗ can be canonically identified with ā∗ : K̄ → H̄.

Incidentally, it is perhaps worthwhile to remind the reader that while H∗ ≃ H̄ is canonical, the identifications H∗ ≃ H and H̄ ≃ H depend on the choice of an orthonormal basis. In sharp contrast, for general Banach spaces E need not be (C-linearly) isomorphic to Ē (see [33]).

When A is a C∗-algebra, Ā is also a C∗-algebra for the same product and involution. This allows us to extend the notion of complex conjugate to operator spaces: for any operator space E ⊂ A, we define Ē ⊂ Ā as the corresponding subspace of Ā. Equivalently, assuming E ⊂ B(H), by what precedes \overline{B(H)} can be canonically identified with B(H̄), thus the embedding Ē ⊂ \overline{B(H)} = B(H̄) allows us to equip Ē with an operator space structure. Moreover, if E ⊂ B(H) is a C∗- (resp. von Neumann) subalgebra, so is Ē ⊂ B(H̄). To be more "concrete," if an operator space E is given as a collection of bi-infinite matrices {[aij] | a ∈ E} (representing operators acting on ℓ2), then the space formed of all operators with complex conjugate matrices {[āij] | a ∈ E} is C-linearly (completely) isometrically isomorphic to Ē.

Remark 2.9 It is an easy exercise to check that the injectivity of E is equivalent to that of Ē.

Remark 2.10 Let (ei)i∈I be an orthonormal basis of H. While the isomorphism \overline{B(H)} ≃ B(H̄) is canonical (which means independent of the choice of orthonormal basis), there is also a noncanonical isomorphism B(H) ≃ B(H̄), associated to the usual (basis dependent) isometric isomorphism H ≃ H̄ that takes ei to ēi (i ∈ I). In the case of Mn = B(ℓn2), the (completely isometric)



isomorphism π : Mn → M̄n is the linear map defined by π([aij]) = \overline{[āij]}, or in tensor product notation π(Σ aij eij) = Σ aij ēij = \overline{Σ āij eij}. The underlying map from Mn to itself is the complex conjugation.

Let E be an operator space. By the preceding definition of Ē the norm of Mn(Ē) = Mn ⊗min Ē is characterized by the following identity

∀aj ∈ Mn ∀xj ∈ E    ‖Σj aj ⊗ x̄j‖Mn ⊗min Ē = ‖Σj āj ⊗ xj‖Mn ⊗min E .

In other words, the operator space structure of Ē is precisely defined so that Mn ⊗min Ē is naturally anti-isometric to Mn ⊗min E. For each matrix [aij] in Mn(E), we simply have ‖[āij]‖Mn(Ē) = ‖[aij]‖Mn(E). In sharp contrast to B(H), there are examples (see [60]) of von Neumann algebras E which fail to be C∗-isomorphic to Ē.

Let H,K be Hilbert spaces. Assume H = ℓ2(I1), K = ℓ2(I2). Any ξ ∈ H ⊗2 K can be represented by a kernel [ξ(i1,i2)] so that the series ξ = Σ_{I1×I2} ξ(i1,i2) ei1 ⊗ ei2 converges in H ⊗2 K ≃ ℓ2(I1 × I2). To any pair ξ,η ∈ H ⊗2 K we associate a linear form on B(H) ⊗min B(K) ⊂ B(H ⊗2 K) defined by ϕξ,η(t) = ⟨ξ,tη⟩. Let hξ : ℓ2(I2) → ℓ2(I1) be the (Hilbert–Schmidt) linear operator associated to ξ in the usual way, so that if ξ = ei1 ⊗ ei2 then (with the usual matrix conventions) hξ = ei1i2 ((i1,i2) ∈ I1 × I2). As usual we associate to an operator a ∈ B(H) the matrix [aij] defined by aij = ⟨ei,aej⟩ for (i,j) ∈ I1 × I1, and similarly for b. We denote by ᵗb ∈ B(K) the operator associated to the transposed matrix [bji]. We observe that (note that hξ∗ a hη ᵗb ∈ S1(K,K))

ϕξ,η (a ⊗ b) = tr(h∗ξ ahη t b).

(2.5)

Indeed, it suffices to check this when ξ = ei1 ⊗ ei2 and η = ej1 ⊗ ej2 and then both terms in (2.5) are easily seen to be equal to ai1,j1 bi2,j2 . We now wish to apply the same formula to B(H ) ⊗ B(K). In that case, to  any ξ ∈ H ⊗2 K we associate the kernel [ξ(i1,i2 )] defined by ξ = I1 ×I2 ξ(i1,i2 )ei1 ⊗ ei2 so that hξ : K → H remains the same. The formula becomes ∀a ∈ B(H ), b ∈ B(K),

ϕξ,η (a ⊗ b) = tr(h∗ξ ahη b∗ ).

(2.6)

Indeed, t b can be identified with the adjoint operator b∗ ∈ B(H ), with associated matrix [bji ].



 Let t ∈ B(H ) ⊗ B(K), be a finite sum t = aj ⊗ bj . Then tB(H )⊗min B(K) = sup{|ϕξ,η (t)|} where the sup runs over ξ,η in the unit ball of H ⊗2 K. This gives us      (2.7) = sup tr(y ∗ aj xbj∗ ) , aj ⊗ bj   B(H )⊗min B(K)

j

y,x∈B2

where B2 denotes the unit ball in S2 (K,H ). Let us denote by  2 the norm in S2 (K,H ) or S2 (H,K). Then (2.7) can be rewritten as             aj ⊗ bj  aj xbj∗  = sup  bj∗ y ∗ aj  . = sup   B(H )⊗min B(K)

2

x∈B2

2

y∈B2

(2.8) We need to record all variants of this for future reference: Proposition 2.11 Consider finite sequences (aj ) in B(H ) and (bj ) in B(K). Then we have             bj x ∗ aj∗  = sup  aj∗ ybj  aj ⊗ bj  = sup   B(H )⊗min B(K)

2

x∈B2

2

y∈B2

(2.9)         aj xbj∗  = sup  bj∗ y ∗ aj  . = sup  2

x∈B2

2

y∈B2

(2.10)     Proof Since  aj ⊗ bj B(H )⊗ B(K) =  bj ⊗ aj B(K)⊗ B(H ) by min min Remark 1.9, it is easy to derive (2.9) from (2.8), exchanging the roles of (aj ),B(H ) and (bj ),B(K). Then (2.10) follows using T 2 = T ∗ 2 for all T in S2 (H,K) or S2 (K,H ). When H = K and aj = bj we can make these formulae more precise: Proposition 2.12 Consider a finite sequence (aj ) in B(H ). Then we have        = sup tr(xaj∗ y ∗ bj ) x,y ∈ BS2 (H ),x,y ≥ 0 .  aj ⊗ aj B(H )⊗min B(H )



j

(2.11)

Moreover, if aj ⊗ aj is self-adjoint, then the preceding supremum is unchanged if we restrict it to x = y ≥ 0 in BS2 (H ) . Proof Every x in S2 can be written as x1 − x2 + i(x3 − x4 ) with x1, . . . ,x4 all ≥ 0 such that x1 22 + x2 22 + x3 22 + x4 22 = x22 . From this fact (applied to both x and y) it is easy to deduce (2.11) from (2.7). Moreover, when   aj ⊗ aj is self-adjoint, the sesquilinear form (y,x) → j tr(xaj∗ y ∗ aj ) is symmetric, and hence (by polarization) the supremum in (2.11) remains the same if we restrict it to x = y.

48

Completely bounded and completely positive maps

Remark 2.13 (Opposite C ∗ -Algebra) Let A be a C ∗ -algebra. We define the opposite C ∗ -algebra, that we denote by Aop , as the same C ∗ -algebra but with the reverse product, so that the product of a,b in Aop is defined as ba. The involution remains unchanged. It turns out that this is nothing but another way to consider A. Indeed, as is easy to check, the mapping a → a ∗ is a (C-linear) isomorphism from A to Aop . Therefore for any C ∗ -algebra B and any aj ∈ A, bj ∈ B (1 ≤ j ≤ n) we have         aj∗ ⊗ bj  op aj ⊗ bj  = . (2.12)  A ⊗min B

A⊗min B

If H,K are Hilbert spaces, we have clearly H ⊗2 K  H ⊗2 K, and hence B(H ) ⊗min B(K)  B(H ) ⊗min B(K). Therefore:              aj ⊗ bj  aj ⊗ bj  = aj ⊗ bj  = .   A⊗min B

A⊗min B

A⊗min B

(2.13) Remark 2.14 (Conjugate of group representation) We will also use complex conjugation for group representations. Given a group G and a unitary group representation π : G → B(H ), let π (t) = π(t) for any t ∈ G. This defines a unitary representation π : G → B(H ).

2.4 Operator space dual The existence of the operator space dual is a consequence of Ruan’s fundamental theorem, describing for any vector space E, the sequences of norms on the spaces {Mn (E) | n ≥ 1} that come from an embedding of E in B(H ), in other words that are associated to an operator space structure on E. But actually we prefer to give a direct proof avoiding the use of Ruan’s theorem. Theorem 2.15 Let E be an operator space. There is a Hilbert space H and an isometric embedding J : E ∗ → B(H) such that, for all n ≥ 1 and all ξ = [ξij ] ∈ Mn (E ∗ ), we have [J (ξij )]Mn (B(H)) = uξ cb where uξ : E → Mn is the linear map naturally associated to ξ . Proof Consider the set D that is the disjoint union of the unit balls in Mn (E), i.e. we have ·  BMn (E) . D= n≥1

2.4 Operator space dual

49

Then for any t ∈ D, we have t ∈ BMn (E) for some n = n(t) and we denote by vt : E ∗ → Mn(t) the linear map associated to t. We then define     ∀ξ ∈ E ∗ J (ξ ) = vt (ξ ) ∈ ⊕ Mn(t) . ∞

t∈D

Since, by (1.3), ξ cb = ξ  and vt (ξ ) = (IdMn(t) ⊗ ξ )(t) ∈ Mn(t) , we have vt (ξ )Mn(t) ≤ ξ  and hence J (ξ ) ≤ ξ , but using only t ∈ BM1 (E) = BE we find J (ξ ) ≥ ξ . Thus J is isometric. Consider now ξ = [ξij ] ∈ Mn (E ∗ ) with associated linear map uξ : E → Mn . We have then by (1.2) uξ cb = sup (IdMn(t) ⊗ uξ )(t)Mn(t) (Mn ) t∈D

but since (modulo permutation of factors) Mn(t) (Mn )  (IdMn(t) ⊗ uξ )(t)  [vt (ξij )] ∈ Mn (Mn(t) ) and since such a permutation clearly preserves the norms we have uξ cb = sup [vt (ξij )]Mn (Mn(t) ) = [J (ξij )]Mn (B(H)), t∈D

where at the last step we used (1.13). The operator space dual E ∗ is defined as the one obtained by equipping E ∗ with the sequence of the norms on Mn (E ∗ ) derived from the embedding J in the preceding statement. Thus we have isometrically Mn (E ∗ ) = Mn ⊗min E ∗ = CB(E,Mn ).

(2.14)

More generally, we have for any operator space G an isometric inclusion G ⊗min E ∗ ⊂ CB(E,G) (or equivalently E ∗ ⊗min G ⊂ CB(E,G)). (2.15) Indeed, this is easy to deduce from the case G = Mn using (1.9) or (1.11). The norm in Mn (E ∗ ) is described in a more suggestive way by the formula ∀ξ ∈ Mn (E ∗ )ξ Mn (E ∗ ) = sup{ξ · xMn (Mm ) | x ∈ BMm (E), m ≥ 1} (2.16) where ξ ·x =

 1≤ij≤n,1≤k,l≤m

eij ⊗ ekl ξij (xkl ).

By (1.18) the equality (2.16) still holds if we restrict the sup to m = n.

50

Completely bounded and completely positive maps

The resulting formula then appears as an extension of ξ E ∗ = sup{|ξ(x)| | x ∈ BE } (case n = 1). We can reverse the roles of E and E ∗ , as follows. We will show that for any G we also have isometric embeddings E ⊗min G ⊂ CB(E ∗,G) (or equivalently G ⊗min E ⊂ CB(E ∗,G)). (2.17) Let t ∈ E ⊗ G and let  t : E ∗ → G be the associated linear map. For any v ∈ CB(E,Mn ) let tv ∈ Mn (E ∗ ) be the associated tensor. By (2.14), the correspondence v → tv is a bijection from BCB(E,Mn ) to BMn (E ∗ ) . Observe t)(tv ). By (1.9) (with G and E interchanged) we that (v ⊗ IdG )(t) = (IdMn ⊗  have tmin = sup{(v ⊗ IdG )(t)Mn (G) | n ≥ 1,v ∈ BCB(E,Mn ) }, and hence tmin = sup{(IdMn ⊗  t)(tv )Mn (G) | n ≥ 1,tv ∈ BMn (E ∗ ) } =  tcb, which proves that (2.17) is isometric. Taking G = Mn in (2.17) and recalling (2.14) we find that the inclusion Mn (E) ⊂ Mn ((E ∗ )∗ ) is isometric. This shows that the inclusion E ⊂ (E ∗ )∗ = E ∗∗ is completely isometric. In particular, if E is reflexive as a Banach space, then (E ∗ )∗ = E completely isometrically. From these remarks, the following statement emerges naturally: Lemma 2.16 Let E,F be operator spaces. For any u ∈ CB(E,F ) the adjoint u∗ : F ∗ → E ∗ is c.b. and u∗ cb = ucb . Proof Composition by u on the left gives us a contraction from CB(F,Mn ) to CB(E,Mn ) which can be restated as (u∗ )n : Mn (F ∗ ) → Mn (E ∗ ) ≤ ucb . This implies u∗ cb ≤ ucb . Iterating we find (u∗ )∗ cb ≤ u∗ cb , and by the preceding remark ucb ≤ (u∗ )∗ cb .

2.5 Bi-infinite matrices with operator entries In the completely bounded context, it is natural to wonder whether one can replace Mn (E) by the space M∞ (E) of bi-infinite matrices with entries in E. Let us first clarify what we mean by M∞ (E). Let E ⊂ B(H ) be an operator space. Recall the notation 2 (H ) = H ⊕ H ⊕ · · · . We may clearly represent an operator a ∈ B(2 (H )) by a matrix [aij ] with entries in B(H ). We denote by M∞ (B(H )) the space of such matrices equipped with the norm transplanted from B(2 (H )) (so that M∞ (B(H ))  B(2 (H )) isometrically). Then we define M∞ (E) ⊂ M∞ (B(H )) as the subspace formed of those a ∈ M∞ (B(H )) such that aij ∈ E for all i,j .

2.5 Bi-infinite matrices with operator entries

51

The norm of a ∈ M∞ (E) is easy to compute in terms of the truncated matrices: we have (assuming the indices i,j run over {1,2, . . .}) aM∞ (E) = supn [aij ]1≤i,j ≤n Mn (E) . Moreover, if we are given a priori the entries aij ∈ E, then there is a ∈ M∞ (E) admitting these as its entries if and only if supn [aij ]1≤i,j ≤n Mn (E) < ∞. Therefore it is evident that for any operator spaces E,F and any u ∈ CB(E,F ) we have ucb = u∞ : M∞ (E) → M∞ (F ) where u∞ : M∞ (E) → M∞ (F ) is defined by u∞ ([aij ]) = [u(aij )]. While we used 2 for simplicity, we could just as well use an arbitrary Hilbert space H instead: We define MH (B(H )) = B(H ⊗2 H ), and MH (E) = {x ∈ MH (B(H )) | x(ξ,η) ∈ E ∀ξ,η ∈ H} where this time, for any ξ,η ∈ H, we have denoted by x(ξ,η) the element of B(H ) obtained under the natural action of the functional fξ,η ∈ B(H)∗ defined by fξ,η (y) = ξ,yη. Equivalently, for any η ∈ H, we let vη : H → H ⊗2 H be defined by vη (h) = η ⊗ h for any h ∈ H , and we set x(ξ,η) = vξ∗ xvη . The collection of all the finite-dimensional subspaces of H directed by inclusion gives us a simple formula for the norm of a ∈ MH (E). We have (the easy verification is left to the reader) aMH (E) = sup (v ∗ ⊗ I )a(v ⊗ I )Mn (E)

(2.18)

where the supremum runs over all n and all isometric embeddings v : n2 → H. Again if H is infinite dimensional, we have ucb = uH : MH (E) → MH (F )

(2.19)

where uH : MH (E) → MH (F ) is defined as the only mapping such that [uH (a)](ξ,η) = u(a(ξ,η)) for any ξ,η ∈ H. Moreover, if we are given a priori a sesquilinear map (ξ,η) → a(ξ,η) (linear in η, antilinear in ξ ) from H × H to E, then this will come from an element of MH (E) if and only if the matrices [aijv ] defined by [aijv ] = a(vei ,vej ) satisfy sup [aijv ]Mn (E) < ∞ with the sup as before.

(2.20)

52

Completely bounded and completely positive maps

Using the definition of the dual operator space E ∗ given in §2.4, we find that we have an isometric identity CB(E,B(H)) = MH (E ∗ ).

(2.21)

Indeed, if dim(H) < ∞ this is the very definition of MH (E ∗ ). Now, if H is arbitrary, for any u ∈ CB(E,B(H)) we have ucb = sup{x → v ∗ u(x)vcb } where the sup is as before, and moreover if u : E → B(H) is any linear map then u ∈ CB(E,B(H)) iff sup{x → v ∗ u(x)vcb } < ∞ where the sup is as before. Comparing this with (2.18) and (2.20), we obtain (2.21). Note that the algebraic tensor product B(H) ⊗ B(H ) is weak*-dense in B(H ⊗2 H ). When the subspaces F ⊂ B(H) and E ⊂ B(H ) are weak* ¯ ⊂ B(H ⊗2 H ) the weak* closure of closed in B(H ), we denote by F ⊗E ¯ ). We observe E ⊗ F . With this notation we have B(H ⊗2 H ) = B(H)⊗B(H for later use: Proposition 2.17 If E ⊂ B(H ) is weak* closed in B(H ), then for any H we have ¯ MH (E) = B(H)⊗E.

(2.22)

Proof It is easy to check that if E is weak* closed in B(H ) then MH (E) is ¯ ⊂ MH (E). weak* closed in B(H ⊗2 H ), whence the inclusion B(H)⊗E To show the reverse assume H = 2 for notational simplicity. Then for any a ∈ M∞ (E) it is easy to see that the truncated matrices [aij ]1≤i,j ≤n (which are obviously in B(H) ⊗ E) tend weak* to a in M∞ (B(H )) = B(H ⊗2 H ) when n → ∞. This proves the reverse inclusion. The case of a general H is similar. We skip the details. Since we use the spaces Mn (E) throughout these notes it is worthwhile to observe that the representations of Mn are very special: Proposition 2.18 Let π : Mn → A ⊂ B(H ) be an isometric unital ∗homomorphism embedding Mn in a C ∗ -algebra A. Let A1 = A ∩ π(Mn ) be the relative commutant of π(Mn ) in A. Then A is isomorphic to Mn (A1 ), and with this isomorphism π can be identified with x → x ⊗ 1. Remark 2.19 The case A = MN shows that the existence of a unital embedding Mn ⊂ MN requires that n divides N , while for a nonunital one n ≤ N obviously suffices (we may just add zero entries). We leave the proof of Proposition 2.18 to the reader, but the proof is an easy modification of the following one, which is a von Neumann variant of it, to be used toward the end of these notes.

2.6 Free products of C ∗ -algebras

53

Proposition 2.20 Let H be any Hilbert space. Let π : B(H) → M ⊂ B(H ) be a normal (isometric and unital) ∗-homomorphism embedding B(H) in a von Neumann algebra M. Let M1 = M ∩ π(B(H)) . Then M is isomorphic ¯ 1 , and modulo this isomorphism π as a von Neumann algebra to B(H)⊗M can be identified with B(H)  b → b ⊗ 1. Proof We may assume H = 2 (I ) for some set I . For notational simplicity, we assume I = {1,2, . . .}. Let Eij = π(eij ) ∈ M. We will use the standard identities Eik Ek  j = Eij if k = k  and Eik Ek  j = 0 otherwise.

(2.23)

Note that M1 = M ∩ {Eij | i,j ≥ 1} . For any x ∈ B(H) let x(n) = Pn xPn where Pn is the canonical projection onto the span of the first n basis vectors  in H. Let Qn = π(Pn ) = n1 Eii . Then x(n) → x, in particular Pn → I and hence π(x(n)) → π(x) and Qn → I ; these limits are meant when n → ∞ and with respect to the weak* topology, as all the limits in the rest of the  proof. For any a ∈ M we have Qn aQn → a. Let aij = limn nk=1 Eki aEjk .   We have 1≤i,j ≤n Eij aij = 1≤i,j ≤n Eii aEjj = Qn aQn . Using (2.23) one ¯ M1 → M by checks easily that aij ∈ M1 . We define a mapping σ : B(H) ⊗ setting   n n σ lim eij ⊗ yij = lim Eij yij . i,j =1

i,j =1

It is easy to check with (2.23) that σ is a ∗-homomorphism, and since σ ([aij ]) = a for any a ∈ M, σ is onto  M. Moreover, we have a =  lim Qn aQn  = lim  1≤i,j ≤n Eij aij  = σ ([aij ]). Therefore, σ is an ¯ M1 to M. Since the latter are both isometric ∗-isomorphism from B(H) ⊗ von Neumann algebras, we know (see Remark A.38) that σ and its inverse are automatically normal (actually it is easy to see this directly in the situ

present  n e ation). Lastly, we have σ (b ⊗ 1) = lim ni,j =1 Eij bij = lim π ij i,j =1 bij , and hence (since π is normal) σ (b ⊗ 1) = π(b) for any b ∈ B(H). For more information on all questions involving operator spaces or algebras and dual topologies, we refer the reader to [27].

2.6 Free products of C ∗ -algebras We start by recalling the definition of the free product of algebras: let (Ai )i∈I be a family of algebras (resp. unital algebras). We will denote by A˙ (resp. A) their free product in the category of algebras (unital algebras). This object is characterized as the unique algebra (resp. unital algebra) A containing each Ai as a (resp. unital) subalgebra and such that if we are given another object B

54

Completely bounded and completely positive maps

and morphisms ϕi : Ai → B (i ∈ I ), there is a unique morphism ϕ : A → B such that ϕ|Ai = ϕi for all i. Moreover A is generated by the union of the Ai’s viewed as subalgebras (resp. unital subalgebras) of A. Similarly, if all the Ai’s are ∗-algebras, we can equip A with a ∗-algebra structure (meaning with an involution) so that for any ∗-algebra B and ∗-morphisms ϕi : Ai → B, there is a unique ∗-morphism such that ϕ|Ai = ϕi for all i, and the Ai’s can be viewed as ∗-subalgebras of A. One can think of a typical element a of A˙ (resp. A) as a sum of products of  ak with each ak of elements of the union of the Ai’s. More precisely, a = the form ak = ak (i1 ) . . . ak (i(k) ) with ak (ij ) ∈ Aij (1 ≤ j ≤ (k)); when two consecutive terms are in the same algebra, say when ij = ij +1 we may replace ak (ij )ak (ij +1 ) by the single term (ak (ij )ak (ij +1 )) ∈ Aij , thus reducing the product. When all such reductions have been done, we obtain a product of the same form but for which i1 = i2 = · · · = i(k) , the product

is then called  (i ) reduced. For any scalar λ ∈ C we have λa = λa



k 1 . . . ak (i(k) ). The  ˙ product in A (resp. A) is defined by k ak m bm = k,m ak bm and by concatenation of the product terms ak bm i.e. [ak (i1 ) . . . ak (i(k) )][bm (j1 ) . . . bm (j(m) )] = ak (i1 ) . . . ak (i(k) )bm (j1 ) . . . bm (j(m) ). If we now assume that (Ai )i∈I is a family of C ∗ -algebras (resp. unital ones), then we can equip A˙ (resp. A) with a (resp. unital) C ∗ -algebra structure in the following way. ˙ Let C be the collection of all ∗-homomorphisms Let F be either A or A. π : F → B(Hπ ) (automatically such that π|Ai  ≤ 1 for all i in I ). Let   j : F → π ∈C B(Hπ ) be the embedding defined by j (x) = π ∈C π(x) for all x in F. Clearly j is a ∗-homomorphism and (by standard algebraic facts) it is injective. This allows us to equip F with the noncomplete C ∗ -algebra structure associated to j and, after completion with respect to the norm ∀x ∈ F

x = supπ ∈C π(x)

(2.24)

we obtain a C ∗ -algebra (resp. a unital one), admitting F as a dense subalgebra. We will denote by ∗˙ i∈I Ai (resp. ∗i∈I Ai ) the resulting (resp. unital) C ∗ -algebra, which we call the free product of the family of (resp. unital) C ∗ -algebras (Ai )i∈I . See [15] and [254] for basic facts on free products. Let A be any of these two free products. Let σi : Ai → A be the natural embedding. Then for any family of morphisms πi : Ai → B(H ) there is a unique morphism π : A → B(H ) such that π σi = πi for all i.

2.6 Free products of C ∗ -algebras

55

Remark 2.21 The constructions of ∗i∈I Ai or ∗˙ i∈I Ai that we just sketched deliberately avoids a few points. For instance it is true but not obvious that the canonical morphism from A to ∗i∈I Ai is injective. This point is addressed in [23] and [171, p. 37]. See also [254, p. 4]. Using this it is easy to check that the norm we just defined in (2.24) on either A or A˙ is the maximal C ∗ -norm. Let A1,A2 be unital C ∗ -algebras with subspaces Ej ⊂ Aj (j = 1,2). By suitably restricting the product map, we have a natural embedding E1 ⊗ E2 ⊂ A1 ∗ A2 with range equal to the linear span of the set E1 E2 ⊂ A1 ∗ A2 . The induced operator space structure on E1 ⊗ E2 is that of the so-called Haagerup tensor product E1 ⊗h E2 , which, despite its importance, we chose not to present in detail in the present volume. The reader will find a full treatment in our previous book [208, ch. 5] (or in [80, 196]). In the present one, the main result we need is stated as the next lemma. Let A1,A2 be unital C ∗ -algebras with subspaces Ej ⊂ Aj (j = 1,2). Let E ⊂ A1 ∗A2 be the linear span of the set E1 E2 ⊂ A1 ∗A2 . We wish to describe the operator space structure on E induced by this embedding, i.e. the norm in Mn (E) for all n. Actually we can do this directly for n = ∞: Lemma 2.22 Assume dim(H ) = ∞. Let t ∈ B(H ) ⊗ E. Then tmin ≤ 1 if and only if there are tj ∈ B(H ) ⊗ Ej with tj min ≤ 1 such that t = t1  t 2 ,

(2.25)

where the sign  means the bilinear mapping defined from (B(H ) ⊗ E1 ) × (B(H ) ⊗ E2 ) → B(H ) ⊗ E by (b1 ⊗ e1 )  (b2 ⊗ e2 ) = (b1 b2 ⊗ e1 e2 ) (bj ∈ B(H ),ej ∈ Ej ). Equivalently,  is the restriction of the usual product on B(H ) ⊗ (A1 ∗ A2 ). About the proof Since t1,t2 are tensors of finite rank, the statement reduces immediately to the case when E1,E2 are finite dimensional. Using mainly results from [50] it was proved in [204] that E can be identified completely isometrically with the Haagerup tensor product E1 ⊗h E2 . Then the result follows from [208, cor. 5.9, p. 95]. Actually Lemma 2.22 is valid for any number n of factors Ej ⊂ Aj (1 ≤ j ≤ n). More generally, one can describe very efficiently the (operator space) structure of a free product ∗i∈I Ai of C ∗ -algebras, as follows. Theorem 2.23 (Blecher–Paulsen Factorization [28]) Let (Ai )i∈I be a family of unital C ∗ -algebras, let A = ∗i∈I Ai be the unital free product, and let

56

Completely bounded and completely positive maps

A ⊂ A be the algebraic free product. Let n ≥ 1 and consider x ∈ Mn (A). Then x < 1 if and only if there are m, i1 = i2 = · · · = im in I and rectangular matrices xk ∈ Mpk ×qk (Aik ) with xk  < 1 such that p1 = qm = n and x = x1 . . . xm .

(2.26)

About the proof This beautiful result is a consequence of the Blecher–Ruan– Sinclair (BRS in short) characterization of operator algebras from [29]. One equips Mn (A) with the norm  xk  (2.27) xn = inf where the infimum runs over all factorizations x = x1 . . . xm of the kind just described. By the BRS characterization there is for some H a unital homomorphism π : A → B(H ) such that xn = (IdMn ⊗ π )(x)Mn (B(H )) .

(2.28)

Let πi = π|Ai . Clearly xn ≤ xMn (A) whenever x ∈ Mn (Ai ). Therefore by (2.28) πi cb ≤ 1. Since πi (1) = 1, πi is c.p. by Theorem 1.35 and a fortiori self-adjoint. Consequently πi must be a ∗-homomorphism for any i. This shows that π = ∗πi on A. By definition of A = ∗i∈I Ai , π extends to a (contractive) ∗-homomorphism on A. But now by (2.27) we clearly have xMn (A) ≤ xn for any n, in particular xA ≤ π(x) for any x ∈ A. By the maximality of xA (as reflected in (2.24)) equality must hold, so that π is the restriction of an isometric, and hence completely isometric, ∗homomorphism from A to B(H ). Thus we conclude that xMn (A) = xn for any x ∈ A. The following very useful result is due to Boca [30]. Theorem 2.24 (Boca’s theorem) Let (Ai )i∈I be a family of unital C ∗ -algebras and let A = ∗i∈I Ai be the unital free product. For each i ∈ I let fi be a state on Ai . Let ui : Ai → B be unital c.p. maps with values in another unital C ∗ -algebra B. There is a unital c.p. map u : A → B such that for any m, any i1 = i2 = · · · = im in I and any ak ∈ Aik such that fik (ak ) = 0 for all k we have u(a1 a2 . . . am ) = ui1 (a1 )ui2 (a2 ) . . . uim (am ).

(2.29)

See [68] for a recent proof of Boca’s theorem using the Stinespring dilation Theorem 1.22.

2.7 Universal C ∗ -algebra of an operator space

57

2.7 Universal C ∗ -algebra of an operator space Let E be an operator space. The universal C ∗ -algebra generated by E will be denoted by C ∗ E. Its definition will be given after the next statement. Theorem 2.25 Let E be an operator space. There is a (resp. unital) C ∗ -algebra A and a completely isometric embedding j : E → A with the following properties: (i) For any (resp. unital) C ∗ -algebra B and any completely contractive map u : E → B there is a (resp. unital) representation π : A → B extending u, i.e. such that πj = u. (ii) The (resp. unital) algebra generated by j (E) is dense in A. Moreover, (ii) ensures that the (resp. unital) representation π in (i) is unique. Proof The proof is immediate. Let I be the “collection” of all u as in (i) with range generating a (resp. unital) C ∗ -algebra denoted by Bu , so that u : E → Bu satisfies ucb ≤ 1. Let    Bu . BE = u∈I



We define j : E → BE by j (x) = (u(x))u∈I . By (1.13) it is easy to check that j is completely isometric. Then, if we define A to be the (resp. unital) C ∗ -algebra generated by j (E) in BE , the announced universal property of A is immediate. Notation: We will denote by C ∗ E (resp. Cu∗ E) the (resp. unital) C ∗ -algebra A appearing in the preceding statement. Note that C ∗ E is essentially unique. Indeed, if j1 : E → A1 is another completely isometric embedding into a C ∗ -algebra A1 with the property in Theorem 2.25, then the universal property of A (resp. A1 ) implies the existence of a representation π : A → A1 (resp. π1 : A1 → A) such that πj = j1 (resp. π1 j1 = j ). Since C ∗ -representations are automatically contractive we have π  ≤ 1, π1  ≤ 1 and π1 = π −1 on j1 (E) hence on the ∗-algebra generated by j1 (E), which is dense in A1 by assumption. This implies that π is an isometric isomorphism from A onto A1 . Similarly, Cu∗ E is characterized as the unique unital C ∗ -algebra C containing E completely isometrically in such a way that, for any unital C ∗ -algebra B (actually we may restrict to B = B(H ) with H arbitrary),

58

Completely bounded and completely positive maps

any c.c. map u : E → B uniquely extends to a unital representation (i.e. ∗homomorphism) from C to B. It is easy to see that C ∗ E can be identified to the C ∗ -algebra generated by E in Cu∗ E. Indeed, the latter has the property in Theorem 2.25. Remark 2.26 If two operator spaces E,F are completely isometrically isomorphic, then E and F can be realized as “concrete” operator subspaces E ⊂ A and F ⊂ B of two isomorphic C ∗ -algebras A and B, for which there is an isometric ∗-homomorphism π : A → B such that π(E) = F . Indeed, let A = C ∗ E and B = C ∗ F , let u : E → F be a completely isometric isomorphism, let π : C ∗ E → C ∗ F  be the (unique) extension of u (as in Theorem 2.25) and let σ : C ∗ F  → C ∗ E be the (unique) extension of u−1 . Then clearly we must have (by unicity again) σ π = IdC ∗ E and π σ = IdC ∗ F  so that σ = π −1 . Remark 2.27 (Universal C ∗ -algebra of a contraction) Consider for example the simplest choice of E, namely E = C. Then C ∗ C can be described more explicitly as the completion of the space of polynomials P in the formal variables X,X∗ equipped with the C ∗ -norm P  = sup{P (x,x ∗ )} where the sup runs over all H and all x ∈ B(H ) with x ≤ 1. Indeed, the linear mapping ux : C → B(H ) taking 1 to x satisfies trivially ux cb = x, so that x ≤ 1 if and only if ux extends to a ∗-homomorphism πx : C ∗ C → B(H ) with πx  = 1. The analogue for the unital case requires that we consider polynomials “with a constant term” of the form λ1 + P (X,X∗ ) with λ ∈ C. We then equip the resulting unital algebra of polynomials with the norm sup{λIdH + P (x,x ∗ )} over all H ’s and all contractions x ∈ B(H ) as before. The completion can now be identified with Cu∗ C.

2.8 Completely positive perturbations of completely bounded maps Warning: this section is devoted to a rather technical point. To avoid interrupting the flow of our presentation, we advise the reader to skip it until it becomes needed (in §9.4). Notation (Order defined by the cone of c.p. maps). Let u,v : E → B be linear maps from an operator system E to a C ∗ -algebra. If v − u ∈ CP(E,B), we will write u  v.

(2.30)

This is the partial order on CB(E,B) for which CP(E,B) is the positive cone.

2.8 Completely positive perturbations of completely bounded maps

59

As we saw previously (see Theorem 1.35), in analogy with a well-known fact in measure theory, if a complete contraction on an operator system is unital, then it is automatically c.p. In this section, we will prove quantitative versions of this: if a unital map has a cb-norm close to 1, then the map is close to a c.p. map. In the first result the range is an arbitrary C ∗ -algebra, while in the second one it is B(H ). The first case is restricted to a finite-dimensional domain but the second one is not. Theorem 2.28 ([77]) Let E be an n-dimensional operator system, B a unital C ∗ -algebra. (i) For any ϕ ∈ CP(E,B) there is a linear form f ∈ E ∗ such that f ≥ 0, f  ≤ 2nϕ and the mapping  : E → B defined by (x) = f (x)1 − ϕ(x) is c.p. (equivalently we have 0  ϕ  f (·)1). (ii) For any self-adjoint unital linear map ϕ : E → B such that ϕcb ≤ 1 + δ, where 0 < δ < 1/2n, there is a u.c.p. map ψ : E → B such that ϕ − ψcb ≤ 8nδ. Proof We reproduce the proofs from [77] with a cosmetic change. (i) We first consider the case of a self-adjoint (not necessarily c.p.) linear map ϕ : E → B of rank 1 with ϕ ≤ 1 of the form ϕ(x) = f (x)b for some f ∈ BE ∗ and b ∈ BB both self-adjoint. We will denote this map by f ⊗ b. We claim that there is F ∈ BE ∗ with F ≥ 0 and F  ≤ 1 such that, with respect to the natural ordering (2.30) of CB(E,B), we have −F ⊗ 1  f ⊗ b  F ⊗ 1. Assuming E ⊂ B(H ), f can be extended to a self-adjoint form f  ∈ B(H )∗ with f   ≤ f  (indeed, we may take for f  the real part of the Hahn–Banach extension of f ). By the classical Hahn decomposition (see Remark 1.54), we have f  = f+ −f− with f± ≥ 0 such that f+ +f−  ≤ f  . By restricting this to E we find a decomposition f = f+ − f− with f± ∈ E ∗ positive and such that f+  + f−  ≤ f . In parallel, any b ∈ BB with b = b∗ can be decomposed as b = b+ − b− with b± ≥ 0 in B such that b+ + b−  ≤ 1. But it is easy to check that for the c.p. order for any 0 ≤ f1 ≤ f2 (f1,f2 ∈ E ∗ ) and 0 ≤ b1 ≤ b2 (b1,b2 ∈ B) we have 0  f1 ⊗ b1  f2 ⊗ b2 . Therefore we have −T  (f+ − f− ) ⊗ (b+ − b− )  T with T = (f+ + f− ) ⊗ (b+ + b− ) and a fortiori also with T = (f+ + f− ) ⊗ 1 since 0 ≤ b+ + b− ≤ 1. Thus our claim follows with F = f+ + f− . Now, since dim(E) = n, by Auerbach’s lemma 1.55, there is a biorthogonal n system (ξj ,xj ) in BE ∗ × BE such that IdE = 1 ξj ⊗ xj . Note that

60

Completely bounded and completely positive maps

∗  for any x ∈ E we have x ∗ = ξj (x)xj and also (since x ∗ ∈ E) x ∗ =   n ∗ ∗ ξj ⊗ xj = ξj ∗ ⊗ xj∗ , 1 ξj (x )xj . Thus (setting ξj ∗ = ξj (x )) we have and hence   (ξj + ξj ∗ )/2 ⊗ (xj + xj∗ )/2 ξj ⊗ xj = 2n  fj ⊗ yj , − (ξj − ξj ∗ )/(2i) ⊗ (xj − xj∗ )/(2i) = 1

with fj ,yj self-adjoint in BE ∗ × BE . This gives us ∀x ∈ E

x=

2n 1

fj (x)yj .

(2.31)

Now let ϕ ∈ CP(E,B). Using (2.31), we may write ϕ as ϕ = fj ⊗ bj with bj = ϕ(yj ). We may assume ϕ = 1 (by homogeneity). Note bj = bj∗ and bj  ≤ ϕyj  ≤ 1. Therefore, using the first part of the proof we obtain Fj ≥ 0 with Fj  ≤ 1, and hence Fj  ≤ 2n, such that −(Fj ) ⊗ 1  ϕ  (Fj ) ⊗ 1. In particular, 0  ϕ  f (·)1 with f = Fj ≥ 0 and f  ≤ 2n. In other words x → (x) = f (x)1 − ϕ(x) is in CP(E,B). (ii) Assume B ⊂ B(H ). By Corollary 1.53 we can write ϕ = ϕ1 − ϕ2 with ϕ1,ϕ2 ∈ CP(E,B(H )) such that ϕ1 + ϕ2  ≤ ϕcb . Let aj = ϕj (1). Then a1,a2 ≥ 0, a1 − a2 = 1 and a1  ≤ a1 + a2  ≤ 1 + δ. Thus, for any unit vector ξ in H we have ξ,a2 ξ  = ξ,(a1 − 1)ξ  = ξ,a1 ξ  − 1 ≤ δ, and hence by (1.27) ϕ2  = a2  ≤ δ. By part (i) there is f ∈ E ∗ with f ≥ 0 and f  ≤ 2nδ such that the mapping ψ2 : E → B(H ) defined by ψ2 (x) = f (x)1 − ϕ2 (x) is c.p. We then set ψ0 = ϕ1 + ψ2 . Note that, since ψ0 (x) − ϕ(x) = f (x)1 ∈ C1, we have ψ0 (E) ⊂ B, ψ0 ∈ CP(E,B) and ψ0 − ϕcb ≤ 2nδ.

(2.32)

To obtain a unital ψ we need to work a bit more. Let b = ψ0 (1). Then b ≥ 0 2nδ < 1, and hence b is invertible. A and b − 1 = ψ0 (1) − ϕ(1) ≤ √ b − 1| ≤ |b − 1| ∀b ∈ [0,2]), we have fortiori, by functional calculus (since | √  b − 1 ≤ b − 1 ≤ 2nδ. For any y ∈ B we have

2.9 Notes and remarks 1

1

1

1

61

1

b 2 yb 2 − y = (b 2 − 1)yb 2 + y(b 2 − 1) 1

1

1

≤ (b 2 − 1 b 2  + b 2 − 1)y ≤ 6nδy. A similar estimate holds for any k and any y = [yij ] ∈ Mk (B). Thus if we now set 1

1

ψ(x) = b− 2 ψ0 (x)b− 2 , 1

1

1

1

so that ψ0 (·) = b 2 ψ(·)b 2 , we find ψ0 − ψcb = b 2 ψb 2 − ψcb ≤ 6nδψcb = 6nδ. Thus we obtain by (2.32) ϕ − ψcb ≤ ϕ − ψ0 cb + ψ0 − ψcb ≤ 8nδ, and ψ ∈ CP(E,B) is unital with ψcb = ψ(1) = 1 (by (1.27)). The case when B = B(H ) is much simpler: Lemma 2.29 (Kirchberg) Let A be a unital C ∗ -algebra and let ϕ : A → B(H ) be a completely bounded self-adjoint unital map. Then there is a u.c.p. map ψ : A → B(H ) such that ϕ − ψcb ≤ ϕcb − 1. Proof Let ε = ϕcb − 1. By the factorization Theorem 1.50 there are  π : A → B(H ) and V ,W ∈ B(H, H ) such that V W  = 1 + ε H, ∗ and ϕ(a) = V π(a)W for all a in A. By homogeneity we may assume 1

V  = W  = (1 + ε) 2 . Since ϕ is unital we have V ∗ W = IdH , and since it is self-adjoint V ∗ π(a)W = W ∗ π(a)V . Note W ∗ V = IdH . Let ψ(a) = (V ∗ π(a)V + W ∗ π(a)W )/2. We have then 1 ∗ (V π(a)V + W ∗ π(a)W − V ∗ π(a)W − W ∗ π(a)V ) 2 1 = (V − W )∗ π(a)(V − W ). 2

ψ(a) − ϕ(a) =

Note that ψ − ϕ (as well as ψ) is c.p. Moreover (V − W )∗ (V − W ) = V ∗ V + W ∗ W − 2IdH ≤ 2εIdH . Therefore, ψ − ϕ = ψ − ϕcb ≤ ε.

2.9 Notes and remarks For Schur multipliers as in Remark 1.58 the equality u = ucb is due to Haagerup, and for “module maps” as in Theorem 2.6 it was proved by Roger Smith in [232]. The theory of free products of C ∗ -algebras can be traced back to Avitsur’s seminal paper [15], but it really took off with Voiculescu’s “free probability” (see [254]). Note, however, that in the latter theory, it is the

62

Completely bounded and completely positive maps

reduced free product of two C ∗ -algebras equipped with states that plays the central role, while for the present volume we are mostly concerned with the maximal free product of unital C ∗ -algebras (i.e. amalgamated over C1). For this kind of free product it is rather striking that unital c.p. maps are “admissible” morphisms as described by Boca’s theorem from [30]. As already mentioned the duality theory for operator spaces goes back to Effros– Ruan and Blecher–Paulsen. The results in §2.4 and §2.5 are due to them, except for Propositions 2.20 and 2.18, both well-known facts. The remarkable factorization in (2.26) (due to Blecher and Paulsen [28]) is a consequence of the no-less remarkable Blecher–Ruan–Sinclair characterization of operator algebras; the central underlying concept is the Haagerup tensor product of operator spaces. These important topics are treated in detail in our previous book [208] which explains our reluctance to expand on that same theme in the present one. We describe these results in Theorem 2.23 and, making here an exception, refer the reader to [208] or [196] for detailed proofs. See Loring’s book [171] for a discussion of amalgamated full free products as solutions of a universal problem. Loring also considers in [171] free products of unital algebras amalgamated over C1 but relative to nonunital embeddings into the free product, which can be a quite different object. Concerning §2.8 and the completely positive ordering, we should mention the following Radon–Nikodym theorem due to Arveson [12]. Let A be a C ∗ -algebra, let u,v ∈ CP(A,B(H )). Let v(·) = V ∗ π(·)V be the minimal Stine) with V π(A)(H ) = spring factorization of v (which means that V ∈ B(H, H   H ). Then 0  u  v if and only if there is T ∈ π(A) ⊂ B(H ) with 0 ≤ T ≤ 1 such that u(·) = V ∗ T π(·)V . Moreover, such a T is unique. See [197] for a sort of completely bounded analogue.

3 C ∗ -algebras of discrete groups

Group representations are one of the main sources of examples of C ∗ -algebras. The universal representation of a group G gives rise to the full or maximal C ∗ -algebra C ∗ (G), while the left regular representation leads to the reduced C ∗ -algebra Cλ∗ (G). In this chapter we review some of their main properties when G is a discrete group.

3.1 Full (=Maximal) group C ∗ -algebras We first recall some classical notation from noncommutative Abstract Harmonic Analysis on an arbitrary discrete group G. We denote by e (and sometimes by eG ) the unit element. Let π : G → B(H) be a unitary representation of G. We denote by Cπ∗ (G) the C ∗ -algebra generated by the range of π . Equivalently, Cπ∗ (G) is the closed linear span of π(G). In particular, this applies to the so-called universal representation of G, a notion that we now recall. Let (πj )j ∈I be a family of unitary representations of G, say πj : G → B(Hj ) in which every equivalence class of a cyclic unitary representation of G has an equivalent copy. Now one can define the “universal” representation UG : G → B(H) of G by setting UG = ⊕j ∈I πj

on

H = ⊕j ∈I Hj .

CU∗ G (G) is simply denoted by C ∗ (G) and is “maximal”) C ∗ -algebra of the group G, to distinguish

C ∗ -algebra

Then the associated called the “full” (or the it from the “reduced” one that is described in the sequel. Note that C ∗ (G) = span{UG (t) | t ∈ G}.

63

64

C ∗ -algebras of discrete groups

Let π be any unitary representation of G. By a classical argument, π is unitarily equivalent to a direct sum of cyclic representations, hence for any finitely supported function x : G → C we have         (3.1) x(t)UG (t) . x(t)π(t) ≤   In particular, if π is the trivial representation      x(t)UG (t) . x(t) ≤  Equivalently (3.1) means     x(t)UG (t) 

B(H)

    x(t)π(t) = sup 

(3.2)  B(Hπ )

where the supremum runs over all possible unitary representations π : G → B(Hπ ) on an arbitrary Hilbert space Hπ . More generally, for any Hilbert space K and any finitely supported function x : G → B(K) we have          x(t) ⊗ π(t) x(t) ⊗ UG (t) = sup   B(K⊗2 H)

B(K⊗2 Hπ )

where the sup is the same as before. There is an equivalent description in terms of the group algebra C[G], the elements of which are simply the formal linear combinations of the elements of G, equipped with the obvious natural ∗-algebra structure. One equips C[G] with the norm (actually a C ∗ -norm)      x(t)t → sup  x(t)π(t) t∈G

where the supremum runs over all possible unitary representations π of G. One can then define C ∗ (G) as the completion of C[G] with respect to the latter norm. These formulae show that the norm of C ∗ (G) is the largest possible C ∗ norm on C[G]. Whence the term “maximal” C ∗ -algebra of G. Remark 3.1 (A recapitulation) By (3.1) there is a 1−1 correspondence between the unitary representations π : G → B(H ) and the ∗-homomorphisms ψ : C ∗ (G) → B(H ). More precisely, for any π there is a unique ψ : C ∗ (G) → B(H ) such that ∀g ∈ G ψ(UG (g)) = π(g), or if we view G as a subset of C[G] ⊂ C ∗ (G) (which means we identify g and UG (g)), we have ∀g ∈ G π(g) = ψ(g). Remark 3.2 (c.b. and c.p. maps on C ∗ (G)) A linear map u : C ∗ (G) → B(K) is c.b. if and only if there exists a unitary group representation π : G → B(Hπ ) and operators V ,W : K → Hπ such that

3.1 Full (=Maximal) group C ∗ -algebras

65

∀t ∈ G u(UG (t)) = W ∗ π(t)V . Moreover, we have ucb = inf{W V } and the infimum is attained. Indeed, in view of the preceding remark this follows immediately from Theorem 1.50. The c.p. case is characterized similarly but with V = W . When K = C and hence B(K) = C, this gives us a description of the dual of C ∗ (G), as well as a characterization of states on C ∗ (G). The next result (in which we illustrate the preceding remark in the case of multipliers) is classical, and fairly easy to check. Proposition 3.3 (Multipliers on C ∗ (G)) Let ϕ : G → C. Consider the associated linear operator M ϕ (a so-called multiplier, see §3.4) defined on   x(t)UG (t) = x(t)ϕ(t)UG (t). Then Mϕ span{UG (t) | t ∈ G} by Mϕ extends to a bounded operator on C ∗ (G) if and only if there are a unitary representation π : G → B(Hπ ) and ξ,η in Hπ such that ∀t ∈G

ϕ(t) = η,π(t)ξ .

(3.3)

Moreover we have for the resulting bounded operator (still denoted by Mϕ ) Mϕ  = Mϕ cb = inf{ξ η}

(3.4)

where the infimum (which is attained) runs over all possible π , ξ , η for which this holds. Lastly, if Mϕ is positive (3.3) holds with ξ = η, and then Mϕ is completely positive on C ∗ (G).  Proof If Mϕ : C ∗ (G) → C ∗ (G) ≤ 1, let f (x) = t∈G ϕ(t)x(t). Then by (3.2) f ∈ C ∗ (G)∗ with f  ≤ 1. Note f (UG (t)) = ϕ(t). By Remark 1.54 there are π , ξ , and η with ξ η ≤ f  ≤ 1 such that (3.3) holds. If Mϕ (and hence f ) is positive we find this with ξ = η. For the converse, since (like any unitary group representation) the mapping UG (t) → UG (t) ⊗ π(t) extends to a continuous ∗-homomorphism σ : C ∗ (G) → B(H ⊗2 Hπ ), we have Mϕ (·) = V2∗ σ (·)V1 , with V1 h = h ⊗ ξ and V2 h = h ⊗ η (h ∈ H) from which we deduce by (1.30) Mϕ cb ≤ ξ η. If ξ = η then V1 = V2 and hence Mϕ is c.p. on C ∗ (G). Remark 3.4 By Remark 3.2 and (3.4) the space of bounded multipliers on C ∗ (G) can be identified isometrically with C ∗ (G)∗ . If fϕ is the linear form on C ∗ (G) taking UG (t) to ϕ(t) (t ∈ G) we have Mϕ  = fϕ C ∗ (G)∗ . Proposition 3.5 Let G be a discrete group and let  ⊂ G be a subgroup. Then the correspondence U (t) → UG (t), (t ∈ ) extends to an isometric

66

C ∗ -algebras of discrete groups

(C ∗ -algebraic) embedding J of C ∗ () into C ∗ (G). Moreover, there is a completely contractive and completely positive projection P from C ∗ (G) onto the range of this embedding, defined by P (UG (t)) = UG (t) for any t ∈  and P (UG (t)) = 0 otherwise. Proof By the universal property of C ∗ () the unitary representation  ⊃ γ → UG (γ ) extends to a ∗-homomorphism J : C ∗ () → C ∗ (G) with J  = 1. Let ϕ = 1 . The projection P described in Proposition 3.5 coincides with the multiplier Mϕ acting on C ∗ (G). Thus, by Proposition 3.3 it suffices to show that there is a unitary representation π : G → B(Hπ ) of G and a unit  vector ξ ∈ Hπ such that ϕ(t) = ξ,π(t)ξ . Let G = s∈G/  s be the disjoint partition of G into left cosets. For any t ∈ G the mapping s → ts defines a permutation σ (t) of the set G/ , and t → σ (t) is a homomorphism. Let Hπ = 2 (G/ ) and let π : G → B(Hπ ) be the unitary representation defined on the unit vector basis by π(t)(δs ) = δσ (t)(s) for any s ∈ G/ . Let [[]] ∈ G/  denote the coset  (i.e. s for s = 1G ) and let ξ = δ[[]] . Then it is immediate that ϕ(t) = ξ,π(t)ξ  for any t ∈ G. Remark 3.6 Let G be a discrete group and let E ⊂ C ∗ (G) be any separable subspace. We claim that there is a countable subgroup  ⊂ G such that with the notation of Proposition 3.5 we have E ⊂ J (C ∗ ()). Indeed, since C ∗ (G) ⊂ span[UG (t) | t ∈ G] for any fixed x ∈ C ∗ (G) there is clearly a countable subgroup x ⊂ G and an analogous Jx such that x ⊂ Jx (C ∗ (x )). Arguing like this for each x in a dense countable sequence in E and taking the group generated by all the resulting x’s gives us the claim. By Proposition 3.5 this shows that there is a separable C ∗ -subalgebra C ⊂ ∗ C (G) with E ⊂ C for which there is a c.p. projection P : C ∗ (G) → C. Remark 3.7 Let G be any discrete group, let A = C ∗ (G). Then A¯  A. Indeed, since for any unitary representation π on G, the complex conjugate π¯ (as in Remark 2.14) is also a unitary representation, the correspondence π → π¯ is a bijection on the set of unitary representations, from which the Clinear isomorphism  : C ∗ (G) → C ∗ (G) follows immediately. Denoting by UG the universal representation of G, this isomorphism takes UG (t) to UG (t). Note that A¯  A is in general not true (see [60]).

3.2 Full C ∗ -algebras for free groups In this section, we start by comparing the C ∗ -algebras of free groups of different cardinals. Our goal is to make clear that we can restrict to C = C ∗ (F∞ ) (or if we wish to C ∗ (F2 )) for the various properties of interest to us in the

3.2 Full C ∗ -algebras for free groups

67

sequel. Then we describe the operator space structure of the span of the free generators in C ∗ (F) when F is any free group. The following simple lemma will be often invoked when we wish to replace C ∗ (F) by C ∗ (F∞ ). Lemma 3.8 Let F be a free group with generators (gi )i∈I . Let E ⊂ C ∗ (F) be any separable subspace. Then the inclusion E ⊂ C ∗ (F) admits an extension TE : C ∗ (F) → C ∗ (F) that can be factorized as w

v

TE : C ∗ (F)−→C ∗ (F∞ )−→C ∗ (F) where v,w are contractive c.p. maps. For any C ∗ -algebra D and any x ∈ D ⊗ E we have xD⊗max C ∗ (F) = (IdD ⊗ w)(x)D⊗max C ∗ (F∞ ) .

(3.5)

In particular, E ⊂ C ∗ (F) is completely isometric to w(E) ⊂ C ∗ (F∞ ). Proof For any x ∈ C ∗ (F) there is clearly a countable subgroup x ⊂ F such that x ∈ span[UF (t) | t ∈ x ]. By the separability of E, we can find a countable subgroup  such that E ⊂ span[UF (t) | t ∈ ]. Since any element of t ∈  can be written using only finitely many “letters” in {gi | i ∈ I }, we may assume that  is the free subgroup generated by (gi )i∈I  for some countable subset I  ⊂ I . Then, identifying span[UF (t) | t ∈ ] with C ∗ (), Proposition 3.5 yields a mapping T = JP : C ∗ (F) → C ∗ (F) with the required factorization through C ∗ () = C ∗ (FI  ) that is the identity when restricted to E. If I  is infinite the proof is complete: since C ∗ (FI  ) = C ∗ (F∞ ) we may take TE = T . Otherwise, we note that FI  ⊂ F∞ as a subgroup and hence by PropoJ

sition 3.5 again we have a factorization of the same type C ∗ (FI  )−→ P

C ∗ (F∞ )−→C ∗ (FI  ) from which it is easy to conclude. Note xD⊗max C ∗ (F) = (IdD ⊗ TE )(x)D⊗max C ∗ (F) = (IdD ⊗ vw)(x)D⊗max C ∗ (F) . By Corollary 4.18 since v,w are c.p. contractions we have xD⊗max C ∗ (F) ≤ (IdD ⊗ w)(x)D⊗max C ∗ (F∞ )

and

(IdD ⊗ w)(x)D⊗max C ∗ (F∞ ) ≤ xD⊗max C ∗ (F), from which (3.5) follows. Let F be a free group with generators (gi )i∈I . We start with a basic property of the span of the free generators in C ∗ (F).

C ∗ -algebras of discrete groups

68

Lemma 3.9 Let F be a free group with generators (gi )i∈I . Let Ui = UF (gi ) ∈ C ∗ (F). Let E = span[(Ui )i∈I ,1] ⊂ C ∗ (F) and EI = span[(Ui )i∈I ] ⊂ C ∗ (F). Then for any linear map u : E → B(H ) and any v : EI → B(H ) we have ucb = u = max{supi∈I u(Ui ),u(1)}

and

vcb = v = max{supi∈I v(Ui )}.

(3.6)

Proof It clearly suffices to show that max{supi∈I u(Ui ),u(1)} ≤ 1 implies ucb ≤ 1. When u(1) = 1 and all u(Ui ) are unitaries this is easy: indeed there is a (unique) group representation σ : F → B(H ) such that σ (gi ) = u(Ui ) and the associated linear extension uσ : C ∗ (F) → B(H ) is a ∗-homomorphism automatically satisfying uσ cb = 1, and hence ucb = 1. This same argument works if we merely assume that u(1) is unitary. Indeed, we may replace u by x → u(1)−1 u, which takes us back to the previous easy case. Since the general case is easy to reduce to that of a finite set, we assume that I is finite. Then the Russo–Dye Theorem A.18 shows us that any u such that max{supi∈I u(Ui ),u(1)} ≤ 1 lies in the closed convex hull of u’s for which u(1) and all the u(Ui )s are unitaries, and hence ucb ≤ 1 in that case also. The first part of the next result is based on the classical observation that a unitary representation π : F → B(H ) is entirely determined by its values ui = π(gi ) on the generators, and if we let π run over all possible unitary representations, then we obtain all possible families (ui ) of unitary operators. The second part is also well known. Lemma 3.10 Let A ⊂ B(H ) be a C ∗ -algebra. Let F be a free group with generators (gi )i∈I . Let Ui = UF (gi ) ∈ C ∗ (F). Let (xi )i∈I be a family in A with only finitely many nonzero terms. Consider the linear map T : ∞ (I ) → A  defined by T ((αi )i∈I ) = i∈I αi xi . Then we have          ui ⊗ xi  (3.7) Ui ⊗ xi  ∗ = T cb = sup   i∈I

C (F)⊗min A

min

where the sup runs over all possible Hilbert spaces K and all families (ui ) of unitaries on K. Actually, the latter supremum remains the same if we restrict it to finite-dimensional Hilbert spaces K. Moreover, in the case when A = B(H ) with dim(H ) = ∞, we have    1/2  1/2         zi∗ zi  (3.8) Ui ⊗ xi  ∗ = inf  yi yi∗    i∈I

C (F)⊗min B(H )

where the infimum, which runs over all possible factorizations xi = yi zi with yi ,zi in B(H ), is actually attained. Moreover, all this remains true if we enlarge the family (Ui )i∈I by including the unit element of C ∗ (F).

3.2 Full C ∗ -algebras for free groups

69

Proof It is easy to check going back to the definitions that on one hand          , = sup  ui ⊗ xi  Ui ⊗ xi   min

min

where the sup runs over all possible families of unitaries (ui ), and on the other hand that      , T cb = sup  ti ⊗ xi  min

where the sup runs over all possible families of contractions (ti ). By the Russo– Dye Theorem A.18, any contraction is a norm limit of convex combinations of unitaries, so (3.7) follows by convexity. Actually, the preceding sup obviously remains unchanged if we let it run only over all possible families of contractions (ti ) on a finite-dimensional Hilbert space. Thus it remains unchanged when restricted to families of finite-dimensional unitaries (ui ). Now assume T cb = 1. By the factorization of c.b. maps we can write ) is a representation and where T (α) = V ∗ π(α)W where π : ∞ (I ) → B(H ) with V  W  = T cb . Since we assume dim(H ) = ∞ V ,W are in B(H, H and may assume I finite (because i → xi is finitely supported), by Remark  = H . Let (ei )i∈I be the canonical basis of ∞ (I ), 1.51 we may as well take H we set yi = V ∗ π(ei )

and

zi = π(ei )W .

 1/2  1/2 It is then easy to check  yi yi∗   zi∗ zi  ≤ V  W  = T cb . Thus we obtain one direction of (3.8). The converse follows from (2.2) (easy consequence of Cauchy–Schwarz) applied to ai = Ui ⊗ yi and bi = 1 ⊗ zi . Finally, the last assertion follows from the forthcoming Remark 3.12. Remark 3.11 (Russo–Dye) The Russo–Dye Theorem A.18 shows that the sup of any continuous convex function on the unit ball of a unital C ∗ -algebra coincides with its sup over all its unitary elements. Remark 3.12 Let {0} be a singleton disjoint from the set I and let I˙ = {0} ∪ I . Then for any finitely supported family {xj | j ∈ I˙} in B(H ) (H arbitrary) we have           (3.9) Ui ⊗ xi  = sup  u ⊗ x I ⊗ x0 + j ˙ j i∈I

min

j ∈I

min

where the supremum runs over all possible families (uj )j ∈I˙ of unitaries. Indeed, since          −1 ⊗ x u ⊗ x = + u u ⊗ x    , I j j 0 i i 0 ˙ j ∈I

min

i∈I

min

70

C ∗ -algebras of discrete groups

the right-hand side of (3.9) is the same as the supremum of      ui ⊗ xi  I ⊗ x0 + i∈I

(3.10)

min

over all possible families of unitaries (ui )i∈I . Therefore (recalling U (gi ) = Ui ) we find        Ui ⊗ xi  = sup I ⊗ x0 I ⊗ x0 + i∈I min     + ui ⊗ xi  ui unitary , i∈I

min

where the sup runs over all Hilbert spaces H and all families (ui ) of unitaries in B(H). Moreover, by the same argument we used for Lemma 3.10, we can restrict to finite-dimensional H’s:        Ui ⊗ xi  = supn≥1 I ⊗ x0 I ⊗ x0 + i∈I min     + ui ⊗ xi  ui n × n unitaries i∈I

min

(3.11) so that the supremum on the right-hand side is restricted to families of finitedimensional unitaries. Indeed, by Russo–Dye (Remark 3.11) the suprema of (3.10) taken over ui’s in the unit ball of B(H ) and over unitary ui’s are the same. Replacing ui by PE ui |E with E ⊂ H, dim(E) < ∞ shows that the supremum of (3.10) is the same if we restrict it to ui’s in the unit ball of B(E) with dim(E) < ∞. Then, invoking Russo–Dye (Remark 3.11) again, we obtain (3.11). Remark 3.13 Using (3.11) when I is a singleton and the fact that a single unitary generates a commutative unital C ∗ -algebra, it is easy to check that T  = T cb for any T : 2∞ → B(H ). Remark 3.14 (1 (I ) as operator space) In the particular case A = C, (3.7) becomes      Ui xi  ∗ = |xi |, (3.12)  i∈I

C (F)

i∈I

which shows that EI = span[Ui ,i ∈ I ]  1 (I ) isometrically. Note that (3.8) generalizes the classical fact that B1 = B2 B2 for the pointwise product. More generally, Lemma 3.9 shows that the dual operator space EI∗ can be identified with the von Neumann algebra ∞ (I ) equipped with its natural operator space structure as a C ∗ -algebra, i.e. the one such that we have Mn (∞ (I )) = ∞ (I ;Mn ) isometrically for all n. Lemma 3.10 describes the

3.3 Reduced group C ∗ -algebras: Fell’s absorption principle

71

dual operator space of the operator space (actually a C ∗ -subalgebra) c0 (I ) ⊂ ∞ (I ) that is the closed span of the canonical basis in ∞ (I ). We obtain c0 (I )∗ = EI completely isometrically, which is the operator space analogue of the isometric identity c0 (I )∗ = 1 (I ). Indeed, together with Lemma 3.9, (3.7) tells us that CB(c0 (I ),Mn ) = Mn (EI ) isometrically for all n.

3.3 Reduced group C ∗ -algebras: Fell’s absorption principle We denote by Cλ∗ (G) (resp. Cρ∗ (G)) the so-called reduced C ∗ -algebra generated in B(2 (G)) by λG (resp. ρG ). Equivalently, Cλ∗ (G) = span{λG (t)| t ∈ G} and Cρ∗ (G) = span{ρG (t) | t ∈ G}. Note that λG (t) and ρG (s) commute for all t,s in G. We denote λG and ρG simply by λ and ρ (and UG by U ) when there is no ambiguity. The following very useful result is known as Fell’s “absorption principle.” Proposition 3.15 For any unitary representation π : G → B(H ), we have λG ⊗ π  λ G ⊗ I

(unitary equivalence).

Here I stands for the trivial representation of G in B(H ) (i.e. I (t) = IdH ∀t ∈ G). In particular, for any finitely supported functions a : G → C and b : G → B(2 ), we have     a(t)λG (t) ⊗ π(t) 

     b(t)⊗λG (t)⊗π(t)

Cλ∗ (G)⊗min B(H )

B(2 )⊗min Cλ∗ (G)⊗min B(H )

    a(t)λG (t) , =

(3.13)

    =  b(t)⊗λG (t)

B(2 )⊗min Cλ∗ (G)

.

Proof Note that λG ⊗ π acts on the Hilbert space K = 2 (G) ⊗2 H  2 (G;H ). Let V : K → K be the unitary operator taking x = (x(t))t∈G to (π(t −1 )x(t))t∈G . A simple calculation shows that V −1 (λG (t) ⊗ IdH )V = λG (t) ⊗ π(t).

We will often use the following immediate consequence: Corollary 3.16 For any unitary representation π : G → B(H ), the linear map σπ : span[λG (G)] → B(2 (G) ⊗2 H ) defined by σπ (λG (g)) = λG (t) ⊗ π(t) (∀t ∈ G) extends to a (contractive) ∗-homomorphism from Cλ∗ (G) to B(2 (G) ⊗2 H ).

72

C ∗ -algebras of discrete groups

Remark 3.17 Let F be a free group with free generators (gj ). Then for any finitely supported sequence of scalars (aj ), for any H and for any family (uj ) of unitary operators in B(H ) we have         aj λ(gj ) . aj λ(gj ) ⊗ uj  =  min

Indeed, this follows from (3.13) applied to the function a defined by a(gj ) = aj and = 0 elsewhere, and to the unique unitary representation π of F such that π(gj ) = uj . Proposition 3.18 Let G be a discrete group and let  ⊂ G be a subgroup. Then the correspondence λ (t) → λG (t), (t ∈ ) extends to an isometric (C ∗ -algebraic) embedding Jλ : Cλ∗ () → Cλ∗ (G). Moreover there is a completely contractive and completely positive projection Pλ from Cλ∗ (G) onto the range of this embedding, taking λG (t) to 0 for any t ∈ .  Proof Let Q = G/  and let G = gq be the partition of G into (disjoint) q∈Q

right cosets. For convenience, let us denote by 1 the equivalence class of the unit element of G. Since G   × Q, we have an identification 2 (G)  2 () ⊗2 2 (Q) such that ∀t ∈

λG (t) = λ (t) ⊗ I .

This shows of course that Jλ is an isometric embedding. Moreover, we have a natural (linear) isometric embedding V : 2 () → 2 (G) (note that the range of V coincides with 2 () ⊗ δ1 in the preceding identification), such that λ (t) = V ∗ λG (t)V for all t ∈ . Let u(x) = V ∗ xV. Clearly for any t ∈ G / . Therefore we have u(λG (t)) = λ (t) if t ∈  and u(λG (t)) = 0 if t ∈ Pλ = Jλ u is the announced completely positive and completely contractive projection from Cλ∗ (G) onto Jλ Cλ∗ (). As an immediate application, we state for further use the following particular case: Corollary 3.19 (The diagonal subgroup in G × G) Let  = {(g,g) | g ∈ G} ⊂ G × G be the diagonal subgroup. There are: ∗ ∗ ∗  – a complete isometry J : Cλ (G) → Cλ (G) ⊗min Cλ (G) such that  J (λG (t)) = λG (t) ⊗ λG (t), and ∗ ∗ ∗   – a c.p. map Q : Cλ (G) ⊗min Cλ (G) → Cλ (G) with Q  = 1 such that Q (λG (t) ⊗ λG (s)) = 0 whenever s = t and Q (λG (t) ⊗ λG (t)) = λG (t).

3.4 Multipliers

73

Proof We apply Proposition 3.18 to the subgroup  and we use the identification Cλ∗ (G) ⊗min Cλ∗ (G)  Cλ∗ (G × G) which follows easily from the definition of both sides (see §4.3 for more such identifications). The projection Pλ in the preceding proposition is an example of mapping associated to a “multiplier.”

3.4 Multipliers Let ϕ : G → C be a (bounded) function and let π be a unitary representation of G. Let Mϕ be the linear mapping defined on the linear span of {π(t) | t ∈ G} by ∀t ∈ G Mϕ (π(t)) = ϕ(t)π(t). As anticipated in Proposition 3.3, we say that ϕ is a bounded (resp. c.b. rresp. c.p.) multiplier on Cπ∗ (G) if Mϕ extends to a bounded (resp. c.b. rresp. c.p.) linear map on Cπ∗ (G). We will be mainly interested in the cases when π = λG or π = UG . In the commutative case (or when G is amenable) the bounded or c.b. multipliers of Cλ∗ (G) coincide with the linear combinations of positive definite functions, and the latter, as we explain next, are the c.p. multipliers. However, in general the situation is more complicated. The next statement characterizes the c.b. case. We may even include B(H )-valued multipliers. Theorem 3.20 ([35, 136]) Let G be a discrete group, H a Hilbert space. The following properties of a function ϕ : G → B(H ) are equivalent: (i) The linear mapping defined on span[λ(t) | t ∈ G] by Mϕ (λ(t)) = λ(t) ⊗ ϕ(t) extends to a c.b. map Mϕ : Cλ∗ (G) → Cλ∗ (G) ⊗min B(H ) ⊂ B(2 (G) ⊗2 H ) with Mϕ cb ≤ 1.  and bounded functions x : G → B(H, H ) (ii) There is a Hilbert space H ) with supt∈G x(t) ≤ 1 and sups∈G y(s) ≤ 1 and y : G → B(H, H such that ϕ(s −1 t) = y(s)∗ x(t).

∀ s,t ∈ G

C ∗ -algebras of discrete groups

74

, a Proof Assume (i). Then by Theorem 1.50 there are a Hilbert space H ) and operators Vj : 2 (G) ⊗2 H → H  representation π : Cλ∗ (G) → B(H (j = 1,2) with V1 V2  ≤ 1 such that ∀θ ∈G

λ(θ ) ⊗ ϕ(θ ) = Mϕ (λ(θ )) = V2∗ π(λ(θ ))V1 .

(3.14)

We will use this for θ = s −1 t, in which case we have δs −1 ,λ(θ )δt −1  = 1. We ) and y(s) ∈ B(H, H ) by x(t)h = π(λ(t)) V1 (δt −1 ⊗ h) define x(t) ∈ B(H, H and y(s)k = π(λ(s))V2 (δs −1 ⊗ k). Note that when θ = s −1 t δs −1 ⊗ k,(λ(θ ) ⊗ ϕ(θ ))(δt −1 ⊗ h) = k,ϕ(s −1 t)h, and hence (3.14) implies k,ϕ(s −1 t)h = k,y(s)∗ x(t)h, and we obtain (ii). ) by π(x) = Conversely assume (ii). Define π : Cλ∗ (G) → B(2 (G) ⊗2 H x ⊗ IdH. Let  Vj : 2 (G) ⊗2 H → 2 (G) ⊗2 H be defined by V1 (δt ⊗ h) = δt ⊗ x(t)h and V2 (δs ⊗ k) = δs ⊗ y(s)k. Note that V1  = supt∈G x(t) ≤ 1 and V2  = sups∈G y(s) ≤ 1. Then for any θ,t,s,h,k we have δs ⊗ k,V2∗ π(λ(θ ))V1 (δt ⊗ h) = δs ,λ(θ )δt k,y(s)∗ x(t)h = δs ⊗ k,(λ(θ ) ⊗ ϕ(θ ))(δt ⊗ h), equivalently V2∗ π(λ(θ ))V1 = Mϕ (λ(θ )), so the converse part of Theorem 1.50 yields (ii) ⇒ (i). In the particular case C = B(H ) the preceding result yields: Corollary 3.21 (Characterization of c.b. multipliers on Cλ∗ (G)) Consider a complex-valued function ϕ : G → C. Then Mϕ : Cλ∗ (G) → Cλ∗ (G)cb ≤ 1 if and only if there are Hilbert space valued functions x,y with supt x(t) ≤ 1 and sups y(s) ≤ 1 such that ϕ(s −1 t) = y(s),x(t).

∀ s,t ∈ G

Remark 3.22 (On positive definiteness) A function ϕ : G → C is called positive definite if for any n and any t1, . . . ,tn ∈ G the n × n-matrix [ϕ(ti−1 tj )] is positive (semi)definite, i.e. we have  xi xj ϕ(ti−1 tj ) ≥ 0. ∀x ∈ Cn

3.4 Multipliers

Equivalently ∀x ∈ C[G]



75

x(s)x(t)ϕ(s −1 t) ≥ 0.

Using the scalar product defined by the latter condition, we find, after passing to the quotient and completing in the usual way, a Hilbert space Hϕ and a ˙ 2Hϕ = mapping C[G] → Hϕ denoted by x → x˙ with dense range (so that x  x(s)x(t)ϕ(s −1 t) for all x ∈ C[G]) and a unitary representation πϕ of G extending left translation on C[G]. Let δe ∈ C[G] denote the indicator function of the unit element of G. We have δ˙e,πϕ (g)δ˙e Hϕ = δ˙e, δ˙g Hϕ = ϕ(g).

(3.15)

Thus ϕ is a (diagonal) matrix coefficient of π . Conversely, any ϕ of the form ϕ(g) = ξ,π(g)ξ  (with π unitary and ξ ∈ Hπ ) is positive definite. Proposition 3.23 Let ϕ : G → C. The following are equivalent: (i) ϕ is a c.p. multiplier of Cλ∗ (G). (ii) ϕ is positive definite. Moreover, in that case we have Mϕ  = Mϕ cb = ϕ(e) where e is the unit of G. Proof Assume (i). Let t1, . . . ,tn ∈ G. Consider the matrix a defined by aij = λG (ti )−1 λG (tj ). Clearly a ∈ Mn (Cλ∗ (G))+ . Then (IdMn ⊗ Mϕ )(a) = [ϕ(ti−1 tj )aij ] ∈ Mn (Cλ∗ (G))+ . Therefore, for any x1, . . . ,xn ∈ 2 (G) we  have ϕ(ti−1 tj )xi ,aij xj  ≥ 0. Choosing xj = λj δt −1 (λj ∈ C) we find j

xi ,aij xj  = λi λj for all i,j , and we conclude that ϕ is positive definite. Assume (ii). By (3.15) we have for any g ∈ G Mϕ (λG (g)) = ϕ(g)λG (g) = V ∗ ([λG ⊗ πϕ ](g))V where V : 2 (G) → 2 (G) ⊗2 Hϕ is defined by V (h) = h ⊗ δ˙e . By Corollary 3.16 we have Mϕ (·) = V ∗ (σπ (·))V , and hence Mϕ is c.p. on Cλ∗ (G). Moreover Mϕ (1) = ϕ(e)1, so Mϕ (1) = ϕ(e). Remark 3.24 The reader can easily check that the preceding statement remains valid for B(H )-valued functions, in analogy with Theorem 3.20, for the natural extension of positive definiteness, defined by requesting that [ϕ(ti−1 tj )] ∈ Mn (B(H ))+ for all n. Such functions are sometimes called completely positive definite. In the preceding construction, we associated a linear mapping Mϕ to a function ϕ. We now go conversely. We will associate to a c.b. mapping a

C ∗ -algebras of discrete groups

76

multiplier. In other words, we will describe a linear projection from the set of c.b. maps to the subspace formed by those associated to multipliers. Proposition 3.25 (Haagerup) Let u : Cλ∗ (G) → Cλ∗ (G) be a c.b. map. Then the function ϕu defined by (recall e is the unit of G) ϕu (t) = δt ,u(λG (t))δe  is a c.b. multiplier on Cλ∗ (G) with Mϕu cb ≤ ucb . If u is c.p. then the multiplier is also c.p. If u has finite rank then ϕu ∈ 2 (G). Moreover, if u = Mϕ then ϕu = ϕ. Proof We have IdCλ∗ (G) ⊗ u : Cλ∗ (G) ⊗min Cλ∗ (G) → Cλ∗ (G) ⊗min Cλ∗ (G) ≤ ucb . It is easy to see that Cλ∗ (G) ⊗min Cλ∗ (G) can be identified with Cλ∗ (G × G). With this identification, we have, for the mappings J ,Q in Corollary 3.19, for any t ∈ G ϕu (t)λG (t) = Q [IdCλ∗ (G) ⊗ u]J  (λG (t)). In other words, Mϕu = Q [IdCλ∗ (G) ⊗ u]J  . All the assertions are now evident. We just note that if u has rank 1, say u(x) = f (x)y with f ∈ Cλ∗ (G)∗ and y ∈ Cλ∗ (G), then ϕu (t) = f (λG (t))y(t), and t → f (λG (t)) is bounded while y(t) = δt ,yδe  is in 2 (G); this shows ϕu ∈ 2 (G). Remark 3.26 With the notation of the next section we have ϕu (t) = τG (λG (t)∗ u(λG (t))), while with that of §11.2 it becomes ϕu (t) = λG (t),u(λG (t))L2 (τG ) . The preceding two statements combined show that if u is decomposable as a linear combination of c.p. maps on Cλ∗ (G) (as in Chapter 6) then ϕu is a linear combination of positive definite functions. In particular: Corollary 3.27 Let ϕ : G → C. The associated mapping Mϕ is decomposable on Cλ∗ (G) if and only if ϕ is a linear combination of positive definite functions. We will now complete the description started in Proposition 3.3 of multipliers on the full algebra C ∗ (G). In this case the picture is simpler. Proposition 3.28 Let ϕ : G → C. The following are equivalent: (i) ϕ is a bounded multiplier on C ∗ (G). (ii) ϕ is a linear combination of positive definite functions. (iii) ϕ is c.b. multiplier on C ∗ (G). Moreover, ϕ is positive definite if and only if Mϕ is c.p. on C ∗ (G).

3.5 Group von Neumann Algebra

77

Proof We already know (i) ⇔ (iii) from Proposition 3.3. Assume (i). Then by Proposition 3.3 ϕ satisfies (3.3) for some π,η,ξ . By the polarization formula, we can rewrite ϕ as a linear combination of four functions of the form t → ξ,π(t)ξ  with η = ξ . But the latter are clearly positive definite. This shows (i) ⇒ (ii). Assume ϕ positive definite. By (3.15) and by the case ξ = η in Proposition 3.3 Mϕ is c.p. and hence a fortiori c.b. Now (ii) ⇒ (iii) is clear.

3.5 Group von Neumann Algebra We denote by MG ⊂ B(2 (G)) the von Neumann algebra generated by λG . This means that MG = λG (G) . Equivalently MG is the weak* closure of the linear span of λG (G), and also the weak* closure of Cλ (G). See §A.16 for some background on von Neumann algebras (in particular on the bicommutant Theorem A.46). Let f ∈ 2 (G). Note that a priori, the operator of left convolution by f , Tf : x → f ∗ x is only bounded from 2 (G) to ∞ (G). An operator T ∈ B(2 (G)) belongs to MG if and only if there is a (uniquely determined by f = Tf (δe )) function f ∈ 2 (G) such that x → f ∗ x defines a bounded operator on 2 (G) such that T = Tf . We have  MG = λG (G) = ρG (G) and ρG (G) = MG .

Let  ⊂ G be a subgroup. Since the embedding Jλ : Cλ∗ () → Cλ∗ (G) in Proposition 3.18 is clearly bicontinuous with respect to the weak* topologies of B(2 ()) and of B(2 (G)), it extends to an embedding M ⊂ MG , with which we may identify M to a von Neumann subalgebra of MG . Let {δt | t ∈ G} denote the canonical basis of 2 (G). There is a distinguished tracial state τG defined on MG by τG (T ) = δe,T (δe ). Of course this makes sense on the whole of B(2 (G)), but it is tracial only if we restrict to MG : ∀S,T ∈ MG

τG (TS) = τG (ST).

Clearly τG is “normal” (meaning continuous for the weak* topology of B(2 (G)) ) and faithful (meaning τG (T ∗ T ) = 0 ⇒ T = 0) and τG (1) = 1. Thus (MG,τG ) is the basic example of a “tracial (or noncommutative)

78

C ∗ -algebras of discrete groups

probability space” that we will consider in Chapter 12 when we discuss the Connes embedding problem. Remark 3.29 Let ϕ be as in Corollary 3.21. Let (s,t) = ϕ(s −1 t). Then the Schur multiplier u : B(2 (G)) → B(2 (G)) associated to  according to (iii) in Theorem 1.57 is completely contractive on B(2 (G)) if and only if ϕ satisfies the equivalent conditions in Corollary 3.21. Moreover, the latter Schur multiplier is weak* continuous, meaning continuous from B(2 (G)) to B(2 (G)) when both spaces are equipped with the weak* topology. Therefore, if we restrict to MG we obtain a weak* continuous (also called normal) complete contraction from MG to MG that extends the multiplier Mϕ : Cλ∗ (G) → Cλ∗ (G). We will call the resulting maps weak* continuous multipliers on MG . A similar argument, based on Proposition 3.23, shows that ϕ is positive definite if and only if Mϕ extends to a weak* continuous c.p. multiplier on MG . Lastly, the conclusion of Proposition 3.25 holds with the same proof for any c.b. map u : MG → MG . The resulting multiplier Mϕu is weak* continuous on MG , with Mϕu cb ≤ ucb . Moreover, if u is c.p. on MG , so is Mϕu .

3.6 Amenable groups We review some basic facts on amenability. A discrete group G is called amenable if it admits an invariant mean, i.e. a functional ϕ in ∞ (G)∗+ with ϕ(1) = 1 such that ϕ(δt ∗ f ) = ϕ(f ) for any f in ∞ (G) and any t in G. Theorem 3.30 The following are equivalent: (i) G is amenable. (i)’ There is a net (hi ) in the unit sphere of 2 (G) that is approximately translation invariant, i.e. such that λG (t)hi − hi 2 → 0 for any t ∈ G. (ii) C ∗ (G) = Cλ∗ (G). (iii) For f : G → C we have  supported function   any finitely f (t) ≤   f (t)λG (t).   (iii)’ For any finite subset E ⊂ G, we have |E| =  t∈E λG (t) . (iv) There is a generating subset S ⊂ G with e ∈S such that, for any finite  subset E ⊂ S, we have |E| =  t∈E λG (t) . (v) MG is injective. Proof Assume (i). Let ϕ be the invariant mean. Note that ϕ is in the unit ball of 1 (G)∗∗ + . Therefore, there is a net (ϕi ) in the unit ball of 1 (G)+ tending in the sense of σ (1 (G)∗∗,1 (G)∗ ) to ϕ. Let 1 be the constant function equal to 1

3.6 Amenable groups

79

on G. Since ϕi (1) → 1, we may assume after renormalization that ϕi (1) = ϕi 1 (G) = 1. Fix t ∈ G. Since δt ∗ ϕ = ϕ, we have δt ∗ ϕi − ϕi → 0 when i → ∞. But since δt ∗ ϕi − ϕi lies in 1 (G) this means that limi→∞ (δt ∗ ϕi − ϕi ) = 0 for the weak topology of 1 (G). By (Mazur’s) Theorem A.9, passing to convex combinations of elements of a subnet (here we leave some details to the reader, see Remark A.10) we may assume that limi→∞ δt ∗ ϕi − ϕi 1 (G) = 0. A priori, this was obtained for each fixed t, but, by suitably refining the argument (here again we skip some details), we can obtain the same for each √ finite subset T ⊂ G. Let hi = ϕi . We claim that δt ∗ hi − hi 2 → 0 for any t ∈ T . This claim clearly implies (i)’. To check the claim, using |x 1/2 − y 1/2 | ≤ |x − y|1/2 for any x,y ∈ R+ , we observe that |δt ∗ hi (s) − hi (s)| ≤ |δt ∗ ϕi (s) − ϕi (s)|1/2 and hence δt ∗ hi − hi 2 → 0 for any t ∈ T . This shows (i) ⇒ (i)’.  Assume (i)’. Let x = x(t)λG (t) ∈ span[λG (t) | t ∈ G]. Let π : G → B(H ) be any unitary representation. By the absorption principle (3.13)     x(t)λG (t) =  x(t)π(t) ⊗ λG (t). We claim that  x(t)π(t) ⊗  λG (t) ≥  x(t)π(t). Indeed, let fi be the state on B(2 (G)) defined by fi (T ) = hi ,T hi . Then we have clearly          x(t)π(t) ⊗ λG (t) x(t)π(t) ⊗ λG (t)  ≤  [Id ⊗ fi ] but [Id ⊗ fi ]



   x(t)π(t) ⊗ λG (t) = x(t)π(t)fi (λG (t)) → x(t)π(t),

where at the last step = hi ,δt ∗ hi  → 1. This implies the   G (t))   we use fi (λ sup over the claim and hence  x(t)λG (t) ≥   x(t)π(t). Taking the     π s we obtain (by “maximality” of UG )  x(t)λG (t) =  x(t)UG (t). This shows (i)’ ⇒ (ii). Assume (ii). Then (iii) holds by (3.2), and (iii) ⇒ (iii)’ ⇒ (iv) are trivial.  Assume (iv). We will show (i)’. Fix E as in (iv). Let ME = |E|−1 t∈E λG (t) so that ME  = 1. There is a net (xi ) in the unit sphere of 2 (G) such that ME (xi ) → 1. By the uniform convexity of 2 (G) (see §A.3), this implies δt ∗ xi − xi → 0 in 2 (G) for any t ∈ E. Rearranging the net (here again we leave the details to the reader) we find a net (hi ) in the unit sphere of 2 (G) such that the same holds for any t ∈ S, and since S generates G, still the same for any t ∈ G. This shows (iv) ⇒ (i)’. Assume (i)’. We will show (i). Let ϕ ∈ ∞ (G)∗ be defined by  x(t)|hi (t)|2, ∀x ∈ ∞ (G) ϕ(x) = limU where U is an ultrafilter refining the net (see Remark A.6). Let Dx ∈ B(2 (G)) be the diagonal operator associated to x. Note that ϕ(x) = limU hi ,Dx hi , and also

C ∗ -algebras of discrete groups

80

λG (t)Dx λG (t)−1 = Dδt ∗x .

(3.16)

Therefore ϕ(δt ∗ x) = limU hi ,Dδt ∗x hi  = limU λG (t)hi ,Dx λG (t)hi  = limU hi ,Dx hi  = ϕ(x). Thus ϕ is an invariant mean, so (i) holds. This proves the equivalence of (i)–(iv), (i)’, and (iii)’. It remains to show that (i) and (v) are equivalent. Assume (i). We will show that there is a c.p. projection P : B(2 (G)) → MG with P  = 1. Let T ∈ B(2 (G)). We define T : G → B(2 (G)) by T (g) = ρG (g)T ρG (g)−1 . We will define P (T ) as the “integral” with respect to ϕ of the function T , but some care is needed since ϕ is not really a measure on G. Let [T (s,t)] be the “matrix” associated to T defined by T (s,t) = δs ,T δt  (s,t ∈ G). Observe that g → T (g)(s,t) is in ∞ (G). Then we set P (T )(s,t) = ϕ(T (·)(s,t)). This defines a matrix and it is easy to see that the associated linear operator on span[δt | t ∈ G] extends to a bounded one (still denoted by P (T )) on 2 (G) such that P (T ) ≤ T . We have T (g)(s,t) = T (sg,tg) and hence, by the left invariance of ϕ, P (T )(s,t) = P (T )(st−1,e). This shows that P (T ) acts on 2 (G) as a left convolution bounded operator, in other words P (T ) ∈ MG . Moreover, if T ∈ MG then T commutes with ρG so we have P (T ) = T . This proves that P : B(2 (G)) → MG is a contractive projection. A simple verification left to the reader shows that it is c.p. (but this is automatic by Tomiyama’s Theorem 1.45). This shows (i) ⇒ (v). Assume (v). Let P : B(2 (G)) → MG be a projection with P  = 1. Invoking Theorem 1.45 again, we know that P is a c.p. conditional expectation. We define ∀x ∈ ∞ (G) Clearly ϕ ∈ of τG

∞ (G)∗+ ,

ϕ(x) = τG (P (Dx )) = δe,P (Dx )δe .

ϕ(1) = 1 and by (3.16), (1.28) and the trace property

∀t ∈ G ϕ(δt ∗ x) = τG [P (λG (t)Dx λG (t)−1 )] = τG [λG (t)P (Dx )λG (t)−1 ] = τG [P (Dx )] = ϕ(x). Thus ϕ is an invariant mean on G. This shows (v) ⇒ (i). Remark 3.31 If the generating set S is finite, the condition (iv) obviously reduces (by the triangle inequality) to     λG (t) . |S| =  t∈S

3.6 Amenable groups

81

Remark 3.32 The net (hi ) in (i)’ is sometimes called asymptotically left invariant. By density (and after renormalization) when it exists, it can always be found in the group algebra C[G]. Remark 3.33 (On Følner sequences) It is well known (see e.g. [194]) that for any amenable discrete group G the net (hi ) appearing in (i)’ in Theorem 3.30 can be chosen of the form hi = 1Bi |Bi |−1/2 for some family (Bi ) of finite subsets of G. For (hi ) of the latter form, (i)’ boils down to the assertion that the symmetric differences (tBi )Bi satisfy ∀t ∈ G

|tBi Bi ||Bi |−1 → 0.

A net of finite subsets (Bi ) satisfying this is called a Følner net, and a Følner sequence when the index set is N. Thus a (resp. countable) group G is amenable if and only if it admits a Følner net (resp. sequence). For instance, for G = Zd (1 ≤ d < ∞), the sequence Bn = [−n,n]d is a Følner sequence. This gives us the following special property of the reduced C ∗ -algebra, called the CPAP in the sequel (see Definition 4.8): Lemma 3.34 If G is amenable, there is a net of finite rank maps ui ∈ CP(Cλ∗ (G),Cλ∗ (G)) (resp. ui ∈ CP(C ∗ (G),C ∗ (G))) with ui  = 1 that tends pointwise to the identity on Cλ∗ (G) (resp. C ∗ (G)). Moreover, in both cases the ui’s are multiplier operators. Proof By Remark 3.32, there is a net (hi ) in C[G] in the unit sphere of 2 (G) such that λG (t)hi − hi 2 → 0 for any t in G. Let h∗i (t) = hi (t −1 ) (t ∈ G). A simple verification show that ϕi = h∗i ∗ hi is a positive definite function on G such that ϕi (e) = hi 22 = 1. Moreover, ϕi is finitely supported and tends pointwise to the constant function 1 on G. Let ui be the associated multiplier operator on Cλ∗ (G) (resp. C ∗ (G)). Its rank being equal to the cardinality of the support of ϕi is finite. By Proposition 3.23 (resp. Proposition 3.28), ui is c.p. and since ui (1) = ϕi (e)1 = 1, we have ui  = 1 by (1.20). For any   x(t)UG (t)) with x finitely supported, ui (x) x = x(t)λG (t) (resp. x = obviously tends to x in the norm of Cλ∗ (G) (resp. C ∗ (G)). Since such finite sums are dense in Cλ∗ (G) (resp. C ∗ (G)) and supi ui  < ∞, we conclude that ui (x) → x for any x ∈ Cλ∗ (G) (resp. C ∗ (G)). Remark 3.35 (Examples of amenable groups) All commutative groups are  is defined as amenable. If G is commutative (and discrete), its dual G the group formed of all homomorphisms γ : G → T, which is compact for the pointwise convergence topology. For any finitely supported function  f : G → C we define its “Fourier transform” by f(γ ) = f (g)γ (g). (This is the usual convention but we could remove the bar from γ (g) if we wished). As

82

C ∗ -algebras of discrete groups

is entirely classical f → fextends to an isometric isomorphism from 2 (G) to   and convolution of where m is the normalized Haar measure on G, L2 (G,m), two functions on G is transformed into the pointwise product of their Fourier transforms. Using the latter fact one shows that the correspondence f → f  of extends to an isometric isomorphism from Cλ∗ (G) to the C ∗ -algebra C(G)  Thus in the commutative case we have all continuous functions on G.  C ∗ (G) = Cλ∗ (G)  C(G).

(3.17)

All finitely generated groups of polynomial growth are amenable. The growth is defined using the length. If G is generated by a symmetric set S the smallest number of elements of S needed to write an element g ∈ G (as a word in letters in S) is denoted by S (g). The growth function is the function (R) = |{g ∈ G | S (g) ≤ R}. The group G is called of polynomial growth if (R) grows less than a power of R when R → ∞. For instance G = Zn is of polynomial growth (but Fn is not whenever n ≥ 2). Remark 3.36 By Kesten’s famous work on the spectral radius of random walks on the free group Fn with n generators, the set S1 ⊂ Fn formed of the 2n elements of length 1 (i.e. these are either generators or their inverses), satisfies   √   λFn (s) = 2 2n − 1. (3.18)  s∈S1

Kesten also observed that it is not difficult to deduce from this that for any group G and any symmetric subset S ⊂ G with |S| = k we have   √   λG (s) ≥ 2 k − 1.  s∈S

Akemann and Ostrand [2] proved that any S ⊂ S1 in Fn with |S| = k satisfies   √   λFn (s) = 2 k − 1. (3.19)  s∈S

In particular

n  

j =1

 √  λFn (gj ) = 2 n − 1.

(3.20)

The subsets S of a discrete group for which (3.19) holds have been characterized by Franz Lehner in [166], as the translates of the union of a free set and the unit. Let S ⊂ Fn be the set formed of the unit and the n free generators, so that |S| = n + 1. Then a variant of what precedes is that for G = Fn   √   λG (s) = 2 n. (3.21)  s∈S

When n ≥ 2, this is < n + 1, and hence (iii) or (iv) in Theorem 3.30 fails. This shows that Fn is not amenable for n ≥ 2.

3.7 Operator space spanned by the free generators in Cλ∗ (Fn )

83

Since amenability passes to subgroups (by Proposition 3.18 and (i)⇔ (iv) in Theorem 3.30), any group containing a copy of F2 as a subgroup is nonamenable. The converse, whether nonamenable groups must contain F2 , remained a major open question for a long time but was disproved by A. Olshanskii, see [126] for details. See Monod’s [178] for what seems to be currently the simplest construction of nonamenable groups not containing F2 as a subgroup.

3.7 Operator space spanned by the free generators in Cλ∗ (Fn ) The next statement gives us a description up to complete isomorphism of the span of the generators in Cλ∗ (Fn ) (and also implicitly in Cλ∗ (F∞ )). See [168] for a more precise (completely isometric) description. Theorem 3.37 Let (gj )1≤j ≤n be the generators in Fn (n ≥ 1). Then for any Hilbert space H and any aj ∈ B(H )(1 ≤ j ≤ n) we have  1/2  1/2        ∗  ∗ ≤ aj ⊗ λFn (gj ) aj aj  max  aj aj  , 

min

1/2  1/2      aj∗ aj  +  aj aj∗  . ≤ (3.22) In particular for any αj ∈ C we have 

|αj |2

1/2

   1/2   αj λFn (gj ) ≤ 2 |αj |2 ≤ .

Proof We will first prove the upper bound in (3.22). Let Ci+ ⊂ Fn (resp. Ci− ⊂ Fn ) be the subset formed by all the reduced words which start with gi (resp. gi−1 ). Note: except for the empty word e, every element of G can be written as a reduced word in the generators admitting a well-defined “first” and “last” letter (where we read from left to right). Let Pi+ (resp. Pi− ) be the orthogonal projection on 2 (Fn ) with range span[δt | t ∈ Ci+ ] (resp. span[(δt | t ∈ Ci− ]). The 2n projections {Pi+,Pi− | 1 ≤ i ≤ n} are mutually orthogonal. Then it is easy to check that λFn (gj ) = λFn (gj )Pj− + λFn (gj )(1 − Pj− ) = λFn (gj )Pj− + Pj+ λFn (gj )(1 − Pj− ) = λFn (gj )Pj− + Pj+ λFn (gj )

C ∗ -algebras of discrete groups

84

so that setting λFn (gj ) = xj + yj with xj we find         Pj−  ≤ 1 and xj∗ xj  =  

= λFn (gj )Pj− and yj = Pj+ λFn (gj )         yj yj∗  =  Pj+  ≤ 1. 

Therefore for any finite sequence (aj ) in B(H ) we have by (1.11) (note aj ⊗ xj = (aj ⊗ 1)(1 ⊗ xj ) and similarly for aj ⊗ yj )             aj ⊗ yj  aj ⊗ xj  +  aj ⊗ λFn (gj ) ≤    1/2  1/2     aj aj∗  +  ≤ aj∗ aj  . The inverse inequality follows from a more general one valid for any discrete group G: for any finitely supported function a : G → B(H ) we have  1/2   1/2         a(t) ⊗ λG (t) . a(t)∗ a(t) ,  a(t)a(t)∗  ≤ max  min

(3.23)

 To check this, let T = a(t) ⊗ λG (t). For any h in BH we have T (h ⊗ δe ) =

  2 1/2 and hence a(t)h ⊗ δt so that T (h ⊗ δe ) = t a(t)h 1/2   1/2   a(t)∗ a(t) = sup a(t)h2 ≤ T .  Similarly since

T∗

=



h∈BH

a(t −1 )∗

⊗ λG (t) we find 1/2    a(t)a(t)∗  ≤ T ∗  = T  

and we obtain (3.23). In the case G = Fn , (3.23) implies the left-hand side of (3.22). The second inequality follows by taking aj = αj 1. Corollary 3.38 For any n,N ≥ 1 and any unitaries a ∈ Un , x1, . . . ,xn ∈ UN we have n  √   aij xi ⊗ λFn (gj ) ≤ 2 n.  i,j =1  a is unitary a simple verification (using Proof Let aj = i aij xi . Since   ∗ 1/2  1/2 √  aj aj  (A.13)) shows that we have ≤  xj∗ xj  = n and  1/2   1/2 √ ∗ ∗  aj a  ≤  xj x  = n. j

j

3.8 Free products of groups Let (Gi )i∈I be a family of groups. The free product G = ∗i∈I Gi is a group containing each Gi as a subgroup and possessing the following universal

3.9 Notes and remarks

85

property that characterizes it: for any group G and any family of homomorphisms fi : Gi → G , there is a unique homomorphism f : G → G extending each fi . When I = {1,2} we denote G1 ∗ G2 the free product ∗i∈I Gi . When I = {1, . . . ,n} and G1 = · · · = Gn = Z it is easy to see that G = ∗i∈I Gi can be identified with Fn . More generally, any free group F that is generated by a family of free elements (gi )i∈I can be identified with the free product ∗i∈I Gi relative to Gi = Z for all i ∈ I . We denote that group by FI . It is well known that any group G is a quotient of some free group. Indeed, if G is generated by a family (ti )i∈I , let f : FI → G be the (unique) homomorphism such that f (gi ) = ti for all i ∈ I . Then f is onto G. Thus G  FI / ker(f ). The analogous fact for C ∗ -algebras is the next statement. Proposition 3.39 Any unital C ∗ -algebra A is a quotient of C ∗ (FI ) for some set I . If A is separable (resp. is generated by n unitaries) then we can take I = N (resp. I = {1, . . . ,n}). Proof Let G be the unitary group of A. Let f : FI → G be a surjective homomorphism. Let π : C ∗ (FI ) → A be the associated ∗-homomorphism, as in Remark 3.1. By the Russo–Dye Theorem A.18, the range of π is dense in A, but since it is closed (see §A.14), π must be surjective. Thus A is a quotient of C ∗ (FI ). If A is generated as a C ∗ -algebra by a family of unitaries (ui )i∈I , we can replace G in the preceding argument by the group generated by (ui )i∈I . This settles the remaining assertions. Remark 3.40 As we saw in Remark 3.36, F2 = Z ∗ Z is not amenable. More generally it can be shown that Zn ∗Zm is not amenable if n ≥ 2 and m ≥ 3, and in fact contains a subgroup isomorphic to F∞ . The group Z2 ∗ Z2 is a slightly surprising exception, it is amenable because it happens to have polynomial growth (an exercise left to the reader).

3.9 Notes and remarks The main results of this section are by now well known, and sometimes for general locally compact groups (for instance Proposition 3.5 is proved in greater generality in [225]), but we choose to focus on the discrete ones. Section 3.2 on free groups is just a reformulation of operator space duality illustrated on the pair (1,∞ ). Lemmas 3.9 and 3.10 are elementary facts from operator space theory (see [80, 208]). The classical reference that exploited C ∗ -algebra theory in noncommutative harmonic analysis is Eymard’s thesis [85]. The name of Fell is attached to the notions of weak containment

86

C ∗ -algebras of discrete groups

and weak equivalence of group representations, which apparently led him to the principle enunciated in Theorem 3.15. Concerning multipliers, those considered in Theorem 3.20 are sometimes called Herz–Schur multipliers (in honor of Carl Herz). The characterization in Theorem 3.20 and its Corollary is due to Jolissaint [136], but the simple proof we give is due to Bo˙zejko and Fendler [35]. Our treatment is inspired by Haagerup’s unpublished (but widely circulated) notes on multipliers, where in particular he proves Proposition 3.25. There are many known characterizations of amenability, the main one going back to Kesten, with variants due to Hulanicki and many authors. We refer the reader to [194] (or [199]) for details and references. Theorem 3.37 appears in [118]. In [168] Lehner gives an exact computation of the norm of  aj ⊗ λFn (gj ) when the coefficients aj are matricial or equivalently when dim(H ) < ∞.

4 C ∗ -tensor products

A norm on a ∗-algebra A is called a C ∗ -norm if it satisfies x = x ∗ ,

xy ≤ x y

and

x ∗ x = x2

(4.1)

for any x,y in A. The completion A of (A,.) then becomes a C ∗ -algebra. It is useful to point out that, after completion, the norm is unique: there is only one C ∗ -norm on a C ∗ -algebra. In particular if two C ∗ -norms on A are distinct then they are not equivalent, since otherwise they would produce the same completion, where the C ∗ -norm is unique. In particular any ∗-isomorphism between C ∗ -algebras must be isometric. More generally, it is useful to record here that any injective ∗-homomorphism between C ∗ -algebras is automatically isometric. Consequently, a ∗-homomorphism between C ∗ -algebras must have a closed range, and the range is isometric to a quotient C ∗ -algebra of the source of the map. Indeed, the kernel of any ∗-homomorphism u : A → B is a (closed two-sided and self-adjoint) ideal I ⊂ A. Passing to the quotient gives us an injective (and hence isometric) ∗-homomorphism A/I → B, which must have a closed range. When u is surjective we have B ∼ = A/I. See Proposition A.24 for more details.

4.1 C ∗ -norms on tensor products Let A1,A2 be two C ∗ -algebras. Their algebraic tensor product A1 ⊗ A2 is a ∗-algebra for the natural operations defined by (a1 ⊗ a2 ) · (b1 ⊗ b2 ) = a1 b1 ⊗ a2 b2 and (a1 ⊗ a2 )∗ = a1∗ ⊗ a2∗ .

87

88

C ∗ -tensor products

Thus a norm   on A1 ⊗ A2 is a C ∗ -norm if it satisfies (4.1) for any x,y in A1 ⊗ A2 . This subject was initiated in the 1950s by Turumaru in Japan. Later work by Takesaki and Guichardet leads to the following result. Theorem 4.1 There is a minimal C ∗ -norm  min and a maximal one  max , so that any C ∗ -norm  ·  on A1 ⊗ A2 must satisfy xmin ≤ x ≤ xmax

∀x ∈ A1 ⊗ A2 .

(4.2)

We denote by A1 ⊗min A2 (resp. A1 ⊗max A2 ) the completion of A1 ⊗ A2 for the norm  min (resp.  max ). The maximal C ∗ -norm is easy to describe. We simply write xmax = sup π(x)B(H )

(4.3)

where the supremum runs over all possible Hilbert spaces H and all possible ∗-homomorphisms π : A1 ⊗ A2 → B(H ). The minimal (or spatial) norm can be described as follows: embed A1 and A2 as C ∗ -subalgebras of B(H1 ) and B(H2 ) respectively, then for any  1 x = ai ⊗ ai2 in A1 ⊗ A2 , xmin coincides with the norm induced by the space B(H1 ⊗2 H2 ), i.e. we have an embedding (i.e. an isometric ∗-homomorphism) of the completion, denoted by A1 ⊗min A2 , into B(H1 ⊗2 H2 ). The resulting C ∗ -algebra does not depend on the particular embeddings A1 ⊂ B(H1 ) and A2 ⊂ B(H2 ). More generally, even if we allow completely isometric linear embeddings A1 ⊂ B(H1 ) and A2 ⊂ B(H2 ), we obtain the same norm (i.e. the min-norm) induced on A1 ⊗ A2 . So that, actually, the minimal tensor product of operator spaces, introduced in §1.1, coincides with the minimal C ∗ -tensor product when restricted to two C ∗ -algebras. See §1.1 for more on this. Proof of Theorem 4.1 Let x → xα be a C ∗ -norm on A1 ⊗ A2 . After completion we find a C ∗ -algebra A1 ⊗α A2 and a (Gelfand–Naimark) embedding π : A1 ⊗α A2 ⊂ B(H ). For any x ∈ A1 ⊗ A2 we have xα = π(x), which shows xα ≤ xmax . This proves the second inequality in (4.2). In particular, the minimal norm must satisfy  min ≤  max . This goes back to Guichardet. The lower bound  min ≤  α is due to Takesaki and is much more delicate. For a proof, see either [240] or [146]. It is easy to see (at least in the unital case) that for any ∗-homomorphism π : A1 ⊗ A2 → B(H ) there is a pair of (necessarily contractive) ∗-homomorphisms πi : Ai → B(H ) (i = 1,2) with commuting ranges such that

4.1 C ∗ -norms on tensor products π(a1 ⊗ a2 ) = π1 (a1 )π2 (a2 )

∀a1 ∈ A1

89 ∀a2 ∈ A2 .

(4.4)

Indeed, in the unital case we just set π1 (a1 ) = π(a1 ⊗ 1) and π2 (a2 ) = π(1 ⊗ a2 ). In the general case, the same idea works with approximate units (see Remark 4.2). Conversely any such pair πj : Aj → B(H ) (j = 1,2) of ∗-homomorphisms with commuting ranges determines uniquely a ∗-homomorphism π : A1 ⊗ A2 → B(H ) by setting π(a1 ⊗ a2 ) = π1 (a1 )π2 (a2 ).  For any finite sum x = ak1 ⊗ ak2 in A1 ⊗ A2 we will use the notation  (π1 · π2 )(x) = π1 (ak1 ) ⊗ π2 (ak2 ), (4.5) with which π = π1 · π2 . Then, we can rewrite (4.3) as:     π1 (ak1 )π2 (ak2 ) = sup {(π1 · π2 )(x)} , xmax = sup 

(4.6)

where the supremum runs over all possible such (π1,π2 ).   pairs  1 1  ak ⊗ ak2 max ≤ ak ak2  Since π1  ≤ 1 and π2  ≤ 1 we have and hence (see §A.1)   ak1 ak2  (4.7) xmax ≤ x∧ = inf where the infimum runs over all possible ways to write x as a finite sum of tensors of rank 1. Incidentally, this ensures that (4.6) or (4.3) is finite. Remark 4.2 ((4.4) still holds in the nonunital case) In the general a priori nonunital case, we claim that it is still true that any ∗-homomorphism π : A1 ⊗ A2 → B(H ) that is “nondegenerate” (meaning here such that V =  j j π(A1 ⊗ A2 )(H ) is dense in H ) must be of the form (4.4). Let t = a1 ⊗a2 ∈  j j A1 ⊗ A2 . We denote for x1 ∈ A1 (resp. x2 ∈ A2 ) x1 · t = x1 a1 ⊗ a2 (resp.  j  j a1 ⊗ x2 a2 ). Let ξ = n1 π(tk )hk ∈ V (tk ∈ A1 ⊗ A2 ). We define x2 · ·t =   π1 (x1 )ξ = n1 π(x1 · tk )hk and similarly π2 (x2 )ξ = n1 π(x2 · ·tk )hk . Then π(x1 ⊗ x2 )ξ = (π1 · π2 )(x1 ⊗ x2 )(ξ ) and hence π(t)ξ = (π1 · π2 )(t)(ξ ) for any t ∈ A1 ⊗ A2 . Moreover πj extends to a bounded ∗-homomorphism πj : Aj → B(H ). Indeed, a simple verification shows that π1 (x1 )ξ 2 ≤ x1 2 ξ 2 and similarly π2 (x2 )ξ 2 ≤ x2 2 ξ 2 . This proves the claim. Remark 4.3 ((4.6) still holds in the nonunital case) Now if V is not assumed dense in H , the existence of approximate units shows that π(t) = π(t)|V  so that since we can always replace H by V we conclude that (4.6) still holds in the general a priori nonunital case.

Remark 4.4 The norm ‖x‖∧ appearing in (4.7) is called the projective norm. It is the largest among the reasonable tensor norms on tensor products of Banach spaces in Grothendieck’s sense (see §A.1), but it is not adapted to our context because it is not a C∗-norm.
Remark 4.5 It follows that any C∗-norm ‖·‖ on A1 ⊗ A2 automatically satisfies ‖a1 ⊗ a2‖ = ‖a1‖ ‖a2‖ for all a1 ∈ A1, a2 ∈ A2. Indeed, it is easy to show that ‖a1 ⊗ a2‖max ≤ ‖a1‖ ‖a2‖ and ‖a1 ⊗ a2‖min ≥ ‖a1‖ ‖a2‖.
Note that if α is either min or max we have canonically

A1 ⊗α A2 ≃ A2 ⊗α A1.   (4.8)

Indeed, in both cases the flip a1 ⊗ a2 → a2 ⊗ a1 extends to an isomorphism. The basic definitions extend to tensor products of n-tuples A1,A2, . . . ,An of C ∗ -algebras, but when α = either min or max the resulting tensor products are “associative” so that we may reduce consideration if we wish to the case n = 2. Indeed, “associative” means here the identity (A1 ⊗α A2 ) ⊗α A3 = A1 ⊗α A2 ⊗α A3 = A1 ⊗α (A2 ⊗α A3 ),

(4.9)

which is easy to check and shows that the theory of multiple products reduces, by iteration, to that of products of pairs.

Remark 4.6 Let A = (⊕_1^n Aj)∞ be the direct sum of a finite family of C∗-algebras. It is easy to check that for any representation π : A → B(H) that is nondegenerate (that is, such that π(A)H is dense in H) there is an orthogonal decomposition H = ⊕_1^n Hj and nondegenerate representations πj : Aj → B(Hj) such that π can be unitarily identified with π1 ⊕ · · · ⊕ πn. Using this it is easy to check that, when α is either min or max, for any C∗-algebra B we have

(⊕_1^n Aj) ⊗α B ≃ ⊕_1^n (Aj ⊗α B).   (4.10)

For α = min, a more general identity holds, see (1.14). Let (B1,B2 ) be another pair of C ∗ -algebras and let πi : Ai → Bi (i = 1,2) be ∗-homomorphisms. Then it is immediate from the definition that (π1 ⊗ π2 )(t)max ≤ tmax for any t ∈ A1 ⊗ A2 and hence π1 ⊗ π2 defines a ∗-homomorphism from A1 ⊗max A2 to B1 ⊗max B2 . For the minimal tensor product, this is also true because ∗-homomorphisms are automatically complete contractions (see Remark 1.6). Indeed, consider c.b. maps ui : Ai → Bi (i = 1,2). Then (see §1.1) u1 ⊗ u2 defines a c.b. map from A1 ⊗min A2 to B1 ⊗min B2 with u1 ⊗ u2 cb = u1 cb u2 cb . In sharp contrast, the analogous property does not hold for the max-tensor products. However, it does hold if we moreover assume that u1 and u2 are


completely positive (resp. decomposable) and then (see the forthcoming Corollary 6.12 and §7.1) the resulting map u1 ⊗ u2 also is completely positive (resp. decomposable) from A1 ⊗max A2 to B1 ⊗max B2 , and we have ∀x ∈ A1 ⊗ A2

‖(u1 ⊗ u2)(x)‖_{B1⊗max B2} ≤ ‖u1‖ ‖u2‖ ‖x‖_{A1⊗max A2},

(resp. u1 ⊗ u2 dec ≤ u1 dec u2 dec ). As we will see in Theorem 7.6 decomposable maps are the “right” analogue of c.b. maps when one replaces the minimal tensor products by the maximal ones. If B1 = B(H1 ) and B2 = B(H2 ) (or merely if both B1 and B2 are assumed injective) then u1 dec = u1 cb and u2 dec = u2 cb (see Proposition 6.7), so in this particular case there is no problem, tensor products of c.b. maps are bounded both on the minimal and maximal tensor products. We have obviously a bounded ∗-homomorphism q : A1 ⊗max A2 → A1 ⊗min A2 , which (as all C ∗ -representations) has a closed range, hence A1 ⊗min A2 is C ∗ -isomorphic to the quotient (A1 ⊗max A2 )/ker(q). The observation that in general q is not injective is at the basis of the theory of nuclear C ∗ -algebras.

4.2 Nuclear C ∗ -algebras (a brief preliminary introduction) In these notes, we will emphasize the notion of nuclear pair rather than that of nuclear C ∗ -algebra (which is by now well known), i.e. we will focus attention on specific pairs (A,B) of C ∗ -algebras such that the min and max C ∗ -norms coincide on A ⊗ B. See Chapter 9. Thus in our presentation the theory of nuclear C ∗ -algebras becomes embedded in that of nuclear pairs. Nevertheless, it seems more convenient to give here first a brief overview of nuclear C ∗ -algebras. Definition 4.7 A C ∗ -algebra A is called nuclear if for any C ∗ -algebra B we have  min =  max on A ⊗ B or in short if A ⊗min B = A ⊗max B. In that case, there is only one C ∗ -norm on A ⊗ B. This notion was introduced (under a different name) by Takesaki and was especially investigated by Lance [165], who saw the connection with the following property: Definition 4.8 A C ∗ -algebra A has the completely positive approximation property (in short CPAP) if the identity on A is the pointwise limit of a net of finite rank c.p. maps. We will see in Corollary 7.12 that the CPAP is actually equivalent to nuclearity. Since we place this result in a much broader context (see Theorem 7.10), we delay the full details of its proof till §10.2.


Remark 4.9 If A and B are nuclear then A ⊗min B is also nuclear. This is easy to check using (4.9). By (4.10), A ⊕ B is nuclear, as well as the direct sum of finitely many nuclear C ∗ -algebras. Remark 4.10 (Examples of nuclear C ∗ -algebras) For example, if dim(A) < ∞, A is nuclear, because A ⊗ B (there is no need to complete it!) is already a C ∗ -algebra, hence it admits a unique C ∗ -norm. All commutative C ∗ -algebras are nuclear. Indeed, any such algebra A is isometric to the space C0 (T ) of continuous functions vanishing at ∞ on a locally compact space T (that can be taken compact in the unital case). It is a well-known fact from Banach space theory that all such spaces have the metric approximation property, meaning that the identity is the pointwise limit of a net of finite rank maps of norm at most 1. Moreover, in the particular case of A = C0 (T ) we can arrange the latter net to be formed of positive maps. Since the latter maps are automatically c.p. (see Remark 1.28) this shows that A has the CPAP, and hence by Corollary 7.12 that A is nuclear. It is an easy exercise to show that K(H ) has the CPAP, and hence is nuclear. For the same reason (although it is not so immediate) the Cuntz algebras are nuclear. The Cuntz algebra On for n ∈ N ∪ {∞} is the C ∗ -subalgebra of B(2 ) generated by an n-tuple (a sequence if n = ∞) of isometries (Sj ) on 2 such  that Sj Sj∗ = 1. It can be shown that, given any fixed n, all such algebras are isomorphic, regardless of the choice of (Sj ). In sharp contrast, B(2 ) is not nuclear and C ∗ (F2 ) does not embed in a nuclear C ∗ -algebra (due to Simon Wasserman [255, 257]); we will prove these facts in the sequel (see Theorem 12.29 or Corollary 18.12 and Proposition 7.34). Other examples or counterexamples can be given among group C ∗ algebras. For any discrete group G, the full C ∗ -algebra C ∗ (G) or the reduced one Cλ∗ (G) (as defined in §3.1 and §3.3) is nuclear if and only if G is amenable. See the subsequent Corollary 7.13 for details. So for instance if G = FI with |I | ≥ 2 then C ∗ (G) and Cλ∗ (G) are not nuclear. (Note that for continuous groups the situation is quite different: Connes [61] proved that, for any separable connected locally compact group G, C ∗ (G) and Cλ∗ (G) are nuclear.)
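For a concrete illustration of the commutative case (we only sketch it), take A = C(S) and B = C(T) with S, T compact. The map sending f ⊗ g to the function (s,t) → f(s)g(t) extends to an isometric ∗-isomorphism

C(S) ⊗min C(T) ≃ C(S × T),

and since C(S) is nuclear this is also C(S) ⊗max C(T); thus there is a unique C∗-norm on C(S) ⊗ C(T) and the completion is just C(S × T).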

4.3 Tensor products of group C∗-algebras

The following results are easy exercises: Let G1, G2 be two discrete groups. Then

C∗(G1) ⊗max C∗(G2) ≃ C∗(G1 × G2),   (4.11)

Cλ∗(G1) ⊗min Cλ∗(G2) ≃ Cλ∗(G1 × G2),   (4.12)

and similarly for the free product G1 ∗ G2:

C∗(G1) ∗ C∗(G2) ≃ C∗(G1 ∗ G2).   (4.13)

These identities can be extended to arbitrary families (Gi)_{i∈I} in place of the pair (G1, G2). In particular, we have

∗_{i∈I} C∗(Gi) ≃ C∗(∗_{i∈I} Gi).
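For instance (an easy check, recorded only as an illustration), for G1 = G2 = Z both (4.11) and (4.12) reduce to the classical identification

C(𝕋) ⊗ C(𝕋) ≃ C(𝕋 × 𝕋),

where 𝕋 denotes the unit circle: since Z is amenable we have C∗(Z) ≃ Cλ∗(Z) ≃ C(𝕋) via the Fourier transform, and the C∗-norm on the algebraic tensor product is unique by commutativity.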

The next result (essentially from [208, p.150]) illustrates the usefulness of the Fell principle (see Proposition 3.15).

Theorem 4.11 Let Φ_G : Cλ∗(G) ⊗max Cλ∗(G) → MG ⊗max MG be the extension of the natural inclusion Cλ∗(G) ⊗ Cλ∗(G) ⊂ MG ⊗ MG. We have an isometric ∗-homomorphism JG : C∗(G) → Cλ∗(G) ⊗max Cλ∗(G) taking UG(t) to λG(t) ⊗ λG(t) (t ∈ G), and a completely contractive c.p. mapping PG : MG ⊗max MG → C∗(G) such that IdC∗(G) = PG Φ_G JG, as in the diagram

IdC∗(G) : C∗(G) --JG--> Cλ∗(G) ⊗max Cλ∗(G) --Φ_G--> MG ⊗max MG --PG--> C∗(G).

Moreover, for all a, b ∈ MG, such that a(δe) = Σ_{t∈G} a(t) δt and b(δe) = Σ_{t∈G} b(t) δt, we have (absolutely convergent series)

PG(a ⊗ b) = Σ_{t∈G} a(t) b(t) UG(t).

We can express the preceding as a commuting diagram:

    Cλ∗(G) ⊗max Cλ∗(G) --Φ_G--> MG ⊗max MG
        ↑ JG                        ↓ PG
        C∗(G) ------IdC∗(G)------> C∗(G)

Proof Let x ∈ MG ⊗ MG (algebraic tensor product). For s ∈ G let fs ∈ MG∗ be the natural linear form defined by fs(a) = ⟨δs, a δe⟩, so that a = Σ fs(a) λG(s) (convergence in L2(τG)) for any a ∈ MG. Clearly x(s,t) = (fs ⊗ ft)(x) is well defined. Note that (a ⊗ b)(s,t) = fs(a) ft(b) = a(s) b(t) and (Σ_s |fs(a)|²)^{1/2} ≤ ‖a‖_{MG} (a, b ∈ MG). Thus (Cauchy–Schwarz) Σ_t |(a ⊗ b)(t,t)| ≤ ‖a‖_{MG} ‖b‖_{MG}. This shows that Σ_t |x(t,t)| < ∞ for any x ∈ MG ⊗ MG.


We will show the following claim:

∀x ∈ MG ⊗ MG   ‖Σ_t x(t,t) UG(t)‖_{C∗(G)} ≤ ‖x‖_{MG⊗max MG}.   (4.14)

Then we set PG(x) = Σ_t x(t,t) UG(t). This implies the result. Indeed, in the converse direction we have obviously by maximality

‖Σ x(t,t) λG(t) ⊗ λG(t)‖max ≤ ‖Σ x(t,t) UG(t) ⊗ UG(t)‖max ≤ ‖Σ x(t,t) UG(t)‖.

Therefore (4.14) implies at the same time that the map JG (and also Φ_G JG) defines an isometric ∗-homomorphism and that PG Φ_G is a contractive map onto C∗(G). The proof of the claim will actually show that PG is c.p. Incidentally, JG PG is a “conditional expectation” onto JG(C∗(G)) ⊂ Cλ∗(G) ⊗max Cλ∗(G), in the sense of Theorem 1.45.
We now prove the claim. Let π : G → B(H) be a unitary representation of G. We introduce a pair of commuting representations (π1, π2) as follows:

π1(λG(t)) = λG(t) ⊗ π(t)   and   π2(λG(t)) = ρG(t) ⊗ I.

Note that both π1 and π2 extend to normal isometric representations of MG. For π1 this follows from the Fell absorption principle. For π2, it follows from the fact that ρG ≃ λG (indeed if W : ℓ2(G) → ℓ2(G) is the unitary taking δt to δ_{t⁻¹}, then W λG(·) W∗ = ρG(·)). Since π1 and π2 have commuting ranges, we have (recall the notation (4.5))

‖(π1 · π2)(x)‖_{B(ℓ2(G)⊗2 H)} ≤ ‖x‖_{MG⊗max MG},   (4.15)

and hence if we restrict the left-hand side to K = δe ⊗ H ⊂ ℓ2(G) ⊗2 H, we obtain (note that ⟨δe, λG(s) ρG(t) δe⟩ = 1 if s = t and zero otherwise) that PK (π1 · π2)(x)|K may be identified with Σ_t x(t,t) π(t), and hence

‖Σ_t x(t,t) π(t)‖_{B(H)} ≤ ‖x‖_{MG⊗max MG}.   (4.16)

Finally, taking the supremum over π, we obtain the announced claim (4.14). This argument shows that PG is c.p. and ‖PG‖cb ≤ 1.

Remark 4.12 By (4.16) applied when π is the trivial representation, we have for any x ∈ MG ⊗ MG

|Σ x(t,t)| ≤ ‖x‖_{MG⊗max MG}.

Note that since the matrix of λG(t) has real entries (equal to 0 or 1) we have λ̄G(t) = λG(t). Therefore the correspondence Σ f(t) λG(t) → Σ f̄(t) λG(t) is an


isomorphism from Cλ∗(G) to Cλ∗(G), that extends to an isomorphism from MG to MG. Thus we also have (say assuming (s,t) → x(s,t) finitely supported)

|Σ x(t,t)| ≤ ‖Σ_{s,t} x(s,t) λG(s) ⊗ λG(t)‖_{MG⊗max MG} ≤ ‖Σ_{s,t} x(s,t) λG(s) ⊗ λG(t)‖_{Cλ∗(G)⊗max Cλ∗(G)}.   (4.17)

Remark 4.13 It will be convenient to record here the following fact similar to (4.16) (when π is the trivial representation). Consider x ∈ C[G] ⊗ C[G]. Then, via the maps C∗(G) → Cλ∗(G) ⊂ MG, x determines an element xU ∈ C∗(G) ⊗max C∗(G), an element xλ ∈ Cλ∗(G) ⊗max Cλ∗(G), and lastly xm ∈ MG ⊗max MG (we will not use this notation later on). Since λG and ρG obviously extend to representations on MG with commuting ranges, we have

‖(λG · ρG)(xm)‖_{B(ℓ2(G))} ≤ ‖xm‖_{MG⊗max MG} ≤ ‖xλ‖_{Cλ∗(G)⊗max Cλ∗(G)} ≤ ‖xU‖_{C∗(G)⊗max C∗(G)}.   (4.18)

As earlier, for any x ∈ MG ⊗ MG, let x(s,t) = ⟨δs ⊗ δt, x(δe ⊗ δe)⟩ (s,t ∈ G). Since Σ x(t,t) = ⟨δe, (λG · ρG)(x) δe⟩, we have

|Σ_{t∈G} x(t,t)| ≤ ‖(λG · ρG)(x)‖_{B(ℓ2(G))}.   (4.19)

4.4 A brief repertoire of examples from group C∗-algebras

It is often hard to calculate norms of operators, and hence also of tensors in C∗-tensor products. The group case provides us with many instances where there are nice formulae. For convenience we recapitulate them here.

Proposition 4.14 Let G be a discrete group, and π : G → B(H) a unitary representation. Let f : G → C be any finitely supported function, then

‖Σ f(t) UG(t) ⊗ UG(t)‖_{C∗(G)⊗min C∗(G)} = ‖Σ f(t) UG(t) ⊗ UG(t)‖_{C∗(G)⊗max C∗(G)} = ‖Σ f(t) UG(t)‖.   (4.20)

∀f ≥ 0   ‖Σ f(t) UG(t)‖ = Σ f(t).   (4.21)

‖Σ f(t) UG(t) ⊗ λG(t)‖_{C∗(G)⊗min Cλ∗(G)} = ‖Σ f(t) λG(t)‖_{Cλ∗(G)}.   (4.22)

‖Σ f(t) π(t) ⊗ λG(t)‖_{Cπ∗(G)⊗min Cλ∗(G)} = ‖Σ f(t) λG(t)‖_{Cλ∗(G)}.   (4.23)

‖Σ f(t) λG(t) ⊗ λG(t)‖_{Cλ∗(G)⊗max Cλ∗(G)} ≥ |Σ f(t)|.   (4.24)

‖Σ f(t) λG(t) ⊗ λG(t)‖_{Cλ∗(G)⊗max MG} ≥ |Σ f(t)|.   (4.25)

∀f ≥ 0   ‖Σ f(t) λG(t) ⊗ λG(t)‖_{Cλ∗(G)⊗max Cλ∗(G)} = ‖Σ f(t) λG(t) ⊗ λG(t)‖_{MG⊗max MG} = Σ f(t).   (4.26)

∀f ≥ 0   ‖Σ f(t) UG(t) ⊗ λG(t)‖_{C∗(G)⊗max Cλ∗(G)} = ‖Σ f(t) UG(t) ⊗ λG(t)‖_{C∗(G)⊗max MG} = Σ f(t).   (4.27)

Proof (4.20): It is easy to show that UG ⊗ UG dominates UG since UG contains the trivial representation, and the converse is obvious by maximality.
(4.21): Indeed, ≥ Σ f(t) holds because of the presence of the trivial representation in UG, and ≤ Σ |f(t)| follows from the triangle inequality.
Both (4.22) and (4.23) follow from Fell’s principle (see (3.13)).
Let x (resp. y) be the (common) left-hand side of (4.24) and (4.26) (resp. of (4.27)). Then

y ≥ x ≥ ‖Σ f(t) λG(t) ρG(t)‖ ≥ ‖Σ f(t) λG(t) ρG(t) δe‖ = |Σ f(t)|,

where at the last step we use λG(t) ρG(t) δe = δe. When f ≥ 0 the triangle inequality gives the converse. The same argument is valid for (4.25) and for the terms involving MG in (4.26) and (4.27) since λG and ρG obviously extend to mutually commuting ∗-homomorphisms on MG.
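As a concrete check of (4.21) (a simple illustration, not needed in the sequel), take G = Z, so that C∗(Z) ≃ C(𝕋) with UZ(n) corresponding to the function z → zⁿ. For f ≥ 0 finitely supported,

‖Σ f(n) UZ(n)‖ = sup_{|z|=1} |Σ f(n) zⁿ| = Σ f(n),

the supremum being attained at z = 1 (the trivial character), in accordance with the role played by the trivial representation in the proof above.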

4.5 States on the maximal tensor product By definition, a state on a C ∗ -algebra A is a positive linear form ϕ of unit norm. If A is unital, a functional ϕ ∈ A∗ is a state if and only if ϕ = ϕ(1) = 1 (see Remark 1.33). If A is not unital, a positive functional ϕ on A is a state if and only if ϕ(xi ) → 1 when (xi ) is any fixed approximate unit in A as in (1.22). Indeed, this follows from Remark 1.24 and (1.22). When this holds we say that ϕ is approximately unital. Remark 4.15 Let  α be any C ∗ -norm on the algebraic tensor product A1 ⊗A2 of two C ∗ -algebras. The set of elements of the form {x ∗ x | x ∈ A1 ⊗ A2 } is clearly α-dense in the set {x ∗ x | x ∈ A1 ⊗α A2 }, or equivalently α-dense


in (A1 ⊗α A2 )+ . This shows that when ϕ ∈ (A1 ⊗α A2 )∗ , if ϕ is positive on A1 ⊗ A2 then it is positive on A1 ⊗α A2 . If both algebras are unital then 1 ⊗ 1 ∈ A1 ⊗ A2 , and hence, by (1.22) and Remark 1.24, the set of states on A1 ⊗α A2 is simply formed of the set of positive unital functionals on A1 ⊗ A2 that are α-continuous. In the nonunital case, if xi ≥ 0 (resp. yj ≥ 0) is any approximate unit in BA1 (resp. BA2 ) as in (1.22), then xi ⊗ yj is also one in the unit ball of A1 ⊗α A2 . Thus, in any case, the set of states on A1 ⊗α A2 is simply formed of the set of approximately unital positive functionals on the algebraic tensor product that happen to be continuous for the norm  α . The following statement describes the states on A1 ⊗max A2 . Not surprisingly this is the largest possibility: the set of all normalized positive functionals on A1 ⊗ A2 . Theorem 4.16 Let ϕ : A1 ⊗ A2 → C be a linear form and let uϕ : A1 → A∗2 be the corresponding linear map defined by uϕ (a1 )(a2 ) = ϕ(a1 ⊗ a2 ). The following are equivalent: (i) ϕ extends to a positive linear form in (A1 ⊗max A2 )∗ . (ii) uϕ : A1 → A∗2 is completely positive in the following sense:  uϕ (xij )(yij ) ≥ 0 ∀n ∀x ∈ Mn (A1 )+ ∀y ∈ Mn (A2 )+ . i,j

(4.28)

(iii) ϕ is a positive linear form on A1 ⊗ A2, in the sense that ϕ(t∗t) ≥ 0 for any t ∈ A1 ⊗ A2.
When this holds

‖ϕ‖_{(A1⊗max A2)∗} = ‖uϕ‖.   (4.29)

Thus ‖ϕ‖_{(A1⊗max A2)∗} = 1 (i.e. ϕ is a state on A1 ⊗max A2) if and only if ‖uϕ‖ = 1.
Proof Assume (i) with ϕ of norm 1. By the GNS construction (see §A.13), there are a representation π : A1 ⊗max A2 → B(H) and ξ in the unit ball of H such that ϕ(·) = ⟨ξ, π(·)ξ⟩. We may assume that π = π1 · π2 as in (4.4). Let x, y be as in (ii). Let z = y^{1/2}, so that yij = Σ_k z∗_{ki} z_{kj} (note z_{ik} = z∗_{ki}). We claim that the matrix [π1(xij) π2(yij)] is positive. Indeed, for each fixed k the matrix [a^k_{ij}] defined by a^k_{ij} = π2(z_{ki})∗ π1(xij) π2(z_{kj}) can be rewritten as a product C∗_k [π1(xij)] C_k, showing that it is positive, and since π1, π2 have commuting ranges we have

[π1(xij) π2(yij)] = [Σ_k π2(z_{ki})∗ π1(xij) π2(z_{kj})] = Σ_k [a^k_{ij}].

This proves our claim. Let  ξ ∈ H ⊕ · · · ⊕ H (n times) be defined by  ξ = ξ ⊕ · · · ⊕ ξ . Then we have  ξ,[π1 (xij )π2 (yij )] ξ  ≥ 0, uϕ (xij )(yij ) =  which (by homogeneity) shows that (i) ⇒ (ii).   Assume (ii). Consider t = aj ⊗ bj in A1 ⊗ A2 . Then t ∗ t = i,j ai∗ aj ⊗ bi∗ bj hence by (ii)  ϕ(t ∗ t) = uϕ (ai∗ aj )(bi∗ bj ) ≥ 0, i,j
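For example (an easy verification which we only sketch), if ϕ1 and ϕ2 are states on A1 and A2, the product functional ϕ = ϕ1 ⊗ ϕ2 satisfies (ii): for x ∈ Mn(A1)+ and y ∈ Mn(A2)+ the scalar matrices [ϕ1(xij)] and [ϕ2(yij)] are positive semidefinite, and

Σ_{i,j} uϕ(xij)(yij) = Σ_{i,j} ϕ1(xij) ϕ2(yij) ≥ 0,

because the entrywise (Schur) product of two positive semidefinite matrices is positive semidefinite. Hence every product state extends to a state on A1 ⊗max A2, consistent with Theorem 4.16.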

which shows that (iii) holds. Assume (iii). By the GNS construction applied to A1 ⊗ A2 , there are a ∗-homomorphism π : A1 ⊗A2 → B(H ) and ξ in H such that ϕ(·) = ξ,π(·)ξ . But any ∗-homomorphism π : A1 ⊗ A2 → B(H ) extends to one on the whole of A1 ⊗max A2 . Thus ϕ also extends to A1 ⊗max A2 and satisfies (i). Moreover, if we assume A1,A2 and π unital we have ϕ∗max = ϕ(1 ⊗ 1) = uϕ (1)(1) ≤ uϕ , but also |uϕ (a1 )(a2 )| ≤ ϕ∗max a1 ⊗ a2 max = ϕ∗max for any unit vectors a1,a2 , and hence ϕ∗max = uϕ . In the nonunital case it is easy to modify this argument using an approximate unit (we leave the details to the reader). Remark 4.17 Let ϕ be as in (4.28). Consider C ∗ -algebras Bj and uj ∈ CP (Bj ,Aj ) (j = 1,2). Then the composition (u2 )∗ uϕ u1 is obviously c.p. from B1 to B2∗ in the sense of (4.28). By definition of the maximal tensor product, the following inequality (4.30) is clear when u1,u2 are ∗-homomorphisms, but we will crucially use the following more general fact. Corollary 4.18 Let ui : Ai → Bi (i = 1,2) be c.p. maps between C ∗ -algebras. Then the linear map u1 ⊗ u2 : A1 ⊗ A2 → B1 ⊗ B2 extends to a c.p. map from A1 ⊗max A2 to B1 ⊗max B2 and ∀x ∈ A1 ⊗ A2

‖(u1 ⊗ u2)(x)‖_{B1⊗max B2} ≤ ‖u1‖ ‖u2‖ ‖x‖_{A1⊗max A2}.   (4.30)

Proof We may assume ui  ≤ 1. Consider any ϕ in the unit ball of (B1 ⊗max B2 )∗+ , define ψ by ψ(t) = ϕ((u1 ⊗ u2 )(t)) (t ∈ A1 ⊗ A2 ). We claim that ψ ∈ (A1 ⊗max A2 )∗+ . Indeed, uψ : A1 → A∗2 is given by uψ = (u2 )∗ uϕ u1 , and the latter map is c.p. by Remark 4.17. By Theorem 4.16 ψ ∈ (A1 ⊗max A2 )∗+ and ψ(A1 ⊗max A2 )∗ = uψ  ≤ uϕ u1 u2  ≤ 1. Note (u1 ⊗ u2 )(t)max = sup{|ϕ((u1 ⊗ u2 )(t))| | ϕ(B1 ⊗max B2 )∗ ≤ 1}. Since any element ϕ in the unit ball of the dual of a C ∗ -algebra such as (B1 ⊗max B2 )∗ decomposes as a linear combination of positive elements


ϕ = ϕ1 − ϕ2 + i(ϕ3 − ϕ4 ) all in the unit ball, it is clear that u1 ⊗ u2 must be bounded (say by 4) from (A1 ⊗ A2, max ) to (B1 ⊗ B2, max ) and hence extends to a bounded map u : A1 ⊗max A2 → B1 ⊗max B2 . To complete the proof we show that the latter extension u is positive, i.e. that t ∈ (A1 ⊗max A2 )+ ⇒ u(t) ∈ (B1 ⊗max B2 )+ . Indeed, it suffices to check that ϕ(u(t)) ≥ 0 for all ϕ in (B1 ⊗max B2 )∗+ , but with the preceding notation we have ϕ(u(t)) = ψ(t) with ψ ∈ (A1 ⊗max A2 )∗+ , so the positivity of u is clear. Replacing A1 by Mn (A1 ), we obtain its complete positivity. By (1.20), if A1,A2 are unital we have u1 ⊗ u2 : A1 ⊗max A2 → B1 ⊗max B2  ≤ (u1 ⊗ u2 )(1 ⊗ 1) = u1 (1) · u2 (1) ≤ u1  · u2 . In the nonunital case, we obtain the same conclusion using approximate units. Corollary 4.19 Let ui : Ai → B(H ) (i = 1,2) be c.p. maps with commuting ranges. Then the linear map u1 .u2 : A1 ⊗ A2 → B(H ) defined by (u1 .u2 )(x1 ⊗x2 ) = u1 (x1 )u2 (x2 ) extends to a c.p. map from A1 ⊗max A2 to B(H ) with norm ≤ u1 u2 . Proof Let Bi ⊂ B(H ) be the C ∗ -subalgebra generated by ui (Ai ). Since the ui’s are self-adjoint so are their ranges. Therefore B1 and B2 mutually commute. Let π : B1 ⊗max B2 → B(H ) be the ∗-homomorphism defined by π(b1 ⊗ b2 ) = b1 b2 . Since u1 · u2 = π(u1 ⊗ u2 ), by the preceding corollary u1 · u2 is a composition of c.p. maps with u1 · u2  ≤ π u1 ⊗ u2  ≤ u1 u2 .

4.6 States on the minimal tensor product We clearly have a surjective ∗-homomorphism Q : A1 ⊗max A2 → A1 ⊗min A2 . Let I = ker(Q) so that A1 ⊗min A2 = (A1 ⊗max A2 )/I. Therefore (A1 ⊗min A2 )∗ = I ⊥ ⊂ (A1 ⊗max A2 )∗ . Note that by (1.3) and (1.10) we have a natural canonical inclusion A∗1 ⊗ A∗2 ⊂ (A1 ⊗min A2 )∗ .


By Corollary 1.15, we have ker(Q) = {t ∈ A1 ⊗max A2 | t,ξ1 ⊗ ξ2  = 0 ∀ξ1 ∈ A∗1,ξ2 ∈ A∗2 }.

(4.31)

Lemma 4.20 (A1 ⊗min A2 )∗ = A∗1 ⊗ A∗2 where the closure is with respect to pointwise convergence on A1 ⊗min A2 . Moreover, when viewed as a subset of (A1 ⊗max A2 )∗ , the set (A1 ⊗min A2 )∗ is the closure of A∗1 ⊗ A∗2 with respect to pointwise convergence on A1 ⊗max A2 . Proof By Hahn–Banach, to check the second assertion it suffices to show that any t ∈ A1 ⊗max A2 that vanishes on A∗1 ⊗ A∗2 belongs to I = ker(Q), i.e. satisfies Q(t) = 0. But by (4.31) this is clear. The first assertion is then clear. Let Aj ⊂ B(Hj ) be C ∗ -algebras. Then, by definition, A1 ⊗min A2 ⊂ B(H1 ⊗2 H2 ) isometrically. Therefore any state on A1 ⊗min A2 is the restriction of a state on B(H1 ⊗2 H2 ). Let H = H1 ⊗2 H2 . We should first recall how to tackle states on B(H ). First note that the unit ball of B(H )∗ is the weak* closure of the convex hull of the elements which come from rank one operators on H (see §A.10), i.e. the functionals of the form ϕξ,η (T ) = ξ,T η for some ξ,η ∈ BH . Similarly the positive part of the unit ball of B(H )∗ is the weak* closure of the convex hull of the set of functionals of the form ϕξ,ξ for some ξ ∈ BH . Lastly, the set of states on B(H ) is the weak* closure of the convex hull of {ϕξ,ξ | ξ  = 1}. Moreover, if F ⊂ H is a dense linear subspace, the latter set is the same as the weak* closure of the convex hull of {ϕξ,ξ | ξ  = 1,ξ ∈ F }. Let A ⊂ B(H ) be a C ∗ -algebra. We will say that a map u ∈ CP(A,Mn ) is obtained by compression if there is an isometry V : n2 → H such that u(x) = V ∗ xV ∈ B(n2 )  Mn for any x ∈ A. We will say that a linear form ϕ on A1 ⊗ A2 comes from a matricial state (resp. obtained by compression) if there are integers n(j ), maps uj ∈ CP(Aj ,Mn(j ) ) with uj  ≤ 1 (j = 1,2) (resp. both obtained by compression) and a state ψ on Mn(1) ⊗min Mn(2) such that ∀x ∈ A1 ⊗ A2

ϕ(x) = ψ((u1 ⊗ u2 )(x)).

(4.32)

Note that the notion of map u ∈ CP(A,Mn ) or of state “obtained by compression” depends on the embedding A ⊂ B(H ), while that of state coming from a matricial state does not. We can now refine the description of the states on A1 ⊗min A2 given in the preceding lemma.


Theorem 4.21 Let Aj ⊂ B(Hj ) be C ∗ -algebras. Let ϕ : A1 ⊗ A2 → C be a linear form and let uϕ : A1 → A∗2 be the corresponding linear map. We first assume A1,A2 unital and ϕ(1 ⊗ 1) = 1. The following are equivalent: (i) The functional ϕ extends to a state on A1 ⊗min A2 . (i)’ The functional ϕ satisfies |ϕ(x)| ≤ xmin for any x ∈ A1 ⊗ A2 . (ii) The functional ϕ is the pointwise limit on A1 ⊗ A2 of a net of functionals that come from matricial states obtained by compression. (iii) The functional ϕ is the pointwise limit on A1 ⊗ A2 of a net of functionals that come from matricial states. (iv) The map uϕ : A1 → A∗2 is the pointwise limit with respect to the weak* topology on A∗2 of a net of finite rank c.p. maps of unit norm from A1 to A∗2 . Moreover, in the nonunital case, if we replace our assumption on ϕ by sup{ϕ(x ⊗ y) | 0 ≤ x,0 ≤ y,x < 1,y < 1} = 1, the equivalence still holds. Proof By our normalization assumption, (i) and (i)’ are equivalent (see the discussion at the beginning of §4.5). Assume (i) and Aj ⊂ B(Hj ), j = 1,2. Let H = H1 ⊗2 H2 . Then ϕ is the restriction of a state on B(H1 ⊗2 H2 ). By the remarks before Theorem 4.21, ϕ is the pointwise limit of states on B(H ) of the form N λk ϕξk ,ξk (T ) (4.33) T → N

1

where λk > 0, 1 λk = 1 and the ξk’s are unit vectors in H . We may assume (by density) that in each case there are finite-dimensional subspaces Kj ⊂ Hj such that ξk ∈ K1 ⊗2 K2 for all k ≤ N . Then the resulting states come from matricial states. Indeed, letting uj (x) = PKj x|Kj ∈ B(Kj ), and denoting by ψk the state on B(K1 ) ⊗min B(K2 ) = B(K1 ⊗2 K2 ) defined by ϕξk ,ξk , we may  write ϕξk ,ξk (T ) = ψk ((u1 ⊗ u2 )(T )). Then the state λk ϕξk ,ξk satisfies (4.32)  with n(j ) = dim(Kj ) and ψ = λk ψk . This proves (i) ⇒ (ii) and (ii) ⇒ (iii) is trivial. Note that (4.32) implies uϕ = (u2 )∗ uψ u1 , and hence by Remark 4.17 uϕ is a c.p. map of finite rank when ϕ comes from matricial states. Moreover, uϕ  = ϕ(1 ⊗ 1) by (4.29). Assume (iii). Let ϕi be the pointwise approximating functionals coming from matricial states. Since we can replace them by ϕi /ϕi (1 ⊗ 1) and ϕi (1 ⊗ 1) → 1 we obtain (iv). Assume (iv). Then by Theorem 4.16, ϕ is the pointwise limit on the set {a1 ⊗ a2 | aj ∈ Aj } (or equivalently, by linearity, on A1 ⊗ A2 ) of a net ϕi ∈ A∗1 ⊗ A∗2 , formed of states on A1 ⊗max A2 . By density, since this net is


equicontinuous on A1 ⊗max A2 , we have pointwise convergence on the whole of A1 ⊗max A2 , and hence by Lemma 4.20 we obtain (i). Remark 4.22 Consider a C ∗ -algebra A ⊂ B(H ). Let H = H ⊕ H ⊕ · · · . Then π : a → a ⊕ a ⊕ · · · is an embedding of A in B(H). We will say that any such embedding A ⊂ B(H) has infinite multiplicity. When A ⊂ B(H)  has infinite multiplicity, any state ϕ on A of the form a → λk ξk ,aξk  (with unit vectors ξk ∈ H ) can be rewritten as a vector state on B(H). More 1/2 precisely, if we define ξ  = (λk ξk ) then ξ  is a unit vector in H and we have ϕ(T ) = ξ ,π(T )ξ  . Thus, any state on A is a pointwise limit of vector states (relative to H). We will need an obvious generalization of this trick for A = A1 ⊗min A2 with H = H1 ⊗2 H2 and Aj ⊂ B(Hj ). Let Hj = Hj ⊕ Hj ⊕ · · · and πj : B(Hj ) → B(Hj ) be again such that πj (a) = a ⊕ a ⊕ · · · . For notational simplicity we give ourselves fixed orthonormal bases in H1 and H2 which allow us to define unambiguously the transpose t a of a ∈ B(Hj ) simply as the operator associated to the transposed matrix. We then define t π (a) = π (t a) = t a ⊕ t a ⊕ · · · for any a ∈ B(H ). j j j Thus we obtain: Proposition 4.23 Let ϕ be a state on A1 ⊗min A2 . With the preceding notation, there is a net of finite rank operators zi : H2 → H1 with Hilbert–Schmidt norm 1, i.e. tr(zi∗ zi ) = 1 such that ∀(a,b) ∈ A1 × A2

ϕ(a ⊗ b) = lim tr(zi∗ π1 (a)zi t π2 (b)).

Proof The state ϕ is the limit of states of the form (4.33). Each state ϕξk ,ξk can be written as described in (2.5) (with unit vectors ξ = η). Thus it is easy to complete the proof using the same idea as in Remark 4.22 (with zi acting diagonally). Remark 4.24 The preceding proposition shows that a state ϕ on A1 ⊗min A2 is the pointwise limit of states that come from matricial vector states (that is for which ψ in (4.32) is a vector state on Mn ⊗min Mm  B(n2 ⊗2 m 2 )). Actually, we will more often use the following variant for A1 ⊗min A2 (see also Proposition 2.11): Proposition 4.25 In the preceding situation, for any state ϕ on A1 ⊗min A2 there is a net of finite rank operators zi : H2 → H1 with tr(zi∗ zi ) = 1 such that ∀(a,b) ∈ A1 × A2

ϕ(a ⊗ b̄) = lim tr(zi∗ π1(a) zi π2(b)∗).

Similarly, for any state ϕ on A1 ⊗min A2 there is a net of finite rank operators hi : H2 → H1 with tr(h∗i hi ) = 1 such that


ϕ(ā ⊗ b) = lim tr(hi∗ π1(a)∗ hi π2(b)).
Proof The first part is just a rewriting of the preceding proposition with (2.6) in place of (2.5). For the second part we observe that b ⊗ a → ϕ(a ⊗ b) defines a state on A2 ⊗min A1.
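To illustrate these formulae in the simplest finite-dimensional case (a side remark, easy to check), let A1 = A2 = Mn acting on H1 = H2 = ℓ2^n and let ξ = n^{-1/2} Σ_i ei ⊗ ei. The corresponding vector state ϕ on Mn ⊗min Mn satisfies

ϕ(a ⊗ b) = ⟨ξ, (a ⊗ b)ξ⟩ = (1/n) Σ_{i,j} a_{ij} b_{ij} = tr(z∗ a z ᵗb)   with z = n^{-1/2} I,

so that here the net in Proposition 4.23 can be taken constant, with a single operator z of Hilbert–Schmidt norm 1 (and no infinite multiplicity is needed).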

4.7 Tensor product with a quotient C ∗ -algebra We will need the following basic fact on the behavior of C ∗ -tensor products with respect to quotient C ∗ -algebras. We will return to this topic specifically for the minimal tensor product more extensively in §7.5. Lemma 4.26 Let A,B be C ∗ -algebras and let I ⊂ A be a closed ideal so that A/I is a C ∗ -algebra. Let  α be any C ∗ -norm on A ⊗ B. Let E ⊂ B be an arbitrary subspace. We denote by A⊗α E (resp. I⊗α E) the closure of A ⊗ E (resp. I⊗E) in A ⊗α B. Let Qα [E] =

(A ⊗α E) / (I ⊗α E).

Then, if E ⊂ F ⊂ B are arbitrary subspaces, we have a natural isometric embedding Qα [E] ⊂ Qα [F ]. Proof The proof uses the classical fact that the ideal I has a two-sided approximate unit formed of elements ai with 0 ≤ ai and ai  ≤ 1 (see §A.15). Let Ti : A⊗α F → I⊗α F be the operator defined by Ti (x ⊗ y) = ai x ⊗ y. If B is unital, this is just the left multiplication by ai ⊗ 1. Note that Ti  ≤ 1 and I − Ti  ≤ 1. Moreover, Ti (ϕ) → ϕ for any ϕ in I ⊗ F . Let us denote by d(·,·) the distance in the norm of A ⊗α B. Note that by density we have for any x ∈ A ⊗ B d(x,I⊗α F ) = d(x,I ⊗ F ), and similarly for E. We claim that for any x ∈ A ⊗ F d(x,I⊗α F ) = lim supi (1 − Ti )(x). Let y ∈ I ⊗ F . Since 1 − Ti  ≤ 1 we have x − Ti xα ≤ (I − Ti )(x − y)α + (I − Ti )(y)α ≤ x − yα + (I − Ti )(y)α , and (I − Ti )(y)α → 0 for any y ∈ I ⊗ F . Thus we obtain lim supi x − Ti xα ≤ x − yα and hence lim supi x − Ti xα ≤ d(x,I⊗α F ). Since


Ti (x) ∈ I ⊗ F , we have lim infi x − Ti xα ≥ d(x,I⊗α F ) which proves the claim (and the convergence of x − Ti xα ). Now to show that for any x in A ⊗ E we have d(x,I ⊗ F ) = d(x,I ⊗ E), it suffices to observe that Ti x ∈ I ⊗ E for any x ∈ A ⊗ E. This gives us d(x,I ⊗ E) ≤ lim infi x − Ti xα = d(x,I ⊗ F ) and the converse is obvious since I ⊗ E ⊂ I ⊗ F . The following simple fact will be invoked several times. Lemma 4.27 Let A,B be C ∗ -algebras and let I ⊂ A be a closed (two-sided, self-adjoint) ideal. Let  α be any C ∗ -norm on A ⊗ B. Then α

(A ⊗ B) ∩ I⊗B = I ⊗ B. Moreover, if we denote by Q : A ⊗ B → (A ⊗ B)/(I ⊗ B) the quotient map, then for any t ∈ (A ⊗ B)/(I ⊗ B) we have tA⊗α B |  t ∈ A ⊗ B, Q( t) = t}. (4.34) t(A⊗α B)/I ⊗B α = inf{ n Proof Let t = 1 ak ⊗ bk ∈ A ⊗ B. Let (xi ) be a (bounded) approximate unit of I and (yj ) one for B in the sense of §A.15. Clearly for any z ∈ I ⊗ B we have z − z(xi ⊗ yj )α → 0, and by equicontinuity this remains true for any   α α z ∈ I ⊗ B . Therefore, if t ∈ I ⊗ B then  nk=1 ak ⊗ bk − n1 ak xi ⊗ n bk yj α → 0 and hence also (since bk yj − bk  → 0)  k=1 (ak − ak xi ) ⊗ bk α → 0. We may assume the bk is linearly independent. Then it follows that ak xi → ak and hence ak ∈ I¯ = I for any k. This proves (A ⊗ B) ∩ (I⊗α B) ⊂ I ⊗ B and the converse is trivial. Let s ∈ A ⊗ B be a representative of t modulo I ⊗ B. Then α t(A⊗α B)/I ⊗B α = inf{s + ηα | η ∈ I⊗B } and by density this is = inf{s + ηα | η ∈ I⊗B}, which is the same as (4.34).

4.8 Notes and remarks The study of tensor products of C ∗ -algebras was initiated in the 1950s by Turumaru in Japan, and continued by Takesaki [238] and Guichardet [100] (see also [101]). Much work was then done on nuclear C ∗ -algebras, (to which we return in §10.2). In the process, this clarified what we know on C ∗ -tensor products. For instance, Lance’s paper [165] contains a lot of information on the latter, in particular (4.11) and (4.12), and the fact that Cλ∗ (G) is nuclear if and only if G is amenable. Lance [165] showed that the CPAP implies nuclearity.


Choi–Effros and Kirchberg [45, 154] independently proved the converse (see §10.2). In some variant, Theorem 4.11 appears in [208, p. 150]). As an example of application this shows that exactness is not stable by the max-tensor product (see Remark 10.5), which was left open in Kirchberg’s early works on exactness (see [156, p. 75 (P3)]). Lance [165] found the description of the states of the maximal tensor product that appears in §4.5. The corresponding result for states on the minimal one in §4.6 is used in Kirchberg’s work [155] but had probably long been known to experts. Section 4.7 is probably well known, but our treatment is influenced by Arveson’s [13].

5 Multiplicative domains of c.p. maps

When dealing with a contraction u : A → B between C ∗ -algebras it is often interesting to identify the largest C ∗ -subalgebra of A on which u behaves as a ∗-homomorphism. For c.p. maps there is a useful description of the largest such C ∗ -subalgebra, called the multiplicative domain of u.

5.1 Multiplicative domains The unreasonable effectiveness of completely positive contractions in C ∗ -algebra theory is partially elucidated by the next statement. Theorem 5.1 Let u : A → B be a c.p. map between C ∗ -algebras with u ≤ 1. (i) Then if a ∈ A satisfies u(a ∗ a) = u(a)∗ u(a), we have necessarily u(xa) = u(x)u(a),∀x ∈ A and the set of such a’s forms an algebra. (ii) Let Du = {a ∈ A | u(a ∗ a) = u(a)∗ u(a) and u(aa∗ ) = u(a)u(a)∗ }. Then Du is a C ∗ -subalgebra of A (called the multiplicative domain of u) and u|Du is a ∗-homomorphism. Moreover, we have ∀a,b ∈ Du ∀x ∈ A

u(ax) = u(a)u(x), u(xb) = u(x)u(b)

and u(axb) = u(a)u(x)u(b).

(5.1)
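Before the proof, here is a minimal example. Let u : M2 → M2 be the (unital, c.p.) compression onto the diagonal, u(x) = e11 x e11 + e22 x e22. For a = [aij] one checks that

u(a∗a) = u(a)∗u(a) and u(aa∗) = u(a)u(a)∗  ⟺  a12 = a21 = 0,

so that Du is exactly the subalgebra of diagonal matrices, on which u restricts (as it must, by the theorem) to a ∗-homomorphism, namely the identity.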

Proof First recall a classical inequality for x ∈ L2(m) when m is a probability measure:

∫ |x|² dm ≥ |∫ x dm|².


We will show that u satisfies a similar Cauchy–Schwarz inequality (first used by Choi for 2-positive maps, but see also [143] for earlier similar results for positive ones), as follows. ∀x ∈ A

u(x ∗ x) ≥ u(x)∗ u(x).

(5.2)

This is easy for c.p. maps. Indeed, by Theorem 1.22, we can write u as u(·) = V∗ π(·) V for some representation π : A → B(Ĥ) and V : H → Ĥ, with B ⊂ B(H). Then we have for all T with 0 ≤ T ≤ 1

u(x∗x) = V∗ π(x)∗ π(x) V ≥ V∗ π(x)∗ T π(x) V,

and hence choosing T = V V∗ we obtain (5.2). This implies that the “defect” ϕ(x,y) = u(x∗y) − u(x)∗u(y) behaves like a B-valued scalar product. In particular, we clearly have (by Cauchy–Schwarz)

|⟨ξ, ϕ(x,y)ξ⟩| ≤ ⟨ξ, ϕ(x,x)ξ⟩^{1/2} ⟨ξ, ϕ(y,y)ξ⟩^{1/2}   ∀ξ ∈ H, ∀x,y ∈ A.

This shows (taking y = a) that if ϕ(a,a) = 0 we have ξ,ϕ(x,a)ξ  = 0 for all ξ , hence ϕ(x,a) = 0 for all x in A. Changing x to x ∗ (and recalling that a c.p. map is self-adjoint) we obtain ∀x ∈ A

u(xa) = u(x)u(a).

Thus we have proved {a ∈ A | u(a ∗ a) = u(a)∗ u(a)} = {a ∈ A | u(xa) = u(x)u(a)

∀x ∈ A}. (5.3)

It is easy to see that the right-hand side of this equality is an algebra (using associativity). This proves (i). To check (ii), we note that reversing the roles of a and a ∗ in (i) we have u(aa∗ ) = u(a)u(a)∗ if and only if u(ay) = u(a)u(y) for all y in A. Note that a ∈ Du if and only if both a and a ∗ belong to the set (5.3). Therefore Du is a C ∗ -algebra, we have u(ab) = u(a)u(b) for any a,b in Du which proves (ii) and (5.1) holds. Remark 5.2 In the situation of Theorem 5.1, let π = u|Du : Du → B. Then ker(π ) is a hereditary C ∗ -subalgebra of A (and of course an ideal of Du ). Indeed, if 0 ≤ y ≤ x with y ∈ A and x ∈ Du then π(x) = 0 ⇒ u(y) = 0 and 0 ≤ u(y 2 ) ≤ yu(y) = 0 so that y ∈ Du and hence y ∈ ker(π ). As a consequence, we have Corollary 5.3 (On bimodular maps) Let C ⊂ B be a C ∗ -subalgebra of a C ∗ -algebra B. Let π : C → π(C) ⊂ B(H ) be a representation. Then


any contractive c.p. map (in particular any unital c.p. map) u : B → B(H ) extending π must satisfy u(c1 xc2 ) = π(c1 )u(x)π(c2 ), for all x ∈ B and all c1,c2 ∈ C i.e. u must be a C-bimodule map (for the action defined by π ). In particular (taking for π the identity on C): Corollary 5.4 (On conditional expectations) Let C ⊂ B be a C ∗ -subalgebra of a C ∗ -algebra B. Then any contractive c.p. projection P : B → C is a conditional expectation, i.e., P (c1 xc2 ) = c1 P (x)c2, ∀x ∈ B ∀c1,c2 ∈ C.

5.2 Jordan multiplicative domains In some situations, we will have to deal with a contractive mapping u : A → B that is merely positive (and hence preserving self-adjointness). In that case, there is an analogue of Theorem 5.1 where the product in A and B is replaced by the Jordan product defined by x ◦ y = (xy + yx)/2. A linear subspace  of a C ∗ -algebra is called a Jordan subalgebra if it is stable under the Jordan product. A linear map u :  → B(H ) is called a Jordan morphism if ∀a,b ∈ 

u(a ◦ b) = u(a) ◦ u(b).

If in addition  and u are self-adjoint (meaning a ∗ ∈  and u(a ∗ ) = u(a)∗ for any a ∈ ), we will say that u is a Jordan ∗-morphism. Theorem 5.5 Let u : A → B be a positive unital map between unital C ∗ -algebras with u ≤ 1. If (and only if) a ∈ A satisfies u(a ∗ ◦ a) = u(a)∗ ◦ u(a), we have ∀x ∈ A

u(x ◦ a) = u(x) ◦ u(a).

The set of such a’s forms a closed self-adjoint Jordan subalgebra (called the Jordan multiplicative domain of u), and the restriction of u to it is a Jordan ∗-morphism.
Proof Since u preserves positivity it also preserves self-adjointness. Note that the commutativity of ◦ dispenses us from distinguishing left and right products. In particular, u is self-adjoint. Let a ∈ A with a = a∗. We claim that

u(a²) ≥ u(a)².   (5.4)

Let Ca be the C∗-algebra generated by a. Since Ca is commutative, the restriction of u to Ca is c.p. by Remark 1.32. Therefore, the claim follows


from (5.2) applied to the latter restriction. Note for later use that if u(a 2 ) = u(a)2 , Theorem 5.1 implies that u is multiplicative on Ca , and in particular u(a 4 ) = u(a 2 )2 .

(5.5)

Now for x ∈ A of the form x = a + ib with a,b self-adjoint we have x ∗ ◦ x = a 2 + b2 and u(x) = u(a) + iu(b). Therefore, (5.4) implies u(x ∗ ◦ x) ≥ u(x)∗ ◦ u(x).

(5.6)

From that point on, the proof can be completed like for Theorem 5.1. Note that u = {a ∈ A | u(x ◦ a) = u(x) ◦ u(a) ∀x ∈ A}.

(5.7)

However, some extra care is needed to show that u (which is clearly selfadjoint) is a Jordan algebra, because the Jordan product is not associative. Since u is a self-adjoint subspace, it suffices to show that a ◦ b ∈ u for any pair a,b of self-adjoint elements of u . Since a ◦b = ((a +b)2 −(a −b)2 )/4, it suffices to show that a = a ∗ and a ∈ u implies a 2 ∈ u , or equivalently that a = a ∗ and u(a 2 ) = u(a)2 implies u(a 4 ) = u(a 2 )2 , but we already observed this in (5.5). At some point in the sequel we will crucially need the following result due to Størmer (see [123]). Theorem 5.6 Let A be a C ∗ -algebra. Let r : A → B(H ) be a Jordan ∗-morphism. There is a projection p in r(A) ∩ r(A) such that the decomposition ∀a ∈ A

r(a) = pr(a) + (1 − p)r(a)

decomposes r as the sum of a ∗-homomorphism a → pr(a) (= pr(a)p) and ∗-antihomomorphism a → (1 − p)r(a) (= (1 − p)r(a)(1 − p)). We will use this (without proof) via the following consequence: Theorem 5.7 Let M,N be von Neumann algebras. Let ϕ : N → M be a normal (surjective) linear map such that ϕ(BN ) = BM . Then there are mutually orthogonal projections p,q in N such that M embeds in pNp ⊕ (qNq)op as a von Neumann subalgebra admitting a contractive conditional expectation onto it. More precisely, there is an injective normal ∗-homomorphism r : M → pNp ⊕ (qNq)op and a normal contractive (and c.p.) projection P : pNp ⊕ (qNq)op → r(M). To prepare for the proof we first need some background on support projections.
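(A basic example of the dichotomy in Theorem 5.6, recorded in passing: the transposition map r(x) = ᵗx on Mn is a Jordan ∗-morphism, since ᵗ(x ◦ y) = ᵗy ◦ ᵗx = ᵗx ◦ ᵗy, but for n ≥ 2 it is a ∗-antihomomorphism and not a ∗-homomorphism; it corresponds to p = 0 in Theorem 5.6, while any genuine ∗-homomorphism corresponds to p = 1.)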


Remark 5.8 Let N ⊂ B(H ) be a von Neumann algebra. Let PN denote the set of (self-adjoint) projections in N . Let (pi )i∈I be any family in PN . Then there is a unique element denoted by ∨{pi | i ∈ I } in PN that is minimal among all projections q ∈ PN such that q ≥ pi for all i ∈ I . To verify this just observe that by the bicommutant Theorem A.46 a (self-adjoint) projection p ∈ B(H ) is in PN if and only if p commutes with N  or equivalently if p(H ) ⊂ H is an invariant subspace for N  . By this criterion if E = span[∪i∈I pi (H )] the projection PE is in N and hence we may define simply ∨{pi | i ∈ I } = PE . Remark 5.9 Let ϕ : N → B(H ) be a positive map that is weak* to weak* continuous (in other words ϕ is “normal”). The support projection sϕ of ϕ in N is defined as sϕ = 1 − ∨{p | p ∈ PN , ϕ(p) = 0}. We claim that ϕ(1 − sϕ ) = 0. Let J = {x ∈ N | ϕ(x ∗ x) = 0}. Recall x ∗ y ∗ yx ≤ y2 x ∗ x for any x,y ∈ N . Thus J is a weak* closed left ideal in N. By Remark A.36 there is a projection P ∈ J such that J = NP. Now for p ∈ PN , ϕ(p) = 0 implies p ∈ NP and hence ker(P ) ⊂ ker(p) or equivalently p(H ) ⊂ P (H ) therefore also span[p(H ) | ϕ(p) = 0] ⊂ P (H ). It follows that 1 − sϕ ≤ P and hence ϕ(1 − sϕ ) = 0, which proves our claim. This implies that ϕ(x) = ϕ(sϕ xsϕ ) for any x ∈ N . Indeed, we have x − sϕ xsϕ = x(1 − sϕ ) + (1 − sϕ )xsϕ and it is easy to check (hint: compose with a state and use Cauchy–Schwarz) that ϕ(x(1 − sϕ )) = ϕ((1 − sϕ )y) = 0 for any x,y ∈ N. Lastly, ϕ(x) = 0 for any nonzero x ∈ (sϕ N sϕ )+ . Otherwise in the commutative von Neumann algebra generated by x in sϕ N sϕ we would find a nonzero projection p such that εp ≤ x for some ε > 0 and hence ϕ(p) = 0, so that p ≤ 1 − sϕ which is absurd for 0 = p ∈ sϕ N sϕ . Moreover, sϕ is the largest projection in PN with the latter property. Proof of Theorem 5.7 We follow closely Ozawa’s presentation in [189]. Our first goal is to show that our assumption implies the existence of a normal positive unital surjective linear map ϕ  : N → M such that ϕ  (BN ) = BM . Let C = {x ∈ N | x ≤ 1,ϕ(x) = 1}. Observe that C is a (nonvoid) convex subset (actually, as will soon become apparent, a “face”) of the unit ball of N. Since ϕ is normal, C is σ (N,N∗ ) compact and hence has extreme points by the Krein–Milman theorem. Let U be an extreme point of C. We claim that U is necessarily an extreme point of BN . Indeed, if U is the midpoint of a segment [a,b] in BN , then ϕ(U ) = 1 is the midpoint of the segment [ϕ(a),ϕ(b)] in the unit ball of M. But it is easy to see (by the uniform convexity of the Hilbert space on which M is realized, see §A.3) that this forces ϕ(a) = ϕ(b), and hence ϕ(a) = ϕ(b) = 1. The latter means that a,b ∈ C, and since


U ∈ ext(C), we must have a = b = U . This proves the claim that U ∈ ext(BN ). By a well-known characterization of ext(BN ) (see e.g. [240, p. 48]) U is a partial isometry. Since ϕ(U ) = 1 we have ∀x ∈ N

ϕ(x) = ϕ(UU ∗ x).

(5.8)

Indeed, for any normal state g on M we have g(ϕ(U )) = 1 so that the functional f ∈ N∗ defined by f (x) = g(ϕ(x)) satisfies f N∗ = 1 = f (U ), so by Lemma A.41 we have f (x) = f (UU ∗ x) or g(ϕ(x)) = g(ϕ(UU ∗ x)) and since this holds for any g, we obtain (5.8). Let us define ϕ  : N → M by ϕ  (x) = ϕ(Ux). Then ϕ  (1) = 1 = ϕ   and, by Remark 1.34, ϕ  is automatically positive. By (5.8) ϕ  still takes the closed unit ball of N onto that of M. Thus we reach our first goal. Thus, replacing ϕ by ϕ  we may assume that ϕ is in addition positive and unital. Let P denote the set of projections in N . Let e ∈ P denote the support projection sϕ of ϕ (see Remark 5.9). Then ϕ(a) = 0 for any nonzero a ∈ (eNe)+ . Replacing N by eNe (which has e as its unit) and ϕ by its restriction to eNe, we may assume that e = 1. Since ϕ(x) = ϕ(exe) for any x ∈ N , after this change from N to eNe our assumption ϕ(BN ) = BM still holds. Let ϕ be the Jordan multiplicative domain. By Theorem 5.5, the latter is a self-adjoint Jordan subalgebra, which is weak* closed as can be easily deduced from (5.7), and ϕ|ϕ : ϕ → M is a normal Jordan ∗-morphism. We first claim that ϕ|ϕ is injective. Indeed, for x ∈ ϕ , if ϕ(x) = 0 then ϕ(x)∗ ◦ ϕ(x) = ϕ(x ∗ ◦ x) = 0 and since 1 = e is the support of ϕ we have x ∗ ◦ x = 0 and hence x = 0. Secondly, we claim that ϕ|ϕ is surjective. Let v ∈ U (M). We will show that there is w ∈ U (N ) such that ϕ(w) = v. Since ϕ(BN ) = BM we know there is w ∈ BN satisfying this. But now the positivity of ϕ implies ϕ(1 − w∗ ◦ w) ≥ 0 and by (5.6) ϕ(1 − w∗ ◦ w) ≤ 0. Therefore we must have 1 − w ∗ ◦ w = 0, so that w is necessarily unitary. Lastly, since w and ϕ(w) are both unitary and ϕ(1) = 1, we must have w ∈ ϕ . This shows that the range of ϕ|ϕ contains U (M), and since M is linearly spanned by U (M), this proves the surjectivity of ϕ|ϕ . Thus ϕ|ϕ is an invertible Jordan ∗-morphism from ϕ onto M. We now apply Theorem 5.6 to the inverse Jordan ∗-morphism ψ : M → ϕ ⊂ N : there are mutually orthogonal projections p,q ∈ P with p + q = 1 in ϕ ∩ ϕ ⊂ N such that the mapping r : M → pNp ⊕ (qNq)op defined by r(x) = pψ(x) ⊕ qψ(x) is an injective ∗-homomorphism. We may write just as well r(x) = pψ(x)p ⊕ qψ(x)q since p,q commute with the range of ψ. Moreover, since ψ(M) = ϕ is weak* closed in N and commutes with p (and q), r(M) is also weak*


closed in pNp ⊕ (qNq)op . Thus r(M) is a von Neumann algebra, r : M → r(M) is a ∗-isomorphism and hence (recall Remark A.38) r is automatically normal. Consider y = y1 ⊕ y2 ∈ pNp ⊕ (qNq)op . Let t (y) = y1 + y2 ∈ N . Then t : pNp ⊕ (qNq)op → N is isometric (positive but in general not c.p.). The mapping P : pNp ⊕ (qNq)op → r(M) defined by P (y) = (rϕt)(y) is a contractive (normal) projection onto r(M) (because tr = ψ and ϕψ is the identity on M). By Tomiyama’s theorem 1.45, it is a c.p. projection. Remark 5.10 Conversely, if r,p,q,P are as in the conclusion of Theorem 5.7, then the map ϕ of the form ϕ(x) = r −1 P ((pxp,qxq)), is a normal (positive unital) map onto M such that ϕ(BN ) = BM .

5.3 Notes and remarks The theory of multiplicative domains for c.p. (or merely 2-positive) maps is due to Choi [44]. It was preceded by important work on positive maps by Kadison and Størmer; see [235] for references and information on the latter maps. See [123] for information on Jordan algebras. More recent results on Jordan multiplicative domains appear in Størmer’s paper [234].

6 Decomposable maps

This chapter is devoted to linear maps that are decomposable as linear combinations of c.p. maps and to the appropriate norm denoted by  · dec . As will soon be clear, these maps and the dec-norm play the same role for the max-tensor product as cb-maps and the cb-norm with respect to the mintensor product.

6.1 The dec-norm Let A ⊂ B(H ) be a closed subspace forming an operator system and B a C ∗ -algebra. We will denote by D(A,B) the set of all “decomposable” maps u : A → B, i.e. the maps that are in the linear span of CP(A,B). This means that u ∈ D(A,B) if and only if there are uj ∈ CP(A,B) (j = 1,2,3,4) such that u = u1 − u2 + i(u3 − u4 ).

 A simple minded choice of norm would be to take u = inf 41 uj , but this is not the optimal choice. In many respects, the “right” norm on D(A,B) is the following one, introduced by Haagerup in [104]. We denote udec = inf{max{S1 ,S2 }}

(6.1)

where the infimum runs over all maps S1, S2 ∈ CP(A,B) such that the map

V : x → [ S1(x)  u(x) ; u(x∗)∗  S2(x) ]   (6.2)

is in CP(A, M2(B)). We will use the notation u∗(x) = u(x∗)∗.


Note that u = u∗ if and only if u takes self-adjoint elements of A to self-adjoint elements of B. This holds in particular for any c.p. map u. With this notation, we can write

V = [ S1  u ; u∗  S2 ].

Then D(A,B) equipped with the norm ‖·‖dec is a Banach space. To clarify this, let us denote by D′(A,B) the set of those linear mappings u : A → B such that there are S1, S2 ∈ CP(A,B) for which the preceding map V is in CP(A, M2(B)).

Remark 6.1 Let λ ∈ C. Consider the matrices a = [ 0  1 ; 1  0 ] and b = [ 1  0 ; 0  λ ]. Let V be as in (6.2). Then V′ : x → a∗ V(x) a and V″ : x → b∗ V(x) b are also c.p. Note

V′ = [ S2  u∗ ; u  S1 ],   V″ = [ S1  λu ; λ̄u∗  |λ|² S2 ].   (6.3)

We will show that actually:

Lemma 6.2 D(A,B) = D′(A,B) and D(A,B) is a Banach space for the norm ‖·‖dec.

Proof Let u ∈ D′(A,B) with V as in (6.2). Note that we have u(x) = (1  0) V(x) (0 ; 1). Therefore by the polarization formula this implies u ∈ D(A,B). Thus D′(A,B) ⊂ D(A,B).
We now claim that D′(A,B) is a vector space. Clearly it is stable by addition. A look at V″ in (6.3) shows that u ∈ D′(A,B) ⇒ λu ∈ D′(A,B), proving the claim. Now if u ∈ CP(A,B), and if we denote χ = Σ_i e1i, the mapping

x → Σ_{1≤i,j≤2} eij ⊗ u(x) = [ u(x)  u(x) ; u(x)  u(x) ] = χ∗χ ⊗ u(x)   (6.4)

is clearly c.p. Therefore CP(A,B) ⊂ D′(A,B). But since D′(A,B) is a vector space, this implies D(A,B) ⊂ D′(A,B). The easy verification that ‖·‖dec is a norm for which D(A,B) is complete is left to the reader.

Remark 6.3 It is easy to show that the infimum in the definition (6.1) of the dec-norm is a minimum (i.e. this infimum is attained) when the range B is a von Neumann algebra, or when there is a contractive c.p. projection from B∗∗ to B. Haagerup raises in [104] the (apparently still open) question whether it is always a minimum.


Lemma 6.4 Let u : A → B be “self-adjoint” i.e. such that u∗ = u, and let S1,S2 ∈ CP(A,B).   S1 u ∈ CP(A,M2 (B)) in other words if 0  V (see (2.30)) If V = u S2 then −(S1 + S2 )/2  u  (S1 + S2 )/2. Proof For any a ∈ A+ we have V (a) ≥ 0, and hence ±u(a) ≤ (S1 (a) + S2 (a))/2 by (1.24). Therefore, the two mappings a → (S1 (a) + S2 (a))/2 ∓ u(a) are positive   mappings (i.e. positivity preserving). But since Vn = un (S1 )n is assumed positive for any n, we conclude that the same un (S2 )n two mappings are c.p., which proves the lemma. Lemma 6.5 The following simple properties hold: (i) If u ∈ CP(A,B), then udec = ucb = u. (ii) If u(x) = u(x ∗ )∗ (i.e. u is “self-adjoint”) then udec = inf{u1 + u2  | u1,u2 ∈ CP(A,B), u = u1 − u2 }.

(6.5)

 0 u . (iii) To any u : A → B we associate the self-adjoint mapping  u= u∗ 0 Then u ∈ D(A,B) if and only if  u ∈ D(A,M2 (B)) and udec =  udec .   u u Proof (i) If u ∈ CP(A,B), then ∈ CP(A,M2 (B)) and hence u u udec ≤ u. Conversely, for  any x ≥ 0 in the unit ball of A, if V is as S1 (x) u(x) ≥ 0 and hence, by Lemma 1.37, u(x) ≤ in (6.2), then u(x) S2 (x) max{S1 (x),S2 (x)}. Therefore u ≤ max{S1 ,S2 } by (1.21) and hence u ≤ udec . Since u is c.p. we already know (see (1.22)) that u = ucb .   u1 u1 ∈ (ii) Assume u = u1 − u2 with u1,u2 ∈ CP(A,B). Then V1 = u1 u1   u2 −u2 ∈ CP(A,M2 (B)) and (use (6.3) with λ = −1) V2 = −u2 u2 CP(A,M2 (B)), and hence V1 + V2 ∈ CP(A,M2 (B)). This shows udec = u1 − u2 dec ≤ u1 + u2 , and hence dec ≤ inf{u1 + u2  | u = u1 − u2 }.  u S1 u ∈ CP(A,M2 (B)), let T = (S1 + Conversely, if u = u∗ and if u S2 S2 )/2. Then by Lemma 6.4 we have −T  u  T and hence we can write u = u1 − u2 with u1 = (u + T )/2 and u2 = (−u + T )/2. Then u1,u2 ∈ CP(A,B) 


and u1 + u2 = T . Thus u1 + u2  = (S1 + S2 )/2 ≤ max{S1 ,S2 }. So we find inf{u1 + u2  | u = u1 − u2 } ≤ udec . (iii) Assume  u = U1 − U2 with U1,U2 ∈ CP(A,M2 (B)). Note that U1,U2 coincide on the diagonal and are self-adjoint. Let (S1,S2 ) be their diagonal coefficients, which are clearly in CP(A,B). We have then mappings u1 : A → B and u2 : A → B such that     S1 u1 S1 u2 and U2 = . U1 = u1∗ S2 u2∗ S2 This implies u1 dec ≤ max{S1 ,S2 } and u2 dec ≤ max{S1 ,S2 }. Therefore udec ≤ u1 dec + u2 dec ≤ 2 max{S1 ,S2 } ≤ U1 + U2 , where for the last inequality we used the classical inequality      a 0   a b  .    ≤ max{a,d} =  0 d   c d  This shows that udec ≤ inf{U1 + U2  |  u = U1 − U2 } =  udec , where for the last = we use (ii) for  u.   u S c.p. and Conversely, if udec < 1 there are c.p. maps S1,S2 with 1 u∗ S2   S1 0 . Then (recall (6.3) with λ = −1) max{S1 ,S2 } < 1. Let S = 0 S2 S ± u is c.p. and hence  u = U1 −U2 with U1 = (S + u)/2 and U2 = (S − u)/2, udec ≤ udec . and U1 + U2  = S < 1. This shows that  Proposition 6.6 The following additional properties hold: (i) We have D(A,B) ⊂ CB(A,B) and ∀u ∈ D(A,B) ucb ≤ udec .

(6.6)

(ii) If u ∈ D(A,B) and v ∈ D(B,C) then vu ∈ D(A,C) and vudec ≤ vdec udec .

(6.7)

Proof (i) Assume first that u is self-adjoint, i.e. u = u∗ and udec < 1. Then, by part (ii) in Lemma 6.5, u = u1 − u2 with u1,u2 c.p. and u1 + u2  < 1. We claim that sup{u(x) | x = x ∗,x ≤ 1} ≤ 1.

(6.8)

Indeed, first consider x ≥ 0 in the unit ball of A. We have then ±u(x) ≤ (u1 + u2 )(x) and hence u(x) ≤ 1. But now if x = x ∗ and x ≤ 1, we have ±x ≤ |x| = x + + x − , and hence ±u(x) = ±(u1 − u2 )(x) ≤ (u1 + u2 )(|x|), which implies u(x) ≤ |x| ≤ 1, proving the claim.


But (6.8) is valid also for IdMn ⊗ u = IdMn ⊗ u1 − IdMn ⊗ u2 for any n. In   0 x is self-adjoint in M2 (A), we particular, using n = 2 since the matrix ∗ 0 x have for any x ∈ A      0 x   0 u(x)      = x, ≤ u(x) ≤  u(x ∗ ) 0   x∗ 0  which implies u ≤ 1. Since we may replace u by IdMn ⊗ u for any n, we conclude ucb ≤ 1. By homogeneity, this proves (i) for self-adjoint u’s. udec , and by what we But by part (iii) in Lemma 6.5, we have udec =  udec . Since we have obviously ucb ≤  ucb , we just proved  ucb ≤  obtain (i).     T1 v S1 u ∈ CP(A,M2 (B)) and ∈ CP(B, (ii) Assume that u∗ S2 v T2 ∗  vu T1 S1 ∈ CP(A,M2 (C)). M2 (C)). Then by Lemma 1.40 we have v∗ u∗ T2 S2 Therefore, observing that v∗ u∗ = (vu)∗ , we have vudec ≤ max{T1 S1 ,T2 S2 } ≤ max{T1 ,T2 } max{S1 ,S2 }, and (ii) follows. Note that for self-adjoint mappings there is a very direct argument: if u = u1 − u2 and v = v1 − v2 we have vu = (v1 u1 + v2 u2 ) − (v1 u2 + v2 u1 ) (a difference of two c.p. maps) and hence vudec ≤ (v1 u1 + v2 u2 ) + (v1 u2 + v2 u1 ) = (v1 + v2 )(u1 + u2 ) ≤ v1 + v2 u1 + u2 , and recalling (6.5), this yields (ii) for self-adjoint maps u,v. The preceding results are valid with an arbitrary range. However, the special case when the range is B(H ) (or is injective) is quite important: Proposition 6.7 If B = B(H ) or if B is an injective C ∗ -algebra, then D(A,B) = CB(A,B) and for any u ∈ CB(A,B) we have udec = ucb .

(6.9)

Proof Assume B = B(H ). By the factorization of c.b. maps (see Theorem 1/2 1.50) we can write u(x) = V ∗ π(x)W (x ∈ A) with V  = W  = ucb . ∗ ∗ ∗ Let S1 (x) = V π(x)V and S2 (x) = W π(x)W . Note u∗ (x) = W π(x)V . Then the map   ∗     0 V S1 u π 0 V W = u∗ S2 W∗ 0 0 π 0 0


is c.p. (by Remark 1.21) and hence udec ≤ max{V 2,W 2 } = ucb . Equality holds by (6.6). If B ⊂ B(H ) is injective there is a contractive c.p. projection P : B(H ) → B. Note that by (i) in Lemma 6.5 P dec = 1. Then by (ii) in Proposition 6.6 u : A → Bdec ≤ u : A → B(H )dec P : B(H ) → Bdec = ucb . Again equality holds by (6.6). In analogy with (1.15) we have: Lemma 6.8 (Decomposable into a direct sum) Let A and (Bi )i∈I be maps  C ∗ -algebras and let B = ⊕ i∈I Bi ∞ . Let u : A → B. We denote ui = pi u : A → Bi . Then u ∈ D(A,B) if only if all the ui’s are decomposable with supi∈I ui dec < ∞ and we have udec = supi∈I ui dec .

(6.10)

Proof Assume supi∈I ui dec < 1. We then have c.p. maps Vi : A → M2 (Bi ) such that Vi 12 = u i and such that max{Vi 11 ,Vi 22 } < 1. By (6.6) the  to (Vi ) i∈I is well defined mapping V : A → ⊕ i∈I M2 (Bi ) ∞ associated  and clearly c.p. Since we may identify ⊕ i∈I M2 (Bi ) ∞ with M2 (B) so that u = V12 , we obtain udec ≤ 1. By homogeneity this proves udec ≤ supi∈I ui dec . The converse follows from (6.7). In the von Neumann algebra setting, the next lemma will be useful. Lemma 6.9 (Decomposability extends to the bidual) Let u : A → M be a linear map from a C ∗ -algebra A to a von Neumann algebra M. Then u ∈ ¨ dec = udec . D(A,M) ⇒ u¨ ∈ D(A∗∗,M) and u Proof By Lemma 1.43 we know u ∈ CP(A,M) ⇒ u¨ ∈ CP(A∗∗,M). Assume udec < 1. Then  let S1,S2 ∈ CP(A,B) with  S1 < 1,S2  < 1 be such that S¨1 u¨ S1 u is c.p. and note that V¨ = . This implies u ¨ dec < 1, V = u∗ S2 u¨∗ S¨2 and hence by homogeneity, u ¨ dec ≤ udec . The converse is obvious (say by (ii) in Proposition 6.6). The next statement provides us with simple examples of decomposable maps (note that we will show in Lemma 6.24 that (6.12) is somewhat optimal). Proposition 6.10 Let A be a C ∗ -algebra. (i) Fix a,b in A. Let u : A → A be defined by u(x) = a ∗ xb, then udec ≤ a b.


(ii) Let u : Mn → A be a linear mapping into a C∗-algebra. Assume that u(eij) = ai∗ bj with ai, bj in A. Let ‖a‖C = ‖Σ ai∗ ai‖^{1/2}. Then

‖u‖dec ≤ ‖a‖C ‖b‖C.   (6.11)

(iii) More generally, if u(eij) = Σ_{1≤k≤m} a_{ki}∗ b_{kj} with a_{ki}, b_{kj} in A, then

‖u‖dec ≤ ‖Σ_{k,i} a_{ki}∗ a_{ki}‖^{1/2} ‖Σ_{k,j} b_{kj}∗ b_{kj}‖^{1/2}.   (6.12)

Proof (i) Let V : A → M2 (A) be the mapping defined by   ∗ a xa a ∗ xb . V (x) = ∗ b xa b∗ xb   x 0 An elementary verification shows that V (x) = t ∗ t where t = 2−1/2 0 x   a b . Clearly this shows that V is c.p. hence by definition of the dec-norm a b we have udec ≤ max{V11 ,V22 } where V11 (x) = a ∗ xa and V22 (x) = b∗ xb. Thus we obtain udec ≤ max{a2,b2 }. Applying this to the mapping x → u(x)a−1 b−1 we find udec ≤ a b. (ii) Let a ∗ = (a1∗,a2∗, . . . ,an∗ ),b∗ = (b1∗,b2∗, . . . ,bn∗ ) viewed as row matrices with entries in A (so that a and b are column matrices). Then, for any x in Mn , u(x) can be written as a matrix product: u(x) = a ∗ xb. We the mapping V : Mn → M2 (A)  ∗ again ∗introduce   defined by V (x) = a xa a xb x 0 ∗ . Again we note V (x) = t t where t = 2−1/2 b∗ xa b∗ xb 0 x   a b ∈ M2n×2 (A) which shows that V is c.p. so we obtain udec ≤ a b   max{V11 ,V22 } ≤ max{b2,a2 } = max{ bj∗ bj , ai∗ ai } and by homogeneity this yields (6.11).  ∗b . (iii) We have u = uk where uk : Mn → A is defined by uk (eij ) = aki kj  Let Vk : Mn → M2 (A) be associated to uk as in (ii). Let V = Vk . Clearly V is c.p. and hence udec ≤ max{V11 ,V22 } = max{V11 (1),V22 (1)}, which yields (6.12).
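As an illustration of (6.11) (a simple special case, easy to check directly), take bj = aj, i.e. u(eij) = ai∗ aj. Writing a for the column (a1, . . . , an), we have u(x) = a∗ x a, so u is c.p. and

‖u‖dec = ‖u‖ = ‖u(1)‖ = ‖Σ ai∗ ai‖ = ‖a‖C²,

so that (6.11) is an equality in this case (here we use Lemma 6.5 (i) for the first equality).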


Proposition 6.11 Let A, B, C be C∗-algebras. For any u ∈ D(A,B) we have
∀x ∈ C ⊗ A   ‖(Id_C ⊗ u)(x)‖_{C⊗max B} ≤ ‖u‖_dec ‖x‖_{C⊗max A}.   (6.13)
Moreover, the mapping Id_C ⊗ u : C ⊗max A → C ⊗max B is decomposable and its dec-norm satisfies
‖Id_C ⊗ u‖_{D(C⊗max A, C⊗max B)} ≤ ‖u‖_dec.   (6.14)

Proof Since by (6.6) IdC ⊗ u ≤ IdC ⊗ udec , it suffices to prove (6.14). We already saw in Corollary 4.18 the analogous property for c.p. maps. Let  S1 u is in CP(A,M2 (B)). Then u ∈ D(A,B). Assume that the map V = u∗ S2 by Corollary 4.18 the mapping IdC ⊗ V : C ⊗max A → C ⊗max M2 (B) is c.p. and using C ⊗max M2 (B) ∼ = M2 (C ⊗max B), we may view it as  IdC ⊗ S1 IdC ⊗ u . Since IdC ⊗ u∗ = (IdC ⊗ u)∗ this implies that IdC ⊗ u∗ IdC ⊗ S2 IdC ⊗ u ∈ D(C ⊗max A,C ⊗max B) with dec-norm ≤ max{IdC ⊗ S1 : C ⊗max A → C ⊗max B, IdC ⊗ S2 : C ⊗max A → C ⊗max B} but, by (4.30) since S1,S2 are c.p. , this is the same as max{S1 ,S2 }. Taking the infimum over all possible S1,S2 , we obtain (6.14). Corollary 6.12 Let uj ∈ D(Aj ,Bj ) (j = 1,2) be decomposable mappings between C ∗ -algebras. Then u1 ⊗ u2 extends to a decomposable mapping in D(A1 ⊗max A2,B1 ⊗max B2 ) such that u1 ⊗ u2 D(A1 ⊗max A2,B1 ⊗max B2 ) ≤ u1 dec u2 dec .

(6.15)

Proof Just write u1 ⊗ u2 = (u1 ⊗ Id)(Id ⊗ u2 ). When the mapping u has finite rank then a stronger result holds. We can go min → max: Proposition 6.13 Let u ∈ D(A,B) be a finite rank map between C ∗ -algebras. For any C ∗ -algebra C we have ∀x ∈ C ⊗ A

‖(Id_C ⊗ u)(x)‖_{C⊗max B} ≤ ‖u‖_dec ‖x‖_{C⊗min A}.   (6.16)

Proof For any finite-dimensional subspace F ⊂ B, the min and max norms are clearly equivalent on C ⊗ F . Thus since its rank is finite u defines a bounded map IdC ⊗ u : C ⊗min A → C ⊗max B. That same map has norm at most udec as a map from C ⊗max A to C ⊗max B. But since we have a metric surjection


q : C ⊗max A → C ⊗min A taking the open unit ball onto the open unit ball, it follows automatically that IdC ⊗ u : C ⊗min A → C ⊗max B = IdC ⊗ u : C ⊗max A → C ⊗max B ≤ udec .

6.2 The δ-norm In this section, we introduce a “hybrid” tensor product E ⊗δ A of an operator space E and a C ∗ -algebra A, which is convenient to compute the dec-norms of certain important examples, as we do in the next section. We will use it again in §10.3 to relate nuclearity and c.p. approximation properties. The main point is Theorem 6.15, which provides us with a useful factorization for the elements in the unit ball of (C ⊗max A) ∩ (E ⊗ A) when C is the universal C ∗ algebra of E. We need the obvious generalization of the notation (4.5): given operator spaces E,F and linear mappings θ : E → B(H ) and π : F → B(H ), we denote by θ · π : E ⊗ F → B(H ) the linear mapping defined by    θ (xj )π(yj ) (xj ∈ E,yj ∈ F ). (θ · π ) xj ⊗ yj = Let A be a C ∗ -algebra. For any y ∈ E ⊗ A we define (y) = sup{θ · π(y)B(H ) }

(6.17)

where the supremum runs over all Hilbert spaces H and all pairs (θ,π ) where π : A → B(H ) is a ∗-homomorphism and θ : E → π(A) is a complete contraction. When A is unital, we claim that (6.17) remains unchanged if we restrict to unital ∗-homomorphisms. Indeed, if π is not unital, let p = π(1), then p is a projection on H and it is immediate that θ · π(y) = θ  · π  (y) where θ  = pθp and π  = pπp; thus if we replace π by π  , which we view as a unital ∗-homomorphism into B(p(H )), and θ by θ  , we find θ ·π(y) = θ  ·π  (y) whence the claim. Lemma 6.14 We view E as embedded into C ∗ E (as defined in §2.7). Then (y) = yC ∗ E⊗max A .

(6.18)

Proof Since π(A) is a C ∗ -algebra, any completely contractive map θ : E → θ : C ∗ E → π(A) . Hence we clearly have π(A) extends to a representation  ∀y ∈ E ⊗ A


(y) ≤ sup ‖θ̃ · π(y)‖ ≤ ‖y‖_{C∗⟨E⟩⊗max A}.

The reverse inequality is clear (using an embedding C∗⟨E⟩ ⊗max A ⊂ B(H)). The main motivation for introducing the quantity defined in (6.17) is the following.
Theorem 6.15 Let A ⊂ B(H) be a unital C∗-algebra and let E be an operator space. Consider an element y in E ⊗ A. Let
δ(y) = inf ‖x‖_{Mn(E)} ‖Σ_i a_i a_i∗‖^{1/2} ‖Σ_j b_j∗ b_j‖^{1/2}   (6.19)
where the infimum runs over all possible n and all possible representations of y of the form
y = Σ_{i,j=1}^n x_ij ⊗ a_i b_j.   (6.20)
Then (y) = δ(y).   (6.21)

Proof We adapt a proof from [208] that actually is valid even if the subalgebra A is not assumed self-adjoint (in that case the π ’s are assumed to be unital completely contractive homomorphisms). We first show the easy inequality (y) ≤ δ(y). Let (θ,π ) be as in the definition of (y). Then we have, assuming (6.20)  (θ · π )(y) = π(ai )θ (xij )π(bj ) and hence by (2.3) (y) ≤ δ(y). To show the converse, since δ and  are norms (for δ this can be proved by the same idea as for γE in (1.17)), it suffices to show that ∗ ≤ δ ∗ . So let ϕ : E ⊗ A → C be a linear form such that δ ∗ (ϕ) ≤ 1, or equivalently such that |ϕ(y)| ≤ δ(y).

∀y ∈ E ⊗ A

The following Lemma 6.16 is the heart of the proof. With the notation from that lemma, we have v · π )(y)η2  ϕ(y) = η1,( therefore |ϕ(y)| ≤  v · π(y) ≤ (y). This completes the proof that ∗ ≤ δ ∗ , and hence that δ ≤ .


Lemma 6.16 Let ϕ : E ⊗ A → C be a linear form with δ ∗ (ϕ) ≤ 1. Then  and a representation π : A → B(H ) together with there are a Hilbert space H  and  a completely contractive map  v : E → B(H ) and unit vectors η1 ∈ H  η2 ∈ H such that for any a,b,c in A and any x in E we have ϕ(x ⊗ ab) = η1,π(a) v (x)π(b)η2  and  v (x)π(c) = π(c) v (x). Proof We first claim that there are two representations π1 : A → B(H1 ) and π2 : A → B(H2 ) together with a completely contractive map v : E → B(H2,H1 ) and unit vectors ξ1 ∈ H1 and ξ2 ∈ H2 such that for any a,b in A and any x in E we have ϕ(x ⊗ ab) = ξ1,π1 (a)v(x)π2 (b)ξ2 .

(6.22)

Recall the classical arithmetic/geometric mean inequality:
∀α,β ≥ 0   (αβ)^{1/2} = inf_{s>0} (α/s + sβ)/2.   (6.23)
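As a quick numerical confirmation of (6.23) — ours, with arbitrarily chosen values of α and β — the infimum over s > 0 of (α/s + sβ)/2 is indeed (αβ)^{1/2}, attained at s = (α/β)^{1/2}.

import numpy as np

# grid search over s > 0 for the right-hand side of (6.23); values are arbitrary
alpha, beta = 3.7, 0.9
s = np.linspace(1e-3, 20, 200001)
vals = (alpha / s + s * beta) / 2
assert abs(vals.min() - np.sqrt(alpha * beta)) < 1e-4
assert abs(s[vals.argmin()] - np.sqrt(alpha / beta)) < 1e-2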

By the definition of δ, this implies that if x ∈ BMn (E)           ai ai∗  +  bj∗ bj  . xij ⊗ ai bj ≤ (1/2)  ϕ Let S be the set of pairs of states (f1,f2 ) on A and let F : S → R be the function defined by         ai ai∗ + f2 bj∗ bj −# ϕ xij ⊗ ai bj . F (f1,f2 ) = (1/2) f1 (6.24) Then supt∈S F (t) ≥ 0. Let F ⊂ ∞ (S,R) be the set of all such functions. It is easy to check that F is a convex cone. Indeed, if F  is the function associated to x ,ai ,bj with x  ∈   x 0  , with the sequences (ai ,ai ) and BMn (E) then F + F is associated to 0 x (bj ,bj ) obtained by concatenation. Moreover, S is a weak* compact convex subset of A∗ ⊕ A∗ and each F ∈ F is affine and weak* continuous. By the variant of Hahn–Banach described in Lemma A.16 there is (f1,f2 ) in S such that for any x ∈ BMn (E) and ai ,bj ∈ A         ≤ (1/2) f1 ai ai∗ + f2 bj∗ bj . # ϕ xij ⊗ ai bj By (6.23) (since the left-hand side is unchanged when we replace (ai ,bj ) by (ai /s,sbj ) for any s > 0) we find automatically        1/2 # ϕ ≤ f1 . xij ⊗ ai bj ai ai∗ f2 bj∗ bj


Changing ai to zai with z ∈ C,|z| = 1 does not change the right-hand side either, thus we also find       1/2 . (6.25) ai ai∗ f2 bj∗ bj xij ⊗ ai bj ≤ f1 ϕ In particular, for any x ∈ BE and a,b ∈ A |ϕ(x ⊗ ab)| ≤ (f1 (aa∗ )f2 (b∗ b))1/2 . Let π1 : A → B(H1 ) (resp. π2 : A → B(H2 )) be the GNS representation of the state f1 (resp. f2 ) with cyclic unit vector ξ1 ∈ H1 (resp. ξ2 ∈ H2 ). This gives us |ϕ(x ⊗ ab)| ≤ π1 (a ∗ )ξ1 π2 (b)ξ2 . Recall that since we use cyclic vectors, the sets {π2 (b)ξ2 | b ∈ A} and {π1 (a ∗ )ξ1 | a ∈ A} are dense in H2 and H1 respectively. Thus we can unambiguously define a linear mapping v : E → B(H2,H1 ) with v ≤ 1 such that for any x ∈ E,a,b ∈ A ϕ(x ⊗ ab) = π1 (a ∗ )ξ1,v(x)π2 (b)ξ2 . But going back to (6.25) this shows us that actually for any x ∈ BMn (E) we have   1/2  1/2 π1 (ai∗ )ξ1 2 π2 (bj )ξ2 2 . π1 (ai∗ )ξ1,v(xij )π2 (bj )ξ2  ≤ In other words, we actually have vn  ≤ 1 for any n ≥ 1 and hence vcb ≤ 1, which proves our claim. Then, writing (ac)b = a(cb) into (6.22) and using the density just mentioned, we find for all c in A ∀x ∈ E

v(x)π2 (c) = π1 (c)v(x).

(6.26)

Let π : A → B(H1 ⊕ H2) and ṽ : E → B(H1 ⊕ H2) be defined by
π(a) = [ π1(a) 0 ; 0 π2(a) ]   and   ṽ(x) = [ 0 v(x) ; 0 0 ].
Then (6.26) implies ṽ(x)π(c) = π(c)ṽ(x) for x ∈ E, c ∈ A, and π is a unital representation on A. Lastly, letting η2 = [0 ; ξ2] and η1 = [ξ1 ; 0] we find
ϕ(x ⊗ ab) = ⟨η1, π(a)ṽ(x)π(b)η2⟩.
Remark 6.17 Let E be an operator space and A a unital C∗-algebra. Consider y ∈ E∗ ⊗ A. Let u : E → A be the associated linear map. If δ(y) ≤ 1 then for any ε > 0, u admits a factorization of the form
E --v--> Mn --w--> A,


with ‖v‖_cb ≤ 1 + ε and ‖w‖_dec ≤ 1. Indeed, we can write y as y = Σ x_ij ⊗ a_i∗ b_j with ‖x‖_{Mn(E∗)} ≤ 1 + ε and ‖a‖_C ‖b‖_C ≤ 1 (recall the notation ‖a‖_C = ‖Σ_i a_i∗ a_i‖^{1/2}). Define w : Mn → A by w(e_ij) = a_i∗ b_j, and let v : E → Mn be

the map associated to x, so that vcb = xMn (E ∗ ) ≤ 1 + ε. Then by (6.11) we have wdec ≤ 1. Remark 6.18 The space E ⊗δ A can be described as a quotient of the Haagerup tensor product A ⊗h E ⊗h A via the map a ⊗ e ⊗ b → e ⊗ ab, see [208, p. 241].

6.3 Decomposable extension property We previously described (see §1.2) an analogue of the Hahn–Banach extension theorem for c.b. maps. In this section we give an analogue for the maximal tensor product, where decomposable maps replace the c.b. ones. At this point, we advise the reader to review Lemma 1.38 and Corollary 5.3 on bimodule maps. We will use a generalization of Lemma 1.38 for bimodule maps on “operator modules,” as follows (this result appears in [236], see also [197]). Lemma 6.19 Let C ⊂ B(K) be a unital C ∗ -algebra given with a representation π : C → B(H ). Let E ⊂ B(K) be a C-bimodule, i.e. an operator space stable by (left and right) multiplication by any element of C. Consider a bimodule map w : E → B(H ), i.e. a map satisfying w(c1 xc2 ) = S ⊂ M π(c1 )w(x)π(c2 ), (c1,c2 ∈ C,x ∈ E). Let  2 (B(K)) be the operator λ a system consisting of all matrices of the form ∗ with λ,μ ∈ C, a,b ∈ E. b μ Let W : S → M2 (B(H )) be defined by     π(λ) w(a) λ a = . W w(b)∗ π(μ) b∗ μ Then wcb ≤ 1 if and only if W is c.p. Proof The proof of Lemma 1.38 can be easily modified to yield this. We now give an extension property (one more version of Hahn–Banach) for maps defined on a subspace of the maximal tensor product. We will repeat the same trick later on in §8.3. Theorem 6.20 Let A be a unital C ∗ -algebra, E ⊂ A an operator space and M ⊂ B(H ) a von Neumann algebra. Let u : E → M be a bounded linear map. Let  u : M  ⊗ E → B(H ) be the linear map defined when x  ∈ M  , x ∈ E by  u(x  ⊗ x) = x  u(x).


Let M  ⊗max E = M  ⊗ E ⊂ M  ⊗max A denote the closure of M  ⊗ E in M  ⊗max A equipped with the norm induced by M  ⊗max A. u : M  ⊗max E → Then  u extends to a c.b. map on M  ⊗max E with  u ∈ D(A,M) with  udec ≤ 1 extending u. B(H )cb ≤ 1 if and only if there is  In other words u : A → Mdec |  u|E = u}  u : M  ⊗max E → B(H )cb = inf{ and the infimum is attained. Proof Assume there is an extension  u with  u : A → Mdec ≤ 1. By (6.14) u : M  ⊗max A → M  ⊗max Mcb ≤ 1 and hence a fortiori and (6.6) IdM  ⊗  IdM  ⊗ u : M  ⊗max E → M  ⊗max Mcb ≤ 1. Since the product map p : M  ⊗ M → B(H ) defines trivially a ∗-homomorphism on M  ⊗max M we have u : M  ⊗max E → B(H )cb ≤ 1 p : M  ⊗max M → B(H )cb ≤ 1, and hence  since  u = p(IdM  ⊗ u). Conversely, assume  u : M  ⊗max E → B(H )cb ≤ 1. Let K be a suitable Hilbert space so that M  ⊗max A ⊂ B(K), let C = M  ⊗1 ⊂ B(K), let π : C → B(H ) be the natural identification M  ⊗ 1  M  and let E = M  ⊗max E. Note that E is a C-bimodule, and that  u : clE → B(H ) is a C-bimodule map. Hence, with the same notation as in Lemma 6.19 that we apply here to w =  u the mapping W : S → M2 (B(H )) must be completely positive (and unital). Recall that by Theorem 1.39 any unital c.p. map V : A1 → B(H ) on a (unital) operator  : A2 → B(H ). Thus, let system A1 ⊂ A2 admits a (unital) c.p. extension V  : M2 (M  ⊗max A) → M2 (B(H )) W be a completely positive extension of W : S → M2 (B(H )), and let T be the  to M2 (1 ⊗ A). Identifying A to 1 ⊗ A ⊂ B(K), we may view restriction of W  (1) = 1. In the T as a mapping from M2 (A) to M2 (B(H )). Note T (1) = W present case of modular maps we claim that T has the following special form   T11 (x11 ) T12 (x12 ) (6.27) ∀x ∈ M2 (A) T (x) = T21 (x21 ) T22 (x22 ) with T12|E = u and moreover that T (M2 (A)) ⊂ M2 (M). Taking this claim temporarily for granted, we will now conclude the proof. the Since T is unital and c.p. we have max{T11 ,T22 } ≤ T = 1,  a a maps T11 and T22 are c.p. and moreover the mapping R : a → T a a is clearly c.p. on A, and a fortiori self-adjoint. Let  u = T12 (and as usual u(a ∗ )∗ for any a ∈ A). Then we have  u∗ (a) = 

∀a ∈ A   R(a) = [ T11(a) ũ(a) ; ũ∗(a) T22(a) ].

Therefore, by definition of the dec-norm, we have ‖ũ‖_dec ≤ max{‖T11‖, ‖T22‖} = 1. Thus it only remains to prove the claim. For this purpose, observe that, by its definition, W is a ∗-homomorphism on the algebra of matrices [ c1 0 ; 0 c2 ] with c1, c2 in C = M′ ⊗ 1. Let D denote the set of all such matrices. By Corollary 5.3 the map W̃ must be D-bimodular, i.e. we have
∀y1,y2 ∈ D  ∀x ∈ M2(B(H))   W̃(y1 x y2) = W(y1) W̃(x) W(y2).   (6.28)



   1 0 0 0  is Applying this with y1,y2 equal to either or , we find that W 0 0 0 1  (x)ij depends only on xij . A fortiori the same is true for necessarily such that W  T = W|M2 (A) . Thus we can write a priori T in the form (6.27). Moreover, since  extends W , we know W 12 extends W12 = w =  u, and hence restricting this W to A  1 ⊗ A (on which  u = u) we see that  u extends u. Lastly, it remains to check that all Tij s take their values in M. Equivalently it suffices to check that all the terms Tij (xij ) (i,j = 1,2,xij ∈ A) commute with M  . But this is an easy consequence of (6.28). Indeed, since 1 ⊗ A trivially  commutes with  M ⊗ 1, any x ∈ M2 (A) commutes with any y ∈ D of the 0 m ⊗1 ∈ D with m ∈ M  , and hence by (6.28) form y =  0 m ⊗1  (x) = W  (yx) = W  (xy) = W  W (y)W   (x)W(y). Equivalently W (y)T (x) = 0 m , this implies that Tij (xij ) all T (x)W (y) and since W (y) = 0 m   commute with any m ∈ M , and hence take their values in M  = M. This completes the proof of the claim, and of Theorem 6.20. Corollary 6.21 (Infinite multiplicity) In the situation of Theorem 6.20, if the embedding M ⊂ B(H ) has infinite multiplicity, which means we assume H = 2 ⊗2 H and there is an isomorphism π : M → M with M ⊂ B(H) such that the embedding M ⊂ B(H ) is of the form x → Id2 ⊗ π(x) (x ∈ M), then u : M  ⊗max E → B(H )cb .  u : M  ⊗max E → B(H ) = 

(6.29)

¯  . Let v = π u : E → M. A simple Proof We have M  = B(2 )⊗M verification shows that  u restricted to [B(2 ) ⊗ M ] ⊗ E ⊂ M  ⊗ E can v . Moreover, we may use this idenfication with be identified with IdB(2 ) ⊗ 


Mn ⊂ B(2 ) in place of B(2 ) and the isomorphism Mn (M  ) ⊗max E = u =  v cb . But since π is an isomorphism Mn (M  ⊗max E) to show that  we also have u|E = u} = inf{ v : A → Mdec |  v|E = v}. inf{ u : A → Mdec |  By Theorem 6.20 this last equality implies  ucb =  v cb , so that we obtain ucb .  u =  v cb =  Corollary 6.22 (Case E = n∞ ) In the situation of Theorem 6.20, assume either that E = A or that E ⊂ A is a C ∗ -subalgebra for which there is a contractive c.p. projection P : A → E. Then ucb . uD(E,M) = 

(6.30)

In particular, this holds for E = n∞ , E = Mn or when E is an injective C ∗ -algebra. Proof We claim udec = inf{ u : A → Mdec |  u|E = u}. The case E = A is trivial, and the other one very simple. Indeed, by (6.7) for any extension  u u|E dec ≤  udec and if we choose  u = uP we we have u : E → Mdec =  have  udec ≤ udec P dec = udec . If M  admits a cyclic vector, or equivalently if M admits a separating vector (see Lemma A.62) then the mere boundedness of  u on M  ⊗max E ensures that it is c.b. Actually this is a general phenomenon, as the next lemma shows. Corollary 6.23 (Cyclic case) In the situation of Theorem 6.20, assume that M  ⊂ B(H ) admits a cyclic vector. Let  · α be a C ∗ -norm on M  ⊗ A and let M  ⊗α A be the resulting C ∗ -algebra (after completion). Let α

M  ⊗α E = M  ⊗ E ⊂ M  ⊗α A equipped with the norm induced by M  ⊗α A. Then if  u defines (by density) a bounded map on M  ⊗α E, the latter is (automatically) c.b. and its c.b. norm is equal to its norm. Proof We may assume M  ⊗α A ⊂ B(H). The operator space E = M  ⊗α E ⊂ M  ⊗α A ⊂ B(H) is clearly a bimodule with respect to M  ⊗ 1 viewed as a C ∗ -subalgebra of B(H). Let π : M  ⊗ 1 → B(H ) be the natural embedding. If u itself)  u defines (by density) a bounded map on M  ⊗α E, the latter is (like  bimodular with respect to π : M  ⊗ 1 → B(H ). Therefore, by Theorem 2.6 the resulting map  u : M  ⊗α E → B(H ) is c.b. with equality of the norm and the c.b. norm.


6.4 Examples of decomposable maps We end this section with several important examples. First we invite the reader to recall Remark 1.30. We will now refine Proposition 6.10. Lemma 6.24 Consider a linear mapping u : Mn → A into a C ∗ -algebra A. Let a ∈ Mn (A) be the matrix defined by aij = u(eij ). Then   1/2  1/2     ∗ ∗ ∗ udec = inf  akj akj   bkj bkj  a,b ∈ Mn (A), a = a b . k,j

k,j

(6.31) Proof By (iii) in Proposition 6.10 we know that udec is ≤ the right-hand side of (6.31). Conversely, assume udec < 1. Let V ∈ CP(Mn,M2 (A)) be such that V12 = u and Vjj  < 1 for j = 1,2. Let αij = V (eij ) ∈ M2 (A). We denote its entries by αij 11,αij 12, . . . ∈ A. By Remark 1.30, α ∈ Mn (M2 (A))+ , and hence α = β ∗ β for some β ∈ Mn (M2 (A)). Then  aij = αij 12 = βki ∗11 βkj 12 + βki ∗21 βkj 22 . k

Moreover,
Σ_{k,j} ((β_kj)_{11}∗ (β_kj)_{11} + (β_kj)_{21}∗ (β_kj)_{21}) = Σ_j (α_jj)_{11} = V(1)_{11},
and hence
‖Σ_{k,j} ((β_kj)_{11}∗ (β_kj)_{11} + (β_kj)_{21}∗ (β_kj)_{21})‖ = ‖V(1)_{11}‖ = ‖V11‖ < 1.
Similarly,
‖Σ_{k,j} ((β_kj)_{12}∗ (β_kj)_{12} + (β_kj)_{22}∗ (β_kj)_{22})‖ ≤ ‖V22‖ < 1.

Thus we “almost” conclude as desired that the right-hand side of (6.31) is < 1. The only trouble is that we obtain a representation of the form a = a ∗ b with matrices a,b of size 2n × n instead of n × n. This is easy to fix using the following elementary factorization (essentially the polar decomposition): assuming A unital for simplicity, for any ε > 0 we set γ = (a ∗ a + ε1)−1/2 a ∗ b (b∗ b + ε1)−1/2 . Let x = (a ∗ a + ε1)−1/2 a ∗ and y = b(b∗ b + ε1)−1/2 . Then clearly x = xx∗ 1/2 ≤ 1, y = y ∗ y1/2 ≤ 1 and hence γ  = xy ≤ 1. Then we have a ∗ b = (a ∗ a + ε1)1/2 γ (b∗ b + ε1)1/2 = a ∗ b, where a  = (a ∗ a + ε1)1/2 and b = γ (b∗ b + ε1)1/2 . But now a ,b are both in ∗  ∗ ε1. It follows Mn (A) and suchthat a ∗ a  ≤ a ∗ a + ε1 and   b b∗ ≤ b b +  that   ∗  ∗  ∗ b  + n2 ε. 2   ≤    + n ≤ a a a a ε and b b b kj kj kj kj kj kj kj kj kj kj kj kj Since ε > 0 is arbitrary this proves that the right-hand side of (6.31) is < 1. By homogeneity, this completes the proof.
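The ε-regularization used in this last step can be tested numerically. The sketch below is ours (it works with rectangular complex matrices of the shape arising in the proof, and the helper name herm_power is our own); it checks that γ is a contraction and that the new n × n factors reproduce the product a∗b exactly.

import numpy as np

# Regularized "polar decomposition" trick from the end of the proof of Lemma 6.24:
# from 2n x n matrices a, b and eps > 0, put gamma = (a*a+eps)^{-1/2} a* b (b*b+eps)^{-1/2},
# a' = (a*a+eps)^{1/2}, b' = gamma (b*b+eps)^{1/2}; then ||gamma|| <= 1 and a'* b' = a* b.
rng = np.random.default_rng(1)
n, eps = 3, 1e-3
a = rng.standard_normal((2 * n, n)) + 1j * rng.standard_normal((2 * n, n))
b = rng.standard_normal((2 * n, n)) + 1j * rng.standard_normal((2 * n, n))

def herm_power(h, p):
    # h Hermitian positive definite; return h**p via its spectral decomposition
    w, u = np.linalg.eigh(h)
    return (u * w**p) @ u.conj().T

aa = a.conj().T @ a + eps * np.eye(n)
bb = b.conj().T @ b + eps * np.eye(n)
gamma = herm_power(aa, -0.5) @ a.conj().T @ b @ herm_power(bb, -0.5)
a_new = herm_power(aa, 0.5)            # self-adjoint, so a_new* = a_new
b_new = gamma @ herm_power(bb, 0.5)

assert np.linalg.norm(gamma, 2) <= 1 + 1e-9                   # contraction
assert np.allclose(a_new.conj().T @ b_new, a.conj().T @ b)    # same product, now with n x n factors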


For emphasis, we single out the next example, which will play an important role in the sequel. The reader should compare this to the earlier description of the unit ball of CB(ℓ_∞^n, A) in (3.7).
Lemma 6.25 Consider a linear mapping T : ℓ_∞^n → A into a C∗-algebra A. Let x_j = T(e_j) (1 ≤ j ≤ n). Then
‖T‖_dec = inf{ ‖Σ_j a_j∗ a_j‖^{1/2} ‖Σ_j b_j∗ b_j‖^{1/2} | a_j, b_j ∈ A, x_j = a_j∗ b_j }.   (6.32)
Proof We may identify ℓ_∞^n with the C∗-subalgebra of diagonal matrices in Mn. We know there is a contractive c.p. projection P : Mn → ℓ_∞^n. By (i) in Lemma 6.5 both P and the inclusion ℓ_∞^n → Mn have dec-norm = 1. By (6.7) we have
‖T‖_{D(ℓ_∞^n, A)} = ‖TP‖_{D(Mn, A)}.   (6.33)
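The contractive c.p. projection P : Mn → ℓ_∞^n invoked here is simply compression to the diagonal. A convenient way to see its complete positivity is to write it as an average of conjugations by diagonal unitaries; the following sketch (ours, with scalar matrix entries standing in for a general C∗-algebra) checks that identity numerically.

import numpy as np

# Compression to the diagonal as an average of unitary conjugations x -> D* x D
# over the diagonal unitaries built from n-th roots of unity.
rng = np.random.default_rng(3)
n = 4
x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

diag_proj = np.diag(np.diag(x))                        # P(x)
omega = np.exp(2j * np.pi / n)
avg = np.zeros((n, n), dtype=complex)
for k in range(n):
    D = np.diag(omega ** (k * np.arange(n)))
    avg += D.conj().T @ x @ D / n
assert np.allclose(avg, diag_proj)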

Using this it is easy to deduce (6.32) from (6.31). We leave the details to the reader. Lemma 6.26 In the situation of Theorem 6.15, assume that A is a unital C ∗ algebra, let F be another operator space and B another C ∗ -algebra. Consider u1 ∈ CB(E,F ) and u2 ∈ D(A,B). Then for all y in E ⊗ A, we have δ((u1 ⊗ u2 )(y)) ≤ u1 cb u2 dec δ(y).

(6.34)

Proof Assume u1 cb = 1. Note that u1 : E → F extends to a C ∗ representation from C ∗ E to C ∗ F . Then (6.34) is an immediate consequence of (6.15) and (6.18). In the particular cases when E = Mn∗ or E = n∗ ∞ , Theorem 6.15 becomes: Corollary 6.27 In Theorem 6.15, assume A is a unital C ∗ -algebra and let E = Mn∗ (resp. E = n∗ ∞ ) for some n ≥ 1, viewed as dual operator spaces in the sense of §2.4. Then for all y in E ⊗ A, with associated linear map y : n∞ → A), we have  y : Mn → A (resp.  δ(y) =  y D(Mn,A)

(resp. δ(y) = ‖ỹ‖_{D(ℓ_∞^n, A)}).   (6.35)

Proof Let y ∈ E ⊗A when E = Mn∗ . Let  y (eij ) = aij . Let (ξij ) be biorthogonal  ξij ⊗ aij . Assume a = a ∗ b as to the standard basis (eij ) in Mn , so that y = in (6.31). Then   ∗ ∗ ξij ⊗ aki bkj = 1k=l ξij ⊗ aki blj y= i,k,j i,k,l,j  ∗ = z((i,k),(j,l))aki blj i,k,l,j


with z((i,k),(j,l)) = 1_{k=l} ξ_ij. By definition of δ(y) we have
δ(y) ≤ ‖Σ_{i,k,l,j} e_ij ⊗ e_kl ⊗ z((i,k),(j,l))‖_{Mn⊗min Mn⊗min E∗} ‖(a_ki)‖_C ‖(b_lj)‖_C
and
‖Σ_{i,k,l,j} e_ij ⊗ e_kl ⊗ z((i,k),(j,l))‖_{Mn⊗min Mn⊗min E∗} = ‖Σ_{i,j} e_ij ⊗ (Σ_k e_kk) ⊗ ξ_ij‖_{Mn⊗min Mn⊗min E∗} = ‖Σ_{i,j} e_ij ⊗ ξ_ij‖_{Mn⊗min E∗} = ‖Id_E‖_cb = 1.

Thus we obtain δ(y) ≤  y dec by (6.31). We now turn to the reverse inequality N (which incidentally is valid for any E). If we have y = i,j =1 xij ai bj for some N ≥ 1, then let v : E ∗ → MN be the map defined by v(ξ ) = (ξ(xij )) (ξ ∈ E ∗ ). Note vcb = xMN (E) by (2.14). Let w : MN → A be defined by w(eij ) = ai bj . Then by (6.11) 1/2  1/2      wdec ≤  bj∗ bj  . ai ai∗  

(6.36)

Whence since  y = wv by part (ii) in Proposition 6.6 and Proposition 6.7:  y dec ≤ vdec wdec = vcb wdec  1/2  1/2     ai ai∗   bj∗ bj  . ≤ xMN (E)  Taking the infimum over all N and all possible (ai ) and (bj ), we obtain  y dec ≤ δ(y). The case when E = n∞ is proved similarly but using (6.32) instead of (6.31). We invite the reader to compare the following fact with Lemma 3.10. Lemma 6.28 Let F be a free group with (free) generators (gi )i∈I and let Ui = UF (gi ) ∈ C ∗ (F)(i ∈ I ). We augment I by one element by setting formally I˙ = I ∪ {0}, and we set g0 equal to the unit in F so that U0 = UF (g0 ) = 1. Let (xi )i∈I˙ be a finitely supported family in a C ∗ -algebra A and let T : ∞ (I˙) → A  be the mapping defined by T ((αi )i∈I˙ ) = i∈I˙ αi xi . Then we have     U ⊗ x = T dec . (6.37)   ∗ i i ˙ i∈I

C (F)⊗max A

 Ui ⊗ xi ∈ E ⊗ A. Recall Proof Let E = span[Ui , | i ∈ I˙]. Let y = that, by Theorem 6.15, δ(y) = (y), and hence by Corollary 6.27, we have T dec = (y), and by definition (recalling Lemma 3.9)


(y) = sup ‖Σ_{i∈İ} v_i π(x_i)‖
where the supremum runs over all representations π : A → B(H) and all families (v_i) of contractions in π(A)′ ⊂ B(H). By Remark 3.11 (Russo–Dye), the latter supremum remains unchanged if we let it run only over all the families (v_i) of unitaries in π(A)′. Equivalently, by (4.6) this means:
(y) = ‖Σ_{i∈İ} U_i ⊗ x_i‖_{C∗(F)⊗max A},

which establishes (6.37). When A is a von Neumann algebra with a separating vector the preceding lemma can be significantly reinforced and it gives us, in the case |İ| = n, a rather pretty formula for the dec-norm of a mapping T : ℓ_∞^n → M.
Theorem 6.29 In the situation of the preceding lemma, assume that A = M where M is a von Neumann algebra with either infinite multiplicity (in the sense of Corollary 6.21) or with a separating vector. Then
‖Σ_{i∈İ} U_i ⊗ x_i‖_{C∗(F)⊗max M} = ‖T‖_dec = sup{ ‖Σ_{i∈İ} u_i x_i‖ | u_i ∈ U(M′) }.

(6.38) Proof Since (xi ) is assumed finitely supported, we may assume |I | < ∞, say |I | = n. Let T : M  ⊗ ∞ (I˙) → B(H ) be the mapping defined as in Theorem    yi xi . By (6.29) and (6.30) and Corollary 6.23 we 6.20 by T yi ⊗ ei = have T dec = T, which is the second equality in (6.38). The first one just repeats (6.37). Remark 6.30 Note that the second equality in (6.38) does not hold in general. For instance if M = B(2 ) then M  = C1, and the third term in (6.38) is equal to T , which in general is < T cb , and a fortiori < T dec (see Remark 1.4). Remark 6.31 Let C = Mn ∗ C ∗ (Z). By Proposition 2.18, we have C  Mn ⊗min Bn = Mn (Bn ) where Bn is a unital C ∗ -algebra called the Brown algebra for Larry Brown who introduced it in [36] (see [54] for more on Bn ). In the isomorphism C  Mn ⊗min Bn = Mn (Bn ), the embedding  eij ⊗ Uij ∈ U (Mn (Bn )) Mn ⊂ C becomes x → x ⊗ 1. Let U = be the unitary corresponding to the (single) generator of Z in C ∗ (Z) ⊂ C. Let A be a C ∗ -algebra. As before, let E = Mn∗ (operator space dual), let ξij ∈ E be biorthogonal to the usual basis (eij ) of Mn , let aij ∈ A and  y = ni,j =1 ξij ⊗ aij ∈ E ⊗ A. Then         ξij ⊗ aij  ∗ Uij ⊗ aij  = . (6.39)  y D(Mn,A) =  C E⊗max A

Bn ⊗max A


The first equality is the same as (6.35). We will prove the second one. For any unital C ∗ -algebra D ⊂ B(H ), there is a 1–1 correspondence π →  eij ⊗ π(Uij ) between the set of unital ∗-homomorphisms π : Bn → D and U (Mn (D)). Indeed, π ↔ IdMn ⊗ π : Mn (Bn ) → Mn (D) and since Mn (Bn )  Mn ∗ C ∗ (Z), each IdMn ⊗ π is determined by a ∗-homomorphism ρ : C ∗ (Z) → Mn (D) coupled with x → x ⊗ 1 on Mn , and of course ρ is determined by its value on the single generator of Z, and hence by a single element of U (Mn (D)). Therefore for any x ∈ Mn (B(H ))           uij xij  u ∈ U (Mn (D)) . sup  π(Uij )xij  π : Bn → D = sup  By Remark 3.11 (Russo–Dye) applied to Mn (D), we have           zij xij  z ∈ BMn (D) . uij xij  u ∈ U (Mn (D)) = sup  sup  Let σ : A → B(H ) be a ∗-homomorphism. Applying this to D = σ (A) and xij = σ (aij ), we find      π(Uij )σ (aij ) π : Bn → σ (A) sup       zij σ (aij ) z ∈ BMn (σ (A) ) , = sup  and since Mn (D) = E ∗ ⊗min D = CB(E,D) isometrically when E = Mn∗  (see (2.17)) the last term is the same as  ξij ⊗ aij C ∗ E⊗max A . This proves (6.39). In the situation of Theorem 6.29 (with A = M) the norms in (6.39) are equal to      uij aij  u ∈ U (Mn (M  )) . sup  This follows from Theorem 6.20 just like for (6.38). Remark 6.32 (Computing some dec-norms) Assume |I | = n (and hence |I˙| = n + 1). In the preceding theorem, consider the case when A = Cλ∗ (FI ) and xi = λFI (gi ). Then by (4.27) we have T dec = n+1, and by (3.7),(4.22) and (3.21) √ T cb = 2 n, so that T dec = T cb when n > 1. The same equalities clearly hold if Cλ∗ (FI ) is replaced by MFI . More generally, assume that A = M is a finite von Neumann algebra, as defined in §11.2. Then if (xi ) is any family of unitaries in M we have T dec = n + 1.


Indeed, we have clearly ‖Σ_{i∈İ} U_i ⊗ x_i‖_{C∗(F)⊗max M} ≥ ‖Σ_{i∈İ} x_i∗ ⊗ x_i‖_{M^op⊗max M}, and using the left and right multiplications L and R on L2(τ) we find
‖Σ_{i∈İ} x_i∗ ⊗ x_i‖_{M^op⊗max M} ≥ ⟨1, Σ_{i∈İ} R(x_i∗)L(x_i)1⟩ = n + 1.
Actually, the same reasoning shows that for any family of scalars (α_i)_{i∈İ} we have
‖Σ_{i∈İ} α_i U_i ⊗ x_i‖_{C∗(F)⊗max M} ≥ ‖Σ_{i∈İ} α_i x_i∗ ⊗ x_i‖_{M^op⊗max M} ≥ |Σ_{i∈İ} α_i|,

and by the triangle inequality this becomes an equality when αi ≥ 0 for all i ∈ I˙. Remark 6.33 (The exceptional case when |I˙| = 2) The case n = 2 is in sharp contrast with the preceding remark : Any linear map T : 2∞ → A into an arbitrary C ∗ -algebra satisfies T dec = T cb = T . We already observed T cb = T  in Remark 3.13. Since the C ∗ -algebra generated by the unit and a single unitary is commutative and hence nuclear, we can replace the max-norm by the min-norm in (6.37), then the first equality in (3.7) shows that T dec = T cb . Remark 6.34 By Proposition 6.7 if M is injective then T dec = T cb for any n and any T : n∞ → M. At the end of [104] Haagerup asks whether the converse holds, and even simply for n = 3. We return to this open problem later on in Corollary 23.5 and Remark 23.4. For the record, we now turn to decomposable multipliers on C ∗ (G). They turn out to be the same as the bounded ones, as the next remark shows. Remark 6.35 (Decomposable multipliers on C ∗ (G)) Recall that a bounded linear map u : C ∗ (G) → C ∗ (G) is called a multiplier if there is a (necessarily bounded) function ϕ : G → C such that u(UG (t)) = ϕ(t)UG (t). In that case, we claim that u ∈ D(C ∗ (G),C ∗ (G)) and udec = u. Indeed, assuming u = 1, by Proposition 3.3 we can write ϕ(t) = η,π(t)ξ  where π is a unitary representation on G and η = ξ  = 1. Let S1 (resp. S2 ) be the bounded multiplier mapping on C ∗ (G) associated to the function ϕ1 (t) = η,π(t)η (resp. ϕ2 (t) = ξ,π(t)ξ ). By Proposition 3.3 S1  = S2  = 1. Let using (6.4)that the linear A = C ∗ (G). It is easy to check  map from A   to M 2 (A) π(a) π(a) S1 (a) u(a) η = (η ξ ) that takes a = UG (t) to ⊗a u(a ∗ )∗ S2 (a) π(a) π(a) ξ is c.p. and hence we have udec ≤ maxj =1,2 {Sj } = 1, which proves the claim.
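For the example treated in Remark 6.32, the two norms computed there, ‖T‖_dec = n + 1 and ‖T‖_cb = 2√n, coincide only for n = 1: this is just the arithmetic/geometric mean inequality (6.23) applied with α = n, β = 1. The following one-line check of the numerology is ours.

import numpy as np

for n in range(1, 9):
    dec, cb = n + 1, 2 * np.sqrt(n)
    assert dec + 1e-12 >= cb        # AM-GM: n + 1 >= 2*sqrt(n), equality only at n = 1
    print(n, dec, round(cb, 3), dec > cb + 1e-9)
# n = 1 gives 2 = 2.0; every n > 1 gives a strict gap between the dec- and cb-norms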


The next two results are the analogue of Proposition 3.25 but with C ∗ (G) in place of Cλ∗ (G). Proposition 6.36 Let G be a discrete group. Let T ∈ D(C ∗ (G),MG ). Define ψT : G → C by ψT (t) = δt ,T (UG (t))(δe ). Then the linear mapping that takes UG (t) to ψT (t)UG (t) extends to a decomposable multiplier T ∈ D(C ∗ (G),C ∗ (G)) with T = Tcb = Tdec ≤ T dec . ˙ G : C ∗ (G) → MG be the natural ∗-homomorphism taking UG (t) Proof Let Q to λG (t). Let TG ∈ D(C ∗ (G) ⊗max C ∗ (G),MG ⊗max MG ) ˙ G given by (6.15) so that TG dec ≤ T dec . denote the extension of T ⊗ Q  Let T = PG TG JG where PG and JG are as in Theorem 4.11. The definitions of PG and JG show that T is the multiplier corresponding to ψT and Tdec ≤ T dec by (6.7). By (3.4) we have T = Tcb , and T = Tdec by Remark 6.35. Corollary 6.37 In the preceding situation, there is a contractive projection Q from D(C ∗ (G),C ∗ (G)) onto the subspace formed of all the multipliers in D(C ∗ (G),C ∗ (G)). ˙ G u ∈ D(C ∗ (G),MG ) by (6.7). Proof Let u ∈ D(C ∗ (G),C ∗ (G)), then T = Q  Let Q(u) = T . Then Q(u)dec ≤ udec by (6.7) and Q(u) = u if u is a multiplier.

6.5 Notes and remarks The results of §6.1 on the dec-norm come mainly from [104]. Those of §6.2 come from [208]. The δ-norm was developed there to present the equivalence nuclear ⇔ CPAP (due to Choi–Effros and Kirchberg) in a framework better suited for linear maps on operator spaces. We exploit this in the sequel in §10.2. Lemma 6.28 which appeared in [204] was directly inspired by a previous result of Haagerup in [104, lemma 3.5], which was essentially the second equality in (6.38). The examples of §6.4 are variations suggested by the δ-norm with roots in Haagerup’s ideas in [104]. Haagerup in [104, proposition 3.4] states and proves Remark 6.33 for maps on 2∞ with values in a von Neumann algebra.

7 Tensorizing maps and functorial properties

In this chapter, we introduce a major tool to study tensor products. We will try to identify the linear mappings u : A → B between C ∗ -algebras that are “tensorizing” meaning by this that, for any other C ∗ -algebra C, IdC ⊗ u gives rise to a bounded map that is bounded from C ⊗ A to C ⊗ B when the domain and the range are equipped with given C ∗ -norms, respectively  · α and  · β . In order for this to make sense, of course we need  · α and  · β defined on C ⊗ A and C ⊗ B for any C. More generally, we will consider the case of maps u that are defined only on a subspace E ⊂ A (with E ⊗ C equipped with the induced norm), but for simplicity we will restrict ourselves to the minimal and maximal C ∗ -norms.

7.1 (α → β)-tensorizing linear maps This is meant as a quick preliminary overview of the topic. Some of the main statements are proved in detail later on in this volume in §10.2. Definition 7.1 Let A,B be C ∗ -algebras. Let α,β be one of the symbols min or max. Let E ⊂ A be an operator subspace of a C ∗ -algebra A. We will say that a linear map u : E → B is (α → β)-tensorizing if for any C ∗ -algebra C we have ∀x ∈ C ⊗ E

‖(Id_C ⊗ u)(x)‖_β ≤ ‖x‖_α.
When this holds, Id_C ⊗ u extends to a contraction from the closure of C ⊗ E in C ⊗α A into C ⊗β B, as indicated in the following diagram:
C ⊗α A ⊇ (C ⊗ E, ‖·‖_α) --Id_C⊗u--> C ⊗β B.


Remark It should be emphasized that this notion, when α = max, depends on the particular embedding E ⊂ A under consideration.
The preceding definition equivalently means that
sup ‖Id_C ⊗ u : (C ⊗ E, ‖·‖_α) → C ⊗β B‖ ≤ 1   (7.1)
where the sup runs over all possible C’s. This can be automatically strengthened to
sup ‖Id_C ⊗ u : (C ⊗ E, ‖·‖_α) → C ⊗β B‖_cb ≤ 1.   (7.2)

Indeed, replacing C by Mn (C) gives control of the cb-norm, and Mn (C) ⊗α A = Mn (C ⊗α A) when either α = min or α = max, and similarly for β. By Proposition 1.11, the case (min , min) is clear: Proposition 7.2 The map u is (min → min)-tensorizing if and only if ucb ≤ 1. Note that for u : E → B to be (min → min)-tensorizing, by Proposition 1.11, it suffices to consider the min-tensor product with the C ∗ -algebra C = B. The case of (max → max) is given by the following remarkable Theorem 7.6 due to Kirchberg [161]. Let us denote by iB : B → B ∗∗ the canonical inclusion map of B into B ∗∗ viewed as a von Neumann algebra as usual. See §A.16 for background on this. Anticipating a little on the subsequent Proposition 7.26 and Corollary 7.27, we will need the following preliminary fact on the bidual. Lemma 7.3 Let B be any C ∗ -algebra. Then for any C ∗ -algebra C we have ∀x ∈ C ⊗ B

‖x‖_{C⊗max B} = ‖x‖_{C⊗max B∗∗}.
Proof Let (π1, π2) be a pair of representations with commuting ranges taking values in some B(H), with π1 (resp. π2) defined on C (resp. B). Let π̈2 : B∗∗ → π1(C)′ be as in §A.16. By (4.6), we have ‖(π1 · π2)(x)‖_{B(H)} = ‖(π1 · π̈2)(x)‖_{B(H)} ≤ ‖x‖_{C⊗max B∗∗}, whence ‖x‖_{C⊗max B} ≤ ‖x‖_{C⊗max B∗∗}, and the converse is obvious by (4.6).
Theorem 7.4 The map u is (max → max)-tensorizing if and only if u admits a decomposable extension ũ : A → B∗∗ with ‖ũ‖_dec ≤ 1, i.e. such that the following diagram commutes:
E ⊂ A,   ũ : A → B∗∗,   ũ|_E = i_B u   (where u : E → B and i_B : B → B∗∗).


Remark 7.5 We will see in Corollary 7.16 that for u : E → B to be (max → max)-tensorizing it suffices to consider the max-tensor product with the C ∗ -algebra C = C . Proof of Theorem 7.4 The idea is to apply Theorem 6.20 to the map iB u. To udec ≤ 1. prove the if part, assume we have an extension  u : A → B ∗∗ with  By (6.13),  u and a fortiori its restriction iB u is (max → max)-tensorizing. By Lemma 7.3, the map u itself is (max → max)-tensorizing. Conversely, assume u is (max → max)-tensorizing. We will use C = M  in order to apply Theorem 6.20. Let M = B ∗∗ viewed as a von Neumann algebra embedded in u : M  ⊗max E → B(H ) be B(H ) for some H , so that B ⊂ B ∗∗ ⊂ B(H ). Let  as defined in Theorem 6.20. Since (7.1) implies (7.2) we have IdC ⊗ u : C ⊗ E

.max

→ C ⊗max Bcb ≤ 1

for all C. Then, choosing C = M  , we observe that  u is the composition of the map IdM  ⊗ u with the ∗-homomorphism σ : M  ⊗max B → B(H ) defined by σ (c ⊗ b) = cb = bc. This shows that  u : M  ⊗max E → B(H )cb ≤ 1. Then Theorem 6.20 applied to the map iB u : E → M = B ∗∗ shows that there is an udec ≤ 1. extension  u : A → B ∗∗ with  In the particular case when E = A, we must have  u = iB u, whence: Theorem 7.6 ([161]) Let A,B be C ∗ -algebras and let u : A → B be a linear map. Then u is (max → max)-tensorizing if and only if iB u : A → B ∗∗ is decomposable with iB udec ≤ 1. Remark 7.7 It is easy to check that this holds if and only if u∗∗ D(A∗∗,B ∗∗ ) ≤ 1. Note also that u∗∗ = iB¨u with the notation in §A.16. We state here for future reference a consequence of (6.13): Corollary 7.8 Any c.p. map u with u ≤ 1 between C ∗ -algebras is (max → max)-tensorizing. Proof This follows from Corollary 4.18 (and udec = u by (ii) in Proposition 6.6). Remark 7.9 Let A,B,G be C ∗ -algebras. Let u : A → B be an (α → β)tensorizing linear map (where α or β can be either min or max). If u is c.p. then the mapping IdG ⊗ u extends to a c.p. map from G ⊗α A to G ⊗β B. This extends Corollary 1.26. Indeed, by Corollary 4.18 for any t ∈ G⊗A that is of the form t = a ∗ a with a ∈ G ⊗ A we have (IdG ⊗ u)(t) ∈ (G ⊗max B)+ and a fortiori (IdG ⊗ u)(t) ∈ (G ⊗β B)+ . But since IdG ⊗ u is (α → β)-bounded and the set of such t’s

7.1 (α → β)-tensorizing linear maps

139

is clearly dense in G ⊗α A, it follows that IdG ⊗ u extends to a positive map from G ⊗α A to G ⊗β B. Replacing G by Mn (G) the assertion follows. The case (min → max) is closely related to the notion of nuclearity: a C ∗ -algebra A is nuclear if and only if the identity map on A is (min → max)tensorizing. Kirchberg and Choi–Effros [45, 154] independently showed that this implies a strong approximation property for A called the CPAP (see Definition 4.8). The following statement for more general linear mappings (in place of IdA ) originates in their work. We postpone its proof to §10.2 (see Theorem 10.14). Theorem 7.10 Let u : E → B be a linear mapping from an operator space to a C ∗ -algebra. The following assertions are equivalent. (i) The map u is (min → max)-tensorizing. (ii) There is a net of finite rank maps ui : E → B admitting factorizations through matrix algebras of the form Mn i vi

wi ui

E

B

with vi cb wi dec ≤ 1 such that ui = wi vi converges pointwise to u. (iii) There is a net ui : A → B of finite rank maps with sup ui dec ≤ 1 that tends pointwise to u when restricted to E. In particular when E = A we get (see Corollary 10.16 for a detailed proof): Theorem 7.11 Let u : A → B be a completely positive and unital linear mapping between two unital C ∗ -algebras. The following assertions are equivalent. (i) u is (min → max)-tensorizing. (ii) There is a net of finite rank maps (ui ) admitting factorizations through matrix algebras of the form Mn i vi

A

wi ui

B

where vi ,wi are c.p. maps with vi wi  ≤ 1 such that ui = wi vi converges pointwise to u. (iii) There is a net ui : A → B of finite rank c.p. maps that tends pointwise to u.

140

Tensorizing maps and functorial properties

Lastly, when E = A = B, we obtain the classical characterization of nuclear C ∗ -algebras: Corollary 7.12 The following properties of a C ∗ -algebra A are equivalent. (i) A is nuclear, (ii) A has the CPAP, i.e. the identity on A is the pointwise limit of a net of finite rank c.p. maps. As for the remaining case max → min, we leave it as an exercise for the reader. The answer is the same as for min → min (hint: take C = Mn ). Let us illustrate this with group C ∗ -algebras: Corollary 7.13 (Nuclearity versus amenability) The following properties of a discrete group G are equivalent. (i) (ii) (iii) (iv)

G is amenable. Cλ∗ (G) is nuclear. C ∗ (G) is nuclear. The canonical quotient map QG : C ∗ (G) → Cλ∗ (G) is (min → max)-tensorizing. ˙ G : C ∗ (G) → MG is (v) The natural ∗-homomorphism Q (min → max)-tensorizing.

Note that the group G is amenable if and only if Cλ∗ (G) = C ∗ (G), by Theorem 3.30. Proof (i) ⇒ (ii) (resp. (i) ⇒ (iii)) follow from the preceding corollary since by Lemma 3.34 Cλ∗ (G) (resp. C ∗ (G)) has the CPAP when G is amenable. Assume (ii). Then we claim QG : C ∗ (G) → Cλ∗ (G) is (min → max)-tensorizing; indeed, being a ∗-homomorphism, it is clearly (min → min)-tensorizing (and also (max → max)-tensorizing), so composing QG with IdCλ∗ (G) which is (min → max)-tensorizing gives the claim. Thus (ii) ⇒ (iv). A similar argument using IdC ∗ (G) (and (max → max) for QG ) shows (ii) ⇒ (iv). (iv) ⇒ (v) is trivial. Assume (v). Then for any f ∈ C[G], since ˙ G (UG (t)) = λG (t) we have Q         ≤  f (t)λG (t)⊗UG (t) ∗  f (t)λG (t)⊗λG (t) ∗ ∗ Cλ (G)⊗max MG

Cλ (G)⊗min C (G)

and hence by (4.22) and (4.25) we have      f (t)λG (t) f (t) ≤  and G is amenable by Theorem 3.30. This shows (v) ⇒ (i) which completes the proof.

7.2  max is projective (i.e. exact) but not injective

141

7.2  max is projective (i.e. exact) but not injective We first observe that for the algebraic tensor product we have no problem. It is both injective and projective. In fact if A,B and I ⊂ A are merely vector spaces we have a linear isomorphism (A/I) ⊗ B = (A ⊗ B)/(I ⊗ B).

(7.3)

When A,B are ∗-algebras and I ⊂ A is a self-adjoint ideal, this isomorphism is also a ∗-homomorphism. We start with a basic fact, which will be generalized in Proposition 7.19. Lemma 7.14 Let I ⊂ A be a (closed, self-adjoint, two-sided) ideal in a C ∗ -algebra. We have then (isometrically) for any C ∗ -algebra B I ⊗max B ⊂ A ⊗max B.

(7.4)

Equivalently, I ⊗max B can be identified with the closure of I ⊗B in A⊗max B. Proof Let (xi ) denote the net formed by all xi ∈ I+ such that xi  < 1. We view these as a generalized sequence (i ≤ j means xi ≤ xj ). It is well known (see §A.15) that xxi − x → 0 and xi x − x → 0 for any x ∈ I when √ √ √ i → ∞ (along the net). Since xi ≤ xi (and xi ∈ I+ with  xi  < 1), √ √ we also have x xi − x +  xi x − x → 0, and by the triangle inequality √ √ √ √ √ √ √  xi x xi − x ≤  xi x xi − x xi  + x xi − x ≤  xi x − x + √ x xi − x. Therefore √ √ (7.5) ∀x ∈ I  xi x xi − x → 0. Let t ∈ I ⊗B. Assume tA⊗max B < 1. Let Pi : A → I be the c.p. map defined √ √ by Pi (x) = xi x xi for x ∈ A. By (4.30) we have (Pi ⊗ IdB )(t)I ⊗max B ≤ tA⊗max B < 1. By (7.5) (Pi ⊗ IdB )(t) − tI ⊗max B → 0 and hence tI ⊗max B ≤ 1. By homogeneity this shows that tI ⊗max B ≤ tA⊗max B , and the reverse inequality is trivial, proving (7.4). Proposition 7.15 (Exactness of the max-tensor product) Let A,B be C ∗ -algebras and let I ⊂ A be a (closed, self-adjoint, two-sided) ideal. We have then (isometrically) (A/I) ⊗max B = (A ⊗max B)/(I ⊗max B). In other words the sequence {0} → I ⊗max B → A ⊗max B → (A/I) ⊗max B → {0}

(7.6)

142

Tensorizing maps and functorial properties

is exact. More precisely, any x in (A/I) ⊗ B with xmax < 1 admits a lifting  x in A ⊗ B such that  x max < 1. Proof To verify (7.6), we use (7.3). Let ρ : A ⊗max B → (A/I) ⊗max B denote the natural representation (obtained from A → A/I after tensoring with the identity of B). Obviously, ρ vanishes on I ⊗max B. Hence denoting by Q : A ⊗max B → (A ⊗max B)/(I ⊗max B) the quotient map, we have a factorization of ρ of the form ρ = π Q where π : (A ⊗max B)/(I ⊗max B)→(A/I) ⊗max B is a ∗-homomorphism such that π : (A ⊗max B)/(I ⊗max B)→(A/I) ⊗max B ≤ 1.

(7.7)

By Lemma 4.27 we have an injective ∗-homomorphism (A ⊗ B)/(I ⊗ B) ⊂ (A ⊗max B)/(I ⊗max B). Thus the norm of (A⊗max B)/(I ⊗max B) induces a C ∗ -norm on (A⊗B)/(I ⊗ B), but by (7.3) we may view it as a C ∗ -norm on (A/I)⊗B. By (7.7) the latter C ∗ -norm dominates the maximal C ∗ -norm on (A/I) ⊗ B, and hence it must coincide with it. Since we know (Lemma 7.14)) that I ⊗max B is the closure of I ⊗ B in A ⊗max B, the last assertion follows from (4.34). Corollary 7.16 Let E ⊂ A be an operator subspace of a C ∗ -algebra A and let u : E → B be a linear map into another C ∗ -algebra B. The following are equivalent. (i) The map u is (max → max)-tensorizing i.e. for any C ∗ -algebra C we have ∀x ∈ C ⊗ E

(IdC ⊗ u)(x)C⊗max B ≤ xC⊗max A .

(ii) The same as (i) holds but restricted to C = C . Proof If (ii) holds, we claim that it holds when C = C ∗ (F) for any free group F. Indeed, since we may assume x ∈ E1 ⊗ E for some separable subspace E1 ⊂ C ∗ (F), this is an immediate consequence of Lemma 3.8. Now, since any unital C is a quotient of C ∗ (F) for a suitable F (see Proposition 3.39), (i) for such C’s follows from the preceding proposition and Lemma 4.26 applied with α = max. When C is not unital, we may view it as an ideal in its  ⊗max D for all D  and by Lemma 7.14 we have C ⊗max D ⊂ C unitization C  implies (i) for C. This in particular for both D = A and D = B. Thus (i) for C shows (ii) ⇒ (i). Proposition 7.15 shows that the maximal tensor product behaves well with respect to quotients. However, in sharp contrast with the minimal norm, it does

7.2  max is projective (i.e. exact) but not injective

143

not behave well at all with respect to subalgebras, i.e. when D ⊂ A is a C ∗ subalgebra of a C ∗ -algebra and C is another C ∗ -algebra, the ∗-homomorphism C ⊗max D → C ⊗max A is in general NOT injective. Indeed, if this holds and if the pair (A,C) is nuclear, it follows that the pair (D,C) is nuclear. Taking C = C , this would imply that the WEP is inherited by subalgebras, and hence that any C ∗ -subalgebra A ⊂ B(H ) is WEP, which is of course absurd. We will now give an explicit example showing that this fails for the inclusion Cλ∗ (G) ⊂ B(2 (G)) with C = C ∗ (G) whenever G is a nonamenable discrete group such that (C ∗ (G),B(H )) is a nuclear pair. With the terminology of §9.4 this means that C = C ∗ (G) has the LLP. In particular this holds when G = Fn for n ≥ 2. Let G be a discrete group and let S ⊂ G be a finite subset and let f : S → R+ be any function. We will draw this from our repertoire in §4.4. Let  f (t)UG (t) ⊗ λG (t) ∈ C ∗ (G) ⊗ Cλ∗ (G) ⊂ C ∗ (G) ⊗ B(2 (G)). x= t∈S

By (4.27) xC ∗ (G)⊗max Cλ∗ (G) =

 t∈S

f (t),

but since the min and max norms coincide by assumption on C ∗ (G) ⊗ B(2 (G)) we have xC ∗ (G)⊗max B(2 (G)) = xC ∗ (G)⊗min B(2 (G)) = xC ∗ (G)⊗min Cλ∗ (G), and hence by (4.22) (i.e. by Fell’s absorption principle)     = f (t)λG (t) . t∈S

B(2 (G))

By Kesten’s criterion (see Theorem 3.30),   if G is not amenable, there is a finite subset S ⊂ G such that |S| >  t∈S λG (t)B( (G)), thus we obtain 2 as announced that if D = Cλ∗ (G), A = B(2 (G)) and C = C ∗ (G), the ∗-homomorphism C ⊗max D → C ⊗max A is not injective (since it is not isometric). The same proof works for the inclusion of D = MG in B(2 (G)). More explicitly, if we consider the inclusion Cλ∗ (Fn ) ⊂ B(2 (Fn )) then we have:   n √   (7.8) UFn (gj ) ⊗ λFn (gj ) ∗ =2 n I ⊗ I + 1

but

C (Fn )⊗max B(2 (Fn ))

  n   UFn (gj ) ⊗ λFn (gj ) I ⊗ I + 1

and these are different for n > 1.

C ∗ (Fn )⊗max Cλ∗ (Fn )

=n+1

(7.9)

144

Tensorizing maps and functorial properties

More generally, using a different reasoning, we have: Proposition 7.17 Consider D = Cλ∗ (G) and A = B(2 (G)). If G is not amenable, then the inclusion D ⊂ A is not max-injective (in the sense of Definition 7.18). In particular, this holds when G = Fn for n ≥ 2. Proof Indeed, assume that D ⊂ A is max-injective. Then by iteration, D ⊗max D¯ → A ⊗max D¯ → A ⊗max A¯ is isometric. Anticipating a bit, we will see in Theorem 23.7 that, when A = B(2 (G)), the norms of A ⊗max A¯ and A ⊗min A¯ coincide on the “positive definite cone” formed of the  n  tensors s∈S λ(s) ⊗  of the form   1 aj ⊗ aj , andhence by Fell’s  principle λ(s)A⊗ A¯ =  s∈S λ(s) ⊗ λ(s)A⊗ A¯ =  s∈S λ(s). But by (4.17) max min   we have  λ(s) ⊗ λ(s) ¯ = |S|. Therefore, by Kesten’s criterion s∈S

D⊗max D

(see Theorem 3.30) G is amenable.

7.3 max-injective inclusions Motivated by the last section, we are naturally led to the following notion. Definition 7.18 Let A be a C ∗ -algebra and let D ⊂ A be a C ∗ -subalgebra. We will say that the inclusion D ⊂ A is max-injective if for any C ∗ -algebra C, the max-norm on C ⊗ D coincides with the norm induced by C ⊗max A. Equivalently, this means that the map C ⊗max D → C ⊗max A is isometric (or equivalently injective). As already observed in Proposition 7.17, this does not always hold. Here are some examples of max-injective inclusions D ⊂ A: Proposition 7.19 Let A be a C ∗ -algebra and let D ⊂ A be a C ∗ -subalgebra. The inclusion D ⊂ A is max-injective in each of the following cases. (i) If there is a c.p. projection P : A → D with P  = 1. (ii) More generally, if we have “approximate projections,” meaning by this that there is a net of c.p. mappings Pi : A → D with Pi  ≤ 1 such that Pi (x) → x ∀x ∈ D. (iii) If D is a (closed two-sided self-adjoint) ideal of A, we already saw in Lemma 7.14 that D ⊂ A is max-injective. Actually this still holds if D is merely a hereditary subalgebra of A (i.e. we have x ∈ A,y ∈ D 0 ≤ x ≤ y ⇒ x ∈ D). Proof (i) clearly suffices, since P will be (max → max)-tensorizing by Corollary 7.8. (ii) suffices for essentially the same reason: the Pi’s are all (max → max)-tensorizing. We claim that case (ii) implies case (iii). Indeed,

7.3 max-injective inclusions

145

let (Pi ) be as in the proof of Lemma 7.14. Note that if x ∈ D+ we have 0 ≤ Pi (x) ≤ xxi . Thus in both cases (ideal or hereditary) we have Pi (x) ∈ D and we obtain the desired “approximate projections,” proving the claim. Remark 7.20 The notion of max-injectivity is relevant to the study of pairs (A,B) of C ∗ -algebras such that A ⊗min B = A ⊗max B, that we call nuclear pairs in the sequel. For instance, if we assume that D ⊂ A is a max-injective inclusion, then clearly: (A,B) nuclear ⇒ (D,B) nuclear.

(7.10)

Remark 7.21 Let D,C be C ∗ -algebras. Let A = D ⊗min C and B another C ∗ -algebra. Then: (A,B) nuclear ⇒ (D,B) nuclear.

(7.11)

Indeed, if C is unital we have an embedding D  D ⊗ 1 ⊂ A and a unital c.p. projection P : A → D ⊗ 1 defined by P (d ⊗ c) = d ⊗ (ϕ(c)1), where ϕ is any state on C (recall Remark 1.24). By (i) in Proposition 7.19, the inclusion D ⊂ A is max-injective. If C is not unital, the same conclusion can be obtained using an approximate unit and (ii) in Proposition 7.19. Therefore (7.11) is a special case of (7.10). Remark 7.22 By Proposition 3.5, for any subgroup  ⊂ G of a discrete group G, the inclusion C ∗ () ⊂ C ∗ (G) is max-injective. Thus for any C ∗ -algebra B we have isometrically C ∗ () ⊗max B ⊂ C ∗ (G) ⊗max B. Let F be any free group. For any fixed t ∈ C ∗ (F)⊗B there is an at most countably generated free subgroup  ⊂ F (that may depend on t) such that (viewing C ∗ () ⊂ C ∗ (F)) we have t ∈ C ∗ ()⊗B. The group  is isomorphic to Fn for some 1 ≤ n ≤ ∞. Thus the norm of t in C ∗ (F) ⊗max B can be computed in C ∗ (Fn ) ⊗max B. In fact if there is a copy of F∞ such that  ⊂ F∞ ⊂ F, we may compute the latter norm simply in C ∗ (F∞ ) ⊗max B. It is natural to wonder whether in the nonseparable case one can compute the norm of the max-tensor product using separable C ∗ -subalgebras. The next two statements address this question. Lemma 7.23 Let A,B be C ∗ -algebras. Let t ∈ A ⊗ B. Then for any ε > 0 there are separable C ∗ -subalgebras A1 ⊂ A, B1 ⊂ B such that t ∈ A1 ⊗ B1 and tA1 ⊗max B1 ≤ (1 + ε)tA⊗max B .

(7.12)

146

Tensorizing maps and functorial properties

A fortiori t ∈ A1 ⊗ B and we have tA1 ⊗max B ≤ (1 + ε)tA⊗max B .

(7.13)

Proof A moment of thought shows that (7.13) implies (7.12) by iteration (recall (4.8)). Thus it suffices to find A1 separable for which (7.13) holds. Assume A unital. Then by Proposition 3.39 for a suitable free group F there is an onto ∗-homomorphism q : C ∗ (F) → A. By (7.6) and (4.34) for any tC ∗ (F)⊗max B ≤ (1 + ε)tA⊗max B ε > 0 there is  t ∈ C ∗ (F) ⊗ B with  such that (q ⊗ IdB )( t) = t. We may assume  t ∈ E ⊗ B for some separable (and even finite-dimensional) subspace E ⊂ C ∗ (F). By Remark 3.6 there is a separable C ∗ -subalgebra C ⊂ C ∗ (F) containing E and admitting a contractive c.p. projection P : C ∗ (F) → C. By (i) in Proposition 7.19 we tC⊗max B . Let A1 = q(C). Then A1 is separable, have  tC ∗ (F)⊗max B =  t) ∈ A1 ⊗ B and by (say) (4.30) t = (q ⊗ IdB )( tC⊗max B =  tC ∗ (F)⊗max B ≤ (1 + ε)tA⊗max B . tA1 ⊗max B ≤  This proves the unital case. The nonunital one follows by a unitization argument. Proposition 7.24 Let A,B be C ∗ -algebras. Let E ⊂ A be a separable subspace. Assume that B is separable. There is a separable C ∗ -subalgebra A1 such that E ⊂ A1 ⊂ A such that the ∗-homomorphism A1 ⊗max B → A⊗max B is isometric. Proof Assume A unital. We may clearly assume that E is a C ∗ -subalgebra. We first claim that there is a separable unital C ∗ -subalgebra E1 such that E ⊂ E1 ⊂ A and such that for any t ∈ E ⊗ B we have tE1 ⊗max B = tA⊗max B . Indeed, let {tn } be a dense sequence in E ⊗ B with respect to the norm in E ⊗max B. A fortiori {tn } is dense in E ⊗ B for the (smaller) norm induced by E1 ⊗max B for any E1 ⊃ E. Moreover, we may and do assume that each element in {tn } appears infinitely many times. By Lemma 7.23 for each n there is a separable unital C ∗ -subalgebra Dn ⊂ A such that tn ∈ Dn ⊗ B and tn Dn ⊗max B ≤ (1 + 1/n)tn A⊗max B . Let E1 ⊂ A be the C ∗ -subalgebra generated by ∪n≥1 Dn . Since Dn ⊂ E1 we have obviously tn E1 ⊗max B ≤ tn Dn ⊗max B ≤ (1 + 1/n)tn A⊗max B for all n ≥ 1, and hence (since each element tn appears with n arbitrary large) we obtain tn E1 ⊗max B = tn A⊗max B for all n ≥ 1, and by the density of {tn } this proves the claim. By iteration, this claim gives us a sequence of separable unital C ∗ -subalgebra En with E0 = E such that En−1 ⊂ En ⊂ A and such that for any t ∈ En−1 ⊗ B we have tEn ⊗max B = tA⊗max B for all n ≥ 1. Then let A1 = ∪n≥1 En . Let t ∈ A1 ⊗ B. To show that tA1 ⊗max B = tA⊗max B we may assume by density that t ∈ ∪n≥1 En ⊗ B or equivalently that t ∈ En ⊗ B for some

7.3 max-injective inclusions

147

n ≥ 1. Then tEn+1 ⊗max B = tA⊗max B and again since En+1 ⊂ A1 , we have tA1 ⊗max B ≤ tEn+1 ⊗max B . Thus we conclude tA1 ⊗max B ≤ tA⊗max B which completes the proof since the reverse inequality is obvious from the start. Remark 7.25 In the situation of Proposition 7.24 if A ⊗min B = A ⊗max B then A1 ⊗min B = A1 ⊗max B We now turn to a very simple but quite useful example: for any C ∗ -algebra A the inclusion A ⊂ A∗∗ is max-injective (see §A.17 for background on A∗∗ ). Proposition 7.26 (i) For any C ∗ -algebras A1,A2 we have an isometric embedding A1 ⊗max A2 → A∗∗ 1 ⊗max A2 . (ii) More generally, we have an isometric embedding ∗∗ A1 ⊗max A2 → A∗∗ 1 ⊗max A2 .

Proof We start with a preliminary observation. If σ1 : A1 → B(H ) and σ2 : A2 → B(H ) are ∗-homomorphisms with ¨ 2 : A∗∗ commuting ranges then σ¨ 1 : A∗∗ 1 → B(H ) and σ 2 → B(H ) still have  commuting ranges. Indeed, let M1 = σ1 (A1 ) and M2 = σ2 (A2 ) . The latter are mutually commuting von Neumann algebras in B(H ). Since σ¨ 1 is ∗ the unique (σ (A∗∗ 1 ,A1 ),σ (M1,M1∗ ))-continuous extension of σ1 and M1 = weak∗

σ1 (A1 ) (by the bicommutant Theorem A.46) we have σ¨ 1 (A1 ) ⊂ M1 and similarly σ¨ 2 (A2 ) ⊂ M2 . For (i) (resp. for (ii)), it clearly suffices to show that if σ1 : A1 → B(H ) and σ2 : A2 → B(H ) are ∗-homomorphisms with commuting ranges, then σ¨ 1 and σ2 (resp. σ¨ 1 and σ¨ 2 ) still have commuting ranges, which is what the preceding observation says. Actually we can also deduce (ii) from (i) by iteration. In other words (see Definition 7.18), (i) means: Corollary 7.27 For any C ∗ -algebra A the inclusion iA : A → A∗∗ is maxinjective. Remark 7.28 For (min → max) and (max → max)-tensorizing maps in the sense of §7.1, it is worthwhile to record here the following observation: Let u : E → B be a linear map. Then u is (min → max)-tensorizing (resp. (max → max)-tensorizing) if and only if the same is true for iB u : E → B ∗∗ . Indeed, this is immediate, given the preceding Corollary (applied to B instead of A).

148

Tensorizing maps and functorial properties

Theorem 7.29 Consider an inclusion D ⊂ A between C ∗ -algebras, and the bitransposed inclusion D ∗∗ ⊂ A∗∗ . Let iD : D → D ∗∗ denote as before the canonical inclusion. The following properties are equivalent: (i) The inclusion D ⊂ A is max-injective. (i)’ The map C ⊗max D → C ⊗max A is isometric when C = C . (ii) There is a contractive map T : A → D ∗∗ such that T|D = iD , or equivalently such that the following diagram commutes. A T

D

iD

D ∗∗

(iii) There is a contractive and normal c.p. projection P : A∗∗ → D ∗∗ . (iii)’ There is a contractive projection P : A∗∗ → D ∗∗ . Proof The equivalence of (i) and (i)’ follows from Corollary 7.16. The implication (i) ⇒ (ii) can be deduced from Theorem 7.4 (applied with E = D and u = IdD ): (i) holds if and only if there is T : A → D ∗∗ with T dec = 1 extending the inclusion D ⊂ D ∗∗ . When this holds, a fortiori (ii) holds. Assume (ii). We will use T¨ as defined in §A.16. Then (see Proposition A.50) P = T¨ : A∗∗ → D ∗∗ is a normal projection onto D ∗∗ with P  = 1. By Theorem 1.45, P is automatically c.p. so that (iii) holds, and (iii) ⇒ (iii)’ is trivial. Assume (iii)’. By Theorem 1.45 again, P is automatically c.p. so that by part (i) in Proposition 7.19 the inclusion D ∗∗ ⊂ A∗∗ is max-injective. Let C be a C ∗ -algebra. Let t ∈ C ⊗ D such that tC⊗max A ≤ 1. A fortiori we have clearly tC⊗max A∗∗ ≤ 1 and hence tC⊗max D ∗∗ ≤ 1 by our assumption. By Corollary 7.27 (applied to D) we have tC⊗max D = tC⊗max D ∗∗ ≤ 1, and hence by homogeneity tC⊗max D ≤ tC⊗max A for all t ∈ C ⊗ D. The converse being obvious, this completes the proof that (iii)’ ⇒ (i). The last step can be rephrased more abstractly like this: since the inclusions D ⊂ D ∗∗ and A ⊂ A∗∗ are max-injective (see Corollary 7.27), it is formally immediate that the max-injectivity of D ∗∗ ⊂ A∗∗ implies that of D ⊂ A, as can be read on this diagram: D ∗∗ ⊗max C

A∗∗ ⊗max C

D ⊗max C

A ⊗max C

7.3 max-injective inclusions

149

Corollary 7.30 The properties in Theorem 7.29 are also equivalent to: (iv) Any c.p. contraction u : D → M into a von Neumann algebra M extends to a c.p. (complete) contraction  u : A → M. (iv)’ For any von Neumann algebra M, any u ∈ D(D,M) extends to a mapping  u ∈ D(A,M) with  udec = udec , as in the following diagram. A  u

D

u

M

Proof Assume (iii) and let u be as in (iv)’. By Lemma 6.9, u¨ ∈ D(D ∗∗,M) ¨ : A∗∗ → M and let  u = (uP ¨ )|A : A → M. and u ¨ dec = udec . Consider uP ¨ dec P dec = By (6.7) and by part (i) from Lemma 6.5, we have  udec ≤ u u ¨ dec P  = udec . Thus (iii) ⇒ (iv)’. Using Lemma 1.43 the same argument yields (iii) ⇒ (iv). Conversely if we assume either (iv) or (iv)’ and apply it to u = iD : D → D ∗∗ we obtain (ii). Remark 7.31 Any T : A → D ∗∗ satisfying (ii) in Theorem 7.29 is automatically c.p. and completely contractive. This follows from Tomiyama’s Theorem 1.45 since T¨ is a projection onto D ∗∗ and T = T¨|A . Remark 7.32 If there is a net of maps Ti : A → D with Ti  ≤ 1 such that Ti (x) − x → 0 for any x ∈ D, then the inclusion D ⊂ A is max-injective. Indeed, U being an ultrafilter refining the net (see §A.4), let T (x) = limU Ti (x) for all x in A with respect to σ (D ∗∗,D ∗ ), then T : A → D ∗∗ satisfies (ii) in Theorem 7.29. The next statement will be very useful when dealing with QWEP C ∗ algebras in §9.7. Theorem 7.33 Let A,B be C ∗ -algebras with B unital. Let ϕ : A → B be a c.p. map such that ϕ(BA ) = BB . Then the restriction to the multiplicative domain ϕ|Dϕ : Dϕ → B is a surjective ∗-homomorphism, and moreover the inclusion Dϕ ⊂ A is max-injective. Proof Note that for any unitary y in B there is x ∈ BA such that ϕ(x) = y. By the Choi–Cauchy–Schwarz inequality (5.2), we have 1 = ϕ(x)∗ ϕ(x) ≤ ϕ(x ∗ x) ≤ 1


and hence ϕ(x)∗ ϕ(x) = ϕ(x ∗ x). Similarly, ϕ(x)ϕ(x)∗ = ϕ(xx∗ ), so that x ∈ Dϕ and hence Dϕ ⊃ {x ∈ BA | ϕ(x) ∈ U (B)}, where U (B) denotes as usual the set of unitaries in B. It follows that ϕ(Dϕ ) ⊃ U (B), so that the restriction π = ϕ|Dϕ : Dϕ → B is a surjective ∗-homomorphism. Let C be another C ∗ -algebra. Let j : Dϕ → A be the inclusion. We will now show that the ∗-homomorphism j C : C ⊗max Dϕ → C ⊗max A that extends IdC ⊗ j is injective. Let x ∈ C ⊗max Dϕ be such that j C (x) = 0. We will show that x = 0. Let D 0 = ker(π ) ⊂ Dϕ . By Remark 5.2 D 0 is a hereditary C ∗ -subalgebra of A (and an ideal in Dϕ ), and hence D 0 ⊂ A is max-injective by Proposition 7.19 (iii). Let us denote ϕ C : C ⊗max A → C ⊗max B and π C : C ⊗max Dϕ → C ⊗max B the natural extensions. Clearly, ϕ C j C = π C . Therefore π C (x) = 0. But by the projectivity (or exactness) of max (see (7.6)), we have ker(π C ) = C ⊗max D 0 and hence x ∈ C ⊗max D 0 ⊂ C ⊗max Dϕ . But since D 0 ⊂ A is max-injective, the composition C ⊗max D 0 → C ⊗max Dϕ → C ⊗max A is injective, and we conclude that x = 0 in C ⊗max Dϕ .

7.4 ‖ ‖min is injective but not projective (i.e. not exact)

In this section, in analogy with what we saw for the maximal tensor products in §7.2, we investigate if or when the canonical identification

(A/I) ⊗ B = (A ⊗ B)/(I ⊗ B)   (7.14)

remains valid, after completion, for the minimal tensor products. By its very definition, the minimal tensor product is obviously injective, even in the operator space setting (see Remark 1.10). In the C∗-case, if Ej, Gj (j = 1,2) are C∗-algebras and Ej ⊂ Gj (j = 1,2) are injective


∗-homomorphisms, then E1 ⊗min E2 ⊂ G1 ⊗min G2 is also an injective ∗-homomorphism. In particular, if I ⊂ A is an ideal so that A/I is a C ∗ -algebra, we have for any B an embedding I ⊗min B ⊂ A ⊗min B, and I ⊗min B is an ideal in A ⊗min B. It is somewhat natural, in analogy with (7.14), to expect that the quotient (A ⊗min B)/(I ⊗min B) should be identifiable with (A/I) ⊗min B. However, although it is true for many B’s, it is not so for all B. We will now give an explicit example showing that this fails for A = C ∗ (Fn ) whenever n ≥ 2. See §3.1 for notation and background on operator algebras such as C ∗ (G) and MG associated to a discrete group G. The same argument works for A = C ∗ (G) assuming G nonamenable, but assuming also that C ∗ (G) has the LLP and that G is approximately linear (so-called hyperlinear), i.e. that the von Neumann algebra of G, namely MG = λG (G) is QWEP (see Remark 9.68). When G = Fn we will see later on in these notes that the latter properties hold (see (9.5) and Theorem 12.21). In analogy with Proposition 7.17, we will prove (the traditional terminology for this would be that for G = Fn , the algebra C ∗ (G), viewed as an extension of Cλ∗ (G), is not “locally split”): Proposition 7.34 Recall that QG : C ∗ (G) → Cλ∗ (G) denotes the canonical quotient map. For G = Fn with n > 1, if A = B = C ∗ (G) and I = ker(QG ), the natural ∗-homomorphism (A ⊗min B)/(I ⊗min B) → (A/I) ⊗min B

(7.15)

is not injective. Equivalently (just exchanging A and B), the homomorphism (B ⊗min A)/(I ⊗min A) → (B/I) ⊗min A

(7.16)

is not injective. To prove this we need to anticipate slightly: we will review a few facts that will be proved later on in these notes. For a group G, we will consider the following property:

Property 7.35 The map Q̇G : C∗(G) → MG that is the same as QG but viewed as taking values in MG admits a factorization Q̇G : C∗(G) → B → MG (i.e. Q̇G = wv), where v : C∗(G) → B and w : B → MG are c.p. maps with ‖v‖cb ≤ 1 and ‖w‖cb ≤ 1, and where B is a C∗-algebra such that the pair (B,C∗(G)) is nuclear.

For further reference, we introduce a notion due to Kirchberg [155].

Definition 7.36 A group G is said to have the factorization property (or simply property (F)) if for any x ∈ C∗(G) ⊗ C∗(G) we have

‖[λG · ρG](x)‖_{B(ℓ2(G))} ≤ ‖x‖_{C∗(G)⊗min C∗(G)},   (7.17)


where [λG · ρG] : C∗(G) ⊗ C∗(G) → B(ℓ2(G)) denotes the ∗-homomorphism that takes UG(s) ⊗ UG(t) to λG(s)ρG(t) (s,t ∈ G).

Remark 7.37 This definition should be compared with (4.18). In particular, (4.18) shows that the factorization property holds if G is amenable, because C∗(G) is then nuclear.

Lemma 7.38 If G satisfies Property 7.35 then G has the factorization property (7.17).

Proof Since c.p. maps with cb-norm ≤ 1 tensorize both the minimal and maximal tensor products, the following maps are all of norm 1:

C∗(G) ⊗min C∗(G) → B ⊗min C∗(G) = B ⊗max C∗(G) → MG ⊗max C∗(G) → MG ⊗max MG

and the composition is equal to Q̇G ⊗ Q̇G on the algebraic tensor product. Thus

‖Q̇G ⊗ Q̇G : C∗(G) ⊗min C∗(G) → MG ⊗max MG‖ ≤ 1.

Then (4.18) yields the conclusion.

Lemma 7.39 Assume that G satisfies the factorization property (7.17). If (7.15) is injective (or equivalently isometric) with A = B = C∗(G) and I = ker(QG), then G is amenable.

Proof By (7.17) for any x ∈ C∗(G) ⊗ C∗(G) we have

‖[λG · ρG](x)‖_{B(ℓ2(G))} ≤ ‖x‖_{C∗(G)⊗min C∗(G)}.

Let I = ker(QG). Clearly the corresponding mapping x → [λG · ρG](x) vanishes on I ⊗min C∗(G). Therefore, we can write

‖[λG · ρG](x)‖_{B(ℓ2(G))} ≤ ‖(QG ⊗ Id)x‖_{(C∗(G)⊗min C∗(G))/(I⊗min C∗(G))}.

Therefore, if (7.15) is isometric,

‖[λG · ρG](x)‖_{B(ℓ2(G))} ≤ ‖(QG ⊗ Id)x‖_{Cλ∗(G)⊗min C∗(G)} = ‖Σ_{s,t} x(s,t) λG(s) ⊗ UG(t)‖_{Cλ∗(G)⊗min C∗(G)}.

Let S ⊂ G be any finite set. Applying this to x = Σ_{s∈S} UG(s) ⊗ UG(s), so that (QG ⊗ Id)x = Σ_{s∈S} λG(s) ⊗ UG(s), and using (4.19) we find

|S| ≤ ‖Σ_{s∈S} λG(s) ⊗ UG(s)‖

and hence by (4.22) (Fell's absorption principle)

|S| ≤ ‖Σ_{s∈S} λG(s)‖.

Thus G is amenable by Theorem 3.30.

Proof of Proposition 7.34 We will show in the sequel (see Corollary 12.22) that G = Fn verifies the factorization appearing in 7.35 for some B with the WEP. We will also show (see Theorem 9.6 or rather Corollary 9.40) that (B,C∗(G)) is a nuclear pair. Thus G = Fn satisfies Property 7.35 and consequently also the factorization property (7.17), but is not amenable. By Lemma 7.39, we obtain Proposition 7.34. More explicitly, viewing Cλ∗(Fn) = C∗(Fn)/I, we have by (4.22) and the preceding proof

‖1 ⊗ 1 + Σ_{1}^{n} λFn(gj) ⊗ UFn(gj)‖_{Cλ∗(Fn)⊗min C∗(Fn)} = 2√n

but

‖1 ⊗ 1 + Σ_{1}^{n} λFn(gj) ⊗ UFn(gj)‖_{(C∗(Fn)⊗min C∗(Fn))/(I⊗min C∗(Fn))} = n + 1

and these are different for n > 1.
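That these two values are indeed different is elementary arithmetic; we record the verification here (a one-line check added for convenience, not part of the original argument):

\[ (n+1) - 2\sqrt{n} \;=\; \bigl(\sqrt{n}-1\bigr)^{2} \;>\; 0 \qquad (n>1), \]

so for instance n = 2 already gives 2\sqrt{2} \approx 2.83 < 3, and the gap grows with n.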

7.5 min-projective surjections

Motivated by the last section, we are naturally led to the following notion.

Definition 7.40 Let q : A → C be a surjective ∗-homomorphism. Let I = ker(q) so that C ≅ A/I. We will say that the surjection q is min-projective if for any C∗-algebra B, the min-norm on B ⊗ C = B ⊗ (A/I) coincides with the norm induced by (B ⊗min A)/(B ⊗min I). Equivalently, this means that the canonical map qB : B ⊗min A → B ⊗min C (that extends IdB ⊗ q) satisfies

ker(qB) = B ⊗min ker(q).   (7.18)

Thus q : A → A/I is min-projective if for any B

B ⊗min (A/I) = (B ⊗min A)/(B ⊗min I),

or equivalently (see Remark 1.9)

(A/I) ⊗min B = (A ⊗min B)/(I ⊗min B).   (7.19)
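Equivalently (a reformulation added here for emphasis, not a new fact), q is min-projective exactly when tensoring the defining short exact sequence with B preserves exactness for every C∗-algebra B:

\[ 0 \longrightarrow B \otimes_{\min} I \longrightarrow B \otimes_{\min} A \xrightarrow{\;q_B\;} B \otimes_{\min} (A/I) \longrightarrow 0. \]

Indeed, injectivity of the first arrow holds because the min-norm is injective, and q_B is always onto (a ∗-homomorphism with dense range between C∗-algebras is surjective); the content of (7.18) is exactness at the middle term.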


Remark 7.41 In analogy with Remark 7.20, let B be another C ∗ -algebra. If q : A → C is min-projective, then (7.6) (with the roles of B,A interchanged) shows that (A,B) nuclear ⇒ (C,B) nuclear.

(7.20)

Remark 7.42 Obviously qB(B ⊗min I) = 0. Thus we have ‖(B ⊗min A)/(B ⊗min I) → B ⊗min C‖ ≤ 1. Therefore, q is min-projective if and only if for any B and any t ∈ B ⊗ (A/I) we have conversely

‖t‖_{(B⊗min A)/(B⊗min I)} ≤ ‖t‖_{B⊗min (A/I)}.   (7.21)

Or equivalently, for any t ∈ (A/I) ⊗ B we have

‖t‖_{(A⊗min B)/(I⊗min B)} ≤ ‖t‖_{(A/I)⊗min B}.   (7.22)

As observed in the preceding section, the latter does not always hold. In the subsequent §10.1 we will study the C∗-algebras B (these are called "exact") such that (7.18) (or (7.19)) holds for any quotient map q. But for the moment, we content ourselves with a simple characterization of the quotient maps for which (7.18) holds for any B. We will need the following useful lemma, which requires a specific notation. Let I ⊂ A be a (closed two-sided) ideal in a C∗-algebra A. Let E be an operator space. As in Lemma 4.26 we denote for simplicity

Q[E] = (A ⊗min E)/(I ⊗min E).

Then if F is another operator space and if u : E → F is a c.b. map we clearly have a bounded linear map

u[Q] : Q[E] → Q[F]

naturally associated to IdA ⊗ u, such that ‖u[Q]‖ ≤ ‖u‖cb.

Lemma 7.43 If u is an isometry, then u[Q] also is one. In particular, if E ⊂ F then Q[E] ⊂ Q[F] (isometrically).

Proof For simplicity we assume that E ⊂ F and u is the inclusion map. Then the result is a particular case of Lemma 4.26 applied for α = min.

Next we show that, just like A → A/I, the quotient map E ⊗min A → (E ⊗min A)/(E ⊗min I) takes the closed unit ball to the closed unit ball.

Lemma 7.44 Let I ⊂ A be an ideal in a C∗-algebra A. Consider an operator space E and let q[E] : A ⊗min E → Q[E] denote the quotient map. Then for


any ŷ in Q[E], there is an element y in A ⊗min E that lifts ŷ (i.e. q[E](y) = ŷ) such that ‖y‖min = ‖ŷ‖Q[E].

Proof Choose y0 in A ⊗min E such that q[E](y0) = ŷ. It is easy to check that q[E] has the properties appearing in Lemmas A.31 and A.32 suitably modified. Therefore, repeating the argument appearing before Lemma A.33, we obtain a Cauchy sequence y0, y1, ..., yn, ... in A ⊗min E such that q[E](yn) = ŷ for all n and ‖yn‖min → ‖ŷ‖Q[E] when n → ∞. Thus y = lim yn is a lifting with the same norm as ŷ.

Remark 7.45 (Description of ker(qB)) Let qB : B ⊗min A → B ⊗min A/I be as in (7.18). Let t ∈ B ⊗min A. Then t ∈ ker(qB) if and only if (ξ ⊗ IdA)(t) ∈ I for any ξ ∈ B∗. Indeed, qB(t) = 0 is the same as (ξ ⊗ IdA/I)(qB(t)) = 0 for any ξ ∈ B∗ (see Corollary 1.15), and it is immediate that (ξ ⊗ IdA/I)(qB(t)) = (IdC ⊗ q)((ξ ⊗ IdA)(t)), and of course we may identify IdC ⊗ q with q. Thus when (7.18) fails there are t's satisfying this which fail to be in the min-closure of B ⊗ I (although they are in that of B ⊗ A). This shows that the failure of (7.18) is closely related to nontrivial approximation problems. In [246] Tomiyama introduced the related notion of Fubini product. When given operator subspaces Y ⊂ B and X ⊂ A in C∗-algebras, we may consider the subspace F(Y,X) ⊂ B ⊗min A (called the Fubini product) formed of all those t ∈ B ⊗min A such that (ξ ⊗ IdA)(t) ∈ X and (IdB ⊗ η)(t) ∈ Y for any (ξ,η) ∈ B∗ × A∗. Obviously Y ⊗min X ⊂ F(Y,X) but in general the latter is larger. For instance, when (7.18) fails, what precedes shows us that B ⊗min I ≠ F(B,I). See [246] for more information on this interesting notion that we will not use.

Definition 7.46 Fix a constant c ≥ 0. Let C be a C∗-algebra (or merely an operator space). Let u : C → A/I be a linear map into a quotient C∗-algebra. Let q : A → A/I denote the quotient map. We will say that u is c-liftable if there is v : C → A with ‖v‖cb ≤ c that lifts u in the sense that qv = u. We will say that u is locally c-liftable (or admits a local c-lifting) if, for any finite-dimensional subspace E ⊂ C, the restriction u|E is c-liftable, or more explicitly for any finite-dimensional E ⊂ C there is vE : E → A with ‖vE‖cb ≤ c such that qvE = u|E.

Remark 7.47 Assume that I is completely complemented in A, by which we mean that there is a c.b. projection P from A onto I. Then it is easy to see that the identity of A/I is c-liftable for c = 1 + ‖P‖cb. We just define v : A/I → A by v(x) = (I − P)(x̃) where x̃ ∈ A is any lifting of x ∈ A/I. See Corollary 9.47 for more on unital c.p. maps that are locally 1-liftable. In that case a unital c.p. vE can be found.
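Two small verifications for Remark 7.47, spelled out here for convenience (the notation x̃ for a lifting is from the remark): the map v is well defined because two liftings of the same x differ by an element of I, on which I − P vanishes, and v is indeed a lifting of the identity:

\[ q\bigl(v(x)\bigr) = q(\tilde{x}) - q\bigl(P\tilde{x}\bigr) = x, \qquad \text{since } P\tilde{x} \in I = \ker(q), \]

while \|v\|_{cb} \le \|I-P\|_{cb} \le 1 + \|P\|_{cb}, which is the constant c announced in the remark.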


Proposition 7.48 Let C be any operator space. Fix a constant c ≥ 0. Let u : C → A/I be a linear map into a quotient C∗-algebra. Let q : A → A/I denote the quotient map. The following are equivalent.
(i) For any C∗-algebra B, u defines a map IdB ⊗ u : B ⊗ C → (B ⊗ A)/(B ⊗ I) such that ‖IdB ⊗ u : B ⊗min C → (B ⊗min A)/(B ⊗min I)‖ ≤ c.
(ii) Same as (i) with B = ℬ (= B(ℓ2)).
(iii) The map u is locally c-liftable.
(iii)' The map u is locally (c + ε)-liftable for any ε > 0.

Proof (i) ⇒ (ii) is trivial. Assume (ii). Let sE ∈ E∗ ⊗ C be the tensor associated to the inclusion map jE : E ⊂ C of a finite-dimensional subspace. We view E∗ ⊂ ℬ completely isometrically (see Theorem 2.15). Then (Idℬ ⊗ u)(sE) ∈ E∗ ⊗ (A/I) is the tensor associated to u|E, and ‖sE‖min = ‖jE‖cb = 1. By (ii) we have

‖(Idℬ ⊗ u)(sE)‖_{(ℬ⊗min A)/(ℬ⊗min I)} ≤ c‖sE‖min = c.

By Lemma 7.43 we have ‖(Idℬ ⊗ u)(sE)‖_{(ℬ⊗min A)/(ℬ⊗min I)} = ‖(Idℬ ⊗ u)(sE)‖_{(E∗⊗min A)/(E∗⊗min I)}, and since E∗ ⊗min F = CB(E,F) isometrically, we find an isometric identity

(E∗ ⊗min A)/(E∗ ⊗min I) = CB(E,A)/CB(E,I)

and, taking Lemma 7.44 into account, we find that there is v ∈ CB(E,A) with ‖v‖cb ≤ c such that qv = u|E. This shows (ii) ⇒ (iii), and (iii) ⇒ (iii)' is trivial. To complete the proof it clearly suffices to show (iii) ⇒ (i). Assume (iii). Let t ∈ B ⊗ C. Let E ⊂ C be finite dimensional and such that t ∈ B ⊗ E. Let vE : E → A with ‖vE‖cb ≤ c lifting u|E : E → A/I. Let t̃ = (IdB ⊗ vE)(t) ∈ B ⊗ A. We have ‖t̃‖_{B⊗min A} ≤ ‖vE‖cb ‖t‖min ≤ c‖t‖min. Moreover, (IdB ⊗ q)(t̃) = (IdB ⊗ u)(t). By (4.34) applied with α = min, this shows

‖(IdB ⊗ u)(t)‖_{(B⊗min A)/(B⊗min I)} ≤ c‖t‖_{B⊗min C},

and the latter means that (i) holds.

Remark 7.49 In the situation of Proposition 7.48, assume that ℬ ⊗min A = ℬ ⊗max A (this is what we take as definition of the LLP for A in the sequel). In that case the conditions in Proposition 7.48 are equivalent to
(ii)' ‖Idℬ ⊗ u : ℬ ⊗min C → ℬ ⊗max (A/I)‖ ≤ c.


Indeed, this follows from (7.6) (but one has to exchange the roles of the letters A and B there). When u = IdA/I and c = 1, Proposition 7.48 becomes: Corollary 7.50 Let q : A → C be a surjective ∗-homomorphism. Let I = ker(q) so that C ∼ = A/I. The following are equivalent (i) q is min-projective (i.e. (7.18) holds for any B). (ii) We have (7.18) for B = B (i.e. for B = B(2 )). (iii) For any ε > 0, the identity map IdA/I : A/I → A/I is locally (1 + ε)-liftable. (iii)’ Same as (iii) with ε = 0. Remark 7.51 Here to emphasize the analogy injective/projective, we did not conform to the existing terminology: One usually says, using the exact sequence 0 → I → A → A/I → 0, that A is a “locally split extension” (of A/I by I) to express the property (iii) in Corollary 7.50, which equivalently means that A → A/I is min-projective. By Remark 7.47, Corollary 7.50 implies: Corollary 7.52 If the ideal I is completely complemented in A the quotient map A → A/I is min-projective. Remark 7.53 Note that in the subsequent Corollary 9.47, we show that, in the unital case (when A,C and q are unital), the properties in Corollary 7.50 are equivalent to (iv) For any finite-dimensional operator system E ⊂ C there is a unital c.p. map uE : E → A (with uE cb = 1) that lifts q in the sense that quE coincides with the inclusion map E ⊂ C.

7.6 Generating new C ∗ -norms from old ones Following the path Grothendieck opened up for Banach space tensor products in [98], we now derive several additional C ∗ -tensor products from the minimal and maximal ones using the injective and projective universality of B(H ) and C ∗ (F) respectively. Kirchberg already followed that same road first in [155], then more systematically in [158]. Let A1,A2,B1,B2,C1,C2 be C ∗ -algebras. The basic idea is two-fold: (i) If we are given an embedding A1 ⊂ B1 we have a linear embedding A1 ⊗ A2 ⊂ B1 ⊗ A2 . Then given a C ∗ -norm α on B1 ⊗ A2 its restriction


to A1 ⊗ A2 defines an a priori new C∗-norm on A1 ⊗ A2, denoted by α1. If α is the min-norm then so is α1 (see Remark 1.10), but when α is the max-norm the induced norm α1 is in general not the max-norm on A1 ⊗ A2, since by §7.2 the max-norm is not injective. Thus this produces an a priori new C∗-norm on A1 ⊗ A2.
(ii) If we are given a surjective ∗-homomorphism q1 : C1 → A1 so that A1 = C1/I1 (I1 = ker(q1)), we have a surjection q1 ⊗ IdA2 : C1 ⊗ A2 → A1 ⊗ A2. Then given a C∗-norm α on C1 ⊗ A2 we can define a new C∗-norm on A1 ⊗ A2 as the norm induced on A1 ⊗ A2 by the natural C∗-norm of (C1 ⊗α A2)/(I1 ⊗ A2). The resulting C∗-norm on A1 ⊗ A2 is denoted by α¹. If α is the max-norm then by (7.6) so is α¹, but when α is the min-norm, α¹ is in general not the min-norm on A1 ⊗ A2, since by §7.4 the min-norm is not projective. Thus this produces an a priori new C∗-norm on A1 ⊗ A2.
One can also apply these constructions to the second factor, and produce in this way another pair of a priori new C∗-norms α2, α² on A1 ⊗ A2. In [98] (see also [71]) Grothendieck applied these constructions in the Banach space category starting from the minimal and maximal norms among what he called the reasonable tensor norms. In the Banach space setting, he denoted by /α and \α (resp. α\ and α/) the analogues of α1 and α¹ (resp. α2 and α²). We will now describe a couple of possibilities that this idea offers us when we apply it to C∗-norms. We start with the C∗-norms derived from the (injective) universality of B(H). Assume (as we may) Aj ⊂ B(Hj) (j = 1,2). Then the norm induced on A1 ⊗ A2 by B(H1) ⊗max A2 is a C∗-norm on A1 ⊗ A2, that we denote by max1. Let uj : Aj → Bj (j = 1,2) be c.b. maps. Consider u1 ⊗ u2 : A1 ⊗ A2 → B1 ⊗ B2. Then

‖u1 ⊗ u2 : A1 ⊗max1 A2 → B1 ⊗max1 B2‖ ≤ ‖u1‖cb ‖u2‖dec.   (7.23)

Indeed, this follows from the extension property of c.b. maps (Theorem 1.18) together with (6.15) and (6.9). Using again the extension property, one shows that the max1-norm does not depend on the embedding A1 ⊂ B(H1). Moreover, we have

∀t ∈ A1 ⊗ A2   ‖t‖max1 = inf ‖t‖_{B1 ⊗max A2}

where the inf runs over all C ∗ -algebras B1 containing A1 as a C ∗ -subalgebra. The same construction applied to the second factor leads to another C ∗ -norm on A1 ⊗ A2 , that we denote by max2 . By (4.8), we have A1 ⊗max2 A2  A2 ⊗max1 A1 , so this is not really new.
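The independence of the max1-norm from the chosen embedding, alluded to above, can be spelled out as follows (a sketch added here, using only the extension property of c.b. maps and the fact that cb- and dec-norms coincide for maps into an injective range): if A1 ⊂ B(H1) and A1 ⊂ B(K1) are two embeddings, let w : B(H1) → B(K1) be a complete contraction extending the inclusion A1 ⊂ B(K1) (Theorem 1.18). Then for every t ∈ A1 ⊗ A2

\[ \|t\|_{B(K_1)\otimes_{\max} A_2} = \|(w\otimes \mathrm{Id}_{A_2})(t)\|_{B(K_1)\otimes_{\max} A_2} \le \|w\|_{\mathrm{dec}}\,\|t\|_{B(H_1)\otimes_{\max} A_2} = \|t\|_{B(H_1)\otimes_{\max} A_2}, \]

and exchanging the roles of H1 and K1 gives equality.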


The most interesting case is when we do this operation on both factors: continuing, we are led to denote by max12 the norm induced on A1 ⊗ A2 by B(H1) ⊗max B(H2). Doing the two operations in reverse order would lead us to define max21, but this is clearly the same as max12. Thus, to lighten the notation, we denote it in the sequel by ‖ ‖M and the completed tensor product by A1 ⊗M A2 (see Definition 20.11). It has the supplementary advantage that, like the minimal norm, it yields at the same time a tensor product of operator spaces. Indeed, by the same argument as for (7.23), this time we have

‖u1 ⊗ u2 : A1 ⊗M A2 → B1 ⊗M B2‖ ≤ ‖u1‖cb ‖u2‖cb,   (7.24)

and the definition of ‖ ‖M makes sense even when A1, A2 are just operator spaces. Moreover, we have

∀t ∈ A1 ⊗ A2   ‖t‖M = inf ‖t‖_{B1 ⊗max B2}

where the inf runs over all C∗-algebras Bj containing Aj as a C∗-subalgebra (j = 1,2), or even just completely isometrically.
We now turn to the C∗-norms derived from the (projective) universality of C∗(F). Assume that Aj = Cj/Ij with each Cj of the form Cj = C∗(Gj) for some free group Gj. Let qj : Cj → Cj/Ij be the quotient map. Let t ∈ A1 ⊗ A2. We define

‖t‖min1 = ‖t‖_{(C1⊗min A2)/(I1⊗min A2)}.

Clearly this is a C∗-norm on A1 ⊗ A2. Let ut : A2∗ → A1 be the finite rank linear map associated to t. Using the property (2.15) of the dual operator space together with (4.34) we immediately obtain the following reinterpretation of ‖t‖min1 in terms of c.b. liftings (here w* means weak*):

‖t‖min1 = inf{‖v‖cb | v : A2∗ → C1, w* continuous, rk(v) < ∞, q1v = ut}.   (7.25)

Obviously we could do the same on the right-hand side to define ⊗min2 but by (4.8) this boils down to the same notion. However, if we do it twice, then we obtain a genuinely new tensor product. More precisely, using ⊗L for what should be denoted ⊗min12, we set

‖t‖_{A1⊗L A2} = ‖t‖_{(C1⊗min C2)/(I1⊗min C2 + C1⊗min I2)}.

Note A2∗ ⊂ C2∗. Again we have

‖t‖_{A1⊗L A2} = inf{‖v‖cb | v : C2∗ → C1, w* continuous, rk(v) < ∞, q1 v|A2∗ = ut}.   (7.26)


We will show later on in Remark 9.45 when discussing the LLP that (7.25) and (7.26) are independent of the choice of C1,C2 as long as they are of the required form C ∗ (G) with G a free group (or simply as long as they have the LLP).

7.7 Notes and remarks This chapter is inspired mainly from Kirchberg’s ideas, but we introduce special terms (such as “max-injective” and “min-projective”) in order to emphasize as much as possible properties of linear maps in the spirit of operator space theory. We feel some features become much clearer. There are analogies with the situation in Banach space theory according to Grothendieck’s viewpoint in [98]. We explain this in §7.6. For more in this direction, see Kirchberg’s presentation of his works in [158], where he systematically adopts a category theory standpoint. Kirchberg communicated Theorem 7.6 to the author with permission to include it in [208]. What we call min-projective surjection is very much the center of attention in Effros and Haagerup’s paper [77], but they do not give it a name. Proposition 7.48 is essentially there (see [77, th. 3.2]), except they use finite-dimensional operator systems and approximate c.p. liftings, but, by part (ii) in Proposition 9.42, this is equivalent to our formulation with locally liftable maps. Incidentally, Effros and Haagerup mention a 1971 paper by Douglas and Howe [73] on Toeplitz operators as an early source for the fact that the existence of liftings implies that the quotient map is min-projective as in Proposition 7.48. The latter is the C ∗ -algebraic analogue of a well-known principle in homological algebra. It is amusing to observe, as a historical curiosity, that in [73, prop. 2] Douglas and Howe only assume that the lifting is bounded while their proof uses its complete boundedness; this defect is observed in the later paper [20] (also quoted in [77]) and repaired by invoking [5, th. 7].

8 Biduals, injective von Neumann algebras, and C ∗ -norms

In this chapter, we review the main results involving C ∗ -algebras and their biduals proved in the aftermath of Connes’s breakthroughs [61] on injective von Neumann algebras. In particular, we show in Theorem 8.16 that the nuclearity of a C ∗ -algebra A is equivalent to the injectivity of A∗∗ .

8.1 Biduals of C∗-algebras

We will use here the basic facts and notation introduced in §A.16: when A is a C∗-algebra and M a von Neumann one, for all u : A → M we recall that ü = (u∗|M∗)∗ : A∗∗ → M. The following statement is merely a recapitulation.

Theorem 8.1 Let u : A → M be a linear map from a C∗-algebra to a von Neumann algebra.
(i) If u is a ∗-homomorphism then ü : A∗∗ → M is a normal ∗-homomorphism.
(ii) u ∈ CP(A,M) ⇒ ü ∈ CP(A∗∗,M) and ‖ü‖ = ‖u‖.
(iii) u ∈ CB(A,M) ⇒ ü ∈ CB(A∗∗,M) and ‖ü‖cb = ‖u‖cb.
(iv) u ∈ D(A,M) ⇒ ü ∈ D(A∗∗,M) and ‖ü‖dec = ‖u‖dec.

Proof We recall that, by density, ü can be viewed as the unique (σ(A∗∗,A∗), σ(M,M∗))-continuous extension of u. (i) is a well-known consequence of the very definition of A∗∗ (see Theorem A.55). (ii) (resp. (iii)) was proved in Lemma 1.43 (resp. Lemma 1.61) and (iv) in Lemma 6.9.


8.2 The nor-norm and the bin-norm

Let A be a C∗-algebra and M a von Neumann algebra. In this "hybrid" situation, one defines a C∗-norm on A ⊗ M as follows. For any t ∈ A ⊗ M we set

‖t‖nor = sup ‖(σ · π)(t)‖   (8.1)

where the sup runs over all H's and all commuting pairs of ∗-homomorphisms σ : A → B(H), π : M → B(H) with π assumed normal (i.e. continuous with respect to both weak* topologies on M and B(H)). It is easy to check that this is indeed a C∗-norm intermediate between the minimal and maximal norms. We denote by A ⊗nor M the corresponding completion.

Remark 8.2 There is obviously a similar definition of the nor-norm on M ⊗ A. In some situations (say if A happens to be a von Neumann algebra too) this may lead to some confusion, so that we should use a notation that distinguishes both cases (such as nor1 and nor2), but for simplicity we prefer not to do that. In the cases we consider in the sequel there is no risk of confusion.

Remark 8.3 Let A1, M1 be respectively a C∗-algebra and a von Neumann algebra. Let σ : A → A1 and π : M → M1 be ∗-homomorphisms (resp. isomorphisms). Then if π is normal, the mapping σ ⊗ π obviously defines (by density) a ∗-homomorphism (resp. an isomorphism) from A ⊗nor M to A1 ⊗nor M1. See Remark A.38 for clarification.

We now turn to a more symmetric situation. Let M, N be von Neumann algebras. On M ⊗ N one defines the "binormal" norm of t ∈ M ⊗ N by

‖t‖bin = sup{‖(π · σ)(t)‖}   (8.2)

where the sup runs over all pairs of normal ∗-homomorphisms π : M → B(H) and σ : N → B(H) with commuting ranges. This is clearly a C∗-norm. We denote by M ⊗bin N the completion. Using normal embeddings M ⊂ B(H1), N ⊂ B(H2), H = H1 ⊗2 H2 and the usual pair π(x) = x ⊗ 1, σ(y) = 1 ⊗ y, we find for any t ∈ M ⊗ N

‖t‖min ≤ ‖t‖bin.   (8.3)

When M = A∗∗ and N = B∗∗ are biduals, then we may clearly write (by the extension property of biduals, see Theorem 8.1 (i))

‖t‖bin = sup{‖(π̈ · σ̈)(t)‖}   (8.4)

where the sup runs over all ∗-homomorphisms π : A → B(H) and σ : B → B(H) with commuting ranges.


Remark 8.4 Let M be a von Neumann algebra and A a C∗-algebra. Then the norm induced on A ⊗ M by the bin-norm on A∗∗ ⊗ M coincides with the nor-norm on A ⊗ M as defined in (8.1). This is easy to check using again part (i) in Theorem 8.1. Moreover, for any t ∈ A∗∗ ⊗ M, we clearly have ‖t‖bin ≤ ‖t‖nor (where for the nor-norm we view A∗∗ as a C∗-algebra).

8.3 Nuclearity and injective von Neumann algebras

We will need the following basic fact:

Proposition 8.5 A von Neumann algebra M ⊂ B(H) is injective if and only if its commutant M′ is injective.

The (not so simple) proof is based on the following

Lemma 8.6 Let M ⊂ B(H) and N ⊂ B(Ĥ) be isomorphic von Neumann algebras. If N′ is injective then M′ is injective.

Proof By Theorem A.61 we may write the isomorphism T : M → N as the product of three isomorphisms of three different kinds: amplification, compression and spatial. Clearly it suffices to check the lemma for each of the three kinds, and this turns out to be an easy exercise that we leave to the reader.

Proof of Proposition 8.5 By a very well-known fact there is a realization of M, say ψ : M ⊂ B(Ĥ), for which ψ(M) and ψ(M)′ are anti-isomorphic. This is part of what is called the standard form of M (see [102] or [241, p. 151] and the proof of the subsequent Theorem 23.30). Then clearly ψ(M) injective ⇔ ψ(M)′ injective (recall Remarks 2.9 and 2.13). Recall that by Definition 1.44 injectivity is stable by completely isometric isomorphisms. Thus if M is injective, so is ψ(M) and hence ψ(M)′ is injective. By Lemma 8.6 it follows that M′ is injective. Reversing the roles of M and M′ gives the converse.

The fact that M injective ⇔ M′ injective allows us to prove a simple, but important stability property of injective von Neumann algebras:

Proposition 8.7 If {Ni | i ∈ I} is a family, directed by inclusion, of injective von Neumann algebras in B(H), then the weak* closure M of ∪i∈I Ni in B(H) is injective.

Proof Note that Ni ⊂ Nj is equivalent to N′j ⊂ N′i. Therefore the commutants (N′i) form a decreasing directed family. Let Pi : B(H) → N′i be a (completely) contractive projection. Let U be an ultrafilter refining the underlying net


formed by (N′i) (see Remark A.6). Then the mapping P defined by P(x) = limU Pi(x) (the limit being in the weak* topology of B(H)) is clearly a (completely) contractive projection onto M′ = ∩i∈I N′i. Thus M′, and hence M itself, is injective.

The von Neumann algebras M that can be written as the weak* closure of an ascending union (directed by inclusion) ∪i∈I Ni of finite-dimensional von Neumann algebras Ni are sometimes called "hyperfinite," but, as already emphasized by many authors (including Connes himself in [61, p. 113]), the term "approximately finite dimensional" (AFD) is more appropriate. Thus the last statement shows that AFD ⇒ injective. The converse (say, when H is separable) is a celebrated deep result of Connes [61], that we state without proof:

Theorem 8.8 (Finite-dimensional approximation of injective von Neumann algebras) Any injective von Neumann algebra M is approximately finite dimensional (AFD).

This major advance led to a number of deep characterizations of injective von Neumann algebras and nuclear C∗-algebras. Although we do not include the proof of Theorem 8.8, we will try in this section to include complete proofs of the latter characterizations, for which (unlike for Theorem 8.8) reasonably simple proofs are now available. In particular we will now show that injectivity is equivalent to the weak* CPAP, which is a rather natural analogue of the CPAP for von Neumann algebras. This is often called "semidiscreteness" but we prefer to use a term that emphasizes the analogy with the CPAP.

Definition 8.9 A von Neumann algebra M has the weak* CPAP (in other words is "semidiscrete") if the identity on M is the pointwise weak* limit of a net of finite rank normal c.p. maps, i.e. there is a net of weak* continuous unital c.p. maps ui : M → M of finite rank such that ui(x) → x in the weak* topology for any x ∈ M.

Remark 8.10 When this holds, we may assume that ui = vi∗ for some vi : M∗ → M∗, and the net (vi) converges pointwise to the identity of M∗ with respect to σ(M∗,M) (i.e. the weak topology of M∗). By Mazur's Theorem A.9, after passing to convex combinations we may assume that (vi) converges pointwise to the identity of M∗ for the norm topology.

We first give a characterization of injectivity obtained by a simple but very important trick involving extensions of maps on the max tensor product; the latter is called "The Trick" in [39]. We already used a variant of this idea previously for the extension property in Theorem 6.20, and again for the characterization of max-injective inclusions in Theorem 7.29. Actually the


main point of the next statement (i.e. the if part) can be deduced from Theorem 6.20 (with E = A) but we prefer to repeat the argument in the present situation, thus avoiding the use of operator modules and Lemma 6.19.

Proposition 8.11 Let A be a C∗-algebra, π : A → B(H) a ∗-homomorphism and let M = π(A)″ ⊂ B(H) be the von Neumann algebra it generates. Let π̃ : M′ ⊗ A → B(H) be the ∗-homomorphism associated to the product, so that π̃(x′ ⊗ a) = x′π(a) for all a ∈ A, x′ ∈ M′. Let

M′ ⊗max A ⊂ B(H) ⊗max A

be (as in Theorem 6.20) the closure of M  ⊗ A in B(H ) ⊗max A. Then M  is injective if and only if  π extends to a continuous ∗-homomorphism from π is bounded with respect to the norm M  ⊗max A to B(H ) (equivalently  induced by B(H ) ⊗max A). Proof We first treat the case when π and hence  π are unital. Assume  π continuous on M  ⊗max A, so that automatically (see Remark 1.6)  π : M  ⊗max A → B(H )cb = 1. Let ϕ : B(H ) ⊗max A → B(H ) be a complete contraction extending  π according to Theorem 1.18. Since ϕ is unital it is c.p. and its multiplicative domain D obviously includes M  ⊗ A. Let P : B(H ) → B(H ) be defined by P (b) = ϕ(b ⊗ 1) for any b ∈ B(H ). Since 1 ⊗ a ∈ D for any a ∈ A, we have (see Corollary 5.3) P (b)π(a) = ϕ((b ⊗ 1)(1 ⊗ a)) = ϕ((1 ⊗ a)(b ⊗ 1)) = π(a)P (b). This shows P (b) ∈ π(A) = M  . Moreover, we have P (m ) = m for any m ∈ M  since ϕ extends  π . It follows that P is a completely contractive and c.p. projection from B(H ) onto M  ; in other words M  is injective. This proves the “if part” in the unital case. If A is not unital, let (xi ) be an approximate unit of A as in §A.15 and let Q be the weak* (or w.o.t.) limit of π(xi ) in B(H ). Then Qπ(a) = π(a)Q = π(a) for any a ∈ A. It follows that Q is a (self-adjoint) projection in the center M ∩ M  of M. A moment of thought shows that M  = QMQ ⊕ B(K) with K = (I − Q)(H ). Thus we are reduced to show that QMQ is injective. Then we may as well replace H by Q(H ) and Q by I . In that case, if we define P (b) as the w.o.t.-limit of ϕ(b ⊗ xi ), the same reasoning as for the unital case leads to the desired result. Conversely, if M  is injective, there is a (completely) contractive c.p. projection P  : B(H ) → M  . By (4.30) P  ⊗ IdA : B(H ) ⊗max A → M  ⊗max A = 1, and by definition of the maximal tensor product  π : M  ⊗max A →  B(H ) = 1. Therefore, for any t ∈ M ⊗ A, since t = (P  ⊗ IdA )(t) we have

‖π̃(t)‖_{B(H)} = ‖π̃((P′ ⊗ IdA)(t))‖_{B(H)} ≤ ‖(P′ ⊗ IdA)(t)‖_{M′⊗max A} ≤ ‖t‖_{B(H)⊗max A} = ‖t‖_{M′⊗max A}.
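The overall shape of this argument ("The Trick") may be summarized by the following diagram, typeset here as a LaTeX sketch (tikz-cd assumed; φ denotes the completely contractive extension of π̃ provided by Theorem 1.18, and P(b) = φ(b ⊗ 1) is the resulting projection of B(H) onto M′):

% requires \usepackage{tikz-cd}
\begin{tikzcd}
M' \otimes_{\mathrm{max}} A \arrow[r, hook] \arrow[dr, "\widetilde{\pi}"'] & B(H) \otimes_{\mathrm{max}} A \arrow[d, dashed, "\varphi"] \\
 & B(H)
\end{tikzcd}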

Theorem 8.12 All injective von Neumann algebras have the weak* CPAP.

Proof Here we are facing an embarrassing situation. The ingredients for a complete proof are scattered in the sequel in a more general framework emphasizing the WEP instead of injectivity. Rather than move the proof to a much later chapter we choose to give it here with references to the text coming ahead, hoping for the reader's indulgence. We believe our (exceptional) choice gives a more focused global picture of injectivity. The first part of the proof is simply a reduction to the case when M admits a finite faithful normal tracial state τ. Indeed, if this case is settled it is very easy to deduce from it the case when M is semifinite, because semifiniteness of M implies (see Remark 11.1) the existence of a net of normal unital c.p. maps on M tending weak* to the identity and each with range in a finite von Neumann subalgebra of M, which inherits the injectivity of M. Once the semifinite case is settled, Takesaki's duality theorem 11.3 comes to our rescue and produces the general case, taking Remark 11.4 into account. Thus it suffices to consider a tracial probability space (M,τ) (in the sense of Definition 11.6). If M is injective, then by (22.15) for any finite set (xj) in M we have

‖Σ_j xj ⊗ x̄j‖_{M⊗max M̄} = ‖Σ_j xj ⊗ x̄j‖_{M⊗min M̄}.   (8.5)

Since left and right multiplication are commuting representations of M on L2(τ) (see §11.3) we always have ‖Σ_j xj‖₂² ≤ ‖Σ_j xj ⊗ x̄j‖_{M⊗max M̄}, but injectivity implies by (8.5)

‖Σ_j xj‖₂² ≤ ‖Σ_j xj ⊗ x̄j‖_{M⊗min M̄}.

We assume M ⊂ B(L2 (τ )). By Theorem 11.38 (with π = IdM and E = M) and (i) ⇒ (ii) in Theorem 4.21 there is a net of finite-dimensional subspaces Hi ⊂ L2 (τ ) and Ki ⊂ L2 (τ ) and states fi on B(Hi ) ⊗min B(Ki ) such that ∀y,x ∈ M

τ (y ∗ x) = limi fi (PHi y|Hi ⊗ PKi x|Ki ).

(8.6)

Replacing both Hi and Ki by Hi + Ki ⊂ L2(τ) we may assume for simplicity that Ki = Hi. Since M is dense in L2(τ), by perturbation we may assume that Hi ⊂ M. This gives us the advantage (purely for convenience in the sequel) that there is ci < ∞ such that

∀y ∈ M   ‖P_{Hi} y|_{Hi}‖ ≤ ci τ(|y|),   (8.7)

which is easy to deduce from (11.2). We will now describe the state ϕi ∈ (M ⊗max M)∗ defined by ∀x,y ∈ M

ϕi (y ⊗ x) = fi (PHi y|Hi ⊗ PHi x|Hi ).

By Theorem 4.16 and the discussion of M∗  L1 (τ ) in §11.2 there is a finite rank map ηi : M → L1 (τ ) that is c.p. in the sense of (4.28) such that ∀x,y ∈ M

ϕi (y ⊗ x) = τ (y ∗ ηi (x)).

The fact that ηi takes its values in M∗ rather that M ∗ is due to the fact that ϕi is separately normal when considered as a bilinear form on M × M. Its rank is finite because dim(Hi ) < ∞ ensures that the latter bilinear form is of finite rank. The condition (8.7) gives us |τ (y ∗ ηi (x))| ≤ ci τ (|y|)x from which ηi (x) ≤ ci x follows by (11.4). Thus ηi (M) ⊂ M ⊂ L1 (τ ). Since ηi : M → L1 (τ ) satisfies (4.28), one can check using (11.9) for Mn (M) that ηi ∈ CP(M,M). We now wish to modify ηi to make it unital. Observe that by (8.6) ηi (1) → 1 for σ (M∗,M). Using Mazur’s Theorem A.9 we may pass to convex combinations and assume that ηi (1) → 1 in norm in M∗ . Assume for a moment that ηi (1) is invertible in M, then we define for x ∈ M ui (x) = ηi (1)−1/2 ηi (x)ηi (1)−1/2 . We have for all x,y ∈ M τ (y ∗ x) = limi τ (ηi (1)1/2 y ∗ ηi (1)1/2 ui (x)) but since ηi (1)1/2 → 1 in L2 (τ ) by the Powers–Størmer inequality (11.38) (as extended to general traces by Araki, Connes, and Haagerup, see [241, (9) p. 143], see also [134, appendix]) we must have as well τ (y ∗ x) = limi τ (y ∗ ui (x)). Thus we have obtained a net of finite rank unital c.p. maps (ui ) tending to the identity in the σ (M,M)-sense but since ui  = ui (1) = 1, the net being equicontinuous and M dense in M∗ , we conclude that ui (x) → x for σ (M,M∗ ), whence the weak* CPAP. The only drawback is that we assumed ηi (1) invertible. This can be fixed easily by replacing ηi by ηi,ε defined by ηi,ε (x) = ηi (x) + ετ (x)1 (x ∈ M) in the preceding reasoning and letting ε > 0 tend to zero as part of our net. We skip the details. Theorem 8.13 The following properties of a von Neumann algebra M ⊂ B(H ) are equivalent:

(i) M is injective.
(ii) M has the weak* CPAP.
(iii) For any C∗-algebra A and any t ∈ A ⊗ M we have ‖t‖nor = ‖t‖min.
(iv) For any von Neumann algebra N and any t ∈ N ⊗ M we have ‖t‖bin = ‖t‖min.

Proof We already know (i) ⇒ (ii) by Theorem 8.12. Assume (ii). Let (ui ) be as in Definition 8.9. Let t ∈ A ⊗ M. By (6.16) we have (IdA ⊗ ui )(t)max ≤ tmin and hence for any commuting pair of ∗-homomorphisms σ : A → B(H ) and π : M → B(H ) we have (σ · π )((IdA ⊗ ui )(t)) ≤ tmin . Now if π is assumed normal it is easy to check that (σ · π )((IdA ⊗ ui )(t)) tends weak* to (σ · π )(t) and hence (σ · π )(t) ≤ tmin . This shows tnor ≤ tmin . The converse inequality is obvious. (iii) ⇒ (iv) is trivial since  · bin ≤  · nor . Assume (iv) with N = M  . We claim that M  is injective and hence (i) holds by Proposition 8.5. Indeed, (iv) implies that the map  π in Proposition 8.11 (with A = M and π : M → B(H ) the embedding) is continuous on M  ⊗ M, with respect to the min-norm, i.e. the norm induced by B(H )⊗min M; a fortiori it is continuous with respect to the norm induced by B(H ) ⊗max M, so M  is injective by Proposition 8.11. The equivalence between (ii) and (iii) in the next result is somewhat surprising. We state this for emphasis. Theorem 8.14 (A consequence of the weak* CPAP) The following properties of a von Neumann algebra M ⊂ B(H ) are equivalent: (i) M is injective. (ii) The product mapping p defined on M  ⊗ M by p(x  ⊗ x) = x  x defines a contractive ∗-homomorphism from M  ⊗min M to B(H ). (iii) The product mapping p defines a contractive ∗-homomorphism from M  ⊗ M equipped with the norm induced by B(H ) ⊗max B(H ). Proof Assume (i). Note that p is clearly contractive on M  ⊗bin M, so that (ii) holds by Theorem 8.13 applied with N = M  . (ii) ⇒ (iii) is clear since the norm induced on M  ⊗ M by B(H ) ⊗max B(H ) dominates the minimal C ∗ -norm, i.e. the one of M  ⊗min M. Assume (iii). We will apply Proposition 8.11 with A = M. Note that the norm induced on M  ⊗M by B(H )⊗max B(H ) is clearly majorized by the norm induced by B(H ) ⊗max M. Thus Proposition 8.11 shows that M  is injective, but since we may exchange the roles of M and M  , M is injective. We now derive the consequences of the preceding (major) theorems for nuclear C ∗ -algebras.


Corollary 8.15 Let A be a nuclear C∗-algebra. Then for any ∗-homomorphism π : A → B(H) the von Neumann algebra M = π(A)″ generated by π is injective. In particular, A∗∗ is injective.

Proof Since A is nuclear, we have an isometric embedding M′ ⊗max A ⊂ B(H) ⊗max A, and by definition of M′ ⊗max A the map π̃ in Proposition 8.11 clearly satisfies ‖π̃ : M′ ⊗max A → B(H)‖ ≤ 1. Thus M′, and also M by Proposition 8.5, are injective.

Actually the converse of the preceding corollary also holds, as the next statement shows.

Theorem 8.16 A C∗-algebra A is nuclear if and only if its bidual A∗∗ is injective.

Proof Assume A∗∗ injective. By Theorem 8.12 it has the weak* CPAP. By Remark 8.10 there is a net of finite rank maps (vi) on A∗ that are preadjoints of unital c.p. maps and tend pointwise to the identity on A∗. Let B be any C∗-algebra. Let u : B → A∗ be a c.p. map. Composing with vi gives us a net of c.p. maps tending pointwise to u. It follows from the description of the set of states on B ⊗max A and B ⊗min A given in §4.5 and §4.6 that they must coincide. Thus we conclude that B ⊗max A = B ⊗min A, which means A is nuclear. The converse was already part of the preceding statement.

Corollary 8.17 ([46]) Nuclearity is preserved under quotients.

Proof Let A/I be a quotient C∗-algebra. By (A.37) we have A∗∗ ≅ (A/I)∗∗ ⊕ I∗∗. Therefore, A∗∗ injective implies (A/I)∗∗ injective.

Note that there is no known really simple and direct proof of Corollary 8.17.

Corollary 8.18 ([46]) Nuclearity is preserved under "extensions." This means that if I ⊂ A is an ideal in a C∗-algebra, and if both I and A/I are nuclear, then A is nuclear.

Proof This is a corollary of Proposition 7.15 (on the exactness of the max-tensor product). This assertion can also be seen as an easy consequence of Theorem 8.16 and the fact that for any ideal I ⊂ A, we have by (A.37) a C∗-isomorphism A∗∗ ≅ (A/I)∗∗ ⊕ I∗∗. Indeed, the latter isomorphism shows that A∗∗ is injective if and only if both (A/I)∗∗ and I∗∗ are injective.


As we will see in the next chapter, injectivity is equivalent to the WEP for von Neumann algebras (see Corollary 9.26). Thus the reader will find more conditions equivalent to injectivity there, as well as in §11.7 on hypertraces.

8.4 Local reflexivity of the maximal tensor product In the next section we will study local reflexivity. A C ∗ -algebra A is locally reflexive if for any B and any t ∈ A∗∗ ⊗B we have t(A⊗min B)∗∗ ≤ tA∗∗ ⊗min B (see what follows for clarification). Equivalently this means that for any finite-dimensional operator space E we have CB(E,A)∗∗ = CB(E,A∗∗ ) isometrically. We will soon show (see Remarks 8.32 and 8.33) that this does not always hold, even when E = n∞ (n > 2). In sharp contrast, we show in the present section that a property analogous to local reflexivity does hold for the max-tensor product, and moreover we always have D(E,A)∗∗ = D(E,A∗∗ ) isometrically when E = Mn or E = n∞ (n ≥ 1). Let A,B be unital C ∗ -algebras. We first need to clarify how we embed ∗∗ A ⊗ B ∗∗ into the biduals (A ⊗max B)∗∗ and (A ⊗min B)∗∗ . Let us denote simply by i0 : A ⊗max B → (A ⊗max B)∗∗ (resp. i1 : A ⊗min B → (A⊗min B)∗∗ ) the natural inclusion. Define π0 : A → (A⊗max B)∗∗ (resp. π1 : A → (A ⊗min B)∗∗ ) and σ0 : B → (A ⊗max B)∗∗ (resp. σ1 : B → (A ⊗min B)∗∗ ) by π0 (a) = i0 (a ⊗ 1) (resp. π1 (a) = i1 (a ⊗ 1)) and σ0 (b) = i0 (1 ⊗ b) (resp. σ1 (b) = i1 (1 ⊗ b)). Then π0,σ0 (resp. π1,σ1 ) are ∗-homomorphisms with commuting ranges such that i0 = π0 · σ0 (resp. i1 = π1 · σ1 ). Actually we can define similar pairs (π0,σ0 ) (resp. (π1,σ1 )) in the nonunital case using Remark 4.2 and the observation that the universal representations of A ⊗max B and A ⊗min B, being direct sums of cyclic ones, are nondegenerate. Let q : A ⊗max B → A ⊗min B be the quotient map. Then q ∗∗ : (A ⊗max ∗∗ B) → (A ⊗min B)∗∗ is a normal ∗-homomorphism onto (A ⊗min B)∗∗ . Note that we have canonical linear embeddings A∗ ⊗ B ∗ ⊂ (A ⊗max B)∗ (resp. A∗ ⊗ B ∗ ⊂ (A ⊗min B)∗ ) that, for any (f ,g) ∈ A∗ ×B ∗ , take f ⊗g to the linear map f ⊗g : A⊗max B → C (resp. f ⊗ g : A ⊗min B → C). Proposition 8.19 There are natural inclusions Jmax : A∗∗ ⊗ B ∗∗ → (A ⊗max B)∗∗ and Jmin : A∗∗ ⊗ B ∗∗ → (A ⊗min B)∗∗ such that for all (a ,b ) ∈ A∗∗ × B ∗∗ and (f ,g) ∈ A∗ × B ∗ we have Jmax (a  ⊗ b ),f ⊗ g = a  (f )b (g) = Jmin (a  ⊗ b ),f ⊗ g. Moreover Jmin = q ∗∗ Jmax .


Proof Let π̈0 : A∗∗ → (A ⊗max B)∗∗ (resp. π̈1 : A∗∗ → (A ⊗min B)∗∗) and σ̈0 : B∗∗ → (A ⊗max B)∗∗ (resp. σ̈1 : B∗∗ → (A ⊗min B)∗∗) be the normal ∗-homomorphisms extending π0 and σ0 (resp. π1 and σ1), still with commuting ranges. This gives us a ∗-homomorphism Jmax : A∗∗ ⊗ B∗∗ → (A ⊗max B)∗∗ (resp. Jmin : A∗∗ ⊗ B∗∗ → (A ⊗min B)∗∗) defined by Jmax = π̈0 · σ̈0 and Jmin = π̈1 · σ̈1.

Claim: For any t ∈ A∗∗ ⊗ B∗∗ and any F ∈ (A ⊗min B)∗ we have

⟨Jmax(t), q∗(F)⟩ = ⟨Jmin(t), F⟩.   (8.8)

Moreover, if F = f ⊗ g with (f,g) ∈ A∗ × B∗, then

⟨Jmin(t), F⟩ = ⟨t, f ⊗ g⟩   (8.9)

where the last pairing is the canonical one between A∗∗ ⊗ B∗∗ and A∗ ⊗ B∗.

Proof of the Claim: It suffices to prove (8.8) for any t of the form t = a′ ⊗ b′ with (a′,b′) ∈ A∗∗ × B∗∗. Equivalently, it suffices to prove

⟨Jmax(a′ ⊗ b′), q∗(F)⟩ = ⟨Jmin(a′ ⊗ b′), F⟩.   (8.10)

It is easy to verify going back to the definitions that, for any fixed F ∈ (A ⊗min B)∗ , both sides of (8.10) are separately weak* continuous bilinear forms on A∗∗ × B ∗∗ , which coincide (and are equal to F ) on A × B. Therefore they coincide on A∗∗ × B ∗∗ . This proves (8.8) and hence Jmin = q ∗∗ Jmax . Now if F = f ⊗ g with (f ,g) ∈ A∗ × B ∗ , then (a ,b ) → t,F  = f (a  )g(b ) is also a separately weak* continuous bilinear form on A∗∗ × B ∗∗ coinciding with the preceding two on A × B. This implies (8.9), completing the proof of the claim. By the second part of Remark A.1, (8.9) shows that Jmin is injective, and since Jmin = q ∗∗ Jmax so is Jmax . It will be convenient to record here a simple observation. Lemma 8.20 Let A,B be C ∗ -algebras. Let π : A → B(H ) and σ : B → B(H ) be representations with commuting ranges such that π · σ : A ⊗ B → B(H ) extends to a contractive ∗-homomorphism T : A ⊗min B → B(H ). Then T¨ : (A ⊗min B)∗∗ → B(H ) satisfies ∀a  ∈ A∗∗,b ∈ B ∗∗

T̈(a′ ⊗ b′) = π̈(a′) σ̈(b′),

(8.11)

where the embedding A∗∗ ⊗B ∗∗ ⊂ (A ⊗min B)∗∗ is implicitly meant to be Jmin . Proof Note that by definition the mapping (a ,b ) → Jmin (a  ⊗ b ) is separately normal. Therefore both sides of (8.11) are separately normal bilinear maps on A∗∗ × B ∗∗ . Since they clearly coincide on A × B, and A (resp. B) is weak* dense in A∗∗ (resp. B ∗∗ ), they must coincide on A∗∗ × B ∗∗ .


We now come to the version of local reflexivity satisfied by the maximal tensor product. In the next two statements, the embedding A∗∗ ⊗ B∗∗ ⊂ (A ⊗max B)∗∗ is implicitly meant to be Jmax.

Theorem 8.21 For any B and any t ∈ A∗∗ ⊗ B

‖t‖_{(A⊗max B)∗∗} ≤ ‖t‖_{A∗∗ ⊗max B}.   (8.12)

More precisely, we have

‖t‖_{(A⊗max B)∗∗} = ‖t‖_{A∗∗ ⊗bin B∗∗} ≤ ‖t‖_{A∗∗ ⊗max B∗∗} = ‖t‖_{A∗∗ ⊗max B}.   (8.13)

We start by proving first a more precise version of (8.13):

Theorem 8.22 The norm induced by (A ⊗max B)∗∗ on A∗∗ ⊗ B∗∗ coincides with the bin-norm.

Proof By definition Jmax : A∗∗ ⊗ B∗∗ → (A ⊗max B)∗∗ is of the form Jmax = π̈0 · σ̈0. From this follows, by definition of the bin-norm, that for any t ∈ A∗∗ ⊗ B∗∗ we have ‖Jmax(t)‖ ≤ ‖t‖bin. Actually, the reverse inequality also holds. To show this consider an isometric embedding ϕ : A∗∗ ⊗bin B∗∗ ⊂ B(H) such that the restriction to each factor is normal. (This obviously exists, just consider the direct sum of all π̈ · σ̈ as in (8.4).) Note that, by definition, the bin-norm of A∗∗ ⊗ B∗∗ restricted to A ⊗ B coincides with the max-norm of A ⊗ B. Thus we have a ∗-homomorphism ψ : A ⊗max B → B(H) obtained by restricting ϕ to A ⊗ B. But now we have a normal extension ψ̈ : (A ⊗max B)∗∗ → B(H) (with ‖ψ̈‖ = 1 of course). We claim that

∀a ∈ A∗∗, ∀b ∈ B∗∗,   ψ̈ Jmax(a ⊗ b) = ϕ(a ⊗ b),   (8.14)

and hence

∀t ∈ A∗∗ ⊗ B∗∗,   ψ̈ Jmax(t) = ϕ(t).

Indeed, both sides of (8.14) are separately normal bilinear maps on A∗∗ × B∗∗, which coincide on A × B. By weak* density again, (8.14) follows. From this we deduce

∀t ∈ A∗∗ ⊗ B∗∗,   ‖t‖_{A∗∗ ⊗bin B∗∗} = ‖ϕ(t)‖ ≤ ‖Jmax(t)‖_{(A⊗max B)∗∗}.


Remark 8.23 We cannot replace the bin-norm by the max-norm in Theorem 8.22. Indeed, consider A = B = K(H) (compact operators). Now A, B are nuclear, so A ⊗max B = A ⊗min B = K(H ⊗2 H). Thus (A ⊗max B)∗∗ = B(H ⊗2 H), and A∗∗ = B∗∗ = B(H). The norm induced by (A ⊗max B)∗∗ on B(H) ⊗ B(H) is now the min-norm. But it is known (see §18.1) that the min and max norms are not equivalent on B(H) ⊗ B(H).

Lemma 8.24 Let C, A be C∗-algebras. Let u : C → A∗∗ and let (ui) be a net in the unit ball of D(C,A) such that ui(x) → u(x) with respect to σ(A∗∗,A∗) for any x ∈ C. Then u ∈ D(C,A∗∗) with ‖u‖dec ≤ 1.

Proof By definition of ‖ui‖dec there are maps S1i, S2i with ‖S1i‖ ≤ 1, ‖S2i‖ ≤ 1 such that

Vi : x ↦ [ S1i(x), ui(x) ; ui(x∗)∗, S2i(x) ]  (a 2 × 2 matrix)

belongs to CP(C, M2(A)). Passing to a subnet we may assume that S1i(x) and S2i(x) are σ(A∗∗,A∗)-convergent for any x ∈ C to S1(x) ∈ A∗∗ and S2(x) ∈ A∗∗, so that ‖S1‖ ≤ 1, ‖S2‖ ≤ 1. Then the limit V of (Vi) is clearly in CP(C, M2(A∗∗)). Since V(x) = [ S1(x), u(x) ; u(x∗)∗, S2(x) ], we have u ∈ D(C,A∗∗) and ‖u‖dec ≤ 1.

In sharp contrast with (8.20), we have

Theorem 8.25 For any n and any C∗-algebra A, we have natural isometric identifications D(Mn,A∗∗) = D(Mn,A)∗∗ and D(ℓⁿ∞,A∗∗) = D(ℓⁿ∞,A)∗∗.

Proof Note that the spaces D(Mn,A∗∗) and D(Mn,A)∗∗ are setwise identical. The inclusion D(Mn,A)∗∗ → D(Mn,A∗∗) has norm ≤ 1 by Lemma 8.24. For the reverse inclusion, we use the description of the unit ball of D(Mn,A) given in (6.31). Let u be in the open unit ball of D(Mn,A∗∗). Define a ∈ Mn(A∗∗) by aij = u(eij). By (6.31) we can find b, c ∈ Mn(A∗∗) such that aij = Σ_k b∗_{ki} c_{kj} for all i,j and (by homogeneity) max{ ‖(Σ_{k,j} b∗_{kj} b_{kj})^{1/2}‖, ‖(Σ_{k,j} c∗_{kj} c_{kj})^{1/2}‖ } < 1. Since Mn(A∗∗) = Mn(A)∗∗ (see Proposition A.58), there are nets (b^γ) and (c^δ) in Mn(A) that are σ(A∗∗,A∗)-convergent to b and c and such that max{ ‖(Σ_{k,j} b^{γ∗}_{kj} b^γ_{kj})^{1/2}‖, ‖(Σ_{k,j} c^{δ∗}_{kj} c^δ_{kj})^{1/2}‖ } ≤ 1. Let a^{γ,δ}_{ij} = Σ_k b^{γ∗}_{ki} c^δ_{kj}. Let us now assume that A∗∗ ⊂ B(H) (as a von Neumann subalgebra). We then have for all h, h′ ∈ H

lim_γ lim_δ ⟨h, a^{γ,δ}_{ij} h′⟩ = Σ_k lim_γ lim_δ ⟨b^γ_{ki} h, c^δ_{kj} h′⟩ = Σ_k ⟨b_{ki} h, c_{kj} h′⟩ = ⟨h, a_{ij} h′⟩.

Thus lim_γ lim_δ a^{γ,δ}_{ij} = a_{ij} in the w.o.t. of B(H). Then since the adjoint of the embedding A∗∗ ⊂ B(H) takes B(H)∗ onto A∗, the set of functionals on A of the form x → ⟨h, xh′⟩ (h, h′ ∈ H) is total in A∗, and hence since the a^{γ,δ}'s are uniformly bounded we must have lim_γ lim_δ a^{γ,δ}_{ij} = a_{ij} for σ(A∗∗,A∗). This shows ‖u‖_{D(Mn,A)∗∗} ≤ 1. Thus we conclude that the reverse inclusion D(Mn,A∗∗) → D(Mn,A)∗∗ has norm ≤ 1. If we replace Mn by ℓⁿ∞ the same proof works using (6.32). Alternatively we can use the realization of ℓⁿ∞ as diagonal matrices in Mn and (6.33) to deduce the case of ℓⁿ∞ from that of Mn.

8.5 Local reflexivity

Following [77] a C∗-algebra A is called locally reflexive if for any B and any t ∈ A∗∗ ⊗ B we have

‖t‖_{(A⊗min B)∗∗} ≤ ‖t‖_{A∗∗ ⊗min B},   (8.15)

or equivalently (see Remark 1.9) for any t ∈ B ⊗ A∗∗ we have

‖t‖_{(B⊗min A)∗∗} ≤ ‖t‖_{B⊗min A∗∗}.   (8.16)

In (8.15) and throughout this section, we (implicitly) use the map Jmin : A∗∗ ⊗ B∗∗ → (A ⊗min B)∗∗ from Proposition 8.19 to view A∗∗ ⊗ B∗∗ as included in (A ⊗min B)∗∗. In sharp contrast with the Banach space analogue of local reflexivity, briefly described in §A.8 (from which the terminology comes), this property does not always hold (see Remark 8.33). It is implied by "exactness" but the converse is an open problem.

Remark 8.26 (Reversing (8.16) and (8.15)) Let A be an arbitrary C∗-algebra. Then the reverse inequalities to (8.16) or (8.15) hold: for any t ∈ B ⊗ A∗∗ we have

‖t‖_{B⊗min A∗∗} ≤ ‖t‖_{(B⊗min A)∗∗}.   (8.17)

This is immediate by the minimality of the min-norm among C∗-norms on B ⊗ A∗∗. However, we find it instructive to include a direct "hands on" proof, as follows. Let E be an n-dimensional operator space such that t ∈ E ⊗ A∗∗. The space E ⊗min A∗∗ is isomorphic to [A∗∗]ⁿ. The unit ball of the space E ⊗min A∗∗ is closed for the weak* topology (i.e. the topology induced by σ(A∗∗,A∗))


(we leave this as an exercise). Therefore E ⊗min A∗∗ is isometrically a dual Banach space. Let J : E ⊗min A → E ⊗min A∗∗ denote the isometric inclusion. Equivalently J = IdE ⊗ iA. Then, with the notation in (A.32), we have ‖J̈ : [E ⊗min A]∗∗ → E ⊗min A∗∗‖ ≤ 1, and, under the identifications [E ⊗min A]∗∗ ≅ E ⊗min A∗∗ ≅ [A∗∗]ⁿ, J̈ is the identity.

Remark 8.27 Let t ∈ B ⊗ A∗∗ and let E ⊂ B be finite dimensional such that t ∈ E ⊗ A∗∗. Clearly, since E ⊗min A∗∗ ⊂ B ⊗min A∗∗ and [E ⊗min A]∗∗ ⊂ [B ⊗min A]∗∗ are isometric inclusions, (8.16) holds if and only if for any finite-dimensional E ⊂ B and any such t ∈ E ⊗ A∗∗ we have

‖t‖_{[E⊗min A]∗∗} ≤ ‖t‖_{E⊗min A∗∗}.   (8.18)

Using the identification CB(E∗,A) = E ⊗min A (see §2.4), in which we may exchange the roles of E and E∗, and the preceding remark, we see that A is locally reflexive if and only if for any finite-dimensional operator space E we have CB(E,A)∗∗ = CB(E,A∗∗) isometrically. Let us record this important fact.

Proposition 8.28 A C∗-algebra A is locally reflexive if and only if for any u ∈ CB(E,A∗∗)

‖u‖_{CB(E,A)∗∗} ≤ ‖u‖_{CB(E,A∗∗)},   (8.19)

or more explicitly, any u ∈ BCB(E,A∗∗ ) is the pointwise-weak* limit of a net in BCB(E,A) . The latter reformulation explains the analogy with (A.10). Actually the reverse of (8.19) always holds by Remark 8.27, and hence we have equality in (8.19). This shows that when A is locally reflexive, we have an isometric embedding A∗∗ ⊗min B ⊂ [A ⊗min B]∗∗ for any B. Theorem 8.29 Any nuclear C ∗ -algebra is locally reflexive. Proof Let A,B be C ∗ -algebras. Assume A nuclear. Then A ⊗min B = A ⊗max B ⇒ (A ⊗min B)∗∗ = (A ⊗max B)∗∗ . By Theorem 8.22 we get A∗∗ ⊗bin B ∗∗ → (A ⊗max B)∗∗  ≤ 1. By Corollary 8.15 the algebra A∗∗ is injective and hence A∗∗ ⊗bin B ∗∗ = A∗∗ ⊗min B ∗∗ . Thus we obtain A∗∗ ⊗min B ∗∗ → (A ⊗min B)∗∗  ≤ 1 (this is called property (C) in Remark 8.34), which implies a fortiori the local reflexivity of A. In the opposite direction, the next statement will allow us to produce explicit examples failing local reflexivity.


Proposition 8.30 If a C∗-algebra A is locally reflexive then for any ideal I ⊂ A and any C∗-algebra B we have B ⊗min (A/I) = (B ⊗min A)/(B ⊗min I). In other words, the quotient map A → A/I is min-projective in the sense of Definition 7.40.

Proof Recall that the canonical map [(B ⊗min A)/(B ⊗min I)] → B ⊗min (A/I) has unit norm. Thus it suffices to prove the same for its inverse. Since B ⊗min I is an ideal in B ⊗min A we have canonically (see (A.37))

(B ⊗min A)∗∗ ≅ [(B ⊗min A)/(B ⊗min I)]∗∗ ⊕ (B ⊗min I)∗∗.

Moreover since A∗∗ ≅ (A/I)∗∗ ⊕ I∗∗ (again by (A.37)) we have an embedding B ⊗min (A/I)∗∗ ⊂ B ⊗min A∗∗. By the local reflexivity of A we may write (with maps of unit norm)

B ⊗min (A/I)∗∗ ⊂ B ⊗min A∗∗ → (B ⊗min A)∗∗ → [(B ⊗min A)/(B ⊗min I)]∗∗,

and a fortiori B ⊗min (A/I) → [(B ⊗min A)/(B ⊗min I)]∗∗ has unit norm. But the range of the latter ∗-homomorphism is included in (B ⊗min A)/(B ⊗min I); therefore we find that the map B ⊗min (A/I) → [(B ⊗min A)/(B ⊗min I)] also has unit norm.

Remark 8.31 (Local reflexivity and injectivity) Let E ⊂ F be an inclusion of operator spaces with E finite dimensional. Assume that a C∗-algebra A has the following extension property: for any u : E → A there is ũ : F → A extending u with ‖ũ‖cb = ‖u‖cb. If A is locally reflexive then A∗∗ has the same property: any u : E → A∗∗ extends to some ũ : F → A∗∗ with ‖ũ‖cb = ‖u‖cb. Indeed, given u ∈ CB(E,A∗∗) we have a net ui ∈ CB(E,A) with ‖ui‖cb ≤ ‖u‖cb tending weak* to u. Then the map ũ ∈ CB(F,A∗∗) equal to a pointwise-weak* cluster point of the net (ũi) (where ũi : F → A extends ui with the same cb-norm) is the desired extension of u.


When A is a von Neumann algebra the preceding property holds for all inclusions E ⊂ F with dim(E) < ∞ if and only if A is injective. Indeed, this follows by a simple weak* limit argument. Remark 8.32 (Local reflexivity is inherited by subalgebras and quotients) Local reflexivity passes to C ∗ -subalgebras. Indeed, this follows directly from the definition, once one recalls that for any closed subspace Y ⊂ X of a Banach space X we have an isometric canonical embedding Y ∗∗ ⊂ X∗∗ (see Remark ∗∗ (completely) A.54). When A1 ⊂ A is a C ∗ -subalgebra, we have A∗∗ 1 ⊂ A isometrically, and the latter embedding realizes A∗∗ 1 as a von Neumann subalgebra of A∗∗ . We also have similarly (A1 ⊗min B)∗∗ ⊂ (A ⊗min B)∗∗ isometrically. Therefore, if A is locally reflexive, for any t ∈ A∗∗ 1 ⊗ B ⊂ A∗∗ ⊗ B we have , t(A1 ⊗min B)∗∗ = t(A⊗min B)∗∗ ≤ tA∗∗ ⊗min B = tA∗∗ 1 ⊗min B and we conclude that A1 is locally reflexive. Thus B(H ) must fail local reflexivity, otherwise any C ∗ -algebra would be locally reflexive. One quick way to see that B(H ) fails local reflexivity is to observe that if it were locally reflexive then B(H )∗∗ would be injective, and hence B(H ) would be nuclear (see Remark 8.31). Local reflexivity passes to quotient C ∗ -algebras. We briefly sketch the easy argument for this. Let q : A → A/I be the quotient map. By (A.37) we have a ∗-homomorphism r : (A/I)∗∗ → A∗∗ such that q ∗∗ r = Id(A/I )∗∗ . Let B be another C ∗ -algebra, we have r ⊗ IdB : (A/I)∗∗ ⊗min B → A∗∗ ⊗min B = 1 and if A is locally reflexive A∗∗ ⊗min B → (A ⊗min B)∗∗  = 1, while clearly (q ⊗ IdB )∗∗ : (A ⊗min B)∗∗ → ((A/I) ⊗min B)∗∗  = 1. By composition it follows that (A/I)∗∗ ⊗min B → ((A/I) ⊗min B)∗∗  = 1, which means that A/I is locally reflexive. Thus, for some free group F (resp. for F = F∞ ) C ∗ (F) (resp. C ) must fail local reflexivity, otherwise any (resp. any separable) C ∗ -algebra would be locally reflexive. Remark 8.33 (C and B fail local reflexivity (quantitative estimate)) By Proposition 7.34 the quotient map C ∗ (Fn ) → Cλ∗ (Fn ) is not min-projective when 1 < n ≤ ∞. By Proposition 8.30 this means that C ∗ (Fn ) is not locally reflexive, and since the latter embeds in B, a fortiori B is not locally reflexive. More explicitly, let G = Fn , A = C ∗ (G), H = 2 (G) and M = MG ⊂ B(H ). Then the extension of λG defines a ∗-homomorphism π : A → M, such that π¨ : A∗∗ → M is a surjective normal ∗-homomorphism (see Remark A.49). Let I = ker(π). ¨ Then A∗∗  M ⊕ I by (A.24), so that we have a natural embedding M ⊂ A∗∗ , which we denote by  : M → A∗∗ , that lifts π¨ so that π¨  = IdM . Let Uj = UG (gj ) as usual and U0 = 1. Consider the tensor


t = Σ_{j=0}^{n} Uj ⊗ ℓ(π(Uj)) ∈ A ⊗ A∗∗.

Then
‖t‖A⊗min A∗∗ = 2√n  and  ‖t‖(A⊗min A)∗∗ = n + 1.
Indeed, ‖t‖A⊗min A∗∗ = ‖Σ_{j=0}^{n} Uj ⊗ π(Uj)‖A⊗min M = 2√n by (4.27). By Lemma 8.20, the inequality ‖t‖(A⊗min A)∗∗ ≥ n + 1 follows from the factorization property of the free groups that will be proved later on in Corollary 12.23. Indeed, let σ : A → B(H) be the representation associated to ρG. By the factorization property (see Definition 7.36) we know that T = σ · π : A ⊗min A → B(H) is contractive. By Lemma 8.20 we have
T̈(t) = Σ σ(Uj) π̈(ℓ(π(Uj))) = Σ σ(Uj) π(Uj),  and  ‖T̈ : (A ⊗min A)∗∗ → B(H)‖ ≤ 1.
This gives us n + 1 = ‖Σ σ(Uj)π(Uj)‖ ≤ ‖T̈(t)‖ ≤ ‖t‖(A⊗min A)∗∗. Using (2.14) this can be reformulated using the linear operator u : ℓ∞^{n+1} → A∗∗ (corresponding to t) defined by u(ej) = ℓ(π(Uj)), as follows
‖u‖CB(ℓ∞^{n+1},A∗∗) = 2√n  and  ‖u‖CB(ℓ∞^{n+1},A)∗∗ = n + 1.   (8.20)
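Since the whole point of (8.20) is that the two quantities differ, it may help to record the elementary gap estimate (my addition; it is not in the original text):

\[
(n+1) - 2\sqrt{n} \;=\; (\sqrt{n}-1)^{2} \;>\; 0 \qquad (n \ge 2),
\]

so for n ≥ 2 the two norms in (8.20) are genuinely different; as I read (2.14) and the discussion above, they would have to coincide if A were locally reflexive, which is exactly the failure being quantified here.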


This argument shows that if a group G has the factorization property then G is amenable if and only if C∗(G) is locally reflexive.
Remark 8.34 (Properties C and C′) The origin of local reflexivity for C∗-algebras lies in [10]. There Archbold and Batty introduced two properties that they named C and C′. A has property C if for any C∗-algebra B we have an isometric embedding A∗∗ ⊗min B∗∗ ⊂ (A ⊗min B)∗∗. A has property C′ if for any C∗-algebra B we have an isometric embedding A ⊗min B∗∗ ⊂ (A ⊗min B)∗∗. They showed in [10] that property C implies exactness, as defined in the sequel in §10.1, and it is obvious that C ⇒ C′. Kirchberg showed later on that C and C′ are actually equivalent properties, and each is equivalent to exactness. Clearly property C implies local reflexivity but (as we already mentioned) the converse remains open. We refer the reader to [208, ch. 18] for more comments and to [39, ch. 9] for a complete proof of the equivalence of C and C′, which is much more delicate than the one of C′ with exactness that we prove later on in Proposition 10.12.


Remark 8.35 The definition of local reflexivity makes sense equally well for operator spaces. Surprisingly, it turns out that any predual of a von Neumann algebra is locally reflexive when viewed as an operator space. See [78].

8.6 Notes and remarks The results of §8.1 on biduals are all well-known facts, while those on decomposable maps are based on [104]. Equations (6.14) and (6.15) appear in [141]. Concerning injectivity in §8.3 again the main references are Connes’s [61], Lance’s [165] and the Choi–Effros papers [45–48]. The original proof that injective factors on a separable Hilbert space are approximately finite dimensional (i.e. “hyperfinite”) is an outstanding achievement of Connes [61]. The case of general von Neumann algebras was deduced from Connes’s results by Elliott. See [82–84, 261] for clarifications on that question. Later on, simpler proofs of Connes’s result that injective implies AFD were given by Uffe Haagerup [105] and Sorin Popa [217]. See also chapter XVI in [241], [39, p. 333] and [4, ch. 11] for more recent detailed expositions. The proof that injective ⇒ semidiscrete is more accessible. A simpler proof of that implication appears in S. Wassermann’s [256]. Before Connes’s work and the Choi–Effros papers a number of implications between injectivity and semidiscreteness (or in other words the weak* CPAP) appeared in the Effros– Lance paper [79] which was already circulating as a preprint around 1974. In particular, they proved the equivalence of injectivity and semidiscreteness for the von Neumann algebra of a discrete group. They also proved the equivalence of semidiscreteness with either (iii) or (iv) in Theorem 8.13 and also with (ii) in Theorem 8.14. Local reflexivity originates in Archbold–Batty’s [10] (see Remark 8.34) where they prove Theorem 8.29, but the subject owes a lot to a subsequent paper by Effros and Haagerup [77], and to Simon Wassermann’s work in connection with exactness (see [258] for more references). See [78] for more recent important work on operator space local reflexivity.

9 Nuclear pairs, WEP, LLP, QWEP

We start with a few general remarks around nuclearity for pairs.
Definition 9.1 A pair of C∗-algebras (A,B) will be called a nuclear pair if A ⊗min B = A ⊗max B, or equivalently if the min- and max-norm are equal on the algebraic tensor product A ⊗ B.
Remark 9.2 If the min- and max-norm are equivalent on A ⊗ B, then they automatically are equal by Corollary A.26.
Remark 9.3 Let A1 ⊂ A and B1 ⊂ B be C∗-subalgebras. In general, the nuclearity of the pair (A,B) does not imply that of (A1,B1). As the sequel will demonstrate, this "defect" is a major feature of the notion of nuclearity. However, if (A1,B1) admit contractive c.p. projections (conditional expectations) P : A → A1 and Q : B → B1 then (A1,B1) inherits the nuclearity of (A,B). This is an immediate application of Corollary 7.8 (see also Remark 7.20) and Proposition 7.19. More generally, the following holds:
Lemma 9.4 Let A, D be C∗-algebras. We assume that IdD factors through A in a certain "local" sense as follows: for any finite-dimensional subspace E ⊂ D and any ε > 0 there are maps v : E → A and w : A → D such that wv is the inclusion map E ⊂ D and ‖v‖cb ‖w‖dec ≤ 1 + ε. Let B be another C∗-algebra. Then, if (A,B) is nuclear, the same is true for (D,B).
Proof Let x ∈ D ⊗ B. We may assume x ∈ E ⊗ B with dim(E) < ∞. Then, since x = (w ⊗ IdB)(v ⊗ IdB)(x) we have by (6.13)


‖x‖D⊗max B ≤ ‖w‖dec ‖(v ⊗ IdB)(x)‖A⊗max B = ‖w‖dec ‖(v ⊗ IdB)(x)‖A⊗min B ≤ ‖v‖cb ‖w‖dec ‖x‖E⊗min B.
Thus, since ‖x‖E⊗min B = ‖x‖D⊗min B, we obtain ‖x‖D⊗max B = ‖x‖D⊗min B.
Remark 9.5 Since iD : D → D∗∗ is max-injective, the preceding argument works equally well if we only assume that iD (instead of IdD) factors through A in the same local sense as described in Lemma 9.4.
Recall that A is called nuclear if (A,B) is nuclear for all B. The basic examples of nuclear C∗-algebras (see §4.2) include all commutative ones, the algebra K(H) of all compact operators on an arbitrary Hilbert space H, C∗(G) for all amenable discrete groups G and the Cuntz algebras. While the meaning of nuclearity for a C∗-algebra seems by now fairly well understood, it is not so for pairs, as reflected by Kirchberg's fundamental conjecture from [155] that we discuss in detail in §13.

9.1 The fundamental nuclear pair (C∗(F∞), B(ℓ2))
A large part of the sequel revolves around the two fundamental examples
B = B(ℓ2)  and  C = C∗(F∞).

Note that these are both universal but in two different ways, injectively for B = B(2 ) (by this we mean that every separable C ∗ -algebra embeds in B), projectively for C = C ∗ (F∞ ) (by this we mean that every separable unital C ∗ -algebra is a quotient of C , see Proposition 3.39 for details). Kirchberg’s conjecture is simply that the pair (C ,C ) is nuclear. The main goal of these notes is to introduce the reader to the state of the art related to this conjecture, and in particular its equivalence with a problem posed back in 1976 by Alain Connes [61]. In this context, Kirchberg [155] proved the following striking result. We will give the simpler proof from [204]. Theorem 9.6 (The fundamental pair) The pair (C ∗ (F∞ ),B(2 )) = (C ,B) is a nuclear pair, as well as (C ∗ (F∞ ),B(H )) for any H . The following simple fact is essential for our argument. Proposition 9.7 Let A,B be two unital C ∗ -algebras. Let (ui )i∈I be a family of unitary elements of A generating A as a unital C ∗ -algebra (i.e. the smallest unital C ∗ -subalgebra of A containing them is A itself). Let E ⊂ A be the


linear span of (ui)i∈I and 1A. Let T : E → B be a linear operator such that T(1A) = 1B and taking each ui to a unitary in B. Then ‖T‖cb ≤ 1 suffices to ensure that T extends to a (completely) contractive ∗-homomorphism T̃ : A → B. Moreover, T̃ : A → B is the unique completely contractive (or equivalently the unique unital c.p.) map extending T. Lastly, if T is completely isometric and T(E) generates B (as a C∗-algebra) then T̃ is a ∗-isomorphism from A to B.
Proof The first variant uses multiplicative domains (see §5.1). Consider B as embedded in B(𝓗). By Arveson's extension Theorem 1.18, T extends to a complete contraction T̃ : A → B(𝓗). Since T is assumed unital, T̃ is unital, and hence completely positive by Corollary 1.52. Now for any unitary U in the family (ui)i∈I, since T̃(U) = T(U) is unitary by assumption, we have {ui | i ∈ I} ⊂ DT̃, and since DT̃ is a C∗-algebra (see Theorem 5.1) this implies automatically A = DT̃, so that T̃ is actually a (contractive) ∗-homomorphism into B(𝓗). Since T̃(ui) = T(ui) and the ui's generate A, we must have T̃(A) ⊂ B. Moreover, any other complete contraction T′ : A → B extending T must be a ∗-homomorphism equal to T̃ on E, and hence must be equal to T̃. Lastly, let F = T(E) ⊂ B. Note that T̃(A) is the C∗-algebra generated by F. Assume T completely isometric and T̃(A) = B. Then we can apply the first part of the proof to T⁻¹ : F → A. This gives us a ∗-homomorphism σ : B → A with ‖σ‖ ≤ 1 that is inverse to T̃ and proves the last assertion. This completes the proof.
Alternate argument: The reader who so wishes can avoid the use of multiplicative domains by arguing like this: By Theorem 1.22 we can find an embedding 𝓗 ⊂ K and a unital ∗-homomorphism π : A → B(K) such that T̃(a) = P𝓗 π(a)|𝓗. Then an elementary argument shows that if a unitary U on K is such that P𝓗 U|𝓗 is still unitary, then U must commute with P𝓗. Thus, by our assumption this commutation is true for π(ui) and hence for π(A) since the π(ui)'s generate it. This shows that a ↦ P𝓗 π(a)|𝓗 (which is the same as T̃) is a ∗-homomorphism, necessarily with range in B.
The main idea of our proof of Kirchberg's Theorem 9.6 is that if E is the linear span of 1 and the free unitary generators of C∗(F∞), then it suffices to check that the min- and max-norms coincide on E ⊗ B(H). More generally, we will prove
Theorem 9.8 Let A1,A2 be unital C∗-algebras. Let (ui)i∈I (resp. (vj)j∈J) be a family of unitary operators that generate A1 (resp. A2). Let E1 (resp. E2) be


the closed span of (ui)i∈I (resp. (vj)j∈J). Assume 1 ∈ E1 and 1 ∈ E2. Then the following assertions are equivalent:
(i) The inclusion map E1 ⊗min E2 → A1 ⊗max A2 is completely isometric.
(ii) A1 ⊗min A2 = A1 ⊗max A2.
Proof The implication (ii) ⇒ (i) is trivial (since ∗-homomorphisms are completely contractive), so we prove only the converse. Assume (i). Let E = E1 ⊗min E2. We view E as a subspace of A = A1 ⊗min A2. By (i), we have an inclusion map T : E1 ⊗min E2 → A1 ⊗max A2 with ‖T‖cb ≤ 1. By Proposition 9.7, T extends to a (contractive) ∗-homomorphism T̃ from A1 ⊗min A2 to A1 ⊗max A2. Clearly T̃ must preserve the algebraic tensor products A1 ⊗ 1 and 1 ⊗ A2, hence also A1 ⊗ A2. Thus we obtain (ii).
Remark 9.9 Let us denote by E1 ⊗ 1 + 1 ⊗ E2 the linear subspace {e1 ⊗ 1 + 1 ⊗ e2 | e1 ∈ E1, e2 ∈ E2}. Then, in the situation of Theorem 9.8, E1 ⊗ 1 + 1 ⊗ E2 generates A1 ⊗min A2, so that it suffices for the conclusion of Theorem 9.8 to assume that the operator space structures induced on E1 ⊗ 1 + 1 ⊗ E2 by the min and max norms coincide.
Proof of Kirchberg's Theorem 9.6 Let A1 = C∗(F∞), A2 = B(H). We may clearly assume dim(H) = ∞. Let U0 = 1 and let (Ui)i≥1 denote the free unitary generators of C = C∗(F∞). We take E2 = B(H) and let E1 be the linear span of (Ui)i≥0. Consider x ∈ E1 ⊗ E2, with ‖x‖min < 1. By Lemma 3.10 we can write x = Σ_{i≥0} Ui ⊗ xi with xi ∈ B(H), (xi)i≥0 finitely supported, admitting a decomposition xi = ai bi with ‖Σ ai ai∗‖ < 1, ‖Σ bi∗ bi‖ < 1, ai, bi ∈ B(H). Now, let π : A1 ⊗max A2 → B(𝓗) be any faithful ∗-homomorphism. Let π1 = π|A1⊗1 and π2 = π|1⊗A2. We have
π(x) = Σ_{i≥0} π1(Ui)π2(xi) = Σ_{i≥0} π1(Ui)π2(ai)π2(bi).

Since π1 and π2 have commuting ranges we have π(x) = y with
y = Σ_{i≥0} π2(ai) π1(Ui) π2(bi).

Now by (2.1) from Lemma 2.3 we have
‖y‖ ≤ ‖Σ_{i≥0} π2(ai)π2(ai)∗‖^{1/2} ‖Σ_{i≥0} π2(bi)∗π2(bi)‖^{1/2} < 1.

Thus we conclude that ‖x‖max = ‖π(x)‖ < 1. This shows that the min and max norms coincide on E1 ⊗ B(H). But since dim(H) = ∞ we have Mn(B(H)) ≃ B(H) for any n, and hence


Mn(E1 ⊗min B(H)) = E1 ⊗min Mn(B(H)) ≃ E1 ⊗min B(H),

and also Mn(A1 ⊗max B(H)) = A1 ⊗max Mn(B(H)) ≃ A1 ⊗max B(H), therefore the latter coincidence of norms "automatically" implies that the inclusion E1 ⊗min B(H) → A1 ⊗max B(H) is completely isometric. In other words, the operator space structures associated to the min and max norms coincide. We may clearly replace E1 by its closure. Thus, the proof is concluded by Theorem 9.8 (here E2 = A2 = B(H)).
As a complement to his fundamental Theorem 9.6 Kirchberg observed the following general phenomenon.
Theorem 9.10 Let F be any free group and M any von Neumann algebra. Then ‖·‖max = ‖·‖nor on C∗(F) ⊗ M.
Proof Assume F = FI. Let E = span[Ui | i ∈ İ] with {Ui | i ∈ İ} as in Lemma 6.28. Let us denote by E ⊗max M (resp. E ⊗nor M) the operator space generated by E ⊗ M in C∗(F) ⊗max M (resp. C∗(F) ⊗nor M). By Proposition 9.7 it suffices to prove that the natural mapping E ⊗max M → E ⊗nor M is completely isometric. Since we may replace M by Mn(M) to pass from isometric to completely isometric (we skip the easy details for this point), it suffices to show that ‖t‖max = ‖t‖nor for any t ∈ E ⊗ M (the max- and nor-norms being the ones induced on E ⊗ M by those of C∗(F) ⊗ M). Let π : M → B(H) be a normal embedding with infinite multiplicity (i.e. H = ℓ2 ⊗2 K and π(·) = Idℓ2 ⊗ σ(·) where σ : M → B(K) embeds M as a von Neumann subalgebra). Then (6.38) means that for any t ∈ E ⊗ M
‖t‖max = supσ ‖(σ · π)(t)‖   (9.1)

where the sup runs over all ∗-homomorphisms σ : C∗(F) → M′, and hence ‖t‖max ≤ ‖t‖nor. By maximality ‖t‖max = ‖t‖nor. The proof actually shows that (9.1) holds for all t ∈ C∗(F) ⊗ M. In the same direction, the following variant will be useful.

Theorem 9.11 Let (Ai)i∈I be a family of C∗-algebras and let A = (⊕i∈I Ai)∞. Let t ∈ C ⊗ A. For each i ∈ I let ti = (IdC ⊗ pi)(t) ∈ C ⊗ Ai, where pi : A → Ai is the coordinate projection. Then
‖t‖C⊗max A = supi∈I ‖ti‖C⊗max Ai.   (9.2)


Proof By Proposition 9.7 it suffices to check that this holds for any t ∈ E ⊗ A where E is the linear span of the unitary generators (Uj) and the unit. (Indeed, one can replace A by Mn(A) and observe that Mn(A) ≃ (⊕i∈I Mn(Ai))∞.) Then we may as well assume that t ∈ E ⊗ A with E = span[I, U1, . . . , UN−1]. Let u : ℓ∞^N → A (resp. ui : ℓ∞^N → Ai) be the linear map associated to t (resp. ti). Then (6.37) shows us that (9.2) is equivalent to ‖u‖dec = supi∈I ‖ui‖dec, which we observed in (6.10).
For our exposition, it will be convenient to adopt the following definitions (equivalent to the more standard ones by [155]).
Definition 9.12 Let A be a C∗-algebra. We say that A has the WEP (or is WEP) if (A,C) is a nuclear pair. We say that A has the LLP (or is LLP) if (A,B) is a nuclear pair. We say that A is QWEP if it is a quotient (by a closed, self-adjoint, two-sided ideal) of a WEP C∗-algebra. Here WEP stands for weak expectation property (and LLP for local lifting property).
Remark 9.13 If A has the WEP (resp. LLP) then any C∗-subalgebra D ⊂ A such that the inclusion D ⊂ A is max-injective has the WEP (resp. LLP) by (7.10).
Remark 9.14 By Proposition 3.5, for any subgroup Γ ⊂ G of a discrete group G, the inclusion C∗(Γ) ⊂ C∗(G) is max-injective. Therefore, for any given B, if the pair (C∗(G),B) is nuclear, then the same is true for the pair (C∗(Γ),B). For instance, it is well known that F∞ embeds as a subgroup in Fn for any 1 < n < ∞. Therefore:
(C∗(F∞),B) is nuclear ⇔ (C∗(Fn),B) is nuclear.

(9.3)

In particular, C ∗ (Fn ) is LLP for any 1 ≤ n ≤ ∞. Let F be any free group. By Lemmas 3.8 and 9.4 we can replace Fn by F in (9.3): (C ∗ (F∞ ),B) is nuclear ⇔ (C ∗ (Fn ),B) is nuclear ⇔ (C ∗ (F),B) is nuclear for any F.

(9.4)

In particular: C ∗ (F) is LLP for any free group F.

(9.5)


Remark 9.15 Let A,B,C be C∗-algebras. Assume that A is nuclear. If the pair (B,C) is nuclear then the pair (A ⊗min B, C) is also nuclear. We leave the proof as an (easy) exercise (see (4.9)). In particular, if B has the WEP (resp. LLP) then the same is true of A ⊗min B.
Remark 9.16 If A is WEP, LLP (or QWEP) then the same is true of Aop or Ā. This can be deduced easily from the fact that Aop ≃ Ā (or Ā ≃ A) when A = C and A = B (see Remarks 3.7 and 2.10). Moreover, by (4.10) the properties WEP, LLP (or QWEP) are stable under direct sum.

9.2 C∗(F) is residually finite dimensional
A C∗-algebra is called residually finite dimensional (RFD) if for any x ∈ A with x ≠ 0 there is a ∗-homomorphism π : A → B(H) with dim(H) < ∞ such that π(x) ≠ 0. It is easy to see that this holds if and only if there is a family of finite-dimensional ∗-homomorphisms
πi : A → Mn(i),   i ∈ I, n(i) < ∞,
such that their direct sum
⊕i∈I πi : A → (⊕i∈I Mn(i))∞   (9.6)

is an embedding. Equivalently A is RFD if and only if the ∗-homomorphism that is the direct sum, over all n ≥ 1, of all ∗-homomorphisms π : A → Mn is injective.
Remark 9.17 If A is separable (and RFD), since there is a countable subset dense in the unit sphere, we can always find a countable family (Mn(i))i∈I for which (9.6) is an embedding.
In this section we prove the following result (due to Choi).
Theorem 9.18 Let G be any free group with free generators {gi | i ∈ I}. Then C∗(G) is residually finite dimensional.
Proof Let {π | π ∈ Ĝ0} denote the collection of all the finite-dimensional unitary representations of G (without repetitions). We define
σ(t) = ⊕{π(t) | π ∈ Ĝ0}   ∀t ∈ G.
Clearly, σ extends to a (contractive) ∗-homomorphism
u : C∗(G) → Cσ∗(G) ⊂ (⊕π∈Ĝ0 B(Hπ))∞,


taking UG(t) to σ(t). Let E ⊂ C∗(G) be the linear span of 1 and {UG(gi) | i ∈ I}. To prove the theorem, we will show that u is isometric. By Proposition 9.7 it suffices to show that u|E : E → Cσ∗(G) is completely isometric. Let İ = {0} ∪ I (disjoint union) and let {xi | i ∈ İ} be any finitely supported family in B(H) (with H arbitrary). Then a typical element of B(H) ⊗ E is of the form
x = x0 ⊗ I + Σi∈I xi ⊗ UG(gi),
and we have
‖x‖B(H)⊗min E = sup ‖x0 ⊗ I + Σi∈I xi ⊗ ui‖,

where the sup runs over all possible families {ui | i ∈ I} of unitaries (including infinite-dimensional ones). But we saw in (3.11) that this supremum remains the same if we let it run over all possible families of finite-dimensional unitaries. Thus (3.11) tells us that ‖x‖B(H)⊗min E = ‖[IdB(H) ⊗ u](x)‖B(H)⊗min Cσ∗(G), which shows that the restriction of u to E is completely isometric. By Proposition 9.7, u gives us an embedding of C∗(G) into (⊕π∈Ĝ0 B(Hπ))∞.
Remark 9.19 If A,B are residually finite dimensional C∗-algebras, then A ⊗min B is residually finite dimensional. Indeed, if A ⊂ (⊕i∈I Mn(i))∞ and B ⊂ (⊕j∈J Mm(j))∞ then
A ⊗min B ⊂ (⊕(i,j)∈I×J Mn(i) ⊗min Mm(j))∞ = (⊕(i,j)∈I×J Mn(i)m(j))∞.
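The last identification in Remark 9.19 rests on a standard isomorphism that may be worth recording explicitly (my addition):

\[
M_{n(i)} \otimes_{\min} M_{m(j)} \;\cong\; B\bigl(\mathbb{C}^{n(i)} \otimes \mathbb{C}^{m(j)}\bigr) \;\cong\; M_{n(i)m(j)},
\]

since the min (spatial) norm on a tensor product of matrix algebras is just the operator norm on the tensor product Hilbert space.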

Let us say for short that a state f on a C∗-algebra A is "a finite-dimensional state" (resp. "a finite-dimensional vector state") if there is a ∗-homomorphism π : A → B(H) on a finite-dimensional Hilbert space H and a state (resp. a vector state) F on B(H) such that f(a) = F(π(a)) for all a ∈ A. Using multiplicity, we observe that any finite-dimensional state is actually a finite-dimensional vector state: indeed, any state f on B(H) with dim(H) < ∞ is of the form f(x) = tr(a∗xa) = ⟨a, xa⟩S2(H) for some unit vector a ∈ S2(H), and hence it becomes a vector state when we embed B(H) in B(S2(H)) by left multiplication. Thus, for later reference, we state the next result only for vector states.
Proposition 9.20 A C∗-algebra A is residually finite dimensional if and only if any state on A is the pointwise limit of a net of finite-dimensional vector states.


In particular this holds for the states on C ⊗min C.
Proof Assume A RFD, so that (9.6) is an embedding. Let Hi = ℓ2^{n(i)} and H = ⊕i∈I Hi. Any state f on A extends to a state on B(H), which is a limit of normal states on B(H). The latter states are limits of states fγ in the convex hull of those of the form a ↦ ⟨ξ, π(a)ξ⟩ where ξ is a unit vector in H, which, after truncation and renormalization, we may assume to be all in ⊕i∈I(γ) Hi for some finite subset I(γ) ⊂ I. Then fγ is in the convex hull of states of the form a ↦ ⟨ξ, (⊕i∈I(γ) πi(a))ξ⟩. Since dim(⊕i∈I(γ) Hi) < ∞, each fγ is a finite-dimensional state. By the preceding observation, this proves the only if part. Conversely, for any a ∈ A we have ‖a‖² = ‖a∗a‖ = sup f(a∗a) where the sup is over all states. If any such f is the pointwise limit of finite-dimensional states we can restrict the sup to the latter, and then we find ‖a‖² ≤ sup ‖π(a∗a)‖ where the sup runs over all ∗-homomorphisms π : A → B(H) with H finite dimensional. Since the reverse inequality is obvious, we conclude that A is RFD. The last assertion follows from Theorem 9.18 and Remark 9.19.

9.3 WEP (Weak Expectation Property)
We defined the WEP for A by the equality A ⊗min C = A ⊗max C. We will now see that it is equivalent to a weak form of extension property (a sort of weakening of injectivity), which is the original and more traditional definition of the WEP. Let A ⊂ B(H) be a C∗-subalgebra. If A is injective, there is a completely contractive projection P : B(H) → A, satisfying the properties of a conditional expectation by Theorem 1.45. Recall that the weak* closure of A in B(H) is equal to A′′ by Theorem A.46. A unital c.p. mapping T : B(H) → A′′ (into the weak* closure of A) is called a weak expectation if T(a) = a for any a ∈ A. This concept goes back to Lance [165]. We will show that the WEP is equivalent to the existence of a weak expectation T : B(H) → π(A)′′ (the weak* closure of π(A)) for any H and any embedding π : A → B(H) (see Remark 9.23). But for our broader framework, it will be convenient to enlarge Lance's concept, as follows.
Definition 9.21 Let A ⊂ B be a C∗-subalgebra of another one. A linear mapping V : B → A∗∗ will be called a generalized weak expectation if ‖V‖ ≤ 1 and V(a) = a for any a ∈ A. If V(B) ⊂ A then V is a conditional expectation in the usual sense as in Theorem 1.45.


In Theorem 7.29 we already gave an important characterization of the inclusions that admit a generalized weak expectation, as those such that A ⊂ B is max-injective. We will show that the WEP of A is equivalent to the existence of a generalized weak expectation V : B → A∗∗ whenever A embeds in B. Indeed, this reduces to the case B = B(H ) treated in Theorem 9.31. Note that if the embedding A ⊂ B(H ) is the universal representation of A then weak∗ π(A) = A∗∗ (see §A.16) and hence the generalized notion of weak expectation coincides in this case with Lance’s original one. See Remark 9.32 for more on generalized weak expectations. Theorem 9.22 Let A ⊂ B(H ) be a C ∗ -algebra. The following are equivalent. (i) A has the WEP (i.e. (A,C ) is a nuclear pair). (ii) The inclusion A ⊂ B(H ) is max-injective. (iii) Any ∗-homomorphism u : A → M into a von Neumann algebra M extends to a completely positive and (completely) contractive mapping from B(H ) to M. (iii)’ Any ∗-homomorphism u : A → M into a von Neumann algebra M factors completely positively and (completely) contractively through B(H) for some H. (iv) The inclusion iA : A → A∗∗ factors completely positively and (completely) contractively through B(H) for some H. Proof (i) ⇒ (ii). Assume (i). Then A ⊗max C = A ⊗min C ⊂ B(H ) ⊗min C = B(H ) ⊗max C where the last equality is from Theorem 9.6. By (i) ⇔ (i)’ in Theorem 7.29, (ii) holds. Assume (ii). Then (iii) holds by Corollary 7.30. (iii) ⇒ (iii)’ is obvious, and using u = iA we see that (iii)’⇒(iv). Assume (iv). Consider a completely positive and (completely) contractive factorization iA : A → B(H) → A∗∗ . By Theorem 9.6 (recalling that by Corollary 7.8 contractive c.p. maps are (max → max)-tensorizing) we find a contractive factorization iA ⊗ IdC : A ⊗min C → B(H) ⊗min C = B(H) ⊗max C → A∗∗ ⊗max C and since iA is max-injective we conclude that A ⊗min C = A ⊗max C . In other words we obtain (i).
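For readability, here is the chain of contractions used in the step (iv) ⇒ (i) of the proof above, written out in LaTeX (writing \mathcal{C} for C = C∗(F∞) and \mathcal{H} for the Hilbert space of the factorization of iA; this is only a transcription of the maps already named in the proof):

\[
A \otimes_{\min} \mathcal{C}
\;\longrightarrow\;
B(\mathcal{H}) \otimes_{\min} \mathcal{C}
\;=\;
B(\mathcal{H}) \otimes_{\max} \mathcal{C}
\;\longrightarrow\;
A^{**} \otimes_{\max} \mathcal{C},
\]

where the middle equality is Kirchberg's Theorem 9.6 and the two arrows are the maps of the c.p. contractive factorization of iA tensorized with the identity of \mathcal{C}.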


Remark 9.23 (On Lance's WEP) Lance's definition of the WEP for a C∗-algebra is different but easily seen to be equivalent to ours. Lance [165] says that a ∗-homomorphism π : A → B(H) has the WEP if the von Neumann algebra it generates, i.e. the weak* closure of π(A), admits a weak expectation. He then says that A has the WEP if every faithful π has the WEP. We claim that Lance's WEP is the same as our WEP in Theorem 9.22 (i). Indeed, by (i) ⇒ (iii) in Theorem 9.22, A has Lance's WEP if it has our WEP (because if A has our WEP so does π(A) for any faithful π). Conversely, Lance's WEP applied to the embedding π : A ⊂ A∗∗ ⊂ B(H) (assuming A∗∗ embedded as a von Neumann algebra in B(H)) implies the existence of a weak expectation from B(H) to A∗∗ that is equal to π on A and hence by (iv) ⇒ (i) in Theorem 9.22 that A has our WEP.
Using the extension properties of B(H) described in Theorems 1.18 and 1.39, the following is an immediate consequence of (iv):
Corollary 9.24 If A is WEP then it has the following weak forms of injectivity into the bidual:
For any operator space X and any subspace E ⊂ X, any u ∈ CB(E,A) admits an "extension" ũ ∈ CB(X,A∗∗) with ‖ũ‖cb = ‖u‖cb such that ũ|E = iA u.
If E,X are operator systems, then any u ∈ CP(E,A) admits an "extension" ũ ∈ CP(X,A∗∗) with ‖ũ‖ = ‖u‖ such that ũ|E = iA u.
Remark 9.25 It is obvious that the second property in Corollary 9.24 characterizes WEP: just substitute to E ⊂ X the inclusion jA : A → B(H). That the first one also does is less obvious, but it will be shown by Theorem 23.7.
Corollary 9.26 A von Neumann algebra M is injective if and only if it has the WEP.
Proof If M has the WEP, then the identity of M factors through some B(H) as in (iii) in Theorem 9.22 (take A = M and u = IdM). Therefore M is injective. Conversely, if M is injective the identity of M factors completely positively and (completely) contractively through B(H), so (iv) in Theorem 9.22 (with A = M) follows immediately.
Recall that it is obvious by our definition that nuclear implies WEP, thus
{nuclear} ∪ {injective} ⊂ {WEP}.
By Theorem 8.16 we can deduce from Corollary 9.26:


Corollary 9.27 Let C = C ∗ (F∞ ). A C ∗ -algebra A is nuclear if and only if C ⊗min A∗∗ = C ⊗max A∗∗, i.e. if and only if the pair (C ,A∗∗ ) is nuclear. Corollary 9.28 A C ∗ -algebra A is both WEP and locally reflexive if and only if it is nuclear. Proof Assume that A has the WEP. By our definition of the WEP, C ⊗min A = C ⊗max A and hence (C ⊗min A)∗∗ = (C ⊗max A)∗∗ . If A is locally reflexive, then C ⊗min A∗∗ → (C ⊗min A)∗∗  = 1, and hence C ⊗min A∗∗ → (C ⊗max A)∗∗  = 1. By Theorem 8.22 we have C ⊗min A∗∗ → C ∗∗ ⊗bin A∗∗  = 1. But the norm induced on C ⊗A∗∗ by the bin-norm on C ∗∗ ⊗A∗∗ coincides with the nor-norm (see Remark 8.4). Therefore we have C ⊗min A∗∗ → C ⊗nor A∗∗  = 1. Since by Theorem 9.10 C ⊗nor A∗∗ = C ⊗max A∗∗ , we conclude that C ⊗min A∗∗ → C ⊗max A∗∗  = 1, and hence A is nuclear by Corollary 9.27. The converse is immediate since nuclear implies locally reflexive by Theorem 8.29. Corollary 9.29 If the reduced C ∗ -algebra Cλ∗ (G) of a discrete group G has the WEP then G is amenable. Proof Assume that Cλ∗ (G) has the WEP. Let M ⊂ B(2 (G)) be as usual the von Neumann algebra generated by Cλ∗ (G). Let T : B(2 (G)) → M be the completely positive contraction extending the inclusion Cλ∗ (G) → M, given by (iii) in Theorem 9.22. By Corollary 5.3, T is Cλ∗ (G)-bimodular. This implies in particular that T (λ(g)xλ(g)∗ ) = λ(g)T (x)λ(g)∗ for any x ∈ B(2 (G)) and any g ∈ G. For any x ∈ ∞ (G), let Dx ∈ B(2 (G)) denote the (diagonal) operator of multiplication by x. Note that λ(g)Dx λ(g)∗ = Dδg ∗x . Let τG : M → C be defined by τG (a) = δe,aδe . It is easy to check that τG (λ(g)aλ(g)∗ ) = τG (a) for any a ∈ M and g ∈ G (in other words τG is a trace on M in the sense of §11.1). Let ϕ ∈ ∞ (G)∗+ be the functional defined by ϕ(x) = τG (T (Dx )) = δe,T (Dx )δe . Then we have ϕ(δg ∗ x) = τG (T (λ(g)Dx λ(g)∗ )) = τG (λ(g)T (Dx )λ(g)∗ ) = τG (T (Dx )) = ϕ(x). Thus ϕ is an invariant mean on G.


Proposition 9.30 For any separable C∗-algebra A ⊂ B(H) there is a separable unital C∗-algebra A1 ⊂ B(H) with the WEP such that A ⊂ A1.
Proof This follows directly from Proposition 7.24 and Remark 7.25 applied with B = C and A = B(H).
Now that we know by Theorem 9.22 that A has the WEP if and only if the inclusion A ⊂ B(H) is max-injective, let us review what Theorem 7.29 tells us about it.
Theorem 9.31 (On weak expectations) Let A ⊂ B(H) be a C∗-algebra and let A∗∗ ⊂ B(H)∗∗ be the embedding obtained by bitransposition. The following are equivalent.
(i) A has the WEP.
(ii) There is a contractive linear map V : B(H) → A∗∗ such that V(x) = x for any x ∈ A (in other words, V is a generalized weak expectation for A ⊂ B(H)).
(iii) There is a projection P : B(H)∗∗ → A∗∗ with ‖P‖ = 1.
Remark 9.32 (Comparing weak expectations and projections) We repeatedly use the observation recorded in Proposition A.50 that, assuming A ⊂ B, A,B being here merely Banach spaces, a linear map V : B → A∗∗ is a generalized weak expectation if and only if V̈ : B∗∗ → A∗∗ is a contractive projection. It can be shown (by a fairly easy application of the Hahn–Banach theorem) that such a V exists if and only if the natural map A ⊗̂ C → B ⊗̂ C between the projective tensor products is isometric for any Banach space C, and actually it suffices to have this for C = A∗. This kind of duality argument can be generalized to treat the case when V is not necessarily a contraction. It can also be checked easily that the natural map A ⊗̂ C → A∗∗ ⊗̂ C is isometric for any Banach space C. In analogy with the latter facts, in the C∗-algebra case, we showed in Theorem 7.29 that a generalized weak expectation exists if and only if the inclusion A ⊂ B is max-injective, and in Corollary 7.27 that iA : A → A∗∗ is always max-injective.
Remark 9.33 (Warning on a trap) To avoid possible errors, we emphasize that (in sharp contrast with the analogue for injectivity) the existence of an embedding u : A∗∗ ⊂ B(H)∗∗ admitting a c.p. contractive projection P : B(H)∗∗ → A∗∗ does not in general imply the WEP for A. Indeed, we will show in Theorem 9.72 that this holds if and only if A is QWEP. For the WEP to hold it is essential to assume in addition that u = v∗∗ for some v : A → B(H). Then we effectively can conclude that v is max-injective so A has the WEP by Theorem 9.22.

193

We now turn to the stability of the WEP under infinite direct sums in the sense of ∞ . ∗ Proposition  9.34 For any family {Ai | i ∈ I } of WEP C -algebras the direct sum ⊕ i∈I Ai ∞ also has the WEP.

Proof With the notation in (9.2), for any t ∈ C ⊗A we have by (9.2) and (1.14) tC ⊗max A = supi∈I ti C ⊗max Ai = supi∈I ti C ⊗min Ai = tC ⊗min A, which means that A has the WEP. Remark 9.35 Thus if C denotes as usual the full C ∗ -algebra of F∞ , this means that if (A i ,C ) is nuclear for any i then (A,C ) is nuclear where  A = ⊕ i∈I Ai ∞ . However, this is not true if we replace C by an arbitrary is not preserved by infinite direct sums of the C ∗ algebra. Indeed, nuclearity

  type A = ⊕ i∈I Ai ∞ . For example B = (⊕ n≥1 Mn )∞ is not nuclear (see Corollary 18.11). We refer to [189, Lemma 3.2] for a different proof of Proposition 9.34 based on the following purely Banach space result, which as its corollary, is of independent interest. Theorem 9.36 Let B be any Banach space and let A ⊂ B be a closed subspace. Then the following are equivalent: (i) There is a projection P : B ∗∗ → A∗∗ with P  = 1. (ii) For any ε > 0 and any finite-dimensional subspace E ⊂ B there is a linear map ϕ : E → A with ϕ ≤ 1 + ε such that ∀a ∈ E ∩ A

ϕ(a) = a.

Corollary 9.37 Let {Bi | i ∈ I} be a family of Banach spaces. Let {Ai | i ∈ I} be a family of subspaces, with Ai ⊂ Bi for each i ∈ I, such that there is a projection Pi : Bi∗∗ → Ai∗∗ with ‖Pi‖ = 1. Then there is a projection
P : [(⊕i∈I Bi)∞]∗∗ → [(⊕i∈I Ai)∞]∗∗
with ‖P‖ ≤ 1.
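The text derives Proposition 9.34 from this corollary but leaves the corollary itself to the reader; one possible deduction from Theorem 9.36, sketched here under the assumption that the direct sums are the ℓ∞-sums used throughout this chapter, runs as follows. Given ε > 0 and a finite-dimensional subspace E of (⊕i∈I Bi)∞, apply (i) ⇒ (ii) of Theorem 9.36 to each pair Ai ⊂ Bi and the finite-dimensional space pi(E) ⊂ Bi (pi denoting the coordinate projection); this yields ϕi : pi(E) → Ai with ‖ϕi‖ ≤ 1 + ε and ϕi(a) = a for a ∈ pi(E) ∩ Ai. Setting

\[
\varphi := (\varphi_i \circ p_i)_{i\in I} : E \longrightarrow \bigl(\oplus_{i\in I} A_i\bigr)_{\infty},
\qquad \|\varphi\| \le 1+\varepsilon,
\]

one checks that ϕ(a) = a for every a ∈ E ∩ (⊕i∈I Ai)∞, so (ii) ⇒ (i) of Theorem 9.36, applied to (⊕i∈I Ai)∞ ⊂ (⊕i∈I Bi)∞, gives the projection P.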

9.4 LLP (Local Lifting Property)
In Banach space theory, the "lifting property" of ℓ1 is classical: for any bounded linear map u from ℓ1 into a quotient Banach space X/Y and for any ε > 0, there is a lifting ũ : ℓ1 → X with ‖ũ‖ ≤ (1 + ε)‖u‖. Moreover, the so-called L1-spaces satisfy a local variant of this. There are analogues of this


lifting property for C ∗ -algebras and operator spaces (see [208, ch. 16]). We return to this in §9.5 and §21.2. But in this chapter we concentrate on the local variant called LLP, which is better understood. We show next that indeed the LLP as defined previously in Definition 9.12 is equivalent to a certain “local” lifting property. Recall that linear maps with values in a quotient A/I (of a C ∗ -algebra A by an ideal I) that locally c-lift were defined in Definition 7.46. Theorem 9.38 (Local lifting property) The following properties of a C ∗ -algebra C are equivalent: (i) C has the LLP i.e. the pair (C,B) is nuclear. (ii) Any u in the unit ball of D(C,A/I) is locally 1-liftable for any quotient A/I. (iii) Any u ∈ CP(C,A/I) with u ≤ 1 is locally 1-liftable for any quotient A/I. (iv) Any ∗-homomorphism u : C → A/I is locally 1-liftable for any quotient A/I. Proof Assume (i). Let u ∈ D(C,A/I) with udec ≤ 1. By (6.13) for any C ∗ -algebra B we have IdB ⊗ u : B ⊗max C → B ⊗max (A/I) ≤ 1

(9.7)

and by (7.6) (with the roles of A,B interchanged) B ⊗max (A/I) = (B ⊗max A)/(B ⊗max I), and hence a fortiori we have a “canonical” ∗-homomorphism of norm ≤ 1 B ⊗max (A/I) → (B ⊗min A)/(B ⊗min I). By definition of the LLP, we have B ⊗max C = B ⊗min C. Thus taking B = B in (9.7), we find IdB ⊗ u : B ⊗min C → (B ⊗min A)/(B ⊗min I) ≤ 1 which means, by Proposition 7.48, that u is locally 1-liftable. Thus (i) ⇒ (ii). (ii) ⇒ (iii) ⇒ (iv) are trivial. It remains to show that (iv) ⇒ (i). Assume (iv). Consider t ∈ C ⊗ B. We can assume t ∈ E ⊗ B with E ⊂ C finite dimensional. Let F be a free group such that C  C ∗ (F)/I for some ideal I ⊂ C ∗ (F) (see Proposition 3.39). Let us denote A = C ∗ (F), let q : A → A/I  C be the quotient map, and let u : C → A/I be the identity mapping (here we identify C with A/I) so that u|E : E → A/I is the natural inclusion. By assumption (iv) u|E admits a completely contractive lifting v : E → A, so that qv = u|E . We have then t = (q ⊗ IdB )(v ⊗ IdB )(t) and hence


tC⊗max B ≤ (v ⊗ IdB )(t)A⊗max B = (v ⊗ IdB )(t)A⊗min B (by (9.5)) ≤ tE⊗min B = tC⊗min B (by (1.8)). Hence we have proved (iv) ⇒ (i). Alternative proof: Combine Remark 7.41 and (iii)’ ⇔ (i) in Corollary 7.50. See Corollary 9.47 for more on the local liftings in points (iii) and (iv) of Theorem 9.38. Remark 9.39 (On the OLLP) The reader is probably wondering why we do not include c.b. maps in the list of mappings from C to A/I appearing in Theorem 9.38. The reason is the corresponding property of C is a much stronger one called the OLLP. An operator space X is said to have the OLLP if its universal C ∗ -algebra Cu∗ X (in the sense of §2.7) has the LLP. The OLLP is studied in detail by Ozawa in [186]. See also [208, p. 278]. The spaces 3∞ or Mn for n ≥ 3 are the simplest examples of C ∗ -algebras with the LLP but failing the OLLP. Thus, when C is one of these, it is not true that any u in the unit ball of CB(C,A/I) is locally liftable. We now come to the general form of Kirchberg’s Theorem 9.6: Corollary 9.40 (Generalized Kirchberg Theorem) For any LLP C ∗ -algebra C and any WEP C ∗ -algebra B, the pair (C,B) is nuclear. Proof Assume C LLP and B WEP, to prove that (C,B) is nuclear we simply invoke (9.4) and repeat the reasoning for (iv) ⇒ (i) in Theorem 9.38 but with B in place of B. Corollary 9.41 Let A,B,C be C ∗ -algebras with C = A/I. Assume that C has the LLP. If (A,B) is nuclear, then (C,B) is nuclear. In particular, if a QWEP C ∗ -algebra C has the LLP then it has the WEP. Proof Since the identity on C = A/I is locally 1-liftable (with respect to A → A/I), Proposition 7.48 implies that B ⊗min C → (B ⊗min A)/(B ⊗min I) is well defined and of norm 1 (and hence is an isomorphism). If (A,B) is nuclear (B ⊗min A)/(B ⊗min I) = (B ⊗max A)/(B ⊗max I) = B ⊗max C where at the last step we used (7.6). Thus (C,B) is nuclear. The second assertion corresponds to the case when A has the WEP and B = C.


It is important to emphasize that the local liftings considered in Theorem 9.38 are all for maps defined on a whole C ∗ -algebra C. In the next statement the maps that we want to lift are only defined on a finite-dimensional operator subspace or system E ⊂ C and in general they do not contractively extend to C. Proposition 9.42 (On approximate liftings) Let C be a unital C ∗ -algebra C, E ⊂ C a finite-dimensional linear subspace and u : E → A/I a linear operator. Let c ≥ 0 be a constant. (i) Assume that u : E → A/I admits a lifting v1 : E → A with v1  ≤ c and another one v2 : E → A that is c.p. Then, for any ε > 0, there is v ∈ CP(E,A) with v ≤ c such that qv − u ≤ ε. (ii) Assume that E is a (finite-dimensional) operator system, that A is unital, that u ∈ CP(E,A/I) is unital and admits a lifting v1 : E → A with v1 cb ≤ 1. Then, for any ε > 0, there is a unital v ∈ CP(E,A) such that qv − u ≤ ε. Proof (i) Let σi be a quasi-central approximate unit as in §A.15. For any w : E → A we denote w i (x) = (1−σi )1/2 w(x)(1−σi )1/2 . By (A.19) we have (1 − σi )w − wi → 0 pointwise on E, and hence q(w − w i ) → 0 pointwise on E. This gives us that u − qvi2 = q(v2 − v2i ) → 0 pointwise on E. Since dim(E) < ∞ pointwise convergence implies norm convergence, and hence u − qvi2  → 0.

(9.8)

Moreover, if w takes all its values in the ideal I, then wi → 0 pointwise on E, and hence wi  → 0. In particular this holds for the mapping w = v2 − v1 : E → I. Then v2i = v1i + w i , v2i  ≤ c + w i , and v2i is c.p. Let v = v2i c(c + w i )−1 so that v ≤ c and v − v2i  ≤ w i . Since u = qv2 we have qv − u ≤ qv − qv2i  + qvi2 − u ≤ w i  + qvi2 − u. Thus we can choose i large enough so that qv − u ≤ ε. (ii) We first observe that by replacing v1 by (v1 + v1∗ )/2 we may assume ∗ so that f (1) = 1 (i.e. f is a state on that v1 is self-adjoint. Choose any f ∈ C+ C). Let wi : E → A be defined by wi (x) = v1i (x) + f (x)σi , so that qwi = qvi1 and wi (1) − 1 = (1 − σi )1/2 (v1 (1) − 1)(1 − σi )1/2 . Since qv1 (1) = u(1) = 1, we know 1 − v1 (1) ∈ I, and hence 1 − wi (1) → 0. Note that


wi(x) = ( (1 − σi)^{1/2}   σi^{1/2} ) ( v1(x)   0 ; 0   f(x)1 ) ( (1 − σi)^{1/2} ; σi^{1/2} )
(a 1×2 row, times the 2×2 diagonal matrix with entries v1(x) and f(x)1, times the corresponding 2×1 column),

and hence wi cb ≤ max{v1 cb,f } ≤ 1. Fix δ > 0 (to be specified). By (9.8) we can choose i far enough so that qvi1 − u ≤ δ, and also 1 − wi (1) ≤ δ. We now set ϕ(x) = wi (x) + f (x)(1 − wi (1)) so that ϕ is self-adjoint, ϕ(1) = 1 and ϕ − wi cb ≤ δ, and hence ϕcb ≤ wi cb + δ ≤ 1 + δ. Let n = dim(E). By Theorem 2.28, there is a unital v ∈ CP(E,A) such that ϕ − vcb ≤ 8nδ. Note that q(wi − v1i ) = 0 and hence q(ϕ − v1i ) = q(ϕ − wi ) ≤ ϕ − wi  ≤ δ. Therefore qv − u ≤ q(v − ϕ) + q(ϕ − v1i ) + qv1i − u ≤ 8nδ + 2δ. Choosing δ so that 8nδ + 2δ < ε, we obtain v with the desired property. Remark 9.43 Actually the preceding proof in part (ii) works as well if we merely assume that u admits a family of liftings vi ∈ CB(E,A) such that infi∈I vi cb = 1. To end this section we sketch a proof of the stability of the LLP under free products, generalizing the fact that C ∗ (F2 ) = C ∗ (Z) ∗ C ∗ (Z) has the LLP, that we saw in Theorem 9.6. Theorem 9.44 Let C1,C2 be unital C ∗ -algebras with the LLP. Then C1 ∗ C2 has the LLP. Proof Let E ⊂ C1 ∗ C2 be the linear span of {a1 a2 | (a1,a2 ) ∈ C1 × C2 }. By Theorem 9.8 applied with E1 = A1 = B and E2 = E with A2 = C1 ∗C2 , it suffices to prove that for any n ≥ 1 and t ∈ Mn (B⊗E) with tMn (B ⊗min E) ≤ 1 we have tMn (B ⊗max (C1 ∗C2 )) ≤ 1. Since Mn (B)  B, it suffices to prove this for n = 1, i.e. for t ∈ B ⊗ E with tmin ≤ 1. By Lemma 2.22 we can factorize t as t = t1  t2 with tj ∈ B⊗Cj in the unit ball of B⊗min Cj (j = 1,2). Let π : B → B(H ) and σ : C1 ∗ C2 → B(H ) be ∗-homomorphisms with commuting ranges. Equivalently, the restrictions σj = σ|Cj : Cj → B(H ) (j = 1,2) have range in π(B) . Let sj = (π .σj )(tj ) ∈ B(H ). Then sj  ≤ tj max (j = 1,2), and by the assumed commutations we have (π .σ )(t) = (π .σ )(t1  t2 ) = s1 s2 , and hence (π .σ )(t) ≤ s1 s2  ≤ t1 max t2 max . If C1 and C2 have the LLP this implies (π .σ )(t) ≤ 1 and hence tmax ≤ 1.


Remark 9.45 We can now show that (7.25) and (7.26) are independent of the choice of C1,C2 as long as they have the LLP. Let D1 be another C ∗ -algebra with LLP admitting a surjective morphism r1 : D1 → A1 = D1 / ker(r1 ). Consider v : A∗2 → C1 as in (7.25). Let E ⊂ C1 be the (finite-dimensional) range of v. By Theorem 9.38 the LLP of C1 implies that the map from C1 to D1 / ker(r1 ) is locally 1-liftable. So there is a lifting w : E → D1 such that r1 w(e) = e for all e ∈ E and wcb = 1. Then s = wv : A∗2 → D1 is a (weak* continuous with finite rank) lifting of ut , with scb ≤ vcb . Since we can reverse the roles of D1 and C1 , this shows that the norm in (7.25) is the same if we compute it using C1,q1 or using D1,r1 . Reasoning as we just did for (7.25) one can show that (7.26) does not depend on the choice of either C1 or C2 , as long as both have the LLP.

9.5 To lift or not to lift (global lifting) In this section we will prove several global lifting theorems, notably the wellknown Choi–Effros lifting Theorem [49], for c.p. maps defined on nuclear C ∗ -algebras taking values in a quotient C ∗ -algebra. We return to this theme in the later §21.2 where we will formally introduce and discuss briefly the lifting property (LP). We will also explain there why if Kirchberg’s conjecture holds then LLP ⇒ LP in the separable case. This is based on Theorem 9.46 (which heads this section) and the fact that WEP implies a certain restricted form of extension property (to be proved in §21.2): for any finite-dimensional subspace E ⊂ C of an LLP C ∗ -algebra C, and any ε > 0 any u ∈ CB(E,W ) from E to a space W with WEP admits an extension  u ∈ CB(C,W ) with  ucb ≤ (1 + ε)ucb . Our first statement, due to Arveson, that says that nicely liftable maps on separable spaces are stable by pointwise limits is a priori surprising: one would expect a stronger limit to be required for this to hold. Roughly, the idea of the proof is similar to that of Lemma A.33. Theorem 9.46 (Pointwise limits of liftables are liftable) Let E be a separable operator space. Let I ⊂ A be an ideal in a C ∗ -algebra and let q : A → A/I be the quotient map. Consider a bounded linear map u : E → A/I. Assume that there is a net of complete contractions vγ : E → A such that qvγ → u pointwise on E. Then: (i) The map u admits a completely contractive lifting v : E → A, i.e. we have vcb ≤ 1 and qv = u. (ii) If E is a (separable) operator system and if u and all the vγ’s are c.p. (resp. unital and c.p.) then we can find a c.p. (resp. unital and c.p.) lifting v : E → A with vcb ≤ 1.


Proof Let {xk} be a dense sequence in the unit ball of E. Assume given a complete contraction (c.c. in short) wn such that
‖qwn xk − uxk‖ < 2^{-n}   ∀k = 1, . . . , n.   (9.9)

Moreover, if u is c.p. we assume that wn is c.p. We claim there is a map wn+1 : E → A (c.p. if u and wn are c.p.) with ‖wn+1‖cb ≤ 1 such that
‖qwn+1 xk − uxk‖ < 2^{-n-1}   ∀k = 1, . . . , n + 1   (9.10)

∀k = 1, . . . ,n.

(9.11)

and (wn+1 − wn )xk  < 2−n+1

Taking this for granted, we may construct by induction a sequence (wn ) satisfying (9.10) and (9.11) for any n. Then v(x) = lim wn (x) is the desired lifting. Indeed, wn (x) is Cauchy for any x in {x1,x2, . . .}. Therefore v(x) = limn wn (x) exists and satisfies qv(x) = u(x). Since the norms wn  are uniformly bounded (by 1), this still holds for any x ∈ E. We have vcb ≤ limn wn cb ≤ 1, and in the c.p. case v is c.p. as a limit of c.p. maps. Thus it suffices to prove the claim. Let wn be as in the claim. Going far enough in the net (vγ ), we can find v : E → A (that is also c.p. in the c.p. case) with vcb ≤ 1 such that qvxk − uxk  < 2−n−2

∀k = 1, . . . ,n + 1.

(9.12)

Let (σi ) be an approximate unit in I as in §A.15. For a suitable choice of i (to be specified later on) we will let 1/2

wn+1 (x) = σi

1/2

wn (x)σi

Note that since wn+1 (x) =

1/2 [σi

(1 − σi )

1/2

+ (1 − σi )1/2 v(x)(1 − σi )1/2 . # w (x) ] n 0

0 v(x)



%$1/2

&

σi (1 − σi )1/2

we have wn+1 cb ≤ max{wn cb,vcb } ≤ 1. Moreover, in the c.p. case, wn+1 is also c.p. By (A.23), for any given x and ε > 0, i can be chosen large enough so that wn+1 (x) − [σi wn (x) + (1 − σi )v(x)] < ε

(9.13)

and hence (since σi wn (x) − σi v(x) ∈ I) we have q(wn+1 (x) − v(x)) < ε, which implies qwn+1 (x) − u(x) < ε + qv(x) − u(x).

200

Nuclear pairs, WEP, LLP, QWEP

So we can choose i large enough so that qwn+1 (xk ) − u(xk ) < ε + 2−n−2

∀k = 1, . . . ,n + 1.

Moreover, using wn+1 (x) − wn (x) = wn+1 (x) − [σi wn (x) + (1 − σi )wn (x)] we find by (9.13) wn+1 (x) − wn (x) < ε + (1 − σi )[v(x) − wn (x)] hence for i large enough, by (A.20) we can ensure that wn+1 (x) − wn (x) < 2ε + q[v(x) − wn (x)] < 2ε + qv(x) − u(x) + qwn (x) − u(x). Thus if we now make this last choice of i valid for any x in {x1, . . . ,xn } and take ε = 2−n−2 we obtain the announced estimates (9.10) and (9.11) for wn+1 (recalling (9.12) and (9.9)). Lastly, the same proof yields the unital case. We can now add a complement to Definition 7.46 concerning locally liftable unital c.p. maps. Corollary 9.47 Let C be a unital operator system and let u : C → A/I be locally 1-liftable. If u is unital and c.p. then for any finite-dimensional operator system E ⊂ C there is a unital c.p. map v : E → A such that qv = u|E . Proof By (ii) in Proposition 9.42 applied to u|E , for any ε > 0 there is a unital vε ∈ CP(E,A) such that qvε − u|E  ≤ ε. By (ii) in Theorem 9.46, there is a unital v ∈ CP(E,A) such that qv = u|E . Definition 9.48 Let λ ≥ 1. We say that an operator space E ⊂ B(H ) has the λ-completely bounded approximation property (in short λ-CBAP) if there is a net of finite rank maps ui : E → E with sup ui cb ≤ λ tending pointwise to the identity of E. We say that E has the CBAP if it satisfies this for some 1 ≤ λ < ∞. Corollary 9.49 Let E be a separable operator space. Assume that E has the λ-CBAP for some λ ≥ 1. Then any locally c-liftable u ∈ CB(E,A/I) admits a (global) lifting v ∈ CB(E,A) with vcb ≤ cλ. Proof Let (ui ) be as in Definition 9.48. Let Ei be any finite-dimensional operator space such that Ei ⊃ ui (E). Since u locally c-lifts (see Definition 7.46), there is wi ∈ CB(Ei ,A) with wi cb ≤ c such that qwi = u. Then wi ui lifts uui and wi ui cb ≤ cλ. Clearly qwi ui = uui tends to u pointwise. Then the net vi = (cλ)−1 wi ui is formed of complete contractions such that qvi → (cλ)−1 u pointwise. Thus Theorem 9.46 gives us the conclusion.

9.5 To lift or not to lift (global lifting)

201

Remark 9.50 There are purely Banach space analogues of Corollary 9.49. See [260] for a survey. For instance, for any fixed n ≥ 1, there is an analogue for which the cb-norm of a map u is replaced everywhere by the norm of un = IdMn ⊗ u. In the case n = 1 this reduces to the Banach space case. More explicitly, using un  in place of ucb in the definition of locally c-liftable maps, we get the definition of a locally (c,Mn )-liftable map u : E → A/I. Similarly, we define the Mn -BAP with constant λ > 0. Then if E is separable with the Mn -BAP with constant λ > 0, any locally (c,Mn )-liftable map u : E → A/I admits a (global) lifting v ∈ B(E,A) with vn  ≤ cλ. This follows by a cosmetic adaptation of the proofs of Theorem 9.46 and Corollary 9.49. Corollary 9.51 Let E be a separable operator system. If E has the CPAP (in the sense of Definition 4.8), in particular if dim(E) < ∞, then any unital and locally 1-liftable u ∈ CP(E,A/I) admits a (global) lifting v ∈ CP(E,A) with v = u. Proof Let (ui ) be a net of finite rank c.p. maps tending pointwise to IdE . Recalling (1.27), we may assume u = ucb = 1. Let Ei be any finitedimensional operator system such that Ei ⊃ ui (E). Since u is locally 1-liftable (see Definition 7.46), by (ii) in Proposition 9.42 (or by Corollary 9.47) for any ε > 0 there is wi ∈ CP(Ei ,A) with wi  ≤ 1 such that qwi − u|Ei  < ε. Then the composition vi = wi ui satisfies qvi −uui  = (qwi −u|Ei )ui  < ε. Since uui → u pointwise (and ε is arbitrary), we can arrange so that the net (vi ) is such that qvi → u pointwise on E. Then Theorem 9.46 allows us to conclude. Remark 9.52 Actually it suffices that u itself be approximable in the following sense: it is enough to assume that u is the pointwise limit of a net of finite rank maps ui ∈ CP(E,E) completely positively and completely contractively factorized through some Mn(i) . To check this, one uses the lifting property of the Mn(i) ’s (see Remark 9.54). We can now deduce the celebrated Choi–Effros lifting theorem. Theorem 9.53 (Choi–Effros lifting theorem) Let C be a unital separable nuclear C ∗ -algebra. Then any unital c.p. map u : C → A/I (into an arbitrary unital quotient C ∗ -algebra) admits a c.p. lifting v : C → A with v = u. Proof Since C is nuclear, it has the LLP and (by Corollary 7.12) the CPAP, so by (i) ⇔(iii) in Theorem 9.38 this can be viewed as a particular case of the preceding Corollary.

202

Nuclear pairs, WEP, LLP, QWEP

Remark 9.54 (Lifting property of Mn ) In particular, the preceding Theorem shows that any unital c.p. map u : Mn → A/I admits a unital c.p. lifting v : Mn → A. We wish to emphasize again (see Remark 9.39) that the c.b. variant of the preceding lifting theorem is generally not valid. In general one cannot lift a complete contraction u : Mn → A/I to a complete contration v : Mn → A.

9.6 Linear maps with WEP or LLP It might clarify certain features of the theory to consider the WEP and the LLP for linear maps, whence the following definitions, where we restrict for simplicity to the unit ball. Definition 9.55 Let E ⊂ B(H ) be an operator space and B a C ∗ -algebra. Let us say that a linear mapping u : E → B is WEP if IdC ⊗ u : C ⊗min E → C ⊗max B ≤ 1. The following is a corollary of Theorem 7.4. Corollary 9.56 In the situation of Definition 9.55, the following are equivalent: (i) u is WEP. (ii) For some H , there are v ∈ CB(E,B(H )) and w ∈ D(B(H ),B ∗∗ ) such that wv = iB u and vcb wdec ≤ 1. Proof We have isometrically C ⊗min E ⊂ C ⊗min B(H ) and C ⊗min B(H ) = C ⊗max B(H ) by Theorem 9.6. Thus we have isometrically C ⊗min E ⊂ C ⊗max B(H ), so this is indeed an immediate consequence of Theorem 7.4 and Remark 7.5. This allows us to formulate the interesting variant of Theorem 9.22 that folllows. Theorem 9.57 Let u : A → B be a unital c.p. map between unital C ∗ algebras. The following are equivalent. (i) u is WEP. (ii) The mapping iB u : A → B ∗∗ factorizes completely positively and contractively through some B(H ), i.e. there are v ∈ CP(A,B(H )) and w ∈ CP(B(H ),B ∗∗ ) such that wv = iB u and vw ≤ 1. (iii) For some H , there are v ∈ CB(A,B(H )) and w ∈ CP(B(H ),B ∗∗ ) such that wv = iB u and vcb w ≤ 1.


Proof Assume (i). Assume A ⊂ B(H ). By Corollary 9.56 u admits an extension w : B(H ) → B ∗∗ with wdec ≤ 1. Since w(1) = u(1) = 1 and wcb ≤ wdec ≤ 1 (see (6.6)) w must be c.p. by Theorem 1.35. Thus we obtain (ii). (ii) ⇒ (iii) is obvious. Assume (iii). We have by (4.30) IdC ⊗ iB u : C ⊗min A → C ⊗max B ∗∗  ≤ wIdC ⊗ v : C ⊗min A → C ⊗max B(H ). By Theorem 9.6 this is = wIdC ⊗ v : C ⊗min A → C ⊗min B(H ) and by (1.8) the latter is ≤ wvcb ≤ 1. Lastly, since B ⊂ B ∗∗ is maxinjective (see Proposition 7.26), we have IdC ⊗ iB u : C ⊗min A → C ⊗max B ∗∗  = IdC ⊗ u : C ⊗min A → C ⊗max B, and we obtain (i). To emphasize the parallelism WEP/LLP, we also introduce the LLP for linear maps. Definition 9.58 Let E ⊂ B(H ) be an operator space and B a C ∗ -algebra. Let us say that a linear mapping u : E → B is LLP if IdB ⊗ u : B ⊗min E → B ⊗max B ≤ 1. From this definition it is clear that u : E → B is LLP if and only if for any finite-dimensional subspace E1 ⊂ E the restriction u|E1 : E1 → B is LLP. Thus the assumption of finite dimensionality in the next result is not too restrictive. Proposition 9.59 Let E ⊂ B(H ) be finite dimensional and let u : E → B be a linear map into a C ∗ -algebra B. The following are equivalent: (i) u is LLP. (ii) There are v ∈ CB(E,C ) and w ∈ D(C ,B) such that wv = u and vcb wdec ≤ 1. Proof Assume (i). Remark 7.49 and Theorem 7.48 show that (ii) holds with C replaced by C ∗ (F) where F is a large enough free group so that B = C ∗ (F)/I for some ideal I ⊂ C ∗ (F). Then we can replace C ∗ (F) by C using Remark 3.6. Conversely, assume (ii). Since the fundamental pair (B,C ) is nuclear, (i) is easily derived using (3) for v and (6.13) for w.


9.7 QWEP
We start with two consequences of Theorem 7.33 for QWEP C∗-algebras.
Corollary 9.60 Let A,B be C∗-algebras, with B unital, for which there exists ϕ ∈ CP(A,B) such that ϕ(BA) = BB. If a C∗-algebra C is such that (A,C) is a nuclear pair, the same is true for the pair (Dϕ,C). In particular, if A is WEP, or merely QWEP, then B is QWEP.
Proof The first assertion is obvious (by Remark 7.20). Thus, by our definition of WEP, if A is WEP, Dϕ is WEP and hence B is QWEP. If A is a quotient of a WEP C∗-algebra, we may compose with the quotient mapping, then we are reduced to the case when A is WEP.
For further reference, we spell out a particular case:
Corollary 9.61 Let D ⊂ A be a C∗-subalgebra for which there is a c.p. projection P : A → D with ‖P‖ = 1. If A is QWEP, then D is QWEP.
We now turn to the stability properties of the class of QWEP C∗-algebras. We first state an immediate consequence of Proposition 9.34.
Corollary 9.62 For any family {Ai | i ∈ I} of QWEP C∗-algebras the direct sum (⊕i∈I Ai)∞ also is QWEP.

Remark 9.63 Let {Ai | i ∈ I} be a family, directed by inclusion, of C∗-subalgebras of B(H) and let A ⊂ B(H) be the norm closure of their union. If all the Ai's are WEP then A is WEP. Indeed, it suffices to show that ‖t‖max = ‖t‖min for any t ∈ (∪Ai) ⊗ C. But then t ∈ Ai ⊗ C for some i and hence ‖t‖A⊗max C ≤ ‖t‖Ai⊗max C = ‖t‖Ai⊗min C = ‖t‖A⊗min C.
Proposition 9.64 Let D ⊂ D1 be a C∗-subalgebra of a quotient one D1 = A1/I. Let q : A1 → D1 be the quotient map. Let A = q⁻¹(D) so that I = q⁻¹({0}) ⊂ A ⊂ A1 and D = A/I. If the inclusion D ⊂ D1 is max-injective, then the inclusion A ⊂ A1 is also max-injective.
Proof Note I = ker(q) = ker(q|A). Let C be any C∗-algebra. We must show that the natural map J : C ⊗max A → C ⊗max A1 is injective. Let x ∈ C ⊗max A be in its kernel. Then, if D ⊂ D1 is max-injective, (IdC ⊗ q|A)(x) must vanish in C ⊗max D, as the following commuting square shows: J maps C ⊗max A to C ⊗max A1 along the top row, the vertical maps are IdC ⊗ q|A and IdC ⊗ q, and the bottom row is C ⊗max D → C ⊗max D1.
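A LaTeX version of the square just described (a plain array, using amsmath's \xrightarrow; the vertical arrows are IdC ⊗ q|A on the left and IdC ⊗ q on the right):

\[
\begin{array}{ccc}
C \otimes_{\max} A & \xrightarrow{\;J\;} & C \otimes_{\max} A_{1}\\
\downarrow & & \downarrow\\
C \otimes_{\max} D & \longrightarrow & C \otimes_{\max} D_{1}
\end{array}
\]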


Therefore by (7.6), x ∈ C ⊗max I, but since I is an ideal in A1 , I ⊂ A1 is max-injective, and hence J restricted to C ⊗max I (which obviously coincides with the inclusion C ⊗max I → C ⊗max A1 ) is injective. Therefore x = 0. This proves the proposition. Corollary 9.65 Let D ⊂ D1 be a C ∗ -subalgebra of a QWEP C ∗ -algebra D1 . If the inclusion D ⊂ D1 is max-injective, then D is QWEP. Proof Assume D1 = A1 /I with A1 WEP. Let A = q −1 (D) where q : A1 → D1 is again the quotient map. By Proposition 9.64 the inclusion A ⊂ A1 is max-injective. By Remark 9.13 A has the WEP and hence D = A/I is QWEP. Theorem 9.66 A C ∗ -algebra B is QWEP if and only if its bidual B ∗∗ is also QWEP. Proof Assume B ∗∗ QWEP. By Corollary 7.27, the inclusion B → B ∗∗ is maxinjective, and hence B is QWEP by Corollary 9.65. To prove the converse, assume B QWEP and B ∗∗ ⊂ B(H ) (as a von Neumann subalgebra). Let I be the net of neighborhoods of 0 for the weak* topology of B(H ). Let U be an ultrafilter refining this net (see Remark A.6). By the weak* density of BB , for any x ∈ BB ∗∗ ∀i ∈ I ∃x(i) ∈ (x + i) ∩ BB .

(9.14)

This implies that x(i) → x (weak*). For any i ∈ I we set Ai = B. We define a linear map
\[
\varphi : \Big(\oplus_{i\in I} A_i\Big)_\infty \longrightarrow B^{**}
\]
by setting, for any x = (xi) in the unit ball of (⊕i∈I Ai)∞, ϕ(x) = limU xi, where the limit is meant in the weak* sense. Clearly, ϕ is positive (meaning positivity preserving), and similarly (using Mn((⊕i∈I Ai)∞) ≃ (⊕i∈I Mn(Ai))∞) we see that ϕn = IdMn ⊗ ϕ is positive for any n. Thus ϕ is c.p. Let A = (⊕i∈I Ai)∞. By Corollary 9.62, A is QWEP. By (9.14), ϕ(BA) = BB∗∗. This shows that ϕ ∈ CP(A,B∗∗) satisfies the assumption of Corollary 9.60. The latter ensures that B∗∗ is QWEP. Actually, using the density of BB in BB∗∗ for the so-called strong* operator topology, one obtains directly a ∗-homomorphism in place of ϕ, and then B∗∗ appears as a quotient of A.
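A particular case is worth making explicit (it follows from the preceding statement together with the WEP of B(H); for H = ℓ2 the WEP of B(H) is the content of Theorem 9.6 with the definition of the WEP used in this chapter). Since B(H) has the WEP it is in particular QWEP, and Theorem 9.66 then gives
\[
B(H)^{**} \ \text{is QWEP for every Hilbert space } H,
\]
which is the form in which Theorem 9.66 is invoked in the proof of Theorem 9.67 below.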


Theorem 9.67 (Characterization of QWEP) The following properties of a unital C∗-algebra D are equivalent:
(i) D is QWEP.
(ii) For any LLP C∗-algebras C and C1, any decomposable map u : C → D with ‖u‖dec ≤ 1 satisfies ‖IdC1 ⊗ u : C1 ⊗min C → C1 ⊗max D‖ ≤ 1.
(ii)' Same as (ii) for C = C1 = 𝒞.
(iii) There is a free group F and a surjective unital ∗-homomorphism π : C∗(F) → D such that ‖Id𝒞 ⊗ π : 𝒞 ⊗min C∗(F) → 𝒞 ⊗max D‖ ≤ 1, in other words such that the quotient mapping π : C∗(F) → D is WEP.
Proof Assume (i), so that D = A/I with A WEP. Let q : A → A/I be the quotient map. Let u be as in (ii). Let t ∈ C1 ⊗ C. Let E ⊂ C be a finite-dimensional subspace such that t ∈ C1 ⊗ E. By Theorem 9.38, if C is LLP u is locally 1-liftable, so there is uE ∈ CB(E,A) with ‖uE‖cb ≤ 1 such that quE = u|E, and hence ‖(IdC1 ⊗ uE)(t)‖C1⊗minA ≤ ‖t‖C1⊗minC. By Corollary 9.40, the pair (C1,A) is nuclear. Therefore we have
\[
\|(Id_{C_1} \otimes u_E)(t)\|_{C_1\otimes_{\max}A} = \|(Id_{C_1} \otimes u_E)(t)\|_{C_1\otimes_{\min}A}
\]
and hence, since (obviously) (IdC1 ⊗ u)(t) = (IdC1 ⊗ u|E)(t), we find
\[
\|(Id_{C_1} \otimes u)(t)\|_{C_1\otimes_{\max}D} = \|(Id_{C_1} \otimes q u_E)(t)\|_{C_1\otimes_{\max}D} \le \|(Id_{C_1} \otimes u_E)(t)\|_{C_1\otimes_{\min}A} \le \|t\|_{C_1\otimes_{\min}C}.
\]
This shows (i) ⇒ (ii). (ii) ⇒ (ii)' is trivial. (ii) ⇒ (iii) is obvious since any D is a quotient of C∗(F) for some F and the latter has the LLP by (9.5). In addition, (ii)' ⇒ (iii) is easy to check using Lemma 3.8 since any t ∈ 𝒞 ⊗ C∗(F) lies in 𝒞 ⊗ E for some finite-dimensional E ⊂ C∗(F). Assume (iii). Let C = C∗(F). By Theorem 9.57, we can write π = wv with v ∈ CP(C,B(H)), w ∈ CP(B(H),D∗∗) of unit norm. By Theorem 8.1 w admits a weak* continuous extension ẅ ∈ CP(B(H)∗∗,D∗∗) with ẅ|B(H) = w. Since π is onto, we know by Lemma A.33 that π(BC) = BD, and hence w(BB(H)) ⊃ BD. This implies by weak* density that ẅ(BB(H)∗∗) = BD∗∗. Since B(H)∗∗ is QWEP (see Theorem 9.66), Corollary 9.60 shows that D∗∗ is also QWEP. Since D ⊂ D∗∗ is max-injective (see Corollary 7.27), Corollary 9.65 shows that D is QWEP, and we obtain (i).


Remark 9.68 Let G be a discrete group such that C∗(G) has the LLP. If MG is QWEP then the natural map C∗(G) ⊗ C∗(G) → MG ⊗ MG extends to a ∗-homomorphism C∗(G) ⊗min C∗(G) → MG ⊗max MG. A fortiori, G has the factorization property in the sense of Definition 7.36 (see also §11.8). Indeed, this follows from (ii) in Theorem 9.67 with C = C1 = C∗(G) and u = $\dot Q_G$ : C∗(G) → MG. Note that this remark typically applies when G is a free group (see Corollary 12.23).
Taking C = D and u = IdD in (ii)' from Theorem 9.67, we immediately derive:
Corollary 9.69 If a QWEP C∗-algebra D has the LLP then it has the WEP.
Corollary 9.70 Let A and D be C∗-algebras. Assume that there is a factorization of the identity of A of the form
\[
Id_A : A \xrightarrow{\ v\ } D \xrightarrow{\ w\ } A
\]
where v,w are decomposable maps. If D is QWEP then A also is QWEP.
Proof Let λ = ‖v‖dec ‖w‖dec. Assume D is QWEP. Thus D satisfies (ii) in Theorem 9.67. Let π : C∗(F) → A be a quotient map. By (6.13), ‖Id𝒞 ⊗ π : 𝒞 ⊗min C∗(F) → 𝒞 ⊗max A‖ ≤ λ. But since Id𝒞 ⊗ π is a ∗-homomorphism, the latter norm must be = 1. Thus A satisfies (iii) in Theorem 9.67 and hence is QWEP.
Remark 9.71 In Corollary 9.70 it suffices to assume that there are nets of decomposable maps wi : A → D and vi : D → A with supi ‖vi‖dec ‖wi‖dec < ∞ such that vi wi tends pointwise to iA : A → A∗∗. Indeed, the preceding proof leads to ‖Id𝒞 ⊗ iA π : 𝒞 ⊗min C∗(F) → 𝒞 ⊗max A∗∗‖ ≤ λ, and Corollary 7.27 allows us to conclude.
Theorem 9.72 (Another characterization of QWEP) The following properties of a unital C∗-algebra D are equivalent:
(i) D is QWEP.
(ii) For some H there is an embedding (as a von Neumann subalgebra) D∗∗ ⊂ B(H)∗∗ admitting a contractive projection P : B(H)∗∗ → D∗∗.
(iii) There is a factorization of the identity of D∗∗ of the form
\[
Id_{D^{**}} : D^{**} \xrightarrow{\ v\ } B(H)^{**} \xrightarrow{\ w\ } D^{**}
\]
where v,w are decomposable maps.


Proof Assume D = A/I with A WEP. By Theorem 9.31 the algebra A∗∗ certainly satisfies (ii). But by (A.37) we have A∗∗ ≃ I∗∗ ⊕ D∗∗. Therefore D∗∗ also satisfies (ii). This shows (i) ⇒ (ii). Then (ii) ⇒ (iii) is obvious (recalling that P is automatically c.p. by Theorem 1.45) and (iii) ⇒ (i) follows from Corollary 9.70 and Theorem 9.66.
Remark 9.73 (Warning on a trap) It is important to understand the difference between the last statement and Theorem 9.31. For that purpose, we urge the reader to look into Remark 9.33.
Theorem 9.74 Let {Ai | i ∈ I} be a family, directed by inclusion, of C∗-subalgebras of B(H), and let A ⊂ B(H) (resp. N ⊂ B(H)) be the norm (resp. weak*) closure of their union. If all the Ai's are QWEP then A and N are QWEP.
Proof Assume A unital for simplicity. Using the unitary groups U(Ai) it is easy to find a free group F and a surjective ∗-homomorphism π : C∗(F) → A so that there is a family, directed by inclusion, of free subgroups Gi with union = F, such that π restricted to C∗(Gi) realizes Ai as a quotient of C∗(Gi). Assume all the Ai's are QWEP. Then arguing as in Remark 9.63 one shows that π satisfies (iii) in Theorem 9.67 and hence A is QWEP. We may view N as generated by A. Then by Theorem 9.66 A∗∗ is QWEP, and since N is a quotient of A∗∗ (see Theorem 8.1 (i)), it is also QWEP. One can also prove directly that N is QWEP by a modification of the proof of Theorem 9.66.
Remark 9.75 (QWEP is separably determined) Let A1 ⊂ A and A2 ⊂ A be separable C∗-subalgebras of a C∗-algebra A. Then the C∗-algebra generated by A1 ∪ A2 is still separable. Thus the family of separable C∗-subalgebras of A forms a directed net for inclusion and the union of all of them is equal to A. Thus, by Theorem 9.74, if all separable C∗-subalgebras are QWEP then A is QWEP. A fortiori, if all weak* separable subalgebras of a von Neumann algebra M are QWEP then M is QWEP.

9.8 Notes and remarks

As we already mentioned, Takesaki introduced the notion of nuclearity for C∗-algebras (initially under a different name), and of course implicitly also for pairs. The term was inspired by Grothendieck's work on nuclear locally convex spaces. Later on the subject was deepened by the works of Lance [165], Choi–Effros and Kirchberg [45, 154] (who independently proved that nuclearity is equivalent to the CPAP, see §10.2) and Effros–Lance [79]. A major step was taken thanks to Connes's work on injective factors [61] that allowed Choi


and Effros to complete the proof that a C ∗ -algebra A is nuclear if and only if its bidual A∗∗ is an injective von Neumann algebra (see §8.3). For most of this initial period, not much interest was devoted to what we call “nuclear pairs,” except for Lance’s question whether the nuclearity of (A,Aop ) implies that A is nuclear, answered by Kirchberg in [155]. The latter constructed a counterexample, and at the same time started a deep study of nuclear pairs that led him to identify the WEP and the LLP for a C ∗ -algebra A as the nuclearity of the pairs respectively (A,C ) and (A,B) (which in this volume we choose as defining WEP and LLP). Kirchberg’s Theorem 9.6 (that tells us that (C ,B) is a nuclear pair) was proved in [156], but we followed the simpler proof from [204]. We encourage the reader to compare the latter proof with the original one! Theorem 9.18 is due to Choi. The origin of the WEP is Lance’s paper [165]. Our terminology is slightly different, as explained in Remark 9.23. Major advances were made in Kirchberg’s [155]. Our presentation of the stability properties of the WEP (due to Lance and Kirchberg) in §9.3 is much influenced by Ozawa’s [189]. Corollary 9.28 already appears in [77]. Lance raised the question whether there are examples of embeddings A ⊂ σ B(H ) admitting a weak expectation from B(H ) to M = A but nevertheless not admitting a contractive projection from B(H ) onto M (in other words noninjective). Such examples were proposed by Blackadar in [23, 24]. More examples (where M is a free group factor and A a specially chosen weak*dense Popa C ∗ -subalgebra) appear in later papers by Brown and Dykema [37, 38]. The definition of the LLP and the results in §9.4 are due to Kirchberg in [155]. The stability of the LLP under free products in Theorem 9.44 is due to the author [204]. A subsequent paper by Boca [31] contains variations on the same theme. The results of §9.5 are due to Kirchberg but are based on Arveson’s ideas in [13]. Theorem 9.53 is due to Choi and Effros [49]. In §9.6 and 9.7 again the main ideas come from Kirchberg’s work, but as usual we try to put forward properties of linear maps. Theorem 9.72 is easy to deduce from Ozawa’s viewpoint in [189], but apparently was not formulated yet.

10 Exactness and nuclearity

We already gave a brief introduction to nuclear C ∗ -algebras in §4.2 and we already announced some of their main properties, like their characterization by the CPAP, which will be proved in the present chapter. Since, in the separable case, exact C ∗ -algebras are just C ∗ -subalgebras of nuclear ones, it is not surprising that many features of nuclear C ∗ -algebras and their approximation properties have analogues for exactness. We try to emphasize this “parallelism.”

10.1 The importance of being exact

A C∗-algebra A is called exact if the phenomenon described in (7.16) cannot happen when we take the minimal tensor product with A. More precisely:
Definition 10.1 A C∗-algebra A is called exact if for any C∗-algebra B and any ideal I ⊂ B so that B/I is a C∗-algebra, the sequence
\[
\{0\} \to I \otimes_{\min} A \to B \otimes_{\min} A \to (B/I) \otimes_{\min} A \to \{0\}
\]
is exact. Equivalently, this holds if and only if the kernel of the mapping B ⊗min A → (B/I) ⊗min A coincides with I ⊗min A. In other words, this reduces to: the ∗-homomorphism
\[
\frac{B \otimes_{\min} A}{I \otimes_{\min} A} \longrightarrow (B/I) \otimes_{\min} A \quad \text{is injective (or equivalently isometric).} \tag{10.1}
\]
Even more explicitly this boils down (equivalently) to:
\[
\forall x \in (B/I) \otimes A \qquad \|x\|_{(B\otimes_{\min}A)/(I\otimes_{\min}A)} \le \|x\|_{(B/I)\otimes_{\min}A}. \tag{10.2}
\]
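As a first, finite-dimensional illustration of Definition 10.1 (a standard observation, consistent with the remark in the proof of Theorem 10.3 below that Mk is "trivially exact"): for A = Mn and any ideal I ⊂ B we have
\[
(B\otimes_{\min}M_n)/(I\otimes_{\min}M_n) \;=\; M_n(B)/M_n(I) \;\simeq\; M_n(B/I) \;=\; (B/I)\otimes_{\min}M_n,
\]
so every matrix algebra (and hence every finite-dimensional C∗-algebra) is exact.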

Remark 10.2 Let B/I be a quotient C∗-algebra by an ideal I ⊂ B. For brevity, let us say that A is (B,I)-exact if (10.1) or (10.2) holds. Let B0 ⊂ B be a


C∗-subalgebra, let I0 = I ∩ B0 so that I0 ⊂ I and B0/I0 ⊂ B/I. Assume that there is a completely contractive projection P : B → I such that P(B0) = I0. It is easy to check that if A is (B,I)-exact then it is also (B0,I0)-exact: indeed, we have isometric embeddings (B0/I0) ⊗min A ⊂ (B/I) ⊗min A and (B0 ⊗min A)/(I0 ⊗min A) ⊂ (B ⊗min A)/(I ⊗min A). For the latter we use the projection P ⊗ IdA : B ⊗min A → I ⊗min A.
The classical Calkin algebra Q = ℬ/𝒦 is of special interest (here 𝒦 is the ideal of compact operators in ℬ). Let (Nn) be any increasing sequence of integers, let B0 = (⊕ MNn)∞ and let I0 = {b = (bn) ∈ B0 | lim ‖bn‖ = 0}. We have a classical block diagonal embedding B0 ⊂ ℬ such that I0 = B0 ∩ 𝒦. Let q0 : B0 → B0/I0 denote the quotient map. It is easy to check that for any b = (bn) ∈ B0 we have ‖q0(b)‖ = lim sup ‖bn‖. More generally, the same easy argument shows that for any finite-dimensional operator space E and any b = (bn) ∈ (⊕ MNn ⊗min E)∞ ≃ B0 ⊗min E (for the last ≃ see (1.14)), we have
\[
\|(q_0 \otimes Id_E)(b)\|_{(B_0\otimes_{\min}E)/(I_0\otimes_{\min}E)} = \limsup_{n\to\infty} \|b_n\|_{M_{N_n}\otimes_{\min}E}. \tag{10.3}
\]
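To spell out the "easy check" above (a routine computation with the quotient norm, using only that I0 consists of the sequences tending to 0 in norm): for b = (bn) ∈ B0,
\[
\|q_0(b)\| = \inf_{c\in I_0}\ \sup_n \|b_n + c_n\| = \limsup_{n\to\infty} \|b_n\|,
\]
since replacing the first N coordinates of b by 0 is a perturbation of b by an element of I0 with norm supn>N ‖bn‖, while for any c ∈ I0 we have supn ‖bn + cn‖ ≥ lim supn ‖bn + cn‖ = lim supn ‖bn‖.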

The main point of the next result is that exactness restricted to the Calkin algebra implies the general exactness.
Theorem 10.3 The following properties of a C∗-algebra A are equivalent:
(i) The C∗-algebra A is (ℬ,𝒦)-exact.
(ii) For any finite-dimensional subspace E ⊂ A and any bounded sequence of linear maps un : En → E, where En are arbitrary operator spaces, we have
\[
\limsup_{n\to\infty} \|u_n\|_{cb} \le \sup_k \limsup_{n\to\infty} \|Id_{M_k} \otimes u_n : M_k(E_n) \to M_k(E)\|. \tag{10.4}
\]
(iii) For any finite-dimensional subspace E ⊂ A and any ε > 0 there are an integer N, a subspace Ê ⊂ MN and u : Ê → E such that ‖u‖cb < 1 + ε and ‖u−1‖cb ≤ 1.
(iv) The C∗-algebra A is exact.
Proof Assume (i). Choose εn > 0 such that εn → 0. For any n there is Nn such that ‖un‖cb ≥ ‖IdMNn ⊗ un‖ > ‖un‖cb − εn. We may clearly adjust so that (Nn) is increasing. For any n there is xn ∈ BMNn(En) such that ‖un‖cb ≥ ‖(IdMNn ⊗ un)(xn)‖MNn⊗minE > ‖un‖cb − εn. Let tn = (IdMNn ⊗ un)(xn) ∈ MNn ⊗min E. We use the notation in Remark 10.2 for B0 and I0. Since the ranks of the un's are bounded we must have


sup ‖un‖cb < ∞ (see Remark 1.56), and hence (tn) ∈ (⊕ MNn ⊗min E)∞ ≃ B0 ⊗min E. Let ṫ = (q0 ⊗ IdE)((tn)) ∈ (B0/I0) ⊗min E ⊂ (B0/I0) ⊗min A. By Remark 10.2, we may assume that A is (B0,I0)-exact. Recalling Lemma 4.26, this gives us
\[
\|\dot t\|_{(B_0\otimes_{\min}E)/(I_0\otimes_{\min}E)} \le \|\dot t\|_{(B_0/I_0)\otimes_{\min}E}. \tag{10.5}
\]
By (10.3) we have on one hand
\[
\|\dot t\|_{(B_0\otimes_{\min}E)/(I_0\otimes_{\min}E)} = \limsup \|t_n\|_{M_{N_n}\otimes_{\min}E} = \limsup \|u_n\|_{cb},
\]
and on the other hand we claim that
\[
\|\dot t\|_{(B_0/I_0)\otimes_{\min}E} \le \sup_k \limsup_{n\to\infty} \|Id_{M_k} \otimes u_n : M_k(E_n) \to M_k(E)\|.
\]
The last two inequalities together with (10.5) imply (10.4), which proves (ii). We now prove the claim. By (1.11) (recalling (1.6)) we have
\[
\|\dot t\|_{(B_0/I_0)\otimes_{\min}E} = \sup_k \sup\{\|(Id_{B_0/I_0} \otimes v)(\dot t)\|_{(B_0/I_0)\otimes_{\min}M_k} \mid v \in B_{CB(E,M_k)}\}, \tag{10.6}
\]
and since (B0/I0) ⊗min Mk = (B0 ⊗min Mk)/(I0 ⊗min Mk) (indeed Mk is trivially exact), we find for any k and v ∈ CB(E,Mk) by (10.3)
\[
\|(Id_{B_0/I_0} \otimes v)(\dot t)\|_{(B_0/I_0)\otimes_{\min}M_k} = \limsup_n \|(Id_{M_{N_n}} \otimes v u_n)(x_n)\|_{M_{N_n}\otimes_{\min}M_k} \le \limsup_n \|v u_n\|_{cb} \tag{10.7}
\]
and since ‖vun‖cb = ‖IdMk ⊗ vun‖ by (1.18), we have lim supn ‖vun‖cb = lim supn ‖IdMk ⊗ vun‖ ≤ lim supn ‖IdMk ⊗ un‖, and the claim now follows from (10.6) and (10.7).
Assume (ii). Let E ⊂ A with dim(E) < ∞. Since E is separable, we may assume E ⊂ B(H) (completely isometrically) with H separable. Let Hn ⊂ H (n ≥ 1) be an increasing sequence of subspaces with dim(Hn) < ∞ whose union is dense in H. Then (as earlier for (1.9)) for any k and any x ∈ Mk(E) we have ‖x‖Mk(E) = limn ↑ ‖IdMk ⊗ vn(x)‖ where vn(e) = PHn e|Hn (e ∈ E). Since the unit sphere SE of E is compact (and |‖vn(x)‖ − ‖vn(y)‖| ≤ ‖x − y‖ ∀x,y ∈ E), we have by Ascoli's theorem infx∈SE ‖vn(x)‖ ↑ 1 and hence vn : E → vn(E) is invertible for all n large enough. For such n, let un = vn−1 : vn(E) → E be its inverse. We have ‖un‖ ↓ 1. Similarly, limn ↓ ‖IdMk ⊗ un‖ = 1 for any k. By (ii) we have limn ‖un‖cb ≤ 1, and hence (iii) follows: choosing n large enough so that ‖un‖cb < 1 + ε we may take N = dim(Hn) and u = un.
Assume (iii). Let B/I be a quotient C∗-algebra. To prove (10.2) it suffices to show that for any E ⊂ A with dim(E) < ∞ and any ε > 0 we have


\[
\|(B/I) \otimes_{\min} E \to (B \otimes_{\min} E)/(I \otimes_{\min} E)\| \le 1 + \varepsilon.
\]
Let Ê ⊂ MN and u be as in (iii). Since MN is exact we have isometrically (B/I) ⊗min MN = (B ⊗min MN)/(I ⊗min MN) and hence also isometrically (B/I) ⊗min Ê = (B ⊗min Ê)/(I ⊗min Ê) by Lemma 4.26. Therefore using IdE = u ◦ u−1 we find
\[
\|(B/I) \otimes_{\min} E \to (B \otimes_{\min} E)/(I \otimes_{\min} E)\| \le \|u^{-1}\|_{cb}\,\|u\|_{cb} \le 1 + \varepsilon.
\]
Thus we obtain (iv), and (iv) ⇒ (i) is trivial.
Remark 10.4 By the results of the preceding §7.2 (see Proposition 7.15) it is clear that any nuclear C∗-algebra is exact.
Remark 10.5 (On the stability properties of exactness) By Lemma 4.26 (applied here with the roles of A and B exchanged) if A is exact then any C∗-subalgebra D ⊂ A is also exact. This is also immediate from (iii) ⇔ (iv) in Theorem 10.3. Incidentally, this shows that ℬ is not exact. Kirchberg proved that exactness also passes to quotient C∗-algebras but this lies much deeper (see [39, p. 297] for a detailed proof). However, as we will prove in §18.3, exactness is not stable under extensions. By a simple iteration argument, one shows that the minimal tensor product of two exact C∗-algebras is exact. In sharp contrast, exactness is not preserved by the maximal tensor product. Indeed, by Theorem 4.11 there is an embedding C∗(Fn) ⊂ Cλ∗(Fn) ⊗max Cλ∗(Fn) and Cλ∗(Fn) is exact (see Remark 10.21) while C∗(Fn) is not exact when n > 1 (see Proposition 7.34).
Remark 10.6 (Embeddings in the Cuntz algebra) Kirchberg [158, 162] obtained a series of striking results on exact C∗-algebras culminating with his outstanding proof that any separable exact C∗-algebra A can be embedded (as a C∗-subalgebra) in a nuclear C∗-algebra, namely the Cuntz algebra O2. Moreover, he showed that if A is nuclear the embedding A ⊂ O2 can be made so that there is a contractive c.p. projection from O2 onto A. See [3] for a nice presentation of this subject. Most of the tools for this topic, being related to C∗-algebraic K-theory, are quite far from the subject of the present volume, which explains why we do not discuss this fundamental embedding any further. Nevertheless, Theorem 10.3 as well as the next statement are important steps (which we fully prove) on the way to this achievement. In any case, with the applications related to the CBAP in Theorem 10.18, all this already shows that exactness is a certain form of "subnuclearity." For (min → max)-tensorizing c.p. maps, see Theorem 7.11 (which is proved in the next section).
Theorem 10.7 A C∗-algebra A ⊂ B(H) is exact if and only if the inclusion mapping jA : A → B(H) is (min → max)-tensorizing.


Proof Assume A exact. Let B be any unital C∗-algebra. Then, by Proposition 3.39, for some free group F, B is a quotient of C = C∗(F), so that B = C/I. Then the following ∗-homomorphism is contractive
\[
A \otimes_{\min} B = \frac{A \otimes_{\min} C}{A \otimes_{\min} I} \longrightarrow \frac{B(H) \otimes_{\min} C}{B(H) \otimes_{\min} I}
\]
but since the pair (C,B(H)) is nuclear (see (9.4)) we have
\[
\frac{B(H) \otimes_{\min} C}{B(H) \otimes_{\min} I} = \frac{B(H) \otimes_{\max} C}{B(H) \otimes_{\max} I}
\]
and by (7.4) this last space coincides with B(H) ⊗max (C/I) = B(H) ⊗max B. Therefore A → B(H) is (min → max)-tensorizing. Conversely, if the latter inclusion is (min → max)-tensorizing, then for any C and any quotient C/I we have
\[
A \otimes_{\min} (C/I) \longrightarrow B(H) \otimes_{\max} (C/I) = \frac{B(H) \otimes_{\max} C}{B(H) \otimes_{\max} I}
\]
and a fortiori
\[
A \otimes_{\min} (C/I) \longrightarrow \frac{B(H) \otimes_{\min} C}{B(H) \otimes_{\min} I}.
\]
Thus if we denote as before Q[E] = (E ⊗min C)/(E ⊗min I) we have a contractive map A ⊗min (C/I) → Q[B(H)]. But by Lemma 7.43 we know that the norm induced by Q[B(H)] on A ⊗ (C/I) is the norm of Q[A]. Therefore we obtain ‖A ⊗min (C/I) → Q[A]‖ ≤ 1 which means that A is exact. Alternative route: (iii) in Theorem 10.3 implies that jA factors as in (iii) in Theorem 7.10.

Corollary 10.8 If A ⊂ B(H) is exact, any complete contraction u : A → W with values in a WEP C∗-algebra W is (min → max)-tensorizing.
Proof If W is WEP, the inclusion iW : W → W∗∗ factors through some B(K) via c.p. complete contractions. By the extension property of B(K) (see Theorem 1.18) we have a factorization of iW u of the form A → B(H) → W∗∗, showing that iW u is (min → max)-tensorizing. Therefore (see Remark 7.28) u itself is (min → max)-tensorizing.
Applying this to u = IdA (and recalling Remark 10.4) we immediately obtain
Corollary 10.9 A C∗-algebra A is nuclear if and only if A is exact and has the WEP.
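One way to see that neither assumption in Corollary 10.9 can be dropped (a standard observation, obtained by combining the corollary with Theorem 9.6, Remark 10.5 and Remark 10.21):
\[
\begin{array}{lll}
\mathcal B = B(\ell_2): & \text{has the WEP (Theorem 9.6)}, & \text{not exact (Remark 10.5), hence not nuclear};\\
C^*_\lambda(F_n),\ n\ge 2: & \text{exact (Remark 10.21)}, & \text{not nuclear, hence fails the WEP}.
\end{array}
\]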


Recall that we denote by Q = B/K (where B = B(2 ),K = K(2 )) the classical “Calkin algebra.” The analogue of the last corollary for the LLP involves Q. Corollary 10.10 For a C ∗ -algebra A, the pair (A,Q) is nuclear if and only if A is exact and has the LLP. In that case, (A,B) is nuclear for any QWEP C ∗ -algebra B. Proof If A is exact we have A ⊗min Q = (A ⊗min B)/(A ⊗min K ).

(10.8)

If A has the LLP, (A,B) is nuclear by our definition of the LLP. Therefore (A ⊗min B)/(A ⊗min K ) = (A ⊗max B)/(A ⊗max K ). By (7.6) (exactness of the max-tensor product): (A ⊗max B)/(A ⊗max K ) = A ⊗max Q,

(10.9)

and hence (A,Q) is nuclear. The same argument (with Corollary 9.40) shows that (A,B) is nuclear for any QWEP C ∗ -algebra B. Conversely, assume (A,Q) nuclear. Then (10.8) holds by (10.9) (since the max-norm dominates the minnorm on A ⊗ B). By Theorem 10.3, (10.8) implies exactness. By (10.8) and (10.9) we have (A ⊗min B)/(A ⊗min K ) = (A ⊗max B)/(A ⊗max K ), and also A ⊗min K = A ⊗max K . One easily deduces from this that A ⊗min B = A ⊗max B, which means, by our definition of the LLP, that A has it. Remark 10.11 Kirchberg conjectured (see Proposition 13.1) that LLP ⇒ WEP, and also that all C ∗ -algebras are QWEP (see Proposition 13.1). By the preceding corollaries, this would show that (A,Q) nuclear ⇒ A nuclear. We now return to the connection between exactness and local reflexivity (see Remark 8.34 for important additional information). Proposition 10.12 Property C  is equivalent to exactness. Proof Assume that A has property C  . Let I ⊂ B be an ideal. A preliminary observation is that by (A.37) the kernel of the natural mapping B ∗∗ ⊗min A → (B ∗∗ /I ∗∗ ) ⊗min A is equal to I ∗∗ ⊗min A (see also Corollary 7.52). Property C  tells us that we have an isometry B ∗∗ ⊗min A → (B ⊗min A)∗∗ . Let Z = ker[q ⊗ IdA : B ⊗min A → (B/I) ⊗min A]. Viewing B as embedded in B ∗∗ , we view B ⊗min A ⊂ B ∗∗ ⊗min A. We also may view I ∗∗ ⊂ B ∗∗ . Then our preliminary observation shows us that Z ⊂ I ∗∗ ⊗min A. By property C  a fortiori Z ⊂ I ∗∗ ⊗min A ⊂ (I ⊗min A)∗∗ . Therefore, any z ∈ Z is in the


σ ((B ⊗min A)∗∗,(B ⊗min A)∗ )-closure of a bounded net in I ⊗min A. But since both Z and I ⊗min A are included in B ⊗min A, this means that any element z ∈ Z is the weak limit in B ⊗min A of a bounded net in I ⊗min A, and hence by (Mazur’s) Theorem A.9 the norm limit of another such net, which implies that z ∈ I ⊗min A. Thus Z = I ⊗min A and A is exact. Conversely, assume A ⊂ B(H ) exact. Let B be a C ∗ -algebra. By Theorem 10.7 we have A ⊗min B ∗∗ → B(H ) ⊗max B ∗∗  = 1. By (8.12) (and (4.8)) we have B(H ) ⊗max B ∗∗ → (B(H ) ⊗max B)∗∗  = 1 and a fortiori B(H ) ⊗max B ∗∗ → (B(H ) ⊗min B)∗∗  = 1. Therefore A ⊗min B ∗∗ → (B(H ) ⊗min B)∗∗  = 1. But since the inclusion A ⊗min B → B(H ) ⊗min B is isometric, so is (A ⊗min B)∗∗ → (B(H ) ⊗min B)∗∗ . Therefore we conclude A ⊗min B ∗∗ → (A ⊗min B)∗∗  = 1, which means A has property C  . Remark 10.13 The argument described in Remark 8.32 to show that local reflexivity (i.e. property C  ) passes to quotients also shows the same for property C. However, it does not seem to work for property C  . Thus it does not lead to a proof of the same for exactness.

10.2 Nuclearity, exactness, approximation properties

The aim of this section is to give a reasonably direct proof that a C∗-algebra A is nuclear if and only if it has the approximation properties mentioned in Theorem 7.11 (but not proved yet). We expand on the same theme including several more general statements in the next section. The proofs will require the dual description of the maximal C∗-norm given by Theorem 6.15.
The next statement is the analogue for general linear maps of the characterization of (min → max)-tensorizing maps stated (but not proved yet) in Theorem 7.10.
Theorem 10.14 Let λ be a positive constant. Consider two C∗-algebras A and B and an operator subspace E ⊂ A. Let u : E → B be a linear mapping. The following assertions are equivalent.
(i) For any C∗-algebra C, IdC ⊗ u defines a bounded linear map from C ⊗min E to C ⊗max B with norm ≤ λ. In other words, u is (min → max)-tensorizing with constant λ.
(ii) Same as (i) with C = C∗⟨F∗⟩ for all finite-dimensional operator subspaces F ⊂ E.


(iii) For any finite-dimensional subspace F ⊂ E, the restriction u|F admits, for any ε > 0, a factorization of the form
\[
F \xrightarrow{\ V\ } M_n \xrightarrow{\ w\ } B
\]
with ‖V‖cb ‖w‖dec ≤ λ + ε.
(iv) There is a net of finite rank maps ui : E → B admitting factorizations through matrix algebras of the form
\[
A \supset E \xrightarrow{\ v_i\ } M_{n_i} \xrightarrow{\ w_i\ } B
\]
with ‖vi‖cb ‖wi‖dec ≤ λ such that ui = wi vi converges pointwise to u.
(v) There is a net ui : A → B of finite rank maps with sup ‖ui‖dec ≤ λ such that the restrictions ui|E tend pointwise to u.
Proof (i) ⇒ (ii) is trivial. Assume (ii). Let F ⊂ E be an arbitrary finite-dimensional subspace and let tF ∈ F∗ ⊗ E be the tensor associated to the inclusion jF : F → E. Let C = C∗⟨F∗⟩. By (ii) we have ‖(IdC ⊗ u)(tF)‖C⊗maxB ≤ λ ‖tF‖C⊗minE. But by the injectivity of the min-norm (see Remark 1.10) and by (2.15), we have ‖tF‖C⊗minE = ‖tF‖F∗⊗minE = ‖jF‖CB(F,E) = 1. Hence we have ‖(IdC ⊗ u)(tF)‖C⊗maxB ≤ λ. By (6.18) and Remark 6.17, this implies that, for any ε > 0, there is a factorization of u|F of the following form
\[
F \xrightarrow{\ V\ } M_n \xrightarrow{\ w\ } B
\]
with ‖V‖cb ≤ 1 and ‖w‖dec ≤ λ + ε. This shows (ii) ⇒ (iii). Assume (iii). By the extension property of Mn (see Theorem 1.18), we can extend V to a mapping v : E → Mn with ‖v‖cb ≤ ‖V‖cb. Thus if we take for index set I the set of all finite-dimensional subspaces F ⊂ E (directed by inclusion) we obtain nets vi : E → Mni and wi : Mni → B such that ‖wi vi(x) − u(x)‖ → 0 for all x in E and such that (after a suitable renormalization) sup{‖vi‖cb ‖wi‖dec} ≤ 1. This completes the proof that (iii) ⇒ (iv). We may clearly assume (by the extension Theorem 1.18) that vi is extended to A with the same cb norm; thus, recalling (6.7) and (6.9), (iv) ⇒ (v) is immediate. Finally, the proof that (v) ⇒ (i) is an immediate consequence of Proposition 6.13.
Recall that a C∗-algebra A is said to have the CPAP (for completely positive approximation property) if the identity on A is in the pointwise closure of


the set of finite rank c.p. maps on A. To derive the known results on this approximation property, the following will be useful (here we follow closely [208, ch. 12]).
Lemma 10.15 Let E ⊂ B(H) be a finite-dimensional operator system and let A be a C∗-algebra. Consider a unital self-adjoint mapping u : E → A associated to a tensor t ∈ E∗ ⊗ A. Fix ε > 0. Then if δ(t) < 1 + ε, we can decompose u as u = ϕ − ψ with ϕ,ψ c.p. such that ‖ψ‖ ≤ ε and ϕ admits for some n a factorization of the form
\[
\varphi : E \xrightarrow{\ V\ } M_n \xrightarrow{\ W\ } A
\]
where V,W are c.p. maps with ‖V‖ ≤ 1 + ε and ‖W‖ ≤ 1.
Proof By the definition of the norm δ (see §6.2) and by Theorem 1.50, we can assume that u = wv where v : E → Mn satisfies
\[
\forall x \in E \qquad v(x) = v_1^*\, \pi(x)\, v_2,
\]

) is the restriction of a ∗-homomorphism, v1,v2 are where π : E → B(H n  operators in B(2 , H ) with v1  = v2  0 we can write u|E = ϕ − ψ with ϕ = WV as in Lemma 10.15. By the extension property of c.p. maps we may as well assume that V is a c.p. mapping of norm ≤ 1 + ε from A to Mn . Then, using the net (directed by inclusion) formed by the finite-dimensional operator systems E ⊂ A, and letting ε → 0 we obtain (iv) (after a suitable renormalization of V ). Then (iv) ⇒ (v) is trivial. Lastly, assume (v). Note that since ui  = ui (1) and ui (1) → u(1) = 1, we have “automatically” ui  → 1. By (6.16) and (i) in Lemma 6.5, we have for any C ∗ -algebra C IdC ⊗ ui C⊗min A→C⊗max B ≤ ui ,


and hence since ui → u pointwise ‖IdC ⊗ u‖C⊗minA→C⊗maxB ≤ 1. This shows that (v) ⇒ (i).
We now recover the most classical characterization of nuclear C∗-algebras, due independently to Choi–Effros and Kirchberg [45, 154]:
Theorem 10.17 (Nuclear ⇔ CPAP) The following properties of a unital C∗-algebra A are equivalent:
(i) A is nuclear.
(ii) There is a net of finite rank maps of the form
\[
A \xrightarrow{\ v_i\ } M_{n_i} \xrightarrow{\ w_i\ } A
\]
where vi,wi are c.p. maps with ‖vi‖ ≤ 1, ‖wi‖ ≤ 1, that tends pointwise to the identity.
(iii) A has the CPAP.
Proof This follows from Corollary 10.16 with A = B and u = IdA.
Just like Theorem 10.7 the next statement is a characterization of exactness for C∗-algebras, that is parallel to that of nuclearity. We just replace the identity on A, namely IdA, by the inclusion jA : A → B(H). This already suggests to think of exactness as some sort of "subnuclearity," at least in a somewhat local sense, but actually Kirchberg proved that separable exact C∗-algebras globally embed in nuclear ones (see Remark 10.6). Note however, in sharp contrast, that the preceding parallelism does not extend to (iv) in Theorem 10.18, since the CBAP for IdA does not imply nuclearity (see Remark 10.21).
Theorem 10.18 The following properties of a unital C∗-algebra A ⊂ B(H) are equivalent:
(i) A is exact.
(ii) There is a net of finite rank maps of the form
\[
A \xrightarrow{\ v_i\ } M_{n_i} \xrightarrow{\ w_i\ } B(H)
\]
where vi,wi are c.p. maps with ‖vi‖ ≤ 1, ‖wi‖ ≤ 1, that tends pointwise to the inclusion mapping jA : A ⊂ B(H).
(iii) There is a net of finite rank c.p. maps ui : A → B(H) tending pointwise to jA : A ⊂ B(H) (in other words, we might say that "jA has the CPAP").
(iv) There is a net of finite rank c.b. maps ui : A → B(H) with supi ‖ui‖cb < ∞ that tends pointwise to jA : A ⊂ B(H) (we might say that "jA has the CBAP").
Proof (i) ⇔ (ii) ⇔ (iii) follows from Corollary 10.16 with B = B(H) and u = jA.


(iii) ⇒ (iv) is trivial. Assume (iv). By (6.9) ui dec = ui cb . By (i) ⇔ (v) in Theorem 10.14, jA is (min → max)-tensorizing with constant λ = supi ui cb . But actually since jA is a ∗-homomorphism, this automatically is true also with a constant = 1 (see Proposition A.24). Thus A is exact (meaning (i) holds) by Theorem 10.7. Remark 10.19 [On unitizations] A C ∗ -algebra A is nuclear (resp. exact) if and only if its unitization is nuclear (resp. exact). Indeed, A is an ideal in its  so that A/A  unitization A = C. By Proposition 7.19, an ideal in a nuclear ∗  nuclear implies A nuclear. The analogue for C -algebra is nuclear. Thus A exactness is obvious (see Remark 10.5). The converses are easy and left as an exercise to the reader. Moreover, using an approximate unit of A, one easily  implies that of A. This remark allows us to extend shows that the CPAP of A Theorem 10.17 to the nonunital case. A similar reasoning applies for exactness and Theorem 10.18. Remark 10.20 [On the CBAP] Let (E) be the smallest λ for which E has the λ-CBAP in the sense of Definition 9.48. By Theorem 10.18 any C ∗ -algebra with the CBAP must be exact. We refer the reader to Cowling and Haagerup’s [64] for an important example of a sequence of groups (Gn ) (with property (T)) for which (Cλ∗ (Gn )) = n + 1 (n = 1,2, . . . ). Remark 10.21 [Exactness for free groups] By Haagerup’s classical paper [103], for any free group F, the reduced C ∗ -algebra Cλ∗ (F) has the 1-CBAP, and a fortiori is exact. Thus unlike the CPAP the CBAP does not imply nuclearity. In sharp contrast, the full C ∗ -algebra C ∗ (Fn ) is not exact whenever n ≥ 2. The latter fact follows from (7.8) and (7.9). Remark 10.22 (On the weak* CBAP) Let λ > 0 be a constant. We say that a von Neumann algebra M has the weak* λ-CBAP if there is a net of normal finite rank maps Ti : M → M with Ti cb ≤ λ that tend weak* to the identity. Haagerup proved in 1986 (see [108]) that for a discrete group G, the λ-CBAP for Cλ∗ (G) is equivalent to the weak* λ-CBAP for MG , and moreover when this holds we may always find the net (Ti ) formed of finite rank multipliers (see Proposition 3.25 and Remark 3.29). In particular the latter net is then formed of maps such that Ti (MG ) ⊂ Cλ∗ (G). By Remark 10.21, MG has the weak* 1-CBAP for any free group G. See [41, 110] for further developments on approximation properties for groups. Because of the equivalent reformulation with multipliers, the groups G for which Cλ∗ (G) has the CBAP are now called weakly amenable. Haagerup (1986, unpublished) proved that SL3 (Z) is not weakly amenable (see also [190]), even though Cλ∗ (G) is exact when G = SL3 (Z) as well as when G is any closed discrete subgroup of a connected Lie group. This last fact is attributed to A. Connes in [155, p. 453]. By [99] the


same holds when G is any discrete linear group. See [109, 113, 114, 164] for more recent breakthroughs in this direction. Remark 10.23 Haagerup also proved in [103] a different kind of multiplier approximation property for free groups, now called the Haagerup property, for which we refer to [42].

10.3 More on nuclearity and approximation properties In this section we present a result (due to Junge and Le Merdy) that refines Theorem 6.15 using an original application of Kaplansky’s density theorem (see Theorem A.47) discovered by Junge [137], as follows. We merely sketch the proof. Lemma 10.24 Let A,B be arbitrary C ∗ -algebras. Then any c.b. map θ : B ∗ → A can be approximated in the point-norm topology by a net of weak∗ continuous finite rank maps θi : B ∗ → A with θi cb ≤ θ cb . Sketch Let A∗∗ ⊂ B(H ), and B ∗∗ ⊂ B(K) be embeddings (as von Neumann subalgebras). Let M = A∗∗ ⊗B ∗∗ denote the von Neumann algebra generated by A∗∗ ⊗ B ∗∗ in B(H ⊗2 K). The space CB(B ∗,A∗∗ ) can be identified isometrically with M in a natural way (see e.g. [208, p. 49] for details). Let t ∈ A∗∗ ⊗B ∗∗ be the tensor associated to iA θ : B ∗ → A∗∗ (recall iA : A → A∗∗ is the canonical inclusion). Then, by Kaplansky’s density Theorem A.47, there is a net (ti ) in A ⊗ B with ti min ≤ t such that ti σ (M,M∗ )-tends to t. Let θi : B ∗ → A be the finite rank map associated to ti . We have θi cb = ti min ≤ t = θ cb . Moreover, for any ξ in B ∗ , θi (ξ ) must σ (A∗∗,A∗ )-tend to θ (ξ ). But since θi (ξ ) and θ (ξ ) both lie in A, this means that θi (ξ ) tends to θ (ξ ) weakly in A. Passing to suitable convex hulls, we obtain (by Mazur’s Theorem, A.9) a net (θi ) such that, for any ξ , θi (ξ ) tends to θ (ξ ) in norm. Remark 10.25 The same argument shows that, for any von Neumann algebra R, any c.b. map θ : R∗ → A can be approximated pointwise by a net of finite rank maps θi : R∗ → A with θi cb ≤ θ cb . Remark 10.26 We will use the approximation property described in Lemma 10.24 via the following: Let A be a C ∗ -algebra. Assume that E is the dual B ∗ (resp. the predual R∗ ) of a C ∗ -algebra B (resp. von Neumann algebra R). Then for any y ∈ E ⊗ A the supremum defining the norm (y) in (6.17) can be restricted to pairs (θ,π ) where θ : E → π(A) is a complete contraction of finite rank. Moreover in case E = B ∗ , we can restrict to weak∗ -continuous


finite rank maps θi : B∗ → A. Indeed, this is an immediate application of Lemma 10.24 (resp. Remark 10.25).
The next statement shows that if E is a C∗-algebra and u a finite rank map, the "approximate factorizations" of u appearing in part (iv) of Theorem 10.14 become bona fide factorizations of u.
Theorem 10.27 ([137]) Let u : B → A be a finite rank map between two C∗-algebras. Then, for any ε > 0, there is an integer n and a factorization u = wv of the form
\[
B \xrightarrow{\ v\ } M_n \xrightarrow{\ w\ } A
\]
with (‖v‖cb ‖w‖cb ≤) ‖v‖cb ‖w‖dec ≤ ‖u‖dec (1 + ε). Therefore, if y ∈ B∗ ⊗ A is the tensor associated to u : B → A, we have ‖u‖dec = δ(y).
Proof If u = wv as in Theorem 10.27, then, by (6.7) and (6.9), we have ‖u‖dec ≤ ‖v‖dec ‖w‖dec = ‖v‖cb ‖w‖dec, hence by Remark 6.17, ‖u‖dec ≤ δ(y). We now turn to the converse. We may assume $u(x) = \sum_{j=1}^{k} \xi_j(x) a_j$ with ξj ∈ B∗ and aj ∈ A, or equivalently $y = \sum \xi_j \otimes a_j$. By the equality of δ with the norm defined in (6.17), reinforced by the preceding Remark 10.26, it suffices to show the following claim: for any representation π : A → B(H) and weak∗ continuous finite rank map θ : B∗ → π(A)′ with ‖θ‖cb ≤ 1, we have $\|\sum \theta(\xi_j)\pi(a_j)\| \le \|u\|_{dec}$. Let θ be such a map and let t ∈ π(A)′ ⊗ B be the tensor associated to it, so that ‖t‖min = ‖θ‖cb ≤ 1. Let C = π(A)′. By (6.16), since u : B → A has finite rank, we have ‖(IdC ⊗ u)(t)‖C⊗maxA ≤ ‖u‖dec ‖t‖C⊗minB = ‖u‖dec ‖θ‖cb ≤ ‖u‖dec. Since $(Id_C \otimes u)(t) = \sum \theta(\xi_j) \otimes a_j$, this yields
\[
\Big\|\sum \theta(\xi_j)\,\pi(a_j)\Big\| \le \Big\|\sum \theta(\xi_j) \otimes a_j\Big\|_{\pi(A)'\otimes_{\max}A} \le \|u\|_{dec}.
\]

This proves the claim, and hence the equality δ(y) = udec . Then, the first assertion follows from Remark 6.17 (applied with E = B). Remark 10.28 The following question seems interesting: fix ε > 0, let k be the rank of u, can we obtain the preceding factorization with n ≤ f (k,ε) for some function f ? In other words can we control n by a function f depending only on k and ε?


10.4 Notes and remarks Section 10.1 is due to Kirchberg. In the separable case, exact C ∗ -algebras are just C ∗ -subalgebras of nuclear ones, but the latter fact was proved (by Kirchberg) as the crowning achievement of a long series of his own previous works [155, 159, 160]. We present in Theorems 10.3 and 10.18 only a simpler (but quite important) step that led him to that result. The fact that (iii) in Theorem 10.3 characterizes exactness is related to what Kirchberg called locfin(A) in [159]. These ideas make sense equally well for operator spaces, but in that generality one needs to introduce a constant of exactness, that replaces the constant 1 appearing in (10.2), see [208, ch. 17]. See [162] (or [3]) for a proof of the full result, namely that a separable exact C ∗ -algebra A embeds in O2 , and that if A is nuclear the embedding A ⊂ O2 can be obtained together with a completely contractive projection onto A. In §10.2 the main points are due to Choi–Effros (Theorem 10.17) and Kirchberg (Theorem 10.18). The use of the δ-norm allows us to deduce them from results on general linear maps as in our previous book [208]. Theorem 10.27 is due to Junge and Le Merdy [137]. The latter statement explains transparently why the CPAP implies a reinforced approximation property by c.p. maps that uniformly factorize through Mn .

11 Traces and ultraproducts

11.1 Traces

We start with some preliminaries on noncommutative (or more appropriately not necessarily commutative) measure spaces. Let M be a von Neumann algebra. Let M+ denote the positive part of M. We recall that a trace on M is a map τ : M+ → [0,∞] satisfying
• (i) τ(x + y) = τ(x) + τ(y), ∀ x,y ∈ M+;
• (ii) τ(cx) = cτ(x), ∀ c ∈ [0,∞), x ∈ M+;
• (iii) τ(a∗a) = τ(aa∗), ∀ a ∈ M (this is the "tracial" condition).
τ is said to be faithful if τ(x) = 0 implies x = 0, normal if
\[
\sup_i \tau(x_i) = \tau(\sup_i x_i)
\]

(11.1)

for any bounded increasing net (xi) in M+. Note that since (xi) is bounded there is x in M+ such that, for any h in H, ⟨xi h,h⟩ ↑ ⟨xh,h⟩, which implies that xi tends to x weak* (see Remark A.11) and hence x ∈ M+. The operator x being obviously the least upper bound of (xi), it is natural to denote it by supi xi.
By considering the net of finite partial sums Σi∈γ Pi (γ ⊂ I), we see that (11.1) implies that
\[
\tau\Big(\sum\nolimits_{i\in I} P_i\Big) = \sum\nolimits_{i\in I} \tau(P_i)
\]
for any mutually orthogonal family of (self-adjoint) projections (Pi)i∈I in M, which is analogous to the σ-additivity of measures.
When dealing with von Neumann algebras, it is customary to refer to self-adjoint projections simply as "projections." Since the confusion this abuse may create is very unlikely, we adopt this convention: in a von Neumann algebra, the term projection will always mean a self-adjoint one.
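In detail, the σ-additivity displayed above is a one-line consequence of (11.1): the partial sums Sγ = Σi∈γ Pi (γ ⊂ I finite) form a bounded increasing net of positive elements with supγ Sγ = Σi∈I Pi, so
\[
\tau\Big(\sum\nolimits_{i\in I} P_i\Big) = \sup_\gamma \tau(S_\gamma) = \sup_\gamma \sum\nolimits_{i\in\gamma} \tau(P_i) = \sum\nolimits_{i\in I} \tau(P_i).
\]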


The trace τ is called semifinite if for any nonzero x ∈ M+ there is y ∈ M+ , such that 0 = y ≤ x and τ (y) < ∞, and finite if τ (1) < ∞ (1 denoting the identity of M). In the finite case, τ (x) < ∞ for any x ≥ 0. Remark 11.1 If τ is semifinite there is a family of mutually orthogonal  projections (pi )i∈I in M with τ (pi ) < ∞ for all i ∈ I such that i∈I pi = 1. Indeed, let (pi )i∈I be a maximal such family (except for the latter condition).  If r = 1 − i∈I pi = 0 there is y ∈ M nonzero such that 0 ≤ y ≤ r with τ (y) < ∞. By spectral theory we have y ≥ λq for some λ > 0 and some projection 0 = q ∈ M. Then q contradicts the maximality of (pi )i∈I . Therefore r = 0. A von Neumann algebra M is called finite if the family formed of the finite normal traces separates the points of M. Clearly this happens if M admits a single faithful normal finite trace, but a finite M may fail to have any faithful finite trace, for instance M = ∞ () with  uncountable. However, on a separable Hilbert space (i.e. if M is weak*-separable) the converse is also true; then M is finite if and only if it admits a faithful normal finite trace. The definition of semifiniteness is simpler: A von Neumann algebra M is called semifinite if M admits a faithful normal semifinite trace. For example, B(H ) is semifinite for any H . The trace τ : B(H )+ → [0,∞]  ei ,T (ei ) where is the usual one defined for T ∈ B(H )+ by τ (T ) = (ei ) is any orthonormal basis of H . The abundance of finite rank (and hence finite trace) operators guarantees semifiniteness. B(H ) is finite if and only if dim(H ) < ∞. Remark 11.2 (Classification of von Neumann algebras in types I, II, III) It is traditional to classify von Neumann algebras in three main classes called type I, type II, and type III. A von Neumann algebra is called of type I if  ¯ it is isomorphic to a direct sum of the form (⊕ i∈I Ci ⊗B(H i ))∞ where (Hi )i∈I is a family of (mutually nonisomorphic) Hilbert spaces and (Ci )i∈I a family of commutative von Neumann algebras. It is easy to check that these are semifinite. A von Neumann algebra M is called of type II if it is semifinite and if there is no nonzero projection p for which pMp is commutative (equivalently no p = 0 for which pMp is type I). Any semifinite M can be decomposed as M = MI ⊕ MII with MI (resp. MII ) of type I (resp. II). Then M is called of type III if there is no nonzero projection p ∈ M for which pMp is semifinite. Any von Neumann algebra M can be decomposed as M = MI ⊕ MII ⊕ MIII with MI (resp. MII , rresp MIII ) of type I (resp. II, rresp III).


See e.g. [240, ch. V] for a complete discussion. In these notes we will be mainly concerned by finite or semifinite von Neumann algebras, (i.e. algebras of type I or II). However, in several instances we will have to consider the case of type III von Neumann algebras. Then an important structure theorem due to Takesaki will come to our rescue. We will use it as a “black box” and refer for the proof to Takesaki’s book [242, th. XII.1.1 p. 364 and th. X.2.3] or to the original paper [239, th. 4.5 and §8]. See also [146, §13.3]. A very concise description can be found in [256]. See also [111] for useful information on the same theme. Theorem 11.3 (Takesaki’s duality theorem) Let M be a von Neumann algebra. There is a semifinite von Neumann algebra M with the following two properties: (i) There is an embedding M ⊂ M of M as a von Neumann subalgebra and a c.p. projection P : M → M with P  = 1. (ii) There is an embedding M ⊂ M of M as a von Neumann subalgebra and a c.p. projection Q : M → M with Q = 1. In its usual form, Takesaki’s duality theorem asserts that Theorem 11.3 holds for any von Neumann algebra of type III. Schematically, in the latter case M is the crossed product of M with a one parameter automorphism group called the modular group. The algebra M appears as the fixed point algebra with respect to a certain one parameter (dual) automorphism group acting on M. Then a well-known averaging argument based on the amenability of R (the parameter group) shows that there is a contractive projection P from M to M. To obtain (ii) a similar construction is applied to M (whence Q by the same averaging argument) but the resulting “double” crossed product is now isomorphic to M, so we obtain in this way the type III case. Since the semifinite case is trivial, one can use the decomposition M = MI ⊕ MII ⊕ MIII and apply Takesaki’s duality theorem for the type III case to produce a semifinite MIII associated to MIII as in Theorem 11.3. Then we obtain the general case by simply setting M = MI ⊕ MII ⊕ MIII . Remark 11.4 In general the projection appearing in (i) or (ii) is not normal (see [248]), reflecting the fact that an invariant mean is not a measure. However, just like invariant means are pointwise limits of true measures there is a net of normal contractive c.p. maps Pi : M → M that tend pointwise to P for the weak* topology on M, and similarly for Q (see [256, p. 45] for more details). Remark 11.5 (Lp -spaces for semifinite traces) Let τ be a semifinite faithful normal trace on a von Neumann algebra M. Then A = {x ∈ M | τ (|x|) < ∞} is a weak* dense ∗-subalgebra of M. When τ is finite, of course A = M. For


any 1 ≤ p < ∞ one usually defines the space Lp (τ ) as the completion of A equipped with the norm defined for any x ∈ A by xp = (τ (|x|p ))1/p . For example, when M = B(H ) with the usual trace the space L2 (τ ) (resp. L1 (τ )) can be identified with the Hilbert–Schmidt class (resp. the trace class). When 1 ≤ p < ∞ we obtain the so-called Schatten p-class. By Remark 11.1 in any semifinite M there is a directed increasing net (pγ )  of projections with finite traces tending weak* to 1 (just take pγ = i∈γ pi for any finite subset γ ⊂ I ). Let Mγ = pγ Mpγ and let τγ denote the restriction of τ to Mγ , which is clearly a finite faithful normal trace on Mγ . For any γ ≤ δ in the net we have Mγ ⊂ Mδ and there is a natural isometric embedding Lp (Mγ ,τγ ) ⊂ Lp (Mδ ,τδ ). Then it is not hard to check that the space Lp (τ ) that we just defined can be identified with the completion of the union ∪γ Lp (Mγ ,τγ ) with its natural norm. In this way the construction of the spaces Lp (τ ) can be reduced to the finite trace case. Since we will be using the semifinite case only much later in Chapter 22 we prefer to concentrate for now on the slightly less technical case of finite traces with p = 2 or p = 1.

11.2 Tracial probability spaces and the space L1 (τ ) In general, a functional f on an algebra A is called tracial if f (xy) = f (yx) for any x,y ∈ A. Definition 11.6 By a “tracial probability space,” we mean a von Neumann algebra M equipped with a trace τ : M+ → R+ that is normalized (i.e. such that τ (1) = 1) faithful and normal. Then (see Remark 11.7) τ then extends to a normal (tracial) state on M, so that ∀x,y ∈ M τ (xy) = τ (yx). Remark 11.7 Indeed, let Msa denote the set of self-adjoint elements in M. Since any x in Msa can be written x = x1 − x2 with x1,x2 ∈ M+ , τ uniquely extends to an R-linear form on Msa by setting τ (x) = τ (x1 )−τ (x2 ). Since τ is additive on M+ , this definition is unambiguous. Note that since τ is nonnegative on M+ , this extension of τ preserves order, i.e. x ≤ y implies τ (x) ≤ τ (y). Then, by complexification, τ uniquely extends to a positive C-linear form (and hence a state) on M. The latter state is normal by Theorem A.44. Moreover, by polarization, the tracial property of τ on M+ implies τ (y ∗ x) = τ (xy∗ ) for any x,y ∈ M, or equivalently (replace y by y ∗ ) τ is tracial.


If M is commutative, M can be identified with L∞(Ω,𝒜,μ) for some abstract probability space (Ω,𝒜,μ); then projections are indicator functions of sets in 𝒜 and τ corresponds to μ on (Ω,𝒜), i.e. we have for all f in L∞(Ω,𝒜,μ)
\[
\tau(f) = \int f(\omega)\, d\mu(\omega).
\]

For this reason, (M,τ ) is usually called a “noncommutative” probability space, but since we want to include the commutative case, we prefer the term tracial probability space. The simplest example is of course the algebra Mn of all n × n matrices equipped with the usual normalized trace, τ (x) = n−1 tr(x). As we will see in the next section, infinite tensor products and ultraproducts of matrix algebras of varying sizes give us examples of tracial probability spaces. Any discrete group G also provides us with another fundamental example namely (MG,τG ) (see §3.5). Let (M,τ ) be a tracial probability space, and let M∗ be the predual of M. By definition (see §A.16), this is the subspace of M ∗ formed of all the normal (i.e. weak*-continuous) elements of M ∗ . It is well known that M∗ can be identified with the (“noncommutative”) L1 -space associated to (M,τ ). Let us now explain how the latter is defined and how this identification works. For any x in M, we set x1 = τ (|x|). Proposition 11.8 For all x in M we have |τ (x)| ≤ τ (|x|). More generally τ (|x|) = sup{|τ (xy)| | y ∈ M y ≤ 1} = sup{|τ (yx)| | y ∈ M y ≤ 1}. (11.2) In particular, this shows that  1 is a norm on M. Proof Recall that as any positive functional, τ satisfies the Cauchy–Schwarz inequality, that is |τ (ba)| ≤ τ (bb∗ )τ (a ∗ a) for all a,b ∈ M. Assume y ≤ 1. Let x = u|x| be the polar decomposition of x. Let a = u|x|1/2 and b = |x|1/2 y. Note a ∗ a = |x| and bb∗ ≤ |x|. By Cauchy–Schwarz we have |τ (xy)| = |τ (ab)| = |τ (ba)| ≤ (τ (a ∗ a)τ (bb∗ ))1/2 ≤ τ (|x|). In particular, taking y = 1, we obtain |τ (x)| ≤ τ (|x|). Finally, |x| = u∗ x yields the first equality in (11.2). Since τ is tracial, the second one is clear. The space L1 (M,τ ) or simply L1 (τ ) is defined as the completion of M with respect to the norm  1 . Note that, by definition, we have an inclusion with dense range


M ⊂ L1(M,τ).

For any x ∈ M, let fx ∈ M ∗ be the linear form defined on M by fx (y) = τ (xy).

(11.3)

Clearly, fx is normal (indeed, since τ is normal, this follows from Remark A.37. Thus fx ∈ M∗ . Remark 11.9 Let  ⊂ M∗ be the linear subspace defined by  = {fx | x ∈ M}. We claim that  is dense in M∗ . To see this it suffices to show that it is σ (M∗,M)-dense, or equivalently that any ϕ ∈ (M∗ )∗ = M that vanishes on  must vanish on the whole of M∗ . Equivalently, we must show that if y ∈ M (corresponding to ϕ) is such that fx (y) = 0 for any x ∈ M then y = 0. Indeed this is clear: the choice of x = y ∗ gives us τ (y ∗ y) = 0, and hence y = 0. Remark 11.10 More generally, let V ⊂ M be any σ (M,M∗ )-dense linear subspace. Then the same argument as for Remark 11.9 shows that V is dense in L1 (τ ). Theorem 11.11 The map that takes x ∈ M to fx extends to an isometric isomorphism from L1 (τ ) onto M∗ . In short we have L1 (τ )  M∗

(isometrically).

Proof By (11.2), fx M∗ = τ (|x|) = x1 . Therefore, since  is dense in M∗ , the latter can be viewed as the completion of (,.1 ). Consequently, since M = (M∗ )∗ , we also have L1 (τ )∗  M

(isometrically),

or more explicitly for the record: Corollary 11.12 The map that takes y ∈ M to the functional ϕy ∈ L1 (τ )∗ defined by ϕy (x) = τ (xy) is an isometric isomorphism from M onto L1 (τ )∗ . Thus we have ∀y ∈ M

y = sup{|τ (xy)| | x ∈ M,τ (|x|) ≤ 1}.

(11.4)
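For the simplest example (Mn, τ) with τ = n−1 tr (mentioned earlier in this section), the identifications above reduce to the familiar duality between the trace class and the operator norm; concretely, (11.2) and (11.4) read
\[
\tau(|x|) = \sup\{|\tau(xy)| : y \in M_n,\ \|y\| \le 1\}, \qquad \|y\| = \sup\{|\tau(xy)| : x \in M_n,\ \tau(|x|) \le 1\},
\]
so that L1(τ) is Mn equipped with the norm x ↦ n−1 tr|x| and L1(τ)∗ ≃ Mn isometrically, as in Corollary 11.12.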

11.3 The space L2(τ)

Let (M,τ) be a tracial probability space. We will denote by L2(τ) the Hilbert space associated to (M,τ) by the GNS construction (see §A.13). More precisely, the space L2(τ) is the completion of M with respect to the norm x ↦ ‖x‖2 defined on M by


‖x‖2 = (τ(|x|2))1/2, or equivalently ‖x‖2 = (τ(x∗x))1/2. Note that the traciality of τ ensures that ‖x‖2 = (τ(x∗x))1/2 = (τ(xx∗))1/2 = ‖x∗‖2. The latter norm is derived from the scalar product
\[
\langle y,x\rangle = \tau(y^*x)
\]

(x,y ∈ M),

(11.5)

which, as already mentioned, satisfies the Cauchy–Schwarz inequality
\[
|\langle y,x\rangle| \le \tau(x^*x)^{1/2}\,\tau(y^*y)^{1/2} = \|x\|_2\,\|y\|_2.
\]

(11.6)
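To spell out how (11.6) yields the triangle inequality for ‖·‖2 asserted just below: for any x,y ∈ M,
\[
\|x+y\|_2^2 = \tau\big((x+y)^*(x+y)\big) = \|x\|_2^2 + 2\,\mathrm{Re}\,\langle x,y\rangle + \|y\|_2^2 \le \big(\|x\|_2 + \|y\|_2\big)^2 .
\]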

Note that this shows in particular that x ↦ ‖x‖2 is subadditive, and since τ is assumed faithful (i.e. such that τ(x∗x) = 0 ⇔ x = 0), we see that ‖·‖2 is indeed a norm on M.
Remark 11.13 Let (M,τ) be a tracial probability space. If x ∈ BM is such that τ(x∗x) = 1 then x is unitary (and conversely of course). Indeed, τ(1 − x∗x) = 0 and 1 − x∗x ≥ 0 imply x∗x = 1 by the faithfulness of τ. Similarly xx∗ = 1 so that x ∈ U(M).
Remark 11.14 Recall that the "modulus" we constantly use is defined for all x ∈ M by |x| = (x∗x)1/2. It is worthwhile to emphasize that the triangle inequality |x + y| ≤ |x| + |y| is false for this modulus. Here is an example: let
\[
x = \begin{pmatrix} 0 & 1\\ 0 & 1\end{pmatrix}.
\]
Then x − x∗ is unitary, so |x − x∗| = I. However, a simple calculation shows that
\[
|x| = 2^{1/2}\begin{pmatrix} 0 & 0\\ 0 & 1\end{pmatrix} \quad\text{and}\quad |x^*| = 2^{-1/2}\begin{pmatrix} 1 & 1\\ 1 & 1\end{pmatrix},
\]
so that ⟨(|x| + |x∗|)e1,e1⟩ = 2−1/2 while ⟨|x − x∗|e1,e1⟩ = 1. Therefore |x − x∗| ≰ |x| + |x∗|. Note however the following useful substitute from [1]: for any pair x,y in M there are isometries U,V in M such that |x + y| ≤ U|x|U∗ + V|y|V∗.
From the GNS construction, we recall that for any a,x in M we have a∗a ≤ ‖a‖2 1 and hence x∗a∗ax ≤ ‖a‖2 x∗x, which implies ‖ax‖2 ≤ ‖a‖ ‖x‖2. Therefore, the left multiplication by a extends by density to a bounded operator

L(a) : L2(τ) → L2(τ) such that ‖L(a)‖ ≤ ‖a‖ and L : M → B(L2(τ)) is a ∗-homomorphism. Let j : M → L2(τ) be the inclusion map. Note that j has dense range by definition of L2(τ) and the unit vector ξ = j(1) is cyclic for L, i.e. L(M)ξ is dense in L2(τ) (and L(M)ξ = j(M)). Note that ∀a ∈ M
\[
\tau(a) = \langle 1,a\rangle = \langle \xi, L(a)\xi\rangle.
\]

(11.7)
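Concretely, on the dense subspace j(M) ⊂ L2(τ) the operator L(a) is simply left multiplication, and (11.7) is the case x = y = 1 of the following identities, which are immediate from (11.5):
\[
L(a)\,j(x) = j(ax), \qquad \langle j(y), L(a)\,j(x)\rangle = \tau(y^*ax) \qquad (a,x,y \in M).
\]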

Up to now we did not make use of the trace property. We will now use it to show that right multiplications are also bounded on L2 (τ ): For any a,x in L2 (τ ) we have


τ ((xa)∗ (xa)) = τ ((xa)(xa)∗ ) = τ (xaa∗ x ∗ ) ≤ a2 τ (xx∗ ) = a2 τ (x ∗ x), and hence xa2 ≤ ax2 . Therefore the right multiplication by a extends to a bounded operator on L2 (τ ) denoted by R(a) : L2 (τ ) → L2 (τ ). The mapping R : M op → B(L2 (τ )) is a ∗-homomorphism on the opposite algebra M op , i.e. the same ∗-algebra but with the “opposite” product (defined by a · b = ba). In fact, what precedes holds more generally for the GNS construction associated to any tracial state on a C ∗ -algebra. But since τ is faithful and normal, L and R are isometric normal ∗-homomorphisms (see §A.13 and Remark A.42), which embed M (resp. M op ) as a von Neumann subalgebra of B(L2 (τ )). The fact that L(M) (resp. R(M op )) is a von Neumann subalgebra of B(L2 (τ )) can be deduced e.g. from Kaplansky’s density Theorem A.47 (because BL(M) = L(BM ) is weak* compact and similarly for R). The uniqueness of the predual then guarantees that L (resp. R) is a weak*homeomorphism from M to L(M) (resp. R(M op )). Remark 11.15 Consider x ∈ M. Then x ∈ BM if and only if ∀a,b ∈ M

|τ (a ∗ xb)| ≤ a2 b2 .

Indeed, this boils down to x = L(x) (or x = R(x)). Moreover, for any x ∈ M x ≥ 0 ⇔ τ (a ∗ xa) ≥ 0 ∀a ∈ M,

(11.8)

x ≥ 0 ⇔ τ (yx) ≥ 0 ∀y ∈ M+ .

(11.9)

or equivalently

Indeed, (11.8) is the same as a,L(x)a ≥ 0 for any a ∈ L2 (τ ). Another way to see that L(M) (resp. R(M op )) is a von Neumann subalgebra of B(L2 (τ )) is via the next statement: Proposition 11.16 We have L(M) = R(M) and R(M) = L(M) , and hence L(M) = L(M) and R(M op ) = R(M op ) . Proof Let T ∈ R(M) . Assume T  = 1 for simplicity. For any a,b ∈ M we have |b,T (a)| ≤ a2 b2 and also T (a) = T (R(a)1) = R(a)T (1). Therefore |ba∗,T (1)| = |R(a ∗ )b,T (1)| = |b,R(a)T (1)| ≤ a2 b2 . Let ϕ ∈ M ∗ be the linear form defined by ϕ(x) = x ∗,T (1). Using the polar decomposition x = U |x| = ab∗ with a = U |x|1/2 and b = |x|1/2 , we find |ϕ(x)| ≤ τ (|x|) for any x ∈ M. By Corollary 11.12 (and the density of M in L1 (τ )) there is y ∈ BM such that ϕ(x) = ϕy (x) for any x ∈ M, and hence


(since M is dense in L2 (τ )) T (1) = y in L2 (τ ). Thus T (1) ∈ M so that we may write T (a) = R(a)T (1) = T (1)a = L(T (1))a for any a ∈ M, so that T = L(T (1)) and we conclude R(M) ⊂ L(M). The converse is obvious, and similarly for L(M) . Remark 11.17 We will use the following classical fact about the inclusion M ⊂ L2 (τ ): the weak*-closure (i.e. σ (M,M∗ )-closure) of a bounded convex subset of M coincides with its closure for the topology induced on M by the norm of L2 (τ ). Indeed, since L2 (τ ) is dense in M∗ , the σ (M,M∗ )-closure of a bounded subset of M coincides with its closure in the weak topology of L2 (τ ), and by (Mazur’s) Theorem A.9 for a convex set this is the same as the norm closure in L2 (τ ). Remark 11.18 In a tracial probability space (M,τ ) certain inequalities are immediate consequences of the usual probabilistic ones. For instance for any x ∈ M and any 0 < p < q < ∞ we have for any x ∈ M (τ (|x|p ))1/p ≤ (τ (|x|q ))1/q and lim (τ (|x|p ))1/p = xM .

(11.10)

p→∞

Indeed, let N ⊂ M be the commutative von Neumann subalgebra generated by |x|. Since we may identify (N,τ|N ) with (L∞ ( ,A ,μ),μ) for some abstract probability space ( ,A ,μ) (that of course depends on |x|), the preceding assertion reduces to the fact that for any f ∈ L∞ ( ,A ,μ) we have f p ≤ f q and limp→∞ f p = f ∞ . Proposition 11.19 Let (M,τ ) and (N,ϕ) be two tracial probability spaces. Let T : L2 (τ ) → L2 (ϕ) be a trace preserving isometry. Assume that there is a weak*-dense unital ∗-subalgebra A ⊂ M such that T (A) ⊂ N and T|A : A → N is a unital ∗-homomorphism. Then, when restricted to M, T defines a normal (injective trace preserving) ∗-homomorphism embedding M into N (as a von Neumann subalgebra). Proof We claim that T (a) = a for any a in A. Indeed, since (T (a)∗ T (a))m = T ((a ∗ a)m ) and ϕ(T (x)) = τ (x) (a,x ∈ A), we have by Remark 11.18 1

1

T (a) = lim ↑ (ϕ((T (a)∗ T (a))m )) 2m = lim ↑ (τ ((a ∗ a)m )) 2m = a. m→∞

m→∞

Let A1 = T (A) ⊂ N. Then T defines an isometric isomorphism from (A, ) to (A1, ). We may assume without loss of generality that A1 generates N . Thus, by Remark 11.17 and by Kaplanky’s classical density theorem, we L (τ ) L (τ ) have BM = A ∩ BM 2 and similarly BN = A1 ∩ BN 2 . Since T is an isometry on L2 (τ ) this clearly implies that T (BM ) = BN . Moreover, since the product and the ∗-operation are continuous on (BM , L2 (τ ) ), T : M → N

234

Traces and ultraproducts

is a ∗-homomorphism, which has kernel = {0} and is necessarily surjective (since T (BM ) = BN ). Lastly, since T ∗ (L2 (τ )) ⊂ L2 (τ ), we have T ∗ (N∗ ) ⊂ M∗ , so T is normal. Actually T : M → N , being a ∗-isomorphism between von Neuman algebras, is necessarily bicontinuous for the topologies σ (M,M∗ ),σ (N,N∗ ) (see Remark A.38). Remark 11.20 Here is a typical application of Proposition 11.19. Let (M,τ ) and (N,ϕ) be two tracial probability spaces. Fix n ≥ 1. Let (x1, . . . ,xn ) ∈ M n and (y1, . . . ,yn ) ∈ N n be n-tuples that have “the same ∗-distribution” in the sense that for any polynomial (or equivalently for any monomial) P (X1, . . . ,Xn,X1∗, . . . ,Xn∗ ) in noncommuting variables (X1, . . . ,Xn ) and their adjoints we have τ (P (x1, . . . ,xn,x1∗, . . . ,xn∗ )) = ϕ(P (y1, . . . ,yn,y1∗, . . . ,yn∗ )). Then the correspondence T : P (x1, . . . ,xn,x1∗, . . . ,xn∗ ) → P (y1, . . . ,yn,y1∗, . . . ,yn∗ ) extends to a trace preserving ∗-isomorphism between the (unital) von Neumann subalgebras Mx and Ny generated respectively by (x1, . . . ,xn ) and (y1, . . . ,yn ). Indeed, T clearly extends to an isometry from L2 (Mx ,τ|Mx ) onto L2 (Ny ,ϕ|Ny ), and Proposition 11.19 can be applied to the unital ∗-algebra Ax generated by (x1, . . . ,xn ) in Mx . More generally, the same can be applied for arbitrary families (xi )i∈I and (yi )i∈I . In that case we say they have the same ∗-distribution if it is the case in the preceding sense for (xi )i∈I  and (yi )i∈I  for any finite subset I  ⊂ I . Then there is a trace preserving isomorphism between the von Neumann algebras generated by (xi )i∈I and (yi )i∈I . We will invoke several times the following well-known fact. It may be worthwhile to emphasize that this is special to the finite trace case. In general, for instance when M = B(H ) and N ⊂ M there may not exist any bounded projection onto N . Proposition 11.21 (Conditional expectations) Let (M,τ ) be a tracial probability space. Then for any von Neumann subalgebra N ⊂ M, there is a normal c.p. projection P : M → N with P  = P cb = 1, such that P (axb) = aP(x)b for any x ∈ M and a,b ∈ N . Proof By definition L2 (N,τ|N ) can be naturally identified to a subspace of L2 (M,τ ), namely the closure N of N in L2 (M,τ ). Let P be the orthogonal projection from L2 (M,τ ) to N  L2 (N,τ|N ). Fix x ∈ L2 (M,τ ). Just like in the classical commutative case, P (x) is the unique x  ∈ L2 (N,τ|N ) such that x ,y = x,y for any y ∈ N . Then since ayb ∈ N for any a,b ∈ N , it

11.4 An example from free probability

235

follows that P (axb) = aP(x)b. By Remark 11.15, if x ∈ M (resp. x ∈ M+ ) then P (x) ∈ N (resp. P (x) ∈ N+ ) and P (x) ≤ x. Of course P (x) = x if x ∈ N . Moreover, for any a,b ∈ N the form x → a,P (x)b = τ (a ∗ xb) is normal (since τ is normal) and hence P is normal by Remark A.42. Thus P is a normal positive projection from M onto N with P  = 1. Its complete positivity is automatic by Theorem 1.45. But in any case, if we repeat the argument with N ⊂ M replaced by Mn (N ) ⊂ Mn (M) with τ replaced by  τn ([xij ]) = τ (xjj ) we obtain that P is c.p. Remark 11.22 (A recapitulation) We have natural inclusions M ⊂ L2 (τ ) and M ⊂ L1 (τ ), and since x1 ≤ x2 for any x in M, we also have a natural inclusion L2 (τ ) ⊂ L1 (τ ) with norm = 1. Let j : M → L2 (τ ) denote the natural inclusion. Then the (Banach space sense) adjoint j ∗ : L2 (τ )∗ → M ∗ actually takes values into M∗ . Indeed, using the canonical identification L2 (τ )∗  L2 (τ ) we find that j ∗ takes x¯ ∈ M ⊂ L2 (τ )  L2 (τ )∗ to the linear form fx ∗ where fx is as in (11.3), which is in M∗ . Then it is easy to verify that the composition T = j ∗ j : M → L2 (τ )  L2 (τ )∗ → M∗ is the antilinear map that takes x ∈ M to fx ∗ ∈ M∗ , or more rigorously the C-linear map that takes x ∈ M to fx ∗ ∈ M∗ . Recall the canonical identification M  M op that takes x¯ ∈ M to x ∗ . Note M∗  (M op )∗ . Using this we find that T can be identified to the mapping T0 : M → (M op )∗ defined by T0 (x) = fx .

11.4 An example from free probability: semicircular and circular systems In Voiculescu’s free probability theory, stochastic independence of random variables is replaced by freeness of noncommutative analogues of random variables. We refer the reader to [254] for an account of this beautiful theory. Here we only want to introduce the basic tracial probability space that is the free analogue of Rn equipped with the standard Gaussian measure. The free analogue of a family of standard independent real (resp. complex) Gaussian variables is a free semicircular (resp. circular) family. They satisfy a similar distributional invariance under the orthogonal (resp. unitary) group. Such families generate a tracial probability space that can be realized on the “full” Fock space, as follows. Let H be a (complex) Hilbert space. We denote by F(H )(or simply by F) the “full” Fock space associated to H , that is to say we set H0 = C, Hn = H ⊗n (Hilbertian tensor product) and finally

236

Traces and ultraproducts F = ⊕n≥0 Hn .

We consider from now on Hn as a subspace of F. For every h ∈ F, we denote by (h) : F → F the operator defined by: (h)x = h ⊗ x. More precisely, if x = λ1 ∈ H0 = C1, we have (h)x = λh and if x = x1 ⊗ x2 · · · ⊗ xn ∈ Hn we have (h)x = h ⊗ x1 ⊗ x2 · · · ⊗ xn . We will denote by the unit element in H0 = C1. The von Neumann algebra B(F) is equipped with the vector state ϕ defined by ϕ(T ) =  ,T , called the vacuum state. Let (es )s∈S be an orthonormal basis of H . Since ϕ((h)(h)∗ ) = 0 and ϕ((h)∗ (h)) = h2 , we see that ϕ is not tracial on B(F). However, it can be checked (a possible exercise for the reader) that it is tracial when restricted to the von Neumann algebra M generated by the operators (es )+(es )∗ (s ∈ S), i.e. we have ϕ(xy) = ϕ(yx) for all x,y in this ∗-subalgebra. The pair (M,ϕ) is an example of a tracial probability space. Let Ws = (es ) + (es )∗ .

(11.11)

Then the family (Ws )s∈S is the prototypical example of a free semicircular system, sometimes also called a “free-Gaussian” family. The term semicircular is used for any family with the same joint moments as (Ws )s∈S . Such a family enjoys properties very much analogous to those of a standard independent Gaussian family. Indeed, let [aij ] (i,j ∈ S) be an “orthogonal matrix,” by which we mean that aij ∈ R (i,j ∈ S) and that the associated R-linear mapping is an  isometric isomorphism on 2 (S). Then if we let Wia = j aij Wj , we have ϕ(P (Wia )) = ϕ(P (Wi ))

(11.12)

for any polynomial P in noncommuting variables (Xi )i∈S . In other words the families (Ws )s∈S and (Wsa )s∈S have the same joint moments. This is analogous to the rotational invariance of the usual Gaussian distributions on RS when (say) S is finite. In particular, for any fixed s 0 ∈ S the variables Ws 0 and Wsa0 have the same moments, so that ϕ((Wsa0 )2m ) = ϕ((Ws 0 )2m ) for any m ≥ 0, and letting m → ∞ after taking the 2mth root we find Wsa0  = Ws 0 . Since we can adjust a so that its s 0 th row matches any element (αs )s∈S in the unit sphere of 2 (S,R) we obtain

11.4 An example from free probability

∀(αs )s∈S ∈ 2 (S,R)

  

s∈S

237

  1/2  αs2 αs Ws  = cR ,

where cR is the common value of Ws 0 . Thus, the real Banach space R-linearly generated by (Ws )s∈I is isometric to a real Hilbert space. By (11.11) we have cR ≤ 2, and with a little more work (see e.g. [254]) one shows that cR = 2. For any a ∈ BB(H ) , let F (a) ∈ BB(F (H )) denote the linear operator that fixes n the vacuum vector and acts like a ⊗ on H n (so called “second quantization” of a). To check (11.12) the simplest way is to observe that for all h ∈ H and all a ∈ B(H ) we have F (a)(h) = (ah)F (a). If a is unitary then F (a) is unitary and we have F (a)(h)F (a)∗ = (ah) and F (a)(h)∗ F (a)∗ = (ah)∗ . Therefore, F (a)((h) + (h)∗ )F (a)∗ = (ah) + (ah)∗, and hence when the coefficients aij are all real and t a denotes the transpose of a we find F (t a)Ws F (t a)∗ = Wsa , and hence for any polynomial P F (t a)P (Ws )F (t a)∗ = P (Wsa ). Since all the F (a)’s and their adjoints preserve , (11.12) follows. Using the basic ideas of spectral theory and free probability one can show that if |S| = n (resp. S = N) then there is a trace preserving isomorphism from (M,ϕ) to (MFn ,τFn ) (resp. (MF∞ ,τF∞ )). We now pass to the complex case. We need to assume that the index set is  = H ⊕H partitioned into two copies of the same set S, so we replace H by H and assume given a (partitioned) orthonormal basis {es | s ∈ S} ∪ {fs | s ∈ S} )). Let M  ⊂ B(F(H )) s = (es ) + (fs )∗ ∈ B(F(H of H. We then define W s )s∈S . Let  ϕ be the vacuum state be the von Neumann algebra generated by (W )). Then again (M,  ϕ ) is a tracial probability space and the family on B(F(H  (Ws )s∈S is the prototypical example of a so-called free circular system. It is the free analogue of an i.i.d. family of complex valued Gaussian variables with s )s∈S is covariance equal to the 2×2 identity matrix. As the latter, the family (W unitarily invariant. Indeed, let [aij ] (i,j ∈ S) be the matrix of a unitary operator   a = j aij W j , we have on 2 (S). Then if we let W i   

ia , W i , W ia ∗ = ϕ P W i∗ ϕ P W (11.13)

238

Traces and ultraproducts

for any polynomial P (Xi ,Xi∗ ) in noncommuting variables (Xi )i∈S and s , W s∗ ) and (W sa , W sa ∗ ) have the same (Xi∗ )i∈S . In other words the families (W joint moments. This is analogous to the unitary invariance of the usual standard Gaussian measure on CS when S is finite. The proof of (11.13) is similar to ). that of (11.12) (hint: consider F (α) for α = a ⊕ a¯ acting on H As before, we have    1/2  s  αs W , |αs |2 ∀(αs )s∈S ∈ 2 (S,C)   = cC s∈S

i , and it can be shown (see [254]) that where cC is the common value of W cC = 2.

11.5 Ultraproducts Let {(M(i),τi ) | i ∈ I } be a family of tracial probability spaces. Let U be an ultrafilter on the index set I (see Remark A.6). Then for any bounded family of numbers (xi )i∈I , the limit along U is well defined. We denote it by lim U xi . Let

  B= x∈

i∈I

 M(i) | supi∈I x(i)M(i) < ∞ ,

equipped with the norm xB = supi∈I x(i)M(i) . As already mentioned, we adopt in these notes the notation    B= ⊕ M(i) . i∈I



(11.14)

We define a functional fU ∈ B ∗ by setting for all t = (ti )i∈I in B fU (t) = limU τi (ti ). Clearly fU is a tracial state on B. Let HU be the Hilbert space associated to the tracial state fU in the GNS construction applied to B. We denote by L : B → B(HU )

and

R : B op → B(HU )

the representations of B corresponding to left and right multiplication by an element of B. (We recall that B op is the same C ∗ -algebra as B but with reverse multiplication.) More precisely, let pU (x) = limU τi (xi∗ xi )1/2 .

11.5 Ultraproducts

239

Clearly pU is a Hilbertian seminorm on B. Let ! " IU = ker(pU ) = x ∈ B | limU τi (xi∗ xi ) = 0 . Then IU is a closed two-sided ideal, and HU is defined as the completion of B/IU equipped with the Hilbertian norm associated to pU . For any t = (ti )i in B we denote by t˙ the equivalence class of t in B/IU .  ˙

 ˙

Then L and R are defined by L(x)t˙ = xt and R(x)t˙ = tx. Clearly since fU is tracial, these are contractive representations of B and B op on HU . Let us record the following simple fact (see §A.4 for background on ultrafilter limits). Lemma 11.23 For any t = (ti )i in B we have t˙HU = limU ti L2 (τi ) . Moreover, for any ε > 0 there is s = (si )i in B such that s − t ∈ IU and supi∈I si L2 (τi ) < t˙HU + ε = ˙s HU + ε. Proof It is immediate that for any t,s ∈ B such that t − s ∈ IU we have limU ti L2 (τi ) = limU si L2 (τi ) . By definition of the limit  = limU ti L2 (τi ) for any ε > 0 the set ! " Jε = i ∈ I | |ti L2 (τi ) − | < ε is such that limU 1Jε = 1 and limU 1I \Jε = 0. See §A.4 for details if necessary. Thus we may simply set si = ti for any i ∈ Jε and si = 0 otherwise. Then we have limU ti − si L2 (τi ) ≤ c limU 1I \Jε = 0, where c = supi∈I ti − si L2 (τi ) . Actually, it turns out that for many considerations the Hilbert space HU is “too small.” We need to embed it in the larger Hilbert space HU that is the ultraproduct of the family (Hi ) defined by Hi = L2 (τi ), as defined in §A.5. We claim that we have a natural isometric inclusion jU : HU → HU .

 Indeed, to any x ∈ B we can associate (xi ) in the space X = ⊕ i∈I Hi ∞ and we have by definition (see §A.5) ˙ = limU xi L2 (τi ) = (xi )U HU . pU (x) ˙ = (xi )U extends to an isometric (linear Thus the correspondence x˙ → jU (x) of course) embedding jU : HU → HU . Remark 11.24 For any t = (ti )i in B, the mapping (xi ) → (ti xi ) is clearly in B(X) and takes ker(pU ) to itself. Thus it defines a mapping

240

Traces and ultraproducts

π (t) : HU → HU . The mapping t → π (t) is clearly a ∗-homomorphism from B/IU to B(HU ). It is easy to check that if we view HU as embedded in HU via jU , then HU ⊂ HU is an invariant subspace under π (t) and we have L(t) = π (t)|HU . See Remarks 11.28 and 11.29 for a description of HU as the subset of HU formed of the elements admitting a “uniformly square integrable” representative. The reader who is already familiar with the latter can skip the next lemma, which is but a pedestrian reformulation of the same fact. We will invoke the following simple observation. Lemma 11.25 For any β ∈ HU and ε > 0, there is a family (βi ) ∈ X such that

∀t = (ti ) ∈ B

jU (β) = (βi )U ,

(11.15)

sup βi 2 < βHU + ε,

(11.16)

L(t)βHU = limU ti βi 2 = π (t)((βi )U )HU , (11.17)

and moreover, for any family of projections (pi ) (pi ∈ M(i)) such that limU τi (pi ) = 0, we have limU pi βi 2 = limU βi pi 2 = 0.

(11.18) 

Proof Let β(m) be a sequence in B/IU such that β = β(m) and  β(m)HU < βHU + ε/2. Let q : B → B/IU denote the quotient map. Then by Lemma 11.23 we may assume that, for each m, we have β(m) = q(b(m)) for some sequence b(m) = (bi (m))i∈I ∈ B such that ∀m ≥ 0

sup bi (m)2 ≤ β(m)HU + 2−m (ε/4).

(11.19)

i

Then, if we set βi =

 m

bi (m) ∈ L2 (τi ),

(11.20)

 assuming 0 < ε < 1, (11.16) clearly holds. From β = m β(m) (absolutely convergent series in HU ), since jU : HU → HU is isometric we deduce  jU (β) = jU (β(m)). m

Note jU (β(m)) = (bi (m))U . Therefore, we have (absolutely convergent series in HU )  jU (β) = (bi (m))U . m

11.5 Ultraproducts

241

But we have also by (11.20) (absolutely convergent series in HU )  (bi (m))U . (βi )U = m

Therefore,we obtain (11.15). Also (11.17) is immediate by Remark 11.24. To check (11.18), note that for each fixed k           bi (m)  + pi bi (m)  pi βi 2 ≤ pi m≤k m>k 2 2   ≤ pi 2  b(m)M(i) + supi bi (m)2 m≤k

m>k

  and since supi  m≤k bi (m)M(i) ≤  m≤k b(m)B < ∞ and limU pi 2 = 0 we have by (11.19)   β(m)HU + 2−m (ε/4) limU pi βi 2 ≤ m>k

m>k

and letting k → ∞ we obtain one part of (11.18). The other part follows similarly. The next result seems to go back to McDuff’s 1969 early work. Note that the kernel IU is not weak* closed in B (and the quotient map is not normal), so the fact that the quotient B/IU is nevertheless (∗-isomorphic to) a von Neumann algebra is a priori somewhat surprising. Theorem 11.26 The functional fU : B → C vanishes on IU . The associated functional τU : B/IU → C is a faithful tracial state on B/IU such that, if q : B → B/IU denotes the quotient map, we have τU (q(t)) = fU (t)

∀ t ∈ B.

(11.21)

The kernels of L and R coincide with the set IU . After passing to the quotient, L and R define isometric representations LU : B/IU → B(HU )

and

RU : B op /IU → B(HU )

with commuting ranges. Lastly, the commutants satisfy [LU (B/IU )] = RU (B op /IU )

and

[RU (B op /IU )] = LU (B/IU ).

In particular, LU (B/IU ) and RU (B op /IU ) are (mutually commuting) von Neumann subalgebras of B(HU ). Proof By Cauchy–Schwarz, we have |fU (x)| ≤ (fU (x ∗ x))1/2 (fU (1))1/2 = (fU (x ∗ x))1/2 , and IU = {x ∈ B | fU (x ∗ x) = 0}. Therefore, fU vanishes

242

Traces and ultraproducts

on IU . Passing to the quotient, we obtain a functional τU unambiguously defined by (11.21), which is clearly a tracial state on B/IU . Let 1i be the unit of M(i) and let ξ = (1i )i∈I ∈ B. We have ˙ ˙ fU (t) = ξ,L(t) ξ˙  = ξ,R(t) ξ˙ . Let t ∈ B. If L(t) = 0, then L(t ∗ t) = 0 which by the preceding line implies fU (t ∗ t) = 0 hence t ∈ IU . Conversely, if t ∈ IU then x ∗ t ∗ tx ∈ IU for  ˙

any x in B, whence fU (x ∗ t ∗ tx) = 0 which means tx = 0 for all x in B, or equivalently L(t) = 0. A similar argument applies for R, so we obtain that ker(L) = ker(R) = IU . Then, after passing to the quotient by IU , L and R define the isometric representations LU and RU with the same respective ranges as L and R. Therefore LU and RU still have commuting ranges. Finally, let T ∈ B(HU ) be an operator commuting with LU (B/IU ), i.e. T ∈ LU (B/IU ) . We will show that T must be in the range of RU . Let β = T (ξ˙ ) ∈ HU . We will show that there is b = (bi ) in B such that β = b˙ and that ˙ T = R(b) = RU (b). Indeed, we have for any t = (ti ) in B TL(t)ξ˙ = L(t)T ξ˙ = L(t)β

(11.22)

L(t)βHU ≤ T  L(t)ξ˙ HU = T  t˙HU .

(11.23)

hence

By Lemma 11.25 there is a family (βi ) with βi ∈ L2 (τi ) satisfying (11.18) and such that sup βi 2 < ∞, (βi )U = jU (β) and ∀t = (ti ) ∈ B

L(t)βHU = limU ti βi 2 .

This, together with (11.23), implies that for any t in B limU τi (βi βi∗ ti∗ ti ) ≤ T 2 limU τi (ti∗ ti ).

(11.24)

Let βi = hi vi be the polar decomposition of βi in L2 (τi ) with hi ∈ L2 (τi ), hi ≥ 0, vi partial isometry in M(i) and hi = |βi∗ | (see Remark 11.29 for clarification). Fix ε > 0. Let pi be the spectral projection of hi associated to ]T  + ε,∞[. Note that βi βi∗ pi = h2i pi ≥ (T  + ε)2 pi . Hence (11.24) implies (with ti = pi ) (T  + ε)2 limU τi (pi ) ≤ limU τi (βi βi∗ pi ) ≤ T 2 limU τi (pi ). This forces limU τi (pi ) = 0, and hence by (11.18) limU τi (βi βi∗ pi ) = 0.

11.5 Ultraproducts

243

Therefore, if we set finally bi = (1−pi )hi vi we find bi  ≤ (1−pi )hi  ≤ T  + ε and βi − bi 2L2 (τi ) ≤ pi hi vi 2L2 (τi ) ≤ τi (βi βi∗ pi ), hence limU βi − bi L2 (τi ) = 0. Let b = (bi ). Then (βi )U = (bi )U , and b ∈ B with bB ≤ T  + ε. Then going back to (11.22) we obtain finally  ˙

TL(t)ξ˙ = L(t)β = L(t)b˙ = (ti bi ) = RU (b)L(t)ξ˙ . This shows that T = RU (b), which completes the proof that LU (B/IU ) = RU (B op /IU ). The same argument clearly yields RU (B op /IU ) = LU (B/IU ), and hence LU (B/IU ) = LU (B/IU ), which proves that LU (B/IU ) is a von Neumann algebra. Definition 11.27 Let MU = LU (B/IU ). The tracial probability space (MU ,τU ) is called the ultraproduct of the family (M(i),τi ) with respect to U . Remark 11.28 Let HU be the usual ultraproduct of the Hilbert spaces Hi = L2 (τi ). Then, HU can be identified to the closure in HU of the subspace of all elements of the form (bi )U with supi bi M(i) < ∞. Alternatively HU ⊂ HU can also be described as the subspace corresponding to the “uniformly square integrable” families. More precisely, let β = (βi )U be an element of HU , with supi βi Hi < ∞. Then β belongs to HU (meaning rather jU (HU )) if and only if lim limU τi (βi βi∗ 1{βi βi∗ >c} ) = 0

c→∞

or if and only if lim limU τi (βi∗ βi 1{βi∗ βi >c} ) = 0,

c→∞

where we have denoted (abusively) by 1{h>c} the spectral projection of the Hermitian operator h for the interval (c,∞). Remark 11.29 In what precedes, we invoked the polar decomposition in L2 (τi ), using its structure as a bimodule over M(i). In general this involves dealing with unbounded operators. But actually, we will apply the preceding result only in the case when M(i) is a (finite-dimensional) matrix algebra for which the polar decomposition is entirely elementary and classical, since L2 (τi ) coincides with M(i) itself. In general, a unitary in a quotient C ∗ -algebra A/I does not lift to a unitary in A. However it is so when A/I is isomorphic to a von Neumann algebra, in particular for B/IU .

244

Traces and ultraproducts

Lemma 11.30 Let qU : B → MU be the quotient map (given by qU = LU q). u = (ui )i∈I in B such that For any unitary u in MU , there is a unitary  u) = u. qU ( Proof Since MU is a von Neumann algebra (see Theorem 11.26), we can write u = exp ix for some self-adjoint x ∈ MU (see Remark A.39). Since qU maps x ∈ B with  x∗ =  x such that B onto MU and is a ∗-homomorphism, there is  x ) = x. Then  u = exp i x is a unitary in B lifting u. qU ( Remark 11.31 Any (self-adjoint) projection Q ∈ MU admits a lifting (Qi ) ∈ B such that Qi is a (self-adjoint) projection for any i. Indeed, let (xi ) ∈ BB be a lifting of Q (see Lemma A.33). Replacing xi by its real part, we may assume all the xi’s self-adjoint. Observe that for any λ in the spectrum of xi , we have |λ| ≤ 1 and d(λ,{0,1}) ≤ 2|λ − λ2 |. Using this it is easy to show that there is a (self-adjoint) projection Qi (in the commutative von Neumann algebra generated by xi ) such that |xi − Qi | ≤ 2|xi − xi2 |. Since Q = Q2 we have (xi − xi2 ) ∈ IU and hence (xi − Qi ) ∈ IU . Thus we conclude as announced Q = q((Qi )). Remark 11.32 We will be mostly interested with the case when the algebras M(i) are finite dimensional. The main and simplest case is when M(i) = MN (i) (matrices of size N (i) × N (i)) equipped with the normalized trace τi (x) = N(i)−1 tr(x). In that case we refer to MU as an ultraproduct of matricial tracial probability spaces. If we merely assume that all the M(i)s are finite dimensional, the resulting MU can anyway be embedded in an ultraproduct of the preceding matricial kind. Indeed, each finite-dimensional (M,τ ) can be identified in a trace preserving way with Mn(1) ⊕ · · · ⊕ Mn(k) equipped with the trace τ (x1 ⊕ · · · ⊕ xk ) = w1 n(1)−1 tr(x1 ) + · · · + wk n(k)−1 tr(xk ) where the positive weights satisfy w1 + · · · + wk = 1. If these weights are all rational, say wj = pj /N with p1 + · · · + pk = N, then we can embed (M,τ ) into (MN ,τN ) (here τN is the normalized trace on MN ) by a block diagonal embedding repeating each factor Mn(j ) with multiplicity pj . In the general case, when the weights are arbitrary real numbers, we can approximate them by rationals, and form an ultraproduct associated to these elementary numerical approximations. This shows that any finite-dimensional (M,τ ) (and hence any ultraproduct of such) can be embedded in an ultraproduct of matricial tracial probability spaces. See Corollary 12.7 for another way to justify this. Remark 11.33 (On Dixmier’s approximation theorem) A von Neumann algebra M is called a factor if M ∩ M  = CI . For example, MG is a factor if (and only if) all the nontrivial conjugacy classes of G (i.e. the sets {gtg−1 | g ∈ G}

11.5 Ultraproducts

245

with t = 1) are infinite. Moreover, if all the M(i)s are factors, the ultraproduct MU of the tracial probability spaces (M(i),τi ) is also a factor. We leave the proofs as exercises for the reader. Let (M,τ ) be a tracial probability space. If M is a factor, a classical theorem asserts that τ is the unique tracial state on M. This is an immediate corollary of a more general result due to Dixmier (see e.g. [146, p. 523] for a proof): for any x ∈ M the norm closure of the convex hull of {uxu−1 | u ∈ U (M)} intersects the center Z = M ∩ M  (when M is a factor, the latter intersection is reduced to {τ (x)1}). This implies that two tracial states on M that coincide on Z must be identical. In particular, given any other tracial probability space (N,ϕ), any unital embedding π : M → N automatically preserves the trace if M is a factor. Remark 11.34 If (M,τ ) is a tracial probability space then any ∗-homomorphism π : A1 ⊗ A2 → M is continuous with respect to the minimal norm, and hence continuously extends to A1 ⊗min A2 . We only sketch the proof. Replacing M by π(A1 ⊗ A2 ) , we may assume that π(A1 ⊗ A2 ) = M ⊂ B(L2 (τ )). By Remark A.27 it suffices to show that the tracial state A1 ⊗ A2  x → τ (π(x)) continuously extends to A1 ⊗min A2 . It is easy to show that the extreme points of the set of tracial states on a C ∗ -algebra A are all factorial states, i.e. states f for which πf (A) is a factor. Indeed, if the center Z of πf (A) is nontrivial we can find a nonzero projection p ∈ PZ such that 0 = p = 1 and then τ (·) = τ (p)[τ (p)−1 τ (p·)] + τ (1 − p)[τ (1 − p)−1 τ ((1 − p)·)] shows that τ is not extreme. Using this for A = A1 ⊗max A2 , we may assume that M is a factor. We may assume π = π1 · π2 as in (4.4). Let Mj = πj (Aj ) . Since π1,π2 have commuting ranges, the center of Mj is included in that of M. Therefore, if M is a factor, both M1,M2 are factors, and τj = τ|Mj is the tracial state of Mj . Applying Dixmier’s approximation theorem (see Remark 11.33) to each of M1 and M2 we find that τ (x1 x2 ) = τ1 (x1 )τ2 (x2 ) for all (x1,x2 ) ∈ M1 × M2 . Then we have τ ◦ π = (τ1 ◦ π1 ) ⊗ (τ2 ◦ π2 ), and hence |τ ◦ π(x)| ≤ xmin for all x ∈ A1 ⊗ A2 , which completes the proof. Remark 11.35 (On ascending unions of factors) Let (M,τ ) be a tracial probability space. Let M(i) ⊂ M be a family of von Neumann subalgebras directed by inclusion and let N ⊂ M be the weak* closure (or equivalently the bicommutant by Theorem A.46) of their union. If each M(i) is a factor then N is also a factor. To check this assertion, let τi = τ|M(i) and Ei = L2 (M(i),τi ). We view Ei and L2 (N,τN ) as subspaces of L2 (τ ). Consider the orthogonal projection Pi : L2 (τ ) → Ei . Clearly Pi (x) → x for all x ∈ L2 (N,τ ). By Proposition 11.21, Pi is a conditional expectation, so that Pi (axb) = aPi (x)b for all a,b ∈ M(i) and all x ∈ M. Since N  ⊂ M(i) , this implies that

246

Traces and ultraproducts

Pi (M ∩ N  ) ⊂ M(i) . Therefore Pi (N ∩ N  ) ⊂ M(i) ∩ M(i) and the assertion follows since Pi (x) → x in L2 (N,τ ) for any x ∈ N and we assume M(i) ∩ M(i) = C1 for all i ∈ I . Remark 11.36 (Reduction to factors) The notion of free product M ∗ N of two finite von Neumann algebras goes back to Ching [43] and Voiculescu (see [254]). Equivalently, they defined the free product (M ∗ N,τ ∗ ϕ) of two tracial probability spaces (M,τ ), (N,ϕ). Moreover, the construction is done so that the canonical embeddings M → M ∗ N and N → M ∗ N are trace preserving. In [43] Ching proved that if M,N both admit an orthonormal basis formed of unitaries for their L2 spaces (and dim(M) ≥ 2, dim(N ) ≥ 3) then M ∗ N is automatically a factor. In [218, th. 4.1] Sorin Popa proves a very general result of this type from which it follows that if (M,τ ) is any tracial probability space, and if for instance (N,ϕ) = (MF2 ,τF2 ) (namely the socalled free group factor) then M ∗ N is automatically a factor. More precisely, the relative commutant N  ∩ (M ∗ N ) is trivial. Actually, Popa proves this whenever (N,ϕ) is a nonatomic tracial probability space. In particular, this shows that any tracial probability space (M,τ ) embeds in a trace preserving way into one that is a factor and is separable if M is separable. We note in passing that it is an interesting and fundamental open question whether for any tracial probability space (M,τ ) (on a separable Hilbert space and “atomless,” that is without nontrivial minimal projections) there is an orthonormal basis of L2 (τ ) formed of unitaries. Of course this holds whenever (M,τ ) = (MG,τG ) with G a discrete group. For more in-depth information on finite von Neumann algebras we strongly recommend [4] to the reader. Remark 11.37 (GNS representations on B(H ) and ultraproducts) Let (Pn ) be a sequence of mutually orthogonal (self-adjoint) projections in B(H ) with rk(Pn ) = n for all n. Let U be a nontrivial ultrafilter on N and let fU be the state on B(H ) defined by fU (x) = limU n−1 tr(Pn xPn ). Then, for some ¯ U infinite-dimensional Hilbert space KU , we have πfU (B(H ))  B(KU )⊗M where MU is the ultraproduct of the tracial probability spaces (Mn,τn ) (with τn (·) = n−1 tr(·)). This is due to Anderson and Bunce, see [6, th. 5].

11.6 Factorization through B(H ) and ultraproducts We will describe a simple criterion that guarantees that a ∗-homomorphism π : A → M from a C ∗ -algebra to a von Neumann algebra factorizes completely positively through B(H ). To adapt its use to various situations we

11.6 Factorization through B(H ) and ultraproducts

247

consider more generally the restriction of π to a unital linear subspace E ⊂ A spanned by unitaries. Theorem 11.38 Let A be a unital C ∗ -algebra, (M,τ ) a tracial probability space. Let π : A → M be a unital ∗-homomorphism. Let S ⊂ U (A) be a subset with 1 ∈ S and let E = span(S) ⊂ A. Assume that      π(xj )2 ≤  xj ⊗ xj  . (11.25) ∀n ≥ 1,∀x1, . . . ,xn ∈ E  2 min

Then there is a state f on M ⊗min M such that ∀x,y ∈ E

τ (π(y)∗ π(x)) = f (y ⊗ x).

(11.26)

More precisely, there is an embedding A ⊂ B(H ), a family of finite rank operators (hi )i∈I on H with tr(h∗i hi ) = 1 and an ultrafilter U on I such that ∀x,y ∈ E

τ (π(y)∗ π(x)) = limU tr(h∗i y ∗ hi x).

(11.27)

Conversely (11.27) implies (11.25). Remark 11.39 The proof will show that any family (hi )i∈I of Hilbert–Schmidt operators on H satisfying (11.27) automatically also satisfies an approximate commutation condition as follows: ∀x ∈ E

limU tr |xhi − hi x|2 = 0.

(11.28)

This is but a simple consequence of the equality case in Cauchy–Schwarz. The rest of this section is devoted to the proof of Theorem 11.38 and the reinforced version in Theorem 11.42. Proof We will assume (without loss of generality) that A ⊂ B(H ), with infinite multiplicity. More precisely, this means that, starting from an embedding A ⊂ B(H1 ), we replace H1 by H = 2 ⊗2 H1 and embed B(H1 ) into B(H ) (diagonally) by T → Id2 ⊗ T . This gives us a new faithful ∗-homomorphism ρ : A → B(H ). For simplicity we view ρ as an embedding i.e. we set ρ(a) = a for all a ∈ A. We identify H¯ ⊗2 H with the Hilbert–Schmidt class S2 (H ). Let Bmin denote the unit ball of (A¯ ⊗min A)∗ . Note that for any T ∈ A¯ ⊗min A we have T min = sup {#(f (T )) | f ∈ Bmin } . By our assumption we have       π(xj )2 x ⊗ x ≤   j j L2 (τ )

min

= sup

f ∈Bmin



#(f (xj ⊗ xj )).

248

Traces and ultraproducts

We now apply Lemma A.16. Since Bmin is convex and weak* compact, this gives us in the limit a functional f in Bmin such that ∀x ∈ E

π(x)2L2 (τ ) ≤ #(f (x¯ ⊗ x)).

(11.29)

Taking x ∈ S we find 1 ≤ #(f (x¯ ⊗ x)) ≤ 1 and hence #(f (x¯ ⊗ x)) = 1 for any x ∈ S. In particular, 1 = #(f (1¯ ⊗ 1)), and hence the real part of f is a state, which implies (see §A.23) that f itself is a state on A¯ ⊗min A. By Proposition 4.25 there is a net (hi )i∈I in the unit sphere of S2 (H ) such that ∀x ∈ E

f (x¯ ⊗ x) = lim tr(x ∗ hi xh∗i ).

Note (by the trace property) tr(x ∗ hi xh∗i ) = tr(h∗i x ∗ hi x) and hence tr(x ∗ hi xh∗i ) = xhi ,hi x where the last inner product is relative to S2 (H ). Thus, for any (unitary) x ∈ S we have 1 = #f (x¯ ⊗ x) = lim #xhi ,hi x, and also hi xS2 (H ) = xhi S2 (H ) = 1. Therefore for any x ∈ S and hence for any x ∈ E lim xhi − hi xS2 (H ) = 0.

(11.30)

This implies for any x ∈ E f (x¯ ⊗ x) = lim tr(h∗i x ∗ hi x) = lim tr(h∗i x ∗ xhi ) ≥ 0.

(11.31)

This allows us to rewrite (11.29) more simply as ∀x ∈ E

π(x)2L2 (τ ) ≤ f (x¯ ⊗ x).

(11.32)

We claim that equality holds in (11.32), i.e. ∀x ∈ E

π(x)2L2 (τ ) = f (x¯ ⊗ x).

(11.33)

It clearly suffices to show equality for all x ∈ span(S1 ) for any finite subset S1 ⊂ S, so we assume x ∈ span(U1, . . . ,Ur ) with {U1, . . . ,Ur } ⊂ S. Consider the matrices defined for 1 ≤ i,j ≤ r by aij = π(Ui ),π(Uj )L2 (τ ) and bij = f (U¯i ⊗ Uj ). We then have the following situation: we have two (nonnegative) matrices a,b ∈ Mr such that a ≤ b by (11.32) and also ajj = bjj for all j , and this clearly implies that a = b (because for c = b − a ≥ 0, tr(c) = 0 ⇒ c = 0). This proves our claim (11.33). By the polarization identity of sesquilinear forms, the latter and (11.31) yield (since tr(h∗i y ∗ xhi ) = tr(hi h∗i y ∗ x) by the trace property) ∀x,y ∈ E

τ (π(y ∗ x)) = f (y¯ ⊗ x) = lim tr(h∗i y ∗ hi x) = lim tr(hi h∗i y ∗ x). (11.34)

11.6 Factorization through B(H ) and ultraproducts

249

Since the finite rank operators are dense in S2 (H ) we may assume by perturbation that the hi’s are all of finite rank. Lastly, passing to an ultrafilter U refining the net (see §A.4) we may as well assume that the preceding limits are all with respect to U . This completes the proof, since (recall Proposition 2.11) the converse direction is obvious. Remark 11.40 (Complement to the proof) Note for further use that for any unitary x ∈ B(H ) xhi − hi xS2 (H ) = x ∗ (xhi − hi x)x ∗ S2 (H ) = hi x ∗ − x ∗ hi S2 (H ) = xh∗i − h∗i xS2 (H ), and moreover (derivation rule) xyhi − hi xy = x(yhi − hi y) + (xhi − hi x)y. Therefore (11.30) still holds for any x in the group GS ⊂ U (A) generated by S ∪ S −1 . Applying the derivation rule again we see that limU xhi h∗i − hi h∗i xS1 (H ) = 0. Thus we have ∀x ∈ GS

limU xhi h∗i − hi h∗i xS1 (H ) = 0.

(11.35)

By perturbation, we may clearly assume that the finite rank operators hi h∗i have rational eigenvalues. Remark 11.41 In [39], a state ϕ on A ⊂ B(H ) is called an “amenable trace” if ϕ (b) for any U ∈ it can be extended to a state  ϕ on B(H ) such that  ϕ (U ∗ bU) =  U (A) and any b ∈ B(H ). In the situation of the preceding proof, assuming E dense in A, let ϕ(x) = τ (π(x)) (x ∈ A), and  ϕ (b) = limU tr(h∗i bhi ) (b ∈ ∗ B(H )). By (11.30) we have  ϕ (b) =  ϕ (U bU) for any U ∈ S and by (11.34)  ϕ|E = ϕ|E . Therefore, if E is dense in A, (11.25) implies that ϕ is an amenable trace on A. Conversely, if ϕ is an amenable trace then (11.25) holds for E = A. This is easy to check using the Powers–Størmer inequality (11.38) which comes next. The notion of “amenable trace” generalizes to C ∗ -algebras that of “hypertrace” that will be discussed for von Neumann algebras in §11.7. Theorem 11.42 The conclusion of Theorem 11.38 can be strengthened as follows: There are a Hilbert space H, an embedding σ : A → B(H), a family of finite rank projections (Ri )i∈I on H and an ultrafilter U on I such that ∀y,x ∈ E

τ (π(y)∗ π(x)) = limU (tr(Ri ))−1 tr(Ri σ (y)∗ Ri σ (x)), (11.36)

and ∀y,x ∈ E

limU (tr(Ri ))−1 tr(Ri σ (y)∗ σ (x))

− (tr(Ri ))−1 tr(Ri σ (y)∗ Ri σ (x)) = 0.

Note that (11.36) is similar to (11.27) with hi replaced by hi = tr(Ri )

(11.37) −1/2

Ri .

250

Traces and ultraproducts

To complete the proof we will need the Powers–Størmer inequality and Lemma 11.45. Remark 11.43 Actually we will prove that (11.37) holds for all y in the linear span of the group generated by S in U (A) and for all x ∈ A. We need some technical preliminary to be able to complete the proof. Lemma 11.44 (Powers–Størmer inequality) Let s,t ≥ 0 be trace class operators on H . Then s 1/2 − t 1/2 22 ≤ s − t1 . In particular, for any unitary U ∈ B(H ) U ∗ t 1/2 U − t 1/2 22 ≤ U ∗ tU − t1 .

(11.38)

Proof Let ek be an orthonormal basis of H consisting of eigenvectors of s 1/2 − t 1/2 and let λk be the corresponding (real) eigenvalues. Note for later use that for any self-adjoint T we have −|T | ≤ T ≤ |T | and hence for any x∈H |x,Tx| ≤ x,|T |x.

(11.39)

Note that since ±(s 1/2 −t 1/2 ) ≤ s 1/2 +t 1/2 it follows that |x,(s 1/2 −t 1/2 )x| ≤ x,(s 1/2 + t 1/2 )x for all x ∈ H , and hence |λk | ≤ ek ,(s 1/2 + t 1/2 )ek . Thus we have

  ek ,|s 1/2 − t 1/2 |2 ek  = |λk |2  ≤ |λk |ek ,(s 1/2 + t 1/2 )ek .

s 1/2 − t 1/2 22 = tr(|s 1/2 − t 1/2 |2 ) =

Let εk be the sign of λk . Then |λk |ek ,(s 1/2 + t 1/2 )ek  is the same as both εk (s 1/2 − t 1/2 )ek ,(s 1/2 + t 1/2 )ek  and εk ek ,(s 1/2 + t 1/2 )(s 1/2 − t 1/2 )ek . Therefore   |λk |ek ,(s 1/2 + t 1/2 )ek  = 2−1 εk (s 1/2 − t 1/2 )ek ,(s 1/2 + t 1/2 )ek 

+ ek ,(s 1/2 + t 1/2 )(s 1/2 − t 1/2 )ek   |ek ,(s 1/2 − t 1/2 )(s 1/2 + t 1/2 ) ≤ 2−1 + (s 1/2 + t 1/2 )(s 1/2 − t 1/2 )ek |  |ek , (s − t) ek | =  ek ,|s − t|ek  = tr|s − t| = s − t1, ≤ where for the last ≤ we used (11.39).

11.6 Factorization through B(H ) and ultraproducts

251

The next lemma is taken from [39, Lemma 6.2.5]. It is the most difficult step. Lemma 11.45 For any x ∈ B(H ) we denote x  = x ⊗ Id2 ∈ B(H ⊗2 2 ). Let h ∈ B(H ) be a finite rank operator with tr(hh∗ ) = 1. Let t = hh∗ . We assume that t = hh∗ has rational eigenvalues. Then there are an integer r and a projection R of rank r such that: ∀U ∈ U (H ) |tr(t) − r −1 tr(RU ∗ RU  )| ≤ 2U ∗ tU − t1 , 1/2

(11.40)

and more generally:

∀U,V ∈ U (H ) tr(U ∗ V t) − r −1 tr RU ∗ RV   1/4  1/4 ≤ 2 U ∗ tU − t 1 V ∗ tV − t 1 .

(11.41)

p1 r Q1

Proof Let t = + pr2 Q2 + · · · + prk Qk be the spectral decomposition of p t where the eigenvalues rj are in increasing order and the projections Qj are   mutually orthogonal. Note that j pj tr(Qj ) = j pj rk(Qj ) = rtr(t) = r.

Let K = 2 . Let P1 ≤ P2 ≤ · · · ≤ Pk be projections on K such that tr(Pj ) = rk(Pj ) = pj for any j . We then define a projection R on H ⊗ K by k Qj ⊗ Pj . R= j =1

Note tr(R) = rk(R) = r. Then we observe ∀x ∈ B(H ) tr(xt) = tr(xhh∗ ) = r −1 tr(x  R). (11.42)   Indeed, r −1 tr(x  R) = r −1 tr(xQj )tr(Pj ) = tr(xr −1 Qj pj ) = tr(xt) = tr(xhh∗ ). We will first show that (11.40) implies (11.41). This uses an idea (an operator valued Cauchy–Schwarz inequality) similar to the one used to prove Theorem 5.1 about the multiplicative domain of a c.p. map (here the relevant map is x → Rx  R). Let ∀x,y ∈ B(H )

F (x,y) = r −1 tr(x ∗ y  R) − r −1 tr(Rx ∗ Ry ).

(11.43)

We claim that F (x,x) ≥ 0. Indeed, R ≤ I implies Rx ∗ Rx R = (x  R)∗ R(x  R) ≤ (x  R)∗ I (x  R) = Rx ∗ x  R, and since R 2 = R we have tr(Rx ∗ Rx ) = tr(Rx ∗ Rx R) and tr(x ∗ x  R) = tr(Rx ∗ x  R), from which the claim follows. Then by Cauchy–Schwarz we have |F (x,y)| ≤ F (x,x)1/2 F (y,y)1/2 .

252

Traces and ultraproducts

In particular, |F (U,V )| ≤ F (U,U )1/2 F (V ,V )1/2 . By (11.42) we have tr(U ∗ Vt) = r −1 tr(U ∗ V  R). This shows that (11.40) ⇒ (11.41). We now turn to the more delicate verification of (11.40), for which we follow [39]. Since this elementary fact is implicitly used several times in the proof we remind the reader that tr(xy) = tr(x 1/2 yx1/2 ) ≥ 0 whenever x ≥ 0 and y ≥ 0 are (say) Hilbert–Schmidt. In the present situation since tr(t) = 1 the Powers–Størmer inequality (11.38) gives us 2(1 − tr(t 1/2 U ∗ t 1/2 U )) = t 1/2 − U ∗ t 1/2 U 22 ≤ U ∗ tU − t1 .

(11.44)

A simple verification shows that r −1 tr(RU ∗ RU  ) =

 m,

min(pm,p ) tr(Qm U ∗ Q U ). r

Plugging in the elementary inequality ∀p,p ≥ 0

min(p,p ) =

1 1 p + p − |p − p | ≥ (pp )1/2 − |p − p |, 2 2

we find r −1 tr(RU ∗ RU  ) ≥ β − γ ,

(11.45)

where β = tr(t 1/2 U ∗ t 1/2 U )

γ = (2r)−1

and 1/2

1/2

1/2

 m,

|pm − p | tr(Qm U ∗ Q U ).

1/2

Using |pm − p | = |pm − p |(pm + p ), we find by Cauchy–Schwarz γ ≤ (2r)−1



1/2

m,

×



1/2

(pm − p )2 tr(Qm U ∗ Q U ) 1/2

(pm + p )2 tr(Qm U ∗ Q U ) 1/2

m,

1/2

1/2

1/2

1/2

1/2

1/2

and after expanding (pm − p )2 and (pm + p )2 we find γ ≤ (2r)−1 (2r − 2rtr(t 1/2 U ∗ t 1/2 U ))1/2 (2r + 2rtr(t 1/2 U ∗ t 1/2 U ))1/2 . Since the last term is ≤(4r)1/2 we obtain γ ≤ (2 − 2tr(t 1/2 U ∗ t 1/2 U ))1/2, and hence by (11.44) γ ≤ U ∗ tU − t1 . 1/2

11.6 Factorization through B(H ) and ultraproducts

253

By (11.44) again 1 − β = 1 − tr(t 1/2 U ∗ t 1/2 U ) ≤ 2−1 U ∗ tU − t1 ≤ 1. 1/2 A fortiori 1 − β ≤ U ∗ tU − t1 . Thus, recalling (11.43) and (11.45), we obtain F (U,U ) ≤ 1 − β + γ ≤ 2U ∗ tU − t1 . 1/2

This proves (11.40). Proof of Theorem 11.42 Let H = H ⊗2 2 and σ (a) = a  for a ∈ A. We will exploit Remark 11.40 and (11.35). Let (hi ) be as in Theorem 11.38. By perturbation we may assume that hi h∗i has rational eigenvalues. By Lemma 11.45 we can find a net of projections Ri of rank n(i) = tr(Ri ) on H so that (11.40) and (11.41) are satisfied for t = hi h∗i . Taking U = 1 in (11.41) (or invoking (11.42)) we find n(i)−1 tr(Ri x  ) = tr(xhi h∗i ).

∀x ∈ U (B(H ))

(11.46)

Then (11.41) and (11.35) imply for any U ∈ GS and V ∈ U (B(H )) ∗

limU |tr(U ∗ V hi h∗i ) − n(i)−1 tr(Ri U  Ri V  )| = 0, and hence by (11.46) ∗



limU |n(i)−1 tr(Ri U  V  ) − n(i)−1 tr(Ri U  Ri V  )| = 0, which implies (11.37) and Remark 11.43. Recalling (11.34) we find ∀U,V ∈ S



τ (π(U −1 V )) = limU n(i)−1 tr(Ri U  V  ) ∗

= limU n(i)−1 tr(Ri U  Ri V  ), which implies (11.36) since E is spanned by S. Corollary 11.46 In the situation of Theorem 11.38, assume that E generates A (as a C ∗ -algebra) and that π(S) ⊂ U (M) is a subgroup generating M (i.e. such that π(S) = M). Then, for some H , π factors through B(H ) via unital c.p. maps. More precisely, there is a family (Mn(i) )i∈I of matrix algebras and an ultrafilter U on I , so that π admits a factorization of the form u

qU

v

A −→ B −−→ MU −→ M

(11.47)

 where B = (⊕ i∈I Mn(i) )∞ , qU : B → MU is the quotient ∗-homomorphism and u : A → B as well as v : MU → M are unital c.p. maps (so that ucb = vcb = 1). Moreover, qU u is a ∗-homomorphism, v is normal and defines a contraction from L2 (τU ) to L2 (τ ). Lastly, M embeds in MU .

254

Traces and ultraproducts

Proof With the notation from the conclusion of Theorem 11.42, let Hi ⊂ H be the range of Ri and let n(i) = dim Hi . For any x ∈ E, let ui (x) = Ri σ (x)|Hi ∈ B(Hi ). Choosing an orthonormal basis of the range of Ri , we may view ui (x) as an element of Mn(i) . Then let u(x) = (ui (x)) ∈ B. Clearly, this defines a unital c.p. map u : A → B. By (11.36) we have ∀x,y ∈ S

τU (qU u(y)∗ qU u(x)) = τ (π(y)∗ π(x)).

(11.48)

By sesquilinearity this remains valid for all x,y ∈ E and hence ∀x,y ∈ E

qU u(x) − qU u(y)L2 (τU ) = π(x) − π(y)L2 (τ )

(11.49)

and also ∀x ∈ E

τU (qU u(x)) = τ (π(x)).

(11.50)

In particular, τU (qU u(x)∗ qU u(x)) = 1 for any x ∈ S, which shows by Remark 11.13 that qU u(x) ∈ U (MU ). By Proposition 9.7 (or because S is in its multiplicative domain), the linear map qU u : A → MU is a ∗-homomorphism on A. Let A ⊂ M be the linear span of π(S) (i.e. A = π(E)). By our assumption A is a weak*-dense subalgebra of M. By (11.49) the correspondence π(x) → qU u(x) is a well-defined linear map from A to MU and it is a ∗-homomorphism (since qU u is one). By Remark 11.20, the “∗-distribution equality” (11.50) implies that the von Neumann subalgebra NU generated in MU by {qU u(x) | x ∈ S} is isomorphic to M = π(S) , via the correspondence T : M → MU defined on A by T (π(x)) = qU u(x). This gives us a ∗-isomorphism T : M → NU ⊂ MU , and hence M embeds in MU . But, by Proposition 11.21, we also have a conditional expectation P : MU → NU , thus setting v = T −1 P , we obtain the desired factorization. Note that (recall (11.49)) T also extends to an isometry from L2 (τ ) to L2 (τU ), so we could invoke Proposition 11.19 instead of Remark 11.20 in the preceding argument. We have clearly v : L2 (τU ) → L2 (τ ) = 1 (with the obvious abuse of notation), and v : MU → M is normal. Lastly, since B is injective, IdB factors through B(H ) (for some H ) via unital c.p. maps (see §1.5), which proves the first assertion. Remark 11.47 To see that the factorization in (11.47) implies the embedding M ⊂ MU , let ρ = qU u : A → MU , and let ρ¨ : A∗∗ → MU be the normal ∗-homomorphism extending ρ. Since v is normal, π = vρ implies π¨ = v ρ. ¨ ∗∗ But (see (A.37)) we know A  M ⊕ ker(π¨ ), so that with respect to the associated embedding M ⊂ A∗∗ we have IdM = π¨ |M . Thus with IdM = v ρ¨|M we obtain the embedding of M in MU .

11.6 Factorization through B(H ) and ultraproducts A∗∗

M

π¨

ρ¨

A

ρ

255

MU

v

M

π

Corollary 11.48 In the situation of Corollary 11.46, (or simply when E = A in Theorem 11.38) assume given a unital embedding A ⊂ A1 into another C ∗ -algebra A1 . Then any ∗-homomorphism π : A → M satisfying (11.25) extends to a unital c.p. mapping π1 : A1 → M still satisfying (11.25). Proof By the extension Theorem 1.39, u : A → B admits a u.c.p. extension u1 : A1 → B. Then let π1 = vqU u1 . Note that, by the converse direction in Theorem 11.38, qU itself satisfies the property described by (11.25). Since v : L2 (τU ) → L2 (τ ) = 1 and u1 ⊗ u1 : A1 ⊗min A1 → B ⊗min B ≤ 1, the map π1 must also satisfy (11.25). The extension property considered in Corollary 11.48 will be refined in a later chapter notably in Theorem 23.29. We now relate injectivity to Theorem 11.38. Corollary 11.49 Let (M,τ ) be a tracial probability space. The following are equivalent: (i) M is injective. (ii) For any finite subsets (xj ) and (yj ) in M we have      τ (yj∗ xj ) ≤  yj ⊗ xj 

min

.

(iii) For any finite subset (xj ) in M we have 

    xj ⊗ xj  τ (xj∗ xj ) ≤ 

min

.

Proof Assume (i). Let L,R be as in Proposition 11.16, so that L(M)  M is injective and L(M) = R(M op ). By (i) ⇒ (ii) in Theorem 8.14 applied to   xj yj∗ , L(M), for any n and any (yj ), (xj ) in M n , since R(yj∗ )L(xj )1 = we have       R(yj∗ )L(xj ) yj∗ xj ≤  τ B(L2 (τ ))     ∗ R(yj ) ⊗ L(xj ) ≤ , op R(M )⊗min L(M)

256 also  (2.12)

Traces and ultraproducts 

R(yj∗ ) ⊗ L(xj )R(M op )⊗min L(M) =      yj∗ ⊗ xj  

M op ⊗min M



yj∗ ⊗ xj M op ⊗min M and by

    = yj ⊗ xj 

M⊗min M

.

This shows (i) ⇒ (ii). (ii) ⇒ (iii) is trivial. Assume (iii). Equivalently (iii) means that (11.25) holds with A = M , π = IdM and E = A(= M). The factorization in (11.47) implies that IdM factors through B via u.c.p. maps. Therefore M is injective.

11.7 Hypertraces and injectivity Let (M,τ ) be a tracial probability space. The goal of this section is to present a particularly neat characterization of the injectivity of M, refining the last corollary when M is a factor, and its generalization involving the center of M when M is not. The proof uses the notion (due to Connes) of hypertrace. Definition 11.50 A tracial state on M ⊂ B(H ) is called a hypertrace if it admits an extension to a state f on B(H ) such that ∀U ∈ U (M), ∀x ∈ B(H ),

f (UxU ∗ ) = f (x).

(11.51)

Note that this is the same as ∀U ∈ U (M), ∀x ∈ B(H ), f (Ux) = f (xU) or equivalently: ∀y ∈ M, ∀x ∈ B(H ), f (yx) = f (xy). Hypertraces, or rather the states f satisfying (11.51), are analogous to invariant means for amenable groups. In this regard, the reader is invited to compare (11.52) to (iv) in Theorem 3.30. For example, if P : B(H ) → M is a contractive projection, then f (x) = τ (P (x)) satisfies (11.51), because P (UxU ∗ ) = UP(x)U ∗ (x ∈ B(H ),U ∈ U (M)) by Theorem 1.45. Thus τ is a hypertrace if M is injective. The converse is also true: Proposition 11.51 If τ is a hypertrace then M is injective. Proof Let L2 (f ) be the GNS Hilbert space associated to f and let πf : B(H ) → B(L2 (f )) be the GNS representation with unit vector ξf ∈ L2 (f ) such that f (x) = ξf ,πf (x)ξf . Since τ (x ∗ x) = f (x ∗ x) for any x ∈ M, we have an isometric embedding L2 (τ )  πf (M)ξf ⊂ L2 (f ). We view M ⊂ L2 (τ ) and denote by ψ : πf (M)ξf → L2 (τ ) the isometric isomorphism that takes πf (a)ξf to a for any a ∈ M. Let Q : L2 (f ) → πf (M)ξf be the orthogonal projection. We define P : B(H ) → L2 (τ ) by

11.7 Hypertraces and injectivity

257

P (x) = ψQ(πf (x)ξf ) for any x ∈ B(H ). Then P (x) = x for any x ∈ M. We claim that P (x) ∈ M and P (x) ≤ x for any x ∈ B(H ), which proves that M is injective. This will be checked by the same classical argument that was used to prove Proposition 11.21. To check the claim it suffices to show by Remark 11.15 that |τ (y2∗ P (x)y1 )| ≤ xy1 L2 (τ ) y2 L2 (τ ) for any y1,y2 ∈ M. Note τ (y2∗ P (x)y1 ) = τ (y1 y2∗ P (x)) = y2 y1∗,P (x)L2 (τ ) . Since ψ is isometric we have y2 y1∗,P (x)L2 (τ ) = πf (y2 y1∗ )ξf ,Q(πf (x)ξf )L2 (f ) and hence by the definition of Q and the hypertrace property of f y2 y1∗,P (x)L2 (τ ) = πf (y2 y1∗ )ξf ,πf (x)ξf L2 (f ) = f (y1 y2∗ x) = f (y2∗ xy1 ) = πf (y2 )ξf ,πf (x)πf (y1 )ξf , which yields |τ (y2∗ P (x)y1 )| ≤ xπf (y2 )ξf L2 (f ) πf (y1 )ξf L2 (f ) = xy1 L2 (τ ) y2 L2 (τ ) . Thus P is a contractive projection onto M. Theorem 11.52 Let (M,τ ) be a tracial probability space. Let Z = M ∩ M  the center of M. (i) If M is a factor (i.e. Z = C1) then M is injective if and only if  n   Uj ⊗ U j  . ∀n ≥ 1,∀Uj ∈ U (M) n =  1

min

(11.52)

(ii) In general, M is injective if and only if for any nonzero projection q ∈ PZ we have n    ∀n ≥ 1,∀Uj ∈ U (M) n =  qU j ⊗ qUj  . (11.53) 1

min

Proof We first show that (11.53) holds if M is injective. Since we may replace M by qM which is still injective (with unit q and trace x → τ (q)−1 τ (qx)) it suffices to show this for q = 1. The latter case follows from (iii) in Corollary 11.49. This settles both “only if” parts. (i) Assume (11.52). Then for any n and any U0,U1, . . . ,Un−1 in U (M) with U0 = 1, for any ε > 0 by (2.9) there is a unit vector h ∈ S2 (H ) such that  n   Uj hUj∗  , n − ε2 /4n <  1

and hence by (A.3) supj h − vectors in S2 (H ) such that

Uj hUj∗ 2

∀U ∈ U (M)

2

≤ ε. Thus there is a net (hi ) of unit

  hi − U hi U ∗  → 0. 2

(11.54)

Let U be a nontrivial ultrafilter refining this net. We set f (x) = limU tr(xh∗i hi ). Then (11.54) implies (11.51). A fortiori, f|M is a tracial state on M. Since (finite) factors have a unique tracial state (see Remark 11.33), we must have f|M = τ , so that τ is a hypertrace, and M is injective by Proposition 11.51.

258

Traces and ultraproducts

(ii) Assume (11.53). We will use the classical fact that if two tracial states coincide on Z, then they are equal (see Remark 11.33). Consider the set I m formed by all the disjoint partitions of 1M as a finite sum 1M = 1 qk of mutually orthogonal projections in PZ . The set I is ordered by its natural order: i ≤ i  means that each projection that is part of i is a sum of some of the projections in i  . Since Z is commutative, the set I is directed with respect to this order. We claim that for any i ∈ I , i = (q1, . . . ,qm ), there is a state fi on B(H ) satisfying (11.51) such that ∀(λk ) ∈ Cm

fi

m 1

  λk τ (qk ). λk qk =

(11.55)

We may view (fi ) as a net indexed by a directed ordered set. Let U be an ultrafilter refining this net and let f = limU fi (pointwise on B(H )). Taking the claim as granted for the moment, let us conclude. Clearly f still satisfies   λk τ (qk ) (11.51) and also for any i = (q1, . . . ,qm ) we have f ( m 1 λk qk ) = for any (λk ) ∈ Cm (because for any j ≥ i (11.55) remains true if we replace fi by fj ). This shows that f|Z and τ|Z coincide on the linear span of PZ , and hence since the latter is norm-dense in Z (here Z is isomorphic to some L∞ -space) we conclude that f|Z = τ |Z. But since f|M is a tracial state, by the preceding classical fact f|M = τ . Thus τ is a hypertrace and again M is injective by Proposition 11.51. To prove the claim, consider i = (q1, . . . ,qm ). Fix 1 ≤ k ≤ m and apply the preceding argument for (i) to the von Neumann algebra qk M ⊂ B(qk H ) (with unit qk ) instead of M ⊂ B(H ). This gives us a state f k on B(qk H ) satisfying (11.51) (with respect to qk H instead of H and qk M instead of M).  k f (Pqk H x|qk H )τ (qk ) for any x ∈ B(H ). Then fi is a state on Let fi (x) = B(H ) satisfying (11.51) and (11.55). This proves the claim. Remark 11.53 Actually, any von Neumann algebra M satisfying (11.53) must be finite. Indeed, the preceding proof shows that if (11.53) holds then qM admits a tracial state for any q ∈ PZ . From this it is easy to deduce by structural arguments that M must be finite. If M is σ -finite this implies that M admits a faithful normal finite trace τ (see §23.1). Remark 11.54 Let (M,τ ) be a tracial probability space, with M ⊂ B(H ). Then τ is a hypertrace if and only if there is a net (hi ) of unit vectors in S2 (H ) such that Uhi U ∗ − hi S2 (H ) → 0 (or equivalently Uhi − hi U S2 (H ) → 0) for any U ∈ U (M) and such that τ (x) = lim tr(xh∗i hi ) for any x ∈ M. Indeed, if τ is a hypertrace, let f be as in Definition 11.50. Let (ti ) be a net of unit vectors in S1 (H ) = B(H )∗ with ti ≥ 0 such that f (x) = lim tr(xti ) for any x ∈ B(H ) (we may assume ti ≥ 0 because ti S1 (H ) ≤ 1 and

11.8 The factorization property for discrete groups

259

tr(ti ) → 1 together imply that there is ti ≥ 0 such that ti − ti S1 (H ) → 0). Then (11.51) implies that Uti U ∗ − ti → 0 for σ (B(H )∗,B(H )). By Mazur’s Theorem A.9 after passing to a different net we may as well assume that 1/2 Uti U ∗ − ti S1 (H ) → 0. Let hi = ti . By the Powers–Størmer inequality (11.38) we have Uhi U ∗ − hi S2 (H ) → 0, while since f|M = τ we have τ (x) = lim tr(xh2i ) for any x ∈ M. No wonder if this argument rings a bell: we used an analogous one to prove (i) ⇒ (ii) in Theorem 3.30. This proves the “only if” part. The converse is immediate (just set f (x) = limU tr(xh∗i hi ) for any x ∈ B(H )). Note that the existence of such a net (hi ) implies that π = IdM satisfies (11.27) and hence Proposition 11.51 can be deduced alternatively from Corollary 11.46 as in the proof of Corollary 11.49.

11.8 The factorization property for discrete groups We already introduced the factorization property in Definition 7.36. We now give several equivalent properties. Theorem 11.55 The following properties of discrete group G are equivalent: (i) The unitary representation (s,t) → λG (s)ρG (t) on G × G, extends to a (continuous) representation on C ∗ (G) ⊗min C ∗ (G). In other words G has the factorization property. (ii) The linear functional f defined on C[G] ⊗ C[G] by  f (x ⊗ y) = t∈G x(t)y(t) extends to a linear form of norm 1 on C ∗ (G) ⊗min C ∗ (G). (iii) For any finite sequence x1, . . . ,xn in C ∗ (G) we have   λG (xj )2 L

2 (τG )

    ≤ xj ⊗ xj 

C ∗ (G)⊗min C ∗ (G)

.

(11.56)

˙ G : C ∗ (G) → MG (such that (iv) The natural ∗-homomorphism Q ˙ G (UG (t)) = λG (t) for all t ∈ G) factorizes via unital c.p. maps Q through B(H ) for some H . Proof Assume (i). Let f be as in (ii). Let x,y ∈ C[G]. We set λG (x) =   s∈G x(s)λG (s) and ρG (y) = t∈G y(t)ρG (t). Then f (x ⊗ y) = δe,λG (x)ρG (y)δe . This shows (i) ⇒ (ii).   x(t)UG (t) x(t)UG (t) to Assume (ii). The linear mapping taking (∀x ∈ C[G]) extends to a C-linear isomorphism  : C ∗ (G) → C ∗ (G). Therefore (ii) implies

260

Traces and ultraproducts       xj (t)yj (t) ≤  xj ⊗ yj  j

t

C ∗ (G)⊗min C ∗ (G)

,

and hence taking xj = yj we obtain (iii). Assume (iii). We apply Corollary 11.46 to the case A = C ∗ (G) with ˙ G . This implies (iv). S = {UG (t) | t ∈ G} (i.e. essentially S = G) and π = Q Assume (iv). We first show that (iv) ⇒ (iii). Unfortunately, we need to invoke an inequality satisfied by B(H ) that is proved only later on these notes, namely (22.15). The latter implies that if a ∗-homomorphism π : C ∗ (G) → MG satisfies the factorization in (iv) then for any finite set (xj ) in C ∗ (G)         π(xj ) ⊗ π(xj ) ≤ xj ⊗ xj  ∗ .  ∗ MG ⊗max MG

Since

  π(xj )2

L2 (τG )

C (G)⊗min C (G)

    ≤ π(xj ) ⊗ π(xj )

MG ⊗max MG

we obtain (iii). ˙G Assume (iii). Let A = C ∗ (G), E = span[UG (t) | t ∈ G} and π = Q as before. By Theorem 11.38 choosing a suitable embedding A ⊂ B(H ) there is a net (hi ) in the unit ball of S2 (H ) such that for any x,y ∈ E, say   x = x(t)UG (t), y = y(t)UG (t) we have  x(t)y(t) = τG (π(x ∗ y)) = limU tr(x ∗ hi yh∗i ). By Proposition 2.11 this implies       xj∗ hi yj h∗i ) ≤  xj (t)yj (t) = lim tr( xj ⊗ yj  ¯ j

t

U

A⊗min A

.

Thus, using again the C-linear isomorphism  : C ∗ (G) → C ∗ (G), (ii) follows. To conclude we show (ii) ⇒ (i). This is a routine argument based on the observation that the representation appearing in (i) is a GNS representation  associated to the state appearing in (ii). Let T = xj ⊗ yj ∈ A ⊗ A with ∗ T min ≤ 1. Then 1 ⊗ 1 − T T ∈ (A ⊗min A)+ . Let κ : A ⊗ A → B(2 (G)) be the ∗-homomorphism associated to (s,t) → λG (s)ρG (t). Note for any z∈A⊗A f (z) = δe,κ(z)δe . For any z ∈ A ⊗ A we have z∗ (1 ⊗ 1 − T ∗ T )z ∈ (A ⊗min A)+ and hence assuming (ii), we have κ(T )κ(z)δe 22 (G) = f (z∗ T ∗ T z) ≤ f (z∗ z) = κ(z)δe 22 (G), which shows κ(T )B(2 (G)) ≤ 1. This completes the proof that (ii) ⇒ (i). Corollary 11.56 If G has the factorization property, in particular if G is amenable (see Remark 7.37) then MG embeds in a trace preserving way in an ultraproduct of matrix algebras.

11.9 Notes and remarks

261

Proof This follows by applying Corollary 11.46 to the case when A = C ∗ (G), ˙ G. M = MG = λG (G) , S = G viewed as a subset of U (A) and π = Q Remark 11.57 We will show in Corollary 12.23 that all free groups have the factorization property.

11.9 Notes and remarks The construction of noncommutative measure theory outlined in §11.2 was motivated by quantum mechanics, and hence goes far back. The results of §11.2 are all classical facts. As for the origin of noncommutative Lp -spaces one usually attaches the names of Dixmier, Kunze, Segal, and more recently Nelson, see [215] for more information on this topic. The construction of ultraproducts goes back to McDuff [176]. See [4] for a much more complete treatment of the ramifications of this important topic. The reader interested in ultraproducts in the nontracial case is referred to [240, p. 115] and the more recent papers [8, 9]. Theorem 11.38 is a relatively easy fact reformulating ideas that can be traced back to Kirchberg’s [155] in the style of a Pietsch factorization for 2-summing maps as in [205, §5]. The unital assumption (which guarantees that π(S) ⊂ U (M)) is the key to obtain an equality as in (11.27). The proof of Theorem 11.42 is more delicate. The main point comes from Brown and Ozawa’s book [39, Lemma 6.2.5]. Its relevance to the situation considered in §11.6 was pointed out by Ozawa in [191]. The notion of hypertrace, together with Proposition 11.51 and Theorem 11.52 for factors are all due to Connes [61]. The generalization in Theorem 11.52 (ii) comes from Haagerup’s [104] on which §11.7 is based. The factorization property was introduced by Kirchberg in [155]. It is particularly interesting in connection with property (T) groups as in Theorem 17.5. The equivalence of the properties in Theorem 11.55 were surely known to Kirchberg [157]. For its proof we used some simplifications due to Ozawa [189]. See also [39, p. 219].

12 The Connes embedding problem

We now turn to the first of a series of problems that will turn out to be eventually all equivalent. Since it was formulated as a question (or, say, a problem) we do not refer to it as a conjecture.

12.1 Connes’s question In the classical von Neumann algebra terminology, a “II 1 -factor” is an infinitedimensional tracial probability space (M,τ ) with trivial center, i.e. such that M ∩ M  = CI . In his famous paper [61], Connes observed that in addition to the case when G is amenable, in the somewhat “opposite” case when G = Fn (2 ≤ n ≤ ∞), the II 1 -factor (MG,τG ) also embeds in a trace preserving way in (MU ,τU ) for some U . Since the latter case was at the time the principal “bad apple” in the classification theory of factors, it was natural for Connes to wonder whether in fact the same embedding held for any discrete group G and any II 1 -factor. But since it can be shown, using free products (see Remark 11.36) that any tracial probability space embeds (in a trace preserving way) in a II 1 -factor, we can rephrase the problem more generally as follows. Connes’s question: Is it true that any tracial probability space (M,τ ) embeds in a trace preserving way in an ultraproduct of matricial tracial probability spaces (MU ,τU ) for some U ? A priori, the embedding in the preceding question must preserve much of the algebraic structure. But actually, much less is needed for the same conclusion: as we will show in Theorem 12.3, a mere isometric assumption (as opposed to a completely isometric one) implies a strong ∗-isomorphic conclusion. Not surprisingly, one of the key ingredients to prove this comes from the isometric theory (or the Jordan theory) of C ∗ -algebras, namely the


following result, going back to Kadison [142] and Størmer [233]. See [123] for an excellent detailed account of this theory. Theorem 12.1 Let (M,τ ) and (N,ϕ) be two tracial probability spaces, i.e. two von Neumann algebras equipped with faithful, normal, normalized traces. Let T : L2 (τ ) → L2 (ϕ) be an isometry such that T (1) = 1 and T ∗ (1) = 1. In other words, T is unital and trace preserving. Assume that T (BM ) ⊂ BN (i.e. T defines a mapping of norm 1 from M to N ). Then T : M → N decomposes as a direct sum of a ∗-homomorphism and a ∗-antihomomorphism. More precisely, there is an orthogonal decomposition I = P + Q in N (P ⊥ Q) with P ,Q ∈ T (M) ∩ N and an associated decomposition ∀x ∈ M

T (x) = P T (x)P + QT(x)Q

such that x → P T(x)P is a ∗-homomorphism and x → QT(x)Q is a ∗-antihomomorphism.
Proof By assumption T : L2(τ) → L2(ϕ) is isometric but we assume ‖T : M → N‖ = 1. From now on, we view T as a mapping from M to N. Note that the latter mapping is normal since its adjoint takes L2(ϕ) to L2(τ), and hence by density (since L2(ϕ) is norm dense in N∗ = L1(ϕ)) it takes L1(ϕ) to L1(τ), or equivalently N∗ to M∗. Recapitulating, T : M → N is unital, normal, and preserves the trace. We will show that T takes unitaries to unitaries. Indeed, for any unitary u ∈ M, we have ‖T(u)‖_N ≤ 1 and ‖T(u)‖²_2 = ‖u‖²_2 = 1, so T(u) is unitary by Remark 11.13. We claim this implies that T is a Jordan ∗-morphism, i.e. that for any hermitian h, its image T(h) is hermitian and we have T(h²) = T(h)². Indeed, since T(e^{ith}) = 1 + itT(h) + o(t) and ‖T(e^{ith})‖ ≤ 1, T(h) must be hermitian, and actually since T(e^{ith}) is unitary, if we develop further for small t ∈ R,

1 = (T(e^{ith}))∗ T(e^{ith}) = 1 + t²(T(h)² − T(h²)) + o(t²),

we find that necessarily T(h)² = T(h²). From the theory of Jordan representations as already used in Theorem 5.6 (see [123, p. 163] or also [147, pp. 588–589]), we know that there is an orthogonal decomposition I = P + Q with P, Q ∈ T(M) ∩ N, P ⊥ Q, that gives us a decomposition ∀x ∈ M

T (x) = P T (x)P + QT(x)Q

such that x → P T (x)P is a ∗-homomorphism and x → QT (x)Q is a ∗-antihomomorphism. (Of course these are a priori nonunital.) Proposition 12.2 Let (M,τ ) and (N,ϕ) be two tracial probability spaces and let W ⊂ U (M) be a weak*-dense subset of the unitary group of M, i.e. such that U (M) is the closure of W in L2 (τ ). The following are equivalent:


(i) There is a unital trace preserving (C-linear) isometry T : L2(τ) → L2(ϕ) such that ‖T : M → N‖ ≤ 1.
(ii) There is a function r : {1} ∪ W → U(N) such that r(1) = 1 and

∀u1,u2 ∈ {1} ∪ W    τ(u1∗ u2) = ϕ(r(u1)∗ r(u2)).    (12.1)

(ii)’ There is a function r : W → BN such that

∀u1,u2 ∈ W    τ(u1∗ u2) = ϕ(r(u1)∗ r(u2)).    (12.2)

Proof Assume (i). The first step of the proof of Theorem 12.1 shows that T(U(M)) ⊂ U(N). Then (i) ⇒ (ii) is obvious: we just let r = T|{1}∪W and observe that

∀u1,u2 ∈ U(M) ∀μ ∈ C    ‖u1 + μu2‖²_{L2(τ)} = 1 + |μ|² + 2 Re(μ τ(u1∗ u2)).

Since u1 + μu2 2L2 (τ ) = r(u1 ) + μr(u2 )2L2 (ϕ) , it follows that r = T|{1}∪W satisfies (ii). Assume (ii). Assuming for a moment that such a T exists, let T : span[{1} ∪ W] → N be the linear mapping extending r. For any x ∈ span[{1} ∪ W], we have by (12.1) T (x)2L2 (ϕ) = x2L2 (τ ), and this shows that T is unambiguously well defined. Note that T preserves the trace (take u1 = 1 in (12.1)), and T (1) = r(1) = 1. By our density assumption, the unitaries in W are dense for the L2 (τ )-norm in the set U (M) of unitaries of M (which linearly spans M), and hence span[W] is dense in L2 (τ ). Therefore T extends to a unital trace preserving isometry, still denoted by T , from L2 (τ ) to L2 (ϕ), such that T (W) ⊂ U (N ). Moreover, since the set of unitary elements is closed in L2 (ϕ) (see e.g. Remark 11.13), and W is assumed dense in U (M), we have T (U (M)) ⊂ U (N ). By the Russo–Dye Theorem A.18, the convex hull of U (M) is norm-dense (and a fortiori L2 (τ )dense) in the unit ball of M, and the unit ball of N is closed in L2 (ϕ). Therefore T : M → N = 1. This shows that (i) holds, and hence (i) ⇔ (ii). (ii) ⇒ (ii)’ is trivial. Conversely, assume (ii)’. Then ϕ(r(u)∗ r(u)) = 1 for any u ∈ W. The fact that r(W) ⊂ U (N ) is automatic by Remark 11.13. To take care of the condition r(1) = 1, we will change W and r. We pick −1 a fixed u0 ∈ W and we set W  = u−1 0 W = {u0 u | u ∈ W} and −1  −1   r (u0 u) = r(u0 ) r(u). Then 1 ∈ W , r (1) = 1, (W ,r  ) satisfy (12.1) and W  is still dense in U (M). Therefore, by the (already proved) implication (ii) ⇒ (i) applied to W  and r  , we see that (i) holds, and we already proved that (i) implies (ii). The following criterion for M to embed in some MU is due to Kirchberg.


Theorem 12.3 (Kirchberg’s criterion) Let (M,τ) be a tracial probability space. Let W ⊂ U(M) be a weak*-dense unital subset of the unitary group of M, i.e. we assume that U(M) is the closure of W in L2(τ). The following are equivalent:
(i) There is a (unital) trace preserving embedding of M in an ultraproduct of matrix algebras.
(ii) For any ε > 0, any n and any u1, . . . ,un ∈ W there is an integer N and unitary N × N matrices v1, . . . ,vn such that

∀i,j = 1, . . . ,n    |τ(ui∗ uj) − τN(vi∗ vj)| ≤ ε,    (12.3)

and

∀j = 1, . . . ,n    |τ(uj) − τN(vj)| ≤ ε.    (12.4)

(iii) For any ε > 0, any n and any u1, . . . ,un ∈ W there is an integer N and N × N matrices x1, . . . ,xn in the unit ball of MN , equipped with its normalized trace τN , such that

∀i,j = 1, . . . ,n    |τ(ui∗ uj) − τN(xi∗ xj)| ≤ ε.    (12.5)

Moreover, if W ⊂ U (M) is countable, (ii) ⇒ (i) holds for an ultraproduct based on a sequence of matrix algebras. Proof (i) ⇒ (ii) is essentially obvious with W = U (M) (recall Lemma 11.30). (ii) ⇒ (iii) is trivial. We give the rest of the proof assuming that W is countable. The general case can be treated similarly. Assume (iii). To show that (i) holds we will use Proposition 12.2 and Theorem 12.1. Let {u1,u2, . . .} be an enumeration of W. Let I be the set of pairs (n,ε) with n ∈ N and ε > 0. We view it as a directed set for the order defined by (n,ε) ≤ (n,ε ) if n ≤ n and ε ≤ ε. We may restrict ε to be in a countable sequence decreasing to 0 so that I be countable. Let U be an ultrafilter refining the resulting net (see Remark A.6). For each i = (n,ε), we can find N (i) and (x1 (i), . . . ,xn (i)) in the unit ball of MN (i) such that (12.5) holds. We then set ∀k ≤ n vk (i) = xk (i), ∀k > n vk (i) = 1 (say). The values for k > n will turn out to be irrelevant. Let M(i) = MN (i) equipped with its normalized trace. Let B and MU be as before. We may define Vk ∈ BB by setting Vk = (vk (i))i∈I .


Clearly, by (12.5), for any k, ℓ ∈ N we have limU τi(Vk(i)∗ Vℓ(i)) = τ(uk∗ uℓ). In other words, if we denote by vkU ∈ BMU the equivalence class of Vk modulo IU , we have

τU((vkU)∗ vℓU) = τ(uk∗ uℓ).    (12.6)

Thus if we define r : W → BMU by r(uk ) = vkU , then (12.2) holds for ϕ = τU . By Proposition 12.2 and Theorem 12.1 there is a unital trace preserving isometry T : M → MU and an orthogonal decomposition I = P + Q with P ,Q ∈ T (M) ∩ MU that induces a decomposition ∀x ∈ M

T (x) = P T (x)P + QT(x)Q

such that x → P T (x)P is a ∗-homomorphism and x → QT (x)Q is a ∗-antihomomorphism. We claim that there is a unital trace preserving ∗-anti-isomorphism κ : QMU Q → QMU Q. Using this claim we can obtain a bona fide unital and trace preserving ∗-homomorphism T  embedding M in MU by setting T  (x) = P T (x)P + κ(QT(x)Q), whence (i). To check the claim, first observe that MU is clearly ∗-anti-isomorphic to itself by a trace preserving map y → t y (associated to matrix transposition). More precisely, for any y = qU ((yi )) ∈ MU we set t y = qU ((t yi )). Note that the latter map gives us a ∗-anti-isomorphism from QMU Q (with unit Q) to t QMU t Q (with unit t Q). Let (Q ) ∈ B be a representative of Q such that Q is a projection in i i M(i) = MN (i) for all i ∈ I (see Remark 11.31). Since Qi and t Qi have the same trace, their ranges have the same dimension, so there is a unitary matrix ϒi such that Qi = ϒi t Qi ϒi∗ . Then if ϒ ∈ MU is the unitary associated to (ϒi ) the mapping κ defined by κ(y) = ϒ t yϒ ∗ is the desired ∗-antiisomorphism. Remark 12.4 By (i) ⇔ (iii) in Theorem 12.3 to show that a tracial probability space (M,τ ) embeds in a trace preserving way in an ultraproduct of matrix algebras, it suffices to check that this holds for any finitely generated (and a fortiori) weak* separable von Neumann subalgebra of M. Remark 12.5 (Separable factors suffice) Recall that, by definition, a von Neumann algebra M is a factor if its center is trivial. By Remark 11.36, to answer positively Connes’s question for all finite von Neumann algebras, it actually suffices to answer it for finite “factors” on a separable Hilbert space (which incidentally is the original question raised in [61]). Indeed, by Remark 11.36 any weak* separable tracial probability space embeds in a trace preserving way into one that is a factor. Moreover, by Remark 12.4 the weak*


separable case of the Connes embedding problem implies the general one. Actually, any embedding of a factor M as a C ∗ -subalgebra of an ultraproduct MU of matrix algebras is automatically trace preserving (and hence normal) since M has a unique tracial state (see Remark 11.33). Thus for the Connes embedding problem it suffices to show that any weak* separable finite factor embeds as a C ∗ -subalgebra of a matricial MU . Using Lemma 11.30, one easily deduces the following fact (which can also be proved by observing that an ultraproduct of ultraproducts is again an ultraproduct). Corollary 12.6 Let (M(i),τi )i∈I be a family of tracial probability spaces. Assume that each one of them embeds in a trace preserving way into an ultraproduct of matrix algebras. Then the same is true for their ultraproduct (MU ,τU ) relative to any ultrafilter U on I . For future reference it may be worthwhile to formulate the following obvious consequence: Corollary 12.7 Let (M,τ ) and (N,ϕ) be two tracial probability spaces. Assume that there is a trace preserving embedding of M in an ultraproduct of matrix algebras. For the same to hold for (N,ϕ) the following condition is sufficient: For any ε > 0, any n and any u0, . . . un ∈ U (N ) there are v0,v1, . . . vn ∈ BM such that ∀i,j = 0, . . . ,n

|ϕ(ui∗ uj) − τ(vi∗ vj)| ≤ ε.    (12.7)

Proof We may replace M by MU . Then N satisfies (iii) in Theorem 12.3 with W = U (N ). The next variant is more involved. Here we use Lemma 11.45 to further refine the preceding criterion. Theorem 12.8 The conditions (i)–(iii) in Theorem 12.3 are equivalent to the following ones: (iv) For any ε > 0, any n and any u1, . . . un ∈ W, there are an integer N , matrices x1, . . . xn in the unit ball of MN and η ∈ MN with τN (η∗ η) = 1 such that ∀i,j = 1, . . . ,n

|τ(ui∗ uj) − τN(xi∗ η xj η∗)| < ε.    (12.8)

(iv)’ For any ε > 0, any n and any u1, . . . ,un ∈ W, there are x1, . . . ,xn in the unit ball of B(ℓ2) and a Hilbert–Schmidt operator h ∈ B(ℓ2) with tr(h∗h) = 1 such that

∀i,j = 1, . . . ,n    |τ(ui∗ uj) − tr(xi∗ h xj h∗)| < ε.    (12.9)


(v) For any ε > 0, any n and any u1, . . . ,un ∈ W there are an integer N, N × N unitary matrices v1, . . . ,vn and η ∈ MN with τN(η∗η) = 1 such that

∀i,j = 1, . . . ,n    |τ(ui∗ uj) − τN(vi∗ η vj η∗)| < ε.    (12.10)

Proof (iii) ⇒ (iv) is obvious (with η = 1) and (iv) ⇒ (iv)’ is trivial (with h = N −1/2 η). Conversely, if (iv)’ holds we may assume by density of the finite rank operators in S2 (2 ) that there is a projection P of finite rank N such that h = PhP. We can replace xj by Pxj P and setting η = N 1/2 h we see that (12.9) becomes (12.8). This shows (iv) ⇔ (iv)’. Assume (iv). By polar decomposition we can write xj = vj |xj | with vj unitary and also xj∗ = vj∗ (vj |xj |vj∗ ) = vj∗ |xj∗ |. Then τN (xi∗ ηxj η∗ ) = τN (vi∗ |xi∗ |ηvj |xj |η∗ ). Thus to show (v) it suffices to prove that |τN (vi∗ |xi∗ |ηvj |xj |η∗ ) − τN (vi∗ ηvj η∗ )| ≤ f1 (ε)

(12.11)

where f1 depends only on ε and f1 (ε) = o(ε). We will denote f2,f3, . . . functions of the same kind. To prove (12.11) it suffices to show that |xj |η∗ − η∗ L2 (τN ) ≤ f1 (ε)/2 and |xi∗ |η − ηL2 (τN ) ≤ f1 (ε)/2. (12.12) Taking i = j in (12.8) we find |1 − xj η∗,η∗ xj L2 (τN ) | ≤ ε,

(12.13)

and hence η∗ xj − xj η∗ 2L2 (τN ) ≤ 2ε. Therefore |1 − xj η∗,xj η∗ L2 (τN ) | ≤ √ f2 (ε) = ε + 2ε, which means |1−τN (ηxj∗ xj η∗ )| ≤ f2 (ε). Since |xj |2 ≤ |xj | we have 0 ≤ τN (ηxj∗ xj η∗ ) ≤ τN (η|xj |η∗ ) ≤ 1 and hence |1 − τN (η|xj |η∗ )| ≤ f2 (ε). The latter means |1 − η∗,|xj |η∗ L2 (τN ) | ≤ f2 (ε), and hence η∗ − |xj |η∗ 2L2 (τN ) ≤ 2f2 (ε), which proves the first part of (12.12). We now reapply the same argument starting from |1 − xj∗ η,ηxj∗ L2 (τN ) | ≤ ε instead of (12.13) and this gives us the second part of (12.12), so that (v) follows. It remains to show (v) ⇒ (iii). Assume (v). We have for any j |1 − vj η,ηvj L2 (τN ) | = |1 − τN (vj∗ ηvj η∗ )| ≤ ε, and hence ηvj −vj η2L2 (τN ) ≤ 2ε. Using ηη∗ −vj ηη∗ vj∗ = (ηvj −vj η)vj∗ η∗ + vj η(vj∗ η∗ − η∗ vj∗ ) we find ηη∗ − vj ηη∗ vj∗ L1 (τN ) ≤ 2(2ε)1/2, and also |τN (vi∗ (ηvj )η∗ ) − τN (vi∗ (vj η)η∗ )| ≤ (2ε)1/2 .

(12.14)


 We may assume that ηη∗ has rational eigenvalues. Let H = N 2 and let v = −1 v⊗Id2 ∈ B(H ⊗2 2 ) as in Lemma 11.45. The latter associates to t = N ηη∗ a projection R ∈ B(H ⊗2 2 ) of finite rank r, such that

|τN (vi∗ vj ηη∗ ) − r −1 tr(Rvi R ∗ vj )| ≤ f3 (ε). By (12.14) ∗

|τN (vi∗ ηvj η∗ ) − r −1 tr(Rvi Rvj )| ≤ (2ε)1/2 + f3 (ε), and hence lastly ∗

|τ (u∗i uj ) − r −1 tr(Rvi Rvj )| ≤ ε + (2ε)1/2 + f3 (ε). Let H ⊂ H ⊗2 2 be the range of R. We conclude that (iii) holds with r viewed as an operator on H, or now playing the role of N and xj = Rvj |H equivalently as an element of Mr .
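To make condition (iii) of Theorem 12.3 concrete, here is a small numerical sketch (Python with NumPy; the helper names ntr and criterion_gap are ours, not from any library). It simply evaluates the quantities |τ(ui∗ uj) − τN(xi∗ xj)| appearing in (12.5) for candidate matrices x1, . . . ,xn and a prescribed matrix of target traces; in the toy check the target is realized exactly, i.e. with ε = 0.

    import numpy as np

    def ntr(a):
        # normalized trace tau_N on N x N matrices
        return np.trace(a) / a.shape[0]

    def criterion_gap(target, mats):
        # max over i,j of |target[i][j] - tau_N(x_i^* x_j)|, the quantity bounded by eps in (12.5);
        # target[i][j] plays the role of tau(u_i^* u_j)
        n = len(mats)
        gap = 0.0
        for i in range(n):
            for j in range(n):
                gap = max(gap, abs(target[i][j] - ntr(mats[i].conj().T @ mats[j])))
        return gap

    # toy check: take (M, tau) = (M_2, tau_2) itself and u_1, u_2 two unitaries in M_2
    u1 = np.eye(2, dtype=complex)
    u2 = np.array([[0, 1], [1, 0]], dtype=complex)
    target = [[ntr(a.conj().T @ b) for b in (u1, u2)] for a in (u1, u2)]
    print(criterion_gap(target, [u1, u2]))   # 0.0: condition (iii) holds here with eps = 0

Of course, the substance of Theorem 12.3 is the existence of such matrices for every finite set of unitaries of M; the sketch only checks a given candidate.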

12.2 The approximately finite-dimensional (i.e. “hyperfinite”) II1 -factor Although we prefer not to use this in the sequel, we wish to briefly mention here another way to formulate the Connes question in terms of the socalled approximately finite-dimensional (i.e. “hyperfinite”) II 1 -factor (R,τ0 ), which can be defined as the unique countably generated, approximately finitedimensional, tracial probability space with trivial center and no atom (i.e. no nonzero minimal projection). One can legitimately think of (R,τ0 ) as a noncommutative analogue of the Lebesgue interval ([0,1],dt); indeed, the latter is also the unique countably generated atomless probability space. The uniqueness of (R,τ0 ) (up to isomorphism) goes back to Murray and von Neumann who proved that any two countably generated finite factors with no atom (the so-called II 1 -factors) are isomorphic if each is approximately finite dimensional. For a detailed proof see [146, p. 896]. Many years later, solving a longstanding problem, Connes [61] proved that for such algebras injective implies (and hence is equivalent to) approximately finite dimensional. Thus (R,τ0 ) can be described as the unique (up to isomorphism) injective, tracial probability space with trivial center and no atom on a separable Hilbert space. We will not prove any of these uniqueness theorems. We briefly describe one classical construction by which (R,τ0 ) can be produced, thus showing its “existence” but we take its uniqueness for granted. The quick construction we outline highlights that there is a copy of (R,τ0 ) inside an ultraproduct of matricial tracial probability spaces. The Connes embedding problem can then be reformulated using the


“ultrapowers” of (R,τ0 ). Let (R(i))i∈I be a family of copies of R. Then the ultraproduct of (R(i),τ0 ) with respect to an ultrafilter U is called an ultrapower of (R,τ0 ); we denote it by (R U ,τ0U ). We will show that a tracial probability space (M,τ ) embeds (trace preservingly) in R U for some U if and only if it similarly embeds in an ultraproduct of matricial tracial probability spaces. Let (mi )i∈N be a sequence of integers ≥2. The traditional way is to define (R,τ0 ) as an infinite tensor product (often called “ITPFI” in the literature) of the form ' (Mmi ,τmi ). (12.15) i∈N

By the uniqueness results just mentioned, up to isomorphism, the resulting algebra does not depend on the choice of the sequence (mi ), and the simplest choice is clearly to take mi = 2 for all i, but we will not prove this either. Let N (i) = k≤i mk . Let M(i) = MN (i)  ⊗k≤i Mmk and let τi be the normalized trace on M(i). Note that M(k + 1)  M(k) ⊗ Mmk+1 and more generally, for any i > k M(i)  M(k) ⊗ Mmk+1 ⊗ · · · ⊗ Mmi so that we have a natural trace preserving embedding M(k) → M(i) taking x ∈ M(k) to x ⊗ 1 ⊗ · · · ⊗ 1 ∈ M(i). Using these embeddings we may think of M(k) as a ∗-subalgebra of M(i) and form the unital ∗-algebra that is the union (i.e. formally the inductive limit) A = ∪M(i). It is convenient to think of a typical element x of A as x = a ⊗ 1 ⊗ 1 ⊗ · · · with a ∈ M(i) for some i, followed by an infinite sequence of ⊗1’s. Then τ (x) = τi (a) defines a linear functional on A, such that τ (x ∗ x) = τ (xx∗ ) ≥ 0 and τ (1) = 1. The GNS construction applied to A produces a Hilbert space H and an injective ∗-homomorphism π : A → B(H ), such that τ (x) = 1,π(x)1 (x ∈ A). We then define ⊗i∈N (Mmi ,τmi ) as R = π(A) equipped with the natural extension of τ , that is a faithful normal trace τ0 on π(A) . We have a natural identification H  L2 (τ0 ) with which π becomes the representation L of left multiplication on L2 (τ0 ). We now relate this construction to ultraproducts. Let U be a non trivial ultrafilter on N and let (MU ,τU ) be the ultraproduct of (M(i),τi ). We will show that ⊗i∈N (Mmi ,τm i ) embeds in a trace preserving way into MU .  As before, let B = ⊕ i∈I M(i) ∞ , with quotient map qU : B → MU , let vk : M(k) → B be the map taking a ∈ M(k) to b = (bi ) ∈ B defined by bi = a ⊗ 1 ⊗ · · · ⊗ 1 for all i ≥ k and (this is actually somewhat irrelevant)


bi = 0 for all i < k. Then let uk : M(k) → MU be the map taking a ∈ M(k) to qU (vk (a)). Note τU (uk (a)) = τk (a) and uk : M(k) → MU is an embedding. Let Ak = uk (M(k)) ⊂ MU . With the obvious identification (corresponding to a → a ⊗ 1) we have Ak ⊂ Ak+1 . This gives us an embedding ψ : A ⊂ MU such that τU (ψ(x)) = τ0 (x) for any x ∈ A. Clearly, this extends to an isometry T : L2 (R,τ0 ) → L2 (MU ,τU ). By Proposition 11.19 that same map defines a trace preserving embedding of (R,τ0 ) into (MU ,τU ). Proposition 12.9 Let (M,τ ) be a tracial probability space on a separable Hilbert space. Then there is a trace preserving embedding of M into an ultraproduct of matrix algebras if and only if there is one of M into R U for some ultrafilter U . Proof By Corollary 12.6 there is a trace preserving embedding of R U into an ultraproduct of matrix algebras. This settles the “if part.” For the converse, recall that (12.15) gives us a copy of R no matter what (mi ) is. Thus MN (i) embeds in R for each i, and hence any ultraproduct of (MN (i) ) relative to U embeds in R U . Remark 12.10 As most ultraproducts, the von Neumann algebra R U is defined on a nonseparable Hilbert space and this is unavoidable. The appearance of large cardinals suggests that issues from logic should play a role. Consider for instance the very natural question whether the ultrapowers R U (for varying nontrivial ultrafilters on N) are all isomorphic: a positive answer turns out to be equivalent to the continuum hypothesis [86]. See [86–89] where the analogous question for ultraproducts of (Mn )n≥1 is discussed as well as other similar issues.
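The elementary point underlying the construction of R via (12.15) is that each embedding M(k) → M(k) ⊗ Mm, a → a ⊗ 1, preserves the normalized trace. A minimal sketch of this (Python with NumPy; the names ntr and embed are ours):

    import numpy as np

    def ntr(a):
        # normalized trace
        return np.trace(a) / a.shape[0]

    def embed(a, m):
        # the embedding M(k) -> M(k) (x) M_m, a |-> a (x) 1_m, realized as a Kronecker product
        return np.kron(a, np.eye(m))

    rng = np.random.default_rng(0)
    a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    b = embed(a, 3)                       # image of a in M_2 (x) M_3, identified with M_6
    print(np.isclose(ntr(a), ntr(b)))     # True: the normalized trace is preserved

This is exactly what makes the maps uk : M(k) → MU above compatible with τU.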

12.3 Hyperlinear groups Definition 12.11 A discrete group G will be called approximately linear (also called “hyperlinear”) if (MG,τG ) embeds in a trace preserving way in an ultraproduct of matricial tracial probability spaces, such as (MU ,τU ) with all M(i)’s matricial. In other words, G is approximately linear (or “hyperlinear”) if for (MG,τG ) the answer to the Connes question is positive. We already know (see Remark 11.56) that this holds if G is amenable or has the factorization property. Just like “hyperfinite,” the term “hyperlinear” is hardly a good choice since it does not imply linear in any reasonable sense, so we prefer to use “approximately linear” which seems more appropriate.


Theorem 12.12 The following properties of a discrete group G are equivalent:
(i) The group G is approximately linear (so-called hyperlinear).
(ii) There is a group representation π : G → U(MU) embedding G into the unitary group U(MU) of an ultraproduct of matricial tracial probability spaces and satisfying

∀t ∈ G    τU(π(t)) = τG(λG(t)).    (12.16)

(iii) For any finite subset S ⊂ G containing the unit e and any ε > 0 there is an integer N < ∞ and a function ψ : S → UN with values in the group UN = U(MN) of N × N unitary matrices such that

∀s,t ∈ S    ‖ψ(s)ψ(t) − ψ(st)‖_{L2(τN)} < ε,

and

|τN(ψ(e)) − 1| < ε    and    ∀t ∈ S, t ≠ e,    |τN(ψ(t))| < ε.

Proof Assume (i). Let  : MG → MU the embedding. Let π(t) = (λG (t)) (t ∈ G). Then (ii) is immediate. Assume (ii). By Lemma 11.30 each π(t) has a unitary representative modulo IU . Thus, for each i there is πi (t) ∈ U (M(i)) such that qU ((πi (t))) = π(t). Since π(st) = π(s)π(t) (s,t ∈ G), we have limU πi (s)πi (t) − πi (st)L2 (τi ) = 0 and since τ (π(e)) = 1 and τ (π(t)) = 0 if t = e, we have limU |τi (πi (e)) − 1| = 0 and limU |τi (πi (t))| = 0 if t = e. Recall that (M(i),τi ) = (MN (i),τN (i) ) for some N (i) < ∞ Thus, it suffices to take ψ = πi and to choose i far enough relative to U to obtain (iii). Assume (iii). Let I be the set of pairs (S,ε) with S ⊂ G finite subset and ε > 0, with the usual ordering (S,ε) ≤ (S ,ε ) if S ⊂ S  and ε ≤ ε. Let U be an ultrafilter refining the net associated to this directed set (see Remark A.6). For any i = (S,ε), when t ∈ S we set πi (t) = ψ(t) where ψ is the function given by (iii); and when t ∈ S (this is actually irrelevant) we set πi (t) = 1. Let π(t) = qU ((πi (t))). Then it is easy to check that (ii) holds, but our goal is (i). Let A = span[λG (t) | t ∈ G] ⊂ L2 (τG ). Let T : A → MU be the ∗-homomorphism taking λG (t) to π(t). Note that T (a)2L2 (τU ) = τU (π(a ∗ a)) and a2L2 (τG ) = τG (a ∗ a). By (12.16), for any a ∈ A, we have T (a)L2 (τU ) = aL2 (τG ) , and hence T extends to an isometric embedding from L2 (τG ) to L2 (τU ). Therefore, by Proposition 11.19, T also extends to a normal (trace preserving) embedding on MG into MU , showing that (i) holds. Alternatively, for (ii) ⇒ (i) we could invoke Remark 11.20, observing that (ii) simply means that (λG (t))t∈G and (π(t))t∈G have the same ∗-distribution.
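Condition (iii) of Theorem 12.12 can be verified by hand for the simplest amenable groups. For instance, for G = Z one may take ψ(t) = U^t with U the cyclic shift on C^N: ψ is then exactly multiplicative on any finite window, and the normalized trace of ψ(t) vanishes for 0 < |t| < N. A short sketch (Python with NumPy; the names are ours):

    import numpy as np

    N = 16
    U = np.roll(np.eye(N), 1, axis=0)          # cyclic shift permutation matrix on C^N, U^N = 1
    ntr = lambda a: np.trace(a) / N            # normalized trace tau_N

    def psi(t):
        return np.linalg.matrix_power(U, t % N)

    S = range(-3, 4)                           # a finite subset of Z containing 0

    # multiplicativity defect measured in L^2(tau_N): here it is exactly 0
    defect = max(np.sqrt(abs(ntr((psi(s) @ psi(t) - psi(s + t)).conj().T
                                 @ (psi(s) @ psi(t) - psi(s + t)))))
                 for s in S for t in S)
    bad_traces = max(abs(ntr(psi(t))) for t in S if t % N != 0)
    print(defect, abs(ntr(psi(0)) - 1), bad_traces)   # 0.0 0.0 0.0: condition (iii) with eps = 0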


Remark 12.13 If we take s = t = e in ‖ψ(s)ψ(t) − ψ(st)‖_{L2(τN)} < ε < 1, it follows that the unitary matrix ψ(e) satisfies |τN(ψ(e)) − 1| < √(2ε), so the condition |τN(ψ(e)) − 1| < ε could be omitted.

12.4 Residually finite groups and Sofic groups We denote by SN the group of permutations of the set with N elements, i.e. the “symmetric group.” Note the embedding SN ⊂ UN obtained by identifying a permutation σ with the N × N unitary matrix u defined by u(ej ) = eσ (j ) . Note that τN (u) = N −1 |{j ∈ {1, . . . ,N} | σ (j ) = j }| is the proportion of the number of fixed points of σ . Definition 12.14 A discrete group G is called “sofic” if it satisfies the condition (iii) in Theorem 12.12 with a function ψ taking values in permutation matrices. Equivalently, and more explicitly, G is sofic if for any finite subset S ⊂ G with eG ∈ S and any ε > 0 there is an integer N < ∞ and a function ψ : S → SN such that ∀s,t ∈ S

|{j ∈ {1, . . . ,N} | (ψ(s)ψ(t))(j) ≠ ψ(st)(j)}| ≤ εN,

and |{j ∈ {1, . . . ,N} | ψ(e)(j) ≠ j}| ≤ εN and ∀t ∈ S \ {e}, |{j ∈ {1, . . . ,N} | ψ(t)(j) = j}| ≤ εN.
Remark 12.15 (About examples of sofic groups) Using Følner sequences (see Remark 3.33), we show next that amenable groups are sofic, but also, as we will soon show (see Lemma 12.19), all free groups are sofic, which leads one to the important open question whether every group is sofic. This seems to be the group theoretic analogue of the Connes problem. The notion was introduced by Gromov and the term “sofic” was coined by B. Weiss (sofi means finite in Hebrew). We refer to a series of papers by Elek and Szabó (for instance [81]) for more information on sofic groups.
Lemma 12.16 Amenable groups are sofic.
Proof By Remark 3.33 there is a net (Bi) formed of finite subsets of our amenable G such that

∀t ∈ G    lim_i |Bi \ t−1 Bi | |Bi |−1 = 0.


Let ψi (t) be a permutation of Bi that is equal to x → tx for any x ∈ Bi ∩t −1 Bi and that is extended to Bi \t −1 Bi in such a way that ψ(t) : Bi → Bi is bijective. Then for any unital finite set S ⊂ G and ε > 0 when i is far enough in the net we will have |Bi \ (st)−1 Bi | < (ε/3)|Bi | for any (s,t) ∈ S × S (and hence also for (s,eG ) and (eG,t)). It is then easy to check that ψ = ψi satisfies the conditions required in Definition 12.14 for G to be sofic. Definition 12.17 A group G is called residually finite if there exists a collection of finite groups (i ) and homomorphisms ϕi : G → i separating the points of G, i.e. for any finite subset S ⊂ G there is an i for which the restriction of ϕi to S is injective. Without loss of generality, we may assume that i = G/Ni where each Ni ⊂ G is a normal subgroup with finite index and ϕi is the canonical quotient map. Thus, G is residually finite if and only if it admits a family of normal subgroups with finite index (Ni ), directed by ( (downward) inclusion and such that i∈I Ni = {eG }. Proposition 12.18 Any residually finite group is sofic and any sofic group is approximately linear (so-called hyperlinear). Proof If G is residually finite, let S ⊂ G be a finite subset. There is a finite group  and a group homomorphism ψ : G →  that is injective on S. Let N = ||. We may view  as acting on itself by translation (and hence any t = e acts without fixed points), so that  ⊂ SN . Then, viewing ψ as acting into SN , we obtain the properties in Definition 12.14 with ε = 0. Therefore G is clearly sofic. The implication sofic ⇒ approximately linear (so-called hyperlinear) is obvious given the definition of sofic and (iii) ⇒ (i) in Theorem 12.12. The following fact is classical. Lemma 12.19 Free groups are residually finite (and a fortiori sofic). Proof Let G = FI . Let {gi | i ∈ I } be the (free) generators. Let C ⊂ G be a finite subset. It suffices to produce a (group) homomorphism h : G →  into a finite group  such that, for any c in C, we have h(c) = e if c = e, where e denotes the unit in  and e the unit in G (i.e. the “empty word”). We may assume that C ⊂ G where G is the subgroup generated by a finite subset {gi | i ∈ J } of the generators. Let k = max{|c| | c ∈ C} (here |c| denotes the length of the reduced word associated to c, i.e. the number of elements in {gi ,gi−1 | i ∈ I } used to express c in reduced form). We then set S = {t ∈ G | |t| ≤ k}.

We will take for  the (finite) group of all permutations of the (finite) set S. For any i in J , we introduce Si = {t ∈ S | gi t ∈ S}.


Then clearly Si ⊂ S and gi Si ⊂ S. Hence (since |Si | = |gi Si | and S is finite) there is a permutation σi : S → S such that σi (s) = gi s for any s in Si . Then if s,t ∈ S and if gi t = s (or equivalently t = gi−1 s) we have σi (t) = s (or equivalently t = σi−1 (s)). Thus it is easy to check that if a reduced word t = giε11 giε22 . . . giεmm (m ≤ k εi = ±1) lies in S (note that, by definition of S, e and all the subwords of t also lie in S) we have σiε11 σiε22 . . . σiεmm (e) = t. Therefore, if we define h : G →  as the unique homomorphism such that / J , we find, for t as before, h(t) = h(gi ) = σi ∀i ∈ J and σ (gi ) = e ∀i ∈ σiε11 σiε22 . . . σiεmm and h(t)(e) = t, in particular we have h(t) = e whenever t ∈ S and t = e. Since C ⊂ S, we obtain the announced result. Remark 12.20 By a famous result due to Malcev [177], finitely generated linear groups are residually finite. Using this, Lemma 12.19 could be deduced from Choi’s Theorem 9.18. Consequently, we obtain the following important fact, which was mentioned in passing by Connes in [61, p. 105] as motivation for his question discussed in §12. Theorem 12.21 ([255]) There is a trace preserving embedding of the von Neumann algebra of the free groups Fn or F∞ into an ultraproduct of matrix algebras (in other words free groups are approximately linear). Corollary 12.22 When G is a free group, there is a factorization of the w ˙ G : C ∗ (G)− ˙ G : C ∗ (G) → MG of the form Q → canonical ∗-homomorphism Q v →MG where w is a ∗-homomorphism, v is c.p. with vcb ≤ 1, and B is a B− von Neumann algebra with the WEP (actually B is injective). Proof By Lemma 12.19 and Proposition 12.18, G is approximately linear (socalled hyperlinear). We have MG ⊂ MU with a c.p. projection (the conditional expectation) P : MU → MG . The unitary representation π : G → U (MU ) appearing in property (ii) in Theorem 12.12 a lifting to a unitary

admits  representation  π : G → U (B) where B = ⊕ i∈I M(i) ∞ as in (11.14). Indeed, by the freeness of G, it suffices for this to be able to lift the images under π of each free generator, and this is guaranteed by Lemma 11.30. w →B and, denoting as before Then  π extends to a ∗-homomorphism C ∗ (G)− by q : B → MU the quotient map, we can take v = Pq. Note that B is injective and hence has the WEP (see Proposition 1.48 and Corollary 9.26). By Theorem 11.55, we deduce from Corollary 12.22: Corollary 12.23 Free groups have the factorization property described in §11.8.
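The ε = 0 case in the proof of Proposition 12.18 (a finite group acting on itself by left translation) can be checked mechanically: left translations are exactly multiplicative permutations, ψ(e) is the identity, and ψ(t) has no fixed point when t ≠ e, so all the counts in Definition 12.14 vanish. A small sketch (plain Python; the helper names psi and compose are ours), for the cyclic group Z/6Z:

    # left translation of a finite group on itself: the sofic conditions of
    # Definition 12.14 hold with eps = 0 (as in the proof of Proposition 12.18)
    m = 6
    G = range(m)                          # the cyclic group Z/mZ, written additively
    e = 0

    def psi(g):
        # the permutation of {0,...,m-1} given by left translation by g
        return tuple((g + j) % m for j in range(m))

    def compose(p, q):
        # product of permutations: (p q)(j) = p(q(j))
        return tuple(p[q[j]] for j in range(m))

    for s in G:
        for t in G:
            # exact multiplicativity: no j with (psi(s)psi(t))(j) != psi(st)(j)
            assert compose(psi(s), psi(t)) == psi((s + t) % m)

    assert psi(e) == tuple(range(m))       # psi(e) is the identity: no non-fixed points
    for t in G:
        if t != e:                         # and psi(t) has no fixed point at all when t != e
            assert all(psi(t)[j] != j for j in range(m))

    print("sofic conditions hold with eps = 0 for Z/%dZ" % m)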


12.5 Random matrix models

The term “matrix model” is frequently used with respect to a tracial probability space (M,τ). This is an alternative way to discuss the embedding in the Connes question. More precisely, assume that M is generated by a family of elements (xs)s∈S (indexed by some set S), consider a sequence of matricial sizes (nN)N≥1 and families (xs(N))s∈S in MnN such that for any polynomial P(Xs,Xs∗) in the noncommuting formal variables (Xs)s∈S we have

limN→∞ τnN(P(xs(N), xs(N)∗)) = τ(P(xs, xs∗)).

We then say that (xs )s∈S is a matrix model for (xs )s∈S , or (somewhat abusively) for (M,τ ). Thus to say that (M,τ ) admits a matrix model is but another way of saying that it embeds in a trace preserving way in an ultraproduct of matricial tracial probability spaces. However, the matrix model terminology is better adapted to the theory of random matrices. The latter provides very interesting and fruitful examples of matrix models, in connection with Voiculescu’s free probability theory (see [254]). For instance, the convergence of both the unitary and Gaussian models in the following statements, in which nN = N and S = {1,2, . . .}, is due to Voiculescu: Theorem 12.24 (Random unitary matrix model) Let (Us(N ) )s≥1 be an independent, identically distributed (i.i.d. in short) sequence of random matrices uniformly distributed over the unitary group UN . We assume (for convenience) all random elements defined on a probability space ( ,P). Let (M,τ ) = (MF∞ ,τF∞ ). Consider the sequence xs = λF∞ (gs ) (s ≥ 1) in M. Then for (N ) almost all ω ∈ , the sequence (Us (ω))s≥1 is a matrix model for (xs )s≥1 (with respect to N → ∞). Remark 12.25 (Random permutation matrix model) The same result is valid (N ) if we assume that (Us )s≥1 is an i.i.d. family uniformly distributed over the subgroup of UN formed of matrices of permutation of size N × N . This is due to Nica (see [179]). Fix a nontrivial ultrafilter U on N. Let MU be the ultraproduct of the family  (MN ,τN ) with respect to U . Let B = (⊕ N ≥1 MN )∞ . Let qU : B → MU

be the quotient map. Fix ω such that (Us(N ) (ω))N ≥1 is a matrix model for (λF∞ (gs )). We know by Remark 11.20 that the correspondence λF∞ (gs ) → qU ((Us(N ) (ω))N ≥1 ) extends to a trace preserving ∗-homomorphism JωU : MF∞ → MU embedding MF∞ as a von Neumann subalgebra of MU . Moreover, by Proposition 11.21 there is a c.p. contractive projection PωU from MU onto JωU (MF∞ ), whence the following statement.



Corollary 12.26 For almost all ω, the preceding map JωU : MF∞ → MU is a trace preserving (von Neumann sense) embedding and there is a c.p. contractive projection PωU from MU onto JωU (MF∞ ). The case of Gaussian random matrices is central. To state the result in that case one needs the notion of a free semicircular sequence (xs ), and that of a circular one (ys ), for which we refer to §11.4. It is known that the von Neumann algebra generated by either (xs ) or (ys ) is isomorphic to the von Neumann algebra M = MF∞ . (N ) (N ) Let (Xs )s≥1 (resp. (Ys )s≥1 ) be an independent, identically distributed (i.i.d. in short) sequence of random matrices each with the same distribution as a Gaussian model X(N ) (resp. Y (N ) ). By definition its entries X(N ) (i,j ) (for 1 ≤ i ≤ j ≤ N) are all independent mean zero (real valued) Gaussian variables with E|X(N ) (i,j )|2 = 1/N for all i < j and E|X(N ) (j,j )|2 = 2/N for all j ; the other entries are determined by X(N ) (i,j ) = X(N ) (j,i) so that X(N ) is a symmetric random matrix. This is known as the GOE random matrix model. By definition the entries Y (N ) (i,j ) (for 1 ≤ i,j ≤ N ) are i.i.d. complex valued Gaussian variables such that #(Y (N ) (i,j )) and '(Y (N ) (i,j )) are independent (real valued) Gaussian with mean zero and such that E|#(Y (N ) (i,j ))|2 = E|'(Y (N ) (i,j ))|2 = (2N )−1 = (1/2)E|Y (N ) (i,j )|2 . Theorem 12.27 (Gaussian random matrix model) Let N ≥ 1 be any matrix size. Let (xs )s≥1 (resp. (ys )s≥1 ) be a free semicircular (resp. circular) sequence in M. Then for almost all ω ∈ , the sequence (Xs(N ) (ω))s≥1 (resp. (Ys(N ) (ω))s≥1 ) is a matrix model for (xs )s≥1 (resp. (ys )s≥1 ). We refer the reader to [7, 254] for the proofs.

12.6 Characterization of nuclear von Neumann algebras It is easy to see from the definition that for any family (Hi )i∈I of Hilbert spaces the von Neumann algebra    B(Hi ) M= ⊕ i∈I



is injective. In the particular case I = N with Hn n-dimensional, the von Neumann algebra    Mn B= ⊕ n≥1



278

The Connes embedding problem

is injective (and a fortiori WEP). However, it is not nuclear. This was proved by S. Wassermann: Lemma 12.28 ([255]) The von Neumann algebra B is not nuclear. Proof Let G = Fn with 1 < n ≤ ∞. By Theorem 12.21, MG embeds in an ultraproduct MU of matrix algebras and by Proposition 11.21 there is a contractive c.p. projection (conditional expectation) P : MU → MG . A priori  B(Hi ))∞ for some family of finite-dimensional MU is a quotient of (⊕i∈I Hilbert spaces. Since G is countable and residually finite, from the proof of Theorem 12.21, we may as well assume that MU is a quotient of B. Now the nuclearity of B would imply by Theorem 8.15 the injectivity of MU , and hence also of MG (recall Corollary 1.47). Since we know by Theorem 3.30 (and the remarks after it) that MG is not injective (because G is not amenable) we conclude that B is not nuclear. More precisely, we have: Theorem 12.29 ([255]) Let M be a von Neumann algebra. The following are equivalent: (i) M is nuclear. (ii) M does not contain a copy of B as a von Neumann subalgebra. (iii) There is a finite set I , integers n(i) ≥ 1 (i ∈ I ) and commutative von Neumann algebras Ci such that M is isomorphic to  (⊕ i∈I Ci ⊗min Mn(i) )∞ . Sketch of proof Since B is injective, (i) ⇒ (ii) follows from the preceding Lemma (recall Remark 9.3). Assume (iii). By Remarks 4.9 and 4.10, M is nuclear, so (iii) ⇒ (i). The remaining implication (ii) ⇒ (iii) lies deeper. Its fully detailed proof requires classical results from the structural theory of von Neumann algebras that would take us too far off to cover in these notes. We merely outline the argument for the convenience of the reader. We have a decomposition M  MI ⊕ MII ⊕ MIII into three (possibly vanishing) parts called respectively of type I,II,III. By general results, it can be shown that if either MII = {0} (resp. MIII = {0}) then B embeds in MII (resp. MIII ). Thus we may assume M = MI , i.e. that M is of type I . Moreover, assuming (ii) we know that B(2 ) does not embed in M = MI . By the classification of type I von Neumann algebras,  M is isomorphic to a direct sum (⊕n≥1 Cn ⊗min Mn )∞ where each Cn is a commutative von Neumann algebra (i.e. Cn = L∞ ( n,μn ) for some measure space ( n,μn ) and Cn ⊗min Mn = L∞ ( n,μn ;Mn )). Let I = {n ≥ 1 | Cn = 0}. The assumption (ii) implies that I is a finite set. Thus we obtain (iii).



12.7 Notes and remarks As already mentioned in the text, the ideas for Theorem 12.1 go back to Kadison [142] and Størmer [233] (see [123]). Proposition 12.2 is an elementary fact formulated for the convenience of our presentation. Kirchberg’s criterion in Theorem 12.3 is much more substantial and Theorem 12.8 is even more so. The latter refined criterion is essentially due to Ozawa [191, th. 29]. It will be crucially used to prove Theorem 14.7. Concerning injective factors and in particular the uniqueness of the injective factor R, the fundamental reference is Connes’s paper [61]. The notion of sofic group has recently become quite popular. The names of Gromov and Weiss are associated with it. Initial work by Elek and Szabo [81] has been influential. Theorem 12.21 and its corollary are due to S. Wassermann [255] and independently to Connes [61]. The results in §12.5 are due to Voiculescu (see [254]), except for the random permutation model due to Nica [179]. The results of §12.6 are due to S. Wassermann [255].

13 Kirchberg’s conjecture

We now turn to the second problem of our series, which actually was explicitly formulated as a conjecture by Kirchberg.

13.1 LLP ⇒ WEP?

At the end of his landmark paper [155] Eberhard Kirchberg formulated several conjectures about the properties WEP and LLP. Essentially, he asked whether they are equivalent. However, the implication WEP ⇒ LLP was soon disproved by Marius Junge and the author in [141], where it was proved that the prototypical WEP C∗-algebra, namely B = B(ℓ2), fails the LLP. We return to this in more detail in §18.1. This left open the remaining conjecture, namely the implication LLP ⇒ WEP. Given that C is the prototypical example of a C∗-algebra with LLP, one can reformulate the conjecture like this:
Kirchberg’s Conjecture: The C∗-algebra C has the Weak Expectation Property (WEP).
Because of its equivalence with Connes’s problem, this is now widely considered as one of the most important open problems in Operator Algebra theory (if not the most important one). At this point, it is worthwhile to list several equivalent forms of Kirchberg’s conjecture.
Proposition 13.1 The following conjectures are all equivalent:
(i) C is WEP.
(i)’ The pair (C, C) is a nuclear pair.
(ii) C ⊗max C is residually finite dimensional.
(iii) C ⊗max C has a faithful tracial state.


(iv) For any free group F, C ∗ (F) has the WEP. (v) Any unital C ∗ -algebra is isomorphic to a quotient of a WEP C ∗ -algebra. (We call these QWEP.) (vi) Any von Neumann algebra is QWEP. (vii) LLP ⇒ WEP. Proof (i) ⇔ (i)’ is tautological in view of our definition of WEP. We first consider (i)–(iii). (i) ⇒ (ii) follows from Theorem 9.18 and Remark 9.19. (ii) ⇒ (iii) follows from Remark 9.17 since, of course, each Mn has a faithful tracial state. (iii) ⇒ (i)’ follows from Remark 11.34 applied to the GNS representation of the faithful tracial state, which is isometric on C ⊗max C . Whence (i)–(iii) are equivalent. (i)’ ⇒ (iv) follows from (9.4) (applied with B = C ). (iv)⇒(v) is clear since, by Proposition 3.39, any unital C ∗ -algebra is a quotient of C ∗ (F) for some F, and (v)⇒ (vi) is trivial. Let us show (vi)⇒ (vii). Assume (vi). Let C be a C ∗ -algebra with the LLP. By (vi), C ∗∗ is QWEP. By (i) ⇒ (ii) in Theorem 9.67, the linear map iC : C → C ∗∗ is WEP. Since the latter is max-injective, it follows that IdC and hence C itself is WEP, so that (vii) holds. Lastly we have (vii)⇒(i) since C has the LLP (by Theorem 9.6). Remark 13.2 (One-for-all . . . ) By Corollary 9.69 if C is QWEP, then it has the WEP and by Proposition 13.1 every C ∗ -algebra is QWEP. Motivated by this, we will say that a unital C ∗ -algebra A is “one-for-all” if the property: (viii) A is QWEP implies that all C ∗ -algebras are QWEP (in other words implies the Kirchberg conjecture). If A has a one-for-all quotient, then A itself is one-for-all. Moreover, by Corollary 9.70 if the identity of a one-for-all C ∗ -algebra B factors with unital c.p. maps through A, then A is also one-for-all (more generally see Remark 13.2). The obvious example of one-for-all is C ∗ (F) when F is any non-Abelian free group. More generally any C ∗ -algebra that admits C ∗ (F2 ) as a quotient is also an example, but it turns out there are more noteworthy examples. For instance the universal unital C ∗ -algebra of a contraction described in Remark 2.27 is one-for-all. Clearly Cu∗ C is generated (as a unital C ∗ -algebra) by the single polynomial P (X) = X, and any singly generated unital C ∗ -algebra is a quotient of Cu∗ C. It follows that any C ∗ -algebra that is generated by a pair of


(a priori noncommuting) hermitian contractions x1,x2 is a quotient of Cu∗ C since it is generated by (x1 + ix2 )/2. In particular, if Cj is generated by xj then the full (unital) free product C1 ∗ C2 is a quotient of Cu∗ C. It is a simple exercise to check that the C ∗ -algebra C ∗ (G) of a finite Abelian group G can be generated by a single hermitian element. Thus (see (4.13)) if G1,G2 are finite Abelian groups then C ∗ (G1 ∗ G2 ) is a quotient of Cu∗ C. This shows e.g. that C ∗ (Z2 ∗ Z3 ) is a quotient of Cu∗ C, but it is well known (see Remark 3.40) that F∞ is a subgroup of Z2 ∗ Z3 , and hence by Proposition 3.5 there is a unital c.p. factorization of the identity of C through C ∗ (Z2 ∗ Z3 ). Therefore, we can now conclude: if Cu∗ C is QWEP then so is C ∗ (Z2 ∗ Z3 ), and by Corollary 9.70 so is C . This shows that Cu∗ C is one-for-all. We remind the reader that Z2 ∗ Z2 being amenable, we do need Z3 here, see Remark 3.40. The same argument shows that the unital free product A1 ∗ A2 of two unital C ∗ -algebras is one-for-all if (say) A1 (resp. A2 ) admits C ∗ (Zn ) (resp. C ∗ (Zm )) as a quotient with n ≥ 2 and m ≥ 3. More generally, by Boca’s Theorem 2.24, the same holds if we have a unital c.p. factorization of the identity of C ∗ (Zn ) (resp. C ∗ (Zm )) through A1 (resp. A2 ). For instance, C ∗ (Zn ) admits such a factorization through Mn (because C ∗ (Zn )  n∞ and the latter can be identified with diagonal matrices in Mn , see (3.17)). This shows that Mn ∗ C ∗ (Z) is one-for-all if n ≥ 2. It is easy to show that Mn ∗ C ∗ (Z) = Mn (Bn ) for some unital C ∗ -algebra Bn called the Brown algebra (see Remark 6.31). Clearly B is QWEP if and only if Mn (B) is QWEP. Therefore Bn is one-forall. It can be shown that Bn is the unital C ∗ -algebra generated by the entries of a universal unitary block matrix in Mn (B(H )). In passing we observe that Mn ∗ C ∗ (Z) and hence Bn has the LLP since, by Theorem 9.44, the LLP is stable by free products. The free product of Cuntz algebras On ∗ Om is one-for-all if n,m ≥ 1. Another interesting example is B ⊗max B. Indeed, let G = F∞ . By Theorem 4.11 the identity of C factorizes with unital c.p. maps through MG ⊗max MG . More precisely, the latter factorization is of the form C → C ⊗max C → MG ⊗max MG → C .

(13.1)

But since by Corollary 12.22 the natural map C → MG factorizes with unital c.p. maps through some B(H ), it follows that in (13.1) the mapping C ⊗max C → MG ⊗max MG factorizes with unital c.p. maps through B(H )⊗max B(H ). It follows that the identity of C factorizes similarly through B(H ) ⊗max B(H ). In fact (see the proof of Corollary 12.22) we can take here H = 2 so we obtain that C factorizes with unital c.p. maps through B ⊗max B. Thus by Corollary 9.70 if B ⊗max B is QWEP so is C . In other words B ⊗max B is one-for-all.


13.2 Connection with Grothendieck’s theorem Curiously, there seems to be a connection between Grothendieck’s theorem (in short GT) (or Grothendieck’s inequality) and the Kirchberg conjecture. Indeed, the latter problem can be phrased as the identity of two norms on 1 ⊗1 , while GT implies that these two norms are equivalent, and their ratio is bounded by the Grothendieck constant KG . Here is the simplest formulation of the classical GT. Theorem 13.3 (GT) Let [aij ] be an n × n scalar matrix (n ≥ 1). Assume that for any n-tuples of scalars (αi ), (βj ) we have  aij αi βj ≤ supi |αi | supj |βj |. (13.2) Then for any Hilbert space H and any n-tuples (xi ),(yj ) in H , we have  aij xi ,yj  ≤ K sup xi  sup yj , (13.3) where K is a numerical constant. The best K (valid for all H and all n) is denoted by KG . In this statement the scalars can be either real or complex, but that affects R and in the constant KG , so we must distinguish its value in the real case KG C the complex case KG . To this day, its exact value is still unknown although it C < K R ≤ 1.782, see [210] for more information. In our is known that 1 < KG G context (in connection with spectral theory) it is natural to restrict to the case of complex scalars. Remark 13.4 If we restrict to positive semidefinite matrices [aij ], then the best constant in (13.3) is known to be exactly 4/π in the complex case (and π/2 in the real case). This is called the “little” GT, see [210, §5] for details.  Let (ei ) denote the canonical basis of 1 . Let t = aij ei ⊗ ej ∈ 1 ⊗ 1 be a tensor in the linear span of {ei ⊗ ej }. We denote    (13.4) tH  = sup aij xi ,yj  where the supremum is over all Hilbert spaces H and all xi ,yj in the unit ball of H . We identify n1 ⊂ 1 with the linear span of {ei | 1 ≤ i ≤ n} in 1 , so we may consider that t → tH  is a norm on n1 ⊗ n1 for each integer n ≥ 1. The classical “injective” Banach space tensor norm for an element t =  aij ei ⊗ ej ∈ n1 ⊗ n1 ⊂ 1 ⊗ 1 is given by the following formula.


‖t‖∨ = sup{ |∑ aij αi βj | : αi, βj ∈ C, supi |αi| ≤ 1, supj |βj| ≤ 1 }.    (13.5)

With this notation, GT in the form (13.3) can be restated as follows: there is a constant K such that for any n and any t in n1 ⊗ n1 we have tH  ≤ Kt∨ .

(13.6)

We will also need another norm introduced by Grothendieck as follows. We abusively denote again by {ei | 1 ≤ i ≤ n} the canonical basis of n∞ . Note that   aij ei ⊗ we have isometrically both n∞ = (n1 )∗ and n1 = (n∞ )∗ . Let t  = ej ∈ n∞ ⊗ n∞ (aij ∈ C). We define t  H = inf{supi xi  supj yj }

(13.7)

where the infimum runs over all Hilbert spaces H and all xi ,yj in H such that aij = xi ,yj  for all i,j = 1, . . . ,n. It is an easy exercise to check directly that this is a norm but actually this follows from Proposition 1.57. We denote by n1 ⊗H  n1 (resp. n∞ ⊗H n∞ ) the space n1 ⊗n1 (resp. n∞ ⊗n∞ ) equipped with the H  -norm (resp. H -norm). By definition of tH  , we have clearly    aij aij t  H ≤ 1 = sup{|t,t  | | t  H ≤ 1}, (13.8) tH  = sup so we have isometrically n1 ⊗H  n1 = (n∞ ⊗H n∞ )∗

and

n∞ ⊗H n∞ = (n1 ⊗H  n1 )∗ .
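As a rough numerical illustration of these two norms (real-scalar case only), the sketch below (Python with NumPy; the function names norm_vee_real and norm_Hprime_lower are ours) computes ‖t‖∨ of (13.5) exactly for a small real matrix by enumerating the extreme points αi, βj ∈ {−1,+1}, and samples random unit vectors to get a lower bound for ‖t‖H′ of (13.4); computing ‖t‖H′ exactly would require an optimization over all unit vectors, which is not attempted here. GT in the form (13.6) says that the true ratio ‖t‖H′/‖t‖∨ never exceeds KG.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((3, 3))            # a small real matrix [a_ij]

    def norm_vee_real(a):
        # ||t||_vee of (13.5), real-scalar case: the sup is attained at alpha, beta in {-1,+1}^n
        n = a.shape[0]
        best = 0.0
        for alpha in itertools.product((-1.0, 1.0), repeat=n):
            for beta in itertools.product((-1.0, 1.0), repeat=n):
                best = max(best, abs(np.array(alpha) @ a @ np.array(beta)))
        return best

    def norm_Hprime_lower(a, d=3, trials=2000):
        # random-search LOWER bound for ||t||_{H'} of (13.4): unit vectors x_i, y_j in R^d;
        # the true norm is a supremum, so unlucky sampling may even fall below ||t||_vee
        n = a.shape[0]
        best = 0.0
        for _ in range(trials):
            x = rng.standard_normal((n, d)); x /= np.linalg.norm(x, axis=1, keepdims=True)
            y = rng.standard_normal((n, d)); y /= np.linalg.norm(y, axis=1, keepdims=True)
            best = max(best, abs(np.sum(a * (x @ y.T))))
        return best

    print(norm_vee_real(a), norm_Hprime_lower(a))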

In operator theory terms, the norm ‖a‖H′ can be rewritten as follows. For any n and any aij ∈ C (1 ≤ i,j ≤ n) we have

‖∑ aij ei ⊗ ej‖H′ = sup ‖∑ aij ui vj‖B(H)    (13.9)

where the sup runs over all Hilbert spaces H and all n-tuples (ui), (vj) in the unit ball of B(H). Indeed, it is an easy exercise (left to the reader) to check that (13.4) and (13.9) are equal. Equivalently (by the Russo–Dye Theorem A.18)

‖∑ aij ei ⊗ ej‖H′ = sup ‖∑ aij Ui Vj‖B(H)    (13.10)

where the sup runs over all Hilbert spaces H and all n-tuples (Ui ), (Vj ) of unitaries in B(H ). In both cases it suffices to consider H = 2 . Remark 13.5 Consider the free product F∞ ∗ F∞ and its associated (full) C ∗ (1) (2) algebra C ∗ (F∞ ∗ F∞ ) = C ∗ C . Let (Uj ) (resp. (Uj )) denote the free


unitary generators of the first (second) of the two “factors” of the free product. Then it is easy to check that (13.10) is equivalent to:

‖∑ aij ei ⊗ ej‖H′ = ‖∑ aij Ui(1) Uj(2)‖_{C ∗ C}.    (13.11)

To explain the connection with GT, we first give several equivalent reformulations of the Kirchberg conjecture. As before, we set C = C ∗ (F∞ ). Let (Uj )j ≥1 denote the unitaries in C that correspond to the free generators of F∞ . For convenience of notation we set U0 = 1 (i.e. the unit in C ). We recall that the closed linear span E ⊂ C of {Uj | j ≥ 0} is isometric to 1 (see Remark 3.14) and that E generates C as a C ∗ -algebra. Fix n ≥ 1. Consider a family of matrices {aij | i,j ≥ 0} with aij ∈ MN for all i,j ≥ 0 such that |{(i,j ) | aij = 0}| < ∞. We denote  a= aij ⊗ Ui ⊗ Uj ∈ MN ⊗ E ⊗ E. We denote, again for C = C ∗ (F∞ ):     aij ⊗ Ui ⊗ Uj  and amin =  M (C ⊗min C )  N    aij ⊗ Ui ⊗ Uj  . amax =  MN (C ⊗max C )

Then, on one hand, going back to the definitions, it is easy to check that      (13.12) aij ⊗ ui vj  amax = sup  MN (B(H ))

where the supremum runs over all H and all possible unitaries ui ,vj on the same Hilbert space H such that ui vj = vj ui for all i,j . On the other hand, using the known fact that C embeds into a direct sum of matrix algebras (see Theorem 9.18), one can check that      (13.13) aij ⊗ ui vj  amin = sup  dim(H ) < ∞ MN (B(H ))

where the sup is as in (13.12) except that we restrict it to all finite-dimensional Hilbert spaces H . Indeed, since dim(H ) < ∞ ⇒ B(H ) ⊗min B(H ) = B(H ) ⊗max B(H ), the sup in (13.13) is the same as the supremum over all finite-dimensional H ’s and all unitaries ui ,vj on H of     aij ⊗ (ui ⊗ vj )  MN (B(H )⊗min B(H ))

and by Theorem 9.18, the latter is the same as amin . We may ignore the restriction u0 = v0 = 1 because we can always replace −1 (ui ,vi ) by (u−1 0 ui ,vi v0 ) without changing either (13.12) or (13.13). The following was observed in [204].


Proposition 13.6 Let C = C ∗ (F∞ ). The following assertions are equivalent: (i) C ⊗min C = C ⊗max C (i.e. Kirchberg’s conjecture is correct). (ii) For any N ≥ 1 and any {aij | i,j ≥ 0} ⊂ MN as previously the norms (13.12) and (13.13) coincide i.e. amin = amax . (iii) The identity amin = amax holds for all N ≥ 1 but merely for all families {aij } in MN supported in the union of {0} × {0,1,2} and {0,1,2} × {0}. Note that (iii) reduces the Kirchberg conjecture to a statement about an operator space of dimension 5! But it requires to control the whole operator space structure of this five-dimensional space, so the size N of the five matrix coefficients is unbounded in (iii). It was conceivable that the equality amax = amin might hold when aij ∈ C i.e. in the case N = 1 in the previous (ii). But recently in [191, th. 29] Ozawa proved that this is actually equivalent to the Kirchberg conjecture. More precisely, he showed that the conditions in Proposition 13.6 are equivalent to: (iv) For any {aij | i,j ≥ 0} ⊂ C as previously the norms (13.12) and (13.13) coincide, i.e. we have amin = amax, for all a in the C-linear span of Ui ⊗ Uj . Note that Ozawa’s argument requires infinitely many generators in (iv). This result will be restated and proved later on as Theorem 14.7. It should be compared with the next two statements. Theorem 13.7 (Tsirelson [251]) We consider the case N = 1 and aij ∈ R for all i,j . Then amax = amin = aH  , where we have denoted by aH  the norm appearing either in (13.9), (13.10) or (13.11). Moreover, these norms are all equal to     aij ui vj  (13.14) sup  where the sup runs over all d ≥ 1 and all self-adjoint unitary d × d matrices ui ,vj such that ui vj = vj ui for all i,j . Proof Recall (see (13.4)) that aH  ≤ 1 if and only if for any unit vectors xi ,yj in a Hilbert space we have  aij xi ,yj  ≤ 1.


Note that, since aij ∈ R, whether we work with real or complex Hilbert spaces does not affect this condition. The resulting H  -norm is the same. We have trivially amin ≤ amax ≤ aH  , so it suffices to check aH  ≤ amin . Consider unit vectors xi ,yj in a real Hilbert space H . We may assume that {aij } is supported in [1, . . . ,n] × [1, . . . ,n] and that dim(H ) = n. From classical facts on “spin systems” (Pauli matrices, Clifford algebras and so on), we claim that there are selfadjoint unitary matrices ui ,vj (of size 2n ) such that ui vj = vj ui for all i,j and a (vector) state f such that f (ui vj ) = xi ,yj  ∈ R. Indeed, let  = Cn and let H ∧k denote the k-fold antisymmetric tensor product, H = Rn , H which admits {ei1 ∧ · · · ∧ eik | i1 < · · · < ik } as orthonormal basis. Let ⊕H ∧2 ⊕ · · · denote the (2n -dimensional) antisymmetric Fock F = C⊕H  with vacuum vector ( ∈ F is the unit in C ⊂ F). For space associated to H any x,y ∈ H = Rn , let c(x),c(y) ∈ B(F) (resp. d(x),d(y) ∈ B(F)) be the left (resp. right) creation operators defined by c(x) = x (resp. d(x) = x) ∧n with n > 0. Let and c(x)t = x ∧ t (resp. d(x)t = t ∧ x) for x ∈ H ∗ ∗ vy = d(y) + d(y) and ux = c(x) + c(x) . Then ux ,vy are commuting selfadjoint unitaries and setting f (T ) =  ,T , we find f (ux vy ) = x,y. So applying this to xi ,yj yields the claim. Thus we obtain by (13.13)    aij uxi vyj ≤ amin, aij xi ,yj  = f and hence aH  ≤ amin . This proves aH  = amin but also aH  ≤ (13.14). Since, by (13.13), we have (13.14) ≤ amin , (13.14) must be also equal to the number aH  = amin = amax . Remark 13.8 When aij ∈ C, the preceding argument shows that        aij Ui ⊗ Uj  sup aij #xi ,yj  xi ,yj ∈ B2 ≤ 

min

.

Indeed, the supremum remains the same when reduced to unit vectors xj ,yi ∈ n2 (C), and then #xi ,yj  can be thought of as the scalar product in the “real” n Hilbert space 2n 2 (R)  2 (C). This inequality fails (for aij ∈ C) if one replaces #xi ,yj  by xi ,yj  (see Proposition 13.10). However, we have Proposition 13.9 Assume N = 1 and aij ∈ C for all i,j ≥ 0 then amax ≤ C a aH  and a∨ ≤ amin . Therefore amax ≤ KG min . Moreover, for any positive semidefinite matrix [aij ] we have amax ≤ (4/π )amin .


Proof By (13.13) we have

sup{ |∑ aij si tj | : si, tj ∈ C, |si| = |tj| = 1 } = ‖a‖∨ ≤ ‖a‖min.

For any unit vectors x, y in H, we have

|∑ aij ⟨y, ui vj x⟩| = |∑ aij ⟨ui∗ y, vj x⟩| ≤ ‖a‖H′,

so that

‖∑ aij ui vj‖ ≤ ‖a‖H′

and actually this holds without any extra assumption (such as mutual commutation) on the {ui }’s and the {vj }’s. A fortiori, we have amax ≤ aH  , and C a by Theorem 13.3 we conclude that amax ≤ KG min . The last assertion follows from Remark 13.4. Remark The equalities aH  = amin and aH  = amax in Theorem 13.7 do not extend to the case of matrices with complex entries. This was proved by ´ Ricard. See the subsequent Proposition 13.10. In sharp contrast, if [aij ] is Eric a 2 × 2 matrix, they do extend because, by [69, 249] for 2 × 2 matrices in the complex case (13.3) happens to be valid √ with K = 1 (while in the real case, for 2 × 2 matrices, the best constant is 2). See [57, 75, 95, 138, 222, 223] for related contributions. Recall that we view 1 ⊂ C ∗ (F∞ ) completely isometrically. In this embedding the canonical basis of 1 is identified with the canonical free unitaries {Uj | j ≥ 0} generating C ∗ (F∞ ), with the convention U0 = 1 (see Remark 3.14). Similarly, for any n = 1,2, . . ., we may restrict to {Uj | 0 ≤ j ≤ n − 1} and identify n1 ⊂ C ∗ (F∞ ) as the span of {Uj | 0 ≤ j ≤ n − 1}. Equivalently, we could replace F∞ by Fn−1 and view n1 ⊂ C ∗ (Fn−1 ). ´ The following proposition and its proof were kindly communicated by Eric Ricard. Proposition 13.10 (Ricard) For any n ≥ 4, the H  -norm on n1 ⊗ n1 does not coincide with the norm induced by the max-norm on C ∗ (F∞ ) ⊗ C ∗ (F∞ ) (or equivalently on C ∗ (Fn−1 ) ⊗ C ∗ (Fn−1 )). Proof The proof is inspired by some of the ideas in [115]. Let us consider a family x0,x1, . . . in the unit sphere of C2 and let ajk = xj ,xk . We identify n1 with the span of the canonical free unitaries 1 = U0,U1, . . . ,Un−1 generating  C ∗ (Fn−1 ). Then consider the element a = n−1 j,k=0 ajk ej ⊗ ek . Note that a is clearly in the unit ball of n∞ ⊗H n∞ . For simplicity let A = C ∗ (Fn−1 ). Assume



that the max-norm coincides with the H  -norm on n1 ⊗ n1 . Then any such a defines a functional of norm at most 1 on n1 ⊗ n1 ⊂ A ⊗max A (with induced norm). Therefore (Hahn–Banach) there is f in the unit ball of (A⊗max A)∗ such that f (Uj ⊗Uk ) = ajk , and since f (1) = f (U0 ⊗U0 ) = a00 = x0,x0  = 1, f is a state (see Remark 1.33). By the GNS representation of states (A.16) there is a Hilbert space H , a unit vector h ∈ H and commuting unitaries uj ,vk∗ on H (associated to a unital representation of A ⊗max A) such that u0 = v0 = 1 and xj ,xk  = f (Uj ⊗ Uk ) = h,vk∗ uj h = vk h,uj h (0 ≤ j,k ≤ n − 1). (13.15) Moreover the vector h can be chosen cyclic for the ∗-algebra generated by uj ,vk∗ (or equivalently of course uj ,vk ). Since ajj = xj ,xj  = 1, we have uj h = vj h for all j , and hence h must also be cyclic for each of the ∗-algebras Au and Av generated by (uj ) and (vj ) respectively (indeed note that Au h = Av h implies Au Av h = Au h = Av h). But by a classical elementary fact (see Lemma A.62) this implies that h is separating for each of the commutants Au and Av , and since we have Au ⊂ Av and Av ⊂ Au , it follows that h is separating for each of the ∗-algebras Au and Av . We claim that there is an isomorphism T : span[uj ] → span[xj ] such that T (uj ) = xj and in particular the linear span of (uj ) is at most twodimensional. Indeed, first observe that by (13.15) (since uj h = vj h) any    linear combination bj xj (bj ∈ C) satisfies  bj xj 2 =  bj uj h2 ≤   bj uj 2 , and hence T : span[uj ] → span[xj ] such that T (uj ) = xj is  well defined. But now any linear relation bj xj = 0 (bj ∈ C) implies    bj uj h2 = 0 and (since h is separating on Au ) bj uj = 0. This shows that T : span[uj ] → span[xj ] is injective, and hence an isomorphism so the linear span of (uj ) is at most two-dimensional. Suppose that x0,x1 are linearly independent. Then so is 1,u1 (recall u0 = 1), and the other uj’s are linear combinations of 1,u1 (so that Au is the unital x0,x1 commutative C ∗ -algebra generated by u1 ). So let us choose simply for √ the canonical basis of C2 , and recall T −1 xj = uj . Choosing (here i = −1) x2 = (1,i)2−1/2 = (x0 + ix1 )2−1/2 and x3 = (1,1)2−1/2 = (x0 + x1 )2−1/2 , by the linearity of T −1 we must have T −1 (x2 ) = (1 + iu1 )2−1/2 . The latter must be unitary, equal to u2 , as well as T −1 (x3 ) = (1 + u1 )2−1/2 equal to u3 . But this is impossible because (1 + iu1 )2−1/2 unitary requires the spectrum of u1 included in {±1}, while (1 + u1 )2−1/2 unitary requires that it is included in {± i}. Thus we obtain a contradiction for any n ≥ 4. This should be compared with Dykema and Juschenko’s results in [75].
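As a sanity check of the spin-system fact used in the proof of Theorem 13.7 (commuting selfadjoint unitaries of size $2^n$ together with a vector state $f$ satisfying $f(u_x v_y) = \langle x, y\rangle$ for real unit vectors $x, y$), here is a minimal numpy sketch. It uses a Jordan–Wigner-type realization rather than the left/right creation operators on the antisymmetric Fock space employed in the text, so the operators below are an ad hoc substitute for, not a transcription of, the ones in the proof.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_chain(mats):
    out = np.eye(1, dtype=complex)
    for m in mats:
        out = np.kron(out, m)
    return out

def majoranas(n):
    """Two families c_1..c_n and c'_1..c'_n of pairwise anticommuting
    selfadjoint unitaries on (C^2)^{tensor n} (Jordan-Wigner)."""
    c, cp = [], []
    for k in range(n):
        c.append(kron_chain([Z] * k + [X] + [I2] * (n - k - 1)))
        cp.append(kron_chain([Z] * k + [Y] + [I2] * (n - k - 1)))
    return c, cp

n = 3
c, cp = majoranas(n)
Gamma = kron_chain([Z] * n)                 # "parity" operator, anticommutes with all c_k, c'_k
rng = np.random.default_rng(0)
x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)

u_x = sum(x[k] * c[k] for k in range(n))                 # selfadjoint unitary
v_y = 1j * Gamma @ sum(y[k] * cp[k] for k in range(n))   # selfadjoint unitary commuting with u_x

omega = np.zeros(2 ** n); omega[0] = 1.0                 # vector giving the state f

assert np.allclose(u_x @ u_x, np.eye(2 ** n))
assert np.allclose(v_y @ v_y, np.eye(2 ** n))
assert np.allclose(u_x @ v_y, v_y @ u_x)                 # commutation
print(np.vdot(omega, u_x @ v_y @ omega).real, "vs", x @ y)   # both equal <x, y>
```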



13.3 Notes and remarks §13.1 is due to Kirchberg. Remark 13.2 on “one-for-all” C ∗ -algebras combines various observations, some of them already in [208]. The remark that the Brown algebra is such an example appears in [128]. For more remarks on a similar theme, see [91], [128], and [63]. For §13.2 most references are credited in the text. The papers [91, 152, 153] contain several equivalent formulations of the WEP as a Riesz interpolation property and in terms of matrix completion problems. The formulation of Grothendieck’s inequality in Theorem 13.3 was put forward by Lindenstrauss and Pełczy´nski in 1968, see [210] for more (and more precise) references.

14 Equivalence of the two main questions

We will now prove the equivalence of both problems.

14.1 From Connes’s question to Kirchberg’s conjecture Theorem 14.1 If the answer to Connes’s question is positive, then any von Neumann algebra is QWEP and consequently Kirchberg’s conjecture follows. The proof of Theorem 14.1 is based on the three steps described in the following lemmas. Lemma 14.2 If the answer to Connes’s question is positive then any finite von Neumann algebra M is QWEP . Proof Assume M ⊂ MU in a trace preserving way. Since we are dealing with finite traces, there is a c.p. conditional expectation P : MU → M with P  = 1 (see Proposition 11.21). Clearly MU is QWEP by definition. Thus the result follows from Corollary 9.61. Lemma 14.3 If all finite von Neumann algebras are QWEP , then all semifinite von Neumann algebras M are QWEP . Proof If M is semifinite we can write M = ∪Mγ

(weak* closure)

where the von Neumann (unital) subalgebras Mγ are all finite and form a net directed by inclusion. Indeed, let Pγ be an increasing net of projections with finite trace tending weak* to the identity on M (see Remark 11.1). Then the unital subalgebras Mγ = Pγ MPγ + C(1 − Pγ ) are finite and their increasing union is weak*-dense in M. Since all the Mγ’s are QWEP, we conclude that M is QWEP by Theorem 9.74.




Lemma 14.4 If all semifinite von Neumann algebras are QWEP , then all von Neumann algebras are QWEP . Proof By Takesaki’s Theorem 11.3 any von Neumann algebra M can be embedded in a semifinite algebra N , so we have M ⊂ N , in such a way that there is a c.p. projection P : N → M with P  = 1. Using this, the lemma follows immediately from Corollary 9.61. Proof of Theorem 14.1 By the three preceding Lemmas, if the answer to Connes’s question is positive then any von Neumann algebra is QWEP . By Proposition 13.1 this implies Kirchberg’s conjecture. Remark 14.5 Let (M(i),τi ) be a family of tracial probability spaces, with ultraproduct MU . If all the M(i)’s are QWEP, then MU is also QWEP by Corollary 9.62. Moreover, any von Neumann subalgebra M ⊂ MU is also QWEP. Indeed, since the conditional expectation is a c.p. projection P : MU → M, this follows from Corollary 9.61. Remark 14.6 (On the Effros–Mar´echal topology) In a series of remarkable papers, Haagerup with Winsløw and Ando ([9, 121, 122]) make a deep study of the Effros–Mar´echal topology on the set vN(H ) of all von Neumann algebras M ⊂ B(H ) (on a given Hilbert space H ). The latter topology is defined as the weakest topology for which the map M → ϕ|M  is continuous for every ϕ ∈ B(H )∗ . In particular, they show that a von Neumann algebra is QWEP if and only if it is in the closure (for that topology) of the set of injective factors. Thus their work shows that the Kirchberg conjecture is equivalent to the density of the latter set in vN(H ), say for H = 2 . Actually, it is also equivalent to the density of the set of type I factors and, as expected, to the density of the set of finite-dimensional factors (i.e. matricial factors).

14.2 From Kirchberg’s conjecture to Connes’s question
Let $E = \operatorname{span}\{U_j \mid j \ge 0\} \subset C$ be the linear span of the generators $\{U_j \mid j \ge 1\}$ and the unit (recall the convention $U_0 = 1$) in $C = C^*(\mathbb{F}_\infty)$. We will work with $\bar E \otimes E \subset \bar C \otimes C$. In $\bar E \otimes E$ we distinguish the cone $(\bar E \otimes E)^+$ formed by all the “positive definite” tensors, more precisely
$$(\bar E \otimes E)^+ = \Big\{\sum_{1}^{r} \bar x_k \otimes x_k \in \bar E \otimes E \;\Big|\; x_k \in E,\ r \ge 1\Big\}.$$



Alternatively:
$$(\bar E \otimes E)^+ = \Big\{\sum a_{ij}\, \bar U_i \otimes U_j \;\Big|\; n \ge 1,\ a \in (M_n)_+\Big\}. \qquad (14.1)$$

We call the elements of $(M_n)_+$ “positive definite” matrices (they are often called positive semidefinite). To check (14.1) observe that (by classical linear algebra) any such matrix can be written as a finite sum $a_{ij} = \sum_{k=1}^{r} \overline{x_k(i)}\, x_k(j)$ ($x_k(j) \in \mathbb{C}$), and conversely any matrix of the latter form is positive definite. Then we have $\sum a_{ij}\, \bar U_i \otimes U_j = \sum_{1}^{r} \bar x_k \otimes x_k$ where $x_k = \sum_j x_k(j)\, U_j$. We define analogously
$$(E \otimes E)^+ = \Big\{\sum a_{ij}\, U_i \otimes U_j \;\Big|\; n \ge 1,\ a \in (M_n)_+\Big\}. \qquad (14.2)$$
We also define
$$(\bar E \otimes E)^+_{\min} = \{t \in (\bar E \otimes E)^+ \mid \|t\|_{\min} \le 1\} \quad\text{and}\quad (\bar E \otimes E)^+_{\max} = \{t \in (\bar E \otimes E)^+ \mid \|t\|_{\max} \le 1\}.$$
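The linear-algebra fact invoked above to check (14.1) — every $a \in (M_n)_+$ can be written as $a_{ij} = \sum_k \overline{x_k(i)}\, x_k(j)$ — is easy to verify numerically. A minimal numpy sketch (the vectors $x_k$ below come from an eigendecomposition; any square root of $a$ would do, and all names are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 4, 3
# Build a random positive semidefinite matrix a of rank <= r.
b = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
a = b @ b.conj().T

# Extract vectors x_k with a_ij = sum_k conj(x_k(i)) * x_k(j).
w, v = np.linalg.eigh(a)                     # a = v diag(w) v*
xs = [np.sqrt(max(lam, 0.0)) * v[:, k].conj() for k, lam in enumerate(w)]

recon = sum(np.outer(np.conj(xk), xk) for xk in xs)
print(np.allclose(recon, a))                 # True
```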

We have then
Theorem 14.7 (Ozawa [191]) If the min and max norms coincide on $E \otimes E \subset C \otimes C$ (or equivalently if they coincide on $\bar E \otimes E \subset \bar C \otimes C$), then the Connes problem has a positive solution.
Actually, with the same idea as Ozawa’s, we will prove the following refinement:
Theorem 14.8 If $(\bar E \otimes E)^+_{\min} = (\bar E \otimes E)^+_{\max}$, then the Connes problem has a positive solution.

Since a positive answer to Connes’s question implies Kirchberg’s conjecture (see §14.1), we have:
Corollary 14.9 If the min and max norms coincide on $E \otimes E \subset C \otimes C$ (resp. on $(\bar E \otimes E)^+ \subset \bar C \otimes C$) then they coincide on $C \otimes C$ (resp. on $\bar C \otimes C$), meaning the Kirchberg conjecture holds.
Remark 14.10 Curiously, there is currently no direct proof available for the preceding corollary.
Remark 14.11 For any C*-algebra $A$ we have $\bar A \simeq A^{\mathrm{op}}$, the mapping being $\bar x \mapsto x^*$ (see §2.3). So the assumption $(\bar E \otimes E)^+_{\min} = (\bar E \otimes E)^+_{\max}$ can be rewritten as
$$\Big\|\sum x_k^* \otimes x_k\Big\|_{A^{\mathrm{op}} \otimes_{\min} A} = \Big\|\sum x_k^* \otimes x_k\Big\|_{A^{\mathrm{op}} \otimes_{\max} A}.$$



Remark 14.12 Recall (see Remark 3.7) that when $A = C^*(\mathbb{F}_\infty)$ we have $\bar A \simeq A$, and there is a $\mathbb{C}$-linear isomorphism $A \to \bar A$ that takes $U_{\mathbb{F}_\infty}(t)$ to $\overline{U_{\mathbb{F}_\infty}(t)}$. In particular, this isomorphism takes $U_i$ to $\bar U_i$ and $C$ onto $\bar C$. Therefore, the assumption $(\bar E \otimes E)^+_{\min} = (\bar E \otimes E)^+_{\max}$ is equivalent to the following one: for any $n$ and any $a \in (M_n)_+$ we have
$$\Big\|\sum a_{ij}\, U_i \otimes U_j\Big\|_{C \otimes_{\min} C} = \Big\|\sum a_{ij}\, U_i \otimes U_j\Big\|_{C \otimes_{\max} C}.$$

The main point is as follows:
Theorem 14.13 Let $(M,\tau)$ be a tracial probability space on a separable Hilbert space. Let $\pi : C \to M$ be a $*$-homomorphism taking the free generators $\{U_j\}$ to an $L_2(\tau)$-dense subset of $U(M)$. Assume that for any finite set $(x_j)$ in $\operatorname{span}[U_j] \subset C$ we have
$$\sum \|\pi(x_j)\|_2^2 \le \Big\|\sum \bar x_j \otimes x_j\Big\|_{\bar C \otimes_{\min} C}. \qquad (14.3)$$

Then M embeds in a trace preserving way in a matricial MU for some U on N.
Proof Let $U_0 = 1$. We apply Theorem 11.38 with $S = \{U_j \mid j \ge 0\}$ and $W = \pi(S)$. This shows that W satisfies the condition (iv)’ in Kirchberg’s criterion (see Theorem 12.8). By the $L_2(\tau)$-density of W in $U(M)$ it follows that there is a trace preserving embedding of M in an ultraproduct of matrix algebras, which completes the proof.
Remark 14.14 (A variant) If we assume that $\pi(S)$ is an $L_2(\tau)$-dense subgroup of $U(M)$ then we can conclude using Corollary 11.46 instead of Kirchberg’s criterion (Theorem 12.3).
Corollary 14.15 A finite von Neumann algebra (on a separable Hilbert space) is QWEP if and only if it embeds in a matricial MU for some U on N.
Proof If M is QWEP, let $\pi$ be as in Theorem 14.13 (or Remark 14.14). By Theorem 9.67, $\|\mathrm{Id}_{\bar C} \otimes \pi : \bar C \otimes_{\min} C \to \bar C \otimes_{\max} M\| \le 1$, and hence a fortiori $\|\bar\pi \otimes \pi : \bar C \otimes_{\min} C \to \bar M \otimes_{\max} M\| \le 1$. From this, since $\tau(\pi(x_j)^*\pi(x_j)) = \langle 1, L(\pi(x_j)^*) R(\pi(x_j)) 1\rangle$, we deduce
$$\sum \tau(\pi(x_j)^*\pi(x_j)) \le \Big\|\sum \overline{\pi(x_j)} \otimes \pi(x_j)\Big\|_{\bar M \otimes_{\max} M} \le \Big\|\sum \bar x_j \otimes x_j\Big\|_{\bar C \otimes_{\min} C}.$$



By Theorem 14.13 (or Remark 14.14) M embeds in some matricial MU. Conversely, if M embeds in some matricial MU, the latter is obviously QWEP, and there is a unital c.p. projection (conditional expectation) from MU onto M. By Corollary 9.61 M inherits QWEP from MU.
We can now easily complete the proof of Theorems 14.8 and 14.7. All the difficult ingredients have already been proved, mainly in §11.6.
Proof of Theorems 14.8 and 14.7 It clearly suffices to prove Theorem 14.8. Let $(M,\tau)$ be a tracial probability space on a separable Hilbert space. Then there is a countable subgroup $W \subset U(M)$ that is $L_2(\tau)$-dense in $U(M)$. We may assume that there is a unital bijection from $S = \{U_j \mid j \ge 0\}$ onto W. Let $\pi : C \to M$ be the $*$-homomorphism extending the latter bijection (this exists by the freeness of S). Since left-hand and right-hand multiplications on $L_2(\tau)$ have commuting ranges we have for any $x_j \in C$
$$\sum \tau(\pi(x_j)^*\pi(x_j)) \le \Big\|\sum \overline{\pi(x_j)} \otimes \pi(x_j)\Big\|_{\bar M \otimes_{\max} M} \le \Big\|\sum \bar x_j \otimes x_j\Big\|_{\bar C \otimes_{\max} C}. \qquad (14.4)$$
Thus, if the min and max norms coincide on $(\bar E \otimes E)^+$, (14.3) holds, and by Theorem 14.13, M trace preservingly embeds in some matricial MU, so that the Connes question has a positive answer.
Remark 14.16 (A slight refinement) If the min and max norms coincide on
$$\{T + T^* \mid T \in (\bar E \otimes E)^+\} = \Big\{\sum \bar x_j \otimes x_j + \sum \overline{x_j^*} \otimes x_j^* \;\Big|\; x_j \in E\Big\}$$
then they coincide on $\bar C \otimes C$. Indeed, with the notation of the preceding proof, we have
$$\sum \tau(\pi(x_j)^*\pi(x_j)) + \sum \tau(\pi(x_j)\pi(x_j)^*) \le \Big\|\sum \overline{\pi(x_j)} \otimes \pi(x_j) + \sum \overline{\pi(x_j)^*} \otimes \pi(x_j)^*\Big\|_{\bar M \otimes_{\max} M}$$
$$\le \Big\|\sum \bar x_j \otimes x_j + \sum \overline{x_j^*} \otimes x_j^*\Big\|_{\max} = \Big\|\sum \bar x_j \otimes x_j + \sum \overline{x_j^*} \otimes x_j^*\Big\|_{\min} \le 2\,\Big\|\sum \bar x_j \otimes x_j\Big\|_{\bar C \otimes_{\min} C}.$$

But since τ (π(xj )∗ π(xj )) = τ (π(xj )π(xj )∗ ) (trace property) we again obtain (14.3), and we conclude as in the preceding proof.



Remark 14.17 (A slight refinement (bis)) If the min and max norms coincide on $\{T + T^* \mid T \in (E \otimes E)^+\}$ then they coincide on $C \otimes C$. Indeed, by Remark 14.12 the isomorphism considered there maps $(E \otimes E)^+$ onto $(\bar E \otimes E)^+$.

14.3 Notes and remarks Most of the results are due to Kirchberg. One major exception is Ozawa’s Theorem 14.7 derived from [191, th. 29] which is a quite significant improvement over Proposition 13.6. The refinement from Theorem 14.7 to Theorem 14.8 or Theorem 14.13 is obtained by an easy adaptation of Ozawa’s ideas. Corollary 14.15 is due to Kirchberg.

15 Equivalence with finite representability conjecture

Definition 15.1 A Banach space X is finitely representable in another Banach space Y if for any $\varepsilon > 0$ and any finite-dimensional subspace $E \subset X$ there is a subspace $\widetilde E \subset Y$ that is $(1+\varepsilon)$-isomorphic to E. In that case we write X f.r. Y. We discuss equivalent forms of finite representability in §A.6. We denote by $S_1(H)$ the Banach space formed of all the trace class operators on H (see §A.10). The space $S_1(H)$ is isometric to the predual of $B(H)$. Recall the latter space is also isometric to the dual of $K(H)$. When $H = \ell_2$ we set $S_1 = S_1(\ell_2)$.

15.1 Finite representability conjecture In the late 1970s, the following conjecture began to circulate: Finite representability conjecture: A∗ f.r. S1 for any C ∗ -algebra A. Equivalently, M∗ f.r. S1 for any von Neumann algebra M. This conjecture was popularized in the Banach space community by talks and papers notably by Pełczy´nski and Garling, but it probably can also be traced back to Haagerup. Remark 15.2 Let S1 (H ) be the space of trace class operators on a Hilbert space H . Let x ∈ S1 (H ) with H infinite dimensional. Then there is clearly a subspace K  2 such that x = PK xPK . More generally, given a finite set x1, . . . ,xn ∈ S1 (H ) there is a K such that this holds for any x ∈ {x1, . . . ,xn }. Therefore any finite-dimensional subspace E ⊂ S1 (H ) embeds isometrically in S1 = S1 (2 ). A fortiori, S1 (H ) f.r. S1 . Theorem 15.3 (Kirchberg) The finite representability conjecture is equivalent to the Kirchberg conjecture. More precisely, a C ∗ -algebra A is QWEP if and only if A∗ f.r. S1 .




Proof We first show that if A is QWEP then A∗ f.r. S1 . The converse is much more delicate. Assume that A is QWEP. By Theorem 9.72, A∗∗ is a 1-complemented subspace of B(H )∗∗ , and hence A∗∗∗ , and a fortiori A∗ ⊂ A∗∗∗ , embeds isometrically in B(H )∗∗∗ . By Proposition A.15 (i.e. the local reflexivity principle) we know that B(H )∗∗∗ f.r. B(H )∗ , and also (applying it again) that B(H )∗ f.r. B(H )∗ , while Remark 15.2 tells us B(H )∗ f.r. S1 . Thus we obtain that A∗ f.r. S1 . Assume A∗ f.r. S1 . By Lemma A.8 there is a set I and a (metric) surjection Q : ∞ (I ;B(2 )) → A∗∗ taking the closed unit ball of ∞ (I ;B(2 )) onto that of A∗∗ . Let W = ∞ (I ;B(2 )) (which is WEP by, say, Proposition 9.34) and N = W ∗∗ . Note that N is QWEP by Theorem 9.66. Replacing Q by ϕ = ¨ : W ∗∗ → A∗∗ (as defined in (A.32)) we find a normal (metric) surjection Q ϕ : N → A∗∗ taking the closed unit ball of N onto that of A∗∗ . To conclude we call Jordan algebras to the rescue via Theorem 5.7 applied to M = A∗∗ , as follows. For any projection p in N, the C ∗ -algebras pNp and (pNp)op are QWEP by Corollary 9.70. If q is another such projection, with q ⊥ p, then pNp ⊕ (qNq)op is QWEP (recall Remark 9.16). By Theorem 5.7, the identity of M = A∗∗ factors completely positively through pNp ⊕ (qNq)op for some p,q. Therefore, by Corollary 9.70 A∗∗ is QWEP, and a fortiori by Theorem 9.66, A itself is QWEP. Remark 15.4 An operator space X is finitely representable in the operator sense in another one Y if for any finite-dimensional subspace E ⊂ X and  ⊂ Y that is completely (1 + ε)-isomorphic to E. We any ε > 0 there is E say that X is strictly locally reflexive if X∗∗ is finitely representable in the operator sense in X. By [78] this holds whenever X∗ is completely isometric to a C ∗ -algebra (i.e. for any so-called noncommutative L1 -space), in particular for X = B(H )∗ and X = S1 . Combined with the first part of the preceding argument, this shows that A∗ is finitely representable in the operator sense in S1 whenever A is QWEP. In the same spirit as the finite representability conjecture, we note one more characterization of QWEP von Neumann algebras involving only their Banach space structure: Theorem 15.5 (Kirchberg) A von Neumann algebra N is QWEP if and only if it is isometric as a Banach space to a (Banach space sense) quotient of B(H ) for some Hilbert space H . Proof Consider first the special case N = B(H )∗∗ . In that case Proposition  A.12 tells us that N is isometric to a quotient of some (⊕ i∈I Xi )∞ with  Xi = B(H ) for all i ∈ I . Let H = ⊕ i∈I Hi with Hi = H for all i ∈ I .



The projection onto the diagonal defines a metric surjection B(H) →   (⊕ i∈I Xi )∞ . Therefore (⊕ i∈I Xi )∞ , and a fortiori N , is isometric to a quotient of B(H). If N is QWEP, by Theorem 9.72 its bidual N ∗∗ is isometric to a quotient of some B(H )∗∗ and hence by what precedes to a quotient of some B(H). By Remark A.52 the same is true for N . Conversely, assume N is isometric to a quotient of B(H ). Then N ∗ embeds isometrically in B(H )∗ . A fortiori N ∗ f.r. B(H )∗ . By the local reflexivity principle (Proposition A.15) and Remark 15.2 we know that B(H )∗ f.r. B(H )∗ f.r. S1 . A fortiori N ∗ f.r. S1 , and N is QWEP by Theorem 15.3.

15.2 Notes and remarks The results here come from Kirchberg’s [155], but Ozawa’s expository paper [189] greatly clarified the picture.

16 Equivalence with Tsirelson’s problem

It is difficult to introduce Tsirelson’s problem without mentioning the unusual saga that led to it. In a remarkable paper [251] in 1980, Tsirelson observed that the famous Bell inequality, widely celebrated in quantum mechanics, could be viewed as a particular instance of Grothendieck’s inequality equally celebrated by Banach space theorists. Both inequalities involve matrices of the form [xi ,yj ] where xi ,yj are in the unit ball of a Hilbert space H . Tsirelson was particularly interested in the case when H = L2 (M,τ ) on a tracial probability space (M,τ ). In his discussion he asserted without proof that for his specific purpose (to be described) the case of a general (M,τ ) could be reduced to the matricial one. Years later, in connection with the development of Quantum Information Theory it was pointed out to him that this reduction was not clear and he himself advertised the problem widely. The goal of this section is to show that the latter Tsirelson problem is actually equivalent to the Connes embedding problem. The proof of this equivalence was completed in several steps, in the papers [95] and [138], the final step being taken by Ozawa in [191].

16.1 Unitary correlation matrices As a preliminary, it seems worthwhile to describe several reformulations of the Connes and Kirchberg problem in terms of unitary correlation matrices. Let Rc (d) be the set of d × d-matrices x = [xij ] of the form xij = ξ,u∗i vj ξ 

(16.1)

where (ui ) and (vi ) are d-tuples of unitaries in B(H ) such that ui vj = vj ui (or equivalently u∗i vj = vj u∗i ) for all i,j , where H is arbitrary and ξ ∈ H is a unit vector. We then define the analogous set Rs (d) but restricted to finitedimensional H ’s.
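To make the definition concrete, here is a minimal numpy sketch producing one (generic) element of $R_s(d)$, using the obvious commuting pair $u_i \otimes 1$, $1 \otimes v_j$ on a finite-dimensional $H \otimes_2 H$ (this is the special “tensor-product” form discussed in Remark 16.2 below; all names are ad hoc):

```python
import numpy as np

def haar_unitary(rng, n):
    """Haar-distributed n x n unitary (QR of a complex Gaussian matrix)."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(4)
d, n = 3, 4
u = [haar_unitary(rng, n) for _ in range(d)]       # act on the first factor
v = [haar_unitary(rng, n) for _ in range(d)]       # act on the second factor
xi = rng.standard_normal(n * n) + 1j * rng.standard_normal(n * n)
xi /= np.linalg.norm(xi)

# (u_i (x) 1) always commutes with (1 (x) v_j), so x below lies in Rs(d).
x = np.array([[np.vdot(xi, np.kron(u[i].conj().T, v[j]) @ xi)
               for j in range(d)] for i in range(d)])
print(np.round(x, 3))
```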




Explicitly: let Rs (d) be the set of d × d-matrices x = [xij ] of the form xij = ξ,u∗i vj ξ  where ui ,vj are unitaries on H with dim(H ) < ∞ such that ui vj = vj ui for all i,j and ξ ∈ H is a unit vector. Let E(d) ⊂ C be the linear span of {U1, . . . ,Ud }. For any state f on C ⊗max C (resp. C ⊗min C ) let x f ∈ Md be the matrix defined by: f

xij = f (Ui ⊗ Uj ). Clearly x f → f|E(d)⊗E(d) is a linear isomorphism. Let S max (resp. S min ) be the set of states on C ⊗max C (resp. C ⊗min C ). Then Rc (d) = {x f | f ∈ S max }.

(16.2)

Indeed, if x is as in (16.1), let π : C ⊗C → B(H ) be a unital ∗-homomorphism taking Ui ⊗ Uj to u∗i vj . We have x = x f with f (·) = ξ,π(·)ξ  in S max . Conversely, the GNS factorization of f (see §A.13) shows that x f ∈ Rc (d) for any f ∈ S max . Since S max is weak* closed, (16.2) implies that Rc (d) is closed (this can also be shown using ultraproducts as in (16.9)). Analogously, we claim that Rs (d) = {x f | f ∈ S min }.

(16.3)

Indeed, this is easy to deduce from the same reasoning but using Proposition 9.20. Obviously, Rs (d) ⊂ Rc (d), and since Rc (d) is closed we have Rs (d) ⊂ Rc (d). Proposition 16.1 The Kirchberg conjecture is equivalent to the assertion that Rs (d) is dense in Rc (d) for any d. Proof The Kirchberg conjecture implies S max = S min and hence Rs (d) = Rc (d) by (16.2) and (16.3). We turn to the converse. Let [aij ] be a positive  definite matrix in Md . In particular a = a ∗ so that aij = aji . Let T = aij Ui ⊗ Uj ∈ (E ⊗ E)+ ⊂ C ⊗ C . We claim that   aij (xij + xji ) x ∈ Rc (d)} = T + T ∗ max sup (16.4) while

   aij (xij + xji ) x ∈ Rs (d) ≤ T + T ∗ min . sup

(16.5)

In fact we will see that (16.5) is an equality. Taking these claims for granted, the density of Rs (d) in Rc (d) implies T + T ∗ max = T + T ∗ min . Then the Kirchberg conjecture follows by Corollary 14.9 and Remark 14.17. We now verify the claim. By definition of the max-norm we have     aij u∗i vj + aij ui vj∗  , T + T ∗ max = sup 



where the sup runs over all (ui ),(vj ) as in the definition of Rc (d). Thus  )  * )  *  aij u∗i vj ξ + ξ, aij ui vj∗ ξ T + T ∗ max = sup ξ, where the sup runs over all unit vectors ξ . Equivalently     T + T ∗ max = sup xij aij + xij aij x ∈ Rc (d) which (since a = a ∗ ) is the same as (16.4). Let Au (resp. Av ) denote the C ∗ -algebra generated by (ui ) (resp. (vi )). We have by a similar argument        aij (xij + xji ) x ∈ Rs (d) = sup  aij u∗i vj + aij ui vj∗  sup where the sup runs over all (ui ),(vj ) on a finite-dimensional H as in the definition of Rs (d); moreover, since Au is nuclear if dim(H ) < ∞, the latter (ui ),(vj ) satisfy         aij ui ⊗ vj + aij u∗i ⊗ vj∗  aij u∗i vj + aij ui vj∗  ≤   Au ⊗max Av     aij ui ⊗ vj + aij u∗i ⊗ vj∗  = , Au ⊗min Av

and we have (using the ∗-homomorphisms Ui → ui , Uj → vj )         ≤  aij Ui ⊗ Uj + aij Ui∗ ⊗Uj∗   aij ui ⊗ vj + aij u∗i ⊗vj∗  Au ⊗min Av

C ⊗min C

.

 Thus we obtain sup{| aij (xij + xji )| | x ∈ Rs (d)} ≤ T + T ∗ min , which is (16.5). Actually (we do not need this in the sequel) equality holds in (16.5). This is easy to check using Proposition 9.20. Remark 16.2 Let R⊗ (d) be the set of d × d-matrices of the form xij = ξ,(ui ⊗ vj )ξ  where (ui ) and (vi ) are d-tuples of unitaries in B(H ) where H is any finitedimensional Hilbert space and ξ ∈ H ⊗2 H is a unit vector. We claim R⊗ (d) = Rs (d). Obviously R⊗ (d) ⊂ Rs (d) (since ui ⊗ 1 commutes with 1 ⊗ vj ). To check the converse consider x ∈ Rs (d), say xij = ξ,u∗i vj ξ  with ui ,vj ∈ U (B(H )) such that ui vj = vj ui for all i,j and dim(H ) < ∞. Let Au , Av be again the C ∗ -algebras generated in B(H ) by {u∗i } and {vj }. Let f be the state defined on Au ⊗max Av by f (z ⊗ y) = ξ,zyξ . Since the algebras are finite dimensional this is a state on the min-tensor product, and hence extends to a state on B(H ) ⊗min B(H ) = B(H ⊗2 H ). Since dim(H ) < ∞ this implies that there is a finite family of unit vectors ξk in



 H ⊗ H and wk > 0 (1 ≤ k ≤ m) with wk = 1 such that f (z ⊗ y) =  2 w ξ ,(z ⊗ y)ξ  for all z,y ∈ B(H ). Let H = m k k k k 2 ⊗2 [H ⊗2 H ]. We now use the multiplicity trick to write:   *  √ ) √ wk ek ⊗ ξk , ekk ⊗ [z ⊗ y] wk ek ⊗ ξk . f (z ⊗ y) = k

k

k

(16.6) In particular, we find xij = f (u∗i ⊗ vj ) = ξ ,(ui ⊗ vj )ξ    √  with a unit vector ξ  = k wk ek ⊗ξk ∈ H, ui = k ekk ⊗ui ⊗1 ∈ U (B(H))  and vj = k ekk ⊗ 1 ⊗ vj ∈ U (B(H)). This shows that x ∈ R⊗ (d), which completes the proof. Remark 16.3 It is easy to see that both Rs (d) and Rc (d) are convex sets. Indeed, this can be checked by the same multiplicity trick as in (16.6).

16.2 Correlation matrices with projection valued measures The Tsirelson problem asks whether the analogue of Proposition 16.1 holds for projection valued measures, in short PVMs. In quantum theory, a PVM with m outputs is an m-tuple of mutually orthogonal projections (Pj )1≤j ≤m  on a Hilbert space H with Pj = I . The terminology reflects the fact that if  we set μP (A) = j ∈A Pj , then μP is indeed a projection valued measure on {1, . . . ,m}. Seen from another viewpoint, we have a ∗-homomorphism πP : m ∞ → B(H ) defined by m πP (x) = xj Pj . (16.7) j =1

Let (Qj )1≤j ≤m be another PVM on H , such that Qj commutes with Pi for all i,j , or equivalently the ranges of πP and πQ commute. In quantum mechanics, the commutation corresponds to the independence of the corresponding experimental measurements. When we are given a state of the system in the form of a unit vector ξ ∈ H the probability of the event (i,j ) is given by ξ,Pi Qj ξ  and the matrix [ξ,Pi Qj ξ ] represents their correlation. It is then natural to wonder whether the same numbers can be obtained by modeling the system on a finite-dimensional Hilbert space H. The first and simplest question is whether the m × m “covariance matrix” [ξ,Pi Qj ξ ] can be approximated by m × m-matrices of the same form but realized on a finite-dimensional H. In the present simplest situation, the answer is yes. To justify this we choose a m pedantic but hopefully instructive way. Let f : m ∞ ⊗max ∞ → C be the linear form defined by f (x ⊗ y) = ξ,πP (x)πQ (y)ξ . Clearly f is a state and of

304

Equivalence with Tsirelson’s problem 2

2

m m m m course m m ∞ . A state of ∞ ∞ ⊗max ∞ = ∞ ⊗min ∞ can be identified with   is simply an element z = (zij ) with zij ≥ 0 such that i j zij = 1. Thus   we may describe f as f = (fij ) with fij ≥ 0 such that i j fij = 1 given 2

by $f_{ij} = \langle\xi, P_i Q_j \xi\rangle$. Now let $\mathcal H = \ell_2^{m^2}$ with o.n. basis $(e_{ij})$, and let $p_i$ (resp. $q_j$) denote the orthogonal projection onto the span of $\{e_{ij} \mid 1 \le j \le m\}$ (resp. $\{e_{ij} \mid 1 \le i \le m\}$). Let $\xi' = \sum_i \sum_j f_{ij}^{1/2}\, e_{ij}$. Then $\xi'$ is a unit vector, $(p_i)$, $(q_j)$ are commuting PVMs on $\mathcal H$ with $\dim(\mathcal H) < \infty$ and for all $i,j$

$$\langle \xi, P_i Q_j \xi\rangle = \langle \xi', p_i q_j \xi'\rangle.$$

(16.8)
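As a quick numerical check of (16.8) and of the construction just described, the following numpy sketch starts from an arbitrary “correlation” $(f_{ij})$ with $f_{ij} \ge 0$ and $\sum f_{ij} = 1$, builds the commuting PVMs $(p_i)$, $(q_j)$ and the unit vector $\xi'$ on $\ell_2^{m^2}$, and recovers $f_{ij} = \langle \xi', p_i q_j \xi'\rangle$ exactly (all names are ad hoc):

```python
import numpy as np

m = 3
rng = np.random.default_rng(2)
f = rng.random((m, m)); f /= f.sum()          # any f_ij >= 0 with total mass 1

# Model on C^(m*m) with basis e_{ij}: p_i projects onto span{e_{ij} : j},
# q_j onto span{e_{ij} : i}; xi' = sum sqrt(f_ij) e_{ij}.
def basis_index(i, j):
    return i * m + j

p = [np.zeros((m * m, m * m)) for _ in range(m)]
q = [np.zeros((m * m, m * m)) for _ in range(m)]
for i in range(m):
    for j in range(m):
        p[i][basis_index(i, j), basis_index(i, j)] = 1.0
        q[j][basis_index(i, j), basis_index(i, j)] = 1.0

xi = np.array([np.sqrt(f[i, j]) for i in range(m) for j in range(m)])

check = np.array([[xi @ (p[i] @ q[j] @ xi) for j in range(m)] for i in range(m)])
print(np.allclose(check, f))                  # True: recovers f_ij exactly
```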

A fortiori our approximation problem is solved: we even obtain an equality. A more serious difficulty appears when we consider the same problem for several PVMs. We give ourselves two d-tuples of PVMs (P 1, . . . ,P d ), (Q1, . . . ,Qd ) (each with m outputs) on the same H and we assume that Pik commutes with Qlj for any 1 ≤ i,j ≤ m, 1 ≤ k,l ≤ d. Let Qc (m,d) denote the set of all “covariance matrices” x = [x(k,i;l,j )] of the form x(k,i;l,j ) = ξ,Pik Qlj ξ  (on an arbitrary H ), and let Qs (m,d) be the same set of matrices but restricted to finite-dimensional Hilbert spaces H . We first claim that Qc (m,d) is closed. Let us briefly sketch the easy ultraproduct argument for this: Assume that a = [aijkl ] is the limit when γ → ∞ of ξ(γ ),Pik (γ )Qlj (γ )ξ(γ ) that we assume relative to H (γ ). Let H (U ) be the Hilbert space that is the ultraproduct of the H (γ )’s for a nontrivial U , let ξ(U ),Pik (U ),Qlj (U ) be the associated objects on H (U ), then we find aijkl = limU ξ(γ ),Pik (γ )Qlj (γ )ξ(γ ) = ξ(U ),Pik (U )Qlj (U )ξ(U )

(16.9)

and we conclude a ∈ Qc . Since Qs (m,d) ⊂ Qc (m,d) is obvious, it follows that Qs (m,d) ⊂ Qc (m,d). We can now state Tsirelson’s question. Tsirelson’s problem: Is it true that Qs (m,d) = Qc (m,d) for all m,d ? As we already mentioned in Remark 3.40, the free product Z2 ∗ Z2 is amenable and hence 2∞ ∗2∞ is nuclear. It is not difficult to deduce from this (as should be clear from the sequel) that Qs (m,d) = Qc (m,d) when m = d = 2, and of course we already checked in (16.8) that this also holds for any m if d = 1. For the other values the problem is open, and in fact: Theorem 16.4 ([95, 138, 191]) The Tsirelson problem is equivalent to the Kirchberg conjecture whether the min and max norms coincide on C ⊗ C .



To clarify the connection we need more notation. Let m A(m,d) = m ∞ ∗ · · · ∗ ∞ (d times).

Let Aj ⊂ A(m,d) denote the j th copy of m ∞ in the (unital) free product A(m,d). Let E(m,d) ⊂ A(m,d) be the linear subspace formed of all the sums x1 + · · · + xd where xj ∈ Aj for all 1 ≤ j ≤ d. Note that being unital and self-adjoint, E(m,d) is an operator system. We start with an elementary observation. Remark 16.5 (From PVMs to A(m,d) ⊗max A(m,d)) Consider two d-tuples of PVMs (P 1, . . . ,P d ), (Q1, . . . ,Qd ) in B(H ) such that all Pik commute with all Qlj as before (1 ≤ k,l ≤ d, 1 ≤ i,j ≤ m). Let πP k : m ∞ → B(H ) → B(H )) be the associated ∗-homomorphism as in (16.7)). Let (resp. πQl : m ∞ πp : A(m,d) → B(H ) (resp. πq : A(m,d) → B(H )) be the ∗-homomorphism on the free product canonically defined by (πP k )1≤k≤d (resp. (πQl )1≤l≤d ). Our commutation assumption implies that πp and πq have commuting ranges. Therefore πp .πq is a ∗-homomorphism on A(m,d) ⊗max A(m,d), and hence the linear map F : A(m,d) ⊗ A(m,d) → C defined by F (x ⊗ y) = ξ,πp (x)πq (y)ξ  extends to a state on A(m,d) ⊗max A(m,d). This gives us: Proposition 16.6 Let F be a state on A(m,d) ⊗max A(m,d). Let ejk denote the m m m j th basis vector of m ∞ in the kth copy of ∞ inside A(m,d) = ∞ ∗ · · · ∗ ∞ . k l Let xF (k,i;l,j ) = F (ei ⊗ ej ). Then Qc (m,d) = {xF | F state on A(m,d) ⊗max A(m,d)}.

(16.10)

Proof The preceding remark shows Qc ⊂ {xF }. To show the converse, just consider the GNS construction applied to F on A(m,d) ⊗max A(m,d). The next lemma records an elementary (linear algebraic) observation. . Let H ⊂ H  Lemma 16.7 Let (Pj )1≤j ≤m be a PVM on a Hilbert space H  be finite dimensional. There is a finite-dimensional K0 with H ⊂ K0 ⊂ H  there is a PVM such that for any finite-dimensional K with K0 ⊂ K ⊂ H (Pj )1≤j ≤m on K such that PH Pj = PH Pj |H for all 1 ≤ j ≤ m. |H

Proof Let Ej = Pj (H ). Of course Pj and PEj commute and they coincide on . Let H . Let K0 = E1 + · · · + Em . Note H ⊂ K0 . Assume K0 ⊂ K ⊂ H pj = PEj |K : K → Ej ⊂ K. In order to replace (pj ) by an m-tuple (Pj ) with sum = IdK , we set Pj = pj for j < m and Pm = pm + [IdK − (p1 + · · · + pm )]. Then for any h ∈ H we



have Pj h = pj h = PEj h = Pj h for j < m and Pm h ∈ Pm h + K ( K0 ⊂ Pm h + H ⊥ . This yields the desired property. 1 and Lemma 16.8 Let (Pi )1≤i≤m and (Qj )1≤j ≤m be PVM’s respectively on H    H2 . Let H1 ⊂ H1 and H2 ⊂ H2 be finite-dimensional subspaces. There are 1 and H2 ⊂ K2 ⊂ H 2 finite-dimensional subspaces K1,K2 with H1 ⊂ K1 ⊂ H k l   and PVM’s P and Q (1 ≤ k,l ≤ d) respectively on K1 and K2 such that for all 1 ≤ i,j ≤ m and all 1 ≤ k,l ≤ d we have PH1 ⊗2 H2 [Pi ⊗ Qj ]|H1 ⊗2 H2 = PH1 ⊗2 H2 [Pi k ⊗ Qlj ]|H1 ⊗2 H2 . k

l

Proof We apply Lemma 16.7. This tells us that if K1 and K2 are chosen large enough there are Pi k ,Qj l as required. Lemma 16.9 Let H1,H2 be finite dimensional. Let u : A(m,d) → B(H1 ) and v : A(m,d) → B(H2 ) be unital c.p. maps. There are finite-dimensional Hilbert spaces K1,K2 with K1 ⊃ H1 , K2 ⊃ H2 and ∗-homomorphisms π1 : A(m,d) → B(K1 ), π2 : A(m,d) → B(K2 ) such that ∀x ∈ E(m,d) ⊗ E(m,d)

(u ⊗ v)(x) = PH1 ⊗2 H2 (π1 ⊗ π2 )(x)|H1 ⊗2 H2 .

Proof By Stinespring’s theorem we know that this holds (on the whole 2 and 1, H of A(m,d) ⊗ A(m,d)) with arbitrary Hilbert spaces, say H   1 ), π : A(m,d) → B(H 2 ) in place ∗-homomorphisms π1 : A(m,d) → B(H 2 of H1,H2 . Let then P k (resp. Ql ) be the PVM’s associated respectively to the restrictions of π1 (resp. π2 ) to the kth (resp. lth) copy of m ∞ in A(m,d). Let K1 ,K2 and Pi k ,Qj l be as in Lemma 16.8. Then the ∗-homomorphisms π1,π2

with values in B(K1 ) and B(K2 ), associated respectively to (Pi k ) and (Qj l ) satisfy the required equality on E(m,d) ⊗ E(m,d). Remark 16.10 Let Q⊗ (m,d) be the set of matrices x = [x(k,i;l,j )] of the form xf (k,i;l,j ) = ξ,(Pik ⊗ Qlj )ξ  where P k and Ql are PVM’s on a finitedimensional space H and ξ is a unit vector in H ⊗2 H . The same reasoning as for Remark 16.6 shows that Q⊗ (m,d) = Qs (m,d). Proposition 16.11 Let f be a state on A(m,d) ⊗min A(m,d). Let xf (k,i;l,j ) = f (eik ⊗ ejl ). Then Qs (m,d) = Q⊗ (m,d) = {xf | f state on A(m,d) ⊗min A(m,d)}. (16.11) Proof By (iii) in Theorem 4.21 f is the pointwise limit of a net (Fγ ) of states of the form F γ (x ⊗ y) = ψ γ (uγ (x) ⊗ vγ (y)),



γ

where ψ γ is a state on B(H1 ⊗2 H2 ) for finite-dimensional Hilbert spaces γ γ γ γ γ H1 ,H2 and where u1 ,u2 are unital c.p. maps from A(m,d) to B(H1 ) and γ B(H2 ). By the same reasoning as for (16.6) (see also Remark 4.24), we may assume that ψ γ is a vector state. By Lemma 16.9 there are ∗-homomorphisms γ γ γ γ γ γ π1 : A(m,d) → B(K1 ) and π2 : A(m,d) → B(K2 ) with K1 ,K2 finite dimensional such that for all x,y ∈ E(m,d) γ

γ

γ

γ

u1 (x) ⊗ u2 (y) = PH γ ⊗2 H γ (π1 (x) ⊗ π2 (y))|H γ ⊗2 H γ . 1

2

1

2

Applying that to x = eik and y = ejl we obtain xf (k,i;l,j ) = limγ x γ (k,i;l,j ) where x γ (k,i;l,j ) = F γ (eik ⊗ ejl ) γ

γ

= ψ γ (u1 (eik ) ⊗ u2 (ejl )) and we claim that x γ ∈ Q⊗ (m,d). Indeed, if ψ γ (t) = ξ,tξ  for some unit γ γ vector ξ ∈ H1 ⊗2 H2 we find x γ (k,i;l,j ) = ξ,(Pik ⊗ Qlj )ξ  with Pik = γ γ π1 (eik ) and Qlj = π2 (ejl ). This shows that {xf } ⊂ Qs . To show the converse, by the preceding Remark 16.10 it suffices to show Q⊗ ⊂ {xf } (since the latter set is obviously closed). This inclusion is immediate: let x ∈ Q⊗ of the form x(k,i;l,j ) = ξ,(Pik ⊗ Qlj )ξ . Then x = xf where f is the state on A(m,d) ⊗min A(m,d) defined by f (x ⊗ y) = ξ,(πP k (x) ⊗ πQl (y))ξ . Putting together Proposition 16.6 and the last one we obtain: Proposition 16.12 The following assertions are equivalent for any fixed (m,d). (i) Qs (m,d) = Qc (m,d). (ii) The restrictions to E(m,d) ⊗ E(m,d) of the states of A(m,d) ⊗α A(m,d) for α = min and α = max form identical subsets of (E(m,d) ⊗ E(m,d))∗ . Moreover, these assertions imply (iii) The min and max norms of A(m,d) ⊗ A(m,d) coincide on the subspace {T + T ∗ | T ∈ E(m,d) ⊗ E(m,d)}. Proof Observe that (eik ⊗ ejl ) is a basis of the linear space E(m,d) ⊗ E(m,d). Therefore the restriction of a state f of A(m,d) ⊗α A(m,d) to E(m,d) ⊗ E(m,d) is entirely determined by xf . The equivalence of (i) and (ii) is then clear by (16.10) and (16.11).

308

Equivalence with Tsirelson’s problem

Let T ∈ E(m,d) ⊗ E(m,d). Clearly T + T ∗ α = sup{|f (T + T ∗ )|} where the sup runs over all states f on A(m,d) ⊗α A(m,d). Thus (ii) ⇒ (iii). We will now relate the last statement to C ∗ (Fd ) ⊗ C ∗ (Fd ). Lemma 16.13 The C ∗ -algebra A(m,d) has the LLP. Moreover, the identity of A(m,d) factors via unital c.p. maps through C ∗ (Fd ). Proof The first assertion is clear since the LLP is stable by free products (see Theorem 9.44). For the second one (which actually also implies the first one) we use Boca’s Theorem 2.24. It is easy to see that the identity of ∗ ∗ m ∞ = C (Z/mZ) factors via unital c.p. maps through C (Z) (this follows by applying Remark 9.54 to the natural quotient ∗-homomorphism C ∗ (Z) → C ∗ (Z/mZ)). By Theorem 2.24 the identity of A(m,d) factors through C ∗ (Z) ∗ · · · ∗ C ∗ (Z) = C ∗ (Z ∗ · · · ∗ Z) (d times) and Z ∗ · · · ∗ Z = Fd . Lemma 16.14 Let E(d) ⊂ C ∗ (Fd ) be the linear span of the d unitary generators and the unit. There are unital c.p. maps vm : C ∗ (Fd ) → A(m,d) and wm : A(m,d) → C ∗ (Fd ) such that vm (E(d)) ⊂ E(m,d) for any m and such that wm vm → IdC ∗ (Fd ) pointwise when m → ∞. Proof It is easy to check this for d = 1 (see Remark A.20 and recall Remark 1.32). Then by Boca’s Theorem 2.24 the general case follows. We can now complete the proof of the equivalence of the Tsirelson problem with the Connes–Kirchberg one. Proof of Theorem 16.4 Assume Qs (m,d) = Qc (m,d) for all m. Recall that any pair of unital c.p. maps gives rise to a contraction on the maximal tensor product (see Corollary 4.18). Then by Proposition 16.12 and Lemma 16.14 the min and max norms coincide on {T + T ∗ | T ∈ E(d) ⊗ E(d)} ⊂ C ∗ (Fd ) ⊗ C ∗ (Fd ). Since this holds for all d it holds for d = ∞, i.e. with E ⊂ C in place of E(d) ⊂ C ∗ (Fd ). By Corollary 14.9 and Remark 14.17 we conclude that the min and max norms coincide on C ⊗C , which means the Kirchberg conjecture holds. Conversely, if the Kirchberg conjecture holds, by Proposition 13.1 we know from LLP ⇒ WEP that the min and max norms coincide on A ⊗ A for any A with the LLP. Therefore, by Lemma 16.13 and Proposition 16.12 again we conclude that Qs (m,d) = Qc (m,d) for all m,d. Remark 16.15 (About POVMs) A positive operator valued measure (in short POVM) with m outputs is an m-tuple of self-adjoint operators (aj )1≤j ≤m on  a Hilbert space H with aj = I . Although this notion is more general than

16.3 Strong Kirchberg conjecture

309

PVMs, it is easy to reduce the study of correlation matrices of POVMs to that of PVMs. Indeed, consider two d-tuples of POVMs (a 1, . . . ,a d ), (b1, . . . ,bd ) (each with m outputs) on the same H and assume that aik commutes with bjl for any 1 ≤ i,j ≤ m, 1 ≤ k,l ≤ d. We wish to address the same problem as before but for x(k,i;l,j ) = ξ,aik bjl ξ . There is a 1–1 correspondence between POVMs and unital positive (and hence c.p. by Remark 1.32) linear maps from m ∞ to B(H ). Just consider for any → B(H ) defined by ua (ei ) = ai . If two such a = (ai )1≤i≤m the map ua : m ∞ POVMs (ai )1≤i≤m and (bj )1≤j ≤m mutually commute, the ranges of ua and ub also commute. By Boca’s Theorem 2.24 and by Corollary 4.19 we have a unital c.p. map u : A(m,d)⊗max A(m,d) → B(H ) such that u(eik ⊗ejl ) = aik bjl or any 1 ≤ i,j ≤ m, 1 ≤ k,l ≤ d. By Stinespring’s dilation Theorem 1.22 )  ⊃ H and a ∗-homomorphism π : A(m,d) ⊗max A(m,d) → B(H we have H such that u(·) = PH π(·)H . It follows that for any ξ ∈ H we have ξ,aik bjl ξ  = ξ,Pik Qlj ξ  where P k and Ql are the PVMs associated to π by setting π(eik ⊗ 1) = Pik and π(1 ⊗ ejl ) = Qlj . This shows that the Tsirelson problem for POVMs reduces to the case of PVMs.

16.3 Strong Kirchberg conjecture In [192] Ozawa introduced a strong version of Kirchberg’s conjecture that he proved to be equivalent to the assertion that C ⊗min C ⊗min B = C ⊗max C ⊗max B. It is an easy exercise to see that this is the same as the two assertions that C ⊗min C = C ⊗max C together with (C ⊗max C )⊗min B = (C ⊗max C )⊗max B. In other words, this is the same as the Kirchberg conjecture together with the assertion that C ⊗max C = C ∗ (F∞ × F∞ ) has the LLP. The strong Kirchberg conjecture from [192] asserts the following for any d ≥ 2: For any δ > 0 ∃ε > 0 such that given unitaries U1, . . . ,Ud , V1, . . . ,Vd in B(H ) with dim(H ) < ∞, satisfying ∀i,j

[Ui ,Vj ] ≤ ε

d , V 1, . . . , V d in  ⊃ H with dim(H ) < ∞ and unitaries U 1, . . . , U there is H  B(H ) satisfying

310

Equivalence with Tsirelson’s problem ∀i,j

i , V j ] = 0 and Ui − PH U i |H  ≤ δ, Vj − PH Vj  ≤ δ. [U |H

Actually, its validity for some d ≥ 2 implies the same for all d ≥ 2. The next lemma somewhat explains the role of the LLP. Lemma 16.16 Consider the assertion of the strong Kirchberg conjecture  be finite dimensional. Then this is equivalent without the requirement that H to the LLP for C ∗ (Fd × Fd ). In [192], Ozawa also considers an analogous equivalent conjecture involving positive operator valued measures (POVMs) in the style of Tsirelson’s conjecture. At this stage we refer to [192] for detailed proofs.

16.4 Notes and remarks The elementary results of §16.1 are only formulated to clarify (hopefully) those of the subsequent §16.2. See [139, 140] for more recent information on Bell’s inequalities and their connections with operator spaces. See Dykema and Juschenko’s [75] for more results on unitary correlation matrices. In [129] Harris and Paulsen study a different kind of unitary correlation matrices, derived from consideration of the Brown algebra (see Remark 13.2) and prove an analogue of Theorem 16.4 in their setting. See Slofstra’s [230, 231] (and Dykema and Paulsen’s [76]) for proofs that the set Qs (m,d) is not closed in general. In a different direction, the papers [115, 116] by Haagerup and Musat relate what they call the asymptotic quantum Birkhoff property for factorizable Markov maps to the Connes embedding problem.

17 Property (T) and residually finite groups Thom’s example

Definition 17.1 A finitely generated discrete group G, with generators g1,g2, . . . ,gn , is said to have property (T) if the trivial representation is isolated in the set of all unitary representations of G. More precisely, this means that there is a number δ > 0 such that, for any unitary representation π the condition (∃ξ ∈ Hπ , ξ  = 1, supj ≤n π(gj )ξ − ξ  < δ) suffices to conclude that π admits a nonzero invariant vector, or in other words that π contains the trivial representation (as a subrepresentation). It is easy to see that this property actually does not depend on the choice of the set of generators, but of course the corresponding δ does depend on that choice. The classical example of discrete group with property (T) is G = SL3 (Z). This goes back to D. Kazhdan (1967) and property (T) groups are often also called “Kazhdan groups.” We will use the following basic fact. Lemma 17.2 If G with generators S = {g1,g2, . . . ,gn } has property (T) then there is a function f : (0,∞) → (0,∞) with limε→0 f (ε) = 0 such that, for any unitary representation π : G → B(Hπ ), if ξ ∈ Hπ is a unit vector satisfying supj ≤n π(gj )ξ − ξ  < ε there is a unit vector

ξ

∈ Hπ such that

π(g)ξ  = ξ  ∀g ∈ G and ξ − ξ   < f (ε). Proof Let Hπinv ⊂ Hπ be the subspace of π -invariant vectors, i.e. vectors x ∈ Hπ such that π(g)x = x for all g ∈ G. Let P be the orthogonal projection

311

312

Property (T) and residually finite groups ⊥

from Hπ onto Hπinv . Clearly, Hπinv is invariant under π and the restriction of ⊥ π to it has no nonzero invariant vector. Therefore for any ξ ∈ Hπinv with ξ  = 1 we must have supj ≤n π(gj )ξ − ξ  ≥ δ, where δ is as in Definition 17.1. By homogeneity this implies supj ≤n π(gj )ξ − ξ  ≥ δξ  for all ξ and since π(gj )P ξ = P ξ we have π(gj )ξ − ξ = π(gj )(ξ − P ξ ) − (ξ − P ξ ) for all 1 ≤ j ≤ n, and hence ∀ξ ∈ Hπ

supj ≤n π(gj )ξ − ξ  ≥ δξ − P ξ .

Therefore ∀ξ ∈ Hπ

d(ξ,Hπinv ) ≤ δ −1 supj ≤n π(gj )ξ − ξ .

Let ξ  = P ξ P ξ −1 . If supj ≤n π(gj )ξ − ξ  < ε and ξ  = 1 we have ξ − P ξ  ≤ ε/δ and hence P ξ  ≥ 1 − ε/δ. Assuming 0 < ε < δ, we obtain ξ  − ξ  ≤ ξ  − ξ P ξ −1  + ξ P ξ −1 − ξ  ≤ (ε/δ)(1 − ε/δ)−1 . Property (T) can also be reformulated in terms of spectral gap, as follows. We will return to that theme in §19.3. Proposition 17.3 A discrete group G generated by a finite subset S ⊂ G containing the unit has property (T) if and only if there is ε ∈ (0,1) such that for any unitary representation π : G → B(Hπ ) we have    −1   (17.1) π(s) − PHπinv  ≤ 1 − ε. |S| s∈S

Proof Let n = |S|. By (A.6) if   n−1 

s∈S

  π(s)P|H inv ⊥  > 1 − ε π

(17.2)

√ ⊥ there is a unit vector ξ in Hπinv such that π(s)ξ − π(t)ξ  < 2 2εn for any s,t √ ∈ S. In particular since the unit of G is in S we have sups∈S π(s)ξ − ξ  < 2 2εn. Now √ assume that G has property (T). Let δ be as in Definition 17.1. Then if 2 2εn = δ, i.e. if ε = δ 2 /8n, we conclude that (17.2) is impossible: ⊥ otherwise the restriction of π to Hπinv would have an invariant unit vector and this is absurd by the very definition of Hπinv . This shows that (17.1) holds for ε = δ 2 /8n. Conversely, if (17.1) holds for some ε > 0 and if sups∈S π(s)ξ − ξ  < δ for some unit vector ξ then ξ − PHπinv ξ  ≤ 1 − ε + δ. Thus if we choose δ < ε, we find PHπinv ξ = 0 so that the vector PHπinv ξ is a nonzero invariant vector for π , and G has property (T).

Property (T) and residually finite groups

313

When the group G has property (T) the structure of C ∗ (G) is very special, in particular it splits as a direct sum pC ∗ (G) ⊕ (1 − p)C ∗ (G), as shown by the next statement. Proposition 17.4 Let G be a finitely generated group with Property (T). Then there is a self-adjoint projection p in the center of C ∗ (G) such that for any ∗-homomorphism π : C ∗ (G) → B(Hπ ) associated to a unitary representation π : G → B(Hπ ) we have π(p) = PHπinv , in particular, π(p) = 1 if π is the trivial representation and π(p) = 0 if π is any nontrivial irreducible unitary representation. Proof Note that Hπinv is also the set of invariant vectors of the range of π so we will denote it simply by Hπinv . Let S be a finite symmetric set of generators.   Let t = |S|−1 s∈S UG (s), so that π(t) = |S|−1 s∈S π (s). The projection p is simply the limit in C ∗ (G) of t m when m → ∞. Let us show that this limit exists. Note π(t)m − PHπinv = π(t)m (1 − PHπinv ) for any m ≥ 1 and π(t) commutes with (1 − PHπinv ) so that π(t)m − PHπinv = (π(t)(1 − PHπinv ))m = (π(t) − PHπinv )m . By Property (T) as in (17.1) there is ε ∈ (0,1) such that π(t) − PHπinv  ≤ 1 − ε for any π , and hence π(t)m − PHπinv  ≤ (1 − ε)m

(17.3)

for any m ≥ 1. A fortiori, π(t)m − π(t)m+1  ≤ 2(1 − ε)m , and since this holds for any π we have t m − t m+1 C ∗ (G) ≤ 2(1 − ε)m . Therefore by the Cauchy criterion t m converges to a limit p ∈ C ∗ (G). Since t ∗ = t we have p∗ = p, and also p2 = lim t 2m = p. Moreover, by (17.3) for any π we have π(p) = lim π(t m ) = lim π(t)m = PHπinv , and since PHπinv obviously commutes with the range of π , we have π(p)π(x) = π(x)π(p) for any x ∈ C ∗ (G). In particular, taking π = IdC ∗ (G) we see that p is in the center of C ∗ (G). We now connect the factorization property, introduced in Definition 7.36 and further studied in Theorem 11.55, with property (T). For simplicity, we still denote by λG : C ∗ (G) → Cλ∗ (G) the ∗-homomorphism that extends the unitary representation λG , so that λG (UG (t)) = λG (t) for t ∈ G. Theorem 17.5 Let G be a discrete group with property (T). Let S ⊂ G be an arbitrary generating set with 1 ∈ S. Let E ⊂ C ∗ (G) be the linear span of S. Assume that      τG (|λG (xj )|2 )) ≤  xj ⊗ xj  . (17.4) ∀n ∀x1, . . . ,xn ∈ E min

314

Property (T) and residually finite groups

Then S is residually finite dimensional (RFD in short), i.e. S is separated by the set of finite-dimensional unitary representations. In particular, if (17.4) holds for S = G, then G is RFD. Before giving the proof we start with comments and consequences to motivate Theorem 17.5. Remark 17.6 By a classical Theorem of Malcev [177], any finitely generated linear group is RF. Thus for finitely generated groups RFD implies RF. Actually, the groups G that are separated by their finite-dimensional unitary representations are characterized as subgroups of compact groups. They are usually called “maximally almost periodic.” By Theorem 11.55, the factorization property implies (17.4) with S = G, therefore: Corollary 17.7 If G has property (T) and the factorization property, then G is residually finite. Corollary 17.8 If C ∗ (G) has the WEP and G has property (T), then G is residually finite. Proof If C ∗ (G) has the WEP, then the min and max norms coincide on the set  { xj ⊗ xj } ⊂ C ∗ (G) ⊗ C ∗ (G). Therefore we again have (17.4) for S = G. Property T implies that G is finitely generated, thus by Malcev’s theorem (see Remark 17.6) G is RF. Proof of Theorem 17.5 Let C = C ∗ (G). By Theorem 11.38 and Remark 11.39 for a suitable embeddding C ⊂ B(H ) there is a net (hi ) of Hilbert–Schmidt operators on H with tr(h∗i hi ) = 1 such that ∀y,x ∈ E

τG (λG (y)∗ λG (x)) = limi tr(y ∗ xh∗i hi )

and such that, denoting by g → U (g) ∈ B(H ) the unitary representation of G obtained by restricting to G the embedding C → B(H ), we have ∀s ∈ S

U (s)hi − hi U (s)2 = U (s)hi U (s)∗ − hi 2 → 0.

It follows that (hi ) is an approximately invariant unit vector for the representation U ⊗ U of G on S2 (H ) = H ⊗2 H . Since G has property (T) there is, by Lemma 17.2, a net of invariant unit vectors (hi ) in S2 (H ) such that  hi − hi 2 → 0. Let Ti = h∗ i hi . We have then ∀y,x ∈ E

τG (λG (y)∗ λG (x)) = limi tr(y ∗ xTi ),

tr(Ti ) = 1 and U (s)Ti = Ti U (s) for all s ∈ S, and hence for all g ∈ G.

Property (T) and residually finite groups

315

The latter condition implies that U (g) commutes with all the spectral projections of Ti ≥ 0, and since Ti is compact these are finite dimensional. Thus if we write the spectral decomposition of Ti as Ti =

 k

λik Pki

with λik > 0

 k

λik tr(Pki ) = 1

we find that πki : g → U (g)|P i (H ) is a finite-dimensional unitary representation k

of G on Pki (H ), such that for any y,x ∈ E τ (λG (y)∗ λG (x)) = limi

 k

λik tr(πki (y)∗ πki (x)).

In particular for any s,t ∈ S δs,t = limi

 k

λik tr(πki (s −1 t)).

From which it is clear that {πki } separates the points of S. In [244], Andreas Thom exhibited a group G with property (T), that is approximately linear (i.e. “hyperlinear”), and even sofic, but not residually finite. This is a remarkable example (or counterexample) because: Proposition 17.9 Let G be a group with property (T), that is approximately linear (i.e. “hyperlinear”) but not residually finite. Then C ∗ (G) has neither the WEP nor the LLP. Proof That C ∗ (G) fails the WEP follows from Corollary 17.8. Since G is approximately linear, we know (see Corollary 9.60 or Remark 14.5) that we can write MG as a quotient MG = A/I of a WEP C ∗ -algebra A. Let E ⊂ C ∗ (G) be any finite-dimensional subspace. If C ∗ (G) had the LLP, the ˙ G ) would be locally natural map u : C ∗ (G) → A/I (which is the same as Q 1-liftable. We claim that this would imply (17.4). Indeed, there would be a map uE : E → A with uE cb ≤ 1 such that quE = u|E . Let xj ∈ E and yj = uE (xj ) ∈ A (1 ≤ j ≤ n). Then we would have u(xj ) = q(yj ) and hence   u(xj )2

L2 (τG )

    ≤ u(xj ) ⊗ u(xj ) MG ⊗max MG     ≤ xj ⊗ yj  ∗ C (G)⊗max A.

316

Property (T) and residually finite groups

By Corollary 9.40 (general form of Kirchberg’s Theorem) and since yj = uE (xj ) and uE cb ≤ 1 we would have         xj ⊗ yj  ∗ = xj ⊗ yj  ∗  C (G)⊗max A C (G)⊗min A     ≤ xj ⊗ xj  ∗ , C (G)⊗min E

    and since  xj ⊗ xj C ∗ (G)⊗ E =  xj ⊗ xj C ∗ (G)⊗ C ∗ (G) , our claim min min (17.4) would follow. By Theorem 17.5 this would contradict the fact that G is not residually finite, thus showing that C ∗ (G) fails the LLP.

Remark 17.10 Let us say that an operator u : C → B between C ∗ -algebras locally factors (completely boundedly) through a C ∗ -algebra A if there is a constant c such that for any finite-dimensional subspace E ⊂ C the restriction v w u|E admits a factorization u|E : E −→A−→B with vcb wcb ≤ c. Using the subsequent inequality (22.16) the preceding argument can be modified to show more generally that the inclusion u : C ∗ (G) → MG does not locally factor (completely boundedly) through a WEP C ∗ -algebra. This negates at the same time both WEP and LLP for C ∗ (G), when we know that MG is QWEP and G has (T).

17.1 Notes and remarks Theorem 17.5 originally comes from [157]. A simpler proof appears in [17] which we recommend to the reader for (much) more information on Property (T) (see also [127]). Proposition 17.4 is due to Valette [252]. A. Thom’s construction of a group G as in Proposition 17.9 appears in [244] to which we refer the reader for full details. This is the first example for which C ∗ (G) fails the LLP.

18 The WEP does not imply the LLP

Although S. Wassermann had proved in his 1976 paper [255] that B(H ) is not nuclear (assuming dim(H ) = ∞), the problem whether A ⊗min B = A ⊗max B when A = B = B(H ) remained open until [141]. In the latter paper, several different proofs were given. Taking into account the most recent information from [120], we now know that   n tmax t ∈ B(H ) ⊗ B(H ), rk(t) ≤ n ≥ √ . (18.1) sup tmin 2 n−1 This estimate is rather sharp asymptotically, since it can be shown that the √ supremum appearing in (18.1) is ≤ n (see [141] or [208, p. 353]). Remark 18.1 We need to clarify a few points regarding complex conjugation, which we already discussed in §2.3. In general, we will need to consider the conjugate A¯ of a C ∗ -algebra A. This is the same object but with the complex multiplication changed to (λ,a) → λ¯ a, so that A¯ is anti-isomorphic to A. Recall that for any a ∈ A, we denote by a¯ the same element viewed as ¯ Recall (see Remark 2.13) that A¯  Aop via the mapping an element of A. ∗ a¯ → a . The distinction between A and A¯ is necessary in general, but not for A = B(H ) since in that case, using H  H , we have B(H )  B(H )  B(H ), and in particular MN  MN . In the case of MN , the mapping a¯ → [aij ] is an embedding of MN into MN , taking eij to eij . As a consequence, for any matrix a in MN (A) we have           eij ⊗ aij  = eij ⊗ aij  = [aij ]M (A) . (18.2)  MN ⊗min A

MN ⊗min A

By (2.13) this also implies [a¯ ij ]MN (A) ¯ = [aij ]MN (A) .

317

N

318

The WEP does not imply the LLP

Note however that H  H depends on the choice of a basis so the isomorphism B(H )  B(H ) is not canonical. Nevertheless, this shows that the problem whether the min and max norms are the same is identical for B(H ) ⊗ B(H ) and for B(H ) ⊗ B(H ). Remark 18.2 Consider a1, . . . ,an in A and b1, . . . ,bn in B. Using the preceding remark we have     aj ⊗ bj  

A⊗α B

    aj∗ ⊗ bj  =

Aop ⊗α B

for any “reasonably” well behaved C ∗ -norm, in particular for α = min or max. Moreover, we have             aj ⊗ bj  = aj∗ ⊗ bj  op = sup  π(aj∗ )σ (bj )  A ⊗max B

A⊗max B

where the supremum runs over all commuting range pairs π : A → B(H ), σ : B → B(H ) with σ a representation and π an anti-representation on the same (arbitrary) Hilbert space H . Remark 18.3 Let M be a C ∗ -algebra equipped with a tracial state τ . Then the GNS construction (see §A.13) associated to (M,τ ) produces a Hilbert space H = L2 (τ ), a cyclic unit vector ξ in H associated to 1M and commuting left-hand and right-hand actions of M induced by the corresponding multiplications on M. As earlier, we denote them by L(a)h = a · h (L is what we denoted πf in §A.13) and R(a)h = h·a. Then L (resp. R) is a representation of M (resp. M op ) on B(H ) and the ranges of L and R commute (see the beginning of §11). We have then for any n-tuple (u1, . . . ,un ) of unitaries in M  n   uj ⊗ uj   1

 n   = u∗j ⊗ uj 

M op ⊗max M

1

M⊗max M

= n.

(18.3)

Indeed, this is n   n     ≥ L(u∗j )R(uj ) ≥  u∗j · ξ · uj  1

1

(18.4)

but since ξ ∈ L2 (τ ) is the element associated to 1M we have u∗j · ξ · uj = ξ hence (18.4) is ≥ n and ≤ n is trivial by the triangle inequality. In particular, for any unitary matrices u1, . . . ,un in MN we have   n   uj ⊗ uj   1

min

 n   = uj ⊗ uj  1

max

= n.

(18.5)

18.1 The constant C(n): WEP ⇒ LLP

319

18.1 The constant C(n): WEP ⇒ LLP In [141], a crucial role is played by a certain constant C(n), defined as follows: C(n) is the infimum of the constants C such that for each m ≥ 1, there is Nm ≥ 1 and an n-tuple [u1 (m), . . . ,un (m)] of unitary Nm × Nm matrices such that n    uj (m) ⊗ uj (m ) ≤ C. (18.6) sup  m=m

j =1

min

By (18.2) the preceding min-norm can be understood either in MNm ⊗min MNm  MNm Nm (with uj (m) denoting the usual matrix with conjugate entries) or in MNm ⊗min MNm with uj (m) ∈ MNm . The connection of C(n) to B(H ) ⊗ B(H ) goes through the following statement. Theorem 18.4 ([141]) For any n ≥ 1 and ε > 0, there is a tensor t of rank n in B ⊗ B such that tmax /tmin ≥ n/C(n) − ε. Remark 18.5 By Corollaries 22.13 and 22.16 we have tmax = tmin for any  xj ⊗ xj . This perhaps explains tensor t ∈ B(H ) ⊗ B(H ) of the form t = why Theorem 18.4 is not so easy. We have trivially C(n) ≤ n for all n. The crucial fact to show that B(H ) ⊗min B(H ) = B(H ) ⊗max B(H ) is that C(n) < n for at least one n > 1. The final word on this is now: Theorem 18.6 ([120]) √ (18.7) ∀n ≥ 2 C(n) = 2 n − 1. √ The (much easier) lower bound 2 n − 1 ≤ C(n) was proved in [206]. The complete proof of the upper bound uses a delicate random matrix ingredient (namely Theorem 18.16) the proof of which is beyond the scope of these notes, but we will give the proof of (18.7) modulo this ingredient in the next section. See the next chapter for simpler proofs that C(n) < n. Theorem 18.4 is an immediate consequence of the next Theorem 18.9. To prove the latter we will use a compactness argument for “convergence in distribution” (or rather in moments) of n-tuples of operators, that is described in the next two lemmas. By “distribution” we mean the collection of all “moments” in the operators (viewed as noncommutative random variables) and their adjoints. This is the same notion as that of ∗-distribution used by Voiculescu in free probability (see [254]), but our notation is slightly different.

320

The WEP does not imply the LLP

Let S be the set consisting of the disjoint union of the sets Sk = [1, . . . ,n]k × {1,∗}k . For any w = ((i1, . . . ,ik ),(ε1, . . . ,εk )) in Sk and any n-tuple x = (x1, . . . ,xn ) in B(H ) we denote w(x) = xiε11 xiε22 . . . xiεkk (where x ε = x if ε = 1 and x ε = x ∗ if ε = ∗). Let x = (x1, . . . ,xn ) be an n-tuple in a von Neumann algebra M equipped with a tracial state τ . By “the distribution of x,” we mean the function μx : S → C defined by μx (w) = τ (w(x)). When x = (x1, . . . ,xn ) is an n-tuple of unitary operators, we may as well consider that μx is a function defined on Fn (free group with generators g1, . . . ,gn ) by setting μx (w) = τ (πx (w)) for any “word” w ∈ Fn , where πx : Fn → M is the unitary representation defined by π(gj ) = uj . The following is elementary and well known. Lemma 18.7 Fix n ≥ 1. Let (M(m),τm ) be a sequence of von Neumann algebras equipped with (tracial) states. Let x(m) = (x1 (m), . . . ,xn (m)) be a bounded sequence of n-tuples with x(m) ∈ M(m)n . Then there is a subsequence {mk } such that the distributions of x(mk ) converge pointwise on S when k → ∞. To identify the limit of a sequence of distribution, it will be convenient to use ultraproducts. Let (M(m),τm ) be as before (m ∈ N). Let U be a nontrivial  ultrafilter on N. Let B = (⊕ m M(m))∞ . Let HU be the GNS Hilbert space for the state τU defined on B by ∀y = (ym ) ∈ B τU (y) = limU τm (ym ). Let ∗ y ) = 0}. As explained earlier (see §11.2), IU = {y = (ym ) | limU τm (ym m since τU vanishes on IU it defines a tracial state on B/IU , that we still denote (albeit abusively) by τU . We have an isometric representation: a → L(a) ∈ B(L2 (τU )) of B/IU on L2 (τU ) (associated to left-hand multiplication) and an isometric representation a → R(a) ∈ B(L2 (τU )) of (B/IU )op (associated to right-hand multiplication). We already know (cf. Theorem 11.26) that MU = L(B/IU ) is a von Neumann subalgebra of B(HU ) and that we have MU = R(B/IU ) and R(B/IU ) = MU . We will view (abusively) τU as a functional on MU by setting for any y ∈ MU τU (y) = limU τm (ym ), where (ym ) ∈ B is any element of the equivalence class of L−1 (y) ∈ B/IU .

18.1 The constant C(n): WEP ⇒ LLP

321

Let x = {x(m) | m ∈ N} be a bounded sequence of n-tuples with x(m) ∈ x = ( x1, . . . , xn ) be M(m)n as before. Equivalently, x is a sequence in Bn . Let  n the associated n-tuple in MU . Then, for any “word” w in S , we clearly have τU (w( x )) = limU τm (w(x(m))). Hence the distribution of x(m) tends pointwise to that of  x along U , so we can write limU μx(m) = μ x . The next (again elementary and well known) lemma connects limits in distribution with ultrafilters. Lemma 18.8 Let {x(m) | m ∈ N} be a sequence of n-tuples as in the preceding Lemma. The following are equivalent. (i) The distributions of x(m) converge pointwise when m → ∞. (ii) For any nontrivial ultrafilter U on N, the associated n-tuple xn ) in (MU ,τU ) has the same distribution (i.e. its  x = ( x1, . . . , distribution does not depend on U ). (iii) There is a tracial probability space (M,τ ) and y = (y1, . . . ,yn ) in M n such that x(m) → y in distribution. Proof (i) ⇒ (ii) is essentially obvious. (ii) ⇒ (iii) is proved by picking any fixed nontrivial ultrafilter U and taking (M,τ ) = (MU ,τU ) (see Remark A.4 for clarification). Lastly (iii) ⇒ (i) is again obvious. With the notation in (11.14), let {[u1 (m), . . . ,un (m)],m ∈ N} be a sequence of n-tuples of unitary matrices as in (18.6) (recall u1 (m), . . . ,un (m) are of size Nm × Nm ). For any subset ω ⊂ N, let    Bω = ⊕ MNm . (18.8) m∈ω



Let N = ω(1) ∪ ω(2) be any disjoint partition of N into two infinite subsets, and let   uj (m) ∈ Bω(1) u2j = uj (m ) ∈ Bω(2) . u1j =  m ∈ω(2)

m∈ω(1)

(18.9) Theorem 18.9 Suppose that [u1 (m), . . . ,un (m)] converges in distribution when m → ∞ and satisfies (18.6). Let n u1j ⊗ u2j . t= j =1

We have then tmin ≤ C

and

tmax = n,

322

The WEP does not imply the LLP

and hence tmax /tmin ≥ n/C, where the min and max norms are relative to Bω(1) ⊗ Bω(2) . Proof We have obviously tmin =

sup

m∈ω(1),m ∈ω(2)

    uj (m) ⊗ uj (m ) 

hence tmin ≤ C. We now turn to tmax . Let U be a nontrivial ultrafilter on ω(1) and let V be one on ω(2). We construct the ultraproducts MU and MV as previously. Since the quotient mappings Bω(1) → MU and Bω(2) → MV are ∗-homomorphisms, we have     uj ⊗ vj  tmax ≥  MU ⊗max MV

 where uj (resp. vj ) is the equivalence class modulo U (resp. V) of uj (m) m∈ω(1)  (resp. uj (m)). m∈ω(2)

Now, since we assume that [u1 (m), . . . ,un (m)] converges in distribution when m → ∞, the two n-tuples (u1, . . . ,un ) and (v1, . . . ,vn ) must have the same distribution relative respectively to τU and τV . But this implies (see Remark 11.20) that there is a ∗-isomorphism π from the von Neumann algebra Nu ⊂ MU generated by (u1, . . . ,un ) to the one Nv ⊂ MV generated by (v1, . . . ,vn ), defined simply by π(uj ) = vj . Moreover, since we are dealing here with finite traces, there is a conditional expectation P from MU onto Nu (see Proposition 11.21). Therefore the composition T = π P is a unital completely positive map from MU to Nv ⊂ MV such that T (uj ) = vj . Thus we find by (4.30)         uj ⊗ vj  ≥ T (uj ) ⊗ vj   MU ⊗max MV MV ⊗max MV     = vj ⊗ vj  . (18.10) MV ⊗max MV

But then by (18.3) we conclude that tmax ≥ n. Remark 18.10 The same reasoning shows that     vj ⊗ vj  tmin ≥ 

MV ⊗min MV

.

Proof of Theorem 18.4 Since there exist max-injective inclusions Bω(1) ⊂ B and Bω(2) ⊂ B, Theorem 18.9 gives us a tensor t  ∈ B ⊗ B of rank n with t  max /t  min ≥ n/C. Using B  B, we find a similar tensor in B⊗B. We now exploit the mere fact that C(n) < n for some n, which we know by Theorem 18.6:

√ 18.2 Proof that C(n) = 2 n − 1 using random unitary matrices

323



Corollary 18.11 Recall B = ⊕ n≥1 Mn ∞ . Then the pair (B,B) (or the pair (B,B)) is not nuclear and B (although it has the WEP) fails the LLP. Proof Since C(n) < n for some n, Theorem 18.9 tells us that (Bω(1),Bω(2) ) is not a nuclear pair. We have inclusions Bω(1) ⊂ B and Bω(2) ⊂ B each admitting a c.p. contractive projection. By Proposition 7.19 (i), these inclusions are max-injective, therefore we have an isometric embedding Bω(1) ⊗max Bω(2) ⊂ B ⊗max B so the min and max norms cannot coincide on B ⊗ B or equivalently (recall B ∼ = B) on B ⊗ B. Since for the inclusion B ⊂ B(2 ) there is also a c.p. contractive projection from B(2 ) onto B, the same argument shows they do not coincide on B(2 ) ⊗ B, which means that B fails the LLP. Similarly: Corollary 18.12 For B = B(2 ), we have B ⊗min B = B ⊗max B or equivalently B (although it has the WEP) fails the LLP. Proof Since we have max-injective inclusions B ⊂ B this follows from Corollary 18.11. More generally, we have Corollary 18.13 A von Neumann algebra M has the LLP if and only if M is nuclear. A pair (M,N) of von Neumann algebras is nuclear if and only if one of them is nuclear.

 Proof Let B = ⊕ n≥1 Mn ∞ . If M is not nuclear, by Theorem 12.29 there is an embedding B ⊂ M, and since B is injective, the embedding is maxinjective. Since B fails the LLP (by Corollary 18.11), so does M (see Remark 9.3 or Remark 9.13). If, in addition, N is not nuclear, we have B ∼ = B ⊂ N . Thus again, since (B,B) is not nuclear, we find that (M,N ) is not nuclear if none of M,N is nuclear, and the converse is trivial.

√ 18.2 Proof that C(n) = 2 n − 1 using random unitary matrices We start with an easy result from [206] estimating C(n) from below. The alternate proof we give here of (18.11) is due to Szarek.

324

The WEP does not imply the LLP

Proposition 18.14 Let u1, . . . ,un be arbitrary unitary operators in B(H ) (H any Hilbert space), then  n √   uj ⊗ uj  . (18.11) 2 n−1≤ j =1

S+

Proof Let = {t ∈ S2 (H ) | t ≥ 0 case of (2.11). Note:

min

t2 = 1}. We will use the self-adjoint

∀a ∈ B(H ),∀t ∈ S + tr(tata ∗ ) = tr([t 1/2 at 1/2 ][t 1/2 at 1/2 ]∗ ) ≥ 0. (18.12) n n Let T = 1 uj ⊗ uj and let S = j =1 λFn (gj ). In accordance with the identification of T with the operator t → uj tu∗j acting on S2 (H ), we denote  t,T t = tr( t ∗ uj tu∗j ) for any t ∈ S2 (H ). The idea of the proof is to show that for any integer m ≥ 1 and any t in S + we have t,(T ∗ T )m t ≥ δe,(S ∗ S)m δe  = τFn ((S ∗ S)m )

(18.13)

where δe denotes the basis vector in 2 (Fn ) indexed by the unit element of Fn .  We can expand (T ∗ T )m as a sum of the form w∈I uw ⊗ uw where the uw ’s are unitaries of the form u∗i1 uj1 u∗i2 uj2 . . .. Now for certain ws, we have uw = I (and hence tr(uw tuw∗ t) = 1) by formal cancellation (no matter what the uj s are). Let us denote by I  ⊂ I the set of all such w’s. Then by (18.12) we have for all t in S +   t,(T ∗ T )m t = tr(uw tuw∗ t) ≥ 1 = card(I  ). w∈I 

w∈I

An elementary counting argument shows that card(I  ) = (S ∗ S)m δe,δe  = τFn ((S ∗ S)m ). Thus we obtain (18.13). Therefore (recalling (11.10)) T ∗ T  ≥ lim supt,(T ∗ T )m t1/m ≥ lim sup(τFn ((S ∗ S)m ))1/m = S ∗ S, m→∞

m→∞

so that we obtain T  ≥ S, whence (18.11) by (3.20). Corollary 18.15

√ 2 n − 1 ≤ C(n).

(18.14) 

Proof Let (uj (m)) be as in (18.6). Let B = (⊕ MNm )∞ . Let U be a non trivial ultrafilter on N. Let I ⊂ B be the (closed self-adjoint two-sided) ideal formed of all x ∈ B such that limU xi  = 0. Let q : B → B/I be the quotient morphism. Let uj = q((uj (m)). We claim that  n   n     uj ⊗ uj  ≤ supm=m  uj (m) ⊗ uj (m ) .  j =1

min

1

Using the claim we deduce (18.14) from (18.11). To check the claim note that  for any C ∗ -algebra A we have A ⊗min B = (⊕ m A ⊗min MNm )∞ and hence for any (aj ) in An

√ 18.2 Proof that C(n) = 2 n − 1 using random unitary matrices  n   aj ⊗ uj  

A⊗min (B/I )

1

n    ≤ limU ,m  aj ⊗ uj (m ) 1

A⊗min MN

325

. m

Indeed, since IdA ⊗ q : A ⊗min B → A ⊗min (B/I) is a contraction we observe  that the left-hand side is ≤ supm  n1 aj ⊗ uj (m )min . Moreover, let γ ⊂ N  be a subset that belongs to U . If we multiply m → n1 aj ⊗ uj (m ) by the indicator of γ and apply the same observation we find  n   n     aj ⊗ uj  ≤ supm ∈γ  aj ⊗ uj (m ) ,  A⊗min (B/I )

1

1

A⊗min MN

m

and since this holds for any γ the claim follows. (Note: the claim merely spells out the fact that the natural morphism (A⊗min B)/(A⊗min I) → A⊗min (B/I) is always contractive, see the discussion in §10.1.) Applying the claim twice gives us  n  n     uj ⊗ uj  ≤ limU ,m  uj (m) ⊗ uj   1 1 min min n    ≤ limU ,m limU ,m  uj (m) ⊗ uj (m ) , 1

and obviously the last term is ≤ supm=m  the claim.

n 1

min

uj (m)⊗uj (m ), which proves

Alternate proof: Passing to a subsequence we may assume that (uj (m)) converges in distribution when m → ∞. Then with the notation from Remark 18.10 we have n        uj (m) ⊗ uj (m ) ≥ tmin ≥  vj ⊗ vj  sup  m=m

MV ⊗min MV

1

and we again deduce (18.14) from (18.11). √ The proof that C(n) ≤ 2 n − 1 is much more delicate. The first proof by Haagerup and Thorbjørnsen in [119] was based on a fundamental limit theorem for Gaussian random matrices, which was a considerable strengthening of Theorem 12.24. We will use the following refinement due to Collins and Male [56], valid for unitary matrices, which gives a more direct approach. Theorem 18.16 ([56]) Let UN denote the group of all N × N unitary (N ) (N ) matrices (N ≥ 1). Let U1 , . . . ,Un be a sequence of independent matrix valued random variables, each having as its distribution the normalized Haar measure on UN . Let g1, . . . ,gn be the free generators of the free group Fn . For convenience we set U0(N ) = I (unit in UN ) and we denote by g0 the unit in Fn . Then, for all k and for all a0, . . . ,an in Mk , we have, for almost all ω  n  n     lim  aj ⊗ Uj(N ) (ω) = aj ⊗ λFn (gj ) . (18.15) N →∞

0

min

0

min

326

The WEP does not imply the LLP

In particular, if a1, . . . ,an are all unitary, for almost all ω n  √   (N ) lim  aj ⊗ Uj (ω) = 2 n − 1. N →∞

1

min

(18.16)

The implication (18.15) ⇒ (18.16) follows from Remark 3.17 and (3.20). √ Proof of Theorem 18.6 By (18.14) it suffices to show that C(n) ≤ 2 n − 1. Fix ε > 0. Obviously it suffices to construct a sequence of n-tuples {(uj (m))1≤j ≤n | m ≥ 1} of unitary matrices (we emphasize that (uj (m))1≤j ≤n is assumed to be an n-tuple of matrices of size Nm × Nm ) such that, for any integer p ≥ 1, we have   √   sup  uj (m) ⊗ uj (m ) < 2 n − 1 + ε. (18.17) 1≤m=m ≤p

min

We will construct this sequence and the sizes Nm by induction on p. Assume that we already know the result up to p. That is, we already know a family {(uj (m))1≤j ≤n | 1 ≤ m ≤ p} formed of p n-tuples satisfying (18.17). We need to produce an additional n-tuple (uj (p + 1))1≤j ≤n of unitary matrices (possibly of some larger size Np+1 × Np+1 ) such that (18.17) still holds for the enlarged family {(uj (m))1≤j ≤m | 1 ≤ m ≤ p + 1} formed of one more n-tuple. By (18.16), for any 1 ≤ m ≤ p, we have for almost all ω  n √   (N ) uj (m) ⊗ Uj (ω) = 2 n − 1. lim  N →∞

1

min

Hence, if N is chosen large enough, we can ensure that, for all 1 ≤ m ≤ p simultaneously, we can find ω such that  n  √   uj (m) ⊗ Uj(N ) (ω) < 2 n − 1 + ε.  1

min

(N )

But then, if we set Np+1 = N and uj (p + 1) = Uj (ω), the extended family {(uj (m))1≤j √ ≤n | 1 ≤ m ≤ p + 1} clearly still satisfies (18.17). This proves C(n) ≤ 2 n − 1. Fix ε > 0. Actually, by concentration of measure arguments (see e.g. [7, §4.4]), there is a sequence of sizes N1 < N2 < · · · , for our random unitary matrices, such that  ⎫ ⎧    n ⎬ ⎨ √   (Nm ) (Nm )  ≤ 2 n − 1 + ε > 1 − ε. U (ω) ⊗ U (ω) P ω ∈ | sup  j j   ⎭ ⎩ m=m j =1  min

Thus, provided the sizes grow sufficiently fast, with close to full probability a random choice yields (18.6) with almost the best C. We refer to [211, 212] for more details and for related estimates.

18.3 Exactness is not preserved by extensions

327

18.3 Exactness is not preserved by extensions In this section we indicate a quick way to produce an example of a separable nonexact C ∗ -algebra A and a closed ideal I ⊂ A such that both I and the quotient C = A/I are exact. In this situation one usually says that A is an extension of C by I. Thus exactness is not stable by extension. When I and C are both nuclear, then A is nuclear (see Corollary 8.18). Thus in sharp contrast, nuclearity is preserved by extensions.  Recall B = (⊕ N ≥1 MN )∞ . For any V ∈ B, for notational convenience, we will denote in this section by V (N ) ∈ MN the N ’th coordinate of V . Remark 18.17 Let I0 ⊂ B be the subset of all b = (b(N ) ) ∈ B such that limN b(N )  = 0. Let A ⊂ B be a unital C ∗ -subalgebra. Let q : A → A/A∩I0 be the quotient map. Then for any b0, . . . ,bn ∈ A and a0, . . . ,an in Mk (n,k ≥ 1) we have n  n      lim sup  aj ⊗ bj(N )  = aj ⊗ q(bj ) . (18.18) N →∞

0

Mk (A/A∩I0 )

0

Mk (MN )

To justify this, let Q : B → B/I0 be the quotient morphism. Since A/A ∩ I0 = A/ ker(Q|A )  Q(A) ⊂ Q(B)  B/I0,

(18.19)

it suffices to check (18.18) in the case A = B, for which the easy verification is left to the reader. Kirchberg gave in [155] the first example of a nonexact extension of an exact C ∗ -algebra by the algebra of compact operators on 2 . Our example, based on the random unitary matrix model, will be deduced from Theorem 18.16, but first we analyze the underlying deterministic matrix model. Remark 18.18 Let V0 = 1B and let g0 be as before the unit in Fn . Let V1, . . . ,Vn be unitaries in B that form a matrix model for MFn in the sense of §12.5. This implies that for any nontrivial ultrafilter U on N, denoting as before by qU : B → MU the quotient map, the correspondence λFn (gj ) → qU (Vj ) (0 ≤ j ≤ n) extends to an isometric normal ∗-homomorphism embedding MFn into MU . In particular this implies that the following holds: n    aj ⊗ λFn (gj ) (18.20) ∀k ≥ 1 ∀aj ∈ Mk  0

 n   aj ⊗ qU (Vj ) = 0

Mk (MU )

Mk (MFn )

.

By Proposition 9.7 the latter property (18.20) implies conversely that the previous correspondence extends to an isometric ∗-homomorphism from Cλ∗ (Fn ) into MU .

328

The WEP does not imply the LLP

Theorem 18.19 Let V0 = 1 and let V1, . . . ,Vn be unitaries in B. Assume that they satisfy n    aj ⊗ λFn (gj ) ∀k ≥ 1 ∀aj ∈ Mk  ∗ 0

Mk (Cλ (Fn ))

n    = lim sup  aj ⊗ Vj (N ) N →∞

0

Mk (MN )

.

(18.21)

Let A ⊂ B be the (unital) C ∗ -algebra generated by V0,V1, . . . ,Vn , and let I = A ∩ I0 . Then A/I  Cλ∗ (Fn ), so that A/I and I are exact but A is not exact. Proof First observe that since I ⊂ I0 and I0 is obviously nuclear (e.g. because it has the CPAP, see Corollary 7.12) it is immediate that I is exact (see Remarks 10.4 and 10.5). Let q : A → A/I be the quotient morphism. Let E = span[λFn (gj ) | 0 ≤ j ≤ n] ⊂ Cλ∗ (Fn ). Similarly let Eq = span[q(Vj ) | 0 ≤ j ≤ n] ⊂ A/I, and let u : E → Eq be the linear map defined by u(λFn (gj )) = q(Vj ) (0 ≤ j ≤ n). By (18.21) and (18.18) u is a unital completely isometric isomorphism. By Proposition 9.7 it extends to a ∗-isomorphism π : Cλ∗ (Fn ) → A/I. Thus A/I is exact by Remark 10.21. As observed in (18.19) A/I ⊂ B/I0 and since I0 ⊂ IU we have obviously a canonical map rU : B/I0 → B/IU = MU , such that qU = rU Q. Let σ : C ∗ (Fn ) → A be the ∗-homomorphism defined by σ (UFn (gj )) = Vj (0 ≤ j ≤ n). Let J : A → B denote the canonical inclusion. Let ψ = qU J : A → MU . Note the factorization through B: J

qU

ψ : A−→B−→MU . Assume for contradiction that A was exact. Then J would be (min → max)tensorizing by Corollary 10.8, and hence so would be ψ = qU J : A → MU . It would follow (see the diagram at the end of the proof) that the ∗-homomorphism  : C ∗ (Fn ) → MU defined by (UFn (gj )) = qU (Vj ) σ

ψ

(that can be factorized as C ∗ (Fn )−→A−→MU ) would also be (min → max)tensorizing. But we claim that this is not true when n ≥ 2, a contradiction that shows that A is not exact. To check the claim, observe that  n  n     qU (Vj ) ⊗(UFn (gj )) = qU (Vj ) ⊗ qU (Vj)  0

MU ⊗max MU

0

MU ⊗max MU

=n+1 where the last equality follows from (18.3), and also since qU (Vj ) = rU |A/I (q(Vj ))

18.4 A continuum of C ∗ -norms on B ⊗ B

329

n    qU (Vj ) ⊗ UFn (gj )  0 MU ⊗min C ∗ (Fn ) n    ≤ q(Vj ) ⊗ UFn (gj ) 0 A/I ⊗min C ∗ (Fn ) n    = u(λFn (gj )) ⊗ UFn (gj ) 0 A/I ⊗min C ∗ (Fn ) n  √   = λFn (gj ) ⊗ UFn (gj ) ∗ = n, ∗ 0

Cλ (Fn )⊗min C (Fn )

√ where the last inequality follows from (3.21) and (3.13). Since 2 n < n + 1 this proves the claim. The preceding proof is summarized by the following diagram: C ∗ (Fn )

σ

A

J

B

q

Q

Cλ∗ (Fn )  A/I (N )

(N )

qU

MU

rU

B/I0 (N )

Corollary 18.20 Let 1 = U0 ,U1 , . . . ,Un be the random unitaries in Theorem 18.16, assumed defined on a probability space ( ,P). Let B = (⊕N ≥1 MN )∞ . Let Uj (ω) = (Uj(N ) (ω))N ≥1 ∈ B. Let Aω ⊂ B be the (unital) C ∗ -algebra generated by U0 (ω), . . . ,Un (ω), and let Iω = Aω ∩ I0 . Then for almost all ω we have Aω /Iω  Cλ∗ (Fn ),

(18.22)

and Aω /Iω and Iω are exact but Aω is not. (N )

Proof By the preceding statement, it suffices to show that (Uj (ω)) satisfies (18.21) for almost all ω. To check the latter assertion just observe that there is  ⊂ with P(  ) = 1 such that for any ω ∈  we have (18.15) for all k and all a0, . . . ,an ∈ Mk . Indeed, we may restrict by density to a0, . . . ,an with rational entries, and then  appears as an intersection of a countable family of events each having full probability by (18.15).

18.4 A continuum of C ∗ -norms on B ⊗ B The preceding results from [141] only showed that there is more than one C ∗ -norm on B ⊗ B or B ⊗ B. In [193] N. Ozawa and the author proved that there is actually a continuum of distinct such norms. The proof for B ⊗ B is

330

The WEP does not imply the LLP ℵ

very simple and it yields a family of (maximal) cardinality 22 0 of distinct C∗ -norms. We include it in this section. Curiously, there does not seem to be a simple argument to transplant the result to B ⊗ B (as we did for min = max in Corollary 18.12). The latter case is more delicate, and we refer the reader to [193] for full details. Let (Nm ) be any sequence of positive integers tending to ∞ and let    MNm . B= ⊕ m





Theorem 18.21 There is a family of cardinality 22 0 of mutually distinct (and hence inequivalent) C∗ -norms on B ⊗ M for any von Neumann algebra M that is not nuclear. Remark 18.22 Assuming M ⊂ B(2 ) nonnuclear, we note that the cardinality of B(2 ) and hence of B(2 ) ⊗ M is c = 2ℵ0 , so the set of all real valued ℵ functions of M ⊗ B(2 ) into R has the same cardinal 22 0 as the set of C∗ -norms. Fix n > 2, and let [u1 (m), . . . ,un (m)] be a sequence of n-tuples of unitary Nm × Nm matrices satisfying (18.6). By compactness (see Lemmas 18.7 and 18.8) we may and do assume (after passing to a subsequence) that the n-tuples [u1 (m), . . . ,un (m)] converge in distribution (i.e. in moments) to an n-tuple [u1, . . . ,un ] of unitaries in a von Neumann algebra M equipped with a faithful normal trace τ . Then, if U is any nontrivial ultrafilter, we can take for (M,τ ) the ultraproduct (MU ,τU ), and the resulting limit distribution along U does not depend on U . For any subset s ⊂ N and any u ∈ B we denote by u[s] = ⊕m u[s](m) ∈ B the element of B defined by u[s](m) = u(m) if m ∈ s and u[s](m) = 0 otherwise. We denote by πU : B → MU

(or πU : B → MU )

the natural quotient map. Recall that if U,V are ultrafilters on N, then U = V if and only if there are disjoint subsets s ⊂ N and s  ⊂ N with s ∈ U and s  ∈ V (see Remark A.5). In that case we have ∀u ∈ B

πU (u[s  ]) = πV (u[s]) = 0.

(18.23)

Lemma 18.23 Let U = V be ultrafilters on N. Consider disjoint subsets s ⊂ N and s  ⊂ N with s ∈ U and s  ∈ V, and let n t (s,s  ) = uk [s] ⊗ uk [s  ] ∈ B ⊗ B. k=1

18.4 A continuum of C ∗ -norms on B ⊗ B

331

Then t (s,s  )B⊗min B ≤ C

and

[πU ⊗ πV ](t (s,s  ))MU ⊗max MV = n.

Proof We have obviously t (s,s  )min =

sup

(m,m )∈s×s 

    uk (m) ⊗ uk (m ) 

hence t (s,s  )min ≤ C. We now turn to the max tensor product. Let uk = πU (uk [s]) and vk = πV (uk [s  ]) so that we have     uk ⊗ vk  [πU ⊗ πV ](t (s,s  ))MU ⊗max MV = 

MU ⊗max MV

.

Since we assume that [u1 (m), . . . ,un (m)] converges in distribution, (u1, . . . ,un ) and (v1, . . . ,vn ) must have the same distribution relative respectively to τU and τV . Arguing as for (18.10) we obtain [πU ⊗ πV ](t (s,s  ))max = n. For any nontrivial ultrafilter U on N we denote by αU the norm defined on B ⊗ B by ∀t ∈ B ⊗ B

αU (t) = max{tB⊗min B , [πU ⊗ Id](t)MU ⊗max B }. ℵ

Theorem 18.24 There is a family of cardinality 22 0 of mutually distinct (and hence inequivalent) C∗ -norms on B ⊗ B. More precisely, the family {αU } indexed by nontrivial ultrafilters on N is such a family on B ⊗ B. Proof Let (U,V) be two distinct nontrivial ultrafilters on N. Let s ⊂ N and s  ⊂ N be disjoint subsets such that s ∈ U and s  ∈ V. By Lemma 18.23 we have αU (t (s,s  )) ≥ [πU ⊗ πV ](t (s,s  ))MU ⊗max MV = n but since (πV ⊗ Id)(t (s,s  )) = 0 by (18.23) we have αV (t (s,s  )) ≤ C < n. This shows αU and αV are different, and hence (automatically for C∗ -norms) inequivalent. Lastly, it is well known (see e.g. [58, p. 146]) that the cardinality ℵ of the set of nontrivial ultrafilters on N is 22 0 . Proof of Theorem 18.21 If M is not nuclear, by Theorem 12.29 there is an embedding B ⊂ M. Moreover, since B is injective, there is a conditional expectation from M to B, which guarantees that, for any A, the max norm on A ⊗ B coincides with the restriction of the max norm on A ⊗ M (see Corollary αU on B ⊗ M 4.18 or Proposition 7.19). Thus we can extend αU to a C∗ -norm  by setting ∀t ∈ B ⊗ M

 αU (t) = max{tB⊗min M , [πU ⊗ Id](t)MU ⊗max M }.

332

The WEP does not imply the LLP

Since  αU = αU on B ⊗ B, this gives us a family of distinct C ∗ -norms on B ⊗ M. Since B  B, we can replace B by B if we wish. Remark 18.25 It is easy to see that Theorem 18.21 remains valid for any choice of the sequence (Nm ) and in particular it holds if Nm = m for all m, i.e. for B = B.

18.5 Notes and remarks The main sources for §18.1 are [141] with the simplifications brought by [209]. The first proof in [141] that (B,B) is not nuclear was more indirect. It went through first proving that if B had the LLP then the set OSn of n-dimensional (n ≥ 3) operator spaces would be separable for the metric dcb (see §20), and (after some more topological considerations) that would contradict a certain operator space version of Grothendieck’s theorem. A second proof was proposed in a revision of the same paper [141]. The latter proof used property (T) groups to show that C(n) < n for n ≥ 3, and deduced from that the nonseparability of (OSn,dcb ) (see §20 for more on this). Later on, A. Valette observed that the Lubotzky–Philips–Sarnak results were exactly √ what was needed to prove that C(n) ≤ 2 n − 1 for any n = p + 1 with p prime. Finally, the paper [120] proved using random matrices that this bound remains valid √ for any n ≥ 2, and hence (by the easy lower bound in [206]) that C(n) = 2 n − 1 for all n ≥ 2. §18.4 comes from [193].

19 Other proofs that C(n)< n: quantum expanders

In what precedes, we used random matrices to show that C(n)< n. In this section, we will describe a different way to prove the latter fact using more explicit examples based on the theory of “expanders” or “expanding graphs.” Actually, we only use graphs that are Cayley graphs of finite groups. We will see that the more recent notion of “quantum expander” is particularly well adapted for our purposes. We should warn the reader that there is also a notion of quantum graph that seems to have little to do with quantum expanders.

19.1 Quantum coding sequences. Expanders. Spectral gap To prove that C(n) < n we must produce a sequence of n-tuples (u(m))m≥1 of unitary matrices of the same size Nm such that (18.6) holds for some C < n. Following (and modifying) the terminology from [253] we call such a sequence a “quantum coding sequence” of degree n. See §19.4 for an explanation of our terminology. To introduce expanders we first recall some notation. Let π : G → B(Hπ ) be a unitary representation. Let Hπinv = {ξ ∈ Hπ | π(t)ξ = ξ

∀t ∈ G}

be the set of π -invariant vectors. Let S ⊂ G be a finite subset generating G (i.e. G is the smallest subgroup containing S). We denote     (19.1) ε(π,S) = 1 − |S|−1  π(s)PH inv ⊥  . s∈S

π

Definition 19.1 Let (Gm,Sm ) be a sequence of finite groups with generating sets Sm such that |Sm | = n for all m ≥ 1. The sequence of associated Cayley graphs is called “an expander” or an “expanding family” (the terminology is not so well established) if |Gm | → ∞ and

333

334

Other proofs that C(n)< n: quantum expanders

inf ε(Gm,Sm ) > 0.

m≥1

The notion of “expanding graph” has had a major impact far beyond graph theory. For our purposes, we will only discuss Cayley graphs of groups. Given a group G with a finite symmetric set of generators S ⊂ G, the associated Cayley graph is defined as having G as its vertex set and having as edges the pairs (x,y) in G2 such that y −1 x ∈ S. For a general group G the spectral gap ε(G,S) can be defined by setting ε(G,S) = inf ε(π,S)

(19.2)

with the infimum running over all unitary representations π of G. Equivalently, this is the spectral gap of the universal representation. As anticipated in Proposition 17.3, ε(G,S) > 0 characterizes property (T) (see §19.3), and we will see in §19.3 that certain property (T) groups lead to expanding families of graphs. For the moment, let us assume that G is finite and that |S| = n. In that case we will show that we may restrict consideration to π = λG and we have      (19.3) 1 − ε(G,S) = n−1 λG (s)|1⊥  s∈S

where 1 denotes the constant function on G (i.e. the element ξ ∈ 2 (G) such that ξ(t) = 1 for all t ∈ G). The latter is an eigenvector for the eigenvalue 1 for  the so-called Markov operator n−1 s∈S λG (s). The number ε(G,S) measures the gap between that extreme eigenvalue and the rest of the spectrum. Since the only vectors invariant under λG are those in C1, we have (using the notation (19.1)) ε(G,S) = ε(λG,S). Indeed, as is well known, when |G| < ∞, λG decomposes as a direct sum of a family formed of all the irreducible representations (each with the  denote the set of irreducible same multiplicity as its dimension). Let G representations on G and let T be the trivial representation on G. As usual we identify two representations if they are unitarily equivalent. With this notation we may write  π λG   π ∈G

and also λG |1⊥ 

  π ∈G\{T }

π.

(19.4)

19.1 Quantum coding sequences. Expanders. Spectral gap

Therefore 1 − ε(G,S) =

sup

 π ∈G\{T }

  |S|−1 

s∈S

  π(s) .

335

(19.5)

Remark 19.2 For any finite group G we have ε(G,S) = inf{ε(π,S)},

(19.6)

where the infimum runs over all unitary representations π of G without invariant vectors. Indeed, by decomposing π into irreducibles the infimum remains  \ {T }. Thus (19.6) follows from (19.5). unchanged if we restrict it to π ∈ G  Remark 19.3 By the Schur Lemma A.69 and (19.6), for any pair π  σ ∈ G we have     |S|−1  (19.7) π(s) ⊗ σ (s) ≤ 1 − ε(G,S). s∈S

Proposition 19.4 Let {π | π ∈ T } be a finite set of distinct irreducible representations of G on a common N-dimensional Hilbert space. Let √ 2 ε = ε(G,S) and n = |S|. Then |T | ≤ (1 + 2/ ε)2nN .  Proof By (19.7), for any π = σ ∈ T we have | s∈S tr(π(s)∗ σ (s))| ≤ n(1 − ε), and hence 1/2 √   ∀π = σ ∈ T n−1 tr |π(s) − σ (s)|2 ≥ 2ε. s∈S

Thus we have a set of |T | unit vectors in a Hilbert space of dimension √ nN 2 2 (and hence Euclidean dimension 2nN ) that are mutually at distance ≥ 2ε. The proposition follows by a well-known √ volume argument: the open balls centered at these points with radius r = 2ε/2 being disjoint, the volume of 2 their union is = |T |r 2nN vol(B) where B is the Euclidean ball of dimension 2nN 2 , and the latter union being included in a ball of radius 1 + r has volume 2 2 at most (1 + r)2nN vol(B). This implies |T | ≤ (1 + 1/r)2nN . Corollary 19.5 Let (Gm,Sm )m≥1 be an expanding family with |Sm | = n for 1 all m ≥ 1. Let Nm be the largest dimension dπ of a representation π ∈ G m. Then Nm → ∞.  Proof Fix a number N . By the preceding proposition, d2 1 π ∈G m,dπ ≤N π  2 remains bounded. Since π ∈G d = |Gm | → ∞, when m is large enough 1 m π 1 we must have dπ > N for some π ∈ G m . In other words Nm > N when m is large enough.

336

Other proofs that C(n)< n: quantum expanders

19.2 Quantum expanders Let H,K be Hilbert spaces. Let S2 (K,H ) denote the Hilbert space of Hilbert–Schmidt operators x : K → H equipped with the norm xS2 = (tr(x ∗ x))1/2 = (tr(xx∗ ))1/2 . Let u = (uj )1≤j ≤n ∈ B(H )n and v = (vj )1≤j ≤n ∈ B(K)n . We denote by Tu,v : S2 (K,H ) → S2 (K,H ) the mapping defined by n u∗j xvj . Tu,v (x) = n−1 1

K∗

(canonically), we may identify K ⊗2 H with S2 (K,H ). With Since K   this identification, Tu,v corresponds to n−1 n1 vj∗ ⊗ u∗j which has the same  norm as n−1 n1 uj ⊗ vj (see Proposition 2.11) so that n    Tu,v  = n−1  uj ⊗ vj  1

where the last norm is in B(K) ⊗min B(H ) or equivalently in B(K ⊗2 H ). Let us denote by L2 (τN ) the space MN equipped with the norm xL2 (τN ) = (N −1 tr(x ∗ x))1/2 associated to the normalized trace τN on MN (namely τN (x) = N −1 tr(x)). Except for the normalization of the trace this is the same as S2 (K,H ) when n K = H = N 2 . Therefore for any v = (vj )1≤j ≤n ∈ MN we have     vj ⊗ vj  Tv,v  = n−1  MN ⊗min MN     −1  ∗ = sup n vj ξ vj  ξ L2 (τN ) ≤ 1}. L2 (τN )

Note that for any ξ,η ∈ BL2 (τN ) we have ) *  −1  ∗ ∗ τN (vj ξ vj η) = n−1 vj∗ ξ vj , η n

≤ Tv,v . L2 (τN )

(19.8)

Let T : L2 (τN ) → L2 (τN ) be a linear map with T  = 1 such that T (I ) = I and T ∗ (I ) = I where I ∈ MN denotes the identity matrix, so that 1 is an eigenvalue (with eigenvector I ) of T and I ⊥ is invariant under T . The “spectral gap” of T is defined as e(T ) = 1 − T|I ⊥ .

(19.9)

Let u be an n-tuple in UN . We then set ε(u) = e(Tu,u ).

(19.10)

Equivalently ε(u) is the largest number ε ≥ 0 such that for any ξ,η in BL2 (τN ) with τN (ξ ) = τN (η) = 0 we have n  −1 (19.11) u∗j ξ ∗ uj η ≤ 1 − ε. n τN 1

19.2 Quantum expanders

337

Lemma 19.6 Let u = (uj )1≤j ≤n ∈ UnN . Then for any k ≤ N and any v = (vj )1≤j ≤n ∈ Mkn we have Tu,v  ≤ (k/N + 1 − ε(u))1/2 Tv,v 1/2 .

(19.12)

Proof For simplicity we replace vj by vj ⊕ 0 ∈ MN . Thus we assume that the vj s are N × N matrices for which there is an orthogonal projection P ∈ MN of rank k such that vj P = Pvj = vj for all j . To prove (19.12) it suffices to show that for any ξ,η in BL2 (τN ) we have −1 n τN (u∗j ξ ∗ vj η) ≤ (k/N + 1 − ε(u))1/2 Tv,v 1/2 . n 1

Since we may replace ξ,η by P ξ,P η, we may assume ξ,η of rank ≤k. Let ξ = U |ξ | and η = V |η| be the polar decomposition. Using the identity   τN (u∗j ξ ∗ vj η) = τN (|η|1/2 u∗j |ξ |1/2 ) (|ξ |1/2 U ∗ vj V |η|1/2 ) and Cauchy–Schwarz we find by (19.8) 1/2 n −1 n τN (u∗j ξ ∗ vj η) ≤ n−1 τN (|η|u∗j |ξ |uj ) Tv,v 1/2 . (19.13) n 1

ξ

1

η

Now let = |ξ | − τN (|ξ |) and = |η| − τN (|η|) so that ξ ,η ∈ BL2 (τN ) but now τN (ξ  ) = τN (η ) = 0. We have n n−1 τN (|η|u∗j |ξ |uj ) ≤ I + II (19.14) 1  where I = τN (|ξ |)τN (|η|) and II = n−1 n1 τN (η u∗j ξ  uj ). Since |ξ | and |η| have rank ≤ k (and τ (P ) = k/N )) we have τN (|ξ |) ≤ τN (|ξ |2 )1/2 (k/N )1/2 and similarly for |η|. It follows that |I | ≤ k/N. Now using the definition of the spectral gap ε(u) in (19.11) we find |II| ≤ 1 − ε(u). Putting these bounds together with (19.13) and (19.14) we obtain (19.12). In analogy with the theory of expanding graphs (or expanders) the following definition was recently introduced: Definition 19.7 Let (Nm )m≥1 be a nondecreasing sequence of integers. For each m, consider u(m) = (u1 (m), . . . ,un (m)) ∈ UnNm . The sequence (u(m))m≥1 is called a quantum expander if Nm → ∞ and infm≥1 ε(u(m)) > 0. We call n the degree of (u(m))m≥1 . The link between quantum expanders and the constant C(n) goes through the following.

338

Other proofs that C(n)< n: quantum expanders

Proposition 19.8 Let (u(m))m≥1 be a quantum expander. There is a subsequence of (u(m))m≥1 that is a quantum coding sequence. Proof Assume ε(u(m)) ≥ ε > 0 for all m. Just choose the subsequence m1 < m2 < · · · so that [Nmk /Nmk+1 ] + 1 − ε ≤ 1 − ε/2 for all k ≥ 1. Then by Lemma 19.6 we obtain (18.6) with C = (1 − ε/2)n, which means (u(m))m≥1 is a quantum coding sequence. Corollary 19.9 To show that C(n) < n, it suffices to know that there is a quantum expander of degree n. Remark 19.10 (From expanders to quantum expanders) Let (Gm,Sm )m≥1 be an expander in the sense of Definition 19.1. Recall |Sm | = n for any m ≥ 1. Let πm : Gm → B(Hm ) be irreducible representations such that dim(Hm ) → ∞ (see Corollary 19.5). Then, by Schur’s lemma A.69, the representation [πm ⊗ πm ]|I ⊥ has no nonzero invariant vector. This implies that its decomposition into irreducible components has only nontrivial irreducible representations. By (19.4), the latter are all contained in the restriction of λGm to 1⊥ , and hence     [πm (s) ⊗ πm (s)]|I ⊥  ≤ 1 − ε(Gm,Sm ). n−1  s∈Sm

In other words ε((πm (s))s∈Sm ) ≥ ε(Gm,Sm ), so that the sequence of n-tuples (πm (s))s∈Sm (m ≥ 1) forms a quantum expander.

19.3 Property (T) We already introduced groups with property (T) in Definition 17.1. We will now show that the existence of such groups leads to that of expanders, from which that of quantum expanders and quantum coding sequences follows. Compared with the random approach, the advantage of this method is that it produces explicit examples. Proposition 19.11 A discrete group G generated by a finite subset S ⊂ G containing the unit has property (T) if and only if it has a nonzero spectral gap ε(G,S) (as defined in (19.2)) or equivalently if there is ε > 0 such that for any unitary representation π : G → B(Hπ ) we have ε(π,S) ≥ ε.

(19.15)

19.3 Property (T)

339

Proof This is but a restatement of Proposition 17.3. Lemma 19.12 Let G,S be as in Proposition 19.11, with property (T). Let ε > 0 be as in (19.15). (i) For any irreducible unitary representation π : G → B(H ) with dim(H ) < ∞ we have ε((π(s))s∈S ) ≥ ε.

(19.16)

(ii) Let σ be another finite-dimensional irreducible unitary representation that is not unitarily equivalent to π , then     (19.17) |S|−1  π(s) ⊗ σ (s) ≤ 1 − ε. s∈S

Proof (i) Let ρ(t) = π(t)⊗π(t). Observe that Hρinv = CI by the irreducibility of π (see §A.21). Therefore by (19.15)     (19.18) π(s) ⊗ π(s)|I ⊥  ≤ 1 − ε. |S|−1  s∈S

Equivalently we have (i). (ii) By Schur’s lemma A.69, the representation π ⊗ σ does not have any invariant vector ξ = 0. Thus (19.17) follows from (19.15) applied with π ⊗ σ in place of π . Proposition 19.13 Let G be a property (T) group with S as in Proposition 19.11. Let n = |S| and let ε0 = ε(G,S). Assume that G admits a sequence (πm )m≥1 of finite-dimensional distinct irreducible unitary representations with dimension tending to ∞. Then the sequence {(πm (s))s∈S | m = 1,2, . . .} is both a quantum expander and a quantum coding sequence and C(n) ≤ (1 − ε0 )n. Proof Let S = {t0, . . . ,tn−1 } with t0 = 1. Let Nm = dim(Hπm ). Let uj (m) = πm (tj ). For any m = m we have πm = πm . By (19.17) this implies   n−1    n−1  u (m) ⊗ u (m ) j j  0  ≤ 1 − ε0, and hence we have a quantum coding sequence. By (19.16) or (19.18), (πm (s))s∈S is a quantum expander. Remark 19.14 (From property (T) to expanders) In the preceding situation, assume that the group G admits a sequence of finite quotient groups Gm with

340

Other proofs that C(n)< n: quantum expanders

quotient maps denoted by qm : G → Gm . Let Sm = qm (S). We assume that |Sm | = n. Let σm : Gm → B(Hm ) be a unitary representation without invariant unit vector (e.g. a nontrivial irreducible one) (m ≥ 1). Then πm = σm qm is a unitary representation without invariant unit vector on G. Moreover,   n−1 S πm  = n−1 Sm σm , and hence ε(σm,Sm ) ≥ ε(G,S). Since this holds for any such σm on Gm this implies ε(Gm,Sm ) ≥ ε(G,S). Therefore if |Gm | → ∞ the sequence (Gm,Sm ) is an expanding family (i.e. “an expander”). This shows that we can deduce the existence of expanding families from that of a property (T) group with the required properties. We give an example in the next remark. Note that for the present remark we could content ourselves with property (τ ) for which we refer the reader to [173]. Remark 19.15 ((SL3 (Zp )) is an expander) By Remarks 19.14 and 19.10, to prove that C(n) < n (or to produce quantum expanders) it suffices to produce a group G with property (T) admitting a sequence of distinct finitedimensional irreducible unitary representations with unbounded dimensions. The classical example for this phenomenon is G = SL3 (Z) (or SLd (Z) for d ≥ 3). For any prime number p let Zp = Z/pZ. Recall this is a field with p elements. The group SL3 (Zp ) is a finite quotient of G. Indeed, we have a natural homomorphism qp : SL3 (Z) → SL3 (Zp ) that takes a matrix [aij ] to the matrix [a˙ ij ] where a˙ ∈ Z/pZ denotes the congruence equivalence class of a ∈ Z modulo p. It is known that this maps SL3 (Z) onto SL3 (Zp ). This follows for instance from the well-known fact that, for any field k, SL3 (k) is generated by the set Sk formed of the unit and the matrices with 1 on the diagonal and only one nonzero entry elsewhere equal to 1. When k = Z/pZ, it is obvious that such matrices are in the range of the preceding homomorphism. Therefore the latter is onto SL3 (Zp ). We will use S = SZ so that n = |S| = 6. Clearly |qp (S)| = |S| = 6 if p > 1. Thus Remark 19.14 shows that the sequence (SL3 (Zp ),qp (S)) indexed by prime numbers p > 1 is an expanding family (i.e. “an expander”). Actually, by Remark 19.14 for any finite generating set S in SL3 (Z) the sequence (SL3 (Zp ),qp (S)) indexed by large enough prime numbers p > 1 is an expanding family (i.e. “an expander”). Indeed, by taking p large enough we can clearly ensure that qp is injective on S and hence |qp (S)| = |S|. The irreducible unitary representations on SL3 (Zp ) have unbounded dimensions when p → ∞. This can be deduced from the property (T) of SL3 (Z) using the same idea as for Corollary 19.5. More explicitly, consider for instance the action of SL3 (Zp ) on the set of “lines” L(p) in Z3p , or equivalently the set of 1-dimensional subspaces in the vector space Z3p (over the field Zp ). By standard linear algebra, SL3 (Zp ) acts transitively on the latter set and this action defines a unitary representation πp of SL3 (Zp ) on 2 (L(p)) that

19.4 Quantum spherical codes

341

permutes the canonical basis vectors. Since the action on L(p) is transitive, the constant functions on L(p) are the only invariant vectors. Furthermore, again by linear algebra, the action is actually bitransitive, and hence (see Lemma A.68) the restriction πp0 = πp |1⊥ to the orthogonal of constant functions is irreducible, and of course its dimension, equal to |L(p)| − 1 = p2 + p (see Remark 24.29), tends to ∞ when p → ∞. Thus we conclude by Remark 19.10 that for any finite unital generating set S in SL3 (Z) with |S| = n the sequence of n-tuples (πp0 (s))s∈S indexed by large enough primes p > 1 is a quantum expander. Remark 19.16 Since it is known (see [250]) that SL3 (Z) is generated by a pair of elements (together with the unit), we may take |S| = 3. The preceding remark then gives us C(3) < 3. Since it can be shown (exercise) that C(2) = 2 this is optimal.

19.4 Quantum spherical codes We would like to motivate the terminology that we adopted to emphasize the analogy with certain questions in coding theory. In [62, ch. 9], Conway and Sloane define a spherical code SC of dimension n, size k and minimal angle 0 < θ < π/2 as a set of k points of the unit sphere in Rn with the property that ∀x,y ∈ SC, x = y

x · y ≤ cos θ .

(19.19)

They discuss the reasons why it is of interest to find the maximal size A(n,θ ) of such a code, and give estimates for it. A similar variant of that problem is the search for a maximal set of vectors {ξ(m)} in the sphere of radius n1/2 in n2 with coordinates all unimodular (e.g. equal to ±1) that are such that supm=m |ξ(m),ξ(m )| < C for some C < n. Such (finite) sequences are useful in coding theory: the family {ξ(m)} itself can be thought of as a code. If we know that the message (i.e. a length n sequence of ±1s) consists of one of the ξ(m)s then even if there are erroneous digits (=coordinates) but fewer than (n−C)/8 of them, we can recognize which ξ(m) was sent in the message. (We leave the easy verification as an exercise). It is thus quite useful to have a number as large as possible of vectors ξ(m) given n and C < n. This becomes all the more useful when C/n is small, typically when C = δn with 0 < δ < n. For this (classical) problem, of course there can only be finitely many such ξ(m)s since the sphere is compact, but in the quantum version, the effect of compactness diminishes when the matrix size N → ∞, and the significant problem becomes to produce an infinite sequence such as the ones we call quantum coding sequences. One can also fix the matrix size N and try to estimate (as a function of N,n,C) the maximal

342

Other proofs that C(n)< n: quantum expanders

number of n-tuples of N × N -unitaries that are separated as in (18.6) for some fixed C < n. Now the main point is the asymptotic behavior of the latter number when N → ∞ as in the forthcoming Theorem 19.17. To illustrate better the analogy that we wish to emphasize, we revise our notation as follows. For any x = (x1, . . . ,xn ) and y = (y1, . . . ,yn ) in MNn we set x¯ = n (x1, . . . ,xn ) ∈ MN and denote n n x·y = xj ⊗ yj ∈ MN ⊗ MN x¯ · y = xj ⊗ yj ∈ MN ⊗ MN . 1

1

We view the set of x’s such that x¯ · xmin ≤ 1 as a quantum analogue of the Euclidean unit sphere. Note that if x = (xj ) ∈ n−1/2 UnN then x¯ · xmin = 1. When x¯ · xmin ≤ 1 and y¯ · ymin ≤ 1 we say that x,y are δ-separated  if  n1 xj ⊗ yj min ≤ 1 − δ. This is analogous to the preceding separation condition (19.19) for spherical codes with cos θ = 1 − δ. The next result is a matricial analogue of some of the known estimates for spherical codes. We interpret it as an upper estimate of the size of quantum spherical codes of dimension n, angle θ = arccos(1 − δ) and (this is the novel parameter) matrix size N. In addition, it gives us a rather large number of δseparated quantum expanders. Theorem 19.17 ([212]) There are absolute constants β > 0 and δ > 0 such that for each 0 < ε < 1 and for all sufficiently large integers n and N , more precisely such that n ≥ n0 and N ≥ N0 with n0 depending on ε, and N0 depending on n and ε, there is a subset T ⊂ MNn with cardinal |T | ≥ exp βnN 2, such that ∀x ∈ T

(n1/2 xj ) ∈ UnN

and

√ ε(x) ≥ 1 − (2 n − 1/n + ε)

∀x = y ∈ T x¯ · ymin ≤ (1 − δ). √ Note that √ 2 n − 1 < n for all n ≥ 3 and hence for 0 < ε < 1 small enough 1 − (2 n − 1/n + ε) > 0. Remark 19.18 As for lower estimates, by the same volume argument as for Proposition 19.4, set T with √ the property in Theorem 19.17 we have √ for any 2 |T | ≤ (1 + 2/ δ)2nN ≤ exp (4nN 2 / δ). We refer to [212] for the proof, which makes crucial use of the following result of Hastings [132]:

19.5 Notes and remarks

343

Theorem 19.19 (Hastings) If we equip UnN with its normalized Haar measure PN,n , then for each n and ε > 0 we have √ lim PN,n ({u ∈ UnN | 1 − ε(u) ≤ 2 n − 1/n + ε}) = 1. N →∞

√ This is best possible in the sense that Theorem 19.19 fails if 2 n − 1 is replaced by any smaller number. The proof of Theorem 19.19 in [132] is rather delicate; however a simpler proof based with Gaussian √ on a comparison √ random matrices is given in [212] with 2 n − 1 replaced by c n − 1 where c is a numerical constant.

19.5 Notes and remarks The first explicit expanding graphs were discovered by Margulis around 1973. See [227] for a very concise introduction to the subject. The main reference for expanders and Kazhdan’s property (T) is Lubotzky’s book [172]. See Lubotzky’s survey [174] for a more recent update. The main interest for us is the notion of spectral gap. We return to that theme with some more references in §24.4. Proposition 19.4 and Corollary 19.5 originate in S. Wassermann’s [259]. The term “coding sequences” was coined by Voiculescu in [253] for the sequences that we choose to call quantum coding sequences to emphasize their noncommutative nature. While they suffice for our main goal to prove that C(n) < n, the notion of quantum expanders turns out to be more convenient because of the analogy with the usual expanders. Quantum expanders were introduced independently by Hastings [132] – a mathematical physicist – and by two computer scientists Ben-Aroya and Ta-Shma [18]. See [19] for more on the connection with computer science. Hastings [132] proved the crucial bound appearing in Theorem 19.19 to exhibit quantum expanders using random unitaries. See also [212]. Quantum expander theory was further developed by Aram Harrow (see [130, 131]) who in particular observed the content of Remark 19.10.

20 Local embeddability into C and nonseparability of (OSn,dcb )

In this section, we tackle various issues concerning finite-dimensional operator spaces. While all the spaces of the same dimension are obviously completely isomorphic there is a natural “distance” that measures to what degree they are really close. When dealing with just Banach spaces E,F one defines classically the Banach–Mazur “distance” d(E,F ) as equal to ∞ if E,F are not isomorphic and otherwise as d(E,F ) = inf{uu−1  | u : E → F isomorphism}. Given two operator spaces E ⊂ B(H ) and F ⊂ B(K) the analogous “distance” (called the cb-distance) is defined as dcb (E,F ) = inf{ucb u−1 cb | u : E → F complete isomorphism}. If E,F are not completely isomorphic we set dcb (E,F ) = ∞. In contrast, we have clearly dcb (E,F ) < ∞ if dim(E) = dim(F ) < ∞. This is a “multiplicative distance” meaning that the triangle inequality takes the following form: for any operator spaces E,F,G we have dcb (E,G) ≤ dcb (E,F )dcb (F,G). Thus if we wanted to insist to have a bona fide distance we could replace dcb by δcb = log dcb and then we would have the usual triangle inequality for δcb . Moreover, since the axioms of a distance include Hausdorff separation, it is natural to identify the spaces E and F if δcb (E,F ) = 0 or equivalently if dcb (E,F ) = 1. If E,F are finite dimensional dcb (E,F ) = 1 (or δcb (E,F ) = 0) if and only if E,F are completely isometric. The last assertion is an easy exercise based on the compactness of the unit ball of CB(E,F ). Let us denote by OSn the set of all operator spaces, with the convention to identify two spaces when they are completely isometric, and let us equip it

344

20.1 Perturbations of operator spaces

345

with the distance δcb . Then we obtain a bona fide metric space. Again a simple exercise shows that it is complete. The Banach space analogue (that we could denote by (Bn,δ)) is called the “Banach–Mazur compactum” and as the name indicates it is a compact metric space for each dimension n (for a proof see [208, p. 334]). In sharp contrast (OSn,δcb ) is not compact and actually not even separable! The main goal of this chapter is to establish this by exhibiting for each n large enough a continuous family of elements of OSn that are uniformly separated. But in practice we will usually not bother to replace dcb by δcb = log dcb and we will state the “distance estimates” in terms of dcb alone. The reader should remember that dcb (E,F ) = 1 is the “shortest distance” so that dcb (E,F ) > 1 + ε with ε > 0 means E,F are separated. In analogy with the Banach space case, it is known (see [208, p. 133]) that ∀E,F ∈ OSn

dcb (E,F ) ≤ n.

(20.1)

20.1 Perturbations of operator spaces We include here several simple facts from the Banach space folklore which have been easily transferred to the operator space setting. We start with a well-known fact (the proof is the same as for ordinary norms of operators). Lemma 20.1 Let v : X → Y be a complete isomorphism between operator spaces. Then clearly any map w : X → Y with v − wcb < v −1 −1 cb is again a complete isomorphism and if we let  = v − wcb v −1 cb we have w−1 cb ≤ v −1 cb (1 − )−1 and w −1 − v −1 cb ≤ v −1 2cb (1 − )−1 . Lemma 20.2 (Perturbation Lemma) Fix 0 < ε < 1. Let X be an operator space. Consider a biorthogonal system (xj ,xj∗ ) (j = 1,2, . . . ,n) with xj ∈ X, xj∗ ∈ X∗ and let y1, . . . ,yn ∈ X be such that  xj∗  xj − yj  < ε. Then there is a complete isomorphism w : X → X such that w(xj ) = yj , wcb ≤ 1 + ε and w−1 cb ≤ (1 − ε)−1 . In particular, if E1 = span(x1, . . . ,xn ) and E2 = span(y1, . . . ,yn ), we have dcb (E1,E2 ) ≤ (1 + ε)(1 − ε)−1 . Proof Recall (1.3). Let ξ : X → X be the map defined by setting ξ(x) =  ∗  ∗ xj  yj − xj  < ε. Let xj (x)(yj − xj ) for all x in X. Then ξ cb ≤

346

Local embeddability into C and nonseparability of (OSn,dcb )

w = I + ξ . Note that w(xj ) = yj for all j = 1,2, . . . ,n, wcb ≤ 1 + ξ cb ≤ 1 + ε and by the preceding lemma we have w−1 cb ≤ (1 − ε)−1 . Corollary 20.3 Let X be any separable operator space. Then, for any n, the set denoted by OSn (X) of all the n-dimensional subspaces of X is separable for the “distance” associated to dcb . Proof Let (x1 (m), . . . ,xn (m)) be a dense sequence in the set of all linearly independent n-tuples of elements of X. Let Em = span(x1 (m), . . . ,xn (m)). Then, by the preceding lemma, for any ε > 0 and any n-dimensional subspace E ⊂ X, there is an m such that dcb (E,Em ) ≤ 1 + ε. Lemma 20.4 Consider an operator space E and a family of subspaces Ei ⊂ E directed by inclusion and such that ∪Ei = E. Then for any ε > 0 and any finite-dimensional subspace S ⊂ E, there exists i and  S ⊂ Ei such that S) < 1 + ε. Let u : F1 → F2 be a linear map between two operator dcb (S,  a

b

spaces. Assume that u admits the following factorization F1 −→E −→F2 with c.b. maps a,b such that a is of finite rank. Then for each ε > 0 there exists i and   a b a factorization F1 −→Ei −→F2 of u with  a cb  bcb < (1 + ε)acb bcb , and  a of finite rank. Proof For the first part let x1, . . . ,xn be a linear basis of S and let xj∗ be the dual basis extended (by Hahn–Banach) to elements of E ∗ . Fix ε > 0. Choose  ∗ xj  xj − yj  < ε . Let i large enough and y1, . . . ,yn ∈ Ei such that  S = span(y1, . . . ,yn ). Then, by the preceding lemma, there is a complete isomorphism w : E → E with wcb w −1 cb < (1 + ε )(1 − ε )−1 such S) ≤ (1 + ε )(1 − ε )−1 so it suffices that w(S) =  S ⊂ Ei . In particular, dcb (S,   to adjust ε to obtain the first assertion. a

b

Now consider a factorization F1 −→E −→F2 and let S = a(F1 ). Note that S is finite dimensional by assumption. Applying the preceding to this S, we find i and a complete isomorphism w : E → E with wcb w −1 cb < 1 + ε a = wa : F1 → Ei and  b = bw−1 such that w(S) ⊂ Ei . Thus, if we take  |Ei , we obtain the announced factorization.

20.2 Finite-dimensional subspaces of C The results of this section are derived from [141]. For any operator space X (actually X will often be a C ∗ -algebra), and any finite-dimensional operator space E, we introduce dS X (E) = inf{dcb (E,F ) | F ⊂ X }. Of course if X = B(H ) with dim(H ) = ∞, dS X (E) = 1 for all E.

(20.2)

20.2 Finite-dimensional subspaces of C

347

We will concentrate on the special case when X = C = C ∗ (F∞ ) and to simplify the notation we set df (E) = dSC∗ (F∞ ) (E).

(20.3)

Theorem 20.5 Let c ≥ 0 be a constant and let X ⊂ B(H) be an operator space. The following are equivalent. (i) df (E) ≤ c for all finite-dimensional subspaces E ⊂ X. (ii) For any finite-dimensional subspace E ⊂ X and any ε > 0 the inclusion E ⊂ B(H) admits a factorization through C of the form vE

wE

E −−→ C −−→B(H) with v E cb w E cb ≤ c + ε. (iii) For any H and any operator space F ⊂ B(H ), we have tB(H)⊗max B(H ) ≤ ctmin .

∀t ∈ X ⊗ F

(iv) Same as (iii) with H = 2 and F = B(2 ). (v) Any mapping u : X → A/I into a quotient C ∗ -algebra that factorizes v w through B(K) (for some K) as X−→B(K)−→A/I with vcb ≤ 1 and wdec ≤ 1 is locally c-liftable. Proof Assume (i). Let u : X → B(H) be the inclusion map. Let E ⊂ X be ⊂C a finite-dimensional subspace. Let ε > 0. Since df (E) ≤ c there is E  such that dcb (E, E) < c + ε. By the extension property of B(H) there is a vE

wE

factorization of u|E : E → B(H) of the form u|E : E −→C −→B(H) with v E cb w E cb ≤ c + ε. Thus (ii) holds. (ii) ⇒ (iii) follows from Kirchberg’s Theorem 9.6, (6.13) and (6.9). (iii) ⇔ (iv) is essentially trivial. v w Assume (iv). Let u be as in (v) of the form u : X−→B(K)−→A/I , i.e. v : B(H) → B(K) be such that u = wv with vcb ≤ 1 and wdec ≤ 1. Let  v cb = vcb ≤ 1. Let T = w v : B(H) → A/I . Note T|X = u.  v|X = v and  v dec (see (6.9)) and hence T dec ≤ 1 (see (6.7)). By Moreover  v cb =  (6.13) we have for any t ∈ X ⊗ B (u ⊗ Id)(t)(A/I )⊗max B = (T ⊗ Id)(t)(A/I )⊗max B ≤ tB(H)⊗max B , and by (7.6) (u ⊗ Id)(t)(A⊗min B )/(I ⊗min B ) ≤ (u ⊗ Id)(t)(A⊗max B )/(I ⊗max B ) = (u ⊗ Id)(t)(A/I )⊗max B , therefore using (iv) we obtain (u ⊗ Id)(t)(A⊗min B )/(I ⊗min B ) ≤ ctmin . By (ii) ⇔ (iii) from Proposition 7.48 u is locally c-liftable. In other words (v) holds.

348

Local embeddability into C and nonseparability of (OSn,dcb )

Assume (v). Assume B(H) = C ∗ (F)/I. Then by (v) the inclusion X →  ⊂ C ∗ (F) such is locally c-liftable. Clearly this implies there is E  ≤ c. We may as well assume E  ⊂ C ∗ (F∞ ) (see Lemma 3.8). that dcb (E, E) Thus we obtain (i).

C ∗ (F)/I

Applying the preceding theorem with X = E, we obtain: Corollary 20.6 Let E ⊂ B(H) be an n-dimensional operator space. Then   uB(H)⊗max B(2 ) u ∈ E ⊗ B( ) . df (E) = sup 2 u min

By Theorem 10.7 this implies: Corollary 20.7 For any exact (in particular any nuclear) C ∗ -algebra A we have df (E) ≤ dSA (E)

(20.4)

for any finite-dimensional operator space $E$.

Remark 20.8 (Completely isometric embeddings in $C$) If a separable operator space $X$ is such that $d_f(E) = 1$ for every finite-dimensional subspace $E \subset X$, and if in addition $X$ admits a net of completely contractive finite rank maps tending pointwise to the identity (in particular if $X$ itself is finite dimensional), then $X$ embeds completely isometrically into $C$. Indeed, assume $X \subset B(H)$, and also $B(H) = C^*(\mathbb{F})/I$. Then by (v) in Theorem 20.5, the inclusion $X \to C^*(\mathbb{F})/I$ is locally 1-liftable. By Corollary 9.49 it is 1-liftable, and hence $X$ embeds completely isometrically into $C^*(\mathbb{F})$, or equivalently (since $X$ is separable, see Remark 9.14) into $C$. In particular, the operator Hilbert space $OH$ from [205] embeds completely isometrically into $C$. Unfortunately, however, we cannot describe the embedding more explicitly.

Remark 20.9 Let $A$ be a nuclear $C^*$-algebra. By the preceding remark the condition $d_f(A) = 1$ implies that $A$ is completely isometric to a subspace of $C$. Note however that $A$ need not embed as a $C^*$-algebra into $C$. For instance, let $A$ be the Cuntz algebra; being nuclear, it is a fortiori exact, so that $d_f(A) = d_{S_K}(A) = 1$. However, $A$ does not embed into $C$: indeed, $C$ embeds into a direct sum of matrix algebras (see Theorem 9.18), hence left invertible elements in it are right invertible, and the latter property obviously fails in the Cuntz algebra. Nevertheless, by the Choi–Effros lifting Theorem 9.53 there is a unital completely positive (and completely contractive) factorization of the identity of the Cuntz algebra (or any separable nuclear $C^*$-algebra) through $C$.

Let us record here an obvious consequence of Corollary 20.6:

Corollary 20.10 Let $H = \ell_2$. For any $n \ge 1$ we have
$$\sup\Big\{ \frac{\|u\|_{\max}}{\|u\|_{\min}} \ \Big|\ u \in B(H) \otimes B(H),\ \mathrm{rk}(u) \le n \Big\} = \sup\{ d_f(E) \mid \dim(E) \le n \}. \tag{20.5}$$

Theorem 20.5 naturally leads us to introduce a new tensor product $E_1 \otimes_M E_2$, both for operator spaces and $C^*$-algebras, as follows.

Definition 20.11 Let $E_1 \subset B(H_1)$, $E_2 \subset B(H_2)$ be arbitrary operator spaces. We will denote by $\|\ \|_M$ the norm induced on $E_1 \otimes E_2$ by $B(H_1) \otimes_{\max} B(H_2)$, and by $E_1 \otimes_M E_2$ its completion with respect to this norm. Clearly $E_1 \otimes_M E_2$ can be viewed as an operator space embedded into $B(H_1) \otimes_{\max} B(H_2)$. It can be checked easily, using the extension property of c.b. maps into $B(H)$ (see Theorem 1.18), that $\|\ \|_M$ and $E_1 \otimes_M E_2$ do not depend on the particular choices of complete embeddings $E_1 \subset B(H_1)$, $E_2 \subset B(H_2)$. Indeed, this is an immediate consequence of the following lemma.

Lemma 20.12 Let $E_1 \subset B(H_1)$, $E_2 \subset B(H_2)$, $F_1 \subset B(K_1)$, $F_2 \subset B(K_2)$ be operator spaces.
(i) Consider c.b. maps $u_1 : E_1 \to F_1$ and $u_2 : E_2 \to F_2$. Then $u_1 \otimes u_2$ defines a c.b. map from $E_1 \otimes_M E_2$ to $F_1 \otimes_M F_2$ with $\|u_1 \otimes u_2\|_{CB(E_1 \otimes_M E_2,\, F_1 \otimes_M F_2)} \le \|u_1\|_{cb}\,\|u_2\|_{cb}$.
(ii) If $u_1$ and $u_2$ are complete isometries, then $u_1 \otimes u_2 : E_1 \otimes_M E_2 \to F_1 \otimes_M F_2$ also is a complete isometry.

Proof By Theorem 1.18 we may assume that each $u_j$ admits an extension $\widetilde{u}_j \in CB(B(H_j), B(K_j))$ with the same cb-norm. By Proposition 6.7, $\|\widetilde{u}_j\|_{cb} = \|\widetilde{u}_j\|_{dec}$; therefore (i) follows from (6.15) and (6.6). Applying (i) to the inverse mappings, we obtain (ii).

Remark When $E_1, E_2$ are $C^*$-algebras, $E_1 \otimes_M E_2$ can be identified with a $C^*$-subalgebra of $B(H_1) \otimes_{\max} B(H_2)$, so that this tensor product $\otimes_M$ makes sense in both categories, operator spaces and $C^*$-algebras.

The next result analyzes more closely the significance of $\|u\|_M = 1$ for $u \in E \otimes F$. It turns out to be closely connected to the factorizations of the associated linear operator $U : F^* \to E$ through a subspace of $C^*(\mathbb{F}_\infty)$.

Proposition 20.13 Let $E \subset B(H)$ and $F \subset B(K)$ be operator spaces, let $t \in E \otimes F$ and let $T : F^* \to E$ be the associated finite rank linear operator.

Consider a finite-dimensional subspace $S \subset C^*(\mathbb{F}_\infty)$ and a factorization of $T$ of the form $T = ba$ with bounded linear maps $a : F^* \to S$ and $b : S \to E$, where $a : F^* \to S$ is weak* continuous. Then
$$\|t\|_M = \inf\{ \|a\|_{cb}\,\|b\|_{cb} \} \tag{20.6}$$

where the infimum, which is actually attained, runs over all such factorizations of $T$.

Proof It clearly suffices (recalling Lemma 2.16) to prove (20.6) in the case when $E$ and $F$ are both finite dimensional, so we do assume that. Then, since both sides of (20.6) are finite, we may assume by homogeneity that $\|t\|_M = 1$. Assume first that $T$ is factorized for some $S$ as previously. We claim that $\|a\|_{cb}\|b\|_{cb} \ge 1$. Indeed, by Kirchberg's Theorem 9.6, the min and max norms are equal on $C^*(\mathbb{F}_\infty) \otimes B(K)$. Hence, by (ii) in Lemma 20.12, we have isometrically $S \otimes_{\min} F = S \otimes_M F$, so that if $\widehat{a}$ is the element of $S \otimes_{\min} F$ associated to $a$, we have $\|\widehat{a}\|_M = \|a\|_{cb}$ and $t = (b \otimes \mathrm{Id}_F)(\widehat{a})$. Therefore, by (i) in Lemma 20.12 we have $1 = \|t\|_M \le \|b\|_{cb}\,\|\widehat{a}\|_M \le \|a\|_{cb}\|b\|_{cb}$, which proves the claim.
We will now show that equality holds. Let $\mathbb{F}$ be a large enough free group so that $B(H)$ is a quotient of $C^*(\mathbb{F})$ (see Proposition 3.39) and let $q : C^*(\mathbb{F}) \to B(H)$ be the quotient $*$-homomorphism with kernel $I$. By the exactness of the maximal tensor product (see Proposition 7.15), if we view $t$ as sitting in $B(H) \otimes B(K)$, we have
$$1 = \|t\|_{B(H)\otimes_{\max} B(K)} = \|t\|_{(C^*(\mathbb{F})/I)\otimes_{\max} B(K)} = \|t\|_{(C^*(\mathbb{F})\otimes_{\max} B(K))/(I\otimes_{\max} B(K))} \ge \|t\|_{(C^*(\mathbb{F})\otimes_{\min} B(K))/(I\otimes_{\min} B(K))}.$$
By Lemma 7.43, since $t \in B(H) \otimes F$, we find $1 \ge \|t\|_{(C^*(\mathbb{F})\otimes_{\min} F)/(I\otimes_{\min} F)}$, and by Lemma 7.44 the tensor $t \in B(H) \otimes F$ admits a lifting $\widehat{t}$ in $C^*(\mathbb{F}) \otimes F$ with $\|\widehat{t}\|_{\min} \le 1$. Let $S \subset C^*(\mathbb{F})$ be a finite-dimensional subspace such that $\widehat{t} \in S \otimes F$. Note that since $S$ is separable, by Remark 3.6 there is a subgroup $G_1 \subset \mathbb{F}$ isomorphic to $\mathbb{F}_\infty$ such that $S \subset C^*(G_1) \simeq C^*(\mathbb{F}_\infty)$. Let $a : F^* \to S$ be the linear map associated to $\widehat{t}$, and let $b$ be the restriction of $q$ to $S$. Since $\widehat{t}$ lifts $t$, we have $T = ba$, $\|b\|_{cb} \le \|q\|_{cb} \le 1$ and $\|a\|_{cb} = \|\widehat{t}\|_{\min} \le 1$. Thus we obtain $\|a\|_{cb}\|b\|_{cb} = 1$, which proves (20.6) and at the same time that the infimum is attained.

Remark 20.14 The preceding proof can be shortened using Proposition 9.59 applied to the operator $u : F^* \to B(H)$ that is the same as $T$ viewed as acting into $B(H)$. We just need to observe that

$$\|\mathrm{Id}_B \otimes u : B \otimes_{\min} F^* \to B \otimes_{\max} B(H)\| = \|t\|_M. \tag{20.7}$$

Indeed, for any $t' \in B_{F^*\otimes_{\min} B}$ with associated linear operator $T' \in B_{CB(F,B)}$ we have $(\mathrm{Id}_{B(H)} \otimes T')(t) = (u \otimes \mathrm{Id}_B)(t')$ and hence
$$\sup_{t' \in B_{F^*\otimes_{\min} B}} \|(u \otimes \mathrm{Id}_B)(t')\|_{B(H)\otimes_{\max} B} = \sup_{T' \in B_{CB(F,B)}} \|(\mathrm{Id}_{B(H)} \otimes T')(t)\|_{B(H)\otimes_{\max} B} = \|t\|_M,$$

and since the left-hand side is equal to the one in (20.7) up to a transposition, we obtain (20.7).

Applying the last result to $T = \mathrm{Id}_E$, we obtain:

Corollary 20.15 Let $E$ be a finite-dimensional operator space. Let $I_E \in E \otimes E^*$ be the tensor associated to the identity on $E$. Then $\|I_E\|_M = d_f(E)$. In particular we have
$$d_f(E) = d_f(E^*). \tag{20.8}$$
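Here is one brief way to see the last equality, using only the symmetry of the maximal tensor product: the flip $x \otimes y \mapsto y \otimes x$ extends to an isometric isomorphism $B(H) \otimes_{\max} B(K) \to B(K) \otimes_{\max} B(H)$ and takes $I_E \in E \otimes E^*$ to $I_{E^*} \in E^* \otimes E$, so that
$$d_f(E) = \|I_E\|_M = \|I_{E^*}\|_M = d_f(E^*),$$
the middle quantity being independent of the chosen complete embeddings $E \subset B(H)$, $E^* \subset B(K)$ by Lemma 20.12.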

Corollary 20.16 For any finite-dimensional operator space $E$, there is a subspace $\widehat{E} \subset C = C^*(\mathbb{F}_\infty)$ and an isomorphism $u : E \to \widehat{E}$ such that $\|u\|_{cb}\|u^{-1}\|_{cb} = d_f(E)$. In particular, $E$ satisfies $d_f(E) = 1$ if and only if $E$ is completely isometric to a subspace of $C$.

20.3 Nonseparability of the metric space $OS_n$ of $n$-dimensional operator spaces

Using the number $C(n)$ it was proved in [141] that $(OS_n, \delta_{cb})$ is nonseparable for any $n \ge 3$ (the case $n = 2$ remains open). See [208, ch. 21] for a detailed proof. More precisely, there is a continuous family $(E_t)_{t\in[0,1]}$ in $OS_n$ and a constant $c_n > 1$ such that $d_{cb}(E_s, E_t) \ge c_n$ for any $s \ne t \in [0,1]$. When $n$ is large, the method in [141] based on $C(n)$ gives this with $c_n \approx \sqrt{n}$. The variant $C_u(n)$ introduced next will lead us to the order of growth $c_n \approx n$ when $n \to \infty$, which is the optimal one by (20.1).
We denote by $C_u(n)$ the infimum of the numbers $C > 0$ for which there is a sequence of sizes $(N_m)_{m\ge 1}$ and a sequence $u(m) = (u_j(m))_{1\le j\le n}$ of $n$-tuples in $U_{N_m}^n$ such that for any unitary matrix $a \in U_n$ we have
$$\sup_{m \ne m'} \Big\| \sum_{i,j=1}^{n} a_{ij}\, u_i(m') \otimes u_j(m) \Big\| \le C. \tag{20.9}$$

Clearly $C(n) \le C_u(n)$ and $C_u(n) \le n$ by (2.3).

Theorem 20.17 For any $n \ge 5$ there is a continuous family $(E_t)_{t\in[0,1]}$ in $OS_n$ such that
$$\forall \varepsilon > 0\ \ \forall s \ne t \qquad d_{cb}(E_s, E_t) \ge (n/C_u(n))^2 - \varepsilon \ge n/4 - \varepsilon. \tag{20.10}$$

Corollary 20.18 The metric space $(OS_n, \delta_{cb})$ is not separable for any $n \ge 5$.

To prove Theorem 20.17 we will need several lemmas.

Lemma 20.19 Let $U_1^{(N)}, \ldots, U_n^{(N)}$ be a sequence of independent random unitary matrices uniformly distributed over $U_N$ as in Theorem 18.16. Then for any $k$ and any $x_1, \ldots, x_n \in U_k$ we have almost surely
$$\limsup_{N\to\infty}\ \sup_{a \in U_n} \Big\| \sum_{i,j=1}^{n} a_{ij}\, x_i \otimes U_j^{(N)} \Big\|_{\min} \le 2\sqrt{n}. \tag{20.11}$$

Proof Let us first observe that for any $a, b \in U_n$ and for any $k, k'$ and any $x_i \in B_{M_k}$, $y_j \in B_{M_{k'}}$ we have
$$\Big\| \sum_{i,j=1}^{n} (a_{ij} - b_{ij})\, x_i \otimes y_j \Big\|_{\min} \le \sum_{i,j=1}^{n} |a_{ij} - b_{ij}|. \tag{20.12}$$

By compactness there is a finite $\varepsilon$-net $\mathcal{N}_\varepsilon \subset U_n$ for the distance appearing on the right-hand side of (20.12). By the triangle inequality it follows that for any $k, k'$ and any $x_i \in B_{M_k}$, $y_j \in B_{M_{k'}}$
$$\sup_{a \in U_n} \Big\| \sum_{i,j=1}^{n} a_{ij}\, x_i \otimes y_j \Big\|_{\min} \le \sup_{a \in \mathcal{N}_\varepsilon} \Big\| \sum_{i,j=1}^{n} a_{ij}\, x_i \otimes y_j \Big\|_{\min} + \varepsilon. \tag{20.13}$$

By (18.15) and Corollary 3.38, for each fixed $a \in \mathcal{N}_\varepsilon$ we have a.s.
$$\limsup_{N\to\infty} \Big\| \sum_{i,j=1}^{n} a_{ij}\, x_i \otimes U_j^{(N)} \Big\|_{\min} \le 2\sqrt{n},$$
and hence also (since $\mathcal{N}_\varepsilon$ is finite)
$$\limsup_{N\to\infty}\ \sup_{a \in \mathcal{N}_\varepsilon} \Big\| \sum_{i,j=1}^{n} a_{ij}\, x_i \otimes U_j^{(N)} \Big\|_{\min} \le 2\sqrt{n}.$$

By (20.13) we obtain (20.11) since $\varepsilon > 0$ is arbitrary.

Lemma 20.20 For any $n \ge 5$ the obvious bound $C_u(n) \le n$ can be improved to
$$C_u(n) \le 2\sqrt{n}.$$

Sketch Since the argument is very similar to the one we gave in §18.2 we will be brief. Fix $\varepsilon > 0$. Obviously it suffices to construct a sequence of $n$-tuples $\{(u_j(m))_{1\le j\le n} \mid m \ge 1\}$ of unitary matrices (the $m$th one being of size $N_m \times N_m$) such that, for any integer $p \ge 1$, we have
$$\sup_{a \in U_n}\ \sup_{1 \le m \ne m' \le p} \Big\| \sum a_{ij}\, u_i(m) \otimes u_j(m') \Big\|_{\min} < 2\sqrt{n} + \varepsilon. \tag{20.14}$$

We will construct this sequence and the sizes $N_m$ by induction on $p$. Assume that we already know the result up to $p$: that is, we already have a family $\{(u_j(m))_{1\le j\le n} \mid 1 \le m \le p\}$ formed of $p$ $n$-tuples satisfying (20.14). We need to produce an additional $n$-tuple $(u_j(p+1))_{1\le j\le n}$ of unitary matrices (a priori of some larger size $N_{p+1} \times N_{p+1}$) such that (20.14) still holds for the enlarged family $\{(u_j(m))_{1\le j\le n} \mid 1 \le m \le p+1\}$ formed of one more $n$-tuple. By (20.11) such a choice is possible by simply choosing $N_{p+1}$ large enough.

The next lemma is very useful to compute $d_{cb}(E,F)$. See [208, ch. 10] for illustrations of this. It is proved by a rather simple averaging argument.

Lemma 20.21 Let $E, F$ be $n$-dimensional operator spaces. Let $(x_j)$ (resp. $(y_j)$) be a linear basis of $E$ (resp. $F$). For any $n \times n$ matrix $a$ we denote by $\pi_e(a) : E \to E$ (resp. $\pi_f(a) : F \to F$) the linear map associated as usual to $a$, that is, defined by $\pi_e(a)(x_j) = \sum_i a_{ij} x_i$ (resp. $\pi_f(a)(y_j) = \sum_i a_{ij} y_i$). Let $u_0 : E \to F$ be the linear map defined by $u_0(x_j) = y_j$ for all $1 \le j \le n$. We assume that there is a constant $c > 0$ such that for any $a, b \in U_n$ and any $u \in CB(E,F)$ we have $\|\pi_f(b)\, u\, \pi_e(a)\|_{cb} \le c\|u\|_{cb}$. Then
$$c^{-2}\,\|u_0\|_{cb}\|u_0^{-1}\|_{cb} \le d_{cb}(E,F) \le \|u_0\|_{cb}\|u_0^{-1}\|_{cb}. \tag{20.15}$$

Proof It suffices to prove the first inequality. Let $u : E \to F$ be an isomorphism and let $x \in M_n$ be the matrix representing $u$ with respect to the given bases. By the polar decomposition and classical linear algebra we can write $x = a_0 b_0 D b_0^{-1}$ where $D$ is a diagonal matrix with positive coefficients and $a_0, b_0 \in U_n$. By abuse of notation we view $D$ as a linear mapping from $E$ to $F$. Then $u = \pi_f(a_0 b_0)\, D\, \pi_e(b_0^{-1})$. Let
$$v = \int \pi_f(a a_0^{-1})\, u\, \pi_e(a^{-1})\, dm(a) \tag{20.16}$$

where $m$ is the normalized Haar measure on $U_n$. By translation invariance of the integral we have $v = \int \pi_f(a)\, D\, \pi_e(a)^{-1}\, dm(a)$. In other words $v : E \to F$ is

the linear map associated to the matrix $\int a D a^{-1}\, dm(a) = n^{-1}\mathrm{tr}(D)\, I$, which means that $v = n^{-1}\mathrm{tr}(D)\, u_0$. From (20.16) and Jensen's inequality we get $\|v\|_{cb} \le c\|u\|_{cb}$ and hence, since $\mathrm{tr}\,D = \mathrm{tr}|x|$, we find $n^{-1}(\mathrm{tr}|x|)\|u_0\|_{cb} \le c\|u\|_{cb}$. But obviously $x^{-1}$ is the representing matrix for $u^{-1}$, and if we apply the same argument to $u^{-1}$ we obtain
$$n^{-1}(\mathrm{tr}|x^{-1}|)\|u_0^{-1}\|_{cb} \le c\|u^{-1}\|_{cb},$$
and taking the product of the last two inequalities we find
$$n^{-2}(\mathrm{tr}|x|\,\mathrm{tr}|x^{-1}|)\,\|u_0\|_{cb}\|u_0^{-1}\|_{cb} \le c^2\,\|u\|_{cb}\|u^{-1}\|_{cb}. \tag{20.17}$$

Now a simple verification shows that, since $x^{-1} = b_0 D^{-1} b_0^{-1} a_0^{-1}$, we have $|x^{-1}| = (x^{-1*}x^{-1})^{1/2} = a_0 b_0 D^{-1}(a_0 b_0)^{-1}$ and hence $\mathrm{tr}|x^{-1}| = \mathrm{tr}(D^{-1})$. Then, if we denote by $D_1, \ldots, D_n$ the diagonal coefficients of $D$, we have by Cauchy–Schwarz
$$n = \sum D_k^{1/2} D_k^{-1/2} \le (\mathrm{tr}\,D\ \mathrm{tr}(D^{-1}))^{1/2}$$
and hence $n^2 \le \mathrm{tr}\,D\ \mathrm{tr}(D^{-1})$. Thus (20.17) implies (20.15).

Remark 20.22 Given a $C^*$-algebra $B \subset B(H)$ and an $n$-dimensional operator space $E \subset B$ with a specific basis $(x_j)_{1\le j\le n}$, we wish to define a unitarily invariant operator space, denoted by $\widetilde{E}$, still $n$-dimensional, with a basis $(\widetilde{x}_j)_{1\le j\le n}$, such that for any $k$ and any $y_j \in M_k$ we have
$$\Big\| \sum_i y_i \otimes \widetilde{x}_i \Big\|_{\min} = \sup_{a \in U_n} \Big\| \sum_{ij} a_{ji}\, y_i \otimes x_j \Big\|_{\min}. \tag{20.18}$$
To do this we consider the $C^*$-algebra $\mathcal{C} = C(U_n; B)$ of all continuous functions on $U_n$ with values in $B$ and we define $\widetilde{x}_i \in \mathcal{C}$ by
$$\forall\, 1 \le i \le n \qquad \widetilde{x}_i(a) = \sum_j a_{ji}\, x_j.$$
Then (20.18) clearly holds. The space $\widetilde{E}$ is unitarily invariant in the following sense: let $\widetilde{\pi}_e(a) : \widetilde{E} \to \widetilde{E}$ be the linear map defined for any $a \in U_n$ by $\widetilde{\pi}_e(a)(\widetilde{x}_j) = \sum_i a_{ij}\, \widetilde{x}_i$. Then by translation invariance on $U_n$ one checks using (20.18) that $\widetilde{\pi}_e(a)$ is a complete isometry (the computation is spelled out right after (20.19) below).
Now let $F$ be another $n$-dimensional operator space with a specific basis, and let $\widetilde{F}$ be the similar associated space. Then the linear map $\widetilde{\pi}_f(a) : \widetilde{F} \to \widetilde{F}$ analogous to $\widetilde{\pi}_e(a)$ is a complete isometry for any $a \in U_n$. Let $u_0 : \widetilde{E} \to \widetilde{F}$ be the linear map associated to the identity matrix. Then by Lemma 20.21 (applied with $c = 1$) we have
$$d_{cb}(\widetilde{E}, \widetilde{F}) = \|u_0\|_{cb}\|u_0^{-1}\|_{cb}. \tag{20.19}$$
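To spell out the translation-invariance computation behind the complete isometry claim in Remark 20.22 (with the notation used there, and using only (20.18)): for any $a \in U_n$, any $k$ and any $y_j \in M_k$,
$$\Big\| \sum_j y_j \otimes \widetilde{\pi}_e(a)(\widetilde{x}_j) \Big\|_{\min}
= \sup_{b \in U_n} \Big\| \sum_{j,l} (ba)_{lj}\, y_j \otimes x_l \Big\|_{\min}
= \sup_{c \in U_n} \Big\| \sum_{j,l} c_{lj}\, y_j \otimes x_l \Big\|_{\min}
= \Big\| \sum_j y_j \otimes \widetilde{x}_j \Big\|_{\min},$$
where the outer equalities use (20.18) and the middle one the substitution $c = ba$, which defines a bijection of $U_n$. Since this holds for every $k$, $\widetilde{\pi}_e(a)$ is indeed a complete isometry, and the same computation applies to $\widetilde{\pi}_f(a)$.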

Proof of Theorem 20.17 Let $\{[u_1(m), \ldots, u_n(m)],\ m \in \mathbb{N}\}$ be a sequence of $n$-tuples of unitary matrices satisfying (20.9) for some constant $C$ (recall $u_1(m), \ldots, u_n(m)$ are of size $N_m \times N_m$). For any subset $\omega \subset \mathbb{N}$, let again
$$B_\omega = \Big( \bigoplus_{m \in \omega} M_{N_m} \Big)_\infty.$$
Let $\omega(1) \subset \mathbb{N}$ and $\omega(2) \subset \mathbb{N}$ be any pair of infinite subsets, and let
$$u_i^1 = \bigoplus_{m \in \omega(1)} u_i(m) \in B_{\omega(1)}, \qquad u_i^2 = \bigoplus_{m' \in \omega(2)} u_i(m') \in B_{\omega(2)}.$$

Let $E_k = \mathrm{span}[u_j^k \mid 1 \le j \le n]$ where $k = 1$ or $k = 2$. Let $\widetilde{E}_1$ and $\widetilde{E}_2$ be the unitarily invariant spaces as defined in Remark 20.22. Let $u_0 : \widetilde{E}_1 \to \widetilde{E}_2$ be the linear map (associated to the identity matrix) defined by $u_0(\widetilde{u^1_i}) = \widetilde{u^2_i}$ ($1 \le i \le n$). By Remark 20.22 we have $d_{cb}(\widetilde{E}_1, \widetilde{E}_2) = \|u_0\|_{cb}\|u_0^{-1}\|_{cb}$. We claim that if $\omega(2) \not\subset \omega(1)$ then
$$\|u_0\|_{cb} \ge n/C. \tag{20.20}$$

Indeed, let $m \in \omega(2) \setminus \omega(1)$. On one hand, by (20.9) (applied with the transpose of $a$) we have
$$\Big\| \sum_i u_i^2(m) \otimes \widetilde{u^1_i} \Big\|_{\min} \le C.$$
And on the other hand, by (18.3),
$$\Big\| \sum_i u_i^2(m) \otimes \widetilde{u^2_i} \Big\|_{\min} \ge \sup_{a \in U_n} \Big\| \sum_{i,j} a_{ji}\, u_i^2(m) \otimes u_j^2(m) \Big\|_{\min} \ge \Big\| \sum_j u_j^2(m) \otimes u_j^2(m) \Big\|_{\min} = n.$$

By the definition of the cb-norm (here we view $\sum u_j^2(m) \otimes \widetilde{u^1_j}$ as an element of $M_{N_m}(\widetilde{E}_1)$), this implies our claim (20.20). Now, if we assume moreover that $\omega(1) \not\subset \omega(2)$, then we find $\|u_0^{-1}\|_{cb} \ge n/C$, and hence by (20.19) we have $d_{cb}(\widetilde{E}_1, \widetilde{E}_2) \ge (n/C)^2$; since we can take for $C$ any value $> C_u(n)$, for any $\varepsilon > 0$ fixed in advance we obtain
$$d_{cb}(\widetilde{E}_1, \widetilde{E}_2) \ge (n/C_u(n))^2 - \varepsilon.$$
By Lemma 20.20 we have $C_u(n) \le 2\sqrt{n}$ and hence $(n/C_u(n))^2 \ge n/4$. It remains to observe (exercise) that there is a set $T$ of subsets of $\mathbb{N}$ with the cardinality of the continuum such that for any $s \ne t \in T$ we have both $s \not\subset t$

and $t \not\subset s$. Hint: consider the set $V$ of vertices of an infinite binary tree; the set $T$ of subsets of $V$ forming an infinite branch from the root has the required properties, and $V$ has the same cardinality as $\mathbb{N}$. Actually the set of numbers in $[0,1]$ that do not have a finite dyadic expansion is in one-to-one correspondence with a subset of $T$. Thus we may as well take $[0,1]$ as our index set in (20.10).

By Corollary 20.3 the following is an immediate consequence of Theorem 20.17.

Corollary 20.23 There does not exist a separable $C^*$-algebra (or operator space) $X$ such that $d_{S_X}(E) = 1$ for every finite-dimensional operator space $E$. More precisely, for any $n \ge 5$ we have
$$\sup\{ d_{S_X}(E) \mid \dim(E) \le n \} \ge \sqrt{n}/2.$$

Proof Assume $\sup\{ d_{S_X}(E) \mid \dim(E) \le n \} < c$. By definition, for any $t \in [0,1]$ and $\varepsilon > 0$ there is a subspace $F_t \subset X$ such that $d_{cb}(E_t, F_t) < c$. Since $OS_n(X)$ is $d_{cb}$-separable, we can restrict $F_t$ to belong to a countable dense subset of $OS_n(X)$. But then (pigeonhole principle) the assignment $t \mapsto F_t$ cannot be injective, so there must be $s \ne t$ such that $F_s = F_t$. The latter implies $d_{cb}(E_s, E_t) \le d_{cb}(E_s, F_s)\, d_{cb}(F_t, E_t)$ and hence $n/4 - \varepsilon < c^2$. This gives the announced bound.

In particular, taking $X = C$:

Corollary 20.24 For any $n \ge 5$ we have
$$\sup\{ d_f(E) \mid \dim(E) \le n \} \ge \sqrt{n}/2.$$

Remark 20.25 By the identity in (20.5) the last corollary can be restated as: for any $n \ge 5$ we have
$$\sup\Big\{ \frac{\|u\|_{\max}}{\|u\|_{\min}} \ \Big|\ u \in B(H) \otimes B(H),\ \mathrm{rk}(u) \le n \Big\} \ge \sqrt{n}/2,$$
which is slightly less precise than Theorem 18.4.

Remark 20.26 These estimates are asymptotically best possible. Indeed, if $A = K(\ell_2)$ (the algebra of compact operators on $\ell_2$), for any $E \in OS_n$ we have $d_{S_A}(E) \le \sqrt{n}$ (see [208, p. 133]). By Corollary 20.7 this implies $d_f(E) \le \sqrt{n}$, which shows that the numbers appearing in either Corollary 20.24 or Remark 20.25 are $\le \sqrt{n}$.

Remark 20.27 In sharp contrast with the nonseparability of $OS_n$, Ozawa proved in [184] that the subset formed of all the $n$-dimensional subspaces of a so-called noncommutative $L_1$-space (i.e. a von Neumann algebra predual) is separable and even compact for the metric $\delta_{cb}$.



20.4 Notes and remarks

§20.1 collects basic facts on operator spaces easily adapted from the corresponding well-known statements for Banach spaces. The main source for §20.2 is [141], but the results were somewhat anticipated by Kirchberg in his questions at the end of [155] (see conjecture (A7) in [155, p. 483]). Remark 20.8, which is based on Arveson's ideas on operator systems, appears in [208, p. 352] (see also [208, ex. 2.4.2]) but is due to Ozawa. In [124] Harcharras studies the stability properties of the class of spaces $E$ such that $d_f(E) = 1$.
§20.3 is based on [141] and [180]. In the former we obtained a continuous family separated by $c\sqrt{n}$ for some $c > 0$ independent of $n$. The optimal result with $c\sqrt{n}$ replaced by $n/4$ stated in Theorem 20.17 is due to Oikhberg and Ricard in the latter paper [180]. For Corollaries 20.23, 20.24, and Remark 20.25, the lower bounds previously obtained in [141] (by $n(2\sqrt{n} - 1)^{-1}$) are slightly better than the ones we gave in the text (by $\sqrt{n}/2$). See [208, ch. 21] for more details. Results such as Lemma 20.21 are due to Zhang [264]. They are presented in detail in [208, p. 217]. It is natural to try to estimate the metric entropy of the metric space $OS_n$ when equipped with natural metrics for which it becomes compact. This question is considered in [213] both for $B_n$ (the Banach space case) and $OS_n$.

21 WEP as an extension property

We will first show (see Theorem 21.3) that the WEP of a $C^*$-algebra is characterized by a certain form of extension property for maps into it, but defined on a special class of finite-dimensional subspaces of $C^*$-algebras. It is interesting that such a weak form of extension property already implies the WEP, because, as the next result shows (see Theorem 21.4), the WEP implies a much stronger extension property.

Remark 21.1 Fix an integer $N \ge 1$. Let us recall here the content of Remark 3.14 when $|I| = N$. We will view the Banach space $\ell_1^N$ as an operator space, as follows: denoting $U_j = U_{\mathbb{F}_N}(g_j)$, we let $E_1^N = \mathrm{span}[U_1, \ldots, U_N] \subset C^*(\mathbb{F}_N)$. Then $E_1^N$ is isometric to $\ell_1^N$ as a Banach space. We use the notation $E_1^N$ to distinguish the Banach space $\ell_1^N$ from the operator space we just defined. When $A = M_n$, the identity (3.7) describes the norm in $M_n(E_1^N)$ for arbitrary $n \ge 1$. From (3.6) it is immediate that the space $CB(E_1^N, M_n)$ can be identified with $\ell_\infty^N(M_n)$, and $\|u\|_{cb} = \|u\| = \sup_j \|u(U_j)\|$ for any $u$ from $E_1^N$ to any operator space. This means (see (2.14)) that $(E_1^N)^* \simeq \ell_\infty^N$ completely isometrically. Moreover, the last assertion in Lemma 3.10 shows that $E_1^N$ is also completely isometric to $\mathrm{span}\{1, U_1, \ldots, U_{N-1}\} \subset C^*(\mathbb{F}_{N-1})$. The latter is thus an alternative way to define the same operator space structure.

21.1 WEP as a local extension property We first consider extension properties for maps defined on finite-dimensional subspaces. Theorem 21.2 Let W ⊂ B be an inclusion of C ∗ -algebras. The following are equivalent: (i) There is a generalized weak expectation T : B → W ∗∗ with T cb ≤ 1.




(ii) For any N ≥ 1, ε > 0 and any subspace E ⊂ E1N , every u : E → W that is the restriction of a complete contraction v : E1N → B admits an ucb ≤ 1 + ε. extension  u : E1N → W with  Proof Assume (i). Let E ⊂ E1N , v : E1N → B with vcb ≤ 1 such that u = Tv : E1N → W ∗∗ . Note v(E) ⊂ W and let u = v|E ∈ CB(E,W ). Let  N N B(E1 ,A) = ∞ (A) for any Banach space A, and the unit ball of N ∞ (A) ∗∗ ). Therefore there is a net of maps (A is clearly weak*-dense in that of N ∞ u. In particular for any x ∈ E vi : E1N → W with vi  ≤ 1 tending weak* to  the net vi (x) converges weak* to Tv(x) = u(x), but since they both belong to W, weak convergence holds. By Mazur’s Theorem A.9, we can form convex combinations of the vi |E ’s that converge in norm to u (see Remark A.10). The corresponding convex combinations of the vi’s give us a modified net vi in the unit ball of B(E1N ,W ) such that vi |E tends in norm to u. Thus for any

  < δ. By δ > 0 there is v  in the unit ball of B(E1N ,W ) such that u − v|E the perturbation Lemma 20.2 if we choose δ > 0 small enough there is an  and w  w −1  < 1 + ε. isomorphism wδ : W → W such that u = wδ v|E δ cb cb δ  Then  u = wδ v satisfies (ii). Assume (ii). Let j : W → B denote the inclusion. We will apply the dual extension criterion formulated in Proposition A.2. With the latter we will extend the mapping iW : W → W ∗∗ (a priori only defined on W ⊂ B) to a contraction on the whole of B. Let v : W ∗∗ → W be a weak* continuous finite rank map such that N∧ (j v) < 1. By (A.3) jv can be rewritten as j v = T2 DT1 N N N with T1 : W ∗∗ → N ∞ , D : ∞ → 1 and T2 : 1 → B as in (A.2) satisfying N (by homogeneity) T1  = T2  = 1 and D < 1. Note that, using N 1  E1 , we have T2  = T2 cb by (3.6) (and similarly for T1 and D). Let then E = DT 1 (W ∗∗ ) ⊂ E1N . Let u = T2|E : E → W . Note ucb ≤ T2 cb = u : E1N → W extending T2 . By the assumed extension property, there is  u with  ucb ≤ (1 + ε)T2 cb = 1 + ε. It follows that the operator uDT1 ) ≤ (1 + ε)T1 DT2  < 1 + ε  uDT1 : W ∗∗ → W satisfies N∧ ( and since  uDT1 = v we obtain

|tr(iW v)| ≤ N∧ (iW v) ≤ N∧ (v) < 1 + ε. Since ε > 0 was arbitrary, this shows by homogeneity that |tr(iW v)| ≤ N∧ (v) ≤ N∧ (j v) for any weak* continuous finite rank v : W ∗∗ → W . Then Proposition A.2 implies the existence of a contraction T as in (i). By Remark 7.31 (essentially Tomiyama’s theorem) T is automatically a complete contraction. In the case of the inclusion W ⊂ B(H ), since B(H ) is injective this gives us as an immediate consequence:



Theorem 21.3 A $C^*$-algebra $W$ has the WEP if and only if for any $N \ge 1$, $\varepsilon > 0$ and any subspace $E \subset E_1^N$, every $u : E \to W$ admits an extension $\widetilde{u} : E_1^N \to W$ with $\|\widetilde{u}\|_{cb} \le (1+\varepsilon)\|u\|_{cb}$.

We now come to the local extension property satisfied by $C^*$-algebras with the WEP.

Theorem 21.4 Let $C$ be a separable $C^*$-algebra with the LLP and let $W$ be another one with the WEP. Then for any finite-dimensional subspace $E \subset C$ and any $\varepsilon > 0$, any $u \in CB(E,W)$ admits an extension $\widetilde{u} \in CB(C,W)$ such that $\|\widetilde{u}\|_{cb} \le (1+\varepsilon)\|u\|_{cb}$; in other words the triangle $E \subset C \xrightarrow{\ \widetilde{u}\ } W$, with $\widetilde{u}_{|E} = u$, commutes.

Moreover, if E ⊂ C is a finite-dimensional operator system (assuming C unital) and u is a unital c.p. map then we can find a unital v ∈ CP(C,W ) such that v|E − u < ε. Remark 21.5 The proof will show the following: Assume that C is an operator space such that df (C) = sup{df (E)|E ⊂ C, dim(E) < ∞} < ∞. Then (assuming W has the WEP) for any E ⊂ C with dim(E) < ∞ and any ε > 0, any u ∈ CB(E,W ) admits an extension  u ∈ CB(C,W ) such that  ucb ≤ (1 + ε)df (C)ucb . Proof We first claim that it suffices to show the following “local” extension property: For any finite-dimensional F ⊂ C such that E ⊂ F ⊂ C and any u ∈ CB(E,W ) there is  u : F → W extending u with  ucb ≤ (1 + ε)ucb . Indeed, assuming this, consider a sequence F0 ⊂ · · · Fn ⊂ Fn+1 ⊂ · · · with F0 = E and ∪Fn = C and εn > 0 such that (1 + εn ) < 1 + ε. Then, by repeated application of the preceding “local” extension property, we obtain the announced result. Thus it suffices to establish the latter property. Let E ⊂ F ⊂ C with dimF < ∞. Since W has the WEP, by Theorem 9.22 we have a factorization π

T

→ B(H ) − → W ∗∗ iW : W − where π is an isometric ∗-homomorphism and T is c.p. with T  = 1. Let u ∈ CB(E,W ). By Theorem 1.18 there is a mapping v ∈ CB(F,B(H )) extending π u, so that v|E = π u with vcb = ucb . We may identify v with



t ∈ F ∗ ⊗B(H ) such that tmin = vcb . Since by (20.8) df (F ∗ ) = df (F ) = 1 there is a completely isometric embedding F ∗ ⊂ C . With respect to the latter embedding, by Kirchberg’s Theorem 9.6 we have tmax = tmin = vcb = ucb , and hence a fortiori by (4.30) (Id ⊗ T )(t)C ⊗max W ∗∗ ≤ ucb . We now invoke §8.4: by (8.12) (applied to C ⊗max W ∗∗ instead of A∗∗ ⊗max B), this implies (Id ⊗ T )(t)(C ⊗max W )∗∗ ≤ ucb and a fortiori (since F ∗ ⊗min W ⊂ C ⊗min W isometrically) (Id ⊗ T )(t)(F ∗ ⊗min W )∗∗ = (Id ⊗ T )(t)(C ⊗min W )∗∗ ≤ ucb .

(21.1)

By homogeneity we may assume ucb = 1. Let Z = F ∗ ⊗min W . Then (21.1) means that there is a net (ti ) in BZ tending to (Id ⊗ T )(t) with respect to σ (Z ∗∗,Z ∗ ). Let vi : F → W be the linear maps in the unit ball of CB(F,W ) associated to ti . Since x ⊗ ξ ∈ Z ∗ for any (x,ξ ) ∈ F × W ∗ , we have vi (x) → Tv(x) with respect to σ (W ∗∗,W ∗ ). Restricting to E we find ∀e ∈ E

vi (e) → Tv(e) = u(e).

And hence, since vi (e) − u(e) ∈ W , we have ∀e ∈ E

vi (e) − u(e) → 0

σ (W,W ∗ ).

Since dim(E) < ∞, this means vi → u in the weak topology of CB(E,W ). Therefore by the classical Mazur Theorem A.9, u is in the norm closure of the convex hull of {vi |i ∈ I }. Note that vi cb = ti min ≤ 1. Thus ∀ε > 0

∃w ∈ conv{vi |i ∈ I } ⊂ BCB(F,W )

such that

w|E − ucb < ε.

By the perturbation Lemma 20.2 there is δ(ε) > 0 and a complete isomorphism  : W → W with cb −1 cb ≤ 1 + δ(ε) such that w|E = u and u = w : F → W satisfies  ucb ≤ limε→0 δ(ε) = 0. Then, the mapping  1 + δ(ε) and  u|E = u. If E is a finite-dimensional operator system and u is unital and c.p., by a suitably modified iteration argument, we can again reduce to finding v ∈ CP(F,W ) such that v|E − u < ε when F ⊃ E is an arbitrary finitedimensional operator system. Let us assume u c.p. and unital. Then (by (1.27)) u : F → W with ucb = 1. The first part of the proof gives us an extension   u ≤ 1 + ε. Of course  u is still unital. By the perturbation Theorem 2.28, when F remains fixed and ε → 0,  u becomes “close” to a c.p. map, so the conclusion follows.



21.2 WEP versus approximate injectivity We already know that c.b. maps into a WEP C ∗ -algebra A can be extended to c.b. maps into the larger algebra A∗∗ (see Corollary 9.24). Our goal in this section is to describe situations where the passage to A∗∗ can be avoided. Definition 21.6 A unital C ∗ -algebra A is said to be approximately injective if for any pair S0 ⊂ S1 of finite-dimensional operator systems, any unital completely positive map u0 : S0 → A can be nearly extended to a completely positive map on S1 , meaning that for any ε > 0 there is u1 ∈ CP(S1,A) such that u1|S0 − u0  < ε. In [77] where this property was introduced, it is proved that a unital A is approximately injective if and only if the preceding property holds for not necessarily unital c.p. maps u0 : S0 → A. A general C ∗ -algebra will be called approximately injective if its unitization is approximately injective. We will see (following [77] and [155, Lemma 2.5]), that any approximately injective C ∗ -algebra has the WEP. But the converse is apparently not true if one believes the equivalence of the conjectures (A2) ⇔ (A4) of [155]. Indeed, the latter claims1 that if WEP ⇒ approximately injective, then WEP ⇒ LLP, but by the results of [141] (presented in the present volume in §18.1), there exists a WEP unital C ∗ -algebra (namely B !) which is not LLP. Nevertheless, it turns out that if we restrict to operator spaces S1 that are subspaces of C , i.e. such that S1 ⊂ C , then the resulting property that we call approximate C -injectivity is equivalent to the WEP. C ∗ -algebra

Definition 21.7 Let C be a unital C ∗ -algebra. A C ∗ -algebra A will be called approximately C-injective if for any pair E0 ⊂ E1 of finite-dimensional operator spaces with E1 ⊂ C, for any u0 ∈ CB(E0,A) and any ε > 0 there is u1 ∈ CB(E1,A) extending u0 , i.e. satisfying u1|E0 = u0 , and such that u1 cb ≤ u0 cb (1 + ε). Proposition 21.8 If A is approximately C-injective in the sense of the preceding Definition 21.7, then for any pair S0 ⊂ S1 ⊂ C of finite-dimensional operator systems, any unital completely positive map u0 : S0 → A can be nearly extended to a completely positive map on S1 in the sense of Definition 21.6, that is for any ε > 0 there is u1 ∈ CP(S1,A) such that u1|S0 − u0  < ε. Proof Indeed, by Definition 21.7, for any δ > 0 there is ϕ ∈ CB(S1,A) extending u0 and such that ϕcb ≤ 1 + δ. Then by part (ii) in Theorem 2.28 1

The author does not understand the proof that (A6) ⇒ (A2) or that (A4) ⇒ (A2) in [155, p. 485], and the related final remark 8.2 in [155, p. 485] does not seem correct.



the announced result holds with ε = 8δ dim(S1 ). Since δ > 0 is arbitrary, this completes the argument. We use a slightly different terminology from [155]: there it is said that A is approximately injective in C when A is approximately C -injective in the previous sense. Our exposition includes several remarks from [124]. Lemma 21.9 If M2 (A) is approximately injective, then A is approximately C-injective for any C. Proof Let E0 ⊂ E1 ⊂ C be finite-dimensional operator spaces. We restrict ourselves, without loss of generality, to the case when C = B(H ) and A is unital. Let u0 ∈ CB(E0,A). We may assume by homogeneity that u0 cb = 1. Let S0 ⊂ S1 ⊂ M2 (C) be the operator systems associated to E0 ⊂ E1 as in Lemma 1.38. Then the mapping V0 : S0 → M2 (A) defined by  V0

λ y∗

  x λ = μ u0 (y ∗ )

u0 (x) μ



is unital and c.p. If M2 (A) is approximately injective, then for any 0 < ε < 1 there is V1 ∈ CP(S1,M2 (A)) such that V1|S0 − V0  < ε. By (1.20) we have V1 cb = V1 (1) and V0 (1) = 1, therefore V1 cb < 1 + ε. Let v : E1 → A be the (1,2) entry of V1 . Clearly vcb ≤ V1 cb < 1 + ε. Since v|E0 − u0  ≤ V1|S0 − V0  we have v|E0 − u0  < ε. Since dim(E0 ) < ∞, the norm and the cb-norm on CB(E0,A) are equivalent (see Remark 1.56), so (using a different ε) we can find a v satisfying as well v|E0 − u0 cb < ε. Note that v − v/(1 + ε)cb ≤ ε. Thus, replacing v by v/(1 + ε) we can find v ∈ CB(E1,A) with vcb ≤ 1 and v|E0 −u0  < 2ε. Thus the linear mapping v → v|E0 takes the unit ball of CB(E1,A) to a dense subset of the unit ball of CB(E0,A). Then a standard iteration argument gives the announced result: the latter mapping is onto and for any ε > 0 there is u1 ∈ CB(E1,A) such that u1|E0 = u0 and u1 cb ≤ 1 + ε. We will now concentrate on the case when C = C . In that case we note that the condition S1 ⊂ C can be rephrased in terms of the quantity df defined by (20.3) (see Corollary 20.16). Using the latter, we may rewrite S1 ⊂ C as df (S1 ) = 1. Recapitulating, we may state: Theorem 21.10 A C ∗ -algebra is approximately C -injective if and only if it has the WEP. Proof By Theorem 21.3 approximate C -injectivity implies the WEP. The converse direction was already proved in Theorem 21.4.



21.3 The (global) lifting property LP Combining several of our earlier statements, we obtain a global lifting theorem for the pair (C,W ) as in Theorem 21.4 when C is separable. Theorem 21.11 (Global lifting from LLP to QWEP) Let C,W be unital C ∗ -algebras and let q : W → W/I be a surjective ∗-homomorphism. If C is separable with LLP and if W has the WEP, then any unital u ∈ CP(C,W/I ) admits a unital lifting v ∈ CP(C,W ) (i.e. we have qv = u) with v = u. Proof By Theorem 9.38, the c.p. map u is locally 1-liftable. By (ii) in Proposition 9.42, for any ε > 0 and any finite-dimensional operator system E ⊂ C there is vE : E → W unital c.p. such that qvE − u|E  ≤ ε. By Theorem 21.4 we may find a unital c.p. map w E : C → W such that E − v  ≤ ε. A fortiori (qwE − u)  ≤ 2ε. Let us denote i = (E,ε) w|E E |E E and let vi = w : C → W . We view the set of i’s as directed in the usual way so that E tends to C and ε to zero when i → ∞. Then qvi (x) − u(x) → 0 for any given x ∈ C when i → ∞. By part (ii) of Theorem 9.46 (applied to vi : C → W ) there is a unital c.p. map lifting u. We will now briefly discuss the (global) lifting property. Definition 21.12 A unital C ∗ -algebra C is said to have the lifting property (LP) if any unital c.p. u : C → A/I , into an arbitrary quotient C ∗ -algebra, admits a unital c.p. lifting  u : C → A with the same norm as u. Remark 21.13 The property makes sense in the nonunital case as well. Just omit “unital” in the preceding definition. We restrict to the unital case for simplicity. The main example is C for which Kirchberg proved the LP in [155]. See [189] or [39, p.376] for a detailed proof. Corollary 21.14 If Kirchberg’s conjecture holds, then LLP ⇒ LP for separable unital C ∗ -algebras. Proof Assume C and A unital. Let C be separable with LLP. Consider a unital c.p. map u : C → A/I . Let q : A → A/I denote the quotient map. Let F be a free group such that there is a surjective ∗-homomorphism Q : C ∗ (F) → A (see Proposition 3.39). Let W = C ∗ (F) and J = ker(qQ). Then A/I = W/J and W has the LLP. Thus if the Kirchberg conjecture is valid, W has the WEP and hence by Theorem 21.11 the map u : C → A/I = W/J admits a unital c.p. lifting v : C → W , but then Qv : C → A is a unital c.p. lifting of u, which proves that C has the LP.



21.4 Notes and remarks Approximate injectivity was introduced by Effros and Haagerup in [77]. Among many results, they show there that it implies the WEP. §21.2 comes from [77] and [155] although we changed slightly the terminology. Theorem 21.4 is essentially [155, Lemma 2.5]. §21.3 comes from [155].

22 Complex interpolation and maximal tensor product

22.1 Complex interpolation The complex interpolation method is a very useful way to produce intermediate Banach spaces, or intermediate norms, between two given ones (see e.g. [22]). Starting with a pair of norms  · 0 ,  · 1 one defines a norm  · θ (0 < θ < 1) that can be thought of as a deformation of a special kind, providing a privileged path from  · 0 to  · 1 , somewhat analogous to a geodesic. Geometrically, in many examples  · θ appears as a sort of “geometric mean” of  · 0 and  · 1 . The classical example is the pair Lp(0),Lp(1) with their usual norms (on a fixed measure space) for which the norm  · θ is the norm in the space Lp(θ) for p(θ ) determined by 1/p(θ ) = (1 − θ )/p(0) + θ/p(1). It is known that this extends to noncommutative Lp -spaces (see e.g. [215]), but our interest here will be in a different (although not totally unrelated) direction. To state the precise definitions, we will need some specific notation. Let S = {z ∈ C | 0 < Re(z) < 1}, ∂0 = {z ∈ C | Re(z) = 0}

and

∂1 = {z ∈ C | Re(z) = 1}.

(22.1)

Note that ∂S = ∂0 ∪ ∂1 . Given a subset ⊂ C and a Banach space B, we denote by Cb ( ;B) the space of bounded continuous functions f : Omega → B equipped with the norm f Cb ( ;B) = supz∈ f (z)B . The following celebrated and classical three line lemma is the crucial tool to develop complex interpolation (the extension from the C-valued to the B-valued case is straightforward). Lemma 22.1 (Three line lemma) Let f : ovlS → B be a bounded continuous function with values in a Banach space B that is analytic on S. Then for any 0 0 such that W (t) ≥ δI for all t. Then there is a unique analytic function F : D → B(H ) such that (i) for all x in H , z → F (z)x is in H 2 (H ) and its boundary values satisfy almost everywhere on T W (t)x,y = F (t)x,F (t)y, (ii) F (0) ≥ 0, (iii) z → F (z)−1 exists and is bounded analytic on D. Corollary 22.25 Consider a von Neumann subalgebra M ⊂ B(H ). Then in the situation of Theorem 22.24, if W is M-valued, F necessarily also is M-valued. Proof Indeed, for any unitary u in the commutant M  , the function z → u∗ F (z)u still satisfies the conclusions of Theorem 22.24, and hence (by uniqueness) we must have F = u∗ Fu, which implies by the bicommutant Theorem A.46 that F (z) ∈ M  = M. Proof of Theorem 22.10 Let θ = 1/p. Recall that we denote L∞ (τ ) = M. By Theorem 22.8 when θ = 1/p we have isometrically (L∞ (τ ),L1 (τ ))θ = Lp (τ ). We will use the spaces (X0,X1 ) defined in (22.7) but for A = M. Clearly for all (xj ) ∈ M n we have by (2.1)     R(xj∗ )L(xj ) = (x1, . . . ,xn )X0 .  B(L∞ (τ ))

Similarly, it is easy to check by transposition that     R(xj∗ )L(xj ) = (x1, . . . ,xn )X1 .  B(L1 (τ ))

Hence, if (xj ) is in the unit ball of (X0,X1 )θ and θ = 1/p, we have by Theorem 22.4 and (22.6)     R(xj∗ )L(xj ) ≤ 1.  B(Lp (τ ))

This is the easy direction. To prove the converse, we assume that     R(xj∗ )L(xj ) ≤ 1.  B(Lp (τ ))

(22.22)

380

Complex interpolation and maximal tensor product

We will proceed by duality. Let B denote the open unit ball in the space (X0∗,X1∗ )θ . Note that X0∗ (resp. X1∗ ) coincides with M∗n equipped with the norm # 1/2 $ , (ξ1, . . . ,ξn )0∗ = τ ξj∗ ξj  # 1/2 $ resp. (ξ1, . . . ,ξn )1∗ = τ . ξj ξj∗ Let B o be the polar of B in the duality between M n and M∗n . By the duality property of interpolation spaces (cf. Remark 22.3) B o coincides with the unit ball of (X0,X1 )θ . Hence to conclude it suffices to show that (22.22) implies (x1, . . . ,xn ) ∈ B o . Equivalently, to complete the proof it suffices to show that,  if (22.22) holds, then for any (ξ1, . . . ,ξn ) in B we have ξj (xj ) ≤ 1. The rest of the proof is devoted to the verification of this. By the definition of semifiniteness there is an increasing net of (finite trace) projections qi tending weak* to the identity for which qi Mqi is a finite von Neumann algebra with unit qi . If we identify again M∗ with L1 (τ ) in the usual way, we have ξ − qi ξ qi M∗ → 0 for any ξ ∈ M∗ . Thus, by perturbation we may assume ξj of the form ξj (x) = τ (bj x) for some bj in qMq where q is a projection in M, such that qMq is finite. In that case we have ξj (xj ) = ξj (qxj q). Note that (22.22) remains true if we replace (xj ) by (qxj q). Therefore, at this point we may as well replace M by the finite von Neumann algebra qMq (with unit q) on L2 (qMq,τ|qMq ) so that we are reduced to the finite case. For simplicity, we assume in the rest of the proof that M is finite with unit I and that ξj lies in M viewed as a subspace of L1 (τ ) (i.e. that the elements bj appearing in the preceding step are in M and q = I ). By definition of (X0∗,X1∗ )θ , since (ξj ) is in B there are functions fj : S → L1 (τ ) that are bounded, continuous on S and analytic on S such that ξj = fj (θ ) for j = 1, . . . ,n and moreover such that sup τ z∈∂0

#

fj (z)∗ fj (z)

1/2 $

0 (to be specified later). We define functions W1 and W2 on ∂S = ∂0 ∪ ∂1 by setting  1/2 1/2 ∗ ∀ z ∈ ∂1 W1 (z) = + δI , (22.24) fj (z)fj (z) ∀ z ∈ ∂0

W1 (z) = I,

(22.25)

∀ z ∈ ∂1

W2 (z) = I,  1/2 1/2 ∗ W2 (z) = fj (z) fj (z) + δI .

(22.26)

∀ z ∈ ∂0

(22.27)

By (22.23) we can choose δ small enough so that sup τ (W12 ) < 1 and

z∈∂1

sup τ (W22 ) < 1.

(22.28)

z∈∂0

By Theorem 22.24 and Corollary 22.25 , using a conformal mapping from S onto D, we find bounded M-valued analytic functions F and G on S with (nontangential) boundary values satisfying FF ∗ = W12

and

G∗ G = W22 .

(22.29)

Moreover, F −1 and G−1 are analytic and bounded on S. Therefore we can write fj (z) = F (z)gj (z)G(z) where gj (z) = F (z)−1 fj (z)G(z)−1 . We claim that ∀z∈S



gj (z)2L2 (τ ) ≤ 1.

(22.30)

(22.31)

By the three lines lemma, to verify this it suffices to check it on the boundary of S. (Note that we know a priori that sup gj (z)L2 (τ ) < ∞ since F −1  < z∈S

δ −1/2 and G−1  ≤ δ −1/2 , hence gj is an H ∞ function with values in L2 (τ ), and its nontangential boundary values still satisfy (22.30) a.e. on the boundary of S.) We have    gj (z)2L2 (τ ) = τ gj (z)gj (z)∗ ∀ z ∈ ∂1    = τ F (z)−1 fj (z)fj (z)∗ F (z)−1∗ = τ ((FF ∗ )−1 (W12 − δI )2 ) hence by (22.29) and (22.28) ≤ τ (W12 ) < 1.

382

Complex interpolation and maximal tensor product

Similarly, we find ∀ z ∈ ∂0



gj (z)2L2 (τ ) ≤ τ (W22 ) < 1.

This proves our claim (22.31). Finally, if θ = 1/p we have L2p (τ ) = (M,L2 (τ ))θ

and

L2p (τ ) = (L2 (τ ),M)θ .

Hence by definition of the latter complex interpolation spaces, since F (z)M = W1 M ≤ 1 on ∂0 and (by (22.28)) F (z)L2 (τ ) < 1 on ∂1 , we have F (θ )L2p (τ ) ≤ 1 and similarly G(θ )L2p (τ ) ≤ 1. To conclude we have ξj = fj (θ ) = F (θ )gj (θ )G(θ ), and hence if (xj ) satisfies (22.22) we have by (22.31) (and Cauchy– Schwarz)   τ (F (θ )gj (θ )G(θ )xj ) ξj (xj ) =  1/2 ≤ G(θ )xj F (θ )2L2 (τ ) 1/2    ≤ xj F (θ )F (θ )∗ xj∗   1/2   ≤ R(xj∗ )L(xj )

Lp (τ )

B(Lp (τ ))

≤ 1.

(22.32)

Thus we have verified that (22.22) implies (xj ) ∈ B o . This concludes the proof of Theorem 22.10.

22.3 Notes and remarks The standard reference on complex interpolation in §22.1 is the book [22]. We also need some less standard facts notably by Bergh [21], Stafney and others, which the reader can find treated in some detail in [214]. To see how noncommutative Lp -spaces can be constructed using complex interpolation, see e.g. [215]. In §22.2 the main results come from [203, 205], with several improvements due to Haagerup, through personal communications at the time [203] was being published. This motivated Haagerup for the results described in the next chapter. The inequality in Corollary 22.18 has been communicated (in the main case A = B = B(H )) by Haagerup to Junge and the author for inclusion in the earlier paper [141].

22.3 Notes and remarks

383

The equalities in Theorem 22.10 and Corollaries 22.12 and 22.13 were greatly influenced by the discovery (due to O. Kouba, see [205]) that a certain kind of tensor product, in particular the Haagerup tensor product, behaves very nicely under complex interpolation. The main technical ingredient is an operator valued version of a classical theorem of Szeg¨o that tells us that under a suitable nonvanishing assumption any integrable nonnegative matrix valued function W : T → Mn can be written in the form W = F ∗ F where F : T → Mn is the boundary value of an analytic function in the Hardy space H 2 . In Theorem 22.24 and Corollary 22.25 we use a generalization of Szeg¨o’s theorem with B(2 ) in place of Mn due to Devinatz [70]. Such results have a long history, starting with Masani–Wiener and Helson–Lowdenslager, see e.g. [133]. The subject was later on investigated for operator algebras by Arveson [11] (see also the discussion of noncommutative Hardy spaces in our survey with Q. Xu [215]). Related results appear in a paper of Haagerup and the author [117]. We refer to Blecher and Labuschagne’s work [26] for a more recent update on the state of the art on generalizations of Szeg¨o’s theorem in operator algebra theory.

23 Haagerup’s characterizations of the WEP

In this chapter we give two new characterizations of the WEP that are significantly more involved than the preceding ones.

23.1 Reduction to the σ -finite case In both cases, the main point consists in proving that if an inclusion of von Neumann algebras M ⊂ M satisfies a certain property, say property P, then there is a completely contractive projection P : M → M. In this section we will show that modulo a simple assumption we may restrict to the case when M is σ -finite. The proof will use the structural Theorem A.65. The assumptions on P are as follows: if M ⊂ M has property P then for any projection q ∈ M the inclusion qMq ⊂ qMq (unital with unit q) also has property P. Moreover, if π : M → M1 is an isomorphism of von Neumann algebras taking M onto a subalgebra M 1 ⊂ M1 , then we assume that the “isomorphic inclusion” M 1 ⊂ M1 also has property P. Proposition 23.1 Under the preceding assumptions, to show that every inclusion M ⊂ M with property P admits a completely contractive projection P : M → M, it suffices to settle the case when M is σ -finite. Proof Consider a general inclusion M ⊂ M. By the structural Theorem A.65 we may assume    ¯ i B(Hi )⊗N , (23.1) M= ⊕ i∈I



¯ i . Let qi be the (central) projection with the Ni’s σ -finite. Let Mi = B(Hi )⊗N in M corresponding to Mi in (23.1) so that Mi = qi M = qi Mqi . Let Mi = qi Mqi . By our first assumption on P the inclusion Mi ⊂ Mi satisfies P. We

384

23.2 A new characterization of generalized weak expectations & the WEP 385 claim that we have a von Neumann algebra Ni , with a subalgebra Ni1 ⊂ Ni and ¯ i such that πi (Mi ) = B(Hi )⊗N ¯ i1 . In an isomorphism πi : Mi → B(Hi )⊗N other words, the inclusion Mi ⊂ Mi is “isomorphic” in the preceding sense to ¯ i1 ⊂ B(Hi )⊗N ¯ i . Indeed, since B(Hi )  B(Hi ) ⊗ 1 ⊂ the inclusion B(Hi )⊗N Mi , by Proposition 2.20, for some Ni we have an isomorphism πi : Mi → ¯ i so that πi (x ⊗ 1) = x ⊗ 1 for any x ∈ B(Hi ). Then the subalgebra B(Hi )⊗N ¯ i to a subalgebra that 1 ⊗ Ni ⊂ Mi ⊂ Mi is mapped by πi : Mi → B(Hi )⊗N commutes with B(Hi ) ⊗ 1Ni , and hence is included in 1 ⊗ Ni . Thus we find Ni1 such that πi (1 ⊗ Ni ) = 1 ⊗ Ni1 , and an isomorphism ψi : Ni → Ni1 such that πi (1 ⊗ y) = 1 ⊗ ψi (y) for all y ∈ Ni . It follows that πi (B(Hi ) ⊗ Ni ) = B(Hi ) ⊗ Ni1 , and since πi is bicontinuous for the weak* topology, we have ¯ i ) = B(Hi )⊗N ¯ i1 . This proves the claim. πi (B(Hi )⊗N ¯ i1 ⊂ B(Hi )⊗N ¯ i By our second assumption on P, the inclusion B(Hi )⊗N  satisfies P. Let ri be a rank 1 projection in B(Hi ). Let qi = ri ⊗1. By our first ¯ i1 ]qi ⊂ qi [B(Hi )⊗N ¯ i ]qi (with assumption again, the inclusion qi [B(Hi )⊗N unit qi ) also satisfies P. The latter being clearly “isomorphic” to the inclusion Ni1 ⊂ Ni we conclude that Ni1 ⊂ Ni satisfies P. But now, at last, since Ni1  Ni is σ -finite, if we accept the σ -finite case, we find that there is a completely contractive projection Pi : Ni → Ni1 . By (2.19) and (2.22), IdB(Hi ) ⊗ Pi ¯ i to B(Hi )⊗N ¯ i1 . defines a completely contractive projection from B(Hi )⊗N −1 Then Qi = πi [IdB(Hi ) ⊗ Pi ]πi is a completely contractive projection from

 Mi to Mi , and hence the mapping x → (Qi (qi xqi ))i∈I ∈ ⊕ i∈I Mi ∞ gives us a completely contractive projection from M onto M.

23.2 A new characterization of generalized weak expectations and the WEP The main result is the following characterization of generalized weak expectations (see Definition 9.21), in terms of decomposable maps. It may be viewed as a refinement of Kirchberg’s characterization of the latter in Theorem 7.6. Theorem 23.2 Let B be a C ∗ -algebra. Let i : A → B be the inclusion mapping from a C ∗ -subalgebra A ⊂ B. The following are equivalent: (i) For any n ≥ 1 and any u : n∞ → A we have uD(n∞,A) = iuD(n∞,B) . (ii) For any n ≥ 1 and any v : n∞ → A∗∗ we have vD(n∞,A∗∗ ) = i ∗∗ vD(n∞,B ∗∗ ) .

386

Haagerup’s characterizations of the WEP

(iii) There is a completely contractive c.p. projection P : B ∗∗ → A∗∗ (in other words by Remark 9.32 there is a generalized weak expectation from B to A∗∗ ). Proof We first claim (i) ⇔ (ii). This is an immediate consequence of Theorem 8.25. Indeed, let Xn = D(n∞,A) and Yn = D(n∞,B), viewed as Banach spaces. Then, the assertion that Xn ⊂ Yn isometrically, which is a reformulation of (i), is equivalent to Xn∗∗ ⊂ Yn∗∗ isometrically. This follows from the classical fact (see Remark A.13) that a mapping between Banach spaces is isometric if and only if its bitranspose is isometric. By Theorem 8.25 we have Xn∗∗ = D(n∞,A∗∗ ) and Yn∗∗ = D(n∞,B ∗∗ ). Thus, (i) ⇔ (ii) follows. Then the equivalence (ii) ⇔ (iii) will follow from the next statement about von Neumann algebras applied to the inclusion A∗∗ ⊂ B ∗∗ . Theorem 23.3 Let M be a von Neumann algebra. Let i : M → M be the inclusion mapping from a von Neumann subalgebra M ⊂ M. The following are equivalent: (i) For any n ≥ 1 and any u : n∞ → M we have uD(n∞,M) = iuD(n∞, M) . (ii) There is a completely contractive c.p. projection P : M → M. Proof We first show (i) ⇒ (ii). We will use the reduction to the σ -finite case. Let P be the property appearing in (i). By the results of §6.1 it is easy to check that P satisfies the assumptions of Theorem 23.1. Therefore, to show (i) ⇒ (ii) we may assume M σ -finite. Then (see Theorem A.63) there is a realization of M in some B(H ) such that M has a cyclic vector. Let M  ⊂ B(H ) be the commutant of M in B(H ). Let I ⊂ U (M  ) \ {1} be a set of unitaries in M  that jointly generate M  as a von Neumann algebra, and let I˙ = I ∪ {1}. Let F be a free group with (free) generators (gx )x∈I . Let Ux = UF (gx ) ∈ C ∗ (F) (x ∈ I ), set also U1 = 1C ∗ (F) , and let σ : C ∗ (F) → M  be the unital ∗-homomorphism defined by σ (Ux ) = x for all x ∈ I . Let E = span[Ux | x ∈ I˙]. Consider then the linear mapping  u : E ⊗ M → B(H ) defined for any e ∈ E,m ∈ M by  u(e ⊗ m) = σ (e)m (and extended by linearity to E ⊗ M). Then for any t ∈ E ⊗ M we have clearly by (4.6)  u(t) ≤ tC ∗ (F)⊗max M . By (6.37) (i) implies that for any t ∈ E ⊗ M we have tC ∗ (F)⊗max M = tC ∗ (F)⊗max M . This shows that (i) implies  u : E ⊗max M → B(H ) ≤ 1

23.2 A new characterization of generalized weak expectations & the WEP 387 where E ⊗max M is viewed as a subspace of C ∗ (F) ⊗max M equipped with the induced norm. By Theorem 2.6 since M has a cyclic vector we have u : E ⊗max M → B(H )cb .  u : E ⊗max M → B(H ) =  By the extension Theorem 1.18 there is  u : C ∗ (F) ⊗max M → B(H ) extending  u with  ucb ≤ 1. C ∗ (F) ⊗max M  u

E ⊗max M

 u

B(H )

Since  u is unital so is  u and hence  u is c.p. by Theorem 1.35. We claim that E ⊗ 1 (and hence actually C ∗ (F) ⊗ 1) is included in the multiplicative domain u(Ux ⊗ 1) =  u(Ux ⊗ 1) = x ∈ U (M  ) for any x ∈ I˙, we D u . Indeed, since  ˙ have Ux ⊗ 1 ∈ D u for any x ∈ I and the claim follows. Let P : M → B(H ) be defined by P (b) =  u(1 ⊗ b). Then P is completely contractive and c.p. for any x ∈ I˙ and since, by Theorem 5.1,  u is bimodular Since Ux ⊗ 1 ∈ D u with respect to D u we have (by the trick we used previously many times) for any x ⊂ I˙ = U (M  ) xP(b) = x u(1 ⊗ b) =  u((Ux ⊗ 1)(1 ⊗ b)) =  u(Ux ⊗ b) = u((1 ⊗ b)(Ux ⊗ 1)) =  u(1 ⊗ b)x = P (b)x. Since the unitaries in I generate M  , this shows that P (b) ∈ (M  ) = M and completes the proof that (i) ⇒ (ii). The converse (ii) ⇒ (i) is an immediate consequence of (6.7) (recalling (i) in Lemma 6.5). Remark 23.4 (The case n = 3) In the situation of the preceding Theorem 23.3 let us merely assume that for any u : 3∞ → M we have uD(3∞,M) = iuD(3∞, M) . If we assume in addition that M ⊂ B(H ) is cyclic and that M  is generated by a pair of unitaries, then the same proof (now with F = F2 and |I˙| = 3) shows that there is a completely contractive c.p. projection P : M → M. Thus when M = B(H ) we conclude that M is injective. We recall in passing that it is a longstanding open problem whether any von Neumann algebra on a separable Hilbert space is generated by a single element or equivalently by two unitaries. Important partial results are known, notably by Carl Pearcy, see [74] for details and references. See Sherman’s paper [229] for the current status of that problem. Actually this single generation problem is open for MF∞ , which is a natural candidate for a counterexample. Note that since MF2 is clearly singly generated a negative answer would have the

388

Haagerup’s characterizations of the WEP

spectacular consequence that MF2 and MF∞ are not isomorphic, which is another famous open question. We come to the characterization of the WEP: Corollary 23.5 Let A ⊂ B(H ) be a C ∗ -algebra. The following are equivalent: (i) For any n ≥ 1 and any u : n∞ → A we have uD(n∞,A) = ucb . (ii) The C ∗ -algebra A has the WEP. Proof We apply Theorem 23.2 with B = B(H ). Note that in that case uD(n∞,B(H )) = ucb by (6.9) so that (i) in Corollary 23.5 is the same as (i) in Theorem 23.2. By Theorem 9.31 when B = B(H ) (iii) in Theorem 23.2 means that A has the WEP. Since WEP and injectivity are equivalent for von Neumann algebras (see Corollary 9.26), we can now recover Haagerup’s original result (see [104]) on this section’s theme. Corollary 23.6 When A is a von Neumann algebra, the assertion (i) in Corollary 23.5 holds if and only if A is injective.

23.3 A second characterization of the WEP and its consequences Haagerup’s unpublished characterization of the WEP follows as Theorem 23.7. This important result is closely related to the complex interpolation results presented in §22. It will be fully proved later on in §23.5. For the moment we only give indications on the easy parts of the proof. Theorem 23.7 The following properties of a C ∗ -algebra A are equivalent: (i) A has the WEP. (ii) For any n and any a1, . . . ,an in A we have         aj ⊗ aj  = aj ⊗ aj   A⊗max A

A⊗min A

.

(iii) There is a constant C such that for all n and all a1, . . . ,an in A we have         aj ⊗ aj  ≤C aj ⊗ aj  .  A⊗max A

(iv) The inclusion iA : A → for some H.

A∗∗

A⊗min A

factors completely boundedly through B(H)

23.3 A second characterization of the WEP and its consequences

389

First part of the proof of Theorem 23.7 Note that (iii) ⇒ (ii) is elementary: let x ∈ A¯ ⊗ A be of the form appearing in (iii) or (ii). Assuming (iii) we have 2m ∗ m ∗ m x2m max = (x x) max ≤ C(x x) min = Cxmin , and hence xmax ≤ 1/2m xmin . Letting m → ∞ we obtain (ii). The converse is trivial. For the C proof of (i) ⇒ (ii) see Corollary 22.16. The proof of (ii) ⇒ (i) is more delicate. It is analogous to that of Theorem 14.8. We prove this remaining part of the proof at the end of §23.5 after Theorems 23.34 and 23.35. Taking this for granted this shows that (i), (ii), and (iii) are equivalent. We now turn to (iv). (i) ⇒ (iv) is immediate from the condition (iv) in Theorem 9.22. Assume (iv). By Corollary 22.16, B(H) satisfies the property (ii) in Theorem 23.7. Then, by (22.16) applied to the c.b. map from B(H) to A∗∗ in the factorization in (iv), there is a constant C such that for all n and all a1, . . . ,an in A we have         aj ⊗ aj  ∗∗ ≤ C a ⊗ a .    j j ∗∗ A ⊗max A

A⊗min A

By part (ii) in Proposition 7.26, we deduce (iii), and hence (iv) is equivalent to (i), (ii), and (iii). Remark 23.8 The remarkable advantage of the characterizations of the WEP in Theorem 23.7 is that they use only the operator space structure of A, because by Corollary 22.19 for any C ∗ -algebra B, any u ∈ CB(A,B) and any a1, . . . ,an in A we have         u(aj ) ⊗ u(aj ) ≤ u2cb  aj ⊗ aj  . (23.2)  B⊗max B

A⊗max A

Moreover the analogous inequality for the min-norm is obvious. Thus if B is completely isomorphic to a C ∗ -algebra A with the WEP, it must satisfy (iii) in Theorem 23.7, and hence have the WEP. Corollary 23.9 The properties in Theorem 9.22 (and in Theorem 23.7) are equivalent to: (v) Any c.b. map u : A → M into a von Neumann algebra admits for some H a factorization of the form v

w

→ B(H) − →M u:A − with vcb wcb = ucb . Proof Indeed, Theorem 8.1 (iii) shows that u admits an extension u¨ : A∗∗ → iA

→ M with u ¨ cb = ucb , so we have a factorization of u of the form A − u¨

A∗∗ − → M. This shows that the property (iv) in Theorem 9.22 implies (v).

390

Haagerup’s characterizations of the WEP

Conversely, assume (v). Then (v) holds for u = iA : A → A∗∗ . Theorem 23.7 shows that A has the WEP. A first consequence of the implication (iii) ⇒ (i) in Theorem 23.7 and of (23.2) is a different approach to a result due independently to ChristensenSinclair and the author (see [208, p. 273]): Corollary 23.10 Let A ⊂ B(H ) be a C ∗ -subalgebra such that there is a c.b. projection P : B(H ) → A then A has the WEP. Thus if A is a von Neumann algebra, it must be injective. Proof Since it has the WEP, B(H ) satisfies (ii) in Theorem 23.7. Then by (23.2) A satisfies (iii) in Theorem 23.7 with C = P 2cb . We give more results in the same vein, on c.b. projections from a general von Neumann algebra onto a subalgebra, at the end of §23.5.

23.4 Preliminaries on self-polar forms It will be crucial to consider von Neumann algebras M ⊂ B(H ) for which the commutant M  appears as a mirror copy of M. The following simple (Radon– Nikodym type) lemma will be useful. Lemma 23.11 Let M ⊂ B(H ) be a von Neumann algebra. Fix a unit vector ξ ∈ H . Let ϕ(x) = ξ,xξ  for all x ∈ M. For any ψ ∈ M ∗ such that 0 ≤ ψ ≤ ϕ there is a ∈ M  with 0 ≤ a ≤ 1 such that ∀x ∈ M

ψ(x) = ξ,axξ .

Proof Let Hξ = Mξ ⊂ H . We have ∀x,y ∈ M

|ψ(y ∗ x)| ≤ (ψ(x ∗ x)ψ(y ∗ y))1/2 ≤ xξ  yξ .

Therefore there is a (unique) linear map a : Hξ → Hξ with a ≤ 1 such that ∀x,y ∈ M

ψ(y ∗ x) = yξ,axξ .

(23.3)

We extend a to H by setting a = 0 on Hξ⊥ . Clearly (23.3) still holds. Since ∀z ∈ M

ψ((z∗ y)∗ x) = ψ(y ∗ zx) = ψ(y ∗ (zx))

and z∗ yξ, axξ  = yξ, zaxξ , we have yξ, zaxξ  = yξ,azxξ , and hence (za − az)|Hξ = 0. Also za − az = 0 on Hξ⊥ (because z∗ Hξ ⊂ Hξ ⊥  implies zH ⊥ ξ ⊂ Hξ ). Therefore a ∈ M and we have ψ(x) = ξ,axξ  for any

23.4 Preliminaries on self-polar forms

391

x in M. Lastly ψ(x ∗ x) = xξ,axξ  shows a ≥ 0 on Hξ . Since a vanishes on Hξ⊥ we have 0 ≤ a ≤ 1 as announced. We will work with sesquilinear forms s : M × M → C on a complex vector space M. This means that s is antilinear (resp. linear) in the first (resp. second) variable. Clearly such forms are in 1 − 1 correspondence with linear forms on M ⊗ M. A sesquilinear form s : M × M → C will be called positive definite if s(x,x) ≥ 0 for any x ∈ M. If moreover s(x,x) = 0 ⇒ x = 0 we say that s is strictly positive definite (or nondegenerate). Remark 23.12 Let u : A → H be a bounded linear map. Then s(y,x) = uy,ux is a bounded positive definite sesquilinear form on A × A. Conversely, any bounded positive sesquilinear form s : A × A → C is of this form (and we may replace H by u(A) to ensure that u has dense range if we wish). Indeed by a well-known construction (after passing to the quotient by the kernel of s and completing) we find a Hilbert space H associated to the inner product (y,x) → s(y,x) and a linear map u : A → H with dense range such that s(y,x) = uy,ux. Remark 23.13 Let s be a positive definite bounded sesquilinear form on a von Neumann algebra M. If (xi ) is a bounded net in M converging weak* to x ∈ M and if s is separately weak* continuous then s(x,x) ≤ lim inf s(xi ,xi ). Indeed, by Cauchy–Schwarz s(x,x) = lim s(x,xi ) ≤ lim inf s(x,x)1/2 s(xi ,xi )1/2 . Definition 23.14 Let A be a unital C ∗ -algebra. A (sesquilinear) form s : A × A → C will be called “bipositive” if it is both positive definite and such that s(a,b) ≥ 0

∀a,b ∈ A+ .

(23.4)

We call it normalized if s(1,1) = 1. For example, let (M,τ ) be a tracial probability space. Assume A ⊂ M. Recall that for all a,b ∈ M+ we have by the trace property τ (ab) = τ (a 1/2 ba1/2 ) ≥ 0. This shows that the form (y,x) → τ (y ∗ x) is bipositive. More generally, given ξ ≥ 0 in L2 (τ ) the sesquilinear form s defined on A × A by s(y,x) = τ (y ∗ ξ xξ ) = ξ 1/2 yξ 1/2,ξ 1/2 xξ 1/2 L2 (τ )

(23.5)

is bipositive. ∗ The condition (23.4) means that the linear map u : A → A associated to s is positivity preserving. For example this holds whenever s is associated to a state on A ⊗max A since that means u is c.p. (see Theorem 4.16). However, not every state on A ⊗max A gives rise to a positive definite form s. We call those

392

Haagerup’s characterizations of the WEP

which satisfy this “positive definite states” on A ⊗max A. In other words a state for A ⊗max A is called positive definite if f (x ⊗ x) ≥ 0 for any x ∈ A. Remark 23.15 If a bipositive form s is nondegenerate then the state ϕ : x → s(1,x) is faithful. Indeed, if x ≥ 0 and ϕ(x) = 0 then 0 ≤ s(x,x) ≤ s(x1,x) = 0, which shows that s nondegenerate implies ϕ faithful. In the converse direction, if ϕ is faithful and s(x,x) = 0 for some x ≥ 0 then by Cauchy–Schwarz s(y,x) = 0 for any y ∈ M and hence ϕ(x) = 0, so that x = 0, which is a weaker form of nondegeneracy. Remark 23.16 Let s : M × M be a bipositive form. Let ϕ ∈ M ∗ be the functional defined by ϕ(x) = s(1,x). If ϕ is weak* continuous (i.e. normal) on M, then s is separately weak* continuous. Indeed, for any a ∈ M+ we have 0 ≤ s(a,x) ≤ s(a1,x) = aϕ(x) for all x ∈ M+ , and hence (see Remark A.45) x → s(a,x) is normal for any a ∈ M+ . Since M is linearly generated by M+ this remains true for any a ∈ M. We will be mainly interested in the following notion. Definition 23.17 Let M be a von Neumann algebra. A (sesquilinear) form s on M × M such that x → s(1,x) is a normal state on M will be called self-polar ∗ such that 0 ≤ ψ(x) ≤ s(1,x) if it is bipositive and such that for any ψ ∈ M+ for all x in M+ , there is a ∈ M with 0 ≤ a ≤ 1 such that ψ(x) = s(a,x). Remark 23.18 For example, if ξ ≥ 0 and τ (ξ 2 ) = 1 the form s in (23.5) is self-polar. Indeed, by Lemma 23.11 the condition 0 ≤ ψ(x) ≤ τ (ξ 2 x) = ξ,L(x)ξ  (x ∈ M) implies the existence of 0 ≤ a ≤ 1 in M such that ψ(x) = ξ,R(a)L(x)ξ  = s(a,x). The following theorem plays a key role in the sequel. It is due to Woronowicz and Connes (see [59, 263]). Theorem 23.19 Let M be a von Neumann algebra equipped with a faithful normal state ϕ. Let s,s1 be normalized bipositive forms on M × M such that s1 (1,x) ≤ s(1,x) = ϕ(x).

∀x ∈ M+

If s is self-polar and strictly positive definite then s1 (x,x) ≤ s(x,x) for any x ∈ M. Proof For any a ∈ M with 0 ≤ a ≤ 1, note ∀x ∈ M+

0 ≤ s1 (a,x) ≤ s1 (1,x) ≤ ϕ(x).

Let ψa (x) = s1 (a,x). By the self-polar property of s, there is b ∈ M with 0 ≤ b ≤ 1 such that ∀x ∈ M

ψa (x) = s(b,x).

23.4 Preliminaries on self-polar forms

393

Morever, such a b is unique. Indeed, since s is strictly positive definite we know s(b,x) = 0 ∀x ⇒ b = 0. Thus we may set b = T (a). In particular 0 ≤ T (1) ≤ 1. The correspondence a → Ta is clearly additive and hence extends by scaling to M+ and by linearity to the whole of M (see (A.11)): we set first T (a + − a − ) = T (a + ) − T (a − ) if a ∗ = a, and T (a) = T (#(a)) + iT('(a)) for an arbitrary a ∈ M. Then for any a ∈ M the element Ta ∈ M is characterized by the identity ∀x ∈ M

s(Ta,x) = s1 (a,x).

It follows that T : M → M is a positive linear map with T (1) ≤ 1. Therefore T  ≤ 1 by Corollary A.19. Furthermore, since s1 is positive definite it satisfies s1 (a,x) = s1 (x,a) for all a,x ∈ M. From this we deduce s(Ta,x) = s(a,Tx). We claim that ∀x ∈ M Indeed assuming x = 0 let

|s(Tx,x)| ≤ s(x,x).

s(T k x,x) . λ(k) = s(x,x)

Then by Cauchy–Schwarz we have λ(k) ≤ λ(2k)1/2 n

and hence λ(1) ≤ λ(2)1/2 ≤ · · · ≤ λ(2n )1/2 for any n ≥ 1. Since for any 0 ≤ a ≤ 1 we have 0 ≤ T (a) ≤ 1, by iteration we have 0 ≤ T k (a) ≤ 1 for any k and hence s(T k a,a) ≤ 1. Note that a → s(T k (a),T k (a))1/2 = s(T 2k (a),a)1/2 is a subadditive functional on M. It follows using a = a + −a − (as in (A.11)) that |s(T 2k a,a)| ≤ 4a2 and |s(T 2k x,x)| ≤ 16x2 for any x n in M. This gives us λ(1) ≤ (16x2 s(x,x)−1 )1/2 and letting n → ∞ we find λ(1) ≤ 1. Since the case x = 0 is trivial this proves our claim and a fortiori that s1 (x,x) ≤ s(x,x) for any x ∈ M. Corollary 23.20 Let M be a von Neumann algebra equipped with a selfpolar form s such that the state x → s(1,x) is faithful and normal. Then any normalized bipositive form s1 such that ∀x ∈ M

s(x,x) ≤ s1 (x,x)

must be identical to s. Proof Since the forms are normalized, x → s1 (1,x) is a state and s(1 + tx,1 + tx) ≤ s1 (1+tx,1+tx) implies 2t#s(1,x)+t 2 s(x,x) ≤ 2t#s1 (1,x)+t 2 s1 (x,x). Letting t → 0 we obtain #s(1,x) = #s1 (1,x) for all x ∈ M and hence s(1,x) = s1 (1,x) for all x ∈ M+ . Then Theorem 23.19 tells us that s1 (x,x) ≤

394

Haagerup’s characterizations of the WEP

s(x,x) and hence s1 (x,x) = s(x,x) for any x ∈ M. By polarization s1 (y,x) = s(y,x) for any y,x ∈ M. The preceding theorem involves positivity with respect to two distinct cones in M × M, namely M+ × M+ and {(x,x) | x ∈ M}. For the latter case the associated order relation for two positive definite forms s1,s2 on M × M means that s1 ≤ s2 if s1 (x,x) ≤ s2 (x,x) for all x ∈ M. We will refer to this order as the pointwise order. In other words this is the pointwise ordering of the associated quadratic forms. The following statement is then an immediate consequence of Theorem 23.19: Corollary 23.21 Let ϕ and s be as in Theorem 23.19. Then s is the largest element (for the pointwise order) in the set of bipositive forms s  on M × M such that s  (1,x) = ϕ(x) for all x ∈ M. Proposition 23.22 Let A,B be unital C ∗ -algebras with A ⊂ B. Let s be a positive definite form on A × A such that x → s(1,x) is a state on A. The following are equivalent: (i) We have ∀n, ∀x1, . . . ,xn ∈ A,



    s(xj ,xj ) ≤  xj ⊗ xj 

B⊗max B

.

(ii) There is a state f on B ⊗max B such that ∀x ∈ A

s(x,x) ≤ #(f (x ⊗ x))

(23.6)

and moreover for any self-adjoint x ∈ A s(1,x) = (f (1 ⊗ x) + f (x ⊗ 1))/2.

(23.7)

Proof Assume (i). We may assume that B ⊗max B ⊂ B(H ) and that the embedding is of the form y ⊗ x → σ (y)∗ π(x) where π : B → B(H ) (resp. σ : B → B(H )) is a ∗-homomorphism (resp. antihomomorphism) and π,σ have commuting ranges. Then we have   #(f (xj ⊗ xj )) s(xj ,xj ) ≤ sup f ∈C

where C is the (weak∗ compact) unit ball of (B ⊗max B)∗ . By Hahn–Banach (see Lemma A.16) there2is a net of finitely supported probabilities (λi ) on C 2 such that s(x,x) ≤ limi #(f (x ⊗ x))dλi (f ). Let fi = fdλi ∈ C. Passing to a subnet we may ensure by the weak* compactness of C that fi → f pointwise and hence we have ∀x ∈ A

s(x,x) ≤ #(f (x ⊗ x)).

23.5 max+ -injective inclusions and the WEP

395

But 1 = s(1,1) ≤ #(f (1 ⊗ 1)) = (#f )(1 ⊗ 1) implies that #f is a state, and since f ∈ C we must have '(f ) = 0, so that f is a state (see Remark A.23). Replacing x by 1 + tx in (23.6) and letting t → 0 we obtain (23.7). This proves (i) ⇒ (ii). The converse is obvious.

23.5 max+ -injective inclusions and the WEP Our goal here is Haagerup’s theorem [107] that asserts that if an inclusion A → B of C ∗ -algebras is such that the map A¯ ⊗max A → B¯ ⊗max B is injective when restricted to the “positive definite” tensors then that map is injective and actually A → B is max-injective. Note the analogy with the previous Corollary 14.9. When applied to the inclusion A ⊂ B(H ) this gives a new characterization of the WEP. This result may seem at first sight of limited scope, but it turns out to lie rather deep. In particular, despite our efforts we could not avoid the use of ingredients from the Tomita–Takesaki theory in the proof, which makes it less self-contained than we hoped for. In the case of the inclusion A ⊂ B(H ), one must show that any ∗-homomorphism π : A → M into a von Neumann algebra factors completely contractively through B(H ). When M is finite or semifinite, we do give a self-contained proof (see the proof of Theorem 23.30), but the general case eludes us, we need to use the socalled standard form of M or some fact of similar nature, for which including a complete proof would take us too far out. Let A,B be C ∗ -algebras with A ⊂ B. We say that the inclusion A ⊂ B is max+ -injective if         ∀n ∀x1, . . . ,xn ∈ A  xj ⊗ xj  = xj ⊗ xj  . A⊗max A

B⊗max B

(23.8) Let us denote again by (A ⊗ A)+ the subset of the “positive definite” elements in A ⊗ A, i.e. the subset consisting of all the finite sums of the form  xj ⊗ xj (xj ∈ A). Warning: this should not be confused with the set (A ⊗ A)+ = {t ∗ t | t ∈ A ⊗ A}. Remark 23.23 Let t ∈ A ⊗ A. It is an easy exercise in linear algebra to check that t ∈ (A ⊗ A)+ if and only if (ϕ¯ ⊗ ϕ)(t) ≥ 0 for any ϕ ∈ A∗ . In particular, whenever A ⊂ B, by the Hahn–Banach theorem, we have (A ⊗ A)+ = (A ⊗ A) ∩ (B ⊗ B)+ .

(23.9)

396

Haagerup’s characterizations of the WEP

With this notation the inclusion A ⊂ B is max+ -injective if the inclusion A ⊗max A → B ⊗max B is isometric when restricted to (A ⊗ A)+ . Remark 23.24 Actually for this to hold it suffices that there exists a constant C such that         xj ⊗ xj  ≤C xj ⊗ xj  . ∀n ∀x1, . . . ,xn ∈ A  A⊗max A

B⊗max B

(23.10) 

Indeed, let t = xj ⊗ xj . Observe that (t ∗ t)m ∈ (A ⊗ A)+ for any m ≥ 1. Therefore, (23.10) implies t2m A⊗

max A

= (t ∗ t)m A⊗max A ≤ C(t ∗ t)m B⊗max B = Ct2m B⊗

max B

and hence tA⊗max A ≤ C 1/2m tB⊗max B . Letting m → ∞ we obtain (23.8). Remark 23.25 If B ⊂ C is another inclusion between C ∗ -algebras, and if both inclusions A ⊂ B and B ⊂ C are max+ -injective, then the same is true for A ⊂ C. The following fact is an immediate consequence of (22.10) and Corollary 22.15. Lemma 23.26 If A ⊂ B is max+ -injective then A∗∗ ⊂ B ∗∗ is also max+ injective. Remark 23.27 If A ⊂ B is max-injective then it is max+ -injective. Indeed, since A ⊂ B is clearly max-injective as well, the ∗-homomorphisms A ⊗max A → A⊗max B and A⊗max B → B ⊗max B are each isometric, and hence their composition A ⊗max A → B ⊗max B is isometric too. A fortiori the inclusion A ⊂ B is max+ -injective. Our main goal is to show that conversely max+ -injective ⇒ max -injective, but the proof will be quite indirect. In fact we will prove that max+ -injective implies that there is a contractive projection P : B ∗∗ → A∗∗ . Since P is automatically c.p. this holds if and only if the inclusion A ⊂ B is max-injective (see Theorem 7.29). By arguments that have been already discussed it suffices to show the following: For any von Neumann algebra M any ∗-homomorphism π : A → M admits a c.p. extension v : B → M. The next lemma will allow us to reduce to the case when π is injective.

23.5 max+ -injective inclusions and the WEP

397

Lemma 23.28 Let A ⊂ B be a von Neumann subalgebra of a von Neumann algebra B. Let π : A → M be a normal ∗-homomorphism onto a von Neumann algebra M, let I = ker(π ) and let [π ] : A/I → M be the ∗-homomorphism canonically associated to π . We view A/I ⊂ B via the chain of embeddings A/I ⊂ A/I ⊕ I  A ⊂ B. (i) If A ⊂ B is max+ -injective then the inclusion A/I → B is max+ -injective. (ii) If [π ] : A/I → M admits a contractive positive (resp. c.p.) extension to B, then π admits a contractive positive (resp. c.p.) extension to B. Proof By (A.24) we have a decomposition A = (A/I) ⊕ I. Let p,q be the central projections in A corresponding to this decomposition, so that p + q = 1 A/I = qA,

I = pA.

Let Q : A → A/I be the quotient map. For any x ∈ A the inclusion A/I ⊂ A takes Q(x) to qx (see Remark A.34). To show (i) we simply observe that the inclusion A/I ⊂ A is max-injective by Proposition 7.19. Then (i) becomes clear by Remark 23.25. We turn to (ii). Let w : B → M be a contractive positive (resp. c.p.) extension of [π ] : A/I → M, i.e. we have w(qa) = π(a) for any a in A. Let v : B → M be defined by ∀x ∈ B

v(x) = w(qxq).

Clearly v is contractive positive (resp. c.p.) and for any a ∈ A since qa = aq we have v(a) = w(qaq) = w(qa) = π(a), which proves (ii). The key step is the following. See §A.20 for background on σ -finiteness. Theorem 23.29 Let B be a C ∗ -algebra and A ⊂ B a C ∗ -subalgebra. Let π : A → M be a ∗-homomorphism into a (σ -finite) von Neumann algebra M with a faithful normal state ϕ. If A ⊂ B is max+ -injective, then π admits a contractive c.p. extension  π : B → M. The key ingredient to prove this is the next statement, which guarantees the existence of self-polar forms associated to any faithful normal state ϕ. This

398

Haagerup’s characterizations of the WEP

fact (due to Connes) for a fully general M requires knowledge of the Tomita– Takesaki Theory, and unfortunately we will have to accept it without proof. We merely give indications on its proof. Theorem 23.30 Let ϕ be a faithful normal state on a (σ -finite) von Neumann algebra M. There is a unique strictly positive definite self-polar form sϕ on M × M such that sϕ (1,x) = ϕ(x)

∀x ∈ M.

In addition, ϕ “extends” to a state  on M⊗max M in the sense that (x⊗x) = sϕ (x,x) ∀x ∈ M. More precisely, there is a Hilbert space H , an embedding σ : M → B(H ) (as a von Neumann subalgebra), a unit vector ξϕ ∈ H and a ∗-homomorphism ρϕ : M ⊗max M → B(H ) such that sϕ (y,x) = ξϕ ,ρϕ (y ⊗ x)ξϕ 

(23.11)

∀x ∈ M

(23.12)

ρϕ (1 ⊗ x) = σ (x)

and moreover y → ρϕ (y ⊗ 1) is an isomorphism from M to σ (M) . Indications on the proof of Theorem 23.30 The unicity follows from Theorem 23.19 (or Corollary 23.21), so we will concentrate on the existence of such a form. The case when M admits a faithful tracial state τ is easy. In that case, we may identify ϕ with an element of L1 (τ )+ so that ϕ(x) = τ (ϕx) (see §11.2). Then ϕ 1/2 ∈ L2 (τ )+ and, as we already mentioned (see Remark 23.18), it is not hard to show that the form sϕ defined by sϕ (y,x) = τ (y ∗ ϕ 1/2 xϕ 1/2 ) which is bipositive by (22.21) is self-polar. Letting ξϕ = ϕ 1/2 , H = L2 (τ ) and ρϕ (y ⊗ x) = Ry ∗ Lx where Lx (resp. Rx ) denotes left- (resp. right-)hand multiplication by x on L2 (τ ), we obtain all the other properties. Although this is technically more involved, the same idea works if τ is merely semifinite. However, in the general case, despite our efforts we could not find a shortcut to avoid the use (without proof) of the Tomita–Takesaki modular theory. We will use it via the following basic fact (see [241, p. 151], and Haagerup’s landmark paper [102]): Any von Neumann algebra admits a “standard form,” which means that there is a triple (H,J,P  ) consisting of a Hilbert space H such that M ⊂ B(H ) (as a von Neumann algebra), an antilinear isometric involution J : H → H and a cone P  ⊂ H such that (i) J MJ = M  and J xJ = x ∗

∀x ∈ M ∩ M  .

23.5 max+ -injective inclusions and the WEP

399

(ii) P  ⊂ {ξ ∈ H | J ξ = ξ } and P  is self-dual, i.e. P  = {ξ ∈ H | ξ,η ≥ 0 ∀η ∈ P  }. (iii) ∀x ∈ M J xJx(P  ) ⊂ P  . (iv) For any ϕ in M∗+ there is a unique ξϕ in P  such that ϕ(x) = ξϕ ,xξϕ  for any x in M. Let ϕ be a normal faithful state so that ξϕ  = 1. We set σ (x) = x and we define sϕ (y,x) = ξϕ , J yJxξϕ . By (iii) and (ii) we have sϕ (x,x) ≥ 0 for any x in M. Moreover, since ϕ is faithful, the vector ξϕ is separating for both M and M  (since JMJ = M  ). By a well-known reasoning (see Lemma A.62) this implies that ξϕ is cyclic for both M and M  . In particular M  ξϕ = H . Now assume sϕ (x,x) = 0. By Cauchy–Schwarz we have sϕ (y,x) = 0 for any y in M and hence xξϕ ⊥ M  ξϕ , which means xξϕ = 0 and hence x = 0. Thus sϕ is nondegenerate. To check the self-polarity, we will use Lemma 23.11. Assume 0 ≤ ψ ≤ ϕ. By Lemma 23.11 there is b in M  with 0 ≤ b ≤ 1 such that ∀x ∈ M

ψ(x) = ξϕ , b xξϕ .

Since J MJ = M  , we may write b = J bJ with 0 ≤ b ≤ 1 in M and we obtain ψ(x) = sϕ (b,x). Lastly, since y → J yJ and x → x are commuting ∗-homomorphisms on M and M respectively, the map ρϕ defined for y,x ∈ M by ρϕ (y ⊗ x) = J yJx extends to a ∗-homomorphism on M ⊗max M, and sϕ extends (so to speak) to a state  on M ⊗max M defined by (y ⊗ x) = ξϕ ,ρϕ (y ⊗ x)ξϕ . In particular, this shows that sϕ is bipositive. Remark 23.31 The preceding proof shows that for any unit vector ξ ∈ P  the form s defined on M × M by s(y,x) = ξ,JyJxξ  is a (normalized and separately normal) self-polar form. Remark 23.32 The standard form of a von Neumann algebra M ⊂ B(H )  ⊂ B(H ) relative to (J,P  ) is unique in the following very strong sense. Let M  via an , relative to (J, P  ). If M is isomorphic to M be in standard form on H  there is a unique unitary U : H → H  such that isomorphism π : M → M, −1 −1     π(x) = U xU for all x ∈ M, J = UJU and P = U (P ). Proof of Theorem 23.29 Let A ⊂ B be a max+ -injective inclusion. By Lemma 23.26 A∗∗ ⊂ B ∗∗ is also max+ -injective. Assume A,B and π unital. Since π extends to a normal ∗-homomorphism π¨ : A∗∗ → M, it suffices to show that π¨ admits a contractive c.p. extension to B ∗∗ . In other words, it suffices to prove Theorem 23.29 when A is a von Neumann subalgebra of a von Neumann

400

Haagerup’s characterizations of the WEP

algebra B and π is normal. By Lemma 23.28, we may assume that π : A → M is an isomorphism. To simplify the notation, we observe that it suffices to prove the following. Claim: Let M → B be an injective ∗-homomorphism such that the corresponding inclusion is max+ -injective then there is a contractive c.p. projection P : B → M. Indeed, going back to the preceding situation, M  x → π −1 (x) ∈ A ⊂ B defines clearly a max+ -injective inclusion of M into B, and if there is P as in the claim then π P is a contractive c.p. extension of π as required in Theorem 23.29. We now turn to the proof of the claim. For simplicity we assume M ⊂ B. Let sϕ be the self-polar form on M given by Theorem 23.30. By Proposition 23.22 there is a state f on B ⊗max B such that ∀x ∈ M

sϕ (x,x) ≤ #f (x ⊗ x),

(23.13)

and for any self-adjoint x ∈ M ϕ(x) = sϕ (1,x) = (f (1 ⊗ x) + f (x ⊗ 1))/2. Therefore, decomposing in real and imaginary parts, we find for any x ∈ M ϕ(x) = sϕ (1,x) = (f (1 ⊗ x) + f (x ⊗ 1))/2. Let s : B × B → C be the sesquilinear form defined by s(y,x) = (f (y ⊗ x) + f (x ⊗ y))/2. Note that s(x,x) = #(f (x ⊗ x)) for any x in B and s(1,x) = ϕ(x) for any x in M. Moreover, s satisfies the positivity condition (23.4) on B (we even have complete positivity, see Theorem 4.16). Thus the restriction of s to M × M is bipositive. Since sϕ is self-polar (by Theorem 23.30), Corollary 23.20 implies  s(y,x) = sϕ (y,x) for any y,x ∈ M, and hence for any t = yj ⊗xj ∈ M⊗M       sϕ (yj ,xj ) = f /2 yj ⊗ xj + f xj ⊗ yj whence since f is a state on B ⊗max B         yj ⊗ xj  + xj ⊗ yj  ≤ (1/2)  max

max



    = yj ⊗ xj 

B⊗max B

.

Let ρϕ : M ⊗ M → B(H ) be the ∗-homomorphism described in Theorem  sϕ (yj ,xj ). By what we just 23.30. By (23.11) we have ξϕ ,ρϕ (t)ξϕ  = proved t → ξϕ ,ρϕ (t)ξϕ  has unit norm as a functional on M ⊗ M equipped with the norm induced by B ⊗max B. Obviously, since ξϕ is cyclic for M it is a

23.5 max+ -injective inclusions and the WEP

401

cyclic vector for ρϕ , and hence by Remark A.27 we have ρϕ (t) ≤ tB⊗max B for any t ∈ M ⊗ M. Let E be the closure of M ⊗ M in B ⊗max B. Since ρϕ is a ∗-homomorphism it automatically defines a (completely contractive) c.p. map v from E to B(H ), itself admitting a c.p. contractive extension  v : B ⊗max B → B(H ), containing M ⊗ M in its multiplicative domain. We now conclude as in Theorem 6.20. Let σ : M → B(H ) be the inclusion map so that σ (x) = ρϕ (1¯ ⊗ x) for any x ∈ M. Let P1 : B → B(H ) be defined v (1 ⊗ b) for b ∈ B. Then P1 extends σ and by the usual by P1 (b) =  v (y ⊗ 1) = ρϕ (y ⊗ 1) multiplicative domain argument P1 (b) commutes with  for any y ∈ M. Since {ρϕ (y ⊗ 1) | y ∈ M} = σ (M) , we conclude that P1 (b) ∈ σ (M) = σ (M). Thus P : B → M defined by P (b) = σ −1 P1 (b) is the desired projection, proving the claim. This proves the unital case. The argument is easily modified to cover the nonunital case. We leave this to the reader. Corollary 23.33 Let M,M be von Neumann algebras with M ⊂ M. Assume that M is σ -finite or equivalently admits a faithful normal state ϕ. If the inclusion M ⊂ M is max+ -injective, then there is a contractive c.p. projection P : M → M. Proof By Theorem 23.29 the identity of M admits a contractive c.p. extension P : M → M. We can now reach our goal: Theorem 23.34 Let A be a C ∗ -subalgebra of a C ∗ -algebra B. The following are equivalent: (i) (i)’ (ii) (iii)

The inclusion A ⊂ B is max+ -injective. The (bitransposed) inclusion A∗∗ ⊂ B ∗∗ is max+ -injective. There is a contractive c.p. projection P : B ∗∗ → A∗∗ . The inclusion A ⊂ B is max-injective.

Proof By Lemma 23.26 we know (i) ⇒ (i)’. We already know that (ii) and (iii) are equivalent by Theorem 7.29, and (iii) ⇒ (i) is clear by Remark 23.27. Thus it only remains to show the implication (i)’ ⇒ (ii). This is settled by the next statement, in which we remove the σ -finiteness assumption from Corollary 23.33. Theorem 23.35 Let M be a von Neumann algebra. Let M ⊂ M be a von Neumann subalgebra of M. The following are equivalent: (i) The inclusion M ⊂ M is max+ -injective. (ii) There is a completely contractive c.p. projection P : M → M.

402

Haagerup’s characterizations of the WEP

Proof Assume (i). If M is σ -finite, (ii) follows by Corollary 23.33. Let P be the property of max+ -injectivity for inclusions such as M ⊂ M. By the reduction in §23.1 to prove (i) ⇒ (ii) in general it suffices to show that P satisfies the assumptions of Proposition 23.1. This can be done by a routine diagram chasing verification as follows. Let q be a projection in M. We claim that the inclusion qMq ⊂ qMq is max+ -injective. This can be checked on the following commuting diagram: qMq ⊂ M ∪



qMq ⊂ M Indeed, by (i) in Proposition 7.19 the horizontal arrows are max-injective (and a fortiori max+ -injective by Remark 23.27) and the second vertical one is max+ -injective by our assumption. The latter means that the inclusion (M ⊗max M)+ ⊂ (M ⊗max M)+ is isometric. Therefore the commuting diagram (qMq ⊗max qMq)+ ↑ (qMq ⊗max qMq)

⊂ (M⊗max M)+ ∪

+

⊂ (M⊗max M)+

must have its first vertical arrow isometric. This proves the claim. Let π : M → M1 be an isomorphism of von Neumann algebras, and let 1 M = π(M). Then π¯ ⊗ π : M ⊗max M → M1 ⊗max M1 is an isometric isomorphism (indeed its inverse is π¯ −1 ⊗π −1 ). Let i : M → M and i1 : M 1 → M1 be the inclusions. We have π i = i1 and hence also (π¯ ⊗ π )(i¯ ⊗ i) = i1 ⊗ i1 . This shows that if i¯ ⊗ i : (M ⊗max M)+ → (M ⊗max M)+ is isometric, then i1 ⊗ i1 : (M 1 ⊗max M 1 )+ → (M1 ⊗max M1 )+ is also isometric. This shows that P satisfies the required assumptions. End of the proof of Theorem 23.7 It remains only to prove (ii) ⇒ (i). Assume (ii). Let A ⊂ B(H ) be any embedding of A. Then (ii) implies (and by (22.15) is the same as saying) that the inclusion A ⊂ B(H ) is max+ -injective. By Theorem 23.34 it is max-injective and hence A has the WEP by Theorem 9.22. The next Corollary was obtained in various steps by Uffe Haagerup and the author and independently by Christensen and Sinclair. See [51, 52, 107, 202, 203].

23.6 Complement

403

Corollary 23.36 Let M,M be von Neumann algebras with M ⊂ M. If there is a c.b. projection from M onto M, then there is a completely contractive (and c.p.) one. Proof Let P : M → M be a c.b. projection. Let C = P 2cb . By Corollary 22.19 the inclusion M ⊂ M satisfies the property in (23.10). By Remark 23.24, this means it is max+ -injective, and the corollary follows. Remark 23.37 By Remark 22.20 we obtain the same conclusion if we merely assume that P tensorizes boundedly with both the row and the column operator spaces. More precisely, if we have IdX ⊗P : X⊗min M → X⊗min M ≤ C 1/2 for both X = R and X = C (row and column operator spaces), then the inclusion M ⊂ M satisfies (23.10). Remark 23.38 However, it remains an open problem whether the mere existence of a bounded linear projection P : M → M is enough for the conclusion of Corollary 23.36. An affirmative answer is given in [201] in case ¯  M. It is proved in [118] that if G is any group containing F2 there is M ⊗M no bounded projection from B(2 (G)) to MG .

23.6 Complement Our goal in this section is to prove the following extension of Theorem 22.10 beyond the semifinite case. Theorem 23.39 Let M ⊂ B(H ) be a von Neumann algebra assumed in standard form on H with respect to (J,P  ) as described in the proof of Theorem 23.30. Then for any finite set (x1, . . . xn ) in M we have         J xj J xj  xj ⊗ xj  =  max )  *  (23.14) = sup ξ, J xj J xj ξ ξ ∈ P , ξ H = 1 , where for the second equality we assume in addition that adjoint in M ⊗ M. Moreover, we have      xj ⊗ xj  ≥ sup s(xj ,xj )  max



xj ⊗ xj is self-

(23.15)

where the sup runs either over all separately normal, normalized bipositive  forms s on M ×M, or over all self-polar forms, and equality holds if xj ⊗xj is self-adjoint.

404

Haagerup’s characterizations of the WEP

Proof We will first assume M σ -finite and show that both parts of Theorem 23.39 hold. By Takesaki’s Theorem 11.3 we have an embedding M ⊂ M with M semifinite and σ -finite such that there is a contractive c.p. projection P :M → M. It follows  (by (i) in Proposition 7.19) that for all (xj ) in M   we have  xj ⊗ xj M⊗ M =  xj ⊗ xj M⊗ M . Assume that t = max max  xj ⊗ xj is self-adjoint in M ⊗ M. Let τ be a faithful normal semifinite trace on M. By (22.20), there is a faithful normal state f on M such  for any ε > 0   that  xj ⊗ xj M⊗ M − ε < τ (xj∗ f 1/2 xj f 1/2 ). Let s : M × M → C be max the form defined by s(x,x) = τ (x ∗ f 1/2 xf 1/2 ), which is bipositive by (22.21). Let ϕ(x) = τ (fx) for any x ∈ M. Then ϕ is a normal faithful state on M. Let sϕ be the corresponding self-polar form as in Theorem 23.30 and let ρ = ρϕ as in (23.11). By Theorem 23.19 we have s(x,x) ≤ sϕ (x,x) for any x ∈ M. This gives us      xj ⊗ xj  −ε < sϕ (xj ,xj ) = ξϕ ,ρ(t)ξϕ   M⊗max M

and hence we obtain tmax ≤ sup{ξ,ρ(t)ξ  | ξ ∈ P ,ξ H = 1} ≤ ρ(t). Since ρ(t) ≤ tmax is obvious we obtain (23.14). The latter is proved assuming t = t ∗ , but since we may replace a general t by t ∗ t, we obtain ρ(t) = tmax for all t ∈ M ⊗ M. This also proves the second equality in (23.14) when t = t ∗ . Now let s be a normalized bipositive separately normal form and let s1 be any nondegenerate self-polar form such that x → s1 (1,x) is normal. For any 0 < ε < 1 consider the normalized bipositive form sε = (1 − ε)s + εs1 . Let ϕε = sε (1,x) (x ∈ M). Then ϕε is normal and faithful. By Theorem 23.30 we have sε ≤ sϕε in the pointwise order, and hence   (1 − ε) s(xj ,xj ) ≤ sε (xj ,xj )  ≤ sϕε (xj ,xj ) = ξϕε ,ρ(t)ξϕε  ≤ ρ(t).  Letting ε → 0 we find s(xj ,xj ) ≤ ρ(t). This yields the second part of Theorem 23.39 (for self-polar forms recall Remark 23.31). We now turn to the general case. We start by observing that         xj ⊗ xj  = sup  pxj p ⊗ pxj p (23.16)  max

max

where the sup runs over all projections p ∈ M such that pMp is σ -finite. Indeed, this can be deduced fairly easily from the structure Theorem A.65. Let   pxj p ⊗ p ∈M be any projection with pMp σ -finite. We will show that   pxj pmax ≤  J xj J xj .

23.6 Complement

405

Let q ∈ B(H ) be the projection defined by q = JpJp = ρ(p¯ ⊗ p). p

p

Let = JpJ. Note is a projection in M  so that q = pp = p p. Note also p = Jp J (since J 2 = 1). A simple verification shows that qJ = Jq = pJp. By known results on the standard form (see [102, Lemma 2.6] for a detailed argument) pMp is isomorphic to qMq via the correspondence pxp → qxq and the embedding qMq ⊂ B(q(H )) is a realization of qMq (with unit q) in standard form on the Hilbert space q(H ) with respect to qJq (restricted to q(H )) and the cone q(P  ). By the first part of the proof, we have         (23.17) pxj p ⊗ pxj p = qJqxj qJ qxj q  .  max

But since p

M

= JpJ ∈ commutes with p we have qxq = (pxp)p = p (pxp) for any x ∈ M so that qJqxqJ qxq = q(Jp (pxp)J )(pxp)p and hence since qJp = qJp J J = qpJ = qJ we have qJqxqJ qxq = q(J (pxp)J )(pxp)q.

(23.18)

Recalling that ρ is a ∗-homomorphism on M ⊗ M, we may write J (pxp)J (pxp) = ρ(pxp ⊗ pxp) = ρ(p¯ ⊗ p)ρ(x¯ ⊗ x)ρ(p¯ ⊗ p) = qρ(x¯ ⊗ x)q so that by (23.18)

Thus, if we denote t  =

 

qJqxj qJ qxj q = qρ(t)q.

(23.19)

pxj p ⊗ pxj p, the identity (23.17) gives us t  max ≤ ρ(t).

Using (23.16) we conclude tmax ≤ ρ(t) and since the converse is trivial, the first part of (23.14) follows. By the third property in the definition of a standard form we have q(P  ) ⊂ P  . Using this with (23.19) the preceding argument allows us to extend the second equality in (23.14) when t ∗ = t. Now let s be a separately normal, normalized bipositive form on M × M.  xj ⊗ xj . Again invoking Let (xj ) be a finite set in M and let t = Theorem A.65 via Corollary A.66, one can find a net pi of projections in M such that pi Mpi is σ -finite for any i, and such that pi xpi → x weak* for any x ∈ M. Then (see Remark 23.13) since s is separately normal   s(xj ,xj ) ≤ lim infi s(pi xj pi ,pi xj pi ). By the first part of the proof, we  have s(pi xj pi ,pi xj pi ) ≤ (pi ⊗ pi )t (pi ⊗ pi )max ≤ tmax and hence  we obtain s(xj ,xj ) ≤ tmax . This shows  sup s(xj ,xj ) ≤ tmax,

406

Haagerup’s characterizations of the WEP

where the sup runs over all separately normal, normalized bipositive forms on M × M. But by (23.14) and Remark 23.31 if we restrict the sup to the subset of those s of the form s(y,x) = ξ,JyJxξ  with ξ unit vector in P  , we obtain equality in (23.15) when t = t ∗ . The proof of Theorem 23.39 actually proves the following fact. Theorem 23.40 Let  α be a C ∗ -norm on M ⊗ M. Recall the notation n  xj ⊗ xj n ≥ 1,xj ∈ M . (M ⊗ M)+ = 1

If tmax ≤ tα for all t ∈ (M ⊗ M)+ then ρ(t) ≤ max{tα ,t tα } for all t ∈ M ⊗ M, where t → t t denotes the linear map on M ⊗ M taking y ⊗ x to x ∗ ⊗ y ∗ . Sketch Since the argument is the same as earlier, we only outline it. Assume M in standard form. By (23.14) we have for any unit vector ξ ∈ P       xj ⊗ xj f ∈ B(M⊗α M)∗ . ξ,Jxj J xj ξ  ≤ sup #f Therefore (as in Prop. 23.22) there is a state f on (M ⊗α M)∗ (depending on ξ of course) such that for all x ∈ M ξ,JxJ xξ  ≤ #f (x ⊗ x) = (1/2)f (x ⊗ x + x ∗ ⊗ x ∗ ). Observing that the form s(y,x) = (1/2)f (y¯ ⊗ x + x ∗ ⊗ y ∗ ) is normalized, bipositive and such that ξ,JxJ xξ  ≤ s(x,x) it follows from Corollary 23.20  that ξ,JyJ xξ  = s(y,x) for any y,x ∈ M. Therefore for any t = n1 yj ⊗ xj we have  |ξ,ρ(t)ξ | = ξ,Jyj J xj ξ  = (1/2)f (t + t t) ≤ max{tα ,t tα }. In particular, applying this to t ∗ t gives us ρ(t)ξ 2 ≤ max{t2α ,t t2α }, and we conclude if ξ is cyclic for M. This settles the σ -finite case. The general case is proved by the same reduction as in the preceding proof. We skip the details. The preceding naturally leads us to introduce a new C ∗ -norm as follows. For any von Neumann algebra M ⊂ B(H ) in standard form on B(H ) with the preceding notation we set (temporarily)      yj ⊗ xj ∈ M ⊗ M tvns =  Jyj J xj  ∀t = or equivalently tvns = ρ(t). Note this is only a seminorm. If (M,τ ) is a tracial probability space, viewing M ⊂ B(L2 (τ )) via left multiplications

23.6 Complement

407

x → L(x) as usual, and denoting by y →  R(y) the right-hand side multiplications, this means that      R(yj∗ )L(xj ) . ∀t = yj ⊗ xj ∈ M ⊗ M ts =  However, we need the generality of the standard form to make sense of the following. Definition 23.41 For any C ∗ -algebra A and any t ∈ A ⊗ A we define ts = max{tA∗∗ ⊗vns A∗∗ ,tmin }.

(23.20)

If A = M is a von Neumann algebra, then for some central projection p ∈ M ∗∗ we have M  pM ∗∗ and M ∗∗ = pM ∗∗ ⊕ (1 − p)M ∗∗ . The analysis of the standard form of pM ∗∗ p done previously (and here p being central this is much simpler) shows that the restriction to M ⊗ M of the vns-seminorm of M ∗∗ ⊗ M ∗∗ coincides with the vns-seminorm of M ⊗ M. Thus when A = M we can replace M ∗∗ by M in (23.20). Of course s stands here for standard (or self-polar), and it would be natural to call  s the standard C ∗ -norm on A⊗A. Remark 23.42 With this notation, Theorem 23.40 can be reformulated as saying this: For any symmetric C ∗ -norm  α on M ⊗ M, if ts ≤ tα for any t ∈ (M ⊗ M)+ then ts ≤ tα for any t ∈ M ⊗ M. All this brings us to the following nice sounding refinement of Haagerup’s characterization. Theorem 23.43 A C ∗ -algebra A has the WEP if and only if A ⊗s A = A ⊗min A. Proof If ts = tmin for any t ∈ A ⊗s A, then a fortiori it holds for any t ∈ (A ⊗s A)+ and hence the min and max norms coincide on (A ⊗ A)+ by (23.14). The WEP follows by Theorem 23.7. Conversely, if A has the WEP the min and max norms coincide on (A ⊗ A)+ , and by Lemma 23.26 assuming A ⊂ B(H ) the inclusion A∗∗ ⊂ B(H )∗∗ is max+ -injective. By (23.14), this implies that the s norm on (A∗∗ ⊗ A∗∗ )+ is less than (actually equal to) the norm induced on it by B(H )∗∗ ⊗s B(H )∗∗ . The latter being a (symmetric) C ∗ -norm by Theorem 23.40 the same domination must be true on the whole of A∗∗ ⊗s A∗∗ . Therefore since A ⊗s A ⊂ A∗∗ ⊗s A∗∗ isometrically (by definition) and similarly for A = B(H ) we have isometrically A ⊗s A ⊂ B(H ) ⊗s B(H ). Lastly we observe that obviously (since the natural representation B(H ) → B(H ⊗2 H ) by left multiplication is in standard form) B(H ) ⊗s B(H ) = B(H ) ⊗min B(H ) and we conclude by injectivity of the min-norm that A ⊗s A = A ⊗min A.

408

Haagerup’s characterizations of the WEP

23.7 Notes and remarks This chapter contains “new” results in the sense that they have not been published yet. The characterization of the WEP in Corollary 23.5 was claimed by Haagerup in personal communication to Junge and Le Merdy while they completed their paper [137]. They do not have a written trace of the proof. Similarly the author, who had just written [203] and was at that time in close contact with Haagerup in connection with the latter’s related unpublished manuscript [107] does not remember being informed about the content of Corollary 23.5. Thus we are left guessing what his argument was, but the results of §23.2 seem very likely to be close to what Haagerup had in mind. Note that the question whether Corollary 23.5 holds is implicit in Haagerup’s previous fundamental (published) paper [104], where he proves Corollary 23.6 and then asks explicitly whether for a von Neumann algebra M the isometric identity D(3∞,M) = CB(3∞,M) implies its injectivity (we discussed this briefly in Remark 6.34). In other words he asks whether (i) in Corollary 23.5 with n = 3 suffices to imply the same for all n. This is still open, but it holds if M ⊂ B(H ) is cyclic and M  generated by a pair of unitaries (see Remark 23.4). As observed by Junge and Le Merdy in [137] it also holds if the equality D(3∞,M) = CB(3∞,M) is meant in the completely isometric sense (the proof uses the main idea of [204], or equivalently Theorem 9.8 in the present volume). The reduction in §23.1 is directly inspired from the reasoning from [107] but it involves only fairly standard ideas. The results in §23.5 are all included in Haagerup’s unpublished paper [107], but he does not use the terms max-injective and max+ -injective, which we introduce for convenience. Our presentation deliberately emphasizes the parallel between the two characterizations of the WEP in Theorems 23.2 and 23.34. We draw the reader’s attention to the analogy between the norm of D(n∞,A) as described in (6.32) and the norm in (22.13) (with Corollary 22.15). are derived very directly from the “column” norm  Both

1/2  . xj∗ xj An  (xj ) →  In §23.5 we use a new ingredient: self-polar forms. We prove some of the basic facts we need about them in §23.4. The main references in that direction are Connes’s [59] (his “th`ese de 3`eme cycle”) and Woronowicz’s work, in part joint with Pusz [219, 220, 263]. Their work generalizes Araki’s previous work in that direction. Theorem 23.19 and its corollary are due to them. Proposition 23.22 is an easy consequence of the Hahn–Banach theorem in the form described in §A.9. Corollary 23.36 was proved first for M = B(H ) by the author in [202] and independently by Christensen–Sinclair in [51]. The case when M was semifinite was obtained in [203] and one of Haagerup’s motivation for [107] was to prove the general case, which was also obtained

23.7 Notes and remarks

409

independently by Christensen and Sinclair [52]. The papers [118, 201] discuss the situation when there is merely a bounded projection P : B(H ) → M when M ⊂ B(H ). In the appendix of [16], Haagerup’s results from [107] (specifically Theorem 23.35) are used to prove the equivalence of several notions of co-amenability for inclusions of von Neumann algebras.

24 Full crossed products and failure of WEP for B ⊗min B

Our goal for this section is to present Ozawa’s result that B ⊗min B (or M ⊗min N for any pair (M,N ) of nonnuclear von Neumann algebras) fails the WEP. We follow Ozawa’s main idea in [188] to study full crossed products and exploit Selberg’s spectral bound (in place of (18.7)) but we broaden his viewpoint and we take advantage of the shortcut indicated in [209].

24.1 Full crossed products The definition of the maximal tensor product of two C ∗ -algebras B ⊗max C involves all pairs r = (σ,π ) of ∗-homomorphisms from B,C respectively into B(H ) with commuting ranges. So the fundamental relation imposed on r = (σ,π ) is σ (x)π(y) = σ (x)π(y) for all x,y. It is natural to wonder whether analogous properties hold if we impose different relations to the pair r = (σ,π ). When C = C ∗ (G) for some group G, and G acts on B, the crossed products that we define next are an illustration of this quest, for the set of relations (24.1). The latter are inspired from the ones that appear for semi-direct products of groups. Let B be a C ∗ -algebra and G a group. Any homomorphism g → αg of G into the group of ∗-automorphisms of B will be called simply an action of G on B. The triple (B,G,α) is usually called a C ∗ -dynamical system. By definition, a covariant representation of (B,G,α) on a Hilbert space H is a pair r = (σ,π ) where σ : B → B(H ) is a ∗-homomorphism and π : G → B(H ) a unitary representation such that ∀g ∈ G, ∀b ∈ B,

σ (αg b) = π(g)σ (b)π(g)−1 .

410

(24.1)

24.1 Full crossed products

411

Let Rα be the set of all such pairs. For r ∈ Rα , we denote by Hr the corresponding Hilbert space. Let B[G] denote the set of all finitely supported functions f : G → B. We define a linear mapping    B(Hr ) ⊂ B(⊕r∈Rα Hr ) α : B[G] → ⊕ r∈Rα

by setting: α (f ) =







g∈G

σ (f (g))π(g)

r=(σ,π )∈Rα

.

Then the image of α is a ∗-subalgebra of B(H) where H = ⊕r∈Rα Hr . Indeed, this follows from the relations imposed to the covariant representation r = (σ,π ). The closure of α (B[G]) in B(H) is a C ∗ -algebra called the full (or maximal) crossed product of B by G with respect to α. Equivalently, this is the completion of B[G] for the norm defined by f  = α (f ).

∀f ∈ B[G],

We denote it by B α G or simply by B  G when there is no ambiguity. Remark 24.1 (Full crossed products generalize the max-tensor product) If α acts trivially on B, i.e. αg (b) = b for all g ∈ G, b ∈ B, then it is easy to check that B α G can be identified with B ⊗max C ∗ (G). Indeed, covariant pairs boil down to pairs with commuting ranges. See Proposition 24.8 for a more general statement. It will be convenient to use the following

 Notation: f ∈ B[G] will be denoted as a formal sum g∈G f (g)g. Then we have         σ (f (g))π(g) f  =  f (g)g  = sup .  

g∈G

B(Hr )

r=(σ,π )∈Rα

Moreover, using α we define a ∗-algebra structure on B[G] by transplanting  that of α (B[G]) ⊂ B(H) where H = ⊕ r Hr 2 . This means that we define f ∗ and f1 f2 for f ,f1,f2 ∈ B[G] simply by the identities α (f ∗ ) = α (f )∗

and

α (f1 f2 ) = α (f1 )α (f2 ).

With this convention we have ∀g ∈ G, ∀b ∈ B, ∀g1,g2 ∈ G, ∀b1,b2 ∈ B,

(bg)∗ = αg −1 (b∗ )g −1 .

(b1 g1 )(b2 g2 ) = (b1 αg1 (b2 ))(g1 g2 ),

and in particular ∀b ∈ B,

gbg −1 = αg (b).

412

Full crossed products and failure of WEP for B ⊗min B

Remark 24.2 We have a canonical ∗-homomorphism B → B β G defined by b → beG and, assuming B unital, a group homomorphism from G to U (B β G) defined by g → 1B g. We saw previously that c.p. maps preserve the maximal tensor product (see Corollary 4.18). The next statement and its corollaries generalize this property to full crossed products. Indeed, by Remark 24.1 if β and α act trivially on B and L respectively then B β G = B ⊗max C ∗ (G) and L α G = L ⊗max C ∗ (G). Theorem 24.3 (Stinespring’s factorization for full crossed products) Let (B,G,β) be a C ∗ -dynamical system. Consider a pair (ϕ,π ) where π : G → B(H ) is a unitary representation and ϕ : B → B(H ) a c.p. map that is “covariant” in the sense that π(g)ϕ(b)π(g)−1 = ϕ(βg b)

(24.2)

for any b ∈ B,g ∈ G. , an isometry V : H → H , a homomorphism (i) There are a Hilbert space H ) and a ∗-homomorphism  ) such that the pair  π : G → B(H σ : B → B(H ( σ , π ) is covariant and we have ∀b ∈ B, ∀g ∈ G,

σ (b) π (g)V . ϕ(b)π(g) = V ∗

(24.3)

(ii) The mapping defined on B[G] by  ϕ (bg) = ϕ(b)π(g) extends to a c.p. map  ϕ : B  G → B(H ) with  ϕ  = ϕ. Moreover, if B and ϕ are unital,  ϕ is also unital. Proof (i) Assume B unital for simplicity. The proof is exactly the same as that of Stinespring’s Theorem 1.22: We equip B ⊗ H with the scalar product   defined for t ∈ B ⊗ H, t = bj ⊗ hj by t,t = i,j hi ,ϕ(bi∗ bj )hj . This , a unital ∗-homomorphism  ) and an leads to a Hilbert space H σ : B → B(H   defined by  isometry V : H → H σ (b)t = bbj ⊗ hj and (recall we assume B unital) Vh = 1 ⊗ h. Note V 2 = V ∗ V  ≤ ϕ. ) be the mapping defined by Let  π : G → B(H   π (g)(t) = βg bj ⊗ π(g)hj . . By our assumption (24.2), g →  π (g) defines a unitary representation on H Moreover, we have  π (g) σ (b) π (g)−1 (t) =  σ (βg (b))(t).

24.1 Full crossed products

413

Therefore, the pair ( σ , π ) is a covariant representation for (B,G,β). Moreover, we have (24.3). Indeed, for any h ∈ H , h, V ∗ σ (b) π (g)Vh = Vh,  σ (b) π (g)Vh = 1 ⊗ h, b ⊗ π(g)h = h, ϕ(b)π(g)h.  : BG → B(H ) be the ∗-homomorphism associated to the covariant (ii) Let  (·)V . Therefore  ϕ : B G → B(H ) is (unital pair ( σ , π ). We have  ϕ (·) = V ∗  2 and) c.p. with  ϕ  ≤ V  = ϕ. The nonunital case can be proved as we did for Corollary 4.18. Corollary 24.4 (Equivariant c.p. maps preserve full crossed products) Let B,L be C ∗ -algebras (resp. unital C ∗ -algebras) with actions of G denoted by g → βg and g → αg respectively on B and L. Let ϕ ∈ CP(B,L) (resp. such that ϕ(1) = 1). Assume that ϕ is equivariant in the sense that ∀g ∈ G, ∀b ∈ B,

ϕ(βg (b)) = αg (ϕ(b)).

(24.4)

Then the linear mapping  ϕ defined on B[G] by  ϕ (bg) = ϕ(b)g extends to a c.p. map (resp. unital) from B  G to L  G with norm  ϕ  = ϕ (resp.  ϕ  = 1). Proof Let (σ,π ) be a covariant representation of (L,G,α) on H , such that the linear map  : LG → B(H ) defined by (lg) = σ (l)π(g) is an embedding. It suffices to show that  ◦  ϕ is c.p. with norm ≤  ϕ . In other words, if we replace ϕ by ϕ  = σ ◦ ϕ, we are reduced to the case when L = B(H ) with ϕ  : B → B(H ) c.p. such that ϕ(βg b) = π(g)ϕ  (b)π(g)−1 , which is covered by Theorem 24.3. Corollary 24.5 (Equivariant conditional expectations) Let L ⊂ B be a unital C ∗ -subalgebra. Let g → βg be an action of G on B and g → αg an action of G on L. Assume that ∀g ∈ G,

βg|L = αg .

Assume moreover that there is a c.p. projection P : B → L such that P (βg b) = αg P (b),∀b ∈ B. Then the natural inclusion L ⊂ B defines an isometric ∗homomorphism L α G ⊂ B β G. Proof We have obviously a (contractive) unital ∗-homomorphism L α G → B β G. To show that it is isometric, let ϕ = P . Then Corollary 24.4 shows ϕ (f )LG ≤ f B×G for any f ∈ L[G]. that f LG = 

414

Full crossed products and failure of WEP for B ⊗min B

Proposition 24.6 ( and ⊗max commute) Assume B unital. Let C be another C ∗ -algebra. We use βˇg = βg ⊗ IdC as action of G on B ⊗max C. Then the natural map  : (B ⊗max C) βˇ G → (B β G) ⊗max C taking (b ⊗ c)g to (bg) ⊗ c is a ∗-isomorphism. Proof Consider an isometric embedding ρ : (B β G) ⊗max C → B(H ). Let ρ1 : B β G → B(H ) and ρ2 : C → B(H ) be the associated commuting pair as in (4.4) so that tmax = (ρ1 · ρ2 )(t)B(H ) for any t in (B β G) ⊗max C. Let ρ1 = ρ1|B , σ = ρ1 · ρ2 : B ⊗max C → B(H ) and π(g) = ρ1 (g). By Remark 24.2, σ is a ∗-homomorphism on B ⊗max C, and π a unitary representation of G on H with range commuting with that of ρ2 . For all b ⊗ c in B ⊗ C, we have π(g)σ (b ⊗ c)π(g)−1 = π(g)ρ1 (b)ρ2 (c)π(g)−1 = π(g)ρ1 (b)π(g)−1 ρ2 (c) = ρ1 (gbg−1 )ρ2 (c) = ρ1 (βg (b))ρ2 (c) = σ (βˇg (b ⊗ c)). Thus (σ,π ) is a covariant representation for (B ⊗max C, βˇg ,G). This implies that the linear mapping defined by (b ⊗ c)g −→ σ (b ⊗ c)π(g) = (ρ1 · ρ2 )(((b ⊗ c)g)) extends to a contractive ∗-homomorphism on (B ⊗max C) βˇ G. Therefore (χ )max ≤ χ  for any χ ∈ (B ⊗max C) βˇ G. By the maximality of the max-norm,  must be a ∗-isomorphism.

24.2 Full crossed products with inner actions Here we show that when the action is inner the full crossed product can be identified to a maximal tensor product. Definition 24.7 Let B be a unital C ∗ -algebra. Let β : G → Aut(B) be an action as before. We say that β is inner if there is a unitary representation ρ : G → U (B) (into the unitary group of B) such that ∀b ∈ B, ∀g ∈ G,

βg b = ρ(g)bρ(g)−1 .

(24.5)

Proposition 24.8 Let g → U (g) be the universal representation of G. If β is inner then the mapping  : bg → bρ(g) ⊗ U (g), where ρ is the representation in (24.5), extends to a ∗-isomorphism from B β G to B ⊗max C ∗ (G).

24.2 Full crossed products with inner actions

415

Proof The pair r = (σ,π ) defined by σ (b) = b ⊗ 1,

π(g) = ρ(g) ⊗ U (g)

is clearly a covariant representation for (B,G,β) on Hρ ⊗ HU . Therefore ∀f ∈ B[G]

(f )max ≤ f  .

By density (on both sides of the isomorphism) it clearly suffices to prove that (f )max = f  for any f ∈ B[G]. To check this it suffices to observe that the norm defined on B ⊗ C[G] by setting x =  −1 (x)  or more explicitly setting for any x = g∈G bg ⊗ U (g) with g → bg ∈ B finitely supported,         def      bg ⊗ U (g)  =  (bg ρ(g)−1 )g  bg ⊗ U (g) =  −1  ∀x ∈ B ⊗ C[G]



C ∗ -norm

C ∗ (G).

defines a on B ⊗ Indeed, one can check that if (σ,π ) is a covariant pair then the pair π1,π2 defined by π1 (b) = σ (b) and π2 (g) = σ (ρ(g −1 ))π(g) extends to a commuting pair of ∗-homomorphisms on B and C ∗ (G) respectively. This gives us  −1 (x) ≤ xmax and hence f  = (f )max for any f ∈ B[G]. We now state an application of the preceding results to a certain “tensorizing” property, in the spirit of §7.1. Corollary 24.9 Let (B1,G,β1 ) and (B2,G,β2 ) be two unital C ∗ -dynamical systems, and let ϕ ∈ CP(B1,B2 ) be equivariant in the sense that β2 ◦ϕ = ϕ◦β1 . Assume β1,β2 both inner with respect to homomorphisms ρ1 : G → B1 and ρ2 : G → B2 . Then: (i) The mapping Tϕ : B1 ⊗ C ∗ (G) → B2 ⊗ C ∗ (G) defined by Tϕ (b1 ⊗ U (g)) = ϕ(b1 ρ1 (g)−1 )ρ2 (g) ⊗ U (g) extends to a c.p. map Tϕ : B1 ⊗max C ∗ (G) → B2 ⊗max C ∗ (G) with norm Tϕ  ≤ ϕ. (ii) Let C be another unital C ∗ -algebra. Assume that IdC ⊗ ϕ extends to a (necessarily c.p.) mapping  : C ⊗min B1 → C ⊗max B2 with  ≤ 1. Then IdC ⊗ Tϕ extends to a bounded map ∗ ∗ Id C ⊗ Tϕ : (C ⊗min B1 ) ⊗max C (G) → (C ⊗max B2 ) ⊗max C (G)

with norm ≤ 1.

416

Full crossed products and failure of WEP for B ⊗min B

Proof (i) By Corollary 24.4 we have a c.p. map  ϕ : B1  G → B2  G with norm = ϕ. Composing with the ∗-isomorphisms ψj : Bj  G → Bj ⊗max C ∗ (G) (j = 1,2) given by Proposition 24.8, we find that Tϕ = ψ2  ϕ ψ1−1 is a c.p. map with norm = ϕ. ˇ the action of G on (ii) Given an action β on B, let us denote by β˙ (resp. β) C ⊗min B (resp. C ⊗max B) defined on the algebraic tensor product as IdC ⊗ β and then extended by density to C ⊗min B (resp. C ⊗max B). Note that β˙ and βˇ are both inner if β is inner. Thus we may apply the first part to the mapping . Since T = IdC ⊗ Tϕ we obtain the announced result. Corollary 24.10 Let (B,G,β) be a unital C ∗ -dynamical system that is inner with respect to a homomorphism ρ : G → B and let C be a C ∗ -algebra. Let P ∈ CP(B,B) be equivariant (in the sense that β ◦ P = P ◦ β), and such that IdC ⊗ P : C ⊗min B → C ⊗max B ≤ 1. Then the mapping θ defined on C ⊗ B ⊗ C ∗ (G) by θ (c ⊗ b ⊗ U (g)) = c ⊗ P (bρ(g)−1 )ρ(g) ⊗ U (g) extends to a map  θ satisfying  θ : (C ⊗min B) ⊗max C ∗ (G) → (C ⊗max B) ⊗max C ∗ (G) ≤ 1. Proof Apply the preceding Corollary with B1 = B2 = B, β1 = β2 = β and ϕ = P. Corollary 24.11 In the situation of the preceding Corollary let E = {x ∈ B | Px = x}. Let F = span[xρ(g) ⊗ U (g) | x ∈ E,g ∈ G] ⊂ B ⊗ C ∗ (G). Then the norms of (C ⊗min B) ⊗max C ∗ (G) and (C ⊗max B) ⊗max C ∗ (G) coincide on C ⊗ F . Proof The mapping  θ is the identity on C ⊗ F . Theorem 24.12 Let C be a unital C ∗ -algebra. In the situation of Corollary 24.5, assume: β is inner,

(24.6)

(L,C) is a nuclear pair.

(24.7)

(B ⊗min C,C ∗ (G)) nuclear ⇒ (L  G,C) nuclear.

(24.8)

Then

Proof We start by observing that the assumption in (24.8) implies a fortiori (B,C ∗ (G)) is a nuclear pair.

(24.9)

24.2 Full crossed products with inner actions

417

Indeed, this follows from Remark 7.21 (where we replace D by B and B by C ∗ (G)). Recall β˙ denotes the inner action of G on B ⊗min C, defined by β˙g = βg ⊗ IdC , and similarly for α˙ on L ⊗min C = L ⊗max C. We first claim that we have an isometric embedding (L α G) ⊗max C ⊂ (B ⊗min C) ⊗max C ∗ (G).

(24.10)

Indeed, by Proposition 24.6 (applied to L ⊗max C) we have (L α G) ⊗max C = (L ⊗max C) αˇ G, and by (24.7) and Corollary 24.5 (applied to P ⊗ IdC : B ⊗min C → L ⊗min C) we have an isometric embedding (L ⊗max C) αˇ G = (L ⊗min C) α˙ G ⊂ (B ⊗min C) β˙ G.

(24.11)

Furthermore, by Proposition 24.8, since β˙ is inner on B ⊗min C, we have (B ⊗min C) β˙ G = (B ⊗min C) ⊗max C ∗ (G). This proves the claim (24.10). We now turn to (L α G) ⊗min C. By Corollary 24.5 applied this time to P : B → L we have an isometric embedding LG→B G and hence the following map is also isometric (L α G) ⊗min C → (B β G) ⊗min C. By Proposition 24.8, since β is inner this produces an isometric embedding (L α G) ⊗min C → (B ⊗max C ∗ (G)) ⊗min C and using (24.9) (and permuting factors using (1.6) and (1.5)) we find an isometric embedding (L α G) ⊗min C ⊂ (B ⊗min C) ⊗min C ∗ (G).

(24.12)

Moreover, it is easy to trace back the identifications we made to check that (24.10) and (24.12) coincide on (L α G) ⊗ C. Thus the implication in (24.8) follows from (24.10) and (24.12). Corollary 24.13 In the situation of Corollary 24.5 with β inner, assume that C ∗ (G) and L have the LLP. If B ⊗min B has the WEP then L  G has the LLP. Proof We apply Theorem 24.12 with C = B. In that case (24.7) and (by our assumption on B ⊗min B) the left-hand side of (24.8) hold by the generalized Kirchberg Theorem 9.40.

418

Full crossed products and failure of WEP for B ⊗min B

24.3 B ⊗min B fails WEP Consider again the constant C(n) defined previously as the infimum of the C’s in (18.6). We need a modified version, as follows: C0 (n) is the infimum of the constants C ≤ n such that for each m ≥ 1, there is Nm ≥ 1 and an n-tuple [u1 (m), . . . ,un (m)] of unitary Nm × Nm matrices of permutation such that 4  5   n    uj (m) ⊗ uj (m ) ≤ C, (24.13) sup   m=m

j =1

|[χ ⊗χ  ]⊥ min

where χ and χ  are the constant unit vectors in CNm and CNm respectively, i.e. Nm Nm −1/2 −1/2 ek and χ  = Nm ek . χ = Nm 1

1

Equivalently, since ⊥



[χ ⊗ χ  ]⊥ = [χ ⊥ ⊗ χ  ] ⊕ [χ ⊥ ⊗ χ  ] ⊕ [χ ⊗ χ  ] and uj (m)(χ ) = χ for all j,m, (24.13) means that we have 4   n   u (m) ⊗ u (m ) ≤ C, sup  ⊥ ⊥  j j |χ |χ   m=m

together with

j =1

(24.14)

min

n  supm 

j =1

  uj (m)|χ ⊥  ≤ C.

(24.15)

Note that the matrix associated to uj (m)|χ ⊥ is a unitary matrix of size Nm − 1 (with respect to any orthonormal basis of χ ⊥ ). Thus if we neglect (24.15), which in our framework will be easy to verify, the definition of C0 (n) is the same as the previous one of C(n) in (18.6) but with the additional requirement that the unitary matrices must be obtained by restricting permutation matrices to the orthogonal of the constant vector. Moreover, since the latter have real entries, we could drop the complex conjugation sign, but to preserve the analogy with C(n), we prefer not to do that. In any case, we have C(n) ≤ C0 (n). Remark 24.14 In the case m = m the operators {uj (m) ⊗ uj (m) | 1 ≤ j ≤ n}, which obviously admit J = χ ⊗ χ as a common invariant vector, also  m admit the “identity” namely I = N 1 ek ⊗ ek , as another one. Actually, when the permutations giving rise to the unitaries (uj (m))1≤j ≤n generate SNm , all the common invariant vectors lie in span[J,I ]. This will be the case in most of the examples that follow. Thus in the quantum expander context it will be

24.3 B ⊗min B fails WEP

419

 m ⊥ natural to consider the restriction of N 1 uj (m) ⊗ uj (m) to span[J,I ] , or ⊥ equivalently {J,I } . Let us observe for later use the orthogonal decomposition {J,I }⊥ = [(χ ⊥ ⊗ χ ⊥ ) ∩ I ⊥ ] ⊕ [χ ⊥ ⊗ χ ] ⊕ [χ ⊗ χ ⊥ ], where the sum of the dimensions, which are respectively (Nm −1)2 −1, Nm −1 and Nm − 1, is equal to dim({J,I }⊥ ) = Nm2 − 2. Using the latter decomposition, we find  # $   N m   uj (m) ⊗ uj (m)   1  |{J,I }⊥   7 6# $   N  N  m m     = max  uj (m)χ ⊥ ⊗ uj (m)χ ⊥ uj (m)χ ⊥  . ,  1 1     ⊥ |I (24.16) The crucial ingredient to show that B(H ) ⊗min B(H ) fails the WEP is that C0 (n) < n for at least one n > 1. We will prove this later on with n = 3 using a fundamental result due to Selberg on the group SL2 (Z). This way seems to produce the most explicit example. However, here again one can use probabilistic methods (such as those of [93, 94]), or the fact that the family of all permutation groups forms an expander ([148]). We describe these alternative ways in §24.5. The general approach is similar to the one we used to prove Theorem 18.7. The main new ingredient is a very special property of permutation matrices with spectral gap (the following Lemma 24.18) that is somewhat implicit in Ozawa’s [188]. We need some specific notation. We denote by (n,N ) the set of n-tuples of N × N unitary permutation matrices u = (u1, . . . ,un ), such that u1 = I .  For any such u the operators uj all admit χN = N −1/2 N 1 ek as a common n invariant vector, and hence their sum 1 uj admits χN as eigenvector for the eigenvalue n. We denote by ε0 (u) the “spectral gap” of the latter operator beyond n. More precisely, the number ε0 (u) ∈ [0,n] is defined by  4   n 5  = n − ε0 (u).  u j  ⊥ 1 |χN  Remark 24.15 Let [u1 (m), . . . ,un (m)] be a sequence of n-tuples of permutation matrices satisfying (24.13). Let vj (m) = u1 (m)−1 uj (m) for all 1 ≤ j ≤ n. Then [v1 (m), . . . ,vn (m)] belongs to (n,Nm ) and the sequence [v1 (m), . . . ,vn (m)] still satisfies (24.13). Remark 24.16 Consider u ∈ (n,N ), u ∈ (n,N  ). We denote by u ⊗ u N the n-tuple (uj ⊗ uj ). If we identify N 2 and 2 (using the canonical basis) we may view (uj ) as an n-tuple of permutation matrices, and hence view u ⊗

420

Full crossed products and failure of WEP for B ⊗min B

u = (uj ⊗ uj ) as an n-tuple of matrices of size N × N  . It is easy to check that the latter are still permutation matrices, so that with our identification u ⊗ u ∈ (n,NN  ). Note also that if u0 and u0 are diagonal matrices with size respectively N and N  , then u0 ⊗ u0 is identified with a diagonal matrix of size NN  . Moreover χNN  can be identified with χN ⊗χN  (or χN ⊗χN  ). Therefore, the assumption appearing in (24.13) can be equivalently rewritten as infm=m ε0 (u(m) ⊗ u(m )) ≥ n − C. Remark 24.17 We will be interested in the consequences of ε0 (u) > 0 for an n-tuple u ∈ (n,N ). More precisely let us assume ε0 (u) ≥ ε0 for some fixed ε0 > 0, or equivalently  4   n 5  ≤ n − ε0 .  u j  ⊥ 1

|χN

n

Note that 1 uj admits both CχN and [CχN ]⊥ as invariant subspaces. From this it is easy to check that there is a function f1 : (0,1) → R+ (depending only on ε0 and n) with limε→0 f1 (ε) = 0 such that for any unit vector x ∈ N 2 n    uj (x) ≥ n − ε ⇒ inf x − zχN  ≤ f1 (ε). (24.17)  1

z∈C

Indeed, let θ = PχN ⊥ x. Then (1 − ε/n)2 ≤ n−1 (1 − ε0 /n)2 θ 2 which implies

n 1

uj (x)2 ≤ (1 − θ 2 ) +

θ 2 ≤ 2ε(2ε0 − ε02 /n)−1 ≤ 2ε/ε0, so that we can take f1 (ε) = (2ε/ε0 )1/2 . Let us now assume that the unit vector x ∈ N 2 has its coordinates in R+ . There is then a function f2 : (0,1) → R+ (depending only on ε0 and n) with limε→0 f2 (ε) = 0 such that n    uj (x) ≥ n − ε ⇒ x − χN  ≤ f1 (ε) + f2 (ε). (24.18)  1

Indeed, since x and χN are unit vectors and x has nonnegative coordinates, an elementary argument shows that the optimal z in (24.17) necessarily satisfies both |1 − |z|| ≤ f1 (ε) and d(z,R+ ) ≤ |z − χN ,x| ≤ f1 (ε), whence an estimate |1 − z| ≤ f2 (ε) for some function f2 with limε→0 f2 (ε) = 0. As already mentioned, the next key lemma is implicit in [188]. Lemma 24.18 Fix n ≥ 1 and ε0 > 0. (i) For each δ > 0 there is ε > 0 (depending only on δ, ε0 and n) such that the following holds:

24.3 B ⊗min B fails WEP

421

for any N ≥ 1, any u ∈ (n,N ) such that ε0 (u) ≥ ε0 satisfies the following property: for any Hilbert space H and for any n-tuple (Vj )1≤j ≤n of unitaries on H , if a unit vector ξ ∈ N 2 (H ) is such that n     (24.19) uj ⊗ Vj (ξ ) > n − ε  j =1

then |ξ | − χN N < δ, 2

(24.20)

−1/2 1 where χN is the (constant) unit vector of N [1,...,N ] , 2 defined by χN = N N and where |ξ | ∈ 2 is the unit vector defined by

∀i ∈ [1, . . . ,N]

|ξ |(i) = ξ(i)H .

(ii) There is ε0 > 0 (depending only on ε0 and n) such that for any N ≥ 1, any u ∈ (n,N) satisfying ε0 (u) ≥ ε0 satisfies the following property: for any H , for any Vj ∈ U (B(H )) (1 ≤ j ≤ n) with V1 = 1, and for any diagonal unitary operator D ∈ MN with zero trace, we have   n   uj ⊗ Vj  ≤ n + 1 − ε0 . (24.21) D ⊗ I + j =1

Proof (i) By (A.6), the assumption (24.19) implies that there is unit vector  ξ  ∈ N 2 (H ) such that supj (uj ⊗ Vj )(ξ ) − ξ  < f (ε) for some function f (ε) such that limε→0 f (ε) = 0. Let θj be the permutation represented by uj so that uj ei = eθj (i) . Note for any i ∈ [1, . . . ,N] [(uj ⊗ Vj )(ξ ) − ξ  ](i)H ≥ |Vj ξ(θj (i))H − ξ  (i)H | = |ξ(θj (i))H − ξ  (i)H | = |[uj (|ξ |) − |ξ  |](i)| and hence f (ε) > (uj ⊗ Vj )(ξ ) − ξ   ≥ uj (|ξ |) − |ξ  |,

(24.22)

and furthermore n  n      uj (|ξ |) ≥ nξ   −  uj (|ξ |) − |ξ  |  1

1



≥ nξ  − nf (ε) = n − nf (ε). By (24.18) we have |ξ | − χN  ≤ f3 (ε) for some function f3 (independent of N) with limε→0 f3 (ε) = 0. Thus if we adjust ε so that f3 (ε) < δ we obtain (24.20).

422

Full crossed products and failure of WEP for B ⊗min B

   (ii) Assume D ⊗ I + nj=1 uj ⊗ Vj  > n + 1 − ε. We will reach a contradiction when ε is small enough. By our assumption there is ξ as in (i) satisfying 4 5  n   (24.23) uj ⊗ Vj (ξ ) > n + 1 − ε.  D⊗I + j =1

Then (by the triangle inequality) (24.19) and hence (24.20) holds. By the uniform convexity of Hilbert space again, (24.23) implies that maxj [D⊗I ](ξ )− [uj ⊗ Vj ](ξ ) ≤ f2 (ε) for some function f2 such that limε→0 f2 (ε) = 0, and hence (using j = 1) that ξ −[D⊗I ](ξ ) ≤ f2 (ε). But ξ −[D⊗I ](ξ ) = |ξ − [D ⊗ I ](ξ )| and a moment of thought shows that |ξ − [D ⊗ I ](ξ )| = |I − D|(|ξ |). Therefore ξ − [D ⊗ I ](ξ ) = (I − D)(|ξ |). By (24.20) ξ − [D ⊗ I ](ξ ) ≥ (I −√D)(χN ) − 2δ, and since χN ⊥√D(χN ), this implies ξ − [D ⊗ I ](ξ ) ≥ 2 − 2δ. Thus we obtain f2 (ε) ≥ 2 − 2δ. This is the desired contradiction: when ε is small enough the number δ becomes arbitrarily small, so limε→0 f2 (ε) = 0 is impossible. We now state the crucial fact on which the proof rests, but postpone its proof to §24.4. Theorem 24.19 We have C0 (3) < 3. We recall the notation introduced in (18.8) and (18.9). Let {[u1 (m), . . . ,un (m)],m ∈ N} be a sequence of n-tuples of unitary matrices of size Nm × Nm . By Remark 24.15 we may always assume as we do in the sequel that u1 (m) = 1 for all m and we will work with n-tuples in (n,Nm ). For any subset ω ⊂ N, let    Bω = ⊕ MNm . m∈ω



In addition we denote by Lω ⊂ B ω the commutative (and hence nuclear) C ∗ -subalgebra formed of those x = (xm )m∈ω such that xm is a diagonal matrix for all m. Let N = ω(1) ∪ ω(2) be any disjoint partition of N into two infinite subsets, and let   uj (m) ∈ Bω(1) u2j = uj (m ) ∈ Bω(2) . u1j =  m∈ω(1)

m ∈ω(2)

(24.24) We now deduce Ozawa’s result by a shorter route (although based on similar ingredients as his).

24.3 B ⊗min B fails WEP

423

Theorem 24.20 Fix ε0 > 0. Let {u(m),m ∈ N} be a sequence of n-tuples of matrices with u(m) ∈ (n,Nm ) for each m satisfying infm=m ε0 (u(m) ⊗ u(m )) ≥ ε0 > 0.

(24.25)

For each m, we choose a diagonal unitary matrix u0 (m) with zero trace and size Nm × Nm . Suppose that {(uj (m))0≤j ≤n,m ∈ N} converges in distribution when m → ∞. Let (Uj )2≤j ≤n be the unitary generators of Fn−1 with the convention U0 = U1 = 1. Let N = ω(1) ∪ ω(2) be any disjoint partition of N into two infinite subsets. With the preceding notation (24.24), we set n u1j ⊗ u2j ⊗ Uj ∈ [Bω(1) ⊗min Bω(2) ] ⊗ C ∗ (Fn−1 ). (24.26) t= j =0

We have then tmin ≤ n + 1 − ε0

and

tmax = n + 1,

(24.27)

where ε0 > 0 is given by (24.21), and hence [Bω(1) ⊗min Bω(2) ] ⊗min C ∗ (Fn−1 ) = [Bω(1) ⊗min Bω(2) ] ⊗max C ∗ (Fn−1 ). Proof The estimate tmin ≤ n + 1 − ε0 follows from Remark 24.16 and Lemma 24.18. We now turn to tmax . As explained in the proof of Theorem 18.9 we have that  n   u1j ⊗ u2j  = n + 1. (24.28)  0

Bω(1) ⊗max Bω(2)

Let G = Fn−1 , and for convenience let g2,g3, . . . ,gn denote the n − 1 free generators, with the notational convention g0 = g1 = 1. We consider the unitary representation ρ2 : G → Bω(2) defined by ρ2 (gj ) = u2j for j = 2,3, . . . ,n (and of course ρ2 (gj ) = 1 for j = 0,1). Let β be the action of G on Bω(2) defined by βg (b) = ρ2 (g)bρ2 (g)−1 . Recall Lω(2) ⊂ Bω(2) is the diagonal subalgebra. Since the ρ2 (g)’s are permutation matrices, this action preserves diagonal matrices and hence its restriction to Lω(2) gives us an action (a priori not inner) on Lω(2) , but the diagonal projection P : Bω(2) → Lω(2) is clearly equivariant. Moreover, Lω(2) being commutative is nuclear (see Remark 4.10), and of course included in Bω(2) . Therefore, for any C ∗ -algebra C, IdC ⊗ P defines a c.p. contraction  : C ⊗min Bω(2) → C ⊗max Bω(2) , such that (c ⊗ b) = c ⊗ P (b). Thus we are in the situation to apply Corollary 24.11. Using C = Bω(1) and B = Bω(2) , we observe that, with the notation in Corollary 24.11, since P (1) = 1 and P (u20 ) = u20 , we have t ∈ C ⊗ F.

424

Full crossed products and failure of WEP for B ⊗min B

Therefore Corollary 24.11 tells us that t[Bω(1) ⊗max Bω(2) ]⊗max C ∗ (Fn−1 ) = t[Bω(1) ⊗min Bω(2) ]⊗max C ∗ (Fn−1 ) . Applying to t the ∗-homomorphism IdBω(1) ⊗max Bω(2) ⊗ T where T is the linear map associated to the trivial representation on G that takes all U (g)’s to 1, we find     u1j ⊗ u2j  ≤ tmax, n+1= Bω(1) ⊗max Bω(2)

and the proof is complete. Remark 24.21 It is probably worthwhile for the reader to emphasize that n 1 2 we cannot replace t by t1 = j =1 uj ⊗ uj ⊗ Uj , because we have t1 [Bω(1) ⊗min Bω(2) ]⊗min C ∗ (Fn−1 ) = n. Indeed, pick m ∈ ω(1),m ∈ ω(2) and let vj = u1j (m) ⊗ u2j (m ). Using the map taking Uj to vj , we have  n   t1 [Bω(1) ⊗min Bω(2) ]⊗min C ∗ (Fn−1 ) ≥  vj ⊗ vj  j =1

min

 n  and, j =1 vj ⊗  since the vj’s are unitary matrices, we have by (18.5) vj min = n. Note that if we try to apply that same argument to t, recalling U0 = U1 = 1, we are led to write   n   vj ⊗ vj  tmin ≥ (v0 + v1 ) ⊗ 1 + j =2

min

and since tr(v0 + v1 ) = 0, this leads to tmin ≥ n − 1, but what really matters is that, as Theorem 24.20 shows us, tmin < n + 1. Corollary 24.22 B ⊗min B fails the WEP. Proof We base the proof directly on the information that C0 (n) < n. By Selberg’s bound this holds at least for n = 3. We use similar ingredients as for the proof that B ⊗min B = B ⊗max B given previously in Theorem 18.9. By Remarks 24.15 and 24.16 we may assume that we have a sequence u(m) with u(m) ∈ (n,Nm ) such that ε0 (u(m) ⊗ u(m )) ≥ ε0 > 0. For each m, we choose a diagonal unitary matrix u0 (m) with zero trace. The preceding theorem, then shows that the pair (Bω(1) ⊗min Bω(2),C ∗ (F2 )) is not nuclear, so Bω(1) ⊗min Bω(2) is not WEP. As in Corollary 18.12, it follows that B ⊗min B fails the WEP. More generally, using Theorem 12.29 we have Corollary 24.23 If (M,N ) are nonnuclear von Neumann algebras, M ⊗min N fails the WEP.

24.3 B ⊗min B fails WEP

425

Proof By Theorem 12.29 and the injectivity of B we have a completely positive factorization of the identity of B ⊗min B through M ⊗min N . Although we opted for tackling directly the failure of WEP for Bω(1) ⊗min Bω(2) , we could have first proved the following (in accordance with Theorem 24.12): Corollary 24.24 In the situation of Theorem 24.20, the space Lω(2)  Fn−1 fails the LLP. Proof We continue to use the same notation. In particular, β is the action of G on Bω(2) , the crossed products are with respect to β and with respect to the natural extensions of β to C ⊗ Bω(2) with C = Bω(1) . Recall the convention g0 = g1 = 1. Note in passing that g2, . . . ,gn and Lω(2) together generate Lω(2)  Fn−1 . We define sj ∈ Lω(2) and sj ∈ Lω(2)  Fn−1 by setting: s0 = u20 ∈ Lω(2),

sj = sj .gj ∈ Lω(2)  Fn−1 for 0 ≤ j ≤ n.

and

Consider the tensors n s =

j =0

and s=

sj = 1 ∈ Lω(2) for 1 ≤ j ≤ n,

n j =0

u1j ⊗ sj ∈ Bω(1) ⊗ [Lω(2)  Fn−1 ],

[u1j ⊗ sj ].gj ∈ [Bω(1) ⊗max Lω(2) ]  Fn−1 .

We claim that s  max = tmax = n + 1 and s  min = tmin ≤ n + 1 − ε0 . By Proposition 24.6 we know that s  max = s[Bω(1) ⊗max Lω(2) ]Fn−1 . By the nuclearity of Lω(2) s  max = s[Bω(1) ⊗min Lω(2) ]Fn−1 , since Lω(2) ⊂ Bω(2) (equivariantly) s  max ≥ s[Bω(1) ⊗min Bω(2) ]Fn−1 . Recall that presently C = Bω(1) and G = Fn−1 . Since the action of G on C ⊗min Bω(2) is inner, the map  associated to (C ⊗min Bω(2) )  G as in Proposition 24.8 is such that (s) = t (because here ρ = ρ2 and sj ρ2 (gj ) = u2j for all 0 ≤ j ≤ n), where t is as in (24.26). Therefore s  max ≥ tmax = n + 1.

426

Full crossed products and failure of WEP for B ⊗min B

We now turn to s  min . By Corollary 24.5 the conditional expectation implies [Lω(2)  Fn−1 ] ⊂ [Bω(2)  Fn−1 ], by Proposition 24.8 (i.e. by innerness) [Bω(2)  Fn−1 ]  Bω(2) ⊗max C ∗ (Fn−1 ) and by Kirchberg’s Theorem (see Corollary 9.40) Bω(2) ⊗max C ∗ (Fn−1 ) = Bω(2) ⊗min C ∗ (Fn−1 ). This shows we have s  Bω(1) ⊗min [Lω(2) Fn−1 ] = t[Bω(1) ⊗min Bω(2) ]⊗min C ∗ (Fn−1 ) . By (24.27) this proves our claim and hence that Lω(2) Fn−1 fails the LLP. We now show Corollary 24.25 In the situation of Theorem 24.20, the pair (Lω(1)  Fn−1,Lω(2)  Fn−1 ) is not nuclear. Proof By the conditional expectation argument (see Corollary 24.5) we know that Lω(2)  Fn−1 ⊂ Bω(2)  Fn−1 and Lω(1)  Fn−1 ⊂ Bω(1)  Fn−1, with conditional expectations onto each of them. Therefore both the min and max norms on [Lω(1)  Fn−1 ] ⊗ [Lω(2)  Fn−1 ] are equal to the norms induced respectively by the min and max norms on [Bω(1)  Fn−1 ] ⊗ [Bω(2)  Fn−1 ]. By Proposition 24.8 (applied twice) these can be identified with the min and max norms on [Bω(1) ⊗max C ∗ (Fn−1 )] ⊗ [Bω(2) ⊗max C ∗ (Fn−1 )]. By Kirchberg’s theorem (see Corollary 9.40) these are the same as the min and max norms on [Bω(1) ⊗min C ∗ (Fn−1 )] ⊗ [Bω(2) ⊗min C ∗ (Fn−1 )]. Let us denote by sj (2) ∈ Lω(2) the element previously denoted by sj , and by sj (1) ∈ Lω(1) the analogous element. Consider the element n z= (sj (1).gj ) ⊗ (sj (2).gj ) ∈ [Lω(1)  Fn−1 ] ⊗ [Lω(2)  Fn−1 ]. j =0

24.4 Proof that C0 (3) < 3 (Selberg’s spectral bound)

427

The element z corresponding to z in [Bω(1) ⊗min C ∗ (Fn−1 )] ⊗ [Bω(2) ⊗min C ∗ (Fn−1 )] is n z = [sj (1)ρ1 (gj ) ⊗ U (gj )] ⊗ [sj (2)ρ2 (gj ) ⊗ U (gj )], j =0

where ρ1 is like ρ2 (defined in the proof of Theorem 24.20) but relative to ω(1). We have after permuting factors n  sj (1)ρ1 (gj ) ⊗ sj (2)ρ2 (gj ) z min =  j =0  ⊗ U (gj ) ⊗ U (gj ) ∗ ∗ Bω(1) ⊗min Bω(2) ⊗min C (Fn−1 )⊗min C (Fn−1 )

and since U ⊗ U  U , the latter is the same as  n   sj (1)ρ1 (gj ) ⊗ sj (2)ρ2 (gj ) ⊗ U (gj )  j =0

Bω(1) ⊗min Bω(2) ⊗min C ∗ (Fn−1 )

.

Note that for all 0 ≤ j ≤ n sj (1)ρ1 (gj ) = u1j and sj (2)ρ2 (gj ) = u2j . Thus we obtain by (24.27) z min = tmin ≤ n + 1 − ε0 . We now turn to z max = z Bω(1) ⊗max C ∗ (Fn−1 )⊗max Bω(2) ⊗max C ∗ (Fn−1 ) . Composing with the trivial representation on Fn−1 we find  n    z max ≥  sj (1)ρ1 (gj ) ⊗ sj (2)ρ2 (gj ) j =0

Bω(1) ⊗max Bω(2)

,

and the latter is = n + 1 by (24.28). This proves that z max = z min and hence that the pair (Lω(1)  Fn−1,Lω(2)  Fn−1 ) is not nuclear.

24.4 Proof that C0 (3) < 3 (Selberg’s spectral bound) Our goal in this section is to describe how Theorem 24.19 follows from a famous result due to Selberg [228] that we will now describe without proof. Lubotzky, Philip, and Sarnak [175] first observed that Selberg’s results on the spectrum of the Laplacian on the hyperbolic space imply that the finite groups {SL2 (Z/pZ)} form an expander family. We refer the reader to [172, pp. 51– 54] and [243] for more information. Tao’s book [243] contains a detailed proof on the deduction of the spectral gaps for {SL2 (Z/pZ)} from the ones for the Laplacian on a certain family of “arithmetic Riemann surfaces” traditionally denoted by {X(p)}(see [243, pp. 75–76]).

428

Full crossed products and failure of WEP for B ⊗min B

Selberg’s Theorem [228] says that the trivial representation of SL2 (Z) is isolated in the set of representations that factor through SL2 (Z/pZ) for some integer p ≥ 2. Note that the kernel of the natural quotient map SL2 (Z) → SL2 (Z/pZ) is the set    1+a b a,b,c,d ∈ pZ . c 1+d Thus a representation factors through SL2 (Z/pZ) if and only if it is trivial (i.e. =1) on the latter subset. It is well known that SL2 (Z) (the “modular group”) is generated by {t2,t3 } where     1 1 0 1 t2 = and t3 = (see [224, p. 9] or [65, p. 94]). 0 1 −1 0 For convenience we set t1 = 1. Thus the (unital) subset S = {t1,t2,t3 } ⊂ SL2 (Z) generates SL2 (Z). Equivalently the Selberg property for SL2 (Z) means that for some ε0 > 0 we have   3   ρ(tj ) (24.29) supρ   ≤ 3 − ε0, j =1 where the sup runs over all the unitary representations ρ that factor through SL2 (Z/pZ) for some integer p ≥ 2 and do not admit any invariant vector. We will content ourselves with representations of a special form. We assume that ρ is associated to an action of SL2 (Z) by permutation on a finite index set ρ , so that Hρ = 2 (ρ ), we assume moreover that ρ factors through SL2 (Z/pZ) for some p and that the constant function 1 on ρ is its only invariant vector (up to scaling) in Hρ = 2 (ρ ). We denote by the set of such ρ’s. Let p > 1 be a prime number. Consider the action of SL2 (Z) on Z2p , viewed as a vector space over the field Zp = Z/pZ. Let p denote the set of lines in Z2p . In other words p is the set of one-dimensional subspaces (i.e. the projective space) in the vector space Z2p . The group SL2 (Z) acts (via SL2 (Zp )) by permutation on p . This induces a representation ρp of SL2 (Z) on Hp = 2 (p ). Thus we have in this case ρp = p . For simplicity we also denote by χ p ∈ 2 (p ) the constant vector with all coordinates equal to |p |−1/2 . Lemma 24.26 For any pair p,q of distinct prime numbers ρp ⊗ ρq ∈ to unitary equivalence).

(up

Proof Recall Hp = 2 (p ). Using the canonical basis of Hp we may identify Hp and Hp . Then Hp ⊗ Hq  2 (p × q ) and ρp ⊗ ρq acts by permutation on p × q . Note that any two distinct elements of p produce a linear basis

24.5 Other proofs that C0 (n) < n

429

of Z2p . Therefore, by classical linear algebra over the field Zp , the action of SL2 (Zp ) (by permutation) on p is bitransitive, and hence by Lemma A.68 the representation ρp0 = ρp |χ p ⊥ is irreducible. Since |p | = p + 1 (see Remark 24.29) |p | = |q | whenever p = q. This implies that ρp0 and ρq0 are distinct irreducible representations. Let T denote the trivial representation. By Schur’s classical Lemma A.69, ρp0 ⊗ ρq0 , as well as ρp0 ⊗ T and T ⊗ ρq0 , have no invariant vector. Therefore, the only invariant vector of ρp ⊗ ρq is χ p ⊗ χ q . Since ρp ⊗ ρq is trivial on matrices with entries divisible by both p and q, it factors through SL2 (Z/Zpq ) and hence belongs to . Proof of Theorem 24.19 By (24.29) and the preceding lemma we have   3   (ρ ⊗ ρ )(t ) supp=q  q j |[χ p ⊗χ q ]⊥  ≤ 3 − ε0 .  j =1 p Thus if we set ∀j = 1,2,3

uj (p) = ρp (tj )

we have ε0 (u(p) ⊗ u(q)) ≥ ε0, and hence C0 (3) ≤ 3 − ε0 . Remark 24.27 By (i) in Lemma A.69, (24.29) also implies:    3  0 0  (ρp ⊗ ρp )(tj )|I ⊥  supp   ≤ 3 − ε0 . j =1

Thus the family {(ρp0 (tj ))1≤j ≤3 | p prime} is a quantum expander.

24.5 Other proofs that C0 (n) < n In the next remarks we give several alternative ways to prove that C0 (n) < n. Remark 24.28 (Using property (T) for SLd (Z) for d ≥ 3) This is similar to what we just did with SL2 (Zp ). We use the fact that SLd (Z) has property (T) for d ≥ 3 (and only for d ≥ 3). Let S ⊂ SLd (Z) be a finite generating set containing the unit. By Proposition 17.3 there is ε > 0 such that any unitary representation π on SLd (Z) without nonzero invariant vector satisfies     (24.30) |S|−1  π(s) ≤ 1 − ε. s∈S

For any prime p we let SLd (Zp ) act on the set of lines (i.e. one dimensional linear subspaces) in Zdp . By classical linear algebra over the field Zp , this action

430

Full crossed products and failure of WEP for B ⊗min B

is bitransitive, and hence yields by Lemma A.68 (after composition with the surjection SLd (Z) → SLd (Zp )) an irreducible representation πp0 defined on SLd (Z). Remark 24.29 (How many lines in Zdp ?) The set of lines in Zdp has cardinality (pd − 1)/(p − 1). Indeed each point in Zdp \ {0} determines a line, each line contains p − 1 nonzero points, and two distinct lines intersect only at 0. Arguing as for Lemma 24.26 (comparing the respective dimensions) we find that πp0 = πq0 for any pair of distinct primes p = q. By Schur’s lemma (see (ii) in Lemma A.69) πp0 ⊗ πq0 has no invariant vector if p = q and hence by (24.30)     πp0 (s) ⊗ πq0 (s) ≤ 1 − ε. supp=q |S|−1  s∈S

Let n = |S|. Arguing as in the preceding proof of Theorem 24.19 we obtain C0 (n) ≤ n − εn. Moreover, we also have (see (i) in Lemma A.69) 4 5     0 (s) ⊗ π 0 (s)  ≤ 1 − ε, π supp |S|−1  p p  s∈S |I ⊥  which shows that the family {(πp0 (s))s∈S | p prime} forms a quantum expander. Since it is known (see [250] or [262, prop. 5]) that SLd (Z) can be generated just by two elements and the unit, we may take n = 3 in what precedes, so we obtain again C0 (3) < 3. Remark 24.30 (Using Kassabov’s expander) In answer to a longstanding question, Kassabov proved in [148] that with a suitable choice of generators of a fixed size n the sequence of the permutation groups forms an expander. From this one can deduce easily that there are quantum expanders formed of permutation matrices restricted to the orthogonal of the constant vector as in (24.14) and (24.15), and hence that C0 (n) < n for that same n. I am grateful to Aram Harrow for pointing this out to me. More precisely, let Sm denote the symmetric group of all permutations of an m element set. Kassabov [148] proved that the family {Sm | m ≥ 1} forms an expanding family with respect to subsets Sm ⊂ Sm of a fixed size n and a fixed spectral gap δ > 0. The construction detailed in [148] leads to a rather large value of n, but in [148, Rem. 5.1] it is asserted that, assuming m large enough, one can obtain n = 20 at the expense of a smaller gap δ > 0, and also that one can obtain generating sets formed of involutions. To produce quantum expanders coming from permutation matrices, we invoke Lemma A.68: the natural representation πm : Sm → B(m 2 ) that acts by permuting the basis vectors (i.e. π (σ )(e ) = e ) decomposes as on m m j σ (j ) 2

24.6 Random permutations

431

the sum of the trivial representation on Cχm and an irreducible representation πm0 that is the restriction of πm to χm⊥ . By Remark 19.10 the sequence {(πm0 (s))s∈Sm | m ≥ 1} forms a quantum expander, relative to the dimensions Nm = m − 1. By Proposition 19.8, we can extract from it a quantum coding sequence. This implies that C0 (n) < n.

24.6 Random permutations Let us assume that n ≥ 4 is an even integer. We choose an n-tuple of random ) (N ) permutation matrices in the following way: we simply select u(N 1 , . . . ,un/2 independently and uniformly over the group of permutation matrices of size (N ) (N ) N × N. We then define uj +n/2 = (uj )−1 for any 1 ≤ j ≤ n/2. A priori this allows some repetitions, but when N is much larger than n we obtain with (very) high probability an n-tuple of distinct permutation matrices. Indeed, the (N ) (N ) probability that ui = uj for some i = j in [1,n/2] is less than (n/2)2 /N !. Let (N )

Vj

(N )

= uj

|χ ⊥

,

 (N ) where again χ = N −1/2 N 1 ek . We view (Vj ) as random (N −1)×(N −1) unitary matrices (with respect to any orthonormal basis of χN⊥ ). In [93], Joel Friedman proved that for any fixed ε > 0 when N → ∞ we have   n √   Vj(N )  > 2 n − 1 + ε → 0, (24.31) P  1

which had been conjectured by Noga Alon. Using a quite different approach, Bordenave and Collins recently proved an analogue of Theorem 18.16 for the same model of independent random permutation matrices (restricted to χ ⊥ ), which leads to an optimal estimate of C0 (n). They prove the following result: Theorem 24.31 ([32]) Fix an even integer n ≥ 4. Let g1, . . . ,gn/2 be the free generators of the free group Fn/2 . We set λj = λFn (gj ) and λj +n/2 = λFn (gj−1 ) for all 1 ≤ j ≤ n/2. Then, for all k and for all a0, . . . ,an in Mk such that a0 = a0∗ and aj +n/2 = aj∗ for all 1 ≤ j ≤ n/2 we have   n  (N )  aj ⊗ Vj  ∀ε > 0 limN →∞ P a0 ⊗ I + 1    n   > a0 ⊗ I + aj ⊗ λj  + ε = 0. 1 √ Corollary 24.32 C0 (n) = 2 n − 1 for all even n ≥ 4.

432

Full crossed products and failure of WEP for B ⊗min B

Proof We will use the preceding result assuming that the aj’s are all unitary matrices with a0 = 0. In that case by √ the absorption principle (3.13) and n   by (3.18) we have 1 aj ⊗ λj = 2 n − 1. Then, we may proceed √ exactly as we did to prove Theorem 18.6 in §18.2 to prove that C0 (n) ≤ 2 n − 1 for all even n ≥ 4. Equality holds by (18.14). Bordenave and Collins [32] also prove a result that yields quantum expanders derived from permutation matrices. They prove that Theorem 24.31 (N )

(N )

(N )

(N )

by uj ⊗ uj where J,I are still holds if we replace Vj = uj |χ ⊥ |{J,I }⊥ as in (24.16). By (24.16), we may deduce from the latter result the following consequences which refine the analogous result of Hastings [132] for random unitaries. Theorem 24.33 ([32]) In the situation of Theorem 24.31, we have 6# 7 $   n  √   (N ) (N ) V ⊗ Vj = 0, ∀ε > 0 limN →∞ P  >2 n−1+ε 1 j  |I ⊥  where I stands here for the tensor associated to the identity on the N − 1dimensional space χ ⊥ . √ Corollary 24.34 For any δ > 0 and ε > 0 such that 2 n − 1 + ε < n, there is a sequence Nm → ∞ such that, with probability > 1 − δ, the family (N ) (Vj m )1≤j ≤n forms a quantum expander such that # $   n  √   (N ) (N ) supm  Vj m ⊗ Vj m  ≤ 2 n − 1 + ε. 1   |I ⊥ Proof By Theorem 24.33 we can choose a sequence Nm → ∞ such that 6# 7 $   n   √   (Nm ) (Nm ) P V ⊗ Vj < δ.  >2 n−1+ε 1 j m  |I ⊥   n (Nm )  "

! √ (N )  ⊗ Vj m |I ⊥  ≤ 2 n − 1 + ε > 1 − δ. Then P supm  1 Vj Remark 24.35 The preceding theorem improves an earlier result from [94] (see also [131]). The same paper [94] also contains bounds for the norm of sums of  (N ) (N ) the form Vj ⊗ · · · ⊗ Vj of degree r with r > 2.

24.7 Notes and remarks This chapter is based on [188]. Crossed products (like tensor products) are more often considered in the literature in the reduced case than in the “full” one as we do here. The results stated in §24.1 are rather standard facts. Those of

24.7 Notes and remarks

433

§24.2 are easy variants of the approach Ozawa uses in [188] to relate the WEP of B ⊗min B with the LLP of a certain (full) crossed product. The presentation we adopt in §24.3 emphasizes the parallel between Ozawa’s proof in [188] that B ⊗min B fails WEP and that in [141] that B ⊗min B = B ⊗max B. In [209] we described a shortcut to prove the results from [141] and [188]. We effectively use this in our proof of Ozawa’s Theorem 24.20. This approach avoids passing through the nonseparability of the metric space (equipped with dcb ) of n-dimensional operator spaces (see §20.3 for more on this) and allows us to put forward the constant C0 (n) defined in (24.13).

25 Open problems

Besides the main problem discussed in Chapters 12 and 13, many related questions remain open, and at least some of them are probably more accessible. 1. If a C ∗ -algebra has both WEP and LLP, is it nuclear? Of course a positive answer will solve negatively the Connes–Kirchberg problem. So perhaps we should rephrase this as: Is there a nonnuclear C ∗ algebra with both WEP and LLP? [Added in proof: while this book was at the printing stage, the author constructed such an example]. Is there any discrete group G for which C ∗ (G) is such an example? Of course, A has both WEP and LLP if and only if the pair (A,B ⊕ C ) is nuclear. 2. If the pair (A,Q) is nuclear where Q is the Calkin algebra, does it follow that A is nuclear? It should be true if the Kirchberg conjecture is correct. See Remark 10.11. There are many open questions involving discrete groups. It is natural to declare that a group G is WEP (resp. LLP) if C ∗ (G) has the WEP (resp. LLP). (As for the WEP of Cλ∗ (G), it is equivalent to amenability by Corollary 9.29). By Proposition 3.5 and Remark 7.20 both properties pass to subgroups. Clearly, amenable groups have both properties, since C ∗ (G) is then nuclear. Of course, groups with WEP are very poorly understood, since we do not even know whether free groups are WEP. 3. Is there any nonamenable WEP group? Curiously, however, although it should be much easier since free groups are clearly LLP, there are very few known examples of nonamenable groups with LLP, besides free groups. In fact, until A. Thom’s paper [244] no explicit example was known. For instance, the following very interesting question is still open:

434

Open problems

435

4. Is the product of two free groups (say F2 × F2 ) LLP? This boils down, of course (by (4.11)), to the question whether C ∗ (F2 )⊗max C ∗ (F2 ) has the LLP. Note that by (4.13) and Theorem 9.44 the LLP for groups is stable by free products. The example of Thom [244] is a group with Kazhdan’s property (T). We discuss this in more detail in §17. Thom’s example is approximately linear (i.e. “hyperlinear”), with property (T) but not residually finite. It follows that for that group G, C ∗ (G) fails the LLP. Recall that the Kirchberg conjecture is equivalent to the assertion that every C ∗ -algebra is QWEP, but there is a candidate for a counterexample: 5. It seems to be open whether Cλ∗ (G) is QWEP when G = SL3 (Z). In sharp contrast, MG is QWEP since SL3 (Z) is RF (see Proposition 12.18). In connection with this, there is no example of discrete group G for which the inclusion Cλ∗ (G) → MG is not max-injective in the sense defined in Definition 7.18. This is true when G is a free group (see [39, p. 384]). In fact it follows from Remarks 7.32 and 10.22 (with Corollary 23.36) that if G is weakly amenable then Cλ∗ (G) → MG is max-injective. If Cλ∗ (G) → MG is max-injective, then MG QWEP implies Cλ∗ (G) QWEP by Corollary 9.65. Concerning exactness, a discrete group G is called exact if Cλ∗ (G) is exact. Ozawa’s results [187] show that a certain group G called Gromov’s monster, and which, as the name indicates, is extremely hard to construct, is not exact. See [14, 96, 182]. Until recently this was the only known example of nonexact group. However, a remarkable example of nonexact residually finite group was constructed more recently by Osajda in [183]. More examples, and simpler ones, would be most welcome. Analogously, the exactness of the full C ∗ -algebra C ∗ (G) is not better understood, as shown by the following longstanding open problem: 6. Does the exactness of C ∗ (G) imply the amenability of G? Recall (see Theorem 3.30) that G is amenable if and only if either C ∗ (G) or Cλ∗ (G) is nuclear. Thus, if G is amenable C ∗ (G) and Cλ∗ (G) are exact (and actually identical). The WEP for Cλ∗ (G) is better understood, since as we showed in Corollary 9.29, Cλ∗ (G) has the WEP if and only if G is amenable. Analogously, if we assume that Cλ∗ (G) is QWEP or G approximately linear (in other words hyperlinear) -assumptions for which no counterexample is known-then Cλ∗ (G) has the LLP if and only if G is amenable. Indeed, by Corollary 9.41 if a QWEP C ∗ -algebra has the LLP then it has the WEP. Can this be proved without the a priori assumption that Cλ∗ (G) is QWEP? Equivalently:

436

Open problems

7. Does LLP ⇒ WEP hold for Cλ∗ (G)? We showed in §24.3 that B ⊗min B fails WEP, and we know that B ⊗max B=  B ⊗min B (see Corollary 18.12) but the following seems to be open: 8. Show that B ⊗max B fails WEP. Concerning the question whether it is QWEP see Remark 13.2. By Theorems 23.7 and 23.34, and similar results in §23.5, we know that a C ∗ -algebra A ⊂ B(H ) has the WEP if a certain kind of operators from A to a Hilbert space H (namely those associated to a positive definite state on A ⊗max A) admit an extension of the same kind from B(H ) to H. This line of thought naturally leads us to the following questions. 9. Let A ⊂ B(H ) be a unital C ∗ -subalgebra. Assume that any bounded linear u : B(H ) → 2 with  u = u. Does it u : A → 2 admits an extension  follow that A has the WEP? 10. More generally, let A ⊂ B be an inclusion of C ∗ -algebras. Assume that u : B → 2 with  u = u. Is there a any u : A → 2 admits an extension  ∗∗ ∗∗ contractive projection P : B → A ? Here the assumption that the norm is preserved is essential. Indeed, in the setting of questions 9 and 10, by the author’s version of the noncommutative Grothendieck theorem, there is always an extension  u with  u ≤ Cu where C is a universal constant. See [210] for more information and references in this direction. See also our memoir [205, ch. 5] for a general discussion of maps such as u : A → 2 . Concerning injectivity of von Neumann algebras and possible generalizations of Tomiyama’s Theorem 1.45 the following is open: 11. Does the existence of a bounded linear projection from a von Neumann algebra M to a (von Neumann) subalgebra M ⊂ M imply the existence of a contractive one? In particular, if G is a nonamenable group, although it seems likely to be true, there is no known proof of the absence of bounded projections from B(2 (G)) onto MG . However, if G contains F2 this was proved by Haagerup and the author in [118]. See also [53]. For a von Neumann subalgebra M ⊂ B(H ) let λ(M) = inf P  where the infimum runs over all linear projections P : B(H ) → M onto M. Note that λ(M) remains the same for any completely isometric embedding of M in B(H ). It is proved in [201] that λ(M⊗N) ≥ λ(M)λ(N) for any pair M,N of von Neumann algebras. Let A/I be the quotient of a C ∗ -algebra by a (self-adjoint closed) ideal I as usual. We saw that if X is a separable operator space and A/I is nuclear

Open problems

437

then any complete contraction u : X → A/I admits a completely contractive lifting  u : X → A (see Corollary 9.49). Using the theory of M-ideals, Ando and Choi–Effros independently proved (see [125, p. 59]) that if X is separable and A/I has the bounded approximation property then any bounded u : X → A/I admits a bounded lifting  u : X → A (note that for Banach spaces local reflexivity and hence local liftings are given “for free”). In particular when A/I is separable the identity of A/I admits a bounded lifting. Combined with Haagerup’s subsequent result [104] that Cλ∗ (F∞ ) has the metric approximation property, this shows that, in the case A = C ∗ (F∞ ) and A/I = Cλ∗ (F∞ ), the natural quotient map C ∗ (F∞ ) → Cλ∗ (F∞ ) has a contractive lifting, and hence there is a bounded projection P : C ∗ (F∞ ) → I, but no completely bounded one, since, by Proposition 7.34, there is no completely bounded lifting. It is a longstanding open question whether the preceding Ando–Choi–Effros theorem holds without any assumption on A/I, more precisely: 12. The following basic questions are open: (i) If X is a separable Banach space is it true that any bounded u : X → A/I admits a bounded lifting  u : X → A? (ii) Is there always a bounded linear projection P : A → I when A/I is separable? (iii) Is there always a bounded linear projection P : A → I when A is separable? These are three equivalent forms of the same question. Indeed, taking X = A/I, the existence of a lifting for u = IdA/I is equivalent to the existence of a bounded projection P : A → I. This shows that “yes” to (i) implies “yes” to (ii) and the latter trivially imples “yes” to (iii). Let X and u : X → A/I be as in (i) and let q : A → A/I be the quotient map. There is clearly a separable C ∗ -subalgebra A1 ⊂ A such that q(A1 ) ⊃ u(X). Therefore a “yes” to (iii) implies that A1 ∩ I is complemented in A1 or equivalently that the identity of q(A1 ) admits a bounded lifting in A1 . This implies a fortiori that u is liftable, so that “yes” to (iii) implies “yes” to (i). In Banach space theory, well-known work by Sobczyk shows that the space c0 is separably injective (i.e. complemented in any separable superspace) but not injective (see [167, p. 106]). See Zippin’s paper [265] for a proof that c0 is actually the only (infinite-dimensional) separably injective Banach space (up to isomorphism). See [181] for a discussion of the analogous open questions for operator spaces, with K(2 ) playing the role of c0 . See [266] for a broad survey of bounded linear projections in Banach space theory.

Appendix Miscellaneous background

Our intention here is to help the reader remember why certain basic facts are true and how they are interlaced. We use deliberately a telegraphic style. We refer the reader to the many existing basic reference books for a more harmonious presentation of the various topics we survey.

A.1 Banach space tensor products Let X,Y be Banach spaces. Any t ∈ X ⊗ Y (algebraic tensor product) can be written as a finite sum t = N 1 xj ⊗ yj (xj ∈ X,yj ∈ Y ). The smallest possible N is called the rank of t. Let Bil(X × Y ) denote the space of bounded bilinear forms on X × Y . We have a canonical linear injective map from X ⊗ Y to the space Bil(X∗ × Y ∗ ). Namely this is the linear mapping taking x ⊗ y (x ∈ X,y ∈ Y ) to the form defined on X ∗ × Y ∗ by (x ,y  ) → x  (x)y  (y) (x  ∈ X∗,y  ∈ Y ∗ ). The latter form is separately weak* continuous on X∗ × Y ∗ . Remark A.1 Similarly, we have a canonical linear injective map from X∗ ⊗ Y ∗ to the space Bil(X × Y ). This is the linear mapping taking x  ⊗ y  (x  ∈ X∗,y  ∈ Y ∗ ) to the form defined on X × Y by (x,y) → x  (x)y  (y) (x ∈ X,y ∈ Y ). In the case of biduals, it follows that if t ∈ X∗∗ ⊗ Y ∗∗ vanishes on X∗ ⊗ Y ∗ then t = 0. The projective and injective tensor norms of t are defined by   N xj yj  t∧ = inf 1

where the inf runs over all possible ways to write t as precedingly and t∨ = sup {|t,ξ ⊗ η| | (ξ,η) ∈ BX∗ × BY ∗ }    N = sup ξ(xj )η(yj ) (ξ,η) ∈ BX∗ × BY ∗ . 1

438

A.2 A criterion for an extension property

439

Note that by homogeneity, we also have 6 7 1/2  1/2 N N N 2 2 t∧ = inf xj  yj  x ⊗ yj . t = 1 1 1 j Let α be either ∧ or ∨. These norms satisfy both x ⊗ yα = xy for any (x,y) ∈ X × Y and similarly for the dual norm on X∗ ⊗ Y ∗ we have ξ ⊗ η∗α = ξ η for any (ξ,η) ∈ X∗ × Y ∗ . Any norm  α on X ⊗ Y satisfying both conditions is called “reasonable.” Then the projective (α = ∧) and injective (α = ∨) norm are respectively the largest and smallest among the reasonable tensor norms. As is easy to check, they are dual to each other: if  α =  ∧ (resp.  α =  ∨ ) then  ∗α =  ∨ (resp.  ∗α =  ∧ ). ∧



We denote by X⊗Y (resp. X⊗Y ) the respective completions of X ⊗ Y . Note that we have canonical isometric identifications ∧



X ⊗Y = Y ⊗X

and





X⊗Y = Y ⊗X.



The dual of X⊗Y can be canonically identified with the space Bil(X × Y ). The duality is the one obtained by extending the pairing  F,t = F (xk ,yk )  for F ∈ Bil(X × Y ) and t = N 1 xk ⊗ yk ∈ X ⊗ Y , for which we have |F,t| ≤ F t∧ . The space Bil(X × Y ) can be naturally isometrically identified either with B(X,Y ∗ ) or equivalently with B(Y,X∗ ). Thus we have ∧

(X⊗Y )∗  B(X,Y ∗ )  B(Y,X∗ ).

(A.1)

When X or Y is an L1 -space, the projective tensor norm can be computed more explicitly: For any measure space ( ,μ), we have (isometrically) ∧

L1 (μ)⊗Y  L1 (μ;Y ), where the latter space is meant in “Bochner’s sense.”





See [200] for a discussion of the pairs (X,Y ) such that X⊗Y = X⊗Y , that goes beyond the scope of the present volume. Note however that in the latter case the two norms are not equal as in the C ∗ -case, but only equivalent.

A.2 A criterion for an extension property Let B,C be Banach spaces. The projective norm on the algebraic tensor product was just defined for any t ∈ B ⊗ C by   N bj B cj C t∧ = inf 1

 the infimum being on all possible representations of t as a finite sum t = N 1 bj ⊗ cj (bj ∈ B, cj ∈ C). Let B ⊗∧ C denote the resulting normed space; for our

440

Appendix: Miscellaneous background

present purpose we do not need to complete it. For the linear operator T : C ∗ → B corresponding to t, for any ε > 0, there are an integer N and a factorization T1

T2

N T : C ∗ −→N ∞ −→1 −→B D

(A.2)

where D is diagonal, T1 is weak* continuous, and t∧ ≤ T1 DT2  ≤ (1 + ε)t∧ . The correspondence is like  this: if t = 0 we may assume bj cj  = 0, then for any ξ ∈ C ∗ we set T1 (ξ ) = ξ(cj /cj )ej , T2 (ej ) = bj /bj  and let D be the diagonal matrix with coefficients (bj cj ). This leads to t∧ = N∧ (T )

(A.3)

where N∧ (T ) = inf{T1 DT2 } the infimum being over all integers N ≥ 1 and all factorizations of T as in (A.2), with D diagonal and T1 weak* continuous. (Warning: N = ∞ is not allowed here, as it leads to a smaller norm, namely the nuclear norm). In the case B = C ∗ , there is a natural linear form on C ∗ ⊗ C denoted by t → tr(t) that takes ξ ⊗ x (ξ ∈ C ∗,x ∈ C) to ξ(x). We have clearly ∀t ∈ C ∗ ⊗ C

|tr(t)| ≤ t∧ .

(A.4)

Let T : C → C be the finite rank linear map associated to t and let E ⊂ C be any finitedimensional subspace such that E ⊃ T (C). Let TE : E → E be the restriction of T to E. Then tr(t) = tr(TE ), the latter being is of course the usual (linear algebraic) trace of TE , which is independent of E. Thus it is natural to define the trace of a finite rank T on an infinite-dimensional C simply by setting tr(T ) = tr(t). Then (A.4) becomes |tr(T )| ≤ N∧ (T ). We already mentioned the classical (and easy to see) fact that [B ⊗∧ C]∗  B(B,C ∗ ) isometrically. Let U ∈ B(B,C ∗ ). Since [B ⊗∧ C]∗  B(B,C ∗ ) we have for the corresponding duality |U,t| ≤ t∧ U  = N∧ (T )U . The finite rank operators UT : C ∗ → C ∗ and TU : B → B have the same trace and in fact U,t = tr(UT) = tr(TU). Therefore, we have |tr(UT)| ≤ N∧ (T )U .

(A.5)

Proposition A.2 (Dual criterion for extension) Let B,C Banach spaces. Let A ⊂ B be a closed subspace and let j : A → B be the inclusion mapping. Let u : A → C ∗ be a linear mapping. The following are equivalent: (i) There is a linear map  u : B → C ∗ extending u with norm ≤ 1. (ii) For any weak* continuous v : C ∗ → A of finite rank we have |tr(uv)| ≤ N∧ (j v). B  u

j

C∗

v

A

u

C∗

A.4 Ultrafilters

441

Proof Assume (i). Note uv =  uj v. Then clearly (applying (A.5) with U =  u and T = jv) |tr(uv)| = |tr( uj v)| ≤  uN∧ (j v) ≤ N∧ (j v). The converse is a simple application of the Hahn–Banach theorem. Assume (ii). Let S = C ⊗ A ⊂ C ⊗∧ B. Any s ∈ C ⊗ A corresponds to a weak* continuous finite rank map vs : C ∗ → A. We equip S with the norm induced by C ⊗∧ B. Observe that by (A.3) (ii) means that the linear form f : S → C defined by f (s) = tr(uvs ) has norm ≤ 1. Let f ∈ [C ⊗∧ B]∗ be its Hahn–Banach extension with f[C⊗∧ B]∗ ≤ 1. Let  u : B → C ∗ be the u is the extension required associated operator. We have  u = f[C⊗∧ B]∗ ≤ 1 and  u(a),c = f(a ⊗ c) = f (a ⊗ c) = u(a),c for in (i). Indeed, since f|S = f we have  any a ∈ A, c ∈ C .

A.3 Uniform convexity of Hilbert space It is convenient to record here the following elementary fact expressing the uniform convexity of Hilbert space. The latter is usually formulated for pairs of unit vectors (i.e. n = 2 in (A.6)), but the generalization to n-tuples is straightforward: Lemma A.3 Let x = (x1, . . . ,xn ) be an n-tuple in the unit ball of a Hilbert space H . Then  n  √  2 ∀ε > 0 n−1 xk  > 1 − ε ⇒ max xi − xj  < 2 nε. (A.6) 1≤i=j ≤n

1

 Proof Let M(x) = n−1 n1 xk . A simple verification (this is a classical fact on the variance) shows that n n xk − M(x)2 = n−1 xk 2 − M(x)2, n−1 supk xk − M(x)2 ≤ n−1 1

1

which implies (A.6).

A.4 Ultrafilters Most readers will surely know what is a free or nontrivial ultrafilter U on a set I . The following quick introduction is meant to allow those with less familiarity to grasp the minimum terminology necessary to follow the present notes. For our purposes, the notion that matters is “the limit along an ultrafilter,” and this can be explained easily. Let f : ∞ (I ) → C be a unital ∗-homomorphism (a fortiori f is positive). Then for any subset α ⊂ I we have f (1α ) ∈ {0,1}. The collection formed by the subsets α ⊂ I such that f (1α ) = 1 is what is called an ultrafilter on I . Since indicators form a total set in ∞ (I ), f is entirely determined by its values on the indicators of subsets, so that the correspondence f ↔ U is 1-1. So any ultrafilter U comes from a unique functional fU . The traditional notation is then to denote ∀x ∈ ∞ (I )

limU xi = fU (x),

442

Appendix: Miscellaneous background

and to refer to limU xi as the limit of xi along U. In this viewpoint, for a subset α ⊂ I , we have α ∈ U ⇔ limU 1α = 1. The trivial ultrafilters are those that are associated to the functionals δi (i ∈ I ) defined by δi (x) = xi . The other ones are called nontrivial (or “free”). They are characterized by the property that limU xi = 0 whenever i → xi is finitely supported. The existence of nontrivial U’s can be deduced easily from the pointwise compactness of the set of states on ∞ (I ), which shows that the set of cluster points of (δi )i∈I when “i → ∞” is nonvoid, in other words that ∩α⊂I,|α| 0 the set Iε = {i ∈ I | |ai − | < ε} belongs to U. Indeed, since ε1I \Iε ≤ (|ai − |) and fU is positive, we must have ε limU 1I \Iε ≤ limU |ai − | = 0, and hence limU 1I \Iε = 0 or equivalently Iε ∈ U. The converse is also true: if for any ε > 0 the set Iε = {i ∈ I | |ai − | < ε} belongs to U, then we must have  = limU ai , because (|ai − |) ≤ ε1Iε + (a∞ + ||)1I \Iε implies limU |ai − | ≤ ε. More generally when (ai ) and  are points in a topological space we say that  = limU ai if for any neighborhood V of  the set {i ∈ I | ai ∈ V } belongs to U. Remark A.4 Let (an ) be a bounded sequence of reals. Then (an ) converges when n → ∞ if and only if the limits limU an are independent of U (U nontrivial ultrafilter on N), and then the limit of (an ) is their common value. Indeed, it is easy to show that the set formed of all the limits limU an (U nontrivial ultrafilter on N) coincides with the set of cluster points of (an ). Remark A.5 Let U and V be distinct ultrafilters on I . We claim that there is a disjoint partition I = α ∪ β such that limU 1α = 1 and limV 1β = 1. Indeed, by what precedes there must exist an infinite subset γ ⊂ I such that limU 1γ = limV 1γ . Then either limU 1γ = 1 and then we can take β = I \ γ (and α = γ ), or limU 1γ = 0 and then we can take α = I \ γ (and β = γ ). Remark A.6 Let I be a directed set, meaning that I is given with a partial order such that for any pair i,j ∈ I there is k ∈ I such that i ≤ k and j ≤ k. Assuming I infinite, any function x : I → C can be viewed as a “generalized sequence” (one also speaks of the net associated to the directed set I ) and the meaning of lim x(i) =  is copied on the usual one: ∀ε > 0∃j ∈ I such that |x(i) − | < ε ∀i ≥ j . We claim that there is an ultrafilter U on I such that for any x ∈ ∞ (I ) that admits a limit  in the preceding sense we have limU x(i) = . Equivalently, for any i ∈ I the set α i = {j ∈ I | j ≥ i} satisfies limU 1α i = 1. With terminology from the theory of filters, or nets and subnets one usually says that the ultrafilter U refines the filter or the net associated to the directed set I . To prove that such a U exists, just observe that by the directedness of I and the compactness of the set of states on ∞ (I ) the intersection ∩i∈I {δj | j ≥ i} is nonvoid.

A.6 Finite representability

443

A.5 Ultraproducts of Banach spaces Let (Hi )i∈I be a family of Banach spaces. We recall the usual definition of the

of (Hi )i∈I with respect to an ultrafilter U on the set I . Let X = ultraproduct  ⊕ i∈I Hi ∞ . For any x = (xi ) ∈ X we define a seminorm ψU on X by ψU (x) = limU xi Hi . We will denote by HU = X/ ker(ψU ) the resulting Banach space. We call it the ultraproduct of (Hi )i∈I with respect to U. When all the Hi’s are identical to a single space H we say that HU is an ultrapower of H , and we denote HU = H U . Let q : X → X/ ker(ψU ) be the quotient map. For any element x = (xi ) ∈ X, by convention we denote by (xi )U the corresponding element in HU , i.e. we set (xi )U = q(x). With this notation, we have (xi )U HU = limU xi Hi . The most important case for us is the one when the Hi’s are Hilbert spaces. In that case it is easy to check that we can equip HU with a scalar product defined by setting for any pair x,y ∈ X (xi )U ,(yi )U  = limU xi ,yi . Clearly the right-hand side depends only on q(x),q(y), so this is legitimate. The resulting space HU is a Hibert space.

A.6 Finite representability A Banach space X is finitely representable in another one Y (X f.r. Y in short) if for any ε > 0 any finite-dimensional subspace of X is (1 + ε)-isomorphic to some subspace of Y . Note that X f.r. Y and Y f.r. Z implies X f.r. Z. Lemma A.7 We have X f.r. Y if and only if X embeds isometrically in an ultrapower of Y (in the sense of §A.5). Proof Assume X f.r. Y . Let S be the set of all the finite-dimensional subspaces of X directed by inclusion. Let I = S × N. Let i = (E,m) and i  = (E ,m ) be elements of I . We define a partial order on I by declaring that i ≤ i  if E ⊂ E  and m ≤ m . Then I is a directed set. Let (E,m) ∈ I . Since X f.r. Y there is an operator ui : E → Y such that x ≤ ui (x) ≤ (1 + 1/m)x for any x ∈ E. Let U be an ultrafilter on I refining the associated net (so that in particular limU 1/m = 0) as explained in Remark A.6. For any x ∈ X we have x ∈ E for all i = (E,m) large enough in I . Thus ui (x) is well defined. Otherwise we set, say, ui (x) = 0. Then the mapping u : X → Y U defined by u(x) = (ui (x))U is an isometric embedding.

444

Appendix: Miscellaneous background

Conversely assume X ⊂ Y U isometrically. To show X f.r. Y it suffices to show that Y U f.r. Y . Let E ⊂ Y U be finite dimensional with basis x1, . . . ,x n . We may   ak xk  = assume xk = (xk (i))  U . For any scalar coefficients a1, . . . ,an we have  limU  ak xk (i). Fix ε > 0. Let Nε be a finite ε-net in the unit ball of E. Choosing i large enough we can obtain             ak xk  ≤  ak xk (i) ≤ (1 + ε)  ak xk  . (A.7) ∀a ∈ Nε (1 − ε)  E

Y

E

Noting that any element a in the unit ball BE of E can be written (by successive approximations) in the form a = a 0 + εa 1 + ε2 a 2 + · · · with a 0,a 1,a 2, . . . all in Nε , we obtain after a simple calculation             ak xk (i) ≤ (1 + ε)(1 − ε)−1  ∀a ∈ BE (1 − δε )  ak xk  ≤  ak xk  , (A.8) where δε → 0 when ε → 0. This shows that Y U f.r. Y . Lemma A.8 If X f.r. Y then there is a set I and a (metric) surjection q : ∞ (I ;Y ∗ ) → X ∗ taking the closed unit ball of ∞ (I ;Y ∗ ) onto that of X∗ . Proof Let S, I , U and u : X → Y U be as in the first part of the preceding proof. We ∗ define Q : ∞ (I ;Y ∗ ) → Y U by setting for any ξ = (ξi )i∈I in ∞ (I ;Y ∗ ) ∀y = (yi )U ∈ Y U

Q(ξ )(y) = limU ξi ,yi .

Clearly Q ≤ 1. Let q = u∗ Q : ∞ (I ;Y ∗ ) → X∗ . Clearly q ≤ 1. Let η ∈ BX∗ . For any i = (E,m), let Ei = ui (E) ⊂ Y . By the preceding argument for all i = (E,m) large enough ui is an isomorphism from E to ∗ ∗ ∗ Ei . Let fi = η|E u−1 i ∈ Ei . Then fi Ei ≤ 1. Let ξi ∈ Y denote a Hahn–Banach extension of fi to Y , so that ξi Y ∗ ≤ 1 and hence ξ ∞ (I ;Y ∗ ) ≤ 1. Let x ∈ X. Then u∗ Q(ξ ),x = Q(ξ ),u(x) = limU ξi ,ui (x) = limU fi (ui (x)) = η(x). Thus we conclude q(ξ ) = η.

A.7 Weak and weak* topologies: biduals of Banach spaces Let X be a Banach space with (closed) unit ball BX . As usual the weak topology on X is denoted by σ (X,X∗ ), while the weak* topology on X∗ is denoted by σ (X∗,X). In general the latter is distinct from its weak topology σ (X∗,X∗∗ ). Of course both are weaker than the norm topology. However, the following well-known result allows to conveniently pass from weak to strong convergence in many interesting cases. Theorem A.9 (Mazur’s theorem) For any convex subset C ⊂ X, the weak (i.e. σ (X,X∗ )) closure of C coincides with its norm closure.

A.7 Weak and weak* topologies: biduals of Banach spaces

445

Remark A.10 Mazur’s theorem is often used in the following form. Suppose we have a net (xi ) in X that converges weakly to a limit x ∈ X. Then we can form a net (xβ ) that converges in norm to x and that is such that each xβ is a convex combination of elements of the original net (xi ). More precisely, we can arrange for the following supplementary property. Assume without loss of generality that the nets are with respect to directed sets of indices (sometimes called generalized sequences). Then we can make sure that for any i there is β such that for all η ≥ β the point xη is in the convex hull of {xξ ,ξ ≥ i}. weak

, and Indeed, this is easy to check using the observation that x ∈ conv({xξ ,ξ ≥ i})  ∈ conv({x ,ξ ≥ i}) such that hence by Mazur’s theorem for any ε > 0 there is xi,ε ξ  − x < ε, so that we may use for the β’s the set of pairs (i,ε) directed in the xi,ε obvious way. Now consider a set , functions ui :  → X and assume that the net (ui ) tends pointwise on  to a limit u :  → X with respect to the weak topology of X. We claim that there is a net (uβ ) tending to u pointwise with respect to the norm topology such that each uβ is a convex combination of the ui’s. To check this fix a finite subset F ⊂ . We will apply the first part with X replaced by XF equipped with say (the choice is largely irrelevant) the norm of ∞ (F,X). Consider then the elements xi in XF defined by xi = (ui (γ ))γ ∈F ∈ XF , and let x = (u(γ ))γ ∈F ∈ XF . By the first part for any ε > 0 there is u in the convex hull of the ui’s such that u(γ ) − ui (γ ) < ε for any γ ∈ F . Using this, the claim follows, we leave the remaining details to the reader. Remark A.11 Let D ⊂ X be total, i.e. such that the linear span of D is norm dense in X. Then any bounded net (xi ) in X∗ that is convergent (resp. Cauchy) with respect to pointwise convergence on D is convergent (resp. Cauchy) with respect to the weak* topology. The verification of this fact is entirely elementary. In general, a bounded net in X does not converge weakly in X. However, since bounded subsets of X∗∗ are relatively σ (X∗∗,X∗ )-compact, there is a subnet that converges for σ (X∗∗,X∗ ) to some point in X∗∗ . More precisely, it is a well-known fact that BX

σ (X∗∗,X∗ )

= BX∗∗ .

(A.9)

In these notes, we use on several occasions the following useful reformulation: Proposition A.12 (Biduals as quotients of ∞ -sums) There is a set I such that if we set Xi = X for any i ∈ I and    XI = ⊕ Xi , i∈I ∞ ∗∗ there is a metric surjection ϕ : XI → X such that

ϕ(BXI ) = BX∗∗ . In particular X∗∗ is isometric to a quotient Banach space of XI . Proof Let I be a base of the set of neighborhoods of 0 in X∗∗ for σ (X∗∗,X∗ ). By (A.9) for any x ∈ BX∗∗ there is (xi )i∈I in the closed unit ball of XI such that xi ∈ x + i for any i ∈ I . Observe that lim xi = x with respect to σ (X∗∗,X∗ ) the limit being relative to the directed net formed by the neighborhood base I . Let U be an utrafilter refining this net. Let

446

Appendix: Miscellaneous background

∀y = (yi )i∈I ∈ XI

ϕ(y) = limU yi .

By the weak* compactness of BX∗∗ the latter limit exists and ϕ ≤ 1. By the preceding observation we have ϕ(BXI ) = BX∗∗ . Consequently, X∗∗ is isometrically isomorphic to XI / ker(ϕ). Remark A.13 The following fact is one more well-known consequence of the Hahn– Banach theorem: let u : X → Y be a linear mapping between Banach spaces. Then u : X → Y is an isometry if and only if u∗∗ : X∗∗ → Y ∗∗ is also one.

A.8 The local reflexivity principle In Banach space theory, the important “principle of local reflexivity” (from [170]) says that every Banach space X has the following property called “local reflexivity”: B(E,X)∗∗ = B(E,X∗∗ ) (isometrically) for any finite-dimensional Banach space E.

(A.10) ∨

Recall that B(E,X) can be identified with the injective tensor product E ∗ ⊗X. The main point is that B(E,X)∗ can be identified isometrically with the projective tensor ∧

product E ⊗X∗ . From this (A.10) is immediate. The typical application of (A.10) is this: Lemma A.14 Let X be any Banach space. Let E ⊂ X∗∗ be a finite-dimensional subspace. There is a net of maps ui : E → X with ui  → 1 such that ∀x ∈ E ∩ X,

ui (x) = x

and ∀x ∈ E,

ui (x) → x for σ (X∗∗,X∗ ).

Proof Let uE : E → X∗∗ denote the inclusion map and let B = B(E,X). Note uE  = 1. By (A.10) the unit ball of B(E,X) is σ (B ∗∗,B ∗ )-dense in that of B(E,X∗∗ ). Therefore there is a net of maps vi : E → X (i ∈ I ) with vi  ≤ 1 such that vi (x) → uE (x) for any x ∈ E with respect to σ (X∗∗,X∗ ). For any x ∈ E ∩ X, vi (x) − x lies in X and tends σ (X,X∗ ) to 0. By Mazur’s classical theorem A.9, 0 lies in the norm closure of conv({vi (x) − x | i ∈ I }), and also of conv({vj (x) − x | j ≥ i}) for any choice of i ∈ I . Therefore (see Remark A.10) we can find a net formed of convex combinations of the vi’s that we denote (abusively) by wi : E → X satisfying still that wi (x) → uE (x) for any x ∈ E with respect to σ (X∗∗,X∗ ) but in addition such that wi (x) − x → 0 for any x ∈ E ∩ X. Note wi  ≤ 1. Fix ε > 0. Let e1, . . . ed be a linear basis of E ∩ X (assuming E ∩ X = {0}). Let e1∗, . . . ed∗ be biorthogonal X∗ . We then set linear functionals on E ∩ X. By Hahn–Banach we may assume ek∗ ∈  ∗ x ∈ E. Then ui (x) = k ek∗ (x)ek = x ui (x) = wi (x)+ k ek (x)(ek −wi (ek )) for any for any x ∈ E ∩ X. Moreover ui  ≤ wi  + k ek∗ ek − wi (ek ), so that for all i large enough we have ui  ≤ 1 + ε. Lastly ui (x) − wi (x) → 0 for any x ∈ E, so that we still have ui (x) → uE (x) for any x ∈ E with respect to σ (X∗∗,X∗ ). As a consequence of (A.10) we have: Proposition A.15 For any Banach space X we have X∗∗ f.r. X.

A.9 A variant of Hahn–Banach theorem

447

Proof Fix a finite-dimensional E ⊂ X∗∗ . Let B = B(E,X). Let u : E → X∗∗ be the inclusion mapping. By (A.10) there is a net of mappings ui : E → X with ui  ≤ 1 tending in the sense of σ (B ∗∗,B ∗ ) to u, i.e. such that ξ(ui (e)) → ξ(e) for all e ∈ E and all ξ ∈ X∗ . Let e ∈ E. We have lim sup ui (e) ≤ e and also lim inf ui (e) ≥ supξ ∈BX∗ lim inf |ξ(ui (e))| = e. Therefore ui (e) → e. Let ε > 0 and let Nε be a finite ε-net in the unit ball of E. Choosing i large enough we can obtain (1 − ε)eE ≤ ui (e)X ≤ eE .

∀e ∈ Nε

Then we conclude by arguing as for the passage from (A.7) to (A.8) that X∗∗ f.r. X.

A.9 A variant of Hahn–Banach theorem Lemma A.16 Let S be a set and let F ⊂ ∞ (S) be a convex cone of real valued functions on S such that ∀f ∈F

sup f (s) ≥ 0. s∈S

Then there 2 is a net (λi ) of finitely supported probability measures on S such that the limit of f dλi exists for any f ∈ F and satisfies ∀f ∈F

lim

f dλi ≥ 0.

If S is a weak* compact convex subset of the dual X∗ of a Banach space X, and F is formed of affine weak* continuous functions on S, then there is s ∈ S such that ∀f ∈F

f (s) ≥ 0.

Proof Let ∞ (S,R) denote the space all bounded real valued functions on S with its usual norm. In ∞ (S,R) the set F is disjoint from the set C− = {ϕ ∈ ∞ (S,R) | sup ϕ < 0}. Hence by the Hahn–Banach theorem (we separate the convex set F and the convex open set C− ) there is a nonzero ξ ∈ ∞ (S,R)∗ such that ξ(f ) ≥ 0 ∀ f ∈ F and ξ(f ) ≤ 0 ∀ f ∈ C− . Let M ⊂ ∞ (S,R)∗ be the cone of all finitely supported (nonnegative) measures on S viewed as functionals on ∞ (S,R). Since we have ξ(f ) ≤ 0 ∀ f ∈ C− , ξ must be in the bipolar of M for the duality of the pair (∞ (S,R),∞ (S,R)∗ ). Therefore, by the bipolar theorem, ξ is the limit for the topology σ (∞ (S,R)∗,∞ (S,R)) of a net of finitely supported (nonnegative) measures ξi on S. We have for any f in ∞ (S,R), ξi (f ) → ξ(f ) and this holds in particular if f = 1, thus (since ξ is nonzero) we may assume ξi (1) > 0, hence if we set λi (f ) = ξi (f )/ξi (1) we obtain the first assertion. ∗ If F 2 is formed of affine functions on a weak*2closed convex set S ⊂ X , let si = sdλi (s) be the barycenter of λi . We have f dλi = f (si ) for all f ∈ F . By the weak* compactness of S, there is a subnet for which si converges weak* to some point s ∈ S and since f is assumed weak* continuous, we have lim

f dλi = lim f (si ) = f (s),

and hence f (s) ≥ 0 for any f ∈ F .

448

Appendix: Miscellaneous background

A.10 The trace class

Let H,K be Hilbert spaces. We denote by H̄ the complex conjugate Hilbert space (the same space but with complex conjugate scalar multiplication). Recall the canonical identifications

    H* = H̄,    H̄* = H    and    (K̄)* = K.

With the latter, (A.1) implies the isometric identity

    (K̄ ⊗̂ H)* = B(H,K).

The space K̄ ⊗̂ H can be identified with the space S_1(K,H) of trace class operators, i.e. the operators T : K → H such that for some (or all) orthonormal bases (e_i) of K we have Σ ⟨e_i, |T|e_i⟩ < ∞ (here |T| = (T*T)^{1/2}). We equip this space with the norm

    ‖T‖_{S_1} = Σ ⟨e_i, |T|e_i⟩ = tr(|T|).

Then K̄ ⊗̂ H ≃ S_1(K,H) isometrically, for the correspondence that takes a tensor t = Σ_{k=1}^n x_k ⊗ y_k to the operator T defined by T(ξ) = Σ_{k=1}^n ⟨x_k, ξ⟩ y_k (ξ ∈ K). That same correspondence t → T gives us an isometric identification K̄ ⊗_2 H ≃ S_2(K,H), where S_2(K,H) denotes the space of Hilbert–Schmidt mappings T : K → H, with norm defined by

    ‖T‖_{S_2} = (Σ ‖T(e_i)‖²)^{1/2} = (tr(T*T))^{1/2}.

One also defines the Schatten p-class S_p(K,H) with norm ‖T‖_{S_p} = (tr|T|^p)^{1/p} for other values of p ∈ [1,∞) but we do not use them in this volume (see e.g. [215]). In particular, when K = H, for p = 1,2 we denote simply S_p(H) = S_p(H,H).

The preceding shows that B(H) admits as its predual the space H̄ ⊗̂ H (or equivalently the space S_1(H)). We will sometimes denote that predual by B(H)_*, especially when we view it as a subspace of B(H)*.

Remark A.17 For any T ∈ S_1(H), we have clearly T* ∈ S_1(H) and, for any a ∈ B(H), aT ∈ S_1(H) and Ta ∈ S_1(H). Moreover, ‖T*‖_{S_1} = ‖T‖_{S_1}, ‖aT‖_{S_1} ≤ ‖T‖_{S_1}‖a‖ and ‖Ta‖_{S_1} ≤ ‖T‖_{S_1}‖a‖. It follows that the mappings x → x*, x → ax and x → xa are all continuous from B(H) to itself equipped with the weak* topology.
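For readers who like to check such identities numerically, here is a quick finite-dimensional sanity check (H = K = ℂ^n, using numpy; a toy illustration, not part of the formal development). It computes ‖T‖_{S_1} and ‖T‖_{S_2} from the singular values and verifies the duality |tr(bT)| ≤ ‖b‖ ‖T‖_{S_1}, with equality attained on a unitary coming from the polar decomposition of T.

import numpy as np

rng = np.random.default_rng(0)
n = 5
T = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Singular value decomposition T = U S V*; then |T| = V S V* and the
# polar decomposition is T = w |T| with w = U V* (here a unitary).
U, s, Vh = np.linalg.svd(T)
S1 = s.sum()                        # trace class norm  tr|T|
S2 = np.sqrt((s ** 2).sum())        # Hilbert-Schmidt norm (tr T*T)^{1/2}
assert np.isclose(S2, np.linalg.norm(T, 'fro'))

# duality with B(H): |tr(bT)| <= ||b||_op * ||T||_{S1}
b = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
assert abs(np.trace(b @ T)) <= np.linalg.norm(b, 2) * S1 + 1e-10

# the supremum over the unit ball of B(H) is attained at b = w* = V U*
w = U @ Vh
assert np.isclose(np.trace(w.conj().T @ T).real, S1)
print(S1, S2)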

A.11 C*-algebras: basic facts

A C*-algebra A is a Banach ∗-algebra equipped with a norm ‖·‖ satisfying ‖x*‖ = ‖x‖, ‖xy‖ ≤ ‖x‖ ‖y‖ for all x,y ∈ A, and more importantly ‖x*x‖ = ‖x‖². Gelfand's theory tells us that for any such algebra there is an isometric ∗-homomorphism π : A → B(H). Thus A is embedded (or "realized") in B(H).


Moreover, if A is unital (i.e. has a unit element 1) then there is an embedding such that π(1) = Id_H. This is proved using the GNS construction that we describe in §A.13.
Any element x in a C*-algebra A can be decomposed as x = a + ib with a,b ∈ A self-adjoint given by a = (x + x*)/2 and b = (x − x*)/(2i). Furthermore, using the functional calculus in the commutative C*-algebras generated respectively by a and b we can decompose x further as

    x = a⁺ − a⁻ + i(b⁺ − b⁻).    (A.11)

We often use the following simple operator analogue of the classical Cauchy–Schwarz inequality. Let (a_i)_{1≤i≤n} and (b_i)_{1≤i≤n} be finitely supported families of operators in B(H). We have

    ‖Σ a_i b_i‖ ≤ ‖Σ a_i a_i*‖^{1/2} ‖Σ b_i* b_i‖^{1/2}.    (A.12)

This follows from

    ‖Σ a_i b_i‖ = sup |⟨ξ, Σ a_i b_i η⟩| = sup |Σ ⟨a_i* ξ, b_i η⟩| ≤ sup_ξ (Σ ‖a_i* ξ‖²)^{1/2} sup_η (Σ ‖b_i η‖²)^{1/2}

and

    ‖Σ a_i a_i*‖^{1/2} = sup_ξ (Σ ⟨a_i* ξ, a_i* ξ⟩)^{1/2},    ‖Σ b_i* b_i‖^{1/2} = sup_η (Σ ⟨b_i η, b_i η⟩)^{1/2},    (A.13)

where the suprema run over all ξ,η in the unit ball of H.
Assuming A unital, an element x ∈ A is called unitary if x*x = xx* = 1. When A is realized in B(H) these correspond exactly to the unitary operators on H that are in A. The following classical result is very useful when dealing with metric properties:

Theorem A.18 (Russo–Dye theorem) The convex hull of the set of unitary elements of a unital C*-algebra is dense in the unit ball.

The proof appears in all major books such as [226, 240]. Kadison and Pedersen proved the following refinement, improved later on by Haagerup (see [106, 112, 144]): if x in a unital C*-algebra satisfies ‖x‖ ≤ 1 − 2/n then there are unitaries u_1, ..., u_n such that x = (u_1 + · · · + u_n)/n. Actually the first result in that direction had been proved a few years earlier by Popa [216].

Corollary A.19 For any (bounded) positive linear map u : A → B between unital C*-algebras, we have ‖u‖ = ‖u(1)‖.

Proof It suffices to show that ‖u(x)‖ ≤ ‖u(1)‖ for any unitary element x ∈ A. We may clearly replace A by the commutative C*-algebra generated by x. Then the statement reduces to the case when A = C(T) for some compact set T. The case when T is finite is easy. Indeed, we may assume A = ℓ_∞^n, and then denoting by (e_j) the canonical basis of ℓ_∞^n and observing that u positive means u(e_j) ≥ 0 for all j, we have by (A.12) for any z = Σ z_j e_j ∈ B_{ℓ_∞^n}

    ‖u(z)‖ = ‖Σ z_j u(e_j)‖ = ‖Σ u(e_j)^{1/2} × z_j u(e_j)^{1/2}‖ ≤ ‖Σ u(e_j)‖^{1/2} ‖Σ |z_j|² u(e_j)‖^{1/2} ≤ ‖Σ u(e_j)‖ = ‖u(1)‖.

The proof can then be completed using Remark A.20.
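As a quick numerical sanity check of (A.12) (a toy illustration in B(ℂ^n) using numpy, not part of the formal development):

import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 6   # d operators of size n x n
A = rng.normal(size=(d, n, n)) + 1j * rng.normal(size=(d, n, n))
B = rng.normal(size=(d, n, n)) + 1j * rng.normal(size=(d, n, n))

op = lambda x: np.linalg.norm(x, 2)          # operator norm on B(C^n)
lhs = op(sum(a @ b for a, b in zip(A, B)))
rhs = op(sum(a @ a.conj().T for a in A)) ** 0.5 \
      * op(sum(b.conj().T @ b for b in B)) ** 0.5
assert lhs <= rhs + 1e-10                    # the operator Cauchy-Schwarz bound (A.12)
print(lhs, rhs)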


A.12 Commutative C*-algebras

By spectral theory any commutative, closed, self-adjoint and unital subalgebra A ⊂ B(H) is isometrically ∗-isomorphic to an algebra of continuous functions on a compact set T. By definition, the spectrum of A, denoted by σ_A, is the set of nonzero multiplicative ∗-homomorphic functionals f : A → ℂ (sometimes called "characters"). If A is unital this is a compact set for the pointwise topology. In the nonunital case it is a locally compact space, which we can compactify by adding a point at ∞. When A is commutative and unital, there is an isometric ∗-isomorphism from A to the C*-algebra, denoted by C(σ_A), formed of all the continuous functions on σ_A. Thus A can be identified with C(T) for T = σ_A. When A is commutative and nonunital, A can be identified with the C*-algebra, denoted by C_0(T), formed of all the continuous functions on T that tend to 0 at ∞ (of course the latter condition becomes void when T is compact, so C_0(T) = C(T) in that case).
Let A ⊂ B(H) be a unital C*-subalgebra. For any x ∈ A let σ_x ⊂ ℂ denote the spectrum of x (i.e. the set of z ∈ ℂ such that zI − x is noninvertible). If x is a normal operator, i.e. x*x = xx* (in particular if x is self-adjoint), we have

    ‖x‖ = sup_{z∈σ_x} |z|.    (A.14)

Moreover, if x ∈ A is invertible in B(H) then its inverse is in A, so the spectra of x relative either to A or to B(H) are the same. In particular, it is the same as the spectrum of x in the unital C*-subalgebra A_x ⊂ A generated by x. If x is normal (in particular if it is self-adjoint), A_x is isomorphic to C(σ_x). This allows us to make sense of f(x) ∈ A ("functional calculus") for any f ∈ C(σ_x). Note that the spectrum of a continuous function in C(T) (T compact) is just the closure of its range. Therefore, if x is normal

    ∀f ∈ C(σ_x)    σ_{f(x)} = f(σ_x).    (A.15)

For convenience we record here a basic observation on the space C(T), which gives a convenient way to reduce many questions to the case when T is finite. Recall that we denote by ℓ_∞^n the space ℂ^n equipped with the sup-norm. For any n-element set T the space C(T) can be identified to ℓ_∞^n.

Remark A.20 Let A = C(T) with T compact. Then the identity of A is the pointwise limit of a net of maps u_i : A → A of the form

    u_i : A −→^{v_i} ℓ_∞^{n(i)} −→^{w_i} A

where the n(i) are integers and v_i, w_i are unital positive contractions. Indeed, for any ε > 0 and any finite set x_1, ..., x_k ∈ A, there is a finite open covering (U_m)_{1≤m≤n} of T such that for all 1 ≤ m ≤ n and 1 ≤ j ≤ k the oscillation of x_j on U_m is ≤ ε. Let (ϕ_m)_{1≤m≤n} be a partition of unity subordinated to this covering. We have 0 ≤ ϕ_m ≤ 1, 1 = Σ_{1≤m≤n} ϕ_m and supp(ϕ_m) ⊂ U_m. Fix points ω_m ∈ U_m arbitrarily chosen. We define v : A → ℓ_∞^n and w : ℓ_∞^n → A by setting for x ∈ A and y = (y_m) ∈ ℓ_∞^n

    v(x) = (x(ω_m))_{1≤m≤n}    and    w(y) = Σ_{m=1}^n y_m ϕ_m.


Then ‖wv(x_j) − x_j‖ = ‖Σ_{1≤m≤n} ϕ_m [x_j(ω_m) − x_j]‖ ≤ ε for any 1 ≤ j ≤ k. Moreover, v and w are unital positive maps with ‖v‖ ≤ 1 and ‖w‖ ≤ 1. This proves the assertion.
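A small numerical sketch of this discretization (our own toy example for T = [0,1], using numpy and the piecewise-linear "hat" partition of unity; the nodes ω_m and the fine grid are of course arbitrary choices, and both v and w are unital positive contractions as required):

import numpy as np

grid = np.linspace(0.0, 1.0, 2001)          # fine grid standing in for T = [0,1]
n = 20
nodes = np.linspace(0.0, 1.0, n)            # the points omega_m

def hat(m, t):
    # hat function phi_m: equals 1 at nodes[m], 0 at the neighbouring nodes
    h = nodes[1] - nodes[0]
    return np.clip(1.0 - np.abs(t - nodes[m]) / h, 0.0, None)

phis = np.array([hat(m, grid) for m in range(n)])
assert np.allclose(phis.sum(axis=0), 1.0)   # partition of unity: sum_m phi_m = 1

x = np.sin(7 * grid) * np.exp(grid)         # a test function x in C(T)
v_x = np.sin(7 * nodes) * np.exp(nodes)     # v(x) = (x(omega_m))_m
wv_x = phis.T @ v_x                         # w(v(x)) = sum_m x(omega_m) phi_m

print(np.max(np.abs(wv_x - x)))             # small once n is large (oscillation bound)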

A.13 States and the GNS construction

Let A be a C*-algebra. An element f ∈ A*_+ with ‖f‖ = 1 is called a state. To any state we can associate an inner product on A by setting for any a,b ∈ A

    ⟨a,b⟩ = f(a*b).

We then obtain a Hilbert space denoted by L_2(f), after passing to the quotient by the kernel and completing. There is a natural mapping A → L_2(f). The left multiplication by a ∈ A defines an element of B(L_2(f)) denoted by π_f(a). Then a → π_f(a) is a ∗-homomorphism from A to B(L_2(f)), and there is a unit vector ξ_f ∈ L_2(f) such that π_f(A)ξ_f is dense in L_2(f) (i.e. ξ_f is cyclic) and

    ∀a ∈ A    f(a) = ⟨ξ_f, π_f(a)ξ_f⟩.    (A.16)

This is called the "GNS construction" for Gelfand–Naimark–Segal. We will say that (A.16) is the GNS factorization of f.

Remark A.21 In the converse direction let π : A → B(H) be cyclic with cyclic unit vector ξ. Then f(·) = ⟨ξ, π(·)ξ⟩ is a state and the associated π_f is unitarily equivalent to π. Indeed, the correspondence π(·)ξ → π_f(·)ξ_f defines a unitary u : H → L_2(f) such that π(·) = u*π_f(·)u.

Remark A.22 Note that we can perform this construction starting only from a functional f of norm 1 on a dense ∗-subalgebra 𝒜 ⊂ A such that f(x*x) ≥ 0 for any x ∈ 𝒜. Indeed, such an f extends to a state on A: by Hahn–Banach f extends to f′ ∈ A* with ‖f′‖ = 1, and by the density of 𝒜 we have f′(x*x) ≥ 0 for any x ∈ A.

If A admits a state f that is faithful, i.e., such that f(x*x) = 0 implies x = 0, then the ∗-homomorphism π_f : A → B(L_2(f)) is an embedding.
In general, the operation of right multiplication by a ∈ A is not bounded on L_2(f) for the norm we defined, namely ‖x‖_{L_2(f)} = f(x*x)^{1/2} (x ∈ A). But it would be bounded had we chosen to define the norm instead by ‖x‖_{L_2(f)} = f(xx*)^{1/2}. This difficulty disappears when both inner products coincide on A. This holds in particular when

    ∀x,y ∈ A    f(xy) = f(yx).

In that case, the functional f ∈ A*_+ is called tracial. We have then ‖x‖_{L_2(f)} = ‖x*‖_{L_2(f)} for any x ∈ A, so that x → x* defines an antilinear isometric involution J on L_2(f), and the operation of right multiplication b → ba defines a bounded linear map R_f(a) ∈ B(L_2(f)) such that

    ∀a,b ∈ A    J(R_f(a)b) = π_f(a*)J(b).

Moreover, we have Rf (ab) = Rf (b)Rf (a) so Rf is a ∗-homomorphism on the algebra Aop that is the same as A but with the reversed product. It is important to observe that the ranges of πf and Rf mutually commute.
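Before continuing, here is a small concrete illustration of the construction (a numerical sketch in numpy, ours, for A = M_2 and a faithful state f(a) = tr(ρa)). Since f(a*b) = ⟨aρ^{1/2}, bρ^{1/2}⟩_HS, the GNS space L_2(f) can be identified with M_2 carrying the Hilbert–Schmidt inner product via a ↦ aρ^{1/2}; this identification uses that ρ is (generically) invertible.

import numpy as np

rng = np.random.default_rng(2)
n = 2
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = X @ X.conj().T
rho /= np.trace(rho).real            # a faithful density matrix (positive definite, trace 1)
f = lambda a: np.trace(rho @ a)

w, V = np.linalg.eigh(rho)
r = V @ np.diag(np.sqrt(w)) @ V.conj().T          # rho^{1/2}; the cyclic vector xi_f corresponds to r
hs = lambda x, y: np.trace(x.conj().T @ y)        # Hilbert-Schmidt inner product <x, y>

a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
# (A.16): f(a) = <xi_f, pi_f(a) xi_f>, with pi_f(a) acting by left multiplication
assert np.isclose(f(a), hs(r, a @ r))
# pi_f is a *-homomorphism: <x, pi_f(a) y> = <pi_f(a*) x, y>
x, y = rng.normal(size=(n, n)), rng.normal(size=(n, n))
assert np.isclose(hs(x, a @ y), hs(a.conj().T @ x, y))
print("GNS check passed")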


Let T be any locally compact space. Then a state f on C_0(T) can be identified with a probability measure μ_f on T, and L_2(f) can be identified with L_2(μ_f). With this identification, the natural mapping π_f : C_0(T) → B(L_2(μ_f)) takes a function x ∈ C_0(T) to the operator of multiplication by x on L_2(μ_f). Then f is faithful if and only if μ_f has full support on T. In that case π_f realizes C_0(T) as multiplication operators acting on L_2(μ_f).
In any case, when A is commutative, there is a set I and an isometric ∗-homomorphism π : A → B(ℓ_2(I)) such that all the operators in the range of π are diagonal. Indeed, assuming A = C_0(T) we can take I = T viewed as a discrete set, and define π(f) ∈ B(ℓ_2(I)) as the diagonal operator with coefficients (f(t))_{t∈T}.

Remark A.23 Let T be a compact space. It is well known that C(T)* is isometric to the space M(T) formed of the complex measures μ on T equipped with the norm ‖μ‖_{M(T)} = |μ|(T). It is well known and elementary that a measure μ ∈ M(T) with |μ|(T) = 1 is positive if and only if μ(T) = 1. Moreover this holds if and only if ℜμ(T) = 1. The generalization of this to a noncommutative unital C*-algebra A is immediate: for any f ∈ A* with ‖f‖_{A*} = 1 the functional f is positive if and only if f(1) = 1, or if and only if ℜf(1) = 1. Indeed, to verify that f(x) ≥ 0 for any x ≥ 0 we obviously may restrict to the unital C*-algebra generated by x, but the latter being commutative, the problem reduces to the preceding measure space case.

A.14 On ∗-homomorphisms

Let π : A → B(K) be a unital ∗-homomorphism on a C*-algebra A. We claim that

    ‖π‖ = 1.    (A.17)

Assume x = x* ∈ A. Then σ_{π(x)} ⊂ σ_x (because zI − x invertible implies π(zI − x) = zI − π(x) invertible). Therefore by (A.14) ‖π(x)‖ ≤ ‖x‖ whenever x* = x. But now for an arbitrary x ∈ A, we have ‖π(x)‖ = ‖π(x*x)‖^{1/2} ≤ ‖x*x‖^{1/2} = ‖x‖, proving the claim, since π(1) = 1 guarantees ‖π‖ ≥ 1.
If π is injective then it is isometric. Indeed, using the preceding idea it suffices to show that injectivity forces σ_{π(x)} = σ_x when x = x*. Indeed, if σ_{π(x)} ≠ σ_x there is z ∈ σ_x \ σ_{π(x)}, so we can find a continuous function f on σ_x such that f(z) = 1 but f_{|σ_{π(x)}} = 0. Let y = f(x). Recall this is defined using (A.14) and limits of polynomials, therefore π(y) = f(π(x)), and by (A.15) σ_y = f(σ_x) ≠ {0} but σ_{π(y)} = f(σ_{π(x)}) = {0}. Thus y ≠ 0 but π(y) = 0, i.e. π is not injective.
If A is not unital, one can show that π extends to a unital ∗-homomorphism on the unitization of A (namely ℂI + A), and the preceding results remain true. Recapitulating:

Proposition A.24 For any ∗-homomorphism π : A → B between C*-algebras the range π(A) is closed and ‖π‖ = 1 (assuming π ≠ 0). If π is injective it is isometric. If π is injective with dense range, it is automatically a surjective isometric isomorphism.

Proof We may assume B ⊂ B(K). We have a canonical factorization A → A/ker(π) → B, with an injective ∗-homomorphism A/ker(π) → B ⊂ B(K), which by what precedes must be isometric. Thus the range is isometric to A/ker(π). Since the latter is complete, the range is closed. The other assertions are now obvious.
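The isometry in Proposition A.24 is easy to observe numerically; as a toy illustration (ours), take the injective ∗-homomorphism π : M_2 → M_4 given by the amplification π(x) = x ⊕ x and check that it preserves both the norm and the spectrum:

import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

pi_x = np.block([[x, np.zeros((2, 2))],     # pi(x) = diag(x, x), an injective *-homomorphism
                 [np.zeros((2, 2)), x]])

assert np.isclose(np.linalg.norm(pi_x, 2), np.linalg.norm(x, 2))        # isometric
assert np.allclose(np.sort_complex(np.linalg.eigvals(pi_x)),
                   np.repeat(np.sort_complex(np.linalg.eigvals(x)), 2)) # same spectrum
print("isometric, same spectrum")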


Corollary A.25 On a C*-algebra the C*-norm is unique.

Proof We apply Proposition A.24 to the identity of A viewed as a ∗-homomorphism from (A, ‖·‖_1) to (A, ‖·‖_2), where ‖·‖_1, ‖·‖_2 are C*-norms on A.

Corollary A.26 On a ∗-algebra, if two C*-norms are equivalent, they are equal.

Proof After completion they become C*-norms on the same C*-algebra, so by the preceding Corollary they coincide. More explicitly, if 0 < ‖x‖_1/‖x‖_2 = θ < 1, then 0 < ‖(x*x)^n‖_1/‖(x*x)^n‖_2 = θ^{2n} → 0.

Remark A.27 Let 𝒜 ⊂ A be a dense (resp. unital) ∗-subalgebra in a (resp. unital) C*-algebra A. Let π : 𝒜 → B(H) be a (resp. unital) ∗-homomorphism admitting a cyclic unit vector ξ ∈ H. Let f : 𝒜 → ℂ be defined by f(x) = ⟨ξ, π(x)ξ⟩. If ‖f‖ = 1 then ‖π‖ = 1 and hence π extends to a (resp. unital) ∗-homomorphism on A. Indeed, f extends to a form f′ ∈ A*_+ (see Remark A.22). For any x,y ∈ 𝒜 we have y*x*xy ≤ ‖x‖² y*y in the order of A, and hence f′(y*x*xy) ≤ ‖x‖² f′(y*y), which means f(y*x*xy) ≤ ‖x‖² f(y*y), or equivalently ⟨π(x)π(y)ξ, π(x)π(y)ξ⟩ ≤ ‖x‖² ⟨π(y)ξ, π(y)ξ⟩. Since π(𝒜)ξ is dense in H (and since 1 = ‖f‖ ≤ ‖π‖), we conclude that ‖π‖ = 1. Actually, the mere continuity of f with respect to the norm induced by A implies that of π.

Recall the following classical notions in operator theory.

Definition A.28 (Weak and strong operator topology) A net (T_i) of operators in B(H) converges in the weak operator topology (w.o.t. in short) to T ∈ B(H) if ⟨η, T_i ξ⟩ → ⟨η, T ξ⟩ for all η,ξ ∈ H. We say that T_i → T in the strong operator topology (s.o.t. in short) if ‖T_i ξ − T ξ‖ → 0 for all ξ ∈ H. If both T_i → T and T_i* → T* in s.o.t. then we say that T_i → T in the strong* topology.

Remark A.29 (Universal representation of a C*-algebra) Since the C*-norm is the same in all representations of a (complete) C*-algebra in B(H) whatever H may be, it often makes no difference to us which representation we use. However, if we choose a representation with (apparently redundant) multiplicity, certain questions involving comparisons between the weak* (or the weak operator) topology of B(H) and the weak topology can have a simpler answer. For instance, let A ⊂ B(H), let H^∞ denote the direct sum of countably many copies of H and let π : A → B(H^∞) be the direct sum of the embeddings A ⊂ B(H), so that π(a) (a ∈ A) acts "diagonally" on H^∞. Then the description of the trace class operators (see §A.10) shows that a net (T_i) ∈ B(H) converges weak* in B(H) if and only if π(T_i) converges in the w.o.t. in B(H^∞). This motivates consideration of the "universal" representation π_U of a C*-algebra A, which is defined as follows. Let S(A) be the set of states of A. For any f ∈ S(A), let H_f = L_2(f) and let π_f : A → B(H_f) be the cyclic representation (from the GNS construction); we then define H_U = ⊕_{f∈S(A)} H_f and the universal representation π_U as the direct sum

    A ∋ a → π_U(a) = ⊕_{f∈S(A)} π_f(a) ∈ B(H_U).

Note that, up to unitary equivalence, any cyclic representation can be identified to some πf (see Remark A.21) and any representation of A is a direct sum of cyclic


representations (which explains the term universal). In particular, it is clear that πU is an isometric representation of A. Remark A.30 Let (ai ) be a net in A. Then (ai ) converges for σ (A∗∗,A∗ ) to some a  ∈ A∗∗ if and only if πU (ai ) converges in the w.o.t. in B(HU ). If a  = a ∈ A this happens if and only if πU (ai ) → πU (a) in the w.o.t. of B(HU ). Indeed, ai → a  for some a  ∈ A∗∗ with respect to σ (A∗∗,A∗ ) if and only if (f (ai )) is Cauchy for any f ∈ A∗ or equivalently for any f ∈ S(A), and this is clearly implied by its w.o.t. counterpart in B(HU ). The converse is obvious since a → η,πU (a)ξ  is in A∗ for any η,ξ ∈ HU . Similarly πU (ai ) → πU (a) in the w.o.t. of B(HU ) if and only if ai → a in the weak topology of A.

A.15 Approximate units, ideals, and quotient C*-algebras

Most of the C*-algebraic questions we consider in these notes can be reduced to C*-algebras A with a unit element. When A is not unital, the role of the unit element is played by an approximate unit, and fortunately all C*-algebras have approximate units (see [240]). By a (bounded) approximate unit in a Banach algebra one usually means a bounded net of elements (x_i) such that for any x ∈ A, ‖x_i x − x‖ → 0 along the net. As usual we implicitly assume that our nets (x_i) are indexed by a directed set I. In the case of a C*-algebra A, it turns out that the whole set I = {x ∈ A_+, ‖x‖ < 1} is upward directed, meaning that for all x,y ∈ I there is z ∈ I such that x ≤ z and y ≤ z, so that the whole collection I = {x ∈ A_+, ‖x‖ < 1} forms a net (x_i) (with the rare feature that i → x_i is the identity!) and it can be checked (see [240, p. 26]) that this net is an approximate unit for A.
We will need the more refined key notion of "quasi-central approximate unit" (see [13]) for an ideal I ⊂ A in a C*-algebra A. Let q : A → A/I be the quotient mapping. Then there is a nondecreasing net (σ_i) in the unit ball of I with σ_i ≥ 0 such that for any a in A and any b in I

    ‖aσ_i − σ_i a‖ → 0    and    ‖σ_i b − b‖ → 0.    (A.18)

Such a net is called a "quasi-central approximate unit" for I ⊂ A. Here is a brief sketch of proof that they exist. Assume A ⊂ B(H). Let (x_i) be an approximate unit for I, with x_i ≥ 0 and ‖x_i‖ ≤ 1. We denote by σ the weak* topology in B(H). Let p be the weak* (or weak operator topology, since the net is bounded) limit of (x_i); thus p lies in the σ-closure of I. Clearly px = xp = x for any x ∈ I. Therefore, px = xp = x for any x in the σ-closure of I. Moreover, since I is an ideal, x_i y and y x_i are in I for any y ∈ A, from which we deduce that yp and py are in the σ-closure of I. This implies (take x = yp or x = py) that yp = pyp = py, so that p commutes with A. Therefore for any y ∈ A, yx_i − x_i y → 0 for σ. But if we choose for the embedding A ⊂ B(H) the universal representation of A (see Remark A.29), the weak* topology σ coincides on A with the weak topology (see Remark A.30). Then yx_i − x_i y → 0 weakly in A. Passing to convex combinations we can replace (cf. Mazur's theorem A.9) this weak limit by a norm limit, and we obtain the desired net (σ_i) with σ_i in the convex hull of {x_j | j ≥ i}. See [13] or [67] for more details.
The main properties we will use are summarized in the following.


Lemma A.31 Let A, I, q and (σ_i) be as previously. Then, for any a in A, we have both

    ‖σ_i^{1/2} a − a σ_i^{1/2}‖ → 0    and    ‖(1 − σ_i)^{1/2} a − a (1 − σ_i)^{1/2}‖ → 0.    (A.19)

Moreover we have

    ∀a ∈ A    ‖q(a)‖ = lim ‖a − σ_i a‖,    (A.20)

    ∀a,b ∈ A    lim sup ‖σ_i^{1/2} a σ_i^{1/2} + (1 − σ_i)^{1/2} b (1 − σ_i)^{1/2}‖ ≤ max{‖a‖, ‖q(b)‖},    (A.21)

    ∀a ∈ A    lim ‖σ_i^{1/2} a σ_i^{1/2} + (1 − σ_i)^{1/2} a (1 − σ_i)^{1/2} − a‖ = 0.    (A.22)

Proof The first assertion (A.19) is immediate using a polynomial approximation of t → √t, so we turn to (A.20). Fix ε > 0. Let x ∈ A be such that q(x) = q(a), ‖x‖ < ‖q(a)‖ + ε. Then a − x ∈ I implies ‖(1 − σ_i)(a − x)‖ → 0, hence ‖(1 − σ_i)a‖ ≤ ‖(1 − σ_i)x‖ + ‖(1 − σ_i)(a − x)‖; therefore, since ‖1 − σ_i‖ ≤ 1, we find lim sup ‖(1 − σ_i)a‖ ≤ ‖x‖ < ‖q(a)‖ + ε. Also ‖q(a)‖ ≤ ‖(1 − σ_i)a‖ implies ‖q(a)‖ ≤ lim inf ‖(1 − σ_i)a‖. This proves (A.20).
To verify (A.21) note that by (A.12) we have

    ‖σ_i^{1/2} a σ_i^{1/2} + (1 − σ_i)^{1/2} b (1 − σ_i)^{1/2}‖ ≤ ‖σ_i^{1/2} a*a σ_i^{1/2} + (1 − σ_i)^{1/2} b*b (1 − σ_i)^{1/2}‖^{1/2}

and hence

    ‖σ_i^{1/2} a σ_i^{1/2} + (1 − σ_i)^{1/2} b (1 − σ_i)^{1/2}‖ ≤ max{‖a‖, ‖b‖}.

Now if we replace b by b′ such that q(b′) = q(b) we have, by the second parts of (A.19) and (A.18), lim ‖(1 − σ_i)^{1/2}(b − b′)(1 − σ_i)^{1/2}‖ = lim ‖(1 − σ_i)(b − b′)‖ = 0, hence the left-hand side of (A.21) is ≤ max{‖a‖, ‖b′‖}. Taking the infimum over b′ we obtain (A.21).
Finally, by (A.19) we have a fortiori for any a in A

    ‖σ_i a − σ_i^{1/2} a σ_i^{1/2}‖ → 0    and    ‖(1 − σ_i)a − (1 − σ_i)^{1/2} a (1 − σ_i)^{1/2}‖ → 0,    (A.23)

from which (A.22) is immediate.

Lemma A.32 Let I ⊂ A be a (closed two-sided) ideal in a C*-algebra. Let q : A → A/I be the quotient map. Then ∀x ∈ A ∀ε > 0 ∃x_1 ∈ A with q(x_1) = q(x) such that ‖x_1‖ < ‖q(x)‖ + ε and ‖x_1 − x‖ ≤ ‖x‖ − ‖q(x)‖.

Proof Let (σ_i) be as before. We set x_1 = σ_i(x‖x‖^{-1}‖q(x)‖) + (1 − σ_i)x. We will show that when i is large enough this choice of x_1 satisfies the announced properties. First we have clearly q(x_1) = q(x) (since σ_i ∈ I). We introduce

    x_1′ = σ_i^{1/2}(x‖x‖^{-1}‖q(x)‖)σ_i^{1/2} + (1 − σ_i)^{1/2} x (1 − σ_i)^{1/2}.

Choosing i large enough, by the first assertion of the preceding lemma we may on the one hand assume ‖x_1 − x_1′‖ < ε/2 and, by (A.21), also ‖x_1′‖ ≤ ‖q(x)‖ + ε/2, whence ‖x_1‖ < ‖q(x)‖ + ε. On the other hand, we have ‖x − x_1‖ = ‖σ_i x‖x‖^{-1}(‖x‖ − ‖q(x)‖)‖ ≤ ‖x‖ − ‖q(x)‖.


Thus we obtain the following well-known fact:

Lemma A.33 For any x in A, there is x̃ in A such that q(x̃) = q(x) and ‖x̃‖_A = ‖q(x)‖_{A/I}.

Proof Using Lemma A.32 we can select by induction a sequence x, x_1, x_2, ... in A such that q(x_n) = q(x), ‖x_n‖ ≤ ‖q(x)‖ + 2^{-n} and ‖x_n − x_{n−1}‖ ≤ ‖x_{n−1}‖ − ‖q(x_{n−1})‖ ≤ 2^{-n+1}. Since it is Cauchy, this sequence converges and its limit x̃ has the announced property.

Remark A.34 When A is a dual space and I a weak* closed ideal, the situation is much simpler. Indeed, the net (A.18) now has a subnet that converges weak* to a limit p ∈ I. By (A.18) p is a self-adjoint projection in the center of A that is a unit element for I. Therefore the mapping x → px = xp is a projection from A to I that is also a weak* continuous ∗-homomorphism. We thus obtain a decomposition A = (1 − p)A ⊕ pA that can be rewritten equivalently as

    A ≃ (A/I) ⊕ I.    (A.24)

In other words, if we denote by Q : A → A/I the quotient map, the correspondence (1 − p)x → Q(x) is a ∗-isomorphism from (1 − p)A to A/I.

Remark A.35 Actually, in the preceding remark, we can use any bounded approximate unit (x_i) of I to produce the projection p. Indeed, passing to a subnet we may assume that x_i tends weak* to p ∈ I. Then since x_i x → x and (taking adjoints) x x_i* → x in norm for any x ∈ I, we derive px = x and xp* = x, so that in particular pp* = p* = p, so that p² = p (see Remark A.17 for clarification). For any y ∈ A we have yp ∈ I and py ∈ I since I is an ideal, and hence yp = p(yp) = (py)p = py.

Remark A.36 More generally if the C*-algebra A is a dual space and I ⊂ A a weak* closed left ideal (meaning AI ⊂ I), we claim that there is a (self-adjoint) projection P ∈ A such that I = AP. To check this let I′ = {x | x* ∈ I}. Then I′ is a weak* closed right ideal, and I ∩ I′ a weak* closed C*-subalgebra. As in the preceding remark, any net forming an approximate unit for the C*-algebra I ∩ I′ now has a subnet that converges weak* to a limit P ∈ I ∩ I′ that is the unit of I ∩ I′ and is a (self-adjoint) projection in A. Therefore we have I ∩ I′ = (I ∩ I′)P = P(I ∩ I′). We claim that I = AP. Indeed, let x ∈ I. Then x*x ∈ I and hence x*x ∈ I ∩ I′. This gives us Px*x = x*xP = Px*xP and hence (x − xP)*(x − xP) = 0. Thus x = xP which means x ∈ AP. Thus I ⊂ AP. Since P ∈ I the converse is obvious, proving the claim. See also [226, p. 24], [240, p. 123], or [146, p. 443].
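In the simplest commutative, finite-dimensional situation the norm-preserving lift of Lemma A.33 can be written down by hand; the following numpy sketch (our own toy example, with A = ℓ_∞^n and I the sequences supported on a subset S, so that A/I is ℓ_∞ over the complement of S) merely records it.

import numpy as np

rng = np.random.default_rng(4)
n = 8
S = np.array([0, 3, 5])                       # I = {x : x_j = 0 for j not in S}
x = rng.normal(size=n)

q_norm = np.max(np.abs(np.delete(x, S)))      # ||q(x)||_{A/I}: sup over the complement of S
x_tilde = x.copy()
x_tilde[S] = 0.0                              # a lift of q(x): agrees with x off S, vanishes on S
assert np.isclose(np.max(np.abs(x_tilde)), q_norm)   # ||x_tilde||_A = ||q(x)||_{A/I}
print(q_norm)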

A.16 von Neumann algebras and their preduals

For the convenience of the reader, we recall now a few facts concerning von Neumann algebras. A von Neumann algebra on a Hilbert space H is a self-adjoint subalgebra of


B(H ) that is equal to its bicommutant. For M ⊂ B(H ) we denote by M  (resp. M  ) its commutant (resp. bicommutant). By a well-known result due to Sakai (see [240, p. 133]), a C ∗ -algebra A is C ∗ -isomorphic to a von Neumann algebra if and only if it is isometric to a dual Banach space, i.e. if and only if there is a closed subspace X ⊂ A∗∗ such that X∗ = A isometrically. For instance the subspace X ⊂ A∗∗ formed of all the weak* continuous linear forms on A∗ clearly satisfies this. For a general dual Banach space A there may be several subspaces X ⊂ A∗∗ such that X∗ = A isometrically, however, when A is a dual C ∗ -algebra X is the unique one. Thus the predual is unique and is denoted by A∗ . In the case A = B(H ), the predual B(H )∗ can be identified with the space of all trace class operators on H , equipped with the trace class norm, and it is easy to check that a von Neumann algebra M ⊂ B(H ) is automatically σ (B(H ),B(H )∗ )-closed. Thus, when it is a dual space, Sakai’s theorem says that A can be realized as a von Neumann algebra on a Hilbert space H and its predual can be identified with the quotient of B(H )∗ by the preannihilator of A. Moreover, A is weak* separable if and only if it can be realized on a separable H . In the early literature, a C ∗ -algebra that is C ∗ -isomorphic to a von Neumann algebra is called a W ∗ -algebra. The latter term is less often used nowadays. Remark A.37 (Weak* continuous operations) Let M ⊂ B(H ) be a von Neumann subalgebra. Let (xi ) be a net in M. If xi → x ∈ M in the weak* sense then xi∗ → x ∗ and, for any a ∈ M, axi → ax, xi a → xa all in the weak* sense. In other words the mappings x → x ∗ , x → ax, and x → xa are weak* to weak* continuous. This is an easy consequence of Remark A.17. Remark A.38 (On isomorphisms of von Neumann algebras) The unicity of the predual is essentially equivalent to the following useful fact. Let π : M → M 1 be an isometric (linear) isomorphism between two C ∗ -algebras that are both dual spaces (and hence C ∗ -isomorphic to von Neumann algebras). Then π and π −1 are both continuous for the weak* topologies (i.e. “normal”). Indeed, π ∗ (M∗1 ) ⊂ M ∗ is automatically a predual of M, and hence must coincide with M∗ , and similarly of course for π −1 . In other words any C ∗ -isomorphism u : M1 → M2 from a von Neumann algebra onto another one is automatically bicontinuous for the weak−∗ topologies (σ (M1,M1∗ ) and σ (M2,M2∗ )) (see e.g. [240, vol. I p. 135] for more on this). Remark A.39 (Commutative von Neumann algebras) If a commutative C ∗ -algebra A is isomorphic to a von Neumann algebra then there is a locally compact space (T ,m) equipped with a positive Radon measure m such that A is isomorphic (as a C ∗ -algebra) to L∞ (T ,m). This is a classical structural result that summarizes “abstractly” the spectral theory of unitary operators. See [240, p. 109]. A concrete consequence is that any unitary U in A can be written as U = exp ix for some self-adjoint x ∈ A. This last fact (proved directly in [145, pp. 313–314]) does not hold in a general C ∗ -algebra, but some weaker version is available, see [145, p. 332]. Remark A.40 (The predual as a quotient of the trace class) The space B(H )∗ can be ∧

identified with the projective tensor product H ⊗H . This is a particular case of (A.1).


The duality is defined for all t = Σ_{k=1}^∞ x_k ⊗ y_k ∈ H̄ ⊗̂ H with Σ ‖x_k‖ ‖y_k‖ < ∞ and b ∈ B(H) by:

    ⟨b, t⟩ = Σ_k ⟨x_k, b y_k⟩.    (A.25)

We have then

    ‖t‖_{B(H)_*} = inf Σ_{k=1}^∞ ‖x_k‖ ‖y_k‖

where the infimum is over all possible representations of t in the form t = Σ_{k=1}^∞ x_k ⊗ y_k with Σ ‖x_k‖ ‖y_k‖ < ∞, and the duality is as in (A.25).
Let M ⊂ B(H) be a weak* closed unital ∗-subalgebra (in other words a von Neumann algebra). Let X = B(H)_*/M_⊥ where M_⊥ is the pre-orthogonal of M. Standard functional analysis tells us that X* = M isometrically, so that M_* ⊂ M* can be identified isometrically with X. Thus any f ∈ M_* can be represented by a (nonunique) element T of B(H)_* (or equivalently of the trace class) and ‖f‖_{M_*} = inf ‖T‖_{B(H)_*} where the infimum runs over all possible such T's. The next lemma is a useful refinement that is special to von Neumann algebras. The point is that we can obtain equality in (A.26), so the preceding infimum is attained.

Lemma A.41 Let f ∈ M_*.
(i) There are x_k, y_k ∈ H such that

    Σ_{k=1}^∞ ‖x_k‖ ‖y_k‖ = ‖f‖_{M_*},    (A.26)

and

    ∀b ∈ M    f(b) = Σ_k ⟨y_k, b x_k⟩.    (A.27)

Moreover, there is a partial isometry u in M such that f(u) = ‖f‖_{M_*}.
(ii) Let u ∈ M be a partial isometry such that f(u) = ‖f‖_{M_*}. Then for any b ∈ B(H) we have

    f(uu*b) = f(bu*u) = f(b).    (A.28)

Moreover there is a positive g ∈ M_* such that for any b ∈ M

    f(b) = g(u*b).    (A.29)

(iii) If f is a (normal) state on M, (i) holds with (x_k) = (y_k) and there is a normal state f̃ on B(H) extending f.

Proof For the classical fact in (i) we refer to [240, Ex. 1, p. 156], [226, p. 78], or any other standard text for a proof. In [146, th. 7.1.12 p. 462] (i) is proved for positive f's. Then (i) follows by the polar decomposition of f (see [146, th. 7.3.2 p. 474]) to which (i) is closely related, and which is essentially the same as (A.29). Since the set C_f = {x ∈ B_M | f(x) = ‖f‖_{M_*}} is convex and weak* compact it has extreme points. Since it is a face of B_M its extreme points are extreme points in B_M, and it is well known (see [240, p. 48]) that the latter are partial isometries. This shows that C_f contains a partial isometry u ∈ M.


(ii) By spectral theory, it is known that the trace class operator associated to f as in (A.27) can be rewritten in the form

    f(b) = Σ_{i∈I} λ_i ⟨d_i, b e_i⟩,    (A.30)

where λ_i > 0 for any i ∈ I (I is at most countable) satisfies Σ_{i∈I} λ_i = ‖f‖_{M_*} and (e_i), (d_i) are orthonormal systems. After normalization, we may assume f(u) = ‖f‖_{M_*} = 1. The reader should keep in mind that for vectors in a Hilbert space, ‖x‖ = ‖y‖ = ⟨y,x⟩ implies x = y. We have Σ_{i∈I} λ_i ⟨d_i, u e_i⟩ = 1 = Σ_{i∈I} λ_i. This forces ⟨d_i, u e_i⟩ = 1, and hence d_i = u e_i, for all i ∈ I. Therefore 1 = ‖u e_i‖² = ⟨e_i, u*u e_i⟩ and hence u*u e_i = e_i. Similarly uu* d_i = d_i for all i ∈ I. Using this, (A.28) is an immediate consequence of (A.30). Let g(x) = f(ux) (x ∈ M). Then g(1) = ‖f‖_{M_*} = ‖g‖_{M_*} and hence g ≥ 0. By (A.28) we have f(b) = g(u*b).
(iii) If f is a state, we have 1 = f(1) = ‖f‖ so that 1 = Σ_1^∞ ‖x_k‖ ‖y_k‖ = Σ_1^∞ ⟨y_k, x_k⟩ and hence (assuming ‖x_k‖ ‖y_k‖ ≠ 0) we must have ‖x_k‖ ‖y_k‖ = ⟨y_k, x_k⟩ and hence x_k = y_k for all k. Then

    f̃(b) = Σ_1^∞ ⟨x_k, b x_k⟩    (b ∈ B(H))    (A.31)

is the desired extension. A linear map u : M → M 1 between von Neumann algebras is called normal if it is continuous for the σ (M,M∗ ) and σ (M 1,M∗1 ) topologies, or equivalently if there is a map v : M∗1 → M∗ of which u is the adjoint. Remark A.42 In other words, u : M → M 1 is normal if and only if the map ∗ u∗ : M 1 → M ∗ satisfies u∗ (M∗1 ) ⊂ M∗ . Since u∗ is norm continuous, it suffices for this to know that u∗ (V ) ⊂ M∗ for some norm-total subset V ⊂ M∗1 . For instance, assuming M 1 ⊂ B(H 1 ), we may take for V the set formed of the (normal) linear forms f on M 1 of the form f (x) = ξ ,xξ  with ξ,ξ  running over a dense linear subspace of H 1 . Using (A.27) one checks easily that the latter V is dense in M∗1 . In particular, the normal linear forms on M are exactly those that are in the predual M∗ ⊂ M ∗ . Furthermore: Remark A.43 (GNS for normal forms) The GNS representation πf : M → B(Hf ) associated as in §A.13 to a normal form f : M → C is normal. This is a particular case of the preceding remark. Just observe that x → f (a ∗ xb) = πf (a)ξf ,πf (x)πf (b)ξf  is in M∗ if f ∈ M∗ . The following fact is well known. ∗ Theorem A.44 A positive linear

functional ϕ ∈ M* is normal if and only if it is completely additive, meaning ϕ(Σ p_i) = Σ ϕ(p_i) for any family (p_i) of mutually orthogonal projections in M.

Proof By definition, “ϕ ∈ M∗ ” means that there are x,y in 2 (H ) such that ϕ(a) =  axn,yn  for all a in M. Clearly this implies the complete additivity of ϕ. The problem is to check the converse. To check this we will use the following facts. Fact 1 Assume ϕ completely  additive. If (Pj )j ∈I is a family of mutually orthogonal projections with sum P = Pj , such that Pj ·ϕ ∈ M∗ for all j , then P ·ϕ ∈ M∗ . Here


P · ϕ (resp. ϕ · P , resp. P · ϕ · P ) is the linear form on M defined by P · ϕ(x) = ϕ(xP), (resp. ϕ · P (x) = ϕ(Px), resp. P · ϕ · P (x) = ϕ(PxP)). Indeed, fix ε > 0 and let J ⊂ I be a finite subset such that (here we use complete additivity) ϕ P − J Pj < ε and let PJ = J Pj . Then by Cauchy–Schwarz we have |(P − PJ ) · ϕ(x)|2 ≤ ϕ(x ∗ x)ε ≤ ϕ(1)εx2

∀x ∈ M

and hence P · ϕ − PJ · ϕM ∗ ≤ (ϕ(1)ε)1/2 . Clearly PJ · ϕ ∈ M∗ ; therefore since M∗ is norm closed in M ∗ we conclude that P · ϕ ∈ M∗ . ∗ and P is a projection in M such that P · ϕ · P ∈ M , then P · ϕ ∈ Fact 2 If ϕ ∈ M+ ∗ . Indeed, by (iii) in Lemma A.41 there is (xn ) is 2 (H ) such that P · ϕ · P (a) = M ∗  xn,axn . Then by Cauchy–Schwarz again 1/2  aPxn 2 ϕ(1)1/2 |P · ϕ(a)| ≤ ϕ(Pa∗ aP)1/2 ϕ(1)1/2 = n

which implies that there is (yn ) in 2 (H ) (with norm ≤ ϕ(1)1/2 ) such that P · ϕ(a) =  yn,aPxn . ∗ if ϕ(q) ≤ ψ(q) for any projection q in M then ϕ ≤ ψ. Fact 3 Given ϕ,ψ in M+

Indeed, given x in M+ , to show ϕ(x) ≤ ψ(x) it suffices to show that ϕ|L ≤ ψ|L where L is the commutative von Neumann algebra generated by x. Then this fact becomes obvious (e.g. because we can approximate x in norm by nonnegative step functions). We now complete the proof of the theorem. Let (Pj )j ∈I be a maximal family of mutually orthogonal (nonzero) projections such that Pj · ϕ is normal for all j . (This exists  By Fact 1, it suffices to show that  by Zorn’s lemma or, say, transfinite induction). Pj = 1. Assume to the contrary that Q = 1 − Pj = 0. We will show that there is a projection 0 = Q ≤ Q such that Q · ϕ ∈ M∗ ; this will contradict maximality and thus prove that Q = 0. Pick any h in H such that Qh = 0 and adjust its normalization so that ϕ(Q) < h,Qh. Let ωh ∈ M∗ be defined by ωh (a) = h,ah. Let (Qβ ) be a maximal family of mutually orthogonal (nonzero)  projections in M such that Qβ ≤ Q and ωh (Qβ ) ≤ ϕ(Qβ ). Then we must have Qβ = Q, because otherwise (by complete additivity of ωh ) we find   ωh (Qβ ) ≤ ϕ(Qβ ) ≤ ϕ(Q), ωh (Q) =  which contradicts our choice of h. Let then Q = Q − Qβ = 0. By maximality of (Qβ ) we must have ωh (q) > ϕ(q) for any projection 0 < q ≤ Q in M. By Fact 3 this implies Q · ωh · Q ≥ Q · ϕ · Q , so that Q · ϕ · Q ∈ M∗ and hence by Fact 2, Q · ϕ ∈ M ∗ .  As announced this contradicts the maximality of (Qβ ), therefore we conclude Qβ = 1 and ϕ ∈ M∗ by Fact 1. ∗ is such that 0 ≤ ψ ≤ ϕ then Remark A.45 Theorem A.44 shows that if ψ ∈ M+ ϕ ∈ M∗ ⇒ ψ ∈ M∗ . The latter fact can also be derived from part (iii) in Lemma A.41 and Lemma 23.11.

Theorem A.46 (von Neumann bicommutant theorem) Let A ⊂ B(H ) be a unital ∗-subalgebra. The closures of A either for the weak operator topology (w.o.t.), the


strong operator topology (s.o.t.) (see Definition A.28) or for the weak* topology (σ (B(H ),B(H )∗ )) all coincide, and they are equal to the bicommutant A . Thus to check that x ∈ B(H ) belongs to a von Neumann subalgebra M ⊂ B(H ) it suffices to check that xy = yx for any y ∈ M  . See e.g. [145, p. 326] or [240, p. 74] for a proof. Theorem A.47 (Kaplansky’s density theorem) Let A ⊂ B(H ) be a ∗-subalgebra. The closures of BA either for the weak operator topology (w.o.t.), the strong operator topology (s.o.t.) (see Definition A.28) or for the weak* topology (σ (B(H ),B(H )∗ )) all coincide, and if A is unital they are equal to the unit ball of M = A . See e.g. [145, p. 329] or [240, p. 100] for a proof. Throughout what follows M denotes a von Neumann algebra. Note that it is an elementary fact (in general topology) that the w.o.t. coincides with the weak* topology on any bounded subset of M. Indeed, since such a set is relatively weak* compact, σ (M,M∗ ) coincides on it with σ (M,D) for any total separating subset of M∗ . Thus BA clearly has the same closure in both. But it is nontrivial that whenever A is weak*-dense in M, BM ∩ A

σ (M,M∗ )

= BM .

Since BM ∩ A is a bounded, convex subset of M, its closure is the same for σ (M,M∗ ), for the strong operator topology or even for the so-called strong* operator topology in which a net of operators Ti ∈ B(H ) converges to T ∈ B(H ) if both Ti h → Th and Ti∗ h → T ∗ h for any h ∈ H . Remark A.48 Let π : M → B(H ) be an injective normal ∗-homomorphism. We will show that its range π(M) is weak*-closed. Then by Remark A.38, π −1 : π(M) → M is normal and π(M) is a von Neumann subalgebra in B(H ), isomorphic to M. To show that π(M) is weak*-closed, by Kaplansky’s theorem (A.47) it suffices to show that the unit ball of π(M) is weak*-closed. But the latter, being the image of BM which is σ (M,M∗ )-compact, is itself weak*-compact. Remark A.49 Let π : M → B(H ) be a normal ∗-homomorphism. We claim that π(M) is weak* closed and isomorphic to M/ ker(π ). Indeed, let I = ker(π ). Clearly I is a weak*-closed (two-sided, self-adjoint) ideal so that the quotient M/I is a dual with predual (M/I)∗ = I⊥ = {f ∈ M∗ | f (x) = 0 ∀x ∈ I}. The injective ∗-homomorphism π1 : M/I → π(M) canonically associated to π being clearly normal, the claim follows from the preceding Remark A.48. By Remark A.36 we have M  (M/I) ⊕ I.

A.17 Bitransposition: biduals of C ∗ -algebras Let A be a C ∗ -algebra. The bidual of A can be equipped with a C ∗ -algebra structure as follows: let πU : A → B(H ) be the universal representation of A (i.e. the direct sum of all cyclic representations of A as in Remark A.29). Then the bicommutant πU (A) , which is a von Neumann algebra on H , is isometrically isomorphic (as a Banach space) to the bidual A∗∗ (see the proof of Theorem A.55). Using this isomorphism


as an identification, we will view A∗∗ as a von Neumann algebra, so that the canonical inclusion A → A∗∗ is a ∗-homomorphism. This inclusion possesses a fundamental extension property (see Theorem A.55), that we first discuss in a broader Banach space framework. Let X,Y be Banach spaces. Then any linear map u : X → Y ∗ admits a (unique) ¨ = u. Indeed, just weak* to weak* continuous extension u¨ : X∗∗ → Y ∗ such that u consider u∗ |Y : Y → X∗ and set u¨ = (u∗ |Y )∗ .

(A.32)

X∗∗ ∗ )∗ u=(u ¨ |Y

X

u

Y∗

The unicity of u¨ follows from the σ (X∗∗,X∗ )-density of BX in BX∗∗ . Since u¨ : X∗∗ → Y ∗ is weak* to weak* continuous, we can describe its value at any point x  ∈ X∗∗ like this: for any net (xi ) in X with sup xi  < ∞ tending weak* to x  , we have (the limit being meant in σ (Y ∗,Y )) u(x ¨  ) = lim u(xi ).

(A.33)

Indeed, for any y ∈ Y we have u(x ¨  ),y = x ,u∗ |Y y = limxi ,u∗ |Y y = limu(xi ),y. We record here two simple facts about maps such as u. ¨ Proposition A.50 Let X be a Banach space, D ⊂ X a closed subspace, so that we have D ∗∗ ⊂ X∗∗ as usual. Let iD : D → D ∗∗ denote the canonical injection. The following properties of a bounded linear map T : X → D ∗∗ are equivalent: (i) T|D = iD . (ii) The mapping P = T¨ : X∗∗ → D ∗∗ is a linear projection onto D ∗∗ . Moreover, when these hold the projection P is continuous with respect to the weak* topologies of X∗∗ and D ∗∗ , and P  = T . Proof Assume (i). Any x  ∈ BX∗∗ is the weak* limit of a net (xi ) in BX , and we have (with limit meant for σ (D ∗∗,D ∗ )) T¨ (x  ) = lim T¨ (xi ). If x  ∈ D ∗∗ we can choose xi ∈ BD and then T¨ (x  ) = lim T¨ (xi ) = lim xi = x  . This shows (ii) with P  ≤ T . Conversely, if (ii) holds then T = T¨|X clearly satisfies (i) and T  ≤ P . Remark A.51 Let u : X → B be a bounded linear map between Banach spaces. Let iB : B → B ∗∗ be the inclusion and let v = iB u : X → B ∗∗ . Then v¨ = u∗∗ . Indeed, with the notation in the preceding proof we have (with limits all meant for σ (B ∗∗,B ∗ )) ¨ i ) = lim u(xi ) = u∗∗ (x). v(x ¨  ) = lim v(x Remark A.52 When X is a dual space (say X = Y ∗ ) there is a contractive projection P : X∗∗ → X. Indeed, if u = IdX then P = u¨ is such a projection. Remark A.53 (The bidual as solution of a universal problem) Let X ⊂ Z be an isometric inclusion of Banach spaces. Assume that Z is a dual space and that for any Y u : Z → Y ∗ extending u with and any u : X → Y ∗ there is a unique weak* continuous 


 u = u. Then it is an easy exercise to see that Z is isometrically isomorphic to X∗∗ via an isomorphism that transforms the inclusion X → Z into iX : X → X∗∗ (and  u into u). ¨ Remark A.54 (Bidual of subspace or quotient) Let I ⊂ X be a closed subspace of a Banach space X. The space I ∗∗ can be naturally identified with the σ (X∗∗,X∗ )w∗ ⊂ X∗∗ of I in X∗∗ . Indeed, if v : I → X denotes the inclusion map, closure I w∗ ∗∗ then v : I ∗∗ → X∗∗ is an isometric embedding with range I . Similarly, the w∗ ∗∗ ∗∗ can be naturally identified with (X/I) . More precisely, quotient space X /I let q : X → X/I be the quotient map, consider u = iX/I q : X → (X/I)∗∗ . Then w∗

u¨ : X∗∗ → (X/I)∗∗ is a metric surjection such that ker(u) ¨ = I , and hence u¨ defines w∗ an isometric isomorphism X∗∗ /I → (X/I)∗∗ . These classical facts follow from the Hahn–Banach theorem. Let A be a C ∗ -algebra and let S denote the set of states of A. For each f ∈ S, let πf : A → B(Hf ) denote the GNS representation associated to f , so that there is a unit vector ξf ∈ Hf such that ∀a ∈ A

f (a) = ξf ,πf (a)ξf .

∀a ∈ A

f (a) = ξf ,πU (a)ξf .

(A.34)

As before (see Remark A.29), we denote by πU : A → ⊕ f ∈S B(Hf ) ∞ ⊂ B(⊕f ∈S Hf ) the universal representation taking a ∈ A to the block diagonal operator with coefficients (πf (a))f ∈S . Let H = ⊕f ∈S Hf and let ξf denote the unit vector of H with coefficients equal to 0 except at the f -place where it is equal to ξf . Then we have



(A.35)

Theorem A.55 (C ∗ -algebra structure on A∗∗ ) Let A be a C ∗ -algebra. There is a (unique) C ∗ -algebra structure on A∗∗ (with the same norm) satisfying the following: (i) The canonical inclusion iA : A → A∗∗ is a ∗-homomorphism. (ii) For any von Neumann algebra M and any ∗-homomorphism π : A → M the mapping π¨ : A∗∗ → M is a ∗-homomorphism. Proof Let M = πU (A) = πU (A)

weak∗

.

Note that πU : A → πU (A) is isometric. Indeed, for any a ∈ A we have a2 = a ∗ a = supf ∈S f (a ∗ a) = supξf ,πU (a ∗ a)ξf  ≤ πU (a ∗ a) = πU (a)2 ≤ a2 . From (A.35) one sees that the correspondence a → πU (a) is a homeomorphism from (BA,σ (A,A∗ )) to (BM,w.o.t.) or equivalently (by Remark A.11) (BM,σ (M,M∗ )). More precisely, the correspondence (xi ) → (πU (xi )) is a bijection from the set of σ (A,A∗ ) – Cauchy nets in BA to that of σ (M,M∗ ) – Cauchy nets in BπU (A) (over the same index set). Taking (A.33) and Kaplansky’s density theorem into account, this means that π¨ U defines a bijection from BA∗∗ to BM . In other words, π¨ U : A∗∗ → M is an isometric isomorphism. Thus we can equip A∗∗ with a C ∗ -algebra structure


by transplanting that of M. This means that we define the product and involution in A∗∗ as ∀x ,y  ∈ A∗∗

x  · y  = π¨ U−1 (π¨ U (x  )π¨ U (y  )) and x ∗ = π¨ U−1 (π¨ U (x  )∗ ).

Since π¨ U extends πU the property (i) is immediate. To check the second one we observe that since any π : A → M decomposes as a direct sum of cyclic representation, it suffices to check it assuming that π has a cyclic vector. Then π is unitarily equivalent to πf for some f (see Remark A.21), so that we are reduced to the case π = πf : A → B(Hf ), and M = πf (A) . Since the latter is weak* closed it suffices to prove that π¨ f : A∗∗ → B(Hf ) is a ∗-homomorphism, or equivalently that π¨ f π¨ U−1 : M → B(Hf ) is one. This turns out to be very easy: indeed, if we denote 

by Qf : ⊕ f ∈S B(Hf ) ∞ → B(Hf ) the coordinate projection, which is clearly a weak* continuous ∗-homomorphism, we have πf = Qf πU , and hence by (A.33) π¨ f = Qf π¨ U , from which we see that π¨ f π¨ U−1 |M = Qf . Lastly the uniqueness follows from the observation that if A∗∗ bis is the same Banach space as A∗∗ but with another C ∗ -algebra structure then (ii) with π equal to the ∗∗ → A∗∗ for which the inclusion A ⊂ A∗∗ bis leads to a ∗-homomorphism π¨ : A bis ∗∗ ∗∗ underlying linear map is the identity of A . This means A∗∗ bis is identical to A . Remark A.56 By Remark A.51, if π : A → B is a ∗-homomorphism between C ∗ -algebras so is π ∗∗ : A∗∗ → B ∗∗ , since π ∗∗ = v¨ with v = iB π and v is a ∗-homomorphism. + = BA ∩ A+ . Let π : A → B(H ) be a Remark A.57 For any C ∗ -algebra A, let BA w∗ ∗-homomorphism. Let M = π(A) ⊂ B(H ) where the closure is meant in the weak* sense. We claim that + + = Bπ(A) BM

w∗

.

w∗

+ + + The inclusion Bπ(A) ⊂ BM is obvious. To show the converse, let x ∈ BM , then ∗ x = y y for some y ∈ M. By Kaplansky’s theorem (A.47), there is a bounded net (yi ) in π(A) such that yi → y in s.o.t. Then yi∗ yi → y ∗ y = x in w.o.t. and hence weak* w∗

+ . This proves the claim. (since the net is bounded), so that x ∈ Bπ(A) Applying this to the universal representation we obtain: + + BA ∗∗ = BA

σ (A∗∗,A∗ )

.

(A.36)

We have a natural identification Mn (A)∗∗  Mn (A∗∗ ) as vector spaces. A moment of thought shows that this is isometric: Proposition A.58 The identification Mn (A)∗∗  Mn (A∗∗ ) is an (isometric) ∗isomorphism. Proof We know that iA : A → A∗∗ is a ∗-homomorphism. It follows that IdMn ⊗ iA : Mn (A) → Mn (A∗∗ ) is also one. Let σ = IdMn ⊗iA . By the characteristic property of biduals σ¨ : Mn (A)∗∗ → Mn (A∗∗ ) is a ∗-homomorphism. The latter must be an isomorphism since Mn (A)∗∗ and Mn (A∗∗ ) can both be identified as Banach spaces with the direct sum of n2 copies of A∗∗ .


Remark A.59 Let I ⊂ A be an ideal as in §A.15. Then I ∗∗ ⊂ A∗∗ is a weak* closed ideal and A∗∗ /I ∗∗ = (A/I)∗∗ . By (A.24), we have a canonical identification A∗∗  A∗∗ /I ∗∗ ⊕ I ∗∗  (A/I)∗∗ ⊕ I ∗∗ .

(A.37)

Indeed, this follows by taking π equal to the canonical map π : A → A/I ⊂ (A/I)∗∗ . Then π¨ : A∗∗ → (A/I)∗∗ is a ∗-homomorphism onto (A/I)∗∗ with kernel I ∗∗ (see Remark A.54). Then (A.37) follows from Remark A.34. Remark A.60 (Important warning) When M ⊂ B(H ) is a von Neumann algebra, its bidual M ∗∗ (just like any C ∗ -algebra bidual) is itself isomorphic to a von Neumann algebra so that M ∗∗ can be realized as a weak* closed ∗-subalgebra M ∗∗ ⊂ B(H). It is important to be aware that there are two distinct embeddings of M in M ∗∗ . The first one is of course the canonical inclusion iM : M → M ∗∗ . This is a unital faithful ∗-homomorphism that in general is not normal. Its range iM (M) is weak*-dense in M ∗∗ . In particular, being not weak* closed in general its range is not a von Neumann subalgebra of B(H). The second one appears when one considers the mapping u¨ : M ∗∗ → M where u is the identity on M. We know this is a unital weak* continuous (i.e. normal) ∗-homomorphism, so that I = ker(u) ¨ is a weak* closed two-sided ideal. As observed in Remark A.34 applied to M ∗∗ there is a central projection p ∈ M ∗∗ such that I = pM ∗∗ and the mapping ψM : M → M ∗∗ defined by ψM (x) = (1−p)x is a normal embedding of M in M ∗∗ , so that its range ψM (M) is a weak* closed subalgebra of M ∗∗ . However, if dim(M) = ∞, ψM is not unital and the unit of ψM (M) is 1 − p. The confusion of these two embeddings can be a source of mistakes for beginners (as the author remembers!).

A.18 Isomorphisms between von Neumann algebras We would like to describe more precisely the structure of isomorphisms for von Neumann algebras. Let M ⊂ B(H ) and N ⊂ B(H) be von Neumann subalgebras. An isomorphism W : M → N of the form W(x) = U −1 xU for some unitary U : H → H  = K ⊗2 H and N = {IdK ⊗ x | x ∈ M} will be called a spatial isomorphism. When H the isomorphism A : M → N defined by A(x) = IdK ⊗ x will be called an amplification. When H ⊂ H is invariant under M (this means H = p(H ) for some p ∈ M  ), any ∗-homomorphism V : M → N ⊂ B(H) of the form V(x) = PH x|H (x ∈ M) where PH : H → H denotes the orthogonal projection viewed as acting into H (or V(x) = pxp viewed as an element of B(H)) will be called a compression. The following is classical. Theorem A.61 Any weak* continuous (also called “normal”) ∗-homomorphism π : M → N can be written as a composition π = WVA for some amplification A : M → M 1 , compression V : M 1 → N 1 and spatial isomorphism W : N 1 → N , where M 1 ⊂ B(H 1 ) and N 1 ⊂ B(K 1 ) are von Neumann algebras. If π is an isomorphism, then V must be an isomorphism. Sketch of proof Recall M ⊂ B(H ) and N ⊂ B(H). By the usual decomposition of π as a direct sum of cyclic representations, it suffices to prove the statement in the cyclic


case. Let ξ ∈ H be a cyclic unit vector for π . Then x → f (x) = ξ,π(x)ξ  is a normal state on M. Let f: B(H ) → C be a normal state extending f (see Lemma A.41). By (A.31) there is a unit vector η = ei ⊗xi ∈ 2 ⊗2 H such that f(b) = η,[Id⊗b]η. Then a simple verification shows that the correspondence π(x)ξ → [Id ⊗ x]η (x ∈ M) extends to an isometric embedding S : H ⊂ 2 ⊗2 H such that S(H ) is invariant under M 1 = [Id ⊗ M], and moreover ∀x ∈ M

π(x) = S ∗ [Id ⊗ x]S.

Let A(x) = Id⊗x (x ∈ M). Let U : H → S(H ) be the unitary that is the same operator as S but with range S(H ). We obtain π(x) = U −1 (PS(H ) A(x)|S(H ) )U (x ∈ M) or equivalently π = WVA with W(·) = U −1 · U and V(·) = PS(H ) ·|S(H ) . For a more detailed proof see [72] (p. 55 in the French edition, p. 61 in the English one).

A.19 Tensor product of von Neumann algebras

Let M ⊂ B(H) and N ⊂ B(K) be von Neumann algebras. We have a natural embedding M ⊗ N ⊂ B(H ⊗_2 K) of the algebraic tensor product into B(H ⊗_2 K). We define the tensor product in the von Neumann sense M ⊗̄ N as follows:

    M ⊗̄ N = the weak* closure of M ⊗ N in B(H ⊗_2 K),

and by the bicommutant Theorem A.46 we also have M ⊗̄ N = (M ⊗ N)″. In particular, with this definition we have B(H) ⊗̄ B(K) = B(H ⊗_2 K).
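In finite dimensions the weak* closure is not needed (the algebraic tensor product is already closed), and everything can be seen with Kronecker products; here is a small numpy illustration (ours) for M = M_2 and N = M_3 acting on ℂ² ⊗ ℂ³ = ℂ⁶.

import numpy as np

rng = np.random.default_rng(6)
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
b = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

a1 = np.kron(a, np.eye(3))          # a (x) 1  in B(C^2 (x) C^3)
b1 = np.kron(np.eye(2), b)          # 1 (x) b
assert np.allclose(a1 @ b1, np.kron(a, b))      # (a (x) 1)(1 (x) b) = a (x) b
assert np.allclose(a1 @ b1, b1 @ a1)            # the two factors commute
# the operator norm is multiplicative on elementary tensors:
assert np.isclose(np.linalg.norm(np.kron(a, b), 2),
                  np.linalg.norm(a, 2) * np.linalg.norm(b, 2))
print("tensor product checks passed")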

A.20 On σ -finite (countably decomposable) von Neumann algebras A von Neumann algebra is called σ -finite or countably decomposable (the terminology is not unanimous) if it admits a normal faithful state. Equivalently, this means that any family of mutually orthogonal nonzero (self-adjoint) projections in M is at most countable. Any von Neumann algebra on a separable Hilbert space is σ -finite. Recall that a vector ξ ∈ H is separating for M ⊂ B(H ) if m ∈ M,mξ = 0 ⇒ m = 0. The following basic facts are classical: Lemma A.62 Let M ⊂ B(H ) be a von Neumann algebra and let ξ ∈ H . Then ξ is cyclic for M if and only if it is separating for M  . Theorem A.63 Any σ -finite von Neumann algebra can be realized for some H as a von Neumann subalgebra M ⊂ B(H ) in such a way that both M and M  have a cyclic vector. Remark A.64 (On the nonseparable case) In the commutative case, saying that M is σ -finite is the same as saying that M is (isomorphic to) the L∞ -space of a σ -finite measure space ( ,μ). Equivalently, there is f ∈ L1 (μ) with f > 0 almost everywhere. If M ⊂ B(H ) with H separable then M is σ -finite, but this sufficient condition is not necessary, even in the commutative case (consider e.g. = {−1,1}I with the usual product probability and I uncountable).


Many results on von Neumann algebras are easier to handle in the σ-finite (=countably decomposable) case where almost all the interesting examples lie. A classical way to reduce consideration to the latter case is via the following fundamental structural result:

Theorem A.65 (Fundamental reduction to σ-finite case) Any von Neumann algebra M admits a decomposition as a direct sum

    M ≃ (⊕_{i∈I} B(H_i) ⊗̄ N_i)_∞

where the N_i's are σ-finite (=countably decomposable) and the H_i's are Hilbert spaces.

See [72, ch. III, §1, lemma 7; p. 224 in the French edition and p. 291 in the English one] for a detailed proof. Note that if M is σ-finite we can take for I a singleton with N_i = M and H_i = ℂ.

Corollary A.66 For any f ∈ M_* there is a (self-adjoint) projection p ∈ M such that pMp is σ-finite and f(x) = f(pxp) for any x ∈ M.

Proof With the notation of Theorem

A.65 we may assume Ni ⊂ B(Hi ) so that M ⊂ B(H ) with H = ⊕i∈I Hi ⊗2 Hi 2 . There are xk ,yk ∈ H such that (A.27) holds. We can clearly find a countable subset J ⊂ I and separable subspaces Ki ⊂ Hi such that Hi )2 viewed as a subspace of H . Let xk ,yk ∈ K for  all k ≥ 1 with K = (⊕i∈J Ki ⊗2 ¯ i p = PK = i∈J PKi ⊗ 1Ni . Then pMp  ⊕ i∈J B(Ki )⊗N ∞ is σ -finite and p has the required property.

A.21 Schur’s lemma A subspace E ⊂ H is called “invariant” under an operator T ∈ B(H ) if T (E) ⊂ E. It is called “reducing” if it is also invariant under T ∗ . In the latter case the orthogonal projection PE commutes with T . Let G be a discrete group. Consider an irreducible unitary representation π : G → B(Hπ ) with dim(Hπ ) < ∞. By “irreducible” we mean that there is no nontrivial subspace of Hπ that is left invariant by π(G) (the range of π ). Note that since π(G) is a self-adjoint subset of B(Hπ ), a subspace E ⊂ Hπ is invariant under π(G) if and only if the orthogonal projection PE commutes with the operators in π(G). In that case, we have π(t)PE = PE π(t)PE = PE π(t) and the mapping t → π(t)|E can be viewed as a unitary representation of G in B(E) such that π(t) = π(t)|E ⊕ π(t)|E ⊥ . By definition π is irreducible if and only if there is no nontrivial decomposition of this type, nontrivial meaning that {0} = E = Hπ . Equivalently, the commutant of π(G) (which is a C ∗ -algebra) is equal to CI (here we denote by I the identity on Hπ ). Indeed, the commutant is clearly linearly spanned by its self-adjoint elements; their spectral projections being orthogonal projections commuting with π(G) must be equal to either 0 or the identity, and hence they must be in CI . Thus π is irreducible if and only if the commutant of π(G) consists of scalar multiples of the identity. Remark A.67 Any finite-dimensional unitary representation can be decomposed as a direct sum of irreducible ones.


Indeed, if it is irreducible this is obvious, and if not then it is the sum of representations of lower dimensions, so that we can obtain the result by induction on the dimension (starting with dimension 1, which is obviously irreducible).
Let S_N denote the symmetric group, i.e. the set of permutations of a set with N elements. Let χ = N^{-1/2} Σ_{k=1}^N e_k. The following simple example is useful:

Lemma A.68 The natural unitary representation π : S_N → B(ℓ_2^N) that acts on ℓ_2^N by permuting the basis vectors (i.e. π(σ)(e_k) = e_{σ(k)}) decomposes as the direct sum of the trivial representation on ℂχ and an irreducible representation π^0 that is the restriction of π to χ^⊥. More generally, if a group G acts by permutation on {1,2,...,N} in a bitransitive way, meaning that for any i ≠ j and i′ ≠ j′ there is σ ∈ G such that σ(i) = i′ and σ(j) = j′, then the corresponding unitary representation π^0 = π_{|χ^⊥} : G → B(χ^⊥) is irreducible on χ^⊥.

Proof Let G be a group acting bitransitively on {1,2, . . . ,N} with associated unitary representation π : G → B(N 2 ). A fortiori the action is transitive, so that π(σ )χ = χ for any σ ∈ G, and hence also π(σ )χ ⊥ = χ ⊥ . Therefore π = π|Cχ ⊕ π|χ ⊥ .

The irreducibility of π 0 = π|χ ⊥ can be checked as follows. Using transitivity and bitransitivity one checks easily that the commutant of the range of π is formed of matrices [aij ] that are constant both on the diagonal and outside of it. These are matrices in the linear span of the identity and the orthogonal projection P0 onto Cχ, or equivalently span[P0,I − P0 ]. From this one deduces easily that the commutant of the range of π 0 in B(χ ⊥ ) is formed of multiples of the identity on χ ⊥ . fd the set of finite-dimensional irreducible unitary representations We denote by G of G with the convention that we identify two representations if they are unitarily equivalent. fd . Consider the representation ρ = π¯ ⊗ σ : G → Hπ ⊗2 Hσ defined Let π,σ ∈ G by ρ(t) = π(t) ⊗ σ (t) (t ∈ G). Using the identification Hπ ⊗ Hσ = S2 (Hπ ,Hσ ) we may view π(t) ⊗ σ (t) as an operator on S2 (Hπ ,Hσ ). More precisely, we have for any ξ ∈ S2 (Hπ ,Hσ ) ρ(t)ξ = σ (t)ξ π(t)∗ .

(A.38)

Let I denote the identity operator on Hπ . By (A.38), I is an invariant vector for π¯ ⊗ π . The following classical result of Schur is very well known. Lemma A.69 (Schur’s lemma) Let G be any group. fd , the representation [π¯ ⊗ π ] ⊥ has no nonzero invariant vector. (i) For any π ∈ G |I fd the representation π¯ ⊗ σ has no nonzero invariant (ii) For any pair π = σ ∈ G vector. Proof (i) Let ξ ∈ S2 (Hπ ,Hπ ) be an invariant vector. Then ξ = π(t)ξ π(t)∗ for any t ∈ G. Therefore, ξ commutes with π(G). By the preceding remarks, ξ ∈ CI . Therefore any invariant ξ ∈ I ⊥ must be = 0.

(ii) Assume on the contrary that π̄ ⊗ σ has an invariant vector ξ ≠ 0. Viewing ξ as an element of H̄_π ⊗ H_σ = S_2(H_π, H_σ), we would have σ(t)ξπ(t)* = ξ for any t in G (such a ξ is called an “intertwiner”). It follows that π(t)ξ*ξπ(t)* = ξ*ξ for any t in G, i.e. ξ*ξ commutes with π(G). By the irreducibility of π, ξ*ξ is a nonzero scalar multiple of I, so after normalizing ξ we may assume ξ*ξ = I. Arguing similarly with σ, we find that ξξ* is a scalar multiple of the identity of H_σ; since ξξ* is then a nonzero orthogonal projection (ξ being an isometry), we must have ξξ* = I, and hence ξ is unitary. But then σ(t)ξπ(t)* = ξ gives σ(t) = ξπ(t)ξ* for all t ∈ G, which would mean that σ is unitarily equivalent to π; this is excluded.
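To complement the proof, here is a short numerical illustration of Lemma A.69 for the finite group S_3; it is only a sketch and is not part of the text. Averaging ξ ↦ σ(t)ξπ(t)* over the group is the orthogonal projection onto the invariant vectors of π̄ ⊗ σ, so for π = σ = π^0 (the 2-dimensional representation of Lemma A.68) the average of any ξ lands in CI, while for the inequivalent pair (sign, π^0) it vanishes. The numpy code and all names are ad hoc choices.

# Numerical illustration of Lemma A.69 for G = S_3 (sketch only, not from the text).
# pi0 = 2-dimensional restriction of the permutation representation to chi-perp,
# sgn = 1-dimensional sign representation; they are irreducible and inequivalent.
import itertools
import numpy as np

N = 3
perms = list(itertools.permutations(range(N)))

def perm_matrix(sigma):
    P = np.zeros((N, N))
    for k in range(N):
        P[sigma[k], k] = 1.0
    return P

chi = np.ones((N, 1)) / np.sqrt(N)
Q, _ = np.linalg.qr(np.hstack([chi, np.eye(N)]))
Q = Q[:, 1:]                                            # orthonormal basis of chi-perp

std = [Q.T @ perm_matrix(s) @ Q for s in perms]         # pi0(sigma), real orthogonal 2x2
sgn = [np.linalg.det(perm_matrix(s)) for s in perms]    # sign character, +-1

rng = np.random.default_rng(0)

# (i) For pi = sigma = pi0, averaging pi0(t) xi pi0(t)* over G projects xi onto
#     the invariant vectors C.I, i.e. the result is (tr xi / 2) I for every xi.
xi = rng.standard_normal((2, 2))
avg = sum(A @ xi @ A.T for A in std) / len(perms)
print(np.allclose(avg, np.trace(xi) / 2 * np.eye(2)))   # True

# (ii) For the inequivalent pair (sgn, pi0), every averaged "intertwiner" vanishes:
#      (1/|G|) sum_t pi0(t) xi sgn(t) = 0 for every xi in S_2(C, C^2) = C^2.
xi = rng.standard_normal((2, 1))
avg = sum(e * (A @ xi) for e, A in zip(sgn, std)) / len(perms)
print(np.allclose(avg, 0))                              # True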


Index

A1 ∗ A2 , 55 A1 ⊗max A2 , 88 A1 ⊗min A2 , 88 B(H ), 9 BX , 9 C(n), 319 C ∗ -norm, 4, 87 C ∗ -tensor products, 87 C ∗ (G), 63 C ∗ E, 57 Cλ∗ (G), 71 C ∗ -norm, 453 C0 (n), 418 C1 ∗ C2 , 197 Cu (n), 351 E1N , 358 K(H ), 9 Mn (E), Mn×m (E), 10 OSn , 344 OSn (X), 346 SL2 (Z), 428 SLd (Zp ), 429 SLd (Z), 429 U (A), 9 UG , 63 (y), 121 u, ¨ 38, 161, 462 δ(y), 122 δcb , 344 n2 , np , 10 n∞ , 10 p (I ), 9 λ-CBAP, 200, 221 λG , 71 B, 1 C, 1 π1 · π2 , 89

πU , 453, 463 τN , 265 d, 344 dcb , 344 iA , 5 SN , 273 UN , 272, 276, 325 Bil(E × F ), 18 Bil(X × Y ), 438 amenable, 78 amenable trace, 249 antisymmetric Fock space, 287 approximate lifting, 196 approximately linear, 271 biduals of Banach spaces, 444 bin-norm, 163 binormal norm, 162 Boca’s theorem, 56, 282, 308, 309 Brown algebra, 132, 282, 290, 310 c.b., 10, 12 c.c., 10 c.p., 10 Calkin algebra, 215 CBAP, 200, 213, 220, 221 center, 257 Choi–Effros, 139, 220 circular, 237 complete contraction, 14 complete isometry, 14 complete isomorphism, 14 completely bounded, 12 completely contractive, 14 completely isometric, 14 completely positive, 4

completely positive definite, 75 complex conjugation, 44 complex interpolation method, 367 conditional expectation, 234 Connes embedding problem, 262 Connes’s question, 262 contraction, 10 contractive, 10 CPAP, 81, 91, 92, 140, 201, 217, 220, 221 Cuntz algebra, 92, 213, 348 direct sum, 9 dual operator space, 358 exact, 210 exact group, 435 exactness, 327 exactness of the max-tensor product, 141 extension, 327 extreme point, 110 factor, 257, 266 factorization property, 151, 259 Fell’s absorption principle, 71, 93, 94, 96, 143, 144, 153 Fock space, 235, 287 free circular system, 237 free group, 66–68, 72, 82, 84 free product of groups, 84 free products of C ∗ -algebras, 53 free semi-circular system, 236 free-Gaussian, 236 Fubini product, 155 full C ∗ -algebra, 63 generalized weak expectation, 188, 192, 385 GNS, 318, 451 GNS for normal forms, 459 Gromov, 273, 435 Haagerup, 4, 8, 76, 113, 114, 221, 325, 373, 388 Haagerup property, 222 hyperfinite, 164 hyperfinite II 1 -factor, 269 hyperlinear, 271 hypertrace, 256 identity map, 9 infinite multiplicity, 102 infinite tensor product, 270 Kesten, 82 Kesten’s criterion, 143, 144

Kirchberg’s conjecture, 280 Kirchberg’s criterion, 264 Kirchberg’s theorem, 1, 137, 182, 183, 195, 347, 350, 361 Krein–Milman, 110 Lieb, 375 lifting property, 194 LLP, 1, 185, 193 local lifting, 194 local reflexivity, 174, 176–179 local reflexivity principle, 446 locally c-liftable, 155 locally c-lifting, 155 locally reflexive, 191 Malcev, 275, 314 matrix model, 276 maximal C ∗ -norm, 88 Mazur’s Theorem, 79, 216, 222, 233, 359, 361, 444, 446, 454 metric surjection, 10 minimal C ∗ -norm, 88 multiplicative domain, 387 multiplicity, 102 multiplier, 73–76, 134 nor-norm, 163 normal, 30, 37 nuclear, 139, 190 nuclear C ∗ -algebra, 181 nuclear pair, 180 OLLP, 195 One-for-all, 281 Open problems, 434 operator space, 11 opposite C ∗ -algebra, 48 permutation, 273, 274, 276, 418, 419, 428 Perron–Frobenius theorem, 377 positive definite, 74 Powers–Størmer inequality, 250, 252 prime numbers, 428 property (τ ), 340 property C, 178 property C  , 178 property (T), 221, 311, 313, 332, 338, 429 Quantum coding sequences, 333 quasi-central approximate unit, 196 QWEP, 2, 185

random matrix, 276 reduced C ∗ -algebra, 71 residually finite dimensional, 314 residually finite group, 274, 275 RFD, 314 rotational invariance, 236 s.o.t., 453 Schur’s lemma, 429, 430 second quantization, 237 semicircular, 236 semidiscrete, 179 semidiscreteness, 164 separable, 208, 266 sofic, 273 strong operator topology, 453 strong* operator topology, 205, 461 subnuclearity, 213, 220 Takesaki’s duality theorem, 227 tracial, 228 Tsirelson’s problem, 300

u.c.p., 10 ultrapower, 270 ultraproduct, 238 uniform convexity, 110 unit ball, 9 unitary group, 9 universal C ∗ -algebra (of operator space), 57 universal representation (of C ∗ -algebra), 453, 463 universal representation (of group), 63 w.o.t., 453 weak expectation, 188, 192 weak operator topology, 453 weak* CBAP, 221 weak* CPAP, 164 weakly amenable, 221, 435 WEP, 1, 185, 188, 388, 395 WEP and locally reflexive, 191 WYDL, 375, 378