Preface......Page 6
Contents......Page 10
Part I Tools......Page 13
1 Introduction: The Statement of Loewner's Theorem......Page 15
2 Some Generalities......Page 22
3 The Herglotz Representation Theorems and the Easy Direction of Loewner's Theorem......Page 28
4 Monotonicity of the Square Root......Page 43
5 Loewner Matrices......Page 52
6 Heinävaara's Integral Formula and the Dobsch–Donoghue Theorem......Page 81
7 Mn+1 ≠ Mn......Page 92
8 Heinävaara's Second Proof of the Dobsch–Donoghue Theorem......Page 96
9 Convexity, I: The Theorem of Bendat–Kraus–Sherman–Uchiyama......Page 101
10 Convexity, II: Concavity and Monotonicity......Page 112
11 Convexity, III: Hansen–Jensen–Pedersen (HJP) Inequality......Page 122
12 Convexity, IV: Bhatia–Hiai–Sano (BHS) Theorem......Page 128
13 Convexity, V: Strongly Operator Convex Functions......Page 140
14 2 × 2 Matrices: The Donoghue and Hansen–Tomiyama Theorems......Page 150
15 Quadratic Interpolation: The Foiaş–Lions Theorem......Page 157
Part II Proofs of the Hard Direction......Page 164
16 Pick Interpolation, I: The Basics......Page 168
17 Pick Interpolation, II: Hilbert Space Proof......Page 177
18 Pick Interpolation, III: Continued Fraction Proof......Page 184
19 Pick Interpolation, IV: Commutant Lifting Proof......Page 190
20 A Proof of Loewner's Theorem as a Degenerate Limit of Pick's Theorem......Page 216
21 Rational Approximation and Orthogonal Polynomials......Page 222
22 Divided Differences and Polynomial Approximation......Page 235
23 Divided Differences and Multipoint Rational Interpolation......Page 246
24 Pick Interpolation, V: Rational Interpolation Proof......Page 261
25 Loewner's Theorem via Rational Interpolation: Loewner's Proof......Page 265
26 The Moment Problem and the Bendat–Sherman Proof......Page 271
27 Hilbert Space Methods and the Korányi Proof......Page 281
28 The Krein–Milman Theorem and Hansen's Variant of the Hansen–Pedersen Proof......Page 286
29 Positive Functions and Sparr's Proof......Page 297
30 Ameur's Proof Using Quadratic Interpolation......Page 309
31 One-Point Continued Fractions: The Wigner–von Neumann Proof......Page 317
32 Multipoint Continued Fractions: A New Proof......Page 329
33 Hardy Spaces and the Rosenblum–Rovnyak Proof......Page 334
34 Mellin Transforms: Boutet de Monvel's Proof......Page 340
35 Loewner's Theorem for General Open Sets......Page 353
Part III Applications and Extensions......Page 364
36 Operator Means, I: Basics and Examples......Page 365
37 Operator Means, II: Kubo–Ando Theorem......Page 371
38 Lieb Concavity and Lieb–Ruskai Strong Subadditivity Theorems, I: Basics......Page 377
39 Lieb Concavity and Lieb–Ruskai Strong Subadditivity Theorems, II: Effros' Proof......Page 384
40 Lieb Concavity and Lieb–Ruskai Strong Subadditivity Theorems, III: Ando's Proof......Page 386
41 Lieb Concavity and Lieb–Ruskai Strong Subadditivity Theorems, IV: Aujla–Hansen–Uhlmann Proof......Page 388
42 Unitarily Invariant Norms and Rearrangement......Page 390
43 Unitarily Invariant Norm Inequalities......Page 399
Fonctions croissantes hilbertiennes......Page 405
Interpolation Functions......Page 409
B Pictures......Page 412
C Symbol List......Page 415
Bibliography......Page 420
Name Index......Page 434
Subject Index......Page 439


Grundlehren der mathematischen Wissenschaften  354 A Series of Comprehensive Studies in Mathematics

Barry Simon

Loewner’s Theorem on Monotone Matrix Functions

Grundlehren der mathematischen Wissenschaften A Series of Comprehensive Studies in Mathematics Volume 354

Editors-in-Chief Alain Chenciner, IMCCE - Observatoire de Paris, Paris, France John Coates, Emmanuel College, Cambridge, UK S.R.S. Varadhan, Courant Institute of Mathematical Sciences, New York, NY, USA Series Editors Pierre de la Harpe, Université de Genève, Genève, Switzerland Nigel J. Hitchin, University of Oxford, Oxford, UK Antti Kupiainen, University of Helsinki, Helsinki, Finland Gilles Lebeau, Université de Nice Sophia-Antipolis, Nice, France Fang-Hua Lin, New York University, New York, NY, USA Shigefumi Mori, Kyoto University, Kyoto, Japan Bao Chau Ngô, University of Chicago, Chicago, IL, USA Denis Serre, UMPA, École Normale Supérieure de Lyon, Lyon, France Neil J. A. Sloane, OEIS Foundation, Highland Park, NJ, USA Anatoly Vershik, Russian Academy of Sciences, St. Petersburg, Russia Michel Waldschmidt, Université Pierre et Marie Curie Paris, Paris, France

Grundlehren der mathematischen Wissenschaften (subtitled Comprehensive Studies in Mathematics), Springer’s first series in higher mathematics, was founded by Richard Courant in 1920. It was conceived as a series of modern textbooks. A number of significant changes appear after World War II. Outwardly, the change was in language: whereas most of the first 100 volumes were published in German, the following volumes are almost all in English. A more important change concerns the contents of the books. The original objective of the Grundlehren had been to lead readers to the principal results and to recent research questions in a single relatively elementary and accessible book. Good examples are van der Waerden’s 2-volume Introduction to Algebra or the two famous volumes of Courant and Hilbert on Methods of Mathematical Physics. Today, it is seldom possible to start at the basics and, in one volume or even two, reach the frontiers of current research. Thus many later volumes are both more specialized and more advanced. Nevertheless, most books in the series are meant to be textbooks of a kind, with occasional reference works or pure research monographs. Each book should lead up to current research, without over-emphasizing the author’s own interests. There should be proofs of the major statements enunciated, however, the presentation should remain expository. Examples of books that fit this description are Maclane’s Homology, Siegel & Moser on Celestial Mechanics, Gilbarg & Trudinger on Elliptic PDE of Second Order, Dafermos’s Hyperbolic Conservation Laws in Continuum Physics . . . Longevity is an important criterion: a GL volume should continue to have an impact over many years. Topics should be of current mathematical relevance, and not too narrow. The tastes of the editors play a pivotal role in the selection of topics. Authors are encouraged to follow their individual style, but keep the interests of the reader in mind when presenting their subject. 
The inclusion of exercises and historical background is encouraged. The GL series does not strive for systematic coverage of all of mathematics. There are both overlaps between books and gaps. However, a systematic effort is made to cover important areas of current interest in a GL volume when they become ripe for GL-type treatment. As far as the development of mathematics permits, the direction of GL remains true to the original spirit of Courant. Many of the oldest volumes are popular to this day and some have not been superseded. One should perhaps never advertise a contemporary book as a classic but many recent volumes and many forthcoming volumes will surely earn this attribute through their use by generations of mathematicians.

Barry Simon

Loewner’s Theorem on Monotone Matrix Functions


Barry Simon Division of Physics, Math, and Astronomy Caltech, Pasadena, CA, USA

ISSN 0072-7830 ISSN 2196-9701 (electronic) Grundlehren der mathematischen Wissenschaften ISBN 978-3-030-22421-9 ISBN 978-3-030-22422-6 (eBook) https://doi.org/10.1007/978-3-030-22422-6 Mathematics Subject Classification: 26A48, 26A51, 47A56, 47A63 © Springer Nature Switzerland AG 2019 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This book is a love poem to Loewner's theorem. There are other mathematical love poems, although not many. One sign, not always present and not foolproof, is that, like this book, the author has included pictures of some of the main figures in the development of the subject under discussion. The telltale sign is that the reader's initial reaction is "how can there be a whole book on that subject" (although not all narrow books are love poems). Loewner's theorem concerns the theory of monotone matrix functions, i.e., functions, f, so that A ≤ B ⇒ f(A) ≤ f(B) for pairs of self-adjoint matrices. That this is a subtle notion is seen by the fact (see Corollary 14.3) that if f : R → R is monotone on all pairs of 2 × 2 matrices, then f is affine! So Loewner realized one needed to fix a proper interval (a, b) ⊂ R, demand that f : (a, b) → R, and only demand the monotonicity result for pairs A and B all of whose eigenvalues are in (a, b). In 1934, Charles Loewner proved the remarkable result that f on (a, b) is matrix monotone on all such n × n pairs (for all n) if and only if f is real analytic on (a, b) and has an analytic continuation to the upper half plane with a positive imaginary part there. That functions with this property are matrix monotone follows in a few lines from the Herglotz representation theorem, so I call that half the easy half. The other direction is the hard half. Many applications of Loewner's notion of matrix monotone functions involve explicit examples and so only the easy half. Matrix monotonicity is an algebraic statement, but Loewner's theorem says it is equivalent to an analytic fact. One fascination of the subject is the mix of the algebraic and the analytic in its study. Interestingly enough, this is not the first love poem to Loewner's theorem. Forty-five years ago, in 1974, Springer published W. F. Donoghue's Monotone Matrix Functions and Analytic Continuation in the same series where I am publishing this book.
Donoghue explained that, at the time of writing, there were three existing proofs of (the hard half of) Loewner's theorem, all very different: Loewner's original proof, the proof of Bendat–Sherman, and the proof of Korányi. In fact, as we'll discuss several times in this book, there was a fourth proof which workers in the field didn't seem to realize was there: it was due to Wigner–von Neumann and appeared in the Annals of Mathematics! The main goal of Donoghue's book was to expose those three proofs with their underlying mathematical background. He also had several new results, including converses of two theorems of Loewner's student Otto Dobsch. Donoghue also discussed applications to the theory of schlicht and Herglotz functions.

Similarly, the main goal of the current book is to expose many of the proofs of the hard half that now exist. Indeed, we will give 11 proofs in all. As I'll explain later, there is a 12th proof which I don't expose. Of course, it is not always clear when two proofs are really different and when one proof should be regarded as a variant of another. I am prepared to defend the notion that these proofs are different (although there are relations among them) and that a proof like Hansen's in  should be viewed as a variant (albeit an interesting variant) of Hansen's earlier proof with Pedersen. But I'd agree some might differ. In any event, it is striking that there are no really short proofs and that the underlying structure of the proofs is so different. I like to joke that it is almost as if every important result in analysis is the basis of some proof of Loewner's theorem. To mention a few of the underlying machines that lead to proofs: the moment problem, Pick's theorem, commutant lifting theorems, the Krein–Milman theorem, and Mellin transforms.

Shortly after Loewner's 1934 paper, his student Fritz Kraus defined and began the study of matrix convex functions. Donoghue includes Kraus' 1936 paper in his bibliography but never refers to it in the text and doesn't discuss the subject of matrix convex or concave functions. In the years since his book, it has become clear that the two subjects are intertwined in deep ways. So my book also has a lot of material on matrix convex functions. We let Mn(a, b) be the functions monotone on n × n matrices. We'll see that Mn+1 is a strictly proper subset of Mn.
Loewner's theorem gives an effective description of M∞ ≡ ∩Mn, and we'll see in Chapter 14 that there is an effective description of M2, but there is no especially direct description of Mn. One other subject we explore involves properties of Mn and its convex analog for fixed finite n.

I should say something about the history of this book. I first learned of Loewner's theorem in graduate school 50 years ago, probably from Ed Nelson, one of my mentors. I learned the proof of the easy half, and a variant of it appears in Reed–Simon. The easy half seemed more useful since it told one a certain function was matrix monotone and that was what could be applied. About 2000, I became curious how one proved the hard half and looked at the expositions in Donoghue's and Bhatia's books. I was intrigued that both quoted a paper of Wigner–von Neumann claiming it had applications to quantum physics. I looked up the paper and was surprised to discover that there was no application to quantum physics, but there was a complete and different proof of Loewner's theorem, which Loewner, Donoghue, and Bhatia didn't seem to realize was there! For several years, I gave a mathematics colloquium called "The Lost Proof of Loewner's Theorem." I found my new proof that appears in Chapter 20, which is, in many ways, the most direct proof. After one of the colloquia, around 2005, I learned of Boutet de Monvel's unpublished proof and started on this book. I finished about half and then put it aside in 2007 to focus on my five-volume Comprehensive Course [325–329]. After completing that, I returned to this book and completed it. The early part of the book was typed in TeX by my wonderful secretary, Cherie Galvez, whom we lost to cancer before I returned to the project. I often miss her.

There is some literature on multivariable extensions of Loewner's theorem. The earliest such papers are a series (Korányi , Singh–Vasudeva , and Hansen ) that considers operators on distinct spaces, Aj on Hj, and defines f(A1, . . . , An) on the tensor product H1 ⊗ · · · ⊗ Hn. Recently, there have been several papers (Agler et al. , Najafi , Pascoe and Tully-Doyle , and Pálfia ) that involve functions of several variables, even non-commuting variables, on a single Hilbert space. These papers are complicated, and the subject is in flux, so it seemed wisest not to discuss the subject in detail here. However, we should mention that Pálfia  says that his multivariable proof specialized to one variable provides a new proof of the classical result, the 12th proof referred to above. In correspondence, he told me that the restriction to one variable doesn't especially simplify his proof (his preprint is 40 pages). He didn't believe it could be shortened to fewer than about 25 pages, and that doesn't count any mathematical background on the tools he uses, so I decided not to try to include it.

The 11 proofs appear in Part II of this book, which also has background on tools like Pick's theorem and rational approximation. There is also a discussion of the analog of Loewner's theorem when (a, b) is replaced by a more general open set. Part I sets the background for the theorem and also for the theory of matrix convex functions. Part III discusses some applications, many due to Ando. Each part begins with a brief introduction that summarizes the content and context of that part.
Karel Löwner was born near Prague in 1893 and proved his great theorem while a professor at the Charles University in Prague. In 1938, he fled from there to the United States (he was arrested when the Nazis entered Prague, probably because of his left-wing political activities rather than the fact that he was Jewish), where he made a decision to change his name to the less German Charles Loewner. I have decided to respect his choice by calling the result Loewner's theorem. The reader should be warned that much of the literature refers to Löwner's theorem. Of course, we use the original spelling in the bibliography when that is what appeared in the author's name or in the published title of an article or book.

We warn the reader that for our complex inner product, we use the universal convention in the physics community, but the minority one among mathematicians: namely, our inner products are linear in the second vector and antilinear in the first. Other symbols are listed in Appendix C just before the bibliography. We also warn the reader that when speaking of matrices, M, we will call them "positive" if and only if ⟨ϕ, Mϕ⟩ ≥ 0 for all ϕ and use "strictly positive" when that inner product is strictly positive for all non-zero ϕ.

Two individuals should be especially thanked for their aid in improving this book. Mitsuru Uchiyama sent me extensive corrections to an early draft of this book. Detailed corrections of the kind he sent are like gold to an author. Otte Heinävaara was kind enough to share his many insights into the subject of this book. It is a pleasure to also thank Anne Boutet de Monvel, Larry Brown, Fritz Gesztesy, Frank Hansen, Ira Herbst, Bill Helton, Elliott Lieb, John McCarthy, James Pascoe, James Rovnyak, and Beth Ruskai for their useful discussions and/or correspondence in connection with this book. And, as always, thanks to my wife, Martha, for her love and support.

Los Angeles, CA, USA
March 2019

Barry Simon

Contents

Part I Tools
1 Introduction: The Statement of Loewner's Theorem ...... 3
2 Some Generalities ...... 11
3 The Herglotz Representation Theorems and the Easy Direction of Loewner's Theorem ...... 17
4 Monotonicity of the Square Root ...... 33
5 Loewner Matrices ...... 43
6 Heinävaara's Integral Formula and the Dobsch–Donoghue Theorem ...... 73
7 Mn+1 ≠ Mn ...... 85
8 Heinävaara's Second Proof of the Dobsch–Donoghue Theorem ...... 89
9 Convexity, I: The Theorem of Bendat–Kraus–Sherman–Uchiyama ...... 95
10 Convexity, II: Concavity and Monotonicity ...... 107
11 Convexity, III: Hansen–Jensen–Pedersen (HJP) Inequality ...... 117
12 Convexity, IV: Bhatia–Hiai–Sano (BHS) Theorem ...... 123
13 Convexity, V: Strongly Operator Convex Functions ...... 135
14 2 × 2 Matrices: The Donoghue and Hansen–Tomiyama Theorems ...... 145
15 Quadratic Interpolation: The Foiaş–Lions Theorem ...... 153

Part II Proofs of the Hard Direction
16 Pick Interpolation, I: The Basics ...... 165
17 Pick Interpolation, II: Hilbert Space Proof ...... 175
18 Pick Interpolation, III: Continued Fraction Proof ...... 183
19 Pick Interpolation, IV: Commutant Lifting Proof ...... 189
20 A Proof of Loewner's Theorem as a Degenerate Limit of Pick's Theorem ...... 215
21 Rational Approximation and Orthogonal Polynomials ...... 221
22 Divided Differences and Polynomial Approximation ...... 235
23 Divided Differences and Multipoint Rational Interpolation ...... 247
24 Pick Interpolation, V: Rational Interpolation Proof ...... 263
25 Loewner's Theorem via Rational Interpolation: Loewner's Proof ...... 267
26 The Moment Problem and the Bendat–Sherman Proof ...... 273
27 Hilbert Space Methods and the Korányi Proof ...... 283
28 The Krein–Milman Theorem and Hansen's Variant of the Hansen–Pedersen Proof ...... 289
29 Positive Functions and Sparr's Proof ...... 301
30 Ameur's Proof Using Quadratic Interpolation ...... 313
31 One-Point Continued Fractions: The Wigner–von Neumann Proof ...... 321
32 Multipoint Continued Fractions: A New Proof ...... 333
33 Hardy Spaces and the Rosenblum–Rovnyak Proof ...... 339
34 Mellin Transforms: Boutet de Monvel's Proof ...... 345
35 Loewner's Theorem for General Open Sets ...... 359

Part III Applications and Extensions
36 Operator Means, I: Basics and Examples ...... 373
37 Operator Means, II: Kubo–Ando Theorem ...... 379
38 Lieb Concavity and Lieb–Ruskai Strong Subadditivity Theorems, I: Basics ...... 385
39 Lieb Concavity and Lieb–Ruskai Strong Subadditivity Theorems, II: Effros' Proof ...... 393
40 Lieb Concavity and Lieb–Ruskai Strong Subadditivity Theorems, III: Ando's Proof ...... 395
41 Lieb Concavity and Lieb–Ruskai Strong Subadditivity Theorems, IV: Aujla–Hansen–Uhlmann Proof ...... 397
42 Unitarily Invariant Norms and Rearrangement ...... 399
43 Unitarily Invariant Norm Inequalities ...... 409

A Boutet de Monvel's Note ...... 415
B Pictures ...... 423
C Symbol List ...... 427
Bibliography ...... 433
Name Index ...... 447
Subject Index ...... 453

Part I

Tools

A real-valued continuous function, f, on (a, b) is said to be monotone on n × n matrices if and only if

A ≤ B ⇒ f(A) ≤ f(B)    (p1.1)

for all n × n Hermitian matrices, A and B, with eigenvalues in (a, b). The set of all such functions will be denoted Mn(a, b) and we define

M∞(a, b) = ∩_{n=1}^∞ Mn(a, b)    (p1.2)

These definitions require one to know what the order is on matrices and what f(A) means—we provide precise definitions at the start of Chapter 1. Loewner's theorem says that a real-valued continuous function, f, on (a, b) lies in M∞(a, b) if and only if f is real analytic on (a, b) and has an analytic continuation to the upper half plane, C+, with Im(z) > 0 ⇒ Im f(z) > 0. Such analytic functions have an integral representation which we present in Chapter 3, and that representation easily implies that such functions lie in M∞(a, b), a result we call the easy half of Loewner's theorem. In particular, since f(x) = √x defined on (0, ∞) has such an analytic continuation, it is matrix monotone. Despite this immediate consequence of the easy half, various authors have found direct proofs of the monotonicity of √x which we present in Chapter 4. Proofs of the hard half are the subject of Part II. In Chapter 2, we discuss two separate issues. First we show that one can easily go from Loewner's theorem for one proper (a, b) to any other. All finite intervals are related to each other by affine maps and all semi-infinite intervals by translation or f(x) → −f(−x). Finally x → −x⁻¹ interchanges (0, ∞) and (−1, 0), and one can prove directly that f(x) = −x⁻¹ is a matrix monotone map. All proofs of the hard part use one of (0, 1), (−1, 1), or (0, ∞) for (a, b). The other issue in Chapter 2 concerns operators on an infinite dimensional Hilbert space. It is a simple fact that matrix monotonicity for finite matrices of arbitrary dimension allows one to prove monotonicity for self-adjoint operators
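The monotonicity claims above lend themselves to a quick numerical spot check. The sketch below is not from the text; it is a minimal NumPy illustration (the helper names `is_psd` and `f_of_A` are ad hoc) that √x and x → −x⁻¹ behave matrix monotonically on random positive pairs A ≤ B:

```python
import numpy as np

def is_psd(M, tol=1e-8):
    """Check positive semi-definiteness of a Hermitian matrix via its eigenvalues."""
    return np.all(np.linalg.eigvalsh(M) >= -tol)

def f_of_A(f, A):
    """Apply a scalar function to a Hermitian matrix through its spectral decomposition."""
    w, U = np.linalg.eigh(A)
    return U @ np.diag(f(w)) @ U.conj().T

rng = np.random.default_rng(0)
for _ in range(50):
    X = rng.standard_normal((3, 3))
    A = X @ X.T + 0.1 * np.eye(3)      # A strictly positive
    Y = rng.standard_normal((3, 3))
    B = A + Y @ Y.T                    # B - A >= 0, i.e., A <= B
    # sqrt is matrix monotone: sqrt(B) - sqrt(A) >= 0
    assert is_psd(f_of_A(np.sqrt, B) - f_of_A(np.sqrt, A))
    # x -> -1/x is matrix monotone on (0, inf): A^{-1} - B^{-1} >= 0
    assert is_psd(np.linalg.inv(A) - np.linalg.inv(B))
print("sqrt and -1/x passed matrix monotonicity spot checks")
```

Of course, a finite random search is no proof; the point is only to make the statements concrete.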


on a Hilbert space and vice versa. While we mainly discuss finite matrix versions here, it will be occasionally useful to deal with infinite dimensional versions instead. This will sometimes (and, in particular, in Chapter 2) require us to use results for operators on Hilbert space. We will give textbook references for these results, most often Simon . Most proofs of the hard direction use a preliminary result of Loewner that completely describes Mn (a, b) in terms of certain n × n matrices built out of f and n points a < x1 < . . . < xn < b. This equivalence is the subject of Chapter 5 where two proofs are given of the equivalence: one using a formula of Daleckiˇi and Krein and one using rank one perturbation theory. In Chapter 14, this equivalence is used to give an explicit description of M2 (a, b) and then use that to show that M2 (−∞, ∞) is precisely the affine functions (with non-negative linear coefficient). A theorem of Dobsch–Donoghue gives a local criteria for a function to lie in M. Chapters 6 and 8 provide two recent elegant proofs of this result by the young Finnish mathematician, Otte Heinävaara. Given the local theorem, it is easy to construct functions in Mn (a, b) that are not in Mn+1 (a, b) and we do this in Chapter 7. For scalar functions on R, there is a simple connection between convex and monotone functions. g is convex on (a, b)  0 if and only if there is a monotone functions, f , on (a, b) with ˆ g(x) = g(0) +

x

f (y)dy

(p1.3)

0

equivalently, the distributional derivative of g is a monotone function. There are similar relations between matrix convex and matrix monotone functions which we discuss in Chapter 9. Chapter 11 discusses analogs of Jensen's inequality for matrices, which is much richer because Jensen's convex combinations θx + (1 − θ)y can be replaced by a∗xa + b∗yb where a and b are matrices with a∗a + b∗b = 1. Chapter 10 deals with a striking new connection: a scalar positive function on (0, ∞) which is concave is monotone, as is easy to see, but, of course, the converse is false. However, for functions on matrices, a positive function on (0, ∞) is matrix concave for all n if and only if it is matrix monotone for all n! Chapter 12 provides a criterion for matrix convexity in terms of Loewner matrices. Chapter 13 discusses a notion stronger than matrix convexity and includes the result that if f ∈ M∞, so is its second finite difference [x, x1, x2; f]. The equivalence of matrix monotonicity on (0, ∞) and matrix concavity is one of the three remarkable equivalences of matrix monotonicity with some other property, so that Loewner's theorem allows the complete description of some other object. Another such case involves the notion of quadratic interpolation, the theory of Marcinkiewicz real interpolation for inner product norms. The classification of such norms is given by a theorem of Foiaş–Lions which can be related to matrix monotone functions, and we discuss this in Chapter 15. The third equivalence concerns the theory of operator means, where the equivalence is called the Kubo–Ando theorem. We postpone the discussion of this subject to Chapters 36–37.
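The n × n matrices built out of f and n points mentioned above are the Loewner matrices of Chapters 5 and 12: off-diagonal entries are divided differences (f(xi) − f(xj))/(xi − xj), diagonal entries f′(xi), and in the standard form of Loewner's criterion these matrices are positive semi-definite for f ∈ Mn(a, b). A minimal sketch (not from the text; the function names are ad hoc) that contrasts the matrix monotone √x with the merely scalar-monotone x³:

```python
import numpy as np

def loewner_matrix(f, df, xs):
    """Loewner matrix: (f(xi)-f(xj))/(xi-xj) off the diagonal, f'(xi) on it."""
    xs = np.asarray(xs, dtype=float)
    n = len(xs)
    L = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            L[i, j] = df(xs[i]) if i == j else (f(xs[i]) - f(xs[j])) / (xs[i] - xs[j])
    return L

xs = [0.5, 1.0, 2.0, 3.0]

# sqrt is matrix monotone on (0, inf): its Loewner matrices are positive semi-definite.
L_sqrt = loewner_matrix(np.sqrt, lambda x: 0.5 / np.sqrt(x), xs)
print(np.linalg.eigvalsh(L_sqrt))        # all nonnegative

# x^3 is not matrix monotone: this Loewner matrix has a negative eigenvalue.
L_cube = loewner_matrix(lambda x: x**3, lambda x: 3 * x**2, xs)
print(np.linalg.eigvalsh(L_cube).min())  # negative
```

For √x the Loewner matrix is the Cauchy-type matrix [1/(√xi + √xj)], which is positive definite for distinct positive points.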

Chapter 1

Introduction: The Statement of Loewner’s Theorem

Most of this monograph will consider finite matrices, but since we will be interested in positivity defined in terms of a (Euclidean) inner product, we make definitions in a general context. Thus H will be a complex separable Hilbert space (perhaps finite dimensional) with inner product ⟨·, ·⟩, which is linear in the second factor and antilinear in the first. Many mathematicians use the opposite convention on which factor is linear, but I use the almost universal convention among physicists. For more on the basics of inner products, see [325, Section 3.1].

Definition A bounded operator on H is called positive, written A ≥ 0, if

⟨ϕ, Aϕ⟩ ≥ 0    (1.1)

for all ϕ ∈ H. We will use strictly positive if (1.1) has > 0 for all ϕ ≠ 0. Some call this positive definite and (1.1) positive semi-definite. We will drop "definite." Since the space, H, is complex,

⟨ϕ, Aψ⟩ = (1/4) ∑_{σ=±1,±i} σ̄ ⟨ϕ + σψ, A(ϕ + σψ)⟩

so any A obeying (1.1) obeys

⟨ϕ, Aψ⟩ = \overline{⟨ψ, Aϕ⟩}    (1.2)

that is, A is self-adjoint. We write

A ≤ B    (1.3)


as shorthand for B − A ≥ 0.

Loewner's theorem involves the following simple question: For which real-valued functions f of a real variable is it true that

A ≤ B ⇒ f(A) ≤ f(B)    (1.4)

Of course, to make sense out of (1.4), one needs to define f(A) given a function, f, of a real variable. It is natural to restrict to self-adjoint A and B, so we need to define f(A) for A self-adjoint. There are two equivalent ways to make this definition for finite matrices:

(i) For a polynomial P(x) = ∑_{j=0}^n αj x^j, define P(A) = ∑_{j=0}^n αj A^j. If P and Q are two polynomials, it is easy to see that P(A) = Q(A) if and only if P(λj) = Q(λj) for each eigenvalue, λj, of A. Thus, given f, we find any polynomial, P, so that f(λj) = P(λj) and let f(A) ≡ P(A). This is equivalent to

f(A) = ∑_{j=1}^ℓ f(λj) ∏_{k≠j} (A − λk)(λj − λk)⁻¹    (1.5)

if λ1, . . . , λℓ are the distinct eigenvalues of A.

(ii) If λ1 ≤ λ2 ≤ · · · ≤ λn are the eigenvalues of A counting multiplicity (subtle shift from the meaning in (i)), there is a U obeying

A = U diag(λ1, . . . , λn) U⁻¹    (1.6)

One defines

f(A) = U diag(f(λ1), . . . , f(λn)) U⁻¹    (1.7)

U is not unique if some eigenvalue is degenerate, but (1.7) is independent of U (for example, by showing all choices lead to (1.5)).

For A, a bounded self-adjoint operator on an infinite dimensional space, f(A) can again be defined analogous to either of these methods: polynomial approximation corresponding to (i) or the spectral theorem analogous to (ii). Indeed, one way of proving the spectral theorem is to first control polynomial approximation and use that to define spectral measures. In any event, since the infinite dimensional case is peripheral to our concerns, we leave the construction of f(A) in that case to other references, for example, Reed–Simon [279, Chapter VII] or Simon [329, Chapter 5].

1 Introduction: The Statement of Loewner’s Theorem

5

Now that we have settled what f (A) means, we can return to (1.4) and note that if it is intended in too strong a sense, then very few f ’s work. In Chapter 14 and also in Chapter 28, we will prove Theorem 1.1 (= Corollary 14.3 = Corollary 10.7) Let f : R → R be such that for any pair of 2 × 2 self-adjoint matrices, (1.4) holds. Then for some b ∈ R and a ≥ 0, f (x) = ax + b

(1.8)

This result says that if one looks at (1.4) for too general a family of A, B, the set of f ’s is not very rich. Loewner had the good idea to look at f ’s defined on an interval (a, b) ⊂ R. Thus Definition A function f : (a, b) → R is said to lie on Mn (a, b) if and only if (1.4) holds for any pair of self-adjoint n × n matrices, both of whose eigenvalues lie in (a, b). We will sometimes use the phrase “monotone matrix function.” If A, B are (n − 1) × (n − 1) matrices with eigenvalues in (a, b), and if   A 0

A= 0 a+b 2

  B 0

B= 0 a+b 2

≤ B

and f (A) ≤ f (B) ⇔ f (A)

≤ f (B).

This implies that then A ≤ B ⇔ A Mn−1 (a, b) ⊃ Mn (a, b)

(1.9)

It is natural to define M∞ (a, b) =



Mn (a, b)

(1.10)

n

Thus, f ∈ M∞ (a, b) if and only if (1.4) holds for any pair of matrices with eigenvalues in (a, b). In Chapter 2, we will prove that if f ∈ M∞ (a, b), then (1.4) even holds for pairs of self-adjoint operators on an infinite dimensional Hilbert space whose spectrum lies in (a, b). Example 1.2 Consider the function f (x) = x 2 on [0, ∞). Let P and Q be orthogonal projections. Of course, P + Q ≥ P . One can ask when (P + Q)2 ≥ P 2 , that is, when C ≡ Q2 + QP + P Q ≥ 0

(1.11)

Let ψ, ϕ obey Qψ = 0, Qϕ = ϕ. Let ψε = ψ + εϕ, so ψε=0 , Cψε=0 = 0 and

ψε , Cψε = 2 Re(ε ψ, P Qϕ ) + O(ε2 )

(1.12)

6

1 Introduction: The Statement of Loewner’s Theorem

Since we can pick ε so ε ψ, P Qϕ = −ε| ψ, P Qϕ |, (1.11) implies ψ, P Qϕ = 0, that is, (1 − Q)P Q = 0 Thus, P Q = QP Q, so taking adjoints, QP = P Q. Thus (P + Q)2 ≥ P 2 if and only if [P , Q] ≡ P Q − QP = 0 (the “if” is easy). Thus f (x) = x 2 is not matrix monotone on [0, ∞] even for 2 × 2 matrices. We will see later (Theorem 4.1) that f (x) = x α is in M∞ (0, ∞) if and only if 0 ≤ α ≤ 1. We will see below (Proposition 2.2) that A ≤ B ⇔ −A−1 ≤ −B −1 , so that this and Theorem 4.1 imply that f (x) = −x α is in M∞ if and only if −1 ≤ α ≤ 0. This example shows that it is false that A ≤ B and C ≤ D imply that 12 (AC + CA) ≤ 12 (BD + DB) even if A commutes with C and B commutes with D.   Loewner’s theorem exactly describes M∞ (a, b). For simplicity of exposition, we will state it initially for (a, b) = (−1, 1): Theorem 1.3 Let f be a function from (−1, 1) to R. The following are equivalent: (a) f ∈ M∞ (−1, 1) (b) For some positive measure dν on [−1, 1], ˆ f (x) = f (0) +

1

−1

x dν(λ) 1 + λx

(1.13)

(c) f is the restriction to (−1, 1) of a function, f , analytic in (C\R) ∪ (−1, 1) that obeys Im z > 0 ⇒ Im f (z) > 0

(1.14)

The big surprise, of course, is that f ∈ M∞ implies analyticity. In some proofs, this just falls out while in others, its cause is clearer. This book is partially devoted to discussing several proofs of this theorem. This theorem has an “easy half” and a “hard half.” We note first that (b) ⇒ (c) is obvious since (1.13) can be used for z ∈ (C\R) ∪ (−1, 1) and  Im

z 1 + λz



 = Im

z(1 + λ¯z) |1 + λz|2

 =

Im z |1 + λz|2

(1.15)

showing (1.14). That (c) ⇒ (b) is not due to Loewner; it is a variant of the Herglotz representation theorem. Note that (b) ⇒ (a) is equivalent to −1 0, but begin with functions f on D = {z ∈ C | |z| < 1} with Re f > 0 and use conformal mapping. The basic input is a complex Poisson representation: Theorem 3.1 (Complex Poisson Representation) Let f be analytic in a neighborhood of D. Then ˆ f (z) = i Im f (0) +

K(eiθ , z) Re f (eiθ )

dθ 2π

(3.1)

where K(w, z) =

w+z w−z

(3.2)

Remarks 1. Of course, one is interested in (3.1) in cases where f is only analytic in D and f (eiθ ) is some kind of boundary value. Indeed, we will prove such a result below in case Re f > 0. The key, though, is to prove this result in the regular case first. 2. We will give four proofs that illustrate these different aspects of the kernel K and of analytic functions. This is overkill—even in a pedagogic monograph—but I am fond of this result. First Proof Write ∞

K(eiθ , z) =

 1 + ze−iθ =1+2 zn e−inθ −iθ 1 − ze

(3.3)

n=1

© Springer Nature Switzerland AG 2019 B. Simon, Loewner’s Theorem on Monotone Matrix Functions, Grundlehren der mathematischen Wissenschaften 354, https://doi.org/10.1007/978-3-030-22422-6_3

17

18

3 The Herglotz Representation Theorems and the Easy Direction of Loewner’s. . .

with the sum converging uniformly in θ for each fixed z ∈ D. Write f (z) =

∞ 

an zn

(3.4)

n=0

converging uniformly on D since f is analytic in a neighborhood of D. Thus Re f (eiθ ) = Re(a0 ) +

∞ 

1 2

(an einθ + a¯ n e−inθ )

(3.5)

n=1

converging uniformly in θ . Since ˆ

e−ikθ einθ

dθ = δkn 2π

(3.6)

using the uniform convergence of (3.3) and (3.5) to interchange the sums and integrals, one finds ˆ

 dθ = Re(a0 ) + K(e , z) Re f (e ) 2( 12 an )zn 2π iθ

n=1

= f (z) − i Im f (0)   Second Proof Let ˆ g(z) =

K(eiθ , z) Re f (eiθ )

dθ 2π

(3.7)

g is clearly analytic in D and we have ˆ Re g(re ) = iϕ

Pr (θ, ϕ) Re f (eiθ )

dθ 2π

(3.8)

where Pr Pr (θ, ϕ) =

1 + r2

1 − r2 − 2r cos(θ − ϕ)

(3.9)

Since h(reiϕ ) ≡ Pr (θ, ϕ) = Re K(eiθ , reiϕ ) is harmonic in reiϕ , we have ˆ dϕ = h(0) = 1 (3.10) Pr (θ, ϕ) 2π

3

The Herglotz Representation Theorems and the Easy Direction of Loewner’s. . .

19

Since Pr (θ, ϕ) is symmetric under interchange of θ and ϕ, the same is true of the θ integral. For each ε > 0, limr↑1 sup|θ−ϕ|>ε |Pr (θ, ϕ)| = 0, and we see Pr is an approximate delta function. Thus, since Re f (reiθ ) is smooth, Re g(eiθ ) has a continuous extension to D and Re g(eiθ ) = Re f (eiθ )

(3.11)

Let w(z) = Re f (z) − Re g(z) for z ∈ D and define w for z ∈ C\D by w(1/z) = −w(z). Then w is harmonic on C and bounded, so constant. Since w(eiθ ) = 0, w ≡ 0. Thus, on D, f (z) − g(z) is an analytic function with vanishing real part, so by the Cauchy–Riemann equations, f (z) − g(z) is an imaginary constant which is i Im f (0), since Im g(z) = 0. This proves (3.1).   Third Proof The Cauchy integral formula says f (z) =

1 2π i

ﬃ |w|=1

f (w) dw w−z

(3.12)

Using w = eiθ , we see dw = ieiθ dθ , so (3.12) says ˆ

eiθ dθ f (eiθ ) eiθ − z 2π  ˆ  iθ 1 e +z dθ = + 1 f (eiθ ) 2 eiθ − z 2π ˆ 1 1 dθ = + f (0) K(eiθ , z)f (eiθ ) 2 2π 2

f (z) =

(3.13) (3.14) (3.15)

For now, we note that the step from the RHS of (3.12) to (3.15) only uses f (eiθ ) harmonic. If w = eiθ , then dw = −ie−iθ dθ = −w −2 dw. Thus ﬃ ﬃ f (w) f (w) 1 1 dw (3.16) dw = − 2π i |w|=1 w¯ − z¯ 2π i |w|=1 w(1 − z¯ w) = f (0)

(3.17)

where (3.16) uses ww ¯ = 1 and (3.17) uses that since |¯z| < 1, f (w)/w(1 − z¯ w) has a pole only at w = 0. Taking complex conjugates, ﬃ f (w) 1 dw f (0) = 2π i |w|=1 w − z ˆ 1 1 dθ + f (0) = K(eiθ , z) f (eiθ ) 2 2π 2

(3.18)

20

3 The Herglotz Representation Theorems and the Easy Direction of Loewner’s. . .

where (3.18) follows by using the steps from the right side of (3.12) to (3.15) which, as noted, only need that f (z) is harmonic. Thus, adding (3.15) to (3.18), we get ˆ f (z) =

1 dθ + (f (0) − f (0) ) 2π 2

K(eiθ , z) Re f (eiθ )

(3.19)  

which is (3.1). Fourth Proof Let u(z) = Re f (z). Since u is harmonic, u(0) =

1 2π

ˆ

u(eiθ )

0

dθ 2π

(3.20)

For z ∈ D, let Tz : D → D by Tz (w) =

w+z 1 + z¯ w

(3.21)

and vz (w) = u(Tz (w))

(3.22)

so vz is also harmonic. Thus, (3.20) becomes 1 u(z) = vz (0) = 2π =

1 2π

ˆ

vz (eiθ )

dθ 2π

u(eiϕ )

dθ dϕ dϕ 2π

0

ˆ

0

(3.23)

where Tz (eiθ ) = eiϕ so eiθ = T−z (eiϕ ) and 1 − |z|2 dθ = dϕ |1 − ze−iϕ |2

(3.24)

by a direct calculation. Thus, (3.23) becomes ˆ u(z) =

Re K(eiθ , z) Re f (eiθ )

dθ 2π

(3.25)

3

The Herglotz Representation Theorems and the Easy Direction of Loewner’s. . .

21

Thus,   ˆ dθ =0 Re f (z) − K(eiθ , z) Re f (eiθ ) 2π which implies the function in (. . . ) is an imaginary constant, thus Im f (0).

(3.26)  

We can now analyze all analytic functions on D with Re f > 0: Theorem 3.2 (Herglotz Representation for D) Let f be an analytic function on D with Re f (z) > 0 for z ∈ D. Then there exists a measure dμ on ∂D so that ˆ f (z) = i Im f (0) +

K(eiθ , z) dμ(θ )

(3.27)

Remarks 1. Our proof shows that μ is unique. 2. Since Re K(eiθ , reiϕ ) = Pr (θ, ϕ) > 0, any non-zero measure dμ leads via (3.27) to an analytic function with Re f > 0. 3. Thus, there is a one-to-one correspondence between functions f with Re f > 0 and pairs (a, dμ) with a = Im f (0) ∈ R and dμ a positive measure. 4. Notice that Re f (0) = μ(∂D)

(3.28)

Proof By subtracting i Im f (0) from f and multiplying by a constant, we can suppose that f (0) = 1 (functions with Re f > 0 and f (0) = 1 are called Carathéodory functions). By Theorem 3.1, for any z ∈ D and r ∈ (0, 1), ˆ f (rz) =

K(eiθ , z) dμr (θ )

(3.29)

where dμr (θ ) = Re f (reiθ )

dθ 2π

(3.30)

Since f (0) = 1, {dμr }0 0 on C+ . Then {f ∈ H(−1, 1) | f (0) = 0, f (0) = 1} and, for any c1 , c2 , {f ∈ H(−1, 1) | |f (0)| ≤ c1 , |f (0)| ≤ c2 } are compact in the topology of uniform convergence on compact subsets of C \ [(−∞, −1] ∪ [1, ∞)]. Similarly, for any two distinct α, β ∈ (−1, 1), {f ∈ H(−1, 1) | |f (α)| ≤ c1 , |f (β)| ≤ c2 } is compact. Remarks 1. This result is conceptually critical for considering many proofs of the hard part of Loewner’s theorem. These proofs approximate f with fn ’s in H(−1, 1) obeying |fn (0)| ≤ c1 , |fn (0)| ≤ c2 , so compactness shows these fn have limit points. 2. By Vitali’s theorem (see [326, Section 6.2]), this convergence is equivalent to pointwise convergence on a dense subset of (−1, 1). 3. Our proof will rely on the compactness of the set of probability measures. One could instead appeal to Montel’s theorem (see [326, Section 6.2])

26

3 The Herglotz Representation Theorems and the Easy Direction of Loewner’s. . .

4. Similarly, for any z0 ∈ C+ , {f ∈ H(−1, 1) | |f (z0 )| ≤ c1 } is compact, for the bound on Im(f (z0 )) gives a bound on the total mass of the measures and then the bound on | Re(f (z0 )| gives a bound on |f (0)|. Proof By the last theorem, f has a representation of the form (1.13) with a ´1 measure dνf obeying −1 dνf (λ) = f (0). Thus, the first result follows from the compactness of the set of probability measures on [−1, 1] [325, Section 5.8] and the second from a more general application of the Banach–Alaoglu theorem. For the α > β result note that β α−β α − = 1 + λα 1 + λβ (1 + λα)(1 + λβ) so using (1 + λα)(1 + λβ) > (1 − |α|)(1 − |β|), we get a bound on νf (−1, 1) and then on |f (0)|. With these, we repeat the above proof.   Next, we want to rewrite (3.40) when f is real analytic on (a, b)  R so that dν is supported on J− ∪ J+ where J− = (−∞, b],

J+ = [a, ∞)

(3.48)

Pick x0 ∈ (a, b) and subtract (3.40) for x0 from (3.40) for z and then set γ = Re f (i) + f (x0 ) to find that ˆ  f (z) = αz + γ + ˆ = αz + γ +

1 1 − y − z y − x0

 dν(y)

z − x0 dν(y) (y − z)(y − x0 )

(3.49)

Since ±(y − x0 ) > 0 for y ∈ J± , we define dν± = ±(y − x0 ) dν(y)  J±

(3.50)

so dν± are positive measures obeying ˆ

ˆ f (z) = αz + γ +

dν± (y) 0 on (0, ∞) if and only if there is a measure dν1 on [0, 1] with ˆ f (z) =

z dν1 (λ) (1 − λ)z + λ

(3.59)

Remarks z 1. When λ = 0, we have that (1−λ)z+λ = 1 at least if z = 0 so we interpret it as 1 even if z = 0. Thus, f (0) = ν1 ({0}). 2. Notice that if (3.59) holds, then ˆ f (1) = dν1 (λ) (3.60)

so f (1) = 1 is equivalent to dν1 being a probability measure. Proof f (x) is given by (3.55), so ˆ f (x) = 0

1

1 dν(x) > 0 [(1 − λ)x + λ]2

so f is monotone and f (x) > 0 implies limx↓0 f (x) ≡ f (0) exists. Moreover, taking x ↓ 0 in (3.55) implies ˆ f (1) − f (0) = 0

1

dν(λ) λ

(3.61)

so, in particular, dν(λ)/λ is a finite measure. Subtracting (3.61) from (3.55) shows ˆ

1

f (z) = f (0) + 0

dν(λ) z λ + (1 − λ)z λ

(3.62)

Setting dν1 (λ) =

dν(λ) + f (0)δλ=0 λ

(3.63)  

we get (3.59).

We will sometimes need a rephrasing of the Herglotz representation, (3.62), for such f ’s, viz. ˆ f (z) = f (0) + 0

1

dν(α) z α + (1 − α)z α

(3.64)

3

The Herglotz Representation Theorems and the Easy Direction of Loewner’s. . .

29

Change variables to λ = α/(1 − α) which maps (0, 1) bijectively to (0, ∞). Equivalently, α = λ/(1 + λ), (1 − α) = 1/(1 + λ). Then α + (1 − α)z = (1 − α)(z + λ) = (z + λ)/(1 + λ) Picking dρ(λ) =

dν(α) α

on (0, ∞), f (0) = a and ν({1}) = b, we get ˆ

f (z) = a + bz + ˆ f (1) = a + b +

0 ∞

z(1 + λ) dρ(λ) z+λ

dρ(λ)

(3.65) (3.66)

0

As with (3.59), we can absorb the point masses into ρ and define a measure ρ1 on [0, ∞] and write (3.65) as ˆ f (z) = 0

z(1 + λ) dρ1 (λ) z+λ

(3.67)

where z(1+λ) z+λ is interpreted as z when λ = ∞ and as 1 when λ = 0 for all z including z = 0. Thus, b = ρ1 ({∞}) and a = ρ1 {0}. Another way that some rewrite (3.65) is as ˆ ∞ zλ dμ(λ) (3.68) f (z) = a + bz + z+λ 0 where instead of dρ being a finite measure, we have that ˆ

λ dμ(λ) < ∞ 1+λ

(3.69)

Remark The last three theorems (i.e., (1.13), (3.55), and (3.59)) can be understood in terms of the Krein–Milman theorem discussed in Chapter 28. Essentially, (a) {f | f analytic in (C\R) ∪ (−1, 1) with ± Im(f ) > 0 when ± Im(z) > 0; f (0) = 0, f (0) = 1} is a compact convex set (in the topology of locally uniform convergence) with extreme points z/(1 + λz). (b) {f | f analytic in C\(−∞, 0) with ± Im(f ) > 0 when ± Im(z) > 0; f (1) = 0, f (1) = 1} is a compact convex set with extreme points (z−1)/(λ+(1−λ)z). (c) {f | f analytic in C\(−∞, 0) with ± Im(f ) > 0 when ± Im(z) > 0; f > 0 on (0, ∞), f (1) = 1} is a compact convex set with extreme points z/(λ+(1−λ)z). Finally, we note that (1.13) implies the easy half of Loewner’s theorem: Theorem 3.12 (b) ⇒ (a) in Theorems 1.3 and 1.6. Proof By Theorem 2.1, it suffices to prove the result if (a, b) = (−1, 1). By (1.13), we need only show that

30

3 The Herglotz Representation Theorems and the Easy Direction of Loewner’s. . .

λ ∈ [−1, 1], −1 < A ≤ B < 1 ⇒

B A ≤ 1 + λA 1 + λB

(3.70)

Let μ = |λ|−1 . Multiplying the right side of (3.70) by μ−1 , we see that we need |μ| ≥ 1, −1 < A ≤ B < 1 ⇒

A B ≤ |μ| ± A |μ| ± B

(3.71)

Since |μ| C = ±1 ∓ |μ| ± C |μ| ± C

(3.72)

we need only show that |μ| ≥ 1, −1 < A ≤ B < 1 ⇒

∓1 ∓1 ≤ |μ| ± A |μ| ± B

(3.73)

This follows from (2.5). For the + case, note A ≤ B ⇒ |μ| + A ≤ |μ| + B ⇒ −(|μ| + A)−1 ≤ −(|μ| + B)−1 while for the − case, A ≤ B ⇒ |μ| − B ≤ |μ| − A ⇒ (|μ| − A)−1 ≤ (|μ| − B)−1   For the analogous half of Theorem 1.8, we need the following preliminary: Lemma 3.13 Let A and B be n × n matrices with A ≤ B. Let (a1 , b1 ), . . . , (a , b ) be distinct open intervals (say, bj < aj +1 < bj +1 ) and k1 , . . . , k positive integers  with j =1 kj = n and # eigenvalues of A in (aj , bj ) = kj

(3.74)

# eigenvalues of B in (aj , bj ) = kj

(3.75)

A(θ ) = (1 − θ )A + θ B

(3.76)

# eigenvalues of A(θ ) in (aj , bj ) = kj

(3.77)

For θ ∈ [0, 1], define

Then, for every θ ∈ [0, 1],

3

The Herglotz Representation Theorems and the Easy Direction of Loewner’s. . .

31

Proof Let e1 (θ ) ≤ e2 (θ ) ≤ · · · ≤ en (θ ) be the eigenvalues of A(θ ). They are continuous in θ (by Corollary 5.29 below) and monotone since A(θ1 ) ≥ A(θ2 ) if θ1 ≥ θ2 (since (B − A) ≥ 0). For i = 1, . . . , k1 , ei (0), ei (1) ∈ (a1 , b1 ) by (3.74)/(3.75), so ei (θ ) ∈ [ei (0), ei (1)] ⊂ (a1 , b1 ). For i = k1 + 1, . . . , k1 + k2 , ei (0), ei (1) ∈ (a2 , b2 ), etc.   Theorem 3.14 Let A, B be n × n matrices obeying (3.74)/ (3.75). Let f have the form (3.33) where supp(dμ) ⊂ R\

k 

(aj , bj )

(3.78)

j =1

Then A ≤ B ⇒ f (A) ≤ f (B)

(3.79)

´ Proof Suppose first (1 + x 2 ) dμ < ∞. From (3.35), 1 + xA(θ1 ) 1 + xA(θ2 ) − = (1 + x 2 )(x − A(θ1 ))−1 (θ2 − θ1 )(B − A)(x − A(θ2 ))−1 x − A(θ1 ) x − A(θ2 ) d Thus, by the lemma, f (A(θ )) is differentiable and dθ f (A(θ )) ≥ 0 since (x − −1 −1 A(θ )) (B − A)(x − A(θ )) ≥ 0. For general μ, the result follows by a limiting argument.  

Notes and Historical Remarks The representation (3.8) for harmonic functions is universally known as the Poisson representation. What we have called the complex Poisson representation, (3.1), is sometimes called the Cauchy representation. Poisson did have an analog for harmonic functions on the ball in R3 , but it was Fatou  who had (3.11) and (3.8). I would have preferred calling them the Fatou representations but, as noted, it has become standard to use Poisson’s name. The Herglotz representation on D is due to Herglotz  and Riesz . Their work was one of the explosion of papers [54, 55, 96, 98, 283, 284, 308, 309, 345] that followed Carathéodory’s initial paper  on analytic functions with positive real part on D. This is responsible for the name “Carathéodory function.” One of my bugaboos is that many discussions of the Herglotz representation use compactness (a.k.a. Helly’s theorem) to prove the existence of a weak limit (3.32). In general, if a limit point is unique, there is likely to be a direct way to prove convergence, and in this case, Lemma 3.3 provides that. It is inelegant to appeal to compactness when such a simple direct argument is available. Extensions to analytic functions on C+ with Im f > 0 are due to Nevanlinna  who used them to study the moment problem (see Simon  or [329, Section 7.7]) and in his work on analytic functions.

Chapter 4

Monotonicity of the Square Root

As explained in the preface, most applications of Loewner’s theorem involve the easy half of the theorem. This chapter is an aside involving the two most significant and common matrix monotone functions: fractional powers and the log. We begin with: Theorem 4.1 Let s ∈ R. On (0, ∞), let fs (x) = x s

(4.1)

Then fs ∈ M∞ (0, ∞) if and only if 0 ≤ s ≤ 1. The function  on (0, ∞) given by (x) = log(x)

(4.2)

lies in M∞ (0, ∞). Remarks 1. Since y → −y −1 is matrix monotone (Proposition 2.2), this implies that x → −x s is matrix monotone if and only if −1 ≤ s ≤ 0. 2. Our official proof will rely on the easy half of Loewner’s theorem, i.e. a Herglotz representation, but given the historical importance, we give five other proofs! 3. For s ∈ / [0, 1], not only is fs ∈ / M∞ (0, ∞), but, either by looking at det(B2 ) and using Theorem 5.18 or by Theorem 14.1, fs ∈ / M2 (a, b) for any (a, b) ⊂ (0, ∞). Proof fs has an analytic continuation to C\(−∞, 0], fs (z) = zs that is, if z = reiϕ , then fs (reiϕ ) = r s eiϕs © Springer Nature Switzerland AG 2019 B. Simon, Loewner’s Theorem on Monotone Matrix Functions, Grundlehren der mathematischen Wissenschaften 354, https://doi.org/10.1007/978-3-030-22422-6_4

33

34

4 Monotonicity of the Square Root

Thus, Im fs (reiϕ ) = r s sin(ϕs)

(4.3)

This obeys (1.14) if and only if 0 ≤ s ≤ 1. Thus, Theorem 1.3 implies the first assertion. Similarly, Im (reiϕ ) = ϕ

(4.4)

obeys (1.14) and so, by Theorem 1.3,  ∈ M∞ (0, ∞). Basically, the reason representations

xs

 

and log(x) are monotone is because of the integral

ˆ ∞  sin π s s−1 −1 w A(A + w) dw A = π 0 ˆ ∞ xA − 1 dx log A = A + x 1 + x2 0 s

(4.5) (4.6)

Example 4.2 The function f (z) = π tan(π x) on (−1/2, 1/2) is in M∞ (− 12 , 12 ) for one has the formula (see [326, Problem 9.2.3]) f (z) =

∞  n=0

2z (n +

1 2 2)

− z2

(4.7)

Since 

z Im a 2 − z2

 =

Im(a 2 z − |z|2 z¯ ) |a 2 − z2 |2

=

a 2 + |z|2 Im(z) |a 2 − z2 |2

(4.8)

tan(π z) has a positive imaginary part in the upper half plane. In essence, (4.7) is an “integral” representation for tan(π x) where the measure is a positive point measure.   For historical reasons, we want to describe four other ways of proving matrix monotonicity of x s : in three cases for s = 1/2k and in the other case for 0 < s ≤ 1. Since log(x) = lim s↓0

xs − 1 s

these x s results immediately imply matrix monotonicity of log(x).

(4.9)

4 Monotonicity of the Square Root

35

Lemma 4.3 Let C and D be arbitrary n × n matrices. Then (i) We have that spec(CD) = spec(DC)

(4.10)

CD ≤ DC

(4.11)

CD − z = C[DC − z]C −1

(4.12)

det(CD − z) = det(DC − z)

(4.13)

(ii) If CD is self-adjoint, then

Proof (i) Suppose C is invertible. Since

we have that

Replacing C by C + ε1 and taking ε → 0, we see (4.13) holds even if C is not invertible. Since spec(E) = {zeros of det(E − z)}, (4.13) implies (4.10). (ii) If E is self-adjoint, E = sup{|λ| | λ ∈ spec(E)}

(4.14)

sup{|λ| | λ ∈ spec(E)} ≤ E

(4.15)

while, in general,

Since (4.10) says {|λ| | λ ∈ spec(CD)} = {|λ| | λ ∈ spec(DC)}

(4.16)

(4.10) implies (4.11). √ Here is the second proof of matrix monotonicity of x: √ √ Theorem 4.4 0 < A ≤ B ⇒ A ≤ B

 

Proof By Lemma 2.3, 0 < A ≤ B ⇒ A1/2 B −1/2  ≤ 1 ⇒ B −1/4 A1/2 B −1/4  ≤ 1

(4.17)

⇒ A1/4 B −1/4  ≤ 1

(4.18)

⇒A

(4.19)

1/2

≤B

1/2

36

4 Monotonicity of the Square Root

Here (4.17) is (4.11) with C = B −1/4 , D = A1/2 B −1/4 and (4.18) comes from E ∗ E = E2 . Finally, (4.19) comes from (2.8).   k

k

By induction, A1/2 ≤ B 1/2 ⇒ A1/2 k monotonicity of x 1/2 . Here is the third proof:

k+1

≤ B 1/2

k+1

, so we get matrix

Theorem 4.5 If 0 < C, D, then for all s ∈ (0, 1), CD −1  ≤ 1 ⇒ C s D −s  ≤ 1

(4.20)

Remark By (2.8) with C = A1/2 , D = B 1/2 , we see (4.20) shows that for 0 < s ≤ 1, x → x s , lies in M∞ (0, ∞). Proof For z ∈ C, define f (z) ≡ C z D −z . Since C iy and D −iy are unitary for all real y, f (x + iy) = f (x)

(4.21)

Thus, f (z) is bounded in {z | 0 ≤ Re z ≤ 1}. By the maximum principle, sup 0≤Re z≤1

f (z) =

sup

f (z)

(4.22)

Re z=0 or Re z=1

By (4.21), LHS of (4.22) = max f (0), f (1) = 1 since f (0) = 1 while, by hypothesis, f (1) ≤ 1.

 

εz2

f (z) is bounded in the strip, so fε (z) ≡ e f (z) is bounded in the strip and goes to zero as |y| → ∞. Equation (4.22) follows from applying the usual maximum principle to fε and taking ε ↓ 0. The fourth proof depends on Lemma 4.6 Let C be a strictly positive Hermitian matrix and D a Hermitian matrix. If T ≡ CD + DC is positive, so is D. Proof Shift to a basis in which D is diagonal. Then 0 ≤ Tii = 2Dii Cii . Since Cii > 0, we see that D is a diagonal matrix with positive elements, so a positive matrix.   Here is the fourth proof Fourth Proof of Theorem 4.4 A and√B, we can suppose both are √ By√adding 1 to √ strictly positive. Let C = B + A and D = B − A. Then T = CD + DC = 2(B − A) ≥ 0. Since C is strictly positive, the lemma implies that D ≥ 0.  

4 Monotonicity of the Square Root

37

Here is the fifth proof (which can be viewed as a variant of the fourth): √ √ Fifth Proof of Theorem 4.4 Suppose 0 < A ≤ B but that S ≡ A ≤ T ≡ B is false. Then let P be the projection onto an eigenvector for S − T with eigenvalue e > 0 so P (S − T ) = P (S − T ) = eP . Thus, Tr(P (S − T )(S + T )) = eTr(P (S + T )P ) ≥ 0. On the other hand, by cyclicity of the trace (and [P , S − T ] = 0) Tr(P (S − T )(S + T )) = Tr((S + T )(S − T )P ) = Tr(P (S + T )(S − T )) so 2Tr(P (S − T )(S + T )) = Tr(P (S 2 − T 2 )) = Tr(P (A − B)P ) ≤ 0. It follows that Tr(P (S + T )P ) = 0 so Tr(P SP ) = Tr(P T P ) = 0 so Tr(P (S − T )P ) = 0 so e = 0 contrary to hypothesis. This contradiction establishes the result.   Here is the sixth proof (which is clearly related to the last two proofs) Sixth Proof of Theorem 4.4 Let X = B − A. By taking limits we can suppose that A and X are strictly positive. Define for t ≥ 0 F (t) =

√ A + tX

(4.23)

Clearly, it suffices to prove that F (t) is positive, equivalently, if F (t)ϕ = λϕ that λ > 0. Differentiating F (t)F (t) = A + tX, we see that F (t)F (t) + F (t)F (t) = X Put this equation in ϕ, · ϕ to see that 2λ ϕ, F (t)ϕ = ϕ, Xϕ Since F (t) and X are strictly positive, so is λ.

 

Having seen five proofs of the matrix monotonicity of log(x), we give an application: Theorem 4.7 (Segal’s Lemma) Let A and B be finite self-adjoint matrices. Then eA+B  ≤ eA eB 

(4.24)

Proof By adding a constant to B, we can suppose eA eB  = 1

(4.25)

By (2.8), (4.25) ⇒ e2A ≤ e−2B ⇒ 2A ≤ −2B

(4.26)

38

4 Monotonicity of the Square Root

⇒A+B ≤0 ⇒ spec(A + B) ⊂ (−∞, 0] ⇒ spec(eA+B ) ⊂ (0, 1]

(4.27)

⇒ eA+B  ≤ 1 where (4.26) uses the matrix monotonicity of log(x), and (4.27) the spectral mapping theorem.   Segal’s lemma is important in the study of models of quantum field theory; see the Notes. That fs of (4.1) is matrix monotone on operators has been called by some the Loewner–Heinz or Heinz inequality (even though Heinz’s rediscovery followed Loewner’s work by 17 years)! There are a number of variants that have been given names and there are papers proving the equivalence to each other and to the Loewner–Heinz inequality. One is the Cordes inequality that 0 ≤ A, B;

0 ≤ s ≤ 1 ⇒ As B s  ≤ ABs

(4.28)

which is just (4.20) if A = C and B = D −1 . Another is the Heinz–Kato inequality: given A, B positive and T another operator, if for all ϕ, ψ, we have that T ϕ ≤ Aϕ T ∗ ψ ≤ Bψ then | ψ, T ϕ | ≤ As ϕB 1−s ψ

(4.29)

for all 0 ≤ s ≤ 1. To prove (4.29), we use the singular value decomposition (see [329, Section 3.5]) which in the finite dimensional case says that if T is an n-dimension operator, there are orthonormal bases {κj }nj=1 and {ηj }nj=1 and non-negative reals {ej }nj=1 so that |T |κj = ej κj and |T ∗ |ηj = ej ηj and so that if U is the unitary with U κj = ηj , we have the polar decomposition T = U |T | = |T ∗ |U . This implies that U |T |U ∗ = |T ∗ | so that for any positive t, U |T |t U ∗ = |T ∗ |t which implies that |T |t U ∗ = U ∗ |T ∗ |t

(4.30)

It follows that

ψ, T ϕ = ψ, U |T |1−s |T |s ϕ = |T |1−s U ∗ ψ, |T |s ϕ = U ∗ |T ∗ |1−s ψ, |T |s ϕ

(4.31)

using (4.30) with t = 1 − s. Thus, | ψ, T ϕ | ≤ |T ∗ |1−s ψ|T |s ϕ

(4.32)

4 Monotonicity of the Square Root

39

With these preliminaries, we can prove the Heinz–Kato inequality (4.29). For the left side of (4.29) and the Loewner–Heinz inequality imply that for 0 ≤ s, t ≤ 1 and all ϕ, ψ, we have that |T |s ϕ ≤ As ϕ and |T ∗ |t ψ ≤ B t ψ which, with (4.32), implies the right side of (4.29). This proves the Heinz–Kato inequality. Another equivalent form is the Chan–Kwong inequality: If we have four positive matrices and if A commutes with C and B commutes with D then A ≥ B and C ≥ D ⇒ A1/2 C 1/2 ≥ B 1/2 D 1/2 . To see this we first note we can suppose all the operators are strictly positive since we can add 1 and take  ↓ 0. Assuming that, let X ≡ A1/2 C 1/2 and Y ≡ B 1/2 D 1/2 . Then XA−1 X = C ≥ D = Y B −1 Y ≥ Y A−1 Y since A ≥ B ⇒ B −1 ≥ A−1 . Multiplying on both sides by A−1/2 we see that (A−1/2 XA−1/2 )2 ≥ (A−1/2 Y A−1/2 )2 so, by the monotonicity of square root, A−1/2 XA−1/2 ≥ A−1/2 Y A−1/2 . Multiplying on both sides by A1/2 , we get that X ≥ Y , the conclusion of the Chan–Kwong inequality. Taking C = D = 1, we recover the monotonicity of the square root. But we get even more: taking A = Xs , C = Xt , B = Y s , D = Y t , the Chan–Kwong inequality implies that the set of s for which x → x s is matrix monotone is closed under averages so it contains all dyadic rationals in [0, 1] and so, by a continuity argument all of [0, 1]. Since the above proof of the Chan–Kwong inequality only needed monotonicity of the square root, we see one again a way to go from monotonicity of the square root to all fractional powers. As a final equivalent inequality we mention (there are still a few others in the Notes), we consider the Fujii–Furuta inequality: for any pair of positive operators, C and D, one has that CDC ≤ C 2 D. If C = A1/2 and D = B, we note that CDC = A1/2 B 1/2  and the inequality is just (4.28) for s = 1/2. The point in singling this out is that Fujii–Furuta  have a compact discussion of the equivalences centered on this form. 
Moreover, there is a vast generalization due to McIntosh (see the Notes): A, B ≥ 0, X arbitrary n × n matrices ⇒ As XB 1−s  ≤ AXs XB1−s (4.33) for 0 ≤ s ≤ 1. The special case A = B = C 2 , D = X, s = 1/2 is the Fujii–Furuta inequality. Notes and Historical Remarks Despite the fact that the easy half of Loewner  implies monotonicity of square root, the work was so little known in the early 1950s that Heinz  made a splash in 1951 proving this in the context of certain PDE bounds. The proof given after our statement of Theorem 4.4 is essentially due to Kato . Kato noted (and Pedersen  rediscovered) that the same idea proves that if 0 ≤ Aγ ≤ B γ for γ = α and γ = β, then it is true for γ = 12 (α + β) which proves that x → x s is matrix monotone for 0 ≤ s ≤ 1. The fourth proof is borrowed from Bhatia  who notes that the arguments are variants of those in Lax . The fifth proof is from Ogawasara  who also proved that a C ∗ -algebra in which 0 ≤ A ≤ B ⇒ A2 ≤ B 2 is abelian (see Example 1.2). The sixth proof appears in Hiai–Petz .

40

4 Monotonicity of the Square Root

AB + BA is sometimes called the symmetric product and 12 (AB + BA) is called the Jordan product after the physicist, Pascual Jordan, who used it in quantum theory (not the mathematician, Camille Jordan, of the Jordan normal form and Jordan curve theorem). Moslehian–Najafi  have proven an interesting connection between these products and matrix monotone functions. If A, B ≥ 0, then their symmetric product is positive if and only if for every positive matrix monotone function, f on (0, ∞), one has that f (A + B) ≤ f (A) + f (B) In particular, this implies for such f , 0 < p ≤ 1/2 and 0 ≤ A ≤ B, we have that f (B p ) ≤ f ((B p + Ap )/2) + f ((B p − Ap )/2). Segal’s lemma is due to Segal ; that it follows from matrix monotonicity of log is a remark of Simon and Høegh-Krohn . It is an abstraction of an argument of Nelson . For analogs concerning traces (Golden–Thompson inequalities), see Simon . Equation (4.10) is an extremely useful result as discussed, for example, by Deift . It extends to the Hilbert space case in a slightly different form spec(CD)\{0} = spec(DC)\{0}

(4.34)

Removing 0 is necessary, for if {ϕn } is an orthonormal basis and Cϕn = ϕn+1 , then 0 ∈ spec(CC ∗ ), but 0 ∈ / spec(C ∗ C). Equation (4.34) follows from (CD − λ)−1 = −λ−1 + λ−1 C(DC − λ)−1 D The Heinz–Kato inequality is from Kato , the Cordes inequality appeared in Cordes  (and independently in Furuta ), and the Chan–Kwong inequality in Chan–Kwong . The McIntosh inequality was unpublished but appeared without proof in an often quoted technical report that does not seem to be available online. A generalization to general unitarily invariant norms (see Chapter 42) with a proof can be found in Kittaneh  (see also ). Papers that discuss equivalent forms of the Heinz–Loewner inequality include [104, 105, 107, 115, 118, 184, 366]. Fujii–Furuta  discuss some other equivalent forms such as AX + XB ≥ As XB 1−s + A1−s XB s  (for A, B ≥ 0 and 0 ≤ s ≤ 1) and A2 B + BA2  ≥ ABA (for A ≥ 0 and B ∗ = B). Another equivalent form is the result of Furuta  that for A, B ≥ 0, one has that s → As B s  is a convex, increasing function. Furuta  has proven a generalization of the matrix monotonicity of x s ; 0 ≤ s ≤ 1—namely if 0 ≤ A ≤ B, then (B r Ap B r )1/q ≤ (B r B p B r )1/q for r, p, q > 0 obeying (1 + 2r)q ≥ p + 2r (the x s result is q = 1, r = 0, p = s). For further literature on this inequality, see [106, 108–112, 114, 174, 367]. For other direct constructions of matrix monotone functions, (i.e., functions in M∞ (0, ∞)), see Furuta , Uchiyama [348, 349], and Uchiyama–Hasumi . For a general discussion of matrix inequalities, see the book of Furuta .

4 Monotonicity of the Square Root


An operator, perhaps unbounded, on a Hilbert space, H, is called maximal accretive if Re⟨ϕ, Aϕ⟩ ≥ 0 for all ϕ ∈ D(A) and Ran(A + 1) = H (it is known [280, Theorem X.48] that this is equivalent to being the generator of a contraction semigroup). Kato has a generalization of the Heinz–Loewner inequality: if A and B are maximal accretive operators on a Hilbert space with D(A) ⊂ D(B) so that ‖Bϕ‖ ≤ ‖Aϕ‖ for all ϕ ∈ D(A), then for 0 ≤ s ≤ 1 and all ϕ ∈ D(A^s), one has that ‖B^s ϕ‖ ≤ exp(π²s(1 − s)/2) ‖A^s ϕ‖ (it is known how to define fractional powers of maximal accretive operators in a natural way).

In this chapter, we proved directly that several functions (x → x^α, 0 < α < 1, and x → log(x)) are matrix monotone, and later we'll discuss others. We should mention that there is a literature on still other functions, many of them of use in statistical physics and/or quantum information theory. Some proofs are direct and others prove the applicability of Loewner's theorem. For example, Szabó proved that the function

f(x) = (x − 1)² / [(x^p − 1)(x^{1−p} − 1)]   (4.35)

lies in M∞(0, ∞) if 0 < p < 1. (It is matrix monotone decreasing if p ∈ (−1, 0) ∪ (1, 2).) For other examples, see the book of Hiai–Petz and the papers of Nagisa, Nagisa–Wada, Nakamura, and Uchiyama.

Chapter 5

Loewner Matrices

In this chapter, we will reduce matrix monotonicity to the positivity of certain matrices and determinants. Seven of the eleven proofs we'll give of the hard part of Loewner's theorem rely on this reduction. It is a long chapter, in part because we'll discuss two variants of the basic Loewner matrix (multipoint and extended Loewner matrices) and a local variant (the Dobsch matrix), as well as analogs that will be needed when we study matrix convex functions. We'll also present tools, including eigenvalue perturbation theory, the Daleckiĭ–Krein formula, and divided differences, that will be useful later. Finally, we'll prove an important smoothness result: any f ∈ Mn(a, b) is a C^{2n−3} function on (a, b).

Suppose f is C¹ on (a, b) and a < x1 < x2 < · · · < xk < b. The Loewner matrix associated to these points is the k × k matrix

(Lk(x1, . . . , xk))ij = [f(xi) − f(xj)] / (xi − xj)  if i ≠ j;   f′(xi)  if i = j   (5.1)

This formula suggests it is useful to define [xi, xj; f] by the right side of (5.1). Here is the key reduction:

Theorem 5.1 (Loewner) Let f be a C¹ function on (a, b). Then f ∈ Mn(a, b) if and only if for all a < x1 < · · · < xn < b, we have

Ln(x1, . . . , xn) ≥ 0   (5.2)

We'll give two proofs of this basic result. The first will involve an explicit formula for df(A + λB)/dλ, and the second, close to Loewner's proof, will use the theory of rank one perturbations. It is not really a restriction to suppose f is C¹, for we will also prove in this chapter that:

Theorem 5.2 Any f ∈ Mn(a, b) is C^{2n−3}. In particular, any f in Mn(a, b) for some n ≥ 2 is C¹.

© Springer Nature Switzerland AG 2019
B. Simon, Loewner's Theorem on Monotone Matrix Functions, Grundlehren der mathematischen Wissenschaften 354, https://doi.org/10.1007/978-3-030-22422-6_5
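Before developing the machinery, Theorem 5.1 is easy to probe numerically. The following sketch (not from the text; it assumes NumPy is available) builds Loewner matrices per (5.1) for x → √x, which Chapter 4 showed is matrix monotone on (0, ∞), and checks they are positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(0)

def loewner_matrix(x, f, fprime):
    # (5.1): [f(x_i) - f(x_j)] / (x_i - x_j) off the diagonal, f'(x_i) on it
    n = len(x)
    L = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            L[i, j] = fprime(x[i]) if i == j else (f(x[i]) - f(x[j])) / (x[i] - x[j])
    return L

for _ in range(100):
    x = np.sort(rng.uniform(0.1, 10.0, size=5))
    if np.min(np.diff(x)) < 1e-3:
        continue  # keep the points well separated to avoid cancellation
    L = loewner_matrix(x, np.sqrt, lambda t: 0.5 / np.sqrt(t))
    # consistent with sqrt being in M_n(0, ∞) and Theorem 5.1
    assert np.linalg.eigvalsh(L).min() >= -1e-10
```

For √x this is no accident structurally: the entries are 1/(√xi + √xj), a Cauchy matrix, which is a classical example of a positive matrix.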


We will see in Chapter 7 that there exist f in Mn(−1, 1) where f is not classically C^{2n−2}. Not only does Theorem 5.2 illuminate Theorem 5.1, it turns out that Theorem 5.1 will be the key to proving Theorem 5.2. Our proof of Theorem 5.2 will involve an explicit formula for (d/dλ)f(A + λC)|_{λ=0} when A is the diagonal matrix

A = diag(x1, . . . , xn)   (5.3)

To state and exploit this theorem, it will be convenient to have

Definition Let X, Y be two n-dimensional matrices. Their Schur product X ∘ Y is defined by

(X ∘ Y)mk = Xmk Ymk   (5.4)

Lemma 5.3 The Schur product of two positive matrices is positive.

Proof For any vector ϕ, let Q^(ϕ) be the matrix

Q^(ϕ)_mk = ϕ̄m ϕk   (5.5)

which is clearly positive. Since any positive matrix has an orthonormal basis of eigenvectors with non-negative eigenvalues, any positive X is a sum of Q^(ϕ)'s. Thus, it suffices to prove the result for X = Q^(ϕ), Y = Q^(ψ). Since Q^(ϕ) ∘ Q^(ψ) = Q^(ϕ∘ψ), where (ϕ ∘ ψ)m = ϕm ψm, the special case is obvious. ∎
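Lemma 5.3 (the Schur product theorem) is easy to exercise numerically. A small sketch (not from the text; it assumes NumPy) builds random positive matrices as M M* and checks that their entrywise product again has non-negative spectrum:

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_positive(n):
    # M M* is positive (semi-definite) for any square M
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return m @ m.conj().T

X, Y = rand_positive(5), rand_positive(5)
S = X * Y  # entrywise (Schur) product, NOT the matrix product X @ Y

assert np.allclose(S, S.conj().T)          # S is again self-adjoint
assert np.linalg.eigvalsh(S).min() > -1e-10  # and positive, as Lemma 5.3 asserts
```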

As a second preliminary, it will be useful to have two approximation theorems: one is a simple extension of the Weierstrass approximation theorem [325, Theorem 2.4.2]; the other is specific to matrix monotone functions.

Proposition 5.4
(a) Let f be a continuous function on some bounded open interval (a, b). Then there exist polynomials Pm so that for any [c, d] ⊂ (a, b),

sup_{c≤x≤d} |f(x) − Pm(x)| → 0   (5.6)

as m → ∞. Moreover, if f is C^k, then for ℓ = 0, 1, . . . , k,


sup_{c≤x≤d} | (d^ℓ/dx^ℓ) f − (d^ℓ/dx^ℓ) Pm | → 0   (5.7)

as m → ∞.
(b) Let (a, b) be a finite interval and f ∈ Mn(a, b) be continuous. Then there exist C∞ functions fm on (a + 1/m, b − 1/m) so that fm ∈ Mn(a + 1/m, b − 1/m) and (5.6) holds with Pm replaced by fm. If f is C^k, (5.7) holds with Pm replaced by fm. If f is assumed matrix monotone, but not a priori continuous, then fm exists in Mn(a + 1/m, b − 1/m) so that sup_{c≤x≤d; m} |fm(x)| < ∞ and fm(x) → f(x) for almost all x.

Proof (a) If we prove the result for closed intervals, [c, d], uniformly on [c, d], it follows for open intervals (a, b) by a two-step approximation. By scaling, we can take [c, d] = [0, 1], so suppose f is continuous on [0, 1]. Define the Bernstein polynomials Bm(x) by

Bm(x) = Σ_{j=0}^{m} f(j/m) (m choose j) x^j (1 − x)^{m−j}   (5.8)

Introduce the shorthand

Em,x(Q(j, x)) = Σ_{j=0}^{m} (m choose j) x^j (1 − x)^{m−j} Q(j, x)   (5.9)

E stands for expectation since, by the binomial theorem (see also the Notes),

Em,x(1) = 1;   Q ≥ 0 ⇒ Em,x(Q(j, x)) ≥ 0

We have that

(d^ℓ/da^ℓ)(1 + a)^m = (d^ℓ/da^ℓ) Σ_{j=0}^{m} (m choose j) (x + a)^j (1 − x)^{m−j}

so evaluating both sides at a = 0 (and multiplying by x^ℓ), we get

Em,x(j(j − 1) . . . (j − ℓ + 1)) = m(m − 1) . . . (m − ℓ + 1) x^ℓ   (5.10)

In particular, Em,x(j) = mx, so

Em,x(j²) = Em,x(j(j − 1) + j) = m(m − 1)x² + mx   (5.11)


Em,x( ((j/m) − x)² ) = (1 − 1/m)x² + x/m − 2x² + x² = x(1 − x)/m   (5.12)

Thus,

|Bm(x) − f(x)| = |Em,x( f(j/m) − f(x) )|
  ≤ 2 sup_x |f(x)| Em,x( χ_{ {j : |x − j/m| > δ} } ) + sup_{|x−y|≤δ} |f(x) − f(y)|
  ≤ 2δ^{−2} [x(1 − x)/m] sup_x |f(x)| + sup_{|x−y|≤δ} |f(x) − f(y)|

It follows that

lim sup_{m→∞} sup_x |Bm(x) − f(x)| ≤ sup_{|x−y|≤δ} |f(x) − f(y)|   (5.13)

Taking δ ↓ 0 and using uniform continuity of f, we see Bm → f uniformly.

Now suppose f is C^k. Since

(d/dx)[ (m choose j) x^j (1 − x)^{m−j} ] = m[ (m−1 choose j−1) x^{j−1}(1 − x)^{m−j} − (m−1 choose j) x^j (1 − x)^{m−1−j} ]

we see that

(d/dx) Em,x(Q(j)) = m Em−1,x( Q(j + 1) − Q(j) )   (5.14)

Letting δ be the operator (δQ)(j) = Q(j + 1) − Q(j), we see, by induction, that

(d^ℓ/dx^ℓ) Em,x(Q(j)) = [m(m − 1) . . . (m − ℓ + 1) / m^ℓ] Em−ℓ,x( (δ^ℓ Q)(j) / (1/m)^ℓ )   (5.15)

Since Q(j) ≡ f(j/m) and ℓ ≤ k implies

(δ^ℓ Q)(j) / (1/m)^ℓ − (d^ℓ f/dx^ℓ)(j/m) → 0

uniformly in j, m, the same argument that led to (5.13) shows that (d^ℓ/dx^ℓ)Bm(x) → (d^ℓ f/dx^ℓ)(x).


(b) Let jm be an approximate identity, that is,

jm(x) = m j1(mx)

where j1 is supported on (−1, 1), j1 ≥ 0, j1 is C∞, and ∫_{−1}^{1} j1(x) dx = 1. Let fm(x) be defined for x ∈ (a + 1/m, b − 1/m) by

fm(x) = ∫ f(x − y) jm(y) dy   (5.16)

fm is C∞ since jm is, and fm ∈ Mn(a + 1/m, b − 1/m) since for |y| ≤ 1/m, f(· − y) ∈ Mn(a + 1/m, b − 1/m). By a standard argument, if f is continuous, fm → f uniformly, and if f is C^k, d^ℓfm/dx^ℓ → d^ℓf/dx^ℓ. If f is monotone, fm(x) → f(x) at points of continuity of f, which include a.e. x. ∎

Remark One can also prove (a) by using convolution (first with scaled Gaussians and then approximating the Gaussians with polynomials), but the Bernstein polynomials have a more explicit feel.

Lemma 5.5 Let fk, f∞ be C¹ functions on (a, b) so that fk → f∞ and dfk/dx → df∞/dx uniformly on subintervals [c, d] ⊂ (a, b). Then
(a) If A and C are self-adjoint matrices and A has distinct eigenvalues in (a, b), then for λ near 0, fk(A + λC) is differentiable in λ and

lim_{k→∞} dfk(A + λC)/dλ |_{λ=0} = df∞(A + λC)/dλ |_{λ=0}   (5.17)

(b) If a < x1 < · · · < xm < b, then Lm(x1, . . . , xm; fk) → Lm(x1, . . . , xm; f∞).

Proof We see below (see Corollary 5.29) that for λ near 0, there are eigenvalues x1(λ), . . . , xm(λ) and one-dimensional eigenprojections P1(λ), . . . , Pm(λ), so that

A + λC = Σ_{j=1}^{m} xj(λ) Pj(λ)   (5.18)

and xj and Pj are analytic near λ = 0. Then

f(A + λC) = Σ_{j=1}^{m} f(xj(λ)) Pj(λ)   (5.19)

and so f(A + λC) is C¹ near λ = 0 and


df(A + λC)/dλ = Σ_{j=1}^{m} [ (df/dx)(xj(λ)) (dxj/dλ) Pj(λ) + f(xj(λ)) (dPj/dλ) ]   (5.20)

This immediately implies (5.17). (b) is immediate from the definition. ∎

We can now find an explicit formula for df(A + λC)/dλ that makes it clear where Loewner matrices enter:

Theorem 5.6 (Daleckiĭ–Krein Formula) Let a < x1 < · · · < xm < b and let f be a C¹ function on (a, b). Let A be the diagonal matrix

A = diag(x1, . . . , xm)   (5.21)

and C an arbitrary m × m self-adjoint matrix. Then

df(A + λC)/dλ |_{λ=0} = Lm(x1, . . . , xm; f) ∘ C   (5.22)

Remark We'll provide two more proofs of (5.22) later, one after the proof of Theorem 5.13 and the other implicitly in the proof of Theorem 5.16.

Proof (Loewner!) By eigenvalue perturbation theory, for small real λ, there exists an orthonormal basis, {ϕk(λ)}_{k=1}^{m}, and eigenvalues, {xk(λ)}_{k=1}^{m}, so that

(A + λC)ϕk(λ) = xk(λ)ϕk(λ)   (5.23)
ϕk(0) = δk;   xk(0) = xk   (5.24)

Thus, using ⟨ϕk(λ), f(A + λC)ϕℓ(0)⟩ = ⟨f(A + λC)ϕk(λ), ϕℓ(0)⟩,

⟨ϕk(λ), [f(A + λC) − f(A)]ϕℓ(0)⟩ = [f(xk(λ)) − f(xℓ(0))] ⟨ϕk(λ), ϕℓ(0)⟩   (5.25)

so taking f(u) = u, we get

⟨ϕk(λ), λCϕℓ(0)⟩ = (xk(λ) − xℓ(0)) ⟨ϕk(λ), ϕℓ(0)⟩   (5.26)

Combining these last two equations yields

⟨ϕk(λ), [ (f(A + λC) − f(A))/λ ] ϕℓ(0)⟩ = [xk(λ), xℓ(0); f] ⟨ϕk(λ), Cϕℓ(0)⟩   (5.27)

Taking λ to zero, we see that

( df(A + λC)/dλ |_{λ=0} )_{kℓ} = [xk, xℓ; f] Ckℓ   (5.28)

which is (5.22). ∎
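The Daleckiĭ–Krein formula (5.22) is easy to test numerically. Here is a sketch (not from the text; it assumes NumPy and uses a symmetric finite difference) for f(x) = x³, where f of a matrix can be computed directly as a matrix power:

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.array([0.5, 1.0, 1.7, 2.3])  # distinct eigenvalues of the diagonal A
A = np.diag(x)
C = rng.standard_normal((4, 4))
C = (C + C.T) / 2                    # self-adjoint perturbation

f = lambda M: np.linalg.matrix_power(M, 3)   # f(x) = x^3 on matrices

# Loewner matrix (5.1): divided differences off the diagonal, f'(x_i) on it
L = np.empty((4, 4))
for i in range(4):
    for j in range(4):
        L[i, j] = 3 * x[i] ** 2 if i == j else (x[i] ** 3 - x[j] ** 3) / (x[i] - x[j])

eps = 1e-6
deriv = (f(A + eps * C) - f(A - eps * C)) / (2 * eps)  # d/dλ f(A + λC) at λ = 0
assert np.allclose(deriv, L * C, atol=1e-5)            # (5.22): derivative = L ∘ C
```

For the cubic this can even be checked by hand: the derivative is A²C + ACA + CA², whose (i, j) entry is (xi² + xi xj + xj²)Cij, and that factor is exactly the divided difference [xi, xj; x³].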

Remark While f′(A) doesn't appear, it is not hard to show that if f is matrix monotone, then ‖ df(A + λC)/dλ |_{λ=0} ‖ ≤ ‖f′(A)‖ ‖C‖. More is true: under the monotonicity assumption, the sup over all C with ‖C‖ = 1 is equal to ‖f′(A)‖. This is a result of .

We have the tools to prove Theorem 5.1, but to state a third equivalence, we need to analyze the relationship between positivity of a matrix and of certain determinants.

Proposition 5.7 Let A be a d × d self-adjoint matrix and A^(d−1) the matrix obtained by dropping the last row and column. Then their eigenvalues interlace, that is, if λ1 ≤ λ2 ≤ · · · ≤ λd are the eigenvalues of A and μ1 ≤ μ2 ≤ · · · ≤ μd−1 of A^(d−1), then

λj ≤ μj ≤ λj+1   (5.29)

Proof Since

λ1 = min_{‖ϕ‖=1} ⟨ϕ, Aϕ⟩;   μ1 = min_{‖ϕ‖=1, ϕd=0} ⟨ϕ, Aϕ⟩   (5.30)

clearly, λ1 ≤ μ1. Let ψ1 be a unit eigenvector for λ1 and A. Then

λ2 = min_{‖ϕ‖=1, ⟨ϕ,ψ1⟩=0} ⟨ϕ, Aϕ⟩ ≤ min_{‖ϕ‖=1, ϕd=0, ⟨ϕ,ψ1⟩=0} ⟨ϕ, Aϕ⟩   (5.31)

There is a linear combination, ϕ, of the eigenvectors associated to μ1, μ2 orthogonal to ψ1, and it has ϕd = 0, so RHS of (5.31) ≤ μ2. By similar arguments, λj ≤ μj. Replacing min's by max's, we get λj+1 ≥ μj. ∎

Definition Let A be a d × d matrix. Let I ⊂ {1, . . . , d}. The principal determinant, dI(A), is the determinant of the #I × #I matrix {Aij}_{i,j∈I}. The main principal determinants are dj(A) = d_{{1,...,j}}(A) for j = 1, 2, . . . , d.
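The interlacing in Proposition 5.7 (the Cauchy interlacing theorem) is also easy to check numerically; a short sketch, not from the text, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6
A = rng.standard_normal((d, d))
A = (A + A.T) / 2                      # random self-adjoint matrix

lam = np.linalg.eigvalsh(A)            # eigenvalues of A, ascending
mu = np.linalg.eigvalsh(A[:-1, :-1])   # drop the last row and column

# (5.29): lambda_j <= mu_j <= lambda_{j+1}
assert np.all(lam[:-1] <= mu + 1e-12)
assert np.all(mu <= lam[1:] + 1e-12)
```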


Proposition 5.8 A self-adjoint matrix A is strictly positive if and only if each main principal determinant is strictly positive. A self-adjoint matrix A is positive if and only if each principal determinant is non-negative.

Remark The matrix diag(0, −1) has all main principal determinants non-negative, but is not positive. Thus, we need all principal and not just main principal determinants in the second sentence.

Proof If A is (strictly) positive, so is each AI = {Aij}_{i,j∈I}, so its eigenvalues are (strictly) positive, so its determinant, dI(A), is (strictly) positive.

We prove the converse in the strictly positive case inductively in d = dim(A). d = 1 is trivial. If the result holds for dimension d − 1 and dj(A) > 0 for all j, then dj(A^(d−1)) > 0 for j = 1, . . . , d − 1. So, by induction, A^(d−1) is strictly positive and its eigenvalues obey 0 < μ1 ≤ · · · ≤ μd−1. By (5.29), 0 < μ1 ≤ λ2 ≤ · · · ≤ λd. Since λ1λ2 . . . λd > 0, we conclude that λ1 > 0 also, so A is strictly positive.

For the general converse, let B(ε) = A + ε1. Then, with d = dim(A),

det(B(ε)) = det(A) + ε Σ_{#(I)=d−1} dI(A) + ε² Σ_{#(I)=d−2} dI(A) + · · · + ε^d ≥ 0   (5.32)

The eigenvalues of B(ε) are {λj + ε}_{j=1}^{d}, which are nonzero for all sufficiently small ε > 0, that is, det(B(ε)) ≠ 0, so det(B(ε)) > 0. Similarly, dj(B(ε)) > 0 for ε small, so by the strictly positive case, B(ε) > 0, and thus limε↓0 B(ε) = A ≥ 0. ∎

The following includes Theorem 5.1:

Theorem 5.9 Let f be a real-valued function on (a, b). Then, the following are equivalent:
(a) f ∈ Mn(a, b)
(b) For each

a < x1 < · · · < xn < b   (5.33)

we have

Ln(x1, . . . , xn; f) ≥ 0   (5.34)

(c) For each ℓ ≤ n and a < x1 < · · · < xℓ < b,

det(Lℓ(x1, . . . , xℓ; f)) ≥ 0   (5.35)

Proof (b) ⇔ (c) by Proposition 5.8 and the fact that any principal determinant of Ln is det(Lℓ(x_{i1}, . . . , x_{iℓ}; f)).

(a) ⇒ (b) Pick x1, . . . , xn obeying (5.33) and A as given by (5.21). Let C ≥ 0. Then A + λC ≥ A for λ ≥ 0, so f(A + λC) ≥ f(A), so df(A + λC)/dλ |_{λ=0} ≥ 0. Thus, by (5.22),

C ≥ 0 ⇒ L ∘ C ≥ 0   (5.36)

If C is the rank one matrix Cij = ϕ̄i ϕj, then Σ_{ij}(L ∘ C)_{ij} = ⟨ϕ, Lϕ⟩, so (5.36) implies L is positive.

(b) ⇒ (a) By Lemma 5.3 and (5.22), if C ≥ 0, (5.33) holds, and A has the form (5.21), then

df(A + λC)/dλ |_{λ=0} ≥ 0   (5.37)

But positivity is basis independent, so (5.37) holds for any A with distinct eigenvalues (by shifting to a basis where A is diagonal). From this, we see that if A ≤ B and A + λ(B − A) have distinct eigenvalues for all λ ∈ [0, 1], then f(B) ≥ f(A).

If A has distinct eigenvalues and B ≥ A and C(λ) = A + λ(B − A), then the eigenvalues of C(λ) are analytic in λ in a neighborhood of [0, 1] (see Corollary 5.29 and the Notes), so there exist finitely many 0 = λ0 < λ1 < · · · < λℓ < λℓ+1 = 1 with C(λ) having degenerate eigenvalues only at λ1, λ2, . . . , λℓ, and perhaps at λℓ+1. By the above, f(C(λj + ε)) ≤ f(C(λj+1 − ε)), so taking ε ↓ 0, we see that f(C(λj)) ≤ f(C(λj+1)) and thus f(A) ≤ f(B), so long as A has distinct eigenvalues. Since any A with eigenvalues in (a, b) is a limit of matrices, Am, with distinct eigenvalues and f(Am) ≤ f(Am + (B − A)) for m large, we see f(A) ≤ f(B) in general. ∎

The next key development from Theorem 5.1 is to prove an infinitesimal version of (5.35). In particular, we are heading towards showing that if f is C^{2n−1}, the matrix Bn(x0) with

(Bn(x0; f))kℓ = f^{(k+ℓ−1)}(x0) / (k + ℓ − 1)!   (5.38)

is positive. We will call Bn the Dobsch matrix. It is a Hankel matrix, i.e., its matrix elements are only a function of i + j. Hankel matrices are relevant to the study of the moment problem (see [325, Section 4.17]). In particular, if cj = ∫ x^j dμ(x) for a positive measure μ and f(x) = c0 x + c1 x² + · · · + c_{2n−2} x^{2n−1}, then Bn(0; f) is a positive matrix and, if the support of μ has at least n points, a strictly positive matrix (see [325, Theorem 4.17.3]).

Intuitively, the Dobsch matrix will come from taking x0 < x1 < · · · < xn and letting xj+1 − xj → 0 suitably. It will help to have the machinery of divided differences. We'll discuss the basics here and say a lot more about them in Chapters 22 and 23.

Definition Given a continuous function f on (a, b) and x1, . . . , xn distinct in (a, b), we define [x1, . . . , xn; f] inductively by

[x1; f] = f(x1)   (5.39)

[x1, . . . , xℓ; f] = ( [x1, . . . , xℓ−1; f] − [x2, . . . , xℓ; f] ) / (x1 − xℓ)   (5.40)

The reader will note that the off-diagonal matrix elements of L(x1, . . . , xn; f) are precisely [xi, xj; f]. We want to show that [x1, . . . , xn; f] is symmetric in x1, . . . , xn and can be extended continuously to coincident points if f is C^{n−1}, and in that case,

[x, . . . , x (k times); f] = f^{(k−1)}(x) / (k − 1)!   (5.41)

This suggests that [x1, . . . , xn+1; f], called the nth divided difference, is a kind of generalized nth derivative, with the advantage that for it to be defined (for all x's unequal), we don't need any kind of a priori smoothness or even measurability hypothesis on f. For example, recall that a function f : (a, b) → R is called convex if and only if for all x, y ∈ (a, b) and θ ∈ [0, 1], we have that

f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y)   (5.42)

It is well known that if f is C², it is convex if and only if f^{(2)}(x) ≥ 0 for all x ∈ (a, b). But what if f is a priori not C²? Writing x1 = x, x2 = y, and x3 = θx + (1 − θ)y, so that θ = (x3 − x2)/(x1 − x2), a little manipulation shows that

(5.42) ⟺ ∀ x1, x2, x3 ∈ (a, b): [x1, x2, x3; f] ≥ 0   (5.43)
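The recursion (5.39)–(5.40) is immediate to implement. A small sketch (not from the text; plain Python) that also spot-checks the symmetry in the points and the convexity criterion (5.43) for the convex function exp:

```python
import math
from itertools import permutations

def divided_difference(xs, f):
    """[x1, ..., xn; f] via the recursion (5.39)-(5.40); the points must be distinct."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(xs[:-1], f) - divided_difference(xs[1:], f)) / (xs[0] - xs[-1])

f = math.exp
pts = (0.3, 1.1, 0.7, 1.9)

# symmetry in the points, as proved later in this chapter
vals = [divided_difference(p, f) for p in permutations(pts)]
assert max(vals) - min(vals) < 1e-9

# (5.43): exp is convex, so every second divided difference is >= 0
assert divided_difference((0.3, 1.1, 0.7), f) >= 0

# nearly coincident points approach f''(x)/2!, consistent with (5.41)
x, h = 1.0, 1e-4
approx = divided_difference((x - h, x, x + h), f)
assert abs(approx - math.exp(x) / 2) < 1e-6
```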

Once (5.41) and the mean value theorem for divided differences (Theorem 5.15 below) are proven, this immediately provides a proof that when f is C², then f is convex ⟺ f′′ ≥ 0. The following will be very useful:

Theorem 5.10 (Genocchi–Hermite Formula) Let f ∈ C^{n−1}(a, b). Then for any distinct x1, . . . , xn ∈ (a, b), one has that

[x1, . . . , xn; f] = ∫₀¹ dt1 ∫₀^{t1} dt2 . . . ∫₀^{t_{n−2}} dt_{n−1} f^{(n−1)}( x(tj, xj) )   (5.44)

x(tj, xj) = (1 − t1)x1 + (t1 − t2)x2 + · · · + (t_{n−2} − t_{n−1})x_{n−1} + t_{n−1}xn

Remarks 1. Letting

sj = 1 − t1 if j = 1;   sj = t_{j−1} − tj if j = 2, . . . , n − 1;   sj = t_{n−1} if j = n

we have that sj ≥ 0 and Σ_{j=1}^{n} sj = 1, which are standard coordinates for the (n − 1)-dimensional simplex, △_{n−1}, and

y(sj, xj) ≡ x(tj, xj) = Σ_{j=1}^{n} sj xj   (5.45)

This is a convex combination of the xj's, so it lies in [minj xj, maxj xj], and (5.44) can be written in the symmetric form

[x1, . . . , xn; f] = ∫ . . . ∫_{△_{n−1}} f^{(n−1)}( Σ_{j=1}^{n} sj xj ) ds2 . . . dsn   (5.46)

2. Sometimes x(tj, xj) is written

x(tj, xj) = x1 + (x2 − x1)t1 + · · · + (xn − xn−1)t_{n−1}   (5.47)

Proof We use induction on n. For n = 2, setting x = x1 + t1(x2 − x1), we have

∫₀¹ f′((1 − t1)x1 + t1x2) dt1 = (x2 − x1)^{−1} ∫_{x1}^{x2} f′(x) dx = [x1, x2; f]   (5.48)

Letting

yj = x1 + (x2 − x1)t1 + · · · + (x_{n−2} − x_{n−3})t_{n−3} + (x_{n−2+j} − x_{n−2})t_{n−2};   j = 1, 2

we have that, as in (5.48),


∫₀^{t_{n−2}} f^{(n−1)}( y1 + (xn − x_{n−1})t_{n−1} ) dt_{n−1} = [ f^{(n−2)}(y2) − f^{(n−2)}(y1) ] / (xn − x_{n−1})   (5.49)

Integrating dt1 . . . dt_{n−2} and using the inductive result for n − 1 yields

RHS of (5.44) = ( [x1, . . . , x_{n−2}, xn; f] − [x1, . . . , x_{n−2}, x_{n−1}; f] ) / (xn − x_{n−1}) = [x1, . . . , xn; f]   (5.50)

∎

Corollary 5.11 Let f ∈ C^{n−1}(a, b). Then [x1, . . . , xn; f] has a continuous extension to all (x1, . . . , xn) ∈ ×_{j=1}^{n}(a, b), i.e., to some coincident points. Moreover, (5.41) holds.

Proof The integral in (5.44) makes sense for all such x's and is continuous. If all xj = x0, then x(tj, xj) = x0 for all t, so the integral is f^{(n−1)}(x0) times the volume of the simplex which, by doing the integral (or using symmetry), is 1/(n − 1)!. ∎

Corollary 5.12 [x1, . . . , xn; f] is symmetric in the xj's.

Proof The measure ds2 . . . dsn on △_{n−1} is equal to ∏_{j≠k} dsj for all k, showing that (5.46) is invariant under permutations of the xj's. ∎

There is another tool, due to Frobenius, that leads to a second proof of symmetry as well as an explicit formula for [x1, . . . , xn; f] in terms of the values of f at the xj, or, if some of the xj are coincident, the derivatives of f at the coincident points. This key formula,

[x1, . . . , xn; f] = Σ_{j=1}^{n} f(xj) / ∏_{k≠j}(xj − xk)   (5.51)

can be established inductively and then used to provide another proof of (5.41), but it is easier, following Frobenius, to develop formulae for polynomial f (in fact, f analytic in a neighborhood of (a, b) will do) and then use Proposition 5.4 to prove what we want for general f.

Theorem 5.13 (i) [x1, . . . , xn; f] obeys (5.51) and is a symmetric function of (x1, . . . , xn).
(ii) If f is C^{n−1}, then [x1, . . . , xn; f] has a continuous extension to (a, b)^n given by

[x1, . . . , xn; f] = Σ_{j=1}^{ℓ} [1/(mj − 1)!] (d/dx)^{mj−1} [ f(x) ∏_{k≠j}(x − yk)^{−mk} ] |_{x=yj}   (5.52)


if (x1, . . . , xn) includes y1 m1-times, y2 m2-times, . . . , where y1, . . . , yℓ are distinct and Σ_{j=1}^{ℓ} mj = n. In particular, taking ℓ = 1, (5.41) holds.

Remark If X = (x1, . . . , xn) with yj, mj defined as in (ii) and if m = max(mj), then one only needs that f ∈ C^{m−1} for [x1, . . . , xn; f] to have a continuous extension from non-coincident points to a neighborhood of X ∈ (a, b)^n.

Proof We will exploit the fact that if

hz(x) = 1/(z − x)   (5.53)

then, by a direct computation,

[x1, . . . , xn; hz] = ∏_{j=1}^{n} (z − xj)^{−1}   (5.54)

Thus, if f is analytic in a simply connected neighborhood, N, of (a, b) (and, in particular, if f is a polynomial) and if C is a contour in N that surrounds (a, b) once in a counterclockwise sense, then

f(x) = (1/2πi) ∮_C f(z)/(z − x) dz   (5.55)

so (5.54) implies

[x1, . . . , xn; f] = (1/2πi) ∮_C f(z) / ∏_{j=1}^{n}(z − xj) dz   (5.56)

For such f, (5.51) is immediate by the residue calculus and (5.56). By Proposition 5.4, (5.51) then holds for any continuous f and distinct x1, . . . , xn. By the residue calculus, we also obtain (5.52), and so (5.41), for such f. From (5.51) or (5.56), it is obvious that [x1, . . . , xn; f] is symmetric in the xj's for analytic f's.

Fix f ∈ C^ℓ(a, b). By convoluting with Gaussians that shrink to a point, we can find entire analytic functions {fm}_{m=1}^{∞} so that for k = 0, 1, . . . , ℓ, fm^{(k)} → f^{(k)} uniformly on each (a + ε, b − ε). If f is merely continuous, this implies symmetry in the xj's at non-coincident x's. Moreover, if f ∈ C^{n−1}, by (5.44), we have that (5.52) for fm implies it for f. ∎

Here is an alternate proof of the Daleckiĭ–Krein formula (5.22) that appears in Horn–Johnson and which relies on (5.56).

Second Proof of Theorem 5.6 Suppose first that f is analytic in a simply connected neighborhood of [a, b] so that (5.55) holds. Then


f(A + tC) = (1/2πi) ∮_C f(z) [z − (A + tC)]^{−1} dz   (5.57)

A simple use of the second resolvent formula shows that

(d/dt) [z − (A + tC)]^{−1} |_{t=0} = (z − A)^{−1} C (z − A)^{−1}   (5.58)

Thus, for f obeying (5.55),

(d/dt) f(A + tC)ij |_{t=0} = (1/2πi) ∮_C [ f(z) / ((z − xi)(z − xj)) ] Cij dz   (5.59)
  = f′(xi)Cii if i = j;   [xi, xj; f]Cij if i ≠ j   (5.60)

by (5.56). For general C¹ functions, f, we can use Lemma 5.5 and convolution with Gaussians of small support. ∎

One special case of (5.51) has an interesting consequence.

Theorem 5.14 (Lagrange Interpolation) Let x1, . . . , xn be n distinct points in R and let f be a function defined at the xj's. Then there is a unique polynomial, P, of degree at most n − 1 with

P(xj) = f(xj),   j = 1, . . . , n   (5.61)

Moreover, one has that

P(x) = cx^{n−1} + lower order;   c = [x1, x2, . . . , xn; f]   (5.62)

Remarks 1. We will later (Theorem 22.2) provide another proof that relies on the invertibility of Vandermonde matrices.
2. As we'll explain in the Notes to Chapter 22, this result goes back to Newton, although the explicit formula (5.63) below is due to Lagrange.
3. The formula (5.63) becomes singular as points become coincident, but there is a version called Hermite interpolation that doesn't when f is sufficiently smooth. It is discussed in Chapter 22.

Proof If two polynomials obey (5.61), their difference vanishes at n points and is a polynomial of degree at most n − 1, so identically zero. This proves uniqueness. The explicit polynomial

P(x; x1, . . . , xn; f) = Σ_{j=1}^{n} f(xj) ∏_{k≠j}(x − xk) / ∏_{k≠j}(xj − xk)   (5.63)

is easily seen to obey (5.61), proving existence. Equation (5.62) is immediate from this formula and (5.51). ∎

Theorem 5.10 has another consequence:

Theorem 5.15 (Mean Value Theorem for Divided Differences) Let f be a C^{n−1} function on (a, b) ⊂ R. Let

a < x1 ≤ x2 ≤ · · · ≤ xn < b   (5.64)

Then there exists w ∈ [x1, xn] so that

[x1, . . . , xn; f] = f^{(n−1)}(w) / (n − 1)!   (5.65)

Remark Chapter 22 will have another proof using Theorem 5.14; see Theorem 22.11.

Proof By (5.44),

min_{w∈[x1,xn]} f^{(n−1)}(w)/(n − 1)! ≤ [x1, . . . , xn; f] ≤ max_{w∈[x1,xn]} f^{(n−1)}(w)/(n − 1)!   (5.66)

since the volume of the simplex is 1/(n − 1)!. By continuity of the derivative, there is a point in [x1, xn] where the intermediate value is taken. ∎

As a first application of divided differences, we note the following analog of the Daleckiĭ–Krein formula:

Theorem 5.16 Let A, C be as in Theorem 5.6 and let f ∈ C²(a, b). Then

( d²f(A + λC)/dλ² |_{λ=0} )_{ij} = 2 Σ_k [xi, xk, xj; f] Cik Ckj   (5.67)

Remarks 1. This method can also be used to give an alternate proof of Theorem 5.6 (the Daleckiĭ–Krein formula).
2. There is also a proof of this using the ideas in our second proof of the Daleckiĭ–Krein formula.

Proof By Proposition 5.4, it suffices to handle the case where f is a polynomial, and so the case f(x) = x^m, m = 0, 1, . . . . In that case,

(A + λC)^m = Σ_{ℓ=0}^{m} λ^ℓ Σ_{q1,...,q_{ℓ+1}≥0; q1+···+q_{ℓ+1}=m−ℓ} A^{q1} C A^{q2} C · · · C A^{q_{ℓ+1}}

so

d²(A + λC)^m/dλ² |_{λ=0} = 2 Σ_{q1+q2+q3=m−2} A^{q1} C A^{q2} C A^{q3}

and therefore

( d²[(A + λC)^m]/dλ² |_{λ=0} )_{ij} = 2 Σ_{q1+q2+q3=m−2, k} xi^{q1} Cik xk^{q2} Ckj xj^{q3}

On the other hand,

[x1, x2; x^m] = (x1^m − x2^m)/(x1 − x2) = Σ_{q1+q2=m−1} x1^{q1} x2^{q2}

so, by induction,

[x1, x2, x3; x^m] = Σ_{q1+q2+q3=m−2} x1^{q1} x2^{q2} x3^{q3}

proving (5.67) in case f(x) = x^m. ∎
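Formula (5.67) can also be sanity-checked numerically. A sketch (not from the text; it assumes NumPy and a symmetric second difference) for f(x) = x⁴, using the monomial divided difference computed at the end of the proof:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.array([0.6, 1.1, 1.9])
A = np.diag(x)
C = rng.standard_normal((3, 3))
C = (C + C.T) / 2

m = 4
f = lambda M: np.linalg.matrix_power(M, m)

def dd2(a, b, c):
    # [a, b, c; x^m] = sum over q1+q2+q3 = m-2 of a^q1 b^q2 c^q3
    return sum(a ** q1 * b ** q2 * c ** (m - 2 - q1 - q2)
               for q1 in range(m - 1) for q2 in range(m - 1 - q1))

rhs = np.array([[2 * sum(dd2(x[i], x[k], x[j]) * C[i, k] * C[k, j] for k in range(3))
                 for j in range(3)] for i in range(3)])

eps = 1e-4
second = (f(A + eps * C) - 2 * f(A) + f(A - eps * C)) / eps ** 2  # d^2/dλ^2 at 0
assert np.allclose(second, rhs, atol=1e-4)                        # matches (5.67)
```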

Remark Analogously, if f is C^m, then

(1/m!) ( d^m f(A + λC)/dλ^m |_{λ=0} )_{j1 j_{m+1}} = Σ_{j2,...,jm=1}^{n} C_{j1 j2} . . . C_{jm j_{m+1}} [x_{j1}, . . . , x_{j_{m+1}}; f]   (5.68)

(5.67) suggests that we single out, for x0, x1, . . . , xn ∈ (a, b), the n × n Kraus matrix,

Kn(x0; x1, . . . , xn; f)ij = [x0, xi, xj; f];   1 ≤ i, j ≤ n   (5.69)

(5.67) focuses interest on Kn in cases where x0 ∈ {xj}_{j=1}^{n}. Notice that

Kn(x0; x1, . . . , xn; f) = Ln(x1, . . . , xn; [x0, ·; f])   (5.70)
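The identity (5.70) is a pleasant consistency check to run numerically. The sketch below (not from the text; it assumes NumPy and takes f = log as a concrete smooth function) builds the Kraus matrix (5.69) from divided differences, handling the coincident pair on the diagonal with f′, and compares it to the Loewner matrix of g = [x0, ·; f]:

```python
import numpy as np

f = np.log
fp = lambda x: 1.0 / x                      # f'
x0 = 0.5
xs = np.array([1.0, 1.5, 2.2, 3.0])
n = len(xs)

dd1 = lambda u, v: (f(u) - f(v)) / (u - v)  # [u, v; f]

def K_entry(i, j):
    # [x0, xi, xj; f]; on the diagonal the coincident pair needs f'
    if i != j:
        return (dd1(x0, xs[i]) - dd1(xs[i], xs[j])) / (x0 - xs[j])
    return (fp(xs[i]) - dd1(x0, xs[i])) / (xs[i] - x0)

K = np.array([[K_entry(i, j) for j in range(n)] for i in range(n)])

# Loewner matrix of g = [x0, .; f]
g = lambda x: (f(x) - f(x0)) / (x - x0)
gp = lambda x: (fp(x) * (x - x0) - (f(x) - f(x0))) / (x - x0) ** 2
L = np.array([[(g(xs[i]) - g(xs[j])) / (xs[i] - xs[j]) if i != j else gp(xs[i])
               for j in range(n)] for i in range(n)])

assert np.allclose(K, L)                    # the identity (5.70)
```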

There will also be an analog of Bn. The Hansen–Tomiyama matrix is defined by

Hn(x; f)ij = f^{(i+j)}(x) / (i + j)!;   1 ≤ i, j ≤ n   (5.71)

We now return to studying the Dobsch matrix. We will approximate the Dobsch matrix (5.38) with the matrices

[An(x1, . . . , xn; f)]kℓ = [x1, . . . , xk, x1, . . . , xℓ; f]   (5.72)

since (5.41) implies An(x1, . . . , xn) → Bn(x0) if x1, . . . , xn → x0. Some authors call An the extended Loewner matrix, a name we reserve for another object (see (5.100)). We'll call An the multipoint Loewner matrix. In this regard, it will be useful to know a priori that (5.72) is, for suitable f, strictly positive.

Lemma 5.17 Let dμ be a nontrivial finite measure on R\(a, b) and define f for x ∈ (a, b) by

f(x) = ∫ (y − x)^{−1} dμ(y)   (5.73)

Then An(x1, . . . , xn; f) is strictly positive for each x1, . . . , xn ∈ (a, b).

Remarks 1. A measure is nontrivial if its support is not a finite set of points.
2. Below, we will only need one such f. For b < c < d, we could take

f(x) = ∫_c^d (y − x)^{−1} dy = log( (d − x)/(c − x) )   (5.74)
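For the concrete f0 of (5.74), the strict positivity asserted by Lemma 5.17 is easy to check numerically via the integral representation (5.75) of the proof below. A sketch (not from the text; it assumes NumPy, takes (a, b) = (0, 1), (c, d) = (2, 3), and discretizes dμ = dy by a Riemann sum, which keeps the matrix an exact Gram matrix):

```python
import numpy as np

c, d = 2.0, 3.0                              # f0(x) = log((d - x)/(c - x)) on (0, 1)
xs = np.array([0.1, 0.4, 0.6, 0.9])          # x_1 < ... < x_n in (0, 1)
y = np.linspace(c, d, 20001)
dy = y[1] - y[0]

n = len(xs)
A = np.empty((n, n))
for k in range(n):
    for l in range(n):
        # entry (k, l) integrates the product of the first k and first l factors
        integrand = np.prod([1.0 / (y - xs[j]) for j in range(k + 1)], axis=0) \
                  * np.prod([1.0 / (y - xs[j]) for j in range(l + 1)], axis=0)
        A[k, l] = np.sum(integrand) * dy     # Riemann sum for the dμ-integral

assert np.allclose(A, A.T)
assert np.linalg.eigvalsh(A).min() > 1e-12   # strictly positive, as the lemma asserts
```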

Proof By (5.54),

[An(x1, . . . , xn; f)]kℓ = ∫ ∏_{j=1}^{k}(y − xj)^{−1} ∏_{j=1}^{ℓ}(y − xj)^{−1} dμ(y)   (5.75)

Thus,

Σ_{k,ℓ=1}^{n} Akℓ ζ̄k ζℓ = ∫ |Q(x1, . . . , xn; ζ1, . . . , ζn; y)|² dμ(y)   (5.76)

with

Q(y) = Σ_{ℓ=1}^{n} ζℓ ∏_{j=1}^{ℓ}(y − xj)^{−1}   (5.77)


Now, Q is a rational function of y, so its zeros are a finite set if (ζ1, . . . , ζn) ≠ (0, . . . , 0). Since dμ is nontrivial, ∫|Q|² dμ > 0. ∎

Theorem 5.18 Let f ∈ Mn(a, b) be C¹. Then for any distinct x1, . . . , xn in (a, b), An(x1, . . . , xn; f) of (5.72) is positive. If f is C^{2n−1}, then the matrix Bn(x0; f) is positive for all x0 ∈ (a, b).

Remarks 1. We will shortly prove that if n ≥ 2 and f ∈ Mn, then f is C¹.
2. We will see in Chapter 6 that the last result has a converse, i.e., if Bn(x0; f) is positive for all x0 ∈ (a, b), then f ∈ Mn(a, b). In that chapter, we'll also find a more direct way to go from positive Loewner matrices to positive Dobsch matrices.
3. We'll see in Chapter 8 that the first assertion also has a converse; indeed, for any distinct x1, . . . , xn, one has that Ln(x1, . . . , xn; f) ≥ 0 ⟺ An(x1, . . . , xn; f) ≥ 0. See Theorem 8.6.

Proof The second statement follows from the first by (5.41) and taking xj = x0 + jε with ε ↓ 0.

In the matrix Ln(x1, . . . , xn; f) = [xi, xj; f], subtract row 1 from rows 2, 3, . . . , n, and see

det([xi, xj; f]) = ∏_{i=2}^{n} (xi − x1) det(A^{(1)}_{ij})

where

A^{(1)}_{ij} = [x1, xj; f] if i = 1;   [x1, xi, xj; f] if i ≥ 2

Now subtract row 2 from rows 3, . . . , n and find

det([xi, xj; f]) = ∏_{i=2}^{n} (xi − x1) ∏_{j=3}^{n} (xj − x2) det(A^{(2)}_{ij})

where

A^{(2)}_{ij} = [x1, xj; f] if i = 1;   [x1, x2, xj; f] if i = 2;   [x1, x2, xi, xj; f] if i ≥ 3

Iterating,


det([xi, xj; f]) = ∏_{i<j} (xj − xi) det(A^{(n)}_{ij})

where A^{(n)}_{ij} = [x1, . . . , xi, xj; f]. Doing the same operations on columns shows that det Ln(x1, . . . , xn; f) and det An(x1, . . . , xn; f) differ by a strictly positive factor, and likewise for each main principal ℓ × ℓ block. Now let f0 be a function given by Lemma 5.17 (e.g., (5.74)), so that An(x1, . . . , xn; f0) > 0, and set fθ = (1 − θ)f + θf0. Since Lℓ(fθ) = (1 − θ)Lℓ(f) + θLℓ(f0) ≥ 0, each main principal determinant of An(x1, . . . , xn; fθ) is non-negative; being a polynomial in θ that is strictly positive at θ = 1, it is > 0 for all θ ∈ [0, 1] but a finite number. So by Proposition 5.8, An(x1, . . . , xn; (1 − θ)f + θf0) > 0 for all but finitely many θ in [0, 1]. Taking such a set of θ going to 0, we see that An(x1, . . . , xn; f) ≥ 0. ∎

Similarly, we have the following:

Theorem 5.19 Let f be a C^{2n} function on (a, b). Suppose that for all a < x1 < x2 < · · · < xn < b and x0 ∈ {xj}_{j=1}^{n}, we know that Kn(x0; x1, . . . , xn; f) ≥ 0. Then for all x0 ∈ (a, b), we have that Hn(x0; f) ≥ 0.

Remarks 1. This is important because we'll eventually prove (see Theorem 9.2) that the condition on Kn is equivalent to f being convex on n × n matrices.
2. We'll prove a converse of this in Chapters 6 and 8.

Proof By (5.70) and the proof of the last theorem, we see that An(x1, . . . , xn; [x0, ·; f]) ≥ 0. If we now let all of x1, . . . , xn approach x0, we see that Hn(x0; f) ≥ 0. ∎

Recall [325, Section 6.2] that any locally integrable, measurable function, f, on (a, b) defines a distribution, that is, a function on C0∞(a, b), via

g → ∫_a^b f(x)g(x) dx   (5.80)

and that any distribution T has a distributional derivative dT/dx defined by


(dT/dx)(g) = T(−g′)   (5.81)

Proposition 5.20 Any f ∈ M1(a, b) is a measurable function. If f ∈ Mn(a, b), then f^{(2n−1)}, the (2n − 1)st distributional derivative, is positive.

Remark We say a distribution, T, is positive if and only if g ≥ 0 ⇒ T(g) ≥ 0.

Proof M1(a, b) is the monotone functions. Let

q(c) = sup_{x∈(a,b)} {x | f(x) < c}

Then

f^{−1}((−∞, c]) = (a, q(c)) or (a, q(c)]

so f is measurable.

Let f ∈ Mn(a, b). By Proposition 5.4(b), there exist C∞ functions fm in Mn(a + 1/m, b − 1/m) so that the fm are bounded uniformly on any [c, d] ⊂ (a, b) and fm → f pointwise almost everywhere. It follows that if Tm, T are the distributions for fm and f, then Tm(g) → T(g) for each test function g. By Theorem 5.18, Bn(x; fm) ≥ 0 for all m and all x ∈ (a, b). Since the (n, n) matrix element of B is fm^{(2n−1)}/(2n − 1)!, we see fm^{(2n−1)}(x) ≥ 0, so Tm^{(2n−1)} ≥ 0, so T^{(2n−1)} ≥ 0. ∎

This Proposition, Theorem 5.2, and the mean value theorem for divided differences, (5.65), immediately imply that:

Corollary 5.21 Let f ∈ Mn(a, b). Then for j = 1, . . . , n and any distinct x1, . . . , x2j ∈ (a, b), we have that

[x1, . . . , x2j; f] ≥ 0   (5.82)

Remark The smoothness required for the mean value theorem for divided differences may not hold for j = n, but one can, as usual, convolute f with a smooth positive function and take limits to get the case j = n.

Corollary 5.22 Let f ∈ Mn(−1, 1), n ≥ 2, be an odd function (i.e., f(−x) = −f(x)). Then f is convex on (0, 1) and concave on (−1, 0).

Remark This should be distinguished from the fact that if f ∈ Mn(0, ∞), n ≥ 2, then f is concave (see Corollary 14.5)!

Proof By the same approximation argument used in the Proposition, we can suppose that f is C∞. Then f^{(3)} ≥ 0. Since oddness implies that f′′(0) = 0, we see that ±f′′(x) ≥ 0 for ±x ≥ 0. The convexity and concavity follow. ∎


Lemma 5.23
(a) If T is a distribution on (a, b) with T′ = 0, then T is given by a constant function.
(b) If T is a distribution on (a, b) with T^{(k)} = 0, then T is given by a polynomial.
(c) If T is a distribution on (a, b) so that T^{(k)} is given by a continuous function, then T is given by a continuous function which is classically C^k.
(d) If T is a distribution on (a, b) so that T^{(2)} ≥ 0, then T is given by a continuous function.

Remark The proof of (d) shows that in that case T is given by a convex function.

Proof (a) Pick a function h in C0∞(a, b) with ∫ h(y) dy = 1. For any g ∈ C0∞(a, b), let ℓ0(g) = ∫ g(y) dy. Then ∫ (g − ℓ0(g)h)(y) dy = 0, so g − ℓ0(g)h = q′, where q(x) = ∫_a^x (g − ℓ0(g)h)(y) dy is in C0∞. Thus, T(q′) = 0 implies T(g) = ℓ0(g)T(h), that is, T = T(h)1.

(b) By (a), T^{(k−1)} = c1·1. Thus, (T − c1 x^{k−1}/(k − 1)!)^{(k−1)} = 0, so T^{(k−2)} = c2 + c1 x. Iterating, T is a polynomial of degree k − 1.

(c) If T^{(k)} = f, then with

    g(x) = ∫_a^x dx1 ∫_a^{x1} dx2 · · · ∫_a^{x_{k−1}} f(xk) dxk

we have T^{(k)} − g^{(k)} = 0, so T = g + polynomial. g is C^k, so T is also.

(d) For each [c, d] ⊂ (a, b), pick h in C0∞(a, b) with h ≥ 0 and h = 1 on [c, d]. Then ‖f‖∞ h ± f ≥ 0 for each f ∈ C0∞(a, b) supported in [c, d], so |T^{(2)}(f)| ≤ ‖f‖∞ T^{(2)}(h), and thus T^{(2)} extends to a positive functional on C[c, d], that is, a measure μ on [c, d]. If

    g(x) = ∫_c^x μ([c, y]) dy

then g^{(2)}(f) = T^{(2)}(f) for f ∈ C0∞(c, d), and thus T = g + linear function, that is, T is given by a continuous function on each [c, d]. ∎

Distributional results normally only determine a function for a.e. x, and we want results for all x. The key is:

Lemma 5.24 Let g, f be two functions on (a, b) so that g is monotone (i.e., in M1) and f is continuous. If g(x) = f(x) for a.e. x, then g(x) = f(x) for all x.

Proof Fix x0 ∈ (a, b). By monotonicity, lim_{ε↓0} g(x0 ± ε) = g±(x0) exists and

    g−(x0) ≤ g(x0) ≤ g+(x0)    (5.83)

Since f = g for a.e. x, we can find xn± → x0 with xn+ ↓ x0, xn− ↑ x0, and g(xn±) = f(xn±). Thus, g±(x0) = f(x0), so by (5.83), g(x0) = f(x0). ∎


We can now prove Theorem 5.2, that any f ∈ Mn(a, b) lies in C^{2n−3}.

Proof of Theorem 5.2 Let f ∈ Mn(a, b). By Proposition 5.20, f^{(2n−1)} ≥ 0, that is, the (2n − 1)st distributional derivative is positive. By Lemma 5.23(d), f^{(2n−3)} is given by a continuous function, so by Lemma 5.23(c), f is equal a.e. to a C^{2n−3} function. By Lemma 5.24, f is equal to this C^{2n−3} function for all, not just a.e., x. ∎

Next, we want to present a generalization of the positivity of Loewner determinants that is closer to Loewner's original proof of Theorem 5.1. (We will not need this extension in most of this book; indeed, it will only be used in Loewner's proof (Chapter 25).) We will start by analyzing some rank one perturbations. Let A be a finite self-adjoint matrix, let ϕ ∈ Cn, and let

    B = A + ⟨ϕ, ·⟩ϕ    (5.84)

We will suppose ϕ is cyclic for A, that is, A has simple eigenvalues and ϕ is not orthogonal to any eigenspace. Let

    FA(z) = ⟨ϕ, (A − z)^{−1}ϕ⟩    FB(z) = ⟨ϕ, (B − z)^{−1}ϕ⟩    (5.85)

Since

    (A − z)^{−1} = (B − z)^{−1} + (A − z)^{−1}(B − A)(B − z)^{−1}    (5.86)

we see

    FA(z) = FB(z) + FA(z)FB(z)    (5.87)

or

    FB(z) = FA(z)/(1 + FA(z))    (5.88)

It will be convenient instead to use

    GA(z) = 1 + FA(z)    GB(z) = 1 + FB(z)    (5.89)

so (5.88) says B has eigenvalues at the zeros of GA. Since (∂/∂z)(λ − z)^{−1} = (λ − z)^{−2} > 0, GA is monotone increasing in x on R between its poles, so it has exactly one zero between consecutive poles. Since GA → 1 as x → ±∞, the final zero of GA, that is, the top eigenvalue of B, is above the last pole of GA, so the eigenvalues λ1 < λ2 < · · · < λn of A and μ1 < · · · < μn of B obey

    λ1 < μ1 < λ2 < · · · < λn < μn    (5.90)

related, of course, to (5.29). Suppose η(1) , . . . , η(n) and ψ (1) , . . . , ψ (n) are the orthonormal eigenvectors of A and B, respectively. Then


    GA(z) = 1 + Σ_{j=1}^n |⟨η^{(j)}, ϕ⟩|²/(λj − z)    (5.91)

This shows that one can obtain the eigenvalues of B by forming (5.91) and finding its zeros. But it also gives us a way to solve the inverse problem: given the λ's and μ's obeying (5.90), form

    G(z) = Π_{j=1}^n (μj − z)/(λj − z)    (5.92)

and use it and (5.91) to find ⟨η^{(j)}, ϕ⟩. We have:

Proposition 5.25 Let {λj}_{j=1}^n and {μj}_{j=1}^n obey (5.90). Then there exist n × n self-adjoint matrices A and B obeying

    Aη^{(j)} = λj η^{(j)}    Bψ^{(j)} = μj ψ^{(j)}    (5.93)

for orthonormal families and a vector ϕ so that (5.84) holds and

    ⟨ϕ, η^{(j)}⟩ > 0    ⟨ϕ, ψ^{(j)}⟩ > 0    (5.94)

and

    det(⟨η^{(j)}, ψ^{(k)}⟩) = 1    (5.95)

Proof Let A be given by

    A = diag(λ1, . . . , λn)    (5.96)

and let η^{(j)} be the vector with 1 in the j-th place and zero elsewhere. Form G(z) by (5.92) and note that

    G(z) = 1 + Σ_{j=1}^n αj/(λj − z)    (5.97)

where

    αj = Π_{k=1}^n (μk − λj) / Π_{k≠j} (λk − λj) > 0    (5.98)


The positivity in (5.98) holds since both numerator and denominator have (j − 1) negative factors. Equation (5.97) holds since both sides have the same polar singularities and their difference goes to zero at infinity. Let

    ϕ = Σ_{j=1}^n αj^{1/2} η^{(j)}    (5.99)

Thus, with B given by (5.84), GA(z) = G(z), so the zeros are at the μj, which must be the eigenvalues of B. By (5.88), FB has a pole at each μj, so ϕ is not orthogonal to any eigenspace of B, and thus we can uniquely pick ψ^{(j)} so that (5.93) and (5.94) hold.

It remains to prove (5.95). ⟨η^{(j)}, ψ^{(k)}⟩ is a real orthogonal matrix, so its determinant is +1 or −1. For y ∈ [0, 1], let B(y) = A + y⟨ϕ, ·⟩ϕ and pick an orthonormal eigenbasis ψ^{(k)}(y) for B(y) uniquely by ⟨ϕ, ψ^{(k)}(y)⟩ > 0. By eigenvalue perturbation theory (see Corollary 5.30), ψ^{(k)}(y) is continuous in y, so det(⟨η^{(j)}, ψ^{(k)}(y)⟩) is continuous on [0, 1] with values ±1, so identically 1. ∎
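The construction in the proof is completely explicit and can be checked numerically. The sketch below is our own (the variable names `lam`, `mu`, `alpha` are ours): it builds A and ϕ from interlacing data via (5.98)–(5.99) and verifies that A + ⟨ϕ, ·⟩ϕ has eigenvalues μ1, . . . , μn:

```python
import numpy as np

# Interlacing data lam_1 < mu_1 < lam_2 < ... < lam_n < mu_n, cf. (5.90).
lam = np.array([0.0, 2.0, 5.0])
mu  = np.array([1.0, 3.0, 7.0])
n = len(lam)

# alpha_j from (5.98): prod_k (mu_k - lam_j) / prod_{k != j} (lam_k - lam_j)
alpha = np.array([
    np.prod(mu - lam[j]) / np.prod(np.delete(lam, j) - lam[j])
    for j in range(n)
])

A = np.diag(lam)                    # (5.96)
phi = np.sqrt(alpha)                # (5.99): phi = sum_j alpha_j^{1/2} eta^{(j)}
B = A + np.outer(phi, phi)          # (5.84): B = A + <phi, .> phi

print(alpha)                        # all positive, as (5.98) asserts
print(np.linalg.eigvalsh(B))        # ~ [1.0, 3.0, 7.0], the prescribed mu's
```

Note that Σ_j αj = Σ_j μj − Σ_j λj, consistent with comparing traces of A and B.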


Following Loewner , we define the extended Loewner matrix:

Definition Let f be a real-valued continuous function on (a, b). For 2n distinct points λ1, . . . , λn, μ1, . . . , μn in (a, b), we define the extended Loewner matrix by

    (L^{(e)}_n(λ1, . . . , λn; μ1, . . . , μn; f))_{kℓ} = [λk, μℓ; f]    (5.100)

Theorem 5.26 If a < λ1 < μ1 < λ2 < · · · < μn < b and f ∈ Mn(a, b), then

    det(L^{(e)}_n(λ1, . . . , λn; μ1, . . . , μn; f)) ≥ 0    (5.101)

Remark The proof does not depend on matrix monotonicity on an open interval. With the natural definition of Mn(Ω) for an arbitrary open subset Ω of R (see Theorem 1.7 and Chapter 35), all we need is f ∈ Mn(Ω) for some Ω containing all the λ's and μ's.

Proof Construct A, B, ϕ, η^{(j)}, ψ^{(j)} as in Proposition 5.25. For any function f, note

    ⟨ψ^{(k)}, (f(B) − f(A))η^{(ℓ)}⟩ = ⟨f(B)ψ^{(k)}, η^{(ℓ)}⟩ − ⟨ψ^{(k)}, f(A)η^{(ℓ)}⟩ = (f(μk) − f(λℓ))⟨ψ^{(k)}, η^{(ℓ)}⟩    (5.102)

Take f(x) = x and get

    ⟨ψ^{(k)}, ϕ⟩⟨ϕ, η^{(ℓ)}⟩ = (μk − λℓ)⟨ψ^{(k)}, η^{(ℓ)}⟩    (5.103)


and use (5.103) and (5.102) to obtain

    M_{kℓ} ≡ ⟨ψ^{(k)}, (f(B) − f(A))η^{(ℓ)}⟩ = [μk, λℓ; f]⟨ψ^{(k)}, ϕ⟩⟨ϕ, η^{(ℓ)}⟩    (5.104)

Now, if L_{kℓ} = ⟨η^{(k)}, ψ^{(ℓ)}⟩, then (LM)_{kℓ} = ⟨η^{(k)}, (f(B) − f(A))η^{(ℓ)}⟩, so since det(L) = 1 (by (5.95)), (5.104) becomes

    det(f(B) − f(A)) = det([μk, λℓ; f]) Π_k ⟨ψ^{(k)}, ϕ⟩ Π_ℓ ⟨ϕ, η^{(ℓ)}⟩    (5.105)

Since Π_k ⟨ψ^{(k)}, ϕ⟩ Π_ℓ ⟨ϕ, η^{(ℓ)}⟩ > 0, we see that if f(B) − f(A) ≥ 0, then det(f(B) − f(A)) ≥ 0, and (5.105) implies (5.101). ∎

Remark The proof shows that strict positivity in (5.101) is equivalent to strict positivity of f(B) − f(A) (see (5.105)). In particular, if f, f1 ∈ Mn(a, b) and det(L^{(e)}_n(f1)) > 0, so is det(L^{(e)}_n(f + f1)).

Note that, by taking μk = λk + ε and ε ↓ 0, (5.101) implies (5.35) and thus provides a second (actually, the original) proof that (a) ⇒ (c) in Theorem 5.9.

We want next to translate the matrix monotonicity hypotheses of Theorems 1.7 and 1.8 into results about Loewner matrices.

Theorem 5.27 Let f obey (a) of Theorem 1.8. Then for any x1, . . . , xn obeying

    a < x1 < · · · < xj < b < c < xj+1 < · · · < xn < d    (5.106)

for some j, we have

    Ln(x1, . . . , xn; f) ≥ 0    (5.107)

Proof Given such an (x1, . . . , xn), let A be given by (5.3) and let C ≥ 0. By Corollary 5.29 below, for λ small, A + λC has j eigenvalues in (a, b) and n − j in (c, d). So, by the hypothesis, (d/dλ)f(A + λC) ≥ 0. By (5.35), the Schur product Ln ⊙ C ≥ 0, so taking C = ⟨ϕ, ·⟩ϕ, Ln ≥ 0. ∎

Our final topic in this chapter is to indicate the proofs of the results in eigenvalue perturbation theory used earlier in this chapter.

Theorem 5.28 Let A(λ) be an analytic family of n × n matrices in a neighborhood, N, of λ = 0, symmetric about R, with

    A(λ)* = A(λ̄)    (5.108)


Let x1(0) be an eigenvalue of A(0) for which the corresponding spectral projection P1(0) is one-dimensional. Then there exist a connected open set, N0, symmetric about R, with 0 ∈ N0 ⊂ N, a scalar analytic function, x1(λ), and an operator-valued analytic function, P1(λ), on N0 so that

(i)    P1(λ)|_{λ=0} = P1(0)    x1(λ)|_{λ=0} = x1(0)    (5.109)

(ii)    A(λ)P1(λ) = P1(λ)A(λ) = x1(λ)P1(λ)    (5.110)

(iii)    P1(λ)² = P1(λ)    (5.111)

(iv)    P1(λ)* = P1(λ̄)

Remark (5.111) says P1(λ) is a projection. By (5.109) and continuity, P1 is rank one. By (5.110), the vectors in Ran(P1) are eigenvectors of A(λ).

Proof Pick ε so that {w | |w − x1(0)| ≤ 2ε} contains only one eigenvalue of A(0), namely x1(0). Since {B | det(B) ≠ 0} is open and B ↦ B^{−1} is analytic on this set, there is a disk N1 about λ = 0 so that det(A(λ) − w) ≠ 0 if λ ∈ N1 and ε/2 < |w − x1(0)| < 2ε, and (A(λ) − w)^{−1} is jointly analytic there. Define

    P1(λ) = (1/2πi) ∮_{|w−x1(0)|=ε} dw/(w − A(λ))    (5.112)

    x1(λ) = Tr(A(λ)P1(λ))    (5.113)

By diagonalizing A(λ) when λ is real, one sees that P1(λ) is precisely the projection onto the eigenspaces whose eigenvalues, x, obey |x − x1(0)| < ε. In particular, (5.111) holds for λ real, and so, by analytic continuation, for all λ. Since A(λ)(w − A(λ))^{−1} = (w − A(λ))^{−1}A(λ), we have A(λ)P1(λ) = P1(λ)A(λ), and once we prove that dim(Ran(P1)) = 1, it is clear this equals x1(λ)P1(λ). Since P1(λ) is analytic and Tr(P1(λ)) is an integer, this trace is constant, so 1, since (5.109) obviously holds. (iv) is obvious from (5.112) and (5.108). ∎

Corollary 5.29 Let {A(λ) | 0 ≤ λ ≤ 1} be a family of self-adjoint n × n matrices indexed by [0, 1] so that each A(λ) has simple eigenvalues and so that A(λ) can be analytically continued to a complex neighborhood of [0, 1]. Then there exist n real analytic rank one matrices {Pj(λ)}_{j=1}^n and real analytic functions {xj(λ)}_{j=1}^n on [0, 1] so that

    A(λ) = Σ_{j=1}^n xj(λ)Pj(λ)    (5.114)

    Pj(λ)Pk(λ) = δ_{jk} Pj(λ)    (5.115)


Proof By the theorem, these are analytic locally, but the x's and P's obeying (5.114)/(5.115) are uniquely determined up to order for matrices with simple eigenvalues. ∎

Corollary 5.30 Under the hypotheses of Corollary 5.29, suppose also there is ϕ with

    ⟨ϕ, Pj(λ)ϕ⟩ ≠ 0    (5.116)

for all j and all λ in [0, 1]. Then one can choose real analytic vectors, ηj(λ), with

    ηj(λ) ∈ Ran(Pj(λ))    (5.117)

    ⟨ηj(λ), ηk(λ)⟩ = δ_{jk}    (5.118)

Moreover, if each A(λ) is real and ϕ is real, the ηj(λ) can be chosen real.

Remark It is not hard to show this can be done even if (5.116) does not hold, but it is simpler in that case and is all we need.

Proof Take

    ηj = Pj(λ)ϕ / ⟨ϕ, Pj(λ)ϕ⟩^{1/2}    ∎
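The contour formula (5.112) is also easy to test in a finite dimensional example. The following sketch is our own (the matrix and contour data are made up): discretizing the circle |w − x1(0)| = ε recovers the rank one spectral projection.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([0.0, 1.0, 3.0]) @ Q.T        # self-adjoint, simple eigenvalues

x1, eps, N = 0.0, 0.4, 256                    # contour around the eigenvalue 0
theta = 2 * np.pi * np.arange(N) / N
P = np.zeros((3, 3), dtype=complex)
for th in theta:
    w = x1 + eps * np.exp(1j * th)
    # accumulate (w - A)^{-1} dw with dw = i eps e^{i th} dtheta
    P += np.linalg.inv(w * np.eye(3) - A) * (1j * eps * np.exp(1j * th))
P *= (2 * np.pi / N) / (2j * np.pi)           # (1/2 pi i) times the quadrature weight

v = Q[:, 0]                                   # eigenvector for the eigenvalue 0
print(np.max(np.abs(P.real - np.outer(v, v))))
```

Since the resolvent is analytic on the contour, this periodic quadrature converges very fast, and the printed deviation from the true projection vv* is at roundoff level.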

Notes and Historical Remarks The idea of reducing matrix monotonicity to positivity of certain matrices and determinants is basic to Loewner's original paper  and was extended by his doctoral student Dobsch , who, in particular, introduced the matrix we call B(x0). We have named the matrix after him. We will have more to say about it in Chapter 6. Loewner's proof used what we call the extended Loewner matrix, with L coming as a limit.

The terminology for these matrices is not standard. What we call the Loewner matrix, Donoghue  calls the Pick matrix, based on work of Pick that we will discuss in Chapters 16–20. We will see that L is a degenerate limit of Pick matrices, but it is sufficiently different that I prefer not using that name. Donoghue does call the determinants of our Ln Loewner determinants. Donoghue's "extended Loewner matrix" is our A, very different from what we call the extended Loewner matrix! Anyhow, the reader needs to be aware that there is no standard naming in the literature.

Divided differences (under a different name) for unequal points go back to Newton  (see the discussion in Chapter 22). The name seems to be due to de Morgan . The Genocchi–Hermite formula is named after Genocchi [121–123] and Hermite , who had formulae related to (5.44). The Cauchy integral formula approach (5.56) goes back to Frobenius . The mean value theorem for divided differences appeared first in Schwarz . For more on the history,


see  and see de Boor  for more on the formalism. They are so important to the theory that the second section in Donoghue's book  is entitled "Divided Differences". Our treatment here of divided differences is influenced by de Boor  and Heinävaara .

Bernstein polynomials appeared in Bernstein . As our notation suggests, there is a probabilistic interpretation of the polynomials and of the convergence. Let v1, v2, . . . be independent identically distributed random variables taking only two values 0 and 1. Suppose 1 is taken with probability x and 0 with probability 1 − x, so

    E(vj) = x    (5.119)

Let

    sm = (1/m) Σ_{ℓ=1}^m vℓ    (5.120)

Then

    Prob(sm = j/m) = (m choose j) x^j (1 − x)^{m−j}    (5.121)

and E(f(sm)) = Bm(x), the Bernstein polynomial of order m. That Bm(x) → f(x) is just the law of large numbers. Equation (5.12) is a standard calculation of E((sm − x)²), and the calculation that follows is essentially the Chebyshev inequality proof of the law of large numbers; see [325, Section 7.2].

The Daleckiĭ–Krein formula is due to Ju. L. Daleckiĭ and S. G. Krein . As explained in Daleckiĭ , when S. Krein's more famous brother, M. G. Krein, was told of their result, he immediately remarked that it must be connected with Loewner's theorem! Similar formulae are in Loewner . In , Ju. L. Daleckiĭ and S. G. Krein developed formulae for higher derivatives which include Theorem 5.16 and even a Taylor theorem with remainder. They also stated results for operators on Hilbert space including

    (d/dλ) f(A(λ)) = ∫_a^b ∫_a^b [(f(ν) − f(μ))/(ν − μ)] dEλ(ν) (dA(λ)/dλ) dEλ(μ)    (5.122)

where dEλ(·) is the spectral measure of the self-adjoint operator A(λ). Their result requires strong hypotheses on the regularity of λ → A(λ) and of f, and on making sense of the double spectral integral. [68, 69] are in Russian;  has a summary (without proofs) in English.


(5.122) is the starting point for a long series on double spectral integrals by Birman–Solomyak [39–41] (the last is a summary in English) with many applications: the study of what happens when A(λ) lies in certain trace ideal spaces and applications to Krein spectral shifts. Simon  has a proof of one of their spectral shift results that relies on

    (d/dλ) Tr(f(A(λ))) = Tr(f′(A(λ)) dA(λ)/dλ)    (5.123)

One can simplify his proof by noting that (5.123) follows immediately in the finite dimensional case from the Daleckiĭ–Krein formula, and one can then extend to the trace class by taking limits of the integral of the derivative to get an integral formula for the trace class case that one can differentiate.

Schur products were discussed by Schur , who, in particular, proved Lemma 5.3. They are also called Hadamard products, although Horn's historical notes  seem to indicate this is due to an incorrect offhand attribution of von Neumann propagated by Halmos rather than due to actual work of Hadamard!

The Hansen–Tomiyama matrix was defined by them [138, 139] in 2007. They noted Theorem 5.19 and conjectured that a converse holds, a result proven ten years later by Heinävaara, as we'll discuss in Chapters 6 and 8. As we remarked, the condition on Kraus matrices in Theorem 5.19 was proven by Kraus (in 1936) to be equivalent to convexity on n × n matrices; see Chapter 9. Corollary 5.22 is an observation of Moslehian et al. 

Two standard references for eigenvalue perturbation theory are Kato  and Reed–Simon ; see also [329, Sections 1.4 and 2.3]. The basic papers are Rellich , Nagy , and Kato . In particular, Rellich proved that if (5.108) holds, then eigenvalues and eigenprojections are analytic in a neighborhood of N ∩ R even if there are degeneracies. Analyticity of the eigenvalues (which we actually use once in this chapter) is easier. All solutions of

    det(x − A(λ)) = 0    (5.124)

are real for λ real, and one shows, first, that the solutions of (5.124) are given by convergent series in (λ − λ0)^{1/p} for suitable p, and then that reality of all branches implies analyticity. Analyticity of the eigenprojections is more subtle [178, 281, 329], but not used in this chapter.

In proving Proposition 5.25, we used rank one perturbation theory, which has extra features like (5.88). This theory is associated especially with work of Aronszajn , Krein , and Donoghue . For further discussion of rank one perturbations, see Simon 
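In finite dimensions, the trace formula (5.123) is elementary and can be checked directly; the following is our own sketch (with f = exp, so that f′ = f), comparing a finite difference of Tr f(A(λ)) with Tr(f′(A(0)) A′(0)):

```python
import numpy as np

def f_of_sym(A, f):
    """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
    w, Q = np.linalg.eigh(A)
    return (Q * f(w)) @ Q.T

rng = np.random.default_rng(1)
A0 = rng.standard_normal((4, 4)); A0 = (A0 + A0.T) / 2
A1 = rng.standard_normal((4, 4)); A1 = (A1 + A1.T) / 2   # dA/dlambda

h = 1e-5
lhs = (np.trace(f_of_sym(A0 + h * A1, np.exp))
       - np.trace(f_of_sym(A0 - h * A1, np.exp))) / (2 * h)
rhs = np.trace(f_of_sym(A0, np.exp) @ A1)                # Tr(f'(A0) A1), f' = exp
print(abs(lhs - rhs))                                    # O(h^2)
```

The central difference agrees with the right-hand side of (5.123) to second order in h.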

Chapter 6

Heinävaara’s Integral Formula and the Dobsch–Donoghue Theorem

Recall (see (5.38)) that given a C^{2n−1} function, f, on (a, b), we defined its Dobsch matrix at x0 ∈ (a, b) by

    Bn(x0; f)_{kℓ} = f^{(k+ℓ−1)}(x0)/(k + ℓ − 1)!    1 ≤ k, ℓ ≤ n    (6.1)

We proved (see Theorem 5.18) that if f ∈ Mn(a, b) and f ∈ C^{2n−1}, then Bn(x0; f) is positive (but not necessarily strictly positive) for each x0 ∈ (a, b). We also showed that in general functions in Mn(a, b) are in C^{2n−3} (and in the next chapter, we'll see there are examples of such functions which are not in C^{2n−2}), so we'll need a replacement for pointwise positivity if f is only C^{2n−3}.

Any continuous function, f, on (a, b) defines a distribution, and so we can form distributional derivatives (see Simon [325, Chapters 6 and 9] for background on distributions). We define the matrix Bn(x; f) as the distribution given by (6.1) where f^{(k+ℓ−1)} is a distributional derivative. We say that Bn(·; f) is positive if, for every Cn-valued function, ϕ(x), a C∞ function of compact support in (a, b), we have that

    Σ_{1≤k,ℓ≤n} ∫_a^b f(x) [(−1)^{k+ℓ−1}/(k + ℓ − 1)!] (d^{k+ℓ−1}/dx^{k+ℓ−1})[ϕk(x)ϕℓ(x)] dx ≥ 0    (6.2)

Since any f ∈ Mn(a, b) is a distributional limit of C∞ functions in Mn(a + ε, b − ε) (see Proposition 5.4), we conclude that for any f ∈ Mn(a, b), Bn(·; f) is a positive distribution. One main result of this chapter is a converse to Theorem 5.18, which means that positivity of Bn(·; f) is equivalent to f ∈ Mn(a, b). The second main result is an analog of the next theorem for matrix convex functions. In Chapter 8, we'll provide a simpler proof of these facts (that is not unrelated), so the reader can skip this chapter. We include it for historical reasons and because it is an interesting proof.

© Springer Nature Switzerland AG 2019 B. Simon, Loewner’s Theorem on Monotone Matrix Functions, Grundlehren der mathematischen Wissenschaften 354, https://doi.org/10.1007/978-3-030-22422-6_6


Theorem 6.1 (The Dobsch–Donoghue Theorem) Let f be a C^{2n−1} function on (a, b). Then f ∈ Mn(a, b) if and only if the matrix Bn(x0; f) is positive for all x0 ∈ (a, b). If f is only a priori continuous, then f ∈ Mn(a, b) if and only if the distribution Bn(·; f) is a positive distribution.

Remarks 1. Since we already know one direction, we'll only prove, after many preliminaries, that positivity of Bn implies monotonicity on n × n matrices.
2. In many ways, this theorem belongs to Chapter 5. It is placed here only because we'll also have an analog for Cn which relies on the results given in Chapter 9.
3. Without this theorem, it isn't clear how to construct functions in Mn that aren't already in M∞. In the next chapter, using this theorem, we'll easily construct functions f ∈ Mn(a, b) that aren't in Mn+1(a, b), let alone M∞(a, b).
4. While the distributional result only assumes that f is a priori continuous, we recall that positivity of the distribution Bn(·; f) implies positivity of the distributional derivative, f^{(2n−1)}, and this implies that f is classically C^{2n−3} (see Lemma 5.23).

Before starting the proof, we note a simple corollary:

Theorem 6.2 (Donoghue Localization Theorem) Let a < c < b < d, all in R, and let f be a real-valued function on (a, d). If f ∈ Mn(a, b) and f ∈ Mn(c, d), then f ∈ Mn(a, d).

Remarks 1. For n = ∞, this is a simple consequence of Loewner's theorem since if f has a Herglotz continuation from (a, b) and from (c, d), they agree on (c, b) and so are the same, and thus f has a Herglotz continuation from (a, d).
2. We'll see this as an immediate consequence of Theorem 6.1. Historically though, the first proof of Theorem 6.1 relied on first proving Theorem 6.2; see the Notes.

Proof The hypothesis and Theorem 6.1 imply that Bn(x; f) is positive for x in (a, b) ∪ (c, d) = (a, d). By the other direction in the theorem, f ∈ Mn(a, d).

There is a tiny subtlety we brushed under the rug in the last paragraph. If f is C^{2n−1}, we are dealing with pointwise positivity and the argument is valid. If, however, f is only C^{2n−3}, then Bn(x; f) is only a positive distribution. There are two ways of fixing this. One can use a partition of unity to show that if a distribution is positive on two overlapping intervals, it is positive on their union. Alternatively, one can convolve with a smooth function of small support, shrinking the intervals somewhat so that they still overlap. ∎

We now turn to the proof of Theorem 6.1. Because of Theorem 5.18, we need to show that if Bn(x; f) is a positive distribution on (a, b), then the Loewner matrix Ln(x1, . . . , xn; f) is positive for all a < x1 < x2 < · · · < xn < b. By the usual approximation plus adding a small amount of log, we can suppose that f is C∞ on (a, b) with all derivatives bounded and that Bn is a pointwise strictly positive


matrix on all of (a, b). Let X be shorthand for (x1, . . . , xn) as above and write Ln(x1, . . . , xn; f) ≡ L(X; f). Then we will prove

Theorem 6.3 (Heinävaara's First Integral Formula) Fix n ≥ 2. Let f ∈ C∞(a, b) and X as above. Then there exists a non-negative continuous function, IX(t), on R supported on [x1, xn] and a continuous n × n matrix valued function, C(t, X), so that

    L(X; f) = (2n − 1) ∫_{−∞}^{∞} C*(t, X) Bn(t; f) C(t, X) IX(t) dt    (6.3)

Remarks 1. This is not a mere existence statement. C and I will be explicit, albeit complicated looking, functions (the Notes in Chapter 8 will explain why they have the form that they do). C(t, X) will be a polynomial in t while IX(t) will be a piecewise polynomial function of t with changes of polynomial at the xj. Both functions are f independent.
2. While we write ∫_{−∞}^{∞}, it is really ∫_{x1}^{xn}, so Bn(t; f) is defined there.
3. It is easy to extend this result to C^{2n−1} functions, f.

Proof of Theorem 6.1 Given Theorem 6.3 As already noted, we can suppose that f is C∞ with bounded derivatives and that Bn(t; f) is strictly positive. Since Bn is positive, so is C*BnC. Since IX ≥ 0, the integrand in (6.3) is positive, so L(X; f) is a positive matrix. ∎

We begin the proof of (6.3) by carefully defining C and I. For 1 ≤ k ≤ n, define polynomials gk,X(t, y) by

    gk,X(t, y) = Π_{ℓ≠k} (1 + y(t − xℓ))    (6.4)

For 1 ≤ i, j ≤ n, let Cij be the coefficient of y^{i−1} in gj,X(t, y), i.e.,

    C(t, X)ij = Σ_{{x_{a1}, . . . , x_{a_{i−1}}} ⊂ X \ {xj}} Π_{ℓ=1}^{i−1} (t − x_{aℓ})    (6.5)

C(t, X) is the n × n matrix with matrix elements C(t, X)ij. We will also need

    pX(z) = Π_{ℓ=1}^n (z − xℓ)    (6.6)

We recall the function hz(t) = (z − t)^{−1} of (5.53). C enters because of


Proposition 6.4 For any X, t ∈ R, and z ∈ C \ ({t} ∪ [x1, xn]), we have that

    C*(t, X) Bn(t; hz) C(t, X) = L(X; hz) pX(z)²/(z − t)^{2n}    (6.7)

Proof Use D for the left-hand side of (6.7) and ⟨·, ·⟩r for the bilinear form without a complex conjugate. Since hz^{(k)}(t)/k! = (z − t)^{−k−1}, we have that Bn(t; hz) is a rank one matrix

    Bn(t; hz) = (z − t)^{−2} ⟨v(z, t), ·⟩r v(z, t)    (6.8)

where vj(z, t) = (z − t)^{−(j−1)}, j = 1, . . . , n. Thus, since C has real matrix elements,

    D = (z − t)^{−2} ⟨C*(t, X)v(z, t), ·⟩r C*(t, X)v(z, t)    (6.9)

C was chosen so that

    [C*(t, X)v(z, t)]j = Σ_{i=1}^n Cij vi(z, t) = Σ_{i=1}^n Cij (z − t)^{−(i−1)} = gj,X(t, 1/(z − t))    (6.10)

Thus

    Dij = gi,X(t, 1/(z − t)) gj,X(t, 1/(z − t)) / (z − t)²
        = (z − t)^{−2} Π_{ℓ≠i} (1 + (t − xℓ)/(z − t)) Π_{ℓ≠j} (1 + (t − xℓ)/(z − t))
        = (z − t)^{−2n} Π_{ℓ≠i} (z − xℓ) Π_{ℓ≠j} (z − xℓ)
        = [pX(z)²/(z − t)^{2n}] · 1/((z − xi)(z − xj))
        = [xi, xj; hz] pX(z)²/(z − t)^{2n}    (6.11)

by the formula (5.54) for [xi, xj; hz]. This proves (6.7). ∎
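Since every object in (6.7) is explicit, the identity can be verified numerically. In the sketch below (our own code, with made-up sample points), C(t, X) is assembled from the coefficients of the polynomials gj,X of (6.4):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.5])    # x_1 < x_2 < x_3
n = len(X)
t, z = 1.7, 6.0                  # z outside {t} and [x_1, x_n]

# C(t, X)_{ij} = coeff of y^{i-1} in g_{j,X}(t, y) = prod_{l != j} (1 + y(t - x_l))
C = np.zeros((n, n))
for j in range(n):
    coeffs = np.array([1.0])     # polynomial in y, ascending powers
    for l in range(n):
        if l != j:
            coeffs = np.convolve(coeffs, [1.0, t - X[l]])
    C[:, j] = coeffs

# B_n(t; h_z)_{kl} = h_z^{(k+l-1)}(t)/(k+l-1)! = (z - t)^{-(k+l)}, 1-based k, l
K = np.arange(1, n + 1)
B = (z - t) ** (-(K[:, None] + K[None, :]))

# L(X; h_z)_{ij} = [x_i, x_j; h_z] = 1/((z - x_i)(z - x_j)), cf. (5.54)
L = 1.0 / np.outer(z - X, z - X)

pX = np.prod(z - X)
lhs = C.T @ B @ C
rhs = L * pX**2 / (z - t) ** (2 * n)
print(np.max(np.abs(lhs - rhs)))   # agreement to roundoff
```

Both sides agree entrywise, which is exactly the chain of equalities in (6.11).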

Next, we begin the construction of IX by defining residue functionals, Rj, j = 1, . . . , n, for rational functions, r, on C whose only possible poles are at points in X. We set ρ = (1/2) min_{i≠j} |xi − xj| and let

    Rj(r) = (1/2πi) ∮_{|z−xj|=ρ} r(z) dz    (6.12)

Rj is, of course, the residue of r at xj. When we look at Rj(H) where H is a function of several variables, say H(z, w), we'll use Rj^{(z)} to indicate the variable that is integrated over. Define

    SX(z, t) = −(z − t)^{2n−2}/pX(z)²    (6.13)

Given (6.7), it is clear why this is a natural function. Then define

    Ij,X(t) = Rj^{(z)}(SX(z, t))    (6.14)

where the residue is taken in the z variable with t as a parameter. Thus

    Ij,X(t) = −(d/dz)[(z − t)^{2n−2}/Π_{ℓ≠j}(z − xℓ)²]|_{z=xj}    (6.15)

so we see that Ij,X is a polynomial in t of degree 2n − 2 with Ij,X(xj) = 0, because 2n − 2 ≥ 2 and there is a (z − t)^{2n−3} or (z − t)^{2n−2} in all the terms in the derivative. Define for t ∈ R

    IX(t) = Σ_{{j | xj < t}} Ij,X(t)    (6.16)

(6.35)  

Remark The proof shows that IX is strictly positive on (x1, xn) \ {xj}_{j=1}^n.

Proof of Theorem 6.3 We've already proven that IX(t) is supported on [x1, xn] and is non-negative, so we only need to prove (6.3). Suppose first that f = hz for z ∈ C \ [x1, xn]. Then, by (6.7) and (6.18),

    RHS of (6.3) = (2n − 1) L(X; hz) ∫_{−∞}^{∞} [pX(z)²/(z − t)^{2n}] IX(t) dt    (6.36)
                 = L(X; hz)    (6.37)

proving (6.3) when f = hz. If f is analytic in a neighborhood of (a, b), then we can pick a simple closed curve, γ, surrounding [x1, xn] inside the domain of f and so that the inside of γ is also in the domain of analyticity of f. Then the Cauchy integral formula can be written as

    f(·) = (1/2πi) ∮_γ hz(·) f(z) dz    (6.38)

so we can integrate (6.3) for the special case of all hz to get it for all f analytic in a neighborhood of (a, b). If f ∈ C∞(a, b), we can convolve with Gaussians to get a family of entire functions, fn, so that fn^{(ℓ)} → f^{(ℓ)} for each ℓ uniformly in a neighborhood of [x1, xn], and so prove (6.3) for general C∞ f. ∎

This completes our discussion of the Dobsch–Donoghue theorem. We turn to an analog for matrix convex functions, where we will prove

Theorem 6.8 (Heinävaara's Theorem) Fix n. Let f ∈ C^{2n}(a, b). Then the following are equivalent:

(a) The Kraus matrix, Kn(x0; x1, . . . , xn; f), of (5.69) is positive for all x0, x1, . . . , xn ∈ (a, b).
(b) The Kraus matrix, Kn(x0; x1, . . . , xn; f), of (5.69) is positive for all x0, x1, . . . , xn ∈ (a, b) with x0 ∉ {xj}_{j=1}^n.
(c) The Hansen–Tomiyama matrix, Hn(x; f), of (5.71) is positive for all x ∈ (a, b).

Remark We'll eventually see (Theorem 9.2) that (b) (and therefore all three) are equivalent to f being an n × n matrix convex function.

The proof we give in this chapter of this theorem will rely on an integral representation analogous to (6.3). We will fix x0 and X = {x1, . . . , xn} with xj ∈ (a, b), j = 0, 1, . . . , n. In place of SX(z, t) given by (6.13), we define

    Tx0,X(z, t) = −(z − t)^{2n−1}/((z − x0) pX(z)²)    (6.39)

and instead of Ij,X given by (6.14), we define

    Jj,x0,X(t) = Rj^{(z)}(Tx0,X(z, t))    (6.40)

where now j = 0 is included. And, of course,

    Jx0,X(t) = Σ_{{j | xj < t}} Jj,x0,X(t)    (6.41)

> 0 (otherwise, replace xj by −xj). In (6.30), z^{2n−2}/pX(z)² is replaced by z^{2n−1}/((z − x0)pX(z)²) so that (6.31) becomes

    (1/2πi) ∫_{−i∞}^{i∞} dw/((y0 − w) pY(w)²)    (6.48)

which is positive by the same argument. ∎


Given the above results, one easily gets the following by proving it first when f = hz, then for functions analytic in a neighborhood of [a, b], and then, in general, as we did in the proof of (6.3):

Theorem 6.10 (Heinävaara's Second Integral Formula) Fix n. Let f ∈ C^{2n}(a, b). Then for {x0} ∪ X ⊂ (a, b), we have that:

    K(x0; X; f) = 2n ∫_{−∞}^{∞} C*(t, X) Hn(t; f) C(t, X) Jx0,X(t) dt    (6.49)

where Jx0,X ≥ 0.

Proof of Theorem 6.8 (a)⇒(b) is trivial. (b)⇒(c) is Theorem 5.19. (c)⇒(a) is immediate from (6.49). ∎

Notes and Historical Remarks The Dobsch–Donoghue Theorem is named after Dobsch , who proved the half that f ∈ Mn ⇒ Bn(·; f) ≥ 0 in 1937, and Donoghue, who proved the more subtle half in his book  in 1974. His proof is long, quite involved, and not easy to understand. Donoghue proved Theorems 6.1 and 6.2 in the opposite order from ours. Namely, he used Loewner's interpolation machinery to prove Theorem 6.2. Then he showed that if Bn(x0; f) > 0, there is an interval, Ix0, containing x0 so that f ∈ Mn(Ix0). In this way he could use Theorem 6.2 to prove Theorem 6.1 when Bn is strictly positive (which can be arranged by adding a small amount of a log).

Loewner stated the localization theorem (Theorem 6.2) in his initial paper , saying it was easy to prove, but he gave no details. It took 45 years for the first proof to appear and an additional 35 years for the truly simple proofs of Heinävaara [146, 148] (see also Chapter 8 below).

As we noted in the Notes to Chapter 5, (a)⇒(c) in Theorem 6.8 is due to Hansen–Tomiyama , who conjectured that the other direction held (and so, given results of Kraus we discuss in Chapter 9, the analogs of Theorems 6.1 and 6.2 for matrix convex functions). Heinävaara  proved this conjecture, and this chapter follows that work. Chapter 8 will present his second proof of these results.

Chapter 7

Mn+1 = Mn

In this chapter, we'll present explicit examples that show that Mn+1(a, b) is strictly smaller than Mn(a, b) and that show that 2n − 3 in Theorem 5.2 is optimal.

Example 7.1 In Proposition 24.2, we will prove that for f a Herglotz function, L^{(e)}(f) is strictly positive if and only if its Herglotz representation in the form (3.39) has at least n points in the support of the representing measure. To prove the analog for Dobsch matrices, we need to compute Bn(0; f) for f(z) = −(z − z0)^{−1}. For small z, we can write

    −(z − z0)^{−1} = z0^{−1} (1 − z/z0)^{−1} = Σ_{n=0}^∞ z0^{−1−n} z^n    (7.1)

to see that f^{(n)}(0)/n! = z0^{−1−n}. We conclude that for f(z) = −(z − z0)^{−1} with z0 ∈ R \ {0}, Bn(0; f) is the positive rank one operator

    Bn(0; f)_{kℓ} = v(z0)_k v(z0)_ℓ    v(z0)_k = z0^{−k}    (7.2)

Notice if f(z) = z, then (7.2) holds with v(∞)_k = δ_{k1}. ∎

Proposition 7.2 Let f be a Herglotz function which is real and analytic on (a, b) ⊂ R. Then for x0 ∈ (a, b), Bn(x0; f) is strictly positive if and only if the Herglotz representation, (3.39), has μ with at least n points in its support.

Proof As in the proof of Proposition 24.2, we need to show that if {zj}_{j=1}^n are n distinct points in (R ∪ {∞}) \ {0}, then {v(zj)}_{j=1}^n span Cn. This is equivalent to the n × n matrix with columns {v(zj)}_{j=1}^n having non-zero determinant. If no zj is ∞, this determinant is Π_{j=1}^n zj^{−1} times a non-vanishing n × n Vandermonde determinant. If z1 = ∞, it is Π_{j=2}^n zj^{−2} times a non-vanishing (n − 1) × (n − 1) Vandermonde determinant. ∎
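For a hands-on check (our own sketch, with the helper `dobsch_single_pole` a name we made up), the Dobsch matrix of a single pole −(z − z0)^{−1} at 0 is rank one as in (7.2), while summing n distinct poles, as for the g of Example 7.3 below, gives a strictly positive Bn(0; ·), in line with Proposition 7.2:

```python
import numpy as np

n = 3
K = np.arange(1, n + 1)

def dobsch_single_pole(z0):
    """B_n(0; f)_{kl} = z0^{-(k+l)} = v_k v_l for f(z) = -(z - z0)^{-1}, cf. (7.2)."""
    v = z0 ** (-K)
    return np.outer(v, v)

print(np.linalg.matrix_rank(dobsch_single_pole(-1.0)))        # rank one

# g(z) = -sum_{j=1}^n (z + j)^{-1} has n poles, so B_n(0; g) is strictly positive
Bg = sum(dobsch_single_pole(-float(j)) for j in range(1, n + 1))
print(np.linalg.eigvalsh(Bg).min())                           # strictly positive
```

The strict positivity reflects the linear independence of the vectors v(−1), . . . , v(−n), which is exactly the Vandermonde argument in the proof above.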

© Springer Nature Switzerland AG 2019 B. Simon, Loewner’s Theorem on Monotone Matrix Functions, Grundlehren der mathematischen Wissenschaften 354, https://doi.org/10.1007/978-3-030-22422-6_7


Example 7.3 We will construct f ∈ Mn(a, b) which is not a C^{2n−1} function. In particular, f ∉ Mn+1(a, b). Start with g which has Bn(0; g) strictly positive, for example g(z) = −\sum_{j=1}^{n} (z + j)^{−1}. Define

$$ f_-(x) = \sum_{j=1}^{2n-1} \frac{g^{(j)}(0)}{j!}\, x^j, \qquad f_+(x) = f_-(x) + \frac{x^{2n-1}}{(2n-1)!} \tag{7.3} $$

Since Bn(0; f−) = Bn(0; g), we have that Bn(0; f−) is strictly positive. Since Bn(0; f+) = Bn(0; f−) + [(2n − 1)!]^{−1} δ^{(nn)}, where δ^{(nn)} is the matrix with zeros in all places except for a 1 in the nn place, we see that Bn(0; f+) is strictly positive also. By continuity of Bn(x0; h) in x0 for h which is C^∞, for some ε > 0, Bn(x; f±) is strictly positive for |x| < ε. Define

$$ f(x) = f_\pm(x) \quad\text{if } 0 \le \pm x < \varepsilon \tag{7.4} $$

Then Bn(x; f) is strictly positive for 0 < |x| < ε. Bn(x; f) has a simple jump discontinuity at x = 0, which doesn't impact the positivity of Bn(·; f) as a distribution on (−ε, ε). We conclude from Theorem 6.1 that f ∈ Mn(−ε, ε). Notice that f is C^∞ away from 0 but that lim_{y↓0} [f^{(2n−1)}(y) − f^{(2n−1)}(−y)] = 1, so f is not C^{2n−1} as claimed. If we want (a, b), a given proper open interval of R, instead of (−ε, ε), we can use conformal mapping as in Theorem 2.4. ∎

Example 7.4 We'll construct f ∈ Mn(a, b) with f not C^{2n−2}, showing that the 2n − 3 of Theorem 5.2 is optimal. Pick g as in the last example. Then we can find α > 0 so that if α is added to all the g^{(2n−2)}(0) terms in B, it is still strictly positive. Now define f− as in (7.3) but f+ by

$$ f_+(x) = f_-(x) + \frac{\alpha x^{2n-2}}{(2n-2)!} \tag{7.5} $$

Again, Bn(0; f±) is strictly positive and we can find ε > 0 so that Bn(x; f±) > 0 if |x| < ε. Define f by (7.4). Then Bn(x; f) > 0 if 0 < |x| < ε, but f is not C^{2n−2}. f^{(2n−2)} has a jump discontinuity at 0, so the distributional derivative f^{(2n−1)} has a delta function term which contributes a term to the distribution Bn(·; f), but since α > 0, it doesn't change the positivity. Thus, f ∈ Mn(−ε, ε) but is not C^{2n−2} and so not in Mn+1. By conformal mapping, we can replace (−ε, ε) by any given proper interval (a, b). In particular, there is f ∈ Mn(−1, 1) which is C^{2n−3} but not C^{2n−2}. ∎

Example 7.5 The examples above are not smooth. Moreover, the failure of f to be in Mn+1 is at a single point; one can show that for small ε, f is in both Mn+1(0, ε) and Mn+1(−ε, 0). There are simple examples which are smooth; indeed, there are polynomials for which the failure to lie in Mn+1 is not at a single point! Fix n. Let

7 Mn+1 = Mn

87

$$ f(x) = x + \frac{x^3}{3} + \cdots + \frac{x^{2n-1}}{2n-1} \tag{7.6} $$

By looking at the power series for log(1 + x), one sees that

$$ g(x) \equiv x + \frac{x^3}{3} + \cdots + \frac{x^{2j-1}}{2j-1} + \cdots = \frac{1}{2}\, \log\left(\frac{1+x}{1-x}\right) \tag{7.7} $$

By Theorem 4.1 (or a simple check that it is Herglotz), g ∈ M∞(−1, 1). Since it is not rational, Proposition 7.2 implies that Bn(0; g) is strictly positive. Since f and g have the same derivatives at x = 0 up to order 2n − 1, Bn(0; f) is also strictly positive. The lower 3 × 3 block of Bn+1(x; f) is

$$ \begin{pmatrix} a(x) & b(x) & (2n-1)^{-1} \\ b(x) & (2n-1)^{-1} & 0 \\ (2n-1)^{-1} & 0 & 0 \end{pmatrix} $$

where a(x) and b(x) are irrelevant since the zero elements imply that the determinant of this matrix is −(2n − 1)^{−3}, coming from the main anti-diagonal. Thus we conclude that Bn+1(x; f) is not positive for any x. It follows by continuity of Bn that f ∈ Mn(−ε, ε) for some small ε but that f ∉ Mn+1(a, b) for any a < b. ∎

Example 7.6 We will construct f ∈ Mn(a, b) so that for some c ∈ (a, b), f ↾ (c, b) is a rational function of degree n − 1 (so for x ∈ (c, b), det(Bn(x; f)) = 0) but f ↾ (a, c) has Bn strictly positive. This shows that Bn can fail to be strictly positive on part of the domain of definition of f without that holding everywhere. Let g be a rational Herglotz function of degree n − 1 on (−1, ∞), e.g., g(z) = −1 − \sum_{j=1}^{n-1} (z + j)^{−1}. Let f+(z) = g(z) and

$$ f_-(z) = \sum_{j=0}^{2n-1} \frac{g^{(j)}(0)}{j!}\, z^j + \frac{z^{2n-1}}{(2n-1)!} $$

and f(z) = f±(z) for ±z ≥ 0. As above, for some ε > 0, f ∈ Mn(−ε, ∞). f ↾ (0, ∞) is rational Herglotz and f ↾ (−ε, 0) has Bn > 0. ∎

Notes and Historical Remarks The idea behind Example 7.3 is given in Donoghue's book [81, pp. 83–84] but without being very specific and without an explicit example. He refers to adding a singular monotone function to f^{(2n−2)} without saying what to choose (in our example, we make the simplest choice, adding xχ_{(0,∞)}(x) to f^{(2n−2)}). Also, his statement that Mn+1 ≠ Mn is not so explicit (he says "Unfortunately, we have had to invoke the theorem that Mn is a local property without having given its proof. In the absence of that theorem, we would not yet


be sure that the classes Mn(a, b) were all distinct as n increases"; his proof of the local theorem comes only later in the book). Because Donoghue's discussion was so murky, Hansen–Ji–Tomiyama wrote a paper claiming the first explicit examples with Mn+1 ≠ Mn. They have the lovely example in Example 7.5. For further discussion of this and related examples, see Hansen–Tomiyama, Osaka–Silvestrov–Tomiyama, and Osaka. In Example 7.5, we saw that Bn+1(x; f) was not positive for any x because there was a principal submatrix which was a Hankel matrix whose determinant was negative since it came from a single anti-diagonal product of strictly positive numbers. As noted there, the same argument implies that if f is a polynomial with deg(f) ≤ 2n − 2, then f ∉ Mn(a, b) for all a < b.
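This single-anti-diagonal determinant argument is easy to check numerically. A sketch for n = 3, so f(x) = x + x³/3 + x⁵/5 from Example 7.5 (helper names are ours): Bn(0; f) is strictly positive while Bn+1(0; f) has a negative eigenvalue:

```python
import numpy as np

n = 3
# Taylor coefficients of f(x) = x + x^3/3 + ... + x^{2n-1}/(2n-1):
# c_m = 1/m for odd m <= 2n-1, and 0 otherwise (f is a polynomial)
c = [0.0] + [1.0 / m if (m % 2 == 1 and m <= 2 * n - 1) else 0.0
             for m in range(1, 2 * n + 2)]

def B(N):
    # Dobsch matrix B_N(0; f): entries c_{j+k-1},  j,k = 1..N
    return np.array([[c[j + k - 1] for k in range(1, N + 1)]
                     for j in range(1, N + 1)])

print(np.linalg.eigvalsh(B(n)).min() > 0)      # B_n(0; f) strictly positive
print(np.linalg.eigvalsh(B(n + 1)).min() < 0)  # B_{n+1}(0; f) not positive
```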

Chapter 8

Heinävaara’s Second Proof of the Dobsch–Donoghue Theorem

In this chapter, we provide a really short, simple, and elegant proof due to Heinävaara of Theorem 6.1. It will be a direct consequence of the mean value theorem for divided differences. Below, P_n will denote the real polynomials of degree at most n, a real vector space of dimension n + 1.

Proposition 8.1 Fix k. Let f be a C^k function on (a, b) ⊂ R.
(a) If f^{(k)}(x) ≥ 0 for all x ∈ (a, b), then [y_1, …, y_{k+1}; f] ≥ 0 for any choice of k + 1 y's in (a, b).
(b) If, for every x ∈ (a, b), there exist {y_j^{(m)}}_{j=1, m=1}^{k+1, ∞} so that, as m → ∞, we have that y_j^{(m)} → x and [y_1^{(m)}, …, y_{k+1}^{(m)}; f] ≥ 0, then f^{(k)}(x) ≥ 0 for all x ∈ (a, b).

Remarks 1. If g^{(k)}(x) ≥ 0 for all x, then g is called a k-tone function.
2. The y's need not be distinct.

Proof (a) is an immediate consequence of Theorem 5.15, the mean value theorem for divided differences. (b) follows from continuity of [y_1, …, y_{k+1}; f] in its arguments, even at coincidence points, and (5.41). ∎

Proposition 8.2 Let f be C^{2n−1} near x_0. Let B_n(x_0; f) be the Dobsch matrix at x_0. Then B_n(x_0; f) is a positive matrix if and only if for every q ∈ P_{n−1}, we have that

$$ (q^2 f)^{(2n-1)}(x_0) \ge 0 $$

Proof By translation covariance, we can suppose that x_0 = 0. By Leibniz's rule for higher derivatives of products


$$ (q^2 f)^{(2n-1)}(0) = (2n-1)! \sum_{i,j\,:\, i+j \le 2n-1} \frac{q^{(i)}(0)}{i!}\, \frac{f^{(2n-1-i-j)}(0)}{(2n-1-i-j)!}\, \frac{q^{(j)}(0)}{j!} \tag{8.1} $$

$$ = (2n-1)! \sum_{i,j=0}^{n-1} \frac{q^{(i)}(0)}{i!}\, \frac{f^{(2n-1-i-j)}(0)}{(2n-1-i-j)!}\, \frac{q^{(j)}(0)}{j!} \tag{8.2} $$

$$ = (2n-1)! \sum_{k,\ell=1}^{n} \frac{q^{(n-k)}(0)}{(n-k)!}\, \frac{f^{(k+\ell-1)}(0)}{(k+\ell-1)!}\, \frac{q^{(n-\ell)}(0)}{(n-\ell)!} \tag{8.3} $$

$$ = (2n-1)! \sum_{k,\ell=1}^{n} c_k c_\ell\, (B_n(0; f))_{k\ell} \tag{8.4} $$

where q(x) = \sum_{k=0}^{n-1} c_{n-k}\, x^k. To get (8.2), we note that q^{(i)} = 0 if i ≥ n and that if i, j ≤ n − 1, then i + j ≤ 2n − 1. Equation (8.3) uses k = n − i, ℓ = n − j. Since q runs through all of P_{n−1} as (c_1, …, c_n) runs through R^n, we get the result. ∎

Proposition 8.3 Let f be C^1 on (a, b) and let a < x_1 < ⋯ < x_n < b be n distinct points. Then the Loewner matrix, L_n(x_1, …, x_n; f), is positive if and only if for every q ∈ P_{n−1}, we have that

$$ [x_1, x_1, x_2, x_2, \ldots, x_n, x_n; q^2 f] \ge 0 \tag{8.5} $$
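Before turning to the proof of Proposition 8.3, we note that the Leibniz computation (8.1)–(8.4) behind Proposition 8.2 is easy to confirm symbolically. A sketch for n = 2 (sympy assumed; f = exp is an arbitrary smooth test function, not from the text):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
n = 2
f = sp.exp(x)                 # arbitrary smooth test function
q = c2 + c1 * x               # q(x) = sum_{k=0}^{n-1} c_{n-k} x^k

# Left side of (8.1): (q^2 f)^{(2n-1)}(0)
lhs = sp.diff(q**2 * f, x, 2 * n - 1).subs(x, 0)

# Right side of (8.4): (2n-1)! * sum_{k,l} c_k c_l B_n(0; f)_{kl}
B = sp.Matrix(n, n, lambda j, k: sp.diff(f, x, j + k + 1).subs(x, 0)
              / sp.factorial(j + k + 1))
c = sp.Matrix([c1, c2])
rhs = sp.factorial(2 * n - 1) * (c.T * B * c)[0]

print(sp.simplify(lhs - rhs))   # 0
```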

Proof As usual, by adding a function with strictly positive Loewner matrix if need be, we can approximate f with analytic functions and suppose that f is analytic in a simply connected neighborhood of [x_1, x_n]. Let C be a contour that surrounds [x_1, x_n]. Then, by (5.56),

$$ \sum_{i,j=1}^{n} c_i c_j\, [x_i, x_j; f] = \frac{1}{2\pi i} \oint_C \sum_{i,j=1}^{n} \frac{c_i c_j\, f(z)\, dz}{(z - x_i)(z - x_j)} \tag{8.6} $$

$$ = \frac{1}{2\pi i} \oint_C \left( \sum_{j=1}^{n} \frac{c_j}{z - x_j} \right)^{\!2} f(z)\, dz \tag{8.7} $$

$$ = \frac{1}{2\pi i} \oint_C \frac{q(z)^2\, f(z)\, dz}{(z - x_1)^2 \cdots (z - x_n)^2} \tag{8.8} $$

$$ = [x_1, x_1, x_2, x_2, \ldots, x_n, x_n; q^2 f] \tag{8.9} $$

by (5.56) again. Here

$$ q(z) = \sum_{i=1}^{n} c_i \prod_{j \ne i} (z - x_j) \tag{8.10} $$


Such a q obeys q(x_i) = c_i \prod_{j \ne i}(x_i − x_j), so q ≡ 0 ⇒ c_i = 0 for all i. Thus the n functions {\prod_{j \ne i}(z − x_j)}_{i=1}^n are linearly independent and so a basis for P_{n−1}. Thus, as (c_1, …, c_n) runs through all of R^n, q runs through all of P_{n−1}. Equation (8.9) implies the result. ∎

Proof of Theorem 6.1 By Proposition 8.3, f ∈ Mn(a, b) if and only if for all a < x_1 < ⋯ < x_n < b and all q ∈ P_{n−1}, we have [x_1, x_1, …, x_n, x_n; q²f] ≥ 0. By Proposition 8.1, this holds if and only if (q²f)^{(2n−1)}(x) ≥ 0 for all x ∈ (a, b) and all q ∈ P_{n−1}, which, by Proposition 8.2, holds if and only if Bn(x; f) ≥ 0 for all x ∈ (a, b). ∎

Part II Proofs of the Hard Direction

The hard direction of Loewner's theorem asserts that every non-negative f ∈ M∞(0, ∞) has an integral representation of the form (p2.1), with a non-negative constant and some positive measure dν on [0, 1]. Notice, despite the naive fact that it seems as if the integrand vanishes at x = 0, in fact, one has that lim_{x↓0} f(x) = ν({0}). That we only need this for non-negative f's follows by noting that if g ∈ M∞(0, ∞), then for a > 0, f(x) = g(x + a) − g(a) is a non-negative function in M∞(0, ∞), so (p2.1) implies the necessary analyticity for g (by then taking a ↓ 0). The remark following Lemma 2.3 shows that the analyticity result for M∞(−1, 1) or M∞(0, ∞) implies it for all (a, b). Here are the proofs that we will present:

(1) Degenerate Pick's Theorem Proof (New Here) This proof is in Chapter 20 with background on Pick's theorem in Chapters 16–18 and 24. Pick's theorem studies the question of when there is a Herglotz function, f, on C+ with f(z_j) = w_j for specified points {z_j, w_j}_{j=1}^n in C+ and states that this is true exactly when certain matrices are positive. The hard part of Loewner's theorem is a kind of degenerate version of the hard part of Pick's theorem where the z's and w's are moved into R. This proof moves them slightly into C+ and takes limits. There is a proof of Pick's theorem using Hilbert space methods due to Korányi and Sz.-Nagy (exposed in Chapter 17) and if you combine that proof with this proof of Loewner's theorem, you get a proof very close to Korányi's


proof so, in one sense, this is a variant of Korányi's proof (see (4) below). But there are other proofs of Pick's theorem than the one of Korányi and Sz.-Nagy, so this really is a distinct proof. We note that Pick was Loewner's thesis advisor, so Loewner certainly knew this connection and may have been motivated by analogy in finding his theorem.

(2) Rational Interpolation Proof (Loewner) This proof, the original one, is presented in Chapter 25 with background on rational interpolation in Chapters 21–24.

(3) Moment Problem Proof (Bendat–Sherman) This proof is presented in Chapter 26. If f has a representation of the form (1.13), then f is C^∞ and

$$ f^{(n+1)}(0) = (n+1)! \int_{-1}^{1} \lambda^n\, d\nu(-\lambda) \tag{p2.2} $$

so one can hope to find dν by solving the moment problem. Once one has done that, one knows that the right side of (1.13) is just \sum_{n=0}^{\infty} [f^{(n+1)}(0)/(n+1)!]\, x^{n+1}. So, we need to know that f is analytic in D to complete the proof. Here we are saved by a variant of a remarkable theorem of Bernstein which says that if f on (−1, 1) has (−1)^n f^{(n)}(x) ≥ 0 for all x ∈ (−1, 1), then f has an analytic continuation to D! As we've seen (see Theorem 5.2), any f ∈ M∞(−1, 1) is C^∞ and (see Theorem 5.18 and the fact that, by (5.38), f^{(2n−1)}(x) is a diagonal matrix element of a positive matrix) f^{(2n−1)}(x) ≥ 0. This is enough to use the Bernstein theorem ideas to get analyticity of f ∈ M∞(−1, 1) on D. Positivity of the Dobsch matrix also provides conditions that guarantee that the moment problem is solvable. Bendat–Sherman used the Hamburger moment problem (seeking measures on (−∞, ∞)), while here we use the simpler-to-control Hausdorff moment problem (seeking measures on (−1, 1)). This chapter is self-contained in the sense that we provide proofs of the solubility of the Hausdorff moment problem and of the improved Bernstein theorem that we need.

(4) Hilbert Space Proof (Korányi) This proof is presented in Chapter 27. Since the Loewner matrix is positive, we can use it to define an inner product, ⟨·, ·⟩_L. One then constructs a vector φ and a self-adjoint operator, A, so that

$$ \langle \varphi, (A - x)^{-1} \varphi \rangle_L = f(x) \tag{p2.3} $$

and obtains the measure for the Loewner representation as a spectral measure for operator A and vector φ. Korányi demanded (p2.3) for all x ∈ (−1, 1) and required the theory of self-adjoint extensions and the spectral theorem for unbounded operators. Instead, our presentation constructs n×n matrices, point measures, and takes limits as n → ∞. This variant of his proof avoids some technical issues with unbounded operators. (5) Krein–Milman Proof (Hansen–Pedersen) This proof is presented in Chapter 28. Many of the classical integral representation theorems of analysis can


be viewed as special cases of a strong form of the Krein–Milman theorem that holds when the set of extreme points is closed; see Choquet or Simon. This is true of Loewner's theorem also—one needs to set up a topology in which M∞(a, b) (or some convenient normalized subset) is compact and then identify the extreme points. We provide a variant of the original Hansen–Pedersen proof that borrows from improvements of Boutet de Monvel, Bhatia, and, most importantly, a later paper of Hansen (see the Notes to Chapter 28). One disadvantage of this proof is that analyticity of matrix monotone functions is unexplained—the extreme points are uniformly analytic, so their integrals are. But analyticity is something that one observes "just happens to hold" for the extreme points.

(6) Hahn–Banach Proof (Sparr) This proof is presented in Chapter 29. Sparr's idea is to define a particular class of continuous functions on [−∞, 1] and use a given f ∈ M∞(0, ∞) to define a positive functional on such functions. He then appeals to a variant of M. Riesz' version of the Hahn–Banach theorem (see [325, Theorem 5.5.8]) to extend to a positive functional on all continuous functions and thereby get a measure which provides the required integral representation. In fact, we prove that Sparr's set of functions is dense, so positivity allows extension without appealing to the Hahn–Banach theorem. Therefore, it might be better to call this the Riesz–Markov proof, except that the Riesz–Markov theorem is behind many of the other proofs.

(7) Interpolation Theory Proof (Ameur) This proof is presented in Chapter 30. It relies on the fact that matrix monotone functions are also interpolation functions. The careful construction of certain operators which obey the hypotheses that an interpolation function must satisfy allows the construction of a positive measure.

(8) Continued Fraction Proof (Wigner–von Neumann) This proof is presented in Chapter 31.
This proof uses a key lemma that if f is a C^1 function on (−1, 1) with f(0) = 0 and if

$$ g(x) = -\frac{1}{f(x)} + \frac{1}{f'(0)\, x} \tag{p2.4} $$

then f ∈ Mn(−1, 1) ⇒ g ∈ Mn−1(−1, 1). They prove this lemma by looking at the Loewner matrices for f and g. For f ∈ M∞(−1, 1), one can iterate and get a continued fraction expansion. They then use a variant of Markov's method for convergence of continued fractions to get convergence of the truncated fractions to a function obeying the required integral representation.

(9) Multipoint Continued Fraction Proof (New Here) This proof is presented in Chapter 32. It is a small variant of the Wigner–von Neumann proof. The truncated continued fraction for that proof has more and more derivatives at 0 that agree with those of f, and one needs some estimates that, while not hard, are also not trivial. This proof does the procedure at more and more dyadic rationals and so gets functions with an integral representation that agree with f


at more and more points, so weak convergence and an integral representation for the limit are easy.

(10) Hardy Space Proof (Rosenblum–Rovnyak) This proof is presented in Chapter 33. It depends on the fact that operators on the Hardy space H²(D) that commute with multiplication by z are multiplications by H^∞(D) functions. That result is proven in Chapter 19.

(11) Mellin Transform Proof (Boutet de Monvel; Previously Unpublished) This proof is presented in Chapter 34. It depends on writing one version of the Herglotz representation theorem on (0, ∞) as a convolution, on the multiplicative group of positive reals, of a measure with the function x/(1 + x). The Mellin transform, which is the Fourier transform on this group, lets one construct a measure using Bochner's theorem. The proof relies on the infinitesimal version of when a function is an interpolation function.

The proofs (5), (6), (7), and (11) do not explicitly rely on positivity of Loewner matrices for all n, but the other seven proofs do (the proof (5) uses the fact that if f is matrix monotone and nonconstant, then f'(t) > 0 for all t; while there may be other proofs of this, we prove it by using positivity of 2 × 2 Loewner matrices). Chapter 35 discusses versions of Loewner's theorem when (a, b) is replaced by a general open set. In particular, it proves the perhaps surprising result that if a < b < c < d and f ∈ M∞((a, b) ∪ (c, d)), then f has an analytic continuation to all of (a, d), and so f is the restriction of a function in M∞(a, d).
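The key lemma (p2.4) of the Wigner–von Neumann proof lends itself to a quick numerical sanity check. A sketch under our own assumptions (the test function is ours, not from the text): for f(x) = 2x/(2 − x) + x, which is Herglotz and real-analytic on (−1, 1) with f(0) = 0 and f′(0) = 2, the transform (p2.4) works out in closed form to g(x) = 1/(2(4 − x)), and both functions have positive semi-definite Loewner matrices:

```python
import numpy as np

def loewner(xs, h, dh):
    # Loewner matrix: difference quotients off the diagonal, h'(x_i) on it
    n = len(xs)
    L = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            L[i, j] = dh(xs[i]) if i == j else (h(xs[i]) - h(xs[j])) / (xs[i] - xs[j])
    return L

f  = lambda x: 2 * x / (2 - x) + x       # Herglotz on (-1,1), f(0)=0, f'(0)=2
df = lambda x: 4 / (2 - x) ** 2 + 1
g  = lambda x: 1 / (2 * (4 - x))         # what (p2.4) gives for this f
dg = lambda x: 1 / (2 * (4 - x) ** 2)

# (p2.4): g(x) = -1/f(x) + 1/(f'(0) x) -- check the closed form at a point
x0 = 0.3
print(abs(g(x0) - (-1 / f(x0) + 1 / (df(0) * x0))) < 1e-12)   # True

xs = np.linspace(-0.8, 0.8, 5)
print(np.linalg.eigvalsh(loewner(xs, f, df)).min() >= -1e-9)  # True
print(np.linalg.eigvalsh(loewner(xs, g, dg)).min() >= -1e-9)  # True
```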

Chapter 16

Pick Interpolation, I: The Basics

In 1916, Pick asked and answered the following question: Given indexed collections {zj }j ∈I in D and {wj }j ∈I in {w | Re w > 0} with the zj distinct, when does there exist an analytic function F on D with Re F > 0 so that for all j ∈ I , F (zj ) = wj

(16.1)

This is the first of five chapters on Pick’s theorems. In this chapter, we will first state the theorems in three versions: for Carathéodory functions on D, for Schur functions on D, and for Herglotz functions on C+ . We prove the conditions are necessary and that the three theorems are “equivalent.” Finally, we will reduce to the case where I is finite using a compactness result. In the next three chapters, we will provide the proofs of the hard part of the theorem: one using the spectral theorem for finite Hermitian matrices, the second using continued fractions, and the third using Hardy spaces. The fifth chapter deduces the hard part of Loewner’s theorem as a limit of Pick’s theorem! This is something of an aside on our main theme, but it is a relevant aside. That is not clear now but will be shortly. In Chapter 3, we used the same symbol f for Herglotz and Carathéodory functions. In this chapter, for clarity as we jump back and forth, we will use G for a Herglotz function,

G : C+ → C+

(16.2)

F for a function,

F : D → {z | Re z > 0}

(16.3)

f for a function,

f:D→D

(16.4)

F's of (16.3) are "essentially" Carathéodory functions; the latter are normalized by F(0) = 1. We will define Schur functions as analytic maps of D to its closure, that is, analytic with


$$ \sup_{z \in D} |f(z)| \le 1 \tag{16.5} $$

the unit ball in H^∞. We will call the functions in (16.4) strong Schur functions. The Schur functions that are not strong are precisely of the form

$$ f(z) = e^{i\theta} \in \partial D \tag{16.6} $$

for a fixed θ and all z, that is, constants of magnitude 1. We will see that the Schur functions are a natural compactification of (16.4). We will look at the Pick problem for (16.2) first. By (3.33) and (3.34), for z, ζ ∈ C+,

$$ \frac{G(z) - \overline{G(\zeta)}}{z - \bar\zeta} = A + \int \frac{1 + x^2}{(x - z)(x - \bar\zeta)}\, d\mu(x) \tag{16.7} $$

Thus, if

$$ G(z_j) = w_j \tag{16.8} $$

we have for each finite F ⊂ I,

$$ \sum_{j,k \in F} \bar\alpha_j \alpha_k\, \frac{w_j - \bar w_k}{z_j - \bar z_k} = A \Bigl|\sum_{j \in F} \alpha_j\Bigr|^2 + \int (1 + x^2) \Bigl|\sum_{j \in F} \frac{\alpha_j}{x - z_j}\Bigr|^2 d\mu(x) \tag{16.9} $$

We thus have half of

Theorem 16.1 (Pick's Theorem for Herglotz Functions) Let {z_j}_{j∈I}, {w_j}_{j∈I} be indexed sets in C+ with the z_j distinct. Then (16.8) holds for some Herglotz function, G, if and only if the Pick matrix

$$ M_{jk} = \frac{w_j - \bar w_k}{z_j - \bar z_k} \tag{16.10} $$

is positive, that is, \sum_{j,k} \bar\alpha_j \alpha_k M_{jk} ≥ 0 for any {α_j}_{j∈I} which is non-zero only for finitely many j.

We have proven that positivity of (16.10) is necessary for (16.8) to have a solution. Equation (16.9) also lets us answer when M is strictly positive for I finite.

Theorem 16.2 If G obeys (3.33) and (16.8) for {z_i}_{i=1}^n, if M is given by (16.10), and if ker(M) ≠ 0, then either A = 0 and supp(dμ) has at most (n − 1) points or else A ≠ 0 and supp(dμ) has at most (n − 2) points. Conversely, if either A = 0 and supp(dμ) has at most (n − 1) points or else A ≠ 0 and supp(dμ) has at most (n − 2) points, then ker(M) ≠ 0.


Remarks 1. If we think of A ≠ 0 as meaning μ({∞}) ≠ 0 (as discussed in Chapter 3), then this says M is strictly positive if and only if supp(dμ) (in R ∪ {∞}) has at least n points.
2. One can refine this theorem (by using our proof below) to show that dim(ker(M)) = j > 0 if and only if the number of points in supp(dμ) (in R ∪ {∞}) is exactly n − j.

Proof One can write

$$ \sum_{j=1}^{n} \frac{\alpha_j}{x - z_j} = \frac{Q(x)}{\prod_j (x - z_j)} \tag{16.11} $$

where

$$ Q(x) = \Bigl(\sum_{j=1}^{n} \alpha_j\Bigr)\, x^{n-1} + \text{lower order} \tag{16.12} $$

Moreover, if Q ≡ 0, each pole on the left of (16.11) has zero residue, so α_j = 0 for all j. That is, (α_1, …, α_n) ≠ (0, …, 0) implies Q has at most n − 1 zeros, and if \sum_{j=1}^{n} α_j = 0, at most n − 2 zeros. This plus (16.9), which shows that \sum \bar\alpha_j \alpha_k M_{jk} = 0 implies that dμ is supported on the zeros of Q, yields the first sentence in the theorem.

For the converse, note that if dμ is supported on {x_k}_{k=1}^ℓ, then (RHS of (16.9)) = 0 ⇔ Q(x_k) = 0 for all x_k, and if A ≠ 0, also \sum_{j=1}^{n} α_j = 0, which is ℓ (or, if A ≠ 0, ℓ + 1) linear conditions on α. If A = 0, the kernel of this map is of dimension at least n − ℓ (and if A ≠ 0, n − (ℓ + 1)). This proves the second, converse part of the theorem. ∎

The "hard part" of Theorem 16.1 is to show that M ≥ 0 is sufficient. The connection to Loewner's theorem is now clear. If x_1, …, x_n ∈ (0, 1) are distinct, and f is a possible matrix monotone function, and

$$ z_j = x_j + i\varepsilon \quad\text{and}\quad w_j = f(x_j) + i\varepsilon f'(x_j) \tag{16.13} $$

then the limit in (16.10) as ε ↓ 0 is L_n(x_1, …, x_n; f)! So the Loewner matrix is the limit of Pick matrices. To pass from C+ to D, it is natural to use the map from D to C+,

$$ T(z) = i\, \frac{1+z}{1-z} \tag{16.14} $$

Notice that

$$ T(z) - \overline{T(\zeta)} = \frac{2i\,(1 - z\bar\zeta)}{(1-z)(1-\bar\zeta)} \tag{16.15} $$

Since M_{jk} is a positive matrix if and only if N_{jk} ≡ ā_j M_{jk} a_k is a positive matrix for all non-vanishing vectors a_j, (16.15) suggests that the replacement for z − ζ̄ when z, ζ ∈ D is 1 − zζ̄. This motivates the original form of Pick's theorem:

Theorem 16.3 (Pick's Theorem for Carathéodory Functions) Let {z_j}_{j∈I} and {w_j}_{j∈I} be indexed sets in D and {z | Re z > 0}, respectively, with the z_j distinct. Then there is an analytic function F : D → {z | Re z > 0} with

$$ F(z_j) = w_j \tag{16.16} $$

if and only if the Pick matrix

$$ M^{(C)}_{jk} = \frac{w_j + \bar w_k}{1 - z_j \bar z_k} \tag{16.17} $$

is positive.

Theorem 16.4 (Pick's Theorem for Schur Functions) Let {z_j}_{j∈I} and {w_j}_{j∈I} be indexed sets in D with the z_j distinct. Then there is an analytic function f : D → D so that

$$ f(z_j) = w_j \tag{16.18} $$

if and only if the Pick matrix

$$ M^{(S)}_{jk} = \frac{1 - w_j \bar w_k}{1 - z_j \bar z_k} \tag{16.19} $$

is positive.

We note first that if {z_j}_{j∈I} ⊂ C+, {w_j}_{j∈I} ⊂ C+, {ζ_j}_{j∈I} ⊂ D, and {ω_j}_{j∈I} ⊂ {z | Re z > 0}, and if

$$ z_j = T(\zeta_j), \qquad w_j = i\,\omega_j \tag{16.20} $$

with T given by (16.14), then, by (16.15),

$$ \frac{w_j - \bar w_k}{z_j - \bar z_k} = \frac{1}{2}\,(1 - \zeta_j)(1 - \bar\zeta_k)\, \frac{\omega_j + \bar\omega_k}{1 - \zeta_j \bar\zeta_k} \tag{16.21} $$

Thus

$$ M_{jk}(z; w) = \tfrac{1}{2}\,(1 - \zeta_j)(1 - \bar\zeta_k)\, M^{(C)}_{jk}(\zeta; \omega) \tag{16.22} $$


and one side of (16.22) is positive if and only if the other side is. Thus,

Theorem 16.5 Either half of Theorem 16.1 is equivalent to the corresponding half of Theorem 16.3. In particular, since we have proven necessity of M_{jk} ≥ 0 in Theorem 16.1, we have necessity of M^{(C)}_{jk} ≥ 0 in Theorem 16.3.

Similarly, if {z_j}_{j∈I} ⊂ C+, {w_j}_{j∈I} ⊂ C+, {ζ_j}_{j∈I} ⊂ D, and {ω_j}_{j∈I} ⊂ D, and if

$$ z_j = T(\zeta_j), \qquad w_j = T(\omega_j) \tag{16.23} $$

with T given by (16.14), then, by (16.15),

$$ \frac{w_j - \bar w_k}{z_j - \bar z_k} = \left(\frac{(1 - \zeta_j)(1 - \bar\zeta_k)}{1 - \zeta_j \bar\zeta_k}\right) \frac{1 - \omega_j \bar\omega_k}{(1 - \omega_j)(1 - \bar\omega_k)} \tag{16.24} $$

and thus M_{jk}(z; w) is positive if and only if M^{(S)}_{jk}(ζ; ω) is positive. Thus,

(16.25)

is uniformly bounded on compact subsets of D. For each w0 , z0 ∈ C+ , {G | G analytic on C+ , Im G > 0, G(z0 ) = w0 }

(16.26)

is uniformly bounded on compact subsets of D. Proof If T is given by (16.14), then P(G) = −iG ◦ T

(16.27)

is a bijection of G’s in (16.26) to F ’s in (16.25) if ζ0 = T −1 (z0 ). Thus, the result for (16.25) implies it for (16.26). For z0 ∈ D, the map Tz0 (z) =

z − z0 1 − z¯ 0 z

(16.28)

170

16 Pick Interpolation, I: The Basics

w −iθ z , it maps ∂D to ∂D and maps D to D (since Tz0 (eiθ ) = eiθ w 0 ¯ with w = 1 − e Tz0 (0) ∈ D) and

Tz0 (z0 ) = 0

Tz0 (0) = −z0

(16.29)

By a direct calculation, = T−z0 Tz−1 0

(16.30)

If Cζ0 ,w0 is the set (16.25), then F →

((F ◦ T−ζ0 ) + i Re w0 ) Im w0

maps Cζ0 ,w0 to C0,−i , so it suffices to prove (16.25) is bounded from the case C0,−i of Carathéodory functions, that is, F (0) = 1. The Schur lemma says that if f : D → D and f (0) = 0, then g(z) = f (z)/z also maps D → D (see the Notes). By this fact, F (z) =

1 + zf (z) 1 − zf (z)

(16.31)

sets up a one–one correspondence between F ∈ C0,−i and Schur functions. In particular, since |f (z)| ≤ 1, |F (z)| ≤

1 + |z| 1 − |z|

which is uniformly bounded on any compact subset of D.

(16.32)  

Montel’s theorem says that if ⊂ C is an open set and if Kn ⊂ is a family of compact subsets with ∪Kn = , then for any sequence Cn ∈ (0, ∞), {f | f analytic on ; supz∈Kn |f (z)| ≤ Cn } is compact in the topology of uniform convergence on each Kn . Moreover this topology is the same as pointwise convergence (see the Notes). Thus, Theorem 16.8 In the topology of uniform convergence on compacts, the set of Schur functions is compact as are the sets (16.25) and (16.26). Remark By the proof, in place of G(z0 ) = w0 , we can show compactness if G(z0 ) ∈ C a compact subset of C+ . Theorem 16.9 To prove the sufficiency of M ≥ 0 (resp. M (C) ≥ 0, M (S) ≥ 0) in Theorem 16.1 (resp. Theorem 16.3, Theorem 16.4), it suffices to prove it for the case where I is finite.

16 Pick Interpolation, I: The Basics

171

Proof Fix j0 ∈ I and note {G | G(zj0 ) = wj0 } is compact. For each finite F ⊂ I with j0 ∈ F , we can find GF so GF (zj ) = wj for j ∈ F . The set of finite F ’s in I under inclusion is a directed set and {GF }F finite ⊂I is a net so, by compactness, there is a limit point G (see the Notes), which it is easy to see solves the problem for I .   Since we will have proofs of Loewner’s theorem that go through approximation by Herglotz functions, the following compactness result related to the above results is of interest. Notice that fixing G(z0 ) for z0 ∈ C+ fixes two real numbers, Re G(z0 ) and Im G(z0 ). Since those Herglotz functions, f , analytic in (a, b) which we are interested in are real on (a, b), to fix two real numbers we need to fix f at two points. Theorem 16.10 Let a < c < d < b where a can be −∞ or b can be +∞. Let N be the set of Herglotz functions, f , with an analytic continuation to = (C\R)∪(a, b) so that f is real on (a, b) and f (c) = 0, f (d) = 1. Then N is compact in the topology of uniform convergence on compact subsets of , equivalently of pointwise convergence. Remark If α < β and g(c) = α

g(d) = β

(16.33)

then f (x) = (g(x) − α)/(β − α) is in N if and only if (16.33) holds, so there is no loss in fixing these values. Proof If (a, b) = R, the only functions are linear, and there is nothing to prove since N has a single function! If (a, b) = R, then there is a conformal map that takes C+ to C+ , a → −1, b → 1, and c → 0. Thus, with x0 the image of d, functions in N obey ˆ f (x) =

1

−1

x dρ(x) 1 + λx

(16.34)

where ˆ

1

−1

1 1 dρ(x) = 1 + λx0 x0

(16.35)

so ˆ

1

−1

ˆ ρ≤

sup |1 + λx0 | λ∈[−1,1]



1 + x0 x0



1

−1

dρ 1 + λx0 (16.36)

172

16 Pick Interpolation, I: The Basics

The set of measures with this bound is compact and those obeying (16.35) form a closed subset. ∎

In the same way, one proves

Theorem 16.11 Let a < c < b, where a can be −∞ and b can be +∞. Let N be the set of Herglotz functions, f, with an analytic continuation to Ω = (C\R) ∪ (a, b) so that f is real on (a, b) and f(c) = 0, f'(c) = 1. Then N is compact in the topology of uniform convergence on compact subsets of Ω, equivalently of pointwise convergence.

The Loewner matrix is a degenerate form of the Pick matrix (16.10). This is the idea behind the proof of the hard part of Loewner's theorem in Chapter 20, but it also gives a quick proof of the easy half of Loewner's theorem without needing the explicit form of the Herglotz representation. (It is true that our proof of the easy half of Pick's theorem used a Herglotz representation, but there are other proofs; see, e.g., Chapter 19.)

Proof of (c)⇒(a) in Theorems 1.3 and 1.6 If f is Herglotz, then for any n and any z_1, …, z_n in C+, we have that the n × n matrix M(z_1, …, z_n; f) = {(f(z_i) − \overline{f(z_j)})/(z_i − \bar z_j)}_{1≤i,j≤n} is a positive matrix. Given x_1, …, x_n distinct in (a, b), one sees that L(x_1, …, x_n; f) = lim_{ε↓0} M(x_1 + iε, …, x_n + iε; f), so the Loewner matrices are all positive, which means that f is matrix monotone. ∎

Notes and Historical Remarks Pick's theorem in the form of Theorem 16.3 is due to Pick. Independently, Nevanlinna proved Theorem 16.1. Agler–McCarthy have a whole book on extensions, including to multivariable situations using an approach of Sarason. Of course, (16.30) says T_{z_0} commutes with T_{−z_0}. It is not hard to see that T_{z_0} commutes with T_{w_0} if and only if z_0 and w_0 lie on a line through 0. In general, T_{z_0} ∘ T_{z_1} is not T_{−T_{z_0}(−z_1)}, as you might guess from (16.29); rather, it is z ↦ T_{−T_{z_0}(−z_1)}(e^{iθ} z), where θ is a function of z_0, z_1.

The Schwarz lemma, while very powerful, is easy to prove.
If f(0) = 0 and |f(z)| ≤ 1 on D, then for each R < 1, with g(z) = f(z)/z, we have

$$ \sup_{|z| \le R} |g(z)| \le R^{-1} $$

by the maximum principle. Taking R ↑ 1, we see that sup_{z∈D} |g| ≤ 1. Montel's theorem is also simple (see, e.g., Titchmarsh or Simon [326, Section 6.2]). By Cauchy estimates, covering K_n by the interiors of suitable K_ℓ's, we get bounds not only on f but on d^m f/dz^m on each K_n, bounds good enough to sum the Taylor series in a neighborhood of any point in Ω. A subsequence argument then easily establishes compactness. For a discussion of nets, see Kelley. If I is countable, one can use sequences.


Related to Pick's theorem is the following result of Hindmarsh, which appeared in his thesis done under the direction of Loewner:

Theorem 16.12 (Hindmarsh) If f is a continuous function of C+ to C+ so that for any three distinct points in C+,

$$ M_{ij} = \frac{f(z_i) - \overline{f(z_j)}}{z_i - \bar z_j} $$

is a positive 3 × 3 matrix, then f is analytic.

By Pick's theorem, this result holds if each n × n matrix generated by any n points is positive. So this refines Pick's theorem. 2 × 2 matrices are not sufficient, as can be seen by taking f(z) = i Im z. The key to the proof is to take sequences of triples (z_1, z_1 + ε, z_1 + iε) and, by taking ε ↓ 0, to get the Cauchy–Riemann equations. For details, see Hindmarsh or Donoghue.
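As a numerical footnote to the connection (16.13) between Pick and Loewner matrices: for a concrete matrix monotone function (we take f = √· on (0, ∞) as an assumed test case), the Pick matrix at z_j = x_j + iε, w_j = f(x_j) + iε f′(x_j) is already within O(ε) of the Loewner matrix. A sketch:

```python
import numpy as np

def pick_matrix(z, w):
    # Pick matrix (16.10): M_{jk} = (w_j - conj(w_k)) / (z_j - conj(z_k))
    z, w = np.asarray(z), np.asarray(w)
    return (w[:, None] - w.conj()[None, :]) / (z[:, None] - z.conj()[None, :])

x = np.array([0.5, 1.0, 2.0, 3.0])
f, df = np.sqrt(x), 0.5 / np.sqrt(x)

# Loewner matrix: difference quotients off the diagonal, f'(x_j) on it
eye = np.eye(len(x), dtype=bool)
L = np.where(eye, df,
             (f[:, None] - f[None, :]) /
             np.where(eye, 1.0, x[:, None] - x[None, :]))

eps = 1e-6
M = pick_matrix(x + 1j * eps, f + 1j * eps * df)   # data as in (16.13)
print(np.abs(M - L).max())                          # small: M -> L as eps -> 0
```

Note that the diagonal of the Pick matrix is exactly f′(x_j) for every ε, which is why the Loewner matrix carries derivatives on its diagonal.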

Chapter 17

Pick Interpolation, II: Hilbert Space Proof

Our goal here is to prove the following part of Theorem 16.1 which, by the results of the last chapter, completes the proofs of Theorems 16.1, 16.3, and 16.4.

Theorem 17.1 Let $\{z_j\}_{j=1}^n$ and $\{w_j\}_{j=1}^n$ be points in $\mathbb{C}_+$ so that the $z_j$ are distinct and $M_{jk}$ given by (16.10) is positive. Then there exist $A \ge 0$ and a measure $\mu$ with at most $n$ points in its support ($n-1$ points if $A > 0$) so that
$$w_j = A z_j + \int \frac{d\mu(x)}{x - z_j} \tag{17.1}$$

Remarks 1. This is slightly stronger than Theorem 16.1 in that we specify that the number of points in $\mathrm{supp}(d\mu)$ is at most $n$.
2. We will prove that if $M$ is strictly positive, then one can take $A = 0$. By Theorem 16.2, $M > 0$ implies $\#(\mathrm{supp}(d\mu)) \ge n$, so if $M > 0$, (17.1) holds with $d\mu$ having precisely $n$ points and no fewer.

Below we will prove:

Theorem 17.2 If $M$ is strictly positive, then
$$w_j = \int \frac{d\mu(x)}{x - z_j} \tag{17.2}$$
where $\mu$ has $n$ points in its support.

Proof of Theorem 17.1 given Theorem 17.2 Suppose $M$ is positive but not strictly positive. The function $\log(z)$, analytically continued from $(0,\infty)$ to $\mathbb{C}\setminus(-\infty,0]$, has (essentially (16.7))

© Springer Nature Switzerland AG 2019 B. Simon, Loewner’s Theorem on Monotone Matrix Functions, Grundlehren der mathematischen Wissenschaften 354, https://doi.org/10.1007/978-3-030-22422-6_17


$$\frac{\log(z) - \overline{\log(\zeta)}}{z - \bar\zeta} = \int_{-\infty}^0 \frac{dx}{(x - z)(x - \bar\zeta)} \tag{17.3}$$

so, by (16.8), the matrix (ε)

M j k = Mj k +

ε[log(zj ) − log(zk )] zj − z¯ k

(17.4)

is strictly positive for ε > 0, so there exist measures dμ(ε) with ˆ wj + ε log(zj ) =

dμ(ε) (x) x − zj

(17.5)

By the remark after Theorem 16.8, the set of Herglotz functions with |f (z1 )−w1 | ≤ Im(w1 )/2 is compact, so with ˆ G(ε) (z) =

dμ(ε) (x) x−z

find a limiting Herglotz function G(z) solving G(zj ) = wj . By Theorem 16.2, since ker(M) = 0, G has a representation ˆ G(z) = Az +

dμ(x) x−z

(17.6)

where either A > 0 and μ has at most (n − 2) points in its support or A = 0 and μ has at most n − 1 points in its support.   Here is the intuition behind the construction below. Suppose we find an n × n symmetric matrix A and vector ϕ in Cn so that wj = ϕ, (A − zj )−1 ϕ

(17.7)

Let x1 , . . . , xn be the eigenvalues of A and ψ1 , . . . , ψn an orthonormal basis with Aψj = xψj . Then wj =

n  | ϕ, ψ |2 x − z

(17.8)

=1

which is precisely of the form (17.2), and the result is proven. Continuing the supposition on A existing so (17.7) holds, let ηj = (A − zj )−1 ϕ

(17.9)


Since
$$(A - \bar z_j)^{-1}(A - z_k)^{-1} = \frac{(A - z_k)^{-1} - (A - \bar z_j)^{-1}}{z_k - \bar z_j} \tag{17.10}$$
we have
$$\langle \eta_j, \eta_k\rangle = \frac{w_k - \bar w_j}{z_k - \bar z_j} = M_{kj} \tag{17.11}$$
Since $M$ is strictly positive, the $\eta$'s are a basis, so $\varphi$ is uniquely determined by
$$\langle \varphi, \eta_j\rangle = w_j \tag{17.12}$$
and $A$ is determined by (from (17.9), $(A - z_k)\eta_k = \varphi$)
$$\langle \eta_j, A\eta_k\rangle = z_k\langle \eta_j, \eta_k\rangle + \langle \eta_j, \varphi\rangle = z_k\Big(\frac{w_k - \bar w_j}{z_k - \bar z_j}\Big) + \bar w_j = \frac{z_k w_k - \bar z_j\bar w_j}{z_k - \bar z_j} \tag{17.13}$$
The argument below will turn this around. These calculations explain why we define $\varphi$ and $A$ as we do!

Proof of Theorem 17.2 Let $M_{jk}$ be given by (16.10). Let $\eta_j$ be the vector in $\mathbb{C}^n$ with
$$(\eta_j)_k = \delta_{jk} \tag{17.14}$$

Define an inner product on $\mathbb{C}^n$ by
$$\langle \alpha, \beta\rangle = \sum_{j,k} \bar\alpha_j M_{kj}\beta_k \tag{17.15}$$
which is a strictly positive inner product since $M^t > 0$. Thus (17.11) holds. Since $M^t > 0$, we can determine any vector in $\mathbb{C}^n$ by its inner products with the $\eta_j$. Explicitly,
$$\psi = \sum_j \ell_j(\psi)\eta_j \tag{17.16}$$
where
$$\ell_j(\psi) = \sum_k (M^{-1})_{jk}\langle \eta_k, \psi\rangle \tag{17.17}$$
So we pick $\varphi$ so that (17.12) holds. This uniquely determines $\varphi$.


Define a matrix, $A$, by (17.13). Since the $\eta$'s are a basis (albeit not an orthonormal basis), this determines $A$ uniquely as a matrix on $\mathbb{C}^n$. Moreover, by the uniqueness, $A$ is self-adjoint if and only if
$$\langle A\eta_j, \eta_k\rangle = \langle A^*\eta_j, \eta_k\rangle = \langle \eta_j, A\eta_k\rangle$$
that is, if and only if
$$\overline{\langle \eta_k, A\eta_j\rangle} = \langle \eta_j, A\eta_k\rangle \tag{17.18}$$
Thus, $A = A^*$ if and only if the matrix elements in the nonorthonormal basis $\eta_j$ form a self-adjoint matrix. Equation (17.13) has this property, that is,
$$\overline{\Big(\frac{z_j w_j - \bar z_k\bar w_k}{z_j - \bar z_k}\Big)} = \frac{z_k w_k - \bar z_j\bar w_j}{z_k - \bar z_j}$$
so $A = A^*$. Moreover,
$$\langle \eta_j, (A - z_k)\eta_k\rangle = \frac{z_k w_k - \bar z_j\bar w_j}{z_k - \bar z_j} - z_k\,\frac{w_k - \bar w_j}{z_k - \bar z_j} = \bar w_j = \langle \eta_j, \varphi\rangle$$
and thus,
$$(A - z_k)\eta_k = \varphi \tag{17.19}$$
It follows that (recall $\mathrm{Im}\,z_k \ne 0$ and $A$ is self-adjoint, so $A - z_k$ is invertible) $\eta_k = (A - z_k)^{-1}\varphi$, so
$$w_j = \langle \varphi, \eta_j\rangle = \langle \varphi, (A - z_j)^{-1}\varphi\rangle \tag{17.20}$$
As explained before (17.8), this implies (17.7), which proves (17.2). ∎
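The proof just given is fully constructive, and the construction runs verbatim in floating point. A sketch, assuming NumPy and SciPy (the sample Herglotz function, built from a two-atom measure, is our own choice): form the Gram matrix $\langle \eta_j, \eta_k\rangle = M_{kj}$ of (17.11), solve (17.12) for $\varphi$ and (17.13) for $A$, diagonalize $A$ in the $M$-inner product, and recover the representation (17.2).

```python
import numpy as np
from scipy.linalg import eigh

# sample Herglotz data: w_j = G(z_j) for G(z) = 1/(1 - z) + 2/(3 - z),
# i.e. the measure mu = delta_1 + 2 * delta_3
z = np.array([1j, 1.0 + 2j])
w = 1 / (1 - z) + 2 / (3 - z)

M = (w[:, None] - w[None, :].conj()) / (z[:, None] - z[None, :].conj())  # (16.10)
G = M.T                           # Gram matrix: <eta_j, eta_k> = M_kj, cf. (17.11)

# B_jk = <eta_j, A eta_k> = (z_k w_k - conj(z_j w_j)) / (z_k - conj(z_j)), cf. (17.13)
B = (z[None, :] * w[None, :] - (z[:, None] * w[:, None]).conj()) \
    / (z[None, :] - z[:, None].conj())

# phi = sum_l p_l eta_l determined by <phi, eta_j> = w_j, cf. (17.12)
p = np.linalg.solve(G.T, w).conj()

# diagonalize A in the M-inner product: B psi = lam G psi, with psi* G psi = 1
lam, psi = eigh(B, G)
weights = np.abs(p.conj() @ G @ psi) ** 2   # |<phi, psi_l>|^2

# recover (17.2)/(17.8): w_j = sum_l |<phi, psi_l>|^2 / (x_l - z_j)
w_rec = (weights[None, :] / (lam[None, :] - z[:, None])).sum(axis=1)
```

The generalized eigenproblem `eigh(B, G)` is exactly "diagonalize the self-adjoint $A$ with respect to the inner product (17.15)"; its eigenvalues are the support points $x_\ell$ and the `weights` are the masses of $\mu$.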

Notes and Historical Remarks The proof in this chapter is essentially that exposed in Chapters X–XI of Donoghue, which in turn is motivated by Sz.-Nagy and Korányi. Donoghue uses the language of reproducing kernels, which seems to me to obscure the situation. Moreover, these authors deal directly with infinite sets using the spectral theorem rather than using compactness to reduce to the finite-dimensional case. There is an awkwardness in the above proof when $M$ is not strictly positive. We added something small and strictly positive, found $f$, took a limit, and then


used Theorem 16.3 to prove the measure could be taken to have $\mathrm{rank}(M)$ points rather than $\dim(M)$. It is more natural, if $\mathrm{rank}(M) < \dim(M)$, to instead still use $M$ to define an inner product $\langle\,\cdot\,,\,\cdot\,\rangle_M$ and consider the Hilbert space $\mathcal{H} = \mathbb{C}^n/\{\varphi \mid \langle \varphi, \varphi\rangle_M = 0\}$. This space has dimension $\mathrm{rank}(M)$, so if we can define $A$ on $\mathcal{H}$, we get a measure with only $\mathrm{rank}(M)$ points. Alas, I do not see any way to directly prove that $\langle \varphi, \varphi\rangle_M = 0 \Rightarrow \langle \psi, A\varphi\rangle_M = 0$, and so no direct way to follow this strategy for finitely many points in $\mathbb{C}_+$. Indeed, Sz.-Nagy and Korányi prove a theorem for $\mathbb{C}_+$ by considering a set of points, $S$, which is required to have $0$ as a nontangential limit point! They can then handle degenerate $M$'s. However, Sz.-Nagy and Korányi do have an argument that works on the quotient space in the Carathéodory case (i.e., Theorem 16.3) if one of the points is $0$ (and then, by conformal transformation, one gets Theorem 16.3). Let us describe their argument: $\{z_j\}_{j=1}^n$ are distinct points in $\mathbb{D}$ with $z_n = 0$ and $\{w_j\}_{j=1}^n$ in $\mathbb{C}_+$ so that the matrix $M^{(C)}$ of (16.17) is positive but not necessarily strictly positive. For $\alpha \in \mathbb{C}^n$, define
$$\langle \alpha, \alpha\rangle_M = \sum_{j,k=1}^n \bar\alpha_j\alpha_k M^{(C)}_{kj} \tag{17.21}$$

Define $V : \mathbb{C}^{n-1} \to \mathbb{C}^n$ (where, by $\mathbb{C}^{n-1}$, we mean $\{\alpha \in \mathbb{C}^n \mid \alpha_n = 0\}$) by
$$V\eta_j = \frac{\eta_j - \eta_n}{z_j} \tag{17.22}$$
for $j = 1,\dots,n-1$. Then for $1 \le j, k \le n-1$,
$$\begin{aligned}
\langle V\eta_j, V\eta_k\rangle_M &= \frac{1}{\bar z_j z_k}\big[\langle \eta_j, \eta_k\rangle_M + \langle \eta_n, \eta_n\rangle_M - \langle \eta_j, \eta_n\rangle_M - \langle \eta_n, \eta_k\rangle_M\big] \\
&= \frac{1}{\bar z_j z_k}\Big[\frac{w_k + \bar w_j}{1 - z_k\bar z_j} + (w_n + \bar w_n) - (\bar w_j + w_n) - (\bar w_n + w_k)\Big] \\
&= \frac{1}{\bar z_j z_k}\,(w_k + \bar w_j)\Big[\frac{1}{1 - z_k\bar z_j} - 1\Big] \\
&= \frac{w_k + \bar w_j}{1 - z_k\bar z_j} = \langle \eta_j, \eta_k\rangle_M
\end{aligned} \tag{17.23}$$
Thus,
$$\langle \varphi, \varphi\rangle_M = 0 \Rightarrow \langle V\varphi, V\varphi\rangle_M = 0 \tag{17.24}$$


Let π : Cn → C where  = rank(M) is the canonical projection of Cn to Cn /{ϕ |

ϕ, ϕ M = 0}, then (17.24) says that we can define a map V1 : π [Cn−1 ] → C by V1 π(u) = π(V u)

(17.25)

If π [Cn−1 ] = C , take V˜ = V1 . Otherwise, one can extend V1 to a unitary map on C (the codimension of π [Cn−1 ] is either 0 or 1). Sz.-Nagy and Korányi deal with infinitely many zj ’s (with z0 = 0) and so their V˜ may only be a contraction. Now, (1 − zk V )ηk = ηk − (ηk − ηn ) = ηn

(17.26)

(1 − zk V˜ )π(ηk ) = π(ηn )

(17.27)

so

Since |zk | < 1 and V˜  = 1, 1 − zk V˜ is invertible, and (17.27) implies π(ηk ) = (1 − zV˜ )−1 π(ηn )

(17.28)

It follows that k ≤ n − 1,

π(ηn ), (1 + zk V˜ )(1 − zk V˜ )−1 π(ηn ) = π(ηn ), (1 + zk V˜ )π(ηk ) = π(ηn ), [2 − (1 − zk V˜ )]π(ηk ) = π(ηn ), 2π(ηk ) − π(ηn ), π(ηn )

(17.29)

= 2 ηn , ηk M − ηn , ηn M = 2(wk + w¯ n ) − (wn + w¯ n ) = 2wk − 2i Im(wn )

(17.30)

Equation (17.29) follows from (17.26). Equation (17.30) also holds for k = n since (zn = 0)

π(ηn ), π(ηn ) = wn + w¯ n = 2wn − 2i Im(wn ) Since V˜ is unitary, by the spectral theorem, there is a point measure on ∂D with at most  pure points with 1 2

π(ηn ), (1 + zV˜ )(1 − zV˜ )−1 π(ηn ) =

ˆ

eiθ + z dμ(θ ) eiθ − z


By (17.30),
$$w_k = i\,\mathrm{Im}(w_n) + \int \frac{e^{i\theta} + z_k}{e^{i\theta} - z_k}\,d\mu(\theta)$$
is the required representation.
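Since (17.23) is a purely algebraic identity in the $z$'s and $w$'s (using only $z_n = 0$ and the form of $M^{(C)}$), it can be verified numerically. A sketch assuming NumPy; the sample Carathéodory function and points are our own choices, and we take the Gram matrix entries $\langle \eta_j, \eta_k\rangle_M = (w_k + \bar w_j)/(1 - z_k\bar z_j)$ as in the computation above.

```python
import numpy as np

def F(z):
    # a Caratheodory function (Re F > 0 on D): two-point measure on the circle
    a, b = np.exp(1j), np.exp(2j)
    return (a + z) / (a - z) + 2 * (b + z) / (b - z)

z = np.array([0.3, 0.2 + 0.4j, -0.5j, 0.0])   # note z_n = 0, as required
w = F(z)
n = len(z)

# Gram matrix of the eta basis: <eta_j, eta_k>_M = (w_k + conj(w_j)) / (1 - z_k conj(z_j))
G = (w[None, :] + w[:, None].conj()) / (1 - z[None, :] * z[:, None].conj())

# the matrix of V from (17.22): V eta_j = (eta_j - eta_n) / z_j, j = 1..n-1
V = np.zeros((n, n - 1), dtype=complex)
for j in range(n - 1):
    V[j, j] = 1 / z[j]
    V[n - 1, j] = -1 / z[j]

# (17.23): V is an isometry for <.,.>_M on {alpha | alpha_n = 0}
lhs = V.conj().T @ G @ V
```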

Chapter 18

Pick Interpolation, III: Continued Fraction Proof

In this chapter, we will prove the following, which provides the missing part (given Chapter 16) of Theorem 16.4 and thereby completes (again) the proofs of Theorems 16.1, 16.3, and 16.4:

Theorem 18.1 Let $\{z_j\}_{j=1}^n$ and $\{w_j\}_{j=1}^n$ be collections of points in $\mathbb{D}$ with the $z_j$ distinct and so that $M^{(S)}_{jk} \ge 0$. Then there is a Blaschke product, $B(z)$, of order at most $n$ so that
$$B(z_j) = w_j \tag{18.1}$$

Remarks 1. $M^{(S)}$ is given by (16.19).
2. If $\mathrm{rank}(M^{(S)}) = n - \ell$, one can show that $B$ has order exactly $n - \ell$. In particular, if $M^{(S)}$ is strictly positive, $B$ has order $n$.

In the above, we define a Blaschke product of order $n$ by
$$B(z) = e^{i\theta_0}\prod_{j=1}^n T_{z_j}(z) \tag{18.2}$$
for $z_1,\dots,z_n \in \mathbb{D}$, where
$$T_{z_j}(z) = \frac{z - z_j}{1 - \bar z_j z} \tag{18.3}$$
as discussed in (16.28). Since each $T_{z_j} : \mathbb{D} \to \mathbb{D}$, $B$ maps $\mathbb{D}$ to $\mathbb{D}$, and so Theorem 18.1 solves the Schur function interpolation problem. This is not quite the usual definition of Blaschke product, which is defined, when no $z_j$ is zero, to be positive at $z = 0$. That choice is convenient for dealing with infinite Blaschke


products (see [326, Section 9.9]); our extra phase factor is necessary for Lemma 18.2 to hold.

The strategy of the proof will be to use induction on $n$. Assuming the result for $n-1$ points and that $z_n = w_n = 0$, we will show that we can take $B_n(z) = zB_{n-1}(z)$, where $B_{n-1}$ is a suitable Blaschke product of order at most $n-1$. The full result will follow by using the $T_z$'s to move $(z_n, w_n)$ to $(0,0)$. One part will be to prove that these maps preserve positivity of $M^{(S)}$. We will also need to know that Blaschke products are invariant under such transformations, and it is to that we first turn.

Lemma 18.2 Let $f$ be a function analytic in a neighborhood of $\bar{\mathbb{D}}$ that obeys
$$|z| = 1 \Rightarrow |f(z)| = 1 \tag{18.4}$$
(i) If $f$ is non-vanishing in $\mathbb{D}$, then $f(z) \equiv e^{i\theta_0}$ for some real $\theta_0$.
(ii) If $f$ has $n$ zeros in $\mathbb{D}$, then $f$ is a Blaschke product of order $n$.

Proof (i) Suppose $f$ is analytic and non-vanishing in $\{z \mid |z| < R\}$ for some $R > 1$, and define $g$ on $\{z \mid |z| > R^{-1}\}$ by
$$g(z) = \big[\,\overline{f(1/\bar z)}\,\big]^{-1} \tag{18.5}$$
Since $f$ is non-vanishing in $\mathbb{D}$, $g$ is analytic in $\{z \mid |z| > R^{-1}\}$. By (18.4), $f(z) = g(z)$ for $|z| = 1$, so $g$ provides an analytic continuation of $f$ to all of $\mathbb{C}$. By (18.5), $\lim_{z\to\infty} g(z) = 1/\overline{f(0)}$. So this entire function is bounded, thus a constant which, by (18.4), must be $e^{i\theta_0}$.
(ii) Let $z_1,\dots,z_n$ be the zeros of $f$ (counting multiplicity). Then
$$\frac{f(z)}{\prod_{j=1}^n T_{z_j}(z)} \tag{18.6}$$
is analytic in a neighborhood of $\bar{\mathbb{D}}$, non-vanishing, and obeys (18.4). By (i), this ratio is $e^{i\theta_0}$; that is, $f$ is a Blaschke product. ∎

Proposition 18.3 If $B$ is a Blaschke product of order $n$ and $z_0 \in \mathbb{D}$, then $B \circ T_{z_0}$ and $T_{z_0} \circ B$ are both also finite Blaschke products of order $n$.

Proof Since $T_{z_0}$ maps a function analytic in a neighborhood of $\bar{\mathbb{D}}$ into such a function, it takes functions obeying (18.4) into themselves, whether for $B \circ T_{z_0}$ or for $T_{z_0} \circ B$. Since $T_{z_0}$ is a bijection, it preserves the number of zeros. By (ii) of Lemma 18.2, $B \circ T_{z_0}$ and $T_{z_0} \circ B$ are Blaschke products of the same order as $B$. ∎


We need an analog of this result for the $M^{(S)}$ matrix:

Proposition 18.4 Suppose $\{z_j\}_{j=1}^n$ and $\{w_j\}_{j=1}^n$ lie in $\mathbb{D}$ with the $z$'s distinct and so that $M^{(S)}(\{z_j\}_{j=1}^n, \{w_j\}_{j=1}^n)$ is positive. Let $\zeta_0, \zeta_1 \in \mathbb{D}$ and let
$$\tilde z_j = T_{\zeta_0}(z_j), \qquad \tilde w_j = T_{\zeta_1}(w_j) \tag{18.7}$$
Then $M^{(S)}(\{\tilde z_j\}_{j=1}^n, \{\tilde w_j\}_{j=1}^n)$ is also positive.

Proof We have, by direct calculation,
$$1 - T_{\zeta_0}(z_0)\overline{T_{\zeta_0}(z_1)} = \frac{(1 - |\zeta_0|^2)(1 - z_0\bar z_1)}{(1 - z_0\bar\zeta_0)(1 - \bar z_1\zeta_0)} \tag{18.8}$$
Thus,
$$M^{(S)}_{k\ell}(\{\tilde z_j\}_{j=1}^n, \{\tilde w_j\}_{j=1}^n) = a_k\bar a_\ell\,M^{(S)}_{k\ell}(\{z_j\}_{j=1}^n, \{w_j\}_{j=1}^n) \tag{18.9}$$
where
$$a_k = (1 - |\zeta_1|^2)^{1/2}(1 - |\zeta_0|^2)^{-1/2}(1 - \bar\zeta_1 w_k)^{-1}(1 - \bar\zeta_0 z_k) \tag{18.10}$$
Positivity of one $M^{(S)}$ implies positivity of the other. ∎

The final preliminary step is an analog of the Schwarz lemma for this context:

Proposition 18.5 Let $\{z_j\}_{j=1}^n$ and $\{w_j\}_{j=1}^n$ obey the hypotheses of Theorem 18.1. Suppose that $w_n = z_n = 0$. Then either
(i) for some $e^{i\theta} \in \partial\mathbb{D}$, $w_j = e^{i\theta}z_j$, $j = 1,\dots,n-1$; or
(ii) for $j = 1,\dots,n-1$, $|w_j| < |z_j|$ and the $(n-1)\times(n-1)$ matrix
$$M^{(S)}(\{z_j\}_{j=1}^{n-1}, \{w_j/z_j\}_{j=1}^{n-1}) \tag{18.11}$$
is positive.

Proof The matrix $M = M^{(S)}(\{z_j\}_{j=1}^n, \{w_j\}_{j=1}^n)$ has the form
$$M = \begin{pmatrix} \widehat M & \begin{matrix}1\\ \vdots\\ 1\end{matrix} \\ 1\;\cdots\;1 & 1 \end{pmatrix} \tag{18.12}$$
where $\widehat M$ is the upper-left $(n-1)\times(n-1)$ block; that is, the last row and column are all $1$'s. Given an $(n-1)$-vector $(\alpha_1,\dots,\alpha_{n-1})$, take the $n$-component vector with


$$\alpha_n = -\sum_{j=1}^{n-1}\alpha_j \tag{18.13}$$
Then, by (18.12),
$$\sum_{j,k=1}^n M_{jk}\bar\alpha_j\alpha_k = \sum_{j,k=1}^{n-1}\widehat M_{jk}\bar\alpha_j\alpha_k - 2\Big|\sum_{j=1}^{n-1}\alpha_j\Big|^2 + \Big|\sum_{j=1}^{n-1}\alpha_j\Big|^2 = \sum_{j,k=1}^{n-1}(\widehat M_{jk} - 1)\bar\alpha_j\alpha_k \tag{18.14}$$

Thus, the $(n-1)\times(n-1)$ matrix $N$, given by
$$N_{jk} = \widehat M_{jk} - 1 \tag{18.15}$$
is positive. In particular, $N_{jj} \ge 0$, so
$$\frac{1 - |w_j|^2}{1 - |z_j|^2} \ge 1 \tag{18.16}$$
which implies that
$$|w_j|^2 \le |z_j|^2 \tag{18.17}$$
If $|w_{j_0}| = |z_{j_0}|$ for some $j_0$, then $w_{j_0} = e^{i\theta}z_{j_0}$ and $N_{j_0j_0} = 0$. Since $N \ge 0$, $N_{j_0k} = 0$ for all $k$, that is,
$$1 - w_{j_0}\bar w_k = 1 - z_{j_0}\bar z_k \tag{18.18}$$
which implies $w_k = e^{i\theta}z_k$ for all $k$, and we are in case (i). If $|w_j| < |z_j|$ for all $j$, set $\tilde w_j = w_j/z_j \in \mathbb{D}$. Then
$$1 - \tilde w_j\bar{\tilde w}_k = (z_j\bar z_k)^{-1}(z_j\bar z_k - w_j\bar w_k) \tag{18.19}$$
$$= (z_j\bar z_k)^{-1}\big[(1 - z_j\bar z_k)N_{jk}\big] \tag{18.20}$$
since
$$N_{jk} = \frac{z_j\bar z_k - w_j\bar w_k}{1 - z_j\bar z_k} \tag{18.21}$$


It follows that
$$M^{(S)}_{k\ell}(\{z_j\}_{j=1}^{n-1}, \{\tilde w_j\}_{j=1}^{n-1}) = z_k^{-1}\bar z_\ell^{-1}N_{k\ell}$$
is positive. ∎

Proof of Theorem 18.1 We use induction on $n$. For $n = 1$, we note that if $f = T_{-w_1} \circ T_{z_1}$, then $f(z_1) = w_1$. Thus, suppose we can construct $f$ for $n-1$ points. Given $n$ points, let $\{\tilde z_j\}_{j=1}^n$, $\{\tilde w_j\}_{j=1}^n$ be
$$\tilde z_j = T_{z_n}(z_j), \qquad \tilde w_j = T_{w_n}(w_j) \tag{18.22}$$
If we find a Blaschke product $g$ so that $g(\tilde z_j) = \tilde w_j$, then $B = T_{-w_n} \circ g \circ T_{z_n}$ obeys (18.1) and is, by Proposition 18.3, also a Blaschke product. Note that, by Proposition 18.4, $M^{(S)}(\{\tilde z_j\}, \{\tilde w_j\})$ is positive. Thus, we need only prove the result in the special case $z_n = w_n = 0$. In that case, we can use Proposition 18.5. If (i) holds, $B(z) = e^{i\theta}z$ solves (18.1). If (ii) holds, by Proposition 18.5 and the induction hypothesis, we can find a Blaschke product, $g$, of order at most $n-1$ so that
$$g(z_j) = \frac{w_j}{z_j}, \qquad j = 1,\dots,n-1 \tag{18.23}$$
Then $B(z) = zg(z)$ solves (18.1). ∎
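The induction in this proof is an algorithm: move $(z_n, w_n)$ to $(0,0)$ via (18.22), divide out a zero as in Proposition 18.5(ii), and recurse. A sketch of that recursion, assuming NumPy; the sample data, values of the Schur function $f(u) = u^2$, are our own choice, and the case-(i) test is a crude numerical threshold, not part of the text.

```python
import numpy as np

def T(a, u):
    # the disk automorphism T_a(u) = (u - a)/(1 - conj(a) u) of (18.3)
    return (u - a) / (1 - np.conj(a) * u)

def marshall(z, w):
    """Return a callable Blaschke product B of order <= len(z) with B(z_j) = w_j,
    assuming the matrix M^(S) of (16.19) for the data is positive."""
    n = len(z)
    if n == 1:
        z1, w1 = z[0], w[0]
        return lambda u: T(-w1, T(z1, u))          # sends z1 -> 0 -> w1
    zn, wn = z[-1], w[-1]
    zt = [T(zn, a) for a in z[:-1]]                # move (z_n, w_n) to (0, 0), cf. (18.22)
    wt = [T(wn, a) for a in w[:-1]]
    if max(abs(abs(b) - abs(a)) for a, b in zip(zt, wt)) < 1e-12:
        theta = wt[0] / zt[0]                      # case (i) of Prop. 18.5: a rotation
        Bt = lambda u: theta * u
    else:
        g = marshall(zt, [b / a for a, b in zip(zt, wt)])  # case (ii): divide out the zero
        Bt = lambda u: u * g(u)
    return lambda u: T(-wn, Bt(T(zn, u)))          # undo the two Moebius moves

z = [0.1, 0.3 + 0.2j, -0.4j]
w = [a ** 2 for a in z]                            # values of the Schur function f(u) = u^2
B = marshall(z, w)
```

The returned $B$ interpolates the data and, being a finite Blaschke product, has modulus $1$ on the unit circle.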

Notes and Historical Remarks The proof in this chapter is due to D. Marshall. A variant appears in his paper, and the proof, essentially in the form we give, appears in Garnett, where it is attributed to Marshall's unpublished thesis. As we will see, there is a connection to the earlier construction of Wigner–von Neumann discussed in Chapter 31 and to the Schur algorithm discussed next.

Schur [308, 309] found a natural way to parameterize what we now call Schur functions. If $f : \mathbb{D} \to \mathbb{D}$, let $\gamma_0 = f(0)$. Define $f_1$ by
$$f(z) = \frac{\gamma_0 + zf_1}{1 + \bar\gamma_0 zf_1} = T_{-\gamma_0}(zf_1) \tag{18.24}$$
Then, by the Schwarz lemma, $f_1 : \mathbb{D} \to \mathbb{D}$. If $f_1$ is a constant in $\partial\mathbb{D}$, set $\gamma_1 = f_1(0)$ and stop. Otherwise, set $\gamma_1 = f_1(0) \in \mathbb{D}$ and iterate using (18.24). In this way, every Schur function is associated either to a finite sequence $(\gamma_0,\dots,\gamma_m)$ with $|\gamma_m| = 1$ and $|\gamma_j| < 1$ for $j = 0,\dots,m-1$, or else to an infinite sequence with all $|\gamma_j| < 1$. Schur proved this map is a bijection and that the finite sequences correspond precisely to Blaschke products. Wall and Geronimus noted that the Schur algorithm corresponds to a set of continued fraction expansions. Marshall's proof is essentially a multipoint Schur algorithm, and so it too is a kind of continued fraction expansion.

One can ask about the relation between finiteness in this chapter (i.e., finite Blaschke products) and in the last chapter (i.e., finite $\mathrm{supp}(d\mu)$). It is the following. Let $f$, $F$, and $d\mu$ be related by
$$F(z) = \frac{1 + zf(z)}{1 - zf(z)} = \int \frac{e^{i\theta} + z}{e^{i\theta} - z}\,d\mu(\theta) \tag{18.25}$$
Then $d\mu$ has exactly $N$ points in its support if and only if $f$ is a Blaschke product of order $N - 1$.
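The Schur algorithm (18.24) is effective on Taylor coefficients: $\gamma_k$ is the constant term of $f_k$, and $f_{k+1}$ comes from one series division and a shift. A sketch assuming NumPy; the truncation length and the test function, the order-2 Blaschke product $f(z) = z(z - \tfrac12)/(1 - \tfrac12 z)$ with parameter sequence $(0, -\tfrac12, 1)$, are our own choices.

```python
import numpy as np

N = 16   # series truncation length

def smul(a, b):
    # product of two truncated power series
    return np.convolve(a, b)[:N]

def sinv(a):
    # truncated power-series inverse of a; requires a[0] != 0
    out = np.zeros(N, dtype=complex)
    out[0] = 1 / a[0]
    for k in range(1, N):
        out[k] = -np.dot(a[1:k + 1], out[k - 1::-1]) / a[0]
    return out

def schur_parameters(c):
    # iterate (18.24): gamma_k = f_k(0), z f_{k+1} = (f_k - gamma_k)/(1 - conj(gamma_k) f_k)
    gammas = []
    for _ in range(N):
        g = c[0]
        gammas.append(g)
        if abs(g) > 1 - 1e-9:                      # unimodular constant: finite sequence
            break
        num = c.copy(); num[0] -= g
        den = -np.conj(g) * c; den[0] += 1
        c = np.append(smul(num, sinv(den))[1:], 0.0)   # divide by z
    return gammas

# Taylor coefficients of f(z) = z (z - 1/2)/(1 - z/2)
a = 0.5
geom = a ** np.arange(N)                           # series of 1/(1 - z/2)
poly = np.zeros(N, dtype=complex); poly[1] = -a; poly[2] = 1.0
f = smul(poly, geom)

gammas = schur_parameters(f)                       # recovers (0, -1/2, 1)
```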

Chapter 19

Pick Interpolation, IV: Commutant Lifting Proof

In this chapter, we'll present a proof of Pick's theorem, Theorem 16.4, due to Sarason, that relies on the Hardy spaces $H^2$ and $H^\infty$. It is also a motivation for the Rosenblum–Rovnyak proof of the hard part of Loewner's theorem in Chapter 33. Pick's theorem asserts the existence of a function $f \in H^\infty$ with $f(z_j) = w_j$ and $\|f\|_\infty \le 1$. A key fact underlying the approach of this chapter: if a bounded operator $A : H^2 \to H^2$ commutes with $M_z$, multiplication by $z$, then there exists $f \in H^\infty$ so that $(Ag)(z) = f(z)g(z)$. This will be proven in Theorem 19.3 below. For Pick's theorem, we'll not construct $A$ directly; rather, we'll look at the $n$-dimensional space $\mathcal{K}$ which is the orthogonal complement of $\{g \in H^2 \mid g(z_j) = 0,\ j = 1,\dots,n\}$. For now, we denote the orthogonal projection onto $\mathcal{K}$ by $P$. $\mathcal{K}$ will have a natural algebraic (but not orthonormal) basis, $\{s_j\}_{j=1}^n$, of those functions in $H^2$ with $\langle s_j, g\rangle = g(z_j)$. The operator $X$ on $\mathcal{K}$ given by $Xs_j = \bar w_js_j$ will turn out to commute with $(PM_zP)^*$. The key to proving Pick's theorem will be to show there exists $G$ on $H^2$ commuting with $M_z$ so that $PGP = X^*$ and so that $\|G\|_{\mathcal{L}(H^2)} = \|X\|_{\mathcal{L}(\mathcal{K})}$. It is this lifting of the commutant that gives the name to this approach. As we'll discuss in the Notes, this can be viewed as a special case of a much more general theorem. If $G$ is multiplication by $f$, we'll prove that $f(z_j) = w_j$, so that $\|X\|_{\mathcal{L}(\mathcal{K})} \le 1$ will be the sufficient (and necessary) condition to solve the Pick problem for Schur functions. After presenting this solution, we'll use the machinery to prove a very general abstract theorem that will be the basis of the proof of Loewner's theorem in Chapter 33. In many ways, our discussion here is taking the scenic route. The machinery we'll present relies on lovely theorems of Beurling and of Nehari.
While the general abstract theorem we state at the end will need this full machinery, as we'll remark, our application in Chapter 33 involves only a special case where all that is needed is Theorem 19.3, and we could avoid using either Beurling's or Nehari's theorem. Our proof of Pick's theorem uses the very special case of $\mathcal{K}$ described above, where, as we'll see (the proof of Theorem 16.4 later in this chapter), Beurling's theorem is merely a simple calculation. So neither application requires Beurling's theorem, but its proof is easy and the result illuminating. Since we have Beurling's theorem, adding to the scenic nature of this chapter, we'll end with an application to prove the Titchmarsh and Lions convolution theorems.

We begin with some elementary facts about $H^2$ and $H^\infty$ (see the books [86, 185] or the chapters on $H^p$ spaces in [119, 179, 290] and [328, Chapter 5] for more):

(1) Since $\{e^{in\theta}\}_{n=-\infty}^\infty$ is an orthonormal basis for $L^2(\partial\mathbb{D}, \frac{d\theta}{2\pi})$, the Fourier transform (aka Fourier series in this case), $f \mapsto \hat f_n = \langle e^{in\cdot}, f\rangle$, sets up a correspondence between $L^2$ and $\ell^2(\mathbb{Z}) = \{\{a_n\}_{n=-\infty}^\infty \mid \sum_{n=-\infty}^\infty |a_n|^2 < \infty\}$.

(2) The Hardy space is $H^2(\mathbb{D}) = \{f \in L^2 \mid \hat f_n = 0 \text{ for } n < 0\}$. It is isomorphic under the Fourier transform to $\ell^2(\mathbb{N}) = \{\{a_n\}_{n=0}^\infty \mid \sum_{n=0}^\infty |a_n|^2 < \infty\}$. Given such an $f$, we can define, for each $z \in \mathbb{D}$, $f(z) = \sum_{n=0}^\infty a_nz^n$, which converges uniformly on $\{z \mid |z| < r\}$ for each $r < 1$ since $\|\{z^n\}_{n=0}^\infty\|_{\ell^2}^2 = \sum_{n=0}^\infty |z|^{2n} = (1 - |z|^2)^{-1}$.

Thus $H^2(\mathbb{D})$ is naturally associated to a vector space of analytic functions on $\mathbb{D}$.

(3) If $f$ is analytic on $\mathbb{D}$, we have that $f(z) = \sum_{n=0}^\infty a_nz^n$ and
$$\sum_{n=0}^\infty |a_n|^2 = \lim_{r\uparrow 1}\sum_{n=0}^\infty |a_n|^2r^{2n} = \lim_{r\uparrow 1}\int |f(re^{i\theta})|^2\,\frac{d\theta}{2\pi} \tag{19.1}$$
where one can take either $\lim_{r\uparrow 1}$ or $\sup_{r<1}$.

… for any $\varepsilon > 0$, there is a rational $\varphi$ with …, proving that $\|M_f\| \ge \|f\|_\infty$. Below we'll see that $\{M_f \mid f \in L^\infty\}$ is precisely the set of operators in $\mathcal{L}(L^2)$ commuting with $M_z$.

(6) $H^\infty(\mathbb{D}) \equiv H^2(\mathbb{D}) \cap L^\infty(\partial\mathbb{D}, \frac{d\theta}{2\pi})$, by which we intend those analytic functions on $\mathbb{D}$ in $H^2$ whose boundary values are in $L^\infty$. From the Poisson formula, we see that such a function is bounded on $\mathbb{D}$ by $\sup_{e^{i\theta}\in\partial\mathbb{D}}|f(e^{i\theta})| = \|f\|_\infty$. On the other hand, if $f$ is a bounded analytic function, its boundary values are a.e. bounded by $\sup_{z\in\mathbb{D}}|f(z)|$. We thus see that $H^\infty(\mathbb{D})$ is the space of bounded analytic functions, with norm
$$\|f\|_{L^\infty(\partial\mathbb{D})} = \sup_{z\in\mathbb{D}}|f(z)| \tag{19.9}$$
If $f \in H^2$, then $f(r\,\cdot) \in H^\infty$, so by (19.3), $H^\infty$ is dense in $H^2$ (alternatively, the Taylor polynomials, which are bounded, converge!).

(7) If $f \in H^\infty$ and $g \in H^2$, then $fg \in H^2$, so $M_f$ restricts to a map of $H^2$ to itself that we'll denote by $\widetilde M_f$ (but only for a while; shortly, we'll drop the tilde). Since $H^2 \subset L^2$, $\|\widetilde M_f\| \le \|M_f\|$. On the other hand, if $g \in H^2$, we have that
$$\|M_f z^{-n}g\| = \|z^{-n}M_fg\| = \|M_fg\| = \|\widetilde M_fg\| \le \|\widetilde M_f\|\,\|z^{-n}g\|$$
Since $\cup_{n=0}^\infty z^{-n}H^2$ is dense in $L^2$, we conclude that $\|\widetilde M_f\| = \|M_f\|$, so we henceforth drop the tilde. Thus (19.8) holds, where now $M_f$ is the operator on $H^2$ and $\|f\|_\infty$ is the $H^\infty$-norm. Below we'll see that $\{M_f \mid f \in H^\infty\}$ is precisely the set of operators in $\mathcal{L}(H^2)$ commuting with $M_z$.

(8) We next need to consider infinite Blaschke products. To get convergence, we can't use the factors of the form (18.3), since even at $z = 0$ there is no reason for the phases of the $-z_j$ to converge. Thus we define single Blaschke factors by
$$b(z, w) = \begin{cases} z & \text{if } w = 0 \\[4pt] \dfrac{|w|}{w}\,\dfrac{w - z}{1 - \bar wz} & \text{otherwise} \end{cases} \tag{19.10}$$

This function is $|w|$ at $z = 0$, so for absolute convergence of an infinite product of $b(z, z_j)$ at $z = 0$, it suffices that $\sum_{j=1}^\infty(1 - |z_j|) < \infty$. In fact, it is easy to see that
$$|1 - b(z, w)| \le \frac{(1 - |w|)(1 + |z|)}{1 - |z|} \tag{19.11}$$
from which we conclude that
$$\sum_{j=1}^\infty(1 - |z_j|) < \infty \;\Rightarrow\; B(z, \{z_j\}_{j=1}^\infty) = \prod_{j=1}^\infty b(z, z_j) \text{ converges} \tag{19.12}$$
and defines a function analytic on $\mathbb{D}$ vanishing exactly at the $z_j$'s. This function is bounded by $1$ in magnitude, so it lies in $H^\infty$. Thus, it has boundary values on $\partial\mathbb{D}$, and one can show that for a.e. $\theta$, $|B(e^{i\theta})| = 1$. Conversely, if $H^1(\mathbb{D})$ is the set of functions analytic in $\mathbb{D}$ with
$$\|f\|_1 \equiv \lim_{r\uparrow 1}\int |f(re^{i\theta})|\,\frac{d\theta}{2\pi} = \sup_{r<1}\int |f(re^{i\theta})|\,\frac{d\theta}{2\pi} < \infty$$
then …

… converges for $\mathrm{Im}\,z > 0$ since $|e^{izx}| = e^{-x\,\mathrm{Im}\,z} \in L^2(0,\infty)$. It thus defines an analytic function on $\mathbb{C}_+$. For $y > 0$, the Plancherel theorem shows that
$$\int |f(k + iy)|^2\,dk = \int e^{-2yx}|\hat f(x)|^2\,dx \tag{19.21}$$
so
$$\|f\|_2^2 = \sup_{y>0} \dots$$
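Returning to item (8): the bound (19.11) and the convergence in (19.12) can be seen numerically. A sketch assuming NumPy; the zero sequence $z_j = 1 - (j+1)^{-2}$, which satisfies $\sum(1 - |z_j|) < \infty$, and the evaluation point are our own choices.

```python
import numpy as np

def b(z, w):
    # single Blaschke factor (19.10)
    if w == 0:
        return z
    return (abs(w) / w) * (w - z) / (1 - np.conj(w) * z)

zeros = [1 - 1 / (j + 1) ** 2 for j in range(1, 2000)]   # sum of (1 - |z_j|) converges
z0 = 0.3 + 0.2j

# the bound (19.11) holds for these factors
bound_ok = all(
    abs(1 - b(z0, w)) <= (1 - abs(w)) * (1 + abs(z0)) / (1 - abs(z0)) + 1e-12
    for w in zeros[:100])

# partial products of (19.12) settle down at z0 (Cauchy tails shrink like the tail sum)
partials = []
p = 1.0
for w in zeros:
    p *= b(z0, w)
    partials.append(p)
```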