Bitangential direct and inverse problems for systems of integral and differential equations 9781107018877, 1107018870


English Pages 472 [488] Year 2012


Table of contents :
1. Introduction
2. Canonical systems and related differential equations
3. Matrix valued functions in the Nevanlinna class
4. Interpolation problems, resolvent matrices and de Branges spaces
5. Chains that are matrizants and chains of associated pairs
6. The bitangential direct input scattering problems
7. Bitangential direct input impedance and spectral problems
8. Inverse monodromy problems
9. Bitangential Krein extension problems
10. Bitangential inverse input scattering problems
11. Bitangential inverse input impedance and spectral problems
12. Dirac-Krein systems
Bibliography
Index.


BITANGENTIAL DIRECT AND INVERSE PROBLEMS FOR SYSTEMS OF INTEGRAL AND DIFFERENTIAL EQUATIONS This largely self-contained treatment surveys, unites and extends some 20 years of research on direct and inverse problems for canonical systems of integral and differential equations and related systems. Five basic inverse problems are studied in which the main part of the given data is either a monodromy matrix; an input scattering matrix; an input impedance matrix; a matrix-valued spectral function; or an asymptotic scattering matrix. The corresponding direct problems are also treated. The book incorporates introductions to the theory of matrix-valued entire functions, reproducing kernel Hilbert spaces of vector-valued entire functions (with special attention to two important spaces introduced by L. de Branges), the theory of J-inner matrix-valued functions and their application to bitangential interpolation and extension problems, which can be used independently for courses and seminars in analysis or for self-study. A number of examples are presented to illustrate the theory.

Encyclopedia of Mathematics and Its Applications This series is devoted to significant topics or themes that have wide application in mathematics or mathematical science and for which a detailed development of the abstract theory is less important than a thorough and concrete exploration of the implications and applications. Books in the Encyclopedia of Mathematics and Its Applications cover their subjects comprehensively. Less important results may be summarized as exercises at the ends of chapters. For technicalities, readers can be referred to the bibliography, which is expected to be comprehensive. As a result, volumes are encyclopedic references or manageable guides to major subjects.

Encyclopedia of Mathematics and its Applications

All the titles listed below can be obtained from good booksellers or from Cambridge University Press. For a complete series listing visit www.cambridge.org/mathematics.

G. Gierz et al. Continuous Lattices and Domains · S. R. Finch Mathematical Constants · Y. Jabri The Mountain Pass Theorem · G. Gasper and M. Rahman Basic Hypergeometric Series, 2nd edn · M. C. Pedicchio and W. Tholen (eds.) Categorical Foundations · M. E. H. Ismail Classical and Quantum Orthogonal Polynomials in One Variable · T. Mora Solving Polynomial Equation Systems II · E. Olivieri and M. Eulália Vares Large Deviations and Metastability · A. Kushner, V. Lychagin and V. Rubtsov Contact Geometry and Nonlinear Differential Equations · L. W. Beineke and R. J. Wilson (eds.) with P. J. Cameron Topics in Algebraic Graph Theory · O. J. Staffans Well-Posed Linear Systems · J. M. Lewis, S. Lakshmivarahan and S. K. Dhall Dynamic Data Assimilation · M. Lothaire Applied Combinatorics on Words · A. Markoe Analytic Tomography · P. A. Martin Multiple Scattering · R. A. Brualdi Combinatorial Matrix Classes · J. M. Borwein and J. D. Vanderwerff Convex Functions · M.-J. Lai and L. L. Schumaker Spline Functions on Triangulations · R. T. Curtis Symmetric Generation of Groups · H. Salzmann et al. The Classical Fields · S. Peszat and J. Zabczyk Stochastic Partial Differential Equations with Lévy Noise · J. Beck Combinatorial Games · L. Barreira and Y. Pesin Nonuniform Hyperbolicity · D. Z. Arov and H. Dym J-Contractive Matrix Valued Functions and Related Topics · R. Glowinski, J.-L. Lions and J. He Exact and Approximate Controllability for Distributed Parameter Systems · A. A. Borovkov and K. A. Borovkov Asymptotic Analysis of Random Walks · M. Deza and M. Dutour Sikirić Geometry of Chemical Graphs · T. Nishiura Absolute Measurable Spaces · M. Prest Purity, Spectra and Localisation · S. Khrushchev Orthogonal Polynomials and Continued Fractions · H. Nagamochi and T. Ibaraki Algorithmic Aspects of Graph Connectivity · F. W. King Hilbert Transforms I · F. W. King Hilbert Transforms II · O. Calin and D.-C. Chang Sub-Riemannian Geometry · M. Grabisch et al. Aggregation Functions · L. W. Beineke and R. J. Wilson (eds.) with J. L. Gross and T. W. Tucker Topics in Topological Graph Theory · J. Berstel, D. Perrin and C. Reutenauer Codes and Automata · T. G. Faticoni Modules over Endomorphism Rings · H. Morimoto Stochastic Control and Mathematical Modeling · G. Schmidt Relational Mathematics · P. Kornerup and D. W. Matula Finite Precision Number Systems and Arithmetic · Y. Crama and P. L. Hammer (eds.) Boolean Models and Methods in Mathematics, Computer Science, and Engineering · V. Berthé and M. Rigo (eds.) Combinatorics, Automata and Number Theory · A. Kristály, V. D. Rădulescu and C. Varga Variational Principles in Mathematical Physics, Geometry, and Economics · J. Berstel and C. Reutenauer Noncommutative Rational Series with Applications · B. Courcelle Graph Structure and Monadic Second-Order Logic · M. Fiedler Matrices and Graphs in Geometry · N. Vakil Real Analysis through Modern Infinitesimals · R. B. Paris Hadamard Expansions and Hyperasymptotic Evaluation · Y. Crama and P. L. Hammer Boolean Functions · A. Arapostathis, V. S. Borkar and M. K. Ghosh Ergodic Control of Diffusion Processes · N. Caspard, B. Leclerc and B. Monjardet Finite Ordered Sets · D. Z. Arov and H. Dym Bitangential Direct and Inverse Problems for Systems of Integral and Differential Equations · G. Dassios Ellipsoidal Harmonics

Encyclopedia of Mathematics and its Applications

Bitangential Direct and Inverse Problems for Systems of Integral and Differential Equations

DAMIR Z. AROV
South-Ukrainian National Pedagogical University, Odessa

HARRY DYM
Weizmann Institute of Science, Rehovot, Israel

Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Mexico City

Cambridge University Press, The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9781107018877

© Damir Z. Arov and Harry Dym 2012

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2012
Printed in the United Kingdom at the University Press, Cambridge

A catalogue record for this publication is available from the British Library

Library of Congress Cataloguing in Publication data
Arov, Damir Z.
Bitangential direct and inverse problems for systems of integral and differential equations / Damir Z. Arov, Harry Dym.
p. cm. – (Encyclopedia of mathematics and its applications ; 145)
Includes bibliographical references.
ISBN 978-1-107-01887-7 (hardback)
1. Inverse problems (Differential equations) 2. Integral equations. I. Dym, Harry, 1934– II. Title.
QA378.5.A76 2012
515′.45 – dc23
2012000304

ISBN 978-1-107-01887-7 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Dedicated to our wives Natasha and Irene, for their continued support and encouragement, and for being ideal companions on the path of life.

CONTENTS

Preface
1  Introduction
   1.1  The matrizant as a chain of entire J-inner mvf's
   1.2  Monodromy matrices of regular systems
   1.3  Canonical integral systems
   1.4  Singular, right regular and right strongly regular matrizants
   1.5  Input scattering matrices
   1.6  Chains of associated pairs of the first kind
   1.7  The bitangential direct input scattering problem
   1.8  Bitangential inverse monodromy and inverse scattering problems
   1.9  The generalized Schur interpolation problem
   1.10 Identifying matrizants as resolvent matrices when J = j_pq
   1.11 Input impedance matrices and spectral functions
   1.12 de Branges spaces
   1.13 Bitangential direct and inverse input impedance and spectral problems
   1.14 Krein extension problems and Dirac systems
   1.15 Direct and inverse problems for Dirac–Krein systems
   1.16 Supplementary notes
2  Canonical systems and related differential equations
   2.1  Canonical integral systems
   2.2  Connections with canonical differential systems
   2.3  The matrizant and its properties
   2.4  Regular case: monodromy matrix
   2.5  Multiplicative integral formulas for matrizants and monodromy matrices; Potapov's theorems
   2.6  The Feller–Krein string equation
   2.7  Differential systems with potential
   2.8  Dirac–Krein systems
   2.9  The Schrödinger equation
   2.10 Supplementary notes
3  Matrix-valued functions in the Nevanlinna class
   3.1  Preliminaries on the Nevanlinna class N^{p×q}
   3.2  Linear fractional transformations and Redheffer transformations
   3.3  The Riesz–Herglotz–Nevanlinna representation
   3.4  The class E ∩ N^{p×q} of entire mvf's in N^{p×q}
   3.5  The class Π^{p×q} of mvf's in N^{p×q} with pseudocontinuations
   3.6  Fourier transforms and Paley–Wiener theorems
   3.7  Entire inner mvf's
   3.8  J-contractive, J-inner and entire J-inner mvf's
   3.9  Associated pairs of the first kind
   3.10 Singular and right (and left) regular J-inner mvf's
   3.11 Linear fractional transformations of S^{p×q} into itself
   3.12 Linear fractional transformations in C^{p×p} and from S^{p×p} into C^{p×p}
   3.13 Associated pairs of the second kind
   3.14 Supplementary notes
4  Interpolation problems, resolvent matrices and de Branges spaces
   4.1  The Nehari problem
   4.2  The generalized Schur interpolation problem
   4.3  Right and left strongly regular J-inner mvf's
   4.4  The generalized Carathéodory interpolation problem
   4.5  Detour on scalar determinate interpolation problems
   4.6  The reproducing kernel Hilbert space H(U)
   4.7  de Branges' inclusion theorems
   4.8  A description of H(W) ∩ L_2^m
   4.9  The classes U_AR(J) and U_BR(J) of A-regular and B-regular J-inner mvf's
   4.10 de Branges matrices E and de Branges spaces B(E)
   4.11 A coisometry from H(A) onto B(E)
   4.12 Formulas for resolvent matrices W ∈ E ∩ U°_rsR(j_pq)
   4.13 Formulas for resolvent matrices A ∈ E ∩ U°_rsR(J_p)
   4.14 Supplementary notes
5  Chains that are matrizants and chains of associated pairs
   5.1  Continuous chains of entire J-inner mvf's
   5.2  Chains that are matrizants
   5.3  Continuity of chains of associated pairs
   5.4  Type functions for chains
   5.5  Supplementary notes
6  The bitangential direct input scattering problem
   6.1  The set S^d_scat(dM) of input scattering matrices
   6.2  Parametrization of S^d_scat(dM) in terms of Redheffer transforms
   6.3  Regular canonical integral systems
   6.4  Limit balls for input scattering matrices
   6.5  The full rank case
   6.6  Rank formulas
   6.7  Regular systems (= full rank) case
   6.8  The limit point case
   6.9  The diagonal case
   6.10 A Weyl–Titchmarsh like characterization for input scattering matrices
   6.11 Supplementary notes
7  Bitangential direct input impedance and spectral problems
   7.1  Input impedance matrices
   7.2  Limit balls for input impedance matrices
   7.3  Formulas for the ranks of semiradii of the limit ball
   7.4  Bounded mass functions and full rank end points
   7.5  The limit point case
   7.6  The Weyl–Titchmarsh characterization of the input impedance
   7.7  Spectral functions for canonical systems
   7.8  Parametrization of the set (H(A))_psf
   7.9  Parametrization of the set (dM)^d_psf for regular canonical integral systems
   7.10 Pseudospectral and spectral functions for singular systems
   7.11 Supplementary notes
8  Inverse monodromy problems
   8.1  Some simple illustrative examples
   8.2  Extremal solutions when J = I_m
   8.3  Solutions for U ∈ U_AR(J) when J = ±I_m
   8.4  Connections with the Livsic model of a Volterra node
   8.5  Conditions for the uniqueness of normalized Hamiltonians
   8.6  Solutions with symplectic and/or real matrizants
   8.7  Entire homogeneous resolvent matrices
   8.8  Solutions with homogeneous matrizants
   8.9  Extremal solutions for J = ±I_m
   8.10 The unicellular case for J = ±I_m
   8.11 Solutions with symmetric type
   8.12 The inverse monodromy problem for 2 × 2 differential systems
   8.13 Examples of 2 × 2 Hamiltonians with constant determinant
   8.14 Supplementary notes
9  Bitangential Krein extension problems
   9.1  Helical extension problems
   9.2  Bitangential helical extension problems
   9.3  The Krein accelerant extension problem
   9.4  Continuous analogs of the Schur extension problem
   9.5  A bitangential generalization of the Schur extension problem
   9.6  The Nehari extension problem for mvf's in the Wiener class
   9.7  Continuous analogs of the Schur extension problem for mvf's in the Wiener class
   9.8  Bitangential Schur extension problems in the Wiener class
   9.9  Supplementary notes
10 Bitangential inverse input scattering problems
   10.1 Existence and uniqueness of solutions
   10.2 Formulas for the solution of the inverse input scattering problem
   10.3 Input scattering matrices in the Wiener class
   10.4 Examples with diagonal mvf's b_1^t and b_2^t
   10.5 Supplementary notes
11 Bitangential inverse input impedance and spectral problems
   11.1 Existence and uniqueness of solutions
   11.2 Formulas for the solutions
   11.3 Input impedance matrices in the Wiener class
   11.4 Examples with diagonal mvf's b_3^t and b_4^t = I_p
   11.5 The bitangential inverse spectral problem
   11.6 An example
   11.7 Supplementary notes
12 Direct and inverse problems for Dirac–Krein systems
   12.1 Factoring Hamiltonians corresponding to DK-systems
   12.2 Matrizants of canonical differential systems corresponding to DK-systems
   12.3 Direct and inverse monodromy problems for DK-systems
   12.4 Direct and inverse input scattering problems for DK-systems
   12.5 Direct and inverse input impedance problems for DK-systems
   12.6 Direct and inverse spectral problems for DK-systems
   12.7 The Krein algorithms for the inverse input scattering and impedance problems
   12.8 The left transform T_A for A ∈ W(j_pq)
   12.9 Asymptotic equivalence matrices
   12.10 Asymptotic scattering matrices (S-matrices)
   12.11 The inverse asymptotic scattering problem
   12.12 More on spectral functions of DK-systems
   12.13 Supplementary notes
References
Symbol index
Index

PREFACE

This book is devoted to direct and inverse problems for canonical integral and differential systems. Five basic problems are considered: those in which an essential part of the data is either (1) a monodromy matrix; or (2) an input scattering matrix; or (3) an input impedance matrix; or (4) a spectral function; or (5) an asymptotic scattering matrix. There is a rich literature on direct and inverse problems for canonical integral and differential systems and for first- and second-order differential equations that can be reduced to such systems. However, the intersection between most of this work and this book is relatively small. The approach used here combines and extends ideas that originate in the fundamental work of M.G. Krein, V.P. Potapov and L. de Branges: M.G. Krein studied direct and inverse problems for Dirac systems (and differential equations that may be reduced to Dirac systems) by identifying the matrizant of the system with a family of resolvent matrices for assorted classes of extension problems that are continuous analogs of the classical Schur and Carathéodory extension problems. In this monograph we present bitangential generalizations of the Krein method, based on identifying the matrizants of canonical systems of equations as resolvent matrices of an ordered family of bitangential generalized interpolation/extension problems that were studied earlier by the authors and are also reviewed in reasonable detail in the text. The exposition rests heavily on the theory of J-inner mvf's (matrix-valued functions) that was developed and applied to a number of problems in analysis (including the inverse monodromy problem for canonical differential systems) by V.P. Potapov in his study of J-contractive mvf's. The parts of this theory that are needed here are taken mainly from our earlier monograph J-Contractive Matrix Valued Functions and Related Topics. Nevertheless, in order to keep this monograph reasonably self-contained, material that is needed for the exposition is repeated as needed, though usually without proof, and often in simpler form if that is adequate for the application at hand.

Extensive use is made of reproducing kernel Hilbert spaces of vector-valued entire functions of the kind introduced by L. de Branges. This theory is developed further and used to provide alternate characterizations of certain classes of entire J-inner mvf's and some useful results on the isometric inclusion of some nested families of reproducing kernel Hilbert spaces.

We have tried to give a reasonable sample of the literature that we felt was most relevant to the topics developed in the monograph. But for every item listed, there are tens if not hundreds of articles that are somewhat connected. To keep the length of the list of references reasonable, we have not referenced articles that deal with canonical systems on the full line, or with non-Hermitian Hamiltonians and Hamiltonians with negative eigenvalues.

The authors gratefully acknowledge and thank the administration of South-Ukrainian National Pedagogical University for authorizing extended leaves of absence that enabled the first author to visit the second; and, finally and most importantly, the Minerva Foundation, the Israel Science Foundation, the Arthur and Rochelle Belfer Institute of Mathematics and Computer Science, and the Visiting Professorship program at the Weizmann Institute for the financial support that made these visits possible and enabled the authors to work together under ideal conditions.

1 Introduction

This book focuses on direct and inverse problems for canonical differential systems of the form

    (d/dx) u(x, λ) = iλ u(x, λ) H(x) J   a.e. on [0, d),   (1.1)

where λ ∈ C, J ∈ C^{m×m} is a signature matrix, i.e., J* = J and J*J = I_m, and H(x) is an m × m mvf (matrix-valued function) that is called the Hamiltonian of the system and is assumed to satisfy the conditions

    H ∈ L_{1,loc}^{m×m}([0, d))   and   H(x) ≥ 0   a.e. on [0, d).   (1.2)

The solution u(x, λ) of (1.1) under condition (1.2) is a locally absolutely continuous k × m mvf that is uniquely defined by specifying an initial condition u(0, λ); it is also the unique locally absolutely continuous solution of the integral equation

    u(x, λ) = u(0, λ) + iλ ∫_0^x u(s, λ) H(s) ds J,   0 ≤ x < d.   (1.3)

The signature matrix J that is usually considered in (1.1) and (1.3) is either ±I_m or ±j_pq, ±J_p and ±𝒥_p, where

    j_pq = [ I_p  0 ; 0  −I_q ],   J_p = [ 0  −I_p ; −I_p  0 ],   𝒥_p = [ 0  −iI_p ; iI_p  0 ]   (1.4)

and p + q = m in the first displayed matrix and 2p = m in the other two. Every signature matrix J ≠ ±I_m is unitarily similar to j_pq, where

    p = rank(I_m + J)   and   q = rank(I_m − J)   with p + q = m.   (1.5)

Thus, ±J_p and ±𝒥_p are unitarily similar to j_p = j_pp:

    J_p = V* j_p V,   where V = (1/√2) [ −I_p  I_p ; I_p  I_p ],   (1.6)

and

    𝒥_p = V_1* J_p V_1,   where V_1 = [ −iI_p  0 ; 0  I_p ].   (1.7)
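The algebra in (1.4)–(1.7) is easy to check numerically. The sketch below (an illustration, not part of the text; it assumes numpy is available and takes p = q = 2, so m = 4) verifies that the three matrices in (1.4) are signature matrices, checks the rank formulas (1.5), and confirms the unitary similarities (1.6) and (1.7).

```python
import numpy as np

p = q = 2
m = p + q
Ip, Iq, Im = np.eye(p), np.eye(q), np.eye(m)
Z = np.zeros((p, p))

# the three standard signature matrices from (1.4); here 2p = m for J_p, script-J_p
j_pq = np.block([[Ip, np.zeros((p, q))], [np.zeros((q, p)), -Iq]])
J_p = np.block([[Z, -Ip], [-Ip, Z]])
calJ_p = np.block([[Z, -1j * Ip], [1j * Ip, Z]])

for J in (j_pq, J_p, calJ_p):
    assert np.allclose(J.conj().T, J)          # J* = J
    assert np.allclose(J.conj().T @ J, Im)     # J*J = I_m
    # rank formulas (1.5): p = rank(I_m + J), q = rank(I_m - J)
    assert np.linalg.matrix_rank(Im + J) == p
    assert np.linalg.matrix_rank(Im - J) == q

# (1.6): J_p = V* j_p V with V = (1/sqrt 2) [ -I_p  I_p ; I_p  I_p ]
j_p = j_pq  # j_p = j_pp since p = q here
V = np.block([[-Ip, Ip], [Ip, Ip]]) / np.sqrt(2)
assert np.allclose(V.conj().T @ j_p @ V, J_p)

# (1.7): script-J_p = V_1* J_p V_1 with V_1 = diag(-i I_p, I_p)
V1 = np.block([[-1j * Ip, Z], [Z, Ip]])
assert np.allclose(V1.conj().T @ J_p @ V1, calJ_p)
```

Running the block silently is the check: every identity above holds to machine precision.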

Consequently, a canonical system (1.1) with arbitrary signature matrix J ≠ ±I_m may (and will) be reduced to a corresponding differential system with j_pq in place of J, or with J_p or 𝒥_p if q = p. The choice of J depends upon the problem under consideration. Thus, for example, J = j_pq is appropriate for direct and inverse scattering problems, whereas the choice J = J_p (or J = 𝒥_p) is appropriate for direct and inverse spectral and input impedance (alias Weyl–Titchmarsh function) problems.

A number of different second-order differential equations and systems of first-order differential equations may be reduced to the canonical differential system (1.1): the Feller–Krein string equation, the Dirac–Krein differential system, the Schrödinger equation with a matrix-valued potential of the form q(x) = v′(x) ± v(x)² and Sturm–Liouville equations with appropriate restrictions on the coefficients.

The canonical system (1.1) arises (at least formally) by applying the Fourier transform

    u(x, λ) = ŷ(x, λ) = ∫_0^∞ e^{iλt} y(x, t) dt

to the solution y(x, t) of the Cauchy problem

    ∂y/∂x (x, t) = − ∂y/∂t (x, t) H(x) J,   0 ≤ x ≤ d, 0 ≤ t < ∞,
    y(x, 0) = 0.   (1.8)

The m × m matrix-valued solution U_x(λ) of the system (1.1) that satisfies the initial condition U_0(λ) = I_m is called the matrizant of the system. It may be interpreted as the transfer function from the input data y(0, t) to the output y(x, t) on the interval [0, x] of the system in which the evolution of the data is described by equation (1.8), since

    ŷ(x, λ) = ŷ(0, λ) U_x(λ)   for 0 ≤ x < d.   (1.9)

1.1 The matrizant as a chain of entire J-inner mvf's

The matrizant U_x(λ) is an entire mvf in the variable λ for each x ∈ [0, d) such that

    (d/dx) [U_x(λ) J U_x(ω)*] = i(λ − ω̄) U_x(λ) H(x) U_x(ω)*   a.e. on the interval (0, d),

which in turn implies that

    U_{x₂}(λ) J U_{x₂}(ω)* − U_{x₁}(λ) J U_{x₁}(ω)* = i(λ − ω̄) ∫_{x₁}^{x₂} U_x(λ) H(x) U_x(ω)* dx.   (1.10)

Thus, as U_0(λ) = I_m, the kernel

    K_ω^U(λ) = [J − U(λ) J U(ω)*] / [−2πi(λ − ω̄)]   if λ ≠ ω̄,
    K_ω^U(λ) = (1/2πi) (∂U/∂λ)(ω̄) J U(ω)*           if λ = ω̄,   (1.11)

with U(λ) = U_x(λ) is positive on C × C in the sense that

    Σ_{i,j=1}^n v_i* K_{ω_j}^U(ω_i) v_j ≥ 0   (1.12)

for every choice of points ω_1, …, ω_n in C and vectors v_1, …, v_n in C^m, since

    Σ_{i,j=1}^n v_i* K_{ω_j}^U(ω_i) v_j = (1/2π) ∫_0^x ( Σ_{i=1}^n U_s(ω_i)* v_i )* H(s) ( Σ_{j=1}^n U_s(ω_j)* v_j ) ds ≥ 0,

by formula (1.10). Therefore, by the matrix version of a theorem of Aronszajn in [Arn50], there is an RKHS (reproducing kernel Hilbert space) H(U) with RK (reproducing kernel) K_ω^U(λ) defined by formula (1.11); see Section 4.6. Moreover, U_x(λ) belongs to the class E ∩ U(J) of entire mvf's that are J-inner with respect to the open upper half plane C_+:

    U_x(λ)* J U_x(λ) ≤ J   for λ ∈ C_+   (1.13)

and

    U_x(λ)* J U_x(λ) = J   for λ ∈ R   (1.14)

for every x ∈ [0, d), and, as follows easily from (1.10) with x₂ = x and x₁ = 0 (since U_0(λ) ≡ I_m),

    U_x^#(λ) J U_x(λ) = J   for every point λ ∈ C,   (1.15)

in which f^#(λ) = f(λ̄)* for any mvf f that is defined at λ̄. The matrizant U_x(λ), 0 ≤ x < d, is nondecreasing with respect to x in the sense that

    U_{x₁}^{−1} U_{x₂} ∈ E ∩ U(J)   when 0 ≤ x₁ ≤ x₂ < d.   (1.16)
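The properties (1.11)–(1.15) can be probed numerically. The sketch below is purely illustrative and rests on two assumptions that are not in the text: a constant 2 × 2 Hamiltonian H ≥ 0 is invented for the example (so the matrizant is the matrix exponential U_x(λ) = exp(iλxHJ)), and numpy/scipy are assumed available. With J = j_11, it checks J-innerness on C_+, J-unitarity on R, the identity (1.15), and the positivity (1.12) of the kernel (1.11) on a sample grid of points.

```python
import numpy as np
from scipy.linalg import expm

# toy canonical differential system with constant Hamiltonian H >= 0, J = j_11
J = np.diag([1.0, -1.0])
H = np.array([[2.0, 1.0], [1.0, 1.0]])   # positive definite, so (1.2) holds

def U(x, lam):
    """Matrizant of u' = i*lam*u*H*J with U_0 = I_m: here U_x = expm(i*lam*x*H*J)."""
    return expm(1j * lam * x * H @ J)

x = 0.7
# (1.14): U_x(lam)* J U_x(lam) = J for real lam
Ur = U(x, 1.3)
assert np.allclose(Ur.conj().T @ J @ Ur, J)

# (1.13): J - U_x(lam)* J U_x(lam) >= 0 for lam in C_+
Uc = U(x, 2.0 + 1.0j)
gap = J - Uc.conj().T @ J @ Uc
assert np.min(np.linalg.eigvalsh(gap)) > -1e-10

# (1.15): U^#(lam) J U(lam) = J everywhere, with f#(lam) = f(conj(lam))*
lam = -0.4 + 0.8j
assert np.allclose(U(x, np.conj(lam)).conj().T @ J @ U(x, lam), J)

# (1.11)-(1.12): the reproducing kernel is positive on C x C
def K(lam, omega):
    rho = -2j * np.pi * (lam - np.conj(omega))
    return (J - U(x, lam) @ J @ U(x, omega).conj().T) / rho

points = [0.5 + 0.3j, -1.0 + 1.0j, 2.0 + 0.1j, 1.0j]
G = np.block([[K(wi, wj) for wj in points] for wi in points])
assert np.allclose(G, G.conj().T)            # the block Gram matrix is Hermitian
assert np.min(np.linalg.eigvalsh(G)) > -1e-10   # and positive semidefinite
```

For a nonconstant Hamiltonian the same checks go through once U_x is computed by a product-integration scheme in place of the single matrix exponential.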

It is also locally absolutely continuous in [0, d) with respect to x and normalized by the conditions

    U_x(0) = I_m   for 0 ≤ x < d.

    u_k(t, λ) = iλ ∫_0^t u_{k−1}(s, λ) dM(s) J   for k > 1,
    u_1(t, λ) = iλ y° M(t) J   for 0 ≤ t < d,

and ψ(t) is a continuous nondecreasing scalar-valued function on [0, d). Our first main objective is to establish the bound

    ‖u_k(t, λ)‖ ≤ (|λ|^k ψ(t)^k / k!) ‖y°‖   for 0 ≤ t < d and k ≥ 1.   (2.6)

The proof is by induction. The case k = 1 is self-evident. Suppose next that the bound is valid for k = n − 1 and let 0 = t_0 < t_1 < ⋯ < t_N ≤ t < d. Then the inequalities

    ‖ Σ_{j=1}^N u_{n−1}(t_{j−1}, λ) [M(t_j) − M(t_{j−1})] J ‖
        ≤ Σ_{j=1}^N ‖u_{n−1}(t_{j−1}, λ)‖ ‖M(t_j) − M(t_{j−1})‖ ‖J‖
        ≤ (|λ|^{n−1} / (n − 1)!) Σ_{j=1}^N ψ(t_{j−1})^{n−1} [ψ(t_j) − ψ(t_{j−1})] ‖y°‖
        ≤ (|λ|^{n−1} / (n − 1)!) ‖y°‖ ∫_0^t ψ(s)^{n−1} dψ(s)
        = (|λ|^{n−1} ψ(t)^n / n!) ‖y°‖

imply that

    ‖ ∫_0^t u_{n−1}(s, λ) dM(s) J ‖ ≤ (|λ|^{n−1} ψ(t)^n / n!) ‖y°‖   for 0 ≤ t < d,

which, together with the recursion for u_n(t, λ), justifies the bound (2.6) for k = n also, and hence by induction for every integer k ≥ 1.

Moreover, since the mvf's u_n(t, λ) are continuous in t on [0, d) and polynomials in λ and, by the preceding estimates, Σ_{n=1}^∞ u_n(t, λ) converges uniformly for t ∈ [0, d_1] and |λ| ≤ R when d_1 < d and R < ∞, it follows that

    y_n(t, λ) = y° + Σ_{k=1}^n u_k(t, λ) = y° + iλ ∫_0^t y_{n−1}(s, λ) dM(s) J

converges uniformly to a solution y(t, λ) of the integral equation (2.1) that is continuous in the variable t on the interval [0, d) for each fixed λ ∈ C and entire in the variable λ for each fixed t ∈ [0, d). The bound

    ‖y_n(t, λ)‖ ≤ ‖y°‖ + Σ_{k=1}^n ‖y°‖ |λ|^k ψ(t)^k / k! ≤ ‖y°‖ exp{|λ| ψ(t)}

is used to justify bringing the limit inside the integral and also serves to justify the stated inequality (2.2).

2. Uniqueness: Let ỹ(t, λ) be any other continuous solution of the equation (2.1) on the interval [0, d) with initial condition ỹ(0, λ) = y°. Then the mvf z(t, λ) = y(t, λ) − ỹ(t, λ) is a continuous solution of the equation

    z(t, λ) = iλ ∫_0^t z(s, λ) dM(s) J   for 0 ≤ t < d.

Fix t < d, choose λ so that |λ| ψ(t) < 1 and let δ_t = max{‖z(s, λ)‖ : s ∈ [0, t]}. Then, since

    ‖z(s, λ)‖ ≤ |λ| δ_s ψ(s) ≤ |λ| δ_t ψ(t),

it follows that (1 − |λ| ψ(t)) δ_t ≤ 0. Therefore, δ_t = 0. But this implies that z_t(λ) = 0 in a neighborhood of zero, and hence, as z_t is entire, z_t(λ) = 0 in the whole complex plane. Thus, the solution y(t, λ) that was obtained by successive approximation in the earlier part of the proof is in fact the only solution of the canonical integral system (2.1) that is continuous in the variable t on the interval [0, d) for each fixed λ ∈ C.

3. The bound (2.2): The bound (2.2) was established for the solution that was obtained by successive approximation in Part 1. However, in view of Part 2, the integral equation (2.1) has only one continuous solution and hence it is subject to this bound.
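The successive-approximation scheme of Part 1 is directly computable. The sketch below is an illustration under invented data, not the book's construction: it assumes a smooth mass function M(t) = tH with a constant H ≥ 0 (so ψ(t) = t·trace H and the exact solution y° exp(iλtHJ) is available for comparison), discretizes the recursion on a grid, and checks the bound (2.6) term by term; numpy/scipy are assumed available.

```python
import math
import numpy as np
from scipy.linalg import expm

# successive approximations for y(t) = y0 + i*lam * int_0^t y(s) dM(s) J
# with M(t) = t*H, so dM(s) = H ds and psi(t) = trace M(t) = t*trace(H)
J = np.diag([1.0, -1.0])
H = np.array([[1.0, 0.5], [0.5, 0.5]])      # H >= 0, so (2.9) holds
y0 = np.array([[1.0, 2.0]])                 # 1 x m initial row vector y(0, lam)
lam = 0.9 + 0.4j

T, n = 1.0, 2000
dt = T / n
t = np.linspace(0.0, T, n + 1)
psi = np.trace(H) * t

def next_term(u_prev):
    """u_k(t) = i*lam * int_0^t u_{k-1}(s) H J ds, via a cumulative trapezoid rule."""
    g = u_prev @ H @ J                      # integrand at the grid points
    integral = np.concatenate(
        [np.zeros((1, 1, 2)), np.cumsum((g[1:] + g[:-1]) / 2, axis=0) * dt])
    return 1j * lam * integral

u = np.repeat(y0[None, :, :], n + 1, axis=0)   # u_0(t) = y0
y = u.copy()
for k in range(1, 12):
    u = next_term(u)
    # the basic estimate (2.6): ||u_k(t)|| <= |lam|^k psi(t)^k / k! * ||y0||
    bound = np.abs(lam) ** k * psi ** k / math.factorial(k) * np.linalg.norm(y0)
    assert np.all(np.linalg.norm(u, axis=(1, 2)) <= bound * 1.001 + 1e-12)
    y = y + u

# the partial sums converge to the exact solution y0 @ expm(i*lam*T*H*J)
assert np.allclose(y[-1], y0 @ expm(1j * lam * T * H @ J), atol=1e-5)
```

The small 1.001 factor in the assertion only absorbs discretization error; the analytic bound itself has slack because ‖H‖ ≤ trace H for H ≥ 0.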

2.2 Connections with canonical differential systems

If the mass function M(t), 0 ≤ t < d, in the canonical integral system (2.1) is locally absolutely continuous on the interval [0, d), and M′(t) = H(t) a.e. on [0, d), then the system (2.1) can be written as

    y(t, λ) = y(0, λ) + iλ ∫_0^t y(s, λ) H(s) ds J   for 0 ≤ t < d.   (2.7)

Thus, in this case the unique continuous solution y(t, λ) of system (2.1) is locally absolutely continuous in t on the interval [0, d) for each fixed λ ∈ C and it is a solution of the canonical differential system

    y′(t, λ) = iλ y(t, λ) H(t) J   a.e. on 0 ≤ t < d,   (2.8)

where the Hamiltonian H(t) satisfies the conditions

    H(t) ≥ 0 a.e. on [0, d)   and   H ∈ L_{1,loc}^{m×m}([0, d)).   (2.9)

Conversely, if y(t, λ), 0 ≤ t < d, is a locally absolutely continuous solution of a canonical differential system of the form (2.8) in which the mvf H(t) satisfies the conditions (2.9), then y(t, λ) is also a continuous solution of the canonical integral system (2.1) with locally absolutely continuous mass function M(t) = ∫_0^t H(s) ds.

A canonical integral system (2.1) may always be reduced to a corresponding canonical differential system even when the mass function M(t), 0 ≤ t < d, is not locally absolutely continuous:

Lemma 2.2  Let M(t) be a continuous nondecreasing m × m mvf on the interval [0, d] with M(0) = 0 and let

    ψ(t) = trace M(t).   (2.10)

Then ψ(t) is a continuous nondecreasing function on [0, d] with ψ(0) = 0 and M(t) is absolutely continuous with respect to ψ(t) on the interval 0 ≤ t ≤ d.

Proof  The claim follows from the bound

    ‖M(t_2) − M(t_1)‖ ≤ trace{M(t_2) − M(t_1)} = ψ(t_2) − ψ(t_1),   (2.11)

which is valid for 0 ≤ t_1 ≤ t_2 < d.
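The inequality (2.11) rests on the elementary fact that the operator norm of a positive semidefinite matrix is at most its trace (the norm is the largest eigenvalue, the trace is the sum of all the nonnegative eigenvalues). A quick random sanity check, purely illustrative:

```python
import numpy as np

# ||P|| <= trace P for P >= 0; here P plays the role of an increment
# M(t2) - M(t1) of a nondecreasing mass function
rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((4, 4))
    P = A @ A.T                              # a random positive semidefinite matrix
    assert np.linalg.norm(P, 2) <= np.trace(P) + 1e-10
```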

If ψ(t) is a strictly increasing continuous function on [0, d) with ψ(0) = 0 and

    ℓ = lim_{t↑d} ψ(t),

then there exists a continuous strictly increasing function φ(x) on [0, ℓ) such that φ(ψ(t)) = t for 0 ≤ t < d and ψ(φ(x)) = x for 0 ≤ x < ℓ. Let y(t, λ), 0 ≤ t < d, be a continuous solution of the system (2.1) and

    u(x, λ) = y(φ(x), λ),   0 ≤ x < ℓ.

Then u(x, λ) is a continuous solution of the canonical integral system with mass function

    M̃(x) = M(φ(x))   (2.12)

on the interval [0, ℓ):

    u(x, λ) = u(0, λ) + iλ ∫_0^x u(s, λ) dM̃(s) J,   0 ≤ x < ℓ.   (2.13)

Moreover, since (2.11) holds, the mvf M̃(x) is locally absolutely continuous on the interval [0, ℓ) and M̃(0) = M(φ(0)) = M(0) = 0. Consequently,

    M̃(x) = ∫_0^x H(s) ds,   0 ≤ x < ℓ,   where H(x) = M̃′(x) a.e. on [0, ℓ),

and hence u(x, λ) is a locally absolutely continuous solution of the canonical differential system

    u′(x, λ) = iλ u(x, λ) H(x) J   a.e. on [0, ℓ),   (2.14)

where the m × m mvf H(x), the Hamiltonian of the system, satisfies the conditions

    H ∈ L_{1,loc}^{m×m}([0, ℓ))   and   H(x) ≥ 0   a.e. on [0, ℓ).   (2.15)

Moreover, since

    trace M̃(x) = trace M(φ(x)) = ψ(φ(x)) = x,   (2.16)

it follows that

    trace H(x) = 1   a.e. on the interval [0, ℓ).

Conversely, if u(x, λ) is a locally absolutely continuous solution of the system (2.14), then it is a solution of the integral system (2.13) with mass function M̃(x) defined in (2.12). Consequently, the function y(t, λ) = u(ψ(t), λ), 0 ≤ t < d, is a continuous solution of the canonical integral system (2.1).
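The change of variable behind (2.12)–(2.16) can be carried out numerically. The sketch below is illustrative and uses an invented sample Hamiltonian H(t); it computes ψ(t) = trace M(t) by quadrature, inverts it by linear interpolation, and checks that the reparametrized Hamiltonian H̃(x) = H(φ(x)) / trace H(φ(x)) has unit trace, as in (2.16); numpy is assumed available.

```python
import numpy as np

# reparametrize a canonical system so that trace H(x) = 1 a.e.:
# x = psi(t) = trace M(t), phi = psi^{-1}, tilde H(x) = H(phi(x)) * phi'(x)
def H(t):
    return np.array([[1.0 + t, 0.3], [0.3, 1.0]])   # sample Hamiltonian, H(t) > 0

d, n = 2.0, 4000
t = np.linspace(0.0, d, n + 1)
trH = 2.0 + t                                       # trace H(t) for this example
psi = np.concatenate([[0.0], np.cumsum((trH[1:] + trH[:-1]) / 2) * (d / n)])

x = np.linspace(0.0, psi[-1], n + 1)
phi = np.interp(x, psi, t)                          # numerical inverse of psi
assert np.allclose(np.interp(phi, t, psi), x, atol=1e-8)   # psi(phi(x)) = x

# chain rule: phi'(x) = 1 / trace H(phi(x)), so tilde H has unit trace
H_tilde = np.array([H(s) / np.trace(H(s)) for s in phi])
assert np.allclose([np.trace(h) for h in H_tilde], 1.0)
```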

C^{p×p}

(the Carathéodory class) if q = p and it is holomorphic in C_+ and

    (Rf)(λ) = (f(λ) + f(λ)*)/2 ≥ 0

for every point λ ∈ C_+;

N_+^{p×q} (the Smirnov class) and the subclass N_out^{p×q} of outer mvf's in N_+^{p×q} will be defined in (3.2) and (3.3), respectively;

N^{p×q} (the Nevanlinna class of mvf's with bounded Nevanlinna characteristic) if it can be expressed in the form f = h^{−1} g, where g ∈ H_∞^{p×q} and h ∈ H_∞ (= H_∞^{1×1});

E^{p×q} for the class of entire p × q mvf's.

For each class of p × q mvf's X^{p×q} we shall use the symbols

    X instead of X^{1×1} and X^p instead of X^{p×1},
    X_const^{p×q} for the set of mvf's in X^{p×q} that are constant   (3.1)

and

    E ∩ X^{p×q} for the class of entire mvf's in X^{p×q}.

The notation listed below will be used extensively.

    ⟨g, h⟩_st = ∫_{−∞}^∞ trace h(μ)* g(μ) dμ for the standard inner product in L_2^{p×q};
    Π_+ to denote the orthogonal projection from the Hilbert space L_2^{p×q} onto the closed subspace H_2^{p×q};
    Π_− = I − Π_+ for the complementary projection;
    Π_L denotes the orthogonal projection onto a closed subspace L of a Hilbert space;
    f^#(λ) = f(λ̄)*,  f^∼(λ) = f(−λ̄)*;
    ρ_ω(λ) = −2πi(λ − ω̄);
    ⋁_{α∈A} {L_α} for the closed linear span of subsets L_α in a Hilbert space X;
    e_t = e_t(λ) = exp(itλ);
    ln⁺|a| = ln|a| if |a| ≥ 1 and 0 if |a| < 1.

3.1 Preliminaries on the Nevanlinna class N p×q The class N p×q is closed under addition, and, when meaningful, multiplication and inversion. Moreover, even though N p×q is listed last, it is the largest class in the classes of meromorphic p × q mvf’s in C+ that are listed above. The well-known theorem of Fatou on the existence of boundary values for functions in H∞ guarantees that every mvf f ∈ N p×q has nontangential boundary values f (μ) a.e. on R. In particular, f ∈ N p×q =⇒ f (μ) = lim f (μ + iν) ν↓0

a.e. on R.

Moreover, this f(μ) may be extended to a measurable p × q mvf on the full axis R, and f is uniquely defined by its boundary values on a subset of R of positive Lebesgue measure; see, e.g., theorem 3.4 in [ArD08b] and the references cited there. In view of this, a mvf f ∈ N^{p×q} will often be identified with its boundary values. A mvf f ∈ S^{p×q} is said to belong to the class
(1) S_in^{p×q} of inner mvf's if f(μ)* f(μ) = I_q a.e. on R;
(2) S_*in^{p×q} of *-inner mvf's if f(μ) f(μ)* = I_p a.e. on R;
(3) S_out^{p×q} of outer mvf's if {f h : h ∈ H_2^q} is dense in H_2^p;
(4) S_*out^{p×q} of *-outer mvf's if f^∼ is outer.
It is also readily checked that
S_in^{p×q} ≠ ∅ ⟺ p ≥ q   and   S_out^{p×q} ≠ ∅ ⟺ p ≤ q


and
f ∈ S_in^{p×q} ⟺ f^∼ ∈ S_*in^{q×p}.
However, if, as in most of this monograph, attention is restricted to square inner and outer mvf's in S^{p×p}, then it is not necessary to consider the classes S_*in^{p×p} and S_*out^{p×p} separately, since
S_*in^{p×p} = S_in^{p×p}   and   S_*out^{p×p} = S_out^{p×p}.
In keeping with the convention (3.1),
S_in = S_in^{1×1}   and   S_out = S_out^{1×1}.

The Smirnov class N_+^{p×q} and the subclass N_out^{p×q} of outer p × q mvf's in N_+^{p×q} may be defined by the formulas
N_+^{p×q} = {h^{-1} g : g ∈ S^{p×q} and h ∈ S_out}   (3.2)
and
N_out^{p×q} = {h^{-1} g : g ∈ S_out^{p×q} and h ∈ S_out}.   (3.3)

The Smirnov maximum principle
We turn next to the Smirnov maximum principle. In the formulation, f ∈ N_+^{p×q} is identified with its boundary values.
Theorem 3.1 If f ∈ N_+^{p×q}, then
sup_{ν>0} ∫_{-∞}^{∞} (trace{f(μ + iν)* f(μ + iν)})^{r/2} dμ = ∫_{-∞}^{∞} (trace{f(μ)* f(μ)})^{r/2} dμ for 1 ≤ r < ∞,
sup_{λ∈C+} trace{f(λ)* f(λ)} = ess sup_{μ∈R} trace{f(μ)* f(μ)},
sup_{ν>0} ∫_{-∞}^{∞} ‖f(μ + iν)‖^r dμ = ∫_{-∞}^{∞} ‖f(μ)‖^r dμ for 1 ≤ r < ∞
and
sup_{λ∈C+} ‖f(λ)‖ = ess sup_{μ∈R} ‖f(μ)‖,
where in these equalities both sides can be infinite. In particular,
N_+^{p×q} ∩ L_r^{p×q}(R) = H_r^{p×q} for 1 ≤ r ≤ ∞.   (3.4)


Proof See theorem A on p. 88 of [RR85].

In view of the inclusion H_r^{p×q} ⊂ N^{p×q} for 1 ≤ r ≤ ∞, every f ∈ H_r^{p×q}, 1 ≤ r ≤ ∞, has nontangential boundary values. Moreover, the norm in H_r^{p×q} can be computed in terms of boundary values only, and, by Theorem 3.1, the corresponding spaces H_r^{p×q} can be identified as closed subspaces of the Lebesgue spaces L_r^{p×q} on the line, for 1 ≤ r ≤ ∞. In particular, H_2^{p×q} is isometrically included in the Hilbert space L_2^{p×q} with inner product
⟨f, g⟩_st = ∫_{-∞}^{∞} trace{g(μ)* f(μ)} dμ.   (3.5)

Analogous classes will be considered for the open lower half plane C−. In particular, a p × q mvf f is said to belong to the Nevanlinna class with respect to C− if f^# ∈ N^{q×p}; the Smirnov class N_-^{p×q} if f^# ∈ N_+^{q×p}. Every mvf f in the Nevanlinna class with respect to C− also has nontangential boundary values. Thus,
f(μ) = lim_{ν↓0} f(μ − iν) a.e. on R.
Moreover, the orthogonal complement (H_2^{p×q})^⊥ of H_2^{p×q} in L_2^{p×q} will be identified as the boundary values of the set of p × q mvf's f that are holomorphic in C− and meet the constraint
‖f‖_2^2 = sup_{ν>0} ∫_{-∞}^{∞} trace{f(μ − iν)* f(μ − iν)} dμ < ∞.
Thus, f ∈ (H_2^{p×q})^⊥ ⟺ f^# ∈ H_2^{q×p}. Therefore,
L_2^{p×q} = H_2^{p×q} ⊕ (H_2^{p×q})^⊥   and   L_2^p = H_2^p ⊕ (H_2^p)^⊥ if q = 1.

The Cauchy formula
f(ω) = (1/2πi) ∫_{-∞}^{∞} f(μ)/(μ − ω) dμ for ω ∈ C+   (3.6)
and the Poisson formula
f(ω) = (Iω/π) ∫_{-∞}^{∞} f(μ)/|μ − ω|^2 dμ for ω ∈ C+   (3.7)
(in which Iω denotes the imaginary part of ω) are valid for every f ∈ H_r^{p×q} for 1 ≤ r < ∞. Formula (3.7) is also valid for f ∈ H_∞^{p×q}. This follows from theorems 11.2 and 11.8 in [Du70], since it suffices to verify the asserted formulas for each entry in the mvf f.
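As a quick numerical sanity check of the Poisson formula (3.7) — an editorial illustration, not part of the original text — one can take the scalar H_2 function f(λ) = 1/(λ + i) and ω = i, for which f(ω) = −i/2; the grid and truncation below are arbitrary choices:

```python
import numpy as np

# Boundary values of the scalar H2 function f(lambda) = 1/(lambda + i)
mu = np.linspace(-2000.0, 2000.0, 2_000_001)
dmu = mu[1] - mu[0]
f_boundary = 1.0 / (mu + 1j)

# Poisson formula (3.7): f(omega) = (Im omega / pi) * int f(mu)/|mu - omega|^2 dmu
omega = 1j
poisson = (omega.imag / np.pi) * np.sum(f_boundary / np.abs(mu - omega) ** 2) * dmu

exact = 1.0 / (omega + 1j)    # f(i) = 1/(2i) = -i/2
print(abs(poisson - exact))   # small (truncation + discretization error)
```

The integrand decays like |μ|^{-3}, so the truncation to [−2000, 2000] contributes an error of order 10^{-7}.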


Inner–outer factorization
Let f(λ) be a mvf that is meromorphic in some open nonempty subset Ω of C (not necessarily connected). Then h_f denotes the set of points ω ∈ Ω at which f is holomorphic, and
h_f^+ = h_f ∩ C+,   h_f^- = h_f ∩ C−   and   h_f^0 = h_f ∩ R.
The rank of a meromorphic p × q mvf f(λ) in C+ is defined by the formula
rank f = max{rank f(λ) : λ ∈ h_f^+}.
The following implications will be useful:
f ∈ N_out^{p×q} ⟹ rank f = p   and   f ∈ S_in^{p×q} ⟹ rank f = q.   (3.8)

Theorem 3.2 Every mvf f ∈ N_+^{p×q} that is not identically equal to zero admits an inner–outer factorization of the form
f(λ) = b_L(λ) φ_L(λ),   where b_L ∈ S_in^{p×r} and φ_L ∈ N_out^{r×q},   (3.9)
and a *-outer–*-inner factorization of the form
f(λ) = φ_R(λ) b_R(λ),   where φ_R ∈ N_*out^{p×r} and b_R ∈ S_*in^{r×q}.   (3.10)
In both of these factorizations,
r = rank f.   (3.11)
The factors in each of these factorizations are defined uniquely up to replacement of
b_L and φ_L   by   b_L u and u* φ_L
and
φ_R and b_R   by   φ_R v and v* b_R,
where u and v are constant unitary r × r matrices. Moreover,
f ∈ H_t^{p×q} ⟺ φ_L ∈ H_t^{r×q} ⟺ φ_R ∈ H_t^{p×r},   1 ≤ t ≤ ∞,   (3.12)
ρ_i^{-1} f ∈ H_t^{p×q} ⟺ ρ_i^{-1} φ_L ∈ H_t^{r×q} ⟺ ρ_i^{-1} φ_R ∈ H_t^{p×r},   1 ≤ t ≤ ∞,   (3.13)
and
f ∈ S^{p×q} ⟺ φ_L ∈ S_out^{r×q} ⟺ φ_R ∈ S_*out^{p×r}.   (3.14)
Proof See theorem 3.71 in [ArD08b] for the first part. The nontrivial directions in the last three sets of equivalences follow from Theorem 3.1, the Smirnov maximum principle.
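As a scalar illustration of (3.9) — an editorial example, not part of the original text — one may take f(λ) = e^{iλ}/(λ + i) in N_+ (indeed in H_2); the exponential is inner, and the Cauchy-kernel factor is a classical example of an outer function:

```latex
% Scalar inner-outer factorization, r = rank f = 1:
f(\lambda) \;=\; \frac{e^{i\lambda}}{\lambda+i}
          \;=\; \underbrace{e^{i\lambda}}_{b_L\,\in\,S_{\mathrm{in}}}\;
                \underbrace{\frac{1}{\lambda+i}}_{\varphi_L\,\in\,N_{\mathrm{out}}},
\qquad |e^{i\mu}| = 1 \ \text{a.e. on } \mathbb{R}.
```

Replacing b_L by b_L u and φ_L by ū φ_L with a unimodular constant u leaves f unchanged, in accordance with the uniqueness statement of Theorem 3.2.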


The Beurling–Lax theorem
A number of results connected with the Beurling–Lax theorem are now presented without proof; proofs may be found on pp. 108–111 of [ArD08b] and the references cited there.
Theorem 3.3 (Beurling–Lax) Let L be a proper closed nonzero subspace of H_2^p such that
e_t f ∈ L for every f ∈ L and every t ≥ 0.
Then there exists a positive integer r ≤ p and an inner mvf b ∈ S_in^{p×r} such that
L = b H_2^r.   (3.15)
Moreover, this mvf b(λ) is uniquely defined by L up to a unitary constant right multiplier.
Remark 3.4 For many of the applications in this monograph, the following reformulation of Theorem 3.3 is useful: Let M be a proper closed nonzero subspace of H_2^p such that
Π_+ e_{-t} f ∈ M for every f ∈ M and every t ≥ 0.
Then there exists a positive integer r ≤ p and an inner mvf b ∈ S_in^{p×r} such that
M = H(b) := H_2^p ⊖ b H_2^r.   (3.16)
Moreover, this mvf b(λ) is uniquely defined by M up to a unitary constant right multiplier.
The connection between this formulation and Theorem 3.3 rests on the observation that if f ∈ M and g ∈ M^⊥, the orthogonal complement of M in H_2^p, then
⟨e_t g, f⟩_st = ⟨g, Π_+ e_{-t} f⟩_st for every t ≥ 0.

Thus, a closed subspace M of H_2^p meets the conditions formulated just above if and only if M^⊥ meets the conditions formulated in Theorem 3.3.
Lemma 3.5 Let b_α(λ) = (λ − α)/(λ − ᾱ) for α ∈ C+ and let L be a proper closed subspace of H_2^p. Then the following assertions are equivalent:
(1) e_t L ⊆ L for every t ≥ 0.
(2) b_α L ⊆ L for at least one point α ∈ C+.
(3) b_α L ⊆ L for every point α ∈ C+.


The generalized backwards shift operator R_α is defined for vvf's and mvf's by the rule
(R_α f)(λ) = (f(λ) − f(α))/(λ − α) if λ ≠ α,   and   (R_α f)(α) = f′(α),   (3.17)
for every λ, α ∈ h_f. In order to keep the typography simple, the space in which R_α acts will not be indicated in the notation.
Lemma 3.6 If L is a proper closed subspace of H_2^p, then the following assertions are equivalent:
(1) Π_+ e_{-t} f ∈ L for every f ∈ L and every t ≥ 0.
(2) R_α L ⊆ L for at least one point α ∈ C+.
(3) R_α L ⊆ L for every point α ∈ C+.
(4) There exists a positive integer r ≤ p and an essentially unique mvf b ∈ S_in^{p×r} such that L = H_2^p ⊖ b H_2^r.

Lemma 3.7 If b H_2^r ⊇ b_1 H_2^p, where b ∈ S_in^{p×r} and b_1 ∈ S_in^{p×p}, then r = p and b^{-1} b_1 ∈ S_in^{p×p}.
Corollary 3.8 If b H_2^r ⊇ β H_2^p, where b ∈ S_in^{p×r} and β ∈ S_in, then r = p and β b^{-1} ∈ S_in^{p×p}.
Theorem 3.9 If b_α ∈ S_in^{p×p} for α ∈ A, then:
I There exists an essentially unique mvf b ∈ S_in^{p×p} such that
(1) b^{-1} b_α ∈ S_in^{p×p} for every α ∈ A.
(2) If b̃ ∈ S_in^{p×p} and b̃^{-1} b_α ∈ S_in^{p×p} for every α ∈ A, then b̃^{-1} b ∈ S_in^{p×p}.
Moreover, if b_α is entire for some α ∈ A, then b is entire.
II If
b_α^{-1} b° ∈ S_in^{p×p} for some b° ∈ S_in^{p×p} and every α ∈ A,   (3.18)
then there exists an essentially unique mvf b ∈ S_in^{p×p} such that
(1) b_α^{-1} b ∈ S_in^{p×p} for every α ∈ A.
(2) b^{-1} b° ∈ S_in^{p×p} for every b° ∈ S_in^{p×p} for which (3.18) holds.
Moreover,
b H_2^p = ⋁_{α∈A} b_α H_2^p in setting I   and   b H_2^p = ⋂_{α∈A} b_α H_2^p in setting II.
Furthermore, if b_α is entire for every α ∈ A, then b is entire.
A mvf b that satisfies the two conditions in I (resp., II) is called a greatest common left divisor (resp., least common right multiple) of the family {b_α : α ∈ A}.


Corollary 3.10 If b_1 and b_2 belong to S_in^{p×p}, then there exists an essentially unique b ∈ S_in^{p×p} such that
b H_2^p = b_1 H_2^p ⋁ b_2 H_2^p.
Moreover, if b ∈ S_in^{p×p}, then
H(b) = H(b_1) ∩ H(b_2) ⟺ b H_2^p = b_1 H_2^p ⋁ b_2 H_2^p
and
b_1 ∈ E ∩ S_in^{p×p} or b_2 ∈ E ∩ S_in^{p×p} ⟹ b ∈ E ∩ S_in^{p×p}.

The next two theorems summarize a number of well-known results for the convenience of the reader; proofs may be found on pp. 116–121 of [ArD08b] and the references cited there.
Theorem 3.11 If s ∈ S^{p×p}, then:
(1) det s ∈ S.
(2) s ∈ S_out^{p×p} ⟺ det s ∈ S_out.
(3) s ∈ S_in^{p×p} ⟺ det s ∈ S_in.
(4) det{I_p − s(λ)} ≢ 0 in C+ ⟹ the mvf I_p − s is outer in H_∞^{p×p}.
If s ∈ S^{p×q} and s(ω)* s(ω) < I_q at a point ω ∈ C+, then s(λ)* s(λ) < I_q at every point λ ∈ C+.
Theorem 3.12 If f ∈ N^{p×p} and det f(λ) ≢ 0 in C+, then
(1) f ∈ N_out^{p×p} if and only if both f and f^{-1} belong to N_+^{p×p}.
(2) f ∈ N_out^{p×p} if and only if f^{-1} ∈ N_out^{p×p}.
Moreover, if f ∈ N_+^{p×p}, then
f ∈ N_out^{p×p} ⟺ det f ∈ N_out.   (3.19)

Lemma 3.13 If b ∈ S_in^{p×p} and d = det b, then:
(1) b^{-1} d ∈ S_in^{p×p}.
(2) If det b(λ) is constant in C+, then b(λ) ≡ constant.
Proof Let f = d b^{-1}. Then f ∈ H_∞^{p×p} and, as follows with the help of (3) of Theorem 3.11, f(μ)* f(μ) = I_p a.e. on R. Therefore, f ∈ S_in^{p×p} by the maximum principle in H_∞^{p×p}. This proves (1); (2) follows from (1).
Lemma 3.14 If b ∈ S_in^{p×q}, then R_α b ξ ∈ H(b) for every α ∈ C+ and every ξ ∈ C^q. If b ∈ S_in^{p×p} and d = det b, then H(b) ⊆ H(d I_p).

Proof If α ∈ C+ and ξ ∈ C^q, then R_α b ξ ∈ H_2^p and, for every h ∈ H_2^q,
⟨R_α b ξ, b h⟩_st = ⟨ξ/(μ − α), h⟩_st − ⟨b(α)ξ/(μ − α), b h⟩_st = 0 − 0 = 0,
since ξ/(λ − α) belongs to (H_2^q)^⊥ and b(α)ξ/(λ − α) belongs to (H_2^p)^⊥. The last assertion follows from Lemma 3.13 and Corollary 3.8.

3.2 Linear fractional transformations and Redheffer transformations
The linear fractional transformation T_U based on an m × m mvf
U(λ) = [ u_11(λ)  u_12(λ) ; u_21(λ)  u_22(λ) ]   (3.20)
that is meromorphic in C+ with blocks u_11(λ) of size p × p and u_22(λ) of size q × q, respectively, is defined by the formula
T_U[x] = {u_11(λ) x(λ) + u_12(λ)}{u_21(λ) x(λ) + u_22(λ)}^{-1}   (3.21)
for p × q mvf's x(λ) that belong to the set
D(T_U) = {x(λ) : x is a p × q meromorphic mvf in C+ with det[u_21(λ) x(λ) + u_22(λ)] ≢ 0 in C+},   (3.22)
which is the domain of definition of T_U. The notation
T_U[X] = {T_U[x] : x ∈ X} for subsets X of D(T_U)
will be useful. If U_1 and U_2 are m × m meromorphic mvf's in C+, x ∈ D(T_{U_1}) and T_{U_1}[x] ∈ D(T_{U_2}), then
x ∈ D(T_{U_2 U_1})   and   T_{U_2}[T_{U_1}[x]] = T_{U_2 U_1}[x].   (3.23)
Moreover, if U_2(λ) U_1(λ) = I_m, then T_{U_1} maps D(T_{U_1}) bijectively onto D(T_{U_2}) and T_{U_1}^{-1} = T_{U_2}, i.e.,
T_{U_1}[D(T_{U_1})] = D(T_{U_2}),   T_{U_2}[D(T_{U_2})] = D(T_{U_1}),
T_{U_2} T_{U_1}|_{D(T_{U_1})} = I|_{D(T_{U_1})}   and   T_{U_1} T_{U_2}|_{D(T_{U_2})} = I|_{D(T_{U_2})}.   (3.24)

Remark 3.15 In some applications it is convenient to introduce a second linear fractional transformation: the left linear fractional transformation
T_U^ℓ[y] = {y(λ) u_12(λ) + u_22(λ)}^{-1}{y(λ) u_11(λ) + u_21(λ)},   (3.25)
which acts in the set of q × p mvf's y(λ) that are meromorphic in C+ and belong to the set
D(T_U^ℓ) = {y(λ) : det{y(λ) u_12(λ) + u_22(λ)} ≢ 0 in C+},
the domain of definition of T_U^ℓ. The transformation T_U in (3.21) is sometimes referred to as a right linear fractional transformation and denoted T_U^r. The notation
s_12 = T_U^r[0_{p×q}] = u_12 u_22^{-1}   and   χ = T_U^ℓ[0_{q×p}] = u_22^{-1} u_21   (3.26)
will be used for m × m mvf's U that are meromorphic in C+ with det u_22(λ) ≢ 0 in C+.
Lemma 3.16 If U(λ) is a meromorphic m × m mvf in C+ with block decomposition (3.20) such that det u_22(λ) ≢ 0 in C+, then:
(1) S^{p×q} ⊆ D(T_U^r) ⟺ χ ∈ S^{q×p} and χ(λ)* χ(λ) < I_p for each point λ ∈ C+.
(2) S^{q×p} ⊆ D(T_U^ℓ) ⟺ s_12 ∈ S^{p×q} and s_12(λ)* s_12(λ) < I_q for each point λ ∈ C+.
Proof This is lemma 4.64 in [ArD08b].

In the future, we shall usually drop the superscript r and write T_U instead of T_U^r.
Lemma 3.17 Let U(λ) be an m × m mvf that is meromorphic in C+ with block decomposition (3.20) and assume that S^{p×q} ⊆ D(T_U) (so that (1) of Lemma 3.16 is applicable). Then the mvf
Σ = PG(U) = (P_- + P_+ U)(P_+ + P_- U)^{-1}   with P_± = (I_m ± j_pq)/2
is meromorphic in C+, the blocks σ_ij of the four block decomposition of Σ are given by the formulas
σ_11 = u_11 − u_12 u_22^{-1} u_21,   σ_12 = u_12 u_22^{-1},   σ_21 = −u_22^{-1} u_21,   σ_22 = u_22^{-1}   (3.27)
and the Redheffer transform
R_Σ[ε] = σ_12 + σ_11 ε (I_q − σ_21 ε)^{-1} σ_22   (3.28)
is a p × q mvf that is meromorphic in C+ and holomorphic in h_Σ^+ for every choice of ε ∈ S^{p×q}. Moreover, T_U[ε] = R_Σ[ε] for every ε ∈ S^{p×q}.
Proof Under the given assumptions,
σ_21 ∈ S^{q×p}   and   σ_21(λ)* σ_21(λ) < I_p for λ ∈ C+.   (3.29)


Thus, det{I_q − σ_21(λ) ε(λ)} ≠ 0 for every point λ ∈ C+ and every ε ∈ S^{p×q}. Therefore, the p × q mvf R_Σ[ε] is meromorphic in C+ and holomorphic in h_Σ^+ for every ε ∈ S^{p×q}. Formula (3.29) is a straightforward calculation.
Lemma 3.18 Let U(λ) be an m × m mvf that is meromorphic in C+ with block decomposition (3.20) and assume that S^{p×q} ⊆ D(T_U) (so that (1) of Lemma 3.16 is applicable and the formulas (3.27)–(3.29) are in force). Then for each point ω ∈ h_U^+ ∩ h_{u_22^{-1}}^+ there exists exactly one matrix W ∈ U_const(j_pq) such that the blocks ũ_ij of the new mvf Ũ(λ) = U(λ)W meet the following normalization conditions at the point ω:
ũ_11(ω) ≥ 0,   ũ_22(ω) > 0   and   ũ_21(ω) = 0.   (3.30)
Moreover,
T_Ũ[ε] = R_Σ̃[ε],   (3.31)
where Σ̃ = PG[Ũ] with P_± = (I_m ± j_pq)/2 and the blocks σ̃_ij of Σ̃ meet the normalization conditions
σ̃_11(ω) ≥ 0,   σ̃_22(ω) > 0   and   σ̃_21(ω) = 0.   (3.32)
Furthermore, det U(ω) ≠ 0 ⟺ ũ_11(ω) > 0.
Proof The matrix W may be defined by formula (3.127), with
k = −χ(ω)* = −u_21(ω)* u_22(ω)^{-*}
and unitary matrices u and v such that
ũ_11(ω) = {u_11(ω) + u_12(ω) k*}(I_p − k k*)^{-1/2} u ≥ 0
and
ũ_22(ω) = {u_21(ω) k + u_22(ω)}(I_q − k* k)^{-1/2} v > 0.
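The identity T_U[ε] = R_Σ[ε] of Lemma 3.17 is purely algebraic, so it can be spot-checked numerically; the block sizes and random data below are editorial choices, not from the original text:

```python
import numpy as np

rng = np.random.default_rng(0)
p = q = 2

def cmat(m, n):
    # random complex m x n block (u22 is then generically invertible)
    return rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

u11, u12, u21, u22 = cmat(p, p), cmat(p, q), cmat(q, p), cmat(q, q)
eps = cmat(p, q)
eps /= 2 * np.linalg.norm(eps, 2)      # strict contraction, ||eps|| = 1/2

# Blocks of Sigma = PG(U), formula (3.27)
u22i = np.linalg.inv(u22)
s11 = u11 - u12 @ u22i @ u21
s12 = u12 @ u22i
s21 = -u22i @ u21
s22 = u22i

# Right linear fractional transform (3.21) vs. Redheffer transform (3.28)
TU = (u11 @ eps + u12) @ np.linalg.inv(u21 @ eps + u22)
R = s12 + s11 @ eps @ np.linalg.inv(np.eye(q) - s21 @ eps) @ s22

print(np.allclose(TU, R))              # True
```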



Matrix balls
A matrix ball B in C^{p×q} with center γ_c ∈ C^{p×q} is a set of the form
{γ_c + R_ℓ Q R_r : Q ∈ S_const^{p×q}},
where R_ℓ ∈ C^{p×p} and R_r ∈ C^{q×q} with R_ℓ ≥ 0 and R_r ≥ 0; R_ℓ is called the left semiradius and R_r the right semiradius. More information on matrix balls may be found, e.g., in section 2.4 of [ArD08b]. In particular, it is known that the center γ_c of a matrix ball is uniquely defined by the ball and that if the ball contains more than one point, then the left and right semiradii R_ℓ ≥ 0 and R_r ≥ 0 of B are defined by the ball up to the transformation
R_ℓ ⟶ ρ R_ℓ   and   R_r ⟶ ρ^{-1} R_r for some ρ > 0.
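A small editorial sketch (not from the original text) of how a point of a matrix ball is generated from a constant contraction Q, and how membership is tested when the semiradii are positive definite:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 2, 3

# Fixed center and (here positive definite) semiradii -- hypothetical data
gc = rng.standard_normal((p, q))
Rl = np.eye(p) + 0.1 * np.diag(rng.random(p))    # left semiradius,  p x p
Rr = np.eye(q) + 0.1 * np.diag(rng.random(q))    # right semiradius, q x q

# A point of the ball: gc + Rl Q Rr with a constant contraction Q
Q = rng.standard_normal((p, q))
Q /= max(1.0, np.linalg.norm(Q, 2))              # ||Q|| <= 1
X = gc + Rl @ Q @ Rr

# Membership test: Rl^{-1}(X - gc)Rr^{-1} must again be a contraction
Q_back = np.linalg.inv(Rl) @ (X - gc) @ np.linalg.inv(Rr)
print(np.linalg.norm(Q_back, 2) <= 1 + 1e-12)    # True
```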


Lemma 3.19 Let U(λ) be an m × m mvf that is meromorphic in C+ with block decomposition (3.20) and assume that S^{p×q} ⊆ D(T_U) (so that (1) of Lemma 3.16 is applicable and formulas (3.27)–(3.28) are in force). Fix a point ω ∈ h_U^+ ∩ h_{u_22^{-1}}^+ and let Σ̃ = [σ̃_ij] be defined as in Lemma 3.18. Then the set
B_U(ω) := {T_{U(ω)}[ε(ω)] : ε ∈ S^{p×q}}   (3.33)
is a matrix ball with center σ̃_12(ω) and left and right semiradii R_ℓ(ω) := σ̃_11(ω) and R_r(ω) := σ̃_22(ω), respectively:
B_U(ω) = {σ̃_12(ω) + σ̃_11(ω) Q σ̃_22(ω) : Q ∈ S_const^{p×q}}.   (3.34)
In this ball R_r(ω) > 0 and
R_ℓ(ω) > 0 ⟺ det U(ω) ≠ 0.
Proof This follows easily from Lemmas 3.17 and 3.18 and the fact that
T_U[S^{p×q}] = T_Ũ[S^{p×q}] = R_Σ̃[S^{p×q}].

Remark 3.20 The center γ_c(ω) and the semiradii R_ℓ(ω) and R_r(ω) of the ball B_U(ω) may be expressed directly in terms of the blocks of U(ω) by the formulas
R_ℓ(ω)^2 = σ̃_11(ω)^2 = ũ_11(ω)^2 = {u_11(ω) + u_12(ω) k*}(I_p − k k*)^{-1}{u_11(ω) + u_12(ω) k*}*,   (3.35)
R_r(ω)^2 = σ̃_22(ω)^2 = ũ_22(ω)^{-2} = (u_22(ω) u_22(ω)* − u_21(ω) u_21(ω)*)^{-1}   (3.36)
and
γ_c(ω) = σ̃_12(ω) = T_{Ũ(ω)}[0] = T_{U(ω)}[−χ(ω)*] = (u_12(ω) u_22(ω)* − u_11(ω) u_21(ω)*) R_r(ω)^2,   (3.37)
where
k = −χ(ω)*   and   χ(ω) = u_22(ω)^{-1} u_21(ω).

The linear fractional transformation
T_V[x] = (I_p − x)(I_p + x)^{-1}   (3.38)
based on the matrix V defined by formula (1.6) is called the Cayley transform. It is readily checked that
C^{p×p} ⊂ D(T_V),   C^{p×p} = T_V[S^{p×p} ∩ D(T_V)]   (3.39)
and
S^{p×p} ∩ D(T_V) = {s ∈ S^{p×p} : det{I_p + s(λ)} ≠ 0 in C+} = T_V[C^{p×p}].   (3.40)
Thus,
c ∈ C^{p×p} ⟺ c = T_V[s] for some s ∈ S^{p×p} ∩ D(T_V).   (3.41)
The inclusion C^{p×p} ⊂ N^{p×p}, which follows from (3.41), serves to guarantee that every mvf c ∈ C^{p×p} has nontangential boundary limits c(μ) at almost all points μ ∈ R. In particular,
c(μ) = lim_{ε↓0} c(μ + iε) for almost all points μ ∈ R.
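The mapping (3.38) can be illustrated numerically in the scalar case p = 1 — an editorial sketch, not from the original text. For the inner function s(λ) = e^{iλ} ∈ S, the Cayley transform c = (1 − s)/(1 + s) satisfies Re c = (1 − |s|^2)/|1 + s|^2 ≥ 0, i.e. c takes values in the closed right half plane on C+:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample points of the open upper half plane C+
lam = rng.standard_normal(1000) + 1j * (rng.random(1000) * 5.0 + 1e-6)

s = np.exp(1j * lam)          # scalar inner function, |s(lambda)| < 1 on C+
c = (1 - s) / (1 + s)         # Cayley transform (3.38) with p = 1

print(bool(np.all(c.real >= 0)))   # True
```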

3.3 The Riesz–Herglotz–Nevanlinna representation
It is well known that a p × p mvf c(λ) belongs to the Carathéodory class C^{p×p} if and only if it admits an integral representation via the Riesz–Herglotz–Nevanlinna formula
c(λ) = iα − iλβ + (1/πi) ∫_{-∞}^{∞} {1/(μ − λ) − μ/(1 + μ^2)} dσ(μ) for λ ∈ C+,   (3.42)
where α = α* ∈ C^{p×p}, β ∈ C^{p×p}, β ≥ 0 and σ(μ) is a nondecreasing p × p mvf on R such that
∫_{-∞}^{∞} d(trace σ(μ))/(1 + μ^2) < ∞.   (3.43)
The parameters α and β are uniquely defined by c(λ) via the formulas
α = Ic(i)   and   β = lim_{ν↑∞} ν^{-1} Rc(iν).   (3.44)
A mvf σ(μ) in (3.42) will be called a spectral function of c(λ); it always may (and will be) chosen so that
σ(0) = 0   and   σ(μ) = {σ(μ+) + σ(μ−)}/2.   (3.45)
Under these normalization conditions, σ(μ) is uniquely defined by c(λ). The particular formula
−i (−λ)^{−t}/sin πt = −i cos(πt/2)/sin πt + (1/πi) ∫_0^∞ {1/(μ − λ) − μ/(1 + μ^2)} dμ/μ^t   (3.46)
for t ∈ (−1, 0) ∪ (0, 1) and λ ∈ C \ R+ and variations thereof play a useful role in a number of applications.

In view of the normalization (3.45), the Stieltjes inversion formula
σ(μ_2) − σ(μ_1) = lim_{ν↓0} ∫_{μ_1}^{μ_2} R(c(μ + iν)) dμ   (3.47)
is valid at every pair of points μ_1, μ_2 ∈ R, and not just at points of continuity of σ; see, e.g., [KaKr74a]. The spectral function σ(μ) can be decomposed into the sum
σ(μ) = σ_s(μ) + σ_a(μ),   (3.48)

of two nondecreasing p × p mvf's σ_s(μ) and σ_a(μ), where σ_s(μ) is the singular component of σ(μ), i.e., σ_s′(μ) = 0 for almost all points μ ∈ R, and σ_a(μ) is the locally absolutely continuous part of σ(μ) normalized by the condition σ_a(0) = 0, i.e.,
σ_a(μ) = ∫_0^μ f(b) db,   (3.49)
where
f(μ) ≥ 0 a.e. on R   and   ∫_{-∞}^{∞} trace f(μ)/(1 + μ^2) dμ < ∞.   (3.50)
The convergence of the last integral follows from (3.43). In fact, condition (3.43) is equivalent to the two conditions
∫_{-∞}^{∞} d(trace σ_s(μ))/(1 + μ^2) < ∞   and   ∫_{-∞}^{∞} trace f(μ)/(1 + μ^2) dμ < ∞.
Moreover,
f(μ) = σ′(μ) = Rc(μ) a.e. on R   (3.51)
and hence, in view of formula (3.50),
∫_{-∞}^{∞} trace{c(μ) + c(μ)*}/(1 + μ^2) dμ < ∞.   (3.52)
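The inversion formula (3.47) can be illustrated numerically — an editorial sketch, not from the original text — with the simplest scalar example: a spectral function consisting of a single unit jump, which (3.47) recovers in the limit ν ↓ 0:

```python
import numpy as np

mu0 = 0.5   # location of a unit jump of sigma (hypothetical example data)

def c(z):
    # scalar Caratheodory function from (3.42) with alpha = beta = 0
    # (up to an additive imaginary constant) and a unit point mass at mu0;
    # then Re c(mu + i nu) is a Lorentzian of total mass 1
    return (1.0 / (np.pi * 1j)) / (mu0 - z)

# Stieltjes inversion (3.47) over an interval (mu1, mu2) containing mu0
mu1, mu2 = -1.0, 2.0
mu = np.linspace(mu1, mu2, 300_001)
dmu = mu[1] - mu[0]
for nu in (1e-1, 1e-2, 1e-3):
    jump = np.sum(c(mu + 1j * nu).real) * dmu
    print(nu, jump)   # approaches sigma(mu2) - sigma(mu1) = 1 as nu decreases
```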

If σ(μ) is locally absolutely continuous, i.e., if σ(μ) = σ_a(μ), then the p × p mvf f(μ) = σ′(μ) is called the spectral density of c(λ). Formula (3.42) implies that
Rc(i) = β + (1/π) ∫_{-∞}^{∞} dσ(μ)/(1 + μ^2),
which leads easily to the following conclusions:
Lemma 3.21 Let c ∈ C^{p×p}. Then in formula (3.42)
β = 0 ⟺ Rc(i) = (1/π) ∫_{-∞}^{∞} dσ(μ)/(1 + μ^2),   (3.53)
whereas β = 0 and σ(μ) is locally absolutely continuous if and only if
Rc(i) = (1/π) ∫_{-∞}^{∞} Rc(μ)/(1 + μ^2) dμ.   (3.54)

In view of the integral representation formula (3.42) and the decomposition (3.48), every mvf c ∈ C^{p×p} has an additive decomposition
c(λ) = c_s(λ) + c_a(λ), with c_s ∈ C_sing^{p×p} and c_a ∈ C_a^{p×p},   (3.55)
where
C_a^{p×p} = {c ∈ C^{p×p} : β = 0 and σ_s(μ) = 0}   (3.56)
and
C_sing^{p×p} = {c ∈ C^{p×p} : σ_a(μ) = 0} = {c ∈ C^{p×p} : Rc(μ) = 0 a.e. on R}.   (3.57)
This decomposition is unique up to an additive purely imaginary constant p × p matrix. Thus, in terms of the notation introduced in (3.42), (3.48) and (3.51), we may set
c_s(λ) = iα − iβλ + (1/πi) ∫_{-∞}^{∞} {1/(μ − λ) − μ/(1 + μ^2)} dσ_s(μ)   (3.58)
and
c_a(λ) = (1/πi) ∫_{-∞}^{∞} {1/(μ − λ) − μ/(1 + μ^2)} f(μ) dμ,   (3.59)
where f(μ) = Rc(μ) a.e. on R. This decomposition corresponds to the normalization c_a(i) ≥ 0, and is uniquely determined by this normalization. There are other normalizations that may be imposed on c_a that insure uniqueness of the decomposition (3.55).
Some proper subclasses of the Carathéodory class C^{p×p}
(a) C^{p×p} ∩ H_∞^{p×p}.

If c ∈ C^{p×p} ∩ H_∞^{p×p}, then, by a well-known theorem of the brothers Riesz (see, e.g., p. 74 of [RR94]), β = 0 in formula (3.42), the spectral function σ(μ) of c(λ) is locally absolutely continuous and (3.58) reduces to c_s(λ) = iα, i.e., C^{p×p} ∩ H_∞^{p×p} ⊂ C_a^{p×p}. Therefore,
c(λ) = iα + (1/πi) ∫_{-∞}^{∞} {1/(μ − λ) − μ/(1 + μ^2)} f(μ) dμ,   (3.60)
where
α* = α ∈ C^{p×p},   f(μ) = (Rc)(μ) ≥ 0 a.e. on R   and   f ∈ L_∞^{p×p}(R).   (3.61)


It is clear that if condition (3.61) is in force, then the function c(λ) defined by formula (3.60) belongs to C_a^{p×p} and that
Rc(λ) ≤ ‖f‖_∞ I_p for λ ∈ C+.
(b) C̊^{p×p} = {c ∈ C^{p×p} ∩ H_∞^{p×p} : (Rc)(μ) ≥ δ_c I_p > 0 a.e. on R}, where δ_c > 0 depends upon c. The class C̊^{p×p} is simply related to the class S̊^{p×p} that was defined in (1.37):
C̊^{p×p} = T_V[S̊^{p×p}].   (3.62)
If c ∈ C̊^{p×p}, then the mvf f(μ) in the integral representation (3.60) is subject to the bounds
δ_1 I_p ≤ f(μ) ≤ δ_2 I_p a.e. on R,
where 0 < δ_1 ≤ δ_2. The classes S̊^{p×q} and C̊^{p×p} play a significant role in the study of the inverse problems considered in this monograph, in view of the characterizations (1.36) and (1.86) of the classes U_rsR(j_pq) and U_rR(j_pq), respectively.
(c) C_sz^{p×p} = {c ∈ C^{p×p} : ln{det(c + c*)} ∈ L̃_1}. The condition refers to the nontangential boundary values of c. The inclusion C̊^{p×p} ⊂ C_sz^{p×p} is proper.
(d)

C_0^{p×p} = {c ∈ C^{p×p} : sup{ν ‖c(iν)‖ : ν > 0} < ∞}.
It is known that a mvf c(λ) belongs to C_0^{p×p} if and only if it admits a representation of the form
c(λ) = (1/πi) ∫_{-∞}^{∞} 1/(μ − λ) dσ(μ),   (3.63)
where σ is a bounded nondecreasing p × p mvf on R, or, equivalently, if and only if c ∈ C^{p×p} and
sup_{ν>0} {ν trace Rc(iν)} < ∞   and   lim_{ν↑∞} c(iν) = 0.   (3.64)
The mvf σ in formula (3.63) coincides with the spectral function of c; in the representation (3.42) for this mvf c,
β = 0   and   α = −(1/π) ∫_{-∞}^{∞} μ/(1 + μ^2) dσ(μ).
(e)

C^{p×p} ∩ Real ∩ Symm = {c ∈ C^{p×p} : \overline{c(−λ̄)} = c(λ) and c(λ)^τ = c(λ)}.
The integral representation formula (3.42) for c(λ) implies that
\overline{c(−λ̄)} = −i\overline{α} − i\overline{β}λ − (1/πi) ∫_{-∞}^{∞} {1/(μ + λ) − μ/(1 + μ^2)} d\overline{σ(μ)} for λ ∈ C+.
Thus, with the help of the integral representation formula for c(λ)^τ, it is readily checked that a mvf c ∈ C^{p×p} belongs to the subclass of real symmetric mvf's in C^{p×p} if and only if the parameters in the integral representation (3.42) for c(λ) are subject to the following restrictions:
α = 0,   β = \overline{β} = β^τ ≥ 0,   σ(μ) = \overline{σ(μ)}   and   σ(μ) = −σ(−μ) for μ < 0,   (3.65)
where σ is assumed to be normalized as in (3.45). Therefore,
c(λ) = {c(λ) + \overline{c(−λ̄)}}/2 = −iβλ + (λ/iπ) ∫_{-∞}^{∞} dσ(μ)/(μ^2 − λ^2)
and hence, upon setting τ(μ) = (2/π) σ(√μ) for μ > 0, formula (3.42) can be rewritten as
c(λ) = −iλ {β + ∫_0^∞ dτ(μ)/(μ − λ^2)} for λ ∈ C+,   (3.66)
where β ∈ R^{p×p} is positive semidefinite and τ is a nondecreasing mvf on R+ that is subject to the constraint
∫_0^∞ d(trace τ(μ))/(1 + μ) < ∞.   (3.67)

Moreover, the mvf
G(λ) = β + ∫_0^∞ dτ(μ)/(μ − λ) for λ ∈ C \ [0, ∞)   (3.69)
has the following special properties:
−iG ∈ C^{p×p},   \overline{G(λ̄)} = G(λ)   and   G(λ)^τ = G(λ) for λ ∈ C \ [0, ∞)   (3.70)
and
c(λ) = −iλ G(λ^2) belongs to the class C^{p×p} ∩ Real ∩ Symm.   (3.71)

The conditions (3.70) define the Stieltjes class of mvf's G(λ). Formula (3.69) with a positive semidefinite matrix β ∈ R^{p×p} and a nondecreasing p × p mvf τ on R+ that is subject to the constraint (3.67) provides a complete parametrization of this class. Moreover, formula (3.71) establishes a one-to-one correspondence between mvf's G(λ) in the Stieltjes class and real symmetric mvf's c ∈ C^{p×p}; see [KaKr74a] for more information on these classes in the scalar case.
Lemma 3.22 The following assertions hold:
(1) C^{p×p} = T_V[S^{p×p} ∩ D(T_V)].
(2) C^{p×p} ⊂ N_+^{p×p}.


(3) If c ∈ C^{p×p}, then c ∈ N_out^{p×p} if Rc(ω) > 0 for at least one (and hence every) point ω ∈ C+.
(4) C̊^{p×p} = T_V[S̊^{p×p}] and C̊^{p×p} ⊂ N_out^{p×p}.
(5) If c ∈ C^{p×p}, then:
c^{-1} ∈ N_+^{p×p} ⟺ det c(λ) ≢ 0 in C+ ⟺ c ∈ N_out^{p×p} ⟺ c^{-1} ∈ C^{p×p}.
(6) C_sz^{p×p} = T_V[S_sz^{p×p}].
Proof The first five assertions follow from lemmas 3.57 and 3.58 in [ArD08b], whereas (6) follows from the identity
Rc = (I_p + s*)^{-1}(I_p − s* s)(I_p + s)^{-1} a.e. on R
and the fact that ln det(I_p + s) ∈ L̃_1, since det(I_p + s) ∈ N for s = T_V[c] and c ∈ C^{p×p}.

3.4 The class E ∩ N^{p×q} of entire mvf's in N^{p×q}
A p × q mvf f(λ) = [f_jk(λ)] is entire if each of its entries f_jk(λ) is an entire function. The class of entire p × q mvf's f(λ) will be denoted E^{p×q}. If f also belongs to some other class X^{p×q}, then we shall simply write f ∈ E ∩ X^{p×q}. An entire p × q mvf is said to be of exponential type if there is a constant τ ≥ 0 such that
‖f(λ)‖ ≤ γ exp{τ|λ|} for all points λ ∈ C   (3.72)
for some γ > 0. In this case, the exact type τ(f) of f is defined by the formula
τ(f) = inf{τ : (3.72) holds}.   (3.73)
Equivalently, an entire p × q mvf f, f ≢ 0, is said to be of exponential type τ(f) if
τ(f) = lim sup_{r→∞} ln M(r)/r < ∞,   (3.74)

where M(r) = max{‖f(λ)‖ : |λ| = r}.
The simple bounds in the next lemma will be useful.
Lemma 3.23 If V ∈ C^{p×q}, then
‖V‖^2 ≤ trace(V* V) ≤ q ‖V‖^2   (3.75)
and
det(V* V) ≤ ‖V‖^{2q}.   (3.76)
If p = q and V is invertible with ‖V^{-1}‖ ≤ 1, then
‖V‖ ≤ |det V| ≤ ‖V‖^p.   (3.77)

Proof Let μ_1^2 ≥ · · · ≥ μ_q^2 denote the eigenvalues of the positive semidefinite matrix V* V. Then (3.75) and (3.76) are immediate from the observation that
trace(V* V) = μ_1^2 + · · · + μ_q^2,   ‖V‖ = μ_1   and   det(V* V) = μ_1^2 · · · μ_q^2.
Finally, (3.77) is easily obtained from the same set of formulas when q = p and V is invertible with ‖V^{-1}‖ ≤ 1, because then μ_j^2 ≥ 1 for j = 1, . . . , p.
The inequalities (3.75) applied to the matrix f(λ) yield the auxiliary formula
τ(f) = lim sup_{r→∞} ln max{trace(f(λ)* f(λ)) : |λ| = r}/(2r).   (3.78)
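The bounds of Lemma 3.23 are easy to spot-check numerically — an editorial sketch with random data, not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(3)
q = 4
V = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))

norm_sq = np.linalg.norm(V, 2) ** 2                  # ||V||^2 (spectral norm squared)
tr = np.trace(V.conj().T @ V).real                   # trace(V*V) = sum of mu_j^2
det_ = abs(np.linalg.det(V.conj().T @ V))            # det(V*V)

print(norm_sq <= tr <= q * norm_sq)                  # (3.75) -> True
print(det_ <= np.linalg.norm(V, 2) ** (2 * q))       # (3.76) -> True

# (3.77): rescale so the smallest singular value is 1, i.e. ||W^{-1}|| = 1
s = np.linalg.svd(V, compute_uv=False)
W = V / s.min()
print(np.linalg.norm(W, 2) <= abs(np.linalg.det(W)) <= np.linalg.norm(W, 2) ** q)
```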

Moreover, a mvf f ∈ E^{p×q} is of exponential type if and only if all the entries f_ij(λ) of the mvf f are entire functions of exponential type. Furthermore, if f ∈ E^{p×q} is of exponential type, then
τ(f) = max{τ(f_ij) : f_ij ≢ 0, 1 ≤ i ≤ p, 1 ≤ j ≤ q}.   (3.79)
To verify (3.79), observe first that the inequality |f_ij(λ)|^2 ≤ trace{f(λ)* f(λ)} implies that τ(f_ij) ≤ τ(f) if f_ij ≢ 0. On the other hand, if τ = max{τ(f_ij) : f_ij ≢ 0, 1 ≤ i ≤ p, 1 ≤ j ≤ q}, then for every ε > 0 there exists a number γ > 0 such that
trace f(λ)* f(λ) = Σ_{i=1}^p Σ_{j=1}^q |f_ij(λ)|^2 ≤ γ exp{2(τ + ε)|λ|}.
Consequently, τ(f) ≤ τ. Also, it is clear that if
M_±(r) = max{‖f(λ)‖ : |λ| ≤ r and λ ∈ C̄_±},
then τ(f) = max{τ_+(f), τ_−(f)}, where
τ_±(f) = lim sup_{r→∞} ln M_±(r)/r;   (3.80)

and that
τ_±(f) = max{τ_±(f_ij) : f_ij ≢ 0, 1 ≤ i ≤ p, 1 ≤ j ≤ q}.
Analogs of formula (3.78) are valid for τ_±(f). Moreover, if a mvf f ∈ E^{p×q} has exponential type τ(f), then the types τ_+(f) and τ_−(f) of the mvf f in the closed upper and lower half planes C̄_+ and C̄_− are not less than the exponential types τ_f^+ and τ_f^− of the mvf f on the upper and lower imaginary half axis, respectively, i.e.,
τ_f^± := lim sup_{ν↑∞} ln ‖f(±iν)‖/ν ≤ τ_±(f).   (3.81)
The notation
δ(f) = τ(det f)   and   δ_f^± = τ^±(det f)   (3.82)
will be used for f ∈ E^{p×p} with τ(f) < ∞. The latter notation will also be used for functions in the Nevanlinna class in the appropriate half plane: Let
δ_f^+ = lim sup_{ν↑∞} ln|det f(iν)|/ν   and   δ_f^− = lim sup_{ν↑∞} ln|det f^#(iν)|/ν   (3.83)

for f ∈ N^{p×p} with det f(λ) ≢ 0 and for f^# ∈ N^{p×p} with det f(λ) ≢ 0, respectively. The notation τ_f and type(f) (resp., δ_f and type(det f)) will also be used in place of τ(f) (resp., δ(f)).
Theorem 3.24 (M.G. Krein) Let f ∈ E^{p×q}. Then f ∈ N^{p×q} if and only if τ_+(f) < ∞ and f satisfies the Cartwright condition
∫_{-∞}^{∞} ln^+ ‖f(μ)‖/(1 + μ^2) dμ < ∞.   (3.84)
Moreover, if f ∈ E ∩ N^{p×q}, then τ_+(f) = τ_f^+.
Proof See [Kr47], [Kr51b] and section 6.11 of [RR85].
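Returning to (3.74) and (3.79), the exponential type of a concrete entire mvf can be estimated numerically — an editorial sketch, not from the original text. For f(λ) = diag(e^{iλ}, e^{2iλ}) the entries have types 1 and 2, so (3.79) gives τ(f) = 2:

```python
import numpy as np

# f(lambda) = diag(e^{i lambda}, e^{2 i lambda}); type dominated in C-
theta = np.linspace(0.0, 2.0 * np.pi, 20_001)
for r in (50.0, 150.0, 300.0):
    lam = r * np.exp(1j * theta)              # the circle |lambda| = r
    # spectral norm of a diagonal matrix = largest entry modulus
    M = np.max(np.maximum(np.abs(np.exp(1j * lam)), np.abs(np.exp(2j * lam))))
    print(r, np.log(M) / r)                    # approaches 2, cf. (3.74) and (3.79)
```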

Lemma 3.25 If f ∈ E ∩ N^{p×q} and f(λ) ≢ 0, then
τ_f^+ = lim sup_{ν↑∞} ln trace{f(iν)* f(iν)}/(2ν).   (3.85)
Proof This is immediate from (3.75).
Theorem 3.26 If f(λ) = [f_jk(λ)] is a p × q mvf of class N^{p×q} such that f is holomorphic in C+ and f(λ) ≢ 0, then
τ_f^+ = max{τ_{f_jk}^+ : j = 1, . . . , p, k = 1, . . . , q and f_jk(λ) ≢ 0}.
Proof The proof is the same as that of (3.79).


Theorem 3.27 Let f ∈ E^{p×p} be invertible in C+ and let f^{-1} ∈ N_+^{p×p}. Then:
(1) f ∈ E ∩ N^{p×p}.
(2) 0 ≤ τ_f^+ < ∞.
(3) δ_f^+ = lim sup_{ν↑∞} ln|det f(iν)|/ν = lim_{ν↑∞} ln|det f(iν)|/ν exists as a limit and τ_f^+ ≤ δ_f^+ ≤ p τ_f^+.
(4) e^{-iδ_f^+ λ} det f(λ)^{-1} ∈ N_out.
(5) δ_f^+ = p τ_f^+ if and only if e^{iτ_f^+ λ} f(λ) ∈ N_out^{p×p}.
Proof See theorem 3.94 in [ArD08b].
Corollary 3.28 Let f ∈ E^{p×p} be invertible in C+ with f^{-1} ∈ H_∞^{p×p}. Then the following are equivalent:
(1) lim_{ν↑∞} ln ‖f(iν)‖/ν = 0.
(2) lim_{ν↑∞} ln[trace{f(iν)* f(iν)}]/ν = 0.
(3) lim_{ν↑∞} ln|det f(iν)|/ν = 0.
(4) f ∈ N_out^{p×p}.

lim xk = 0.

k→∞

The rest is immediate from the preceding two theorems and the inequalities in Lemma 3.23. 

3.5 The class Π^{p×q} of mvf's in N^{p×q} with pseudocontinuations
A p × q mvf f_- defined in C− is said to be a pseudocontinuation of a mvf f ∈ N^{p×q} if
(1) f_-^# ∈ N^{q×p}, i.e., if f_- is a meromorphic p × q mvf in C− with bounded Nevanlinna characteristic in C−, and
(2) lim_{ν↓0} f_-(μ − iν) = lim_{ν↓0} f(μ + iν) (= f(μ)) a.e. on R.
The subclass of all mvf's f ∈ N^{p×q} that admit pseudocontinuations f_- into C− will be denoted Π^{p×q}. Since a pseudocontinuation f_- is uniquely defined by its boundary values on a subset of R of positive Lebesgue measure, each f ∈ Π^{p×q} admits only one pseudocontinuation f_-. Although Π^{p×q} ⊂ N^{p×q} by definition, and f ∈ N^{p×q} is defined only on C+, we will consider mvf's f ∈ Π^{p×q} in the full complex plane C via the formulas
f(λ) = f_-(λ) for λ ∈ h_{f_-} ∩ C−   and   f(μ) = lim_{ν↓0} f(μ + iν) a.e. on R.
The symbol h_f will be used to denote the domain of holomorphy of this extended mvf f(λ) in the full complex plane and
h_f^+ = h_f ∩ C+,   h_f^- = h_f ∩ C−   and   h_f^0 = h_f ∩ R.
Thus, h_f^± is all of C_± except for the poles of f in this set; but h_f^0 may be empty. We shall also write
Π^p = Π^{p×1},

Π = Π^1   and   Π^{p×q} ∩ X^{p×q} = Π ∩ X^{p×q}
for short. It is clear that a p × q mvf f belongs to the class Π^{p×q} if and only if all the entries in the mvf f belong to the class Π. Moreover, Π^{p×q} is a linear space and
f ∈ Π^{p×r}, g ∈ Π^{r×q} ⟹ f g ∈ Π^{p×q};
f ∈ Π^{p×p} ⟹ det f ∈ Π and trace f ∈ Π;
f ∈ Π^{p×p} and det f ≢ 0 ⟹ f^{-1} ∈ Π^{p×p};
f ∈ Π^{p×q} ⟹ f^# ∈ Π^{q×p}, f^∼ ∈ Π^{q×p} and f(−λ) ∈ Π^{p×q}.
The next result follows from the characterization of scalar functions f ∈ Π ∩ H_2 that is due to Douglas, Shapiro and Shields [DSS70].
Theorem 3.29 Let f ∈ H_2^{p×q}. Then f ∈ Π ∩ H_2^{p×q} if and only if b^{-1} f ∈ (H_2^{p×q})^⊥ for some b ∈ S_in.
Proof See corollary 3.107 in [ArD08b].

3.6 Fourier transforms and Paley–Wiener theorems

79

Theorem 3.30 (M.G. Krein) Let f ∈ E^{p×q}. Then f ∈ Π^{p×q} if and only if f is an entire mvf of exponential type and satisfies the Cartwright condition
∫_{-∞}^{∞} ln^+ ‖f(μ)‖/(1 + μ^2) dμ < ∞.   (3.86)
Moreover, if f ∈ E ∩ Π^{p×q}, then
(1) τ_+(f) = τ_f^+ and τ_−(f) = τ_f^−.
(2) τ_f^− + τ_f^+ ≥ 0.
(3) τ(f) = max{τ_f^+, τ_f^−}.
Proof Assertions (1) and (3) follow from Theorem 3.24. To verify (2), suppose to the contrary that τ_f^+ + τ_f^− < 0 and let g = e_δ f, where τ_f^+ < δ < −τ_f^−. Then τ_g^+ = −δ + τ_f^+ < 0 and τ_g^− = δ + τ_f^− < 0, which is impossible, since in this case g(λ) ≡ 0. Therefore, (2) is also valid.

3.6 Fourier transforms and Paley–Wiener theorems

The Fourier transform and inverse transform

f̂(μ) = ∫_{−∞}^{∞} e^{iμt} f(t) dt  and  f^∨(t) = (1/2π) ∫_{−∞}^{∞} e^{−iμt} f(μ) dμ  (3.87)

will be considered mainly for f ∈ L_1^{p×q} and f ∈ L_2^{p×q}. If f ∈ L_2^{p×q}, then the integral is understood as the limit of the integrals ∫_{−A}^{A} in L_2^{p×q} as A ↑ ∞. Moreover, the mapping f ↦ (2π)^{−1/2} f̂ is a unitary operator in L_2^{p×q}, i.e., it is onto, the Plancherel formula

⟨f̂, ĝ⟩_st = 2π ⟨f, g⟩_st  for f, g ∈ L_2^{p×q}

holds, and

f(t) = (f̂)^∨(t)  a.e. on R.  (3.88)

If f ∈ L_2^{p×q}, then μ f̂(μ) belongs to L_2^{p×q} if and only if

f is locally absolutely continuous on R  and  f′ ∈ L_2^{p×q}.  (3.89)

Moreover, if these conditions hold, then

μ f̂(μ) = i (f′)^∧(μ).  (3.90)

If f ∈ L_1^{p×r} and g ∈ L_1^{r×q}, then f̂ ∈ W^{p×r}(0), ĝ ∈ W^{r×q}(0), f̂ ĝ ∈ W^{p×q}(0) and

(f̂ ĝ)^∨(t) = ∫_{−∞}^{∞} f(t − u) g(u) du  a.e. on R.  (3.91)

Formula (3.91) is also valid if f ∈ L_1^{p×r}, g ∈ L_s^{r×q} and 1 < s < ∞.
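The transform convention (3.87) and the Plancherel formula can be checked numerically in the scalar case. A minimal sketch, assuming numpy and using the Gaussian f(t) = e^{−t²/2}, whose transform under this convention is √(2π) e^{−μ²/2} (the grids and tolerances are illustrative):

```python
import numpy as np

# Riemann-sum approximation of f-hat(mu) = \int e^{i mu t} f(t) dt
t = np.linspace(-20.0, 20.0, 4001)
dt = t[1] - t[0]
f = np.exp(-t**2 / 2)

def hat_f(mu):
    return np.sum(np.exp(1j * mu * t) * f) * dt

# closed form for the Gaussian: sqrt(2*pi) * exp(-mu^2/2)
for mu in (0.0, 1.0, 2.5):
    assert abs(hat_f(mu) - np.sqrt(2 * np.pi) * np.exp(-mu**2 / 2)) < 1e-8

# Plancherel: <f-hat, f-hat> = 2*pi <f, f>
mus = np.linspace(-15.0, 15.0, 2001)
dmu = mus[1] - mus[0]
lhs = np.sum(np.abs(np.array([hat_f(m) for m in mus]))**2) * dmu
rhs = 2 * np.pi * np.sum(f**2) * dt
assert abs(lhs - rhs) < 1e-5
```

The factor 2π on the right of the Plancherel formula reflects the absence of the (2π)^{−1/2} normalization in the definition of f̂ itself.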

Matrix-valued functions in the Nevanlinna class

Theorem 3.31 (N. Wiener and R. Paley) If f̂ ∈ W^{p×p}(0) and γ ∈ C^{p×p}, then there exists a matrix δ ∈ C^{p×p} and a mvf ĝ ∈ W^{p×p}(0) such that

(γ + f̂(μ))(δ + ĝ(μ)) = I_p  (3.92)

if and only if

det(γ + f̂(μ)) ≠ 0 for every point μ ∈ R and γ is invertible.  (3.93)

If f̂ ∈ W_+^{p×p}(0) (resp., W_−^{p×p}(0)) and γ ∈ C^{p×p}, then there exists a matrix δ ∈ C^{p×p} and a mvf ĝ ∈ W_+^{p×p}(0) (resp., W_−^{p×p}(0)) such that (3.92) holds for all points μ ∈ R if and only if

det(γ + f̂(λ)) ≠ 0 for every point λ ∈ C_+ (resp., C_−) and γ is invertible.  (3.94)

Proof The stated assertions for mvf's are easily deduced from the scalar versions, the first of which is due to N. Wiener; the second (and third) to Paley and Wiener; see [PaW34] for proofs and, for another approach, [GRS64]. □

If (3.92) holds, then (by the Riemann–Lebesgue lemma) γδ = I_p.

Theorem 3.32 (Paley–Wiener) Let f be a p × q mvf that is holomorphic in C_+. Then f ∈ H_2^{p×q} if and only if

f(λ) = ∫_0^∞ e^{iλx} f^∨(x) dx  for λ ∈ C_+

and some f^∨ ∈ L_2^{p×q}(R_+). Moreover, if f ∈ H_2^{p×q}, then its boundary values f(μ) admit the one-sided Fourier representation

f(μ) = ∫_0^∞ e^{iμx} f^∨(x) dx  a.e. on R.

Proof This follows easily from the scalar Paley–Wiener theorem; see, e.g., pp. 158–160 of [DMc72]. □

Theorem 3.33 (Paley–Wiener) A p × q mvf f admits a representation of the form

f(λ) = ∫_{−α}^{β} e^{iλx} f^∨(x) dx  for λ ∈ C

with 0 ≤ α, β < ∞ and some f^∨ ∈ L_2^{p×q}([−α, β]) if and only if f(λ) is an entire p × q mvf of exponential type with τ_+(f) ≤ α, τ_−(f) ≤ β and f ∈ L_2^{p×q}.

Proof This follows easily from the scalar Paley–Wiener theorem; see, e.g., pp. 162–164 of [DMc72]. □
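A concrete scalar instance of Theorems 3.32–3.33 can be verified numerically. The sketch below (illustrative example, not from the text) uses f(λ) = ∫_0^1 e^{iλx} dx = (e^{iλ} − 1)/(iλ), an entire function of exponential type with τ_+(f) = 0 and τ_−(f) = 1, and compares the closed form with a midpoint-rule approximation of the integral representation:

```python
import cmath, math

def f_closed(lam):
    # closed form of \int_0^1 e^{i lam x} dx
    return (cmath.exp(1j * lam) - 1) / (1j * lam)

def f_riemann(lam, n=5000):
    # midpoint-rule approximation of the same integral
    dx = 1.0 / n
    return sum(cmath.exp(1j * lam * (k + 0.5) * dx) for k in range(n)) * dx

for lam in (0.7 + 0.5j, -1.0 + 2.0j, 3.0 + 0.1j):
    assert abs(f_closed(lam) - f_riemann(lam)) < 1e-6

# growth on the negative imaginary axis: ln|f(-i nu)|/nu -> beta = 1
nu = 200.0
assert abs(math.log(abs(f_closed(-1j * nu))) / nu - 1.0) < 0.05
```

The last assertion illustrates τ_−(f) ≤ β with β = 1 for the support interval [0, 1] (i.e., α = 0, β = 1 in Theorem 3.33); convergence of ln|f(−iν)|/ν to 1 is slow, of order (ln ν)/ν.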

3.7 Entire inner mvf's

A scalar entire inner function f(λ) is automatically of the form

f(λ) = f(0) e^{iλd},  where |f(0)| = 1 and d ≥ 0.

The set of entire inner p × p mvf's is much richer. It includes mvf's of the form f(λ) = f(0) e^{iλD}, where

f(0) ∈ C^{p×p},  f(0)* f(0) = I_p,  D ∈ C^{p×p}  and  D ≥ 0,

as well as products of mvf's of this form. A complete description of the class E ∩ S_in^{p×p} is given by Potapov's theorem (Theorem 2.10) with J = I_p, and by Theorem 2.15 (the proof of which is based on Theorem 2.10 and is also due to Potapov).

Lemma 3.34 If s ∈ S^{p×p} and d(λ) = det s(λ), then

s ∈ E ∩ S_in^{p×p} ⇐⇒ d ∈ E ∩ S_in ⇐⇒ d(λ) = γ e_β(λ) for some γ ∈ T and β ≥ 0.

Proof Theorem 3.11 guarantees that s ∈ S_in^{p×p} if and only if d ∈ S_in. Thus, the implication s ∈ E ∩ S_in^{p×p} =⇒ d ∈ E ∩ S_in is clear. Conversely, if d ∈ E ∩ S_in, then d = γ e_δ for some choice of γ ∈ T and δ > 0 and, since s ∈ S_in^{p×p}, s(λ)^{−1} = e^{−iδλ} h(λ) for some choice of h ∈ H_∞^{p×p}. Therefore,

e^{−iδμ} s(μ) = h(μ)*  for almost all points μ ∈ R,

which in turn implies that

(s(λ)/(λ − ω)) ξ ∈ e^{iδλ} (H_2^p)^⊥  for ω ∈ C_+ and ξ ∈ C^p.

But this in turn implies that

((s(λ) − s(ω))/(λ − ω)) ξ ∈ H_2^p ⊖ e_δ H_2^p,

and hence that s is entire, in view of Theorem 3.33. □

Similar considerations lead easily to the following supplementary result.

Lemma 3.35 If b ∈ E ∩ S_in^{p×p} and b = b_1 b_2, where b_1, b_2 ∈ S_in^{p×p}, then b_1, b_2 ∈ E^{p×p}.

Proof By Lemma 3.34,

b ∈ E ∩ S_in^{p×p} =⇒ det b(λ) = γ e^{iλδ},

where |γ| = 1 and δ ≥ 0. Therefore, det b_1(λ) is of the same form and hence b_1 ∈ E ∩ S_in^{p×p}. The same argument applies to b_2. □

The preceding analysis also leads easily to the following set of conclusions, which will be useful in the sequel.

Theorem 3.36 If b ∈ E ∩ S_in^{p×p}, then:

(1) det b(λ) = e^{iλα} det b(0) for some α ≥ 0.
(2) The limits in (3.81) and (3.83) satisfy the inequalities

0 ≤ τ_b^− ≤ δ_b^− ≤ p τ_b^− < ∞.  (3.95)

(3) δ_b = δ_b^− = lim_{ν↑∞} ln|det b(−iν)|/ν = α.
(4) p τ_b ≥ α, with equality if and only if b(λ) = e^{iλα/p} b(0).
(5) b(λ) is an entire mvf of exponential type τ_b^−.

Proof Item (1) is established in Lemma 3.34. Moreover, in view of (1), f(λ) = b(λ)^{−1} = b^#(λ) is entire and satisfies the hypotheses of Theorem 3.27. The latter serves to establish (2); (3) is immediate from (1) and then (4) is immediate from (5) of Theorem 3.27. Finally, the proof of (5) rests mainly on the observation that

τ_f^+ = limsup_{ν↑∞} ln‖f(iν)‖/ν = limsup_{ν↑∞} ln‖b(−iν)‖/ν

bounds the growth of b(λ) on the negative imaginary axis. Theorem 3.24 applied to entire mvf's that are of the Nevanlinna class in the lower half plane C_− implies that τ_−(b) = τ_f^+. Therefore, τ(b) = max{τ_−(b), τ_+(b)} = τ_−(b), since ‖b(λ)‖ ≤ 1 in C_+. □

In subsequent developments, the structure of entire inner mvf's will be of central importance. The following three examples illustrate some of the possibilities:

Example 3.37 If b(λ) = e^{iλa} I_p with a ≥ 0, then p τ_b = δ_b = pa.

Example 3.38 If b(λ) = e^{iλa} ⊕ I_{p−1} = diag{e^{iλa}, 1, . . . , 1} with a ≥ 0, then τ_b = δ_b = a.

Example 3.39 If b(λ) = diag{e^{iλa_1}, e^{iλa_2}, . . . , e^{iλa_p}} with a_1 ≥ a_2 ≥ · · · ≥ a_p ≥ 0, then

τ_b = a_1  and  δ_b = a_1 + · · · + a_p.
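The two growth indicators in Example 3.39 can be estimated numerically from b(−iν) for a moderately large ν: the operator norm grows like e^{ν a_1}, while |det b(−iν)| grows like e^{ν(a_1+···+a_p)}. A minimal sketch assuming numpy (the exponents are illustrative):

```python
import numpy as np

a = np.array([3.0, 2.0, 0.5])      # a1 >= a2 >= a3 >= 0
nu = 40.0
lam = -1j * nu                     # point on the negative imaginary axis
b = np.diag(np.exp(1j * lam * a))  # b(lam) = diag(e^{i lam a_j})

tau_est = np.log(np.linalg.norm(b, 2)) / nu        # estimates tau_b = a1
delta_est = np.log(abs(np.linalg.det(b))) / nu     # estimates delta_b = sum a_j

assert abs(tau_est - 3.0) < 1e-6
assert abs(delta_est - 5.5) < 1e-6
```

For this diagonal example the estimates are exact up to rounding, since ‖b(−iν)‖ = e^{ν a_1} and |det b(−iν)| = e^{ν(a_1+a_2+a_3)} hold identically.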

For future use, we record the following observation, which is relevant to Example 3.38.

Lemma 3.40 If b ∈ E ∩ S_in^{p×p} admits a factorization b(λ) = b_1(λ) b_2(λ) with b_i ∈ S_in^{p×p} for i = 1, 2 and if δ_b = τ_b, then:

(1) b_i ∈ E ∩ S_in^{p×p} and
(2) δ_{b_i} = τ_{b_i} for i = 1, 2.

Proof Since (1) is available from Lemma 3.35, we can invoke Theorem 3.36 to help obtain the chain of inequalities

δ_b = δ_{b_1} + δ_{b_2} ≥ τ_{b_1} + τ_{b_2} ≥ τ_b.

But now as the upper and lower bounds are presumed to be equal, equality must prevail throughout. This leads easily to (2), since δ_{b_i} ≥ τ_{b_i}. □

Lemma 3.41 If b ∈ E ∩ S_in^{p×p} and τ_b = a, then:

(1) H(b) ⊆ H(e_a I_p), or, equivalently, e_a b^{−1} ∈ S_in^{p×p}.
(2) Every vvf f ∈ H(b) is an entire vvf of exponential type τ_f ≤ a.
(3) a = max{τ_f : f ∈ H(b)}.
(4) R_α b ξ ∈ H(e_a I_p) for every α ∈ C and ξ ∈ C^p.

Proof Assertion (2) follows from the corresponding assertion for mvf's U ∈ U(J) in Theorem 4.43, below. Then, since H(b) ⊂ H_2^p, (1) follows from (2) and the Paley–Wiener theorem. Assertion (3) follows from (2), the equality (3.79) and the fact that R_0 b ξ ∈ H(b) for every ξ ∈ C^p. Assertion (4) then follows from (1) and Lemma 3.14 when α ∈ C_+. However, since the entries of b(λ) are entire functions of exponential type no larger than a that are bounded on R, the Paley–Wiener theorem guarantees the existence of a mvf h ∈ L_2^{p×p} such that

b(λ) = b(0) + iλ ∫_0^a e^{iλs} h(s) ds  for every λ ∈ C,

and this in turn leads to the formula

(R_α b)(λ) = i ∫_0^a e^{iλt} ( h(t) − iα e^{−iαt} ∫_t^a e^{iαs} h(s) ds ) dt,

which serves to justify (4). □

3.8 J contractive, J-inner and entire J-inner mvf's

The Potapov class P(J) is the class of m × m mvf's U(λ) that are meromorphic in C_+ and are J-contractive in h_U^+, i.e.,

U(λ)* J U(λ) ≤ J  for all points λ ∈ h_U^+.  (3.96)

In [Po60], V.P. Potapov obtained a multiplicative representation of the mvf's in the multiplicative semigroup

P°(J) = {U ∈ P(J) : det U(λ) ≢ 0}  (3.97)

that generalizes the well-known Blaschke–Riesz–Herglotz factorization for scalar-valued functions s in the Schur class. It is easily seen that

P(I_m) = S^{m×m},  P°(I_m) = {s ∈ S^{m×m} : det s(λ) ≢ 0}

and

P(−I_m) = {U : U^{−1} ∈ S^{m×m}} = P°(−I_m).

The Potapov–Ginzburg transform of P(J) into S^{m×m} and U(J) into S_in^{m×m}

If U ∈ P(J) and J ≠ ±I_m, then the PG (Potapov–Ginzburg) transform S = PG(U) is given by the formula

S = PG(U) = (P_− + P_+ U)(P_+ + P_− U)^{−1} = (P_+ − U P_−)^{−1}(U P_+ − P_−),  (3.98)

where

P_± = (I_m ± J)/2  (3.99)

are complementary orthogonal projections and P_+ − P_− = J. It may be checked that if U ∈ P(J), then

det{(P_+ + P_− U(λ))(P_+ − U(λ) P_−)} ≠ 0  for λ ∈ h_U^+,  (3.100)

I_m − S(λ)* S(λ) = (P_+ + U(λ)* P_−)^{−1} {J − U(λ)* J U(λ)} (P_+ + P_− U(λ))^{−1}  for λ ∈ h_U^+  (3.101)

and

I_m − S(λ) S(λ)* = (P_+ − U(λ) P_−)^{−1} {J − U(λ) J U(λ)*} (P_+ − P_− U(λ)*)^{−1}  for λ ∈ h_U^+;  (3.102)

see, e.g., lemma 2.3 in [ArD08b]. Thus, S(λ) is a holomorphic contractive mvf on h_U^+ and hence has a unique holomorphic extension to a mvf in S^{m×m}. Moreover,

det{(P_+ + P_− S(λ))(P_+ − S(λ) P_−)} ≠ 0  for λ ∈ C_+,  (3.103)

and

U = PG(S) = (P_− + P_+ S)(P_+ + P_− S)^{−1} = (P_+ − S P_−)^{−1}(S P_+ − P_−).  (3.104)

Formula (3.104) implies the inclusion

P(J) ⊂ N^{m×m}.  (3.105)
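Formulas (3.98)–(3.104) can be spot-checked numerically in a constant (pointwise) setting. A minimal sketch assuming numpy, with J = j_pq and an illustrative random strict contraction S (whose lower-right block is generically invertible, as required for PG(S)):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 2, 1
m = p + q
J = np.diag([1.0] * p + [-1.0] * q)            # j_pq
Pp, Pm = (np.eye(m) + J) / 2, (np.eye(m) - J) / 2

A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
S = 0.6 * A / np.linalg.norm(A, 2)             # a strict contraction

U = (Pm + Pp @ S) @ np.linalg.inv(Pp + Pm @ S)     # U = PG(S), formula (3.104)

# J-contractivity as in (3.96): J - U* J U is positive semidefinite
assert np.linalg.eigvalsh(J - U.conj().T @ J @ U).min() > -1e-10

# PG is an involution: applying (3.98) to U recovers S
S2 = (Pm + Pp @ U) @ np.linalg.inv(Pp + Pm @ U)
assert np.allclose(S2, S)
```

The positivity of J − U*JU here is exactly what (3.101) predicts, since I − S*S > 0 for a strict contraction.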

Thus, as mvf's f ∈ N^{m×m} have nontangential boundary values a.e. on R, the same holds true for U ∈ P(J). Consequently,

U(μ) = lim_{ν↓0} U(μ + iν)

exists a.e. on R and

U(μ)* J U(μ) ≤ J  a.e. on R  (3.106)

for a mvf U ∈ P(J). This implies that det U(μ) ≠ 0 a.e. on R if J = −I_m.

A mvf U ∈ P(J) belongs to the class U(J) of J-inner mvf's if its nontangential boundary values are J-unitary a.e. on R, i.e., if

U(μ)* J U(μ) = J  for almost all points μ ∈ R.  (3.107)

If U ∈ U(J), then its domain of definition extends into C_− by the symmetry principle

U(λ) = J U^#(λ)^{−1} J  for λ ∈ C_− such that λ̄ ∈ h_U^+ and U(λ̄) is invertible,  (3.108)

and into R by nontangential limits. Consequently, U(λ) is holomorphic in h_U = h_U^+ ∪ h_U^− ∪ h_U^0, where h_U^± and h_U^0 are the domains of holomorphy of U(λ) in C_± and R, respectively, and

h_U^− ⊇ {λ ∈ C_− : λ̄ ∈ h_U^+ and det U(λ̄) ≠ 0}.

Moreover, the restriction U_− = U|_{h_U^−} of the mvf U to h_U^− is a meromorphic pseudocontinuation of the restriction U_+ = U|_{h_U^+} of the mvf U to h_U^+, i.e., U_− is a meromorphic mvf in C_− with bounded Nevanlinna characteristic in C_− and

U(μ) = lim_{ν↓0} U_+(μ + iν) = lim_{ν↓0} U_−(μ − iν)  a.e. on R.  (3.109)

Thus,

U(J) ⊂ Π^{m×m}  (3.110)

and the PG transform maps U(J) onto {S ∈ S_in^{m×m} : det(P_+ + P_− S) ≢ 0 in C_+}.

The class U(J) is a multiplicative semigroup; a mvf U_1 ∈ U(J) will be called a left divisor (resp., right divisor) of U if U_1^{−1} U ∈ U(J) (resp., U U_1^{−1} ∈ U(J)).

If U ∈ E ∩ U(J), then, in view of (3.110), U ∈ E ∩ Π^{m×m} and, by Theorem 3.30, it has finite exponential type τ_U and

τ_U = max{τ_U^+, τ_U^−}.  (3.111)

If U ∈ U(J), then |det U(μ)| = 1 a.e. on R and hence U(J) ⊆ P°(J) ⊆ P(J), where the last inclusion is proper if J ≠ −I_m. Thus, Potapov's results on the multiplicative structure of mvf's in the class P°(J) are automatically applicable to mvf's in the class U(J). Moreover,

U ∈ U(J) =⇒ det U ∈ N := N^{1×1} and |det U(μ)| = 1 a.e. on R

and hence

U ∈ U(J) =⇒ det U(λ) = b(λ)/d(λ),  where b, d ∈ S_in := S_in^{1×1}.

Thus,

U ∈ E ∩ U(J) =⇒ det U(λ) = γ e^{iαλ},  where α ∈ R, γ ∈ C and |γ| = 1.  (3.112)

In view of the equivalences S*S ≤ I_m ⇐⇒ SS* ≤ I_m and det(P_+ − P_− S) ≠ 0 ⇐⇒ det(P_+ + P_− S*) ≠ 0 for any S ∈ C^{m×m}, the PG transform implies that

U* J U ≤ J ⇐⇒ U J U* ≤ J  for every U ∈ C^{m×m}.

Moreover, since J² = I_m,

U* J U = J ⇐⇒ U J U* = J  for every U ∈ C^{m×m}.

Thus, in terms of the notation

f^∼(λ) = f(−λ̄)*,  (3.113)

it follows that

U ∈ P(J) ⇐⇒ U^∼ ∈ P(J)  and  U ∈ U(J) ⇐⇒ U^∼ ∈ U(J).

Formulas (3.101) and (3.102) and the preceding discussion yield the following conclusions:

Theorem 3.42 The PG transform (3.98) maps the class

P(J) onto {S ∈ S^{m×m} : det{P_+ + P_− S(λ)} ≢ 0}

and

U(J) onto {S ∈ S_in^{m×m} : det{P_+ + P_− S(λ)} ≢ 0}.

If J = j_pq with p ≥ 1 and q ≥ 1, then the orthogonal projectors defined in (3.99) can be written explicitly as

P_+ = [ I_p 0 ; 0 0_{q×q} ]  and  P_− = [ 0_{p×p} 0 ; 0 I_q ].

Correspondingly, W ∈ P(j_pq) and S = PG(W) may be written in block form as

W = [ w_11 w_12 ; w_21 w_22 ]  and  S = [ s_11 s_12 ; s_21 s_22 ]  (3.114)

with blocks w_11 and s_11 of size p × p and w_22 and s_22 of size q × q.

Remark 3.43 If W ∈ E ∩ U(j_pq), then the identities

W(λ) = j_pq {W^#(λ)}^{−1} j_pq  and  w_11 = {s_11^#}^{−1}

and the inequalities (3.121) and (3.123) imply that

τ_W^+ = τ_{w_22}^+  and  τ_W^− = τ_{w_11}^−.  (3.115)

Formulas (3.98)–(3.104) lead to the following conclusions: Lemma 3.44 Let W ∈ P ( j pq ), let S = PG(W ) be written in the standard block + . Then form (3.114) and fix λ ∈ hW (1) w22 (λ) and s22 (λ) are invertible.  −1   Ip Ip w11 w12 0 = (2) S = 0 Iq 0 w21 w22

−w12

−1  w11

−w22

w21

−1 s11 = w11 − w12 w−1 22 w21 , s12 = w12 w22 , −1 s21 = −w−1 22 w21 and s22 = w22 .

0 −Iq

 , i.e.,

(3.116)

88

Matrix-valued functions in the Nevanlinna class

(3) W =

 s11

s12

0

Iq



i.e.,

Ip

0

s21

s22



−1 =

Ip

−s12

−1  s11

0

−s22

s21

0 −Iq

−1 w11 = s11 − s12 s−1 22 s21 , w12 = s12 s22 ,

 ,

(3.117)

−1 w21 = −s−1 22 s21 and w22 = s22 .

(4) s11 ∈ S p×p , s12 ∈ S p×q , s21 ∈ S q×p , s22 ∈ S q×q and s12 (λ)∗ s12 (λ) < Iq

and

s21 (λ)∗ s21 (λ) < Ip .

(3.118)

det s11 (λ) and det S = det w11 (λ) . det w22 (λ) det s22 (λ) (6) W (λ) is invertible ⇐⇒ s11 (λ) is invertible, i.e., (5) det W (λ) =

det W (λ) = 0 ⇐⇒ det s11 (λ) = 0. (7) The formulas for W and S can also be expressed as     Ip s12 s11 Ip 0 0 W = 0 Iq 0 w22 −s21 Iq and S=

 Ip 0

w12 Iq



w11

0

0

s22



Ip

0

−w21

Iq

(3.119)

 .

(3.120)

(8) w22 (λ) = s−1 22 (λ) and w22 (λ) ≤ W (λ) ≤ 3w22 (λ).

(3.121)

(9) If W (λ) is invertible (or, equivalently, s11 (λ) is invertible), then     Ip Ip −s12 (λ) 0 s11 (λ)−1 0 −1 (3.122) W (λ) = s21 (λ) Iq 0 s22 (λ) 0 Iq and s11 (λ)−1  ≤ W (λ)−1  ≤ 3s11 (λ)−1 .

(3.123)

(10) If W ∈ U ( j pq ) and det W (λ) = 0, then w#11 (λ)s11 (λ) = Ip .

(3.124)

Proof Items (1), (2), (3) and the inclusions in (4) are immediate from formulas (3.100), (3.103), (3.98) and (3.104). Moreover, since S∗ S ≤ Im and SS∗ ≤ Im , it is readily seen that s∗12 s12 + s∗22 s22 ≤ Iq

and

s21 s∗21 + s22 s∗22 ≤ Iq .

The inequalities in (3.118) then follow from the fact that s22 is invertible at points + . λ ∈ hW

Item (6) is immediate from (5), which, in turn, follows from (3) and (2). Item (7) is obtained from the Schur complement formula

[ w_11 w_12 ; w_21 w_22 ] = [ I_p w_12 w_22^{−1} ; 0 I_q ] [ w_11 − w_12 w_22^{−1} w_21 0 ; 0 w_22 ] [ I_p 0 ; w_22^{−1} w_21 I_q ]  (3.125)

and the formulas in (3.116). Similar arguments serve to justify formula (3.120). The upper bound in (8) follows from (3.119), since the matrices s_ij are contractive. Finally, (9) follows from (7), and (10) follows from (3.108) with J = j_pq and (3.122). □

The preceding analysis implies that the PG transform maps

P(j_pq) bijectively onto {S ∈ S^{m×m} : det s_22(λ) ≢ 0}

and

U(j_pq) bijectively onto {S ∈ S_in^{m×m} : det s_22(λ) ≢ 0}.
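The block formulas (3.116)–(3.117) and the determinant identity in item (5) of Lemma 3.44 are purely algebraic for any matrix with an invertible lower-right block, so they can be spot-checked numerically. A minimal sketch assuming numpy (the random matrix and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p = q = 2
W = rng.standard_normal((p + q, p + q)) + 1j * rng.standard_normal((p + q, p + q))
w11, w12, w21, w22 = W[:p, :p], W[:p, p:], W[p:, :p], W[p:, p:]

w22i = np.linalg.inv(w22)
s11 = w11 - w12 @ w22i @ w21          # formula (3.116)
s12 = w12 @ w22i
s21 = -w22i @ w21
s22 = w22i

s22i = np.linalg.inv(s22)             # formula (3.117) recovers the w-blocks
assert np.allclose(s11 - s12 @ s22i @ s21, w11)
assert np.allclose(s12 @ s22i, w12)
assert np.allclose(-s22i @ s21, w21)

# item (5): det W = det s11 / det s22
assert np.isclose(np.linalg.det(W), np.linalg.det(s11) / np.linalg.det(s22))
```

The determinant identity follows from the Schur complement factorization (3.125), since det s_22 = 1/det w_22.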

Let U_const(J) = {U ∈ C^{m×m} : U* J U = J}.

Lemma 3.45 If W ∈ U_const(j_pq), then there exists a unique choice of parameters

k ∈ C^{p×q}, u ∈ C^{p×p}, v ∈ C^{q×q}  with  k*k < I_q, u*u = I_p and v*v = I_q  (3.126)

such that

W = [ (I_p − kk*)^{−1/2}  k(I_q − k*k)^{−1/2} ; k*(I_p − kk*)^{−1/2}  (I_q − k*k)^{−1/2} ] [ u 0 ; 0 v ].  (3.127)

Conversely, if k, u, v is any set of three matrices that meet the conditions in (3.126), then the matrix W defined by formula (3.127) is j_pq-unitary.

Proof Let W ∈ U_const(j_pq), let w_ij and s_ij be the blocks in the four-block decomposition of W and S = PG(W), respectively, and let k = s_12. Then, by Lemma 3.44, k*k < I_q,

s_11 s_11* = I_p − kk*,  s_22* s_22 = I_q − k*k  and  s_21 s_11* = −s_22 k*,

since S*S = I_m, where m = p + q. The first two equalities imply that

s_11 = (I_p − kk*)^{1/2} u  and  s_22 = v*(I_q − k*k)^{1/2},  (3.128)

where u*u = uu* = I_p, v*v = vv* = I_q and u and v are uniquely defined by s_11 and s_22, respectively. Moreover, these formulas lead easily to the asserted parametrization of W with parameters that are uniquely defined by W and satisfy the constraints in (3.126), since

w_11 = s_11^{−*},  w_22 = s_22^{−1},  w_12 = k s_22^{−1}  and  w_21 = −s_22^{−1} s_21 = k* s_11^{−*}.  (3.129)

The converse is easily checked by direct calculation. □
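The parametrization (3.127) and the recovery of k from W can be checked by direct computation. A minimal sketch assuming numpy; the particular k, u, v below are illustrative and satisfy the constraints (3.126):

```python
import numpy as np

def inv_sqrt(M):
    # M^{-1/2} for a Hermitian positive definite M, via eigendecomposition
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** -0.5) @ V.conj().T

p, q = 2, 1
k = np.array([[0.3 + 0.1j], [-0.2j]])              # p x q with k*k < I_q
u = np.diag(np.exp(1j * np.array([0.4, -1.1])))    # unitary p x p
v = np.array([[np.exp(0.7j)]])                     # unitary q x q

A = inv_sqrt(np.eye(p) - k @ k.conj().T)           # (I_p - kk*)^{-1/2}
B = inv_sqrt(np.eye(q) - k.conj().T @ k)           # (I_q - k*k)^{-1/2}
H = np.block([[A, k @ B], [k.conj().T @ A, B]])    # the first factor in (3.127)
W = H @ np.block([[u, np.zeros((p, q))], [np.zeros((q, p)), v]])

j = np.diag([1.0] * p + [-1.0] * q)
assert np.allclose(W.conj().T @ j @ W, j)                    # W is j_pq-unitary
assert np.allclose(W[:p, p:] @ np.linalg.inv(W[p:, p:]), k)  # k = w12 w22^{-1} = s12
```

The second assertion reflects (3.129): w_12 = k s_22^{−1} and w_22 = s_22^{−1}, so the parameter k is read off from W as w_12 w_22^{−1} regardless of u and v.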

Lemma 3.46 If W ∈ U_const(j_pq), then u = I_p and v = I_q in the parametrization formula (3.127) if and only if W > 0.

Proof If W ∈ U_const(j_pq) and W > 0, then the parametrization formula (3.127) implies that

w_11 = (I_p − kk*)^{−1/2} u > 0  and  w_22 = (I_q − k*k)^{−1/2} v > 0.

Therefore, by the uniqueness of polar decompositions, u = I_p and v = I_q. Conversely, if W ∈ U_const(j_pq) and u = I_p and v = I_q, then, by Schur complements, formula (3.127) can be written as

W = [ I_p k ; 0 I_q ] [ (I_p − kk*)^{1/2} 0 ; 0 (I_q − k*k)^{−1/2} ] [ I_p 0 ; k* I_q ],

since w_11 − w_12 w_22^{−1} w_21 = (I_p − kk*)^{1/2}. But this clearly displays W as a positive definite matrix. □

Remark 3.47 In view of Lemma 3.46, the product in (3.127) is the polar decomposition of the constant j_pq-unitary matrix W, in which the first term on the right is a positive definite j_pq-unitary matrix and the second term is both unitary and j_pq-unitary.

Connections with the classes C^{m×m} and C_sing^{m×m}

Recall that C_sing^{m×m} denotes the subclass of mvf's c(λ) in the Carathéodory class C^{m×m} with singular spectral functions σ(μ) (in the integral representation (1.65)), i.e.,

C_sing^{m×m} = {c ∈ C^{m×m} : Rc(μ) = 0 a.e. on R}.  (3.130)

It is easily checked that the Cayley transform

S = T_V(C) = (I_m − C)(I_m + C)^{−1}  (3.131)

maps C^{m×m} bijectively onto

{S ∈ S^{m×m} : det{I_m + S(λ)} ≢ 0}  (3.132)

and that C(λ) may be recovered from S(λ) by the same formula:

C = T_V(S) = (I_m − S)(I_m + S)^{−1}.  (3.133)

Moreover, T_V maps C_sing^{m×m} onto

{S ∈ S_in^{m×m} : det{I_m + S(λ)} ≢ 0}.  (3.134)
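The pointwise content of (3.131)–(3.133) — a matrix with positive real part maps to a contraction, and the same formula inverts the map — can be checked numerically. A minimal sketch assuming numpy (the random matrix with Re C > 0 is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 3
M = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
H = rng.standard_normal((m, m))
C = M @ M.conj().T + 0.1 * np.eye(m) + 1j * (H + H.T)   # Re C > 0
I = np.eye(m)

S = (I - C) @ np.linalg.inv(I + C)      # Cayley transform, formula (3.131)
assert np.linalg.norm(S, 2) < 1.0       # S is a (strict) contraction
assert np.allclose((I - S) @ np.linalg.inv(I + S), C)   # (3.133) inverts it
```

The contractivity follows from (I − C)*(I − C) ≤ (I + C)*(I + C) ⇐⇒ C + C* ≥ 0, and I + S = 2(I + C)^{−1} shows why the inverse transform is given by the same formula.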

Analogous connections exist between the classes P(J) and C^{m×m} and between the classes U(J) and C_sing^{m×m}:

Lemma 3.48 The transform

C = J(I_m − U)(I_m + U)^{−1}  (3.135)

maps {U ∈ E ∩ U°(J) : det(I_m + U(λ)) ≢ 0} injectively into

{C ∈ C_sing^{m×m} : C is meromorphic in C, h_C ⊃ C_+ ∪ C_− ∪ {0} and C(0) = 0}.  (3.136)

Proof See Lemma 6.3 and the proof of theorem 5.22 in [ArD08b]. □

3.9 Associated pairs of the first kind

Associated pairs of the first kind will be defined initially for mvf's W ∈ U(j_pq) in terms of inner–outer and outer–inner factorizations of the blocks s_11 and s_22 in the PG transform of W; see (3.138) below. Subsequently, associated pairs of the first kind will be defined for mvf's U ∈ U(J) when J is unitarily equivalent to j_pq.

Let W ∈ U(j_pq) and let S = PG(W) be the Potapov–Ginzburg transform of W. Then, by Lemma 3.44, the blocks on the diagonal in the standard four-block decomposition of S are such that

s_11 ∈ S^{p×p},  det s_11(λ) ≢ 0,  s_22 ∈ S^{q×q}  and  det s_22(λ) ≢ 0.  (3.137)

Therefore, both of these blocks admit inner–outer and outer–inner factorizations. In particular,

s_11 = b_1 φ_1,  where b_1 ∈ S_in^{p×p} and φ_1 ∈ S_out^{p×p},  (3.138)

and

s_22 = φ_2 b_2,  where b_2 ∈ S_in^{q×q} and φ_2 ∈ S_out^{q×q}.  (3.139)

A pair {b_1, b_2} of mvf's b_1 ∈ S_in^{p×p} and b_2 ∈ S_in^{q×q} that is obtained from the factorizations (3.138) and (3.139) is called an associated pair of W and the set of such pairs is denoted ap(W). If {b_1, b_2} ∈ ap(W), then

ap(W) = {{b_1 u, v b_2} : u ∈ C^{p×p} and v ∈ C^{q×q} are unitary matrices}.

If

J = V* j_pq V,  V*V = VV* = I_m  and  W(λ) = V U(λ) V*,  (3.140)

then

U ∈ U(J) ⇐⇒ W ∈ U(j_pq).  (3.141)

If U ∈ U(J) and W(λ) is defined by (3.140), then an associated pair {b_1, b_2} ∈ ap(W) is called an associated pair of the first kind for U. The set of such pairs is denoted ap_I(U), i.e., ap_I(U) = ap(VUV*). This definition depends upon the choice of the unitary matrix V in the formula J = V* j_pq V in a nonessential way: If V_1 is also a unitary matrix such that J = V_1* j_pq V_1, then (V_1 V*) j_pq = j_pq (V_1 V*) and hence

V_1 V* = [ u 0 ; 0 v ],  i.e., V_1 = [ u 0 ; 0 v ] V,

where u ∈ C^{p×p} and v ∈ C^{q×q} are both unitary matrices. Thus, if W_1(λ) = V_1 U(λ) V_1*, then

W_1(λ) = V_1 V* (V U(λ) V*) V V_1* = (V_1 V*) W(λ) (V_1 V*)* = [ u 0 ; 0 v ] W(λ) [ u* 0 ; 0 v* ].

Consequently,

{b_1, b_2} ∈ ap(W) ⇐⇒ {u b_1 u*, v* b_2 v} ∈ ap(W_1).  (3.142)

Thus, the set ap_I(U) depends upon the choice of the unitary matrix V such that J = V* j_pq V only up to the transformation

b_1(λ) −→ u b_1(λ) u*  and  b_2(λ) −→ v* b_2(λ) v

for some pair of unitary matrices u ∈ C^{p×p} and v ∈ C^{q×q}, which depend upon the choice of the unitary matrices V and V_1.

Lemma 3.49 If U ∈ U(J), J is unitarily equivalent to j_pq and {b_1, b_2} ∈ ap_I(U), then

det U = γ (det b_1/det b_2)  for some γ ∈ T.  (3.143)

Proof We may assume that J = j_pq and {b_1, b_2} ∈ ap(W), where W ∈ U(j_pq). Then, in view of formulas (3.138) and (3.139),

φ_1* φ_1 = s_11* s_11 = I_p − s_21* s_21

and

φ_2 φ_2* = s_22 s_22* = I_q − s_21 s_21*.

Therefore,

|det φ_2(μ)| = |det φ_1(μ)|  a.e. on R  (3.144)

and hence, as det φ_1 and det φ_2 are both outer functions,

det φ_1(λ) = γ det φ_2(λ)  for some γ ∈ T.  (3.145)

Formula (3.143) now follows easily from (5) of Lemma 3.44 and formula (3.145). □

Lemma 3.50 Let U = U_1 U_2, where U_j ∈ U(J) for j = 1, 2 and J = V* j_pq V for some unitary matrix V ∈ C^{m×m}, and suppose {b_1, b_2} ∈ ap_I(U) and {b_1^{(1)}, b_2^{(1)}} ∈ ap_I(U_1). Then (b_1^{(1)})^{−1} b_1 ∈ S_in^{p×p} and b_2 (b_2^{(1)})^{−1} ∈ S_in^{q×q}.

Proof This follows from lemma 4.28 in [ArD08b], which treats the case J = j_pq. □

3.10 Singular and right (and left) regular J-inner mvf's

Recall that a mvf U ∈ U(J) is said to belong to the class U_S(J) of singular J-inner mvf's if it belongs to N_out^{m×m}, i.e.,

U_S(J) = U(J) ∩ N_out^{m×m}.  (3.146)

Clearly, U ∈ U_S(J) ⇐⇒ U^∼ ∈ U_S(J). If J = ±I_m, then U_S(J) is just the class of unitary matrices in C^{m×m}. If J ≠ ±I_m, then every elementary Blaschke–Potapov factor

U(λ) = I_m + ε/(πi(λ − ω))  with ω ∈ R, ε ∈ C^{m×m}, εJ ≥ 0, ε² = 0,  (3.147)

with a pole in R belongs to the class U_S(J), since (λ − ω) is an outer function in the Smirnov class N_+ and hence both U(λ) and

U(λ)^{−1} = I_m − ε/(πi(λ − ω))

belong to the Smirnov class N_+^{m×m}. Similarly, every elementary Blaschke–Potapov factor

U(λ) = I_m + λε  with ε ∈ C^{m×m}, εJ ≥ 0, ε² = 0,  (3.148)

with a pole at ∞ belongs to U_S(J), as do finite products and convergent infinite products of such factors. The latter assertion is a consequence of the following result:

Theorem 3.51 Let U = U_1 U_2, where U_k ∈ U(J) for k = 1, 2. Then U ∈ U_S(J) if and only if U_1 ∈ U_S(J) and U_2 ∈ U_S(J). Moreover, convergent left or right products of mvf's in U_S(J) belong to U_S(J).

Proof See theorems 4.33 and 4.44 in [ArD08b]. □

with U1 ∈ U (J) and U2 ∈ US (J) =⇒ U2 ∈ Uconst (J).

Analogously, a mvf U ∈ U (J) is said to belong to the class UR (J) of left regular J-inner mvf’s if U = U2U1

with U1 ∈ U (J) and U2 ∈ US (J) =⇒ U2 ∈ Uconst (J).

It is useful to note that U ∈ UrR (J) ⇐⇒ U ∼ ∈ UR (J).

(3.149)

If J = ±Im , then UrR (J) = UR (J) = U (J). Thus, UrR (J) and UR (J) are only proper subclasses of U (J) if J = ±Im . Lemma 3.52 If U ∈ U (J), P± = (Im ± J)/2, J = ±Im and S = PG(U ), then m×m U ∈ US (J) ⇐⇒ (P+ + P− S)/2 ∈ Sout

Proof

and

m×m (P− + P+ S)/2 ∈ Sout .

See lemma 4.40 in [ArD08b].



Lemma 3.53 If U ∈ U(J), J ≠ ±I_m and {b_1, b_2} ∈ ap_I(U), then

(1) U ∈ N_+^{m×m} ⇐⇒ b_2(λ) is a constant unitary matrix.
(2) U^{−1} ∈ N_+^{m×m} ⇐⇒ b_1(λ) is a constant unitary matrix.
(3) U ∈ U_S(J) ⇐⇒ b_1(λ) and b_2(λ) are both constant unitary matrices.

Proof This follows from lemmas 4.33 and 4.39 in [ArD08b]. □

Lemma 3.54 Let U = U_1 U_2, where U_j ∈ U(J) for j = 1, 2 and J ≠ ±I_m. Then

(1) U ∈ N_+^{m×m} ⇐⇒ U_1 ∈ N_+^{m×m} and U_2 ∈ N_+^{m×m}.
(2) U^# ∈ N_+^{m×m} ⇐⇒ U_1^# ∈ N_+^{m×m} and U_2^# ∈ N_+^{m×m}.
(3) U_2 ∈ U_S(J) ⇐⇒ ap_I(U_1) = ap_I(U).

Proof It suffices to consider the case J = j_pq. But then the blocks of the PG transforms S = PG(W), S_1 = PG(W_1) and S_2 = PG(W_2) are connected by the

formulas

s_11 = s_11^{(1)} (I_p − s_12^{(2)} s_21^{(1)})^{−1} s_11^{(2)}  (3.150)

and

s_22 = s_22^{(2)} (I_q − s_21^{(1)} s_12^{(2)})^{−1} s_22^{(1)}.  (3.151)

Since the middle factors on the right in these two formulas are both outer mvf's, thanks to (4) of Theorem 3.11, it follows that

s_11 ∈ S_out^{p×p} ⇐⇒ s_11^{(1)} ∈ S_out^{p×p} and s_11^{(2)} ∈ S_out^{p×p},

s_22 ∈ S_out^{q×q} ⇐⇒ s_22^{(1)} ∈ S_out^{q×q} and s_22^{(2)} ∈ S_out^{q×q}

and

ap(W) = ap(W_1) ⇐⇒ s_22^{(2)} ∈ S_out^{q×q} and s_11^{(2)} ∈ S_out^{p×p} ⇐⇒ W_2 ∈ U_S(j_pq). □

Theorem 3.55 Let U ∈ U(J), J ≠ ±I_m, and let {b_1, b_2} ∈ ap_I(U). Then

h_U^+ = h_{b_2^#}^+,  h_U^− = h_{b_1}^−  and  h_U ⊆ h_{b_2^#} ∩ h_{b_1}.  (3.152)

Moreover:

(1) If U is entire, then b_1 and b_2 are entire.
(2) If U ∈ U_rR(J), then h_U = h_{b_2^#} ∩ h_{b_1} and hence U is entire if and only if b_1 and b_2 are entire.

Proof This is theorem 4.54 in [ArD08b]. □



Theorem 3.56 Let U ∈ E ∩ U (J), where J = V ∗ j pqV for some unitary m × m matrix V and let {b1 , b2 } ∈ apI (U ). Then: (1) (2) (3) (4)

b1 ∈ E ∩ Sinp×p and b2 ∈ E ∩ Sinq×q . τU+ = τb+−1 and τU− = τb−1 . 2 U ∈ US (J) ⇐⇒ τU = 0. U ∈ US (J) ⇐⇒ τb1 = 0 and τb2 = 0, i.e., if and only if b1 (λ) = b1 (0) and b2 (λ) = b2 (0).

Proof The first assertion is contained in Theorem 3.55; the second follows from theorem 4.58 in [ArD08b]. Assertion (4) is covered by Lemma 3.54; (3) then follows from (2) and (4).  Remark 3.57 The converse of assertion (1) in Theorem 3.55 is not true. The elementary Blaschke–Potapov factor in (3.147) belongs to the class US (J). Consequently, if {b1 , b2 } ∈ apI (U ), then b1 (λ) and b2 (λ) are both unitary constant

matrices and therefore entire, but U is not entire. The matrices

J = J_p  and  ε = [ 0 −δ ; 0 0 ]  with δ ∈ C^{p×p} and δ > 0

meet the stated conditions.

Theorem 3.56 implies that if J ≠ ±I_m, then an entire J-inner mvf U(λ) is a singular J-inner mvf if and only if it is of minimal exponential type. The resolvent matrices of completely indeterminate moment problems and bitangential generalizations of such problems are products of elementary Blaschke–Potapov factors with poles at infinity and hence they belong to the class E ∩ U_S(J). The matrizants of the matrix Schrödinger equation (2.88) with a locally summable potential q(x) on [0, d) also belong to this class, since (as was shown in Section 2.9) they are entire mvf's of minimal exponential type.

Characterizations of the mvf's U in the classes U_S(J) and U_rR(J) in terms of the corresponding RKHS's H(U) as in (1.27) and (1.24) will be discussed later in Chapter 4.

3.11 Linear fractional transformations of S^{p×q} into itself

The proofs of the next two lemmas may be found on pp. 210–213 of [ArD08b] and the references cited therein.

Lemma 3.58 If W ∈ P(j_pq), then:

(1) S^{p×q} ⊆ D(T_W) and T_W[S^{p×q}] ⊆ S^{p×q}.
(2) S^{q×p} ⊆ D(T_W) and T_W[S^{q×p}] ⊆ S^{q×p}.

If W ∈ U(j_pq), then also:

(3) If p ≥ q, then T_W[S_in^{p×q}] ⊆ S_in^{p×q} and T_W[S_{*in}^{q×p}] ⊆ S_{*in}^{q×p}.
(4) If p ≤ q, then T_W[S_{*in}^{p×q}] ⊆ S_{*in}^{p×q} and T_W[S_in^{q×p}] ⊆ S_in^{q×p}.
(5) If W = W_1 W_2, where W_1, W_2 ∈ P(j_pq), then T_W[S^{p×q}] ⊆ T_{W_1}[S^{p×q}].
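The pointwise content of item (1) of Lemma 3.58 — a j_pq-contractive matrix maps contractions to contractions under the linear fractional transform T_W[ε] = (w_11 ε + w_12)(w_21 ε + w_22)^{−1} — can be checked numerically. A minimal sketch assuming numpy (the random W, built as PG of an illustrative strict contraction, and the random ε are not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
p = q = 2
m = p + q
j = np.diag([1.0] * p + [-1.0] * q)
Pp, Pm = (np.eye(m) + j) / 2, (np.eye(m) - j) / 2

A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
S = 0.6 * A / np.linalg.norm(A, 2)
W = (Pm + Pp @ S) @ np.linalg.inv(Pp + Pm @ S)   # a j_pq-contractive matrix, cf. (3.104)
w11, w12, w21, w22 = W[:p, :p], W[:p, p:], W[p:, :p], W[p:, p:]

E = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
eps = 0.7 * E / np.linalg.norm(E, 2)             # a strict contraction, i.e. a point value of a Schur-class mvf
s = (w11 @ eps + w12) @ np.linalg.inv(w21 @ eps + w22)   # T_W[eps]
assert np.linalg.norm(s, 2) <= 1.0 + 1e-10
```

Invertibility of w_21 ε + w_22 here follows from w_21 ε + w_22 = w_22(I − s_21 ε) with ‖s_21 ε‖ < 1, using the block formulas (3.116).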

Lemma 3.59 If W_n ∈ U(j_pq), n ≥ 1, is a nondecreasing sequence such that

W(λ) = lim_{n↑∞} W_n(λ) for λ ∈ ⋂_{n≥1} h_{W_n}^+  and  W ∈ P°(j_pq),

then

T_{W_{n+1}}[S^{p×q}] ⊆ T_{W_n}[S^{p×q}],  T_W[S^{p×q}] = ⋂_{n≥1} T_{W_n}[S^{p×q}]

and W ∈ U(j_pq). Moreover, if {b_1^{(n)}, b_2^{(n)}} ∈ ap(W_n) for n ≥ 1 and this sequence is normalized by the conditions

b_1^{(n)}(ω) > 0,  b_2^{(n)}(ω) > 0  at a point ω ∈ C_+,

then the limits

b_1(λ) = lim_{n↑∞} b_1^{(n)}(λ)  and  b_2(λ) = lim_{n↑∞} b_2^{(n)}(λ)

exist at each point λ ∈ C_+, b_1 ∈ S_in^{p×p}, b_2 ∈ S_in^{q×q} and {b_1, b_2} ∈ ap(W).

If W_n ∈ U°(j_pq) for every integer n ≥ 1, then the same conclusion holds for ω = 0, i.e., for the normalization b_1^{(n)}(0) = I_p and b_2^{(n)}(0) = I_q.

Remark 3.60 If W ∈ P(j_pq), ε_n ∈ S^{p×q} for n ≥ 1 and

s(λ) = lim_{n↑∞} T_W[ε_n],

then there exists a subsequence {ε_{n_k}}, k = 1, 2, . . ., of {ε_n} that converges to a mvf ε_0 ∈ S^{p×q} at each point of C_+. Therefore,

s(λ) = lim_{k↑∞} T_{W(λ)}[ε_{n_k}(λ)] = (T_W[ε_0])(λ);

i.e., the set T_W[S^{p×q}] is a closed subspace of S^{p×q} with respect to pointwise convergence.

Proofs of the next several results may be found on pp. 230–233 of [ArD08b].

Lemma 3.61 If U ∈ E ∩ U(J), J ≠ ±I_m, and {b_1, b_2} ∈ ap_I(U), then ρU ∈ P(J) for some scalar function ρ if and only if

ρ(λ) = e^{iγ} e_β(λ),  where γ = γ̄ and β = β_2 − β_1,  (3.153)

and β_1 and β_2 are nonnegative numbers such that

e_{β_1}^{−1} b_1 ∈ S_in^{p×p}  and  e_{β_2}^{−1} b_2 ∈ S_in^{q×q}.  (3.154)

Moreover, for such a choice of ρ, ρU ∈ E ∩ U(J).

Theorem 3.62 Let W be a nondegenerate meromorphic m × m mvf in C_+ and let

S^{p×q} ⊆ D(T_W)  and  T_W[S^{p×q}] = S^{p×q}.  (3.155)

Then there exists a scalar meromorphic function ρ(λ) in C_+ such that ρW ∈ U_const(j_pq). If W ∈ P°(j_pq) and (3.155) holds, then W ∈ U_const(j_pq).

Lemma 3.63 Let W and W_1 both belong to U(j_pq). Then

T_W[S^{p×q}] = T_{W_1}[S^{p×q}]  and  ap(W) = ap(W_1)

if and only if W_1(λ) = W(λ)U and U ∈ U_const(j_pq).

Theorem 3.64 Let W and W_1 both belong to U(j_pq), let {b_1, b_2} ∈ ap(W) and {b_1^{(1)}, b_2^{(1)}} ∈ ap(W_1). Then the conditions

(b_1^{(1)})^{−1} b_1 ∈ S_in^{p×p},  b_2 (b_2^{(1)})^{−1} ∈ S_in^{q×q}  and  T_W[S^{p×q}] ⊆ T_{W_1}[S^{p×q}]

hold if and only if W_1^{−1} W ∈ U(j_pq). Moreover, if W_1^{−1} W ∈ U(j_pq), then

W_1^{−1} W ∈ U_S(j_pq) ⇐⇒ ap(W_1) = ap(W).  (3.156)

(3.157)

Then: (1) W1 ∈ E ∩ U ( j pq ) and W =

eβ1 W1W2 , eβ2

(3.158)

where W2 ∈ E ∩ U ( j pq ), β1 ≥ 0, β2 ≥ 0, p×p e−1 β1 b1 ∈ E ∩ Sin

and

eβ −1 b2 ∈ E ∩ Sinq×q . 2

(3.159)

(2) If equality prevails in (3.157), then W2 ∈ Uconst ( j pq ).

3.12 Linear fractional transformations in C^{p×p} and from S^{p×p} into C^{p×p}

Let m = 2p and let

A(λ) = [ a_11(λ) a_12(λ) ; a_21(λ) a_22(λ) ]

be a meromorphic m × m mvf in C_+ with blocks a_ij(λ) of size p × p, and let

B(λ) = A(λ) V = [ b_11(λ) b_12(λ) ; b_21(λ) b_22(λ) ]  (3.160)

and

W(λ) = V A(λ) V = [ w_11(λ) w_12(λ) ; w_21(λ) w_22(λ) ],  (3.161)

where V is defined in formula (1.6).

If A ∈ U(J_p), then it is convenient to consider linear fractional transformations based on the mvf's

    W(λ) = VA(λ)V,   B(λ) = A(λ)V   (3.162)

and A(λ). The mvf's B(λ) belong to the class U(j_p, J_p) of m × m mvf's that are meromorphic in C_+ and (j_p, J_p)-contractive in h_B^+, i.e.,

    B(λ)* J_p B(λ) ≤ j_p  for every λ ∈ h_B^+   (3.163)

and are also (j_p, J_p)-unitary a.e. on R:

    B(μ)* J_p B(μ) = j_p  a.e. on R.   (3.164)

Moreover,

    A ∈ U(J_p) ⟺ B ∈ U(j_p, J_p) ⟺ W ∈ U(j_p)   (3.165)

and

    A ∈ E ∩ U(J_p) ⟺ B ∈ E ∩ U(j_p, J_p) ⟺ W ∈ E ∩ U(j_p).   (3.166)

If A ∈ P(J_p) and B and W are defined by (3.162), then, in view of Lemma 3.58 and formulas (3.39) and (3.40),

    S^{p×p} ∩ D(T_B) = {ε ∈ S^{p×p} : det(b_{21}ε + b_{22}) ≢ 0 in C_+}
                     = {ε ∈ S^{p×p} : T_W[ε] ∈ D(T_V)}

and

    T_B[S^{p×p} ∩ D(T_B)] = T_V[T_W[S^{p×p} ∩ D(T_B)]] ⊆ C^{p×p}.

Recall that

    C(A) = T_B[S^{p×p} ∩ D(T_B)].   (3.167)

Then it is easy to check that

    T_A[C^{p×p} ∩ D(T_A)] ⊆ C(A) ⊆ C^{p×p} ⊂ D(T_V),   (3.168)
    T_V[C(A)] ⊆ T_W[S^{p×p}],  T_V[S_in^{p×p} ∩ D(T_V)] ⊆ C_sing^{p×p},   (3.169)
    C̊^{p×p} ⊂ D(T_A),   (3.170)

and, if τ ∈ C̊^{p×p} and c = T_A[τ], then (Rc)(λ) > 0 for every point λ ∈ C_+. Moreover, it is readily seen that if A ∈ U(J_p), then also

    T_A[C_sing^{p×p} ∩ D(T_A)] ⊆ T_B[S_in^{p×p} ∩ D(T_B)] ⊆ C_sing^{p×p} ⊂ D(T_V)   (3.171)

and

    T_V[T_B[S_in^{p×p} ∩ D(T_B)]] ⊆ T_W[S_in^{p×p}].   (3.172)
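The role of T_V as a matrix Cayley transform carrying the Schur class into the Carathéodory class can be illustrated numerically. With the normalization V = 2^{-1/2}[[−I_p, I_p], [I_p, I_p]] (an assumption, since formula (1.6) is outside this excerpt), T_V[ε] = (I_p − ε)(I_p + ε)^{−1}, and Rc ⪰ 0 whenever ‖ε‖ < 1 — a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3

# Random strict contraction eps (||eps|| < 1), a sample point of the Schur class.
X = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
eps = 0.9 * X / np.linalg.norm(X, 2)

# With V = 2^{-1/2}[[-I, I], [I, I]] (assumed normalization),
# T_V[eps] = (v11 eps + v12)(v21 eps + v22)^{-1} = (I - eps)(I + eps)^{-1}.
I = np.eye(p)
c = (I - eps) @ np.linalg.inv(I + eps)

# Re c := (c + c*)/2 is positive definite, i.e. c lies in the Caratheodory class.
herm = (c + c.conj().T) / 2
assert np.linalg.eigvalsh(herm).min() > 0
```

The underlying algebra: c + c* = 2(I + ε)^{−*}(I − ε*ε)(I + ε)^{−1}, which is positive definite exactly when ε is a strict contraction.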


In view of (3.162) and (3.113), condition (3.163) is equivalent to the condition

    B(λ) j_p B(λ)* ≤ J_p  for every λ ∈ h_B^+,   (3.173)

whereas condition (3.164) is equivalent to the condition

    B(μ) j_p B(μ)* = J_p  for almost all points μ ∈ R.   (3.174)

In particular, (3.173) implies that

    r(λ) := b_{22}(λ)b_{22}(λ)* − b_{21}(λ)b_{21}(λ)* ≥ 0  for λ ∈ h_B^+,   (3.175)

whereas (3.163) implies that

    b_{12}(λ)* b_{22}(λ) + b_{22}(λ)* b_{12}(λ) ≥ I_p  for λ ∈ h_B^+   (3.176)

and (3.174) implies that

    b_{22}(μ)b_{22}(μ)* − b_{21}(μ)b_{21}(μ)* = 0  a.e. on R.   (3.177)

Lemma 3.66 If B ∈ U(j_p, J_p), then:

(1) det b_{22}(λ) ≠ 0 for every point λ ∈ h_B^+.
(2) The mvf c_0 = b_{12} b_{22}^{−1} = T_B[0_{p×p}] belongs to C^{p×p}.
(3) The mvf χ = b_{22}^{−1} b_{21} belongs to S_in^{p×p}.
(4) The mvf ρ_i^{−1}(b_{22})^{−1} belongs to H_2^{p×p}.
(5) ρ_i^{−1}(b_{21}^#)^{−1} ∈ H_2^{p×p}.
(6) b_{11} − b_{12} b_{22}^{−1} b_{21} = −(b_{21}^#)^{−1}.

Proof This follows from lemma 4.35 in [ArD08b]. □

Lemma 3.67 Let A ∈ U(J_p), let B = AV and let χ = b_{22}^{−1} b_{21}. Then the following conditions are equivalent:

(1) I_p − χ(ω)χ(ω)* > 0 for at least one point ω ∈ C_+.
(2) I_p − χ(λ)χ(λ)* > 0 for every point λ ∈ C_+.
(3) b_{22}(ω)b_{22}(ω)* − b_{21}(ω)b_{21}(ω)* > 0 for at least one point ω ∈ h_B^+.
(4) b_{22}(λ)b_{22}(λ)* − b_{21}(λ)b_{21}(λ)* > 0 for every point λ ∈ h_B^+.
(5) S^{p×p} ⊆ D(T_B).

If A ∈ E ∩ U(J_p), then conditions (1)–(5) are equivalent to the condition

    i a_{21}(0) > 0.   (3.178)

Proof This follows from lemma 4.70 in [ArD08b]. □

Lemma 3.68 Let A ∈ U(J_p), B = AV and W = VAV. Then

    T_V[T_A[C^{p×p} ∩ D(T_A)]] ⊆ T_W[S^{p×p}].   (3.179)

If s ∈ T_W[S^{p×p}], there is a sequence of mvf's s_n ∈ T_V[T_A[C^{p×p} ∩ D(T_A)]] such that

    s(λ) = lim_{n↑∞} s_n(λ).

Moreover, if s ∈ T_W[S_in^{p×p}], then the mvf's s_n may be chosen from T_V[T_A[C_sing^{p×p} ∩ D(T_A)]].

Proof This follows from lemma 4.71 in [ArD08b]. □

Lemma 3.69 Let A ∈ P(J_p), A_1 ∈ P°(J_p) and A_1^{−1}A ∈ P(J_p). Then

    C(A) ⊆ C(A_1).   (3.180)

Proof This is lemma 4.72 in [ArD08b]. □

Theorem 3.70 Let A ∈ E ∩ U(J_p), B(λ) = A(λ)V,

    E(λ) = [E_−(λ)  E_+(λ)] = √2 [0  I_p] B(λ),

χ = E_+^{−1} E_−, ω ∈ C_+ and I_p − χ(ω)χ(ω)* > 0. Then the set

    B(ω) = {c(ω) : c ∈ C(A)}   (3.181)

is a matrix ball:

    B(ω) = {γ_c(ω) + R_ℓ(ω) v R_r(ω) : v ∈ S_const^{p×p}},   (3.182)

with center γ_c(ω), left semiradius R_ℓ(ω) > 0 and right semiradius R_r(ω) > 0 that are given by the formulas

    R_ℓ(ω)² = −r(ω̄)^{−1} = 2 E_−^#(ω)^{−1} {I_p − χ(ω)* χ(ω)}^{−1} E_−^#(ω)^{−*},   (3.183)
    R_r(ω)² = r(ω)^{−1} = 2 E_+(ω)^{−*} {I_p − χ(ω)χ(ω)*}^{−1} E_+(ω)^{−1},   (3.184)
    γ_c(ω) = T_{B(ω)}[−χ(ω)*].   (3.185)

Proof This follows from Lemma 3.19, formulas (3.32), (3.36), (3.37) and (3.175) and Lemma 3.66. □

Remark 3.71 The particular left and right semiradii that are defined by formulas (3.184) and (3.183) can also be expressed in terms of the RK K_ω^E(λ) of the RKHS B(E) for ω ∈ C_+ (see (4.116)):

    R_r(ω)^{−2} = ρ_ω(ω) K_ω^E(ω)/2  and  R_ℓ(ω)^{−2} = −ρ_{ω̄}(ω̄) K_{ω̄}^E(ω̄)/2.   (3.186)

Affine generalizations of C^{p×p}

Let C̃^{p×p} denote the set of pairs {u(λ), v(λ)} of p × p mvf's that are meromorphic in C_+ and meet the following two conditions:

(1) [u(λ)*  v(λ)*] J_p [u(λ); v(λ)] ≤ 0 for λ ∈ h_u^+ ∩ h_v^+;
(2) [u(λ)*  v(λ)*] [u(λ); v(λ)] > 0 for at least one point λ ∈ h_u^+ ∩ h_v^+;

and, for A ∈ P(J_p), let the linear fractional transformation T̃_A of such pairs be defined by the formula

    T̃_A[{u, v}] = (a_{11}(λ)u(λ) + a_{12}(λ)v(λ))(a_{21}(λ)u(λ) + a_{22}(λ)v(λ))^{−1}   (3.187)

on the set

    C̃^{p×p} ∩ D(T̃_A) := {{u, v} ∈ C̃^{p×p} : det(a_{21}(λ)u(λ) + a_{22}(λ)v(λ)) ≢ 0}.   (3.188)

If det v(λ) ≢ 0, then:

    {u, v} ∈ C̃^{p×p} ⟺ uv^{−1} ∈ C^{p×p}

and

    {u, v} ∈ C̃^{p×p} ∩ D(T̃_A) ⟺ uv^{−1} ∈ C^{p×p} ∩ D(T_A).

Moreover, the set C(A) defined in (3.167) may be reexpressed as

    C(A) = {T̃_A[{u, v}] : {u, v} ∈ C̃^{p×p} ∩ D(T̃_A)}.   (3.189)

3.13 Associated pairs of the second kind

Associated pairs of the second kind will be defined initially for mvf's A ∈ U(J_p) in terms of inner–outer and outer–inner factorizations of the blocks (b_{21}^#)^{−1} and b_{22}^{−1} of the mvf B(λ) = A(λ)V; see (3.190) below. Subsequently, associated pairs of the second kind will be defined for mvf's U ∈ U(J) when J is unitarily equivalent to J_p. Thus, a mvf U ∈ U(J) with p = q, i.e., with rank P_+ = rank P_−, will have associated pairs of both the first and second kind.

Let A ∈ U(J_p) and let B(λ) = A(λ)V and W(λ) = VA(λ)V. Then B ∈ U(j_p, J_p) and W ∈ U(j_p) with blocks b_{ij} and w_{ij}, respectively. Then, in view of Lemma 3.66,

    (b_{21}^#)^{−1} ∈ N_+^{p×p}  and  b_{22}^{−1} ∈ N_+^{p×p}.   (3.190)


Therefore, (b_{21}^#)^{−1} and b_{22}^{−1} admit essentially unique inner–outer and outer–inner factorizations in N_+^{p×p}:

    (b_{21}^#)^{−1} = b_3 ϕ_3  and  b_{22}^{−1} = ϕ_4 b_4,   (3.191)

where b_j ∈ S_in^{p×p} and ϕ_j ∈ N_out^{p×p} for j = 3, 4. The pair {b_3, b_4} is called an associated pair of the mvf B, and the set of all such pairs is denoted by ap(B). Thus, {b_3, b_4} ∈ ap(B). Moreover, if {b_3, b_4} is a fixed associated pair of B, then

    ap(B) = {{b_3 u, v b_4} : u and v are unitary p × p matrices}.

In view of the relation A(λ) = VW(λ)V, the set ap_I(A) of associated pairs of the first kind for A was defined as ap_I(A) = ap(W). Analogously, we define ap_{II}(A) = ap(B) as the set of associated pairs of the second kind for A. This definition can be extended to mvf's in the class U(J) based on a signature matrix J that is unitarily equivalent to J_p: If

    J = V* J_p V for some unitary matrix V ∈ C^{m×m},   (3.192)

then U ∈ U(J) ⟺ VUV* ∈ U(J_p). If U ∈ U(J), then the pair {b_3, b_4} is said to be an associated pair of the second kind for U(λ), and we write {b_3, b_4} ∈ ap_{II}(U), if {b_3, b_4} ∈ ap_{II}(VUV*). The set ap_{II}(U) depends upon the choice of the unitary matrix V in (3.192).

The next two lemmas follow from lemmas 4.36 and 4.38 in [ArD08b].

Lemma 3.72 Let A ∈ U(J_p), {b_1, b_2} ∈ ap_I(A) and {b_3, b_4} ∈ ap_{II}(A) and let W = VAV and s_{12} = T_W[0]. Then

    ((I_p + s_{12})/2) b_3 = b_1 ϕ  and  b_4 ((I_p + s_{12})/2) = ψ b_2   (3.193)

for some ϕ ∈ S_out^{p×p} and ψ ∈ S_out^{p×p}.

Corollary 3.73 If U ∈ U(J), J is unitarily equivalent to J_p, α ≥ 0 and β ≥ 0, then

    {e_α I_p, e_β I_p} ∈ ap_I(U) ⟺ {e_α I_p, e_β I_p} ∈ ap_{II}(U).

Proof This follows from (3.193), since (I_p + s_{12})/2 ∈ S_out^{p×p} by Theorem 3.11. □

Lemma 3.74 Let A ∈ U(J_p) and let {b_1, b_2} ∈ ap_I(A) and {b_3, b_4} ∈ ap_{II}(A). Then the following equivalences are valid:

(1) A ∈ N_+^{m×m} ⟺ b_2 is a constant matrix ⟺ b_4 is a constant matrix.
(2) A^{−1} ∈ N_+^{m×m} ⟺ b_1 is a constant matrix ⟺ b_3 is a constant matrix.
(3) A ∈ U_S(J_p) ⟺ b_1 and b_2 are constant matrices ⟺ b_3 and b_4 are constant matrices.

Theorem 3.75 Let U ∈ U(J), where J is unitarily equivalent to J_p, and let {b_3, b_4} ∈ ap_{II}(U). Then:

(1) h_U^+ = h_{b_3}^+, h_U^− = h_{b_4^#}^− and h_U ⊆ h_{b_4^#} ∩ h_{b_3}.
(2) If U is entire, then b_3 and b_4 are entire.
(3) If U ∈ U_{rR}(J), then

    h_U = h_{b_4^#} ∩ h_{b_3}   (3.194)

and hence if U ∈ U_{rR}(J) and b_3 and b_4 are entire, then U is entire.

Proof This is theorem 4.56 in [ArD08b]. □

Theorem 3.76 Let A ∈ E ∩ U(J_p) and A_1 ∈ P°(J_p), let {b_3, b_4} ∈ ap_{II}(A) and suppose that C(A) = C(A_1). Then A_1 ∈ E ∩ U(J_p) and can be expressed in the form

    A_1(λ) = (e_{β_2}(λ)/e_{β_1}(λ)) A(λ) V,

where e_{−β_1} b_3 ∈ S_in^{p×p}, e_{−β_2} b_4 ∈ S_in^{p×p}, β_1 ≥ 0, β_2 ≥ 0 and V ∈ U_const(J_p).

Proof See corollary 4.97 in [ArD08b]. □

Theorem 3.77 Let A and A_1 both belong to U(J_p), let {b_3, b_4} ∈ ap_{II}(A) and {b_3^{(1)}, b_4^{(1)}} ∈ ap_{II}(A_1). Then the conditions

    (b_3^{(1)})^{−1} b_3 ∈ S_in^{p×p},  b_4 (b_4^{(1)})^{−1} ∈ S_in^{p×p}  and  C(A) ⊆ C(A_1)

hold if and only if A_1^{−1} A ∈ U(J_p).

Proof This is theorem 4.98 in [ArD08b]. □

Remark 3.78 Theorem 3.77 remains valid if the condition C(A) ⊆ C(A_1) is replaced by the condition

    T_A[C^{p×p} ∩ D(T_A)] ⊆ T_{A_1}[C^{p×p} ∩ D(T_{A_1})].

Corollary 3.79 Let A and A_1 both belong to U(J_p). Then

    C(A) = C(A_1)  and  ap_{II}(A) = ap_{II}(A_1)

if and only if

    A_1(λ) = A(λ)U  with U ∈ U_const(J_p).


3.14 Supplementary notes

In [Po60] V.P. Potapov considered more general injective linear fractional transformations of the class P_const(J) into S_const^{m×m} than (3.98). The transformation (3.98) was introduced later by Y.Yu. Ginzburg [Gi57] for bounded linear operators in Hilbert space. The PG transform for J = j_pq was also considered by R. Redheffer in his development of transmission line theory [Re62]. He also studied the transformation (3.189) in the more general setting of contractive linear operators between Hilbert spaces instead of contractive matrices.

Bibliographical notes on mvf's in the Nevanlinna class and the subclasses considered here (especially P(J) and U(J)) may be found in the Supplementary Notes to chapters 3 and 4 in [ArD08b].

Associated pairs for mvf's W ∈ U(j_pq) and A ∈ U(J_p) and the subclasses of singular and right regular mvf's were introduced and studied by D.Z. Arov in his investigations of generalized Schur and Carathéodory interpolation problems in [Ar84], [Ar88], [Ar89], [Ar90] and [Ar93]. The supplementary classifications of right regular, left regular, right strongly regular and left strongly regular mvf's, and a number of characterizations of these classes, were developed later in the joint work of the authors in their study of inverse problems for canonical integral and differential systems [ArD97]–[ArD12a].

Lemma 3.61 and Theorems 3.62 and 3.76 follow from general results on meromorphic minus matrices in C_+ by L.A. Simakova; see section 4.17 of [ArD08b], [Si74], [Si75] and [Si03]. In particular, she proved the following useful facts:

Theorem 3.80 If W is an m × m mvf that is meromorphic in C_+ and if m = p + q with 1 ≤ p ≤ m − 1,

    S^{p×q} ⊂ D(T_W),  T_W[S^{p×q}] ⊆ S^{p×q}  and  det W(λ) ≢ 0,

then ρW ∈ P°(j_pq) for some scalar function ρ that is meromorphic in C_+. Moreover, if

    T_W[S_in^{p×q}] ⊆ S_in^{p×q},  then  ρW ∈ U(j_pq).

Simakova also described the set of scalar multipliers that can arise in Theorem 3.80; see theorem 4.80 in [ArD08b].

Theorem 3.81 If A is an m × m mvf that is meromorphic in C_+ and if m = 2p with p ≥ 1,

    T_A[C^{p×p} ∩ D(T_A)] ⊂ C^{p×p}  and  det A(λ) ≢ 0,

then ρA ∈ P°(J_p) for some scalar function ρ that is meromorphic in C_+. Moreover, if

    T_A[C_sing^{p×p} ∩ D(T_A)] ⊆ C_sing^{p×p},  then  ρA ∈ U(J_p).


The proof of Theorem 3.80 was based in part on a theorem of M.G. Krein and Yu.L. Shmuljan in [KrS66] that established analogous results for injective constant linear fractional transformations of contractive linear operators in a Hilbert space into contractive linear operators in a possibly different Hilbert space. This result of Krein and Shmuljan also follows from R. Redheffer's work on linear fractional transformations in [Re60], where the corresponding result was obtained for the Redheffer transform (3.28) even when the ε and the values of the transform are contractive linear operators between two Hilbert spaces. Linear fractional transformations of contractive linear operators are also studied in [KrS67].

4 Interpolation problems, resolvent matrices and de Branges spaces

This chapter is devoted to the interplay between (1) three interpolation problems; (2) descriptions of their solutions as linear fractional transformations (of Schur class mvf’s) based on appropriately restricted resolvent matrices when the interpolation problems are completely indeterminate; and (3) formulas for these resolvent matrices in terms of the given data when the interpolation problems are strictly completely indeterminate. The basic fact is that there is a one-to-one correspondence between completely indeterminate interpolation problems and appropriately restricted resolvent matrices. More precisely:

    Interpolation problem | In the class | Class of resolvent matrices | Applicable to direct and inverse
    ----------------------|--------------|-----------------------------|----------------------------------
    GCIP                  | C^{p×p}      | U_rR(J_p)                   | Impedance & spectral problems
    GSIP                  | S^{p×q}      | U_rR(j_pq)                  | Input scattering problem
    NP                    | B^{p×q}      | M_rR(j_pq)                  | Asymptotic scattering problem

The abbreviations in the first column stand for generalized Carathéodory interpolation problem, generalized Schur interpolation problem and Nehari problem, respectively. In all three of these problems uniqueness of the resolvent matrices is obtained by restricting to suitably normalized right regular mvf's in the indicated class and, in the first two listed problems, also requiring the two inner mvf's in the interpolation problem to be associated pairs of the resolvent matrix. If the interpolation problem is strictly completely indeterminate, then the unique resolvent matrix for the problem that is determined as outlined above will be right strongly regular. The de Branges theory of reproducing kernel Hilbert spaces will be exploited to obtain formulas for the resolvent matrices for the first two problems in the list and also in the study of direct and inverse spectral problems.

4.1 The Nehari problem

It is convenient to begin with the following formulation of the Nehari problem:

NP(Γ): Given a bounded linear operator Γ from H_2^q into (H_2^p)^⊥, describe the set

    N(Γ) = {f ∈ B^{p×q} : Π_− M_f|_{H_2^q} = Γ}.   (4.1)

The NP(Γ) is called determinate if it has exactly one solution and indeterminate otherwise. It is called completely indeterminate if for every nonzero vector η ∈ C^q there exist solutions f_1, f_2 ∈ N(Γ) such that ‖f_1 η − f_2 η‖_∞ > 0. Let

    B̊^{p×q} = {f ∈ B^{p×q} : ‖f‖_∞ < 1}.

The NP(Γ) is called strictly completely indeterminate if

    N(Γ) ∩ B̊^{p×q} ≠ ∅.

It is readily checked by direct calculation that if

    Γ = Π_− M_f|_{H_2^q}   (4.2)

for some f ∈ L_∞^{p×q}, then

    Π_− M_{e_t} Γ = Γ M_{e_t}|_{H_2^q}  for every t ≥ 0.   (4.3)

A bounded linear operator Γ from H_2^q into (H_2^p)^⊥ that satisfies the constraints in (4.3) is called a Hankel operator.

Theorem 4.1 If Γ is a bounded linear operator from H_2^q into (H_2^p)^⊥, then:

(1) Γ = Π_− M_f|_{H_2^q} for some f ∈ L_∞^{p×q} ⟺ Γ is a Hankel operator.
(2) If Γ is a Hankel operator, then

    ‖Γ‖ = min{‖f‖_∞ : f ∈ L_∞^{p×q} and Γ = Π_− M_f|_{H_2^q}}.   (4.4)

(3) N(Γ) ≠ ∅ ⟺ Γ is a Hankel operator and ‖Γ‖ ≤ 1.

Proof The implication ⟹ in (1) and the inequality

    ‖Γ‖ ≤ ‖f‖_∞  for every f ∈ L_∞^{p×q} for which (4.2) holds   (4.5)

are easily checked. The rest is more complicated; see, e.g., theorem 7.2 in [ArD08b]. □


Corollary 4.2 If Γ is a bounded linear operator from H_2^q into (H_2^p)^⊥, then NP(Γ) is strictly completely indeterminate if and only if Γ is a strictly contractive Hankel operator.

Let ω ∈ C_+ and let

    A_+(ω) = {η/ρ_ω : η ∈ C^q} ∩ (I − Γ*Γ)^{1/2} H_2^q   (4.6)

and

    A_−(ω) = {ξ/ρ_{ω̄} : ξ ∈ C^p} ∩ (I − ΓΓ*)^{1/2} (H_2^p)^⊥.   (4.7)

Theorem 4.3 Let Γ be a Hankel operator with ‖Γ‖ ≤ 1. Then:

(1) The numbers dim A_+(ω) and dim A_−(ω) are independent of the choice of the point ω ∈ C_+.
(2) The NP(Γ) is determinate if and only if

    A_+(ω) = {0}  or  A_−(ω) = {0}   (4.8)

for at least one (and hence every) point ω ∈ C_+.
(3) The NP(Γ) is completely indeterminate if and only if

    dim A_+(ω) = q  and  dim A_−(ω) = p   (4.9)

for at least one (and hence every) point ω ∈ C_+. Moreover, the two conditions in (4.9) are equivalent.

Proof See, e.g., theorem 7.5 in [ArD08b]. □

Remark 4.4 Assertion (3) of Theorem 4.3 implies that:

(1) The NP(Γ) is completely indeterminate if and only if

    lim_{r↑1} ⟨(I − r²Γ*Γ)^{−1} ξ/ρ_i, ξ/ρ_i⟩_st < ∞  for every ξ ∈ C^q.

(2) If the NP(Γ) is completely indeterminate, then I − Γ*Γ > 0.

A measurable m × m mvf A on R with block decomposition

    A = [ a_−  b_−
          b_+  a_+ ]

belongs to the class M_r(j_pq) of right gamma generating matrices if:

(1) A(μ)* j_pq A(μ) = j_pq a.e. on R.
(2) b_+ and a_+ are nontangential boundary values of mvf's in the Nevanlinna class in C_+ such that

    s_{22} := a_+^{−1} ∈ S_out^{q×q}  and  s_{21} := −a_+^{−1} b_+ ∈ S^{q×p}.   (4.10)


(3) a_− and b_− are nontangential boundary values of mvf's in the Nevanlinna class in C_− such that

    s_{11} := (a_−^#)^{−1} ∈ S_out^{p×p}.   (4.11)

Since A(μ) j_pq A(μ)* = j_pq a.e. on R when A ∈ M_r(j_pq),

    a_−(μ) b_+(μ)* = b_−(μ) a_+(μ)*  a.e. on R

and hence

    s_{21} = −(a_−^{−1} b_−)^#.   (4.12)

Moreover,

    s_{21}(λ)* s_{21}(λ) < I_p  for λ ∈ C_+.   (4.13)

In view of (4.10) and (4.13) it is readily seen that if A ∈ M_r(j_pq), then

    S^{p×q} ⊂ D(T_A)  and  T_A[S^{p×q}] ⊆ B^{p×q}.   (4.14)

It is easily checked that the nontangential boundary values of a mvf W ∈ U_S(j_pq) belong to the class M_r(j_pq). Moreover,

    U_S(j_pq) = N_out^{m×m} ∩ M_r(j_pq) = {A ∈ M_r(j_pq) : T_A[S^{p×q}] ⊆ S^{p×q}}.

Lemma 4.5 If A_1 ∈ M_r(j_pq), W ∈ U_S(j_pq) and A = A_1 W, then A ∈ M_r(j_pq) and

    T_A[S^{p×q}] ⊆ T_{A_1}[S^{p×q}].   (4.15)

Conversely, if A_1 and A belong to M_r(j_pq) and (4.15) is in force, then A = A_1 W for some mvf W ∈ U_S(j_pq).

Proof See lemma 7.17 in [ArD08b]. □

A mvf A ∈ M_r(j_pq) belongs to the class M_{rR}(j_pq) of right regular gamma generating matrices if A = A_1 W with A_1 ∈ M_r(j_pq) and W ∈ U_S(j_pq) implies that W ∈ U_const(j_pq). A mvf A ∈ M_r(j_pq) belongs to the class M_{rsR}(j_pq) of right strongly regular gamma generating matrices if

    T_A[S^{p×q}] ∩ B̊^{p×q} ≠ ∅.

Thus, in view of the equivalence

    A ∈ L_∞^{m×m} ⟺ ‖T_A[0]‖_∞ < 1

for mvf's A ∈ M_r(j_pq) (see, e.g., lemma 7.11 in [ArD08b]), it follows that

    L_∞^{m×m} ∩ M_r(j_pq) ⊆ M_{rsR}(j_pq).


Necessary and sufficient conditions for a mvf in the class M_r(j_pq) to belong to the classes M_{rR}(j_pq) and M_{rsR}(j_pq) may be found in sections 7.3 and 10.2 of [ArD08b], respectively. These conditions imply that M_{rsR}(j_pq) ⊂ M_{rR}(j_pq).

Theorem 4.6 If NP(Γ) is completely indeterminate, then:

(1) There exists a mvf A ∈ M_r(j_pq) such that

    N(Γ) = T_A[S^{p×q}].   (4.16)

Moreover, this mvf is defined by Γ up to a constant j_pq-unitary multiplier on the right and automatically belongs to the class M_{rR}(j_pq).
(2) For each point ω ∈ C_+, there is exactly one mvf A ∈ M_r(j_pq) for which (4.16) holds that meets the normalization conditions

    a_−(ω) > 0,  b_+(ω) = 0  and  a_+(ω) > 0.   (4.17)

(3) The NP(Γ) is strictly completely indeterminate if and only if A ∈ M_{rsR}(j_pq).

If

    A ∈ M_r(j_pq),  f° ∈ T_A[S^{p×q}]  and  Γ = Π_− M_{f°}|_{H_2^q},   (4.18)

then:

(a) NP(Γ) is completely indeterminate and T_A[S^{p×q}] ⊆ N(Γ), with equality if and only if A ∈ M_{rR}(j_pq).
(b) Γ = Π_− M_f|_{H_2^q} for every choice of f ∈ T_A[S^{p×q}].

Proof See, e.g., theorem 7.22 in [ArD08b]. □

A mvf A ∈ M_r(j_pq) for which the equality (4.16) holds is called a resolvent matrix for NP(Γ). Formulas for the unique resolvent matrix A ∈ M_r(j_pq) for NP(Γ) that is subject to the supplementary constraint (4.17) may be found in theorem 7.45 of [ArD08b] when NP(Γ) is strictly completely indeterminate.

Theorem 4.7 Every mvf A ∈ M_r(j_pq) admits an essentially unique factorization

    A = A_1 W  with A_1 ∈ M_{rR}(j_pq) and W ∈ U_S(j_pq).

Proof See theorem 7.24 in [ArD08b]. □

There are classes M_ℓ(j_pq) of left gamma generating matrices, M_{ℓR}(j_pq) of left regular gamma generating matrices and M_{ℓsR}(j_pq) of left strongly regular gamma generating matrices. They are defined in such a way that

    A ∈ M_ℓ(j_pq) ⟺ A^∼ ∈ M_r(j_pq),
    A ∈ M_{ℓR}(j_pq) ⟺ A^∼ ∈ M_{rR}(j_pq),
    A ∈ M_{ℓsR}(j_pq) ⟺ A^∼ ∈ M_{rsR}(j_pq).

These classes play a role in the consideration of left linear fractional transformations.

Theorem 4.8 If W ∈ U(j_pq) and {b_1, b_2} ∈ ap(W), then

    W(μ) = [ b_1(μ)  0
             0  b_2(μ)^{−1} ] A(μ)  a.e. on R   (4.19)

for some mvf A ∈ Π ∩ M_r(j_pq), and

    b_1 T_A[S^{p×q}] b_2 ⊆ S^{p×q}.   (4.20)

Conversely, if A ∈ Π ∩ M_r(j_pq), then there exists a pair of mvf's b_1 ∈ S_in^{p×p} and b_2 ∈ S_in^{q×q} such that b_1 T_A[0] b_2 ∈ S^{p×q}. Moreover, the mvf W defined by (4.19) with this choice of b_1 and b_2 belongs to the class U(j_pq), {b_1, b_2} ∈ ap(W) and (4.20) holds.

If a pair of mvf's A ∈ Π ∩ M_r(j_pq) and W ∈ U(j_pq) are connected by formula (4.19), then

    A ∈ M_{rR}(j_pq) ⟺ W ∈ U_{rR}(j_pq)   (4.21)

and

    A ∈ M_{rsR}(j_pq) ⟺ T_W[S^{p×q}] ∩ S̊^{p×q} ≠ ∅.   (4.22)

Proof See theorems 7.26 and 7.27 in [ArD08b]. □

4.2 The generalized Schur interpolation problem

In this section we shall study the following interpolation problem for mvf's in the Schur class S^{p×q}, which we shall refer to as the GSIP, an acronym for the generalized Schur interpolation problem:

GSIP(b_1, b_2; s°): Given mvf's b_1 ∈ S_in^{p×p}, b_2 ∈ S_in^{q×q} and s° ∈ S^{p×q}, describe the set

    S(b_1, b_2; s°) = {s ∈ S^{p×q} : b_1^{−1}(s − s°) b_2^{−1} ∈ H_∞^{p×q}}.   (4.23)

The mvf's s(λ) in this set are called solutions of this problem.
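For orientation (an illustrative special case, not drawn from the text): in the scalar case p = q = 1, with b_2 ≡ 1 and b_1 an elementary Blaschke factor for C_+ with a zero at ω, membership in (4.23) reduces to classical point interpolation, since dividing the bounded function s − s° by b_1 leaves it bounded exactly when the zero of b_1 is matched:

```latex
b_1(\lambda) = \frac{\lambda-\omega}{\lambda-\bar\omega},\qquad
b_1^{-1}\,(s-s^\circ)\in H_\infty
\;\Longleftrightarrow\; s(\omega)=s^\circ(\omega).
```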


The set S(b_1, b_2; s°) of solutions s of a GSIP(b_1, b_2; s°) is the set of mvf's s ∈ S^{p×q} such that

    Π_{H(b_1)} M_s|_{H_2^q} = X_{11},  Π_{H(b_1)} M_s|_{H_*(b_2)} = X_{12}
    and  Π_− M_s|_{H_*(b_2)} = X_{22},   (4.24)

where the X_{ij} are the blocks of the linear contractive operator

    X = [ X_{11}  X_{12}
          0       X_{22} ] : H_2^q ⊕ H_*(b_2) ⟶ H(b_1) ⊕ (H_2^p)^⊥   (4.25)

that are defined by the formulas

    X_{11} = Π_{H(b_1)} M_{s°}|_{H_2^q},  X_{12} = Π_{H(b_1)} M_{s°}|_{H_*(b_2)}
    and  X_{22} = Π_− M_{s°}|_{H_*(b_2)}.   (4.26)

Thus, only the part of s° that influences the operator X plays a role in the GSIP(b_1, b_2; s°).

Remark 4.9 The relevance of the operator X to the GSIP(b_1, b_2; s°) is perhaps best understood by noting that if

    f° = b_1^{−1} s° b_2^{−1}  and  Γ = Π_− M_{f°}|_{H_2^q},   (4.27)

then

    s ∈ S(b_1, b_2; s°) ⟺ b_1^{−1} s b_2^{−1} ∈ N(Γ).   (4.28)

Thus, the set S(b_1, b_2; s°) depends only upon the mvf's b_1, b_2 and the Hankel operator Γ defined in (4.27).

Thus, the GSIP(b_1, b_2; s°) can be formulated in terms of b_1, b_2 and a given block triangular operator X of the form (4.25); the problem is to describe the set

    S(b_1, b_2; X) = {s ∈ S^{p×q} : X_{11} = Π_{H(b_1)} M_s|_{H_2^q},  X_{12} = Π_{H(b_1)} M_s|_{H_*(b_2)}
                      and  X_{22} = Π_− M_s|_{H_*(b_2)}}.   (4.29)

This problem will be referred to as the generalized Sarason problem and will be denoted GSP(b_1, b_2; X). It will prove convenient, however, to consider a relaxed version of this problem: GSP(b_1, b_2; X; H_∞^{p×q}), which is to describe the set

    H_∞(b_1, b_2; X) = {s ∈ H_∞^{p×q} : X_{11} = Π_{H(b_1)} M_s|_{H_2^q},  X_{12} = Π_{H(b_1)} M_s|_{H_*(b_2)}
                        and  X_{22} = Π_− M_s|_{H_*(b_2)}}.   (4.30)


Theorem 4.10 Let b_1 ∈ S_in^{p×p} and b_2 ∈ S_in^{q×q} and let X be a bounded linear block triangular operator of the form (4.25). Then:

(1) H_∞(b_1, b_2; X) ≠ ∅ if and only if the operator

    Γ = M_{b_1}^* {X_{11} Π_+ + (X_{22} + X_{12}) P_{H_*(b_2)}} M_{b_2}^*|_{H_2^q}   (4.31)

acting from H_2^q into (H_2^p)^⊥ is a bounded Hankel operator.
(2) If H_∞(b_1, b_2; X) ≠ ∅, then

    ‖X‖ = min{‖s‖_∞ : s ∈ H_∞(b_1, b_2; X)}.

(3) S(b_1, b_2; X) ≠ ∅ if and only if the operator Γ that is defined in (4.31) is a contractive Hankel operator. Moreover, if s° ∈ S(b_1, b_2; X), then S(b_1, b_2; s°) = S(b_1, b_2; X) and

    ‖X‖ = min{‖s‖_∞ : s ∈ S(b_1, b_2; X)}.

Proof This follows from Remark 4.9 and Theorem 4.1. □

Remark 4.11 If b_2 = I_q, then X = X_{11} = Π_{H(b_1)} M_s|_{H_2^q} and formula (4.31) simplifies to

    Γ = M_{b_1}^* X.   (4.32)

Moreover, Γ is a Hankel operator from H_2^q into (H_2^p)^⊥ if and only if X is a Toeplitz operator in the sense that

    Π_{H(b_1)} M_{e_t} X = X M_{e_t}|_{H_2^q}  for every t ≥ 0.   (4.33)

Thus, S(b_1, I_q; X) ≠ ∅ if and only if ‖X‖ ≤ 1 and X meets the condition (4.33).

The GSIP(b_1, b_2; s°) is said to be completely indeterminate if there exists at least one point ω ∈ h_{b_1^{−1}}^+ ∩ h_{b_2^{−1}}^+ for which

    {s ∈ S(b_1, b_2; s°) : {s(ω) − s°(ω)}η ≠ 0} ≠ ∅  for every nonzero vector η ∈ C^q.   (4.34)

Moreover, if the condition (4.34) holds for one point ω ∈ C_+, then it holds for every point ω ∈ h_{b_1^{−1}}^+ ∩ h_{b_2^{−1}}^+, and hence for every point ω ∈ C_+ if b_1 and b_2 are entire. A GSIP(b_1, b_2; s°) is called strictly completely indeterminate if

    S(b_1, b_2; s°) ∩ S̊^{p×q} ≠ ∅.   (4.35)

The GSIP(b_1, b_2; s°) with entire inner mvf's b_1, b_2 plays an essential role in the study of bitangential direct and inverse monodromy and input scattering problems.


Under this extra restriction, the inner mvf's b_1 and b_2 may be normalized by the conditions

    b_1(0) = I_p  and  b_2(0) = I_q.

Then the Paley–Wiener theorem yields the integral representations

    b_1(λ) − I_p = iλ ∫_0^{a_1} e^{iλt} h_1(t) dt  and  b_2(λ) − I_q = iλ ∫_0^{a_2} e^{iλt} h_2(t) dt,   (4.36)

where

    h_1 ∈ L_2^{p×p}([0, a_1]),  h_2 ∈ L_2^{q×q}([0, a_2])  and  a_j = τ_{b_j}.
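As a quick sanity check (a scalar illustration under assumed data, not from the text): the simplest entire inner function b(λ) = e^{iλa} has the representation (4.36) with density h ≡ 1 on [0, a], since iλ ∫_0^a e^{iλt} dt = e^{iλa} − 1. This can be verified numerically at a point of C_+:

```python
import numpy as np

# Scalar illustration: b(lam) = exp(i*lam*a) is entire and inner in C_+, and
# b(lam) - 1 = i*lam * integral_0^a exp(i*lam*t) dt  (i.e. density h(t) = 1).
a = 2.0
lam = 0.3 + 0.5j            # a point in the open upper half plane C_+

t = np.linspace(0.0, a, 200001)
f = np.exp(1j * lam * t)
dt = t[1] - t[0]
integral = np.sum(f[1:] + f[:-1]) * dt / 2   # trapezoidal quadrature

lhs = np.exp(1j * lam * a) - 1.0
rhs = 1j * lam * integral
assert abs(lhs - rhs) < 1e-8
```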

The set S(b_1, b_2; s°) depends only upon the mvf's h_1 and h_2 and the restriction of the inverse Fourier transform (in the generalized sense) of s° to the interval [0, a_1 + a_2]. A more precise discussion of this dependence will be furnished in Chapter 9.

The class E ∩ U_{rR}(j_pq) of entire right regular j_pq-inner mvf's may be identified with the class of resolvent matrices of such completely indeterminate interpolation problems based on entire inner mvf's b_1 and b_2:

Theorem 4.12 If the GSIP(b_1, b_2; s°) is completely indeterminate and b_1 and b_2 are entire inner mvf's, then:

(1) There exists exactly one mvf W ∈ E ∩ U_{rR}°(j_pq) such that

    S(b_1, b_2; s°) = T_W[S^{p×q}]  and  {b_1, b_2} ∈ ap(W).   (4.37)

(2) If W̃ ∈ P°(j_pq) and

    S(b_1, b_2; s°) = T_{W̃}[S^{p×q}],   (4.38)

then W̃ ∈ E ∩ U_{rR}(j_pq).
(3) The set of all mvf's W̃ ∈ U°(j_pq) for which (4.38) holds is described by the formula

    W̃(λ) = (e_{β_2}(λ)/e_{β_1}(λ)) W(λ),   (4.39)

where β_j ≥ 0, e_{β_1}^{−1} b_1 ∈ S_in^{p×p} and e_{β_2}^{−1} b_2 ∈ S_in^{q×q}.

Proof This follows from Theorems 4.6 and 4.8 and Lemma 3.65. □

Theorem 4.13 If W ∈ E ∩ U(j_pq), {b_1, b_2} ∈ ap(W) and s° ∈ T_W[S^{p×q}], then b_1 and b_2 are entire mvf's and:

(1) The GSIP(b_1, b_2; s°) is completely indeterminate.
(2) T_W[S^{p×q}] ⊆ S(b_1, b_2; s°), with equality if and only if W ∈ U_{rR}(j_pq).
(3) S(b_1, b_2; s°) is independent of the choice of s° ∈ T_W[S^{p×q}].
(4) If b̃_1 ∈ S_in^{p×p} and b̃_2 ∈ S_in^{q×q} are such that b̃_1^{−1} b_1 ∈ S_in^{p×p} and b_2 b̃_2^{−1} ∈ S_in^{q×q}, then there exists exactly one mvf W̃ ∈ E ∩ U_{rR}°(j_pq) such that W̃^{−1} W ∈ U(j_pq) and {b̃_1, b̃_2} ∈ ap(W̃).

Proof The mvf's b_j, b̃_j, j = 1, 2, and W̃ are entire by Theorem 3.55 and Lemma 3.35. (1), (2) and (4) follow from theorems 7.50 and 7.51 in [ArD08b]. Finally, if s ∈ S(b_1, b_2; s_1) for some mvf s_1 ∈ T_W[S^{p×q}], then (2) guarantees that s_1 ∈ S(b_1, b_2; s°) and hence that

    b_1^{−1}(s − s°)b_2^{−1} = b_1^{−1}(s − s_1)b_2^{−1} + b_1^{−1}(s_1 − s°)b_2^{−1}

belongs to H_∞^{p×q}. Thus, S(b_1, b_2; s_1) ⊆ S(b_1, b_2; s°). This justifies (3), since the same inclusion holds if s° and s_1 are interchanged. □

The next result gives a full description of T_W[S^{p×q}] for W ∈ U(j_pq). It plays a useful role in interpolation problems.

Theorem 4.14 If W ∈ U(j_pq), then a mvf s ∈ S^{p×q} belongs to the set T_W[S^{p×q}] if and only if:

(1) [I_p  −s] f ∈ H_2^p;
(2) [−s*  I_q] f ∈ (H_2^q)^⊥; and
(3) ⟨ [ I_p  −s
        −s*  I_q ] f, f ⟩_st ≤ ⟨f, f⟩_{H(W)}

for every f in the RKHS H(W) based on W.

Proof See [Dy03b]. □

Remark 4.15 We focus primarily on entire b_1, b_2 and W in this monograph, because this setting suffices for the identification of matrizants of canonical systems with resolvent matrices of appropriate interpolation problems based on entire b_1 and b_2. However, to prove the next theorem, we need general versions of Theorems 4.12 and 4.13 in which b_1, b_2 and W are not necessarily entire; see theorems 7.48 and 7.50 in [ArD08b].

Theorem 4.16 The GSIP(b_1, b_2; s°) is completely indeterminate if and only if there exists a mvf s ∈ S(b_1, b_2; s°) such that

    ln det(I_q − s*s) ∈ L̃_1,   (4.40)

i.e., if and only if

    S(b_1, b_2; s°) ∩ S_sz^{p×q} ≠ ∅.   (4.41)

Proof If the GSIP(b_1, b_2; s°) is completely indeterminate, then by theorem 7.57 in [ArD08b] (which is essentially the same as Theorem 4.12, but without the


restriction that b_1, b_2 and W are entire) there exists a mvf W ∈ U_{rR}(j_pq) such that T_W[S^{p×q}] = S(b_1, b_2; s°). Therefore, s_{12} = T_W[0] ∈ S(b_1, b_2; s°). But for this choice of s, (4.40) holds, since

    det{I_q − s_{12}(μ)* s_{12}(μ)} = det{s_{22}(μ)* s_{22}(μ)} = |det ϕ_2(μ)|²  a.e. on R

and det ϕ_2 is outer in H_∞.

The proof in the other direction is divided into steps:

1. If s ∈ L_∞^{p×q} and ‖s‖_∞ ≤ 1, then s satisfies the constraint (4.40) if and only if

    ∫_{−∞}^∞ ln(1 − ‖s(μ)‖)/(1 + μ²) dμ > −∞.   (4.42)

This follows easily from the observation that if A ∈ C^{p×q} is contractive, then

    (1 − ‖A‖²)^q ≤ det(I_q − A*A) ≤ 1 − ‖A‖².

2. If s ∈ S(b_1, b_2; s°) meets the condition (4.42), then the GSIP(b_1, b_2; s°) is completely indeterminate.

If s ∈ S(b_1, b_2; s°) meets the condition (4.42), then there exists a scalar function ϕ ∈ S_out such that

    1 − ‖s(μ)‖ = |ϕ(μ)|  a.e. on R.

Let E ∈ C^{p×q} be a matrix with ‖E‖ = 1 and set

    s_1(λ) = s(λ) + ϕ(λ) b_1(λ) E b_2(λ).

Then clearly s_1 ∈ H_∞^{p×q} and

    ‖s_1‖_∞ = ‖s + ϕ b_1 E b_2‖_∞ = ess sup{‖s(μ) + ϕ(μ)b_1(μ)Eb_2(μ)‖ : μ ∈ R}
            ≤ ess sup{‖s(μ)‖ + |ϕ(μ)| : μ ∈ R} = 1.

Moreover, since ϕE ∈ H_∞^{p×q}, it is readily seen that s_1 ∈ S(b_1, b_2; s°) and that for any point ω ∈ C_+ at which b_1(ω) and b_2(ω) are both invertible and any nonzero vector η ∈ C^q,

    {s_1(ω) − s(ω)}η = ϕ(ω)b_1(ω)Eb_2(ω)η

can be made nonzero by an appropriate choice of E. Therefore S(b_1, b_2; s) is completely indeterminate. This completes the proof, since S(b_1, b_2; s) = S(b_1, b_2; s°). □

Corollary 4.17 If a GSIP(b_1, b_2; s°) is strictly completely indeterminate, then it is completely indeterminate.
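The two-sided bound on det(I_q − A*A) used in step 1 of the proof of Theorem 4.16 is easy to test numerically (a sketch with randomly generated data):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 2

# Random contractive A, rescaled so that ||A|| = 0.8 (operator 2-norm).
A = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
A *= 0.8 / np.linalg.norm(A, 2)

nrm2 = np.linalg.norm(A, 2) ** 2                      # ||A||^2
d = np.linalg.det(np.eye(q) - A.conj().T @ A).real    # det(I_q - A*A), real and > 0

# (1 - ||A||^2)^q <= det(I_q - A*A) <= 1 - ||A||^2 :
# det = prod(1 - sigma_i^2) over the singular values sigma_i of A, and
# each factor lies between 1 - ||A||^2 and 1.
assert (1 - nrm2) ** q - 1e-12 <= d <= 1 - nrm2 + 1e-12
```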

118

Interpolation problems and de Branges spaces

4.3 Right and left strongly regular J-inner mvf's

The classes U_{rsR}(J) and U_{ℓsR}(J) of right and left strongly regular J-inner mvf's play a significant role in our study of direct and inverse problems for canonical integral and differential systems. There are a number of different characterizations of these classes. In this section we shall focus mainly on a characterization that is particularly useful in the study of bitangential direct and inverse problems.

A mvf W ∈ U(j_pq) is said to be a right strongly regular j_pq-inner mvf if

    T_W[S^{p×q}] ∩ S̊^{p×q} ≠ ∅;   (4.43)

it is said to be a left strongly regular j_pq-inner mvf if

    T_{W^∼}[S^{q×p}] ∩ S̊^{q×p} ≠ ∅.   (4.44)

If J ≠ ±I_m and J = V* j_pq V for some constant m × m unitary matrix V, then U ∈ U(J) is said to be a right (left) strongly regular J-inner mvf if W(λ) = VU(λ)V* is a right (left) strongly regular j_pq-inner mvf. This definition does not depend upon the choice of V: if V_1* j_pq V_1 = V_2* j_pq V_2, then the matrix V_1 V_2* is both unitary and j_pq-unitary and therefore it must be of the form V_1 V_2* = diag{u_1, u_2}, where u_1 and u_2 are constant unitary matrices of sizes p × p and q × q, respectively. Thus, the mvf's W_1(λ) = V_1 U(λ) V_1* and W_2(λ) = V_2 U(λ) V_2* are related by the formula

    W_2(λ) = [ u_1*  0
               0  u_2* ] W_1(λ) [ u_1  0
                                  0  u_2 ]

and, consequently,

    T_{W_2}[S^{p×q}] = u_1* T_{W_1}[S^{p×q}] u_2.

Therefore,

    T_{W_2}[S^{p×q}] ∩ S̊^{p×q} ≠ ∅ ⟺ T_{W_1}[S^{p×q}] ∩ S̊^{p×q} ≠ ∅.

The classes of left and right strongly regular J-inner mvf's will be denoted U_{ℓsR}(J) and U_{rsR}(J), respectively. The convention

    U_{ℓsR}(±I_m) = U_{rsR}(±I_m) = U(±I_m)

will be convenient. The preceding definitions imply that

    U ∈ U_{rsR}(J) ⟺ U^∼ ∈ U_{ℓsR}(J).   (4.45)

Other characterizations of the class U_{rsR}(J) in terms of properties of the RKHS H(U) and in terms of the Treil–Volberg matrix Muckenhoupt (A_2) condition are presented in (1.26) and in chapter 10 of [ArD08b], respectively.
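The invariance argument above rests on the identity T_{D*WD}[ε] = u_1* T_W[u_1 ε u_2*] u_2 for D = diag{u_1, u_2}, which holds for any block matrix W and unitary u_1, u_2 (wherever the transformations are defined). A numeric spot check, as an illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
p = q = 2

def lft(W, eps):
    """T_W[eps] = (w11 eps + w12)(w21 eps + w22)^{-1} for a 2x2 block matrix W."""
    w11, w12 = W[:p, :p], W[:p, p:]
    w21, w22 = W[p:, :p], W[p:, p:]
    return (w11 @ eps + w12) @ np.linalg.inv(w21 @ eps + w22)

def random_unitary(n):
    Q, R = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # fix column phases

u1, u2 = random_unitary(p), random_unitary(q)
D = np.block([[u1, np.zeros((p, q))], [np.zeros((q, p)), u2]])

W1 = rng.standard_normal((p + q, p + q)) + 1j * rng.standard_normal((p + q, p + q))
W2 = D.conj().T @ W1 @ D

eps = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
eps *= 0.5 / np.linalg.norm(eps, 2)

# T_{W2}[eps] = u1^* T_{W1}[u1 eps u2^*] u2, whence T_{W2}[S] = u1^* T_{W1}[S] u2.
assert np.allclose(lft(W2, eps), u1.conj().T @ lft(W1, u1 @ eps @ u2.conj().T) @ u2)
```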


Theorem 4.18 The following inclusions hold:

(1) U(J) ∩ L_∞^{m×m}(R) ⊆ U_{rsR}(J) ∩ U_{ℓsR}(J).
(2) U_{rsR}(J) ∪ U_{ℓsR}(J) ⊂ U(J) ∩ L̃_2^{m×m} ⊂ U_{ℓR}(J) ∩ U_{rR}(J).

Proof This is theorem 4.75 in [ArD08b]. □

An example of a mvf U ∈ E ∩ U_{rsR}(J_1) that does not belong to L_∞^{2×2} is presented in Section 11.6 (see Remark 11.28); another example is furnished on pp. 525–527 of [ArD08b].

Theorem 4.19 If U ∈ UrsR (J) ∪ UsR (J), then U ∈ UR (J) ∩ UrR (J), i.e., UrsR (J) ∪ UsR (J) ⊆ UR (J) ∩ UrR (J). Moreover, if U ∈ UrsR (J) and U (λ) = U1 (λ)U2 (λ)U3 (λ) with Ui ∈ U (J) for i = 1, 2, 3, then: (1) U1 ∈ UrsR (J). (2) U1U2 ∈ UrsR (J). (3) Ui ∈ UR (J) ∩ UrR (J) for i = 1, 2, 3. Proof

This is theorem 4.76 in [ArD08b].



Theorem 4.19 justifies the use of the terminology strongly regular. In view of the inclusion UrsR (J) ⊂ UrR (J), the class UrsR ( j pq ) may (and will) be identified with the subclass of mvf's in U ( j pq ) that are resolvent matrices of strictly completely indeterminate GSIP's. This leads to the following conclusions:

Theorem 4.20 If W ∈ E ∩ UrsR ( j pq ), {b1 , b2 } ∈ ap(W ) and s◦ ∈ TW [S p×q ], then:
(1) b1 and b2 are entire mvf's.
(2) The GSIP(b1 , b2 ; s◦ ) is strictly completely indeterminate.
Conversely, if (1), (2) and (4.38) hold for some choice of b1 ∈ Sinp×p , b2 ∈ Sinq×q , s◦ ∈ S p×q , and if W ∈ P ◦ ( j pq ), then W ∈ E ∩ UrsR ( j pq ).
Proof This follows from the definition (4.43) of the class UrsR ( j pq ), the inclusion UrsR ( j pq ) ⊂ UrR ( j pq ) and Theorem 4.12.

4.4 The generalized Carathéodory interpolation problem

The GCIP (generalized Carathéodory interpolation problem) is an analog of the GSIP that is considered in the class C p×p instead of the class S p×q .


Interpolation problems and de Branges spaces

GCIP(b3 , b4 ; c◦ ): given b3 ∈ Sinp×p , b4 ∈ Sinp×p and c◦ ∈ C p×p , describe the set

C(b3 , b4 ; c◦ ) = {c ∈ C p×p : b3−1 (c − c◦ )b4−1 ∈ N+p×p }.   (4.46)

The restriction of this problem to the setting of entire inner mvf's b3 and b4 plays an essential role in the study of direct and inverse input impedance and spectral problems for canonical systems. Accordingly we shall focus on this case in this monograph.
The space N+p×p is introduced in the formulation of this problem (rather than H∞p×p ), because there are mvf's c ∈ C p×p that do not belong to H∞p×p (e.g., c(λ) = i/λ), whereas C p×p ⊂ N+p×p .
The GCIP(b3 , b4 ; c◦ ) is called completely indeterminate if there exists a point ω ∈ hb3−1 ∩ hb4−1 such that

{c ∈ C(b3 , b4 ; c◦ ) : (c(ω) − c◦ (ω))η ≠ 0} ≠ ∅   (4.47)

for every nonzero vector η ∈ C p ; it is called strictly completely indeterminate if C(b3 , b4 ; c◦ ) ∩ C˚ p×p ≠ ∅.
There are connections between the sets S(b1 , b2 ; s◦ ) and C(b3 , b4 ; c◦ ):

Theorem 4.21 Let c◦ ∈ C p×p and let s◦ = TV [c◦ ]. Then the conditions

(1/2) b1−1 (Ip + s◦ ) b3 ∈ Soutp×p and (1/2) b4 (Ip + s◦ ) b2−1 ∈ Soutp×p   (4.48)

serve to define one of the pairs {b1 , b2 } and {b3 , b4 } of p × p inner mvf's in terms of the other, up to constant unitary multipliers. Moreover, for any two such pairs, the conditions

(1/2) b1−1 (Ip + s) b3 ∈ Soutp×p and (1/2) b4 (Ip + s) b2−1 ∈ Soutp×p   (4.49)

are satisfied for every mvf s ∈ S(b1 , b2 ; s◦ ) ∩ D(TV ) and

C(b3 , b4 ; c◦ ) = TV [S(b1 , b2 ; s◦ ) ∩ D(TV )].   (4.50)

Proof This theorem is the same as lemma 7.68 in [ArD08b].
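The transform TV that links the Schur and Carathéodory classes reduces, in the scalar case, to c = (1 − s)/(1 + s) (the same relation used in the proof of Theorem 4.35 below). A quick numerical sanity check, with hypothetical sample points standing in for boundary values of a Schur-class function, is:

```python
import numpy as np

rng = np.random.default_rng(1)

def cayley(s):
    # scalar form of the transform: c = (1 - s) / (1 + s)
    return (1 - s) / (1 + s)

# sample points strictly inside the unit disc (values of a Schur-class function)
r = rng.uniform(0, 1, 500) ** 0.5
theta = rng.uniform(0, 2 * np.pi, 500)
s_vals = r * np.exp(1j * theta) * 0.999

c_vals = cayley(s_vals)
assert np.all(c_vals.real > 0)          # Carathéodory condition: Re c > 0
assert np.allclose(cayley(c_vals), s_vals)  # the map is an involution
```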

Theorem 4.22 The GCIP(b3 , b4 ; c◦ ) is completely indeterminate if and only if there exists a mvf c ∈ C(b3 , b4 ; c◦ ) such that

ln det(c + c∗ ) ∈ L̃1 ,   (4.51)

i.e., if and only if

C(b3 , b4 ; c◦ ) ∩ Cszp×p ≠ ∅.   (4.52)

Proof This follows from Theorem 4.16 and Theorem 4.21.

Corollary 4.23 If a GCIP(b3 , b4 ; c◦ ) is strictly completely indeterminate, then it is completely indeterminate.

If W (λ) = VA(λ)V , then, by definition of the classes UrsR (J),

A ∈ UrsR (Jp ) ⇐⇒ W ∈ UrsR ( j p ) ⇐⇒ TW [S p×p ] ∩ S˚ p×p ≠ ∅.   (4.53)

Moreover, since

C˚ p×p = TV [S˚ p×p ],   (4.54)

this implies that

A ∈ UrsR (Jp ) ⇐⇒ C(A) ∩ C˚ p×p ≠ ∅.   (4.55)

Theorem 4.24 Let U ∈ E ∩ U (J) with J unitarily equivalent to j pq and let {b1 , b2 } ∈ apI (U ). Then:
(1) b1 and b2 are entire, τU+ = τb2− and τU− = τb1−.
(2) τU+ ≤ δb2− ≤ qτU+ and τU− ≤ δb1− ≤ pτU−.
If q = p and {b3 , b4 } ∈ apII (U ), then b3 and b4 are entire and:
(3) τb1− = τb3− and δb1− = δb3−.
(4) τb2− = τb4− and δb2− = δb4−.
Proof This follows from theorem 4.58 in [ArD08b].

In the bitangential generalizations of the extension problem considered by Krein and Melik-Adamyan [KrMA86] and the Krein extension problems that will be discussed in Chapter 9, the inner mvf's b3 and b4 will be restricted to be entire and will be normalized by the conditions

b3 (0) = Ip and b4 (0) = Ip .

Then the Paley–Wiener theorem yields the integral representations

b3 (λ) − Ip = iλ ∫0a3 eiλt h3 (t ) dt and b4 (λ) − Ip = iλ ∫0a4 eiλt h4 (t ) dt,   (4.56)

where

h3 ∈ L2p×p ([0, a3 ]), h4 ∈ L2p×p ([0, a4 ]) and a j = τb j .

The set C(b3 , b4 ; c◦ ) depends only upon the mvf's h3 and h4 and the restriction of the inverse Fourier transform (in the generalized sense) of c◦ to the interval [0, a3 + a4 ]. A more precise discussion of this dependence will be presented in Section 9.1.
The next theorem follows from theorems 7.69 and 7.70 in [ArD08b] and from Theorem 3.75.

Theorem 4.25 The following three sets of implications are in force:
I. If the GCIP(b3 , b4 ; c◦ ) is completely indeterminate and b3 and b4 are entire inner mvf's, then:
(1) There exists exactly one mvf Å ∈ E ∩ UrR◦ (Jp ) such that

C(b3 , b4 ; c◦ ) = C(Å) and {b3 , b4 } ∈ apII (Å).   (4.57)

(2) If A ∈ P ◦ (Jp ) is such that

C(A) = C(b3 , b4 ; c◦ ),   (4.58)

then A ∈ E ∩ UrR (Jp ) and must be of the form

A(λ) = (eβ4 (λ)/eβ3 (λ)) Å(λ)V,   (4.59)

where Å ∈ E ∩ UrR◦ (Jp ) satisfies (4.57), β3 ≥ 0, β4 ≥ 0,

e−β4 b4 ∈ Sinp×p , e−β3 b3 ∈ Sinp×p and V ∈ Uconst (Jp ).   (4.60)

(3) Every mvf of the form (4.59) that meets the constraints in (4.60) satisfies the conditions

A ∈ U (Jp ) and C(A) = C(b3 , b4 ; c◦ ).   (4.61)

Moreover, A ∈ E ∩ UrR (Jp ).
II. If A ∈ E ∩ U (Jp ), {b3 , b4 } ∈ apII (A) and c◦ ∈ C(A), then b3 and b4 are entire mvf's, the GCIP(b3 , b4 ; c◦ ) is completely indeterminate and

C(A) ⊆ C(b3 , b4 ; c◦ ),   (4.62)

with equality if and only if A ∈ UrR (Jp ).
III. If A ∈ E ∩ U (Jp ), {b3 , b4 } ∈ apII (A) and c◦ ∈ C(A), then A ∈ UrsR (Jp ) if and only if (4.58) holds for some strictly completely indeterminate GCIP.

A mvf A for which (4.58) holds is called a resolvent matrix for the GCIP(b3 , b4 ; c◦ ). The class E ∩ UrR (Jp ) of entire right regular Jp -inner mvf's may be identified with the class of resolvent matrices of completely indeterminate generalized Carathéodory interpolation problems based on entire inner mvf's b3 and b4 . The class E ∩ UrsR (Jp ) of entire right strongly regular Jp -inner mvf's may be identified with the class of resolvent matrices of such strictly completely indeterminate problems.

Remark 4.26 If A ∈ UrsR (Jp ), B(λ) = A(λ)V , {b3 , b4 } ∈ apII (A), χ1 (λ) = b4 (λ)b3 (λ) and Ip − χ1 (ω)χ1 (ω)∗ > 0 for at least one point ω ∈ C+ ,

then

S p×p ⊆ D(TB ) and hence C(A) = TB [S p×p ].

In particular, S p×p ⊆ D(TB ) if either e−β b3 ∈ S p×p or e−β b4 ∈ S p×p for some β > 0; see remark 7.72 in [ArD08b].

Lemma 4.27 Let A ∈ U (Jp ), {b1 , b2 } ∈ apI (A), {b3 , b4 } ∈ apII (A), B(λ) = A(λ)V , χ = b22−1 b21 and W (λ) = VA(λ)V . Then for every s ∈ TW [S p×p ] ∩ D(TV ), there exists a pair of mvf's ϕs ∈ Soutp×p and ψs ∈ Soutp×p such that

((Ip + s)/2) b3 = b1 ϕs and b4 ((Ip + s)/2) = ψs b2 .   (4.63)

If

Ip − χ (ω)χ (ω)∗ > 0 for at least one point ω ∈ C+ ,   (4.64)

then:
(1) TW [S p×p ] ⊂ D(TV ) and (4.63) holds for every s ∈ TW [S p×p ].
(2) S p×p ⊂ D(TB ) and C(A) = TB [S p×p ].
Proof Let c◦ = TA [Ip ]. Then, by Lemma 3.72, (4.48) holds with s◦ = TV [c◦ ], and, by Theorem 4.21, (4.49) holds for every s ∈ S(b1 , b2 ; s◦ ). Therefore, (4.49) holds for every s ∈ TW [S p×p ] ∩ D(TV ), since TW [S p×p ] ⊆ S(b1 , b2 ; s◦ ), by theorem 7.50 in [ArD08b].

Lemma 4.28 Let A ∈ UrsR (Jp ), {b1 , b2 } ∈ apI (A), {b3 , b4 } ∈ apII (A), W (λ) = VA(λ)V and let γ = inf{‖s‖∞ : s ∈ TW [S p×q ] ∩ S˚ p×q }. Then γ < 1 and

(1/2)(1 − γ )‖b3 (λ)‖ ≤ ‖b1 (λ)‖ ≤ 2p (1 − γ )−p ‖b3 (λ)‖   (4.65)

and

(1/2)(1 − γ )‖b4 (λ)‖ ≤ ‖b2 (λ)‖ ≤ 2p (1 − γ )−p ‖b4 (λ)‖   (4.66)

for every point λ ∈ C+ .
Proof See lemma 3.5 in [ArD03a].

Lemma 4.29 Let A ∈ UrsR (Jp ) and let {b1 , b2 } ∈ apI (A) and {b3 , b4 } ∈ apII (A). Then:

lim ν↑∞ ‖b1 (iν)‖ = 0 ⇐⇒ lim ν↑∞ ‖b3 (iν)‖ = 0   (4.67)

and

lim ν↑∞ ‖b2 (iν)‖ = 0 ⇐⇒ lim ν↑∞ ‖b4 (iν)‖ = 0.   (4.68)

Proof This is an immediate corollary of Lemma 4.28.

Lemma 4.30 Let A ∈ UrsR (Jp ), {b3 , b4 } ∈ apII (A), B(λ) = A(λ)V , W (λ) = VA(λ)V and assume that either

lim ν↑∞ ‖b3 (iν)‖ = 0 or lim ν↑∞ ‖b4 (iν)‖ = 0.   (4.69)

Then every mvf s ∈ TW [S p×p ] satisfies the inequality

s(λ)∗ s(λ) < Ip for every λ ∈ C+   (4.70)

and hence

S p×p ⊆ D(TB ) and C(A) = TB [S p×p ].   (4.71)

Consequently, (4.71) holds if either e−α b3 ∈ Sinp×p or e−α b4 ∈ Sinp×p for some α > 0.
Proof See lemma 3.7 on p. 24 in [ArD03a].



4.5 Detour on scalar determinate interpolation problems

In this section we establish a few facts connected with determinate interpolation problems in S = S 1×1 and C = C 1×1 for future use. The main principle that underlies these facts is contained in the next theorem.

Theorem 4.31 If s◦ ∈ S, b ∈ Sin and 1 is a singular value of the operator X = ΠH(b) Ms◦ |H2 , then s◦ ∈ Sin and S(b, 1; s◦ ) = {s◦ }.
Proof If 1 is a singular value of X, then there exists a nonzero vector u ∈ H2 such that

‖u‖st = ‖Xu‖st = ‖ΠH(b) s◦ u‖st ≤ ‖s◦ u‖st ≤ ‖u‖st .

But this implies that |s◦ (μ)| = 1 a.e. on R and that s◦ u ∈ H(b). Thus, s◦ ∈ Sin and, if s = s◦ + b f for some f ∈ H∞ , then

ΠH(b) su = ΠH(b) s◦ u + ΠH(b) b f u = ΠH(b) s◦ u = s◦ u.

Therefore,

‖u‖st = ‖s◦ u‖st = ‖ΠH(b) s◦ u‖st = ‖ΠH(b) su‖st ≤ ‖su‖st ≤ ‖u‖st ,

which in turn implies that ΠH(b) su = su and hence that s◦ u = ΠH(b) s◦ u = ΠH(b) su = su. Consequently, as u ≢ 0, s◦ = s.

Lemma 4.32 If s◦ is a rational inner function and a > 0, then there exists a nonzero function h ∈ H(ea ) such that s◦ h ∈ H(ea ). Proof

The proof is broken into steps.

1. If h is holomorphic at a point ω ∈ C, then

(Rωk h)(λ) = [ h(λ) − h(ω) − · · · − h(k−1) (ω)(λ − ω)k−1 /(k − 1)! ] / (λ − ω)k if λ ≠ ω,
(Rωk h)(λ) = h(k) (ω)/k! if λ = ω,   (4.72)

for every integer k ≥ 1. This is a straightforward calculation that may be verified by induction.
2. If ω1 , . . . , ωn is a given set of distinct points in C and k1 , . . . , kn is a given set of nonnegative integers, then there exists a function h ∈ H(ea ) such that h(0) (ω j ) = · · · = h(k j ) (ω j ) = 0

for j = 1, . . . , n.

Let

ϕω(k) (t ) = t k e−iω̄t for k = 0, 1, . . .

and suppose that f ∈ L2 ([0, a]) is orthogonal to ϕω(k) for k = 0, . . . , ℓ. Then h = f̂ belongs to H(ea ) and

(−i)k h(k) (ω) = ∫0a f (t )t k eiωt dt = ⟨ f , ϕω(k) ⟩st = 0

for k = 0, . . . , ℓ. The same calculation with f ∈ L2 ([0, a]) now chosen to be orthogonal to ϕω(k)j for k = 0, . . . , k j and j = 1, . . . , n implies that h = f̂ meets the stated conditions.
3. There exists a nonzero function h ∈ H(ea ) such that s◦ h ∈ H(ea ).
If s◦ is a rational inner function and γ = lim|λ|↑∞ s◦ (λ), then |γ | = 1. If s◦ ≡ γ , then the assertion is obvious. If not, then s◦ will have poles ω j of order k j for j = 1, . . . , n. Thus, if h ∈ H(ea ) is constructed as in Step 2 to have a zero of order k j at the poles ω j , then

s◦ (λ)h(λ) = γ h(λ) + Σ j=1k1 c1 j h(λ)/(λ − ω1 ) j + · · ·

Therefore, in view of the special properties of h and the formula in Step 1,

h(λ)/(λ − ω1 ) = (h(λ) − h(ω1 ))/(λ − ω1 ) = (Rω1 h)(λ),
h(λ)/(λ − ω1 )2 = (h(λ) − h(ω1 ) − (λ − ω1 )h(1) (ω1 ))/(λ − ω1 )2 = (Rω12 h)(λ),
. . .
h(λ)/(λ − ω1 )k1 = · · · = (Rω1k1 h)(λ).

Thus, as H(ea ) is invariant under the action of Rα , it is readily seen that s◦ h ∈ H(ea ).

Theorem 4.33 If s◦ ∈ Sin is rational, then:
(1) 1 is a singular value of the operator X = ΠH(ea ) Ms◦ |H(ea ) .
(2) SP(s◦ ; a) = {s◦ } for every a > 0.
Proof In view of Lemma 4.32, there exists a nonzero function h ∈ H(ea ) such that s◦ h ∈ H(ea ). Therefore, Xh = s◦ h and

⟨h, h⟩ − ⟨Xh, Xh⟩ = ⟨h, h⟩ − ⟨s◦ h, s◦ h⟩ = 0.

Thus, as ‖X‖ ≤ 1, it follows that (I − X ∗ X )h = 0 and hence that (1) holds; (2) then follows from Theorem 4.31.
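The key Hilbert space step in this proof — a contraction that attains its norm on a vector h must satisfy (I − X∗X)h = 0 — can be illustrated by a finite-dimensional sketch (numpy; the 6 × 6 matrices are hypothetical, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

# build a contraction X = Q1 @ D @ Q2 with singular values <= 1, one equal to 1
n = 6
D = np.diag(np.concatenate(([1.0], rng.uniform(0, 0.9, n - 1))))
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = Q1 @ D @ Q2

# h: a norm-attaining (maximizing) vector, ||X h|| = ||h||
h = Q2.conj().T[:, 0]
assert np.isclose(np.linalg.norm(X @ h), np.linalg.norm(h))

# since ||X|| <= 1 and the norm is attained on h, (I - X*X) h = 0
residual = (np.eye(n) - X.conj().T @ X) @ h
assert np.allclose(residual, 0)
```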



Remark 4.34 It is also possible to prove (2) of Theorem 4.33 directly without relying on (1): If s = s◦ + ea f for some f ∈ H∞ and h is a nonzero function in H(ea ) such that Xh = s◦ h, then

ΠH(ea ) sh = ΠH(ea ) s◦ h + ΠH(ea ) ea f h = ΠH(ea ) s◦ h.

Therefore,

‖h‖st = ‖s◦ h‖st = ‖ΠH(ea ) s◦ h‖st = ‖ΠH(ea ) sh‖st ≤ ‖sh‖st ≤ ‖h‖st .

Thus, sh = ΠH(ea ) sh = ΠH(ea ) s◦ h = s◦ h and hence, as h ≢ 0, s = s◦ .

Theorem 4.35 If c◦ ∈ C is rational, ℜc◦ (μ) = 0 at every point μ ∈ R except at the poles of c◦ (i.e., c◦ ∈ Csing ) and a > 0, then CP(c◦ ; a) = {c◦ }.
Proof If ℜc◦ (μ) = 0, then

s◦ = (1 − c◦ )/(1 + c◦ )

is a rational inner function. Therefore, since SP(s◦ ; a) = {s◦ } by Theorem 4.33, the asserted result follows.

4.6 The reproducing kernel Hilbert space H(U )

Theorem 4.36 Let Ω be a subset of C and let the m × m matrix-valued kernel Kω (λ) be positive on Ω × Ω. Then there is a unique Hilbert space H of m × 1 vvf's f (λ) on Ω such that

Kω ξ ∈ H and ⟨Kω ξ , f ⟩H = ξ ∗ f (ω)

for every ω ∈ Ω, ξ ∈ Cm and f ∈ H.
Proof This is a matrix version of a theorem of Aronszajn in [Arn50]; for a proof, see, e.g., theorem 5.2 in [ArD08b].

The space H is called an RKHS (reproducing kernel Hilbert space) with RK (reproducing kernel) Kω (λ). It is readily checked that an RKHS has exactly one RK and that

Kα (β )∗ = Kβ (α) for α, β ∈ Ω.
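Two concrete positive kernels of the kind considered in this chapter can be checked numerically. The first is the scalar RK of the space H(ea ) that figures in Section 4.5 (cf. formula (4.76) with b(λ) = eiaλ and p = 1); the second is the matrix kernel built from an elementary entire J-inner factor U(λ) = I + iλuu∗J with u∗Ju = 0 — a standard example that is not taken from the text. In both cases positivity means that all Gram matrices are positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Example 1: scalar kernel of H(e_a), with b(λ) = exp(iaλ) ---
a = 1.0

def K_pw(lam, omega):
    # (1 - b(λ) conj(b(ω))) / (-2πi (λ - conj(ω))); nonsingular for λ, ω in C+
    num = 1.0 - np.exp(1j * a * lam) * np.conj(np.exp(1j * a * omega))
    return num / (-2j * np.pi * (lam - np.conj(omega)))

pts = rng.standard_normal(8) + 1j * rng.uniform(0.1, 2.0, 8)
gram = np.array([[K_pw(l, w) for w in pts] for l in pts])
assert np.allclose(gram, gram.conj().T)          # Hermitian symmetry K_α(β)* = K_β(α)
assert np.linalg.eigvalsh(gram).min() > -1e-10   # positive semidefinite

# --- Example 2: matrix kernel from an elementary J-inner factor ---
J = np.diag([1.0, -1.0]).astype(complex)
u = np.array([[1.0], [1.0]], dtype=complex)      # J-neutral: u*Ju = 1 - 1 = 0

def U(lam):
    return np.eye(2, dtype=complex) + 1j * lam * (u @ u.conj().T) @ J

def K_J(lam, omega):
    # (J - U(λ) J U(ω)*) / (-2πi (λ - conj(ω)))
    return (J - U(lam) @ J @ U(omega).conj().T) / (-2j * np.pi * (lam - np.conj(omega)))

for _ in range(20):
    lam = rng.standard_normal() + 1j * rng.uniform(0.1, 2)
    om = rng.standard_normal() + 1j * rng.uniform(0.1, 2)
    # for this factor the kernel collapses to the constant PSD matrix u u*/(2π)
    assert np.allclose(K_J(lam, om), (u @ u.conj().T) / (2 * np.pi))
```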

Lemma 4.37 Let H be an RKHS of m × 1 vvf's on some nonempty open subset Ω of C with RK Kω (λ) on Ω × Ω. Then the two conditions
(1) Kω (λ) is a holomorphic function of λ in Ω for every point ω ∈ Ω;
(2) the function Kω (ω) is continuous on Ω;
are in force if and only if
(3) every vvf f ∈ H is holomorphic in Ω.
Proof

See lemma 5.6 and corollary 5.7 in [ArD08b]



We shall be particularly interested in RKHS's of holomorphic vvf's on an open set Ω that are invariant under the generalized backwards shift operator Rα for α ∈ Ω. More precisely, we shall focus on the RKHS's H(b) and H∗ (b) for b ∈ E ∩ Sinp×p , H(U ) for U ∈ E ∩ U (J) and B(E) for entire de Branges matrices E; and in particular for E(λ) = √2 N2∗ A(λ)V for some A ∈ E ∩ U (Jp ). However, for the sake of added perspective, we shall begin with a more general setting:

If U ∈ U (J), then the kernel

KωU (λ) = (J − U (λ)JU (ω)∗ )/ρω (λ) if λ ≠ ω̄,
KωU (λ) = U ′ (ω̄)JU (ω)∗ /(2πi) if λ = ω̄,   (4.73)

is positive on hU × hU . Therefore, it defines a unique RKHS that will be denoted H(U ); see e.g., (1) of theorem 5.31 in [ArD08b].

Theorem 4.38 If U ∈ E ∩ U (J) for some signature matrix J ∈ Cm×m , then:
(1) H(U ) ⊂ E ∩ Πm and every vvf in H(U ) is of exponential type and satisfies the Cartwright condition (3.86).
(2) The RKHS H(U ) is Rα invariant for every point α ∈ C.
(3) Rα is a bounded linear operator on H(U ) for every point α ∈ C.
(4) The de Branges identity

⟨Rα f , g⟩H − ⟨ f , Rβ g⟩H − (α − β̄ )⟨Rα f , Rβ g⟩H = 2πi g(β )∗ J f (α)   (4.74)

in which the subscript H denotes H(U ), is in force for every choice of α, β ∈ C and f , g ∈ H(U ).
Conversely, if (1)–(4) hold for a RKHS H, then H = H(U ) for some U ∈ E ∩ U (J).
Proof Assertions (1)–(4) follow from theorems 5.31, 5.49 and remark 5.32 in [ArD08b] and from Theorem 3.30; for a proof of the converse statement, see, e.g., theorem 5.21 in [ArD08b].

The RKHS's

H(b) = H2p ⊖ bH2p and H∗ (b) = (H2p )⊥ ⊖ b# (H2p )⊥   (4.75)

based on b ∈ Sinp×p will play an important role. The RK's kωb (λ) for H(b) and ℓωb (λ) for H∗ (b) are given by the formulas

kωb (λ) = (Ip − b(λ)b(ω)∗ )/(−2πi(λ − ω̄)) if λ ≠ ω̄,
kωb (λ) = b′ (ω̄)b(ω)∗ /(2πi) if λ = ω̄,   (4.76)

and

ℓωb (λ) = (b# (λ)b# (ω)∗ − Ip )/(−2πi(λ − ω̄)) if λ ≠ ω̄,
ℓωb (λ) = −b′ (ω)∗ b(ω̄)/(2πi) if λ = ω̄,   (4.77)

at points λ, ω and ω̄ at which the indicated functions are holomorphic. If b is entire, then both RK's are defined on C × C. We remark that formulas (4.76) and (4.77) are both special cases of (4.73); the former corresponds to the choice U = b and J = Ip ; the latter to the choice U = b# and J = −Ip .

Theorem 4.39 If U ∈ E ∩ U ◦ (J), then:
(1) The operator R0 is a Volterra operator, i.e., it is a compact linear operator from H(U ) into itself with {0} as its only point of spectrum.
(2) The linear operator

F0 : f ∈ H(U ) −→ √(2π) f (0) ∈ Cm   (4.78)

is bounded and

(F0∗ v)(λ) = √(2π) K0U (λ)v when v ∈ Cm .   (4.79)

(3) The operators R0 , F0 and J are connected by the relations

R0 − R0∗ = iF0∗ JF0   (4.80)

and

⋁n≥0 R0n F0∗ Cm = H(U ).   (4.81)

(4) The mvf U (λ) may be recovered from R0 , F0 and J by the formula

U (λ) = Im + iλF0 (I − λR0 )−1 F0∗ J for every λ ∈ C   (4.82)

and is subject to the bound

‖U (λ)‖ ≤ 1 + 2π|λ| ‖K0U (0)‖1/2 ‖KλU (λ)‖1/2 .   (4.83)

Proof The fact that Rα is a bounded linear operator for every α ∈ C was established in Theorem 4.38. It is also easy to check that it satisfies the resolvent identity

Rα − Rβ = (α − β )Rα Rβ .   (4.84)

The identity

(I − ωR0 )−1 = I + ωRω for ω ∈ C   (4.85)

then serves to guarantee that {0} is the only point of spectrum of R0 . The two assertions in (2) follow from the formula

v∗ F0 f = √(2π) ⟨ f , K0U v⟩H(U ) for v ∈ Cm .

Formula (4.80) is equivalent to the de Branges identity (4.74) with α = β = 0.
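The resolvent identity (4.84) for the difference-quotient operator can be checked directly on concrete entire functions; a small numerical sketch (numpy, with hypothetical test points) is:

```python
import numpy as np

rng = np.random.default_rng(4)

def R(alpha, f):
    # generalized backward shift: (R_alpha f)(λ) = (f(λ) - f(alpha)) / (λ - alpha)
    return lambda lam: (f(lam) - f(alpha)) / (lam - alpha)

f = lambda lam: np.exp(0.3j * lam) + lam**3 - 2 * lam  # an entire test function
alpha, beta = 0.7 + 0.2j, -1.1 + 0.5j

lhs = lambda lam: R(alpha, f)(lam) - R(beta, f)(lam)
rhs = lambda lam: (alpha - beta) * R(alpha, R(beta, f))(lam)

# verify R_alpha - R_beta = (alpha - beta) R_alpha R_beta pointwise
for lam in rng.standard_normal(10) + 1j * rng.standard_normal(10):
    assert np.isclose(lhs(lam), rhs(lam))
```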

The formula

R0 = (R0 + R0∗ )/2 + i (R0 − R0∗ )/(2i) = AR + iAI

exhibits R0 as a finite dimensional perturbation of the operator AR , which will be shown to be compact in Theorem 4.42. Therefore, R0 is compact.
Next, (4.80) implies that

⟨ f , R0n F0∗ v⟩H(U ) = 0 for every v ∈ Cm and n = 0, 1, . . .

if and only if ⟨(R0∗ )n f , F0∗ v⟩H(U ) = 0 for every v ∈ Cm and n = 0, 1, . . ., or, equivalently, if and only if

v∗ f (n) (0)/n! = 0 for every v ∈ Cm and n = 0, 1, . . .,

which serves to justify (4.81).
Next, (4.82) follows from the fact that

U (λ) = Σ j=0∞ λ j U ( j) (0)/j!,

since

√(2π) (U ( j) (0)/j!) v = F0 R0j Uv and F0∗ Jv = √(2π) K0U (λ)Jv = (1/(i√(2π))) (R0U )(λ)v.

On the other hand, the identity

U (λ) = λ (U (λ) − Im )/λ + Im = 2πiλK0U (λ)J + Im

implies that ‖U (λ)‖ ≤ 1 + 2π|λ| ‖K0U (λ)‖. The inequality (4.83) then follows from the fact that

|η∗ K0U (λ)ξ | = |⟨K0U ξ , KλU η⟩H(U ) | ≤ (ξ ∗ K0U (0)ξ )1/2 (η∗ KλU (λ)η)1/2 ≤ ‖K0U (0)‖1/2 ‖KλU (λ)‖1/2 ‖ξ ‖ ‖η‖

for every choice of ξ , η ∈ Cm .



Remark 4.40 If U ∈ E ∩ U ◦ (J), then the supplementary formulas

KωU (λ)ξ = ((I − ω̄R0 )−1 K0U )(λ) U (ω)∗ ξ = (1/√(2π)) ((I − ω̄R0 )−1 F0∗ U (ω)∗ ξ )(λ), ξ ∈ Cm ,   (4.86)

and, with the usual identification of m × m matrices as bounded linear operators in Cm ,

KωU (λ) = (1/(2π)) F0 (I − λR0 )−1 (I − ω̄R0∗ )−1 F0∗ ,   (4.87)

in which F0∗ acts on a matrix column by column, are also useful. The former yields the identity

⋁ω∈Ω (I − ω̄R0 )−1 F0∗ Cm = H(U )   (4.88)

for any open set Ω that contains the point 0, which is equivalent to (4.81).

Remark 4.41 In view of (4.80)–(4.82), the colligation N◦ = (R0 , F0 ; H(U ), Cm ; J) is the de Branges model of a simple Livsic–Brodskii J-node with characteristic mvf U (λ); it will be discussed in Section 8.4.

Theorem 4.42 If U ∈ E ∩ U ◦ (J) and

AR =def (1/2)(R0 + R0∗ ),

then:
(1) AR is a compact self-adjoint operator.
(2) λ j is a nonzero eigenvalue of AR if and only if det(Im + U (1/λ j )) = 0. Moreover, the multiplicity of λ j is equal to the dimension of the null space of the matrix (Im + U (1/λ j )) and, if ω j = 1/λ j ,

(AR f )(λ) = λ j f (λ) ⇐⇒ f (λ) = πiω j KωUj (λ)η j

for some nonzero vector η j ∈ Cm such that (Im + U (ω j ))η j = 0.
(3) AR belongs to the von Neumann–Schatten class S p for p > 1, i.e., AR is a compact operator with a countable set of singular values s1 , s2 , . . . such that Σ j |s j | p < ∞ for p > 1.
Proof The formula

R0 − R0∗ = iF0∗ JF0

implies that

AR = (1/2)(R0 + R0∗ ) = R0 − (i/2) F0∗ JF0

and hence that

(AR f )(λ) = (R0 f )(λ) − πiK0U (λ)J f (0)
= ( f (λ) − f (0))/λ + (1/2)((Im − U (λ))/λ) f (0)
= (2 f (λ) − {Im + U (λ)} f (0))/(2λ).

The rest of the proof is broken into steps.
1. If α ∈ R \ {0}, β = 1/α and Im + U (β ) is not invertible, then α is an eigenvalue of AR of multiplicity equal to the dimension of the null space of the matrix Im + U (β ).
In view of the preceding discussion,

(AR f j )(λ) = λ j f j (λ) ⇐⇒ f j (λ) = {Im + U (λ)} f j (0)/(2(1 − λλ j )),

i.e., if λ j ≠ 0, ω j = 1/λ j and λ ≠ ω j , then

f j (λ) = ω j {Im + U (λ)} η j /(2(ω j − λ)), where {Im + U (ω j )}η j = 0,

since f (λ) is an entire vvf. Thus,

f j (λ) = ω j (U (λ) − U (ω j )) η j /(2(ω j − λ)), f j (ω j ) = −(1/2) ω j U ′ (ω j )η j

and, as ω j ∈ R,

η j = −U (ω j )η j ⇐⇒ U (ω j )∗ Jη j = −Jη j .

Consequently, the formula for the eigenfunction f j (λ) can be rewritten in terms of the RK for H(U ) as

f j (λ) = πiω j KωUj (λ)Jη j .

Thus,

⟨ f j , fk ⟩H(U ) = π 2 ω j ωk ⟨KωUj Jη j , KωUk Jηk ⟩ = π 2 ω j ωk ηk∗ JKωUj (ωk )Jη j = 0 if k ≠ j.

Moreover, if also {Im + U (ω j )}ξ j = 0, then

⟨KωUj Jη j , KωUj Jξ j ⟩ = ξ j∗ JKωUj (ω j )Jη j = (i/(2π)) ξ j∗ JU ′ (ω j )η j .

Consequently, such a pair of vvf's in H(U ) will be orthogonal in H(U ) if ξ j and η j are orthogonal with respect to the inner product ⟨JKωUj (ω j )Jη, ξ ⟩ in Cm based on the positive definite matrix JKωUj (ω j )J.

2. If α ∈ R \ {0}, β = 1/α and Im + U (β ) is invertible, then αI − AR is a bounded invertible operator from H(U ) onto itself, i.e., the nonzero spectrum of AR consists only of eigenvalues.
It suffices to check that for each g ∈ H(U ) there exists exactly one vvf f ∈ H(U ) such that (αI − AR ) f = g. But a necessary condition for this is that

f (λ) = (−2λg(λ) + {Im + U (λ)} f (0))/(2(1 − λα))

and hence, since f (λ) is an entire vvf,

2βg(β ) = {Im + U (β )} f (0), i.e., f (0) = {Im + U (β )}−1 2βg(β ).

But this in turn leads to the formula

f (λ) = (λg(λ) − {Im + U (λ)}{Im + U (β )}−1 βg(β ))/(α(λ − β ))
= (λg(λ) − βg(β ))/(α(λ − β )) + ({Im − {Im + U (λ)}{Im + U (β )}−1 }βg(β ))/(α(λ − β ))
= βg(λ) + β 2 (Rβ g)(λ) − 2πiβKβU (λ)x, where x = βJU (β ){Im + U (β )}−1 g(β ).

3. AR is compact.
Since the reciprocals 1/λ j of the nonzero eigenvalues λ j of AR are the roots of an entire function, zero is the only possible limit point of the λ j . Therefore, they can be reindexed according to size. Let γ1 , γ2 , . . . denote a reindexing of the eigenvalues of AR with |γ j | ≥ |γ j+1 | and let ϕ j , j = 1, 2, . . . denote a corresponding set of orthonormal vvf's in H(U ). Then, since AR = AR∗ , the spectral theorem for bounded self-adjoint operators implies that

AR f = Σ j≥1 γ j ⟨ f , ϕ j ⟩H(U ) ϕ j

for every f ∈ H(U ). If AR has finite dimensional range, then it is automatically compact. If not, set

AR(n) f = Σ j=1n ⟨AR f , ϕ j ⟩H(U ) ϕ j = Σ j=1n γ j ⟨ f , ϕ j ⟩H(U ) ϕ j

for n = 1, 2, . . .. Then the bounds

‖AR(n+k) f − AR(n) f ‖2H(U ) ≤ |γn+1 |2 ‖ f ‖2H(U )

guarantee that ‖AR − AR(n) ‖ ≤ |γn+1 |, and hence that AR can be uniformly approximated in operator norm by finite rank operators and is therefore compact.
4. AR belongs to the von Neumann–Schatten class S p for p > 1.
Since AR = AR∗ , the numbers |γ j | = s j , the singular values of AR . Moreover,

Σ j≥1 s j1+δ = Σ j≥1 |γ j |1+δ < ∞,

since the numbers γ1−1 , γ2−1 , . . . are the nonzero roots (counting multiplicities) of the entire function of exponential type det{Im + U (λ)}; see e.g., p. 19 of [DMc72] for the bound. Therefore, AR ∈ S p for p > 1; pp. 91–95 of [GK69] is a good source of information on the von Neumann–Schatten class.

Theorem 4.43 If U ∈ E ∩ U (J), then:
(1) Every f ∈ H(U ) is an entire vvf of exponential type τ f and meets the Cartwright condition

ln( f ∗ f ) ∈ L̃1 .   (4.89)

(2) The type τ f = max{τ f+ , τ f− }, where

τ f± = lim ν↑∞ ln ‖ f (±iν)‖/ν.

(3) τ f± = τ± ( f ), where

τ± ( f ) = lim r↑∞ max{‖ f (λ)‖ : |λ| ≤ r and λ ∈ C± }/r.

(4) τ f ≤ τU and τ f± ≤ τU± for every f ∈ H(U ). Moreover, equality is achieved for each of these three inequalities by K0U ξ for appropriate choices of ξ ∈ Cm .
Proof This follows from Theorem 4.44 and a theorem of M.G. Krein on entire mvf's with bounded Nevanlinna characteristics in C+ and C− ; see Theorem 3.24.

Theorem 4.44 If U ∈ U (J), then H(U ) ⊂ Πm and ∩ f ∈H(U ) h f = hU . Moreover, RαUξ ∈ H(U ) for every α ∈ hU and ξ ∈ Cm .
Proof

See theorem 5.49 in [ArD08b].



An isometry from H(S) onto H(U ) when S = PG(U )
If J = Im , then U (J) = Sinm×m and

H(S) = H2m ⊖ SH2m for S ∈ Sinm×m .   (4.90)

If J = −Im , then U (J) = {U : U # ∈ Sinm×m } and

H(U ) = (H2m )⊥ ⊖ S# (H2m )⊥ = H∗ (S) for S = U # ∈ Sinm×m .   (4.91)

If J ≠ ±Im , U ∈ U (J) and S = PG(U ), then the description of H(U ) will be formulated in terms of

L(λ) = (P+ − S(λ)P− )−1 for λ ∈ hU ∩ hS ,   (4.92)

with the help of the formula

KωU (λ) = L(λ)KωS (λ)L(ω)∗ on Ω × Ω.   (4.93)

Theorem 4.45 Let J be an m × m signature matrix, U ∈ U (J), S = PG(U ),

G(μ) = P+ + U (μ)P−U (μ)∗ a.e. on R

and let L(λ) be defined by (4.92). Then the formula

f (λ) = L(λ)g(λ), g ∈ H(S)   (4.94)

defines a unitary operator, acting from H(S) onto H(U ), i.e., f ∈ H(U ) if and only if

(P+ − SP− ) f ∈ H2m and (P− − S# P+ ) f ∈ (H2m )⊥

and, if f ∈ H(U ), then

‖ f ‖2H(U ) = ‖(P+ − SP− ) f ‖2st = ‖G−1/2 f ‖2st .

Proof This follows from theorem 5.45 in [ArD08b].



4.7 de Branges' inclusion theorems

The next two theorems explore the connection between closed Rα invariant subspaces L of H(U ) and the left divisors U1 of U ∈ U (J).

Theorem 4.46 (L. de Branges) Let U ∈ U (J) and let L be a closed subspace of H(U ) that is Rα invariant for every point α ∈ hU . Then there exists an essentially unique mvf U1 ∈ U (J) such that L = H(U1 ) and U1−1U ∈ U (J). Moreover, the space H(U1 ) is isometrically included in H(U ), and

H(U ) = H(U1 ) ⊕ U1 H(U2 ), where U2 = U1−1U.   (4.95)

Proof See e.g., theorem 5.50 in [ArD08b].

Theorem 4.47 (L. de Branges) If U, U1 , U2 ∈ U (J) and U = U1U2 , then H(U1 ) sits contractively in H(U ), i.e., H(U1 ) ⊆ H(U ) (as linear spaces) and

‖ f ‖H(U ) ≤ ‖ f ‖H(U1 ) for every f ∈ H(U1 ).

The inclusion is isometric if and only if

H(U1 ) ∩ U1 H(U2 ) = {0}.   (4.96)

The condition (4.96) is in force if and only if

H(U ) = H(U1 ) ⊕ U1 H(U2 ).   (4.97)

Proof See e.g., theorem 5.52 in [ArD08b].

Theorem 4.48 If U1 , U2 ∈ U (J) and U = U1U2 , then hU = hU1 ∩ hU2 .
Proof In view of Theorem 4.47, the vvf fξ ,ω = KωU1 ξ belongs to H(U ) for every ξ ∈ Cm and every ω ∈ hU1 . Therefore, h fξ ,ω ⊇ hU by Theorem 4.38. This yields the inclusion hU1 ⊇ hU . Since U ∈ U (Jp ) =⇒ U τ ∈ U (Jp ), U τ = U2τ U1τ and hU = hU τ , the same argument applied to U2τ yields the inclusion hU2 ⊇ hU . Consequently, hU ⊆ hU1 ∩ hU2 . Thus, as the opposite inclusion is self-evident, equality prevails.

Corollary 4.49 If U1 , U2 ∈ U (J) and U = U1U2 is an entire mvf, then U1 and U2 are also entire mvf's.

Theorem 4.50 If W ∈ U ( j pq ), {b1 , b2 } ∈ ap(W ), W̃ ∈ U ( j pq ) and {b̃1 , b̃2 } ∈ ap(W̃ ), then the following three statements are equivalent:
(1) W̃ −1W ∈ U ( j pq ).
(2) b̃1−1 b1 ∈ Sinp×p , b2 b̃2−1 ∈ Sinq×q and TW̃ [S p×q ] ⊇ TW [S p×q ].
(3) H(W̃ ) ⊆ H(W ) and the inclusion is contractive.
Proof The equivalence of (1) and (2) is covered by Theorem 3.64; the implication (1) =⇒ (3) follows from Theorem 4.47. Conversely, if (3) holds, then

‖KωW̃ u‖2H(W̃ ) = u∗ KωW̃ (ω)u = ⟨KωW̃ u, KωW u⟩H(W )
≤ ‖KωW̃ u‖H(W ) ‖KωW u‖H(W ) ≤ ‖KωW̃ u‖H(W̃ ) ‖KωW u‖H(W ) ,

which in turn leads easily to the bound

j pq − W̃ (ω) j pqW̃ (ω)∗ ≤ j pq − W (ω) j pqW (ω)∗ .

Therefore,

W̃ (ω)−1W (ω) j pqW (ω)∗W̃ (ω)−∗ ≤ j pq

for points ω ∈ hW+ ∩ hW̃+ at which the indicated inverses exist, i.e., (3) =⇒ (1).

Remark 4.51 An analog of Theorem 4.50 for inner mvf's is: If b, b̃ ∈ Sinp×p , then:
(1) b̃−1 b ∈ Sinp×p if and only if H(b̃) ⊆ H(b).
(2) b b̃−1 ∈ Sinp×p if and only if H∗ (b̃) ⊆ H∗ (b).

Theorem 4.52 If A ∈ U (Jp ), Ã ∈ U (Jp ), {b3 , b4 } ∈ apII (A) and {b̃3 , b̃4 } ∈ apII (Ã), then the following three statements are equivalent:
(1) Ã−1 A ∈ U (Jp ).
(2) b̃3−1 b3 ∈ Sinp×p , b4 b̃4−1 ∈ Sinp×p and C(Ã) ⊇ C(A).
(3) H(Ã) ⊆ H(A) and the inclusion is contractive.
Proof The equivalence of (1) and (2) is covered by Theorem 3.77; the verification of (1) ⇐⇒ (3) is similar to the verification of (1) ⇐⇒ (3) in the proof of Theorem 4.50.

We remark that the equivalence of (1) and (3) in Theorem 4.50 extends to mvf's in U (J) for J ≠ ±Im .

4.8 A description of H(W ) ∩ L2m

In this section we present a description of the space H(W ) ∩ L2m for W ∈ U ( j pq ) and then use it to characterize the classes US (J), UrR (J) and UrsR (J) of singular, right regular and strongly right regular J-inner mvf's in terms of the linear space LU = H(U ) ∩ L2m and further to show that the inclusion H(Ũ ) ⊆ H(U ) for left divisors Ũ of U is isometric if Ũ ∈ UrR (J).
Since H(U ) ⊂ L2m (R) if J = ±Im , only the case J ≠ ±Im is of interest and thus, it suffices to focus on J = j pq and H(W ) ∩ L2m (R) for W ∈ U ( j pq ).
For each W ∈ U ( j pq ), let the operators X11 : H2q → H(b1 ) and X22 : H∗ (b2 ) → (H2p )⊥ be defined in terms of {b1 , b2 } ∈ ap(W ) and s ∈ TW [S p×q ] by the formulas

X11 = ΠH(b1 ) Ms |H2q , X22 = Π− Ms |H∗ (b2 ) .   (4.98)

Theorem 4.53 Let W ∈ U ( j pq ), {b1 , b2 } ∈ ap(W ), s ∈ TW [S p×q ] and let the operators X11 and X22 be defined by formula (4.98). Then:
(1) The operators X11 and X22 do not depend upon the choice of s ∈ TW [S p×q ].
(2) The subspace LW = H(W ) ∩ L2m is also given by the formula

LW = {col(g, X11∗ g) + col(X22 h, h) : g ∈ H(b1 ) and h ∈ H∗ (b2 )}   (4.99)

and:

‖ f ‖2H(W ) = ⟨[Ip  −s; −s∗  Iq ] f , f ⟩st for every f ∈ LW .   (4.100)

Proof This is covered by theorem 5.81 in [ArD08b].

Theorem 4.54 Let U ∈ U (J). Then:
(1) U admits a right regular-singular factorization

U = U1U2 with U1 ∈ UrR (J) and U2 ∈ US (J)   (4.101)

that is unique up to multiplication by a constant J-unitary factor V on the right of U1 and V ∗ on the left of U2 .
(2) LU = LU1 = H(U1 ).
(3) H(U1 ) is isometrically included in H(U ).
Proof If J = ±Im , then the theorem is obvious, since H(U ) is a closed subspace of L2m and US (J) = Uconst (J). If J ≠ ±Im , then it follows from theorem 5.89, lemma 5.85 and corollary 5.87 in [ArD08b].

Theorem 4.55 Let U ∈ U (J). Then:
(1) U ∈ US (J) ⇐⇒ H(U ) ∩ L2m = {0}.
(2) U ∈ UrR (J) ⇐⇒ H(U ) ∩ L2m is dense in H(U ).
(3) U ∈ UrsR (J) ⇐⇒ H(U ) ⊂ L2m .
Moreover, the following are equivalent:
(a) U ∈ UrsR (J).
(b) There exist a pair of constants γ2 ≥ γ1 > 0 such that

γ1 ‖ f ‖st ≤ ‖ f ‖H(U ) ≤ γ2 ‖ f ‖st for every f ∈ H(U ).   (4.102)

(c) H(U ) is a closed subspace of L2m . (d) H(U ) ⊂ L2m . Proof

This follows from theorems 5.86 and 5.92 in [ArD08b].



Theorem 4.56 If U ∈ U (J), Ũ ∈ UrR (J) and Ũ −1U ∈ U (J), then the inclusion H(Ũ ) ⊆ H(U ) is isometric.
Proof It suffices to verify the statement for W ∈ U ( j pq ), W̃ −1W ∈ U ( j pq ) and W̃ ∈ UrR ( j pq ). Let {b1 , b2 } ∈ ap(W ) and {b̃1 , b̃2 } ∈ ap(W̃ ). Then, by Theorem 4.50,

b̃1−1 b1 ∈ Sinp×p , b2 b̃2−1 ∈ Sinq×q and TW [S p×q ] ⊆ TW̃ [S p×q ].

Therefore, Theorem 4.13 guarantees that

TW̃ [S p×q ] = S(b̃1 , b̃2 ; s) for every s ∈ TW [S p×q ].

Now let X11 and X22 be the operators defined by formula (4.98) and let X̃11 and X̃22 be defined by the same formulas but with b̃1 in place of b1 and b̃2 in place of

b2 and let

LW̃ = {col(g, X̃11∗ g) + col(X̃22 h, h) : g ∈ H(b̃1 ) and h ∈ H∗ (b̃2 )}.

Then, since H(b̃1 ) ⊆ H(b1 ) and H∗ (b̃2 ) ⊆ H∗ (b2 ), it follows that

X11∗ g = X̃11∗ g for g ∈ H(b̃1 ) and X22 h = X̃22 h for h ∈ H∗ (b̃2 ).

Therefore, by Theorem 4.53,

H(W̃ ) ∩ L2m = LW̃ ⊆ LW = H(W ) ∩ L2m

and

‖ f ‖2H(W̃ ) = ⟨[Ip  −s; −s∗  Iq ] f , f ⟩st = ‖ f ‖2H(W ) for every f ∈ LW̃ .

 ) ⊆ H(W ) is contractive by Theorem 4.47 and isoThus, as the inclusion H(W  ) by Theorem 4.55, the inclusion must be metric on LW and LW is dense in H(W isometric.  Theorem 4.57 If Un ∈ UrR (J) for n = 1, 2, . . . is a monotone  sequence such that  U (λ) = lim Un (λ) for every point λ ∈ hU+n n↑∞

n≥1



and U ∈ P (J), then U ∈ UrR (J). Proof Lemma 3.59 implies that the limit U ∈ U (J). Therefore, in view of Theorem 4.55, it remains only to show that if f ∈ H(U ) is orthogonal to H(U ) ∩ L2m , then f = 0. However, since Un−1U ∈ U (J) and Un ∈ UrR (J), Theorem 4.56 guarantees that the inclusions H(Un ) ⊆ H(U ) are isometric. Thus, if f ∈ H(U ) is orthogonal to H(U ) ∩ L2m , it is also orthogonal to H(Un ) ∩ L2m and hence also to H(Un ) for every positive integer n, since H(Un ) ∩ L2m is dense in H(Un ). Therefore,  f , KωUn ξ H(U ) = 0

for every ω ∈ hU and ξ ∈ Cm .

Moreover, as KwU ξ − KωUn ξ 2H(U ) = ξ ∗ KωU (ω)ξ − ξ ∗ KωUn (ω)ξ → 0 as n ↑ ∞ for ω ∈ ∩n≥1 hUn , it follows that  f , KωU ξ H(U ) = lim  f , KωUn ξ H(U ) = 0 n↑∞

for every ω ∈

hU+

and ξ ∈ Cm . Consequently, f = 0.



140

Interpolation problems and de Branges spaces

4.9 The classes UAR (J) and UBR (J) of A-regular and B-regular J-inner mvf’s A mvf U ∈ U (J) will be said to belong to the class UAR (J) of A-regular J-inner mvf’s if every left divisor of U belongs to the class UrR (J). This class has a number of different characterizations: Lemma 4.58 The following statements are equivalent: (1) U ∈ UAR (J). (2) Every right divisor of U belongs to the class UR (J). (3) If U = U1U2 with factors U1 , U2 ∈ U (J), then U1 ∈ UrR (J) and U2 ∈ UR (J). (4) U ∼ ∈ UAR (J). (5) If U = U1U2U3 with factors U1 , U2 , U3 ∈ U (J), then Ui ∈ US (J) if and only Ui ∈ Uconst (J). Proof If U2 is a right divisor of U, then U = U1U2 with U1 , U2 ∈ U (J). Thus, if U2 = U3U4 with U3 ∈ US (J) and U4 ∈ U (J), then the formula U = U1U3U4 exhibits U1U3 as a left divisor of U. Therefore, U1U3 ∈ UrR (J), by the definition of the class UAR (J). Consequently, U3 ∈ Uconst (J) and hence U2 ∈ UR (J), i.e., (1) =⇒ (2). The remaining assertions may be verified in much the same way and are left to the reader.  In view of Theorem 4.19 and Lemma 4.58, UrsR (J) ∪ UsR (J) ⊆ UAR (J) ⊆ UrR (J) ∩ UR (J).

(4.103)

A mvf U ∈ U (J) will be said to belong to the class UBR (J) of B-regular J-inner mvf’s if every left divisor U1 ∈ U (J) of U satisfies the de Branges condition H(U1 ) ∩ U1 H(U1−1U ) = {0}.

(4.104)

In view of Theorem 4.46, the condition in (4.104) holds if and only if the inclusion H(U1 ) ⊆ H(U ) is isometric. Lemma 4.59 Let U ∈ U (J) and let T be the operator defined on H(U ) by the formula (T f )(λ) = U ∼ (λ)J f (−λ) for λ ∈ hU ∩ hU ∼ .

(4.105)

Then T is a unitary operator from H(U ) onto H(U ∼ ). Proof Let λ, ω ∈ hU ∼ , −λ, −ω ∈ hU and suppose that λ = ω, det U ∼ (λ) = 0 and det U ∼ (ω) = 0. Then ∼

U (−λ)JU ∼ (ω)∗ . KωU (λ) = U ∼ (λ)JK−ω

(4.106)

4.9 The classes UAR (J) and UBR (J)

141

Therefore, the operator T maps the dense subspace L1 of vvf’s f ∈ H(U ) of the form f (λ) =

n 

U K−ω (λ)JU ∼ (ω j )∗ ξ j j

with ξ j ∈ Cm and n ≥ 1

(4.107)

j=1

into the dense subspace L2 of vvf’s g(λ) = (T f )(λ) = U ∼ (λ)J =

n 

n 

U K−ω (−λ)JU ∼ (ω j )∗ ξ j j

j=1 ∼

KωUj

(λ)ξ j

(4.108)

with ξ j ∈ C and n ≥ 1. m

j=1

Moreover, if f and g are defined by the above formulas, then  f , f H(U ) =

n  n 

U ξ j∗U ∼ (ω j )JK−ω (−ω j )JU ∼ (ωk )∗ ξk k

j=1 k=1

=

n  n 

(4.109) ∼ ξ j∗ KωUk (ω j )ξk

= g, gH(U ∼ ) .

j=1 k=1

Thus, T maps L1 isometrically onto L2 . Moreover, if f ∈ H(U ), then there exists a sequence of vvf’s fk ∈ L1 such that  f − fk H(U ) → 0 as k ↑ ∞. But, as H(U ) is a RKHS, this implies that fk (λ) → f (λ) at each point λ ∈ hU as k ↑ ∞. Thus, if gk = T fk for k = 1, 2, . . . , then gk (λ) = (T fk )(λ) = U ∼ (λ)J fk (−λ) → U ∼ (λ)J f (−λ) as k ↑ ∞ for each point λ ∈ hU ∼ such that −λ ∈ hU . Since gk H(U ∼ ) =  fk H(U ) →  f H(U )

as k ↑ ∞

and gk − g j H(U ∼ ) =  fk − f j H(U ) , there exists a vvf g ∈ H(U ∼ ) such that gk − gH(U ∼ ) → 0 as k ↑ ∞. Therefore, since H(U ∼ ) is a RKHS and  hg , hU ∼ = g∈H(U ∼ )

gk (λ) → g(λ) at each point λ ∈ hU ∼ as k ↑ ∞. Consequently, g(λ) = U ∼ (λ)J f (−λ) = (T f )(λ) for f ∈ H(U ), i.e., T maps H(U ) into H(U ∼ ). Therefore, since T is an isometry on the full space  H(U ) and L2 is dense in H(U ∼ ), T maps H(U ) onto H(U ∼ ).

142

Interpolation problems and de Branges spaces

Theorem 4.60 If T is the operator defined on H(U ) by formula (4.105), then UsR (J) = {U ∈ U (J) : T f ∈ L2m (R) Proof

for every f ∈ H(U )}.

This follows from Lemma 4.59 and formulas (1.26) and (4.45).

(4.110) 

Remark 4.61 Since g(μ) belongs to L2m (R) if and only if g(−μ) belongs to L2m (R), the equality (4.110) is equivalent to the following equality UsR (J)

= {U ∈ U (J) : U # J f ∈ L2m (R) for every f ∈ H(U )} = {U ∈ U (J) : U −1 f ∈ L2m (R) for every f ∈ H(U )}.

(4.111)

Theorem 4.62 U ∈ UBR (J) ⇐⇒ U ∼ ∈ UBR (J). Proof If U ∈ UBR (J) and U = U1U2 is a factorization of U with factors U1 , U2 ∈ U (J), then U ∼ = U2∼U1∼ . Let f ∈ H(U2∼ ) ∩ U2∼ H(U1∼ ). Then, by Lemma 4.59, f (λ) = U2∼ (λ)J f2 (−λ) = U2∼ (λ)U1∼ (λ)J f1 (−λ), where f j ∈ H(U j ) for j = 1, 2. Therefore, J f2 (−λ) = U1∼ (λ)J f1 (−λ), i.e., f2 (λ) = JU1# (λ)J f1 (λ) = U1 (λ)−1 f1 (λ). Thus, f1 = U1 f2 , and hence f1 ∈ H(U1 ) ∩ U1 H(U2 ) = {0}. Consequently, f = 0, i.e., H(U1 ) ∩ U1 H(U2 ) = {0} =⇒ H(U2∼ ) ∩ U2∼ H(U1∼ ) = {0}. The converse implication then follows from the fact that ( f ∼ )∼ = f .



Theorem 4.63 If J = ±Im , then UsR ∪ UrsR (J) ⊆ UAR (J) ⊆ UBR (J).

(4.112)

Proof If U ∈ UrsR (J) and U1 ∈ U (J) is a left divisor of U, then Theorem 4.19 guarantees that U1 ∈ UrR (J). On the other hand, if U ∈ UsR (J) and U1 ∈ U (J) is a left divisor of U, then U ∼ ∈ UrsR (J), U1∼ ∈ U (J) and U1∼ is a right divisor

4.10 de Branges matrices E and de Branges spaces B(E)

143

of U ∼ . Therefore, by another application of Theorem 4.19, U1∼ ∈ UR (J) and hence U1 ∈ UrR (J). Therefore, the first inclusion holds. The second follows from Theorem 4.56.  Theorem 4.64 If U1 , . . . , Un ∈ U (J) and U = U1 · · · Un , then U ∈ UBR (J) =⇒ Uk ∈ UBR (J)

for k = 1, . . . , n.

Proof It suffices to consider the case n = 2. Then if U = U1U2 , U1 = UaUb , with Ua , Ub , U2 ∈ U (J) and U ∈ UBR (J), the two factorizations U = U1U2 and U = Ua (UbU2 ) imply that  f H(U1 ) =  f H(U )

for every f ∈ H(U1 )

 f H(Ua ) =  f H(U )

for every f ∈ H(Ua ),

and

respectively. Therefore,  f H(Ua ) =  f H(U1 )

for every f ∈ H(Ua ),

which proves that U1 ∈ UBR (J). The proof that U2 ∈ UBR (J) follows from formula U ∼ = U2∼U1∼ and Theorem 4.62. 

4.10 de Branges matrices E and de Branges spaces B(E ) In this section we summarize a number of results from the theory of de Branges matrices E(λ) and de Branges spaces B(E) in the special case that E is an entire p × 2p mvf and hence B(E) is a RKHS of entire p × 1 vvf’s. These results are adapted from the treatment on pp. 295–305 of [ArD08b] (which deals with the more general setting of meromorphic de Branges matrices) and are presented without proof. The mvf E(λ) = [E− (λ) E+ (λ)],

(4.113)

with p × p blocks E± that are entire mvf’s and meet the conditions det E+ (λ) ≡ 0

and

χ = E+−1 E− ∈ Sinp×p def

(4.114)

will be called an entire de Branges matrix. In view of (4.114), E+ (λ)E+# (λ) ≡ E− (λ)E−# (λ) in C.

(4.115)

144

Interpolation problems and de Branges spaces

If E is an entire de Branges matrix, then the kernel ⎧ ∗ ∗ ⎨ E+ (λ)E+ (ω) − E− (λ)E− (ω) if λ = ω ρω (λ) KωE (λ) = ⎩ − 1 {E+ (ω)E+ (ω)∗ − E− (ω)E− (ω)∗ } if λ = ω. 2π i

(4.116)

is positive on C × C, since χ ∈ Sinp×p and KωE (λ) = E+ (λ)kωχ (λ)E+ (ω)∗

on hχ × hχ .

(4.117)

Therefore, by Theorem 4.36, there is exactly one RKHS B(E) with RK KωE (λ) associated with each de Branges matrix E; it will be called a de Branges space. Moreover, since the kernel KωE (λ) is an entire function of λ for every fixed ω ∈ C and KωE (ω) is continuous on C, Lemma 4.37 guarantees that every vvf f ∈ B(E) is entire. Theorem 4.65 If an entire de Branges matrix E = [E− and χ = E+−1 E− , then:

E+ ] belongs to  p×2p

(1) f ∈ B(E) ⇐⇒ E+−1 f ∈ H(χ ) ⇐⇒ E−−1 f ∈ H∗ (χ ). (2) f ∈ E ∩  p for every f ∈ B(E), i.e., B(E) ⊂ E ∩  p . (3) If f ∈ B(E), then  f 2B(E) = E+−1 f 2st =





−∞

f (μ)∗ E (μ) f (μ)dμ,

(4.118)

(4.119)

where E (μ) = E+ (μ)−∗ E+ (μ)−1 = E− (μ)−∗ E− (μ)−1

(4.120)

is well defined on R, except at the real zeros of det E(λ). Remark 4.66 In view of formula (4.117), the mapping g+ ∈ H(χ ) −→ E+ g+

(4.121)

defines a unitary operator from H(χ ) onto B(E) and the mapping g− ∈ H∗ (χ ) −→ E− g−

(4.122)

defines a unitary operator from H∗ (χ ) onto B(E). Lemma 4.67 Let E = [E− E+ ] be an entire de Branges matrix and let χ = E+−1 E− . Then the following conditions are equivalent: (1) The inequality KωE (ω) > 0 holds for at least one point ω ∈ C+ . (2) The inequality KωE (ω) > 0 holds for every point ω ∈ C+ . (3) The equality { f (ω) : f ∈ B(E)} = C p holds for at least one point ω ∈ C+ .

4.10 de Branges matrices E and de Branges spaces B(E)

145

The equality { f (ω) : f ∈ B(E)} = C p holds for every point ω ∈ C+ . The inequality kωχ (ω) > 0 holds for at least one point ω ∈ hχ . The inequality kωχ (ω) > 0 holds for every point ω ∈ hχ . The equality { f (ω) : f ∈ H(χ )} = C p holds for at least one point ω ∈ hχ . The equality { f (ω) : f ∈ H(χ )} = C p holds for every point ω ∈ hχ . √ Moreover, if E ∈  p×2p (and hence if E = 2N2∗ AV for some A ∈ E ∩ U (Jp )), then the equivalences in (1)–(4) hold with C in place of C+ . (4) (5) (6) (7) (8)

Regular de Branges matrices E and spaces B(E) Let E be an entire de Branges matrix. Then the space B(E) will be called a regular de Branges space if it is Rα invariant for every point α ∈ C+ ; E will be called a regular de Branges matrix if ρα−1 E+−1 ∈ H2p×p

and

ρα−1 E−−1 ∈ (H2p×p )⊥

(4.123)

for at least one (and hence every) point α ∈ C+ . Since E− = χ E+ , the constraints in (4.123) guarantee that E ∈  p×2p and hence, by Theorem 3.30, that E is an entire mvf of exponential type. Lemma 4.68 If E = [E− implications are in force:

E+ ] is an entire de Branges matrix, then the following

(a) E is a regular de Branges matrix =⇒ (b) B(E) is a regular de Branges space =⇒ (c) B(E) is Rα invariant for at least one point α ∈ C+ ; i.e., (a) =⇒ (b) =⇒ (c). If KωE (ω) > 0

for at least one (and hence every) point ω ∈ C+ , (4.124)

then (c) =⇒ (a) and hence (a) ⇐⇒ (b) ⇐⇒ (c). Moreover, if E(λ) is an entire regular de Branges matrix, then B(E) is a RKHS of p × 1 entire vvf’s, its RK, KωE (λ), is defined on C × C by formula (4.116) and B(E) is Rα invariant for every point α ∈ hχ . Theorem 4.69 Let E = [E− E+−1 E− and suppose that

E+ ] be an entire de Branges matrix, let χ =

E+ (0) = E− (0) = Ip .

(4.125)

Then: (1) B(E) is a regular de Branges space if and only if it is R0 invariant. (2) E is a regular de Branges matrix if and only if E ∈  p×2p and R0 E+ ξ ∈ B(E) and R0 E− ξ ∈ B(E) for every ξ ∈ C p . (3) If E is a regular de Branges matrix, then B(E) is R0 invariant.

146

Interpolation problems and de Branges spaces

(4) If B(E) is R0 invariant and −iχ  (0) > 0, then E is a regular de Branges matrix, (5) −iχ  (0) > 0 if and only if each of the eight equivalent conditions in Lemma 4.67 is in force. Connections between mvf’s A ∈ E ∩ U (Jp ) and entire de Branges matrices E If A ∈ E ∩ U (Jp ), then Lemma 3.66 guarantees that √ E(λ) = [E− (λ) E+ (λ)] = 2N2∗ A(λ)V = [a22 (λ) − a21 (λ) a22 (λ) + a21 (λ)]

(4.126)

is an entire regular de Branges matrix. Moreover, KωE (λ) = 2N2∗ KωA (λ)N2

for λ, ω ∈ C,

(4.127)

det E± (μ) = 0 on R, E (μ) = E+ (μ)−∗ E+ (μ)−1 = E− (μ)−∗ E− (μ)−1 def

on R

(4.128)

and E ∈ L1

p×p

.

(4.129)

A mvf A ∈ E ∩ U (Jp ) is called perfect if the mvf c0 = TA [Ip ] belongs to the class Cap×p , i.e., if the matrix βc0 = 0 in the representation formula (1.65) for c0 (λ). (In this case the spectral function σc0 (μ) in that formula is automatically locally absolutely continuous.) This condition is equivalent to the condition  1 ∞ Rc0 (μ) Rc0 (i) = dμ. (4.130) π −∞ 1 + μ2 Additional information on the connections between entire regular de Branges matrices E(λ) and perfect mvf’s A ∈ E ∩ U (Jp ) is provided by the next two theorems, which follow from the more general formulations (that do not assume that E and A are entire) on pp. 302–305 in [ArD08b]. Theorem 4.70 If A ∈ E ∩ U (Jp ), E is given by (4.126) and c = TA [Ip ], then: (1) E is a regular de Branges matrix and c ∈  ∩ C p×p is a meromorphic mvf that is holomorphic in C+ . (2) The mvf A can be recovered from E and c by the formula   1 −c# E− cE+ V. (4.131) A= √ E− E+ 2 (3) The mvf c admits a unique decomposition of the form c = cs + ca

(4.132)

4.10 de Branges matrices E and de Branges spaces B(E)

147

with components cs = −iβλ for some β ∈ C p×p that is nonnegative and 1 ca (λ) = iα + πi





−∞



 μ 1 − E+ (μ)−∗ E+ (μ)−1 dμ μ−λ 1 + μ2 for λ ∈ C+

(4.133)

for some Hermitian matrix α ∈ C p×p . (4) The given mvf A ∈ E ∩ U (Jp ) admits a factorization of the form A(λ) = As (λ)Aa (λ), where

 As (λ) =

and

Ip

−iβλ

0

Ip

(4.134)



  1 −c#a (λ)E− (λ) ca (λ)E+ (λ) V. Aa (λ) = √ E+ (λ) E− (λ) 2

(4.135)

(4.136)

Moreover, As ∈ E ∩ US (Jp ), Aa ∈ E ∩ U (Jp ) and the mvf’s ca and Aa are uniquely determined by E up to an additive constant iα in (4.133) and a corresponding left constant multiplicative factor in (4.136) that is of the form   Ip iα (4.137) with α = α ∗ ∈ C p×p . 0 Ip Conversely, if E = [E− E+ ] is an entire regular de Branges matrix and if c = cs + ca , where cs (λ) = −iβλ for some β ∈ C p×p , β ≥ 0 ca is defined by formula (4.133), then ca ∈  p×p ; the mvf A that is defined by formulas (4.131)–(4.133) belongs to the class E ∩ U (Jp ); and (4.126) holds. Theorem 4.71 Let E = [E− E+ ] be an entire regular de Branges matrix and let E (μ) = E+ (μ)−∗ E+ (μ)−1 on R. Then: (1) There exists a perfect mvf A ∈ E ∩ U (Jp ) such that (4.126) holds. Moreover, A is uniquely defined by E up to a left constant Jp -unitary factor of the form (4.137) by formulas (4.133) and (4.136). There is only one such perfect mvf Aa for which ca (i) > 0, i.e., for which α = 0 in (4.133). (2) If E satisfies the conditions (4.125), then there is exactly one perfect mvf A ∈ E ∩ U ◦ (Jp ) for which (4.126) holds. It is given by formula (4.131), where  1 λ ∞ c(λ) = Ip + (4.138) { (μ) − Ip }dμ. πi −∞ μ(μ − λ) E

148

Interpolation problems and de Branges spaces

4.11 A coisometry from H(A) onto B(E ) The next theorem shows that if E(λ) is a regular de Branges matrix that is related to an entire mvf A ∈ U (Jp ) by (4.126), then there is a simple formula that defines a coisometric map from H(A) onto B(E). Theorem 4.72 Let A ∈ E ∩ U (Jp ) and let E, ca , cs , Aa and As be defined by A as in Theorem 4.70. Let U2 denote the operator that is defined on H(A) by the formula √ (4.139) (U2 f )(λ) = 2[0 Ip ] f (λ) for f ∈ H(A). Then: (1) H(As ) = { f ∈ H(A) : (U2 f )(λ) ≡ 0}. Moreover,    βu H(As ) = ker U2 = : u ∈ Cp 0

(4.140)

with inner product (

  ) βu βv = 2π v∗ βu. , 0 0 H(A ) s

(2) The orthogonal complement of H(As ) in H(A) is equal to As H(Aa ), i.e., H(A) = H(As ) ⊕ As H(Aa ).

(4.141)

(3) The operator U2 is a partial isometry from H(A) onto B(E) with kernel H(As ), i.e., U2 maps H(A)  H(As ) isometrically onto B(E). (4) The operator U2 is unitary from H(A) onto B(E) if and only if the mvf A is perfect, i.e., if and only if βc0 = 0, where c0 = TA [Ip ]. Proof



This follows from theorem 5.76 in [ArD08b].

Theorem 4.73 Let E = [E− E+ ] be an entire regular de Branges matrix such that (4.125) holds and let A(λ) be the unique perfect matrix in U ◦ (Jp ) such that (4.126) holds. Let E (μ) be defined by formula (4.128), G± (λ) = (R0 E± )(λ) and

G = [G+ + G−

G+ − G− ].

(4.142)

Then G± ξ ∈ B(E) for every ξ ∈ C p and the adjoint U2∗ of the unitary operator U2 is given by the formula  ∞ 1 λg(λ) − μg(μ) G(μ)∗ E (μ) dμ. (4.143) (U2∗ g)(λ) = √ λ−μ 22π i −∞ Proof

See theorem 6.15 in [ArD08b].



◦ 4.12 Formulas for resolvent matrices W ∈ E ∩ UrsR ( j pq )

149

◦ 4.12 Formulas for resolvent matrices W ∈ E ∩ UrsR ( j pq )

In view of Theorem 4.20, there exists exactly one resolvent matrix W ∈ E ∩ ◦ ( j pq ) for each strictly completely indeterminate GSIP(b1 , b2 ; s◦ ) with b1 ∈ UrsR E ∩ Sinp×p , b2 ∈ E ∩ Sinq×q and {b1 , b2 } ∈ ap(W ). In this section we shall present a formula for this resolvent matrix in terms of b1 , b2 and the block triangular operator X of the form (4.25) for which S(b1 , b2 ; s◦ ) = S(b1 , b2 ; X ), the set defined in (4.29). Moreover, by Theorem 4.10, the GSIP(b1 , b2 ; s◦ ) is strictly completely indeterminate if and only if X < 1. Let H(b1 , b2 ) =

H(b1 ) . ⊕ H∗ (b2 )

The condition X < 1 is equivalent to the condition that the bounded linear operator   ∗ I − X11 X11 −X12 (4.144) : H(b1 , b2 ) → H(b1 , b2 ) X = ∗ ∗ −X12 I − X22 X22 is strictly positive, i.e., X < 1 ⇐⇒ X > εI

for some ε > 0.

To verify this, first use Schur complements to verify that  ∗ ∗ ∗ I − X11 X11 − X12 (I − X22 X22 )−1 X12 X < 1 ⇐⇒ 0

(4.145)

 0 > εI ∗ I − X22 X22

∗ for some ε > 0, and then replace the 22 block entry by I − X22 X22 and compare the resulting operator matrix with the corresponding Schur complement formula for X . If X = 1, then applying (4.145) to ρX with 0 < ρ < 1 and then letting ρ ↑ 1 yields the supplementary equivalence

X ≤ 1 ⇐⇒ X ≥ 0.

(4.146)

◦ Theorem 4.74 Let W ∈ E ∩ UrsR ( j pq ), {b1 , b2 } ∈ ap(W ) and s12 = TW [0]. Let the operators X11 , X22 and X12 be defined by the formulas in (4.26) with s = s12 and let the operator X be defined by formula (4.144). Then:

(1) The formula 

I f = ∗ X11

X22 I

  g , h

  g ∈ H(b1 , b2 ), h

(4.147)

150

Interpolation problems and de Branges spaces defines a bounded bijective operator   I X22 LX = : H(b1 , b2 ) → H(W ) ∗ X11 I

(4.148)

from H(b1 , b2 ) onto H(W ) with bounded inverse and hence X = LX∗ LX > εI (2) If f is as in (4.147), then

for some ε > 0.

(4.149)

(

 f 2H(W )

   ) g g = X , h h st

(4.150)

(3) If s ∈ TW [S p×q ], or, equivalently, if s ∈ S(b1 , b2 ; X ), then (  ) Ip −s 2 f, f  f H(W ) = −s∗ Iq st

(4.151)

for every f ∈ H(W ). Proof

This follows from Theorem 4.53, since H(W ) ⊂ L2m when W ∈ UrsR ( j pq ). 

∗ H(b ) is a closed subspace of H(d I ) If d j (λ) = det b j (λ) for j = 1, 2, then X11 1 1 p that is invariant under the action of Rα for every α ∈ C+ . Therefore, by Lemma 3.6 and Corollary 3.8, and their counterparts in (H2p )⊥ , ∗ H(b ) = H(b ˚1) X11 1

and

X22 H∗ (b2 ) = H∗ (b˚ 2 )

(4.152)

p×p and for some pair of mvf’s b˚ 1 ∈ Sinp×p and b˚ 2 ∈ Sinq×q such that d1 b˚ −1 1 ∈ Sin q×q −1 d2 b˚ 2 ∈ Sin , respectively. If b1 and b2 are entire, as in the present case, then d1 and d2 are entire and hence, in view of Lemma 3.35, b˚ 1 and b˚ 2 are also entire.   Consider the set Hm (b1 , b2 ) of m × m mvf’s F = f1 f2 · · · fm with columns f j ∈ H(b1 , b2 ) for 1 ≤ j ≤ m, as the orthogonal sum  of m copies of m the space H(b1 , b2 ) and the set H (W ) of m × m mvf’s K = k1 k2 · · · km with columns k j ∈ H(W ), for 1 ≤ j ≤ m, as the orthogonal sum of m copies of the space H(W ). Let the operators

X : Hm (b1 , b2 ) → Hm (b1 , b2 ) and LX : Hm (b1 , b2 ) → Hm (W ) act on these spaces of m × m mvf’s column by column:    X f1 f2 · · · fm = X f1 X f2 · · ·

X f m



and  LX f1

f2

···

  fm = LX f1

LX f2

···

 LX f m .

◦ 4.13 Formulas for resolvent matrices A ∈ E ∩ UrsR (Jp )

151

∗ Analogously, let the operators X11 and X22 act on p × p and q × q mvf’s respectively, column by column.

Theorem 4.75 If the GSIP(b1 , b2 ; s◦ ) with entire inner mvf’s b1 and b2 is strictly completely indeterminate and the mvf W is the unique resolvent matrix of this ◦ ( j pq ) with {b1 , b2 } ∈ ap(W ), then the RK of the problem in the class E ∩ UrsR RKHS H(W ) is given by the formula X KωW (λ) = (LX −1 X Fω )(λ),

(4.153)

where  FωX (λ)

=

kωb1 (λ) 

  (X11 kωb1 )(λ)

∗ b2 ω )(λ) (X22

bω2 (λ)

on C × C,

(4.154)

p×p q×q   b2 ∈ E ∩ Sinq×q are such that b˚ −1 and  b2 b˚ −1 b1 ∈ E ∩ Sinp×p and  1 b1 ∈ Sin 2 ∈ Sin and b˚ 1 and b˚ 2 are defined in (4.152). Thus,

W (λ) = Im + 2πiλ K0W (λ) j pq ,

(4.155)

where K0W (λ) is obtained from formula (4.153) with ω = 0. In these formulas, the b j may be normalized at the point λ = 0 to be Ip if j = 1 entire inner mvf’s b j and  and Iq if j = 2, in which case k0b1 (λ) = b02 (λ)

Ip − b1 (λ) , −2πiλ

b# (λ) − Iq = 2 −2πiλ



b1 (λ) Iq −  , −2πiλ   b# (λ) − Ip 0b2 (λ) = 2 . −2πiλ

k0b1 (λ) = and

(4.156)

Proof This follows from theorem 5.95 in [ArD08b], which is obtained from Theorem 4.74.  Remark 4.76 In formulas (4.154) and (4.156) it is possible to choose  b1 (λ) = b2 (λ) = det b2 (λ) Iq . det b1 (λ) Ip and 

◦ 4.13 Formulas for resolvent matrices A ∈ E ∩ UrsR (Jp )

In view of Theorem 4.25, there exists exactly one resolvent matrix A ∈ ◦ (Jp ) for each strictly completely indeterminate GCIP(b3 , b4 ; c◦ ) with E ∩ UrsR b3 ∈ E ∩ Sinp×p , b4 ∈ E ∩ Sinp×p and {b3 , b4 } ∈ apII (A). In this section we shall present a formula for this resolvent matrix in terms of the data of the interpolation problem.

152

Interpolation problems and de Branges spaces

p×p , Let A ∈ UrsR (Jp ), {b3 , b4 } ∈ apII (A), c◦ ∈ C(A) ∩ H∞

11 = H(b3 ) Mc◦ |H2p ,

22 = − Mc◦ |H∗ (b4 )

(4.157)

12 = H(b3 ) Mc◦ |H∗ (b4 ) .

and

p×p The operators i j do not change if c◦ is replaced by any mvf c ∈ C(A) ∩ H∞ . In fact, p×p p×p = {c ∈ C p×p ∩ H∞ : (4.157) is in force C(b3 , b4 ; c◦ ) ∩ H∞

when c◦ is replaced by c}.

(4.158)

Next, formulas for the resolvent matrix A will be given in terms of the RK KωA (λ) of the RKHS H(A), which will be expressed in terms of the operators

 = 2R

 11 |H(b3 )



12

0

H∗ (b4 ) 22

−∗11

22

I

I

H(b3 )

H(b3 )



−→

H∗ (b4 )

H∗ (b4 )

  g

H(b3 )

:



(4.159)

and

L

  g h

 =

   g h

for

h



.



(4.160)

H∗ (b4 )

The orthogonal sum considered in (4.159) and (4.160) will be denoted H(b3 , b4 ). The rest of this section follows from the results that are presented with proofs on pp. 523–531 in [ArD08b]. We first present a description of the space H(A). Theorem 4.77 Let A ∈ E ∩ UrsR (Jp ), B(λ) = A(λ)V and let the operators 11 , p×p and {b3 , b4 } ∈ 22 and 12 be defined by formula (4.157), where c ∈ C(A) ∩ H∞ apII (A). Furthermore, let L and  be defined by formulas (4.160) and (4.159), respectively. Then  H(A) =

L

  g h

 : g ∈ H(b3 ) and h ∈ H∗ (b4 ) .

(4.161)

Moreover, L is a bounded linear operator from H(b3 , b4 ) onto H(A) with bounded inverse, ∗ L  = L

◦ 4.13 Formulas for resolvent matrices A ∈ E ∩ UrsR (Jp )

153

is a bounded linear positive operator from H(b3 , b4 ) onto itself with bounded inverse and, if   g f = L (4.162) for some g ∈ H(b3 ) and h ∈ H∗ (b4 ), h then  f 2H(A) =  f , f st = (c + c∗ )(g + h), (g + h)st p×p . for every c ∈ C(A) ∩ H∞

Theorem 4.78 Let A ∈ E ∩ UrsR (Jp ), let E(λ) = apII (A). Then

(4.163)

√ ∗ 2N2 A(λ)V and {b3 , b4 } ∈

B(E) = H∗ (b4 ) ⊕ H(b3 )

(4.164)

as linear spaces of vvf’s, but not as Hilbert spaces (unless E+ E+# = Ip ) and there exist a pair of positive constants γ1 and γ2 such that γ1  f st ≤  f B(E) ≤ γ2  f st

(4.165)

for every f ∈ B(E). ˚ = Jp A(λ)Jp and let {b3 , b4 } ∈ apII (A) Lemma 4.79 Let A ∈ E ∩ UrsR (Jp ), A(λ) ˚ ˚ ˚ and {b3 , b4 } ∈ apII (A). Then ∗11 H(b3 ) = H(b˚ 3 )

and

22 H∗ (b4 ) = H∗ (b˚ 4 ).

(4.166)

Remark 4.80 If A ∈ E ∩ U (Jp ), B = AV and A˚ = Jp AJp , then b˚ 3 and b˚ 4 are p × p inner mvf’s in the factorizations (b#11 )−1 = b˚ 3 ϕ˚3

and

˚ b−1 12 = ϕ˚4 b4

p×p with ϕ˚ j ∈ Nout for j = 1, 2.

Moreover, if A ∈ E ∩ U (Jp ) and B = AV, then   1 −E˚− E˚+ B= √ 2 E− E+

(4.167)

(4.168)

where E = [E−

E+ ] = [a22 − a21

a22 + a21 ]

E˚ = [E˚−

E˚+ ] = [a11 − a12

a11 + a12 ]

and

are entire de Branges matrices and b˚ 3 and b˚ 4 are entire inner p × p mvf’s. The formula √ def √ U1 f = 2N1∗ f = 2[Ip 0] for f ∈ H(A) (4.169)

154

Interpolation problems and de Branges spaces

˚ which is unitary if and only defines a coisometric operator from H(A) onto B(E), if A˚ is a perfect matrix. Theorem 4.81 If the GCIP(b3 , b4 ; c◦ ) with entire inner mvf’s b3 and b4 is strictly completely indeterminate and the mvf A is the unique resolvent matrix of this ◦ (Jp ) with {b3 , b4 } ∈ apII (A), then the RK of the problem in the class E ∩ UrsR RKHS H(A) is given by the formula  KωA (λ) = (L −1  Fω )(λ),

(4.170)

where the operators L : Hm (b3 , b4 ) → Hm (A)

and

 : Hm (b3 , b4 ) → Hm (b3 , b4 )

act on the columns of m × m mvf’s and    −(11 kωb3 )(λ) kωb3 (λ)  ∈ Hm (b3 , b4 ). Fω (λ) =  (∗22 ωb4 )(λ) bω4 )(λ) p×p p×p   b3 ∈ E ∩ Sinp×p and  b4 ∈ E ∩ Sinp×p are such that b˚ −1 and  b4 b˚ −1 3 b3 ∈ Sin 4 ∈ Sin and b˚ 3 and b˚ 4 are the entire inner mvf’s defined in (4.166). Thus,

A(λ) = Im + 2πiλK0A (λ)Jp ,

(4.171)

where K0A (λ) is obtained from formula (4.170) by setting ω = 0. In these formulas b j (λ) may be normalized by setting b j (0) =  b j (0) = Ip . the entire mvf’s b j (λ) and  √ Theorem 4.82 Let A ∈ E ∩ UrsR (Jp ), {b3 , b4 } ∈ apII (A), E(λ) = 2N2∗ A(λ)V and let the operator  be defined by formula (4.159). Then .    b / kωb3 η1 kλ3 η2 ∗ E (4.172) η2 Kω (λ)η1 = 2  b , ω4 η1 bλ4 η2 st for every pair of points λ, ω ∈ C and every pair of vectors η1 , η2 ∈ C p . Moreover, if c ∈ C(A) ∩ C˚ p×p , then there exist numbers γ1 > 0, γ2 > γ1 such that γ1 Ip ≤ (Rc)(μ) ≤ γ2 Ip

(4.173)

for almost all points μ ∈ R, and for every such choice of γ1 and γ2 , and every point ω ∈ C,     γ2−1 kωb3 (ω) + bω4 (ω) ≤ KωE (ω) ≤ γ1−1 kωb3 (ω) + bω4 (ω) .

(4.174)

With a slight abuse of notation, formula (4.170) can be reexpressed in the following more convenient form:

4.14 Supplementary notes Theorem 4.83 In the setting of Theorem 4.81,    u  u 11 12 K0A = L ,  u21  u22

155

(4.175)

ui j (λ) are p × p mvf’s that are obtained as the solutions of the where the  ui j =  system of equations       u11  u12 −11 k0b3 k0b3 = (4.176)    u21  u22 ∗22 0b4 b04 and the operators in formulas (4.175) and (4.176) act on the indicated matrix arrays column by column. In particular, the columns of  u11 (λ) and  u12 (λ) belong u21 (λ) and  u22 (λ) belong to H∗ (b4 ). to H(b3 ) and the columns of 

4.14 Supplementary notes Nehari [Ne57] formulated a scalar version of the NP() on the unit circle T and obtained a criterion for N () = ∅. Versions of this problem for mvf’s on T were studied in [AAK71a] and [AAK71b], wherein criteria for the problem to be determinate and completely indeterminate were formulated and parametrization formulas for the set of solutions analogous to (4.16) were obtained in the completely indeterminate case. Operator-valued versions of NP() on T were studied by L.A. Page [Pa70]; see V.V. Peller [Pe03] for a readable comprehensive treatise on Nehari problems and related issues. Continuous analogs on NP() for mvf’s in the Wiener class were studied in [AAK71b], [KrMA86] and [Dy89b]. The parametrization in Theorem 4.6 was obtained in [AAK68] and [Ad73]. The proof was based on the generalization of the Lax–Phillips scheme in [AdAr66] and [Ad73] and on parametrization formulas for unitary extensions of isometric operators. Generalized Schur interpolation problems and generalized Carath´eodory interpolation problems were studied in [Ar84], [Ar88], [Ar89] and [Ar95a]. In these papers a parametrization of the set of solutions to each of these problems in terms of linear fractional transformations of S p×q were given in the completely indeterminate case and the classes UrR ( j pq ) and UrR (Jp ) were identified with classes of resolvent matrices for these problems. Later, in [ArD02a] and [ArD05a], the formulas that are presented for these resolvent matrices in Sections 4.12 and 4.13 in the strictly completely indeterminate case were obtained by methods based on the theory of RKHS’s of L. de Branges that were used earlier to parametrize the solutions of a number of bitangential interpolation problems for mvf’s in [Dy89a], [Dy89b], [Dy89c], [Dy90], [Dy98], [Dy03a], [Dy03b], [BoD98] and [BoD06]. Theorem 4.16 is taken from [ArD97]; the argument used to justify Step 2 in the proof of the theorem is adapted from an argument used on pp. 137–138

156

Interpolation problems and de Branges spaces

of the book by Hoffman [Ho62] to characterize extreme points of the unit ball in H∞ . The theory of the RKHS’s H(U ) and B(E) originates in the work of L. de Branges. In particular, the characterization of RKHS’s H(U ) in terms of Rα invariance and the identity (4.74) is presented in [Br63] and [Br65] for vectorvalued functions that are holomorphic in an open connected set  that contains the origin; an important technical improvement by J. Rovnyak extended the result to open connected sets , even if  ∩ R = ∅ [Rov68]; some extensions to Krein spaces and general classes of domains are presented in [AlD93]. A more general version of Theorem 4.38 for RKHS’s H(U ) with U ∈ P ◦ (J) is presented in theorem 5.31 of [ArD08b]. A readable introduction to the spaces H(U ) and B(E) for m = 2 is [GoM97]. Additional insight into the role of Rα invariance and the de Branges identity (4.74) is obtained by considering finite dimensional RKHS’s: A finite dimensional RKHS of vvf’s that is Rα invariant is automatically a space of rational vvf’s of the form {F (λ)v : v ∈ Cn } with F (λ) = C(A1 − λA2 )−1 and an inner product that is defined in terms of a positive semidefinite matrix P. The de Branges identity is then equivalent to a Lyapunov–Stein equation for P if it is invertible and to a Riccati equation if not; see, e.g., [Dy01a], [Dy01b] and [Dy03a]. Sections 4.8 and 4.13 extend the descriptions of the spaces H(W ) and H(A) in Chapter 2 of [Dy89b] and [Dy90]. The class UBR (J) was introduced and studied recently in [ArD12a]; the class UAR (J), which plays a significant role in the study of the bitangential inverse monodromy problem, was introduced even more recently, in the final stages of preparation of this book for publication. 
As of today we do not have alternate descriptions of either of these classes that are analogous to the Treil–Volberg matrix version of the Muckenhoupt (A2 ) condition in [TrV97] that is used to characterize the class UrsR (J) in theorem 10.12 of [ArD08b], (see also [ArD01b], [ArD03b]), or even to the characterizations of UrsR (J) in (1.26) and (1.36). The inclusions in (4.112) imply that in order for U to be in UAR (J) it is necessary that U ∈ UBR (J) and sufficient that U ∈ UrsR (J) ∪ UsR (J). However, as UrsR (J) ∪ UsR (J) is a proper subclass of UBR (J), these two conditions are not the same. The GSIP(b1 , b2 ; s◦ ) and the GCIP(b3 , b4 ; c◦ ) with entire inner mvf’s b1 , . . . , b4 were studied in [ArD98] as bitangential generalizations of a pair of extension problems that were studied by M.G. Krein. They will be used extensively in the formulation and analysis of bitangential direct and inverse problems in the next several chapters. The continuous analogs of the classical Schur and Carath´eodory extension problems with solutions in a Wiener class of mvf’s that were studied by M.G. Krein and F.E. Melik-Adamyan in [KrMA86] can be viewed as special cases of the GSIP(ea Ip , Iq ; s◦ ) and the GCIP(ea Ip , Ip ; c◦ ), respectively. This will be discussed in more detail in Chapter 9.

4.14 Supplementary notes

157

Time domain versions of the GCIP(b3 , b4 ; c◦ ) with entire inner mvf’s b3 , b4 were considered in section 8.5 of [ArD08b] and will be reviewed briefly in Chapter 9 below. Time domain versions of the GSIP(b1 , b2 ; s◦ ) with entire inner mvf’s b1 , b2 will also be considered in this chapter. The problem of describing the set S(b1 , b2 ; X ) that is considered in Section 4.12 is a bitangential generalization of the scalar Sarason problem, which is to describe the set S(b; X ) = {s ∈ S : H(b) Ms |H2 = X}, given a scalar inner function b and a bounded linear operator X from H2 into H(b); see [Sar67] and, for further generalizations with nonsquare inner mvf’s, [Kh90] and [Kh95].

5 Chains that are matrizants and chains of associated pairs

Recall that a family of mvf's U_t ∈ E ∩ U°(J), 0 ≤ t < d, is said to be a normalized nondecreasing chain of entire J-inner mvf's if U_0(λ) ≡ I_m and

U_{t_1}^{-1} U_{t_2} ∈ E ∩ U°(J)   (resp., U_{t_2} U_{t_1}^{-1} ∈ E ∩ U°(J))   for 0 ≤ t_1 ≤ t_2 < d.

The chain is said to be strictly increasing if U_{t_1}^{-1} U_{t_2} ≢ I_m (resp., U_{t_2} U_{t_1}^{-1} ≢ I_m) when t_1 < t_2. In view of Theorem 2.5, the matrizant U_t(λ), 0 ≤ t < d, of a canonical integral system (2.1) with a continuous nondecreasing mass function M(t) on the interval [0, d) is a normalized nondecreasing continuous chain of entire J-inner mvf's. However, the converse is not true: some sufficient conditions for the converse to hold are presented in Theorem 5.6.

A chain of pairs {b_1^t, b_2^t}, 0 ≤ t < d, of entire inner mvf's is said to be a normalized nondecreasing continuous chain of pairs if {b_1^t}, 0 ≤ t < d, and {b_2^t}, 0 ≤ t < d, are normalized nondecreasing continuous chains. It is said to be a strictly increasing chain of pairs if at least one of the ratios (b_1^s)^{-1} b_1^t, b_2^t (b_2^s)^{-1} is a nonconstant mvf for each choice of 0 ≤ s < t < d. In Section 5.3, it will be shown that normalized nondecreasing chains of associated pairs of the first and second kind of a normalized monotonic continuous chain of entire A-regular J-inner mvf's (with J ≠ ±I_m) are continuous.
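To make the chain/matrizant connection concrete, consider the simplest case (an assumed illustration, not from the text): a constant mass density, M(t) = tH with H ⪰ 0, for which the matrizant of (2.1) is U_t(λ) = e^{iλtHJ}. The sketch below, assuming NumPy and SciPy, checks the normalized nondecreasing chain property and the J-contractivity of the matrizant at a point of C_+ numerically.

```python
import numpy as np
from scipy.linalg import expm

# Assumed example: constant mass density M(t) = t*H with H >= 0.  The matrizant
# of the canonical system U_t(lam) = I + i*lam * int_0^t U_s(lam) dM(s) J
# is then U_t(lam) = expm(i*lam*t*H@J).
J = np.diag([1.0, -1.0])          # a 2x2 signature matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
H = A @ A.T + np.eye(2)           # positive definite density

def matrizant(t, lam):
    return expm(1j * lam * t * H @ J)

# normalized nondecreasing chain: U_{t1}^{-1} U_{t2} = U_{t2-t1} for t1 <= t2
lam = 0.7
U1, U2 = matrizant(0.5, lam), matrizant(1.2, lam)
assert np.allclose(np.linalg.solve(U1, U2), matrizant(0.7, lam))

# J-inner: J - U J U* is positive semidefinite at points of C_+
U = matrizant(1.0, 1j)
D = J - U @ J @ U.conj().T
assert np.linalg.eigvalsh(D).min() > -1e-10
```

Here the chain is parametrized by t itself; for a general nondecreasing M(t) the exponential formula no longer applies, but the divisibility structure checked above persists.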

5.1 Continuous chains of entire J-inner mvf's

This section reviews some properties of normalized nondecreasing continuous chains of entire J-inner mvf's that will be needed in the study of bitangential direct and inverse problems.


Theorem 5.1 Let {U_t(λ) : 0 ≤ t ≤ d} be a normalized nondecreasing chain of entire J-inner m × m mvf's such that the Hilbert spaces H(U_t) are included isometrically in H(U_d) and let K_ω^t(λ) = K_ω^{U_t}(λ) for 0 ≤ t ≤ d. Let Π_t denote the orthogonal projection from H(U_d) onto H(U_t). Then

Π_t K_ω^d ξ = K_ω^t ξ   for every ξ ∈ C^m, ω ∈ C and t ∈ [0, d]   (5.1)

and the following statements are equivalent:

(1) K_ω^t(ω) is a continuous mvf of t on the interval [0, d] for some point ω ∈ C.
(2) U_t(λ) is a continuous mvf of t on the interval [0, d] for every point λ ∈ C.
(3) K_ω^t(λ) is a continuous mvf of t on the interval [0, d] for every pair of points λ, ω in C.
(4) K_ω^t ξ is a continuous vvf of t in the Hilbert space H(U_d) on the interval [0, d] for every ξ ∈ C^m and ω ∈ C.
(5) The orthogonal projections Π_t are strongly continuous in the Hilbert space H(U_d).
(6) The two equalities

⋂_{0<ε<d−t} H(U_{t+ε}) = H(U_t) for every t ∈ [0, d)   (5.2)

and

⋁_{0<ε≤t} H(U_{t−ε}) = H(U_t) for every t ∈ (0, d]   (5.3)

hold.

Then for each t ∈ [0, d] there exists a unique pair of mvf's b_1^t ∈ E ∩ S_in^{p×p} and b_2^t ∈ E ∩ S_in^{q×q} such that

H(b_1^t) = H(e_{ϕ_1(t)} I_p) ∩ H(b_1),   H_*(b_2^t) = H_*(e_{ϕ_2(t)} I_q) ∩ H_*(b_2),

b_1^t(0) = I_p,  b_2^t(0) = I_q,  b_1^0(λ) = I_p,  and  b_2^0(λ) = I_q,

by Theorem 8.9 applied to b_1 and its dual version applied to b_2. This family of pairs meets the stated conditions. □

Lemma 8.15 If U_t, 0 ≤ t ≤ d, is a normalized nondecreasing chain of entire J-inner mvf's, then the following statements are equivalent:

(1) U_t(λ) is left (resp., right) continuous on [0, d] for each point λ ∈ C.
(2) 2π K_0^{U_t}(0) is left (resp., right) continuous on [0, d].
(3) trace 2π K_0^{U_t}(0) is left (resp., right) continuous on [0, d].

Proof The first two equivalences follow readily from Theorem 5.1 and its proof; the third rests in part on the fact that if 0 ≤ t_1 ≤ t_2 ≤ d, then the matrix

Q = 2π K_0^{U_{t_2}}(0) − 2π K_0^{U_{t_1}}(0)

is positive semidefinite and hence the ij entry q_ij of Q is subject to the bounds |q_ij|^2 ≤ q_ii q_jj ≤ {trace Q}^2. The rest is easy and is left to the reader. □
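The bound |q_ij|^2 ≤ q_ii q_jj used here (and again in the proof of Theorem 8.17, where a positive semidefinite matrix with zero trace is forced to vanish) is elementary Cauchy–Schwarz; a quick numerical spot-check, assuming NumPy:

```python
import numpy as np

# For a positive semidefinite Q, Cauchy-Schwarz applied to the form x* Q y
# gives |q_ij|^2 <= q_ii * q_jj; hence trace(Q) = 0 forces Q = 0 entrywise.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q = B @ B.conj().T                # a random PSD matrix

for i in range(4):
    for j in range(4):
        assert abs(Q[i, j]) ** 2 <= Q[i, i].real * Q[j, j].real + 1e-12
```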


Inverse monodromy problems

Theorem 8.16 If W ∈ E ∩ U°_{rR}(j_pq), W ≢ I_m, {b_1^t, b_2^t}, 0 ≤ t ≤ d, is a normalized nondecreasing continuous chain of pairs of entire inner mvf's of sizes p × p and q × q, respectively, such that {b_1^d, b_2^d} ∈ ap(W) and if s° ∈ T_W[S^{p×q}], then:

(1) For each t ∈ [0, d] there exists exactly one mvf W_t ∈ U°(j_pq) such that

S(b_1^t, b_2^t; s°) = T_{W_t}[S^{p×q}]  and  {b_1^t, b_2^t} ∈ ap(W_t).   (8.31)

The family of resolvent matrices W_t, 0 ≤ t ≤ d, that is obtained this way is independent of the choice of s° ∈ T_W[S^{p×q}] and enjoys the following properties:

(2a) W_t, 0 ≤ t ≤ d, is a normalized nondecreasing chain of mvf's in the class E ∩ U°_{rR}(j_pq) with W_0 = I_m and W_d = W.
(2b) W_t(λ) is left continuous on [0, d] for each λ ∈ C.
(2c) If W_t(λ) is continuous on [0, d] for each λ ∈ C, then it is the matrizant of the canonical integral system (6.1) with mass function that meets the constraints (1.22) and is given by M(t) = 2π K_0^{W_t}(0).
(2d) If W ∈ E ∩ U_{AR}(j_pq), then W_t(λ) is continuous on [0, d] for each λ ∈ C.

If M(t) is any continuous nondecreasing m × m mvf on [0, d] with M(0) = 0 and if the matrizant W_t(λ), 0 ≤ t ≤ d, of the canonical integral system (6.1) with this mass function belongs to the class U°_{rR}(j_pq) for every t ∈ [0, d], then M(t) is the solution of the inverse monodromy problem with data W(λ) = W_d(λ) and

{b_1^t, b_2^t} ∈ ap(W_t)   for 0 ≤ t ≤ d,

where b_1^t(0) = I_p and b_2^t(0) = I_q for every t ∈ [0, d) and M(t) may be obtained from this data via (1) and (2c).

Proof

The proof is broken into steps:

1. Verification of (1) and (2a): The GSIP(b_1^t, b_2^t; s°) is completely indeterminate for every t ∈ [0, d] by Theorem 4.13. Therefore, Theorem 4.12 guarantees that for each t ∈ [0, d] there exists exactly one mvf W_t ∈ E ∩ U°_{rR}(j_pq) such that (8.31) holds. Moreover,

W_0(λ) = I_m  and  W_d(λ) = W(λ),

since

T_{I_m}[S^{p×q}] = S(I_p, I_q; s°)  and  T_W[S^{p×q}] = S(b_1^d, b_2^d; s°) = T_{W_d}[S^{p×q}],

by Theorem 4.13. The asserted monotonicity of the chain W_t, 0 ≤ t ≤ d, follows from the monotonicity of the chain of pairs {b_1^t, b_2^t} and Theorem 3.64. Theorem 4.13 insures that this chain of resolvent matrices is independent of the choice of s° ∈ T_W[S^{p×q}].

8.3 Solutions for U ∈ U_{AR}(J) when J ≠ ±I_m

2. Verification of (2b): In view of Theorem 5.1, it suffices to show that

L_t^− := ⋁_{0<ε<t} H(W_{t−ε}) = H(W_t)  for every t ∈ (0, d].   (8.32)

These considerations imply that W̃_t is entire and that

H(b_1^{t−ε}) ⊆ H(b̃_1^t) ⊆ H(b_1^t)  and  H_*(b_2^{t−ε}) ⊆ H_*(b̃_2^t) ⊆ H_*(b_2^t)

for t ∈ (0, d) and 0 < ε < t. Therefore, since the chain of pairs {b_1^t, b_2^t} is continuous by assumption, Theorem 5.1 (with J replaced by I_p) implies that

H(b_1^t) = H(b̃_1^t)  and  H_*(b_2^t) = H_*(b̃_2^t)  for t ∈ (0, d)

and hence, if the entire inner mvf's b̃_1^t and b̃_2^t are normalized,

b_1^t = b̃_1^t  and  b_2^t = b̃_2^t  for t ∈ (0, d],

i.e.,

{b_1^t, b_2^t} ∈ ap(W̃_t)  for t ∈ (0, d].   (8.33)

Moreover, since W̃_t^{-1}W_t ∈ U(j_pq), another application of Theorem 4.13 supplies the inclusions

S(b_1^t, b_2^t; s°) ⊇ T_{W̃_t}[S^{p×q}] ⊇ T_{W_t}[S^{p×q}] = S(b_1^t, b_2^t; s°),

which yields the identity

S(b_1^t, b_2^t; s°) = T_{W̃_t}[S^{p×q}]  for t ∈ [0, d).   (8.34)

But since there is only one mvf in the class U°_{rR}(j_pq) that meets the constraints (8.33) and (8.34), this implies that W̃_t = W_t W̃_t(0) for 0 ≤ t < d and hence justifies (8.32).

3. Verification of (2c): This is immediate from Theorems 4.56 and 5.6.

4. Verification of (2d): In view of (2b), it remains only to prove that W_t(λ) is right continuous on [0, d] for each λ ∈ C. In view of Theorem 5.1, it suffices to show that

L_t^+ := ⋂_{0<ε≤d−t} H(W_{t+ε}) = H(W_t)  for every t ∈ [0, d).   (8.35)

These considerations imply that

H(b_1^t) ⊆ H(b̃_1^t) ⊆ H(b_1^{t+ε})  and  H_*(b_2^t) ⊆ H_*(b̃_2^t) ⊆ H_*(b_2^{t+ε})

for t ∈ [0, d) and 0 < ε ≤ d − t. Therefore, since the chain of pairs {b_1^t, b_2^t} is continuous by assumption, Theorem 5.1 (with J replaced by I_p) implies that

H(b_1^t) = H(b̃_1^t)  and  H_*(b_2^t) = H_*(b̃_2^t)  for t ∈ [0, d)

and hence, as all the indicated inner mvf's are entire and may be normalized,

b_1^t = b̃_1^t  and  b_2^t = b̃_2^t  for t ∈ [0, d),

i.e., (8.33) holds. Moreover, since W ∈ U_{AR}(j_pq), W̃_t ∈ U_{rR}(j_pq) and W̃_t^{-1}W ∈ U(j_pq) and hence, by Theorem 4.13, (8.34) is in force. But since there is only one mvf in the class U°_{rR}(j_pq) that meets the constraints (8.33) and (8.34), this implies that W̃_t = W_t for 0 ≤ t < d and hence justifies (8.35).

5. Verification of the rest: By Theorem 2.5, the matrizant W_t, 0 ≤ t ≤ d, is a normalized nondecreasing continuous chain of entire J-inner mvf's. Moreover, since W_t ∈ U_{rR}(j_pq) for every t ∈ [0, d] by assumption, Theorem 5.13 insures that the normalized chain of associated pairs {b_1^t, b_2^t} ∈ ap(W_t), 0 ≤ t ≤ d, is a continuous nondecreasing chain of pairs on [0, d]. Furthermore,

S(b_1^t, b_2^t; s°) = T_{W_t}[S^{p×q}]  and  {b_1^t, b_2^t} ∈ ap(W_t)  for every t ∈ [0, d].

Therefore, the matrizant W_t, 0 ≤ t ≤ d, coincides with the family of resolvent matrices that is obtained in Step 1. □

Theorem 8.17 Let U ∈ E ∩ U°_{AR}(J), where J = V*j_pq V for some unitary matrix V, U(λ) ≢ I_m, and let {b_1^t, b_2^t}, 0 ≤ t ≤ d, be a normalized strictly increasing continuous chain of pairs of entire inner mvf's such that {b_1^d, b_2^d} ∈ ap(VUV*). Then there exists exactly one normalized (as in (1.19)) solution H(x), 0 ≤ x ≤ ℓ, of the bitangential inverse monodromy problem for the canonical differential system (1.1) with monodromy matrix U for which the matrizant Φ_x satisfies the condition

{b_1^{t(x)}, b_2^{t(x)}} ∈ ap(Φ_x)  for every x ∈ [0, ℓ],

where ℓ = −i trace{U′(0)J} and t(x) is a function from [0, ℓ] into [0, d] such that t(0) = 0 and t(ℓ) = d. The function t(x) is uniquely defined by the given data of the bitangential inverse monodromy problem and it is a strictly increasing continuous function on [0, ℓ].


Proof Let W(λ) = VU(λ)V* and fix s° ∈ T_W[S^{p×q}]. Then, since W ∈ U_{AR}(j_pq), Theorem 8.16 guarantees that the family of resolvent matrices W_t, 0 ≤ t ≤ d, that is obtained in (1) of that theorem is the matrizant of the canonical integral system (6.1) with mass function M(t) = 2π K_0^{W_t}(0) for 0 ≤ t ≤ d. Moreover, by Lemmas 5.14 and 8.15, x(t) = trace M(t) is a strictly increasing continuous function on [0, d] with x(0) = 0 and x(d) = ℓ. Therefore, there exists exactly one strictly increasing continuous function t(x) on [0, ℓ] such that t(x(t)) = t for every t ∈ [0, d]. Moreover, M̃(x) = M(t(x)) is absolutely continuous with respect to x on [0, ℓ]. Let

H(x) = M̃′(x)  and  Φ_x(λ) = W_{t(x)}(λ).

Then H(x) is a normalized solution of the stated inverse problem with matrizant Φ_x(λ), i.e.,

Φ_x(λ) = I_m + iλ ∫_0^x Φ_u(λ) H(u) du · j_pq  for 0 ≤ x ≤ ℓ.

This establishes the existence of at least one solution. Suppose next that H^{(1)}(x) is a second normalized solution of the stated inverse problem with matrizant Φ_x^{(1)}(λ) = W_{ϕ(x)}(λ) for some function ϕ(x) from [0, ℓ] into [0, d] such that ϕ(0) = 0 and ϕ(ℓ) = d. Then

Φ_x^{(1)}(λ) = I_m + iλ ∫_0^x Φ_u^{(1)}(λ) H^{(1)}(u) du · j_pq  for 0 ≤ x ≤ ℓ.

Suppose further that ϕ(x_0) ≤ t(x_0) at some point x_0 ∈ [0, ℓ]. Then Φ_{x_0}^{(1)} = W_{ϕ(x_0)} is a left divisor of Φ_{x_0} = W_{t(x_0)} and hence

∫_0^{x_0} H^{(1)}(u) du = M^{(1)}(x_0) = −i (∂Φ_{x_0}^{(1)}/∂λ)(0) j_pq ≤ −i (∂Φ_{x_0}/∂λ)(0) j_pq = M(x_0) = ∫_0^{x_0} H(u) du.

But this implies that

M(x_0) − M^{(1)}(x_0) ≥ 0  and  trace{M(x_0) − M^{(1)}(x_0)} = 0.

Therefore, the two mass functions must be equal, i.e.,

ϕ(x_0) ≤ t(x_0) ⟹ ∫_0^{x_0} H^{(1)}(u) du = ∫_0^{x_0} H(u) du.

But this serves to complete the proof, since the same conclusion holds if ϕ(x_0) ≥ t(x_0) and x_0 is any point in [0, ℓ]. □


8.4 Connections with the Livsic model of a Volterra node

The representation of a mvf U ∈ E ∩ U°(J) as the monodromy matrix of a canonical integral system (2.1) is intimately connected with the Livsic triangular functional model of a Volterra operator node with characteristic function U(λ). Let L(X, Y) denote the space of bounded linear operators acting from a Hilbert space X into a Hilbert space Y, and let L(X) = L(X, X). The ordered set N = (K, F; X, Y; J) of two separable Hilbert spaces X and Y, two bounded linear operators K ∈ L(X), F ∈ L(X, Y) and a signature operator J = J* = J^{-1} ∈ L(Y), is called an LB (acronym for Livsic–Brodskii) J-node if

K − K* = iF*JF.   (8.36)

The function

U_N(λ) = I + iλF(I − λK)^{-1}F*J   (8.37)

is called the characteristic function of the node N. It is defined and holomorphic on the set Ω_K = {λ ∈ C : (I − λK)^{-1} ∈ L(X)}. However, U_N(λ) may be extended to a holomorphic mvf on a domain h_{U_N} ⊇ Ω_K. Clearly, 0 ∈ Ω_K and U_N(0) = I. Moreover,

J − U_N(λ)JU_N(ω)* = −i(λ − ω̄)F(I − λK)^{-1}(I − ω̄K*)^{-1}F*   (8.38)

and

J − U_N(ω)*JU_N(λ) = −i(λ − ω̄)JF(I − ω̄K*)^{-1}(I − λK)^{-1}F*J   (8.39)

for points λ, ω ∈ Ω_K and consequently U_N(λ) and U_N(λ)* are J-contractive in Ω_K ∩ C_+ and J-unitary on Ω_K ∩ R. The node N is said to be a simple node if

⋂_{n=0}^{∞} ker{FK^n} = {0}.   (8.40)

A simple LB J-node is uniquely defined up to unitary equivalence by its characteristic function; see e.g., [Liv73] and [Bro72]. Throughout the rest of this section, we shall fix Y = C^m, endowed with the usual inner product ⟨u, v⟩ = v*u for u, v ∈ C^m. Then the operators in L(Y) will be defined by m × m matrices, J will be an m × m signature matrix and U_N(λ) will be an m × m mvf, which is called the characteristic mvf of the LB J-node N = (K, F; X, C^m; J).
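A finite-dimensional LB J-node is easy to generate: pick any Hermitian matrix H and set K = H + (i/2)F*JF, so that (8.36) holds by construction. The sketch below (an assumed illustration, using NumPy) then checks that the characteristic function (8.37) is J-unitary at a real point, in line with (8.38):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 4
J = np.diag([1.0, -1.0])                               # 2x2 signature matrix
F = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
H = rng.standard_normal((n, n)); H = (H + H.T) / 2     # Hermitian part of K
K = H + 0.5j * F.conj().T @ J @ F                      # forces K - K* = i F* J F, i.e. (8.36)
assert np.allclose(K - K.conj().T, 1j * F.conj().T @ J @ F)

def U_N(lam):
    # characteristic function (8.37): I + i*lam * F (I - lam*K)^{-1} F* J
    return np.eye(m) + 1j * lam * F @ np.linalg.solve(np.eye(n) - lam * K, F.conj().T) @ J

U = U_N(0.3)                                           # a real point of Omega_K
assert np.allclose(U @ J @ U.conj().T, J)              # J-unitary on the real axis
```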


Every mvf U ∈ U°(J) may be identified as the characteristic mvf of the simple LB J-node N_0 = (R_0, F_0; H(U), C^m; J), where H(U) is the RKHS with RK given by (4.73),

R_0 : f ∈ H(U) → (f(λ) − f(0))/λ  and  F_0 : f ∈ H(U) → √(2π) f(0) ∈ C^m;   (8.41)

see, e.g., theorem 6.2 in [ArD08b]. Moreover, if U ∈ E ∩ U°(J), then as noted in Section 4.6, R_0 is a Volterra operator and its real part is a compact operator that belongs to the von Neumann–Schatten class S_p for every p > 1. To put this conclusion in perspective, recall that if {E_μ}, −∞ < μ < ∞, is a family of spectral projectors associated with a self-adjoint operator

A = ∫_{−∞}^{∞} μ dE_μ

in a Hilbert space X, then A is said to have purely singular spectrum if σ_x(μ) = ⟨E_μ x, x⟩_X is singular with respect to Lebesgue measure (i.e., σ_x′(μ) = 0 a.e. on R) for every vector x ∈ X.

Theorem 8.18 Every mvf U ∈ U°(J) is the characteristic mvf of a simple LB J-node N = (K, F; X, C^m; J) in which K_R = (K + K*)/2 has purely singular spectrum. Conversely, if N = (K, F; X, C^m; J) is a simple LB J-node and K_R has purely singular spectrum, then the characteristic mvf U of N belongs to the class U°(J). Moreover, K is a Volterra operator ⟺ U ∈ E ∩ U°(J).

Proof The proof of the first half of the theorem rests on the observation that if U = U_N is the characteristic mvf of the LB J-node N = (K, F; X, C^m; J), then the mvf C = J(I_m − U)(I_m + U)^{-1} that is introduced in Lemma 3.48 admits the representation

C(λ) = (−iλ/2) F(I − λK_R)^{-1}F*  for λ ∈ C_+

and, if N is a simple node, then

⋂_{n≥0} ker(FK_R^n) = {0}.

For additional details see pp. 18–30 in [Bro72] and lemma 6.3 and theorem 6.4 in [ArD08b]. □


The generalized Fourier transform F_N for the node N = (K, F; X, C^m; J) is defined by the formula

(F_N x)(λ) = (1/√(2π)) F(I − λK)^{-1}x  for x ∈ X;   (8.42)

it maps X into H(U_N).

Theorem 8.19 Let N = (K, F; X, C^m; J) be an LB J-node with characteristic mvf U_N ∈ U°(J) and let N_0 = (R_0, F_0; H(U), C^m; J) be the simple LB node defined by formula (8.41) for U(λ) = U_N(λ). Then:

(1) Formula (8.42) defines a coisometry F_N acting from X onto H(U).
(2) The operator F_N intertwines K and R_0:

F_N K = R_0 F_N.   (8.43)

Moreover,

F = F_0 F_N  and  ker F_N = ⋂_{n≥0} ker(FK^n).   (8.44)

(3) The operator F_N is unitary if and only if the LB-node N is simple, i.e., if and only if condition (8.40) is met.
(4) If the LB J-node N is simple, then the operator F_N establishes a unitary equivalence between the simple LB J-nodes

N = (K, F; X, C^m; J)   (8.45)

and N_0.

Proof Let

x = (1/√(2π)) Σ_{j=1}^{n} (I − ω̄_j K*)^{-1} F* u_j,  with ω_j ∈ h_U, u_j ∈ C^m and n ≥ 1.

Then formulas (8.38), (4.73) and (8.42) yield the relations

f = Σ_{j=1}^{n} K_{ω_j}^U u_j = F_N x  and  ‖f‖²_{H(U)} = ‖x‖²_X.

Therefore, since the linear manifold of the considered vvf's f ∈ H(U) is dense in the space H(U), F_N is an isometry from the closure of the linear manifold of the corresponding vectors x in X onto H(U). The closure of the latter is the orthogonal complement of

⋂_{n≥0} ker(FK^n) = ker F_N.


Thus, F_N is a coisometry from X onto H(U) as claimed. This establishes (1) and (3). Statement (2) may be verified by direct calculation. Finally, (4) follows from (1)–(3). □

Theorem 8.20 Let N = (K, F; X, C^m; J) be an LB J-node with characteristic mvf U ∈ U°(J) and spectrum of K equal to {0}. Then N is a simple node if and only if ker K ∩ ker F = {0}.

Proof This follows from theorem 5.2 in [ArD04b]. □

Products and projections of LB J-nodes

The product N = N_1 × N_2 of two LB J-nodes N_j = (K_j, F_j; X_j, C^m; J), j = 1, 2, is defined as the LB J-node N = (K, F; X, C^m; J), where X = X_1 ⊕ X_2,

K = [[K_1, iF_1*JF_2], [0, K_2]]  and  F = [F_1  F_2].

If N = (K, F; X, C^m; J) is an LB J-node, L is a proper nonzero closed subspace of X and if

K_L = Π_L K|_L  and  F_L = F|_L,

then

(K_L)* = Π_L K*|_L,  (F_L)* = Π_L F*

and K_L − K_L* = iF_L*JF_L, i.e.,

N_L = (K_L, F_L; L, C^m; J)  is an LB J-node.

The node N_L is called the projection of the node N onto the subspace L. Moreover, if KL ⊆ L, then K*L^⊥ ⊆ L^⊥ and

K = Π_L KΠ_L + Π_{L^⊥} KΠ_L + Π_L KΠ_{L^⊥} + Π_{L^⊥} KΠ_{L^⊥} = K_LΠ_L + 0 + Π_L(K − K*)Π_{L^⊥} + K_{L^⊥}Π_{L^⊥}.

Therefore,

K = K_LΠ_L + K_{L^⊥}Π_{L^⊥} + iF_L*JF_{L^⊥}  and  F = F_L + F_{L^⊥},

and N = N_L × N_{L^⊥} is the product of the two LB J-nodes.

Theorem 8.21 If N = (K, F; X, C^m; J) is an LB J-node and L is a closed subspace of X that is invariant under K, then

N = N_L × N_{L^⊥}  and  U_N(λ) = U_{N_L}(λ) U_{N_{L^⊥}}(λ)  for λ ∈ Ω_K.

Moreover, if N is a simple node, then the nodes N_L and N_{L^⊥} are both simple.

Proof See, e.g., theorem 2.2 on p. 8 of [Bro72]. □



Theorem 8.22 Let N = N_1 × N_2 be the product of the LB J-nodes N_1 = (K_1, F_1; X_1, C^m; J) and N_2 = (K_2, F_2; X_2, C^m; J). Then:

(1) X_1 ⊕ {0} is a closed subspace of X that is invariant under K, and N_1 and N_2 are projections of N onto the subspaces X_1 ⊕ {0} and {0} ⊕ X_2, respectively.
(2) If N is a simple LB J-node, then the nodes N_1 and N_2 are simple too.
(3) U_N(λ) = U_{N_1}(λ)U_{N_2}(λ) for λ ∈ Ω_K.

Proof (1) follows from the definitions of products and projections of a node; (2) and (3) follow from Theorem 8.21. □

A left divisor U_1 of U ∈ U°(J) belongs to the Brodskii class B_U of left divisors of U if it is the characteristic mvf of the projection of a simple LB J-node N = (K, F; X, C^m; J) with characteristic mvf U ∈ U°(J) onto a closed K-invariant subspace of X.

Remark 8.23 The converse of assertion (2) in Theorem 8.22 is false: if U_N ∈ U°(J) and U_j = U_{N_j}, j = 1, 2, and N_j are simple LB J-nodes, then their product N is simple if and only if U_1 ∈ B_U; see, e.g., theorem 2.3 in [Bro72] and remark 6.9 in [ArD08b].

Theorem 8.24 If U ∈ U°(J), U_1 ∈ U°(J) and U_1^{-1}U ∈ U(J), then the inclusion H(U_1) ⊆ H(U) is isometric if and only if U_1 ∈ B_U.

Proof Let N_0 = (R_0, F_0; H(U), C^m; J) be the functional model of a simple LB J-node with characteristic mvf U(λ) that is considered in Theorem 8.19 and suppose that U_1 ∈ B_U, i.e., U_1 is the characteristic mvf of the projection N_L for some closed subspace L of X that is invariant under K. Then, by Theorem 8.22, N_L is also a simple node and hence the generalized Fourier transform F_{N_L} for N_L onto its functional model (N_L)_0 = (R_0, F_0; H(U_1), C^m; J) is unitary. Therefore, the inclusion H(U_1) ⊆ H(U) is isometric.

Conversely, if the inclusion H(U_1) ⊆ H(U) is isometric, then L = H(U_1) is a closed subspace of H(U) that is invariant under R_0 and U_1 = U_{N_L}. □

Theorem 8.25 If U ∈ U°(J) and U_1 ∈ U°_{rR}(J) is a left divisor of U, then U_1 ∈ B_U, i.e., U_1 = U_{N_L} for some R_0-invariant closed subspace L of H(U).

Proof This follows from Theorems 8.24 and 4.56. □
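The product construction above and the factorization U_N(λ) = U_{N_1}(λ)U_{N_2}(λ) of Theorem 8.22(3) can be checked on random finite-dimensional nodes; a sketch, assuming NumPy and the conventions of (8.36)–(8.37):

```python
import numpy as np

m, n = 2, 3
J = np.diag([1.0, -1.0])

def make_node(seed):
    # a random finite-dimensional LB J-node: K = H + (i/2) F* J F satisfies (8.36)
    r = np.random.default_rng(seed)
    F = r.standard_normal((m, n)) + 1j * r.standard_normal((m, n))
    H = r.standard_normal((n, n)); H = (H + H.T) / 2
    return H + 0.5j * F.conj().T @ J @ F, F

def char_mvf(K, F, lam):
    d = K.shape[0]
    return np.eye(m) + 1j * lam * F @ np.linalg.solve(np.eye(d) - lam * K, F.conj().T) @ J

K1, F1 = make_node(3)
K2, F2 = make_node(4)

# product node N = N1 x N2: block-triangular K with coupling entry i F1* J F2
K = np.block([[K1, 1j * F1.conj().T @ J @ F2],
              [np.zeros((n, n), dtype=complex), K2]])
F = np.hstack([F1, F2])

lam = 0.2 + 0.1j
assert np.allclose(char_mvf(K, F, lam),
                   char_mvf(K1, F1, lam) @ char_mvf(K2, F2, lam))   # Theorem 8.22(3)
```

The block-triangular form of K is exactly the definition of the product node given above; the factorization then follows from the triangular structure of the resolvent (I − λK)^{-1}.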

An LB J-node N = (K, F; X, C^m; J) is said to be a Volterra node if K is a Volterra operator, i.e., if K is compact and 0 is the only point in its spectrum.

Theorem 8.26 If U ∈ E ∩ U°_R(J) is the characteristic mvf of a simple Volterra node N = (K, F; X, C^m; J), then K is a completely non-self-adjoint operator, i.e., there are no nonzero closed subspaces L of X that are invariant under K such that K|_L is self-adjoint.


Proof Let L be a closed subspace of X that is invariant under K such that K_L is self-adjoint. Then, since K is a Volterra operator, K_L is also a Volterra operator, and the spectral theorem implies that K_L = 0. Thus N_L = (0, F_L; L, C^m; J) is a simple Volterra node with characteristic mvf

U_{N_L}(λ) = I_m + iλF_L(I − λK_L)^{-1}F_L*J = I_m + iλF_LF_L*J.

Moreover, since iF_L*JF_L = K_L − K_L* = 0, it follows that

U_{N_L}(λ)^{-1} = I_m − iλF_LF_L*J

and hence that U_{N_L} ∈ U_S(J), which is not possible when U ∈ U_R(J), unless F_LF_L* = 0. Therefore, F_L* = 0 and thus, as N_L is a simple node, L = {0}. □

Remark 8.27 The conclusions of Theorem 8.26 remain valid if U ∈ E ∩ U°(J) does not admit a left divisor of the form I_m + iλVV*J with V*JV = 0 and VV* ≠ 0.

A family C = {L_α : α ∈ A} of closed invariant subspaces of an operator K in a Hilbert space X is called a chain of invariant subspaces if it is totally ordered by inclusion, i.e., if L_1 ∈ C and L_2 ∈ C, then either L_1 ⊆ L_2 or L_2 ⊆ L_1. A chain C is called maximal if it is not contained in any other chain of closed invariant subspaces of K. It is known that a compact operator K in a separable Hilbert space X has at least one nontrivial closed invariant subspace and that every chain C of closed invariant subspaces of K is contained in a maximal chain of closed invariant subspaces of K; see, e.g., theorems 15.1 and 15.3 in [Bro72].

Correspondingly, a family {U_α : α ∈ A} of mvf's in B_U is called a chain if either U_α^{-1}U_β ∈ U(J) or U_β^{-1}U_α ∈ U(J) for every choice of α, β ∈ A. Such a chain is called a maximal chain if it is not contained in any other chain of mvf's in B_U. The stated connections between the left divisors of the characteristic mvf U_N of a simple LB J-node N = (K, F; X, C^m; J) and the closed subspaces of X that are invariant under K lead to the following conclusions: If U ∈ E ∩ U°(J) and U(λ) ≢ I_m, then:

(1) U has at least one nontrivial left divisor in the Brodskii class B_U.
(2) Every chain in B_U is contained in a maximal chain in B_U.
(3) Every mvf Ũ ∈ B_U may be parametrized by the index

α_Ũ = 2π trace K_0^Ũ(0).   (8.46)

Then α_Ũ ∈ [0, d], where d = α_U, and if U_1 ∈ B_U and U_2 ∈ B_U belong to the same chain, then

U_1^{-1}U_2 ∈ U°(J) ⟺ H(U_1) ⊆ H(U_2) ⟺ α_{U_1} ≤ α_{U_2}

and

U_1 = U_2 ⟺ H(U_1) = H(U_2) ⟺ α_{U_1} = α_{U_2}.

(4) If N = (K, F; X, C^m; J) is a simple LB J-node with characteristic mvf U ∈ U°(J), then there exists a one-to-one correspondence between maximal chains of closed subspaces of X that are invariant under K and maximal chains of left divisors of U in B_U.

The Livsic triangular model of a Volterra node is based on the m × m mvf M(t) that appears in the Potapov multiplicative representation (2.40) of a mvf U ∈ E ∩ U°(J). This is equivalent to viewing U(λ) as the monodromy matrix of a canonical integral system (2.1) with a continuous nondecreasing m × m mvf M(t) on the interval [0, d] with M(0) = 0: To each nondecreasing continuous m × m mvf M(t) on [0, d] with M(0) = 0, associate the set N_M = (K_M, F_M; X_M, C^m; J), where

X_M = L_2^m(dM; [0, d]),   (8.47)

(K_M f)(t) = iJ ∫_t^d dM(s) f(s)  and  F_M f = ∫_0^d dM(s) f(s)  for f ∈ X_M.   (8.48)

It is easy to check that K_M and F_M are well defined by these formulas as operators in L(X_M) and L(X_M, C^m), respectively. Then, since

F_M* : ξ ∈ C^m → f_ξ(t) ∈ X_M, where f_ξ(t) = ξ on [0, d],

and

(K_M* f)(t) = −iJ ∫_0^t dM(s) f(s)  for 0 ≤ t ≤ d,

it is readily seen that

K_M − K_M* = iF_M*JF_M

and hence that N_M is an LB J-node.

Theorem 8.28 Let M(t) be a continuous nondecreasing m × m mvf on [0, d] with M(0) = 0 and let U_t(λ), 0 ≤ t ≤ d, be the matrizant of the corresponding integral system (2.1). Let N_M = (K_M, F_M; X_M, C^m; J) be the LB J-node defined by formulas (8.47) and (8.48). Then:

(1) The generalized Fourier transform (8.42) for the LB J-node N_M coincides with the generalized Fourier transform

(F f)(λ) = (1/√(2π)) ∫_0^d U_s(λ) dM(s) f(s)  for every f ∈ X_M,   (8.49)

for the canonical integral system (2.1).


(2) The characteristic mvf U_{N_M}(λ) of the node N_M coincides with the monodromy matrix U_d(λ) of this system.
(3) The family

X_M^t = { f ∈ X_M : ∫_t^d f(s)* dM(s) f(s) = 0 },  0 ≤ t ≤ d,

is a maximal chain of closed subspaces of X_M that are invariant under K_M.
(4) The mvf's U_t(λ) are the characteristic mvf's of the projections N_M^t = (K_M^t, F_M^t; X_M^t, C^m; J) of the node N_M onto the subspaces X_M^t.
(5) The node N_M is simple if and only if the inclusions H(U_t) ⊆ H(U_d) are isometric for every t ∈ [0, d].
(6) If U_d ∈ U_{BR}(J), then the node N_M is simple.

Proof Our first objective is to show that the two transforms (8.42) and (8.49) coincide, i.e.,

F_M(I − λK_M)^{-1} f = ∫_0^d U(t, λ) dM(t) f(t)  for every f ∈ L_2^m(dM; [0, d]).

To this end, let η ∈ C^m and

g = (I − λ̄K_M*)^{-1}F_M*η = (I − λ̄K_M*)^{-1}f_η,

and observe that

η*F_M(I − λK_M)^{-1}f = ⟨f, g⟩_{X_M} = ∫_0^d g(t)* dM(t) f(t)  for f ∈ X_M.

Now fix λ ∈ C and let u_η(t) = U(t, λ)*η for t ∈ [0, d]. Then, as the matrizant U_t(λ) = U(t, λ), 0 ≤ t ≤ d, is the continuous solution of the integral system (2.1), it follows that

u_η(t) = f_η(t) − iλ̄J ∫_0^t dM(s) u_η(s) = f_η(t) + λ̄(K_M* u_η)(t),

i.e.,

u_η(t) = ((I − λ̄K_M*)^{-1} f_η)(t) = g(t).

Thus,

η*F_M(I − λK_M)^{-1} f = η* ∫_0^d U(t, λ) dM(t) f(t),

which justifies (1). Moreover,

I_m + iλF_M(I − λK_M)^{-1}F_M*J = I_m + iλ ∫_0^d U(t, λ) dM(t) J = U_d(λ),

i.e., the characteristic function of the operator node N_M based on M(t) is equal to the monodromy matrix of the canonical integral system based on M(t), which justifies (2).
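As a crude finite-dimensional illustration of the model node (8.47)–(8.48) (an assumed discretization, not from the text), take m = 1, J = 1 and M(t) = t on [0, 1]. With trapezoidal weights the node identity K_M − K_M* = iF_M*JF_M holds exactly at the discrete level, and F_M(I − λK_M)^{-1}f approximates ∫_0^1 U(t, λ)f(t) dt with matrizant U(t, λ) = e^{iλt}:

```python
import numpy as np

# Assumed discretization: grid on [0, 1], trapezoidal weights w,
# tail-integration matrix A approximating int_t^1.
N = 1000
t = np.linspace(0.0, 1.0, N + 1)
h = t[1] - t[0]
w = np.full(N + 1, h); w[0] = w[-1] = h / 2
W = np.diag(w)
A = np.triu(np.ones((N + 1, N + 1)), 1) + np.eye(N + 1) / 2

K = 1j * A @ W                    # (K_M f)(t) = i * int_t^1 f(s) ds
Fv = w                            # F_M f = int_0^1 f(s) ds  (as a row vector)

# node identity K_M - K_M* = i F_M* J F_M holds exactly for these weights;
# the adjoint is taken in the w-weighted inner product: K* = W^{-1} K^H W
Kstar = np.linalg.solve(W, K.conj().T @ W)
assert np.allclose(K - Kstar, 1j * np.outer(np.ones(N + 1), w))

# Theorem 8.28(1): F_M (I - lam K_M)^{-1} f  ~  int_0^1 U(t, lam) f(t) dt
# with U(t, lam) = exp(i*lam*t)
lam, f = 0.5, np.ones(N + 1)
lhs = Fv @ np.linalg.solve(np.eye(N + 1) - lam * K, f)
rhs = w @ (np.exp(1j * lam * t) * f)
assert np.isclose(lhs, rhs, atol=1e-5)
```

The second check is only approximate (trapezoidal error O(h^2)); the first is an exact discrete analog of the computation K_M − K_M* = iJ ∫_0^d dM(s) f(s) carried out above.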


The spaces X_M^t, 0 ≤ t ≤ d, are clearly closed subspaces of X_M that are invariant under K_M. To verify the asserted maximality, let L be a closed subspace of X_M that is invariant under K_M such that either L ⊆ X_M^t or X_M^t ⊆ L for each choice of t ∈ [0, d]. Let A_− = {t ∈ [0, d] : X_M^t ⊆ L} and let A_+ = {t ∈ [0, d] : L ⊆ X_M^t}. Then 0 ∈ A_−, d ∈ A_+, A_− ∪ A_+ = [0, d] and, since

⋂_{s>t} X_M^s = X_M^t = ⋁_{s<t} X_M^s,