Introduction to Analysis in Several Variables
Advanced Calculus

Michael E. Taylor

Pure and Applied Undergraduate Texts • 46

AMERICAN MATHEMATICAL SOCIETY
Providence, Rhode Island USA
EDITORIAL COMMITTEE
Gerald B. Folland (Chair)
Steven J. Miller
Jamie Pommersheim
Serge Tabachnikov

2010 Mathematics Subject Classification. Primary 26B05, 26B10, 26B12, 26B15, 26B20.

For additional information and updates on this book, visit
www.ams.org/bookpages/amstext-46
Library of Congress Cataloging-in-Publication Data

Names: Taylor, Michael E., 1946- author.
Title: Introduction to analysis in several variables : advanced calculus / Michael E. Taylor.
Description: Providence, Rhode Island : American Mathematical Society, [2020] | Series: Pure and applied undergraduate texts, 1943-9334 ; volume 46 | Includes bibliographical references and index.
Identifiers: LCCN 2020009735 | ISBN 9781470456696 (paperback) | ISBN 9781470460167 (ebook)
Subjects: LCSH: Calculus. | Functions of several real variables. | Functions of several complex variables. | AMS: Real functions - Functions of several variables - Continuity and differentiation questions. | Real functions - Functions of several variables - Implicit function theorems, Jacobians, transformations with several variables. | Real functions - Functions of several variables - Calculus of vector functions. | Real functions - Functions of several variables - Integration: length, area, volume. | Real functions - Functions of several variables - Integral formulas (Stokes, Gauss, Green, etc.).
Classification: LCC QA303.2 .T38 2020 | DDC 515-dc23
LC record available at https://lccn.loc.gov/2020009735
Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy select pages for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given. Republication, systematic copying, or multiple reproduction of any material in this publication is permitted only under license from the American Mathematical Society. Requests for permission to reuse portions of AMS publication content are handled by the Copyright Clearance Center. For more information, please visit www.ams.org/publications/pubpermissions. Send requests for translation rights and licensed reprints to reprint-permission@ams.org.
© 2020 by the American Mathematical Society. All rights reserved. The American Mathematical Society retains all rights except those granted to the United States Government. Printed in the United States of America.
The paper used in this book is acid-free and falls within the guidelines established to ensure permanence and durability. Visit the AMS home page at https://www.ams.org/
10 9 8 7 6 5 4 3 2 1    25 24 23 22 21 20
Contents

Preface vii
Some basic notation xi
Chapter 1. Background 1
  1.1. One-variable calculus 2
  1.2. Euclidean spaces 17
  1.3. Vector spaces and linear transformations 22
  1.4. Determinants 31
Chapter 2. Multivariable differential calculus 39
  2.1. The derivative 39
  2.2. Inverse function and implicit function theorems 56
  2.3. Systems of differential equations and vector fields 68
Chapter 3. Multivariable integral calculus and calculus on surfaces 87
  3.1. The Riemann integral in n variables 88
  3.2. Surfaces and surface integrals 117
  3.3. Partitions of unity 145
  3.4. Sard's theorem 146
  3.5. Morse functions 147
  3.6. The tangent space to a manifold 148
Chapter 4. Differential forms and the Gauss-Green-Stokes formula 153
  4.1. Differential forms 154
  4.2. Products and exterior derivatives of forms 160
  4.3. The general Stokes formula 164
  4.4. The classical Gauss, Green, and Stokes formulas 169
  4.5. Differential forms and the change of variable formula 179
Chapter 5. Applications of the Gauss-Green-Stokes formula 185
  5.1. Holomorphic functions and harmonic functions 186
  5.2. Differential forms, homotopy, and the Lie derivative 200
  5.3. Differential forms and degree theory 205
Chapter 6. Differential geometry of surfaces 221
  6.1. Geometry of surfaces I: geodesics 225
  6.2. Geometry of surfaces II: curvature 238
  6.3. Geometry of surfaces III: the Gauss-Bonnet theorem 252
  6.4. Smooth matrix groups 265
  6.5. The derivative of the exponential map 283
  6.6. A spectral mapping theorem 288
Chapter 7. Fourier analysis 291
  7.1. Fourier series 294
  7.2. The Fourier transform 310
  7.3. Poisson summation formulas 330
  7.4. Spherical harmonics 332
  7.5. Fourier series on compact matrix groups 372
  7.6. Isoperimetric inequality 378
Appendix A. Complementary material 381
  A.1. Metric spaces, convergence, and compactness 382
  A.2. Inner product spaces 393
  A.3. Eigenvalues and eigenvectors 398
  A.4. Complements on power series 402
  A.5. The Weierstrass theorem and the Stone-Weierstrass theorem 408
  A.6. Further results on harmonic functions 410
  A.7. Beyond degree theory—introduction to de Rham theory 416
Bibliography 437
Index 441
Preface
This text was produced for the second part of a two-part sequence on advanced calculus, whose aim is to provide a firm logical foundation for analysis, for students who have had three semesters of calculus and a course in linear algebra. The first part treats analysis in one variable, and the text [49] was written to cover that material. The text at hand treats analysis in several variables. These two texts can be used as companions, but they are written so that they can be used independently, if desired.

Chapter 1 treats background needed for multivariable analysis. The first section gives a brief treatment of one-variable calculus, including the Riemann integral and the fundamental theorem of calculus. This section distills material developed in more detail in the companion text [49]. We have included it here to facilitate the independent use of this text. Subsequent sections in Chapter 1 present the basic linear algebra background of use for the rest of this text. They include material on $n$-dimensional Euclidean spaces and other vector spaces, on linear transformations on such spaces, and on determinants of such linear transformations.

Chapter 2 develops multidimensional differential calculus on domains in $n$-dimensional Euclidean space $\mathbb{R}^n$. The first section defines the derivative of a differentiable map $F : \mathcal{O} \to \mathbb{R}^m$, at a point $x \in \mathcal{O}$, for $\mathcal{O}$ open in $\mathbb{R}^n$, as a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$, and establishes basic properties, such as the chain rule. The next section deals with the inverse function theorem, giving a condition for such a map to have a differentiable inverse, when $n = m$. The third section treats $n \times n$ systems of differential equations, bringing in the concepts of vector fields and flows on an open set $\mathcal{O} \subset \mathbb{R}^n$. While the emphasis here is on differential calculus, we do make use of integral calculus in one variable, as exposed in Chapter 1.

Chapter 3 treats multidimensional integral calculus. We define the Riemann integral for a class of functions on $\mathbb{R}^n$ and establish basic properties, including a change of variable formula. We then study smooth $m$-dimensional surfaces in $\mathbb{R}^n$, and extend
the Riemann integral to a class of functions on such surfaces. Going further, we abstract the notion of surface to that of a manifold, and study a class of manifolds known as Riemannian manifolds. These possess an object known as a metric tensor. We also define the Riemann integral for a class of functions on such manifolds. The change of variable formula is instrumental in this extension of the integral.

In Chapter 4 we introduce a further class of objects that can be defined on surfaces, differential forms. A $k$-form can be integrated over a $k$-dimensional surface, endowed with an extra structure, an orientation. Again the change of variable formula plays a role in establishing this. Important operations on differential forms include products and the exterior derivative. A key result of Chapter 4 is a general Stokes formula, an important integral identity that can be seen as a multidimensional version of the fundamental theorem of calculus. In §4.4 we specialize this general Stokes formula to classical cases, known as theorems of Gauss, Green, and Stokes. A concluding section of Chapter 4 makes use of material on differential forms to give another proof of the change of variable formula for the integral, much different from the proof given in Chapter 3.

Chapter 5 is devoted to several applications of the material on the Gauss-Green-Stokes theorems from Chapter 4. In §5.1 we use Green's theorem to derive fundamental properties of holomorphic functions of a complex variable. Sprinkled throughout earlier sections are some allusions to functions of complex variables, particularly in some of the exercises in §§2.1-2.2. Readers with no previous exposure to complex variables might wish to return to these exercises after getting through §5.1. In this section, we also discuss some results on the closely related study of harmonic functions. One result is Liouville's theorem, stating that a bounded harmonic function on all of $\mathbb{R}^n$ must be constant. When specialized to holomorphic functions on $\mathbb{C} = \mathbb{R}^2$, this yields a proof of the fundamental theorem of algebra.

In §5.2 we define the notion of smoothly homotopic maps and consider the behavior of closed differential forms under pullback by smoothly homotopic maps. This material is then applied in §5.3, which introduces degree theory and derives some interesting consequences. Key results include the Brouwer fixed point theorem, the Jordan-Brouwer separation theorem (in the smooth case), and the study of critical points of a vector field tangent to a compact surface, and connections with the Euler characteristic. We also show how degree theory yields another proof of the fundamental theorem of algebra.

Chapter 6 applies results of Chapters 2-5 to the study of the geometry of surfaces (and more generally of Riemannian manifolds). Section 6.1 studies geodesics, which are locally length-minimizing curves. Section 6.2 studies curvature. Several varieties of curvature arise, including Gauss curvature and Riemann curvature, and it is of great interest to understand the relations between them. Section 6.3 ties the curvature study of §6.2 to material on degree theory from §5.3, in a result known as the Gauss-Bonnet theorem. Section 6.4 studies smooth matrix groups, which are smooth surfaces in $M(n,\mathbb{F})$ that are also groups. These carry left and right invariant metric tensors, with important
consequences for the application of such groups to other aspects of analysis, including results presented in §7.4.

Chapter 7 is devoted to an introduction to multidimensional Fourier analysis. Section 7.1 treats Fourier series on the $n$-dimensional torus $\mathbb{T}^n$, and §7.2 treats the Fourier transform for functions on $\mathbb{R}^n$. Section 7.3 introduces a topic that ties the first two together, known as Poisson's summation formula. We apply this formula to establish a classical result of Riemann, his functional equation for the Riemann zeta function.

The material in §§7.1-7.3 bears on topics rather different from the geometrical material emphasized in the latter part of Chapter 3 and in Chapters 4-6. In fact, this part of Chapter 7 could be tackled right after one gets through §3.1. On the other hand, the last three sections of Chapter 7 make strong contact with this geometrical material. Section 7.4 treats Fourier analysis on the sphere $S^{n-1}$, which involves expanding a function on $S^{n-1}$ in terms of eigenfunctions of the Laplace operator $\Delta_S$, arising from the Riemannian metric on $S^{n-1}$. This study of course includes integrating functions over $S^{n-1}$. It also brings in the matrix group $SO(n)$, introduced in Chapter 3, which acts on each eigenspace $V_k$ of $\Delta_S$, and its subgroup $SO(n-1)$, and makes use of integrals over $SO(n-1)$. Section 7.4 also makes use of the Gauss-Green-Stokes formula and applications to harmonic functions, from §§4.4 and 5.1. We believe the reader will gain a good appreciation of the utility of unifying geometrical concepts with those aspects of Fourier analysis developed in the first part of Chapter 7. We complement §7.4 with a brief discussion of Fourier series on compact matrix groups, in §7.5.

Section 7.6 deals with the purely geometric problem of showing that, among smoothly bounded planar domains $\Omega \subset \mathbb{R}^2$ with fixed area, the disks have the smallest perimeter. This is the two-dimensional isoperimetric inequality. Its placement here is due to the fact that its proof is an application of Fourier series.

The text ends with a collection of appendices, some giving further background material, others providing complements to results of the main text. Appendix A.1 covers some basic notions of metric spaces and compactness used from time to time throughout the text, such as in the study of the Riemann integral and in the proof of the fundamental existence theorem for ODE. As is the case with §1.1, Appendix A.1 distills material developed at a more leisurely pace in [49], again serving to make this text independent of the first one.

Appendices A.2 and A.3 complement results on linear algebra presented in Chapter 1 with some further results. Appendix A.2 treats a general class of inner product spaces, both finite and infinite dimensional. Treatments in the latter case are relevant to results on Fourier analysis in Chapter 7. Appendix A.3 treats eigenvalues and eigenvectors of linear transformations on finite-dimensional vector spaces, providing results useful in various places, from §2.1 to §6.6.

Appendix A.4 discusses the remainder term in the power series of a function. Appendix A.5 deals with the Weierstrass theorem on approximating a continuous function by polynomials, and an extension, known as the Stone-Weierstrass theorem, a useful tool in analysis, with applications in §§5.3, 7.1, and 7.4. Appendix A.6 builds on material on harmonic functions presented in Chapters 5 and 7. Results range from a
removable singularity theorem to extensions of Liouville's theorem. Appendix A.7 introduces de Rham cohomology, as an extension of degree theory, developed in Chapter 5.

We point out some distinctive features of this treatment of advanced calculus.

1) Applications of the Gauss-Green-Stokes formulas. These formulas form a high point in any advanced calculus course, but we do not want them to be seen as the culmination of the course. Their significance arises from their many applications. The first application we treat is to the theory of functions of a complex variable, including the Cauchy integral theorem and basic consequences. This basically constitutes a mini-course in complex analysis. (A much more extensive treatment can be found in [51].) We also derive applications to the study of harmonic functions, in $n$ variables, a study that is closely related to complex analysis when $n = 2$.

We also apply differential forms and the Stokes formula to results of a topological flavor, involving a set of tools known as degree theory. We start with a result known as the Brouwer fixed point theorem. We give a short proof, as a direct application of the Stokes formula, thus making this theorem a precursor to degree theory rather than an application.

2) The unity of analysis and geometry. This starts with calculus on surfaces, computing surface areas and surface integrals, given in terms of the metric tensors these surfaces inherit, but it proceeds much further. There is the question of finding geodesics, shortest paths, described by certain differential equations, whose coefficients arise from the metric tensor. Another issue is what makes a curved surface curved. One particular measure is called the Gauss curvature. There are formulas for the integrated Gauss curvature, which in turn make contact with degree theory. Such matters are examples of connections uniting analysis and geometry, which are pursued in the text.

Other connections arise in the treatment of Fourier analysis. In addition to Fourier analysis on Euclidean space, the text treats Fourier analysis on spheres. Matrix groups, such as rotation groups $SO(n)$, make an appearance, both as tools for studying Fourier analysis on spheres and as further sources of problems in Fourier analysis, thereby expanding the theater in which we bring to bear techniques of advanced calculus developed here.

Acknowledgment. During the preparation of this book, my research has been supported by a number of NSF grants, most recently DMS-1500817.
Some basic notation

This list of some basic notation will be used throughout the text.

$\mathbb{R}$ is the set of real numbers.

$\mathbb{C}$ is the set of complex numbers.

$\mathbb{Z}$ is the set of integers.

$\mathbb{Z}^+$ is the set of integers $\ge 0$.

$\mathbb{N}$ is the set of integers $\ge 1$ (the natural numbers).

$\mathbb{Q}$ is the set of rational numbers.

$x \in \mathbb{R}$ means $x$ is an element of $\mathbb{R}$, i.e., $x$ is a real number.

$(a,b)$ denotes the set of $x \in \mathbb{R}$ such that $a < x < b$.

$[a,b]$ denotes the set of $x \in \mathbb{R}$ such that $a \le x \le b$.

$\{x \in \mathbb{R} : a \le x \le b\}$ denotes the set of $x$ in $\mathbb{R}$ such that $a \le x \le b$.

$[a,b) = \{x \in \mathbb{R} : a \le x < b\}$ and $(a,b] = \{x \in \mathbb{R} : a < x \le b\}$.

$\bar{z} = x - iy$ if $z = x + iy \in \mathbb{C}$, $x, y \in \mathbb{R}$.

$\overline{D}$ denotes the closure of the set $D$.

$f : A \to B$ denotes that the function $f$ takes points in the set $A$ to points in $B$. One also says $f$ maps $A$ to $B$.

$x \to x_0$ means the variable $x$ tends to the limit $x_0$.

$f(x) = O(x)$ means $f(x)/x$ is bounded. Similarly $g(\varepsilon) = O(\varepsilon^k)$ means $g(\varepsilon)/\varepsilon^k$ is bounded.

$f(x) = o(x)$ as $x \to 0$ (resp., $x \to \infty$) means $f(x)/x \to 0$ as $x$ tends to the specified limit.

$S = \sup_n |a_n|$ means $S$ is the smallest real number that satisfies $S \ge |a_n|$ for all $n$. If there is no such real number, then we take $S = +\infty$.

$\limsup_{k \to \infty} |a_k| = \lim_{n \to \infty} \bigl( \sup_{k \ge n} |a_k| \bigr)$.
Chapter 1

Background

This first chapter provides background material on one-variable calculus, the geometry of $n$-dimensional Euclidean space, and linear algebra.

We begin in §1.1 with a presentation of the elements of calculus in one variable. We first define the Riemann integral for a class of functions on an interval. We then introduce the derivative and establish the fundamental theorem of calculus, relating differentiation and integration as essentially inverse operations. Further results are dealt with in the exercises, such as the change of variable formula for integrals and the Taylor formula with remainder for power series.

Next we introduce the $n$-dimensional Euclidean spaces $\mathbb{R}^n$, in §1.2. The dot product on $\mathbb{R}^n$ gives rise to a norm, hence to a distance function, making $\mathbb{R}^n$ a metric space. (More general metric spaces are studied in Appendix A.1.) We define the notion of open and closed subsets of $\mathbb{R}^n$, of convergent sequences, and of compactness, and we establish that nonempty closed, bounded subsets of $\mathbb{R}^n$ are compact. This material makes use of results on the real line $\mathbb{R}$, dealt with at length in [49], and reviewed in Appendix A.1.

The spaces $\mathbb{R}^n$ are special cases of vector spaces, explored in greater generality in §1.3. We also study linear transformations $T : V \to W$ between two vector spaces. We define the class of finite-dimensional vector spaces, and show that the dimension of such a vector space is well defined. If $V$ is a real vector space and $\dim V = n$, then $V$ is isomorphic to $\mathbb{R}^n$. Linear transformations from $\mathbb{R}^n$ to $\mathbb{R}^m$ are given by $m \times n$ matrices.

In §1.4 we define the determinant, $\det A$, of an $n \times n$ matrix $A$, and show that $A$ is invertible if and only if $\det A \neq 0$. In Chapter 2, such linear transformations arise as derivatives of nonlinear maps, and understanding the behavior of these derivatives is basic to many key results in multivariable calculus, both in Chapter 2 and in subsequent chapters.
1.1. One-variable calculus
In this brief discussion of one-variable calculus, we introduce the Riemann integral and relate it to the derivative. We will define the Riemann integral of a bounded function over an interval $I = [a,b]$ on the real line. For now, we assume $f$ is real valued. To start, we partition $I$ into smaller intervals. A partition $\mathcal{P}$ of $I$ is a finite collection of subintervals $\{J_k : 0 \le k \le N\}$, disjoint except for their endpoints, whose union is $I$. We can order the $J_k$ so that $J_k = [x_k, x_{k+1}]$, where

(1.1.1) $a = x_0 < x_1 < \cdots < x_N < x_{N+1} = b$.

We call the points $x_k$ the endpoints of $\mathcal{P}$. We set

(1.1.2) $\operatorname{maxsize}(\mathcal{P}) = \max_{0 \le k \le N} \ell(J_k)$.

We then set

(1.1.3) $\overline{I}_{\mathcal{P}}(f) = \sum_k \sup_{J_k} f(x)\,\ell(J_k)$, $\quad \underline{I}_{\mathcal{P}}(f) = \sum_k \inf_{J_k} f(x)\,\ell(J_k)$.

Definitions of sup and inf are given in (A.1.17)-(A.1.18). We call $\overline{I}_{\mathcal{P}}(f)$ and $\underline{I}_{\mathcal{P}}(f)$ the upper sum and lower sum, respectively, of $f$, associated to the partition $\mathcal{P}$. See Figure 1.1.1 for an illustration. Note that $\underline{I}_{\mathcal{P}}(f) \le \overline{I}_{\mathcal{P}}(f)$. These quantities should approximate the Riemann integral of $f$, if the partition $\mathcal{P}$ is sufficiently fine.

To be more precise, if $\mathcal{P}$ and $\mathcal{Q}$ are two partitions of $I$, we say $\mathcal{P}$ refines $\mathcal{Q}$ and write $\mathcal{P} \succ \mathcal{Q}$, if $\mathcal{P}$ is formed by partitioning each interval in $\mathcal{Q}$. Equivalently, $\mathcal{P} \succ \mathcal{Q}$ if and only if all the endpoints of $\mathcal{Q}$ are also endpoints of $\mathcal{P}$. It is easy to see that any two partitions have a common refinement; just take the union of their endpoints, to form a new partition. Note also that refining a partition lowers the upper sum of $f$ and raises its lower sum,

(1.1.4) $\mathcal{P} \succ \mathcal{Q} \Longrightarrow \overline{I}_{\mathcal{P}}(f) \le \overline{I}_{\mathcal{Q}}(f)$ and $\underline{I}_{\mathcal{P}}(f) \ge \underline{I}_{\mathcal{Q}}(f)$.

Figure 1.1.1. Upper and lower sums
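The upper and lower sums of (1.1.3) can be computed directly in a short numerical sketch (an added illustration, not part of the text; the choice $f(x) = x^2$ on $I = [0,1]$ and the code are illustrative). Since this $f$ is increasing, the sup and inf over each $J_k$ occur at the right and left endpoints, and both sums bracket $\int_0^1 x^2\,dx = 1/3$, with the gap shrinking as the partition is refined:

```python
def upper_lower_sums(f, a, b, n):
    """Upper and lower sums of an increasing function f over a partition
    of [a, b] into n equal subintervals (sup/inf taken at endpoints)."""
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    upper = sum(f(xs[k + 1]) * (xs[k + 1] - xs[k]) for k in range(n))
    lower = sum(f(xs[k]) * (xs[k + 1] - xs[k]) for k in range(n))
    return upper, lower

f = lambda x: x * x
for n in [10, 100, 1000]:
    up, lo = upper_lower_sums(f, 0.0, 1.0, n)
    # lower sum <= 1/3 <= upper sum; the gap telescopes to (f(1)-f(0))/n = 1/n
    print(n, lo, up)
```

Refining the partition (larger $n$) squeezes both sums toward the integral, in accord with (1.1.4).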
Consequently, if $\mathcal{P}_1, \mathcal{P}_2$ are any two partitions and $\mathcal{Q}$ is a common refinement, we have

(1.1.5) $\underline{I}_{\mathcal{P}_1}(f) \le \underline{I}_{\mathcal{Q}}(f) \le \overline{I}_{\mathcal{Q}}(f) \le \overline{I}_{\mathcal{P}_2}(f)$.

Now, whenever $f : I \to \mathbb{R}$ is bounded, the following quantities are well defined:

(1.1.6) $\overline{I}(f) = \inf_{\mathcal{P} \in \Pi(I)} \overline{I}_{\mathcal{P}}(f)$, $\quad \underline{I}(f) = \sup_{\mathcal{P} \in \Pi(I)} \underline{I}_{\mathcal{P}}(f)$,

where $\Pi(I)$ is the set of all partitions of $I$. We call $\underline{I}(f)$ the lower integral of $f$ and $\overline{I}(f)$ its upper integral. Clearly, by (1.1.5), $\underline{I}(f) \le \overline{I}(f)$. We then say that $f$ is Riemann integrable provided $\overline{I}(f) = \underline{I}(f)$, and in such a case, we set

(1.1.7) $\int_a^b f(x)\,dx = \int_I f(x)\,dx = \overline{I}(f) = \underline{I}(f)$.

We will denote the set of Riemann integrable functions on $I$ by $\mathfrak{R}(I)$.

We derive some basic properties of the Riemann integral.

Proposition 1.1.1. If $f, g \in \mathfrak{R}(I)$, then $f + g \in \mathfrak{R}(I)$, and

(1.1.8) $\int_I (f+g)\,dx = \int_I f\,dx + \int_I g\,dx$.
Proof. If $J_k$ is any subinterval of $I$, then

$\sup_{J_k} (f+g) \le \sup_{J_k} f + \sup_{J_k} g$ and $\inf_{J_k} (f+g) \ge \inf_{J_k} f + \inf_{J_k} g$,

so, for any partition $\mathcal{P}$, we have $\overline{I}_{\mathcal{P}}(f+g) \le \overline{I}_{\mathcal{P}}(f) + \overline{I}_{\mathcal{P}}(g)$. Also, using common refinements, we can simultaneously approximate $\overline{I}(f)$ and $\overline{I}(g)$ by $\overline{I}_{\mathcal{P}}(f)$ and $\overline{I}_{\mathcal{P}}(g)$, and the same goes for $\overline{I}(f+g)$. Thus the characterization (1.1.6) implies $\overline{I}(f+g) \le \overline{I}(f) + \overline{I}(g)$. A parallel argument implies $\underline{I}(f+g) \ge \underline{I}(f) + \underline{I}(g)$, and the proposition follows. □
Next, there is a fair supply of Riemann integrable functions.

Proposition 1.1.2. If $f$ is continuous on $I$, then $f$ is Riemann integrable.

Proof. Any continuous function on a compact interval is bounded and uniformly continuous (see Propositions A.1.15 and A.1.16). Let $\omega(\delta)$ be a modulus of continuity for $f$, so

(1.1.9) $|x - y| \le \delta \Longrightarrow |f(x) - f(y)| \le \omega(\delta)$, $\quad \omega(\delta) \to 0$ as $\delta \to 0$.

Then

(1.1.10) $\operatorname{maxsize}(\mathcal{P}) \le \delta \Longrightarrow \overline{I}_{\mathcal{P}}(f) - \underline{I}_{\mathcal{P}}(f) \le \omega(\delta)\,\ell(I)$,

which yields the proposition. □
We denote the set of continuous functions on $I$ by $C(I)$. Thus Proposition 1.1.2 says $C(I) \subset \mathfrak{R}(I)$.

The proof of Proposition 1.1.2 provides a criterion on a partition guaranteeing that $\overline{I}_{\mathcal{P}}(f)$ and $\underline{I}_{\mathcal{P}}(f)$ are close to $\int_I f\,dx$ when $f$ is continuous. We produce an extension, giving a condition under which $\overline{I}_{\mathcal{P}}(f)$ and $\overline{I}(f)$ are close, and $\underline{I}_{\mathcal{P}}(f)$ and $\underline{I}(f)$ are close, given $f$ bounded on $I$. Given a partition $\mathcal{P}_0$ of $I$, set

(1.1.11) $\operatorname{minsize}(\mathcal{P}_0) = \min\{\ell(J_k) : J_k \in \mathcal{P}_0\}$.

Lemma 1.1.3. Let $\mathcal{P}$ and $\mathcal{Q}$ be two partitions of $I$. Assume

(1.1.12) $\operatorname{maxsize}(\mathcal{P}) \le \frac{1}{k} \operatorname{minsize}(\mathcal{Q})$.

Let $|f| \le M$ on $I$. Then

(1.1.13) $\overline{I}_{\mathcal{P}}(f) \le \overline{I}_{\mathcal{Q}}(f) + \frac{2M}{k}\ell(I)$, $\quad \underline{I}_{\mathcal{P}}(f) \ge \underline{I}_{\mathcal{Q}}(f) - \frac{2M}{k}\ell(I)$.

Proof. Let $\mathcal{P}_1$ denote the minimal common refinement of $\mathcal{P}$ and $\mathcal{Q}$. Consider on the one hand those intervals in $\mathcal{P}$ that are contained in intervals in $\mathcal{Q}$ and on the other hand those intervals in $\mathcal{P}$ that are not contained in intervals in $\mathcal{Q}$. Each interval of the first type is also an interval in $\mathcal{P}_1$. Each interval of the second type gets partitioned, to yield two intervals in $\mathcal{P}_1$. Denote by $\mathcal{P}_1^b$ the collection of such divided intervals. By (1.1.12), the lengths of the intervals in $\mathcal{P}_1^b$ sum to $\le \ell(I)/k$. It follows that

$|\overline{I}_{\mathcal{P}}(f) - \overline{I}_{\mathcal{P}_1}(f)| \le \sum_{J \in \mathcal{P}_1^b} 2M\,\ell(J) \le 2M\,\frac{\ell(I)}{k}$,

and similarly $|\underline{I}_{\mathcal{P}}(f) - \underline{I}_{\mathcal{P}_1}(f)| \le 2M\ell(I)/k$. Therefore

$\overline{I}_{\mathcal{P}}(f) \le \overline{I}_{\mathcal{P}_1}(f) + \frac{2M}{k}\ell(I)$, $\quad \underline{I}_{\mathcal{P}}(f) \ge \underline{I}_{\mathcal{P}_1}(f) - \frac{2M}{k}\ell(I)$.

Since also $\overline{I}_{\mathcal{P}_1}(f) \le \overline{I}_{\mathcal{Q}}(f)$ and $\underline{I}_{\mathcal{P}_1}(f) \ge \underline{I}_{\mathcal{Q}}(f)$, we obtain (1.1.13). □
The following consequence is sometimes called Darboux's theorem.

Theorem 1.1.4. Let $\mathcal{P}_\nu$ be a sequence of partitions of $I$ into $\nu$ intervals $J_{\nu k}$, $1 \le k \le \nu$, such that $\operatorname{maxsize}(\mathcal{P}_\nu) \to 0$. If $f : I \to \mathbb{R}$ is bounded, then

(1.1.14) $\overline{I}_{\mathcal{P}_\nu}(f) \to \overline{I}(f)$ and $\underline{I}_{\mathcal{P}_\nu}(f) \to \underline{I}(f)$.

Consequently,

(1.1.15) $f \in \mathfrak{R}(I) \Longrightarrow \lim_{\nu \to \infty} \sum_{k=1}^{\nu} f(\xi_{\nu k})\,\ell(J_{\nu k})$ exists, for arbitrary $\xi_{\nu k} \in J_{\nu k}$, $1 \le k \le \nu$,

in which case the limit is $\int_I f\,dx$.
Proof. As before, assume $|f| \le M$. Pick $\varepsilon = 1/k > 0$. Let $\mathcal{Q}$ be a partition such that

$\overline{I}(f) \le \overline{I}_{\mathcal{Q}}(f) \le \overline{I}(f) + \varepsilon$, $\quad \underline{I}(f) \ge \underline{I}_{\mathcal{Q}}(f) \ge \underline{I}(f) - \varepsilon$.

Now pick $N$ such that

$\nu \ge N \Longrightarrow \operatorname{maxsize}(\mathcal{P}_\nu) \le \varepsilon \operatorname{minsize}(\mathcal{Q})$.

Lemma 1.1.3 yields, for $\nu \ge N$,

$\overline{I}_{\mathcal{P}_\nu}(f) \le \overline{I}_{\mathcal{Q}}(f) + 2M\ell(I)\varepsilon$, $\quad \underline{I}_{\mathcal{P}_\nu}(f) \ge \underline{I}_{\mathcal{Q}}(f) - 2M\ell(I)\varepsilon$.

Hence, for $\nu \ge N$,

$\overline{I}(f) \le \overline{I}_{\mathcal{P}_\nu}(f) \le \overline{I}(f) + [2M\ell(I) + 1]\varepsilon$, $\quad \underline{I}(f) \ge \underline{I}_{\mathcal{P}_\nu}(f) \ge \underline{I}(f) - [2M\ell(I) + 1]\varepsilon$.

This proves (1.1.14). □
Remark. The sums on the right side of (1.1.15) are called Riemann sums, approximating $\int_I f\,dx$ (when $f$ is Riemann integrable).

Remark. A second proof of Proposition 1.1.1 can readily be deduced from Theorem 1.1.4.
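Theorem 1.1.4 can be observed numerically (an added sketch, not part of the text; the function $\sin$ on $[0,\pi]$ and the midpoint choice of sample points are illustrative assumptions). Partitioning $[0,\pi]$ into $\nu$ equal intervals and taking $\xi_{\nu k}$ to be the midpoint of $J_{\nu k}$, the Riemann sums of (1.1.15) approach $\int_0^\pi \sin x\,dx = 2$:

```python
import math

def riemann_sum(f, a, b, nu):
    """Riemann sum of f over [a, b] with nu equal subintervals,
    sampling each subinterval at its midpoint."""
    h = (b - a) / nu
    return sum(f(a + (k + 0.5) * h) * h for k in range(nu))

for nu in [4, 16, 64, 256]:
    # converges to the integral of sin over [0, pi], which is 2
    print(nu, riemann_sum(math.sin, 0.0, math.pi, nu))
```

Any other choice of sample points $\xi_{\nu k}$ would give the same limit here, since $\sin$ is continuous, hence Riemann integrable.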
One should be warned that once such a specific choice of $\mathcal{P}_\nu$ and $\xi_{\nu k}$ has been made, the limit on the right side of (1.1.15) might exist for a bounded function that is not Riemann integrable. This and other phenomena are illustrated by the following example of a function which is not Riemann integrable. For $x \in I$, set

(1.1.16) $\vartheta(x) = 1$ if $x \in \mathbb{Q}$, $\quad \vartheta(x) = 0$ if $x \notin \mathbb{Q}$,

where $\mathbb{Q}$ is the set of rational numbers. Now every interval $J \subset I$ of positive length contains points in $\mathbb{Q}$ and points not in $\mathbb{Q}$, so for any partition $\mathcal{P}$ of $I$ we have $\overline{I}_{\mathcal{P}}(\vartheta) = \ell(I)$ and $\underline{I}_{\mathcal{P}}(\vartheta) = 0$, hence

(1.1.17) $\overline{I}(\vartheta) = \ell(I)$, $\quad \underline{I}(\vartheta) = 0$.
Note that if $\mathcal{P}_\nu$ is a partition of $I$ into $\nu$ equal subintervals, then we could pick each $\xi_{\nu k}$ to be rational, in which case the limit on the right side of (1.1.15) would be $\ell(I)$, or we could pick each $\xi_{\nu k}$ to be irrational, in which case this limit would be zero. Alternatively, we could pick half of them to be rational and half to be irrational, and the limit would be $\ell(I)/2$.

Associated to the Riemann integral is a notion of size of a set $S$, called content. If $S$ is a subset of $I$, define the characteristic function

(1.1.18) $\chi_S(x) = 1$ if $x \in S$, $\quad 0$ if $x \notin S$.

We define upper content $\operatorname{cont}^+$ and lower content $\operatorname{cont}^-$ by

(1.1.19) $\operatorname{cont}^+(S) = \overline{I}(\chi_S)$, $\quad \operatorname{cont}^-(S) = \underline{I}(\chi_S)$.
We say $S$ has content, or is contented, if these quantities are equal, which happens if and only if $\chi_S \in \mathfrak{R}(I)$, in which case the common value of $\operatorname{cont}^+(S)$ and $\operatorname{cont}^-(S)$ is

(1.1.20) $m(S) = \int_I \chi_S(x)\,dx$.

It is easy to see that

(1.1.21) $\operatorname{cont}^+(S) = \inf \Bigl\{ \sum_{k=1}^{N} \ell(I_k) : S \subset I_1 \cup \cdots \cup I_N \Bigr\}$,

where $I_k$ are intervals. Here, we require $S$ to be in the union of a finite collection of intervals. See the appendix at the end of this section for a generalization of Proposition 1.1.2, which gives a sufficient condition for a bounded function to be Riemann integrable on $I$, in terms of the upper content of its set of discontinuities.

There is a more sophisticated notion of the size of a subset of $I$, called Lebesgue measure. The key to the construction of Lebesgue measure is to cover a set $S$ by a countable (either finite or infinite) set of intervals. The outer measure of $S \subset I$ is defined by

(1.1.22) $m^*(S) = \inf \Bigl\{ \sum_{k \ge 1} \ell(J_k) : S \subset \bigcup_{k \ge 1} J_k \Bigr\}$.

Here $\{J_k\}$ is a finite or countably infinite collection of intervals. Clearly

(1.1.23) $m^*(S) \le \operatorname{cont}^+(S)$.
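Formula (1.1.22) makes upper bounds on $m^*$ easy to produce. As a supplementary derivation (added here, not in the original text), any countable set $S = \{s_1, s_2, s_3, \dots\}$ has outer measure zero, by the standard covering argument:

```latex
% Cover the k-th point of S by an interval of length \varepsilon 2^{-k}.
\[
J_k = \Bigl( s_k - \frac{\varepsilon}{2^{k+1}},\; s_k + \frac{\varepsilon}{2^{k+1}} \Bigr),
\qquad \ell(J_k) = \frac{\varepsilon}{2^k},
\qquad S \subset \bigcup_{k \ge 1} J_k,
\]
so, by (1.1.22),
\[
m^*(S) \;\le\; \sum_{k \ge 1} \frac{\varepsilon}{2^k} \;=\; \varepsilon
\quad \text{for every } \varepsilon > 0,
\quad \text{hence } m^*(S) = 0.
\]
```

In particular this applies to countable sets such as $I \cap \mathbb{Q}$.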
Note that if $S = I \cap \mathbb{Q}$, then $\chi_S = \vartheta$, defined by (1.1.16). In this case it is easy to see that $\operatorname{cont}^+(S) = \ell(I)$, but $m^*(S) = 0$. Zero is the "right" measure of this set. More material on the development of measure theory can be found in a number of books, including [17] and [47].

It is useful to note that $\int_I f\,dx$ is additive in $I$, in the following sense.
Proposition 1.1.5. If $a < b < c$, $f : [a,c] \to \mathbb{R}$, $f_1 = f|_{[a,b]}$, $f_2 = f|_{[b,c]}$, then

(1.1.24) $f \in \mathfrak{R}([a,c]) \Longleftrightarrow f_1 \in \mathfrak{R}([a,b])$ and $f_2 \in \mathfrak{R}([b,c])$,

and, if this holds,

(1.1.25) $\int_a^c f\,dx = \int_a^b f_1\,dx + \int_b^c f_2\,dx$.

Proof. Since any partition of $[a,c]$ has a refinement for which $b$ is an endpoint, we may as well consider a partition $\mathcal{P} = \mathcal{P}_1 \cup \mathcal{P}_2$, where $\mathcal{P}_1$ is a partition of $[a,b]$ and $\mathcal{P}_2$ is a partition of $[b,c]$. Then

(1.1.26) $\overline{I}_{\mathcal{P}}(f) = \overline{I}_{\mathcal{P}_1}(f_1) + \overline{I}_{\mathcal{P}_2}(f_2)$ and $\underline{I}_{\mathcal{P}}(f) = \underline{I}_{\mathcal{P}_1}(f_1) + \underline{I}_{\mathcal{P}_2}(f_2)$,

so

(1.1.27) $\overline{I}_{\mathcal{P}}(f) - \underline{I}_{\mathcal{P}}(f) = \bigl\{\overline{I}_{\mathcal{P}_1}(f_1) - \underline{I}_{\mathcal{P}_1}(f_1)\bigr\} + \bigl\{\overline{I}_{\mathcal{P}_2}(f_2) - \underline{I}_{\mathcal{P}_2}(f_2)\bigr\}$.
Since both terms in braces in (1.1.27) are $\ge 0$, we have equivalence in (1.1.24). Then (1.1.25) follows from (1.1.26) upon taking sufficiently fine partitions. □

Let $I = [a,b]$. If $f \in \mathfrak{R}(I)$, then $f \in \mathfrak{R}([a,x])$ for all $x \in [a,b]$, and we can consider the function

(1.1.28) $g(x) = \int_a^x f(t)\,dt$.

If $a \le x_0 \le x_1 \le b$, then

(1.1.29) $g(x_1) - g(x_0) = \int_{x_0}^{x_1} f(t)\,dt$,

so, if $|f| \le M$,

(1.1.30) $|g(x_1) - g(x_0)| \le M|x_1 - x_0|$.

In other words, if $f \in \mathfrak{R}(I)$, then $g$ is Lipschitz continuous on $I$.

A function $g : (a,b) \to \mathbb{R}$ is said to be differentiable at $x \in (a,b)$ provided there exists the limit

(1.1.31) $\lim_{h \to 0} \frac{1}{h}\bigl[g(x+h) - g(x)\bigr] = g'(x)$.

When such a limit exists, $g'(x)$, also denoted $dg/dx$, is called the derivative of $g$ at $x$. Clearly, $g$ is continuous wherever it is differentiable.
The next result is part of the fundamental theorem of calculus.

Theorem 1.1.6. If $f \in C([a,b])$, then the function $g$, defined by (1.1.28), is differentiable at each point $x \in (a,b)$, and

(1.1.32) $g'(x) = f(x)$.

Proof. Parallel to (1.1.29), we have, for $h > 0$,

(1.1.33) $\frac{1}{h}\bigl[g(x+h) - g(x)\bigr] = \frac{1}{h}\int_x^{x+h} f(t)\,dt$.

If $f$ is continuous at $x$, then, for any $\varepsilon > 0$, there exists $\delta > 0$ such that $|f(t) - f(x)| \le \varepsilon$ whenever $|t - x| \le \delta$. Thus, the right side of (1.1.33) is within $\varepsilon$ of $f(x)$ whenever $h \in (0, \delta]$. Thus the desired limit exists as $h \searrow 0$. A similar argument treats $h \nearrow 0$. □
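Theorem 1.1.6 also lends itself to a numerical sketch (added for illustration, not part of the text; the choice $f = \cos$ and the midpoint-sum approximation of $g$ are assumptions of the example). Approximating $g(x) = \int_0^x \cos t\,dt$ by a fine Riemann sum, the difference quotients of $g$ approach $f(x) = \cos x$:

```python
import math

def g(x, n=20000):
    """Riemann-sum (midpoint) approximation of the integral of cos over [0, x]."""
    h = x / n
    return sum(math.cos((k + 0.5) * h) * h for k in range(n))

x = 1.0
for h in [0.1, 0.01, 0.001]:
    dq = (g(x + h) - g(x)) / h
    # the difference quotient approaches f(x) = cos(x) as h shrinks
    print(h, dq, math.cos(x))
```

The integration error of the inner Riemann sum is far smaller than the difference-quotient error, so the limiting behavior of (1.1.31) is visible in the printed values.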
The next result is the rest of the fundamental theorem of calculus. Theorem 1.1.7. ( 1.1.34
)
IfG is differentiable and G'(x) is continuous on [a, b], then 1 G'(t)dt G(b)  G(a). b
=
Proof. Consider the function

(1.1.35)    $g(x) = \int_a^x G'(t)\,dt$.
1. Background
Figure 1.1.2. Illustration of the mean value theorem

We have $g \in C([a,b])$, $g(a) = 0$, and, by Theorem 1.1.6, $g'(x) = G'(x)$, $\forall\, x \in (a,b)$. Thus $f(x) = g(x) - G(x)$ is continuous on $[a,b]$, and

(1.1.36)    $f'(x) = 0$, $\forall\, x \in (a,b)$.

We claim that (1.1.36) implies $f$ is constant on $[a,b]$. Granted this, since $f(a) = g(a) - G(a) = -G(a)$, we have $f(x) = -G(a)$, hence $g(x) = G(x) - G(a)$, for all $x \in [a,b]$. Taking $x = b$ yields (1.1.34). $\square$

The fact that (1.1.36) implies $f$ is constant on $[a,b]$ is a consequence of the following result, known as the mean value theorem. This is illustrated in Figure 1.1.2.

Theorem 1.1.8. Let $f : [\alpha,\beta] \to \mathbb{R}$ be continuous, and assume $f$ is differentiable on $(\alpha,\beta)$. Then $\exists\, \xi \in (\alpha,\beta)$ such that

(1.1.37)    $f'(\xi) = \dfrac{f(\beta) - f(\alpha)}{\beta - \alpha}$.

Proof. Set $g(x) = f(x) - \kappa(x - \alpha)$, where $\kappa$ is the right side of (1.1.37). Then $g'(x) = f'(x) - \kappa$, so it suffices to show that $g'(\xi) = 0$ for some $\xi \in (\alpha,\beta)$. Note also that $g(\alpha) = g(\beta)$. Since $[\alpha,\beta]$ is compact, $g$ must assume a maximum and a minimum on $[\alpha,\beta]$. Since $g(\alpha) = g(\beta)$, one of these must be assumed at an interior point, at which $g'$ vanishes. $\square$
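The conclusion of Theorem 1.1.8 can be checked in code for a concrete case; the function $f(x) = x^3$ on $[0,1]$ is an illustrative choice for which $\xi$ can be found by hand:

```python
# Mean value theorem (1.1.37) for f(x) = x**3 on [0, 1]: the secant slope
# is (f(1) - f(0))/(1 - 0) = 1, and f'(x) = 3x**2, so xi = 1/sqrt(3)
# lies in (0, 1) and satisfies f'(xi) = 1.
f = lambda x: x**3
fprime = lambda x: 3.0 * x**2

kappa = (f(1.0) - f(0.0)) / (1.0 - 0.0)
xi = 3.0 ** -0.5
assert 0.0 < xi < 1.0
assert abs(fprime(xi) - kappa) < 1e-12
```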
Now, to see that (1.1.36) implies $f$ is constant on $[a,b]$: if not, $\exists\, \beta \in (a,b]$ such that $f(\beta) \ne f(a)$. Then just apply Theorem 1.1.8 to $f$ on $[a,\beta]$. This completes the proof of Theorem 1.1.7. $\square$

We now extend Theorems 1.1.6 and 1.1.7 to the setting of Riemann integrable functions.
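Formula (1.1.34) can also be verified numerically by approximating the integral with Riemann sums; a minimal sketch, with $G(x) = \sin x$ on $[0,1]$ as an arbitrary test case:

```python
# Check of (1.1.34): a midpoint Riemann sum for G' over [a, b] approaches
# G(b) - G(a).  Here G(x) = sin(x), G'(x) = cos(x), [a, b] = [0, 1].
import math

def midpoint_sum(h, a, b, n):
    dx = (b - a) / n
    return sum(h(a + (k + 0.5) * dx) for k in range(n)) * dx

approx = midpoint_sum(math.cos, 0.0, 1.0, 1000)
exact = math.sin(1.0) - math.sin(0.0)
assert abs(approx - exact) < 1e-6
```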
Proposition 1.1.9. Let $f \in \mathcal{R}([a,b])$, and define $g$ by (1.1.28). If $x \in [a,b]$ and $f$ is continuous at $x$, then $g$ is differentiable at $x$, and $g'(x) = f(x)$.

The proof is identical to that of Theorem 1.1.6.

Proposition 1.1.10. Assume $G$ is differentiable on $[a,b]$ and $G' \in \mathcal{R}([a,b])$. Then (1.1.34) holds.

Proof. We have

$G(b) - G(a) = \sum_{k=0}^{n-1} \Bigl[ G\Bigl(a + (b-a)\frac{k+1}{n}\Bigr) - G\Bigl(a + (b-a)\frac{k}{n}\Bigr) \Bigr] = \sum_{k=0}^{n-1} G'(\xi_{kn}) \frac{b-a}{n}$,

for some $\xi_{kn}$ satisfying

$a + (b-a)\dfrac{k}{n} < \xi_{kn} < a + (b-a)\dfrac{k+1}{n}$,

as a consequence of the mean value theorem. Given $G' \in \mathcal{R}([a,b])$, Darboux's theorem (Theorem 1.1.4) implies that as $n \to \infty$ one gets $G(b) - G(a) = \int_a^b G'(t)\,dt$. $\square$

Note that the beautiful symmetry in Theorems 1.1.6 and 1.1.7 is not preserved in Propositions 1.1.9 and 1.1.10. The hypothesis of Proposition 1.1.10 requires $G$ to be differentiable at each $x \in [a,b]$, but the conclusion of Proposition 1.1.9 does not yield differentiability at all points. For this reason, we regard Propositions 1.1.9 and 1.1.10 as less fundamental than Theorems 1.1.6 and 1.1.7. There are more satisfactory extensions of the fundamental theorem of calculus, involving the Lebesgue integral and a more subtle notion of the derivative of a nonsmooth function. For this, we can point the reader to [47, Chapters 10-11].

So far, we have dealt with integration of real valued functions. If $f : I \to \mathbb{C}$, we set $f = f_1 + i f_2$, with $f_j : I \to \mathbb{R}$, and say $f \in \mathcal{R}(I)$ if and only if $f_1$ and $f_2$ are in $\mathcal{R}(I)$. Then $\int_I f\,dx = \int_I f_1\,dx + i \int_I f_2\,dx$. There are straightforward extensions of Propositions 1.1.5-1.1.10 to complex valued functions. Similar comments apply to functions $f : I \to \mathbb{R}^n$.

If a function $G$ is differentiable on $(a,b)$ and $G'$ is continuous on $(a,b)$, we say $G$ is a $C^1$ function and write $G \in C^1((a,b))$. Inductively, we say $G \in C^k((a,b))$ provided $G' \in C^{k-1}((a,b))$.
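The telescoping argument in the proof of Proposition 1.1.10 is also visible numerically: Riemann sums for $G'$ built from sample points in the subintervals converge to $G(b) - G(a)$. A sketch with the illustrative choice $G(x) = x^4$ on $[0,1]$ and left endpoints as sample points:

```python
# G(x) = x**4 on [0, 1], so G'(x) = 4x**3 and G(1) - G(0) = 1.  The sums
# below use left endpoints as (arbitrary) sample points; the gap from 1
# shrinks as n grows, as Darboux's theorem guarantees.
Gprime = lambda x: 4.0 * x**3

def sample_sum(n):
    return sum(Gprime(k / n) / n for k in range(n))

gaps = [abs(sample_sum(n) - 1.0) for n in (10, 100, 1000)]
assert gaps[0] > gaps[1] > gaps[2]
assert gaps[2] < 0.01
```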
An easy consequence of the definition (1.1.31) of the derivative is that, for any real constants $a$, $b$, and $c$,

$f(x) = ax^2 + bx + c \implies \dfrac{df}{dx} = 2ax + b$.

Now it is a simple enough step to replace $a, b, c$ by $y, z, w$ in these formulas. Having done that, we can regard $y$, $z$, and $w$ as variables, along with $x$:

$F(x,y,z,w) = yx^2 + zx + w$.

We can then hold $y$, $z$, and $w$ fixed (e.g., set $y = a$, $z = b$, $w = c$), and then differentiate with respect to $x$. We get

$\dfrac{\partial F}{\partial x} = 2yx + z$,

the partial derivative of $F$ with respect to $x$. Generally, if $F$ is a function of $n$ variables, $x_1, \dots, x_n$, we set

(1.1.38)    $\dfrac{\partial F}{\partial x_j}(x_1, \dots, x_n) = \lim_{h \to 0} \dfrac{1}{h}\bigl[F(x_1, \dots, x_j + h, \dots, x_n) - F(x_1, \dots, x_j, \dots, x_n)\bigr]$,

where the limit exists. Section 2.1 carries on with a further investigation of the derivative of a function of several variables.

Complementary results on Riemann integrability. Here we provide a condition, more general than Proposition 1.1.2, which guarantees Riemann integrability.

Proposition 1.1.11. Let $f : I \to \mathbb{R}$ be a bounded function, with $I = [a,b]$. Suppose that the set $S$ of points of discontinuity of $f$ has the property

(1.1.39)    $\mathrm{cont}^+(S) = 0$.

Then $f \in \mathcal{R}(I)$.

Proof. Say $|f(x)| \le M$. Take $\varepsilon > 0$. As in (1.1.21), take intervals $J_1, \dots, J_N$ such that $S \subset J_1 \cup \cdots \cup J_N$ and $\sum_{k=1}^N \ell(J_k) < \varepsilon$. In fact, fatten each $J_k$ such that $S$ is contained in the interior of this collection of intervals. Consider a partition $\mathcal{P}_0$ of $I$, whose intervals include $J_1, \dots, J_N$, amongst others, which we label $I_1, \dots, I_K$. Now $f$ is continuous on each interval $I_\nu$, so, subdividing each $I_\nu$ as necessary, hence refining $\mathcal{P}_0$ to a partition $\mathcal{P}_1$, we arrange that $\sup f - \inf f < \varepsilon$ on each such subdivided interval. Denote these subdivided intervals $I_1', \dots, I_L'$. It readily follows that

$0 \le \overline{I}_{\mathcal{P}_1}(f) - \underline{I}_{\mathcal{P}_1}(f) < \sum_{k=1}^N 2M\,\ell(J_k) + \sum_{k=1}^L \varepsilon\,\ell(I_k') < 2\varepsilon M + \varepsilon\,\ell(I)$.

Since $\varepsilon$ can be taken arbitrarily small, this establishes that $f \in \mathcal{R}(I)$. $\square$
Remark. An even better result is that such $f$ is Riemann integrable if and only if

(1.1.40)    $m^*(S) = 0$,

where $m^*(S)$ is defined by (1.1.22). The implication $m^*(S) = 0 \implies f \in \mathcal{R}(I)$ (in the $n$-dimensional setting) is established in Proposition 3.1.31 of this text. For the one dimensional case, see also [49, Proposition 4.2.12]. For the reverse implication $f \in \mathcal{R}(I) \implies m^*(S) = 0$, one can see standard books on measure theory, such as [17] and [47].

We give an example of a function to which Proposition 1.1.11 applies, and then an example for which Proposition 1.1.11 fails to apply, though the function is Riemann integrable.

Example 1. Let $I = [0,1]$. Define $f : I \to \mathbb{R}$ by

$f(0) = 0$, $\quad f(x) = (-1)^j$ for $x \in (2^{-(j+1)}, 2^{-j}]$, $j \ge 0$.

Then $|f| \le 1$ and the set of points of discontinuity of $f$ is $S = \{0\} \cup \{2^{-j} : j \ge 1\}$. It is easy to see that $\mathrm{cont}^+(S) = 0$. Hence $f \in \mathcal{R}(I)$.

See Exercises 19-20 below for a more elaborate example to which Proposition 1.1.11 applies.

Example 2. Again $I = [0,1]$. Define $f : I \to \mathbb{R}$ by

$f(x) = 0$ if $x \notin \mathbb{Q}$, $\quad f(x) = \dfrac{1}{n}$ if $x = \dfrac{m}{n}$, in lowest terms.

Then $|f| \le 1$ and the set of points of discontinuity of $f$ is $S = I \cap \mathbb{Q}$. As we have seen following (1.1.23), $\mathrm{cont}^+(S) = 1$, so Proposition 1.1.11 does not apply. Nevertheless, it is fairly easy to see directly that $\overline{I}(f) = \underline{I}(f) = 0$, so $f \in \mathcal{R}(I)$. In fact, given $\varepsilon > 0$, $f \ge \varepsilon$ only on a finite set, hence

$\overline{I}(f) \le \varepsilon$, $\forall\, \varepsilon > 0$.

As indicated following (1.1.23), (1.1.40) does apply to this function. By contrast, the function in (1.1.16) is discontinuous at each point of $I$.

We mention an alternative characterization of $\overline{I}(f)$ and $\underline{I}(f)$, which can be useful. Given $I = [a,b]$, we say $g : I \to \mathbb{R}$ is piecewise constant on $I$ (and write $g \in \mathrm{PK}(I)$)
provided there exists a partition $\mathcal{P} = \{J_k\}$ of $I$ such that $g$ is constant on the interior of each interval $J_k$. Clearly, $\mathrm{PK}(I) \subset \mathcal{R}(I)$. It is easy to see that if $f : I \to \mathbb{R}$ is bounded,

(1.1.41)    $\overline{I}(f) = \inf\Bigl\{\int_I f_1\,dx : f_1 \in \mathrm{PK}(I),\ f_1 \ge f\Bigr\}$, $\quad \underline{I}(f) = \sup\Bigl\{\int_I f_0\,dx : f_0 \in \mathrm{PK}(I),\ f_0 \le f\Bigr\}$.

Hence, given $f : I \to \mathbb{R}$ bounded,

(1.1.42)    $f \in \mathcal{R}(I) \iff$ for each $\varepsilon > 0$, $\exists\, f_0, f_1 \in \mathrm{PK}(I)$ such that $f_0 \le f \le f_1$ and $\int_I (f_1 - f_0)\,dx < \varepsilon$.

This can be used to prove

(1.1.43)    $f, g \in \mathcal{R}(I) \implies fg \in \mathcal{R}(I)$,

via the fact that

(1.1.44)    $fg = \dfrac{1}{4}\bigl[(f+g)^2 - (f-g)^2\bigr]$.

In fact, we have the following, which can be used to prove (1.1.43).

Proposition 1.1.12. Let $f \in \mathcal{R}(I)$, and assume $|f| \le M$. Let $\varphi : [-M,M] \to \mathbb{R}$ be continuous. Then $\varphi \circ f \in \mathcal{R}(I)$.

Proof. We proceed in steps.

Step 1. We can obtain $\varphi$ as a uniform limit on $[-M,M]$ of a sequence $\varphi_\nu$ of continuous, piecewise linear functions. Then $\varphi_\nu \circ f \to \varphi \circ f$ uniformly on $I$. A uniform limit $g$ of functions $g_\nu \in \mathcal{R}(I)$ is in $\mathcal{R}(I)$ (see Exercise 12). So it suffices to prove Proposition 1.1.12 when $\varphi$ is continuous and piecewise linear.

Step 2. Given $\varphi : [-M,M] \to \mathbb{R}$ continuous and piecewise linear, it is an exercise to write $\varphi = \varphi_1 - \varphi_2$, with $\varphi_j : [-M,M] \to \mathbb{R}$ monotone, continuous, and piecewise linear. Now $\varphi_1 \circ f, \varphi_2 \circ f \in \mathcal{R}(I) \implies \varphi \circ f \in \mathcal{R}(I)$.

Step 3. We now demonstrate Proposition 1.1.12 when $\varphi : [-M,M] \to \mathbb{R}$ is monotone and Lipschitz. By Step 2, this will suffice. So we assume

$-M \le x_1 < x_2 \le M \implies \varphi(x_1) \le \varphi(x_2)$ and $\varphi(x_2) - \varphi(x_1) \le L(x_2 - x_1)$,

for some $L < \infty$. Given $\varepsilon > 0$, pick $f_0, f_1 \in \mathrm{PK}(I)$, as in (1.1.42). Then $\varphi \circ f_0, \varphi \circ f_1 \in \mathrm{PK}(I)$, $\varphi \circ f_0 \le \varphi \circ f \le \varphi \circ f_1$, and

$\int_I (\varphi \circ f_1 - \varphi \circ f_0)\,dx \le L \int_I (f_1 - f_0)\,dx \le L\varepsilon$.

This proves $\varphi \circ f \in \mathcal{R}(I)$. $\square$
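The identity (1.1.44) behind this circle of results is a one-line algebraic fact; a quick spot check in code (sample values drawn at random, purely for illustration):

```python
# Spot check of (1.1.44): f*g = ((f+g)**2 - (f-g)**2)/4 at random values.
# Products are thereby reduced to squares, i.e., to phi(t) = t**2 in
# Proposition 1.1.12.
import random

random.seed(0)
for _ in range(100):
    f, g = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    assert abs(f * g - ((f + g) ** 2 - (f - g) ** 2) / 4.0) < 1e-12
```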
Exercises
1. Let $c > 0$, and let $f : [ac, bc] \to \mathbb{R}$ be Riemann integrable. Working directly with the definition of the integral, show that

(1.1.45)    $\int_a^b f(cx)\,dx = \dfrac{1}{c}\int_{ac}^{bc} f(x)\,dx$.

More generally, show that

(1.1.46)    $\int_{(a-d)/c}^{(b-d)/c} f(cx + d)\,dx = \dfrac{1}{c}\int_a^b f(x)\,dx$.
2. Let $f : I \times S \to \mathbb{R}$ be continuous, where $I = [a,b]$ and $S \subset \mathbb{R}^n$. Take $\varphi(y) = \int_I f(x,y)\,dx$. Show that $\varphi$ is continuous on $S$.
Hint. If $f_j : I \to \mathbb{R}$ are continuous and $|f_1(x) - f_2(x)| \le \delta$ on $I$, then

(1.1.47)    $\Bigl|\int_I f_1\,dx - \int_I f_2\,dx\Bigr| \le \ell(I)\,\delta$.

Hint. Suppose $y_j \in S$, $y_j \to y \in S$. Let $\tilde S = \{y_j\} \cup \{y\}$. This is compact. Thus $f : I \times \tilde S \to \mathbb{R}$ is uniformly continuous. Hence $|f(x, y_j) - f(x, y)| \le \omega(|y_j - y|)$, $\forall\, x \in I$, where $\omega(\delta) \to 0$ as $\delta \to 0$.

3. With $f$ as in Exercise 2, suppose $g_j : S \to \mathbb{R}$ are continuous and $a \le g_0(y) < g_1(y) \le b$. Take $\varphi(y) = \int_{g_0(y)}^{g_1(y)} f(x,y)\,dx$. Show that $\varphi$ is continuous on $S$.
Hint. Make a change of variables, linear in $x$, to reduce this to Exercise 2.

4. Suppose $f : (a,b) \to (c,d)$ and $g : (c,d) \to \mathbb{R}$ are differentiable. Show that $h(x) = g(f(x))$ is differentiable and $h'(x) = g'(f(x))\,f'(x)$. This is the chain rule.
Hint. Peek at the proof of the chain rule in §2.1.

5. If $f_1$ and $f_2$ are differentiable on $(a,b)$, show that $f_1(x)f_2(x)$ is differentiable and

$\dfrac{d}{dx}\bigl(f_1(x)f_2(x)\bigr) = f_1'(x)f_2(x) + f_1(x)f_2'(x)$.

If $f_2(x) \ne 0$, $\forall\, x \in (a,b)$, show that $f_1(x)/f_2(x)$ is differentiable and

$\dfrac{d}{dx}\Bigl(\dfrac{f_1(x)}{f_2(x)}\Bigr) = \dfrac{f_1'(x)f_2(x) - f_1(x)f_2'(x)}{f_2(x)^2}$.
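The product rule of Exercise 5 can be checked with a difference quotient; the functions and the evaluation point below are arbitrary illustrative choices:

```python
# Numeric check of the product rule for f1(x) = x**2, f2(x) = sin(x) at
# x = 1: a centered difference quotient of f1*f2 is compared with
# f1'(x)*f2(x) + f1(x)*f2'(x).
import math

x, h = 1.0, 1e-6
prod = lambda t: t**2 * math.sin(t)
quotient = (prod(x + h) - prod(x - h)) / (2.0 * h)
exact = 2.0 * x * math.sin(x) + x**2 * math.cos(x)
assert abs(quotient - exact) < 1e-8
```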
6. Let $\varphi : [a,b] \to [A,B]$ be $C^1$ on a neighborhood $J$ of $[a,b]$, with $\varphi'(x) > 0$ for all $x \in [a,b]$. Assume $\varphi(a) = A$, $\varphi(b) = B$. Show that the identity

(1.1.48)    $\int_A^B f(y)\,dy = \int_a^b f(\varphi(t))\,\varphi'(t)\,dt$,

for any $f \in C(I)$, $I = [A,B]$, follows from the chain rule and the fundamental theorem of calculus. This is the change of variable formula for the one-dimensional integral.
Hint. Replace $b$ by $x$, $B$ by $\varphi(x)$, and differentiate.

7. Show that (1.1.48) holds for each $f \in \mathrm{PK}(I)$. Using (1.1.41)-(1.1.42), show that $f \in \mathcal{R}(I) \implies f \circ \varphi \in \mathcal{R}([a,b])$ and (1.1.48) holds. (This result contains that of Exercise 1.)
8. Show that if $f$ and $g$ are $C^1$ on a neighborhood of $[a,b]$, then

(1.1.49)    $\int_a^b f(s)\,g'(s)\,ds = -\int_a^b f'(s)\,g(s)\,ds + \bigl[f(b)g(b) - f(a)g(a)\bigr]$.

This transformation of integrals is called integration by parts.

9. Let $f : (-a,a) \to \mathbb{R}$ be a $C^{j+1}$ function. Show that, for $x \in (-a,a)$,

(1.1.50)    $f(x) = f(0) + f'(0)x + \dfrac{f''(0)}{2!}x^2 + \cdots + \dfrac{f^{(j)}(0)}{j!}x^j + R_j(x)$,

where

(1.1.51)    $R_j(x) = \dfrac{1}{j!}\int_0^x (x-s)^j f^{(j+1)}(s)\,ds$.

This is Taylor's formula with remainder. See §2.1 for the multidimensional extension.
Hint. Use induction. If (1.1.50)-(1.1.51) hold for $0 \le j \le k$, show that they hold for $j = k+1$, by showing that

(1.1.52)    $\dfrac{1}{k!}\int_0^x (x-s)^k f^{(k+1)}(s)\,ds = \dfrac{f^{(k+1)}(0)}{(k+1)!}x^{k+1} + \dfrac{1}{(k+1)!}\int_0^x (x-s)^{k+1} f^{(k+2)}(s)\,ds$.

To establish this, use the integration by parts formula (1.1.49), with $f(s)$ replaced by $f^{(k+1)}(s)$ and with appropriate $g(s)$. See Appendix §A.4 for further material on the remainder formula. Note that another presentation of (1.1.51) is

(1.1.53)    $R_j(x) = \dfrac{x^{j+1}}{(j+1)!}\int_0^1 f^{(j+1)}\bigl((1 - t^{1/(j+1)})x\bigr)\,dt$.

10. Assume $f : (-a,a) \to \mathbb{R}$ is a $C^j$ function. Show that, for $x \in (-a,a)$, (1.1.50) holds, with

(1.1.54)    $R_j(x) = \dfrac{1}{(j-1)!}\int_0^x (x-s)^{j-1}\bigl[f^{(j)}(s) - f^{(j)}(0)\bigr]\,ds$.

Hint. Apply (1.1.51) with $j$ replaced by $j-1$. Add and subtract $f^{(j)}(0)$ to the factor $f^{(j)}(s)$ in the resulting integrand.
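Exercises 9-10 lend themselves to numerical experiment. The sketch below checks (1.1.50)-(1.1.51) for the illustrative choice $f(x) = e^x$, $j = 3$, approximating the remainder integral by a midpoint rule:

```python
# Taylor's formula with remainder for f(x) = exp(x), j = 3, x = 0.5:
# R_3(x) = (1/3!) * integral_0^x (x - s)**3 f^(4)(s) ds should equal
# f(x) minus the degree-3 Taylor polynomial at 0.
import math

x, j, n = 0.5, 3, 2000
taylor_poly = sum(x**k / math.factorial(k) for k in range(j + 1))
ds = x / n
remainder = sum(
    (x - (i + 0.5) * ds) ** j * math.exp((i + 0.5) * ds) for i in range(n)
) * ds / math.factorial(j)
assert abs((math.exp(x) - taylor_poly) - remainder) < 1e-7
```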
11. Given $I = [a,b]$, show that

(1.1.55)    $f, g \in \mathcal{R}(I) \implies fg \in \mathcal{R}(I)$,

as advertised in (1.1.43).

12. Assume $f_k \in \mathcal{R}(I)$ and $f_k \to f$ uniformly on $I$. Prove that $f \in \mathcal{R}(I)$ and

(1.1.56)    $\int_I f_k\,dx \to \int_I f\,dx$.

13. Given $I = [a,b]$, $I_\varepsilon = [a+\varepsilon, b-\varepsilon]$, assume $f_k \in \mathcal{R}(I)$, $|f_k| \le M$ on $I$ for all $k$, and

(1.1.57)    $f_k \to f$ uniformly on $I_\varepsilon$,

for all $\varepsilon \in (0, (b-a)/2)$. Prove that $f \in \mathcal{R}(I)$ and that (1.1.56) holds.

14. Use the fundamental theorem of calculus to compute

(1.1.58)    $\int_a^b x^r\,dx$, $\quad r \in \mathbb{Q} \setminus \{-1\}$,

where $0 \le a < b < \infty$ if $r \ge 0$ and $0 < a < b < \infty$ if $r < 0$.

15. Use the change of variable result of Exercise 6 to compute

$\int_0^1 x\sqrt{1 + x^2}\,dx$.
16. We say $f \in \mathcal{R}(\mathbb{R})$ provided $f|_{[k,k+1]} \in \mathcal{R}([k,k+1])$ for each $k \in \mathbb{Z}$, and

(1.1.59)    $\sum_{k=-\infty}^{\infty} \int_k^{k+1} |f(x)|\,dx < \infty$.

If $f \in \mathcal{R}(\mathbb{R})$, we set

(1.1.60)    $\int_{-\infty}^{\infty} f(x)\,dx = \lim_{k \to \infty} \int_{-k}^{k} f(x)\,dx$.

Formulate and demonstrate basic properties of the integral over $\mathbb{R}$ of elements of $\mathcal{R}(\mathbb{R})$.
17. This exercise discusses the integral test for absolute convergence of an infinite series, which goes as follows. Let $f$ be a positive, monotonically decreasing, continuous function on $[0,\infty)$, and suppose $|a_k| = f(k)$. Then

$\sum_k |a_k| < \infty \iff \int_0^\infty f(x)\,dx < \infty$.

Prove this.
Hint. Use

$\sum_{k=1}^N |a_k| \le \int_0^N f(x)\,dx \le \sum_{k=0}^{N-1} |a_k|$.
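The two bounds in the hint are easy to observe in code; the decreasing function $f(x) = (1+x)^{-2}$ below is an arbitrary example whose integral is computable in closed form:

```python
# Integral-test bounds for the decreasing function f(x) = 1/(1+x)**2:
# integral_1^{N+1} f  <=  sum_{k=1}^{N} f(k)  <=  f(1) + integral_1^{N} f.
N = 1000
f = lambda x: 1.0 / (1.0 + x) ** 2
partial_sum = sum(f(k) for k in range(1, N + 1))
F = lambda a, b: 1.0 / (1.0 + a) - 1.0 / (1.0 + b)  # exact integral over [a, b]
assert F(1, N + 1) <= partial_sum <= f(1) + F(1, N)
```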
18. Use the integral test to show that, if $p > 0$,

(1.1.61)    $\sum_{k=1}^{\infty} \dfrac{1}{k^p} < \infty \iff p > 1$.

Hint. Use Exercise 14 to evaluate $I_N(p) = \int_1^N x^{-p}\,dx$, for $p \ne 1$, and let $N \to \infty$. See if you can show $\int_1^\infty x^{-1}\,dx = \infty$ without knowing about $\log N$.
Subhint. Show that $\int_1^2 x^{-1}\,dx = \int_N^{2N} x^{-1}\,dx$.

In Exercises 19-20, $C \subset \mathbb{R}$ is the Cantor set, defined as follows. Take a closed, bounded interval $[a,b] = C_0$. Let $C_1$ be obtained from $C_0$ by deleting the open middle third interval, of length $(b-a)/3$. At the $j$th stage, $C_j$ is a disjoint union of $2^j$ closed intervals, each of length $3^{-j}(b-a)$. Then $C_{j+1}$ is obtained from $C_j$ by deleting the open middle third of each of these $2^j$ intervals. We have $C_0 \supset C_1 \supset \cdots \supset C_j \supset \cdots$, each a closed subset of $[a,b]$. The Cantor set is $C = \bigcap_{j \ge 0} C_j$.

19. Show that $\mathrm{cont}^+(C_j) = (2/3)^j (b-a)$, and conclude that $\mathrm{cont}^+(C) = 0$.
20. Define $f : [a,b] \to \mathbb{R}$ as follows. We call an interval of length $3^{-j}(b-a)$, omitted in passing from $C_{j-1}$ to $C_j$, a $j$-interval. Set

$f(x) = 0$ if $x \in C$, $\quad f(x) = (-1)^j$ if $x$ belongs to a $j$-interval.

Show that the set of discontinuities of $f$ is $C$. Hence Proposition 1.1.11 implies $f \in \mathcal{R}([a,b])$.
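The length computation in Exercise 19 can be confirmed by building the stage-$j$ intervals directly; the sketch below takes $[a,b] = [0,1]$ as an illustrative choice:

```python
# C_j consists of 2**j closed intervals; their total length is (2/3)**j
# when b - a = 1, which tends to 0, giving cont+(C) = 0.
def stage_intervals(j):
    """Endpoints of the 2**j closed intervals making up C_j inside [0, 1]."""
    intervals = [(0.0, 1.0)]
    for _ in range(j):
        intervals = [
            piece
            for (a, b) in intervals
            for piece in ((a, a + (b - a) / 3.0), (b - (b - a) / 3.0, b))
        ]
    return intervals

for j in range(8):
    assert len(stage_intervals(j)) == 2**j
    total = sum(b - a for a, b in stage_intervals(j))
    assert abs(total - (2.0 / 3.0) ** j) < 1e-12
```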
21. Generalize Exercise 8 as follows. Assume $f$ and $g$ are differentiable on a neighborhood of $[a,b]$ and $f', g' \in \mathcal{R}([a,b])$. Then show that (1.1.49) holds.
Hint. Use the results of Exercise 11 to show that $(fg)' = f'g + fg' \in \mathcal{R}([a,b])$.

22. Let $f : I \to \mathbb{R}$ be bounded, $I = [a,b]$. Show that

$\overline{I}(f) = \inf\Bigl\{\int_I f_1\,dx : f_1 \in C(I),\ f_1 \ge f\Bigr\}$, $\quad \underline{I}(f) = \sup\Bigl\{\int_I f_0\,dx : f_0 \in C(I),\ f_0 \le f\Bigr\}$.
Compare (1.1.41). Then show that

(1.1.62)    $f \in \mathcal{R}(I) \iff$ for each $\varepsilon > 0$, $\exists\, f_0, f_1 \in C(I)$ such that $f_0 \le f \le f_1$ and $\int_I (f_1 - f_0)\,dx < \varepsilon$.

Compare (1.1.42).
1.2. Euclidean spaces
The space $\mathbb{R}^n$, $n$-dimensional Euclidean space, consists of $n$-tuples of real numbers,

(1.2.1)    $x = (x_1, \dots, x_n)$.

The number $x_j$ is called the $j$th component of $x$. Here we discuss some important algebraic and metric structures on $\mathbb{R}^n$. First, there is addition. If $x$ is as in (1.2.1) and also $y = (y_1, \dots, y_n) \in \mathbb{R}^n$, we have

(1.2.2)    $x + y = (x_1 + y_1, \dots, x_n + y_n) \in \mathbb{R}^n$.

Addition is done componentwise. Also, given $a \in \mathbb{R}$, we have

(1.2.3)    $ax = (ax_1, \dots, ax_n) \in \mathbb{R}^n$.

This is scalar multiplication. In (1.2.1), we represent $x$ as a row vector. Sometimes we want to represent $x$ by a column vector,

(1.2.4)    $x = (x_1, \dots, x_n)^t$.

Then (1.2.2)-(1.2.3) are converted to

(1.2.5)    $x + y = (x_1 + y_1, \dots, x_n + y_n)^t$, $\quad ax = (ax_1, \dots, ax_n)^t$.

We also have the dot product,

(1.2.6)    $x \cdot y = \sum_{j=1}^n x_j y_j = x_1 y_1 + \cdots + x_n y_n \in \mathbb{R}$,

given $x, y \in \mathbb{R}^n$. The dot product has the properties

(1.2.7)    $x \cdot y = y \cdot x$, $\quad x \cdot (ay + bz) = a(x \cdot y) + b(x \cdot z)$, $\quad x \cdot x > 0$ unless $x = 0$.

Note that

(1.2.8)    $x \cdot x = x_1^2 + \cdots + x_n^2$.
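The properties (1.2.7)-(1.2.8) are mechanical to verify; here is a spot check in code, with arbitrary sample vectors:

```python
# The dot product (1.2.6) and spot checks of (1.2.7)-(1.2.8).
def dot(x, y):
    return sum(xj * yj for xj, yj in zip(x, y))

x = (1.0, 2.0, -1.0)
y = (0.5, -3.0, 2.0)
z = (2.0, 0.0, 1.0)
a, b = 2.0, -3.0

assert dot(x, y) == dot(y, x)                        # symmetry
ay_bz = tuple(a * yj + b * zj for yj, zj in zip(y, z))
assert abs(dot(x, ay_bz) - (a * dot(x, y) + b * dot(x, z))) < 1e-12  # linearity
assert dot(x, x) == 1.0**2 + 2.0**2 + (-1.0)**2      # (1.2.8)
assert dot(x, x) > 0                                 # positivity
```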
We set

(1.2.9)    $|x| = \sqrt{x \cdot x}$,

which we call the norm of $x$. Note that (1.2.7) implies

(1.2.10)    $(ax) \cdot (ax) = a^2 (x \cdot x)$,

hence

(1.2.11)    $|ax| = |a| \cdot |x|$, for $a \in \mathbb{R}$, $x \in \mathbb{R}^n$.

Taking a cue from the Pythagorean theorem, we say that the distance from $x$ to $y$ in $\mathbb{R}^n$ is

(1.2.12)    $d(x,y) = |x - y|$.

For us, (1.2.9) and (1.2.12) are simply definitions. We do not need to depend on a derivation of the Pythagorean theorem via classical Euclidean geometry. Significant properties will be derived below, without recourse to a prior theory of Euclidean geometry.

A set $X$ equipped with a distance function is called a metric space. We consider metric spaces in general in Appendix A.1. Here, we want to show that the Euclidean distance, defined by (1.2.12), satisfies the triangle inequality,

(1.2.13)    $d(x,y) \le d(x,z) + d(z,y)$, $\forall\, x, y, z \in \mathbb{R}^n$.
This in turn is a consequence of the following, also called the triangle inequality.

Proposition 1.2.1. The norm (1.2.9) on $\mathbb{R}^n$ has the property

(1.2.14)    $|x + y| \le |x| + |y|$, $\forall\, x, y \in \mathbb{R}^n$.

Proof. We compare the squares of the two sides of (1.2.14). First,

(1.2.15)    $|x+y|^2 = (x+y) \cdot (x+y) = x \cdot x + x \cdot y + y \cdot x + y \cdot y = |x|^2 + 2x \cdot y + |y|^2$.

Next,

(1.2.16)    $(|x| + |y|)^2 = |x|^2 + 2|x| \cdot |y| + |y|^2$.

We see that (1.2.14) holds if and only if $x \cdot y \le |x| \cdot |y|$. Thus the proof of Proposition 1.2.1 is finished off by the following result, known as Cauchy's inequality.

Proposition 1.2.2. For all $x, y \in \mathbb{R}^n$,

(1.2.17)    $|x \cdot y| \le |x| \cdot |y|$.

Proof. We start with the chain

(1.2.18)    $0 \le |x - y|^2 = (x - y) \cdot (x - y) = |x|^2 + |y|^2 - 2x \cdot y$,

which implies

(1.2.19)    $2x \cdot y \le |x|^2 + |y|^2$, $\forall\, x, y \in \mathbb{R}^n$.
If we replace $x$ by $tx$ and $y$ by $t^{-1}y$, with $t > 0$, the left side of (1.2.19) is unchanged, so

(1.2.20)    $2x \cdot y \le t^2|x|^2 + t^{-2}|y|^2$, $\forall\, t > 0$.

Now we pick $t$ so that the two terms on the right side of (1.2.20) are equal, namely

(1.2.21)    $t^2 = \dfrac{|y|}{|x|}$, $\quad t^{-2} = \dfrac{|x|}{|y|}$.

(At this point, note that (1.2.17) is obvious if $x = 0$ or $y = 0$, so we will assume that $x \ne 0$ and $y \ne 0$.) Plugging (1.2.21) into (1.2.20) gives

(1.2.22)    $x \cdot y \le |x| \cdot |y|$, $\forall\, x, y \in \mathbb{R}^n$.

This is almost (1.2.17). To finish, we can replace $x$ in (1.2.22) by $-x = (-1)x$, getting

(1.2.23)    $-(x \cdot y) \le |x| \cdot |y|$,

and together (1.2.22) and (1.2.23) give (1.2.17). $\square$

We now discuss a number of notions and results related to convergence in $\mathbb{R}^n$. First, a sequence of points $(p_j)$ in $\mathbb{R}^n$ converges to a limit $p \in \mathbb{R}^n$ (we write $p_j \to p$) if and only if

(1.2.24)    $|p_j - p| \to 0$,

where $|\cdot|$ is the Euclidean norm on $\mathbb{R}^n$, defined by (1.2.9), and the meaning of (1.2.24) is that for every $\varepsilon > 0$ there exists $N$ such that

(1.2.25)    $j \ge N \implies |p_j - p| < \varepsilon$.

If we write $p_j = (p_{1j}, \dots, p_{nj})$ and $p = (p_1, \dots, p_n)$, then (1.2.24) is equivalent to

$(p_{1j} - p_1)^2 + \cdots + (p_{nj} - p_n)^2 \to 0$, as $j \to \infty$,

which holds if and only if $|p_{\ell j} - p_\ell| \to 0$ as $j \to \infty$ for each $\ell \in \{1, \dots, n\}$. That is to say, convergence $p_j \to p$ in $\mathbb{R}^n$ is equivalent to convergence of each component.

A set $S \subset \mathbb{R}^n$ is said to be closed if and only if

(1.2.26)    $p_j \in S$, $p_j \to p \implies p \in S$.

The complement $\mathbb{R}^n \setminus S$ of a closed set $S$ is open. Alternatively, $\Omega \subset \mathbb{R}^n$ is open if and only if, given $q \in \Omega$, there exists $\varepsilon > 0$ such that $B_\varepsilon(q) \subset \Omega$, where

(1.2.27)    $B_\varepsilon(q) = \{p \in \mathbb{R}^n : |p - q| < \varepsilon\}$,

so $q$ cannot be a limit of a sequence of points in $\mathbb{R}^n \setminus \Omega$.

An important property of $\mathbb{R}^n$ is completeness, a property defined as follows. A sequence $(p_j)$ of points in $\mathbb{R}^n$ is called a Cauchy sequence if and only if

(1.2.28)    $|p_j - p_k| \to 0$, as $j, k \to \infty$.

Again we see that $(p_j)$ is Cauchy in $\mathbb{R}^n$ if and only if each component is Cauchy in $\mathbb{R}$. It is easy to see that if $p_j \to p$ for some $p \in \mathbb{R}^n$, then (1.2.28) holds. The completeness property is the converse.
Theorem 1.2.3. If $(p_j)$ is a Cauchy sequence in $\mathbb{R}^n$, then it has a limit, i.e., (1.2.24) holds for some $p \in \mathbb{R}^n$.

Proof. Since convergence $p_j \to p$ in $\mathbb{R}^n$ is equivalent to convergence in $\mathbb{R}$ of each component, the result is a consequence of the completeness of $\mathbb{R}$. This is proved in Appendix A.1. $\square$
Completeness provides a path to the following key notion of compactness. A nonempty set $K \subset \mathbb{R}^n$ is said to be compact if and only if the following property holds.

(1.2.29)    Each infinite sequence $(p_j)$ in $K$ has a subsequence that converges to a point in $K$.

It is clear that if $K$ is compact, then it must be closed. It must also be bounded, i.e., there exists $R < \infty$ such that $K \subset B_R(0)$. Indeed, if $K$ is not bounded, there exist $p_j \in K$ such that $|p_{j+1}| \ge |p_j| + 1$. In such a case, $|p_j - p_k| \ge 1$ whenever $j \ne k$, so $(p_j)$ cannot have a convergent subsequence. The following converse result is the $n$-dimensional Bolzano-Weierstrass theorem.

Theorem 1.2.4. If a nonempty $K \subset \mathbb{R}^n$ is closed and bounded, then it is compact.

Proof. If $K \subset \mathbb{R}^n$ is closed and bounded, it is a closed subset of some box

(1.2.30)    $B = \{(x_1, \dots, x_n) \in \mathbb{R}^n : a \le x_k \le b,\ \forall\, k\}$.

Clearly, every closed subset of a compact set is compact, so it suffices to show that $B$ is compact. Now, each closed bounded interval $[a,b]$ in $\mathbb{R}$ is compact, as shown in Appendix A.1, and (by reasoning similar to the proof of Theorem 1.2.3) the compactness of $B$ follows readily from this. $\square$
We establish some further properties of compact sets $K \subset \mathbb{R}^n$, leading to the important result, Proposition 1.2.8 below. A generalization of this result is given in Appendix A.1.

Proposition 1.2.5. Let $K \subset \mathbb{R}^n$ be compact. Assume $X_1 \supset X_2 \supset X_3 \supset \cdots$ form a decreasing sequence of closed subsets of $K$. If each $X_m \ne \emptyset$, then $\bigcap_m X_m \ne \emptyset$.

Proof. Pick $x_m \in X_m$. If $K$ is compact, $(x_m)$ has a convergent subsequence, $x_{m_k} \to y$. Since $\{x_{m_k} : k \ge \ell\} \subset X_{m_\ell}$, which is closed, we have $y \in \bigcap_m X_m$. $\square$

Corollary 1.2.6. Let $K \subset \mathbb{R}^n$ be compact. Assume $U_1 \subset U_2 \subset U_3 \subset \cdots$ form an increasing sequence of open sets in $\mathbb{R}^n$. If $\bigcup_m U_m \supset K$, then $U_M \supset K$ for some $M$.

Proof. Consider $X_m = K \setminus U_m$. $\square$

Before getting to Proposition 1.2.8, we bring in the following. Let $\mathbb{Q}$ denote the set of rational numbers, and let $\mathbb{Q}^n$ denote the set of points in $\mathbb{R}^n$ all of whose components are rational. The set $\mathbb{Q}^n \subset \mathbb{R}^n$ has the following denseness property: given $p \in \mathbb{R}^n$ and $\varepsilon > 0$, there exists $q \in \mathbb{Q}^n$ such that $|p - q| < \varepsilon$. Let

(1.2.31)    $\mathcal{R} = \{B_r(q) : q \in \mathbb{Q}^n,\ r \in \mathbb{Q} \cap (0,\infty)\}$.
Note that $\mathbb{Q}$ and $\mathbb{Q}^n$ are countable, i.e., they can be put in one-to-one correspondence with $\mathbb{N}$. Hence $\mathcal{R}$ is a countable collection of balls. The following lemma is left as an exercise for the reader.

Lemma 1.2.7. Let $\Omega \subset \mathbb{R}^n$ be a nonempty open set. Then

(1.2.32)    $\Omega = \bigcup \{B : B \in \mathcal{R},\ B \subset \Omega\}$.

To state the next result, we say that a collection $\{U_\alpha : \alpha \in \mathcal{A}\}$ covers $K$ if $K \subset \bigcup_{\alpha \in \mathcal{A}} U_\alpha$. If each $U_\alpha \subset \mathbb{R}^n$ is open, it is called an open cover of $K$. If $\mathcal{B} \subset \mathcal{A}$ and $K \subset \bigcup_{\beta \in \mathcal{B}} U_\beta$, we say $\{U_\beta : \beta \in \mathcal{B}\}$ is a subcover. The following is part of the $n$-dimensional Heine-Borel theorem. (See Theorem A.1.10.)

Proposition 1.2.8. If $K \subset \mathbb{R}^n$ is compact, then it has the following property.

(1.2.33)    Every open cover $\{U_\alpha : \alpha \in \mathcal{A}\}$ of $K$ has a finite subcover.

Proof. By Lemma 1.2.7, it suffices to prove the following.

(1.2.34)    Every countable cover $\{B_j : j \in \mathbb{N}\}$ of $K$ by open balls has a finite subcover.

To see this, write $\mathcal{R} = \{B_j : j \in \mathbb{N}\}$. Given the cover $\{U_\alpha\}$, pass to $\{B_j : j \in J\}$, where $j \in J$ if and only if $B_j$ is contained in some $U_\alpha$. By (1.2.32), $\{B_j : j \in J\}$ covers $K$. If (1.2.34) holds, we have a subcover $\{B_\ell : \ell \in L\}$ for some finite $L \subset J$. Pick $\alpha_\ell \in \mathcal{A}$ such that $B_\ell \subset U_{\alpha_\ell}$. Then $\{U_{\alpha_\ell} : \ell \in L\}$ is the desired finite subcover advertised in (1.2.33). Finally, to prove (1.2.34), we set

(1.2.35)    $U_m = B_1 \cup \cdots \cup B_m$

and apply Corollary 1.2.6. $\square$
Exercises
1. Identifying $z = x + iy \in \mathbb{C}$ with $(x,y) \in \mathbb{R}^2$ and $w = u + iv \in \mathbb{C}$ with $(u,v) \in \mathbb{R}^2$, show that the dot product satisfies

$z \cdot w = \mathrm{Re}\, z\bar{w}$.

2. Show that the inequality (1.2.14) implies (1.2.13).

3. Prove Lemma 1.2.7.

4. Use Proposition 1.2.8 to prove the following extension of Proposition 1.2.5.

Proposition 1.2.9. Let $K \subset \mathbb{R}^n$ be compact. Assume $\{X_\alpha : \alpha \in \mathcal{A}\}$ is a collection of closed subsets of $K$. Assume that for each finite set $\mathcal{B} \subset \mathcal{A}$, $\bigcap_{\alpha \in \mathcal{B}} X_\alpha \ne \emptyset$. Then

$\bigcap_{\alpha \in \mathcal{A}} X_\alpha \ne \emptyset$.

Hint. Consider $U_\alpha = \mathbb{R}^n \setminus X_\alpha$.
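Exercise 1 can be spot-checked with Python's built-in complex type; the sample values are arbitrary:

```python
# Identifying z = x + iy with (x, y), the R^2 dot product of z and w
# equals Re(z * conjugate(w)).
z = complex(1.0, 2.0)
w = complex(-0.5, 3.0)
dot = z.real * w.real + z.imag * w.imag
assert abs(dot - (z * w.conjugate()).real) < 1e-12
```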
5. Let $K \subset \mathbb{R}^n$ be compact. Show that there exist $x_0, x_1 \in K$ such that

$|x_0| \le |x|$, $\forall\, x \in K$, $\quad |x_1| \ge |x|$, $\forall\, x \in K$.

We say

$|x_0| = \min_{x \in K} |x|$, $\quad |x_1| = \max_{x \in K} |x|$.
1.3. Vector spaces and linear transformations
We have seen in §1.2 how $\mathbb{R}^n$ is a vector space, with vector operations given by (1.2.2)-(1.2.3) for row vectors, and by (1.2.4)-(1.2.5) for column vectors. We could also use complex numbers, replacing $\mathbb{R}^n$ by $\mathbb{C}^n$, and allowing $a \in \mathbb{C}$ in (1.2.3) and (1.2.5). We will use $\mathbb{F}$ to denote $\mathbb{R}$ or $\mathbb{C}$.

Many other vector spaces arise naturally. We define this general notion now. A vector space over $\mathbb{F}$ is a set $V$, endowed with two operations, that of vector addition and multiplication by scalars. That is, given $v, w \in V$ and $a \in \mathbb{F}$, then $v + w$ and $av$ are defined in $V$. Furthermore, the following properties are to hold, for all $u, v, w \in V$, $a, b \in \mathbb{F}$. First there are laws for vector addition:

(1.3.1)
Commutative law: $u + v = v + u$,
Associative law: $(u + v) + w = u + (v + w)$,
Zero vector: $\exists\, 0 \in V$, $v + 0 = v$,
Negative: $\exists\, {-v}$, $v + (-v) = 0$.

Next there are laws for multiplication by scalars:

(1.3.2)
Associative law: $a(bv) = (ab)v$,
Unit: $1 \cdot v = v$.

Finally there are two distributive laws:

(1.3.3)
$a(u + v) = au + av$,
$(a + b)u = au + bu$.

It is easy to see that $\mathbb{R}^n$ and $\mathbb{C}^n$ satisfy all these rules. We will present a number of other examples below. Let us also note that a number of other simple identities are automatic consequences of the rules given above. Here are some, which the reader is invited to verify:

(1.3.4)
$v + w = v \implies w = 0$,
$v + w = 0 \implies w = -v$,
$0 \cdot v = 0$,
$a \cdot 0 = 0$,
$(-1) \cdot v = -v$.
Here are some other examples of vector spaces. Let $I = [a,b]$ denote an interval in $\mathbb{R}$, and take a nonnegative integer $k$. Then $C^k(I)$ denotes the set of functions $f : I \to \mathbb{F}$ whose derivatives up to order $k$ are continuous. We denote by $\mathcal{P}$ the set of polynomials in $x$, with coefficients in $\mathbb{F}$. We denote by $\mathcal{P}_k$ the set of polynomials in $x$ of degree $\le k$. In these various cases,

(1.3.5)    $(f + g)(x) = f(x) + g(x)$, $\quad (af)(x) = a f(x)$.

Such vector spaces and certain of their linear subspaces play a major role in the material developed in these notes. Regarding the notion just mentioned, we say a subset $W$ of a vector space $V$ is a linear subspace provided

(1.3.6)    $w_j \in W$, $a_j \in \mathbb{F} \implies a_1 w_1 + a_2 w_2 \in W$.

Then $W$ inherits the structure of a vector space.

Linear transformations and matrices. If $V$ and $W$ are vector spaces over $\mathbb{F}$ ($\mathbb{R}$ or $\mathbb{C}$), a map

(1.3.7)    $T : V \to W$

is said to be a linear transformation provided

(1.3.8)    $T(a_1 v_1 + a_2 v_2) = a_1 T v_1 + a_2 T v_2$, $\forall\, a_j \in \mathbb{F}$, $v_j \in V$.

We also write $T \in \mathcal{L}(V, W)$. In case $V = W$, we also use the notation $\mathcal{L}(V) = \mathcal{L}(V, V)$.

Linear transformations arise in a number of ways. For example, an $m \times n$ matrix $A = (a_{ij})$ with entries in $\mathbb{F}$ defines a linear transformation

(1.3.9)    $A : \mathbb{F}^n \to \mathbb{F}^m$

by

(1.3.10)    $Ax = \Bigl(\sum_j a_{1j} x_j, \dots, \sum_j a_{mj} x_j\Bigr)^t$, for $x = (x_1, \dots, x_n)^t$.

We also have linear transformations on function spaces, such as multiplication operators

(1.3.11)    $M_f : C^k(I) \to C^k(I)$, $\quad M_f g(x) = f(x) g(x)$,

given $f \in C^k(I)$, $I = [a,b]$, and the operation of differentiation:

(1.3.12)    $D : C^{k+1}(I) \to C^k(I)$, $\quad Df(x) = f'(x)$.

We also have integration:

(1.3.13)    $\mathcal{J} : C^k(I) \to C^{k+1}(I)$, $\quad \mathcal{J}f(x) = \int_a^x f(y)\,dy$.

Note also that

(1.3.14)    $D : \mathcal{P}_{k+1} \to \mathcal{P}_k$, $\quad \mathcal{J} : \mathcal{P}_k \to \mathcal{P}_{k+1}$,

where $\mathcal{P}_k$ denotes the space of polynomials in $x$ of degree $\le k$.
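The maps $D$ and $\mathcal{J}$ of (1.3.12)-(1.3.14) are easy to realize in code on $\mathcal{P}_k$; representing a polynomial $c_0 + c_1 x + \cdots$ by its coefficient list, with $0$ as the base point of integration, is our own illustrative choice:

```python
# Differentiation and integration as linear maps on polynomial
# coefficient lists [c0, c1, c2, ...].
def D(c):
    # D : P_{k+1} -> P_k
    return [j * c[j] for j in range(1, len(c))]

def J(c):
    # J : P_k -> P_{k+1}, integrating from 0
    return [0.0] + [c[j] / (j + 1) for j in range(len(c))]

p = [1.0, 2.0, 3.0]                    # 1 + 2x + 3x**2
assert D(J(p)) == p                    # D(Jf) = f, the fundamental theorem again
assert D([5.0]) == []                  # constants make up the null space of D
assert D([2.0 * cj for cj in p]) == [2.0 * cj for cj in D(p)]  # linearity
```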
Two linear transformations $T_j \in \mathcal{L}(V, W)$ can be added:

(1.3.15)    $T_1 + T_2 : V \to W$, $\quad (T_1 + T_2)v = T_1 v + T_2 v$.

Also $T \in \mathcal{L}(V, W)$ can be multiplied by a scalar:

(1.3.16)    $aT : V \to W$, $\quad (aT)v = a(Tv)$.

This makes $\mathcal{L}(V, W)$ a vector space. We can also compose linear transformations $S \in \mathcal{L}(W, X)$, $T \in \mathcal{L}(V, W)$:

(1.3.17)    $ST : V \to X$, $\quad (ST)v = S(Tv)$.

For example, we have

(1.3.18)    $M_f D : C^{k+1}(I) \to C^k(I)$, $\quad M_f D g(x) = f(x) g'(x)$,

given $f \in C^k(I)$. When two transformations

(1.3.19)    $A : \mathbb{F}^n \to \mathbb{F}^m$, $\quad B : \mathbb{F}^k \to \mathbb{F}^n$

are represented by matrices, e.g., as in (1.3.10) and

(1.3.20)    $B = (b_{ij})$, $\quad 1 \le i \le n$, $1 \le j \le k$,

then

(1.3.21)    $AB : \mathbb{F}^k \to \mathbb{F}^m$

is given by matrix multiplication,

(1.3.22)    $AB = \begin{pmatrix} \sum_\ell a_{1\ell} b_{\ell 1} & \cdots & \sum_\ell a_{1\ell} b_{\ell k} \\ \vdots & & \vdots \\ \sum_\ell a_{m\ell} b_{\ell 1} & \cdots & \sum_\ell a_{m\ell} b_{\ell k} \end{pmatrix}$.

Another way of writing (1.3.22) is to represent $A$ and $B$ as

(1.3.23)    $A = (a_{ij})$, $\quad B = (b_{ij})$,

and then we have

(1.3.24)    $AB = (d_{ij})$, $\quad d_{ij} = \sum_{\ell=1}^n a_{i\ell} b_{\ell j}$.

To establish the identity (1.3.24), we note that it suffices to show the two sides have the same effect on each $e_j \in \mathbb{F}^k$, $1 \le j \le k$, where $e_j$ is the column vector in $\mathbb{F}^k$ whose $j$th entry is 1 and whose other entries are 0. First note that

(1.3.25)    $B e_j = (b_{1j}, \dots, b_{nj})^t$,

which is the $j$th column of $B$, as one can see via (1.3.10). Similarly, if $D$ denotes the right side of (1.3.24), $D e_j$ is the $j$th column of this matrix, i.e.,

(1.3.26)    $D e_j = \Bigl(\sum_\ell a_{1\ell} b_{\ell j}, \dots, \sum_\ell a_{m\ell} b_{\ell j}\Bigr)^t$.

On the other hand, applying $A$ to (1.3.25), via (1.3.10), gives the same result, so (1.3.24) holds.

Associated with a linear transformation as in (1.3.7), there are two special linear spaces, the null space of $T$ and the range of $T$. The null space of $T$ is

(1.3.27)    $\mathcal{N}(T) = \{v \in V : Tv = 0\}$,

and the range of $T$ is

(1.3.28)    $\mathcal{R}(T) = \{Tv : v \in V\}$.

Note that $\mathcal{N}(T)$ is a linear subspace of $V$ and $\mathcal{R}(T)$ is a linear subspace of $W$. If $\mathcal{N}(T) = 0$, we say $T$ is injective; if $\mathcal{R}(T) = W$, we say $T$ is surjective. Note that $T$ is injective if and only if $T$ is one-to-one, i.e.,

(1.3.29)    $Tu = Tv \implies u = v$.

If $T$ is surjective, we also say $T$ is onto. If $T$ is one-to-one and onto, we say it is an isomorphism. In such a case the inverse

(1.3.30)    $T^{-1} : W \to V$
is well defined, and it is a linear transformation. We also say $T$ is invertible, in such a case.

Basis and dimension. Given a finite set $S = \{v_1, \dots, v_k\}$ in a vector space $V$, the span of $S$ is the set of vectors in $V$ of the form

(1.3.31)    $c_1 v_1 + \cdots + c_k v_k$,

with $c_j$ arbitrary scalars, ranging over $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$. This set, denoted $\mathrm{Span}(S)$, is a linear subspace of $V$. The set $S$ is said to be linearly dependent if and only if there exist scalars $c_1, \dots, c_k$, not all zero, such that (1.3.31) vanishes. Otherwise we say $S$ is linearly independent.

If $\{v_1, \dots, v_k\}$ is linearly independent, we say that $S$ is a basis of $\mathrm{Span}(S)$, and that $k$ is the dimension of $\mathrm{Span}(S)$. In particular, if this holds and $\mathrm{Span}(S) = V$, we say $k = \dim V$. We also say $V$ has a finite basis, and that $V$ is finite dimensional. By convention, if $V$ has only one element, the zero element, we say $V = 0$ and $\dim V = 0$.

It is easy to see that any finite set $S = \{v_1, \dots, v_k\} \subset V$ has a maximal subset that is linearly independent, and such a subset has the same span as $S$, so $\mathrm{Span}(S)$ has a basis. To take a complementary perspective, $S$ will have a minimal subset $S_0$ with the same span, and any such minimal subset will be a basis of $\mathrm{Span}(S)$. Soon we will show that any two bases of a finite-dimensional vector space have the same number of elements (so $\dim V$ is well defined). First, let us relate $V$ to $\mathbb{F}^k$.
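A concrete instance of this correspondence, sketched in code: take $V = \mathcal{P}_2$ with basis $\{1, x, x^2\}$, so $k = 3$. The helper that recovers coordinates from values at $0, 1, -1$ is an illustrative device, not from the text:

```python
# The map A : F^3 -> P_2 sends coordinates (c1, c2, c3) to the polynomial
# c1 + c2*x + c3*x**2; A_inv recovers the coordinates, exhibiting A as an
# isomorphism of F^3 onto P_2.
def A(c):
    return lambda x: c[0] + c[1] * x + c[2] * x**2

def A_inv(p):
    # Values at 0, 1, -1 determine a quadratic uniquely.
    c1 = p(0.0)
    c2 = (p(1.0) - p(-1.0)) / 2.0
    c3 = (p(1.0) + p(-1.0)) / 2.0 - p(0.0)
    return (c1, c2, c3)

coords = (2.0, -1.0, 3.0)
assert A_inv(A(coords)) == coords
```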
1.
26 So say V has a basis (1.3.32) by (1.3.33) where
=
Background
S {v" ... , V k}. We define a linear transformation A rk V :
+
(1.3.34 )
rk
, ek}
S
We say {e" . . . is the standard basis of The linear independence of is equiv alent to the injectivity of A, and the statement that spans V is equivalent to the sur jectivity of A. Hence the statement that is a basis of V is equivalent to the statement thatA is an isomorphism, with its inverse uniquely specified by (1.3.35)
S
S
We begin our demonstration tha t dim V is well defined with the following concrete result.
... , vk+' are vectors in rk, then they are linearly dependent. Proof. We use induction on k. The result is obvious if k 1. We can suppose the last component of some Vj is nonzero, since otherwise we can regard these vectors as elements of rk ' and use the inductive hypothesis. Reordering these vectors, we can assume the last component of v k+' is nonzero, and it can be assumed to be 1. Form Wj Vj  Vkj Vk+ l , 1 :::; j :::; k, where Vj (v 'j , ... , V kj )t Then the last component of each of the vectors w" . . . , w k is 0, so we can regard these as k vectors in rk 1 By induction, there exist scalars a" ... , ak not all zero, such that Lemma 1.3.1. Ifv"
=
=
=
so we have
=
al V I + ... + ak vk (al Vkl + ... + ak vkk)vk+ l , the desired linear dependence relation on {v" ... , V k+ r l .
D
With this result in hand, we proceed.
Proposition 1.3.2. If V has a basis {v_1, ..., v_k} with k elements and {w_1, ..., w_ℓ} ⊂ V is linearly independent, then ℓ ≤ k.

Proof. Take the isomorphism A : F^k → V described in (1.3.32)-(1.3.33). The hypotheses imply that {A^{-1}w_1, ..., A^{-1}w_ℓ} is linearly independent in F^k, so Lemma 1.3.1 implies ℓ ≤ k. □

Corollary 1.3.3. If V is finite dimensional, any two bases of V have the same number of elements. If V is isomorphic to W, these spaces have the same dimension.

Proof. If S (with #S elements) and T are bases of V, we have #S ≤ #T and #T ≤ #S, hence #S = #T. For the latter part, an isomorphism of V onto W takes a basis of V to a basis of W. □
The following is an easy but useful consequence.
Proposition 1.3.4. If V is finite dimensional and W ⊂ V a linear subspace, then W has a finite basis, and dim W ≤ dim V.

Proof. Suppose {w_1, ..., w_ℓ} is a linearly independent subset of W. Proposition 1.3.2 implies ℓ ≤ dim V. If this set spans W, we are done. If not, there is an element w_{ℓ+1} ∈ W not in this span, and {w_1, ..., w_{ℓ+1}} is a linearly independent subset of W. Again ℓ + 1 ≤ dim V. Continuing this process a finite number of times must produce a basis of W. □
A similar argument establishes:
Proposition 1.3.5. Suppose V is finite dimensional, W ⊂ V a linear subspace, and {w_1, ..., w_ℓ} a basis of W. Then V has a basis of the form {w_1, ..., w_ℓ, u_1, ..., u_m}, and ℓ + m = dim V.

Having this, we can establish the following result, sometimes called the fundamental theorem of linear algebra.

Proposition 1.3.6. Assume V and W are vector spaces, V is finite dimensional, and

(1.3.36) A : V → W

is a linear map. Then

(1.3.37) dim N(A) + dim R(A) = dim V.

Proof. Let {w_1, ..., w_ℓ} be a basis of N(A) ⊂ V, and complete it to a basis {w_1, ..., w_ℓ, u_1, ..., u_m} of V. Set L = Span{u_1, ..., u_m}, and consider

(1.3.38) A_0 : L → W, A_0 = A|_L.

Clearly, w ∈ R(A) implies

w = A(a_1 w_1 + ⋯ + a_ℓ w_ℓ + b_1 u_1 + ⋯ + b_m u_m) = A_0(b_1 u_1 + ⋯ + b_m u_m),

so

(1.3.39) R(A_0) = R(A).

Furthermore,

(1.3.40) N(A_0) = N(A) ∩ L = 0.

Hence A_0 : L → R(A_0) is an isomorphism. Thus dim R(A) = dim R(A_0) = dim L = m, and we have (1.3.37). □

The following is a significant special case.
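The identity (1.3.37) is easy to test numerically. The sketch below is not from the text: it uses a hypothetical 3 × 4 matrix A (so A : F^4 → F^3 and dim V = 4), computes dim R(A) as the rank via exact rational Gaussian elimination, and counts dim N(A) as the number of free (non-pivot) columns.

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination; equals dim R(A) for the matrix A."""
    A = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue                      # free column: contributes to N(A)
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# Hypothetical A : F^4 -> F^3; here dim V = 4
A = [[1, 2, 3, 4],
     [2, 4, 6, 8],      # twice the first row, so the rank drops
     [0, 1, 1, 0]]
dim_range = rank(A)          # dim R(A) = 2
dim_null = 4 - dim_range     # dim N(A) = number of free columns = 2
print(dim_range + dim_null)  # 4 = dim V, as in (1.3.37)
```

The free columns found during elimination parameterize N(A), which is how one exhibits an explicit basis of the null space.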
Corollary 1.3.7. Let V be finite dimensional, and let A : V → V be linear. Then

A injective ⟺ A surjective ⟺ A isomorphism.

We mention that these equivalences can fail for infinite-dimensional spaces. For example, if P denotes the space of polynomials in x, then M_x : P → P (M_x f(x) = x f(x)) is injective but not surjective, while D : P → P (Df(x) = f'(x)) is surjective but not injective.

Next we have the following important characterization of injectivity and surjectivity.
Proposition 1.3.8. Assume V and W are finite dimensional and A : V → W is linear. Then

(1.3.41) A surjective ⟺ AB = I_W, for some B ∈ L(W, V),

and

(1.3.42) A injective ⟺ CA = I_V, for some C ∈ L(W, V).

Proof. Clearly, AB = I implies A surjective, and CA = I implies A injective. We establish the converses.

First assume A : V → W is surjective. Let {w_1, ..., w_ℓ} be a basis of W. Pick v_j ∈ V such that A v_j = w_j. Set

(1.3.43) B(a_1 w_1 + ⋯ + a_ℓ w_ℓ) = a_1 v_1 + ⋯ + a_ℓ v_ℓ.

This works in (1.3.41).

Next assume A : V → W is injective. Let {v_1, ..., v_k} be a basis of V. Set w_j = A v_j. Then {w_1, ..., w_k} is linearly independent, hence a basis of R(A), and we then can produce a basis {w_1, ..., w_k, u_1, ..., u_m} of W. Set

(1.3.44) C(a_1 w_1 + ⋯ + a_k w_k + b_1 u_1 + ⋯ + b_m u_m) = a_1 v_1 + ⋯ + a_k v_k.

This works in (1.3.42). □

An m × n matrix A defines a linear transformation A : F^n → F^m, as in (1.3.9)-(1.3.10). The columns of A are

(1.3.45) a_j = (a_{1j}, ..., a_{mj})^t.

As seen in (1.3.25),

(1.3.46) A e_j = a_j,

where e_1, ..., e_n is the standard basis of F^n. Hence

(1.3.47) R(A) = linear span of the columns of A,

so

(1.3.48) R(A) = F^m ⟺ the columns of A span F^m.
Furthermore,

(1.3.49) A(∑_{j=1}^n c_j e_j) = 0 ⟺ ∑_{j=1}^n c_j a_j = 0,

so

(1.3.50) N(A) = 0 ⟺ {a_1, ..., a_n} is linearly independent.

We have the following conclusion, in case m = n.

Proposition 1.3.9. Let A be an n × n matrix, defining A : F^n → F^n. Then the following are equivalent:

(1.3.51) A is invertible,
the columns of A are linearly independent,
the columns of A span F^n.
Exercises
1. Show that the results in (1.3.4) follow from the basic rules (1.3.1)-(1.3.3).
Hint. To start, add -v to both sides of the identity v + w = v, and take account first of the associative law in (1.3.1), and then of the rest of (1.3.1). For the second line of (1.3.4), use the rules (1.3.2) and (1.3.3). Then use the first two lines of (1.3.4) to justify the third line, etc.

2. Demonstrate the following results for any vector space. Take a ∈ F, v ∈ V:

a · 0 = 0 ∈ V, a(-v) = -av.

Hint. Feel free to use the results of (1.3.4).

Let V be a vector space (over F), and let W, X ⊂ V be linear subspaces. We say

(1.3.52) V = W + X,

provided each v ∈ V can be written

(1.3.53) v = w + x, w ∈ W, x ∈ X.

We say

(1.3.54) V = W ⊕ X,

provided each v ∈ V has a unique representation (1.3.53).

3. Show that

V = W ⊕ X ⟺ V = W + X and W ∩ X = 0.

4. Let A : F^n → F^m be defined by an m × n matrix, as in (1.3.9)-(1.3.10).
(a) Show that R(A) is the span of the columns of A.
(b) Show that N(A) = 0 if and only if the columns of A are linearly independent.
Hint. See (1.3.25).

5. Define the transpose of an m × n matrix A = (a_{jk}) to be the n × m matrix A^t = (a_{kj}). Thus, if A is as in (1.3.9)-(1.3.10),

(1.3.55) A^t = [ a_11 ⋯ a_m1 ; ⋮ ⋱ ⋮ ; a_1n ⋯ a_mn ].

For example,

A = [ 1 2 ; 3 4 ; 5 6 ] ⟹ A^t = [ 1 3 5 ; 2 4 6 ].

Suppose also B is an n × k matrix, as in (1.3.20), so AB is defined, as in (1.3.21). Show that

(1.3.56) (AB)^t = B^t A^t.

6. Let A = (1 2 3), and let B be a 3 × 1 column vector. Compute AB and BA. Then compute A^t B^t and B^t A^t.

7. Let P_5 be the space of real polynomials in x of degree ≤ 5, and set

T : P_5 → R^3, Tp = (p(-1), p(0), p(1)).

Specify R(T) and N(T), and verify (1.3.37) for V = P_5, W = R^3, A = T.

8. Denote the space of m × n matrices with entries in F (as in (1.3.10)) by

(1.3.57) M(m × n, F).

If m = n, denote it by

(1.3.58) M(n, F).

Show that dim M(m × n, F) = mn; in particular, dim M(n, F) = n^2.
1.4. Determinants
Determinants arise in the study of inverting a matrix. To take the 2 × 2 case, solving the system

(1.4.1) ax + by = u, cx + dy = v

for x and y can be done by multiplying these equations by d and b, respectively, and subtracting, and by multiplying them by c and a, respectively, and subtracting, yielding

(1.4.2) (ad - bc)x = du - bv, (ad - bc)y = av - cu.

The factor on the left is

(1.4.3) det A = det [ a b ; c d ] = ad - bc,

and solving (1.4.2) for x and y leads to

(1.4.4) A^{-1} = (1/det A) [ d -b ; -c a ], A = [ a b ; c d ],

provided det A ≠ 0.

We now consider determinants of n × n matrices. Let M(n, F) denote the set of n × n matrices with entries in F = R or C. We write

(1.4.5) A = [ a_11 ⋯ a_1n ; ⋮ ⋱ ⋮ ; a_n1 ⋯ a_nn ] = (a_1, ..., a_n),

where

(1.4.6) a_j = (a_{1j}, ..., a_{nj})^t

is the jth column of A. The determinant is defined as follows.

Proposition 1.4.1. There is a unique function

(1.4.7) ϑ : M(n, F) → F,

satisfying the following three properties:

(a) ϑ is linear in each column a_j of A;
(b) ϑ(Ã) = -ϑ(A) if Ã is obtained from A by interchanging two columns;
(c) ϑ(I) = 1.

This defines the determinant,

(1.4.8) ϑ(A) = det A.

If (c) is replaced by

(c') ϑ(I) = r,

then

(1.4.9) ϑ(A) = r det A.
The proof will involve constructing an explicit formula for det A by following the rules (a)-(c). We start with the case n = 3. We have

(1.4.10) det A = ∑_{j=1}^3 a_{j1} det(e_j, a_2, a_3),

by applying rule (a) to the first column of A, a_1 = ∑_j a_{j1} e_j. Here and below, {e_j : 1 ≤ j ≤ n} denotes the standard basis of F^n, so e_j has a 1 in the jth slot and 0's elsewhere. Applying (a) to the second and third columns gives

(1.4.11) det A = ∑_{j,k=1}^3 a_{j1} a_{k2} det(e_j, e_k, a_3) = ∑_{j,k,ℓ=1}^3 a_{j1} a_{k2} a_{ℓ3} det(e_j, e_k, e_ℓ).

This is a sum of 27 terms, but most of them are 0. Note that rule (b) implies

(1.4.12) det B = 0 whenever B has two identical columns.

Hence det(e_j, e_k, e_ℓ) = 0 unless j, k, and ℓ are distinct, that is, unless (j, k, ℓ) is a permutation of (1, 2, 3). Now rule (c) says

(1.4.13) det(e_1, e_2, e_3) = 1,

and we see from rule (b) that det(e_j, e_k, e_ℓ) = 1 if one can convert (e_j, e_k, e_ℓ) to (e_1, e_2, e_3) by an even number of column interchanges, and det(e_j, e_k, e_ℓ) = -1 if it takes an odd number of interchanges. Explicitly,

(1.4.14) det(e_1, e_2, e_3) = 1, det(e_1, e_3, e_2) = -1,
det(e_2, e_3, e_1) = 1, det(e_2, e_1, e_3) = -1,
det(e_3, e_1, e_2) = 1, det(e_3, e_2, e_1) = -1.

Consequently, (1.4.11) yields

(1.4.15) det A = a_11 a_22 a_33 - a_11 a_32 a_23 + a_21 a_32 a_13 - a_21 a_12 a_33 + a_31 a_12 a_23 - a_31 a_22 a_13.

Note that the second indices occur in (1, 2, 3) order in each product. We can rearrange these products so that the first indices occur in (1, 2, 3) order:

(1.4.16) det A = a_11 a_22 a_33 - a_11 a_23 a_32 + a_13 a_21 a_32 - a_12 a_21 a_33 + a_12 a_23 a_31 - a_13 a_22 a_31.
Now we tackle the case of general n. Parallel to (1.4.10)-(1.4.11), we have

(1.4.17) det A = ∑_j a_{j1} det(e_j, a_2, ..., a_n) = ∑_{j_1,...,j_n} a_{j_1 1} ⋯ a_{j_n n} det(e_{j_1}, ..., e_{j_n}),

by applying rule (a) to each of the n columns of A. As before, (1.4.12) implies det(e_{j_1}, ..., e_{j_n}) = 0 unless j_1, ..., j_n are all distinct, that is, unless (j_1, ..., j_n) is a permutation of the set (1, 2, ..., n). We set

(1.4.18) S_n = the set of permutations of (1, 2, ..., n).

That is, S_n consists of elements σ, mapping the set {1, ..., n} to itself,

(1.4.19) σ : {1, 2, ..., n} → {1, 2, ..., n},

that are one-to-one and onto. We can compose two such permutations, obtaining the product στ, given σ and τ in S_n. A permutation that interchanges just two elements of {1, ..., n}, say j and k (j ≠ k), is called a transposition and is labeled (jk). It is easy to see that each permutation of {1, ..., n} can be achieved by successively transposing pairs of elements of this set. That is, each element σ ∈ S_n is a product of transpositions. We claim that

(1.4.20) det(e_{σ(1)}, ..., e_{σ(n)}) = (sgn σ) det(e_1, ..., e_n) = sgn σ,

where

(1.4.21) sgn σ = 1 if σ is a product of an even number of transpositions,
sgn σ = -1 if σ is a product of an odd number of transpositions.

In fact, the first identity in (1.4.20) follows from rule (b) and the second identity from rule (c).

There is one point to be checked here. Namely, we claim that a given σ ∈ S_n cannot simultaneously be written as a product of an even number of transpositions and an odd number of transpositions. If σ could be so written, sgn σ would not be well defined, and it would be impossible to satisfy condition (b), so Proposition 1.4.1 would fail.

One neat way to see that sgn σ is well defined is the following. Let σ ∈ S_n act on functions of n variables by

(1.4.22) (σf)(x_1, ..., x_n) = f(x_{σ(1)}, ..., x_{σ(n)}).

It is readily verified that if also τ ∈ S_n,

(1.4.23) g = σf ⟹ τg = (τσ)f.

Now, let P be the polynomial

(1.4.24) P(x_1, ..., x_n) = ∏_{1≤j<k≤n} (x_j - x_k).

Applying a transposition to the variables of P changes its sign, so for each σ ∈ S_n,

(1.4.25) σP = (sgn σ) P,

and, by (1.4.23), sgn σ is well defined, since P is not the zero polynomial.

Returning to the determinant, (1.4.17) and (1.4.20) now give

(1.4.27) det A = ∑_{σ∈S_n} (sgn σ) a_{σ(1)1} ⋯ a_{σ(n)n}.

Since the factors in each product commute, we can rearrange them so that the first indices occur in increasing order:

(1.4.28) a_{σ(1)1} ⋯ a_{σ(n)n} = a_{1τ(1)} ⋯ a_{nτ(n)}, with τ = σ^{-1}.

Note that

(1.4.29) sgn σ^{-1} = sgn σ,

so, parallel to (1.4.16), we also have

(1.4.30) det A = ∑_{σ∈S_n} (sgn σ) a_{1σ(1)} ⋯ a_{nσ(n)}.

Comparison with (1.4.27) gives

(1.4.31) det A = det A^t,

where A = (a_{jk}) ⟹ A^t = (a_{kj}). Note that the jth column of A^t has the same entries as the jth row of A. In light of this, we have the following.

Corollary 1.4.2. In Proposition 1.4.1, one can replace "columns" by "rows."
The following is a key property of the determinant, called multiplicativity.
Proposition 1.4.3. Given A and B in M(n, F),

(1.4.32) det(AB) = (det A)(det B).

Proof. For fixed A, apply Proposition 1.4.1 to

(1.4.33) ϑ(B) = det(AB).

If B = (b_1, ..., b_n), with jth column b_j, then

(1.4.34) AB = (Ab_1, ..., Ab_n).

Clearly, rule (a) holds for ϑ. Also, if B̃ = (b_{σ(1)}, ..., b_{σ(n)}) is obtained from B by permuting its columns, then AB̃ has columns (Ab_{σ(1)}, ..., Ab_{σ(n)}), obtained by permuting the columns of AB in the same fashion. Hence rule (b) holds for ϑ. Finally, rule (c') holds for ϑ, with r = det A, and (1.4.32) follows. □

Corollary 1.4.4. If A ∈ M(n, F) is invertible, then det A ≠ 0.

Proof. If A is invertible, there exists B ∈ M(n, F) such that AB = I. Then, by (1.4.32), (det A)(det B) = 1, so det A ≠ 0. □

The converse of Corollary 1.4.4 also holds. Before proving it, it is convenient to show that the determinant is invariant under a certain class of column operations, given as follows.

Proposition 1.4.5. If Ã is obtained from A = (a_1, ..., a_n) ∈ M(n, F) by adding c a_ℓ to a_k for some c ∈ F, ℓ ≠ k, then

(1.4.35) det Ã = det A.

Proof. By rule (a), det Ã = det A + c det A^b, where A^b is obtained from A by replacing the column a_k by a_ℓ. Hence A^b has two identical columns, so det A^b = 0, and (1.4.35) holds. □

We now extend Corollary 1.4.4.

Proposition 1.4.6. If A ∈ M(n, F), then A is invertible if and only if det A ≠ 0.

Proof. We have half of this from Corollary 1.4.4. To finish, assume A is not invertible. As seen in §1.3, this implies the columns a_1, ..., a_n of A are linearly dependent. Hence, for some k,

(1.4.36) a_k + ∑_{ℓ≠k} c_ℓ a_ℓ = 0,

with c_ℓ ∈ F. Now we can apply Proposition 1.4.5 to obtain det A = det Ã, where Ã is obtained by adding ∑ c_ℓ a_ℓ to a_k. But then the kth column of Ã is 0, so det A = det Ã = 0. This finishes the proof of Proposition 1.4.6. □
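Multiplicativity (1.4.32) and the invertibility criterion of Proposition 1.4.6 are easy to test on small matrices. The sketch below (hypothetical matrices, not from the text) reuses the signed-permutation-sum definition of the determinant.

```python
from itertools import permutations
from math import prod

def det(A):
    n = len(A)
    def sgn(p):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        return -1 if inv % 2 else 1
    return sum(sgn(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
B = [[1, 0, 2], [0, 1, 1], [1, 1, 0]]
print(det(matmul(A, B)), det(A) * det(B))   # equal, by (1.4.32)

# a zero (hence linearly dependent) column forces det = 0, as in the proof above:
C = [[2, 0, 1], [1, 0, 3], [5, 0, 4]]
print(det(C))                               # 0
```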
Further useful facts about determinants arise in the following exercises.
Exercises
1. Show that

(1.4.37) det [ a_11 a_12 ⋯ a_1n ; 0 a_22 ⋯ a_2n ; ⋮ ⋮ ⋱ ⋮ ; 0 a_n2 ⋯ a_nn ] = a_11 det A_11,

where A_11 = (a_{jk})_{2≤j,k≤n} ∈ M(n-1, F).
DΦ(I)Y = -Y. Related calculations show that Gl(n, R) is open in M(n, R). In fact, given X ∈ Gl(n, R), Y ∈ M(n, R),
(2.1.21) X + Y = X(I + X^{-1}Y),

and

(2.1.22) ‖X^{-1}Y‖ ≤ ‖X^{-1}‖ ‖Y‖,

so by (2.1.20) the factor I + X^{-1}Y is invertible as long as ‖X^{-1}Y‖ < 1, and hence so is X + Y. One can proceed from here to compute DΦ(X). See the exercises.

We return to general considerations and derive the chain rule for the derivative. Let F : O → R^m be differentiable at x ∈ O, as above, let U be a neighborhood of z = F(x) in R^m, and let G : U → R^k be differentiable at z. Consider H = G ∘ F. We have

(2.1.23) H(x + y) = G(F(x + y)) = G(F(x) + DF(x)y + R(x, y)) = G(z) + DG(z)(DF(x)y + R(x, y)) + R_1(x, y) = G(z) + DG(z)DF(x)y + R_2(x, y),

with

(2.1.24) ‖R_2(x, y)‖ / ‖y‖ → 0 as y → 0.

Thus G ∘ F is differentiable at x, and

(2.1.25) D(G ∘ F)(x) = DG(F(x)) · DF(x).

Another useful remark is that, by the fundamental theorem of calculus applied to φ(t) = F(x + ty),

(2.1.26) F(x + y) = F(x) + ∫_0^1 DF(x + ty)y dt,

provided F is C^1. For a typical application, see (2.3.46).

For the study of higher-order derivatives of a function, the following result is fundamental.

Proposition 2.1.2. Assume F : O → R^m is of class C^2, with O open in R^n. Then, for each x ∈ O, 1 ≤ j, k ≤ n,

(2.1.27) (∂/∂x_j)(∂F/∂x_k)(x) = (∂/∂x_k)(∂F/∂x_j)(x).

Proof. It suffices to take m = 1. We label our function f : O → R. For 1 ≤ j ≤ n, we set

(2.1.28) Δ_{j,h} f(x) = (1/h)(f(x + h e_j) - f(x)),

where {e_1, ..., e_n} is the standard basis of R^n. The mean value theorem (for functions of x_j alone) implies that if ∂_j f = ∂f/∂x_j exists on O, then, for x ∈ O, h > 0 sufficiently small,

(2.1.29) Δ_{j,h} f(x) = ∂_j f(x + α_j h e_j),

for some α_j ∈ (0, 1), depending on x and h. Iterating this, if ∂_j(∂_k f) exists on O, then, for x ∈ O, h > 0 sufficiently small,

(2.1.30) Δ_{k,h} Δ_{j,h} f(x) = ∂_k(Δ_{j,h} f)(x + α_k h e_k) = Δ_{j,h}(∂_k f)(x + α_k h e_k) = ∂_j ∂_k f(x + α_k h e_k + α_j h e_j),

with α_j, α_k ∈ (0, 1). Here we have used the elementary result

(2.1.31) ∂_k Δ_{j,h} f = Δ_{j,h}(∂_k f).

We deduce the following.

Proposition 2.1.3. If ∂_k f and ∂_j ∂_k f exist on O and ∂_j ∂_k f is continuous at x_0 ∈ O, then

(2.1.32) ∂_j ∂_k f(x_0) = lim_{h→0} Δ_{k,h} Δ_{j,h} f(x_0).

Clearly,

(2.1.33) Δ_{k,h} Δ_{j,h} f = Δ_{j,h} Δ_{k,h} f,

so we have the following, which easily implies Proposition 2.1.2.

Corollary 2.1.4. In the setting of Proposition 2.1.3, if also ∂_j f and ∂_k ∂_j f exist on O and ∂_k ∂_j f is continuous at x_0, then

(2.1.34) ∂_j ∂_k f(x_0) = ∂_k ∂_j f(x_0).
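The difference quotients of (2.1.28)-(2.1.33) can be watched in action numerically. In this sketch (the test function is hypothetical, not from the text), the two iterated quotients agree with each other and approximate the mixed partial, reflecting (2.1.32)-(2.1.34).

```python
from math import sin, exp

def f(x, y):                      # hypothetical smooth test function
    return sin(x * y) + exp(x) * y ** 2

def delta(g, j, h):
    """Difference quotient Delta_{j,h} in the jth variable (j = 1 or 2)."""
    if j == 1:
        return lambda x, y: (g(x + h, y) - g(x, y)) / h
    return lambda x, y: (g(x, y + h) - g(x, y)) / h

h, x0, y0 = 1e-5, 0.5, 0.3
d21 = delta(delta(f, 1, h), 2, h)(x0, y0)   # Delta_{2,h} Delta_{1,h} f
d12 = delta(delta(f, 2, h), 1, h)(x0, y0)   # Delta_{1,h} Delta_{2,h} f
print(d21, d12)   # both near the exact value cos(xy) - xy sin(xy) + 2y e^x
```

Both quotients combine the same four function values, so their agreement here is exact up to rounding, which is precisely the observation (2.1.33).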
We now describe two convenient notations to express higher-order derivatives of a function f : O → R, where O ⊂ R^n is open. In one, let J be a k-tuple of integers between 1 and n; J = (j_1, ..., j_k). We set

(2.1.35) f^{(J)}(x) = ∂_{j_k} ⋯ ∂_{j_1} f(x), |J| = k,

the total order of differentiation. As we have seen in Proposition 2.1.2, ∂_i ∂_j f = ∂_j ∂_i f provided f ∈ C^2(O). It follows that if f ∈ C^k(O), then ∂_{j_k} ⋯ ∂_{j_1} f = ∂_{ℓ_k} ⋯ ∂_{ℓ_1} f whenever (ℓ_1, ..., ℓ_k) is a permutation of (j_1, ..., j_k). Thus, another convenient notation to use is the following. Let α be an n-tuple of nonnegative integers, α = (α_1, ..., α_n). Then we set

(2.1.36) f^{(α)}(x) = ∂_1^{α_1} ⋯ ∂_n^{α_n} f(x), |α| = α_1 + ⋯ + α_n.

Note that, if |J| = |α| = k and f ∈ C^k(O),

(2.1.37) f^{(J)}(x) = f^{(α)}(x), with α_i = #{ℓ : j_ℓ = i}.

Correspondingly, there are two expressions for monomials in x = (x_1, ..., x_n):

(2.1.38) x^J = x_{j_1} ⋯ x_{j_k}, x^α = x_1^{α_1} ⋯ x_n^{α_n},

and x^J = x^α provided J and α are related as in (2.1.37). Both these notations are called multi-index notations.

We now derive Taylor's formula with remainder for a smooth function F : O → R, making use of these multi-index notations. We will apply the one-variable formula (1.1.50)-(1.1.51), i.e.,

(2.1.39) φ(t) = φ(0) + φ'(0)t + (1/2)φ''(0)t^2 + ⋯ + (1/k!)φ^{(k)}(0)t^k + r_k(t),

with

(2.1.40) r_k(t) = (1/k!) ∫_0^t (t - s)^k φ^{(k+1)}(s) ds,

given φ ∈ C^{k+1}(I), I = (-a, a). (See Exercise 9 of §1.1, Exercise 7 of this section, and also Appendix A.4 for further discussion.) Let us assume 0 ∈ O and that the line segment from 0 to x is contained in O. We set φ(t) = F(tx) and apply (2.1.39)-(2.1.40) with t = 1. Applying the chain rule, we have

(2.1.41) φ'(t) = ∑_{j=1}^n ∂_j F(tx) x_j = ∑_{|J|=1} F^{(J)}(tx) x^J.

Differentiating again, we have

(2.1.42) φ''(t) = ∑_{|J|=1,|K|=1} F^{(J+K)}(tx) x^{J+K} = ∑_{|J|=2} F^{(J)}(tx) x^J,

where, if |J| = k, |K| = ℓ, we take J + K = (j_1, ..., j_k, k_1, ..., k_ℓ). Inductively, we have

(2.1.43) φ^{(k)}(t) = ∑_{|J|=k} F^{(J)}(tx) x^J.

Hence, from (2.1.39) with t = 1,

(2.1.44) F(x) = F(0) + ∑_{|J|=1} F^{(J)}(0)x^J + ⋯ + (1/k!) ∑_{|J|=k} F^{(J)}(0)x^J + R_k(x),

or, more briefly,

(2.1.45) F(x) = ∑_{|J|≤k} (1/|J|!) F^{(J)}(0)x^J + R_k(x),

where

(2.1.46) R_k(x) = (1/k!) ∑_{|J|=k+1} (∫_0^1 (1 - s)^k F^{(J)}(sx) ds) x^J.

This gives Taylor's formula with remainder for F ∈ C^{k+1}(O) in the J-multi-index notation.

We also want to write the formula in the α-multi-index notation. We have

(2.1.47) ∑_{|J|=k} F^{(J)}(tx)x^J = ∑_{|α|=k} ν(α) F^{(α)}(tx) x^α,

where

(2.1.48) ν(α) = #{J : α = α(J)},

and we define the relation α = α(J) to hold provided the condition (2.1.37) holds or, equivalently, provided x^J = x^α. Thus ν(α) is uniquely defined by

(2.1.49) ∑_{|α|=k} ν(α)x^α = ∑_{|J|=k} x^J = (x_1 + ⋯ + x_n)^k.

To evaluate ν(α), we can expand (x_1 + ⋯ + x_n)^k in terms of x^α by a repeated application of the binomial formula:

(2.1.50) (x_1 + ⋯ + x_n)^k = (x_1 + (x_2 + ⋯ + x_n))^k
= ∑_{α_1≤k} (k choose α_1) x_1^{α_1} (x_2 + ⋯ + x_n)^{k-α_1}
= ∑_{|α|=k} (k choose α_1)(k-α_1 choose α_2) ⋯ (k-α_1-⋯-α_{n-1} choose α_n) x_1^{α_1} ⋯ x_n^{α_n}
= ∑_{|α|=k} ν(α)x^α.

We have ν(α) equal to the product of binomial coefficients given above, i.e., to

k!/(α_1!(k-α_1)!) · (k-α_1)!/(α_2!(k-α_1-α_2)!) ⋯ (k-α_1-⋯-α_{n-1})!/α_n! = k!/(α_1! ⋯ α_n!).

In other words, for |α| = k,

(2.1.51) ν(α) = k!/α!, where α! = α_1! ⋯ α_n!.

Thus the Taylor formula (2.1.45) can be rewritten

(2.1.52) F(x) = ∑_{|α|≤k} (1/α!) F^{(α)}(0)x^α + R_k(x),

where

(2.1.53) R_k(x) = ∑_{|α|=k+1} ((k+1)/α!) (∫_0^1 (1 - s)^k F^{(α)}(sx) ds) x^α.

The formula (2.1.52)-(2.1.53) holds for F ∈ C^{k+1}. It is significant that (2.1.52), with a variant of (2.1.53), holds for F ∈ C^k. In fact, for such F, we can apply (2.1.53) with k replaced by k - 1 to get

(2.1.54) F(x) = ∑_{|α|≤k-1} (1/α!) F^{(α)}(0)x^α + R_{k-1}(x),

with

(2.1.55) R_{k-1}(x) = ∑_{|α|=k} (k/α!) (∫_0^1 (1 - s)^{k-1} F^{(α)}(sx) ds) x^α.

We can add and subtract F^{(α)}(0) to F^{(α)}(sx) in the integrand above and obtain the following.

Proposition 2.1.5. If F ∈ C^k on a ball B_r(0), the formula (2.1.52) holds for x ∈ B_r(0), with

(2.1.56) R_k(x) = ∑_{|α|=k} (k/α!) (∫_0^1 (1 - s)^{k-1} [F^{(α)}(sx) - F^{(α)}(0)] ds) x^α.

Remark. Note that (2.1.56) yields the estimate

(2.1.57) |R_k(x)| ≤ ∑_{|α|=k} (|x^α|/α!) sup_{0≤s≤1} |F^{(α)}(sx) - F^{(α)}(0)|.
= 2 in ( 2.1.45 ), or lal = 2 in (2.1.52), is of particular
( 2.1.58 ) :
Hessian of a C' function P (') .... R as an n X n rna trix, ). ( 2.1.59 ) D'P(y) = ( aXa'p (y) aX k j Then the power series expansion of secondorder about a for P takes the form ( 2.1.60 ) P(x) = P(O) + DP(O)x + 2:1 x . D'P(O)x + R,(x),
We define the
2.1.
The derivative
where, by (2.1.57), (2.1.61)
47
sup IF(a) ( sx )  p(a) ( O) I . IR,(x) 1 :S Cn lx l' Oc::;s:O;l,lal =2
y G.
In all these formulas we can translate coordinates and expand about E For example, (2.1.60) extends to 1 ( , ), (2.1.62)
P(x) = P(y) + DP(y)(x  y) + l (x  y) . D'P(y)(x  y) + R, x y
with
sup IF(a) (y + sex  y»  p(a) (y) l · I R,(x,y)1 :S Cn lx  yl' O:o;s:O;l,lal=2 Example. If we take P(x) as in (2.1.8) so DP(x) is as in (2.1.9), then I sinx, Xl X2 ) D'P(x) = (cossinx Xl X2  sinxl sinx2 .
(2.1.63)
COS
COS
COS
The results (2.1.62)(2.1.63) are useful for extremal problems, i.e., determining where a sufficiently smooth function P R has local maxima and local min ima. Clearly, if P E and P has a local maximum or minimum at E then of P. To check what kind of In such a case, we say is a critical point is, we look at the matrix A D 'P(xo), assuming P E C ' ( By Proposition 2.1.2, A is a symmetric matrix. A basic result in linear algebra is that ifA is a real, symmetric matrix, then Rn has an orthonormal basis of eigenvectors, {V I , " " v n }, satisfyingAvj Aj Vj ; the real numbersAj are the eigenvalues ofA. We say A is positive definite if all Aj 0, and we say A is negative definite if all Aj We say A is strongly indefinite if some Aj 0 and another Ak Equivalently, given a real, symmetric matrix A, A positive definite = V · Av 2: C l v l ' , (2.1.64 ) A negative definite = V · Av :S C v ' , for some C 0, all v E Rn , and A strongly indefinite = 3 v, w E R n , nonzero, such that (2.1.65) V · Av 2: v , w · w  w ,
DP(xo) = O.
:G Xo critical point nX n = +
CI ( G)
Xo
nXn
=
>
>
>
Xo G, G). < O.
< O.
 ll
C l l'
A :S C l l'
> O. In light of (2.1.45)(2.1.46), we have the following result. Proposition 2.1.6. Assume P E C' ( G) is real valued and G is open in Rn . Let Xo E G be a critical pointfor P. Then (i) D'P( xo ) positive definite =} P has a local minimum at xo, (ii) D'P( xo ) negative definite =} P has a local maximum at xo, (iii) D'P(xo) strongly indefinite =} P has neither a local maximum nor a local minimum at Xo. In case (iii), we say Xo is a saddle point for P. for some C
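For n = 2 the eigenvalues of a symmetric Hessian have a closed form, so Proposition 2.1.6 can be run mechanically. A minimal sketch (hypothetical examples, not from the text):

```python
from math import sqrt

def classify(hessian):
    """Classify a critical point from a symmetric 2x2 Hessian via its eigenvalues."""
    (a, b), (_, c) = hessian
    # eigenvalues of [[a, b], [b, c]]: mean +/- sqrt(mean^2 - det)
    m, d = (a + c) / 2, a * c - b * b
    lam1, lam2 = m + sqrt(m * m - d), m - sqrt(m * m - d)
    if lam1 > 0 and lam2 > 0:
        return "local minimum"
    if lam1 < 0 and lam2 < 0:
        return "local maximum"
    if lam1 * lam2 < 0:
        return "saddle point"
    return "degenerate"

# F(x, y) = x^2 + 3y^2 has D^2F = [[2, 0], [0, 6]] at its critical point (0, 0)
print(classify([[2, 0], [0, 6]]))    # local minimum
# F(x, y) = x^2 - y^2 has D^2F = [[2, 0], [0, -2]]
print(classify([[2, 0], [0, -2]]))   # saddle point
```

Note that m^2 - d = ((a - c)/2)^2 + b^2 ≥ 0 for a symmetric matrix, so the square root is always real, matching the eigenvalue fact quoted above.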
The following is a test for positive definiteness.

Proposition 2.1.7. Let A = (a_{ij}) be a real, symmetric, n × n matrix. For 1 ≤ ℓ ≤ n, form the ℓ × ℓ matrix A_ℓ = (a_{ij})_{1≤i,j≤ℓ}. Then A is positive definite if and only if det A_ℓ > 0 for each ℓ ∈ {1, ..., n}.

Exercises

3. Let Φ(X) = X^{-1} and S(X) = X^2, for invertible X ∈ M(n, R), and set F(X) = X^{-2}. Compute DF(X)Y using each of the following approaches:
(a) Take F(X) = Φ(X)Φ(X) and use the product rule (Exercise 2).
(b) Take F(X) = Φ(S(X)) and use the chain rule.
(c) Take F(X) = S(Φ(X)) and use the chain rule.

4. Identify R^2 and C via z = x + iy. Then multiplication by i on C corresponds to applying
J = [ 0 -1 ; 1 0 ].

Let O ⊂ R^2 be open, and let f : O → R^2 be C^1. Say f = (u, v). Regard Df(x, y) as a 2 × 2 real matrix. One says f is holomorphic, or complex analytic, provided the Cauchy-Riemann equations hold:

∂u/∂x = ∂v/∂y, ∂u/∂y = -∂v/∂x.

Show that this is equivalent to the condition

Df(x, y) J = J Df(x, y).

Generalize to O open in C^m, f : O → C^n.

5. Let f be C^1 on a region in R^2 containing [a, b] × {y}. Show that, as h → 0,

(1/h)[f(x, y + h) - f(x, y)] → (∂f/∂y)(x, y),

uniformly on [a, b] × {y}.
Hint. Show that the left side is equal to

(1/h) ∫_0^h (∂f/∂y)(x, y + s) ds,

and use the uniform continuity of ∂f/∂y on [a, b] × [y - δ, y + δ]; cf. Proposition A.1.16.

6. In the setting of Exercise 5, show that

(d/dy) ∫_a^b f(x, y) dx = ∫_a^b (∂f/∂y)(x, y) dx.

7. Considering the power series

f(x) = f(y) + f'(y)(x - y) + ⋯ + (f^{(j)}(y)/j!)(x - y)^j + R_j(x, y),

show that

∂R_j/∂y = -(1/j!) f^{(j+1)}(y)(x - y)^j, R_j(x, x) = 0.

Use this to rederive (1.1.51), and hence (2.1.39)-(2.1.40).

We define "big oh" and "little oh" notation:
f(x) = O(x) (as x → 0) ⟺ |f(x)/x| ≤ C as x → 0,
f(x) = o(x) (as x → 0) ⟺ f(x)/x → 0 as x → 0.

8. Let O ⊂ R^n be open, and let y ∈ O. Show that

f ∈ C^{k+1}(O) ⟹ f(x) = ∑_{|α|≤k} (1/α!) f^{(α)}(y)(x - y)^α + O(|x - y|^{k+1}),

f ∈ C^k(O) ⟹ f(x) = ∑_{|α|≤k} (1/α!) f^{(α)}(y)(x - y)^α + o(|x - y|^k).

9. Assume G : U → O, F : O → Ω. Show that

(2.1.97) F, G ∈ C^1 ⟹ F ∘ G ∈ C^1.

More generally, show that, for k ∈ N,

(2.1.98) F, G ∈ C^k ⟹ F ∘ G ∈ C^k.

Hint. Write H = F ∘ G, with h_ℓ(x) = f_ℓ(g_1(x), ..., g_n(x)), and use (2.1.25) to get

(2.1.99) ∂_j h_ℓ(x) = ∑_{k=1}^n ∂_k f_ℓ(g_1, ..., g_n) ∂_j g_k.
Show that this yields (2.1.97). To proceed, deduce from (2.1.99) that

(2.1.100) ∂_{j_1} ∂_{j_2} h_ℓ(x) = ∑_{k_1,k_2=1}^n ∂_{k_1} ∂_{k_2} f_ℓ(g_1, ..., g_n)(∂_{j_1} g_{k_1})(∂_{j_2} g_{k_2}) + ∑_{k=1}^n ∂_k f_ℓ(g_1, ..., g_n) ∂_{j_1} ∂_{j_2} g_k.

Use this to get (2.1.98) for k = 2. Proceeding inductively, show that there exist constants C(μ, J#, k#) = C(μ, J^1, ..., J^μ, k_1, ..., k_μ) such that if F, G ∈ C^k and |J| ≤ k,

(2.1.101) ∂^J h_ℓ = ∑ C(μ, J#, k#) ∂_{k_1} ⋯ ∂_{k_μ} f_ℓ(g_1, ..., g_n)(∂^{J^1} g_{k_1}) ⋯ (∂^{J^μ} g_{k_μ}),

where the sum is over μ ≤ |J|, J^1 + ⋯ + J^μ ~ J, |J^ν| ≥ 1, and J^1 + ⋯ + J^μ ~ J means J is a rearrangement of J^1 + ⋯ + J^μ. Show that (2.1.98) follows from this.

10. Show that the map Φ : Gl(n, R) → Gl(n, R) given by Φ(X) = X^{-1} is C^k for each k, i.e., Φ ∈ C^∞.
Hint. Start with the material of Exercise 3. Write DΦ(X)Y = -X^{-1}YX^{-1} as

(∂Φ/∂x_{ℓm})(X) = DΦ(X)B^{ℓm} = -Φ(X)B^{ℓm}Φ(X),

where X = (x_{ℓm}) and B^{ℓm} has just one nonzero entry, at position (ℓ, m). Iterate this to get

∂_{ℓ_2 m_2} ∂_{ℓ_1 m_1} Φ(X) = -(∂_{ℓ_2 m_2} Φ(X))B^{ℓ_1 m_1} Φ(X) - Φ(X)B^{ℓ_1 m_1}(∂_{ℓ_2 m_2} Φ(X)),

and continue.
Exercises 11-13 deal with properties of the determinant as a differentiable function on spaces of matrices.

11. Let M(n, R) be the space of n × n matrices with real coefficients, and let det : M(n, R) → R be the determinant. Show that, if I is the identity matrix,

D det(I)B = Tr B, i.e., (d/dt) det(I + tB)|_{t=0} = Tr B.

12. If A(t) = (a_{jk}(t)) is a smooth curve in M(n, R), use the expansion of (d/dt) det A(t) as a sum of n determinants, in which the rows of A(t) are successively differentiated, to show that

(d/dt) det A(t) = Tr(Cof(A(t))^t · A'(t)),

and deduce that, for A, B ∈ M(n, R),

D det(A)B = Tr(Cof(A)^t · B).

Here Cof(A), the cofactor matrix, is defined in Exercise 4 of §1.4.

13. Suppose A ∈ M(n, R) is invertible. Using

det(A + tB) = (det A) det(I + tA^{-1}B),

show that

D det(A)B = (det A) Tr(A^{-1}B).

Comparing this result with that of Exercise 12, establish Cramer's formula:

(det A)A^{-1} = Cof(A)^t.

Compare the derivation in Exercise 4 of §1.4.
14. Define f(x, y) on R^2 by

f(x, y) = xy/√(x^2 + y^2) for (x, y) ≠ (0, 0), f(0, 0) = 0.

Show that f is continuous on R^2 and smooth on R^2 \ (0, 0). Show that ∂f/∂x and ∂f/∂y exist at each point of R^2 and are continuous on R^2 \ (0, 0) but not on R^2. Show that

(∂f/∂x)(0, 0) = (∂f/∂y)(0, 0) = 0.

Show that f is not differentiable at (0, 0).
Hint. Show that f(x, y) is not o(‖(x, y)‖) as (x, y) → (0, 0), by considering f(x, x).

15. Define g(x, y) on R^2 by

g(x, y) = x^3 y/(x^2 + y^2) for (x, y) ≠ (0, 0), g(0, 0) = 0.

Show that g is smooth on R^2 \ (0, 0) and is of class C^1 on R^2. Show that ∂_x ∂_y g and ∂_y ∂_x g exist at each point of R^2 and are continuous on R^2 \ (0, 0) but not on R^2. Show that

(∂/∂x)(∂g/∂y)(0, 0) = 1, (∂/∂y)(∂g/∂x)(0, 0) = 0.
2.2.
Inverse function and implicit function theorems
The inverse function theorem gives a condition under which a function can be locally inverted. This theorem and its corollary, the implicit function theorem, are fundamental results in multivariable calculus. First we state the inverse function theorem. Here, we assume k ≥ 1.

Theorem 2.2.1. Let F be a C^k map from an open neighborhood Ω of p_0 ∈ R^n to R^n, with q_0 = F(p_0). Suppose the derivative DF(p_0) is invertible. Then there is a neighborhood U of p_0 and a neighborhood V of q_0 such that F : U → V is one-to-one and onto, and F^{-1} : V → U is a C^k map. (One says F : U → V is a diffeomorphism.)

First we show that F is one-to-one on a neighborhood of p_0, under these hypotheses. In fact, we establish the following result, of interest in its own right.

Proposition 2.2.2. Assume Ω ⊂ R^n is open and convex, and let f : Ω → R^n be C^1. Assume that the symmetric part of Df(u) is positive definite for each u ∈ Ω. Then f is one-to-one on Ω.

Proof. Take distinct points u_1, u_2 ∈ Ω, and set u_2 - u_1 = w. Consider φ : [0, 1] → R, given by

φ(t) = w · f(u_1 + tw).

Then φ'(t) = w · Df(u_1 + tw)w > 0 for t ∈ [0, 1], so φ(0) ≠ φ(1). But φ(0) = w · f(u_1) and φ(1) = w · f(u_2), so f(u_1) ≠ f(u_2). □
To continue the proof of Theorem 2.2.1, let us set

(2.2.1) f(u) = A(F(p_0 + u) - q_0), A = DF(p_0)^{-1}.

Then f(0) = 0 and Df(0) = I, the identity matrix. We will show that f maps a neighborhood of 0 one-to-one and onto some neighborhood of 0. We can write

(2.2.2) f(u) = u + R(u), R(0) = 0, DR(0) = 0,

and R is C^1. Pick b > 0 such that

(2.2.3) ‖u‖ ≤ 2b ⟹ ‖DR(u)‖ ≤ 1/2.

Then Df = I + DR has positive-definite symmetric part on B_{2b}(0) = {u ∈ R^n : ‖u‖ < 2b}, so by Proposition 2.2.2, f : B_{2b}(0) → R^n is one-to-one. We will show that the range f(B_{2b}(0)) contains B_b(0), that is to say, we can solve

(2.2.4) f(u) = v,

given v ∈ B_b(0), for some (unique) u ∈ B_{2b}(0). This is equivalent to u + R(u) = v. To get the solution, we set

(2.2.5) T_v(u) = v - R(u).

Then solving (2.2.4) is equivalent to solving

(2.2.6) T_v(u) = u.

We look for a fixed point

(2.2.7) u = K(v) = f^{-1}(v).

Also, we want to show that DK(0) = I, i.e., that

(2.2.8) K(v) = v + r(v), r(v) = o(‖v‖).

The "little oh" notation is defined in Exercise 8 of §2.1. If we succeed in doing this, it follows that, for y close to q_0, G(y) = F^{-1}(y) is defined. Also, taking

x = p_0 + u, y = F(x), v = f(u) = A(y - q_0),

as in (2.2.1), we have, via (2.2.8),

G(y) = p_0 + u = p_0 + K(v) = p_0 + K(A(y - q_0)) = p_0 + A(y - q_0) + o(‖y - q_0‖).

Hence G is differentiable at q_0, and

(2.2.9) DG(q_0) = DF(p_0)^{-1}.

A parallel argument, with p_0 replaced by a nearby x and y = F(x), gives

(2.2.10) DG(y) = DF(G(y))^{-1}.

Thus our task is to solve (2.2.6). To do this, we use the following general result, known as the contraction mapping theorem.

Theorem 2.2.3. Let X be a complete metric space, and let T : X → X satisfy

(2.2.11) dist(Tx, Ty) ≤ r dist(x, y),

for some r < 1. (We say T is a contraction.) Then T has a unique fixed point x. For any y_0 ∈ X, T^k y_0 → x as k → ∞.

Proof. Pick y_0 ∈ X and let y_k = T^k y_0. Then dist(y_k, y_{k+1}) ≤ r^k dist(y_0, y_1), so

(2.2.12) dist(y_k, y_{k+m}) ≤ dist(y_k, y_{k+1}) + ⋯ + dist(y_{k+m-1}, y_{k+m}) ≤ (r^k + ⋯ + r^{k+m-1}) dist(y_0, y_1) ≤ r^k (1 - r)^{-1} dist(y_0, y_1).

It follows that (y_k) is a Cauchy sequence, so it converges; y_k → x. Since T y_k = y_{k+1} and T is continuous, it follows that Tx = x, i.e., x is a fixed point. Uniqueness of the fixed point is clear from the estimate dist(Tx, Tx') ≤ r dist(x, x'), which implies dist(x, x') = 0 if x and x' are fixed points. This proves Theorem 2.2.3. □
Returning to the task of solving (2.2.6), having b as in (2.2.3), we claim that (2.2.13) where :S B b( ) (2.2.14 ) sup
Xu = (u E 2 O : Ilu  v ii Au}, Au = Il wl O. Now (2.2.26) holds for 8 E (Jr/2, Jr/2), but not on all of (Jr, Jr). Furthermore, (2.2.27) holds for (r, 8) in a neighborhood of (ro, 80) = (1, 0), but it does not hold on all of (0, 00) X (Jr/2, Jr/2). We see that Proposition 2.2.2 does not capture the full force of the diffeomorphism property of (2.2.24). We move on to another example. As in §2.1, we can extend Theorem 2.2.1, replac ing Rn by a finitedimensional real vector space, isometric to a Euclidean space, such ' n as M(n, R) '" R . As an example, consider Exp : M(n, R) M(n, R), Exp(X) = = k=�00o ,k1. Xk (2.2.28) Smoothness of Exp follows from Corollary 2.1.12. Since (2.2.29) Exp(Y) = I + Y + 2:1 y' + . . . , we have (2. 2.30) D Exp(O)Y = Y, \fY E M(n, R), so D Exp(O) is invertible. Then Theorem 2.2.1 implies that there exist a neighborhod U of a E M(n, R) and a neighborhood V of I E M(n, R) such that Exp : U V is a smooth diffeomorphism. To motivate the next result, we consider the following example. Take a > a and consider the equation x' + y' = a', F(x,y) = x' + y'. (2.2.31) Note that (2.2.32) DF(x,y) = (2x 2y), D](x,y) = 2x, DyF(x,y) = 2y. The equation (2.2.31) defines y implicitly as a smooth function of x if Ixl < a. Explicitly, (2.2.33) I xl < a y = Ya'  xl. +

.
1
+
eX
....
==}
62
2.
Multivariable differential calculus
x
Similarly, (2.2.31) defines implicitly as a smooth function of y if Iy l
)
0, there exists I :S a. Furthermore, (2.2.35)
Ixo
Xo E It,
Similarly, given and
< a; explicitly as

Yo E It, there exists Xo such that F(xo, Yo) = a' if and only if Iyo I :S a,
(2.2.36) Note also that whenever (2.2.37)
(x, y) E It' and F(x, y) = a' > 0, DF(x,y) '" 0, so either DxF(x, y) '" 0 or DyF(x, y) '" 0, and, as seen above whenever (xo, Yo) E It' and F(xo, Yo) = a' > 0, we can solve F(x, y) = a' for either y as a smooth function of x for x near Xo or for x as a smooth function ofy for y near Yo . We move from these observations to the next result, the implicit function theorem.
Theorem 2.2.5. Suppose $U$ is a neighborhood of $x_0 \in \mathbb{R}^m$ and $V$ is a neighborhood of $y_0 \in \mathbb{R}^\ell$, and we have a $C^k$ map

(2.2.38) $F : U \times V \to \mathbb{R}^\ell, \quad F(x_0, y_0) = u_0.$

Assume $D_y F(x_0, y_0)$ is invertible. Then the equation $F(x, y) = u_0$ defines $y = g(x, u_0)$ for $x$ near $x_0$ (satisfying $g(x_0, u_0) = y_0$), with $g$ a $C^k$ map.

Proof. Consider $H : U \times V \to \mathbb{R}^m \times \mathbb{R}^\ell$ defined by

(2.2.39) $H(x, y) = (x, F(x, y)).$

(Actually, regard $(x, y)$ and $(x, F(x, y))$ as column vectors.) We have

(2.2.40) $DH = \begin{pmatrix} I & 0 \\ D_x F & D_y F \end{pmatrix}.$

Thus $DH(x_0, y_0)$ is invertible, so $G = H^{-1}$ exists, on a neighborhood of $(x_0, u_0)$, and is $C^k$, by the inverse function theorem. Let us set

(2.2.41) $G(x, u) = (s(x, u), g(x, u)).$

Then

(2.2.42) $H \circ G(x, u) = H(s(x, u), g(x, u)) = (s(x, u), F(s(x, u), g(x, u))).$

Since $H \circ G(x, u) = (x, u)$, we have $s(x, u) = x$, so

(2.2.43) $G(x, u) = (x, g(x, u)),$

and hence

(2.2.44) $H \circ G(x, u) = (x, F(x, g(x, u))),$
2.2. Inverse function and implicit function theorems 63

hence

(2.2.45) $F(x, g(x, u)) = u.$

Note that $G(x_0, u_0) = (x_0, y_0)$, so $g(x_0, u_0) = y_0$, and $g$ is the desired map. $\square$
Here is an example where Theorem 2.2.5 applies. Set

(2.2.46) $F : \mathbb{R}^4 \to \mathbb{R}^2, \quad F(u, v, x, y) = \begin{pmatrix} xu^2 + yv^2 \\ xu + yv \end{pmatrix}.$

We have

(2.2.47) $F(2, 0, 1, 1) = \begin{pmatrix} 4 \\ 2 \end{pmatrix}.$

Note that

(2.2.48) $D_{u,v} F(u, v, x, y) = \begin{pmatrix} 2xu & 2yv \\ x & y \end{pmatrix},$

hence

(2.2.49) $D_{u,v} F(2, 0, 1, 1) = \begin{pmatrix} 4 & 0 \\ 1 & 1 \end{pmatrix}$

is invertible, so Theorem 2.2.5 (with $(u, v)$ in place of $y$ and $(x, y)$ in place of $x$) implies that the equation

(2.2.50) $F(u, v, x, y) = \begin{pmatrix} 4 \\ 2 \end{pmatrix}$

defines smooth functions

(2.2.51) $u = u(x, y), \quad v = v(x, y),$

for $(x, y)$ near $(1, 1)$, satisfying (2.2.50), with $(u(1, 1), v(1, 1)) = (2, 0)$.

Let us next focus on the case $\ell = 1$ of Theorem 2.2.5, so

(2.2.52) $z = (x, y) \in \mathbb{R}^n, \quad x \in \mathbb{R}^{n-1}, \ y \in \mathbb{R}, \quad F(z) \in \mathbb{R}.$

Then $D_y F = \partial_y F$. If $F(x_0, y_0) = u_0$, Theorem 2.2.5 says that if

(2.2.53) $\partial_y F(x_0, y_0) \neq 0,$

then one can solve

(2.2.54) $F(x, y) = u_0$

for $y = g(x, u_0)$, for $x$ near $x_0$ (satisfying $g(x_0, u_0) = y_0$), with $g$ a $C^k$ function. This phenomenon was illustrated in (2.2.31)–(2.2.35). To generalize the observations involving (2.2.36)–(2.2.37), we note the following. Set $z = (z_1, \dots, z_n)$, $z_0 = (x_0, y_0)$. The condition (2.2.53) is that $\partial_{z_n} F(z_0) \neq 0$. Now a simple permutation of the variables allows us to assume

(2.2.55) $\partial_{z_j} F(z_0) \neq 0$

for some $j$, and deduce that one can solve

(2.2.56) $F(z) = u_0$

for $z_j$ as a $C^k$ function of the remaining variables, for $z$ near $z_0$. Let us record this result, changing notation and replacing $z$ by $x$.
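A short Newton iteration illustrates how the functions $u(x, y), v(x, y)$ of (2.2.51) can be computed in practice. This is only a numerical sketch: the map $F(u, v, x, y) = (xu^2 + yv^2,\ xu + yv)$ and the sample points are taken from the example above, and the solver itself is not part of the text.

```python
def F(u, v, x, y):
    # The map of (2.2.46), with (x, y) treated as parameters.
    return (x * u * u + y * v * v, x * u + y * v)

def newton_uv(x, y, target=(4.0, 2.0), u=2.0, v=0.0):
    # Solve F(u, v, x, y) = target for (u, v), starting near (2, 0).
    for _ in range(50):
        f1, f2 = F(u, v, x, y)
        r1, r2 = f1 - target[0], f2 - target[1]
        # Jacobian D_{u,v}F = [[2xu, 2yv], [x, y]], as in (2.2.48).
        a11, a12, a21, a22 = 2 * x * u, 2 * y * v, x, y
        det = a11 * a22 - a12 * a21
        du = (r1 * a22 - r2 * a12) / det
        dv = (a11 * r2 - a21 * r1) / det
        u, v = u - du, v - dv
    return u, v

u, v = newton_uv(1.0, 1.0)
assert abs(u - 2.0) < 1e-9 and abs(v) < 1e-9   # (u(1,1), v(1,1)) = (2, 0)

u2, v2 = newton_uv(1.05, 0.95)                 # a nearby (x, y)
f1, f2 = F(u2, v2, 1.05, 0.95)
assert abs(f1 - 4.0) < 1e-9 and abs(f2 - 2.0) < 1e-9
```

The invertibility of $D_{u,v}F$ in (2.2.49) is exactly what keeps the Newton correction well defined near $(2, 0)$.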
Proposition 2.2.6. Let $\mathcal{O}$ be a neighborhood of $x_0 \in \mathbb{R}^n$. Assume we have a $C^k$ function

(2.2.57) $F : \mathcal{O} \to \mathbb{R}, \quad F(x_0) = u_0,$

and assume

(2.2.58) $DF(x_0) \neq 0.$

Then there exists $j \in \{1, \dots, n\}$ such that one can solve $F(x) = u_0$ for

(2.2.59) $x_j = g(x_1, \dots, x_{j-1}, x_{j+1}, \dots, x_n),$

with $(x_{10}, \dots, x_{j0}, \dots, x_{n0}) = x_0$, for a $C^k$ function $g$.

Remark. For $F : \mathcal{O} \to \mathbb{R}$, it is common to denote $DF(x)$ by $\nabla F(x)$,

(2.2.60) $\nabla F(x) = \Bigl( \frac{\partial F}{\partial x_1}, \dots, \frac{\partial F}{\partial x_n} \Bigr).$

Here is an example to which Proposition 2.2.6 applies. Using the notation $(x, y) = (x_1, x_2)$, set

(2.2.61) $F : \mathbb{R}^2 \to \mathbb{R}, \quad F(x, y) = x^2 + y^2 - x.$

Then

(2.2.62) $\nabla F(x, y) = (2x - 1, \ 2y),$

which vanishes if and only if $x = 1/2$, $y = 0$. Hence Proposition 2.2.6 applies if and only if $(x_0, y_0) \neq (1/2, 0)$.

Let us give an example involving a real-valued function on $M(n, \mathbb{R})$, namely

(2.2.63) $\det : M(n, \mathbb{R}) \to \mathbb{R}.$

As indicated in Exercise 13 of §2.1, if $\det X \neq 0$,

(2.2.64) $D\det(X)Y = (\det X) \operatorname{Tr}(X^{-1} Y),$

so

(2.2.65) $\det X \neq 0 \Longrightarrow D\det(X) \neq 0.$

We deduce that if

(2.2.66) $X_0 \in M(n, \mathbb{R}), \quad \det X_0 = a \neq 0,$

then, writing

(2.2.67) $X = (x_{jk}),$

there exist $\mu, \nu \in \{1, \dots, n\}$ such that the equation

(2.2.68) $\det X = a$

has a smooth solution of the form

(2.2.69) $x_{\mu\nu} = g\bigl(\{x_{jk} : (j, k) \neq (\mu, \nu)\}\bigr),$

such that if the argument of $g$ consists of the matrix entries of $X_0$ other than the $\mu, \nu$ entry, then the left side of (2.2.69) is the $\mu, \nu$ entry of $X_0$.

2.2. Inverse function and implicit function theorems 65

Let us return to the setting of Theorem 2.2.5, with $\ell$ not necessarily equal to 1. In notation parallel to that of (2.2.55), we assume $F$ is a $C^k$ map,

(2.2.70) $F : \mathcal{O} \to \mathbb{R}^\ell,$

where $\mathcal{O}$ is a neighborhood of $z_0$ in $\mathbb{R}^n$. We assume

(2.2.71) $DF(z_0) : \mathbb{R}^n \to \mathbb{R}^\ell \text{ is surjective.}$

Then, upon reordering the variables $z = (z_1, \dots, z_n)$, we can write $z = (x, y)$, $x = (x_1, \dots, x_{n-\ell})$, $y = (y_1, \dots, y_\ell)$, such that $D_y F(z_0)$ is invertible, and Theorem 2.2.5 applies. Thus (for this reordering of variables), we have a $C^k$ solution to

(2.2.72) $F(x, y) = u_0, \quad y = g(x, u_0),$

satisfying $y_0 = g(x_0, u_0)$, $z_0 = (x_0, y_0)$.

To give one example to which this result applies, we take another look at $F : \mathbb{R}^4 \to \mathbb{R}^2$ in (2.2.46). We have

(2.2.73) $DF(u, v, x, y) = \begin{pmatrix} 2xu & 2yv & u^2 & v^2 \\ x & y & u & v \end{pmatrix}.$

The reader is invited to determine for which $(u, v, x, y) \in \mathbb{R}^4$ the matrix on the right side of (2.2.73) has rank 2.

Here is another example, involving a map defined on $M(n, \mathbb{R})$. Set

(2.2.74) $F : M(n, \mathbb{R}) \to \mathbb{R}^2, \quad F(X) = \begin{pmatrix} \det X \\ \operatorname{Tr} X \end{pmatrix}.$

Parallel to (2.2.64), if $\det X \neq 0$, $Y \in M(n, \mathbb{R})$,

(2.2.75) $DF(X)Y = \begin{pmatrix} (\det X) \operatorname{Tr}(X^{-1} Y) \\ \operatorname{Tr} Y \end{pmatrix}.$

Hence, given $\det X \neq 0$, $DF(X) : M(n, \mathbb{R}) \to \mathbb{R}^2$ is surjective if and only if

(2.2.76) $L : M(n, \mathbb{R}) \to \mathbb{R}^2, \quad LY = \begin{pmatrix} \operatorname{Tr}(X^{-1} Y) \\ \operatorname{Tr} Y \end{pmatrix},$

is surjective. This is seen to be the case if and only if $X$ is not a scalar multiple of the identity $I \in M(n, \mathbb{R})$.
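The formula $D\det(X)Y = (\det X)\operatorname{Tr}(X^{-1}Y)$ of (2.2.64) can be tested by a finite difference. The $2\times 2$ matrices below are arbitrary choices for illustration:

```python
# Finite-difference check of D det(X)Y = (det X) Tr(X^{-1} Y), for 2x2
# matrices, where the inverse has a closed form.

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def trace_prod(A, B):
    # Tr(A B) for 2x2 matrices.
    return sum(A[i][k] * B[k][i] for i in range(2) for k in range(2))

X = [[2.0, 1.0], [0.5, 3.0]]   # det X = 5.5, so X is invertible
Y = [[0.3, -1.0], [2.0, 0.7]]

t = 1e-6
Xt = [[X[i][j] + t * Y[i][j] for j in range(2)] for i in range(2)]
numeric = (det2(Xt) - det2(X)) / t                # (det(X+tY) - det X)/t
formula = det2(X) * trace_prod(inv2(X), Y)        # (det X) Tr(X^{-1} Y)
assert abs(numeric - formula) < 1e-4
```

For this particular $X, Y$ the directional derivative works out to $0.8$, and the forward difference agrees to the expected $O(t)$ accuracy.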
Exercises

1. Suppose $F : U \to \mathbb{R}^n$ is a $C^2$ map, $p \in U$, open in $\mathbb{R}^n$, and $DF(p)$ is invertible. With $q = F(p)$, define a map $N$ on a neighborhood of $p$ by

(2.2.77) $N(x) = x + DF(x)^{-1} \bigl( q - F(x) \bigr).$

Show that there exist $\varepsilon > 0$ and $C < \infty$ such that

$0 \le \|x - p\| \le \varepsilon \Longrightarrow \|N(x) - p\| \le C \|x - p\|^2.$

…$\Sigma(x) = \Phi \circ \Psi \circ G(x)$, with $\Phi(X) = X^{-1}$, as in Exercises 3 and 10 of §2.1, $\Sigma(x) = DG(x)$, $\Psi(x) = DF(x)$. Apply Exercise 9 of §2.1 to show that, in general,

$G, \Psi, \Phi \in C^\ell \Longrightarrow \Sigma \in C^\ell.$

Deduce that if one is given $F \in C^k$ and one knows that $G \in C^{k-1}$, then this result applies to give $\Sigma = DG \in C^{k-1}$, hence $G \in C^k$.

7. Show that there is a neighborhood $\mathcal{O}$ of $(1, 0) \in \mathbb{R}^2$ and there are functions $u, v, w \in C^\infty(\mathcal{O})$ ($u = u(x, y)$, etc.) satisfying the equations

$u^3 + v^3 - x w^3 = 0, \quad u^2 + y w^2 + v = 1, \quad x u + y v w = 1,$

for $(x, y) \in \mathcal{O}$, and satisfying $u(1, 0) = 1$, $v(1, 0) = 0$, $w(1, 0) = 1$.
Hint. Define $F : \mathbb{R}^5 \to \mathbb{R}^3$ by

$F(u, v, w, x, y) = \begin{pmatrix} u^3 + v^3 - x w^3 \\ u^2 + y w^2 + v \\ x u + y v w \end{pmatrix}.$

Then $F(1, 0, 1, 1, 0) = (0, 1, 1)^t$. Evaluate the $3 \times 3$ matrix $D_{u,v,w} F(1, 0, 1, 1, 0)$. Compare (2.2.46)–(2.2.51).

8. Consider $F : M(n, \mathbb{R}) \to M(n, \mathbb{R})$, given by $F(X) = X^2$. Show that $F$ is a diffeomorphism of a neighborhood of the identity matrix $I$ onto a neighborhood of $I$. Show that $F$ is not a diffeomorphism of a neighborhood of $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ onto a neighborhood of $I$ (in case $n = 2$).

9. Prove Corollary 2.2.4.

10. Let $f : \mathbb{R}^2 \to \mathbb{R}^3$ be a $C^1$ map. Assume $f(0) = (0, 0, 0)$ and

$\frac{\partial f}{\partial x}(0) \times \frac{\partial f}{\partial y}(0) = (0, 0, 1).$

Show that there exist neighborhoods $\mathcal{O}$ and $\Omega$ of $0 \in \mathbb{R}^2$ and a $C^1$ map $u : \Omega \to \mathbb{R}$ such that the image of $\mathcal{O}$ under $f$ in $\mathbb{R}^3$ is the graph of $u$ over $\Omega$.
Hint. Let $\Pi : \mathbb{R}^3 \to \mathbb{R}^2$ be $\Pi(x, y, z) = (x, y)$, and consider $f^b(x, y) = \Pi(f(x, y))$, $f^b : \mathbb{R}^2 \to \mathbb{R}^2$. Show that $Df^b(0) : \mathbb{R}^2 \to \mathbb{R}^2$ is invertible, and apply the inverse function theorem. Then let $u$ be the $z$-component of $f \circ (f^b)^{-1}$.

11. Generalize Exercise 10 to the setting where $f : \mathbb{R}^m \to \mathbb{R}^n$ ($m < n$) is $C^1$ and $Df(0) : \mathbb{R}^m \to \mathbb{R}^n$ is injective.
Remark. For related results, see the opening paragraphs of §3.2.

12. Let $\mathcal{O} \subset \mathbb{R}^n$ be open and contain $p_0$. Assume $F : \overline{\mathcal{O}} \to \mathbb{R}^n$ is continuous and $F(p_0) = q_0$. Assume $F$ is $C^1$ on $\mathcal{O}$ and $DF(x)$ is invertible for all $x \in \mathcal{O}$. Finally, assume there exists $R > 0$ such that

(2.2.79) $x \in \partial\mathcal{O} \Longrightarrow \|F(x) - q_0\| \ge R.$

Show that

(2.2.80) $B_{R/2}(q_0) \subset F(\mathcal{O}).$

Hint. Given $y_0 \in B_{R/2}(q_0)$, use compactness to show that there exists $x_0 \in \overline{\mathcal{O}}$ such that

$\|F(x_0) - y_0\| = \inf_{x \in \overline{\mathcal{O}}} \|F(x) - y_0\|.$

Use the hypothesis (2.2.79) to show that $x_0 \in \mathcal{O}$. If $F(x_0) \neq y_0$, use

$F(x_0 + tz) = F(x_0) + t \, DF(x_0) z + o(\|tz\|)$

to produce $z \in \mathbb{R}^n$ (say $DF(x_0) z = y_0 - F(x_0)$) such that $F(x_0 + tz)$ is closer to $y_0$ than $F(x_0)$ is, for small $t > 0$. This is a contradiction.

13. Do Exercise 12 with the conclusion (2.2.80) strengthened to

(2.2.81) $B_R(q_0) \subset F(\mathcal{O}).$

Hint. It suffices to show that $F(\mathcal{O}) \supset B_S(q_0)$ for each $S < R$. Given such $S$, produce a diffeomorphism $\varphi : \mathbb{R}^n \to \mathbb{R}^n$ such that Exercise 12 applies to $\varphi \circ F$, and yields the desired conclusion.
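The map $N$ of Exercise 1 (a step of Newton's method, (2.2.77)) can be probed numerically. In the one-dimensional sketch below, $F(x) = x^3 + 2x$ and $p = 1$ are hypothetical choices; the ratio $|N(x) - p| / |x - p|^2$ staying bounded reflects the claimed quadratic estimate:

```python
# Numerical illustration of the quadratic estimate |N(x)-p| <= C |x-p|^2
# for the Newton map N(x) = x + DF(x)^{-1}(q - F(x)); F and p are
# arbitrary choices with DF(p) invertible.

def F(x):
    return x**3 + 2.0 * x

def DF(x):
    return 3.0 * x**2 + 2.0

p = 1.0
q = F(p)

def N(x):
    return x + (q - F(x)) / DF(x)

ratios = []
for eps in (1e-1, 1e-2, 1e-3):
    x = p + eps
    ratios.append(abs(N(x) - p) / eps**2)

# The ratio stays bounded (here it tends to F''(p) / (2 F'(p)) = 0.6).
assert max(ratios) < 10.0
```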
2.3. Systems of differential equations and vector fields
In this section we study systems of ODE,

(2.3.1) $\frac{dy}{dt} = F(t, y), \quad y(t_0) = y_0.$

To begin, we prove the following fundamental existence and uniqueness result.

Theorem 2.3.1. Let $y_0 \in \mathcal{O}$, an open subset of $\mathbb{R}^n$, and let $I \subset \mathbb{R}$ be an interval containing $t_0$. Suppose $F$ is continuous on $I \times \mathcal{O}$ and satisfies the following Lipschitz estimate in $y$:

(2.3.2) $\|F(t, y_1) - F(t, y_2)\| \le L \|y_1 - y_2\|$

for $t \in I$, $y_j \in \mathcal{O}$. Then the equation (2.3.1) has a unique solution on some $t$-interval containing $t_0$.

To begin the proof, we note that the equation (2.3.1) is equivalent to the integral equation

(2.3.3) $y(t) = y_0 + \int_{t_0}^{t} F(s, y(s)) \, ds.$

Existence will be established via the Picard iteration method, which is the following. Guess $y_0(t)$, e.g., $y_0(t) = y_0$. Then set

(2.3.4) $y_k(t) = y_0 + \int_{t_0}^{t} F(s, y_{k-1}(s)) \, ds.$

We aim to show that, as $k \to \infty$, $y_k(t)$ converges to a (unique) solution of (2.3.3), at least for $t$ close enough to $t_0$.
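The Picard iteration (2.3.4) can be carried out on a grid. The sketch below treats $dy/dt = y$, $y(0) = 1$ (whose solution is $e^t$); the grid size, iteration count, and trapezoid quadrature are arbitrary implementation choices:

```python
import math

# Discretized Picard iteration for dy/dt = y, y(0) = 1, on [0, 1].
n = 2000
h = 1.0 / n
ts = [i * h for i in range(n + 1)]

def picard_step(y):
    # y_new(t) = y0 + \int_0^t F(s, y(s)) ds, with F(s, y) = y,
    # the integral approximated by the trapezoid rule.
    out = [1.0]
    acc = 0.0
    for i in range(1, n + 1):
        acc += 0.5 * h * (y[i - 1] + y[i])
        out.append(1.0 + acc)
    return out

y = [1.0] * (n + 1)          # initial guess y_0(t) = y_0
for _ in range(30):
    y = picard_step(y)

err = max(abs(y[i] - math.exp(ts[i])) for i in range(n + 1))
assert err < 1e-5
```

Each iterate adds essentially one more term of the Taylor series of $e^t$, which is the factorial-rate convergence underlying the contraction argument below.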
To do this, we use the contraction mapping theorem, established in §2.2. We look for a fixed point of $\Phi$, defined by

(2.3.5) $\Phi(y)(t) = y_0 + \int_{t_0}^{t} F(s, y(s)) \, ds.$

Let

(2.3.6) $X = \bigl\{ u \in C(J, \mathbb{R}^n) : u(t_0) = y_0, \ \sup_{t \in J} \|u(t) - y_0\| \le R \bigr\}.$

Here $J = [t_0 - T, t_0 + T]$, where $T$ will be chosen, sufficiently small, below. The quantity $R$ is picked so that $\overline{B}_R(y_0) = \{y : \|y - y_0\| \le R\}$ is contained in $\mathcal{O}$, and we also suppose $J \subset I$. Then there exists $M$ such that

(2.3.7) $\sup_{t \in J, \ \|y - y_0\| \le R} \|F(t, y)\| \le M.$

Then, provided $TM \le R$ and $TL < 1$, one verifies that $\Phi$ maps $X$ to itself and is a contraction on $X$, so $\Phi$ has a unique fixed point $y \in X$, which solves (2.3.3), hence (2.3.1), for $|t - t_0| \le T$.

The next result allows one to continue such a local solution.

Proposition 2.3.2. Let $F$ be as in Theorem 2.3.1. Suppose $y(t)$ solves (2.3.1) for $t \in (a, b)$, and assume there is a compact set $K \subset \mathcal{O}$ such that $y(t) \in K$ for all $t \in (a, b)$. Then there exist $a_1 < a$ and $b_1 > b$ such that $y(t)$ solves (2.3.1) for $t \in (a_1, b_1)$.

Proof. We deduce from (2.3.14) that there exists $\delta > 0$ such that, for each $y_1 \in K$ and $t_1 \in [a, b]$,

(2.3.15) the solution to (2.3.16) exists on the interval $[t_1 - \delta, t_1 + \delta]$,

where

(2.3.16) $\frac{dy}{dt} = F(t, y), \quad y(t_1) = y_1.$

Now, under the current hypotheses, take $t_1 \in (b - \delta/2, b)$ and $y_1 = y(t_1)$, with $y(t)$ solving (2.3.1). Then solving (2.3.16) continues $y(t)$ past $t = b$. Similarly one can continue $y(t)$ past $t = a$. $\square$

Here is an example of a global existence result that can be deduced from Proposition 2.3.2. Consider the $2 \times 2$ system for $y = (x, v)$:

(2.3.17) $\frac{dx}{dt} = v, \quad \frac{dv}{dt} = -x^3.$

Here we take $\mathcal{O} = \mathbb{R}^2$, $F(t, y) = F(t, x, v) = (v, -x^3)$. If (2.3.17) holds for $t \in (a, b)$, we have

(2.3.18) $\frac{d}{dt}\Bigl( \frac{v^2}{2} + \frac{x^4}{4} \Bigr) = v \frac{dv}{dt} + x^3 \frac{dx}{dt} = 0,$

so each $y(t) = (x(t), v(t))$ solving (2.3.17) lies on a level curve $x^4/4 + v^2/2 = C$, hence is confined to a compact subset of $\mathbb{R}^2$, yielding global existence of solutions to (2.3.17). For more examples of global existence, see Exercises 2–4 below, and also further material below, treating linear systems.

The discussion above dealt with first-order systems. Often one wants to deal with a higher-order ODE. There is a standard method of reducing an $n$th-order ODE

(2.3.19) $y^{(n)}(t) = f(t, y, y', \dots, y^{(n-1)})$

to a first-order system. One sets $u = (u_0, \dots, u_{n-1})$ with

(2.3.20) $u_0 = y, \quad u_j = y^{(j)},$

and then

(2.3.21) $\frac{du_j}{dt} = u_{j+1} \ \ (0 \le j \le n - 2), \quad \frac{du_{n-1}}{dt} = f(t, u_0, \dots, u_{n-1}).$

If $y$ takes values in $\mathbb{R}^k$, then $u$ takes values in $\mathbb{R}^{kn}$.
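The system (2.3.17) is itself the first-order reduction of $x'' = -x^3$ in the sense of (2.3.19)–(2.3.21). A numerical sketch (the RK4 integrator, step size, and horizon are arbitrary choices, not part of the text) confirms that the conserved quantity $v^2/2 + x^4/4$ of (2.3.18) stays nearly constant along a computed orbit:

```python
# Integrate the first-order system x' = v, v' = -x^3 and monitor the
# conserved energy E = v^2/2 + x^4/4 from (2.3.18).

def rhs(state):
    x, v = state
    return (v, -x**3)

def rk4_step(state, h):
    def add(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = rhs(state)
    k2 = rhs(add(state, k1, h / 2))
    k3 = rhs(add(state, k2, h / 2))
    k4 = rhs(add(state, k3, h))
    return (state[0] + h / 6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def energy(state):
    x, v = state
    return v * v / 2 + x**4 / 4

s = (1.0, 0.0)
e0 = energy(s)
for _ in range(10000):          # integrate to t = 10 with h = 1e-3
    s = rk4_step(s, 1e-3)
assert abs(energy(s) - e0) < 1e-8
```

Confinement to the level curve $E = e_0$ is exactly the compactness that Proposition 2.3.2 converts into global existence.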
If the system (2.3.1) is nonautonomous (i.e., if $F$ explicitly depends on $t$), it can be converted to an autonomous system (one with no explicit $t$-dependence) as follows. Set $z = (t, y)$. We then have

(2.3.22) $\frac{dz}{dt} = \Bigl( 1, \frac{dy}{dt} \Bigr) = (1, F(z)) = G(z).$

Sometimes this process destroys important features of the original system (2.3.1). For example, if (2.3.1) is linear, (2.3.22) might be nonlinear. Nevertheless, the trick of converting (2.3.1) to (2.3.22) has some uses.

Linear systems. Here we consider linear systems of the form

(2.3.23) $\frac{dx}{dt} = A(t) x, \quad x(0) = x_0,$

given $A(t)$ continuous in $t \in I$ (an interval about 0), with values in $M(n, \mathbb{R})$. We will apply Proposition 2.3.2 to establish global existence of solutions. It suffices to establish the following.

Proposition 2.3.3. If $\|A(t)\| \le K$ for $t \in I$, then the solution to (2.3.23) satisfies

(2.3.24) $\|x(t)\| \le e^{K|t|} \|x_0\|.$

Proof. It suffices to prove (2.3.24) for $t \ge 0$. Then $y(t) = e^{-Kt} x(t)$ satisfies

(2.3.25) $\frac{dy}{dt} = C(t) y, \quad y(0) = x_0, \quad C(t) = A(t) - KI.$

We claim that, for $t \ge 0$,

(2.3.26) $\|y(t)\| \le \|y(0)\|,$

which then implies (2.3.24), for $t \ge 0$. In fact,

(2.3.27) $\frac{d}{dt} \|y(t)\|^2 = y'(t) \cdot y(t) + y(t) \cdot y'(t) = 2 y(t) \cdot (A(t) - K) y(t).$

Now $y(t) \cdot A(t) y(t) \le \|y(t)\| \cdot \|A(t) y(t)\| \le \|A(t)\| \, \|y(t)\|^2$, so the hypothesis $\|A(t)\| \le K$ implies

(2.3.28) $\frac{d}{dt} \|y(t)\|^2 \le 0,$

yielding (2.3.26). $\square$

Thanks to Proposition 2.3.3, we have, for $s, t \in I$, the solution operator for (2.3.23),

(2.3.29) $S(t, s) \in M(n, \mathbb{R}), \quad S(t, s) x(s) = x(t).$

We have

(2.3.30) $\frac{\partial}{\partial t} S(t, s) = A(t) S(t, s), \quad S(s, s) = I.$

Note that $S(t, s) S(s, r) = S(t, r)$. In particular, $S(t, s) = S(s, t)^{-1}$.
72
2. Multivariable differential calculus
We can use the solution operator Set, s) to solve the inhomogeneous system dx dt = A(t)x + f(t), x(to) = Xo · (2.3.31) Namely, we can take (2.3.32) x(t) = Set, to)xo + 1 S(t,s)f(s)ds. This is known as Duhamel's formula. Verifying that this solves (2.3.30) is an exercise. We will make good use of this in the next subsection. Dependence of solutions on initial data and other parameters. We study how the solution to a system of differential equations dx (2.3.33) dt = F(x), x(O) = y, depends on the initial condition y. As shown in (2.3.22), there is no loss of generality in considering the autonomous system ( 2. 3 . 3 3 ). We will assume that F D Rn is smooth and that D c Rn is open and convex, and denote the solution to (2.3.33) by x = x(t, y). We want to examine smoothness in y. LetDF(x) denote the matrix valued function of partial derivatives of F. To start, we assume F is of class C1 , i.e., DF is continuous on D, and we want to show x(t, y) is differentiable in y. Let us recall what this means. Take y E D and pick R > 0 such thatBR (y) is contained in D. We seek an matrix Wet, y) such that, for Wo E Rn , Il woll :S R, ( 2. 3 . 3 4 ) x(t, y + wo) = x(t, y) + Wet, y)wo + ret, y, wo), where ( 2. 3 . 3 5 ) r(t,y, wo) = o( ll wo ll ), which means ret, y, wo) = ( 2. 3 . 3 6 ) lim o wo� Il wo ll When this holds, x(t,y) is differentiable in y, and ( 2. 3 . 3 7 ) Dyx(t,y) = W(t,y). In other words, ( 2. 3 . 3 8 ) x(t,y + wo) = x(t,y) + Dyx( t,y)wo + o( llwoll ). In the course of proving this differentiability, we also want to produce an equation for Wet, y) = Dyx(t, y). This can be done as follows. Suppose x(t, y) were differentiable in y. (We do not yet know that it is, but that is okay.) Then F(x(t, y» is differentiable in y, so we can applyDy to (2.3.32). Using the chain rule, we get the following equation, dW dt = DF(x)W, W(O, y ) = J, ( 2. 3 . 3 9 ) t
to
:
+
nXn
nX n
o.
2.3.
Systems ofdifferential equations and vectorfields
73
called the linearization of (2.3.33). Here, I is the identity matrix. Equivalently, given Wo E [Rn , (2.3.40) w(t, y) = W( t, y)wo is expected to solve dw dt = DF(x)w, w(O) = woo (2.3.41) Now, we do not yet know that x(t, y) is differentiable, butwe do know from results above on linear systems that (2.3.39) and (2.3.41) are uniquely solvable. It remains to show that with such a choice of W(t,y), (2.3.34)(2.3.35) hold. To rephrase the task, set (2.3.42) x(t) = x(t,y), x,(t) = x(t,y + wo), z(t) = x,(t)  x(t), and let Wet) solve (2.3.39), and let wet) satisfy (2.3.40)(2.3.41). We then have x(t,y + wo) = x(t,y) + W(t,y)wo + (z(t)  w( t»), so the task of verifying (2.3.34)(2.3.35) is equivalent to the task of verifying (2.3.43) Ilz( t)  w( t) 11 = o( ll woll)· To establish (2.3.43), we will obtain for z(t) an equation similar to (2.3.41). To begin, (2.3.42) implies dz (2.3.44) dt = F(x,)  F(x), z(O) = woo Now the fundamental theorem of calculus gives (2.3.45) with , G(x x) = 1 DF(rx, + (l  r)x)dr. (2.3.46) If F is C' , then G is continuous. Then (2.3.44)(2.3.45) yield dz (2.3.47) dt = G( x x )z , z(O) = woo Given that (2.3.48) IIDF(u) 11 :S L, 'if E 0, which we have by continuity ofDF, after possibly shrinking 0 slightly, we deduce from Proposition 2.3.3 that (2.3.49) that is, (2.3.50) Ilx( t, y)  x(t,y + wo)ll :S eltlL ll wo ll· This establishes that x(t, y) is Lipschitz in y. To proceed, since G is continuous and G(x, x) = DF(x), we can rewrite (2.3.47) as dz (2.3.51) dt = G(x + z, x )z = DF(x)z + R(x, z), z(O) = wo, nXn
"
"
u
74
2. Multivariable differential calculus
where (2.3.52) F E CI (O) ==} II R(x,z) 11 = o( llz ll ) = o( ll wa ll )· Now comparing (2.3.51) with (2.3.41), we have (2.3.53) dd/z  w) = DF(x)(z  w) + R(x,z), (z  w)(o) = o. Then Duhamel's formula gives ( 2. 3 . 5 4 ) z(t)  wet) = 1 S(t, s)R(x(s),z(s» ds, where Set, s) is the solution operator for dldt  B(t), with B(t) = G(XI (t), x(t» , which as in (2.3.49), satisfies ( 2. 3 . 5 5 ) We hence have (2.3.43), i.e., ( 2. 3 . 5 6 ) Ilz(t)  w(t) 11 = o( ll wa ll )· This is precisely what is required to show that x(t, y) is differentiable with respect to y, with derivative W = Dyx(t, y) satisfying (2.3.39). Hence we have the following. Proposition 2.3.4. IfF E C I ( O ) and ifsolutions to ( 2. 3 . 3 3 ) existfort E (Ta, Tl ), then, for each such t, x(t, y) is Cl in y, with derivative Dyx(t, y) satisfying (2.3.39 ). We have shown that x(t, y) isboth Lipschitz and differentiable in y. The continuity of W(t, y) iny follows easilyby comparing the differential equations ofthe form (2.3.39) for W(t, y) and W(t, + wa), in the spirit of the analysis of z(t) done above. If F possesses further smoothness, we can establish higher differentiability of x(t, y) in y by the following trick. Couple ( 2. 3 . 3 3) and ( 2. 3 . 3 9 ) to get a system of differ ential equations for (x, W), dx = F(x), dt ( 2. 3 . 5 7 ) dW dt = DF(x)W, with initial conditions ( 2. 3 . 5 8 ) x(o) = y, W(O) = I. We can reiterate the preceding argument, getting results on D/x, W), hence on D;x(t,y), and continue proving the following. t
Y
Proposition 2.3.5. ifF E
Ck(O), then x(t, y) is Ck in y.
Similarly, we can consider dependence of the solution to dx ( 2. 3 . 5 9 ) dt = F(r, x), x(O) = y, on a parameter r, assuming F smoothjointly in (r, x). This result can be deduced from the previous one by the following trick. Consider the system dx = F(z, y), dz = 0, x(O) = ( 2. 3 . 6 0 ) y, z(O) = r. dt dt
2.3.
75
Systems ofdifferential equations and vectorfields
Then we get smoothness of x(t, y) jointly in ( y). As a special case, let F( x) = F(x). In this case x(to, y) = x(Tto, y), so we can improve the conclusion in Proposi tion 2.3.5 to F E ekeD) ==} x E ek jointly in (t, y). (2.3.61) Vector fields and flows. Let U Rn be open. A vector field on U is a smooth map (2.3.62) X: U Rn . Consider the corresponding ODE, y dt = X(y), y(0) = x, (2.3.63) with x E U. A curve y(t) solving (2.3.63) is called an integral curve of the vector field X. It is also called an orbit. For fixed t, write (2.3.64) y = ye t, x) = .'F}(x). The locally defined .'F}, mapping (a subdomain of) U to U, is called theflow gener ated by the vector field X. As a consequence of the results on smooth dependence of solutions to ODE in (2.3.64), yis a smooth function of(t, x). The vector field X defines a differential operator on scalar functions, ( 2. 3 . 6 5 ) Lxf(x) = hlim _o h l [f(.'F�x)  f(x)] = t f(.'F}x) lt_o' We also use the common notation ( 2. 3 . 6 6 ) Lxf(x) = Xf, that is, we apply X to f as a firstorder differential operator. Note that if we apply the chain rule to (2.3.65) and use (2.3.63), we have '" aj (x) axaf ' Lxf(x) = X(x) . Vf(x) = L..J ( 2. 3 . 6 7 ) if X = L a/x)ej , with (ejl the standard basis of Rn. In particular, using the notation ( 2. 3 . 6 6 ), we have ( 2. 3 . 6 8 ) In the notation (2.3.66), ( 2. 3 . 6 9 ) We note that X is a derivation, that is, a map on eOO(U), linear over R, satisfying ( 2. 3 . 7 0 ) XUg) = (Xf)g + f(Xg). Conversely, any derivation on eOO(U) defines a vector field, i.e., has the form (2.3.69), as we now show. Proposition 2.3.6. IfX is a derivation on eOO(U), then X has theform (2. 3 . 6 9). T
T,
T,
T,
T,
c
+
d
d d
j
2. Multivariable differential calculus
76
Set aj(x) = XXj, X, = L aj(x)a/aXj, and Y = X X" Then Y is a derivation satisfyingYxj = Oforeach j. We aim to show that Yf = a for all f. Note thatwhenever Y is a derivation 1 . 1 = 1 =} y . 1 = 2Y . 1 =} y . 1 = O. Thus Y annihilates constants. Thus in this case Y annihilates all polynomials of degree ::; 1. Now we show that Yf(p) = a forl all p E U. Without loss of generality, we can suppose p = O. Then, with bj(x) = fo (ajf)(tx) dt, we can write f(x) = f(O) + � bj(x)xj. It immediately follows that Yf vanishes at 0, so the proposition is proved. D A fundamental fact about vector fields is that they can be straightened out near points where they do not vanish. To see this, let X be a smooth vector field on U, and suppose X(p) '" O. Then near p there is a hyperplane H that is not tangent to X near p, say on a portion we denote M. We can choose coordinates near p sonthat p is the origin l and M is given by = OJ. Thus we can identify a point x' E R near the origin with x' E M. We can(xndefine a map (2.3.71) M (to, to) U by (2.3.72) ( X', t) = Jc ( x' ). This is Coo and has surjective derivative at(O, 0), and so by the inverse function theorem it is a local diffeomorphism. This defines a new coordinate system near p, in which the flow generated by X has the form (2.3.73) k(X', t) = (x', t + s). If we denote the new coordinates by (Ul , "" un), we see that the following result is established. Theorem 2.3.7. If X is a smooth vector field on U and X(p) '" 0, then there exists a coordinate system (Ul , ... , un ), centered at p (so u/p) = 0) with respect to which (2.3.74) X = auan ' By contrast with the situation in Theorem 2.3.7, ifX is a vector field on U, p E U, andX(p) = 0, we say p is a critical point ofX. It is ofinterest to understand the behavior ofX and its flow near such a critical point. 
One special feature that arises here is that if X(p) = 0, then the linearization of (2.3.33) at p, given in general by (2.3.41), takes the special form dw (2.3.75) dt =Aw, A = DX(p), since the solution to (2.3.33) satisfying x(O) = pis x(t) = p. (Here F has been relabeled X.) The solution to (2.3.75) is given explicitly as a matrix exponential, Proof.
Y :
Y
Y
(2.3.76)
X
+
Y
2.3.
Systems ofdifferential equations and vectorfields
77
Figure 2.3.1. 1\vo sinks, a saddle, and a center
explored in the exercise set entitled "Exercises on the matrix exponential" at the end of this section. We say p is a nondegenerate critical point of ifA = DX(p) is invertible. In such a case, the behavior of etA is governed by that of Xthe eigenvalues {Aj} ofA. In particular, ReAj 0 \if j ==} etAwo + 0 as t + +00, ReAj > 0 \if j ==} etAwo  + 0 as t  + 00. We say X has a sink at p in the first case and a source at p in the second case. If Re Aj is positive for some j and negative for some j, but never 0, we sayX has a saddle at p. If ReAj = 0, we say X(p) has a center at p. This exhausts the possibilities for nondegenerate critical points in dimension 2. In higher dimension there are other possibilities, which the reader can catalogue. In dimension 2, we illustrate these cases in Figure 2.3.1, showing two sinks, a saddle, and a center, taking, respectively, 1) ' (1 1) ' (1 1) ' (2.3.77) A = (1 1) ' (a1 a with a > 0 in the second case. Reverse the signs on the first two matrices to exhibit sources. It follows from material above that the action of :F} on p + Wo is close to that of etA on wo, for small wo, and for t in a bounded interval, say [To, Tol Of course, for [To, Tol, both :F}(p + wo) and etAwo move very little when Wo is small. If one is tot Eshow that the structure of the orbits ofX near p is close to that of the linearization, a
o
gl(xa )v,,_ l (Ra ) l :s CV(Ro)w(o),
where Xa is an arbitrary point in Ra. Now, if :Po is a sufficiently fine partition of Ro, it follows from the proof of Propo sition 3.1.6 that the second sum in (3.1.34) is arbitrarily close to dv,,_l , since b,}; has content zero. Furthermore, an argument such as used to prove Proposition 3.1.7 shows that bD has content zero, and one verifies that, for a sufficiently fine partition, the first sum in (3.1.34) is arbitrarily close to f dv". This proves the desired identity (3.1.29). D We next take up the change of variables formula for multiple integrals, extending the onevariable formula, (1.1.44). We begin with a result on linear changes of vari ables. The set of invertible real matrices is denoted In (3.1.35) and subsequent formulas, If dV denotes f dV for some cell R on which f is supported. The integral is independent of the choice of such a cell; cf. (3.1.25). I" go
In
nXn IR
Gl(n, R).
3.1.
The Riemann integral in n variables
95
AE
f
Proposition 3.1.10. Let be a continuous function with compact support in R n . If GI(n, R),
then
J f(x)dV = I detAI J f(Ax)dV.
(3.1.35)
Let 9 be thel set of elements E R) for which (3.1.35) is true. Clearly, 1 andAdetAB IisEa subgroup 9. Using detA = (detA) = (detA)(detB), we can conclude that 9 of R). In more detail, for A E R), f as above, let
Proof.
GI(n,
Gl(n,
Gl(n,
Then for all such f. We see that so
A,B E 9 =IAB(f) = I detBI1I(fA) = I detBIl l detAI1I(f) = I detAB I1I(f) =AB E 9. Applying a similar argument to (f) = I(f), also yields the implication A E 9 AI E 9. To prove the proposition, it will therefore suffice to show that 9 contains all ele ments of the following three forms, since (as shown in the exercises on row reduction at the end of this section) the method of applying elementary row operations to reduce a matrix shows that any element of R) is a product of a finiten number of these elements. Here, {ej : I j denotes the standard basis of R , and (J denotes a permutation of {I, A1 ej = ea(j) , A,ej = cjej, Cj '" a (3.1.36) A3e, = e, + cel , A3ej = ej for j '" 2. The proofs of (3.1.35) in the first two cases are elementary consequences of the defini tion of the Riemann integral, and can be left as exercises. We show that (3.1.35) holds for transformations of the formA3 by using Theorem 3.1.9 (in a special case) to reduce it to the case = 1. Given f E C(R), compactly supported, and b E R, we clearly have (3.1.37 ) J f(x)dx = J f(x+ b)dx. lAA,
=}
Gl(n,
:S
:S n}
. . . , n}.
n
96
3. Multivariable integral calculus and calculus on surfaces
Now, for the case A = A3 , with x = (Xl , X'), we have (3.1.38)
J f(XI
+ ex"
x')dv,,(x) =
J (J f(XI ex" x') dXI) dv,,_I(X') = J(j f(X I , X' ) dx l ) dv,,_ I (X'), +
the second identity by (3.1.37). Thus we get (3.1.35) in case A = A3, so the proposition D It is desirable to extend Proposition 3.1.10 to some discontinuous functions. Given a cell R and f : R It, bounded, we say (3.1.39) f E CI(R) = the set of discontinuities of f is nil. Proposition 3.1.6 implies (3.1.40) CI(R) 3/(R). From the closure of the class of nil sets under finite unions, it is clear that CI(R) is closed under sums and products, i.e., that CI(R) is an algebra offunctions on R. We will denote by CI,(ltn) the set of bounded functions f : Itn nIt such that f has compact support and its set of discontinuities is nil. Any f E CI,(lt ) is supported in somen cell R, and f iR E CI(R). Here is another useful class of functions. Given a cell R It and f : R It bounded, we say f E PK(R) = 3 a partition l' of R such that f is constant ( 3.1.4 1 ) on the interior of each cell Ra E 1'. The following will be a useful tool for extending Proposition 3.1.10. It is also of interest in its own right, and it will have other uses. Proposition 3.1.11. Given a cell R Itn and f : R It bounded, Y(f) = inf!j g dV : g E PK(R), g f J R = inf! j gdV : g E CI(R), g f J ( 3.1.4 2 ) R = inf! j gdV : g E C(R), g f J . � ��
+
c
+
c
+
c
+
2:
2:
2:
Similarly,
(
3.1.43 )
R
I(f) = suP! gdV : g E R = suP! gdV : g E R = suP! gdV : g E R
j j j
PK(R), g :S fJ CI(R), g :S fJ C(R), g :S fJ.
3.1.
The Riemann integral in n variables
97
Proof. Denote the three quantities on the right side of (3.1.42) by I, (f),I,(f), and 13(f), respectively. The definition orI,(f) is sufficiently close to that orI(f) in (3.1.8) that the identity I(f) = I,(f) is apparent. Now I,(f) is an inf over a larger class of functions g than that defining I,(f), so I,(f) :S I,(f). On the other hand, I(g) I(f) for all g involved in definingI,(f), so I,(f) I(f), hence I,(f) = I(f). Next, 13(f) is an inf over a smaller class of functions g than that defining I,(f), so 13(f) I(f). On the other hand, given E > 0 and 1jJ E PK(R), one can readily find g E C(R) such that g 1jJ and JR (g  1jJ) dV < E. This implies 13 (f) :S I(f) + dor all D E > 0 and finishes the proof of (3.1.42). The proof of (3.1.43) is similar. We can now extend Proposition 3.1.10. Say / E :Rn ,(Rn) if / has compact support, say in some cell R, and / E :R(R). Also say / E C,(R ) if / is continuous on Rn, with compact support. Proposition 3.1.12. GivenA E Gl(n, R), the identity (3.1.35 ) holds/or all / E :R,(Rn ). Proof. We have from Proposition 3.1.11 that, for each v E N, there exist gv, hv E C,(Rn) such that hv :S / :S gv and, with B = J / dV, B  � :S J hv dV :S B :S J gv dV :S B + �. Now Proposition 3.1.10 applies to gv and hv , so ( 3.1.44 ) B  � :S I detA I J hv(Ax)dV :S B :S I detA I J gv(Ax)dV :S B + � . Furthermore, with /A (x) = /(Ax), we have hv(Ax) :S /A(X) :S gv(Ax), so (4.44) gives B  �v :S I detA I I(fA ) :S I detA I I(fA ) :S B + �v , ( 3.1.45 ) for all v, and leting v + 00 we obtain (3.1.35). D Corollary 3.1.13. If'L, R n is a compact, contented set and A E Gl(n, R), thenA('L,) = {Ax : x E 'L,} is contented, and ( 3.1.46 ) V(A('L,») = I detAI V('L,). We now extend Proposition 3.1.10 to nonlinear changes of variables. Proposition 3.1.14. Let (') and 0 be open in R n, let G : (') + 0 be a C' diffeomorphism, and let / be a continuousfunction with compact support in D. Then ( 3.1.47 ) J /(y)dV(y) = J /(G(x») I detDG(x)1 dV(x). Proof. 
It suffices to prove the result under the additional assumption that 0, which we make from here on. Also, using a partition of unity (see §3.3), we can/write / as a finite sum of continuous functions with small supports, so it suffices to treat the case where / is supported in a cell Ii. c 0 and / G is supported in a cell R c (') . See Figure 3.1.2. Let l' = {Ra} be a partition of R. Note that for each Ra E 1', bG(Ra) = G(bRa)' so G(Ra) is contented, in view of Propositions 3.1.4 and 3.1.8. 2:
2:
2:
2:
C
n
v
2:
0
98
3.
Multivariable integral calculus and calculus on surfaces R
G
)
o G(Ra)
f+ O
Figure 3.1.2. Image of a cell
Let be the center of Rw and let Ra = Ra  Iw a cell with center at the origin. Then la (3.1.48) is an ndimensional parallelepiped, each point ofwhich is very close to a point in G(Ra)' if Ra is small enough. To be precise, for Y E Rw G(la + y) = �a + DG(la )Y + ( lwY)Y, (lwY) =
See Figure 3.1.3.
11 [DG(la
+
ty)  DG( la )] d t.
G(Ra)
•
�o
'10
•
Figure 3.1.3. Cell image closeup
3.1.
99
The Riemann integral in variables n
Consequently, given $\delta > 0$, if $\eta > 0$ is small enough and $\operatorname{maxsize}(\mathcal P) \le \eta$, then we have

(3.1.49)  $G(R_\alpha) \subset G(\xi_\alpha) + (1+\delta)\, DG(\xi_\alpha)\tilde R_\alpha$

for all $R_\alpha \in \mathcal P$. Now, by (3.1.46),

(3.1.50)  $V(H_\alpha) = |\det DG(\xi_\alpha)|\, V(R_\alpha)$.

Hence

(3.1.51)  $V(G(R_\alpha)) \le (1+\delta)^n\, |\det DG(\xi_\alpha)|\, V(R_\alpha)$.

Now we have

(3.1.52)  $\displaystyle\int_\Omega f\, dV = \sum_\alpha \int_{G(R_\alpha)} f\, dV \le \sum_\alpha \Big(\sup_{R_\alpha} f\circ G\Big)\, V(G(R_\alpha)) \le (1+\delta)^n \sum_\alpha \Big(\sup_{R_\alpha} f\circ G\Big)\, |\det DG(\xi_\alpha)|\, V(R_\alpha)$.

To see that the first line of (3.1.52) holds, note that $f\chi_{G(R_\alpha)}$ is Riemann integrable, by Proposition 3.1.6; note also that $\sum_\alpha f\chi_{G(R_\alpha)} = f$ except on a set of content zero. Then the additivity result in Proposition 3.1.2 applies. The first inequality in (3.1.52) is elementary; the second inequality uses (3.1.51) and $f \ge 0$. If we set

(3.1.53)  $h(x) = f\circ G(x)\, |\det DG(x)|$,

then we have

(3.1.54)  $\sup_{R_\alpha} f\circ G\, |\det DG(\xi_\alpha)| \le \sup_{R_\alpha} h + M\omega(\eta)$,

provided $|f| \le M$ and $\omega(\eta)$ is a modulus of continuity for $DG$. Taking arbitrarily fine partitions, we get

(3.1.55)  $\displaystyle\int_\Omega f\, dV \le \int_{\mathcal O} h\, dV$.

If we apply this result, with $G$ replaced by $G^{-1}$, $\mathcal O$ and $\Omega$ switched, and $f$ replaced by $h$, given by (3.1.53), we have

(3.1.56)  $\displaystyle\int_{\mathcal O} h\, dV \le \int_\Omega h\circ G^{-1}(y)\, |\det DG^{-1}(y)|\, dV(y) = \int_\Omega f\, dV$.

The inequalities (3.1.55) and (3.1.56) together yield the identity (3.1.47). □

We now extend Proposition 3.1.14 to more general Riemann integrable functions. Recall that $f \in \mathcal R_c(\mathbb R^n)$ if $f$ has compact support, say in some cell $R$, and $f \in \mathcal R(R)$. If $\mathcal O \subset \mathbb R^n$ is open and $f \in \mathcal R_c(\mathbb R^n)$ has support in $\mathcal O$, we say $f \in \mathcal R_c(\mathcal O)$. We say $f \in C_c(\mathcal O)$ if $f$ is continuous with compact support in $\mathcal O$.
3. Multivariable integral calculus and calculus on surfaces
Theorem 3.1.15. Let $\mathcal O$ and $\Omega$ be open in $\mathbb R^n$, and let $G : \mathcal O \to \Omega$ be a $C^1$ diffeomorphism. If $f \in \mathcal R_c(\Omega)$, then $f\circ G \in \mathcal R_c(\mathcal O)$, and (3.1.47) holds.

Proof. The proof is similar to that of Proposition 3.1.12. Given $\nu \in \mathbb N$, we have from Proposition 3.1.11 that there exist $g_\nu, h_\nu \in C_c(\Omega)$ such that $h_\nu \le f \le g_\nu$ and, with $B = \int_\Omega f\, dV$,

$B - \dfrac1\nu \le \displaystyle\int h_\nu\, dV \le B \le \int g_\nu\, dV \le B + \dfrac1\nu.$

Then Proposition 3.1.14 applies to $h_\nu$ and $g_\nu$, so

$B - \dfrac1\nu \le \displaystyle\int h_\nu(G(x))\, |\det DG(x)|\, dV(x) \le B \le \int g_\nu(G(x))\, |\det DG(x)|\, dV(x) \le B + \dfrac1\nu.$

Now, with $f_G(x) = f(G(x))$, we have $h_\nu(G(x)) \le f_G(x) \le g_\nu(G(x))$, so

(3.1.57)  $B - \dfrac1\nu \le \underline I\big(f_G\, |\det DG|\big) \le \overline I\big(f_G\, |\det DG|\big) \le B + \dfrac1\nu$

for all $\nu$, and letting $\nu \to \infty$, we obtain (3.1.47). □

We have seen how Proposition 3.1.11 has been useful. The following result, to some degree a variant of Proposition 3.1.11, is also useful.
Lemma 3.1.16. Let $F : R \to \mathbb R$ be bounded, $B \in \mathbb R$. Suppose that, for each $\nu \in \mathbb Z^+$, there exist $\Psi_\nu, \Phi_\nu \in \mathcal R(R)$ such that

(3.1.58)  $\Psi_\nu \le F \le \Phi_\nu$

and

(3.1.59)  $B - \delta_\nu \le \displaystyle\int_R \Psi_\nu(x)\, dV(x) \le \int_R \Phi_\nu(x)\, dV(x) \le B + \delta_\nu, \qquad \delta_\nu \to 0.$

Then $F \in \mathcal R(R)$ and

(3.1.60)  $\displaystyle\int_R F(x)\, dV(x) = B.$

Furthermore, if there exist $\Psi_\nu, \Phi_\nu \in \mathcal R(R)$ such that (3.1.58) holds and

(3.1.61)  $\displaystyle\int_R \big(\Phi_\nu(x) - \Psi_\nu(x)\big)\, dV \le \delta_\nu \to 0,$

then there exists $B$ such that (3.1.59) holds. Hence $F \in \mathcal R(R)$, and (3.1.60) holds.

The most frequently invoked case of the change of variable formula, in the case $n = 2$, involves the following change from Cartesian to polar coordinates:

(3.1.62)  $x = r\cos\theta, \quad y = r\sin\theta.$

See (2.3.111). Thus, take $G(r,\theta) = (r\cos\theta, r\sin\theta)$. We have

(3.1.63)  $DG(r,\theta) = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}, \qquad \det DG(r,\theta) = r.$

For example, if $\rho \in (0,\infty)$ and

(3.1.64)  $D_\rho = \{(x,y) \in \mathbb R^2 : x^2 + y^2 \le \rho^2\},$

then, for $f \in C(D_\rho)$,

(3.1.65)  $\displaystyle\int_{D_\rho} f(x,y)\, dA = \int_0^\rho\!\!\int_0^{2\pi} f(r\cos\theta, r\sin\theta)\, r\, d\theta\, dr.$

To get this, we first apply Proposition 3.1.14 with $\mathcal O = [\varepsilon, \rho] \times [0, 2\pi - \varepsilon]$, then apply Theorem 3.1.9, then let $\varepsilon \searrow 0$. We next use Lemma 3.1.16 to establish the following useful result on products of Riemann integrable functions.
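As a sanity check on (3.1.63) and (3.1.65), here is a small illustrative sketch (not part of the text; the function names are mine). It evaluates the polar iterated integral by the midpoint rule for $f(x,y) = x^2 + y^2$ on the unit disk, where the exact value is $\pi/2$, and checks $\det DG(r,\theta) = r$ at a sample point:

```python
import math

def polar_jacobian(r, theta):
    # DG(r, theta) for G(r, theta) = (r cos theta, r sin theta); det should equal r
    a, b = math.cos(theta), -r * math.sin(theta)
    c, d = math.sin(theta), r * math.cos(theta)
    return a * d - b * c

def integral_polar(f, rho, n=400):
    # midpoint rule for the iterated integral in (3.1.65)
    dr, dth = rho / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            th = (j + 0.5) * dth
            total += f(r * math.cos(th), r * math.sin(th)) * r * dth * dr
    return total

# f(x, y) = x^2 + y^2 on the unit disk; exact value is pi/2
approx = integral_polar(lambda x, y: x * x + y * y, 1.0)
```

The factor `r` in the sum is exactly the Jacobian determinant supplied by the change of variable formula.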
Proposition 3.1.17. Given $f_1, f_2 \in \mathcal R(R)$, we have $f_1 f_2 \in \mathcal R(R)$.

Proof. It suffices to prove this when $f_j \ge 0$. Take partitions $\mathcal P_\nu$ and functions $\varphi_{j\nu}, \psi_{j\nu} \ge 0$, constant in the interior of each cell in $\mathcal P_\nu$, such that $\varphi_{j\nu} \le f_j \le \psi_{j\nu}$ and

$\displaystyle\int \varphi_{j\nu}\, dV,\ \int \psi_{j\nu}\, dV \longrightarrow \int f_j\, dV.$

We apply Lemma 3.1.16 with $\Psi_\nu = \varphi_{1\nu}\varphi_{2\nu}$ and $\Phi_\nu = \psi_{1\nu}\psi_{2\nu}$. Note that

$\Phi_\nu - \Psi_\nu = \psi_{1\nu}(\psi_{2\nu} - \varphi_{2\nu}) + \varphi_{2\nu}(\psi_{1\nu} - \varphi_{1\nu}) \le B(\psi_{2\nu} - \varphi_{2\nu}) + B(\psi_{1\nu} - \varphi_{1\nu}),$

if $B$ is a bound for the functions $\psi_{j\nu}$ and $\varphi_{j\nu}$. Hence (3.1.61) holds, giving $f_1 f_2 \in \mathcal R(R)$. □

As a consequence of Proposition 3.1.17, we can make the following construction. Assume $R$ is a cell and $S \subset R$ is a contented set. If $f \in \mathcal R(R)$, we have $\chi_S f \in \mathcal R(R)$, by Proposition 3.1.17. We define

(3.1.66)  $\displaystyle\int_S f(x)\, dV(x) = \int_R \chi_S(x) f(x)\, dV(x).$

Note how this extends the scope of (3.1.24).
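The definition (3.1.66) lends itself to a direct numerical illustration (a sketch of my own, with names I chose): approximate $\int_S f\, dV$ by a midpoint Riemann sum of $\chi_S f$ over a cell $R \supset S$. Taking $S$ the unit disk and $f \equiv 1$ should recover the area $\pi$:

```python
import math

def integral_over_S(f, chi, a, b, n=500):
    # (3.1.66): approximate int_S f dV by a midpoint sum of chi_S * f over R = [a,b]^2
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        for j in range(n):
            y = a + (j + 0.5) * h
            if chi(x, y):           # chi is the indicator function of S
                total += f(x, y) * h * h
    return total

in_disk = lambda x, y: x * x + y * y <= 1.0
area = integral_over_S(lambda x, y: 1.0, in_disk, -1.0, 1.0)   # should be near pi
```

The error here comes entirely from cells straddling the boundary of $S$, which is consistent with $S$ being contented: the boundary has content zero.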
102
3. Multivariable integral calculus and calculus on surfaces
Integrals over $\mathbb R^n$. It is often useful to integrate a function whose support is not bounded. Generally, given a bounded function $f : \mathbb R^n \to \mathbb R$, we say $f \in \mathcal R(\mathbb R^n)$ provided $f|_R \in \mathcal R(R)$ for each cell $R \subset \mathbb R^n$, and

$\displaystyle\int_R |f|\, dV \le C$

for some $C < \infty$.

Pick a partition $\mathcal P_k$ of $R$ such that $\underline I_{\mathcal P_k}(f_k) \ge a/2$. Thus $f_k \ge \varphi_k$ for some $\varphi_k \in \operatorname{PK}(R)$, constant on the interior of each cell in $\mathcal P_k$, with integral $\ge a/2$. The contribution to $\int_R \varphi_k\, dV$ from the cells on which $\varphi_k \le a/4$ is $\le a/4$, so the contribution from the cells on which $\varphi_k \ge a/4$ must be $\ge a/4$. Since $\varphi_k \le K$ on $R$, it follows that the latter class of cells must have total volume $\ge a/4K$. Consequently, for each $k$, there exists $S_k \subset R$, a finite union of cells in $\mathcal P_k$, such that

(3.1.126)  $f_k \ge \dfrac a4$ on $S_k$, and $V(S_k) \ge \dfrac a{4K}$.

Then $f_\ell \ge a/4$ on $S_k$ for all $\ell \le k$. Hence, with

(3.1.127)  $V_\ell = \bigcup_{k \ge \ell} S_k,$

we have

(3.1.128)  $\operatorname{cont}^+(V_\ell) \ge \dfrac a{4K}.$

The hypothesis $f_\ell \searrow 0$ on $R$ implies

(3.1.129)  $V_\ell \searrow \emptyset$ as $\ell \nearrow \infty$.

Without loss of generality, we can take each $S_k$ open in (3.1.126); hence each $V_\ell$ is open. The conclusion of Proposition 3.1.26 is hence a consequence of the following, which implies that (3.1.128) and (3.1.129) are contradictory. □

Proposition 3.1.27. If $V_\ell \subset R$ are open sets, for $\ell \in \mathbb N$, then

(3.1.130)  $V_\ell \searrow \emptyset \Longrightarrow \operatorname{cont}^+(V_\ell) \to 0.$

Proof. Assume $V_\ell \searrow \emptyset$. If the conclusion of (3.1.130) fails, then

(3.1.131)  $\operatorname{cont}^+(V_\ell) \ge b$

for some $b > 0$. Passing to a subsequence if necessary, we can assume $\operatorname{cont}^+(V_\ell) \le b + \delta_\ell$, $\delta_\ell \searrow 0$. Then we can pick $K_\ell \subset V_\ell$, a compact union of finitely many cells in a partition of $R$, such that

(3.1.132)

(3.1.133)
and $c \le \int_{\mathbb R^n} g\, dV$. Given $\varepsilon > 0$, there is a cell $R \subset \mathbb R^n$ such that

(3.1.145)  $\displaystyle\int_{\mathbb R^n\setminus R} \big(|g| + |g_1|\big)\, dV < \varepsilon,$

and Corollary 3.1.29 gives

(3.1.146)  $\displaystyle\int_R g_k\, dV \nearrow \int_R g\, dV.$

We deduce that $c \ge \int_{\mathbb R^n} g\, dV - \varepsilon$ for all $\varepsilon > 0$, so (3.1.143) holds. □
In the Lebesgue theory of integration, there is a stronger result. Namely, if $g_k$ are integrable on $\mathbb R^n$ and $g_k(x) \nearrow g(x)$ for each $x$, and if there is a uniform upper bound $\int_{\mathbb R^n} g_k\, dx \le B < \infty$, then $g$ is integrable on $\mathbb R^n$ and the conclusion of (3.1.143) holds. Such a result can be found in [47].

Upper content and outer measure. Given a bounded set $S \subset \mathbb R^n$, its upper content is defined in (3.1.13), and an equivalent characterization is given in (3.1.15). A related quantity is the outer measure of $S$, defined by

(3.1.147)  $m^*(S) = \inf\Big\{ \sum_{k \ge 1} V(R_k) : R_k \subset \mathbb R^n \text{ cells},\ S \subset \bigcup_{k \ge 1} R_k \Big\}.$

The difference between (3.1.15) and (3.1.147) is that in (3.1.15) we require the cover of $S$ by cells to be finite, while in (3.1.147) we allow any countable cover of $S$ by cells. Clearly, (3.1.147) is an inf over a larger collection of objects than (3.1.15), so

(3.1.148)  $m^*(S) \le \operatorname{cont}^+(S).$

We get the same result in (3.1.147) if we require

(3.1.149)  the cells $R_k$ to be open

(just expand each $R_k$ by a factor of $1 + 2^{-k}\varepsilon$). Since any open cover of a compact set has a finite subcover (see Appendix A.1), it follows that

(3.1.150)  $S$ compact $\Longrightarrow m^*(S) = \operatorname{cont}^+(S).$

On the other hand, it is readily verified from (3.1.147) that

(3.1.151)  $S$ countable $\Longrightarrow m^*(S) = 0.$

For example, if $R = \{x \in \mathbb R^n : 0 \le x_j \le 1,\ \forall\, j\}$, then

(3.1.152)  $m^*(R \cap \mathbb Q^n) = 0$, but $\operatorname{cont}^+(R \cap \mathbb Q^n) = 1,$

the latter result by (3.1.16). We now establish the following integrability criterion, which sharpens Proposition 3.1.6.
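The contrast in (3.1.152) can be made concrete with the standard construction behind (3.1.151): cover the $k$-th rational in $[0,1]$ by an interval of length $\varepsilon\, 2^{-(k+1)}$, so the whole countable cover has total length at most $\varepsilon$. The sketch below (illustrative only; helper names are mine) builds such a cover with exact rational arithmetic:

```python
from fractions import Fraction

def rationals_in_unit_interval(count):
    # enumerate the rationals p/q in [0, 1], q = 1, 2, 3, ..., without repeats
    out, q = [], 1
    while len(out) < count:
        for p in range(q + 1):
            r = Fraction(p, q)
            if r not in out:
                out.append(r)
            if len(out) == count:
                break
        q += 1
    return out

def total_cover_length(eps, count):
    # cover the k-th rational by an interval of length eps / 2**(k+1);
    # the full countable cover then has total length at most eps
    return sum(eps / 2 ** (k + 1) for k in range(count))

eps = Fraction(1, 100)
qs = rationals_in_unit_interval(50)
length = total_cover_length(eps, 50)
```

No finite subfamily of cells with total length below 1 can cover this set, since its closure is all of $[0,1]$; that is exactly why $\operatorname{cont}^+$ sees the value 1 while $m^*$ sees 0.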
Proposition 3.1.31. Let $f : R \to \mathbb R$ be bounded, and let $S \subset R$ be the set of points of discontinuity of $f$. Then

(3.1.153)  $m^*(S) = 0 \Longrightarrow f \in \mathcal R(R).$

Proof. Assume $|f| \le M$ and pick $\varepsilon > 0$. Take a countable collection $\{R_k\}$ of cells that are open (in $R$) such that $S \subset \bigcup_{k\ge1} R_k$ and $\sum_{k\ge1} V(R_k) < \varepsilon$. Now $f$ is continuous at each $p \in R \setminus S$, so there exists a cell $R^\#_p$, open (in $R$), containing $p$, such that $\sup_{R^\#_p} f - \inf_{R^\#_p} f < \varepsilon$. Then $\{R_k : k \in \mathbb N\} \cup \{R^\#_p : p \in R\setminus S\}$ is an open cover of $R$. Since $R$ is compact, there is a finite subcover, which we denote $\{R_1, \dots, R_N, R^\#_1, \dots, R^\#_M\}$. We have

(3.1.154)  $\sum_{k=1}^N V(R_k) < \varepsilon$ and $\sup_{R^\#_j} f - \inf_{R^\#_j} f < \varepsilon$, $\forall\, j \in \{1,\dots,M\}.$

Recall that $R = I_1 \times \cdots \times I_n$ is a product of $n$ closed, bounded intervals. Also each cell $R_k$ and $R^\#_j$ is a product of intervals. For each $\nu \in \{1,\dots,n\}$, take the collection of all endpoints in the $\nu$th factor of each of these cells, and use these to form a partition of $I_\nu$. Taking products yields a partition $\mathcal P$ of $R$. We can write

(3.1.155)  $\mathcal P = \{L_k : 1 \le k \le \mu\} = \{L_k : k \in A\} \cup \{L_k : k \in B\},$

where we say $k \in A$ provided $L_k$ is contained in a cell of the form $R^\#_j$ for some $j \in \{1,\dots,M\}$, as in (3.1.154). Consequently, if $k \in B$, then $L_k \subset R_\ell$ for some $\ell \in \{1,\dots,N\}$, so

(3.1.156)  $\bigcup_{k \in B} L_k \subset \bigcup_{\ell=1}^N R_\ell.$

We therefore have

(3.1.157)  $\sup_{L_j} f - \inf_{L_j} f < \varepsilon$, $\forall\, j \in A.$

It follows that

(3.1.158)  $0 \le \overline I_{\mathcal P}(f) - \underline I_{\mathcal P}(f) < \sum_{k \in B} 2M\, V(L_k) + \sum_{j \in A} \varepsilon\, V(L_j) < 2\varepsilon M + \varepsilon V(R).$

Since $\varepsilon$ can be taken arbitrarily small, this establishes that $f \in \mathcal R(R)$. □

Remark. The condition (3.1.153) is sharp. That is, given $f : R \to \mathbb R$ bounded, $f \in \mathcal R(R) \Longleftrightarrow m^*(S) = 0$. Proofs of this can be found in standard measure theory texts, such as [47].
Exercises
1. Show that any two partitions of a cell $R$ have a common refinement.
Hint. Consider the argument given for the one-dimensional case in §1.1.

2. Write down a careful proof of the identity (3.1.16), i.e., $\operatorname{cont}^+(S) = \operatorname{cont}^+(\bar S)$.

3. Write down the details of the argument giving (3.1.25), on the independence of the integral from the choice of cell.

4. Write down a direct proof that the transformation formula (3.1.35) holds for those linear transformations of the form $A_1$ and $A_2$ in (3.1.36). Compare Exercise 1 of §1.1.
5. Consider spherical polar coordinates on $\mathbb R^3$, given by $x = \rho\sin\varphi\cos\theta$, $y = \rho\sin\varphi\sin\theta$, $z = \rho\cos\varphi$, i.e., take $G(\rho,\varphi,\theta) = (\rho\sin\varphi\cos\theta,\ \rho\sin\varphi\sin\theta,\ \rho\cos\varphi)$. Show that $\det DG(\rho,\varphi,\theta) = \rho^2\sin\varphi$. Use this to compute the volume of the unit ball in $\mathbb R^3$.
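A numerical sanity check for Exercise 5 (an illustrative sketch of mine, not a solution from the text): approximate $DG$ by central differences, compare its determinant with $\rho^2\sin\varphi$, and evaluate the volume integral of the unit ball, which should be near $4\pi/3$:

```python
import math

def G(p, phi, theta):
    return (p * math.sin(phi) * math.cos(theta),
            p * math.sin(phi) * math.sin(theta),
            p * math.cos(phi))

def jacobian_det(p, phi, theta, h=1e-6):
    # 3x3 Jacobian of G by central differences, then a cofactor expansion
    pt = [p, phi, theta]
    rows = []
    for k in range(3):
        up, dn = pt[:], pt[:]
        up[k] += h
        dn[k] -= h
        fu, fd = G(*up), G(*dn)
        rows.append([(fu[j] - fd[j]) / (2 * h) for j in range(3)])
    (a, b, c), (d, e, f), (g, hh, i) = rows
    return a * (e * i - f * hh) - b * (d * i - f * g) + c * (d * hh - e * g)

det_val = jacobian_det(2.0, 0.9, 0.4)        # should be near 2^2 * sin(0.9)

# volume of the unit ball: integrate p^2 sin(phi) over (0,1) x (0,pi), times 2*pi
n = 40
dp, dphi = 1.0 / n, math.pi / n
vol = 2 * math.pi * sum(((i + 0.5) * dp) ** 2 * math.sin((j + 0.5) * dphi) * dp * dphi
                        for i in range(n) for j in range(n))
```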
6. If $E$ is the unit ball in $\mathbb R^3$, show that Theorem 3.1.9 implies

$V(E) = 2\displaystyle\int_D \sqrt{1 - |x|^2}\, dA(x),$

where $D = \{x \in \mathbb R^2 : |x| \le 1\}$ is the unit disk. Use polar coordinates, as in (3.1.62)–(3.1.65), to compute this integral. Compare the result with that of Exercise 5.

7. Apply Corollary 3.1.13 and the answers to Exercises 5 and 6 to compute the volume of the ellipsoidal region in $\mathbb R^3$ defined by

$\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} + \dfrac{z^2}{c^2} \le 1,$

given $a, b, c \in (0,\infty)$.
8. Prove Lemma 3.1.16.

9. If $R$ is a cell, $S \subset R$ is a contented set, and $f \in \mathcal R(R)$, we have, via Proposition 3.1.17,

$\displaystyle\int_S f(x)\, dV(x) = \int_R \chi_S(x) f(x)\, dV(x).$

Show that if $S_1, S_2 \subset R$ are contented and disjoint (or more generally $\operatorname{cont}^+(S_1 \cap S_2) = 0$), then, for $f \in \mathcal R(R)$,

$\displaystyle\int_{S_1 \cup S_2} f(x)\, dV(x) = \int_{S_1} f(x)\, dV(x) + \int_{S_2} f(x)\, dV(x).$

10. Establish the convergence result (3.1.68), for all $f \in \mathcal R(\mathbb R^n)$.

11. Take $B = \{x \in \mathbb R^n : |x| \le 1/2\}$, and let $f : B \to \mathbb R^+$. Assume $f$ is continuous on $B \setminus 0$. Show that

$f \in \mathcal R^\#(B) \Longleftrightarrow \displaystyle\int_{|x| > \varepsilon} f\, dV$ is bounded as $\varepsilon \searrow 0$.

12. With $B \subset \mathbb R^n$ as in Exercise 11, define $g_b : B \to \mathbb R$ by

$g_b(x) = |x|^{-n}\Big(\log\dfrac1{|x|}\Big)^{-b}$

for $x \ne 0$. Say $g_b(0) = 0$. Show that $g_b \in \mathcal R^\#(B) \Longleftrightarrow b > 1$.

13. Show that

14. Theorem 3.1.9, relating multiple integrals and iterated integrals, played the following role in the proof of the change of variable formula (3.1.47). Namely, it was used to establish the identity (3.1.50) for the volume of the parallelepiped $H_\alpha$, via an appeal to Corollary 3.1.13, hence to Proposition 3.1.10, whose proof relied on Theorem 3.1.9. Try to establish Corollary 3.1.13 directly, without using Theorem 3.1.9, in the case when $\Sigma$ is either a cell or the image of a cell under an element of $\operatorname{Gl}(n,\mathbb R)$.
In preparation for the next three exercises, review the proof of Proposition 1.1.12.
15. Assume $f \in \mathcal R(R)$, $|f| \le M$, and let $\varphi : [-M, M] \to \mathbb R$ be Lipschitz and monotone. Show directly from the definition that $\varphi\circ f \in \mathcal R(R)$.

16. If $\varphi : [-M, M] \to \mathbb R$ is continuous and piecewise linear, show that you can write $\varphi = \varphi_1 - \varphi_2$, with $\varphi_j$ Lipschitz and monotone. Deduce that

$f \in \mathcal R(R) \Longrightarrow \varphi\circ f \in \mathcal R(R)$

when $\varphi$ is piecewise linear.

17. Assume $u_\nu \in \mathcal R(R)$ and that $u_\nu \to u$ uniformly on $R$. Show that $u \in \mathcal R(R)$. Deduce that if $f \in \mathcal R(R)$, $|f| \le M$, and $\psi : [-M, M] \to \mathbb R$ is continuous, then $\psi\circ f \in \mathcal R(R)$.

18. Let $R \subset \mathbb R^n$ be a cell, and let $f, g : R \to \mathbb R$ be bounded. Show that

$\overline I(f+g) \le \overline I(f) + \overline I(g), \qquad \underline I(f+g) \ge \underline I(f) + \underline I(g).$

Hint. Look at the proof of Proposition 1.1.1.

19. Let $R \subset \mathbb R^n$ be a cell, and let $f : R \to \mathbb R$ be bounded. Assume that for each $\varepsilon > 0$, there exist bounded $f_\varepsilon, g_\varepsilon$ such that

$f = f_\varepsilon + g_\varepsilon, \quad f_\varepsilon \in \mathcal R(R), \quad \overline I(|g_\varepsilon|) \le \varepsilon.$

Show that $f \in \mathcal R(R)$ and

$\displaystyle\int_R f_\varepsilon\, dV \to \int_R f\, dV.$

Hint. Use Exercise 18.

20. Use the result of Exercise 19 to produce another proof of Proposition 3.1.6.

21. Behind (3.1.45) is the assertion that if $R$ is a cell, $g$ is supported on $K \subset R$, and $|g| \le M$, then

$\overline I(|g|) \le M \operatorname{cont}^+(K).$

Prove this. More generally, if $g, h : R \to \mathbb R$ are bounded and $|g| \le M$, show that

$\overline I(|gh|) \le M\, \overline I(|h|).$

22. Establish the following Fubini-type theorem, and compare it with Theorem 3.1.9.

Proposition 3.1.32. Let $A \subset \mathbb R^m$ and $B \subset \mathbb R^n$ be cells, and take $f \in \mathcal R(A\times B)$. For $x \in A$, define $f_x : B \to \mathbb R$ by $f_x(y) = f(x,y)$. Define $Lf, Uf : A \to \mathbb R$ by

$Lf(x) = \underline I(f_x), \qquad Uf(x) = \overline I(f_x).$

Then $Lf$ and $Uf$ belong to $\mathcal R(A)$, and

$\displaystyle\int_{A\times B} f\, dV = \int_A Lf(x)\, dx = \int_A Uf(x)\, dx.$

Hint. Given $\varepsilon > 0$, use Proposition 3.1.11 to take $\varphi, \psi \in \operatorname{PK}(A\times B)$ such that $\varphi \le f \le \psi$ and

$\displaystyle\int \psi\, dV - \int \varphi\, dV < \varepsilon.$

With definitions of $\varphi_x$ and $\psi_x$ analogous to that of $f_x$, show that

$\displaystyle\int_{A\times B} \varphi\, dV = \int_A \Big(\int_B \varphi_x\, dy\Big)\, dx \le \underline I(Lf) \le \overline I(Uf) \le \int_A \Big(\int_B \psi_x\, dy\Big)\, dx = \int_{A\times B} \psi\, dV.$

Deduce that $0 \le \overline I(Uf) - \underline I(Lf) < \varepsilon$, and proceed.
Exercises on row reduction and matrix products

We consider the following three types of row operations on an $n\times n$ matrix $A = (a_{jk})$. If $\sigma$ is a permutation of $\{1,\dots,n\}$, let

$P_\sigma(A) = (a_{\sigma(j)k}).$

If $c = (c_1,\dots,c_n)$, and all $c_j$ are nonzero, set

$M_c(A) = (c_j\, a_{jk}).$

Finally, if $c \in \mathbb R$ and $\mu \ne \nu$, define

$E_{\mu\nu c}(A) = (b_{jk}), \quad b_{\nu k} = a_{\nu k} - c\, a_{\mu k}, \quad b_{jk} = a_{jk} \text{ for } j \ne \nu.$

We relate these operations to left multiplication by matrices $F_\sigma$, $M_c$, and $E_{\mu\nu c}$, defined by the following actions on the standard basis $\{e_1,\dots,e_n\}$ of $\mathbb R^n$:

$F_\sigma e_j = e_{\sigma^{-1}(j)}, \quad M_c\, e_j = c_j e_j,$ and $E_{\mu\nu c}\, e_\mu = e_\mu - c\, e_\nu$, $E_{\mu\nu c}\, e_j = e_j$ for $j \ne \mu$.

1. Show that $P_\sigma(A) = F_\sigma A$, $M_c(A) = M_c A$, and $E_{\mu\nu c}(A) = E_{\mu\nu c} A$.

3. Show that if $\mu \ne \nu$, then $E_{\mu\nu c} = F_\sigma^{-1} E_{21c} F_\sigma$, for some permutation $\sigma$.

4. If $B = P_\sigma(A)$ and $C = M_c(B)$, show that $A = F_\sigma^{-1} M_c^{-1} C$. Generalize this to other cases where a matrix $C$ is obtained from a matrix $A$ via a sequence of row operations.

5. If $A$ is an invertible, real $n\times n$ matrix (i.e., $A \in \operatorname{Gl}(n,\mathbb R)$), then the rows of $A$ form a basis of $\mathbb R^n$. Use this to show that $A$ can be transformed to the identity matrix via a sequence of row operations. Deduce that any $A \in \operatorname{Gl}(n,\mathbb R)$ can be written as a finite product of matrices of the form $F_\sigma$, $M_c$, and $E_{\mu\nu c}$, hence as a finite product of matrices of the form listed in (3.1.36).
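These row operations and their matrix counterparts are easy to experiment with. The sketch below (my own illustration, using numpy; function names are mine) implements $P_\sigma$, $M_c$, and $E_{\mu\nu c}$ and sets up the matrices that effect the same operations by left multiplication, as in Exercise 1:

```python
import numpy as np

def P_sigma(A, sigma):
    # row j of the result is row sigma(j) of A
    return A[sigma, :]

def M_c(A, c):
    # multiply row j of A by c[j] (each c[j] nonzero)
    return np.diag(c) @ A

def E_munuc(A, mu, nu, c):
    # replace row nu of A by (row nu) - c * (row mu)
    B = A.copy()
    B[nu, :] -= c * B[mu, :]
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
sigma = [2, 0, 3, 1]
c = np.array([1.0, 2.0, 3.0, 4.0])

# matrices effecting the same operations by left multiplication
F_sigma = np.eye(4)[sigma, :]      # (F_sigma A)_{jk} = a_{sigma(j) k}
M_mat = np.diag(c)
E_mat = np.eye(4)
E_mat[1, 0] = -5.0                 # effects: row 1 minus 5 * (row 0)
```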
3.2. Surfaces and surface integrals
A smooth $m$-dimensional surface $M \subset \mathbb R^n$ is characterized by the following property. Given $p \in M$, there is a neighborhood $U$ of $p$ in $M$ and a smooth map $\varphi : \mathcal O \to U$, from an open set $\mathcal O \subset \mathbb R^m$ bijectively to $U$, with injective derivative at each point, and continuous inverse $\varphi^{-1} : U \to \mathcal O$. Such a map $\varphi$ is called a coordinate chart on $M$. We call $U \subset M$ a coordinate patch. If all such maps $\varphi$ are smooth of class $C^k$, we say $M$ is a surface of class $C^k$. In §4.3 we will define analogous notions of a $C^k$ surface with boundary and of a $C^k$ surface with corners. There is an abstraction of the notion of a surface, namely the notion of a manifold, which we will discuss at the end of this section. Examples include projective spaces and other spaces obtained as quotients of surfaces.

If $\varphi : \mathcal O \to U$ is a $C^k$ coordinate chart, such as described above, or more generally $\varphi : \mathcal O \to \mathbb R^n$ is a $C^k$ map with injective derivative, and $\varphi(x_0) = p$, we set

(3.2.1)  $T_pM = \operatorname{Range} D\varphi(x_0),$

a linear subspace of $\mathbb R^n$ of dimension $m$, and we denote by $N_pM$ its orthogonal complement. It is useful to consider the following map. Pick a linear isomorphism $A : \mathbb R^{n-m} \to N_pM$, and define

(3.2.2)  $\Phi : \mathcal O \times \mathbb R^{n-m} \to \mathbb R^n, \quad \Phi(x,z) = \varphi(x) + Az.$

Thus $\Phi$ is a $C^k$ map defined on an open subset of $\mathbb R^n$. Note that

(3.2.3)  $D\Phi(x_0,0)\begin{pmatrix} v \\ w \end{pmatrix} = D\varphi(x_0)v + Aw,$

so $D\Phi(x_0,0) : \mathbb R^n \to \mathbb R^n$ is surjective, hence bijective, so the inverse function theorem applies; $\Phi$ maps some neighborhood of $(x_0,0)$ diffeomorphically onto a neighborhood of $p \in \mathbb R^n$.

Suppose there is another $C^k$ coordinate chart, $\psi : \Omega \to U$. Since $\varphi$ and $\psi$ are by hypothesis one-to-one and onto, it follows that $F = \psi^{-1}\circ\varphi : \mathcal O \to \Omega$ is a well-defined map, which is one-to-one and onto. See Figure 3.2.1. Also $F$ and $F^{-1}$ are continuous. In fact, we can say more.

Lemma 3.2.1. Under the hypotheses above, $F$ is a $C^k$ diffeomorphism.

Proof. It suffices to show that $F$ and $F^{-1}$ are $C^k$ on a neighborhood of $x_0$ and $y_0$, respectively, where $\varphi(x_0) = \psi(y_0) = p$. Let us define a map $\Psi$ in a fashion similar to (3.2.2). To be precise, we set $\tilde T_pM = \operatorname{Range} D\psi(y_0)$, and let $\tilde N_pM$ be its orthogonal complement. (Shortly we will show that $\tilde T_pM = T_pM$, but we are not quite ready for that.) Then pick a linear isomorphism $E : \mathbb R^{n-m} \to \tilde N_pM$ and consider

$\Psi : \Omega \times \mathbb R^{n-m} \to \mathbb R^n, \quad \Psi(y,z) = \psi(y) + Ez.$

Again, $\Psi$ is a $C^k$ diffeomorphism from a neighborhood of $(y_0,0)$ onto a neighborhood of $p$. To be precise, there exist neighborhoods $\tilde{\mathcal O}$ of $(x_0,0)$ in $\mathcal O \times \mathbb R^{n-m}$, $\tilde\Omega$ of $(y_0,0)$ in $\Omega \times \mathbb R^{n-m}$, and $\tilde U$ of $p$ in $\mathbb R^n$ such that

$\Phi : \tilde{\mathcal O} \to \tilde U$ and $\Psi : \tilde\Omega \to \tilde U$
Figure 3.2.1. Coordinate charts
are $C^k$ diffeomorphisms. It follows that $\Psi^{-1}\circ\Phi : \tilde{\mathcal O} \to \tilde\Omega$ is a $C^k$ diffeomorphism. Now note that, for $(x,0) \in \tilde{\mathcal O}$ and $(y,0) \in \tilde\Omega$,

(3.2.4)  $\Psi^{-1}\circ\Phi(x,0) = (F(x),0), \qquad \Phi^{-1}\circ\Psi(y,0) = (F^{-1}(y),0).$

In fact, to verify the first identity in (3.2.4), we check that

$\Psi(F(x),0) = \psi(F(x)) + E\,0 = \psi(\psi^{-1}\circ\varphi(x)) = \varphi(x) = \Phi(x,0).$

The identities in (3.2.4) imply that $F$ and $F^{-1}$ have the desired regularity. □

Thus, when there are two such coordinate charts, $\varphi : \mathcal O \to U$, $\psi : \Omega \to U$, we have a $C^k$ diffeomorphism $F : \mathcal O \to \Omega$ such that

(3.2.5)  $\varphi = \psi\circ F.$

By the chain rule,

(3.2.6)  $D\varphi(x) = D\psi(y)\, DF(x), \qquad y = F(x).$
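The chain rule relation (3.2.6) can be checked concretely for two overlapping charts on the unit circle (an illustrative computation of mine, not from the text): $\varphi(t) = (\cos t, \sin t)$ and the graph chart $\psi(s) = (s, \sqrt{1-s^2})$, on the upper right quarter circle, where the transition map is $F(t) = \cos t$:

```python
import math

# two charts on an arc of the unit circle:
#   phi(t) = (cos t, sin t),        t in (0, pi/2)
#   psi(s) = (s, sqrt(1 - s^2)),    s in (0, 1)
# transition map F = psi^{-1} o phi, i.e. F(t) = cos t

def dphi(t):
    return (-math.sin(t), math.cos(t))

def dpsi(s):
    return (1.0, -s / math.sqrt(1.0 - s * s))

t0 = 0.6
s0 = math.cos(t0)       # F(t0)
dF = -math.sin(t0)      # F'(t0)

# chain rule (3.2.6): Dphi(t0) = Dpsi(F(t0)) * DF(t0)
lhs = dphi(t0)
rhs = tuple(comp * dF for comp in dpsi(s0))
```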
In particular this implies that $\operatorname{Range} D\varphi(x_0) = \operatorname{Range} D\psi(y_0)$, so $T_pM$ in (3.2.1) is independent of the choice of coordinate chart. It is called the tangent space to $M$ at $p$.

Remark. An application of the inverse function theorem related to the proof of Lemma 3.2.1 can be used to show that if $\mathcal O \subset \mathbb R^m$ is open, $m < n$, and $\varphi : \mathcal O \to \mathbb R^n$ is a $C^k$ map such that $D\varphi(p) : \mathbb R^m \to \mathbb R^n$ is injective, $p \in \mathcal O$, then there is a neighborhood $\tilde{\mathcal O}$ of $p$ in $\mathcal O$ such that the image of $\tilde{\mathcal O}$ under $\varphi$ is a $C^k$ surface in $\mathbb R^n$. Compare Exercise 11 in §2.2.
We next define an object called the metric tensor on $M$. Given a coordinate chart $\varphi : \mathcal O \to U$, there is associated an $m\times m$ matrix $G(x) = (g_{jk}(x))$ of functions on $\mathcal O$, defined in terms of the inner product of vectors tangent to $M$:

(3.2.7)  $g_{jk}(x) = D\varphi(x)e_j \cdot D\varphi(x)e_k = \dfrac{\partial\varphi}{\partial x_j}\cdot\dfrac{\partial\varphi}{\partial x_k} = \sum_{\ell=1}^n \dfrac{\partial\varphi_\ell}{\partial x_j}\,\dfrac{\partial\varphi_\ell}{\partial x_k},$

where $\{e_j : 1 \le j \le m\}$ is the standard orthonormal basis of $\mathbb R^m$. Equivalently,

(3.2.8)  $G(x) = D\varphi(x)^t\, D\varphi(x).$

We call $(g_{jk})$ the metric tensor of $M$, on $U$, with respect to the coordinate chart $\varphi : \mathcal O \to U$. Note that this matrix is positive definite. From a coordinate-independent point of view, the metric tensor on $M$ specifies inner products of vectors tangent to $M$, using the inner product of $\mathbb R^n$.

If we take another coordinate chart $\psi : \Omega \to U$, we want to compare $(g_{jk})$ with $H = (h_{jk})$, given by

(3.2.9)  $h_{jk}(y) = D\psi(y)e_j \cdot D\psi(y)e_k$, i.e., $H(y) = D\psi(y)^t\, D\psi(y).$

As seen above, we have a diffeomorphism $F : \mathcal O \to \Omega$ such that (3.2.5)–(3.2.6) hold. Consequently,

(3.2.10)  $G(x) = DF(x)^t\, H(y)\, DF(x)$, for $y = F(x)$,

or equivalently,

(3.2.11)  $H(y) = \big(DF(x)^t\big)^{-1}\, G(x)\, DF(x)^{-1}.$

We now define the notion of surface integral on $M$. If $f : M \to \mathbb R$ is a continuous function supported on $U$, we set

(3.2.12)  $\displaystyle\int_M f\, dS = \int_{\mathcal O} f\circ\varphi(x)\, \sqrt{g(x)}\, dx,$

where

(3.2.13)  $g(x) = \det G(x).$

We need to know that this is independent of the choice of coordinate chart $\varphi : \mathcal O \to U$. Thus, if we use $\psi : \Omega \to U$ instead, we want to show that (3.2.12) is equal to
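For a concrete feel for (3.2.7)–(3.2.13), the following sketch (function names mine; derivatives taken by central differences) computes the metric tensor of the usual chart on a torus with radii $a > b$. One can check by hand that here $G = \operatorname{diag}\big((a + b\cos v)^2,\ b^2\big)$, so $\sqrt g = b(a + b\cos v)$ and the total area is $4\pi^2 ab$:

```python
import math

def metric_tensor(a, b, u, v, h=1e-6):
    # chart phi(u, v) = ((a + b cos v) cos u, (a + b cos v) sin u, b sin v);
    # returns G = Dphi^t Dphi at (u, v), via central differences
    def phi(u, v):
        return ((a + b * math.cos(v)) * math.cos(u),
                (a + b * math.cos(v)) * math.sin(u),
                b * math.sin(v))
    pu = [(phi(u + h, v)[i] - phi(u - h, v)[i]) / (2 * h) for i in range(3)]
    pv = [(phi(u, v + h)[i] - phi(u, v - h)[i]) / (2 * h) for i in range(3)]
    dot = lambda x, y: sum(p * q for p, q in zip(x, y))
    return [[dot(pu, pu), dot(pu, pv)], [dot(pv, pu), dot(pv, pv)]]

a, b = 2.0, 0.5
G = metric_tensor(a, b, 0.7, 1.1)
sqrt_g = math.sqrt(G[0][0] * G[1][1] - G[0][1] * G[1][0])
expected = b * (a + b * math.cos(1.1))     # sqrt(g) = b (a + b cos v)
```

Integrating $\sqrt g$ over $(u,v) \in [0,2\pi]^2$ then reproduces the classical area $4\pi^2 ab$.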
$\int_\Omega f\circ\psi(y)\,\sqrt{h(y)}\, dy$, where $h(y) = \det H(y)$. Indeed, since $f\circ\psi\circ F = f\circ\varphi$, we can apply the change of variable formula, Theorem 3.1.15, to get

(3.2.14)  $\displaystyle\int_\Omega f\circ\psi(y)\,\sqrt{h(y)}\, dy = \int_{\mathcal O} f\circ\varphi(x)\,\sqrt{h(F(x))}\, |\det DF(x)|\, dx.$

Now, (3.2.10) implies that

(3.2.15)  $\sqrt{g(x)} = |\det DF(x)|\,\sqrt{h(y)}, \qquad y = F(x),$

so the right side of (3.2.14) is seen to be equal to (3.2.12), and our surface integral is well defined, at least for $f$ supported in a coordinate patch. More generally, if $f : M \to \mathbb R$ has compact support, write it as a finite sum of terms, each supported on a coordinate patch, and use (3.2.12) on each patch. Using (3.1.11), one readily verifies that

(3.2.16)  $\displaystyle\int_M (f_1 + f_2)\, dV = \int_M f_1\, dV + \int_M f_2\, dV,$

if $f_j : M \to \mathbb R$ are continuous functions with compact support.

Let us pause to consider the special cases $m = 1$ and $m = 2$. For $m = 1$, we are considering a curve in $\mathbb R^n$, say $\varphi : [a,b] \to \mathbb R^n$. Then $G(x)$ is a $1\times1$ matrix, namely $G(x) = |\varphi'(x)|^2$. If we denote the curve in $\mathbb R^n$ by $\gamma$, rather than $M$, the formula (3.2.12) becomes the arc length integral

(3.2.17)  $\displaystyle\int_\gamma f\, ds = \int_a^b f\circ\varphi(x)\, |\varphi'(x)|\, dx.$

In case $m = 2$, let us consider a surface $M \subset \mathbb R^3$, with a coordinate chart $\varphi : \mathcal O \to U \subset M$. For $f$ supported in $U$, an alternative way to write the surface integral is

(3.2.18)  $\displaystyle\int_M f\, dS = \int_{\mathcal O} f\circ\varphi(x)\, |\partial_1\varphi \times \partial_2\varphi|\, dx_1\, dx_2,$

where $u \times v$ is the cross product of vectors $u$ and $v$ in $\mathbb R^3$. To see this, we compare this integrand with the one in (3.2.12). In this case,

(3.2.19)  $g = \det\begin{pmatrix} \partial_1\varphi\cdot\partial_1\varphi & \partial_1\varphi\cdot\partial_2\varphi \\ \partial_2\varphi\cdot\partial_1\varphi & \partial_2\varphi\cdot\partial_2\varphi \end{pmatrix} = |\partial_1\varphi|^2\, |\partial_2\varphi|^2 - (\partial_1\varphi\cdot\partial_2\varphi)^2.$

Recall from (1.4.45) that $|u\times v| = |u|\,|v|\,\sin\theta$, where $\theta$ is the angle between $u$ and $v$. Equivalently, since $u\cdot v = |u|\,|v|\cos\theta$,

(3.2.20)  $|u\times v|^2 = |u|^2|v|^2(1 - \cos^2\theta) = |u|^2|v|^2 - (u\cdot v)^2.$

Thus we see that $|\partial_1\varphi \times \partial_2\varphi| = \sqrt g$, in this case, and (3.2.18) is equivalent to (3.2.12).

An important class of surfaces is the class of graphs of smooth functions. Let $u \in C^1(\Omega)$, for an open $\Omega \subset \mathbb R^{n-1}$, and let $M$ be the graph of $z = u(x)$. The map $\varphi(x) = (x, u(x))$ provides a natural coordinate system, in which the metric tensor formula (3.2.7) becomes

(3.2.21)  $g_{jk}(x) = \delta_{jk} + \dfrac{\partial u}{\partial x_j}\,\dfrac{\partial u}{\partial x_k}.$
If $u$ is $C^1$, we see that $g_{jk}$ is continuous. To calculate $g = \det(g_{jk})$, at a given point $p \in \Omega$, if $\nabla u(p) \ne 0$, rotate coordinates so that $\nabla u(p)$ is parallel to the $x_1$ axis. We obtain

(3.2.22)  $\sqrt g = \big(1 + |\nabla u|^2\big)^{1/2}.$

(See Exercise 31 for another take on this formula.) In particular, the $(n-1)$-dimensional volume of the surface $M$ is given by

(3.2.23)  $V_{n-1}(M) = \displaystyle\int_M dS = \int_\Omega \big(1 + |\nabla u(x)|^2\big)^{1/2}\, dx.$
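Formula (3.2.23) is easy to test numerically. The following sketch (illustrative; the particular surface and grid size are my choices) computes the area of the paraboloid graph $z = x^2 + y^2$ over the unit disk by a midpoint sum of $(1 + |\nabla u|^2)^{1/2}$, where $|\nabla u|^2 = 4(x^2 + y^2)$ and the exact area is $(\pi/6)(5\sqrt5 - 1)$:

```python
import math

# z = u(x, y) = x^2 + y^2 over the unit disk; |grad u|^2 = 4(x^2 + y^2)
n = 1000
h = 2.0 / n
mids = [-1.0 + (i + 0.5) * h for i in range(n)]
area = sum(math.sqrt(1.0 + 4.0 * (x * x + y * y)) * h * h
           for x in mids for y in mids if x * x + y * y <= 1.0)
exact = (math.pi / 6) * (5 * math.sqrt(5) - 1)
```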
Particularly important examples of surfaces are the unit spheres $S^{n-1}$ in $\mathbb R^n$,

(3.2.24)  $S^{n-1} = \{x \in \mathbb R^n : |x| = 1\}.$

Spherical polar coordinates on $\mathbb R^n$ are defined in terms of a smooth diffeomorphism

(3.2.25)  $R : (0,\infty)\times S^{n-1} \to \mathbb R^n\setminus0, \quad R(r,\omega) = r\omega.$

Let $(h_{\ell m})$ denote the metric tensor on $S^{n-1}$ (induced from its inclusion in $\mathbb R^n$) with respect to some coordinate chart $\varphi : \mathcal O \to U \subset S^{n-1}$. Then we have a coordinate chart $\Phi : (0,\infty)\times\mathcal O \to \mathcal U \subset \mathbb R^n$, given by $\Phi(r,y) = r\varphi(y)$. Write $y_0 = r$, $y = (y_1,\dots,y_{n-1})$. In the coordinate system $\Phi$, the Euclidean metric tensor $(e_{jk})$ is given by

$e_{00} = \partial_0\Phi\cdot\partial_0\Phi = \varphi(y)\cdot\varphi(y) = 1,$
$e_{0j} = \partial_0\Phi\cdot\partial_j\Phi = r\,\varphi(y)\cdot\partial_j\varphi(y) = 0, \quad 1 \le j \le n-1,$
$e_{jk} = r^2\,\partial_j\varphi\cdot\partial_k\varphi = r^2 h_{jk}, \quad 1 \le j,k \le n-1.$

The fact that $\varphi(y)\cdot\partial_j\varphi(y) = 0$ follows by applying $\partial/\partial y_j$ to the identity $\varphi(y)\cdot\varphi(y) = 1$. To summarize,

(3.2.26)  $(e_{jk}) = \begin{pmatrix} 1 & 0 \\ 0 & r^2 H(y) \end{pmatrix}, \quad H(y) = (h_{jk}(y)).$

Now (3.2.26) yields

(3.2.27)  $\sqrt{\det(e_{jk})} = r^{n-1}\sqrt{h(y)}, \quad h = \det H.$

We therefore have the following result for integrating a function in spherical polar coordinates,

(3.2.28)  $\displaystyle\int_{\mathbb R^n} f(x)\, dx = \int_{S^{n-1}} \Big[\int_0^\infty f(r\omega)\, r^{n-1}\, dr\Big]\, dS(\omega).$

We next compute the $(n-1)$-dimensional area $A_{n-1}$ of the unit sphere $S^{n-1}$, using (3.2.28) together with the computation

(3.2.29)  $\displaystyle\int_{\mathbb R^n} e^{-|x|^2}\, dx = \pi^{n/2},$
from (3.1.75). First note that, whenever $f(x) = \varphi(|x|)$, (3.2.28) yields

(3.2.30)  $\displaystyle\int_{\mathbb R^n} \varphi(|x|)\, dx = A_{n-1}\int_0^\infty \varphi(r)\, r^{n-1}\, dr.$

In particular, taking $\varphi(r) = e^{-r^2}$ and using (3.2.29), we have

(3.2.31)  $\pi^{n/2} = A_{n-1}\displaystyle\int_0^\infty e^{-r^2} r^{n-1}\, dr = \tfrac12 A_{n-1}\int_0^\infty e^{-s} s^{n/2-1}\, ds,$

where we used the substitution $s = r^2$ to get the last identity. We hence have

(3.2.32)  $A_{n-1} = \dfrac{2\pi^{n/2}}{\Gamma(n/2)},$

where $\Gamma(z)$ is Euler's Gamma function, defined for $z > 0$ by

(3.2.33)  $\Gamma(z) = \displaystyle\int_0^\infty e^{-s} s^{z-1}\, ds.$
We need to complement (3.2.32) with some results on $\Gamma(z)$ allowing a computation of $\Gamma(n/2)$ in terms of more familiar quantities. Of course, setting $z = 1$ in (3.2.33), we immediately get

(3.2.34)  $\Gamma(1) = 1.$

Also, setting $n = 1$ in (3.2.31), we have

(3.2.35)  $\Gamma\big(\tfrac12\big) = \pi^{1/2}.$

We can proceed inductively from (3.2.34)–(3.2.35) to a formula for $\Gamma(n/2)$ for any $n \in \mathbb Z^+$, using the following.

Lemma 3.2.2. For all $z > 0$,

(3.2.36)  $\Gamma(z+1) = z\,\Gamma(z).$

Proof. We can write

$\Gamma(z+1) = -\displaystyle\int_0^\infty \Big(\dfrac{d}{ds} e^{-s}\Big) s^z\, ds = \int_0^\infty e^{-s}\, \dfrac{d}{ds}(s^z)\, ds,$

the last identity by integration by parts. The last expression here is seen to equal the right side of (3.2.36). □

Consequently, for $k \in \mathbb Z^+$,

(3.2.37)  $\Gamma(k) = (k-1)!.$

Thus (3.2.32) can be rewritten

(3.2.38)  $A_{2k-1} = \dfrac{2\pi^k}{(k-1)!}.$
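The closed forms (3.2.32) and (3.2.38), and the recursion of Lemma 3.2.2, are easy to confirm with a library Gamma function (an illustrative check, not part of the text):

```python
import math

def sphere_area(n):
    # (3.2.32): A_{n-1} = 2 pi^{n/2} / Gamma(n/2)
    return 2.0 * math.pi ** (n / 2) / math.gamma(n / 2)

a_circle = sphere_area(2)     # A_1, circumference of the unit circle: 2 pi
a_sphere = sphere_area(3)     # A_2, area of the unit sphere in R^3: 4 pi
a_3sphere = sphere_area(4)    # A_3; by (3.2.38) with k = 2, equals 2 pi^2

# Lemma 3.2.2: Gamma(z + 1) = z Gamma(z)
z = 3.7
recur = math.gamma(z + 1) - z * math.gamma(z)
```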
We discuss another important example of a smooth surface, in the space $M(n,\mathbb R) \approx \mathbb R^{n^2}$ of real $n\times n$ matrices, namely $SO(n)$, the set of matrices $T \in M(n,\mathbb R)$ satisfying $T^tT = I$ and $\det T > 0$ (hence $\det T = 1$). The exponential map $\operatorname{Exp} : M(n,\mathbb R) \to M(n,\mathbb R)$, defined by $\operatorname{Exp}(A) = e^A$, has the property

(3.2.39)  $\operatorname{Exp} : \operatorname{Skew}(n) \to SO(n),$

where $\operatorname{Skew}(n)$ is the set of skew-symmetric matrices in $M(n,\mathbb R)$. As seen in (2.2.28)–(2.2.30),

(3.2.40)  $D\operatorname{Exp}(0)Y = Y, \quad \forall\, Y \in M(n,\mathbb R),$

and hence the inverse function theorem implies that there is a ball $\mathcal D$ centered at 0 in $M(n,\mathbb R)$ that is mapped diffeomorphically by $\operatorname{Exp}$ onto a neighborhood $\mathcal U$ of $I$ in $M(n,\mathbb R)$. From the identities

$\operatorname{Exp} X^t = (\operatorname{Exp} X)^t, \qquad \operatorname{Exp}(-X) = (\operatorname{Exp} X)^{-1},$

we see that, given $X \in \mathcal D$, $A = \operatorname{Exp} X \in \mathcal U$,

$A \in SO(n) \Longleftrightarrow X \in \operatorname{Skew}(n).$

Thus there is a neighborhood $\mathcal O$ of 0 in $\operatorname{Skew}(n)$ that is mapped by $\operatorname{Exp}$ diffeomorphically onto a smooth surface $U \subset M(n,\mathbb R)$, of dimension $m = n(n-1)/2$. Furthermore, $U$ is a neighborhood of $I$ in $SO(n)$. For general $T \in SO(n)$, we can define maps

(3.2.41)  $\varphi_T : \mathcal O \to SO(n), \quad \varphi_T(A) = T\operatorname{Exp}(A),$

and obtain coordinate charts in $SO(n)$, which is consequently a smooth surface of dimension $n(n-1)/2$ in $M(n,\mathbb R)$. Note that $SO(n)$ is a closed, bounded subset of $M(n,\mathbb R)$; hence it is compact. We use the inner product on $M(n,\mathbb R)$ computed componentwise; equivalently,

(3.2.42)  $\langle A, B\rangle = \operatorname{Tr}(B^tA) = \operatorname{Tr}(AB^t).$
This produces a metric tensor on $SO(n)$. The surface integral over $SO(n)$ has the following important invariance property.

Proposition 3.2.3. Given $f \in C(SO(n))$, if we set

(3.2.43)  $\rho_T f(X) = f(XT), \quad \lambda_T f(X) = f(TX),$

for $T, X \in SO(n)$, we have

(3.2.44)  $\displaystyle\int_{SO(n)} \rho_T f\, dS = \int_{SO(n)} \lambda_T f\, dS = \int_{SO(n)} f\, dS.$

Proof. Given $T \in SO(n)$, the maps $R_T, L_T : M(n,\mathbb R) \to M(n,\mathbb R)$, defined by $R_T(X) = XT$, $L_T(X) = TX$, are easily seen from (3.2.42) to be isometries. Thus they yield maps of $SO(n)$ to itself which preserve the metric tensor, proving (3.2.44). □
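The chart construction (3.2.39)–(3.2.41) can be probed numerically. The sketch below (my own; `expm` is a plain power-series exponential, adequate for a small matrix) exponentiates a skew-symmetric $X$ and checks that the result is orthogonal with determinant 1:

```python
import numpy as np

def expm(A, terms=30):
    # matrix exponential by truncated power series (fine for small ||A||)
    X = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        X = X + term
    return X

# a skew-symmetric X should give Exp(X) in SO(n), per (3.2.39)
B = np.array([[0.0, 0.3, -0.2],
              [-0.3, 0.0, 0.5],
              [0.2, -0.5, 0.0]])
Q = expm(B)
orth_defect = np.linalg.norm(Q.T @ Q - np.eye(3))
detQ = np.linalg.det(Q)
```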
Since $SO(n)$ is compact, its total volume $V(SO(n)) = \int_{SO(n)} 1\, dS$ is finite. We define the integral with respect to Haar measure,

(3.2.45)  $\displaystyle\int_{SO(n)} f(g)\, dg = \frac{1}{V(SO(n))}\int_{SO(n)} f\, dS.$

This is used in many arguments involving averaging over rotations. Examples of such averaging arise in §5.1, §5.3, and §7.4. See also §6.4 for generalizations to other matrix groups.

Extended notion of coordinates. Basic calculus as developed in this text so far has involved maps from one Euclidean space to another, of the type $F : \mathbb R^n \to \mathbb R^m$. It is convenient and useful to extend our setting to $F : V \to W$, where $V$ and $W$ are general finite-dimensional real vector spaces. There is the following notion of the derivative.

Let $V$ and $W$ be as above, and let $\Omega \subset V$ be open. We say $F : \Omega \to W$ is differentiable at $x \in \Omega$ provided there exists a linear map $L : V \to W$ such that, for $y \in V$ small,

(3.2.46)  $F(x+y) = F(x) + Ly + r(x,y),$

with $r(x,y) \to 0$ faster than $y \to 0$, i.e.,

(3.2.47)  $\dfrac{\|r(x,y)\|}{\|y\|} \to 0$ as $y \to 0$.

For this to be meaningful, we need norms on $V$ and $W$. Often these norms come from inner products. See Appendix A.2 for a discussion of inner product spaces. If (3.2.46)–(3.2.47) hold, we set $DF(x) = L$ and call the linear map

$DF(x) : V \to W$

the derivative of $F$ at $x$. We say $F$ is $C^1$ if $DF(x)$ is continuous in $x$. Notions of $F \in C^k$ are produced in analogy with the situation in §2.1.

Of course, we can reduce all this to the setting of §2.1 by picking bases of $V$ and $W$. Often such $V$ and $W$ arise as linear subspaces of $\mathbb R^n$, such as $T_pM$ in (3.2.1), or $V = N_pM$, mentioned right below that. As noted there, we can take a linear isomorphism of such $V$ with $\mathbb R^k$ for some $k$, and keep working in the context of maps between standard Euclidean spaces, as in (3.2.2). However, it can be convenient to avoid this distraction and, for example, replace (3.2.2) by

(3.2.48)  $\Phi : \mathcal O \times N_pM \to \mathbb R^n, \quad \Phi(x,z) = \varphi(x) + z,$

and (3.2.3) by

(3.2.49)  $D\Phi(x_0,0)\begin{pmatrix} v \\ w \end{pmatrix} = D\varphi(x_0)v + w.$
In order to carry out Lemma 3.2.1 in this setting, we want the following version of the inverse function theorem.
Proposition 3.2.4. Let $V$ and $W$ be real vector spaces, each of dimension $n$. Let $F$ be a $C^k$ map from an open neighborhood $\Omega$ of $p_0 \in V$ to $W$, with $q_0 = F(p_0)$, $k \ge 1$. Assume the derivative

$DF(p_0) : V \to W$ is an isomorphism.

Then there exist a neighborhood $U$ of $p_0$ and a neighborhood $\mathcal U$ of $q_0$ such that $F : U \to \mathcal U$ is one-to-one and onto, and $F^{-1} : \mathcal U \to U$ is a $C^k$ map.

While Proposition 3.2.4 is apparently an extension of Theorem 2.2.1, there is no extra work required to prove it. One can simply take linear isomorphisms $A : \mathbb R^n \to V$ and $B : \mathbb R^n \to W$ and apply Theorem 2.2.1 to the map $G(x) = B^{-1}F(Ax)$. Thus Proposition 3.2.4 is not a technical improvement of Theorem 2.2.1, but it is a useful conceptual extension.

With this in mind, we can define the notion of an $m$-dimensional surface $M \subset V$ (an $n$-dimensional vector space) as follows. Take a vector space $W$, of dimension $m$. Given $p \in M$, we require there to be a neighborhood $U$ of $p$ in $M$ and a smooth map $\varphi : \mathcal O \to U$, from an open set $\mathcal O \subset W$ bijectively to $U$, with an injective derivative at each point. We call such a map a coordinate chart. If all such maps are smooth of class $C^k$, we say $M$ is a surface of class $C^k$. As a further wrinkle, we could take different vector spaces $W_p$, for different $p \in M$, as long as they all have dimension $m$. The reader is invited to formulate the appropriate modification of Lemma 3.2.1 in this setting.

Submersions. Let $V$ and $W$ be finite-dimensional real vector spaces, let $\Omega \subset V$ be open, and let $F : \Omega \to W$ be a $C^k$ map, $k \ge 1$. We say $F$ is a submersion provided that, for each $x \in \Omega$, $DF(x) : V \to W$ is surjective. (This requires $\dim V \ge \dim W$.) We establish the following submersion mapping theorem, which the reader might recognize as a variant of the implicit function theorem. In the statement, $\ker T$ denotes the null space, $\ker T = \{v \in V : Tv = 0\}$, if $T : V \to W$ is a linear transformation.

Proposition 3.2.5. With $V$, $W$, and $\Omega \subset V$ as above, assume $F : \Omega \to W$ is a $C^k$ map, $k \ge 1$. Fix $p \in W$, and consider

(3.2.50)  $S = \{x \in \Omega : F(x) = p\}.$

Assume that, for each $x \in S$, $DF(x) : V \to W$ is surjective. Then $S$ is a $C^k$ surface in $\Omega$. Furthermore, for each $x \in S$,

(3.2.51)  $T_xS = \ker DF(x).$

Proof. Given $q \in S$, set $K_q = \ker DF(q)$ and define

(3.2.52)  $G_q : \Omega \to W \oplus K_q, \quad G_q(x) = \big(F(x),\, P_q(x - q)\big),$

where $P_q$ is a projection of $V$ onto $K_q$. Note that

(3.2.53)  $G_q(q) = (F(q), 0) = (p, 0).$

Also

(3.2.54)  $DG_q(x) = (DF(x), P_q), \quad x \in \Omega.$
We claim that

(3.2.55)  $DG_q(q) = (DF(q), P_q) : V \to W \oplus K_q$ is an isomorphism.

This is a special case of the following general observation.

Lemma 3.2.6. If $A : V \to W$ is a surjective linear map and $P$ is a projection of $V$ onto $\ker A$, then

(3.2.56)  $(A,P) : V \to W \oplus \ker A$

is an isomorphism.

We postpone the proof of this lemma and proceed with the proof of Proposition 3.2.5. Having (3.2.55), we can apply the inverse function theorem (Proposition 3.2.4) to obtain a neighborhood $U$ of $q$ in $V$ and a neighborhood $\mathcal O$ of $(p,0)$ in $W \oplus K_q$ such that $G_q : U \to \mathcal O$ is bijective, with $C^k$ inverse

(3.2.57)  $G_q^{-1} : \mathcal O \to U, \quad G_q^{-1}(p,0) = q.$

By (3.2.52), given $x \in U$,

(3.2.58)  $x \in S \Longleftrightarrow G_q(x) = (p,v)$, for some $v \in K_q$.

Hence $S \cap U$ is the image under the $C^k$ diffeomorphism $G_q^{-1}$ of $\mathcal O \cap \{(p,v) : v \in K_q\}$. Hence $S$ is smooth of class $C^k$ and $\dim T_qS = \dim K_q$. It follows from the chain rule that $T_qS \subset K_q$, so the dimension count yields $T_qS = K_q$. This proves Proposition 3.2.5. □

Note that we have the following coordinate chart on a neighborhood of $q \in S$:

(3.2.59)  $\varphi_q : \Omega_q \to S, \quad \varphi_q(v) = G_q^{-1}(p,v),$

where $\Omega_q$ is a neighborhood of 0 in $T_qS = K_q = \ker DF(q)$.

It remains to prove Lemma 3.2.6. Indeed, given that $A : V \to W$ is surjective, the fundamental theorem of linear algebra implies $\dim V = \dim(W \oplus \ker A)$, and it is clear that $(A,P)$ in (3.2.56) is injective, so the isomorphism property follows. □

Remark. In case $V = \mathbb R^n$ and $W = \mathbb R$, $DF(x)$ is typically denoted $\nabla F(x)$, the hypothesis on $DF(x)$ becomes $\nabla F(x) \ne 0$, and (3.2.51) is equivalent to the assertion that $\dim S = n-1$ and, for $x \in S$,

(3.2.60)  $T_xS = \{v \in \mathbb R^n : v \perp \nabla F(x)\}.$

Compare the discussion following Proposition 2.2.6.
We illustrate Proposition 3.2.5 with another proof that
(3.2.61) $SO(n) \subset M(n, \mathbb{R})$ is a smooth surface,
different from the argument involving (3.2.39)-(3.2.41). To get this, we take
(3.2.62) $V = M(n, \mathbb{R})$, $W = \{A \in M(n, \mathbb{R}) : A = A^t\}$,
and
(3.2.63) $F : V \to W$, $F(X) = X^tX$.
Now, given $X, Y \in V$, $Y$ small,
(3.2.64) $F(X + Y) = X^tX + X^tY + Y^tX + O(\|Y\|^2)$,
3.2. Surfaces and surface integrals
so
(3.2.65) $DF(X)Y = X^tY + Y^tX$.
We claim that
(3.2.66) $X \in SO(n) \Longrightarrow DF(X) : M(n, \mathbb{R}) \to W$ is surjective.
Indeed, given $A \in W$, i.e., $A \in M(n, \mathbb{R})$ and $A^t = A$, and $X \in SO(n)$, we have
(3.2.67) $Y = \tfrac{1}{2}XA \Longrightarrow DF(X)Y = A$.
This establishes (3.2.66), so Proposition 3.2.5 applies. Again we conclude that $SO(n)$ is a smooth surface in $M(n, \mathbb{R})$.
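The surjectivity argument (3.2.67) is easy to check numerically. In the sketch below (our illustration; the helper names are ours, not from the text), $X$ is a rotation in $SO(2)$, $A$ is symmetric, and $Y = \frac{1}{2}XA$ is verified to satisfy $DF(X)Y = X^tY + Y^tX = A$:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def DF(X, Y):
    # DF(X)Y = X^t Y + Y^t X, from (3.2.65)
    P = mat_mul(transpose(X), Y)
    Q = mat_mul(transpose(Y), X)
    return [[P[i][j] + Q[i][j] for j in range(len(X))] for i in range(len(X))]

t = 0.7
X = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]  # X in SO(2)
A = [[2.0, -1.0], [-1.0, 3.0]]                                  # A = A^t in W
Y = [[0.5 * sum(X[i][k] * A[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]                                          # Y = (1/2) X A
B = DF(X, Y)
assert all(abs(B[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

Since $X^tX = I$ and $A^t = A$, the two halves $\frac{1}{2}X^tXA$ and $\frac{1}{2}A^tX^tX$ add up to $A$, exactly as in the proof.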
Riemann integrable functions on a surface. Let $M \subset \mathbb{R}^n$ be an $m$-dimensional surface, smooth of class $C^r$. We define the class $\mathcal{R}_c(M)$ of compactly supported Riemann integrable functions as follows, guided by Proposition 3.1.11. If $f : M \to \mathbb{R}$ is bounded and has compact support, we set
(3.2.68) $\overline{I}(f) = \inf\big\{\int_M g\,dS : g \in C_c(M),\ g \ge f\big\}$, $\underline{I}(f) = \sup\big\{\int_M h\,dS : h \in C_c(M),\ h \le f\big\}$,
where $C_c(M)$ denotes the set of continuous functions on $M$ with compact support. Then
(3.2.69) $f \in \mathcal{R}_c(M) \iff \overline{I}(f) = \underline{I}(f)$,
and if such is the case, we denote the common value by $\int_M f\,dS$. It follows readily from the definition and arguments produced in §3.1 that
(3.2.70) $f_1, f_2 \in \mathcal{R}_c(M) \Longrightarrow f_1 + f_2 \in \mathcal{R}_c(M)$ and $\int_M (f_1 + f_2)\,dS = \int_M f_1\,dS + \int_M f_2\,dS$.
In fact, using (3.2.16) for functions that are continuous on $M$ with compact support, one obtains from the definition (3.2.68) that if $f_j : M \to \mathbb{R}$ are bounded and have compact support,
$\overline{I}(f_1 + f_2) \le \overline{I}(f_1) + \overline{I}(f_2)$, $\underline{I}(f_1 + f_2) \ge \underline{I}(f_1) + \underline{I}(f_2)$,
which yields (3.2.70). Also one can modify the proof of Proposition 3.1.17 to show that
(3.2.71)
Furthermore, if $\varphi : \mathcal{O} \to U \subset M$ is a coordinate chart and $f \in \mathcal{R}_c(U)$, then an application of Proposition 3.1.11 gives
(3.2.72) $f \circ \varphi \in \mathcal{R}_c(\mathcal{O})$, and $\int_M f\,dS = \int_{\mathcal{O}} f(\varphi(x))\sqrt{g(x)}\,dx$,
with $g(x)$ as in (3.2.12)-(3.2.13). Given any $f \in \mathcal{R}_c(M)$, we can take a continuous partition of unity $\{u_j\}$, write $f = \sum_j f_j = \sum_j u_jf$, and use (3.2.70)-(3.2.72) to express $\int_M f\,dS$ as a sum of integrals over coordinate charts.

If $\Sigma \subset M$ has compact closure, then
(3.2.73) $\mathrm{cont}^+\,\Sigma = \overline{I}(\chi_\Sigma)$,
and $\Sigma$ is contented if and only if $\chi_\Sigma \in \mathcal{R}_c(M)$. In such a case, (3.2.73) is the area of $\Sigma$. Given $f : M \to \mathbb{R}$, bounded and compactly supported, in parallel with (3.1.39) we say
(3.2.74) $f \in \mathfrak{C}_c(M) \iff$ the set $\Sigma$ of points of discontinuity of $f$ satisfies $\mathrm{cont}^+\,\Sigma = 0$.
We have
(3.2.75) $\mathfrak{C}_c(M) \subset \mathcal{R}_c(M)$,
and (again parallel to Proposition 3.1.11) if $f : M \to \mathbb{R}$ is bounded and compactly supported,
(3.2.76) $\overline{I}(f) = \inf\big\{\int_M g\,dS : g \in \mathfrak{C}_c(M),\ g \ge f\big\}$, $\underline{I}(f) = \sup\big\{\int_M h\,dS : h \in \mathfrak{C}_c(M),\ h \le f\big\}$.
One can proceed from here to define the spaces
(3.2.77) $\mathcal{R}(M)$, $\mathcal{R}^\#(M)$,
and establish properties of functions in these spaces, in analogy with work in §3.1 on $\mathcal{R}(\mathbb{R}^n)$ and $\mathcal{R}^\#(\mathbb{R}^n)$. We leave such an investigation to the reader.

Vector fields and flows on surfaces. Let $M \subset \mathbb{R}^n$ be a smooth, $m$-dimensional surface. A smooth vector field $X$ on $M$ (sometimes called a tangent vector field) is a smooth map
(3.2.78) $X : M \to \mathbb{R}^n$ such that $X(p) \in T_pM$, $\forall\, p \in M$.
If $\varphi : \mathcal{O} \to U \subset M$ is a coordinate chart, then there is a unique smooth vector field $X^\varphi : \mathcal{O} \to \mathbb{R}^m$ such that
(3.2.79) $D\varphi(x)X^\varphi(x) = X(\varphi(x))$.
The vector field $X$ generates a flow $\mathcal{F}_X^t$ on $M$, satisfying
(3.2.80) $\mathcal{F}_X^t : M \to M$, $\mathcal{F}_X^0(x) = x$,
at least for small $|t|$. As before, we have the defining property
(3.2.81) $\dfrac{d}{dt}\mathcal{F}_X^t(x) = X(\mathcal{F}_X^t(x))$.
Note also that
(3.2.82) $\mathcal{F}_X^{s+t}(x) = \mathcal{F}_X^s(\mathcal{F}_X^t(x))$.
With slight abuse of notation, we will use the same symbol $X$ for the vector field $X$ on $M$ and for the associated vector field $X^\varphi$ on a coordinate patch. Valuable information on the behavior of the flow $\mathcal{F}_X^t$ can be obtained by investigating the $t$-derivative of
(3.2.83) $v_t(x) = v(\mathcal{F}_X^t(x))$,
given $v \in C_0^1(M)$, i.e., $v$ is of class $C^1$ and vanishes outside some compact subset of $M$. In fact, we take $v \in C_0^1(U)$, and identify this with $v \in C_0^1(\mathcal{O})$, with $\mathcal{O} \subset \mathbb{R}^m$ as above. The chain rule plus (3.2.81) yields
(3.2.84) $\dfrac{d}{dt}v_t(x) = X(\mathcal{F}_X^t(x)) \cdot \nabla v(\mathcal{F}_X^t(x))$.
In particular,
(3.2.85) $\dfrac{d}{ds}v(\mathcal{F}_X^s(x))\Big|_{s=0} = X(x) \cdot \nabla v(x)$.
Here $\nabla v$ is the gradient of $v$, given by $\nabla v = (\partial v/\partial x_1, \ldots, \partial v/\partial x_m)$. A useful alternative formula to (3.2.84) is
(3.2.86) $\dfrac{d}{dt}v_t(x) = \dfrac{d}{ds}v_t(\mathcal{F}_X^s(x))\Big|_{s=0} = X(x) \cdot \nabla v_t(x)$,
the first equality following from (3.2.82) and the second from (3.2.85), with $v$ replaced by $v_t$.

One significant consequence of (3.2.86), which will lead to the formula (3.2.91) below, is that, for $v \in C_0^1(\mathcal{O})$,
(3.2.87) $\dfrac{d}{dt}\displaystyle\int_{\mathcal{O}} v(\mathcal{F}_X^t(x))\sqrt{g}\,dx = \int_{\mathcal{O}} X(x) \cdot \nabla v_t(x)\sqrt{g}\,dx = -\int_{\mathcal{O}} \operatorname{div}X(x)\,v(\mathcal{F}_X^t(x))\sqrt{g}\,dx$.
Here, $\operatorname{div}X(x)$ is the divergence of the vector field $X(x) = (X_1(x), \ldots, X_m(x))$, defined (in local coordinates) by
(3.2.88) $\operatorname{div}X(x) = g^{-1/2}\displaystyle\sum_j \partial_j\big(g^{1/2}X_j(x)\big)$.
The last equality in (3.2.87) follows by integration by parts,
$\displaystyle\int_{\mathcal{O}} G_k(x)\frac{\partial v_t}{\partial x_k}\,dx = -\int_{\mathcal{O}} \frac{\partial G_k}{\partial x_k}v_t(x)\,dx$, $G_k(x) = \sqrt{g}\,X_k(x)$,
followed by summation over $k$. We restate (3.2.87) in global terms,
(3.2.89) $\dfrac{d}{dt}\displaystyle\int_M v(\mathcal{F}_X^t(x))\,dV = -\int_M \operatorname{div}X(x)\,v(\mathcal{F}_X^t(x))\,dV$.
So far, we have (3.2.89) for $v \in C_0^1(M)$. We extend this to less regular functions. First, note that (3.2.89) implies
(3.2.90) $\displaystyle\int_M v(\mathcal{F}_X^t(x))\,dV - \int_M v(x)\,dV = -\int_0^t\int_M \operatorname{div}X(x)\,v(\mathcal{F}_X^s(x))\,dV\,ds$.
Basic results on the integral allow one to pass from $v \in C_0^1(M)$ in (3.2.90) to more general $v$, including $v = \chi_\Omega$ (the characteristic function of $\Omega$, defined to be equal to 1 on $\Omega$ and 0 on $M \setminus \Omega$), for smoothly bounded compact $\Omega \subset M$. In more detail, if $\Omega \subset M$ is a compact, smoothly bounded subset, let $\Sigma_\delta = \{x \in M : \operatorname{dist}(x, \Omega) \le \delta\}$. There exists $\delta_0 > 0$ such that $\Sigma_\delta \subset M$ for $\delta \in (0, \delta_0]$. For such $\delta$, one can produce $v_\delta \in C_0^1(M)$ such that
$v_\delta = 1$ on $\Omega$, $0 \le v_\delta \le 1$, $v_\delta = 0$ on $M \setminus \Sigma_\delta$.
Then
$\Big|\displaystyle\int_M \chi_\Omega(x)\,dV - \int_M v_\delta(x)\,dV\Big| \le \operatorname{Vol}(\Sigma_\delta \setminus \Omega) \to 0$, as $\delta \to 0$,
so, as $\delta \to 0$,
$\displaystyle\int_M v_\delta(x)\,dV \to \int_M \chi_\Omega(x)\,dV$.
Similar arguments give
$\displaystyle\int_M v_\delta(\mathcal{F}_X^t(x))\,dV \to \int_M \chi_\Omega(\mathcal{F}_X^t(x))\,dV$,
and
$\displaystyle\int_0^t\int_M \operatorname{div}X(x)\,v_\delta(\mathcal{F}_X^s(x))\,dV\,ds \to \int_0^t\int_M \operatorname{div}X(x)\,\chi_\Omega(\mathcal{F}_X^s(x))\,dV\,ds$.
These results allow one to take $v = \chi_\Omega$ in (3.2.90). Now one can pass from (3.2.90) back to (3.2.89), via the fundamental theorem of calculus. Note that
$\operatorname{Vol}\mathcal{F}_X^t(\Omega) = \displaystyle\int_M \chi_\Omega(\mathcal{F}_X^{-t}(x))\,dV$.
We can apply (3.2.89), with $t$ replaced by $-t$ and $v$ by $\chi_\Omega$, and deduce the following.

Proposition 3.2.7. If $X$ is a $C^1$ vector field, generating the flow $\mathcal{F}_X^t$, well defined on $M$ for $t \in I$, and $\Omega \subset M$ is compact and smoothly bounded, then, for $t \in I$,
(3.2.91) $\dfrac{d}{dt}\operatorname{Vol}\mathcal{F}_X^t(\Omega) = \displaystyle\int_{\mathcal{F}_X^t(\Omega)} \operatorname{div}X(x)\,dV$.
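Proposition 3.2.7 can be sanity-checked on the simplest flat example (this illustration is ours, not from the text): on $M = \mathbb{R}^2$, the linear field $X(x, y) = (ax, by)$ has flow $\mathcal{F}_X^t(x, y) = (e^{at}x, e^{bt}y)$ and constant divergence $a + b$, so for $\Omega$ the unit square, $\operatorname{Vol}\mathcal{F}_X^t(\Omega) = e^{(a+b)t}$, and (3.2.91) can be checked by a difference quotient:

```python
import math

a, b = 0.3, -0.1                 # X(x, y) = (a x, b y), so div X = a + b
def vol_flow(t):
    # F^t maps the unit square to a rectangle with sides e^{at}, e^{bt}
    return math.exp(a * t) * math.exp(b * t)

t0, h = 0.5, 1e-6
lhs = (vol_flow(t0 + h) - vol_flow(t0 - h)) / (2 * h)  # d/dt Vol F^t(Omega)
rhs = (a + b) * vol_flow(t0)                           # integral of div X over F^t(Omega)
assert abs(lhs - rhs) < 1e-6
```

With $a + b > 0$ the flow magnifies area, with $a + b < 0$ it shrinks it, matching the discussion of the sign of the divergence below.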
This result is behind the notation $\operatorname{div}X$, i.e., the divergence of $X$. Vector fields with positive divergence generate flows $\mathcal{F}_X^t$ that magnify volumes as $t$ increases, while vector fields with negative divergence generate flows that shrink volumes as $t$ increases. We will see more of the divergence operation on vector fields in §§4.4 and 5.2.

Projective spaces, quotient surfaces, and manifolds. Real projective space $\mathbb{P}^{n-1}$ is obtained from the sphere $S^{n-1}$ by identifying each pair of antipodal points,
(3.2.92) $\mathbb{P}^{n-1} = S^{n-1}/\!\sim$,
where
(3.2.93) $x \sim y \iff x = \pm y$, for $x, y \in S^{n-1} \subset \mathbb{R}^n$.
More generally, if $M \subset \mathbb{R}^n$ is an $m$-dimensional surface, smooth of class $C^k$, satisfying
(3.2.94) $0 \notin M$, and $x \in M \Longrightarrow -x \in M$,
we define
(3.2.95) $\mathbb{P}(M) = M/\!\sim$,
using the equivalence relation (3.2.93). Note that $M$ has the metric space structure $d(x, y) = \|x - y\|$, and then $\mathbb{P}(M)$ becomes a metric space with metric
(3.2.96) $d([x], [y]) = \min\{d(x', y') : x' \in [x],\ y' \in [y]\}$,
or, in view of (3.2.93),
(3.2.97) $d([x], [y]) = \min\{d(x, y), d(x, -y)\}$.
Here, $x \in M$ and $[x] \in \mathbb{P}(M)$ is its associated equivalence class. The map $x \mapsto [x]$ is a continuous map
(3.2.98) $p : M \to \mathbb{P}(M)$.
It has the following readily established property.
Lenuna 3.2.8. Each p E I'(M) has an open neighborhood U c I'(M) such that p  1 (U) = Uo U U1 is the disjoint union oftwo open subsets of M, and, for j = 0, 1, P : Uj .... U is a
homeomorphism, i.e., it is continuous, onetoone, and onto, with continuous inverse.
Given p E I'(M), (Po, p r J = p  1 (p), let Uo be a neighborhood of Po in M for which there is a C k coordinate chart 'Po : 0 .... Uo (0 c Rm open). Then 'P 1 (X) = 'Po (x) gives a coordinate chart 'PI : 0 .... U1 onto a neighborhood U1 of PI E M. If Uo is picked small enough, Uo and U1 are disjoint. The projection p maps Uo and U1 homeomorphically onto a neighborhood U of p in I'(M), and we have coordinate
charts, (3.2.99)
In fact, p o 'PI = p o 'Po. If 'Po : 0 .... Uo is another C k coordinate chart, then, as in Lemma 3.2.1, we have a Ck diffeomorphism F : 0 .... 0 such that 'Po 0 F = 'Po. Similarly 'PI 0 F = 'PI, with 'PI (x) = 'Po(x), and we have p 0 'Pj 0 F = p o 'Pj. The structure just placed on the quotient surface I'(M) makes it a manifold, an object we now define.
Given a metric space $X$, we say $X$ has the structure of a $C^k$ manifold of dimension $m$ provided the following conditions hold. First, for each $p \in X$, we have an open neighborhood $U_p$ of $p$ in $X$, an open set $\mathcal{O}_p \subset \mathbb{R}^m$, and a homeomorphism
(3.2.100) $\varphi_p : \mathcal{O}_p \to U_p$.
Next, if also $q \in X$ and $U_{pq} = U_p \cap U_q \neq \emptyset$, then the homeomorphism from $\mathcal{O}_{pq} = \varphi_p^{-1}(U_{pq})$ to $\mathcal{O}_{qp} = \varphi_q^{-1}(U_{pq})$,
(3.2.101) $F_{pq} = \varphi_q^{-1} \circ \varphi_p\big|_{\mathcal{O}_{pq}}$,
is a $C^k$ diffeomorphism. As before, we call the maps $\varphi_p : \mathcal{O}_p \to U_p \subset X$ coordinate charts on $X$.

A metric tensor on a $C^k$ manifold $X$ is defined by positive-definite symmetric $m \times m$ matrices $G_p \in C^{k-1}(\mathcal{O}_p)$, satisfying the compatibility condition
(3.2.102) $G_p(x) = DF_{pq}(x)^t\,G_q(y)\,DF_{pq}(x)$,
for
(3.2.103) $y = F_{pq}(x)$.
We then set
(3.2.104) $g_p(x) = \det G_p(x)$,
satisfying
(3.2.105) $g_p(x) = \big(\det DF_{pq}(x)\big)^2 g_q(y)$,
for $x$ and $y$ as in (3.2.103). If $f : X \to \mathbb{R}$ is a continuous function supported in $U_p$, we set
(3.2.106) $\displaystyle\int_X f\,dS = \int_{\mathcal{O}_p} f(\varphi_p(x))\sqrt{g_p(x)}\,dx$.
As in (3.2.14)-(3.2.15), this leads to a well-defined integral $\int_X f\,dS$ for $f \in C_c(X)$, obtained by writing $f$ as a finite sum of continuous functions supported on various coordinate patches $U_p$. From here we can develop the class of functions $\mathcal{R}_c(X)$ and their integrals over $X$ in a fashion parallel to that done above when $X$ is a surface in $\mathbb{R}^n$.

The quotient surfaces $\mathbb{P}(M)$ are examples of $C^k$ manifolds as defined above. They get natural metric tensors with the property that $p$ in (3.2.98) is a local isometry. In such a case,
(3.2.107) $\displaystyle\int_{\mathbb{P}(M)} f\,dS = \frac{1}{2}\int_M f \circ p\,dS$.
Another important quotient manifold is the flat torus
(3.2.108) $\mathbb{T}^n = \mathbb{R}^n/\mathbb{Z}^n$.
Here the equivalence relation on $\mathbb{R}^n$ is $x \sim y \iff x - y \in \mathbb{Z}^n$. Natural local coordinates on $\mathbb{T}^n$ are given by the projection $p : \mathbb{R}^n \to \mathbb{T}^n$, restricted to sufficiently small open sets in $\mathbb{R}^n$. The quotient $\mathbb{T}^n$ gets a natural metric tensor for which $p$ is a local isometry.
Given two $C^k$ manifolds $X$ and $Y$, a continuous map $\psi : X \to Y$ is said to be smooth of class $C^k$ provided that for each $p \in X$, there are neighborhoods $U$ of $p$ and $\tilde{U}$ of $q = \psi(p)$, and coordinate charts $\varphi_1 : \mathcal{O} \to U$, $\varphi_2 : \tilde{\mathcal{O}} \to \tilde{U}$, such that $\varphi_2^{-1} \circ \psi \circ \varphi_1 : \mathcal{O} \to \tilde{\mathcal{O}}$ is a $C^k$ map. We say $\psi$ is a $C^k$ diffeomorphism if it is one-to-one and onto and $\psi^{-1} : Y \to X$ is a $C^k$ map. If $X$ is a $C^k$ manifold and $M \subset \mathbb{R}^n$ a $C^k$ surface, a $C^k$ diffeomorphism $\psi : X \to M$ is called a $C^k$ embedding of $X$ into $\mathbb{R}^n$. Here is an embedding of $\mathbb{T}^n$ into $\mathbb{R}^{2n}$:
(3.2.109) $\psi(x) = \displaystyle\sum_{j=1}^n (\cos 2\pi x_j)e_j + \sum_{j=1}^n (\sin 2\pi x_j)e_{n+j}$.
A priori, $\psi : \mathbb{R}^n \to \mathbb{R}^{2n}$, but $\psi(x) = \psi(y)$ whenever $x - y \in \mathbb{Z}^n$, so this naturally induces a smooth map $\mathbb{T}^n \to \mathbb{R}^{2n}$, which can be seen to be an embedding.

If $M \subset \mathbb{R}^n$ is an $m$-dimensional surface satisfying (3.2.94), an embedding of $\mathbb{P}(M)$ into $M(n, \mathbb{R})$ can be constructed via the map
(3.2.110) $\psi : \mathbb{R}^n \to M(n, \mathbb{R})$, $\psi(x) = xx^t$.
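The periodicity claimed for the map (3.2.109) is easy to probe numerically. The following sketch (our illustration; the function name `psi` is ours) evaluates the embedding of $\mathbb{T}^2$ at two points of $\mathbb{R}^2$ that differ by an element of $\mathbb{Z}^2$:

```python
import math

# psi from (3.2.109): psi(x) = sum_j (cos 2*pi*x_j) e_j + (sin 2*pi*x_j) e_{n+j}
def psi(x):
    return [math.cos(2 * math.pi * xj) for xj in x] + \
           [math.sin(2 * math.pi * xj) for xj in x]

x = [0.3, 0.71]
y = [x[0] + 2, x[1] - 5]          # x - y in Z^2
px, py = psi(x), psi(y)
assert all(abs(px[i] - py[i]) < 1e-12 for i in range(4))
```

So $\psi$ takes equal values on each equivalence class and descends to a well-defined map on $\mathbb{T}^2 = \mathbb{R}^2/\mathbb{Z}^2$, as asserted.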
Note that here we regard $x \in \mathbb{R}^n$ as a column vector,
(3.2.111) $x = (x_1, \ldots, x_n)^t$,
so $xx^t \in M(n, \mathbb{R})$. We need a couple of lemmas.

Lemma 3.2.9. For $\psi$ as in (3.2.110), $x, y \in \mathbb{R}^n$,
(3.2.112) $\psi(x) = \psi(y) \iff x = \pm y$.

Proof. The map $\psi$ is characterized by $\psi(x)e_j = x_jx$, where $x$ is as in (3.2.111) and $\{e_j\}$ is the standard basis of $\mathbb{R}^n$. It follows that if $x \neq 0$, $\psi(x)$ has exactly one nonzero eigenvalue, namely $|x|^2$, and $\psi(x)x = |x|^2x$. Thus $\psi(x) = \psi(y)$ implies that $|x|^2 = |y|^2$ and that $x$ and $y$ are parallel. Thus $x = ay$ and $a = \pm 1$. $\square$

Lemma 3.2.10. In the setting of Lemma 3.2.9, if $x \neq 0$,
(3.2.113) $D\psi(x) : \mathbb{R}^n \to M(n, \mathbb{R})$ is injective.

Proof. A calculation gives
(3.2.114) $D\psi(x)v = xv^t + vx^t$.
Thus, if $v \in \ker D\psi(x)$,
(3.2.115) $xv^t = -vx^t$.
Both sides are rank-one elements of $M(n, \mathbb{R})$ (unless $v = 0$). The range of the left side is spanned by $x$ and that of the right side is spanned by $v$, so $v = ax$ for some $a \in \mathbb{R}$. Then (3.2.115) becomes
(3.2.116) $axx^t = -axx^t$,
which implies $a = 0$ if $x \neq 0$. $\square$
Remark. Here is a refinement of Lemma 3.2.10. Using the inner product on $M(n, \mathbb{R})$ given by (3.2.42), we can calculate
(3.2.117) $\langle D\psi(x)v, D\psi(x)v\rangle = 2\big(|x|^2|v|^2 + (x \cdot v)^2\big)$.

Lemmas 3.2.9 and 3.2.10 imply that if $M \subset \mathbb{R}^n$ is an $m$-dimensional surface satisfying (3.2.94), then $\psi|_M$ yields an embedding of $\mathbb{P}(M)$ into $M(n, \mathbb{R})$. Denote the image surface by $M^\#$. As we see from (3.2.117), this embedding is not typically an isometry. However, if $M = S^{n-1}$ and $v$ is tangent to $S^{n-1}$ at $x$, then $v \cdot x = 0$, and (3.2.117) implies that in this case the embedding of $\mathbb{P}^{n-1}$ into $M(n, \mathbb{R})$ is an isometry, up to a factor of 2.

It is the case that if $X$ is any $C^k$ manifold that is a countable union of compact sets, then $X$ can be embedded into $\mathbb{R}^n$ for some $n$. In case $X$ is compact, this is not very hard to prove, using local coordinate charts and smooth cutoffs, and the interested reader might take a crack at it. If $X$ is provided with a metric tensor, this embedding might not preserve this metric tensor. If it does, one calls it an isometric embedding. It is the case that any such manifold has an isometric embedding into $\mathbb{R}^n$ for some $n$ (if $k$ is sufficiently large). This result is the famous Nash embedding theorem, and its proof is quite difficult. For $X$ compact and $C^\infty$, a proof is given in [46, Chapter 14].
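The algebra behind Lemmas 3.2.9-3.2.10 can also be checked directly in coordinates. The sketch below (our illustration, not from the text) verifies, for one vector $x \in \mathbb{R}^3$, that $\psi(x) = \psi(-x)$ and that $\psi(x)x = |x|^2x$, the two facts used in the proof of Lemma 3.2.9:

```python
# psi(x) = x x^t as a nested list: psi(x)[i][j] = x_i * x_j
def psi(x):
    return [[xi * xj for xj in x] for xi in x]

x = [1.0, -2.0, 0.5]
neg_x = [-xi for xi in x]
assert psi(x) == psi(neg_x)                       # antipodal points have equal image

norm2 = sum(xi * xi for xi in x)                  # |x|^2
Px = [sum(psi(x)[i][j] * x[j] for j in range(3)) for i in range(3)]
assert all(abs(Px[i] - norm2 * x[i]) < 1e-12 for i in range(3))
```

The second assertion exhibits $x$ as an eigenvector of $\psi(x)$ with eigenvalue $|x|^2$; every vector orthogonal to $x$ is annihilated, which accounts for the remaining (zero) eigenvalues.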
Polar decomposition of matrices. We define the spaces $\operatorname{Sym}(n)$ and $\mathcal{P}(n)$ by
(3.2.118) $\operatorname{Sym}(n) = \{A \in M(n, \mathbb{R}) : A = A^t\}$, $\mathcal{P}(n) = \{A \in \operatorname{Sym}(n) : x \cdot Ax > 0,\ \forall\, x \neq 0\}$.
It is easy to show that $\mathcal{P}(n)$ is an open, convex subset of the linear space $\operatorname{Sym}(n)$. We aim to prove the following result.

Proposition 3.2.11. Given $A \in Gl^+(n, \mathbb{R})$, there exist unique $U \in SO(n)$ and $Q \in \mathcal{P}(n)$ such that
(3.2.119) $A = UQ$.

The representation (3.2.119) is called the polar decomposition of $A$. Note that
(3.2.120) $(UQ)^tUQ = QU^tUQ = Q^2$,
so if the identity (3.2.119) were to hold, we would have
(3.2.121) $A^tA = Q^2$.
Note also that
(3.2.122) $A \in Gl(n, \mathbb{R}) \Longrightarrow A^tA \in \mathcal{P}(n)$,
since $x \cdot A^tAx = (Ax) \cdot (Ax) = |Ax|^2$. To prove Proposition 3.2.11, we bring in the following basic result of linear algebra. See Appendix A.3.
Proposition 3.2.12. Given $B \in \operatorname{Sym}(n)$, there is an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $B$, with eigenvalues $\lambda_j \in \mathbb{R}$. Equivalently, there exists $V \in SO(n)$ such that
(3.2.123) $B = VDV^{-1}$,
with
(3.2.124) $D = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$.
If $B \in \mathcal{P}(n)$, then each $\lambda_j > 0$. We can then set
(3.2.125) $Q = VD^{1/2}V^{-1}$, $D^{1/2} = \operatorname{diag}\big(\lambda_1^{1/2}, \ldots, \lambda_n^{1/2}\big)$,
and obtain the following.

Corollary 3.2.13. Given $B \in \mathcal{P}(n)$, there is a unique $Q \in \mathcal{P}(n)$ satisfying
(3.2.126) $Q^2 = B$.
We say $Q = B^{1/2}$. To obtain the decomposition (3.2.119), we set
(3.2.127) $Q = (A^tA)^{1/2}$, $U = AQ^{-1}$.
Note that
(3.2.128) $U^tU = Q^{-1}A^tAQ^{-1} = Q^{-1}Q^2Q^{-1} = I$,
and $(\det U)(\det Q) = \det A > 0$, so $\det U > 0$, and hence $U \in SO(n)$, as desired. By (3.2.121) and Corollary 3.2.13, the factor $Q \in \mathcal{P}(n)$ in (3.2.119) is unique, and hence so is the factor $U$. We can use Proposition 3.2.11 to prove the following.
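For $2 \times 2$ matrices the construction (3.2.127) can be carried out in closed form and tested numerically. The sketch below is our illustration (not from the text); the square root of the positive-definite matrix $B = A^tA$ is computed via the $2 \times 2$ identity $B^{1/2} = (B + sI)/t$, with $s = \sqrt{\det B}$ and $t = \sqrt{\operatorname{tr}B + 2s}$, which follows from the Cayley-Hamilton theorem:

```python
import math

def polar_2x2(A):
    """Sketch of the polar decomposition A = U Q for a 2x2 matrix with det A > 0."""
    (a11, a12), (a21, a22) = A
    # B = A^t A, symmetric positive-definite by (3.2.122)
    b11 = a11 * a11 + a21 * a21
    b12 = a11 * a12 + a21 * a22
    b22 = a12 * a12 + a22 * a22
    # Q = B^{1/2} = (B + s I)/t, s = sqrt(det B), t = sqrt(tr B + 2s)
    s = math.sqrt(b11 * b22 - b12 * b12)
    t = math.sqrt(b11 + b22 + 2 * s)
    Q = [[(b11 + s) / t, b12 / t], [b12 / t, (b22 + s) / t]]
    # U = A Q^{-1}, as in (3.2.127)
    detQ = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
    Qinv = [[Q[1][1] / detQ, -Q[0][1] / detQ], [-Q[1][0] / detQ, Q[0][0] / detQ]]
    U = [[sum(A[i][k] * Qinv[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    return U, Q

A = [[2.0, 1.0], [0.5, 3.0]]                 # det A = 5.5 > 0
U, Q = polar_2x2(A)
# U should be a rotation: columns of unit length, det U = 1
assert abs(U[0][0] * U[1][1] - U[0][1] * U[1][0] - 1.0) < 1e-9
assert abs(U[0][0] * U[0][0] + U[1][0] * U[1][0] - 1.0) < 1e-9
# and U Q should reproduce A
UQ = [[sum(U[i][k] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert all(abs(UQ[i][j] - A[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

The assertions mirror (3.2.128): $U^tU = I$ follows from $A^tA = Q^2$, and $\det U = \det A/\det Q = 1$.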
Proposition 3.2.14. The set $Gl^+(n, \mathbb{R})$ is connected. In fact, given $A \in Gl^+(n, \mathbb{R})$, there is a smooth path $\gamma : [0, 1] \to Gl^+(n, \mathbb{R})$ such that $\gamma(0) = I$ and $\gamma(1) = A$.

Proof. To start, we have that
(3.2.129) $\operatorname{Exp} : \operatorname{Skew}(n) \to SO(n)$ is onto.
See Exercise 14 below for this (or Corollary A.3.9). Hence, with $A = UQ$ as in (3.2.119), we have a smooth path $\alpha(t) = \operatorname{Exp}(tS)$, $\alpha : [0, 1] \to SO(n)$, such that $\alpha(0) = I$ and $\alpha(1) = U$. Since $\mathcal{P}(n)$ is a convex subset of $\operatorname{Sym}(n)$, we can take $\beta(t) = (1 - t)I + tQ$, obtaining a smooth path $\beta : [0, 1] \to \mathcal{P}(n)$, such that $\beta(0) = I$ and $\beta(1) = Q$. Then
(3.2.130) $\gamma(t) = \alpha(t)\beta(t)$
does the trick. $\square$
Exercises
1. The following exercise deals with parametrizing a curve by arc length.
(a) Let $\gamma : [a, b] \to \mathbb{R}^n$ be a $C^1$ curve, and assume $\gamma'(t) \neq 0$ for $t \in [a, b]$. By (3.2.17), the length of the segment $\gamma([a, t])$ is
$\ell(t) = \displaystyle\int_a^t |\gamma'(x)|\,dx$.
We have a $C^1$ map $\ell : [a, b] \to [0, L]$ (where $L = \ell(b)$), satisfying $\ell'(t) = |\gamma'(t)| > 0$. Hence (by the one-dimensional inverse function theorem) there is a $C^1$ inverse $\ell^{-1} : [0, L] \to [a, b]$. We set
$\sigma(s) = \gamma(\ell^{-1}(s))$, $s \in [0, L]$.
Show that $\sigma : [0, L] \to \mathbb{R}^n$ is a $C^1$ curve and $|\sigma'(s)| \equiv 1$. We say that $\sigma$ is obtained from $\gamma$ by parametrization by arc length.
(b) Consider the circle $C = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 = 1\}$. Show that $C$ has a parametrization by arc length, say $\gamma$, such that $\gamma(0) = (1, 0)$ and $\gamma'(0) = (0, 1)$. Let us say $\gamma(t) = (c(t), s(t))$.
Remark. The curve segment $\gamma([0, t])$ has length $t$. In trigonometry, the line segments from $(0, 0)$ to $(1, 0) = \gamma(0)$ and from $(0, 0)$ to $(c(t), s(t)) = \gamma(t)$ are said to meet at an angle, measured in radians, equal to the length of this curve, i.e., to $t$ radians. Then the geometric definition of the trigonometric functions $\cos t$ and $\sin t$ yields
$\cos t = c(t)$, $\sin t = s(t)$.
(c) Consult the "Auxiliary exercises on trigonometric functions" in §2.3, and deduce from (2.3.111) that $\cos t$ and $\sin t$, defined above, are identical with $\cos t$ and $\sin t$ as they appear in (2.3.108). For a related approach, see Exercises 11-13 below.
(d) Taking $\pi$ as characterized below (2.3.112), show that the arc length of the unit circle $C$ is equal to $2\pi$ (reiterating the case $n = 2$ of (3.2.32)).
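The reparametrization in Exercise 1(a) is easy to carry out numerically for a concrete curve. In the sketch below (our illustration, with assumed helper names), $\gamma(t) = (t, t^2/2)$, the arc length function $\ell$ is approximated by the trapezoid rule and inverted by bisection, and the reparametrized curve $\sigma = \gamma \circ \ell^{-1}$ is checked to have unit speed:

```python
import math

def gamma(t):
    return (t, 0.5 * t * t)

def speed(t):
    return math.hypot(1.0, t)          # |gamma'(t)| for gamma(t) = (t, t^2/2)

def arclen(t, n=2000):
    # trapezoid approximation of ell(t) = int_0^t |gamma'(x)| dx
    h = t / n
    total = 0.5 * (speed(0.0) + speed(t)) + sum(speed(k * h) for k in range(1, n))
    return total * h

def t_of_s(s_target, hi=3.0):
    # invert ell by bisection; ell is strictly increasing since |gamma'| > 0
    lo_t, hi_t = 0.0, hi
    for _ in range(60):
        mid = 0.5 * (lo_t + hi_t)
        if arclen(mid) < s_target:
            lo_t = mid
        else:
            hi_t = mid
    return 0.5 * (lo_t + hi_t)

# sigma(s) = gamma(ell^{-1}(s)) should have |sigma'(s)| = 1
s0, h = 1.0, 1e-4
p, q = gamma(t_of_s(s0 - h)), gamma(t_of_s(s0 + h))
unit_speed = math.hypot(q[0] - p[0], q[1] - p[1]) / (2 * h)
assert abs(unit_speed - 1.0) < 1e-3
```

The chain rule explains the assertion: $\sigma'(s) = \gamma'(t)/\ell'(t)$ with $t = \ell^{-1}(s)$, and $\ell'(t) = |\gamma'(t)|$.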
2. Compute the volume of the unit ball $B^n = \{x \in \mathbb{R}^n : |x| \le 1\}$.
Hint. Apply (3.2.30) with $\varphi = \chi_{[0,1]}$.
3. Taking the upper half of the sphere $S^n$ to be the graph of $x_{n+1} = (1 - |x|^2)^{1/2}$, for $x \in B^n$, the unit ball in $\mathbb{R}^n$, deduce from (3.2.23) and (3.2.30) that
$A_n = 2A_{n-1}\displaystyle\int_0^1 \frac{r^{n-1}}{(1 - r^2)^{1/2}}\,dr = 2A_{n-1}\int_0^{\pi/2}(\sin\theta)^{n-1}\,d\theta$.
Use this to get an alternative derivation of the formula (3.2.38) for $A_n$.
Hint. Rewrite this formula as
$A_n = 2A_{n-1}b_{n-1}$, $b_k = \displaystyle\int_0^{\pi/2}\sin^k\theta\,d\theta$.
To analyze $b_k$, you can write, on the one hand,
$b_{k+2} = b_k - \displaystyle\int_0^{\pi/2}\sin^k\theta\cos^2\theta\,d\theta$,
and on the other, using integration by parts,
$b_{k+2} = \displaystyle\int_0^{\pi/2}\cos\theta\,\frac{d}{d\theta}\sin^{k+1}\theta\,d\theta = (k+1)\int_0^{\pi/2}\sin^k\theta\cos^2\theta\,d\theta$.
Deduce that
$b_{k+2} = \dfrac{k+1}{k+2}\,b_k$.
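The recursion derived in Exercise 3 can be confirmed numerically (our illustration, not part of the exercise): approximate $b_k = \int_0^{\pi/2}\sin^k\theta\,d\theta$ by the trapezoid rule and compare $b_{k+2}$ with $\frac{k+1}{k+2}b_k$:

```python
import math

def b(k, n=20000):
    # trapezoid approximation of b_k = int_0^{pi/2} sin^k(theta) dtheta
    h = (math.pi / 2) / n
    total = 0.5 * (math.sin(0.0) ** k + math.sin(math.pi / 2) ** k)
    total += sum(math.sin(j * h) ** k for j in range(1, n))
    return total * h

# b_0 = pi/2, b_1 = 1, and b_{k+2} = (k+1)/(k+2) * b_k
assert abs(b(0) - math.pi / 2) < 1e-6
assert abs(b(1) - 1.0) < 1e-6
for k in range(0, 6):
    assert abs(b(k + 2) - (k + 1) / (k + 2) * b(k)) < 1e-6
```

Iterating the recursion from $b_0 = \pi/2$ and $b_1 = 1$ produces the Wallis-type products that lead to the closed formula (3.2.38) for $A_n$.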
4. Suppose $M$ is a surface in $\mathbb{R}^n$ of dimension 2, and suppose $\varphi : \mathcal{O} \to U \subset M$ is a coordinate chart, with $\mathcal{O} \subset \mathbb{R}^2$. Set $\varphi_{jk}(x) = (\varphi_j(x), \varphi_k(x))$, so $\varphi_{jk} : \mathcal{O} \to \mathbb{R}^2$. Show that the formula (3.2.12) for the surface integral is equivalent to
$\displaystyle\int_M f\,dS = \int_{\mathcal{O}} f \circ \varphi(x)\Big(\sum_{j<k}\big(\det D\varphi_{jk}(x)\big)^2\Big)^{1/2}\,dx$.

Now we have
(3.4.1) $F(x_\alpha + y) = F(x_\alpha) + DF(x_\alpha)y + r_\alpha(y)$,
and, if $\delta > 0$ is small enough, then $|r_\alpha(y)| \le \varepsilon|y| \le \varepsilon\delta$, for $x_\alpha + y \in R_\alpha$. Thus $F(R_\alpha)$ is contained in an $\varepsilon\delta$-neighborhood of the set $H_\alpha = F(x_\alpha) + DF(x_\alpha)(R_\alpha - x_\alpha)$, which is a parallelepiped of dimension $\le n - 1$, and diameter $\le M\delta$, if $|DF| \le M$. Hence
(3.4.2) $\mathrm{cont}^+\,F(R_\alpha) \le C\varepsilon\delta^n \le C'\varepsilon V(R_\alpha)$, for $R_\alpha \in \mathcal{P}_\delta$.
Thus
(3.4.3) $\mathrm{cont}^+\,F(C \cap K) \le \displaystyle\sum_{R_\alpha \in \mathcal{P}_\delta}\mathrm{cont}^+\,F(R_\alpha) \le C''\varepsilon$.
Taking $\varepsilon \to 0$, we have the proof. $\square$
This is the easy case of a result known as Sard's theorem, which also treats the case $F : \mathcal{O} \to \mathbb{R}^n$ when $\mathcal{O}$ is an open set in $\mathbb{R}^m$, $m > n$. Then a more elaborate argument is needed, and one requires more differentiability, namely that $F$ is of class $C^k$, with $k = m - n + 1$. A proof can be found in [36] or [44].
3.5. Morse functions
If $\Omega \subset \mathbb{R}^n$ is open, a $C^2$ function $f : \Omega \to \mathbb{R}$ is said to be a Morse function if each critical point of $f$ is nondegenerate, i.e.,
(3.5.1) $\nabla f(p) = 0 \Longrightarrow D^2f(p)$ is invertible, $\forall\, p \in \Omega$,
where $D^2f(p)$ is the symmetric $n \times n$ matrix of second-order partial derivatives defined in §2.1. More generally, if $M$ is an $n$-dimensional surface, a $C^2$ function $f : M \to \mathbb{R}$ is said to be a Morse function if $f \circ \varphi$ is a Morse function on $\mathcal{O}$ for each coordinate patch $\varphi : \mathcal{O} \to U \subset M$.
Our goal here is to establish the existence of lots of Morse functions on an $n$-dimensional surface $M$. For simplicity, we restrict our attention to the case where $M$ is compact. Here is our main result.

Theorem 3.5.1. Let $M \subset \mathbb{R}^N$ be a compact, smooth, $n$-dimensional surface. For $a \in \mathbb{R}^N$, set
(3.5.2) $\varphi_a : M \to \mathbb{R}$, $\varphi_a(x) = a \cdot x$, $x \in M$.
Take $f \in C^2(M)$. Then the set $\mathcal{G}_f$ of $a \in \mathbb{R}^N$ such that
(3.5.3) $f + \varphi_a : M \to \mathbb{R}$ is a Morse function
is a dense open subset of $\mathbb{R}^N$.

It is easy to verify that $\mathcal{G}_f$ is open, since when (3.5.1) holds, a small $C^2$ perturbation $g$ of $f$ has the property that $D^2g(x)$ is invertible for $x$ near $p$. What is not so easy is to show that $\mathcal{G}_f$ is dense (or even nonempty!). Our proof of such denseness will make use of Sard's theorem from §3.4. We begin with an easy special case.

Proposition 3.5.2. In the setting of Theorem 3.5.1, assume $N = n + 1$ and $M = \partial\Omega$, with $\Omega \subset \mathbb{R}^{n+1}$ open. Then
(3.5.4) $\{a \in S^n : a \notin \mathcal{G}_0\}$ is a nil set,
hence it has empty interior in the unit sphere $S^n$.

Proof. Here we are examining when $\varphi_a$ is a Morse function on $M$. Let $N : M \to S^n$ be the exterior unit normal. Then $x_0 \in M$ is a critical point of $\varphi_a$ if and only if $N(x_0) = \pm a$. Such a point $x_0$ is a nondegenerate critical point of $\varphi_a$ if and only if it is not a critical point of $N$. Hence, if $\pm a \in S^n$ are regular values of $N$, then $\varphi_a$ is a Morse function, i.e., $a \in \mathcal{G}_0$. By Sard's theorem, the set of points in $S^n$ that are critical values of $N$ is a nil set, so the proof of Proposition 3.5.2 is done. $\square$

We begin to tackle Theorem 3.5.1 with the following result.

Lemma 3.5.3. Let $\Omega \subset \mathbb{R}^n$ be open, and take $g \in C^2(\Omega)$. Let $\overline{U} \subset \Omega$ be the closure of a smoothly bounded open set $U$. Set $g_a(x) = g(x) + a \cdot x$. Let $\mathcal{G}_g$ denote the set of $a \in \mathbb{R}^n$ such that $g_a|_{\overline{U}}$ has only nondegenerate critical points. Then $\mathbb{R}^n \setminus \mathcal{G}_g$ is a nil set.
Proof. Consider
(3.5.5) $F(x) = \nabla g(x)$, $F : \Omega \to \mathbb{R}^n$.
A point $x \in \Omega$ is a critical point of $g_a$ if and only if $F(x) = -a$, and this critical point is degenerate only if, in addition, $-a$ is a critical value of $F$. Hence the desired conclusion holds whenever $-a$ is not a critical value of $F|_{\overline{U}}$. Again Sard's theorem applies. $\square$

Proof of Theorem 3.5.1. Each $p \in M$ has a neighborhood $U_p$ in $M$ such that $\overline{U}_p \subset \mathcal{O}_p \subset M$ and some $n$ of the coordinates $x_j$ on $\mathbb{R}^N$ produce coordinates on $\mathcal{O}_p$. Say $x_1, \ldots, x_n$ do it. Let $(a_{n+1}, \ldots, a_N)$ be fixed, but arbitrary. Then Lemma 3.5.3 can be applied to $g = f + \sum_{n+1}^N a_jx_j$, treated as a function of $(x_1, \ldots, x_n)$. It follows that, for all $(a_1, \ldots, a_n)$ but a nil set, $f + \varphi_a$ has only nondegenerate critical points in $\overline{U}_p$. Thus
(3.5.6) $\{a \in \mathbb{R}^N : f + \varphi_a$ has only nondegenerate critical points in $\overline{U}_p\}$
is dense in $\mathbb{R}^N$. We also know this set is open. Now $M$ can be covered by a finite collection of such sets $U_p$, so $\mathcal{G}_f$, defined in Theorem 3.5.1, is a finite intersection of open dense subsets of $\mathbb{R}^N$, hence it is open and dense, as asserted. $\square$
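The role of the linear perturbation $\varphi_a$ in Theorem 3.5.1 is already visible in one dimension. For $f(x) = x^3$, the origin is a degenerate critical point, but $f + \varphi_a$ has only nondegenerate critical points for every $a \neq 0$; the sketch below (our illustration, not from the text) checks this by hand:

```python
import math

# f(x) = x^3 has a degenerate critical point at x = 0 (f'(0) = f''(0) = 0).
# Perturbing to f_a(x) = x^3 + a*x removes the degeneracy for every a != 0,
# illustrating Theorem 3.5.1 in the simplest case.
def critical_points(a):
    # f_a'(x) = 3x^2 + a = 0
    if a > 0:
        return []
    if a == 0:
        return [0.0]
    r = math.sqrt(-a / 3.0)
    return [-r, r]

def second_derivative(x):
    return 6.0 * x                  # f_a''(x) = 6x, independent of a

for a in (-1.0, -0.25, 0.5):
    for x in critical_points(a):
        assert second_derivative(x) != 0.0     # all critical points nondegenerate
assert second_derivative(critical_points(0.0)[0]) == 0.0   # a = 0 is the bad value
```

Here the "bad" set of parameters is the single point $a = 0$, a nil set in $\mathbb{R}$, in agreement with Lemma 3.5.3.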
3.6. The tangent space to a manifold
Let $X$ be a $C^k$ manifold of dimension $m$, as defined in §3.2. Thus, for each $p \in X$, there are an open set $U_p \subset X$, an open $\mathcal{O}_p \subset \mathbb{R}^m$, and a homeomorphism
(3.6.1) $\varphi_p : \mathcal{O}_p \to U_p$
of $\mathcal{O}_p$ onto $U_p$. One requires the following compatibility. If also $q \in X$ and $U_{pq} = U_p \cap U_q \neq \emptyset$, then the homeomorphism from $\mathcal{O}_{pq} = \varphi_p^{-1}(U_{pq})$ to $\mathcal{O}_{qp} = \varphi_q^{-1}(U_{pq})$,
(3.6.2) $F_{pq} = \varphi_q^{-1} \circ \varphi_p\big|_{\mathcal{O}_{pq}}$,
is a $C^k$ diffeomorphism. The maps $\varphi_p : \mathcal{O}_p \to U_p \subset X$ are coordinate charts on $X$.

In §3.2 we defined the tangent space $T_pX$ as an $m$-dimensional linear subspace of $\mathbb{R}^n$ in case $X$ is a $C^k$ surface in $\mathbb{R}^n$. Here we will give an intrinsic definition of $T_pX$, for a general $C^k$ manifold, and show that it is naturally isomorphic to the tangent space defined in §3.2 when $X$ is a $C^k$ surface in $\mathbb{R}^n$.

We proceed as follows. Given $p \in X$, let $\mathcal{C}_p(X)$ denote the set of $C^1$ curves
(3.6.3) $\gamma : (-1, 1) \to X$, $\gamma(0) = p$.
The notion that such $\gamma$ is a $C^1$ map is as defined below (3.2.108).
Definition. The tangent space $T_pX$ is the set of equivalence classes of curves in $\mathcal{C}_p(X)$, where, given $\gamma_0, \gamma_1 \in \mathcal{C}_p(X)$,
(3.6.4) $\gamma_0 \sim \gamma_1 \iff \sigma_0'(0) = \sigma_1'(0)$,
where
(3.6.5) $\sigma_j = \varphi_p^{-1} \circ \gamma_j\big|_{(-\varepsilon, \varepsilon)}$,
and $\varepsilon > 0$ is sufficiently small that $|t| < \varepsilon \Longrightarrow \gamma_j(t) \in U_p$.
As we will see shortly, the set $T_pX$ is unchanged if one takes some other coordinate chart about $p$. Given the characterization above, we have a bijective map
(3.6.6) $\mathcal{D}\varphi_p^{-1} : T_pX \to \mathbb{R}^m$, $\mathcal{D}\varphi_p^{-1}([\gamma]) = (\varphi_p^{-1} \circ \gamma)'(0)$,
where, for $\gamma \in \mathcal{C}_p(X)$, $[\gamma]$ denotes its equivalence class in $T_pX$. Given some other coordinate chart $\psi : \mathcal{O} \to U_p$, we likewise set
(3.6.7) $\mathcal{D}\psi^{-1}(p) : T_pX \to \mathbb{R}^m$, $\mathcal{D}\psi^{-1}(p)([\gamma]) = (\psi^{-1} \circ \gamma)'(0)$,
with $\gamma \in \mathcal{C}_p(X)$. To compare (3.6.6) and (3.6.7), we apply the chain rule to
(3.6.8) $\psi^{-1} \circ \gamma = (\psi^{-1} \circ \varphi_p) \circ (\varphi_p^{-1} \circ \gamma)$,
obtaining
(3.6.9) $\mathcal{D}\psi^{-1}(p)([\gamma]) = D(\psi^{-1} \circ \varphi_p)(x_0)\,(\varphi_p^{-1} \circ \gamma)'(0) = D(\psi^{-1} \circ \varphi_p)(x_0)\,\mathcal{D}\varphi_p^{-1}([\gamma])$,
where $x_0 = \varphi_p^{-1}(p)$. Now $D(\psi^{-1} \circ \varphi_p)(x_0) : \mathbb{R}^m \to \mathbb{R}^m$ is a linear isomorphism, so the bijective map (3.6.6) gives $T_pX$ the structure of an $m$-dimensional vector space, independent of the choice of coordinate chart about $p$.

Note that if $X = \mathbb{R}^m$ and $p \in \mathbb{R}^m$, then we can use the identity map as the coordinate chart $\varphi_p$, i.e., $\varphi_p(x) = x$. This leads to the natural isomorphism
(3.6.10) $J_p : T_p\mathbb{R}^m \to \mathbb{R}^m$, $J_p([\gamma]) = \gamma'(0)$, $\gamma \in \mathcal{C}_p(\mathbb{R}^m)$.

Now suppose $X$ and $Y$ are two $C^k$ manifolds of dimension $m$ and $n$, respectively, and
(3.6.11) $f : X \to Y$
is a $C^1$ map, as defined below (3.2.108). Take $p \in X$, $q = f(p)$. Note that
(3.6.12) $\gamma \in \mathcal{C}_p(X) \Longrightarrow f \circ \gamma \in \mathcal{C}_q(Y)$.
We can define
(3.6.13) $Df(p) : T_pX \to T_qY$
by
(3.6.14) $Df(p)([\gamma]) = [f \circ \gamma]$, $\gamma \in \mathcal{C}_p(X)$.
In view of (3.6.10), we see that if $X \subset \mathbb{R}^m$ and $Y \subset \mathbb{R}^n$ are open subsets, then
(3.6.15) $J_q\big(Df(p)([\gamma])\big) = (f \circ \gamma)'(0)$
gives the same map as we specified for $Df(p)$ as it was defined in §2.1. Here are some related results, whose proofs are left to the reader.

Proposition 3.6.1. Given a $C^1$ coordinate chart $\psi : \mathcal{O} \to U_p \subset X$, leading to (3.6.7), we have $\psi^{-1} : U_p \to \mathbb{R}^m$, and
(3.6.16) $D\psi^{-1}(p) : T_pX \to T_{\psi^{-1}(p)}\mathbb{R}^m \approx \mathbb{R}^m$
coincides with $\mathcal{D}\psi^{-1}(p)$ in (3.6.7).

Proposition 3.6.2. In the setting of (3.6.11)-(3.6.14), the map $Df(p)$ in (3.6.13) is linear.
Going further, suppose $Z$ is a $C^1$ manifold, and we have a $C^1$ map
(3.6.17) $g : Y \to Z$, $g(q) = r$,
so, parallel to (3.6.13),
(3.6.18) $Dg(q) : T_qY \to T_rZ$.
We then have the chain rule
(3.6.19) $D(g \circ f)(p)([\gamma]) = [g \circ f \circ \gamma] = Dg(q)\,Df(p)([\gamma])$,
for $\gamma \in \mathcal{C}_p(X)$.

If $X$ is a $C^k$ manifold of dimension $m$, we can form the disjoint union of the tangent spaces $T_pX$,
(3.6.20) $TX = \displaystyle\bigcup_{p \in X} T_pX$,
called the tangent bundle of $X$. It is useful to know that $TX$ has a natural structure of a $C^{k-1}$ manifold, produced as follows. Suppose $\varphi : \mathcal{O} \to U \subset X$ is a coordinate chart, and set $TU = \bigcup_{p \in U} T_pX \subset TX$. We have
(3.6.21) $\mathcal{D}\varphi : \mathcal{O} \times \mathbb{R}^m \to TU$, $\mathcal{D}\varphi(x, v) = D\varphi(x)v$, $D\varphi(x) : \mathbb{R}^m \approx T_x\mathbb{R}^m \to T_{\varphi(x)}X$.
preserves orientation.) Given this, one can define
(4.0.9) $\displaystyle\int_M \beta$
whenever $\beta$ is a $k$-form and $M \subset \mathbb{R}^n$ is a $k$-dimensional surface, assuming $M$ possesses an orientation. Complementing the important identities (4.0.6)-(4.0.8), one has the following, which could be called the fundamental theorem of the calculus of differential forms,
(4.0.10) $\displaystyle\int_M d\alpha = \int_{\partial M} \alpha$,
when $M$ is a $k$-dimensional oriented surface (or manifold) with smooth boundary $\partial M$, and $\alpha$ is a smooth $(k-1)$-form on $M$. This identity generalizes classical identities of Gauss, Green, and Stokes, and is called the general Stokes formula. The proof of this, in §4.3, is followed by a discussion of these classical cases in §4.4. In §4.5 we will use the theory of differential forms to obtain another proof of the change of variable formula for the integral, a proof very much different from the one given in §3.1. Chapter 5 will present a number of substantial applications of the calculus of differential forms, particularly of the Gauss-Green-Stokes formula.

4.1. Differential forms
It is very desirable to be able to make constructions that depend as little as possible on a particular choice of coordinate system. The calculus of differential forms, whose study we now take up, is one convenient set of tools for this purpose.

We start with the notion of a 1-form. It is an object that gets integrated over a curve. Formally, a 1-form on $\Omega \subset \mathbb{R}^n$ is written
(4.1.1) $\alpha = \displaystyle\sum_j a_j(x)\,dx_j$.
If $\gamma : [a, b] \to \Omega$ is a smooth curve, we set
(4.1.2) $\displaystyle\int_\gamma \alpha = \int_a^b \sum_j a_j(\gamma(t))\gamma_j'(t)\,dt$.
In other words,
(4.1.3) $\displaystyle\int_\gamma \alpha = \int_I \gamma^*\alpha$,
where $I = [a, b]$ and
$\gamma^*\alpha = \displaystyle\sum_j a_j(\gamma(t))\gamma_j'(t)\,dt$
is the pullback of $\alpha$ under the map $\gamma$. More generally, if $F : \mathcal{O} \to \Omega$ is a smooth map ($\mathcal{O} \subset \mathbb{R}^m$ open), the pullback $F^*\alpha$ is a 1-form on $\mathcal{O}$ defined by
(4.1.4) $F^*\alpha = \displaystyle\sum_{j,k} a_j(F(y))\frac{\partial F_j}{\partial y_k}\,dy_k$.
if Y is the curve F 0 (J . If F : (') + D is a diffeomorphism and (4.1.6)
a
"' bJ. (x) 3 X = L..J
3xj

is a vector field on D, recall from (2.3.40) that we have the vector field on ('), (4.1.7) If we define a pairing between 1forms and vector fields on D by (4.1.8)
(X, a) = � bj (x)a/x) = b a,
j
.
a simple calculation gives (F#X, F*a) = (X, a) 0 F. Thus, a 1form on D is characterized at each point p E D as a linear transformation of the space of vectors at p to R. More generally, we can regard a kform a on D as a kmultilinear map on vector fields, (4.1.9)
(4.1.10) we impose the further condition of antisymmetry when k 2: 2, (4.1.11) Let us note that a Oform is simply a function. There is a special notation we use for kforms. If 1 :S h (j" . . . , jk), we set 1 '" a = � a·j (x) dx·11 /\ . . . /\ dx·Jk ' (4.1.12) k! where (4.1.13)
J
< ...
o ..,kn = ( � (sgn 0')b 'a( ,) b2a(2) '" bna(n))dx, A .. · A dXn IJESn = (detE) dx, A .. · A dxn.
Here Sn denotes the set of permutations of {1, . . . , n}, and the last identity is the formula for the determinant presented in (1.4.30). It follows that if F : (') + D is a C ' map between two domains of dimension n and (4.1.22) a = A(x)dx, A .. · A dxn is an nform on D, then (4.1.23) F*a = detDF(y)A(F(y» dy, A .. · A dYn.
4.1.
Differentialforms
157
Comparison with the change of variable formula for multiple integrals suggests that one has an intrinsic definition of 10 a when a is an nform on D, n = dim D. To implement this, we need to take into account that det DF(y) rather than 1 detDF(y)1 appears in (4.1.21). We say a smooth map F : (') + D between two open subsets of Itn preserves orientation if detDF(y) is everywhere positive. The object called an orienta tion on D can be identified as an equivalence class of nowhere vanishing nforms on D, two such forms being equivalent if one is a multiple of another by a positive function in COO(D); the standard orientation on Itn is determined by dXl A . . . A dXn  If S is an ndimensional surface in It n+ k , an orientation on S can also be specified by a nowhere vanishing form W E An (s). If such a form exists, S is said to be orientable. The equiv alence class of positive multiples a(x)w is said to consist of positive forms. A smooth map 1jJ : S + M between oriented ndimensional surfaces preserves orientation pro vided 1jJ* cr is positive on S whenever cr E An (M) is positive. If S is oriented, one can choose coordinate charts which are all orientation preserving. We mention that there exist surfaces that cannot be oriented, such as the famous Mobius strip, and also the projective space 1" , discussed in §3.2. See Exercise 13 below. We define the integral of an nform over an oriented ndimensional surface as fol lows. First, if a is an nform supported on an open set D c Itn , given by (4.1.22), then we set (4.1.24)
J a = J A(x)dV(x), 0
o
the right side defined as in §3.1. If (') is also open in Itn and F : (') preserving diffeomorphism, we have (4.1.25)
+
D is an orientation
0
v
as a consequence of (4.1.23) and the change of variable formula (3.1.47). More gener ally, if S is an ndimensional surface with an orientation, say the image of an open set (') c It n by I" : (') + S, carrying the natural orientation of ('), we can set (4.1.26) s
v
for an nform a on S. If it takes several coordinate patches to cover S, define Is a by writing a as a sum of forms, each supported on one patch. We need to show that this definition of Is a is independent of the choice of coordi nate system on S (as long as the orientation of S is respected). Thus, suppose I" : (') + U c S and 1jJ : D + U c S are both coordinate patches, so that F = 1jJ l 0 I" : (') + D is an orientationpreserving diffeomorphism, as in Figure 3.2.1. We need to check that if a is an nform on S, supported on U, then (4.1.27)
J I"*a = J 1jJ*a. v
0
4. Differential forms and the Gauss–Green–Stokes formula
To establish this, we first show that for any form $\alpha$ of any degree,

(4.1.28) $\psi \circ F = \varphi \implies \varphi^*\alpha = F^*\psi^*\alpha.$

It suffices to check (4.1.28) for $\alpha = dx_j$. Then (4.1.19) gives $\psi^* dx_j = \sum_\ell (\partial \psi_j/\partial x_\ell)\, dx_\ell$, so

(4.1.29) $F^*\psi^* dx_j = \sum_{\ell,m} \frac{\partial \psi_j}{\partial x_\ell} \frac{\partial F_\ell}{\partial x_m}\, dx_m, \qquad \varphi^* dx_j = \sum_m \frac{\partial \varphi_j}{\partial x_m}\, dx_m,$

but the identity of these forms follows from the chain rule,

(4.1.30) $D\varphi = (D\psi)(DF) \implies \frac{\partial \varphi_j}{\partial x_m} = \sum_\ell \frac{\partial \psi_j}{\partial x_\ell} \frac{\partial F_\ell}{\partial x_m}.$

Now that we have (4.1.28), we see that the left side of (4.1.27) is equal to

(4.1.31) $\int_{\mathcal{O}} F^*(\psi^*\alpha),$

which is equal to the right side of (4.1.27), by (4.1.25). Thus the integral of an $n$-form over an oriented $n$-dimensional surface is well defined.

Exercises
1. If $F : U_0 \to U_1$ and $G : U_1 \to U_2$ are smooth maps and $\alpha \in \Lambda^k(U_2)$, then (4.1.28) implies

(4.1.32) $(G \circ F)^*\alpha = F^*(G^*\alpha)$ in $\Lambda^k(U_0)$.

In the special case that $U_j = \mathbb{R}^n$, $F$ and $G$ are linear maps, and $k = n$, show that this identity implies

(4.1.33) $\det(GF) = (\det F)(\det G).$

Compare this with the derivation of (1.4.32).

2. Let $\Lambda^k\mathbb{R}^n$ denote the space of $k$-forms (4.1.12) with constant coefficients. Show that

(4.1.34) $\dim_{\mathbb{R}} \Lambda^k\mathbb{R}^n = \binom{n}{k}.$

If $T : \mathbb{R}^m \to \mathbb{R}^n$ is linear, then $T^*$ preserves this class of spaces; we denote the map

(4.1.35) $\Lambda^k T^* : \Lambda^k\mathbb{R}^n \to \Lambda^k\mathbb{R}^m.$

Similarly, replacing $T$ by $T^*$ yields

(4.1.36) $\Lambda^k T : \Lambda^k\mathbb{R}^m \to \Lambda^k\mathbb{R}^n.$

3. Show that $\Lambda^k T$ is uniquely characterized as a linear map from $\Lambda^k\mathbb{R}^m$ to $\Lambda^k\mathbb{R}^n$ which satisfies

$(\Lambda^k T)(v_1 \wedge \cdots \wedge v_k) = (Tv_1) \wedge \cdots \wedge (Tv_k), \qquad v_j \in \mathbb{R}^m.$
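Exercises 1 and 2 lend themselves to quick machine checks; the dimensions and the random seed below are arbitrary choices, offered only as a sanity test of (4.1.33) and (4.1.34).

```python
# Numeric check of det(GF) = (det F)(det G), plus the dimension count
# dim Lambda^k R^n = C(n, k) via enumeration of increasing multi-indices.
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((4, 4))
G = rng.standard_normal((4, 4))
assert np.isclose(np.linalg.det(G @ F), np.linalg.det(F) * np.linalg.det(G))

# basis of Lambda^k R^n: e_{j1} ^ ... ^ e_{jk}, j1 < ... < jk
n, k = 6, 3
basis = list(itertools.combinations(range(1, n + 1), k))
assert len(basis) == math.comb(n, k)
```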
4. Show that if $S, T : \mathbb{R}^n \to \mathbb{R}^n$ are linear maps, then

(4.1.37) $\Lambda^k(ST) = (\Lambda^k S)(\Lambda^k T).$

Relate this to (4.1.28).

If $\{e_1, \dots, e_n\}$ is the standard orthonormal basis of $\mathbb{R}^n$, define an inner product on $\Lambda^k\mathbb{R}^n$ by declaring an orthonormal basis to be

(4.1.38) $\{e_{j_1} \wedge \cdots \wedge e_{j_k} : 1 \le j_1 < \cdots < j_k \le n\}.$

If $A$ is a linear map on $\Lambda^k\mathbb{R}^n$, its transpose $A^t$ is defined by

(4.1.39) $(A\alpha, \beta) = (\alpha, A^t\beta), \qquad \alpha, \beta \in \Lambda^k\mathbb{R}^n,$

where $(\ ,\ )$ is the inner product on $\Lambda^k\mathbb{R}^n$ defined above.

5. Show that if $T : \mathbb{R}^n \to \mathbb{R}^n$ is linear, with transpose $T^t$, then

(4.1.40) $(\Lambda^k T)^t = \Lambda^k T^t.$

Hint. Check the identity $((\Lambda^k T)\alpha, \beta) = (\alpha, (\Lambda^k T^t)\beta)$ when $\alpha$ and $\beta$ run over the orthonormal basis (4.1.38). That is, show that if $\alpha = e_{j_1} \wedge \cdots \wedge e_{j_k}$ and $\beta = e_{i_1} \wedge \cdots \wedge e_{i_k}$, then

(4.1.41) $(Te_{j_1} \wedge \cdots \wedge Te_{j_k},\, e_{i_1} \wedge \cdots \wedge e_{i_k}) = (e_{j_1} \wedge \cdots \wedge e_{j_k},\, T^te_{i_1} \wedge \cdots \wedge T^te_{i_k}).$

Hint. Say $T = (t_{ij})$. In the spirit of (4.1.21), expand $Te_{j_1} \wedge \cdots \wedge Te_{j_k}$, and show that the left side of (4.1.41) is equal to

(4.1.42) $\sum_{\sigma \in S_k} (\operatorname{sgn} \sigma)\, t_{i_{\sigma(1)} j_1} \cdots t_{i_{\sigma(k)} j_k},$

where $S_k$ denotes the set of permutations of $\{1, \dots, k\}$. Similarly, show that the right side of (4.1.41) is equal to

(4.1.43) $\sum_{\sigma \in S_k} (\operatorname{sgn} \sigma)\, t_{i_1 j_{\sigma(1)}} \cdots t_{i_k j_{\sigma(k)}}.$

To compare these two formulas, see the treatment of (1.4.31).

6. Show that if $\{u_1, \dots, u_n\}$ is any orthonormal basis of $\mathbb{R}^n$, then the set $\{u_{j_1} \wedge \cdots \wedge u_{j_k} : 1 \le j_1 < \cdots < j_k \le n\}$ is an orthonormal basis of $\Lambda^k\mathbb{R}^n$.
Hint. Use Exercises 4 and 5 to show that if $T : \mathbb{R}^n \to \mathbb{R}^n$ is an orthogonal transformation on $\mathbb{R}^n$ (i.e., preserves the inner product), then $\Lambda^k T$ is an orthogonal transformation on $\Lambda^k\mathbb{R}^n$.

7. Let $v_j, w_j \in \mathbb{R}^n$, $1 \le j \le k$ ($k < n$). Form the matrix $V$, whose $k$ columns are the column vectors $v_1, \dots, v_k$, and $W$, whose $k$ columns are the column vectors $w_1, \dots, w_k$. Show that

(4.1.44) $(v_1 \wedge \cdots \wedge v_k,\, w_1 \wedge \cdots \wedge w_k) = \det W^tV = \det V^tW.$
Hint. Show that both sides are linear in each $v_j$ and in each $w_j$. Use this to reduce the problem to verifying (4.1.44) when each $v_j$ and each $w_j$ is chosen from among the set of basis vectors $\{e_1, \dots, e_n\}$. Use antisymmetries to reduce the problem further.

8. Deduce from Exercise 7 that if $v_j, w_j \in \mathbb{R}^n$, then

(4.1.45) $(v_1 \wedge \cdots \wedge v_k,\, w_1 \wedge \cdots \wedge w_k) = \sum_\pi (\operatorname{sgn} \pi)(v_1, w_{\pi(1)}) \cdots (v_k, w_{\pi(k)}),$

where $\pi$ ranges over the set of permutations of $\{1, \dots, k\}$.

9. Show that the conclusion of Exercise 6 also follows from (4.1.45).

10. Let $A, B : \mathbb{R}^k \to \mathbb{R}^n$ be linear maps and set $\omega = e_1 \wedge \cdots \wedge e_k \in \Lambda^k\mathbb{R}^k$. We have $\Lambda^kA\omega, \Lambda^kB\omega \in \Lambda^k\mathbb{R}^n$. Deduce from (4.1.44) that

(4.1.46) $(\Lambda^kA\omega, \Lambda^kB\omega) = \det B^tA.$

11. Let $\varphi : \mathcal{O} \to \mathbb{R}^n$ be smooth, with $\mathcal{O} \subset \mathbb{R}^m$ open. Deduce from Exercise 10 that for each $x \in \mathcal{O}$,

(4.1.47) $\|\Lambda^m D\varphi(x)\omega\|^2 = \det D\varphi(x)^t D\varphi(x),$

where $\omega = e_1 \wedge \cdots \wedge e_m$. Deduce that if $\varphi : \mathcal{O} \to U \subset M$ is a coordinate patch on a smooth $m$-dimensional surface $M \subset \mathbb{R}^n$ and $f \in C(M)$ is supported on $U$, then

(4.1.48) $\int_M f\, dS = \int_{\mathcal{O}} f(\varphi(x))\, \|\Lambda^m D\varphi(x)\omega\|\, dx.$

12. Show that the result of Exercise 5 in §3.2 follows from (4.1.48), via (4.1.41)–(4.1.42).

13. Recall the projective spaces $\mathbb{P}^n$, constructed in §3.2. Show that $\mathbb{P}^n$ is orientable if and only if $n$ is odd.
Hint. Let $p : S^n \to \mathbb{P}^n$ denote the natural projection, and let $A : S^n \to S^n$ be the antipodal map, so $p \circ A = p$. If $\alpha \in \Lambda^n(\mathbb{P}^n)$ is nowhere vanishing, consider $\beta = p^*\alpha \in \Lambda^n(S^n)$. Show that $\beta$ is nowhere vanishing. Show that $p^* = A^*p^*$, hence $A^*\beta = \beta$, but $A$ is orientation preserving if and only if $n$ is odd.
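The Gram-determinant identity (4.1.44) and the permutation-sum formula (4.1.45) can be compared numerically; the dimensions and random vectors below are arbitrary test data.

```python
# Compare det(W^t V) from (4.1.44) with the permutation sum (4.1.45)
# for random vectors v_j, w_j in R^n.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
V = rng.standard_normal((n, k))   # columns v_1, ..., v_k
W = rng.standard_normal((n, k))   # columns w_1, ..., w_k

gram = np.linalg.det(W.T @ V)     # (W^t V)_{ij} = <w_i, v_j>

def sgn(p):
    # sign of a permutation given as a tuple, via cycle-sorting
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

perm_sum = sum(
    sgn(p) * np.prod([V[:, j] @ W[:, p[j]] for j in range(k)])
    for p in itertools.permutations(range(k))
)
assert np.isclose(gram, perm_sum)
```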
4.2. Products and exterior derivatives of forms

Having discussed the notion of a differential form as something to be integrated, we now consider some operations on forms. There is a wedge product, or exterior product, characterized as follows. If $\alpha \in \Lambda^k(\Omega)$ has the form (4.1.12) and if

(4.2.1) $\beta = \sum_i b_i(x)\, dx_{i_1} \wedge \cdots \wedge dx_{i_\ell} \in \Lambda^\ell(\Omega),$

define

(4.2.2) $\alpha \wedge \beta = \sum_{j,i} a_j(x) b_i(x)\, dx_{j_1} \wedge \cdots \wedge dx_{j_k} \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_\ell}$

in $\Lambda^{k+\ell}(\Omega)$. A special case of this arose in (4.1.18)–(4.1.21). We retain the equivalence (4.1.14). It follows easily that

(4.2.3) $\alpha \wedge \beta = (-1)^{k\ell}\, \beta \wedge \alpha.$

In addition, there is an interior product if $\alpha \in \Lambda^k(\Omega)$ with a vector field $X$ on $\Omega$, producing $\iota_X\alpha = \alpha \rfloor X \in \Lambda^{k-1}(\Omega)$, defined by

(4.2.4) $(\alpha \rfloor X)(X_1, \dots, X_{k-1}) = \alpha(X, X_1, \dots, X_{k-1}).$

Consequently, if $\alpha = dx_{j_1} \wedge \cdots \wedge dx_{j_k}$ and $D_\ell = \partial/\partial x_\ell$, then

(4.2.5) $\alpha \rfloor D_{j_\ell} = (-1)^{\ell-1}\, dx_{j_1} \wedge \cdots \wedge \widehat{dx_{j_\ell}} \wedge \cdots \wedge dx_{j_k},$

where $\widehat{dx_{j_\ell}}$ denotes removing the factor $dx_{j_\ell}$. Furthermore,

$i \notin \{j_1, \dots, j_k\} \implies \alpha \rfloor D_i = 0.$

If $F : \mathcal{O} \to \Omega$ is a diffeomorphism, $\alpha, \beta$ are forms, and $X$ is a vector field on $\Omega$, it is readily verified that

(4.2.6) $F^*(\alpha \wedge \beta) = (F^*\alpha) \wedge (F^*\beta), \qquad F^*(\alpha \rfloor X) = (F^*\alpha) \rfloor (F^\#X).$
We make use of the operators $\wedge_k$ and $\iota_k$ on forms:

(4.2.7) $\wedge_k\alpha = dx_k \wedge \alpha, \qquad \iota_k\alpha = \alpha \rfloor D_k.$

There is the following useful anticommutation relation,

(4.2.8) $\iota_k \wedge_\ell + \wedge_\ell \iota_k = \delta_{k\ell},$

where $\delta_{k\ell}$ is 1 if $k = \ell$, 0 otherwise. This is a fairly straightforward consequence of (4.2.5). We also have

(4.2.9) $\wedge_k \wedge_\ell + \wedge_\ell \wedge_k = 0, \qquad \iota_k \iota_\ell + \iota_\ell \iota_k = 0.$

From (4.2.8)–(4.2.9) one says that the operators $\{\iota_j, \wedge_j : 1 \le j \le n\}$ generate a Clifford algebra.

Another important operator on forms is the exterior derivative,

(4.2.10) $d : \Lambda^k(\Omega) \to \Lambda^{k+1}(\Omega),$

defined as follows. If $\alpha \in \Lambda^k(\Omega)$ is given by (4.1.12), then

(4.2.11) $d\alpha = \sum_{j,\ell} \frac{\partial a_j}{\partial x_\ell}\, dx_\ell \wedge dx_{j_1} \wedge \cdots \wedge dx_{j_k}.$

Equivalently,

(4.2.12) $d\alpha = \sum_{\ell=1}^n \partial_\ell \wedge_\ell \alpha,$

where $\partial_\ell = \partial/\partial x_\ell$ and $\wedge_\ell$ is given by (4.2.7). The antisymmetry $dx_m \wedge dx_\ell = -dx_\ell \wedge dx_m$, together with the identity $\partial^2 a/\partial x_\ell\partial x_m = \partial^2 a/\partial x_m\partial x_\ell$, implies

(4.2.13) $d(d\alpha) = 0,$
for any differential form a. We also have a product rule, (4.2.14) The exterior derivative has the following important property under pullbacks,
F*(da) = dF*a,
(4.2.15)
if a E Ak ([j) and F : (') + [j is a smooth map. To see this, extending (4.2.14) to a formula for d(a A 131 A . . A j3e ) and using this to apply d to F*a, we have ·
a a dF*a = � a( j 0 F(x») dXe A (F*dxh ) A . . A (F*dxh) j,e Xt + �(±)aj(F(x»)(F*dxh ) A . . A d(F*dXjJ A . . A (F*dxh)' j,v ·
(4.2.16)
·
·
Now the definition (4.1.18)(4.1.19) of pullback gives directly that
aF, '" F*dXi = L..; dXt = dFi , a e xe and hence d(F*dx, ) = ddf, = 0, so only the first sum in (4.2.16) contributes to dF*a. (4.2.17)
Meanwhile, (4.2.18)
�
aa· F*da = � ax (F(x») (F*dxm ) A (F*dXj' ) A . . J,m
·
A (F*dxik ) '
so (4.2.15) follows from the identity (4.2.19)
aa·J a (a o F(x») dx � a(F(x») F*dxm , = � j e t Xt m axm
which in turn follows from the chain rule. If da = 0, we say a is closed; if a = dj3 for some 13 E Ak l ([j), we say a is exact. Formula (4.2.13) implies that every exact form is closed. The converse is not always true globally. Consider the multivalued angular coordinate e on R' \ (0, 0); de is a single valued closed form on R' \ (0, 0) which is not globally exact. An important result, called the Poincare lemma, is that every closed form is locally exact. A proof is given in §5.2. (A special case is established in §4.3.)
Exercises
1. If $\alpha$ is a $k$-form, verify the formula (4.2.14), i.e., $d(\alpha \wedge \beta) = (d\alpha) \wedge \beta + (-1)^k \alpha \wedge d\beta$. If $\alpha$ is closed and $\beta$ is exact, show that $\alpha \wedge \beta$ is exact.

2. Let $F$ be a vector field on $U$, open in $\mathbb{R}^3$, $F = \sum f_j(x)\, \partial/\partial x_j$. The vector field $G = \operatorname{curl} F$ is classically defined as a formal determinant

(4.2.20) $\operatorname{curl} F = \det \begin{pmatrix} e_1 & e_2 & e_3 \\ \partial_1 & \partial_2 & \partial_3 \\ f_1 & f_2 & f_3 \end{pmatrix},$

where $\{e_j\}$ is the standard basis of $\mathbb{R}^3$. Consider the 1-form $\varphi = \sum f_j(x)\, dx_j$. Show that $d\varphi$ and $\operatorname{curl} F$ are related in the following way:

(4.2.21) $\operatorname{curl} F = \sum_{j=1}^3 g_j(x)\, \partial/\partial x_j,$
$\qquad d\varphi = g_1(x)\, dx_2 \wedge dx_3 + g_2(x)\, dx_3 \wedge dx_1 + g_3(x)\, dx_1 \wedge dx_2.$

See (4.4.30)–(4.4.37) for more on this connection.

3. If $F$ and $\varphi$ are related as in Exercise 2, show that $\operatorname{curl} F$ is uniquely specified by the relation

(4.2.22) $d\varphi \wedge \alpha = \langle \operatorname{curl} F, \alpha \rangle\, \omega$

for all 1-forms $\alpha$ on $U \subset \mathbb{R}^3$, where $\omega = dx_1 \wedge dx_2 \wedge dx_3$ is the volume form.
4. Let $B$ be a ball in $\mathbb{R}^3$, and let $F$ be a smooth vector field on $B$. Show that

(4.2.23) $\exists\, u \in C^\infty(B)$ s.t. $F = \operatorname{grad} u \implies \operatorname{curl} F = 0.$

Hint. Compare $F = \operatorname{grad} u$ with $\varphi = du$.

5. Let $B$ be a ball in $\mathbb{R}^3$, and let $G$ be a smooth vector field on $B$. Show that

(4.2.24) $\exists$ vector field $F$ s.t. $G = \operatorname{curl} F \implies \operatorname{div} G = 0.$

Hint. If $G = \sum g_j(x)\, \partial/\partial x_j$, consider

(4.2.25) $\psi = g_1\, dx_2 \wedge dx_3 + g_2\, dx_3 \wedge dx_1 + g_3\, dx_1 \wedge dx_2.$

Show that

(4.2.26) $d\psi = (\operatorname{div} G)\, dx_1 \wedge dx_2 \wedge dx_3.$

6. Show that the 1-form $d\theta$ mentioned below (4.2.19) is given by

$d\theta = \dfrac{x\, dy - y\, dx}{x^2 + y^2}.$
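Exercises 4–6 are easy to confirm with computer algebra; the sample function $u$ and field $F$ below are arbitrary choices used only to exercise the identities.

```python
# Symbolic checks: curl grad u = 0, div curl F = 0, and d(theta) is closed.
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(u):
    return [sp.diff(u, v) for v in (x, y, z)]

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

def div(F):
    return sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

u = sp.exp(x*y) * sp.cos(z)            # sample potential
F = [x*y*z, sp.sin(x) + z**2, x + y**3]  # sample vector field

assert all(sp.simplify(c) == 0 for c in curl(grad(u)))   # Exercise 4
assert sp.simplify(div(curl(F))) == 0                    # Exercise 5

# Exercise 6: d of (x dy - y dx)/(x^2 + y^2) vanishes off the origin
P = -y/(x**2 + y**2)   # coefficient of dx
Q = x/(x**2 + y**2)    # coefficient of dy
assert sp.simplify(sp.diff(Q, x) - sp.diff(P, y)) == 0
```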
For the next set of exercises, let $\Omega$ be a planar domain, and let $X = f(x,y)\,\partial/\partial x + g(x,y)\,\partial/\partial y$ be a nonvanishing vector field on $\Omega$. Consider the 1-form $\alpha = g(x,y)\,dx - f(x,y)\,dy$.

7. Let $\gamma : I \to \Omega$ be a smooth curve, $I = (a,b)$. Show that the image $C = \gamma(I)$ is the image of an integral curve of $X$ if and only if $\gamma^*\alpha = 0$. Consequently, with slight abuse of notation, one describes the integral curves by $g\,dx - f\,dy = 0$. If $\alpha$ is exact, i.e., $\alpha = du$, conclude that the level curves of $u$ are the integral curves of $X$.

8. A function $\varphi$ is called an integrating factor if $\tilde{\alpha} = \varphi\alpha$ is exact, i.e., if $d(\varphi\alpha) = 0$, provided $\Omega$ is simply connected. Show that an integrating factor always exists, at least locally. Show that $\varphi = e^v$ is an integrating factor if and only if $Xv = -\operatorname{div} X$. Find an integrating factor for $\alpha = (x^2 + y^2 - 1)\,dx - 2xy\,dy$.

9. Define the radial vector field $R = x_1\,\partial/\partial x_1 + \cdots + x_n\,\partial/\partial x_n$ on $\mathbb{R}^n$. Show that

$\omega = dx_1 \wedge \cdots \wedge dx_n \implies \omega \rfloor R = \sum_{j=1}^n (-1)^{j-1} x_j\, dx_1 \wedge \cdots \wedge \widehat{dx_j} \wedge \cdots \wedge dx_n.$

Show that $d(\omega \rfloor R) = n\,\omega$.

10. Show that if $F : \mathbb{R}^n \to \mathbb{R}^n$ is a linear rotation (i.e., $F \in SO(n)$), then $\beta = \omega \rfloor R$ in Exercise 9 has the property that $F^*\beta = \beta$.
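The closing computation in Exercise 8 can be checked with computer algebra. The integrating factor $\mu = 1/x^2$ and the potential $u$ below were found by trying $\mu = \mu(x)$; they are offered as a sketch of one solution, not necessarily the one the text intends.

```python
# Exercise 8 check: for alpha = (x^2 + y^2 - 1) dx - 2xy dy, verify that
# mu = 1/x^2 is an integrating factor and exhibit a potential u.
import sympy as sp

x, y = sp.symbols('x y')
P = x**2 + y**2 - 1    # coefficient of dx
Q = -2*x*y             # coefficient of dy
mu = 1/x**2            # candidate integrating factor (hypothetical choice)

# exactness of mu*alpha in the plane amounts to (mu*P)_y = (mu*Q)_x
closed = sp.simplify(sp.diff(mu*P, y) - sp.diff(mu*Q, x))
assert closed == 0

# a potential u with du = mu*alpha, found by integrating the components
u = (x**2 - y**2 + 1)/x
assert sp.simplify(sp.diff(u, x) - mu*P) == 0
assert sp.simplify(sp.diff(u, y) - mu*Q) == 0
```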
4.3. The general Stokes formula
The Stokes formula involves integrating a $k$-form over a $k$-dimensional surface with boundary. We first define that concept. Let $S$ be a smooth $k$-dimensional surface (say in $\mathbb{R}^N$), and let $M$ be an open subset of $S$, such that its closure $\overline{M}$ (in $\mathbb{R}^N$) is contained in $S$. Its boundary is $\partial M = \overline{M} \setminus M$. We say $\overline{M}$ is a smooth surface with boundary if also $\partial M$ is a smooth $(k-1)$-dimensional surface. In such a case, any $p \in \partial M$ has a neighborhood $U \subset S$ with a coordinate chart $\varphi : \mathcal{O} \to U$, where $\mathcal{O}$ is an open neighborhood of $0$ in $\mathbb{R}^k$, such that $\varphi(0) = p$ and $\varphi$ maps $\{x \in \mathcal{O} : x_1 = 0\}$ onto $U \cap \partial M$.

If $S$ is oriented, then $M$ is oriented, and $\partial M$ inherits an orientation, uniquely determined by the following requirement: if

(4.3.1) $M = \mathbb{R}^k_- = \{x \in \mathbb{R}^k : x_1 \le 0\},$

then $\partial M = \{(x_2, \dots, x_k)\}$ has the orientation determined by $dx_2 \wedge \cdots \wedge dx_k$. We can now state the Stokes formula.
Proposition 4.3.1. Given a compactly supported $(k-1)$-form $\beta$ of class $C^1$ on an oriented $k$-dimensional surface $M$ (of class $C^2$) with boundary $\partial M$, with its natural orientation,

(4.3.2) $\int_M d\beta = \int_{\partial M} \beta.$

Proof. Using a partition of unity and invariance of the integral and the exterior derivative under coordinate transformations, it suffices to prove this when $M$ has the form (4.3.1). In that case, we will be able to deduce (4.3.2) from the fundamental theorem of calculus. Indeed, if

(4.3.3) $\beta = \sum_j b_j(x)\, dx_1 \wedge \cdots \wedge \widehat{dx_j} \wedge \cdots \wedge dx_k,$

with $b_j(x)$ of bounded support, we have

(4.3.4) $d\beta = \sum_j (-1)^{j-1} \frac{\partial b_j}{\partial x_j}\, dx_1 \wedge \cdots \wedge dx_k.$

If $j > 1$, we have

(4.3.5) $\int_M d\beta = (-1)^{j-1} \int \Big[ \int_{-\infty}^{\infty} \frac{\partial b_j}{\partial x_j}\, dx_j \Big]\, dx' = 0,$

and also $\kappa^*\beta = 0$, where $\kappa : \partial M \to \overline{M}$ is the inclusion. On the other hand, for $j = 1$, we have

(4.3.6) $\int_M d\beta = \int \Big[ \int_{-\infty}^{0} \frac{\partial b_1}{\partial x_1}\, dx_1 \Big]\, dx_2 \cdots dx_k = \int b_1(0, x_2, \dots, x_k)\, dx_2 \cdots dx_k = \int_{\partial M} \beta.$

This proves the Stokes formula (4.3.2). $\square$
It is useful to allow singularities in $\partial M$. We say a point $p \in \overline{M}$ is a corner of dimension $\nu$ if there is a neighborhood $U$ of $p$ in $\overline{M}$ and a $C^2$ diffeomorphism of $U$ onto a neighborhood of $0$ in

(4.3.7) $K = \{x \in \mathbb{R}^k : x_j \le 0, \text{ for } 1 \le j \le k - \nu\},$

where $k$ is the dimension of $\overline{M}$. If $\overline{M}$ is a $C^2$ surface and every point $p \in \partial M$ is a corner (of some dimension), we say $\overline{M}$ is a $C^2$ surface with corners. In such a case, $\partial M$ is a locally finite union of $C^2$ surfaces with corners. The following result extends Proposition 4.3.1.

Proposition 4.3.2. If $\overline{M}$ is a $C^2$ surface of dimension $k$, with corners, and $\beta$ is a compactly supported $(k-1)$-form of class $C^1$ on $\overline{M}$, then (4.3.2) holds.
Proof. It suffices to establish this when $\beta$ is supported on a small neighborhood of a corner $p \in \partial M$, of the form $U$ described above. Hence it suffices to show that (4.3.2) holds whenever $\beta$ is a $(k-1)$-form of class $C^1$, with compact support on $K$ in (4.3.7); and we can take $\beta$ to have the form (4.3.3). Then, for $j > k - \nu$, (4.3.5) still holds, while, for $j \le k - \nu$, we have, as in (4.3.6),

(4.3.8) $\int_K d\beta = (-1)^{j-1} \int \Big[ \int_{-\infty}^{0} \frac{\partial b_j}{\partial x_j}\, dx_j \Big]\, dx_1 \cdots \widehat{dx_j} \cdots dx_k = \int_{\partial K} \beta.$

This completes the proof. $\square$
The reason we required $M$ to be a surface of class $C^2$ (with corners) in Propositions 4.3.1 and 4.3.2 is the following. Due to the formulas (4.1.18)–(4.1.19) for a pullback, if $\beta$ is of class $C^j$ and $F$ is of class $C^\ell$, then $F^*\beta$ is generally of class $C^\mu$, with $\mu = \min(j, \ell - 1)$. Thus, if $j = \ell = 1$, $F^*\beta$ might be only of class $C^0$, so there is not a well-defined notion of a differential form of class $C^1$ on a $C^1$ surface, though such a notion is well defined on a $C^2$ surface. This problem can be overcome, and one can extend Propositions 4.3.1 and 4.3.2 to the case where $M$ is a $C^1$ surface (with corners), and $\beta$ is a $(k-1)$-form with the property that both $\beta$ and $d\beta$ are continuous. We will not go into the details. Substantially more sophisticated generalizations are given in [14].

We will mention one useful extension of the scope of Proposition 4.3.2, to surfaces with piecewise smooth boundary that do not satisfy the corner condition. An example is illustrated in Figure 4.3.1. There the point $p$ is a singular point of $\partial M$ that is not a corner, according to the definition using (4.3.7). However, in many cases $M$ can be divided into pieces ($M_1$ and $M_2$ for the example presented in Figure 4.3.1) and each piece $M_j$ is a surface with corners. Then Proposition 4.3.2 applies to each piece separately,

(4.3.9) $\int_{M_j} d\beta = \int_{\partial M_j} \beta,$

and one can sum over $j$ to get (4.3.2) in this more general setting.

Figure 4.3.1. Division into surfaces with corners
We next apply Proposition 4.3.2 to prove the following special case of the Poincaré lemma, which will be used in §5.1.

Proposition 4.3.3. If $\alpha$ is a smooth closed 1-form on $B = \{x \in \mathbb{R}^2 : |x| < 1\}$, then there exists a real-valued $u \in C^\infty(B)$ such that $\alpha = du$.

Figure 4.3.2. Antiderivative of a closed 1-form

Exercises

… $\varphi(x) > 0$ for $x \in \mathbb{R}^n \setminus \overline{U}$, and $\operatorname{grad} \varphi(x) \ne 0$ for $x \in \partial U$, so $\operatorname{grad} \varphi$ points out of $U$. Show that the natural orientation on $\partial U$, as defined just before Proposition 4.3.1, is the same as the following. The equivalence class of forms $\beta \in \Lambda^{n-1}(\partial U)$ defining the orientation on $\partial U$ satisfies the property that $d\varphi \wedge \beta$ is a positive multiple of $dx_1 \wedge \cdots \wedge dx_n$, on $\partial U$.
4. Suppose $U = \{x \in \mathbb{R}^n : x_n < 0\}$. Show that the orientation on $\partial U$ described above is that of $(-1)^{n-1}\, dx_1 \wedge \cdots \wedge dx_{n-1}$. If $V = \{x \in \mathbb{R}^n : x_n > 0\}$, what orientation does $\partial V$ inherit?

5. Extend the special case of Poincaré's lemma given in Proposition 4.3.3 to the case where $\alpha$ is a closed 1-form on $B = \{x \in \mathbb{R}^n : |x| < 1\}$, i.e., from the case $\dim B = 2$ to higher dimensions.
6. Define $\beta \in \Lambda^{n-1}(\mathbb{R}^n)$ by

$\beta = \sum_{j=1}^n (-1)^{j-1} x_j\, dx_1 \wedge \cdots \wedge \widehat{dx_j} \wedge \cdots \wedge dx_n.$

Let $\Omega \subset \mathbb{R}^n$ be a smoothly bounded compact subset. Show that

$\frac{1}{n} \int_{\partial\Omega} \beta = \operatorname{Vol}(\Omega).$
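For $n = 2$ this can be checked by direct computation on the unit disk, whose boundary is parametrized in the standard way (an illustrative check, not part of the exercise):

```python
# Exercise 6 check in the plane: beta = x1 dx2 - x2 dx1, n = 2, so
# (1/2) * (boundary integral of beta) should equal the enclosed area.
import sympy as sp

t = sp.symbols('t')
x1, x2 = sp.cos(t), sp.sin(t)                      # unit circle
pullback = x1*sp.diff(x2, t) - x2*sp.diff(x1, t)   # beta pulled back to [0, 2*pi]
boundary_integral = sp.integrate(pullback, (t, 0, 2*sp.pi))
assert sp.simplify(boundary_integral/2 - sp.pi) == 0   # area of the unit disk
```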
7. In the setting of Exercise 6, show that if $f \in C^1(\overline{\Omega})$, then

$\int_{\partial\Omega} f\beta = \int_\Omega (Rf + nf)\, dx,$

where $Rf = \sum_{j=1}^n x_j\, \partial f/\partial x_j$, with $R$ the radial vector field of Exercise 9 in §4.2.
8. In the setting of Exercises 6–7, and with $S^{n-1} \subset \mathbb{R}^n$ the unit sphere, show that

$\int_{S^{n-1}} f\beta = \int_{S^{n-1}} f\, dS.$

Hint. Let $B \subset \mathbb{R}^{n-1}$ be the unit ball, and define $\varphi : B \to S^{n-1}$ by $\varphi(x') = (x', \sqrt{1 - |x'|^2})$. Compute $\varphi^*\beta$. Compare surface area formulas derived in §3.2.
Another approach. The unit sphere $j : S^{n-1} \hookrightarrow \mathbb{R}^n$ has a volume form (cf. (4.4.13)); $j^*\beta$ must be a scalar multiple $g$ of it, and, by Exercise 10 of §4.2, $g$ must be constant. Then Exercise 6 identifies this constant, in light of results from §3.2. See the exercises in §4.4 for more on this.
9. Given $\beta$ as in Exercise 6, show that the $(n-1)$-form

$\alpha = |x|^{-n}\beta$

on $\mathbb{R}^n \setminus 0$ is closed. Use Exercise 6 to show that $\int_{S^{n-1}} \alpha \ne 0$, and hence that $\alpha$ is not exact.
10. Let $\Omega \subset \mathbb{R}^n$ be a compact, smoothly bounded subset. Take $\alpha$ as in Exercise 9. Show that

$\int_{\partial\Omega} \alpha = A_{n-1}$ if $0 \in \Omega$, and $\int_{\partial\Omega} \alpha = 0$ if $0 \notin \overline{\Omega}.$

4.4. The classical Gauss, Green, and Stokes formulas
The case of (4.3.2) where $S = \overline{\Omega}$ is a region in $\mathbb{R}^2$ with smooth boundary yields the classical Green's theorem. In this case, we have

(4.4.1) $\beta = f\, dx + g\, dy, \qquad d\beta = \Big(\frac{\partial g}{\partial x} - \frac{\partial f}{\partial y}\Big)\, dx \wedge dy,$

and hence (4.3.2) becomes the following.
Proposition 4.4.1. If $\Omega$ is a region in $\mathbb{R}^2$ with smooth boundary, and $f$ and $g$ are smooth functions on $\overline{\Omega}$, which vanish outside some compact set in $\overline{\Omega}$, then

(4.4.2) $\iint_\Omega \Big(\frac{\partial g}{\partial x} - \frac{\partial f}{\partial y}\Big)\, dx\, dy = \int_{\partial\Omega} (f\, dx + g\, dy).$

Note that if we have a vector field $X = X_1\,\partial/\partial x + X_2\,\partial/\partial y$ on $\overline{\Omega}$, then the integrand on the left side of (4.4.2) is

(4.4.3) $\frac{\partial X_1}{\partial x} + \frac{\partial X_2}{\partial y} = \operatorname{div} X,$

provided $g = X_1$, $f = -X_2$. We obtain

(4.4.4) $\iint_\Omega (\operatorname{div} X)\, dx\, dy = \int_{\partial\Omega} (-X_2\, dx + X_1\, dy).$

If $\partial\Omega$ is parametrized by arc length, as $\gamma(s) = (x(s), y(s))$, with orientation as defined for Proposition 4.3.1, then the unit normal $\nu$, to $\partial\Omega$, pointing out of $\Omega$, is given by $\nu(s) = (y'(s), -x'(s))$, and (4.4.4) is equivalent to

(4.4.5) $\iint_\Omega (\operatorname{div} X)\, dx\, dy = \int_{\partial\Omega} \langle X, \nu \rangle\, ds.$
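Green's theorem (4.4.2) is easy to test concretely. The square and the pair $f, g$ below are arbitrary sample choices (on a surface with corners, Proposition 4.3.2 applies, so the functions need not vanish near the boundary):

```python
# Check of Green's theorem (4.4.2) on the unit square with sample f, g.
import sympy as sp

x, y = sp.symbols('x y')
f = x*y**2
g = x**3

lhs = sp.integrate(sp.diff(g, x) - sp.diff(f, y), (x, 0, 1), (y, 0, 1))

# counterclockwise boundary integral of f dx + g dy over the four edges
bottom = sp.integrate(f.subs(y, 0), (x, 0, 1))
right  = sp.integrate(g.subs(x, 1), (y, 0, 1))
top    = -sp.integrate(f.subs(y, 1), (x, 0, 1))
left   = -sp.integrate(g.subs(x, 0), (y, 0, 1))
rhs = bottom + right + top + left

assert sp.simplify(lhs - rhs) == 0
```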
This is a special case of Gauss's divergence theorem. We now derive a more general form of the divergence theorem. We begin with a definition of the divergence of a vector field on a surface $M$. Let $M$ be a region in $\mathbb{R}^n$, or an $n$-dimensional surface in $\mathbb{R}^{n+k}$, provided with a volume form

(4.4.6) $\omega_M \in \Lambda^n M.$

Let $X$ be a vector field on $M$. Then the divergence of $X$, denoted $\operatorname{div} X$, is a function on $M$ given by

(4.4.7) $(\operatorname{div} X)\, \omega_M = d(\omega_M \rfloor X).$

If $M = \mathbb{R}^n$, with the standard volume element

(4.4.8) $\omega = dx_1 \wedge \cdots \wedge dx_n,$

and if

(4.4.9) $X = \sum_j X_j(x)\, \frac{\partial}{\partial x_j},$

then

(4.4.10) $\omega \rfloor X = \sum_{j=1}^n (-1)^{j-1} X_j(x)\, dx_1 \wedge \cdots \wedge \widehat{dx_j} \wedge \cdots \wedge dx_n.$

Hence, in this case, (4.4.7) yields the familiar formula

(4.4.11) $\operatorname{div} X = \sum_{j=1}^n \partial_j X_j,$

where we use the notation

(4.4.12) $\partial_j = \frac{\partial}{\partial x_j}.$

Suppose now that $M$ is endowed with both an orientation and a metric tensor $g_{jk}(x)$. Then $M$ carries a natural volume element $\omega_M$, determined by the condition that if one has an orientation-preserving coordinate system in which $g_{jk}(p_0) = \delta_{jk}$, then $\omega_M(p_0) = dx_1 \wedge \cdots \wedge dx_n$. This condition produces the following formula, in any orientation-preserving coordinate system:

(4.4.13) $\omega_M = \sqrt{g}\, dx_1 \wedge \cdots \wedge dx_n, \qquad g = \det(g_{jk}),$

by the same sort of calculations as done in (3.2.10)–(3.2.15). We now compute $\operatorname{div} X$ when the volume element on $M$ is given by (4.4.13). We have

(4.4.14) $\omega_M \rfloor X = \sum_j (-1)^{j-1} X_j \sqrt{g}\, dx_1 \wedge \cdots \wedge \widehat{dx_j} \wedge \cdots \wedge dx_n,$

and hence

(4.4.15) $d(\omega_M \rfloor X) = \partial_j(\sqrt{g}\, X_j)\, dx_1 \wedge \cdots \wedge dx_n.$

Here, as below, we use the summation convention. Hence the formula (4.4.7) gives

(4.4.16) $\operatorname{div} X = g^{-1/2}\, \partial_j(g^{1/2} X_j).$

Compare (3.2.56).

We now derive the divergence theorem, as a consequence of the Stokes formula, which we recall is

(4.4.17) $\int_M d\alpha = \int_{\partial M} \alpha,$

for an $(n-1)$-form $\alpha$ on $M$, assumed to be a smooth compact oriented surface with boundary. If $\alpha = \omega_M \rfloor X$, formula (4.4.7) gives

(4.4.18) $\int_M (\operatorname{div} X)\, \omega_M = \int_{\partial M} \omega_M \rfloor X.$
This is one form of the divergence theorem. We will produce an alternative expression for the integrand on the right before stating the result formally. Given that $\omega_M$ is the volume form for $M$ determined by a Riemannian metric, we can write the interior product $\omega_M \rfloor X$ in terms of the volume element $\omega_{\partial M}$ on $\partial M$, with its induced orientation and Riemannian metric, as follows. Pick coordinates on $M$, centered at $p_0 \in \partial M$, such that $\partial M$ is tangent to the hyperplane $\{x_1 = 0\}$ at $p_0 = 0$ (with $M$ to the left of $\partial M$), and such that $g_{jk}(p_0) = \delta_{jk}$, so $\omega_M(p_0) = dx_1 \wedge \cdots \wedge dx_n$. Consequently, $\omega_{\partial M}(p_0) = dx_2 \wedge \cdots \wedge dx_n$. It follows that at $p_0$,

(4.4.19) $j^*(\omega_M \rfloor X) = \langle X, \nu \rangle\, \omega_{\partial M},$

where $\nu$ is the unit vector normal to $\partial M$, pointing out of $M$, and $j : \partial M \hookrightarrow \overline{M}$ the natural inclusion. The two sides of (4.4.19), which are both defined in a coordinate-independent fashion, are hence equal on $\partial M$, and the identity (4.4.18) becomes

(4.4.20) $\int_M (\operatorname{div} X)\, \omega_M = \int_{\partial M} \langle X, \nu \rangle\, \omega_{\partial M}.$

Finally, we adopt the notation of the sort used in §§3.1–3.2. We denote the volume element on $M$ by $dV$ and that on $\partial M$ by $dS$, obtaining the divergence theorem:

Theorem 4.4.2. If $M$ is a compact surface with boundary and $X$ is a smooth vector field on $M$, then

(4.4.21) $\int_M (\operatorname{div} X)\, dV = \int_{\partial M} \langle X, \nu \rangle\, dS,$

where $\nu$ is the unit outward-pointing normal to $\partial M$.

The only point left to mention here is that $M$ need not be orientable. Indeed, we can treat the integrals in (4.4.21) as surface integrals, as in §3.2, and note that all objects in (4.4.21) are independent of a choice of orientation. To prove the general case, just use a partition of unity supported on orientable pieces.

We obtain some further integral identities. First, we apply (4.4.21) with $X$ replaced by $uX$. We have the following derivation identity:

(4.4.22) $\operatorname{div} uX = u\, \operatorname{div} X + \langle du, X \rangle = u\, \operatorname{div} X + Xu,$

which follows easily from the formula (4.4.16). The divergence theorem immediately gives

(4.4.23) $\int_M (\operatorname{div} X)u\, dV + \int_M Xu\, dV = \int_{\partial M} \langle X, \nu \rangle u\, dS.$

Replacing $u$ by $uv$ and using the derivation identity $X(uv) = (Xu)v + u(Xv)$, we have

(4.4.24) $\int_M [(Xu)v + u(Xv)]\, dV = -\int_M (\operatorname{div} X)uv\, dV + \int_{\partial M} \langle X, \nu \rangle uv\, dS.$

It is very useful to apply (4.4.23) to a gradient vector field $X$. If $v$ is a smooth function on $M$, $\operatorname{grad} v$ is a vector field satisfying

(4.4.25) $\langle \operatorname{grad} v, Y \rangle = \langle Y, dv \rangle,$

for any vector field $Y$, where the brackets on the left are given by the metric tensor on $M$ and those on the right by the natural pairing of vector fields and 1-forms. Hence $\operatorname{grad} v = X$ has components $X^j = g^{jk}\partial_k v$, where $(g^{jk})$ is the matrix inverse of $(g_{jk})$. Applying $\operatorname{div}$ to $\operatorname{grad} v$ defines the Laplace operator,

(4.4.26) $\Delta v = \operatorname{div} \operatorname{grad} v = g^{-1/2}\, \partial_j\big(g^{1/2} g^{jk}\, \partial_k v\big).$
When $M$ is a region in $\mathbb{R}^n$ and we use the standard Euclidean metric, so $\operatorname{div} X$ is given by (4.4.11), we have the Laplace operator on Euclidean space,

(4.4.27) $\Delta v = \frac{\partial^2 v}{\partial x_1^2} + \cdots + \frac{\partial^2 v}{\partial x_n^2}.$

Now, setting $X = \operatorname{grad} v$ in (4.4.23), we have $Xu = \langle \operatorname{grad} u, \operatorname{grad} v \rangle$, and $\langle X, \nu \rangle = \langle \nu, \operatorname{grad} v \rangle$, which we call the normal derivative of $v$, and denote $\partial v/\partial\nu$. Hence

(4.4.28) $\int_M u(\Delta v)\, dV = -\int_M \langle \operatorname{grad} u, \operatorname{grad} v \rangle\, dV + \int_{\partial M} u\, \frac{\partial v}{\partial\nu}\, dS.$

If we interchange the roles of $u$ and $v$ and subtract, we have

(4.4.29) $\int_M u(\Delta v)\, dV = \int_M (\Delta u)v\, dV + \int_{\partial M} \Big[ u\, \frac{\partial v}{\partial\nu} - v\, \frac{\partial u}{\partial\nu} \Big]\, dS.$
Formulas (4.4.28)–(4.4.29) are also called Green formulas. We will make further use of them in §5.1.

We return to the Green formula (4.4.2), and give it another formulation. Consider a vector field $Z = (f, g, h)$ on a region in $\mathbb{R}^3$ containing the planar surface $U = \{(x, y, 0) : (x, y) \in \Omega\}$. If we form

(4.4.30) $\operatorname{curl} Z = \det \begin{pmatrix} i & j & k \\ \partial/\partial x & \partial/\partial y & \partial/\partial z \\ f & g & h \end{pmatrix},$

we see that the integrand on the left side of (4.4.2) is the $k$-component of $\operatorname{curl} Z$, so (4.4.2) can be written

(4.4.31) $\iint_U (\operatorname{curl} Z) \cdot k\, dA = \int_{\partial U} (Z \cdot T)\, ds,$

where $T$ is the unit tangent vector to $\partial U$. To see how to extend this result, note that $k$ is a unit normal field to the planar surface $U$.

To formulate and prove the extension of (4.4.31) to any compact oriented surface with boundary in $\mathbb{R}^3$, we use the relation between curl and the exterior derivative discussed in Exercises 2–3 of §4.2. In particular, if we set

(4.4.32) $F = \sum_{j=1}^3 f_j(x)\, \frac{\partial}{\partial x_j}, \qquad \varphi = \sum_{j=1}^3 f_j(x)\, dx_j,$

then $\operatorname{curl} F = \sum_j g_j(x)\, \partial/\partial x_j$, where

(4.4.33) $d\varphi = g_1(x)\, dx_2 \wedge dx_3 + g_2(x)\, dx_3 \wedge dx_1 + g_3(x)\, dx_1 \wedge dx_2.$
Now suppose $M$ is a smooth oriented $(n-1)$-dimensional surface with boundary in $\mathbb{R}^n$. Using the orientation of $M$, we pick a unit normal field $N$ to $M$ as follows. Take a smooth function $v$ which vanishes on $M$ but such that $\nabla v(x) \ne 0$ on $M$. Thus $\nabla v$ is normal to $M$. Let $\sigma \in \Lambda^{n-1}(M)$ define the orientation of $M$. Then $dv \wedge \sigma = a(x)\, dx_1 \wedge \cdots \wedge dx_n$, where $a(x)$ is nonvanishing on $M$. For $x \in M$, we take $N(x) = \nabla v(x)/|\nabla v(x)|$ if $a(x) > 0$, and $N(x) = -\nabla v(x)/|\nabla v(x)|$ if $a(x) < 0$. We call $N$ the positive unit normal field to the oriented surface $M$, in this case. Part of the motivation for this characterization of $N$ is that if $\Omega \subset \mathbb{R}^n$ is an open set with smooth boundary $M = \partial\Omega$, and we give $M$ the induced orientation, as described in §4.3, then the positive normal field just defined coincides with the unit normal field pointing out of $\Omega$. Compare Exercises 2–3 of §4.3.

Now, if $G = (g_1, \dots, g_n)$ is a vector field defined near $M$, then

(4.4.34) $\int_M (N \cdot G)\, dS = \int_M \Big( \sum_{j=1}^n (-1)^{j-1} g_j(x)\, dx_1 \wedge \cdots \wedge \widehat{dx_j} \wedge \cdots \wedge dx_n \Big).$
This result follows from (4.4.19). When $n = 3$ and $G = \operatorname{curl} F$, we deduce from (4.4.32)–(4.4.33) that

(4.4.35) $\iint_M d\varphi = \iint_M (N \cdot \operatorname{curl} F)\, dS.$

Furthermore, in this case we have

(4.4.36) $\int_{\partial M} \varphi = \int_{\partial M} (F \cdot T)\, ds,$
where $T$ is the unit tangent vector to $\partial M$, specified as follows by the orientation of $\partial M$: if $\tau \in \Lambda^1(\partial M)$ defines the orientation of $\partial M$, then $\langle T, \tau \rangle > 0$ on $\partial M$. We call $T$ the forward unit tangent vector field to the oriented curve $\partial M$. By the calculations above, we have the classical Stokes formula.
Cl vectorfield on a neighborhood ofM, then
(4.4.37)
N
ff(N ' curlF) dS = f (F · T) ds, M
aM
where is the positive unit normalfield on M and T is theforward unit tangentfield to
3M.
Remark. The right side of (4.4.37) is called the circulation of $F$ about $\partial M$. Proposition 4.4.3 shows how $\operatorname{curl} F$ arises to measure this circulation.

Direct proof of the divergence theorem. Let $\Omega$ be a bounded open subset of $\mathbb{R}^n$, with a $C^1$ smooth boundary $\partial\Omega$. Hence, for each $p \in \partial\Omega$, there is a neighborhood $U$ of $p$ in $\mathbb{R}^n$, a rotation of coordinate axes, and a $C^1$ function $u : \mathcal{O} \to \mathbb{R}$, defined on an open set $\mathcal{O} \subset \mathbb{R}^{n-1}$, such that

$\Omega \cap U = \{x \in \mathbb{R}^n : x_n \le u(x'),\ x' \in \mathcal{O}\} \cap U,$

where $x = (x', x_n)$, $x' = (x_1, \dots, x_{n-1})$. We aim to prove that, given $f \in C^1(\overline{\Omega})$ and any constant vector $e \in \mathbb{R}^n$,

(4.4.38) $\int_\Omega e \cdot \nabla f(x)\, dx = \int_{\partial\Omega} (e \cdot N) f\, dS,$
where $dS$ is surface measure on $\partial\Omega$ and $N(x)$ is the unit normal to $\partial\Omega$, pointing out of $\Omega$. At $x = (x', u(x')) \in \partial\Omega$, we have

(4.4.39) $N = (1 + |\nabla u|^2)^{-1/2}\, (-\nabla u, 1).$

To prove (4.4.38), we may as well suppose $f$ is supported in such a neighborhood $U$. Then we have

(4.4.40) $\int_\Omega \frac{\partial f}{\partial x_n}\, dx = \int_{\mathcal{O}} \int_{-\infty}^{u(x')} \frac{\partial f}{\partial x_n}(x', x_n)\, dx_n\, dx' = \int_{\mathcal{O}} f(x', u(x'))\, dx' = \int_{\partial\Omega} (e_n \cdot N) f\, dS.$

The first identity in (4.4.40) follows from Theorem 3.1.9, the second identity from the fundamental theorem of calculus, and the third identity from the identification $dS = (1 + |\nabla u|^2)^{1/2}\, dx'$, established in (3.2.22). We use the standard basis $\{e_1, \dots, e_n\}$ of $\mathbb{R}^n$.

Such an argument works when $e_n$ is replaced by any constant vector $e$ with the property that we can represent $\partial\Omega \cap U$ as the graph of a function $y_n = u(y')$, with the $y_n$-axis parallel to $e$. In particular, it works for $e = e_n + ae_j$, for $1 \le j \le n-1$ and for $|a|$ sufficiently small. Thus, we have
(4.4.41) $\int_\Omega (e_n + ae_j) \cdot \nabla f(x)\, dx = \int_{\partial\Omega} (e_n + ae_j) \cdot N\, f\, dS.$

If we subtract (4.4.40) from this and divide the result by $a$, we obtain (4.4.38) for $e = e_j$, for all $j$, and hence (4.4.38) holds in general.

Note that replacing $e$ by $e_j$ and $f$ by $f_j$ in (4.4.38) and summing over $1 \le j \le n$ yields

(4.4.42) $\int_\Omega (\operatorname{div} F)\, dx = \int_{\partial\Omega} N \cdot F\, dS,$

for the vector field $F = (f_1, \dots, f_n)$. This is the usual statement of Gauss's divergence theorem, as given in Theorem 4.4.2 (specialized to domains in $\mathbb{R}^n$). Reversing the argument leading from (4.4.2) to (4.4.5), we also have another proof of Green's theorem, in the form (4.4.2).
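The divergence theorem (4.4.42) can be tested concretely on the unit ball in $\mathbb{R}^3$. The field $F = (x^2, y, z^3)$ below is an arbitrary sample choice; both sides are evaluated in spherical coordinates.

```python
# Symbolic check of the divergence theorem (4.4.42) for F = (x^2, y, z^3)
# on the unit ball in R^3.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', nonnegative=True)
x = r*sp.sin(th)*sp.cos(ph)
y = r*sp.sin(th)*sp.sin(ph)
z = r*sp.cos(th)

divF = 2*x + 1 + 3*z**2        # div(x^2, y, z^3)
jac = r**2*sp.sin(th)          # spherical volume element
lhs = sp.integrate(divF*jac, (r, 0, 1), (th, 0, sp.pi), (ph, 0, 2*sp.pi))

# On the unit sphere the outward normal is (x, y, z)|_{r=1};
# dS = sin(theta) dtheta dphi there.
flux = (x**3 + y**2 + z**4).subs(r, 1)   # N . F on the sphere
rhs = sp.integrate(flux*sp.sin(th), (th, 0, sp.pi), (ph, 0, 2*sp.pi))

assert sp.simplify(lhs - rhs) == 0
print(lhs)   # both sides equal 32*pi/15
```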
Exercises
1. Newton's equation $m\, d^2x/dt^2 = -\nabla V(x)$ for the motion in $\mathbb{R}^n$ of a body of mass $m$, in a potential force field $F = -\nabla V$, can be converted to a first-order system for $(x, \xi)$, with $\xi = m\dot{x}$. One gets

$\frac{d}{dt}(x, \xi) = H_f(x, \xi),$

where $H_f$ is a Hamiltonian vector field on $\mathbb{R}^{2n}$, given by

$H_f = \sum_j \Big( \frac{\partial f}{\partial \xi_j} \frac{\partial}{\partial x_j} - \frac{\partial f}{\partial x_j} \frac{\partial}{\partial \xi_j} \Big).$

In the case described above,

$f(x, \xi) = \frac{1}{2m}|\xi|^2 + V(x).$

Calculate $\operatorname{div} H_f$ from (4.4.11).
2. Let $X$ be a smooth vector field on a smooth surface $M$, generating a flow $\mathcal{F}^t_X$. Let $\Omega \subset M$ be a compact, smoothly bounded subset, and set $\Omega_t = \mathcal{F}^t_X(\Omega)$. As seen in Proposition 3.2.7,

(4.4.43) $\frac{d}{dt} \operatorname{Vol}(\Omega_t) = \int_{\Omega_t} (\operatorname{div} X)\, dV.$

Use the divergence theorem to deduce from this that

(4.4.44) $\frac{d}{dt} \operatorname{Vol}(\Omega_t) = \int_{\partial\Omega_t} \langle X, \nu \rangle\, dS.$

Remark. Conversely, a direct proof of (4.4.44), together with the divergence theorem, would lead to another proof of (4.4.43).

3. Show that if $F : \mathbb{R}^3 \to \mathbb{R}^3$ is a linear rotation, then, for a $C^1$ vector field $Z$ on $\mathbb{R}^3$,

(4.4.45) $F_\#(\operatorname{curl} Z) = \operatorname{curl}(F_\# Z).$
4. Let $M$ be the graph in $\mathbb{R}^3$ of a smooth function $z = u(x,y)$, $(x,y) \in \Omega \subset \mathbb{R}^2$, a bounded region with smooth boundary (maybe with corners). Show that

(4.4.46) $\int_M (\operatorname{curl} F \cdot N)\, dS = \iint_\Omega \Big[ \Big(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}\Big)\Big(-\frac{\partial u}{\partial x}\Big) + \Big(\frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}\Big)\Big(-\frac{\partial u}{\partial y}\Big) + \Big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\Big) \Big]\, dx\, dy,$

where the derivatives of $F_j$ are evaluated at $(x, y, u(x,y))$. Show that

(4.4.47) $\int_{\partial M} (F \cdot T)\, ds = \int_{\partial\Omega} \Big(\tilde{F}_1 + \tilde{F}_3 \frac{\partial u}{\partial x}\Big)\, dx + \Big(\tilde{F}_2 + \tilde{F}_3 \frac{\partial u}{\partial y}\Big)\, dy,$

where $\tilde{F}_j(x,y) = F_j(x, y, u(x,y))$. Apply Green's theorem, with $f = \tilde{F}_1 + \tilde{F}_3(\partial u/\partial x)$, $g = \tilde{F}_2 + \tilde{F}_3(\partial u/\partial y)$, to show that the right sides of (4.4.46) and (4.4.47) are equal, hence proving the Stokes theorem in this case.

5. Let $M \subset \mathbb{R}^n$ be the graph of a function $x_n = u(x')$, $x' = (x_1, \dots, x_{n-1})$. If

$\beta = \sum_{j=1}^n (-1)^{j-1} g_j(x)\, dx_1 \wedge \cdots \wedge \widehat{dx_j} \wedge \cdots \wedge dx_n,$

as in (4.4.34), and $\varphi(x') = (x', u(x'))$, show that

$\varphi^*\beta = (-1)^n \Big[ \sum_{j=1}^{n-1} g_j(x', u(x'))\, \partial_j u - g_n(x', u(x')) \Big]\, dx_1 \wedge \cdots \wedge dx_{n-1} = (-1)^n\, G \cdot (\nabla u, -1)\, dx_1 \wedge \cdots \wedge dx_{n-1},$

where $G = (g_1, \dots, g_n)$, and verify the identity (4.4.34) in this case.
Hint. For the last part, recall Exercises 2–3 of §4.3, regarding the orientation of $M$.
6. Let $S$ be a smooth oriented two-dimensional surface in $\mathbb{R}^3$, and $M$ an open subset of $S$, with smooth boundary; see Figure 4.4.1. Let $N$ be the positive unit normal field to $S$, defined by its orientation. For $x \in \partial M$, let $\nu(x)$ be the unit vector, tangent to $M$, normal to $\partial M$, and pointing out of $M$, and let $T$ be the forward unit tangent vector field to $\partial M$. Show that, on $\partial M$,

$N \times \nu = T, \qquad \nu \times T = N.$

7. If $M$ is an oriented $(n-1)$-dimensional surface in $\mathbb{R}^n$, with positive unit normal field $N$, show that the volume element $\omega_M$ on $M$ is given by

$\omega_M = \omega \rfloor N,$

where $\omega = dx_1 \wedge \cdots \wedge dx_n$ is the standard volume form on $\mathbb{R}^n$. Deduce that the volume element on the unit sphere $S^{n-1} \subset \mathbb{R}^n$ is given by

$\omega_{S^{n-1}} = \sum_{j=1}^n (-1)^{j-1} x_j\, dx_1 \wedge \cdots \wedge \widehat{dx_j} \wedge \cdots \wedge dx_n,$

if $S^{n-1}$ inherits the orientation as the boundary of the unit ball.
8. Let $M$ be a $C^k$ surface, $k \ge 2$. Suppose $\varphi : M \to M$ is a $C^k$ isometry, i.e., it preserves the metric tensor. Taking $\varphi^*u(x) = u(\varphi(x))$ …

4.5. Differential forms and the change of variable formula

… Take $A > 0$ such that $f(x - Ae_1)$ is supported in $\{x : |x| > R\}$, where $e_1 = (1, 0, \dots, 0)$. Also take $A$ large enough that the image of $\{x : |x| \le R\}$ under $\varphi$ does not intersect the support of $f(x - Ae_1)$. We can set

(4.5.5) $F(x) = f(x) - f(x - Ae_1) = \frac{\partial\psi}{\partial x_1}(x),$

where

(4.5.6) $\psi(x) = \int_0^A f(x - se_1)\, ds, \qquad \psi \in C^1_0(\mathbb{R}^n).$
Then we have identities involving n-forms,

(4.5.7)  \alpha = F(x)\, dx_1 \wedge \cdots \wedge dx_n = \frac{\partial\psi}{\partial x_1}\, dx_1 \wedge \cdots \wedge dx_n = d\psi \wedge dx_2 \wedge \cdots \wedge dx_n = d(\psi\, dx_2 \wedge \cdots \wedge dx_n),

i.e., \alpha = d\beta, with \beta = \psi\, dx_2 \wedge \cdots \wedge dx_n a compactly supported (n-1)-form of class C^1. Now the pullback of \alpha under \varphi is given by

(4.5.8)  \varphi^*\alpha = F(\varphi(x))\, \det D\varphi(x)\, dx_1 \wedge \cdots \wedge dx_n.

Furthermore, the right side of (4.5.8) is equal to

(4.5.9)  f(\varphi(x))\, \det D\varphi(x)\, dx_1 \wedge \cdots \wedge dx_n - f(x - Ae_1)\, dx_1 \wedge \cdots \wedge dx_n.

Hence we have

(4.5.10)  \int f(\varphi(x))\, \det D\varphi(x)\, dx_1 \cdots dx_n - \int f(x)\, dx_1 \cdots dx_n = \int \varphi^*\alpha = \int \varphi^* d\beta = \int d(\varphi^*\beta),

where we use the general identity

(4.5.11)  \varphi^* d\beta = d(\varphi^*\beta),

a consequence of the chain rule. On the other hand, a very special case of the Stokes theorem applies to

(4.5.12)  \gamma = \varphi^*\beta = \sum_j \gamma_j(x)\, dx_1 \wedge \cdots \wedge \widehat{dx_j} \wedge \cdots \wedge dx_n,

with \gamma_j \in C_0^1(\mathbb{R}^n). Namely,

(4.5.13)  d\gamma = \sum_j (-1)^{j-1} \frac{\partial \gamma_j}{\partial x_j}\, dx_1 \wedge \cdots \wedge dx_n,

and hence, by the fundamental theorem of calculus,

(4.5.14)  \int d\gamma = 0.

This gives the desired identity (4.5.4), from (4.5.10). □
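To see (4.5.4) in action, here is a one-dimensional numerical sketch (the map phi and test function f below are arbitrary illustrative choices, not from the text): phi equals the identity outside a bounded set (up to a negligible Gaussian tail) and is not one-to-one, yet the two integrals agree.

```python
import numpy as np

# One-dimensional check of (4.5.4): for a C^1 map phi with phi(x) = x
# for large |x|, int f(phi(x)) phi'(x) dx = int f(x) dx, even though
# phi is not one-to-one.  phi and f are arbitrary illustrative choices.

def phi(x):
    return x + 2.0 * np.exp(-x**2) * np.sin(3*x)

def phi_prime(x):
    # derivative of phi, computed by hand
    return 1.0 + 2.0 * np.exp(-x**2) * (3*np.cos(3*x) - 2*x*np.sin(3*x))

def f(x):
    return np.exp(-(x - 0.5)**2)

def trapezoid(y, h):
    return h * (y.sum() - 0.5*(y[0] + y[-1]))

x = np.linspace(-10.0, 10.0, 200_001)
h = x[1] - x[0]

lhs = trapezoid(f(phi(x)) * phi_prime(x), h)
rhs = trapezoid(f(x), h)
print(lhs, rhs)   # the two values agree to quadrature accuracy
```

Note that phi' changes sign (phi is genuinely non-monotone), so this is not merely the classical substitution rule for diffeomorphisms.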
We make some remarks on Theorem 4.5.2. Note that \varphi is not assumed to be one-to-one or onto. In fact, as noted in [32], the identity (4.5.4) implies that such \varphi must be onto, and this has important topological implications. We mention that if one puts absolute values around \det D\varphi(x) in (4.5.4), the appropriate formula is

(4.5.15)  \int f(\varphi(x))\, |\det D\varphi(x)|\, dx = \int f(x)\, n(x)\, dx, \qquad n(x) = \#\{y : \varphi(y) = x\}.

A proof of (4.5.15) can be found in texts on geometric measure theory.

As noted in [32], Theorem 4.5.2 was proven in [4]. The proof there makes use of differential forms and the Stokes theorem, but it is quite different from the proof given here.

4. Differential forms and the Gauss-Green-Stokes formula

A crucial difference is that the proof in [4] requires that one knows the change of variable formula as formulated in Theorem 4.5.1.

We now show how Theorem 4.5.1 can be deduced from Theorem 4.5.2. We will use the following lemma.
Lemma 4.5.3. In the setting of Theorem 4.5.1, and with \det D\varphi > 0, given p \in \Omega, there exist a neighborhood U of p and a C^1 map \Phi : \mathbb{R}^n \to \mathbb{R}^n such that

(4.5.16)  \Phi = \varphi \text{ on } \varphi^{-1}(U), \qquad \Phi(x) = x \text{ for } |x| \text{ large},

and

(4.5.17)  \Phi(x) \in U \Longrightarrow x \in \varphi^{-1}(U).

Granted the lemma, we proceed as follows. Assume \det D\varphi > 0. Given f \in C(\Omega), supp f \subset K, compact in \Omega, cover K with a finite number of sets U_j as in Lemma 4.5.3, and, using a continuous partition of unity (cf. §3.3), write f = \sum_j f_j, supp f_j \subset U_j. Also, let \Phi_j have the obvious significance. By Theorem 4.5.2, we have

(4.5.18)  \int f_j(\Phi_j(x))\, \det D\Phi_j(x)\, dx = \int f_j\, dx.

But we also have

(4.5.19)  \int f_j(\Phi_j(x))\, \det D\Phi_j(x)\, dx = \int f_j(\varphi(x))\, \det D\varphi(x)\, dx.

Now summing over j gives (4.5.3). If we do not have \det D\varphi > 0, then \det D\varphi < 0. In this case, one can compose with the reflection map (4.5.20) (for which Theorem 4.5.1 is elementary) and readily recover the desired result.

We turn to the proof of Lemma 4.5.3. Say q = \varphi^{-1}(p), D\varphi(q) = A \in Gl_+(n, \mathbb{R}), i.e., A \in Gl(n, \mathbb{R}) and \det A > 0. Translating coordinates, we can assume p = q = 0. We set

(4.5.21)  \Psi(x) = \beta(x)\varphi(x) + (1 - \beta(x))Ax,

where \beta \in C_0^\infty(\mathbb{R}^n) has support in a small neighborhood of q and \beta = 1 on a smaller neighborhood V = \varphi^{-1}(U), chosen so that we can apply Corollary 2.2.4, to deduce that

(4.5.22)  \Psi \text{ maps } \mathbb{R}^n \text{ diffeomorphically onto its image, an open set in } \mathbb{R}^n.

In fact, estimates behind the proof of Proposition 2.2.2 imply that, for appropriately chosen \beta, there exists b > 0 such that |\Psi(x) - \Psi(y)| \ge b|x - y| for all x, y \in \mathbb{R}^n. Hence the image \Psi(\mathbb{R}^n) is closed in \mathbb{R}^n, as well as open, so actually \Psi maps \mathbb{R}^n diffeomorphically onto \mathbb{R}^n. Note that \Psi = \varphi on V = \varphi^{-1}(U).

We want to alter \Psi(x) for large |x| to obtain \Phi, satisfying (4.5.16)-(4.5.17). To do this, we use the fact that Gl_+(n, \mathbb{R}) is connected (see Proposition 3.2.14). Pick a smooth path \gamma : [0, 1] \to Gl_+(n, \mathbb{R}) such that \gamma(t) = A for t \in [0, 1/4] and \gamma(t) = I for t \in [3/4, 1]. Let

(4.5.23)  M = \sup_{0 \le t \le 1} \|\gamma(t)^{-1}\|, \quad \text{so} \quad |\gamma(t)x| \ge M^{-1}|x|, \ \forall\, x \in \mathbb{R}^n.

Set B_R = \{x \in \mathbb{R}^n : |x| < R\}. Now assume U \subset B_{R_0}, so \Psi(V) \subset B_{R_0}. Next, take R_1 so large that V = \varphi^{-1}(U) \subset B_{R_1} and

(4.5.24)  |x| \ge R_1 \Longrightarrow |\Psi(x)| > M R_0 \ \text{ and } \ \Psi(x) = Ax.

Now set

(4.5.25)  \Phi(x) = \Psi(x) \ \text{for } |x| \le R_1, \qquad \gamma(t)x \ \text{for } |x| = R_1 + t, \ 0 \le t \le 1, \qquad x \ \text{for } |x| \ge R_1 + 1.

This map has the properties (4.5.16)-(4.5.17).
Chapter 5
Applications of the Gauss-Green-Stokes formula
In this chapter we present two major types of applications of the theory of differential forms developed in Chapter 4.

The first set of applications, given in §5.1, deals with complex function theory. If \Omega \subset \mathbb{C} is an open set, a C^1 function f : \Omega \to \mathbb{C} is said to be holomorphic if it is complex differentiable, or equivalently if it satisfies a set of equations called the Cauchy-Riemann equations. We deduce from Green's theorem that if \Omega is a smoothly bounded domain and f \in C^1(\overline\Omega) is holomorphic on \Omega, then we have the Cauchy integral theorem,

(5.0.1)  \int_{\partial\Omega} f(z)\, dz = 0,

and the Cauchy integral formula,

(5.0.2)  f(z_0) = \frac{1}{2\pi i} \int_{\partial\Omega} \frac{f(z)}{z - z_0}\, dz, \qquad z_0 \in \Omega.

These key results lead to further results on holomorphic functions, such as power series developments. In §5.1 we also consider functions on domains \Omega \subset \mathbb{R}^n that are harmonic, and we use Gauss-Green formulas to establish results about such functions, such as mean value properties and the Liouville theorem, which states that a bounded harmonic function on all of \mathbb{R}^n must be constant. These results specialize to holomorphic functions on \mathbb{C}. One consequence is the fundamental theorem of algebra, which states that if p(z) is a nonconstant polynomial on \mathbb{C}, it must have a complex root.
The second set of applications, given in §§5.2-5.3, yields important results on the topological behavior of smooth maps on regions in \mathbb{R}^n, and on surfaces and more generally on manifolds. A central notion here is that of degree theory. If X is a smooth, compact, oriented, n-dimensional surface and F : X \to Y is a smooth map to a compact, connected, oriented, n-dimensional surface Y, then the degree of F is given by

(5.0.3)  \mathrm{Deg}(F) = \int_X F^*\omega,

where \omega is an n-form on Y such that \int_Y \omega = 1. That this is well defined, independent of the choice of such \omega, is a consequence of the fundamental exactness criterion, given in Proposition 5.3.6, which says a smooth n-form \alpha on Y is exact, i.e., has the form \alpha = d\beta, if and only if \int_Y \alpha = 0. With this, we are able to develop degree theory as a powerful tool. Applications range from the Brouwer fixed-point theorem (actually arising here as a precursor to degree theory) and the Jordan-Brouwer separation theorem (in the smooth case) to a degree-theory proof of the fundamental theorem of algebra.

We also consider on a compact surface M a vector field X with nondegenerate critical points, define the index of such a vector field, and show that

(5.0.4)  \mathrm{Index}\, X = \chi(M)

is independent of the choice of such a vector field. This defines an invariant \chi(M), called the Euler characteristic. Investigations of \chi(M) will play an important role in Chapter 6.

5.1. Holomorphic functions and harmonic functions
Let f be a complex-valued C^1 function on a region \Omega \subset \mathbb{R}^2. We identify \mathbb{R}^2 with \mathbb{C}, via z = x + iy, and write f(z) = f(x, y). We say f is holomorphic on \Omega provided it is complex differentiable, in the sense that

(5.1.1)  \lim_{h \to 0} \frac{1}{h}\big( f(z + h) - f(z) \big) \text{ exists},

for each z \in \Omega. When this limit exists, we denote it f'(z), or df/dz. An equivalent condition (given f \in C^1) is that f satisfies the Cauchy-Riemann equation,

(5.1.2)  \frac{\partial f}{\partial x} = \frac{1}{i} \frac{\partial f}{\partial y}.

In such a case,

(5.1.3)  f'(z) = \frac{\partial f}{\partial x}(z) = \frac{1}{i} \frac{\partial f}{\partial y}(z).

Note that f(z) = z has this property, but f(z) = \bar z does not. The following is a convenient tool for producing more holomorphic functions.

Lemma 5.1.1. If f and g are holomorphic on \Omega, so is fg.

Proof. We have

(5.1.4)  \frac{\partial}{\partial x}(fg) = \frac{\partial f}{\partial x}\, g + f\, \frac{\partial g}{\partial x}, \qquad \frac{\partial}{\partial y}(fg) = \frac{\partial f}{\partial y}\, g + f\, \frac{\partial g}{\partial y},
so if f and g satisfy the Cauchy-Riemann equation, so does fg. □

Note that

(5.1.5)  \frac{d}{dz}(fg)(z) = f'(z)g(z) + f(z)g'(z).

Using Lemma 5.1.1, one can show inductively that if k \in \mathbb{N}, z^k is holomorphic on \mathbb{C}, and

(5.1.6)  \frac{d}{dz} z^k = k z^{k-1}.

Also, a direct analysis of (5.1.1) gives this for k = -1, on \mathbb{C} \setminus 0, and then an inductive
argument gives it for each negative integer k, on \mathbb{C} \setminus 0. The exercises explore various other important examples of holomorphic functions.

Our goal in this section is to show how Green's theorem can be used to establish basic results about holomorphic functions on domains in \mathbb{C} (and also to develop a study of harmonic functions on domains in \mathbb{R}^n). In Theorems 5.1.2-5.1.4, \Omega will be a bounded domain with piecewise smooth boundary, and we assume \Omega can be partitioned into a finite number of C^1 domains with corners, as defined in §4.3.

To begin, we apply Green's theorem to the line integral

\int_{\partial\Omega} f\, dz = \int_{\partial\Omega} f\, (dx + i\, dy).

Clearly, (4.4.2) applies to complex-valued functions, and if we set g = if, we get

(5.1.7)  \int_{\partial\Omega} f\, dz = \iint_\Omega \Big( i\, \frac{\partial f}{\partial x} - \frac{\partial f}{\partial y} \Big)\, dx\, dy.
Whenever f is holomorphic, the integrand on the right side of (5.1.7) vanishes, so we have the following result, known as Cauchy's integral theorem.

Theorem 5.1.2. If f \in C^1(\overline\Omega) is holomorphic, then

(5.1.8)  \int_{\partial\Omega} f(z)\, dz = 0.

Using (5.1.8), we can establish Cauchy's integral formula:

Theorem 5.1.3. If f \in C^1(\overline\Omega) is holomorphic and z_0 \in \Omega, then

(5.1.9)  f(z_0) = \frac{1}{2\pi i} \int_{\partial\Omega} \frac{f(z)}{z - z_0}\, dz.

Proof. Note that g(z) = f(z)/(z - z_0) is holomorphic on \overline\Omega \setminus \{z_0\}. Let D_r be the disk of radius r centered at z_0. Pick r so small that \overline{D}_r \subset \Omega. See Figure 5.1.1. Then (5.1.8) implies

(5.1.10)  \int_{\partial\Omega} \frac{f(z)}{z - z_0}\, dz = \int_{\partial D_r} \frac{f(z)}{z - z_0}\, dz.
Figure 5.1.1. Proving Cauchy's integral formula
To evaluate the integral on the right, parametrize the curve \partial D_r by \gamma(\theta) = z_0 + r e^{i\theta}. Hence dz = i r e^{i\theta}\, d\theta, so the integral on the right is equal to

(5.1.11)  \int_0^{2\pi} \frac{f(z_0 + r e^{i\theta})}{r e^{i\theta}}\, i r e^{i\theta}\, d\theta = i \int_0^{2\pi} f(z_0 + r e^{i\theta})\, d\theta.

As r \to 0, this tends in the limit to 2\pi i\, f(z_0), so (5.1.9) is established. □
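The computation behind (5.1.11) is easy to test numerically. The following sketch (the test function f = e^z, center, and radius are arbitrary choices) approximates the right side of (5.1.9) over a circle by the trapezoid rule in \theta and recovers f(z_0):

```python
import numpy as np

# Numerical check of Cauchy's integral formula (5.1.9) for f(z) = e^z:
# (1/2 pi i) \oint f(z)/(z - z0) dz over a circle about z0, parametrized
# as in (5.1.11) by z = z0 + r e^{i theta}, dz = i r e^{i theta} d theta.
f = np.exp
z0 = 0.3 + 0.2j
r = 1.0
theta = np.linspace(0.0, 2*np.pi, 1025)[:-1]   # periodic trapezoid rule
z = z0 + r*np.exp(1j*theta)
dz = 1j*r*np.exp(1j*theta)
contour_integral = 2*np.pi * np.mean(f(z)/(z - z0) * dz)
approx = contour_integral / (2j*np.pi)
print(abs(approx - f(z0)))   # essentially machine precision
```

For a periodic integrand, the trapezoid rule converges spectrally fast, which is why so few nodes already give machine-precision agreement.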
Figure 5.1.2. Convergent power series on D_r(z_0)
Suppose f \in C^1(\overline\Omega) is holomorphic, \overline{D}_r \subset \Omega, where D_r is the disk of radius r centered at z_0, and suppose z \in D_r. See Figure 5.1.2. Then Theorem 5.1.3 implies

(5.1.12)  f(z) = \frac{1}{2\pi i} \int_{\partial D_r} \frac{f(\zeta)}{(\zeta - z_0) - (z - z_0)}\, d\zeta.

We have the infinite series expansion

(5.1.13)  \frac{1}{(\zeta - z_0) - (z - z_0)} = \frac{1}{\zeta - z_0} \sum_{n=0}^\infty \Big( \frac{z - z_0}{\zeta - z_0} \Big)^n,

which is valid as long as |z - z_0| < |\zeta - z_0|. Hence, given |z - z_0| < r, this series is uniformly convergent for \zeta \in \partial D_r, and we have

(5.1.14)  f(z) = \frac{1}{2\pi i} \sum_{n=0}^\infty \int_{\partial D_r} \frac{f(\zeta)}{\zeta - z_0} \Big( \frac{z - z_0}{\zeta - z_0} \Big)^n\, d\zeta.
u(p) - u(q) = \frac{1}{V(B_r(0))} \Big[ \int_{B_r(p)} u(x)\, dx - \int_{B_r(q)} u(x)\, dx \Big].

Note that V(B_r(0)) = C_n r^n, where C_n is evaluated in Exercise 2 of §3.2. Thus

(5.1.27)  |u(p) - u(q)| \le \frac{1}{C_n r^n} \int_{\Delta(p,q,r)} |u(x)|\, dx,

where

(5.1.28)  \Delta(p,q,r) = B_r(p) \,\triangle\, B_r(q) = \big( B_r(p) \setminus B_r(q) \big) \cup \big( B_r(q) \setminus B_r(p) \big).

See Figure 5.1.3. Note that if a = |p - q|, then \Delta(p,q,r) \subset B_{r+a}(p) \setminus B_{r-a}(p); hence

(5.1.29)  V(\Delta(p,q,r)) \le C(p,q)\, r^{n-1}, \qquad r \ge 1.

It follows that if |u(x)| \le M for all x \in \mathbb{R}^n, then

(5.1.30)  |u(p) - u(q)| \le M C_n^{-1} C(p,q)\, r^{-1}, \qquad \forall\, r \ge 1.

Taking r \to \infty, we obtain u(p) - u(q) = 0, so u is constant. □
We will now use Liouville's theorem to prove the fundamental theorem of algebra.

Theorem 5.1.9. If p(z) = a_n z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0 is a polynomial of degree n \ge 1 (a_n \ne 0), then p(z) must vanish somewhere in \mathbb{C}.
Figure 5.1.3. Two large balls, with centers at p and q
Proof. Consider

(5.1.31)  f(z) = \frac{1}{p(z)}.

If p(z) does not vanish anywhere in \mathbb{C}, then f(z) is holomorphic on all of \mathbb{C}. (See Exercise 9 below.) On the other hand,

(5.1.32)  f(z) = \frac{1}{z^n} \cdot \frac{1}{a_n + a_{n-1} z^{-1} + \cdots + a_0 z^{-n}},

so

(5.1.33)  |f(z)| \to 0, \quad \text{as } |z| \to \infty.

Thus f is bounded on \mathbb{C}, if p(z) has no roots. By Proposition 5.1.8, f(z) must be constant, which is impossible, so p(z) must have a complex root. □
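The growth estimate (5.1.32) also has a degree-theoretic reading, anticipating §5.3: on a large circle, p(z) behaves like a_n z^n, so p winds n times around 0, which forces a root inside. A numerical sketch (the polynomial is an arbitrary choice) counting that winding number:

```python
import numpy as np

# Winding-number illustration of Theorem 5.1.9: on |z| = R with R large,
# p(z) = z^3 - 2z + 7 winds deg(p) = 3 times about the origin.
# (Illustrative sketch; the polynomial is an arbitrary choice.)
def p(z):
    return z**3 - 2*z + 7

R = 10.0
theta = np.linspace(0.0, 2*np.pi, 4001)
w = p(R*np.exp(1j*theta))
assert np.all(np.abs(w) > 0)              # p has no zero on the circle
increments = np.angle(w[1:] / w[:-1])     # phase steps, each in (-pi, pi]
winding = int(round(increments.sum() / (2*np.pi)))
print(winding)   # 3
```

Since the closed curve returns to its starting point, the summed phase increments are an exact multiple of 2\pi, and the sampling is fine enough that each step stays below \pi.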
From the fact that every holomorphic function f on \Omega \subset \mathbb{R}^2 is harmonic, it follows that its real and imaginary parts are harmonic. This result has a converse. Let u \in C^2(\Omega) be harmonic. Consider the 1-form

(5.1.34)  \alpha = -\frac{\partial u}{\partial y}\, dx + \frac{\partial u}{\partial x}\, dy.

We have d\alpha = (\Delta u)\, dx \wedge dy, so \alpha is closed if and only if u is harmonic. Now, if \Omega is diffeomorphic to a disk, it follows from Proposition 4.3.3 that \alpha is exact on \Omega whenever it is closed, so, in such a case,

(5.1.35)  \Delta u = 0 \text{ on } \Omega \Longrightarrow \exists\, v \in C^1(\Omega) \text{ s.t. } \alpha = dv.

In other words,

(5.1.36)  \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}, \qquad \frac{\partial v}{\partial y} = \frac{\partial u}{\partial x}.

This is precisely the Cauchy-Riemann equation (5.1.2) for f = u + iv, so we have the following.

Proposition 5.1.10. If \Omega \subset \mathbb{R}^2 is diffeomorphic to a disk and u \in C^2(\Omega) is harmonic, then u is the real part of a holomorphic function on \Omega.

The function v (which is unique up to an additive constant) is called the harmonic conjugate of u.
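The construction (5.1.34)-(5.1.36) can be carried out explicitly. A symbolic sketch (the harmonic u below is an arbitrary choice) builds v by integrating (5.1.36) along axis segments from the origin:

```python
import sympy as sp

# Construct the harmonic conjugate v of u = Re((x+iy)^3) = x^3 - 3xy^2
# by integrating (5.1.36): v_x = -u_y, v_y = u_x, along a path that
# runs along the x-axis and then vertically.
x, y, s, t = sp.symbols('x y s t', real=True)
u = x**3 - 3*x*y**2
assert sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)) == 0  # u harmonic

ux, uy = sp.diff(u, x), sp.diff(u, y)
v = (sp.integrate(-uy.subs({x: s, y: 0}), (s, 0, x))
     + sp.integrate(ux.subs(y, t), (t, 0, y)))
v = sp.expand(v)

# v satisfies the Cauchy-Riemann system (5.1.36) ...
assert sp.simplify(sp.diff(v, x) + uy) == 0
assert sp.simplify(sp.diff(v, y) - ux) == 0
# ... and u + iv is the holomorphic function z^3
assert sp.expand((u + sp.I*v) - (x + sp.I*y)**3) == 0
print(v)
```

The path-independence of this line integral is exactly the exactness of \alpha guaranteed by (5.1.35) on a disk.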
We close this section with a brief mention of holomorphic functions on a domain e n . We say f E C 1 ( 0) is holomorphic provided it satisfies
af aXj
(5.1.37)
=
= � aafYj , i
1 :S j :S n.
Suppose Z E O, Z (Zl , " " Zn ). Suppose l E O whenever Iz  I I < r. Then, by successively applying Cauchy's integral formula (5.1.9) to each complex variable Zj , we have that (5.1.38) Yn
Yl
where Yj is any simple counterclockwise curve about Zj in e with the property that Ilj  zj l < rlyn for all Ij E Yj· Consequently, if p E e n and 0 contains the polydisc D {z E en : IZj  Pj I :S 0, \if j},
=
then, for z E D, the interior of D, we have
=
fez) (27[i)n (5.1.39)
J ... J f(I)[(11  PI )  (Zl  P1 )f1 . . . [(In  Pn )  (Zn  Pn )r 1 dl1 . . . dIn '
= (I E e : II  Pj I = oj. Then, parallel to (5.1.12)(5.1.15), we have (5.1.40 ) fez) = � (Z for Z E D, where a = ( a1 , " " an ) is a multiindex, (z = (Zl  P1 )a, . . . (zn  Pn) ,
where Cj
a;::': O
 p)a
ca
_
a p) ,
an
as in (2.1.13), and (5.1.41)
ca
= (27[ )n J ... J f(I)(11  P1 )a, 1 . . . (In  Pn)an1 dl1 . . . dIn· i
en
C1
5.
194
Applications ofthe GaussGreenStokesformula
Thus holomorphic functions on open domains in \mathbb{C}^n have convergent power series expansions.

We refer to [2], [26], and [51] for more material on holomorphic functions of one complex variable, and to [31] for material on holomorphic functions of several complex variables. We will return to harmonic functions in §7.4 and Appendix A.6. For more information on harmonic functions, one can see [30] and [46].

Exercises
1. Let f_k : \Omega \to \mathbb{C} be holomorphic on an open set \Omega \subset \mathbb{C}. Assume f_k \to f and \nabla f_k \to \nabla f locally uniformly in \Omega. Show that f : \Omega \to \mathbb{C} is holomorphic. See Exercise 26 for a stronger result.
2. Assume

(5.1.42)  f(z) = \sum_{k=0}^\infty a_k z^k

is absolutely convergent for |z| < R. Deduce from Proposition 2.1.10 and Exercise 1 above that f is holomorphic on |z| < R, and that

(5.1.43)  f'(z) = \sum_{k=1}^\infty k a_k z^{k-1}, \qquad \text{for } |z| < R.

3. As in (2.3.104), the exponential function e^z is defined by

(5.1.44)  e^z = \sum_{k=0}^\infty \frac{z^k}{k!}.

Deduce from Exercise 2 that e^z is holomorphic in z.

4. By (2.3.105), we have

e^{z+h} = e^z e^h.

Use this to show directly from (5.1.1) that e^z is complex differentiable and (d/dz)e^z = e^z on \mathbb{C}, giving another proof that e^z is holomorphic on \mathbb{C}.
Hint. Use the power series for e^h to show that

\lim_{h \to 0} \frac{e^h - 1}{h} = 1.

5. For another approach to the fact that e^z is holomorphic, use

e^{x+iy} = e^x (\cos y + i \sin y)

and (2.3.104) to verify that e^z satisfies the Cauchy-Riemann equation.
6. For z \in \mathbb{C}, set

(5.1.45)  \cos z = \frac{1}{2}\big( e^{iz} + e^{-iz} \big), \qquad \sin z = \frac{1}{2i}\big( e^{iz} - e^{-iz} \big).

Show that these functions agree with the definitions of \cos t and \sin t given in (2.3.106)-(2.3.108), for z = t \in \mathbb{R}. Show that \cos z and \sin z are holomorphic in z \in \mathbb{C}. Show that

(5.1.46)  \frac{d}{dz} \sin z = \cos z, \qquad \frac{d}{dz} \cos z = -\sin z,

and

(5.1.47)  \cos^2 z + \sin^2 z = 1,

for all z \in \mathbb{C}.
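The identities (5.1.45)-(5.1.47) admit a quick numerical sanity check at a few complex points (the sample points are arbitrary choices):

```python
import cmath

# Check (5.1.45) and (5.1.47): cos z = (e^{iz}+e^{-iz})/2,
# sin z = (e^{iz}-e^{-iz})/(2i), and cos^2 z + sin^2 z = 1, for complex z.
for z in (0.3 + 0.7j, -1.2 + 0.4j, 2.0 - 3.0j):
    c = (cmath.exp(1j*z) + cmath.exp(-1j*z)) / 2
    s = (cmath.exp(1j*z) - cmath.exp(-1j*z)) / 2j
    assert abs(c - cmath.cos(z)) < 1e-12
    assert abs(s - cmath.sin(z)) < 1e-12
    assert abs(c*c + s*s - 1) < 1e-12
print("ok")
```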
7. Let O, \Omega be open in \mathbb{C}. If f is holomorphic on O, with range in \Omega, and g is holomorphic on \Omega, show that h = g \circ f is holomorphic on O, and h'(z) = g'(f(z)) f'(z).
Hint. See the proof of the chain rule in §2.1.

8. Let \Omega \subset \mathbb{C} be a connected open set, and let f be holomorphic on \Omega.
(a) Show that if f(z_j) = a for distinct z_j \in \Omega and z_j \to z_0 \in \Omega, then f(z) = a for z in a neighborhood of z_0.
Hint. Use the power series expansion (5.1.15).
(b) Show that if f = a on a nonempty open set O \subset \Omega, then f = a on \Omega.
Hint. Let U \subset \Omega denote the interior of the set of points where f = a. Use part (a) to show that U is both open and closed in \Omega.

9. Let \Omega = \mathbb{C} \setminus (-\infty, 0] and define \log : \Omega \to \mathbb{C} by

(5.1.48)  \log z = \int_{\gamma_z} \frac{1}{\zeta}\, d\zeta,

where \gamma_z is a path from 1 to z in \Omega. Use Theorem 5.1.2 to show that this is independent of the choice of such path. Show that it yields a holomorphic function on \mathbb{C} \setminus (-\infty, 0], satisfying

\frac{d}{dz} \log z = \frac{1}{z}, \qquad z \in \mathbb{C} \setminus (-\infty, 0].

10. Taking \log z as in Exercise 9, show that

(5.1.49)  e^{\log z} = z, \qquad z \in \mathbb{C} \setminus (-\infty, 0].

Hint. If \varphi(z) denotes the left side, show that \varphi(1) = 1 and \varphi'(z) = \varphi(z)/z. Use uniqueness results from §2.3 to deduce that \varphi(x) = x for x \in (0, \infty), and from there deduce that \varphi(z) = z, using Exercise 8.
Alternative. Apply d/dz to show that \log e^z = z, for z in some neighborhood of 0. Deduce from this (and Exercise 3 of §2.2) that (5.1.49) holds for z in some neighborhood of 1. Then get it for all z \in \mathbb{C} \setminus (-\infty, 0], using Exercise 8.
11. With \Omega = \mathbb{C} \setminus (-\infty, 0] as in Exercise 9 and a \in \mathbb{C}, define z^a for z \in \Omega by

(5.1.50)  z^a = e^{a \log z}.

Show that this is holomorphic on \Omega and

(5.1.51)  \frac{d}{dz} z^a = a z^{a-1}.

12. Let O = \mathbb{C} \setminus \{ [1, \infty) \cup (-\infty, -1] \}, and define As : O \to \mathbb{C} by

As(z) = \int_{\sigma_z} (1 - \zeta^2)^{-1/2}\, d\zeta,

where \sigma_z is a path from 0 to z in O. Show that this is independent of the choice of such a path, and that it yields a holomorphic function on O.

13. With As as in Exercise 12, show that As(\sin z) = z, for z in some neighborhood of 0. (Hint. Apply d/dz.) From here, show that \sin(As(z)) = z, \forall\, z \in O. Thus we write

(5.1.52)  As(z) = \sin^{-1} z.

Compare (2.3.113).

14. Look again at Exercise 4 in §2.1.
15. Look again at Exercises 3-5 in §2.2. Write the result as an inverse function theorem for holomorphic maps.

16. Differentiate (5.1.9) to show that, in the setting of Theorem 5.1.3, for k \in \mathbb{N}, we have the derivative Cauchy integral formula,

(5.1.53)  f^{(k)}(z_0) = \frac{k!}{2\pi i} \int_{\partial\Omega} \frac{f(z)}{(z - z_0)^{k+1}}\, dz.

Show that this also follows from (5.1.15).

17. Assume f is holomorphic on \mathbb{C}, and set

M(z_0, R) = \sup_{|z - z_0| \le R} |f(z)|.
Use the k = 1 case of Exercise 16 to show that

|f'(z_0)| \le \frac{M(z_0, R)}{R}, \qquad \forall\, R \in (0, \infty).

18. In the setting of Exercise 17, assume f is bounded, say |f(z)| \le M for all z \in \mathbb{C}. Deduce that f'(z_0) = 0 for all z_0 \in \mathbb{C}, and in that way obtain another proof of Liouville's theorem in the setting of holomorphic functions on \mathbb{C}. (Note that Proposition 5.1.8 is more general.)

The next four exercises deal with the function

(5.1.54)  G(z) = \int_{-\infty}^\infty e^{-t^2 + tz}\, dt, \qquad z \in \mathbb{C}.

19. Show that G is continuous on \mathbb{C}.

20. Show that G is holomorphic on \mathbb{C}, with

G'(z) = \int_{-\infty}^\infty t\, e^{-t^2 + tz}\, dt.

Hint. Write

\frac{1}{h}\big[ G(z + h) - G(z) \big] = \int_{-\infty}^\infty e^{-t^2 + tz}\, \frac{1}{h}\big( e^{th} - 1 \big)\, dt,

and

e^w = 1 + w + R(w), \qquad |R(w)| \le C|w|^2 e^{|w|},

so

\frac{1}{h}\big( e^{th} - 1 \big) = t + \frac{1}{h} R(th).

21. Show that, for x \in \mathbb{R},

G(x) = \sqrt{\pi}\, e^{x^2/4}.

Hint. Write

G(x) = e^{x^2/4} \int_{-\infty}^\infty e^{-(t - x/2)^2}\, dt,

and make a change of variable in the integral.

22. Deduce from Exercises 21 and 8 that

(5.1.55)  G(z) = \sqrt{\pi}\, e^{z^2/4}, \qquad \forall\, z \in \mathbb{C}.
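Identity (5.1.55) is easy to test by quadrature at a complex point (the truncation interval and sample point below are arbitrary choices):

```python
import numpy as np

# Quadrature check of (5.1.55): G(z) = int e^{-t^2+tz} dt = sqrt(pi) e^{z^2/4},
# with the line truncated to [-8, 8]; for this z the discarded tail is
# far below machine precision.
z = 0.8 + 0.6j
t = np.linspace(-8.0, 8.0, 4001)
h = t[1] - t[0]
vals = np.exp(-t**2 + t*z)
G = h * (vals.sum() - 0.5*(vals[0] + vals[-1]))   # trapezoid rule
exact = np.sqrt(np.pi) * np.exp(z**2 / 4)
print(abs(G - exact))   # tiny
```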
The next exercises deal with the Gamma function,

(5.1.56)  \Gamma(z) = \int_0^\infty e^{-t} t^{z-1}\, dt,

defined for real z > 0 in (3.2.33).
23. Show that the integral is absolutely convergent for \mathrm{Re}\, z > 0 and defines \Gamma(z) as a holomorphic function on \{ z \in \mathbb{C} : \mathrm{Re}\, z > 0 \}.

24. Extend the identity (3.2.36), i.e.,

(5.1.57)  \Gamma(z + 1) = z\,\Gamma(z),

to \mathrm{Re}\, z > 0.

25. Use (5.1.57) to extend \Gamma to be holomorphic on \mathbb{C} \setminus \{0, -1, -2, -3, \dots\}.

26. Use the result of Exercise 16 to show that if f_\nu are holomorphic on an open set \Omega \subset \mathbb{C} and f_\nu \to f uniformly on compact subsets of \Omega, then f is holomorphic on \Omega and f_\nu' \to f' uniformly on compact subsets.

27. The Riemann zeta function \zeta(z) is defined for \mathrm{Re}\, z > 1 by

(5.1.58)  \zeta(z) = \sum_{k=1}^\infty k^{-z}.
Show that \zeta(z) is holomorphic on \{ z \in \mathbb{C} : \mathrm{Re}\, z > 1 \}.

The following exercises deal with harmonic functions on domains in \mathbb{R}^n.

28. Using the formula (4.4.26) for the Laplace operator together with the formula (3.2.26) for the metric tensor on \mathbb{R}^n in spherical polar coordinates x = r\omega, x \in \mathbb{R}^n, r = |x|, \omega \in S^{n-1}, show that, for u \in C^2(\Omega), \Omega \subset \mathbb{R}^n,

(5.1.59)  \Delta u(r\omega) = \frac{\partial^2}{\partial r^2} u(r\omega) + \frac{n-1}{r} \frac{\partial}{\partial r} u(r\omega) + \frac{1}{r^2} \Delta_S u(r\omega),

where \Delta_S is the Laplace operator on S^{n-1}.

29. If f(x) = \varphi(|x|) on \mathbb{R}^n, show that

(5.1.60)  \Delta f(x) = \varphi''(|x|) + \frac{n-1}{|x|} \varphi'(|x|).

In particular, show that

(5.1.61)  |x|^{-(n-2)} \text{ is harmonic on } \mathbb{R}^n \setminus 0, \ \text{if } n \ge 3,

and

(5.1.62)  \log |x| \text{ is harmonic on } \mathbb{R}^2 \setminus 0.

If O, \Omega are open in \mathbb{R}^n, a smooth map \varphi : O \to \Omega is said to be conformal provided the matrix function G(x) = D\varphi(x)^t D\varphi(x) is a multiple of the identity, G(x) = \gamma(x) I. Recall formula (3.2.2).

30. Suppose n = 2 and \varphi preserves orientation. Show that \varphi (pictured as a function \varphi : O \to \mathbb{C}) is conformal if and only if it is holomorphic. If \varphi reverses orientation, show that \varphi is conformal \iff \bar\varphi is holomorphic (we say \varphi is anti-holomorphic).
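The harmonicity claims (5.1.61)-(5.1.62) of Exercise 29 can be verified symbolically; a sketch for n = 3 and n = 2:

```python
import sympy as sp

# Verify (5.1.61)-(5.1.62): |x|^{-(n-2)} is harmonic on R^n \ 0 for n = 3,
# and log|x| is harmonic on R^2 \ 0.
x, y, z = sp.symbols('x y z', positive=True)

r3 = sp.sqrt(x**2 + y**2 + z**2)
lap3 = sp.diff(1/r3, x, 2) + sp.diff(1/r3, y, 2) + sp.diff(1/r3, z, 2)
assert sp.simplify(lap3) == 0

r2 = sp.sqrt(x**2 + y**2)
lap2 = sp.diff(sp.log(r2), x, 2) + sp.diff(sp.log(r2), y, 2)
assert sp.simplify(lap2) == 0
print("harmonic")
```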
31. If O and \Omega are open in \mathbb{R}^2 and u is harmonic on \Omega, show that u \circ \varphi is harmonic on O, whenever \varphi : O \to \Omega is a smooth conformal map.
Hint. Use Exercise 7 and Proposition 5.1.10.

The following exercises will present an alternative approach to the proof of Proposition 5.1.5 (the mean value property of harmonic functions). For this, let B_R = \{ x \in \mathbb{R}^n : |x| \le R \}. Assume u is continuous on B_R and is C^2 and harmonic on the interior of B_R. We assume n \ge 2.
32. Given g \in SO(n), show that u_g(x) = u(gx) is harmonic on the interior of B_R.
Hint. See Exercise 7 of §4.4.

33. As in Exercise 27 of §3.2, define Au \in C(B_R) by

Au(x) = \int_{SO(n)} u(gx)\, dg.

Thus Au(x) is a radial function,

Au(x) = Su(|x|), \qquad Su(r) = \frac{1}{A_{n-1}} \int_{S^{n-1}} u(r\omega)\, dS(\omega).

Deduce from Exercise 32 above that Au is harmonic on the interior of B_R.

34. Use Exercise 29 to show that \varphi(r) = Su(r) satisfies

\varphi''(r) + \frac{n-1}{r} \varphi'(r) = 0,

for r \in (0, R). Deduce from this differential equation that there exist constants C_0 and C_1 such that

\varphi(r) = C_0 + C_1 r^{-(n-2)} \ \text{if } n \ge 3, \qquad \varphi(r) = C_0 + C_1 \log r \ \text{if } n = 2.

Then show that, since Au(x) does not blow up at x = 0, C_1 = 0. Hence Au(x) = C_0, \forall\, x \in B_R.

35. Note that Au(0) = u(0). Deduce that for each r \in (0, R],

(5.1.63)  u(0) = Su(r) = \frac{1}{A_{n-1}} \int_{S^{n-1}} u(r\omega)\, dS(\omega).
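The mean value property, centered at a general point, is easy to test numerically in the plane (the harmonic u, center p, and radii below are arbitrary illustrative choices):

```python
import numpy as np

# Mean value property: for the harmonic function u(x, y) = x^2 - y^2 + 3xy,
# the average of u over any circle about p equals u(p).
def u(x, y):
    return x**2 - y**2 + 3*x*y

p = np.array([0.7, -0.4])
for r in (0.5, 1.0, 2.0):
    theta = np.linspace(0.0, 2*np.pi, 4097)[:-1]
    avg = np.mean(u(p[0] + r*np.cos(theta), p[1] + r*np.sin(theta)))
    assert abs(avg - u(p[0], p[1])) < 1e-10
print("mean value property holds")
```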
5.2. Differential forms, homotopy, and the Lie derivative
Let X and Y be smooth surfaces. Two smooth maps f_0, f_1 : X \to Y are said to be smoothly homotopic provided there is a smooth F : [0, 1] \times X \to Y such that F(0, x) = f_0(x) and F(1, x) = f_1(x). The following result illustrates the significance of maps being homotopic.
Proposition 5.2.1. Assume X is a compact, oriented, k-dimensional surface and \alpha \in \Lambda^k(Y) is closed, i.e., d\alpha = 0. If f_0, f_1 : X \to Y are smoothly homotopic, then

(5.2.1)  \int_X f_0^*\alpha = \int_X f_1^*\alpha.

In fact, with [0, 1] \times X = \Omega, this is a special case of the following.
Proposition 5.2.2. Assume \Omega is a smoothly bounded, compact, oriented (k+1)-dimensional surface and that \alpha \in \Lambda^k(Y) is closed. If F : \Omega \to Y is a smooth map, then

(5.2.2)  \int_{\partial\Omega} F^*\alpha = 0.

Proof. The Stokes theorem gives

(5.2.3)  \int_{\partial\Omega} F^*\alpha = \int_\Omega dF^*\alpha = 0,

since dF^*\alpha = F^* d\alpha and, by hypothesis, d\alpha = 0. □
Proposition 5.2.2 is one generalization of Proposition 5.2.1. Here is another.

Proposition 5.2.3. Assume X is a k-dimensional surface and \alpha \in \Lambda^k(Y) is closed. If f_0, f_1 : X \to Y are smoothly homotopic, then f_0^*\alpha - f_1^*\alpha is exact, i.e.,

(5.2.4)  f_0^*\alpha - f_1^*\alpha = d\beta,

for some \beta \in \Lambda^{k-1}(X).

Proof. Take a smooth F : \mathbb{R} \times X \to Y such that F(j, x) = f_j(x), j = 0, 1. Consider

(5.2.5)  \tilde\alpha = F^*\alpha \in \Lambda^k(\mathbb{R} \times X).

Note that

(5.2.6)  d\tilde\alpha = F^* d\alpha = 0.

Now consider

\varphi_s : \mathbb{R} \times X \to \mathbb{R} \times X, \qquad \varphi_s(t, x) = (s + t, x).

We claim that

(5.2.7)  \tilde\alpha - \varphi_1^*\tilde\alpha = d\tilde\beta,

for some \tilde\beta \in \Lambda^{k-1}(\mathbb{R} \times X). Now take

(5.2.8)  \beta = j^*\tilde\beta, \qquad j : X \to \mathbb{R} \times X, \quad j(x) = (0, x).

We have F \circ j = f_0, F \circ \varphi_1 \circ j = f_1, so it follows that

(5.2.9)  f_0^*\alpha - f_1^*\alpha = j^*\tilde\alpha - j^*\varphi_1^*\tilde\alpha = j^* d\tilde\beta = d\, j^*\tilde\beta,

given (5.2.7), which yields (5.2.4) with \beta as in (5.2.8). □
It remains to prove (5.2.7), under the hypothesis that d\tilde\alpha = 0. The following result gives this. The formula (5.2.10) uses the interior product, defined by (4.2.4)-(4.2.5).

Lemma 5.2.4. If \tilde\alpha \in \Lambda^k(\mathbb{R} \times X) and \varphi_s is as in (5.2.6), then

(5.2.10)  \frac{d}{ds} \varphi_s^*\tilde\alpha = \varphi_s^* \big\{ d(\tilde\alpha \rfloor \partial_t) + (d\tilde\alpha) \rfloor \partial_t \big\}.

Hence, if d\tilde\alpha = 0, (5.2.7) holds with

(5.2.11)  \tilde\beta = -\int_0^1 (\varphi_s^*\tilde\alpha) \rfloor \partial_t\, ds.

Proof. Since \varphi_{s+\tau} = \varphi_s \varphi_\tau = \varphi_\tau \varphi_s, it suffices to show that (5.2.10) holds at s = 0. It also suffices to work in local coordinates on X. Say

(5.2.12)  \tilde\alpha = \sum_j a_j^\#(t, x)\, dx_{j_1} \wedge \cdots \wedge dx_{j_k} + \sum_j a_j^b(t, x)\, dt \wedge dx_{j_1} \wedge \cdots \wedge dx_{j_{k-1}}.

We have \varphi_s^*\tilde\alpha given by a similar formula, with the coefficients replaced by a_j^\#(t + s, x) and a_j^b(t + s, x), hence

(5.2.13)  \frac{d}{ds} \varphi_s^*\tilde\alpha \Big|_{s=0} = \sum_j \partial_t a_j^\#(t, x)\, dx_{j_1} \wedge \cdots \wedge dx_{j_k} + \sum_j \partial_t a_j^b(t, x)\, dt \wedge dx_{j_1} \wedge \cdots \wedge dx_{j_{k-1}}.

Meanwhile,

(5.2.14)  \tilde\alpha \rfloor \partial_t = \sum_j a_j^b(t, x)\, dx_{j_1} \wedge \cdots \wedge dx_{j_{k-1}},

so

(5.2.15)  d(\tilde\alpha \rfloor \partial_t) = \sum_j \partial_t a_j^b(t, x)\, dt \wedge dx_{j_1} \wedge \cdots \wedge dx_{j_{k-1}} + \sum_{j,\nu} \partial_{x_\nu} a_j^b(t, x)\, dx_\nu \wedge dx_{j_1} \wedge \cdots \wedge dx_{j_{k-1}}.

A similar calculation yields

(5.2.16)  (d\tilde\alpha) \rfloor \partial_t = \sum_j \partial_t a_j^\#(t, x)\, dx_{j_1} \wedge \cdots \wedge dx_{j_k} - \sum_{j,\nu} \partial_{x_\nu} a_j^b(t, x)\, dx_\nu \wedge dx_{j_1} \wedge \cdots \wedge dx_{j_{k-1}}.

Comparison of (5.2.15)-(5.2.16) with (5.2.13) yields (5.2.10) at s = 0, proving Lemma 5.2.4. □
The following consequence of Proposition 5.2.3 is the Poincaré lemma.

Proposition 5.2.5. Let X be a smooth k-dimensional surface. Assume the identity map I : X \to X is smoothly homotopic to a constant map K : X \to X, satisfying K(x) \equiv p. Then, for all \ell \in \{1, \dots, k\},

(5.2.17)  \alpha \in \Lambda^\ell(X), \ d\alpha = 0 \Longrightarrow \alpha \text{ is exact}.

Proof. By Proposition 5.2.3, \alpha - K^*\alpha is exact. However, K^*\alpha = 0. □

Proposition 5.2.5 applies to any open X \subset \mathbb{R}^k that is star-shaped, via

(5.2.18)  D_s : X \to X \text{ for } s \in [0, 1], \qquad D_s(x) = sx.

Thus, for any open star-shaped X \subset \mathbb{R}^k, each closed \alpha \in \Lambda^\ell(X) is exact.

We next present an important generalization of Lemma 5.2.4. Let \Omega be a smooth n-dimensional surface. If \alpha \in \Lambda^k(\Omega) and X is a vector field on \Omega, generating a flow \mathcal{F}_X^t, the Lie derivative \mathcal{L}_X\alpha is defined to be

(5.2.19)  \mathcal{L}_X \alpha = \frac{d}{dt} (\mathcal{F}_X^t)^*\alpha \Big|_{t=0}.
Note the similarity to the definition (2.3.82) of \mathcal{L}_X Y for a vector field Y, for which there was the alternative formula (2.3.85). The following useful result is known as Cartan's formula for the Lie derivative.

Proposition 5.2.6. We have

(5.2.20)  \mathcal{L}_X \alpha = d(\alpha \rfloor X) + (d\alpha) \rfloor X.

Proof. We can assume \Omega is an open subset of \mathbb{R}^n. First we compare both sides in the special case X = \partial/\partial x_\ell = \partial_\ell. Note that

(5.2.21)  (\mathcal{F}_{\partial_\ell}^t)^*\alpha = \sum_j a_j(x + t e_\ell)\, dx_{j_1} \wedge \cdots \wedge dx_{j_k},

so

(5.2.22)  \mathcal{L}_{\partial_\ell} \alpha = \sum_j \partial_{x_\ell} a_j(x)\, dx_{j_1} \wedge \cdots \wedge dx_{j_k} = \partial_\ell \alpha.

To evaluate the right side of (5.2.20), with X = \partial_\ell, we could parallel the calculations (5.2.14)-(5.2.16). Alternatively, we can use (4.2.12) to write this as

(5.2.23)  d(\iota_\ell \alpha) + \iota_\ell\, d\alpha = \sum_{j=1}^n \big( \partial_j \Lambda_j \iota_\ell + \iota_\ell \partial_j \Lambda_j \big) \alpha.

Using the commutativity of \partial_j with \Lambda_j and with \iota_\ell, and the anticommutativity relations (4.2.8), we see that the right side of (5.2.23) is \partial_\ell \alpha, which coincides with (5.2.22). Thus the proposition holds for X = \partial/\partial x_\ell.

Now we prove the proposition in general, for a smooth vector field X on \Omega. It is to be verified at each point x_0 \in \Omega. If X(x_0) \ne 0, we can apply Theorem 2.3.7 to choose a coordinate system about x_0 so that X = \partial/\partial x_1, and use the calculation above. This shows that the desired identity holds on the set \{x_0 \in \Omega : X(x_0) \ne 0\}, and by continuity it holds on the closure of this set. However, if x_0 has a neighborhood on which X vanishes, it is clear that \mathcal{L}_X \alpha = 0 near x_0 and also \alpha \rfloor X and (d\alpha) \rfloor X vanish near x_0. This completes the proof. □
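Cartan's formula (5.2.20) can also be confirmed symbolically. The sketch below treats a general 1-form on \mathbb{R}^2, comparing the right side of (5.2.20) with the standard coordinate expression for the Lie derivative of a 1-form, (\mathcal{L}_X\alpha)_j = X^k \partial_k a_j + a_k \partial_j X^k (a known identity, used here as the reference):

```python
import sympy as sp

# Verify Cartan's formula (5.2.20) for alpha = a1 dx + a2 dy and
# X = (X1, X2) with arbitrary smooth coefficients on R^2.
x, y = sp.symbols('x y')
a1, a2, X1, X2 = [sp.Function(n)(x, y) for n in ('a1', 'a2', 'X1', 'X2')]

# Right side of (5.2.20): d(alpha . X) + (d alpha) . X
iX_alpha = a1*X1 + a2*X2                      # alpha contracted with X
curl = sp.diff(a2, x) - sp.diff(a1, y)        # d alpha = curl dx ^ dy
rhs_dx = sp.diff(iX_alpha, x) - curl*X2       # dx-component
rhs_dy = sp.diff(iX_alpha, y) + curl*X1       # dy-component

# Coordinate formula for (L_X alpha)
lhs_dx = X1*sp.diff(a1, x) + X2*sp.diff(a1, y) + a1*sp.diff(X1, x) + a2*sp.diff(X2, x)
lhs_dy = X1*sp.diff(a2, x) + X2*sp.diff(a2, y) + a1*sp.diff(X1, y) + a2*sp.diff(X2, y)

assert sp.simplify(rhs_dx - lhs_dx) == 0
assert sp.simplify(rhs_dy - lhs_dy) == 0
print("Cartan's formula verified for 1-forms on R^2")
```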
From (5.2.19) and the identity \mathcal{F}_X^{s+t} = \mathcal{F}_X^s \mathcal{F}_X^t, it follows that

(5.2.24)  \frac{d}{dt} (\mathcal{F}_X^t)^*\alpha = (\mathcal{F}_X^t)^* \mathcal{L}_X \alpha.

It is useful to generalize this. Let F_t be a smooth family of diffeomorphisms of M into M. Define vector fields X_t on F_t(M) by

(5.2.25)  \frac{d}{dt} F_t(x) = X_t(F_t(x)).

Then, given \alpha \in \Lambda^k(M),

(5.2.26)  \frac{d}{dt} F_t^*\alpha = F_t^* \mathcal{L}_{X_t} \alpha = F_t^* \big[ d(\alpha \rfloor X_t) + (d\alpha) \rfloor X_t \big].

In particular, if \alpha is closed and F_t are diffeomorphisms for 0 \le t \le 1, then

(5.2.27)  F_1^*\alpha - F_0^*\alpha = d\beta, \qquad \beta = \int_0^1 F_t^*(\alpha \rfloor X_t)\, dt.

The fact that the left side of (5.2.27) is exact is a special case of Proposition 5.2.3, but the explicit formula given in (5.2.27) can be useful.

More on the divergence of a vector field. Let M \subset \mathbb{R}^n be an m-dimensional, oriented surface, with volume form \omega. Then d\omega = 0 on M, so, if X is a vector field on M,

(5.2.28)  \mathcal{L}_X \omega = d(\omega \rfloor X).

Comparison with (4.4.7) gives

(5.2.29)  (\operatorname{div} X)\, \omega = \mathcal{L}_X \omega.

This is sometimes taken as the definition of \operatorname{div} X. It readily leads to a formula for how the flow \mathcal{F}_X^t affects volumes. To get this, we start with

(5.2.30)  \frac{d}{dt} (\mathcal{F}_X^t)^*\omega = (\mathcal{F}_X^t)^* \mathcal{L}_X \omega = (\mathcal{F}_X^t)^* \big( (\operatorname{div} X)\, \omega \big).

Hence, if \Omega \subset M is a smoothly bounded domain on which the flow \mathcal{F}_X^t is defined for t \in I, then, for such t,

(5.2.31)  \frac{d}{dt} \operatorname{Vol} \mathcal{F}_X^t(\Omega) = \frac{d}{dt} \int_\Omega (\mathcal{F}_X^t)^*\omega = \int_\Omega (\mathcal{F}_X^t)^* \big( (\operatorname{div} X)\, \omega \big) = \int_{\mathcal{F}_X^t(\Omega)} (\operatorname{div} X)\, \omega.

In other words,

(5.2.32)  \frac{d}{dt} \operatorname{Vol} \mathcal{F}_X^t(\Omega) = \int_{\mathcal{F}_X^t(\Omega)} (\operatorname{div} X)\, dV.

This result is equivalent to Proposition 3.2.7, but the derivation here is substantially different. Compare also the discussion in Exercise 2 of §4.4.

Exercises
1. Show that if \alpha is a k-form and X, X_j are vector fields, then

(5.2.33)  (\mathcal{L}_X \alpha)(X_1, \dots, X_k) = X \cdot \alpha(X_1, \dots, X_k) - \sum_j \alpha(X_1, \dots, \mathcal{L}_X X_j, \dots, X_k).

Recall from (2.3.85) that \mathcal{L}_X X_j = [X, X_j], and rewrite (5.2.33) accordingly.

2. Writing (5.2.20) as (d\alpha) \rfloor X = \mathcal{L}_X \alpha - d(\alpha \rfloor X), deduce that

(5.2.34)  (d\alpha)(X_0, X_1, \dots, X_k) = (\mathcal{L}_{X_0} \alpha)(X_1, \dots, X_k) - (d\, \iota_{X_0} \alpha)(X_1, \dots, X_k).

3. In case \alpha is a 1-form, deduce from (5.2.33)-(5.2.34) that

(5.2.35)  (d\alpha)(X_0, X_1) = X_0 \cdot \alpha(X_1) - X_1 \cdot \alpha(X_0) - \alpha([X_0, X_1]).

4. Using (5.2.33)-(5.2.34) and induction on k, show that if \alpha is a k-form,

(5.2.36)  (d\alpha)(X_0, \dots, X_k) = \sum_{\ell=0}^k (-1)^\ell X_\ell \cdot \alpha(X_0, \dots, \widehat{X_\ell}, \dots, X_k) + \sum_{\ell < m} (-1)^{\ell+m} \alpha([X_\ell, X_m], X_0, \dots, \widehat{X_\ell}, \dots, \widehat{X_m}, \dots, X_k).
Here, \widehat{X_\ell} indicates that X_\ell has been omitted.

5. Show that if X is a vector field, \beta is a 1-form, and \alpha is a k-form, then

(5.2.37)  (\Lambda_\beta \iota_X + \iota_X \Lambda_\beta)\alpha = \langle X, \beta \rangle\, \alpha.

Deduce that

(5.2.38)  (df) \wedge (\alpha \rfloor X) + (df \wedge \alpha) \rfloor X = (Xf)\, \alpha.

6. Show that the definition (5.2.19) implies

(5.2.39)  \mathcal{L}_X(f\alpha) = f\, \mathcal{L}_X \alpha + (Xf)\, \alpha.

7. Show that the definition (5.2.19) implies

(5.2.40)  d\, \mathcal{L}_X \alpha = \mathcal{L}_X(d\alpha).

8. Denote the right side of (5.2.20) by L_X \alpha, i.e., set

(5.2.41)  L_X \alpha = d(\alpha \rfloor X) + (d\alpha) \rfloor X.

Show that this definition directly implies

(5.2.42)  L_X(d\alpha) = d(L_X \alpha).

9. With L_X defined by (5.2.41), show that

(5.2.43)  L_X(f\alpha) = f\, L_X \alpha + (Xf)\, \alpha.

Hint. Use (5.2.38).

10. Use the results of Exercises 6-9 to give another proof of Proposition 5.2.6, i.e., \mathcal{L}_X \alpha = L_X \alpha.
Hint. Start with \mathcal{L}_X f = Xf = \langle X, df \rangle = L_X f.

11. In Exercises 11-12, let X and Y be smooth vector fields on M, and let \alpha \in \Lambda^k(M).

13. Using Exercise 11 and (5.2.29), show that

\operatorname{div} [X, Y] = X(\operatorname{div} Y) - Y(\operatorname{div} X).
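The volume formula (5.2.32) can be checked numerically for a linear vector field, where the flow is available to high accuracy (the matrix A is an arbitrary illustrative choice):

```python
import numpy as np

# Check (5.2.32) for a linear field X(x) = Ax on a domain of unit area:
# Vol F^t(O) = det(DF^t), where DF^t is the fundamental matrix solving
# M' = A M, M(0) = I, and the right side of (5.2.32) is (tr A) Vol F^t(O),
# since div X = tr A is constant.
A = np.array([[0.2, 0.5],
              [-0.3, 0.1]])

def fundamental_matrix(t, steps=2000):
    M = np.eye(2)
    h = t / steps
    for _ in range(steps):          # RK4 for M' = A M
        k1 = A @ M
        k2 = A @ (M + h/2*k1)
        k3 = A @ (M + h/2*k2)
        k4 = A @ (M + h*k3)
        M = M + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return M

t, dt = 0.7, 1e-3
vol = lambda s: np.linalg.det(fundamental_matrix(s))
lhs = (vol(t + dt) - vol(t - dt)) / (2*dt)   # d/dt Vol F^t(O)
rhs = np.trace(A) * vol(t)                   # integral of div X over F^t(O)
print(abs(lhs - rhs))   # small (finite-difference error only)
```

The identity \det(DF^t) = e^{t\,\mathrm{tr}\,A} (Liouville's formula for linear systems) is what the general statement (5.2.32) specializes to here.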
5.3. Differential forms and degree theory
Degree theory assigns an integer, \mathrm{Deg}(f), to a smooth map f : X \to Y, when X and Y are smooth, compact, oriented surfaces of the same dimension, and Y is connected. This has many uses, as we will see. Results of §5.2 provide tools for this study. A major ingredient is the Stokes theorem.
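The simplest instance of the degree integral (5.0.3) is a map of the circle to itself, where \omega = d\theta/(2\pi) and the integral of F^*\omega reduces to a winding number. A numerical sketch (the map below, with a non-monotone phase, is an arbitrary choice) showing that the degree counts net windings regardless of local back-and-forth motion:

```python
import numpy as np

# Degree of F : S^1 -> S^1, F(e^{i t}) = e^{i(2t + 0.9 sin 5t)}, via the
# pullback of omega = d theta/(2 pi): accumulate phase increments of F
# around S^1.  The phase is locally decreasing in places, but the net
# winding (the degree) is 2.
t = np.linspace(0.0, 2*np.pi, 4001)
w = np.exp(1j*(2*t + 0.9*np.sin(5*t)))   # F evaluated along S^1
steps = np.angle(w[1:] / w[:-1])         # phase increments, in (-pi, pi]
deg = int(round(steps.sum() / (2*np.pi)))
print(deg)   # 2
```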
As a prelude to our development of degree theory, we use the calculus of differential forms to provide simple proofs of some important topological results of Brouwer. The first two results concern retractions. If $Y$ is a subset of $X$, by definition a retraction of $X$ onto $Y$ is a map $\varphi : X \to Y$ such that $\varphi(x) = x$ for all $x \in Y$.

Proposition 5.3.1. There is no smooth retraction $\varphi : B \to S^{n-1}$ of the closed unit ball $B$ in $\mathbb{R}^n$ onto its boundary $S^{n-1}$.

This is Brouwer's no-retraction theorem. In fact, it is just as easy to prove the following more general result. The approach we use is adapted from [29].

Proposition 5.3.2. If $M$ is a compact oriented $n$-dimensional surface with nonempty boundary $\partial M$, there is no smooth retraction $\varphi : M \to \partial M$.

Proof. Pick $\omega \in \Lambda^{n-1}(\partial M)$ to be the volume form on $\partial M$, so $\int_{\partial M}\omega > 0$. Now apply the Stokes theorem to $\beta = \varphi^*\omega$. If $\varphi$ is a retraction, then $\varphi\circ j(x) = x$, where $j : \partial M \hookrightarrow M$ is the natural inclusion. Hence $j^*\varphi^*\omega = \omega$, so we have
(5.3.1) $\int_{\partial M}\omega = \int_M d\varphi^*\omega$.
But $d\varphi^*\omega = \varphi^*d\omega = 0$, so the integral (5.3.1) is zero. This is a contradiction, so there can be no retraction. $\Box$

A simple consequence of this is the famous Brouwer fixed-point theorem. We first present the smooth case.

Theorem 5.3.3. If $F : B \to B$ is a smooth map on the closed unit ball in $\mathbb{R}^n$, then $F$ has a fixed point.

Proof. We are claiming that $F(x) = x$ for some $x \in B$. If not, define $\varphi(x)$ to be the endpoint of the ray from $F(x)$ to $x$, continued until it hits $\partial B = S^{n-1}$. See Figure 5.3.1. An explicit formula is
$$\varphi(x) = x + t\,(x - F(x)), \qquad t = \frac{\sqrt{b^2 + 4ac} - b}{2a},$$
$$a = \|x - F(x)\|^2, \quad b = 2x\cdot(x - F(x)), \quad c = 1 - \|x\|^2.$$
Here $t$ is picked to solve the equation $\|x + t(x - F(x))\|^2 = 1$. Note that $ac \ge 0$, so $t \ge 0$. It is clear that $\varphi$ would be a smooth retraction, contradicting Proposition 5.3.1. $\Box$
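The explicit formula for $\varphi$ is easy to probe numerically. In the sketch below (the map $F$ is our own choice; it does have a fixed point, as Theorem 5.3.3 guarantees, so we only evaluate $\varphi$ at sample points, where $F(x) \ne x$ almost surely), we check for $n = 2$ that $\varphi$ takes values in $S^{1}$ and restricts to the identity on $\partial B$.

```python
import numpy as np

def F(x):
    # a sample smooth map of the closed unit ball B in R^2 into itself (our choice)
    return 0.3*np.array([x[1], -x[0]]) + np.array([0.2, 0.1])

def phi(x):
    # endpoint of the ray from F(x) through x, as in the proof of Theorem 5.3.3
    v = x - F(x)
    a, b, c = v @ v, 2*(x @ v), 1 - x @ x
    t = (np.sqrt(b*b + 4*a*c) - b) / (2*a)
    return x + t*v

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-1, 1, 2)
    x /= max(1.0, np.linalg.norm(x))                 # a sample point of B
    assert abs(np.linalg.norm(phi(x)) - 1) < 1e-9    # phi(x) lies on S^1
th = np.linspace(0, 2*np.pi, 40)
for x in np.column_stack([np.cos(th), np.sin(th)]):
    assert np.linalg.norm(phi(x) - x) < 1e-7         # phi = id on the boundary
print("ok")
```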
[Figure 5.3.1. Purported retraction of $B$ onto $\partial B$]

Now we give the general case, using the Stone-Weierstrass theorem (established in Appendix A.5) to reduce it to Theorem 5.3.3.

Theorem 5.3.4. If $G : B \to B$ is a continuous map on the closed unit ball in $\mathbb{R}^n$, then $G$ has a fixed point.

Proof. If not, then
$$\inf_{x\in B} |G(x) - x| = \delta > 0.$$
The Stone-Weierstrass theorem implies there exists a polynomial $P$ such that $|P(x) - G(x)| < \delta/8$ for all $x \in B$. Set
$$F(x) = \Bigl(1 - \frac{\delta}{8}\Bigr)P(x).$$
Then $F : B \to B$ and $|F(x) - G(x)| < \delta/2$ for all $x \in B$, so
$$\inf_{x\in B} |F(x) - x| > \frac{\delta}{2}.$$
This contradicts Theorem 5.3.3. $\Box$
As a second precursor to degree theory, we next show that an even-dimensional sphere cannot have a smooth nonvanishing vector field.

Proposition 5.3.5. There is no smooth nonvanishing vector field on $S^n$ if $n = 2k$ is even.

Proof. If $X$ were such a vector field, we could arrange it to have unit length, so we would have $X : S^n \to S^n$ with $X(v) \perp v$ for $v \in S^n \subset \mathbb{R}^{n+1}$. Thus there would be a unique unit-speed curve $\gamma_v$ along the great circle from $v$ to $X(v)$, of length $\pi/2$. Define a smooth family of maps $F_t : S^n \to S^n$ by $F_t(v) = \gamma_v(t)$. Thus $F_0(v) = v$, $F_{\pi/2}(v) = X(v)$, and $F_\pi = A$ would be the antipodal map, $A(v) = -v$. By Proposition 5.2.3, we deduce that $A^*\omega - \omega = d\beta$ is exact, where $\omega$ is the volume form on $S^n$. Hence, by the Stokes theorem,
(5.3.2) $\int_{S^n} A^*\omega = \int_{S^n} \omega$.
Alternatively, (5.3.2) follows directly from Proposition 5.2.1. On the other hand, it is straightforward that $A^*\omega = (-1)^{n+1}\omega$, so (5.3.2) is possible only when $n$ is odd. $\Box$

Note that an important ingredient in the proof of both Propositions 5.3.2 and 5.3.5 is the existence of $n$-forms on a compact oriented $n$-dimensional surface $M$ that are not exact (though of course they are closed). We next establish the following exactness criterion, a counterpoint to the Poincaré lemma.

Proposition 5.3.6. If $M$ is a compact, connected, oriented surface of dimension $n$ and $\alpha \in \Lambda^n M$, then $\alpha = d\beta$ for some $\beta \in \Lambda^{n-1}(M)$ if and only if
(5.3.3) $\int_M \alpha = 0$.
We have already discussed the necessity of (5.3.3). To prove the sufficiency, we first look at the case $M = S^n$. In that case, any $n$-form $\alpha$ is of the form $a(x)\omega$, $a \in C^\infty(S^n)$, $\omega$ the volume form on $S^n$, with its standard metric. The group $G = SO(n+1)$ of rotations of $\mathbb{R}^{n+1}$ acts as a transitive group of isometries on $S^n$. In §3.2 we constructed the integral of functions over $SO(n+1)$, with respect to Haar measure. As seen in §3.2, we have the smooth map $\mathrm{Exp} : \mathrm{Skew}(n+1) \to SO(n+1)$, giving a diffeomorphism from a ball $\mathcal{O}$ about $0$ in $\mathrm{Skew}(n+1)$ onto an open set $U \subset SO(n+1) = G$, a neighborhood of the identity. Since $G$ is compact, we can pick a finite number of elements $\gamma_j \in G$ such that the open sets $U_j = \{\gamma_j g : g \in U\}$ cover $G$. Using Corollary A.3.9, we can pick $\xi_j \in \mathrm{Skew}(n+1)$ such that $\mathrm{Exp}\,\xi_j = \gamma_j$. Define $\Gamma_{jt} : U_j \to G$ for $0 \le t \le 1$ by
(5.3.4) $\Gamma_{jt}(\gamma_j\,\mathrm{Exp}(A)) = (\mathrm{Exp}\,t\xi_j)(\mathrm{Exp}\,tA)$, $A \in \mathcal{O}$.
Now partition $G$ into subsets $O_j$, each of whose boundaries has content zero, such that $O_j \subset U_j$. If $g \in O_j$, set $g(t) = \Gamma_{jt}(g)$. This family of elements of $SO(n+1)$ defines a family of maps $F_{gt} : S^n \to S^n$. Now by (5.2.27) we have
(5.3.5) $\alpha = g^*\alpha - dK_g(\alpha)$, $\quad K_g(\alpha) = \int_0^1 F_{gt}^*(\alpha\rfloor X_{gt})\,dt$,
for each $g \in SO(n+1)$, where $X_{gt}$ is the family of vector fields on $S^n$ associated to $F_{gt}$, as in (5.2.25). Therefore,
(5.3.6) $\alpha = \int_G g^*\alpha\,dg - d\int_G K_g(\alpha)\,dg$.
Now the first term on the right is equal to $\bar{a}\,\omega$, where $\bar{a} = \int_G a(g\cdot x)\,dg$ is a constant; in fact, the constant is
(5.3.7) $\bar{a} = \dfrac{1}{\mathrm{Vol}\,S^n}\displaystyle\int_{S^n}\alpha$.
Thus in this case (5.3.3) is precisely what serves to make (5.3.6) a representation of $\alpha$ as an exact form. This takes care of the case $M = S^n$.

For a general compact, oriented, connected $M$, proceed as follows. Cover $M$ with open sets $O_1, \dots, O_K$ such that each $O_j$ is diffeomorphic to the closed unit ball in $\mathbb{R}^n$. Set $U_1 = O_1$, and inductively enlarge each $O_j$ to $U_j$, so that $U_j$ is also diffeomorphic to the closed ball, and such that $U_{j+1} \cap U_j \ne \emptyset$, $1 \le j < K$. You can do this by drawing a simple curve from $O_{j+1}$ to a point in $U_j$ and thickening it. Pick a smooth partition of unity $\varphi_j$, subordinate to this cover. (See §3.3.) Given $\alpha \in \Lambda^n M$, satisfying (5.3.3), take $\bar\alpha_j = \varphi_j\alpha$. Most likely $\int \bar\alpha_1 = c_1 \ne 0$, so take $\sigma_1 \in \Lambda^n M$, with compact support in $U_1 \cap U_2$, such that $\int \sigma_1 = c_1$. Set $\alpha_1 = \bar\alpha_1 - \sigma_1$, and redefine $\bar\alpha_2$ to be the old $\bar\alpha_2$ plus $\sigma_1$. Make a similar construction using $\int \bar\alpha_2 = c_2$, and continue. When you are done, you have
(5.3.8) $\alpha = \alpha_1 + \cdots + \alpha_K$,
with $\alpha_j$ compactly supported in $U_j$. By construction,
(5.3.9) $\int \alpha_j = 0$
for $1 \le j < K$. But then (5.3.3) implies $\int \alpha_K = 0$ too.

Now pick $p \in S^n$ and define smooth maps $\psi_j : M \to S^n$, which map $U_j$ diffeomorphically onto $S^n \setminus p$, and map $M \setminus U_j$ to $p$. There is a unique $\nu_j \in \Lambda^n S^n$, with compact support in $S^n \setminus p$, such that $\psi_j^*\nu_j = \alpha_j$. Clearly
(5.3.10) $\int_{S^n} \nu_j = 0$,
so by the case $M = S^n$ of Proposition 5.3.6 already established, we know that $\nu_j = dw_j$ for some $w_j \in \Lambda^{n-1}S^n$, and then
(5.3.11) $\alpha_j = d\psi_j^* w_j$.
This concludes the proof of Proposition 5.3.6.

We are now ready to introduce the notion of the degree of a map between compact oriented surfaces. Let $X$ and $Y$ be compact oriented $n$-dimensional surfaces, and assume $Y$ is connected. We want to define the degree of a smooth map $F : X \to Y$. We pick $\omega \in \Lambda^n Y$ such that
(5.3.12) $\int_Y \omega = 1$.
We propose to define
(5.3.13) $\mathrm{Deg}(F) = \int_X F^*\omega$.
The following result shows that $\mathrm{Deg}(F)$ is indeed well defined by this formula. The key argument is an application of Proposition 5.3.6.

Lemma 5.3.7. The quantity (5.3.13) is independent of the choice of $\omega$, as long as (5.3.12) holds.

Proof. Pick $\omega_1 \in \Lambda^n Y$ satisfying $\int_Y \omega_1 = 1$, so $\int_Y(\omega - \omega_1) = 0$. By Proposition 5.3.6, this implies
(5.3.14) $\omega - \omega_1 = d\alpha$, for some $\alpha \in \Lambda^{n-1}Y$.
Thus
(5.3.15) $\int_X F^*\omega - \int_X F^*\omega_1 = \int_X dF^*\alpha = 0$,
and the lemma is proved. $\Box$
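For maps of $S^1$, the definition (5.3.13) can be implemented directly: take $\omega = \frac{1}{2\pi}d\theta$, so that (5.3.12) holds, and integrate the pullback. In the small numerical sketch below, the map and its lift $f(\theta) = 3\theta + \sin\theta$ are our own example; the integral of $F^*\omega$ comes out to the integer $3$, as the theory predicts.

```python
import numpy as np

# Deg(F) = int_{S^1} F* omega, with omega = d(theta)/(2 pi), for the circle map
# F(e^{i theta}) = e^{i f(theta)} with lift f(theta) = 3 theta + sin(theta).
f = lambda th: 3*th + np.sin(th)

th = np.linspace(0.0, 2*np.pi, 20001)
fp = np.gradient(f(th), th)            # f'(theta), the coefficient of F* d(theta)
integral = np.sum(0.5*(fp[1:] + fp[:-1])*np.diff(th))   # trapezoid rule
print(round(integral / (2*np.pi)))     # 3
```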
The following homotopy invariance of degree is a most basic property.

Proposition 5.3.8. If $F_0$ and $F_1$ are smoothly homotopic, then $\mathrm{Deg}(F_0) = \mathrm{Deg}(F_1)$.

Proof. By Proposition 5.2.1, if $F_0$ and $F_1$ are smoothly homotopic, then $\int_X F_0^*\omega = \int_X F_1^*\omega$. $\Box$

The following result is a simple but powerful extension of Proposition 5.3.8. Compare the relation between Propositions 5.2.1 and 5.2.2.

Proposition 5.3.9. Let $M$ be a compact oriented surface with boundary, $\dim M = n+1$. Take $Y$ as above, $n = \dim Y$. Given a smooth map $F : M \to Y$, let $f = F|_{\partial M} : \partial M \to Y$. Then
$$\mathrm{Deg}(f) = 0.$$

Proof. Applying the Stokes theorem to $\alpha = F^*\omega$, we have
$$\int_{\partial M} f^*\omega = \int_M dF^*\omega.$$
But $dF^*\omega = F^*d\omega$, and $d\omega = 0$ if $\dim Y = n$, so we are done. $\Box$
Brouwer's no-retraction theorem is an easy corollary of Proposition 5.3.9. Compare the proof of Proposition 5.3.2.

Corollary 5.3.10. If $M$ is a compact oriented surface with nonempty boundary $\partial M$, then there is no smooth retraction $\varphi : M \to \partial M$.

Proof. Without loss of generality, we can assume $M$ is connected. If there were a retraction, then $\partial M = \varphi(M)$ must also be connected, so Proposition 5.3.9 applies. But then we would have, for the map $\mathrm{id} = \varphi|_{\partial M}$, the contradiction that its degree is both $0$ and $1$. $\Box$

We next give an alternative formula for the degree of a map, which is very useful in many applications. In particular, it implies that the degree is always an integer.

A point $y_0 \in Y$ is called a regular value of $F$, provided that, for each $x \in X$ satisfying $F(x) = y_0$, $DF(x) : T_xX \to T_{y_0}Y$ is an isomorphism. The easy case of Sard's theorem, discussed in §3.4, implies that most points in $Y$ are regular. Endow $X$ with a volume element $\omega_X$, and similarly endow $Y$ with $\omega_Y$. If $DF(x)$ is invertible, define $J_F(x) \in \mathbb{R}\setminus 0$ by $F^*(\omega_Y) = J_F(x)\,\omega_X$. Clearly, the sign of $J_F(x)$, i.e., $\mathrm{sgn}\,J_F(x) = \pm 1$, is independent of choices of $\omega_X$ and $\omega_Y$, as long as they determine the given orientations of $X$ and $Y$.

Proposition 5.3.11. If $y_0$ is a regular value of $F$, then
(5.3.16) $\mathrm{Deg}(F) = \sum\{\mathrm{sgn}\,J_F(x_j) : F(x_j) = y_0\}$.

Proof. Pick $\omega \in \Lambda^n Y$, satisfying (5.3.12), with support in a small neighborhood of $y_0$. Then $F^*\omega$ will be a sum $\sum\omega_j$, with $\omega_j$ supported in a small neighborhood of $x_j$, and $\int\omega_j = \pm 1$ as $\mathrm{sgn}\,J_F(x_j) = \pm 1$. $\Box$

For an application of Proposition 5.3.11, let $X$ be a compact smooth oriented hypersurface in $\mathbb{R}^{n+1}$, and set $\Omega = \mathbb{R}^{n+1}\setminus X$. Given $p \in \Omega$, define
(5.3.17) $F_p : X \to S^n$, $\quad F_p(x) = \dfrac{x - p}{|x - p|}$.
It is clear that $\mathrm{Deg}(F_p)$ is constant on each connected component of $\Omega$. It is also easy to see that, when $p$ crosses $X$, $\mathrm{Deg}(F_p)$ jumps by $\pm 1$. Thus $\Omega$ has at least two connected components. This is most of the smooth case of the Jordan-Brouwer separation theorem:
Theorem 5.3.12. If $X$ is a smooth compact oriented hypersurface of $\mathbb{R}^{n+1}$, which is connected, then $\Omega = \mathbb{R}^{n+1}\setminus X$ has exactly two connected components.

Proof. $X$ being oriented, it has a smooth global normal vector field. Use this to separate a small collar neighborhood $\mathcal{C}$ of $X$ into two pieces; $\mathcal{C}\setminus X = \mathcal{C}_0 \cup \mathcal{C}_1$. The collar $\mathcal{C}$ is diffeomorphic to $[-1,1]\times X$, and each $\mathcal{C}_j$ is clearly connected. It suffices to show that any connected component $O$ of $\Omega$ intersects either $\mathcal{C}_0$ or $\mathcal{C}_1$. Take $p \in \partial O$. If $p \notin X$, then $p \in \Omega$, which is open, so $p$ cannot be a boundary point of any component of $\Omega$. Thus $\partial O \subset X$, so $O$ must intersect a $\mathcal{C}_j$. This completes the proof. $\Box$

Let us note that, of the two components of $\Omega$, exactly one is unbounded, say $\Omega_0$, and the other is bounded, call it $\Omega_1$. Then we claim
(5.3.18) $p \in \Omega_j \Longrightarrow \mathrm{Deg}(F_p) = j$.
Indeed, for $p$ very far from $X$, $F_p : X \to S^n$ is not onto, so its degree is $0$. And when $p$ crosses $X$, from $\Omega_0$ to $\Omega_1$, the degree jumps by $+1$.
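For $n = 1$, (5.3.18) says that $\mathrm{Deg}(F_p)$ is the familiar winding number of the curve $X$ about $p$. A quick numerical check (with $X$ an ellipse, and both it and the test points our own choices):

```python
import numpy as np

# Winding number Deg(F_p) of F_p(x) = (x - p)/|x - p| for X an ellipse in R^2;
# the ellipse and the sample points p are illustrative choices only.
t = np.linspace(0.0, 2*np.pi, 4001)
X = np.column_stack([2*np.cos(t), np.sin(t)])       # positively oriented ellipse

def deg_Fp(p):
    w = X - p
    ang = np.unwrap(np.arctan2(w[:, 1], w[:, 0]))   # continuous angle of F_p along X
    return round((ang[-1] - ang[0]) / (2*np.pi))

print(deg_Fp(np.array([0.5, 0.2])))   # p in the bounded component Omega_1 -> 1
print(deg_Fp(np.array([3.0, 0.0])))   # p in the unbounded component Omega_0 -> 0
```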
For a simple closed curve in $\mathbb{R}^2$, Theorem 5.3.12 is the smooth case of the Jordan curve theorem. That special case of the argument given above can be found in [45]. The existence of a smooth normal field simplifies the use of basic degree theory to prove such a result. For a general continuous, simple closed curve in $\mathbb{R}^2$, such a normal field is not available, and the proof of the Jordan curve theorem in this more general context requires a different argument, which can be found in [21].

We apply results just established on degree theory to properties of vector fields, particularly of their critical points. A critical point of a vector field $V$ is a point where $V$ vanishes. Let $V$ be a vector field defined on a neighborhood $\mathcal{O}$ of $p \in \mathbb{R}^n$, with a single critical point, at $p$. Then, for any small ball $B_r$ about $p$, $B_r \subset \mathcal{O}$, we have a map
(5.3.19) $V_r : \partial B_r \to S^{n-1}$, $\quad V_r(x) = \dfrac{V(x)}{|V(x)|}$.
The degree of this map is called the index of $V$ at $p$, denoted $\mathrm{ind}_p(V)$; it is clearly independent of $r$. If $V$ has a finite number of critical points $p_j$, then the index of $V$ is defined to be
(5.3.20) $\mathrm{Index}(V) = \sum_j \mathrm{ind}_{p_j}(V)$.
If $\psi : \mathcal{O} \to \mathcal{O}'$ is an orientation-preserving diffeomorphism, taking $p$ to $p$ and $V$ to $W$, then we claim
(5.3.21) $\mathrm{ind}_p(V) = \mathrm{ind}_p(W)$.
In fact, $D\psi(p)$ is an element of $GL(n,\mathbb{R})$ with positive determinant, so it is homotopic to the identity, and from this it readily follows that $V_r$ and $W_r$ are homotopic maps of $\partial B_r \to S^{n-1}$. Thus one has a well-defined notion of the index of a vector field with a finite number of critical points on any oriented surface $M$.

There is one more wrinkle. Suppose $X$ is a smooth vector field on $M$ and $p$ an isolated critical point. If you change the orientation of a small coordinate neighborhood $\mathcal{O}$ of $p$, then the orientations of both $\partial B_r$ and $S^{n-1}$ in (5.3.19) get changed, so the associated degree is not changed. Hence one has a well-defined notion of the index of a vector field with a finite number of critical points on any smooth surface $M$, oriented or not.

A vector field $V$ on $\mathcal{O} \subset \mathbb{R}^n$ is said to have a nondegenerate critical point at $p$ provided $DV(p)$ is a nonsingular $n\times n$ matrix. The following formula is convenient.

Proposition 5.3.13. If $V$ has a nondegenerate critical point at $p$, then
(5.3.22) $\mathrm{ind}_p(V) = \mathrm{sgn}\,\det DV(p)$.

Proof. If $p$ is a nondegenerate critical point, and we set $\psi(x) = DV(p)x$, $\psi_r(x) = \psi(x)/|\psi(x)|$, for $x \in \partial B_r$, it is readily verified that $\psi_r$ and $V_r$ are homotopic, for $r$ small. The fact that $\mathrm{Deg}(\psi_r)$ is given by the right side of (5.3.22) is an easy consequence of Proposition 5.3.11. $\Box$

The following is an important global relation between index and degree.
Proposition 5.3.14. Let $\Omega$ be a smooth bounded region in $\mathbb{R}^{n+1}$. Let $V$ be a vector field on $\overline{\Omega}$, with a finite number of critical points $p_j$, all in the interior $\Omega$. Define $F : \partial\Omega \to S^n$ by $F(x) = V(x)/|V(x)|$. Then
(5.3.23) $\mathrm{Index}(V) = \mathrm{Deg}(F)$.

Proof. If we apply Proposition 5.3.9 to $M = \overline{\Omega}\setminus\bigcup_j B_\varepsilon(p_j)$, we see that $\mathrm{Deg}(F)$ is equal to the sum of degrees of the maps of $\partial B_\varepsilon(p_j)$ to $S^n$, which gives (5.3.23). $\Box$
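Propositions 5.3.13 and 5.3.14 can both be illustrated numerically in the plane. The vector field below is our own example: it has a saddle at $(-1,0)$ and a source at $(1,0)$. Winding numbers on small circles match $\mathrm{sgn}\,\det DV$, as in (5.3.22), and the winding on a large circle enclosing both zeros equals their sum, as in (5.3.23).

```python
import numpy as np

# V(x,y) = (x^2 - 1, 2y): zeros at (-1,0) (saddle) and (1,0) (source); our own example.
V = lambda x, y: np.array([x**2 - 1.0, 2.0*y])

def winding(center, r, n=4000):
    # degree of V/|V| on the circle of radius r about `center`, via the swept angle
    t = np.linspace(0.0, 2*np.pi, n)
    w = np.array([V(center[0] + r*np.cos(a), center[1] + r*np.sin(a)) for a in t])
    ang = np.unwrap(np.arctan2(w[:, 1], w[:, 0]))
    return round((ang[-1] - ang[0]) / (2*np.pi))

# (5.3.22): ind_p(V) = sgn det DV(p), with DV = [[2x, 0], [0, 2]]
print(winding((-1, 0), 0.5), int(np.sign(np.linalg.det([[-2., 0.], [0., 2.]]))))  # -1 -1
print(winding(( 1, 0), 0.5), int(np.sign(np.linalg.det([[ 2., 0.], [0., 2.]]))))  # 1 1
# (5.3.23): Index(V) equals the degree of V/|V| on a large circle enclosing both zeros
print(winding((0, 0), 3.0))   # -1 + 1 = 0
```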
Next we look at a process of producing vector fields in higher-dimensional spaces from vector fields in lower-dimensional spaces.

Proposition 5.3.15. Let $W$ be a vector field on $\mathbb{R}^n$, vanishing only at $0$. Define a vector field $V$ on $\mathbb{R}^{n+k}$ by $V(x,y) = (W(x), y)$. Then $V$ vanishes only at $(0,0)$, and we have
(5.3.24) $\mathrm{ind}_0 W = \mathrm{ind}_{(0,0)} V$.

Proof. If we use Proposition 5.3.11 to compute degrees of maps, and choose $y_0 \in S^{n-1} \subset S^{n+k-1}$, a regular value of $W_r$, and hence also for $V_r$, this identity follows. $\Box$

We turn to a more sophisticated variation. Let $X$ be a compact $n$-dimensional surface in $\mathbb{R}^{n+k}$, and let $W$ be a (tangent) vector field on $X$ with a finite number of critical points $p_j$. Let $\Omega$ be a small tubular neighborhood of $X$, $\pi : \Omega \to X$ mapping $z \in \Omega$ to the nearest point in $X$. Let $\varphi(z) = \mathrm{dist}(z,X)^2$. Now define a vector field $V$ on $\Omega$ by
(5.3.25) $V(z) = W(\pi(z)) + \nabla\varphi(z)$.

Proposition 5.3.16. If $F : \partial\Omega \to S^{n+k-1}$ is given by $F(z) = V(z)/|V(z)|$, then
(5.3.26) $\mathrm{Deg}(F) = \mathrm{Index}(W)$.

Proof. We see that all the critical points of $V$ are points in $X$ that are critical for $W$, and, as in Proposition 5.3.15, $\mathrm{Index}(W) = \mathrm{Index}(V)$. Then Proposition 5.3.14 implies $\mathrm{Index}(V) = \mathrm{Deg}(F)$. $\Box$

Since $\varphi(z)$ is increasing as one goes away from $X$, it is clear that, for $z \in \partial\Omega$, $V(z)$ points out of $\Omega$, provided it is a sufficiently small tubular neighborhood of $X$. Thus $F : \partial\Omega \to S^{n+k-1}$ is homotopic to the Gauss map
(5.3.27) $N : \partial\Omega \to S^{n+k-1}$,
given by the outward-pointing normal. This immediately gives the following.

Corollary 5.3.17. Let $X$ be a compact $n$-dimensional surface in $\mathbb{R}^{n+k}$, let $\Omega$ be a small tubular neighborhood of $X$, and let $N : \partial\Omega \to S^{n+k-1}$ be the Gauss map. If $W$ is a vector field on $X$ with a finite number of critical points, then
(5.3.28) $\mathrm{Index}(W) = \mathrm{Deg}(N)$.

Clearly, the right side of (5.3.28) is independent of the choice of $W$. Thus any two vector fields on $X$ with a finite number of critical points have the same index, i.e., $\mathrm{Index}(W)$ is an invariant of $X$. This invariant is denoted
(5.3.29) $\mathrm{Index}(W) = \chi(X)$,
and is called the Euler characteristic of $X$.
Remark. The existence of smooth vector fields with only nondegenerate critical points (hence only finitely many critical points) on a given compact surface $X$ follows from results presented in §3.5.

Exercises

1. Let $X$ be a compact, oriented, connected surface. Show that the identity map $I : X \to X$ has degree $1$.

2. Suppose $Y$ is also a compact, oriented, connected surface. Show that if $F : X \to Y$ is not onto, then $\mathrm{Deg}(F) = 0$.

3. If $A : S^n \to S^n$ is the antipodal map, show that $\mathrm{Deg}(A) = (-1)^{n+1}$.

4. Show that the homotopy invariance property given in Proposition 5.3.8 can be deduced as a corollary of Proposition 5.3.9. Hint. Take $M = X\times[0,1]$.

5. Let $p(z) = z^n + a_{n-1}z^{n-1} + \cdots + a_1 z + a_0$ be a polynomial of degree $n \ge 1$. The fundamental theorem of algebra, proved in §5.1, states that $p(z_0) = 0$ for some $z_0 \in \mathbb{C}$. We aim for another proof, using degree theory. To get this, by contradiction, assume $p : \mathbb{C} \to \mathbb{C}\setminus 0$. For $r \ge 0$, define
$$F_r : S^1 \to S^1, \qquad F_r(e^{i\theta}) = \frac{p(re^{i\theta})}{|p(re^{i\theta})|}.$$
Show that each $F_r$ is smoothly homotopic to $F_0$, and note that $\mathrm{Deg}(F_0) = 0$. Then show that there exists $r_0$ such that
$$r \ge r_0 \Longrightarrow F_r \text{ is homotopic to } \Phi_n, \text{ where } \Phi_n(e^{i\theta}) = e^{in\theta}.$$
Show that $\mathrm{Deg}(\Phi_n) = n$, and obtain a contradiction.

13. Let $G \subset \mathbb{R}^{n+1}$ be a smoothly bounded region, with $X = \partial G$ and Gauss map $N : X \to S^n$, and let $\varphi : \overline{G} \to [0,\infty)$ be a smooth function with $\varphi > 0$ on $G$ and $\varphi = 0$ on $X$. Let $\Sigma \subset \mathbb{R}^{n+2}$ be the surface
$$\Sigma = \{(x,y) : x \in \overline{G},\ y^2 = \varphi(x)\},$$
and let $\nu : \Sigma \to S^{n+1}$ be the outward-pointing unit normal. Show that
(5.3.33) $\mathrm{Deg}\,\nu = \mathrm{Deg}\,N$,
and deduce that
(5.3.34) $\mathrm{Deg}\,N = \dfrac{1}{2}\chi(\Sigma)$.
Hint. Taking $N : X \to S^n \subset S^{n+1}$, show that each regular value of $N$ is also a regular value of $\nu$, with the same preimage in $X \subset \Sigma$. Then show that Proposition 5.3.11 applies.

14. Actually, (5.3.33) holds whether $n$ is odd or even. Can you get anything else from this?

15. In the setting of Exercise 12 ($n$ is odd), generalize the construction of Exercise 6 to show directly that there is a smooth, nowhere-vanishing vector field tangent to $X$.
[Figure 5.3.3. Three-holed torus, Euler characteristic $-4$]
16. Let $\mathcal{O} \subset \mathbb{R}^n$ be open, and let $f : \mathcal{O} \to \mathbb{R}$ be smooth of class $C^2$. Let $V = \nabla f$. Assume $p \in \mathcal{O}$ is a nondegenerate critical point of $f$, so its Hessian $D^2f(p)$ is a nondegenerate $n\times n$ symmetric matrix. Say
(5.3.35) $D^2f(p)$ has $\ell$ positive eigenvalues and $n-\ell$ negative eigenvalues.
Show that Proposition 5.3.13 implies
(5.3.36) $\mathrm{ind}_p(V) = (-1)^{n-\ell}$.

17. Let $X \subset \mathbb{R}^{n+k}$ be a smooth, compact, $n$-dimensional surface. Assume there exists $f \in C^2(X)$, with just two critical points, a max and a min, both nondegenerate. Use Exercise 16 to show that
(5.3.37) $\chi(X) = 2$ if $n$ is even, $\quad 0$ if $n$ is odd.
Considering $S^n \subset \mathbb{R}^{n+1}$, use this to give another demonstration of (5.3.30).

18. Let $T \subset \mathbb{R}^3$ be the "inner tube" surface described in Exercise 15 of §3.2.
(a) Show that rotation about the $z$-axis is generated by a vector field that is tangent to $T$ and nowhere vanishing on $T$.
(b) Define $f : T \to \mathbb{R}$ by $f(x,y,z) = x$, $(x,y,z) \in T$. Show that $f$ has four critical points, a max, a min, and two saddles. Deduce from Exercise 16 that $\nabla f$ is a vector field on $T$ of index $0$.
(c) Show that both parts (a) and (b) imply $\chi(T) = 0$.

19. Figure 5.3.3 shows a three-holed torus $M$ in $\mathbb{R}^3$, lined up along the $x$-axis. Define $\varphi : M \to \mathbb{R}$ by $\varphi = x|_M$, and consider the vector field $X = \nabla\varphi$ on $M$.
(a) Show that $X$ has eight critical points: one source, six saddles, and one sink.
(b) Deduce from part (a) that
$$\chi(M) = 2 - 6 = -4.$$
20. Extending the scope of Exercise 19, consider a $g$-holed torus, $M_g$, again lined up along the $x$-axis, and define a similar vector field $X$ on $M_g$. Show that $X$ has $2g+2$ critical points: one source, $2g$ saddles, and one sink. Deduce that
$$\chi(M_g) = 2 - 2g.$$
Compare part (b) of Exercise 18.

21. Let $M \subset \mathbb{R}^n$ be a smooth, compact, $m$-dimensional surface. Assume
(5.3.38) $x \in M \Rightarrow -x \in M$, $\quad 0 \notin M$,
and form the projective manifold $\mathbb{P}(M)$, as in §3.2. Let $X$ be a vector field on $\mathbb{P}(M)$ with only nondegenerate critical points. Show that there is naturally associated a vector field $Y$ on $M$ such that $\mathrm{Index}\,Y = 2\,\mathrm{Index}\,X$. Deduce that
$$\chi(M) = 2\chi(\mathbb{P}(M)).$$

22. In the setting of Exercise 20, with $M_g$ arranged to satisfy (5.3.38), show that $\chi(\mathbb{P}(M_g)) = 1 - g$.

Let $X \subset \mathbb{R}^n$ be an $m$-dimensional surface, and let $Y \subset \mathbb{R}^\nu$ be a $\mu$-dimensional surface, both smooth of class $C^k$. Then the Cartesian product
$$X\times Y = \{(x,y) \in \mathbb{R}^n\times\mathbb{R}^\nu : x \in X,\ y \in Y\} \subset \mathbb{R}^n\times\mathbb{R}^\nu$$
has a natural structure of an $(m+\mu)$-dimensional $C^k$ surface.

23. Let $X$ and $Y$ be as above, and assume they are both compact. Let $V_1$ be a smooth vector field tangent to $X$ and $V_2$ a smooth vector field tangent to $Y$, both with only nondegenerate critical points. Say $\{p_i\}$ are the critical points of $V_1$ and $\{q_j\}$ those of $V_2$.
(a) Show that $W(x,y) = V_1(x) + V_2(y)$ is a smooth vector field tangent to $X\times Y$. Show that its critical points are precisely the points $\{(p_i, q_j)\}$, each nondegenerate. Show that Proposition 5.3.13 gives
(5.3.39) $\mathrm{ind}_{(p_i,q_j)}(W) = \mathrm{ind}_{p_i}(V_1)\,\mathrm{ind}_{q_j}(V_2)$.
(b) Show that
(5.3.40) $\mathrm{Index}\,W = (\mathrm{Index}\,V_1)(\mathrm{Index}\,V_2)$.
(c) Deduce that
(5.3.41) $\chi(X\times Y) = \chi(X)\chi(Y)$.
Let $X$ and $Y$ be smooth, compact, oriented surfaces in $\mathbb{R}^n$. Assume $k = \dim X$, $\ell = \dim Y$, and $k + \ell = n - 1$. Assume $X \cap Y = \emptyset$. Set
(5.3.42) $\varphi_{XY} : X\times Y \to S^{n-1}$, $\quad \varphi_{XY}(x,y) = \dfrac{x-y}{|x-y|}$.
We define the linking number
(5.3.43) $\ell(X, Y, \mathbb{R}^n) = \mathrm{Deg}\,\varphi_{XY}$.

[Figure 5.3.4. Two curves in $\mathbb{R}^3$ with linking number $1$]

24. Let $\gamma$ and $\sigma \subset \mathbb{R}^3$ be the following simple closed curves, parametrized by $s, t \in \mathbb{R}/(2\pi\mathbb{Z})$:
$$\gamma(s) = (\cos s, \sin s, 0), \qquad \sigma(t) = (0, 1 + \cos t, \sin t).$$
Thus $\gamma$ is a circle in the $(x,y)$-plane centered at $(0,0,0)$ and $\sigma$ is a circle in the $(y,z)$-plane, centered at $(0,1,0)$, both of unit radius. See Figure 5.3.4. Show that
$$\ell(\gamma, \sigma, \mathbb{R}^3) = 1.$$
Hint. With $\varphi$ as above, show that $e_2 = (0,1,0) \in S^2$ has exactly one preimage point, under $\varphi : \gamma\times\sigma \to S^2$.
25. Let $M$ be a smooth, compact, oriented, $(n-1)$-dimensional surface, and assume $\varphi : M \to \mathbb{R}^n\setminus 0$ is a smooth map. Set
$$F(x) = \frac{\varphi(x)}{|\varphi(x)|}, \qquad F : M \to S^{n-1}.$$
Take $\omega \in \Lambda^{n-1}(\mathbb{R}^n\setminus 0)$ to be the form considered in Exercises 9-10 of §4.3, i.e.,
$$\omega = |x|^{-n}\sum_{j=1}^n (-1)^{j-1} x_j\,dx_1\wedge\cdots\wedge\widehat{dx_j}\wedge\cdots\wedge dx_n.$$
Show that
$$\mathrm{Deg}(F) = \frac{1}{A_{n-1}}\int_M \varphi^*\omega,$$
where $A_{n-1}$ is the area of $S^{n-1}$.
Hint. Use Proposition 5.2.1 (with $Y = \mathbb{R}^n\setminus 0$) and Exercise 9 of §4.3 to show that
$$\int_M \varphi^*\omega = \int_M F^*\omega,$$
and show that, under $j : S^{n-1} \hookrightarrow \mathbb{R}^n$, $j^*\omega$ is the area form on $S^{n-1}$.

26. In Exercise 25, take $n = 3$ and $M = \mathbb{T}^2 = \mathbb{R}^2/(2\pi\mathbb{Z}^2)$, parametrized by $(s,t) \in \mathbb{R}^2$. Show that
$$\varphi^*\omega = |\varphi|^{-3}\det\begin{pmatrix}\varphi_1 & \varphi_2 & \varphi_3\\ \partial_s\varphi_1 & \partial_s\varphi_2 & \partial_s\varphi_3\\ \partial_t\varphi_1 & \partial_t\varphi_2 & \partial_t\varphi_3\end{pmatrix}\,ds\wedge dt = |\varphi|^{-3}\,\varphi\cdot\Bigl(\frac{\partial\varphi}{\partial s}\times\frac{\partial\varphi}{\partial t}\Bigr)\,ds\wedge dt.$$
In case $\varphi(s,t) = \gamma(s) - \sigma(t)$, deduce the Gauss linking number formula,
$$\ell(\gamma,\sigma,\mathbb{R}^3) = -\frac{1}{4\pi}\int_{\mathbb{T}^2} |\gamma(s)-\sigma(t)|^{-3}\,(\gamma(s)-\sigma(t))\cdot(\gamma'(s)\times\sigma'(t))\,ds\,dt.$$
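The Gauss linking integral can be evaluated numerically for the circles of Exercise 24. The sketch below (our own illustration) uses the periodic rectangle rule, which converges rapidly for smooth periodic integrands; the result is the linking number $1$.

```python
import numpy as np

# Gauss linking integral for the two circles of Exercise 24, via the periodic
# rectangle rule on the parameter torus [0, 2*pi)^2.
n = 400
s = np.linspace(0, 2*np.pi, n, endpoint=False)
S, T = np.meshgrid(s, s, indexing='ij')

g   = np.stack([np.cos(S), np.sin(S), np.zeros_like(S)], axis=-1)      # gamma(s)
gp  = np.stack([-np.sin(S), np.cos(S), np.zeros_like(S)], axis=-1)     # gamma'(s)
sig = np.stack([np.zeros_like(T), 1 + np.cos(T), np.sin(T)], axis=-1)  # sigma(t)
sgp = np.stack([np.zeros_like(T), -np.sin(T), np.cos(T)], axis=-1)     # sigma'(t)

d = g - sig
num = np.einsum('ijk,ijk->ij', d, np.cross(gp, sgp))     # (g - s).(g' x s')
link = -(num / np.linalg.norm(d, axis=-1)**3).sum() * (2*np.pi/n)**2 / (4*np.pi)
print(round(link))   # 1, matching Exercise 24
```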
27. Take $X, Y \subset \mathbb{R}^n$ as in (5.3.42)-(5.3.43). We say an unlinking of $X$ and $Y$ is a pair of smooth families $S_t : X \to \mathbb{R}^n$, $\Sigma_t : Y \to \mathbb{R}^n$ of smooth maps such that
$$S_0(x) = x, \quad \Sigma_0(y) = y, \quad \forall\,x\in X,\ y\in Y,$$
$$S_t(X)\cap\Sigma_t(Y) = \emptyset \quad \forall\,t\in[0,1],$$
$$S_1(X) \text{ and } \Sigma_1(Y) \text{ are separated by a hyperplane in } \mathbb{R}^n.$$
Show that if there is an unlinking of $X$ and $Y$, then $\ell(X,Y,\mathbb{R}^n) = 0$. Deduce that there is no unlinking of the curves $\sigma$ and $\gamma$ in Exercise 24.
Hint. Consider $\varphi_t : X\times Y \to S^{n-1}$, given by
$$\varphi_t(x,y) = \frac{S_t(x) - \Sigma_t(y)}{|S_t(x) - \Sigma_t(y)|}.$$

28. Let $f : S^n \to S^n$ be a smooth map with the property that $f(x) \ne -x$ for all $x \in S^n$. Show that $f$ is smoothly homotopic to the identity map, and hence $\mathrm{Deg}\,f = 1$.

29. Let $g : S^n \to S^n$ be a smooth map, and assume $n = 2k$ is even. Show that
$$\mathrm{Deg}\,g \ne -1 \Longrightarrow g \text{ has a fixed point in } S^n.$$
Hint. Let $A : S^n \to S^n$ be the antipodal map, and consider $f = A\circ g$.

30. What sort of fixed-point result can you establish for $g : S^n \to S^n$ when $n$ is odd?
Chapter 6
Differential geometry of surfaces
Here we study the geometry of an $n$-dimensional surface $M$ in $\mathbb{R}^k$, or more generally of an $n$-dimensional manifold $M$, equipped with a metric tensor (a Riemannian manifold). The first object of our study is the class of geodesics, curves $\gamma : I \to M$ that are critical points for the length functional
(6.0.1) $L(\gamma) = \int_a^b \|\gamma'(t)\|\,dt$,
with fixed endpoints, say $\gamma(a) = p$, $\gamma(b) = q$. If we parametrize $\gamma$ to have constant speed, these curves are equivalently critical points of the energy functional
(6.0.2) $E(\gamma) = \dfrac{1}{2}\int_a^b \|\gamma'(t)\|^2\,dt$,
and, moreover, critical points of (6.0.2) are seen to automatically have constant speed.

The geodesic condition can be expressed as a differential equation. We provide three approaches to this geodesic equation. In the first approach, $M \subset \mathbb{R}^k$ is an $n$-dimensional surface. We see that a smooth curve $\gamma : I \to M$ is a critical point of (6.0.2) if and only if, for each $t \in (a,b)$, $\gamma''(t)$ is normal to $M$ at $\gamma(t)$. To derive a differential equation from this characterization, we bring in the matrix-valued function $P : M \to M(k,\mathbb{R})$, given by
(6.0.3) $P(x) = $ orthogonal projection of $\mathbb{R}^k$ onto $T_xM$,
for $x \in M$. We derive the geodesic equation
(6.0.4) $\gamma''(t) + \bigl[DP^\perp(\gamma(t))\gamma'(t)\bigr]\gamma'(t) = 0$, where $P^\perp(x) = I - P(x)$.
($DP(x)$ will appear again, in (6.0.11).)
The second approach uses local coordinates and works on an arbitrary Riemannian manifold. We take $\gamma(t) = (x^1(t),\dots,x^n(t))$ and write (6.0.2) as
(6.0.5) $E(\gamma) = \dfrac{1}{2}\int_a^b g_{jk}(x(t))\,\dot{x}^j(t)\dot{x}^k(t)\,dt$,
using the summation convention (sum over repeated indices). We obtain the geodesic equation in the form
(6.0.6) $\ddot{x}^\ell + \dot{x}^j\dot{x}^k\,\Gamma^\ell_{jk} = 0$,
where $\Gamma^\ell_{jk}$ are the Christoffel symbols, given by (6.1.47). We also rewrite this as a first-order system, for $(x,\xi)$, where $\xi_\ell = g_{\ell k}\dot{x}^k$; see (6.1.49). This is a Hamiltonian system. The fact that solutions are constant-speed curves is manifested as a conservation law.

Our third approach to the geodesic equation brings in the notion of a covariant derivative, which associates to each tangent vector field $X$ on $M$ a first-order differential operator $\nabla_X$, itself acting on vector fields on $M$. To each Riemannian manifold $M$ there is a naturally associated Levi-Civita covariant derivative, given by the formula (6.1.62). We show that when $M \subset \mathbb{R}^k$ is an $n$-dimensional surface, then the Levi-Civita covariant derivative is given by
(6.0.7) $\nabla_X Y = P(x)D_X Y$, at $x \in M$,
where $D_X$ acts componentwise on $Y$ and $P(x)$ is the projection (6.0.3). For a general Riemannian manifold $M$, the geodesic equation takes the form
(6.0.8) $\nabla_T T = 0$
at each point $\gamma(t)$, with $T(t) = \gamma'(t)$.

Using the basic existence, uniqueness, and smooth dependence on parameters for systems of ODE, we obtain, for each $p \in M$, an exponential map
(6.0.9) $\mathrm{Exp}_p : U \to M$,
defined on a neighborhood $U$ of $0$ in $T_pM$ by
(6.0.10) $\mathrm{Exp}_p(v) = \gamma_v(1)$,
where $\gamma_v(t)$ is the unique constant-speed geodesic satisfying $\gamma_v(0) = p$, $\gamma_v'(0) = v$. The derivative $D\,\mathrm{Exp}_p(0)$ is the identity map on $T_pM$, so $\mathrm{Exp}_p$ gives a diffeomorphism from some open neighborhood of $0 \in T_pM$ onto a neighborhood of $p$ in $M$, called an exponential coordinate system.

We next take up the study of curvature, in §6.2. If $M \subset \mathbb{R}^k$ is a connected $n$-dimensional surface, it is part of a flat plane in $\mathbb{R}^k$ if and only if $P(x)$ is constant on $M$. Hence a measure of how $M$ curves at $x$ is given by
(6.0.11) $DP(x) : T_xM \to M(k,\mathbb{R})$.
Given $X(x) \in T_xM$, we set $D_XP(x) = DP(x)X(x)$, yielding
(6.0.12) $D_XP : M \to M(k,\mathbb{R})$,
when $X$ is a vector field on $M$. One sees that, for $x \in M$,
(6.0.13) $D_XP(x) : T_xM \to \nu_xM$ and $\nu_xM \to T_xM$,
where $\nu_xM = (T_xM)^\perp \subset \mathbb{R}^k$.
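The coordinate form (6.0.6) can be integrated numerically. The sketch below (our own illustration, with our own choice of initial data) solves the geodesic equation on the unit sphere $S^2$ in spherical coordinates $(\theta,\varphi)$, where the metric is $d\theta^2 + \sin^2\theta\,d\varphi^2$ and the nonzero Christoffel symbols are $\Gamma^\theta_{\varphi\varphi} = -\sin\theta\cos\theta$ and $\Gamma^\varphi_{\theta\varphi} = \Gamma^\varphi_{\varphi\theta} = \cot\theta$. It then checks the conservation law (constant speed) and that the solution stays on a great circle.

```python
import numpy as np

# Geodesic equation (6.0.6) on S^2; state u = (theta, phi, v_theta, v_phi).
def rhs(u):
    th, ph, vt, vp = u
    return np.array([vt, vp,
                     np.sin(th)*np.cos(th)*vp**2,         # -Gamma^th_{ph ph} vp^2
                     -2*(np.cos(th)/np.sin(th))*vt*vp])   # -2 Gamma^ph_{th ph} vt vp

def embed(th, ph):   # the surface S^2 in R^3
    return np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])

u = np.array([np.pi/3, 0.0, 0.2, 1.0])     # initial point and velocity (our choice)
speed2 = lambda u: u[2]**2 + np.sin(u[0])**2 * u[3]**2
s0 = speed2(u)
x0 = embed(u[0], u[1])                     # ambient initial data (valid since phi(0)=0)
dx0 = u[2]*np.array([np.cos(u[0]), 0.0, -np.sin(u[0])]) + u[3]*np.array([0.0, np.sin(u[0]), 0.0])
nrm = np.cross(x0, dx0)                    # normal of the expected great-circle plane

h = 1e-3
for _ in range(5000):                      # classical RK4 time stepping
    k1 = rhs(u); k2 = rhs(u + h/2*k1); k3 = rhs(u + h/2*k2); k4 = rhs(u + h*k3)
    u = u + h/6*(k1 + 2*k2 + 2*k3 + k4)

print(abs(speed2(u) - s0) < 1e-8)          # constant speed: True
print(abs(embed(u[0], u[1]) @ nrm) < 1e-6) # stays on the great circle: True
```

Both conservation checks reflect the structure noted in the text: constant speed is the conservation law of the Hamiltonian system, and great circles are the geodesics of $S^2$.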
If $X$ and $Y$ are tangent vector fields to $M$, there is the second fundamental form, given by
(6.0.14) $II(X,Y) = (D_XP)Y$,
which is normal to $M$. If $\xi$ is a normal field to $M$, we define the Weingarten map, $A_\xi(x) : T_xM \to T_xM$, by
(6.0.15) $\langle A_\xi X, Y\rangle = \langle \xi, II(X,Y)\rangle$.
One has the Weingarten formula,
(6.0.16) $A_\xi X = -P(x)\,D_X\xi$.
In case $k = n+1$, so $M$ has codimension $1$, we take a smooth unit normal field $N$ to $M$, so
(6.0.17) $N : M \to S^n$
is the Gauss map, introduced in §5.3, giving rise to the Gauss curvature,
(6.0.18) $K(x) = \det\bigl(DN(x)\big|_{T_xM}\bigr)$,
where $DN(x) : T_xM \to T_{N(x)}S^n = T_xM$. In this case, the Weingarten formula becomes
(6.0.19) $D_XN = -A_NX$,
so we have
(6.0.20) $K(x) = (-1)^n\det A_N(x)$.
Section 6.2 explores a number of explicit computations of the Gauss curvature of special classes of surfaces.

The measures of curvature mentioned above are extrinsically defined, i.e., they are defined by how the surface $M$ sits in $\mathbb{R}^k$. There is also an intrinsically defined curvature, the Riemann curvature, defined as follows. If $M$ is a Riemannian manifold and $X, Y, Z$ are vector fields on $M$, we set
(6.0.21) $R(X,Y)Z = \nabla_X\nabla_YZ - \nabla_Y\nabla_XZ - \nabla_{[X,Y]}Z$,
where $\nabla$ is the Levi-Civita covariant derivative on $M$. There is a formula for $R$ in terms of the Christoffel symbols, presented in (6.2.65)-(6.2.69). In case $M \subset \mathbb{R}^k$ is an $n$-dimensional surface, we have a formula relating the Riemann curvature to the Weingarten map and the second fundamental form:
(6.0.22) $R(X,Y)Z = A_{II(Y,Z)}X - A_{II(X,Z)}Y$.
In case $k = n+1$, we can set
(6.0.23) $II(X,Y) = \widetilde{II}(X,Y)\,N$,
and deduce that
(6.0.24) $\langle R(X,Y)Z, W\rangle = \det\begin{pmatrix}\widetilde{II}(X,W) & \widetilde{II}(X,Z)\\ \widetilde{II}(Y,W) & \widetilde{II}(Y,Z)\end{pmatrix}$.
In case $n = 2$, $k = 3$, this yields the formula
(6.0.25) $K(x) = \langle R(U,V)V, U\rangle(x)$
for the Gauss curvature in terms of the Riemann curvature (where $U$ and $V$ form an orthonormal basis of $T_xM$). This is the Gauss theorema egregium. It implies that the Gauss curvature of $M$, initially defined extrinsically, is actually an intrinsic measure of the curvature of $M$, when $M \subset \mathbb{R}^3$ is a two-dimensional surface. One can use the right side of (6.0.25) to define the Gauss curvature of any two-dimensional Riemannian manifold, whether or not it is a surface in $\mathbb{R}^3$. In (6.2.102)-(6.2.107), we show that the Gauss curvature of a surface $M \subset \mathbb{R}^{n+1}$ of dimension $n$ is intrinsically defined whenever $n = 2m$ is even, and we extend the definition of Gauss curvature to all Riemannian manifolds of dimension $2m$.

In §6.3 we tie results on curvature to results on degree theory from Chapter 5. Results of §5.3 show that if $M \subset \mathbb{R}^{n+1}$ is a compact, $n$-dimensional surface, with Gauss map $N : M \to S^n$, and if $n = 2m$ is even, then
(6.0.26) $\mathrm{Deg}(N) = \dfrac{1}{A_n}\displaystyle\int_M K(x)\,dS(x)$, and $\mathrm{Deg}(N) = \dfrac{1}{2}\chi(M)$,
where $A_n$ is the $n$-dimensional area of the unit sphere $S^n$, $K(x)$ is the Gauss curvature of $M$, given by (6.0.18), and $\chi(M)$ is the Euler characteristic of $M$. Putting these identities together and specializing to $n = 2$ yields
(6.0.27) $\int_M K\,dS = 2\pi\chi(M)$,
for $M \subset \mathbb{R}^3$, which is part of the classical Gauss-Bonnet theorem. We have two primary goals in this section. One is to establish (6.0.27) for an arbitrary two-dimensional compact Riemannian manifold, regardless of whether it is a surface in $\mathbb{R}^3$. Going further, we consider a domain $\Omega$ in such a two-dimensional Riemannian manifold $M$ and seek to extend (6.0.27) to a formula for $\int_\Omega K\,dS$. For example, we show that if $\Omega \subset M$ is smoothly bounded and $M\setminus\Omega$ has $k$ connected components, each diffeomorphic to a closed disk, then
(6.0.28) $\int_\Omega K\,dS + \int_{\partial\Omega}\kappa\,ds = 2\pi(\chi(M) - k)$,
where $\kappa$ is the geodesic curvature of $\partial\Omega$. We make some comments on higher-dimensional versions of the Gauss-Bonnet theorem, beyond the case of codimension-1 surfaces treated in (6.0.26), noting that pursuing this is a task for a more advanced course.

In §6.4 we study smooth matrix groups, which are subsets $G \subset M(n,\mathbb{F})$ that are smooth surfaces and have the property
(6.0.29) $g, h \in G \Longrightarrow gh,\ g^{-1} \in G$.
Such surfaces get both left and right invariant metric tensors, and associated volume elements, and we investigate properties of the resulting invariant integrals. We show that the matrix exponential has the property
(6.0.30) $A \in \mathfrak{g} = T_IG \Longrightarrow e^{tA} \in G, \quad \forall\,t \in \mathbb{R}$,
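Property (6.0.30) is easy to watch in action for $G = SO(3)$, whose Lie algebra $\mathfrak{g} = T_IG$ consists of the skew-symmetric $3\times 3$ matrices. A minimal sketch (our own, with a hand-rolled power series for the matrix exponential and a sample skew-symmetric $A$):

```python
import numpy as np

# (6.0.30) for G = SO(3): if A^T = -A (i.e., A in g = T_I G), then e^{tA} in SO(3).
def expm(A, terms=40):
    E = np.eye(A.shape[0]); term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k        # accumulates A^k / k!
        E = E + term
    return E

A = np.array([[ 0.0, -1.0,  0.5],
              [ 1.0,  0.0, -0.3],
              [-0.5,  0.3,  0.0]])   # a sample element of g (skew-symmetric)

for t in (0.5, 1.0, 2.0):
    Q = expm(t*A)
    print(np.allclose(Q.T @ Q, np.eye(3)), np.isclose(np.linalg.det(Q), 1.0))  # True True
```

The truncated series is adequate here because $\|tA\|$ is small; a production implementation would use scaling and squaring.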
and explore a Lie algebra structure on $\mathfrak{g}$. In case $G$ has a bi-invariant metric tensor, we show that these curves $e^{tA}$ are geodesics on $G$ and obtain formulas for the covariant derivative and the Riemann curvature in terms of the Lie algebra structure on $\mathfrak{g}$. We conclude this chapter with some results on the derivative of the exponential map, actually two exponential maps: first the matrix exponential, and then the map $\mathrm{Exp}_p$ in (6.0.9)-(6.0.10). We particularly consider existence of critical points, associated to conjugate points on $M$, and examine the influence of curvature on the existence of such critical points. This also serves as an introduction to a topic that can be pursued much further in a more advanced course.

6.1. Geometry of surfaces I: geodesics
Let $M$ be a smooth, $n$-dimensional surface in $\mathbb{R}^k$ ($k > n$), or more generally a smooth, $n$-dimensional manifold equipped with a metric tensor (a Riemannian manifold). Let $\gamma : [a,b] \to M$ be a smooth curve. As seen in §3.2, its length is
(6.1.1) $L(\gamma) = \int_a^b \|\gamma'(t)\|\,dt$,
where
(6.1.2) $\|\gamma'(t)\|^2 = \langle\gamma'(t),\gamma'(t)\rangle = \gamma'(t)\cdot G(\gamma(t))\gamma'(t) = \sum_{j,k} g_{jk}(\gamma(t))\,\gamma_j'(t)\gamma_k'(t)$,
the last expression being given in a local coordinate system, with $G = (g_{jk})$ denoting the metric tensor in this coordinate system. We use $\langle\ ,\ \rangle$ to denote the inner product of two vectors defined by this metric tensor:
(6.1.3) $\langle V(x), W(x)\rangle = V(x)\cdot G(x)W(x) = \sum_{j,k} g_{jk}(x)V_j(x)W_k(x)$.
In case $M$ is a surface in $\mathbb{R}^k$, with the induced metric tensor, $\langle V(x), W(x)\rangle$ is given by the standard dot product on $\mathbb{R}^k$, applied to $V(x), W(x) \in T_xM \subset \mathbb{R}^k$.

We aim to study smooth curves $\gamma$ that are length minimizing, among curves with the same endpoints. Such curves are called geodesics. They have the following property. Let $\gamma_s$ be a smooth family of curves satisfying
(6.1.4) $\gamma_s : [a,b] \to M$, $\quad \gamma_s(a) \equiv p$, $\quad \gamma_s(b) \equiv q$,
with $\gamma_0 = \gamma$. Then $L(\gamma_s) \ge L(\gamma_0)$ for all $s$, so
(6.1.5) $\dfrac{d}{ds}L(\gamma_s)\Big|_{s=0} = 0$.
In other words, $\gamma_0$ is a critical point of the length functional. We define geodesic to include all such critical paths. Later we will investigate the length-minimizing properties of general geodesics.
Note that $L(\gamma)$ is unchanged under reparametrization. We will parametrize $\gamma_0$ so that $\|\gamma_0'(t)\| = c_0$ is constant. Then

(6.1.6) $\dfrac{d}{ds}L(\gamma_s)\Big|_{s=0} = \dfrac{d}{ds}\displaystyle\int_a^b \langle\gamma_s'(t),\gamma_s'(t)\rangle^{1/2}\,dt\Big|_{s=0} = \dfrac{1}{2c_0}\displaystyle\int_a^b \dfrac{\partial}{\partial s}\langle\gamma_s'(t),\gamma_s'(t)\rangle\,dt\Big|_{s=0}.$

Equivalently,

(6.1.7) $\dfrac{d}{ds}L(\gamma_s)\Big|_{s=0} = \dfrac{1}{c_0}\,\dfrac{d}{ds}E(\gamma_s)\Big|_{s=0},$

where

(6.1.8) $E(\gamma_s) = \dfrac{1}{2}\displaystyle\int_a^b \|\gamma_s'(t)\|^2\,dt = \dfrac{1}{2}\displaystyle\int_a^b \langle\gamma_s'(t),\gamma_s'(t)\rangle\,dt$

is the energy of the curve $\gamma_s : [a,b] \to M$. Thus we shift from seeking a critical point of the length functional to finding a critical point of the energy functional. As we will see, such a critical path automatically has the property that $\|\gamma'(t)\|$ is constant.

Our next goal is to derive a differential equation characterizing critical points of the energy functional. We will discuss three approaches to such a geodesic equation. In all cases, we start with
(6.1.9) $\dfrac{d}{ds}E(\gamma_s)\Big|_{s=0} = \displaystyle\int_a^b \Big\langle\dfrac{\partial\gamma_s'}{\partial s}(t), \gamma_s'(t)\Big\rangle\,dt\Big|_{s=0}.$

In our first approach, we assume M is a smooth surface in $\mathbb{R}^k$. Then we have from (6.1.9) that

(6.1.10) $\dfrac{d}{ds}E(\gamma_s)\Big|_{s=0} = \displaystyle\int_a^b \dfrac{\partial\gamma_s'}{\partial s}(t)\cdot\gamma_s'(t)\,dt\Big|_{s=0}.$

Using the identity

(6.1.11) $\dfrac{\partial}{\partial t}\Big(\dfrac{\partial\gamma_s}{\partial s}(t)\cdot\gamma_s'(t)\Big) = \dfrac{\partial\gamma_s'}{\partial s}(t)\cdot\gamma_s'(t) + \dfrac{\partial\gamma_s}{\partial s}(t)\cdot\gamma_s''(t),$

together with the fundamental theorem of calculus and the fact that (given (6.1.4))

(6.1.12) $\dfrac{\partial\gamma_s}{\partial s}(t) = 0, \quad \text{at } t = a \text{ and } b,$

we have

(6.1.13) $\dfrac{d}{ds}E(\gamma_s)\Big|_{s=0} = -\displaystyle\int_a^b V(t)\cdot\gamma''(t)\,dt,$

where

(6.1.14) $V(t) = \dfrac{\partial}{\partial s}\gamma_s(t)\Big|_{s=0}.$
Figure 6.1.1. The projection $P(x)$ of $\mathbb{R}^k$ onto $T_xM$
Now, given any smooth $V : [a,b] \to \mathbb{R}^k$ satisfying $V(t) \in T_{\gamma_0(t)}M$ and $V(a) = V(b) = 0$, one can find a smooth family of curves $\gamma_s : [a,b] \to M$ satisfying (6.1.4), such that $\gamma_0 = \gamma$ and (6.1.14) holds. We have the following.

Proposition 6.1.1. If $M \subset \mathbb{R}^k$ is a smooth surface, a smooth curve $\gamma = \gamma_0 : [a,b] \to M$ is a critical point of the energy functional if and only if, for each $t \in (a,b)$,

(6.1.15) $\gamma''(t) \perp V(t)$ for each $V(t) \in T_{\gamma(t)}M.$

A convenient restatement of (6.1.15) can be formulated as follows. We take

(6.1.16) $P : M \to M(k,\mathbb{R}), \quad P(x) = \text{orthogonal projection of } \mathbb{R}^k \text{ onto } T_xM.$

See Figure 6.1.1. Then (6.1.15) is equivalent to the statement that

(6.1.17) $P(\gamma(t))\gamma''(t) = 0, \quad \text{for all } t \in (a,b).$

In order to get a differential equation in standard form, we complement (6.1.17) with

(6.1.18) $P^\perp(\gamma(t))\gamma'(t) = 0$

(where $P^\perp(x) = I - P(x)$), which follows from the fact that $\gamma'(t) \in T_{\gamma(t)}M$. Applying $d/dt$ to (6.1.18), and using the product rule and the chain rule, we have

(6.1.19) $0 = \dfrac{d}{dt}P^\perp(\gamma(t))\gamma'(t) = \big[DP^\perp(\gamma(t))\gamma'(t)\big]\gamma'(t) + P^\perp(\gamma(t))\gamma''(t).$
Here, for $x \in M$,

(6.1.20) $DP^\perp(x) : \mathbb{R}^k \to M(k,\mathbb{R}),$

so $DP^\perp(\gamma(t))\gamma'(t) \in M(k,\mathbb{R})$. Adding (6.1.19) to (6.1.17) gives

(6.1.21) $\gamma''(t) + \big[DP^\perp(\gamma(t))\gamma'(t)\big]\gamma'(t) = 0.$

In order to apply ODE theory to (6.1.21), it is convenient to have the following setup. Assume $M = M_0$ and that there is a diffeomorphism $E_a \times M \to \mathcal{G}$, where $E_a$ is an open ball about 0 in $\mathbb{R}^{k-n}$ and $\mathcal{G}$ is an open neighborhood of $M_0$ in $\mathbb{R}^k$. Then $\mathcal{G}$ is a union of surfaces $M_y$, $y \in E_a$. We extend P in (6.1.16) to a smooth map

(6.1.22) $P : \mathcal{G} \to M(k,\mathbb{R}), \quad P(x) = \text{orthogonal projection of } \mathbb{R}^k \text{ onto } T_xM_y, \text{ if } x \in M_y.$

Thus we can regard (6.1.21) as an ODE on $\mathcal{G}$. By results of §2.3, it has a unique short time solution, given initial data $\gamma(a) = p \in \mathcal{G}$, $\gamma'(a) = v \in \mathbb{R}^k$. The following result shows when solutions to (6.1.21) give geodesics on M.

Proposition 6.1.2. Assume $\gamma(t)$ solves (6.1.21), on an interval I, containing a, with initial data

(6.1.23) $\gamma(a) = p \in M, \quad \gamma'(a) = v \in T_pM.$

Then $\gamma(t)$ satisfies (6.1.17)-(6.1.18), and $\gamma(t) \in M$, for $t \in I$.
Proof. To start, we derive an identity based on the fact that each $P(x)$ is a projection. Applying $d/dt$ to

(6.1.24) $P(\gamma(t))P(\gamma(t)) = P(\gamma(t))$

gives

(6.1.25) $\big[DP(\gamma(t))\gamma'(t)\big]P(\gamma(t)) + P(\gamma(t))\big[DP(\gamma(t))\gamma'(t)\big] = DP(\gamma(t))\gamma'(t),$

hence

(6.1.26) $P(\gamma(t))\big[DP(\gamma(t))\gamma'(t)\big] = \big[DP(\gamma(t))\gamma'(t)\big]P^\perp(\gamma(t)).$

Meanwhile, applying $P(\gamma(t))$ to (6.1.21) yields

(6.1.27) $P(\gamma(t))\gamma''(t) = P(\gamma(t))\big[DP(\gamma(t))\gamma'(t)\big]\gamma'(t).$

Hence, by (6.1.26),

(6.1.28) $P(\gamma(t))\gamma''(t) = \big[DP(\gamma(t))\gamma'(t)\big]P^\perp(\gamma(t))\gamma'(t).$

Now set

(6.1.29) $\sigma(t) = P^\perp(\gamma(t))\gamma'(t).$

Applying $d/dt$, we have

(6.1.30) $\sigma'(t) = \big[DP^\perp(\gamma(t))\gamma'(t)\big]\gamma'(t) + P^\perp(\gamma(t))\gamma''(t) = -P(\gamma(t))\gamma''(t) = -\big[DP(\gamma(t))\gamma'(t)\big]\sigma(t),$
the second identity by (6.1.21) and the third identity by (6.1.28). Hence $\sigma$ satisfies a first-order, homogeneous, linear ODE, so

(6.1.31) $\gamma'(a) \in T_{\gamma(a)}M \Longrightarrow \sigma(a) = 0 \Longrightarrow \sigma(t) \equiv 0.$

From (6.1.28)-(6.1.29), it is clear that

(6.1.32) $\sigma(t) = 0 \Longrightarrow P(\gamma(t))\gamma''(t) = 0,$

so we have (6.1.17)-(6.1.18), and (6.1.18) yields $\gamma(t) \in M$ for all $t \in I$. □

Corollary 6.1.3. In the setting of Proposition 6.1.2, if (6.1.23) holds, then $\|\gamma'(t)\|$ is constant.

Proof. We have

(6.1.33) $\dfrac{d}{dt}\langle\gamma'(t),\gamma'(t)\rangle = 2\langle\gamma''(t),\gamma'(t)\rangle,$

which vanishes when (6.1.17)-(6.1.18) hold. □
We give an alternative presentation of the geodesic equation in case $k = n+1$, i.e., M has codimension 1 in $\mathbb{R}^k$. Suppose M is defined by $u(x) = c$, with $\nabla u(x) \ne 0$ on M. Then (6.1.17) is equivalent to

(6.1.34) $\gamma''(t) = K(t)\,\nabla u(\gamma(t)),$

for a real valued function K, which remains to be determined. Meanwhile, the condition that $u(\gamma(t)) = c$ implies

(6.1.35) $\langle\gamma'(t), \nabla u(\gamma(t))\rangle = 0$

(cf. (6.1.18)), and differentiating this gives

(6.1.36) $\langle\gamma''(t), \nabla u(\gamma(t))\rangle = -\langle\gamma'(t), D^2u(\gamma(t))\gamma'(t)\rangle,$

where $D^2u$ is the $k \times k$ matrix of second-order partial derivatives of u. Comparing (6.1.34) and (6.1.36) gives K(t), and we have the ODE

(6.1.37) $\gamma''(t) = -\dfrac{\langle\gamma'(t), D^2u(\gamma(t))\gamma'(t)\rangle}{\|\nabla u(\gamma(t))\|^2}\,\nabla u(\gamma(t)),$

for a geodesic $\gamma$ lying in M.
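As a numerical companion (an illustrative sketch, not part of the text), one can integrate (6.1.37) directly. For the unit sphere, defined by $u(x) = |x|^2 = 1$, we have $\nabla u = 2x$ and $D^2u = 2I$, so (6.1.37) reduces to $\gamma'' = -\|\gamma'\|^2\gamma$, whose unit-speed solutions are great circles. All names below are illustrative, and the integrator is a hand-rolled RK4 step:

```python
import numpy as np

# Geodesics on M = {u = c} via (6.1.37), illustrated for the unit sphere:
# u(x) = |x|^2, grad u = 2x, D^2u = 2I, so gamma'' = -|gamma'|^2 gamma.
grad = lambda x: 2 * x
hess = lambda x: 2 * np.eye(3)

def acc(y, v):
    # Right side of (6.1.37): -(v . D^2u(y) v / |grad u(y)|^2) grad u(y)
    g = grad(y)
    return -((v @ hess(y) @ v) / (g @ g)) * g

def rk4_step(y, v, h):
    k1y, k1v = v, acc(y, v)
    k2y, k2v = v + 0.5*h*k1v, acc(y + 0.5*h*k1y, v + 0.5*h*k1v)
    k3y, k3v = v + 0.5*h*k2v, acc(y + 0.5*h*k2y, v + 0.5*h*k2v)
    k4y, k4v = v + h*k3v, acc(y + h*k3y, v + h*k3v)
    return (y + (h/6)*(k1y + 2*k2y + 2*k3y + k4y),
            v + (h/6)*(k1v + 2*k2v + 2*k3v + k4v))

y = np.array([1.0, 0.0, 0.0])       # gamma(0) = p, a point on the sphere
v = np.array([0.0, 1.0, 0.0])       # gamma'(0) = v, tangent, unit length
h = 1e-3
for _ in range(1571):               # integrate to t ~ pi/2
    y, v = rk4_step(y, v, h)

print(y)                  # ~ (0, 1, 0): a quarter of a great circle
print(np.linalg.norm(v))  # ~ 1: constant speed
```

The final checks reflect Proposition 6.1.2 (the solution stays on M) and Corollary 6.1.3 (the speed is constant).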
For our second approach to the geodesic equation, we let M be a general Riemannian manifold. We assume the curves $\gamma_s : [a,b] \to M$, satisfying (6.1.4), are contained in a single coordinate patch, on which the metric tensor is given by the $n \times n$ symmetric, positive definite matrix $G = (g_{jk})$. We use the following notation:

(6.1.38) $\gamma_s(t) = x_s(t) = \big(x_{1s}(t),\dots,x_{ns}(t)\big), \quad \dfrac{\partial}{\partial t}\gamma_s(t) = \dot x_s(t) = \big(\dot x_{1s}(t),\dots,\dot x_{ns}(t)\big).$

We use the following summation convention, converting (6.1.2) to

(6.1.39) $\|\dot x_s(t)\|^2 = g_{jk}(x_s(t))\,\dot x_{js}(t)\dot x_{ks}(t).$
(We sum over repeated indices.) Thus the energy functional is

(6.1.40) $E(x_s) = \dfrac{1}{2}\displaystyle\int_a^b g_{jk}(x_s(t))\,\dot x_{js}(t)\dot x_{ks}(t)\,dt.$

Hence, with

(6.1.41) $v_j(t) = \dfrac{\partial}{\partial s}x_{js}(t)\Big|_{s=0}$ and $x(t) = x_0(t),$

we have

(6.1.42) $\dfrac{d}{ds}E(x_s)\Big|_{s=0} = \displaystyle\int_a^b \Big[g_{jk}(x(t))\,\dot v_j(t)\dot x_k(t) + \dfrac{1}{2}v_j(t)\,\dfrac{\partial g_{k\ell}}{\partial x_j}\,\dot x_k(t)\dot x_\ell(t)\Big]\,dt,$

where we have made use of the symmetry $g_{jk} = g_{kj}$. Now, in analogy with (6.1.11), we can write

(6.1.43) $\dfrac{d}{dt}\big(g_{jk}(x(t))v_j(t)\dot x_k(t)\big) = g_{jk}(x(t))\,\dot v_j(t)\dot x_k(t) + g_{jk}(x(t))v_j(t)\ddot x_k(t) + \dot x_\ell(t)\,\dfrac{\partial g_{jk}}{\partial x_\ell}\,v_j(t)\dot x_k(t).$

Thus, by the fundamental theorem of calculus,

(6.1.44) $\dfrac{d}{ds}E(x_s)\Big|_{s=0} = -\displaystyle\int_a^b \Big[g_{jk}v_j\ddot x_k + \dot x_\ell\,\dfrac{\partial g_{jk}}{\partial x_\ell}\,v_j\dot x_k - \dfrac{1}{2}\,\dfrac{\partial g_{k\ell}}{\partial x_j}\,v_j\dot x_k\dot x_\ell\Big]\,dt,$

and the stationary condition $(d/ds)E(x_s)|_{s=0} = 0$ for all variations of the form (6.1.4) becomes

(6.1.45) $g_{jk}\ddot x_k + \Big(\dfrac{\partial g_{jk}}{\partial x_\ell} - \dfrac{1}{2}\,\dfrac{\partial g_{k\ell}}{\partial x_j}\Big)\dot x_k\dot x_\ell = 0.$

Symmetrizing the quantity in parentheses with respect to k and $\ell$ yields the geodesic equation

(6.1.46) $\ddot x_\ell + \Gamma^\ell_{jk}\dot x_j\dot x_k = 0,$

where $\Gamma^\ell_{jk}$ is defined by

(6.1.47) $\Gamma^\ell_{jk} = \dfrac{1}{2}g^{\ell m}\Big(\dfrac{\partial g_{jm}}{\partial x_k} + \dfrac{\partial g_{km}}{\partial x_j} - \dfrac{\partial g_{jk}}{\partial x_m}\Big).$

The functions $\Gamma^\ell_{jk}$ are called the Christoffel symbols. We will see more of them. We next convert (6.1.46) to a first-order system. It is convenient to set

(6.1.48) $\xi_\ell = g_{\ell k}(x)\dot x_k.$
Then consider the system
(6.1.49) $\dot x_\ell = g^{\ell k}(x)\xi_k, \quad \dot\xi_\ell = -\dfrac{1}{2}\,\dfrac{\partial g^{jk}}{\partial x_\ell}\,\xi_j\xi_k,$

where $(g^{jk})$ is the matrix inverse to $(g_{jk})$. If we apply $d/dt$ to the first equation and plug in the second one for $\dot\xi_k$, we get

(6.1.50) $\ddot x_\ell = -\Big(\dfrac{1}{2}g^{\ell j}\,\dfrac{\partial g^{ik}}{\partial x_j} - g^{kj}\,\dfrac{\partial g^{i\ell}}{\partial x_j}\Big)\xi_i\xi_k,$

and using (6.1.48), together with

(6.1.51) $\dfrac{\partial}{\partial x_j}G(x)^{-1} = -G(x)^{-1}\,\dfrac{\partial G}{\partial x_j}\,G(x)^{-1},$

straightforward manipulations yield the geodesic equation (6.1.46). A special structure of the system (6.1.49) is revealed by writing this system as

(6.1.52) $\dot x_\ell = \dfrac{\partial f}{\partial\xi_\ell}(x,\xi), \quad \dot\xi_\ell = -\dfrac{\partial f}{\partial x_\ell}(x,\xi),$

where

(6.1.53) $f(x,\xi) = \dfrac{1}{2}g^{jk}(x)\xi_j\xi_k.$

A system of ODEs of the form (6.1.52) is called a Hamiltonian system. Note that if (6.1.52) holds, then

(6.1.54) $\dfrac{d}{dt}f(x,\xi) = \dfrac{\partial f}{\partial x_\ell}\dot x_\ell + \dfrac{\partial f}{\partial\xi_\ell}\dot\xi_\ell = -\dot\xi_\ell\dot x_\ell + \dot x_\ell\dot\xi_\ell = 0.$

Hence $f(x(t),\xi(t))$ is constant for a solution to (6.1.52). Since

(6.1.55) $g^{jk}(x)\xi_j\xi_k = \xi\cdot G(x)^{-1}\xi = G(x)\dot x\cdot G(x)^{-1}G(x)\dot x = g_{jk}(x)\dot x_j\dot x_k,$

this implies that the solution curve to the geodesic equation (6.1.46) has constant speed, parallel to the result of Corollary 6.1.3.

The passage from the geodesic equation (6.1.46) to the system (6.1.52) is a special case of a more general passage from a Lagrangian equation to an associated Hamiltonian system. More on this can be found in [50, Chapter 4, §7] and in [46, Chapter 1, §12].

We now discuss a third approach to the geodesic equation. Again, M is a smooth n-dimensional Riemannian manifold, $\gamma_s : [a,b] \to M$ is a smooth family of curves satisfying (6.1.4), and V(t) is defined as in (6.1.14). Let

(6.1.56) $T = \gamma_s'(t).$
Then, parallel to (6.1.9),

(6.1.57) $\dfrac{d}{ds}E(\gamma_s) = \dfrac{1}{2}\displaystyle\int_a^b V\langle T,T\rangle\,dt.$

Now we need a generalization of $(\partial/\partial s)\gamma_s'(t)$ and of the formulas (6.1.10)-(6.1.11). To achieve this, we introduce the notion of a covariant derivative. If X and Y are vector fields on M, the covariant derivative $\nabla_XY$ is a vector field on M. The following properties are required. We assume that $\nabla_XY$ is additive in both X and Y, that

(6.1.58) $\nabla_{fX}Y = f\nabla_XY,$

for $f \in C^\infty(M)$, and that

(6.1.59) $\nabla_X(fY) = f\nabla_XY + (Xf)Y.$

Thus $\nabla_X$ acts as a derivation. The operator $\nabla_X$ is also required to have the following relation to the Riemannian metric:

(6.1.60) $X\langle Y,Z\rangle = \langle\nabla_XY, Z\rangle + \langle Y, \nabla_XZ\rangle.$

One further property will uniquely specify $\nabla$:

(6.1.61) $\nabla_XY - \nabla_YX = [X,Y].$

If all these properties hold, we say $\nabla$ is the Levi-Civita covariant derivative on the Riemannian manifold M. We have the following basic existence and uniqueness result.
Proposition 6.1.4. Associated with a Riemannian metric is a unique Levi-Civita covariant derivative, given by

(6.1.62) $2\langle\nabla_XY, Z\rangle = X\langle Y,Z\rangle + Y\langle X,Z\rangle - Z\langle X,Y\rangle + \langle[X,Y],Z\rangle - \langle[X,Z],Y\rangle - \langle[Y,Z],X\rangle.$

Proof. To obtain the formula (6.1.62), cyclically permute X, Y, and Z in (6.1.60) and take the appropriate alternating sum, using (6.1.61) to cancel out all terms involving $\nabla$ but two copies of $\langle\nabla_XY, Z\rangle$. This derives the formula and establishes uniqueness. On the other hand, if (6.1.62) is taken as the definition of $\nabla_XY$, then verification of the properties (6.1.58)-(6.1.61) is a straightforward exercise. □

To see that the passage from (6.1.57) to the next step ((6.1.64) below) generalizes the passage from (6.1.9) to (6.1.10), we note the following.

Proposition 6.1.5. If M is a smooth surface in $\mathbb{R}^k$, with the induced Riemannian metric, and if $\nabla^M$ is its Levi-Civita covariant derivative, then, for X and Y tangent to M,

(6.1.63) $\nabla^M_XY = P(x)D_XY, \quad \text{at } x \in M,$

where $D_X$ acts componentwise on Y and $P(x)$ is the projection (6.1.16).

Proof. It is routine to verify that the right side of (6.1.63) satisfies all the conditions described in (6.1.58)-(6.1.61) that define the Levi-Civita covariant derivative. □
The identity (6.1.63) will also play an important role in the study of curvature, in the next section.

We resume our analysis of (6.1.57), which becomes

(6.1.64) $\dfrac{d}{ds}E(\gamma_s)\Big|_{s=0} = \displaystyle\int_a^b \langle\nabla_VT, T\rangle\,dt.$

Since $\partial/\partial s$ and $\partial/\partial t$ commute, we have $[V,T] = 0$ on $\gamma_0$, and (6.1.61) implies

(6.1.65) $\dfrac{d}{ds}E(\gamma_s)\Big|_{s=0} = \displaystyle\int_a^b \langle\nabla_TV, T\rangle\,dt.$

The replacement for (6.1.11) is

(6.1.66) $T\langle V,T\rangle = \langle\nabla_TV, T\rangle + \langle V, \nabla_TT\rangle,$

so, by the fundamental theorem of calculus,

(6.1.67) $\dfrac{d}{ds}E(\gamma_s)\Big|_{s=0} = -\displaystyle\int_a^b \langle V, \nabla_TT\rangle\,dt.$

If this is to vanish for all smooth vector fields V over $\gamma_0$, vanishing at p and q, we must have

(6.1.68) $\nabla_TT = 0.$

We show how this leads again to (6.1.46). If M has a coordinate chart $\mathcal{O} \subset \mathbb{R}^n$ that carries a Riemannian metric $(g_{jk})$ and a corresponding Levi-Civita covariant derivative, the Christoffel symbols can be defined by

(6.1.69) $\nabla_{D_k}D_j = \displaystyle\sum_\ell \Gamma^\ell_{jk}D_\ell,$

where $D_k = \partial/\partial x_k$. The formula (6.1.62) implies

(6.1.70) $\Gamma^\ell_{ij} = \dfrac{1}{2}\Big(\dfrac{\partial g_{jk}}{\partial x_i} + \dfrac{\partial g_{ik}}{\partial x_j} - \dfrac{\partial g_{ij}}{\partial x_k}\Big)g^{k\ell},$

in agreement with (6.1.47). We can rewrite the geodesic equation (6.1.68) for $\gamma(t) = x(t)$ as follows. With $x = (x_1,\dots,x_n)$ and $T = (\dot x_1,\dots,\dot x_n)$, we have

(6.1.71) $0 = \nabla_T\Big(\displaystyle\sum_\ell \dot x_\ell D_\ell\Big) = \displaystyle\sum_\ell\big(\ddot x_\ell D_\ell + \dot x_\ell\nabla_TD_\ell\big).$

In view of (6.1.69), this becomes

(6.1.72) $\ddot x_\ell + \Gamma^\ell_{jk}\dot x_j\dot x_k = 0,$

where we bring back the summation convention. We have recovered the geodesic equation (6.1.46).

Note that if $T = \gamma'(t)$, then $T\langle T,T\rangle = 2\langle\nabla_TT, T\rangle = 0$, so if (6.1.68) holds, $\gamma(t)$ automatically has constant speed.

Shortly we will verify that a curve satisfying the geodesic equation is indeed locally length minimizing. For a given $p \in M$, the exponential map

(6.1.73) $\mathrm{Exp}_p : U \to M$
Figure 6.1.2. The exponential map $\mathrm{Exp}_p(tv) = \gamma_v(t)$
is defined on a neighborhood of 0 in $T_pM \approx \mathbb{R}^n$ by

(6.1.74) $\mathrm{Exp}_p(v) = \gamma_v(1),$

where $\gamma_v(t)$ is the unique constant-speed geodesic satisfying

(6.1.75) $\gamma_v(0) = p, \quad \gamma_v'(0) = v.$

See Figure 6.1.2. Note that $\mathrm{Exp}_p(tv) = \gamma_v(t)$. It is clear that $\mathrm{Exp}_p$ is well defined and smooth on a sufficiently small neighborhood U of 0 in $T_pM$, and its derivative at 0 is the identity. Thus, perhaps shrinking U, we have that $\mathrm{Exp}_p$ is a diffeomorphism of U onto a neighborhood $\mathcal{O}$ of p in M. This provides what is called an exponential coordinate system on a neighborhood of $p \in M$. Clearly, the geodesics through p are the straight lines through the origin in this coordinate system.

We now establish a result, known as the Gauss lemma, which will imply that each geodesic is locally length minimizing. For $a > 0$ small, let $\Sigma_a = \{v \in T_pM : \|v\| = a\}$, and let $S_a = \mathrm{Exp}_p(\Sigma_a)$.
Proposition 6.1.6. Any unit-speed geodesic through p hitting $S_a$ at $t = a$ is orthogonal to $S_a$.

Proof. If $\gamma_0(t)$ is a unit-speed geodesic, $\gamma_0(0) = p$, $\gamma_0(a) = q \in S_a$, and $V \in T_qS_a$ is tangent to $S_a$, there is a smooth family of unit-speed geodesics $\gamma_s(t)$, such that $\gamma_s(0) = p$ and $(\partial/\partial s)\gamma_s(a)|_{s=0} = V$. Since $L(\gamma_s)$, and hence $E(\gamma_s)$, is constant in s, we can use (6.1.65)-(6.1.66)
to conclude that

(6.1.76) $0 = \displaystyle\int_0^a T\langle V,T\rangle\,dt = \langle V, \gamma_0'(a)\rangle,$

which proves the proposition. □
Corollary 6.1.7. Suppose $\mathrm{Exp}_p : E_a \to M$ is a diffeomorphism of $E_a = \{v \in T_pM : \|v\| \le a\}$ onto its image $\mathcal{B}$. Then, for each $q \in \mathcal{B}$, $q = \mathrm{Exp}_p(w)$, the curve $\gamma(t) = \mathrm{Exp}_p(tw)$, $0 \le t \le 1$, is the unique shortest path from p to q.

Proof. We can assume $\|w\| = a$. Let $\sigma : [0,1] \to M$ be another constant speed path from p to q, say $\|\sigma'(t)\| = b$. We can assume $\sigma(t) \in \mathcal{B}$ for all $t \in [0,1]$; otherwise restrict $\sigma$ to $[0,\beta]$, where $\beta = \inf\{t \ge 0 : \sigma(t) \in \partial\mathcal{B}\}$, and the argument below will show this segment has length $\ge a$.

For all t such that $\sigma(t) \in \mathcal{B} \setminus p$, we can write $\sigma(t) = \mathrm{Exp}_p(r(t)\omega(t))$, for uniquely determined $\omega(t)$ in the unit sphere of $T_pM$ and $r(t) \in (0,a]$. If we pull the metric tensor of M back to $E_a$, we have

(6.1.77) $\|\sigma'(t)\|^2 = r'(t)^2 + r(t)^2\|\omega'(t)\|^2,$

by the Gauss lemma. Hence

(6.1.78) $L(\sigma) = \displaystyle\int_0^1 \|\sigma'(t)\|\,dt = \dfrac{1}{b}\displaystyle\int_0^1 \|\sigma'(t)\|^2\,dt \ge \dfrac{1}{b}\displaystyle\int_0^1 r'(t)^2\,dt.$

Cauchy's inequality yields

(6.1.79) $\displaystyle\int_0^1 |r'(t)|\,dt \le \Big(\displaystyle\int_0^1 r'(t)^2\,dt\Big)^{1/2},$

so the last quantity in (6.1.78) is $\ge a^2/b$. This implies $b \ge a$, with equality only if $\|\omega'(t)\| = 0$ for all t. The corollary is proved. □
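By Corollary 6.1.7, $\mathrm{Exp}_p(tv)$ moves distance $t\|v\|$ from p as long as it stays in the ball covered by the exponential map. This can be checked numerically; the sketch below (illustrative, not from the text) takes the round unit sphere in coordinates $(\theta,\varphi)$ with metric $\mathrm{diag}(1,\sin^2\theta)$, computes the Christoffel symbols (6.1.47) by finite differences, integrates the geodesic equation (6.1.46) to realize $\mathrm{Exp}_p$, and measures the resulting distance through the standard embedding in $\mathbb{R}^3$:

```python
import numpy as np

def metric(x):
    # Round unit sphere in coordinates x = (theta, phi): g = diag(1, sin^2 theta)
    return np.diag([1.0, np.sin(x[0])**2])

def christoffel(x, eps=1e-6):
    # Gamma^l_{jk} from (6.1.47), with derivatives of g_{jk} by central differences
    n = len(x)
    I = np.eye(n)
    dg = np.array([(metric(x + eps*I[m]) - metric(x - eps*I[m])) / (2*eps)
                   for m in range(n)])                 # dg[m,j,k] = d g_{jk}/dx_m
    ginv = np.linalg.inv(metric(x))
    Gam = np.zeros((n, n, n))
    for l in range(n):
        for j in range(n):
            for k in range(n):
                Gam[l, j, k] = 0.5 * sum(
                    ginv[l, m] * (dg[k, j, m] + dg[j, k, m] - dg[m, j, k])
                    for m in range(n))
    return Gam

def exp_map(p, v, t, steps=2000):
    # Exp_p(tv): integrate the geodesic equation (6.1.46) by RK4
    def acc(x, xd):
        return -np.einsum('ljk,j,k->l', christoffel(x), xd, xd)
    h = t / steps
    x, xd = np.array(p, float), np.array(v, float)
    for _ in range(steps):
        k1x, k1v = xd, acc(x, xd)
        k2x, k2v = xd + 0.5*h*k1v, acc(x + 0.5*h*k1x, xd + 0.5*h*k1v)
        k3x, k3v = xd + 0.5*h*k2v, acc(x + 0.5*h*k2x, xd + 0.5*h*k2v)
        k4x, k4v = xd + h*k3v, acc(x + h*k3x, xd + h*k3v)
        x = x + (h/6)*(k1x + 2*k2x + 2*k3x + k4x)
        xd = xd + (h/6)*(k1v + 2*k2v + 2*k3v + k4v)
    return x

embed = lambda x: np.array([np.sin(x[0])*np.cos(x[1]),
                            np.sin(x[0])*np.sin(x[1]),
                            np.cos(x[0])])

p = np.array([1.0, 0.0])
v = np.array([0.6, 0.8/np.sin(1.0)])   # g-length 1: 0.36 + sin^2 * (0.8/sin)^2 = 1
q = exp_map(p, v, t=1.0)
dist = np.arccos(np.clip(embed(p) @ embed(q), -1.0, 1.0))
print(dist)   # ~ 1.0: unit-speed geodesics cover unit distance per unit time
```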
The following is a useful converse.

Proposition 6.1.8. Let $\gamma : [0,1] \to M$ be a constant speed Lipschitz curve from p to q that is absolutely length minimizing. Then $\gamma$ is a smooth curve satisfying the geodesic equation.

Proof. We make use of the following fact, which will be established below. Namely, there exists $a > 0$ such that, for each point $x \in M$, $\mathrm{Exp}_x : E_a \to M$ is a diffeomorphism of $E_a$ onto its image (and a is independent of $x \in M$). So choose $t_0 \in [0,1]$ and consider $x_0 = \gamma(t_0)$. The hypothesis implies that $\gamma$ must be a length-minimizing curve from $\gamma(t)$ to $\gamma(t_0)$, for all $t \in [0,1]$. By Corollary 6.1.7, $\gamma$ coincides with a geodesic for $t \in [t_0, t_0+\alpha]$ and for $t \in [t_0-\beta, t_0]$, where $t_0+\alpha = \min(t_0+a, 1)$ and $t_0-\beta = \max(t_0-a, 0)$. We need only show that if $t_0 \in (0,1)$, these two geodesic segments fit together smoothly, i.e., that $\gamma$ is smooth in a neighborhood of $t_0$. To see this, pick $\varepsilon > 0$ such that $\varepsilon < \min(t_0, a)$, and consider $t_1 = t_0 - \varepsilon$. The same argument as above applied to $t_1$ shows that $\gamma$ coincides with a smooth geodesic on a segment including $t_0$ in its interior, so we are done. □
The asserted lower bound on a follows from compactness plus the following observation. Given $p \in M$, there is a neighborhood $\mathcal{O}$ of (p,0) in TM on which

(6.1.80) $\Gamma : \mathcal{O} \to M, \quad \Gamma(x,v) = \mathrm{Exp}_x(v), \quad (v \in T_xM),$

is defined. Let us set

(6.1.81) $\Psi(x,v) = (x, \mathrm{Exp}_x(v)), \quad \Psi : \mathcal{O} \to M \times M.$

We readily compute that

(6.1.82) $D\Psi(p,0) = \begin{pmatrix} I & 0 \\ I & I \end{pmatrix},$

as a map on $T_pM \oplus T_pM$, where we use $\mathrm{Exp}_p$ to identify $T_{(p,0)}T_pM \approx T_pM \oplus T_pM \approx T_{(p,p)}(M \times M)$. Hence the inverse function theorem implies that $\Psi$ is a diffeomorphism from a neighborhood of (p,0) in TM onto a neighborhood of (p,p) in $M \times M$.

Let us remark that though a geodesic is locally length minimizing, it need not be globally length minimizing. Easy examples arise when M is the standard unit sphere in Euclidean space, for which the great circles are the geodesics.

Exercises
1. Let $u : \mathbb{R}^k \to \mathbb{R}$ be smooth, and assume $\gamma : (a,b) \to \mathbb{R}^k$ is a smooth solution to (6.1.37). Note that, in general,

$\dfrac{d}{dt}u(\gamma(t)) = \langle\gamma'(t), \nabla u(\gamma(t))\rangle.$

Show that (6.1.37) implies

(6.1.83) $\dfrac{d}{dt}\langle\gamma'(t), \nabla u(\gamma(t))\rangle = 0.$

Deduce that, if $t_0 \in (a,b)$,

$\gamma'(t_0) \perp \nabla u(\gamma(t_0)) \Longrightarrow \dfrac{d}{dt}u(\gamma(t)) \equiv 0.$

2. In the setting of Exercise 1, note that

$\dfrac{d}{dt}\langle\gamma'(t),\gamma'(t)\rangle = 2\langle\gamma''(t),\gamma'(t)\rangle.$

Deduce that if (6.1.37) holds, then

$\gamma'(t_0) \perp \nabla u(\gamma(t_0)) \Longrightarrow \|\gamma'(t)\| \equiv \|\gamma'(t_0)\|.$

Hint. Use (6.1.83) again.
3. Suppose M is a smooth surface of dimension k-1 in $\mathbb{R}^k$. Let $N : M \to \mathbb{R}^k$ be a smooth unit normal field to M. Show that an alternative form of the geodesic equation (6.1.37) is

$\gamma''(t) = -\Big\langle\gamma'(t), \dfrac{d}{dt}N(\gamma(t))\Big\rangle N(\gamma(t)).$

4. We say a two-dimensional Riemannian manifold has a Clairaut parametrization if there are coordinates (u,v) in which the metric tensor takes the form

$G(u,v) = \begin{pmatrix} G_1(u) & 0 \\ 0 & G_2(u) \end{pmatrix},$

with no v dependence. Compute $\Gamma^\ell_{jk}$ in this case, and show that the geodesic equations (6.1.46) become

$\ddot u + \dfrac{1}{2}\dfrac{G_1'}{G_1}\dot u^2 - \dfrac{1}{2}\dfrac{G_2'}{G_1}\dot v^2 = 0, \qquad \ddot v + \dfrac{G_2'}{G_2}\dot u\dot v = 0.$

Note that the second equation is equivalent to

$\dfrac{d}{dt}\big(G_2(u)\dot v\big) = 0,$ hence $G_2(u)\dot v = a.$

Use this and the constant speed condition,

$G_1(u)\dot u^2 + G_2(u)\dot v^2 = c^2,$

to get a first-order ODE for u, to which you can apply separation of variables.

5. Show that $\nabla_TT = \big(\ddot x_\ell + \Gamma^\ell_{jk}\dot x_j\dot x_k\big)D_\ell$, when $T = \dot x_kD_k$ is the velocity field of a curve x(t).
Hint. Start with $\nabla_T(\dot x_kD_k)$ and plug in (6.1.69).
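The conservation law $G_2(u)\dot v = a$ in Exercise 4 (Clairaut's relation) can be observed numerically. The sketch below (illustrative, not from the text) takes the round sphere, $G_1 = 1$, $G_2(u) = \sin^2 u$, integrates the two geodesic equations of Exercise 4 by RK4, and monitors $G_2(u)\dot v$:

```python
import numpy as np

# Clairaut parametrization of the round sphere: (u, v) = (theta, phi),
# G1(u) = 1, G2(u) = sin^2 u.
G1, dG1 = (lambda u: 1.0), (lambda u: 0.0)
G2, dG2 = (lambda u: np.sin(u)**2), (lambda u: 2*np.sin(u)*np.cos(u))

def rhs(y):
    # State y = (u, v, udot, vdot); the geodesic equations of Exercise 4
    u, vv, ud, vd = y
    udd = -0.5*(dG1(u)/G1(u))*ud**2 + 0.5*(dG2(u)/G1(u))*vd**2
    vdd = -(dG2(u)/G2(u))*ud*vd
    return np.array([ud, vd, udd, vdd])

y = np.array([1.2, 0.0, 0.3, 0.9])    # initial data, staying away from the poles
h = 1e-3
vals = [G2(y[0]) * y[3]]              # track the Clairaut quantity G2(u) vdot
for _ in range(3000):
    k1 = rhs(y)
    k2 = rhs(y + 0.5*h*k1)
    k3 = rhs(y + 0.5*h*k2)
    k4 = rhs(y + h*k3)
    y = y + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    vals.append(G2(y[0]) * y[3])

print(max(vals) - min(vals))   # ~ 0: G2(u) vdot is conserved along the geodesic
```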
6. Consider the exponential map $\mathrm{Exp}_p : U \to M$ defined in (6.1.74)-(6.1.75). Show that, in this coordinate system,

$\Gamma^a_{bk}(p) = 0.$

Hint. Since the line through the origin in any direction $aD_j + bD_k$ is a geodesic, we have $\Gamma^\ell_{jk}(p)v_jv_k = 0$ for every $v = ae_j + be_k$; polarization then gives the result.

7. In the setting of Exercise 6, deduce that at the center of an exponential coordinate system,

$\dfrac{\partial g_{jk}}{\partial x_\ell}(p) = 0, \quad \forall\, j, k, \ell.$
8. Let M be a compact, connected Riemannian manifold, and take $p, q \in M$, $p \ne q$. Let $\mathcal{F}$ denote the set of smooth curves $\sigma : [0,1] \to M$ satisfying $\sigma(0) = p$, $\sigma(1) = q$, $\|\sigma'(t)\|$ independent of t. Let $d(p,q) = \inf\{L(\sigma) : \sigma \in \mathcal{F}\}$, and take $\sigma_\nu \in \mathcal{F}$ such that $L(\sigma_\nu) \to d(p,q)$. Use the Arzela-Ascoli theorem to show that there is a uniformly convergent subsequence $\sigma_{\nu_k} \to \gamma : [0,1] \to M$, $\gamma(0) = p$, $\gamma(1) = q$, such that $\gamma$ is length minimizing, among such curves. Use Proposition 6.1.8 to establish that $\gamma$ is a geodesic from p to q.
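As a numerical companion to the Hamiltonian formulation (6.1.49)-(6.1.55) (an illustrative sketch, not part of the text), the code below runs Hamilton's equations (6.1.52) for the Poincare upper half-plane metric $g_{jk} = \delta_{jk}/y^2$, for which $f = \frac{1}{2}y^2|\xi|^2$. The unit-speed geodesic through (0,1) with horizontal initial covector is known to be the unit semicircle, and both the conservation of f asserted in (6.1.54) and the semicircle are checked:

```python
import numpy as np

def f(x, xi):
    # Hamiltonian (6.1.53) for the upper half-plane, g_{jk} = delta_{jk}/y^2:
    # g^{jk} = y^2 delta_{jk}, so f(x, xi) = (1/2) y^2 |xi|^2, with y = x[1]
    return 0.5 * x[1]**2 * (xi @ xi)

def rhs(x, xi, eps=1e-6):
    # Hamilton's equations (6.1.52), with df/dx, df/dxi by central differences
    n = len(x)
    I = np.eye(n)
    xd = np.array([(f(x, xi + eps*I[l]) - f(x, xi - eps*I[l])) / (2*eps)
                   for l in range(n)])
    xid = np.array([-(f(x + eps*I[l], xi) - f(x - eps*I[l], xi)) / (2*eps)
                    for l in range(n)])
    return xd, xid

x = np.array([0.0, 1.0])     # start at (x, y) = (0, 1)
xi = np.array([1.0, 0.0])    # horizontal covector; g-speed = y |xi| = 1
h, f0 = 1e-3, f(x, xi)
for _ in range(2000):        # flow for hyperbolic arclength ~ 2
    k1x, k1p = rhs(x, xi)
    k2x, k2p = rhs(x + 0.5*h*k1x, xi + 0.5*h*k1p)
    k3x, k3p = rhs(x + 0.5*h*k2x, xi + 0.5*h*k2p)
    k4x, k4p = rhs(x + h*k3x, xi + h*k3p)
    x = x + (h/6)*(k1x + 2*k2x + 2*k3x + k4x)
    xi = xi + (h/6)*(k1p + 2*k2p + 2*k3p + k4p)

print(f(x, xi) - f0)         # ~ 0: f is conserved, as in (6.1.54)
print(x[0]**2 + x[1]**2)     # ~ 1: the orbit traces the unit semicircle
```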
6.2. Geometry of surfaces II: curvature
The curvature of a surface (or, in the one-dimensional case, a curve) is a measure of its not being flat. To take the simplest case, let $\gamma : (a,b) \to \mathbb{R}^k$ be smooth, with nonvanishing velocity. We can parametrize $\gamma$ by arc length, so

(6.2.1) $T(t) = \gamma'(t), \quad \|T(t)\| = 1.$

Then $\gamma$ is a straight line if and only if T(t) is constant. Thus a measure of how $\gamma$ curves is given by

(6.2.2) $T'(t).$

We call this the curvature vector of $\gamma$. Note that

(6.2.3) $T\cdot T \equiv 1 \Longrightarrow T'(t)\cdot T(t) = 0,$

so $T'(t)$ is orthogonal to T(t). In connection with this, we can interpret Proposition 6.1.1 as follows.

Proposition 6.2.1. Let $M \subset \mathbb{R}^k$ be a smooth surface, and let $\gamma : (a,b) \to M$ be a smooth curve, parametrized by arc length. Then $\gamma$ is a geodesic on M if and only if the curvature vector of $\gamma$ is normal to M at each point $\gamma(t)$.

Let us now specialize to planar curves, so $\gamma : (a,b) \to \mathbb{R}^2$. In such a case, we apply counterclockwise rotation by $90°$ to T(t) to get a unit normal to $\gamma$:

(6.2.4) $N(t) = JT(t), \quad J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.$

In such a case, (6.2.3) implies that $T'(t)$ is parallel to N(t), say

(6.2.5) $T'(t) = \kappa(t)N(t),$

and we call $\kappa(t)$ the curvature of $\gamma$. Note that, by (6.2.4),

(6.2.6) $N'(t) = JT'(t) = \kappa(t)J^2T(t) = -\kappa(t)T(t).$
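The definition (6.2.5) is easy to test numerically on a circle of radius r, where the curvature should come out to 1/r. A small illustrative sketch, computing T and T' by centered differences:

```python
import numpy as np

def circle_curvature(r, t, eps=1e-5):
    # kappa from (6.2.5) for the unit-speed circle gamma(s) = r(cos(s/r), sin(s/r)),
    # with T and T' computed by centered differences
    gamma = lambda s: r * np.array([np.cos(s/r), np.sin(s/r)])
    T = lambda s: (gamma(s + eps) - gamma(s - eps)) / (2*eps)
    Tp = (T(t + eps) - T(t - eps)) / (2*eps)       # curvature vector (6.2.2)
    J = np.array([[0.0, -1.0], [1.0, 0.0]])        # rotation by 90 degrees, (6.2.4)
    return Tp @ (J @ T(t))                         # kappa = T' . N, since |N| = 1

print(circle_curvature(2.0, 0.7))   # ~ 0.5 = 1/r
```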
Figure 6.2.1. The unit normal to M at x
We move on to the case that $M \subset \mathbb{R}^{n+1}$ is a smooth, codimension-1 surface, with smooth unit normal N. See Figure 6.2.1. Then

(6.2.7) $N : M \to S^n \subset \mathbb{R}^{n+1}$

is called the Gauss map. In this case M is flat if and only if N is constant, so we measure its curvature by

(6.2.8) $DN(x) : T_xM \to T_{N(x)}S^n = T_xM.$

In particular, we have a well-defined real-valued function

(6.2.9) $K(x) = \det DN(x),$

called the Gauss curvature of M at x. In case n = 1, (6.2.6) yields $K(\gamma(t)) = -\kappa(t)$. Of course, the map (6.2.8) contains further curvature information when n > 1. We will return to this below.

For a general n-dimensional surface $M \subset \mathbb{R}^k$, a natural object with which to measure how M curves in $\mathbb{R}^k$ is the map P, introduced in (6.1.16), i.e.,

(6.2.10) $P : M \to M(k,\mathbb{R}), \quad P(x) = \text{orthogonal projection of } \mathbb{R}^k \text{ onto } T_xM.$

We see that M is flat if and only if P is constant, so a natural measure of curvature is

(6.2.11) $DP(x) : T_xM \to M(k,\mathbb{R}).$
We use the notation

(6.2.12) $D_XP : M \to M(k,\mathbb{R}), \quad D_XP(x) = DP(x)X(x),$

for a vector field X on M. Since we are differentiating a smooth family of symmetric $k \times k$ matrices, it is clear that

(6.2.13) $D_XP(x)^t = D_XP(x)$ in $M(k,\mathbb{R})$, for each $x \in M$.

The following is another useful fact.

Proposition 6.2.2. For $x \in M$, let

(6.2.14) $\nu_xM = (T_xM)^\perp,$

the orthogonal complement of $T_xM$ in $\mathbb{R}^k$. Then, if X is a vector field on M,

(6.2.15) $D_XP(x) : T_xM \to \nu_xM, \quad \text{and} \quad D_XP(x) : \nu_xM \to T_xM.$

Proof. We start with the projection identity, $P = PP$, and apply $D_X$, using the product rule, to obtain

(6.2.16) $D_XP = (D_XP)P + P(D_XP),$

which in turn yields

(6.2.17) $(D_XP)P = P^\perp(D_XP), \quad (D_XP)P^\perp = P(D_XP),$

where $P^\perp = I - P$. This gives (6.2.15). □

To proceed, we bring in an object called the second fundamental form of $M \subset \mathbb{R}^k$, defined for vector fields X and Y on M by

(6.2.18) $II(X,Y) = D_XY - \nabla^M_XY,$

where $\nabla^M$ is the intrinsic covariant derivative on M, given by Proposition 6.1.4. Clearly, for $f \in C^\infty(M)$,

(6.2.19) $II(fX,Y) = f\,II(X,Y).$

Furthermore, if $g \in C^\infty(M)$,

(6.2.20) $II(X,gY) = D_X(gY) - \nabla^M_X(gY) = gD_XY + (Xg)Y - g\nabla^M_XY - (Xg)Y = g\,II(X,Y).$

We also note that

(6.2.21) $\nabla^M_XY = PD_XY,$

as a consequence of Proposition 6.1.5. In particular,

(6.2.22) $II(X,Y)(x) \in \nu_xM, \quad \forall\, x \in M.$

A smooth function $\xi : M \to \mathbb{R}^k$ with the property that $\xi(x) \in \nu_xM$ for all x is called a normal field. The following important identity connects $D_XP$ to the second fundamental form.
Proposition 6.2.3. If X and Y are smooth vector fields on M,

(6.2.23) $(D_XP)Y = II(X,Y).$

Proof. We have

(6.2.24) $D_XY = D_X(PY) = (D_XP)Y + P(D_XY) = (D_XP)Y + \nabla^M_XY,$

hence

(6.2.25) $(D_XP)Y = D_XY - \nabla^M_XY = II(X,Y).$ □

From this, (6.2.13), and (6.2.15), we have the following.

Corollary 6.2.4. If $\xi$ is a smooth normal field on M, then $(D_XP)\xi$ is a vector field on M, uniquely specified by the identity

(6.2.26) $\langle(D_XP)\xi, Y\rangle = \langle\xi, II(X,Y)\rangle,$ for all vector fields Y on M.

This last identity motivates us to bring in an object called the Weingarten map, defined as follows. If $\xi$ is a normal field on M, we define

(6.2.27) $A_\xi : T_xM \to T_xM$

by

(6.2.28) $\langle A_\xi X, Y\rangle = \langle\xi, II(X,Y)\rangle.$

Let us note that

(6.2.29) $II(X,Y) - II(Y,X) = (D_XY - D_YX) - (\nabla^M_XY - \nabla^M_YX) = [X,Y] - [X,Y] = 0,$

so

(6.2.30) $II(X,Y) = II(Y,X),$

and hence

(6.2.31) $A_\xi^t = A_\xi.$

From (6.2.26) we have

(6.2.32) $(D_XP)\xi = A_\xi X.$

Using

(6.2.33) $0 = D_X(P\xi) = (D_XP)\xi + PD_X\xi,$

we are led from (6.2.32) to the following result, known as the Weingarten formula.

Proposition 6.2.5. If $\xi$ is a normal field and X is a vector field on M, then

(6.2.34) $PD_X\xi = -A_\xi X.$
Second proof. We know $PD_X\xi$ is tangent to M. Given a vector field Y,

(6.2.35) $\langle PD_X\xi, Y\rangle = \langle D_X\xi, Y\rangle = D_X\langle\xi,Y\rangle - \langle\xi, D_XY\rangle = -\langle\xi, D_XY - \nabla^M_XY\rangle = -\langle\xi, II(X,Y)\rangle,$

where we have used $\langle\xi,Y\rangle = 0$ and $\langle\xi, \nabla^M_XY\rangle = 0$. The last identity in (6.2.35) yields (6.2.34). □

In order to state a more definitive result, we can define a covariant derivative $\nabla^\nu$ on normal fields on M by

(6.2.36) $\nabla^\nu_X\xi = P^\perp D_X\xi,$

when X is a vector field on M and $\xi$ is a normal field. One readily verifies analogues of (6.1.58)-(6.1.59),

(6.2.37) $\nabla^\nu_{fX}\xi = f\nabla^\nu_X\xi, \quad \nabla^\nu_X(f\xi) = f\nabla^\nu_X\xi + (Xf)\xi.$

Furthermore, parallel to (6.1.60), if $\eta$ is also a normal field,

(6.2.38) $X\langle\xi,\eta\rangle = \langle\nabla^\nu_X\xi, \eta\rangle + \langle\xi, \nabla^\nu_X\eta\rangle,$

thanks to the identity $X\langle\xi,\eta\rangle = \langle D_X\xi,\eta\rangle + \langle\xi, D_X\eta\rangle$. Using (6.2.36), we deduce the following extension of Proposition 6.2.5.

Proposition 6.2.6. In the setting of Proposition 6.2.5,

(6.2.39) $D_X\xi = \nabla^\nu_X\xi - A_\xi X.$

The right side splits $D_X\xi$ into the sum of a normal field on M and a vector field tangent to M.

In case k = n+1, so M has codimension 1, with smooth unit normal N, we can combine (6.2.34) with (6.2.8), to deduce the following classical Weingarten formula.

Corollary 6.2.7. If M has codimension 1 in $\mathbb{R}^{n+1}$, with smooth unit normal field N, and X is a vector field on M, then

(6.2.40) $D_XN = -A_NX.$

Consequently, (6.2.9) yields the formula

(6.2.41) $K(x) = (-1)^n\det A_N(x),$

for the Gauss curvature of M at x.

In case M has codimension 1 in $\mathbb{R}^k$ and we have in hand the smooth unit normal N, it is natural to define a real-valued form $\widetilde{II}$, by

(6.2.42) $II(X,Y) = \widetilde{II}(X,Y)N.$

This is the classical case of the second fundamental form.
It is useful to note the following characterization of $\widetilde{II}$, when M has codimension 1 in $\mathbb{R}^k$. Translating and rotating coordinates, we can move a specific point $p \in M$ to the origin in $\mathbb{R}^k$, and suppose M is given locally by

(6.2.43) $x_k = f(x'), \quad \nabla f(0) = 0,$

where $x' = (x_1,\dots,x_{k-1})$. We can then identify the tangent space of M at p with $\mathbb{R}^{k-1}$.

Proposition 6.2.8. In the setting described in the previous paragraph, the second fundamental form of M at p is given by

(6.2.44) $\widetilde{II}(X,Y) = \displaystyle\sum_{j,\ell=1}^{k-1}\dfrac{\partial^2 f}{\partial x_j\partial x_\ell}(0)\,X_jY_\ell.$

Proof. From (6.2.34) we have, for any $\xi$ normal to M,

(6.2.45) $\langle II(X,Y), \xi\rangle = -\langle D_X\xi, Y\rangle.$

Taking

(6.2.46) $\xi = \Big(-\dfrac{\partial f}{\partial x_1},\dots,-\dfrac{\partial f}{\partial x_{k-1}},\, 1\Big)$

gives the desired formula. □

If M is a surface in $\mathbb{R}^3$, given locally by $x_3 = f(x_1,x_2)$ with $f(0) = 0$ and $\nabla f(0) = 0$, then the Gauss curvature of M at the origin is seen by (6.2.44) to equal

(6.2.47) $\det\Big(\dfrac{\partial^2 f}{\partial x_j\partial x_\ell}(0)\Big).$
Besides providing a good conception of the second fundamental form of a codimension 1 surface in $\mathbb{R}^k$, Proposition 6.2.8 leads to useful formulas for computation, one of which we will give in (6.2.53). First, we give a more invariant reformulation of Proposition 6.2.8. Suppose the (k-1)-dimensional surface M in $\mathbb{R}^k$ is given by

(6.2.48) $u(x) = c,$

with $\nabla u \ne 0$ on M. Then we can use the computation (6.2.45) with $\xi = \nabla u$ to obtain

(6.2.49) $\langle II(X,Y), \nabla u\rangle = -Y\cdot(D^2u)X,$

where $D^2u$ is the $k \times k$ matrix of second-order partial derivatives of u. In other words, taking $N = \nabla u/\|\nabla u\|$,

(6.2.50) $\widetilde{II}(X,Y) = -\|\nabla u\|^{-1}\,Y\cdot(D^2u)X,$

for X and Y tangent to M. In particular, if M is a two-dimensional surface in $\mathbb{R}^3$, given by (6.2.48), then the Gauss curvature at $p \in M$ is given by

(6.2.51) $K(p) = \|\nabla u(p)\|^{-2}\det\big(D^2u\big|_{T_pM}\big),$

where $D^2u|_{T_pM}$ denotes the restriction of the quadratic form $D^2u$ to the tangent space $T_pM$. With this calculation we can derive the following formula, extending (6.2.47).
Proposition 6.2.9. If $M \subset \mathbb{R}^3$ is given by

(6.2.52) $x_3 = f(x_1,x_2),$

then, at $p = (x', f(x')) \in M$, the Gauss curvature is given by

(6.2.53) $K(p) = \big(1 + \|\nabla f(x')\|^2\big)^{-2}\det\Big(\dfrac{\partial^2 f}{\partial x_j\partial x_k}\Big).$

Proof. We can apply (6.2.51) with $u(x) = f(x_1,x_2) - x_3$. Note that

(6.2.54) $\|\nabla u\|^2 = 1 + \|\nabla f(x')\|^2, \quad D^2u = \begin{pmatrix} D^2f & 0 \\ 0 & 0 \end{pmatrix}.$

Noting that a basis of $T_pM$ is given by $v_1 = (1,0,\partial_1f)$, $v_2 = (0,1,\partial_2f)$, we obtain

(6.2.55) $\det D^2u\big|_{T_pM} = \dfrac{\det\big(v_j\cdot(D^2u)v_k\big)}{\det\big(v_j\cdot v_k\big)} = \big(1 + \|\nabla f(x')\|^2\big)^{-1}\det D^2f,$

which yields (6.2.53). □
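Formula (6.2.53) is easy to check numerically. The sketch below (illustrative, not from the text) computes $\nabla f$ and $D^2f$ by centered differences for the graph of the upper unit hemisphere, $f(x') = \sqrt{1 - |x'|^2}$, for which the Gauss curvature is identically 1:

```python
import numpy as np

def gauss_curvature_graph(f, xp, eps=1e-4):
    # K at (x', f(x')) from (6.2.53): K = det(D^2 f) / (1 + |grad f|^2)^2,
    # with grad f and D^2 f computed by centered differences
    n = len(xp)
    I = np.eye(n)
    grad = np.array([(f(xp + eps*I[j]) - f(xp - eps*I[j])) / (2*eps)
                     for j in range(n)])
    H = np.empty((n, n))
    for j in range(n):
        for k in range(n):
            H[j, k] = (f(xp + eps*I[j] + eps*I[k]) - f(xp + eps*I[j] - eps*I[k])
                       - f(xp - eps*I[j] + eps*I[k]) + f(xp - eps*I[j] - eps*I[k])) / (4*eps**2)
    return np.linalg.det(H) / (1.0 + grad @ grad)**2

hemisphere = lambda x: np.sqrt(1.0 - x @ x)   # graph of the upper unit hemisphere
K = gauss_curvature_graph(hemisphere, np.array([0.3, 0.2]))
print(K)   # ~ 1.0, the Gauss curvature of the unit sphere
```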
Intrinsic curvature. So far, we have considered how M is curved in $\mathbb{R}^k$. That is, we have examined extrinsic measures of curvature of M. We now look at intrinsic measures of curvature of M. In this setting, M can be any n-dimensional Riemannian manifold, not necessarily a surface in $\mathbb{R}^k$.

One distinguishing property of $\mathbb{R}^n$ with the standard flat metric $(\delta_{jk})$ and associated covariant derivative D is that, for any two vector fields X and Y on $\mathbb{R}^n$,

(6.2.56) $D_XD_Y - D_YD_X = D_{[X,Y]}.$

With this in mind, if M is an n-dimensional Riemannian manifold, with Levi-Civita covariant derivative $\nabla$, we set

(6.2.57) $R(X,Y)Z = \nabla_X\nabla_YZ - \nabla_Y\nabla_XZ - \nabla_{[X,Y]}Z.$

We call R the Riemann curvature of M. It is clearly additive in each term X, Y, and Z. The following property implies it is linear in each variable, over $C^\infty(M)$.

Proposition 6.2.10. Given $f, g, h \in C^\infty(M)$,

(6.2.58) $R(fX, gY)hZ = fgh\,R(X,Y)Z.$

Proof. It suffices to treat f, g, and h separately. To start,

(6.2.59) $R(fX,Y)Z = f\nabla_X\nabla_YZ - \nabla_Y(f\nabla_XZ) - \nabla_{[fX,Y]}Z.$

Use the identity $[fX,Y] = f[X,Y] - (Yf)X$ to write this as

(6.2.60) $f\nabla_X\nabla_YZ - \nabla_Y(f\nabla_XZ) - f\nabla_{[X,Y]}Z + (Yf)\nabla_XZ,$

and then write the second term in (6.2.60) as

(6.2.61) $f\nabla_Y\nabla_XZ + (Yf)\nabla_XZ,$

to get (6.2.58) when g = h = 1. The other cases are similar. □
It follows that R(X,Y)Z is determined by its behavior on the coordinate vector fields $D_j = \partial/\partial x_j$. We define the components $R^a{}_{bjk}$ of the Riemann curvature (in a coordinate system) by

(6.2.62) $R(D_j,D_k)D_b = R^a{}_{bjk}D_a$

(using the summation convention). Since $[D_j,D_k] = 0$,

(6.2.63) $R(D_j,D_k)D_b = \nabla_{D_j}\nabla_{D_k}D_b - \nabla_{D_k}\nabla_{D_j}D_b.$

Recall from (6.1.69) that

(6.2.64) $\nabla_{D_k}D_b = \Gamma^a{}_{bk}D_a,$

where $\Gamma^a{}_{bk}$ are the Christoffel symbols, given by (6.1.70). Plugging this into (6.2.63) and using the derivation property (6.1.59) readily gives the formula

(6.2.65) $R^a{}_{bjk} = \dfrac{\partial}{\partial x_j}\Gamma^a{}_{bk} - \dfrac{\partial}{\partial x_k}\Gamma^a{}_{bj} + \Gamma^a{}_{cj}\Gamma^c{}_{bk} - \Gamma^a{}_{ck}\Gamma^c{}_{bj}.$

These formulas can be written in shorter form, as follows. For each j and k in $\{1,\dots,n\}$, we define $n \times n$ matrices

(6.2.66) $\mathcal{R}_{jk} = \big(R^a{}_{bjk}\big)_{ab}, \quad \Gamma_j = \big(\Gamma^a{}_{bj}\big)_{ab}.$

Then (6.2.63) is equivalent to

(6.2.67) $\mathcal{R}_{jk} = \dfrac{\partial}{\partial x_j}\Gamma_k - \dfrac{\partial}{\partial x_k}\Gamma_j + [\Gamma_j,\Gamma_k],$

where $[\Gamma_j,\Gamma_k] = \Gamma_j\Gamma_k - \Gamma_k\Gamma_j$ is the matrix commutator. Note that $\mathcal{R}_{jk}$ is antisymmetric in j and k. Now we can define a connection 1-form $\Gamma$ and a curvature 2-form $\Omega$ by

(6.2.68) $\Gamma = \displaystyle\sum_j \Gamma_j\,dx_j, \quad \Omega = \dfrac{1}{2}\displaystyle\sum_{j,k}\mathcal{R}_{jk}\,dx_j\wedge dx_k,$

and the formula (6.2.67) is equivalent to

(6.2.69) $\Omega = d\Gamma + \Gamma\wedge\Gamma.$
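The component formula (6.2.65) can be exercised numerically. The sketch below (illustrative, not from the text) computes $\Gamma^\ell_{jk}$ from (6.1.70) and $R^a{}_{bjk}$ from (6.2.65) by finite differences for the round sphere metric $\mathrm{diag}(1,\sin^2\theta)$, then recovers the Gauss curvature K = 1 via $K = R_{1212}/\det g$ (with the first index lowered as in (6.2.73) below):

```python
import numpy as np

def metric(x):
    # Round unit sphere, x = (theta, phi): g = diag(1, sin^2 theta)
    return np.diag([1.0, np.sin(x[0])**2])

def christoffel(x, eps=1e-5):
    # (6.1.70): Gamma^l_{jk} = (1/2) g^{lm} (d_k g_{jm} + d_j g_{km} - d_m g_{jk})
    n = len(x)
    I = np.eye(n)
    dg = np.array([(metric(x + eps*I[m]) - metric(x - eps*I[m])) / (2*eps)
                   for m in range(n)])            # dg[m,j,k] = d g_{jk}/dx_m
    ginv = np.linalg.inv(metric(x))
    Gam = np.zeros((n, n, n))
    for l in range(n):
        for j in range(n):
            for k in range(n):
                Gam[l, j, k] = 0.5 * sum(
                    ginv[l, m] * (dg[k, j, m] + dg[j, k, m] - dg[m, j, k])
                    for m in range(n))
    return Gam

def riemann(x, eps=1e-5):
    # (6.2.65): R^a_{bjk} = d_j Gamma^a_{bk} - d_k Gamma^a_{bj}
    #                       + Gamma^a_{cj} Gamma^c_{bk} - Gamma^a_{ck} Gamma^c_{bj}
    n = len(x)
    I = np.eye(n)
    dG = np.array([(christoffel(x + eps*I[m]) - christoffel(x - eps*I[m])) / (2*eps)
                   for m in range(n)])            # dG[m] = d Gamma / dx_m
    G = christoffel(x)
    R = np.zeros((n, n, n, n))
    for a in range(n):
        for b in range(n):
            for j in range(n):
                for k in range(n):
                    R[a, b, j, k] = (dG[j, a, b, k] - dG[k, a, b, j]
                        + sum(G[a, c, j]*G[c, b, k] - G[a, c, k]*G[c, b, j]
                              for c in range(n)))
    return R

x = np.array([1.1, 0.4])
g = metric(x)
Rlow = np.einsum('ae,ebjk->abjk', g, riemann(x))  # lower the first index
K = Rlow[0, 1, 0, 1] / np.linalg.det(g)           # Gauss curvature
print(K)   # ~ 1.0 for the unit sphere
```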
We next mention a couple of basic symmetries of the Riemann curvature.

Proposition 6.2.11. The Riemann curvature satisfies

(6.2.70) $R(X,Y)Z = -R(Y,X)Z$

and

(6.2.71) $\langle R(X,Y)Z, W\rangle = -\langle Z, R(X,Y)W\rangle.$

Proof. The identity (6.2.70) is immediate from the definition (6.2.57). Next, the metric property (6.1.60) of $\nabla$ yields

(6.2.72) $0 = (XY - YX - [X,Y])\langle Z,W\rangle = \langle R(X,Y)Z, W\rangle + \langle Z, R(X,Y)W\rangle,$

which gives (6.2.71). □
We set
(6.2.73) $R_{abjk} = g_{ac} R^c_{bjk} = \langle D_a, R(D_j, D_k)D_b\rangle$.
Then the identities (6.2.70)-(6.2.71) become
(6.2.74) $R_{abjk} = -R_{abkj} = -R_{bajk}$.
Having defined the Riemann curvature of a general n-dimensional Riemannian manifold and developed some of its basic properties, we turn back to the case of a smooth n-dimensional surface M ⊂ ℝ^k, with its induced metric, and relate the Riemann curvature to the second fundamental form. We again make use of Proposition 6.1.5, which says the Levi-Civita covariant derivative ∇ on M is given by
(6.2.75) $\nabla_X Y = P D_X Y$.
Consequently, (6.2.57) implies
(6.2.76) $R(X, Y)Z = P D_X(P D_Y Z) - P D_Y(P D_X Z) - P D_{[X,Y]}Z = P D_X D_Y Z - P D_Y D_X Z - P D_{[X,Y]}Z + P(D_X P)(D_Y Z) - P(D_Y P)(D_X Z)$.
Now $D_X D_Y Z - D_Y D_X Z - D_{[X,Y]}Z = 0$, so we are left with the last two terms. Furthermore,
(6.2.77) $P(D_X P)(D_Y Z) = (D_X P)P_1(D_Y Z) = (D_X P)\, II(Y, Z) = A_{II(Y,Z)}X$,
the first identity by (6.2.17), the second by (6.2.21), and the third by (6.2.32). This yields the following result.
Proposition 6.2.12. The Riemann curvature of a surface M ⊂ ℝ^k satisfies
(6.2.78) $R(X, Y)Z = A_{II(Y,Z)}X - A_{II(X,Z)}Y$,
if X, Y, and Z are vector fields on M. Equivalently, if W is also a vector field on M,
(6.2.79) $\langle R(X, Y)Z, W\rangle = \langle II(Y, Z), II(X, W)\rangle - \langle II(X, Z), II(Y, W)\rangle$.

Corollary 6.2.13. Assume k = n + 1, so M is a codimension-1 surface. Then
(6.2.80) $\langle R(X, Y)Z, W\rangle = \det \begin{pmatrix} II(X, W) & II(X, Z) \\ II(Y, W) & II(Y, Z) \end{pmatrix}$.

In the setting of Corollary 6.2.13, we have
(6.2.81) $\langle R(X, Y)Y, X\rangle = \det \begin{pmatrix} II(X, X) & II(X, Y) \\ II(Y, X) & II(Y, Y) \end{pmatrix}$.
Also, in this setting,
(6.2.82) $II(X, Y) = \langle A_N X, Y\rangle$.
We therefore have the following, known as Gauss's theorema egregium.
247    6.2. Geometry of surfaces II: curvature
Proposition 6.2.14. Assume M ⊂ ℝ³ is a two-dimensional surface. Take p ∈ M, and let U and V be vector fields on M such that {U(p), V(p)} forms an orthonormal basis of T_pM. Then the Gauss curvature of M at p is given by
(6.2.83) $K(p) = \langle R(U, V)V, U\rangle(p)$.

Proof. By (6.2.41), K(p) = det A_N(p), but
(6.2.84) $\det A_N(p) = \det \begin{pmatrix} \langle A_N U, U\rangle & \langle A_N U, V\rangle \\ \langle A_N V, U\rangle & \langle A_N V, V\rangle \end{pmatrix}$,
and the result then follows from (6.2.81)-(6.2.82). □
It follows that the Gauss curvature is an intrinsic quantity on a two-dimensional surface. In fact, if M is a two-dimensional Riemannian manifold, not necessarily a surface in ℝ³, one can define its Gauss curvature at p ∈ M by (6.2.83). One can readily show that the right side of (6.2.83) is independent of the choice of orthonormal basis of T_pM. Going further, if we take two vectors X, Y ∈ T_pM and expand them in terms of an orthonormal basis {U, V}, the following result is a straightforward consequence of Propositions 6.2.10 and 6.2.11 and (6.2.83).

Proposition 6.2.15. If X and Y are vector fields on a two-dimensional Riemannian manifold M, with Gauss curvature K(x), then
(6.2.85) $\langle R(X, Y)Y, X\rangle = K(x)\big(\langle X, X\rangle\langle Y, Y\rangle - \langle X, Y\rangle^2\big)$.

If we take a coordinate chart on M and set X = D₁, Y = D₂, the coordinate vector fields, then the left side of (6.2.85) becomes
(6.2.86) $R_{1212}$,
as defined in (6.2.73), and the quantity in parentheses on the right becomes det G(x) = g(x), where G(x) is the matrix defining the metric tensor. Then (6.2.85) yields the identity
(6.2.87) $K(x) = \frac{R_{1212}(x)}{g(x)}$,
when dim M = 2. Here is an explicit calculation of the Gauss curvature for an important class of two-dimensional manifolds.
Proposition 6.2.16. Let M be a two-dimensional Riemannian manifold. Suppose that one has a coordinate chart in which the metric tensor takes the form
(6.2.88) $G(x) = \begin{pmatrix} G_1(x) & \\ & G_2(x) \end{pmatrix}, \quad g(x) = G_1(x)G_2(x)$.
Then the Gauss curvature is given by
(6.2.89) $K(x) = -\frac{1}{2\sqrt{g}}\Big[\partial_1\Big(\frac{\partial_1 G_2}{\sqrt{g}}\Big) + \partial_2\Big(\frac{\partial_2 G_1}{\sqrt{g}}\Big)\Big].$
Proof. One can first compute that
(6.2.90) $\Gamma_1 = (\Gamma^a_{b1}) = \frac{1}{2}\begin{pmatrix} \partial_1 G_1/G_1 & \partial_2 G_1/G_1 \\ -\partial_2 G_1/G_2 & \partial_1 G_2/G_2 \end{pmatrix}, \quad \Gamma_2 = (\Gamma^a_{b2}) = \frac{1}{2}\begin{pmatrix} \partial_2 G_1/G_1 & -\partial_1 G_2/G_1 \\ \partial_1 G_2/G_2 & \partial_2 G_2/G_2 \end{pmatrix}.$
Then, computing $\mathcal{R}_{12} = \partial_1\Gamma_2 - \partial_2\Gamma_1 + [\Gamma_1, \Gamma_2] = (R^a_{b12})$, we have
(6.2.91) $R^1_{212} = -\frac{1}{2}\partial_1\Big(\frac{\partial_1 G_2}{G_1}\Big) - \frac{1}{2}\partial_2\Big(\frac{\partial_2 G_1}{G_1}\Big) - \frac{1}{4}\Big(\frac{\partial_1 G_1\,\partial_1 G_2}{G_1^2} + \frac{\partial_2 G_1\,\partial_2 G_1}{G_1^2}\Big) + \frac{1}{4}\Big(\frac{\partial_2 G_1\,\partial_2 G_2}{G_1 G_2} + \frac{\partial_1 G_2\,\partial_1 G_2}{G_1 G_2}\Big).$
Now $R_{1212} = G_1 R^1_{212}$ in this case, and (6.2.87) yields
(6.2.92) $K(x) = \frac{1}{G_2}\, R^1_{212}.$
If we divide (6.2.91) by $G_2$, then in the resulting formula for K(x) interchange $\partial_1$ and $\partial_2$, and $G_1$ and $G_2$, and average the two formulas for K(x), we get
(6.2.93) $K(x) = -\frac{1}{4}\Big[\frac{1}{G_2}\partial_1\Big(\frac{\partial_1 G_2}{G_1}\Big) + \frac{1}{G_1}\partial_2\Big(\frac{\partial_2 G_1}{G_2}\Big)\Big] - \frac{1}{4}\Big[\frac{1}{G_1}\partial_1\Big(\frac{\partial_1 G_2}{G_2}\Big) + \frac{1}{G_2}\partial_2\Big(\frac{\partial_2 G_1}{G_1}\Big)\Big],$
which is readily transformed into (6.2.89). □
Coordinates in which the metric tensor takes the form (6.2.88) are called orthogonal coordinates. If in addition we have $G_1 = G_2$, they are called isothermal coordinates. In such a case, (6.2.89) specializes to the following neat formula.

Corollary 6.2.17. Suppose dim M = 2, and one has an isothermal coordinate system in which
(6.2.94) $g_{jk}(x) = e^{2v(x)}\,\delta_{jk},$
for a smooth v. Then the Gauss curvature is given by
(6.2.95) $K(x) = -e^{-2v(x)}\Big(\frac{\partial^2 v}{\partial x_1^2} + \frac{\partial^2 v}{\partial x_2^2}\Big).$
A significant example of Corollary 6.2.17 is provided by the Poincaré disk,
(6.2.96) $D = \{x \in \mathbb{R}^2 : \|x\| < 1\}, \quad g_{jk}(x) = \frac{4}{(1 - \|x\|^2)^2}\,\delta_{jk}.$
Application of (6.2.95) yields
(6.2.97) $K \equiv -1,$
for this metric. A related example is the Poincaré upper half plane,
(6.2.98) $U = \{x \in \mathbb{R}^2 : x_2 > 0\}, \quad g_{jk}(x) = x_2^{-2}\,\delta_{jk},$
again yielding (6.2.97).
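Both of these curvature claims can be checked symbolically from the isothermal formula (6.2.95). Below is a minimal sketch, assuming the sympy library is available; the helper name gauss_curvature_isothermal is ours, not the book's.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

def gauss_curvature_isothermal(v):
    # (6.2.95): K = -e^{-2v} (d^2 v/dx1^2 + d^2 v/dx2^2)
    return sp.simplify(-sp.exp(-2*v) * (sp.diff(v, x1, 2) + sp.diff(v, x2, 2)))

# Poincare disk (6.2.96): g_jk = 4 (1 - |x|^2)^{-2} delta_jk, i.e., e^{2v} = 4/(1 - |x|^2)^2
v_disk = sp.log(2) - sp.log(1 - x1**2 - x2**2)

# Poincare upper half-plane (6.2.98): g_jk = x2^{-2} delta_jk, i.e., e^{2v} = x2^{-2}
v_half = -sp.log(x2)

print(gauss_curvature_isothermal(v_disk))  # -1
print(gauss_curvature_isothermal(v_half))  # -1
```

Either metric simplifies to the constant K = -1, in line with (6.2.97).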
It is clear that one cannot obtain the Gauss curvature of an n-dimensional M ⊂ ℝ^{n+1} from its Riemann curvature R when n = 1. In fact, if n = 1, (6.2.70) implies R = 0, while the Gauss curvature is given by (6.2.6). More generally, one cannot obtain K(x) from R when n is odd. On the other hand, we can obtain K(x) from R when the dimension of M is even. We discuss how this works. Assume M is a smooth surface of dimension n = 2m in ℝ^{n+1}. Take p ∈ M, and let {e₁, ..., e_n} be an orthonormal basis of T_pM. For short, set A = A_N. By (6.2.80),
(6.2.99) $\langle R(e_j, e_k)e_b, e_a\rangle = \det \begin{pmatrix} \langle Ae_j, e_a\rangle & \langle Ae_j, e_b\rangle \\ \langle Ae_k, e_a\rangle & \langle Ae_k, e_b\rangle \end{pmatrix}.$
It follows that
(6.2.100) $Ae_j \wedge Ae_k = \frac{1}{2}\sum_{b_1, b_2} \langle R(e_j, e_k)e_{b_2}, e_{b_1}\rangle\, e_{b_1} \wedge e_{b_2}.$
Now
$(\det A)\, e_1 \wedge \cdots \wedge e_n = (Ae_1 \wedge Ae_2) \wedge \cdots \wedge (Ae_{n-1} \wedge Ae_n),$
so
(6.2.101) $(\det A)\, e_1 \wedge \cdots \wedge e_n = 2^{-m} \sum_b \langle R(e_1, e_2)e_{b_2}, e_{b_1}\rangle \cdots \langle R(e_{n-1}, e_n)e_{b_n}, e_{b_{n-1}}\rangle\, e_{b_1} \wedge \cdots \wedge e_{b_n},$
where the sum runs over b ∈ S_n, the set of permutations of {1, ..., n}. Since $e_{b_1} \wedge \cdots \wedge e_{b_n} = (\operatorname{sgn} b)\, e_1 \wedge \cdots \wedge e_n$, this leads to a formula for det A. Recalling (6.2.41), we obtain the following higher-dimensional extension of Proposition 6.2.14.
Proposition 6.2.18. Assume M ⊂ ℝ^{n+1} is a surface of dimension n = 2m. Pick p ∈ M, and let {e₁, ..., e_n} be an orthonormal basis of T_pM. Then the Gauss curvature of M at p is given by
(6.2.102) $K(p) = 2^{-m} \sum_{b \in S_n} (\operatorname{sgn} b)\,\langle R(e_1, e_2)e_{b_2}, e_{b_1}\rangle \cdots \langle R(e_{n-1}, e_n)e_{b_n}, e_{b_{n-1}}\rangle.$
A variant of the formula (6.2.102) is given by taking arbitrary permutations of {e₁, ..., e_n} and summing. This gives
(6.2.103) $K(p) = \frac{1}{2^{n/2}\, n!} \sum_{a, b \in S_n} (\operatorname{sgn} a)(\operatorname{sgn} b)\,\langle R(e_{a_1}, e_{a_2})e_{b_2}, e_{b_1}\rangle \cdots \langle R(e_{a_{n-1}}, e_{a_n})e_{b_n}, e_{b_{n-1}}\rangle.$
We can use this formula to define the Gauss curvature of any Riemannian manifold M of dimension n = 2m, regardless of whether it is a surface in ℝ^{n+1}. One needs to check that the right side of (6.2.103) is independent of the choice of an orthonormal basis of T_pM. One way to get this involves the following objects. In addition to the curvature 2-form $\Omega$ in (6.2.68), one has the curvature (2,2)-form
(6.2.104) $\tilde{\Omega} = \frac{1}{4}\sum_{a,b,j,k} R_{abjk}\,(dx_a \wedge dx_b) \otimes (dx_j \wedge dx_k).$
One can define a product on (k, ℓ)-forms by
(6.2.105) $(\alpha_1 \otimes \beta_1)\cdot(\alpha_2 \otimes \beta_2) = (\alpha_1 \wedge \alpha_2) \otimes (\beta_1 \wedge \beta_2),$
given $k_j$-forms $\alpha_j$ and $\ell_j$-forms $\beta_j$. The product is a $(k_1 + k_2, \ell_1 + \ell_2)$-form. Now, if dim M = n = 2m, we form the (n, n)-form
(6.2.106) $\mathcal{P}(\tilde{\Omega}) = \tilde{\Omega} \cdots \tilde{\Omega}$ (m factors).
If we take p ∈ M and choose a coordinate system about p in which the coordinate vector fields D₁, ..., D_n are orthonormal at p, then comparison with the calculations (6.2.103)-(6.2.104) above gives, at p,
(6.2.107) $\mathcal{P}(\tilde{\Omega}) = 2^{-n/2}\, n!\, K(p)\, \omega_M \otimes \omega_M,$
where, at p, $\omega_M = dx_1 \wedge \cdots \wedge dx_n$ (in this coordinate system). Consequently, if we take $\omega_M$ to be the volume form on M, then (6.2.106)-(6.2.107) can be taken as a definition of the Gauss curvature on a general Riemannian manifold of dimension n = 2m. Note that
(6.2.108) $n = 2 \Longrightarrow \tilde{\Omega} = R_{1212}\,(dx_1 \wedge dx_2) \otimes (dx_1 \wedge dx_2) = R_{1212}\, g^{-1}\, \omega_M \otimes \omega_M, \quad \omega_M = \sqrt{g}\, dx_1 \wedge dx_2, \quad \mathcal{P}(\tilde{\Omega}) = \tilde{\Omega},$
in which case (6.2.107) implies
(6.2.109) $K = g^{-1} R_{1212}, \quad \text{for } n = 2,$
and we recover (6.2.87).
Exercises
1. Give another proof of Proposition 6.2.3, starting with
$0 = D_X(P_1 Y) = (D_X P_1)Y + P_1 D_X Y,$
and using (6.2.21).

2. Discuss how (6.2.6) is a special case of the Weingarten identity (6.2.40).

3. We say a two-dimensional Riemannian manifold has a Clairaut parametrization if there are coordinates (u, v) in which the metric tensor takes the form
$G(u, v) = \begin{pmatrix} G_1(u) & \\ & G_2(u) \end{pmatrix}$
(with no v dependence). Show that the Gauss curvature is given by
$K(u) = -\frac{1}{2\sqrt{g(u)}}\,\frac{d}{du}\Big(\frac{G_2'(u)}{\sqrt{g(u)}}\Big), \quad g = G_1 G_2.$
4. Let M ⊂ ℝ³ be a surface of revolution, with coordinate chart
$X(u, v) = (g(u), h(u)\cos v, h(u)\sin v),$
obtained by taking the curve $\gamma(u) = (g(u), h(u), 0)$ in the xy-plane and rotating it about the x-axis in ℝ³. Show that in these coordinates the metric tensor is given by
$G(u, v) = \begin{pmatrix} |\gamma'(u)|^2 & \\ & h(u)^2 \end{pmatrix}.$
Apply Exercise 3 to compute the Gauss curvature.
5. Let $\gamma_s : [a, b] \to M$ be a smooth family of curves satisfying $\gamma_s(a) = p$, $\gamma_s(b) = q$. Assume $\gamma_0$ has unit speed. Define $V(t) = \partial_s\gamma_s(t)|_{s=0} \in T_{\gamma_0(t)}M$. Recall the energy function $E(\gamma_s)$. Show that
$\frac{d^2}{ds^2}E(\gamma_s)\Big|_{s=0} = 2\int_a^b \big[\langle R(V, T)V, T\rangle + \langle\nabla_T V, \nabla_T V\rangle + \langle\nabla_V V, \nabla_T T\rangle\big]\, dt.$
Note that the last term in the integrand vanishes if $\gamma_0$ is a geodesic. Show that, since $V(a) = V(b) = 0$, the middle term in the integrand can be replaced by $-\langle V, \nabla_T\nabla_T V\rangle$, so, if $\gamma_0$ is a geodesic,
(6.2.110) $\frac{d^2}{ds^2}E(\gamma_s)\Big|_{s=0} = 2\int_a^b \big[\langle R(V, T)V, T\rangle - \langle V, \nabla_T\nabla_T V\rangle\big]\, dt.$

6. Let M ⊂ ℝ^k be a smooth n-dimensional surface, and let X, Y, and Z be smooth vector fields on M. Use (6.2.18) and (6.2.39) to verify that
$D_X D_Y Z = \nabla_X\nabla_Y Z + II(X, \nabla_Y Z) - A_{II(Y,Z)}X + \nabla^\perp_X II(Y, Z).$
Get a similar expression for $D_Y D_X Z$, and apply (6.2.18) to $D_{[X,Y]}Z$. Use the fact that $D_X D_Y Z - D_Y D_X Z - D_{[X,Y]}Z = 0$ to deduce that the Riemann curvature of M satisfies
(6.2.111) $R(X, Y)Z = \{-II(X, \nabla_Y Z) + II(Y, \nabla_X Z) + II([X, Y], Z) - \nabla^\perp_X II(Y, Z) + \nabla^\perp_Y II(X, Z)\} + \{A_{II(Y,Z)}X - A_{II(X,Z)}Y\}.$
The quantity in the first set of braces is normal to M, so it vanishes. This vanishing is called Codazzi's equation. The identity of R(X, Y)Z with the quantity in the second set of braces is equivalent to the Gauss equation (6.2.78).
7. In the setting of Exercise 6, assume k = n + 1. Take the inner product of both sides of (6.2.111) with N and deduce that Codazzi's equation is equivalent to
(6.2.112) $(\nabla_X A_N)Y = (\nabla_Y A_N)X.$
Hint. Start with
$\langle N, II(X, \nabla_Y Z)\rangle = \langle A_N X, \nabla_Y Z\rangle = Y\langle A_N X, Z\rangle - \langle\nabla_Y(A_N X), Z\rangle,$
and then show that
$\langle N, \nabla^\perp_Y II(X, Z)\rangle = Y\langle A_N X, Z\rangle.$
8. Take M ⊂ ℝ^k as in Exercise 7 (k = dim M + 1). We say a point p ∈ M is an umbilic if $A_N(p) = \lambda(p)I$, for some $\lambda(p) \in \mathbb{R}$. Assume that each p ∈ M is an umbilic and that M is connected. Show that λ must be constant.
Hint. Apply the Codazzi equation (6.2.112). Deduce that
$(X\lambda)Y = (Y\lambda)X$
for arbitrary smooth vector fields X and Y on M, hence $X\lambda = Y\lambda = 0$.
6.3. Geometry of surfaces III: the Gauss-Bonnet theorem

Here we establish results that make contact between material on curvature in §6.2 and material of §5.3, including material on the degree of the Gauss map, and material on the Euler characteristic, defined in (5.3.29). We will also bring in the notion of parallel transport. To begin, assume M is a smooth, compact surface of dimension n in ℝ^{n+1}, with smooth unit normal N : M → S^n. As seen in §5.3, the degree of this map satisfies
(6.3.1) $\mathrm{Deg}(N) = \int_M N^*\omega,$
for any n-form ω on S^n that integrates to 1. In particular, we can take
(6.3.2) $\omega = A_n^{-1}\,\omega_S,$
where $\omega_S$ is the volume form of S^n, and (cf. (3.2.32))
(6.3.3) $A_n = \int_{S^n} \omega_S = \frac{2\pi^{(n+1)/2}}{\Gamma((n+1)/2)}.$
Recall that for each x ∈ M, $DN(x) : T_xM \to T_{N(x)}S^n = T_xM$. We leave the following result as an exercise.
Proposition 6.3.1. In the setting described above,
(6.3.4) $N^*\omega_S = (\det DN)\,\omega_M,$
where $\omega_M$ is the volume form of M.

Recalling the characterization of Gauss curvature in (6.2.9), we deduce the following.
Proposition 6.3.2. If M ⊂ ℝ^{n+1} is a compact, n-dimensional surface, with smooth unit normal N : M → S^n, and associated Gauss curvature K, then
(6.3.5) $\mathrm{Deg}(N) = \frac{1}{A_n}\int_M K(x)\, dS(x).$
Putting this together with Corollary 5.3.17, we have the following.

Proposition 6.3.3. Take M as in Proposition 6.3.2, and assume dim M = n = 2m is even. Then
(6.3.6) $\mathrm{Deg}(N) = \tfrac{1}{2}\chi(M),$
where χ(M) is the Euler characteristic of M, defined by (5.3.29). Consequently,
(6.3.7) $\int_M K(x)\, dS(x) = \frac{A_n}{2}\,\chi(M).$

Proof. (Compare Exercise 11 of §5.3.) Corollary 5.3.17 applies directly to a neighborhood Ω of M whose boundary is essentially two copies of M, with normals N and −N. The conclusion (5.3.28)-(5.3.29) then becomes
(6.3.8) $\chi(M) = \mathrm{Deg}(N) + \mathrm{Deg}(-N) = (1 + (-1)^n)\,\mathrm{Deg}(N),$
which yields (6.3.6) when n is even. □
Specializing (6.3.7) to n = 2, we get
(6.3.9) $\int_M K(x)\, dS(x) = 2\pi\,\chi(M),$
when M ⊂ ℝ³ is a compact, two-dimensional surface. This is the classical Gauss-Bonnet formula for surfaces without boundary. See Figure 6.3.1 for an example, involving a three-holed torus. As seen in §5.3, Exercise 19, χ(M) = −4 in this case, so
$\int_M K\, dS = -8\pi.$
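The formula (6.3.9) is easy to probe numerically on a surface where K changes sign. The sketch below (plain numpy; the torus radii are our choice) integrates the standard curvature K = cos u / (r(R + r cos u)) of a torus of revolution against its area element dS = r(R + r cos u) du dv; since χ(𝕋²) = 0, the total integral should vanish.

```python
import numpy as np

# Torus of revolution with radii R > r > 0.
R, r = 2.0, 1.0
n = 400
u = np.linspace(0, 2*np.pi, n, endpoint=False)
v = np.linspace(0, 2*np.pi, n, endpoint=False)
uu, _ = np.meshgrid(u, v)

K  = np.cos(uu) / (r * (R + r*np.cos(uu)))    # Gauss curvature
dS = r * (R + r*np.cos(uu)) * (2*np.pi/n)**2  # area element per grid cell

total = np.sum(K * dS)
print(total)  # approximately 0 = 2*pi*chi(T^2), per (6.3.9)
```

The positive curvature on the outer half of the torus exactly cancels the negative curvature on the inner half.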
The classical formula also treats two-dimensional surfaces with boundary, and we will want to derive such a result below. We also want to extend (6.3.9) from compact two-dimensional surfaces in ℝ³ to arbitrary compact two-dimensional Riemannian manifolds. One approach to such extensions involves the notion of parallel transport, which we now introduce. Let M be an n-dimensional Riemannian manifold, and let $\gamma : [0, \tau] \to M$ be a smooth (or piecewise smooth) curve. Say γ(0) = p, and take $V_0 \in T_pM$. We say a vector field V along γ is defined by parallel transport (or parallel translation) if
(6.3.10) $\nabla_T V = 0, \quad T = \gamma', \quad V(0) = V_0.$
Figure 6.3.1. A three-holed torus M, with regions where K > 0 and where K < 0
Given ε > 0, let $D_j(\varepsilon)$ denote the disk of radius ε centered at $p_j$, and set $D_\varepsilon = M \setminus \bigcup_j D_j(\varepsilon)$. By (6.3.58),
$\int_{D_\varepsilon} K\, dS = \sum_j \int_{\partial D_j(\varepsilon)} \omega.$
Show that
$\lim_{\varepsilon \to 0} \int_{\partial D_j(\varepsilon)} \omega = 2\pi\,\mathrm{ind}_{p_j}(V).$
(Recall (5.3.19)-(5.3.20) and Exercise 7 of §5.3.) Deduce that
$\int_M K\, dS = 2\pi\,\mathrm{Index}(V),$
thus obtaining another proof of the Gauss-Bonnet formula, (6.3.9), for general compact, oriented, two-dimensional Riemannian manifolds.
9. In this exercise, we consider the compact two-dimensional surface M depicted in Figure 6.3.3. This is shown sitting in a cube Q ⊂ ℝ³, say Q = [−π, π]³, but we identify opposite faces of Q and take
(6.3.59) M ⊂ 𝕋³,
a compact surface without boundary.
(a) Show that the natural isomorphism $T_x\mathbb{T}^3 \approx \mathbb{R}^3$ leads to a well-defined Gauss map
(6.3.60) $M \subset \mathbb{T}^3 \Longrightarrow N : M \to S^2,$
and that, parallel to the case M ⊂ ℝ³,
(6.3.61) $\mathrm{Deg}(N) = \tfrac{1}{2}\chi(M).$
Figure 6.3.3. A surface M ⊂ 𝕋³ satisfying χ(M) = −4

Furthermore, show that Proposition 6.3.2 continues to hold. (These observations extend more generally to n-dimensional M ⊂ 𝕋^{n+1}, n even.)
(b) Show that the surface M depicted in Figure 6.3.3 is diffeomorphic to the three-holed torus depicted in Figure 6.3.1, hence
(6.3.62) $\chi(M) = -4.$
(c) Combining (6.3.62) with either part (a) or the Gauss-Bonnet theorem for general compact two-dimensional manifolds, deduce that
(6.3.63) $\int_M K\, dS = -8\pi.$
(d) Now regard M as a compact surface with boundary (consisting of six circles) in ℝ³, and use Proposition 6.3.10, with M relabeled as Ω, and putting ∂Ω in a surface diffeomorphic to S². In this case, κ = 0 on ∂Ω. Show that Proposition 6.3.10 (with these changes in labeling) also implies (6.3.63). This approach to (6.3.63) does not use χ(M) = −4, just χ(S²) = 2.
(e) See if you can produce a surface M ⊂ 𝕋³, looking like that in Figure 6.3.3, whose Gauss curvature satisfies
(6.3.64) $K(x) < 0, \quad \forall x \in M.$
Figure 6.3.4. Regular tetrahedron in ℝ³

10. Let T ⊂ ℝ³ be a (solid) regular tetrahedron; cf. Figure 6.3.4. The angle θ between two faces is specified by
$\cos\theta = \frac{1}{3}, \quad \sin\theta = \frac{2\sqrt{2}}{3}, \quad \sin\frac{\theta}{2} = \frac{1}{\sqrt{3}}.$
Assume one vertex of T is the origin in ℝ³. Dilate T by a factor of 2, and consider
$\Omega = S^2 \cap 2T.$
(a) Show that Ω ⊂ S² is a geodesic triangle, each of whose angles is equal to θ, specified above.
(b) Deduce from the Gauss formula, Proposition 6.3.6, that
$\mathrm{Area}(\Omega) = 3\theta - \pi,$
hence
$\mathrm{Area}(\Omega) = \frac{\pi}{2} - 3\sin^{-1}\frac{1}{3}.$
Remark. It is shown in [52, §7.4] that $\sin^{-1}(1/3)$ is not a rational multiple of π.
6.4. Smooth matrix groups
A smooth matrix group is a subset
(6.4.1) $G \subset M(n, \mathbb{F}), \quad \mathbb{F} = \mathbb{R} \text{ or } \mathbb{C},$
with the two properties:
(i) G is a smooth m-dimensional surface in the vector space M(n, 𝔽);
(ii) G is a group, i.e.,
(6.4.2) $g_1, g_2 \in G \Rightarrow g_1g_2 \in G$, and $g \in G \Rightarrow g$ is invertible and $g^{-1} \in G$.
We assume G is nonempty, and note that the two conditions in (6.4.2) imply I ∈ G, where I ∈ M(n, 𝔽) is the n × n identity matrix.

Here are some examples of smooth matrix groups:
(6.4.3)
$Gl(n, \mathbb{F}) = \{g \in M(n, \mathbb{F}) : \det g \neq 0\},$
$Sl(n, \mathbb{F}) = \{g \in M(n, \mathbb{F}) : \det g = 1\},$
$O(n) = \{g \in M(n, \mathbb{R}) : g^*g = I\},$
$SO(n) = \{g \in O(n) : \det g = 1\},$
$U(n) = \{g \in M(n, \mathbb{C}) : g^*g = I\},$
$SU(n) = \{g \in U(n) : \det g = 1\}.$
The fact that all these sets of matrices satisfy (6.4.2) follows from the identities
(6.4.4) $(g_1g_2)^* = g_2^*g_1^*, \quad \det(g_1g_2) = (\det g_1)(\det g_2),$
and the fact that g ∈ M(n, 𝔽) is invertible if and only if det g ≠ 0. These facts also imply that if G ⊂ M(n, 𝔽) is a matrix group, then in fact
(6.4.5) $G \subset Gl(n, \mathbb{F}).$
Note that the defining property det g ≠ 0 implies
(6.4.6) $Gl(n, \mathbb{F})$ is open in $M(n, \mathbb{F}).$
The fact that the other groups listed in (6.4.3) are smooth surfaces can be established using the submersion mapping theorem, Proposition 3.2.5, which we recall here.

Proposition 6.4.1. Let V and W be finite-dimensional vector spaces, let Ω ⊂ V be open, and let F : Ω → W be a smooth map. Fix p ∈ W, and consider
(6.4.7) $S = \{x \in \Omega : F(x) = p\}.$
Assume that, for each x ∈ S, DF(x) : V → W is surjective. Then S is a smooth surface in Ω. Furthermore, for each x ∈ S,
(6.4.8) $T_xS = \operatorname{Ker} DF(x).$

To apply this to the groups listed in (6.4.3), we start with Sl(n, 𝔽). Here we take
(6.4.9) $V = M(n, \mathbb{F}), \quad W = \mathbb{F}, \quad F : V \to W, \quad F(A) = \det A.$
Now, given A invertible,
(6.4.10) $F(A + B) = \det(A + B) = (\det A)\det(I + A^{-1}B),$
and we have, for X ∈ M(n, 𝔽),
(6.4.11) $\det(I + X) = 1 + \operatorname{Tr} X + O(\|X\|^2),$
so
(6.4.12) $DF(A)B = (\det A)\operatorname{Tr}(A^{-1}B).$
Now, given A ∈ Sl(n, 𝔽), or even A ∈ Gl(n, 𝔽), it is readily verified that
(6.4.13) $DF(A) : M(n, \mathbb{F}) \to \mathbb{F}$
is nonzero, hence surjective, and Proposition 6.4.1 applies.

We turn to O(n). In this case,
(6.4.14) $V = M(n, \mathbb{R}), \quad W = \{X \in M(n, \mathbb{R}) : X = X^*\}, \quad F : V \to W, \quad F(A) = A^*A.$
Now, given A ∈ V,
(6.4.15) $F(A + B) = A^*A + A^*B + B^*A + O(\|B\|^2),$
so
(6.4.16) $DF(A)B = A^*B + B^*A = A^*B + (A^*B)^*.$
We claim that
(6.4.17) $A \in O(n) \Longrightarrow DF(A) : M(n, \mathbb{R}) \to W$ is surjective.
Indeed, given X ∈ W, i.e., X = X* ∈ M(n, ℝ), we have
(6.4.18) $B = \tfrac{1}{2}AX \Longrightarrow DF(A)B = X.$
Again Proposition 6.4.1 applies, so O(n) is a smooth surface. Similar arguments apply to U(n). For SU(n), we take
(6.4.19) $V = M(n, \mathbb{C}), \quad W = \{X \in M(n, \mathbb{C}) : X = X^*\} \oplus \mathbb{R}, \quad F : V \to W, \quad F(A) = (A^*A, \operatorname{Im}\det A).$
Note that A ∈ U(n) implies |det A| = 1, so Im det A = 0 ⟺ det A = ±1.

As a further comment on O(n), we note that, given A ∈ M(n, ℝ), defining A : ℝⁿ → ℝⁿ,
(6.4.20) $A \in O(n) \iff \langle Au, Av\rangle = \langle u, v\rangle, \quad \forall u, v \in \mathbb{R}^n,$
where ⟨u, v⟩ is the Euclidean inner product on ℝⁿ,
(6.4.21) $\langle u, v\rangle = \sum_j u_j v_j,$
where u = (u₁, ..., u_n), etc. Similarly, given A ∈ M(n, ℂ), defining A : ℂⁿ → ℂⁿ,
(6.4.22) $A \in U(n) \iff \langle Au, Av\rangle = \langle u, v\rangle, \quad \forall u, v \in \mathbb{C}^n,$
where ⟨u, v⟩ denotes the Hermitian inner product on ℂⁿ,
(6.4.23) $\langle u, v\rangle = \sum_j u_j \bar{v}_j.$
Note that
(6.4.24) $\operatorname{Re}\langle u, v\rangle$ defines the Euclidean inner product on ℂⁿ ≈ ℝ²ⁿ,
and we have
(6.4.25) $U(n) \hookrightarrow O(2n).$
If G ⊂ M(n, 𝔽) is a smooth matrix group, it is of particular interest to consider the tangent space to G at the identity element,
(6.4.26) $\mathfrak{g} = T_I G \subset M(n, \mathbb{F}),$
an ℝ-linear subspace. For the groups listed in (6.4.3), we have the following specific identifications:
(6.4.27)
$T_I Sl(n, \mathbb{F}) = \{A \in M(n, \mathbb{F}) : \operatorname{Tr} A = 0\},$
$T_I O(n) = \{A \in M(n, \mathbb{R}) : A^* = -A\} = T_I SO(n),$
$T_I U(n) = \{A \in M(n, \mathbb{C}) : A^* = -A\},$
$T_I SU(n) = \{A \in M(n, \mathbb{C}) : A^* = -A,\ \operatorname{Tr} A = 0\}.$
For the first two identities, take A = I in (6.4.12) and (6.4.16), respectively, yielding $DF(I)A = \operatorname{Tr} A$ and $DF(I)A = A + A^*$, respectively. Having (6.4.27) and making use of the identities
(6.4.28) $\det e^A = e^{\operatorname{Tr} A}, \quad (e^A)^* = e^{A^*},$
one readily verifies the following, for
(6.4.29) $\operatorname{Exp}(A) = e^A = \sum_{k=0}^\infty \frac{1}{k!}A^k.$

Proposition 6.4.2. For each matrix group listed in (6.4.3),
(6.4.30) $\operatorname{Exp} : T_I G \to G.$

Here is a significant extension of Proposition 6.4.2.

Proposition 6.4.3. Let G ⊂ M(n, 𝔽) be a smooth matrix group, 𝔤 = T_I G. Then, for A ∈ M(n, 𝔽), we have
(6.4.31) $A \in \mathfrak{g} \iff e^{tA} \in G, \quad \forall t \in \mathbb{R}.$
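Proposition 6.4.2 invites a quick numerical check: by (6.4.27)-(6.4.28), exponentiating a skew-symmetric matrix should produce an element of SO(3), and exponentiating a traceless matrix an element of Sl(3, ℝ). A sketch, assuming scipy is available for the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A in T_I O(3): skew-symmetric, A* = -A
B = rng.standard_normal((3, 3))
A_skew = B - B.T
g = expm(A_skew)
print(np.allclose(g.T @ g, np.eye(3)))    # g*g = I, so g lies in O(3)
print(np.isclose(np.linalg.det(g), 1.0))  # in fact g lies in SO(3)

# A in T_I Sl(3, R): trace zero
C = rng.standard_normal((3, 3))
A0 = C - (np.trace(C) / 3.0) * np.eye(3)
h = expm(A0)
print(np.isclose(np.linalg.det(h), 1.0))  # det e^A = e^{Tr A} = 1
```

All three checks hold to machine precision, reflecting (6.4.30).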
The "⟸" part is clear from the identity $(d/dt)e^{tA}|_{t=0} = A$. As for the "⟹" part, we have this for G in (6.4.3), by Proposition 6.4.2. We will postpone the proof of the rest of Proposition 6.4.3 until later in this section, but will proceed to develop some consequences of this result, starting with the following. For A, B ∈ M(n, 𝔽), set
(6.4.32) $[A, B] = AB - BA.$
This is called the commutator, or the Lie bracket, of A and B.
Proposition 6.4.4. Let G ⊂ M(n, 𝔽) be a smooth matrix group, 𝔤 = T_I G. Then
(6.4.33) $A, B \in \mathfrak{g} \Longrightarrow [A, B] \in \mathfrak{g}.$

Remark. For the tangent spaces $T_I G$ listed in (6.4.27), the implication (6.4.33) is straightforward, via such identities as
(6.4.34) $[A, B]^* = -[A^*, B^*], \quad \operatorname{Tr}[A, B] = 0.$
We turn to the general situation.

Proof. Given g ∈ G, A ∈ 𝔤,
(6.4.35) $g^{-1}e^{tA}g = e^{tg^{-1}Ag},$
and the left side of (6.4.35) belongs to G, so by (6.4.31) we have
(6.4.36) $g^{-1}Ag \in \mathfrak{g}, \quad \forall g \in G,\ A \in \mathfrak{g}.$
Setting $g = e^{tB}$, B ∈ 𝔤, we have
(6.4.37) $e^{-tB}Ae^{tB} \in \mathfrak{g}, \quad \forall t \in \mathbb{R}.$
Applying d/dt at t = 0 gives (6.4.33). □

The commutator [A, B] = AB − BA gives 𝔤 the structure of a Lie algebra. We aim to establish further relations between the Lie algebra structure of 𝔤 and the group structure of G. To begin, for A, B ∈ 𝔤, we have, for small t,
(6.4.38) $e^{tA}e^{tB} = \Big(I + tA + \frac{t^2}{2}A^2 + O(t^3)\Big)\Big(I + tB + \frac{t^2}{2}B^2 + O(t^3)\Big) = I + t(A + B) + \frac{t^2}{2}(A^2 + 2AB + B^2) + O(t^3),$
and similarly
(6.4.39) $e^{tB}e^{tA} = I + t(A + B) + \frac{t^2}{2}(A^2 + 2BA + B^2) + O(t^3),$
hence
(6.4.40) $e^{tA}e^{tB} - e^{tB}e^{tA} = t^2[A, B] + O(t^3).$
Consequently,
(6.4.41) $e^{tA}e^{tB}e^{-tA}e^{-tB} = I + t^2[A, B] + O(t^3) = e^{t^2[A,B]} + O(t^3).$
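The expansion (6.4.41) can also be observed numerically: (e^{tA}e^{tB}e^{−tA}e^{−tB} − I)/t² should approach [A, B] at rate O(t) as t → 0. A sketch, again assuming scipy's expm:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
comm = A @ B - B @ A  # [A, B]

errs = []
for t in [1e-1, 1e-2, 1e-3]:
    W = expm(t*A) @ expm(t*B) @ expm(-t*A) @ expm(-t*B)
    err = np.linalg.norm((W - np.eye(3)) / t**2 - comm)
    errs.append(err)
    print(t, err)  # err shrinks roughly linearly in t (the O(t^3) remainder)
```

The error at each step drops by about a factor of 10 as t does, consistent with the O(t³) remainder in (6.4.41).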
We apply these calculations to show how the Lie algebra structure is preserved under smooth homomorphisms of G. Thus, assume we have a smooth homomorphism
(6.4.42) $\pi : G \to Gl(m, \mathbb{F}),$
i.e., a smooth map satisfying
(6.4.43) $\pi(g_1g_2) = \pi(g_1)\pi(g_2).$
Note that (6.4.43) implies
(6.4.44) $\pi(I) = \pi(I \cdot I) = \pi(I)\pi(I)$, hence $\pi(I) = I.$
Let us set
(6.4.45) $\sigma = D\pi(I) : \mathfrak{g} \to M(m, \mathbb{F})$, so $\sigma(A) = \frac{d}{ds}\pi(e^{sA})\Big|_{s=0},$
for A ∈ 𝔤. Note that, for such A,
(6.4.46) $\frac{d}{dt}\pi(e^{tA}) = \frac{d}{ds}\pi(e^{(s+t)A})\Big|_{s=0} = \frac{d}{ds}\pi(e^{sA})\pi(e^{tA})\Big|_{s=0} = \sigma(A)\pi(e^{tA}),$
and since $\gamma(t) = \pi(e^{tA})$ satisfies γ(0) = I, this gives
(6.4.47) $\pi(e^{tA}) = e^{t\sigma(A)}.$
We are ready to prove the following.
Proposition 6.4.5. If π is a smooth homomorphism as in (6.4.42) and σ : 𝔤 → M(m, 𝔽) is given by (6.4.45), then, for A, B ∈ 𝔤, we have
(6.4.48) $\sigma([A, B]) = [\sigma(A), \sigma(B)].$

Proof. Applying π to (6.4.41), we have
(6.4.49) $\pi(e^{tA})\pi(e^{tB})\pi(e^{-tA})\pi(e^{-tB}) = \pi(e^{t^2[A,B]}) + O(t^3).$
By (6.4.47) and a second application of (6.4.41), the left side of (6.4.49) is equal to
(6.4.50) $e^{t\sigma(A)}e^{t\sigma(B)}e^{-t\sigma(A)}e^{-t\sigma(B)} = e^{t^2[\sigma(A),\sigma(B)]} + O(t^3).$
Comparing the right sides of (6.4.49) and (6.4.50), we have
(6.4.51) $\frac{d}{ds}\pi(e^{s[A,B]})\Big|_{s=0} = [\sigma(A), \sigma(B)],$
which gives (6.4.48). □
=
We next associate to each A E TIG 9 a certain vector field on G. To start, take A E M(n, F), the Lie algebra of Gl( n, F). We define a vector field XA on Gl( n, F) by (6.4.52)
for g E Gl ( n, F). This vector field is left invariant. That is to say, iffor each h E Gl( n, F) we define left translation Lh : Gl( n, F) + Gl ( n, F) by (6.4.53) then we have (6.4.54) We now have the following simple result: Proposition 6.4.6. IfA E 9
= TIG, then XA is tangent to G.
Proof. Given g ∈ G, we have $L_g : G \to G$, and hence
(6.4.55) $DL_g(I) : T_I G \to T_g G$, hence $A \in \mathfrak{g} \Rightarrow X_A(g) = gA \in T_g G$. □

Given A ∈ M(n, 𝔽), the flow $\gamma^t$ on Gl(n, 𝔽) generated by $X_A$ is given by
(6.4.56) $\gamma^t(g) = g\,e^{tA},$
as is readily checked:
(6.4.57) $\frac{d}{dt}\gamma^t(g)\Big|_{t=0} = X_A(g)$ (by definition) $= gA,$
which coincides with $(d/dt)\,g e^{tA}|_{t=0}$. With these observations, we can give a short

Proof of Proposition 6.4.3 (the "⟹" part). As seen in §3.2, the flow $\gamma^t$ generated by $X_A$ leaves invariant each smooth surface in Gl(n, 𝔽) to which $X_A$ is tangent. If A ∈ 𝔤, then, by Proposition 6.4.6, G has this property, and $e^{tA} = \gamma^t(I) \in G$. □
We record the following useful complement to Proposition 6.4.3.

Proposition 6.4.7. If G ⊂ M(n, 𝔽) is a smooth matrix group, 𝔤 = T_I G, then Exp : 𝔤 → G is a smooth map, and there exist neighborhoods 𝒪 of 0 in 𝔤 and Ω of I in G such that Exp : 𝒪 → Ω is a diffeomorphism.

Proof. The mapping property has just been proved. The smoothness follows from the smoothness of Exp : M(n, 𝔽) → Gl(n, 𝔽). We also have, for $D\operatorname{Exp}(0) : \mathfrak{g} \to T_I G = \mathfrak{g}$, that $D\operatorname{Exp}(0)A = A$, ∀A ∈ 𝔤, so the inverse function theorem applies. □
As seen in §2.3, a smooth vector field X defines a differential operator (also denoted X) on smooth functions by
$Xu(x) = \frac{d}{dt}u(\gamma^t(x))\Big|_{t=0},$
where $\gamma^t$ is the flow generated by X. In particular, for A ∈ M(n, 𝔽),
(6.4.58) $X_A u(g) = \frac{d}{dt}u(g e^{tA})\Big|_{t=0} = Du(g) \cdot gA,$
where the dot product gives the action of $Du(g) \in L(M(n, \mathbb{F}), \mathbb{R})$ on $gA \in M(n, \mathbb{F})$. Recall from §2.3 that the Lie bracket of vector fields is given by
(6.4.59) $[X, Y]u = X(Yu) - Y(Xu).$
The following result provides an equivalence between the Lie algebra structure on 𝔤 and the Lie bracket in (6.4.59).
Proposition 6.4.8. Given A, B ∈ M(n, 𝔽), we have
(6.4.60) $[X_A, X_B] = X_{[A,B]}.$

Proof. To begin, we have
(6.4.61) $X_A X_B u(g) = \frac{\partial^2}{\partial s\,\partial t}u(g e^{tA}e^{sB})\Big|_{s,t=0},$
and hence
(6.4.62) $(X_A X_B - X_B X_A)u(g) = \frac{\partial^2}{\partial s\,\partial t}\big[u(g e^{tA}e^{sB}) - u(g e^{sB}e^{tA})\big]\Big|_{s,t=0}.$
We can extend (6.4.40) to
(6.4.63) $e^{tA}e^{sB} = e^{sB}e^{tA} + st[A, B] + O((|s| + |t|)^3).$
Consequently,
(6.4.64) $u(g e^{tA}e^{sB}) = u\big(g e^{sB}e^{tA} + st\,g[A, B] + O((|s| + |t|)^3)\big).$
Applying $\partial^2/\partial s\,\partial t\,|_{s,t=0}$, we obtain
(6.4.65) $(X_A X_B - X_B X_A)u(g) = Du(g) \cdot g[A, B] = X_{[A,B]}u(g),$
the last identity holding by (6.4.58). This proves (6.4.60). □
We turn to the production of metric tensors on a smooth matrix group G ⊂ M(n, ℝ). (For simplicity, we restrict our attention to 𝔽 = ℝ here, which is no real loss of generality, since Gl(n, ℂ) ⊂ Gl(2n, ℝ).) To start, we use the inner product on M(n, ℝ) computed componentwise; equivalently,
(6.4.66) $\langle A, B\rangle = \operatorname{Tr}(B^*A) = \operatorname{Tr}(BA^*).$
This produces a metric tensor on G ⊂ M(n, ℝ), by restriction. This Riemannian metric interfaces well with the group structure of G in the following situation.

Proposition 6.4.9. Assume the smooth matrix group G ⊂ M(n, ℝ) satisfies
(6.4.67) $G \subset O(n).$
Then the metric tensor on G induced from the inner product (6.4.66) on M(n, ℝ) is invariant under both left and right translations, i.e., under
(6.4.68) $L_g, R_g : G \to G, \quad g \in G,$
defined by
(6.4.69) $L_g(x) = gx, \quad R_g(x) = xg, \quad x \in G.$

Proof. Indeed, we have
(6.4.70) $L_g, R_g : M(n, \mathbb{R}) \to M(n, \mathbb{R})$, isometries, $\forall g \in O(n),$
where
(6.4.71) $L_gA = gA, \quad R_gA = Ag, \quad A \in M(n, \mathbb{R}),\ g \in O(n).$ □
In the setting of Proposition 6.4.9, we say the Riemannian metric on G defined above is bi-invariant. When G does not satisfy (6.4.67), the metric tensor on G induced from M(n, ℝ) is generally not what we want to deal with. In such cases, take some positive-definite inner product ⟨ , ⟩ on 𝔤 = T_I G (perhaps that induced from (6.4.66)), and define inner products $\langle\ ,\ \rangle_{\ell,g}$ and $\langle\ ,\ \rangle_{r,g}$ on $T_g G$ by
(6.4.72) $\langle DL_g(I)A, DL_g(I)B\rangle_{\ell,g} = \langle A, B\rangle, \quad \langle DR_g(I)A, DR_g(I)B\rangle_{r,g} = \langle A, B\rangle,$
for A, B ∈ 𝔤. We have the following.

Proposition 6.4.10. Given an inner product ⟨ , ⟩ on 𝔤 = T_I G, there are unique extensions to Riemannian metric tensors $\langle\ ,\ \rangle_\ell$ and $\langle\ ,\ \rangle_r$ on G such that, for all g ∈ G,
(6.4.73) $L_g : G \to G$ is an isometry for $\langle\ ,\ \rangle_\ell$, and $R_g : G \to G$ is an isometry for $\langle\ ,\ \rangle_r$.

These metric tensors give rise to volume elements and hence to integrals that are, respectively, left and right invariant, so
(6.4.74) $\int_G f(x)\, dV_\ell(x) = \int_G f(gx)\, dV_\ell(x), \quad \int_G f(x)\, dV_r(x) = \int_G f(xg)\, dV_r(x),$
for all g ∈ G and all integrable f (e.g., f continuous and compactly supported on G).
(6.4.75 )
J f(x)dVi(x) = J f(x)godx)dV,(x).
M
M
If M = G and dVg, dilg are both smooth leftinvariant volume elements, then (6.4.76) equals both
(6.4.77) and (6.4.78) for all f E Ca( G),
J f(x)dVg(x) = J f(x)q;(x) dilg(x) G
G
J f(gx)dVg(x) = J f(gx)q;(x) dilg(x) G
G
J f(gx)go(gx) dilg(x),
G
g E G. This forces gl(x) = gl(gx) for all x, g E G, hence go = constant.
We can apply this observation to compare a left-invariant volume element $dV_\ell$ and a right translation of it, say $dV_\ell^h$, defined by
(6.4.79) $\int_G f(x)\, dV_\ell^h(x) = \int_G f(xh)\, dV_\ell(x).$
We have
(6.4.80) $dV_\ell^h(x) = \psi(h)\, dV_\ell(x), \quad h \in G.$
Note that if h₁, h₂ ∈ G, then
(6.4.81) $\int_G f(xh_1h_2)\, dV_\ell(x) = \psi(h_1)\int_G f(xh_2)\, dV_\ell(x) = \psi(h_1)\psi(h_2)\int_G f(x)\, dV_\ell(x),$
so
(6.4.82) $\psi : G \to (0, \infty)$ is a smooth multiplicative homomorphism,
(6.4.83) $\psi(h_1h_2) = \psi(h_1)\psi(h_2).$
This leads to the following.

Proposition 6.4.11. If the only smooth homomorphism ψ : G → (0, ∞) is the trivial homomorphism ψ(g) ≡ 1, then the left-invariant volume element $dV_\ell$ is also right invariant.

Note that the image of G under ψ must be a multiplicative subgroup of (0, ∞), and the only compact subgroup is {1}. Since the image is compact if G is compact, we have the following.

Proposition 6.4.12. If G is a compact, smooth matrix group, then each left-invariant volume element on G is also right invariant.

Given G compact, we normalize the bi-invariant volume element, and define
(6.4.84) $\int_G f(g)\, dg = \frac{1}{V_\ell(G)}\int_G f(x)\, dV_\ell(x).$
Recall that we already have a bi-invariant Riemannian metric tensor, hence a bi-invariant integral, in case G ⊂ O(n). We come full circle with the following result.

Proposition 6.4.13. Let G ⊂ M(n, ℝ) be a smooth, compact matrix group. Then there is an inner product on ℝⁿ preserved by the action of G.

Proof. Let ⟨ , ⟩ denote the standard inner product on ℝⁿ, and define a new inner product ( , ) by
(6.4.85) $(v, w) = \int_G \langle gv, gw\rangle\, dg.$
Then, for each h ∈ G, $(hv, hw) = \int_G \langle ghv, ghw\rangle\, dg = (v, w)$, by the invariance of the integral. □
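When G is finite, the averaged inner product (6.4.85) becomes a finite sum, and the invariance argument can be watched directly. A sketch (our own illustrative choice: the cyclic group of rotations by multiples of 2π/5, a subgroup of SO(2), and an arbitrary positive-definite B):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A finite matrix group: rotations by multiples of 2*pi/5.
G = [rot(2*np.pi*k/5) for k in range(5)]

# A non-invariant inner product <v, w>_B = v^T B w, with B symmetric positive-definite.
B = np.array([[2.0, 0.3], [0.3, 1.0]])

# Finite analogue of (6.4.85): average g^T B g over the group.
Q = sum(g.T @ B @ g for g in G) / len(G)

# The averaged inner product is preserved by every element of G:
for g in G:
    assert np.allclose(g.T @ Q @ g, Q)
print(np.linalg.eigvalsh(Q))  # positive eigenvalues: Q still defines an inner product
```

Invariance holds because right-multiplying the averaging variable by a fixed group element merely permutes the finite sum, exactly as in the proof above.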
For example, if M is a two-dimensional Riemannian manifold with Gauss curvature K(x) ≤ 0 for all x ∈ M, then there are no conjugate points. In contrast, for positive curvature, there is the following result.
Proposition 6.5.6. Let M be a two-dimensional Riemannian manifold. Assume its Gauss curvature K satisfies
(6.5.49) $K(x) \ge \kappa > 0, \quad \forall x \in M.$
Then any geodesic on M of length $> \pi/\sqrt{\kappa}$ has conjugate points.

We refer to [10] for a proof of this and to [11] for a higher-dimensional generalization. We state a few other results about Jacobi fields and conjugate points, referring to [11] for proofs. We mention that the proof of Proposition 6.5.6 makes use of the second variational formula (6.2.110), as does the proof of the next result.

Proposition 6.5.7. If γ is a unit speed geodesic on a Riemannian manifold M and p = γ(a) and q = γ(b) are conjugate along γ (b > a), then
(6.5.50) $d(p, \gamma(t)) < t - a, \quad \text{for } t > b.$

To formulate the next result, we say a Riemannian manifold M is complete if $\operatorname{Exp}_p$ is defined on all of $T_pM$, for each p ∈ M.

Proposition 6.5.8. Assume M is a complete Riemannian manifold of dimension 2. If (6.5.49) holds, then M is compact and
(6.5.51) $d(p, q) \le \pi/\sqrt{\kappa}, \quad \forall p, q \in M.$

Note that a sphere of radius $R = 1/\sqrt{\kappa}$ in ℝ³, whose Gauss curvature is ≡ κ, illustrates the sharpness of Propositions 6.5.6 and 6.5.8. The proof of Proposition 6.5.8 uses Propositions 6.5.6 and 6.5.7. Higher-dimensional extensions can also be found in [11].

6.6. A spectral mapping theorem
Let A ∈ M(n, ℂ), and let φ(z) be given by
(6.6.1) $\varphi(z) = \sum_{k=0}^\infty c_k z^k,$
a power series that is convergent for all z ∈ ℂ. We define φ(A) ∈ M(n, ℂ) by
(6.6.2) $\varphi(A) = \sum_{k=0}^\infty c_k A^k.$
Convergence of this series follows by estimates of the form (2.1.18), which hold for complex as well as real matrices. We want to describe Spec φ(A) in terms of Spec A, where Spec A denotes the set of eigenvalues of A. We prove the following, which plays a role in the proof of Proposition 6.5.2.

Proposition 6.6.1. For A ∈ M(n, ℂ) and φ(A) defined above,
(6.6.3) $\operatorname{Spec}\varphi(A) = \{\varphi(\lambda_j) : \lambda_j \in \operatorname{Spec} A\}.$
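Proposition 6.6.1 can be probed numerically even in the non-diagonalizable case. The sketch below (φ = exp, with scipy assumed available) uses a 3×3 Jordan block, which has a single eigenvalue and is not diagonalizable:

```python
import numpy as np
from scipy.linalg import expm

# Jordan block with eigenvalue 2 (not diagonalizable).
A = 2.0*np.eye(3) + np.diag([1.0, 1.0], k=1)

spec_phiA = np.linalg.eigvals(expm(A))    # Spec phi(A)
phi_specA = np.exp(np.linalg.eigvals(A))  # {phi(lambda) : lambda in Spec A}

print(np.sort_complex(spec_phiA))
print(np.sort_complex(phi_specA))  # both approximately {e^2, e^2, e^2}
```

The two spectra agree (up to the roundoff sensitivity typical of defective eigenvalues), as (6.6.3) predicts.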
The proof will make use of some basic results of linear algebra, which are discussed in Appendix A.3 and treated in more detail in [50, Chapter 2] and in [52, §§6 7]' To start, let us note that (6.6.3) is quite easy to prove when A is diagonalizable, i.e., en has a basis : 1 :S j :S n} of eigenvectors ofA, satisfying = 1 :S j :S n. (6.6.4) In this case, A u = so (6.6.2) gives
{Uj
k j AJUj,
(6.6.5)
AUj AjUj,
(A) =
Uj k=�o CkAJUj = (Aj)Uj. 00
This immediately gives (6.6.3).

The proof of (6.6.3) requires more work when A is not diagonalizable. In that case, for each λ_j ∈ Spec A, define the generalized eigenspace

(6.6.6)  GE(A, λ_j) = {v ∈ C^n : (A − λ_j I)^k v = 0 for some k}.

It is a general result (proved in [50] and [52]) that each v ∈ C^n has a unique decomposition

(6.6.7)  v = Σ_j v_j,  v_j ∈ GE(A, λ_j),  λ_j ∈ Spec A.

We also write

(6.6.8)  C^n = ⊕_{λ∈Spec A} GE(A, λ).

Similarly,

(6.6.9)  C^n = ⊕_{μ∈Spec φ(A)} GE(φ(A), μ).
(7.1.125)  ⟨a, b⟩ = Σ_{k∈Z^n} a(k) b(k),  a ∈ s(Z^n), b ∈ s′(Z^n).

Note that

|⟨a, b⟩| ≤ C (sup_k (1 + |k|)^{N+n+1} |a(k)|)(sup_k (1 + |k|)^{−N} |b(k)|) < ∞.

Conversely, given β as in (7.1.121), satisfying (7.1.123), we define

(7.1.126)  b(k) = β(δ_k),

with δ_k as in (7.1.82), and verify that b ∈ s′(Z^n) and that

(7.1.127)  ⟨a, b⟩ = β(a),

first for a = δ_k, for all k ∈ Z^n, and then for all a ∈ s(Z^n).

We are now ready to define 𝓕 and 𝓕* in (7.1.119). Parallel to (7.1.84), we define 𝓕w ∈ s′(Z^n) for w ∈ D′(T^n) by

(7.1.128)  ⟨a, 𝓕w⟩ = ⟨𝓕*a, w⟩,  a ∈ s(Z^n),

and we define 𝓕*b ∈ D′(T^n) for b ∈ s′(Z^n) by

(7.1.129)  ⟨f, 𝓕*b⟩ = ⟨𝓕f, b⟩,  f ∈ C^∞(T^n).

As we have seen, a ∈ s(Z^n) ⟹ 𝓕*a ∈ C^∞(T^n), with estimates (7.1.112), and f ∈ C^∞(T^n) ⟹ 𝓕f ∈ s(Z^n), with estimates (7.1.110). These results enable one to deduce that (7.1.128) defines 𝓕w ∈ s′(Z^n) and (7.1.129) defines 𝓕*b ∈ D′(T^n).

Here is a further extension of the Fourier inversion formula.

Proposition 7.1.13. The maps 𝓕 and 𝓕* in (7.1.119) are two-sided inverses of each other, i.e.,

(7.1.130)  𝓕*𝓕w = w and 𝓕𝓕*b = b,  ∀ w ∈ D′(T^n), b ∈ s′(Z^n).

The proof is parallel to that of Proposition 7.1.10. The result (7.1.12) implies

(7.1.131)  𝓕(∂^α f)(k) = (ik)^α 𝓕f(k),  ∀ f ∈ C^∞(T^n).

We can extend this to D′(T^n), using (7.1.117)-(7.1.118) and (7.1.128)-(7.1.129), to obtain the following.

Proposition 7.1.14. Given w ∈ D′(T^n),

(7.1.132)  𝓕(∂^α w)(k) = (ik)^α 𝓕w(k).
Exercises
1. Consider f(θ) = |θ| for −π ≤ θ ≤ π, extended periodically to define an element of C(T¹).
(a) Compute f̂(k).
(b) Show that f ∈ A(T¹).
(c) Use the Fourier inversion formula (7.1.4) at θ = 0 to show that

(7.1.133)  Σ_{k≥1, k odd} 1/k² = π²/8.

(d) Deduce from (c) that

(7.1.134)  Σ_{k=1}^∞ 1/k² = π²/6.

Hint. Decompose this sum into the sum over k odd and the sum over k even.
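The values in (7.1.133)-(7.1.134), and the ζ(4) sum (7.1.135) below, can be checked numerically from partial sums. A quick added sketch (not part of the exercises; it uses only the standard library, and adds the crude tail estimate Σ_{k>N} k^{−2} ≈ 1/N to speed up the slowly convergent k^{−2} sum):

```python
import math

N = 100_000
s2 = sum(1.0 / k**2 for k in range(1, N + 1))            # partial sum for (7.1.134)
s2_odd = sum(1.0 / k**2 for k in range(1, N + 1, 2))      # partial sum for (7.1.133)
s4 = sum(1.0 / k**4 for k in range(1, N + 1))             # partial sum for (7.1.135)

assert abs(s2 + 1.0 / N - math.pi**2 / 6) < 1e-8   # tail-corrected zeta(2)
assert abs(s2_odd - math.pi**2 / 8) < 1e-4
assert abs(s4 - math.pi**4 / 90) < 1e-12           # zeta(4) converges fast
```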
Remark. Σ_{k=1}^∞ 1/k² is ζ(2).

2. Consider g(θ) = 1 for 0 < θ < π, 0 for −π < θ < 0, defining a bounded integrable function on T¹.
(a) Compute ĝ(k).
(b) Use the Plancherel identity (7.1.46) to obtain another derivation of (7.1.133).

3. Apply the Plancherel identity to f and f̂ in Exercise 1. Use this to show that

(7.1.135)  Σ_{k=1}^∞ 1/k⁴ = π⁴/90.

Note. This sum is ζ(4).

4. Given f ∈ R(T¹), set

(7.1.136)  P_r f(θ) = Σ_{k∈Z} r^{|k|} f̂(k) e^{ikθ},  0 ≤ r < 1.

Show that
(7.2.15)  f_a(x) = f(ax) ⟹ 𝓕f_a(ξ) = a^{−n} f̂(a^{−1}ξ),

and consequently, for b > 0,

(7.2.16)  g_b(x) = e^{−b|x|²} on R^n ⟹ 𝓕g_b(ξ) = 𝓕*g_b(ξ) = (2b)^{−n/2} e^{−|ξ|²/4b}.

From (7.2.16) we see that 𝓕g_{1/2} = g_{1/2} and also that 𝓕*𝓕g_b = 𝓕𝓕*g_b = g_b. The Fourier inversion formula asserts that

(7.2.17)  f(x) = (2π)^{−n/2} ∫ f̂(ξ) e^{ix·ξ} dξ,

in appropriate senses, depending on the nature of f. We will approach this by examining

(7.2.18)  J_ε f(x) = (2π)^{−n/2} ∫_{R^n} f̂(ξ) e^{−ε|ξ|²} e^{ix·ξ} dξ,
with ε > 0. By (6.2.3), f̂(ξ)e^{−ε|ξ|²} is Riemann integrable over R^n whenever f ∈ R(R^n). We can plug in (7.2.1) for f̂(ξ) and switch order of integration, getting

(7.2.19)  J_ε f(x) = (2π)^{−n} ∫∫ f(y) e^{i(x−y)·ξ} e^{−ε|ξ|²} dy dξ = ∫ f(y) H_ε(x − y) dy,

where

(7.2.20)  H_ε(x) = (2π)^{−n} ∫_{R^n} e^{−ε|ξ|² + ix·ξ} dξ.

Using (7.2.16), we have

(7.2.21)  H_ε(x) = (4πε)^{−n/2} e^{−|x|²/4ε}.

A change of variable and use of ∫_{R^n} e^{−|x|²} dx = π^{n/2} gives

(7.2.22)  ∫ H_ε(x) dx = 1,  ∀ ε > 0.

Using this information, we will be able to prove the following.
Proposition 7.2.2. Assume f is bounded and continuous on R^n, and take J_ε f(x) = ∫ f(y) H_ε(x − y) dy, with H_ε as in (7.2.21)-(7.2.22). Then, as ε ↘ 0,

(7.2.23)  J_ε f(x) → f(x).
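Proposition 7.2.2 can be illustrated numerically in one variable: convolving a bounded continuous f with the Gaussian H_ε of (7.2.21) and letting ε ↘ 0 recovers f(x). The sketch below is an added illustration (the test function f(x) = cos x, the evaluation point, and the grid parameters are arbitrary choices); it approximates the convolution integral by a Riemann sum:

```python
import numpy as np

f = np.cos          # an arbitrary bounded continuous test function
x0 = 0.3

def J_eps(eps, x, half_width=30.0, m=200001):
    # Riemann-sum approximation of J_eps f(x) = integral f(y) H_eps(x-y) dy,
    # with H_eps(x) = (4 pi eps)^(-1/2) exp(-x^2 / (4 eps))  (n = 1 case of (7.2.21)).
    y = np.linspace(x - half_width, x + half_width, m)
    dy = y[1] - y[0]
    H = (4 * np.pi * eps) ** (-0.5) * np.exp(-(x - y) ** 2 / (4 * eps))
    return float(np.sum(f(y) * H) * dy)

errs = [abs(J_eps(eps, x0) - f(x0)) for eps in (1.0, 0.1, 0.001)]
assert errs[0] > errs[1] > errs[2] and errs[2] < 2e-3  # error decreases as eps -> 0
```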
This establishes the desired finiteness of ∫_{R^n} |f̂(ξ)| dξ. □
Having seen important roles played by L¹ and L² norms and L² inner products, we are motivated to advertise how Fourier analysis is a natural setting in which to work with spaces of functions larger than R(R^n) or R#(R^n), spaces that are labeled L¹(R^n) and L²(R^n). These are defined using the Lebesgue theory of integration. We give a brief description of this, referring the reader to other sources, such as [17] or [47], for a detailed presentation.

To start, we say a set S ⊂ R^n is measurable provided that, for each cell R ⊂ R^n,

(7.2.78)  m*(S ∩ R) + m*(R \ S) = V(R),

where m* is the outer measure, defined in (3.1.124). We say a function f : R^n → C is measurable provided that, for each open O ⊂ C, f^{−1}(O) ⊂ R^n is measurable. The Lebesgue integral associates a value in [0, +∞] to

(7.2.79)  ∫_{R^n} f(x) dx,
for each measurable f satisfying f(x) ≥ 0 for all x. The space L¹(R^n) consists of all measurable functions f such that

(7.2.80)  ‖f‖_{L¹} = ∫ |f(x)| dx < ∞.

In such a case, one can write f = f₀⁺ − f₀⁻ + i(f₁⁺ − f₁⁻) with all f_j^± measurable, ≥ 0, and with finite integral, and the process alluded to above applies to evaluate these integrals, and hence to evaluate ∫_{R^n} f(x) dx.

There is one further wrinkle. The space L¹(R^n) actually consists of equivalence classes of measurable functions satisfying (7.2.80), where the equivalence is f₁ ∼ f₂ if and only if {x ∈ R^n : f₁(x) ≠ f₂(x)} has outer measure 0. This makes L¹(R^n) a normed space.

More generally, for p ∈ [1, ∞), L^p(R^n) consists of equivalence classes of measurable functions for which

(7.2.81)  ‖f‖^p_{L^p} = ∫ |f(x)|^p dx < ∞.

The only case of p > 1 that we work with here is p = 2. One has

(7.2.82)  f, g ∈ L²(R^n) ⟹ f ḡ ∈ L¹(R^n),

and L²(R^n) is an inner product space, via (7.2.43), extended to this setting. These L^p norms satisfy the triangle inequality:

(7.2.83)  ‖f + g‖_{L^p} ≤ ‖f‖_{L^p} + ‖g‖_{L^p}.

For p = 1 or 2, the proofs of (7.2.83) are as described before. For other p ∈ (1, ∞) (which we do not deal with here), the reader can consult [17] or [47]. Thanks to (7.2.83), L^p(R^n) has the structure of a metric space, with d(f, g) = ‖f − g‖_{L^p}. We say f_ν → f in L^p if ‖f_ν − f‖_{L^p} → 0. For all p ∈ [1, ∞), these spaces have the following important metric properties. The first is a denseness property.

Proposition A. Given f ∈ L^p(R^n) and k ∈ N, there exist f_ν ∈ C₀^k(R^n) such that f_ν → f in L^p.

The next is a completeness property.

Proposition B. If (f_ν) is a Cauchy sequence in L^p(R^n), then there exists f ∈ L^p(R^n) such that f_ν → f in L^p.

We refer to [17] or [47] for proofs of these results. We mention that if f is bounded and continuous on R^n, then f ∈ L¹(R^n) if and only if f ∈ R(R^n). Hence (7.2.38) is equivalent to

(7.2.84)  A(R^n) = {f ∈ L¹(R^n) : f bounded and continuous, and f̂ ∈ L¹(R^n)}.

We also have

(7.2.85)  A(R^n) ⊂ L²(R^n),

either by (7.2.44) or by

(7.2.86)  ‖f‖²_{L²} ≤ (sup |f|) ‖f‖_{L¹} ≤ (2π)^{−n/2} ‖f̂‖_{L¹} ‖f‖_{L¹}.
The following neat extensions of Propositions 7.2.5 and 7.2.7 illustrate the usefulness of the Lebesgue theory of integration in Fourier analysis.

Proposition 7.2.10. The maps 𝓕 and 𝓕* have unique continuous linear extensions from

(7.2.87)  𝓕, 𝓕* : A(R^n) → A(R^n)

to

(7.2.88)  𝓕, 𝓕* : L²(R^n) → L²(R^n),

and the identities

(7.2.89)  𝓕*𝓕f = f,  𝓕𝓕*f = f,

and

(7.2.90)  ‖𝓕f‖_{L²} = ‖𝓕*f‖_{L²} = ‖f‖_{L²}

hold for all f ∈ L²(R^n).

Proposition 7.2.11. Define S_R by

(7.2.91)  S_R f(x) = (2π)^{−n/2} ∫_{|ξ|≤R} f̂(ξ) e^{ix·ξ} dξ.

Then S_R : L²(R^n) → L²(R^n), and

(7.2.92)  S_R f → f in L²(R^n), as R → ∞.
(7.2.153)  ‖f − f_h‖²_{L²} = ∫ |1 − e^{ihξ}|² |f̂(ξ)|² dξ.

Now,

(7.2.154)  |1 − e^{ihξ}|² = 2(1 − cos hξ) ≥ 2, for π/2 ≤ |hξ| ≤ 3π/2,

so

(7.2.155)  ‖f − f_h‖²_{L²} ≥ 2 ∫_{π/(2|h|) ≤ |ξ| ≤ 3π/(2|h|)} |f̂(ξ)|² dξ.

If (7.2.150) holds, we deduce that, for 0 < |h| ≤ 1,

(7.2.156)  ∫_{π/(2|h|) ≤ |ξ| ≤ 3π/(2|h|)} |f̂(ξ)|² dξ ≤ C |h|^{2α},

hence (setting |h| = π 2^{−ℓ−1}), for ℓ ≥ 1,

(7.2.157)  ∫_{2^ℓ ≤ |ξ| ≤ 2^{ℓ+1}} |f̂(ξ)|² dξ ≤ C 2^{−2αℓ}.

Cauchy's inequality gives

(7.2.158)  ∫_{2^ℓ ≤ |ξ| ≤ 2^{ℓ+1}} |f̂(ξ)| dξ ≤ (∫_{2^ℓ ≤ |ξ| ≤ 2^{ℓ+1}} |f̂(ξ)|² dξ)^{1/2} (∫_{2^ℓ ≤ |ξ| ≤ 2^{ℓ+1}} 1 dξ)^{1/2} ≤ C 2^{−αℓ} · 2^{ℓ/2} = C 2^{−(α−1/2)ℓ}.

Summing over ℓ ∈ N and using (again by Cauchy's inequality)

(7.2.159)  ∫_{|ξ| ≤ 2} |f̂(ξ)| dξ ≤ C ‖f̂‖_{L²} = C ‖f‖_{L²},
then gives the proof. □

To see how close to sharp Proposition 7.2.15 is, consider

(7.2.160)  f(x) = χ_I(x) = 1 if 0 ≤ x ≤ 1, 0 otherwise.

We have, for |h| ≤ 1,

(7.2.161)  ‖f − f_h‖²_{L²} = 2|h|,
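The computation (7.2.161) is easy to confirm numerically; a quick added sketch (sampling the indicator of [0, 1] on a fine grid chosen so that each shift h is a whole number of grid steps):

```python
import numpy as np

x = np.linspace(-2.0, 3.0, 500_001)          # dx = 1e-5
dx = x[1] - x[0]
f = ((x >= 0) & (x <= 1)).astype(float)      # indicator of [0, 1]

for h in (0.1, 0.25, 0.5):
    shift = int(round(h / dx))               # h is an exact multiple of dx
    fh = np.roll(f, -shift)                  # f_h(x) = f(x + h) on the grid
    val = np.sum((f - fh) ** 2) * dx         # discrete ||f - f_h||^2 in L^2
    assert abs(val - 2 * h) < 1e-3           # matches (7.2.161)
```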
so (7.2.150) holds, with α = 1/2. Since A(R) ⊂ C(R), this function does not belong to A(R), so the condition (7.2.151) is about as sharp as it could be.

To produce the appropriate generalization to n variables, let us focus on (7.2.158), and note that when R is replaced by R^n, the integral of 1 becomes ∼ 2^{nℓ}. So to obtain the result
(7.2.162)  ∫_{2^ℓ ≤ |ξ| ≤ 2^{ℓ+1}} |f̂(ξ)| dξ ≤ C 2^{−δℓ}, for some δ > 0,

we want

(7.2.163)  ∫_{2^ℓ ≤ |ξ| ≤ 2^{ℓ+1}} |f̂(ξ)|² dξ ≤ C 2^{−2γℓ},  γ > n/2.

It is convenient to rewrite this as

(7.2.164)  ∫_{2^ℓ ≤ |ξ| ≤ 2^{ℓ+1}} |ξ|^{2k} |f̂(ξ)|² dξ ≤ C 2^{−2αℓ},

where

(7.2.165)  α > 0, if n = 2k,  α > 1/2, if n = 2k + 1.

Now we bring in Proposition 7.2.14. Assume

(7.2.166)  ∂^β f = f_β ∈ R(R^n), for |β| ≤ k,

where a priori ∂^β f is defined as an element of S′(R^n), by (7.2.139). Then calculations parallel to (7.2.152)-(7.2.157), applied to f_β in place of f, show that if

(7.2.167)  ‖f_β − (f_β)_h‖_{L²} ≤ C |h|^α, for |β| ≤ k, |h| ≤ 1,

then

(7.2.168)  ∫_{2^ℓ ≤ |ξ| ≤ 2^{ℓ+1}} |ξ^β|² |f̂(ξ)|² dξ ≤ C 2^{−2αℓ}, for ℓ ≥ 1.

Summing over |β| ≤ k then yields (7.2.164). We hence have the following higher-dimensional extension of Proposition 7.2.15.

Proposition 7.2.16. Assume n = 2k or n = 2k + 1. Take f ∈ R(R^n) and assume that

(7.2.169)  ∂^β f = f_β ∈ R(R^n), for |β| ≤ k,

and that, for each such β,

(7.2.170)  ‖f_β − (f_β)_h‖_{L²} ≤ C |h|^α, for |h| ≤ 1,

where α satisfies (7.2.165). Then f ∈ A(R^n).
We mention a result that refines Proposition 7.2.16 when n > 1. To state it, we bring in the difference operators

(7.2.171)  Δ_{ℓ,h} f(x) = f(x + h e_ℓ) − f(x),  1 ≤ ℓ ≤ n,

where {e₁, …, e_n} is the standard basis of R^n. Here is the result.
Proposition 7.2.17. Assume n = 2k or n = 2k + 1. Take f ∈ R(R^n) and assume that there exists C < ∞ such that, for |β| ≤ k and 1 ≤ ℓ ≤ n,

(7.2.172)  ‖Δ_{ℓ,h} ∂^β f‖_{L²} ≤ C |h|^α, for |h| ≤ 1,

where α satisfies (7.2.165). Then f ∈ A(R^n).

We will not present a proof of Proposition 7.2.17 but leave this as a challenge to the ambitious reader.

Exercises
1. Let f : R^n → C satisfy

|f^{(α)}(x)| ≤ C (1 + |x|)^{−(n+1)}, for |α| ≤ n + 1.

Show that |f̂(ξ)| ≤ C (1 + |ξ|)^{−(n+1)}. Deduce that f ∈ A(R^n).

2. Sharpen the result of Exercise 1 as follows. Assume f satisfies

|f^{(α)}(x)| ≤ C (1 + |x|)^{−(n+1)}, for |α| ≤ [n/2] + 1.

Then show that f ∈ A(R^n).

3. Take n = 1. For each of the following functions f : R → C, compute f̂(ξ).

(7.2.173)  f(x) = e^{−|x|},
(7.2.174)  f(x) = 1/(1 + x²),
(7.2.175)  f(x) = χ_{[−1,1]}(x),
(7.2.176)  f(x) = (1 − |x|) χ_{[−1,1]}(x).

4. In each case of Exercise 3, record the identity that follows from the Plancherel identity (7.2.50), established in Proposition 7.2.5.

5. Define f_r ∈ C(R) by

f_r(x) = (1 − x²)^r for |x| ≤ 1,  0 for |x| > 1.

Show that f_r ∈ A(R) for each r > 0, as a consequence of Proposition 7.2.15. What is the best conclusion one could draw from Proposition 7.2.9?
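For instance, in case (7.2.173) one gets f̂(ξ) = √(2/π)/(1 + ξ²) with the convention of (7.2.1). The following added sketch confirms this by direct numerical quadrature (the grid choices are arbitrary; the exact transform value stated in the comment is a standard computation, not taken from the text):

```python
import numpy as np

# Left-endpoint Riemann sum for Ff(xi) = (2 pi)^(-1/2) * integral e^{-|x|} e^{-i x xi} dx,
# whose exact value is sqrt(2/pi) / (1 + xi^2).
x = np.linspace(-40.0, 40.0, 400_001)
dx = x[1] - x[0]
f = np.exp(-np.abs(x))

for xi in (0.0, 1.0, 2.5):
    val = (2 * np.pi) ** (-0.5) * np.sum(f * np.exp(-1j * xi * x)) * dx
    assert abs(val - np.sqrt(2 / np.pi) / (1 + xi**2)) < 1e-6
```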
The next exercises bear on the function

ψ_n(x) = ∫_{S^{n−1}} e^{ix·ξ} dS(ξ),  x ∈ R^n.
6. Show that
(a) ψ_n ∈ C^∞(R^n).
(b) (Δ + 1) ψ_n = 0 on R^n.
(c) ψ_n is radial, i.e., ψ_n(x) = φ_n(|x|).

7. Deduce from Exercise 6 and the formula for Δ in spherical polar coordinates that

φ_n″(s) + ((n−1)/s) φ_n′(s) + φ_n(s) = 0.

Note that φ_n(s) = ψ_n(s e₁) is a smooth, even function of s.
8. Convert the calculation

φ_n(s) = ∫_{S^{n−1}} e^{isξ₁} dS(ξ)

into a formula for

∫_{S^{n−1}} ξ₁^ℓ dS(ξ),  ℓ ∈ N.

9. More generally, produce a formula for

∫_{S^{n−1}} ξ^α dS(ξ),  α = (α₁, …, α_n), α_j ∈ Z⁺,

in terms of

ψ_n^{(α)}(0).
10. In the spirit of Exercises 8-9, use

e^{−|x|²/4} = π^{−n/2} ∫_{R^n} e^{−|ξ|²} e^{ix·ξ} dξ

to produce a formula for

∫_{R^n} ξ^α e^{−|ξ|²} dξ,

in terms of derivatives of e^{−|x|²/4} at x = 0.

11. Show that
|x|^{2−n} defines an element of S′(R^n), and

Δ(|x|^{2−n}) = −C_n δ on R^n,  C_n = (n − 2) A_{n−1},

for n ≥ 3. Similarly, show that

Δ(log |x|) = 2π δ on R².

Hint. Check Exercises 13-14 of §4.4.
7.3. Poisson summation formulas

Comparing Fourier transforms of functions on R^n with Fourier series of related functions on T^n leads to highly nontrivial identities, known as Poisson summation formulas. We derive some of them here. To start, we take f ∈ S(R^n), and set

(7.3.1)  φ(x) = Σ_{ℓ∈Z^n} f(x + 2πℓ).

We have

(7.3.2)  φ ∈ C^∞(R^n),  φ(x) = φ(x + 2πk), ∀ k ∈ Z^n,

hence (with slight abuse of notation)

(7.3.3)  φ ∈ C^∞(T^n).

We next observe that

(7.3.4)  ∫_{T^n} φ(x) e^{−ik·x} dx = ∫_{R^n} f(x) e^{−ik·x} dx,  ∀ k ∈ Z^n.

Consequently, with φ̂(k) defined as in (7.1.1) and f̂(ξ) defined as in (7.2.1),

(7.3.5)  φ̂(k) = (2π)^{−n/2} f̂(k),  ∀ k ∈ Z^n.

Now the Fourier inversion formula, in the form of Proposition 7.1.1, applies to φ:

(7.3.6)  φ(x) = Σ_{k∈Z^n} φ̂(k) e^{ik·x},  ∀ x ∈ T^n.

Putting this together with (7.3.1) and (7.3.5) gives the following general Poisson summation formula.

Proposition 7.3.1. Given f ∈ S(R^n), we have, for each x ∈ R^n,

(7.3.7)  Σ_{ℓ∈Z^n} f(x + 2πℓ) = (2π)^{−n/2} Σ_{k∈Z^n} f̂(k) e^{ik·x}.

In particular,

(7.3.8)  Σ_{ℓ∈Z^n} f(2πℓ) = (2π)^{−n/2} Σ_{k∈Z^n} f̂(k).

We can apply this to

(7.3.9)  f(x) = e^{−t|x|²},  t > 0,
and use (7.2.16) to evaluate f̂(k). This leads to

(7.3.10)  Σ_{ℓ∈Z^n} e^{−4π²t|ℓ|²} = (4πt)^{−n/2} Σ_{k∈Z^n} e^{−|k|²/4t},  t > 0.

Taking τ = 4πt, we can rewrite this as

(7.3.11)  Σ_{ℓ∈Z^n} e^{−πτ|ℓ|²} = τ^{−n/2} Σ_{k∈Z^n} e^{−π|k|²/τ},  τ > 0.
This result is known as the Jacobi inversion formula.

The Riemann functional equation. The Riemann zeta function ζ(s) is defined by
(7.3.12)  ζ(s) = Σ_{k=1}^∞ k^{−s},  Re s > 1.

This defines ζ(s) as a function holomorphic in {s ∈ C : Re s > 1}. See (5.1.58). Here we establish a formula of Riemann that extends ζ(s) beyond the half-plane Re s > 1. To start the analysis, we relate ζ(s) to the function

(7.3.13)  g(t) = Σ_{k=1}^∞ e^{−πk²t}.

We have

(7.3.14)  ∫₀^∞ g(t) t^{s−1} dt = Σ_{k=1}^∞ k^{−2s} π^{−s} ∫₀^∞ e^{−t} t^{s−1} dt = ζ(2s) π^{−s} Γ(s),

for Re s > 1/2. The Gamma function Γ(s) is as in (5.1.56). This gives rise to further identities, via the n = 1 case of the Jacobi inversion formula (7.3.11), i.e.,
(7.3.15)  Σ_{ℓ∈Z} e^{−πℓ²t} = t^{−1/2} Σ_{k∈Z} e^{−πk²/t},

which implies

(7.3.16)  g(t) = −1/2 + (1/2) t^{−1/2} + t^{−1/2} g(1/t).

To use this, we first note from (7.3.14) that, for Re s > 1,

(7.3.17)  Γ(s/2) π^{−s/2} ζ(s) = ∫₀^∞ g(t) t^{s/2−1} dt = ∫₀¹ g(t) t^{s/2−1} dt + ∫₁^∞ g(t) t^{s/2−1} dt.
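Both sides of (7.3.15) converge very rapidly, so the identity is easy to test numerically; a short added sketch using only the standard library:

```python
import math

def theta(t, terms=60):
    # Sum over l in Z of e^{-pi l^2 t}, written as 1 + 2 * sum over l >= 1.
    return 1.0 + 2.0 * sum(math.exp(-math.pi * l * l * t) for l in range(1, terms))

# Check (7.3.15): theta(t) = t^(-1/2) * theta(1/t) at a few sample points.
for t in (0.37, 1.0, 2.4):
    assert abs(theta(t) - t ** (-0.5) * theta(1.0 / t)) < 1e-12
```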
Into the integral over [0, 1], we substitute the right side of (7.3.16) for g(t), to obtain

(7.3.18)  Γ(s/2) π^{−s/2} ζ(s) = ∫₀¹ (−1/2 + (1/2) t^{−1/2}) t^{s/2−1} dt + ∫₀¹ g(1/t) t^{s/2−3/2} dt + ∫₁^∞ g(t) t^{s/2−1} dt.
We evaluate the first integral on the right and replace t by 1/t in the second integral, to obtain, for Re s > 1,

(7.3.19)  Γ(s/2) π^{−s/2} ζ(s) = 1/(s−1) − 1/s + ∫₁^∞ [t^{s/2} + t^{(1−s)/2}] g(t) t^{−1} dt.

Note that g(t) ≤ C e^{−πt} for t ∈ [1, ∞), so the integral on the right side of (7.3.19) defines a function holomorphic for all s ∈ C. As seen in §5.1, Γ(z) is holomorphic on C \ {0, −1, −2, −3, …}. Further results on the Gamma function include the following.
Lemma 7.3.2. The function 1/Γ(z) extends to be holomorphic on all of C, with zeros at {0, −1, −2, −3, …}.

We refer to [51, Chapter 4] for a proof. Given this, we have from (7.3.19) that ζ(s) extends to be holomorphic on C \ {1}. The formula (7.3.19) does more than establish such a holomorphic extension of the zeta function. Note that the right side of (7.3.19) is invariant under replacing s by 1 − s. Thus we have the following identity, known as Riemann's functional equation,

(7.3.20)  Γ(s/2) π^{−s/2} ζ(s) = Γ((1−s)/2) π^{−(1−s)/2} ζ(1−s).

The Riemann zeta function plays a central role in the study of prime numbers. Basic material on this can be found in [51, Chapter 4], and a great deal more in [13].

Exercise

1. Show that the Poisson summation formula (7.3.8) applies when f ∈ A(R^n) and φ ∈ A(T^n), where, as in (7.3.1), φ(x) = Σ_ℓ f(x + 2πℓ).
7.4. Spherical harmonics

One type of generalization of Fourier series on the circle S¹ ≈ T¹ is Fourier series on the n-dimensional torus, treated in §7.1. Another, which we treat in this section, involves the unit sphere S^{n−1} ⊂ R^n, leading to what are called spherical harmonics. Our approach to this theory will emphasize contact with the following Dirichlet problem for harmonic functions on the unit ball,

(7.4.1)  B^n = {x ∈ R^n : |x| < 1},  ∂B^n = S^{n−1}.

Namely, given f ∈ C(S^{n−1}), we seek a function u ∈ C(B̄^n) ∩ C²(B^n) satisfying

(7.4.2)  Δu = 0 on B^n,  u|_{S^{n−1}} = f.
In case n = 2, connections between this problem and Fourier series are explored in Exercises 4-8 of §7.1. In particular, we have from (7.1.139)-(7.1.141) that, when n = 2,
(7.4.3)  u(re^{iθ}) = (1 − r²)/(2π) ∫_{−π}^{π} f(φ) / (1 − 2r cos(θ − φ) + r²) dφ.
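Formula (7.4.3) can be spot-checked numerically: for boundary data f(φ) = cos φ the harmonic extension of the disk is u(re^{iθ}) = r cos θ. A short added sketch (assuming NumPy; the grid size and sample points are arbitrary choices):

```python
import numpy as np

# Equispaced periodic grid on [-pi, pi); the periodic Riemann sum is
# spectrally accurate for the smooth Poisson kernel.
phi = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dphi = 2 * np.pi / 4096
f = np.cos(phi)  # boundary data; exact extension is u = r cos(theta)

def u(r, theta):
    kernel = (1 - r**2) / (1 - 2 * r * np.cos(theta - phi) + r**2)
    return float(np.sum(f * kernel) * dphi / (2 * np.pi))

for r, theta in ((0.3, 0.0), (0.7, 1.1), (0.95, -2.0)):
    assert abs(u(r, theta) - r * np.cos(theta)) < 1e-10
```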
A change of variable gives, for x ∈ R², |x| < 1,

(7.4.4)  u(x) = (1 − |x|²)/(2π) ∫_{S¹} f(ω)/|x − ω|² ds(ω).

…provided that, for each ε > 0, there exists q ∈ S ∩ B_ε(p), q ≠ p. It follows that p is an accumulation point of S if and only if each B_ε(p), ε > 0, contains infinitely many points of S. One straightforward
observation is that all points of S̄ \ S are accumulation points of S.

The interior of a set S ⊂ X is the largest open set contained in S, i.e., the union of all the open sets contained in S. Note that the complement of the interior of S is equal to the closure of X \ S.

We now turn to the notion of compactness. We say a metric space X is compact provided the following property holds:

(A.1.4)  each sequence (x_k) in X has a convergent subsequence.

We will establish various properties of compact metric spaces, and provide various equivalent characterizations. For example, it is easily seen that (A.1.4) is equivalent to

(A.1.5)  each infinite subset S ⊂ X has an accumulation point.

The following property is known as total boundedness.

Proposition A.1.5. If X is a compact metric space, then, given ε > 0, there is a finite set {x₁, …, x_N} ⊂ X such that

(A.1.6)  X = B_ε(x₁) ∪ ⋯ ∪ B_ε(x_N).

Proof. Take ε > 0 and pick x₁ ∈ X. If B_ε(x₁) = X, we are done. If not, pick x₂ ∈ X \ B_ε(x₁). If B_ε(x₁) ∪ B_ε(x₂) = X, we are done. If not, pick x₃ ∈ X \ [B_ε(x₁) ∪ B_ε(x₂)]. Continue, taking x_{k+1} ∈ X \ [B_ε(x₁) ∪ ⋯ ∪ B_ε(x_k)], if B_ε(x₁) ∪ ⋯ ∪ B_ε(x_k) ≠ X. Note that, for 1 ≤ i, j ≤ k,

i ≠ j ⟹ d(x_i, x_j) ≥ ε.

If one never covers X this way, consider S = {x_j : j ∈ N}. This is an infinite set with no accumulation point, so property (A.1.5) is contradicted. □

Corollary A.1.6. If X is a compact metric space, it has a countable dense subset.
Proof. Given ε = 2^{−n}, let S_n be a finite set of points x_j such that {B_ε(x_j)} covers X. Then D = ∪_n S_n is a countable dense subset of X. □

Here is another useful property of compact metric spaces, which will eventually be generalized even further, in (A.1.10) below.

Proposition A.1.7. Let X be a compact metric space. Assume K₁ ⊃ K₂ ⊃ K₃ ⊃ ⋯ form a decreasing sequence of closed subsets of X. If each K_n ≠ ∅, then ∩_n K_n ≠ ∅.

Proof. Pick x_n ∈ K_n. If (A.1.4) holds, (x_n) has a convergent subsequence, x_{n_k} → y. Since {x_{n_k} : k ≥ ℓ} ⊂ K_{n_ℓ}, which is closed, we have y ∈ ∩_n K_n. □

Corollary A.1.8. Let X be a compact metric space. Assume U₁ ⊂ U₂ ⊂ U₃ ⊂ ⋯ form an increasing sequence of open subsets of X. If ∪_n U_n = X, then U_N = X for some N.

Proof. Consider K_n = X \ U_n. □

The following is an important extension of Corollary A.1.8.

Proposition A.1.9. If X is a compact metric space, then it has the property

(A.1.7)  every open cover {U_α : α ∈ A} of X has a finite subcover.

Proof. Each U_α is a union of open balls, so it suffices to show that (A.1.4) implies the following:

(A.1.8)  every cover {B_α : α ∈ A} of X by open balls has a finite subcover.

Let D = {z_j : j ∈ N} ⊂ X be a countable dense subset of X, as in Corollary A.1.6. Each B_α is a union of balls B_{r_j}(z_j), with z_j ∈ D ∩ B_α, r_j rational. Thus it suffices to show that

(A.1.9)  every countable cover {B_j : j ∈ N} of X by open balls has a finite subcover.

For this, we set U_n = B₁ ∪ ⋯ ∪ B_n and apply Corollary A.1.8. □
The following is a convenient alternative to property (A.1.7):

(A.1.10)  if K_α ⊂ X are closed and ∩_α K_α = ∅, then some finite intersection is empty.

Considering U_α = X \ K_α, we see that (A.1.7) ⇔ (A.1.10). The following result, known as the Heine-Borel theorem, completes Proposition A.1.9.

Theorem A.1.10. For a metric space X,

(A.1.4) ⇔ (A.1.7).
Proof. By Proposition A.1.9, (A.1.4) ⟹ (A.1.7). To prove the converse, it will suffice to show that (A.1.10) ⟹ (A.1.5). So let S ⊂ X and assume S has no accumulation point. We claim that such S must be closed. Indeed, if z ∈ S̄ and z ∉ S, then z would have to be an accumulation point. Say S = {x_α : α ∈ A}. Set K_α = S \ {x_α}. Then each K_α has no accumulation point, hence K_α ⊂ X is closed. Also ∩_α K_α = ∅. Hence, if (A.1.10) holds, there exists a finite set F ⊂ A such that ∩_{α∈F} K_α = ∅. Hence S = ∪_{α∈F} {x_α} is finite, so indeed (A.1.10) ⟹ (A.1.5). □

Remark. So far we have that for every metric space X,

(A.1.4) ⇔ (A.1.5) ⇔ (A.1.7) ⇔ (A.1.10) ⟹ (A.1.6).

We claim that (A.1.6) implies the other conditions if X is complete. Of course, compactness implies completeness, but (A.1.6) may hold for incomplete X, e.g., X = (0, 1) ⊂ R.

Proposition A.1.11. If X is a complete metric space with property (A.1.6), then X is compact.

Proof. It suffices to show that (A.1.6) ⟹ (A.1.5) if X is a complete metric space. So let S ⊂ X be an infinite set. Cover X by balls B_{1/2}(x₁), …, B_{1/2}(x_N). One of these balls contains infinitely many points of S, and so does its closure, say X₁ = B̄_{1/2}(y₁). Now cover X by finitely many balls of radius 1/4; their intersection with X₁ provides a cover of X₁. One such set contains infinitely many points of S, and so does its closure X₂ = B̄_{1/4}(y₂) ∩ X₁. Continue in this fashion, obtaining

X₁ ⊃ X₂ ⊃ X₃ ⊃ ⋯ ⊃ X_k ⊃ X_{k+1} ⊃ ⋯,  X_j ⊂ B̄_{2^{−j}}(y_j),

each containing infinitely many points of S. One sees that (y_j) forms a Cauchy sequence. If X is complete, it has a limit, y_j → z, and z is seen to be an accumulation point of S. □

If X_j, 1 ≤ j ≤ m, is a finite collection of metric spaces, with metrics d_j, we can define a Cartesian product metric space

(A.1.11)  X = ∏_{j=1}^m X_j,  d(x, y) = d₁(x₁, y₁) + ⋯ + d_m(x_m, y_m).

Another choice of metric is δ(x, y) = √(d₁(x₁, y₁)² + ⋯ + d_m(x_m, y_m)²). The metrics d and δ are equivalent, i.e., there exist constants C₀, C₁ ∈ (0, ∞) such that

(A.1.12)  C₀ δ(x, y) ≤ d(x, y) ≤ C₁ δ(x, y),  ∀ x, y ∈ X.

A key example is R^m, the Cartesian product of m copies of the real line R.

We describe some important classes of compact spaces.

Proposition A.1.12. If X_j are compact metric spaces, 1 ≤ j ≤ m, so is X = ∏_{j=1}^m X_j.
Proof. If (x_ν) is an infinite sequence of points in X, say x_ν = (x_{1ν}, …, x_{mν}), pick a convergent subsequence of (x_{1ν}) in X₁, and consider the corresponding subsequence of (x_ν), which we relabel (x_ν). Using this, pick a convergent subsequence of (x_{2ν}) in X₂. Continue. Having a subsequence such that x_{jν} → y_j in X_j for each j = 1, …, m, we then have a convergent subsequence in X. □

The following result (already stated in Theorem 1.2.4) is useful for calculus on R^n.

Proposition A.1.13. If K is a closed bounded subset of R^n, then K is compact.

Proof. The discussion above reduces the problem to showing that any closed interval I = [a, b] in R is compact. This compactness is a corollary of Proposition A.1.11. For pedagogical purposes, we redo the argument here, since in this concrete case it can be streamlined.

Suppose S is a subset of I with infinitely many elements. Divide I into two equal subintervals, I₁ = [a, b₁], I₂ = [b₁, b], b₁ = (a + b)/2. Then either I₁ or I₂ must contain infinitely many elements of S. Say I_j does. Let x₁ be any element of S lying in I_j. Now divide I_j in two equal pieces, I_j = I_{j1} ∪ I_{j2}. One of these intervals (say I_{jk}) contains infinitely many points of S. Pick x₂ ∈ I_{jk} to be one such point (different from x₁). Then subdivide I_{jk} into two equal subintervals, and continue. We get an infinite sequence of distinct points x_ν ∈ S, and |x_ν − x_{ν+k}| ≤ 2^{−ν}(b − a), for k ≥ 1. Since R is complete, (x_ν) converges, say to y ∈ I. Any neighborhood of y contains infinitely many points of S, so we are done. □
If X and Y are metric spaces, a function f : X → Y is said to be continuous provided x_ν → x in X implies f(x_ν) → f(x) in Y. An equivalent condition, which the reader is invited to verify, is

(A.1.13)  U open in Y ⟹ f^{−1}(U) open in X.

Proposition A.1.14. If X and Y are metric spaces, f : X → Y is continuous, and K ⊂ X is compact, then f(K) is a compact subset of Y.

Proof. If (y_ν) is an infinite sequence of points in f(K), pick x_ν ∈ K such that f(x_ν) = y_ν. If K is compact, we have a subsequence x_{ν_j} → p in X, and then y_{ν_j} → f(p) in Y. □

If f : X → R is continuous, we say f ∈ C(X). A useful corollary of Proposition A.1.14 is the following.

Proposition A.1.15. If X is a compact metric space and f ∈ C(X), then f assumes a maximum and a minimum value on X.

Proof. We know from Proposition A.1.14 that f(X) is a compact subset of R. Hence f(X) is bounded, say f(X) ⊂ I = [a, b]. Repeatedly subdividing I into equal halves, as in the proof of Proposition A.1.13, at each stage throwing out intervals that do not intersect f(X), and keeping only the leftmost and rightmost interval amongst those remaining, we obtain points α ∈ f(X) and β ∈ f(X) such that f(X) ⊂ [α, β]. Then
α = f(x₀) for some x₀ ∈ X is the minimum and β = f(x₁) for some x₁ ∈ X is the maximum. □
At this point, the reader might take a look at the proof of the mean value theorem, given in §1.1, which applies this result.

If S ⊂ R is a nonempty, bounded set, Proposition A.1.13 implies S̄ is compact. The function ι : S̄ → R, ι(x) = x is continuous, so by Proposition A.1.15 it assumes a maximum and a minimum value on S̄. We set

(A.1.14)  sup S = max_{x∈S̄} x,  inf S = min_{x∈S̄} x,

when S is bounded. More generally, if S ⊂ R is nonempty and bounded from above, say S ⊂ (−∞, B], we can pick A < B such that S ∩ [A, B] is nonempty, and set

(A.1.15)  sup S = sup S ∩ [A, B].

Similarly, if S ⊂ R is nonempty and bounded from below, say S ⊂ [A, ∞), we can pick B > A such that S ∩ [A, B] is nonempty, and set

(A.1.16)  inf S = inf S ∩ [A, B].

If X is a nonempty set and f : X → R is bounded from above, we set

(A.1.17)  sup_{x∈X} f(x) = sup f(X),

and if f : X → R is bounded from below, we set

(A.1.18)  inf_{x∈X} f(x) = inf f(X).

If f is not bounded from above, we set sup f = +∞, and if f is not bounded from below, we set inf f = −∞. Given a set X, f : X → R, and x_n → x, we set

(A.1.19)  lim sup_{n→∞} f(x_n) = lim_{n→∞} (sup_{k≥n} f(x_k)),

and

(A.1.20)  lim inf_{n→∞} f(x_n) = lim_{n→∞} (inf_{k≥n} f(x_k)).
We return to the notion of continuity. A function f ∈ C(X) is said to be uniformly continuous provided that, for any ε > 0, there exists δ > 0 such that

(A.1.21)  x, y ∈ X, d(x, y) ≤ δ ⟹ |f(x) − f(y)| ≤ ε.

An equivalent condition is that f have a modulus of continuity, i.e., a monotonic function ω : [0, 1] → [0, ∞) such that δ ↘ 0 ⇒ ω(δ) ↘ 0, and such that

(A.1.22)  x, y ∈ X, d(x, y) ≤ δ ≤ 1 ⟹ |f(x) − f(y)| ≤ ω(δ).

Not all continuous functions are uniformly continuous. For example, if X = (0, 1) ⊂ R, then f(x) = sin 1/x is continuous, but not uniformly continuous, on X. The following result is useful, for example, in the development of the Riemann integral in §3.1.

Proposition A.1.16. If X is a compact metric space and f ∈ C(X), then f is uniformly continuous.
Proof. If not, there exist x_ν, y_ν ∈ X and ε > 0 such that d(x_ν, y_ν) ≤ 2^{−ν} but

(A.1.23)  |f(x_ν) − f(y_ν)| ≥ ε.

Taking a convergent subsequence x_{ν_j} → p, we also have y_{ν_j} → p. Now continuity of f at p implies f(x_{ν_j}) → f(p) and f(y_{ν_j}) → f(p), contradicting (A.1.23). □
If X and Y are metric spaces, the space C(X, Y) of continuous maps f : X → Y has a natural metric structure, under some additional hypotheses. We use

(A.1.24)  D(f, g) = sup_{x∈X} d(f(x), g(x)).

This sup exists provided f(X) and g(X) are bounded subsets of Y, where to say B ⊂ Y is bounded is to say d : B × B → [0, ∞) has bounded image. In particular, this supremum exists if X is compact. The following result is useful in the proof of the fundamental local existence result for ODE, in §2.3.

Proposition A.1.17. If X is a compact metric space and Y is a complete metric space, then C(X, Y), with the metric (A.1.24), is complete.

Proof. That D(f, g) satisfies the conditions to define a metric on C(X, Y) is straightforward. We check completeness. Suppose (f_ν) is a Cauchy sequence in C(X, Y), so, as ν → ∞,

(A.1.25)  sup_{k≥0} sup_{x∈X} d(f_{ν+k}(x), f_ν(x)) ≤ ε_ν → 0.

Then in particular (f_ν(x)) is a Cauchy sequence in Y for each x ∈ X, so it converges, say to g(x) ∈ Y. It remains to show that g ∈ C(X, Y) and that f_ν → g in the metric (A.1.24). In fact, taking k → ∞ in the estimate above, we have

(A.1.26)  sup_{x∈X} d(g(x), f_ν(x)) ≤ ε_ν → 0,

i.e., f_ν → g uniformly. It remains only to show that g is continuous. For this, let x_j → x in X and fix ε > 0. Pick N so that ε_N < ε. Since f_N is continuous, there exists J such that j ≥ J ⇒ d(f_N(x_j), f_N(x)) < ε. Hence

j ≥ J ⟹ d(g(x_j), g(x)) ≤ d(g(x_j), f_N(x_j)) + d(f_N(x_j), f_N(x)) + d(f_N(x), g(x)) < 3ε.

This completes the proof. □

In case Y = R, C(X, R) = C(X), introduced earlier in this appendix. The distance function (A.1.24) can be written

D(f, g) = ‖f − g‖_sup,  ‖f‖_sup = sup_{x∈X} |f(x)|,

where ‖f‖_sup is a norm on C(X). Generally, a norm on a vector space V is an assignment f ↦ ‖f‖ ∈ [0, ∞), satisfying

‖f‖ = 0 ⇔ f = 0,  ‖af‖ = |a| ‖f‖,  ‖f + g‖ ≤ ‖f‖ + ‖g‖,
given f, g ∈ V and a a scalar (in R or C). A vector space equipped with a norm is called a normed vector space. It is then a metric space, with distance function D(f, g) = ‖f − g‖. If the space is complete, one calls V a Banach space. In particular, by Proposition A.1.17, C(X) is a Banach space, when X is a compact metric space.

We next give a couple of slightly more sophisticated results on compactness. The following extension of Proposition A.1.12 is a special case of Tychonov's theorem.

Proposition A.1.18. If (X_j : j ∈ Z⁺) are compact metric spaces, so is X = ∏_{j=1}^∞ X_j.

Here, we can make X a metric space by setting

(A.1.27)  d(x, y) = Σ_{j=1}^∞ 2^{−j} d_j(p_j(x), p_j(y)) / (1 + d_j(p_j(x), p_j(y))),

where p_j : X → X_j is the projection onto the jth factor. It is easy to verify that if x_ν ∈ X, then x_ν → y in X, as ν → ∞, if and only if, for each j, p_j(x_ν) → p_j(y) in X_j.

Proof. Following the argument in Proposition A.1.12, if (x_ν) is an infinite sequence of points in X, we obtain a nested family of subsequences

(A.1.28)  (x_ν) ⊃ (x_{1ν}) ⊃ (x_{2ν}) ⊃ ⋯ ⊃ (x_{jν}) ⊃ ⋯

such that p_ℓ(x_{jν}) converges in X_ℓ, for 1 ≤ ℓ ≤ j. The next step is a diagonal construction. We set

(A.1.29)  ζ_ν = x_{νν}.

Then, for each j, after throwing away a finite number N(j) of elements, one obtains from (ζ_ν) a subsequence of the sequence (x_{jν}) in (A.1.28), so p_ℓ(ζ_ν) converges in X_ℓ for all ℓ. Hence (ζ_ν) is a convergent subsequence of (x_ν). □

The next result is the Arzelà-Ascoli theorem.

Proposition A.1.19. Let X and Y be compact metric spaces, and fix a modulus of continuity ω(δ). Then

(A.1.30)  C_ω = {f ∈ C(X, Y) : d(f(x), f(x′)) ≤ ω(d(x, x′)) ∀ x, x′ ∈ X}

is a compact subset of C(X, Y).

Proof. Let (f_ν) be a sequence in C_ω. Let D be a countable dense subset of X, as in Corollary A.1.6. For each x ∈ D, (f_ν(x)) is a sequence in Y, which hence has a convergent subsequence. Using a diagonal construction similar to that in the proof of Proposition A.1.18, we obtain a subsequence (φ_ν) of (f_ν) with the property that φ_ν(x) converges in Y, for each x ∈ D, say

(A.1.31)  φ_ν(x) → ψ(x),

for all x ∈ D, where ψ : D → Y.

So far, we have not used (A.1.30). This hypothesis will now be used to show that φ_ν converges uniformly on X. Pick ε > 0. Then pick δ > 0 such that ω(δ) < ε/3. Since X is compact, we can cover X by finitely many balls B_δ(x_j), 1 ≤ j ≤ N, x_j ∈ D. Pick M
A. Complementary material
so large that I"v(Xj) is within £/3 of its limit for all v 2: M (when 1 :S j :S N). Now, for any x E X, picking e E {l, . . . , N} such that d(x, xe) :S 8, we have, for k 2: 0, v 2: M,
(A.l.32)
d (I"v+k(X), I"v(x») :S d(I"v+k(X), I"v+k(Xe») + d(I"v+k(Xe ), I"v(xe») + d (I"v(xe ), I"v(x»)
:S £/3 + £/3 + £/3. Thus ( I"v(x» is Cauchy in Y for all x E X, hence convergent. Call the limit 1jJ(x), so we now have (A.lo31) for all x E X. Letting k + 00 in (A.lo32) we have uniform convergence of I"v to 1jJ. Finally, passing to the limit v +
00
in
d(I"v(x), I"v(x'» :S w(d(x, x'»
(A.l.33)
D We want to restate Proposition A.lo19, bringing in the notion of equicontinuity. Given metric spaces X and Y, and a set of maps 'F c C(X, Y), we say 'F is equicontin uous at a point Xo E X provided 't£ > 0 , 3 8 > O such that'tx E X, f E 'F,
(A.l.34)
dx(x, xo) < 8 ==} dy(f(x), f(xo» < £.
We say 'F is equicontinuous on X if it is equicontinuous at each point of X. We say 'F is uniformly equicontinuous on X provided 't£ > 0 , 3 8 > O such that'tx, x' E X, f E 'F,
(A.l.35)
dx(x, x') < 8 ==} dy(f(x), f(x'» < £.
Note that (A.1.35) is equivalent to the existence of a modulus of continuity ω such that F ⊂ Ω_ω, given by (A.1.30). It is useful to record the following result.
Proposition A.1.20. Let X and Y be metric spaces, and let F ⊂ C(X, Y). Assume X is compact. Then
(A.1.36) F equicontinuous ⟹ F uniformly equicontinuous.
Proof. The argument is a variant of the proof of Proposition A.1.16. In more detail, suppose there exist x_ν, x′_ν ∈ X, ε > 0, and f_ν ∈ F such that d(x_ν, x′_ν) ≤ 2^{−ν} but
(A.1.37) d_Y(f_ν(x_ν), f_ν(x′_ν)) ≥ ε.
Taking a convergent subsequence x_{ν_j} → p ∈ X, we also have x′_{ν_j} → p. Now equicontinuity of F at p implies that there exists N < ∞ such that
(A.1.38) j ≥ N ⟹ d_Y(f_{ν_j}(x_{ν_j}), f_{ν_j}(x′_{ν_j})) < ε,
contradicting (A.1.37). □
Putting together Propositions A.1.19 and A.1.20 then gives the following.
Proposition A.1.21. Let X and Y be compact metric spaces. If F ⊂ C(X, Y) is equicontinuous on X, then it has compact closure in C(X, Y).
A.1. Metric spaces, convergence, and compactness
We next define the notion of a connected space. A metric space X is said to be connected provided that it cannot be written as the union of two disjoint nonempty open subsets. The following is a basic class of examples.
Proposition A.1.22. Each interval I in ℝ is connected.
Proof. Suppose A ⊂ I is nonempty, with nonempty complement B ⊂ I, and both sets are open. Take a ∈ A, b ∈ B; we can assume a < b. Let
ξ = sup{x ∈ [a, b] : x ∈ A}.
This exists as a consequence of the basic fact that ℝ is complete. Now we obtain a contradiction, as follows. Since A is closed (in I), ξ ∈ A. But then, since A is open, there must be a neighborhood (ξ − ε, ξ + ε) contained in A; this is not possible. □
We say X is path-connected if, given any p, q ∈ X, there is a continuous map γ : [0, 1] → X such that γ(0) = p and γ(1) = q. It is an easy consequence of Proposition A.1.22 that X is connected whenever it is path-connected. The next result, known as the intermediate value theorem, is frequently useful.
Proposition A.1.23. Let X be a connected metric space, and let f : X → ℝ be continuous. Assume p, q ∈ X, and f(p) = a < f(q) = b. Then, given any c ∈ (a, b), there exists z ∈ X such that f(z) = c.
Proof. Under the hypotheses, A = {x ∈ X : f(x) < c} is open and contains p, while B = {x ∈ X : f(x) > c} is open and contains q. Since X is connected, A ∪ B cannot be all of X, so any point in its complement has the desired property. □
Exercises
1. If X is a metric space, with distance function d, show that
|d(x, y) − d(x′, y′)| ≤ d(x, x′) + d(y, y′),
and hence
d : X × X → [0, ∞) is continuous.
2. Let φ : [0, ∞) → [0, ∞) be a C² function. Assume
φ(0) = 0, φ′ > 0, φ″ < 0.
Prove that if d(x, y) is symmetric and satisfies the triangle inequality, so does
δ(x, y) = φ(d(x, y)).
Hint. Show that such φ satisfies φ(s + t) ≤ φ(s) + φ(t), for s, t ∈ ℝ⁺.
3. Show that the function d(x, y) defined by (A.1.27) satisfies (A.1.1).
Hint. Consider φ(r) = r/(1 + r).
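The hint in Exercises 2–3 can be probed numerically. The sketch below (the sample grid and tolerance are arbitrary choices, not from the text) checks that δ(x, y) = φ(|x − y|), with φ(r) = r/(1 + r), still satisfies the triangle inequality on ℝ:

```python
# Numerical check of Exercises 2-3: phi(r) = r/(1+r) is increasing and concave,
# so delta(x,y) = phi(|x-y|) should again satisfy the triangle inequality.
# A sketch, not a proof; the grid of test points is an arbitrary sample.

def phi(r):
    return r / (1 + r)

def delta(x, y):
    return phi(abs(x - y))

pts = [i * 0.37 for i in range(-20, 21)]  # sample points in R
worst = 0.0
for x in pts:
    for y in pts:
        for z in pts:
            # triangle inequality: delta(x,z) <= delta(x,y) + delta(y,z)
            worst = max(worst, delta(x, z) - delta(x, y) - delta(y, z))
print(worst <= 1e-12)
```

Since φ is increasing and subadditive, no violation occurs; the check prints True.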
4. Let X be a compact metric space. Assume f_j, f ∈ C(X) and f_j(x) ↗ f(x), ∀x ∈ X. Prove that f_j → f uniformly on X. (This result is called Dini's theorem.)
Hint. For ε > 0, let K_j(ε) = {x ∈ X : f(x) − f_j(x) ≥ ε}. Note that K_j(ε) ⊃ K_{j+1}(ε) ⊃ ⋯.
Given a metric space X and f : X → [−∞, ∞], we say f is lower semicontinuous provided
f⁻¹((c, ∞]) ⊂ X is open, ∀c ∈ ℝ.
We say f is upper semicontinuous provided
f⁻¹([−∞, c)) is open, ∀c ∈ ℝ.
5. Show that
f is lower semicontinuous ⟺ f⁻¹([−∞, c]) is closed, ∀c ∈ ℝ,
and
f is upper semicontinuous ⟺ f⁻¹([c, ∞]) is closed, ∀c ∈ ℝ.
6. Show that
f is lower semicontinuous ⟺ x_n → x implies lim inf f(x_n) ≥ f(x).
Show that
f is upper semicontinuous ⟺ x_n → x implies lim sup f(x_n) ≤ f(x).
7. Given S ⊂ X, show that
χ_S is lower semicontinuous ⟺ S is open,
χ_S is upper semicontinuous ⟺ S is closed.
8. If X is a compact metric space, show that
f : X → ℝ lower semicontinuous ⟹ min f is achieved.
9. In the setting of (A.1.11), let
δ(x, y) = [d_1(x_1, y_1)² + ⋯ + d_m(x_m, y_m)²]^{1/2}.
Show that
δ(x, y) ≤ d(x, y) ≤ √m δ(x, y).
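Exercise 9 can also be probed numerically. The sketch below assumes, as one reading of (A.1.11), that d(x, y) = d_1(x_1, y_1) + ⋯ + d_m(x_m, y_m); with that interpretation the two inequalities are instances of the Cauchy–Schwarz inequality:

```python
# Exercise 9, numerically: with d(x,y) = d_1 + ... + d_m (an assumed reading of
# the product metric in (A.1.11)) and delta = (d_1^2 + ... + d_m^2)^(1/2),
# check delta <= d <= sqrt(m) * delta on random nonnegative factor distances.
import random

random.seed(0)
m = 5
ok = True
for _ in range(1000):
    ds = [random.random() for _ in range(m)]   # the numbers d_j(x_j, y_j)
    d = sum(ds)
    delta = sum(t * t for t in ds) ** 0.5
    ok = ok and delta <= d + 1e-12 and d <= m ** 0.5 * delta + 1e-12
print(ok)
```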
10. Let X and Y be compact metric spaces. Show that if F ⊂ C(X, Y) is compact, then F is equicontinuous. (This is a converse to Proposition A.1.20.)
11. Recall that a Banach space is a complete normed linear space. Consider C¹(I), where I = [0, 1], with norm
‖f‖_{C¹} = sup_I |f| + sup_I |f′|.
Show that C¹(I) is a Banach space.
12. Let F = {f ∈ C¹(I) : ‖f‖_{C¹} ≤ 1}. Show that F has compact closure in C(I). Find a function in the closure of F that is not in C¹(I).
A.2. Inner product spaces
In §6.1 we have looked at norms and inner products on finite-dimensional vector spaces other than ℝⁿ, and in §§7.1–7.2 we have looked at norms and inner products on spaces of functions, such as C(𝕋ⁿ) and R(ℝⁿ), which are infinite-dimensional vector spaces. We discuss general results on such objects here.
Generally, as discussed in §1.3, a complex vector space V is a set on which there are operations of vector addition,
(A.2.1) f, g ∈ V ⟹ f + g ∈ V,
and multiplication by an element of ℂ (called scalar multiplication),
(A.2.2) a ∈ ℂ, f ∈ V ⟹ af ∈ V,
satisfying the following properties. For vector addition, we have
(A.2.3) f + g = g + f, (f + g) + h = f + (g + h), f + 0 = f, f + (−f) = 0.
For multiplication by scalars, we have
(A.2.4) a(bf) = (ab)f, 1 · f = f.
Furthermore, we have two distributive laws,
(A.2.5) a(f + g) = af + ag, (a + b)f = af + bf.
These properties are readily verified for the function spaces mentioned above.
An inner product on a complex vector space V assigns to elements f, g ∈ V the quantity (f, g) ∈ ℂ, in a fashion that obeys the three rules
(A.2.6) (a₁f₁ + a₂f₂, g) = a₁(f₁, g) + a₂(f₂, g), (f, g) = \overline{(g, f)}, (f, f) > 0 unless f = 0.
A vector space equipped with an inner product is called an inner product space. For example,
(A.2.7) (f, g) = (1/2π) ∫_{S¹} f(θ) \overline{g(θ)} dθ
defines an inner product on C(S¹), and also on R(S¹), where we identify two functions that differ only on a set of upper content zero. Similarly,
(A.2.8) (f, g) = ∫_{−∞}^{∞} f(x) \overline{g(x)} dx
defines an inner product on R(ℝ) (where, again, we identify two functions that differ only on a set of upper content zero).
As another example, we define ℓ² to consist of sequences (a_k)_{k∈ℤ} such that
(A.2.9) Σ_{k=−∞}^{∞} |a_k|² < ∞.
An inner product on ℓ² is given by
(A.2.10) ((a_k), (b_k)) = Σ_{k=−∞}^{∞} a_k \overline{b_k}.
Given an inner product on V, one says the object ‖f‖ defined by
(A.2.11) ‖f‖ = √(f, f)
is the norm on V associated with the inner product. Generally, a norm on V is a function f ↦ ‖f‖ satisfying
(A.2.12) ‖af‖ = |a| · ‖f‖, a ∈ ℂ, f ∈ V,
(A.2.13) ‖f‖ > 0 unless f = 0,
(A.2.14) ‖f + g‖ ≤ ‖f‖ + ‖g‖.
The property (A.2.14) is called the triangle inequality. A vector space equipped with a norm is called a normed vector space. We can define a distance function on such a space by
(A.2.15) d(f, g) = ‖f − g‖.
Properties (A.2.12)–(A.2.14) imply that d : V × V → [0, ∞) satisfies the properties in (A.1.1), making V a metric space. If ‖f‖ is given by (A.2.11), from an inner product satisfying (A.2.6), it is clear that (A.2.12)–(A.2.13) hold, but (A.2.14) requires a demonstration. Note that
(A.2.16) ‖f + g‖² = (f + g, f + g) = ‖f‖² + (f, g) + (g, f) + ‖g‖² = ‖f‖² + 2 Re(f, g) + ‖g‖²,
while
(A.2.17) (‖f‖ + ‖g‖)² = ‖f‖² + 2‖f‖ · ‖g‖ + ‖g‖².
Thus to establish (A.2.14), it suffices to prove the following, known as Cauchy's inequality.
Proposition A.2.1. For any inner product on a vector space V, with ‖f‖ defined by (A.2.11),
(A.2.18) |(f, g)| ≤ ‖f‖ · ‖g‖, ∀f, g ∈ V.
Proof. We start with
(A.2.19) 0 ≤ ‖f − g‖² = ‖f‖² − 2 Re(f, g) + ‖g‖²,
which implies
(A.2.20) 2 Re(f, g) ≤ ‖f‖² + ‖g‖², ∀f, g ∈ V.
Replacing f by af for arbitrary a ∈ ℂ of absolute value 1 yields 2 Re a(f, g) ≤ ‖f‖² + ‖g‖² for all such a, hence
2|(f, g)| ≤ ‖f‖² + ‖g‖², ∀f, g ∈ V.
Replacing f by tf and g by t⁻¹g for arbitrary t ∈ (0, ∞), we have
(A.2.21) 2|(f, g)| ≤ t²‖f‖² + t⁻²‖g‖², ∀f, g ∈ V, t ∈ (0, ∞).
If we take t² = ‖g‖/‖f‖, we obtain the desired inequality (A.2.18). This assumes f and g are both nonzero, but (A.2.18) is trivial if f or g is 0. □
An inner product space V is called a Hilbert space if it is a complete metric space, i.e., if every Cauchy sequence (f_ν) in V has a limit in V. The space ℓ² has this completeness property, but C(S¹), with inner product (A.2.7), does not, nor does R(S¹). Appendix A.1 describes a process of constructing the completion of a metric space. When applied to an incomplete inner product space, it produces a Hilbert space. When this process is applied to C(S¹), the completion is the space L²(S¹). An alternative construction of L²(S¹) uses the Lebesgue integral. For this approach, one can consult [47, Chapter 4].
For the rest of this appendix, we confine our attention to finite-dimensional inner product spaces.
If V is a finite-dimensional inner product space, a basis {u₁, …, u_n} of V is called an orthonormal basis of V provided
(A.2.22) (u_j, u_k) = δ_{jk},
i.e.,
(A.2.23) δ_{jk} = 1 if j = k, δ_{jk} = 0 if j ≠ k.
In such a case we see that
(A.2.24) v = a₁u₁ + ⋯ + a_n u_n, w = b₁u₁ + ⋯ + b_n u_n ⟹ (v, w) = a₁\overline{b₁} + ⋯ + a_n\overline{b_n}.
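Cauchy's inequality (A.2.18), the triangle inequality (A.2.14), and the expansion just described can all be sanity-checked numerically. A minimal sketch for the standard Hermitian inner product on ℂⁿ (the sample vectors are arbitrary illustrative choices):

```python
# Numerical sanity check of Cauchy's inequality (A.2.18) and the triangle
# inequality (A.2.14) for the standard Hermitian inner product on C^n.

def inner(u, v):
    # (u,v) = sum_j u_j * conj(v_j): linear in u, conjugate-linear in v
    return sum(a * b.conjugate() for a, b in zip(u, v))

def norm(u):
    return inner(u, u).real ** 0.5

u = [1 + 2j, -0.5j, 3.0, 2 - 1j]
v = [0.3, 1 + 1j, -2.0, 0.25j]

print(abs(inner(u, v)) <= norm(u) * norm(v))                      # Cauchy
print(norm([a + b for a, b in zip(u, v)]) <= norm(u) + norm(v))   # triangle
```

Both checks print True; equality in (A.2.18) would require u and v to be proportional.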
It is often useful to construct orthonormal bases. The construction we now describe is called the Gram–Schmidt construction.
Proposition A.2.2. Let {v₁, …, v_n} be a basis of V, an inner product space. Then there is an orthonormal basis {u₁, …, u_n} of V such that
(A.2.25) Span{u_j : j ≤ ℓ} = Span{v_j : j ≤ ℓ}, 1 ≤ ℓ ≤ n.
Proof. To begin, take
(A.2.26) u₁ = v₁ / ‖v₁‖.
Now define the linear transformation P₁ : V → V by P₁v = (v, u₁)u₁ and set
ṽ₂ = v₂ − P₁v₂ = v₂ − (v₂, u₁)u₁.
We see that (ṽ₂, u₁) = (v₂, u₁) − (v₂, u₁) = 0. Also ṽ₂ ≠ 0 since u₁ and v₂ are linearly independent. Hence we set
(A.2.27) u₂ = ṽ₂ / ‖ṽ₂‖.
Inductively, suppose we have an orthonormal set {u₁, …, u_m} with m < n and (A.2.25) holding for 1 ≤ ℓ ≤ m. Then define P_m : V → V by
(A.2.28) P_m v = (v, u₁)u₁ + ⋯ + (v, u_m)u_m,
and set
(A.2.29) ṽ_{m+1} = v_{m+1} − P_m v_{m+1} = v_{m+1} − (v_{m+1}, u₁)u₁ − ⋯ − (v_{m+1}, u_m)u_m.
We see that
(A.2.30) j ≤ m ⟹ (ṽ_{m+1}, u_j) = (v_{m+1}, u_j) − (v_{m+1}, u_j) = 0.
Also, since v_{m+1} ∉ Span{v₁, …, v_m} = Span{u₁, …, u_m}, it follows that ṽ_{m+1} ≠ 0. Hence we set
(A.2.31) u_{m+1} = ṽ_{m+1} / ‖ṽ_{m+1}‖.
This completes the construction. □
Example. Take V = P₂, the space of polynomials of degree ≤ 2, with basis {1, x, x²} and inner product given by
(A.2.32) (p, q) = ∫_{−1}^{1} p(x)q(x) dx.
The Gram–Schmidt construction gives first
(A.2.33) u₁(x) = 1/√2.
Then
ṽ₂(x) = x, since by symmetry (x, u₁) = 0.
Now ∫_{−1}^{1} x² dx = 2/3, so we take
(A.2.34) u₂(x) = √(3/2) x.
Next
ṽ₃(x) = x² − (x², u₁)u₁ = x² − 1/3,
since by symmetry (x², u₂) = 0. Now ∫_{−1}^{1} (x² − 1/3)² dx = 8/45, so we take
(A.2.35) u₃(x) = √(45/8) (x² − 1/3).
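The Gram–Schmidt steps (A.2.26)–(A.2.31) translate directly into code. The sketch below rebuilds the example above numerically, replacing the exact integrals by a midpoint-rule quadrature (the function names and the grid size are illustrative assumptions, not from the text):

```python
# Numerical sketch of the Gram-Schmidt construction (A.2.26)-(A.2.31) for the
# example above: {1, x, x^2} with (p,q) = int_{-1}^{1} p(x) q(x) dx.

def inner(p, q, n=2000):
    # midpoint-rule approximation of the inner product (A.2.32)
    h = 2.0 / n
    return sum(p(-1 + (k + 0.5) * h) * q(-1 + (k + 0.5) * h) for k in range(n)) * h

def gram_schmidt(vs):
    us = []
    for v in vs:
        # subtract the orthogonal projection P_m v = sum_j (v, u_j) u_j, as in (A.2.29)
        coeffs = [inner(v, u) for u in us]
        w = lambda x, v=v, cs=coeffs, us=tuple(us): v(x) - sum(c * u(x) for c, u in zip(cs, us))
        nrm = inner(w, w) ** 0.5
        us.append(lambda x, w=w, nrm=nrm: w(x) / nrm)
    return us

u1, u2, u3 = gram_schmidt([lambda x: 1.0, lambda x: x, lambda x: x * x])
# u3 should match sqrt(45/8) * (x^2 - 1/3) from (A.2.35), up to quadrature error
print(round(u3(1.0), 4), round((45 / 8) ** 0.5 * (1 - 1 / 3), 4))
```

The printed values agree, and the computed u₁, u₂, u₃ are orthonormal to within the quadrature error.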
Let V be an n-dimensional inner product space, and let W ⊂ V be an m-dimensional linear subspace. By Proposition A.2.2, W has an orthonormal basis {w₁, …, w_m}. We know from §1.3 that V has a basis of the form
(A.2.36) {w₁, …, w_m, v_{m+1}, …, v_n}.
Applying Proposition A.2.2 again gives the following.
Proposition A.2.3. If V is an n-dimensional inner product space and W ⊂ V is an m-dimensional linear subspace with orthonormal basis {w₁, …, w_m}, then V has an orthonormal basis of the form
(A.2.37) {w₁, …, w_m, u_{m+1}, …, u_n}.
We see that if we define the orthogonal complement of W in V as
(A.2.38) W^⊥ = {v ∈ V : (v, w) = 0, ∀w ∈ W},
then
(A.2.39) W^⊥ = Span{u_{m+1}, …, u_n}.
In particular,
(A.2.40) dim W + dim W^⊥ = dim V.
In the setting of Proposition A.2.3, we can define P_W ∈ L(V) by
(A.2.41) P_W v = Σ_{j=1}^{m} (v, w_j) w_j, for v ∈ V,
and we see that P_W is uniquely defined by the properties
(A.2.42) P_W w = w, ∀w ∈ W, P_W u = 0, ∀u ∈ W^⊥.
We call P_W the orthogonal projection of V onto W. Note the appearance of such orthogonal projections in the proof of Proposition A.2.2, namely in (A.2.28).
Another object that arises in the setting of inner product spaces is the adjoint, defined as follows. If V and W are finite-dimensional inner product spaces and T ∈ L(V, W), we define the adjoint T* by
(A.2.43) T* ∈ L(W, V), (v, T*w) = (Tv, w).
If V and W are real vector spaces, we also use the notation Tᵗ for the adjoint, and call it the transpose. In case V = W and T ∈ L(V), we say
(A.2.44) T is self-adjoint ⟺ T* = T,
and
(A.2.45) T is unitary (if F = ℂ), or orthogonal (if F = ℝ) ⟺ T* = T⁻¹.
The following gives a significant connection between adjoints and orthogonal complements.
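A minimal sketch of the orthogonal projection (A.2.41)–(A.2.42), here for the concrete subspace W = span{e₁, e₂} ⊂ ℝ³ (the choice of W and v is an arbitrary illustration):

```python
# Sketch of the orthogonal projection (A.2.41): P_W v = sum_j (v, w_j) w_j
# for an orthonormal basis {w_1, ..., w_m} of W.

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(v, basis):
    # assumes `basis` is orthonormal
    out = [0.0] * len(v)
    for w in basis:
        c = inner(v, w)
        out = [o + c * wi for o, wi in zip(out, w)]
    return out

W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
v = [3.0, -2.0, 5.0]
Pv = project(v, W)
print(Pv)  # [3.0, -2.0, 0.0]
# v - P_W v lies in W^perp, consistent with (A.2.42)
print(all(abs(inner([a - b for a, b in zip(v, Pv)], w)) < 1e-12 for w in W))
```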
Proposition A.2.4. Let V be an n-dimensional inner product space, and let W ⊂ V be a linear subspace. Take T ∈ L(V). Then
(A.2.46) T : W → W ⟹ T* : W^⊥ → W^⊥.
Proof. Note that
(A.2.47) (w, T*u) = (Tw, u) = 0, ∀w ∈ W, u ∈ W^⊥,
if T : W → W. This shows that T*u ⊥ W for all u ∈ W^⊥, and we have (A.2.46). □
In particular,
(A.2.48) T = T*, T : W → W ⟹ T : W^⊥ → W^⊥.
A.3. Eigenvalues and eigenvectors
Let T : V → V be linear. If there is a nonzero v ∈ V such that
(A.3.1) Tv = λ_j v,
for some λ_j ∈ F, we say λ_j is an eigenvalue of T, and v is an eigenvector. Let E(T, λ_j) denote the set of vectors v ∈ V such that (A.3.1) holds. It is clear that E(T, λ_j) (the λ_j-eigenspace of T) is a linear subspace of V and
(A.3.2) T : E(T, λ_j) → E(T, λ_j).
The set of λ_j ∈ F such that E(T, λ_j) ≠ 0 is denoted Spec(T). Clearly, λ_j ∈ Spec(T) if and only if T − λ_j I is not injective, so, if V is finite dimensional,
(A.3.3) λ_j ∈ Spec(T) ⟺ det(λ_j I − T) = 0.
We call K_T(λ) = det(λI − T) the characteristic polynomial of T.
If F = ℂ, we can use the fundamental theorem of algebra, which says every nonconstant polynomial with complex coefficients has at least one complex root. (See §5.1 for a proof of this result.) This proves the following.
Proposition A.3.1. If V is a finite-dimensional complex vector space and T ∈ L(V), then T has at least one eigenvector in V.
Remark. If V is real and K_T(λ) does have a real root λ_j, then there is a real λ_j-eigenvector.
Sometimes a linear transformation has only one eigenvector, up to a scalar multiple. Consider the transformation A : ℂ³ → ℂ³ given by
(A.3.4) A = [[2, 1, 0], [0, 2, 1], [0, 0, 2]].
We see that det(λI − A) = (λ − 2)³, so λ = 2 is a triple root. It is clear that
(A.3.5) E(A, 2) = Span{e₁},
where e₁ = (1, 0, 0)ᵗ is the first standard basis vector of ℂ³.
If one is given T ∈ L(V), it is of interest to know whether V has a basis of eigenvectors of T. The following result is useful.
Proposition A.3.2. Assume that the characteristic polynomial of T ∈ L(V) has k distinct roots λ₁, …, λ_k, with eigenvectors v_j ∈ E(T, λ_j), 1 ≤ j ≤ k. Then {v₁, …, v_k} is linearly independent. In particular, if k = dim V, these vectors form a basis of V.
Proof. We argue by contradiction. If {v₁, …, v_k} is linearly dependent, take a minimal subset that is linearly dependent and (reordering if necessary) say this set is {v₁, …, v_m}, with Tv_j = λ_j v_j, and
(A.3.6) c₁v₁ + ⋯ + c_m v_m = 0,
with c_j ≠ 0 for each j ∈ {1, …, m}. Applying T − λ_m I to (A.3.6) gives
(A.3.7) c₁(λ₁ − λ_m)v₁ + ⋯ + c_{m−1}(λ_{m−1} − λ_m)v_{m−1} = 0,
a linear dependence relation on the smaller set {v₁, …, v_{m−1}}. This contradiction proves the proposition. □
Here is another important class of transformations that have a full complement of eigenvectors.
Proposition A.3.3. Let V be an n-dimensional inner product space, T ∈ L(V). Assume T is self-adjoint, i.e., T = T*. Then V has an orthonormal basis of eigenvectors of T.
Proof. First, assume V is a complex vector space (F = ℂ). Proposition A.3.1 implies that there exists an eigenvector v₁ of T. Let W = Span{v₁}. Then Proposition A.2.4 gives
(A.3.8) T : W^⊥ → W^⊥,
and dim W^⊥ = n − 1. The proposition then follows by induction on n. □
If V is a real vector space (F = ℝ), then the characteristic polynomial det(λI − T) has a complex root, say λ₁ ∈ ℂ. Denote by V_ℂ the complexification of V (see (A.3.20) below). The transformation T extends to T ∈ L(V_ℂ), as a self-adjoint transformation on this complex inner product space. Hence there exists nonzero v₁ ∈ V_ℂ such that Tv₁ = λ₁v₁. We now take note of the following.
Proposition A.3.4. If T = T*, every eigenvalue of T is real.
Proof. Say Tv₁ = λ₁v₁, v₁ ≠ 0. Then
(A.3.9) λ₁‖v₁‖² = (λ₁v₁, v₁) = (Tv₁, v₁) = (v₁, Tv₁) = (v₁, λ₁v₁) = \overline{λ₁}‖v₁‖².
Hence λ₁ = \overline{λ₁}, so λ₁ is real. □
Returning to the proof of Proposition A.3.3 when V is a real inner product space, we see that the (complex) root λ₁ of det(λI − T) must in fact be real. Hence λ₁I − T : V → V is not injective, so there exists a λ₁-eigenvector v₁ ∈ V. Induction on n, as in the argument above, finishes the proof. □
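Propositions A.3.3–A.3.4 can be illustrated concretely for a real symmetric 2 × 2 matrix, where the eigenvalues come from the quadratic formula; the discriminant (a − c)² + 4b² ≥ 0 shows directly that they are real. A sketch (the entries a, b, c are arbitrary):

```python
# Illustration of Propositions A.3.3-A.3.4 for T = [[a, b], [b, c]] = T*:
# real eigenvalues, and eigenvectors of distinct eigenvalues are orthogonal.
import math

a, b, c = 2.0, 1.0, -1.0
disc = math.sqrt((a - c) ** 2 + 4 * b * b)   # discriminant >= 0: eigenvalues are real
lam1 = 0.5 * (a + c + disc)
lam2 = 0.5 * (a + c - disc)

# for b != 0, (T - lam I)v = 0 is solved by v = (b, lam - a)
v1 = (b, lam1 - a)
v2 = (b, lam2 - a)

dot = v1[0] * v2[0] + v1[1] * v2[1]
print(abs(dot) < 1e-12)   # orthogonality, as in Proposition A.3.5 below
```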
Here is a useful general result on orthogonality of eigenvectors.

Proposition A.3.5. Let V be an inner product space, T ∈ L(V). If
(A.3.10) Tu = λu, T*v = \overline{μ}v, λ ≠ μ,
then
(A.3.11) u ⊥ v.
Proof. We have
(A.3.12) λ(u, v) = (Tu, v) = (u, T*v) = μ(u, v). □
As a corollary, if T = T*, then
Tu = λu, Tv = μv, λ ≠ μ ⟹ u ⊥ v.
Our next goal is to extend Proposition A.3.3 to a broader class of transformations. Given T ∈ L(V), where V is an n-dimensional complex inner product space, we say T is normal if T and T* commute, i.e., TT* = T*T. Equivalently, taking
(A.3.13) T = A + iB, A = A*, B = B*,
we have
(A.3.14) T normal ⟺ AB = BA.
Generally, for A, B ∈ L(V), we see that
(A.3.15) BA = AB ⟹ B : E(A, λ_j) → E(A, λ_j).
Thus, in the setting of (A.3.13), we can find an orthonormal basis of each space E(A, λ), λ ∈ Spec A, consisting of eigenvectors of B, to get an orthonormal basis of V consisting of vectors that are simultaneously eigenvectors of A and B, hence eigenvectors of T. This establishes the following.
Proposition A.3.6. Let V be an n-dimensional complex inner product space, and let T ∈ L(V) be a normal transformation. Then V has an orthonormal basis of eigenvectors of T.
Note that if T has the form (A.3.13)–(A.3.14) and λ = a + ib, a, b ∈ ℝ, then
(A.3.16) E(T, λ) = E(A, a) ∩ E(B, b) = E(T*, \overline{λ}).
We deduce from Proposition A.3.5 the following.
Proposition A.3.7. In the setting of Proposition A.3.6, with T normal,
(A.3.17) λ ≠ μ ⟹ E(T, λ) ⊥ E(T, μ).
An important class of normal operators is the class of unitary operators, defined in Appendix A.2. We recall that if V is an inner product space and T ∈ L(V), then
(A.3.18) T is unitary ⟺ T* = T⁻¹.
We write T ∈ U(V), if V is a complex inner product space. We see from (A.3.16) (or directly) that
(A.3.19) T ∈ U(V), λ ∈ Spec T ⟹ \overline{λ} = λ⁻¹ ⟹ |λ| = 1.
We deduce that if T ∈ U(V), then V has an orthonormal basis of eigenvectors of T, each eigenvalue being a complex number of absolute value 1.
If V is a real n-dimensional inner product space and (A.3.18) holds, we say T is an orthogonal transformation, and write T ∈ O(V). In such a case, V typically does not have an orthonormal basis of eigenvectors of T. However, V does have an orthonormal basis with respect to which such an orthogonal transformation has a special structure, as we proceed to show. To get it, we construct the complexification of V,
(A.3.20) V_ℂ = {u + iv : u, v ∈ V},
which has a natural structure of a complex n-dimensional vector space, with a Hermitian inner product. A transformation T ∈ O(V) has a unique ℂ-linear extension to a transformation on V_ℂ, which we continue to denote by T, and this extended transformation is unitary on V_ℂ. Hence V_ℂ has an orthonormal basis of eigenvectors of T. Say u + iv ∈ V_ℂ is such an eigenvector,
(A.3.21) T(u + iv) = e^{iθ}(u + iv), e^{iθ} ∉ {1, −1}.
Writing e^{iθ} = c + is, c, s ∈ ℝ, we have
(A.3.22) Tu + iTv = (c + is)(u + iv) = cu − sv + i(su + cv),
hence
(A.3.23) Tu = cu − sv, Tv = su + cv.
In such a case, applying complex conjugation to (A.3.21) yields T(u − iv) = e^{−iθ}(u − iv), and e^{iθ} ≠ e^{−iθ} if e^{iθ} ∉ {1, −1}, so Proposition A.3.7 yields
(A.3.24) u + iv ⊥ u − iv,
hence
(A.3.25) 0 = (u + iv, u − iv) = (u, u) − (v, v) + i(v, u) + i(u, v) = |u|² − |v|² + 2i(u, v),
or equivalently
(A.3.26) |u| = |v| and u ⊥ v.
Now Span{u, v} ⊂ V has an (n − 2)-dimensional orthogonal complement, on which T acts, and an inductive argument gives the following.
Proposition A.3.8. Let V be an n-dimensional real inner product space, and let T : V → V be an orthogonal transformation. Then V has an orthonormal basis in which the matrix representation of T consists of 2 × 2 blocks
(A.3.27) [[c_j, −s_j], [s_j, c_j]], c_j² + s_j² = 1,
plus perhaps an identity matrix block if 1 ∈ Spec T, and a block that is −I if −1 ∈ Spec T.
This result has the following consequence, advertised in Exercise 14 of §3.2.
Corollary A.3.9. For each integer n ≥ 2,
(A.3.28) Exp : Skew(n) → SO(n) is onto.
As in §3.2, we leave the proof as an exercise for the reader. The key is to use the Euler-type identity
(A.3.29) J = [[0, −1], [1, 0]] ⟹ e^{θJ} = [[cos θ, −sin θ], [sin θ, cos θ]].
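The identity (A.3.29) can be tested by summing the power series for the matrix exponential directly; this is the mechanism behind Corollary A.3.9 in the 2 × 2 case. A sketch (the angle and truncation length are arbitrary choices):

```python
# Numerical check of (A.3.29): for J = [[0,-1],[1,0]], exp(theta J) is the
# rotation matrix [[cos t, -sin t], [sin t, cos t]]. The matrix exponential
# is summed directly from its power series.
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_exp(A, terms=30):
    S = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starting from the identity
    P = [[1.0, 0.0], [0.0, 1.0]]   # running power A^n
    fact = 1.0
    for n in range(1, terms):
        P = mat_mul(P, A)
        fact *= n
        S = [[S[i][j] + P[i][j] / fact for j in range(2)] for i in range(2)]
    return S

t = 0.7
E = mat_exp([[0.0, -t], [t, 0.0]])   # theta * J is skew-symmetric
print(abs(E[0][0] - math.cos(t)) < 1e-12 and abs(E[1][0] - math.sin(t)) < 1e-12)
```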
In cases when T is a linear transformation on an n-dimensional complex vector space V, and V does not have a basis of eigenvectors of T, it is useful to have the concept of a generalized eigenspace, defined as
(A.3.30) GE(T, λ_j) = {v ∈ V : (T − λ_j I)^k v = 0 for some k}.
If λ_j is an eigenvalue of T, nonzero elements of GE(T, λ_j) are called generalized eigenvectors. Clearly, E(T, λ_j) ⊂ GE(T, λ_j). Also T : GE(T, λ_j) → GE(T, λ_j). Furthermore, one has the following.
Proposition A.3.10. If μ ≠ λ_j, then
(A.3.31) GE(T, λ_j) ∩ GE(T, μ) = 0.
It is useful to know the following.
Proposition A.3.11. If V is an n-dimensional complex vector space and T ∈ L(V), then V has a basis of generalized eigenvectors of T.
We will not give a proof of this result here. A proof can be found in [50, Chapter 2, §7] and also in [52].
A.4. Complements on power series
If a function f is sufficiently differentiable on an interval in ℝ containing x and y, the Taylor expansion about y reads
(A.4.1) f(x) = f(y) + f′(y)(x − y) + ⋯ + (1/n!) f^{(n)}(y)(x − y)ⁿ + R_n(x, y).
Here, T_n(x, y) = f(y) + ⋯ + f^{(n)}(y)(x − y)ⁿ/n! is that polynomial of degree n in x all of whose x-derivatives of order ≤ n, evaluated at y, coincide with those of f. This prescription makes the formula for T_n(x, y) easy to derive. The analysis of the remainder term R_n(x, y) is more subtle. One useful result about this remainder is the following. Say
x > y, and for simplicity assume f^{(n+1)} is continuous on [y, x]; we say f ∈ C^{n+1}([y, x]). Then
(A.4.2) m ≤ f^{(n+1)}(s) ≤ M, ∀s ∈ [y, x] ⟹ m (x − y)^{n+1}/(n + 1)! ≤ R_n(x, y) ≤ M (x − y)^{n+1}/(n + 1)!.
Under our hypotheses, this result is equivalent to the Lagrange form of the remainder,
(A.4.3) R_n(x, y) = f^{(n+1)}(ξ_n) (x − y)^{n+1}/(n + 1)!,
for some ξ_n between x and y. A proof of (A.4.3) will be given below. One of our purposes here is to comment on how effective estimates on R_n(x, y) are in determining the convergence of the infinite series
(A.4.4) Σ_{k=0}^{∞} (f^{(k)}(y)/k!) (x − y)^k
to f(x). That is to say, we want to perceive that R_n(x, y) → 0 as n → ∞, in appropriate
circumstances. Before we look at how effective the estimate (A.4.2) is at this job, we want to introduce another player, and along the way discuss the derivation of various formulas for the remainder in (A.4.1).
A simple formula for R_n(x, y) follows upon taking the y-derivative of both sides of (A.4.1); we are assuming that f is at least (n + 1)-fold differentiable. When we do this (applying the Leibniz formula to those terms that are products) an enormous amount of cancellation arises, and the formula collapses to
(A.4.5) ∂R_n/∂y = −(1/n!) f^{(n+1)}(y)(x − y)ⁿ, R_n(x, x) = 0.
If we concentrate on R_n(x, y) as a function of y and look at the difference quotient [R_n(x, y) − R_n(x, x)]/(y − x), an immediate consequence of the mean value theorem is that
(A.4.6) R_n(x, y) = (1/n!) f^{(n+1)}(ξ_n)(x − ξ_n)ⁿ (x − y),
for some ξ_n between x and y. This result, known as Cauchy's formula for the remainder, has a slightly more complicated appearance than (A.4.3), but as we will see it has advantages over Lagrange's formula. The application of the mean value theorem to obtain (A.4.6) does not require the continuity of f^{(n+1)}, but we do not want to dwell on that point. If f^{(n+1)} is continuous, we can apply the fundamental theorem of calculus to (A.4.5) in the y-variable and obtain the basic integral formula
Rn (x,y) = � n.
lx(X  s)n I(n+l)(s)ds. y
Another proof of (AA.7) is indicated in Exercise 9 of §l.l. If we think of the integral in (AA.7) as (xy) times the mean value of the integrand, we see (AA.6)as a consequence.
A. Complementary material
404
On the other hand, if we want to bring a factor of (x  y)n+l outside the integral in (AA.7), the change of variable x  s = t(x  y) gives the integral formula
(AA.8)
1 Rn(x, y) = , (x  y)n+l n.
1 1 tn f(n+l) (ty + (1  t)x) dt. 0
If we think of this integral as 1/(n + 1) times a weighted mean value of f(n+l ) , we recover the Lagrange formula (AA.3). From the Lagrange form (AA.3) of the remainder in the Taylor series (AA.1) we have the estimate (AA.9)
sup If(n +l ) ( 1) 1 
\EI(x,y)
where lex, y) is the open interval from x to y (either (x, y) or (y, x), disregarding the trivial case x = y). Meanwhile, from the Cauchy form (AA.6) of the remainder we have the estimate
(AA.lO) We now study how effective these estimates are in determining that various power series converge. We begin with a look at these remainder estimates for the power series expansion about the origin of the simple function
(AA.ll)
1 f(x) = 1 x'
We have, for x '" 1,
(AA.12) and formula (AA. 1 ) becomes 1 (AA.13) = 1 + x + . . . + xn + R n (x, 0). 1x Of course, everyone knows tha t the infinite series (AA.14 )
l + x + · · · + xn + . . .
(AA.15)
I Rn(x, 0) 1 :S Ixl n+l . sup 1 1 I'I n2
converges to f(x) in (AA.ll), precisely for x E (1, 1). What we are interested in is what can be deduced from the estimate (A.4.9), which, for the function (AA.ll), takes the form _
\EI(x,O)
We consider two cases. First, if x :S 0, then 1 1  1' 1 2: 1 for I' E lex, 0), so
(AA.16)
A.4. Complements on power series
405
Thus the estimate (AA.9) implies that R n (x, O) + a in (AA.13), for all Suppose however that x 2: O. What we have from (AA.15) is x 2: a ==} I Rn(x, 0) 1 :S I xl n+ l sup 1 1 I'I n2
x E (1, 0] '
_
o:.,;s":.::;x
(AA.17)
+l = l 1 x C x Jn . This tends to a as n + 00 if and only if x < 1  x, i.e., if and only if x < 1/2. What we have is the following.
Conclusion. The estimate (AA.9) implies the convergence of the Taylor series (about the origin) for the function f(x) = 1/(1  x), only for 1 < x < 1/2. This example points to a weakness in the estimate (AA.9), coming from the La grange form of the remainder. Now let us see how well we can do with the estimate (AA.lO), coming from the Cauchy form of the remainder. For the function (AA.ll), this takes the form
(AA.18)
I Rn (x, 0) 1 :S (n + 1) I xl sup
'EI(x,O)
For 1 < x :S a one has an estimate like (AA.16), with a factor of (n + 1) thrown in. On the other hand, one readily verifies that a :S
so we deduce from (AA.18) that
I :S x < 1 ==} X1  II :S x, _
n+1 O :S x < 1 ==} IRn(x, O) I :S (n + 1) 1x  x ' which does tend to a for all x E [0, 1). (AA.19)
One might be wondering if one could come up with some more complicated exam ple, for which Cauchy's form is effective only on an interval shorter than the interval of convergence. In fact, you cannot. Cauchy's form of the remainder is always effective in the interior of the interval of convergence. This can be demonstrated, using methods of complex analysis. We look at some more power series and see when convergence can be established at an endpoint of an interval of convergence, using the estimate (A.4. 1O) , i.e.,
(AA.20)
IRn (x, Y)1 :S Cn (x,y), yl Cn (x, y) = I x � sup (x on (n+l ) ( 01n. ,EI(x,y) I  f
We consider the family of examples,
(AA.21)
f(x) = (1  x)a , a > O.
The power series expansion has radius of convergence 1 (if a is not an integer) and, as we will see, one has convergence at both endpoints, + 1 and 1, whenever a > O. Let us see when Cn (±l, 0) + O. We have (AA.22) f(n+ l) (x) = ( 1)n+ 1 a( a  1) (a  n) (1 x)an l _
. . ·
_
A. Complementary material
Hence
C_n(1, 0) = (|a(a − 1)⋯(a − n)|/n!) sup_{ξ∈(0,1)} (1 − ξ)ⁿ (1 − ξ)^{a−n−1} ≤ …
Assume that the identity map I : M → M is smoothly homotopic to a map K : M → M with image in X, so K = ι ∘ F, with F : M → X. Show that, for all k ≥ 0,
F* ι* : H^k(M) → H^k(M) is the identity,
hence ι* : H^k(M) → H^k(X) is injective. In particular, …
6. Take M, X, ι as in Exercise 5. This time, assume there is a retraction R : M → X, so R ∘ ι : X → X is the identity map. Show that, for all k ≥ 0,
ι* R* : H^k(X) → H^k(X) is the identity,
hence ι* : H^k(M) → H^k(X) is surjective. In particular, …
7. Now combine Exercises 5 and 6. Take M, X, ι as above. Assume the identity map I : M → M is smoothly homotopic to a map K = ι ∘ R, where R : M → X is a retraction. (We say X is a smooth deformation retract of M.) Deduce that ι induces isomorphisms
ι* : H^k(M) ≅ H^k(X), ∀k ≥ 0.
Bibliography
[1] R. Abraham and J. E. Marsden, Foundations of mechanics, 2nd ed., revised and enlarged, Benjamin/Cummings Publishing Co., Reading, Mass., 1978. With the assistance of Tudor Ratiu and Richard Cushman. MR515141
[2] L. V. Ahlfors, Complex analysis: An introduction to the theory of analytic functions of one complex variable, 3rd ed., McGraw-Hill Book Co., New York, 1978. International Series in Pure and Applied Mathematics. MR510197
[3] V. I. Arnol'd, Mathematical methods of classical mechanics, Springer-Verlag, New York–Heidelberg, 1978. Translated from the Russian by K. Vogtmann and A. Weinstein; Graduate Texts in Mathematics, 60. MR0690288
[4] L. Báez-Duarte, Brouwer's fixed-point theorem and a generalization of the formula for change of variables in multiple integrals, J. Math. Anal. Appl. 177 (1993), no. 2, 412–414, DOI 10.1006/jmaa.1993.1265. MR1231489
[5] R. Bott and L. W. Tu, Differential forms in algebraic topology, Graduate Texts in Mathematics, vol. 82, Springer-Verlag, New York–Berlin, 1982. MR658304
[6] R. C. Buck, Advanced calculus, 3rd ed., McGraw-Hill Book Co., New York–Auckland–Bogotá, 1978. With the collaboration of Ellen F. Buck; International Series in Pure and Applied Mathematics. MR0476931
[7] F. Dai and Y. Xu, Approximation theory and harmonic analysis on spheres and balls, Springer Monographs in Mathematics, Springer, New York, 2013. MR3060033
[8] G. de Rham, Differentiable manifolds: Forms, currents, harmonic forms, Grundlehren der Mathematischen Wissenschaften, vol. 266, Springer-Verlag, Berlin, 1984. Translated from the French by F. R. Smith; with an introduction by S. S. Chern. MR760450
[9] A. Devinatz, Advanced calculus, Holt, Rinehart and Winston, New York–Montreal–London, 1968. MR0227326
[10] M. P. do Carmo, Differential geometry of curves and surfaces, Prentice-Hall, Englewood Cliffs, N.J., 1976. Translated from the Portuguese. MR0394451
[11] M. P. do Carmo, Riemannian geometry, Mathematics: Theory &amp; Applications, Birkhäuser Boston, Boston, MA, 1992. Translated from the second Portuguese edition by Francis Flaherty. MR1138207
[12] J. J. Duistermaat and J. A. C. Kolk, Lie groups, Universitext, Springer-Verlag, Berlin, 2000. MR1738431
[13] H. M. Edwards, Riemann's zeta function, Dover Publications, Mineola, NY, 2001. Reprint of the 1974 original [Academic Press, New York; MR0466039]. MR1854455
[14] H. Federer, Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, Band 153, Springer-Verlag New York Inc., New York, 1969. MR0257325
[15] H. Flanders, Differential forms with applications to the physical sciences, Academic Press, New York–London, 1963. MR0162198
[16] W. H. Fleming, Functions of several variables, Addison-Wesley Publishing Co., Reading, Mass.–London, 1965. MR0174675
[17] G. B. Folland, Real analysis: Modern techniques and their applications, Pure and Applied Mathematics, John Wiley &amp; Sons, New York, 1984. MR767633
[18] P. Franklin, A Treatise on Advanced Calculus, John Wiley &amp; Sons, New York, 1940. MR0002571
[19] H. Goldstein, Classical Mechanics, Addison-Wesley Press, Cambridge, Mass., 1951. MR0043608
[20] É. Goursat, A course in mathematical analysis: Vol. 1: Derivatives and differentials, definite integrals, expansion in series, applications to geometry. Vol. 2, Part 1: Functions of a complex variable. Vol. 2, Part 2: Differential equations, translated by E. R. Hedrick (Vol. 1), and E. R. Hedrick and O. Dunkel (Vol. 2), Dover Publications, New York, 1959. MR0106155
[21] M. J. Greenberg and J. R. Harper, Algebraic topology: A first course, Mathematics Lecture Note Series, vol. 58, Benjamin/Cummings Publishing Co., Reading, Mass., 1981. MR643101
[22] V. Guillemin and P. Haine, Differential forms, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2019. MR3931306
[23] J. K. Hale, Ordinary differential equations, Wiley-Interscience, New York–London–Sydney, 1969. Pure and Applied Mathematics, Vol. XXI. MR0419901
[24] A. Hatcher, Algebraic topology, Cambridge University Press, Cambridge, 2002. MR1867354
[25] N. J. Hicks, Notes on differential geometry, Van Nostrand Mathematical Studies, No. 3, D. Van Nostrand Co., Princeton, N.J.–Toronto–London, 1965. MR0179691
[26] E. Hille, Analytic function theory, Chelsea Publishing Company, New York, 1977.
[27] M. W. Hirsch and S. Smale, Differential equations, dynamical systems, and linear algebra, Academic Press [A subsidiary of Harcourt Brace Jovanovich, Publishers], New York-London, 1974. Pure and Applied Mathematics, Vol. 60. MR0486784
[28] E. W. Hobson, The theory of spherical and ellipsoidal harmonics, Chelsea Publishing Company, New York, 1955. Reprint of 1931 Cambridge University Press monograph. MR0064922
[29] Y. Kannai, An elementary proof of the no-retraction theorem, Amer. Math. Monthly 88 (1981), no. 4, 264–268, DOI 10.2307/2320550. MR610487
[30] O. D. Kellogg, Foundations of potential theory, Reprint from the first edition of 1929. Die Grundlehren der Mathematischen Wissenschaften, Band 31, Springer-Verlag, Berlin-New York, 1967. MR0222317
[31] S. G. Krantz, Function theory of several complex variables, John Wiley & Sons, Inc., New York, 1982. Pure and Applied Mathematics; A Wiley-Interscience Publication. MR635928
[32] P. D. Lax, Change of variables in multiple integrals, Amer. Math. Monthly 106 (1999), no. 6, 497–501, DOI 10.2307/2589462. MR1699248
[33] C. LeBrun and M. Taylor, The Hopf bracket, manuscript, 2013, available at http://mtaylor.web.unc.edu/notes, item #16.
[34] S. Lefschetz, Differential equations: geometric theory, Pure and Applied Mathematics, Vol. VI, Interscience Publishers, Inc., New York; Interscience Publishers Ltd., London, 1957. MR0094488
[35] L. H. Loomis and S. Sternberg, Advanced calculus, Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1968. MR0227327
[36] J. W. Milnor, Topology from the differentiable viewpoint, Based on notes by David W. Weaver, The University Press of Virginia, Charlottesville, Va., 1965. MR0226651
[37] H. K. Nickerson, D. C. Spencer, and N. E. Steenrod, Advanced calculus, D. Van Nostrand Co., Inc., Toronto-Princeton, N.J.-New York-London, 1959. MR0123651
[38] F. W. J. Olver, Asymptotics and special functions, Academic Press [A subsidiary of Harcourt Brace Jovanovich, Publishers], New York-London, 1974. MR0435697
[39] M. H. Protter and C. B. Morrey Jr., A first course in real analysis, 2nd ed., Undergraduate Texts in Mathematics, Springer-Verlag, New York, 1991. MR1123269
[40] B. Simon, Harmonic analysis: A Comprehensive Course in Analysis, Part 3, American Mathematical Society, Providence, RI, 2015. MR3410783
[41] K. T. Smith, Primer of modern analysis, 2nd ed., Undergraduate Texts in Mathematics, Springer-Verlag, New York-Berlin, 1983. MR710655
[42] M. Spivak, Calculus on Manifolds, Benjamin, New York, 1965.
[43] M. Spivak, A comprehensive introduction to differential geometry. Vol. IV, Publish or Perish, Inc., Boston, Mass., 1975. MR0394452
[44] S. Sternberg, Lectures on differential geometry, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1964. MR0193578
[45] J. J. Stoker, Differential geometry, Pure and Applied Mathematics, Vol. XX, Interscience Publishers John Wiley & Sons, New York-London-Sydney, 1969. MR0240727
[46] M. E. Taylor, Partial differential equations: Basic theory, Texts in Applied Mathematics, vol. 23, Springer-Verlag, New York, 1996. MR1395147
[47] M. E. Taylor, Measure theory and integration, Graduate Studies in Mathematics, vol. 76, American Mathematical Society, Providence, RI, 2006. MR2245472
[48] M. Taylor, Differential Geometry, Lecture notes, Preprint, 2019.
[49] M. Taylor, Introduction to analysis in one variable, Pure and Applied Undergraduate Texts, vol. 47, American Mathematical Society, Providence, RI, to appear.
[50] M. Taylor, Introduction to differential equations, American Mathematical Society, Providence, RI, 2011.
[51] M. E. Taylor, Introduction to complex analysis, Graduate Studies in Mathematics, vol. 202, American Mathematical Society, Providence, RI, 2019. MR3969984
[52] M. Taylor, Linear Algebra, Pure and Applied Undergraduate Texts, vol. 45, American Mathematical Society, Providence, RI, to appear.
[53] M. E. Taylor, Noncommutative harmonic analysis, Mathematical Surveys and Monographs, vol. 22, American Mathematical Society, Providence, RI, 1986. MR852988
[54] J. A. Thorpe, Elementary topics in differential geometry, Springer-Verlag, New York-Heidelberg, 1979. Undergraduate Texts in Mathematics. MR528129
[55] V. S. Varadarajan, Lie groups, Lie algebras, and their representations, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1974. Prentice-Hall Series in Modern Analysis. MR0376938
[56] E. Whittaker and G. Watson, Modern Analysis, Cambridge University Press, 1927.
[57] E. B. Wilson, Advanced calculus, Dover, New York, 1958. (Reprint of 1912 edition.)
Index
accumulation point, 383
Cauchy's inequality, 18, 299, 315, 341, 394
Ad, 282
Cauchy-Riemann equations, 53, 66, 186, 193
ad, 282
adjoint, 397
cell, 88
algebra of sets, 91
center, 77, 214
analysis, vii
central function, 375
angle defect, 256
chain rule, 13, 42, 72, 75, 118, 149, 158, 195
anticommutation relation, 161
change of variable formula, 14, 94, 97, 102, 107, 120, 155, 180
antipodal map, 208, 214
antipodal points, 131
character of a representation, 375
antisymmetry, 155
characteristic function, 5, 90
arc length, 120
arcsin, 83, 196
arctan, 84
area, 121
Arzelà-Ascoli theorem, 238, 389
averaging over rotations, 124
ball, 382
Banach space, 389
basis, 25, 369
bi-invariant metric, 272
binomial coefficient, 368
Bolzano-Weierstrass theorem, 20
bracket, 78
Brouwer fixed-point theorem, 206, 416
characteristic polynomial, 398
Christoffel symbols, 230, 233, 245
Clairaut parametrization, 237, 250
classical Stokes formula, 174
Clifford algebra, 161
closed, 382
closed form, 162, 416
closed set, 19
closure, 90, 383
Codazzi equation, 251
cofactor matrix, 36, 56
column vector, 17, 24
commutator, 267, 353
commutator identities, 355
compact, 20, 383
Cantor set, 16
complete metric space, 382
Cartan's formula for Lie derivative, 202
completeness property, 19, 301, 319
Cartesian product, 218
completion, 382
Cauchy integral formula, 187, 196, 370
complex analytic, 53
Cauchy integral theorem, 187
complexification, 401
Cauchy remainder formula, 48, 403, 407
conformal, 198
Cauchy sequence, 19, 382
conjugate points, 287 
connected, 135, 183, 391
divergence, 129, 203
connection 1-form, 245
divergence theorem, 170, 172, 175
content, 89
dot product, 17
continuous, 3, 40, 89, 386
dual space, 302, 321
contraction mapping theorem, 58, 69
Duhamel's formula, 72, 86, 284
convergence, 19
convergent power series, 50
eigenfunction, 335
convolution, 312, 376
eigenspace, 398
coordinate chart, 117, 148
eigenvalue, 134, 335, 398
coordinate system, 154
eigenvector, 134, 398
cos, 82, 136, 139, 195
ellipsoid, 113
countable, 383
embedding, 133
covariant derivative, 232
energy functional, 221, 226
covariant derivative on normal fields, 242
Euclidean metric tensor, 121
Cramer's formula, 36, 56
Euclidean space, 17, 41
critical point, 47, 53
Euler characteristic, 213, 253, 428
critical point of a vector field, 76, 212
cross product, 37, 120
curl, 163, 174
curvature, 222, 238
curvature (2,2)-form, 249
curvature 2-form, 245
curvature vector, 238
curve, 120, 136, 154
Darboux theorem, 4, 89, 144
de Rham cohomology, 417
definition vs. formula, 34
deformation retract, 435
Deg, 205, 210
Euler's formula, 82, 140, 402
Euler's other formula, 215, 257
Exp_p, 233
exact form, 162, 200, 416
exactness criterion, 208
Exp, 61, 86, 123, 140, 402
expansion by minors, 36
exponential function, 81
exponential map, 123, 222, 233, 283
exterior derivative, 161
exterior product, 160
extremal problem, 47
degree, 205, 209, 214, 416
fixed point, 57
degree of Gauss map, 252
flat torus, 132
dense, 338, 383
flow, 75
derivation, 75
Fourier coefficients, 294
derivative, 7, 40, 124
Fourier inversion formula, 291, 292, 294, 303, 306, 313, 317, 321, 325
derived representation, 354, 358
det, 31, 86
Fourier series, 291, 294
determinant, 31, 55, 156
Fourier transform, 292, 310
diffeomorphism, 57, 117, 133, 155
Fubini's theorem, 92, 102, 115
differentiability of power series, 51
fundamental theorem of algebra, 191, 214, 398
differentiable, 7, 40, 124
differential equation, 68
differential form, 154
differential operator, 75
dimension, 25
Dini's theorem, 392
Dirichlet problem, 293, 308, 332, 342, 343, 411
disk, 378
distance, 18, 382
distribution, 305
div, 129, 163, 170, 203
fundamental theorem of calculus, 7, 40, 165, 226, 230, 233, 403
fundamental theorem of linear algebra, 27, 126, 337
Funk-Hecke theorem, 350, 375
Gl(n, ℝ), 41
Gl+(n, ℝ), 134, 135, 182
Gamma function, 122, 197, 331
Gauss theorema egregium, 224, 246
Gauss angle defect formula, 257
Gauss curvature, 216, 223, 239, 242, 244, 249, 250, 252, 261
integral test, 15
integrating factor, 164
Gauss linking number formula, 220
integration by parts, 14, 122
Gauss map, 213, 239
interior, 90, 383
Gauss-Bonnet formula, 216
interior product, 161, 201
Gauss-Bonnet theorem, 224, 253
intermediate value theorem, 391
Gaussian integral, 103
interval, 2
Gegenbauer polynomials, 344
inverse, 25
generalized eigenspace, 289, 402
inverse Fourier transform, 310
generalized Gauss-Bonnet theorem, 260
inverse function theorem, 56, 117, 123, 124, 196
generating function, 344
geodesic curvature, 258
geodesic equation, 221, 226, 229, 230, 237, 277
invertible, 29, 35
irreducible representation, 280, 352
isometric embedding, 134
geodesic triangle, 257
isomorphism, 25
geodesics, 221, 225
isoperimetric inequality, 378
geometric series, 308
isothermal coordinates, 248
global diffeomorphism, 60
iterated integral, 92
grad, 163
Gram-Schmidt construction, 395
Jacobi field, 287
Green formulas, 173
Jacobi inversion formula, 331
Green's theorem, 169, 177, 187, 379
Jacobi variational equation, 286
group, 278
Jordan curve theorem, 212
Jordan-Brouwer separation theorem, 211
Haar measure, 124, 208, 350
Hamiltonian system, 231
Künneth formula, 427
Hamiltonian vector field, 176
harmonic, 179, 187, 190, 198, 308, 410
ladder operators, 356
harmonic conjugate, 193
Lagrange remainder formula, 48, 403, 407
harmonic form, 419, 420
Laplace operator, 172, 177, 178, 190, 198, 335, 410
harmonic function, 332
harmonic polynomial, 336, 342
Lebesgue integral, 300, 318
Harnack's inequality, 413
Lebesgue measure, 6
Heine-Borel theorem, 21, 384
Legendre polynomials, 345, 360
Hessian, 46
length, 225
Hilbert space, 395
length functional, 221, 226
Hodge decomposition, 419, 422
Levi-Civita covariant derivative, 222, 232
Hodge Laplacian, 420
Lie algebra, 268
holomorphic, 53, 66, 186
Lie algebra isomorphism, 359
homotopic, 200, 213
Lie algebra representation, 354
homotopy invariance, 210, 214
Lie bracket, 78, 267, 353
Hopf index theorem, 428
Lie derivative, 78, 202
Hopf invariant, 429
Lie group, 278
linear system of ODE, 71
implicit function theorem, 62, 125
linear transformation, 23, 40
index of a vector field, 212
linearization of an ODE, 73
inf, 2
linearly dependent, 25
injective, 25
linearly independent, 25
inner product, 119, 151, 301, 314, 393
linking number, 219
inner tube, 140, 217
Liouville's theorem, 191, 197, 413
integral equation, 68
Lipschitz, 92
integral of an n-form, 157
local diffeomorphism, 60
local maximum, 47, 53
orthonormal basis, 119, 134, 339, 395
local minimum, 47, 53
outer measure, 6, 111
log, 81, 195
lower content, 5, 89
M(n, ℝ), 30, 31, 86
M(n, ℂ), 123
manifold, 131, 148
parallel translation, 253
parallel transport, 253
parametrization by arc length, 136, 238
partial derivative, 10, 40
partition, 2, 88
matrix, 23
partition of unity, 145, 165
matrix exponential, 76, 85, 283, 352
permutation, 33
matrix group, 265
Peter-Weyl theorem, 372
matrix multiplication, 24
PI, 343
maximum, 386
pi, 82, 83, 136
maximum principle, 190, 334, 411
Picard iteration method, 68
maxsize, 2, 88
piecewise constant, 11
mean value property, 190, 334, 411
PK, 96
mean value theorem, 8, 43
Plancherel identity, 291, 299, 307, 315, 318, 326, 379
measurable, 300
metric space, 18, 382
Poincaré disk, 248
metric tensor, 119, 132, 150, 225
Poincaré duality, 424
minimum, 386
Poincaré lemma, 162, 167, 202, 208, 418, 435
modulus of continuity, 387
monotone convergence theorem, 109
Poisson integral, 343
Morse function, 147
Poisson integral formula, 294, 308, 334
multi-index notation, 44
Poisson summation formula, 330, 332
multilinear notation, 49
polar coordinates, 100, 333
multilinear Taylor formula, 50
polar decomposition, 134
multiplicativity, 35
polynomial, 23
multipole expansion, 368
positive definite, 47
power series, 189, 194, 402
negative definite, 47
prime numbers, 332
neighborhood, 382
product rule, 162
Newton method miracle, 66
projection, 143, 275
Newton's method, 60, 66
projective space, 131, 160, 368
nil set, 91
pullback, 155, 156, 162
no-retraction theorem, 206, 210
Pythagorean theorem, 18
nondegenerate critical point, 77, 212
norm, 18, 40, 296, 314, 388, 394
quotient surface, 131
normal field, 240
normal transformation, 400
radial vector field, 164
null space, 25
range, 25
open, 382
real analytic, 66
rational numbers, 382
open set, 19, 40
real numbers, 382
orbit, 75
regular value, 211
orientable, 157, 160
removable singularity theorem, 336, 411
orientation, 156
representation, 143, 275, 351
orthogonal complement, 397
retraction, 206
orthogonal coordinates, 248
Riemann curvature, 223, 244, 278
orthogonal projection, 227, 239, 339, 397
Riemann integrability criterion, 112
orthogonal transformation, 401
Riemann integrable, 3, 89, 127
orthogonality, 336
Riemann integral, 2, 3, 88
Riemann sum, 5, 89
tangent vector field, 128
Riemann zeta function, 331
Taylor formula with remainder, 14, 44
Riemann's functional equation, 332
tempered distribution, 324
Riemannian manifold, 225
torus, 294
Rodrigues formula, 371
total boundedness, 383
rotation group, 351
Tr, 38, 86
row operation, 95, 116
trace, 38
row vector, 17
transpose, 30
triangle inequality, 18, 296, 301, 314, 382, 394
saddle, 77, 214
saddle point, 47, 53
triangulation, 257
Sard's theorem, 146, 211
trigonometric function, 82, 136
Schur's lemma, 373
trigonometric polynomial, 295, 410
Schwarz reflection principle, 412
Tychonov theorem, 389
second fundamental form, 223, 240, 242
self-adjoint, 397
umbilic, 252
sequence, 383
unbounded integrable function, 103
simply connected, 418
uniformly continuous, 387
sin, 82, 136, 139, 195
unimodular group, 274
sinh, 142
unit normal, 239
sink, 77, 214
unitarily equivalent, 281
Skew(n), 123, 140, 352, 402
unitary, 397
SO(n), 37, 123, 140, 199, 351, 402
unitary representation, 280, 351
source, 77, 214
unlinking, 220
span, 25
Spec, 398
spectral mapping theorem, 285, 288
sphere, 121
spherical coordinates, 361
spherical harmonic expansion, 342
spherical harmonics, 293, 332, 335
upper content, 5, 89
vector, 155
vector field, 75, 128, 150, 240
vector space, 22, 41, 393
volume, 88, 121
volume of a ball, 113, 136
spherical polar coordinates, 113, 121, 335
wedge product, 160
Stokes formula, 164, 177, 200, 210, 255, 262
Weierstrass approximation theorem, 408
Stone-Weierstrass theorem, 206, 282, 295, 339, 409
Weingarten formula, 223, 241, 242, 250
SU(n), 359
Weingarten map, 223, 241
Weyl orthogonality relations, 281
submersion, 125
submersion mapping theorem, 125
zonal function, 348
subsequence, 383
zonal harmonics, 348
summation convention, 171, 229
sup, 2
surface, 117
surface integral, 119
surface of revolution, 142, 251
surface with boundary, 164
surface with corners, 165
surjective, 25
Sym(n), 134
tan, 84
tangent bundle, 150
tangent space, 119, 148
Selected Published Titles in This Series

46 Michael E. Taylor, Introduction to Analysis in Several Variables, 2020
44 Alejandro Uribe A. and Daniel A. Visscher, Explorations in Analysis, Topology, and Dynamics, 2020
43 Allan Bickle, Fundamentals of Graph Theory, 2020
42 Steven H. Weintraub, Linear Algebra for the Young Mathematician, 2019
41 William J. Terrell, A Passage to Modern Analysis, 2019
40 Heiko Knospe, A Course in Cryptography, 2019
39 Andrew D. Hwang, Sets, Groups, and Mappings, 2019
38 Mark Bridger, Real Analysis, 2019
37 Mike Mesterton-Gibbons, An Introduction to Game-Theoretic Modelling, Third Edition, 2019
36 Cesar E. Silva, Invitation to Real Analysis, 2019
35 Alvaro Lozano-Robledo, Number Theory and Geometry, 2019
34 C. Herbert Clemens, Two-Dimensional Geometries, 2019
33 Brad G. Osgood, Lectures on the Fourier Transform and Its Applications, 2019
32 John M. Erdman, A Problems Based Course in Advanced Calculus, 2018
31 Benjamin Hutz, An Experimental Introduction to Number Theory, 2018
30 Steven J. Miller, Mathematics of Optimization: How to do Things Faster, 2017
29 Tom L. Lindstrøm, Spaces, 2017
28 Randall Pruim, Foundations and Applications of Statistics: An Introduction Using R, Second Edition, 2018
27 Shahriar Shahriari, Algebra in Action, 2017
26 Tamara J. Lakins, The Tools of Mathematical Reasoning, 2016
25 Hossein Hosseini Giv, Mathematical Analysis and Its Inherent Nature, 2016
24 Helene Shapiro, Linear Algebra and Matrices, 2015
23 Sergei Ovchinnikov, Number Systems, 2015
22 Hugh L. Montgomery, Early Fourier Analysis, 2014
21 John M. Lee, Axiomatic Geometry, 2013
20 Paul J. Sally, Jr., Fundamentals of Mathematical Analysis, 2013
19 R. Clark Robinson, An Introduction to Dynamical Systems: Continuous and Discrete, Second Edition, 2012
18 Joseph L. Taylor, Foundations of Analysis, 2012
17 Peter Duren, Invitation to Classical Analysis, 2012
16 Joseph L. Taylor, Complex Variables, 2011
15 Mark A. Pinsky, Partial Differential Equations and Boundary-Value Problems with Applications, Third Edition, 1998
14 Michael E. Taylor, Introduction to Differential Equations, 2011
13 Randall Pruim, Foundations and Applications of Statistics, 2011
12 John P. D'Angelo, An Introduction to Complex Analysis and Geometry, 2010
11 Mark R. Sepanski, Algebra, 2010
10 Sue E. Goodman, Beginning Topology, 2005
9 Ronald Solomon, Abstract Algebra, 2003
8 I. Martin Isaacs, Geometry for College Students, 2001
7 Victor Goodman and Joseph Stampfli, The Mathematics of Finance, 2001
6 Michael A. Bean, Probability: The Science of Uncertainty, 2001

For a complete list of titles in this series, visit the AMS Bookstore at www.ams.org/bookstore/amstextseries/.
This text was produced for the second part of a two-part sequence on advanced calculus, whose aim is to provide a firm logical foundation for analysis. The first part treats analysis in one variable, and the text at hand treats analysis in several variables. After a review of topics from one-variable analysis and linear algebra, the text treats in succession multivariable differential calculus, including systems of differential equations, and multivariable integral calculus. It builds on this to develop calculus on surfaces in Euclidean space and also on manifolds. It introduces differential forms and establishes a general Stokes formula. It describes various applications of the Stokes formula, from harmonic functions to degree theory. The text then studies the differential geometry of surfaces, including geodesics and curvature, and makes contact with degree theory, via the Gauss-Bonnet theorem. The text also takes up Fourier analysis, and bridges this with results on surfaces, via Fourier analysis on spheres and on compact matrix groups.
ISBN 978-1-4704-5669-6

For additional information and updates on this book, visit
www.ams.org/bookpages/amstext-46

AMSTEXT/46
www.ams.org